\begin{document}
\author{TIMOTHY LA FOND \affil{Purdue University} JENNIFER NEVILLE \affil{Purdue University} BRIAN GALLAGHER \affil{Lawrence Livermore National Laboratory} }
\title{Size-Consistent Statistics for Anomaly Detection in Dynamic Networks}
\maketitle
\begin{abstract} An important task in network analysis is the detection of anomalous events in a network time series. These events could merely be times of interest in the network timeline or they could be examples of malicious activity or network malfunction. Hypothesis testing using network statistics to summarize the behavior of the network provides a robust framework for the anomaly detection decision process. Unfortunately, choosing network statistics that are dependent on confounding factors like the total number of nodes or edges can lead to incorrect conclusions (e.g., false positives and false negatives). In this work we describe the challenges that confounding factors pose for anomaly detection in dynamic network streams. We also provide two solutions for avoiding errors due to confounding factors: the first is a randomization testing method that controls for confounding factors, and the second is a set of size-consistent network statistics which avoid confounding due to the most common factors, edge count and node count. \end{abstract}
{\def\arraystretch{1.5}\tabcolsep=2pt
\begin{table} \begin{footnotesize} \caption{{Glossary of terms}} \label{glossary}
\begin{tabular}{| l | l |} \hline
$G_t$ & Observed graph at time $t$ \\
$V_t$ & Set of vertices in graph $G_t$, size $N$ \\
$W_t$ & Weighted adjacency matrix of $G_t$ \\
$| W_t |$ & Total weight of $W_t$ \\
$P^*_t$ & True distribution of edge weights in the underlying model, size $N^* \times N^*$ \\
$A^*_t$ & Adjacency matrix of $P^*_t$; i.e. $a^*_{ij,t} = I[ p^*_{ij,t} > 0 ]$ \\
$V^*$ & True vertex set of underlying model, $V_t \subset V^*$ \\
$P_t$ & Renormalized distribution of edge weight on vertex set $V_t$, used to sample $G_t$ \\
$A_t$ & Adjacency matrix of $P_t$; i.e. $a_{ij,t} = I[ p_{ij,t} > 0 ]$ \\
$| A_t |$ & Number of nonzero cells in adjacency matrix \\
$\widehat{P}_t$ & Approximate distribution of edge weights estimated from $G_t$: $\widehat{p}_{ij,t} = \frac{w_{ij,t}}{| W_t |}$\\
$\widehat{A}_t$ & Adjacency matrix of $G_t$; i.e. $\widehat{a}_{ij,t} = I[ w_{ij,t} > 0 ] $\\
$\overline{p^*}_t$ & Mean value of any nonzero cell in $P^*_t$ \\
$\overline{p}_t$ & Mean value of any nonzero cell in $P_t$ \\
$\overline{\widehat{p}}_t$ & Mean value of any nonzero cell in $\widehat{P}_t$ \\
$\overline{p^*}_t \bigg| V_t$ & Mean value of the $P^*_t$ cells that belong to vertex subset $V_t$ \\
$w_{row_i,t}$ & Total weight in row $i$ of $W_t$ \\
$\overline{p^*}_{row,t}$ & Expected mass in any row of $P^*_t$ \\
$\overline{p}_{row,t}$ & Expected mass in any row of $P_t$ \\
$\overline{\widehat{p}}_{row,t}$ & Expected mass in any row of $\widehat{P}_t$ \\
$\overline{p^*}_{row,t} | V_t$ & Expected mass of rows in $P^*_t$, excluding any rows or cells that do not belong to $V_t$\\ \hline
\end{tabular} \end{footnotesize} \end{table} }
\section{Introduction}
In this paper, we will focus on the task of anomaly detection in a dynamic network where the structure of the network is changing over time. For example, each time step could represent one day's worth of activity on an e-mail network or the communications on a computer network. The goal is then to identify any time steps where the pattern of those communications seems abnormal compared to those of other time steps.
We will be approaching this problem as a hypothesis testing task: the null hypothesis is that a time step under scrutiny represents normal behavior of the network, while the alternative hypothesis is that it is anomalous. The null distribution will be constructed from graph examples observed in the past, and the test statistics will be various network statistics. Whenever the null hypothesis is rejected for a time step, we will flag the tested time step as an anomaly.
\begin{comment} \begin{figure*} \caption{An example of an underlying network trend confounding anomaly detectors. Subfigure (a) shows the edge count of the network and the location of anomalies. Subfigure (b) shows the Netsimile detector, and (c) is the Deltacon detector.} \label{fig:syntheticdegree} \end{figure*} \end{comment}
A typical real-world network experiences many changes in the course of its natural behavior, changes which are not examples of anomalous events. The most common of these is variation in the volume of edges. In the case of an e-mail network where the edges represent messages, the network could be growing in size over time or there could be random variance in the number of messages sent each day. The statistics used to measure the network properties are usually intended to capture some effect of the network other than simply the volume of edges: for example, the common clustering coefficient is a measure of transitivity, which is the propensity for triangular interactions in the network. However, statistics such as the clustering coefficient are \textit{Statistically Inconsistent} as the size of the network changes: more or fewer edges/nodes change the output of the statistic even when the transitivity property is constant, making graph size a confounding factor. Statistical consistency and inconsistency are described in more detail in Section 6.3. Even on an Erd{\"o}s-R{\'e}nyi network, which does not explicitly capture transitive relationships through a network property, the clustering coefficient will increase with the number of edges in the network, as more triangles will be closed due to random chance. When statistics vary with the number of edges in the network, it is not valid to compare different network time steps using those statistics unless the number of edges is constant in each time step. The flowchart in Figure \ref{control-statistic} outlines the detection approach: unless the statistic is carefully defined to be robust to confounding factors, it is impossible to determine which of the factors that generated the graph is responsible for detected anomalies.
Table \ref{glossary} shows a glossary of terms that will be used throughout this paper. Some, like the terms $G_t$, $V_t$, and $W_t$, are from the dynamic graph definitions used previously. The other terms will be explained as they are used throughout the paper.
Figure \ref{fig:compare} shows the effect of statistical (in)consistency. During the experiment, pairs of graphs were generated using a Chung-Lu generative model (described in Section 6.6) with a certain number of total edges. Subfigure (a) shows the values of a \textit{Size Consistent Statistic} called \textit{Probability Mass Shift} (described in Section 6.4) calculated on pairs of graphs, while Subfigure (b) shows the same for the Netsimile statistic (described in Section \ref{Network Statistics}). Each black point shows the average value of 100 generated graph pairs while the red points are the minimum and maximum of these pairs.
As the edge weight increases (x-axis), the statistically consistent Mass Shift (\ref{fig:compare}a) maintains a consistent mean, whereas the statistically inconsistent Netsimile (\ref{fig:compare}b) varies wildly, even though all graphs are generated from the same underlying model (Chung-Lu \cite{chunglu} with a power law degree distribution).
\begin{figure} \caption{Statistic values for network data generated from the same model, but with increasing size. Behavior of (a) Consistent Statistic; (b) Inconsistent Statistic. Black pts: avg of 100 trials, red pts: [min, max].} \label{fig:compare} \end{figure}
\begin{comment} Figure \ref{fig:syntheticdegree} (a) shows the edge counts of a network with a simple trend, where vertical lines and triangular points indicate an anomalous edge count. Subfigures (b) and (c) show the anomaly scores and points identified as anomalous by two current statistics: Netsimile and Deltacon respectively \cite{koutra, berlingerio}. The vertical lines are the true anomaly times and the triangular points are the times indicated as anomalies. Clearly the accuracy of these detectors is hindered by the underlying trend, detecting anomalies in the sparser networks (in the beginning and end of the series) but not in the denser ones (the middle of the series). \end{comment}
In this work, we will analytically characterize statistics by their sensitivity to network size, and offer principled alternatives that are \textbf{consistent estimators} of network behavior, which empirically give more accurate results when finding anomalies in dynamic networks with varying sizes. In terms of confounding effects, this approach eliminates confounding by ensuring that the test statistics used do not vary when the confounding network properties change, bringing the statistics closer to the ideal one-to-one relationship with network properties.
\begin{figure} \caption{Controlling for confounding effects through careful definition of the test statistics.} \label{control-statistic} \end{figure}
The major contributions of this work are:
\begin{itemize}
\item We define \textbf{Size Inconsistent} and \textbf{Size Consistent} properties of network statistics and show that Size Consistent statistics have fewer false positives and false negatives than Size Inconsistent statistics.
\item We prove that several commonly used network statistics are Size Inconsistent and fail to capture the network behavior with varying network densities.
\item We introduce provably Size Consistent statistics which measure changes to the network structure regardless of the total number of observed edges.
\item We demonstrate that our proposed statistics converge quickly and have superior ROC performance compared to conventional statistics.
\end{itemize}
\section{Problem Definition and Data Model}
Let $G = \{ V, W \}$ be a weighted graph that represents a network, where $V$ is the node set and $W$ is the weighted adjacency matrix representing messages or some other interaction, with $w_{ij}$ the number of messages between nodes $i$ and $j$. Let $|V|$ and $|W|$ refer to the number of nodes and the total edge weight of $G$, respectively. A dynamic network is simply a set of graphs $\{ G_1, G_2, ... G_T \}$ where each graph represents network activity within a consistent-width time step (e.g., one step per day).
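As a minimal illustration of this data model (our own sketch, not code from this work; all names are illustrative), each time step can be represented as a weighted adjacency matrix from which the node count $|V|$ and total edge weight $|W|$ are read off directly:
\begin{verbatim}
import numpy as np

def make_graph(n_nodes, messages):
    """Build a weighted adjacency matrix W from (i, j) message pairs."""
    W = np.zeros((n_nodes, n_nodes))
    for i, j in messages:
        W[i, j] += 1   # each message adds one unit of edge weight
        W[j, i] += 1   # treat communication as undirected
    return W

# A dynamic network is a sequence of such graphs, one per time step.
G_stream = [make_graph(5, [(0, 1), (1, 2), (0, 1)]),
            make_graph(5, [(0, 1), (2, 3), (3, 4)])]

for t, W in enumerate(G_stream):
    num_nodes = W.shape[0]        # |V|
    total_weight = W.sum() / 2    # |W|: total edge weight (each pair once)
    print(t, num_nodes, total_weight)
\end{verbatim}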
\textbf{Problem definition}: Given a stream of graph examples $\{G_{1}, G_{2}, ..., G_{{t-1}}\}$ drawn from a {\em normal} model $M^n$, and a graph $G_{t}$ drawn from an unknown model, determine if $G_{t}$ was drawn from $M^n$ or some alternative model $M^a$. \\
Given an observed graph, we wish to decide if this graph exhibits the same behavior (network properties) as past graph examples or if it is likely the product of some different, anomalous properties. We will be solving this problem with hypothesis tests utilizing network statistics as the test statistics. If $S_k(G)$ is some network statistic designed to measure a network property $k$, then the set of statistics calculated on the normal examples $\{S_k(G_{1}), S_k(G_{2}), ..., S_k(G_{t-1})\}$ forms the empirical null distribution, and the value $S_k(G_{t})$ is the test point. For this work we will use a two-tailed test with a significance level of $\alpha = 0.05$. Anomalous test cases where the null hypothesis is rejected correspond to {\em true positives}; normal cases where the null hypothesis is rejected correspond to {\em false positives}. Likewise, anomalous cases where the null is not rejected correspond to {\em false negatives} and normal cases where the null is not rejected correspond to {\em true negatives}.
\begin{comment} \begin{figure} \caption{Anomaly detection procedure for a graph stream $\{ G_1 ... G_t \}$, graph statistic $S_k$ and p-value $\alpha$.} \label{alg:anomalydetection} \end{figure} \end{comment}
The anomaly detection procedure is summarized in Figure \ref{fig:anomalydetectiontask} from model down to null distribution and test point. If all the graph examples have the same number of edges and nodes then graph size cannot be a confounding factor regardless of the choice of test statistic; those properties are naturally controlled in the data. However, if $M^n$ and $M^a$ produce graphs with a variable number of edges and nodes then any test statistic needs to be robust to changes in the graph size. Ideally, if $G_x, G_y \sim M$ but $| V_x | \neq | V_y |, | W_x | \neq | W_y |$ we would still want $S_k(G_x) \approx S_k(G_y)$ to be true.
To accommodate the observations of graphs of varying size, let us assume the models that generated the observed graphs are hidden but take the form of a multinomial sampling procedure. Let $P^*$ be a $| V^* | \times | V^* |$ matrix where the rows and columns represent a node set $V^*$ and the sum of all cells is equal to 1. Here $V^*$ represents a large set of {\em possible} nodes, i.e., larger than the set we may see in any one graph $G$. The entry $p^*_{ij,t}$ specifies the probability that a randomly sampled message at time $t$ is between $i$ and $j$. Let $| V |$ and $| W |$ be drawn from distributions $M_V$ and $M_W$.
\begin{figure} \caption{Dynamic network anomaly detection task. Given past instances of graphs created by the typical model of behavior, identify any new graph instance created by an alternative anomalous model.} \label{fig:anomalydetectiontask} \end{figure}
The full generative process for all graphs is then:
\begin{itemize}
\item Draw $|V| \sim M_V$. Select $V$ from $V^*$ uniformly at random.
\item Construct $P$ by selecting the rows/columns from $P^*$ that correspond to $V$ and normalize the probabilities to sum to 1 (i.e., $p_{ij} = \frac{1}{Z} p^*_{ij}$, where $Z = \sum_{ij \in V} p^*_{ij}$).
\item Draw $| W | \sim M_W$. Sample $| W |$ messages using probabilities $P$.
\item Construct the graph $G=(V,W)$ from the sampled messages.
\end{itemize}
\noindent $G$ is the output of a multinomial sampling procedure on $P$, with each independent message sample increasing the weight of one cell in $W$. $P$ itself is a set of probabilities obtained by sampling $V$ from $V^*$. This graph generation process is summarized in Figure \ref{sampling-complete}.
\begin{figure} \caption{Graph generation process. The matrix $P^*$ represents all possible nodes and their interaction probabilities. By sampling $|V|$ nodes and $|W|$ edges the observed graph $G$ is obtained. } \label{sampling-complete} \end{figure}
Given this generative process, the difference between normal and anomalous graphs is characterized by differences in their underlying models. Let the normal model be represented by $P^{*n}, M^{n}_V$ and $M^{n}_W$ and let the anomalous model be represented by $P^{*a}, M^{a}_V$ and $M^{a}_W$. Finding instances where $M_V$ or $M_W$ are anomalous is trivial since we can use the count of nodes or messages as the test statistic. Finding instances of graphs drawn from $P^{*a}$ is more difficult, as our choice of network statistics affects whether we are sensitive to changes in $|W|$ or $|V|$.
If we redefine our network statistics to be functions over $P^*$ instead of $G$ we avoid the problem of graph size as a confounding factor, since $P^*$ is independent of $M_V$ and $M_W$. However, since $P^*$ is unobservable, there is no way to calculate $S_k(P^*)$ directly. Instead we can only calculate $\widehat{S}_k(G)$ from the observed graph $G$. $\widehat{S}_k(G)$ is an estimate of $S_k(P)$ using the sampled messages $W$ to estimate the underlying probabilities, and $S_k(P)$ itself is an estimate of the true $S_k(P^*)$ on a subset $V$ of the total nodes. So just as the sampling procedure follows $P^* \rightarrow P \rightarrow G$, the estimation procedure follows the inverse steps $\widehat{S}_k(G) \rightarrow S_k(P) \rightarrow S_k(P^*)$.
Delta statistics like Graph Edit Distance can also be used for anomaly detection. In this case the empirical statistic will be $\widehat{S}_k(G_1,G_2)$, where $G_1$ and $G_2$ are generated using the graph generation procedure described above, and the true value of the statistic is $S_k(P^*_1,P^*_2)$. In order to be consistent, delta statistics should not change when either graph changes in size.
Ideally, $\widehat{S}_k(G) = S_k(P) = S_k(P^*)$ and we would have the same output regardless of $W$ and $V$, being sensitive only to changes in the model. However, this is typically not attainable in practice, as it is difficult to estimate the true statistic value from extremely small graphs: few edges and nodes provide less evidence about the underlying properties. In addition, an unbiased statistic with extremely high variance is also a poor test statistic. In many scenarios the best statistics are ones which converge to the value of $S_k(P^*)$ as $|V|, |W|$ increase, a property that we will refer to as {\em Size Consistency}.
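The generative process described above can be simulated directly. The following Python sketch is our own illustration (the uniform $P^*$ and the particular choices of $M_V$ and $M_W$ are hypothetical, not taken from this work): it samples a node subset $V$ from $V^*$, renormalizes the corresponding block of $P^*$, and draws $|W|$ messages multinomially.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_graph(P_star, M_V, M_W):
    """Sample one observed graph G = (V, W) from the hidden model P*."""
    n_star = P_star.shape[0]
    n = M_V()                                      # draw |V| ~ M_V
    V = rng.choice(n_star, size=n, replace=False)  # pick V from V* uniformly
    P = P_star[np.ix_(V, V)]
    P = P / P.sum()                                # renormalize to sum to 1
    m = M_W()                                      # draw |W| ~ M_W
    counts = rng.multinomial(m, P.ravel())         # sample |W| messages from P
    W = counts.reshape(P.shape)
    return V, W

# Hypothetical example: a uniform P* over 50 possible nodes.
N_star = 50
P_star = np.ones((N_star, N_star))
np.fill_diagonal(P_star, 0)
P_star /= P_star.sum()

V, W = sample_graph(P_star,
                    M_V=lambda: rng.integers(20, 40),    # node-count model M_V
                    M_W=lambda: rng.integers(200, 400))  # edge-count model M_W
\end{verbatim}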
In the next section we will formally define the properties of Size Consistent and Size Inconsistent statistics and show how they affect the accuracy of hypothesis tests.
\section{Properties of the Test Statistic}
\subsection{Size Consistency}
As described previously, a statistic $S_k(P^*)$ depends on the properties of the procedure that generated the graph instance and is a measure of the graph properties independent of the exact number of edges and nodes in the graph. Although the empirical statistic $\widehat{S}_k(G)$ may not be independent of the edge and node count, if it converges to $S_k(P^*)$ as $|V|$ and $|W|$ increase it is a reasonable approximation as long as $|V|$ and $|W|$ are large enough. The bias of the empirical statistic due to graph size is $abs( S_k(P^*) - \widehat{S}_k(G) )$; if this bias converges to 0 as $|V|$ and $|W|$ increase then $\widehat{S}_k(G)$ is \textit{Size Consistent}.
\begin{definition} A statistic $\widehat{S}_k$ is {\em Size Consistent} w.r.t. $S_k$ if:
\begin{small} \begin{align*} &\lim_{|W| \rightarrow \infty} \widehat{S}_k(G) - S_k(P) = 0 \\ \mbox{ AND } \;\;\; &\lim_{|V| \rightarrow |V^*|} S_k(P) - S_k(P^*) = 0 \end{align*} \end{small} \end{definition}
Delta statistics have the same requirements for consistency as standard statistics except that the edge and node counts of both graphs must be increasing.
\begin{definition} A delta statistic $\widehat{S}_k$ is {\em Size Consistent} w.r.t. $S_k$ if:
\begin{small} \begin{align*} &\lim_{|W_1|, |W_2| \rightarrow \infty} \widehat{S}_k(G_1,G_2) = S_k(P_1,P_2) \\ \mbox{ AND } \;\;\; &\lim_{|V_1|, |V_2| \rightarrow |V^*|} S_k(P_1,P_2) = S_k(P^*_1,P^*_2) \end{align*} \end{small} \end{definition}
\begin{theorem} \textbf{False Positive Rate for Size Consistent Statistics} \\ Let $ \{G_1... G_{{x}} \}$ be a finite set of ``normal'' graphs drawn from $P^*$, $M^{n}_W$ and $M^{n}_V$ and let $G_{{test}}$ be a test graph drawn from $P^*$, $M^{a}_W$ and $M^{a}_V$. Let $| W_{min} |$ be the minimum edge count in both $\{ G_1 ... G_{x} \}$ and $G_{{test}}$ and $|V_{min}|$ be the minimum node count. For a hypothesis test using a Size Consistent test statistic $S_k$ and a significance level of $\alpha$, as $| W_{min} |\rightarrow \infty$ and $|V_{min}| \rightarrow |V^*|$ the probability of identifying $G_{{test}}$ as a false positive approaches $\alpha$. \label{convergetrueneg} \end{theorem}
\begin{proof} If $\widehat{S}_k(G)$ is a consistent estimator of $S_k(P)$, then as $| W_{min} | \rightarrow \infty$, $\widehat{S}_k(G) \rightarrow S_k(P)$ and if $S_k(P)$ is a consistent estimator of $S_k(P^*)$ then as $|V_{min}| \rightarrow |V^*|$, $S_k(P) \rightarrow S_k(P^*)$. If $\{G_1...G_x\}$ and $G_{test}$ are drawn from $P^{*n}$, then $\widehat{S}_k(G_1)...\widehat{S}_k(G_x)$ and $\widehat{S}_k(G_{test})$ are converging to the same distribution of values and the hypothesis test will reject with probability $\alpha$. \end{proof}
As the number of edges and nodes drawn for the null distribution and test instance increase, the bias $abs( S_k(P^*) - \widehat{S}_k(G) )$ of the statistic calculated on those networks converges to zero. This means that $\widehat{S}_k(G)$ effectively becomes equal to $S_k(P^*)$, and the outcome of the hypothesis test is only dependent on whether the test instance and null examples were both drawn from $P^{*n}$ or if the test instance was drawn from $P^{*a}$.
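To make the testing procedure concrete, the following Python sketch (our own illustration with a hypothetical toy statistic; not the implementation used in this work) builds an empirical null distribution from past graphs and applies a two-tailed test at $\alpha = 0.05$.
\begin{verbatim}
import numpy as np

def two_tailed_anomaly_test(null_graphs, test_graph, statistic, alpha=0.05):
    """Flag test_graph as anomalous if its statistic falls in either tail
    of the empirical null distribution built from null_graphs."""
    null_values = np.array([statistic(G) for G in null_graphs])
    lower = np.quantile(null_values, alpha / 2)      # lower critical point
    upper = np.quantile(null_values, 1 - alpha / 2)  # upper critical point
    value = statistic(test_graph)
    return value < lower or value > upper

# Hypothetical usage with a toy statistic (mean weight of nonzero cells).
def mean_nonzero_weight(W):
    return W[W > 0].mean()

rng = np.random.default_rng(1)
past_graphs = [rng.poisson(2.0, size=(30, 30)) for _ in range(50)]
test_graph = rng.poisson(2.0, size=(30, 30))
print(two_tailed_anomaly_test(past_graphs, test_graph, mean_nonzero_weight))
\end{verbatim}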
Even if the test case has an unusual number of edges or nodes, as long as the number of edges and nodes is not too small, there will not be a false positive.
Size consistency is also beneficial in the case of false negatives. A statistic which is sensitive to changes in the edge or node count will produce a null distribution with high variance if $M_V^n$ or $M_W^n$ have high variance, which increases the chance of producing false negatives. A size consistent statistic will have less variance as $|V|$ and $|W|$ increase, so as long as the minimum outputs of $M_V^n$ or $M_W^n$ are not too small, the variance will be negligible.
\begin{theorem} \textbf{False Negative Rate for Size Consistent Statistics} \\ Let $G_{{test}}$ be a network that is anomalous (i.e., drawn from $P^{*a}$, $M^n_V$, $M^n_W$) with respect to property $k$, and $\{G_1...G_x \}$ be graph examples drawn from $P^{*n}$, $M^n_V$, $M^n_W$. Let $| W_{min} |$ be the minimum of $M^n_W$ and $|V_{min}|$ be the minimum of $M^n_V$. As $| W_{min} | \rightarrow \infty$ and $|V_{min}| \rightarrow |V^*|$ the probability of failing to reject $G_{test}$ approaches 0 if $P^{*n} \neq P^{*a}$. \label{convergetruepos} \end{theorem}
\begin{proof} If $\widehat{S}_k(G)$ is a consistent estimator of $S_k(P)$, then as $| W_{min} | \rightarrow \infty$, $\widehat{S}_k(G) \rightarrow S_k(P)$ and if $S_k(P)$ is a consistent estimator of $S_k(P^*)$ then as $|V_{min}| \rightarrow |V^*|$, $S_k(P) \rightarrow S_k(P^*)$. If $S_k(P^{*a}) \neq S_k(P^{*n})$, then as $| W_{min} | \rightarrow \infty$ and $|V_{min}| \rightarrow |V^*|$ the statistic $\widehat{S}_k(G_{test})$ and the set of statistics $ \widehat{S}_k(G_{1}), \widehat{S}_k(G_{2}), ... \widehat{S}_k(G_{x})$ converge to different values and $G_{test}$ will be flagged as an anomaly with probability 1. \end{proof}
Now that we have investigated the effects of size consistency, we must look at the effects of its inverse.
\subsection{Size Inconsistency}
{\em Size Inconsistency} is the inverse of size consistency: if a statistic is not size consistent, then it is size inconsistent.
\begin{definition} A statistic $\widehat{S}_k$ is {\em Size Inconsistent} w.r.t. $S_k$ if:
\begin{small} \begin{align*} &\lim_{|W| \rightarrow \infty} \widehat{S}_k(G) - S_k(P) \neq 0 \\ \mbox{ OR } \;\;\; &\lim_{|V| \rightarrow |V^*|} S_k(P) - S_k(P^*) \neq 0 \end{align*} \end{small} \end{definition}
\noindent where $S_k$ is a nontrivial function (a trivial function being one that is a constant, $\infty$, or $-\infty$ for all input values). This definition also applies to the delta statistic case.
\begin{theorem} \textbf{False Positives for Size Divergent Statistics} \\ Let $\{G_1... G_{{x}} \}$ be a finite set of ``normal'' graphs drawn from $P^*$, $M^{n}_W$, and $M^{n}_V$ and $G_{{test}}$ be a test graph drawn from $P^*$. If $\widehat{S}_k(G)$ diverges with increasing $|W|$ or $|V|$ and $M^{n}_W$, $M^{n}_V$ have finite bounds, there is some $|V|$ or $|W|$ for which a hypothesis test using $\widehat{S}_k(G)$ as the test statistic will incorrectly flag $G_{test}$ as an anomaly. \label{falsepos} \end{theorem}
\begin{proof} When the set of graphs $\{G_1...G_{{x}}\}$ is used to estimate an empirical distribution of the null $\widehat{S_k}$, the distribution is bounded by $max[\widehat{S}_k(\{G_1...G_{{x}}\})]$ and $min[\widehat{S}_k(\{G_1...G_{x}\})]$, so the critical points $\phi_{lower}$ and $\phi_{upper}$ of a hypothesis test using this set of graphs will be within these bounds.
Since $\widehat{S}_k(G_{{test}})$ diverges as $| W_{{test}} |$ or $|V_{test}|$ increases, there exists a $| W_{{test}} |$ or $| V_{test}|$ such that $\widehat{S}_k(G_{{test}})$ is not within $\phi_{lower}$ and $\phi_{upper}$ and will be rejected by the test. \end{proof}
Size Inconsistency generally occurs when the value of a statistic is a linear function of the edge weight or the number of nodes in the graph: when the edge weight or the number of nodes goes to infinity, the output of the statistic also diverges. If a statistic is dependent on the size of a graph, then two graphs both drawn from $P^{*n}$ may produce entirely different values and a false positive will occur.
A second problem occurs when the edge counts in the estimation set have high variance. If the statistic is dependent on the number of edges, noise in the edge counts translates to noise in the statistic values, which lowers the statistical power (i.e., the percentage of true anomalies detected) of the test. With a sufficient amount of edge count noise, the signal is completely drowned out and the statistical power of the anomaly detector drops to zero.
\begin{theorem} \textbf{False Negatives for Size Divergent Statistics} \\ Let $G_{{test}}$ be a network that is anomalous (i.e., drawn from $P^{*a}$) with respect to property $k$. If $\widehat{S}_k(G)$ diverges with increasing $| W|$ or $|V|$ there exists some $M^{n}_W$ or $M^{n}_V$ with sufficient variance such that a hypothesis test with significance level $\alpha$ and empirical null distribution $\widehat{S_k}$ will fail to flag $G_{{test}}$ as an anomaly with probability $1 - \alpha$. \label{falseneg} \end{theorem}
\begin{proof} Let $G_{{test}}$ be the test network drawn from $P^{*a}$, $M^{n}_W$, and $M^{n}_V$, and $\{G_1...G_{{x}}\}$ be the set of null distribution graphs drawn from $P^{*n}$, $M^{n}_W$ and $M^{n}_V$. If $\widehat{S}_k(G)$ is a divergent function of $| W|$ or $|V|$, then the variance of the null distribution $\widehat{S}_k$ estimated from $\{G_1...G_{{x}}\}$ is dependent on the variance of $| W|$ and $|V|$. If the variance of sampled $| W|$ or $|V|$ is sufficiently large, the variance of $\widehat{S}_k$ will increase to cover all possible $\widehat{S}_k(G)$ values, and the test instance will fail to be flagged as an anomaly with probability $1 - \alpha$. \end{proof}
\noindent Regardless of whether a time step is an example of an anomaly or not, if the variance of the test statistic is dominated by random noise in the edge count, the time step will only be flagged due to random chance. These theorems show that divergence with the number of edges or nodes can lead to both false positives and false negatives from anomaly detectors that look for unusual network properties other than edge count. These theorems have been defined using a statistic calculated on a single network, but some statistics are delta measures computed on two networks. In these cases, the edge counts of either or both of the networks can cause edge dependency issues.
\section{Network Statistics} \label{Network Statistics}
In this section we introduce our set of proposed size consistent statistics, as well as analyze multiple existing statistics to determine if they are size consistent or inconsistent. These properties are summarized in Table \ref{tab:statistics}; \textit{Fast Convergence} indicates fewer necessary edge/node observations to obtain a high level of accuracy.
\subsection{Conventional Statistics}
\paragraph{Graph Edit Distance}
\noindent The graph edit distance (GED) \cite{gao} is often used in anomaly detection tasks. GED on a weighted graph is typically defined as:
\begin{small} \begin{align} \label{eq:ged} GED(G_{1}, G_{2}) &= | V_{1} | + | V_{2} | - 2 | V_{1} \cap V_{2} | \nonumber \\ & + \sum_{ij \in V_1 \cup V_2} abs( w_{ij,1} - w_{ij,2} ) \end{align} \end{small}
\begin{comment} \noindent Let the value of GED on $P^*_1, P^*_2$ be: \begin{small} \begin{align} \label{eq:pged} GED(P^*_1, P^*_2) = \sum_{ij \in V^*} abs( p^*_{ij,1} - p^*_{ij,2} ) \end{align} \end{small} \end{comment}
\begin{claim} GED is a Size Inconsistent statistic. \end{claim}
\noindent Consider the case where $G_1$ and $G_2$ are both drawn from $P^*$. Let $|W_2| = |W_1| + W_\Delta$ where $W_\Delta$ is some constant value. The expected difference in weights between two nodes $i,j$ in $G_2$ versus $G_1$ is:
\begin{small} \begin{align*} E[ w_{ij,2} - w_{ij,1} ] &= (|W_1| + W_{\Delta}) p_{ij} - |W_1| p_{ij} = W_{\Delta} p_{ij} \end{align*} \end{small}
\noindent Then, in expectation, the limit as $|W_1|, |W_2|$ increase is
\begin{small} \begin{align*} \lim_{|W_1|,|W_2| \rightarrow \infty} &GED(G_{1}, G_{2}) \nonumber \\ &= \lim_{|W_1|,|W_2| \rightarrow \infty} |V| + |V| - 2 |V \cap V| + \sum_{ij \in V} W_{\Delta} p_{ij} \nonumber \\ &= W_{\Delta} \end{align*} \end{small}
\noindent As $GED(G_{1}, G_{2})$ converges to the arbitrary offset $W_{\Delta}$ rather than to a function of the underlying distribution, it is not converging to a nontrivial $S_k$ and the first condition of Size Consistency is violated. $\square$
\paragraph{Degree Distribution and Degree Dist. Difference}
\noindent As defined before, the Degree Distribution of a graph is the distribution of node degrees. In this task we will find the difference between the degree distributions of two graphs using a delta statistic. Define the delta statistic Degree Distribution Difference $DD(G_1,G_2)$ between two graphs as:
\begin{small} \begin{align} \label{eq:dd} {DD}(G_1, G_2) =& \sum_{bin_k \in Bins} ( \sum_{i \in V_1} I[ D_i(G_1) \in bin_k ] \nonumber \\ &- \sum_{i \in V_2} I[ D_i(G_2) \in bin_k ] )^2 \end{align} \end{small}
\noindent where $Bins$ is a consecutive sequence of equal size bins which encompass all degree values in both graphs. Note that this value is an approximation of the Cram\'{e}r--von Mises criterion between the two empirical degree distributions. Let the probabilistic degree of node $i$ be $D_i(P^*) = \sum_{j \neq i \in V^*} p^*_{ij}$. Then let the value of $DD(P^*_1,P^*_2)$ be:
\begin{small} \begin{align} \label{eq:edd} {DD}(P^*_1,P^*_2) = &\sum_{bin_k \in Bins} ( \sum_{i \in V^*} I[ D_i(P^*_1) \in bin_k ] \nonumber \\ &- \sum_{i \in V^*} I[ D_i(P^*_2) \in bin_k ] )^2 \end{align} \end{small}
\begin{table}[t] \centering \caption{Statistical properties of previous network statistics and our proposed alternatives.} \label{tab:statistics} { \begin{tabular}{l | c | c | c |} & Inconsist. & Consist. & Fast \\ &&& Convergence \\ \hline Mass Shift && \checkmark & \checkmark \\ Probabilistic Degree && \checkmark & \checkmark \\ Triangle Probability && \checkmark &\checkmark \\ \hline Graph Edit Distance & \checkmark & & \\ Degree Distribution & \checkmark & & \\ Barrat Clustering & &\checkmark & \\ \hline Netsimile & \checkmark & & \\ Deltacon & & \checkmark & \\ \end{tabular} } \end{table}
\begin{claim} $DD(G_1,G_2)$ is a Size Inconsistent statistic.
\end{claim}
\noindent Let $G_1, G_2$ be drawn from $P^*$ using the same node set $V$ and let $|W_2| = |W_1| + W_\Delta$ for some constant $W_\Delta$. As $|W_1|,|W_2|$ increase, the expected value of $D_i(G_2) - D_i(G_1)$ converges to
\begin{small} \begin{align} (|W_1| + W_\Delta) \sum_{j \neq i \in V} p_{ij} - |W_1| \sum_{j \neq i \in V} p_{ij} = W_\Delta \sum_{j \neq i \in V} p_{ij} \end{align} \end{small}
\noindent So for sufficiently large $W_\Delta$, at least one node will be placed into a higher bin for $G_2$ versus $G_1$, and the limit $\lim_{|W_1|,|W_2|\rightarrow \infty} DD(G_1,G_2)$ is not equal to $DD(P^*,P^*)$. This violates the first condition of Size Consistency and therefore the Degree Distribution Difference is Size Inconsistent. $\square$
Other measures create aggregates using the degrees of multiple nodes \cite{priebe, berlingerio}, but as the degree is size inconsistent, these aggregates tend to be so as well.
\paragraph{Weighted Clustering Coefficient}
\noindent The clustering coefficient is a measure of transitivity, the propensity to form triangular relationships in a network. As the standard clustering coefficient is not designed for weighted graphs, we will be analyzing a weighted clustering coefficient, specifically the Barrat weighted clustering coefficient (CB) \cite{saramaki}:
\begin{small} \begin{align} \label{eq:wcc} CB(G) = \sum_{i} \frac{1}{|V|(k_i-1)D_i(G)} \sum_{j,k} \frac{w_{ij}+w_{ik}}{2} \widehat{a}_{ij}\widehat{a}_{ik}\widehat{a}_{jk} \end{align} \end{small}
\noindent where $\widehat{a}_{ij} = I[w_{ij} > 0]$, $D_i(G) = \sum_j w_{ij}$, and $k_i = \sum_j \widehat{a}_{ij}$. Other weighted clustering coefficients exist but they behave similarly to the Barrat coefficient.
\begin{claim} CB is a Size Consistent statistic that converges to
\begin{small} \begin{align} CB(P^*) &= \sum_{i} \! \frac{1}{|V^*| ({a}^*_i-1) \sum_{j} \! p^*_{ij}} \sum_{j,k} \frac{ (p^*_{ij}+p^*_{ik})}{2} {a}^*_{ij}{a}^*_{ik}{a}^*_{jk}. \end{align} \end{small} \end{claim}
\noindent First we will find $CB(P)$ by taking the limit as $|W| \rightarrow \infty$:
\allowdisplaybreaks{ \begin{align*} \lim_{|W| \rightarrow \infty} &CB(G) \nonumber \\ &= \lim_{|W| \rightarrow \infty} \sum_{i} \frac{1}{|V| ( \widehat{a}_i-1) D_i(G)} \sum_{j,k} \frac{w_{ij}+w_{ik}}{2} \widehat{a}_{ij}\widehat{a}_{ik}\widehat{a}_{jk} \nonumber \\ &= \sum_{i} \frac{1}{|V| ({a}_i-1) |W| \sum_{j} p_{ij}} \sum_{j,k} \frac{|W| (p_{ij}+p_{ik})}{2} {a}_{ij}{a}_{ik}{a}_{jk} \nonumber \\ &= \sum_{i} \frac{1}{|V| ({a}_i-1) \sum_{j} p_{ij}} \sum_{j,k} \frac{ (p_{ij}+p_{ik})}{2} {a}_{ij}{a}_{ik}{a}_{jk} \nonumber \\ &= CB(P) \end{align*} }
\noindent Now we take the limit $\lim_{|V| \rightarrow |V^*|} CB(P)$:
\allowdisplaybreaks{ \begin{align*} & \lim_{|V| \rightarrow |V^*|} CB(P) \nonumber \\ &= \lim_{|V| \rightarrow |V^*|} \sum_{i} \frac{1}{|V| ({a}_i-1) \sum_{j} p_{ij}} \sum_{j,k} \frac{ (p_{ij}+p_{ik})}{2} {a}_{ij}{a}_{ik}{a}_{jk} \nonumber \\ &= \lim_{|V| \rightarrow |V^*|} \sum_{i} \frac{1}{|V| ({a}_i-1) \frac{1}{\sum_{ij \in V} p^*_{ij}} \sum_{j} p^*_{ij}} \nonumber \\ & \sum_{j,k} \frac{1}{\sum_{ij \in V}p^*_{ij}} \frac{ (p^*_{ij}+p^*_{ik})}{2} {a}_{ij}{a}_{ik}{a}_{jk} \nonumber \\ & = \sum_{i} \frac{1}{|V^*| ({a}^*_i-1) \sum_{j} p^*_{ij}} \sum_{j,k} \frac{ (p^*_{ij}+p^*_{ik})}{2} {a}^*_{ij}{a}^*_{ik}{a}^*_{jk} \nonumber \\ \square \end{align*} }
Other weighted clustering coefficients such as those proposed by Onnela et al. \cite{onnela} and Holme et al. \cite{holme} behave in a similar manner.
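To make Equation \ref{eq:wcc} concrete, here is a short Python sketch of the Barrat weighted clustering coefficient as we read it (our own illustrative implementation, not the authors' code). We sum over unordered neighbor pairs and skip nodes with fewer than two neighbors, where the $(k_i - 1)$ factor would be zero.
\begin{verbatim}
import numpy as np

def barrat_clustering(W):
    """Average Barrat weighted clustering coefficient of a weighted graph."""
    A = (W > 0).astype(int)        # unweighted adjacency a_ij
    n = W.shape[0]
    strength = W.sum(axis=1)       # D_i(G): total weight incident to node i
    degree = A.sum(axis=1)         # k_i: number of neighbors of node i
    total = 0.0
    for i in range(n):
        if degree[i] < 2:
            continue               # (k_i - 1) would be zero
        nbrs = np.flatnonzero(A[i])
        c_i = 0.0
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                j, k = nbrs[a], nbrs[b]
                if A[j, k]:        # the triangle i-j-k is closed
                    c_i += (W[i, j] + W[i, k]) / 2.0
        total += c_i / ((degree[i] - 1) * strength[i])
    return total / n               # the 1/|V| factor

# Toy example: a weighted triangle plus one pendant node.
W = np.array([[0, 2, 1, 0],
              [2, 0, 3, 1],
              [1, 3, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(barrat_clustering(W))
\end{verbatim}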
\paragraph{Deltacon}
\noindent The core element of the Deltacon statistic is the Affinity Matrix, which is a measure of the closeness (in terms of random walk distance) between all nodes in a graph. Pairs of graphs with similar Affinity Matrices are scored as being more likely to be from the same distribution.
\begin{claim} Deltacon is a Size Consistent statistic. \end{claim}
\noindent The Affinity Matrix $S$ is approximated with Fast Belief Propagation and is estimated with $S \approx I + \epsilon A + \epsilon^2 A^2$ where $A$ is the adjacency matrix and $\epsilon$ is the coefficient of attenuating neighbor influence. As $|W| \rightarrow \infty$ and $|V| \rightarrow |V^*|$, the adjacency matrix $A$ approaches $A^*$, the adjacency matrix of $P^*$, so the statistic does converge to the value given by the Affinity Matrix difference calculated on the true $P^*$ matrices. However, this convergence will be slow in practice as a small difference between $A$ and $A^*$ can cause large changes in the path lengths between nodes if the missing edges are critical bridges between graph regions.
\paragraph{Netsimile}
\noindent Netsimile is an aggregate statistic using multiple simple statistics to form descriptive vectors. These statistics include number of neighbors, clustering coefficient (unweighted), two-hop neighborhood size, clustering coefficient of neighbors, and total edges, outgoing edges, and neighbors of the egonet.
\begin{claim} Netsimile is a Size Inconsistent statistic. \end{claim}
\noindent Statistics that use the raw edge count, such as $D_i(G)$, will not be consistent as shown earlier, so aggregates of these types of statistics will also be inconsistent. The statistic uses the Canberra distance ($\frac{abs(S_k(G_1) - S_k(G_2))}{S_k(G_1) + S_k(G_2)}$) for each component statistic $S_k$ as a form of normalization, but as the component statistics diverge to infinity, the Canberra distance converges to 0 and the normalization is still inconsistent.
\subsection{Proposed Size Consistent Statistics}
We will now define a set of Size Consistent statistics designed to measure network properties similar to those captured by the previously described statistics, but without the sensitivity to the total network edge count. They will also be designed such that the empirical estimations converge to the true values as quickly as possible.
\begin{figure} \caption{Estimation of the $\widehat{P_t}$ matrix from the observed $W$ weights.} \label{P-estimation} \end{figure}
These statistics use a matrix $\widehat{P}_t$, where $\widehat{p}_{ij,t} = \frac{w_{ij,t}}{|W_t|}$, which is an empirical estimate of $P_t$ obtained by normalizing the weight matrix as shown in Figure \ref{P-estimation}. Obtaining this matrix can be thought of as a reversal of the sampling process shown in Figure \ref{sampling-complete}. Although $\widehat{P}_t$ is only an estimate of $P_t$, it is an unbiased one, and as $|W_t|$ grows it converges to $P_t$. Therefore, empirical statistics which use $\widehat{P}_t$ in place of $P_t$ as their input will converge to the true statistic calculated on $P_t$ as $|W_t|$ increases. However, this does not guarantee that $\widehat{P}_t$ will converge in probability to $P^*_t$ as the number of nodes in $V_t$ increases.
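The estimation step itself is simple to state in code; the following sketch (our own illustration) normalizes an observed weight matrix into $\widehat{P}_t$ and records the nonzero-cell count $|\widehat{A}_t|$ used by the statistics below.
\begin{verbatim}
import numpy as np

def estimate_P_hat(W):
    """Estimate P_hat from an observed weight matrix: p_hat_ij = w_ij / |W|."""
    total_weight = W.sum()             # |W_t|
    P_hat = W / total_weight           # unbiased estimate of P_t
    A_hat_size = np.count_nonzero(W)   # |A_hat_t|: number of nonzero cells
    return P_hat, A_hat_size

W = np.array([[0, 4, 1],
              [4, 0, 0],
              [1, 0, 0]], dtype=float)
P_hat, A_hat_size = estimate_P_hat(W)
print(P_hat.sum(), A_hat_size)         # sums to 1; two edges -> 4 nonzero cells
\end{verbatim}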
The problem is that the value of a typical cell $\widehat{p}_{ij,t}$ is inversely proportional to the number of cells determined by $|V_t|$: as both $\widehat{P}_t$ and $P^*_t$ are probability distributions which sum to 1, the more cells in either matrix, the lower the probability mass in each cell on average. This concentrating effect as $V_t$ is sampled from $V^*_t$ is demonstrated in Figure \ref{mass-concentration}.
\begin{figure} \caption{Mass of the cells in $P$ increases as $|V|$ decreases.} \label{mass-concentration} \end{figure}
The solution to this concentration of probability mass is to introduce normalizing terms which negate the effect. These terms are $\overline{p^*}_t = \frac{1}{| A_t^* |} \sum_{ij \in V_t^*} p^*_{ij,t}$ for $S_k(P^*_t)$ and the corresponding term $\overline{p}_t = \frac{1}{| A_t |} \sum_{ij \in V_t} p_{ij,t}$ for $S_k(P_t)$, where $|A_t^*|$ and $|A_t|$ are the number of nonzero cells in $P_t^*$ and $P_t$ respectively. Replacing each $p^*_{ij,t}$ and $p_{ij,t}$ term in a statistic with $\frac{p^*_{ij,t}}{\overline{p^*}_t}$ and $\frac{p_{ij,t}}{\overline{p}_t}$ ensures that the statistic also converges as $|V_t|$ increases.
The utility of the $\overline{p^*}_t$ and $\overline{p}_t$ terms is to counteract the probability mass concentration effect when $|V|$ changes. As $P$ is a proper probability distribution and sums to 1, decreasing $|V|$ reduces the number of cells in $P$ and causes the probability mass in each cell to rise (illustrated in Figure \ref{mass-concentration}). Normalizing by the mean value of the nonzero cells $\overline{p}$ allows the terms of the consistent statistics to converge as $|V|$ increases and ensures that the bias remains small.
Another way to consider this term is that $\overline{p}$ is a renormalization of $\overline{p^*}_V$, where $\overline{p^*}_V$ is the mean of the subset of $P^*$ cells that belong to $V$. As $\overline{p^*}_V$ is the sample mean approximating $\overline{p^*}$, it is an unbiased estimator of $\overline{p^*}$, and the inverse $\frac{1}{\overline{p^*}_V}$ is a consistent estimator of $\frac{1}{\overline{p^*}}$ due to Slutsky's theorem. The regions spanned by each of these terms are shown in Figure \ref{region-visualization}.
\begin{figure} \caption{Visualization of the regions averaged to obtain $\overline{p^*}, \overline{p},$ and $ \overline{p^*}_V$.} \label{region-visualization} \end{figure}
\paragraph{Probability Mass Shift}
\noindent We will now introduce a new consistent statistic called Probability Mass Shift which is a measure of the change in the underlying $P^*$ matrices that produced two graphs. Like graph edit distance, when used on a dynamic network it is a measure of the rate of change the network is experiencing; unlike graph edit distance, it is consistent with respect to the size of the input graphs.
\noindent Let $P^*_1$ and $P^*_2$ be probability distributions over a node set $V^*$. Define the Probability Mass Shift between $P^*_1$ and $P^*_2$ to be:
\begin{small} \begin{align} \label{eq:ms} {MS}(P^*_1, P^*_2) &= \frac{1}{Z_{V^*}} \sum_{ij \in V^*} ( \frac{ {p^*}_{ij,1} }{\overline{p^*_1}} - \frac{ {p^*}_{ij,2} }{\overline{p^*_2}} )^2 \end{align} \end{small}
\noindent where the term $\overline{p^*_x} = \frac{1}{| A^*_x |} \sum_{ij \in V^*} p^*_{ij,x}$ refers to the average value of the nonzero cells in $P^*_x$ and $Z_{V^*} = {| V^* | \choose 2}$.
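The following Python sketch (our own illustration of Equation \ref{eq:ms}, applied to two symmetric probability matrices over the same node set) computes the Probability Mass Shift with each cell normalized by the mean value of the nonzero cells; the bias-corrected empirical version is derived below.
\begin{verbatim}
import numpy as np
from math import comb

def probability_mass_shift(P1, P2):
    """Probability Mass Shift between two symmetric edge-probability
    matrices defined over the same node set."""
    n = P1.shape[0]
    p_bar_1 = P1[P1 > 0].mean()     # mean value of the nonzero cells of P1
    p_bar_2 = P2[P2 > 0].mean()
    Z = comb(n, 2)                  # number of node pairs
    diff = P1 / p_bar_1 - P2 / p_bar_2
    iu = np.triu_indices(n, k=1)    # sum over unordered node pairs
    return (diff[iu] ** 2).sum() / Z

# Toy example: identical distributions give a mass shift of zero.
P = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.0],
              [0.2, 0.0, 0.0]])
P = P / P.sum()
print(probability_mass_shift(P, P))  # 0.0
\end{verbatim}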
Let the Probability Mass Shift between node subsets $V_1, V_2 \subset V^*$ be:
\begin{small} \begin{align} \label{eq:msp} {MS}(P_1, P_2) &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} ( \frac{ {p}_{ij,1} }{\overline{p_1}} - \frac{ {p}_{ij,2} }{\overline{p_2}} )^2 \end{align} \end{small}
\noindent where $V_{\cap}$ is the intersection of $V_1$ and $V_2$, $p_{ij,x} = \frac{p^*_{ij,x}}{\sum_{ij \in V_{\cap}} p^*_{ij,x}}$, and $\overline{p_x} = \frac{1}{| A_x |} \sum_{ij \in V_{\cap}} p_{ij,x}$.
Now define the empirical Probability Mass Shift $\widehat{MS}(G_1,G_2)$ to be:
\begin{small} \begin{align} \label{eq:ems} \widehat{MS}(G_1, G_2) &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} ( \frac{ \widehat{p}_{ij,1} }{\overline{\widehat{p}_1}} - \frac{ \widehat{p}_{ij,2} }{\overline{\widehat{p}_2}} )^2 \end{align} \end{small}
\noindent where $\overline{\widehat{p}_x} = \frac{1}{| \widehat{A_x} |} \sum_{ij \in V_{\cap}} \widehat{p}_{ij,x}$, $| \widehat{A_x} | = \sum_{ij \in V_{\cap}} I[ w_{ij,x} > 0]$, and $\widehat{p}_{ij,x} = \frac{w_{ij,x}}{\sum_{ij \in V_{\cap}} w_{ij,x} }$.
\begin{comment} \begin{lemma}{$ MS(P_1,P_2) = 0$ if $P^*_1 = P^*_2$.}\end{lemma} \noindent First note that ${MS}(P^*, P^*)=0$ since the squared difference term is 0 for each $i,j$ pair. Then, note that since $P_1$ and $P_2$ involve the same nodes from $P^*$ the normalized probabilities in $P_1$ and $P_2$ will be the same (i.e., ${p}_{ij,1} = {p}_{ij,2}$). Moreover, the number of non-zero cells in $P_1$ and $P_2$ will be the same. Let $A$ be the count of the non-zero cells. Then $\overline{p_1}=\overline{p_2}=1/A$.
\begin{small} \begin{align} {MS}(P_1, P_2) &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} ( \frac{ {p}_{ij,1} }{\overline{p_1}} - \frac{ {p}_{ij,2} }{\overline{p_2}} )^2 \nonumber \\ &= \frac{1}{Z_{P}} \sum_{ij \in V_{\cap}} ( A \cdot {p}_{ij,1} - A \cdot {p}_{ij,1} )^2 \nonumber \\ &= 0 = {MS}(P^*, P^*) \nonumber \square \end{align} \end{small} \end{comment}
\begin{theorem} $\widehat{MS}(G_1,G_2)$ is a size consistent statistic which \\ converges to $MS(P^*_1, P^*_2)$. \end{theorem}
\begin{small} \allowdisplaybreaks{ \begin{align} & \lim_{|V_1|,|V_2| \rightarrow |V^*|} MS(P_1,P_2) \nonumber \\ &= \lim_{|V_1|,|V_2| \rightarrow |V^*|} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} ( {| A_1 |} {p}_{ij,1} - {| A_2 |} {p}_{ij,2})^2 \nonumber \\ &= \lim_{|V_1|,|V_2| \rightarrow |V^*|} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} (\frac{| A_1 |}{\sum_{ij \in V_{\cap} } p^*_{ij,1} } p^*_{ij,1} - \frac{| A_2 |}{\sum_{ij \in V_{\cap}}p^*_{ij,2} } p^*_{ij,2})^2 \nonumber \\ \end{align} } \end{small}
\noindent As $\frac{| A_x |}{\sum_{ij \in V_{\cap}} p^*_{ij,x}} = \frac{ \sum_{ij \in V_{\cap}} I [ p^*_{ij,x} > 0 ] }{\sum_{ij \in V_{\cap}} p^*_{ij,x}} \approx \frac{1}{ (\overline{ p^*_x } ) }$, where the approximation is calculated over only a subset of the nodes $V_{\cap}$, we denote this term as $\frac{1}{\overline{p^*_x} | V_{\cap}}$.
\begin{small} \allowdisplaybreaks{ \begin{align} &\lim_{|V_1|,|V_2| \rightarrow |V^*|} MS(P_1,P_2) \nonumber \\ &= \lim_{|V_1|,|V_2| \rightarrow |V^*|} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} (\frac{1}{\overline{p^*_1} | V_{\cap}} p^*_{ij,1} - \frac{1}{\overline{p^*_2} | V_{\cap}} p^*_{ij,2})^2 \nonumber \\ &= \frac{1}{Z_{V^*}} \sum_{ij \in V^*} \lim_{|V_1|,|V_2| \rightarrow |V^*|} \big( \frac{1}{(\overline{p^*_1} | V_{\cap})^2 } (p^*_{ij,1})^2 + \frac{1}{(\overline{p^*_2} | V_{\cap})^2 } (p^*_{ij,2})^2 \nonumber \\ &- 2 \frac{1}{(\overline{p^*_1} | V_{\cap}) } \frac{1}{(\overline{p^*_2} | V_{\cap}) } p^*_{ij,1} p^*_{ij,2} \big) \end{align} } \end{small}
\noindent $\overline{p^*_x} | V_{\cap}$ and $(\overline{p^*_x} | V_{\cap})^2$ are the sample mean and square of the sample mean of the value of the $P^*$ cells in $V_{\cap}$ respectively, and according to Slutsky's Theorem their inverses $\frac{1}{\overline{p^*_x} | V_{\cap}}$ and $\frac{1}{(\overline{p^*_x} | V_{\cap})^2}$ converge in probability to $\frac{1}{(\overline{p^*_x})}$ and $\frac{1}{(\overline{p^*_x})^2}$ as $| V_{\cap} | \rightarrow | V^* |$.
\allowdisplaybreaks{ \begin{align} \lim_{|V_1|,|V_2| \rightarrow |V^*|} MS(P_1,P_2) & = \frac{1}{Z_{V^*}} \sum_{ij \in V^*} \big( \frac{1}{(\overline{p^*_1} )^2 } (p^*_{ij,1})^2 \nonumber \\ + \frac{1}{(\overline{p^*_2})^2 } (p^*_{ij,2})^2 -& 2 \frac{1}{(\overline{p^*_1} ) } \frac{1}{(\overline{p^*_2} ) } p^*_{ij,1} p^*_{ij,2} \big) = MS(P^*_1,P^*_2) \end{align} }
\allowdisplaybreaks{ \begin{align} \lim_{| W_1 |, | W_2 | \rightarrow \infty} &\widehat{MS}(G_1, G_2) \nonumber \\ &= \lim_{| W_1 |, | W_2 | \rightarrow \infty} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} ( | \widehat{A}_1 | \widehat{p}_{ij,1} - | \widehat{A}_2 | \widehat{p}_{ij,2})^2 \nonumber \\ &= \lim_{| W_1 |, | W_2 | \rightarrow \infty} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \left( | \widehat{A}_1 | \frac{ {w}_{ij,1} }{| W_1 |} - | \widehat{A}_2 | \frac{ {w}_{ij,2} }{| W_2 |}\right)^2 \nonumber \\ &= \lim_{| W_1 |, | W_2 | \rightarrow \infty} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \frac{| \widehat{A}_1 |^2}{| W_1 |^2} {w}_{ij,1}^2 \nonumber \\ &+ \frac{| \widehat{A}_2 |^2}{| W_2 |^2} {w}_{ij,2}^2 - 2\frac{ | \widehat{A}_1 | }{| W_1 |}\frac{ | \widehat{A}_2 | }{| W_2 |}{w}_{ij,1}{w}_{ij,2} \end{align} }
Let the minimum value of any nonzero cell in $P_x$ be some $\epsilon > 0$. The probability of any such node pair not being sampled from $P_x$ is at most $(1 - \epsilon)^{| W_x |}$, which converges to 0. Once every nonzero cell has been sampled, $|\widehat{A}_x| = |A_x|$, so this term is converging and can be replaced:
\allowdisplaybreaks{ \begin{align} &= \lim_{| W_1 |, | W_2 | \rightarrow \infty} \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \frac{{| A_1 |}^2}{| W_1 |^2} {w}_{ij,1}^2 + \frac{{| A_2 |}^2}{| W_2 |^2} {w}_{ij,2}^2 \nonumber \\ &- 2\frac{ {| A_1 |} }{| W_1 |}\frac{ {| A_2 |} }{| W_2 |} {w}_{ij,1} {w}_{ij,2} \nonumber \\ &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} {| A_1 |}^2 {p}_{ij,1}^2 + {| A_2 |}^2 {p}_{ij,2}^2 - 2 {| A_1 |} {| A_2 |} {p}_{ij,1} {p}_{ij,2} \nonumber \\ &= MS(P_1,P_2) \square \end{align} }
\noindent We can improve upon the empirical version of the statistic by calculating the amount of bias for finite $|W_1|,|W_2|$ values and compensating for it.
The expectation of $w_{ij,x}^2$ for any node pair $i,j$ given $ | W_x |$ can be written as:
\begin{align} E_{W_x }[ w^2_{ij,x} ] &= Var(w_{ij,x}) + E_{W_x }[ w_{ij,x} ]^2 \nonumber \\&= Var( Bin(| W_x |, {p}_{ij,x}) ) + E_{W_x }[ Bin(| W_x |, {p}_{ij,x}) ]^2 \nonumber \\ &= | W_x |{p}_{ij,x}(1-{p}_{ij,x}) + | W_x |^2{p}^2_{ij,x} \end{align}
\noindent We can rewrite the expectation of the empirical mass shift:
\allowdisplaybreaks{ \begin{align} &\frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \frac{{(| A_1 |)}^2}{| W_1 |^2} ( | W_1 |{p}_{ij,1}(1-{p}_{ij,1}) + | W_1 |^2{p}^2_{ij,1} ) \nonumber \\ & + \frac{{(| A_2 |)}^2}{| W_2 |^2} ( | W_2 |{p}_{ij,2}(1-{p}_{ij,2}) + | W_2 |^2{p}^2_{ij,2} ) \nonumber \\ &- 2\frac{ {| A_1 |} }{| W_1 |}\frac{ {| A_2 |} }{| W_2 |}| W_1 |p_{ij,1} | W_2 |p_{ij,2} \nonumber \\ &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} (| A_1 |p_{ij,1} - | A_2 |p_{ij,2})^2 \nonumber \\ &+ \frac{1}{| W_1 |} | A_1 |^2p_{ij,1}(1-p_{ij,1}) + \frac{1}{| W_2 |} | A_2 |^2p_{ij,2}(1-p_{ij,2}) \nonumber \\ &= MS(P_1, P_2) + \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \Big[ \frac{1}{| W_1 |} | A_1 |^2 p_{ij,1}(1-p_{ij,1}) \nonumber \\ &+ \frac{1}{| W_2 |} | A_2 |^2 p_{ij,2} (1-p_{ij,2}) \Big] \end{align} }
\noindent which is equal to $MS(P_1, P_2)$ plus a bias term. Although we have shown the empirical mass shift to be size consistent, we can improve the rate of convergence by subtracting our estimate of the bias:
\begin{small} \allowdisplaybreaks{ \begin{align} \widehat{MS}(G_1, G_2) &= \frac{1}{Z_{V_{\cap}}} \sum_{ij \in V_{\cap}} \Big[ ( | \widehat{A}_1 | \widehat{p}_{ij,1} - | \widehat{A}_2 | \widehat{p}_{ij,2})^2 - \frac{1}{| W_1 |} | \widehat{A}_1 |^2 \widehat{p}_{ij,1}(1-\widehat{p}_{ij,1}) \nonumber \\ &- \frac{1}{| W_2 |} | \widehat{A}_2 |^2 \widehat{p}_{ij,2}(1-\widehat{p}_{ij,2}) \Big] \end{align} } \end{small}
\paragraph{Probabilistic Degree Distance}
The Probabilistic Degree Distance is a delta statistic that measures the difference between the degree distributions of two graphs in a size-consistent manner. It is defined as:
\begin{small} \begin{align} & {PDD}(P^*_1, P^*_2) = \sum_{bin_k \in Bins} (\frac{1}{{|V^*|}} \sum_{i \in V^*} I[ PD_i(P^*_1) \in bin_k ] \nonumber \\ &- \frac{1}{{|V^*|}} \sum_{i \in V^*} I[ PD_i(P^*_2) \in bin_k ] )^2 \end{align} \end{small}
\noindent where $Bins$ is a consecutive sequence of equal-size bins which encompass all $PD_i$ values, $PD_i(P^*) = \frac{1}{\overline{p^*}}\sum_{j \in V^*} p^*_{ij}$ is the \textit{Probabilistic Degree} of a node $i$, and $\overline{p^*} = \frac{1}{|V^*|}$ is the average probability mass per node. We can rewrite the probabilistic degree as $PD_i(P^*) = |V^*| \sum_{j \in V^*} p^*_{ij}$. As the name suggests, the probabilistic degree is a normalized version of node degree, and the distribution of probabilistic degrees replaces the standard degree distribution.
Before we can begin our proofs about the consistency of the PDD, we must first analyze the behavior of this probabilistic degree distribution. The distribution of probabilistic degrees can be represented through the cumulative distribution of the normalized row masses of $P^*_k$:
\begin{small} \begin{align} F^*_{row,k}(x) &= \sum_{i \in V^*_k} I[ x > \frac{ \sum_{j \neq i \in V^*_k} p^*_{ij,k}}{(\overline{p^*}_{row,k})} ] \end{align} \end{small}
where $\overline{p^*}_{row,k} = \frac{\sum_{ij \in V^*_k} p^*_{ij,k}}{N^*} = \frac{1}{N^*}$.
We can rewrite the CDF as
\begin{small} \begin{align} F^*_{row,k}(x) &= \sum_{i \in V^*_k} I[ x > N^* \cdot \sum_{j \neq i \in V^*_k} {p^*_{ij,k}} ] \end{align} \end{small}
As before, let us investigate the effect of node sampling by calculating the value of the statistic using the normalized probabilities $P_1, P_2$ on node subsets $V_1, V_2$:
\begin{small} \begin{align} F_{row,k}(x) &= {\sum_{i \in V_k} I[ x > \sum_{j \neq i \in V_k} \frac{p_{ij,k}}{(\overline{p}_{row,k})} ]} \end{align} \end{small}
where $\overline{p}_{row,k} = \frac{1}{N}$. Since $p_{ij,k} = \frac{p^*_{ij,k}}{\sum_{ij \in V_k} p^*_{ij,k}}$, we can rewrite $F_{row,k}(x)$ as
\begin{small} \begin{align} F_{row,k}(x) &= {\sum_{i \in V_k} I[ x > N \cdot \frac{1}{\sum_{jl \in V_k} p^*_{jl,k}} \cdot {\sum_{j \neq i \in V_k} p^*_{ij,k}} ]} \nonumber \\ &= \sum_{i \in V_k} I[ x > \frac{ 1 }{\overline{p^*}_{row,k} | V_k} \cdot \sum_{j \neq i \in V_k} p^*_{ij,k} ] \end{align} \end{small}
where $\overline{p^*}_{row,k} | V_k$ is the mean probability mass per row in $P^*$, excluding any cells/rows that do not belong to the set $V_k$. If we take the expectation of the normalized row mass for a particular $i$ with respect to the node sample $V_k$:
\begin{small} \begin{align} &E_{V_k} [ \frac{1}{\overline{p^*}_{row,k} | V_k} \cdot {\sum_{j \neq i \in V_k} p^*_{ij,k}} ] \nonumber \\ = &E_{N} \bigg[ E_{V_k \big| N} [ \frac{1}{\overline{p^*}_{row,k} | V_k} \cdot {\sum_{j \neq i \in V_k} p^*_{ij,k}} ] \bigg] \nonumber \\ = &E_{N} \bigg[ E_{V_k \big| N} [ \frac{1}{\overline{p^*}_{row,k} | V_k}] \cdot E_{V_k \big| N} [ {\sum_{j \neq i \in V_k} p^*_{ij,k}} ] \bigg] \nonumber \\ \end{align} \end{small}
where the expectation of the product is approximated by the product of the expectations. If we apply Wald's equation to $E_{V_k \big| N} [ {\sum_{j \neq i \in V_k} p^*_{ij,k}} ]$ we obtain
\begin{small} \begin{align} E_{V_k \big| N} [ {\sum_{j \neq i \in V_k} p^*_{ij,k}} ] = E_{V_k \big| N} [ | A_{row_i,k} | ] \cdot \overline{p^*_{ij,k}} \end{align} \end{small}
where $\overline{p^*_{ij,k}}$ is the mean value of the nonzero cells in row $i$ of $P^*_k$. If we assume that probability mass in row $i$ is evenly distributed amongst the columns, then the fraction of row mass in $V_k$ versus $V^*_k$ is equal to the fraction of their sizes:
\begin{small} \begin{align} E_{V_k \big| N} [ | A_{row_i,k} | ] \cdot \overline{p^*_{ij,k}} = \frac{N}{N^*} \cdot | A^*_{row_i,k} | \cdot \overline{p^*_{ij,k}} = \frac{N}{N^*} \cdot {\sum_{j \neq i \in V^*_k} p^*_{ij,k}} \end{align} \end{small}
Now if we approximate $E_{V_k \big| N} [ \frac{1}{\overline{p^*}_{row,k} | V_k}]$ with a Taylor expansion we obtain:
\begin{small} \begin{align} &E_{V_k \big| N} [ \frac{1}{\overline{p^*}_{row,k} | V_k}] \approx \nonumber \\ &\frac{1}{E_{V_k \big| N} [\overline{p^*}_{row,k} | V_k]} + \frac{1}{(E_{V_k \big| N} [\overline{p^*}_{row,k} | V_k])^3} \cdot Var(\overline{p^*}_{row,k} | V_k) \end{align} \end{small}
If we make the same assumption that row mass is roughly evenly distributed across the columns of the matrix, \[ E_{V_k \big| N} [\overline{p^*}_{row,k} | V_k] = \frac{N}{N^*} \cdot \overline{p^*}_{row,k}.\] We can also rewrite the variance term as \[ Var( \frac{\sum_{ij \in V_k} p^*_{ij,k}}{N}) = \frac{1}{N^2} \cdot Var(\sum_{ij \in V_k} p^*_{ij,k}) \] Putting this together we have
\begin{small} \begin{align} &E_{V_k \big| N} [ \frac{1}{\overline{p^*}_{row,k} | V_k}] \approx \nonumber \\ &\frac{N^*}{N \cdot \overline{p^*}_{row,k} } + \frac{N^{*3}}{N^3 \cdot \overline{p^*}_{row,k}^3} \cdot \frac{1}{N^2} \cdot Var(\sum_{ij \in V_k} p^*_{ij,k}) \end{align} \end{small}
A typical degree distribution of a social network tends to follow a power law, which means that a handful of nodes have a large degree and most have a very small degree.
Again we will assume that covariances between edge probabilities are limited to within row/column pairs. If we assume that the majority of nodes have a sub-$O(N^{1/2})$ number of neighbors, then the number of covariance terms grows more slowly than $N^2$ and the $1/N^2$ factor drives the bias from these nodes to 0. Likewise, if the handful of high-degree nodes have a sub-$O(N)$ number of neighbors, their covariance terms will also be dominated by the $1/N^2$ factor and the bias will also converge to 0.
\begin{small} \begin{align} \frac{N^*}{N \cdot \overline{p^*}_{row,k} } \cdot \frac{N}{N^*} \cdot {\sum_{j \neq i \in V^*_k} p^*_{ij,k}} = \frac{\sum_{j \neq i \in V^*_k} p^*_{ij,k}}{\overline{p^*}_{row,k}} \end{align} \end{small}
which is the true normalized row mass calculated on $P^*_k$. This means that the CDF of the row masses converges to the correct distribution as $N$ approaches $N^*$. $\square$
\textbf{Degree Distribution Edge Bias} Now we will consider the CDF of the empirical row mass calculated from an edge sampling $W_k$:
\begin{small} \begin{align} \widehat{F}_{row,k}(x) &= \sum_{i \in V} I[ x > \frac{\sum_{j \neq i \in \widehat{V}_k} \widehat{p}_{ij,k}}{\overline{\widehat{p}_{row,k}} } ] \end{align} \end{small}
where $\overline{\widehat{p}_{row,k}} = \frac{1}{\widehat{N}_k}$ and $\widehat{V}_k$ is the set of nodes that have at least one edge in $W_k$. If we take the expectation of the normalized row mass with respect to $W_k$ we obtain
\begin{small} \begin{align} E_{W_k} [ \widehat{N}_k \cdot \sum_{j \neq i \in \widehat{V}_k} \widehat{p}_{ij,k} ] \end{align} \end{small}
Since all rows in $P_k$ have at least one cell with nonzero probability, $\widehat{V}_k \rightarrow V_k$ as $| W_k |$ increases, because the probability of sampling at least one edge from every node approaches 1. So if we take the limit as $| W_k |$ increases:
\begin{small} \begin{align} &\lim_{| W_k | \rightarrow \infty}E_{W_k} [ \widehat{N}_k \cdot \sum_{j \neq i \in \widehat{V}_k} \widehat{p}_{ij,k} ] \nonumber \\ = &E_{W_k} [ {N}_k \cdot \sum_{j \neq i \in {V}_k} \widehat{p}_{ij,k} ] \nonumber \\ = & {N}_k \cdot \sum_{j \neq i \in {V}_k} E_{W_k} [ \widehat{p}_{ij,k} ] \nonumber \\ = & {N}_k \cdot \sum_{j \neq i \in {V}_k} p_{ij,k} \end{align} \end{small}
Therefore
\begin{small} \begin{align} &\lim_{| W_k | \rightarrow \infty}E_{W_k} [ \widehat{F}_{row,k}(x) ] = F_{row,k}(x) \end{align} \end{small}
Now let us define the empirical probabilistic degree distance and analyze its behavior. The empirical probabilistic degree is $\widehat{PD}_i(G) = {{|V|}} \sum_{j \in V} {\widehat{p}}_{ij}$ and the empirical version of the delta statistic on $G_1,G_2$ is:
\begin{small} \begin{align} & \widehat{PDD}(G_1, G_2) = \sum_{bin_k \in Bins} (\frac{1}{{|V_1|}} \sum_{i \in V_1} I[ \widehat{PD}_i(G_1) \in bin_k ] \nonumber \\ &- \frac{1}{{|V_2|}} \sum_{i \in V_2} I[ \widehat{PD}_i(G_2) \in bin_k ] )^2 \end{align} \end{small}
\begin{theorem} $\widehat{PDD}(G_1, G_2)$ is a size consistent statistic \\ which converges to ${PDD}(P^*_1, P^*_2)$.
\end{theorem} \noindent First, take the limit of the Probabilistic Degree for a node as $|W|$ increases: \begin{small} \allowdisplaybreaks{ \begin{align} \lim_{|W| \rightarrow \infty} \widehat{PD}_i(G) &= \lim_{|W| \rightarrow \infty} {{|V|}} \sum_{j \in V} \widehat{p}_{ij} \nonumber \\ = \lim_{|W| \rightarrow \infty} {|V|} \sum_{j \in V} \frac{{w}_{ij}}{|W|} &= |V| \sum_{j \in V} p_{ij} = PD_i(P) \end{align} } \end{small} \noindent If we take the same limit over the Probabilistic Degree Difference we obtain: \begin{small} \begin{align} \lim_{|W| \rightarrow \infty} & \widehat{PDD}(G_1, G_2) = \sum_{bin_k \in Bins} (\frac{1}{{|V_1|}} \sum_{i \in V_1} I[ {PD}_i(P_1) \in bin_k ] \nonumber \\ &- \frac{1}{{|V_2|}} \sum_{i \in V_2} I[ {PD}_i(P_2) \in bin_k ] )^2 = PDD(P_1,P_2) \end{align} \end{small} \noindent If we take the limit as $|V| \rightarrow |V^*|$ of $PD_i(P)$: \begin{small} \begin{align} \lim_{|V| \rightarrow |V^*|} PD_i(P) &= \lim_{|V| \rightarrow |V^*|} |V| \sum_{j \in V} p_{ij} = \lim_{|V| \rightarrow |V^*|} \frac{|V|}{\sum_{ij \in V} p^*_{ij} } \sum_{j \in V} p^*_{ij} \end{align} \end{small} \noindent $\frac{|V|}{\sum_{ij \in V} p^*_{ij} }$ can be rewritten as $\frac{1}{\bar{p^*} | V}$ where $\bar{p^*} | V$ is the average probability mass per node in $V$. As this is an inverse mean, it will converge to the true value $\frac{1}{\bar{p^*}}$, and therefore $\lim_{|V| \rightarrow |V^*|} PD_i(P) = \frac{1}{\bar{p^*} } \sum_{j \in V^*} p^*_{ij} = |V^*| \sum_{j \in V^*} p^*_{ij} = PD_i(P^*)$. If we take the limit on the $PDD$ we obtain a similar result: \begin{small} \begin{align} \lim_{|V| \rightarrow |V^*|} PDD(P_1,P_2) = \sum_{bin_k \in Bins} &(\frac{1}{{|V^*|}} \sum_{i \in V^*} I[ {PD}_i(P^*_1) \in bin_k ] \nonumber \\ - \frac{1}{{|V^*|}} \sum_{i \in V^*} I[ {PD}_i(P^*_2) \in bin_k ] )^2 &= PDD(P^*_1,P^*_2) \end{align} \end{small} \noindent If we take the expectation over $V$: \allowdisplaybreaks{ \begin{align} E[ PDD(P_1,&P_2) ] = E[ \sum_{bin_k \in Bins} (\frac{1}{{|V_1|}} \sum_{i \in V_1} I[ {PD}_i(P_1) \in bin_k ] \nonumber \\ &- \frac{1}{{|V_2|}} \sum_{i \in V_2} I[ {PD}_i(P_2) \in bin_k ] )^2 ] \nonumber \\ =& \sum_{bin_k \in Bins} E[ (\frac{1}{{|V_1|}} Bin(|V_1|, p_{k,1}) - \frac{1}{{|V_2|}} Bin(|V_2|, p_{k,2}) )^2 ] \nonumber \\ =& \sum_{bin_k \in Bins} E[ (\frac{1}{{|V_1|}} Bin(|V_1|, p_{k,1}))^2 ] + E[ (\frac{1}{{|V_2|}} Bin(|V_2|, p_{k,2}) )^2 ] \nonumber \\ &- 2 E[\frac{1}{{|V_1|}} Bin(|V_1|, p_{k,1})] E[\frac{1}{{|V_2|}} Bin(|V_2|, p_{k,2})] \nonumber \\ \end{align} } \noindent where $p_{k,x}$ is the probability of any node selected from $V_x$ belonging to bin $k$. Using the same approach as with Mass Shift, we obtain a bias correction of $-\frac{ (p_{k,1})(1 - p_{k,1}) }{|V_1|} - \frac{ (p_{k,2})(1 - p_{k,2}) }{|V_2|}$. \paragraph{Triangle Probability} \noindent As the name suggests, the triangle probability (TP) statistic is an approach to capturing the transitivity of the network and an alternative to traditional clustering coefficient measures.
Define the triangle probability as: \begin{small} \begin{align} \label{eq:tp} TP(P^*) &= \frac{1}{Z^*} \sum_{ijk \in V^*} (\frac{1}{(\overline{p^*})})^3 p^*_{ij} p^*_{ik} p^*_{jk} \nonumber \\ &=\frac{1}{Z^*} \sum_{ijk \in V^*} | A^* |^3 p^*_{ij} p^*_{ik} p^*_{jk} \end{align} \end{small} \noindent The empirical version on $G$ is: \begin{small} \begin{align} \widehat{TP}(G) &= \frac{1}{Z} \sum_{ijk \in V} \widehat{| A |}^3 \widehat{p}_{ij} \widehat{p}_{ik} \widehat{p}_{jk} \end{align} \end{small} \noindent where $Z^* = {| V^* | \choose 3}$ and $Z = {| V | \choose 3}$. \begin{theorem} $\widehat{TP}(G)$ is a size consistent statistic which converges to ${TP}(P^*)$. \end{theorem} \noindent As before, if we take the limit with increasing $|W|$: \begin{small} \allowdisplaybreaks{ \begin{align} \lim_{|W| \rightarrow \infty} \frac{1}{Z} \sum_{ijk \in V} \widehat{| A |}^3 \widehat{p}_{ij} \widehat{p}_{ik} \widehat{p}_{jk} &= \lim_{|W| \rightarrow \infty} \frac{1}{Z} \sum_{ijk \in V} \frac{\widehat{| A |}^3}{|W|^3} {w}_{ij} {w}_{ik} {w}_{jk} \nonumber \\ = \frac{1}{Z} \sum_{ijk \in V} {{| A |}^3} {p}_{ij} {p}_{ik} {p}_{jk} &= TP(P) \end{align} } \end{small} \noindent Now if we take the limit as $|V| \rightarrow |V^*|$: \begin{small} \allowdisplaybreaks{ \begin{align} &\lim_{|V| \rightarrow |V^*|} \frac{1}{Z} \sum_{ijk \in V} {{| A |}^3} {p}_{ij} {p}_{ik} {p}_{jk} \nonumber \\ &= \lim_{|V| \rightarrow |V^*|} \frac{1}{Z} \sum_{ijk \in V} \frac{{| A |}^3}{(\sum_{ij \in V} p^*_{ij} )^3} p^*_{ij} p^*_{ik} p^*_{jk} \nonumber \\ &= \lim_{|V| \rightarrow |V^*|} \frac{1}{Z} \sum_{ijk \in V} \frac{1}{(\bar{p^*} | V )^3} p^*_{ij} p^*_{ik} p^*_{jk} \end{align} } \end{small} \noindent Similar to the approach before, $ \frac{1}{(\bar{p^*} | V )^3}$ converges to $\frac{1}{(\bar{p^*} )^3}$ by Slutsky's Theorem, so the final limit is \begin{small} \begin{align} &= \frac{1}{Z^*} \sum_{ijk \in V^*} \frac{1}{(\bar{p^*} )^3} p^*_{ij} p^*_{ik} p^*_{jk} = TP(P^*) \square \end{align} \end{small} As with the Mass Shift, let us take the expectation w.r.t. 
$|W|$ and see if we can improve the rate of convergence with a bias correction: \begin{small} \begin{align} E[ \widehat{TP}(G) ] &= E \biggr[ \frac{1}{Z} \sum_{ijk \in V} (\widehat{| A |})^3 \widehat{p}_{ij} \widehat{p}_{ik} \widehat{p}_{jk} \biggr] \nonumber \\ &= \frac{1}{Z} \sum_{ijk \in V} E \biggr[ (\frac{\widehat{| A |}}{| W |})^3 {w}_{ij} {w}_{ik} {w}_{jk} \biggr] \end{align} \end{small} \noindent As before, let us assume that we have enough edge samples so that $\widehat{| A |} = | A |$: \begin{small} \begin{align} \frac{1}{Z} \sum_{ijk \in V} E\biggr[ (\frac{| A |}{| W |})^3 {w}_{ij} {w}_{ik} {w}_{jk} \biggr] \end{align} \end{small} \noindent The quantity $E[ \frac{1}{| W |^3} {w}_{ij} {w}_{ik} {w}_{jk} ]$ can be calculated with \begin{small} \allowdisplaybreaks{ \begin{align*} E\biggr[ \frac{1}{| W |^3} &w_{ij} w_{ik} w_{jk} \biggr] = \frac{1}{| W |^3} E[w_{ij} w_{ik} w_{jk}] \nonumber \\ =& \frac{1}{| W |^3} E[w_{ij} w_{ik}] E[w_{jk}] - cov(w_{ij}w_{ik}, w_{jk}) \\ =& \frac{1}{| W |^3} E[w_{ij}] E[w_{ik}] E[w_{jk}] \nonumber \\ &- E[w_{jk}] cov(w_{ij},w_{ik}) - cov(w_{ij} w_{ik}, w_{jk}) \\ =& \frac{1}{| W |^3} | W |^3 p_{ij} p_{ik} p_{jk} + | W |^2 p_{ij} p_{ik} p_{jk} - cov(w_{ij} w_{ik}, w_{jk}) \end{align*} } \end{small} \noindent The covariance term can be expanded with the formula for products of random variables \cite{bohrnstedt}: \begin{small} \allowdisplaybreaks{ \begin{align*} cov(w_{ij} \cdot w_{ik},&w_{jk}) = \nonumber \\ & E[w_{ij}]cov(w_{ik},w_{jk}) + E[w_{ik}]cov(w_{ij}w_{jk}) \\ &+ E[(w_{ij} - E[w_{ij}])(w_{ik} - E[w_{ik}])(w_{jk} - E[w_{jk}]) ] \\ =& -2 | W |^2p_{ij}p_{ik}p_{jk} \\&+E\biggr[ w_{ij}w_{ik}w_{jk} - E[w_{ij}]w_{ik}w_{jk} \nonumber \\ &- E[w_{ik}]w_{ij}w_{jk} - E[w_{jk}]w_{ik}w_{ij} \\ & + E[w_{ij}]E[w_{ik}]w_{jk} + E[w_{jk}]E[w_{ik}]w_{ij} \\& + E[w_{ij}]E[w_{jk}]w_{ik}- E[w_{ij}]E[w_{jk}]E[w_{ik}] \biggr] \\ =& -2| W |^2p_{ij}p_{ik}p_{jk} + E[w_{ij} w_{ik} w_{jk}] \\&- | W |p_{ij}E[w_{ik}w_{jk}] - | W |p_{ik}E[w_{ij}w_{jk}] \nonumber \\ &- | W |p_{jk}E[w_{ik}w_{ij}] \\&+ 3| W |^3p_{ij}p_{ik}p_{jk} - | W |^3p_{ij}p_{ik}p_{jk} \\ =& -2| W |^2p_{ij}p_{ik}p_{jk} \nonumber \\ & + E[w_{ij}w_{ik}w_{jk}] - 3| W |^3p_{ij}p_{ik}p_{jk} \\&+ | W |p_{ij}cov(w_{ik},w_{jk}) \nonumber \\ & + | W |p_{ik}cov(w_{ij},w_{jk}) + | W |p_{jk}cov(w_{ik},w_{ij}) \\&+ 3| W |^3p_{ij}p_{ik}p_{jk} - | W |^3p_{ij}p_{ik}p_{jk} \\ =& -2| W |^2p_{ij}p_{ik}p_{jk} + E[w_{ij}w_{ik}w_{jk}] \nonumber \\ &- 3| W |^2p_{ij}p_{ik}p_{jk} - | W |^3p_{ij}p_{ik}p_{jk} \\ =& -5| W |^2p_{ij}p_{ik}p_{jk} + E[w_{ij}w_{ik}w_{jk}] - | W |^3 p_{ij}p_{ik}p_{jk} \end{align*} } \end{small} \noindent By plugging the covariance into the original equation we obtain: \allowdisplaybreaks{ \begin{align*} E\biggr[ \frac{1}{| W |^3} w_{ij} w_{ik} w_{jk} \biggr] =& \frac{1}{| W |^3} ( | W |^3p_{ij} p_{ik} p_{jk} + | W |^2 p_{ij} p_{ik} p_{jk} \\ + 5| W |^2p_{ij}p_{ik}p_{jk} &- E[w_{ij}w_{ik} w_{jk}] + | W |^3p_{ij}p_{ik}p_{jk} )\\ 2 E[ \frac{1}{| W |^3} w_{ij} w_{ik} w_{jk} ] =& \frac{1}{| W |^3} ( 2| W |^3p_{ij} p_{ik} p_{jk} + 6| W |^2 p_{ij} p_{ik} p_{jk} ) \\ E[ \frac{1}{| W |^3} w_{ij} w_{ik} w_{jk} ] =& \frac{1}{| W |^3}( | W |^3p_{ij} p_{ik} p_{jk} + 3| W |^2 p_{ij} p_{ik} p_{jk} )\\ =& p_{ij} p_{ik} p_{jk} + \frac{3}{| W |}{p}_{ij}{p}_{ik}{p}_{jk} \end{align*} } \noindent So the bias term is $ \frac{3}{| W |}{p}_{ij}{p}_{ik}{p}_{jk}$. 
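To make the preceding derivation concrete, the following is a minimal illustrative sketch (not the implementation used for our experiments) of how the empirical triangle probability $\widehat{TP}(G)$ and the $\frac{3}{| W |}$ bias term derived above could be computed from an observed weight matrix. It assumes a dense, symmetric weight matrix with a zero diagonal, counts each unordered node pair as a single cell of $W$, and uses the fact that the sum of $\widehat{p}_{ij} \widehat{p}_{ik} \widehat{p}_{jk}$ over unordered triples equals $\mathrm{tr}(\widehat{P}^3)/6$; all variable names are purely illustrative.

\begin{small}
\begin{verbatim}
import numpy as np
from math import comb

def triangle_probability(W):
    # W: symmetric nonnegative weight matrix with a zero diagonal
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    upper = np.triu(W, 1)
    total_weight = upper.sum()            # |W|: total weight over unordered pairs
    nnz = np.count_nonzero(upper)         # |A-hat|: number of observed edges
    P_hat = W / total_weight              # empirical probabilities p-hat_ij
    # sum of p_ij * p_ik * p_jk over unordered triples {i,j,k}
    tri_sum = np.trace(P_hat @ P_hat @ P_hat) / 6.0
    Z = comb(n, 3)
    tp_hat = (nnz ** 3) * tri_sum / Z     # uncorrected empirical TP
    bias = (3.0 / total_weight) * tp_hat  # empirical 3/|W| bias term
    return tp_hat, bias

# toy example: a small weighted graph containing one heavy triangle
W = np.array([[0, 4, 3, 1],
              [4, 0, 5, 0],
              [3, 5, 0, 2],
              [1, 0, 2, 0]])
tp_hat, bias = triangle_probability(W)
print(tp_hat, tp_hat - bias)              # uncorrected and corrected estimates
\end{verbatim}
\end{small}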
If we subtract the empirical version of this term to compensate, we obtain the corrected empirical Triangle Probability: \begin{small} \begin{align*} \widehat{TP}(G) &= \frac{1}{Z} \sum_{ijk \in V} \widehat{| A |}^3 ( \widehat{p}_{ij} \widehat{p}_{ik} \widehat{p}_{jk} - \frac{3}{| W |}\widehat{p}_{ij}\widehat{p}_{ik}\widehat{p}_{jk} ) \end{align*} \end{small} \section{Anomaly Detection Process} In order to perform anomaly detection on a dynamic network, the collection of messages must first be converted into a sequence of graph instances. Each message consists of a pair of nodes and an associated timestamp, so after picking a time step width $\Delta$ the graph at each sequential time step $t$ is created by adding all messages falling between $t$ and $t+\Delta$ to the matrix $W_t$, producing a sequence of graphs. The algorithm is described in Figure~\ref{alg:graphprocess}. Then, a statistic value needs to be calculated for every graph instance in the stream. As the length of the stream is usually short compared to the size of the graphs, the computational complexity depends on the cost of calculating the network statistics on the largest graph instance. In order to calculate our consistent statistics, $\widehat{P}_t$ must be estimated, which is easily done by normalizing the observed message matrix $W_t$. Then the network statistic scores are calculated at each time step. This generates a set of standard time series which can be analyzed with traditional time series anomaly detection techniques. Selection of a proper $\Delta$ time step width is crucial. Due to the nature of size-consistent statistics, larger values of $\Delta$ will reduce the error associated with statistical bias, but larger values also reduce the granularity of the detection algorithm, making it harder to pinpoint the exact time that the anomaly occurred. \begin{figure} \caption{Creation of the dynamic graph sequence from message stream using time step width $\Delta$.} \label{alg:graphprocess} \end{figure} Now that we have a stream of graphs we can perform the anomaly detection process. For every time step $t$ the graph $G_t$ becomes the test graph and the graphs $G_{t-1}, G_{t-2}... G_{t-k}$ become the null distribution examples (here we use $k=50$). By applying $\widehat{S}_k$ to each graph we obtain both the test point and the null distribution. Given a certain p-value $\alpha$, we then set the critical points to be the values which reject the most extreme $\alpha/2$ values from the null distribution on both sides. If the test point $\widehat{S}_k(G_t)$ falls outside of these critical points we can reject the null hypothesis and raise an anomaly flag. This detection algorithm is described in Figure~\ref{alg:anomalydetection}. \subsection{Smoothing} Rather than calculating delta statistics using a weighted matrix $W_{t-1}$ which contains only the communications of the immediately prior time step, an aggregate of prior time steps $W_{t-k} ... W_{t-1}$ can be used by simply calculating the average weighted matrix $\overline{W}$ from $W_{t-k} ... W_{t-1}$ and then calculating $S_k(W_t, \overline{W})$ as the delta statistic. The advantage of this approach is that it measures the distance of the current behavior from the average behavior seen in a range of recent past time steps, and as such is less susceptible to flagging time $t$ due to an outlier in $W_{t-1}$. \begin{figure} \caption{ Anomaly detection procedure for a graph stream $\{ G_1 ...
G_t \}$, graph statistic $S_k$ and p-value $\alpha$.} \label{alg:anomalydetection} \end{figure} Another smoothing option is to use a moving window approach with overlapping time steps, i.e., calculate $W_t, W_{t+\delta}, W_{t+2*\delta}...$ where $W_{t+\delta}$ starts at time $t+\delta$ and ends at time $t+\delta+\Delta$. This effectively allows for a larger time step without sacrificing granularity, as it should be straightforward to find which $\delta$-wide time span an anomaly occurred in. A prior edge weight value for the cells of $W_t$ is another option. Instead of using $W_t$, one can use $W'_t$ where $w'_{ij,t} = w_{ij,t} + c$ for some value of $c$. In general, $c$ should be small, usually less than 1, as this prior value adds $c |V_t|^2$ total weight to the matrix and $c |V_t|^2 \ll |W_t|$ in the ideal case. Larger values of $c$ can easily wash out the actual network behavior, leading all of the graph examples to seem uniform. So far the $P$ matrices have been estimated with a frequentist approach using the observed message frequencies to estimate the probabilities. If one desires to assign a prior distribution to the $P$ matrix, a Bayesian approach is easily implemented by choosing a Beta distribution for each cell in $P$ and using them as conjugate priors for normalized binomial distributions using the observed message frequencies as the evidence. The reason we did not utilize this approach is that it is difficult to choose proper prior distributions: due to the sparsity of most networks, the vast majority of cells in $P$ are zero. Similar to the prior edge weight approach, assigning a nonzero prior to all cells in $P$ tends to dilute the network, but deciding which cells to assign a zero prior probability is nontrivial. Because 0 is the natural value for most pairs of nodes in the network, smoothing by assigning a non-zero prior to all of these node pairs is detrimental. \subsection{Complexity Analysis} Statistics like probability mass and probabilistic degree can be calculated at each step in $O( |A_t|)$ time, making their overall complexity $O( |A_t| t)$, where $|A_t|$ is the number of nonzero elements in $W_t$. Triangle probability, on the other hand, is more expensive as some triangle counting algorithm must be applied. The fastest counting algorithms typically run in $O(|V|^k)$ time where $2 < k < 3$, making the overall complexity $O(|V|^k t)$ for the whole stream. However, if we make the assumption that the maximum number of neighbors of any node is bounded by $n_{max}$, we can approximate the triangle count with $O(|V| n_{max}^2 t)$. Note that any other statistic-based approach such as Netsimile that utilizes triangle count or clustering coefficient must make the same approximations in order to run in linear time. \begin{comment} \begin{figure*} \caption{ROC curves on synthetic datasets with varying alphas. False positives due to extra edges.} \label{fig:edgesROC} \end{figure*} \begin{figure*} \caption{ROC curves on synthetic datasets with varying alphas.
False positives due to extra nodes.} \label{fig:nodesROC} \end{figure*} \begin{figure*} \caption{Triangle Probability behavior: (a) Edge False Pos (b) Node False Pos (c) Power} \label{fig:lineplot} \end{figure*} \begin{figure*} \caption{Synthetic Data Generation Procedure} \label{fig:syntheticgeneration} \end{figure*} \begin{figure*} \caption{Semi-Synthetic Data Generation Procedure} \label{fig:semisyntheticgeneration} \end{figure*} \end{comment} \begin{comment} \begin{figure*} \caption{Mass Shift behavior: (a) Edge False Pos (b) Node False Pos (c) Power} \end{figure*} \begin{figure*} \caption{Mass Shift behavior: (a) Edge False Pos (b) Node False Pos (c) Power} \end{figure*} \end{comment} \begin{comment} \begin{figure*} \caption{Mass Shift behavior: (a) Edge False Pos (b) Node False Pos (c) Power} \label{fig:lineplots} \end{figure*} \end{comment} \section{Experiments} Now that we have established the properties of size-consistent and -inconsistent statistics, we will show the tangible effects of these statistical properties using synthetic, semi-synthetic, and real-world datasets. The objective for the synthetic and semi-synthetic experiments is to maximize the true positive detection rate (where a true positive is flagging a graph generated with anomalous parameters) and minimize the false positive rate (where a false positive is flagging a graph with unusual edge count or node count but generated with normal parameters). The real-world experiments will be an exploratory analysis, demonstrating how to discover and explain events in a real-world dynamic graph. We will compare each of the consistent statistics to the conventional one it was intended to replace: graph edit distance to probability mass shift, degree distribution difference to probabilistic degree difference, and Barrat weighted clustering to triangle probability. In addition, we will compare the performance of the consistent statistics to Netsimile and Deltacon. Netsimile is an aggregate statistic which attempts to measure graph differences in a variety of dimensions and as such can be applied to find many types of anomalies. Deltacon, on the other hand, measures graph differences through the distances between nodes in the graphs and attempts to find anomalies of an entirely different type than the consistent statistics. \subsection{Synthetic Data Experiments} In order to create data with specific known properties, we used generative graph models. There are four types of graphs generated: \begin{enumerate} \item normal graph examples which are used to create the null distribution for a hypothesis test. \item edge false positive graphs which are generated using the same model parameters but with additional sampled edges. \item node false positive graphs which are generated using the same model parameters but with additional sampled nodes. \item true positive graphs which are generated with a normal number of edges and nodes but different model parameters. \end{enumerate} The first three sets of graphs are created with the same generative model but with varying edges and nodes in the output graphs, while the last set uses a different generative model. An illustration of the null distribution, false positive distribution with additional edges, and true positive distribution is shown in figure \ref{synthetic-generation}. First, a set of normal graph examples is created using the process described in 1., which will form the null distribution graphs.
A statistic is calculated for each graph example and, given an $\alpha$ value, the two critical points are found. Then a false positive graph set is created using either 2. or 3., a true positive graph set is created using 4., and statistics are calculated for each. The percentage of false positive graphs outside the critical points becomes the \textit{false positive rate} while the percentage of true positive graphs outside the critical points becomes the \textit{true positive rate}. By varying the value of $\alpha$ and plotting the true positive vs. false positive rate for each value, we can create an ROC curve showing the tradeoff of true anomalous instances found versus falsely flagged instances. The circle on the ROC curves represents selecting a p-value of 0.05. The number of edges in the normal and true positive graphs ranges from 300k-400k while the edge false positive graphs have 400k-500k, and the number of nodes in the normal and true positive graphs is 25k while the node false positive graphs have 30k. An equal number of graphs of each type were generated. For a statistic that detects the model changes reasonably well, we expect the false positive distribution to be very close to the null distribution, while the anomalous distribution is significantly different. Ideal performance on the ROC curve would be a horizontal line across the entire top of the plot: this would indicate perfect performance in detecting true positive graphs even at low p-value, and a false positive rate that is low until the p-value is increased. For comparison, a diagonal line with a slope of 1 would indicate random performance where each false positive and true positive graph is flagged as anomalous using an unbiased coin flip. Any statistic which has a curve below this line has more sensitivity to the additional edges or nodes of the false positive graphs than to the model changes of the true positive graphs. Some of the statistics evaluated even have a vertical line at the right of the plot: this indicates that no matter the p-value picked, all false positive graphs are being flagged but not all true positive graphs are flagged; this is the worst possible space for the statistic to be in. To evaluate delta statistics, graphs were generated in pairs, the first drawn from the normal/false positive/true positive model and the other always from the normal model, with the delta statistic calculated between them. \begin{figure} \caption{Diagram of synthetic experiments and the three sets of generated graphs. } \label{synthetic-generation} \end{figure} \begin{figure} \caption{Synthetic data experimental procedure for statistic $S_k$ using normal probability matrix $P_N$, anomalous probability matrix $P_A$, and $\Delta$ additional edges in False Positive graphs.} \label{fig:synthgeneration} \end{figure} \begin{figure*} \caption{(a),(b),(c): ROC curves with false positives due to extra edges. (d),(e),(f): ROC curves with false positives due to extra nodes. (g) false positive rate vs. edges, (h) false positive rate vs. nodes, (i) true positive rate.} \label{fig:ROC} \end{figure*} To test the performance of graph change statistics like graph edit distance and probability mass shift, synthetic data was generated using a mixture model that either samples edges from a static normal graph instance from 1. or from an anomalous graph from 4. The initial graph has a power-law degree distribution with an exponent of 2.0 and was generated using a Chung-Lu sampling process, while the alternative graph was generated with an Erdos-Renyi graph model.
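As a point of reference, the following is a minimal sketch of Chung-Lu style edge sampling of the sort used to generate these graphs. It is illustrative only: the exact generator, degree sequence, and parameter settings used in our experiments differ in detail, and the mapping from the power-law exponent to the Pareto shape parameter below is one common convention rather than a statement about our implementation.

\begin{small}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def chung_lu_edges(n_nodes, n_edges, exponent=2.0):
    # expected degrees with an approximately power-law tail;
    # a Pareto shape of (exponent - 1) gives a density tail ~ x^(-exponent)
    degrees = rng.pareto(exponent - 1.0, n_nodes) + 1.0
    probs = degrees / degrees.sum()
    # each endpoint of each sampled edge is chosen in proportion
    # to its expected degree, as in the Chung-Lu model
    src = rng.choice(n_nodes, size=n_edges, p=probs)
    dst = rng.choice(n_nodes, size=n_edges, p=probs)
    keep = src != dst                   # discard self loops
    return np.stack([src[keep], dst[keep]], axis=1)

edges = chung_lu_edges(n_nodes=1000, n_edges=20000)
print(edges.shape)
\end{verbatim}
\end{small}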
The normal model draws edge samples only from the initial graph, while the alternative model draws 5\% of the edges from the alternative graph. The performance of these statistics is shown in figure \ref{fig:ROC} (a) and (d). Mass Shift strictly dominates the other statistics as either the edge or node count changes. To determine the ability to detect degree distribution changes, synthetic graphs were also generated using a Chung-Lu process; however, anomalous graph instances were generated by altering the exponent parameter of the power law determining the degree distribution rather than using a mixture model. The normal graph instances have a power-law degree distribution with an exponent of 2.0 while the true positive graphs have an exponent of 1.8. The performance is shown in figure \ref{fig:ROC} (b) and (e). The transitivity experiments were done by creating graphs with a varying number of triangle closures. To create each graph, a KPGM model with a seed of \begin{small} $\left[ \begin{tabular}{c c} 0.4 & 0.3 \\ 0.3 & 0 \end{tabular} \right]$ \end{small} is used to sample an initial edge set. These parameters were selected to create a graph with a branching pattern with few natural triangles. Then, with probability $\rho$ each edge is removed and replaced with a triangle closure by performing a random walk (identical to the technique used in the Transitive Chung Lu model \cite{pfeiffer}). The normal graphs were generated with a $\rho$ of 0.05 while the alternative graphs had a $\rho$ of 0.055. The results are in figure \ref{fig:ROC} (c) and (f). Figures \ref{fig:ROC} (g)-(i) show the effect of changing (g) edges, (h) nodes, or (i) model parameter on transitivity statistics. The zero point on the false positive plots compares graphs of the same size and model which will produce false positives at the p-value rate, while deviating in either direction introduces more false positives. The power in figure \ref{fig:ROC} (i) depends on the deviation in the model parameter. \begin{comment} \begin{figure} \caption{(a) Timeline of student e-mail network in Fall 2011 and early Spring 2012 and (b) Timeline of the Facebook subnetwork consisting of students from March 2007 to March 2008.} \label{fig:realTimeline} \end{figure} \end{comment} \begin{comment} \begin{figure*} \caption{Detection of Anomalies in the Enron e-mail dataset, Purdue e-mail dataset, and Facebook Wall interactions. Filled circles are detections from our proposed statistics, open circles are other methods. } \label{fig:enrontimeline} \end{figure*} \end{comment} \subsection{Semi-Synthetic Data Experiments} \begin{figure} \caption{Diagram of semi-synthetic experiments and the three sets of generated graphs. } \label{semi-synthetic-generation} \end{figure} Although synthetically driven experiments have the advantage of complete control over the network properties of the generated graphs, these experiments give inherently artificial results, and the utility of any conclusions drawn from them depends heavily on how comprehensive the experiments are. To ensure that these results generalize to more realistic scenarios, we also evaluated the statistics using a set of semi-synthetic experiments where the normal and false positive graph examples of 1. -- 3. are sampled from real-world networks and the true positive anomaly examples of 4. are artificially inserted. These experiments show that the proposed consistent statistics are superior at discovering anomalies inserted into real-world data.
\begin{figure} \caption{Semi-synthetic data experimental procedure for statistic $S_k$ using normal probability matrix $P_N$, anomalous probability matrix $P_A$, and $\Delta$ additional edges in False Positive graphs.} \label{fig:semisynthgeneration} \end{figure} The first step in generating the graph sets is to aggregate all graph instances from a dynamic network source into a single graph example, which will become our normal graph source. All normal graph examples are generated from this source graph by first sampling an active node set, obtaining the subgraph over those active nodes, then sampling edges with replacement to create the sample graph. By aggregating all instances over time, we smooth out any variations that occur over the lifespan of the network and obtain the ``average'' behavior of the network to use as our normal examples. False positive examples are created by sampling additional nodes or edges from the same source network. True positive examples are sampled from a separate, alternate source instance which is created by permuting the original source graph in some way. To generate network change anomalies, the alternate source has 5\% of its edges replaced with edges placed uniformly at random; degree distribution anomalies are generated by taking 30\% of the edges of high degree nodes (high degree meaning in the top 50\% of nodes) and assigning them uniformly at random; and transitivity anomalies are generated by performing triangle closures: selecting an initial node, randomly walking two steps, then linking the endpoints of the walk to form a triangle. The semi-synthetic data generation process is shown in figure \ref{semi-synthetic-generation}, and the exact algorithm for generating the data is described in Figure \ref{fig:semisynthgeneration}. The input $P_N$ is created by dividing the aggregated normal graph described above by $|W|$, and the input $P_A$ is created by modifying the aggregated normal graph in one of the ways described above and then dividing by $|W|$. The dataset used for the underlying graph was the University E-mail dataset described in Section \ref{real-data-experiments}; when aggregated, this data forms a graph with 54102 total nodes and 5236591 total messages. Edge false positives are generated by creating graphs with 20k nodes and either 400k or 600k edges, while node false positives are generated by sampling either 20k or 30k nodes and sampling edges equal to 20 times the number of nodes. Sampling edges in proportion to the number of nodes in the node false positive experiment holds the density of the graphs constant. We analyze the performance of the statistics using the same ROC approach as with the synthetic data. Figure \ref{fig:ROCsemiSynth} shows the resulting ROC curves. As with the synthetic experiments, (a)-(c) show mass shift statistics, degree distribution statistics, and transitivity statistics respectively when the false positives are generated with additional edges, while (d)-(f) have false positives generated via additional nodes. The proposed consistent statistics have superior performance in most cases, and none of the competing statistics perform well in both the additional edges and additional nodes scenarios.
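The core of the sampling procedure described above can be sketched as follows. This is an illustrative simplification, assuming a dense aggregated weight matrix and uniform sampling of the active node set; the variable names and the toy source graph are ours and are not part of the actual experimental code.

\begin{small}
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sample_graph(W_source, n_nodes, n_edges):
    # sample an active node set, restrict to its induced subgraph, then
    # resample edges with replacement in proportion to aggregated weight
    N = W_source.shape[0]
    active = rng.choice(N, size=n_nodes, replace=False)
    sub = W_source[np.ix_(active, active)].astype(float)
    iu = np.triu_indices(n_nodes, k=1)
    p = sub[iu] / sub[iu].sum()             # renormalized edge distribution
    counts = rng.multinomial(n_edges, p)    # n_edges draws with replacement
    W = np.zeros((n_nodes, n_nodes))
    W[iu] = counts
    return W + W.T, active

# toy symmetric source graph standing in for the aggregated e-mail network
A = np.triu(rng.integers(0, 5, size=(200, 200)), 1)
W_source = A + A.T
W_sample, active = sample_graph(W_source, n_nodes=50, n_edges=2000)
print(W_sample.sum() / 2)                   # total sampled edge weight
\end{verbatim}
\end{small}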
\begin{figure} \caption{ROC curves on the semi-synthetic dataset with varying alphas.} \label{fig:ROCsemiSynth} \end{figure} \subsection{Real Data Experiments} \label{real-data-experiments} Now let us investigate the types of anomalies found when these statistics are applied to three real-world networks and contrast these events to those found by other detectors. The first dataset is the Enron communication data, a subset of e-mail communications from prominent figures of the Enron corporation (150 individuals, 47088 total messages) with a time step width of one week used in papers such as Priebe et al \cite{priebe}. The second is the University E-mail data, e-mail communications of students from one university in the 2011-2012 school year (54102 individuals, 5236591 total messages), sampled daily and described in detail in the paper by LaFond et al \cite{tlafond}. The third is a Facebook network subset made up of postings to the walls of students in the 2007-2008 school year (444829 individuals, 4171383 total messages), also from the same university and sampled daily. The Facebook dataset was also used in a paper by LaFond \cite{tlafond2} and is described there in more detail. \begin{figure*} \caption{Detected anomalies in Enron corporation e-mail dataset. Filled circles are detections from our proposed statistics, open circles are other methods.} \label{fig:enrontimeline} \end{figure*} \begin{figure*} \caption{Detected anomalies in university student e-mail dataset. } \label{fig:purduetimeline} \end{figure*} \begin{figure*} \caption{Detected anomalies in Facebook wall postings dataset. } \label{fig:facebooktimeline} \end{figure*} \begin{comment} \begin{table}[] \begin{center} \caption{Semi-synthetic experimental results using Pooled E-mail dataset. Normal: 30k nodes, 3 million messages, False Positives: 20k nodes, 2 million messages.} \label{fig:contingencyTable2} \begin{tabular}[l]{l c c | c c c} & False & \multicolumn{3}{c}{True} \\ &Positives & \multicolumn{3}{c}{Positives} \\ & & \textbf{10\%} & \textbf{20\%} & \textbf{30\%} \\ Mass Shift & 0 & 56 & 100 & 100 \\ Triangle Prob. & 29 & 96 & 100 & 100 \\ \hline Edit Dist & 45 & 50 & 62 & 61 \\ Barrat Clust. & 82 & 100 & 100 & 100 \\ \hline Netsimile & 18 & 100 & 100 & 100 \\ \end{tabular} \end{center} \end{table} \begin{table}[] \begin{center} \begin{tabular}[l]{l c c | c c c} & False & \multicolumn{3}{c}{True} \\ &Positives & \multicolumn{3}{c}{Positives} \\ & & \textbf{10\%} & \textbf{20\%} & \textbf{30\%} \\ Mass Shift & 11 & 100 & 100 & 100 \\ Triangle Prob. & 5 & 100 & 100 & 100 \\ \hline Edit Dist & 12 & 50 & 50 & 52 \\ Barrat Clust. & 1 & 100 & 100 & 100 \\ \hline Netsimile & 18 & 100 & 100 & 100 \\ \end{tabular} \end{center} \caption{Semi-synthetic experimental results using Pooled E-mail dataset. Normal: 20k nodes, 2 million messages, False Positives: 20k nodes, 3 million messages.} \label{fig:contingencyTable} \end{table} \end{comment} \begin{comment} \begin{figure} \caption{Breakdown of the cumulative anomaly score on the nodes of the detected networks. (a) Triangle Probability, E-mail network time step 86; (b) Barrat Clustering, E-mail network time step 228.} \label{fig:anomCumulative} \end{figure} \end{comment} \begin{comment} \begin{figure} \caption{(a) Structure accounting for 50\% of the Degree Shift anomaly detected at time steps 351-352 in Facebook network. (b) Structure accounting for 10\% of the Degree Distance anomaly detected at time steps 214-215 in Facebook network. 
} \label{fig:localAnomsDegree} \end{figure} \begin{figure} \caption{(a) Structure accounting for 50\% of the Triangle Probability anomaly detected at time steps 85-86 in E-mail network. (b) Structure accounting for 10\% of the Barrat Clustering anomaly detected at time steps 227-228 in E-mail network. } \label{fig:localAnomsTriangle} \end{figure} \end{comment} Figure \ref{fig:enrontimeline} shows the results of multiple statistic-based detectors when applied to the set of e-mail data from the Enron corporation, including our three proposed statistics, the raw message count, Netsimile, and Deltacon. Time step 143 represents the most significant event in the stream, Jeffrey Skilling's testimony before Congress on February 6, 2002. The detected triangle anomalies at time steps 50-60 coincide with Enron's price manipulation strategy known as ``Death Star,'' which was put into action in May 2000. Other events include the CEO transition from Lay to Skilling in December 2000, the ``asshole'' conference call featured prominently in the book ``The Smartest Guys in the Room,'' and Lay approaching Skilling about resigning. Netsimile has difficulty detecting most of the important events in the Enron timeline. Although it accurately flags the time of the Congressional hearings, the other points flagged, particularly early on, do not correspond to any notable events and are probably false positives due to the artificial sensitivity of the algorithm in very sparse network slices. Deltacon detects a greater range of events than Netsimile but still fails to detect several important events such as the price manipulation and Skilling's attempted resignation. In general it generates detections more frequently in the region between May and December 2001, which is also the region of highest message activity, and fails to generate detections in times with fewer messages. Figure \ref{fig:purduetimeline} shows the detected time steps of the University E-mail dataset. Several major events from the academic school year like the start of the school year and Christmas break are shown. It seems that the consistent statistics flag times closer to holidays and other events compared to other statistics. Unfortunately, as the text content of the messages was unavailable, it is impossible to determine if the detected conversations correspond to specific events based on the dialogues of users. Figure \ref{fig:facebooktimeline} shows the detected events of the Facebook wall data and the explanations for the detected events. Some of the listed events are holidays while others were obtained by investigating the time steps flagged as anomalies; see Section \ref{local-anomaly-decomposition} for an explanation of this process. Some events of interest are: the ``Race to 2k Posts'' where a pair of individuals noticed they were nearing two thousand posts on one of their walls and decided to reach that mark in one night, generating much more traffic between them than usual (over 160 posts); the ``Divorce w/ Third Party'' where a pair of individuals were going through a messy breakup and a mutual friend was cracking jokes and egging them on; and a discussion about Tiger Woods' odds in the 2007 Open Championship. \begin{comment} \begin{figure*} \caption{(a) Mass Shift anomaly, Facebook network times 31-32 (end of spring break). (b) Degree Shift anomaly, Enron times 142-143 (Congressional hearings). (c) Triangle Probability anomaly, University E-mail network times 168-169 (Christmas).
} \label{fig:localAnoms} \end{figure*} \end{comment} \section{Local Anomaly Decomposition} \label{local-anomaly-decomposition} After flagging a time step as anomalous, it is useful to have some indication of what happened in the network at that time to generate the flag. One tool for investigating the flagged time step is local anomaly decomposition, where the network is broken down into subgraphs that contribute the most value to the total statistic score at that time step. For many statistics, like mass shift or triangle probability, which are summations over the edges, nodes, or triplets of the graph, this process is trivial: each component of the summation has an associated anomaly score and the components that provide the most anomaly score are the ones investigated. For others, such as PDD, which cannot be easily decomposed into node and edge contributions, this approach is nearly impossible. Anomaly score decomposition is more useful when the score is skewed rather than uniformly distributed, as it is easier to highlight a concise region that contributes the most towards the anomaly. To demonstrate the decomposition, we applied the statistics to the real-world networks and sorted all of the nodes (for Barrat clustering) or edges (all other statistics) from highest to lowest contribution to the anomaly score sum. From there we selected the components with the highest anomaly score contribution totaling at least 20\% of the log of the anomaly score to be part of the visualized anomaly. We then plotted all of the selected components as well as any adjacent edges and nodes. We investigated the Enron and Facebook datasets as these have names/message content associated with the graphs; the University E-mail dataset has neither, so these graphs are omitted. Figures \ref{fig:enronmasslocal} - \ref{fig:enronbarratlocal} show the local subgraphs reported by the mass shift, triangle probability, graph edit distance, and Barrat clustering respectively. The left subgraph shows activity in the time step immediately prior to the anomaly while the right shows the subgraph during the anomaly. Red nodes and edges are part of the top anomaly contributors while black edges and nodes are merely adjacent; the thickness of the edges corresponds to the edge weight in that time step. Figure \ref{fig:enronmasslocal} shows an unusually large amount of communication between Senior Vice President Richard Shapiro and Government Relations Executive Jeff Dasovich immediately before Lay approached Skilling about resigning as CEO. Figure \ref{fig:enrontrilocal} shows the triangular communication occurring between members of the Enron legal department during the price-fixing strategy in California. Both of these methods find succinct subgraphs to represent the anomalies occurring at these times. Figure \ref{fig:enrongedlocal}, on the other hand, shows graph edit distance reporting nearly the entirety of the network at that time. While this does represent an event (the Congressional hearings), there is no interpretation of the event other than that there were many messages being sent at that time. Barrat clustering identifies the legal department in Figure \ref{fig:enronbarratlocal} but does so at a time with relatively low communication. Barrat clustering normalizes by node degree, which makes it more likely to report triangles with less weight as long as the participating nodes do not communicate with anyone else.
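The selection step itself is simple for any statistic that decomposes into a sum of per-component contributions. The following sketch illustrates the idea for per-edge contributions; for readability it uses a plain fraction-of-total-score threshold in place of the 20\%-of-the-log rule described above, and the contribution values shown are purely illustrative.

\begin{small}
\begin{verbatim}
import numpy as np

def top_contributors(edges, contributions, coverage=0.2):
    # edges sorted from highest to lowest contribution to the anomaly score
    order = np.argsort(contributions)[::-1]
    cum = np.cumsum(contributions[order])
    # smallest prefix of the sorted edges reaching the coverage threshold
    k = np.searchsorted(cum, coverage * contributions.sum()) + 1
    return [edges[i] for i in order[:k]]

# toy example: per-edge contributions to a delta statistic between two steps
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
contributions = np.array([0.50, 0.05, 0.30, 0.15])
print(top_contributors(edges, contributions))
\end{verbatim}
\end{small}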
\begin{figure*} \caption{Subgraph responsible for most of the mass shift anomaly in the Enron network at the weeks of June 25 (before anomaly) and July 2 (during anomaly), 2001 respectively. } \label{fig:enronmasslocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the triangle probability anomaly in the Enron network at the weeks of May 1 (before anomaly) and May 8 (during anomaly), 2000 respectively. } \label{fig:enrontrilocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the graph edit distance anomaly in the Enron network at the weeks of January 28 (before anomaly) and February 4 (during anomaly), 2002 respectively. } \label{fig:enrongedlocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the Barrat clustering anomaly in the Enron network at the weeks of November 1 (before anomaly) and November 8 (during anomaly), 1999 respectively. } \label{fig:enronbarratlocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the mass shift anomaly in the Facebook network at June 1 (before anomaly) and June 2 (during anomaly), 2007 respectively. } \label{fig:facebookmasslocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the triangle probability anomaly in the Facebook network at July 20 (before anomaly) and July 21 (during anomaly), 2007 respectively. } \label{fig:facebooktrilocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the graph edit distance anomaly in the Facebook network at October 15 (before anomaly) and October 16 (during anomaly), 2007 respectively. } \label{fig:facebookgedlocal} \end{figure*} \begin{figure*} \caption{Subgraph responsible for most of the Barrat clustering anomaly in the Facebook network at May 17 (before anomaly) and May 18 (during anomaly), 2007 respectively. } \label{fig:facebookbarratlocal} \end{figure*} Figures \ref{fig:facebookmasslocal} - \ref{fig:facebookbarratlocal} show the local subgraphs found in the Facebook dataset. Figure \ref{fig:facebookmasslocal} shows the event we named the ``Race to 2k Posts;'' at this time a pair of individuals noticed they were closing in on two thousand posts on their walls and decided to reach that goal in one night. The result is a massively higher amount of communication than was typical between the two in prior time steps. Figure \ref{fig:facebooktrilocal} shows the communications occurring during the 2007 Open Championship golf tournament. The three individuals with the most communication were arguing about the odds that Tiger Woods would win the tournament. Graph edit distance, by contrast, identifies no coherent local structure in Figure \ref{fig:facebookgedlocal}. It is likely that this event signifies a global increase in communication rather than a change in the distribution of messages. As the additional edges were distributed throughout the network, the majority of the network has similar per-edge scores, so when looking for the subgraphs that generate the most anomaly score a more or less random chunk of the network is found. Figure \ref{fig:facebookbarratlocal} is the structure found by Barrat clustering; as before, it finds a set of triangular communication with relatively low weights, around 2 -- 4, while the anomaly found by Triangle Probability has about 12 messages per edge. \begin{comment} \section{Related Work} A number of algorithms exist for anomaly detection in time series data, including CUSUM-based and likelihood ratio approaches \cite{tartakovsky, siris, mcculloh2, basseville, kawahara, chen}.
The basic time series algorithms will flag any time steps where deviation from a baseline model is detected. To apply these methods to a time series of networks, each network in the series can be converted to a single value using a network statistic. Another approach to anomaly detection is a model-based detector. Here, testing is done by positing some null model and reporting a detection if the observed data is unlikely under the model \cite{moreno2}. Among the most popular models for hypothesis testing is the ERGM family \cite{snijders4}, which has been applied to a wide variety of domains \cite{yates, faust}. ERGM is an exponential family model which calculates the likelihood of a network example using the weighted sum of network statistics like total edge count or count of triangles. One variant is the dynamic ERGM, used for hypothesis testing in a network stream \cite{desmarais, hanneke, snijders, banks, mcculloh, snijders3}. A standard ERGM model calculates the likelihood for a particular graph configuration using a set of network statistics; a dynamic ERGM simply selects certain temporal network statistics like the edges retained from the previous time step. A dynamic ERGM model can be used with likelihood ratio testing to perform anomaly detection. The ERGM model has several drawbacks, however; it suffers from degeneracy issues when learning \cite{hunter}, and learns inconsistent parameters on subsets of networks \cite{shalizi}. Borgs et al. investigate the behavior of motif-style statistics in the limit as the number of nodes sampled from some graph generative process increases \cite{borgs2006graph}. Similar to our approach, they prove a finite limit that depends only on the properties of the graph generative process; however, they only take the limit with respect to the number of nodes, making the assumption that once two nodes are observed the true edge weight between them is known and not estimated from node interaction samples. \end{comment} \section{Conclusions} In this paper, we have demonstrated that dependence on network edge count hinders the ability of statistics to detect certain changes in dynamic networks. To remedy this, we have introduced the concept of Size Consistency and shown that statistics with this property are less affected by edge count variation. We proposed three Size Consistent network statistics, Mass Shift, Degree Shift, and Triangle Probability, to replace the Graph Edit Distance, Degree Distribution, and Clustering Coefficient statistics. These statistics are provably Size Consistent, and we demonstrated using synthetic and semi-synthetic trials that anomaly detectors using our statistics have superior performance on variable-sized networks. The framework for developing Size Consistent network statistics can be applied to new statistics in the future. We hope that researchers who propose network statistics in the future will make sure to analyze the effects that changing network size has on their proposed statistics and ensure that those statistics meet the Size Consistency requirements. \end{document}
Behavioral modifications by a large-northern herbivore to mitigate warming conditions Jyoti S. Jennewein ORCID: orcid.org/0000-0002-9650-65371, Mark Hebblewhite2, Peter Mahoney3, Sophie Gilbert4, Arjan J. H. Meddens5, Natalie T. Boelman6, Kyle Joly7, Kimberly Jones8, Kalin A. Kellie9, Scott Brainerd10, Lee A. Vierling1 & Jan U. H. Eitel1,11 Temperatures in arctic-boreal regions are increasing rapidly and pose significant challenges to moose (Alces alces), a heat-sensitive large-bodied mammal. Moose act as ecosystem engineers, by regulating forest carbon and structure, below ground nitrogen cycling processes, and predator-prey dynamics. Previous studies showed that during hotter periods, moose displayed stronger selection for wetland habitats, taller and denser forest canopies, and minimized exposure to solar radiation. However, previous studies regarding moose behavioral thermoregulation occurred in Europe or southern moose range in North America. Understanding whether ambient temperature elicits a behavioral response in high-northern latitude moose populations in North America may be increasingly important as these arctic-boreal systems have been warming at a rate two to three times the global mean. We assessed how Alaska moose habitat selection changed as a function of ambient temperature using a step-selection function approach to identify habitat features important for behavioral thermoregulation in summer (June–August). We used Global Positioning System telemetry locations from four populations of Alaska moose (n = 169) from 2008 to 2016. We assessed model fit using the quasi-likelihood under independence criterion and conduction a leave-one-out cross validation. Both male and female moose in all populations increasingly, and nonlinearly, selected for denser canopy cover as ambient temperature increased during summer, where initial increases in the conditional probability of selection were initially sharper then leveled out as canopy density increased above ~ 50%. However, the magnitude of selection response varied by population and sex. In two of the three populations containing both sexes, females demonstrated a stronger selection response for denser canopy at higher temperatures than males. We also observed a stronger selection response in the most southerly and northerly populations compared to populations in the west and central Alaska. The impacts of climate change in arctic-boreal regions increase landscape heterogeneity through processes such as increased wildfire intensity and annual area burned, which may significantly alter the thermal environment available to an animal. Understanding habitat selection related to behavioral thermoregulation is a first step toward identifying areas capable of providing thermal relief for moose and other species impacted by climate change in arctic-boreal regions. Global temperatures are drastically increasing [36], which directly affect animal behavior and fitness [9, 88, 91]. When ambient temperatures rise above an animal's thermal neutral zone, they use physiological and behavioral mechanisms to dissipate heat and mitigate thermal stress. For instance, additional energy may be spent to augment the cardiovascular and respiratory systems enabling evaporative cooling but may also lead to dehydration [16, 54, 73]. Consequentially, increases in ambient temperature may contribute to a negative energy balance within an animal [5, 85, 87]. Energetic requirements of mammals vary by season and traits (e.g., body mass, lactation). 
Summer is an important season for mammals as they need to recover from winter food deficits, lactate and rear young, and store fat [14, 75, 85]. Climate change puts further stress on these important activities, which may, in turn, limit the ability of mammals to meet energetic requirements for reproduction and survival [25, 50, 90]. Recent work suggests that large-bodied mammals respond more strongly to climate change, when compared to smaller-bodied mammals, through contraction or expansion of elevational ranges and also experience increased extinction risk [53]. Moose (Alces alces) are an important, large-bodied mammal vulnerable to increasing temperatures because they are well-adapted to cold climates [73, 76]. Moose also act as ecosystem engineers, by regulating forest carbon and structure, below ground nitrogen cycling processes, and predator-prey dynamics [12, 15, 48, 55]. According to the seminal physiological study by Renecker and Hudson [73], moose reached their upper critical temperature threshold at 14 °C in summer where they increased their heart and respiration rates, while open-mouthed panting began at 20 °C. However, recent works call these thresholds into question and suggest there is no static temperature threshold where free-ranging moose become heat stressed [83, 84]. Similarly, behavioral changes are often observed at temperatures that exceeds the upper critical summer threshold proposed by Renecker and Hudson [73] [11, 56]. Behavioral alterations elicited by changes in temperature influence both resource selection patterns and movement rates. For example, previous studies showed that during hotter periods, moose displayed stronger selection for riparian or wetland habitats [74, 80], taller and denser forest canopies that provide thermal cover [20, 56, 88], and minimized exposure to solar radiation [54]. Additionally, moose may also decrease their activity and movement rates in response to warmer daytime temperatures [58, 80]. Moose thermoregulatory behaviors are indeed a 'hot topic' in applied ecology because of rising temperatures related to climate change and their important ecosystem role (e.g., [56, 58, 80]). However, most previous studies occurred in Europe or the southern end of moose range in North America [50, 56, 88]. Understanding whether ambient temperature elicits a behavioral response in high-northern latitude (i.e., ≥ 60°N) moose populations in North America may be increasingly important as these arctic-boreal systems have been warming at a rate two to three times the global mean [2, 36, 77, 95] and current projections anticipate continued increases in temperature [36, 51]. Thus, it is important to explore how movement patterns of moose, a heat-sensitive large-bodied mammal, are influenced by changes in temperature at the northern extent of their range. Accordingly, our study objective was to assess Alaska moose (Alces alces gigas) habitat selection as a function of ambient temperature. We tested the hypothesis that moose modified resource selection in response to ambient temperature as predicted by physiological models. To accomplish this, we used Global Positioning System (GPS) -telemetry locations from four Alaska moose populations (n = 169 moose; Fig. 1 & Table 1) from 2008 to 2016 that were located in four unique ecoregions [65]. We combined moose GPS locations with remotely sensed products important to thermoregulatory behaviors. 
We analyzed only summer months (June–August) because of their importance in moose life history and because thermal stress is most likely to occur in summer [23, 88]. Each population was analyzed independently and separated into male and female subsets because fine-scale movements vary by sex and local habitat characteristics [41, 43, 49]. We predicted that Alaska moose exhibit a detectable behavioral response to increasing summer temperatures, and, that as temperature increased, moose would select for cooler locations, such as thermal refugia provided through increased canopy cover, areas closer to water, and/or low exposure to solar radiation. Moose (Alces alces gigas) study area locations in four distinct ecoregions of Alaska, USA. In total, 169 moose were included in these analyses (111 females; 58 males) Table 1 Summaries of Alaska moose (Alces alces gigas) Global Positioning System (GPS) datasets by study area. Information on the number of fixes and the fix success rate are specific to summer (June 1 – August 31). The number of clusters for each population-sex partition refer to the unique combination of individual-year, which were used in our conditional logistic regression models as a clustering variable for estimating robust variance estimates using generalized estimating equations All four study areas span a mixture of subarctic and arctic boreal forest vegetation including black spruce (Picea mariana), alders (Alnus spp.), willows (Salix spp.), Alaska birch (Betula neoalaskaa), white spruce (Picea glauca), quaking aspen (Populus tremuloides), and balsam poplar (Populus balsmifera). The upper Koyukuk region located in the Brooks Mountain Range (Fig. 1) is rugged and varies from 500 to 2600 m above sea level [1]. Wildfire is common in this region, which experiences strongly continental climate patterns where summers are short, but temperatures can exceed 30 °C [41]. Average daily summer (June–August) temperature ranged from 7.5 °C to 15 °C from 1986 to 2016 [64]. The Tanana Flats region is located south of Fairbanks, where the alluvial plane from the Alaska Mountain Range slopes northward making meandering rivers and oxbow lakes common [1]. Elevation ranges from 0 to 700 m, however the highest elevations occurred in the northern portion of the Alaska Mountain Range [1]. The Tanana region experiences dry-continental climate, and average daily summer temperature ranged from 11 °C to 19.5 °C from 1986 to 2016 [64]. The Innoko region lies in southwest Alaska and includes a portion of the lower Yukon River. Meandering waterways, oxbow lakes and floods are common in the lowlands while upland areas experience more wildfire disturbance [67]. Elevation varies little (30–850 m) and average daily summer temperatures ranged from 9.5 °C to 17.5 °C from 1989 to 2016 [64]. The Susitna moose range lies south of Alaska Mountain Range, and is characterized by numerous wetlands, hilly moraines, black spruce woodlands, and mountains. Elevation varies widely from 400 to 3500 m. This region is primarily located in temperate-continental climate, with some exposure to temperate coastal climates in the southern portion of the range [1]. Average daily summer temperatures ranged from 11.5 °C to 19 °C from 1988 to 2016 in this region [64]. Moose data All capture protocols and handling protocols adhered to the Alaska Animal Care and Use Committee approval process (#07–11) as well as the Institutional Animal Care and Use Committee Protocol (#09–01). 
Moose in all regions were darted from helicopter (Robison R-44) and injected using carfentanil citrate (Wildnil® Wildlife Pharmaceuticals, Incorporated, Fort Collins, CO) and xylazine hydrochloride (Anaset®; Lloyd Laboratories, Shenandoah, IA). Moose were instrumented with GPS radio-collars with three and a half to eight-hour fix rates (Table 1). Specifically, moose were fitted with the following collars from Telonics Inc. (Telonics, Mesa, AZ): Koyukuk – GW-4780, Tanana –TGW-4780-3, Susitna – TGW-4780-2, Innoko –CLM-340. We used a step-selection function (SSF) to assess moose behavioral responses to changing temperatures. SSF's model habitat selection in a used-available design that accounts for changing availability of resources at any point in time [27, 86]. We aggregated moose datasets to a near eight-hour fix rate to enable regional comparisons of behavior (Table 1). We chose this modeling framework because it allows for assessments of fine-scale habitat selection, and the effect of temperature on large herbivore movement behavior are most pronounced at fine to intermediate spatial and temporal scales [89]. To sample availability, we generated ten-paired available locations based on empirical distributions of an individual's step length and turning angles between sampling intervals, which were estimated using the "ABoVE-NASA" R package [29]. We used conditional-logistic regression (CLR, [35]) in the "survival" R package [82] to compare each used location with the concurrent available locations at the same point in time and space (i.e., one stratum contained one used point and ten randomly generated available points). The equation can be written as: $$ w\ast \left(\mathrm{x}\right)=\frac{\mathit{\exp}\left(\beta 1\mathrm{x}1+\beta 2\mathrm{x}2+\dots +\beta \mathrm{nxn}+e\right)}{1+\mathit{\exp}\left(\beta 1\mathrm{x}1+\beta 2\mathrm{x}2+\dots +\beta \mathrm{nxn}+e\right)} $$ where w*(x), the relative probability of selection, is dependent on habitat covariates X1 through Xn, and their estimated regression coefficients β1 to βn, respectively. Steps with higher w*(x) indicate a greater chance of selection. CLR compares strata (i.e., one used point and ten available points) individually, which enabled us to assess selection of fine-scale habitat features rather than broader-scale landscape characteristics [6]. We did not directly incorporate random effects into our SSF models as the analytical techniques for doing this are sparse and often computationally prohibitive for complex model sets [61]. In our models, we would have a needed to incorporate a random effect of individual for each covariate in the model – the equivalent of random slopes. We believe this would likely have led to convergence issues as our models are already complex (see section regarding temperature interaction terms). Instead, we fit our CLR models with generalized estimating equations (GEE) using a clustering variable of "animal-year" to split the data into statistically independent clusters. This allowed us to account for lack of independence between steps within an individual for a given summer, and provided unbiased (i.e., robust) variance estimates provided there are at least 20 independent clusters and preferably 30 [71]. Our data all had at least 20 unique animal year clusters, and all but one had greater than 30 (Table 1). Habitat covariates We obtained temperature estimates from the North American Regional Reanalysis (NARR) as opposed to weather stations. 
NARR provides a suite of highly-temporally dynamic (eight times daily; 32 km) set of meteorological variables [57]. We annotated NARR temperature estimates using the environmental-data automated track annotation (Env-DATA) system available from Movebank [21]. To ensure accuracy of these temperature estimates, we performed a validation exercise on the two populations of moose which included temperature sensors on their collars (Innoko and Koyukuk). We found a moderate relationship between the two (Supplementary material (S)1; R2 = 0.47–0.58, RMSE = 3.88–4.43 °C). NARR temperature estimates represent an ambient, neighborhood temperature, allowing us to investigate how moose respond to ambient variation in temperature via fine-scale selection for environmental characteristics that are likely to create cooler microclimates. We excluded ambient temperature as a main effect within CLR models because it did not vary within strata, and only included it as an interaction term with other covariates. Moose may move to areas that provide thermal cover when temperatures increase such as denser canopied forests [56]. In our models, a United States Geological Survey (USGS) percent canopy product for 2010 (30 m cell size, [31]) was used as an index of thermal cover. Moose use canopy cover for purposes other than thermoregulation such as predator avoidance [85]. However, by considering the interaction between temperature and canopy cover, it is likely that we captured behavioral thermoregulation in our models. We assessed the importance of water habitats in behavioral thermoregulation using a distance-to-water covariate. We estimated this covariate from Pekel et al.'s [68] percent global surface water map, which quantified global surface water from 1984 to 2015. We used the R "raster" package [34] to estimate the Euclidian distance of the nearest water pixel (30 m cell size) from a given moose location. Elevation estimations (in meters) were extracted from the ArcticDEM (version 6, 5 m cell size [69];). The solar radiation index (SRI [46];) was estimated mathematically as a function of latitude, aspect, and slope using the "RSAGA" package [8] – which were derived from the ArcticDEM, with the resultant values representing the hourly extraterrestrial radiation striking an arbitrarily oriented surface [46]. We chose to consider only continuous covariates as predictors to represent habitat as dynamic and continuous (sensu [17]). Covariates were standardized by dividing them by two times their standard deviation [28], allowing coefficients to be directly comparable across models. Collinearity was assessed using Pearson correlation coefficients, if correlation coefficients between predictors exceeded 0.70 we excluded collinear metrics from being present in the same model [22]. Two-way temperature interactions We considered both linear and nonlinear interactions between habitat covariates and ambient temperature as nonlinear processes are widespread in ecology particularly in response to climate change [13, 92]. In total, three model variants for each population-sex partition were considered: (1) a base model that included habitat covariates as described above with no interaction terms or consideration of temperature, (2) linear interaction models where habitat covariates sequentially interacted with temperature linearly, and (3) spline interaction models where habitat covariates sequentially interacted nonlinearly with temperature using natural cubic splines. 
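Before turning to the knot constraints and model evaluation, the following Python sketch (illustrative only; the study itself used R packages such as "survival" and "splines") pulls together three steps described above: standardizing covariates by twice their standard deviation, screening predictor pairs with Pearson |r| > 0.70, and forming the within-stratum selection probabilities that conditional logistic regression models for one used and ten available steps. Covariate names and coefficient values are hypothetical, and in practice the standardization and collinearity screen operate on the full data set rather than a single toy stratum.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical covariates for one stratum: row 0 is the used step, rows 1-10
# are the paired available steps. Names stand in for the rasters described above.
stratum = pd.DataFrame({
    "pct_canopy":    rng.uniform(0, 100, 11),
    "dist_to_water": rng.exponential(500, 11),   # metres
    "elevation":     rng.uniform(100, 900, 11),  # metres
    "sri":           rng.normal(0.5, 0.1, 11),
})

# 1) Standardize by dividing by two standard deviations, so that coefficients
#    are directly comparable across covariates and models (Gelman 2008).
stratum_std = stratum / (2.0 * stratum.std())

# 2) Collinearity screen: flag predictor pairs with |Pearson r| > 0.70 so that
#    one member of each collinear pair can be excluded from the same model.
corr = stratum_std.corr(method="pearson")
collinear_pairs = [
    (a, b) for i, a in enumerate(corr.columns) for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > 0.70
]

# 3) Within-stratum selection probabilities. With hypothetical coefficients,
#    each candidate step gets weight exp(beta . x); conditional logistic
#    regression models the probability that the used step (row 0) is the one
#    selected, i.e. its weight divided by the sum of weights in the stratum.
beta = np.array([1.2, -0.4, -0.1, -0.2])          # hypothetical coefficients
scores = stratum_std.to_numpy() @ beta
weights = np.exp(scores - scores.max())           # stabilized exponentials
p_used = weights[0] / weights.sum()
print(collinear_pairs, round(float(p_used), 3))
```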
Because nonlinear terms are at risk of overfitting models, we constrained any nonlinear relationships explored in the spline interactions to two or three knots in CLR models using the "splines" package [72]. Habitat selection model evaluation and validation We evaluated model fit for each population-sex partition using the quasi-likelihood under independence criterion (QIC; [66]) because it is well suited for case-control models [19]. Finally, the predictive ability of model variants was assessed using leave-one-out cross validation (LOOCV), a k-fold cross validation variant [7] in which each individual animal is sequentially left out and predicted based on the remaining data. Mean Spearman rank coefficients were used to determine the predictive ability of model variants. For each population-sex partition, the model with the highest correlation coefficients from LOOCV and the lowest QIC was considered the best. All spatial processing and statistical analyses were conducted in the statistical software R version 3.6.1 [72]. In total, seven base, 28 linear interaction, and 28 spline interaction models were estimated. For the sake of parsimony, only the most biologically significant results are presented and summarized by sex and population. Elevation was collinear with distance-to-water in the Innoko population; we retained the latter because of its known importance in moose ecology [74, 80]. In all but one case (Koyukuk males, S2), spline-based models where percent canopy interacted with temperature outperformed linear interaction and base models and are thus the only models discussed (Tables 2 and 3). In contrast to the strong habitat selection responses of moose for canopy cover, we did not find evidence for other behavioral thermoregulation strategies. For example, we found no support that Alaska moose altered resource selection with increasing summer temperatures in response to topography (i.e., more northerly, cooler slopes), elevation (with the exception of one population, S2), or hydrology (i.e., by selecting to be closer to water). Table 2 Model evaluation (QIC) and cross validation (LOOCV) for female moose organized by population. Base models contain no temperature covariates, while spline models incorporate nonlinear interactions between a given covariate and ambient temperature. In this case, "Spline %can2" refers to percent canopy interacted with ambient temperature with two spline segments, while "Spline %can3" refers to percent canopy interacted with ambient temperature with three spline segments. Decreases in QIC indicate a better model fit while increases in LOOCV indicate more predictive ability Table 3 Model evaluation (QIC) and cross validation (LOOCV) for male moose organized by population. See additional descriptors in Table 2 The best-fit spline models across all four populations occurred when percent canopy interacted with temperature using two to three knots. These spline interaction models had significant improvements in model fit compared to both the base models (ΔQIC = − 108 to − 284; Table 2) and the linear interaction models (not shown). Cross validation scores for spline interaction models experienced small to moderate improvements when compared to the base model (ΔLOOCV = + 1% to + 10%; Table 3). In summer, female moose in all four regions selected for increased canopy cover nonlinearly as temperature increased (Fig. 2; S3).
However, the magnitude of the selection response to thermal cover was most pronounced in the most southerly region (Susitna; β%canopy1 = 33.90, p < 0.001; β%canopy2 = 20.09, p < 0.001; Table 4) as well as the most northerly region (Koyukuk; β%canopy1 = 24.91, p < 0.001; β%canopy2 = 20. 03, p < 0.001). Although the effect of canopy cover was reduced in both the Innoko moose (β%canopy1 = 14.82, p < 0.001; β%canopy2 = 9.01, p < 0.001) and the Tanana moose (β%canopy1 = 4.71, p < 0.001; β%canopy2 = 8.97, β%canopy3 = 7.70, p < 0.001), both populations still revealed highly statistically significant results indicating female moose selected nonlinearly for increased canopy cover as temperature increased. Conditional probability of selection of spline-based thermal cover as a function of temperature for Alaskan female moose by region in summer months (June–August). We used natural splines with two to three degrees of freedom to represent the relationship between canopy cover and temperature. The probability of selection of denser canopy increased significantly with temperature during summer for all four regions, where red lines indicated the 90% temperature percentiles of experienced temperature and the blue lines indicate the 10% temperature percentiles experienced temperature by region. Shaded bands represent a 95% confidence interval. Plots were created in the 'ggplot2' R package [94] Table 4 Best habitat selection models by population for female moose (Alces alces gigas) in Alaska from the step-selection function analysis. The best models across all four populations occurred when percent canopy interacted with temperature nonlinearly and are presented here. Natural spline (sp) predictors, where percent canopy interacted with temperature, have coefficients estimated for each line segment. Therefore, numbers one through three in the spline predictor terms represent an individual line segment. Only one of four populations (Tanana) has a third set of coefficients. In the Innoko population, elevation was collinear with distance-to-water and was thus excluded. All predictors were standardized by dividing by two times their standard deviation, making coefficients directly comparable. Robust standard errors are reported Female moose in the Koyukuk and Susitna regions also showed an increased affinity for water demonstrated in the significant negative beta coefficients for the "distance-to-water" predictor (Table 4), suggesting that moose in these regions preferred to be closer to water. Additionally, we observed additional selection behaviors in the Innoko and Susitna female moose. Female moose in the Innoko population showed an avoidance of areas of high solar radiation (βSRI = − 0.18, p < 0.001), while females in the Susitna population showed an avoidance of higher elevation locations (βelevation = − 1.21, p < 0.001), but these results were independent of temperature. For males, the best fit spline models in the Susitna and Innoko populations were also from percent canopy interacted with temperature (ΔQIC = −142 and − 97 respectively; Table 3). For the Koyukuk males, the best fit spline model came from elevation interacted with temperature (S2), but males in this region also saw improved model fit from percent canopy interacted with temperature (ΔQIC = − 54). Cross validation scores for spline interaction models (percent canopy interacted with temperature) in all three male populations experienced small to moderate increases when compared to the base model (ΔLOOCV = + 3% to + 6%). 
Male moose in all three populations (no males were collared in the Tanana population, see Table 1) selected for increased canopy cover as temperature increased (Fig. 3; S3). However, like with the females, the response to selection of thermal cover was most pronounced in the most northerly region (Koyukuk; β%canopy1 = 27.84, p < 0.001; β%canopy2 = 24.30, p < 0.001; Table 5) as well as the most southerly region (Susitna; β%canopy1 = 22.51, p < 0.001; β%canopy2 = 14.71, p < 0.001). The effect of canopy cover was reduced in the Innoko males (β%canopy1 = 13.02, p < 0.001; β%canopy2 = 8.50, p < 0.001), yet the results still revealed highly statistically significant results indicating moose selected for increased canopy cover as temperature increased. Conditional probability of selection of spline-based thermal cover as a function of temperature for Alaskan male moose by region in summer months (June–August). We used natural splines with two to three degrees of freedom to represent the relationship between canopy cover and temperature. The probability of selection of denser canopy increased significantly with temperature during summer for all four regions, where red lines indicated the 90% temperature percentiles of experienced temperature and the blue lines indicate the 10% temperature percentiles experienced temperature by region. Shaded bands represent a 95% confidence interval. Plots were created in the 'ggplot2' R package [94] Table 5 Best habitat selection models for male Alaska moose from the step-selection function analysis. Natural spline (sp) predictors, where percent canopy interacted with temperature, have coefficients estimated for each line segment. Numbers one and two in the spline predictors represent an individual line segment. All three populations had temperature-canopy interactions with two-line segments. In the Innoko population, elevation was collinear with distance-to-water and was thus excluded. All predictors were standardized by dividing by two times their standard deviation. Robust standard errors are reported Additionally, male moose in the Susitna population showed increased selection of locations closer to water and, like their female counterparts, avoided areas of higher elevation (βelevation = − 1.11, p < 0.001). Similarly, Innoko males showed avoidance for areas with increased topographical solar radiation exposure (βSRI = − 0.12, p < 0.001), but these selection behaviors were independent of temperature. Our results demonstrate that moose at the northern extent of their range altered habitat selection patterns in response to temperature. Across all populations and sexes, moose selected for denser canopy cover as temperature increased, which is consistent with previous studies [20, 56, 88], and our prediction that moose would select cooler locations as ambient temperature increased. Magnitude of selection response to temperature varied by sex and population Our habitat selection results also demonstrated that the magnitude of moose selection for dense canopy cover at higher temperatures varied between populations and sexes (Figs. 2 and 3; S2 and S3; Tables 4 and 5). In two (Innoko and Susitna) of the three populations containing both male and female moose, females demonstrated a stronger selection response for denser canopy at higher temperatures than males. This may be linked to calving and nursing demands on female moose [79] who may more strongly select for denser canopy cover to avoid spending calories to thermoregulate using physiological mechanisms. 
However, we were unable to distinguish between females with and without calves in this study. This likely influenced our results as females accompanied by their calves tend to increase selection for areas that provide cover for predator avoidance [24, 43] and drastically change their movements both before and after parturition [81]. We also considered whether population differences in selection strength may be related to the availability of thermal cover between regions (i.e., a functional response) where animals alter their habitat selection based on habitat availability [3, 63]. However, our results cannot entirely be explained by a functional response in habitat selection for thermal cover. For example, the Koyukuk moose showed strong selection for thermal cover as temperature increased but also had the second lowest available canopy cover regionally (37.6%; S4). Thus, we do not think a functional response per se explains regional differences in the selection strength, rather we anticipate that it is likely a combination of environmental factors interacting in complex ways to create a suite of unique habitat differences across regions (S5). However, to fully understand functional responses in habitat selection one must also consider the different spatial scales of selection [38, 63], as such responses are often evaluated at the landscape or home range scale [30, 32, 33, 60]. Thus, the lack of functional response of moose to canopy cover in our study may be related to the fine-scale nature of our analytical framework and not an absence of a functional response of moose to thermal cover. Implications of habitat selection results within a changing climate The consistent patterns of resource selection for thermal refugia under increasing temperatures found in this study may have important implications for moose resilience in arctic-boreal landscapes responding to increased temperatures from global climate change. For instance, landscape changes associated with wildfire are generally reducing canopy cover from coniferous species, and annual area burned in North American boreal systems doubled in the last half century [44], which is strongly linked to climate and annual weather patterns [37, 45]. Vegetation in interior Alaska now has less older spruce forests, the most common thermal refugia by moose, and a greater proportion of early successional vegetation than before 1990 [51]. Burn severity also plays a major role in how boreal forests recover after wildfire [26], where areas of low burn severity in black spruce stands tend to undergo self-replacement succession [39] and areas of high burn severity favor relay succession of deciduous species over black spruce because of increased exposure of mineral soil and reduced seedbank availability [40, 78]. For moose, such changes in habitat structure may provide new forage resources [4, 47], but also may limit the available thermal refugia needed for behavioral thermoregulation immediately after disturbance events prior to vegetation regeneration, or in late spring (March–April) prior to budburst when moose have not yet shed their winter coats. Limitations and future work Our results showed moose did not select for areas closer to water as temperature increased, which differ from previous observations where moose sought wetland or riparian areas to thermoregulate [76, 80]. We believe our results differed due to the spatial resolution (30 m grid cell size) used to represent this behavioral strategy. 
This restricted detection of smaller aquatic microhabitats important to moose. Unfortunately, no finer-scale map currently exists, which limited our ability to study selection for aquatic microhabitats that may be especially relevant in flatter, more swamp-like areas such as the Tanana and Innoko regions. Based on our results and the limitations encountered, we make three broad recommendations for future work regarding animal behavioral thermoregulation. First, future work should investigate the vulnerability and resilience of arctic-boreal animals to structural habitat changes as forage resources increase and thermal cover decreases (e.g., [52, 88]). For example, recent work on Alpine ibex (Capra ibex) – another heat-sensitive ungulate – indicates that the male ibex response to minimize heat stress comes at the expense of optimal foraging [9]. Unfortunately, we did not have a detailed forage quality or biomass model calibrated for our study areas, and we hesitated to use categorical land cover maps because of criticisms regarding their use [17]. In Alaska, there is not a wide distinction between shrub classes in land cover maps that would enable us to determine whether selected shrub habitats correspond to palatable species and foraging behavior. For instance, "shrub" in most vegetative classifications does not distinguish between species that provide both shade and forage (Salicaceae, Betula neoalaskana) and shade-only species (Alnus, B. nana), which is critical for parsing selection behavior. Moose maximize energy intake in the hottest parts of summer, so selection for forage biomass and quality plausibly overrides thermal stress and predation risk for a time. However, we were unable to directly assess this tradeoff due to data limitations. Second, we suggest testing for differences in female selection and movement relative to the presence or absence of offspring. Such a distinction would connect nicely to calls to link behavior and movement to population outcomes [10, 59], especially when considering the thermal environment, as survival and fitness often depend on the availability of suitable habitat to buffer against thermal extremes in a landscape [25]. Finally, a critical next step is to evaluate how habitat selection under thermal stress impacts individual fitness and population dynamics, as temperature plays an important role in limiting fecundity in other mammals [18, 93] including moose [50, 62]. This is especially important as population responses to climate change can vary dramatically. For instance, Joly et al. [42] found that the influence of climate on caribou herds in Alaska was not uniform; instead, western populations increased in size while northwestern populations declined as a result of intensity changes in the Pacific Decadal Oscillation. Similarly, using detailed demographic information for caribou (Rangifer tarandus), red deer (Cervus elaphus), and elk (C. canadensis) across the Northern Hemisphere, Post et al. [70] showed that different population responses to climate varied in both direction and magnitude. The impacts of climate change in arctic-boreal regions increase landscape heterogeneity through processes such as increased wildfire intensity and area burned, which can significantly alter the thermal environment available to an animal. Despite recognizing the importance of thermal conditions to animals, there is a distinct lack of research on how animals might respond to climate-driven changes in thermal refugia.
Our regional assessment provides insight into how Alaska moose may respond to changes in ambient temperature, where statewide annual temperatures are averaging an increase of 0.4 °C per decade and summer temperatures are projected to increase 2–5 °C by midcentury [51]. Understanding habitat selection and movement patterns related to behavioral thermoregulation is a first step toward identifying areas capable of providing thermal relief for moose and other species impacted by climate change. The GPS-telemetry data that support the findings of this study are owned by the Alaska Department of Fish and Game, the National Park Service, and the Bureau of Land Management, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. ABoVE: Arctic-boreal vulnerability experiment ADFG: Alaska Department of Fish and Game AMAP: Arctic Monitoring and Assessment Programme CLR: Conditional-logistic regression Env-DATA: Environmental-data automated track annotation GEE: Generalized estimating equations IPCC: Intergovernmental Panel on Climate Change LOOCV: Leave-one-out cross validation NARR: North American regional reanalysis NASA: National Aeronautics and Space Administration NOAA: National Oceanic and Atmospheric Administration QIC: Quasi-likelihood under independence criterion USGS: United States Geological Survey RMSE: Root mean square error RSAGA: R System for Automated Geospatial Analysis Sp: Natural spline SRI: Solar radiation index SSF: Step selection function Alaska Department of Fish and Game (ADFG). Our wealth maintained: a strategy for conserving Alaska's diverse wildlife and fish resources. Juneau: Alaska Department of Fish and Game; 2006. p. xviii+824. https://www.adfg.alaska.gov/static/species/wildlife_action_plan/cwcs_full_document.pdf. Arctic Monitoring and Assessment Programme (AMAP). Snow, water, ice, and permafrost in the Arctic: summary for policy-makers. Oslo; 2017. Retrieved from www.amap.no/swipa. Arthur SM, Manly BFJ, Garner GW. Assessing habitat selection when availability changes. Ecology. 1996;77(1):215–27. Beck PSA, Goetz SJ, Mack MC, Alexander HD, Jin Y, Randerson JT, et al. The impacts and implications of an intensifying fire regime on Alaskan boreal forest composition and albedo. Glob Chang Biol. 2011;17(9):2853–66. https://doi.org/10.1111/j.1365-2486.2011.02412.x. Bourgoin G, Garel M, Blanchard P, Dubray D, Maillard D, Gaillard JM. Daily responses of mouflon (Ovis gmelini musimon × Ovis sp.) activity to summer climatic conditions. NRC Research Press. 2011;89(9):765–73. https://doi.org/10.1139/Z11-046. Boyce MS. Scale for resource selection functions. Divers Distrib. 2006;12(3):269–76. https://doi.org/10.1111/j.1366-9516.2006.00243.x. Boyce MS, Vernier PR, Nielsen SE, Schmiegelow FKA. Evaluating resource selection functions. Ecol Model. 2002;157(2–3):281–300. Brenning A. Statistical geocomputing combining R and SAGA: the example of landslide susceptibility analysis with generalized additive models. In: Boehner J, Blaschke T, Montanarella L, editors. SAGA - seconds out (= hamburger Beitraege zur Physischen Geographie und Landschaftsoekologie), vol. 19; 2008. p. 23–32. Brivio F, Zurmühl M, Grignolio S, Von Hardenberg J, Apollonio M, Ciuti S. Forecasting the response to global warming in a heat-sensitive species. Sci Rep. 2019;9(3048):1–16. https://doi.org/10.1038/s41598-019-39450-5. Brodie JF, Post ES, Doak DF. Wildlife conservation in a changing climate. Chicago: University of Chicago Press; 2012. Broders HG, Coombs AB, Mccarron JR. Ecothermic responses of moose (Alces alces) to thermoregulatory stress on mainland Nova Scotia. Alces.
2012;48:53–61. Bump JK, Webster CR, Vucetich JA, Rolf O, Shields JM, Powers MD. Ungulate carcasses perforate ecological filters and create biogeochemical hotspots in forest herbaceous layers allowing trees a competitive advantage. Ecosystems. 2009;12(6):996–1007. https://doi.org/10.1007/s10021-009-9274-0. Burkett VR, Wilcox DA, Stottlemyer R, Barrow W, Fagre D, Baron J, et al. Nonlinear dynamics in ecosystem response to climatic change: case studies and policy implications. Ecol Complex. 2005;2(4):357–94. https://doi.org/10.1016/j.ecocom.2005.04.010. Cameron RD, Smith T, Fancy SG, Gerhart KL, White RG. Calving success of female caribou in relation to body weight. Can J Zool. 1993;71(3):480–6. Christie KS, Ruess RW, Lindberg MS, Mulder CP. Herbivores influence the growth, reproduction, and morphology of a widespread Arctic willow. PLoS One. 2014;9(7):1–9. https://doi.org/10.1371/journal.pone.0101716. Clarke A, Rothery P. Scaling of body temperature in mammals and birds. Funct Ecol. 2008;22(1):58–67. https://doi.org/10.1111/j.1365-2435.2007.01341.x. Coops NC, Wulder MA. Breaking the habit(at). Trends Ecol Evol. 2019;34(7):585–7. https://doi.org/10.1016/j.tree.2019.04.013. Corlatti L, Gugiatti A, Ferrari N, Formenti N, Trogu T, Pedrotti L. The cooler the better? Indirect effect of spring–summer temperature on fecundity in a capital breeder. Ecosphere. 2018;9(6):1–13. https://doi.org/10.1002/ecs2.2326. Craiu RV, Duchesne T, Fortin D. Inference methods for the conditional logistic regression model with longitudinal data. Biom J. 2008;50(1):97–109. Demarchi MW, Bunnell FL. Forest cover selection and activity of cow moose in summer. Acta Theriol. 1995;4(1):23–36. Dodge S, Bohrer G, Weinzierl R, Davidson S, Kays R, Douglas D, et al. The environmental-DATA automated track annotation (Env-DATA) system: linking animal tracks with environmental data. Movement Ecology. 2013;1(1):3. https://doi.org/10.1186/2051-3933-1-3. Dormann CF, Elith J, Bacher S, Buchmann C, Carl G, Carr G, et al. Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography. 2013;36(1):27–46. https://doi.org/10.1111/j.1600-0587.2012.07348.x. Dussault C, Ouellet J-P, Courtois R, Huot J, Breton L, Larochelle J. Behavioural responses of moose to thermal conditions in the boreal forest. Ecoscience. 2004;11(3):321–8. Dussault C, Ouellet J, Courtois R, Huot J, Breton L, Jolicoeur H. Linking moose habitat selection to limiting factors. Ecography. 2005;28(5):619–28. Elmore RD, Carroll JM, Tanner EP, Hovick TJ, Grisham BA, Fuhlendorf SD, et al. Implications of the thermal environment for terrestrial wildlife management. Wildl Soc Bull. 2017;41(2):183–93. https://doi.org/10.1002/wsb.772. Epting J, Verbyla D. Landscape-level interactions of prefire vegetation , burn severity, and postfire vegetation over a 16-year period in interior Alaska. Can J For Res. 2005;35(6):1367–77. https://doi.org/10.1139/X05-060. Fortin D, Beyer HL, Boyce MS, Smith DW, Duchesne T, Mao JS. Wolves influence elk movements: behavior shapes a trophic cascade in Yellowstone National Park. Ecology. 2005;86(5):1320–30. Gelman A. Scaling regression inputs by dividing by two standard deviations. Stat Med. 2008;27(15):2865–73. https://doi.org/10.1002/sim.3107. Gurarie E, Mahoney P, LaPoint S, Davidson S. Above: functions and methods for the animals on the move project of the Arctic boreal vulnerability experiment (ABoVE - NASA). R package version 0.11; 2018. Hansen BB, Herfindal I, Aanes R, Sæther B-E, Henriksen S. 
Functional response in habitat selection and the tradeoffs between foraging niche components in a large herbivore. Nordic Society Oikos. 2009;118(6):859–72. Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, et al. High-resolution global maps of forest cover change. Science. 2013;342(6160):850–3. https://doi.org/10.1126/science.1244693. Hayes RD, Harestad AS. Wolf functional response and regulation of moose in the Yukon. Can J Zool. 2000;78(1):60–6. Hebblewhite M, Merrill E. Modelling wildlife-human relationships for social species with mixed-effects resource selection models. J Appl Ecol. 2008;45(3):834–44. https://doi.org/10.1111/j.1365-2664.2008.01466.x. Hijmans RJ. Raster: geographic data analysis and modeling. R package version 3.0–2; 2019. https://CRAN.R-project.org/package=raster. Hosmer DW, Lemeshow S. Applied logistic regression. 2nd ed. New York: Wiley; 2000. Intergovernmental Panel on Climate Change (IPCC). In: Core Writing Team, Pachauri RK, Meyer LA, editors. Climate Change 2014: Synthesis Report. Contribution of working groups I, II and III to the fifth assessment report of the intergovernmental panel on climate change. Geneva: IPCC; 2014. p. 151. Johnson EA. Fire and vegetation dynamics: studies from the north American boreal forest. New York: Cambridge University Press; 1996. Johnson DH. The comparison of usage and availability measurements for evaluating resource preference. Ecology. 1980;61(1):65–71. Johnstone JF, Chapin FSIII. Fire interval effects on successional trajectory in boreal forests of Northwest Canada. Ecosystems. 2006;9(2):268–77. https://doi.org/10.1007/S10021-005-0061-2. Johnstone JF, Hollingsworth TN, Chapin FSIII, Mack MC. Changes in fire regime break the legacy lock on successional trajectories in Alaskan boreal forest. Glob Chang Biol. 2010;16(4):1281–95. https://doi.org/10.1111/j.1365-2486.2009.02051.x. Joly K, Craig T, Sorum MS, McMillan JS, Spindler MA. Variation in fine-scale movements of moose in the upper Koyukuk River drainage, northcentral Alaska. Alces. 2015;51:97–105. Joly K, Klein DR, Verbyla DL, Rupp TS, Chapin FS III. Linkages between large-scale climate patterns and the dynamics of Arctic caribou populations. Ecography. 2011;34(2):345–52. https://doi.org/10.1111/j.1600-0587.2010.06377.x. Joly K, Sorum MS, Craig T, Julianus EL. The effects of sex, terrain, wildfire, winter severity, and maternal status on habitat selection by moose in north-Central Alaska. Alces. 2016;52:101–15. Kasischke ES, Turetsky MR. Recent changes in the fire regime across the north American boreal region — spatial and temporal patterns of burning across Canada and Alaska. Geophys Res Lett. 2006;33(9). https://doi.org/10.1029/2006GL025677. Kasischke ES, Verbyla DL, Rupp TS, McGuire AD, Murphy KA, Jandt R, et al. Alaska's changing fire regime — implications for the vulnerability of its boreal forests 1. Candian J Forest Res. 2010;40(7):1313–24. https://doi.org/10.1139/X10-098. Keating KA, Gogan PJP, Vore JM, Irby L. A simple solar radiation index for wildlife habitat studies. J Wildl Manag. 2007;71(4):1344–8. https://doi.org/10.2193/2006-359. Kelly R, Chipman ML, Higuera PE, Stefanova I, Brubaker LB, Sheng F. Recent burning of boreal forests exceeds fire regime limits of the past 10,000 years. Proc Natl Acad Sci. 2013;110(32):13055–60. https://doi.org/10.1073/pnas.1305069110. Kielland K, Bryant JP. Moose herbivory in taiga: effects on biogeochemistry and vegetation dynamics in primary succession. Oikos. 1998;82(2):377–83. 
Leblond M, Dussault C, Ouellet JP. What drives fine-scale movements of large herbivores? A case study using moose. Ecography. 2010;33(6):1102–12. https://doi.org/10.1111/j.1600-0587.2009.06104.x. Lenarz MS, Nelson ME, Schrage MW, Edwards AJ. Temperature mediated moose survival in northeastern Minnesota. J Wildl Manag. 2009;73(4):503–10. https://doi.org/10.2193/2008-265. Markon C, Gray S, Berman M, Eerkes-Medrano L, Hennessy T, Huntington H, et al. Alaska. In: Reidmiller DR, Avery CW, Easterling DR, Kunkel KE, Lewis KLM, Maycock TK, Stewart BC, editors. Impacts, risks, and adaptation in the United States: fourth National Climate Assessment, volume II. Washington, DC: US Global Change Research Program; 2018. p. 11–85–1241. Mason TH, Brivio F, Stephens PA, Apollonio M, Grignolio S. The behavioral trade-off between thermoregulation and foraging in a heatsensitive species. Behav Ecol. 2017;28(3):908–18. McCain CM, King SRB. Body size and activity times mediate mammalian responses to climate change. Glob Chang Biol. 2014;20(6):1760–9. https://doi.org/10.1111/gcb.12499. McCann NP, Moen RA, Harris TR. Warm-season heat stress in moose (Alces alces). Can J Zool. 2013;91(12):893–8 Retrieved from http://www.nrcresearchpress.com/doi/abs/10.1139/cjz-2013-0175. McLaren BE, Peterson RO. Wolves, moose, and tree rings on isle Royale. Science. 1994;266(5190):1555–8. Melin M, Matala J, Mehtätalo L, Tiilikainen R, Tikkanen OP, Maltamo M, et al. Moose (Alces alces) reacts to high summer temperatures by utilizing thermal shelters in boreal forests - an analysis based on airborne laser scanning of the canopy structure at moose locations. Glob Chang Biol. 2014;20(4):1115–25. https://doi.org/10.1111/gcb.12405. Mesinger FM, DiMego G, Kalnay E, Mitchell K, Shafran PC, Ebiuzaki W, et al. North american regional reanalysis. Am Meterological Soc. 2006;87(3):343–60. https://doi.org/10.1175/BAMS-87-3-343. Montgomery RA, Redilla KM, Moll RJ, Van Moorter B, Rolandsen CM, Millspaugh JJ, et al. Movement modeling reveals the complex nature of the response of moose to ambient temperatures during summer. J Mammal. 2019;100(1):169–77. https://doi.org/10.1093/jmammal/gyy185. Morales JM, Moorcroft PR, Matthiopoulos J, Frair JL, Kie JG, Powell RA, et al. Building the bridge between animal movement and population dynamics. Philos Transact Royal Society B: Biol Sci. 2010;365(1550):2289–301. https://doi.org/10.1098/rstb.2010.0082. Moreau G, Fortin D, Couturier S, Duchesne T. Multi-level functional responses for wildlife conservation: the case of threatened caribou in managed boreal forests. J Appl Ecol. 2012;49(3):611–20. https://doi.org/10.1111/j.1365-2664.2012.02134.x. Muff S, Signer J, Fieberg J. Accounting for individual-specific variation in habitat-selection studies: efficient estimation of mixed-effects models using Bayesian or frequentist computation. J Anim Ecol. 2020;89(1):80–92. https://doi.org/10.1111/1365-2656.13087. Murray DL, Cox EW, Ballard WB, Whitlaw HA, Lenarz MS, Custer TW, et al. Pathogens, nutritional deficiency, and climate influences on a declining moose population. Wildl Monogr. 2006;166:1), 1–30. Mysterud A, Ims R. Functional responses in habitat use: availability influences relative use in trade-off situations. Ecology. 1998;79(4):1435–41. https://doi.org/10.2307/176754. National Oceanic and Atmospheric Administration (NOAA). National Centers for environmental information, temperature summaries; 2019. [FIPS:02]. Retrieved from https://www.ncdc.noaa.gov/cdo-web/search, [Accessed 1/6/2020]. 
Nowacki GJ, Spencer P, Fleming M, Jorgenson T. Unified ecoregions of Alaska, U.S. Geol Surv Open File Rep. 2003. p. 02–297 (map). https://pubs.er.usgs.gov/publication/ofr2002297. Pan W. Akaike's information criterion in generalized estimating equations. Biometrics. 2001;57(1):120–5. Paragi TF, Kellie KA, Peirce JM, Warren MJ. Movements and Sightability of moose in game management unit 21E. Juneau: Alaska Department of Fish and Game; 2017. Pekel JF, Cottam A, Gorelick N, Belward AS. High-resolution mapping of global surface water and its long-term changes. Nature. 2016;540(7633):418–22. https://doi.org/10.1038/nature20584. Porter, Claire, Morin, Paul; Howat, Ian; Noh, Myoung-Jon; Bates, Brian; Peterman, Kenneth; Keesey, Scott; Schlenk, Matthew; Gardiner, Judith; Tomko, Karen; Willis, Michael; Kelleher, Cole; Cloutier, Michael; Husby, Eric; Foga, Steven; Nakamura, Hitomi; Platson, Melisa; Wethington, Michael, Jr.; Williamson, Cathleen; Bauer, Gregory; Enos, Jeremy; Arnold, Galen; Kramer, William; Becker, Peter; Doshi, Abhijit; D'Souza, Cristelle; Cummens, Pat; Laurier, Fabien; Bojesen, Mikkel, 2018, "ArcticDEM", https://doi.org/10.7910/DVN/OHHUKH, Harvard Dataverse, V1, 2018, [Accessed 10/1/2018]. Post E, Brodie J, Hebblewhite M, Anders AD, Maier JAK, Wilmers CC. Global population dynamics and hot spots of response to climate change. Bioscience. 2009;59(6):489–97. https://doi.org/10.1525/bio.2009.59.6.7. Prima MC, Duchesne T, Fortin D. Robust inference from conditional logistic regression applied to movement and habitat selection analysis. PLoS One. 2017;12(1):1–13. https://doi.org/10.1371/journal.pone.0169779. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2019. URL https://www.R-project.org/. Renecker LA, Hudson RJ. Seasonal energy expenditures and thermoregulatory responses of moose. Can J Zool. 1986;64(2):322–7. Renecker LA, Schwartz CC. Food habits and feeding behavior. In: Franzmann, Schwartz CC, editors. Ecology and Management of the North American Moose. 2nd ed. Washington, D.C.: Wildlife Management Institutions; 2007. p. 403–39. Rönnegård L, Forslund P, Danell Ö. Lifetime patterns in adult female mass, reproduction, and offspring mass in semidomestic reindeer (Rangifer tarandus tarandus). Can J Zool. 2002;80(12):2047–55. https://doi.org/10.1139/Z02-192. Schwartz CC, Renecker LA. Nutrition and energetics. In: Franzmann, Schwartz CC, editors. Ecology and Management of the North American Moose. 2nd ed. Washington, D.C.: Wildlife Management Institutions; 2007. p. 441–78. Screen JA. Arctic amplification decreases temperature variance in northern mid- to high-latitudes. Nat Clim Chang. 2014;4(7):577–82. https://doi.org/10.1038/NCLIMATE2268. Shenoy A, Johnstone JF, Kasischke ES, Kielland K. Persistent effects of fire severity on early successional forests in interior Alaska. For Ecol Manage. 2011;261(3):381–90. https://doi.org/10.1016/j.foreco.2010.10.021. Speakman JR, Król E. Maximal heat dissipation capacity and hyperthermia risk: neglected key factors in the ecology of endotherms. J Anim Ecol. 2010;79(4):726–46. https://doi.org/10.1111/j.1365-2656.2010.01689.x. Street GM, Rodgers AR, Fryxell JM. Mid-day temperature variation influences seasonal habitat selection by moose. J Wildl Manag. 2015;79(3):505–12. https://doi.org/10.1002/jwmg.859. Testa JW, Becker EF, Lee GR. Movements of female moose in relation to birth and death of calves. Alces. 2000;36:155–62. Therneau T. A package for survival analysis in S. 
version 2.38; 2015. https://CRAN.R-project.org/package=survival. Thompson DP, Barboza PS, Crouse JA, McDonough TJ, Badajos OH, Herberg AM. Body temperature patterns vary with day, season, and body condition of moose (Alces alces). J Mammal. 2019;100(5):1466–78. Thompson DP, Crouse JA, Jaques S, Barboza PS. Redefining physiological responses of moose (Alces alces) to warm environmental conditions. J Therm Biol. 2020;102581. Timmermann HR, McNicol JG. Moose habitat needs. For Chron. 1988;64(3):238–45. Thurfjell H, Ciuti S, Boyce MS. Applications of step-selection functions in ecology and conservation. Movement Ecology. 2014;2(4):1–12. https://doi.org/10.1186/2051-3933-2-4. van Beest FM, Milner JM. Behavioural responses to thermal conditions affect seasonal mass change in a heat-sensitive northern ungulate. PLoS One. 2013;8(6). https://doi.org/10.1371/journal.pone.0065972. van Beest FM, Van Moorter B, Milner JM. Temperature-mediated habitat use and selection by a heat-sensitive northern ungulate. Anim Behav. 2012;84(3):723–35. https://doi.org/10.1016/j.anbehav.2012.06.032. van Beest FM, Rivrud IM, Loe LE, Milner JM, Mysterud A. What determines variation in home range size across spatiotemporal scales in a large browsing herbivore? J Anim Ecol. 2011;80(4):771–85. https://doi.org/10.1111/j.1365-2656.2011.01829.x. Vors LS, Boyce MS. Global declines of caribou and reindeer. Glob Chang Biol. 2009;15(11):2626–33. https://doi.org/10.1111/j.1365-2486.2009.01974.x. Walker WH, Meléndez-Fernández OH, Nelson RJ, Reiter RJ. Global climate change and invariable photoperiods: a mismatch that jeopardizes animal fitness. Ecol Evol. 2019;9(17):10044–54. https://doi.org/10.1002/ece3.5537. Walther GR. Community and ecosystem responses to recent climate change. Philos Transact Royal Society B: Biol Sci. 2010;365(1549):2019–24. https://doi.org/10.1098/rstb.2010.0021. Wells K, O'Hara RB, Cooke BD, Mutze GJ, Prowse TAA, Fordham DA. Environmental effects and individual body condition drive seasonal fecundity of rabbits: identifying acute and lagged processes. Oecologia. 2016;181(3):853–64. https://doi.org/10.1007/s00442-016-3617-2. Wickham H. ggplot2: elegant graphics for data analysis. New York: Springer-Verlag; 2016. Wolken JM, Hollingsworth TN, Rupp TS, Chapin FS, Trainor SF, Barrett TM, et al. Evidence and implications of recent and projected climate change in Alaska's forest ecosystems. Ecosphere. 2011;2(11):1–35. https://doi.org/10.1890/ES11-00288.1. We sincerely thank data owners who supplied the moose GPS-telemetry data used in this study as well as biologists whose edits contributed significantly to the clarity of this paper (specifically Tom Paragi, Graham Frye, Glenn Stout, Jeffrey Stetz, and Erin Julianus). Funding for this work was provided by the National Aeronautics and Space Administration's (NASA) Arctic Boreal Vulnerability Experiment (ABoVE) grant numbers: NNX15AT89A, NNX15AW71A, NNX15AU20A, NNX15AV92A. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Department of Natural Resources and Society, University of Idaho, Moscow, ID, USA Jyoti S. Jennewein, Lee A. Vierling & Jan U. H. Eitel Wildlife Biology Program, Department of Ecosystem and Conservation Science, W.A. 
Franke College of Forestry and Conservation, University of Montana, Missoula, MT, USA Mark Hebblewhite College of the Environment, University of Washington, Seattle, WA, USA Peter Mahoney Department of Fish and Wildlife Sciences, University of Idaho, Moscow, ID, USA Sophie Gilbert School of the Environment, Washington State University, Pullman, WA, USA Arjan J. H. Meddens Lamont-Doherty Earth Observatory, Columbia University, Palisades, NY, USA Natalie T. Boelman National Park Service, Gates of the Arctic National Park and Preserve, Fairbanks, AK, USA Kyle Joly Alaska Department of Fish and Game, 1800 Glenn Hwy #2, Palmer, AK, USA Kimberly Jones Alaska Department of Fish and Game, Division of Wildlife Conservation, 1300 College Rd, Fairbanks, Alaska, USA Kalin A. Kellie Department of Forestry and Wildlife Management, Inland Norway University of Applied Sciences, Evenstad, Norway Scott Brainerd McCall Outdoor Science School, University of Idaho, McCall, ID, USA Jan U. H. Eitel Jyoti S. Jennewein Lee A. Vierling JSJ primary analyst and author; MH, PM, SG, AJHM, NB, KJ, KAK, SB, LAV, and JUHE all contributed and advised on methodology, writing, and manuscript edits. The authors read and approved the final manuscript. Correspondence to Jyoti S. Jennewein. All capture protocols and handling protocols for moose in this study adhered to the Alaska Animal Care and Use Committee approval process (#07–11) as well as the Institutional Animal Care and Use Committee Protocol (#09–01). Additional file 1: Supplementary 1: Temperature Validation. Supplementary 2: Koyukuk males spline model results for elevation and temperature interaction. Supplementary 3: Interactive 3D plots of interaction between ambient temperature and canopy cover. Supplementary 4: Used-Available Tables of Covariates. Supplementary 5: Regional Habitat Features. Figure 1e: Regional variation in elevation. ANOVA results comparing regional variation in elevation show that all regions vary from each other statistically (F = 2705, p < 0.001). Figure 2e: Regional variation in ambient temperature. ANOVA results comparing regional variation in ambient temperature show that all regions vary from each other statistically (F = 2705, p < 0.001). With Tanana showing the highest temperatures, Innoko second, Koyukuk third, and Susitna fourth. Figure 3: Regional variation in cloud cover. ANOVA results show all regions vary from each other statistically (F = 1472, p < 0.001), except Koyukuk and Susitna. Table 1E: Regional variation in fixes occurring in the rain. Percent estimated proportionally comparing number of fixes in the rain to total number of fixes regionally. Jennewein, J.S., Hebblewhite, M., Mahoney, P. et al. Behavioral modifications by a large-northern herbivore to mitigate warming conditions. Mov Ecol 8, 39 (2020). https://doi.org/10.1186/s40462-020-00223-9 Behavioral thermoregulation Thermal stress
CommonCrawl
A piece of string fits exactly once around the perimeter of a square whose area is 144. Rounded to the nearest whole number, what is the area of the largest circle that can be formed from the piece of string? Since the area of the square is 144, each side has length $\sqrt{144}=12$. The length of the string equals the perimeter of the square which is $4 \times 12=48$. The largest circle that can be formed from this string has a circumference of 48 or $2\pi r=48$. Solving for the radius $r$, we get $r=\frac{48}{2\pi} = \frac{24}{\pi}$. Therefore, the maximum area of a circle that can be formed using the string is $\pi \cdot \left( \frac{24}{\pi} \right)^2 = \frac{576}{\pi} \approx \boxed{183}$.
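A quick numerical check of the computation above (an illustrative Python snippet, not part of the original solution):

```python
import math

side = math.sqrt(144)             # side of the square
string_length = 4 * side          # perimeter of the square = circumference of the circle
r = string_length / (2 * math.pi) # radius of the largest circle
area = math.pi * r ** 2           # equals 576 / pi
print(round(area))                # 183
```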
Math Dataset
\begin{definition}[Definition:Strictly Decreasing/Mapping] Let $\struct {S, \preceq_1}$ and $\struct {T, \preceq_2}$ be ordered sets. Let $\phi: \struct {S, \preceq_1} \to \struct {T, \preceq_2}$ be a mapping. Then $\phi$ is '''strictly decreasing''' {{iff}}: :$\forall x, y \in S: x \prec_1 y \implies \map \phi y \prec_2 \map \phi x$ Note that this definition also holds if $S = T$. \end{definition}
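Illustrative example (an editorial addition, not part of the ProofWiki entry): let $\struct {S, \preceq_1} = \struct {T, \preceq_2} = \struct {\R, \le}$ and let $\map \phi x = -x$. For all $x, y \in \R$, $x < y \implies -y < -x$, that is, $\map \phi y \prec_2 \map \phi x$, so $\phi$ is strictly decreasing. By contrast, $\map \psi x = -\floor x$ is decreasing but not strictly decreasing, since $0 < \dfrac 1 2$ while $\map \psi 0 = \map \psi {\dfrac 1 2} = 0$.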
ProofWiki
COPSS Distinguished Achievement Award and Lectureship The COPSS Distinguished Achievement Award and Lectureship (formerly known as R. A. Fisher Award and Lectureship) is a very high recognition of achievement and scholarship in statistical science that recognizes the highly significant impact of statistical methods on scientific investigations. The award was funded in 1963 by the North American Committee of Presidents of Statistical Societies (COPSS) "to honor both the contributions of Sir Ronald Aylmer Fisher and the work of a present–day statistician for their advancement of statistical theory and applications."[1] The COPSS Starting in 1964, the Distinguished Lecture is given at the Joint Statistical Meetings in North America and is subsequently published in a statistics journal. The lecturer receives a plaque and a cash award of US$1000. It is given every year if a nominee considered eligible and worthy is found, which one was in all but five years up to 1984, and in all years since. In June 2020, the name of the award was changed to its current name after discussions concerning Fisher's controversial views on race and eugenics.[2][3] Past recipients of the award • 1964 Maurice Bartlett • 1965 Oscar Kempthorne • 1967 John Tukey • 1968 Leo Goodman • 1970 Leonard Savage • 1971 Cuthbert Daniel • 1972 William G. Cochran • 1973 Jerome Cornfield • 1974 George E. P. Box • 1975 Herman Chernoff • 1976 George Alfred Barnard • 1977 R. C. Bose • 1978 William Kruskal • 1979 C. R. Rao • 1982 F. J. Anscombe • 1983 I. R. Savage • 1985 Theodore W. Anderson • 1986 David H. Blackwell • 1987 Frederick Mosteller • 1988 Erich Leo Lehmann • 1989 David R. Cox • 1990 Donald A. S. Fraser • 1991 David Brillinger • 1992 Paul Meier • 1993 Herbert Robbins • 1994 Elizabeth A. Thompson • 1995 Norman Breslow • 1996 Bradley Efron • 1997 Colin Mallows • 1998 Arthur P. Dempster • 1999 Jack Kalbfleisch • 2000 Ingram Olkin • 2001 James O. Berger • 2002 Raymond Carroll • 2003 Adrian F. M. Smith • 2004 Donald Rubin • 2005 R. Dennis Cook • 2006 Terence Speed • 2007 Marvin Zelen • 2008 Ross L. Prentice • 2009 Noel Cressie • 2010 Bruce G. Lindsay • 2011 C.F. Jeff Wu • 2012 Roderick Little • 2013 Peter J. Bickel • 2014 Grace Wahba • 2015 Stephen Fienberg • 2016 Alice S. Whittemore • 2017 Robert E. Kass • 2018 Susan Murphy • 2019 Paul R. Rosenbaum • 2020 Kathryn Roeder • 2021 Wing Hung Wong • 2022 Nancy Reid • 2023 Bin Yu Renaming of the lectureship On June 4, 2020, following national movements to fight systemic racism and police brutality in response to the murder of George Floyd, one of the Lectureship award committee members, Daniela Witten (UW), started a discussion on renaming the Fisher Lectureship on Twitter as R.A. Fisher was a eugenicist.[2] A petition to "Rename The Fisher Lecture After David Blackwell" was initiated by Miles Ott (Smith) on Change.org. The COPSS leadership responded by soliciting input via an online form on the official website. Harry Crane (Rutgers), Joseph Guinness (Cornell) and Ryan Martin (NCSU) posted a comment arguing against the renaming on June 13, 2020.[3] They argued that the lectureship was established to honor Fisher's scientific achievement, not the scientist. They proposed to amend the description of the lectureship instead of renaming it. On June 15 the Executive Director of ASA, Ron Wasserstein, notified its members that the leadership has recommended changing the lectureship name to COPSS. The process that led to the decision was unclear. 
Ron commented on Twitter, "There is no principle of greater value than the principle of strengthening the statistical community by moving forward to form a more just, equitable, diverse, and inclusive society". On June 23, the name R.A. Fisher Award and Lectureship was officially retired and the announced recipient of the award for 2020, Kathryn Roeder, was to receive the award under the new name.[4][5] The chair of COPSS, Bhramar Mukherjee, also made the announcement on Twitter. In their statement, the COPSS mentioned that they retired the previous name of the award "to advance a more just, equitable, diverse, and inclusive statistical community."[5] Other lecture series named after R. A. Fisher Two other series of lectures are also named after R. A. Fisher: • The Fisher Memorial Lecture on an application of mathematics to biology, usually given in the UK, first given in 1964 • The Sir Ronald Fisher Lecture on genetics, evolutionary biology or statistics, given at the University of Adelaide, Australia, first given in 1990 See also • COPSS Presidents' Award • International Prize in Statistics • Guy Medals • List of mathematics awards References 1. "COPSS Distinguished Achievement Award and Lectureship". Committee of Presidents of Statistical Societies. Retrieved 2020-07-01. 2. Witten, Daniela (2020-06-04). "Thread by @daniela_witten". compiled by Thread Reader. Retrieved 2020-07-01. 3. Crane, Harry; Guinness, Joseph; Martin, Ryan. "Comment on the Proposal to Rename the R.A. Fisher Lecture". Retrieved 2020-06-15. 4. ASA. "R.A. Fisher Award and Lectureship". Archived from the original on 2018-08-15. Retrieved 2020-06-23. 5. "COPSS Statement on Fisher Lectureship and Award". 2020-06-23. Archived from the original on 2019-09-30. Retrieved 2020-07-01. External links • Official website
Wikipedia
\begin{document} \setcounter{tocdepth}{2} \begin{abstract} In this paper, we prove the cut-off phenomenon in total variation distance for the Brownian motions traced on the classical symmetric spaces of compact type, that is to say: \begin{enumerate} \item the classical simple compact Lie groups: special orthogonal groups, special unitary groups and compact symplectic groups; \item the real, complex and quaternionic Grassmannian varieties (including the real spheres, and the complex or quaternionic projective spaces); \item the spaces of real, complex and quaternionic structures. \end{enumerate} Denoting $\mu_{t}$ the law of the Brownian motion at time $t$, we give explicit lower bounds for $d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})$ if $t < t_{\text{cut-off}}=\alpha \log n$, and explicit upper bounds if $t > t_{\text{cut-off}}$. This provides in particular an answer to some questions raised in recent papers by Chen and Saloff-Coste. Our proofs are inspired by those given by Rosenthal and Porod for products of random rotations in $\mathrm{SO}(n)$, and by Diaconis and Shahshahani for products of random transpositions in $\mathfrak{S}_{n}$. \end{abstract} \maketitle \hrule \tableofcontents \section{Introduction} \subsection{The cut-off phenomenon for random permutations} This paper is concerned with the analogue for Brownian motions on compact Lie groups and symmetric spaces of the famous \emph{cut-off phenomenon} observed in random shuffles of cards (\emph{cf.} \cite{AD86,BD92}). Let us recall this result in the case of ``natural'' shuffles of cards, also known as \emph{riffle shuffles}. Consider a deck of $n$ ordered cards $1,2,\ldots,n$, originally in this order. At each time $k \geq 1$, one performs the following procedure: \begin{enumerate} \item One cuts the deck in two parts of sizes $m$ and $n-m$, the integer $m$ being chosen randomly according to a binomial law of parameter $\frac{1}{2}$: $$ \mathbb{P}[m=M]=\frac{1}{2^{n}}\binom{n}{M}. $$ So for instance, if $n=10$ and the deck was initially $123456789\mathrm{X}$, then one obtains the two blocks $A=123456$ and $B=789\mathrm{X}$ with probability $\frac{1}{2^{10}}\binom{10}{6}=\frac{105}{512}\simeq 0.21$. \item The first card of the new deck comes from $A$ with probability $(\mathrm{card}\, A)/n$, and from $B$ with probability $(\mathrm{card}\, B)/n$. Then, if $A'$ and $B'$ are the remaining blocks after removal of the first card, the second card of the new deck will come from $A'$ with probability $(\mathrm{card}\, A')/(n-1)$, and from $B'$ with probability $(\mathrm{card}\, B')/(n-1)$; and similarly for the other cards. So for instance, by shuffling $A=123456$ and $B=789\mathrm{X}$, one can obtain with probability $1/\binom{10}{6}\simeq 0.0048$ the deck $17283459\mathrm{X}6$. \end{enumerate} Denote $\mathfrak{S}_{n}$ the symmetric group of order $n$, and $\sigma^{(k)}$ the random permutation in $\mathfrak{S}_{n}$ obtained after $k$ independent shuffles. One can guess that as $k$ goes to infinity, the law $\mathbb{P}^{(k)}$ of $\sigma^{(k)}$ converges to the uniform law $\mathbb{U}$ on $\mathfrak{S}_{n}$. There is a natural distance on the set $\mathscr{P}(\mathfrak{S}_{n})$ of probability measures on $\mathfrak{S}_{n}$ that allows to measure this convergence: the so-called \emph{total variation distance} $d_{\mathrm{TV}}$. Consider more generally a measurable space $X$ with $\sigma$-field $\mathcal{B}(X)$. 
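Before the total variation distance is defined below, here is a short simulation sketch of the shuffling procedure just described (an editorial aside in Python, not taken from the paper): the binomial cut and the size-proportional interleaving are exactly steps (1) and (2) above.

```python
import numpy as np

rng = np.random.default_rng(2)

def riffle_shuffle(deck, rng):
    """One Gilbert-Shannon-Reeds riffle shuffle: a Binomial(n, 1/2) cut, then
    cards dropped one at a time from either block with probability
    proportional to the block's current size."""
    n = len(deck)
    m = rng.binomial(n, 0.5)                  # step (1): size of the top block
    A, B = list(deck[:m]), list(deck[m:])
    out = []
    while A or B:
        # step (2): next card comes from A with probability |A| / (|A| + |B|)
        if rng.random() < len(A) / (len(A) + len(B)):
            out.append(A.pop(0))
        else:
            out.append(B.pop(0))
    return out

deck = list(range(1, 11))                     # the deck 1, 2, ..., 10 from the example
for k in range(3):
    deck = riffle_shuffle(deck, rng)
    print(k + 1, deck)
```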
The total variation distance is the metric on the set of probability measures $\mathscr{P}(X)$ defined by $$ d_{\mathrm{TV}}(\mu,\nu)=\sup\left\{|\mu(A)-\nu(A)|,\,\,\,A \in \mathcal{B}(X)\right\}\,\, \in [0,1]. $$ The convergence in total variation distance is in general a stronger notion than the weak convergence of probability measures. On the other hand, if $\mu$ and $\nu$ are absolutely continuous with respect to a third measure $dx$ on $X$, then their total variation distance can be written as a $\mathscr{L}^{1}$-norm: $$ d_{\mathrm{TV}}(\mu,\nu)=\frac{1}{2}\int_{X} \left|\frac{d\mu}{dx}(x)-\frac{d\nu}{dx}(x)\right|\,dx. $$ It turns out that with respect to total variation distance, the convergence of random shuffles occurs at a specific time $k_{\text{cut-off}}$, that is to say that $d_{\mathrm{TV}}(\mathbb{P}^{(k)},\mathbb{U})$ stays close to $1$ for $k < k_{\text{cut-off}}$, and that $d_{\mathrm{TV}}(\mathbb{P}^{(k)},\mathbb{U})$ is then extremely close to $0$ for $k > k_{\text{cut-off}}$. More precisely, in \cite{BD92} (see also \cite[Chapter 10]{CSST}), it is shown that: \begin{theorem}[Bayer-Diaconis]\label{diaconishuffle} Assume $k=\frac{3}{2\log 2}\,\log n +\theta$. Then, $$ d_{\mathrm{TV}}(\mathbb{P}^{(k)},\mathbb{U}) = 1-2\,\phi\left(\frac{-2^{-\theta}}{4\sqrt{3}}\right) + O\left(n^{-1/4}\right), \quad\text{with }\phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} \mathrm{e}^{-\frac{s^{2}}{2}}\,ds. $$ So for $\theta$ negative, the total variation distance is extremely close to $1$, whereas it is extremely close to $0$ for $\theta$ positive. \end{theorem} \noindent The cut-off phenomenon has been proved for other shuffling algorithms (\emph{e.g.} random transpositions of cards), and more generally for large classes of finite Markov chains, see for instance \cite{DSC96,Dia96}. It has also been investigated by Chen and Saloff-Coste for Markov processes on continuous spaces, \emph{e.g.} spheres and Lie groups; see in particular \cite{SC94,SC04,CSC08} and the discussion of \S\ref{sofar}. However, in this case, cut-offs are easier to prove for the $\mathscr{L}^{p>1}$-norm of $p_{t}(x)-1$, where $p_{t}(x)$ is the density of the process at time $t$ and point $x$ with respect to the equilibrium measure. The case of the $\mathscr{L}^{1}$-norm, which is (up to a factor $2$) the total variation distance, is somewhat different. In particular, a proof of the cut-off phenomenon for the total variation distance between the Haar measure and the marginal law $\mu_{t}$ of the Brownian motion on a classical compact Lie group was apparently not known --- see the remark just after \cite[Theorem 1.2]{CSC08}. The purpose of this paper is precisely to give a proof of this $\mathscr{L}^{1}$-cut-off for all classical compact Lie groups, and more generally for all classical symmetric spaces of compact type. In the two next paragraphs, we describe the spaces in which we will be interested (\S\ref{symmetric}), and we precise what is meant by ``Brownian motion'' on a space of this type (\emph{cf.} \S\ref{brown}). This will then enable us to explain the results of Chen and Saloff-Coste in \S\ref{sofar}, and finally to state in \S\ref{statement} which improvements we were able to prove. \subsection{Classical compact Lie groups and symmetric spaces}\label{symmetric} To begin with, let us fix some notations regarding the three classical families of simple compact Lie groups, and their quotients corresponding to irreducible simply connected compact symmetric spaces. 
We use here most of the conventions of \cite{Hel78,Hel84}. For every $n \geq 1$, we denote $\mathrm{U}(n)=\mathrm{U}(n,\mathbb{C})$ the \emph{unitary group} of order $n$; $\mathrm{O}(n)=\mathrm{O}(n,\mathbb{R})$ the \emph{orthogonal group} of order $n$; and $\mathrm{U}\mathrm{Sp}(n)=\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ the \emph{compact symplectic group} of order $n$. They are defined by the same equations: $$ UU^{\dagger}=U^{\dagger}U=I_{n} \quad;\quad OO^{t}=O^{t}O=I_{n} \quad;\quad SS^{\star}=S^{\star}S=I_{n} $$ with complex, real or quaternionic coefficients, the conjugate of a quaternion $w+\mathrm{i} x + \mathrm{j} y + \mathrm{k} z$ being $w-\mathrm{i} x - \mathrm{j} y - \mathrm{k} z$. The orthogonal groups are not connected, so we shall rather work with the \emph{special orthogonal groups} $$ \mathrm{SO}(n)=\mathrm{SO}(n,\mathbb{R})=\left\{O \in \mathrm{O}(n,\mathbb{R})\,\,|\,\, \det O=1\right\}. $$ On the other hand, the unitary groups are not simple Lie groups (their center is one-dimensional), so it is convenient to introduce the \emph{special unitary groups} $$\mathrm{SU}(n)=\mathrm{SU}(n,\mathbb{C})=\left\{U \in \mathrm{U}(n,\mathbb{C})\,\,|\,\, \det U=1\right\}.$$ Then, for every $n \geq 1$, $\mathrm{SU}(n,\mathbb{C})$, $\mathrm{SO}(n,\mathbb{R})$ and $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ are connected simple compact real Lie groups, of respective dimensions $$ \dim_{\mathbb{R}} \mathrm{SU}(n,\mathbb{C})=n^{2}-1\quad;\quad\dim_{\mathbb{R}} \mathrm{SO}(n,\mathbb{R})=\frac{n(n-1)}{2}\quad;\quad \dim_{\mathbb{R}} \mathrm{U}\mathrm{Sp}(n,\mathbb{H})= 2n^{2}+n. $$ The special unitary groups and compact symplectic groups are simply connected; on the other hand, for $n \geq 3$, the fundamental group of $\mathrm{SO}(n,\mathbb{R})$ is $\mathbb{Z}/2\mathbb{Z}$, and its universal cover is the \emph{spin group} $\mathrm{Spin}(n)$. Many computations on these simple compact Lie groups can be performed by using their \emph{representation theory}, which is covered by the highest weight theorem; see \S\ref{weyltheory}. We shall recall all this briefly in Section \ref{fourier}, and give in each case the list of all irreducible representations, and the corresponding dimensions and characters. It is well known that every simply connected compact simple Lie group is: \begin{itemize} \item either one group in the infinite families $\mathrm{SU}(n)$, $\mathrm{Spin}(n)$, $\mathrm{U}\mathrm{Sp}(n)$; \item or, an exceptional simple compact Lie group of type $\mathrm{E}_{6}$, $\mathrm{E}_{7}$, $\mathrm{E}_{8}$, $\mathrm{F}_{4}$ or $\mathrm{G}_{2}$. \end{itemize} We shall refer to the first case as the \emph{classical simple compact Lie groups}, and as mentioned before, our goal is to study Brownian motions on these groups. We shall more generally be interested in compact symmetric spaces; see \emph{e.g} \cite[Chapter 4]{Hel78}. These spaces can be defined by a local condition on geodesics, and by Cartan-Ambrose-Hicks theorem, a symmetric space $X$ is isomorphic as a Riemannian manifold to $G/K$, where $G$ is the connected component of the identity in the isometry group of $X$; $K$ is the stabilizer of a point $x\in X$ and a compact subgroup of $G$; and $(G,K)$ is a symmetric pair, which means that $K$ is included in the group of fixed points $G^{\theta}$ of an involutive automorphism of $G$, and contains the connected component $(G^{\theta})^{0}$ of the identity in this group. Moreover, $X$ is compact if and only if $G$ is compact. 
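For instance (this is a classical example, and it will reappear below as the real Grassmannian $\mathrm{Gr}(n+1,1,\mathbb{R})$), the round sphere $\mathbb{S}^{n}(\mathbb{R})$ is a compact symmetric space: one can take $G=\mathrm{SO}(n+1)$ acting by rotations, $x$ a fixed pole of the sphere with stabilizer $K=\mathrm{SO}(n)$, and the involutive automorphism $\theta(g)=sgs^{-1}$ with $s=\mathrm{diag}(-1,1,\ldots,1)$. Then $K$ lies between $(G^{\theta})^{0}=\mathrm{SO}(n)$ and $G^{\theta}=\mathrm{S}(\mathrm{O}(1)\times \mathrm{O}(n))$, and one recovers $\mathbb{S}^{n}(\mathbb{R})\simeq \mathrm{SO}(n+1)/\mathrm{SO}(n)$, which is indeed of the form $G/K$ described above.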
This result reduces the classification of symmetric spaces to the classification of real Lie groups and their involutive automorphisms. So, consider an irreducible simply connected symmetric space, of compact type. Two cases arise: \begin{enumerate} \item The isometry group $G=K\times K$ is the product of a compact simple Lie group with itself, and $K$ is embedded into $G$ via the diagonal map $k \mapsto (k,k)$. The symmetric space $X$ is then the group $K$ itself, the quotient map from $G$ to $X\simeq K$ being \begin{align*} G &\to K\\ g=(k_{1},k_{2}) &\mapsto k_{1}k_{2}^{-1}. \end{align*} In particular, the isometries of $K$ are the multiplication on the left and the right by elements of $K\times K$, and this action restricted to $K \subset G$ is the action by conjugacy. \item The isometry group $G$ is a compact simple Lie group, and $K$ is a closed subgroup of it. In this case, there exists in fact a non-compact simple Lie group $L$ with maximal compact subgroup $K$, such that $G$ is a compact subgroup of the complexified Lie group $L^{\mathbb{C}}$, and maximal among those containing $K$. The involutive automorphism $\theta$ extends to $L^{\mathbb{C}}$, with $K=G^{\theta}=L^{\theta}$ and the two orthogonal symmetric Lie algebras $(\mathfrak{g},d_{e}\theta)$ and $(\mathfrak{l},d_{e}\theta)$ dual of each other. \end{enumerate} The classification of irreducible simply connected compact symmetric spaces is therefore the following: in addition to the compact simple Lie groups themselves, there are the seven infinite families \begin{align*} &\mathrm{Gr}(p+q,q,\mathbb{R})=\mathrm{SO}(p+q)/(\mathrm{SO}(p)\times \mathrm{SO}(q))\,\,\,\text{with }p,q \geq 1\,\,\,\text{(real Grassmannians)};\\ &\mathrm{Gr}(p+q,q,\mathbb{C})=\mathrm{SU}(p+q)/\mathrm{S}(\mathrm{U}(p)\times \mathrm{U}(q))\,\,\,\text{with }p,q \geq 1\,\,\,\text{(complex Grassmannians)};\\ &\mathrm{Gr}(p+q,q,\mathbb{H})=\mathrm{U}\mathrm{Sp}(p+q)/(\mathrm{U}\mathrm{Sp}(p)\times \mathrm{U}\mathrm{Sp}(q))\,\,\,\text{with }p,q \geq 1\,\,\,\text{(quaternionic Grassmannians)};\\ &\mathrm{SU}(n)/\mathrm{SO}(n)\,\,\,\text{with }n\geq 2\,\,\,\text{(real structures on $\mathbb{C}^{n}$)};\\ &\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)\,\,\,\text{with }n\geq 1\,\,\,\text{(complex structures on $\mathbb{H}^{n}$)};\\ &\mathrm{SO}(2n)/\mathrm{U}(n)\,\,\,\text{with }n\geq 2\,\,\,\text{(complex structures on $\mathbb{R}^{2n}$)};\\ &\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)\,\,\,\text{with }n\geq 2\,\,\,\text{(quaternionic structures on $\mathbb{C}^{2n}$)}; \end{align*} and quotients involving exceptional Lie groups, \emph{e.g.} $\mathbb{P}^{2}(\mathbb{O})=\mathrm{F}_{4}/\mathrm{Spin}(9)$; see \cite[Chapter 10]{Hel78}. For the two last families, one sees $\mathrm{U}(n)$ as a subgroup of $\mathrm{SO}(2n)$ by replacing each complex number $x+\mathrm{i} y$ by the $2\times2$ real matrix \begin{equation} \begin{pmatrix} x & y \\ -y & x \end{pmatrix};\label{doublecomplex} \end{equation} and one sees $\mathrm{U}\mathrm{Sp}(n)$ as a subgroup of $\mathrm{SU}(2n)$ by replacing each quaternion number $w+\mathrm{i} x + \mathrm{j} y + \mathrm{k} z$ by the $2 \times 2$ complex matrix \begin{equation}\begin{pmatrix} w+\mathrm{i} x & y + \mathrm{i} z\\ -y + \mathrm{i} z & w-\mathrm{i} x \end{pmatrix}; \label{doublequaternion}\end{equation} $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ is then the intersection of $\mathrm{SU}(2n,\mathbb{C})$ and of the complex symplectic group $\mathrm{Sp}(2n,\mathbb{C})$. 
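As a quick sanity check of this identification, take $n=1$: a unit quaternion $w+\mathrm{i} x + \mathrm{j} y + \mathrm{k} z$ (with $w^{2}+x^{2}+y^{2}+z^{2}=1$) is sent by \eqref{doublequaternion} to the matrix
$$\begin{pmatrix} w+\mathrm{i} x & y+\mathrm{i} z \\ -y+\mathrm{i} z & w-\mathrm{i} x \end{pmatrix}=\begin{pmatrix} a & b \\ -\overline{b} & \overline{a}\end{pmatrix},\qquad a=w+\mathrm{i} x,\,\,b=y+\mathrm{i} z,$$
whose determinant is $|a|^{2}+|b|^{2}=1$; one recovers in this way the classical identification of the group $\mathrm{U}\mathrm{Sp}(1,\mathbb{H})$ of unit quaternions with $\mathrm{SU}(2,\mathbb{C})$.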
We shall refer to the seven aforementioned families as \emph{classical simple compact symmetric spaces} (of type non-group); again, we aim to study in detail the Brownian motions on these spaces. \subsection{Laplace-Beltrami operators and Brownian motions on symmetric spaces}\label{brown} We denote $d\eta_{K}(k)$ or $dk$ the \emph{Haar measure} of a (simple) compact Lie group $K$, and $d\eta_{X}(x)$ or $dx$ the Haar measure of a compact symmetric space $X=G/K$, which is the image measure of $d\eta_{G}$ by the projection map $\pi : G \to G/K$. We refer to \cite[Chapter 1]{Hel84} for precisions on the integration theory over (compact) Lie groups and their homogeneous spaces. There are several complementary ways to define a Brownian motion on a compact Lie group $K$ or on a compact symmetric space $G/K$, see in particular \cite{Liao04book}. Namely, one can view them: \begin{enumerate} \item as Markov processes with infinitesimal generator the Laplace-Beltrami differential operator of the underlying Riemannian manifold; \item as conjugacy-invariant continuous L\'evy processes on $K$, or as projections of such a process on $G/K$; \item at least in the group case, as solutions of stochastic differential equations driven by standard (multidimensional) Brownian motions on the Lie algebra. \end{enumerate} The first and the third points of view will be especially useful for our computations. For the sake of completeness, let us recall briefly each point of view --- the reader already acquainted with these notions can thus go directly to \S\ref{sofar}. \subsubsection{Choice of normalization and Laplace-Beltrami operators} To begin with, let us make precise the Riemannian structures chosen in each case. In the case of a simple compact Lie group $K$, the \emph{Killing form} $B(X,Y)=\mathrm{tr}(\mathrm{ad}\,X\,\circ\,\mathrm{ad}\,Y)$ is negative-definite, and its opposite gives by transport on each tangent space the unique bi-$K$-invariant Riemannian structure on $K$, up to a positive scalar. We choose this normalization constant as follows. When $K=\mathrm{SU}(n)$ or $\mathrm{SO}(n)$ or $\mathrm{U}\mathrm{Sp}(n)$, the Killing form on $\mathfrak{k}$ is a scalar multiple of the bilinear form $X\otimes Y \mapsto \Re(\mathrm{tr}(XY))$ --- the real part is only needed for the quaternionic case. Then, we shall always consider the following invariant scalar products on $\mathfrak{k}$: \begin{equation} \scal{X}{Y}=-\frac{\beta n}{2} \,\Re(\mathrm{tr}(XY)),\label{normalization} \end{equation} with $\beta=1$ for special orthogonal groups, $\beta=2$ for special unitary groups and unitary groups, and $\beta=4$ for compact symplectic groups (these are the conventions of \emph{e.g.} \cite{Lev11}). Similarly, on a simple compact symmetric space $X=G/K$ of type non-group, we take the previously chosen $\mathrm{Ad}(G)$-invariant scalar product (the one given by Equation \eqref{normalization}), and we restrict it to the orthogonal complement $\mathfrak{x}$ of $\mathfrak{k}$ in $\mathfrak{g}$. This $\mathfrak{x}$ can be identified with the tangent space of $X=G/K$ at $eK$, and by transport one gets the unique (up to a scalar) $G$-invariant Riemannian structure on $X$, called the Riemannian structure \emph{induced} by the Riemannian structure of $G$. From now on, each classical simple compact symmetric space $X=G/K$ will be endowed with this induced Riemannian structure.
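For instance, on $\mathfrak{so}(n,\mathbb{R})$ (so $\beta=1$), a quick computation shows that the elementary antisymmetric matrices satisfy $\Re(\mathrm{tr}((E_{ij}-E_{ji})^{2}))=-2$ for $i<j$, whence $\scal{E_{ij}-E_{ji}}{E_{ij}-E_{ji}}=n$; consequently, the matrices $\frac{E_{ij}-E_{ji}}{\sqrt{n}}$, $1\leq i<j\leq n$, form an orthonormal basis of $\mathfrak{so}(n,\mathbb{R})$ for the scalar product \eqref{normalization}. This is precisely the orthonormal basis that appears in the expression of the Casimir operator $C_{\mathfrak{so}(n)}$ written in Section \ref{fourier}.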
\begin{remark} This is not necessarily the ``usual'' normalization for these quotients: in particular, when $G=\mathrm{SO}(n+1)$ and $K=\mathrm{SO}(n)\times\mathrm{SO}(1)=\mathrm{SO}(n)$, the Riemannian structure defined by the previous conventions on the $n$-dimensional sphere $X=\mathbb{S}^{n}(\mathbb{R})$ differs from the restriction of the standard euclidian metric of $\mathbb{R}^{n+1}$ by a factor $\sqrt{n+1}$. However this normalization does not change the nature of the cut-off phenomenon that we are going to prove. \end{remark} \begin{remark} The bilinear form in \eqref{normalization} is only proportional to minus the Killing form, and not equal to it; for instance, the Killing form of $\mathrm{SO}(n,\mathbb{R})$ is $$(n-2)\,\mathrm{tr}(XY)=-\frac{2n-4}{n}\scal{X}{Y},$$ and not $-\scal{X}{Y}$. However, the normalization of Formula \eqref{normalization} enables one to relate the Brownian motions on the compact Lie groups to the ``standard'' Brownian motions on their Lie algebras, and to the classical ensembles of random matrix theory (see the SDEs at the end of this paragraph). \end{remark} The \emph{Laplace-Beltrami operator} on a Riemannian manifold $M$ is the differential operator of degree $2$ defined by $$\Delta f(m)=\sum_{1\leq i,j\leq d} g^{ij}( \nabla_{X_{i}}\!\nabla_{X_{j}}f(m)-\nabla_{\nabla_{X_{i}}X_{j}}f(m)),$$ where $(X_{1},\ldots,X_{d})$ is a basis of $T_{m}M$, $(g^{ij})_{i,j}$ is the inverse of the metric tensor $(g_{ij}=\scal{X_{i}}{X_{j}}_{T_{m}M})_{i,j}$, and $\nabla_{X}Y$ denotes the covariant derivative of a vector $Y$ along a vector $X$ and with respect to the Levi-Civita connection. In the case of a compact Lie group $K$, this expression can be greatly simplified as follows (see for instance \cite[\S2.3]{Liao04book}). Fix once and for all an orthonormal basis $(X_{1},X_{2},\ldots,X_{d})$ of $\mathfrak{k}$. On another tangent space $T_{k}K$, one transports each $X_{i}$ by setting $$ X_{i}^{l}(k)=\{d_{e}R_{k}\}(X_{i}) \in T_{k}K, $$ where $R_{k}$ is the multiplication on the right by $k$. One thus obtains a vector field $X_{i}^{l}=\frac{\partial}{\partial x_{i}}$ which is left-invariant by construction and right-invariant because of the $\mathrm{Ad}(K)$-invariance of the scalar product on $\mathfrak{k}$. Then, \begin{equation} \Delta=\sum_{i=1}^{d}\frac{\partial^{2}}{\partial x_{i}^{2}}.\label{laplacebeltrami} \end{equation} \begin{definition}\label{defbrown} A (standard) Brownian motion on a compact Riemannian manifold $M$ is a continuous Feller process $(m_{t})_{t \in \mathbb{R}_{+}}$ whose infinitesimal generator restricted to $\mathscr{C}^{2}(M)$ is $\frac{1}{2}\,\Delta$. \end{definition} \noindent In the following, on a compact Lie group $K$ or a compact symmetric space $G/K$, we shall also assume that $m_{0}=e$ or $m_{0}=eK$ almost surely. We shall then denote $\mu_{t}$ the marginal law of the process at time $t$, and $p_{t}^{K}(k)=\frac{d\mu_{t}}{d\eta_{K}}(k)$ or $p_{t}^{X}(x)=\frac{d\mu_{t}}{d\eta_{X}}(x)$ the density of $\mu_{t}$ with respect to the Haar measure. General results about hypoelliptic diffusions on manifolds ensure that these densities exist for $t >0$ and are continuous in time and space; we shall later give explicit formulas for them (\emph{cf.} Section \ref{fourier}). \subsubsection{Brownian motions as continuous L\'evy processes} By using the geometry of the spaces considered and the language of L\'evy processes, one can give another equivalent definition of Brownian motions. 
The \emph{right increments} of a random process $(g_{t})_{t \in \mathbb{R}_{+}}$ with values in a (compact) Lie group $G$ are the random variables $r_{t}^{s}=g_{s}^{-1}g_{t}$, so $g_{t}=g_{s}\,r_{t}^{s}$ for any times $s \leq t$. Then, a \emph{left L\'evy process} on $G$ is a c\`adl\`ag random process such that: \begin{enumerate} \item For any times $0=t_{0}\leq t_{1}\leq \cdots \leq t_{n}$, the right increments $r_{t_{1}}^{t_{0}}, r_{t_{2}}^{t_{1}},\ldots, r_{t_{n}}^{t_{n-1}}$ are independent. \item For any times $s \leq t$, the law of $r_{t}^{s}$ only depends on the difference $t-s$: $r_{t-s}^{0}\stackrel{\text{\tiny law}}{=} r_{t}^{s}.$ \end{enumerate} Denote $P_{t}$ the operator on the space $\mathscr{C}(G)$ of continuous functions on $G$ defined by $(P_{t}f)(g)=\mathbb{E}[f(gg_{t})];$ and $\mu_{t}$ the law of $g_{t}$ assuming that $g_{0}=e_{G}$ almost surely. For $h \in G$, we also denote by $L_{h}$ the operator on $\mathscr{C}(G)$ defined by $L_{h}f(g)=f(hg)$. If $(g_{t})_{t \in \mathbb{R}_{+}}$ is a left L\'evy process on $G$ starting at $g_{0}=e_{G}$, then: \begin{enumerate} \item The family of operators $(P_{t})_{t \in \mathbb{R}_{+}}$ is a Feller semigroup that is left $G$-invariant, meaning that $P_{t}\circ L_{h}=L_{h}\circ P_{t}$ for all $h \in G$ and for all time $t$. Conversely, any such Feller semigroup is the group of transitions of a left L\'evy process which is unique in law. \item The family of laws $(\mu_{t})_{t \in \mathbb{R}_{+}}$ is a semigroup of probability measures for the convolution product of measures $$ (\mu*\nu)(f)=\int_{G^{2}}f(gh)\,d\mu(g)\,d\nu(h). $$ Hence, $\mu_{s}*\mu_{t}=\mu_{s+t}$ for any $s$ and $t$. Moreover, this semigroup is continuous, \emph{i.e.}, the limit in law $\lim_{t \to 0}\mu_{t}$ exists and is the Dirac measure $\delta_{e}$. Conversely, given such a semigroup of measures, there is always a corresponding left L\'evy process, and it is unique in law. \end{enumerate} Thus, left L\'evy processes are the same as left $G$-invariant Feller semigroups of operators, and they are also the same as continuous semigroups of probability measures on $G$. In particular, on a compact Lie group, they are characterized by their infinitesimal generator $$Lf(g)=\lim_{t \to 0}\frac{P_{t}f(g)-f(g)}{t}$$ defined on a suitable subspace of $\mathscr{C}(G)$. Hunt's theorem (\emph{cf.} \cite{Hun56}) then characterizes the possible infinitesimal generators of (left) L\'evy processes on a Lie group; in particular, continuous left L\'evy processes correspond to left-invariant differential operators of degree $2$. Assume then that $(g_{t})_{t\in \mathbb{R}_{+}}$ is a continuous L\'evy process on a simple compact Lie group $G$, starting from $e$ and with the additional property that $(hg_{t}h^{-1})_{t \in \mathbb{R}_{+}}$ and $(g_{t})_{t \in \mathbb{R}_{+}}$ have the same law in $\mathscr{C}(\mathbb{R}_{+},G)$ for every $h$. These hypotheses imply that the infinitesimal generator $L$, which is a differential operator of degree $2$, is a scalar multiple of the Laplace-Beltrami operator $\Delta$. Thus, on a simple compact Lie group $K$, up to a linear change of time $ t\mapsto at$, a \emph{conjugacy-invariant continuous left L\'evy process} is a Brownian motion in the sense of Definition \ref{defbrown}. Similarly, on a simple compact symmetric space $G/K$, up to a linear change of time, the image $(g_{t}K)_{t \in \mathbb{R}_{+}}$ of a conjugacy-invariant continuous left L\'evy process on $G$ is a Brownian motion in the sense of Definition \ref{defbrown}.
This second definition of Brownian motions on compact symmetric spaces has the following important consequence: \begin{lemma}\label{nonincreasingdistance} Let $\mu_{t}$ be the law of a Brownian motion on a compact Lie group $K$ or on a compact symmetric space $G/K$. The total variation distance $d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})$ is a non-increasing function of $t$. \end{lemma} \begin{proof} First, let us treat the case of compact Lie groups. If $f_{1},f_{2}$ are in $\mathscr{L}^{1}(K,d\eta_{K})$, then their convolution product $f_{1}*f_{2}$ is again in $\mathscr{L}^{1}(K)$, with $$\|f_{1}*f_{2}\|_{\mathscr{L}^{1}(K)} \leq \|f_{1}\|_{\mathscr{L}^{1}(K)}\,\|f_{2}\|_{\mathscr{L}^{1}(K)}.$$ Now, since $\mu_{s+t}=\mu_{s}*\mu_{t}$, the densities of the Brownian motion also satisfy $p_{s+t}^{K}=p_{s}^{K}*p_{t}^{K}$. Consequently, \begin{align*} 2\,d_{\mathrm{TV}}(\mu_{s+t},\eta_{K})&=\|p_{s+t}^{K}-1\|_{\mathscr{L}^{1}(K)}=\|(p_{s}^{K}-1)*p_{t}^{K}\|_{\mathscr{L}^{1}(K)}\\ & \leq \|p_{s}^{K}-1\|_{\mathscr{L}^{1}(K)} \,\|p_{t}^{K}\|_{\mathscr{L}^{1}(K)}=\|p_{s}^{K}-1\|_{\mathscr{L}^{1}(K)}=2\,d_{\mathrm{TV}}(\mu_{s},\eta_{K}). \end{align*} The proof is thus done in the group case. For a compact symmetric space $X=G/K$, denote $p_{t}^{G}$ the density of the Brownian motion on $G$, and $p_{t}^{X}$ the density of the Brownian motion on $X$. Since the Brownian motion on $X$ is the image of the Brownian motion on $G$ by $\pi :G \to G/K$, one has: $$\forall x=gK,\,\,\,p_{t}^{X}(x)=\int_{K}p_{t}^{G}(gk)\,dk.$$ As a consequence, \begin{align*} \|p_{s+t}^{X}-1\|&_{\mathscr{L}^{1}(X)}=\int_{G} \left|p_{s+t}^{X}(gK)-1\right|dg =\int_{G}\left|\int_{K}(p_{s+t}^{G}(gk)-1)\,dk\right|dg\\ &=\int_{G}\left|\int_{K\times G}(p_{s}^{G}(h^{-1}gk)-1)\,p_{t}^{G}(h)\,dk\,dh\right|dg = \int_{G} \left|\int_{G} (p_{s}^{X}(h^{-1}gK)-1)\,p_{t}^{G}(h)\,dh\right|dg\\ &\leq \int_{G\times G} \left|p_{s}^{X}(h^{-1}gK)-1\right|\,\left|p_{t}^{G}(h)\right|\,dh\,dg=\|p_{s}^{X}-1\|_{\mathscr{L}^{1}(X)}\,\|p_{t}^{G}\|_{\mathscr{L}^{1}(G)}=\|p_{s}^{X}-1\|_{\mathscr{L}^{1}(X)}, \end{align*} so $d_{\mathrm{TV}}(\mu_{s+t},\eta_{X}) \leq d_{\mathrm{TV}}(\mu_{s},\eta_{X})$ also in the case of symmetric spaces. \end{proof} \begin{remark} Later, this property will allow us to compute estimates of $d_{\mathrm{TV}}(\mu_{t},\eta_{X})$ only for $t$ around the cut-off time. Indeed, if one has for instance an (exponentially small) estimate of $1-d_{\mathrm{TV}}(\mu_{t_{0}},\eta_{X})$ at time $t_{0}=(1-\varepsilon)\,t_{\text{cut-off}}$, then the same estimate will also hold for $1-d_{\mathrm{TV}}(\mu_{t},\eta_{X})$ with $t < t_{0}$. \end{remark} \begin{remark} Actually, the same result holds for the $\mathscr{L}^{p}$-norm of $p_{t}(x)-1$, and in the broader setting of Markov processes with a stationary measure; see \emph{e.g.} \cite[Proposition 3.1]{CSC08}. Our proof is a little more elementary. \end{remark} \subsubsection{Brownian motions as solutions of SDE} A third equivalent definition of Brownian motions on compact Lie groups is by means of stochastic differential equations. More precisely, given a Brownian motion $(k_{t})_{t \in \mathbb{R}_{+}}$ traced on a compact Lie group $K$, there exists a (trajectorially unique) standard $d$-dimensional Brownian motion $(W_{t})_{t \in \mathbb{R}_{+}}$ on the Lie algebra $\mathfrak{k}$ that drives a stochastic differential equation satisfied by $f(k_{t})$, for every test function $f \in \mathscr{C}^{2}(K)$ (\emph{cf.} \cite{Liao04book}).
So for instance, on a unitary group $\mathrm{U}(n,\mathbb{C})$, the Brownian motion is the solution of the SDE $$ U_{0}=I_{n}\qquad;\qquad dU_{t}=\mathrm{i}\,U_{t}\cdot dH_{t}-\frac{1}{2}\,U_{t}\,dt, $$ where $(H_{t})_{t \in \mathbb{R}_{+}}$ is a Brownian hermitian matrix normalized so that at time $t=1$ the diagonal coefficients are independent real gaussian variables of variance $1/n$, and the upper-diagonal coefficients are independent complex gaussian variables with real and imaginary parts independent and of variance $1/2n$. In the general case, let us introduce the \emph{Casimir operator} \begin{equation} C=\sum_{i=1}^{d} X_{i} \otimes X_{i}.\label{casimir} \end{equation} This tensor should be considered as an element of the universal enveloping algebra $U(\mathfrak{k})$. Then, for every representation $\pi : K \to \mathrm{GL}(V)$, the image of $C$ by the infinitesimal representation $d\pi : U(\mathfrak{k}) \to \mathrm{End}(V)$ commutes with $d\pi(\mathfrak{g})$. In particular, for an irreducible representation $V$, $d\pi(C)$ is a scalar multiple $\kappa_{V}\mathrm{id}_{V}$ of $\mathrm{id}_{V}$. Assume that $K$ is a classical simple Lie group. Then its ``geometric'' representation is irreducible, so $\sum_{i=1}^{d} (X_{i})^{2}=\alpha_{\mathfrak{g}}\,I_{n}$ if one sees the $X_{i}$'s as matrices in $\mathrm{M}(n,\mathbb{R})$ or $\mathrm{M}(n,\mathbb{C})$ or $\mathrm{M}(n,\mathbb{H})$. The stochastic differential equation satisfied by a Brownian motion on $K$ is then $$ k_{0}=e_{K}\qquad;\qquad dk_{t}=k_{t}\cdot dB_{t} + \frac{\alpha_{\mathfrak{k}}}{2} k_{t}\,dt, $$ where $B_{t}=\sum_{i=1}^{d}W_{t}^{i}\,X_{i}$ is a standard Brownian motion on the Lie algebra $\mathfrak{k}$. The constant $\alpha_{\mathfrak{k}}$ is given in the classical cases by\label{casimircoefficient} $$ \alpha_{\mathfrak{su}(n)}=-\frac{n^{2}-1}{n^{2}}\quad;\quad\alpha_{\mathfrak{so}(n)}=-\frac{n-1}{n}\quad;\quad \alpha_{\mathfrak{sp}(n)}=-\frac{2n+1}{2n} $$ see \cite[Lemma 1.2]{Lev11}. These Casimir operators will play a prominent role in the computation of the densities of these Brownian motions (\emph{cf.} \S\ref{weyltheory}), and also at the end of this paper (\S\ref{zonal}), see Lemma \ref{expectationcoefficients}. \subsection{Chen-Saloff-Coste results on $\mathscr{L}^{p}$-cut-offs of Markov processes}\label{sofar} Fix $p \in [1,\infty)$, and consider a Markov process $\mathfrak{X}=(x_{t})_{t \in \mathbb{R}_{+}}$ with values in a measurable space $(X,\mathcal{B}(X))$, and admitting an invariant probability $\eta$. One denotes $\mu_{t,x}$ the marginal law of $x_{t}$ assuming $x_{0}=x$ almost surely, and $$d_{t}^{p}(\mathfrak{X})=\max_{x\in X} \left(\int_{X} \left|\frac{d\mu_{t,x}}{d\eta}(y)-1\right|^{p}\eta(dy)\right)^{\frac{1}{p}},$$ with by convention $$d_{t}^{p}(\mathfrak{X})=\begin{cases} 2&\text{if }p=1,\\+\infty &\text{if }p>1,\end{cases}$$ when $\mu_{t,x}$ is not absolutely continuous with respect to $\eta$. This is obviously a generalization of the total variation distance to the stationary measure. In virtue of the remark stated just after Lemma \ref{nonincreasingdistance}, $t \mapsto d_{t}^{p}(\mathfrak{X})$ is always non-increasing. 
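Note that for the Brownian motions studied in this paper, all the starting points give the same distance to equilibrium: the Laplace-Beltrami operator and the Haar measure are invariant under the isometry group, so the law of the process started at $x$ is the image of $\mu_{t}$ by an isometry preserving $\eta$. In particular, for $p=1$,
$$d_{t}^{1}(\mathfrak{X})=\int_{X}\left|\frac{d\mu_{t}}{d\eta}(y)-1\right|\eta(dy)=2\,d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}),$$
so the max-$\mathscr{L}^{1}$-cut-offs considered hereafter are exactly statements on the total variation distance to the Haar measure.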
A sequence of Markov processes $(\mathfrak{X}^{(n)})_{n \in \mathbb{N}}$ with values in measurable spaces $(X^{(n)},\mathcal{B}(X^{(n)}))_{n \in \mathbb{N}}$ is said to have a \emph{max-$\mathscr{L}^{p}$-cut-off} with cut-off times $(t^{(n)})_{n \in \mathbb{N}}$ if $$\lim_{n \to \infty} \left(\sup_{t>(1+\varepsilon)t^{(n)}}d_{t}^{p}(\mathfrak{X}^{(n)})\right)=0\qquad;\qquad \lim_{n \to \infty} \left(\inf_{t<(1-\varepsilon)t^{(n)}}d_{t}^{p}(\mathfrak{X}^{(n)})\right)=\limsup_{n\to \infty}\,d_{0}^{p}(\mathfrak{X}^{(n)})=M>0$$ for every $\varepsilon \in (0,1)$ --- usually $M$ will be equal to $2$ or $+\infty$. A generalization of Theorem \ref{diaconishuffle} ensures that these $\mathscr{L}^{p}$-cut-offs occur for instance in the case of riffle shuffles of cards, with $t^{(n)}=\frac{3\log n}{2\log 2}$ for every $p \in [1,+\infty)$. In \cite{CSC08}, Chen and Saloff-Coste showed that a general criterion due to Peres ensures an $\mathscr{L}^{p>1}$-cut-off for a sequence of Markov processes; but then one does not necessarily know the value of the cut-off time $t^{(n)}$. Call \emph{spectral gap} $\lambda(\mathfrak{X})$ of a Markov process $\mathfrak{X}$ the largest $c \geq 0$ such that for all $f \in \mathscr{L}^{2}(X,\eta)$ and all time $t$, $\|(P_{t}-\eta)f\|_{\mathscr{L}^{2}(X)}\leq \mathrm{e}^{-tc}\,\|f\|_{\mathscr{L}^{2}(X)},$ where $(P_{t})_{t \in \mathbb{R}_{+}}$ stands for the semigroup associated to the Markov process. \begin{theorem}[Chen-Saloff-Coste]\label{csc} Fix $p \in (1,\infty)$. One considers a family of Markov processes $(\mathfrak{X}^{(n)})_{n \in \mathbb{N}}$ with normal operators $P_{t}$ and spectral gaps $\lambda^{(n)}$, and one assumes that $\lim_{t \to \infty}d_{t}^{p}(\mathfrak{X}^{(n)})=0$ for every $n$. For $\varepsilon_{0} >0$ fixed, set $$t^{(n)}=\inf\{t : d_{t}^{p}(\mathfrak{X}^{(n)})\leq \varepsilon_{0}\}.$$ The family of Markov processes has a max-$\mathscr{L}^{p}$-cut-off if and only if Peres' criterion is satisfied: $$\lim_{n \to \infty} \lambda^{(n)}\,t^{(n)}=+\infty.$$ \end{theorem} In this case, the sequence $(t^{(n)})_{n \in \mathbb{N}}$ gives the values of the cut-off times. A lower bound on $t^{(n)}$ also ensures the cut-off phenomenon; but then, the cut-off time remains \emph{unknown}. Nevertheless, an important application of this general criterion is (see \cite[Theorem 1.2]{CSC08}, and also \cite[Theorem 1.1 and 1.2]{SC04}): \begin{corollary}[Saloff-Coste]\label{window} Consider the Brownian motions traced on the special orthogonal groups $\mathrm{SO}(n,\mathbb{R})$, with the normalization of the metric detailed in the previous paragraph. They exhibit for every $p \in (1,\infty)$ a cut-off with $t^{(n)}$ asymptotically between $2\log n$ and $4\log n$ --- notice that $t^{(n)}$ depends on $p$. \end{corollary} \noindent Indeed, the spectral gap stays bounded and has a positive limit (which we shall compute later), whereas $t^{(n)}$ was shown by Saloff-Coste to be a $O(\log n)$. Similar results are presented in \cite{SC04} in the broader setting of simple compact Lie groups or compact symmetric spaces, but without a proof of the cut-off phenomenon (Saloff-Coste gave a window for $t^{(n)}$ for every $p \in [1,+\infty]$). The main result of our paper is that a cut-off indeed occurs for every $p \in [1,+\infty)$, for every classical simple compact Lie group or classical simple compact symmetric space, and with a cut-off time equal to $\log n$ or $2\log n$ depending on the type of the space considered.
In particular, the main improvements in comparison to the aforementioned theorems are: \begin{enumerate} \item the case $p=1$ is now included; \item one knows the precise value of the cut-off time. \end{enumerate} \subsection{Statement of the main results and discriminating events}\label{statement} \begin{theorem}\label{main} Let $\mu_{t}$ be the marginal law of the Brownian motion traced on a classical simple compact Lie group, or on a classical simple compact symmetric space. There exist positive constants $\alpha$, $\gamma_{b}$, $\gamma_{a}$, $c$, $C$ and an integer $n_{0}$ such that in each family, for all $n \geq n_{0}$, \begin{align} &\forall \varepsilon \in (0,1/4),\,\,\, d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq 1-\frac{c}{n^{\gamma_{b}\varepsilon}}\,\,\,\text{ if }t=\alpha\,(1-\varepsilon)\,\log n;\label{mainlower}\\ &\forall \varepsilon \in (0,\infty),\,\,\, d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \leq \frac{C}{n^{\gamma_{a}\varepsilon/4}}\,\,\,\,\,\,\,\,\,\,\,\,\text{ if }t=\alpha\,(1+\varepsilon)\,\log n.\label{mainupper} \end{align} The constants $\alpha$, $\gamma_{b}$ and $\gamma_{a}$ are determined by the type of the space considered, and then one can make the following choices for $n_{0}$, $c$ and $C$: $$ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $K$ or $G/K$ & $\beta$ & $\alpha$ & $\gamma_{b}$ & $\gamma_{a}$ & $n_{0}$& $c$&$C$\\ \hline \hline $\mathrm{SO}(n,\mathbb{R})$ & $1$& $2$& $2$& $2$& $10$& $36$ & $6$\\ \hline $\mathrm{SU}(n,\mathbb{C})$ & $2$ & $2$ &$2$&$4$& $2$ & $8$ & $10$\\ \hline $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$& $4$ & $2$ &$2$&$2$& $3$ & $5$& $3$\\ \hline \hline $\mathrm{Gr}(n,q,\mathbb{R})$& $1$ & $1$ &$1$&$1$& $10$ & $32$& $2$\\ \hline $\mathrm{Gr}(n,q,\mathbb{C})$& $2$ & $1$ &$1$&$2$& $2$ & $32$& $2$\\ \hline $\mathrm{Gr}(n,q,\mathbb{H})$& $4$ & $1$ &$1$&$1$& $3$ & $16$& $2$\\ \hline \hline $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$& $1$ & $1$ &$2$&$1$& $10$ & $8$& $2$\\ \hline $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$& $2$ & $1$ &$2$&$2$& $2$ & $24$& $8$\\ \hline $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$& $2$ & $1$ &$2$&$2$& $2$ & $22$& $8$\\ \hline $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$& $4$ & $1$ &$2$&$1$& $3$ & $17$& $2$\\ \hline \end{tabular} $$ \end{theorem} \figcap{ \includegraphics{DTV.pdf} }{Aspect of the function $t \mapsto d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})$ for the Brownian motion on a classical simple compact Lie group or on a classical simple compact symmetric space.\label{mainfig}} As the function $t \mapsto d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})$ is non-increasing in $t$, the aspect of this function in the scale $t \propto \log n$ is then always as on Figure \ref{mainfig}. The constants $c$ and $C$ in Theorem \ref{main} can be slightly improved by raising the integer $n_{0}$; the restriction $n\geq n_{0}$ will only be used to ease certain computations and to get reasonable constants $c$ and $C$. A result similar to Theorem \ref{main} has been proved by Rosenthal and Porod in \cite{Ros94,Por96a,Por96b} for random products of (real, or complex, or quaternionic) reflections. Our proofs are largely inspired by theirs, though quite different in the details of the computations.
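To fix ideas, let us spell out the first line of the table. For the special orthogonal groups $\mathrm{SO}(n,\mathbb{R})$ with $n \geq 10$, Theorem \ref{main} states that
$$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq 1-\frac{36}{n^{2\varepsilon}}\,\,\,\text{ if }t=2\,(1-\varepsilon)\log n\,\,(\varepsilon \in (0,1/4)),\qquad d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \leq \frac{6}{n^{\varepsilon/2}}\,\,\,\text{ if }t=2\,(1+\varepsilon)\log n\,\,(\varepsilon>0),$$
so the cut-off for $\mathrm{SO}(n,\mathbb{R})$ occurs around time $t_{\text{cut-off}}=2\log n$, which corresponds to the lower end of the window $[2\log n,4\log n]$ recalled in Corollary \ref{window}.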
For the upper bound \eqref{mainupper}, it has long been known that if $\lambda(X_{n})$ denotes the spectral gap of the heat semigroup associated to the infinitesimal generator $L=\frac{1}{2}\Delta$, then for $n$ fixed, the total variation distance $d_{\mathrm{TV}}(\mu_{t},\eta_{X_{n}})$ decreases exponentially fast (see \emph{e.g.} \cite{Liao04paper}): $$ d_{\mathrm{TV}}(\mu_{t},\eta_{X_{n}}) \leq C(X_{n})\,\mathrm{e}^{-\lambda(X_{n})\,t}. $$ Consider now the family of spaces $(X_{n})_{ n \in \mathbb{N}}$, and assume that $C(X_{n})=C\,n^{\delta}$, and that $\lambda(X_{n})$ stays almost constant, equal to $\lambda$ --- this last condition is ensured by the normalization \eqref{normalization}. Then, one obtains for $t=(1+\varepsilon)\,\frac{\delta}{\lambda}\,\log n$ the bound $$ d_{\mathrm{TV}}(\mu_{t},\eta_{X_{n}}) \leq \frac{C}{n^{\delta \varepsilon}}. $$ Thus in theory, the upper bound \eqref{mainupper} follows from the calculations of the constants $C(X_{n})$ and $\lambda(X_{n})$ in each classical family. It is very hard to find directly a constant $C(X_{n})$ that works for every time $t$. But on the other hand, by using the representation theory of the classical simple compact Lie groups (\emph{cf.} Section \ref{fourier}), one can determine series of negative exponentials that dominate the total variation distance; see Proposition \ref{scaryseries}. In these series, the ``least negative'' exponentials give the correct order of decay $\lambda(X_{n})$. It remains then to prove that the other terms can be uniformly bounded. This is tedious, but doable, and these precise estimates are shown in Section \ref{upper}: we shall adapt and improve the arguments of \cite{Ros94,Por96a,Por96b,CSST}. As for the lower bound \eqref{mainlower}, it is obtained by looking at \emph{discriminating events}, which have a probability close to $1$ with respect to a marginal law $\mu_{t}$ with $t <t_{\text{cut-off}}$, and close to $0$ with respect to the Haar measure. For instance, in the case of riffle shuffles, the sizes of the \emph{rising sequences} of a permutation enable one to discriminate a random shuffle of order $k < k_{\text{cut-off}}$ from a uniform permutation; see \cite[\S2]{BD92}. In the case of a Brownian motion on a classical compact Lie group, it is the \emph{trace} of the matrices that allows one to discriminate Haar distributed elements and random Brownian elements before cut-off time. \figcap{ \includegraphics[scale=0.5]{density3.pdf} }{Aspect of the density of the trace $\mathrm{tr}\,U_{n}$ of a random unitary matrix, with $U_{n} \sim \mathrm{Haar}$ for the left peak, and $U_{n} \sim \mu_{t<t_{\text{cut-off}}}$ for the right peak (using \texttt{Mathematica}).} Indeed, consider for instance a random unitary matrix $U_{n}$ of size $n$, taken under the Haar measure or under the marginal law $\mu_{t}$ of the Brownian motion at a given time $t$. Then, $\mathrm{tr}\, U_{n}$ is a complex-valued random variable, and we shall see that $$ \mathbb{E}\left[|\mathrm{tr}\, U_{n}-m|^{2}\right]\leq 1, $$ where $m$ is the mean of $\mathrm{tr}\, U_{n}$; and this, for any $n\geq 1$ and any time $t \geq 0$ if $U_{n} \sim \mu_{t}$. However, $m=0$ under the Haar measure, whereas $|m|\gg 1$ for $t < t_{\text{cut-off}}$. So, the trace of a Brownian unitary matrix before cut-off time will never ``look the same'' as the trace of a Haar distributed unitary matrix.
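The discrimination can be quantified by a simple application of the Bienaym\'e-Chebyshev inequality (this is, in substance, the argument of Section \ref{lower}). Since $\mathbb{E}[|\mathrm{tr}\,U_{n}-m|^{2}]\leq 1$, one has $\mathbb{P}[|\mathrm{tr}\,U_{n}-m|\geq K]\leq \frac{1}{K^{2}}$ for every $K>0$. Under the Haar measure, $m=0$, so the event $A_{K}=\{|\mathrm{tr}\,U_{n}|\geq K\}$ has probability at most $\frac{1}{K^{2}}$; under $\mu_{t}$ with $|m|\geq 2K$, the complementary event has probability at most $\frac{1}{K^{2}}$. Consequently,
$$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq \mu_{t}(A_{K})-\mathrm{Haar}(A_{K}) \geq 1-\frac{2}{K^{2}}\qquad\text{as soon as }|m|\geq 2K,$$
which is close to $1$ when $|m|$ is large.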
Up to a minor modification, the same argument will work for special orthogonal groups and compact symplectic groups --- in this latter case, the trace of a quaternionic matrix of size $n$ is defined as the trace of the corresponding complex matrix of size $2n$, \emph{cf.} the remark at the end of \S\ref{symmetric}. Over the classical simple compact symmetric spaces, the trace of matrices will be replaced by a zonal spherical function ``of minimal non-zero weight''; these minimal zonal spherical functions are also those that give the order of decay of the series of negative exponentials that dominate $d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})$ after the cut-off time. This argument for the lower bound was already known, since it has been used successfully in \cite{SC94} to prove the cut-off phenomenon over spheres: we have simply extended it to the case of general simple compact symmetric spaces (\emph{cf.} Section \ref{lower}). An important consequence of Theorem \ref{main} and its proof is that one also has a max-$\mathscr{L}^{p}$-cut-off for every $p \in [1,\infty]$. Moreover, the value of the cut-off time is known when $p \in [1,2]$. \begin{corollary}\label{lpcutoff} For every $p \in [1,+\infty]$, the family of Brownian motions $(\mathfrak{X}_{n})_{n \in \mathbb{N}}$ traced on simple compact Lie groups $(K_{n})_{n \in \mathbb{N}}$ in one of the three classical families (respectively, on simple compact symmetric spaces of type non-group $(X_{n})_{n \in \mathbb{N}}$ in one of the seven classical families) has a max-$\mathscr{L}^{p}$-cut-off. If $p \in [1,2]$, it is with respect to the sequence $t^{(n)}=2\log n$ (respectively, $t^{(n)}=\log n$). \end{corollary} \begin{proof} The upper bound in Theorem \ref{main} will be shown by using the Cauchy-Schwarz inequality and estimating the $\mathscr{L}^{2}$-norm of $|p_{t}-1|$, which can be written as a series $S_{n}(t)$ of negative exponentials. Section \ref{upper} is devoted to the proof of the fact that $S_{n}(t)$ is small after cut-off time, and on the other hand, the same series trivially goes to infinity before cut-off time, because some of its terms go to infinity (consider for instance the term indexed by the ``minimal'' label identified in Lemma \ref{decay}). Thus, our proof of Theorem \ref{main} readily implies an $\mathscr{L}^{2}$-cut-off; and since the Brownian motion is invariant by action of the isometry group, it is even a max-$\mathscr{L}^{2}$-cut-off. We can then use \cite[Theorem 5.3]{CSC08} to obtain the existence of a max-$\mathscr{L}^{p}$-cut-off for every $p \in (1,+\infty]$, and the comparison theorem of mixing times \cite[Proposition 5.1]{CSC08} to get the value of the cut-off time when $p$ is between $1$ and $2$. \end{proof} \begin{remark} When $p=+\infty$, \cite[Theorem 5.3]{CSC08} also gives the value of the cut-off time: it is $4 \log n$ in the group case, and $2 \log n$ in the non-group case. However, when $p \in (2,+\infty)$, one still does not know the value of the mixing time: one has only the window $\alpha\log n \leq t^{(n)} \leq 2\alpha \log n$. \end{remark} \subsection{Organization of the paper} In Section \ref{fourier}, we recall the basics of representation theory and harmonic analysis on compact symmetric spaces, with a particular emphasis on explicit formulas since we will need them in each case.
All of it is really classical and of course well-known by the experts, but it is necessary in order to fix the notations related to the harmonic analysis of the classical compact Lie groups and compact symmetric spaces. In Section \ref{upper}, we use the explicit expansion of the densities to establish precise upper bounds on $\|p_{t}-1\|_{\mathscr{L}^{2}(X,\eta)}$; by Cauchy-Schwarz we obtain similar upper bounds on $d_{\mathrm{TV}}(\mu_{t},\eta)$. The main idea is to control the growth of the dimension of an irreducible spherical representation involved in the expansion of $p_{t}$ when the corresponding highest weight grows in the lattice of weights (\S\ref{versus}). The crucial fact, which was apparently unknown, is that precisely at cut-off time, the quantity $$\begin{cases} (D^{\lambda})^{2}\,\mathrm{e}^{-t_{\text{cut-off}} B_{n}(\lambda)} & \text{in the group case},\\ D^{\lambda}\,\mathrm{e}^{-t_{\text{cut-off}} B_{n}(\lambda)} & \text{in the non-group case}, \end{cases} $$ stays \emph{bounded} for every $n$ and every $\lambda$; $D^{\lambda}$ being the dimension of the irreducible or spherical irreducible representation of label $\lambda$, and $-B_{n}(\lambda)$ the associated eigenvalue of the Laplace-Beltrami operator. Combining this argument with a simple analysis of the generating series $$\sum_{\lambda \text{ partition}} x^{|\lambda|}=\prod_{i \geq 1}\frac{1}{1-x^{i}},$$ this is sufficient to get a correct upper bound after cut-off time. Section \ref{lower} is then devoted to the proof of the lower bounds. We use in each case a ``minimal'' zonal spherical function (the trace of matrices in the case of groups; see \S\ref{zonal}), and we compute its expectation and variance under Haar measure and Brownian measures (\S\ref{bienayme}). A simple application of Bienaym\'e-Chebyshev's inequality will then show that the chosen zonal spherical function is indeed discriminating. An algebraic difficulty occurs in the case of symmetric spaces $G/K$ of type non-group, as one has to compute the expansion in zonal functions of the square of the discriminating zonal function, and this is far less obvious than in the case of irreducible characters. The problem is solved by writing the discriminating zonal function in terms of the coefficients of the matrices in the isometry group $G$, and by computing the joint moments of these coefficients under a Brownian measure. The combinations of negative exponentials appearing in these formulas are then in correspondence with the expansions of the squares of the discriminating zonal spherical functions. \section{Fourier expansion of the densities}\label{fourier} In this section, we explain how to compute the density $p_{t}^{K}(k)$ or $p_{t}^{X}(x)$ of the marginal law $\mu_{t}$ of the Brownian motion traced on a classical compact symmetric space. This computation is done in an abstract setting for instance in \cite{Liao04paper} or \cite{Apple11}, and we shall give at the end of this section its concrete counterpart in each classical case, see Theorem \ref{explicitdensity}. The main ingredients of the computation are: \begin{enumerate} \item Peter-Weyl's theorem and its refinement due to Cartan, that ensures that the matrix coefficients of the irreducible representations of $K$ (respectively, of the irreducible spherical representations of $G$) form an orthogonal basis of $\mathscr{L}^{2}(K,\eta)$ (respectively, of $\mathscr{L}^{2}(G/K,\eta)$); see \S\ref{peterweyl}. 
\item the classical highest weight theory, that describes the irreducible representations of a compact simple Lie group and give formulas for their dimensions and characters; see \S\ref{weyltheory}. \end{enumerate} On these subjects, we refer to the two books by Helgason \cite{Hel78,Hel84}, and also to \cite{BD85,Var89,FH91,Far08,GW09} for the representation theory of compact Lie groups. We shall only recall what is needed in order to have a good understanding of the formulas of Theorem \ref{explicitdensity}. We shall also fix all the notations related to the harmonic analysis on (classical) compact symmetric spaces. \subsection{Peter-Weyl's theorem and Cartan's refinement}\label{peterweyl} Let $K$ be a compact (Lie) group, and $\widehat{K}$ be the set of isomorphism classes of irreducible complex linear representations of $K$. Each class $\lambda \in \widehat{K}$ is finite-dimensional, and we shall denote $V^{\lambda}$ the corresponding complex vector space; $\rho^{\lambda} : K \to \mathrm{U}(V^{\lambda})$ the representation morphism; $D^{\lambda}=\dim_{\mathbb{C}}V^{\lambda}$ the dimension of the representation; $\chi^{\lambda}(\cdot)=\mathrm{tr}\,\rho^{\lambda}(\cdot)$ the character; and $\widehat{\chi}^{\lambda}(\cdot)=\chi^{\lambda}(\cdot)/D^{\lambda}$ the normalized character. An Hermitian scalar product on $\mathrm{End}(V^{\lambda})$ is $\scal{M}{N}_{\mathrm{End}(V^{\lambda})}=D^{\lambda}\,\mathrm{tr}(M^{\dagger}N)$. For every class $\lambda$ and every function $f \in \mathscr{L}^{2}(K)$, we set $$\widehat{f}(\lambda)=\int_{K}f(k)\,\rho^{\lambda}(k)\,dk;$$ this is an element of $\mathrm{End}(V^{\lambda})$. We refer to \cite{BD85,Far08} for a proof of the following results. \begin{theorem}[Peter-Weyl] The (non-commutative) Fourier transform $\mathcal{F} : f \mapsto \sum_{\lambda \in \widehat{K}}\widehat{f}(\lambda)$ realizes an isometry and an isomorphism of (non-unital) algebras between $\mathscr{L}^{2}(K,\eta)$ and $\bigoplus_{\lambda \in \widehat{K}}\mathrm{End}(V^{\lambda})$. So, if $f \in \mathscr{L}^{2}(K)$, then \begin{align} f(k)&=\sum_{\lambda \in \widehat{K}} D^{\lambda}\,\mathrm{tr}\left(\widetilde{f}(\lambda)\,\rho^{\lambda}(k)\right)= \sum_{\lambda \in \widehat{K}} D^{\lambda}\,\mathrm{tr}\left(\int_{K}f(h^{-1}k)\,\rho^{\lambda}(h)\,dh\right)\label{fourierexpansion}\\ \|f\|^{2}_{\mathscr{L}^{2}(K)}&=\sum_{\lambda\in \widehat{K}} \left\|\widehat{f}(\lambda)\right\|_{\mathrm{End}(V^\lambda)}^{2}=\sum_{\lambda\in \widehat{K}} D^{\lambda}\,\mathrm{tr}\left(\widehat{f}(\lambda)^{\dagger}\widehat{f}(\lambda)\right)\label{parseval} \end{align} where $\widetilde{f}(\lambda)=\widehat{f^{-}}(\lambda)=\int_{K}f(k^{-1})\,\rho^{\lambda}(k)\,dk$. \end{theorem} Assume now that $f$ is in $\mathscr{L}^{2}(K,\eta)^{K}$, the subalgebra of conjugacy-invariant functions. The Fourier expansion \eqref{fourierexpansion} and the Parseval identity \eqref{parseval} become then $$f(k)=\sum_{\lambda \in \widehat{K}}(D^{\lambda})^{2}\,\widehat{\chi}^{\lambda}(f^{-})\,\widehat{\chi}^{\lambda}(k)\qquad;\qquad \|f\|_{\mathscr{L}^{2}(K)}^{2}=\sum_{\lambda \in \widehat{K}} |\chi^{\lambda}(f)|^{2},$$ and in particular, the irreducible characters $\chi^{\lambda}$ form an orthonormal basis of $\mathscr{L}^{2}(K)^{K}$. Cartan gave a statement generalizing Theorem \ref{peterweyl} for $\mathscr{L}^{2}(G/K,\eta)$, where $X=G/K$ is a simply connected irreducible compact symmetric space. 
Call \emph{spherical} an irreducible representation $(V^{\lambda},\rho^{\lambda})$ of $G$ such that $(V^{\lambda})^{K}$, the space of vectors invariant by $\rho^{\lambda}(K)$, is non-zero. Then, it is in fact one-dimensional, so one can find a vector $e^{\lambda}$ of norm $\|e^{\lambda}\|^{2}=1$, unique up to multiplication by $z \in \mathbb{T}$, such that $(V^{\lambda})^{K}=\mathbb{C} e^{\lambda}$. Denote then $\mathscr{C}^{\lambda}(G/K)$ the set of functions from $G$ to $\mathbb{C}$ that can be written as \begin{equation} f(g)=f_{v}(g)=\scal{v}{\rho^{\lambda}(g)(e^{\lambda})}\quad\text{with }v\in V^{\lambda}.\label{generalizedmatrixcoeff} \end{equation} Such a function is right-$K$-invariant, so it can be considered as a function from $G/K$ to $\mathbb{C}$. \begin{theorem}[Cartan]\label{helgason} Let $\widehat{G}^{K}$ be the set of spherical irreducible representations of $G$. The Hilbert space $\mathscr{L}^{2}(G/K,\eta)$ is isomorphic to the orthogonal sum $\bigoplus_{\lambda \in \widehat{G}^{K}}\mathscr{C}^{\lambda}(G/K)$. This decomposition corresponds to the Fourier expansion \begin{equation} f(gK)=\sum_{\lambda \in \widehat{G}^{K}}D^{\lambda}\,\mathrm{tr}\left(\int_{G}f(h^{-1}gK)\,\rho^{\lambda}(h)\,dh\right)\label{superfourier} \end{equation} for $f \in \mathscr{L}^{2}(G/K)$. \end{theorem} \noindent In each space $\mathscr{C}^{\lambda}(G/K)$, the space of left $K$-invariant functions is one-dimensional, and it is generated by the \emph{zonal spherical function} $ \phi^{\lambda}(gK)=\scal{e^{\lambda}}{\rho^{\lambda}(g)(e^{\lambda})}.$ These spherical functions form an orthogonal basis of $\mathscr{L}^{2}(X)^{K}$ when $\lambda$ runs over spherical representations. So, a $K$-invariant function writes as $$f(gK)=\sum_{\lambda \in \widehat{G}^{K}} D^{\lambda}\,\phi^{\lambda}(f^{-})\,\phi^{\lambda}(gK),$$ where $\phi^{\lambda}(f)=\int_{G/K}f(x)\,\phi^{\lambda}(x)\,dx=\scal{e^{\lambda}}{\int_{G} f(gK)\,\rho^{\lambda}(g)(e^{\lambda})\,dg}$. To conclude with, notice that the decomposition of Theorem \ref{helgason} is the decomposition of $\mathscr{L}^{2}(G/K,\eta)$ in common eigenspaces of the elements of $\mathscr{D}(G/K)$, the commutative algebra of $G$-invariant differential operators on $X$. Thus, there are morphisms of algebras $c^{\lambda} : \mathscr{D}(G/K) \to \mathbb{C}$ such that $$L(f^{\lambda})=c^{\lambda}(L)\,f^{\lambda}$$ for every $\lambda \in \widehat{G}^{K}$, every $L \in \mathscr{D}(G/K)$ and every $f^{\lambda} \in \mathscr{C}^{\lambda}(G/K)$. \subsection{Highest weight theorem and Weyl's character formula}\label{weyltheory} The theory of highest weights of representations enables us to identify $\widehat{K}$ or $\widehat{G}^{K}$, and to compute the coefficients $c^{\lambda}(\Delta)$ associated to the Laplace-Beltrami operator. If $G$ is a connected compact Lie group, its maximal tori are all conjugated, and every element of $K$ is contained in a maximal torus $T$. Denote $W=\mathrm{Norm}(T)/T$ the \emph{Weyl group} of $G$, and call \emph{weight} of a representation $V$ of $G$ an element of $\mathfrak{t}^{*}$, or equivalently a group morphism $\omega : T \to \mathbb{T}$ such that $V^{\omega}=\{v \in V\,\,|\,\, \forall t \in T,\,\,\,t \cdot v = \omega(t)\cdot v\}\neq 0$. Every representation $V$ of $G$ is the direct sum of its weight subspaces $V^{\omega}$, and this decomposition is always $W$-invariant. Besides, the set of all weights of all representations of $G$ is a lattice $\mathbb{Z}\Omega$ whose rank is also the dimension of $T$. 
We take a $W$-invariant scalar product on the real vector space $\mathbb{R}\Omega=\mathbb{Z}\Omega \otimes_{\mathbb{Z}}\mathbb{R}$, \emph{e.g.}, the dual of the scalar product given by Equation \eqref{normalization}, where $\mathbb{R}\Omega$ is identified with $\mathfrak{t}^{*}$ by mean of $\omega \mapsto d_{e}\omega$ for $\omega \in \mathbb{Z}\Omega$. We also fix a closed fundamental set $C$ for the action of the Weyl group on $\mathbb{R}\Omega$. We call \emph{dominant} a weight $\omega$ that falls in the Weyl chamber $C$. A \emph{root} of $G$ is a non-zero weight of the adjoint representation. The set of roots $\Phi$ is a root system, which means that certain combinatorial relations are satisfied between its elements. There is a unique way to split $\Phi$ in a set $\Phi_{+}$ of positive roots and a set $\Phi_{-}=-\Phi_{+}$ such that $$C=\{x \in \mathbb{R}\Omega\,\,|\,\,\forall \alpha \in \Phi_{+},\,\, \scal{x}{\alpha} \geq 0\}.$$ Call \emph{simple} a positive root $\alpha$ that cannot be written as the sum of two positive roots; and simple coroot an element $\check{\alpha}=\frac{2\alpha}{\scal{\alpha}{\alpha}}$ with $\alpha$ simple root. Then, a distinguished basis of the lattice $\mathbb{Z}\Omega$ is given by the \emph{fundamental weights} $\varpi_{1},\varpi_{2},\ldots,\varpi_{r}$, the dual basis of the basis of coroots. Hence, the sets of weights and of dominant weights have the following equivalent descriptions: \begin{align*} \mathbb{Z}\Omega&=\bigoplus_{i=1}^{r}\mathbb{Z}\varpi_{i}=\left\{x \in \mathbb{R}\Omega\,\,\big|\,\,\forall \alpha \in \Phi,\,\,\,\frac{\scal{x}{\alpha}}{\scal{\alpha}{\alpha}} \in \mathbb{Z} \right\} ;\\ \mathrm{Dom}(\mathbb{Z}\Omega)&=\bigoplus_{i=1}^{r}\mathbb{N}\varpi_{i}=\left\{x \in \mathbb{R}\Omega\,\,\big|\,\,\forall \alpha \in \Phi,\,\,\,\frac{\scal{x}{\alpha}}{\scal{\alpha}{\alpha}} \in \mathbb{N} \right\}. \end{align*} Suppose now that $G$ is a semi-simple simply connected compact Lie group, and consider the partial order induced by the convex set $C$ on $\mathbb{R}\Omega$. Recall that the Weyl group $W$ is a Coxeter group generated by the symmetries along the simple roots $\alpha_{1},\alpha_{2},\ldots,\alpha_{r}$; so in particular, it admits a signature morphism $\varepsilon : W \to \{\pm 1\}$. Weyl's theorem ensures that every irreducible representation $V$ of $G$ has a unique highest weight $\omega_{0}$ for this order, which is then of multiplicity one and determines the isomorphism class of $V$. Moreover, the restriction to $T$ of the irreducible character associated to a dominant weight $\lambda$ is given by \begin{equation} \chi^{\lambda}(t)=\frac{\sum_{\sigma \in W} \varepsilon(\sigma)\,\sigma(\lambda+\rho)(t)}{\sum_{\sigma \in W}\varepsilon(\sigma)\,\sigma(\rho)(t)},\label{weylcharacter} \end{equation} where $\rho$ is the half-sum of all positive roots, or equivalently the sum of the fundamental weights. This formula degenerates into the dimension formula \begin{equation} D^{\lambda}=\dim V^{\lambda}=\frac{\prod_{\alpha \in \Phi_{+}}\scal{\lambda+\rho}{\alpha}}{\prod_{\alpha \in \Phi_{+}}\scal{\rho}{\alpha}}. \end{equation} \noindent These results make Equation \eqref{fourierexpansion} essentially explicit in the case of a conjugacy invariant function on a (semi-)simple compact Lie group $K$; in particular, we shall see in a moment that the highest weights are labelled by partitions or similar combinatorial objects in all the classical cases. The case of a compact symmetric space $X=G/K$ of type non-group is a little more involved. 
Denote $\theta$ an involutive automorphism of a semi-simple simply connected compact Lie group $G$, with $K=G^{\theta}$. Set $P=\{g \in G\,\,|\,\, \theta(g)=g^{-1}\}$; one has then the Cartan decomposition $G=KP$. In addition to the previous assumptions, one assumes that the maximal torus $T \subset G$ is chosen so that $\theta(T)=T$ and $P \cap T$ is a maximal torus in $P$ (one can always do so up to conjugation of the torus). Then, the Cartan-Helgason theorem (\cite[Theorem 4.1]{Hel84}) says that the spherical representations in $\widehat{G}^{K}$ are precisely the irreducible representations in $\widehat{G}$ that are trivial on $K \cap T=T^{\theta}$. This subgroup $T^{\theta}$ of $T\simeq \mathbb{T}^{r}$ is always the product of a subtorus $\mathbb{T}^{s\leq r}$ with an elementary abelian $2$-group $(\mathbb{Z}/2\mathbb{Z})^{t}$; this will correspond to additional conditions on the size and the parity of the parts of the partitions labeling the highest weights in $\widehat{G}^{K}$ (in comparison to $\widehat{G}$), \emph{cf.} \S\ref{explicit}. The corresponding zonal spherical functions $\phi^{\lambda}$ do not have in general an expression as simple as \eqref{weylcharacter}; see however \cite[Part 1]{HS94}. For most of our computations, this will not be a problem, since we shall only use certain properties of the spherical functions --- \emph{e.g.}, their orthogonality and the formula for the dimension $D^{\lambda}$ --- and not their explicit form; see however \S\ref{zonal}. The last ingredient in the computation of the densities is the value of the coefficient $c^\lambda(\Delta)$ such that $$\frac{\Delta(f^{\lambda})}{2}=c^{\lambda}(\Delta)\,f^{\lambda}$$ for every function $f^{\lambda}$ either in $\mathscr{R}^{\lambda}(K)=\mathrm{Vect}(\{k \mapsto (\rho^{\lambda}(k))_{ij},\,\,\,1\leq i,j \leq D^{\lambda}\})$ in the group case, or in $\mathscr{C}^{\lambda}(G/K)$ in the case of a symmetric space. In the group case, by comparing the definition of the Casimir operator \eqref{casimir} with the definition of the Laplace-Beltrami operator \eqref{laplacebeltrami}, one sees that $2c^{\lambda}(\Delta)$ is also $\kappa_{\lambda}$, the constant by which the Casimir operator $C$ acts \emph{via} the infinitesimal representation $d\rho^{\lambda} : U(\mathfrak{k}) \to \mathrm{End}(V^{\lambda})$ --- \emph{cf.} the remark at the end of \S\ref{brown}. This constant is equal to \begin{equation} \kappa_{\lambda}=-\scal{\lambda+2\rho}{\lambda},\label{kappa} \end{equation} see \cite[Equation (3.4)]{Apple11} and the references therein, or \cite{Lev11} and \cite[Chapter 12]{Far08} for a case-by-case computation.
These later explicit computations follow from the following expressions of the Casimir operators (see \cite[Lemma 1.2]{Lev11}): \begin{align*} C_{\mathfrak{so}(n)}&=\sum_{1\leq i<j\leq n} \left(\frac{E_{ij}-E_{ji}}{\sqrt{n}}\right)^{\otimes 2}\\ C_{\mathfrak{su}(n)}&=\frac{1}{n}\sum_{i=1}^{n} \mathrm{i} E_{ii}\otimes \mathrm{i} E_{ii}-\frac{1}{n^{2}}\sum_{i,j=1}^{n} \mathrm{i} E_{ii}\otimes \mathrm{i} E_{jj}+\sum_{1\leq i<j\leq n} \left(\frac{E_{ij}-E_{ji}}{\sqrt{2n}}\right)^{\otimes 2} + \left(\frac{\mathrm{i} E_{ij}+\mathrm{i} E_{ji}}{\sqrt{2n}}\right)^{\otimes 2} \\ C_{\mathfrak{usp}(n)}&=\frac{1}{2n}\sum_{i=1}^{n} \mathrm{i} E_{ii}\otimes \mathrm{i} E_{ii} + \mathrm{j} E_{ii}\otimes \mathrm{j} E_{ii}+\mathrm{k} E_{ii}\otimes \mathrm{k} E_{ii}\\ &+\sum_{1\leq i<j\leq n} \left(\frac{E_{ij}-E_{ji}}{\sqrt{4n}}\right)^{\otimes 2}+\left(\frac{\mathrm{i} E_{ij}+\mathrm{i} E_{ji}}{\sqrt{4n}}\right)^{\otimes 2}+\left(\frac{\mathrm{j} E_{ij}+\mathrm{j} E_{ji}}{\sqrt{4n}}\right)^{\otimes 2}+\left(\frac{\mathrm{k} E_{ij}+\mathrm{k} E_{ji}}{\sqrt{4n}}\right)^{\otimes 2} \end{align*} where $E_{ij}$ are the elementary matrices in $\mathrm{M}(n,k)$ with $k=\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ --- beware that the tensor product are over $\mathbb{R}$, since we deal with real Lie algebras. In the case of a compact symmetric space, the same Formula \eqref{kappa} gives the action of $\Delta^{G/K}$ on $\mathscr{C}^{\lambda}(G/K)$. Indeed, remember that the Riemannian structures on $G$ and $G/K$ are chosen in such a way that for any $f \in \mathscr{C}^{\infty}(G)$ that is right $K$-invariant, $\Delta^{G/K}(f)(gK)=\Delta^{G}(f)(g).$ Consider then a function in $\mathscr{C}^{\lambda}(G/K)$, viewed as a function on $G$. In Definition \eqref{generalizedmatrixcoeff}, $f$ appears clearly as a linear combination of matrix coefficients of the spherical representation $\lambda$, so the previous discussion holds. \subsection{Densities of a Brownian motion with values in a compact symmetric space}\label{explicit} Let us now see how the previous results can be used to compute the density $p_{t}^{K}(k)$ or $p_{t}^{X}(x)$ of a Brownian motion on a compact Lie group or symmetric space. These densities are in both cases $K$-invariant, so they can be written as $$p_{t}^{K}(k)=\sum_{\lambda \in \widehat{K}} a_{\lambda}(t)\,\widehat{\chi}^{\lambda}(k) \quad \text{or} \quad p_{t}^{X}(x)=\sum_{\lambda \in \widehat{G}^{K}} a_{\lambda}(t)\,\phi^{\lambda}(x)$$ by using either Peter-Weyl's theorem in the case of conjugacy-invariant functions on $K$, or Cartan's theorem in the case of left $K$-invariant functions on $G/K$. We then apply $\frac{\Delta}{2}=\left.\frac{dP_{t}}{dt}\right|_{t=0}$ to these formulas: $$ \frac{\Delta p_{t}^{K}(k)}{2} =\sum_{\lambda \in \widehat{K}} \frac{\kappa_{\lambda}}{2}\,a_{\lambda}(t)\,\widehat{\chi}^{\lambda}(k)= \frac{dp_{t}^{K}(t)}{dt} = \sum_{\lambda \in \widehat{K}}\frac{da_{\lambda}(t)}{dt}\,\widehat{\chi}^{\lambda}(k), $$ and similarly in the case of a compact symmetric space. Thus, $\frac{da_{\lambda}(t)}{dt}=\frac{\kappa_{\lambda}}{2}a_{\lambda}(t)$ and $a_{\lambda}(t)=a_{\lambda}(0)\,\mathrm{e}^{\frac{\kappa_{\lambda}}{2}t}$ for every class $\lambda$. 
The coefficient $a_{\lambda}(0)$ is given in the group case by $$a_{\lambda}(0)=(D^{\lambda})^{2}\int_{K} \widehat{\chi}^{\lambda}(k)\,\delta_{e_{K}}(dk)=(D^{\lambda})^{2}\,\widehat{\chi}^{\lambda}(e_{K})=(D^{\lambda})^{2}$$ and in the case of a compact symmetric space of type non-group by $$a_{\lambda}(0)=D^{\lambda}\, \scal{e^{\lambda}}{\int_{G} \rho^{\lambda}(g)(e^{\lambda})\,\delta_{e_{G}}(dg)}=D^{\lambda}\,\phi^{\lambda}(e_{G})=D^{\lambda}.$$ \begin{proposition}\label{abstractdensity} The density of the law $\mu_{t}$ of the Brownian motion traced on a classical simple compact Lie group $K$ is $$p_{t}^{K}(k)=\sum_{\lambda \in \widehat{K}} \mathrm{e}^{-\frac{t}{2}\scal{\lambda+2\rho}{\lambda}}\,\left(\frac{\prod_{\alpha \in \Phi_{+}} \scal{\lambda+\rho}{\alpha} }{\prod_{\alpha \in \Phi_{+}} \scal{\rho}{\alpha}} \right)^{\!2}\widehat{\chi}^{\lambda}(k),$$ and the density of the Brownian motion traced on a classical simple compact symmetric space $G/K$ is $$p_{t}^{X}(x)=\sum_{\lambda \in \widehat{G}^{K}} \mathrm{e}^{-\frac{t}{2}\scal{\lambda+2\rho}{\lambda}}\,\left(\frac{\prod_{\alpha \in \Phi_{+}} \scal{\lambda+\rho}{\alpha} }{\prod_{\alpha \in \Phi_{+}} \scal{\rho}{\alpha}} \right)\phi^{\lambda}(x).$$ \end{proposition} Let us now apply this in each classical case. We refer to \cite{BD85}, \cite[Chapter 24]{FH91} and \cite[Chapter 10]{Hel78} for most of the computations. Unfortunately, we have not found a reference which explicitly describes the spherical representations; this explains the following long discussion. For convenience, we shall assume: \begin{itemize} \item $n \geq 2$ when considering $\mathrm{SU}(n)$, $\mathrm{SU}(n)/\mathrm{SO}(n)$, $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$ or $\mathrm{SU}(n)/\mathrm{S}(\mathrm{U}(n-q)\times \mathrm{U}(q))$; \item $n \geq 3$ when considering $\mathrm{U}\mathrm{Sp}(n)$, $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$ or $\mathrm{U}\mathrm{Sp}(n)/(\mathrm{U}\mathrm{Sp}(n-q) \times \mathrm{U}\mathrm{Sp}(q))$; \item $n \geq 10$ when considering $\mathrm{SO}(n)$, $\mathrm{SO}(2n)/\mathrm{U}(n)$ or $\mathrm{SO}(n)/(\mathrm{SO}(n-q)\times \mathrm{SO}(q))$. \end{itemize} For $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$ and $\mathrm{SO}(2n)/\mathrm{U}(n)$, the restriction will hold on the ``$2n$'' parameter of the group of isometries. These assumptions shall ensure that the root systems and the Schur functions of type $\mathrm{B}$, $\mathrm{C}$ and $\mathrm{D}$ are not degenerate, and later this will ease certain computations. For Grassmannian varieties, we shall also suppose by symmetry that $q \leq \lfloor \frac{n}{2} \rfloor$. \subsubsection{Special unitary groups and their quotients} In $\mathrm{SU}(n,\mathbb{C})$, a maximal torus is $$ T=\left\{\mathrm{diag}(z_{1},z_{2},\ldots,z_{n})\,\,\bigg|\,\,\forall i,\,\,z_{i} \in \mathbb{T} \text{ and } \prod_{i=1}^{n}z_{i}=1\right\}=\mathbb{T}^{n}/\mathbb{T}, $$ and the Weyl group is the symmetric group $\mathfrak{S}_{n}$. The simple roots and the fundamental weights, viewed as elements of $\mathfrak{t}^{*}$, are $\alpha_{i}=e^{i}-e^{i+1}$ and $$\varpi_{i}=\frac{n-i}{n}(e^{1}+\cdots+e^{i})-\frac{i}{n}(e^{i+1}+\cdots+e^{n})$$ for $i \in \left[\!\left[ 1,n-1\right]\!\right]$, where $e^{i}$ is the coordinate form on $\mathfrak{t}=\mathrm{i}\mathbb{R}^{n}$ defined by $e^{i}(\mathrm{diag}(\mathrm{i} t_{1},\mathrm{i} t_{2},\ldots,\mathrm{i} t_{n}))=t_{i}$. 
The dominant weights are then the $$(\lambda_{1}-\lambda_{2})\varpi_{1}+\cdots +\lambda_{n-1}\varpi_{n-1}=\lambda_{1}e^{1}+\cdots +\lambda_{n-1}e^{n-1}-|\lambda|\,\frac{\varpi_{n}}{n},$$ where $\lambda = (\lambda_{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{n-1})$ is any partition (non-increasing sequence of non-negative integers) of length $(n-1)$; it is then convenient to set $\lambda_{n}=0$. The half-sum of positive roots is given by $2\rho=2(\varpi_{1}+\cdots + \varpi_{n-1})=\sum_{i=1}^{n}(n+1-2i)e^{i}$, and the scalar product on $\mathfrak{t}^{*}$ is $\frac{1}{n}$ times the usual euclidean scalar product $\scal{e^{i}}{e^{j}}=\delta_{ij}$. So, $$ D^{\lambda}=\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\qquad;\qquad \chi^{\lambda}(k)=s_{\lambda}(z_{1},\ldots,z_{n})=\frac{\det(z_{i}^{\lambda_{j}+n-j})_{1\leq i,j \leq n}}{\det(z_{i}^{n-j})_{1\leq i,j \leq n}}, $$ where $z_{1},\ldots,z_{n}$ are the eigenvalues of $k$; thus, characters are given by Schur functions. The Casimir coefficient is $$-\kappa_\lambda=-\frac{|\lambda|^{2}}{n^{2}}+\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{2}+(n+1-2i)\lambda_{i},$$ where $|\lambda|=\sum_{i=1}^{n}\lambda_{i}$ denotes the size of the partition. Though we have chosen to examine only the Brownian motions on simple Lie groups, the same work can be performed over the unitary groups $\mathrm{U}(n,\mathbb{C})$, which are reducible Lie groups. Irreducible representations of $\mathrm{U}(n,\mathbb{C})$ are labelled by sequences $\lambda=(\lambda_{1}\geq \cdots \geq \lambda_{n})$ in $\mathbb{Z}^{n}$, the action of the torus $\mathbb{T}^{n}$ on a corresponding highest weight vector being given by the morphism $\lambda(z_{1},\ldots,z_{n})=z_{1}^{\lambda_{1}}\cdots z_{n}^{\lambda_{n}}$. The dimensions and characters are the same as before, and the Casimir coefficient is $\frac{1}{n}\sum_{i=1}^{n}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}$. For the spaces of quaternionic structures $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$, the involutive automorphism defining the symmetric pair is $\theta(g)=J_{2n}\,\overline{g}\,J_{2n}^{-1}$, where $J_{2n}$ is the skew symmetric matrix $$ J_{2n}=\begin{pmatrix} 0 & 1 & & & \\ -1 & 0 & & & \\ & &\ddots& & \\ & & & 0 & 1 \\ & & & -1 & 0 \end{pmatrix}$$ of size $2n$. The subgroup $T^{\theta}$ is the set of matrices $\mathrm{diag}(z_{1},z_{1}^{-1},\ldots,z_{n},z_{n}^{-1})$, with all the $z_{i}$'s in $\mathbb{T}$. The dominant weights $\lambda$ trivial on $T^{\theta}$ correspond then to partitions with all parts doubled: $$\forall i \in \left[\!\left[ 1,n\right]\!\right],\,\,\,\lambda_{2i-1}=\lambda_{2i}.$$ In the spaces of real structures $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$, $\theta(g)=\overline{g}$. The intersection of the torus with $\mathrm{SO}(n,\mathbb{R})$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^{n}/(\mathbb{Z}/2\mathbb{Z})$, and therefore, by the Cartan-Helgason theorem, the spherical representations correspond to partitions with even parts: $$\forall i \in \left[\!\left[ 1,n\right]\!\right],\,\,\,\lambda_{i} \equiv 0 \bmod 2.$$ Finally, for the complex Grassmannian varieties $\mathrm{SU}(n,\mathbb{C})/\mathrm{S}(\mathrm{U}(n-q,\mathbb{C}) \times \mathrm{U}(q,\mathbb{C}))$, it is a little simpler to work with $\mathrm{U}(n,\mathbb{C})/(\mathrm{U}(n-q,\mathbb{C})\times \mathrm{U}(q,\mathbb{C}))$, which is the same space. 
An involutive automorphism defining the symmetric pair is then $\theta(g)=K_{n,q}\,g \,K_{n,q}$, where $$K_{n,q}=\begin{pmatrix} &&T_{q} \\ &I_{n-2q}&\\ T_{q} & & \end{pmatrix}$$ and $T_{q}$ is the $(q\times q)$-anti-diagonal matrix with entries $1$ on the anti-diagonal. The subgroup $T^{\theta}$ is then the set of diagonal matrices $\mathrm{diag}(z_{1},\ldots,z_{q},z_{q+1},\ldots,z_{n-q},z_{q},\ldots,z_{1})$ with the $z_{i}$'s in $\mathbb{T}$. The dominant weights $\lambda$ trivial on $T^{\theta}$ correspond then to partitions of length $q$, written as $$\lambda=(\lambda_{1},\ldots,\lambda_{q},0,\ldots,0,-\lambda_{q},\ldots,-\lambda_{1}).$$ \subsubsection{Compact symplectic groups and their quotients} Considering $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ as a subgroup of $\mathrm{SU}(2n,\mathbb{C})$, a maximal torus is $$T=\left\{ \mathrm{diag}(z_{1},z_{1}^{-1},\ldots,z_{n},z_{n}^{-1})\,\,\big|\,\,\forall i, \,\,z_{i}\in \mathbb{T} \right\},$$ and the Weyl group is the hyperoctahedral group $\mathfrak{H}_{n}=(\mathbb{Z}/2\mathbb{Z})\wr\mathfrak{S}_{n}$. The simple roots, viewed as elements of $\mathfrak{t}^{*}$, are $\alpha_{i}=e^{i}-e^{i+1}$ for $i \in \left[\!\left[ 1,n-1\right]\!\right]$ and $\alpha_{n}=2e^{n}$; and the fundamental weights are $\varpi_{i}=e^{1}+\cdots+e^{i}$ for $i \in \left[\!\left[ 1,n\right]\!\right]$. Here, $e^{i}(\mathrm{diag}(\mathrm{i} t_{1},-\mathrm{i} t_{1},\ldots,\mathrm{i} t_{n},-\mathrm{i} t_{n}))=t_{i}$. The dominant weights can therefore be written as $\lambda_{1}e^{1}+\cdots + \lambda_{n}e^{n},$ where $\lambda=(\lambda_{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{n})$ is any partition of length $n$. This leads to \begin{align*} D^{\lambda}&=\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i} \prod_{1\leq i\leq j \leq n} \frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j};\\ \chi^{\lambda}(k)&=sc_{\lambda}(z_{1},z_{1}^{-1},\ldots,z_{n},z_{n}^{-1})=\frac{\det(z_{i}^{\lambda_{j}+n-j+1}-z_{i}^{-(\lambda_{j}+n-j+1)})_{1\leq i,j \leq n}}{\det(z_{i}^{n-j+1}-z_{i}^{-(n-j+1)})_{1\leq i,j \leq n}}, \end{align*} where $z_{1}^{\pm1},\ldots,z_{n}^{\pm 1}$ are the eigenvalues of $k$ viewed as a matrix in $\mathrm{SU}(2n,\mathbb{C})$. The Casimir coefficient is $-\kappa_{\lambda}=\frac{1}{2n}\sum_{i=1}^{n}\lambda_{i}^{2}+(2n+2-2i)\lambda_{i}$. In the spaces of complex structures $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$, $\theta(g)=\overline{g}$ (inside $\mathrm{SU}(2n,\mathbb{C})$). The subgroup $T^{\theta}$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^{n}$, so the spherical representations correspond here again to partitions with even parts. On the other hand, for quaternionic Grassmannian varieties $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/(\mathrm{U}\mathrm{Sp}(n-q,\mathbb{H})\times \mathrm{U}\mathrm{Sp}(q,\mathbb{H}))$, a choice for the involutive automorphism is $\theta(g)=L_{2n,q}\,g\,L_{2n,q}$, where $$L_{2n,q}=\begin{pmatrix} T_{4} & & & \\ & \ddots & & \\ & & T_{4} & \\ & & & I_{2n-4q} \end{pmatrix},$$ $T_{4}$ appearing $q$ times (with all the computations made inside $\mathrm{SU}(2n,\mathbb{C})$). Then, $T^{\theta}$ is the set of diagonal matrices $\mathrm{diag}(z_{1},z_{1}^{-1},z_{1}^{-1},z_{1},\ldots,z_{q},z_{q}^{-1},z_{q}^{-1},z_{q},z_{2q+1},z_{2q+1}^{-1},\ldots,z_{n},z_{n}^{-1})$ with the $z_{i}$'s in $\mathbb{T}$. 
The dominant weights $(\lambda_{1},\ldots,\lambda_{n})$ trivial on $T^{\theta}$ write therefore as partitions of length $q$ with all parts doubled: $$\lambda=(\lambda_{1},\lambda_{1},\ldots,\lambda_{q},\lambda_{q},0,\ldots,0).$$ \subsubsection{Special orthogonal groups and their quotients} Odd and even special orthogonal groups do not have the same kind of root system, and on the other hand, $\mathrm{SO}(n,\mathbb{R})$ is not simply connected and has fundamental group $\mathbb{Z}/2\mathbb{Z}$ for $n \geq 3$. So in theory, the arguments previously recalled apply only to the universal cover $\mathrm{Spin}(n)$. Nonetheless, most of the results will stay true, and in particular the labeling of the irreducible representations; see the end of \cite[Chapter 5]{BD85} for details on this question. In the odd case, a maximal torus in $\mathrm{SO}(2n+1,\mathbb{R})$ is $$ T = \left\{\mathrm{diag}(R_{\theta_{1}},\ldots,R_{\theta_{n}},1) \,\,\bigg|\,\, \forall i,\,\,R_{\theta_{i}}=\left(\begin{smallmatrix} \cos \theta_{i} &-\sin \theta_{i}\\ \sin \theta_{i} & \cos \theta_{i} \end{smallmatrix} \right) \in \mathrm{SO}(2,\mathbb{R})\right\},$$ and the Weyl group is again the hyperoctahedral group $\mathfrak{H}_{n}$. The simple roots are $\alpha_{i}=e^{i}-e^{i+1}$ for $i \in \left[\!\left[ 1,n-1\right]\!\right]$, and $\alpha_{n}=e^{n}$; and the fundamental weights are $\varpi_{i}=e^{1}+\cdots+e^{i}$ for $i \in \left[\!\left[ 1,n-1\right]\!\right]$, and $\varpi_{n}=\frac{1}{2}(e^{1}+\cdots+e^{n})$. Here, $$e^{i}\left(\mathrm{diag}\left(\left(\begin{smallmatrix} 0 &-a_{1} \\ a_{1} & 0\end{smallmatrix} \right),\ldots,\left(\begin{smallmatrix} 0 &-a_{n} \\ a_{n} & 0\end{smallmatrix} \right),0\right) \right)= a_{i}$$ and it corresponds to the morphism $\mathrm{diag}(R_{\theta_{1}},\ldots,R_{\theta_{n}},1) \mapsto \mathrm{e}^{\mathrm{i}\theta_{i}}$. The dominant weights are then the $\lambda_{1}e^{1}+\cdots + \lambda_{n}e^{n}$, where $\lambda=(\lambda_{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{n})$ is either a partition of length $n$, or a half-partition of length $n$, where by half-partition we mean a non-increasing sequence of half-integers in $\mathbb{N}'=\mathbb{N}+1/2$. So, one obtains \begin{align*} D^{\lambda}&=\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i} \prod_{1\leq i\leq j \leq n} \frac{\lambda_{i}+\lambda_{j}+2n+1-i-j}{2n+1-i-j} ; \\ \chi^{\lambda}(k)&=sb_{\lambda}(z_{1},z_{1}^{-1},\ldots,z_{n},z_{n}^{-1},1)=\frac{\det(z_{i}^{\lambda_{j}+n-j+1/2}-z_{i}^{-(\lambda_{j}+n-j+1/2)})_{1\leq i,j \leq n}}{\det(z_{i}^{n-j+1/2}-z_{i}^{-(n-j+1/2)})_{1\leq i,j \leq n}}, \end{align*} where $z_{1}^{\pm 1},\ldots,z_{n}^{\pm 1},1$ are the eigenvalues of $k$. The Casimir coefficient associated to the highest weight $\lambda$ is $-\kappa_{\lambda}=\frac{1}{2n+1}\sum_{i=1}^{n} \lambda_{i}^{2}+(2n+1-2i)\lambda_{i}$. In the even case, a maximal torus in $\mathrm{SO}(2n,\mathbb{R})$ is $$ T = \left\{\mathrm{diag}(R_{\theta_{1}},\ldots,R_{\theta_{n}}) \,\,\bigg|\,\, \forall i,\,\,R_{\theta_{i}}=\left(\begin{smallmatrix} \cos \theta_{i} &-\sin \theta_{i}\\ \sin \theta_{i} & \cos \theta_{i} \end{smallmatrix} \right) \in \mathrm{SO}(2,\mathbb{R})\right\}$$ and the Weyl group is $\mathfrak{H}_{n}^{+}$, the subgroup of $\mathfrak{H}_{n}$ of index $2$ consisting of signed permutations with an even number of signs $-1$. 
The simple roots are $\alpha_{i}=e^{i}-e^{i+1}$ for $i \in \left[\!\left[ 1,n-1\right]\!\right]$ and $\alpha_{n}=e^{n-1}+e^{n}$; and the fundamental weights are $\varpi_{i}=e^{1}+\cdots+e^{i}$ for $i \in \left[\!\left[ 1,n-2\right]\!\right]$ and $\varpi_{n-1,n}=\frac{1}{2}(e^{1}+\cdots+e^{n-1}\pm e^{n})$. The dominant weights are then $\lambda_{1}e^{1}+\cdots+ \lambda_{n-1}e^{n-1}+\varepsilon \lambda_{n}e^{n}$, where $\varepsilon$ is a sign and $(\lambda_{1}\geq \cdots \geq \lambda_{n})$ is either a partition or a half-partition of length $n$. So, \begin{align*} D^{\lambda}&=\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i} \,\frac{\lambda_{i}+\lambda_{j}+2n-i-j}{2n-i-j}, \\ \chi^{\lambda}(k)&=sd_{\lambda}(z_{1},z_{1}^{-1},\ldots,z_{n},z_{n}^{-1})\\ &=\frac{\det(z_{i}^{\lambda_{j}+n-j}-z_{i}^{-(\lambda_{j}+n-j)})_{1\leq i,j \leq n}+\det(z_{i}^{\lambda_{j}+n-j}+z_{i}^{-(\lambda_{j}+n-j)})_{1\leq i,j \leq n}}{\det(z_{i}^{n-j}+z_{i}^{-(n-j)})_{1\leq i,j \leq n}}, \end{align*} and $-\kappa_{\lambda}=\frac{1}{2n}\sum_{i=1}^{n} \lambda_{i}^{2}+(2n-2i)\lambda_{i}$. For real Grassmannian varieties $\mathrm{SO}(n,\mathbb{R})/(\mathrm{SO}(n-q,\mathbb{R})\times\mathrm{SO}(q,\mathbb{R}))$ and for spaces of complex structures $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$, one cannot directly apply the Cartan-Helgason theorem, since $\mathrm{SO}(n,\mathbb{R})$ is not simply connected. A rigorous way to deal with this problem is to first look at quotients of the spin group $\mathrm{Spin}(n)$. For instance, consider the Grassmannian variety of \emph{non-oriented} vector spaces $\mathrm{Gr}^{\pm}(n,q,\mathbb{R})\simeq \mathrm{Spin}(n)/(\mathrm{Spin}(n-q)\times \mathrm{Spin}(q));$ $\mathrm{Gr}(n,q,\mathbb{R})$ is a $2$-fold covering of $\mathrm{Gr}^{\pm}(n,q,\mathbb{R})$. The defining map of $\mathrm{Gr}^{\pm}(n,q,\mathbb{R})$ corresponds to the involution of $\mathrm{SO}(n,\mathbb{R})$ given by $\theta(g)=N_{n,q}\,g\,N_{n,q}$, where $$N_{n,q}= \begin{pmatrix} T_{2} & & & \\ & \ddots & &\\ & & T_{2} & \\ & & & I_{n-2q} \end{pmatrix}$$ with $q$ blocks $T_{2}$. Then $T^{\theta}$ is $(\mathbb{Z}/2\mathbb{Z})^{q} \times (\mathrm{SO}(2,\mathbb{R}))^{\lfloor \frac{n}{2} \rfloor-q}$, so the dominant weights trivial on $T^{\theta}$ write as $\lambda=(\lambda_{1},\ldots,\lambda_{q},0,\ldots,0)$, with $\lambda_{i} \equiv 0 \bmod 2$ for all $i \in \left[\!\left[ 1,q\right]\!\right]$. They are therefore given by an integer partition of length $q$, with all parts even. Now, for the simply connected Grassmannian variety $\mathrm{Gr}(n,q,\mathbb{R}) $, there are twice as many spherical representations, as $T^{\theta}$ is in this case isomorphic to $((\mathbb{Z}/2\mathbb{Z})^{q}/(\mathbb{Z}/2\mathbb{Z})) \times \mathbb{T}^{\lfloor \frac{n}{2} \rfloor-q}$, instead of $(\mathbb{Z}/2\mathbb{Z})^{q}\times \mathbb{T}^{\lfloor \frac{n}{2} \rfloor-q}$. Therefore, the condition of parity is now $$\forall i,j \in \left[\!\left[ 1,q \right]\!\right],\,\,\,\lambda_{i} \equiv \lambda_{j} \bmod 2.$$ Similar considerations show that for the spaces $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$, the dominant weights $\lambda$ trivial on the intersection $T^{\theta}$ are given by $$\lambda=(\lambda_{1},\lambda_{1},\ldots,\lambda_{m},\lambda_{m}) \,\text{ or }\, \lambda=(\lambda_{1},\lambda_{1},\ldots,\lambda_{m},\lambda_{m},0)$$ that is to say, a partition in which all the non-zero parts are doubled. 
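Before summarizing, let us illustrate the previous formulas with a small computational sketch in Python; it is not used anywhere in the arguments, and the function names and test weights are ours, chosen only for illustration. The code merely re-implements, for $\mathrm{SU}(n,\mathbb{C})$ and $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$, the product formulas for $D^{\lambda}$ and the Casimir coefficients $-\kappa_{\lambda}$ written above.
\begin{verbatim}
# Illustrative sketch: dimensions D^lambda and Casimir coefficients -kappa_lambda
# for SU(n,C) and USp(n,H), using the product formulas of this section.
from fractions import Fraction

def pad(lam, n):
    return list(lam) + [0] * (n - len(lam))

def dim_su(lam, n):
    # prod_{1<=i<j<=n} (lambda_i - lambda_j + j - i) / (j - i)
    l = pad(lam, n)
    d = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d *= Fraction(l[i-1] - l[j-1] + j - i, j - i)
    return d

def casimir_su(lam, n):
    # -kappa_lambda = -|lambda|^2/n^2 + (1/n) sum_i lambda_i^2 + (n+1-2i) lambda_i
    l = pad(lam, n)
    size = sum(l)
    s = sum(x * x + (n + 1 - 2 * i) * x for i, x in enumerate(l, start=1))
    return -Fraction(size * size, n * n) + Fraction(s, n)

def dim_usp(lam, n):
    # type C: two products, over pairs i<j and over pairs i<=j
    l = pad(lam, n)
    d = Fraction(1)
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d *= Fraction(l[i-1] - l[j-1] + j - i, j - i)
        for j in range(i, n + 1):
            d *= Fraction(l[i-1] + l[j-1] + 2*n + 2 - i - j, 2*n + 2 - i - j)
    return d

def casimir_usp(lam, n):
    # -kappa_lambda = (1/2n) sum_i lambda_i^2 + (2n+2-2i) lambda_i
    l = pad(lam, n)
    s = sum(x * x + (2*n + 2 - 2 * i) * x for i, x in enumerate(l, start=1))
    return Fraction(s, 2 * n)

# Defining representations: dimension n for SU(n) and 2n for USp(n),
# with -kappa equal to 1 - 1/n^2 and (2n+1)/(2n) respectively.
print(dim_su((1,), 4), casimir_su((1,), 4))    # 4, 15/16
print(dim_usp((1,), 3), casimir_usp((1,), 3))  # 6, 7/6
\end{verbatim}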
\subsubsection{Summary} Let us summarize the previous results (this is redundant, but very useful in order to follow all the computations of Section \ref{upper}). We denote: $\mathfrak{Y}_{n}$ the set of partitions of length $n$; $\mathfrak{Z}_{n}$ the set of non-increasing sequences of (possibly negative) integers of length $n$; $\frac{1}{2}\mathfrak{Y}_{n}$ the set of partitions and half-partitions of length $n$; $2\mathfrak{Y}_{n}$ the set of partitions of length $n$ with even parts; $2\mathfrak{Y}_{n} \boxplus 1$ the set of partitions of length $n$ with odd parts; and $\mathfrak{Y}\mathfrak{Y}_{n}$ the set of partitions of length $n$ and with all non-zero parts doubled. It is understood that if $i$ is too big, then $\lambda_{i}=0$ for a partition or a half-partition $\lambda$ of prescribed length. \begin{theorem}\label{explicitdensity} The density of the law $\mu_{t}$ of the Brownian motion traced on a classical simple compact Lie group writes as:\begin{align*} \sum_{\lambda \in \mathfrak{Y}_{n-1}}\, &\mathrm{e}^{-\frac{t}{2n}\,\left(-\frac{|\lambda|^{2}}{n}+\sum_{i=1}^{n-1}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right)}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right) s_{\lambda}(k);\\ \sum_{\lambda \in \mathfrak{Z}_{n}}\,\,\,\, &\mathrm{e}^{-\frac{t}{2n}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right) s_{\lambda}(k);\\ \sum_{\lambda \in \mathfrak{Y}_{n}}\,\,\,&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(2n+2-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}\right)sc_{\lambda}(k);\\ \sum_{\lambda \in \frac{1}{2}\mathfrak{Y}_{n}}\,&\mathrm{e}^{-\frac{t}{4n+2}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(2n+1-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+1-i-j}{2n+1-i-j}\right)sb_{\lambda}(k);\\ \sum_{\lambda\in \frac{1}{2}\mathfrak{Y}_{n}}\,&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(2n-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}\right)(sd_{\lambda}(k)+sd_{\varepsilon\lambda}(k)) \end{align*} respectively for special unitary groups $\mathrm{SU}(n,\mathbb{C})$, unitary groups $\mathrm{U}(n,\mathbb{C})$, symplectic groups $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$, odd special orthogonal groups $\mathrm{SO}(2n+1,\mathbb{R})$, and even special orthogonal groups $\mathrm{SO}(2n,\mathbb{R})$. In this last case, $\varepsilon\lambda=(\lambda_{1},\ldots,\lambda_{n-1},-\lambda_{n})$, and it is agreed that $sd_{\lambda}+sd_{\varepsilon\lambda}$ stands for $sd_{\lambda}$ if $\lambda_{n}=0$. We denote generically by $\phi_{\lambda}(x)$ a zonal spherical function associated to a spherical representation (the function depends of course on the implicit type of the space considered). 
The density of the law $\mu_{t}$ of the Brownian motion traced on a classical simple compact symmetric space writes then as follows: \begin{align*} \sum_{\lambda \in 2\mathfrak{Y}_{n-1}} &\mathrm{e}^{-\frac{t}{2n}\,\left(-\frac{|\lambda|^{2}}{n}+\sum_{i=1}^{n-1}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right)}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right) \phi_{\lambda}(x);\end{align*} \begin{align*} \sum_{\lambda \in \mathfrak{Y}\mathfrak{Y}_{2n-1}} \!\!&\mathrm{e}^{-\frac{t}{4n}\,\left(-\frac{|\lambda|^{2}}{2n}+\sum_{i=1}^{2n-2}\lambda_{i}^{2}+(2n+1-2i)\lambda_{i}\right)}\left(\prod_{1\leq i<j \leq 2n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right) \phi_{\lambda}(x);\\ \sum_{\lambda \in \mathfrak{Y}_{q}} \,\,\,\,&\mathrm{e}^{-\frac{t}{n}\,\sum_{i=1}^{q} \lambda_{i}^{2}+(n+1-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right) \phi_{\lambda}(x);\\ \sum_{\lambda \in 2\mathfrak{Y}_{n}} \,\,&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(2n+2-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}\right)\phi_{\lambda}(x);\\ \sum_{\lambda \in \mathfrak{Y}\mathfrak{Y}_{2q}} \,&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{2q}\lambda_{i}^{2}+(2n+2-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}\right)\phi_{\lambda}(x);\\ \sum_{\lambda\in \mathfrak{Y}\mathfrak{Y}_{n} } \,&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{n}\lambda_{i}^{2}+(2n-2i)\lambda_{i}}\left(\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}\right)\phi_{\lambda}(x);\\ \sum_{\lambda \in 2\mathfrak{Y}_{q} \sqcup 2\mathfrak{Y}_{q}\boxplus 1}\!\!\!\!&\mathrm{e}^{-\frac{t}{4n+2}\,\sum_{i=1}^{q}\lambda_{i}^{2}+(2n+1-2i)\lambda_{i}}\!\left(\prod_{1\leq i<j \leq n}\!\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+1-i-j}{2n+1-i-j}\right)\!\phi_{\lambda}(x);\\ \sum_{\lambda\in 2\mathfrak{Y}_{q} \sqcup2\mathfrak{Y}_{q}\boxplus 1} \!\!\!\!&\mathrm{e}^{-\frac{t}{4n}\,\sum_{i=1}^{q}\lambda_{i}^{2}+(2n-2i)\lambda_{i}}\!\left(\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}\right)\!\phi_{\lambda}(x) \end{align*} for real structures $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$, quaternionic structures $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$, complex Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{C})$, complex structures $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$, quaternionic Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{H})$, complex structures $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$, odd real Grassmannian varieties $\mathrm{Gr}(2n+1,q,\mathbb{R})$ and even real Grassmannian varieties $\mathrm{Gr}(2n,q,\mathbb{R})$. \end{theorem} \begin{remark} In the case of complex Grassmannian varieties, it is understood that $\lambda_{n+1-i}=-\lambda_{i}$ as explained before. We have not tried to reduce the expressions in the previous formulas, so some simplifications can be made by replacing the indexing sets of type $2\mathfrak{Y}_{p}$ or $\mathfrak{Y}\mathfrak{Y}_{p}$ by $\mathfrak{Y}_{p}$. 
On the other hand, it should be noticed that in each case, the ``degree of freedom'' in the choice of partitions labeling the irreducible or spherical representations is exactly the rank of the Riemannian variety under consideration, that is to say the maximal dimension of flat totally geodesic sub-manifolds. \end{remark} \begin{example}[Brownian motions on spheres and projective spaces] Let us examine the case $q=1$ for Grassmannian varieties: it corresponds to real spheres $\mathbb{S}^{n}(\mathbb{R})=\mathrm{SO}(n+1,\mathbb{R})/\mathrm{SO}(n,\mathbb{R})$, to complex projective spaces $\mathbb{P}^{n}(\mathbb{C})=\mathrm{SU}(n+1,\mathbb{C})/\mathrm{S}(\mathrm{U}(n,\mathbb{C})\times \mathrm{U}(1,\mathbb{C}))$ and to quaternionic projective spaces $\mathbb{P}^{n}(\mathbb{H})=\mathrm{U}\mathrm{Sp}(n+1,\mathbb{H})/(\mathrm{U}\mathrm{Sp}(n,\mathbb{H})\times \mathrm{U}\mathrm{Sp}(1,\mathbb{H}))$. In each case, spherical representations are labelled by a single integer $k \in \mathbb{N}$. So, the densities are: \begin{align} p_{t}^{\mathbb{S}^{n}(\mathbb{R})}(x)&=\sum_{k=0}^{\infty} \mathrm{e}^{-\frac{k(k+n-1)\,t}{2n+2}}\,\frac{(n-2+k)!}{(n-1)!\,k!}\,(2k+n-1)\,\,\phi_{n,k}^{\mathbb{R}}(x);\label{realsphere}\\ p_{t}^{\mathbb{P}^{n}(\mathbb{C})}(x)&=\sum_{k=0}^{\infty} \mathrm{e}^{-\frac{k(k+n)\,t}{n+1}}\,\frac{((n-1+k)!)^{2}}{(n-1)!\,n!\,(k!)^{2}}\,(2k+n)\,\,\phi_{n,k}^{\mathbb{C}}(x);\label{complexprojective}\\ p_{t}^{\mathbb{P}^{n}(\mathbb{H})}(x)&=\sum_{k=0}^{\infty}\mathrm{e}^{-\frac{k(k+2n+1)\,t}{2(n+1)}}\,\frac{(2n+k)!\,(2n-1+k)!}{(2n+1)!\,(2n-1)!\,(k+1)!\,k!}\,(2k+2n+1)\,\,\phi_{n,k}^{\mathbb{H}}(x).\label{quaternionicprojective} \end{align} In particular, one recovers the well-known fact that, up to the aforementioned normalization factor $(n+1)$, the eigenvalues of the Laplacian on the $n$-sphere are the $k(k+n-1)$, each with multiplicity $\frac{(n-2+k)!}{(n-1)!\,k!}\,(2k+n-1)$; see \emph{e.g.} \cite[\S3.3]{SC94}. \end{example} \begin{example}[Torus and Fourier analysis] Take the circle $\mathbb{T}=\mathrm{U}(1,\mathbb{C})=\mathbb{S}^{1}(\mathbb{R})$. The Brownian motion on $\mathbb{T}$ is the projection of the real Brownian motion of density $p_{t}^{\mathbb{R}}(\theta)=\frac{1}{\sqrt{2\pi t}}\,\mathrm{e}^{-\theta^{2}/2t}$ by the map $\theta \mapsto \mathrm{e}^{\mathrm{i} \theta}$. Thus, $$ p_{t}^{\mathbb{T}}(\mathrm{e}^{\mathrm{i} \theta})=2\pi\,\sum_{m=-\infty}^{\infty} p_{t}^{\mathbb{R}}(\theta+2m\pi)= \sqrt{\frac{2\pi}{t}}\sum_{m=-\infty}^{\infty}\mathrm{e}^{-\frac{(\theta+2m\pi)^{2}}{2t}}= \sqrt{\frac{2\pi}{t}}\,S(\theta,t). $$ The series $S(\theta,t)$ is smooth and $2\pi$-periodic, so it is equal to its Fourier series $\sum_{k=-\infty}^{\infty} c_{k}(S(t))\,\mathrm{e}^{k\mathrm{i} \theta}$, with $$ c_{k}(S(t))=\sum_{m=-\infty}^{\infty}\int_{0}^{2\pi} \mathrm{e}^{-\frac{(\theta+2m\pi)^{2}}{2t}}\,\mathrm{e}^{-k\mathrm{i} \theta}\,\frac{d\theta}{2\pi}=\frac{1}{2\pi}\int_{\mathbb{R}}\mathrm{e}^{-\frac{y^{2}}{2t}-k\mathrm{i} y}\,dy=\sqrt{\frac{t}{2\pi}}\,\mathrm{e}^{-\frac{k^{2}t}{2}}. 
$$ Thus, the density of the Brownian motion on the circle with respect to the Haar measure $\frac{d\theta}{2\pi}$ is $$ p_{t}^{\mathbb{T}}(\mathrm{e}^{\mathrm{i} \theta})=\sum_{k=-\infty}^{\infty}\mathrm{e}^{-\frac{k^{2}t}{2}}\,\mathrm{e}^{k\mathrm{i} \theta}=1+2\sum_{k=1}^{\infty}\mathrm{e}^{-\frac{k^{2}t}{2}}\,\cos k\theta. $$ Since $s_{(k)}(\mathrm{e}^{\mathrm{i} \theta})=\mathrm{e}^{k\mathrm{i} \theta}$, this is indeed a specialization of the second formula of Theorem \ref{explicitdensity}, for $\mathrm{U}(1,\mathbb{C})$. \end{example} \begin{example}[Brownian motion on the $3$-dimensional sphere] Consider the Brownian motion on $\mathrm{U}\mathrm{Sp}(1,\mathbb{H})$, which is also $\mathrm{SU}(2,\mathbb{C})$ by one of the exceptional isomorphisms. The specialization of the first formula of Theorem \ref{explicitdensity} for $\mathrm{SU}(2,\mathbb{C})$ gives $$ p_{t}^{\mathrm{SU}(2,\mathbb{C})}(g)=\sum_{k=0}^{\infty} \mathrm{e}^{-\frac{k(k+2)\,t}{8}}\,(k+1)\,\frac{\sin (k+1)\theta}{\sin \theta}, $$ if $\mathrm{e}^{\pm\mathrm{i}\theta}$ are the eigenvalues of $g \in \mathrm{SU}(2,\mathbb{C})$. It agrees with the example of \cite[\S4]{Liao04paper}, and also with Formula \eqref{realsphere} when $n=3$, since the group of unit quaternions is topologically a $3$-sphere. \end{example} \begin{remark} The previous examples show that the restrictions $n \geq n_{0}$ are not entirely necessary for the formulas of Theorem \ref{explicitdensity} to hold. One should only beware that the root systems of type $\mathrm{B}_{1}$, $\mathrm{C}_{1}$, $\mathrm{D}_{1}$ and $\mathrm{D}_{2}$ are somewhat degenerate, and that the dominant weights do not have the same indexing set as for $\mathrm{B}_{n\geq 2}$ or $\mathrm{C}_{n \geq 2}$ or $\mathrm{D}_{n\geq 3}$. For instance, for the special orthogonal group $\mathrm{SO}(3,\mathbb{R})$, the only positive root is $e^{1}$, and the only fundamental weight is also $e^{1}$. Consequently, irreducible representations have highest weights $k\,e^{1}$ with $k\in \mathbb{N}$; the dimension of the representation of label $k$ is $2k+1$, and the corresponding character is $\frac{\sin \left(k+\frac{1}{2}\right)\theta}{\sin \frac{\theta}{2}}$ if $\mathrm{e}^{\mathrm{i}\theta}$ and $\mathrm{e}^{-\mathrm{i} \theta}$ are the non-trivial eigenvalues of the considered rotation. So $$ p_{t}^{\mathrm{SO}(3,\mathbb{R})}(g)=\sum_{k=0}^{\infty} \mathrm{e}^{-\frac{k(k+1)\,t}{6}}\,(2k+1) \,\frac{\sin \left(k+\frac{1}{2}\right)\theta}{\sin \frac{\theta}{2}} $$ if $g$ is a rotation of angle $\theta$ around some axis. \end{remark} \section{Upper bounds after the cut-off time}\label{upper} Let $\mu$ be a probability measure on a compact Lie group $K$ or compact symmetric space $G/K$, that is absolutely continuous with respect to the Haar measure $\eta$, and with density $p$. The Cauchy-Schwarz inequality ensures that $$4\,d_{\mathrm{TV}}(\mu,\eta)^{2}=\left(\int_{X} |p(x)-1|\,dx\right)^{2} \leq \int_{X} |p(x)-1|^{2}\,dx=\|p-1\|_{\mathscr{L}^{2}(X)}^{2}.$$ The discussion of Section \ref{fourier} now allows us to relate the right-hand side of this inequality to the harmonic analysis on $X$. Let us first treat the case of a compact Lie group $K$. If one assumes that $p$ is invariant by conjugacy, then Parseval's identity \eqref{parseval} shows that the right-hand side is $\sum_{\lambda \in \widehat{K}} |\chi^{\lambda}(p-1)|^{2}$. 
However, by orthogonality of characters, for any non-trivial irreducible representation of $K$ --- \emph{i.e.}, not equal to $\mathbf{1}_{K} : k \in K \mapsto 1$ --- one has $$\chi^{\lambda}(1)=\int_{K} \chi^{\lambda}(k)\,dk=\int_{K} \chi^{\lambda}(k)\,\chi^{\mathbf{1}_{K}}(k^{-1})\,dk=0.$$ On the other hand, for any measure $\mu$ on the group, $\chi^{\mathbf{1}_{K}}(\mu)=\int_{K}\chi^{\mathbf{1}_{K}}(k)\,\mu(dk)=\int_{K} \mu(dk)=1$. Hence, the inequality now takes the form $$4\,d_{\mathrm{TV}}(\mu,\eta_{K})^{2}\leq \sum_{\lambda \in \widehat{K}}^{\prime} |\chi^{\lambda}(p)|^{2} ,$$ where the $\prime$ indicates that we remove the trivial representation from the summation. Similarly, on a compact symmetric space $G/K$, supposing that $p$ is $K$-invariant, Parseval's identity reads $\|p-1\|_{\mathscr{L}^{2}(G/K)}^{2}=\sum_{\lambda \in \widehat{G}^{K}} D^{\lambda}\,|\phi^{\lambda}(p-1)|^{2}$. However, for any non-trivial representation $\lambda$, $$\phi^{\lambda}(1)=\scal{e^{\lambda}}{\int_{G}\rho^{\lambda}(g)(e^{\lambda})\,dg}=0.$$ Indeed, using only elementary properties of the Haar measure, one sees that $\widehat{1}(\lambda)=\int_{G}\rho^{\lambda}(g)\,dg=0$, because it is a projector and it has trace $\chi^{\lambda}(1)=0$. So again, the previous inequality can be simplified and it becomes $$4\,d_{\mathrm{TV}}(\mu,\eta_{G/K})^{2}\leq \sum_{\lambda \in \widehat{G}^{K}}^{\prime} D^{\lambda}\,|\phi^{\lambda}(p)|^{2}.$$ In the setting and with the notations of Proposition \ref{abstractdensity}, a bound at time $t$ on $4 \,d_{\mathrm{TV}}(\mu_{t},\eta_{K})^{2}$ (respectively, on $4\,d_{\mathrm{TV}}(\mu_{t},\eta_{G/K})^{2}$) is then $$ \sum_{\lambda \in \widehat{K}}^{\prime} \mathrm{e}^{-t\scal{\lambda+2\rho}{\lambda}}\,(D^{\lambda})^{2}\quad;\quad\text{respectively},\quad \sum_{\lambda \in \widehat{G}^{K}}^{\prime} \mathrm{e}^{-t\scal{\lambda+2\rho}{\lambda}}\,D^{\lambda}.$$ \begin{proposition}\label{scaryseries} In every classical case, $4\,d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})^{2}$ is bounded by $\sum^{\prime}_{\lambda \in W_{n}} A_{n}(\lambda)\,\mathrm{e}^{-t\, B_{n}(\lambda)}$, where the indexing sets $W_{n}$ are the same as in Theorem \ref{explicitdensity}, $B_{n}(\lambda)=\scal{\lambda+2\rho}{\lambda}$ (so that the exponentials appearing in Theorem \ref{explicitdensity} are precisely the $\mathrm{e}^{-\frac{t}{2}\,B_{n}(\lambda)}$), and $A_{n}(\lambda)=(D^{\lambda})^{2}$ for compact Lie groups and $D^{\lambda}$ for compact symmetric spaces. 
\end{proposition} \comment{\begin{sidewaystable} $$ \begin{tabular}{|c|c|c|c|} \hline &&& \\ $K\,\, \mathit{or}\,\, G/K$ & $W_{n}$ & $A_{n}(\lambda)$ & $B_{n}(\lambda)$\\ & & & \\ \hline\hline & & & \\ $\mathrm{SO}(2n+1,\mathbb{R})$& $\frac{1}{2}\mathfrak{Y}_{n}$ & $\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+1-i-j}{2n+1-i-j}\right)^{2}$& $\frac{1}{2n+1}\left(\sum_{i=1}^{n} \lambda_{i}^{2}+(2n+1-2i)\lambda_{i}\right)$\\ & & & \\ \hline & & & \\ $\mathrm{SO}(2n,\mathbb{R})$& $\frac{1}{2}\mathfrak{Y}_{n}$& $2\left(\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}\right)^{2}$& $\frac{1}{2n}\left(\sum_{i=1}^{n} \lambda_{i}^{2}+(2n-2i)\lambda_{i}\right)$\\ & & & \\ \hline & & & \\ $\mathrm{SU}(n,\mathbb{C})$& $\mathfrak{Y}_{n-1}$& $\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right)^{2}$ &$\frac{1}{n}\left(-\frac{|\lambda|^{2}}{n}+\sum_{i=1}^{n-1}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right)$ \\ & & & \\ \hline & & & \\ $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$& $\mathfrak{Y}_{n}$ & $\left(\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}\right)^{2}$ & $\frac{1}{2n}\left(\sum_{i=1}^{n} \lambda_{i}^{2}+(2n+2-2i)\lambda_{i}\right)$\\ & & & \\ \hline\hline & & & \\ $\mathrm{Gr}(2n+1,q,\mathbb{R})$& $2\mathfrak{Y}_{q} \sqcup 2\mathfrak{Y}_{q} \boxplus 1$ & $\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+1-i-j}{2n+1-i-j}$ &$\frac{1}{2n+1}\left(\sum_{i=1}^{q} \lambda_{i}^{2}+(2n+1-2i)\lambda_{i}\right)$ \\ & & & \\ \hline & & & \\ $\mathrm{Gr}(2n,q,\mathbb{R})$& $2\mathfrak{Y}_{q}\sqcup 2\mathfrak{Y}_{q} \boxplus 1$& $\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}$ & $\frac{1}{2n}\left(\sum_{i=1}^{q} \lambda_{i}^{2}+(2n-2i)\lambda_{i}\right)$ \\ & & & \\ \hline & & & \\ $\mathrm{Gr}(n,q,\mathbb{C})$ & $\mathfrak{Y}_{q}$ & $\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}$ & $\frac{2}{n}\left(\sum_{i=1}^{q} \lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right)$ \\ & & & \\ \hline & & & \\ $\mathrm{Gr}(n,q,\mathbb{H})$ & $\mathfrak{Y}\mathfrak{Y}_{2q}$& $\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}$& $\frac{1}{2n}\left(\sum_{i=1}^{2q}\lambda_{i}^{2}+(2n+2-2i)\lambda_{i}\right)$\\ & & & \\ \hline\hline & & & \\ $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$& $\mathfrak{Y}\mathfrak{Y}_{n}$& $\prod_{1\leq i<j \leq n}\frac{(\lambda_{i}-\lambda_{j}+j-i)(\lambda_{i}+\lambda_{j}+2n-i-j)}{(j-i)(2n-i-j)}$ & $\frac{1}{2n}\left(\sum_{i=1}^{n}\lambda_{i}^{2}+(2n-2i)\lambda_{i}\right)$\\ & & & \\ \hline & & & \\ $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$& $2\mathfrak{Y}_{n-1}$& $\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}$ & $\frac{1}{n}\left(-\frac{|\lambda|^{2}}{n}+\sum_{i=1}^{n-1}\lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right)$\\ & & & \\ \hline & & & \\ $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$& $\mathfrak{Y}\mathfrak{Y}_{2n-1}$ & $\prod_{1\leq i<j \leq 2n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}$ & $\frac{1}{2n}\left(-\frac{|\lambda|^{2}}{2n}+\sum_{i=1}^{2n-2}\lambda_{i}^{2}+(2n+1-2i)\lambda_{i}\right)$\\ & & & \\ \hline & & & \\ 
$\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$& $2\mathfrak{Y}_{n}$& $\prod_{1\leq i<j \leq n}\frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\prod_{1\leq i \leq j \leq n}\frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{2n+2-i-j}$ &$\frac{1}{2n}\left(\sum_{i=1}^{n} \lambda_{i}^{2}+(2n+2-2i)\lambda_{i}\right)$\\ &&&\\ \hline \end{tabular}\label{sidetable} $$ \end{sidewaystable}} \comment{ \begin{remark} One can shorten all these formulas by introducing the modified weight coordinates \begin{itemize} \item $\mu_{i}=\lambda_{i}+\frac{n+1-2i}{2}$ in type $\mathrm{A}_{n-1}$, \emph{i.e.}, for $\mathrm{SU}(n,\mathbb{C})$; \item $\mu_{i}=\lambda_{i}+n+\frac{1}{2}-i$ in type $\mathrm{B}_{n}$, \emph{i.e.}, for $\mathrm{SO}(2n+1,\mathbb{R})$; \item $\mu_{i}=\lambda_{i}+n+1-i$ in type $\mathrm{C}_{n}$, \emph{i.e.}, for $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$; \item $\mu_{i}=\lambda_{i}+n-i$ in type $\mathrm{D}_{n}$, \emph{i.e.}, for $\mathrm{SO}(2n,\mathbb{R})$. \end{itemize} Then, $D^{\lambda}$ is equal to \begin{align*} &\frac{\mathrm{V}(\mu_{1},\mu_{2},\ldots,\mu_{n})}{\mathrm{V}(1,2,\ldots,n)} \,\,\,\,\quad\qquad\qquad\qquad\qquad\text{ in type }\mathrm{A}_{n-1};\\ &\left(\frac{2^{n^{2}}\prod_{i=1}^{n} \mu_{i}}{(2n-1)!!}\right)\frac{\mathrm{V}(\mu_{1}^{2},\ldots,\mu_{n}^{2})}{\mathrm{V}((2n-1)^{2},\ldots,1^{2})} \,\,\qquad\text{ in type }\mathrm{B}_{n};\\ &\left( \frac{\prod_{i=1}^{n}\mu_{i}}{n!}\right)\frac{\mathrm{V}(\mu_{1}^{2},\ldots,\mu_{n}^{2})}{\mathrm{V}(n^{2},\ldots,1^{2})} \,\,\,\,\,\quad\quad\qquad\qquad\text{ in type }\mathrm{C}_{n};\\ &\frac{\mathrm{V}(\mu_{1}^{2},\ldots,\mu_{n}^{2})}{\mathrm{V}((n-1)^{2},\ldots,0^{2})} \qquad\qquad\qquad\qquad\qquad\text{ in type }\mathrm{D}_{n}; \end{align*} where $\mathrm{V}(x_{1},\ldots,x_{n})$ denotes the Vandermonde determinant. Also, $B_{n}(\lambda)$ is always an affine function (depending on $n$) of $\sum_{i=1}^{n} \mu_{i}^{2}$ in type $\mathrm{B}$, $\mathrm{C}$ and $\mathrm{D}$; and of $\sum_{i=1}^{n} \mu_{i}^{2} - \frac{1}{n}\left(\sum_{i=1}^{n}\mu_{i}\right)^{2}$ in type $\mathrm{A}$. \end{remark} } This section is now organized as follows. In \S\ref{order}, we compute the weights that minimize $B_{n}(\lambda)$; they will give the correct order of decay of the whole series after cut-off time. In \S\ref{versus}, we then show case-by-case that all the other terms of the series $S_{n}(t)$ of Proposition \ref{scaryseries} can be controlled uniformly. Essentially, we adapt the arguments of \cite{Ros94,Por96a,Por96b}, though we also introduce new computational tricks. As explained in the introduction, the main reason why one has a good control over $S_{n}(t)$ after cut-off time is that each term $T_{n}(\lambda,t)=A_{n}(\lambda)\,\mathrm{e}^{-t\,B_{n}(\lambda)}$ of the series $S_{n}(t)$ stays bounded when $t=t_{\text{cut-off}}$, for every $n$, every class $\lambda$ and in every case. We have unfortunately not found a way to factorize all the computations needed to prove this, so each case will have to be treated separately. However, the scheme of the proof will always be the same, and the reader will find the main arguments in \S\ref{sympupper} (for symplectic groups and their quotients), so he can safely skip \S\ref{oddupper}-\ref{unitaryupper} if he does not want to see the minor modifications needed to treat the other cases. 
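Before turning to the proofs, here is a purely illustrative numerical sketch (in Python) of the quantity controlled in Proposition \ref{scaryseries}, in the case of the compact symplectic groups: writing $S_{n}(t)$ for the right-hand side of that proposition, the code below evaluates the partial sum of $S_{n}(t)$ over the partitions of size $|\lambda|\leq 12$, at time $t=2(1+\varepsilon)\log n$. The truncation $|\lambda|\leq 12$ and the chosen values of $n$ and $\varepsilon$ are arbitrary, and this computation plays no role in the arguments below; it only lets one observe the decay just after cut-off time.
\begin{verbatim}
# Partial sums of the bound of Proposition "scaryseries" for USp(n,H):
#   S_n(t) >= sum over small partitions of (D^lambda)^2 exp(-t B_n(lambda)),
# with D^lambda given by the type C product formula and
# B_n(lambda) = (1/2n) sum_i lambda_i^2 + (2n+2-2i) lambda_i.
from math import exp, log

def partitions_up_to(max_size, max_len):
    # all non-empty partitions with |lambda| <= max_size and length <= max_len
    def rec(remaining, max_part, prefix):
        if prefix:
            yield prefix
        if len(prefix) == max_len:
            return
        for part in range(min(remaining, max_part), 0, -1):
            yield from rec(remaining - part, part, prefix + (part,))
    yield from rec(max_size, max_size, ())

def dim_usp(lam, n):
    l = list(lam) + [0] * (n - len(lam))
    d = 1.0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            d *= (l[i-1] - l[j-1] + j - i) / (j - i)
        for j in range(i, n + 1):
            d *= (l[i-1] + l[j-1] + 2*n + 2 - i - j) / (2*n + 2 - i - j)
    return d

def B_usp(lam, n):
    l = list(lam) + [0] * (n - len(lam))
    return sum(x * x + (2*n + 2 - 2*i) * x for i, x in enumerate(l, start=1)) / (2 * n)

def truncated_series(n, eps, max_size=12):
    t = 2 * (1 + eps) * log(n)
    return sum(dim_usp(lam, n) ** 2 * exp(-t * B_usp(lam, n))
               for lam in partitions_up_to(max_size, n))

for n in (5, 10, 20):
    print(n, truncated_series(n, eps=0.5))
\end{verbatim}
For these values, the partial sums decay roughly like $n^{-2\varepsilon}$ up to a moderate constant, in accordance with the heuristic order of decay discussed in \S\ref{order}.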
\comment{\begin{remark} The bounds obtained on the series $S_{n}(t)$ in \S\ref{versus} are not at all optimal, and in particular one can conjecture that the exponent of $n$ in these bounds can at least be multiplied by a factor $2$. A possible way to improve our bounds would be to use an alternative approach to the control of the series $S_{n}(t)$ that is related to the problem of minimization of the \emph{logarithmic potential} $$ \iint_{x \neq y } \log \frac{1}{|x-y|} \,\nu(dx)\nu(dy)+\int_{\mathbb{R}} P(x)\,\nu(dx) $$ of a probability measure $\nu$ on the real line (\emph{cf.} \cite{ST97}). Set $t=(1+\varepsilon)\,t_{\text{cut-off}}$, and consider only the case of compact groups. The crucial remark is that up to some explicit constant $f_{n}(t)$, the general term $T_{n}(\lambda,t)$ of the series $S_{n}(t)$ is equal to the exponential of a logarithmic potential of either the discrete measure $\sum_{i=1}^{n} \delta_{\mu_{i}}$ in the case of unitary groups, or of the discrete measure $\sum_{i=1}^{n} \delta_{(\mu_{i})^{2}}$ in the case of special orthogonal groups or compact symplectic groups (the $\mu_{i}'s$ are the modified weight coordinates): \begin{align*} T_{n}(\lambda,t)&\propto V(\mu_{1},\ldots,\mu_{n})^{2}\,\mathrm{e}^{-\frac{t}{n}\sum_{i=1}^{n}\mu_{i}^{2}}\quad\text{when }K=\mathrm{U}(n);\\ T_{n}(\lambda,t)&\propto \left(\prod_{i=1}^{n}\mu_{i}\right)^{\!2\,}V(\mu_{1}^{2},\ldots,\mu_{n}^{2})^{2}\,\mathrm{e}^{-\frac{t}{2n+1}\sum_{i=1}^{n}\mu_{i}^{2}}\quad\text{when }K=\mathrm{SO}(2n+1);\\ T_{n}(\lambda,t)&\propto \left(\prod_{i=1}^{n}\mu_{i}\right)^{\!2\,}V(\mu_{1}^{2},\ldots,\mu_{n}^{2})^{2}\,\mathrm{e}^{-\frac{t}{2n}\sum_{i=1}^{n}\mu_{i}^{2}}\quad\text{when }K=\mathrm{U}\mathrm{Sp}(n);\\ T_{n}(\lambda,t)&\propto V(\mu_{1}^{2},\ldots,\mu_{n}^{2})^{2}\,\mathrm{e}^{-\frac{t}{2n}\sum_{i=1}^{n}\mu_{i}^{2}}\quad\text{when }K=\mathrm{SO}(2n). \end{align*} Assume for a moment that the $\mu_{i}$'s are arbitrary (ordered) \emph{real} numbers, instead of integers of half-integers. Then one can use the following well-known results on discrete minimizers of logarithmic potentials, see \cite[Chapters 5 and 6]{Szego} and \cite[Volume 2, Chapter 10]{MOT53}. Denote $$ H_{n}(x)=(-1)^{n}\, \mathrm{e}^{x^{2}}\frac{d^{n}}{dx^{n}}\left(\mathrm{e}^{-x^{2}}\right) $$ the \emph{Hermite polynomials}, and $$ L_{n}^{(-1)}(x)=\frac{x\,\mathrm{e}^{x}}{n!}\,\frac{d^{n}}{dx^{n}} \left(\mathrm{e}^{-x}\,x^{n-1}\right)\qquad;\qquad L_{n}^{(0)}(x)=\frac{\mathrm{e}^{x}}{n!}\,\frac{d^{n}}{dx^{n}} \left(\mathrm{e}^{-x}\,x^{n}\right) $$ the generalized \emph{Laguerre polynomials} of weight $-1$ and $0$. If $\frac{1}{n}\sum_{i=1}^{n} \mu_{i}^{2}\leq L$, then the maximum of $\Delta(\mu_{1},\ldots,\mu_{n})^{2}$ is attained if and only if the $\mu_{i}$'s are the $n$ zeroes of the polynomial $H_{n}(\sqrt{\frac{n-1}{2L}}\,x).$ Similarly, under the same assumption $\frac{1}{n}\sum_{i=1}^{n} \mu_{i}^{2}\leq L$, the maximum of $\left(\prod_{i=1}^{n} \mu_{i}\right)\Delta(\mu_{1}^{2},\ldots,\mu_{n}^{2})$ is attained if and only if the $\mu_{i}$'s are the $n$ zeroes of the polynomial $L_{n}^{(0)}(nx/L)$, and the maximum of $\Delta(\mu_{1}^{2},\ldots,\mu_{n}^{2})$ is attained if and only if the $\mu_{i}$'s are the $n$ zeroes of the polynomial $L_{n}^{(-1)}((n-1)x/L)$. In each case, these values can be calculated, see \cite[Theorem 6.71]{Szego}. This enables one to have an explicit bound on $T_{n}(\lambda,t)$ assuming that $\sum_{i=1}^{n} \mu_{i}^{2}$ is contained in some interval $[\alpha,\beta]$. 
Multiplying these bounds by $\frac{1}{n!}$ times the number of integer of half-integer points in the ``orange peel'' $$\left\{(x_{1},\ldots,x_{n})\in \mathbb{R}^{n}\,\,\bigg|\,\,\sum_{i=1}^{n} x_{i}^{2} \in [kn^{3},(k+1)n^{3}]\right\},$$ and then summing these bounds over $k$ provides an upper bound $n^{-2\varepsilon}$ on the part of $S_{n}(t)$ corresponding to weights such that $\sum_{i=1}^{n} \mu_{i}^{2}$ is big enough, say bigger than $10\,n^{3}$ --- we skip the details of these computations, as they are quite similar to what will be done in \S\ref{versus}. Unfortunately, we were not able to use this method to bound the whole series $S_{n}(t)$: indeed, for ``small'' values of $\sum_{i=1}^{n} \mu_{i}^{2}$, the points of the lattice of weights cannot approximate sufficiently the roots of the aforementioned renormalized Hermite or Laguerre polynomials, and therefore the corresponding bound was too crude for most of the corresponding terms in $S_{n}(t)$. Nonetheless, this alternative method hints at possible better bounds on $S_{n}(t)$. \end{remark}} \subsection{Guessing the order of decay of the dominating series}\label{order} Remember the restriction $n \geq 2$ (respectively, $n \geq 3$ and $n \geq 10$) when studying special unitary groups (resp., compact symplectic groups and special orthogonal groups) and their quotients. We use the superscript $\star$ to denote a set of partitions or half-partitions minus the trivial partition $(0,0,\ldots,0)$. The lemma hereafter allows one to guess the correct order of decay of the series under study. \begin{lemma}\label{decay} Each weight $\lambda_{\min}$ indicated in the table hereafter corresponds to an irreducible representation in the case of compact groups, and to a spherical irreducible representation in the case of symmetric spaces of type non-group. The table also gives the corresponding values of $A_{n}$ and $B_{n}$. In the group case, $B_{n}(\lambda_{\min})$ is minimal among $\{B_{n}(\lambda),\,\,\lambda \in W_{n}^{\star}\}$. \end{lemma} \begin{remark} For symmetric spaces of type non-group, one can also check the minimality of $B_{n}(\lambda_{\min})$, except for certain real Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{R})$. For instance, if $q=1$, then $(1)_{q}$ labels the geometric representation of $\mathrm{SO}(n,\mathbb{R})$ on $\mathbb{C}^{n}$, which indeed has a vector invariant under $\mathrm{SO}(n-1,\mathbb{R})\times\mathrm{SO}(1,\mathbb{R})$; and the corresponding value of $B(\lambda)$ is $(n-1)/n < 2$. Fortunately, $\lambda_{\min}$, though not minimal, will still yield in this case the correct order of decay of the series $S(t)$. \end{remark} \begin{remark} To each ``minimal'' weight $\lambda_{\min}$ corresponds a very natural representation. Namely, for a special orthogonal group $\mathrm{SO}(n,\mathbb{R})$ (respectively, a compact symplectic group $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$), the minimizer is the ``geometric'' representation over $\mathbb{C}^{n}$ (respectively $\mathbb{C}^{2n}$) corresponding to the embedding $\mathrm{SO}(n,\mathbb{R}) \hookrightarrow \mathrm{SO}(n,\mathbb{C}) \hookrightarrow \mathrm{GL}(n,\mathbb{C})$ (respectively $\mathrm{U}\mathrm{Sp}(n,\mathbb{H}) \hookrightarrow \mathrm{SU}(2n,\mathbb{C})\hookrightarrow \mathrm{GL}(2n,\mathbb{C})$). 
For a special unitary group $\mathrm{SU}(n,\mathbb{C})$, one has again the geometric representation over $\mathbb{C}^{n}$, and its composition with the involution $k \mapsto (k^{t})^{-1}$ corresponds to the label $(1,\ldots,1)_{n-1}$, which also minimizes $B_{n}(\lambda)$. The case of spherical minimizers is more involved but still workable: we shall detail it in Section \ref{lower}. \end{remark} \begin{proof} To avoid any ambiguity, we shall use indices to specify the length of a partition or half-partition. Let us first find the minimizers of $B_{n}(\lambda)$ in the group case: $$ \label{specialpage} \begin{tabular}{|c|c|c|c|} \hline &&&\\ $K$ or $G/K$& $\lambda_{\min}$ & $B_{n}(\lambda_{\min})$ &$A_{n}(\lambda_{\min})$ \\ &&&\\ \hline\hline &&&\\ $\mathrm{SO}(2n+1,\mathbb{R})$ &$(1,0,\ldots,0)_{n}$ & $\frac{2n}{2n+1}$ & $(2n+1)^{2}$\\ &&& \\ \hline &&&\\ $\mathrm{SO}(2n,\mathbb{R})$ &$(1,0,\ldots,0)_{n}$ & $\frac{2n-1}{2n}$ & $4n^{2}$\\ &&&\\ \hline &&&\\ $\mathrm{SU}(n,\mathbb{C})$ & $(1,0,\ldots,0)_{n-1}$ & $1-\frac{1}{n^{2}}$ & $n^{2}$\\ &&&\\ \hline &&& \\ $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ & $(1,0,\ldots,0)_{n}$ &$\frac{2n+1}{2n}$& $4n^{2}$\\ &&&\\ \hline\hline &&&\\ $\mathrm{Gr}(2n+1,q,\mathbb{R})$ & $(2,0,\ldots,0)_{q}$ & $2$& $2n^{2}+3n$\\ &&&\\ \hline &&&\\ $\mathrm{Gr}(2n,q,\mathbb{R})$ & $(2,0,\ldots,0)_{q}$ & $2$ &$2n^{2}+n-1 $\\ &&&\\ \hline &&&\\ $\mathrm{Gr}(n,q,\mathbb{C})$ & $(1,0,\ldots,0)_{q}$ & $2$ & $n^{2}-1$\\ &&&\\ \hline &&&\\ $\mathrm{Gr}(n,q,\mathbb{H})$ & $(1,1,0,\ldots,0)_{2q}$ & $2$ & $(n-1)(2n+1)$ \\ &&&\\ \hline\hline &&&\\ $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$ & $(1,1,0,\ldots,0)_{n}$ & $\frac{2(n-1)}{n}$ & $n(2n-1)$\\ &&&\\ \hline &&&\\ $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$ & $(2,0,\ldots,0)_{n-1}$ & $\frac{2(n-1)(n+2)}{n^{2}}$ &$\frac{n(n+1)}{2}$\\ &&&\\ \hline &&&\\ $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ & $(1,1,0,\ldots,0)_{2n-1}$ &$\frac{(n-1)(2n+1)}{n^{2}}$ & $n(2n-1)$\\ &&&\\ \hline &&&\\ $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$ & $(2,0,\ldots,0)_{n}$ &$\frac{2(n+1)}{n}$ & $n(2n+1)$\\ &&&\\ \hline \end{tabular} $$ \begin{itemize} \item $\mathrm{SU}(n)$: one has to minimize $$-\frac{|\lambda|^{2}}{n}+\sum_{i=1}^{n-1} \lambda_{i}^{2}+(n+1-2i) \lambda_{i}=\frac{1}{n}\left(\sum_{1 \leq i<j\leq n}(\lambda_{i}-\lambda_{j})^{2}\right)+\left(\sum_{i=1}^{n-1} i(n-i)(\lambda_{i}-\lambda_{i+1})\right)=A+B $$ over $\mathfrak{Y}_{n-1}^{\star}$. In $B$, at least one term is non-zero, so $$B \geq \left(\min_{i \in \left[\!\left[ 1,n-1\right]\!\right]}i(n-i)\right)=n-1,$$ with equality if and only if $\lambda=(1,0,\ldots,0)_{n-1}$ or $\lambda=(1,\ldots,1)_{n-1}$. In both cases, $A$ is then equal to $\frac{n-1}{n}$. However, $\frac{n-1}{n}$ is also the minimum value of $A$ over $\mathfrak{Y}_{n-1}^{\star}$. Indeed, there is at least one index $l \in \left[\!\left[ 1,n-1\right]\!\right]$ such that $\lambda_{l}>\lambda_{l+1}$. Then all the $(\lambda_{i}-\lambda_{j})^{2}$ with $i \leq l$ and $j \geq l+1$ give a contribution at least equal to $1$, and there are $l(n-l)$ such contributions. Thus $$A \geq \frac{l(n-l)}{n}\geq \frac{n-1}{n},$$ and one concludes that $\min B_{n}(\lambda)$ is obtained only for the two aforementioned partitions, and is equal to $\frac{1}{n}(A_{\min}+B_{\min})=1-\frac{1}{n^{2}}$. 
\item $\mathrm{SO}(2n)$: the quantity to minimize over $\frac{1}{2}\mathfrak{Y}_{n}^{\star}$ is $$ \left(\sum_{i=1}^{n} \lambda_{i}^{2} \right)+ \left(\sum_{i=1}^{n-2} i(2n-1-i)(\lambda_{i}-\lambda_{i+1})\right)+n(n-1)\lambda_{n-1}=A+B+C, $$ again with $A$, $B$ and $C$ non-negative in each case. Only $A$ involves $\lambda_{n}$, so a minimizer satisfies necessarily $\lambda_{n}=0$ (partitions) or $\lambda_{n}=\frac{1}{2}$ (half-partitions). In the case of partitions, a minimizer of $B+C$ is $(1,0,\ldots,0)_{n}$, which gives the value $\min_{i\in\left[\!\left[ 1,n-1\right]\!\right]}i(2n-1-i)=2n-2$. The same sequence minimizes $A$ over $\mathfrak{Y}_{n}^{\star}$, so the minimal value of $A+B+C$ over non-trivial partitions is $2n-1$ and it is obtained only for $(1,0,\ldots,0)_{n}$. On the other hand, over half-partitions, the minimizer is $\left(\frac{1}{2},\ldots,\frac{1}{2}\right)_{n}$, giving the value $$\frac{n}{4}+\frac{n(n-1)}{2}=\frac{n(2n-1)}{4}.$$ Since we assume $2n\geq 10$ and therefore $n \geq 5$, this value is strictly bigger than $2n-1$, so the only minimizer of $B_{n}(\lambda)$ in $\frac{1}{2}\mathfrak{Y}_{n}^{\star}$ is $(1,0,\ldots,0)_{n}$. \item $\mathrm{SO}(2n+1)$: exactly the same reasoning gives the unique minimizer $(1,0,\ldots,0)_{n}$, with corresponding value $2n$ for $A+B+C=(2n+1)\,B_{n}(\lambda)$. \item $\mathrm{U}\mathrm{Sp}(n)$: here one has only to look at partitions, and the same reasoning as for $\mathrm{SO}(2n)$ and $\mathrm{SO}(2n+1)$ yields the unique minimizer $(1,0,\ldots,0)_{n}$, corresponding to the value $2n+1$ for $2n\,B_{n}(\lambda)$. \end{itemize} The spherical minimizers are obtained by the same techniques; however, some cases (with $n$ or $q$ too small) are exceptional, so we have only retained in the statement of our Lemma the ``generic'' minimizer. The corresponding values of $A_{n}(\lambda)$ and $B_{n}(\lambda)$ are easy calculations. \end{proof} Suppose for a moment that the series $S_{n}(t)$ of Proposition \ref{scaryseries} has the same behavior as its ``largest term'' $A_{n}(\lambda_{\min})\,\mathrm{e}^{-t\,B_{n}(\lambda_{\min})}$. We shall show in a moment that this is indeed true just after cut-off time (for $n$ big enough). Then, $S_{n}(t)$ is a $O(\cdot)$ of \begin{itemize} \item $n^{2}\,\mathrm{e}^{-t}$ for classical simple compact Lie groups; \item $n^{2}\,\mathrm{e}^{-2t}$ for classical simple compact symmetric spaces of type non-group. \end{itemize} Set then $t_{n,\varepsilon}=\alpha\,(1+\varepsilon)\,\log n$, with $\alpha=2$ in the case $n^{2}\,\mathrm{e}^{-t}$, and $\alpha=1$ in the case $n^{2}\,\mathrm{e}^{-2t}$. Under the assumption $S_{n}(t)\sim A_{n}(\lambda_{\min})\,\mathrm{e}^{-t\,B_{n}(\lambda_{\min})}$, one has $S_{n}(t_{n,\varepsilon}) = O(n^{-2\varepsilon}).$ Thus, the previous computations lead to the following guess: the cut-off time is \begin{itemize} \item $2 \log n$ for classical simple compact Lie groups; \item $\log n$ for classical simple compact symmetric spaces of type non-group. 
\end{itemize} \subsection{Growth of the dimensions versus decay of the Laplace-Beltrami eigenvalues}\label{versus} The estimate $S_{n}(t_{n,\varepsilon})\sim A_{n}(\lambda_{\min})\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda_{\min})}=O(n^{-2\varepsilon})$ might seem very optimistic; nonetheless, we are going to prove that the sum of all the other terms $A_{n}(\lambda)\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)}$ in $S_{n}(t)$ does not change this bound too much, and that one still has at least $ S(t_{n,\varepsilon})=O\left(n^{-\frac{\varepsilon}{2}}\right).$ We actually believe that at least in the group case, the exponent $2\varepsilon$ is good, \emph{cf.} the remark before \S\ref{order} --- the previous discussion shows that it is then optimal. Suppose that one can bound $A_{n}(\lambda)\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)}$ by $c(n)^{|\lambda|}$, where $|\lambda|$ is the size of the partition and $c(n)$ is some function of $n$ that goes to $0$ as $n$ goes to infinity (say, $Cn^{-\delta \varepsilon}$). We can then use: \begin{lemma}\label{partition} Assume $x \leq \frac{1}{2}$. Then, the sum over all partitions $\sum_{\lambda} x^{|\lambda|}$, which is convergent, is smaller than $1+5x$. Consequently, $$\sum_{\substack{\lambda \in \mathfrak{Y}_{n} \\ \lambda \neq (0,\ldots,0)}} x^{|\lambda|}\leq 5x.$$ \end{lemma} \begin{proof} The power series $P(x)=\sum_{\lambda} x^{|\lambda|} = \prod_{i=1}^{\infty}\frac{1}{1-x^{i}}=1+x+2x^{2}+3x^{3}+5x^{4}+\cdots$ has radius of convergence $1$, and it is obviously convex on $\mathbb{R}_{+}$. Thus, it suffices to verify the bound at $x=0$ and $x=\frac{1}{2}$. However, $$P(0)=1=1+(5\times 0)\quad;\quad P\left(\frac{1}{2}\right)\leq 3.463\leq 1+\left(5\times \frac{1}{2}\right). $$ \end{proof} \noindent With this in mind, the idea is then to control the growth of the coefficients $A_{n}(\lambda)$, starting from the trivial partition $(0,\ldots,0)$. This is also what is done in \cite{Por96a,Por96b}, but the way we make our partitions grow is different. The simplest cases to treat in this perspective are the compact symplectic groups and their quotients. \subsubsection{Symplectic groups and their quotients}\label{sympupper} Set $t_{n,\varepsilon}=2(1+\varepsilon)\log n$; in particular, $t_{n,0}=2\log n$. We fix a partition $\lambda \in \mathfrak{Y}_{n}$, and for $k \leq \lambda_{n}$, we denote by $\rho_{k,n}$ the quotient of the dimensions $D^{\lambda}$ associated to the two rectangular partitions \begin{equation} (k,\ldots,k)_{n} \quad\text{and} \quad(k-1,\ldots,k-1)_{n}.\label{evolutionfirst} \end{equation} Using the formula given in \S\ref{explicit} in the case of compact symplectic groups, one obtains: \begin{align*} \rho_{k,n}=\prod_{1 \leq i\leq j \leq n} \frac{2k+2n+2-i-j}{2k+2n-i-j}=\prod_{1 \leq i\leq j \leq n} \left(1+\frac{2}{2k+2n-i-j}\right)\leq \exp\left(\sum_{1\leq i\leq j\leq n} \frac{2}{2k+2n-i-j}\right). \end{align*} The double sum can be estimated by standard comparison techniques between sums and integrals. 
Namely, since $x,y \mapsto \frac{1}{2k+2n-x-y}$ is convex on $\{(x,y)\,\,|\,\, x \geq 0,\,\, y \geq 0,\,\,2k+2n \geq x+y\}$, one can bound each term by $$\frac{2}{2k+2n-i-j} \leq \iint_{\left[i-\frac{1}{2},i+\frac{1}{2}\right]\times\left[j-\frac{1}{2},j+\frac{1}{2}\right] }\, \frac{2}{2k+2n-x-y}\,dx\,dy.$$ We use this bound for non-diagonal terms with indices $i<j$, and for diagonal terms with $i=j$, we use the simpler bound $$\sum_{i=1}^{n}\frac{1}{k+n-i} = \sum_{u=0}^{n-1} \frac{1}{k+u}=H_{k+n-1}-H_{k-1}\leq \frac{1}{k}+\log(k+n-1)-\log k $$ where $H_{n}$ denotes the $n$-th harmonic sum. So, \begin{align*}\log \rho_{k,n} &\leq \sum_{1 \leq i\leq j\leq n}\frac{2}{2k+2n-i-j} \leq H_{k+n-1}-H_{k-1}+\iint_{\left[\frac{1}{2},n+\frac{1}{2}\right]^{2}} \frac{1}{2k+2n-x-y}\,dx\,dy \\ &\leq \frac{1}{k}+\log(k+n-1)-\log k\\ &\quad+(2k+2n-1)\log(2k+2n-1)+(2k-1)\log(2k-1)-2(2k+n-1)\log(2k+n-1).\end{align*} On the other hand, the same transformation on partitions makes $-t_{n,0}\,B_{n}(\lambda)$ evolve by $-(2k+n)\log n$. So, if $\eta_{k,n}^{2}$ is the quotient of the quantities $(D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,0}\,B_{n}(\lambda)}$ with $\lambda$ as in Equation \eqref{evolutionfirst}, then \begin{align*} \log \eta_{k,n} &\leq -\frac{2k+n}{2}\log n + \frac{1}{k}+\log(k+n-1)-\log k\\ &\quad+(2k+2n-1)\log(2k+2n-1)+(2k-1)\log(2k-1)-2(2k+n-1)\log(2k+n-1). \end{align*} Suppose $k \geq 2$. Then, one can fix $n\geq 3$ and study the previous expression as a function of $k$. Its derivative is then always negative, so $\log \eta_{k,n}\leq \log \eta_{2,n}$, which is also always negative. From this, one deduces that $$D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)} \leq \eta_{1,n}$$ for any rectangular partition $(\lambda_{n},\ldots,\lambda_{n})_{n}$; indeed, the left-hand side is the product of the contributions $\eta_{k,n}$ for $k$ in $\left[\!\left[ 1,\lambda_{n}\right]\!\right]$. However, $\eta_{1,n}$ is also smaller than $1$: in this case, the dimension is given by the exact formula $$D^{(1,\ldots,1)_{n}}=\mathrm{Cat}_{n+1}=\frac{1}{n+2}\binom{2n+2}{n+1},$$ so $\eta_{1,n}=\mathrm{Cat}_{n+1}\,\mathrm{e}^{-\frac{n+2}{2}\log n}$, which can be checked to be smaller than $1$ for every $n \geq 3$. So in fact, $$D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)} \leq 1$$ for any rectangular partition $(\lambda_{n},\ldots,\lambda_{n})_{n}$. The previous discussion hints at the more general result: \begin{proposition}\label{superboundsymplectic} In the case of compact symplectic groups, at cut-off time, $$D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)} \leq \frac{14}{3}$$ for any integer partition $\lambda$ of length $n$ (not only the rectangular partitions). \end{proposition} \figcap{\includegraphics{grow.pdf}}{One makes the partitions grow layer by layer, starting from the bottom.\label{grow}} \begin{proof} We fix $l \in \left[\!\left[ 1,n-1\right]\!\right]$, and the idea is again to study the quotient $\rho_{k,l}$ of the dimensions associated to the two partitions \begin{equation} (k+\lambda_{l+1},\ldots,k+\lambda_{l+1},\lambda_{l+1},\ldots,\lambda_{n})_{n} \quad\text{and} \quad(k-1+\lambda_{l+1},\ldots,k-1+\lambda_{l+1},\lambda_{l+1},\ldots,\lambda_{n})_{n},\label{evolution} \end{equation} where $k$ is some integer smaller than $\lambda_{l}-\lambda_{l+1}$ --- in other words, the $n-l$ last parts of our partition have already been constructed, and one adds $k$ to the $l$ first parts, until $k=\lambda_{l}-\lambda_{l+1}$; see Figure \ref{grow}. 
The transformation on partitions described by Equation \eqref{evolution} makes the quantity $-t_{n,0}\,B_{n}(\lambda)$ change by $-\frac{l(2k'+2n-l)}{n}\log n$, where $k'$ stands for $k+\lambda_{l+1}$. We shall prove that this variation plus $\log \rho_{k,l}$ is almost always negative. For convenience, we will treat separately the cases $l=1$ or $2$ and the case $l\geq 3$; hence, suppose first that $l \in \left[\!\left[ 3,n-1\right]\!\right]$. The quotients of Vandermonde determinants can be simplified as follows: $$\rho_{k,l}=\!\prod_{j=l+1}^{n} \frac{k+j-1+\lambda_{l+1}-\lambda_{j}}{k+j-l-1+\lambda_{l+1}-\lambda_{j}}\,\frac{k+\lambda_{l+1}+\lambda_{j}+2n+1-j}{k+\lambda_{l+1}+\lambda_{j}+2n+1-j-l} \prod_{1\leq i \leq j \leq l}\!\!\frac{2k+2\lambda_{l+1}+2n+2-i-j}{2k+2\lambda_{l+1}+2n-i-j}.$$ Notice that the second product $\rho_{k,l,(2)}$ in this formula is very similar to $\rho_{k,n}$; the main difference is that indices $i,j$ are now smaller than $l$ (instead of $n$). Hence, by adapting the arguments, one obtains \begin{align*} \log \rho_{k,l,(2)}&\leq \sum_{1\leq i\leq j \leq l} \frac{2}{2k'+2n-i-j} \leq H_{k'+n-1}-H_{k'+n-l-1} +\iint_{\left[\frac{1}{2},l+\frac{1}{2}\right]^{2}}\frac{1}{2k'+2n-x-y}\,dx\,dy\\ &\leq \frac{1}{k'+n-l}+\log(k'+n-1)-\log(k'+n-l)+(2k'+2n-1)\log(2k'+2n-1)\\ &\quad+(2k'+2n-2l-1)\log(2k'+2n-2l-1)-2(2k'+2n-l-1)\log(2k'+2n-l-1) \end{align*} where, as above, $k'$ stands for $k+\lambda_{l+1}$. So, if $(\eta_{k,l})^{2}$ is the quotient of the quantities $(D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,0}\,B_{n}(\lambda)}$ with $\lambda$ as in Equation \eqref{evolution}, then $\log \eta_{k,l} \leq \log \widetilde{\eta}_{k,l}+\log \rho_{k,l,(1)}$, where $\log \widetilde{\eta}_{k,l}$ is given by \begin{align*} &-\frac{l(2k'+2n-l)}{2n}\log n+\frac{1}{k'+n-l}+\log(k'+n-1)-\log(k'+n-l)\\ &+(2k'+2n-1)\log(2k'+2n-1)+(2k'+2n-2l-1)\log(2k'+2n-2l-1)\\ &-2(2k'+2n-l-1)\log(2k'+2n-l-1), \end{align*} and $\rho_{k,l,(1)}$ is the first product in the expansion of $\rho_{k,l}$. Let us analyze these two quantities separately. \begin{itemize} \item $\log \widetilde{\eta}_{k,l}$: here the technique is really the same as for $\log \eta_{k,n}$. Namely, with $n$ and $l$ fixed, $\log \widetilde{\eta}_{k,l}$ appears as a decreasing function of $x=k'$, because its derivative with respect to $x$ is \begin{align*}&-\frac{l\,\log n}{n}-\frac{1}{(x+n-l)^{2}}+\frac{1}{x+n-1}-\frac{1}{x+n-l}\\ &+2\big(\log(2x+2n-1)+\log(2x+2n-2l-1)-2\log(2x+2n-l-1)\big). \end{align*} An upper bound on the first line is $-\frac{(l-1)\log n}{n}\leq 0$ (remember that $n \geq 3$ and therefore $\log n \geq 1$), and the second line is negative by concavity of the logarithm. From this, one deduces that $\log\widetilde{\eta}_{k,l}\leq\log\widetilde{\eta}_{1,l}$, and we shall use this estimate in order to compensate for the other part of $\log \eta_{k,l}$: \begin{align*} \log \widetilde{\eta}_{k,l}&\leq -\frac{l(2v+2+2n-l)}{2n}\log n+\frac{1}{v+n+1-l}+\log(v+n)-\log(v+n+1-l)\\ &\quad+(2v+2n+1)\log(2v+2n+1)+(2v+2n-2l+1)\log(2v+2n-2l+1)\\ &\quad-2(2v+2n-l+1)\log(2v+2n-l+1) \end{align*} where $v$ stands for $\lambda_{l+1}$.
\item $\log \rho_{k,l,(1)}$: in the product $\rho_{k,l,(1)}$, each term of index $j$ writes as \begin{align*} \frac{(k'+n)^{2}-(\lambda_{j}+n+1-j)^{2}}{(k'+n-l)^{2}-(\lambda_{j}+n+1-j)^{2}}&\leq \frac{(k'+n)^{2}-(\lambda_{l+1}+n+1-j)^{2}}{(k'+n-l)^{2}-(\lambda_{l+1}+n+1-j)^{2}} \\ &\leq \frac{k+j-1}{k+j-l-1}\,\frac{k''+2n+1-j}{k''+2n+1-j-l} \end{align*} with $k''=k+2\lambda_{l+1}=k+2v$; and multiplying all these bounds together, one gets $$ \rho_{k,l,(1)}\leq \frac{(k+n-1)!}{(k+l-1)!}\,\frac{(k-1)!}{(k+n-l-1)!}\,\frac{(k''+2n-l)!}{(k''+n)!}\,\frac{(k''+n-l)!}{(k''+2n-2l)!}. $$ Again, this is decreasing in $k$, so $$ \rho_{k,l,(1)}\leq \frac{n!\,(2v+2n-l+1)!\,(2v+n-l+1)!}{l!\,(n-l)!\,(2v+n+1)!\,(2v+2n-2l+1)!}. $$ Recall the classical Stirling estimates: for $m \geq 1$, $$\log m! =m\log m +\frac{1}{2}\log m-m+\log \sqrt{2\pi}+\frac{1}{12m}-r_{m},\quad\text{with }\,0 \leq r_{m} \leq \frac{1}{360m^{3}}.$$ It enables us to bound $\log \rho_{k,l,(1)}$ by the sum of the following quantities: \begin{itemize} \item[{$\star$}] $A=(2v+2n-l+1)\log(2v+2n-l+1)+(2v+n-l+1)\log(2v+n-l+1)$\\ \phantom{blop}~$-(2v+n+1)\log(2v+n+1)-(2v+2n-2l+1)\log(2v+2n-2l+1)$. \item[{$\star$}] $B=\frac{1}{2}(\log(2v+2n-l+1)+\log(2v+n-l+1)-\log(2v+n+1)-\log(2v+2n-2l+1))$, which is non-positive by concavity of the logarithm. \item[{$\star$}] $C=n\log n - l \log l - (n-l) \log (n-l)$. \item[{$\star$}] $D=\frac{1}{2}(\log n - \log l - \log(n-l))$. This is non-positive unless $n=l+1$ --- recall that we assume for the moment $l \in \left[\!\left[ 3,n-1\right]\!\right]$. In that case, it is smaller than $\frac{1}{2(n-1)}$. \item[{$\star$}] $E=\frac{1}{12}\left(\frac{1}{n}-\frac{1}{l}-\frac{1}{n-l}+\frac{1}{2v+2n-l+1}+\frac{1}{2v+n-l+1}-\frac{1}{2v+n+1}-\frac{1}{2v+2n-2l+1}\right)$. \item[{$\star$}] $F=\frac{1}{360}\left(\frac{1}{l^{3}}+\frac{1}{(n-l)^{3}}+\frac{1}{(2v+n+1)^{3}}+\frac{1}{(2v+2n-2l+1)^{3}}\right)$. \end{itemize} The sum of the two last terms $E\!F=E+F$ happens to be negative. Indeed, $E$ and $F$ are decreasing in $v$ (we use the convexity of $x \mapsto \frac{1}{x^{2}}$ to show that $\frac{dE}{dv}\leq 0$), so it suffices to check the result when $v=0$. Then, with $l$ fixed, \begin{align*}E\!F(n,l)&=\frac{1}{12}\left(\frac{1}{n}-\frac{1}{l}-\frac{1}{n-l}+\frac{1}{2n-l+1}+\frac{1}{n-l+1}-\frac{1}{n+1}-\frac{1}{2n-2l+1}\right)\\ &\quad+\frac{1}{360}\left(\frac{1}{l^{3}}+\frac{1}{(n-l)^{3}}+\frac{1}{(n+1)^{3}}+\frac{1}{(2n-2l+1)^{3}}\right) \end{align*} is decreasing in $n$, hence smaller than its value when $n=l+1$. So, it suffices to look at $E\!F(l+1,l)$, which is now increasing in $l$, but still negative. Thus, in the following, we shall use the bound $$\log \rho_{k,l,(1)}\leq A+C+D\leq A+C+\frac{1}{2n-2}.$$ \end{itemize} \noindent Adding together the bounds previously demonstrated, we get \begin{align*} \log \eta_{k,l}&\leq-\frac{l(2v+2+2n-l)}{2n}\log n+\frac{1}{2n-2}+\frac{1}{v+n+1-l}+\log(v+n)-\log(v+n+1-l)\\ &\quad+(2v+2n+1)\log(2v+2n+1)-(2v+2n-l+1)\log(2v+2n-l+1)\\ &\quad+(2v+n-l+1)\log(2v+n-l+1)-(2v+n+1)\log(2v+n+1)\\ &\quad+n \log n - l \log l -(n-l)\log(n-l). \end{align*} By concavity of $x \log x$, the sum of the second and third rows is non-positive. What remains is decreasing in $l$ and in $v$, and when $l=3$ and $v=0$, we get $$\frac{3}{2n}\log n + \frac{1}{2n-2}+\frac{1}{n-2}+\log \left(\frac{n}{n-2}\right)+(n-3)\log \left(\frac{n}{n-3}\right)-3\log 3$$ which is maximal for $n=5$, and still (barely) negative at this value. 
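Since this last bound is rather tight, it may be worth double-checking it numerically; the following minimal Python sketch (ours, not part of the argument) evaluates the displayed expression for small values of $n$, and confirms that it is negative, with a maximum of about $-0.011$ reached at $n=5$:
\begin{verbatim}
from math import log

def last_bound(n):
    # the displayed expression, obtained for l = 3 and v = 0
    return (3 / (2 * n) * log(n) + 1 / (2 * n - 2) + 1 / (n - 2)
            + log(n / (n - 2)) + (n - 3) * log(n / (n - 3)) - 3 * log(3))

for n in range(4, 13):
    print(n, round(last_bound(n), 4))   # every value is negative
\end{verbatim}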
Thus, we have shown so far that $\eta_{k,l} \leq 1$ for any $k$, any $l \in \left[\!\left[ 3,n-1\right]\!\right]$, and any partition $\lambda$ that we fill as in Figure \ref{grow}. When $l=1$ or $l=2$, the approximations on $\log \eta_{k,l}$ that we were using before are not good enough, but we can treat these cases separately. When $l=1$, \begin{align*} \rho_{k,1}&=\frac{\lambda_{2}+k+n}{\lambda_{2}+k+n-1}\,\prod_{j=2}^{n}\frac{k+j-1+\lambda_{2}-\lambda_{j}}{k+j-2+\lambda_{2}-\lambda_{j}}\,\frac{k+\lambda_{2}+\lambda_{j}+2n+1-j}{k+\lambda_{2}+\lambda_{j}+2n-j}\\ &\leq \frac{k+n}{k+n-1}\,\prod_{j=2}^{n}\frac{k+j-1}{k+j-2}\,\frac{k+2n+1-j}{k+2n-j}=\frac{k+2n-1}{k};\\ \eta_{k,1}&\leq \frac{k+2n-1}{k}\,\mathrm{e}^{-\frac{2k+2n-1}{2n}\log n}. \end{align*} If $k=1$, which only happens once when one makes the partition grow, then the bound above is $2n\,\mathrm{e}^{-\frac{2n+1}{2n}\log n}\leq 2$. On the other hand, if $k \geq 2$, then the bound is decreasing in $k$ and therefore smaller than $\left(n+\frac{1}{2}\right)\mathrm{e}^{-\frac{2n+3}{2n}\log n}\leq1$. So, one also has $\eta_{k,1}\leq 1$ for any $k$ except $k=1$, where a correct bound is $2$. Similarly, when $l=2$, \begin{align*} \rho_{k,2}&=\frac{\lambda_{3}+k+n}{\lambda_{3}+k+n-2}\,\frac{2\lambda_{3}+2k+2n-1}{2\lambda_{3}+2k+2n-3}\,\prod_{j=3}^{n}\frac{k+j-1+\lambda_{3}-\lambda_{j}}{k+j-3+\lambda_{3}-\lambda_{j}}\,\frac{k+\lambda_{3}+\lambda_{j}+2n+1-j}{k+\lambda_{3}+\lambda_{j}+2n-1-j}\\ &\leq \frac{k+n}{k+n-2}\,\frac{2k+2n-1}{2k+2n-3}\,\prod_{j=3}^{n}\frac{k+j-1}{k+j-3}\,\frac{k+2n+1-j}{k+2n-1-j}= \frac{k+2n-2}{k}\,\frac{k+2n-1}{k+1}\,\frac{2k+2n-1}{2k+2n-3};\\ \eta_{k,2}&\leq \frac{k+2n-2}{k}\,\frac{k+2n-1}{k+1}\,\frac{2k+2n-1}{2k+2n-3}\,\mathrm{e}^{-\frac{2n+2k-2}{n}\log n}. \end{align*} Again, the last bound is decreasing in $k$, smaller than $2+\frac{1}{n}\leq \frac{7}{3}$ when $k=1$ and smaller than $1$ when $k=2$. Hence, $\eta_{k,2}\leq 1$ unless $k=1$, where a correct bound is $\frac{7}{3}$ (and again this situation occurs at most once when making the partition grow). \noindent Conclusion: every quotient $\eta_{k,l}$ satisfies $\eta_{k,l}\leq 1$, with only two exceptions: $k=1$ and $l=1$ or $2$. The product of the bounds on these two exceptions is $2 \times \frac{7}{3}=\frac{14}{3}$, so for every partition $\lambda$, one has indeed $$D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)} =\prod_{l=1}^{n}\prod_{k=1}^{\lambda_{l}-\lambda_{l+1}}\eta_{k,l} \leq \frac{14}{3}. $$ \end{proof} \begin{remark} A small refinement of the previous proof shows that the worst case is in fact the partition $(2,1,0,\ldots,0)_{n}$ --- by that we mean that any other partition has quotients $\rho_{k,l}$ that are smaller. Its dimension is provided by the exact formula $$D^{\lambda}=\frac{8n(n^{2}-1)}{3},$$ so one can replace the bound $\frac{14}{3}$ of Proposition \ref{superboundsymplectic} by $\frac{8}{3}$. \end{remark} \comment{\begin{remark} The main trick in the proof of Proposition \ref{superboundsymplectic} is the way we make our partitions grow.
One could have tried something simpler, namely, make the first part $\lambda_{1}$ grow box by box, then the second part $\lambda_{2}$, \emph{etc.} However, with the same technique of estimation of quotients of dimensions, we were then not able to prove something better than $$D^{\lambda}\leq C\,\mathrm{e}^{-t_{n,0}\,B_{n}(\lambda)},$$ instead of $C\,\mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)}$ --- this is in accordance with the bound $2\alpha\log n$ mentioned in Saloff-Coste's Theorem \ref{window}. By making the partitions grow layer by layer, we use in a much better way the fact that the parts $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ are ordered, and the ``compensations'' in the growth of $D^{\lambda}$ given by the determinantal structure of its formula. \end{remark}} The upper bound \eqref{mainupper} is now an easy consequence of Lemma \ref{partition} and Proposition \ref{superboundsymplectic}. For any partition $\lambda$, notice that $$B_{n}(\lambda)\geq \frac{1}{2n}\sum_{i=1}^{n} (2n+2-2i)\lambda_{i}= \frac{1}{2n} \sum_{i=1}^{n} i(2n+1-i)(\lambda_{i}-\lambda_{i+1})\geq \frac{1}{2}\sum_{i=1}^{n}i(\lambda_{i}-\lambda_{i+1})=\frac{|\lambda|}{2}.$$ From this, one deduces that in the case of compact symplectic groups, $$S_{n}(t_{n,\varepsilon})=\sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} (D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)} \leq \frac{64}{9} \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \mathrm{e}^{-\varepsilon |\lambda|\log n} \leq \frac{320}{9n^{\varepsilon}}\leq \frac{36}{n^{\varepsilon}}$$ if one assumes that $\frac{1}{n^{\varepsilon}} \leq \frac{1}{2}$ (in order to apply Lemma \ref{partition}). By Proposition \ref{scaryseries}, one concludes that $$d_{\mathrm{TV}}^{\mathrm{U}\mathrm{Sp}(n,\mathbb{H})}(\mu_{2(1+\varepsilon)\log n},\mathrm{Haar}) \leq \frac{3}{n^{\frac{\varepsilon}{2}}}.$$ Here one can remove the assumption $\frac{1}{n^{\varepsilon}} \leq \frac{1}{2}$: otherwise, the right-hand side is bigger than $1$ and therefore the inequality is trivially satisfied. This ends the proof of the upper bound in the case of compact symplectic groups. For their quotients, one can still use Proposition \ref{superboundsymplectic}, as follows. For quaternionic Grassmannians, $$S_{n}\left(\frac{t_{n,\varepsilon}}{2}\right)=\sum_{\lambda \in \mathfrak{Y}\mathfrak{Y}_{2q}^{\star}} D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,\varepsilon}}{2}\,B_{n}(\lambda)} \leq \frac{8}{3} \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \mathrm{e}^{-\frac{\varepsilon}{2}|\lambda|\log n} \leq \frac{40}{3n^{\frac{\varepsilon}{2}}}\leq \frac{16}{n^{\frac{\varepsilon}{2}}}$$ assuming $\frac{1}{n^{\frac{\varepsilon}{2}}}\leq \frac{1}{2}$. This implies that $$d_{\mathrm{TV}}^{\mathrm{Gr}(n,q,\mathbb{H})}(\mu_{(1+\varepsilon)\log n},\mathrm{Haar}) \leq \frac{2}{n^{\frac{\varepsilon}{4}}}.$$ Again, the assumption on $n^{\frac{\varepsilon}{2}}$ is superfluous, since otherwise the right-hand side is bigger than $1$. Exactly the same proof works for the spaces $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$, with the same bound (it may be improved by using the fact that one looks only at even partitions). \subsubsection{Odd special orthogonal groups and their quotients}\label{oddupper} Though the same reasoning holds in every case, we unfortunately have to check case by case that everything works. For odd special orthogonal groups $\mathrm{SO}(2n+1,\mathbb{R})$, set $t_{n,\varepsilon}=2\,(1+\varepsilon)\log (2n+1)$, with in particular $t_{n,0}=2\log (2n+1)$. 
The main difference between $\mathrm{SO}(2n+1)$ and $\mathrm{U}\mathrm{Sp}(n)$ is the appearance of half-partitions, which is solved by: \begin{lemma}\label{halfpartition} For any integer partition $\lambda$, denote $\lambda \boxplus \frac{1}{2}$ the half-partition $(\lambda_{1}+\frac{1}{2},\lambda_{2}+\frac{1}{2},\ldots,\lambda_{n}+\frac{1}{2})$. Then, $$\frac{D^{\lambda\boxplus \frac{1}{2}}}{D^{\lambda}}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\left(B_{n}(\lambda\boxplus \frac{1}{2})-B_{n}(\lambda)\right)}\leq \mathrm{e}^{n\left(\log 2-\frac{\log(2n+1)}{4}\right)}\leq 2.$$ \end{lemma} \begin{proof} The quotient of dimensions is $$\prod_{1 \leq i\leq j \leq n} \frac{\lambda_{i}+\lambda_{j}+2n+2-i-j}{\lambda_{i}+\lambda_{j}+2n+1-i-j}\leq\prod_{1 \leq i\leq j \leq n} \frac{2n+2-i-j}{2n+1-i-j}=2^{n},$$ and the difference $\frac{t_{n,0}}{2}\left(B_{n}(\lambda\boxplus \frac{1}{2})-B_{n}(\lambda)\right)$ is equal to $$\frac{\log(2n+1)}{2n+1} \sum_{i=1}^{n}\left(\lambda_{i}+\frac{1}{4}+\frac{2n+1-2i}{2}\right) \geq \frac{\log(2n+1)}{2n+1}\left(\frac{n}{4}+\frac{n^{2}}{2}\right)=\frac{n \log (2n+1)}{4} .$$ This yields the first part of the inequality, and the second part follows from an easy analysis of the variations of the bound with respect to $n$. \end{proof} Then, for any integer partition $\lambda$, one can as before prove a uniform bound on $D^{\lambda}\,\mathrm{e}^{-\log(2n+1)\,B_{n}(\lambda)}$; the differences are tiny, \emph{e.g.}, in many formulas, $2n+2$ is replaced by $2n+1$, or $\frac{1}{2n}$ is replaced by $\frac{1}{2n+1}$. We refer to Appendix \ref{annexsoupper} for these computations. \begin{proposition}\label{tobeprovedinannexsoupper} In the case of odd special orthogonal groups, at cut-off time, $$D^{\lambda}\, \mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)}\leq \frac{11}{10}$$ for any integer partition $\lambda$ of length $n$. For half-integer partitions, the bound is replaced by $\frac{11}{5}$. \end{proposition} There is one last computation that needs to be done, namely, the special case $\lambda=(\frac{1}{2},\ldots,\frac{1}{2})_{n}=(0,\ldots,0)_{n}\boxplus \frac{1}{2}$ --- it corresponds to the \emph{spin representation} of $\mathrm{SO}(2n+1,\mathbb{R})$. The value of $B_{n}(\lambda)$ is then $\frac{n}{4}$, and $D^{\lambda}=2^{n}$. Thus, in this special case, $$(D^{\lambda})^{2} \,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)} \leq \mathrm{e}^{n\log 4-\frac{n\log (2n+1)}{2}}\,\mathrm{e}^{-\frac{\varepsilon n \log(2n+1)}{2}}\leq \frac{11}{4}\,\frac{1}{(2n+1)^{\varepsilon}}$$ for every $n \geq 5$.
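The last two estimates (the second inequality in Lemma \ref{halfpartition} and the bound $\frac{11}{4}$ above) are elementary one-variable inequalities; if desired, they can be double-checked numerically, for instance with the following small Python sketch (ours, a mere sanity check):
\begin{verbatim}
from math import log, exp

# second part of Lemma `halfpartition':
# exp(n (log 2 - log(2n+1)/4)) <= 2 for every n >= 1
assert all(exp(n * (log(2) - log(2 * n + 1) / 4)) <= 2
           for n in range(1, 5000))

# spin representation: 4^n (2n+1)^(-n/2) <= 11/4 for every n >= 5
assert all(exp(n * log(4) - n * log(2 * n + 1) / 2) <= 11 / 4
           for n in range(5, 5000))

print("both inequalities hold on the tested range")
\end{verbatim}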
On the other hand, $$B_{n}(\lambda)=\frac{1}{2n+1}\sum_{i=1}^{n}\left(\lambda_{i}^{2}+i(2n-i)(\lambda_{i}-\lambda_{i+1})\right) \geq \frac{|\lambda|}{2n+1}+\frac{n}{2n+1}\sum_{i=1}^{n}i(\lambda_{i}-\lambda_{i+1})=\frac{(n+1)|\lambda|}{2n+1}\geq \frac{|\lambda|}{2},$$ so we can now write: \begin{align*} S_{n}(t_{n,\varepsilon}) &\leq \frac{11}{4}\,\frac{1}{(2n+1)^{\varepsilon}}+ \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} (D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)}+ (D^{\lambda \boxplus \frac{1}{2}})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda\boxplus \frac{1}{2})}\\ &\leq\frac{11}{4}\,\frac{1}{(2n+1)^{\varepsilon}}+ \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \left((D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,0}\,B_{n}(\lambda)}+ (D^{\lambda \boxplus \frac{1}{2}})^{2}\, \mathrm{e}^{-t_{n,0}\,B_{n}(\lambda\boxplus \frac{1}{2})}\right)\mathrm{e}^{-2\varepsilon\log(2n+1)\,B_{n}(\lambda)}\\ &\leq\frac{11}{4}\,\frac{1}{(2n+1)^{\varepsilon}}+ \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \left(\frac{121}{100}+\frac{121}{25}\right)\mathrm{e}^{-\varepsilon|\lambda|\log(2n+1)}\\ &\leq\frac{11}{4}\,\frac{1}{(2n+1)^{\varepsilon}}+ \frac{121}{20}\sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \frac{1}{(2n+1)^{\varepsilon|\lambda|}} \leq \frac{33}{(2n+1)^{\varepsilon}}\leq \frac{144}{(2n+1)^{\varepsilon}} \end{align*} if one assumes $\frac{1}{(2n+1)^{\varepsilon}}\leq \frac{1}{2}$. Thus, by Proposition \ref{scaryseries}, $$d_{\mathrm{TV}}^{\mathrm{SO}(2n+1,\mathbb{R})}(\mu_{2\,(1+\varepsilon)\log (2n+1)},\mathrm{Haar})\leq \frac{6}{(2n+1)^{\frac{\varepsilon}{2}}},$$ and again we can now remove the assumption $\frac{1}{(2n+1)^{\varepsilon}}\leq \frac{1}{2}$. The same technique applies to odd real Grassmannians, with $$ S_{n}\left(\frac{t_{n,\varepsilon}}{2}\right)= \sum_{\lambda \in (2\mathfrak{Y}_{q}\sqcup 2\mathfrak{Y}_{q} \boxplus 1)^{\star}} D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,\varepsilon}}{2}\,B_{n}(\lambda)}\leq \frac{11}{10}\sum_{\lambda \in \mathfrak{Y}_{n}^{\star}}\mathrm{e}^{-\frac{\varepsilon}{2} |\lambda|\log(2n+1)} \leq \frac{55}{10(2n+1)^{\frac{\varepsilon}{2}}}\leq \frac{16}{(2n+1)^{\frac{\varepsilon}{2}}},$$ and therefore $$ d_{\mathrm{TV}}^{\mathrm{Gr}(2n+1,q,\mathbb{R})}(\mu_{(1+\varepsilon)\log (2n+1)},\mathrm{Haar})\leq \frac{2}{(2n+1)^{\frac{\varepsilon}{4}}}.$$ \subsubsection{Even special orthogonal groups and their quotients} Though the computations have to be done once again, we shall prove exactly the same bounds as before for even special orthogonal groups and even real Grassmannians. Denote $t_{n,\varepsilon}=2\,(1+\varepsilon)\log(2n)$. The possibility of a sign $\pm$ for the last part $\lambda_{n}$ of the partitions leads to a coefficient $2$ in the series $S_{n}(t)$, and on the other hand, the case of half-partitions is reduced to the case of partitions by way of an analogue of Lemma \ref{halfpartition}. Indeed, $$\frac{D^{\lambda\boxplus \frac{1}{2}}}{D^{\lambda}}\,\mathrm{e}^{-\frac{t_{n,0}}{2}\left(B_{n}(\lambda\boxplus \frac{1}{2})-B_{n}(\lambda)\right)}\leq \mathrm{e}^{n\log 2-\frac{(2n-1)\log(2n)}{8}}\leq \frac{12}{5}$$ for any $n \geq 5$ and any partition. Again, we put the proof of the following Proposition at the end of the paper, in Appendix \ref{annexsouppereven}. \begin{proposition}\label{tobeprovedinannexsouppereven} In the case of even special orthogonal groups, at cut-off time, $$D^{\lambda}\, \mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)}\leq \frac{4}{3} \quad\big(\text{respectively},\,\,\,\frac{48}{15}\big)$$ for any integer partition (resp.
any half-partition) $\lambda$ of length $n$. \end{proposition} Besides, the same proof as in the case of odd special orthogonal groups shows that $B_{n}(\lambda)\geq \frac{|\lambda|}{2}$ for any partition. For the special half-partition $\lambda = (0,\ldots,0)_{n}\boxplus \frac{1}{2}$ that cannot be treated by combining Lemmas \ref{partition} and \ref{halfpartition}, one has $D^{\lambda}=2^{n-1}$ and $B_{n}(\lambda)=\frac{n}{4}$, hence $$(D^{\lambda})^{2} \,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)} \leq \mathrm{e}^{(n-1)\log 4-\frac{n\log (2n)}{2}}\,\mathrm{e}^{-\frac{\varepsilon n \log(2n)}{2}}\leq \frac{1}{(2n)^{\varepsilon}}$$ for $n \geq 5$. We conclude that \begin{align*} \frac{1}{2} S_{n}(t_{n,\varepsilon}) &\leq \frac{1}{(2n)^{\varepsilon}}+ \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} (D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)}+ (D^{\lambda \boxplus \frac{1}{2}})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda\boxplus \frac{1}{2})}\\ &\leq\frac{1}{(2n)^{\varepsilon}}+ \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}} \left(\frac{16}{9}+\frac{2304}{225}\right)\mathrm{e}^{-\varepsilon|\lambda|\log(2n)}\leq\frac{2749}{45(2n)^{\varepsilon}}\leq \frac{72}{(2n)^{\varepsilon}}, \end{align*} and therefore, by Proposition \ref{scaryseries}, $$d_{\mathrm{TV}}^{\mathrm{SO}(2n,\mathbb{R})}(\mu_{2(1+\varepsilon)\log (2n)},\mathrm{Haar})\leq \frac{6}{(2n)^{\frac{\varepsilon}{2}}}.$$ For even real Grassmannian varieties, $$S_{n}\left(\frac{t_{n,\varepsilon}}{2}\right)=\sum_{\lambda \in (2\mathfrak{Y}_{q}\sqcup 2\mathfrak{Y}_{q} \boxplus 1)^{\star}}D^{\lambda}\,\mathrm{e}^{-\frac{t_{n,\varepsilon}}{2}\,B_{n}(\lambda)} \leq \frac{4}{3} \sum_{\lambda \in \mathfrak{Y}_{n}^{\star}}\mathrm{e}^{-\frac{\varepsilon}{2}|\lambda|\log(2n)} \leq \frac{20}{3(2n)^{\frac{\varepsilon}{2}}}\leq \frac{16}{(2n)^{\frac{\varepsilon}{2}}},$$ and again, the total variation distance is bounded by $2/(2n)^{\frac{\varepsilon}{4}}$. So, the inequalities take the same form for even and odd special orthogonal groups or real Grassmannians, and the proof of the upper bound in this case is done. The same inequality holds also for the spaces of structures $\mathrm{SO}(2n)/\mathrm{U}(n)$. \subsubsection{Special unitary groups and their quotients}\label{unitaryupper} Set $t_{n,\varepsilon}=2(1+\varepsilon)\log n$. For special unitary groups, Weyl's dimension formula fortunately takes a much simpler form than before, but on the other hand, the computations on $B_{n}(\lambda)$ are this time a little more subtle. We shall still prove that almost every quotient $\eta_{k,l}$ of the quantities $D^{\lambda}\,\mathrm{e}^{-t_{n,0}\,B_{n}(\lambda)}$ with $\lambda$ going from $$ (\lambda_{l+1}+k-1,\ldots,\lambda_{l+1}+k-1,\lambda_{l+1},\ldots,\lambda_{n-1})_{n-1}\quad \text{to}\quad (\lambda_{l+1}+k,\ldots,\lambda_{l+1}+k,\lambda_{l+1},\ldots,\lambda_{n-1})_{n-1}$$ is smaller than $1$; but in practice, what will happen is that the negative exponentials may be much larger than before, whereas the quotients of dimensions $\rho_{k,l}$ will be much smaller. Consider for a start $\eta_{k,n-1}$. One has $$\rho_{k,n-1}=\prod_{i=1}^{n-1}\frac{k+n-i}{k-1+n-i}=\frac{k+n-1}{k},$$ whereas $B_{n}(\lambda)$ is changed by $\frac{(n-1)(n+2k-1)}{n^{2}}$.
So, $$\eta_{k,n-1}=\frac{k+n-1}{k}\,\mathrm{e}^{-\frac{(n-1)(n+2k-1)}{n^{2}} \log n}\leq \begin{cases} n\,\mathrm{e}^{-\frac{n^{2}-1}{n^{2}}\log n}=\mathrm{e}^{\frac{\log n}{n^{2}}} \leq 2^{\frac{1}{4}} &\text{if }k=1,\\ \frac{n+1}{2}\,\mathrm{e}^{-\frac{n^{2}+2n-3}{n^{2}}\log n} \leq \frac{n+1}{2n} \leq 1 &\text{if }k\geq 2, \end{cases}$$ by using the decreasing behavior with respect to $k$. Notice that $\rho_{1,n-1}$ is indeed much smaller than before (linear in $n$ whereas before it grew exponentially in $n$), but $B_{n}(\lambda)$ for $k=1$ is almost constant instead of linear in $n$. In the general case, $$\rho_{k,l}=\prod_{j=l+1}^{n}\frac{k'-\lambda_{j}+j-1}{k'-\lambda_{j}+j-l-1}\leq \prod_{j=l+1}^{n}\frac{k+j-1}{k+j-l-1}$$ with the usual notation $k'=k+\lambda_{l+1}$. On the other hand, the transformation on partitions makes $B_{n}(\lambda)$ change by $$\frac{-l(n-l)(n+2k'-1)+2l |\lambda|_{l+1,n}}{n^{2}},$$ where $|\lambda|_{l+1,n}$ is the restricted size $\sum_{j=l+1}^{n}\lambda_{j}$. Notice now that $$-(n-l)k'+|\lambda|_{l+1,n}=\sum_{j=l+1}^{n}\lambda_{j}-\lambda_{l+1}-k \leq \sum_{j=l+1}^{n} -k = -(n-l)k.$$ So, $$ \eta_{k,l} \leq \prod_{j=l+1}^{n}\frac{k+j-1}{k+j-l-1}\,\,\mathrm{e}^{-\frac{l(n-l)(n+2k-1)}{n^{2}}\log n} \leq \binom{n}{l}\,\mathrm{e}^{-\frac{l(n-l)(n+1)}{n^{2}}\log n}$$ which can as usual be estimated by Stirling's formula (the same kind of computation as before). Hence, with $l \geq 3$, the last bound is always smaller than $1$, and also if $l=2$ unless $n=4$. If $n=4$ and $l=2$, then $$\eta_{k,2}\leq \frac{(k+2)(k+3)}{k(k+1)}\,\mathrm{e}^{-\frac{3+2k}{2}\log2}\leq \begin{cases} \frac{3}{2^{3/2}} &\text{if }k=1,\\ 1&\text{if }k\geq 2. \end{cases}$$ Finally, when $l=1$, one has exactly the same bound as for $l=n-1$, so $2^{\frac{1}{4}}$ when $k=1$ and $1$ for $k\geq 2$. Multiplying together all the bounds ($3/2^{\frac{3}{2}}$ and twice $2^{\frac{1}{4}}$), we obtain: \begin{proposition} In the case of special unitary groups, at cut-off time, $$D^{\lambda}\, \mathrm{e}^{-\frac{t_{n,0}}{2}\,B_{n}(\lambda)}\leq \frac{3}{2}$$ for any integer partition $\lambda$ of length $n-1$. \end{proposition} Another big difference with the previous cases is that one cannot use Lemma \ref{partition} anymore. Indeed, for $\lambda=(k,\ldots,k)_{n-1}$, $B_{n}(\lambda)=\frac{k(n-1)}{n}=\frac{|\lambda|}{n}$, so there is no hope of obtaining an inequality of the type $B_{n}(\lambda)\geq \alpha\,|\lambda|$ for any partition. That said, set $\delta_{i}=\lambda_{i}-\lambda_{i+1}$; then, $$B_{n}(\lambda)=\frac{1}{n^{2}}\sum_{1 \leq i<j\leq n}(\lambda_{i}-\lambda_{j})^{2}+\frac{1}{n}\sum_{i=1}^{n-1}i(n-i)\,\delta_{i}\geq \sum_{i=1}^{n-1} \frac{i(n-i)}{n}\,\delta_{i}.$$ This leads us to study the series $$T_{n}(x)=\sum_{\delta_{1},\ldots,\delta_{n-1} \geq 0} x^{\sum_{i=1}^{n-1} \frac{i(n-i)}{n}\delta_{i}}=\prod_{i=1}^{n-1} \frac{1}{1-x^{\frac{i(n-i)}{n}}}.$$ Clearly, each $T_{n}(x)$ is convex on $\mathbb{R}_{+}$, so if we can show for example that $T_{n}\left(\frac{1}{8}\right)$ stays smaller than $1+\frac{K}{8}$ for every $n$, then we will also have the inequality $T_{n}(x)\leq 1+Kx$ for every $0\leq x \leq \frac{1}{8}$. Set $U_{n}(x)=\log(T_{n}(x))$; since $-\log(1-y)\leq \frac{y}{1-y}$ for $0\leq y<1$ and since each $x^{\frac{i(n-i)}{n}}$ is at most $x^{\frac{1}{2}}$, one has $$U_{n}(x)=\sum_{i=1}^{n-1}-\log\left(1-x^{\frac{i(n-i)}{n}}\right) \leq \frac{1}{1-x^{\frac{1}{2}}}\sum_{i=1}^{n-1}x^{\frac{i(n-i)}{n}}\leq \frac{2}{1-x^{\frac{1}{2}}} \sum_{i=1}^{\lfloor\frac{n}{2}\rfloor} x^{\frac{i}{2}} \leq \frac{2\,x^{\frac{1}{2}}}{\left(1-x^{\frac{1}{2}}\right)^{2}}\leq \frac{2}{1-x^{\frac{1}{2}}}$$ for $0 \leq x \leq \frac{1}{8}$, the last inequality because $x^{\frac{1}{2}}\leq \frac{1}{2}$. It follows that $T_{n}(x)\leq 1+Kx$ with $K\leq 169$ (indeed, $8\,\big(\mathrm{e}^{2/(1-8^{-1/2})}-1\big)\leq 169$). Suppose $\frac{1}{n^{2\varepsilon}}\leq \frac{1}{8}$.
Then, \begin{align*} S_{n}(t_{n,\varepsilon})&= \sum_{\lambda \in \mathfrak{Y}_{n-1}^{\star}}(D^{\lambda})^{2}\,\mathrm{e}^{-t_{n,\varepsilon}\,B_{n}(\lambda)}\leq \frac{9}{4}\sum_{\lambda \in \mathfrak{Y}_{n-1}^{\star}} \left(\frac{1}{n^{2\varepsilon}}\right)^{B_{n}(\lambda)} \leq \frac{9}{4} \left(T_{n}\!\left(\frac{1}{n^{2\varepsilon}}\right)-1\right)\leq \frac{1521}{4n^{2\varepsilon}}\leq \frac{400}{n^{2\varepsilon}}, \end{align*} which leads to $$d_{\mathrm{TV}}^{\mathrm{SU}(n,\mathbb{C})}(\mu_{2(1+\varepsilon)\log n},\mathrm{Haar})\leq \frac{10}{n^{\varepsilon}}.$$ If $\frac{1}{n^{2\varepsilon}}\geq \frac{1}{8}$, then this inequality is also trivially satisfied. Hence, the case of special unitary groups is done. For the quotients $\mathrm{SU}(n)/\mathrm{SO}(n)$, one obtains $$S_{n}\left(\frac{t_{n,\varepsilon}}{2}\right) \leq \frac{3}{2}\left( T_{n}\!\left(\frac{1}{n^{\varepsilon}}\right)-1 \right)\leq \frac{507}{2n^{\varepsilon}} \leq \frac{256}{n^{\varepsilon}}$$ and therefore $$d_{\mathrm{TV}}^{\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})}(\mu_{(1+\varepsilon)\log n},\mathrm{Haar})\leq \frac{8}{n^{\frac{\varepsilon}{2}}}.$$ The proof is exactly the same for $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$ and gives the same inequality, however with $(2n)^{\frac{\varepsilon}{2}}$ instead of $n^{\frac{\varepsilon}{2}}$. For the complex Grassmannian varieties, we have seen that it was easier to see them as quotients of $\mathrm{U}(n)$ (instead of $\mathrm{SU}(n)$), and this forces us to do some additional computations. Though the cut-off phenomenon also holds in the case of $\mathrm{U}(n)$, the set of irreducible representations is then labelled by sequences of possibly negative integers, which makes our scheme of growth of partitions a little bit more cumbersome to apply. Fortunately, for Grassmannians, the spherical representations can be labelled by true partitions, but then the dimensions are given by a different formula and we have to do once again the estimates of quotients $\rho_{k,l}$ and $\eta_{k,l}$. We refer to Appendix \ref{annexsuupper} for a proof of the following: $$A_{n}(\lambda)\,\mathrm{e}^{-\log n\,B_{n}(\lambda)}\leq 1$$ for any partition. Then, one can compare directly $B_{n}(\lambda)$ to $|\lambda|$: $$B_{n}(\lambda)=\frac{2}{n}\sum_{i=1}^{p}\left(\lambda_{i}^{2}+(n+1-2i)\lambda_{i}\right) \geq 2\sum_{i=1}^{p}\frac{i(n-i)}{n}(\lambda_{i}-\lambda_{i+1}) \geq \sum_{i=1}^{p}i(\lambda_{i}-\lambda_{i+1})=|\lambda|.$$ We conclude that $$S_{n}\left(\frac{t_{n,\varepsilon}}{2}\right)\leq \sum_{\lambda \in \mathfrak{Y}_{q}^{*}} \mathrm{e}^{-\varepsilon|\lambda|\log n} \leq \frac{5}{n^{\varepsilon}}\leq \frac{16}{n^{\varepsilon}}\quad;\quad d_{\mathrm{TV}}^{\mathrm{Gr}(n,q,\mathbb{C})} (\mu_{(1+\varepsilon)\log n},\mathrm{Haar})\leq \frac{2}{n^{\frac{\varepsilon}{2}}}$$ and this ends the proof of all the upper bounds of type \eqref{mainupper}. \section{Lower bounds before the cut-off time}\label{lower} The proofs of the lower bounds before cut-off time rely on the following simple ideas. Denote $\lambda_{\min}$ the (spherical) irreducible representation ``of minimal eigenvalue'' identified in Section \ref{order}. We then consider the random variable: \begin{equation} \Omega=\begin{cases}\chi^{\lambda_{\min}}(k) &\text{in the case of groups},\\ \sqrt{D^{\lambda_{\min}}}\,\phi^{\lambda_{\min}}(gK)&\text{in the case of symmetric spaces of type non-group}.
\end{cases}\label{discrimination} \end{equation} In this equation, $k$ or $gK$ will be taken at random either under the Haar measure of the space, or under a marginal law $\mu_{t}$ of the Brownian motion; we shall denote $\mathbb{E}_{\infty}$ and $\mathbb{E}_{t}$ the corresponding expectations. When $\Omega$ is real-valued, we also denote $\mathrm{Var}_{\infty}$ and $\mathrm{Var}_{t}$ the corresponding variances: $$\mathrm{Var}[\Omega]=\mathbb{E}[\Omega^{2}]-\mathbb{E}[\Omega]^{2}=\mathbb{E}\!\left[(\Omega-\mathbb{E}[\Omega])^{2}\right].$$ In the case of unitary groups and their quotients, $\Omega$ will be complex-valued, and we shall use the notations $\mathrm{Var}_{\infty}$ and $\mathrm{Var}_{t}$ for the expectation of the square \emph{of the modulus} of $\Omega-\mathbb{E}[\Omega]$: $$\mathrm{Var}[\Omega]=\mathbb{E}\!\left[|\Omega|^{2}\right]-\left|\mathbb{E}[\Omega]\right|^{2}=\mathbb{E}\!\left[|\Omega-\mathbb{E}[\Omega]|^{2}\right].$$ The normalization of Equation \eqref{discrimination} is actually chosen so that, in every case, $\Omega$ has mean $0$ and variance $1$ under the Haar measure. \begin{remark} In fact, much more is known about the asymptotic distribution of these functions under Haar measure, when $n$ goes to infinity; see \cite{DS94}. For instance, over the unitary groups, the moments of order smaller than $n_{0}$ of $\chi^{(1,0,\ldots,0)}(g)=\mathrm{tr}\, g$ agree with those of a standard complex Gaussian variable as soon as $n$ is bigger than $n_{0}$. In particular, if $g$ is distributed according to the Haar measure of $\mathrm{U}(n,\mathbb{C})$, then $\mathrm{tr}\, g$ converges (without any normalization) towards a standard complex Gaussian variable. One has similar results for orthogonal and symplectic groups, this time with standard real Gaussian variables. As far as we know, the same problem with spherical functions on the classical symmetric spaces is still open, and certain computations performed in this section are related to this question. \end{remark} We will also prove that under a marginal law $\mu_{t}$, the variance of $\Omega$ stays small for every value of $t$, whereas its mean before cut-off time is large (not at all close to zero). Standard moment methods then allow one to prove that the probability of an event $$E_{\alpha}=\{k\,\,|\,\,|\Omega(k)| \geq\alpha \}\quad\text{or}\quad\{gK\,\,|\,\,|\Omega(gK)| \geq\alpha \}$$ is, before cut-off time, close to $1$ under $\mu_{t}$ and close to $0$ under the Haar measure (for an adequate choice of $\alpha$). This is sufficient to prove the lower bounds, see \S\ref{bienayme}; in other words, $\Omega$ is a discriminating random variable for the cut-off phenomenon. The method presented above reduces the problem mainly to the expansion in irreducible characters or in spherical zonal functions of $\Omega^{2}$ or of $|\Omega|^{2}$; \emph{cf.} \S\ref{zonal}. In the case of compact groups, this simply amounts to understanding the tensor product of $V^{\lambda_{\min}}$ with itself, or with its conjugate when the character $\Omega$ is complex-valued. However, for compact symmetric spaces of type non-group, this is far less obvious.
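Before computing these expansions, let us make the discrimination mechanism concrete. The precise statement is the object of \S\ref{bienayme}; the following Python sketch (ours, schematic, and only valid under the stated normalization $\mathbb{E}_{\infty}[\Omega]=0$, $\mathrm{Var}_{\infty}[\Omega]=1$) illustrates how a Bienaym\'e--Chebyshev bound applied to both measures turns estimates on $\mathbb{E}_{t}[\Omega]$ and $\mathrm{Var}_{t}[\Omega]$ into a lower bound on the total variation distance:
\begin{verbatim}
def tv_lower_bound(mean_t, var_t, alpha):
    # Lower bound on d_TV(mu_t, Haar) coming from the event
    # E_alpha = { |Omega| >= alpha }, for 0 < alpha < |mean_t|,
    # assuming E_infty[Omega] = 0 and Var_infty[Omega] = 1.
    assert 0 < alpha < abs(mean_t)
    haar_mass = 1 / alpha ** 2                           # Haar(E_alpha) <= 1/alpha^2
    mu_t_mass = 1 - var_t / (abs(mean_t) - alpha) ** 2   # mu_t(E_alpha) >= this
    return max(mu_t_mass - haar_mass, 0.0)

# before cut-off time, the mean is large while the variance stays bounded:
print(tv_lower_bound(mean_t=10.0, var_t=1.0, alpha=5.0))   # about 0.92
\end{verbatim}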
Notice that a zonal spherical function $\phi^{\lambda}$ can be uniquely characterized by the following properties: \begin{itemize} \item it is a linear combination of matrix coefficients of the representation $V^{\lambda}$: $$\phi^{\lambda}(gK)=\sum_{i=1}^{D^{\lambda}}\sum_{j=1}^{D^{\lambda}} c^{ij} \rho^{\lambda}_{ij}(gK).$$ \item it is in $\mathscr{L}^{2}(G/K)^{K}$, \emph{i.e.}, it is $K$-bi-invariant; and it is normalized so that $\phi^{\lambda}(eK)=1$. \end{itemize} Consequently, if $(V^{\lambda_{\min}})^{\otimes 2}=V^{\nu_{1}}\oplus \cdots \oplus V^{\nu_{s}} \oplus V^{\epsilon_{1}}\oplus \cdots \oplus V^{\epsilon_{t}}$ with the $V^{\nu_{i}}$ spherical irreducible representations and the $V^{\epsilon_{j}}$ non-spherical irreducible representations, then there exists an expansion \begin{equation} (\phi^{\lambda_{\min}})^{2}=c_{\nu_{1}}\phi^{\nu_{1}}+c_{\nu_{2}}\phi^{\nu_{2}}+\cdots+c_{\nu_{s}}\phi^{\nu_{s}}. \label{squarezonal} \end{equation} Nonetheless, it seems difficult to guess at the same time the values of the coefficients $c_{\nu}$ in this expansion. The only ``easy'' computation is the coefficient of the constant function in $(\phi^{\lambda})^{2}$, or more generally in a product $\phi^{\lambda}\,\phi^{\rho}$: $$c_{\phi^{\mathbf{1}_{G}}}[\phi^{\lambda}\,\phi^{\rho}] = \int_{X} \phi^{\lambda}(x)\,\phi^{\rho}(x)\,dx=\begin{cases} 0 &\text{if }\phi^{\rho}\neq \overline{\phi^{\lambda}},\\ \frac{1}{D^{\lambda}}&\text{otherwise}.\end{cases}$$ \comment{\begin{example} Let us examine in detail the case of the complex projective line $$\mathbb{P}^{1}(\mathbb{C})=\mathrm{SU}(2,\mathbb{C})/\mathrm{S}(\mathrm{U}(1,\mathbb{C})\times \mathrm{U}(1,\mathbb{C}))=\mathrm{U}(2,\mathbb{C})/(\mathrm{U}(1,\mathbb{C})\times \mathrm{U}(1,\mathbb{C})).$$ As we shall see later, the label $\lambda_{\min}=(1,0,\ldots,0,-1)_{n}$ of the discriminating representation in the case of a complex Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{C})=\mathrm{U}(n,\mathbb{C})/(\mathrm{U}(p,\mathbb{C})\times \mathrm{U}(q,\mathbb{C}))$ corresponds to the adjoint representation of $\mathrm{SU}(n,\mathbb{C})$ on $\mathfrak{sl}(n,\mathbb{C})$. We use the traditional embedding of $\mathrm{U}(p,\mathbb{C})\times \mathrm{U}(q,\mathbb{C})$ into $\mathrm{U}(n,\mathbb{C})$ by block diagonal matrices of sizes $p=n-q$ and $q$. 
A scalar product on $\mathfrak{sl}(n,\mathbb{C})$ for which $\mathrm{SU}(n,\mathbb{C})$ acts by isometry is $\scal{M}{N}=\mathrm{tr}\, MN^{\dagger}$; and a spherical vector for the subgroup $\mathrm{S}(\mathrm{U}(p,\mathbb{C})\times \mathrm{U}(q,\mathbb{C}))$ is $$M_{p,q}=e^{(1,0,\ldots,0,-1)_{n}}=\frac{1}{\sqrt{npq}}\begin{pmatrix} -q\,I_{p} & 0 \\ 0 & p\,I_{q}\end{pmatrix}.$$ As a consequence, if $(g_{ij})_{1 \leq i,j \leq n}$ are the coefficients of a matrix $g \in \mathrm{SU}(n,\mathbb{C})$, then the spherical function $\phi^{(1,0,\ldots,0,-1)_{n}}$ writes as \begin{align} \phi^{(1,0,\ldots,0,-1)_{n}}(g)&=\mathrm{tr}(M_{p,q}(gM_{p,q}g^{-1})^{\dagger})=\mathrm{tr}(M_{p,q}gM_{p,q}g^{-1})\nonumber \\ &\!\!\!\!\!\!\!\!=\frac{q}{np}\left(\sum_{i=1}^{p}\sum_{j=1}^{p}|g_{ij}|^{2}\right)+\frac{p}{nq}\left(\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}|g_{ij}|^{2}\right)-\frac{1}{n}\left(\sum_{i=1}^{p}\sum_{j=p+1}^{n}|g_{ij}|^{2}+\sum_{i=p+1}^{n}\sum_{j=1}^{p}|g_{ij}|^{2}\right)\nonumber \\ &\!\!\!\!\!\!\!\!=\frac{1}{p}\left(\sum_{i=1}^{p}\sum_{j=1}^{p}|g_{ij}|^{2}\right)+\frac{1}{q}\left(\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}|g_{ij}|^{2}\right)-1\label{sphericalcomplexgrass} \end{align} by using on the third line the fact that rows and columns of a unitary matrix are of norm $1$. In particular, the random variable $\Omega$ is real-valued. Now, it can be shown (by a calculation with Schur functions) that \begin{align*}(V^{(1,0,\ldots,0,-1)_{n}})^{\otimes 2}&=V^{(0,\ldots,0)_{n}}\oplus V^{(2,0,\ldots,0,-2)_{n}} \oplus \mathbf{1}_{n\geq 3} V^{(1,0,\ldots,0,-1)_{n}} \oplus \mathbf{1}_{n\geq 4} V^{(1,1,0,\ldots,0,-1,-1)_{n}}\\ & \quad \oplus V^{(1,0,\ldots,0,-1)_{n}} \oplus \mathbf{1}_{n\geq 3} V^{(2,0,\ldots,0,-1,-1)_{n}} \oplus \mathbf{1}_{n\geq 3} V^{(1,1,0,\ldots,0,-2)_{n}}. \end{align*} Actually, the first line is the decomposition in irreducibles of the symmetric square $\mathcal{S}^{2}(\mathfrak{sl}(n,\mathbb{C}))$, whereas the second line is the decomposition of the skew-symmetric square $\mathcal{A}^{2}(\mathfrak{sl}(n,\mathbb{C}))$. The two last irreducible representations are not spherical, so there should be an expansion of the form $$(\phi^{(1,0,\ldots,0,-1)_{n}})^{2}=\frac{1}{n^{2}-1}+a\,\phi^{(1,0,\ldots,0,-1)_{n}}+b\,\phi^{(2,0,\ldots,0,-2)_{n}}+c\,\phi^{(1,1,0,\ldots,0,-1,-1)_{n}}.$$ But since one does not know \emph{a priori} what are the spherical functions $\phi^{(2,0,\ldots,0,-2)_{n}}$ and $\phi^{(1,1,0,\ldots,0,-1,-1)_{n}}$, it seems really hard to find by this direct algebraic approach the coefficients $a,b,c$. In the case of $\mathbb{P}^{1}(\mathbb{C})$, one can proceed as follows. The matrices of $\mathrm{SU}(2,\mathbb{C})$ write uniquely as $\left(\begin{smallmatrix} w & -\overline{z} \\ z&\overline{w} \end{smallmatrix}\right)$ with $|w|^{2}+|z|^{2}=1$, and the zonal spherical function $\phi^{1,-1}$ is then $$\phi^{1,-1}(w,z)=2|w|^{2}-1.$$ Denote $H=E_{11}-E_{22}$, $X=E_{12}$ and $Y=E_{21}$ the generators of the complex Lie algebra $\mathfrak{sl}(2,\mathbb{C})$; the decomposition $(\mathfrak{sl}(2,\mathbb{C}))^{\otimes 2}=V^{0,0}\oplus V^{2,-2}\oplus V^{1,-1}$ can be seen as a decomposition in irreducible $\mathfrak{sl}(2,\mathbb{C})$-modules. The first space is generated by the Casimir element $$C=2\sum_{i,j=1}^{2}E_{ij}\otimes E_{ji} - \sum_{i,j=1}^{2}E_{ii}\otimes E_{jj}$$ and it corresponds to the constant function. 
The second space is generated by the five symmetric tensors \begin{align*} S_{a}&=3\,\mathcal{S}(X, Y)-C\quad;\quad S_{b}=\mathcal{S}(H, X)\quad;\quad S_{c}=\mathcal{S}(H, Y)\quad;\\ S_{d}&=\mathcal{S}(X, X) \quad;\quad S_{e}=\mathcal{S}(Y,Y). \end{align*} where $\mathcal{S}(V,W)=V\otimes W+W\otimes V$. The action of $H$ reads then as: $$H \cdot S_{a}=0 \quad;\quad H\cdot S_{b}=2S_{b} \quad;\quad H\cdot S_{c}=-2S_{c} \quad;\quad H \cdot S_{d}=4 S_{d}\quad;\quad H \cdot S_{e}=-4 S_{e}.$$ As $H$ generates linearly the subalgebra $\mathfrak{s}(\mathfrak{gl}(1,\mathbb{C})\times\mathfrak{gl}(1,\mathbb{C}))$, it follows that the spherical vector associated to $V^{2,-2}$ is up to a scalar constant $$ S_{a}=E_{11}\otimes E_{22}+E_{22}\otimes E_{11}+E_{12}\otimes E_{21}+E_{21}\otimes E_{12}-E_{11}\otimes E_{11}-E_{22}\otimes E_{22}=\mathcal{S}(X,Y)-\frac{\mathcal{S}(H,H)}{2} $$ and therefore, $\phi^{2,-2}(w,z)=1-6|wz|^{2}=1-6|w|^{2}+6|w|^{4}$. So, one sees that $$(\phi^{1,-1})^{2}=\frac{1}{3}\,\phi^{0,0}+\frac{2}{3}\,\phi^{2,-2},$$ and in particular the spherical function $\phi^{1,-1}$ does not appear in the right-hand side. Now, it seems quite hard to generalize this argument to the general case of $\mathfrak{sl}(n,\mathbb{C})$ acting on $(\mathfrak{sl}(n,\mathbb{C}))^{\otimes 2}$. Indeed, one would first need a description of each irreducible submodule\footnote{Even the identification of the irreducible component $V^{(1,0,\ldots,0,-1)_{n}}$ inside $\mathcal{S}^{2}(\mathfrak{sl}(n,\mathbb{C}))$ and $\mathcal{A}^{2}(\mathfrak{sl}(n,\mathbb{C}))$ is not easy. Namely, one can show that the first space is linearly generated by the symmetric tensors \begin{align*} \mathcal{S}H_{k} &= \sum_{i=1}^{n}\mathcal{S}(E_{ik},E_{ki})-\mathcal{S}(E_{i(k+1)},E_{(k+1)i})-\frac{2}{n}\sum_{i=1}^{n}\mathcal{S}(E_{kk}-E_{(k+1)(k+1)},E_{ii});\\ \mathcal{S}E_{kl}&= \sum_{i = 1}^{n} \mathcal{S}(E_{il},E_{ki}) -\frac{2}{n}\sum_{i=1}^{n}\mathcal{S}(E_{kl},E_{ii}), \end{align*} whereas the second space is generated by the skew-symmetric tensors $$ \mathcal{A}H_{k} = \sum_{i=1}^{n}\mathcal{A}(E_{ik},E_{ki})-\mathcal{A}(E_{i(k+1)},E_{(k+1)i})\qquad;\qquad\mathcal{A}E_{kl} = \sum_{i=1}^{n}\mathcal{A}(E_{il},E_{ki}), $$ with $\mathcal{A}(V,W)=V\otimes W-W\otimes V$. Thus, the spherical vectors of label $(2,0,\ldots,0,-2)_{n}$ and $(1,1,0,\ldots,0,-1,-1)_{n}$ lie in the orthogonal in $\mathcal{S}^{2}(\mathfrak{sl}(n,\mathbb{C}))$ of the subspace generated by the $\mathcal{S}H_{k}$'s, the $\mathcal{S}E_{kl}$'s, and the Casimir element; but this does not really help us to determine these vectors.} inside $(\mathfrak{sl}(n,\mathbb{C}))^{\otimes 2}$, and then to find adequate $\mathfrak{s}(\mathfrak{gl}(p,\mathbb{C})\otimes \mathfrak{gl}(q,\mathbb{C}))$-spherical vectors in these spaces --- it seems to us the only direct algebraic way to determine the spherical functions $\phi^{(2,0,\ldots,0,-2)_{n}}$ and $\phi^{(1,1,0,\ldots,0,-1,-1)_{n}}$. \end{example} } \noindent As far as we know, for a general zonal spherical function, there is a definitive solution to Equation \eqref{squarezonal} only in the case of symmetric spaces of rank $1$, see \cite{Gas70}. For our problem, one can fortunately give in every case a geometric description of the discriminating spherical representation and of the corresponding spherical vector. This yields an expression of $\phi^{\lambda_{\min}}(gK)$ as a degree $2$ polynomial of the matrix coefficients of $g$. 
Now it turns out that the joint moments of these coefficients under $\mu_{t}$ and $\mu_{\infty}=\mathrm{Haar}$ can be calculated by means of the stochastic differential equations defining the $G$-valued Brownian motion; see Lemma \ref{expectationcoefficients}, which we reproduce from \cite[Proposition 1.4]{Lev11}. As $(\phi^{\lambda_{\min}}(gK))^{2}$ or $|\phi^{\lambda_{\min}}(gK)|^{2}$ is also a polynomial in the coefficients $g_{ij}$, one can therefore compute its expectation under $\mu_{t}$, and this actually gives back the coefficients in the expansion \eqref{squarezonal}. Thus, the algebraic difficulties raised in our proof of the lower bounds will be solved by arguments of stochastic analysis. \subsection{Expansion of the square of the discriminating zonal spherical functions}\label{zonal} The orthogonality of characters or of zonal spherical functions ensures that for every non-trivial (spherical) irreducible representation $\lambda$, \begin{align*} \mathbb{E}_{\infty}[\chi^{\lambda}]&=\mathbb{E}_{\infty}[\chi^{\lambda}(k)\,\chi^{\mathbf{1}_{K}}(k)]=\scal{\chi^{\lambda}}{\chi^{\mathbf{1}_{K}}}_{\mathscr{L}^{2}(K)}=0;\\ \mathbb{E}_{\infty}\!\left[\sqrt{D^{\lambda}}\,\phi^{\lambda}\right]&=\sqrt{D^{\lambda}}\,\mathbb{E}_{\infty}[\phi^{\lambda}(gK)\,\phi^{\mathbf{1}_{G}}(gK)]=\sqrt{D^{\lambda}}\,\scal{\phi^{\lambda}}{\phi^{\mathbf{1}_{G}}}_{\mathscr{L}^{2}(G/K)}=0. \end{align*} The function corresponding to the trivial representation, which is just the constant function equal to $1$, has of course mean $1$ under the Haar measure, and also under $\mu_{t}$. On the other hand, Theorem \ref{explicitdensity} allows one to compute the mean of a non-trivial irreducible character or zonal spherical function under $\mu_{t}$: \begin{align*} \mathbb{E}_{t}[\chi^{\lambda}]&=\int_{K} p_{t}^{K}(k)\,\chi^{\lambda}(k)\,dk=[\chi^{\lambda}](p_{t}^{K})=D^{\lambda}\,\mathrm{e}^{-\frac{t}{2}\,B_{n}(\lambda)}=\left\{A_{n}(\lambda)\,\mathrm{e}^{-t\,B_{n}(\lambda)}\right\}^{\frac{1}{2}}\\ \mathbb{E}_{t}\!\left[\sqrt{D^{\lambda}}\,\phi^{\lambda}\right]&=\sqrt{D^{\lambda}}\int_{X=G/K} p_{t}^{X}(x)\,\phi^{\lambda}(x)\,dx=\sqrt{D^{\lambda}}\,\frac{[\phi^{\lambda}](p_{t}^{X})}{D^{\lambda}}=\left\{A_{n}(\lambda)\,\mathrm{e}^{-t\,B_{n}(\lambda)}\right\}^{\frac{1}{2}} \end{align*} with the notations of Proposition \ref{scaryseries}, and where $[\chi^{\lambda}](f)$ or $[\phi^{\lambda}](f)$ denotes the coefficient of $\chi^{\lambda}$ or $\phi^{\lambda}$ in the expansion of $f$. So, with the help of the table of Lemma \ref{decay}, we can compute readily $\mathbb{E}_{t}[\Omega]$ in each case, and also $\mathbb{E}_{\infty}[\Omega]$. In order to estimate $\mathrm{Var}_{t}[\Omega]$ and $\mathrm{Var}_{\infty}[\Omega]$, we now need to find a representation-theoretic interpretation of either $\Omega^{2}$ when $\Omega$ is real-valued, or of $|\Omega|^{2}$ when $\Omega$ is complex-valued. We begin with compact groups: \begin{lemma}\label{squaregroup} Suppose $G=\mathrm{SO}(2n,\mathbb{R})$ or $\mathrm{SO}(2n+1,\mathbb{R})$ or $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$.
Then $\Omega=\chi^{(1,0,\ldots,0)_{n}}$ is real-valued, and \begin{equation} \Omega^{2}=(\chi^{(1,0,\ldots,0)_{n}})^{2}=\chi^{(2,0,\ldots,0)_{n}}+\chi^{(1,1,0,\ldots,0)_{n}}+\chi^{(0,0,\ldots,0)_{n}}.\label{squareso} \end{equation} On the other hand, when $G=\mathrm{SU}(n,\mathbb{C})$, $\Omega$ is complex-valued, and \begin{equation} |\Omega|^{2}=\chi^{(1,0,\ldots,0)_{n-1}}\,\chi^{(1,\ldots,1)_{n-1}}=\chi^{(2,1,\ldots,1)_{n-1}}+ \chi^{(0,0,\ldots,0)_{n-1}}.\label{squareunit} \end{equation} \end{lemma} \begin{proof} In each case, $\Omega(k)=\mathrm{tr}\,k$, up to the map \eqref{doublequaternion} in the symplectic case; this explains why $\Omega$ is real-valued in the orthogonal and symplectic case, and complex-valued in the unitary case. Then, the simplest way to prove the identities \eqref{squareso} and \eqref{squareunit} is by manipulating the Schur functions of type $\mathrm{A}$, $\mathrm{B}$, $\mathrm{C}$ and $\mathrm{D}$; indeed, these polynomials evaluated on the eigenvalues are known to be the irreducible characters of the corresponding groups, see \S\ref{explicit}. We start with the special orthogonal groups. In type $\mathrm{B}_{n}$, $(z_{1}+\cdots+z_{n}+z_{1}^{-1}+\cdots+z_{n}^{-1}+1)^{2}$ is indeed equal to the sum of the three terms \begin{align*} sb_{(2,0,\ldots,0)}(Z,Z^{-1},1)&=\left(\sum_{1\leq i\leq j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)+\left(\sum_{i=1}^{n}z_{i}+z_{i}^{-1} \right)-n;\\ sb_{(1,1,0,\ldots,0)}(Z,Z^{-1},1)&=\left(\sum_{1\leq i< j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)+\left(\sum_{i=1}^{n}z_{i}+z_{i}^{-1} \right)+n;\\ sb_{(0,\ldots,0)}(Z,Z^{-1},1)&=1; \end{align*} whereas in type $\mathrm{D}_{n}$, $(z_{1}+\cdots+z_{n}+z_{1}^{-1}+\cdots+z_{n}^{-1})^{2}$ is equal to the sum of the three terms \begin{align*} sd_{(2,0,\ldots,0)}(Z,Z^{-1})&=\left(\sum_{1\leq i\leq j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)-n-1;\\ sd_{(1,1,0,\ldots,0)}(Z,Z^{-1})&=\left(\sum_{1\leq i< j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)+n;\\ sd_{(0,\ldots,0)}(Z,Z^{-1})&=1. \end{align*} For compact symplectic groups, hence in type $\mathrm{C}_{n}$, $(z_{1}+\cdots+z_{n}+z_{1}^{-1}+\cdots+z_{n}^{-1})^{2}$ is indeed equal to the sum of the three terms \begin{align*} sc_{(2,0,\ldots,0)}(Z,Z^{-1})&=\left(\sum_{1\leq i\leq j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)-n;\\ sc_{(1,1,0,\ldots,0)}(Z,Z^{-1})&=\left(\sum_{1\leq i< j\leq n}z_{i}z_{j}+z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j}+z_{i}^{-1}z_{j}^{-1}\right)+n-1;\\ sc_{(0,\ldots,0)}(Z,Z^{-1})&=1; \end{align*} and this is also $(sc_{(1,0,\ldots,0)}(Z,Z^{-1}))^{2}=(\chi^{(1,0,\ldots,0)}(k))^{2}=\Omega(k)^{2}$. Thus, Formula \eqref{squareso} is proved. 
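As a sanity check of these identities, one may also verify numerically, say in type $\mathrm{C}_{n}$, that the three expressions displayed above do sum to $(z_{1}+\cdots+z_{n}+z_{1}^{-1}+\cdots+z_{n}^{-1})^{2}$; here is a minimal Python sketch (ours):
\begin{verbatim}
import cmath, random

n = 6
z = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(n)]
zi = [1 / w for w in z]

def quad(i, j):
    return z[i]*z[j] + z[i]*zi[j] + zi[i]*z[j] + zi[i]*zi[j]

lhs = (sum(z) + sum(zi)) ** 2
sc2  = sum(quad(i, j) for i in range(n) for j in range(i, n)) - n          # sc_(2,0,...,0)
sc11 = sum(quad(i, j) for i in range(n) for j in range(i + 1, n)) + n - 1  # sc_(1,1,0,...,0)
print(abs(lhs - (sc2 + sc11 + 1)))   # of the order of machine precision
\end{verbatim}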
In type $\mathrm{A}_{n-1}$, notice that for every character $\chi^{\lambda}$, $\overline{\chi^{\lambda}(k)}=\chi^{\lambda}(k^{-1})=\chi^{\lambda^{*}}(k)$, where $\lambda^{*}$ is the sequence obtained from $\lambda$ by the simple transformation \begin{equation} (\lambda_{1}\geq \lambda_{2}\geq \cdots \geq \lambda_{n-1})_{n-1} \mapsto (\lambda_{1} \geq \lambda_{1}- \lambda_{n-1} \geq \cdots \geq \lambda_{1}- \lambda_{2})_{n-1}.\label{conjugaterepresentation} \end{equation} Indeed, if $z_{1},\ldots,z_{n}$ are the eigenvalues of $k$, then \begin{align*} \overline{\chi^{\lambda}(k)}&=s_{(\lambda_{1},\ldots,\lambda_{n-1})_{n-1}}(z_{1}^{-1},\ldots,z_{n}^{-1})=s_{(\lambda_{1},\ldots,\lambda_{n-1},0)_{n}}(z_{1}^{-1},\ldots,z_{n}^{-1})=s_{(0,-\lambda_{n-1},\ldots,-\lambda_{1})_{n}}(z_{n},\ldots,z_{1})\\ &=s_{(\lambda_{1},\lambda_{1}-\lambda_{n-1},\ldots,0)_{n}}(z_{1},\ldots,z_{n})=s_{(\lambda_{1},\lambda_{1}-\lambda_{n-1},\ldots,\lambda_{1}-\lambda_{2})_{n-1}}(z_{1},\ldots,z_{n})=\chi^{\lambda^{*}}(k) \end{align*} Here, one uses the relation $z_{1}z_{2}\cdots z_{n}=1$ for every element of the torus of $\mathrm{SU}(n,\mathbb{C})$, which enables one to transform a $n$-vector of possibly negative integers into a $(n-1)$-vector of non-negative integers. In particular, $|\Omega(k)|^{2}=|\chi^{(1,0,\ldots,0)_{n-1}}(k)|^{2}=\chi^{(1,0,\ldots,0)_{n-1}}(k)\,\chi^{(1,1,\ldots,1)_{n-1}}(k).$ Then, a simple calculation with symmetric functions yields Formula \eqref{squareunit}: \begin{align*} \chi^{(1,0,\ldots,0)_{n-1}}(k)\,\chi^{(1,1,\ldots,1)_{n-1}}(k)&=(z_{1}+\cdots+z_{n})(z_{1}^{-1}+\cdots+z_{n}^{-1})\\ &=\left(n-1+\sum_{i<j}z_{i}z_{j}^{-1}+z_{i}^{-1}z_{j} \right)+1\\ &=s_{(1,0,\ldots,0,-1)_{n}}(Z)+s_{(0,\ldots,0)_{n}}(Z)=s_{(2,1,\ldots,1)_{n-1}}(Z)+s_{(0,\ldots,0)_{n-1}}(Z)\\ &=\chi^{(2,1,\ldots,1)_{n-1}}(k)+\chi^{(0,\ldots,0)_{n-1}}(k) \end{align*} where $Z=\{z_{1},\ldots,z_{n}\}$ is the alphabet of the eigenvalues of $k$. \end{proof} \subsubsection{Values of the zonal functions and abstract expansions of their squares} As explained in the introduction of this part, the case of compact symmetric spaces of type non-group is much more involved. We start by finding an expression of $\Omega(gK)$ in terms of the matrix coefficients $g_{ij}$ of the matrix $g$. 
\begin{proposition}\label{coeffsphericalfunction} In terms of the matrix coefficients of $g$, $\phi^{\lambda_{\min}}(gK)$ is given by: $$ \begin{tabular}{|c|c|c|c|} \hline &&&\\ $G/K$& $V^{\lambda_{\min}}$ & $\phi^{\lambda_{\min}}(gK)$ & $\Bbbk$ \\ &&&\\ \hline\hline &&&\\ $\mathrm{Gr}(n,q,\mathbb{R})$ &$ \mathfrak{so}^{\perp}(n,\mathbb{C})$ & $\frac{1}{p}\sum_{i=1}^{p}\sum_{j=1}^{p}(g_{ij})^{2}+\frac{1}{q}\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}(g_{ij})^{2}-1$ & $\mathbb{R}$\\ && &\\ \hline && &\\ $\mathrm{Gr}(n,q,\mathbb{C})$ &$\mathfrak{sl}(n,\mathbb{C})$ & $\frac{1}{p}\sum_{i=1}^{p}\sum_{j=1}^{p}|g_{ij}|^{2}+\frac{1}{q}\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}|g_{ij}|^{2}-1$ & $\mathbb{R}$\\ && &\\ \hline && &\\ $\mathrm{Gr}(n,q,\mathbb{H})$ &$ \!\mathfrak{sp}^{\perp}(2n,\mathbb{C}) \!$ & $\frac{1}{p}\sum_{i=1}^{p}\sum_{j=1}^{p}|g_{ij}|^{2}+\frac{1}{q}\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}|g_{ij}|^{2}-1$ & $\mathbb{R}$\\ && &\\ \hline \end{tabular}$$ $$ \begin{tabular}{|c|c|c|c|} \hline &&&\\ $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$ &$\mathcal{A}^{2}(\mathbb{C}^{2n})$ & $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}g_{(2i)(2j)}g_{(2i-1)(2j-1)}-g_{(2i)(2j-1)}g_{(2i-1)(2j)} $ & $\mathbb{R}$\\ && &\\ \hline && &\\ $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$ &$\mathcal{S}^{2}(\mathbb{C}^{n})$ & $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}(g_{ij})^{2}$ & $\mathbb{C}$\\ && &\\ \hline && &\\ $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ &$\mathcal{A}^{2}(\mathbb{C}^{2n})$ & $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}g_{(2i)(2j)}g_{(2i-1)(2j-1)}-g_{(2i)(2j-1)}g_{(2i-1)(2j)} $ & $\mathbb{C}$\\ && &\\ \hline & &&\\ $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$ &$\mathcal{S}^{2}(\mathbb{C}^{2n})$ & $\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n} ([1](g_{ij}))^{2}+([\mathrm{j}](g_{ij}))^{2}-([\mathrm{i}](g_{ij}))^{2}-([\mathrm{k}](g_{ij}))^{2}$ & $\mathbb{R}$\\ &&&\\ \hline \end{tabular} $$ \noindent For real Grassmannians, $\mathfrak{so}^{\perp}(n,\mathbb{C})$ denotes the orthogonal complement of $\mathfrak{so}(n,\mathbb{C})$ in $\mathfrak{sl}(n,\mathbb{C})$; and for quaternionic Grassmannians, $\mathfrak{sp}^{\perp}(2n,\mathbb{C})$ denotes the orthogonal complement of $\mathfrak{sp}(2n,\mathbb{C})$ in $\mathfrak{sl}(2n,\mathbb{C})$. \end{proposition} \begin{proof} Each space $V^{\lambda_{\min}}$ described in the statement of our proposition is endowed with a natural action of $G=\mathrm{SO}(n)$ or $\mathrm{SU}(n)$ or $\mathrm{U}\mathrm{Sp}(n)$, namely, the action by conjugation in the case of Grassmannian varieties, and the diagonal action on tensors in the case of spaces of structures. 
Then, to say that \begin{align*} &V^{(2,0,\ldots,0)_{\lfloor \frac{n}{2}\rfloor}}_{\mathrm{SO}(n,\mathbb{R})}=\mathfrak{so}^{\perp}(n,\mathbb{C})\qquad;\qquad V^{(1,0,\ldots,0,-1)_{n}}_{\mathrm{U}(n,\mathbb{C})}=\mathfrak{sl}(n,\mathbb{C}) \qquad;\qquad V^{(1,1,0,\ldots,0)_{n}}_{\mathrm{U}\mathrm{Sp}(n,\mathbb{H})}=\mathfrak{sp}^{\perp}(2n,\mathbb{C})\qquad;\\ &V^{(1,1,0,\ldots,0)_{n}}_{\mathrm{SO}(2n,\mathbb{R})}=\mathcal{A}^{2}(\mathbb{C}^{n}) \qquad;\qquad V^{(2,0,\ldots,0)_{n-1}}_{\mathrm{SU}(n,\mathbb{C})}=\mathcal{S}^{2}(\mathbb{C}^{n}) \qquad;\qquad V^{(1,1,0,\ldots,0)_{2n-1}}_{\mathrm{SU}(2n,\mathbb{C})}=\mathcal{A}^{2}(\mathbb{C}^{2n}) \qquad;\\ &V^{(2,0,\ldots,0)_{n}}_{\mathrm{U}\mathrm{Sp}(n,\mathbb{H})}=\mathcal{S}^{2}(\mathbb{C}^{2n}) \end{align*} is equivalent to the following statements: the trace of $g \in \mathrm{SO}(n,\mathbb{R})$ acting on $\mathfrak{so}^{\perp}(n,\mathbb{C})$ is given by the Schur function of type $\mathrm{B}$ or $\mathrm{D}$ and label $(2,0,\ldots,0)_{\lfloor \frac{n}{2}\rfloor}$; the trace of $g \in \mathrm{U}(n,\mathbb{C})$ acting on $\mathfrak{sl}(n,\mathbb{C})$ is given by the Schur function of type $\mathrm{A}$ and label $(1,0,\ldots,0,-1)_{n}$; \emph{etc.} Let us detail for instance this last case. We have seen in the previous Lemma that $$s_{(1,0,\ldots,0,-1)_{n}}(Z)=(z_{1}+\cdots+z_{n})(z_{1}^{-1}+\cdots+z_{n}^{-1})-1.$$ On the other hand, the module $\mathfrak{gl}(n,\mathbb{C})$ on which $\mathrm{SU}(n,\mathbb{C})$ acts by conjugation is the tensor product of modules $(\mathbb{C}^{n})\otimes (\mathbb{C}^{n})^{*}$. It follows that the trace of the action by conjugation of $g \in \mathrm{SU}(n,\mathbb{C})$ on $\mathfrak{gl}(n,\mathbb{C})$ is $$\chi(g)=(\mathrm{tr} g) \,(\mathrm{tr} (g^{-1})^{t})=(z_{1}+\cdots+z_{n})(z_{1}^{-1}+\cdots+z_{n}^{-1})$$ if $z_{1},\ldots,z_{n}$ are the eigenvalues of $g$. Subtracting $1$ amounts to look at the irreducible submodule $\mathfrak{sl}(n,\mathbb{C})$ inside $\mathfrak{gl}(n,\mathbb{C})$. The other cases are entirely similar, and the corresponding values of the Schur functions have all been computed in Lemma \ref{squaregroup}. Once the discriminating representations have been given a geometric interpretation, it is easy to find the corresponding $K$-invariant (spherical) vectors. We endow each space of matrices with the invariant scalar product $\scal{M}{N}=\mathrm{tr}\,MN^{\dagger}$, and each space of tensors with the scalar product $\scal{x_{1}\otimes x_{2}}{y_{1} \otimes y_{2}}=\scal{x_{1}}{y_{1}}\,\scal{x_{2}}{y_{2}}$, where $\scal{v}{w}$ is the usual Hermitian scalar product on $\mathbb{C}^{n}$ or $\mathbb{C}^{2n}$. We also denote $(e_{i})_{i}$ the canonical basis of $\mathbb{C}^{n}$ or $\mathbb{C}^{2n}$. 
Then, the $K$-spherical vectors write as: $$ \begin{tabular}{|c|c|c|} \hline &&\\ $G$& $K$ & $e^{\lambda_{\min}}$ \\ &&\\ \hline\hline &&\\ $\mathrm{SO}(n)$ & $\mathrm{SO}(p)\times \mathrm{SO}(q)$ & $\frac{1}{\sqrt{npq}}\,\left(\begin{smallmatrix} -qI_{p} & 0 \\ 0 & pI_{q} \end{smallmatrix}\right)$\\ && \\ \hline && \\ $\mathrm{SU}(n)$ &$\mathrm{S}(U(p)\times U(q))$ & $\frac{1}{\sqrt{npq}}\,\left(\begin{smallmatrix} -qI_{p} & 0 \\ 0 & pI_{q} \end{smallmatrix}\right)$\\ && \\ \hline && \\ $\mathrm{U}\mathrm{Sp}(n)$ & $\mathrm{U}\mathrm{Sp}(p) \times \mathrm{U}\mathrm{Sp}(q)$ & $\frac{1}{\sqrt{2npq}}\,\left(\begin{smallmatrix} -qI_{2p} & 0 \\ 0 & pI_{2q} \end{smallmatrix}\right)$\\ && \\ \hline \end{tabular}$$ $$ \begin{tabular}{|c|c|c|} \hline &&\\ $\mathrm{SO}(2n)$ & $\mathrm{U}(n) $ & $\frac{1}{\sqrt{2n}} \sum_{i=1}^{n} e_{2i}\otimes e_{2i-1}-e_{2i-1}\otimes e_{2i}$ \\ && \\ \hline && \\ $\mathrm{SU}(n)$ &$\mathrm{SO}(n)$ & $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} e_{i} \otimes e_{i}$\\ && \\ \hline && \\ $\mathrm{SU}(2n)$ &$\mathrm{U}\mathrm{Sp}(n)$ & $\frac{1}{\sqrt{2n}} \sum_{i=1}^{n} e_{2i}\otimes e_{2i-1}-e_{2i-1}\otimes e_{2i}$ \\ && \\ \hline & &\\ $\mathrm{U}\mathrm{Sp}(n)$ &$\mathrm{U}(n)$ & $\frac{1}{\sqrt{2n}}\sum_{i=1}^{2n} e_{i} \otimes e_{i}$\\ &&\\ \hline \end{tabular} $$ In each case, $e^{\lambda_{\min}}$ belongs trivially to $V^{\lambda_{\min}}$ and is of norm $1$, so the only thing to check then is the $K$-invariance. In the case of Grassmannian varieties, the matrix $e^{\lambda_{\min}}$ commutes indeed with $\mathrm{G}(p) \times \mathrm{G}(q)$, since it is also $(p,q)$-block-diagonal and with scalar multiples of the identity matrix in each diagonal block. The notation $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$ used in this paper was meant to avoid any confusion between $\mathrm{Sp}(2n,\mathbb{C})$ and its compact form, the compact symplectic group. For $\mathrm{U}(n)$ inside $\mathrm{SO}(2n)$, we use the well-known fact that inside $\mathrm{SL}(2n,\mathbb{C})$, \begin{equation} \mathrm{SO}(2n,\mathbb{R}) \cap \mathrm{Sp}(2n,\mathbb{C}) \simeq \mathrm{U}(n,\mathbb{C}),\label{magicintersection} \end{equation} the isomorphism being given by the map \eqref{doublecomplex}. This implies in particular that $\mathrm{U}(n)$ leaves invariant the skew-symmetric tensor $\sum_{i=1}^{n} e_{2i}\otimes e_{2i-1}-e_{2i-1}\otimes e_{2i}$ corresponding to the skew-symmetric form defining $\mathrm{Sp}(2n,\mathbb{C})$. The intersection formula \eqref{magicintersection} also proves that $\mathrm{U}(n)$ leaves invariant the symmetric tensor $\sum_{i=1}^{2n} e_{i} \otimes e_{i}$, whence the value of the spherical vector for $\mathrm{U}(n)$ inside $\mathrm{U}\mathrm{Sp}(n)$. Finally, for $\mathrm{SO}(n)$ inside $\mathrm{SU}(n)$ and $\mathrm{U}\mathrm{Sp}(n)$ inside $\mathrm{SU}(2n)$, we use again the defining symmetric bilinear form or skew-symmetric bilinear form associated to the group $K$ to construct a $K$-invariant vector. 
The value of $\phi^{\lambda_{\min}}$ is then given by the formula $\phi^{\lambda}(g)=\scal{e^{\lambda}}{\rho^{\lambda}(g)e^{\lambda}}$, that is to say $$ \mathrm{tr}(M_{p,q}\,g\,M_{p,q}\,g^{t})\quad;\quad \mathrm{tr}(M_{p,q}\,g\,M_{p,q}\,g^{\dagger}) \quad;\quad \frac{1}{2}\,\mathrm{tr}(\widetilde{M}_{p,q}\,\widetilde{g}\,\widetilde{M}_{p,q}\,\widetilde{g}^{\dagger}) $$ for real, complex and quaternionic Grassmannians; $$\frac{1}{n}\sum_{i,j=1}^{n}(g_{ij})^{2} \quad;\quad \frac{1}{2n}\sum_{i,j=1}^{2n}(\widetilde{g}_{ij})^{2}$$ for $\mathrm{SU}(n)/\mathrm{SO}(n)$ and $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$; and $$\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}g_{(2i)(2j)}g_{(2i-1)(2j-1)}-g_{(2i)(2j-1)}g_{(2i-1)(2j)} $$ for $\mathrm{SO}(2n)/\mathrm{U}(n)$ and $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$. Here, by $\widetilde{g}$ we mean the complex matrix of size $2n\times 2n$ obtained from a quaternionic matrix of size $n \times n$ by the map \eqref{doublequaternion}. In this last case, the computations can in fact be done inside $\mathrm{M}(n,\mathbb{H})$: indeed, \begin{align*} &(\widetilde{g}_{(2i-1)(2j-1)})^{2}+(\widetilde{g}_{(2i-1)(2j)})^{2}+(\widetilde{g}_{(2i)(2j-1)})^{2}+(\widetilde{g}_{(2i)(2j)})^{2}\\ &=2\big(([1](g_{ij}))^{2}+([\mathrm{j}](g_{ij}))^{2}-([\mathrm{i}](g_{ij}))^{2}-([\mathrm{k}](g_{ij}))^{2}\big), \end{align*} whereas $\widetilde{M^{\star}}=(\widetilde{M})^{\dagger}$ and $\frac{1}{2}\,\mathrm{tr} \widetilde{M}=\Re(\mathrm{tr}\, M)$. Thus, the formulas for the discriminating spherical functions of the spaces of structures are entirely proved, and for Grassmannian varieties, it suffices to check that for any unitary quaternionic matrix $g$, $$\Re(\mathrm{tr}\,M_{p,q}\,g\,M_{p,q}\,g^{\star})=\frac{1}{p}\sum_{i=1}^{p}\sum_{j=1}^{p}|g_{ij}|^{2}+\frac{1}{q}\sum_{i=p+1}^{n}\sum_{j=p+1}^{n}|g_{ij}|^{2}-1;$$ indeed, the real and complex cases are specializations of this formula. This is easily done.
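This last identity can also be tested numerically; the following sketch (only an illustration, assuming SciPy) does so in the real specialization, by drawing a random special orthogonal matrix and comparing both sides.

\begin{verbatim}
import numpy as np
from scipy.stats import special_ortho_group

p, q = 3, 5
n = p + q
M = np.diag([-q] * p + [p] * q) / np.sqrt(n * p * q)
g = special_ortho_group.rvs(n, random_state=2)
lhs = np.trace(M @ g @ M @ g.T)
rhs = (g[:p, :p] ** 2).sum() / p + (g[p:, p:] ** 2).sum() / q - 1
assert np.isclose(lhs, rhs)
\end{verbatim}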
\end{proof} \begin{lemma}\label{abstractexpansion} There exist coefficients $a,b,c,\ldots$ (different on each line, and depending on $n$ and $q$) such that the following expansions hold: \begin{align*} &\mathrm{Gr}(n,q,\mathbb{R}): \quad\left(\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\right)^{2}= \frac{2}{n^{2}+n-2}+a\, \phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}+b\,\phi^{(1,1,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad +c \,\phi^{(2,2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}+d\, \phi^{(3,1,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}+e \, \phi^{(4,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}};\\ &\mathrm{Gr}(n,q,\mathbb{C}):\quad \left(\phi^{(2,1,\ldots,1)_{n-1}}\right)^{2}= \frac{1}{n^{2}-1}+a\,\phi^{(2,1,\ldots,1)_{n-1}}+b\,\phi^{(4,2,\ldots,2)_{n-1}}+c\,\phi^{(2,2,1,\ldots,1,0)_{n-1}};\\ &\mathrm{Gr}(n,q,\mathbb{H}): \quad\left(\phi^{(1,1,0,\ldots,0)_{n}}\right)^{2}= \frac{1}{2n^{2}-n-1}+a\,\phi^{(1^2,0,\ldots,0)_{n-1}}+b\,\phi^{(1^4,0,\ldots,0)_{n}}+c\,\phi^{(2,2,0,\ldots,0)_{n}};\\ &\mathrm{SO}(2n)/\mathrm{U}(n): \quad\left(\phi^{(1,1,0,\ldots,0)_{n}}\right)^{2}= \frac{1}{2n^{2}-n}+a\,\phi^{(1^{2},0,\ldots,0)_{n}}+b\,\phi^{(1^{4},0,\ldots,0)_{n}}+c\,\phi^{(2,2,0,\ldots,0)_{n}};\\ &\mathrm{SU}(n)/\mathrm{SO}(n): \quad\left|\phi^{(2,0,\ldots,0)_{n-1}}\right|^{2}= \frac{2}{n^{2}+n}+a\,\phi^{(4,2,\ldots,2)_{n-1}};\\ &\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n):\quad \left|\phi^{(1,1,0,\ldots,0)_{2n-1}}\right|^{2}= \frac{1}{2n^{2}-n}+ a \,\phi^{(2,2,1,\ldots,1,0)_{2n-1}};\\ &\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n): \quad\left(\phi^{(2,0,\ldots,0)_{n}}\right)^{2}= \frac{1}{2n^{2}+n}+a\,\phi^{(2,0,\ldots,0)_{n}}+b\,\phi^{(2,2,0,\ldots,0)_{n}}+c\,\phi^{(4,0,\ldots,0)_{n}}. \end{align*} In these formulas, it is understood that if the label $\lambda$ of the spherical function $\phi^{\lambda}$ does not make sense for a choice of $n$ and $q$, then this term does not appear in the expansion. \end{lemma} \begin{proof} In each case, one computes the expansion in irreducible representations of $V^{\lambda_{\min}}\otimes V^{\lambda_{\min}}$ in the case of real-valued spherical functions, and of $V^{\lambda_{\min}}\otimes V^{\lambda_{\min}^{*}}$ in the case of complex-valued spherical functions, where $\lambda \mapsto \lambda^{*}$ is the transformation of weights given by Equation \eqref{conjugaterepresentation}. This expansion can be found with Schur functions; let us detail, for instance, the case of complex Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{C})$.
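Before detailing this case, let us mention that the decomposition of $(V^{\lambda_{\min}})^{\otimes 2}$ computed below for $\mathrm{Gr}(n,q,\mathbb{C})$ can be confirmed numerically by evaluating Laurent Schur functions with the bialternant formula. The short script below (an illustration only, assuming NumPy; it is not part of the proof) does this at a random point of the torus.

\begin{verbatim}
import numpy as np

def laurent_schur(lam, z):
    """s_lambda(z) via the bialternant formula; lam may have negative parts."""
    n = len(z)
    num = np.linalg.det(np.array([[zi ** (lam[j] + n - 1 - j) for j in range(n)] for zi in z]))
    den = np.linalg.det(np.array([[zi ** (n - 1 - j) for j in range(n)] for zi in z]))
    return num / den

n = 5
rng = np.random.default_rng(1)
z = np.exp(2j * np.pi * rng.random(n))
z = z / np.prod(z) ** (1 / n)                    # normalize so that z_1 ... z_n = 1
adj = [1] + [0] * (n - 2) + [-1]                 # the label (2,1,...,1)_{n-1} in SU(n) notation
labels = [[0] * n, adj, adj,                     # trivial summand + two copies of the adjoint
          [2] + [0] * (n - 2) + [-2],
          [1, 1] + [0] * (n - 4) + [-1, -1],
          [2] + [0] * (n - 3) + [-1, -1],
          [1, 1] + [0] * (n - 3) + [-2]]
assert np.isclose(laurent_schur(adj, z) ** 2, sum(laurent_schur(lam, z) for lam in labels))
\end{verbatim}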
With an alphabet of eigenvalues $Z=\{z_{1},\ldots,z_{n}\}$ such that $z_{1}z_{2}\cdots z_{n}=1$, one has \begin{align*} s_{(0,\ldots,0)_{n-1}}(Z)&=1\\ s_{(2,1,\ldots,1)_{n-1}}(Z)&=s_{(1,0,\ldots,0,-1)_{n}}(Z)=\left(\sum_{i,j=1}^{n} z_{i}z_{j}^{-1}\right)-1\\ s_{(4,2,\ldots,2)_{n-1}}(Z)&=s_{(2,0,\ldots,0,-2)_{n}}(Z)=\left(\sum_{1 \leq i \leq j \leq n}\sum_{1 \leq k \leq l \leq n}z_{i}z_{j}z_{k}^{-1}z_{l}^{-1}\right)-\left(\sum_{i,j=1}^{n}z_{i}z_{j}^{-1}\right)\\ s_{(2,2,1,\ldots,1,0)_{n-1}}(Z)&=s_{(1,1,0,\ldots,0,-1,-1)_{n}}(Z)=\left(\sum_{1 \leq i < j \leq n}\sum_{1 \leq k < l \leq n}z_{i}z_{j}z_{k}^{-1}z_{l}^{-1}\right)-\left(\sum_{i,j=1}^{n}z_{i}z_{j}^{-1}\right)\\ s_{(3,1,\ldots,1,0)_{n-1}}(Z)&=s_{(2,0,\ldots,0,-1,-1)_{n}}(Z)=\left(\sum_{1 \leq i \leq j \leq n}\sum_{1 \leq k < l \leq n}z_{i}z_{j}z_{k}^{-1}z_{l}^{-1}\right)-\left(\sum_{i,j=1}^{n}z_{i}z_{j}^{-1}\right) +1\\ s_{(3,3,2,\ldots,2)_{n-1}}(Z)&=s_{(1,1,0,\ldots,0,-2)_{n}}(Z)=\left(\sum_{1 \leq i < j \leq n}\sum_{1 \leq k \leq l \leq n}z_{i}z_{j}z_{k}^{-1}z_{l}^{-1}\right)-\left(\sum_{i,j=1}^{n}z_{i}z_{j}^{-1}\right) +1. \end{align*} Consequently, \begin{align*}(V^{(2,1,\ldots,1)_{n-1}})^{\otimes 2}&=V^{(0,\ldots,0)_{n-1}} \oplus 2\,V^{(2,1,\ldots,1)_{n-1}} \oplus V^{(4,2,\ldots,2)_{n-1}} \oplus V^{(2,2,1,\ldots,1,0)_{n-1}}\\ &\quad \oplus V^{(3,3,2,\ldots,2)_{n-1}} \oplus V^{(3,1,\ldots,1,0)_{n-1}}, \end{align*} because the same equality with Schur functions holds. The second line corresponds to non spherical representations, so only the terms of the first line can contribute to $(\phi^{(2,1,\ldots,1)_{n-1}})^{2}$. Entirely similar calculations yield: \begin{itemize} \item $\mathrm{Gr}(n,q,\mathbb{R})$: \begin{align*}\left(V^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\right)^{\otimes 2}&=V^{(0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\oplus V^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\oplus V^{(1,1,0\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\\ &\quad \oplus V^{(2,2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\oplus V^{(3,1,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\oplus V^{(4,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}} \end{align*} \item $\mathrm{Gr}(n,q,\mathbb{H})$: \begin{align*} \left(V^{(1,1,0,\ldots,0)_{n}}\right)^{\otimes 2}&=V^{(0,\ldots,0)_{n}}\oplus V^{(1,1,0,\ldots,0)_{n}} \oplus V^{(1,1,1,1,0,\ldots,0)_{n}}\oplus V^{(2,2,0,\ldots,0)_{n}}\\ &\quad \oplus V^{(2,1,1,0,\ldots,0)_{n}}. \end{align*} Only the terms on the first line are spherical. \item $\mathrm{SO}(2n,\mathbb{R})/\mathrm{U}(n,\mathbb{C})$: \begin{align*} \left(V^{(1,1,0,\ldots,0)_{n}}\right)^{\otimes 2}&= V^{(0,\ldots,0)_{n}}\oplus V^{(1,1,0,\ldots,0)_{n}} \oplus V^{(1,1,1,1,0,\ldots,0)_{n}} \oplus V^{(2,2,0,\ldots,0)_{n}}\\ &\quad \oplus V^{(2,0,\ldots,0)_{n}}\oplus V^{(2,1,1,0,\ldots,0)_{n}}, \end{align*} again with non-spherical representations gathered on the second line. \item $\mathrm{SU}(n,\mathbb{C})/\mathrm{SO}(n,\mathbb{R})$: $$V^{(2,0,\ldots,0)_{n-1}}\otimes V^{(2,2,\ldots,2)_{n-1}}=V^{(0,\ldots,0)_{n-1}} \oplus V^{(4,2,\ldots,2)_{n-1}}\oplus V^{(2,1,\ldots,1)_{n-1}},$$ and the last term is not a spherical representation. \item $\mathrm{SU}(2n,\mathbb{C})/\mathrm{U}\mathrm{Sp}(n,\mathbb{H})$: $$V^{(1,1,0,\ldots,0)_{2n-1}}\otimes V^{(1,\ldots,1,0)_{2n-1}}=V^{(0,\ldots,0)_{2n-1}} \oplus V^{(2,2,1,\ldots,1,0)_{2n-1}}\oplus V^{(2,1,\ldots,1)_{2n-1}},$$ and again the last term is not spherical. 
\item $\mathrm{U}\mathrm{Sp}(n,\mathbb{H})/\mathrm{U}(n,\mathbb{C})$: \begin{align*} \left(V^{(2,0,\ldots,0)_{n}}\right)^{\otimes 2}&=V^{(0,\ldots,0)_{n}}\oplus V^{(2,0,\ldots,0)_{n}} \oplus V^{(2,2,\ldots,0)_{n}} \oplus V^{(4,0,\ldots,0)_{n}}\\ &\quad\oplus V^{(1,1,0,\ldots,0)_{n}}\oplus V^{(3,1,0,\ldots,0)_{n}}. \end{align*} The terms on the second line corresponds to non-spherical representations. \end{itemize} As mentioned before, the coefficient of the constant function in $|\phi^{\lambda_{\min}}|^{2}$ is then always equal to $\frac{1}{D^{\lambda_{\min}}}$. \end{proof} For the spaces $\mathrm{SU}(n)/\mathrm{SO}(n)$ and $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$, the remaining coefficient $a$ can be found by evaluating the spherical functions at $e_{G}$. Thus, \begin{align*} &\mathrm{SU}(n)/\mathrm{SO}(n): \quad\left|\phi^{(2,0,\ldots,0)_{n-1}}\right|^{2}= \frac{2}{n^{2}+n}+\frac{n^{2}+n-2}{n^{2}+n}\,\phi^{(4,2,\ldots,2)_{n-1}};\\ &\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n):\quad \left|\phi^{(1,1,0,\ldots,0)_{2n-1}}\right|^{2}= \frac{1}{2n^{2}-n}+ \frac{2n^{2}-n-1}{2n^{2}-n} \,\phi^{(2,2,1,\ldots,1,0)_{2n-1}}. \end{align*} But in the other cases, the values of the spherical functions appearing in the right-hand side of the formulas of Lemma \ref{abstractexpansion} are unfortunately not known \emph{a priori}, which makes finding the coefficients $a,b,c,\ldots$ quite difficult. However, since one only needs to compute $\mathbb{E}_{t}[(\phi^{\lambda_{\min}})^{2}]$, and since $\phi^{\lambda_{\min}}$ is explicit in terms of matrix coefficients, one can use the following Lemma (\emph{cf.} \cite[Proposition 1.4]{Lev11}). \begin{lemma}\label{expectationcoefficients} Let $k\geq 1$ be any integer, and $(g_{t})_{t \in \mathbb{R}_{+}}$ be the Brownian motion on $\mathrm{SO}(n)$ or $\mathrm{SU}(n)$. The joint moments of order $k$ of the matrix coefficients of $g_{t}$ are given by \begin{equation}\mathbb{E}[g_{t}^{\otimes k}]=\exp\left(t\,\frac{k\, \alpha_{\mathfrak{g}}}{2}\,(I_{n})^{\otimes k}+t \sum_{1\leq i< j \leq k} \eta_{i,j}(C_{\mathfrak{g}})\right)\label{levy}\end{equation} where $\alpha_{\mathfrak{g}}$ is the coefficient introduced on page \pageref{casimircoefficient}; $C_{\mathfrak{g}}$ is the Casimir operator; and $\eta_{i,j}$ is the linear map from $\mathrm{M}(n,\Bbbk)^{\otimes 2}$ to $\mathrm{M}(n,\Bbbk)^{\otimes k}$ defined on simple tensors $X \otimes Y$ by $$X \otimes Y \mapsto (I_{n})^{\otimes (i-1)}\otimes X \otimes (I_{n})^{\otimes (j-i-1)} \otimes Y \otimes (I_{n})^{\otimes k-j}.$$ In the complex case, one has also: $$\mathbb{E}[(g_{t})^{\otimes k}\otimes (\overline{g_{t}})^{\otimes l}]=\exp\left(t\,\frac{(k+l)\, \alpha_{\mathfrak{g}}}{2}\,(I_{n})^{\otimes k+l}+t \sum_{1\leq i< j \leq k+l} \widetilde\eta_{i,j}(C_{\mathfrak{g}})\right)\label{levymore},$$ with $$ \widetilde\eta_{i,j}(X\otimes Y)=\begin{cases} \eta_{i,j}(X\otimes Y) &\text{if }i,j \in \left[\!\left[ 1,k\right]\!\right];\\ -\eta_{i,j}(X \otimes Y^{t}) &\text{if }i \in \left[\!\left[ 1,k\right]\!\right] \text{ and }j \in \left[\!\left[ k+1,k+l\right]\!\right];\\ \eta_{i,j}(X^{t} \otimes Y^{t}) &\text{if }i,j \in \left[\!\left[ k+1,k+l\right]\!\right]. 
\end{cases} $$ \end{lemma} \begin{proof} In the complex case, recall the stochastic differential equation satisfied by $g_{t}$, and therefore by $\overline{g_{t}}$: $$dg_{t}=g_{t}\,dB_{t}+\frac{\alpha_{\mathfrak{g}}}{2}\,g_{t}\,dt\qquad;\qquad d\overline{g_{t}}=-\overline{g_{t}}\,dB_{t}^{t}+\frac{\alpha_{\mathfrak{g}}}{2}\,\overline{g_{t}}\,dt.$$ It\^{o}'s formula then yields a stochastic differential equation for $(g_{t})^{\otimes k}\otimes (\overline{g_{t}})^{\otimes l}$: \begin{align*} d(g^{\otimes k} \otimes \overline{g}^{\otimes l})_{t}&=(g_{t})^{\otimes k}\otimes (\overline{g_{t}})^{\otimes l}\left(\frac{(k+l)\,\alpha_{\mathfrak{g}}}{2}\,dt+\sum_{1\leq i<j\leq k+l}\widetilde\eta_{i,j}(dB_{t}\otimes dB_{t}) \right)\\ &\!\!\!\!\!\!\!\!+(g_{t})^{\otimes k}\otimes (\overline{g_{t}})^{\otimes l}\left(\sum_{i=1}^{k} (I_{n})^{\otimes i-1}\otimes dB_{t} \otimes (I_{n})^{\otimes k+l-i}-\sum_{i=k+1}^{k+l} (I_{n})^{\otimes i-1}\otimes dB_{t}^{t} \otimes (I_{n})^{\otimes k+l-i}\right). \end{align*} The quadratic variation of $B_{t}$ is given by the Casimir operator: $dB_{t}\otimes dB_{t}=C_{\mathfrak{g}}\,dt$. Taking expectations in the formula above now leads to a differential equation for $\mathbb{E}[(g_{t})^{\otimes k}\otimes (\overline{g_{t}})^{\otimes l}]$, whose solution is the matrix exponential given in the statement of this lemma. The real case is the specialization $l=0$ of the previous discussion, though with a different Casimir operator. In the quaternionic case, one has to be more careful. In particular, since the quaternionic conjugate of $pq$ is $q^{\star} p^{\star}$ instead of $p^{\star}q^{\star}$, in the previous argument the SDE for $g_{t}^{\star}$ does not take the same form. A way to overcome this problem is to use the doubling map \eqref{doublequaternion}. Thus, we write an equation for $\widetilde{g}_{t}$ instead of $g_{t}$: $$\mathbb{E}\!\left[(\widetilde{g}_{t})^{\otimes k}\right]=\exp\left(t\,\frac{k\, \alpha_{\mathfrak{g}}}{2}\,(I_{2n})^{\otimes k}+t \sum_{1\leq i< j \leq k} \eta_{i,j}(C_{\mathfrak{g}})\right),$$ where the Casimir is now considered as an element of $(\mathrm{End}(\mathbb{C}^{2n}))^{\otimes 2}$. As we shall see later, joint moments of the entries of $g$ and $g^{\star}$ are combinations of the joint moments of the entries of $\widetilde{g}$, so the previous formula will prove sufficient to solve our problem in the quaternionic case. \end{proof} It turns out that in each case important for our computations, the matrix $\sum_{1\leq i<j \leq k} \widetilde\eta_{i,j}(C_{\mathfrak{g}})$ can be explicitly diagonalized, with a basis of eigenvectors that is quite tractable (to be honest, with the help of a computer). In the following, we describe the eigenvalues and eigenvectors of these matrices, and leave it to the reader to check that they are indeed eigenvalues and eigenvectors: in each case this is an immediate computation with elementary matrices, though quite tedious if $k=4$ or $k+l=4$. For simplification, we write $e[i_{1},i_{2},\ldots,i_{r}]=e_{i_{1}}\otimes e_{i_{2}}\otimes \cdots \otimes e_{i_{r}}$. \subsubsection{Quotients of orthogonal groups} For special orthogonal groups, set $\frac{1}{n}M_{n,k}=\sum_{1\leq i< j \leq k} \eta_{i,j}(C_{\mathfrak{so}(n)})$, to be considered as an element of $\mathrm{End}((\mathbb{R}^{n})^{\otimes k})$.
If $k=2$, then the eigenvalues and eigenvectors of $M_{n,2}=\sum_{1\leq i<j\leq n}(E_{ij}-E_{ji})^{\otimes 2}$ are: $$ \begin{tabular}{|c|c|c|} \hline &&\\ eigenvalue & multiplicity & eigenvectors \\ &&\\ \hline && \\ $n-1$ & $1$ & $\sum_{i=1}^{n}e[i,i]$ \\ &&\\ \hline && \\ $1$ & $\frac{n(n-1)}{2}$ & $e[i,j] - e[j,i],\,\, i<j $ \\ &&\\ \hline && \\ $-1$ & $\frac{(n+2)(n-1)}{2}$ & $e[i,j] + e[j,i],\,\, i<j$ \\ &&\\ & & $e[i,i]-e[i+1,i+1],\,\, i \leq n-1$ \\ &&\\ \hline \end{tabular} $$ \noindent This allows to compute $\exp(-\frac{(n-1)t}{n})\,\exp(\frac{t}{n}M_{n,2})$, which is the right-hand side of Formula \eqref{levy} in the case of $\mathrm{SO}(n,\mathbb{R})$ and for $k=2$. One obtains: \begin{align*} \mathbb{E}[(g_{ii})^{2}]&=\frac{1}{n}+\left(1-\frac{1}{n}\right)\mathrm{e}^{-t}\quad;\quad\mathbb{E}[(g_{ij})^{2}]=\frac{1}{n}\left(1-\mathrm{e}^{-t}\right)\quad;\\ \mathbb{E}[g_{ii}g_{jj}]&=\frac{1}{2}\left(\mathrm{e}^{-t}+\mathrm{e}^{-\frac{n-2}{n}t}\right)\quad;\quad \mathbb{E}[g_{ij}g_{ji}]=\frac{1}{2}\left(\mathrm{e}^{-t}-\mathrm{e}^{-\frac{n-2}{n}t}\right)\quad; \end{align*} and all the other mixed moments vanish (\emph{e.g.}, $\mathbb{E}[g_{ii}g_{ij}]$ or $\mathbb{E}[g_{ij}g_{kl}]$). Now, if $k=4$, then the eigenvalues and eigenvectors of $M_{n,4}$ are: $$ \begin{tabular}{|c|c|c|} \hline &&\\ eigenvalue & multiplicity & eigenvectors (not exhaustive, some repetitions)\\ &&\\ \hline && \\ $2n-2$ & $3$ & $\sum_{k=1}^{n}\sum_{l=1}^{n}e[k,k,l,l], \,\,\star$ \\ &&\\ \hline && \\ $n$ & $3n(n-1)$ & $\sum_{k=1}^{n}e[i,j,k,k]-e[j,i,k,k],\,\, i<j, \,\,\star $ \\ &&\\ \hline && \\ $n-2$ & $3(n+2)(n-1)$ & $\sum_{k=1}^{n}e[i,j,k,k]+e[j,i,k,k],\,\, i<j, \,\,\star$ \\ &&\\ & & $\sum_{k=1}^{n}e[i,i,k,k]-e[i+1,i+1,k,k],\,\, i \leq n-1, \,\,\star$ \\ &&\\ \hline && \\ $6$ & $\frac{n(n-1)(n-2)(n-3)}{24}$ & $\sum_{\sigma \in \mathfrak{S}_{4}} \varepsilon(\sigma)\,e[i,j,k,l]^{\sigma},\,\,i<j<k<l$ \\ &&\\ \hline && \\ $2$&$\frac{3n(n+2)(n-1)(n-3)}{8}$&$D_{1}^{\eta}(i,j,k,l),\,\,D_{2}^{\eta}(i,j,k,l),\,\,D_{3}^{\eta}(i,j,k,l),\,\,i \neq j \neq k \neq l$\\ &&\\ \hline && \\ $0$&$\frac{n(n+1)(n+2)(n-3)}{6}$&$S_{1}(i,j,k,l),\,\,S_{2}(i,j,k,l),\,\,i\neq j \neq k \neq l$\\ &&\\ &&$K_{1}(i,j,k,l),\,\,K_{2}(i,j,k,l),\,\,i\neq j \neq k \neq l$\\ &&\\ \hline && \\ $-2$&$\frac{3(n-1)(n-2)(n+1)(n+4)}{8}$&$\left(\substack{e[i,j]^{\otimes 2}+e[j,k]^{\otimes 2}+e[k,i]^{\otimes 2}\\-e[j,i]^{\otimes 2}-e[k,j]^{\otimes 2}-e[i,k]^{\otimes 2}}\right),\,\,i\neq j\neq k,\,\,\star$\\ &&\\ &&$D_{1}^{\theta}(i,j,k,l),\,\,D_{2}^{\theta}(i,j,k,l),\,\,D_{3}^{\theta}(i,j,k,l),\,\,i \neq j \neq k \neq l$\\ &&\\ \hline && \\ $-6$ & $\frac{n(n-1)(n+1)(n+6)}{24}$ & $\sum_{\sigma \in \mathfrak{S}_{4}} e[i,j,k,l]^{\sigma},\,\,i <j<k<l$ \\ &&\\ & & $e[i,i,i,i]+e[j,j,j,j]-\sum_{\sigma \in \mathfrak{S}_{4}}' e[i,i,j,j]^{\sigma},\,\,i<j$ \\ &&\\ \hline \end{tabular} $$ The star $\star$ means that the eigenvectors of a basis are listed up to action of $\mathfrak{S}_{4}$; and the symbols $\sum_{\sigma\in \mathfrak{S}_{4}}'$ mean that we take the sum of all \emph{distinct} permutations of the tensors. For the eigenvectors associated to the value $2$, denote $\mathfrak{D}_{4,(1)}=\langle (1,3,2,4),(1,2)\rangle$, $\mathfrak{D}_{4,(2)}=\langle (1,2,3,4),(1,3)\rangle$ and $\mathfrak{D}_{4,(3)}=\langle(1,2,4,3), (1,4)\rangle$ the three dihedral groups of order $4$ (hence cardinality $8$) that can be found inside $\mathfrak{S}_{4}$. 
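The order-two moment formulas above are easily confirmed numerically: one builds $M_{n,2}$ explicitly, exponentiates it as in the exponential formula of the Lemma, and reads off the entries. The following sketch (an illustration only, assuming SciPy, and using the value $\alpha_{\mathfrak{so}(n)}=-\frac{n-1}{n}$ as above) checks the four types of second moments.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n, t = 6, 0.7
I = np.eye(n)
E = lambda i, j: np.outer(I[i], I[j])                  # elementary matrix E_{ij}
M = sum(np.kron(E(i, j) - E(j, i), E(i, j) - E(j, i))
        for i in range(n) for j in range(i + 1, n))    # M_{n,2}
F = np.exp(-(n - 1) * t / n) * expm(t / n * M)         # = E[g_t (x) g_t], exponential formula

def mom(a, b, c, d):                                   # E[g_{ab} g_{cd}], with 0-based indices
    return F[a * n + c, b * n + d]

assert np.isclose(mom(0, 0, 0, 0), 1 / n + (1 - 1 / n) * np.exp(-t))
assert np.isclose(mom(0, 1, 0, 1), (1 - np.exp(-t)) / n)
assert np.isclose(mom(0, 0, 1, 1), (np.exp(-t) + np.exp(-(n - 2) * t / n)) / 2)
assert np.isclose(mom(0, 1, 1, 0), (np.exp(-t) - np.exp(-(n - 2) * t / n)) / 2)
\end{verbatim}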
Each dihedral group of order $4$ has the presentation $$\mathfrak{D}_{4}=\langle r,s \,\,| \,\, r^{4}=s^{2}=(rs)^{2}=1\rangle,$$ so the parity $\eta(\sigma)$ of the number of occurrences of $s$ in a reduced expression of $\sigma \in \mathfrak{D}_{4}$ is well-defined, and provides a morphism $\eta : \mathfrak{D}_{4,(v)} \to \{ \pm 1\}$ for $v=1,2,3$. Then, it can be checked that for every $i \neq j \neq k \neq l$ and any $v$, $$D_{v}^{\eta}(i,j,k,l)=\sum_{\sigma \in \mathfrak{D}_{4,(v)}} \eta(\sigma) \,e[i,j,k,l]^{\sigma}$$ is in $V_{2}$. The eigenvectors $D_{1}^{\theta}(i,j,k,l)$, $D_{2}^{\theta}(i,j,k,l)$ and $D_{3}^{\theta}(i,j,k,l)$ associated to the eigenvalue $-2$ are defined exactly the same way, but with the morphism $\theta : \mathfrak{D}_{4,(v)} \to\{ \pm 1\}$ associated to the parity of the number of occurrences of $r$ in a reduced expression of $\sigma \in \mathfrak{D}_{4}$ (again, this is well defined): $$D_{v}^{\theta}(i,j,k,l)=\sum_{\sigma \in \mathfrak{D}_{4,(v)}} \theta(\sigma) \,e[i,j,k,l]^{\sigma}.$$ For the eigenvectors associated to the value $0$, $S_{1}(i,j,k,l)$ is defined by \begin{align*} &\,\,e[i,i,k,k]+e[k,k,i,i]+e[j,j,l,l]+e[l,l,j,j]-e[i,i,l,l]-e[l,l,i,i]-e[j,j,k,k]-e[k,k,j,j]\\ &-e[i,k,k,i]-e[k,i,i,k]-e[j,l,l,j]-e[l,j,j,l]+e[i,l,l,i]+e[l,i,i,l]+e[j,k,k,j]+e[k,j,j,k] \end{align*} and $S_{2}(i,j,k,l)$ is obtained by replacing each term $a \otimes b \otimes b \otimes a$ by $a\otimes b \otimes a \otimes b$ in the previous formula. On the other hand, if $\mathfrak{K}_{4}=\{\mathrm{id},(1,2)(3,4),(1,3)(2,4),(1,4)(2,3)\}$ denotes the Klein group, then $K_{1}(i,j,k,l)$ and $K_{2}(i,j,k,l)$ are defined as \begin{align*}K_{1}(i,j,k,l)&=\sum_{\sigma \in \mathfrak{K}_{4}} e[i,j,k,l]^{\sigma}\,\,-\!\!\sum_{\sigma \in (1,2,3)\mathfrak{K}_{4}}e[i,j,k,l]^{\sigma};\\ K_{2}(i,j,k,l)&=\sum_{\sigma \in \mathfrak{K}_{4}} e[i,j,k,l]^{\sigma}\,\,-\!\!\sum_{\sigma \in (1,3,2)\mathfrak{K}_{4}}e[i,j,k,l]^{\sigma}. \end{align*} That said, the deduction of the mixed moments of order $4$ of the coefficients of $g$ goes as follows. One notices that \begin{align*}(n+2)\sum_{i=1}^{n}e_{i}^{\otimes 4}&=\left(\sum_{k,l=1}^{n} e[k,k,l,l]+e[k,l,k,l]+e[k,l,l,k]\right)+\sum_{i<j} \left(e_{i}^{\otimes 4}+e_{j}^{\otimes 4}-\sum_{\sigma \in \mathfrak{S}_{4}}' e[i,i,j,j]^{\sigma}\right) \end{align*} with the first sum in the eigenspace $V_{2n-2}$ and the second sum in the eigenspace $V_{-6}$. On the other hand, for any $i \neq j$, \begin{align*} &(n+4)(e_{i}^{\otimes 4}-e_{j}^{\otimes 4})=6\,e[i,i,i,i] + \sum_{k \neq i,j} \sum_{\sigma \in \mathfrak{S}_{4}}'e[i,i,k,k]^{\sigma} - \sum_{k \neq i,j} \sum_{\sigma \in \mathfrak{S}_{4}}'e[j,j,k,k]^{\sigma} - 6\,e[j,j,j,j] \\ &+\sum_{k \neq i,j}\left[\left(e[i,i,i,i] + e[k,k,k,k] - \sum_{\sigma \in \mathfrak{S}_{4}}' e[i,i,k,k]^{\sigma}\right)-\left(e[j,j,j,j] + e[k,k,k,k] - \sum_{\sigma \in \mathfrak{S}_{4}}' e[j,j,k,k]^{\sigma}\right)\right], \end{align*} with the first line in $V_{n-2}$ and the second in $V_{-6}$.
Since $e_{i}^{\otimes 4}=\frac{1}{n}\sum_{j=1}^{n} e_{j}^{\otimes 4}+ \frac{1}{n}\sum_{j=1}^{n}(e_{i}^{\otimes 4}-e_{j}^{\otimes 4}),$ one concludes that \begin{align*} e_{i}^{\otimes 4}&=\frac{1}{n(n+2)}\sum_{k,l=1}^{n}e[k,k,l,l]+e[k,l,k,l]+e[k,l,l,k]\\ &\quad+\frac{1}{n(n+4)} \sum_{\sigma \in \mathfrak{S}_{4}}' \sum_{k,l=1}^{n} (e[i,i,k,k]-e[l,l,k,k])^{\sigma}\end{align*} \begin{align*} &\quad+\frac{n+1}{(n+2)(n+4)}\sum_{k \neq i}\left(e[i,i,i,i]+e[k,k,k,k]-\sum_{\sigma \in \mathfrak{S}_{4}}'e[i,i,k,k]^{\sigma}\right)\\ &\quad-\frac{1}{(n+2)(n+4)}\sum_{(k<l) \neq i}\left(e[k,k,k,k]+e[l,l,l,l]-\sum_{\sigma \in \mathfrak{S}_{4}}'e[k,k,l,l]^{\sigma}\right), \end{align*} each line being in a different eigenspace: $V_{2n-2}$, $V_{n-2}$, $V_{-6}$ and $V_{-6}$. The technique is now the following: to compute $\mathbb{E}[g_{ij_{1}}g_{ij_{2}}g_{ij_{3}}g_{ij_{4}}]$, one counts the number of occurrences of $e[j_{1},j_{2},j_{3},j_{4}]$ in each term of the previous expansion. This leads to: \begin{align*} \mathbb{E}[(g_{ii})^{4}]&=\frac{3}{n(n+2)}+\frac{6(n-1)}{n(n+4)}\,\mathrm{e}^{-t}+\frac{(n+1)(n-1)}{(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ij})^{4}]&=\frac{3}{n(n+2)}-\frac{6}{n(n+4)}\,\mathrm{e}^{-t}+\frac{3}{(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ii})^{2}(g_{ij})^{2}]&=\frac{1}{n(n+2)}+\frac{(n-2)}{n(n+4)}\,\mathrm{e}^{-t}-\frac{(n+1)}{(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ij})^{2}(g_{ik})^{2}]&=\frac{1}{n(n+2)}-\frac{2}{n(n+4)}\,\mathrm{e}^{-t}+\frac{1}{(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t}; \end{align*} and one sees also that the other expectations $\mathbb{E}[g_{ij}g_{ik}g_{il}g_{im}]$ vanish (\emph{e.g.}, $\mathbb{E}[g_{ij}g_{ik}(g_{il})^{2}]$ with $i \neq j \neq k \neq l$). Similar manipulations yield the decomposition in eigenvectors of $e_{i}^{\otimes 2}\otimes e_{j}^{\otimes 2}$: \begin{align*} &\frac{n+1}{n(n-1)(n+2)}\sum_{k,l=1}^{n}e[k,k,l,l]-\frac{1}{(n-1)n(n+2)}\sum_{k,l=1}^{n}e[k,l,k,l]+e[k,l,l,k]\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{n(n-2)}\sum_{k \neq i,j}\left(\sum_{l=1}^{n}e[i,i,l,l]-e[k,k,l,l]+\sum_{l=1}^{n}e[l,l,j,j]-e[l,l,k,k]\right)\\ &-\frac{1}{n(n-2)(n+4)}\sum_{k \neq i,j}\sum_{\sigma \in \mathfrak{S}_{4}}' \left(\sum_{l=1}^{n}(e[i,i,l,l]-e[k,k,l,l])^{\sigma}+\sum_{l=1}^{n}(e[l,l,j,j]-e[l,l,k,k])^{\sigma} \right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{6(n-1)(n-2)}\sum_{(k<l)\neq i,j}S_{1}(i,k,j,l)+S_{1}(i,l,j,k)+S_{2}(i,k,j,l)+S_{2}(i,l,j,k)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{2n}\sum_{k\neq i,j}e[i,i,j,j]+e[j,j,k,k]+e[k,k,i,i]-e[j,j,i,i]-e[k,k,j,j]-e[i,i,k,k]\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{6(n+4)}\left(\sum_{k \neq i}\left(e_{i}^{\otimes 4}+e_{k}^{\otimes 4}-\sum_{\sigma \in \mathfrak{S}_{4}}'e[i,i,k,k]^{\sigma}\right)+\sum_{k\neq j}\left(e_{j}^{\otimes 4}+e_{k}^{\otimes 4}-\sum_{\sigma \in \mathfrak{S}_{4}}'e[j,j,k,k]^{\sigma}\right)\right)\\ &-\frac{1}{6}\left(e_{i}^{\otimes 4}+e_{j}^{\otimes 4}-\sum_{\sigma \in \mathfrak{S}_{4}}'e[i,i,j,j]^{\sigma}\right)-\frac{1}{3(n+2)(n+4)}\sum_{(k<l)} \left(e_{k}^{\otimes 4}+e_{l}^{\otimes 4}-\sum_{\sigma \in 
\mathfrak{S}_{4}}'e[k,k,l,l]^{\sigma}\right), \end{align*} where the eigenspaces associated to each part are $V_{2n-2}$, $V_{n-2}$, $V_{0}$, $V_{-2}$ and $V_{-6}$. As a consequence, \begin{align*} \mathbb{E}[(g_{ii})^{2}(g_{jj})^{2}]&=\frac{n+1}{(n-1)n(n+2)}+\frac{2(n+3)}{n(n+4)}\,\mathrm{e}^{-t}+\frac{n-3}{3(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}+\frac{n-2}{2n}\,\mathrm{e}^{-2t}\\ &\quad+\frac{n^{2}+4n+6}{6(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ij})^{2}(g_{ji})^{2}]&=\frac{n+1}{n(n-1)(n+2)}-\frac{2}{n(n+4)}\,\mathrm{e}^{-t}+\frac{n-3}{3(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}-\frac{n-2}{2n}\,\mathrm{e}^{-2t}\\ &\quad+\frac{n^{2}+4n+6}{6(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ii})^{2}(g_{jk})^{2}]&=\frac{n+1}{n(n-1)(n+2)}+\frac{n^{2}-8}{n(n-2)(n+4)}\,\mathrm{e}^{-t}-\frac{n-3}{3(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad-\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{n}{6(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ij})^{2}(g_{jk})^{2}]&=\frac{n+1}{n(n-1)(n+2)}-\frac{2}{(n-2)(n+4)}\,\mathrm{e}^{-t}-\frac{n-3}{3(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{n}{6(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(g_{ij})^{2}(g_{kl})^{2}]&=\frac{n+1}{n(n-1)(n+2)}-\frac{2(n+2)}{n(n-2)(n+4)}\,\mathrm{e}^{-t}+\frac{2}{3(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{1}{3(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[g_{ii}g_{ij}g_{jj}g_{ji}]&=-\frac{1}{n(n-1)(n+2)}-\frac{2}{n(n+4)}\,\mathrm{e}^{-t}-\frac{n-3}{6(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{n^{2}+4n+6}{6(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[g_{ik}g_{il}g_{jk}g_{jl}]&=-\frac{1}{n(n-1)(n+2)}+\frac{4}{n(n-2)(n+4)}\,\mathrm{e}^{-t}-\frac{1}{3(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}}\\ &\quad+\frac{1}{3(n+2)(n+4)}\,\mathrm{e}^{-\frac{2n+4}{n}t}. \end{align*} Finally, the elementary tensor $e_{i} \otimes e_{j} \otimes e_{k} \otimes e_{l}$ with $i \neq j \neq k \neq l$ can be expanded as a combination of eigenvectors in $V_{6}$, $V_{2}$, $V_{0}$, $V_{-2}$ and $V_{-6}$. This expansion is related to a remarkable identity in the group algebra $\mathbb{C}\mathfrak{S}_{4}$, which can be considered as a relation of orthogonality of characters, but that only involves one-dimensional representations. Denote $$D_{1}^{\eta}=\sum_{\sigma \in \mathfrak{D}_{4,(1)}}\eta(\sigma)\,\sigma,$$ and similarly for $D_{2}^{\eta}$, $D_{3}^{\eta}$, $D_{1}^{\theta}$, $D_{2}^{\theta}$ and $D_{3}^{\theta}$. We also introduce $I = \sum_{\sigma \in \mathfrak{S}_{4}} \sigma$, $E = \sum_{\sigma \in \mathfrak{S}_{4}} \varepsilon(\sigma)\,\sigma$, and $$K_{1}=\sum_{\sigma \in \mathfrak{K}_{4}}\sigma\,\,\,\,-\!\!\!\!\sum_{\sigma \in (1,2,3)\mathfrak{K}_{4}}\!\!\!\!\sigma \qquad;\qquad K_{2}=\sum_{\sigma \in \mathfrak{K}_{4}}\sigma\,\,\,\,-\!\!\!\!\sum_{\sigma \in (1,3,2)\mathfrak{K}_{4}}\!\!\!\!\sigma.$$ As explained before, all these sums correspond to eigenvectors in $V_{6}$, $V_{2}$, $V_{0}$, $V_{-2}$ and $V_{-6}$. Then, $$\mathrm{id}_{\left[\!\left[ 1,4\right]\!\right]}=\frac{1}{24}\,I+\frac{1}{8}\,(D_{1}^{\eta}+D_{2}^{\eta}+D_{3}^{\eta})+\frac{1}{12}(K_{1}+K_{2})+\frac{1}{8}\,(D_{1}^{\theta}+D_{2}^{\theta}+D_{3}^{\theta})+\frac{1}{24}\,E.\label{crazys4}$$ \comment{Indeed, $\frac{1}{24}\,I+\frac{1}{24}\,E$ is one twelfth of the sum of all even permutations in the alternate group $\mathfrak{A}_{4}$. 
By adding $\frac{1}{12}(K_{1}+K_{2})$ to this quantity, one removes all the permutations that are in $\mathfrak{A}_{4}$ and not in $\mathfrak{K}_{4}$, so $$\frac{1}{24}\,I+\frac{1}{12}(K_{1}+K_{2})+\frac{1}{24}\,E=\frac{1}{4}\sum_{\sigma \in \mathfrak{K}_{4}}\sigma.$$ On the other hand, $$\frac{1}{8}\left(D_{1}^{\eta}+D_{1}^{\theta}\right)=\frac{1}{4}\,\mathrm{id}_{\left[\!\left[ 1,4\right]\!\right]}+\frac{1}{4}\,(1,2)(3,4)-\frac{1}{4}\,(1,3)(2,4)-\frac{1}{4}\,(1,4)(2,3)$$ and similarly for the two other dihedral groups. Thus, the whole sum on the right-hand side of \eqref{crazys4} is indeed $\mathrm{id}_{\left[\!\left[ 1,4\right]\!\right]}$. } As a consequence, $e[i,j,k,l]$ is equal to \begin{align*}&\frac{1}{24}\sum_{\sigma \in \mathfrak{S}_{4}}\varepsilon(\sigma)\, e[i,j,k,l]^{\sigma}+\frac{1}{8}(D_{1}^{\eta}(i,j,k,l)+D_{2}^{\eta}(i,j,k,l)+D_{3}^{\eta}(i,j,k,l))+\frac{1}{12}(K_{1}(i,j,k,l)+K_{2}(i,j,k,l))\\ &+\frac{1}{8}(D_{1}^{\theta}(i,j,k,l)+D_{2}^{\theta}(i,j,k,l)+D_{3}^{\theta}(i,j,k,l))+\frac{1}{24}\sum_{\sigma \in \mathfrak{S}_{4}} e[i,j,k,l]^{\sigma} \end{align*} with each term respectively in $V_{6}$, $V_{2}$, $V_{0}$, $V_{-2}$ and $V_{-6}$. This leads to \begin{align*} \mathbb{E}[g_{ii}g_{jj}g_{kk}g_{ll}]&=\frac{1}{24}\,\mathrm{e}^{-\frac{2n-8}{n}t}+\frac{3}{8}\,\mathrm{e}^{-\frac{2n-4}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n-2}{n}t}+\frac{3}{8}\,\mathrm{e}^{-2t}+\frac{1}{24}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[g_{ij}g_{jk}g_{kl}g_{li}]&=-\frac{1}{24}\,\mathrm{e}^{-\frac{2n-8}{n}t}+\frac{1}{8}\,\mathrm{e}^{-\frac{2n-4}{n}t}-\frac{1}{8}\,\mathrm{e}^{-2t}+\frac{1}{24}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[g_{ii}g_{jj}g_{kl}g_{lk}]&=-\frac{1}{24}\,\mathrm{e}^{-\frac{2n-8}{n}t}-\frac{1}{8}\,\mathrm{e}^{-\frac{2n-4}{n}t} +\frac{1}{8}\,\mathrm{e}^{-2t}+\frac{1}{24}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[g_{ij}g_{ji}g_{kl}g_{lk}]&=\frac{1}{24}\,\mathrm{e}^{-\frac{2n-8}{n}t} -\frac{1}{8}\,\mathrm{e}^{-\frac{2n-4}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n-2}{n}t}-\frac{1}{8}\,\mathrm{e}^{-2t}+\frac{1}{24}\,\mathrm{e}^{-\frac{2n+4}{n}t}; \end{align*} and we are done with the computations in the case of special orthogonal groups. \comment{\begin{remark} It would be very nice to find the analogue of Equation \eqref{crazys4} for symmetric groups of larger order, in connection with the diagonalization of $M_{n,2k}$. An interesting feature of this identity is that it involves only generating functions of one-dimensional characters of subgroups $H \subset \mathfrak{S}_{4}$: $$\varSigma(H,\chi)=\sum_{\sigma \in H \subset \mathfrak{S}_{4}}\chi(\sigma)\,\sigma.$$ Indeed, this is obvious for most of the terms, and notice that $K_{1}+K_{2}=\varSigma(\mathfrak{A}_{4},\chi)+\varSigma(\mathfrak{A}_{4},\chi^{2})$, where $\chi$ is the quotient map $\mathfrak{A}_{4} \to \mathfrak{A}_{4} / \mathfrak{K}_{4} \simeq \mathbb{Z}/3\mathbb{Z} = \{1,\mathrm{e}^{2\mathrm{i} \pi/3},\mathrm{e}^{-2\mathrm{i} \pi/3}\}$. This combinatorial property of the reduction of $M_{n,2k}$ seems profound and quite mysterious for the moment. 
\end{remark}} \begin{proposition} For the real Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{R})$ and the spaces $\mathrm{SO}(2n)/\mathrm{U}(n)$, the coefficients of Lemma \ref{abstractexpansion} are: \begin{align*} &\mathrm{Gr}(n,q,\mathbb{R}): \quad \frac{2}{n^{2}+n-2}+\frac{2n^{2}}{3}\left(\frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\phi^{(2,2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\\ &\qquad\qquad\qquad +\frac{\frac{4n^{2}}{pq}-16}{(n-2)(n+4)}\,\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}+\frac{n^{2}}{3}\left(\frac{1}{(n + 2)(n + 4)}+\frac{2}{pq(n + 4)}\right) \phi^{(4,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}};\\ &\mathrm{SO}(2n)/\mathrm{U}(n):\quad \frac{1}{2n^{2}-n}+\frac{n-1}{3n}\,\phi^{(1^{4},0,\ldots,0)_{n}}+\frac{4(n^{2}-1)}{(3n)(2n-1)}\,\phi^{(2,2,0,\ldots,0)_{n}}. \end{align*} \end{proposition} \begin{proof} One expands the square of the sum given by Proposition \ref{coeffsphericalfunction}, and one gathers the joint moments of the coefficients according to the possible identities between the indices. For real Grassmannians, $\left(\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\right)^{2}$ has for expansion:\label{expansquaregrass} \begin{align*}&\left(\frac{1}{p}\sum_{i,j\leq p} (g_{ij})^{2}+\frac{1}{q}\sum_{i,j>p} (g_{ij})^{2}-1\right)^{2}=\left(\frac{1}{p}\sum_{i,j\leq p} (g_{ij})^{2}+\frac{1}{q}\sum_{i,j>p} (g_{ij})^{2}\right)^{2}-2\,\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}-1\\ &=\left(\frac{n}{pq}\right)T[(g_{11})^{4}]+\left(4-\frac{n}{pq}\right)T[(g_{11}g_{22})^{2}]+\left(2-\frac{n}{pq}\right)\left(4\,T[(g_{11}g_{12})^{2}]+T[(g_{12})^{4}]+T[(g_{12}g_{21})^{2}]\right)\\ &\,\,\,+\left(4n-16+\frac{4n}{pq}\right)T[(g_{11}g_{23})^{2}]+\left(2n-12+\frac{4n}{pq}\right)\left(T[(g_{12}g_{13})^{2}]+T[(g_{12}g_{23})^{2}]\right)\\ &\,\,\,+\left(n^{2}-8n+24-\frac{6n}{pq}\right)T[(g_{12}g_{34})^{2}]-2\,\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}-1. \end{align*} where by $T[(g_{11})^{4}]$ we mean a linear combination of products $(g_{ii})^{4}$, whose expectation is therefore $\mathbb{E}[(g_{11})^{4}]$; by $T[(g_{11}g_{22})^{2}]$ we mean a linear combination of products $(g_{ii}\,g_{jj})^{2}$ whose expectation will be $\mathbb{E}[(g_{11}g_{22})^{2}]$, \emph{etc.} Thus, the expectation of $\left(\phi^{(2,0,\ldots,0)_{\lfloor \frac{n}{2} \rfloor}}\right)^{2}$ is \begin{align*} &\frac{2}{n^{2}+n-2}+\frac{\frac{4n^{2}}{pq}-16}{(n-2)(n+4)}\,\mathrm{e}^{-t}+\frac{2n^{2}}{3}\left(\frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &+\frac{n^{2}}{3}\left(\frac{1}{(n + 2)(n + 4)}+\frac{2}{pq(n + 4)}\right)\,\mathrm{e}^{-\frac{2n+4}{n}t}, \end{align*} and by identifying the Casimir coefficients of the spherical functions, one deduces from this the expansion of the square of the discriminating zonal function in zonal functions. For the spaces $\mathrm{SO}(2n)/\mathrm{U}(n)$, one computes again the square of the homogeneous polynomial of degree $2$ given by Proposition \ref{coeffsphericalfunction}. 
Thus, $\frac{1}{n^{2}}(\sum_{i,j=1}^{n}g_{2i,2j}\,g_{2i-1,2j-1}-g_{2i,2j-1}\,g_{2i-1,2j})^{2}$ equals \begin{align*} &\!\!\!\frac{1}{n}\left(T[(g_{11}g_{22})^{2}]+T[(g_{12}g_{21})^{2}]-2\,T[g_{11}g_{12}g_{22}g_{21}]\right)\\ &+ \frac{n-1}{n} \left(2\,T[(g_{12}g_{34})^{2}]-2\,T[g_{13}g_{14}g_{23}g_{24}]\right)\\ &+ \frac{n-1}{n} \left(2\,T[g_{12}g_{21}g_{34}g_{43}]-2\,T[g_{12}g_{23}g_{34}g_{41}]\right)\\ &+ \frac{n-1}{n} \left(T[g_{11}g_{22}g_{33}g_{44}]+T[g_{12}g_{21}g_{34}g_{43}]-2\,T[g_{11}g_{22}g_{34}g_{43}]\right)\\ &+ \text{remainder}, \end{align*} with the same notations as before, and where the remainder is a combination of products of coefficients whose expectations vanish under Brownian (and Haar) measures. More precisely, terms with a certain symmetry cancel with one another when taking the expectation: for instance, \begin{equation} (g_{2i,2j}\,g_{2i-1,2j-1}-g_{2i,2j-1}\,g_{2i-1,2j})\times (g_{2k,2l}\,g_{2k-1,2l-1}-g_{2k,2l-1}\,g_{2k-1,2l}) \label{fourstrangers}\end{equation} with $i \neq j \neq k \neq l$ is equal to $a + b - c - d$, where $a,b,c,d$ are products of type $g_{ij}g_{kl}g_{mn}g_{op}$, and therefore all have the same expectation. Consequently, no product of type \eqref{fourstrangers} contributes to the expectation of $(\phi^{(1,1,0,\ldots,0)_{\lfloor \frac{n}{2}\rfloor}})^{2}$. The following sets of indices have the same property of ``self-cancellation'': \begin{align*} &(i,i,i,j)\,\,;\,\,(i,i,j,i)\,\,;\,\,(i,j,i,i)\,\,;\,\,(j,i,i,i)\,\,;\,\,(i,i,j,k)\,\,;\,\,(j, k ,i ,i )\,\,;\\ &(i ,j, i, k) \,\,;\,\, (j ,i ,k ,i )\,\,;\,\, (i ,j ,k ,i) \,\,;\,\, (j ,i ,i ,k) \,\,;\,\, (i ,j, k, l)\,\,;\,\,(i,j,k,l)\,\,;\end{align*} so it suffices to consider products with sets of indices $(i,i,i,i)$, $(i,j,i,j)$, $(i,j,j,i)$ or $(i,i,j,j)$ --- these are the four lines of the previous expansion. Using the formulas given before for the joint moments of the entries (beware that one has to use them with the parameter $2n$), we obtain: $$\mathbb{E}[(\phi^{(1,1,0,\ldots,0)_{n}})^{2}]=\frac{1}{n(2n-1)}+\frac{n-1}{3n}\,\mathrm{e}^{-\frac{2n-4}{n}t}+\frac{4(n^{2}-1)}{(3n)(2n-1)}\,\mathrm{e}^{-\frac{2n-1}{n}t}$$ and it then suffices to identify the coefficients of the negative exponentials. \end{proof} \subsubsection{Quotients of unitary groups} For special unitary groups, set $\frac{1}{n^{2}}M_{n,k,l}=\sum_{1 \leq i<j \leq k+l} \widetilde\eta_{i,j}(C_{\mathfrak{su}(n)})$, to be considered as an element of $\mathrm{End}((\mathbb{C}^{n})^{\otimes k+l})$.
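As in the orthogonal case, the order-two moment formulas stated just below can be confirmed numerically from the explicit matrix $M_{n,1,1}$. In the following sketch (an illustration only, assuming SciPy), the value $\alpha_{\mathfrak{su}(n)}=-\frac{n^{2}-1}{n^{2}}$ is an assumption, consistent with the formula for $\mathbb{E}_{t}[\mathrm{tr}\,g_{t}]$ used later.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n, t = 5, 0.8
I = np.eye(n)
E = lambda i, j: np.outer(I[i], I[j])
# M_{n,1,1} = (iI_n) (x) (iI_n) + n * sum_{i,j} E_{ij} (x) E_{ij}  (explicit form given below)
M = -np.kron(I, I) + n * sum(np.kron(E(i, j), E(i, j)) for i in range(n) for j in range(n))
alpha = -(n ** 2 - 1) / n ** 2                           # assumed value of alpha_{su(n)}
F = expm(alpha * t * np.eye(n ** 2) + t / n ** 2 * M)    # = E[g_t (x) conj(g_t)]

def mom(a, b, c, d):                       # E[g_{ab} * conj(g_{cd})], with 0-based indices
    return F[a * n + c, b * n + d]

assert np.isclose(mom(0, 0, 0, 0), 1 / n + (1 - 1 / n) * np.exp(-t))   # E[|g_11|^2]
assert np.isclose(mom(0, 1, 0, 1), (1 - np.exp(-t)) / n)               # E[|g_12|^2]
assert np.isclose(mom(0, 0, 1, 1), np.exp(-t))                         # E[g_11 conj(g_22)]
\end{verbatim}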
If $k=l=1$, then $$\mathbb{E}[|g_{ii}|^{2}]=\frac{1}{n}+\left(1-\frac{1}{n}\right)\mathrm{e}^{-t}\quad;\quad \mathbb{E}[|g_{ij}|^{2}]=\frac{1}{n}\left(1-\mathrm{e}^{-t}\right)\quad;\quad\mathbb{E}[g_{ii}\overline{g_{jj}}]=\mathrm{e}^{-t}$$ since the eigenvalues and eigenvectors of $M_{n,1,1}=\mathrm{i} I_{n}\otimes \mathrm{i} I_{n}+n\,\sum_{i,j=1}^{n} E_{ij}\otimes E_{ij}$ are: $$ \begin{tabular}{|c|c|c|} \hline &&\\ eigenvalue & multiplicity & eigenvectors \\ &&\\ \hline && \\ $n^{2}-1$ & $1$ & $\sum_{i=1}^{n}e[i,i]$ \\ &&\\ \hline && \\ $-1$ & $n^{2}-1$ & $e[i,j],\,\,1\leq i\neq j\leq n$ \\ &&\\ & & $e[i,i]-e[i+1,i+1],\,\,i\leq n-1$ \\ &&\\ \hline \end{tabular} $$ \noindent If $k=l=2$, then the eigenvalues and eigenvectors of $M_{n,2,2}$ are: $$ \begin{tabular}{|c|c|c|} \hline &&\\ eigenvalue & multiplicity & eigenvectors (not exhaustive, some repetitions)\\ &&\\ \hline && \\ $2n^{2}-2$ & $2$ & $\sum_{k=1}^{n}\sum_{l=1}^{n}e[k,l,k,l],\,\,\sum_{k=1}^{n}\sum_{l=1}^{n}e[k,l,l,k]$ \\ && \\ \hline && \\ $n^{2}-2$ & $4(n+1)(n-1)$ &$\sum_{k=1}^{n}e[i,k,i,k]-e[i+1,k,i+1,k], \,\, i \leq n-1$\\ &&\\ && $\sum_{k=1}^{n}e[k,i,k,i]-e[k,i+1,k,i+1],\,\, i \leq n-1$ \\ &&\\ &&$\sum_{k=1}^{n}e[i,k,k,i]-e[i+1,k,k,i+1], \,\, i \leq n-1$\\ &&\\ && $\sum_{k=1}^{n}e[k,i,i,k]-e[k,i+1,i+1,k],\,\, i \leq n-1$ \\ &&\\ \hline && \\ $2n-2$ & $\frac{n^{2}(n+1)(n-3)}{4}$ & $\left(\substack{(e[i,j]-e[j,i])^{\otimes 2}-(e[j,k]-e[k,j])^{\otimes 2}\\+(e[k,l]-e[l,k])^{\otimes 2}-(e[l,i]-e[i,l])^{\otimes 2}}\right),\,\,i \neq j \neq k \neq l$\\ &&\\ \hline && \\ $-2$ & $\frac{(n+2)(n+1)(n-1)(n-2)}{2}$ & $\left(\substack{e[i,j,i,j]+e[j,k,j,k]+e[k,i,k,i] \\ -e[j,i,j,i]-e[k,j,k,j]- e[i,k,i,k]}\right),\,\,i<j<k$\\ &&\\ && $\left(\substack{e[i,j,j,i]+e[j,k,k,j]+e[k,i,i,k] \\ -e[j,i,i,j]-e[k,j,j,k]- e[i,k,k,i]}\right),\,\,i<j<k$\\ &&\\ \hline && \\ $-2n-2$ & $\frac{n^{2}(n-1)(n+3)}{4}$ & $ e[i,i,i,i]+e[j,j,j,j]-(e[i,j]+e[j,i])^{\otimes 2},\,\,i<j$\\ &&\\ \hline \end{tabular} $$ \noindent Again, we can use the previous table to decompose some elementary $4$-tensors in eigenvectors of $M_{n,2,2}$; we refer to Appendix \ref{expun}. 
Thus, \begin{align*}\mathbb{E}[|g_{ii}|^{4}]&=\frac{2}{n(n+1)}+\frac{4(n-1)}{n(n+2)}\,\mathrm{e}^{-t}+\frac{n(n-1)}{(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ij}|^{4}]&=\frac{2}{n(n+1)}-\frac{4}{n(n+2)}\,\mathrm{e}^{-t}+\frac{2}{(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ii}|^{2}|g_{ij}|^{2}]&= \frac{1}{n(n+1)}+\frac{n-2}{n(n+2)}\,\mathrm{e}^{-t}-\frac{n}{(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ij}|^{2}|g_{ik}|^{2}]&=\frac{1}{n(n+1)}-\frac{2}{n(n+2)}\,\mathrm{e}^{-t} +\frac{1}{(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ii}|^{2}|g_{jj}|^{2}]&=\frac{1}{(n-1)(n+1)}+ \frac{2(n+1)}{n(n+2)}\,\mathrm{e}^{-t}+\frac{n-3}{4(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{n-2}{2n}\,\mathrm{e}^{-2t}+\frac{n^{2}+n+2}{4(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ij}|^{2}|g_{ji}|^{2}]&=\frac{1}{(n-1)(n+1)}-\frac{2}{n(n+2)}\,\mathrm{e}^{-t}+\frac{n-3}{4(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad-\frac{n-2}{2n}\,\mathrm{e}^{-2t}+\frac{n^{2}+n+2}{4(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ii}|^{2}|g_{jk}|^{2}]&=\frac{1}{(n-1)(n+1)}+\frac{n^{2}-2n-2}{n(n-2)(n+2)}\,\mathrm{e}^{-t}-\frac{n-3}{4(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad-\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{n-1}{4(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\\ \mathbb{E}[|g_{ij}|^{2}|g_{jk}|^{2}]&=\frac{1}{(n-1)(n+1)}-\frac{2(n-1)}{n(n-2)(n+2)}\,\mathrm{e}^{-t}-\frac{n-3}{4(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{n-1}{4(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t};\end{align*} \begin{align*} \mathbb{E}[|g_{ij}|^{2}|g_{kl}|^{2}]&=\frac{1}{(n-1)(n+1)}-\frac{2}{(n-2)(n+2)}\,\mathrm{e}^{-t}+\frac{1}{2(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{1}{2(n+1)(n+2)}\,\mathrm{e}^{-\frac{2n+2}{n}t}. \end{align*} \begin{proposition} For the symmetric spaces with isometry group $\mathrm{SU}(n)$ or $\mathrm{SU}(2n)$, the coefficients of Lemma \ref{abstractexpansion} are: \begin{align*} &\mathrm{Gr}(n,q,\mathbb{C}): \quad \frac{1}{n^{2}-1}+ \frac{\frac{2n^{2}}{pq}-8}{n^{2}-4}\,\phi^{(2,1,\ldots,1)_{n-1}}+\frac{n^{2}}{2}\left( \frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\phi^{(2,2,1,\ldots,1,0)_{n-1}};\\ &\qquad\qquad\qquad +\frac{n^{2}}{2}\left(\frac{1}{(n+1)(n+2)}+\frac{1}{pq(n+2)}\right)\phi^{(4,2,\ldots,2)_{n-1}}; \\ &\mathrm{SU}(n)/\mathrm{SO}(n): \quad \frac{2}{n^{2}+n}+\frac{n^{2}+n-2}{n^{2}+n}\,\phi^{(4,2,\ldots,2)_{n-1}};\\ &\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n):\quad \frac{1}{2n^{2}-n}+ \frac{2n^{2}-n-1}{2n^{2}-n} \,\phi^{(2,2,1,\ldots,1,0)_{2n-1}}. \end{align*} \end{proposition} \begin{proof} For $\mathrm{SU}(n)/\mathrm{SO}(n)$ and $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$, the only missing coefficient has already been computed. For complex Grassmannians, $(\phi^{(2,1,\ldots,1)_{n-1}})^{2}$ has exactly the same expansion as in the real case, but with square modules. From the computation of the joint moments $\mathbb{E}[|g_{ij}g_{kl}|^{2}]$ performed previously, one deduces that the expectation of the square of the discriminating zonal function is \begin{align*} &\frac{1}{n^{2}-1}+\frac{\frac{2n^{2}}{pq}-8}{n^{2}-4}\,\mathrm{e}^{-t}+\frac{n^{2}}{2}\left( \frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\mathrm{e}^{-\frac{2n-2}{n}t}\\ &+\frac{n^{2}}{2}\left(\frac{1}{(n+1)(n+2)}+\frac{1}{pq(n+2)}\right)\mathrm{e}^{-\frac{2n+2}{n}t} \end{align*} whence the expansion in zonal spherical functions by identifying the coefficients. 
\end{proof} \subsubsection{Quotients of symplectic groups} Finally, set $\frac{1}{n}M_{n,k}=\sum_{1 \leq i<j \leq k+l} \widetilde\eta_{i,j}(C_{\mathfrak{usp}(n)})$, which is considered as an element of $\mathrm{End}((\mathbb{C}^{2n})^{\otimes k})$. Recall that the diagonalization of these matrices will yield the joint moments of the entries of $\widetilde{g}$, the matrix obtained from $g$ by the map \eqref{doublequaternion}. Again, as a warm-up, let us compute the joint moments of order $2$. If $k=2$, then \begin{align*} \mathbb{E}\!\left[|g_{ii}|^{2}\right]&=\frac{1}{n}+\frac{n-1}{n}\,\mathrm{e}^{-t}\qquad;\qquad\mathbb{E}\!\left[|g_{ij}|^{2}\right]=\frac{1}{n}\left(1-\mathrm{e}^{-t}\right) \quad \forall i,j \in \left[\!\left[ 1,n\right]\!\right];\\ \mathbb{E}\!\left[(\widetilde{g}_{ii})^{2}\right]&=\mathrm{e}^{-\frac{n+1}{n}t}\qquad;\qquad\mathbb{E}\!\left[(\widetilde{g}_{ij})^{2}\right]=0\quad\forall i,j \in \left[\!\left[ 1,2n\right]\!\right]; \end{align*} since the eigenvectors and eigenvalues of $M_{n,2}$ are: $$\!\!\!\!\!\!\! \begin{tabular}{|c|c|c|} \hline &&\\ eigenvalue & multiplicity & eigenvectors \\ &&\\ \hline && \\ $\frac{2n+1}{2}$ & $1$ & $ \sum_{i=1}^{n} e[2i-1,2i]-e[2i,2i-1]$ \\ &&\\ \hline && \\ $\frac{1}{2}$ & $(n-1)(2n+1) $ & $ (e[2i-1,2i]-e[2i,2i-1])-(e[2i+1,2i+2]-e[2i+2,2i+1]),\,\,i\leq n-1$ \\ &&\\ &&$e[2i-1,2j-1]-e[2j-1,2i-1],\,\,e[2i,2j]-e[2j,2i],\,\, 1\leq i<j \leq n$\\ &&\\ &&$e[2i-1,2j]-e[2j,2i-1],\,\, 1\leq i \neq j\leq n$\\ &&\\ \hline && \\ $-\frac{1}{2}$ & $n(2n+1)$ & $e_{k}\otimes e_{l}+e_{l} \otimes e_{k}, 1 \leq k \leq l \leq 2n$\\ &&\\ \hline \end{tabular} $$ For $k=4$, we refer to Appendix \ref{expspn} for the expansion in eigenvectors of simple tensors. One obtains: \begin{align*}\mathbb{E}\!\left[(\widetilde{g}_{ii})^{4}\right]&=\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}\!\left[(\widetilde{g}_{ij})^{4}\right]&=\mathbb{E}\!\left[(\widetilde{g}_{ii}\,\widetilde{g}_{ij})^{2}\right]=\mathbb{E}\!\left[(\widetilde{g}_{ij}\,\widetilde{g}_{ik})^{2}\right]=0;\\ \mathbb{E}[(\widetilde{g}_{2i-1,2i-1}\,\widetilde{g}_{2i,2i})^{2}]&=\frac{1}{n(2n+1)}+\frac{n-1}{n(n+1)}\,\mathrm{e}^{-t}+\frac{1}{n+1}\,\mathrm{e}^{-\frac{n+1}{n}t}+\frac{(2n-1)(2n-2)}{3(2n+1)(2n+2)}\,\mathrm{e}^{-\frac{2n+1}{n}t}\\ &\quad+\frac{n-1}{2(n+1)}\,\mathrm{e}^{-\frac{2n+2}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(\widetilde{g}_{2i-1,2i}\,\widetilde{g}_{2i,2i-1})^{2}]&=\frac{1}{n(2n+1)}+\frac{n-1}{n(n+1)}\,\mathrm{e}^{-t}-\frac{1}{n+1}\,\mathrm{e}^{-\frac{n+1}{n}t}+\frac{(2n-1)(2n-2)}{3(2n+1)(2n+2)}\,\mathrm{e}^{-\frac{2n+1}{n}t}\\ &\quad-\frac{n-1}{2(n+1)}\,\mathrm{e}^{-\frac{2n+2}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(\widetilde{g}_{2i-1,2j-1}\,\widetilde{g}_{2i,2j})^{2}]&=\mathbb{E}[(\widetilde{g}_{2i-1,2j}\,\widetilde{g}_{2i,2j-1})^{2}]=\frac{1}{n(2n+1)}-\frac{1}{n(n+1)}\,\mathrm{e}^{-t}+\frac{1}{(2n+1)(n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\end{align*} and the other moments of type $\mathbb{E}[(\widetilde{g}_{2i-1,a}\widetilde{g}_{2i,b})^{2}]$ vanish. 
On the other hand, assuming that $\{a,b\}$ is not a pair $\{2i-1,2i\}$ in $\left[\!\left[ 1,2n\right]\!\right]$, one has also \begin{align*}\mathbb{E}[(\widetilde{g}_{aa}\,\widetilde{g}_{bb})^{2}]&=\frac{1}{3}\,\mathrm{e}^{-\frac{2n+1}{n}t}+\frac{1}{2}\,\mathrm{e}^{-\frac{2n+2}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n+4}{n}t};\\ \mathbb{E}[(\widetilde{g}_{ab}\,\widetilde{g}_{ba})^{2}]&=\frac{1}{3}\,\mathrm{e}^{-\frac{2n+1}{n}t}-\frac{1}{2}\,\mathrm{e}^{-\frac{2n+2}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n+4}{n}t}; \end{align*} and the other moments of type $\mathbb{E}[(\widetilde{g}_{ab}\,\widetilde{g}_{cd})^{2}]$ with $\{c,d\}\neq \{a,b\}$ vanish. The same expansions enable one to compute many moments of type $\mathbb{E}[|g_{ij}\,g_{kl}|^{2}]$, namely, all those that write as $\mathbb{E}[|g_{ij}\,g_{ik}|^{2}]$. For instance, since $|g_{ii}|^{4}=(\widetilde{g}_{2i-1,2i-1}\,\widetilde{g}_{2i,2i}-\widetilde{g}_{2i-1,2i}\,\widetilde{g}_{2i,2i-1})^{2},$ its expectation is a combination of those of $(\widetilde{g}_{2i-1,2i-1}\,\widetilde{g}_{2i,2i})^{2}$, $(\widetilde{g}_{2i-1,2i}\,\widetilde{g}_{2i,2i-1})^{2}$ and $\widetilde{g}_{2i-1,2i-1}\,\widetilde{g}_{2i-1,2i}\,\widetilde{g}_{2i,2i}\,\widetilde{g}_{2i,2i-1}$. This last expectation is $$-\frac{1}{2n(2n+1)}-\frac{n-1}{2n(n+1)}\,\mathrm{e}^{-t}-\frac{(2n-1)(2n-2)}{6(2n+1)(2n+2)}\,\mathrm{e}^{-\frac{2n+1}{n}t}+\frac{1}{6}\,\mathrm{e}^{-\frac{2n+4}{n}t}.$$ Thus, with a few more computations, one gets \begin{align*}\mathbb{E}[|g_{ii}|^{4}]&=\frac{3}{n(2n+1)}+\frac{3(n-1)}{n(n+1)}\,\mathrm{e}^{-t}+\frac{(2n-1)(2n-2)}{(2n+1)(2n+2)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ij}|^{4}]&=\frac{3}{n(2n+1)}-\frac{3}{n(n+1)}\,\mathrm{e}^{-t}+\frac{3}{(2n+1)(n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ii}\,g_{ij}|^{2}]&=\frac{2}{n(2n+1)}+\frac{(n-2)}{n(n+1)}\,\mathrm{e}^{-t} - \frac{2(2n-1)}{(2n+1)(2n+2)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ij}\,g_{ik}|^{2}]&=\frac{2}{n(2n+1)}-\frac{2}{n(n+1)}\,\mathrm{e}^{-t}+\frac{2}{(2n+1)(n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ii}\,g_{jj}|^{2}]&=\frac{2n-1}{n(n-1)(2n+1)}+\frac{2}{n+1}\,\mathrm{e}^{-t}+ \frac{n-3}{6(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad+\frac{n-2}{2n}\,\mathrm{e}^{-2t}+\frac{2n^{2}-n+3}{3(n+1)(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t}; \end{align*} \begin{align*} \mathbb{E}[|g_{ij}\,g_{ji}|^{2}]&=\frac{2n-1}{n(n-1)(2n+1)}-\frac{2}{n(n+1)}\,\mathrm{e}^{-t}+\frac{n-3}{6(n-1)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad-\frac{n-2}{2n}\,\mathrm{e}^{-2t}+\frac{2n^{2}-n+3}{3(n+1)(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ii}\,g_{jk}|^{2}]&=\frac{2n-1}{n(n-1)(2n+1)}+\frac{n^2-3n+1}{n(n+1)(n-2)}\,\mathrm{e}^{-t}-\frac{n-3}{6(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t}\\ &\quad-\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{2n-3}{3(n+1)(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ij}\,g_{jk}|^{2}]&=\frac{2n-1}{n(n-1)(2n+1)}-\frac{2n-3}{n(n+1)(n-2)}\,\mathrm{e}^{-t}-\frac{n-3}{6(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t} \\ &\quad +\frac{1}{2n}\,\mathrm{e}^{-2t}-\frac{2n-3}{3(n+1)(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t};\\ \mathbb{E}[|g_{ij}\,g_{kl}|^{2}]&=\frac{2n-1}{n(n-1)(2n+1)}-\frac{2n-2}{n(n+1)(n-2)}\,\mathrm{e}^{-t}+\frac{1}{3(n-1)(n-2)}\,\mathrm{e}^{-\frac{2n-2}{n}t} \\ &\quad+ \frac{4}{3(n+1)(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t}. 
\end{align*} \begin{proposition} For the quaternionic Grassmannian varieties $\mathrm{Gr}(n,q,\mathbb{H})$ and the spaces $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$, the coefficients of Lemma \ref{abstractexpansion} are: \begin{align*} &\mathrm{Gr}(n,q,\mathbb{H}):\quad\frac{1}{2n^{2}-n-1}+\frac{n^{2}}{3}\left(\frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\phi^{(1^4,0,\ldots,0)_{n}}\\ &\qquad\qquad\qquad+\frac{\frac{n^{2}}{pq}-4}{(n-2)(n+1)}\,\phi^{(1^2,0,\ldots,0)_{n-1}}+\frac{n^{2}}{3}\left(\frac{4}{(n+1)(2n+1)}+\frac{1}{pq(n+1)}\right)\phi^{(2,2,0,\ldots,0)_{n}};\\ &\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n):\quad \frac{1}{2n^{2}+n}+\frac{4(n-1)(n+1)}{3n(2n+1)}\,\phi^{(2,2,0,\ldots,0)_{n}}+\frac{n+1}{3n}\,\phi^{(4,0,\ldots,0)_{n}}. \end{align*} \end{proposition} \begin{proof} The case of quaternionic Grassmannians is again done by using the expansion on page \pageref{expansquaregrass}, with squared moduli instead of squares. One obtains the following formula for the expectation of $(\phi^{(1,1,0,\ldots,0)_{n}})^{2}$: \begin{align*}&\frac{1}{2n^{2}-n-1}+\frac{\frac{n^{2}}{pq}-4}{(n-2)(n+1)}\,\mathrm{e}^{-t}+\frac{n^{2}}{3}\left(\frac{1}{(n-1)(n-2)}-\frac{1}{pq(n-2)}\right)\mathrm{e}^{-\frac{2n-2}{n}t}\\ &+\frac{n^{2}}{3}\left(\frac{4}{(n+1)(2n+1)}+\frac{1}{pq(n+1)}\right)\mathrm{e}^{-\frac{2n+1}{n}t}, \end{align*} hence the expansion in zonal functions by identification of the coefficients. Finally, for the structure spaces $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$, $(\phi^{(2,0,\ldots,0)_{n}})^{2}$ is equal to $$\frac{1}{2n}\left(T[(\widetilde{g}_{11})^{4}]+T[(\widetilde{g}_{11}\widetilde{g}_{22})^{2}]+T[(\widetilde{g}_{12}\widetilde{g}_{21})^{2}]\right)+\frac{n-1}{n}\left(T[(\widetilde{g}_{13}\widetilde{g}_{24})^{2}]+T[(\widetilde{g}_{11}\widetilde{g}_{33})^{2}]+T[(\widetilde{g}_{13}\widetilde{g}_{31})^{2}]\right)$$ plus some remainder whose expectation under Brownian measures will be zero. Hence, $$\mathbb{E}[(\phi^{(2,0,\ldots,0)_{n}})^{2}]=\frac{1}{n(2n+1)}+\frac{4(n-1)(n+1)}{3n(2n+1)}\,\mathrm{e}^{-\frac{2n+1}{n}t}+\frac{n+1}{3n}\,\mathrm{e}^{-\frac{2n+4}{n}t},$$ and $\frac{2n+1}{n}$ is the exponent corresponding to the spherical representation of label $(2,2,0,\ldots,0)_{n}$, whereas $\frac{2n+4}{n}$ is the exponent corresponding to the spherical representation of label $(4,0,\ldots,0)_{n}$. \end{proof} \subsection{Proof of the lower bound on the total variation distance}\label{bienayme} The proof of the lower bound is now a simple application of the Bienaym\'e--Chebyshev inequality. First, under the Haar measure, we have: \begin{proposition}\label{upperboundhaartrace} If $E_{a}$ is the event $\{|\Omega|\geq a\}$, then the Haar measure of $E_{a}$ satisfies the inequality $$ \eta_{X}(E_{a}) \leq \frac{1}{a^{2}} $$ for every classical simple compact Lie group $X=K$ and every classical simple compact symmetric space $X=G/K$. \end{proposition} \begin{proof} The previous computations ensure that $\mathbb{E}_{\infty}[|\Omega|^{2}]=1$ in every case, so $$\eta_{X}[|\Omega|\geq a]=\eta_{X}[|\Omega|^{2}\geq a^{2}]\leq \frac{\mathbb{E}_{\infty}[|\Omega|^{2}]}{a^{2}}=\frac{1}{a^{2}}. $$ \end{proof} Next, let us estimate $\mathbb{E}_{t}[\Omega]$ and $\mathrm{Var}_{t}[\Omega]$ for $t=\alpha\,(1-\varepsilon)\log n$. The exact values are listed in the table on the following page. We assume $\varepsilon < \frac{1}{4}$; indeed, Lemma \ref{nonincreasingdistance} ensures that it is sufficient to control the total variation distance around the cut-off time.
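As a small consistency check of that table (with SymPy; it only verifies the algebra of the coefficients at $t=0$, not the exponents, and is of course not a proof), one can confirm that each tabulated variance vanishes at $t=0$, as it must; a few representative rows are reproduced in the sketch below.

\begin{verbatim}
import sympy as sp

n, p = sp.symbols('n p', positive=True)
t = sp.symbols('t')
q = n - p
e = sp.exp
variance = {
  'SO(n)': 1 + n*(n-1)/2*e(-(n-4)/n*t) + (n*(n+1)/2 - 1)*e(-t) - n**2*e(-(n-1)/n*t),
  'SU(n)': 1 + (n**2 - 1)*e(-t) - n**2*e(-(n**2-1)/n**2*t),
  'SU(n)/SO(n)': 1 + (n+2)*(n-1)/2*e(-(2*n+2)/n*t) - n*(n+1)/2*e(-(n-1)*(2*n+4)/n**2*t),
  'Gr(n,q,R)': (1 + (2*n**2/(p*q) - 8)*(n-1)*(n+2)/((n-2)*(n+4))*e(-t)
      + n**2/3*((n+2)/(n-2) - (n+2)*(n-1)/(p*q*(n-2)))*e(-(2*n-2)/n*t)
      + n**2/6*((n-1)/(n+4) + 2*(n+2)*(n-1)/(p*q*(n+4)))*e(-(2*n+4)/n*t)
      - (n+2)*(n-1)/2*e(-2*t)),
}
for name, v in variance.items():
    assert sp.simplify(v.subs(t, 0)) == 0   # Var_0[Omega] = 0: the coefficients are consistent
\end{verbatim}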
We shall make repeated use of the convexity inequality $$\exp(x)\leq 1+\frac{\mathrm{e}^{y}-1}{y}\,x\quad \forall x \in \left(0,y\right).$$ \begin{lemma}\label{controlvariance} Under the usual assumptions on $n$, for groups and spaces of structures (but not for Grassmannian varieties), $\mathrm{Var}_{t}[\Omega]$ is uniformly bounded for every $t =\alpha\,(1-\varepsilon)\,\log n$ with $\varepsilon \in (0,1/4)$. Possible upper bounds are listed below: \begin{align*} &\mathrm{SU}(n),\,\,\mathrm{SU}(n)/\mathrm{SO}(n),\,\, \mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n): 1 \quad;\\ &\mathrm{SO}(2n)/\mathrm{U}(n),\,\,\mathrm{U}\mathrm{Sp}(n),\,\,\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n): 3\quad;\\ &\mathrm{SO}(n): 8. \end{align*} \end{lemma} \begin{proof} We proceed case by case, and denote $\Delta_{t}(\lambda,\mu)=\mathrm{e}^{-\lambda t}-\mathrm{e}^{-\mu t}$. Notice that $\Delta_{t}(\lambda,\mu)\leq 0$ if $\lambda \geq \mu$. On the other hand, $\Delta_{t}(\lambda,\mu)$ is always smaller than $1$ for $\lambda, \mu \geq 0$. \begin{itemize} \item $\mathrm{SO}(n)$: \begin{align*} \mathrm{Var}_{t}[\Omega] &= \Delta_{t}\!\left(0,\frac{n-1}{n}\right)+\frac{n(n-1)}{2}\,\Delta_{t}\!\left(\frac{n-4}{n},\frac{n-1}{n}\right)+\left(\frac{n(n+1)}{2}-1\right)\Delta_{t}\!\left(1,\frac{n-1}{n}\right)\\ &\leq 1 + \frac{n(n-1)}{2}\,\Delta_{t}\!\left(\frac{n-4}{n},\frac{n-1}{n}\right)=1+\frac{n(n-1)}{2}\,\mathrm{e}^{-\frac{n-1}{n}t}\,\left(\mathrm{e}^{\frac{3t}{n}}-1\right) \\ &\leq 1+\frac{13}{2}\,n\log n\,\mathrm{e}^{-\frac{n-1}{n}t} \end{align*} since $\frac{6 \log n}{n}\leq 1.382$ when $n \geq 10$, and $\frac{\mathrm{e}^{1.382}-1}{1.382} \leq \frac{13}{6}$. Then, $$\mathrm{e}^{-\frac{n-1}{n}t} \leq \mathrm{e}^{-\frac{3(n-1)\log n}{2n}} = n^{-1}\,\mathrm{e}^{-\frac{(n-3)\,\log n}{2n}}\leq \frac{14}{13}\,(n\log n)^{-1} $$ for $n \geq 10$, so $\mathrm{Var}_{t}[\Omega]\leq 1+7=8$.
$$\!\!\!\!\!\!\!\!\!\begin{tabular}{|c|c|l|} \hline &&\\ $K$ or $G/K$ & $\mathbb{E}_{t}[\Omega]$& $\qquad\qquad\qquad\qquad\qquad\quad\mathrm{Var}_{t}[\Omega]$ \\ &&\\ \hline && \\ $\mathrm{SO}(n)$ & $n\,\mathrm{e}^{-\frac{n-1}{2n}t}$& $1+\frac{n(n-1)}{2}\,\mathrm{e}^{-\frac{n-4}{n}t} +\left(\frac{n(n+1)}{2}-1\right)\mathrm{e}^{-t}-n^{2}\,\mathrm{e}^{-\frac{n-1}{n}t}$ \\ &&\\ \hline & & \\ $\mathrm{SU}(n)$ &$n\,\mathrm{e}^{-\frac{n^{2}-1}{2n^{2}}t} $& $1+(n^{2}-1)\,\mathrm{e}^{-t}-n^{2}\,\mathrm{e}^{-\frac{n^{2}-1}{n^{2}}t} $ \\ &&\\ \hline && \\ $\mathrm{U}\mathrm{Sp}(n)$&$2n\,\mathrm{e}^{-\frac{2n+1}{4n}t}$ & $1+(2n+1)(n-1)\,\mathrm{e}^{-t}+(2n+1)\,n\,\mathrm{e}^{-\frac{n+1}{n}t}-4n^{2}\,\mathrm{e}^{-\frac{2n+1}{2n}t} $ \\ &&\\ \hline && \\ $\mathrm{Gr}(n,q,\mathbb{R})$ & $\sqrt{\frac{(n+2)(n-1)}{2}}\,\mathrm{e}^{-t}$& $1+\left(\frac{2n^{2}}{pq}-8\right)\frac{(n-1)(n+2)}{(n-2)(n+4)}\,\mathrm{e}^{-t}+\frac{n^{2}}{3}\left(\frac{n+2}{n-2}-\frac{(n+2)(n-1)}{pq(n-2)}\right)\,\mathrm{e}^{-\frac{2n-2}{n}t}$\\ & &\\ &&$+\frac{n^{2}}{6}\left(\frac{n-1}{n + 4}+\frac{2(n+2)(n-1)}{pq(n + 4)}\right)\,\mathrm{e}^{-\frac{2n+4}{n}t}-\frac{(n+2)(n-1)}{2}\,\mathrm{e}^{-2t}$ \\ &&\\ \hline && \\ $\mathrm{Gr}(n,q,\mathbb{C})$ &$\sqrt{n^{2}-1}\,\mathrm{e}^{-t}$ & $ 1+\left(\frac{2n^{2}}{pq}-8\right)\frac{n^{2}-1}{n^{2}-4}\,\mathrm{e}^{-t}+\frac{n^{2}}{2}\left( \frac{n+1}{n-2}-\frac{n^{2}-1}{pq(n-2)}\right)\mathrm{e}^{-\frac{2n-2}{n}t}$\\ &&\\ &&$+\frac{n^{2}}{2}\left(\frac{n-1}{n+2}+\frac{n^{2}-1}{pq(n+2)}\right)\mathrm{e}^{-\frac{2n+2}{n}t}-(n^{2}-1)\,\mathrm{e}^{-2t}$ \\ &&\\ \hline && \\ $\mathrm{Gr}(n,q,\mathbb{H})$ & $\sqrt{(2n+1)(n-1)}\,\mathrm{e}^{-t}$& $ 1+\left(\frac{n^{2}}{pq}-4\right)\frac{(n-1)(2n+1)}{(n-2)(n+1)}\,\mathrm{e}^{-t}+\frac{n^{2}}{3}\left(\frac{2n+1}{n-2}-\frac{(2n+1)(n-1)}{pq(n-2)}\right)\mathrm{e}^{-\frac{2n-2}{n}t}$\\ &&\\ &&$+\frac{n^{2}}{3}\left(\frac{4(n-1)}{(n+1)}+\frac{(2n+1)(n-1)}{pq(n+1)}\right)\mathrm{e}^{-\frac{2n+1}{n}t} - (2n+1)(n-1)\,\mathrm{e}^{-2t}$ \\ &&\\ \hline && \\ $\mathrm{SO}(2n)/\mathrm{U}(n)$ &$\sqrt{n(2n-1)}\,\mathrm{e}^{-\frac{n-1}{n}t}$& $1+\frac{(n-1)(2n-1)}{3}\,\mathrm{e}^{-\frac{2n-4}{n}t}+\frac{4(n^{2}-1)}{3}\,\mathrm{e}^{-\frac{2n-1}{n}t} -n(2n-1)\,\mathrm{e}^{-\frac{2n-2}{n}t}$ \\ &&\\ \hline && \\ $\mathrm{SU}(n)/\mathrm{SO}(n)$ &$\sqrt{\frac{n(n+1)}{2}}\,\mathrm{e}^{-\frac{(n-1)(n+2)}{n^{2}}t}$& $ 1+\frac{(n+2)(n-1)}{2}\,\mathrm{e}^{-\frac{2n+2}{n}t}-\frac{n(n+1)}{2}\,\mathrm{e}^{-\frac{(n-1)(2n+4)}{n^{2}}t}$ \\ &&\\ \hline && \\ $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$ & $\sqrt{2n^{2}-n}\,\mathrm{e}^{-\frac{(n-1)(2n+1)}{2n^{2}}t}$ & $ 1+ (2n^{2}-n-1) \,\mathrm{e}^{-\frac{2n-1}{n}t} - (2n^{2}-n)\,\mathrm{e}^{-\frac{(n-1)(2n+1)}{n^{2}}t}$ \\ &&\\ \hline && \\ $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$ & $\sqrt{n(2n+1)}\,\mathrm{e}^{-\frac{n+1}{n}t}$ & $1+\frac{4(n-1)(n+1)}{3}\,\mathrm{e}^{-\frac{2n+1}{n}t}+\frac{(2n+1)(n+1)}{3}\,\mathrm{e}^{-\frac{2n+4}{n}t} -n(2n+1)\,\mathrm{e}^{-\frac{2n+2}{n}t}$ \\ &&\\ \hline \end{tabular} $$ \item $\mathrm{SU}(n)$: $$\mathrm{Var}_{t}[\Omega]= \Delta_{t}\!\left(0,\frac{n^{2}-1}{n^{2}}\right)+(n^{2}-1)\,\Delta_{t}\!\left(1,\frac{n^{2}-1}{n^{2}}\right) \leq 1.$$ \item $\mathrm{U}\mathrm{Sp}(n)$: \begin{align*} \mathrm{Var}_{t}[\Omega] &= \Delta_{t}\!\left(0,\frac{2n+1}{2n}\right)+(2n+1)(n-1)\,\Delta_{t}\!\left(1,\frac{2n+1}{2n}\right)+(2n+1)\,n\,\Delta_{t}\!\left(\frac{2n+2}{2n},\frac{2n+1}{2n}\right)\\ &\leq 1+(2n+1)(n-1)\,\Delta_{t}\!\left(1,\frac{2n+1}{2n}\right)\leq 
1+2n^{2}\,\mathrm{e}^{-\frac{2n+1}{2n}t}\,\left(\mathrm{e}^{\frac{t}{2n}}-1\right)\\ &\leq 1+\frac{5}{2}\,n\log n\,\mathrm{e}^{-\frac{2n+1}{2n}t} \end{align*} since $\frac{\log n}{n} \leq 0.367$ when $n \geq 3$, and $\frac{\mathrm{e}^{0.367}-1}{0.367} \leq \frac{5}{4}$. Then, $$\mathrm{e}^{-\frac{2n+1}{2n}t} \leq \mathrm{e}^{-\frac{3\log n}{2}} = n^{-\frac{3}{2}} \leq \frac{4}{5}\,(n\log n)^{-1}$$ for $n \geq 3$, so $\mathrm{Var}_{t}[\Omega]\leq 1+2=3$. \item $\mathrm{SO}(2n)/\mathrm{U}(n)$: \begin{align*} \mathrm{Var}_{t}[\Omega] &=\Delta_{t}\!\left(0,\frac{2n-2}{n}\right)+\frac{(n-1)(2n-1)}{3}\,\Delta_{t}\!\left(\frac{2n-4}{n},\frac{2n-2}{n}\right)+\frac{4(n^{2}-1)}{3}\,\Delta_{t}\!\left(\frac{2n-1}{n},\frac{2n-2}{n}\right) \\ &\leq 1+\frac{(n-1)(2n-1)}{3}\,\Delta_{t}\!\left(\frac{2n-4}{n},\frac{2n-2}{n}\right) \leq 1+\frac{2n^{2}}{3}\,\mathrm{e}^{-\frac{2n-2}{n}t}\,\left(\mathrm{e}^{\frac{2t}{n}}-1\right) \\ &\leq 1+\frac{20}{9} n \log 2n\,\mathrm{e}^{-\frac{2n-2}{n}t} \end{align*} since $\frac{2\log 2n}{n} \leq 0.922$ when $2n \geq 10$, and $\frac{\mathrm{e}^{0.922}-1}{0.922} \leq \frac{5}{3}$. Since $$\mathrm{e}^{-\frac{2n-2}{n}t} \leq \mathrm{e}^{-\frac{3(n-1)\log 2n}{2n}} = \frac{1}{2}\,n^{-1}\,\mathrm{e}^{-\frac{(n-3)\,\log 2n}{2n}}\leq \frac{3}{4}(n \log 2n)^{-1}$$ for $2n \geq 10$, one concludes that $\mathrm{Var}_{t}[\Omega]\leq 1+\frac{5}{3}\leq 3$. \item $\mathrm{SU}(n)/\mathrm{SO}(n)$: $$\mathrm{Var}_{t}[\Omega] =\Delta_{t}\!\left(0,\frac{2(n-1)(n+2)}{n^{2}}\right)+\frac{(n+2)(n-1)}{2}\,\Delta_{t}\!\left(\frac{2(n+1)}{n},\frac{2(n-1)(n+2)}{n^{2}}\right) \leq 1.$$ \item $\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$: $$\mathrm{Var}_{t}[\Omega] =\Delta_{t}\!\left(0,\frac{(n-1)(2n+1)}{n^{2}}\right) + (2n^{2}-n-1) \,\Delta_{t}\!\left(\frac{2n-1}{n},\frac{(n-1)(2n+1)}{n^{2}}\right) \leq 1.$$ \item $\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$: \begin{align*} \mathrm{Var}_{t}[\Omega] &=\Delta_{t}\!\left(0,\frac{2n+2}{n}\right)+\frac{4(n^{2}-1)}{3}\,\Delta_{t}\!\left(\frac{2n+1}{n},\frac{2n+2}{n}\right)+\frac{2n^{2}+3n+1}{3}\,\Delta_{t}\!\left(\frac{2n+4}{n},\frac{2n+2}{n}\right) \\ &\leq 1+\frac{4(n^{2}-1)}{3}\,\Delta_{t}\!\left(\frac{2n+1}{n},\frac{2n+2}{n}\right) \leq 1+\frac{4n^{2}}{3}\,\mathrm{e}^{-\frac{2n+2}{n}t}\,\left(\mathrm{e}^{\frac{t}{n}}-1\right) \\ &\leq 1+\frac{5}{3}\,n \log n\,\mathrm{e}^{-\frac{2n+2}{n}t} \end{align*} by using the same estimate on $\frac{\log n}{n}$ as in the case of $\mathrm{U}\mathrm{Sp}(n)$. Since $$\mathrm{e}^{-\frac{2n+2}{n}t} \leq \mathrm{e}^{-\frac{3\log n}{2}} = n^{-\frac{3}{2}} \leq \frac{4}{5}\,(n\log n)^{-1}$$ for $n \geq 3$, one obtains $\mathrm{Var}_{t}[\Omega]\leq 1+\frac{4}{3}\leq 3$. \end{itemize} It is not possible to prove such uniform bounds for Grassmannians, because of the term $\mathrm{e}^{-t}$ that appears in the variance. We shall address this problem in Lemma \ref{uniformboundvariancegrass}. \end{proof} \begin{proposition}\label{trickychebyshev} Denote $K_{X}$ the bound computed in the previous Lemma for the variance of the discriminating zonal function $\Omega$ associated to a space $X$. Then, $$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})\geq 1-\frac{4(K_{X}+1)}{(\mathbb{E}_{t}[\Omega])^{2}}.$$ \end{proposition} \begin{proof} Assuming $a$ smaller than $m=\mathbb{E}_{t}[\Omega]$, if $|\Omega-m|\leq a$, then $|\Omega| \geq m-a$. Consequently, $$\mu_{t}[|\Omega| \geq m-a] \geq 1 - \mathbb{P}[|\Omega-m| > a] \geq 1-\frac{\mathrm{Var}_{t}[\Omega]}{a^{2}}=1-\frac{K_{X}}{a^{2}}.$$ Next, take $a=\frac{m}{2}$. 
The combination of Lemma \ref{upperboundhaartrace} and of the previous inequality yields $$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq \mu_{t}(E_{a})-\eta_{X}(E_{a}) \geq 1-\frac{K_{X}+1}{a^{2}} = 1 - \frac{4(K_{X}+1)}{m^{2}}.$$ Since $m^{2}$ behaves as $n^{2\varepsilon}$, this essentially ends the proof of the lower bounds in the case of compact Lie groups and compact spaces of structures. More precisely: \begin{itemize} \item$\mathrm{SO}(n)$: $m^{2} \geq n^{2\varepsilon}$ so the constant $c$ in our main Theorem \ref{main} is $4(8+1)=36$. \item$\mathrm{SU}(n)$: again, $m^{2} \geq n^{2\varepsilon}$, so the constant is $4(1+1)=8$. \item$\mathrm{U}\mathrm{Sp}(n)$: here, $m^{2} \geq 4\,n^{2\varepsilon}\,\mathrm{e}^{-\frac{\log n}{2n}} \geq \frac{16}{5}\,n^{2\varepsilon}$ for $n \geq 3$, so the constant is $\frac{5}{16}\,4(3+1)=5$. \item$\mathrm{SO}(2n)/\mathrm{U}(n)$: $m^{2} \geq \frac{2n-1}{4n}\,(2n)^{2\varepsilon} \geq \frac{9}{20}\,(2n)^{2\varepsilon}$ for $2n \geq 10$, whence a constant $\frac{9}{20}\,4(3+1)=\frac{36}{5}\leq 8$. \item$\mathrm{SU}(n)/\mathrm{SO}(n)$: $m^{2} \geq \frac{n^{2\varepsilon}}{2}\,\mathrm{e}^{-\frac{2(n-2)\log n}{n^{2}}} \geq \frac{n^{2\varepsilon}}{3}$ for $n \geq 2$, so a possible constant is $3\times 4(1+1)=24$. \item$\mathrm{SU}(2n)/\mathrm{U}\mathrm{Sp}(n)$: $m^{2} \geq \frac{2n-1}{4n}\,(2n)^{2\varepsilon} \geq \frac{3}{8}\,(2n)^{2\varepsilon}$, and a possible constant is $\frac{8}{3}\,4(1+1)=\frac{64}{3}\leq 22$. \item$\mathrm{U}\mathrm{Sp}(n)/\mathrm{U}(n)$: $m^{2} \geq 2 n^{2\varepsilon}\, \mathrm{e}^{-\frac{2 \log n}{n}}\geq \frac{16}{17}\,n^{2\varepsilon}$ for $n \geq 3$, whence a constant $\frac{17}{16}\,4(3+1)= 17$. \end{itemize} \end{proof} Unfortunately, for Grassmannian varieties, the variance of $\Omega$ at time $t=(1-\varepsilon)\log n$ can only be bounded by a constant times $n^{\varepsilon}$. However, since the mean of $\Omega$ is also of order $n^{\varepsilon}$, this will still ensure that the discriminating zonal spherical function has not at all the same behavior under Haar measure and under Brownian measures before cut-off time. The only downside is the loss of a factor $n^{\varepsilon}$ in the estimate of the total variation distance. \begin{lemma}\label{uniformboundvariancegrass} Under the usual assumptions on $n$, for Grassmannian varieties, $$\frac{\mathrm{Var}_{t}[\Omega]}{n^{\varepsilon}} \leq \begin{cases} 3 &\text{if }\Bbbk=\mathbb{R},\\ 5 &\text{if }\Bbbk=\mathbb{C}\text{ or }\mathbb{H}, \end{cases}$$ for every $t =\alpha\,(1-\varepsilon)\,\log n$ with $\varepsilon \in (0,1/4)$. \end{lemma} \begin{proof} The quantity $\frac{1}{pq}$ is bounded by $$\frac{4}{n^{2}} \leq \frac{1}{pq} \leq \frac{1}{n-1},$$ the extremal values corresponding to $p=q=\frac{n}{2}$ and to $p=n-1$ or $q=n-1$. In particular, in the expansions hereafter, all the coefficients that precede differences of exponentials $\Delta_{t}(\lambda,\mu)$ are positive. Now, we proceed case by case: \begin{itemize} \item $\mathrm{Gr}(n,q,\mathbb{R})$: \begin{align*} \mathrm{Var}_{t}[\Omega]&=\Delta_{t}\!\left(0,2\right)+\left(\frac{2n^{2}}{pq}-8\right)\frac{(n-1)(n+2)}{(n-2)(n+4)}\,\Delta_{t}\!\left(1,2\right)+\frac{n^{2}}{3}\!\left(\frac{n+2}{n-2}-\frac{(n+2)(n-1)}{pq(n-2)}\right)\Delta_{t}\!\left(\frac{2n-2}{n},2\right)\\ &\quad +\frac{n^{2}}{6}\left(\frac{n-1}{n + 4}+\frac{2(n+2)(n-1)}{pq(n + 4)}\right)\Delta_{t}\!\left(\frac{2n+4}{n},2\right)\\ &\leq 1+2n\,\Delta_{t}(1,2)+\frac{n^{2}}{3}\,\Delta_{t}\!\left(\frac{2n-2}{n},2\right). 
\end{align*} For the difference $\Delta_{t}(1,2)$, one cannot obtain a better bound than $\mathrm{e}^{-t}=n^{\varepsilon-1}$. The second difference $\Delta_{t}\!\left(\frac{2n-2}{n},2\right)$ is bounded from above by $$\mathrm{e}^{-2t}\left(\mathrm{e}^{\frac{2t}{n}}-1\right) \leq n^{-\frac{3}{2}}\,\frac{8\log n}{3n} \leq 2n^{-2}$$ by using similar arguments as in the proof of Lemma \ref{controlvariance}, and the inequality $n\geq 10$. So, $$\mathrm{Var}_{t}[\Omega] \leq 1+\frac{2}{3}+2n^{\varepsilon} \leq 3n^{\varepsilon}.$$ \item$\mathrm{Gr}(n,q,\mathbb{C})$: \begin{align*} \mathrm{Var}_{t}[\Omega]&= \Delta_{t}\!\left(0,2\right)+\left(\frac{2n^{2}}{pq}-8\right)\frac{n^{2}-1}{n^{2}-4}\,\Delta_{t}\!\left(1,2\right)+\frac{n^{2}}{2}\left( \frac{n+1}{n-2}-\frac{n^{2}-1}{pq(n-2)}\right)\Delta_{t}\!\left(\frac{2n-2}{n},2\right)\\ &\quad+\frac{n^{2}}{2}\left(\frac{n-1}{n+2}+\frac{n^{2}-1}{pq(n+2)}\right)\,\Delta_{t}\!\left(\frac{2n+2}{2},2\right)\\ &\leq 1+2n\,\Delta_{t}(1,2)+\frac{n^{2}}{2}\,\Delta_{t}\!\left(\frac{2n-2}{n},2\right). \end{align*} The second difference is controlled exactly as in the case of real Grassmannians, but under the constraint $n \geq 2$: $$\Delta_{t}\!\left(\frac{2n-2}{n},2\right)\leq \mathrm{e}^{-2t}\left(\mathrm{e}^{\frac{2t}{n}}-1\right) \leq n^{-\frac{3}{2}}\,\frac{3\log n}{n} \leq \frac{9}{4}\,n^{-2}.$$ Hence, $\mathrm{Var}_{t}[\Omega] \leq 1+\frac{9}{8}+2n^{\varepsilon} \leq 5n^{\varepsilon}.$ \item$\mathrm{Gr}(n,q,\mathbb{H})$: \begin{align*} \mathrm{Var}_{t}[\Omega]&= \Delta_{t}\!\left(0,2\right)+\frac{n^{2}}{3}\left(\frac{2n+1}{n-2}-\frac{(2n+1)(n-1)}{pq(n-2)}\right)\Delta_{t}\!\left(\frac{2n-2}{n},2\right)\\ &\quad+\left(\frac{n^{2}}{pq}-4\right)\frac{(n-1)(2n+1)}{(n-2)(n+1)}\,\Delta_{t}(1,2)+\frac{n^{2}}{3}\left(\frac{4(n-1)}{(n+1)}+\frac{(2n+1)(n-1)}{pq(n+1)}\right)\Delta_{t}\!\left(\frac{2n+1}{n},2\right)\\ &\leq 1+2n\,\Delta_{t}(1,2)+\frac{2n^{2}}{3}\,\Delta_{t}\!\left(\frac{2n-2}{n},2\right)\leq 1+2n^{\varepsilon}+\frac{3}{2} \leq 5n^{\varepsilon}. \end{align*} \end{itemize} \end{proof} Now, Proposition \ref{trickychebyshev} still holds, but with $K_{X}$ varying with $n$ and equal to $3n^{\varepsilon}$ or $5n^{\varepsilon}$ according to the field $\Bbbk=\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. Thus: \begin{proposition} For Grassmannian varieties $\mathrm{Gr}(n,q,\Bbbk)$, if $t=(1-\varepsilon)\log n$ with $\varepsilon \in (0,1/4)$, then $$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq 1 - \frac{L n^{\varepsilon}}{m^{2}}\quad \text{with }L=\begin{cases} 16 &\text{if }\Bbbk=\mathbb{R},\\ 24 &\text{if }\Bbbk=\mathbb{C}\text{ or }\mathbb{H}. \end{cases}$$ \end{proposition} \noindent Finally, the deduction of the constants in Theorem \ref{main} goes as follows: \begin{itemize} \item $\mathrm{Gr}(n,q,\mathbb{R})$: $m \geq \frac{n^{2\varepsilon}}{2}$, so the constant can be taken equal to $2\times 16=32$. \item $\mathrm{Gr}(n,q,\mathbb{C})$: $m \geq \frac{n^{2}-1}{n^{2}}\,n^{2\varepsilon} \geq \frac{3}{4}\,n^{2\varepsilon}$, so a possible constant is again $\frac{4}{3}\,24=32$. \item $\mathrm{Gr}(n,q,\mathbb{H})$: $m \geq \frac{2n^{2}-n-1}{2}\,n^{2\varepsilon} \geq \frac{3}{2}\,n^{2\varepsilon}$ for $n \geq 3$, whence a constant $\frac{2}{3}\,24=16$. \end{itemize} These computations end the proof of the cut-off phenomenon. 
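\noindent To give a rough sense of the strength of these lower bounds, here is a purely illustrative numerical evaluation (the values of $n$ and $\varepsilon$ are chosen arbitrarily): for $\mathrm{SO}(n)$ with $n=10^{6}$ and $\varepsilon=0.2$, the estimate $d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar})\geq 1-36\,n^{-2\varepsilon}$ obtained above gives $$d_{\mathrm{TV}}(\mu_{t},\mathrm{Haar}) \geq 1-36\times 10^{-2.4}\approx 0.86$$ at time $t=\alpha\,(1-\varepsilon)\log n$, so the law of the Brownian motion is indeed still far from the Haar measure at this time.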
\section{Appendices (tedious computations)} \subsection{Proof of the upper bound for odd special orthogonal groups}\label{annexsoupper} With the same scheme of growth of partitions as for compact symplectic groups, one has the following bounds: \begin{itemize} \item $\eta_{1,n}$: it is given by the exact formula $\binom{2n+1}{n+1}\,\mathrm{e}^{-\frac{n(n+1)}{2n+1}\log (2n+1)}$, which is indeed smaller than $1$ for $n \geq 5$. \item $\eta_{k\geq 2,n}$: the comparison techniques between sums and integrals give \begin{align*} \log \eta_{k,n} &\leq -\frac{n(2k-1+n)}{2n+1}\log (2n+1) + \frac{2}{2k-1}+\frac{1}{k}+\log(k+n-2)-\log k\\ &\quad+(2k+2n-2)\log(2k+2n-2)+(2k-2)\log(2k-2)-2(2k+n-2)\log(2k+n-2). \end{align*} This bound is decreasing in $k$, whence smaller than its value when $k=2$, which is negative for every value of $n \geq 5$. \item $\eta_{k,l \in \left[\!\left[ 3,n-1\right]\!\right]}$: as before, $\rho_{k,l}$ splits into $\rho_{k,l,(1)}$ and $\rho_{k,l,(2)}$: $$\rho_{k,l}=\!\prod_{j=l+1}^{n} \frac{k+j-1+\lambda_{l+1}-\lambda_{j}}{k+j-l-1+\lambda_{l+1}-\lambda_{j}}\,\frac{k+\lambda_{l+1}+\lambda_{j}+2n-j}{k+\lambda_{l+1}+\lambda_{j}+2n-j-l} \prod_{1\leq i \leq j \leq l}\!\frac{2k+2\lambda_{l+1}+2n+1-i-j}{2k+2\lambda_{l+1}+2n-1-i-j}.$$ The bound on $\log \widetilde{\eta}_{k,l}$, the sum of $\log \rho_{k,l,(2)}$ and of the variation of $-\frac{t_{n,0}}{2}B_{n}(\lambda)$, is \begin{align*} \log \widetilde{\eta}_{k,l} &\leq -\frac{l(2k'-1+2n-l)}{2n+1}\log (2n+1) + \frac{1}{k'+n-l-1}+\log(k'+n-2)-\log(k'+n-l-1)\\ &\quad+(2k'+2n-2)\log(2k'+2n-2)+(2k'+2n-2l-2)\log(2k'+2n-2l-2)\\ &\quad-2(2k'+2n-l-2)\log(2k'+2n-l-2)\\ &\leq-\frac{l(2v+2n+1-l)}{2n+1}\log (2n+1) + \frac{1}{v+n-l}+\log(v+n-1)-\log(v+n-l)\\ &\quad+(2v+2n)\log(2v+2n)+(2v+2n-2l)\log(2v+2n-2l)\\ &\quad-2(2v+2n-l)\log(2v+2n-l) \end{align*} with $k'=k+\lambda_{l+1}=k+v$. On the other hand, in the product $\rho_{k,l,(1)}$, each term of index $j$ writes as \begin{align*} \frac{(k'+n-1/2)^{2}-(\lambda_{j}+n+1/2-j)^{2}}{(k'+n-1/2-l)^{2}-(\lambda_{j}+n+1/2-j)^{2}} &\leq \frac{(k'+n-1/2)^{2}-(\lambda_{l+1}+n+1/2-j)^{2}}{(k'+n-1/2-l)^{2}-(\lambda_{l+1}+n+1/2-j)^{2}} \\ &\leq\frac{k+j-1}{k+j-l-1}\,\frac{k''+2n-j}{k''+2n-j-l}, \end{align*} so the quantity $\rho_{k,l,(1)}$ is bounded by \begin{align*}&\frac{(k+n-1)!}{(k+l-1)!}\,\frac{(k-1)!}{(k+n-l-1)!}\,\frac{(k''+2n-l-1)!}{(k''+n-1)!}\,\frac{(k''+n-l-1)!}{(k''+2n-2l-1)!}\\ &\leq\frac{n!\,(2v+2n-l)!\,(2v+n-l)!}{l!\,(n-l)!\,(2v+n)!\,(2v+2n-2l)!}. \end{align*} Again, Stirling approximation leads to \begin{align*}\log \rho_{k,l,(1)} &\leq (2v+2n-l)\log(2v+2n-l)+(2v+n-l)\log(2v+n-l)-(2v+n)\log(2v+n)\\ &\quad-(2v+2n-2l)\log(2v+2n-2l)+n\log n-l \log l -(n-l)\log (n-l)+\frac{1}{2n-2}, \end{align*} and therefore \begin{align*}\log\eta_{k,l} &\leq -\frac{l(2v+2n+1-l)}{2n+1}\log (2n+1) + \frac{1}{v+n-l}+\log(v+n-1)-\log(v+n-l)\\ &\quad+(2v+2n)\log(2v+2n)+(2v+n-l)\log(2v+n-l)\\ &\quad-(2v+n)\log(2v+n)-(2v+2n-l)\log(2v+2n-l)\\ &\quad+n\log n-l \log l -(n-l)\log (n-l)+\frac{1}{2n-2}\\ &\leq -\frac{l(2n+1-l)}{2n+1}\log (2n+1) + \frac{1}{n-l}+\log(n-1)-\log(n-l)\\ &\quad+n\log n-l \log l -(n-l)\log (n-l)+\frac{1}{2n-2}. \end{align*} The last bound is decreasing in $l$, so it suffices to look at the case $l=3$; then the bound is decreasing in $n$, so it suffices to check that the bound is negative when $n=5$, which is just a computation. We conclude that $\log \eta_{k,l}\leq 0$ for any $k$ and any $l \in \left[\!\left[ 3,n-1\right]\!\right]$. 
\item $\eta_{k,1}$: a bound on $\rho_{k,1}$ is $\frac{k+n-2}{k}\,\frac{2k+2n-1}{2k+2n-3}$, so \begin{align*}\eta_{k,1} &\leq \frac{k+n-2}{k}\,\frac{2k+2n-1}{2k+2n-3}\,\mathrm{e}^{-\frac{2k+2n-2}{2n+1}\log(2n+1)}\leq \frac{(n-1)(2n+1)}{2n-1}\,\mathrm{e}^{-\frac{2n}{2n+1}\log(2n+1)}\\ &\leq \frac{n-1}{2n-1}\,\mathrm{e}^{\frac{\log(2n+1)}{2n+1}}\leq \frac{1}{2}\,\mathrm{e}^{\frac{\log 11}{11}}\leq 1. \end{align*} \item $\eta_{k,2}$: a bound on $\rho_{k,2}$ is $\frac{k+2n-4}{k}\,\frac{k+2n-3}{k+1}\,\frac{2k+2n-1}{2k+2n-5}\,\frac{k+n-1}{k+n-2}$, so $$\eta_{k,2}\leq \frac{k+2n-4}{k}\,\frac{k+2n-3}{k+1}\,\frac{2k+2n-1}{2k+2n-5}\,\frac{k+n-1}{k+n-2}\,\mathrm{e}^{-\frac{4k+4n-6}{2n+1}\log(2n+1)}\leq \frac{n}{2n+1}\,\mathrm{e}^{\frac{4\log(2n+1)}{2n+1}}.$$ The last bound is bigger than $1$ only when $n=5$ or $6$. The maximal value is obtained for $n=5$, and is smaller than $1.09 \leq \frac{11}{10}$. Moreover, if $k \geq 2$, then one has a much better bound, that is smaller than $1$ even when $n=5$ or $6$. \end{itemize} Putting all together, one sees that at most one quotient $\eta_{k,l}$ may be bigger than $1$ (and actually only when $n=5$ or $6$). Thus, we have proved Proposition \ref{tobeprovedinannexsoupper}. \subsection{Proof of the upper bound for even special orthogonal groups}\label{annexsouppereven} We analyze as before the various quotients $\rho_{k,l}$ and $\eta_{k,l}$ corresponding to the growth of partition described by Equation \eqref{evolution}: \begin{itemize} \item $\eta_{k,n}$: the general formula is $$\eta_{k,n}=\left(\prod_{i=1}^{n-1}\frac{2k+2n-2i-1}{2k+n-i-1}\,\frac{2k+2n-2i-2}{2k+n-i-2}\right)\mathrm{e}^{-\frac{2k+n-2}{2}\log(2n)},$$ which is decreasing in $k$ and reduces to $\binom{2n-1}{n}\,\mathrm{e}^{-\frac{n\log(2n)}{2}}$ when $k=1$. This latter bound is always smaller than $1$. \item $\eta_{k,l\in \left[\!\left[ 2,n-1\right]\!\right]}$: the quotient of dimensions $\rho_{k,l}=\rho_{k,l,(1)}\,\rho_{k,l,(2)}$ is equal to $$\prod_{j=l+1}^{n}\frac{k+j-1+\lambda_{l+1}-\lambda_{j}}{k+j-l-1+\lambda_{l+1}-\lambda_{j}}\,\frac{k+\lambda_{l+1}+\lambda_{j}+2n-1-j}{k+\lambda_{l+1}+\lambda_{j}+2n-1-j-l}\,\prod_{1\leq i<j\leq l}\!\frac{2k+2\lambda_{l+1}+2n-i-j}{2k+2\lambda_{l+1}+2n-2-i-j}.$$ The main difference with the previous computations is that $\rho_{k,l,(2)}$ is a product over distinct indices $i<j$, so we will not have to worry about diagonal terms in the corresponding sum (see the argument at the beginning of \S\ref{sympupper}). Hence, with the same notations as before, \begin{align*} \log \widetilde{\eta}_{k,l}&\leq -\frac{l(2k'-2+2n-l)}{2n}\log(2n)+(2k'+2n-3)\log(2k'+2n-3)\\ &\quad+(2k'+2n-2l-3)\log(2k'+2n-2l-3)-2(2k'+2n-l-3)\log(2k'+2n-l-3)\\ &\leq -\frac{l(2v+2n-l)}{2n}\log(2n)+(2v+2n-1)\log(2v+2n-1)\\ &\quad+(2v+2n-2l-1)\log(2n-2l-1)-2(2v+2n-l-1)\log(2v+2n-l-1);\\ \log \rho_{k,l,(1)}&\leq (2v+2n-l-1)\log(2v+2n-l-1)+(2v+n-l-1)\log(2v+n-l-1)\\ &\quad-(2v+n-1)\log(2v+n-1)-(2v+2n-2l-1)\log(2v+2n-2l-1)\\ &\quad+n\log n-l \log l -(n-l)\log (n-l)+\frac{1}{2n-2}. \end{align*} Adding together these bounds, using the concavity of $x \log x$ and then the decreasing character with respect to $v$ gives $$\log \eta_{k,l} = \log \widetilde{\eta}_{k,l} + \log \rho_{k,l,(1)}\leq -\frac{l(2n-l)}{2n}\log(2n)+n \log n - l \log l -(n-l)\log(n-l)+\frac{1}{2n-2},$$ which is decreasing in $l \geq 2$. Then, $$-\frac{2n-2}{n}\log(2n)+n\log(n)-2\log 2 -(n-2)\log(n-2)+ \frac{1}{2n-2}$$ is decreasing in $n$, and one can check that it is negative when $n=5$. 
So, $\eta_{k,l}\leq 1$ for any $k$ and any $l \in \left[\!\left[ 2,n-1\right]\!\right]$. \item $\eta_{k,1}$: one has $\rho_{k,1}\leq \frac{k+2n-3}{k}\,\frac{k+n-1}{k+n-2}$, and therefore $$\eta_{k,1}\leq \frac{k+2n-3}{k}\,\frac{k+n-1}{k+n-2}\,\mathrm{e}^{-\frac{2n+2k-3}{2n}\log(2n)}.$$ Suppose $k\geq 2$; then the right-hand side is smaller than $\frac{2n-1}{2n}\,\frac{n+1}{2n}$, so $\eta_{k,1}\leq 1$. On the other hand, for $k=1$, which happens only once, $$\eta_{1,1}\leq \mathrm{e}^{\frac{\log (2n)}{2n}} \leq \mathrm{e}^{\frac{\log 10}{10}}\leq \frac{4}{3}.$$ \end{itemize} This proves Proposition \ref{tobeprovedinannexsouppereven}. \subsection{Proof of the upper bound for complex Grassmannians}\label{annexsuupper} For a partition of size $p=\lfloor \frac{n}{2}\rfloor$, one has $B_{n}(\lambda)= \frac{2}{n}\sum_{i=1}^{p} \lambda_{i}^{2}+(n+1-2i)\lambda_{i}$ and either $$ A_{n}(\lambda) = \left(\prod_{i=1}^{p}\prod_{j=1}^{p}\frac{\lambda_{i}+\lambda_{j}+n+1-i-j}{n+1-i-j}\right)\left(\prod_{1\leq i<j\leq p} \frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right)^{2} $$ if $n=2p$, or $$ A_{n}(\lambda) = \left(\prod_{i=1}^{p+1}\prod_{j=1}^{p+1}\frac{\lambda_{i}+\lambda_{j}+n+1-i-j}{n+1-i-j}\right)\left(\prod_{1\leq i<j\leq p} \frac{\lambda_{i}-\lambda_{j}+j-i}{j-i}\right)^{2} $$ when $n=2p+1$. Let us give the details when $n=2p$. Again, one looks at $\rho_{k,l}=A_{n}(\lambda)/A_{n}(\mu)$ and $\eta_{k,l}=\rho_{k,l}\,\mathrm{e}^{-\log n (B_{n}(\lambda)-B_{n}(\mu))}$, with $\mu$ and $\lambda$ equal to $$(\lambda_{l+1}+k-1,\ldots,\lambda_{l+1}+k-1,\lambda_{l+1},\ldots,\lambda_{p})_{p}\quad \text{and}\quad (\lambda_{l+1}+k,\ldots,\lambda_{l+1}+k,\lambda_{l+1},\ldots,\lambda_{p})_{p}.$$ The quotient of dimensions is $$\rho_{k,l}=\left(\prod_{j=1}^{l}\frac{(2k'+n-j)(2k'+n-j-1)}{(2k'+n-j-l)(2k'+n-j-l-1)}\right)\left(\prod_{j=l+1}^{p} \frac{(k'-\lambda_{j}+j-1)(k'+\lambda_{j}+n-j)}{(k'-\lambda_{j}+j-l-1)(k'+\lambda_{j}+n-j-l)}\right)^{\!2},$$ and a lower bound is then obtained by the usual replacement $\lambda_{l+1}=\lambda_{j}=0$ and then $k=1$: $$\rho_{k,l} \leq \frac{n-2l+1}{n+1}\,\binom{n+1}{l}^{\!2}.$$ This leads to the inequality $$\eta_{k,l} \leq \frac{n-2l+1}{n+1}\,\binom{n+1}{l}^{\!2}\,\mathrm{e}^{-\frac{2l(n+1-l)}{n}\log n}$$ The last quantity is decreasing in $l$, as the quotient of two consecutive terms of parameters $n,l$ and $n,l+1$ is smaller than $$\left(\frac{n+1-l}{l+1}\,\mathrm{e}^{-\frac{n-2l}{n}\log n}\right)^{2}\leq 1.$$ So, $$\eta_{k,l}\leq \frac{n-1}{n+1}\,(n+1)^{2}\,\mathrm{e}^{-2\log n}=\frac{n^{2}-1}{n^{2}}\leq 1$$ and $A_{n}(\lambda)\,\mathrm{e}^{-\log n\,B_{n}(\lambda)}$ is smaller than $1$ for any partition (we leave to the reader the verification of the other case $n=2p+1$, which is very similar). 
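\noindent As a quick numerical sanity check of the last estimate (the values are chosen only for illustration): for $n=10$ and $l=1$, the bound reads $$\frac{n-2l+1}{n+1}\,\binom{n+1}{l}^{\!2}\,\mathrm{e}^{-\frac{2l(n+1-l)}{n}\log n}=\frac{9}{11}\times 121 \times 10^{-2}=0.99=\frac{n^{2}-1}{n^{2}},$$ in agreement with the general bound $\eta_{k,l}\leq \frac{n^{2}-1}{n^{2}}\leq 1$ derived above.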
\subsection{Expansion of elementary $4$-tensors for unitary groups}\label{expun} For the eigenvectors associated to the value $2n-2$, we shall write $$S(i,j,k,l)=(e[i,j]-e[j,i])^{\otimes 2}-(e[j,k]-e[k,j])^{\otimes 2}+(e[k,l]-e[l,k])^{\otimes 2}-(e[l,i]-e[i,l])^{\otimes 2}.$$ The elementary tensor $e_{i}^{\otimes 4}$ is equal to \begin{align*} &\frac{1}{n(n+1)}\sum_{k,l=1}^{n} e[k,l,k,l]+e[k,l,l,k]+\frac{1}{n(n+2)}\sum_{k \neq i}\sum_{l=1}^{n}\left(\substack{(e[i,l,i,l]-e[k,l,k,l] )+ (e[l,i,l,i]-e[l,k,l,k])\\ +(e[i,l,l,i]-e[k,l,l,k])+ (e[l,i,i,l]-e[l,k,k,l])}\right)\\ &+\frac{1}{n+2}\sum_{k \neq i} e_{i}^{\otimes 4}+e_{k}^{\otimes 4}-(e[i,k]+e[k,i])^{\otimes 2} - \frac{1}{(n+1)(n+2)}\sum_{k<l } e_{k}^{\otimes 4}+e_{l}^{\otimes 4}-(e[k,l]+e[l,k])^{\otimes 2} \end{align*} with the two first terms respectively in $V_{2n^{2}-2}$ and $V_{n^{2}-2}$, and the second line in $V_{-2n-2}$. Similarly, $e_{i}\otimes e_{j} \otimes e_{i} \otimes e_{j}$ is equal to \begin{align*} &\frac{1}{(n-1)(n+1)}\sum_{k=1}^{n}\sum_{l=1}^{n}e[k,l,k,l] - \frac{1}{n(n-1)(n+1)}\sum_{k=1}^{n}\sum_{l=1}^{n}e[k,l,l,k]\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{n(n+2)}\left(\sum_{l=1}^{n}\left(\substack{e[i,l,i,l]-e[j,l,j,l]\\+e[l,j,l,j]-e[l,i,l,i]}\right)\right)+\frac{1}{(n-2)(n+2)}\sum_{k \neq i,j}\left(\sum_{l=1}^{n}\left(\substack{e[i,l,i,l]-e[k,l,k,l]\\+e_[l,j,l,j]-e[l,k,l,k]}\right)\right)\\ &-\frac{1}{n(n-2)(n+2)}\sum_{k \neq i,j} \left(\sum_{l=1}^{n}\left(\substack{e[i,l,l,i]+e[j,l,l,j]-2e[k,l,l,k] \\ + e[l,i,i,l]+e[l,j,j,l]-2e[l,k,k,l] }\right)\right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{4(n-1)(n-2)}\sum_{(k<l )\neq i,j}\!\!2S(i,j,k,l)-S(i,k,j,l)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{2n}\sum_{k\neq i,j}\left(\substack{e[i,j,i,j]+e[j,k,j,k]+e[k,i,k,i] \\ -e[j,i,j,i]-e[k,j,k,j]- e[i,k,i,k]}\right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{4(n+2)}\left(\sum_{k \neq i}e_{i}^{\otimes 4}+e_{k}^{\otimes 4}-(e[i,k]+e[k,i])^{\otimes 2}+\sum_{k\neq j}e_{j}^{\otimes 4}+e_{k}^{\otimes 4}-(e[j,k]+e[k,j])^{\otimes 2}\right)\\ &-\frac{1}{4}\left(e_{i}^{\otimes 4}+e_{j}^{\otimes 4}-(e[i,j]+e[j,i])^{\otimes 2}\right)-\frac{1}{2(n+1)(n+2)}\left(\sum_{k<l} e_{k}^{\otimes 4}+e_{l}^{\otimes 4}-(e[k,l]+e[l,k])^{\otimes 2}\right) \end{align*} with the parts of this expansion respectively in $V_{2n^{2}-2}$, $V_{n^{2}-2}$, $V_{2n-2}$, $V_{-2}$ and $V_{-2n-2}$. \subsection{Expansion of elementary $4$-tensors for compact symplectic groups}\label{expspn} It is a little more tedious than before to find a complete list of ``simple'' eigenvectors of $M_{n,4}$ (or at least a sufficient list to expand simple tensors). 
The list of possible eigenvalues of $M_{n,4}$ is $\{2n+1,n+1,n,3,1,0,-1,-3\},$ and on the other hand, one can easily identify a basis of $V_{2n+1}$: it consists in the three vectors \begin{align*}v_{2n+1,1}&=\sum_{i,j=1}^{n}\left(\substack{e[2i-1,2i,2j-1,2j]+e[2i,2i-1,2j,2j-1]\\-e[2i,2i-1,2j-1,2j]-e[2i-1,2i,2j,2j-1]}\right);\\ v_{2n+1,2}&=\sum_{i,j=1}^{n}\left(\substack{e[2i-1,2j-1,2i,2j]+e[2i,2j,2i-1,2j-1]\\-e[2i,2j-1,2i-1,2j]-e[2i-1,2j,2i,2j-1]}\right);\\ v_{2n+1,3}&=\sum_{i,j=1}^{n}\left(\substack{e[2i-1,2j-1,2j,2i]+e[2i,2j,2j-1,2i-1]\\-e[2i,2j-1,2j,2i-1]-e[2i-1,2j,2j-1,2i]}\right). \end{align*} But then, it becomes really difficult to describe the other eigenspaces. However, one can still find the eigenvector expansion of simple tensors such as $e_{i}^{\otimes 4}$, $e_{i}^{\otimes 2}e_{j}^{\otimes 2}$, or $e[i,j,k,l]$; hence, in the following, we just give these expansions (again it is easy to check that each part of an expansion is indeed an eigenvector). The tensor $e[i,i,i,i]$ is an eigenvector in $V_{-3}$, and on the other hand, $e[2i-1,2i-1,2i,2i]$ decomposes into the eigenvectors \begin{align*} &\frac{1}{2n(2n+1)}\left(v_{2n+1,2}+v_{2n+1,3}\right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{n-2}{4n(n+1)}\sum_{\sigma \in S}\sum_{j \neq i}\left(\substack{e[2i-1,2j-1,2i,2j]+e[2i,2j,2i-1,2j-1]\\-e[2i-1,2j,2i,2j-1]-e[2i,2j-1,2i-1,2j]}\right)^{\sigma}\\ &+\frac{1}{4n(n+1)}\sum_{\sigma \in S}\sum_{j,k\neq i} \left(\substack{e[2j-1,2k,2j,2k-1]+e[2j,2k-1,2j-1,2k]\\-e[2j-1,2k-1,2j,2k]-e[2j,2k,2j-1,2k-1]}\right)^{\sigma}\\ &+\frac{n-1}{2n(n+1)}\left(\substack{2e[2i-1,2i-1,2i,2i]+2e[2i,2i,2i-1,2i-1]-e[2i-1,2i,2i-1,2i]\\-e[2i,2i-1,2i,2i-1]-e[2i,2i-1,2i-1,2i]-e[2i-1,2i,2i,2i-1]}\right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{4(n+1)}\sum_{\sigma \in S}\sum_{j=1}^{n}\left(\substack{e[2i-1,2j-1,2i,2j]+e[2i,2j-1,2i-1,2j]\\-e[2i-1,2j,2i,2j-1]-e[2i,2j,2i-1,2j-1]}\right)^{\sigma} \\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\end{align*} \begin{align*} &+\frac{2n-1}{2(2n+1)(2n+2)}\sum_{\sigma \in S}\sum_{j \neq i} \left(\substack{e[2i-1,2j,2j-1,2i]+e[2i,2j-1,2j,2i-1]\\-e[2i-1,2j-1,2j,2i]-e[2i,2j,2j-1,2i-1]}\right)^{\sigma}\\ &+\frac{1}{2(2n+1)(2n+2)}\sum_{\sigma \in S}\sum_{j, k \neq i} \left(\substack{e[2j-1,2k-1,2j,2k]+e[2j,2k,2j-1,2k-1]\\-e[2j-1,2k,2j,2k-1]-e[2j,2k-1,2j-1,2k]}\right)^{\sigma}\\ &+\frac{(2n-1)(2n-2)}{6(2n+1)(2n+2)}\left(\substack{2e[2i-1,2i-1,2i,2i]+2e[2i,2i,2i-1,2i-1]-e[2i-1,2i,2i-1,2i]\\-e[2i,2i-1,2i,2i-1]-e[2i,2i-1,2i-1,2i]-e[2i-1,2i,2i,2i-1]}\right)\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{n-1}{2(n+1)}\left(e[2i-1,2i-1,2i,2i]-e[2i,2i,2i-1,2i-1]\right) \\ &+\frac{1}{4(n+1)}\sum_{\sigma \in S}\sum_{j \neq i} \left(\substack{e[2i-1,2j,2j-1,2i]+e[2i,2j,2j-1,2i-1]\\-e[2i-1,2j-1,2j,2i]-e[2i,2j-1,2j,2i-1]}\right)^{\sigma}\\ &\text{- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -}\\ &+\frac{1}{6}\sum_{\sigma \in \mathfrak{S}_{4}}'e[2i-1,2i-1,2i,2i]^{\sigma} \end{align*} with the parts of this expansion respectively in $V_{2n+1}$, $V_{n+1}$, $V_{n}$, $V_{0}$, $V_{-1}$, and $V_{-3}$. In these expansions, $S=(\mathbb{Z}/2\mathbb{Z})^{2}$ denotes the group of permutations $\{\mathrm{id},(1,2),(3,4),(1,2)(3,4)\}$. 
The expansion in eigenvectors of $e[2i,2i,2j,2j]$ is \begin{align*} &\frac{1}{6}\left(\substack{2e[2i,2i,2j,2j]+2e[2j,2j,2i,2i]-e[2i,2j,2i,2j] \\ -e[2j,2i,2j,2i]-e[2i,2j,2j,2i]-e[2j,2i,2i,2j]}\right)+\frac{1}{2}\left(\substack{e[2i,2i,2j,2j]\\-e[2j,2j,2i,2i]}\right) +\frac{1}{6}\sum_{\sigma \in \mathfrak{S}_{4}}'e[2i,2i,2j,2j]^{\sigma} \end{align*} with each part respectively in $V_{0}$, $V_{-1}$ and $V_{-3}$; and similarly for the expansions of $e[2i-1,2i-1,2j,2j]$ or $e[2i-1,2i-1,2j-1,2j-1]$. Finally, we skip the expansion in eigenvectors of $e[2i-1,2i,2j-1,2j]$ as it is two pages long. \end{document}
Selfishness is clever

There are at least two groups who play with Selfishness in a clever way: evolutionary psychologists and objectivists / Ayn Rand followers. I say it's clever because it's like playing with a set of math theorems and seeing what some consequences are; it's like trying to put together a puzzle with all the pieces being more or less the same yet different enough that you can't arbitrarily place them.

In evolutionary psychology, it's a given that genes are selfish, and our genes determine what we are and set the limits on who we can be. (The human brain is very malleable, so those limits aren't always clear in humans but they're nevertheless there.) So we look at human universals like altruism, a sense of justice, honor, trust, guilt, friendship, sympathy, suspicion and hypocrisy, and many of these things defy explanation if you can only think that "everyone is out for themselves." The long-standing explanation for these traits is based on Kin Selection, which in a nutshell (I recommend reading the linked paper or Robert Wright's The Moral Animal) is the idea that kin have more or less the same genes, and when a gene arises that increases the likelihood of altruism (a squirrel yelling a warning cry when a predator is spotted and possibly sacrificing itself, or someone sharing a banana), the individuals most likely to benefit from that altruism are closely related and may have that gene as well. Hence the genes still propagate and are selected for; the benefits to an individual's kin who share the gene outweigh any individual loss from the altruism that might otherwise make that individual lose out in the gene pool. This doesn't mean that conscious altruism goes away, or that conscious minds are therefore automatically entirely selfish, just that at the gene level, which is all that matters to evolution, high-level altruism got there from a low-level selfish process. The mechanisms of an altruistic brain are altruistic: altruistic people will feel like they're being altruistic, others will call them altruistic, and for all intents and purposes they are altruistic. At the gene level this altruism is a selfish benefit for that person's (and likely their kin's) genes.

A Fun, Dirty, Computer Engineering Comic

Horrible joke I know, and I suck at drawing, especially "anatomy". Computer Engineers have a trick to convert a bunch of resistors in a delta (triangle) formation to a Y (three prongs) formation and vice versa, and with that trick one can reduce the "hair circuit" shown (that some people apparently call a Christmas tree since it contains both a delta and a Y) to the nice single-resistor trim. The "Christmas tree" appears in Giancoli, because usually Computer Engineers don't have to deal with it, and Giancoli's "solution" is to write down several equations of Kirchhoff current and voltage laws to reduce the circuit. Yuck! Here is the diagram; they want to know the equivalent resistance between points A and C.

Cross posted from my dA account, here's some blog filler so I don't feel like I wrote nothing this month. Also this will add my word-count score. I just tried to read the same thing six times and still didn't parse it, so instead of going to sleep I'm going to do this instead! Tagged by :iconTechnologic-Skies: here are Teh Rules:

Quick CouchDB Fun

I'm a fan of the command line, so when I found out I could just use curl to mess with CouchDB instead of Futon I was happy.
If you want to create a new document with views under some database, you can do this:

curl -d @t.json -u un:pw -H "Content-Type: application/json" -X PUT http://dynamobi.cloudant.com/sw/_design/rar

This will take the contents of the file t.json (which is below) and send them off to CouchDB. It will create a new document called "rar" under the "sw" database. Here is t.json:

Progressive Taxes

I used to be of the opinion that we should have flat taxes, either amount-wise or percentage-wise. That is, after all, the mathematically fair solution if you divide on total money taxed. There are two obvious, killer problems with that, though. One, it severely hurts poor people. Losing 50% of $1 hurts a lot more than losing 50% of $1000 if those two amounts are all two people have, but the cost of a loaf of bread is still 99 cents and so one person has to go hungry. The second problem is that the government then doesn't have enough income to support itself unless the percentage is very high. (And as you raise the percentage, the poor suffer more and more.) So the crafters of our tax laws realized these two problems, and they went a step further by introducing tax brackets. You pay 20% on all income under $X, but every dollar you make above $X, you might get taxed 30% until you hit $Y, and so on. Obviously someone with $1000 does not need $1 as much as someone who only has $2 and wants a gallon of milk, so the idea of the tax bracket again makes sense if you want to ease the suffering of the poor.

Attacking Mere Employees Doing Their Jobs

"I quarrel not with far-off foes, but with those who, near at home, co-operate with, and do the bidding of, those far away, and without whom the latter would be harmless." ~Thoreau

In the post-war Nazi trials, soldiers would justify their horrible actions as just "following orders." Employees of big corporations do less horrible but still offensive things and justify them by saying "I'm just doing my job." (Recent case in point: a TSA agent frisking a 6 year old girl.) Through the Milgram experiment, psychology tells us that these people may even have a point. For whatever reasons, we evolved to respond differently to authority than we might under our own direction. Can we really blame these perpetrators for their actions when they're just victims of the same human malfunction we all have?

A new favorite equation?

In my differential equations class, we noted the function: [math]f(x) = e^{-x^2}[/math] and how it looks really neat when plotted, but in a normal calculus course you'll probably be told that you can't integrate it (it has no elementary antiderivative). We then noted that: [math]\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}[/math]
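For anyone who hasn't seen it, the usual trick behind that value is to square the integral and switch to polar coordinates: [math]\left(\int_{-\infty}^{\infty} e^{-x^2} dx\right)^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\, dx\, dy = \int_0^{2\pi}\int_0^{\infty} e^{-r^2}\, r\, dr\, d\theta = 2\pi \cdot \frac{1}{2} = \pi,[/math] and taking the square root gives [math]\sqrt{\pi}[/math]. The extra factor of [math]r[/math] from the polar Jacobian is what makes the two-dimensional integral doable even though [math]e^{-x^2}[/math] has no elementary antiderivative in one dimension.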
\begin{document} {\title{On the stability of some imaging functionals}} \author{Guillaume Bal \thanks{Department of Applied Physics and Applied Mathematics, Columbia University, New York NY, 10027 ; [email protected]} \and Olivier Pinaud \thanks{Department of Mathematics, Colorado State University, Fort Collins CO, 80523 ; [email protected]} \and Lenya Ryzhik \thanks{Department of Mathematics, Stanford University, Stanford CA 94305 ; [email protected]}} \maketitle \begin{abstract} This work is devoted to the stability/resolution analysis of several imaging functionals in complex environments. We consider both linear functionals in the wavefield as well as quadratic functionals based on wavefield correlations. Using simplified measurement settings and reduced functionals that retain the main features of functionals used in practice, we obtain optimal asymptotic estimates of the signal-to-noise ratios depending on the main physical parameters of the problem. We consider random media with possibly long-range dependence and with a correlation length that is less than or equal to the central wavelength of the source we aim to reconstruct. This corresponds to the wave propagation regimes of radiative transfer or homogenization. \end{abstract} \section{Introduction} Imaging in complex media has a long history with many applications such as non-destructive testing, seismology, or underwater acoustics \cite{Claerbout, Shull, bag-math}. Standard methodologies consist in emitting a pulse in the heterogeneous medium and performing measurements of the wavefield or other relevant quantities at an array of detectors. Depending on the experimental setting, this may give access, for instance, to the backscattered wavefield (and its spatio-temporal correlations), or to the wave energy. The imaging procedure then typically amounts to back-propagating the measured data appropriately. When scattering of the wave by the medium is not too strong, measurements are usually migrated in a homogeneous medium neglecting the heterogeneities. This is the principle of the Kirchhoff migration and similar techniques, and is referred to as coherent imaging~\cite{bleistein}. When scattering is stronger, the interaction between the wave and the medium cannot be neglected and a different model for the inversion is needed. We will only consider weak heterogeneities in this work, so that a homogeneous model is enough to obtain accurate reconstructions. We refer to \cite{BR-AM,BP-M3AS} for a consideration of stronger fluctuations and transport-based imaging. We are interested here in comparing two classes of imaging methods in terms of stability and resolution: one based on the wavefield measurements (and thus linear in the wavefields), such as Kirchhoff migration; and the other one based on correlations of the wavefield (and therefore quadratic in the wavefields), such as coherent interferometry imaging \cite{CINT}. Here, ``stability'' refers to the stability of the reconstructions with respect to changes in the medium or to a measurement noise. Stability and resolution of coherent imaging functionals in random waveguides were addressed in \cite{Borcea-wg}; for functionals based on topological derivatives, see also e.g. \cite{habib-topo}. We refer to \cite{FGPS-07,Garnier-P, GS-velo, CINT-SNR} and references therein for more details on wave propagation in random media and imaging. 
Our analysis is done in the framework of three-dimensional acoustic wave propagation in a random medium with correlation length $\ell_c$ and fluctuations amplitude of order~$\sigma_0$. The quantities $\ell_c$ and $\sigma_0$ are fixed parameters of the experiments. The random medium is allowed to exhibit either short-range or long-range dependence. Our goal is to image a source centered at the origin from measurements performed over a detector array $D$. Denoting by $\lambda$ the central wavelength of the source, we will assume that it is larger than or equal to the correlation length, and that many wavelengths separate the source from the detector, so that we are in a high frequency regime. The case $\lambda=\ell_c$ is referred to as the weak coupling regime in the literature (for an amplitude $\sigma_0\ll 1$), while the case $\lambda \gg \ell_c$ is the stochastic homogenization regime. The ratio $\ell_c/\lambda$ is a critical parameter as it controls, along with $\sigma_0$, the strength of the interaction of the sound waves with the heterogeneities. The regime $\lambda \ll \ell_c$ is addressed in~\cite{CINT-Habib} and is known as the random geometrical optics regime. The stability/resolution analysis is somewhat direct in that case, since a simple expression for the heterogeneous Green's function is available by means of random travel times. This is not the case in our regimes of interest, where the interaction between the waves and the underlying medium is more difficult to describe mathematically. We will work with a simplified configuration and will define reduced imaging functionals that retain the main features of the functionals used in practice while offering more tractable computations. In such a setting, we obtain optimal estimates for the stability of the functional in terms of the most relevant physical parameters. We furthermore quantify the signal-to-noise ratio (SNR). The statistical stability is evaluated by computing the variance of the functionals at the source location. For the wave-based functional (WB), this involves the use of stationary phase techniques for oscillatory integrals, while computations related to the correlation-based functional (CB) involve averaging the scintillation function of Wigner transforms \cite{GMMP,LP} against singular test functions. It is now well-established that correlation-based functionals enjoy a better stability than wave-based functionals at the price of a lower resolution \cite{CINT}. This is due to self-averaging effects of Wigner transforms that we want to quantify here. The paper is organized as follows: in Section \ref{mainresults}, we present the setting and our main results. We define the wave-based and correlation-based functionals and describe our models for the measurements in Section \ref{meas_imag}. Section \ref{secCB} is devoted to the analysis of the resolution and stability of the correlation-based functional and Section \ref{secWB} to that of the wave-based functional. The proofs of the main results are presented in Section \ref{secproofs} and a conclusion is offered in Section \ref{conc}. \section{Setting and main results} \label{mainresults} \subsection{Wave propagation and measurement setting} \label{gene} The propagation of three-dimensional acoustic waves is described by the scalar wave equation for the pressure $p$: $$ \frac{\partial^2 p}{\partial t^2}=\kappa^{-1}(x) \nabla \cdot [\rho^{-1}(x) \nabla p], \qquad x \in \Rm^3, \;t>0, $$ supplemented with initial conditions $p(t=0,x)$ and $p_t(t=0,x)$. 
We suppose for simplicity that the density is constant and equal to one; $\rho=1$. We also assume that the compressibility is random and takes the form $$\kappa(x)=\kappa_0 \left(1+\sigma_0 V \left(\frac{x}{\ell_c} \right)\right), \qquad \kappa_0=1,$$ where $\sigma_0$ measures the amplitude of the random fluctuations and $\ell_c$ their correlation length. We suppose that $V$ is bounded and $\sigma_0$ is small enough so that $\kappa$ remains positive. The sound speed $c=(\kappa\rho)^{-\frac12}$ thus satisfies $c=1+\mathcal O(\sigma_0) \simeq 1$ and the average sound speed is $c_0=1$. The random field $V$ is a mean-zero stationary process with correlation function $$R(x)=\mathbb E \{V(x+y) V(y)\},$$ where $\mathbb E$ denotes the ensemble average over realizations of the random medium. We assume~$R$ to be isotropic so that $R(x)=R(|x|)$. We will consider two types of correlation functions: (i) integrable $R$, which models random media with short-range correlations; and (ii) non-integrable $R$ corresponding to media with long-range correlations. The latter media are of interest for instance when waves propagate through a turbulent atmosphere or the earth upper crust \cite{dolan,sidi}. Such properties can be translated into the power spectrum $\hat{R}$, the Fourier transform of $R$, by supposing that $\hat{R}$ has the form $$ \hat{R}(k)=\frac{S(k)}{|k|^\delta}=\int_{\Rm^3} e^{-i x \cdot k} R(x)dx, \qquad 0 \leq \delta <2, $$ where $S$ is a smooth function with fast decay at infinity. A simple dimensional analysis shows that $R(x)$ behaves likes $|x|^{\delta-3}$, $\delta<3$, as $|x|\to\infty$, which is not integrable for large $|x|$. The case $\delta=0$ corresponds to integrable $R$ since the function $S$ is regular. We assume that $\delta<2$ so that the transport mean free path (see section \ref{models}) is well-defined. Propagation in media such that $2<\delta<3$ is still possible but requires an elaborate theory of transport equations with highly singular scattering operators that are beyond the scope of this paper. Our setting of measurements is the following: we assume that measurements of the wavefield are performed on a three dimensional detector $D$ and at a fixed time $T$. We assume that the detector $D$ is a cube centered at $x_D=(L,0,0)$ of side $l<L$; see figure \ref{fig1}. Our goal is then to image an initial condition $$p(t=0,x)=p_0(x), \qquad \partial_t p(t=0,x)=0,$$ localized around $x_0 =(0,0,0)$. Practical settings of measurements usually involve recording in time of the wavefield on a surface detector. The two configurations share similar three-dimensional information: spatial 3D for our setting, and spatial 2D + time 1D for the standard configuration. Using wave propagation in a homogeneous medium, it is also relatively straightforward to pass from one measurement setting to the other. Our choice of a 3D detector was made because it offers slightly more tractable computations for the stability of the imaging functionals while qualitatively preserving the same structure of data. As waves approximately propagate in a homogeneous medium with constant speed $c_0$, it takes an average time $t_D=(L-l/2)/c_0$ for the wave to reach the detector. We will therefore suppose that $T>t_D$, and even make the assumption that $T= L/c_0$ so that the wave packet has reached the center of the detector. Note that in such a measurement configuration, the range of the source is then known since the initial time of emission is given. We therefore focus only on the cross-range resolution. 
We consider an isotropic initial condition obtained by Fourier transform of a frequency profile $g$ (a smooth function that decays rapidly at infinity): $$ p_0(x)= \lambda^3 \int_{\Rm^3} g( B^{-1} (|k|-|k_0|/\lambda)) e^{-i k \cdot x} dk, $$ where $|k_0|$ is a non-dimensional parameter, $\lambda$ is the central wavelength, and $B c_0^{-1}$ the bandwidth. Rescaling variables as $x \to x L$, $t \to t L/c_0 $, introducing the parameters $\eta=\ell_c/\lambda$, $\varepsilon=\lambda/L$, and still denoting by $p$ the corresponding rescaled wavefield, the dimensionless wave equation now reads \begin{equation} \label{waveq} \left(1+\sigma_0 V \left(\frac{x}{\eta \varepsilon} \right) \right) \frac{\partial^2 p}{\partial t^2}=\Delta p, \quad p(t=0,x)=p_{0}^\varepsilon(x), \quad \frac{\partial p}{\partial t}(t=0,x)=0. \end{equation} We quantify the bandwidth of the source in terms of the central frequency by setting $B=(L \varepsilon \mu)^{-1}$, where $\mu$ is a given non-dimensional parameter such that $\mu \gg 1$. We suppose that $\mu \ll 1/\sqrt{\varepsilon}$ so that the initial condition can be considered as broadband. More details about the latter condition will be given later. The rescaled initial condition then has the form \begin{equation} \label{CI1} p_{0}^\varepsilon(x)=\mu \varepsilon^{3} \int_{\Rm^3} g\left( \varepsilon \mu\left(|k|-\frac{|k_0|}{\varepsilon}\right)\right) e^{-i k \cdot x} dk. \end{equation} The normalization is chosen so that $p_0^\varepsilon(0)$ is of order $O(1)$. If $\hat g$ denotes the Fourier transform of $g$, we can write $$ p_{0}^\varepsilon(x) = p_{0}^\varepsilon(|x|)\simeq |k_0|^2 \int_{S^2} \hat g\left(\frac{\hat k \cdot x}{ \varepsilon \mu}\right)e^{-i \frac{|k_0|}{\varepsilon} \hat k \cdot x} d \hat k. $$ Above, the symbol $\simeq$ means equality up to negligible terms and $S^2$ is the unit sphere of $\Rm^3$. The initial condition is therefore essentially a function with support $\varepsilon \mu$ oscillating isotropically at a frequency $|k_0|/\varepsilon$. \begin{figure} \caption{Geometry} \label{fig1} \end{figure} As explained in the introduction, we assume that $\ell_c \leq \lambda$, which implies $\eta \leq 1$. The case $\eta \sim 1$ leads to the radiative transfer regime in the limit $\varepsilon \to 0$ \cite{RPK-WM}. The case $\eta\ll1$ corresponds to the homogenization regime \cite{Bal-homog} and a propagation in an effective medium (which is here the homogeneous medium since $\sigma_0$ is small). The opposite case $\eta \gg 1$ gives rise to the random geometric regime addressed in \cite{CINT-Habib}. For the asymptotic analysis, we recast the scalar wave equation (\ref{waveq}) as a first-order hyperbolic system on the wavefield $\bu=(\bv, p)$, where $\bv$ is the velocity: $$ \rho \frac{\partial \bv}{ \partial t }+\nabla p=0, \qquad \kappa \frac{\partial p}{ \partial t }+\nabla \cdot \bv=0, $$ augmented with initial conditions $p(t=0,x)=p_0^\varepsilon\left(x \right)$ and $\bv(t=0,x)=0$. The latter system is rewritten as \begin{equation} \label{eq:hypsyst} \left( I+\sigma_0 \mathcal V \left(\frac{x}{\varepsilon \eta}\right)\right)\pdr{\bu}t + D^j \pdr{\bu}{x_j} =0, \end{equation} where $\mathcal V=\textrm{diag}(\mathbf 0, V) \in \Rm^{4 \times 4}$, $D^j_{mn}=\delta_{m4}\delta_{nj}+ \delta_{n4}\delta_{mj}$ is a $4\times4$ symmetric matrix for $1\leq j\leq3$. Above and in the sequel, we use the Einstein convention of summation over repeated indices. 
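Note that, as claimed above, the normalization in \eqref{CI1} indeed makes $p_0^\varepsilon(0)$ of order one. Here is a heuristic verification, keeping only the leading order in $\varepsilon$ and $\mu^{-1}$: passing to spherical coordinates and substituting $s=\varepsilon\mu(r-|k_0|/\varepsilon)$, $$ p_0^\varepsilon(0)=4\pi\mu\varepsilon^3\int_0^\infty g\left(\varepsilon\mu\left(r-\frac{|k_0|}{\varepsilon}\right)\right)r^2\,dr \simeq 4\pi\mu\varepsilon^3\,\frac{|k_0|^2}{\varepsilon^2}\,\frac{1}{\varepsilon\mu}\int_{\Rm}g(s)\,ds=4\pi|k_0|^2\int_{\Rm}g(s)\,ds, $$ since the integral is dominated by the spherical shell $|k|\simeq|k_0|/\varepsilon$ of thickness of order $(\varepsilon\mu)^{-1}$.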
\subsection{Results} \label{results} Our main results concern the signal-to-noise ratio at the point $x=0$ defined by $$ SNR=\frac{\mathbb E \{I\}(0) }{\sqrt{\textrm{Var} \{ I\}(0)}}, \qquad I=I^C \textrm{ or } I^W, $$ where $I^C$ stands for the CB functional and $I^W$ for the WB functional, which are defined further in section \ref{meas_imag}. We will also quantify the support of $\textrm{Var} \{ I\}(z)$. We will obtain optimal estimates for the CB and WB functionals in terms of the main parameters that define them as well as $\varepsilon$, $\eta$, $\delta$ and $\mu$. Our measurements of the pressure are assumed to have the form $$p(t,x)=\mathbb E \{p\}(t,x) +\delta p^\varepsilon (t,x)+\sigma_n n_p^\varepsilon(t,x),$$ where $\delta p^\varepsilon (t,x)$ models the statistical instabilities in the Born approximation and $\sigma_n n_p^\varepsilon(t,x)$ is an additive mean zero noise with amplitude $\sigma_n$. Note that by construction, coherent-based functionals will perform well only in the presence of relatively weak heterogeneities. Hence, modeling the statistical instabilities at first order is both practically relevant and mathematically feasible. Taking into account second order interactions is considerably more difficult mathematically; see \cite{BP-CPDE2} for a stability analysis of Wigner transforms (and not of the more complicated imaging functionals) in the paraxial approximation. The term $\sigma_n n_p^\varepsilon$ models an additive noise at the detector and also takes into account in a very crude way the higher order interactions between the wave and the medium that are not included in $\delta p^\varepsilon$. We suppose that $n_p^\varepsilon$ oscillates at a frequency $\varepsilon^{-1}$ (so that a simple frequency analysis cannot separate the real signal from the noise) and that it is independent of the random medium. The variance can then be decomposed as $\textrm{Var} \{ I^C\}=V^C+V_n^C$, where $V^C$ is the variance corresponding to the single scattering term and $V_n^C$ the noise contribution. We also write $\textrm{Var} \{ I^W\}=V^W+V_n^W$. Note that the measurement noise can actually be much larger than the average $\mathbb E \{p\}$. Indeed, a simple analysis of $\mathbb E \{p\}(t,x)$ shows that it is of order $\varepsilon$ when $|x|=1$ (omitting the absorption). It is the refocusing properties of the functionals that lead to a reconstructed source of order one from measurements of order $\varepsilon$. We formally rescale the single scattering instabilities by $e^{-c_0 \Sigma t/2}$, where $\Sigma^{-1}$ is the transport mean free path defined in section \ref{models}, so that the first-order interaction between the average field $\mathbb E \{p\}$ (proportional to $e^{-c_0 \Sigma t/2}$) and the medium has a comparable amplitude to $\mathbb E \{p\}$. The fact that the single scattering instabilities are exponentially decreasing as the ballistic part can be proved in simplified regimes of propagation, where a closed-form equation for their variance can be obtained; see e.g. \cite{B-Ito-04, BP-CPDE}. We need to introduce another important parameter, which is the typical length over which correlations are calculated in the CB functional. In dimensionless units, we define it in terms of the wavelength by $N_C \varepsilon$. We assume that the detector cannot perform subwavelength measurements, which implies $N_C \geq 1$. We suppose for simplicity that correlations are calculated isotropically. 
Accounting for anisotropic correlations would add additional parameters and technicalities, but presents no conceptual difficulties. Since the resolution and the stability of the CB functional are mostly influenced by $N_C \varepsilon$ and not by the size of the detector as $N_C \varepsilon \leq l/L$ by construction (see section \ref{meancint} below), we will systematically suppose that the detector is sufficiently large compared to the wavelength so that its effects on the stability can be neglected in a first approximation. The parameter $N_C$ is crucial in that it controls the resolution of the CB functional (shown in section \ref{meancint} to be $L/N_C$) and its stability. Small values of $N_C$ yield a good stability for a poor resolution, while large values lead to less stability with a resolution comparable to that of the WB functional. \begin{table}[ht] \caption{Notation} \begin{tabular}{c|lc} $L$ & Source-detector distance; see Fig. \ref{fig1}\\ $l$ & Size of the array; see Fig. \ref{fig1}\\ $\sigma_0$ & Amplitude of the random fluctuations \\ $\ell_c$ & Correlation length of the random fluctuations\\ $\delta$ & The correlation function decreases as $|x|^{\delta-3}$, $0 \leq \delta <2$ \\ $\lambda$ & Central wavelength of probing signal \\ $\varepsilon$, $\eta$ & $\varepsilon=\frac\lambda L$ with $L$ distance of propagation; $\eta=\frac{\ell_c}{\lambda}$ \\ $B$ & $Bc_0^{-1}$ is bandwidth with $c_0$ background sound speed \\ $\mu$ & $\mu=\frac1{L\varepsilon B}$ with width of probing signal equal to $\mu\lambda$\\ $\sigma_n$ & Amplitude of detector noise $\sigma_n n^\varepsilon_p(t,x)$\\ $I^C$ and $I^W$ & Correlation-based (CB) and wave-based (WB) imaging functionals, respectively \\ $V^C$ and $V^W$ & Variance of the above imaging functionals\\ $N_C$ & $\varepsilon N_C$ is length over which correlations are considered in $I^C$ \\ $\frac{\lambda L}l$ & Cross-range resolution of wave-based functional (Rayleigh formula)\\ $\frac{L}{N_C}$ & Cross-range resolution of correlation-based functional \\ $\Sigma$ & Inverse of transport mean free path; see \eqref{RTE2}. \end{tabular} \end{table} Our main results are the following: \begin{theorem} \label{theo} Denote by $V^C(z)$ (resp. $V_n^C$) and $V^{W}(z)$ (resp. $V_n^W$) the variances of the correlation-based and wave-based functionals defined in section \ref{meas_imag} for the single scattering contribution (resp. the noise contribution). Let $\Sigma^{-1}$ be the transport mean free path defined in \eqref{RTE2} in section \ref{models} below. Then we have: \begin{align*} &V^C(0) \sim e^{-2\Sigma L}\sigma_0^2 \; \mu^{-2}\left(\frac{\ell_c}{\lambda}\right)^{3-\delta} \left(\frac{\lambda}{L}\right)^{4+(1-\delta)\wedge 0} \bigg( \left(\frac{L}{\lambda}\right)^4\wedge (N_C\mu)^4 \bigg)\\ &V^C(0) \sim e^{-2\Sigma L}\sigma_0^2 \; \left(\frac{\lambda}{L}\right)^{4} \left[\frac{1}{\mu^2}\left(\frac{\ell_c}{\lambda}\right)^{3-\delta}\left(\frac{\lambda}{L}\right)^{(1-\delta)\wedge 0}\bigg( \left(\frac{L}{\lambda}\right)^4\wedge (N_C\mu)^4 \bigg) \right] \vee \left[ \left(\frac{\ell_c}{\lambda}\right)^{(3-\delta)\wedge 2} N_C^4\right]\\ &V^C_n(0) \sim e^{-\Sigma L} \sigma_n^2 \left(\frac{\lambda}{L}\right)^{4} N_C^4 \\ &V^W(0) \sim e^{-\Sigma L}\sigma_0^2, \qquad V^W_n(0) \sim \sigma_n^2. \end{align*} Moreover, $V^C$, $V^C_n$ and $V^W$ are mostly supported on $|z|\leq \mu \varepsilon$. Above, the notation $\sim$ means equality up to multiplicative constants independent of the main parameters, and $a \wedge b=\min(a,b)$, $a \vee b=\max(a,b)$ . 
Denote now by $SNR^C$ (resp. $SNR_n^C$) and $SNR^{W}$ (resp. $SNR_n^W$) the corresponding signal-to-noise ratios. Then: $$ SNR^C_{tot}(0) = SNR^C(0)\wedge SNR_n^C(0) \qquad SNR_{tot}^W(0) = SNR^W(0)\wedge SNR_n^W(0), $$ where \begin{align*} &SNR^C(0) \sim \frac{1}{\sigma_0} \left[\mu \left(\frac{\lambda}{\ell_c}\right)^{(3-\delta)/2}\left(\frac{L}{\lambda}\right)^{(((1-\delta)/2)\wedge 0)} \right] \wedge \left[\left(\frac{\lambda}{\ell_c}\right)^{((3-\delta)/2)\wedge 1} \left(\frac{L}{N_C \lambda}\right)^2 \wedge \mu^2 \right]\\ & SNR^C_n(0) \sim \frac{e^{-\frac12\Sigma L}}{\sigma_n} \left(\left(\frac{L}{N_C \lambda}\right)^2 \wedge \mu^2\right) \\ &SNR^W(0) \sim \frac{1}{\sigma_0}, \qquad SNR^W_n(0) \sim \frac{e^{-\frac12\Sigma L}}{\sigma_n}. \end{align*} \end{theorem} We below comment on the results of the theorem. \paragraph{Comparison CB-WB.} Let us consider first the case of short-range correlations $\delta=0$. We neglect absorption at this point (i.e. $e^{-\Sigma L}=1$) and will take it into account when considering the SNR of the external noise. Let us start with the single scattering contributions $SNR^C$ and $SNR^W$. Suppose first that $N_C$ is small enough and $\mu$ large enough so that $(L/N_C\lambda)\wedge \mu=\mu$. Then, $$SNR^C \sim \frac{1}{\sigma_0} \; \left(\frac{\lambda}{\ell_c} \right)\mu .$$ Hence, the SNR increases when the following quantities increase: $\lambda/ \ell_c$ (the dynamics gets closer to the homogenization regime where the homogenized solution is deterministic), $\mu$ (smaller bandwidth, which results in a loss of range resolution) and $\sigma_0^{-1}$ (weaker fluctuations). The fact that the SNR increases as $\mu$ stems from the self-averaging effects of the quadratic functional. There are no such effects for $V^W$ and we find $SNR^C(0)=SNR^W(0) (\mu \lambda/\ell_c)$ so that the CB functional is more stable than the WB functional in this configuration of small $N_C$, whether in the radiative transfer regime ($\lambda=\ell_c$) or in the homogenization regime ($\lambda \gg \ell_c$). It is shown in section \ref{meancint} that the resolution of the CB functional is $L/N_C$, so that the best resolution is achieved for the largest possible value $N_C$, where correlations are calculated over the largest possible domain, namely $N_C = l/\lambda$. Hence, the best resolution is $\lambda L/l$, which is the celebrated Rayleigh formula and the same cross-range resolution as the WB functional. Now, for this best resolution and when $l \mu > L$, then $(L/N_C\lambda)\wedge \mu=(L/N_C\lambda)$ so that $SNR^C(0)=SNR^W(0) (\lambda/\ell_c)$ (equality here means equality up to multiplicative constants independent of the parameters $\lambda$, $N_C$, $\ell_c$, $\sigma_0$). This means that in the absence of external noise, and in a weak disorder regime where multiple scattering can be neglected, the SNR of the CB and WB functionals differ only by the factor $(\lambda / \ell_c)$ for a similar resolution. This is a term of order one in the radiative transfer regime, and a large term in the homogenization regime. In the radiative transfer regime, in order to significantly increase the CB SNR compared to WB, one needs to decrease $N_C$ and therefore to lower the resolution. This is the classical stability/resolution trade off as was already observed in \cite{CINT-Habib} for the random geometrical regime. Note also that the statistical errors for both functionals are essentially localized on the support (of diameter $\mu \lambda$) of the initial source. 
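To fix ideas, here is a purely illustrative example (the numbers are arbitrary and all relations hold only up to the multiplicative constants hidden in $\sim$): in the radiative transfer regime $\lambda=\ell_c$ with $\delta=0$, $\sigma_0=0.1$ and $\mu=10^2$, choosing $N_C$ small enough that $L/(N_C\lambda)\geq\mu$ yields $SNR^C\sim\mu/\sigma_0=10^3$ while $SNR^W\sim 1/\sigma_0=10$; the price to pay is a cross-range resolution $L/N_C\geq\mu\lambda$, that is, no finer than the support of the source itself.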
We consider now the noise contributions $V_n^C$ and $V^W_n$. A first important observation is that $V^C_n(x)$ is mostly localized on the support of the source, while $V^W_n(x)$ is essentially a constant everywhere. This stems from the fact that in the CB functional, the noise is correlated with the average field so that the functional keeps track of the source direction, while in the WB functional the noise is correlated with itself, see sections \ref{flucmod} and \ref{secWB}. Moreover, the SNRs are
$$ SNR^W_n(0) \sim \frac{e^{-\frac12\Sigma L}}{\sigma_n}, \qquad SNR^C_n(0) \sim SNR^W_n(0)\left(\left(\frac{L}{N_C \lambda}\right)\wedge \mu \right)^2. $$
As for the single scattering SNR, for identical resolutions (i.e. when $N_C=l/\lambda$), the SNRs are comparable. In order to obtain a better SNR for CB, one needs to lower the resolution (and therefore decrease $N_C$) so that $(L/(N_C \lambda)) \wedge \mu =\mu$ and $SNR^C_n(0)=SNR^W_n(0) \mu \gg SNR^W_n(0)$.
\paragraph{Minimal wavelength for a given SNR.} We want to address here the question of the lowest central wavelength $\lambda_m$ of the source (and therefore the best resolution) that can be achieved for a given SNR. In order to do so, we need to take into account the frequency-dependent absorption factor. We first consider the WB functional. We assume for concreteness that the noise contribution is larger than the single scattering one, but the same analysis holds for the reversed situation. Let us fix the SNR at one. Hence, $\lambda_m$ has to be such that $e^{-\Sigma L/2}=\sigma_n$. Writing $\sigma_n=e^{-\tau_n}$, this yields $\Sigma L=2 \tau_n$. We will see in section \ref{mean} that $\Sigma$ admits the following expression
$$ \Sigma \sim \frac{ \sigma_0^2 \eta^{3-\delta}}{\varepsilon (1-\frac{\delta}{2})}=\frac{ \sigma_0^2 \ell_c^{3-\delta} L}{\lambda_m^{4-\delta} (1-\frac{\delta}{2})}, \qquad \textrm{which gives} \qquad \lambda_m^{4-\delta} \sim \frac{ \sigma_0^2 \ell_c^{3-\delta} L}{\tau_n (1-\frac{\delta}{2})}. $$
Hence, as can be expected, $\lambda_m$ decreases as the fluctuations of the random medium and the intensity of the noise decrease (i.e. $\sigma_0 \downarrow$ and $\tau_n \uparrow$). Remark as well that $\lambda_m$ decreases as $\ell_c$ does, and in a faster way compared to $\sigma_0$ or $\tau_n$. This is also expected since the limit $\ell_c \to 0$ corresponds to the homogenization limit in which the wave propagates in a homogeneous medium provided $\sigma_0$ is small. The measurements are therefore primarily coherent and both the CB and WB functionals perform well. Let us consider now the CB functional and let $\delta=0$. Since $SNR^C$ is greater than $SNR^W$ when $(L/(N_C \lambda))\wedge \mu \gg 1$, we may expect in principle to find a lower minimal wavelength. A way to exploit this fact is to consider a central wavelength of the source $\lambda_m / \alpha$, for the $\lambda_m$ above and some $\alpha>1$, and to compute correlations over a sufficiently small domain in order to gain stability, but a domain not too large so that the resolution is still better than $\lambda_m$. The resolution of the CB functional being $L/N_C$, we choose $N_C= L \beta / \lambda_m$, with $\beta>1$ so that the resolution $L/N_C=\lambda_m/ \beta$ is smaller than $\lambda_m$. The prefactor in the SNR is then the square of $(L\alpha/ (N_C \lambda_m))\wedge \mu =(\alpha/\beta) \wedge \mu$, which we suppose is equal to $(\alpha/\beta)^2$.
We find
$$ SNR^C(0) \sim \sigma_n^{4\alpha-1} (\alpha/\beta)^2, $$
and we look for $\alpha>1$ and $\beta>1$ such that $\sigma_n^{4\alpha-1} \alpha/\beta=1$. Since $\sigma_n \leq 1$, this is possible only when $\alpha$ is not too large (so that the central wavelength of the source cannot be too small, otherwise absorption is too strong) and when $\sigma_n$ is not too small either. When $\sigma_n$ is below the threshold, the minimal wavelength is the same for the CB and WB functionals. Hence, compared to the WB functional, the averaging effects of the CB functional can be exploited in order to improve the optimal resolution only when the noise is significant. Finding the optimal $N_C$ is actually a difficult problem, and is addressed numerically in \cite{Borcea-adapt}.
\paragraph{Effects of long-range correlations.} The strongest such effect is seen in $\Sigma$ since $\Sigma \to \infty$ as $\delta \to 2$. Hence, as the fluctuations get correlated over a larger and larger spatial range, the mean free path decreases and the amplitude of the signal becomes very small. Coherent-based functionals are therefore not efficient in random media with long-range correlations, and inversion methodologies based on transport equations \cite{BR-AM, BP-M3AS} that rely on incoherent information should preferably be used. There is also a loss of stability in the variance of the single scattering term for the CB functional, which for sufficiently long correlations (i.e. $\delta>1$) becomes larger than in the short-range case. In such a situation, for the SNR of the CB functional to be larger than that of the WB functional, one needs to decrease the resolution by a factor greater than in the short-range case. Notice moreover that there is an effect of the long-range dependence on the term $(\ell_c/\lambda)^{3-\delta}$ that measures the distance to the homogenization regime. As the medium becomes correlated over larger distances, the variance increases, which suggests that the homogenization regime becomes less accurate.
\section{Imaging functionals and models} \label{meas_imag} We introduce in sections \ref{WBfunc} and \ref{CBfunc} the WB and CB functionals, and in section \ref{models} our models for the measurements.
\subsection{Expression of the WB functional} \label{WBfunc} We give here the expression of the WB functional for a generic solution of the wave equation, denoted by $p$, and its time derivative $\partial_t p$. The solution to the homogeneous wave equation with regular initial conditions $(p(t=0)=q_0, \partial_t p(t=0)= q_1)$, for $(q_0,q_1)$ given, reads formally in three dimensions
$$ p(t)=\partial_t G(t,\cdot) * q_0 + G(t,\cdot) * q_1, \qquad G(t,x)=\frac{1}{4 \pi |x|} \delta_0(t-|x|), $$
where $*$ denotes convolution in the spatial variables and $\delta_0$ the Dirac measure at zero. In Fourier space, this reduces to
$$ \mathcal F p(t,k)= \cos |k| t \; \mathcal F q_0(k)+\frac{\sin |k| t}{|k|}\mathcal F q_1(k), $$
where we define the Fourier transform as $\mathcal F p(k)=\int_{\Rm^3} e^{-i k \cdot x} p(x)dx$. From the data $(p(T,x), \partial_t p(T,x))$ at a time $t=T$ for $x \in D$, the natural expression of the WB functional in our setting is obtained by backpropagating the measurements, similarly to the time reversal procedure \cite{Fink-Prada-01}.
This leads to the functional
$$ I^W_0(x)=[\partial_t G(T,\cdot) * {\mathbbmss{1}}_D p(T,\cdot)- G(T,\cdot) * {\mathbbmss{1}}_D \partial_t p(T,\cdot)](x), $$
where ${\mathbbmss{1}}_D$ denotes the characteristic function of the detector. Using Fourier transforms, this can be recast as
$$ I^W_0(x)=\mathcal F^{-1}_{k \to x}\left( \cos |k| T \; \mathcal F ({\mathbbmss{1}}_D p(T,\cdot))(k)-\frac{\sin |k| T}{|k|} \mathcal F ( {\mathbbmss{1}}_D \partial_t p(T,\cdot))(k)\right). $$
This expression is slightly different from the classical Kirchhoff migration functional because of our different measurement setting. It nevertheless performs the same operation of backpropagation. Assuming that the entire wavefield is measured, i.e. $D=\Rm^3$, and that $q_1=0$ so that $\mathcal F p(t,k)= \cos |k| t \; \mathcal F q_0(k)$, one recovers the initial condition perfectly, i.e. $I^W_0(x)=q_0(x)$. In practice, the entire wavefield is generally not available and diffraction effects limit the resolution. In order to simplify the calculations, we assume without loss of generality that only the pressure $p(T,x)$ is used for this functional. This modifies $I^W_0$ as
\begin{equation} \label{expK} I^W(x)=\mathcal F^{-1}_{k \to x}\left( \cos |k| T \; \mathcal F ({\mathbbmss{1}}_D p(T,\cdot))(k)\right). \end{equation}
This significantly reduces the technicalities of our derivations while affecting the reconstructions very little. Indeed, for localized initial conditions, the value of the maximal peak of the functional $I^W$ is divided by a factor two compared to that of the full $I^W_0$: if $D=\Rm^3$, and $\mathcal F p(t,k)= \cos |k| t \; \mathcal F q_0(k)$, then
$$ I^W(x)=\mathcal F^{-1}_{k \to x}\left( (\cos |k| T)^2 \mathcal F q_0(k) \right)=\frac{1}{2}\mathcal F^{-1}_{k \to x}\left( \mathcal F q_0(k) \right)+\frac{1}{2}\mathcal F^{-1}_{k \to x}\left( \cos 2 |k| T\mathcal F q_0(k) \right). $$
The first term above yields $\frac{1}{2}q_0(x)$ while the second one is essentially supported on a sphere of radius $2T$ far away from the source. We will analyse in section \ref{secWB} the expected value and the variance of the functional $I^W$ for random measured wavefields.
\subsection{Expression of the CB functional} \label{CBfunc} We define here the CB functional for a measured wavefield $\bu$. This requires us to introduce the Wigner transform of $\bu$ \cite{LP,GMMP} and to decompose it into propagating and vortical modes. Since the initial velocity is identically zero, the amplitude of the latter modes remains zero at all times, see \cite{RPK-WM}. The projection onto the propagating mode is done with the eigenvectors of the dispersion matrix $L(k)=k_i D^i$ where $D^i$ was defined in \fref{eq:hypsyst}. These vectors are given by $b_\pm(k)=(\hat k,\pm 1)/\sqrt{2} $, with $\hat k=k/|k|$, and we define in addition the matrices $B_{\pm}=b_\pm \otimes b_\pm$, where $\otimes$ denotes tensor product of vectors. The full matrix-valued Wigner transform of $\bu$ is defined by
$$ W^\varepsilon(t,x,k)=\frac{1}{(2\pi)^3} \int_{\Rm^3} e^{i \, k \cdot y}\,\bu(t,x-\frac{\varepsilon y}{2}) \otimes \bu(t,x+\frac{\varepsilon y}{2}) \, dy. $$
The quantity $W^\varepsilon$ is a real-valued matrix. In our setting, the field $\bu$ is measured at a time $t=T=1$.
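To make the previous construction concrete, the following Python sketch computes a one-dimensional, scalar analogue of the Wigner transform defined above on a discrete grid. It is only an illustration under simplifying assumptions (scalar field, one spatial dimension, arbitrary grid and window sizes, unnormalized $k$ axis); the functional studied in this paper uses the three-dimensional, matrix-valued transform of the four-component field $\bu$.
\begin{verbatim}
import numpy as np

# 1D scalar analogue of the Wigner transform: correlate u(x - eps*y/2) with
# u(x + eps*y/2) over a window of offsets y and Fourier transform in y.
# The field u, the grid and the window size are arbitrary illustrative choices.
eps  = 0.05
nx   = 512
x    = np.linspace(-1.0, 1.0, nx)
u    = np.exp(-(x / 0.1) ** 2) * np.cos(x / eps)   # localized oscillatory field

nwin = 64                                          # offsets y kept on each side
def wigner_slice(i):
    """Discrete Wigner transform at the grid point x[i] (k axis unnormalized)."""
    m  = np.arange(-nwin, nwin)
    il = np.clip(i - m, 0, nx - 1)                 # grid index of x - eps*y/2
    ir = np.clip(i + m, 0, nx - 1)                 # grid index of x + eps*y/2
    corr = u[il] * np.conj(u[ir])
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))).real

W = np.array([wigner_slice(i) for i in range(nx)])  # rows: position, columns: wavenumber
i_max = np.unravel_index(np.argmax(np.abs(W)), W.shape)[0]
print("Wigner mass is concentrated near the source location x =", x[i_max])
\end{verbatim}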
We compute the correlations over a domain $\EuScript{C} \subset D$, so that we do not form the full Wigner transform but only a smoothed version of it given by
$$ W^\varepsilon_S(T,x,k)=\frac{1}{(2\pi)^3} \int_{x\pm \frac{\varepsilon y}{2} \in \EuScript{C}} e^{i \, k \cdot y}\,\bu(T,x-\frac{\varepsilon y}{2}) \otimes \bu(T,x+\frac{\varepsilon y}{2}) \, dy. $$
We only consider the propagating mode associated with the positive speed of propagation $c_0$; the other one can be recovered by symmetry. This mode corresponds to the vector $b_+$ and the associated amplitude is given by
\begin{equation} \label{aD} a_S(T,x,k):=\textrm{Tr} (W^{\varepsilon}_S (T,x,k))^T B_{+}(k), \end{equation}
where $(W^{\varepsilon}_S)^T$ denotes the matrix transpose of $W^\varepsilon_S$ and $\textrm{Tr}$ the matrix trace. The CB functional is then defined in our setting by
$$I^C(x)= \int_{D_x} dk \, a_S(T,x+c_0 T\hat{k},k), \qquad D_x=\{ k \in \Rm^3, (x+c_0 T\hat{k}) \in D\}. $$
Above, we implicitly suppose that if $(x+c_0 T\hat{k}) \in D$, then $\EuScript{C}$ is such that $(x+c_0 T\hat{k}) \pm \frac{\varepsilon y}{2} \in D$. The functional is slightly different from the classical coherent interferometry functional of \cite{CINT} because of our measurement setting. However, the two functionals qualitatively perform the same operation, that is, the backpropagation of field correlations along rays. The difference resides in how these correlations are calculated: in our configuration, we have access to 3D volumetric measurements at a fixed time, so that the 3D spatial Wigner transform is available and is the backpropagated quantity; in \cite{CINT}, measurements are performed on a 2D surface and recorded in time, so that the 3D spatial Wigner transform is replaced by a 2D spatio-1D temporal Wigner transform. For the stability and resolution analysis, it is convenient to recast $I^C$ in terms of the amplitude $a$ associated with the full Wigner transform (i.e. computed for $\EuScript{C}=\Rm^3$). We then obtain the expression, using the rescaled variables, $c_0=T=1$:
\begin{equation} \label{filter} I^C(x)= \int_{D_x} \int_{\Rm^3} dk dq F^x_\varepsilon(k-q) a(1,x+\hat{k},q), \qquad F^x_\varepsilon(k)= \int_{x\pm \varepsilon y/2 \in \EuScript{C}} e^{i \, k \cdot y} dy. \end{equation}
Note that $I^C$ is real-valued.
\subsection{Models for the measurements} \label{models} We introduce in this section our different models for the measurements. We will define a model for the mean of the measurements, and a model for the statistical instabilities. The latter is obtained by using the single scattering approximation (Born approximation). We start with the mean.
\subsubsection{Model for the mean} \label{mean}
\paragraph{The wavefield.} Denoting by $\Box=\partial^2_{t^2}-\Delta$ the d'Alembert operator and by
\begin{equation} \label{balK} p_B(t)=\partial_t G(t,\cdot) * q_0 \end{equation}
the ballistic part associated with an initial condition $q_0$ (with vanishing initial $\partial_t p$), the solution to \fref{waveq} reads:
$$ p(t,x)=p_B(t,x)- \sigma_0 \Box^{-1} \left[V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2 p}{\partial t^2} \right](t,x). $$
Obtaining an expression for the expectation of $p$ requires the analysis of the term $\mathbb E \{V (\cdot/(\eta \varepsilon)) \partial^2_{t^2} p\}$, which is not straightforward and requires a diagrammatic expansion.
Rather, we follow the simpler, heuristic, method of \cite{keller64}, which amounts to iterating the above inversion procedure one more time and getting \begin{align} \label{p2} &p(t,x)= \\ \nonumber &p_B(t,x)- \sigma_0 \Box^{-1} \left[V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2 p_B}{\partial t^2} \right](t,x)+ \sigma_0^2 \Box^{-1} \left[V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2}{\partial t^2} \Box^{-1} V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2 p}{\partial t^2}\right](t,x), \end{align} and to replace $p$ in the last term by $p_B$. Since $\mathbb E\{ p\} \simeq p_B$ at first order in $\sigma_0$, $p_B$ may be replaced by $\mathbb E\{ p\}$ to obtain a homogenized equation for $\mathbb E\{ p\}$. The result is \cite{keller64} that a harmonic wave $\hat p(\omega,x)$ (Fourier transform in time of $p$) is damped exponentially, i.e. for $|x| \neq 0$, $$ |\mathbb E\{\hat p(\omega,x) \}|\leq C e^{-\gamma(\omega) |x|} \qquad \textrm{with} \qquad \gamma(\omega)=\sigma_0^2 \omega^2 \int_{\Rm^3} \frac{1-\cos (2 \omega |x|)}{ 16 \pi |x|^2} R\left(\frac{|x|}{\varepsilon \eta}\right) dx. $$ The absorption coefficient $\gamma$ is obtained by adapting the setting of \cite{keller64} to ours. Since our initial condition \fref{CI1} is mostly localized around wavenumbers with frequency $|k_0|/\varepsilon$, we find that the waves are absorbed by a factor $$ \gamma\left(\frac{|k_0|}{\varepsilon}\right)=\frac{\sigma_0^2 |k_0|^2 \eta}{\varepsilon} \int_{\Rm^3} \frac{1-\cos (2 \eta |k_0| |x|)}{ 16 \pi |x|^2} R(|x|) dx :=\gamma_\varepsilon. $$ A Taylor expansion then leads to the classical $|k|^4$ dependency of the absorption associated to the Rayleigh scattering: $$ \gamma_\varepsilon \simeq \frac{\sigma_0^2 |k_0|^4 \eta^3 }{8 \pi \varepsilon} \int_{\Rm^3}R(|x|) dx = \frac{\sigma_0^2 |k_0|^4 \eta^3 }{8 \pi \varepsilon} \hat{R}(0). $$ We implicitely assumed above that $\hat{R}(0)$ was defined, which holds in random media with short-range correlations but not in media with long-range correlations. The latter case is addressed below. We can relate the latter expression for $\gamma_\varepsilon$ to the mean free path $\Sigma^{-1}:=\Sigma^{-1}(|k_0|)$ defined in the next paragraph in \fref{RTE2} by $$ \Sigma= \frac{\eta^3 \sigma_0^2}{\varepsilon}\frac{\pi |k_0|^{4}}{2 (2\pi)^3} \int_{S^{2}}\hat{R}(\eta(k_0-|k|\hat p)) d \hat p \simeq \frac{\eta^3 \sigma_0^2}{\varepsilon}\frac{\pi |k_0|^{4}}{(2\pi)^2} \hat{R}(0). $$ Hence $\Sigma\simeq 2\gamma(|k_0|/\varepsilon)$. Obtaining such a relation in the case of long-range correlations is slightly more involved. To do so, we first notice that $$R\left(\frac{|x|}{\varepsilon \eta}\right)\simeq \frac{(\varepsilon \eta)^{3-\delta} S(0)}{(2 \pi)^3} \int_{\Rm^3} \frac{e^{i k \cdot x}}{|k|^\delta} dk=c_\delta S(0) (\varepsilon \eta)^{3-\delta} |x|^{\delta-3},$$ where the constant $c_\delta$ is given by $c_\delta=2^{3-\delta} \pi^{3/2} \Gamma(\frac{3-\delta}{2}) (\Gamma(\frac{\delta}{2}))^{-1}(2 \pi)^{-3}$ \cite{gelfand}, $\Gamma$ being the Gamma function. 
The expression of $\gamma_\varepsilon$ is then $$ \gamma_\varepsilon \simeq c_\delta S(0) \eta^{3-\delta} \sigma_0^2 \varepsilon^{1-\delta} \int_{\Rm^3} \frac{1-\cos\left(\frac{2 |k_0| |x|}{\varepsilon}\right)}{ 16 \pi |x|^{5-\delta}} dx=\frac{\sigma_0^2 |k_0|^{4-\delta} \eta^{3-\delta}}{2^\delta \pi^{3/2}\varepsilon} \frac{\Gamma(\frac{3-\delta}{2})}{2\Gamma(\frac{\delta}{2})}\int_{0}^\infty \frac{1-\cos (r)}{r^{3-\delta}} dr.$$ Recall that $\delta<2$ so that the above integral is well-defined. The inverse of the mean free path is now in the long-range case \begin{eqnarray*} \Sigma&\simeq& \frac{\eta^{3-\delta} \sigma_0^2}{\varepsilon}\frac{\pi |k_0|^{4-\delta} S(0)}{2 (2\pi)^3 2^{\delta/2}} \int_{S^{2}} \frac{1}{(1-\hat k_0 \cdot \hat p)^{\delta/2}} d \hat p=\frac{\eta^{3-\delta} \sigma_0^2}{\varepsilon}\frac{\pi^2 |k_0|^{4-\delta} S(0)}{ (2\pi)^3 2^{\delta/2}} \int_{-1}^{1} \frac{1}{(1-x)^{\delta/2}} d x\\ &=&\frac{ \sigma_0^2 \eta^{3-\delta} |k_0|^{4-\delta}}{\varepsilon}\frac{S(0)}{ 2^{\delta} 4 \pi (1-\frac{\delta}{2})}. \end{eqnarray*} We used again the fact that $\delta<2$ to make sense of the integral. Since the following relation holds $$ \frac{\Gamma(\frac{3-\delta}{2})}{\sqrt{\pi}\Gamma(\frac{\delta}{2})}\int_{0}^\infty \frac{1-\cos (r)}{r^{3-\delta}} dr=\frac{1}{ 4 (1-\frac{\delta}{2})}, $$ we recover that $\Sigma\simeq 2\gamma(|k_0|/\varepsilon)$ in the long-range case. We will therefore systematically replace $\gamma(|k_0|/\varepsilon)$ by $\Sigma/2$ in the sequel. The expression of $\mathbb E \{p\}(t,x)$ is obtained by Fourier-transforming back $\mathbb E \{\hat p\}(\omega,x)$ and using the fact that $|x|=c_0 t$ since the ballistic part is supported on the sphere of radius $|x|=c_0 t$. This yields \begin{eqnarray} \label{meanH} \mathbb E \{p\}(t,x) &\simeq& e^{- c_0 \Sigma t /2}p_B(t,x)\label{pS}. \end{eqnarray} This is our model for the average pressure. \paragraph{The Wigner transform.} It is shown in \cite{RPK-WM} that the Wigner transform $W^\varepsilon$ of a wavefield $\bu$ satisfies the following system \begin{equation} \label{eq:Wigner} \frac{\partial W^\varepsilon}{ \partial t}+ \left(\mathcal Q_1+\frac{\mathcal Q_2}{\varepsilon} \right) W^\varepsilon=\left(\mathcal P_1+\frac{\mathcal P_2}{\varepsilon} \right) W^\varepsilon:= \mathcal S^\varepsilon W^\varepsilon, \end{equation} with \begin{eqnarray*} \mathcal Q_1 W&=&\frac{1}{2} \left(D^j \pdr{W}{x_j}+\pdr{W}{x_j} D^j\right), \qquad \mathcal Q_2 W=i k_j D^j W-i W k_j D^j\\ \mathcal P_1 W&=&\frac{\sigma_0}{2} \int \int \frac{dy dp e^{i p \cdot y}}{(2 \pi)^d} \left[ \mathcal V\left( \frac{x+\varepsilon y}{\varepsilon \eta}\right)D^j \pdr{W(k+\frac{1}{2}p)}{x_j}\right.\\ &&\left. \hspace{5cm}+\pdr{W(k-\frac{1}{2}p)}{x_j} D^j\mathcal V\left( \frac{x+\varepsilon y}{\varepsilon \eta}\right)\right]\\ \mathcal P_2 W&=&i\sigma_0 \int \int \frac{dy dp e^{i p \cdot y}}{(2 \pi)^d} \left[ \mathcal V\left( \frac{x+\varepsilon y}{\varepsilon \eta}\right)[k+p/2]_jD^j W(k+p/2)\right.\\ &&\left. \hspace{5cm}-W(k-p/2) [k-p/2]_j D^j\mathcal V\left( \frac{x+\varepsilon y}{\varepsilon \eta}\right)\right]. 
\end{eqnarray*} Noticing that $$ \int dy e^{i p \cdot y} \mathcal V\left( \frac{x+\varepsilon y}{\varepsilon \eta}\right)=\eta^3 e^{-i \frac{p \cdot x}{\varepsilon}} \hat{\mathcal V}(-\eta p), $$ we can recast the r.h.s of (\ref{eq:Wigner}) as \begin{eqnarray*} \mathcal S^\varepsilon W^\varepsilon &=& F_\varepsilon *_p \left(D^j \left[\frac{\varepsilon}{2}\pdr{W^\varepsilon}{x_j} +i p_j W^\varepsilon\right] \right)+\left(\left[\frac{\varepsilon}{2}\pdr{W^\varepsilon}{x_j} -i p_j W^\varepsilon\right] D^j \right) *_p \overline{F_\varepsilon}\\ F_\varepsilon(x,p)&=&\frac{\sigma_0} {\varepsilon} \left(\frac{\eta} {\pi}\right)^3 e^{\frac{2 i p \cdot x}{\varepsilon}} \hat{\mathcal V}(2\eta p). \end{eqnarray*} It is then well established \cite{RPK-WM} that a good approximation of $\mathbb E \{ W^\varepsilon\}$ is $$ \mathbb E \{ W^\varepsilon\} \simeq \overline{W}_0^\varepsilon:=\sum_{\pm } \bar a_\pm^\varepsilon B_\pm, $$ where the matrices $B_\pm$ were introduced in section \ref{CBfunc} and the amplitudes $\bar a_\pm^\varepsilon(t,x,k)$ satisfy the following radiative transfer equation \begin{align} \label{RTE} &\frac{\partial \bar a_\pm^\varepsilon}{ \partial t} \pm c_0 \hat k \cdot \nabla_x \bar a_\pm^\varepsilon =Q( \bar a_\pm^\varepsilon),\\ \nonumber &Q(\bar a_\pm^\varepsilon)(k)=\int_{\Rm^3} \sigma(k,p) (\bar a_\pm^\varepsilon(p)- \bar a_\pm^\varepsilon(k)) \delta_0(c_0|k|-c_0|p|) d p,\\\nonumber &\sigma(k,p)= \frac{\eta^3 c_0^2 \sigma_0^2}{\varepsilon}\frac{\pi |k|^2}{2 (2\pi)^3}\hat{R}(\eta(k-p)). \end{align} Above, $\delta_0$ is the Dirac measure at zero. The equation (\ref{RTE}) is supplemented with the initial condition $$ \bar a_\pm(t=0)=a_{0,\pm}^\varepsilon= \textrm{Tr} (W^{\varepsilon,0})^T B_{\pm}, $$ where $W^{\varepsilon,0}$ is the Wigner transform of the initial wavefield $\bu(t=0)$. Since $\bar a_-^\varepsilon(k)=\bar a_+^\varepsilon(-k)$, we focus only on the mode $\bar{a}_+^\varepsilon$ and drop both the $+$ lower script and the $\varepsilon$ upper script for notational simplicity. We now need to compute the Wigner transform of the initial condition. We consider an approximate expression that simplifies the analysis of the imaging functionals. The scalar Wigner transform of $p_0^\varepsilon$, denoted by $w_0^\varepsilon$, reads: \begin{eqnarray*} w_0^\varepsilon(x,k)&\sim&\frac{|k_0|^4}{(2\pi)^3}\int_{\Rm^3} \int_{S^2 }\int_{S^2 } dy d\hat{k}_1 d\hat{k}_2e^{i (k-|k_0| (\hat{k}_1+\hat{k}_2)/2) \cdot y} e^{i |k_0|x \cdot(\hat k_1 -\hat k_2)/\varepsilon} \\ &&\hspace{3cm} \times \hat{g}\left( \frac{\hat k_1 \cdot (x-\varepsilon y/2)}{\varepsilon \mu}\right) \overline{\hat{g}}\left( \frac{\hat k_2 \cdot (x+\varepsilon y/2)}{\varepsilon \mu}\right). \end{eqnarray*} Since the bandwidth parameter $\mu$ is such that $\mu\varepsilon \gg \varepsilon$, we can separate the scales of the $\hat{g}$ terms and the oscillating exponentials. We can then state that, as is classical for Wigner transforms, different wavevectors lead in a weak sense to negligible contributions because of the highly oscillating term $e^{i |k_0| x \cdot(\hat k_1 -\hat k_2)/\varepsilon}$. 
The leading term in the initial condition is therefore
\begin{eqnarray*} w_0^\varepsilon(x,k)&\simeq& \int_{\Rm^3} \int_{S^2 } dy d\hat k_1 e^{i (k-|k_0| \hat k_1) \cdot y} \hat{g}\left( \frac{\hat k_1 \cdot (x-\varepsilon y/2)}{\varepsilon \mu}\right) \overline{\hat{g}}\left( \frac{\hat k_1 \cdot (x+\varepsilon y/2)}{\varepsilon \mu}\right) \\ &:=& \mu^3 \int_{S^2} d \hat k_1 W_{\hat k_1} \left(\frac{x}{\varepsilon \mu},\frac{k-|k_0| \hat k_1}{\mu^{-1}}\right), \end{eqnarray*}
where
$$ W_{\hat k_1}(x,k) = \int_{\Rm^3} dy e^{i k \cdot y} \hat{g}\left(\hat k_1 \cdot (x- y/2)\right) \overline{\hat{g}}\left(\hat k_1 \cdot (x+y/2)\right). $$
The effect of the function $w_0^\varepsilon$ in the phase space is essentially to localize the variable $x$ at the scale $\varepsilon \mu$ and the variable $|k|$ on a shell of radius $|k_0|$ with width $\mu^{-1}$. This is what is expected since the initial condition for the wave equation is isotropic, oscillates at a frequency $\varepsilon^{-1} |k_0|$ and has a bandwidth of order $(\varepsilon \mu)^{-1}$. The fact that $\mu \ll 1/\sqrt{\varepsilon}$ implies that the localization is stronger in the spatial variables than in the momentum variables, which can be seen as a broadband property. For the sake of simplicity, we replace the initial condition by a function that shares the same properties but has a simpler expression. We write:
$$ a_{0,\pm}^\varepsilon=\mu w_0\left( \frac{x}{\varepsilon \mu},\mu(|k|-|k_0|)\right), $$
where the function $w_0$ is smooth. With $c_0=1$, we then recast \fref{RTE} as
\begin{align} \label{RTE2} &\frac{\partial \bar a}{ \partial t} +\hat k \cdot \nabla_x \bar a + \Sigma(k) \bar a=Q_+( \bar a), \qquad \bar{a}_\pm(t=0,x,k)=\mu w_0\left( \frac{x}{\varepsilon \mu},\mu(|k|-|k_0|)\right)\\\nonumber &Q_+(\bar a)(k)=\int_{\Rm^3} \sigma(k,p) \bar a(p) \delta_0(|k|-|p|) d p, \qquad \Sigma(k)= \frac{\eta^3 \sigma_0^2}{\varepsilon}\frac{\pi |k|^{4}}{2 (2\pi)^3} \int_{S^{2}}\hat{R}(\eta(k-|k|\hat p)) d \hat p. \end{align}
When $ \sigma_0^2= \varepsilon$, $\eta=1$, we recover the usual equation for the weak coupling regime. Note that since $R(x)=R(|x|)$, we have $\hat{R}(k)=\hat{R}(|k|)$ and $\Sigma(k)=\Sigma(|k|)$. Going back to the measured amplitude $a_S$ defined in (\ref{aD}), an accurate approximation of its mean is therefore
\begin{equation} \mathbb E\{a_S\} =\textrm{Tr} (\mathbb E \{W^{\varepsilon}_S\})^T B_{+} =\textrm{Tr} (F^x_\varepsilon *_k \mathbb E \{W^\varepsilon\})^T B_{+}\simeq F^x_\varepsilon *_k \bar{a}, \label{meana} \end{equation}
where the filter $F^x_\varepsilon$ was defined in \fref{filter}. The integral solution to (\ref{RTE2}) reads, denoting by $T_t \bar a(x,k):=\bar a(x-c_0 t \hat k,k)$ the free transport semigroup:
\begin{equation} \label{integ} \bar a(t,x,k) =T_t a_0^\varepsilon(x,k) e^{-\Sigma(|k|) t}+ \int_0^t ds e^{-\Sigma(|k|) (t-s)}T_{t-s} Q_+( \bar a)(x,k). \end{equation}
The average $\bar a(t,x,k)$ is the sum of a ballistic term and the multiple scattering contribution.
\subsubsection{Model for the random fluctuations in the measurements} \label{flucmod}
\paragraph{The wavefield.} We follow the Born approximation, which consists in retaining in \fref{p2} the terms at most linear in $V$. In order to take into account the absorption as explained in section \ref{results}, we replace the ballistic term in \fref{p2} by $\mathbb E\{p\}$. In doing so, both the average $\mathbb E\{p\}$ and the fluctuations are exponentially decreasing.
The random instabilities on the pressure are then generated by the term \begin{equation} \label{randH} \delta p^\varepsilon (t,x)=- \sigma_0 e^{-c_0 \Sigma t/2 }\Box^{-1} \left[V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2 p_B}{\partial t^2} \right](t,x). \end{equation} We suppose that the additive noise on the wavefield $\bu$ has the form $\sigma_n \mathbf n^\varepsilon(x)=\sigma_n (\mathbf n_v^\varepsilon,n_p^\varepsilon)(x)=\sigma_n (\mathbf n_v(\frac{x}{\varepsilon}),n_p(\frac{x}{\varepsilon}))$ and is independent of the random medium. For simplicity, we assume that the noise entries are real and have the same correlation structure, $\mathbb E\{ \mathbf n^\varepsilon_i(x) \mathbf n^\varepsilon_j(y)\}=\Phi(\frac{x-y}{\varepsilon})$, $i,j=1,\cdots,4$, where $\Phi(x) \equiv \Phi(|x|)$ and is smooth. Using \fref{meanH} and \fref{randH}, our model for the measurements is therefore \begin{equation} \label{modH} p(t,x)=\mathbb E \{p\}(t,x) +\delta p^\varepsilon (t,x)+\sigma_n n_p^\varepsilon(x). \end{equation} \paragraph{The Wigner transform.} Let $a^\varepsilon$ be the projection of the full Wigner transform $W^\varepsilon$ onto the $+$ mode, i.e. $a^\varepsilon=\textrm{Tr} (W^{\varepsilon})^T B_{+}$, so that, for the $a_S$ defined in \fref{aD}, we have $a_S=F^x_\varepsilon *_k a^\varepsilon$. We already know from \fref{meana} that $\mathbb E\{a_S\} \simeq F^x_\varepsilon *_k \bar{a}$, with $\bar a$ the solution to the radiative transfer equation (\ref{RTE2}). We subsequently write $a^\varepsilon=\bar a + \delta a^\varepsilon$, where $\delta a^\varepsilon$ accounts for the random fluctuations. The simplest model for $\delta a^\varepsilon$ is obtained for the single scattering approximation which consists in retaining in $W^\varepsilon$ only terms at most linear in $V$, so that the related variance will be at most linear in $\hat{R}$. This leads to defining the random term $W^\varepsilon_1$ \begin{equation} \label{correc} \frac{\partial W^\varepsilon_1}{ \partial t}+ \left(\mathcal Q_1+\frac{\mathcal Q_2}{\varepsilon} \right) W^\varepsilon_1= \mathcal S^\varepsilon W^\varepsilon_0, \end{equation} with vanishing initial conditions and where $$ \frac{\partial W^\varepsilon_0}{ \partial t}+ \left(\mathcal Q_1+\frac{\mathcal Q_2}{\varepsilon} \right) W^\varepsilon_0=0, \qquad W^{\varepsilon}_0(t=0)=W^{\varepsilon,0}. $$ As was the case for the pressure, such a definition of $W_1^\varepsilon$ does not take into account the absorption factor $e^{-c_0 \Sigma t}$. We thus formally correct this and set \begin{equation} \label{perturb} \delta a^\varepsilon= e^{-c_0 \Sigma t}\textrm{Tr} (W^{\varepsilon}_1)^T B_{+}. \end{equation} The fluctuations of the amplitude $\delta a^\varepsilon$ and the wavefield can be related by writing the field $\bu$ as $\mathbb E\{\bu\}+ \delta \bu$, where $\delta \bu$ is obtained in the single scattering approximation. The perturbation $\delta a^\varepsilon$ is then the projection on the $+$ mode of the sum of Wigner transforms $W[\mathbb E\{\bu\},\delta \bu]+W[\delta \bu, \mathbb E\{\bu\}]$. 
Taking into account the external noise $\sigma_n \mathbf n^\varepsilon(x)$ and denoting by $a_n^\varepsilon$ the projection of $W[\mathbb E\{\bu\}, \mathbf n^\varepsilon ]+W[\mathbf n^\varepsilon, \mathbb E\{\bu\}]$ on the $+$ mode as in \fref{perturb}, our complete model for the measurements in the single scattering approximation is therefore
\begin{eqnarray} \label{meas} a^\varepsilon(t,x,k)&=&\bar a(t,x,k)+\delta a^\varepsilon(t,x,k)+ \sigma_n a^\varepsilon_n(t,x,k). \end{eqnarray}
\section{Analysis of the CB functional} \label{secCB} We compute in this section the mean and the variance of the CB functional.
\subsection{Average of the functional} \label{meancint} Assume for the moment that our measurements have the form, for $g$ a smooth function:
$$ a_S=F^x_\varepsilon *_k g, \qquad g(x,k)=g_0(x-c_0 T \hat k,k):=T_T g_0(x,k). $$
Above, $F^x_\varepsilon(k)$ is the filter defined in \eqref{filter} and $T_t$ is the free transport semigroup introduced in \fref{integ}. Plugging this expression into the CB functional yields:
\begin{eqnarray*} I^C[g](x)&=& \int_{D_x} \int_{\Rm^3} dk dq F^x_\varepsilon(k-q) g_0(x+c_0 T (\hat{k}-\hat{q}),q). \end{eqnarray*}
Let us verify first that when $D=\Rm^3$, we recover that $ I^C[g](x) =(2 \pi)^3 \int_{\Rm^3} dq g_0(x,q), $ so that if $g_0$ is the scalar Wigner transform of a function $\psi$, we find $I^C[g](x)=(2 \pi)^3 |\psi(x)|^2$ and the reconstruction is then perfect. This is immediate since from the definition of $F_\varepsilon^x(k)$ given in (\ref{filter}) we conclude that $F_\varepsilon^x(k)=(2 \pi)^3 \delta_0(k)$ when $\EuScript{C}=\Rm^3$. When $\EuScript{C}$ is finite, as is the case in practical situations, one does not recover $\int_{\Rm^3} dq g_0(x,q)$ but an approximate version of it limited by the resolution of the functional. This point is addressed below.
\paragraph{Amplitude and resolution.} For simplicity, we suppose that the domain $\EuScript{C}$ is chosen such that $F_\varepsilon^x$ defined in \eqref{filter} is independent of $x$. We suppose in addition that $\EuScript{C}$ is a ball centered at zero of a certain radius. We parametrize $\EuScript{C}$ by $\gamma \in [0,1]$ such that if $\pm \frac{\varepsilon y}{2} \in \EuScript{C}$, then we have $|\varepsilon y| \leq r_0 \varepsilon^{1-\gamma}$ for some $r_0>0$. This means that the diameter of the ball is equal to $\varepsilon N_C$, where
$$N_C:=r_0 \varepsilon^{-\gamma}$$
is the parameter introduced in section \ref{results}. Since the (rescaled) detector has a side $\frac{l}{L} <1$, we have necessarily by construction that $r_0<\frac{l}{L}$. We could easily accommodate anisotropic domains $\EuScript{C}$ at the cost of additional technicalities. In order to deal with a regular filter, we smooth out the characteristic function of the unit ball and replace it by some approximate function $\chi(x)\equiv \chi(|x|)$. Rescaling $y$ as $y \to y r_0 \varepsilon^{-\gamma}$, we obtain for the filter $F_\varepsilon$:
$$ F_\varepsilon(k)= \frac{r_0^3}{\varepsilon^{3\gamma}} \int_{\Rm^3} e^{i \, r_0\varepsilon^{-\gamma}k \cdot y} \chi(|y|)dy:=\frac{r_0^3}{\varepsilon^{3\gamma}} F\left( \frac{r_0|k|}{\varepsilon^\gamma}\right). $$
Recall that our measurements read $a_S=F^x_\varepsilon * a^\varepsilon$, where $a^\varepsilon$ is given by \fref{meas}. The total functional is denoted by $I^C[a^\varepsilon](x)$ and its average is given by $I^C[\bar{a}](x)$.
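As a sanity check of the scalings just introduced, the following Python sketch computes, in a one-dimensional analogue, the width of the filter $F_\varepsilon$, i.e. of the Fourier transform of a smoothed indicator of the admissible offsets $|y|\leq r_0\varepsilon^{-\gamma}=N_C$. Its width in $k$ is of order $N_C^{-1}$, which anticipates the resolution $L/N_C$ derived below; the smoothing profile and the numerical values of $\varepsilon$, $\gamma$ and $r_0$ are arbitrary choices made for the illustration.
\begin{verbatim}
import numpy as np

# Width (in k) of a 1D analogue of the filter F_eps: the Fourier transform of a
# smoothed indicator of |y| <= N_C = r0 * eps^(-gamma).  Parameter values and
# the smoothing profile chi are arbitrary.
eps, gamma, r0 = 1e-3, 0.7, 0.05
N_C = r0 * eps ** (-gamma)                     # admissible offsets satisfy |y| <= N_C

y   = np.linspace(-8 * N_C, 8 * N_C, 1 << 15)
dy  = y[1] - y[0]
chi = 0.5 * (1.0 + np.tanh((N_C - np.abs(y)) / (0.05 * N_C)))   # smoothed indicator

k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(y.size, d=dy))
F = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(chi)))) * dy

fwhm = np.ptp(k[F >= F.max() / 2])             # full width at half maximum of |F_eps|
print(f"N_C = {N_C:.2f},  FWHM of |F_eps| in k = {fwhm:.3e},  1/N_C = {1.0 / N_C:.3e}")
\end{verbatim}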
The mean of the measurements $\bar{a}$ is the sum of a ballistic term $a^B(t,x,k) =T_t a_0^\varepsilon(x,k) e^{-\Sigma(|k|) t}$ and a multiple scattering term defined in \fref{integ}. Measurements in a homogeneous medium have the form $T_t a_0^\varepsilon(x,k)$. The CB functional is tailored to backpropagate the data along the characteristics of the transport equation, so as to undo the effects of free transport. Since the multiple scattering term is smoother than the ballistic term, one can expect the inversion operation to produce a signal with a lower amplitude for the multiple scattering part than for the ballistic part. We therefore neglect the multiple scattering in the computation of the average functional. Moreover, the resolution of $I^C$ is mostly limited by the filter $F$ and not by the size of the detector since $\EuScript{C}$ is included in $D$. As a consequence, the limitations due to $D$ on the resolution are negligible and we replace $D_x$ by $\Rm^3$ in the definition of $I^C$. We then find, \begin{eqnarray*} \mathbb E\{ I^C[a^\varepsilon]\}(x) &\simeq& I^C[a^B](x)\\ &\simeq & \frac{\mu e^{-\Sigma }r_0^3}{\varepsilon^{3 \gamma}} \int_{D_x} \int_{\Rm^3} F\left( \frac{r_0|k-q|}{\varepsilon^\gamma}\right)w_0\left(\frac{x+\hat{k}-\hat q}{\varepsilon \mu},\mu(|q|-|k_0|) \right) dk d|q| d \hat q \end{eqnarray*} so that the reconstructed image is essentially obtained by a convolution-type relation in the $k$ variable between a localizing kernel and the source. This is clear when $\varepsilon^\gamma \ll \varepsilon \mu$ where \begin{eqnarray*} \mathbb E\{ I^C[a^\varepsilon]\}(x) &\simeq & e^{-\Sigma} \int_{\Rm^3} \int_{\Rm^3} F\left(|k|\right)w_0\left(\frac{x+\frac{\varepsilon^\gamma}{|k_0|r_0} (k-(\hat q \cdot k) \hat q)}{\varepsilon \mu},|q| \right) dk d|q| d \hat q. \end{eqnarray*} We can then define the resolution of the functional to be the scale of the localization, which is $\varepsilon^\gamma /r_0=N_C^{-1}$, or $L/N_C$ in dimension variables. Hence, the larger the domain on which the correlations are calculated, the better is the resolution. When the domain is large, more phase information about the wavefield is retrieved, and this is the most useful information to achieve a good resolution. On the contrary, when $\gamma=0$, the phase is lost and imaging is performed mostly using the singularity of the Green's function, which lowers the resolution. Regarding the amplitude of the signal, we set $x=0$ and compute the value of the peak. If the bandwidth parameter $\mu=\varepsilon^{-\alpha}$ with $\alpha>1-\gamma$, we find \begin{eqnarray*} \mathbb E\{ I^C[a^\varepsilon]\}(0) &\simeq & e^{-\Sigma}\int_{\Rm^3} \int_{\Rm^3} F\left( |k|\right)w_0\left(0,|q| \right) dk dq \simeq e^{-\Sigma}. \end{eqnarray*} In this case, the convolution is done at a smaller scale than the spatial support of $w_0$, so that there is no geometrical loss of amplitude with respect to $w_0(x=0)$. When $\alpha<1-\gamma$, convolution is done at a larger scale and we have \begin{eqnarray*} \mathbb E\{ I^C[a^\varepsilon]\}(0) &\simeq & r_0^2 \mu^2 \varepsilon^{2(1-\gamma)}e^{-\Sigma}\int_{\Rm^d} \int_{\Rm^3} F\left( |k|\right)w_0\left(e_1 \theta_1+e_2 \theta_2,|q| \right) d|k| d\theta_1 d\theta_2 d|q|\\ &\simeq & r_0^2\mu^2 \varepsilon^{2(1-\gamma)} e^{-\Sigma}. \end{eqnarray*} Above, $\theta_1$ and $ \theta_2$ are such that $\hat k \simeq \hat q+\varepsilon \mu(e_1 \theta_1+e_2 \theta_2)$ with $e_1 \cdot \hat q=e_2 \cdot \hat q=e_1 \cdot e_2=0$ and $|e_1|=|e_2|=1$. 
The case $\alpha=1-\gamma$ follows similarly. As a conclusion of this section, we therefore have \begin{equation} \label{ampsig} \mathbb E\{ I^C[a^\varepsilon]\}(0) \simeq e^{-\Sigma} \varepsilon^{2(1-\gamma)} \left( (r_0 \mu) \wedge \varepsilon^{\gamma-1}\right)^2. \end{equation} \subsection{Variance of the functional} We compute in this section the variance of the CB functional with our measurements \fref{meas}. For simplicity, we replace $c_0$ and $T$ by their actual values, $c_0=T=1$. The total variance is defined by \begin{eqnarray*} V^C_{\textrm{tot}}(z)&=& \mathbb E \left\{I^C[a^\varepsilon]^2(z) \right\}-(\mathbb E \left\{I^C[a^\varepsilon](z) \right\})^2=\mathbb E \left\{I^C[\delta a^\varepsilon]^2(z) \right\}+\sigma_n^2 \mathbb E \left\{I^C[a^\varepsilon_n]^2(z) \right\}\\ &:=&V^C(z)+V^C_n(z). \end{eqnarray*} For some function $\varphi$ to be defined later on, let us introduce the following quantity $$ w_\varepsilon(t)=\int_{\Rm^{12}} dx_1 dx_2 d q_1 d q_2 J(t,x_1,q_1,x_2,q_2) \varphi(x_1,q_1) \varphi(x_2,q_2), $$ where $$ J= \mathbb E \{\mathcal{W}^\varepsilon\}, \qquad \mathcal{W}^\varepsilon(t,x_1,q_1,x_2,q_2) = \delta a^\varepsilon (t,x_1,q_1) \delta a ^\varepsilon(t,x_2,q_2). $$ The function $J$ is the single scattering approximation of the scintillation function of the Wigner transform and can be seen as a measure of the statistical instabilities. See \cite{B-Ito-04, BLP-JMP, BP-CPDE} for an extensive use of scintillation functions in the analysis of the statistical stability of wave propagation in random media. We have the following technical lemma, proved in section \ref{proofs}, that will be used to obtain an explicit expression for the variance $V^C$: \begin{lemma} \label{lemscin}$w_\varepsilon$ is given by $$ w_\varepsilon(t) = \frac{\sigma_0^2 \eta^3e^{-2 \Sigma}} { 4\pi^3\varepsilon^2} \int_{\Rm^3} dk \hat{R}(2\eta k) |H_\varepsilon(t,k)|^2 , $$ where the function $H_\varepsilon$ reads \begin{align*} &H_\varepsilon(t,k)= \frac{1}{2}\int_0^t \int \int ds dx dq e^{2 i \frac{k\cdot x}{\varepsilon}} (\mathfrak H^\varepsilon_1+\mathfrak H^\varepsilon_2)(t,k,s,x,q)\\ &\mathfrak H^\varepsilon_1=\sum_{\sigma_1,\sigma_2 =\pm 1} \sigma_1 \varphi_{\sigma_2} f_{\sigma_1,\sigma_2}, \qquad \mathfrak H^\varepsilon_2=\hat k \cdot \hat{q} \sum_{\sigma_1,\sigma_2 =\pm 1} \varphi_{\sigma_2} f_{\sigma_1,\sigma_2} \\ &f_{\sigma_1,\sigma_2}=\mu \left(\frac{1}{2 \mu} \widehat{q} \cdot \nabla_x +i\sigma_2 |q| \right)w_0\left(\frac{x-\sigma_1 s\hat q}{\mu\varepsilon},\mu(|q|-|k_0|)\right)\\ &\varphi_{\sigma_2} =\varphi(x+(t-s)(\widehat{q+\sigma_2k}),q+\sigma_2 k). \end{align*} \end{lemma} If we set \begin{equation} \label{testf} \varphi_z(x,q)=\int_{D_z}dk F_\varepsilon(k-q) \delta_0(x-\hat{k}-z) \end{equation} in the definition of $w_\varepsilon$, then \begin{eqnarray} \nonumber V^C(z)&\sim &\int_{\Rm^{12}}dp d q dk dk' F_\varepsilon(k-q) F_\varepsilon(k'-p) \mathbb E \{\delta a ^\varepsilon(1,z+\hat{k},q) \delta a^\varepsilon(1,z+\hat{k}',p)\}\\\label{var1} &\sim& \int_{\Rm^{12}} dp d q du dv J(1,u,q,v,p) \varphi_z(u,q) \varphi_z(v,p)=w_\varepsilon(1). \end{eqnarray} The analysis of the variance is somewhat technical in that the scintillation is integrated against singular test functions (due to the application of the CB functional) while a classical stability analysis amounts to integrating against regular test functions. As before, we assume that the detector $D$ is large enough so that its effects on the variance can be neglected and we replace it by $\Rm^3$ as a first approximation. 
We have the following two propositions proved in sections \ref{varproof} and \ref{secvarcintn}: \begin{proposition} \label{varcint} The variance of the CB functional for the single scattering contribution satisfies $$ V^C(z) \sim e^{-2\Sigma}\sigma_0^2 \eta^2 \varepsilon^{4(1-\gamma)} \left(\eta^{1-\delta}\varepsilon^{(1-\delta) \wedge 0} \mu^{-2}((r_0\mu) \wedge \varepsilon^{\gamma-1})^4\right) \vee \left( r_0^4 \eta^{(1-\delta) \wedge 0}\right),\quad |z| \leq \varepsilon \mu $$ and $V^C$ is mostly supported on $|z| \leq \mu \varepsilon$. \end{proposition} The above result gives the optimal dependency on the parameters $\eta$, $\varepsilon$, $\delta$, $\gamma$ and $\mu$. Regarding the contribution of the noise, we have: \begin{proposition} \label{varcintn} The variance of the CB functional for the noise contribution satisfies $$ V^C_n(z) \sim e^{-\Sigma}\sigma_n^2 r_0^4 \varepsilon^{4(1-\gamma)} \quad \textrm{for} \quad |z| \leq \varepsilon \mu, $$ and $V^C_n$ is mostly supported on $|z| \leq \mu \varepsilon$. \end{proposition} As before, the result is optimal and the variance decreases as $\gamma$ goes to zero and resolution is lost. Notice that both variances are essentially supported on the support of the initial source. We now turn to the WB functional. \section{Analysis of the WB functional} \label{secWB} In this section we compute the mean and the variance of the WB functional. \subsection{Average of the functional} Using our model \fref{modH} and the definition of the functional \fref{expK}, we have $$ \mathbb E\{I^W [p]\}(z) \simeq e^{-\Sigma /2} \mathbb E \{I^W [p_B]\}(z). $$ The cross-range resolution of $I^W$ is the same as the classical Kirchhoff functional and given by the Rayleigh formula $\lambda L/l$. \subsection{Variance of the functional} The variance is defined by $$ V^W(z)= \mathbb E \left\{|I^W[p](z)-\mathbb E \left\{I^W[p] \right\}(z)|^2 \right\}. $$ As before, we distinguish the contribution of the single scattering, denoted by $V^W$, from the one of the noise, denoted by $V^W_n$. We then have the following propositions, proved in sections \ref{proofpropvarK} and \ref{proofpropvarKn}: \begin{proposition} \label{propvarK} The variance of the WB functional for the single scattering contribution satisfies: $$ V^W(z) \sim e^{-\Sigma}\sigma_0^2\quad \textrm{for} \quad |z| \leq \varepsilon \mu, $$ and $V^W$ is mostly supported on $|z| \leq \mu \varepsilon$. \end{proposition} As for the CB functional, the results are optimal. Regarding the contribution of the noise, we have: \begin{proposition} \label{propvarKn} The variance of the WB functional for the noise contribution satisfies for all $z$: \begin{eqnarray*} V^W_n(z) &\sim& \sigma_n^2. \end{eqnarray*} \end{proposition} In the latter case, the variance is uniform in $z$ and not just supported on the support of the source. This is due to the fact that, contrary to the CB functional which considers the correlation of the noise with the coherent wavefield (i.e. $V^C_n$ involves $ (p^B)^2 (\sigma_n n_p^\varepsilon)^2$), the variance of the noise for the WB functional does not involve the average wavefield and the information about the source is lost (i.e. $V^W_n$ involves only $ (\sigma_n n_p^\varepsilon)^2$). Remark also that $V_n^W$ and $V_n^C$ are comparable when $\gamma=1$, namely when both functionals have the same resolution. $V_n^C$ decreases when the resolution worsens. 
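Before turning to the proofs, the following Python sketch assembles propositions \ref{varcint}, \ref{varcintn}, \ref{propvarK} and \ref{propvarKn}, together with the peak amplitude \fref{ampsig}, into the total signal-to-noise ratios of Theorem \ref{theo}, with all multiplicative constants set to one. The translation between $(\varepsilon,\eta,\gamma,r_0)$ and $(\lambda,L,\ell_c,N_C)$ is the one used below; the numerical values are arbitrary and the sketch is only a bookkeeping illustration, not a quantitative prediction.
\begin{verbatim}
import numpy as np

# Order-of-magnitude assembly of the propositions above (constants set to one):
# from lambda, L, ell_c, N_C, mu, sigma_0, sigma_n we form eps = lambda/L,
# eta = ell_c/lambda, choose gamma so that r0*eps^(-gamma) = N_C, evaluate the
# variances at z = 0 and deduce the total SNRs.  All values are arbitrary.
lam, L, ell_c = 1e-3, 1.0, 1e-3
sigma0, sigma_n, mu, delta = 0.05, 0.2, 30.0, 0.0
eps, eta = lam / L, ell_c / lam
r0 = 0.1                                   # fixed r0 < l/L; gamma is tuned to reach N_C
Sigma = sigma0**2 * eta**(3 - delta) / (eps * (1 - delta / 2))   # rescaled Sigma (~ Sigma*L)

def total_snrs(N_C):
    gamma = np.log(N_C / r0) / np.log(1.0 / eps)
    m = min(r0 * mu, eps**(gamma - 1))     # (r0*mu) wedge eps^(gamma-1)
    peak_C = np.exp(-Sigma) * eps**(2 * (1 - gamma)) * m**2        # peak of E{I^C}
    V_C = np.exp(-2 * Sigma) * sigma0**2 * eta**2 * eps**(4 * (1 - gamma)) * max(
        eta**(1 - delta) * eps**min(1 - delta, 0) * m**4 / mu**2,
        r0**4 * eta**min(1 - delta, 0))
    V_Cn = np.exp(-Sigma) * sigma_n**2 * r0**4 * eps**(4 * (1 - gamma))
    snr_C_tot = min(peak_C / np.sqrt(V_C), peak_C / np.sqrt(V_Cn))
    snr_W_tot = min(1.0 / sigma0, np.exp(-Sigma / 2) / sigma_n)   # WB SNRs from the theorem
    return snr_C_tot, snr_W_tot

for N_C in (10.0, 30.0, 100.0):            # requires gamma <= 1, i.e. N_C <= r0/eps
    c, w = total_snrs(N_C)
    print(f"N_C = {N_C:5.0f}  L/N_C = {L / N_C:.1e}  SNR_tot^C ~ {c:.1e}  SNR_tot^W ~ {w:.1e}")
\end{verbatim}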
The proof of theorem \ref{theo} is then a straightforward consequence of \fref{ampsig}, propositions \ref{varcint}, \ref{varcintn}, \ref{propvarK}, and \ref{propvarKn} after recasting $\varepsilon$, $\eta$ and $r_0 \varepsilon^{-\gamma}$ in terms of $\lambda$, $L$, $\ell_c$ and $N_C$. \section{Proofs} \label{secproofs} \subsection{Proof of lemma \ref{lemscin}} \label{proofs} We start from (\ref{correc}) using the notations of section \ref{models}. We project the initial condition $W_0^\varepsilon$ onto the propagating modes and introduce the related amplitudes $a_\pm$ (recall there are no vortical modes since the initial velocity is zero): $$ W^\varepsilon_0(t,x,k)=\sum_{\pm} a_{\pm}(t,x,k) B_{\pm}(k), \qquad B_{\pm}=b_\pm \otimes b_\pm \qquad b_\pm=\frac{1}{\sqrt{2}} (\hat k,\pm 1)^T, $$ with $$ \frac{\partial a_\pm}{ \partial t} \pm \hat k \cdot \nabla_x a_\pm=0, \qquad a_{\pm}(t=0)= \textrm{Tr} (W^{\varepsilon,0})^T B_{\pm}=w_0^\mu(x/\varepsilon,|k|-|k_0|), $$ where $w_0^\mu(x,|k|-|k_0|):=\mu w_0(x/\mu,\mu(|k|-|k_0|))$. Split then $\mathcal S^\varepsilon$ in \eqref{eq:Wigner} into $\mathcal S^\varepsilon_1+\mathcal S^\varepsilon_2$ with obvious notation. We project \fref{correc} on the $+$ mode and need to compute $\textrm{Tr} (\mathcal S^\varepsilon_1 W^{\varepsilon}_0)^T(k) B_{+}(k)$. Direct calculations yield \begin{align*} &\textrm{Tr} (\mathcal S^\varepsilon_1 W^{\varepsilon}_0)^T(k) B_{+}(k)=\frac{1}{4}\sum_{\pm} \int dp f_\varepsilon(x,k-p) (\hat k \cdot \hat p \pm 1) \left(\frac{\varepsilon}{2} \hat p \cdot \nabla_x a_\pm +i |p| a_\pm \right)\\ &f_\varepsilon(x,p)=\frac{\sigma_0} {\varepsilon} \left(\frac{\eta} {\pi}\right)^3 e^{\frac{2 i p \cdot x}{\varepsilon}} \hat{V}(2\eta p). \end{align*} In the same way, we find $$ \textrm{Tr} (\mathcal S^\varepsilon_2 W^{\varepsilon}_0)^T(k) B_{+}(k)=\frac{1}{4}\sum_{\pm} \int dp \overline{f_\varepsilon}(x,k-p) (\hat k \cdot \hat p \pm 1) \left(\frac{\varepsilon}{2} \hat p \cdot \nabla_x a_\pm -i |p| a_\pm \right) $$ and finally, recasting $\delta a ^\varepsilon$ by $a_1$ for simplicity of notation: \begin{align*} &\frac{\partial a_1}{ \partial t} +\hat k \cdot \nabla_x a_1=A_\varepsilon S_\varepsilon, \qquad S_\varepsilon =\frac{1}{2}\sum_{\pm} (\hat k \cdot \hat p \pm 1) \left(\frac{\varepsilon}{2} \hat p \cdot \nabla_x a_\pm +i |p| a_\pm \right),\\ &A_\varepsilon =\Re \int dp f_\varepsilon(x,k-p)S_\varepsilon(x,p). \end{align*} Above, $\Re$ stands for real part. The integral equation for $a_1$ reads: $$ a_1(t,x,k)=\int_{0}^t ds A_\varepsilon S_\varepsilon(t-s,x-s \hat{k},k):=D^{-1}A_\varepsilon S_\varepsilon. $$ Let us introduce the product function $$ \mathcal{W}^\varepsilon(t,x_1,q_1,x_2,q_2):=a_1(t,x_1,q_1)a_1(t,x_2,q_2). $$ We then have \begin{align*} \mathcal{W}^\varepsilon(t,x_1,q_1,x_2,q_2) &:= (D^{-1}A_\varepsilon S_\varepsilon)(t,x_1,q_1) (D^{-1}A_\varepsilon S_\varepsilon)(t,x_2,q_2)\\ &= \frac{1}{2}\Re \int_0^t\int_0^t \int \int f_\varepsilon(x_1-s_1\hat{q_1}, q_1-\eta_1) f_\varepsilon(x_2-s_2\hat{q_2}, q_2-\eta_2) \\ &\qquad \times S_\varepsilon(t-s_1, x_1-s_1\hat{q_1}, \eta_1) S_\varepsilon(t-s_2, x_2-s_2\hat{q_2},\eta_2) d\eta_1 d \eta_2 d s_1 d s_2\\ &\quad +\frac{1}{2}\Re \int_0^t\int_0^t \int \int f_\varepsilon(x_1-s_1\hat{q_1}, q_1-\eta_1) \overline{f_\varepsilon}(x_2-s_2\hat{q_2}, q_2-\eta_2) \\ &\qquad\times S_\varepsilon(t-s_1, x_1-s_1\hat{q_1}, \eta_1) \overline{S_\varepsilon}(t-s_2, x_2-s_2\hat{q_2},\eta_2) d\eta_1 d \eta_2 d s_1 d s_2\\ &:=T_1+T_2. 
\end{align*}
Using the fact that $\mathbb E \{\hat{V}(\xi)\hat {V}(\nu) \}= (2 \pi)^{3}\hat{R}(\xi) \delta_0(\xi+\nu),$ we find
\begin{eqnarray*} \mathbb E \{f_\varepsilon(x,p) f_\varepsilon(y,q)\}&=&\frac{\sigma_0^2 \eta^3} {\pi^3 \varepsilon^2} e^{\frac{2 i p \cdot (x-y)}{\varepsilon}} \hat{R}(2\eta p) \delta_0(p+q)\\ \mathbb E \{f_\varepsilon(x,p) \overline{f_\varepsilon}(y,q)\}&=&\frac{\sigma_0^2 \eta^3} {\pi^3 \varepsilon^2} e^{\frac{2 i p \cdot (x-y)}{\varepsilon}} \hat{R}(2\eta p) \delta_0(p-q). \end{eqnarray*}
Therefore
\begin{align*} T_1 &= \frac{\sigma_0^2 \eta^3} {2 \pi^3 \varepsilon^2} \Re \int_0^t\int_0^t \int \hat{R}(2\eta (q_1-\eta_1)) e^{\frac{2 i (q_1-\eta_1) \cdot (x_1-s_1 \hat{q}_1-x_2+s_2 \hat{q}_2)}{\varepsilon}}\\ &\;\times S_\varepsilon(t-s_1, x_1-s_1\hat{q_1}, \eta_1) S_\varepsilon(t-s_2, x_2-s_2\hat{q_2},q_2+q_1-\eta_1) d\eta_1 d s_1 d s_2 \end{align*}
and
\begin{align*} T_2 &= \frac{\sigma_0^2 \eta^3} {2 \pi^3\varepsilon^2} \Re \int_0^t\int_0^t \int \hat{R}(2\eta (q_1-\eta_1)) e^{\frac{2 i (q_1-\eta_1) \cdot (x_1-s_1 \hat{q}_1-x_2+s_2 \hat{q}_2)}{\varepsilon}}\\ &\;\times S_\varepsilon(t-s_1, x_1-s_1\hat{q}_1, \eta_1) \overline{S_\varepsilon}(t-s_2, x_2-s_2\hat{q}_2,q_2-q_1+\eta_1) d\eta_1 d s_1 d s_2. \end{align*}
Hence
\begin{align*} & \mathbb E\{\mathcal{W}^\varepsilon\}(t,x_1,q_1,x_2,q_2) \\ & =\frac{\sigma_0^2 \eta^3} {2 \pi^3 \varepsilon^2} \Re \int_0^t\int_0^t \int \hat{R}(2\eta (q_1-\eta_1)) e^{\frac{2 i (q_1-\eta_1) \cdot (x_1-s_1 \hat{q_1}-x_2+s_2 \hat{q_2})}{\varepsilon}}S_\varepsilon(t-s_1, x_1-s_1\hat{q_1}, \eta_1)\\ &\times \left[S_\varepsilon(t-s_2, x_2-s_2\hat{q_2},q_2+q_1-\eta_1)+\overline{S_\varepsilon}(t-s_2, x_2-s_2\hat{q_2},q_2-q_1+\eta_1) \right]\\ &= \frac{\sigma_0^2 \eta^3} {4 \pi^3 \varepsilon^2}\int_0^t\int_0^t \int \hat{R}(2\eta k) e^{\frac{2 i k\cdot (x_1-s_1 \hat{q_1}-x_2+s_2 \hat{q_2})}{\varepsilon}}\\ &\times \left[S_\varepsilon(t-s_1, x_1-s_1\hat{q_1}, q_1-k)+\overline{S_\varepsilon}(t-s_1, x_1-s_1\hat{q_1}, q_1+k) \right]\\ &\times \left[S_\varepsilon(t-s_2, x_2-s_2\hat{q_2},q_2+k)+\overline{S_\varepsilon}(t-s_2, x_2-s_2\hat{q_2},q_2-k) \right]. \end{align*}
Accounting finally for the absorption $e^{-\Sigma}$, this yields the following expression for $w_\varepsilon$:
$$ w_\varepsilon(t) = \frac{\sigma_0^2 \eta^3 e^{-2\Sigma}} {4 \pi^3\varepsilon^2} \int_{\Rm^3} dk \hat{R}(2\eta k) |H_\varepsilon(t,k)|^2, $$
where
\begin{align*} &H_\varepsilon(t,k)=\int_0^t \int_{\Rm^3}\int_{\Rm^3} ds dx dq e^{\frac{2 i k\cdot x}{\varepsilon}} \left[S_\varepsilon(s, x, q-k)+\overline{S_\varepsilon}(s, x, q+k) \right] \varphi(x+(t-s) \hat{q},q). \end{align*}
We have finally
\begin{eqnarray*} S_\varepsilon (s, x, p)&=&\frac{1}{2}\sum_{\pm} (\hat k \cdot \hat p \pm 1) \frac{1}{2} \hat p \cdot (\nabla_x w_0^\mu)\left(\frac{x\mp s\hat p}{\varepsilon},|p|-|k_0|\right)\\ && +\frac{i}{2}\sum_{\pm} (\hat k \cdot \hat p \pm 1) |p| w_0^\mu\left(\frac{x\mp s\hat p}{\varepsilon},|p|-|k_0|\right), \end{eqnarray*}
which, substituted in $H_\varepsilon$, yields the expression given in the lemma. This ends the proof.
\subsection{Proof of proposition \ref{varcint}} \label{varproof} We compute formally the leading term in an asymptotic expansion of the variance, which will give us the optimal dependency in the main parameters. The proof involves the analysis of oscillating integrals coupled to localizing terms. We start from \fref{var1}. We treat only the case $\gamma \in(0,1)$; the cases $\gamma=0$ and $\gamma=1$ follow by using the same approach.
Plugging \fref{testf} in \fref{var1}, and assuming the detector is sufficiently large so that we can replace $D_z$ by $\Rm^3$ in a first approximation, we find the following expression for $H_\varepsilon$:
\begin{eqnarray*} &&H_\varepsilon(t=1,k)=H_\varepsilon^1(k)+H_\varepsilon^2(k)\\ &&H_\varepsilon^1(k)= \frac{1}{2}\sum_{\sigma_1,\sigma_2=\pm 1} \int_0^{1} \int ds dq du F_\varepsilon(u-q) \hat k \cdot (\widehat{ q-\sigma_2k}) e^{2 i k\cdot (\sigma_1 s \widehat{q-\sigma_2k}+\Phi_\varepsilon)/\varepsilon} f_{\sigma_2}\left(\frac{\Phi_\varepsilon}{\varepsilon \mu },q\right) \\ &&H_\varepsilon^2(k)= \frac{1}{2}\sum_{\sigma_1,\sigma_2=\pm 1} \sigma_1 \int_0^{1} \int ds dq du F_\varepsilon(u-q) e^{2 i k\cdot (\sigma_1 s \widehat{q-\sigma_2k}+\Phi_\varepsilon)/\varepsilon} f_{\sigma_2}\left(\frac{\Phi_\varepsilon}{\varepsilon \mu},q\right) \end{eqnarray*}
where
\begin{align*} &\Phi_\varepsilon=z-\sigma_1 s(\widehat{q-\sigma_2k})+s\hat{q}-\hat{q}+\hat{u}\\ &f_{\sigma_2}(x,q) = \left(\frac{1}{2} \widehat{q-\sigma_2k} \cdot \nabla_x +i\mu \sigma_2 |q-\sigma_2k| \right)w_0\big(x,\mu(|q-\sigma_2k|-|k_0|)\big). \end{align*}
The dependency of the function $f_{\sigma_2}$ on $k$ and $\mu$ is not made explicit in order to simplify the already heavy notation. Expanding $|H_\varepsilon^1+H_\varepsilon^2|^2$ leads to the following decomposition of $w_\varepsilon$: we can write $w_\varepsilon=w_\varepsilon^1+w_\varepsilon^2+w_\varepsilon^{12}$, where
\begin{eqnarray*} &&w^1_\varepsilon(t=1) = \frac{\sigma_0^2 \eta^3e^{-2\Sigma}} { 4\pi^3\varepsilon^2} \int_{\Rm^3} dk \hat{R}(2\eta k) |H^1_\varepsilon(k)|^2 \end{eqnarray*}
and $w_\varepsilon^2$ and $w_\varepsilon^{12}$ are obtained in the same fashion. We will focus only on the most technical term $w_\varepsilon^1$ since $w_\varepsilon^2+w_\varepsilon^{12}$ lead either to the same leading order in terms of $\varepsilon$, $\gamma$, $\eta$ and $\sigma_0$ or to a negligible contribution. The term $w_\varepsilon^1$ can itself be decomposed as
\begin{eqnarray*} w_\varepsilon^1&=&\int_0^\varepsilon \int_0^\varepsilon ds_1 d s_2 \;(\cdots)+\int_\varepsilon^1 \int_\varepsilon^1 ds_1 ds_2 \; (\cdots)+\int_\varepsilon^1 \int_0^\varepsilon ds_1 ds_2 \; (\cdots)\\ &:=&v_1^\varepsilon+v_2^\varepsilon+v_3^\varepsilon. \end{eqnarray*}
The term $v_3^\varepsilon$ can be shown to be negligible compared to the first two due to the mismatch of the integration domains, and we thus focus on $v_1^\varepsilon$ and $v_2^\varepsilon$. We start with the most difficult term $v_2^\varepsilon$. We give a detailed derivation for the case $\mu=\varepsilon^{-\alpha}$ with $\alpha<1-\gamma$ (so that $\varepsilon \mu \ll \varepsilon^\gamma$) and only state the result for the case $\alpha\geq 1-\gamma$. Let us start by denoting by $(\theta_u^1,\theta_u^2)$ and $(\theta_q^1,\theta_q^2)$ the angles defining $\hat u$ and $\hat q$ with the convention $(\theta_u^1,\theta_u^2) \in (0, 2\pi) \times (0,\pi)$, so that $\hat q=(\cos \theta_q^1 \sin \theta_q^2,\sin \theta_q^1 \sin \theta_q^2,\cos \theta_q^2 )$. After performing the changes of variables $(\theta_u^1,\theta_u^2) \to (\theta_q^1,\theta_q^2)+\varepsilon \mu (\theta_u^1,\theta_u^2)$, we first find that
\begin{equation} \label{change} \hat u \simeq\hat q+\varepsilon\mu e_{1} \theta^1_u + \varepsilon\mu e_2 \theta^2_u \qquad \textrm{with} \qquad e_1 \cdot \hat q=e_2 \cdot \hat q=e_1 \cdot e_2=0, \qquad |e_1|=|e_2|=1.
\end{equation} Setting in addition $s_1 \to \varepsilon s_1$, $s_2 \to \varepsilon s_2$, $|u| \to |q| + \varepsilon^\gamma |u|/r_0$, $k \to \varepsilon k$, $\xi_1=s_1 \hat q_1$ and $\xi_2=s_2 \hat q_2$, we find for the leading term, using the assumptions $\varepsilon \mu \ll \varepsilon^\gamma$, $ \varepsilon\mu^2 \ll 1$ and $\mu \gg 1$: \begin{eqnarray} \nonumber v^\varepsilon_2&\sim& \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)+1-\delta} \mu^4 r_0^4 e^{-2\Sigma}\\ \nonumber&&\times \int_{\Rm^3} \int_{\varepsilon \leq |\xi_1| \leq 1}\int_{\varepsilon \leq |\xi_2| \leq 1}\frac{dk d \xi_1 d\xi_2}{|k|^\delta |\xi_1|^2 |\xi_2|^2} (\hat k \cdot \hat{\xi}_2)(\hat k \cdot \hat{\xi}_2) e^{2i k\cdot \{\xi_1-\xi_2\}} \\ &&\hspace{4cm} \times G_{z}\left(\frac{|\xi_1|}{\varepsilon \mu},\hat \xi_1 \right)\overline{G_{z}}\left(\frac{|\xi_2|}{\varepsilon \mu},\hat \xi_2 \right) \label{v2} \end{eqnarray} where \begin{align*} &G_{z}(|\xi|,\hat \xi)=S(0)\sum_{\sigma_1,\sigma_2=\pm 1} \int dY |q|^4 F(|u| \hat \xi ) f^0_{\sigma_2} (z_{\varepsilon\mu}+(1-\sigma_1)\xi+e_ {1} \theta^1_u + e_2 \theta^2_u,|q|, \hat \xi) \\ &\int dY =\int_{\Rm_+} \int_{\Rm}\int_{\Rm} \int_{\Rm_+} d|q| d\theta^1_u d\theta^2_u d|u|, \qquad e_1 \cdot \hat \xi=e_2 \cdot \hat \xi=e_1 \cdot e_2=0\\ &f^0_{\sigma_2} (x,|q|, \hat \xi)=\left(\frac{1}{2} \widehat{\xi} \cdot \nabla_x +i\mu \sigma_2 |q| \right)w_0\big(x,\mu(|q|-|k_0|)\big), \end{align*} and $z_{\varepsilon \mu}=z/\varepsilon\mu$. We will use the following relation: \begin{equation} \label{delt} \int_{\Rm^3} dk\frac{e^{i k\cdot x}}{|k|^\beta}= \left\{ \begin{array}{l} (2 \pi)^3 \delta_0(x) \qquad \textrm{if} \quad \beta=0,\\ C_\beta|x|^{\beta-3} \qquad \textrm{if} \quad 0<\beta \qquad \textrm{and} \qquad \beta \neq 3,5,\cdots, \end{array} \right. \end{equation} where $\delta_0$ is the Dirac measure at zero and $C_\beta=2^{3-\beta} \pi^{\frac{3}{2}} \Gamma(\frac{3-\beta}{2})(\Gamma(\frac{\beta}{2}))^{-1}$ ($\Gamma$ being the gamma function). Note that \fref{delt} still holds for the case $\beta=3,5,\cdots$ provided the constant $C_\beta$ is adjusted, see \cite{gelfand}. We will not explicit this constant since only the dependency in $x$ matters to us. Hence, we deduce from \fref{delt} that \begin{equation} \label{delt2} \int_{\Rm^3} dk\frac{e^{i k\cdot x}}{|k|^\delta}\hat k_j \hat k_l = C_\delta' \partial^2_{x_j x_l} |x|^{\delta-1} \qquad \textrm{for} \quad \delta \in [0,2], \end{equation} where $C_\delta'$ is a constant. When $\delta \in (0,1)$, setting $\xi_1 \to \varepsilon \xi_1$ and $\xi_2 \to \varepsilon \xi_2$ in \fref{v2}, it comes using \fref{delt2} and $\mu \gg 1$: \begin{eqnarray*} v^\varepsilon_2&\sim& \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)} \mu^4 r_0^4 e^{-2\Sigma}\\ &&\times \int_{1\leq |\xi_1|}\int_{1 \leq |\xi_2|}\frac{d \xi_1 d\xi_2 (\hat \xi_1 \cdot (\xi_1-\xi_2))(\hat \xi_2 \cdot (\xi_1-\xi_2))}{|\xi_1-\xi_2|^{5-\delta} |\xi_1|^2 |\xi_2|^2} G_{z}(0,\hat \xi_1)\overline{G_{z}}(0,\hat \xi_2) \end{eqnarray*} plus a term of same order. The integral above is easily shown to be finite using the Hardy-Littlewood-Sobolev inequality \cite{RS-80-2}. A close look at $G_z$ shows that only the term proportional to $\hat \xi \cdot \nabla_x$ is left because of the sum over $\sigma_2$. Setting finally $|q| \to |k_0|+|q|/\mu$ yields $v^\varepsilon_2\sim \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)} \mu^2 r_0^4 e^{-2\Sigma}$ for $|z|\leq \mu \varepsilon$. 
Because of the term $f_{\sigma_2}^0$, it is not difficult to see that $v^\varepsilon_2$ is mostly supported on $|z|\leq \mu \varepsilon$. When $\delta \in [1,2)$ we use \fref{delt2} and directly send $\varepsilon$ to zero in \fref{v2}. The leading term in $G_{z}$ is obtained for $\sigma_1=1$, and we find $$ v^\varepsilon_2\sim \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)+1-\delta} \mu^4 r_0^4 e^{-2\Sigma}\int_{|\xi_1| \leq 1}\int_{|\xi_2| \leq 1}\frac{d \xi_1 d\xi_2}{|\xi_1-\xi_2|^{3-\delta} |\xi_1|^2 |\xi_2|^2} G^0_{z}(\hat \xi_1)\overline{G^0_{z}}(\hat \xi_2), $$ where $$ G_{z}^0(\hat \xi)= S(0)\sum_{\sigma_2=\pm 1} \int dY |q|^4 F(|u| \hat \xi ) f^0_{\sigma_2} (z_{\varepsilon \mu}+e_ {1} \theta^1_u + e_2 \theta^2_u,|q|, \hat \xi). $$ Again, summation over $\sigma_2$ in $G^0_{z}$ implies that $v^\varepsilon_2\sim \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)+1-\delta} \mu^2 r_0^4 e^{-2\Sigma}$ for $|z|\leq \mu \varepsilon$. The case $\delta=0$ gives $v^\varepsilon_2\sim \sigma_0^2 \eta^{3}\varepsilon^{4(1-\gamma)} \mu^2 r_0^4$, and $v_2^\varepsilon(z)$ is mostly supported on $|z| \leq \varepsilon \mu$. Gathering all the previous results, we find for the case $\mu=\varepsilon^{-\alpha}$ with $\alpha<1-\gamma$: $$ v_2^\varepsilon(z) \sim \sigma_0^2 \eta^{3-\delta}\varepsilon^{4(1-\gamma)+(1-\delta) \wedge 0} \mu^2 r_0^4 e^{-2\Sigma}, \quad |z| \leq \varepsilon \mu, $$ and $ v_2^\varepsilon(z)$ is essentially supported on $|z| \leq \varepsilon \mu$. When $\alpha\geq 1-\gamma$ (so that $\varepsilon \mu \geq \varepsilon^\gamma$), very similar calculations show that for $|z| \leq \varepsilon \mu$, we have $v_2^\varepsilon(z) \sim \sigma_0^2\eta^{3-\delta}\varepsilon^{(1-\delta) \wedge 0}\mu^{-2}e^{-2\Sigma} $, and that $v_2^\varepsilon(z)\ll v_2^\varepsilon(0)$ when $|z| \gg \varepsilon \mu$. Combining the last two results, it finally follows that $$ v^\varepsilon_2(z) \sim \sigma_0^2\eta^{3-\delta} \varepsilon^{4(1-\gamma)+(1-\delta) \wedge 0} \mu^{-2} ((r_0\mu) \wedge \varepsilon^{\gamma-1})^4e^{-2\Sigma},\quad |z| \leq \varepsilon \mu, $$ and $ v_2^\varepsilon(z)$ is mostly supported on $ |z| \leq \varepsilon \mu$. We consider now the term $v_1^\varepsilon$. After the changes of variables $s_1 \to \varepsilon s_1$ and $s_2 \to \varepsilon s_2$, we find \begin{eqnarray*} &&v_1^\varepsilon \sim \sigma_0^2 \eta^3 e^{-2\Sigma}\int_{\Rm^3} dk \hat{R}(2\eta k) |V^1_\varepsilon(k)|^2 \end{eqnarray*} where \begin{eqnarray*} V_\varepsilon^1(k)= \frac{1}{2}\sum_{\sigma_1,\sigma_2=\pm 1} \int_0^{1} \int ds dq du F_\varepsilon(u-q) \hat k \cdot (\widehat{ q-\sigma_2k}) e^{2 i k\cdot (s\hat{q}+(z+\hat{u}-\hat{q})/\varepsilon)} f_{\sigma_2}\left(\frac{z+\hat{u}-\hat{q}}{\varepsilon \mu },q\right) \\ \end{eqnarray*} and $f_{\sigma_2}$ is the same as before. Both cases $\alpha<1-\gamma$ and $\alpha \geq 1-\gamma$ lead to the same order, and we therefore only detail the case $\alpha\geq 1-\gamma$. The change of variables $u \to q+\varepsilon^\gamma u/r_0 $ leads to: \begin{align*} &V_\varepsilon^1(k) \sim \sum_{\sigma_1,\sigma_2=\pm 1} \int_0^{1} \int ds dq \hat F\big(2\varepsilon^{\gamma-1} ((k \cdot \hat q) \hat q-k)/(r_0|q|)\big) \hat k \cdot (\widehat{ q-\sigma_2k}) e^{2 i s k\cdot\hat{q}} f_{\sigma_2}\left(z_{\varepsilon \mu},q\right), \end{align*} where $\hat F$ is the Fourier transform of $F$.
Denoting then as before by $(\theta_k^1,\theta_k^2)$ and $(\theta_q^1,\theta_q^2)$ the angles defining $\hat k$ and $\hat q$, we perform the change of variables $(\theta_q^1,\theta_q^2) \to (\theta_k^1,\theta_k^2)+r_0 \varepsilon^{1-\gamma} (\theta_q^1,\theta_q^2)$, which yields $$ \hat q \simeq \hat k+\varepsilon^{1-\gamma}r_0 e_ {1} \theta^1_q + \varepsilon^{1-\gamma}r_0 e_2 \theta^2_q \qquad \textrm{with} \qquad e_1 \cdot \hat k=e_2 \cdot \hat k=e_1 \cdot e_2=0, \qquad |e_1|=|e_2|=1. $$ The term $V_\varepsilon^1$ then becomes \begin{align*} &V_\varepsilon^1(k) \sim r_0^2 \varepsilon^{2(1-\gamma)} \sum_{\sigma_1,\sigma_2=\pm 1} \int_0^{1} \int ds |q|^2 d|q| d\theta_q^1 d\theta_q^2 \;\hat F\big(2 |k|(e_ {1} \theta^1_q + e_2 \theta^2_q)/|q|\big) \\ & \hspace{6cm}\times \textrm{sign}(|q|-\sigma_2 |k|) e^{2 i s |k|} f_{\sigma_2}\left(z_{\varepsilon \mu},\hat k |q|\right). \end{align*} We now need to consider separately $\sigma_2=1$ and $\sigma_2=-1$ in the expression above. Let us start with $\sigma_2=1$, so that $f_{\sigma_2=1}$ involves the term $w_0(z_{\varepsilon \mu},\mu (||q|-|k||-|k_0|))$, which suggests the change of variables $|q| \to |k|+|k_0| + |q|/ \mu$. This brings a factor $\mu^{-1}$ that cancels with the $\mu$ in the definition of $f_{\sigma_2}$. It remains to make sense of the integral in $k$ in the definition of $v_1^\varepsilon$. Bounded values of $k$ pose no problem, and lead, after taking the square, to a term of overall order $\sigma_0^2 r_0^4 \eta^{(3-\delta)} \varepsilon^{4(1-\gamma)}e^{-2\Sigma}$. We focus therefore on large values of $k$. The integral of the term depending on $s$ behaves as $|k|^{-1}$, while the one in $\hat F$ is essentially independent of $|k|$ when $|k|$ is large. Hence, taking the square, we find a term proportional to $$ \sigma_0^2 r_0^4 \eta^3 \varepsilon^{4(1-\gamma)}e^{-2\Sigma}\int_{|k| \geq 1} dk \hat{R}(2\eta k) |k|^{-2} \sim \sigma_0^2 r_0^4 \eta^3 \varepsilon^{4(1-\gamma)}e^{-2\Sigma}\int_{|k| \geq 1} d|k| S(2\eta k) |\eta k|^{-\delta}. $$ The integral is finite when $\delta>1$ and leads to a factor $\eta^{3-\delta}$. When $\delta \leq 1$, we set $k \to k /\eta$, which leads to a factor $\eta^2$. The term associated to $\sigma_2=1$ is therefore of order $\sigma_0^2 r_0^4\eta^{(3-\delta) \wedge 2} \varepsilon^{4(1-\gamma)}e^{-2\Sigma}$. Consider now the case $\sigma_2=-1$, so that $f_{\sigma_2=-1}$ now involves $w_0(z_{\varepsilon \mu},\mu (||q|+|k||-|k_0|))$. When $|k|>|k_0|$, the second argument in $w_0$ never vanishes, which leads to higher order terms in $\mu^{-1}$. The leading contribution is thus obtained for $|k| \leq |k_0|$. As mentioned above, bounded values of $k$ lead to an overall order of $\sigma_0^2 r_0^4\eta^{(3-\delta)} \varepsilon^{4(1-\gamma)}e^{-2\Sigma}$, which is negligible compared to the case $\sigma_2=1$ when $\eta \ll 1$, or of the same order when $\eta=1$. It follows therefore that $v_1^\varepsilon$ is of order $\sigma_0^2 r_0^4\eta^{(3-\delta) \wedge 2} \varepsilon^{4(1-\gamma)}e^{-2\Sigma}$, and, with the same arguments as for $v_2^\varepsilon$, that it is mostly supported on $|z| \leq \varepsilon \mu$. Owing to this, taking the maximum with $v_2^\varepsilon$ gives the result of the proposition. \subsection{Proof of proposition \ref{varcintn}} \label{secvarcintn} We use the same notation as in section \ref{flucmod} and only treat the case $\gamma \in (0,1)$ for brevity.
We have \begin{eqnarray*} V^C_n(z)&=&\sigma_n^2 \mathbb E \left\{I^C[a^\varepsilon_n]^2(z) \right\}\\ &\sim& \sigma_n^2 \int_{\Rm^{12}}dp d q dk dk' F_\varepsilon(k-q) F_\varepsilon(k'-p) \mathbb E \{a_n^\varepsilon(1,z+\hat{k},q) a_n^\varepsilon(1,z+\hat{k}',p)\}.\end{eqnarray*} Moreover \begin{eqnarray*} a_n^\varepsilon(1,x,q)&=&\textrm{Tr} (W[\mathbb E\{u\},\mathbf n^\varepsilon]+W[\mathbf n^\varepsilon, \mathbb E\{u\}])^T(1,x,q) B^+(q)\\ &=& w[b_+ \cdot \mathbb E\{u\}, b_+ \cdot\mathbf n^\varepsilon ]+w[b_+ \cdot\mathbf n^\varepsilon,b_+ \cdot \mathbb E\{u\}]\\ &:=&\frac{1}{2}(w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ]+w[\mathbf n^\varepsilon_p,\mathbb E\{p\}])+ w_\bv, \end{eqnarray*} where $w[\cdot,\cdot]$ denotes the scalar Wigner transform and $w_\bv$ contains the remaining terms not included in the first one. We will not analyse the contributions to the variance of the terms involving $w_\bv$ since they have the same structure as the one involving $w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ]+w[\mathbf n^\varepsilon_p,\mathbb E\{p\}]$ and yield similar results. Then, \begin{align}\nonumber &\mathbb E \{a_n^\varepsilon(1,z,q) a_n^\varepsilon(1,z',p)\}\\\nonumber &=\frac{1}{4 }\mathbb E \left\{(w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ]+w[\mathbf n^\varepsilon_p,\mathbb E\{p\}])(z,q)(w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ]+w[\mathbf n^\varepsilon_p,\mathbb E\{p\}])(z',p) \right\}+R_\varepsilon\\\label{Vnp} &=\frac{1}{4 }\mathbb E \{w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ](z,p)w[\mathbb E\{p\}, \mathbf n_p^\varepsilon ](z',q)\} +R'_\varepsilon. \end{align} Again, we do not treat the term $R'_\varepsilon$ since it has the same structure as the first term above. We denote by $V^C_{n,p}$ the contribution to the variance of the first term of \fref{Vnp}. It reads, with the notation $u=z+\hat{k}$ and $u'=z+\hat{k}'$: \begin{align*} &V^C_{n,p} \sim \sigma_n^2 e^{-\Sigma}\int_{\Rm^{18}}dp d q dk dk' dy dy'F_\varepsilon(k-q) F_\varepsilon(k'-p) e^{i q \cdot y}e^{i p \cdot y'}\\ &\hspace{4cm}p_B(1,u-\frac{\varepsilon}{2} y)p_B(1,u'-\frac{\varepsilon}{2} y')\Phi\left(\frac{u-u'}{\varepsilon}+\frac{1}{2}(y-y')\right). \end{align*} In Fourier variables and after setting $\xi \to \varepsilon^{-1}\xi$, $\xi' \to \varepsilon^{-1}\xi'$ and writing the cosines as sum of exponentials, this translates into: \begin{align*} &V^C_{n,p} \sim \sigma_n^2 e^{-\Sigma}\sum_{\sigma_1, \sigma_2=\pm 1} \int_{\Rm^{15}}d q dk dk' d\xi d\xi' F_\varepsilon(k-q) F_\varepsilon(k'+q-(\xi+\xi')/2) e^{i \frac{u \cdot \xi}{\varepsilon}}e^{i\frac{u' \cdot \xi'}{\varepsilon}}\\ &\hspace{4cm}e^{i \frac{u-u'}{\varepsilon}\cdot (\xi - 2q)} e^{i \frac{\sigma_1|\xi|}{\varepsilon}} e^{i\frac{\sigma_2 |\xi'|}{\varepsilon}}\hat \Phi\left(\xi - 2q\right) g_\mu(|\xi|-|k_0|)g_\mu(|\xi'|-|k_0|). \end{align*} Above, we used the fact that $\hat p_B(t,\xi)= \varepsilon^3 g_\mu(\varepsilon(|\xi|-|k_0|/\varepsilon)) \cos t |\xi| $ with $g_\mu(x)=\mu g(\mu x)$. We then decompose $\xi$ and $\xi'$ as $\xi=\xi_\parallel \hat k+ \xi_\bot$, $\xi'=\xi_\parallel' \hat k+ \xi_\bot'$ with $\xi_\bot \cdot k=\xi_\bot' \cdot k=0$. Denoting by $(\theta_k^1,\theta_k^2)$ and $(\theta_{k'}^1,\theta_{k'}^2)$ the angles defining $\hat k$ and $\hat k'$ and performing the changes of variables $(\theta_{k'}^1,\theta_{k'}^2) \to (\theta_{k}^1,\theta_{k}^2)+\varepsilon (\theta_{k'}^1,\theta_{k'}^2)$, we find $\hat k'\simeq \hat k+ \varepsilon e_1 \theta_{k'}^1+\varepsilon e_2 \theta_{k'}^2$ with $e_1 \cdot \hat k=e_2 \cdot \hat k=0$ and $|e_1|=|e_2|=1$. 
Together with $\xi_\bot \to \sqrt{\varepsilon} \xi_\bot$, $\xi'_\bot\to \sqrt{\varepsilon} \xi_\bot'$, as well as $|\xi_\parallel \hat k+ \sqrt{\varepsilon}\xi_\bot| \simeq |\xi_\parallel| (1+\varepsilon |\xi_\bot|^2/(2 |\xi_\parallel|))$, it comes, since $\mu^2 \ll \varepsilon^{-1} $ and $\varepsilon \ll \varepsilon^\gamma$ as $\gamma \in (0,1)$: \begin{align*} &V^C_{n,p} \sim \sigma_n^2 \varepsilon^{4} e^{-\Sigma}\sum_{\sigma_1, \sigma_2=\pm 1} \int dX F_\varepsilon(k-q) F_\varepsilon(\hat k|k'|+q-\hat k (|\xi_\parallel|+|\xi'_\parallel|)/2) \\ &\hspace{4cm}\times e^{i(\xi_\parallel+\sigma_1 |\xi_\parallel|)/\varepsilon}e^{i (\xi_\parallel'+\sigma_2 |\xi_\parallel'|)/\varepsilon}e^{i \sigma_1 |\xi_\bot|^2/(2|\xi_\parallel|)}e^{i \sigma_2 |\xi'_\bot|^2/(2|\xi'_\parallel|)} \\ &\hspace{4cm}\times e^{-i (e_1 \theta^1_{k'}+ e_2 \theta^2_{k'}) \cdot (\xi_\parallel \hat k-2q)} e^{i z \cdot (\xi_\varepsilon-\xi'_\varepsilon)/\varepsilon} \\ &\hspace{4cm}\times \hat \Phi(\xi_\parallel \hat k- 2q) g_\mu(|\xi_\parallel|-|k_0|)g_\mu(|\xi'_\parallel|-|k_0|)\\ &\int dX=\int_{\Rm^3}\int_{\Rm^3}\int_{\Rm_+}\int_{\Rm}\int_{\Rm} \int_{\Rm^3}\int_{\Rm^3}d q dk d|k'| d\theta^1_{k'}d\theta^1_{k'} d\xi d\xi'|k'|^2. \end{align*} Above, we used the notation $\xi_\varepsilon=\xi_\parallel \hat k+ \sqrt{\varepsilon}\xi_\bot$ and $\xi'_\varepsilon=\xi'_\parallel \hat k+ \sqrt{\varepsilon}\xi'_\bot$. The leading term is obtained for $\xi_\parallel+\sigma_1 |\xi_\parallel|=\xi'_\parallel+\sigma_2 |\xi_\parallel'|=0$ so that the first two phases vanish. There is otherwise some averaging that leads to a higher order contribution. Writing $q=q_\parallel \hat k+ q_\bot$, $q_\bot \cdot k=0$, integrating in $(\theta^1_{k'},\theta^2_{k'})$ the phase term involving $(e_1 \theta^1_{k'}+ e_2 \theta^2_{k'}) \cdot (\xi_\parallel \hat k-2q)=-2(e_1 \theta^1_{k'}+ e_2 \theta^2_{k'}) \cdot q$, we obtain a Dirac measure enforcing that $q_\bot =0$. As a consequence, using the fact that $F(x)= F(|x|)$ and $\hat \Phi(k)= \hat \Phi(|k|)$: \begin{align*} &V^C_{n,p} \sim \sigma_n^2 \varepsilon^{4}e^{-\Sigma}\sum_{\pm} \int d Y F_\varepsilon\big(||k|\mp q_\parallel|\big) F_\varepsilon\big(||k'|\pm q_\parallel-\frac{1}{2} (|\xi_\parallel|+|\xi'_\parallel|)|\big) \\ &\hspace{3cm}\times e^{i |\xi_\bot|^2/(2|\xi_\parallel|)}e^{i |\xi'_\bot|^2/(2|\xi'_\parallel|)}e^{i z \cdot (\xi_\varepsilon-\xi'_\varepsilon)/\varepsilon} \\ &\hspace{3cm}\times \hat \Phi\big(|\xi_\parallel\mp 2q_\parallel|\big) g_\mu(|\xi_\parallel|-|k_0|)g_\mu (|\xi'_\parallel|-|k_0|)\\ &\int dY=\int_{\Rm_+}\int_{\Rm_+}\int_{\Rm_+}\int_{\Rm^3}\int_{\Rm^3}\int_{\Rm^3}\int_{\Rm_+} dq_\parallel d\xi_\parallel d\xi_\parallel' d\xi_\bot d\xi'_\bot dk d |k'| |k'|^2. \end{align*} The final expression is obtained by setting $|k| \to \varepsilon^\gamma |k|/r_0\pm q_\parallel$, $q_\parallel\to \varepsilon^\gamma q_\parallel/r_0\mp |k'|\pm (|\xi_\parallel|+|\xi'_\parallel|)/2$, $|\xi_\parallel| \to \mu^{-1}|\xi_\parallel|+|k_0|$ and $|\xi'_\parallel| \to \mu^{-1}|\xi'_\parallel|+|k_0|$, which leads to, using that $\mu \gg 1$: $$ V^C_{n,p}(z) \sim \sigma_n^2 \varepsilon^{4(1-\gamma)} r_0^4 e^{-\Sigma} e^{i\frac{|z|^2}{\varepsilon}}\int_{\Rm}\int_{\Rm} d\xi_\parallel d\xi'_\parallel J_0 \big(|z||(\xi_\parallel-\xi_\parallel')/(\mu \varepsilon)\big)G(\xi_\parallel,\xi'_\parallel), $$ for some smooth function $G$ and where $J_0$ is the zero-th order Bessel function of the first kind. 
Hence, $V^C_{n,p}(z) \sim \sigma_n^2 \varepsilon^{4(1-\gamma)}r_0^4 e^{-\Sigma} $ for $|z| \leq \varepsilon \mu$ and $V^C_{n,p}(z)\ll V^C_{n,p}(0)$ when $|z|\gg \varepsilon \mu$ so that $V_{n,p}^C$ is essentially supported on $|z| \leq \varepsilon \mu$. This ends the proof. \subsection{Proof of proposition \ref{propvarK}} \label{proofpropvarK} We use here the notation of section \ref{meas_imag}. With $\delta p^\varepsilon$ the random fluctuations defined in \fref{randH}, the variance of the WB functional admits the expression $$ V^W(z)=\frac{1}{(2 \pi)^3} \int_{\Rm^3} dk dk' e^{i (k-k') \cdot z} \cos |k|\cos |k'| \mathbb E \{ (\mathcal F {\mathbbmss{1}}_D \delta p^\varepsilon)(t=1,k) \overline{(\mathcal F {\mathbbmss{1}}_D \delta p^\varepsilon)}(t=1,k')\}. $$ Similarly to the CB functional, we neglect the finite size of the detector as a first approximation, so that $V^W$ reads $$ V^W(z)\sim \int_{\Rm^3} dk dk' e^{i (k-k') \cdot z} \cos |k|\cos |k'| \mathbb E \{ (\mathcal F \delta p^\varepsilon)(t=1,k) \overline{(\mathcal F \delta p^\varepsilon)}(t=1,k')\}. $$ We start by computing $(\mathcal F \delta p^\varepsilon) (t,k)$. For a regular function $f$, we have $$ (\Box^{-1} f)=\int_0^t G(t-s,\cdot) * f(s,\cdot) ds, \quad \mbox{so } (\mathcal F (\Box^{-1} f)(t,k)=\int_0^t \frac{ds}{|k|} \sin |k| (t-s)\mathcal F f(s,k) ds. $$ Using \fref{balK} and \fref{randH}, this implies that \begin{eqnarray*} (\mathcal F \delta p^\varepsilon) (t,k)&=&-\sigma_0 e^{-t\Sigma/2}\int_0^t \frac{ds}{|k|} \sin |k| (t-s) \mathcal F \left(V \left(\frac{\cdot}{\eta \varepsilon} \right)\frac{\partial^2 p^2_B}{\partial t^2}\right)(s,k)\\ &=&-\sigma_0 (\eta \varepsilon)^3 e^{-t\Sigma/2}\int_0^t \frac{ds}{|k|} \sin |k| (t-s) (\hat V(\varepsilon \eta \cdot) *_k \widehat{\Delta p^B})(s,k). \end{eqnarray*} Recall that $p^B=\partial_t G(t,\cdot) * p_0^\varepsilon$, so that $\Delta p^B=\partial_t G(t,\cdot) * \Delta p_0^\varepsilon$ and $$\widehat{\Delta p^B}(s,k)= -\varepsilon^3|k|^2 g_\mu\left(\varepsilon (|k|-\frac{|k_0|}{\varepsilon})\right)\cos |k|s, \qquad g_\mu(x)=\mu g(\mu x). $$ Hence $$ (\mathcal F \delta p^\varepsilon) (t,k)=\sigma_0 \eta^3 \varepsilon^6e^{-\frac{t\Sigma}{2}}\int_0^t \int_{\Rm^3} ds dp |k| \sin |k| (t-s) \hat{V}(\varepsilon \eta p) g_\mu\left(\varepsilon (|k-p|-\frac{|k_0|}{\varepsilon})\right) \cos |k-p|s. $$ Therefore, using that $\mathbb E \{\hat{V}(\xi)\hat {V}(\nu) \}= (2 \pi)^{3}\hat{R}(\xi) \delta_0(\xi+\nu)$, we find for the second moment \begin{align*} &\mathbb E \{(\mathcal F \delta p^\varepsilon) (t=1,k)\overline{(\mathcal F \delta p^\varepsilon)} (t=1,k')\}\sim \\ &\sigma_0^2 (\eta \varepsilon)^3 \varepsilon^6\int_0^{1}\int_0^{1} \int_{\Rm^3} ds ds' dp|k||k'| \sin |k| (1-s) \sin |k'| (1-s') f(k,k',p,s), \end{align*} where $$ f(k,k',p,s)=e^{-\Sigma}\hat{R}(\varepsilon \eta p) g_\mu\left(\varepsilon (|k-p|-\frac{|k_0|}{\varepsilon})\right) \overline{g_\mu}\left(\varepsilon (|k'-p|-\frac{|k_0|}{\varepsilon})\right) \cos |k-p|s \cos |k'-p|s'. $$ Hence, the variance of the functional reads \begin{eqnarray} \nonumber V^W(z)&\sim& \sigma_0^2 \eta^3 \varepsilon^9 \int_0^1\int_0^1 \int_{\Rm^9} ds ds' dp dk dk'|k||k'|e^{i (k-k') \cdot z}\sin |k| (1-s) \sin |k'| (1-s') \\ &&\cos |k|\cos |k'| f(k,k',p,s). 
\label{vartrd} \end{eqnarray} Setting, in this order, $k' \to p+ \varepsilon^{-1}k'$, $k \to p+ \varepsilon^{-1}k$, as well as $p \to \varepsilon^{-1} p$, and writing the sines and cosines as sums of complex exponentials, we find \begin{align*} &V^W(z) \sim \frac{ \sigma_0^2 \eta^3}{\varepsilon^{2}} \sum_{\sigma_1, \sigma_2,\sigma_3,\sigma_4,\sigma_5,\sigma_6=\pm 1} \sigma_3 \sigma_4 \int dX |k+p| |k'+p| h_\mu(k',p,k)\\ &\hspace{1cm} \times \exp \frac{i}{\varepsilon}\left\{ \sigma_1 |k+p|+\sigma_2 |k'+p|\right\}\exp \frac{i}{\varepsilon}\left\{ \sigma_3 | k+p|+\sigma_4 |k'+p|\right\}\\ & \hspace{1cm}\times \exp \frac{i}{\varepsilon}\left\{(\sigma_5 |k|-\sigma_3 |k+p|)s\right\}\exp \frac{i}{\varepsilon}\left\{(\sigma_6 |k'|-\sigma_4 |k'+p|) s'\right\}, \end{align*} where $$ h_\mu(k',p,k)=e^{-\Sigma}e^{i (k-k')\cdot z/\varepsilon}g_\mu(|k|-|k_0|) \overline{g_\mu}(|k'|-|k_0|)\hat R(\eta p) $$ and $$ \int dX=\int_0^{1}\int_0^1\int_{\Rm^3}\int_{\Rm^3} \int_{\Rm^3} ds ds' d k dk' dp. $$ The first two oscillating phases in $V^W$ compensate directly when $\sigma_1=-\sigma_3$ and $\sigma_2=-\sigma_4$. The phases are otherwise strictly positive, which leads to some averaging. The leading order is therefore obtained for $\sigma_1=-\sigma_3$ and $\sigma_2=-\sigma_4$. We then use the following short time - long time decomposition of $V^W$: \begin{align*} &V^W(z) = \int_0^\varepsilon\int_0^\varepsilon ds ds'(\cdots)+ \int^1_\varepsilon\int_\varepsilon^1 ds ds'(\cdots)+\int_0^\varepsilon\int^1_\varepsilon ds ds'(\cdots)+\int_\varepsilon^1\int_0^\varepsilon ds ds'(\cdots)\\ &:=\sum_{i=1}^4 V_i(z). \end{align*} The terms $V_3$ and $V_4$ can be shown to be negligible compared to the first two because of the different integration domains. We consequently focus only on the most interesting terms $V_1$ and $V_2$. Let us start with the most technical term $V_2$. We perform a standard stationary phase analysis in the variable $p$. We are looking for a point $p_0$ such that $\sigma_3 \widehat{k+p_0} s+\sigma_4 \widehat{k'+p_0} s'=0$ (first order term in the phase), together with $\sigma_5 |k|-\sigma_3 |k+p_0|=\sigma_6 |k'|-\sigma_4 |k'+p_0|=0$ (zero order term). This suggests that $\sigma_5=\sigma_3$, $\sigma_6=\sigma_4$, $p_0=0$ and $\sigma_3 \widehat{k} s=-\sigma_4 \widehat{k'} s'$. Defining $\xi_1= s \hat k$, $\xi_2= s' \hat k'$, and performing the changes of variables $p \to \sqrt{\varepsilon} p$, $\xi_1=-\sigma_3 \sigma_4 \xi_2+\sqrt{\varepsilon} \xi_1$ leads, once all computations are done, to the following leading term: \begin{align*} &V_2(z) \sim \sigma_0^2 \eta^{3-\delta} \varepsilon^{1-\frac{\delta}{2}}e^{-\Sigma} \sum_{\sigma_3,\sigma_4=\pm 1} \sigma_3 \sigma_4 \int dX^\varepsilon |k|^3 |k'|^3 |\xi_1|^{-\delta} |\xi_2|^{\delta/2-4} e^{i\, \textrm{sign}(\sigma_3/(2|k|)+\sigma_4/(2|k'|)) |\xi_1|^2} \\ & \hspace{7cm} \times |\sigma_3/(2|k|)+\sigma_4/(2|k'|)|^{\frac{\delta}{2}} |\sin (\hat \xi_1 \cdot \hat \xi_2)|^\delta\\ & \hspace{7cm} \times e^{-i (\sigma_3 \sigma_4|k|+|k'|) \hat \xi_2 \cdot z/\varepsilon} g_\mu(|k|-|k_0|) \overline{g_\mu}(|k'|-|k_0|)\\ &\int dX^\varepsilon=\int_{\Rm^3} \int_{\varepsilon \leq |\xi_2| \leq 1}\int_{\Rm_+} \int_{\Rm_+} d\xi_1 d\xi_2 d|k| d|k'|.
\end{align*} Setting finally $\xi_2 \to \varepsilon \xi_2$, $|k|\to |k_0|+ |k|/\mu$, $|k'|\to |k_0|+ |k'|/\mu$, we obtain \begin{align*} &V_2(z) \sim \sigma_0^2 \eta^{3-\delta} e^{-\Sigma} \int_{S^2} d\hat \xi_2 e^{-2i |k_0| \hat \xi_2 \cdot z/\varepsilon} \left|\hat g \left(\frac{z \cdot \hat \xi_2}{\varepsilon \mu}\right)\right|^2. \end{align*} The function above has a structure very similar to that of the initial condition, and is therefore mostly supported on $|z| \leq \varepsilon \mu$. Let us consider now the term $V_1$. After the changes of variables $s \to \varepsilon s$ and $s' \to \varepsilon s'$, we find (recall that we have for the leading order that $\sigma_1=-\sigma_3$ and $\sigma_2=-\sigma_4$) \begin{align*} &V_1(z) \sim \sigma_0^2 \eta^3 \sum_{\sigma_3,\sigma_4,\sigma_5,\sigma_6=\pm 1} \sigma_3 \sigma_4 \int dX |k+p| |k'+p| h_\mu(k',p,k)\\ & \hspace{1cm}\times \exp i\left\{(\sigma_5 |k|-\sigma_3 |k+p|)s\right\}\exp i \left\{(\sigma_6 |k'|-\sigma_4 |k'+p|) s'\right\}. \end{align*} When $\eta=1$, $V_1$ is of the same order as $V_2$. When $\eta \ll 1$, we will see that $V_1$ is the leading term. Remarking first that \begin{align*} &\sigma_3 |k+p| \exp i\left\{(\sigma_5 |k|-\sigma_3 |k+p|)s\right\}= \sigma_5 |k| \exp i\left\{(\sigma_5 |k|-\sigma_3 |k+p|)s\right\}\\ & \hspace{7cm}- \frac{1}{i}\frac{d}{ds} \exp i\left\{(\sigma_5 |k|-\sigma_3 |k+p|)s\right\}, \end{align*} with a similar observation for the other exponential, $V^W_1$ can be split into four terms. It can be shown that the leading one is the one involving the product of the derivatives, so that \begin{align*} &V_1(z) \sim \sigma_0^2 \eta^3 \sum_{\sigma_3,\sigma_4,\sigma_5,\sigma_6=\pm 1} \int_{\Rm^9} dk dk' dp h_\mu(k',p,k)\\ & \hspace{1cm}\times \left(\exp i\left\{(\sigma_5 |k|-\sigma_3 |k+p|)\right\}-1 \right)\left(\exp i \left\{(\sigma_6 |k'|-\sigma_4 |k'+p|)\right\}-1 \right). \end{align*} With the changes of variables $|k|\to |k_0|+ |k|/\mu$, $|k'|\to |k_0|+ |k'|/\mu$ and $p \to p/\eta$, we find \begin{align*} &V_1(z) \sim \sigma_0^2 e^{-\Sigma} \int_{S^2}\int_{S^2} d\hat k d \hat k' e^{i (\hat k-\hat k') \cdot z/\varepsilon} \left|\hat g \left(\frac{(\hat k-\hat k') \cdot z}{\varepsilon \mu}\right)\right|^2\\ &\hspace{3cm} \sum_{\sigma_3,\sigma_4=\pm 1}\int_{\Rm^3} dp \hat R(p) \left( e^{ - i \sigma_3 |p| /\eta}-1 \right)\left(e^{-i \sigma_4 |p|/\eta}-1 \right). \end{align*} Terms in the expression above involving an oscillating exponential are negligible, which shows that $V_1(0) \sim \sigma_0^2 e^{-\Sigma}$, and for the same reason as $V_2$, $V_1$ is mostly supported on $|z| \leq \varepsilon \mu$. The leading term is therefore $V_1$, which ends the proof. \subsection{Proof of proposition \ref{propvarKn}} \label{proofpropvarKn} The proof is straightforward: the variance of the WB functional for the noise contribution admits the expression $$ V^W_n(z)=\frac{\sigma_n^2}{(2 \pi)^3} \int_{\Rm^3} dk dk' e^{i (k-k') \cdot z} \cos |k|\cos |k'| \mathbb E \{ (\mathcal F {\mathbbmss{1}}_D n_p^\varepsilon)(t=1,k) \overline{(\mathcal F {\mathbbmss{1}}_D n_p^\varepsilon)}(t=1,k')\}. $$ Neglecting the finite size of the detector in a first approximation leads to $$ V^W_n(z)\sim \sigma_n^2 \varepsilon^3 \int_{\Rm^3} dk (\cos |k|)^2 \hat \Phi(\varepsilon k) \sim \sigma_n^2. $$ \section{Conclusion} \label{conc} This work is concerned with the comparison in terms of resolution and stability of prototype wave-based and correlation-based imaging functionals.
In the framework of 3D acoustic waves propagating in a random medium with possibly long-range correlations, we obtained optimal estimates of the variance and the SNR in terms of the main physical parameters of the problem. In the radiative transfer regime, we showed that for an identical cross-range resolution, the CB and WB functionals have a comparable SNR. The CB functional is shown to offer a better SNR provided the resolution is lowered, which is achieved by calculating correlations over a domain that is small compared to the detector. This is the classical stability/resolution trade-off. We obtained moreover that the minimal central wavelength $\lambda_m$ that the functionals could accurately reconstruct was identical for the two functionals in the regime of weak fluctuations in the random medium, and that in the case of larger fluctuations, the CB functional offered a better (smaller) $\lambda_m$ (resolution) than the WB functional. We also investigated the effects of long-range correlations in the complex medium. We showed that coherent imaging became difficult to implement because the mean free path was very small and therefore the measured signal was too weak compared to additive, external noise. In such a case, transport-based imaging with lower resolution is a good alternative to wave-based (coherent) imaging. This will be investigated in more detail in future works. \section*{Acknowledgment} This work was supported in part by grant AFOSR NSSEFF- FA9550-10-1-0194. \end{document}
arXiv
Is there a good general definition of "sheaves with values in a category"? Asked 1 year, 9 months ago Let $\mathcal{A}$ be a category. There is a common definition of "sheaves with values in $\mathcal{A}$", which is what one obtains by taking the Grothendieck-style definition of "sheaf of sets" (i.e. in terms of presheaves satisfying a certain limit condition with respect to all covering sieves) and blithely replacing $\textbf{Set}$ with $\mathcal{A}$. In my view, this is a bad definition if we do not assume $\mathcal{A}$ is sufficiently nice – say, locally finitely presentable. When $\mathcal{A}$ is locally finitely presentable, we obtain various properties I consider to be desiderata for a "good" definition of "sheaves with values in $\mathcal{A}$", namely: The properties of limits and colimits in the category of sheaves on a general site with values in $\mathcal{A}$ are "similar" to those of $\mathcal{A}$ itself. (I am being vague here because even when $\mathcal{A}$ is locally finitely presentable, the category of sheaves with values in $\mathcal{A}$ may not be locally finitely presentable – this already happens for $\mathcal{A} = \textbf{Set}$.) The category of sheaves on a site $(\mathcal{C}, J)$ with values in $\mathcal{A}$ is (pseudo)functorial in $(\mathcal{C}, J)$ with respect to morphisms of sites. (By "morphism of sites" I mean the notion that contravariantly induces geometric morphisms.) The construction respects Morita equivalence of sites, i.e. factors through the (bi)category of Grothendieck toposes. The construction respects "good" (bi)colimits in the (bi)category of Grothendieck toposes, i.e. sends them to (bi)limits of categories. (I don't know what "good" should mean here, but at minimum it should include coproducts. When $\mathcal{A}$ is locally finitely presentable, there is a classifying topos, so in fact the construction respects all (bi)colimits.) The category of sheaves on the point with values in $\mathcal{A}$ is canonically equivalent to $\mathcal{A}$. The category of sheaves on the Sierpiński space with values in $\mathcal{A}$ is canonically equivalent to the arrow category of $\mathcal{A}$. Question. What is a (the?) "good" definition of "sheaves with values in $\mathcal{A}$"? ... when $\mathcal{A}$ is finitely accessible, not necessarily cocomplete, e.g. the category of Kan complexes, or the category of divisible abelian groups? ... when $\mathcal{A}$ is an abelian category, not necessarily accessible, e.g. the category of finite abelian groups, or the category of finitely generated abelian groups? ... when $\mathcal{A}$ is a Grothendieck abelian category, not necessarily locally finitely presentable? Perhaps something like right Kan extension along the inclusion of the (bi)category of presheaf toposes into the (bi)category of Grothendieck toposes might work – but the existence of toposes with no points suggests it may not – but it would be nice to have a somewhat more concrete description. There is a temptation to strengthen desideratum 6 to require that the category of presheaves on a (small) category $\mathcal{C}$ be equivalent to the category of functors $\mathcal{C}^\textrm{op} \to \mathcal{A}$, but this does not appear to be a good idea. As Simon Henry remarks, if $\mathcal{C}^\textrm{op}$ is filtered, then the unique functor $\mathcal{C} \to \mathbf{1}$ is a morphism of sites corresponding to a geometric morphism that has a right adjoint (i.e. 
the inverse image functor itself has a left adjoint that preserves finite limits), so contravariant (bi)functoriality with respect to geometric morphisms forces the induced $\mathcal{A} \to [\mathcal{C}^\textrm{op}, \mathcal{A}]$ to have a left adjoint, i.e. $\mathcal{A}$ must have colimits of shape $\mathcal{C}^\textrm{op}$. This is the same argument that shows that the category of points of a topos must have filtered colimits. Since I am looking for a construction where $\mathcal{A}$ is not necessarily the category of models of a geometric theory, I conclude that I cannot require the category of presheaves on $\mathcal{C}$ to be $[\mathcal{C}^\textrm{op}, \mathcal{A}]$. ct.category-theory higher-category-theory topos-theory edited Apr 6, 2021 at 10:55 Zhen Lin asked Apr 5, 2021 at 10:25 Zhen Lin $\begingroup$ I'm not sure I agree with your claim that the "naive" way of defining sheaves in a category does not give you these properties. As long as you restrict to categories with all limits they are essentially all satisfied (except maybe 1, which is very vague). Of course, trying to define sheaves with values in a category that does not have limits is probably a bad idea, as the definition involves plenty of limits. $\endgroup$ – Simon Henry $\begingroup$ I started writing some details in an answer because it was too long for a comment. $\endgroup$ $\begingroup$ I have a bad feeling about this question (although I like it). It's like chasing the elephant without a picture of the elephant. You might end up framing a tiger instead. Why isn't 5 replaced, more generally, by "the category of sheaves on a discrete category $X$, with discrete topology, is canonically equivalent to an $X$-fold product of copies of $\cal A$"? $\endgroup$ – fosco $\begingroup$ Just as an interesting example, here is a "definition" that fits all your criteria: As discussed, wanting copowers forces filtered colimits to exist. Assuming that $\mathcal{A}$ has filtered colimits, you can define for a topos $T$, $Sh(T,\mathcal{A})$ as the category of functors $Pt(T) \to \mathcal{A}$ that preserve filtered colimits (where $Pt(T)$ is the category of points of $T$, which always has directed colimits). This satisfies all your requirements (with a restriction on (4)). Moreover, if finite limits and filtered colimits commute in $\mathcal{A}$, then the $f^*$ are left exact. $\endgroup$ $\begingroup$ That is certainly an interesting construction! It shows that my list doesn't capture "cohesiveness": by going via the category of points, for example Hausdorff spaces become identified with their underlying set of points, which is definitely not desirable. Hmmm... $\endgroup$ – Zhen Lin In my view, the correct notion of "sheaf of Xs" is "internal X in the topos (or $\infty$-topos) of sheaves of sets (or spaces)". (I mentioned this previously on MO here.) Since sheaves of sets are a limit theory, if X is also defined by a limit theory (i.e. the category of Xs is locally presentable), then by commutation of limits this is the same as a sheaf of Xs in the naive sense. But for other values of X it gives different answers. In fact, the answer it gives may depend on exactly how the theory of X is presented; but that's reasonable because sometimes there is more than one correct notion of "sheaf of Xs" (equivalently, there is more than one version of X in the internal constructive logic of a topos). For instance: If X = fields, there are discrete fields, Heyting fields, and residue fields.
I think discrete fields are the one that corresponds to viewing fields as models of a limit-colimit sketch (i.e. as an accessible category), but the others are often more useful (e.g. Heyting fields include the sheaf of continuous real-valued functions on a topological space). The case of X = Kan complexes has already been mentioned in other answers. Although in general once you're talking about homotopy theory, it's better to incorporate the homotopy theory into the ambient $\infty$-topos and work with stacks. If X = finite abelian groups, there are different notions of finite object in a topos. If X = topological spaces, you can internalize that directly, but often more useful is to internalize the notion of locale -- for instance, a "sheaf of locales" on a sufficiently nice topological space $Y$ is equivalent to a space over $Y$. If X = local rings, written as a geometric theory, this definition gives you the generally accepted definition of "sheaf of local rings", i.e. a sheaf of rings whose stalks are local. This definition of "sheaf of Xs" satisfies your criteria (3) and (5). It also satisfies your criterion (1) in as strong a way as I think could be expected: the category of internal Xs in a topos behaves exactly like the ordinary category of Xs, as long as the latter is interpreted using constructive logic. And it satisfies your criteria (2), (4), and (6) if the theory of Xs is geometric, hence has a classifying topos -- which I think is the most general situation in which one can expect these properties to hold. (Note, by the way, that your criterion (6), as well as the stronger version referring to all presheaf toposes, is a special case of your (4), since presheaves on $C$ are the Cat-enriched copower of the terminal topos by $C$ in the bicategory of toposes.) Mike ShulmanMike Shulman $\begingroup$ It should be noted however that in the case where condition (4) & (6) are satisfied, it is for a notion of morphisms that is forced upon you by the definition of objects you used. For example, for finite sets (or finite group) it is going to be either surjection between finite set (using Kuratowski finitness) or bijection between finite sets only (using "cardinal finite") and there is no way to have finite sets/groups with all morphisms between them from a geometric theory. $\endgroup$ $\begingroup$ I, too, am of the opinion that the correct definition is supplied by internal logic. The difficulty with this approach is that it is "intensional", in the sense that the X in "sheaves of X" needs to have an a priori definition in terms of logic (whether the traditional kind or the (co)limit sketch kind); what I am hoping for is an "extensional" approach where X can be a "black box" category. As you say, going from "intensional" to "extensional" is lossy, but perhaps there is a canonical "intension" for every "extension" that is maximal or minimal or otherwise universal in some sense. $\endgroup$ $\begingroup$ @SimonHenry Indeed! I believe this is an issue for local rings too -- you don't get the "local homomorphisms" from a classifying topos. $\endgroup$ – Mike Shulman $\begingroup$ Curiosity and, to be honest, a wish for a good general definition that could be presented to "ordinary" mathematicians who seem to be averse to "logic". (This is perhaps stretching the notion of "ordinary" a bit.) The fact that the naïve definition is repeated in many places without any caveats is quite frustrating for me but I wonder if that is because no alternative has appeared. 
$\endgroup$ $\begingroup$ @ZhenLin: Sure, calling it "intensional" makes it sound like something scary from logic. But if instead you say "this notion of $A$-valued sheaves requires extra structure on $A$, and can depend on the choice of this structure", I don't think most mathematicians will be scared at all — that's a very familiar phenomenon when generalising. $\endgroup$ – Peter LeFanu Lumsdaine Apr 6, 2021 at 2:56 This is a very complicated question. Categories of Sheaves. Let me start from something that you evidently know (given the hidden references in your question). Bourceux et al. have worked on defining sheaves of something over a site $(C,J)$. Borceux, Sheaves of algebras for a commutative theory, Ann. Soc. Sci. Bruxelles Sér. I 95 (1981), no. 1, 3–19 Borceux and Kelly, On locales of localizations, J. Pure Appl. Algebra, Volume 46, Issue 1, 1987, Pages 1-34. Borceux and Veit, On the Left Exactness of Orthogonal Reflections J. Pure Appl. Algebra, 49 (1987), pp. 33-42. Borceux, Subobject Classifier for Algebraic Structures. Subobject classifier for algebraic structures J. Algebra, 112 (1988), pp. 306-314. Veit, Sheaves, localizations, and unstable extensions: Some counterexamples. J. Pure Appl. Algebra, Volume 140, Issue 2, July 1991, Pages 370-391. Borceux and Quinteiro. A theory of enriched sheaves. Cahiers de Topologie et Géométrie Différentielle Catégoriques 37.2 (1996): 145-162. As you mention, this theory works at its best when $\mathcal{V}$ is a regular locally finitely presentable monoidal closed category (these are the working hypotheses of the last reference in the list above). This is partially satisfactory, as in these instances we recover a nice correspondence between topologies and lexreflective localizations, and thus we can maintain the intuition for the theory of sites and lex-reflectors. On the other hand, this theory does not even recover the notion of sheaves over a topos $\mathsf{Sh}((C,J), \mathcal{E})$, as many topoi are not locally finitely presentable. An idea. I never checked, but I always believed that most of these results can be recovered when $\mathcal{V}$ is a cocomplete precontinuous category (in the sense of Adamek, Rosicky and Vitale, see On Algebraically Exact Categories and Essential Localizations of Varieties) with a dense generator (and of course monoidal closed). This assumptions would recover the topos case and also the case of Grothendieck categories, where precontinuity is known as (AB5). (This framework would also meet all your desiderata). Of course this idea is not entirely satisfying for you, as it would not encompass those finitely accessible categories that are not cocomplete, but at least provides a good framework to study sheaves over a cocomplete precontinuous monoidal closed category with a dense generator (which, again, include topoi and Grothendieck categories). Grothendieck topoi. The first dishonest way to answer your question is to say that you are looking for a way to describe lex-reflective $\mathcal{V}$-categories. This is the over-formalist point of view of who thinks that lex-reflectivity of categories of sheaves is not a theorem in the theory of topoi, it is a very intrinsic characterization and should be taken as a definition. I am not sure that I support this idea. Anyway, if one wants to follow this path, there is the beautiful paper by Garner and Lack. Garner and Lack, Lex Colimits. J. Pure Appl. Algebra. Volume 216, Issue 6, June 2012, Pages 1372-1396. 
As the assumptions on $\mathcal{V}$ are very mild in this case, one cannot expect to recover a good theory of sites, and thus the very intuition of sheaf is a bit lost. Still, depending on your religious belief, this could be a starting point. Caveat 1. I think that this tentative solution unveils the first problem of your question. If you do not choose a flavour of problems that you want to solve, or attack with this notion of sheaf, it's hard to come up with a correct definition which does not rely on a specific point of view. Caveat 2. Even the formalists who are fascinated by this approach should be warned by the evidence. In the case of Grothendieck categories the correct notion of morphism is not that of left exact cocontinuous functors, as discussed in the very introduction of a paper of Ramos Gonzalez and myself. Di Liberti and Ramos Gonzalez, Exponentiable Grothendieck categories in flat algebraic Geometry. arXiv:2103.07876. So one should be very careful about getting carried away by this point of view. Elementary topoi and finite abelian groups. I was always fascinated by a very natural way to produce elementary topoi. The category of functors $\mathsf{FinSet}^C$ is an elementary topos, and when $C$ is a finite category it is even a Grothendieck topos with respect to the Grothendieck universe of finite sets. This case is very similar to that of finite abelian groups. So, are you honestly changing the notion of sheaf, or just dishonestly changing the notion of size? I think that this is a question to think about before forcing a definition that we might not need. Kan complexes. You listed Kan complexes among finitely accessible categories that are not cocomplete, but is this the correct point of view on them? Kan complexes are indeed cocomplete with respect to the relevant notion of colimit, and if you want to see this cocompleteness, you should take sheaves over simplicial sets and study model topoi in the sense of Rezk. Rezk, Toposes and homotopy toposes. All in all. I do not have a good answer to your question, but I have discussed a couple of remarks that I hope will thicken the debate. But I have a very informal question for you: what are the defining and conceptual features of the notion of sheaf that you want to model? Personally, I find your list of desiderata neither defining nor conceptual. edited Apr 6, 2021 at 9:04 Ivan Di Liberti $\begingroup$ Sheaves of Kan complexes were considered in Ken Brown's paper introducing categories of fibrant objects. He gives (what I consider to be equivalent to) the correct definition: a sheaf of Kan complexes on a topological space is a simplicial sheaf whose stalks are Kan complexes. This generalises to internal Kan complexes in a topos. $\endgroup$ $\begingroup$ Yes, but isn't it the same as the theory of model topoi in the sense of Rezk? $\endgroup$ – Ivan Di Liberti $\begingroup$ Sure, for the purposes of actually doing homotopy theory, there are better models. But it illustrates that there is (1) a good definition of sheaves of objects that is not just the naïve one and (2) one that works well for categories that are not necessarily complete or cocomplete. $\endgroup$ $\begingroup$ To answer your question... I think categories of sheaves of <whatever> should form a stack on the category of Grothendieck toposes. For structures that are axiomatised by geometric theories, we get representable stacks, and furthermore the pullback functors have good properties with regard to (co)limits. My desiderata mostly stem from this observation.
$\endgroup$ $\begingroup$ Thanks for this clarification, it's an interesting insight. I will think about it in the next days and hopefully come back to the question. $\endgroup$ The naive definition of sheaves is very well behaved if you look at functoriality in the $f_*$ direction: Of course, you are going to need to assume that $\mathcal{A}$ has all limits as the definition of $\mathcal{A}$-valued sheaves involves arbitrary limit. If you want to restrict to category that have for example finite limits, you are going to have to restrict to sites that have only have finite cover, and to geometric morphisms that satisfies some finiteness conditions. Once you assume that $\mathcal{A}$ has finite limits, it works pretty much without any problems. In fact it has little to do with Grothendieck topologies and works well for arbitrary "limit sketches". Definition: (maybe not completely standard terminology) By a "limit sketches" I mean a small category $\mathcal{C}$ together with $S$ a set of maps in the category $\widehat{C}$ of presheaves of sets on $\widehat{C}$. A site is a special case with $S$ the set of covering sieves. Given $\mathcal{E}$ a category with colimits, any functor $f:C \to \mathcal{E}$ induces an adjunction $f_! \dashv f^*$ where $f^*$ is the nerve functor $\mathcal{E} \to \widehat{C}$ and $f_!:\widehat{C} \to \mathcal{E} $ is the pointwise left kan extention of $f$. Definition: A $(C,S)$-comodel in $\mathcal{E}$ is a functor $f:C \to \mathcal{E}$ such that $f_!$ sends all maps in $S$ to isomorphism in $\mathcal{E}$, or equivalently such that for all $X \in \mathcal{E}$, $f^* E$ is orthogonal to all maps in $S$. Now, if $\mathcal{A}$ is a category with all limits, then a $(C,S)$-model is $\mathcal{A}$ is a a $(C,S)$-comodel in $\mathcal{A}^{op}$. In particular, the category of $(C,S)$-model in $Set$ is the full subcategory of presheaf on $C$ that are orthogonal to all maps in $S$. I'm denoting this category by $Set(C,S)$ Propostion: For all category $\mathcal{E}$ with all colimits, the category of $(C,S)$-comodel in $\mathcal{E}$ is equivalent to the category of left adjoint functors (equivalently, colimit preserving functors) $Set(C,S) \to \mathcal{E}$. Indeed functor $C \to \mathcal{E}$ corresponds to left adjoint functors $\widehat{C} \to \mathcal{E}$ and by definition $(C,S)$-model corresponds exactly to the ones that inverts all maps in $S$, but by classical manipulation, this is the same as left adjoints functors $Set(C,S) \to \mathcal{E}$. As $Set(C,S)$ is locally presentable, this is also the same as colimit preverving functor. You get your functoriality requirement: Corollary: The category of $(C,S)$-(co)model in a category with all (co)limit is functorial on all functor preserving all (co)limits. In particular, it is functorial on site and Geometric morphisms. Indeed, for comodel it it justs the functor represented by $Set(C,S)$ in the category of all colimits preserving functor between cocomplete category. It follows for model by duality. Simon HenrySimon Henry $\begingroup$ Since desideratum 1 is vague let me ask a concrete question. Suppose $\mathcal{A}$ is an AB3 abelian category. Does your construction yield abelian categories back? Are the functors induced by geometric morphisms exact? $\endgroup$ $\begingroup$ So, if I is filtered then $Colim : [I,Set] \to Set$ is left exact, so is a geometric morphism. I think your requirement forces this to interpret to $Colim: [I,A] \to A$. But asking for this to be left exact is already condition $(AB5)$. 
So I don't think your initial requirements are compatible with this. $\endgroup$ $\begingroup$ That is a very good point. I should have realised it myself. Functoriality + copowers already forces $\mathcal{A}$ to have filtered colimits. But it is perfectly possible to define "sheaves of finite abelian groups" for sites with enough points, so that must mean the construction doesn't preserve copowers... $\endgroup$
CommonCrawl
Cartesian tensor In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation. The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product. Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system. Cartesian basis and related terminology Vectors in three dimensions In 3D Euclidean space, $\mathbb {R} ^{3}$, the standard basis is ex, ey, ez. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal. Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed and this is much more common than a left-handed system in practice, see orientation (vector space) for details. For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors ex, ey, ez: $\mathbf {a} =a_{\text{x}}\mathbf {e} _{\text{x}}+a_{\text{y}}\mathbf {e} _{\text{y}}+a_{\text{z}}\mathbf {e} _{\text{z}}$ where the coordinates of the vector with respect to the Cartesian basis are denoted ax, ay, az. It is common and helpful to display the basis vectors as column vectors $\mathbf {e} _{\text{x}}={\begin{pmatrix}1\\0\\0\end{pmatrix}}\,,\quad \mathbf {e} _{\text{y}}={\begin{pmatrix}0\\1\\0\end{pmatrix}}\,,\quad \mathbf {e} _{\text{z}}={\begin{pmatrix}0\\0\\1\end{pmatrix}}$ when we have a coordinate vector in a column vector representation: $\mathbf {a} ={\begin{pmatrix}a_{\text{x}}\\a_{\text{y}}\\a_{\text{z}}\end{pmatrix}}$ A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons – see Einstein notation and covariance and contravariance of vectors for why. The term "component" of a vector is ambiguous: it could refer to: • a specific coordinate of the vector such as az (a scalar), and similarly for x and y, or • the coordinate scalar-multiplying the corresponding basis vector, in which case the "y-component" of a is ayey (a vector), and similarly for x and z. A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors ex ↦ e1, ey ↦ e2, ez ↦ e3 and coordinates ax ↦ a1, ay ↦ a2, az ↦ a3. In general, the notation e1, e2, e3 refers to any basis, and a1, a2, a3 refers to the corresponding coordinate system; although here they are restricted to the Cartesian system. 
Then: $\mathbf {a} =a_{1}\mathbf {e} _{1}+a_{2}\mathbf {e} _{2}+a_{3}\mathbf {e} _{3}=\sum _{i=1}^{3}a_{i}\mathbf {e} _{i}$ It is standard to use the Einstein notation—the summation sign for summation over an index that is present exactly twice within a term may be suppressed for notational conciseness: $\mathbf {a} =\sum _{i=1}^{3}a_{i}\mathbf {e} _{i}\equiv a_{i}\mathbf {e} _{i}$ An advantage of the index notation over coordinate-specific notations is the independence of the dimension of the underlying vector space, i.e. the same expression on the right hand side takes the same form in higher dimensions (see below). Previously, the Cartesian labels x, y, z were just labels and not indices. (It is informal to say "i = x, y, z"). Second-order tensors in three dimensions A dyadic tensor T is an order-2 tensor formed by the tensor product ⊗ of two Cartesian vectors a and b, written T = a ⊗ b. Analogous to vectors, it can be written as a linear combination of the tensor basis ex ⊗ ex ≡ exx, ex ⊗ ey ≡ exy, ..., ez ⊗ ez ≡ ezz (the right-hand side of each identity is only an abbreviation, nothing more): ${\begin{aligned}\mathbf {T} =\quad &\left(a_{\text{x}}\mathbf {e} _{\text{x}}+a_{\text{y}}\mathbf {e} _{\text{y}}+a_{\text{z}}\mathbf {e} _{\text{z}}\right)\otimes \left(b_{\text{x}}\mathbf {e} _{\text{x}}+b_{\text{y}}\mathbf {e} _{\text{y}}+b_{\text{z}}\mathbf {e} _{\text{z}}\right)\\[5pt]{}=\quad &a_{\text{x}}b_{\text{x}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{x}}+a_{\text{x}}b_{\text{y}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{y}}+a_{\text{x}}b_{\text{z}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{z}}\\[4pt]{}+{}&a_{\text{y}}b_{\text{x}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{x}}+a_{\text{y}}b_{\text{y}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{y}}+a_{\text{y}}b_{\text{z}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{z}}\\[4pt]{}+{}&a_{\text{z}}b_{\text{x}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{x}}+a_{\text{z}}b_{\text{y}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{y}}+a_{\text{z}}b_{\text{z}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{z}}\end{aligned}}$ Representing each basis tensor as a matrix: ${\begin{aligned}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{x}}&\equiv \mathbf {e} _{\text{xx}}={\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}}\,,&\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{y}}&\equiv \mathbf {e} _{\text{xy}}={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}}\,,&\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{z}}&\equiv \mathbf {e} _{\text{zz}}={\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\end{aligned}}$ then T can be represented more systematically as a matrix: $\mathbf {T} ={\begin{pmatrix}a_{\text{x}}b_{\text{x}}&a_{\text{x}}b_{\text{y}}&a_{\text{x}}b_{\text{z}}\\a_{\text{y}}b_{\text{x}}&a_{\text{y}}b_{\text{y}}&a_{\text{y}}b_{\text{z}}\\a_{\text{z}}b_{\text{x}}&a_{\text{z}}b_{\text{y}}&a_{\text{z}}b_{\text{z}}\end{pmatrix}}$ See matrix multiplication for the notational correspondence between matrices and the dot and tensor products. 
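As a simple numerical illustration of the dyadic construction above, suppose $\mathbf {a}$ has coordinates $(1,2,3)$ and $\mathbf {b}$ has coordinates $(0,1,-1)$ with respect to the Cartesian basis. Then $\mathbf {T} =\mathbf {a} \otimes \mathbf {b}$ has components $T_{ij}=a_{i}b_{j}$, that is: $\mathbf {T} ={\begin{pmatrix}1\cdot 0&1\cdot 1&1\cdot (-1)\\2\cdot 0&2\cdot 1&2\cdot (-1)\\3\cdot 0&3\cdot 1&3\cdot (-1)\end{pmatrix}}={\begin{pmatrix}0&1&-1\\0&2&-2\\0&3&-3\end{pmatrix}}$ Every row is proportional to the coordinates of $\mathbf {b}$ and every column is proportional to the coordinates of $\mathbf {a}$, which is characteristic of a single tensor product (a rank-one matrix); a general second-order tensor is instead a linear combination of the nine basis tensors, as described next.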
More generally, whether or not T is a tensor product of two vectors, it is always a linear combination of the basis tensors with coordinates Txx, Txy, ..., Tzz: ${\begin{aligned}\mathbf {T} =\quad &T_{\text{xx}}\mathbf {e} _{\text{xx}}+T_{\text{xy}}\mathbf {e} _{\text{xy}}+T_{\text{xz}}\mathbf {e} _{\text{xz}}\\[4pt]{}+{}&T_{\text{yx}}\mathbf {e} _{\text{yx}}+T_{\text{yy}}\mathbf {e} _{\text{yy}}+T_{\text{yz}}\mathbf {e} _{\text{yz}}\\[4pt]{}+{}&T_{\text{zx}}\mathbf {e} _{\text{zx}}+T_{\text{zy}}\mathbf {e} _{\text{zy}}+T_{\text{zz}}\mathbf {e} _{\text{zz}}\end{aligned}}$ while in terms of tensor indices: $\mathbf {T} =T_{ij}\mathbf {e} _{ij}\equiv \sum _{ij}T_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,$ and in matrix form: $\mathbf {T} ={\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}$ Second-order tensors occur naturally in physics and engineering when physical quantities have directional dependence in the system, often in a "stimulus-response" way. This can be mathematically seen through one aspect of tensors – they are multilinear functions. A second-order tensor T which takes in a vector u of some magnitude and direction will return a vector v; of a different magnitude and in a different direction to u, in general. The notation used for functions in mathematical analysis leads us to write v − T(u),[1] while the same idea can be expressed in matrix and index notations[2] (including the summation convention), respectively: ${\begin{aligned}{\begin{pmatrix}v_{\text{x}}\\v_{\text{y}}\\v_{\text{z}}\end{pmatrix}}&={\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}u_{\text{x}}\\u_{\text{y}}\\u_{\text{z}}\end{pmatrix}}\,,&v_{i}&=T_{ij}u_{j}\end{aligned}}$ By "linear", if u = ρr + σs for two scalars ρ and σ and vectors r and s, then in function and index notations: ${\begin{aligned}\mathbf {v} &=&&\mathbf {T} (\rho \mathbf {r} +\sigma \mathbf {s} )&=&&\rho \mathbf {T} (\mathbf {r} )+\sigma \mathbf {T} (\mathbf {s} )\\[1ex]v_{i}&=&&T_{ij}(\rho r_{j}+\sigma s_{j})&=&&\rho T_{ij}r_{j}+\sigma T_{ij}s_{j}\end{aligned}}$ and similarly for the matrix notation. The function, matrix, and index notations all mean the same thing. The matrix forms provide a clear display of the components, while the index form allows easier tensor-algebraic manipulation of the formulae in a compact manner. Both provide the physical interpretation of directions; vectors have one direction, while second-order tensors connect two directions together. One can associate a tensor index or coordinate label with a basis vector direction. The use of second-order tensors are the minimum to describe changes in magnitudes and directions of vectors, as the dot product of two vectors is always a scalar, while the cross product of two vectors is always a pseudovector perpendicular to the plane defined by the vectors, so these products of vectors alone cannot obtain a new vector of any magnitude in any direction. (See also below for more on the dot and cross products). The tensor product of two vectors is a second-order tensor, although this has no obvious directional interpretation by itself. The previous idea can be continued: if T takes in two vectors p and q, it will return a scalar r. 
In function notation we write r = T(p, q), while in matrix and index notations (including the summation convention) respectively: $r={\begin{pmatrix}p_{\text{x}}&p_{\text{y}}&p_{\text{z}}\end{pmatrix}}{\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}q_{\text{x}}\\q_{\text{y}}\\q_{\text{z}}\end{pmatrix}}=p_{i}T_{ij}q_{j}$ The tensor T is linear in both input vectors.

When vectors and tensors are written without reference to components, and indices are not used, sometimes a dot ⋅ is placed where summations over indices (known as tensor contractions) are taken. For the above cases:[1][2] ${\begin{aligned}\mathbf {v} &=\mathbf {T} \cdot \mathbf {u} \\r&=\mathbf {p} \cdot \mathbf {T} \cdot \mathbf {q} \end{aligned}}$ motivated by the dot product notation: $\mathbf {a} \cdot \mathbf {b} \equiv a_{i}b_{i}$

More generally, a tensor of order m which takes in n vectors (where n is between 0 and m inclusive) will return a tensor of order m − n; see Tensor § As multilinear maps for further generalizations and details. The concepts above also apply to pseudovectors in the same way as for vectors. The vectors and tensors themselves can vary throughout space, in which case we have vector fields and tensor fields, and can also depend on time.

Following are some examples, in which an applied or given quantity, acting on a material or object with the stated property, results in the stated quantity in the material or object:
• a unit vector n, with the Cauchy stress tensor σ, gives a traction force t: $\mathbf {t} ={\boldsymbol {\sigma }}\cdot \mathbf {n} $
• an angular velocity ω, with the moment of inertia I, gives an angular momentum J: $\mathbf {J} =\mathbf {I} \cdot {\boldsymbol {\omega }}$
• an angular velocity ω, with the moment of inertia I, gives a rotational kinetic energy T: $T={\tfrac {1}{2}}{\boldsymbol {\omega }}\cdot \mathbf {I} \cdot {\boldsymbol {\omega }}$
• an electric field E, with the electrical conductivity σ, gives a current density flow J: $\mathbf {J} ={\boldsymbol {\sigma }}\cdot \mathbf {E} $
• an electric field E, with the polarizability α (related to the permittivity ε and electric susceptibility χE), gives an induced polarization field P: $\mathbf {P} ={\boldsymbol {\alpha }}\cdot \mathbf {E} $
• a magnetic H field, with the magnetic permeability μ, gives a magnetic B field: $\mathbf {B} ={\boldsymbol {\mu }}\cdot \mathbf {H} $

For the electrical conduction example, the index and matrix notations would be: ${\begin{aligned}J_{i}&=\sigma _{ij}E_{j}\equiv \sum _{j}\sigma _{ij}E_{j}\\{\begin{pmatrix}J_{\text{x}}\\J_{\text{y}}\\J_{\text{z}}\end{pmatrix}}&={\begin{pmatrix}\sigma _{\text{xx}}&\sigma _{\text{xy}}&\sigma _{\text{xz}}\\\sigma _{\text{yx}}&\sigma _{\text{yy}}&\sigma _{\text{yz}}\\\sigma _{\text{zx}}&\sigma _{\text{zy}}&\sigma _{\text{zz}}\end{pmatrix}}{\begin{pmatrix}E_{\text{x}}\\E_{\text{y}}\\E_{\text{z}}\end{pmatrix}}\end{aligned}}$ while for the rotational kinetic energy T: ${\begin{aligned}T&={\frac {1}{2}}\omega _{i}I_{ij}\omega _{j}\equiv {\frac {1}{2}}\sum _{ij}\omega _{i}I_{ij}\omega _{j}\,,\\&={\frac {1}{2}}{\begin{pmatrix}\omega _{\text{x}}&\omega _{\text{y}}&\omega _{\text{z}}\end{pmatrix}}{\begin{pmatrix}I_{\text{xx}}&I_{\text{xy}}&I_{\text{xz}}\\I_{\text{yx}}&I_{\text{yy}}&I_{\text{yz}}\\I_{\text{zx}}&I_{\text{zy}}&I_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}\omega _{\text{x}}\\\omega _{\text{y}}\\\omega _{\text{z}}\end{pmatrix}}\,.\end{aligned}}$ See also constitutive equation for more specialized examples.

Vectors and tensors in n dimensions

In n-dimensional Euclidean space over the real numbers, $\mathbb {R} ^{n}$, the standard basis is denoted e1, e2, e3, ... en.
Each basis vector ei points along the positive xi axis, with the basis being orthonormal. Component j of ei is given by the Kronecker delta: $(\mathbf {e} _{i})_{j}=\delta _{ij}$ A vector in $\mathbb {R} ^{n}$ takes the form: $\mathbf {a} =a_{i}\mathbf {e} _{i}\equiv \sum _{i}a_{i}\mathbf {e} _{i}\,.$ Similarly for the order-2 tensor above, for each vector a and b in $\mathbb {R} ^{n}$: $\mathbf {T} =a_{i}b_{j}\mathbf {e} _{ij}\equiv \sum _{ij}a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,$ or more generally: $\mathbf {T} =T_{ij}\mathbf {e} _{ij}\equiv \sum _{ij}T_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,.$ Transformations of Cartesian vectors (any number of dimensions) Meaning of "invariance" under coordinate transformations The position vector x in $\mathbb {R} ^{n}$ is a simple and common example of a vector, and can be represented in any coordinate system. Consider the case of rectangular coordinate systems with orthonormal bases only. It is possible to have a coordinate system with rectangular geometry if the basis vectors are all mutually perpendicular and not normalized, in which case the basis is orthogonal but not orthonormal. However, orthonormal bases are easier to manipulate and are often used in practice. The following results are true for orthonormal bases, not orthogonal ones. In one rectangular coordinate system, x as a contravector has coordinates xi and basis vectors ei, while as a covector it has coordinates xi and basis covectors ei, and we have: ${\begin{aligned}\mathbf {x} &=x^{i}\mathbf {e} _{i}\,,&\mathbf {x} &=x_{i}\mathbf {e} ^{i}\end{aligned}}$ In another rectangular coordinate system, x as a contravector has coordinates xi and basis ei, while as a covector it has coordinates xi and basis ei, and we have: ${\begin{aligned}\mathbf {x} &={\bar {x}}^{i}{\bar {\mathbf {e} }}_{i}\,,&\mathbf {x} &={\bar {x}}_{i}{\bar {\mathbf {e} }}^{i}\end{aligned}}$ Each new coordinate is a function of all the old ones, and vice versa for the inverse function: ${\begin{aligned}{\bar {x}}{}^{i}={\bar {x}}{}^{i}\left(x^{1},x^{2},\ldots \right)\quad &\rightleftharpoons \quad x{}^{i}=x{}^{i}\left({\bar {x}}^{1},{\bar {x}}^{2},\ldots \right)\\{\bar {x}}{}_{i}={\bar {x}}{}_{i}\left(x_{1},x_{2},\ldots \right)\quad &\rightleftharpoons \quad x{}_{i}=x{}_{i}\left({\bar {x}}_{1},{\bar {x}}_{2},\ldots \right)\end{aligned}}$ and similarly each new basis vector is a function of all the old ones, and vice versa for the inverse function: ${\begin{aligned}{\bar {\mathbf {e} }}{}_{j}={\bar {\mathbf {e} }}{}_{j}\left(\mathbf {e} _{1},\mathbf {e} _{2},\ldots \right)\quad &\rightleftharpoons \quad \mathbf {e} {}_{j}=\mathbf {e} {}_{j}\left({\bar {\mathbf {e} }}_{1},{\bar {\mathbf {e} }}_{2},\ldots \right)\\{\bar {\mathbf {e} }}{}^{j}={\bar {\mathbf {e} }}{}^{j}\left(\mathbf {e} ^{1},\mathbf {e} ^{2},\ldots \right)\quad &\rightleftharpoons \quad \mathbf {e} {}^{j}=\mathbf {e} {}^{j}\left({\bar {\mathbf {e} }}^{1},{\bar {\mathbf {e} }}^{2},\ldots \right)\end{aligned}}$ for all i, j. A vector is invariant under any change of basis, so if coordinates transform according to a transformation matrix L, the bases transform according to the matrix inverse L−1, and conversely if the coordinates transform according to inverse L−1, the bases transform according to the matrix L. 
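The statement that a vector is invariant under a change of basis, with the coordinates transforming by L and the basis vectors by L⁻¹, can be checked numerically. This Python sketch uses an arbitrary rotation about the z axis as the transformation; the vector components are placeholder values.

import numpy as np

theta = 0.3                                    # arbitrary rotation angle
L = np.array([[np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])                # orthogonal change-of-basis matrix

e = np.eye(3)                                  # old basis vectors as rows
x = np.array([1.0, 2.0, 3.0])                  # old coordinates x_i (placeholders)

x_bar = x @ L                                  # coordinates transform: x_bar_j = x_i L_ij
e_bar = np.linalg.inv(L) @ e                   # bases transform with the inverse: e_bar_j = (L^-1)_jk e_k

# The vector itself, x_i e_i, is unchanged:
v_old = x @ e
v_new = x_bar @ e_bar
print(np.allclose(v_old, v_new))               # True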
The difference between each of these transformations is shown conventionally through the indices as superscripts for contravariance and subscripts for covariance, and the coordinates and bases are linearly transformed according to the following rules: Vector elements Contravariant transformation law Covariant transformation law Coordinates ${\bar {x}}^{j}=x^{i}({\boldsymbol {\mathsf {L}}})_{i}{}^{j}=x^{i}{\mathsf {L}}_{i}{}^{j}$ ${\bar {x}}_{j}=x_{k}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j}{}^{k}$ Basis ${\bar {\mathbf {e} }}_{j}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j}{}^{k}\mathbf {e} _{k}$ ${\bar {\mathbf {e} }}^{j}=({\boldsymbol {\mathsf {L}}})_{i}{}^{j}\mathbf {e} ^{i}={\mathsf {L}}_{i}{}^{j}\mathbf {e} ^{i}$ Any vector ${\bar {x}}^{j}{\bar {\mathbf {e} }}_{j}=x^{i}{\mathsf {L}}_{i}{}^{j}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j}{}^{k}\mathbf {e} _{k}=x^{i}\delta _{i}{}^{k}\mathbf {e} _{k}=x^{i}\mathbf {e} _{i}$ ${\bar {x}}_{j}{\bar {\mathbf {e} }}^{j}=x_{i}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j}{}^{i}{\mathsf {L}}_{k}{}^{j}\mathbf {e} ^{k}=x_{i}\delta ^{i}{}_{k}\mathbf {e} ^{k}=x_{i}\mathbf {e} ^{i}$ where Lij represents the entries of the transformation matrix (row number is i and column number is j) and (L−1)ik denotes the entries of the inverse matrix of the matrix Lik. If L is an orthogonal transformation (orthogonal matrix), the objects transforming by it are defined as Cartesian tensors. This geometrically has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector x is preserved (and distances are preserved). The determinant of L is det(L) = ±1, which corresponds to two types of orthogonal transformation: (+1) for rotations and (−1) for improper rotations (including reflections). There are considerable algebraic simplifications, the matrix transpose is the inverse from the definition of an orthogonal transformation: ${\boldsymbol {\mathsf {L}}}^{\textsf {T}}={\boldsymbol {\mathsf {L}}}^{-1}\Rightarrow \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{i}{}^{j}=\left({\boldsymbol {\mathsf {L}}}^{\textsf {T}}\right)_{i}{}^{j}=({\boldsymbol {\mathsf {L}}})^{j}{}_{i}={\mathsf {L}}^{j}{}_{i}$ From the previous table, orthogonal transformations of covectors and contravectors are identical. There is no need to differ between raising and lowering indices, and in this context and applications to physics and engineering the indices are usually all subscripted to remove confusion for exponents. All indices will be lowered in the remainder of this article. One can determine the actual raised and lowered indices by considering which quantities are covectors or contravectors, and the relevant transformation rules. Exactly the same transformation rules apply to any vector a, not only the position vector. If its components ai do not transform according to the rules, a is not a vector. Despite the similarity between the expressions above, for the change of coordinates such as xj = Lijxi, and the action of a tensor on a vector like bi = Tij aj, L is not a tensor, but T is. In the change of coordinates, L is a matrix, used to relate two rectangular coordinate systems with orthonormal bases together. For the tensor relating a vector to a vector, the vectors and tensors throughout the equation all belong to the same coordinate system and basis. 
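For an orthogonal L the transpose is the inverse and the determinant is ±1, so the covariant and contravariant rules coincide and the norm of x is preserved. A short NumPy check (the rotation and reflection matrices below are arbitrary examples chosen for illustration):

import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])           # proper rotation
M = np.diag([1.0, 1.0, -1.0])             # reflection in the xy plane (improper rotation)

for L in (R, M):
    print(np.allclose(L.T @ L, np.eye(3)),    # L^T = L^-1 (orthogonality)
          round(np.linalg.det(L)))            # +1 for rotations, -1 for improper rotations

x = np.array([1.0, -2.0, 0.5])                # placeholder components
x_bar = x @ R                                 # transformed coordinates: x_bar_j = x_i L_ij
print(np.isclose(np.linalg.norm(x), np.linalg.norm(x_bar)))   # True: the norm is preserved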
Derivatives and Jacobian matrix elements The entries of L are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively. Differentiating xi with respect to xk: ${\frac {\partial {\bar {x}}_{i}}{\partial x_{k}}}={\frac {\partial }{\partial x_{k}}}(x_{j}{\mathsf {L}}_{ji})={\mathsf {L}}_{ji}{\frac {\partial x_{j}}{\partial x_{k}}}=\delta _{kj}{\mathsf {L}}_{ji}={\mathsf {L}}_{ki}$ so ${{\mathsf {L}}_{i}}^{j}\equiv {\mathsf {L}}_{ij}={\frac {\partial {\bar {x}}_{j}}{\partial x_{i}}}$ is an element of the Jacobian matrix. There is a (partially mnemonical) correspondence between index positions attached to L and in the partial derivative: i at the top and j at the bottom, in each case, although for Cartesian tensors the indices can be lowered. Conversely, differentiating xj with respect to xi: ${\frac {\partial x_{j}}{\partial {\bar {x}}_{k}}}={\frac {\partial }{\partial {\bar {x}}_{k}}}\left({\bar {x}}_{i}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}\right)={\frac {\partial {\bar {x}}_{i}}{\partial {\bar {x}}_{k}}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}=\delta _{ki}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{kj}$ so $\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{i}{}^{j}\equiv \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}={\frac {\partial x_{j}}{\partial {\bar {x}}_{i}}}$ is an element of the inverse Jacobian matrix, with a similar index correspondence. Many sources state transformations in terms of the partial derivatives: ${\begin{array}{c}\displaystyle {\bar {x}}_{j}=x_{i}{\frac {\partial {\bar {x}}_{j}}{\partial x_{i}}}\\[3pt]\upharpoonleft \downharpoonright \\[3pt]\displaystyle x_{j}={\bar {x}}_{i}{\frac {\partial x_{j}}{\partial {\bar {x}}_{i}}}\end{array}}$ and the explicit matrix equations in 3d are: ${\begin{aligned}{\bar {\mathbf {x} }}&={\boldsymbol {\mathsf {L}}}\mathbf {x} \\{\begin{pmatrix}{\bar {x}}_{1}\\{\bar {x}}_{2}\\{\bar {x}}_{3}\end{pmatrix}}&={\begin{pmatrix}{\frac {\partial {\bar {x}}_{1}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{1}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{1}}{\partial x_{3}}}\\{\frac {\partial {\bar {x}}_{2}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{2}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{2}}{\partial x_{3}}}\\{\frac {\partial {\bar {x}}_{3}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{3}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{3}}{\partial x_{3}}}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\end{aligned}}$ similarly for $\mathbf {x} ={\boldsymbol {\mathsf {L}}}^{-1}{\bar {\mathbf {x} }}={\boldsymbol {\mathsf {L}}}^{\textsf {T}}{\bar {\mathbf {x} }}$ Projections along coordinate axes As with all linear transformations, L depends on the basis chosen. 
For two orthonormal bases ${\begin{aligned}{\bar {\mathbf {e} }}_{i}\cdot {\bar {\mathbf {e} }}_{j}&=\mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}\,,&\left|\mathbf {e} _{i}\right|&=\left|{\bar {\mathbf {e} }}_{i}\right|=1\,,\end{aligned}}$ • projecting x to the x axes: ${\bar {x}}_{i}={\bar {\mathbf {e} }}_{i}\cdot \mathbf {x} ={\bar {\mathbf {e} }}_{i}\cdot x_{j}\mathbf {e} _{j}=x_{i}{\mathsf {L}}_{ij}\,,$ • projecting x to the x axes: $x_{i}=\mathbf {e} _{i}\cdot \mathbf {x} =\mathbf {e} _{i}\cdot {\bar {x}}_{j}{\bar {\mathbf {e} }}_{j}={\bar {x}}_{j}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ji}\,.$ Hence the components reduce to direction cosines between the xi and xj axes: ${\begin{aligned}{\mathsf {L}}_{ij}&={\bar {\mathbf {e} }}_{i}\cdot \mathbf {e} _{j}=\cos \theta _{ij}\\\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}&=\mathbf {e} _{i}\cdot {\bar {\mathbf {e} }}_{j}=\cos \theta _{ji}\end{aligned}}$ where θij and θji are the angles between the xi and xj axes. In general, θij is not equal to θji, because for example θ12 and θ21 are two different angles. The transformation of coordinates can be written: ${\begin{array}{c}{\bar {x}}_{j}=x_{i}\left({\bar {\mathbf {e} }}_{i}\cdot \mathbf {e} _{j}\right)=x_{i}\cos \theta _{ij}\\[3pt]\upharpoonleft \downharpoonright \\[3pt]x_{j}={\bar {x}}_{i}\left(\mathbf {e} _{i}\cdot {\bar {\mathbf {e} }}_{j}\right)={\bar {x}}_{i}\cos \theta _{ji}\end{array}}$ and the explicit matrix equations in 3d are: ${\begin{aligned}{\bar {\mathbf {x} }}&={\boldsymbol {\mathsf {L}}}\mathbf {x} \\{\begin{pmatrix}{\bar {x}}_{1}\\{\bar {x}}_{2}\\{\bar {x}}_{3}\end{pmatrix}}&={\begin{pmatrix}{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{3}\\{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{3}\\{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{3}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}={\begin{pmatrix}\cos \theta _{11}&\cos \theta _{12}&\cos \theta _{13}\\\cos \theta _{21}&\cos \theta _{22}&\cos \theta _{23}\\\cos \theta _{31}&\cos \theta _{32}&\cos \theta _{33}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\end{aligned}}$ similarly for $\mathbf {x} ={\boldsymbol {\mathsf {L}}}^{-1}{\bar {\mathbf {x} }}={\boldsymbol {\mathsf {L}}}^{\textsf {T}}{\bar {\mathbf {x} }}$ The geometric interpretation is the xi components equal to the sum of projecting the xj components onto the xj axes. The numbers ei⋅ej arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products, in fact it is the metric tensor g. By contrast ei⋅ej or ei⋅ej do not form symmetric matrices in general, as displayed above. Therefore, while the L matrices are still orthogonal, they are not symmetric. Apart from a rotation about any one axis, in which the xi and xi for some i coincide, the angles are not the same as Euler angles, and so the L matrices are not the same as the rotation matrices. 
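The entries L_ij = ē_i ⋅ e_j are direction cosines, and assembling them this way gives an orthogonal but generally non-symmetric matrix, as noted above. A NumPy sketch with an arbitrarily chosen rotated basis (the rotation angles are placeholders):

import numpy as np

def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(t):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(t), -np.sin(t)],
                     [0.0, np.sin(t),  np.cos(t)]])

e = np.eye(3)                                   # old orthonormal basis e_i (rows)
e_bar = (rot_x(0.4) @ rot_z(0.9) @ e.T).T       # new orthonormal basis, arbitrary orientation (rows)

# Direction-cosine matrix: L_ij = e_bar_i . e_j
L = np.array([[e_bar[i] @ e[j] for j in range(3)] for i in range(3)])

print(np.allclose(L @ L.T, np.eye(3)))   # True: L is orthogonal
print(np.allclose(L, L.T))               # False: L is generally not symmetric
print(np.allclose(e_bar, L @ e))         # True: each e_bar_i = L_ij e_j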
Transformation of the dot and cross products (three dimensions only) The dot product and cross product occur very frequently, in applications of vector analysis to physics and engineering, examples include: • power transferred P by an object exerting a force F with velocity v along a straight-line path: $P=\mathbf {v} \cdot \mathbf {F} $ • tangential velocity v at a point x of a rotating rigid body with angular velocity ω: $\mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {x} $ • potential energy U of a magnetic dipole of magnetic moment m in a uniform external magnetic field B: $U=-\mathbf {m} \cdot \mathbf {B} $ • angular momentum J for a particle with position vector r and momentum p: $\mathbf {J} =\mathbf {r} \times \mathbf {p} $ • torque τ acting on an electric dipole of electric dipole moment p in a uniform external electric field E: ${\boldsymbol {\tau }}=\mathbf {p} \times \mathbf {E} $ • induced surface current density jS in a magnetic material of magnetization M on a surface with unit normal n: $\mathbf {j} _{\mathrm {S} }=\mathbf {M} \times \mathbf {n} $ How these products transform under orthogonal transformations is illustrated below. Dot product, Kronecker delta, and metric tensor The dot product ⋅ of each possible pairing of the basis vectors follows from the basis being orthonormal. For perpendicular pairs we have ${\begin{array}{llll}\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{z}}&=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{x}}&=\\\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{x}}&=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{z}}&=0\end{array}}$ while for parallel pairs we have $\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{x}}=\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{y}}=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{z}}=1.$ Replacing Cartesian labels by index notation as shown above, these results can be summarized by $\mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}$ where δij are the components of the Kronecker delta. The Cartesian basis can be used to represent δ in this way. In addition, each metric tensor component gij with respect to any basis is the dot product of a pairing of basis vectors: $g_{ij}=\mathbf {e} _{i}\cdot \mathbf {e} _{j}.$ For the Cartesian basis the components arranged into a matrix are: $\mathbf {g} ={\begin{pmatrix}g_{\text{xx}}&g_{\text{xy}}&g_{\text{xz}}\\g_{\text{yx}}&g_{\text{yy}}&g_{\text{yz}}\\g_{\text{zx}}&g_{\text{zy}}&g_{\text{zz}}\\\end{pmatrix}}={\begin{pmatrix}\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{z}}\\\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{z}}\\\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{z}}\\\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\\end{pmatrix}}$ so are the simplest possible for the metric tensor, namely the δ: $g_{ij}=\delta _{ij}$ This is not true for general bases: orthogonal coordinates have diagonal metrics containing various scale factors (i.e. not necessarily 1), while general curvilinear coordinates could also have nonzero entries for off-diagonal components. 
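A quick numerical illustration of the last point: the Gram matrix g_ij = e_i ⋅ e_j is the identity for an orthonormal Cartesian basis, but only diagonal (with scale factors) for an orthogonal, non-normalized basis. The particular scale factors below are arbitrary.

import numpy as np

e = np.eye(3)                          # orthonormal Cartesian basis (rows)
g = e @ e.T                            # g_ij = e_i . e_j
print(g)                               # identity: g_ij = delta_ij

f = np.diag([2.0, 0.5, 3.0]) @ e       # orthogonal but not normalized basis vectors
g_f = f @ f.T
print(g_f)                             # diagonal metric with squared scale factors, not the identity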
The dot product of two vectors a and b transforms according to $\mathbf {a} \cdot \mathbf {b} ={\bar {a}}_{j}{\bar {b}}_{j}=a_{i}{\mathsf {L}}_{ij}b_{k}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jk}=a_{i}\delta _{i}{}_{k}b_{k}=a_{i}b_{i}$ which is intuitive, since the dot product of two vectors is a single scalar independent of any coordinates. This also applies more generally to any coordinate systems, not just rectangular ones; the dot product in one coordinate system is the same in any other. Cross product, Levi-Civita symbol, and pseudovectors Cyclic permutations of index values and positively oriented cubic volume. Anticyclic permutations of index values and negatively oriented cubic volume. Non-zero values of the Levi-Civita symbol εijk as the volume ei ⋅ ej × ek of a cube spanned by the 3d orthonormal basis. For the cross product (×) of two vectors, the results are (almost) the other way round. Again, assuming a right-handed 3d Cartesian coordinate system, cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors: ${\begin{aligned}\mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{z}}&\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{z}}&=\mathbf {e} _{\text{x}}&\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{x}}&=\mathbf {e} _{\text{y}}\\[1ex]\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{x}}&=-\mathbf {e} _{\text{z}}&\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{y}}&=-\mathbf {e} _{\text{x}}&\mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{z}}&=-\mathbf {e} _{\text{y}}\end{aligned}}$ while parallel vectors clearly vanish: $\mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{x}}=\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{y}}=\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{z}}={\boldsymbol {0}}$ and replacing Cartesian labels by index notation as above, these can be summarized by: $\mathbf {e} _{i}\times \mathbf {e} _{j}={\begin{cases}+\mathbf {e} _{k}&{\text{cyclic permutations: }}(i,j,k)=(1,2,3),(2,3,1),(3,1,2)\\[2pt]-\mathbf {e} _{k}&{\text{anticyclic permutations: }}(i,j,k)=(2,1,3),(3,2,1),(1,3,2)\\[2pt]{\boldsymbol {0}}&i=j\end{cases}}$ where i, j, k are indices which take values 1, 2, 3. It follows that: ${\mathbf {e} _{k}\cdot \mathbf {e} _{i}\times \mathbf {e} _{j}}={\begin{cases}+1&{\text{cyclic permutations: }}(i,j,k)=(1,2,3),(2,3,1),(3,1,2)\\[2pt]-1&{\text{anticyclic permutations: }}(i,j,k)=(2,1,3),(3,2,1),(1,3,2)\\[2pt]0&i=j{\text{ or }}j=k{\text{ or }}k=i\end{cases}}$ These permutation relations and their corresponding values are important, and there is an object coinciding with this property: the Levi-Civita symbol, denoted by ε. The Levi-Civita symbol entries can be represented by the Cartesian basis: $\varepsilon _{ijk}=\mathbf {e} _{i}\cdot \mathbf {e} _{j}\times \mathbf {e} _{k}$ which geometrically corresponds to the volume of a cube spanned by the orthonormal basis vectors, with sign indicating orientation (and not a "positive or negative volume"). Here, the orientation is fixed by ε123 = +1, for a right-handed system. A left-handed system would fix ε123 = −1 or equivalently ε321 = +1. 
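The Levi-Civita symbol and its relation to the basis vectors can be tabulated directly. In this Python sketch the ε array is built from the permutation rules quoted above and then compared with the triple products e_i ⋅ (e_j × e_k); the vectors a and b at the end are placeholders.

import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0          # cyclic permutations
for i, j, k in [(1, 0, 2), (2, 1, 0), (0, 2, 1)]:
    eps[i, j, k] = -1.0         # anticyclic permutations

e = np.eye(3)                    # right-handed orthonormal basis
triple = np.array([[[e[i] @ np.cross(e[j], e[k]) for k in range(3)]
                    for j in range(3)] for i in range(3)])
print(np.allclose(eps, triple))  # True: eps_ijk = e_i . (e_j x e_k)

# Cross product via the epsilon contraction (a x b)_i = eps_ijk a_j b_k:
a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
print(np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b)))   # True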
The scalar triple product can now be written: $\mathbf {c} \cdot \mathbf {a} \times \mathbf {b} =c_{i}\mathbf {e} _{i}\cdot a_{j}\mathbf {e} _{j}\times b_{k}\mathbf {e} _{k}=\varepsilon _{ijk}c_{i}a_{j}b_{k}$ with the geometric interpretation of volume (of the parallelepiped spanned by a, b, c) and algebraically is a determinant:[3]: 23  $\mathbf {c} \cdot \mathbf {a} \times \mathbf {b} ={\begin{vmatrix}c_{\text{x}}&a_{\text{x}}&b_{\text{x}}\\c_{\text{y}}&a_{\text{y}}&b_{\text{y}}\\c_{\text{z}}&a_{\text{z}}&b_{\text{z}}\end{vmatrix}}$ This in turn can be used to rewrite the cross product of two vectors as follows: ${\begin{aligned}(\mathbf {a} \times \mathbf {b} )_{i}={\mathbf {e} _{i}\cdot \mathbf {a} \times \mathbf {b} }&=\varepsilon _{\ell jk}{(\mathbf {e} _{i})}_{\ell }a_{j}b_{k}=\varepsilon _{\ell jk}\delta _{i\ell }a_{j}b_{k}=\varepsilon _{ijk}a_{j}b_{k}\\\Rightarrow \quad {\mathbf {a} \times \mathbf {b} }=(\mathbf {a} \times \mathbf {b} )_{i}\mathbf {e} _{i}&=\varepsilon _{ijk}a_{j}b_{k}\mathbf {e} _{i}\end{aligned}}$ Contrary to its appearance, the Levi-Civita symbol is not a tensor, but a pseudotensor, the components transform according to: ${\bar {\varepsilon }}_{pqr}=\det({\boldsymbol {\mathsf {L}}})\varepsilon _{ijk}{\mathsf {L}}_{ip}{\mathsf {L}}_{jq}{\mathsf {L}}_{kr}\,.$ Therefore, the transformation of the cross product of a and b is: ${\begin{aligned}&\left({\bar {\mathbf {a} }}\times {\bar {\mathbf {b} }}\right)_{i}\\[1ex]{}={}&{\bar {\varepsilon }}_{ijk}{\bar {a}}_{j}{\bar {b}}_{k}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}{\mathsf {L}}_{qj}{\mathsf {L}}_{rk}\;\;a_{m}{\mathsf {L}}_{mj}\;\;b_{n}{\mathsf {L}}_{nk}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}\;\;{\mathsf {L}}_{qj}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jm}\;\;{\mathsf {L}}_{rk}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{kn}\;\;a_{m}\;\;b_{n}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}\;\;\delta _{qm}\;\;\delta _{rn}\;\;a_{m}\;\;b_{n}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;{\mathsf {L}}_{pi}\;\;\varepsilon _{pqr}a_{q}b_{r}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;(\mathbf {a} \times \mathbf {b} )_{p}{\mathsf {L}}_{pi}\end{aligned}}$ and so a × b transforms as a pseudovector, because of the determinant factor. The tensor index notation applies to any object which has entities that form multidimensional arrays – not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another. Note the cross product of two vectors is a pseudovector, while the cross product of a pseudovector with a vector is another vector. Applications of the δ tensor and ε pseudotensor Other identities can be formed from the δ tensor and ε pseudotensor, a notable and very useful identity is one that converts two Levi-Civita symbols adjacently contracted over two indices into an antisymmetrized combination of Kronecker deltas: $\varepsilon _{ijk}\varepsilon _{pqk}=\delta _{ip}\delta _{jq}-\delta _{iq}\delta _{jp}$ The index forms of the dot and cross products, together with this identity, greatly facilitate the manipulation and derivation of other identities in vector calculus and algebra, which in turn are used extensively in physics and engineering. 
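Both the pseudotensor behaviour of the cross product and the ε–δ contraction identity quoted above can be verified numerically. This sketch rebuilds the eps array from the previous snippet; the reflection matrix and the vectors a and b are arbitrary choices made for illustration.

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[1, 0, 2] = eps[2, 1, 0] = eps[0, 2, 1] = -1.0
delta = np.eye(3)

# eps_ijk eps_pqk = delta_ip delta_jq - delta_iq delta_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
print(np.allclose(lhs, rhs))        # True

# Under an improper orthogonal transformation the cross product picks up det(L) = -1:
L = np.diag([1.0, 1.0, -1.0])       # reflection
a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
lhs = np.cross(a @ L, b @ L)                     # cross product of the transformed components
rhs = np.linalg.det(L) * (np.cross(a, b) @ L)    # det(L) times the transformed cross product
print(np.allclose(lhs, rhs))        # True: a x b transforms as a pseudovector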
For instance, it is clear the dot and cross products are distributive over vector addition: ${\begin{aligned}\mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )&=a_{i}(b_{i}+c_{i})=a_{i}b_{i}+a_{i}c_{i}=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} \\[1ex]\mathbf {a} \times (\mathbf {b} +\mathbf {c} )&=\mathbf {e} _{i}\varepsilon _{ijk}a_{j}(b_{k}+c_{k})=\mathbf {e} _{i}\varepsilon _{ijk}a_{j}b_{k}+\mathbf {e} _{i}\varepsilon _{ijk}a_{j}c_{k}=\mathbf {a} \times \mathbf {b} +\mathbf {a} \times \mathbf {c} \end{aligned}}$ without resort to any geometric constructions – the derivation in each case is a quick line of algebra. Although the procedure is less obvious, the vector triple product can also be derived. Rewriting in index notation: $\left[\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )\right]_{i}=\varepsilon _{ijk}a_{j}(\varepsilon _{k\ell m}b_{\ell }c_{m})=(\varepsilon _{ijk}\varepsilon _{k\ell m})a_{j}b_{\ell }c_{m}$ and because cyclic permutations of indices in the ε symbol does not change its value, cyclically permuting indices in εkℓm to obtain εℓmk allows us to use the above δ-ε identity to convert the ε symbols into δ tensors: ${\begin{aligned}\left[\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )\right]_{i}{}={}&\left(\delta _{i\ell }\delta _{jm}-\delta _{im}\delta _{j\ell }\right)a_{j}b_{\ell }c_{m}\\{}={}&\delta _{i\ell }\delta _{jm}a_{j}b_{\ell }c_{m}-\delta _{im}\delta _{j\ell }a_{j}b_{\ell }c_{m}\\{}={}&a_{j}b_{i}c_{j}-a_{j}b_{j}c_{i}\\{}={}&\left[(\mathbf {a} \cdot \mathbf {c} )\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\mathbf {c} \right]_{i}\end{aligned}}$ thusly: $\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=(\mathbf {a} \cdot \mathbf {c} )\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\mathbf {c} $ Note this is antisymmetric in b and c, as expected from the left hand side. Similarly, via index notation or even just cyclically relabelling a, b, and c in the previous result and taking the negative: $(\mathbf {a} \times \mathbf {b} )\times \mathbf {c} =(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} $ and the difference in results show that the cross product is not associative. More complex identities, like quadruple products; $(\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} ),\quad (\mathbf {a} \times \mathbf {b} )\times (\mathbf {c} \times \mathbf {d} ),\quad \ldots $ and so on, can be derived in a similar manner. Transformations of Cartesian tensors (any number of dimensions) Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates. Second order Let a = aiei and b = biei be two vectors, so that they transform according to aj = aiLij, bj = biLij. 
Taking the tensor product gives: $\mathbf {a} \otimes \mathbf {b} =a_{i}\mathbf {e} _{i}\otimes b_{j}\mathbf {e} _{j}=a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}$ then applying the transformation to the components ${\bar {a}}_{p}{\bar {b}}_{q}=a_{i}{\mathsf {L}}_{i}{}_{p}b_{j}{\mathsf {L}}_{j}{}_{q}={\mathsf {L}}_{i}{}_{p}{\mathsf {L}}_{j}{}_{q}a_{i}b_{j}$ and to the bases ${\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\mathbf {e} _{i}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{j}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{i}\otimes \mathbf {e} _{j}={\mathsf {L}}_{ip}{\mathsf {L}}_{jq}\mathbf {e} _{i}\otimes \mathbf {e} _{j}$ gives the transformation law of an order-2 tensor. The tensor a⊗b is invariant under this transformation: ${\begin{aligned}{\bar {a}}_{p}{\bar {b}}_{q}{\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}{}={}&{\mathsf {L}}_{kp}{\mathsf {L}}_{\ell q}a_{k}b_{\ell }\,\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&{\mathsf {L}}_{kp}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}{\mathsf {L}}_{\ell q}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\,a_{k}b_{\ell }\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&\delta _{k}{}_{i}\delta _{\ell j}\,a_{k}b_{\ell }\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\end{aligned}}$ More generally, for any order-2 tensor $\mathbf {R} =R_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,$ the components transform according to; ${\bar {R}}_{pq}={\mathsf {L}}_{i}{}_{p}{\mathsf {L}}_{j}{}_{q}R_{ij},$ and the basis transforms by: ${\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ip}\mathbf {e} _{i}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jq}\mathbf {e} _{j}$ If R does not transform according to this rule – whatever quantity R may be – it is not an order-2 tensor. 
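The order-2 transformation law can be exercised directly with einsum. In this Python sketch the transformation is an arbitrary rotation and R is an arbitrary 3×3 array of components; the check confirms that the component rule above is the same as the matrix congruence LᵀRL, and that transforming back with L⁻¹ recovers the original components.

import numpy as np

theta = 0.5
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])                  # orthogonal transformation

R = np.arange(9.0).reshape(3, 3)                 # arbitrary order-2 tensor components R_ij

# Component transformation law: R_bar_pq = L_ip L_jq R_ij
R_bar = np.einsum('ip,jq,ij->pq', L, L, R)
print(np.allclose(R_bar, L.T @ R @ L))           # True

# Transforming back with L^-1 (= L^T for orthogonal L) recovers the original components:
R_back = np.einsum('ip,jq,ij->pq', L.T, L.T, R_bar)
print(np.allclose(R_back, R))                    # True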
Any order More generally, for any order p tensor $\mathbf {T} =T_{j_{1}j_{2}\cdots j_{p}}\mathbf {e} _{j_{1}}\otimes \mathbf {e} _{j_{2}}\otimes \cdots \mathbf {e} _{j_{p}}$ the components transform according to; ${\bar {T}}_{j_{1}j_{2}\cdots j_{p}}={\mathsf {L}}_{i_{1}j_{1}}{\mathsf {L}}_{i_{2}j_{2}}\cdots {\mathsf {L}}_{i_{p}j_{p}}T_{i_{1}i_{2}\cdots i_{p}}$ and the basis transforms by: ${\bar {\mathbf {e} }}_{j_{1}}\otimes {\bar {\mathbf {e} }}_{j_{2}}\cdots \otimes {\bar {\mathbf {e} }}_{j_{p}}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{1}i_{1}}\mathbf {e} _{i_{1}}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{2}i_{2}}\mathbf {e} _{i_{2}}\cdots \otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{p}i_{p}}\mathbf {e} _{i_{p}}$ For a pseudotensor S of order p, the components transform according to; ${\bar {S}}_{j_{1}j_{2}\cdots j_{p}}=\det({\boldsymbol {\mathsf {L}}}){\mathsf {L}}_{i_{1}j_{1}}{\mathsf {L}}_{i_{2}j_{2}}\cdots {\mathsf {L}}_{i_{p}j_{p}}S_{i_{1}i_{2}\cdots i_{p}}\,.$ Pseudovectors as antisymmetric second order tensors The antisymmetric nature of the cross product can be recast into a tensorial form as follows.[2] Let c be a vector, a be a pseudovector, b be another vector, and T be a second order tensor such that: $\mathbf {c} =\mathbf {a} \times \mathbf {b} =\mathbf {T} \cdot \mathbf {b} $ As the cross product is linear in a and b, the components of T can be found by inspection, and they are: $\mathbf {T} ={\begin{pmatrix}0&-a_{\text{z}}&a_{\text{y}}\\a_{\text{z}}&0&-a_{\text{x}}\\-a_{\text{y}}&a_{\text{x}}&0\\\end{pmatrix}}$ so the pseudovector a can be written as an antisymmetric tensor. This transforms as a tensor, not a pseudotensor. For the mechanical example above for the tangential velocity of a rigid body, given by v = ω × x, this can be rewritten as v = Ω ⋅ x where Ω is the tensor corresponding to the pseudovector ω: ${\boldsymbol {\Omega }}={\begin{pmatrix}0&-\omega _{\text{z}}&\omega _{\text{y}}\\\omega _{\text{z}}&0&-\omega _{\text{x}}\\-\omega _{\text{y}}&\omega _{\text{x}}&0\\\end{pmatrix}}$ For an example in electromagnetism, while the electric field E is a vector field, the magnetic field B is a pseudovector field. These fields are defined from the Lorentz force for a particle of electric charge q traveling at velocity v: $\mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )=q(\mathbf {E} -\mathbf {B} \times \mathbf {v} )$ and considering the second term containing the cross product of a pseudovector B and velocity vector v, it can be written in matrix form, with F, E, and v as column vectors and B as an antisymmetric matrix: ${\begin{pmatrix}F_{\text{x}}\\F_{\text{y}}\\F_{\text{z}}\\\end{pmatrix}}=q{\begin{pmatrix}E_{\text{x}}\\E_{\text{y}}\\E_{\text{z}}\\\end{pmatrix}}-q{\begin{pmatrix}0&-B_{\text{z}}&B_{\text{y}}\\B_{\text{z}}&0&-B_{\text{x}}\\-B_{\text{y}}&B_{\text{x}}&0\\\end{pmatrix}}{\begin{pmatrix}v_{\text{x}}\\v_{\text{y}}\\v_{\text{z}}\\\end{pmatrix}}$ If a pseudovector is explicitly given by a cross product of two vectors (as opposed to entering the cross product with another vector), then such pseudovectors can also be written as antisymmetric tensors of second order, with each entry a component of the cross product. 
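The correspondence between a pseudovector ω and the antisymmetric tensor Ω can be written as a small helper function. This is only a sketch; the particular ω and x values are placeholders.

import numpy as np

def skew(w):
    # Antisymmetric matrix Omega such that Omega @ x == np.cross(w, x)
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

omega = np.array([0.1, -0.4, 2.0])      # angular velocity pseudovector (placeholder values)
x = np.array([1.0, 2.0, 3.0])           # position vector (placeholder values)

Omega = skew(omega)
print(np.allclose(Omega @ x, np.cross(omega, x)))    # True: v = omega x x = Omega . x
print(np.allclose(Omega, -Omega.T))                  # True: Omega is antisymmetric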
The angular momentum of a classical pointlike particle orbiting about an axis, defined by J = x × p, is another example of a pseudovector, with corresponding antisymmetric tensor: $\mathbf {J} ={\begin{pmatrix}0&-J_{\text{z}}&J_{\text{y}}\\J_{\text{z}}&0&-J_{\text{x}}\\-J_{\text{y}}&J_{\text{x}}&0\\\end{pmatrix}}={\begin{pmatrix}0&-(xp_{\text{y}}-yp_{\text{x}})&(zp_{\text{x}}-xp_{\text{z}})\\(xp_{\text{y}}-yp_{\text{x}})&0&-(yp_{\text{z}}-zp_{\text{y}})\\-(zp_{\text{x}}-xp_{\text{z}})&(yp_{\text{z}}-zp_{\text{y}})&0\\\end{pmatrix}}$ Although Cartesian tensors do not occur in the theory of relativity; the tensor form of orbital angular momentum J enters the spacelike part of the relativistic angular momentum tensor, and the above tensor form of the magnetic field B enters the spacelike part of the electromagnetic tensor. Vector and tensor calculus The following formulae are only so simple in Cartesian coordinates – in general curvilinear coordinates there are factors of the metric and its determinant – see tensors in curvilinear coordinates for more general analysis. Vector calculus Following are the differential operators of vector calculus. Throughout, let Φ(r, t) be a scalar field, and ${\begin{aligned}\mathbf {A} (\mathbf {r} ,t)&=A_{\text{x}}(\mathbf {r} ,t)\mathbf {e} _{\text{x}}+A_{\text{y}}(\mathbf {r} ,t)\mathbf {e} _{\text{y}}+A_{\text{z}}(\mathbf {r} ,t)\mathbf {e} _{\text{z}}\\[1ex]\mathbf {B} (\mathbf {r} ,t)&=B_{\text{x}}(\mathbf {r} ,t)\mathbf {e} _{\text{x}}+B_{\text{y}}(\mathbf {r} ,t)\mathbf {e} _{\text{y}}+B_{\text{z}}(\mathbf {r} ,t)\mathbf {e} _{\text{z}}\end{aligned}}$ be vector fields, in which all scalar and vector fields are functions of the position vector r and time t. The gradient operator in Cartesian coordinates is given by: $\nabla =\mathbf {e} _{\text{x}}{\frac {\partial }{\partial x}}+\mathbf {e} _{\text{y}}{\frac {\partial }{\partial y}}+\mathbf {e} _{\text{z}}{\frac {\partial }{\partial z}}$ and in index notation, this is usually abbreviated in various ways: $\nabla _{i}\equiv \partial _{i}\equiv {\frac {\partial }{\partial x_{i}}}$ This operator acts on a scalar field Φ to obtain the vector field directed in the maximum rate of increase of Φ: $\left(\nabla \Phi \right)_{i}=\nabla _{i}\Phi $ The index notation for the dot and cross products carries over to the differential operators of vector calculus.[3]: 197  The directional derivative of a scalar field Φ is the rate of change of Φ along some direction vector a (not necessarily a unit vector), formed out of the components of a and the gradient: $\mathbf {a} \cdot (\nabla \Phi )=a_{j}(\nabla \Phi )_{j}$ The divergence of a vector field A is: $\nabla \cdot \mathbf {A} =\nabla _{i}A_{i}$ Note the interchange of the components of the gradient and vector field yields a different differential operator $\mathbf {A} \cdot \nabla =A_{i}\nabla _{i}$ which could act on scalar or vector fields. In fact, if A is replaced by the velocity field u(r, t) of a fluid, this is a term in the material derivative (with many other names) of continuum mechanics, with another term being the partial time derivative: ${\frac {D}{Dt}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla $ which usually acts on the velocity field leading to the non-linearity in the Navier-Stokes equations. 
As for the curl of a vector field A, this can be defined as a pseudovector field by means of the ε symbol: $\left(\nabla \times \mathbf {A} \right)_{i}=\varepsilon _{ijk}\nabla _{j}A_{k}$ which is only valid in three dimensions, or an antisymmetric tensor field of second order via antisymmetrization of indices, indicated by delimiting the antisymmetrized indices by square brackets (see Ricci calculus): $\left(\nabla \times \mathbf {A} \right)_{ij}=\nabla _{i}A_{j}-\nabla _{j}A_{i}=2\nabla _{[i}A_{j]}$ which is valid in any number of dimensions. In each case, the order of the gradient and vector field components should not be interchanged as this would result in a different differential operator: $\varepsilon _{ijk}A_{j}\nabla _{k}=A_{i}\nabla _{j}-A_{j}\nabla _{i}=2A_{[i}\nabla _{j]}$ which could act on scalar or vector fields. Finally, the Laplacian operator is defined in two ways, the divergence of the gradient of a scalar field Φ: $\nabla \cdot (\nabla \Phi )=\nabla _{i}(\nabla _{i}\Phi )$ or the square of the gradient operator, which acts on a scalar field Φ or a vector field A: ${\begin{aligned}(\nabla \cdot \nabla )\Phi &=(\nabla _{i}\nabla _{i})\Phi \\(\nabla \cdot \nabla )\mathbf {A} &=(\nabla _{i}\nabla _{i})\mathbf {A} \end{aligned}}$ In physics and engineering, the gradient, divergence, curl, and Laplacian operator arise inevitably in fluid mechanics, Newtonian gravitation, electromagnetism, heat conduction, and even quantum mechanics. Vector calculus identities can be derived in a similar way to those of vector dot and cross products and combinations. For example, in three dimensions, the curl of a cross product of two vector fields A and B: ${\begin{aligned}&\left[\nabla \times (\mathbf {A} \times \mathbf {B} )\right]_{i}\\{}={}&\varepsilon _{ijk}\nabla _{j}(\varepsilon _{k\ell m}A_{\ell }B_{m})\\{}={}&(\varepsilon _{ijk}\varepsilon _{\ell mk})\nabla _{j}(A_{\ell }B_{m})\\{}={}&(\delta _{i\ell }\delta _{jm}-\delta _{im}\delta _{j\ell })(B_{m}\nabla _{j}A_{\ell }+A_{\ell }\nabla _{j}B_{m})\\{}={}&(B_{j}\nabla _{j}A_{i}+A_{i}\nabla _{j}B_{j})-(B_{i}\nabla _{j}A_{j}+A_{j}\nabla _{j}B_{i})\\{}={}&(B_{j}\nabla _{j})A_{i}+A_{i}(\nabla _{j}B_{j})-B_{i}(\nabla _{j}A_{j})-(A_{j}\nabla _{j})B_{i}\\{}={}&\left[(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} (\nabla \cdot \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )-(\mathbf {A} \cdot \nabla )\mathbf {B} \right]_{i}\\\end{aligned}}$ where the product rule was used, and throughout the differential operator was not interchanged with A or B. Thus: $\nabla \times (\mathbf {A} \times \mathbf {B} )=(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} (\nabla \cdot \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )-(\mathbf {A} \cdot \nabla )\mathbf {B} $ Tensor calculus One can continue the operations on tensors of higher order. Let T = T(r, t) denote a second order tensor field, again dependent on the position vector r and time t. For instance, the gradient of a vector field in two equivalent notations ("dyadic" and "tensor", respectively) is: $(\nabla \mathbf {A} )_{ij}\equiv (\nabla \otimes \mathbf {A} )_{ij}=\nabla _{i}A_{j}$ which is a tensor field of second order. The divergence of a tensor is: $(\nabla \cdot \mathbf {T} )_{j}=\nabla _{i}T_{ij}$ which is a vector field. This arises in continuum mechanics in Cauchy's laws of motion – the divergence of the Cauchy stress tensor σ is a vector field, related to body forces acting on the fluid. 
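As a concrete check of the index form of the curl given above, consider the vector field A = (−y, x, 0), whose curl is (0, 0, 2) everywhere. Its matrix of partial derivatives ∇_j A_k is constant, so the contraction (∇×A)_i = ε_ijk ∇_j A_k can be evaluated exactly; the field is chosen purely for illustration.

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[1, 0, 2] = eps[2, 1, 0] = eps[0, 2, 1] = -1.0

# For A(x, y, z) = (-y, x, 0) the Jacobian grad[j, k] = dA_k/dx_j is constant:
grad = np.array([[0.0, 1.0, 0.0],      # d/dx of (A_x, A_y, A_z)
                 [-1.0, 0.0, 0.0],     # d/dy
                 [0.0, 0.0, 0.0]])     # d/dz

curl = np.einsum('ijk,jk->i', eps, grad)   # (curl A)_i = eps_ijk nabla_j A_k
print(curl)                                # [0. 0. 2.]

# The divergence nabla_i A_i is the trace of the same Jacobian:
print(np.trace(grad))                      # 0.0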
Difference from the standard tensor calculus Cartesian tensors are as in tensor algebra, but Euclidean structure of and restriction of the basis brings some simplifications compared to the general theory. The general tensor algebra consists of general mixed tensors of type (p, q): $\mathbf {T} =T_{j_{1}j_{2}\cdots j_{q}}^{i_{1}i_{2}\cdots i_{p}}\mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}$ with basis elements: $\mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}=\mathbf {e} _{i_{1}}\otimes \mathbf {e} _{i_{2}}\otimes \cdots \mathbf {e} _{i_{p}}\otimes \mathbf {e} ^{j_{1}}\otimes \mathbf {e} ^{j_{2}}\otimes \cdots \mathbf {e} ^{j_{q}}$ the components transform according to: ${\bar {T}}_{\ell _{1}\ell _{2}\cdots \ell _{q}}^{k_{1}k_{2}\cdots k_{p}}={\mathsf {L}}_{i_{1}}{}^{k_{1}}{\mathsf {L}}_{i_{2}}{}^{k_{2}}\cdots {\mathsf {L}}_{i_{p}}{}^{k_{p}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{1}}{}^{j_{1}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{2}}{}^{j_{2}}\cdots \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{q}}{}^{j_{q}}T_{j_{1}j_{2}\cdots j_{q}}^{i_{1}i_{2}\cdots i_{p}}$ as for the bases: ${\bar {\mathbf {e} }}_{k_{1}k_{2}\cdots k_{p}}^{\ell _{1}\ell _{2}\cdots \ell _{q}}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{1}}{}^{i_{1}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{2}}{}^{i_{2}}\cdots \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{p}}{}^{i_{p}}{\mathsf {L}}_{j_{1}}{}^{\ell _{1}}{\mathsf {L}}_{j_{2}}{}^{\ell _{2}}\cdots {\mathsf {L}}_{j_{q}}{}^{\ell _{q}}\mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}$ For Cartesian tensors, only the order p + q of the tensor matters in a Euclidean space with an orthonormal basis, and all p + q indices can be lowered. A Cartesian basis does not exist unless the vector space has a positive-definite metric, and thus cannot be used in relativistic contexts. History Dyadic tensors were historically the first approach to formulating second-order tensors, similarly triadic tensors for third-order tensors, and so on. Cartesian tensors use tensor index notation, in which the variance may be glossed over and is often ignored, since the components remain unchanged by raising and lowering indices. See also • Tensor algebra • Tensor calculus • Tensors in curvilinear coordinates • Rotation group References 1. C.W. Misner; K.S. Thorne; J.A. Wheeler (15 September 1973). Gravitation. ISBN 0-7167-0344-0., used throughout 2. T. W. B. Kibble (1973). Classical Mechanics. European physics series (2nd ed.). McGraw Hill. ISBN 978-0-07-084018-8., see Appendix C. 3. M. R. Spiegel; S. Lipcshutz; D. Spellman (2009). Vector analysis. Schaum's Outlines (2nd ed.). McGraw Hill. ISBN 978-0-07-161545-7. General references • D. C. Kay (1988). Tensor Calculus. Schaum's Outlines. McGraw Hill. pp. 18–19, 31–32. ISBN 0-07-033484-6. • M. R. Spiegel; S. Lipcshutz; D. Spellman (2009). Vector analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 227. ISBN 978-0-07-161545-7. • J.R. Tyldesley (1975). An introduction to tensor analysis for engineers and applied scientists. Longman. pp. 5–13. ISBN 0-582-44355-5. Further reading and applications • S. Lipcshutz; M. Lipson (2009). Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1. • Pei Chi Chou (1992). Elasticity: Tensor, Dyadic, and Engineering Approaches. Courier Dover Publications. ISBN 048-666-958-0. • T. W. Körner (2012). Vectors, Pure and Applied: A General Introduction to Linear Algebra. Cambridge University Press. p. 216. 
ISBN 978-11070-3356-6. • R. Torretti (1996). Relativity and Geometry. Courier Dover Publications. p. 103. ISBN 0-4866-90466. • J. J. L. Synge; A. Schild (1978). Tensor Calculus. Courier Dover Publications. p. 128. ISBN 0-4861-4139-X. • C. A. Balafoutis; R. V. Patel (1991). Dynamic Analysis of Robot Manipulators: A Cartesian Tensor Approach. The Kluwer International Series in Engineering and Computer Science: Robotics: vision, manipulation and sensors. Vol. 131. Springer. ISBN 0792-391-454. • S. G. Tzafestas (1992). Robotic systems: advanced techniques and applications. Springer. ISBN 0-792-317-491. • T. Dass; S. K. Sharma (1998). Mathematical Methods In Classical And Quantum Physics. Universities Press. p. 144. ISBN 817-371-0899. • G. F. J. Temple (2004). Cartesian Tensors: An Introduction. Dover Books on Mathematics Series. Dover. ISBN 0-4864-3908-9. • H. Jeffreys (1961). Cartesian Tensors. Cambridge University Press. ISBN 9780521054232. External links • Cartesian Tensors • V. N. Kaliakin, Brief Review of Tensors, University of Delaware • R. E. Hunt, Cartesian Tensors, University of Cambridge
A Rotini Model of an Atom

[Figure: Principia Philosophiae by Rene Descartes, page 271 (detail), Amsterdam 1644; from the European Cultural Heritage Online project and the Max Planck Institute for the History of Science.]

Consider an atom $\mathbf{A}$ described by a repetitive chain of space-time events with time coordinates $t$. Our first spatial conception of such an atom was as a compound quark in quark space. But to implement the hypothesis of spatial isotropy, our next view is set in a Cartesian coordinate system where $\mathbf{A}$ is represented as a rotating atomic clock with a phase angle $\theta$ given by $\theta \left(t\right) = \theta_{0} +\omega t$ such that $\mathbf{A}$ is whirling about its polar axis with an angular frequency of $\omega$. The rotation supposedly blurs variations in the electric and magnetic radii, leaving an effective orbital radius $R$ that is then used to represent the atom as a rotating cylinder. This rotating cylinder model smooths out some rough edges, but it is still incomplete because the electromagnetic part of the quark metric is larger than the other non-polar components. So one radial direction is predominant and the atom is shaped more like a piece of rotini pasta than a solid cylinder. This corkscrew spiral can be approximated by a geometric curve called a helicoid. It is described mathematically by radii of $\rho_{x} = R \cos{\! 2 \theta}$ and $\rho_{y} = R \sin{\! 2 \theta}$ and $\rho_{z} = \frac{ \lambda \theta}{2 \pi}$ where $\lambda$ is the wavelength of $\mathbf{A}$. When moving, the rotini model looks a lot like a machine called the Archimedean screw. Humans have been thinking about screw conveyor mechanisms like this for thousands of years; they were reportedly used to irrigate the Hanging Gardens of Babylon as early as 600 BC. This atomic model is good for understanding the Euclidean metric of the ordinary spaces in our laboratories and classrooms.
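As a small illustration of the helicoid parametrisation above, the following Python sketch tabulates the $(\rho_x, \rho_y, \rho_z)$ coordinates over one turn of the phase angle; the radius R and wavelength lam used here are arbitrary placeholder values, not values taken from the text.

import numpy as np

R = 1.0          # effective orbital radius (arbitrary placeholder)
lam = 0.5        # wavelength of the atom (arbitrary placeholder)

theta = np.linspace(0.0, 2.0 * np.pi, 200)   # one full turn of the phase angle
rho_x = R * np.cos(2.0 * theta)
rho_y = R * np.sin(2.0 * theta)
rho_z = lam * theta / (2.0 * np.pi)          # advances by one wavelength per 2*pi of theta

helix = np.column_stack([rho_x, rho_y, rho_z])
print(helix[:3])                             # first few points on the corkscrew curve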
WALLABY Pilot Survey: Public release of HI kinematic models for more than 100 galaxies from phase 1 of ASKAP pilot observations
N. Deg, K. Spekkens, T. Westmeier, T. N. Reynolds, P. Venkataraman, S. Goliath, A. X. Shen, R. Halloran, A. Bosma, B. Catinella, W. J. G. de Blok, H. Dénes, E. M. DiTeodoro, A. Elagali, B.-Q. For, C. Howlett, G. I. G. Józsa, P. Kamphuis, D. Kleiner, B. Koribalski, K. Lee-Waddell, F. Lelli, X. Lin, C. Murugeshan, S. Oh, J. Rhee, T. C. Scott, L. Staveley-Smith, J. M. van der Hulst, L. Verdes-Montenegro, J. Wang, O. I. Wong
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here—which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available—consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas rich detections resolved by at least 3–4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large publicly available and homogeneously analysed dataset will be a powerful legacy product that will enable a wide range of scientific studies.
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations
T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J. D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said
We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$.
The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$ . The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. Most notably, there are systematic flux errors of up to several 10% caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey. The First Large Absorption Survey in H i (FLASH): I. Science goals and survey design James R. Allison, E. M. Sadler, A. D. Amaral, T. An, S. J. Curran, J. Darling, A. C. Edge, S. L. Ellison, K. L. Emig, B. M. Gaensler, L. Garratt-Smithson, M. Glowacki, K. Grasha, B. S. Koribalski, C. del P. Lagos, P. Lah, E. K. Mahony, S. A. Mao, R. Morganti, V. A. Moss, M. Pettini, K. A. Pimbblet, C. Power, P. Salas, L. Staveley-Smith, M. T. Whiting, O. I. Wong, H. Yoon, Z. Zheng, M. A. Zwaan Published online by Cambridge University Press: 21 March 2022, e010 We describe the scientific goals and survey design of the First Large Absorption Survey in H i (FLASH), a wide field survey for 21-cm line absorption in neutral atomic hydrogen (H i) at intermediate cosmological redshifts. FLASH will be carried out with the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope and is planned to cover the sky south of $\delta \approx +40\,\deg$ at frequencies between 711.5 and 999.5 MHz. At redshifts between $z = 0.4$ and $1.0$ (look-back times of 4 – 8 Gyr), the H i content of the Universe has been poorly explored due to the difficulty of carrying out radio surveys for faint 21-cm line emission and, at ultra-violet wavelengths, space-borne searches for Damped Lyman- $\alpha$ absorption in quasar spectra. The ASKAP wide field of view and large spectral bandwidth, in combination with a radio-quiet site, will enable a search for absorption lines in the radio spectra of bright continuum sources over 80% of the sky. This survey is expected to detect at least several hundred intervening 21-cm absorbers and will produce an H i-absorption-selected catalogue of galaxies rich in cool, star-forming gas, some of which may be concealed from optical surveys. 
Likewise, at least several hundred associated 21-cm absorbers are expected to be detected within the host galaxies of radio sources at $0.4 < z < 1.0$ , providing valuable kinematical information for models of gas accretion and jet-driven feedback in radio-loud active galactic nuclei. FLASH will also detect OH 18-cm absorbers in diffuse molecular gas, megamaser OH emission, radio recombination lines, and stacked H i emission. GASKAP-HI pilot survey science I: ASKAP zoom observations of Hi emission in the Small Magellanic Cloud N. M. Pingel, J. Dempsey, N. M. McClure-Griffiths, J. M. Dickey, K. E. Jameson, H. Arce, G. Anglada, J. Bland-Hawthorn, S. L. Breen, F. Buckland-Willis, S. E. Clark, J. R. Dawson, H. Dénes, E. M. Di Teodoro, B.-Q. For, Tyler J. Foster, J. F. Gómez, H. Imai, G. Joncas, C.-G. Kim, M.-Y. Lee, C. Lynn, D. Leahy, Y. K. Ma, A. Marchal, D. McConnell, M.-A. Miville-Deschènes, V. A. Moss, C. E. Murray, D. Nidever, J. Peek, S. Stanimirović, L. Staveley-Smith, T. Tepper-Garcia, C. D. Tremblay, L. Uscanga, J. Th. van Loon, E. Vázquez-Semadeni, J. R. Allison, C. S. Anderson, Lewis Ball, M. Bell, D. C.-J. Bock, J. Bunton, F. R. Cooray, T. Cornwell, B. S. Koribalski, N. Gupta, D. B. Hayman, L. Harvey-Smith, K. Lee-Waddell, A. Ng, C. J. Phillips, M. Voronkov, T. Westmeier, M. T. Whiting Published online by Cambridge University Press: 07 February 2022, e005 We present the most sensitive and detailed view of the neutral hydrogen ( ${\rm H\small I}$ ) emission associated with the Small Magellanic Cloud (SMC), through the combination of data from the Australian Square Kilometre Array Pathfinder (ASKAP) and Parkes (Murriyang), as part of the Galactic Australian Square Kilometre Array Pathfinder (GASKAP) pilot survey. These GASKAP-HI pilot observations, for the first time, reveal ${\rm H\small I}$ in the SMC on similar physical scales as other important tracers of the interstellar medium, such as molecular gas and dust. The resultant image cube possesses an rms noise level of 1.1 K ( $1.6\,\mathrm{mJy\ beam}^{-1}$ ) $\mathrm{per}\ 0.98\,\mathrm{km\ s}^{-1}$ spectral channel with an angular resolution of $30^{\prime\prime}$ ( ${\sim}10\,\mathrm{pc}$ ). We discuss the calibration scheme and the custom imaging pipeline that utilises a joint deconvolution approach, efficiently distributed across a computing cluster, to accurately recover the emission extending across the entire ${\sim}25\,\mathrm{deg}^2$ field-of-view. We provide an overview of the data products and characterise several aspects including the noise properties as a function of angular resolution and the represented spatial scales by deriving the global transfer function over the full spectral range. A preliminary spatial power spectrum analysis on individual spectral channels reveals that the power law nature of the density distribution extends down to scales of 10 pc. We highlight the scientific potential of these data by comparing the properties of an outflowing high-velocity cloud with previous ASKAP+Parkes ${\rm H\small I}$ test observations. The GLEAM 200-MHz local radio luminosity function for AGN and star-forming galaxies T. M. O. Franzen, N. Seymour, E. M. Sadler, T. Mauch, S. V. White, C. A. Jackson, R. Chhetri, B. Quici, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, B. For, B. M. Gaensler, P. J. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, J. Morgan, A. R. Offringa, P. Procopio, L. Staveley-Smith, R. B. Wayth, C. Wu, Q. 
Zheng Published online by Cambridge University Press: 06 September 2021, e041 The GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) is a radio continuum survey at 76–227 MHz of the entire southern sky (Declination $<\!{+}30^{\circ}$ ) with an angular resolution of ${\approx}2$ arcmin. In this paper, we combine GLEAM data with optical spectroscopy from the 6dF Galaxy Survey to construct a sample of 1 590 local (median $z \approx 0.064$ ) radio sources with $S_{200\,\mathrm{MHz}} > 55$ mJy across an area of ${\approx}16\,700\,\mathrm{deg}^{2}$ . From the optical spectra, we identify the dominant physical process responsible for the radio emission from each galaxy: 73% are fuelled by an active galactic nucleus (AGN) and 27% by star formation. We present the local radio luminosity function for AGN and star-forming (SF) galaxies at 200 MHz and characterise the typical radio spectra of these two populations between 76 MHz and ${\sim}1$ GHz. For the AGN, the median spectral index between 200 MHz and ${\sim}1$ GHz, $\alpha_{\mathrm{high}}$ , is $-0.600 \pm 0.010$ (where $S \propto \nu^{\alpha}$ ) and the median spectral index within the GLEAM band, $\alpha_{\mathrm{low}}$ , is $-0.704 \pm 0.011$ . For the SF galaxies, the median value of $\alpha_{\mathrm{high}}$ is $-0.650 \pm 0.010$ and the median value of $\alpha_{\mathrm{low}}$ is $-0.596 \pm 0.015$ . Among the AGN population, flat-spectrum sources are more common at lower radio luminosity, suggesting the existence of a significant population of weak radio AGN that remain core-dominated even at low frequencies. However, around 4% of local radio AGN have ultra-steep radio spectra at low frequencies ( $\alpha_{\mathrm{low}} < -1.2$ ). These ultra-steep-spectrum sources span a wide range in radio luminosity, and further work is needed to clarify their nature. GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey III: South Galactic Pole data release Murchison Widefield Array T. M. O. Franzen, N. Hurley-Walker, S. V. White, P. J. Hancock, N. Seymour, A. D. Kapińska, L. Staveley-Smith, R. B. Wayth We present the South Galactic Pole (SGP) data release from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey. These data combine both years of GLEAM observations at 72–231 MHz conducted with the Murchison Widefield Array (MWA) and cover an area of 5 113 $\mathrm{deg}^{2}$ centred on the SGP at $20^{\mathrm{h}} 40^{\mathrm{m}} < \mathrm{RA} < 05^{\mathrm{h}} 04^{\mathrm{m}}$ and $-48^{\circ} < \mathrm{Dec} < -2^{\circ} $. At 216 MHz, the typical rms noise is ${\approx}5$ mJy beam–1 and the angular resolution ${\approx}2$ arcmin. The source catalogue contains a total of 108 851 components above $5\sigma$, of which 77% have measured spectral indices between 72 and 231 MHz. Improvements to the data reduction in this release include the use of the GLEAM Extragalactic catalogue as a sky model to calibrate the data, a more efficient and automated algorithm to deconvolve the snapshot images, and a more accurate primary beam model to correct the flux scale. This data release enables more sensitive large-scale studies of extragalactic source populations as well as spectral variability studies on a one-year timescale. New candidate radio supernova remnants detected in the GLEAM survey over 345° < l < 60°, 180° < l < 240° N. Hurley-Walker, M. D. Filipović, B. M. Gaensler, D. A. Leahy, P. J. Hancock, T. M. O. Franzen, A. R. Offringa, J. R. Callingham, L. Hindson, C. Wu, M. E. Bell, B.-Q. For, M. 
Johnston-Hollitt, A. D. Kapińska, J. Morgan, T. Murphy, B. McKinley, P. Procopio, L. Staveley-Smith, R. B. Wayth, Q. Zheng We have detected 27 new supernova remnants (SNRs) using a new data release of the GLEAM survey from the Murchison Widefield Array telescope, including the lowest surface brightness SNR ever detected, G 0.1 – 9.7. Our method uses spectral fitting to the radio continuum to derive spectral indices for 26/27 candidates, and our low-frequency observations probe a steeper spectrum population than previously discovered. None of the candidates have coincident WISE mid-IR emission, further showing that the emission is non-thermal. Using pulsar associations we derive physical properties for six candidate SNRs, finding G 0.1 – 9.7 may be younger than 10 kyr. Sixty per cent of the candidates subtend areas larger than 0.2 deg2 on the sky, compared to < 25% of previously detected SNRs. We also make the first detection of two SNRs in the Galactic longitude range 220°–240°. GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey II: Galactic plane 345° < l < 67°, 180° < l < 240° N. Hurley-Walker, P. J. Hancock, T. M. O. Franzen, J. R. Callingham, A. R. Offringa, L. Hindson, C. Wu, M. E. Bell, B.-Q. For, B. M. Gaensler, M. Johnston-Hollitt, A. D. Kapińska, J. Morgan, T. Murphy, B. McKinley, P. Procopio, L. Staveley-Smith, R. B. Wayth, Q. Zheng This work makes available a further $2\,860~\text{deg}^2$ of the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey, covering half of the accessible galactic plane, across 20 frequency bands sampling 72–231 MHz, with resolution $4\,\text{arcmin}-2\,\text{arcmin}$. Unlike previous GLEAM data releases, we used multi-scale CLEAN to better deconvolve large-scale galactic structure. For the galactic longitude ranges $345^\circ < l < 67^\circ$, $180^\circ < l < 240^\circ$, we provide a compact source catalogue of 22 037 components selected from a 60-MHz bandwidth image centred at 200 MHz, with RMS noise $\approx10-20\,\text{mJy}\,\text{beam}^{-1}$ and position accuracy better than 2 arcsec. The catalogue has a completeness of 50% at ${\approx}120\,\text{mJy}$, and a reliability of 99.86%. It covers galactic latitudes $1^\circ\leq|b|\leq10^\circ$ towards the galactic centre and $|b|\leq10^\circ$ for other regions, and is available from Vizier; images covering $|b|\leq10^\circ$ for all longitudes are made available on the GLEAM Virtual Observatory (VO).server and SkyView. Candidate radio supernova remnants observed by the GLEAM survey over 345° < l < 60° and 180° < l < 240° N. Hurley-Walker, B. M. Gaensler, D. A. Leahy, M. D. Filipović, P. J. Hancock, T. M. O. Franzen, A. R. Offringa, J. R. Callingham, L. Hindson, C. Wu, M. E. Bell, B.-Q. For, M. Johnston-Hollitt, A. D. Kapińska, J. Morgan, T. Murphy, B. McKinley, P. Procopio, L. Staveley-Smith, R. B. Wayth, Q. Zheng We examined the latest data release from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey covering 345° < l < 60° and 180° < l < 240°, using these data and that of the Widefield Infrared Survey Explorer to follow up proposed candidate Supernova Remnant (SNR) from other sources. Of the 101 candidates proposed in the region, we are able to definitively confirm ten as SNRs, tentatively confirm two as SNRs, and reclassify five as H ii regions. A further two are detectable in our images but difficult to classify; the remaining 82 are undetectable in these data. 
We also investigated the 18 unclassified Multi-Array Galactic Plane Imaging Survey (MAGPIS) candidate SNRs, newly confirming three as SNRs, reclassifying two as H ii regions, and exploring the unusual spectra and morphology of two others. The Phase II Murchison Widefield Array: Design overview Randall B. Wayth, Steven J. Tingay, Cathryn M. Trott, David Emrich, Melanie Johnston-Hollitt, Ben McKinley, B. M. Gaensler, A. P. Beardsley, T. Booler, B. Crosse, T. M. O. Franzen, L. Horsley, D. L. Kaplan, D. Kenney, M. F. Morales, D. Pallot, G. Sleap, K. Steele, M. Walker, A. Williams, C. Wu, Iver. H. Cairns, M. D. Filipovic, S. Johnston, T. Murphy, P. Quinn, L. Staveley-Smith, R. Webster, J. S. B. Wyithe We describe the motivation and design details of the 'Phase II' upgrade of the Murchison Widefield Array radio telescope. The expansion doubles to 256 the number of antenna tiles deployed in the array. The new antenna tiles enhance the capabilities of the Murchison Widefield Array in several key science areas. Seventy-two of the new tiles are deployed in a regular configuration near the existing array core. These new tiles enhance the surface brightness sensitivity of the array and will improve the ability of the Murchison Widefield Array to estimate the slope of the Epoch of Reionisation power spectrum by a factor of ∼3.5. The remaining 56 tiles are deployed on long baselines, doubling the maximum baseline of the array and improving the array u, v coverage. The improved imaging capabilities will provide an order of magnitude improvement in the noise floor of Murchison Widefield Array continuum images. The upgrade retains all of the features that have underpinned the Murchison Widefield Array's success (large field of view, snapshot image quality, and pointing agility) and boosts the scientific potential with enhanced imaging capabilities and by enabling new calibration strategies. Calibration and Stokes Imaging with Full Embedded Element Primary Beam Model for the Murchison Widefield Array M. Sokolowski, T. Colegate, A. T. Sutinjo, D. Ung, R. Wayth, N. Hurley-Walker, E. Lenc, B. Pindor, J. Morgan, D. L. Kaplan, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, Bi-Qing For, B. M. Gaensler, P. J. Hancock, L. Hindson, M. Johnston-Hollitt, A. D. Kapińska, B. McKinley, A. R. Offringa, P. Procopio, L. Staveley-Smith, C. Wu, Q. Zheng The Murchison Widefield Array (MWA), located in Western Australia, is one of the low-frequency precursors of the international Square Kilometre Array (SKA) project. In addition to pursuing its own ambitious science programme, it is also a testbed for wide range of future SKA activities ranging from hardware, software to data analysis. The key science programmes for the MWA and SKA require very high dynamic ranges, which challenges calibration and imaging systems. Correct calibration of the instrument and accurate measurements of source flux densities and polarisations require precise characterisation of the telescope's primary beam. Recent results from the MWA GaLactic Extragalactic All-sky Murchison Widefield Array (GLEAM) survey show that the previously implemented Average Embedded Element (AEE) model still leaves residual polarisations errors of up to 10–20% in Stokes Q. We present a new simulation-based Full Embedded Element (FEE) model which is the most rigorous realisation yet of the MWA's primary beam model. It enables efficient calculation of the MWA beam response in arbitrary directions without necessity of spatial interpolation. 
In the new model, every dipole in the MWA tile (4 × 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200–231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduced the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U. Spectral-Line Observations Using a Phased Array Feed on the Parkes Telescope T.N. Reynolds, L. Staveley-Smith, J. Rhee, T. Westmeier, A. P. Chippendale, X. Deng, R. D. Ekers, M. Kramer We present first results from pilot observations using a phased array feed (PAF) mounted on the Parkes 64-m radio telescope. The observations presented here cover a frequency range from 1 150 to 1 480 MHz and are used to show the ability of PAFs to suppress standing wave problems by a factor of ~10, which afflict normal feeds. We also compare our results with previous HIPASS observations and with previous H i images of the Large Magellanic Cloud. Drift scan observations of the GAMA G23 field resulted in direct H i detections at z = 0.0043 and z = 0.0055 of HIPASS galaxies J2242-30 and J2309-30. Our new measurements generally agree with archival data in spectral shape and flux density, with small differences being due to differing beam patterns. We also detect signal in the stacked H i data of 1 094 individually undetected galaxies in the GAMA G23 field in the redshift range 0.05 ⩽ z ⩽ 0.075. Finally, we use the low standing wave ripple and wide bandwidth of the PAF to set a 3σ upper limit to any positronium recombination line emission from the Galactic Centre of <0.09 K, corresponding to a recombination rate of <3.0 × 1045 s−1. A High-Resolution Foreground Model for the MWA EoR1 Field: Model and Implications for EoR Power Spectrum Analysis P. Procopio, R. B. Wayth, J. Line, C. M. Trott, H. T. Intema, D. A. Mitchell, B. Pindor, J. Riding, S. J. Tingay, M. E. Bell, J. R. Callingham, K. S. Dwarakanath, Bi-Qing For, B. M. Gaensler, P. J. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, J. Morgan, A. Offringa, L. Staveley-Smith, Chen Wu, Q. Zheng Published online by Cambridge University Press: 10 August 2017, e033 The current generation of experiments aiming to detect the neutral hydrogen signal from the Epoch of Reionisation (EoR) is likely to be limited by systematic effects associated with removing foreground sources from target fields. In this paper, we develop a model for the compact foreground sources in one of the target fields of the MWA's EoR key science experiment: the 'EoR1' field. The model is based on both the MWA's GLEAM survey and GMRT 150 MHz data from the TGSS survey, the latter providing higher angular resolution and better astrometric accuracy for compact sources than is available from the MWA alone. The model contains 5 049 sources, some of which have complicated morphology in MWA data, Fornax A being the most complex. The higher resolution data show that 13% of sources that appear point-like to the MWA have complicated morphology such as double and quad structure, with a typical separation of 33 arcsec. 
We derive an analytic expression for the error introduced into the EoR two-dimensional power spectrum due to peeling close double sources as single point sources and show that for the measured source properties, the error in the power spectrum is confined to high k⊥ modes that do not affect the overall result for the large-scale cosmological signal of interest. The brightest 10 mis-modelled sources in the field contribute 90% of the power bias in the data, suggesting that it is most critical to improve the models of the brightest sources. With this hybrid model, we reprocess data from the EoR1 field and show a maximum of 8% improved calibration accuracy and a factor of two reduction in residual power in k-space from peeling these sources. Implications for future EoR experiments including the SKA are discussed in relation to the improvements obtained. Low-Frequency Spectral Energy Distributions of Radio Pulsars Detected with the Murchison Widefield Array Tara Murphy, David L. Kaplan, Martin E. Bell, J. R. Callingham, Steve Croft, Simon Johnston, Dougal Dobie, Andrew Zic, Jake Hughes, Christene Lynch, Paul Hancock, Natasha Hurley-Walker, Emil Lenc, K. S. Dwarakanath, B.-Q. For, B. M. Gaensler, L. Hindson, M. Johnston-Hollitt, A. D. Kapińska, B. McKinley, J. Morgan, A. R. Offringa, P. Procopio, L. Staveley-Smith, R. Wayth, C. Wu, Q. Zheng We present low-frequency spectral energy distributions of 60 known radio pulsars observed with the Murchison Widefield Array telescope. We searched the GaLactic and Extragalactic All-sky Murchison Widefield Array survey images for 200-MHz continuum radio emission at the position of all pulsars in the Australia Telescope National Facility (ATNF) pulsar catalogue. For the 60 confirmed detections, we have measured flux densities in 20 × 8 MHz bands between 72 and 231 MHz. We compare our results to existing measurements and show that the Murchison Widefield Array flux densities are in good agreement. A Southern-Sky Total Intensity Source Catalogue at 2.3 GHz from S-Band Polarisation All-Sky Survey Data B. W. Meyers, N. Hurley-Walker, P. J. Hancock, T. M. O. Franzen, E. Carretti, L. Staveley-Smith, B. M. Gaensler, M. Haverkorn, S. Poppi The S-band Polarisation All-Sky Survey has observed the entire southern sky using the 64-m Parkes radio telescope at 2.3 GHz with an effective bandwidth of 184 MHz. The surveyed sky area covers all declinations δ ⩽ 0°. To analyse compact sources, the survey data have been re-processed to produce a set of 107 Stokes I maps with 10.75 arcmin resolution and the large scale emission contribution filtered out. In this paper, we use these Stokes I images to create a total intensity southern-sky extragalactic source catalogue at 2.3 GHz. The source catalogue contains 23 389 sources and covers a sky area of 16 600 deg2, excluding the Galactic plane for latitudes |b| < 10°. Approximately, 8% of catalogued sources are resolved. S-band Polarisation All-Sky Survey source positions are typically accurate to within 35 arcsec. At a flux density of 225 mJy, the S-band Polarisation All-Sky Survey source catalogue is more than 95% complete, and ~ 94% of S-band Polarisation All-Sky Survey sources brighter than 500 mJy beam−1 have a counterpart at lower frequencies. The Radio Remnant of Supernova 1987A − A Broader View G. Zanardo, L. Staveley-Smith, C. -Y. Ng, R. Indebetouw, M. Matsuura, B. M. Gaensler, A. K. 
Tzioumis Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S331 / February 2017 Published online by Cambridge University Press: 17 October 2017, pp. 274-283 Supernova remnants (SNRs) are powerful particle accelerators. As a supernova (SN) blast wave propagates through the circumstellar medium (CSM), electrons and protons scatter across the shock and gain energy by entrapment in the magnetic field. The accelerated particles generate further magnetic field fluctuations and local amplification, leading to cosmic ray production. The wealth of data from Supernova 1987A is providing a template of the SN-CSM interaction, and an important guide to the radio detection and identification of core-collapse SNe based on their spectral properties. Thirty years after the explosion, radio observations of SNR 1987A span from 70 MHz to 700 GHz. We review extensive observing campaigns with the Australia Telescope Compact Array (ATCA) and the Atacama Large Millimeter/submillimeter Array (ALMA), and follow-ups with other radio telescopes. Observations across the radio spectrum indicate rapid changes in the remnant morphology, while current ATCA and ALMA observations show that the SNR has entered a new evolutionary phase. ALMA observations of Molecules in Supernova 1987A M. Matsuura, R. Indebetouw, S. Woosley, V. Bujarrabal, F. J. Abellán, R. McCray, J. Kamenetzky, C. Fransson, M. J. Barlow, H. L. Gomez, P. Cigan, I De Looze, J. Spyromilio, L. Staveley-Smith, G. Zanardo, P. Roche, J. Larsson, S. Viti, J. Th. van Loon, J. C. Wheeler, M. Baes, R. Chevalier, P. Lundqvist, J. M. Marcaide, E. Dwek, M. Meixner, C.-Y. Ng, G. Sonneborn, J. Yates Supernova (SN) 1987A has provided a unique opportunity to study how SN ejecta evolve in 30 years time scale. We report our ALMA spectral observations of SN 1987A, taken in 2014, 2015 and 2016, with detections of CO, 28SiO, HCO+ and SO, with weaker lines of 29SiO. We find a dip in the SiO line profiles, suggesting that the ejecta morphology is likely elongated. The difference of the CO and SiO line profiles is consistent with hydrodynamic simulations, which show that Rayleigh-Taylor instabilities causes mixing of gas, with heavier elements much more disturbed, making more elongated structure. Using 28SiO and its isotopologues, Si isotope ratios were estimated for the first time in SN 1987A. The estimated ratios appear to be consistent with theoretical predictions of inefficient formation of neutron rich atoms at lower metallicity, such as observed in the Large Magellanic Cloud (about half a solar metallicity). The deduced large HCO+ mass and small SiS mass, which are inconsistent to the predictions of chemical model, might be explained by some mixing of elements immediately after the explosion. The mixing might have made some hydrogen from the envelope to sink into carbon and oxygen-rich zone during early days after the explosion, enabling the formation of a substantial mass of HCO+. Oxygen atoms may penetrate into silicon and sulphur zone, suppressing formation of SiS. Our ALMA observations open up a new window to investigate chemistry, dynamics and explosive-nucleosynthesis in supernovae. Ionospheric Modelling using GPS to Calibrate the MWA. II: Regional Ionospheric Modelling using GPS and GLONASS to Estimate Ionospheric Gradients B. S. Arora, J. Morgan, S. M. Ord, S. J. Tingay, M. Bell, J. R. Callingham, K. S. Dwarakanath, B.-Q. For, P. Hancock, L. Hindson, N. Hurley-Walker, M. Johnston-Hollitt, A. D. Kapińska, E. Lenc, B. McKinley, A. R. 
Offringa, P. Procopio, L. Staveley-Smith, R. B. Wayth, C. Wu, Q. Zheng Published online by Cambridge University Press: 13 July 2016, e031 We estimate spatial gradients in the ionosphere using the Global Positioning System and GLONASS (Russian global navigation system) observations, utilising data from multiple Global Positioning System stations in the vicinity of Murchison Radio-astronomy Observatory. In previous work, the ionosphere was characterised using a single-station to model the ionosphere as a single layer of fixed height and this was compared with ionospheric data derived from radio astronomy observations obtained from the Murchison Widefield Array. Having made improvements to our data quality (via cycle slip detection and repair) and incorporating data from the GLONASS system, we now present a multi-station approach. These two developments significantly improve our modelling of the ionosphere. We also explore the effects of a variable-height model. We conclude that modelling the small-scale features in the ionosphere that have been observed with the MWA will require a much denser network of Global Navigation Satellite System stations than is currently available at the Murchison Radio-astronomy Observatory. A Large-Scale, Low-Frequency Murchison Widefield Array Survey of Galactic H ii Regions between 260 < l < 340 L. Hindson, M. Johnston-Hollitt, N. Hurley-Walker, J. R. Callingham, H. Su, J. Morgan, M. Bell, G. Bernardi, J. D. Bowman, F. Briggs, R. J. Cappallo, A. A. Deshpande, K. S. Dwarakanath, B.-Q For, B. M. Gaensler, L. J. Greenhill, P. Hancock, B. J. Hazelton, A. D. Kapińska, D. L. Kaplan, E. Lenc, C. J. Lonsdale, B. Mckinley, S. R. McWhirter, D. A. Mitchell, M. F. Morales, E. Morgan, D. Oberoi, A. Offringa, S. M. Ord, P. Procopio, T. Prabu, N. Udaya Shankar, K. S. Srivani, L. Staveley-Smith, R. Subrahmanyan, S. J. Tingay, R. B. Wayth, R. L. Webster, A. Williams, C. L. Williams, C. Wu, Q. Zheng Published online by Cambridge University Press: 17 May 2016, e020 We have compiled a catalogue of H ii regions detected with the Murchison Widefield Array between 72 and 231 MHz. The multiple frequency bands provided by the Murchison Widefield Array allow us identify the characteristic spectrum generated by the thermal Bremsstrahlung process in H ii regions. We detect 306 H ii regions between 260° < l < 340° and report on the positions, sizes, peak, integrated flux density, and spectral indices of these H ii regions. By identifying the point at which H ii regions transition from the optically thin to thick regime, we derive the physical properties including the electron density, ionised gas mass, and ionising photon flux, towards 61 H ii regions. This catalogue of H ii regions represents the most extensive and uniform low frequency survey of H ii regions in the Galaxy to date. HI Supergiant Shells in the Large Magellanic Cloud S. Kim, L. Staveley-Smith, R. J. Sault, M. J. Kesteven, D. McConnell, M. A. Dopita, M. Bessell Journal: Publications of the Astronomical Society of Australia / Volume 15 / Issue 1 / 1998 The recently completed HI mosaic survey of the Large Magellanic Cloud (Kim et al. 1997) reveals complex structure in the interstellar medium, including filaments, arcs, holes and shells. We have catalogued giant and supergiant HI shells and searched for correlations with Hα emission, using a new image taken with a camera lens mounted on the 16-inch telescope at Siding Spring Observatory.
Measurement Instruments for the Social Sciences

Why ability point estimates can be pointless: a primer on using skill measures from large-scale assessments in secondary analyses

Advances in Methodology

Clemens M. Lechner (ORCID: orcid.org/0000-0003-3053-8701), Nivedita Bhaktha, Katharina Groskurth & Matthias Bluemke

Measurement Instruments for the Social Sciences, volume 3, Article number: 2 (2021)

Abstract

Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how to best incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods are optimal under all criteria, methods that result in a single point estimate of each respondent's ability (i.e., all types of "test scores") are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.

In the last two decades, large-scale assessment surveys (LSAS) have expanded considerably in number and scope. National and international LSAS, such as PISA, TIMSS, PIAAC, NEPS, or NAEP, now provide a wealth of data on cognitive and socio-emotional (or "non-cognitive") skillsFootnote 1 of children, youth, and adults. This increasing data availability has led to a veritable surge in investigations in economics, psychology, and sociology on issues such as skill formation, inequality in skills, or labor market returns to skills. As the methodological sophistication of LSAS has evolved, the gap between expert psychometriciansFootnote 2 who curate the assessments and applied researchers who use these data as secondary analysts has widened. LSAS often apply advanced statistical models and scoring techniques with which few applied researchers are familiar (e.g., Jacob & Rothstein, 2016; Jerrim, Lopez-Agudo, Marcenaro-Gutierrez, & Shure, 2017). Consequently, there is uncertainty among applied researchers about the statistical assumptions and computational details behind these different models and scoring techniques, their respective pros and cons, and how to best incorporate the skill measures that result from them in secondary analyses. Moreover, secondary analysts often use the best available methods in less-than-optimal ways (e.g., Braun & von Davier, 2017).
Thus, there is the risk of a growing disconnect between best practices in the use of data from LSAS and actual practices in applied research. Less-than-optimal practices may result in faulty analyses and erroneous substantive conclusions. Against this backdrop, this article is intended as a primer for applied researchers working with LSAS as secondary analysts. Our exposition starts with a non-technical introduction to the key properties of skill assessment. We then review the three principal options that applied researchers have at their disposal to incorporate skill measures from LSAS in their secondary analyses: test scores, structural equation modeling (SEM), and plausible values (PVs). We discuss advantages and disadvantages of the three methods (i.e., test scores, SEM, and PVs) based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency across different analyses and analysts of test scores, PVs, or measurement model parameters in SEM). Our aim is to inform secondary analysts about the advantages and potential pitfalls of each option in order to help them make informed choices and understand potential limitations and biases that may ensue from using one option. The most important take-away message will be that using a single ability point estimate per person—that is, using test scores—is not the most appropriate option for research on skills. We conclude with practical recommendations.

From testing to test scores: three basic properties of skill assessment

Many innovations in psychometrics have sprung from the context of LSAS. To appreciate why increasingly sophisticated psychometric models are needed, it is critical to understand the properties of skill assessments and the challenges these entail: (1) the distinction between latent (unobserved) skill variables and their manifest (observed) indicators; (2) the concept of measurement error; and (3) the difference between measurement/population models and individual ability point estimates. Below, we briefly introduce these properties as they relate to LSAS.

Skills are latent variables

The cognitive or socio-emotional skills that LSAS seek to assess are latent variables. That is, these skills cannot be directly observed but only inferred from individuals' responses to a set of test items. This is typically expressed in path diagrams such as the one shown in Fig. 1. Here, observed responses on items 1 through K (i.e., Yk) are used to estimate the latent skill, denoted θ, an idea first introduced by Spearman (1904) in his true score model.

Fig. 1 A latent measurement model in CTT notation. The latent ability θ is measured by k test items (manifest indicators Y1 … Yk). The different loadings λ1…λk and measurement error terms ε1 … εk reflect different degrees to which each item reflects the latent ability. For substantive analyses and/or for estimating PVs, θ can be predicted by background variables (X1…Xp). The latter model part is often referred to as the population model (e.g., Mislevy, 1991)

Consider the example of literacy skills (i.e., the ability to understand, evaluate, and utilize written text; e.g., OECD, 2016). There is no way to directly measure an individual's literacy skills as one would measure their height or body weight.
However, literacy can be made accessible to measurement if one conceives of literacy as a latent variable that manifests in individuals' ability to solve test items that were designed such that they require a certain level of literacy skills to be solved. Test takers' answers to test items are observed ("manifest") variables that reflect the unobserved ("latent") variable of interest, literacy skills.

Test items and test scores contain measurement error

Because a skill is a latent variable, any test designed to measure it will only imperfectly capture the individual's true ability θi. Individuals' responses to each test item will always reflect extraneous influences other than the skill that the test intends to measure, that is, they will contain measurement error εi. Possible sources of measurement error include, for example, random influences such as guessing, accidentally choosing the wrong answer despite knowing the correct one, or external disturbances during the testing session. Especially as test length increases, factors such as fatigue, loss of motivation, or practice effects may also tarnish item responses. Measurement error is indeed an inextricable property of test items and, hence, of test scores that, in their most basic form, are the sum across these test items (Lord & Novick, 2008). Consequently, any test score will only yield an estimate of that true ability, \( {\hat{\theta}}_i \), and that estimate will contain measurement error: \( {\hat{\theta}}_i={\theta}_i+{\varepsilon}_i. \) Hypothetically, one might get closer to the true ability by administering a very large number of test items, or by repeatedly testing every individual many times. Akin to improving the "signal-to-noise ratio," increasing test length or testing on multiple occasions can add information about θi while minimizing the influence of measurement error (i.e., can increase the reliability of the test). This applies only if the measurement errors of individual items εik are random and independent of each other and the additional items or measurement occasions are valid indicators of θi (for a brief exposition, see Niemi, Carmines, & McIver, 1986).Footnote 3 In real-world scenarios, however, resource constraints and concerns about respondent burden make it impossible to administer a large number of items, let alone administer them repeatedly.

Why should applied researchers care about measurement error? The answer is simple: If unaccounted for, measurement error can bias research findings. As has long been known (Fuller, 1987; Schofield, 2015; Spearman, 1904), classical measurement error (i.e., random error that is normally distributed and uncorrelated with the latent variable) acts like noise that blurs the signal. When error-laden skill measures are used as predictors in a regression, measurement error can substantially decrease the association between the skill and an outcome compared to its true size, a bias known as "attenuation bias" or "regression dilution" (Lord & Novick, 2008; Skrondal & Laake, 2001). A more specific variation of this problem occurs when researchers seek to control for confounders in a regression or to establish the incremental predictive validity of a skill over other predictors (or vice versa).
Here, measurement error in a skill and/or one of the covariates can lead to overly optimistic conclusions about incremental validity (i.e., type I error; Westfall & Yarkoni, 2016) and phantom effects (i.e., biased compositional effects) in multi-level models (Pokropek, 2015; Televantou et al., 2015). In longitudinal studies, measurement error can reduce rank-order consistencies (stabilities) of skills. Moreover, although random measurement error does not bias estimates of population means, it does bias variances and hence also standard errors (i.e., precision). In addition to classical measurement error, a variety of biases other than attenuation can occur when using test scores (i.e., ability point estimates \( {\hat{\theta}}_i \)). These biases can lead to both over- and underestimations of regression coefficients, variances, and related statistics (Hyslop & Imbens, 2001; Jacob & Rothstein, 2016; Nimon, Zientek, & Henson, 2012). For example, if measurement error in a skill is not classical but correlates with the measurement error or true score of an outcome the skill is meant to predict, assumptions about the independence of error and true scores are violated. This may not only attenuate but also inflate the regression coefficients describing the skill–outcome relationship (for in-depth discussions, see Fuller, 1987; Hyslop & Imbens, 2001; Nimon et al., 2012; Stefanski, 2000). Moreover, as we will see later, different methods of computing test scores entail different forms of biases that can have different, and often hard-to-predict, consequences (e.g., some "shrinkage" estimators pull individuals' ability estimates towards the population mean, making extreme scores less extreme).

Measurement models and individual ability estimates are not the same

The goal of LSAS is not to provide ability estimates for individuals. Instead, their goal is to provide estimates of population quantities such as means and variances of the skill distribution or associations between skills and their predictors and/or outcomes. Compared to tests that are meant to inform decisions about individual test takers (e.g., college admission or employee recruitment tests), the tests in LSAS comprise far fewer items. Additionally, LSAS often use complex booklet designs in which each individual works only on a small subset of items: Test items from a large item pool are assigned to blocks, which are arranged in test booklets. After answering a set of common questions from the background questionnaire, each respondent works only on a randomly assigned booklet, that is, only on some of the item blocks (e.g., Braun & von Davier, 2017). Such "planned missingness" or "incomplete block" designs reduce respondent burden and cut costs for the data-producing organization (see Graham, Taylor, Olchowski, & Cumsille, 2006, for a general introduction). Analyzing data from skill assessments, and especially from skill assessments employing complex test designs, requires specialized statistical models. These statistical models comprise a measurement model (or "latent variable model") linking individual responses to test items with the latent skill construct θi and oftentimes also a population model stipulating the distribution of the latent skill (mostly the normal distribution) and its relations to the background variables from the background questionnaire (see Mislevy, 1991, pp. 180–181).
Crucially, such models are primarily designed for estimating population parameters of tests (e.g., item difficulties or reliability) and the resulting skill distribution in the population—but not necessarily for providing individual ability estimates (i.e., test scores). To better understand this point, let us briefly—and with some omission and simplification—review the psychometric theories that underlie skill assessments in LSAS: classical test theory (CTT) and item response theory (IRT), also known as probabilistic test theory (Lord & Novick, 2008; see Steyer, 2015, for an overview). Both theories provide measurement models that express the relation between a latent variable (e.g., a skill) and its indicators (e.g., test items) in different but closely related ways (Glöckner-Rist & Hoijtink, 2003; Raykov & Marcoulides, 2016). Most modern cognitive skill assessments in which test items have a binary or categorial response format (e.g., correct–incorrect) are based on IRT models. IRT is arguably better able to handle incomplete block designs, population models with background variables, and computer adaptive testing (e.g., Bauer & von Davier, 2017). CTT continues to be widely used and has an important place in scale construction, especially (but not exclusively) in the assessment of socio-emotional skills using polytomous (rating scale) formats. Many LSAS employ both CTT and IRT, albeit in different stages of the analysis. Without going into detail, both theories assume that responses to test items reflect the person's true ability—but only imperfectly and often to different degrees. The fundamental equations of these theories map the latent true ability to the observed answers to test items while highlighting that these manifest items are only imperfect (i.e., unreliable) indicators of the latent quantity. As we saw in Fig. 1, in CTT, a person's response to an assessment item (or subtest) Yik is modeled as a function of their true ability θi (often scaled by a factor loading λk that indicates how strongly the item reflects the θi) and a measurement error εik that is orthogonal to true ability:Footnote 4 $$ {Y}_{ik}={\lambda}_k\cdotp {\theta}_i+{\varepsilon}_{ik} $$ Measurement error in CTT is defined as the difference between the observed response and the true ability, εik = Yik − θi. The main goal of CTT is to garner information about tests (not test takers). Of particular interest is a test's reliability in a sample, defined as the proportion of variance in the test that is due to variance in the true scores, Rel(Y) = Var(θ)/Var(Y). In IRT, each respondent's probability of answering an item correctly is modeled as a function of the latent ability θi, the difficulty of the item bk, and often scaled by an item discrimination parameter ak. For example, a model for a binary test item where wrong answers are coded 0 and correct answers coded 1 can be estimated as: $$ P\left({Y}_{ik}=1\right)=\exp \left({a}_k\cdotp \left({\theta}_i-{b}_k\right)\right)/\left[1+\exp \left({a}_k\cdotp \left({\theta}_i-{b}_k\right)\right)\right] $$ Unlike in CTT, measurement error does not appear as a parameter in the IRT equation. Instead, it is implicit in the probabilistic (i.e., non-deterministic) relationship between the latent ability variable and its manifest indicators. Note that in IRT the term "measurement error" is often used to denote the standard error \( SE\left({\hat{\theta}}_i\right) \)of a respondent's test score, or more generally of the ability point estimate. 
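To make the two measurement models more tangible, the following short Python sketch simulates item responses under the CTT model of Eq. 1 and the 2PL IRT model of Eq. 2. It is purely illustrative and not part of the cited assessment literature; the sample size, loadings, discriminations, and difficulties are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_persons, n_items = 10_000, 6

# Latent ability theta_i; a standard normal population distribution is assumed
theta = rng.normal(0.0, 1.0, size=n_persons)

# CTT (Eq. 1): continuous indicators Y_ik = lambda_k * theta_i + eps_ik
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])   # illustrative lambda_k
error_sd = np.sqrt(1.0 - loadings**2)                    # scales each Y_k to unit variance
Y = theta[:, None] * loadings + rng.normal(0.0, error_sd, size=(n_persons, n_items))

# 2PL IRT (Eq. 2): P(Y_ik = 1) = exp(a_k * (theta_i - b_k)) / [1 + exp(a_k * (theta_i - b_k))]
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])             # item discriminations a_k
b = np.array([-1.0, -0.5, 0.0, 0.3, 0.8, 1.5])           # item difficulties b_k
p_correct = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = rng.binomial(1, p_correct)                    # observed correct/incorrect answers

# Easier items (low b_k) are solved more often than harder ones
print(responses.mean(axis=0).round(2))
```

Fitting actual CTT or IRT models to such data is done with specialized software; the snippet only shows how latent ability and item parameters jointly generate the observed, error-laden responses.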
It may surprise readers to learn that CTT models can operate on an input matrix that solely contains the sample variances and covariances but not individual responses to test items. Many IRT models do operate on individual responses, yet the estimation of item parameters (the ak and bk) occurs prior to, and independent of, any estimation of person parameters (i.e., ability estimates).Footnote 5 Computing test scores thus involves a transition from a CTT or IRT measurement model for the population to a single point estimate \( {\hat{\theta}}_i \) for an individual's unknown true ability θi. This step, called "ability estimation," is critical: Only a latent measurement model of a skill, but not a prediction of an individual's test score, is able to separate true ability from measurement error (e.g., McDonald, 2011). Moreover, some test scores such as unit-weighted sum scores ignore possible differences in item difficulties and discriminations in the measurement model (McNeish & Wolf, 2020; von Davier, Gonzalez, & Mislevy, 2009). Fortunately, as we will see, some approaches circumvent computing individual point estimates and the biases that can result.

Overview of the three main methods for using skill measures from LSAS

We now review the three principal methods that secondary analysts can use to incorporate skill measures from LSAS in their analysis: as test scores, through structural equation modeling (SEM), and in the form of plausible values (PVs). Table 1 provides an overview of the three methods.

Table 1 Evaluation of the three main methods of using skill measures from LSAS

After a general description of each method, we then evaluate it based on three criteria:Footnote 6 (1) fallibility, (2) usability, and (3) immutability. Fallibility describes whether the method accounts for measurement error; that is, whether it separates true ability from measurement error. Associations between fallible measures and predictors or outcomes of interest are subject to attenuation bias (i.e., lower than they truly are) and other forms of bias. Fallibility is the most important touchstone for comparing the methods, as it is most important for the unbiasedness of research results. Usability denotes the ease of use for secondary analysts in terms of the required statistical and data-analytical expertise. Usability is an important consideration because even methods that are less biased but too complex to implement tend to be less popular with secondary analysts and are generally prone to being used erroneously. Finally, immutability indicates whether (a) the individual ability point estimates (test scores), (b) PVs, or (c) parameters (e.g., the loadings λk) of a latent measurement model in SEM remain the same (i.e., unchanged) across different analysis setups (i.e., variables included in the analysis, subsamples used, statistical models and estimators employed) or not. If test scores, PVs, or parameters of the latent measurement models are not immutable, this may also lead to different statistical inferences about a parameter of interest (e.g., the relation of a skill to a predictor or outcome) and ultimately to different substantive conclusions. It is, of course, highly undesirable if different analysts arrive at different conclusions and policy implications merely because of variations in how they analyze the same data. Immutability is, thus, closely related to the replicability of research findings, for which it is an important prerequisite.
Regarding these criteria, it is important to distinguish between two scenarios. In the first scenario, secondary analysts re-use test scores (or PVs) that were computed by the data-producing agency and included in the data dissemination. This is the most common scenario and limits what can be done to remedy the issues with test scores that we will raise. In the second scenario, secondary analysts estimate their own custom set of test scores, SEM, or PVs. This increases flexibility on the side of analysts but requires specialized psychometric expertise. Even more fundamentally, it requires access to the item-level data (i.e., the data need to include variables that store information about test-takers' responses to individual test items), which is not always the case with LSAS. For simplicity, we assume that the IRT or CTT measurement model for the skill in question is correctly specified. Further, we assume that there is no differential item functioning (DIF) or measurement non-invariance across subpopulations (i.e., the test functions alike in different subgroups). We also do not consider complications introduced by missing data that stems from respondents not reaching or refusing to answer some test items. Finally, we will not deal with issues such as scaling, scale anchoring, linking, or test score interpretation. These issues are far from trivial but are beyond the scope of our present paper. Fortunately, in modern LSAS, most of these issues are taken care of by the test developers and data producers at earlier stages. Thus, secondary analysts need not be overly concerned with them, although it is good practice to critically examine whether assumptions such as measurement invariance/differential item functioning have been tested and are met. We will refer the reader to specialized treatments of these issues in the following.

Test scores

Definition and description

As explained in the previous section, test scores are point estimates of an individual's ability \( {\hat{\theta}}_i \). They are the scores that would be reported back to test takers, for example in an admission or placement test, and used for diagnostic decisions. There are many different types of test scores that range from simple sum scores to more complex Bayesian techniques. All these techniques share the aim of maximizing validity by producing test scores that are as highly correlated with the underlying true ability as possible. Some of them are well suited for the purpose of individual diagnostics—however, all types of test scores share some fundamental limitations that make them less-than-optimal choices for secondary analysts of LSAS who are interested in population quantities (e.g., means or variances of skills, group differences in skills, or relations of the skill to instructional quality or other predictors). Below we briefly describe the most widely used types of test scores (all of which will typically correlate highly for a given assessment, but not all of which can be computed when complex assessment designs such as the aforementioned incomplete block designs are used).

Sum scores

Sum scores are the simplest type of test score. They are what the term "test scores" traditionally referred to (Lord, 1980). Their abiding popularity stems from the fact that they are easy to compute and interpret. However, as we will see, this simplicity can be deceptive as it masks the shortcomings of sum scores. These shortcomings explain why sum scores are no longer widely used in LSAS.
Beauducel and Leue (2013), McNeish and Wolf (2020), and von Davier (2010) provide excellent discussions of the limitations of sum scores. Assuming we have three indicators (i.e., Yik with k = 1, 2, 3) to measure a skill, the sum score for person i is computed as

$$ \mathrm{Unweighted}\ \mathrm{Sum}\ {\mathrm{Score}}_i={Y}_{i1}+{Y}_{i2}+{Y}_{i3} $$

Instead of the sum, one can also take the mean across items. Most commonly, sum or mean scores are unweighted (or unit-weighted) such that all items contribute equally to the resulting scale score. This is only valid if all test items reflect the target skill in equal measure and with the same amount of measurement error. These rather restrictive assumptions are foundational to the model of "parallel tests" in CTT (see Steyer, 2015, for an introduction) and the one-parameter logistic model (1PL or "Rasch" model) in IRT (Andersen, 1977). In both models, all test items have the same factor loadings and error variance or item discriminations, respectively. This assumption does not always hold in skill assessments, such that congeneric CTT models or (at least) two-parameter logistic IRT models (2PL or "Birnbaum"; Andersen, 1977; Birnbaum, 2008) are needed. According to these models, items can have different loadings or discriminations, which implies that they are not interchangeable and reflect the latent skill to varying degrees. In this case, unweighted sum scores are inappropriate because unit-weights do not align with the measurement model (Beauducel & Leue, 2013; McNeish & Wolf, 2020). Researchers sometimes hope to remedy the problems of sum scores by using weighted scores:

$$ \mathrm{Weighted}\ \mathrm{Sum}\ {\mathrm{Score}}_i={\lambda}_1\cdotp {Y}_{i1}+{\lambda}_2\cdotp {Y}_{i2}+{\lambda}_3\cdotp {Y}_{i3} $$

The weights are typically taken from the loadings (λk) or item discriminations of the CTT or IRT model or from another dimension reduction technique such as principal component analysis (Joliffe & Morgan, 1992). Sum scores are based exclusively on the available answers of respondents to test items. They cannot readily handle missing data (e.g., Mazza, Enders, & Ruehlman, 2015; see also Enders, 2010). This also implies that sum scores are ill-suited for complex test designs in LSAS (von Davier, 2010). These complex test designs involve planned missingness designs in which individuals answer different subsets of items. They also utilize information from the background questionnaire in order to improve the precision and efficiency with which \( {\hat{\theta}}_i \) can be estimated. Items from the background questionnaire can also serve as "screening items" that govern which subset of items a respondent receives in booklet designs or computerized adaptive testing (CAT) designs. The skill data resulting from such designs cannot be readily summarized by a simple sum score.

CTT factor scores

Factor scores are test scores based on factor-analytic CTT measurement models such as EFA and CFA. They are often used for computing test scores from socio-emotional (or "non-cognitive") skill assessments that use a rating scale format. Factor score estimation methods account for both the factor loadings (i.e., the λk in Eq. 1 and Fig. 1) and the residual error variance information contained in the measurement model.
There are several methods to compute factor scores, including the regression method, Bartlett's method, and the expected a posteriori (EAP) estimator method (Beauducel, 2005; Devlieger, Mayer, & Rosseel, 2016; Fava & Velicer, 1992; Grice & Harris, 1998). For unidimensional measurement models, these methods result in different, albeit highly correlated, factor scores that are equally viable (Beauducel, 2007). One issue, especially for multi-dimensional measurement models (e.g., models with more than one factor), is factor score indeterminacy (Grice, 2001). Factor score indeterminacy means that an infinite number of factor scores can be computed from the same factor solution and all will be equally consistent with the model that produced the factor loadings. The higher the factor score indeterminacy, the higher the differences in the factor scores from different estimation methods. Factor score indeterminacy is lower when there are a large number of items and the items have strong factor loadings. To obtain consistent estimates of regression coefficients and their standard errors, Skrondal and Laake (2001) recommended using the regression method to compute factor scores when the factor (e.g., a skill) is used as a predictor variable and Bartlett's method when it is used as an outcome variable in the subsequent regression analyses.

IRT ability estimates

In modern LSAS, test scores are often computed from IRT models such as the 2PL, 3PL, or partial credit model (PCM). The two most widely used ability estimates from IRT models are likelihood-based methods such as Warm's (1989) weighted likelihood estimate (WLE) and, once again, Bayesian methods such as the expected a posteriori (EAP) estimate. Whereas WLE depends only on the response pattern and the parameters of the measurement model, the EAP additionally depends on the prior distribution of θ. The EAP estimate is the mean of the posterior distribution of θ, which combines information about response patterns and model parameters with a prior distribution. Thus, unlike the WLE, EAP estimates can be computed with a prior distribution containing information from a background questionnaire (Laukaityte & Wiberg, 2017). Bayesian approaches such as EAP are inherently biased as they are shrinkage estimators, that is, the estimator pulls all test scores towards the mean of the prior distribution, thereby reducing their variance and making extreme scores less extreme. This bias is small when the prior distribution is appropriate and the reliability of the test is high (Tong & Kolen, 2010) but can be larger for tests comprising only a few items or when using incomplete block designs (e.g., Braun & von Davier, 2017). WLE and EAP estimates are widely used and tend to perform best among IRT-based ability estimates in terms of the standard error of the regression coefficients in subsequent analyses.

Fallibility

As point estimates of individual ability, test scores turn the logic of latent measurement models that we showed in Eq. 1 upside down by predicting the latent ability from the observed items, rather than vice versa: \( {\hat{\theta}}_i={\theta}_i+{\varepsilon}_i \) (e.g., McDonald, 2011). The fundamental problem that all types of test scores share is that the resulting point estimates contain measurement error, no matter how complex the model from which they were computed. It is easiest to see this problem from the equations of the sum score (Eqs. 3 and 4). As per Eq.
As per Eq. 1, the latent measurement model decomposes the answer to each item Yik into the (unobserved) true ability θi and a measurement error εik. In stark contrast, in the sum score equation, the items jointly determine the overall skill score. Measurement error in the items is not separated out from true ability but transferred to the sum score. Thus, building on CTT's logic, we can rewrite Eq. 4 as: $$ \mathrm{Weighted}\ \mathrm{Sum}\ {\mathrm{Score}}_i={\lambda}_1\cdotp \left({\theta}_i+{\varepsilon}_{i1}\right)+{\lambda}_2\cdotp \left({\theta}_i+{\varepsilon}_{i2}\right)+{\lambda}_3\cdotp \left({\theta}_i+{\varepsilon}_{i3}\right) $$ Because all individual test items Yik confound true ability and measurement error, the resulting sum score also contains measurement error. Only under the unrealistic assumption that no measurement error is present in the items would the sum score equal the (unobserved) ability θi.Footnote 7 Thus, sum scores are not infallible indicators of θi. Weighting the indicators as in the weighted sum score or principal component scores does not remedy this issue (Raykov, Marcoulides, & Li, 2017). Nor does using complex CTT or IRT models to compute test scores: Although the ability estimation process partly accounts for measurement error by considering the factor loadings or item discriminations in the measurement model, the resulting factor scores, WLEs, and EAPs are merely realizations of the random variable θi (Hardt, Hecht, Oud, & Voelkle, 2019; see also McDonald, 2011) and hence fallible point estimates that contain measurement error (Hoijtink & Boomsma, 1996). In other words, whereas latent CTT or IRT measurement models separate true ability from measurement error, all forms of test scores again compound them. Moreover, depending on the ability estimation method used, test scores can contain additional biases. For example, because EAP is a "shrinkage estimator," EAP scores underestimate the population variance of the skill (Lu, Thomas, & Zumbo, 2005; Wu, 2005). The farther an individual's score is from the posterior mean (i.e., the mean after incorporating the prior distribution, which often contains information from background variables), the more it is pulled towards the posterior mean. By contrast, WLE scores tend to have a slightly lower conditional bias (i.e., the bias in the expected mean given θ) but a higher standard deviation than EAP scores (Lu et al., 2005). Also, for both EAP and WLE, there are expected differences between individuals' ability estimates and their true ability scores, and these differences remain even in the case of large samples (Lu & Thomas, 2008). As a consequence of the measurement error (and potentially other biases) contained in test scores, covariance-based statistics (e.g., correlations or regression coefficients) involving test scores can be biased. When the test scores are used to predict an outcome, the bias is often (but not invariably) attenuation or "regression dilution," such that the true size of associations between the skill and its predictors or outcomes is underestimated (Lord & Novick, 2008). Both EAP and WLE scores lead to deflated regression coefficients, especially as test length decreases (Braun & von Davier, 2017; Lu et al., 2005). The standard errors of regression coefficients are also biased since the variance estimate of the skill is biased.
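To make this attenuation concrete, the following minimal simulation sketch (Python with NumPy; all loadings, error variances, and the true regression coefficient are made-up numbers, not values from any LSAS) contrasts a regression on the true ability with a regression on a weighted sum score built from error-contaminated items, in the spirit of the rewritten Eq. 4:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                           # large sample, so sampling noise is negligible

theta = rng.normal(0, 1, n)           # true (unobserved) ability, standardized
lam = np.array([0.8, 0.7, 0.6])       # hypothetical loadings
err_sd = np.array([0.8, 0.9, 1.0])    # hypothetical residual standard deviations

# three error-contaminated items: y_ik = lambda_k * theta_i + eps_ik
items = lam * theta[:, None] + rng.normal(0, err_sd, (n, 3))

# weighted sum score as in Eq. 4, using the loadings as weights
score = items @ lam

# an outcome that depends on the true ability with coefficient 0.5
outcome = 0.5 * theta + rng.normal(0, 1, n)

def std_slope(x, y):
    # OLS slope of y on x after standardizing x, so the two slopes are comparable
    return np.polyfit((x - x.mean()) / x.std(), y, 1)[0]

print(std_slope(theta, outcome))   # about 0.50, the value used to generate the outcome
print(std_slope(score, outcome))   # noticeably smaller: attenuation due to measurement error
```

In this toy setup the attenuation is moderate; with shorter or less reliable tests the deflation becomes correspondingly larger, which is the pattern the simulation studies discussed below report.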
Of note, unlike variances and standard deviations, estimates of the skill's mean or of mean differences across groups remain unbiased when using CTT factor scores. This is because CTT assumes that random measurement error has a mean of zero, which implies that the error cancels out as one aggregates across a large number of items and individuals (Lord & Novick, 2008). Likewise, IRT ability estimates (EAP and WLE) both provide unbiased estimates of population means (Wu, 2005). Thus, using test scores in secondary analyses can lead to biased estimates of population variances, of regression coefficients when the skill is a predictor (independent variable), and of standard errors, and hence potentially to erroneous conclusions in secondary analysis. It appears that the crucial difference between the latent CTT/IRT measurement models and the point estimates \( {\hat{\theta}}_i \) computed from these models is not always clear to secondary analysts. The assumption that test scores derived from latent measurement models are somehow purified from measurement error is erroneous. Whether using factor scores, WLEs, or EAPs, no model-based ability estimate can remove the measurement error from \( {\hat{\theta}}_i \), and different methods can introduce different forms of additional bias. Test scores are easy to understand conceptually, easy to compute, and easy to incorporate in secondary analysis. They can be treated much like any other variable (e.g., gender). Thus, the usability of test scores in the most common scenario, that is, secondary analysts working with pre-computed test scores provided by the data-producing organization, is generally high. Computing test scores is also straightforward in the case of sum scores (although, as noted, this no longer applies to complex test designs in modern LSAS that contain missing data by design). Computing factor scores and IRT ability estimates such as WLEs and EAPs is somewhat more involved. However, provided basic familiarity with CTT or IRT, estimating measurement models and computing ability estimates from them is accessible through modern statistical software.

Immutability

In the scenario in which secondary analysts re-use test scores provided by data-producing organizations, test scores fulfill the immutability criterion: Test scores do not change depending on the covariates or subsamples used in the substantive analyses. They also do not depend on the analyst or analysis setup. Conversely, when secondary analysts compute their own set of test scores from the original item-level data, these test scores are no longer immutable. This is because CTT factor scores and IRT ability estimates depend on the underlying measurement model, estimator, and the subset of respondents included in the estimation (Grice & Harris, 1998; Wainer & Thissen, 1987), and also on the population model if a population model is used (see Mislevy, 1991). Thus, substantive conclusions regarding the same research question using the same LSAS might differ between an analyst using test scores computed by the data-producing organization and another analyst using their own custom set of test scores. Provided access to the item-level data (i.e., variables that store individuals' answers to the single test items), structural equation modeling (SEM; Jöreskog, 1970; Jöreskog & Sörbom, 1979) offers a solution for measurement error in skill measures. Instead of computing fallible point estimates of ability from a measurement model, SEM combines the measurement model with a structural model.
The measurement model (i.e., the relations of θi to the Yik in Fig. 1) represents the skill in question as a latent variable that is free from measurement error. The structural model relates this error-free latent variable to predictors, outcomes, or covariates through regression or correlation paths (e.g., the paths from the Xik to θi in Fig. 1). Thanks to dramatic advances over the last two decades, SEM has become an increasingly flexible and general approach (e.g., Bollen & Noble, 2011; Li, 2016). Routines for estimating SEM in modern statistical software can handle both continuous and categorical observed and latent variables, missing data, complex sampling designs, multiple groups, mediation and moderation, and many more scenarios relevant to LSAS. The measurement model part can either be based on a CTT or an IRT framework.Footnote 8 CTT and IRT measurement models are closely related (Glöckner-Rist & Hoijtink, 2003; Raykov & Marcoulides, 2016); both are usually estimated with maximum likelihood estimation (ML) or robust ML (MLR). Item factor analysis (IFA) using a weighted least squares estimator such as DWLS or WLSMV (designed to handle binary or ordered-categorical test items) can be seen as intermediate approach that bridges CTT and IRT (Glöckner-Rist & Hoijtink, 2003; Wirth & Edwards, 2007). For rating scales with five or more response options and data that are approximately normally distributed, different estimators lead to highly similar results (Rhemtulla, Brosseau-Liard, & Savalei, 2012). Hybrid approaches combine IRT with SEM (e.g., Lu et al., 2005). For instance, the mixed effects structural equations model (MESE; Junker, Schofield, & Taylor, 2012; see also Richardson & Gilks, 1993), extends the covariance-based general SEM framework for psychometric data (Bollen, 1989; Skrondal & Rabe-Hesketh, 2004). Here, the latent variable is defined via an IRT measurement model before the structural paths are added (Schofield, 2015). By using Bayesian priors, the MESE model allows users to condition the latent variable on covariates in the structural model to reflect extraneous influences on the latent ability. Separating true ability from measurement error is the key motivation behind SEM and constitutes its main advantage over using point estimates of ability \( {\hat{\theta}}_i \) such as sum scores in subsequent analyses. By simultaneously modeling the latent measurement model and the structural model, measurement error in the skill is separated from true ability (Jöreskog, 1969). Because SEM relates only the reliable portion of variance in the skill to other variables, relationships between the skill and predictors, outcomes, or correlates are unattenuated because they are purged from measurement error in the skill (although measurement error in the covariates, if unaccounted for, may still lead to attenuation bias). Thus, SEM results in unbiased relationships between (latent) skills and other variables, avoiding biased results that would arise from using sum scores or model-based estimates (e.g., Fuller, 1987, 1995; Grice, 2001). SEM also largely avoids the additional biases on the population variances and standard errors that using WLE or EAP test scores can entail. 
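As a concrete illustration of how a measurement part and a structural part are specified jointly, here is a minimal sketch using the Python package semopy (one of several SEM implementations; the package choice, the file name, and the variable names y1, y2, y3, and x are assumptions made for this example, not part of any particular LSAS):

```python
import pandas as pd
import semopy

# lavaan-style syntax: "skill =~ y1 + y2 + y3" is the measurement model
# (a congeneric one-factor model with freely estimated loadings);
# "skill ~ x" is the structural model relating the error-free latent skill to a predictor.
desc = """
skill =~ y1 + y2 + y3
skill ~ x
"""

data = pd.read_csv("items_and_covariates.csv")   # hypothetical item-level data set

model = semopy.Model(desc)
model.fit(data)            # maximum likelihood estimation by default
print(model.inspect())     # loadings, the structural coefficient, and (residual) variances
```

Because the regression of the latent skill on x is estimated within the model rather than on a fallible point estimate, the structural coefficient is not attenuated by measurement error in the items (measurement error in x itself, if unaccounted for, would still require separate treatment).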
For the advantages of SEM to transpire, it is important that the measurement model be correctly specified (Mislevy, 1991; Rhemtulla, van Bork, & Borsboom, 2020), although it appears that regression coefficient estimates of the skills in the structural part are relatively robust to mis-specification of the measurement models or the conditioning population model of the underlying skill (Schofield, 2015). However, there is a risk that misspecification in the structural part of the model can affect the measurement model; just like adding or deleting covariates can influence the parameters of the measurement model—an issue that we turn to in the immutability section. The disadvantage of SEMs compared to other methods of using skill measures from LSAS is their lower usability. Contrary to the other three methods reviewed here, secondary analysts must themselves implement SEM. This requires access to the item-level data (i.e., responses to individual test items must be included in the data). Each SEM is a unique analysis model tailored to a specific research question and cannot be disseminated with the LSAS data. Implementing SEM requires specialized statistical software and expertise that is not a routine part of the curriculum in all social and behavioral science disciplines. When IRT models (especially multi-dimensional IRT models) are used in the measurement part, only few software options are available and estimation can be computationally intensive (IFA with a DWLS or WLSMV estimator may provide a convenient solution here; Wirth & Edwards, 2007). Moreover, complex test designs (e.g., booklets or computer adaptive testing) may further complicate matters. For example, booklet designs or CAT can make it harder to specify and estimate a SEM due to the substantial amount of missing data. For these reasons, we expect that there will be only few occasions in which applied researchers employ SEM in secondary analysis of data from LSAS, despite the versatility of SEM and despite its advantages over test scores in terms of fallibility. Although the flexibility of SEM is an asset, it comes at the cost of violating immutability. If each analyst implements "their own" SEM, the parameters in the measurement model (such as the loadings λk relating the latent skill to its indicators; see Fig. 1) can change depending on a range of factors. These factors include not only the variables included in the measurement model but also those in the structural model (Anderson & Gerbing, 1988). Likewise, the (sub-)sample, missing data handling technique, and estimator used can result in different measurement model parameters and consequently different structural paths. If the measurement model parameters (say, factor loadings) change solely due to the presence (or absence) of predictors or outcomes in the structural model, the meaning of the latent variables changes and interpretational confounding occurs (Burt, 1973).Footnote 9 A potential remedy for the mutability of measurement model parameters is using a two-step procedure in which the parameters of the measurement model are fixed after first estimating them in the absence of structural paths. The fixed parameters of the measurement model ensure that the latent variable is defined in a constant manner when structural paths are added in the second step (Anderson & Gerbing, 1988, 1992; see also, Bakk & Vermunt, 2016). In the context of IRT, this procedure is known as a "fixed IRT-SEM approach" (Lu et al., 2005, p. 271). 
Such a two-step procedure ensures that the latent measurement model is immutable across different researchers and specific research questions with specific sets of variables and paths.Footnote 10

Plausible values

Originally developed in the context of NAEP (Mislevy, 1991), PV methodology is tailored to the needs of LSAS. Its aim is to provide unbiased estimates of population statistics such as means and variances of skills. Accessible introductions include Bauer and von Davier (2017), Lüdtke and Robitzsch (2017; in German), von Davier et al. (2009), and Wu (2005). The basic idea of PVs is to treat ability estimation as a missing data problem and apply multiple imputation methodology (Little & Rubin, 2002; Rubin, 1987; for general introductions to multiple imputation, see Enders, 2010; Schafer & Graham, 2002; van Buuren, 2018). Instead of estimating a single test score \( {\hat{\theta}}_i \) per respondent, multiple imputations of their unobserved true ability θi are generated. These imputations are called PVs because they are educated guesses, based on a statistical model, of what a respondent's true ability might reasonably be. PVs are a special case of multiple imputations as the latent ability is completely missing. The observed information needed to impute the latent variable comprises its indicators (i.e., the test items Yk in Fig. 1), the parameters λk of the measurement model linking these indicators to θ, and a population model containing a set of background characteristics such as gender, parental socio-economic status, or motivation variables (i.e., the Xk, also called "conditioning variables"). The latent measurement model from which PVs are computed can be any type of IRT or CTT model. Typically, five PVs per respondent are estimated and disseminated with the data, although more (e.g., 10–20) PVs may be preferable to obtain more precise estimates of standard errors (e.g., Laukaityte & Wiberg, 2017). The variation across PVs reflects the uncertainty about the respondent's true ability. It is important to realize what PVs are not: They are not "test scores" in the traditional sense of Lord (1980); they are not point estimates of an individual's skills like CTT factor scores or IRT ability estimates. Also, PVs should not be confused with the true latent ability as conceived in CTT and IRT. Instead, PVs are intermediate quantities needed for the unbiased estimation of population quantities such as variances or regression coefficients (Bauer & von Davier, 2017). More technically, PVs are repeated random draws from a posterior distribution that represents an individual's ability and the uncertainty about its true value. The posterior distribution p(θi | Xi, Yi) is "conditional" because it depends on the individual's responses to test items plus a (large) number of background variables contained in the latent regression model (Fig. 1). In formulaic notation, the posterior distribution from which PVs are drawn is $$ p\left({\theta}_i\mid {\boldsymbol{X}}_{\boldsymbol{i}},{\boldsymbol{Y}}_{\boldsymbol{i}}\right)\propto p\left({\boldsymbol{Y}}_{\boldsymbol{i}}\mid {\theta}_i\right)\cdotp p\left({\theta}_i\mid {\boldsymbol{X}}_{\boldsymbol{i}}\right) $$ Here, p(Yi | θi) is the item response model that describes how the vector of item responses Yi = (Yi1, .., Yik) of person i depends on the latent ability θi.
Moreover, p(θi| Xi) is the population model that describes how the latent skill θi depends on a vector of background variables Xi = (Xi1, .., Xip): $$ p\left({\theta}_i|\ {\boldsymbol{X}}_{\boldsymbol{i}}\right)\sim N\left({\beta}_0+{\boldsymbol{X}}_{\boldsymbol{i}}\ {\beta}_{\boldsymbol{p}};{\upsigma}_{\theta \mid {\boldsymbol{X}}_{\boldsymbol{i}}}^2\right) $$ The ability to incorporate background variables is a major advantage of PV methodology over traditional scoring methods such as sum scores or WLEs. Background variables often carry a great deal of information about an individual's likely standing on the skill scale, adding precision in estimating the PVs. Using this information for generating PVs allows LSAS to employ complex test designs (such as the booklet or "planned missingness" designs described earlier) that comprise far fewer test items than traditional designs (von Davier et al., 2009). It may be instructive to note that there is a straightforward relationship between EAP test scores and PVs: As both are computed/drawn from the same posterior distribution, the EAP is the expected value across all PVs. This relationship makes it very evident that PVs adequately account for the uncertainty about the respondent's true ability whereas the EAP—as a single point estimate—does not. From a practical perspective, incorporating PVs in secondary analysis involves the typical procedures for analyzing multiply imputed data: Each analysis (e.g., a regression model) is run once for each of the PVs. Parameter estimates are then pooled using "Rubin's rules" for means, regression coefficients, standard errors, and other quantities (Rubin, 1987). Of particular importance are the rules for pooling standard errors, which add uncertainty about the true ability. The uncertainty about the true ability is reflected in the variation across the different PVs per respondent and transferred to the variances and standard errors of parameter estimates. Therefore, the rules are necessary to obtain correct standard errors and p-values. If used correctly—more on that later—PVs produce at least approximately unbiased estimates of population parameters such as means, variances, regression coefficients, and standard errors. Although it may not be immediately apparent, associations of the target ability with external variables such as predictors or outcomes of skills are corrected for random measurement error in the ability. Other biases incurred by computing test scores are also avoided. Estimates of population quantities based on PVs are (often much) closer to the true population value than those obtained with test scores (e.g., Braun & von Davier, 2017; Carstens & Hastedt, 2010; Laukaityte & Wiberg, 2017; von Davier et al., 2009; Wu, 2005). As with SEM, the advantages of PVs only fully apply if the measurement model and population model used for generating the PV are correctly specified. Another precondition for unbiasedness that is not always met in LSAS is that if the data have a multilevel structure (e.g., students nested in schools), the PV-generating model must adequately reflect this structure (Laukaityte & Wiberg, 2017). Moreover, the PV-generating model must be at least as general as the analysis model. This precondition is known as the "congeniality" of the imputation model and analysis model (Enders, 2010; van Buuren, 2018). 
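Before turning to the practical requirements of the conditioning model, the following minimal sketch illustrates the logic of the two equations above in the simplest possible case: the population model is a linear regression of θ on a single background variable, and the item response part is collapsed into one noisy observed score per person. All numbers are made up, and this normal-normal toy setup is a didactic simplification, not the operational procedure of any LSAS.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m_pv = 5_000, 5                  # persons and number of PVs per person

# hypothetical background variable and population model: theta | x ~ N(b0 + b1*x, s2_pop)
x = rng.normal(0, 1, n)
b0, b1, s2_pop = 0.0, 0.4, 0.84
theta = b0 + b1 * x + rng.normal(0, np.sqrt(s2_pop), n)   # true (unobserved) ability

# stand-in for the item response part: a single noisy score per person
s2_meas = 0.5
y = theta + rng.normal(0, np.sqrt(s2_meas), n)

# posterior of theta given y and x (normal-normal conjugacy)
post_var = 1 / (1 / s2_meas + 1 / s2_pop)
post_mean = post_var * (y / s2_meas + (b0 + b1 * x) / s2_pop)

# plausible values: m_pv random draws per person from this posterior
pvs = post_mean[:, None] + rng.normal(0, np.sqrt(post_var), (n, m_pv))

print(pvs.shape)          # (5000, 5): five plausible values per respondent
print(pvs[:3].round(2))   # the first three respondents' PVs scatter around their posterior means
```

The spread of the five draws per person is exactly the uncertainty about the respondent's true ability mentioned above; a richer conditioning model (more background variables) would narrow the posterior and hence that spread.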
Lest bias occur, the background model used for generating the PVs must include all variables (in transformed or untransformed form) that an analysis carried out by a secondary analyst includes (Bauer & von Davier, 2017). This pertains also to interactions between variables or higher-order (e.g., quadradic) terms. In practice, the conditioning model generating the PVs in LSAS typically includes a very large number of variables (indeed, all available) from the background questionnaire.Footnote 11 In this manner, it is possible to ensure that most conceivable analysis models will be (at least almost) congenial with the PV-generating model and to keep possible bias that stems from the omission of variables in the conditioning model to a minimum (von Davier et al., 2009). Thus, if PVs were generated based on a comprehensive conditioning model, secondary analysts need not be overly concerned with congeniality. Even with a comprehensive conditioning model, congeniality may be violated, however, if researchers introduce variables in the analysis model that were not part of the conditioning model or not even assessed in the background questionnaire. The latter can occur, for example, when secondary analysts match the data from LSAS with administrative records or geo-referenced data (e.g., regional unemployment rates). It is possible for secondary data users to produce their own set of PVs, provided that they have access to the individual test items and possess the necessary analytical skills. Statistical software such as Mplus (Asparouhov & Muthén, 2010) or the R package TAM (Robitzsch, Kiefer, & Wu, 2020) have made PV estimation more accessible recently. Some LSAS such as the German NEPS (Scharl, Carstensen, & Gnambs, 2020) now provide dedicated tools for generating PVs based on a pre-specified measurement model and a custom population model in which secondary analysts can include the analysis variables required to achieve congeniality. In most cases, PVs are provided by data-producing organizations. Although the lion's share of work is thus on the side of these organizations, it is fair to say that PV methodology does complicate matters for secondary analysts somewhat: Using PVs requires at least a basic understanding of multiple imputation methodology. Secondary analysts must run each analysis separately for each set of PVs and pool results using Rubin's rules. It is important that secondary data analysts use PVs correctly, lest they lose the advantages of PV methodology for correct statistical inference. Two incorrect usages of PVs continue to be widespread (Jerrim et al., 2017; Laukaityte & Wiberg, 2017; Marchant, 2015; von Davier et al., 2009). The first incorrect usage is to use only one PV as if it were a point estimate, that is, a test score. Although this is the lesser sin and can produce unbiased estimates of population quantities (Wu, 2005), the uncertainty about each person's skill is lost and the variability information contained in the other PVs is neglected. The second mistake is to simply average across all PVs and use this average in subsequent analyses. Although the average across PVs produces a correct estimate of the ability's population mean, variances and standard errors will be biased downward as the uncertainty about the person's true ability is lost. Ignoring the uncertainty about the person's true ability may (but need not always; Marchant, 2015) lead to faulty inferences. 
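Continuing the toy example above, the sketch below shows the correct workflow: run the analysis once per PV and then pool with Rubin's rules. The substantive model here is a regression of the skill on the background variable x, which is part of the conditioning model, so congeniality holds; statsmodels is used only as a convenient OLS routine, and all numbers are again hypothetical.

```python
import numpy as np
import statsmodels.api as sm

X = sm.add_constant(x)                      # x, pvs, and m_pv come from the sketch above

est, var = [], []
for j in range(m_pv):                       # one analysis per plausible value
    fit = sm.OLS(pvs[:, j], X).fit()        # here the skill is the outcome variable
    est.append(fit.params[1])               # coefficient of x
    var.append(fit.bse[1] ** 2)             # its squared standard error

est, var = np.array(est), np.array(var)

# Rubin's rules: pooled estimate, within- and between-imputation variance
pooled = est.mean()
within = var.mean()
between = est.var(ddof=1)
total_se = np.sqrt(within + (1 + 1 / m_pv) * between)

print(round(pooled, 2))       # close to 0.4, the coefficient used in the toy population model
print(round(total_se, 4))     # pooled standard error, reflecting sampling and between-PV uncertainty
```

The between-imputation term is what would be lost if only one PV were used or if the PVs were averaged before the analysis.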
Fortunately, modern statistical software and modules make working with PVs less burdensome for secondary analysts. As working with PVs is the same as working with multiple imputations, programs such as Mplus, Stata, SPSS, or R all contain functions or packages that automate the process of working with PVs. For example, the Stata module REPEST (Avvisati & Keslair, 2020) facilitates analyses using PVs (and survey weights) from the PISA and PIAAC studies carried out by the OECD. Thus, using PVs has become straightforward at least for most standard analyses (e.g., multiple regression), and there is little reason to shy away from using PVs as a secondary analyst for usability reasons. As noted earlier, typically the data-producing organization generates a set of PVs intended to serve as broad a range of research questions as possible. In that case, immutability is assured because all secondary analysts will use the exact same set of PVs for their analyses. If, by contrast, secondary analysts estimate their own PV (e.g., because they want to include additional conditioning variables to ensure congeniality), these PVs will differ depending on the specific type of measurement model chosen, the set of background variables included in the conditioning model, the subsample, and the imputation approach.Footnote 12 As a consequence, substantive results (e.g., relationship between the skill and some outcomes) obtained with different sets of PVs may also differ. Although user-generated PVs may help in achieving congeniality between the PV-generating model and analysis model, they thus violate our immutability criterion. To prevent large discrepancies across analysts and analyses, LSAS could define a set of standard conditioning variables that secondary analysts should include in their PV-generating model. Secondary analysts working with data from LSAS often use test scores in much the same way as they use other analysis variables (e.g., gender, educational attainment). However, as our review highlighted, there are several problems with test scores that are not yet widely recognized by secondary analysts (Braun & von Davier, 2017; Jacob & Rothstein, 2016; Jerrim et al., 2017; von Davier et al., 2009): As point estimates of ability, test scores are not fully adequate for the task of statistical inference in LSAS. Test scores do not control for measurement error in the skill, leading to various biases in regression coefficients, standard errors, and other population statistics. Some types of test scores are also unable to handle modern LSAS's complex test designs and to incorporate information from background variables. At this point, the reader may wonder how large and relevant the bias incurred by using test scores actually is. Simulation studies are best suited to answer this question. In simulation studies, data are simulated such that—unlike in real data—the true ability per simulated respondent is known, which allows quantifying various forms of bias. In one such study, Braun and von Davier (2017) studied the extent of attenuation bias that can occur in regression models in which a skill is an independent variable (i.e., predictor). The regression coefficient estimates based on five PVs were highly similar to the true population value, that is, unbiased. 
On the other hand, regression coefficients based on IRT ability estimates—EAP, WLE, and the simple maximum likelihood estimate (MLE)—were severely attenuated, with estimates 20–46% lower than the true population value of the regression coefficient (for details, see Table S1 in the Supplementary Online Material [SOM]). Moreover, EAP, WLE, and MLE (but less so PVs) produced overestimated regression coefficients for a covariate. These results were observed both when the PV-generating model and the regression model used for the substantive analysis were congenial and when they were not. Another simulation study by Wu (2005) showed that the population mean was correctly estimated not only by PVs but also by IRT ability estimates (WLE, EAP, and MLE) (see Table S2 in SOM). However, only PVs provided nearly correct estimates of the population variance, whereas IRT ability estimates were biased for 20-item tests and even more so for three-item tests. Similarly, von Davier et al. (2009) showed that the population means were predicted fairly accurately regardless of the number of items on the test and the scoring method used (EAP, EAP adjusted for group membership [EAP-MG], Warm's correction for MLE [WML], or five PVs; Table S3 in SOM). However, this was not the case for the estimated population standard deviations, which only PVs were able to recover accurately. All other methods were biased, and, akin to the Wu (2005) study, bias increased as the number of test items decreased. In sum, these simulation studies highlight that population means of skills are unbiased when using test scores. However, the skills' variances, standard errors, and regression coefficients when using the skill as an independent variable will all be biased when using test scores, which may lead to erroneous statistical inferences (Braun & von Davier, 2017; Lu et al., 2005; Schofield, 2015; Wu, 2005). PVs perform well in all scenarios.

Practical recommendations

Based on our review, our recommendations for secondary analysts are clear: Whenever possible, secondary analysts should avoid using test scores in favor of methods that adequately account for measurement error in the target skill and preserve the uncertainty about the skill's true value per individual. In this regard, PV methodology is currently the best choice tailored to the needs of LSAS. If used correctly, PVs can prevent the various forms of bias in variances, regression coefficients, and their standard errors, as well as other population statistics that using test scores can entail. Moreover, using PVs can help avoid overly optimistic conclusions (i.e., type I error) in questions involving incremental predictive validity of some variable over a skill or vice versa (e.g., Braun & von Davier, 2017; see also Westfall & Yarkoni, 2016). The best option for secondary analysts in terms of fallibility, immutability, and usability is to use PVs provided by the data-producing organization and included in the data dissemination. If these PVs are based on an extensive background model, congeniality is typically a minor concern. If PVs are provided, researchers should follow the correct methodology (i.e., run the analyses on each PV and pool results following Rubin's rules) and refrain from averaging PVs or using only one PV.
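Reusing the toy PVs from the earlier sketch, a few lines make the variance part of this pattern concrete (again with made-up numbers): averaging the PVs per person understates the skill's population variance, whereas using the PVs as intended recovers it.

```python
import numpy as np   # theta and pvs come from the earlier toy sketch

print(np.var(theta))               # true ability variance (about 1.0 in the toy setup)
print(np.var(pvs.mean(axis=1)))    # averaging PVs per person: variance is clearly too small
print(np.var(pvs, axis=0).mean())  # variance computed per PV set, then pooled: about 1.0
print(np.var(pvs[:, 0]))           # even a single PV recovers the variance, but discards uncertainty
```

The averaged PVs behave like EAP scores and inherit their shrinkage, which is exactly the mistake the preceding recommendation warns against.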
In the increasingly rare cases in which only test scores (e.g., WLE or EAP scores) but no PVs are included in the data, secondary analysts should be wary of—and discuss transparently—the potential biases that can ensue from using test scores. Alternatively, provided that item-level information is available, researchers with advanced psychometric knowledge might decide to use SEM or estimate a set of PVs by themselves—a process that, for example, NEPS now enables with a tool for PV generation (Scharl et al., 2020). This leads us to our recommendations for data-producing organizations responsible for LSAS. In our view, data-producing organizations should provide a set of PVs for each skill measured in an LSAS, based on an extensive background model. The measurement model and population (background) model on which these PVs and/or test scores are based should be made transparent, and computer code should be provided such that secondary analysts can reproduce these PVs as well as modify the model as needed. For this purpose, the data should include item-level information (i.e., variables that capture responses to individual test items) needed for re-estimating the models on which PVs and/or test scores were based. Following these recommendations will widen the range of options available to secondary analysts, enabling them, for example, to estimate their own PVs and/or SEM, as opposed to having to rely on test scores or PVs from a potentially non-congenial background model. It will also contribute to greater research transparency (see also Jerrim et al., 2017). There are good reasons for secondary analysts to gradually move away from using test scores—or at least to be mindful of the shortcomings of deceptively simple test scores in the context of LSAS, where the interest is in population quantities. As our review has shown, secondary analysts have two main options to avoid the potential biases that result from using test scores: (1) directly modeling measurement error in a SEM framework; or (2) incorporating measurement error in the analysis model through PV methodology. When using SEM, the modelled skills should draw on measurement models defined by the responsible data-producing organization (and be accompanied by some recommended model parameters to foster immutability). Using PVs that are already included in the data (and that are ideally based on an extensive background model that ensures congeniality) seems to us the most sensible option under the criteria of fallibility, usability, and immutability. In line with previous authors (e.g., Braun & von Davier, 2017; Lüdtke & Robitzsch, 2017; von Davier et al., 2009; Wu, 2005), we therefore recommend that secondary analysts—as well as organizations responsible for LSAS—fully embrace PV methodology. Although some time and effort are necessary to understand the basics of PVs, we believe the effort is worthwhile, as these methods will enable analysts to produce more rigorous and reliable research findings from LSAS to inform policy and practice. We hope that our primer provided a good starting point.

Notes

1. In this paper, we use the terms "ability," "skills," and "proficiency" interchangeably. Although there are some differences across (sub-)disciplines in the way these terms are used, these subtle differences are of no import for our present intent.

2. Psychometrics is the field of study concerned with psychological measurement.
3. The relation between test length and test reliability is expressed in the so-called Spearman–Brown formula, sometimes called "prophecy formula." This formula allows one to predict how the reliability Rel of a test will change when extending the length l of the test (l = number of items) by a factor of k. It stipulates that Rel(k × l) = k × Rel / [1 + (k − 1) × Rel]. For example, doubling (k = 2) a test with Rel = .60 yields 2 × .60 / (1 + .60) = .75.

4. CTT notation traditionally uses τi instead of θi and uses the term "true score" instead of "true ability." Also note that the true score refers to the true score of a single test item or item parcel (indexed k, hence τik) and is not necessarily identical to the latent ability θi (this is only true for the model of parallel tests; e.g., Steyer, 2015). We use θi instead of τi for simplicity and consistency across IRT and CTT.

5. Marginalized maximum likelihood (MML) estimation for graded response models and joint maximum likelihood (JML) estimation for partial credit models can integrate both the person and item parameter distribution in a single step (Natesan et al., 2016; Bertoli-Barsotti, 2005).

6. These criteria correspond only loosely to statistical concepts. We use them as umbrella terms to summarize important information about the four methods.

7. Iff the test items conform to a 1PL (Rasch) IRT model, then the sum score (at least) is a sufficient statistic for the latent ability θi.

8. Note that latent variables may be categorical too, making them suitable to model population heterogeneity and latent classes (e.g., Raykov, Marcoulides, & Chang, 2016).

9. Irini Moustaki's helpful comments on interpretational confounding at a methodological OECD workshop on secondary data analysis (Paris, 2019) are gratefully acknowledged here. The phenomenon of shifting measurement properties in one-step procedures has been recognized as a threat to models with categorical latent variables (latent class modeling) when predicting distal outcomes (Vermunt, 2010; Zhu, Steele, & Moustaki, 2017). Although it has been stated for continuous latent variable models too (Anderson & Gerbing, 1988, 1992), the far-reaching consequences for secondary data analysis have not been addressed (e.g., Hair, Black, Babin, Anderson, & Tatham, 2006).

10. We caution the reader about the lost variability of measurement model parameters once they are fixed. The uncertainty entailed in the measurement model is neglected when estimating structural paths. While this variability can be exploited to achieve higher parameter precision at large sample sizes, fixed IRT-SEM estimation still yields smaller finite sample bias than simultaneous IRT-SEM at smaller sample sizes (Lu et al., 2005).

11. Typically, a dimension reduction technique such as principal component analysis is applied to the background variables, and principal components that bundle 90% or more of the total variance in the background variables are included instead of the original variables (e.g., Martin, Mullis, & Hooper, 2016; Oranje & Ye, 2014).

12. To be precise, the specific PVs per respondent can change by chance alone even when re-running the exact same PV-generating model. This is because, as outlined earlier, PVs are random draws from the posterior distribution. This, however, will typically not affect pooled estimates from these two sets of PVs.
Abbreviations

1PL: 1-parameter logistic model
CAT: Computerized adaptive testing
CFA: Confirmatory factor analysis
CTT: Classical test theory
DWLS: Diagonally weighted least squares
EFA: Exploratory factor analysis
IRT: Item response theory
LSAS: Large-scale assessment surveys
MESE: Mixed effects structural equations
ML: Maximum likelihood
MLE: Maximum likelihood estimate
MLR: Robust maximum likelihood
NAEP: National Assessment of Educational Progress
NEPS: German National Educational Panel Study
PCM: Partial credit model
PIAAC: Programme for the International Assessment of Adult Competencies
PISA: Programme for International Student Assessment
PV: Plausible value
SEM: Structural equation model(ing)
TIMSS: Trends in International Mathematics and Science Study
WLSMV: Weighted least squares (means-and-variance corrected)

References

Andersen, E. B. (1977). Sufficient statistics and latent trait models. Psychometrika, 42(1), 69–81. https://doi.org/10.1007/BF02293746. Anderson, J., & Gerbing, D. W. (1992). Assumptions and comparative strengths of the two-step approach: Comment on Fornell and Yi. Sociological Methods and Research, 20(3), 321–333. https://doi.org/10.1177/0049124192020003002. Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423. https://doi.org/10.1037/0033-2909.103.3.411. Asparouhov, T. & Muthén, B. (2010). Plausible values for latent variables using Mplus. Mplus Technical Report. http://statmodel.com/download/Plausible.pdf Avvisati, F., & Keslair, F. (2020). REPEST: Stata module to run estimations with weighted replicate samples and plausible values. Retrieved from https://econpapers.repec.org/software/bocbocode/S457918.htm Bakk, Z., & Vermunt, J. K. (2016). Robustness of stepwise latent class modeling with continuous distal outcomes. Structural Equation Modeling, 23(1), 20–31. https://doi.org/10.1080/10705511.2014.955104. Beauducel, A. (2005). How to describe the difference between factors and corresponding factor-score estimates. Methodology, 1(4), 143–158. https://doi.org/10.1027/1614-2241.1.4.143. Beauducel, A. (2007). In spite of indeterminacy many common factor score estimates yield an identical reproduced covariance matrix. Psychometrika, 72(3), 437–441. https://doi.org/10.1007/s11336-005-1467-5. Beauducel, A., & Leue, A. (2013). Unit-weighted scales imply models that should be tested! Practical Assessment, Research, and Evaluation, 18(1), Article 1. https://doi.org/10.7275/y3cg-xv71. Bertoli-Barsotti, L. (2005). On the existence and uniqueness of JML estimates for the partial credit model. Psychometrika, 70(3), 517–531. https://doi.org/10.1007/s11336-001-0917-0. Birnbaum, A. L. (2008). Some latent trait models and their use in inferring an examinee's ability. In F. M. Lord, & M. R. Novick (Eds.), Statistical theories of mental test scores (pp. 397–479). Information Age Publishing (Original work published in 1968). Bollen, K. A. (1989). Structural equations with latent variables. Wiley. https://doi.org/10.1002/9781118619179. Bollen, K. A., & Noble, M. D. (2011). Structural equation models and the quantification of behavior. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15639–15646. https://doi.org/10.1073/pnas.1010661108. Braun, H., & von Davier, M. (2017). The use of test scores from large-scale assessment surveys: Psychometric and statistical considerations. Large-Scale Assessments in Education, 5(1), 5–17. https://doi.org/10.1186/s40536-017-0050-x. Burt, R. S. (1973).
Confirmatory factor analytic structures and the theory construction process. Sociological Methods and Research, 2(2), 131–190. https://doi.org/10.1177/004912417300200201. Carstens, R., & Hastedt, D. (2010). The effect of not using plausible values when they should be: An illustration using TIMSS 2007 grade 8 mathematics data. In 4th IEA International Research Conference (IRC-2010), Gothenburg, Sweden https://www.iea.nl/sites/default/files/2019-04/IRC2010_Carstens_Hastedt.pdf. Devlieger, I., Mayer, A., & Rosseel, Y. (2016). Hypothesis testing using factor score regression: A comparison of four methods. Educational and Psychological Measurement, 76(5), 741–770. https://doi.org/10.1177/0013164415607618. Enders, C. K. (2010). Applied missing data analysis. Guilford press. Fava, J. L., & Velicer, W. F. (1992). An empirical comparison of factor, image, component, and scale scores. Multivariate Behavioral Research, 27(3), 301–322. https://doi.org/10.1207/s15327906mbr2703_1. Fuller, W. A. (1987). Measurement error models. Hoboken: Wiley. Fuller, W. A. (1995). Estimation in the presence of measurement error. International Statistical Review, 63(2), 121–141. https://doi.org/10.2307/1403606. Glöckner-Rist, A., & Hoijtink, H. (2003). The best of both worlds: Factor analysis of dichotomous data using item response theory and structural equation modeling. Structural Equation Modeling, 10(4), 544–565. https://doi.org/10.1207/S15328007SEM1004_4. Graham, J. W., Taylor, B. J., Olchowski, A. E., & Cumsille, P. E. (2006). Planned missing data designs in psychological research. Psychological Methods, 11(4), 323–343. https://doi.org/10.1037/1082-989X.11.4.323. Grice, J. W. (2001). Computing and evaluating factor scores. Psychological Methods, 6(4), 430–450. https://doi.org/10.1037/1082-989x.6.4.430. Grice, J. W., & Harris, R. J. (1998). A comparison of regression and loading weights for the computation of factor scores. Multivariate Behavioral Research, 33(2), 221–247. https://doi.org/10.1207/s15327906mbr3302_2. Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis, (6th ed., ). Pearson Prentice Hall. Hardt, K., Hecht, M., Oud, J. H., & Voelkle, M. C. (2019). Where have the persons gone?–An illustration of individual score methods in autoregressive panel models. Structural Equation Modeling, 26(2), 310–323. https://doi.org/10.1080/10705511.2018.1517355. Hoijtink, H., & Boomsma, A. (1996). Statistical inference based on latent ability estimates. Psychometrika, 61(2), 313–330. https://doi.org/10.1007/BF02294342. Hyslop, D. R., & Imbens, G. W. (2001). Bias from classical and other forms of measurement error. Journal of Business & Economic Statistics, 19(4), 475–481. https://doi.org/10.1198/07350010152596727. Jacob, B., & Rothstein, J. (2016). The measurement of student ability in modern assessment systems. Journal of Economic Perspectives, 30(3), 85–108. https://doi.org/10.1257/jep.30.3.85. Jerrim, J., Lopez-Agudo, L. A., Marcenaro-Gutierrez, O. D., & Shure, N. (2017). What happens when econometrics and psychometrics collide? An example using the PISA data. Economics of Education Review, 61, 51–58. https://doi.org/10.1016/j.econedurev.2017.09.007. Joliffe, I. T., & Morgan, B. J. T. (1992). Principal component analysis and exploratory factor analysis. Statistical Methods in Medical Research, 1(1), 69–95. https://doi.org/10.1177/096228029200100105. Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. 
Psychometrika, 34(2), 183–202. https://doi.org/10.1007/BF02289343. Jöreskog, K. G. (1970). A general method for analysis of covariance structures. Biometrika, 57(2), 239–251. https://doi.org/10.1093/biomet/57.2.239. Jöreskog, K. G., & Goldberger, A. S. (1975). Estimation of a model with multiple indicators and multiple causes of a single latent variable. Journal of the American Statistical Association, 70(351), 631–639. https://doi.org/10.2307/2285946. Jöreskog, K. G., & Sörbom, D. (1979). Advances in factor analysis and structural equation models. University Press of America. Junker, B. W., Schofield, L. S., & Taylor, L. (2012). The use of cognitive ability measures as explanatory variables in regression analysis. IZA Journal of Labor Economics, 1(1), 4. https://doi.org/10.1186/2193-8997-1-4. Laukaityte, I., & Wiberg, M. (2017). Using plausible values in secondary analysis in large-scale assessments. Communications in statistics - Theory and Methods, 46(22), 11341–11357. https://doi.org/10.1080/03610926.2016.1267764. Li, C. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods, 48(3), 936–949. https://doi.org/10.3758/s13428-015-0619-7. Little, R. J., & Rubin, D. B. (2002). Statistical analysis with missing data. Wiley. Lord, F. M. (1980). Applications of item response theory to practical testing problems. New York: Routledge. https://doi.org/10.4324/9780203056615. Lord, F. M., & Novick, M. R. (2008). Statistical theories of mental test scores. Information Age Publishing (Original work published in 1968). Lu, I. R., & Thomas, D. R. (2008). Avoiding and correcting bias in score-based latent variable regression with discrete manifest items. Structural Equation Modeling, 15(3), 462–490. https://doi.org/10.1080/10705510802154323. Lu, I. R., Thomas, D. R., & Zumbo, B. D. (2005). Embedding IRT in structural equation models: A comparison with regression based on IRT scores. Structural Equation Modeling, 12(2), 263–277. https://doi.org/10.1207/s15328007sem1202_5. Lüdtke, O., & Robitzsch, A. (2017). Eine Einführung in die Plausible-Values-Technik für die psychologische Forschung. Diagnostica, 63(3), 193–205. https://doi.org/10.1026/0012-1924/a000175. Marchant, G. J. (2015). How plausible is using averaged NAEP values to examine student achievement? Comprehensive Psychology, 4, Article 1. https://doi.org/10.2466/03.CP.4.1. Martin, M. O., Mullis, I. V., & Hooper, M. (2016). Methods and procedures in TIMSS 2015. https://timssandpirls.bc.edu/publications/timss/2015-methods.html Mazza, G. L., Enders, C. K., & Ruehlman, L. S. (2015). Addressing item-level missing data: A comparison of proration and full information maximum likelihood estimation. Multivariate Behavioral Research, 50(5), 504–519. https://doi.org/10.1080/00273171.2015.1068157. McDonald, R. P. (2011). Measuring latent quantities. Psychometrika, 76(4), 511–536. https://doi.org/10.1007/s11336-011-9223-7. McNeish, D., & Wolf, M. G. (2020). Thinking twice about sum scores. Behavior Research Methods. Advance online publication. https://doi.org/10.3758/s13428-020-01398-0. Mislevy, R. J. (1991). Randomization-based inference about latent variables from complex samples. Psychometrika, 56(2), 177–196. https://doi.org/10.1007/BF02294457. Niemi, R. G., Carmines, E. G., & McIver, J. P. (1986). The impact of scale length on reliability and validity: A clarification of some misconceptions. Quality and Quantity, 20, 371–376. 
https://doi.org/10.1007/BF00123086. Nimon, K., Zientek, L. R., & Henson, R. (2012). The assumption of a reliable instrument and other pitfalls to avoid when considering the reliability of data. Frontiers in Psychology, 3(102), 1–13. https://doi.org/10.3389/fpsyg.2012.00102. OECD (2016). The survey of adult skills: Reader's companion, (2nd ed., ). OECD Publishing. https://doi.org/10.1787/9789264258075-en. Oranje, A., & Ye, L. (2014). Population model size, bias and variance in educational survey assessments. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis, (pp. 203–228). CRC Press. https://doi.org/10.1201/b16061 Pokropek, A. (2015). Phantom effects in multilevel compositional analysis: Problems and solutions. Sociological Methods & Research, 44(4), 677–705. https://doi.org/10.1177/0049124114553801. Raykov, T., & Marcoulides, G. A. (2016). On the relationship between classical test theory and item response theory: From one to the other and back. Educational and Psychological Measurement, 76(2), 325–338. https://doi.org/10.1177/0013164415576958. Raykov, T., Marcoulides, G. A., & Chang, C. (2016). Examining population heterogeneity in finite mixture settings using latent variable modeling. Structural Equation Modeling, 23(5), 726–730. https://doi.org/10.1080/10705511.2015.1103193. Raykov, T., Marcoulides, G. A., & Li, T. (2017). On the fallibility of principal components in research. Educational and Psychological Measurement, 77(1), 165–178. https://doi.org/10.1177/0013164416629714. Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. https://doi.org/10.1037/a0029315. Rhemtulla, M., van Bork, R., & Borsboom, D. (2020). Worse than measurement error: Consequences of inappropriate latent variable measurement models. Psychological Methods, 25(1), 30–45. https://doi.org/10.1037/met0000220. Richardson, S., & Gilks, W. R. (1993). Conditional independence models for epidemiological studies with covariate measurement error. Statistics in Medicine, 12(18), 1703–1722. https://doi.org/10.1002/sim.4780121806. Robitzsch, A., Kiefer, T., & Wu, M. (2020). TAM: Test analysis modules. R package version 3.5-19. http://cran.ma.imperial.ac.uk/web/packages/TAM/TAM.pdf. Rubin, D. (1987). Multiple imputation for nonresponse in surveys. Wiley. http://dx.doi.org/10.1002/9780470316696 Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147–177. https://doi.org/10.1037/1082-989X.7.2.147. Scharl, A., Carstensen, C. H., & Gnambs, T. (2020). Estimating Plausible Values with NEPS Data: An Example Using Reading Competence in Starting Cohort 6 (NEPS Survey Paper No. 71). Leibniz Institute for Educational Trajectories, National Educational Panel Study. https://www.lifbi.de/Portals/13/NEPS%20Survey%20Papers/NEPS_Survey-Paper_LXXI.pdf Schofield, L. S. (2015). Correcting for measurement error in latent variables used as predictors. The Annals of Applied Statistics, 9(4), 2133–2152. https://doi.org/10.1214/15-AOAS877. Skrondal, A., & Laake, P. (2001). Regression among factor scores. Psychometrika, 66(4), 563–575. https://doi.org/10.1007/BF02296196. Skrondal, A., & Rabe-Hesketh, S. (2004).
Generalized latent variable modelling: Multilevel, longitudinal and structural equation models. Chapman & Hall. https://doi.org/10.1201/9780203489437 Spearman (1904). The proof and measurement of association between two things. American Journal of Psychology, 15(1), 72–101. https://doi.org/10.2307/1412159. Stefanski, L. A. (2000). Measurement error models. Journal of the American Statistical Association, 95(452), 1353–1358. https://doi.org/10.2307/2669787. Steyer, R. (2015). Classical (psychometric) test theory. In J. D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences, (2nd ed., pp. 785–791). Elsevier. https://doi.org/10.1016/B978-0-08-097086-8.44006-7. Televantou, I., Marsh, H. W., Kyriakides, L., Nagengast, B., Fletcher, J., & Malmberg, L. E. (2015). Phantom effects in school composition research: Consequences of failure to control biases due to measurement error in traditional multilevel models. School Effectiveness and School Improvement, 26(1), 75–101. https://doi.org/10.1080/09243453.2013.871302. Tong, Y., & Kolen, M. J. (2010). IRT proficiency estimators and their impact. In Annual conference of the National Council of Measurement in Education, Denver, CO http://images.pearsonassessments.com/images/tmrs/tmrs_rg/9_IRT_Estimator_Scoring_42210.pdf. van Buuren, S. (2018). Flexible Imputation of Missing Data, (2nd ed., ). CRC/Chapman & Hall. https://doi.org/10.1201/9780429492259 Vermunt, J. K. (2010). Latent class modeling with covariates: Two improved three-step approaches. Political Analysis, 18(4), 450–469. https://doi.org/10.2307/25792024. von Davier, M. (2010). Why sum scores may not tell us all about test takers. Newborn and Infant Nursing Reviews, 10(1), 27–36. https://doi.org/10.1053/j.nainr.2009.12.011. von Davier, M., Gonzalez, E., & Mislevy, R. J. (2009). What are plausible values and why are they useful? IERI Monograph Series: Issues and Methodologies in Large Scale Assessments, 2, 9–36 https://www.ierinstitute.org/fileadmin/Documents/IERI_Monograph/IERI_Monograph_Volume_02_Chapter_01.pdf. Wainer, H., & Thissen, D. (1987). Estimating ability with the wrong model. Journal of Educational Statistics, 12(4), 339–368. https://doi.org/10.3102/10769986012004339. Warm, T. A. (1989). Weighted likelihood estimation of ability in item response theory. Psychometrika, 54(3), 427–450. https://doi.org/10.1007/BF02294627. Westfall, J., & Yarkoni, T. (2016). Statistically controlling for confounding constructs is harder than you think. PloS one, 11(3), Article e0152719. https://doi.org/10.1371/journal.pone.0152719. Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: Current approaches and future directions. Psychological Methods, 12(1), 58–79. https://doi.org/10.1037/1082-989X.12.1.58. Wu, M. (2005). The role of plausible values in large-scale surveys. Studies in Educational Evaluation, 31(2–3), 114–128. https://doi.org/10.1016/j.stueduc.2005.05.005. Zhu, Y., Steele, F., & Moustaki, I. (2017). A general 3-step maximum likelihood approach to estimate the effects of multiple latent categorical variables on a distal outcome. Structural Equation Modeling, 24(5), 643–656. https://doi.org/10.1080/10705511.2017.1324310. The authors thank Maya Moritz for proofreading and copyediting. Open Access funding enabled and organized by Projekt DEAL. This paper was supported by two grants to Clemens M. Lechner: a grant by the German Research Foundation (DFG), "Stability and Change in Adult Competencies" (Grant No. 
LE 4001/1-1); and a grant by the German Federal Ministry of Education and Research (BMBF), "Risk and protective factors for the development of low literacy and numeracy in German adults" (Grant No. W143700A).

Department of Survey Design and Methodology, GESIS – Leibniz Institute for the Social Sciences, PO Box 2 21 55, 68072, Mannheim, Germany: Clemens M. Lechner, Nivedita Bhaktha, Katharina Groskurth & Matthias Bluemke

All authors contributed to the ideation for this paper, wrote the first draft together, and revised it. The author order reflects the relative share of contributions. The author(s) read and approved the final manuscript. Correspondence to Clemens M. Lechner.

Additional file 1: Table S1. Results of simulation study conducted by Braun and von Davier (2017). Table S2. Results of simulation study conducted by Wu (2005). Table S3. Results of simulation study by von Davier et al. (2009).

Lechner, C.M., Bhaktha, N., Groskurth, K. et al. Why ability point estimates can be pointless: a primer on using skill measures from large-scale assessments in secondary analyses. Meas Instrum Soc Sci 3, 2 (2021). https://doi.org/10.1186/s42409-020-00020-5 Received: 10 July 2020

Keywords: Large-scale assessments; Measurement error
Influencing Factors for Illegal Driving Behaviors of Rural Bus Drivers

Yun Xiao* | Zijun Liang

School of Urban Construction and Transportation, Hefei University, Hefei 230606, China
[email protected]
https://doi.org/10.18280/ijsse.100109

To improve the safety management of rural buses, this paper attempts to identify the factors affecting the drivers' behavioral features and their illegal driving behaviors. From the perspective of driver psychology, the safety incentive (SI) index was introduced to improve the theory of planned behavior (TPB) model. Then, the authors verified the improved TPB model for the illegal driving behaviors of rural bus drivers. In addition, the influence mechanisms of several factors on the illegal driving behaviors were fully analyzed, namely, attitude toward the behavior (AB), subjective norm (SN), perceived behavioral control (PBC), and the SI. The results show that the rural bus drivers have not developed a correct understanding of illegal driving behaviors; the attitudes of families, friends and passengers can indirectly affect the driving behaviors; the PBC of the driver has a direct or an indirect influence on the illegal driving behaviors; the SI, as the additional index, has a significant inhibitory effect on illegal driving behaviors. Finally, it is advised to improve the safety management of rural buses by improving education and publicity, supervising the whole driving process, and perfecting the SI system.

Keywords: traffic safety, theory of planned behavior (TPB), structural equation modelling (SEM), rural buses, illegal driving behaviors

1. Introduction

Rural buses refer to the buses that provide rural residents with basic travel services, in accordance with a preset route, stops, time and ticket price. According to the Thirteenth Five-Year Plan for the Development of Comprehensive Transport Services by the Ministry of Transport of China, rural buses will cover all townships and administrative villages in China by 2020, with a marked increase in line coverage. As part of China's urban-rural dichotomy, there are huge differences between rural buses and urban buses in transport facilities and operation modes. Compared with urban buses, rural buses in China encounter a variety of complex traffic safety issues. Currently, the rural areas in China are faced with a high rate of traffic accidents. The traffic death rate in rural areas is 2-3 times that of urban areas. This calls for urgent management of the hidden traffic hazards on rural roads [1]. Due to the relatively short history of rural buses, only a few scholars have taken rural bus drivers as research objects. Most of the relevant studies focus on the factors affecting traffic safety on rural roads, namely, road environment, vehicle speed, and driving behavior. Many have noticed the promoting effect of a good road environment on traffic safety in rural areas. For example, Qin et al. [2] collected data on two-lane, two-way intersections in rural South Dakota, and then compared the local accuracies between jurisdiction-specific safety performance functions (SPFs) and the SPFs based on the Highway Safety Manual (HSM). Das et al. [3] defined rural or suburban roads with traffic volume ≤400 vehicles per day (vpd) as low-volume roads, and analyzed the impacts of traffic volume on expected crash frequencies. Llopis-Castelló et al.
[4] discovered that 1/4 of road fatalities occur on the horizontal curves of two-lane rural roads, and adopted an SPF to predict the number of crashes. Shahla et al. [5] and Yang et al. [6] attributed the probability of accidents at signalized intersections to the annual mean daily traffic volume, pedestrian traffic volume, traffic signal priority, parking position and other constraints. The traffic safety on rural roads is negatively correlated with vehicle speed. In other words, relaxing speed limits undermines traffic safety, giving rise to fatal and injury crashes [7-10]. Besharati et al. [11] suggested that both fatal and non-fatal injuries can be reduced significantly by installing more speed cameras over rural roads. Abdel-Aty et al. [12] evaluated how different variable speed limits (VSLs) improve the traffic safety on roads. Road traffic is a complex system of various elements, including but not limited to humans, vehicles and roads. Humans are more proactive than the other elements. Most traffic accidents result from violations and lapses by drivers. Domenichini et al. [13] considered drivers as the key to preventing traffic accidents, and recognized that the complex driver behavior depends on reflex (or involuntary) and voluntary driving actions. Among occupational drivers, the driving risk is associated with fatigue and poor sleep; their physical and mental health could be promoted through multiple interventions, namely, fatigue reduction and sleep management [14-16]. Febres et al. [17] found that the probability of serious or fatal injuries in traffic accidents increases with the number of high-speed violations, distractions and errors. Strogatz et al. [18] probed into the age structure of drivers from rural areas, revealing that older rural drivers tend to highlight the importance of driving and the impact of driving cessation. Casado-Sanz et al. [19] held that population aging worsens the traffic safety on rural roads, and that it should be taken as a potential risk factor of rural traffic safety. Wang et al. [20] defined the impacts of a driver's physical and mental states on traffic safety as the driver's propensity, which covers the mentality towards the traffic condition and decision/behavior under various dynamic factors. Under the theory of planned behavior (TPB), Atombo et al. [21] identified the determinants of the driver's intention to speed and speed violation behavior. The above studies attempt to improve traffic safety through highway design, technical speed limits, and driving patterns. However, there are few reports on the safety of rural buses in light of the psychological factors of rural bus drivers. Rural areas are generally sparsely populated. Previous research [22] calls for more attention to the road safety of low-density settlements, whether they are located in urban or rural areas. Moreover, rural buses often operate under poor road conditions and mixed traffic flows. The complex environment can easily disrupt the mood of the drivers, causing negative impacts on driving behavior. In this paper, the TPB, an attitude theory in social psychology, is employed to analyze the mechanism of rural bus drivers committing illegal driving behaviors, and to examine the behavioral features of the drivers. The research results shed new light on improving the safety of rural buses.
The remainder of this paper is organized as follows: Section 2 introduces the principles of the TPB model and improves the model to describe the illegal behaviors of rural bus drivers; Section 3 designs a questionnaire and analyzes the collected data; Section 4 verifies the improved model through simulation and conducts the path analysis based on structural equation modeling (SEM); Section 5 puts forward the research conclusions. 2. Principles and Improvement of TPB Model 2.1 Principles of TPB The TPB is the combined result of the theory of reasoned action (TRA) and perceived behavioral control (PBC). It has developed into a new research model of behavioral theory [23]. There are five elements in the TPB, namely, attitude toward the behavior (AB), subjective norm (SN), perceived behavioral control (PBC), behavioral intention (BI) and behavior (B). Specifically, the AB refers to the degree to which a person has a favorable or unfavorable evaluation of the behavior of interest; the SN refers to a person's beliefs about whether peers and people of importance to the person think he/she should engage in the behavior; the PBC refers to a person's past experience with the performance of the behavior and anticipated obstacles that could inhibit behavior, which strengthens with the growing amount of perceived resources and opportunities and the decreasing number of anticipated obstacles; the BI refers to a person's subjective probability judgment for adopting a specific behavior, which reflects his/her intention to conduct a specific behavior; the B refers to the action actually taken by a person. Ajzen [23, 24] stated that the BI, under the effects of AB, SN and PBC, mediates the impacts of all possible influencing factors on the B. 2.2 Improved TPB for illegal behaviors of rural bus drivers The TPB model is open for extension. Any other variable that significantly predicts the behavior could be incorporated to the model, enhancing the explanatory and persuasive powers [24]. The safety incentive (SI) means the process that the drivers are trained and guided to pursue the safety management goals of the enterprise, which designs fair and reasonable rules and regulations to encourage standard operations and penalize illegal behaviors. Mahajan et al. [25] held that incentives have a major impact on the safe driving of professional drivers, and regarded individual incentive and control as essential to safety management. Li and Tian [26] established an effective SI system for enterprises, which fully integrates safety education/training, economic incentive and power incentive. The SI can significantly boost people's enthusiasm, initiative, and creativity, providing enterprises an effective means to achieve their safety goals. According to Harvard psychologist William James in his study on employees, normal paid employees only exert 20-30% of their abilities, while incentivized employees exert 80-90% of their abilities. Both SI and SN are external factors that affect a person's B. However, the two factors have different emphases: the SN highlights the perceived social pressure, while the SI focuses on how a person is motivated by the internal rules and regulations of the enterprise. Through the above analysis, the SI was innovatively introduced to the classic TPB model, creating an improved TPB model for illegal behaviors of rural bus drivers (Figure 1). Figure 1. The improved TPB model 3. 
Questionnaire Survey and Data Analysis 3.1 Questionnaire survey (1) Respondents Our questionnaire survey targets rural bus drivers. The respondents include 121 drivers from Hefei, 93 from Huoshan and 105 from Jinzhai. The three places are all located in eastern China's Anhui Province. Before the survey, every driver was informed that their personal information will be kept confidential. On average, the respondents drive 150-260km for 8-11 hours each day on city-to-township lines. The survey was carried out in the dispatch room or meeting room of each enterprise, as part of the Online Reporting System for Transportation Enterprises, a science and technology project supported by Department of Transport, Anhui Province, China in 2019. Before the survey, the questionnaire was explained face-to-face to the respondents, such that they could truly understand the statements. All the drivers were asked to fill out the questionnaire in an independent and anonymous manner. In total, 303 valid questionnaires were recovered, and only 16 were subpar. (2) Survey on illegal driving behaviors The illegal driving behaviors fall into three main categories: violations, lapses and errors. There is a total of 17 typical illegal driving behaviors, such as tailgating, overtaking on the curve, wrong-way driving, illegal lane occupation, speeding, to name but a few [27]. To identify the common illegal behaviors of rural bus drivers, each respondent was asked to select at least three most common illegal driving behaviors. Five indices were selected by more than half of all respondents: distracted driving (79.2%), illegal lane change (77.3%), illegal parking (75.6%), speeding (69.3%), and failure to yield (56.1%). The five representative behaviors are the focal points of our research. (3) Questionnaire design The questionnaire is made up of two essential parts: demographic variables and the variables of the improved TPB model. The demographic variables are about the basic information of each driver, including age, gender and driving age. The improved TPB model involves six dimensions, namely, B (illegal driving behavior), AB, SN, PBC, SI and BI. Multiple statements were designed under each dimension to minimize the filling error. Each statement was rated against a 7-point scale. The greater the point, the more the driver agrees with the statement. 1) Five statements (B1-B5) were designed under the B dimension, each of which stands for an illegal driving behavior: "Distracted driving is inevitable", "Illegal lane change is inevitable", "Illegal parking is inevitable", "Speeding is inevitable", and "Failure to yield is inevitable". 2) Three statements (AB1-AB3) were designed under the AB dimension to test the drivers' beliefs and evaluations of illegal driving behaviors: "I feel uneasy or guilty when driving illegally", "I am dissatisfied with he/she who drives illegally", and "It is unavoidable to drive illegally on an occasional basis". 3) Six statements (SN1-SN6) were designed under the SN dimension to test the normative beliefs and motivations to comply of rural bus drivers to illegal driving behaviors: "My families strongly oppose my illegal driving behaviors", "I care much about the opinions of my families", "My friends/colleagues strongly oppose my illegal driving behaviors", "I care much about the opinions of friends/colleagues", "The passengers strongly oppose my illegal driving behaviors", "I care much about the opinions of the passengers". 
4) Four statements (PBC1-PBC4) were designed under the PBC dimension to test the control beliefs and perceived powers of rural bus drivers regarding illegal driving behaviors: "Driving illegally affects my control of the vehicle", "Driving illegally is safe for me", "Driving illegally is safe for others" and "It is easy to drive illegally without being caught". The four statements demonstrate how convenient it is for the drivers to drive illegally. 5) Three statements (SI1-SI3) were designed under the SI dimension to test how much the incentive affects the driving behavior: "I am less likely to drive illegally facing a high safety reward", "I am less likely to drive illegally after receiving warning education" and "I am less likely to drive illegally facing a high penalty". The three statements demonstrate the magnitude of reward or penalty for driving behavior. 6) Two statements (BI1-BI2) were designed under the BI dimension to test the drivers' intention for illegal driving behaviors: "I will continue to drive illegally in the coming months" and "I will stop driving illegally in the coming months".
3.2 Data validity analysis
(1) Reliability analysis
Reliability is a metric of data consistency. The variables obtained in the questionnaire survey are reliable only if the data are stable through repeated tests. The reliability of the questionnaire is generally verified by Cronbach's alpha (α). If the coefficient is greater than 0.7, the questionnaire is considered highly reliable. The α value can be calculated by:
$\alpha=\frac{k}{k-1}\left(1-\frac{\sum s_{i}^{2}}{s^{2}}\right)$
where k is the total number of items in the scale; $\sum s_{i}^{2}$ is the sum of the variances of the items in the scale; $s^{2}$ is the variance of the sum of the items in the scale. (A small computational sketch of this coefficient is given at the end of this subsection.)
Table 1. The α values of latent variables
Latent variables | The α values
As shown in Table 1, the α values of all latent variables were greater than 0.7, indicating that the designed questionnaire is highly reliable for further analysis.
(2) Validity analysis
Structural validity is defined as the degree to which the scores of a questionnaire adequately reflect the dimensionality of the construct to be measured. The structural validity of the questionnaire can be measured through factor analysis. To verify whether our questionnaire can effectively reflect the psychological behavior of the drivers, a Kaiser-Meyer-Olkin (KMO) and Bartlett's test was conducted on SPSS 23.0. The results in Table 2 show that the KMO coefficient was 0.731, greater than 0.50, and the Sig value was 0.00, smaller than 0.05. Hence, the survey data are suitable for factor analysis.
Table 2. Results of the KMO and Bartlett's test
The KMO measure of sampling adequacy | Bartlett's test of sphericity | The approximate chi-square (χ2)
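As referenced above, a minimal Python sketch of the reliability computation is given below. The response matrix is simulated purely for illustration (it is not the survey data); the three-item structure mirrors the AB statements, and the 0.7 threshold is the rule of thumb cited above.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # s_i^2: variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # s^2: variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated 7-point responses for three items (e.g., AB1-AB3) from 303 respondents:
# a shared latent attitude plus item-specific noise, rounded onto the 1-7 scale.
rng = np.random.default_rng(2020)
latent = rng.normal(loc=4.0, scale=1.2, size=303)
scores = np.clip(np.rint(latent[:, None] + rng.normal(0, 0.8, size=(303, 3))), 1, 7)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")  # compare with the 0.7 rule of thumb
```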
3.3 Statistical analysis
(1) Age difference
As shown in Table 3, there is a significant age difference in distracted driving (F=3.13, p<0.05) and failure to yield (F=2.71, p<0.05). The drivers under 40 are the most prone to distracted driving, and those over 50 are the most prone to failure to yield.
Table 3. Variance analysis on illegal driving behaviors between drivers at different ages
F-statistic | Illegal lane change
(2) Gender difference
As shown in Table 4, there is a significant gender difference in distracted driving and speeding. Male drivers are more likely to commit distracted driving (F=6.05, p<0.05) and speeding (F=4.08, p<0.05).
Table 4. The t-test results on gender difference in illegal driving behaviors
(3) Point reduction in the past year
As shown in Table 5, there is no significant difference in the five typical illegal driving behaviors between drivers with varied point reductions in the past year.
(4) Number of accidents in the past three years
As shown in Table 6, there is a significant difference in illegal lane change (F=6.07, p<0.05), speeding (F=5.92, p<0.05) and failure to yield (F=7.81, p<0.05) between drivers with different numbers of accidents in the past three years. It can be seen that the drivers who have committed 3 or more accidents in the past three years are prone to illegal lane change, speeding and failure to yield. Further analysis shows that failure to yield (11.3%) and illegal lane change (20.6%) are the primary reasons for traffic accidents.
Table 5. Variance analysis on illegal driving behaviors between drivers with varied point reductions in the past year
Point reduction in the past year | 0-3 points | >7 points
Table 6. Variance analysis of illegal driving behaviors between drivers with different number of accidents in the past three years
Number of accidents in the past three years
(5) Type of driving license
As shown in Table 7, there is no significant difference in any term, i.e., the type of driving license has no impact on the illegal driving behaviors of rural bus drivers.
Table 7. Variance analysis on illegal driving behaviors between drivers holding different types of driving license
Types of driving license
4. Model Verification and Path Analysis
4.1 Fitness of data and model
Fitness is an indicator of the consistency between the proposed model and the collected data. The fitness can be measured by absolute indices, relative indices and adjustment indices. The typical absolute indices include the P-value, goodness of fit index (GFI), adjusted goodness of fit index (AGFI) and root mean square error of approximation (RMSEA), and typical relative indices include the normed fit index (NFI), Tucker-Lewis index (TLI), and comparative fit index (CFI). The evaluation criteria of the indices are shown in Table 8. It can be seen that the RMSEA was greater than 0.05, but below the acceptable threshold of 0.08. As a result, the fitness is acceptable between the proposed model and the sample data.
Table 8. Fit indices of our model
AGFI | RMSEA | NFI | TLI
P>0.05 (insignificant) | <0.05 (Good), <0.08 (Reasonable)
4.2 Path analysis
According to the improved TPB model, the driving behaviors can be predicted based on the AB, SN, PBC and SI of the drivers. Here, the SEM is introduced to construct the improved TPB model. The SEM can effectively measure the contribution of each index to the entire model, identify the correlation between indices, and demonstrate the overall fitness of the model. The SEM-based path analysis map contains many kinds of symbols, where each rectangle is an observable variable/factor, each oval is a latent variable/factor, each one-way arrow is a one-way impact or effect, and each two-way arc is a correlation. The established model was verified on AMOS. Based on the survey data, the values of all observable variables and some latent variables were obtained for our model. On this basis, an SEM-based path analysis map was plotted for the illegal driving behaviors of rural bus drivers. The survey data were normalized to fully disclose the correlation between variables. Figure 2.
An SEM-based path analysis map for illegal driving behaviors of rural bus drivers According to the improved TPB model, the BI and B can be respectively calculated by: BI=0.33AB+0.27SN+0.13PBC+0.16SI B=0.58BI+0.7PBC+0.11SI 4.3 Discussion (1) In the improved TPB model, the AB, SN, PBC and SI are obviously correlated with the BI, and the PBC and SI have direct or indirect impacts on the B. (2) The most common illegal driving behaviors among rural bus drivers include distracted driving, illegal lane change, illegal parking, speeding, and failure to yield. Among them, illegal lane change, illegal parking and failure to yield are the top three illegal driving behaviors among these drivers. Further analysis shows that the drivers face high risks of illegal lane change, illegal parking and failure to yield, owing to the poor conditions and narrow width of bus roads in rural areas. To mitigate the risks, the road facilities for rural buses should be further improved, and the common illegal driving behaviors should be corrected in a timely manner. (3) The AB is the leading influencing factor of the BI, with a path coefficient of 0.33. Thus, more importance should be attached to the AB of the driver. Overall, rural bus drivers have not fully recognized the dangers of illegal driving behaviors, and hold different attitudes to their own and others' illegal driving behaviors: they are very dissatisfied with others' illegal driving behaviors (I am dissatisfied with he/she who drives illegally), but indifferent to their own illegal driving behaviors (I feel uneasy or guilty when driving illegally; It is unavoidable to drive illegally on an occasional basis). Psychologically, indifference is an emotional factor in disregard of harmful consequences. Indifferent drivers tend to drive illegally repeatedly, pushing up the likelihood of accidents. Therefore, publicity and education should be provided to guide the drivers to nurture a correct driving attitude. On the one hand, warning education should be implemented to correct the attitude of the drivers toward illegal driving behaviors, enabling them to grasp the correlations between illegal driving and accidents and to avoid illegal driving behaviors. On the other hand, the drivers should be taught to treat others' illegal driving behaviors correctly. If he/she witnessed others' illegal driving behaviors, the driver should report or supervise the behaviors through reasonable channels, rather than lost himself/herself in road rage. (4) The SN has a great influence on the BI, with a path coefficient of 0.27. According to our survey and the relevant literature, illegal behaviors are not acceptable by the public, and are subjected to social pressure. The illegal driving behaviors are significantly restricted by passengers. In the course of driving, passengers can sense the driver's operations, and correct the wrong operations in time. The illegal driving behaviors are also restricted by families. A good family environment helps the driver to foster good driving behaviors. The illegal driving behaviors are slightly affected by friends. To reduce and eliminate illegal driving behavior, it is necessary to improve traffic safety through family education, friend counselling, and passenger supervision. (5) The PBC has an important impact on the BI, with a path coefficient of 0.13. This means the intention toward illegal behaviors partially depends on the ability of self-control. 
The statement "It is easy to drive illegally without being caught" had the highest rating among the four PBC indices, because the traffic management in rural areas is too week to fully monitor the illegal driving behaviors. The other three statements received relatively high ratings. The main reason lies in the long-term fixed route for each bus, which causes job burnout and carelessness to drivers, making them overconfident. Therefore, it is necessary to monitor the whole driving process through technical means like video monitoring, and then timely correct the illegal driving behaviors. Moreover, the bus enterprise can rotate the drivers of different lines by a certain period. The drivers will be less fatigued in the changing environment. (6) The SI, the additional variable, has a major impact on the BI, with a path coefficient of 0.17, and also a major impact on the B of drivers. Therefore, the inclusion of the SI could enhance the explanatory and persuasive powers of the TPB model. The drivers have a strong sense of honor and take an aversion to penalties. The SI system must be perfected to strictly regulate their behaviors. According to the principle of safe behavior incentivization, the incentives should fully consider the needs of drivers, and adopt suitable penalty and competition mechanism. Meanwhile, the efficiency of supervision should be improved, and the employees should be encouraged to compete with each other, making the SI more effective. (7) There are direct correlations between the PBC, SI and B. With the growth of illegal behaviors, the self-control ability will be weakened, and the binding force of the SI will decrease. Then, the SI will bypass the BI and directly acts on the B. The BI has a significant positive impact on the B, with a path coefficient of 0.58. Thus, it is the direct impactor of driving behaviors. In future, the transport department should take two measures to manage rural bus drivers: reducing illegal behaviors by suppressing the BI, and promoting safe behaviors by improving the safety awareness. To promote the safety management of rural buses, this paper innovatively introduced the SI index to improve the classic TPB model, and verified the improved TPB model for the illegal driving behaviors of rural bus drivers. Besides, the influence mechanisms of the AB, SN, PBC and SI on illegal driving behaviors were analyzed in details, and the correlations were quantified between internal and external variables. Empirical analysis shows that the improved TPB model can effectively identify the factors affecting the illegal driving behaviors of rural bus drivers; the addition of the external variable SI perfects the TPB model, enhancing its explanatory power. To enhance safety management, the rural bus enterprises should improve the SI system, integrate rules and regulations on safety behavior with the code of conduct for derivers, and help them to establish correct safety evaluation criteria. Based on the influencing factors, the future research will explore the correlations of the SI with different types of drivers. This work was supported by the Science and Technology Project, Department of Transport, Anhui Province, China (Grant No.: 2018FACJ0641 and 2018FACJ0954) and Talent Scientific Research Fund Project, Hefei University (Grant No.: 18-19RC02). Our thanks also go to those who volunteered in this research. [1] Wang, L.J., Ning, P.S., Yin, P., Cheng, P.X., Schwebel, D.C., Liu, J.M., Wu, Y., Liu, Y.N., Qi, J.L., Zeng, X.Y., Zhou, M.G., Hu, G.Q. (2019). 
Road traffic mortality in China: analysis of national surveillance data from 2006 to 2016. Lancet Public Health, 4(5): e245-e255. https://doi.org/10.1016/S2468-2667(19)30057-X [2] Qin, X., Chen, Z., Shaon, R.R. (2019). Developing jurisdiction-specific SPFs and crash severity portion functions for rural two-lane, two-way intersections. Journal of Transportation Safety & Security, 11(6): 629-641. https://doi.org/10.1080/19439962.2018.1458052. [3] Das, S., Tsapakis, l., Datta, S. (2019). Safety performance functions of low-volume roadways. Transportation Research Record: Low-Volume Roads Conferences, 2673(12). https://doi.org/10.1177/0361198119853559 [4] Llopis-Castelló, D., Camacho-Torregrosa, F. J., García, A. (2018). Calibration of the inertial consistency index to assess road safety on horizontal curves of two-lane rural roads. Accident Analysis & Prevention, 118: 1-10. https://doi.org/10.1016/j.aap.2018.05.014 [5] Shahla, F., Shalaby, A.S., Persaud, B.N., Hadayeghi, A. (2009). Analysis of transit safety at signalized intersections. Transportation Research Record, 2102(1): 108-114. https://doi.org/10.3141/2102-14 [6] Yang, H.T., Lu, X.Z., Cherry, C., Liu, X.H., Li, Y.L. (2017). Spatial variations in active mode trip volume at intersections: A local analysis utilizing geographically weighted regression. Journal of Transport Geography, 64: 184-194. https://doi.org/10.1016/j.jtrangeo.2017.09.007 [7] Anagnostou, E., Cole, E. (2020). Targeted prevention of road traffic deaths in Greece: A multifactorial 5-year census-based study. European Journal of Trauma and Emergency Surgery, 1-16. https://doi.org/10.1007/s00068-019-01290-3 [8] Sayed, T., Sacchi, E. (2016). Evaluating the safety impact of increased speed limits on rural highways in British Columbia. Accident Analysis & Prevention, 95(Part A): 172-177. https://doi.org/10.1016/j.aap.2016.07.012 [9] El-Basyouny, K., Sayed, T. (2012). Measuring direct and indirect treatment effects using safety performance intervention functions. Safety Science, 50(4): 1125-1132. https://doi.org/10.1016/j.ssci.2011.11.008 [10] Cheng, Z.Y., Lu, J., Li, Y.X. (2018). Freeway crash risks evaluation by variable speed limit strategy using real-world traffic flow data. Accident Analysis and Prevention, 119: 176-187. https://doi.org/10.1016/j.aap.2018.07.009 [11] Besharati, M.M., Kashani, A.T., Li, Z.L., Washington, S., Prato, C.G. (2019). A bivariate random effects spatial model of traffic fatalities and injuries across Provinces of Iran. Accident; analysis and prevention, 136: 105394. https://doi.org/10.1016/j.aap.2019.105394 [12] Abdel-Aty, M., Dilmore, J., Dhindsa, A. (2006). Evaluation of variable speed limits for real-time freeway safety improvement. Accident Analysis and Prevention, 38(2): 335-345. https://doi.org/10.1016/j.aap.2005.10.010 [13] Domenichini, L., Branzi, V., Smorti, M. (2019) Influence of drivers' psychological risk profiles on the effectiveness of traffic calming measures. Accident Analysis and Prevention, 123: 243-255. https://doi.org/10.1016/j.aap.2018.11.025 [14] Kwon, S., Kim, H., Kim, G.S., Cho, E. (2019). Fatigue and poor sleep are associated with driving risk among Korean occupational drivers. Journal of Transport & Health, 14: 100572. https://doi.org/10.1016/j.jth.2019.100572 [15] Useche, S.A., Viviola, G., Cendales, B.E. (2017). Stress-related psychosocial factors at work, fatigue, and risky driving behavior in bus rapid transport (BRT) drivers. Accident Analysis and Prevention, 104: 106-114. 
https://doi.org/10.1016/j.aap.2017.04.023 [16] Davidovic, J., Dalibor, P., Boris, A. (2018). Professional drivers' fatigue as a problem of the modern era. Transportation Research Part F-Traffic Psychology and Behaviour, 55: 199-209. https://doi.org/10.1016/j.trf.2018.03.010 [17] Febres, J.D., Mohamadi, F., Mariscal, M.A., Herrera, S., Garcia-Herrero, S. (2019). The role of journey purpose in road traffic injuries: A Bayesian network approach. Journal of Advanced Transportation. https://doi.org/10.1155/2019/6031482 [18] Strogatz, D., Mielenz, T.J., Johnson, A.K., Baker, I.R., Robinson, M., Mebust, S.P., Andrews, H.F., Betz, M.E., Eby, D.W., Johnson, R.M., Jones, V.C., Leu, C.S., Molnar, L.J., Rebok, G.W., Li, G.H. (2020). Importance of driving and potential impact of driving cessation for rural and urban older adults. The Journal of Rural Health, 36(1): 88-93. https://doi.org/10.1111/jrh.12369 [19] Casado-Sanz, N., Guirao, B., Gálvez-Pérez, D. (2019). Population ageing and rural road accidents: Analysis of accident severity in traffic crashes with older pedestrians on Spanish crosstown roads. Research in Transportation Business & Management, 30: 100377. https://doi.org/10.1016/j.rtbm.2019.100377 [20] Wang, X.Y., Liu, Y.Q., Wang, J.Q., Zhang, J.L. (2019). Study on influencing factors selection of driver's propensity. Transportation Research Part D-Transport and Environment, 66: 35-48. https://doi.org/10.1016/j.trd.2018.06.025 [21] Atombo, C., Wu, C.Z.H., Zhang, H., Wemegah, T.D. (2017). Perceived enjoyment, concentration, intention, and speed violation behavior: Using flow theory and theory of planned behavior. Traffic Injury Prevention, 18(7): 694-702. https://doi.org/10.1080/15389588.2017.1307969 [22] McAndrews, C., Beyer, K., Guse, C.E., Layde, P. (2016). How do the definitions of urban and rural matter for transportation safety? Re-interpreting transportation fatalities as an outcome of regional development processes. Accident Analysis & Prevention, 97: 231-241. https://doi.org/10.1016/j.aap.2016.09.008 [23] Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2): 179-211. [24] Ajzen I. (2011). The theory of planned behaviour: Reactions and reflections. Psychology & Health, 26(9): 1113-1127. https://doi.org/10.1080/08870446.2011.613995 [25] Mahajan, K., Velaga, N.R., Kumar, A., Choudhary, A., Choudhary, P. (2019). Effects of driver work-rest patterns, lifestyle and payment incentives on long-haul truck driver sleepiness. Transportation Research Part F-Traffic Psychology and Behaviour, 60: 366-382. https://doi.org/10.1016/j.trf.2018.10.028 [26] Li, H.X., Tian, S.C. (2001). Safety incentive mechanism system analysis. Mining Safety & Environmental Protection, 28(3): 8-9. https://doi.org/10.3969/j.issn.1008-4495.2001.03.003 [27] Padilla, J.L., Castro, C., Doncel, P., Taubman-Ben-Ari, O. (2019). Adaptation of the multidimensional driving styles inventory for Spanish drivers: Convergent and predictive validity evidence for detecting safe and unsafe driving styles. Accident Analysis and Prevention, 136: 105413. https://doi.org/10.1016/j.aap.2019.105413
arXiv:1607.04467 [math.NT]
Title: On the Minimum of a Positive Definite Quadratic Form over Non-Zero Lattice Points. Theory and Applications
Authors: Faustin Adiceam, Evgeniy Zorin
Abstract: Let $\Sigma_d^{++}$ be the set of positive definite matrices with determinant 1 in dimension $d\ge 2$. Identifying any two $SL_d(\mathbb{Z})$-congruent elements in $\Sigma_d^{++}$ gives rise to the space of reduced quadratic forms of determinant one, which in turn can be identified with the locally symmetric space $X_d:=SL_d(\mathbb{Z})\backslash SL_d(\mathbb{R})/SO_d(\mathbb{R})$. Equip the latter space with its natural probability measure coming from a Haar measure on $SL_d(\mathbb{R})$. In 1998, Kleinbock and Margulis established sharp estimates for the probability that an element of $X_d$ takes a value less than a given real number $\delta>0$ over the non-zero lattice points $\mathbb{Z}^d\backslash\{ 0 \}$. In this article, these estimates are extended to a large class of probability measures arising either from the spectral or the Cholesky decomposition of an element of $\Sigma_d^{++}$. The sharpness of the bounds thus obtained is also established (up to multiplicative constants) for a subclass of these measures. Although of independent interest, this theory is partly developed here with a view towards applications to Information Theory. More precisely, after providing a concise introduction to this topic fitted to our needs, we lay the theoretical foundations of the study of some manifolds frequently appearing in the theory of Signal Processing. This is then applied to the recently introduced Integer-Forcing Receiver Architecture channel, whose importance stems from its expected high performance. Here, we give sharp estimates for the probabilistic distribution of the so-called \emph{Effective Signal-to-Noise Ratio}, which is an essential quantity in the evaluation of the performance of this model.
Subjects: Number Theory (math.NT); Information Theory (cs.IT)
Submitted: 15 July 2016
On Atkin and Swinnerton-Dyer congruences for noncongruence modular forms
Author: Jonas Kibelbek
MSC (2010): Primary 11F33; Secondary 11S31, 15A03, 11B37
Published electronically: July 28, 2014
Abstract: In 1985, Scholl showed that Fourier coefficients of noncongruence cusp forms satisfy an infinite family of congruences modulo powers of $p$, providing a framework for understanding the Atkin and Swinnerton-Dyer congruences. We show that solutions to the weight-$k$ Scholl congruences can be rewritten, modulo the appropriate powers of $p$, as $p$-adic solutions of the corresponding linear recurrence relation. Finally, we show that there are spaces of cusp forms that do not admit any basis satisfying 3-term Atkin and Swinnerton-Dyer type congruences at supersingular places, settling a question raised by Atkin and Swinnerton-Dyer.
References:
A. O. L. Atkin and J. Lehner, Hecke operators on $\Gamma _{0}(m)$, Math. Ann. 185 (1970), 134–160. MR 268123, DOI https://doi.org/10.1007/BF01359701
A. O. L. Atkin and H. P. F. Swinnerton-Dyer, Modular forms on noncongruence subgroups, Combinatorics (Proc. Sympos. Pure Math., Vol. XIX, Univ. California, Los Angeles, Calif., 1968) Amer. Math. Soc., Providence, R.I., 1971, pp. 1–25. MR 0337781
Gabriel Berger, Hecke operators on noncongruence subgroups, C. R. Acad. Sci. Paris Sér. I Math. 319 (1994), no. 9, 915–919 (English, with English and French summaries). MR 1302789
E. J. Ditters, Hilbert functions and Witt functions. An identity for congruences of Atkin and of Swinnerton-Dyer type, Math. Z. 205 (1990), no. 2, 247–278. MR 1076132, DOI https://doi.org/10.1007/BF02571239
Gareth A. Jones, Triangular maps and noncongruence subgroups of the modular group, Bull. London Math. Soc. 11 (1979), no. 2, 117–123. MR 541962, DOI https://doi.org/10.1112/blms/11.2.117
Nicholas M. Katz, Crystalline cohomology, Dieudonné modules, and Jacobi sums, Automorphic forms, representation theory and arithmetic (Bombay, 1979), Tata Inst. Fund. Res. Studies in Math., vol. 10, Tata Inst. Fundamental Res., Bombay, 1981, pp. 165–246. MR 633662
Wen Ch'ing Winnie Li, Newforms and functional equations, Math. Ann. 212 (1975), 285–315. MR 369263, DOI https://doi.org/10.1007/BF01344466
Ling Long, Finite index subgroups of the modular group and their modular forms, Modular forms and string duality, Fields Inst. Commun., vol. 54, Amer. Math. Soc., Providence, RI, 2008, pp. 83–102. MR 2454321
R. A. Rankin, Contributions to the theory of Ramanujan's function $\tau (n)$ and similar arithmetical functions. I. The zeros of the function $\sum ^\infty _{n=1}\tau (n)/n^s$ on the line ${\mathfrak R}s=13/2$. II. The order of the Fourier coefficients of integral modular forms, Proc. Cambridge Philos. Soc. 35 (1939), 351–372. MR 411
A. J. Scholl, Modular forms and de Rham cohomology; Atkin-Swinnerton-Dyer congruences, Invent. Math. 79 (1985), no. 1, 49–77. MR 774529, DOI https://doi.org/10.1007/BF01388656
J. G. Thompson, A finiteness theorem for subgroups of ${\rm PSL}(2,\,{\bf R})$ which are commensurable with ${\rm PSL}(2,\,{\bf Z})$, The Santa Cruz Conference on Finite Groups (Univ. California, Santa Cruz, Calif., 1979) Proc. Sympos. Pure Math., vol. 37, Amer. Math. Soc., Providence, R.I., 1980, pp. 533–555. MR 604632
Jonas Kibelbek
Affiliation: Department of Mathematics, Iowa State University, Ames, Iowa 50011
Address at time of publication: 3505 Sharonwood Road, Apt. 2D, Laurel, Maryland 20724
Email: [email protected]
Keywords: Atkin and Swinnerton-Dyer congruences, $p$-adic congruences, noncongruence modular forms
Received by editor(s): February 2, 2012
Received by editor(s) in revised form: December 28, 2012
Additional Notes: This research was supported in part by NSF grant DMS-0801096 and NSA grant H98230-10-1-0195. Part of the research was done when the author was visiting the National Center for Theoretical Sciences in Hsinchu, Taiwan, and he thanks the NCTS for its hospitality. The author would like to thank Dr. Li for her encouragement and helpful suggestions, and is grateful for the referee's helpful comments, suggestions, and patience.
Communicated by: Kathrin Bringmann
13.8: Einstein's Theory of Gravity
Contents: A Revolution in Perspective; The Principle of Equivalence; A Geometric Theory of Gravity; The Event Horizon; The Evidence for Black Holes.
Learning Objectives: Describe how the theory of general relativity approaches gravitation. Explain the principle of equivalence. Calculate the Schwarzschild radius of an object. Summarize the evidence for black holes.
Newton's law of universal gravitation accurately predicts much of what we see within our solar system. Indeed, only Newton's laws have been needed to accurately send every space vehicle on its journey. The paths of Earth-crossing asteroids, and most other celestial objects, can be accurately determined solely with Newton's laws. Nevertheless, many phenomena have shown a discrepancy from what Newton's laws predict, including the orbit of Mercury and the effect that gravity has on light. In this section, we examine a different way of envisioning gravitation. In 1905, Albert Einstein published his theory of special relativity. This theory is discussed in great detail in Relativity so we say only a few words here. In this theory, no motion can exceed the speed of light—it is the speed limit of the Universe. This simple fact has been verified in countless experiments. However, it has incredible consequences—space and time are no longer absolute. Two people moving relative to one another do not agree on the length of objects or the passage of time. Almost all of the mechanics you learned in previous chapters, while remarkably accurate even for speeds of many thousands of miles per second, begin to fail when approaching the speed of light. This speed limit on the Universe was also a challenge to the inherent assumption in Newton's law of gravitation that gravity is an action-at-a-distance force. That is, without physical contact, any change in the position of one mass is instantly communicated to all other masses. This assumption does not come from any first principle, as Newton's theory simply does not address the question. (The same was believed of electromagnetic forces, as well. It is fair to say that most scientists were not completely comfortable with the action-at-a-distance concept.) A second assumption also appears in Newton's law of gravitation Equation 13.1. The masses are assumed to be exactly the same as those used in Newton's second law, \(\vec{F}\) = m\(\vec{a}\). We made that assumption in many of our derivations in this chapter. Again, there is no underlying principle that this must be, but experimental results are consistent with this assumption. In Einstein's subsequent theory of general relativity (1916), both of these issues were addressed. His theory was a theory of space-time geometry and how mass (and acceleration) distort and interact with that space-time. It was not a theory of gravitational forces. The mathematics of the general theory is beyond the scope of this text, but we can look at some underlying principles and their consequences. Einstein came to his general theory in part by wondering why someone who was free falling did not feel his or her weight.
Indeed, it is common to speak of astronauts orbiting Earth as being weightless, despite the fact that Earth's gravity is still quite strong there. In Einstein's general theory, there is no difference between free fall and being weightless. This is called the principle of equivalence. The equally surprising corollary to this is that there is no difference between a uniform gravitational field and a uniform acceleration in the absence of gravity. Let's focus on this last statement. Although a perfectly uniform gravitational field is not feasible, we can approximate it very well. Within a reasonably sized laboratory on Earth, the gravitational field \(\vec{g}\) is essentially uniform. The corollary states that any physical experiments performed there have the identical results as those done in a laboratory accelerating at \(\vec{a} = \vec{g}\) in deep space, well away from all other masses. Figure 13.28 illustrates the concept. Figure \(\PageIndex{1}\): According to the principle of equivalence, the results of all experiments performed in a laboratory in a uniform gravitational field are identical to the results of the same experiments performed in a uniformly accelerating laboratory. How can these two apparently fundamentally different situations be the same? The answer is that gravitation is not a force between two objects but is the result of each object responding to the effect that the other has on the space-time surrounding it. A uniform gravitational field and a uniform acceleration have exactly the same effect on space-time. Euclidean geometry assumes a "flat" space in which, among the most commonly known attributes, a straight line is the shortest distance between two points, the sum of the angles of all triangles must be 180 degrees, and parallel lines never intersect. Non-Euclidean geometry was not seriously investigated until the nineteenth century, so it is not surprising that Euclidean space is inherently assumed in all of Newton's laws. The general theory of relativity challenges this long-held assumption. Only empty space is flat. The presence of mass—or energy, since relativity does not distinguish between the two—distorts or curves space and time, or space-time, around it. The motion of any other mass is simply a response to this curved space-time. Figure 13.29 is a two-dimensional representation of a smaller mass orbiting in response to the distorted space created by the presence of a larger mass. In a more precise but confusing picture, we would also see space distorted by the orbiting mass, and both masses would be in motion in response to the total distortion of space. Note that the figure is a representation to help visualize the concept. These are distortions in our three-dimensional space and time. We do not see them as we would a dimple on a ball. We see the distortion only by careful measurements of the motion of objects and light as they move through space. Figure \(\PageIndex{2}\): A smaller mass orbiting in the distorted space-time of a larger mass. In fact, all mass or energy distorts space-time. For weak gravitational fields, the results of general relativity do not differ significantly from Newton's law of gravitation. But for intense gravitational fields, the results diverge, and general relativity has been shown to predict the correct results. Even in our Sun's relatively weak gravitational field at the distance of Mercury's orbit, we can observe the effect. Starting in the mid-1800s, Mercury's elliptical orbit has been carefully measured.
However, although it is elliptical, its motion is complicated by the fact that the perihelion position of the ellipse slowly advances. Most of the advance is due to the gravitational pull of other planets, but a small portion of that advancement could not be accounted for by Newton's law. At one time, there was even a search for a "companion" planet that would explain the discrepancy. But general relativity correctly predicts the measurements. Since then, many measurements, such as the deflection of light of distant objects by the Sun, have verified that general relativity correctly predicts the observations. We close this discussion with one final comment. We have often referred to distortions of space-time or distortions in both space and time. In both special and general relativity, the dimension of time has equal footing with each spatial dimension (differing in its place in both theories only by an ultimately unimportant scaling factor). Near a very large mass, not only is the nearby space "stretched out," but time is dilated or "slowed." We discuss these effects more in the next section. Einstein's theory of gravitation is expressed in one deceptively simple-looking tensor equation (tensors are a generalization of scalars and vectors), which expresses how a mass determines the curvature of space-time around it. The solutions to that equation yield one of the most fascinating predictions: the black hole. The prediction is that if an object is sufficiently dense, it will collapse in upon itself and be surrounded by an event horizon from which nothing can escape. The name "black hole," which was coined by astronomer John Wheeler in 1969, refers to the fact that light cannot escape such an object. Karl Schwarzschild was the first person to note this phenomenon in 1916, but at that time, it was considered mostly to be a mathematical curiosity. Surprisingly, the idea of a massive body from which light cannot escape dates back to the late 1700s. Independently, John Michell and Pierre Simon Laplace used Newton's law of gravitation to show that light leaving the surface of a star with enough mass could not escape. Their work was based on the fact that the speed of light had been measured by Ole Roemer in 1676. He noted discrepancies in the data for the orbital period of the moon Io about Jupiter. Roemer realized that the difference arose from the relative positions of Earth and Jupiter at different times and that he could find the speed of light from that difference. Michell and Laplace both realized that since light had a finite speed, there could be a star massive enough that the escape speed from its surface could exceed that speed. Hence, light always would fall back to the star. Oddly, observers far enough away from the very largest stars would not be able see them, yet they could see a smaller star from the same distance. Recall that in Gravitational Potential Energy and Total Energy, we found that the escape speed, given by Equation 13.6, is independent of the mass of the object escaping. Even though the nature of light was not fully understood at the time, the mass of light, if it had any, was not relevant. Hence, Equation 13.6 should be valid for light. Substituting c, the speed of light, for the escape velocity, we have $$v_{esc} = c = \sqrt{\dfrac{2GM}{R}} \ldotp$$ Thus, we only need values for R and M such that the escape velocity exceeds c, and then light will not be able to escape. 
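A quick numerical check of this idea (not part of the original text) is straightforward to script. The minimal sketch below uses standard rounded values for G, c, and the Sun's mass and radius; the "critical radius" it reports is the radius at which a star of the Sun's mean density would have a surface escape speed equal to c, which is how Michell framed the problem.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # kg
R_sun = 6.96e8         # m
rho_sun = M_sun / (4 / 3 * math.pi * R_sun**3)   # mean solar density, ~1.4e3 kg/m^3

def v_escape(M, R):
    """Surface escape speed, Equation 13.6: v_esc = sqrt(2GM/R)."""
    return math.sqrt(2 * G * M / R)

print(f"Escape speed at the Sun's surface: {v_escape(M_sun, R_sun) / 1e3:.0f} km/s")

# For a star of fixed density rho, M = (4/3) pi rho R^3, so v_esc = c when
# R = c * sqrt(3 / (8 pi G rho)).
R_crit = c * math.sqrt(3 / (8 * math.pi * G * rho_sun))
print(f"Critical radius at solar density: {R_crit:.2e} m "
      f"(~{R_crit / R_sun:.0f} solar radii, ~{R_crit / 1.496e11:.1f} AU)")
```

The critical radius comes out to a few hundred solar radii (roughly 2 AU), which is the scale of Michell's estimate described next.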
Michell posited that if a star had the density of our Sun and a radius that extended just beyond the orbit of Mars, then light would not be able to escape from its surface. He also conjectured that we would still be able to detect such a star from the gravitational effect it would have on objects around it. This was an insightful conclusion, as this is precisely how we infer the existence of such objects today. While we have yet to, and may never, visit a black hole, the circumstantial evidence for them has become so compelling that few astronomers doubt their existence. Before we examine some of that evidence, we turn our attention back to Schwarzschild's solution to the tensor equation from general relativity. In that solution arises a critical radius, now called the Schwarzschild radius (RS). For any mass M, if that mass were compressed to the extent that its radius becomes less than the Schwarzschild radius, then the mass will collapse to a singularity, and anything that passes inside that radius cannot escape. Once inside RS, the arrow of time takes all things to the singularity. (In a broad mathematical sense, a singularity is where the value of a function goes to infinity. In this case, it is a point in space of zero volume with a finite mass. Hence, the mass density and gravitational energy become infinite.) The Schwarzschild radius is given by $$R_{S} = \dfrac{2GM}{c^{2}} \ldotp \label{13.12}$$ If you look at our escape velocity equation with vesc = c, you will notice that it gives precisely this result. But that is merely a fortuitous accident caused by several incorrect assumptions. One of these assumptions is the use of the incorrect classical expression for the kinetic energy for light. Just how dense does an object have to be in order to turn into a black hole? Example \(\PageIndex{1}\): Calculating the Schwarzschild Radius Calculate the Schwarzschild radius for both the Sun and Earth. Compare the density of the nucleus of an atom to the density required to compress Earth's mass uniformly to its Schwarzschild radius. The density of a nucleus is about 2.3 x 1017 kg/m3. We use Equation 13.12 for this calculation. We need only the masses of Earth and the Sun, which we obtain from the astronomical data given in Appendix D. Substituting the mass of the Sun, we have $$R_{S} = \dfrac{2GM}{c^{2}} = \dfrac{2(6.67 \times 10^{-11}\; N\; \cdotp m^{2}/kg^{2})(1.99 \times 10^{30}\; kg)}{(3.0 \times 10^{8}\; m/s)^{2}} = 2.95 \times 10^{3}\; m \ldotp$$ This is a diameter of only about 6 km. If we use the mass of Earth, we get RS = 8.85 x 10−3 m. This is a diameter of less than 2 cm! If we pack Earth's mass into a sphere with the radius RS = 8.85 x 10−3 m, we get a density of $$\rho = \dfrac{mass}{volume} = \dfrac{5.97 \times 10^{24}\; kg}{\dfrac{4}{3} \pi (8.85 \times 10^{-3}\; m)^{3}} = 2.06 \times 10^{30}\; kg/m^{3} \ldotp$$ A neutron star is the most compact object known—outside of a black hole itself. The neutron star is composed of neutrons, with the density of an atomic nucleus, and, like many black holes, is believed to be the remnant of a supernova—a star that explodes at the end of its lifetime. To create a black hole from Earth, we would have to compress it to a density thirteen orders of magnitude greater than that of a neutron star. This process would require unimaginable force. There is no known mechanism that could cause an Earth-sized object to become a black hole. 
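The arithmetic in this example is easy to check with a short script. The following minimal sketch simply re-evaluates Equation 13.12 and the density calculation using rounded constants, so the last digits may differ slightly from the text.

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
c = 2.998e8           # m/s
M_sun = 1.989e30      # kg
M_earth = 5.97e24     # kg
rho_nucleus = 2.3e17  # kg/m^3, density of an atomic nucleus

def schwarzschild_radius(M):
    """Equation 13.12: R_S = 2GM / c^2."""
    return 2 * G * M / c**2

R_S_sun = schwarzschild_radius(M_sun)      # ~2.95e3 m
R_S_earth = schwarzschild_radius(M_earth)  # ~8.9e-3 m
print(f"R_S(Sun)   = {R_S_sun:.2e} m")
print(f"R_S(Earth) = {R_S_earth:.2e} m")

# Density of Earth's mass packed into a sphere of radius R_S(Earth)
rho_earth_bh = M_earth / (4 / 3 * math.pi * R_S_earth**3)
print(f"Required density = {rho_earth_bh:.2e} kg/m^3, "
      f"about {rho_earth_bh / rho_nucleus:.0e} times nuclear density")
```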
For the Sun, you should be able to show that it would have to be compressed to a density only about 80 times that of a nucleus. (Note: Once the mass is compressed within its Schwarzschild radius, general relativity dictates that it will collapse to a singularity. These calculations merely show the density we must achieve to initiate that collapse.) Exercises \(\PageIndex{1}\) Consider the density required to make Earth a black hole compared to that required for the Sun. What conclusion can you draw from this comparison about what would be required to create a black hole? Would you expect the Universe to have many black holes with small mass? The Schwarzschild radius is also called the event horizon of a black hole. We noted that both space and time are stretched near massive objects, such as black holes. Figure 13.30 illustrates that effect on space. The distortion caused by our Sun is actually quite small, and the diagram is exaggerated for clarity. Consider the neutron star, described in Example 13.15. Although the distortion of space-time at the surface of a neutron star is very high, the radius is still larger than its Schwarzschild radius. Objects could still escape from its surface. However, if a neutron star gains additional mass, it would eventually collapse, shrinking beyond the Schwarzschild radius. Once that happens, the entire mass would be pulled, inevitably, to a singularity. In the diagram, space is stretched to infinity. Time is also stretched to infinity. As objects fall toward the event horizon, we see them approaching ever more slowly, but never reaching the event horizon. As outside observers, we never see objects pass through the event horizon—effectively, time is stretched to a stop. Visit this site to view an animated example of these spatial distortions. Figure \(\PageIndex{3}\): The space distortion becomes more noticeable around increasingly larger masses. Once the mass density reaches a critical level, a black hole forms and the fabric of space-time is torn. The curvature of space is greatest at the surface of each of the first three objects shown and is finite. The curvature then decreases (not shown) to zero as you move to the center of the object. But the black hole is different. The curvature becomes infinite: The surface has collapsed to a singularity, and the cone extends to infinity. (Note: These diagrams are not to any scale.) Not until the 1960s, when the first neutron star was discovered, did interest in the existence of black holes become renewed. Evidence for black holes is based upon several types of observations, such as radiation analysis of X-ray binaries, gravitational lensing of the light from distant galaxies, and the motion of visible objects around invisible partners. We will focus on these later observations as they relate to what we have learned in this chapter. Although light cannot escape from a black hole for us to see, we can nevertheless see the gravitational effect of the black hole on surrounding masses. The closest, and perhaps most dramatic, evidence for a black hole is at the center of our Milky Way galaxy. The UCLA Galactic Group, using data obtained by the W. M. Keck telescopes, has determined the orbits of several stars near the center of our galaxy. Some of that data is shown in Figure \(\PageIndex{4}\). The orbits of two stars are highlighted. From measurements of the periods and sizes of their orbits, it is estimated that they are orbiting a mass of approximately 4 million solar masses. 
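The mass quoted here follows directly from Kepler's third law applied to the measured orbits. The sketch below is illustrative only: the orbital elements used (a period of about 16 years and a semi-major axis of about 1000 AU, roughly those of the star S0-2) are rounded values assumed for this estimate, not data read from the figure.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
AU = 1.496e11        # m
YEAR = 3.156e7       # s

# Assumed, rounded orbital elements for a star orbiting the galactic center
T = 16.0 * YEAR      # orbital period
a = 1000.0 * AU      # semi-major axis

# Kepler's third law: T^2 = 4 pi^2 a^3 / (G M)  =>  M = 4 pi^2 a^3 / (G T^2)
M_enclosed = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Enclosed mass ~ {M_enclosed / M_sun:.1e} solar masses")  # a few million M_sun
```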
Note that the mass must reside in the region created by the intersection of the ellipses of the stars. The region in which that mass must reside would fit inside the orbit of Mercury—yet nothing is seen there in the visible spectrum. Figure \(\PageIndex{4}\): Paths of stars orbiting about a mass at the center of our Milky Way galaxy. From their motion, it is estimated that a black hole of about 4 million solar masses resides at the center. (credit: UCLA Galactic Center Group – W.M. Keck Observatory Laser Team) The physics of stellar creation and evolution is well established. The ultimate source of energy that makes stars shine is the self-gravitational energy that triggers fusion. The general behavior is that the more massive a star, the brighter it shines and the shorter it lives. The logical inference is that a mass that is 4 million times the mass of our Sun, confined to a very small region, and that cannot be seen, has no viable interpretation other than a black hole. Extragalactic observations strongly suggest that black holes are common at the center of galaxies. Visit the UCLA Galactic Center Group main page for information on X-ray binaries and gravitational lensing. Visit this page to view a three-dimensional visualization of the stars orbiting near the center of our galaxy, where the animation is near the bottom of the page. Stars orbiting near the very heart of our galaxy provide strong evidence for a black hole there, but the orbits of stars far from the center suggest another intriguing phenomenon that is observed indirectly as well. Recall from Gravitation Near Earth's Surface that we can consider the mass for spherical objects to be located at a point at the center for calculating their gravitational effects on other masses. Similarly, we can treat the total mass that lies within the orbit of any star in our galaxy as being located at the center of the Milky Way disc. We can estimate that mass from counting the visible stars and include in our estimate the mass of the black hole at the center as well. But when we do that, we find the orbital speed of the stars is far too fast to be caused by that amount of matter. Figure \(\PageIndex{5}\) shows the orbital velocities of stars as a function of their distance from the center of the Milky Way. The blue line represents the velocities we would expect from our estimates of the mass, whereas the green curve is what we get from direct measurements. Apparently, there is a lot of matter we don't see, estimated to be about five times as much as what we do see, so it has been dubbed dark matter. Furthermore, the velocity profile does not follow what we expect from the observed distribution of visible stars. Not only is the estimate of the total mass inconsistent with the data, but the expected distribution is inconsistent as well. And this phenomenon is not restricted to our galaxy, but seems to be a feature of all galaxies. In fact, the issue was first noted in the 1930s when galaxies within clusters were measured to be orbiting about the center of mass of those clusters faster than they should based upon visible mass estimates. Figure \(\PageIndex{5}\): The blue curve shows the expected orbital velocity of stars in the Milky Way based upon the visible stars we can see. The green curve shows that the actual velocities are higher, suggesting additional matter that cannot be seen. (credit: modification of work by Matthew Newby) There are two prevailing ideas of what this matter could be—WIMPs and MACHOs.
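The mismatch can be made concrete with the point-mass approximation described above: if essentially all of the visible mass M lay interior to a star's orbit, its circular speed would fall off as $v = \sqrt{GM/r}$. The sketch below compares that Keplerian falloff with a flat rotation curve. The visible-mass figure ($10^{11}$ solar masses) and the flat speed (230 km/s) are round, illustrative numbers chosen here, not fitted values from the figure.

from math import sqrt

G = 6.67e-11      # N·m^2/kg^2
M_sun = 1.99e30   # kg
kpc = 3.086e19    # meters per kiloparsec

M_visible = 1.0e11 * M_sun   # illustrative estimate of the visible (luminous) mass

def keplerian_speed_kms(r_kpc, M=M_visible):
    # Circular orbital speed in km/s if mass M were concentrated at the galactic center.
    r = r_kpc * kpc
    return sqrt(G * M / r) / 1e3

v_observed = 230.0   # km/s, roughly the measured speed over a wide range of radii

for r in (5, 10, 20, 40):
    print(r, round(keplerian_speed_kms(r)), v_observed)
# The expected speed falls as 1/sqrt(r), while the measured curve stays roughly flat,
# which is the signature of additional unseen (dark) matter at large radii.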
WIMPs stands for weakly interacting massive particles. These particles (neutrinos are one example) interact very weakly with ordinary matter and, hence, are very difficult to detect directly. MACHOs stands for massive compact halo objects, which are composed of ordinary baryonic matter, such as neutrons and protons. There are unresolved issues with both of these ideas, and far more research will be needed to solve the mystery. Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
Ford–Fulkerson algorithm The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" as the approach to finding augmenting paths in a residual graph is not fully specified[1] or it is specified in several implementations with different running times.[2] It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson.[3] The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path. Algorithm Let $G(V,E)$ be a graph, and for each edge from u to v, let $c(u,v)$ be the capacity and $f(u,v)$ be the flow. We want to find the maximum flow from the source s to the sink t. After every step in the algorithm the following is maintained: Capacity constraints $\forall (u,v)\in E:\ f(u,v)\leq c(u,v)$The flow along an edge cannot exceed its capacity. Skew symmetry $\forall (u,v)\in E:\ f(u,v)=-f(v,u)$The net flow from u to v must be the opposite of the net flow from v to u (see example). Flow conservation $\forall u\in V:u\neq s{\text{ and }}u\neq t\Rightarrow \sum _{w\in V}f(u,w)=0$The net flow to a node is zero, except for the source, which "produces" flow, and the sink, which "consumes" flow. Value(f) $\sum _{(s,u)\in E}f(s,u)=\sum _{(v,t)\in E}f(v,t)$The flow leaving from s must be equal to the flow arriving at t. This means that the flow through the network is a legal flow after each round in the algorithm. We define the residual network $G_{f}(V,E_{f})$ to be the network with capacity $c_{f}(u,v)=c(u,v)-f(u,v)$ and no flow. Notice that it can happen that a flow from v to u is allowed in the residual network, though disallowed in the original network: if $f(u,v)>0$ and $c(v,u)=0$ then $c_{f}(v,u)=c(v,u)-f(v,u)=f(u,v)>0$. Algorithm Ford–Fulkerson Inputs Given a Network $G=(V,E)$ with flow capacity c, a source node s, and a sink node t Output Compute a flow f from s to t of maximum value 1. $f(u,v)\leftarrow 0$ for all edges $(u,v)$ 2. While there is a path p from s to t in $G_{f}$, such that $c_{f}(u,v)>0$ for all edges $(u,v)\in p$: 1. Find $c_{f}(p)=\min\{c_{f}(u,v):(u,v)\in p\}$ 2. For each edge $(u,v)\in p$ 1. $f(u,v)\leftarrow f(u,v)+c_{f}(p)$ (Send flow along the path) 2. $f(v,u)\leftarrow f(v,u)-c_{f}(p)$ (The flow might be "returned" later) • "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item. • "return" terminates the algorithm and outputs the following value. The path in step 2 can be found with, for example, a breadth-first search (BFS) or a depth-first search in $G_{f}(V,E_{f})$. If you use the former, the algorithm is called Edmonds–Karp. When no more paths in step 2 can be found, s will not be able to reach t in the residual network. If S is the set of nodes reachable by s in the residual network, then the total capacity in the original network of edges from S to the remainder of V is on the one hand equal to the total flow we found from s to t, and on the other hand serves as an upper bound for all such flows. This proves that the flow we found is maximal. See also Max-flow Min-cut theorem. 
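The pseudocode above leaves the choice of augmenting path open. The following is a minimal Python sketch of the generic Ford–Fulkerson method that finds augmenting paths by depth-first search; it is an illustration written for this article rather than a quotation of it (the Edmonds–Karp implementation further below uses breadth-first search instead). The capacity-matrix representation and the function names are choices made here for clarity.

def ford_fulkerson(capacity, source, sink):
    # Generic Ford-Fulkerson: repeatedly find an augmenting path in the residual
    # graph by depth-first search and push the bottleneck flow along it.
    # `capacity` is an n x n matrix of edge capacities (0 where there is no edge).
    n = len(capacity)
    residual = [row[:] for row in capacity]   # residual capacities c_f(u, v)

    def dfs_path(u, visited):
        if u == sink:
            return [u]
        visited.add(u)
        for v in range(n):
            if v not in visited and residual[u][v] > 0:
                path = dfs_path(v, visited)
                if path is not None:
                    return [u] + path
        return None

    max_flow = 0
    while True:
        path = dfs_path(source, set())
        if path is None:               # no augmenting path remains: the flow is maximal
            break
        bottleneck = min(residual[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck    # flow that may be "returned" later
        max_flow += bottleneck
    return max_flow

# Small check on the 4-node worst-case network discussed below
# (0 = A, 1 = B, 2 = C, 3 = D); the maximum flow is 2000.
cap = [[0, 1000, 1000, 0],
       [0, 0, 1, 1000],
       [0, 0, 0, 1000],
       [0, 0, 0, 0]]
print(ford_fulkerson(cap, 0, 3))   # 2000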
If the graph $G(V,E)$ has multiple sources and sinks, we act as follows: Suppose that $T=\{t\mid t{\text{ is a sink}}\}$ and $S=\{s\mid s{\text{ is a source}}\}$. Add a new source $s^{*}$ with an edge $(s^{*},s)$ from $s^{*}$ to every node $s\in S$, with capacity $c(s^{*},s)=d_{s}=\sum _{(s,u)\in E}c(s,u)$. And add a new sink $t^{*}$ with an edge $(t,t^{*})$ from every node $t\in T$ to $t^{*}$, with capacity $c(t,t^{*})=d_{t}=\sum _{(v,t)\in E}c(v,t)$. Then apply the Ford–Fulkerson algorithm. Also, if a node u has capacity constraint $d_{u}$, we replace this node with two nodes $u_{\mathrm {in} },u_{\mathrm {out} }$, and an edge $(u_{\mathrm {in} },u_{\mathrm {out} })$, with capacity $c(u_{\mathrm {in} },u_{\mathrm {out} })=d_{u}$. Then apply the Ford–Fulkerson algorithm. Complexity By adding the flow augmenting path to the flow already established in the graph, the maximum flow will be reached when no more flow augmenting paths can be found in the graph. However, there is no certainty that this situation will ever be reached, so the best that can be guaranteed is that the answer will be correct if the algorithm terminates. In the case that the algorithm runs forever, the flow might not even converge towards the maximum flow. However, this situation only occurs with irrational flow values.[4] When the capacities are integers, the runtime of Ford–Fulkerson is bounded by $O(Ef)$ (see big O notation), where $E$ is the number of edges in the graph and $f$ is the maximum flow in the graph. This is because each augmenting path can be found in $O(E)$ time and increases the flow by an integer amount of at least $1$, with the upper bound $f$. A variation of the Ford–Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds–Karp algorithm, which runs in $O(VE^{2})$ time. Integral example The following example shows the first steps of Ford–Fulkerson in a flow network with 4 nodes, source $A$ and sink $D$. This example shows the worst-case behaviour of the algorithm. In each step, only a flow of $1$ is sent across the network. If breadth-first-search were used instead, only two steps would be needed. Path Capacity Resulting flow network Initial flow network $A,B,C,D$ ${\begin{aligned}&\min(c_{f}(A,B),c_{f}(B,C),c_{f}(C,D))\\={}&\min(c(A,B)-f(A,B),c(B,C)-f(B,C),c(C,D)-f(C,D))\\={}&\min(1000-0,1-0,1000-0)=1\end{aligned}}$ $A,C,B,D$ ${\begin{aligned}&\min(c_{f}(A,C),c_{f}(C,B),c_{f}(B,D))\\={}&\min(c(A,C)-f(A,C),c(C,B)-f(C,B),c(B,D)-f(B,D))\\={}&\min(1000-0,0-(-1),1000-0)=1\end{aligned}}$ After 1998 more steps ... Final flow network Notice how flow is "pushed back" from $C$ to $B$ when finding the path $A,C,B,D$. Non-terminating example Consider the flow network shown on the right, with source $s$, sink $t$, capacities of edges $e_{1}$, $e_{2}$ and $e_{3}$ respectively $1$, $r=({\sqrt {5}}-1)/2$ and $1$ and the capacity of all other edges some integer $M\geq 2$. The constant $r$ was chosen so, that $r^{2}=1-r$. We use augmenting paths according to the following table, where $p_{1}=\{s,v_{4},v_{3},v_{2},v_{1},t\}$, $p_{2}=\{s,v_{2},v_{3},v_{4},t\}$ and $p_{3}=\{s,v_{1},v_{2},v_{3},t\}$. 
Step | Augmenting path | Sent flow | Residual capacities ($e_{1}$, $e_{2}$, $e_{3}$)
0 | - | - | $r^{0}=1$, $r$, $1$
1 | $\{s,v_{2},v_{3},t\}$ | $1$ | $r^{0}$, $r^{1}$, $0$
2 | $p_{1}$ | $r^{1}$ | $r^{2}$, $0$, $r^{1}$
3 | $p_{2}$ | $r^{1}$ | $r^{2}$, $r^{1}$, $0$
4 | $p_{1}$ | $r^{2}$ | $0$, $r^{3}$, $r^{2}$
5 | $p_{3}$ | $r^{2}$ | $r^{2}$, $r^{3}$, $0$

Note that after step 1 as well as after step 5, the residual capacities of edges $e_{1}$, $e_{2}$ and $e_{3}$ are in the form $r^{n}$, $r^{n+1}$ and $0$, respectively, for some $n\in \mathbb {N} $. This means that we can use augmenting paths $p_{1}$, $p_{2}$, $p_{1}$ and $p_{3}$ infinitely many times and residual capacities of these edges will always be in the same form. Total flow in the network after step 5 is $1+2(r^{1}+r^{2})$. If we continue to use augmenting paths as above, the total flow converges to $\textstyle 1+2\sum _{i=1}^{\infty }r^{i}=3+2r$. However, note that there is a flow of value $2M+1$, by sending $M$ units of flow along $sv_{1}t$, 1 unit of flow along $sv_{2}v_{3}t$, and $M$ units of flow along $sv_{4}t$. Therefore, the algorithm never terminates and the flow does not even converge to the maximum flow.[5] Another non-terminating example based on the Euclidean algorithm is given by Backman & Huynh (2018), where they also show that the worst case running-time of the Ford-Fulkerson algorithm on a network $G(V,E)$ in ordinal numbers is $\omega ^{\Theta (|E|)}$.

Python implementation of Edmonds–Karp algorithm

import collections

class Graph:
    """This class represents a directed graph using adjacency matrix representation."""

    def __init__(self, graph):
        self.graph = graph  # residual graph
        self.row = len(graph)

    def bfs(self, s, t, parent):
        """Returns true if there is a path from source 's' to sink 't' in
        residual graph. Also fills parent[] to store the path."""
        # Mark all the vertices as not visited
        visited = [False] * self.row
        # Create a queue for BFS
        queue = collections.deque()
        # Mark the source node as visited and enqueue it
        queue.append(s)
        visited[s] = True
        # Standard BFS loop
        while queue:
            u = queue.popleft()
            # Get all adjacent vertices of the dequeued vertex u
            # If an adjacent has not been visited, then mark it
            # visited and enqueue it
            for ind, val in enumerate(self.graph[u]):
                if (visited[ind] == False) and (val > 0):
                    queue.append(ind)
                    visited[ind] = True
                    parent[ind] = u
        # If we reached sink in BFS starting from source, then return
        # true, else false
        return visited[t]

    # Returns the maximum flow from s to t in the given graph
    def edmonds_karp(self, source, sink):
        # This array is filled by BFS and to store path
        parent = [-1] * self.row
        max_flow = 0  # There is no flow initially
        # Augment the flow while there is path from source to sink
        while self.bfs(source, sink, parent):
            # Find minimum residual capacity of the edges along the
            # path filled by BFS. Or we can say find the maximum flow
            # through the path found.
            path_flow = float("Inf")
            s = sink
            while s != source:
                path_flow = min(path_flow, self.graph[parent[s]][s])
                s = parent[s]
            # Add path flow to overall flow
            max_flow += path_flow
            # update residual capacities of the edges and reverse edges
            # along the path
            v = sink
            while v != source:
                u = parent[v]
                self.graph[u][v] -= path_flow
                self.graph[v][u] += path_flow
                v = parent[v]
        return max_flow

See also • Berge's theorem • Approximate max-flow min-cut theorem • Turn restriction routing • Dinic's algorithm Notes 1. Laung-Terng Wang, Yao-Wen Chang, Kwang-Ting (Tim) Cheng (2009). Electronic Design Automation: Synthesis, Verification, and Test. Morgan Kaufmann. pp. 204.
ISBN 978-0080922003.{{cite book}}: CS1 maint: multiple names: authors list (link) 2. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2009). Introduction to Algorithms. MIT Press. pp. 714. ISBN 978-0262258104. 3. Ford, L. R.; Fulkerson, D. R. (1956). "Maximal flow through a network" (PDF). Canadian Journal of Mathematics. 8: 399–404. doi:10.4153/CJM-1956-045-5. S2CID 16109790. 4. "Ford-Fulkerson Max Flow Labeling Algorithm". 1998. CiteSeerX 10.1.1.295.9049. 5. Zwick, Uri (21 August 1995). "The smallest networks on which the Ford–Fulkerson maximum flow procedure may fail to terminate". Theoretical Computer Science. 148 (1): 165–170. doi:10.1016/0304-3975(95)00022-O. References • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 26.2: The Ford–Fulkerson method". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill. pp. 651–664. ISBN 0-262-03293-7. • George T. Heineman; Gary Pollice; Stanley Selkow (2008). "Chapter 8:Network Flow Algorithms". Algorithms in a Nutshell. Oreilly Media. pp. 226–250. ISBN 978-0-596-51624-6. • Jon Kleinberg; Éva Tardos (2006). "Chapter 7:Extensions to the Maximum-Flow Problem". Algorithm Design. Pearson Education. pp. 378–384. ISBN 0-321-29535-8. • Samuel Gutekunst (2009). ENGRI 1101. Cornell University. • Backman, Spencer; Huynh, Tony (2018). "Transfinite Ford–Fulkerson on a finite network". Computability. 7 (4): 341–347. arXiv:1504.04363. doi:10.3233/COM-180082. S2CID 15497138. External links • A tutorial explaining the Ford–Fulkerson method to solve the max-flow problem • Another Java animation • Java Web Start application Media related to Ford-Fulkerson's algorithm at Wikimedia Commons
Representation (mathematics) In mathematics, a representation is a very general relationship that expresses similarities (or equivalences) between mathematical objects or structures. Roughly speaking, a collection Y of mathematical objects may be said to represent another collection X of objects, provided that the properties and relationships existing among the representing objects yi conform, in some consistent way, to those existing among the corresponding represented objects xi. More specifically, given a set Π of properties and relations, a Π-representation of some structure X is a structure Y that is the image of X under a homomorphism that preserves Π. The label representation is sometimes also applied to the homomorphism itself (such as group homomorphism in group theory).[1][2] Representation theory Perhaps the most well-developed example of this general notion is the subfield of abstract algebra called representation theory, which studies the representing of elements of algebraic structures by linear transformations of vector spaces.[2] Other examples Although the term representation theory is well established in the algebraic sense discussed above, there are many other uses of the term representation throughout mathematics. Graph theory An active area of graph theory is the exploration of isomorphisms between graphs and other structures. A key class of such problems stems from the fact that, like adjacency in undirected graphs, intersection of sets (or, more precisely, non-disjointness) is a symmetric relation. This gives rise to the study of intersection graphs for innumerable families of sets.[3] One foundational result here, due to Paul Erdős and his colleagues, is that every n-vertex graph may be represented in terms of intersection among subsets of a set of size no more than n2/4.[4] Representing a graph by such algebraic structures as its adjacency matrix and Laplacian matrix gives rise to the field of spectral graph theory.[5] Order theory Dual to the observation above that every graph is an intersection graph is the fact that every partially ordered set (also known as poset) is isomorphic to a collection of sets ordered by the inclusion (or containment) relation ⊆. Some posets that arise as the inclusion orders for natural classes of objects include the Boolean lattices and the orders of dimension n.[6] Many partial orders arise from (and thus can be represented by) collections of geometric objects. Among them are the n-ball orders. The 1-ball orders are the interval-containment orders, and the 2-ball orders are the so-called circle orders—the posets representable in terms of containment among disks in the plane. A particularly nice result in this field is the characterization of the planar graphs, as those graphs whose vertex-edge incidence relations are circle orders.[7] There are also geometric representations that are not based on inclusion. Indeed, one of the best studied classes among these are the interval orders,[8] which represent the partial order in terms of what might be called disjoint precedence of intervals on the real line: each element x of the poset is represented by an interval [x1, x2], such that for any y and z in the poset, y is below z if and only if y2 < z1. Logic In logic, the representability of algebras as relational structures is often used to prove the equivalence of algebraic and relational semantics. 
Examples of this include Stone's representation of Boolean algebras as fields of sets,[9] Esakia's representation of Heyting algebras as Heyting algebras of sets,[10] and the study of representable relation algebras and representable cylindric algebras.[11] Polysemy Under certain circumstances, a single function f : X → Y is at once an isomorphism from several mathematical structures on X. Since each of those structures may be thought of, intuitively, as a meaning of the image Y (one of the things that Y is trying to tell us), this phenomenon is called polysemy—a term borrowed from linguistics. Some examples of polysemy include: • intersection polysemy—pairs of graphs G1 and G2 on a common vertex set V that can be simultaneously represented by a single collection of sets Sv, such that any distinct vertices u and w in V are adjacent in G1, if and only if their corresponding sets intersect ( Su ∩ Sw ≠ Ø ), and are adjacent in G2 if and only if the complements do ( SuC ∩ SwC ≠ Ø ).[12] • competition polysemy—motivated by the study of ecological food webs, in which pairs of species may have prey in common or have predators in common. A pair of graphs G1 and G2 on one vertex set is competition polysemic, if and only if there exists a single directed graph D on the same vertex set, such that any distinct vertices u and v are adjacent in G1, if and only if there is a vertex w such that both uw and vw are arcs in D, and are adjacent in G2, if and only if there is a vertex w such that both wu and wv are arcs in D.[13] • interval polysemy—pairs of posets P1 and P2 on a common ground set that can be simultaneously represented by a single collection of real intervals, that is an interval-order representation of P1 and an interval-containment representation of P2.[14] See also • Group representation • Representation theorems • Model theory References 1. Weisstein, Eric W. "Group Representation". mathworld.wolfram.com. Retrieved 2019-12-07. 2. Teleman, Constantin. "Representation Theory" (PDF). math.berkeley.edu. Retrieved 2019-12-07.{{cite web}}: CS1 maint: url-status (link) 3. McKee, Terry A.; McMorris, F. R. (1999), Topics in Intersection Graph Theory, SIAM Monographs on Discrete Mathematics and Applications, Philadelphia: Society for Industrial and Applied Mathematics, doi:10.1137/1.9780898719802, ISBN 978-0-89871-430-2, MR 1672910 4. Erdős, Paul; Goodman, A. W.; Pósa, Louis (1966), "The representation of a graph by set intersections", Canadian Journal of Mathematics, 18 (1): 106–112, CiteSeerX 10.1.1.210.6950, doi:10.4153/cjm-1966-014-3, MR 0186575 5. Biggs, Norman (1994), Algebraic Graph Theory, Cambridge Mathematical Library, Cambridge University Press, ISBN 978-0-521-45897-9, MR 1271140 6. Trotter, William T. (1992), Combinatorics and Partially Ordered Sets: Dimension Theory, Johns Hopkins Series in the Mathematical Sciences, Baltimore: The Johns Hopkins University Press, ISBN 978-0-8018-4425-6, MR 1169299 7. Scheinerman, Edward (1991), "A note on planar graphs and circle orders", SIAM Journal on Discrete Mathematics, 4 (3): 448–451, doi:10.1137/0404040, MR 1105950 8. Fishburn, Peter C. (1985), Interval Orders and Interval Graphs: A Study of Partially Ordered Sets, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons, ISBN 978-0-471-81284-5, MR 0776781 9. Marshall H. Stone (1936) "The Theory of Representations of Boolean Algebras," Transactions of the American Mathematical Society 40: 37-111. 10. Esakia, Leo (1974). "Topological Kripke models". Soviet Math. 15 (1): 147–151. 11. 
Hirsch, R.; Hodkinson, I. (2002). Relation Algebra by Games. Studies in Logic and the Foundations of Mathematics. Vol. 147. Elsevier Science. 12. Tanenbaum, Paul J. (1999), "Simultaneous intersection representation of pairs of graphs", Journal of Graph Theory, 32 (2): 171–190, doi:10.1002/(SICI)1097-0118(199910)32:2<171::AID-JGT7>3.0.CO;2-N, MR 1709659 13. Fischermann, Miranca; Knoben, Werner; Kremer, Dirk; Rautenbachh, Dieter (2004), "Competition polysemy", Discrete Mathematics, 282 (1–3): 251–255, doi:10.1016/j.disc.2003.11.014, MR 2059526 14. Tanenbaum, Paul J. (1996), "Simultaneous representation of interval and interval-containment orders", Order, 13 (4): 339–350, CiteSeerX 10.1.1.53.8988, doi:10.1007/BF00405593, MR 1452517
Is there a relation between the number of lattice points lie within these circles Suppose we have a circle of radius $r$ centered at the origin $(0,0)$. The number of integer lattice points within the circle, $N$, can be bounded using Gauss circle problem . Suppose that another circle of radius $r/2$ centered at the origin inside the initial circle of radius $r$, let $N^*$ represents the number of integer lattice points within the the smallest circle. I am mainly interested in the relation between $N$ and $N^*$. More precisely, to find the number of integer lattice points within the circle of radius $r$ and outside(and at the boundary of) the circle of radius $r/2$. This page provides the number $N$ for some distances $r$ in $2$ dimensions. For example if we take "ignore the integer lattice point represents the origin": $r=4$, then $N^*=12, N=48 $ and $N^* = \frac{1}{4}N$ $r=6$, then $N^*=28, N=112 $ and $N^* = \frac{1}{4}N$ $r=40$, then $N^*=1256, N=5024 $ and $N^* = \frac{1}{4}N$ By doing more calculations, in general (considering $r$ is even *for simplicity) we can say $$\frac{1}{5}N \leq N^* \leq \frac{1}{3}N$$ I searched in the literature to find something about this relation in $2$ or $d$ dimensions but without success. I think there is a result or a bound about it, could you kindly direct me to a reference or to a bound and I would be very grateful. nt.number-theory reference-request co.combinatorics analytic-number-theory lattices Noah16 Noah16Noah16 $\begingroup$ Here is a visualization that may lead to quick results. Draw a circle of radius r and count the lattice points. Then (instead of drawing a circle of radius 2r) color in the points with coordinates both multiples of 1/2. This is almost like making four copies of the original lattice. Now draw three more circles of radius r with origin near one of the shifted versions of the original origin, and consider what points lie outside the intersection (or don't get copied). Gerhard "Try Eyeballing As Discovery Method" Paseman, 2018.09.19. $\endgroup$ – Gerhard Paseman Sep 19 '18 at 18:00 $\begingroup$ As the Wikipedia page notes, the standard (and fairly straightforward) estimate for this is that $N=\pi r^2+O(r)$ (most of the challenge is in finding the linear term); since similarly, $N^*=\frac14\pi r^2+O(r)$, this immediately gives $N^*=\frac14N+O(\sqrt{N})$. In $d$ dimensions the corresponding statement is $N^*=2^{-d}N+O(N^{(1-1/d)})$, and again it's all pretty straightforward. Do you need more than this? $\endgroup$ – Steven Stadnicki Sep 19 '18 at 19:32 $\begingroup$ @GerhardPaseman thanks for your comment but sorry I didn't understand your idea! Is it possible to clarify more? $\endgroup$ – Noah16 Sep 20 '18 at 15:33 $\begingroup$ Take the larger circle and scale it and the lattice points down by a factor of 1/2. The image looks like the small circle but with more points, most of which are translates of the lattice points inside the small circle by 1/2. The idea is to use these translates to get an idea of the size of (number of points inside big circle minus 4 times number of points inside small circle). Gerhard "Let's See If This Helps" Paseman, 2018.09.20. $\endgroup$ – Gerhard Paseman Sep 20 '18 at 15:39 $\begingroup$ @StevenStadnicki thanks. Your calculations confirm what I wrote above but I am looking for something more powerful and accurate so that I can enumerate the points lie between circles as detailed in my question. Unfortunately, your calculations will not provide an upper and lower bounds. 
In fact I thought there is a result about this and because of that I asked for a reference! $\endgroup$ – Noah16 Sep 20 '18 at 15:42 Glad to see there's another Noah interested in such questions :). Let $N_{r,d}$ be the number of (non-zero) integer points in a $d$-dimensional ball of radius $r$. I'll try to summarize the state of our knowledge on $N_{r,d}$ below, at least as I understand it. Along the way, I'll answer your question for $d= 2$: $$ 3 \leq \frac{N_{2r,2}}{N_{r,2}}\leq 4.5 $$ for $r \geq 1$, which is tight. (If you exclude radii $r$ with no integer points of length $r$, then the upper bound drops to $4 + 1/6$, which is also tight.) I'll just say up front that a decent quick and dirty approximation is $N_{r,d} = \binom{d+r^2}{r^2}^{1\pm O(1)}$ (for $r \geq 1$). You can ignore the rest of this long post if you're happy knowing that. There are in some sense three different regimes in which $N_{r,d}$ behaves rather differently, as follows. (I'm being deliberately vague for now. Precise statements below.) The case $r \gg \sqrt{d}/2$, where $N_{r,d}$ is basically just the volume of the ball of radius $r$. The case $r \ll \sqrt{d}$, where $N_{r,d}$ is basically just the number of points in $\{-1,0,1\}^d$ with norm $\approx r^2$. The intermediate case $r \approx \sqrt{d}/2$, where it's a bit more complicated. For $d = 2$, there's a sense in which only the first case comes up because, well, the question is only interesting for $r \geq 1$, and $\sqrt{2}/2 = 1/\sqrt{2}$ is less than one. So, you don't see the more complicated structure until you get to higher dimensions. $\mathbf{r \gg \sqrt{d}/2}$: In this regime, volume estimates become very precise. In particular, by noting that every point in $\mathbb{R}^d$ is at distance at most $\sqrt{d}/2$ from $\mathbb{Z}^d$ and applying a simple argument [****], we see that $$\mathrm{Vol}(B_d(r - \sqrt{d}/2)) \leq N_{r,d} \leq \mathrm{Vol}(B_d(r + \sqrt{d}/2)) \; ,$$ where $B_d(r)$ is the $d$-dimensional $\ell_2$ ball in dimension $d$, which has volume $r^d \pi^{d/2}/\Gamma(d/2+1) \approx (2\pi e r^2/d)^{d/2}$. I.e., we have upper and lower bounds for $N_{r,d}$ that differ by a multiplicative factor of $$\Big( \frac{1+\sqrt{d}/(2r)}{1-\sqrt{d}/(2r)} \Big)^{d} \; , $$ which in particular implies that the ratio $N_{2r,d}/N_{r,d}$ that interests you is equal to $2^d$ up to a factor of at most $$\Big( \frac{1+\sqrt{d}/(4r)}{1-\sqrt{d}/(2r)} \Big)^{d} \; . $$ Finishing ${\bf d = 2}$ When $d = 2$, we can use the above estimate to find the maximum and minimum of $N_{2r,2}/N_{r,2}$ relatively easily. We do this by just brute-force computing the ratio for all small $r$. (I used Mathematica and the simple formula $N_{r,2} = \sum _{z_1=-\lfloor r\rfloor }^{\lfloor r\rfloor } (2 \lfloor \sqrt{r^2-z_1^2}\rfloor +1)-1$.) For integer $r^2$, we find that the maximum ratio is achieved at $r=\sqrt{3}$ with $$\frac{N_{2\sqrt{3}, 2}}{N_{\sqrt{3}, 2}} = \frac{36}{8} = 4.5 \; ,$$ and the minimum is actually achieved at $r = 1$ with $$\frac{N_{2, 2}}{N_{1, 2}} = \frac{12}{4} = 3 \; .$$ (The same ratio occurs again at $r = \sqrt{2}$.) If we don't want to include radii where $r^2$ cannot be written as the sum of two squares, then the maximum ratio occurs at $r = \sqrt{8}$ with $$ \frac{N_{2\sqrt{8}, 2}}{N_{\sqrt{8},2}} = \frac{100}{24} = 4.1666\ldots \; .$$ The volume approximation discussed above shows that no more extreme ratios can occur for, $r > 52$, so we can terminate our search there. 
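For completeness, this brute-force search is easy to reproduce. The snippet below is a Python re-implementation written for illustration (the computation described above was done in Mathematica); it uses the summation formula quoted above, restricted to integer $r^2$, and needs Python 3.8+ for math.isqrt.

from math import isqrt

def N(r2):
    # Number of nonzero integer points (x, y) with x^2 + y^2 <= r2, where r2 = r squared.
    r = isqrt(r2)
    return sum(2 * isqrt(r2 - x * x) + 1 for x in range(-r, r + 1)) - 1

# Ratios N_{2r,2} / N_{r,2} over integer r^2; checking r <= 52 suffices, as argued above.
ratios = {r2: N(4 * r2) / N(r2) for r2 in range(1, 52 * 52 + 1)}

r2_max = max(ratios, key=ratios.get)
r2_min = min(ratios, key=ratios.get)
print(r2_max, ratios[r2_max])   # 3 4.5   (r = sqrt(3): 36/8)
print(r2_min, ratios[r2_min])   # 1 3.0   (r = 1: 12/4)

# Spot check matching the example in the question (r = 4, inner radius 2):
print(N(16), N(4))              # 48 12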
(There's probably a better argument that lets you check fewer radii, but checking up to $r = 52$ isn't too bad.) (Geometrically, we can think of this very nice behavior as a result of the fact that $\mathbb{Z}^2$ actually isn't such a terrible sphere packing/covering, whereas $\mathbb{Z}^d$ is a really bad sphere packing/covering for large $d$.) Here are some plots of $N_{2r,2}/N_{r,2}$: (The bands that are clearly visible in the second image correspond to ratios of the form $4 + c/n$ for fixed $c$ and $n = $. $\mathbf{r \ll \sqrt{d}}$: Here, we can get quite tight bounds simply by noting that when $r^2$ is an integer and $r \ll \sqrt{d}$, $$N_{r,d} \approx |\{z \in \{-1,0,1\}^d \ : \ \|z\|= r\}| = 2^{r^2} \binom{d}{r^2} \; . $$ In other words, almost all of the integer points in a ball of radius $r \ll \sqrt{d}$ actually lie on the sphere of radius $r$ and have coordinates from $\{-1,0,1\}$. This immediately shows that the $2^d$ heuristic fails in this regime. In fact, until $N_{r,d} < 2^d$ for all radii $r$ with, say, $r < 0.4 \sqrt{d}$, so the $2^d$ heuristic obviously fails here. To be more precise, here's a smooth estimate with this flavor: $$ N_{r,d} = (2 e^{1+\chi} d /r^2)^{r^2} \; , $$ for any $r \leq \sqrt{d}/2$, where $B(r)$ is the ball of radius $r$ and the error parameter $\chi$ satisfies $$ -\frac{r^2}{d} - \frac{\log(C r)}{r^2} \leq \chi \leq \sqrt{\frac{C}{\log(d/r^2)}} \; , $$ for some not very large constant $C > 0$ [*]. Notice that these bounds are quite tight for $1 \ll r \ll \sqrt{d}$, since in this regime $\chi$ is subconstant. (A smooth function in $r$ can't give a much better approximation when $r$ is constant since, e.g., $N_{\sqrt{2} - \varepsilon, d} = 2d$, but $N_{\sqrt{2}, d} \approx d^2$. But the binomial coefficient is quite accurate in this case for integer $r^2$.) This gives $$\frac{N_{2r,d}}{N_{r,d}} \approx (e d /r^2)^{3r^2}2^{-5r^2} $$ for $1 \ll r \ll \sqrt{d}$. Geometrically, we can think of this strange behavior as a result of the integer lattice ``having short points that shouldn't be there.'' In particular, in $d$ dimensions, the ball of radius $1$ has volume much less than one, $\approx (C/d)^{d/2}$, but the integer lattice manages to have $2d+1$ points inside this ball anyway. The hard case, ${\bf r \approx \sqrt{d} }$ When $r \ll \sqrt{d}$ or $r \gg \sqrt{d}$, there are nice smooth functions with closed formulas that approximate $N_{r,d}$ well. For $r \approx \sqrt{d}$ one can approximate $N_{r,d}$ to arbitrary accuracy by studying the theta function $\Theta(\tau) := \sum_{z \in \mathbb{Z}} e^{-\tau z^2}$. (Actually, one can do this for all radii $r$, but it just gives the above answers back when $r \ll\sqrt{d}$ or $r \gg \sqrt{d}$.) Mazo and Odlyzko showed this in [**]. Specifically, they showed that $$N_{\alpha \sqrt{d} ,d}^{1/d} = e^{-\chi/\sqrt{d}}\inf_{\tau > 0} e^{\alpha^2 \tau}\Theta(\tau) \; , $$ where $0 \leq \chi \leq C_\alpha$ for some easily computable constant $C_{\alpha}$ that depends only on $\alpha$. (The constant $C_{\alpha}$ is essentially the standard deviation of the Gaussian distribution over the integers with the parameter $\tau$ chosen by the infimum.) Notice that Markov's inequality immediately yields $$N_{\alpha \sqrt{d} ,d}e^{-\alpha^2 d\tau}< \sum_{\stackrel{z \in \mathbb{Z}^d}{\|z\| \leq \alpha \sqrt{d}}} e^{-\tau z^2} < \sum_{z \in \mathbb{Z}^d} e^{-\tau z^2} = \Theta(\tau)^d \; $$ for any $\tau > 0$. Rearranging and taking the infimum shows that $\chi \geq 0$. 
So, the hard part is to show a nearly matching lower bound for appropriately chosen $\tau$, which amounts to showing that the summation is dominated by the contribution from points in a thin shell. Turning back to your original question, one can use this to show that $$\frac{N_{2\alpha \sqrt{d},d}}{N_{\alpha \sqrt{d}, d}} = e^{\chi_{\alpha}' \sqrt{d}} \cdot (C^*_{\alpha})^d$$, where $0 < C^*_{\alpha} < 2$ is some constant depending only on $\alpha$ and $|\chi_\alpha|$ is bounded by some constant also depending only on $\alpha$. Again, $C^*_{\alpha}$ is less than $2$ because ``the integer lattice has too many short points.'' I.e., it is in some sense a really really bad sphere packing [***]. [*] This particular bound is from my thesis (On the Gaussian measure over lattices), where I give an easy proof using the theta function $\Theta(\tau) := \sum_{z \in \mathbb{Z}} e^{-\tau z^2}$. (I'm sure that this result is not original, and the technique of using the theta function was already used by Mazo and Odlyzko in [**] to approximate $N_{r,d}$ in a more interesting regime.) [**] Mazo and Odlyzko. Lattice Points in high-dimensional spheres, 1990. https://link.springer.com/article/10.1007/BF01571276 . [***] With Oded Regev, we showed that it's actually ``the worst packing'' in a certain precise sense, up to a certain error term: https://arxiv.org/abs/1611.05979 . Basically, its theta function is almost maximal for stable (aka, "non-degenerate") lattices. [****] To see this, notice that if we sample a random point $t$ from the cube $[0,1/2]^d$, the expected number of points in a ball of radius $r-\sqrt{d}/2$ around $t$ must equal the volume of the ball exactly. In particular, there exists a $t$ such that this value is at least the volume, and since this $t$ is in the cube, the ball of radius $r$ around the origin contains this ball. (The upper bound follows from the same argument.) Noah Stephens-DavidowitzNoah Stephens-Davidowitz $\begingroup$ Thanks a lot for this effort. I just wanna know why you compared $r$ to $d$ so as a result you divided the process into cases while enumerating the number of lattice points? $\endgroup$ – Noah16 Sep 22 '18 at 20:59 $\begingroup$ i.e Yes I understand why you divided them into 3 cases but is it possible to generalize without comparing $r$ to $d$ $\endgroup$ – Noah16 Sep 22 '18 at 22:49 $\begingroup$ Sorry, I don't understand the question. $N_{r,d}$ depends both on $r$ and $d$. How can one discuss it without "comparing $r$ to $d$"? If you want to fix $d$ and vary $r$ or fix $r$ and vary $d$, then the first and second cases handle this, respectively. The three cases that I described can be collapsed into one by showing that $N_{r,d} \approx \inf_{\tau > 0} e^{r \tau}\Theta(\tau)^d$ for all $r,d$. However, the behavior of $\Theta(\tau)$ is very differently in three different regimes, $\tau \ll 1$, $\tau \gg 1$, and $\tau \approx 1$, which correspond to the three cases that I listed above. $\endgroup$ – Noah Stephens-Davidowitz Sep 23 '18 at 3:03 Not the answer you're looking for? Browse other questions tagged nt.number-theory reference-request co.combinatorics analytic-number-theory lattices or ask your own question. 
\begin{definition}[Definition:Greatest Common Divisor/Integers] Let $a, b \in \Z: a \ne 0 \lor b \ne 0$. The greatest common divisor of $a$ and $b$, denoted $\gcd \left({a, b}\right)$, is defined as the largest $d \in \Z_{>0}$ such that $d$ divides both $a$ and $b$. \end{definition}
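As a quick illustration of this definition (not part of the entry above), the Euclidean algorithm computes the greatest common divisor; a minimal Python sketch:

def gcd(a, b):
    # Greatest common divisor of two integers, not both zero (Euclidean algorithm).
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

print(gcd(12, 18))   # 6: the largest positive integer dividing both 12 and 18
print(gcd(-4, 14))   # 2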
IZA Journal of Development and Migration Land tenure policy and off-farm employment in rural China Hongqin Chang1, Ping Ai1 & Yuan Li1 IZA Journal of Development and Migration volume 8, Article number: 9 (2018) Cite this article Using the data from the three waves (1995, 2002, and 2008) of the Chinese Household Income Project (CHIP), this paper investigates the impact of land tenure security on farmers' labor market outcomes in rural China. To identify the effect of land tenure security, this paper used difference-in-differences strategy to control for time invariant heterogeneity and a number of observed time-varying economic characteristics for its validity. The paper finds that in response to more security land rights, both women and men increase their probability of wage employment participation. JEL Classification: O15, J61, Q15, R23 Context of the study Household responsibility system and land security rights in China Before the land tenure reform in 1978, China carried out collective farming, which was characterized by collective ownership and unified collective operation. Property rights were centrally controlled, and the most severe problem with collective farming was inefficiency. The Household Responsibility System (HRS) had implemented in rural China in 1979 and was essentially completed by the end of 1983 (Lin 1992). Under the HRS, landholdings were distributed among households in a substantially egalitarian fashion (Burgess 1998). Practically, no rural households were landless (Zhang 2001). The underlying idea behind this institutional scheme was to give rural households relative freedom in their productive choices and to grant them secure land-use rights as a means of promoting individual investment. However, land ownership remains in the hands of collective village authorities; therefore, it could not be transferred between households, and land-use rights were contracted to the farmers for a short period of 1 to 2 years. In this context, security of rights over land depends mainly on two factors: the village authorities' land management and the contractual status of the plot. Today, under the framework of the HRS, there are five major tenure types in China (Brandt et al. 2002): responsibility land, grain ration land, contract land, private plot, and reclaimed land. Responsibility land is allocated on the basis of the number of family members, the number of laborers in each family, or the desire and ability of the household to engage in agricultural production. Grain ration land is typically allocated on the basis of household size to ensure that each household produces enough for its own consumption needs. The use of the land does not usually entail quotas or other obligations. A small amount of land was provided to rural households for private plots during the period of collective agriculture, and farmers retained this land when China reverted to family farming. Contract land is rented to households by the villages for a fixed cash payment. The length of these contracts varies considerably from community to community. Farmers can also acquire use rights to reclaimed land that was previously uncultivated. There are usually no quotas or fees tied to the use of the land (Brandt et al. 2002, p. 73–74). Each tenure type encompasses a different set of rights and obligations for rural households and guarantees a different level of security. A household's use rights over private plot and grain ration land can be considered comparatively secure and stable. 
Responsibility land, contract land, and reclaimed land, on the other hand, impose various obligations, such as the delivery of a mandatory quota of grain to the state at below-market prices. Those three types of land can be quite easily transferred and reallocated among households by the collective. A survey by the State Statistical Bureau in 1992 demonstrated that grain ration land only made up 8.4% of cultivated area, and responsibility land covered 84.5% of cultivated land (Cheng and Tsang 1996). Although the HRS intended to implement the land-use rights through a contractual framework, the contracts, in particular, the contract's duration, have not been respected by village collective authorities, who have periodically approved reallocation of land among household villagers. As discussed in Jacoby et al. (2002), reallocation of lands is promoted by local governments because of the following: first, following the demographic change within households, it helps to keep an egalitarian distribution of land (Kung, 1994); second, it reduces the inefficiencies often created by the distribution of land which happens with households' demographic changes, especially in contexts with land rental and labor markets failure (Li 1999; Benjamin and Brandt 2000); and third, it represents for local governments a tool to collect taxes and achieve production quotas (Rozelle and Li 1998). Periodic land reallocation has created uncertainty in rural households about the durability of land contracts and the risk of land expropriation in the future, thereby discouraging some households to decide to allocate labor to migration, to commit labor to off-farm employment, or to rent land. Rural land contracting law Realizing that frequent land reallocation and abusive land requisition has led to the insecurity of the land-use rights of farmers, the government has taken various action to promote land tenure security (Tao and Xu 2007). In 2002, China passed the Rural Land Contracting Law (RLCL) into law (Li 2003). This law goes beyond previous attempts to secure the land rights of farmers.Footnote 1 The RLCL requires farmers and collectives be issued with written contracts and certificates to confirm their land-use rights. These land contracts have a duration of 30 years. The RLCL focuses on four areas, namely (i) a stricter definition of land rights as property rights rather than just private contracts, (ii) a ban on large-scale reallocations of land and limiting small-scale readjustments with clear conditions, (iii) permitting land transfer between households, and (iv) a commitment to issuance of land documents (Deininger et al. 2012). The RLCL provides a legal basis for issues relating to tenure security, marketability, and enforcement of rural household land rights that had previously been dealt with only through administrative means. By giving a legal backing to secure 30-year rights and eliminating the scope for further readjustment of land, the RLCL aims to promote investment, diversification, and productivity. Land rights remain with the household even if some members change their registration status. A second goal of the RLCL is to create a basis for more impersonal transfers of land. Such transfers are of increased relevance to ensure adequate land utilization since, with migration or development of the rural non-farm economy, households respond to non-farm opportunities. 
For this purpose, the law allows land rights to be exchanged and to be leased, transferred, and assigned to others much more easily than was possible before (Deininger et al. 2004). The law also emphasizes the equality of men and women, stipulating that in case of marriage, divorce, or death of the husband, the rights to land of the spouses are maintained unless they receive a new land allocation in their new village. Table 1 presents the security land rights by villages for those years available in our data set (i.e., 1995, 2002, 2008). Land security at village level is measured by means of an indicator that combines information on the share of grain ration land relative to the village total land and whether or not the village retained some flexible land. Section 4.1 provides details on the data set and the construction of the security land indicator. The table shows a total of 795 villages in 1995, and 847 in 2002, among which there are 33 and 53% of villages, respectively, with higher security land rights. In 2008, the numbers of villages is 271, and the villages with higher security land increased to 91%. The statistics summary shows that after the policy change, there was an improvement in land security rights across villages. Thus, the empirical analysis in this paper is based on a comparison across time of labor market outcomes for adults in villages with (i.e., treatment group) and without (i.e., comparison group) land security rights. Table 1 Summary statistics of security land by villages This paper contributes to this recent but growing literature by analyzing the relationship between land security and off-farm employment in rural China, focusing on women's behavior. This research issue is relevant because although women have participated in off-farm activities at rates below those of men, participation rates have risen steadily since 1995. In fact, during the period 1995–2011, the participation rate of women in the off-farm sector rose faster than that of men (Li et al. 2013). Therefore, it is important to explore the extent to which the institutional changes in land tenure, which occurred in recent decades, have contributed to explaining this general pattern in off-farm employment of women in rural China. To explore the link between land security and off-farm employment of women, this study focuses on a major land-policy change in China, the RLCL which had passed in 2002, sought to improve land right of farmers. The RLCL required farmers and collectives be issued with written contracts and certificates to confirm their land-use rights, and the duration of the land contract was to be set at 30 years. One of the consequences of this policy change was the reduction of farmers' risk of losing land rights in future periods due to migration. As a result, incentives promote men and women in rural households to move into off-farm employment and to derive their income from non-agricultural sources. To identify the effect of the RLCL on farmers' labor market outcomes, this paper uses a difference-in-differences strategy to control for time invariant heterogeneity. The data used for the empirical analysis were derived from the Chinese Household Income Project (CHIP) household survey, for the waves of 1995, 2002, and 2008. We explore the impacts of land security rights on the following market outcomes: employment, farm work, off-farm work, wage employment, and self-employment. 
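To make the identification strategy concrete, the following is a minimal sketch, written for illustration only, of a difference-in-differences specification of this general kind in Python with pandas and statsmodels. The synthetic data, the variable names (off_farm, secure_land, post2002, village_id, and so on), and the control set are hypothetical placeholders; this is not the authors' code, and the paper's estimates come from the CHIP micro-data and the specification described in the text.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic person-year data standing in for pooled CHIP waves (1995, 2002, 2008).
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "year": rng.choice([1995, 2002, 2008], n),
    "village_id": rng.integers(0, 100, n),
    "age": rng.integers(16, 66, n),
    "educ_years": rng.integers(0, 13, n),
})
df["secure_land"] = (df["village_id"] < 50).astype(int)   # placeholder village-level treatment
df["post2002"] = (df["year"] > 2002).astype(int)          # after the RLCL took effect
# Synthetic outcome with a built-in effect of 0.10 on the treated post-2002 cell.
df["off_farm"] = (rng.random(n) < 0.25 + 0.10 * df["secure_land"] * df["post2002"]).astype(int)

# Difference-in-differences (linear probability model): the coefficient on
# secure_land:post2002 is the effect of interest; year dummies absorb common shocks,
# and standard errors are clustered at the village level.
did = smf.ols(
    "off_farm ~ secure_land + secure_land:post2002 + age + I(age**2) + educ_years + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["village_id"]})

print(did.params["secure_land:post2002"])   # should recover roughly 0.10 up to sampling noise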
As highlighted, the main analysis focuses on women's outcomes in the labor market; however, for comparative purposes, the behavioral response of men to the RLCL is also explored. We found that the improvement in land rights security derived from the RLCL has a positive influence on both women's and men's labor market behavior. In terms of the pre-policy average, employment increases 5.9% for women and 3.9% for men, and off-farm employment increases 40% for women and 28% for men.
Evolution in time of employment outcomes. Women. Source: The following data sources, if not specifically stated, are from CHIP
Evolution in time of employment outcomes. Men. Source: The following data sources, if not specifically stated, are from CHIP
Our results suggest that the positive impact of these types of policies on off-farm employment, such as the RLCL, could be a factor that explains the increasing tendency observed in the overall employment rate in off-farm activities in China, mainly among women. China's rural economy has undergone radical change since the onset of economic reforms in 1978. The implementation of the HRS in 1979, which replaced the commune system, significantly improved farmers' work incentives by giving them relative freedom in their productive choices and granting secure land-use rights as a means of promoting productive investment (Rozelle et al. 1999). Li et al. (1998) show that in rural China, the production behavior of farmers is affected by the type of land tenure and the associated property rights. For instance, the right to use land for long periods of time encourages the use of land-saving investments, while the lack of private property rights can be seen as a hindrance to efficient allocation and use of land (Dong 1996). In addition, the emergence of land rental market after almost two decades of rural reforms (Kung 2002), and the fact that households renting land have achieved higher land productivity than their counterparts, indicates that land rental transactions have increased aggregate agricultural production in China (Lohmar et al. 2001). The increasing productivity of the agricultural sector in rural China, in addition to a decreased demand for labor in this area and an increased wage differential between rural and urban areas, has provided strong incentive for rural labor to shift to off-farm employment in recent decades (Zhao 1999). Estimates suggest that off-farm (rural) employment in China rose from less than 150 million in 1995 to more than 250 million in 2004, that is, a growth in off-farm employment of more than 100 million people. By 2011, 61% of rural inhabitants were participating in off-farm work (nearly 310 million rural individuals), which is a rise of approximately 20% between 2004 and 2011 (Li et al. 2013). The patterns have motivated recent empirical research to focus on the relationship between land property rights, migration, off-farm activities, and employment. For instance, De La Rupelle et al. (2009) argue that insecure land rights influence households' labor allocation and shorten migration duration, preventing rural people from moving out of agriculture and out of rural areas. The authors test this hypothesis by exploiting variation in the intensity of land rights insecurity under the HRS across and within villages by using data from the Chinese Household Income Project (CHIP) household survey.Footnote 3 By exploiting the source of variation, De La Rupelle et al. (2009) show that migration behavior varies with the contractual structure of land holdings. When land is manipulated by village authorities, households with more secure grain ration land plots can afford to spend more time migrating. Similarly, Mullan et al. (2011) analyze the role of incomplete rural property rights in the migration decisions of rural households by using independent-based household surveys and self-reported information on land tenure and find that tenure insecurity reduces migration. Similar qualitative conclusions arise from the study of Mu and Giles (2014). 
Based on time variation in land reallocation (i.e., a more insecure land arrangement) across villages over the 1995-2003 period, the authors find that farmers reduced their probability of migrating in response to a higher probability of village-wide land reallocation. How does the RLCL affect farmers' off-farm employment decisions? Our analysis is based upon the description by Besley (1995) and Mullen et al. (2008) of the link between land rights and investment decisions; two arguments from this literature suggest how land management arrangements could influence off-farm choices in the context of China, in particular the decision by rural households to migrate and participate in outside labor markets. First, migration is associated with a risk of expropriation since it entails a decrease in household size, which may induce the redistribution of some of the household land in order to maintain egalitarian land holdings (Rozelle and Li 1998). The RLCL imposed a ban on large-scale reallocations of land and limited small-scale readjustments; it thus reduced farmers' worry about losing land rights at a future time if they decided to move into off-farm activities. Second, migration is encouraged by the development of land exchange rights. The RLCL detailed the right to lease, assign, exchange, and carry out other transactions with land contracts. Thus, this policy change was expected to facilitate market transfers and improve the marketability of land rights. Land transfers permit households with higher marginal productivities of land to acquire land from households with lower marginal productivities, and induce a better allocation of household labor endowments in response to outside employment opportunities, such as those in off-farm activities. As a consequence of a plausible reduction in barriers to migration out of rural areas, due to the improvement in land security rights brought about by the RLCL, the policy change was also expected to have a positive effect on off-farm labor markets. In particular, the main hypothesis to be empirically tested in our work is that the improvement in land security rights, due to the RLCL, had a positive effect on overall off-farm employment. However, the incentives of the reform probably affected different categories of workers heterogeneously. Indeed, most self-employed individuals were operating small family firms that were labor intensive and used little capital. As a consequence, the risk of land insecurity for firm owners was much higher than for those in the wage-earning sector. Therefore, we expect that the increase in off-farm employment after implementation of the RLCL was driven mostly by an increase in wage employment rather than a growth in self-employment (Footnote 4). Should we expect both women and men to respond similarly to the RLCL? We expect a certain degree of heterogeneity in response by gender, not because men and women in rural households faced different incentives from the RLCL but because the propensity to respond is probably heterogeneous (i.e., because the incidence may vary across groups). While a large reduction in the time spent in agricultural activities, together with a significant increase in off-farm work, has been documented for both men and women since the land tenure reforms began, the participation rate of men working full time in agriculture has been lower throughout the 1980s-2000s, owing to their earlier and larger shift into the off-farm sector.
For instance, in the 2000s, men between the ages of 30 and 50 participated in the off-farm labor force at rates more than 40 percentage points higher than women (Li et al. 2013). Thus, this gap in off-farm employment rates may have generated a higher impact of the RLCL on the employment of women in off-farm activities relative to men. Indeed, Li et al. (2013) document that the participation rate of women as full-time farm workers has declined faster than that of men during this period, mainly in the 1990s, and the off-farm participation rate has risen faster for women than for men. In the remainder of the paper, we use the discussion above to guide our empirical investigation of the impact of the RLCL on the labor market outcomes of both men and women. Specifically, we look at the impacts of the improvement in land security rights due to the RLCL on the following outcomes: employment, farm work, off-farm work, wage employment, and self-employment.

Methodology and data

Data and sample construction

The empirical analysis in this study is derived from cross-sectional data from the Chinese Household Income Project (CHIP) household survey for the years 1995, 2002, and 2008 and thus covers the period before (1995 and 2002) and after the RLCL implementation (2008) (Footnote 5). The purpose of this survey was to measure the distribution of personal income and related economic factors in both rural and urban areas of China. Data was collected through a series of questionnaire-based interviews conducted in rural and urban areas and supported by the China National Bureau of Statistics (NBS) and the Institute for the Study of Labor (IZA). The CHIP survey covered nine provinces in China: Hebei, Jiangsu, Zhejiang, and Guangdong from eastern China; Anhui, Henan, and Hubei from central China; and Chongqing and Sichuan from western China. The sample was chosen according to NBS data in order to be representative of the whole Chinese population. The survey contains detailed information on incomes and expenditures, employment status, family structure, and social and economic characteristics at both the personal and household levels. The information on individual and household characteristics in the survey is complemented by extensive data on village-level characteristics. We use this information to measure the impact of improvements in land security due to the RLCL on off-farm employment, as described below. For the original sampling, CHIP 1995 was selected from significantly larger samples drawn by the State Statistical Bureau and contained 7998 households and 34,739 individuals interviewed across 802 villages, covering 19 provinces in China. CHIP 2002 interviewed 37,969 people from 9200 households distributed across 961 villages, covering 22 provinces in China. CHIP 2008 contained 7990 households and 32,139 individuals interviewed across 355 villages, covering 9 provinces in China. All three waves include the following nine provinces: Hebei, Jiangsu, Zhejiang, Guangdong, Anhui, Henan, Hubei, Chongqing, and Sichuan. The number of villages included in CHIP 2008 shows a large decline, but the numbers of households and individuals interviewed in 2008 are basically unchanged. In this paper, the sample excludes the retired and those who were studying full time during the survey's reference period. Taking all of these restrictions into account and omitting observations with missing information yields a sample of 66,779 observations for individuals aged 16 to 65 years old: 28,813 females and 31,580 males.
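As a purely illustrative sketch (not the authors' code), the sample restrictions just described could be implemented along the following lines in Python; the file name and all column names (age, retired, full_time_student, gender, and so on) are hypothetical.

import pandas as pd

# Load a pooled CHIP 1995/2002/2008 individual-level file (hypothetical file and column names).
chip = pd.read_csv("chip_pooled.csv")

# Keep working-age individuals and drop the retired and full-time students,
# following the sample construction described above.
sample = chip[(chip["age"] >= 16) & (chip["age"] <= 65)]
sample = sample[(sample["retired"] == 0) & (sample["full_time_student"] == 0)]

# Drop observations with missing information on the variables used in the analysis.
key_vars = ["employment", "off_farm", "age", "education", "marital_status", "village_id", "year"]
sample = sample.dropna(subset=key_vars)

# The resulting counts should be close to 28,813 women and 31,580 men.
print(sample.groupby("gender").size())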
The statistical description of variables is shown in Table 11 in the Appendix. The key variable in our study is a measure of land rights security indicating treatment status at the village level. Following previous studies (e.g., De La Rupelle et al. 2009), we rely on the variation of security rights across villages, depending on each village's collective management of land. However, an important limitation in constructing a "homogeneous" measure of land security is that the CHIP questionnaire asks for this type of information using different questions across the three survey years considered in the analysis. To address this issue, we use two comparable variables to indicate the village-level dimension of security. For the years 1995 and 2002, the land security measure is defined using a variable that measures the share of grain ration land in total land at the village level. Grain ration land is intended to enable farmers to retain some land "to secure their food supply" (Cheng and Tsang 1996). A higher share of grain ration land in total land at the village level means more secure rights for the village's farmers. For the year 2008, since the (continuous) variable indicating grain ration land is unavailable in the CHIP, the variable we use to measure land security at the village level is whether the village has retained some land for adjustment (often translated as "flexible land"). The existence of flexible land means that there is room for land reallocation on the part of the village leaders, i.e., the farmers have to take into account that their land can be redistributed to other members of the village (Cheng and Tsang 1996). Therefore, due to the data restrictions discussed above, we construct the measure of land security at the village level (i.e., the treatment status) as a binary indicator variable. In particular, for the years 1995 and 2002, the indicator for land security at the village level is coded as one if the share of grain ration land in total village land is above the mean value for that year; for the year 2008, it is coded as one if the village did not retain any flexible land that year. In order to verify that our results are not (strictly) dependent on the information used to construct the land security indicator, we exploit the fact that the "flexible land" variable is also available in CHIP 2002 and conduct some robustness exercises. Specifically, we construct a "placebo-treatment status" by using the information about flexible land instead of grain ration land for 2002. First, we check the percentage of coincidences in the assignment of villages to treatment under the original and "placebo" treatment status definitions. Results show a high share of coincidence, on the order of 41.91 and 52.54%. Second, we perform the main estimates shown in the empirical analysis but using the placebo-treatment status definition as the indicator for treatment in the regression specification. The results of this exercise are qualitatively similar to those obtained using the original definition of treatment, as depicted in Tables 15 and 16 in the Appendix section.

Identification assumptions and econometric strategy

The empirical work in this paper aims to identify the causal effect of land security on farmers' labor market outcomes in villages that had higher land security rights since the RLCL was implemented in 2002 (Figs. 1 and 2).
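Before turning to the identification strategy, and continuing the illustrative sketch above, the village-level treatment indicator just described could be constructed roughly as follows; the column names grain_ration_share and flexible_land are hypothetical stand-ins.

# Collapse to one record per village and year (hypothetical column names).
village = sample.groupby(["village_id", "year"], as_index=False).first()

# Year-specific means of the grain ration share for 1995 and 2002.
year_means = village[village["year"].isin([1995, 2002])].groupby("year")["grain_ration_share"].mean()

def land_security(row, year_means):
    if row["year"] in (1995, 2002):
        # Coded one if the share of grain ration land is above the year-specific mean.
        return int(row["grain_ration_share"] > year_means[row["year"]])
    # 2008: coded one if the village did not retain any flexible land.
    return int(row["flexible_land"] == 0)

village["land_security"] = village.apply(land_security, axis=1, year_means=year_means)
sample = sample.merge(village[["village_id", "year", "land_security"]], on=["village_id", "year"])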
In this paper, the major concern is that villages that chose to improve land security could be different from villages that chose not to, and that this difference may be correlated with labor market outcomes. If households have more secure land, they can afford to spend more time engaged in off-farm work (Brandt et al. 2002). However, many of the unobserved characteristics that would confound identification under a simple OLS strategy vary across villages but are fixed over time. One example of such a time-invariant factor is the preference of village authorities for redistributing land, at least over a relatively short period such as the one covered by our analysis. In order to control for time-invariant unobserved factors, we use a difference-in-differences approach to evaluate the policy's effect (Angrist and Krueger 1999), which compares the change in outcomes in the treatment group before and after the RLCL was implemented to the change in outcomes in the control group. The latter group is assumed to capture the counterfactual trend for the treatment group which would have been observed in the absence of the policy change. The treatment group consists of individuals aged 16 to 65 in villages that have higher security rights (i.e., the village share of grain ration land in total land is above the mean value for the years 1995 and 2002, and there is no flexible land in the collective for the year 2008). The comparison group consists of individuals in the same age range whose village has land insecurity, as defined above. The empirical analysis therefore compares the off-farm labor market participation of women (or men) in villages with higher land security to that in villages with land insecurity. By comparing changes, we control for unobserved time-invariant village characteristics that might be correlated with land security as well as with the off-farm employment decision. The following is the difference-in-differences specification with controls on which most of the estimates in this paper are based:

$$ Y_{ijt} = \alpha + \beta_0 \mathrm{year95} + \beta_1 \mathrm{year08} + \delta\, \mathrm{landsecurity}_{jt} + \gamma_0\, \mathrm{year95} \times \mathrm{landsecurity}_{jt} + \gamma_1\, \mathrm{year08} \times \mathrm{landsecurity}_{jt} + X'_{ijt} \lambda + \theta_t + \varphi_j + \varepsilon_{ijt} $$

where i indexes individuals, j villages, and t time. The variable Y_ijt is one of the outcomes of interest; landsecurity_jt is an indicator variable for villages in the treatment group, coded as one if the village has higher land security and zero otherwise; year95 is a dummy equal to one before the RLCL and zero otherwise; year08 is a dummy equal to one after the RLCL was implemented and zero otherwise; and year08 × landsecurity_jt is the interaction between the two variables, which captures the difference-in-differences treatment effect. Year95 × landsecurity_jt is the interaction between year95 and landsecurity_jt, which controls for possible differences in pre-trends between the control and treatment groups. The X_ijt matrix contains individual-specific variables as well as household and village variables to condition the differences in trends on observable characteristics.
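As an illustrative sketch only (not the authors' code), this specification could be estimated with the statsmodels formula interface roughly as follows; all column names are hypothetical stand-ins, and the control set X_ijt and the full list of outcomes are detailed in the next paragraph.

import statsmodels.formula.api as smf

# Year dummies with 2002 as the omitted reference year.
sample["year95"] = (sample["year"] == 1995).astype(int)
sample["year08"] = (sample["year"] == 2008).astype(int)

formula = (
    "off_farm ~ year95 + year08 + land_security"
    " + year95:land_security + year08:land_security"  # year08:land_security is the DiD effect
    " + age + education + marital_status"             # stand-ins for the individual controls in X_ijt
    " + C(village_id)"                                 # village fixed effects (phi_j in the equation)
)

# Standard errors clustered at the village level, as in the robustness checks reported later.
fit = smf.ols(formula, data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["village_id"]}
)
print(fit.params.filter(like="land_security"))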
The individual covariates include age, education level, and marital status; the household covariates include land holding by the household, measured by the total area used for agricultural activities, assets used for agriculture, measured by the balance of family financial assets, and the household demographic composition; finally, controls at the village level include the village consumption level and per capita income (Footnote 6). θ_t is a time effect common to all villages at time t, and φ_j is a fixed effect unique to the village. The ε_ijt term is an individual time-varying error. The analysis is based on the set of labor market outcomes of interest directly related to the expected effects discussed in the section on theoretical predictions. Those outcomes include overall employment, farm work, off-farm employment, wage employment, and self-employment. Throughout the empirical analysis, the difference-in-differences results are shown separately for men and women. The second identification assumption is that the composition of each group remained constant over the period under study. This assumption would be violated, for instance, if the treatment group expanded over time and incorporated individuals with different characteristics. Although our regression equation includes controls for a broad set of individual and household characteristics, this may not be enough to control for potential differences in group-specific compositional changes over time. In Section 4.4, we test whether a possible compositional effect has influenced the results. Tables 2 and 3 present summary statistics for some of the main variables by treatment status before and after the policy change for women and men in the sample. The statistics indicate that both women and men in the two groups were reasonably similar in terms of age, education, legal status, and other main socioeconomic characteristics. Most importantly, the last column in each table presents the difference between the pre- and post-reform changes for the two groups, which indicates only a few statistically significant changes in the average characteristics of the two groups. This evidence suggests that, at least in terms of observed characteristics, the main results discussed below are not influenced by compositional effects.

Table 2 Summary statistics. Women

Table 3 Summary statistics. Men

Application and results

Table 4 presents off-farm employment for men and women from 1995 to 2008. The table shows that off-farm employment rates increased markedly for both men and women, and this increase comes mainly from wage employment, as self-employment increased slowly.

Table 4 Summary statistics of labor market outcomes

Baseline results

The interaction between land rights security and the year 2008 dummy captures the effect of being in a land security village on off-farm work after the RLCL was implemented, relative to those located in villages with land insecurity. Table 5 presents the baseline estimates of the effects of the RLCL on female and male labor market outcomes. Each column reports the OLS estimates of the regression equation for the main outcomes: employment, farm work, off-farm work, wage employment, and self-employment. The first row in Table 5 presents the estimates and standard errors of the interaction coefficient in the equation, which captures the impact of the RLCL. The second row is the interaction between land rights security and the year 1995 dummy variable, which indicates the treatment effect before the RLCL.
The year 2002 is set as the reference year. The dummy variables for 1995 and 2008 correspond to the pre-policy and post-policy periods, respectively. The third row displays the estimates of the coefficient on the treatment variable (landsecurity in the equation). The last row in the table reports the average of each column's dependent variable for the period before the implementation of the RLCL. The discussion focuses on the regression results with the full set of controls in the regression equation (county fixed effects, individual age and education-level characteristics, land holding by the household, assets used for agriculture by the household, household demographic variables, and village characteristics), which are included but not reported. Since the percentage of farmers who work in both activities is low (6.6% of women and 18% of men work in both farm and off-farm activities), we have restricted the analysis to the sample of individuals who engage in only one type of employment (Footnote 7).

Table 5 The effect of RLCL on employment outcomes

The estimates in column 1 of Table 5 present the policy impact on the employment rate for men and women. The coefficient on the interaction landsecurity × year08 shows that the employment rate increased for both women (5.8 percentage points) and men (4.1 percentage points), and both estimates are statistically significant at the 1% level. In terms of the pre-policy average, the effect represents an increase of 6.6% for women and 4.6% for men. The variable landsecurity × year95 (which captures pre-trend effects) is not statistically significant for women at the usual levels. In contrast, for men the coefficient on this variable is statistically significant at the 10% level, suggesting some caution in interpretation. For the rest of the estimates presented in Table 5, we cannot reject the null hypothesis that the coefficient on landsecurity × year95 is equal to zero. The estimates in column 2 of Table 5 give the policy effect on farm employment for men and women. The coefficient on landsecurity × year08 is negative for both women and men and statistically significant at the 5% level for men. The estimates in column 3 correspond to the main outcome of interest, that is, off-farm employment. The results indicate a statistically significant increase in villages with land security after the RLCL relative to villages without land security. For off-farm work, there is an increase of 7.1 percentage points for women and an increase of 10 percentage points for men, both significant at the 1% level. In terms of the pre-policy average, the effect represents an increase of 39.4% for women and 29.4% for men. Similarly, the variable landsecurity × year95 is not statistically significant in these estimations. As overall employment increased significantly for both women and men, we can conclude that for women the policy effect on off-farm employment seems to be driven by an employment effect. For men, farm employment decreased significantly; therefore, the policy effect on off-farm work seems to reflect both a switch from farm activities to off-farm activities and an employment effect. We then separate off-farm work into wage employment and self-employment. Columns 4 and 5 reveal that the overall effect on off-farm work was due to an increase in wage employment, which is consistent with the discussion in Section 3.
For wage employment, there is an increase of 5.4 percentage points for women (significant at the 5% level) and an increase of 7.4 percentage points for men (significant at the 1% level). In terms of the pre-policy average, this represents an increase of 31.8% for women and 23.9% for men. With respect to participation in self-employment, Table 5 shows that the coefficients on the interaction between land rights security and the year 2008 dummy are both small in magnitude; the estimate is statistically significant at the 10% level for women and not statistically significant for men. Overall, the pattern of results in Table 5 indicates that women's labor market behavior responded to the policy reform as predicted in Section 3. The RLCL is associated with an increase in off-farm employment rates, and the overall effect on off-farm work was due to an increase in wage employment. In addition, the models were also estimated at the village level (see Table 12 in the Appendix section), and the results are consistent with the conclusions mentioned above. Probit estimates of the effect of the RLCL on female and male labor market outcomes are shown in Tables 13 and 14 in the Appendix section; the main results are qualitatively similar to the regressions estimated by OLS. For simplicity in the interpretation of the coefficients, we decided to report the OLS estimates.

Heterogeneous effects

Exploring heterogeneous effects across individuals and households is likely to be important in satisfactorily explaining their off-farm activities. Households and individuals differ considerably in terms of their on-farm productivities and their ability to access off-farm opportunities. Heterogeneity in household labor or land endowments and human capital also induces labor re-allocation from farm to off-farm activities. We examine which groups of individuals are particularly affected by land tenure security (full results are not shown here for lack of space but are available from the authors upon request). First, we stratify the sample into two groups based on age. The results reveal that the impact on off-farm work and wage employment is significant and larger for men and women older than 26. Second, when stratification is carried out by educational attainment, we find strong evidence that the less educated are the ones affected. Overall, we find a stronger effect of land tenure security on the labor market behavior of older people and of those with a low level of education. This result is consistent with the evidence of Mu and Giles (2014) on labor supply responses to land security rights. Third, we stratify wage employment and self-employment by whether they take place in the local county or outside the hometown. The effect on participation in the local wage labor market is 4.2 percentage points for women and 7 percentage points for men. With respect to self-employment, there is a significant increase in local self-employment for men, but the effect is not statistically significant for women (Tables 6, 7, 8, and 9).

Table 6 The effect of RLCL on employment outcomes by age. Women

Table 7 The effect of RLCL on employment outcomes by age. Men

Table 8 The effect of RLCL on employment outcomes by education level. Women

Table 9 The effect of RLCL on employment outcomes by education level. Men

Robustness

The following section presents robustness tests of the difference-in-differences estimates presented in the previous section.
These exercises are based on the regression equation, with full controls for individual characteristics, household characteristics, and county fixed effects, as in the previous analysis. The estimates in the previous regression analysis show that the coefficient on the interaction between land rights security and the year 1995 dummy variable is statistically insignificant for (almost) all of the estimated models. This suggests that trends affecting the treatment and control groups differently are not present in this setting, i.e., there is no significant evidence against the assumption of parallel trends before the policy reform. A further concern for the identification strategy is that the treatment and comparison groups may have changed over the period under study, confounding treatment with composition effects. The summary statistics and the unconditional difference-in-differences estimates in Tables 2 and 3 indicate that the main individual characteristics of both groups did not change substantially before and after the policy change, but the household characteristics of both groups changed considerably. We therefore include interaction terms between the land security indicator and the full set of control covariates in the regression. The results in Table 10 indicate that the main estimates are robust to this alternative. The estimated coefficients for the main outcomes are somewhat smaller than the baseline results in Table 5, but they remain significant at the usual confidence levels.

Table 10 The effect of RLCL on labor market outcomes. Robustness and specification checks

In general, the robustness tests suggest that changes in the composition of the treatment and comparison groups did not introduce a spurious correlation between changes in the outcomes before and after the reform. Finally, an additional concern relates to the way in which the standard errors of the estimated regressions are computed. In particular, as the dependent variable in the regression models varies across individuals within villages, a certain degree of correlation between the outcomes of individuals belonging to the same village could be expected, which would affect the coefficient standard errors. In order to check the extent to which this issue could affect the inference in our setting, Tables 17 and 18 in the Appendix section replicate our main results in Table 5 with standard errors clustered at the village level. As can be seen in those tables, the results remain basically unchanged. Additionally, we re-estimate the main results restricting the CHIP 1995 and 2002 data to the nine provinces covered by CHIP 2008. We find that the results of both estimations are qualitatively the same (full results are not shown for lack of space but are available from the authors upon request).

Conclusions

In this paper, we investigate the effect of land rights security on labor market behavior. Until the early 2000s, farmers in rural China faced a substantial risk of losing land in village land reallocations. In 2002, China passed the Rural Land Contracting Law, which aims to secure the land rights of farmers. How does the RLCL affect farmers' labor market behavior? Based upon the difference-in-differences analysis, we find that land rights security has a positive influence on both women's and men's labor market behavior. In terms of the pre-policy average, employment increases 6.6% for women and 4.6% for men, and off-farm employment increases 39.4% for women and 29.4% for men. We separate off-farm work into wage employment and self-employment.
This separation reveals that the overall effect on off-farm work was due to an increase in wage employment, which rose by 31.8% for women and 23.9% for men with respect to the pre-reform average. The lesson from this paper is that off-farm employment, especially wage employment, is highly correlated with the RLCL. Because of these relationships, it seems that the government should continue its policies to ensure farmers' land security rights and encourage land rental. The finding has important policy implications: with the expansion of land rental markets, we should expect to see an expansion in China's off-farm employment and a decline in the rural-urban income gap due to more wage income earned by farmers. The increased employment rate for women has positive implications for women's well-being, as it may increase women's economic empowerment and their ability to influence intra-household decisions. Nonetheless, the conclusions from the present study require some qualification. First, given that time variation is used for identification, the RLCL was probably also accompanied by many other policy changes that are likely to have affected labor market decisions, and we cannot disentangle the potential relative effects in this setting. Second, the period in which the RLCL was approved in China coincided with a historic economic boom, making it difficult to determine the extent to which changes in labor market behavior due to the economic upturn could have affected our results. This caveat might be exacerbated by the unavailability of additional years of (CHIP) data to better control for possible confounding factors. Despite these limitations, the robustness results and the virtual absence of pre-trends in labor market outcomes (even with the small number of pre-reform periods considered) allow us to be confident about the results. Finally, it is important to note that the estimates of the impact of the RLCL on labor market outcomes in our study are not based on any data in which land security was literally absent. The identification strategy is based on the level of land security across villages over time, so the estimated coefficients cannot be interpreted as treatment-on-the-treated parameters.

Notes

For instance, the Land Management Law passed in 1998 restricted land adjustments by requiring the agreement of two thirds of the village members.

Some research has examined women's land rights under the Household Responsibility System in rural China (Duncan and Li 2001; Li and Xi 2006; Liaw 2008). Under the HRS, women's land rights can be more easily challenged and jeopardized than men's. Women lost land rights after marriage, and they often had to wait for the next reallocation to be assigned land.

Despite the institutional changes under the HRS, land was not privatized and ownership remained "collective"; in particular, in actual management the village authorities strongly influenced land use and allocation by households (Rozelle et al. 2002).

In fact, the numbers of self-employed workers and migrant wage earners in non-farm rural employment grew in parallel during the 1980s and 1990s. However, the number of wage workers among migrants in 2008 was higher than the number of self-employed workers (Wang et al. 2011).

The CHIP conducted five waves of household surveys, in 1988, 1995, 2002, 2007, and lastly 2008, but only the waves in 1995, 2002, and 2008 have land information.
The variables of income per capita and the balance of family financial assets are discounted by the consumer price index at the provincial level with 1995 as the base year. The price index is obtained from China Statistical Yearbooks of various years. If a person who works both in farm and off farm activities, his/her main activity was defined according to the working time spent on each of them (i.e., if his/her farm working time is more than off farm time, he/she was classified as the farm activity). Angrist JD, Krueger AB. Empirical strategies in labor economics. In: Ashenfelter O, Card D, editors. Handbook of labor economics, vol. 3A. Amsterdam: Elsevier; 1999. p. 1277–366. Benjamin D, Brandt L, Rozelle S. Aging, wellbeing, and social security in rural northern China. Popul Dev Rev. 2000;26:89–116. Besley T. Property rights and investment incentives: theory and evidence from Ghana. J Polit Econ. 1995;103(5):903–37. Brandt L, Huang, Li G, Rozelle S. Land rights in China: facts, fictions and issues. The China Journal. 2002;47:67–97. Burgess, R. (1998). Market incompleteness and nutritional status in rural China. In Paper delivered at the International Conference on Land Tenure and Agricultural Performance in Rural China, Beijing, China. Carter, M. R., & Yao, Y. (1999). Specialization without regret: transfer rights, agricultural productivity, and investment in an industrializing economy (World Bank policy research working paper no. 2202). Chan A, Senser RA. China's troubled workers. Foreign Affairs. 1997;76(2):104–17. Chang HQ, MacPhail F, Dong XY. The feminization of labor and the gender work-time gap in rural China. Fem Econ. 2011;17(4):93–124. Cheng YS, Tsang SK. Agricultural land reform in a mixed system: the Chinese experience of 1984-1994. China Information. 1996;10:44–74. De Brauw A, Huang J, Rozelle S, Zhang L, Zhang Y. The evolution of China's rural labor market during the reforms. J Comp Econ. 2002;30:353–529. De La Rupelle, M., Deng, Q., Shi, L., & Vendryes, T. (2009). Land rights insecurity and temporary migration in rural China (discussion paper no. 4668). Deininger K, Jin S. The impact of property rights on households'investment, risk coping, and policy preferences: evidence from China. Econ Dev Cult Chang. 2003;51(4):851–82. Deininger, K., Jin, S., & Xian, Z. (2004). Implementing China's new land law: evidence and policy lessons. Deininger, K., Jin, S.Q., & Fang, X. (2012). Moving off the farm: land institutions to facilitate structural transformation and agricultural productivity growth in China. (World Bank policy research working paper no. 5949). Dong XY. Two-tier land tenure system and sustained economic growth in post-1978 rural China. World Dev. 1996;24:915–28. Duncan J, Ping L. Women and land tenure in China: a study of women's land rights in Dongfang county. Hainan Province: Rural Development Institute Report; 2001. Field E. Entitled to work: urban tenure security and labor supply in Peru. Q J Econ. 2007;4(122):1561–602. Goldstein M, Udry C. The profits of power: land rights and agricultural investment in Ghana. J Polit Econ. 2008;116(6):981–1022. Jacoby H, Li G, Rozelle S. Hazards of expropriation: tenure insecurity and investment in rural China. Am Econ Rev. 2002;92(5):1420–47. Knight J, Song L. Chinese peasant choices: migration, rural industry or farming. Oxf Dev Stud. 2003;31(2):123–47. Kung JK. Egalitarianism, subsistence provision, and work incentives in China's agricultural collectives. World Dev. 1994;22(2):175–87. Kung JK. 
Off-farm labor markets and the emergence of land rental markets in rural China. J Comp Econ. 2002;30(2):395–414. Kung JK, Lee YF. So what if there is income inequality? The distributive consequence of non farm employment in rural China. Econ Dev Cult Chang. 2001;50(1):19–46. Li G. The economics of land tenure and property rights in China's agricultural sector. Stanford University: Unpublished Ph.D. dissertation; 1999. Li S. Women's employment and income. Chinese Social Science. 2001;2001(3):56–69. Li P. Rural land tenure reforms in China: issues, regulations and prospects for additional reform. Land Reform, Land Settlement, and Cooperatives. 2003;11(3):59–72. Li Y, Xi YS. Married women's rights to land in China's traditional farming areas. Journal of Contemporary China. 2006;15(49):621–36. Li G, Rozelle S, Brandt L. Tenure, land rights, and farmer investment incentives in China. Agric Econ. 1998;19(1):63–71. Li Q, Huang J, Luo R, Liu C. China's labor transition and the future of China's rural wages and employment. China & World Economy. 2013;21(3):4–24. Liaw HR. Women's land rights in rural China: transforming existing laws into a source of property rights. Pacific Rim Law & Policy Journal. 2008;17(1):237–65. Lin JY. Rural reforms and agricultural growth in China. Am Econ Rev. 1992;82(1):34–51. Lohmar B. Land tenure insecurity and labor allocation in rural China. Nashville, TN: Paper presented at the American Agricultural Economics Association Annual Meeting; 1999. Lohmar B, Zhang ZX, Somwaru A. Land rental market development and agricultural production in China. Chicago, IL: Paper presented at the American Agricultural Economics Association Annual Meeting; 2001. Maurer-Fazio M. Earnings and education in China's transition to a market economy survey evidence from 1989 and 1992. China Econ Rev. 1999;10(1):17–40. Mu, R., and Giles, J. (2014). Village political economy, land tenure insecurity, and the rural to urban migration decision: evidence from China. World Bank Policy Research Working Paper, (7080). Mullan K, Grosjean P, Kontoleon A. Land tenure arrangements and rural–urban migration in China. World Dev. 2011;39(1):123–33. Mullen, K., P. Grosjean, and A. Kontoleon. 2008. Land tenure arrangements and rural urban migration in China. Environmental Economy and Policy Research Discussion Paper Series, Number 37. University of Cambridge, Department of Land Economy, UK. Rozelle S, Li G. Village leaders and land-rights formation in China. Am Econ Rev. 1998;88(2):433–8. Rozelle S, Guo L, Shen M, Hughart A, Giles J. Leaving China's farms: survey results of new paths and remaining hurdles to rural migration. The China Quarterly. 1999;158(158):367–93. Rozelle S, Brandt L, Guo L, Huang J. Land rights in China: facts, fictions, and issues. China Journal. 2002;47(1):67–97. Shi X, Heerink N, Qu F. Choices between different off-farm employment sub-categories: an empirical analysis for Jiangxi Province, China. China Econ Rev. 2007;18:438–55. Solinger DJ. Citizenship issues in China's internal migration: comparisons with Germany and Japan. Political Science Quarterly. 1999;114(3):455–78. Song, Y. & Jiggins, J. (2000, January). Feminization of agriculture and related issues: two cases study in marginal rural area in China. Paper presented at the European Conference on Agricultural and Rural Development in China (ECARDC), Leiden, Holland. Tao R, Xu Z. Urbanization, rural land system and social security for migrants in China. J Dev Stud. 2007;43(7):1301–20. Wang YX. 
De-intensification and the feminization of farming in China. Gend Technol Dev. 1999;3(2):189–214. Wang X, Huang J, Zhang L, Rozelle S. The rise of migration and the fall of self employment in rural China's labor market. China Econ Rev. 2011;22:573–84. Zhang L. Land rights in twenty-first century: the evolution of tenure security, transfer and control rights in rural China, working paper. Center for Chinese Agricultural Policy, Institute for geographical sciences and natural resource research. Beijing: Chinese Academy of Sciences; 2001. Zhao Y. Leaving the countryside: rural-to-urban migration decisions in China. Am Econ Rev. 1999;89(2):281–6.

Acknowledgements

This research work was carried out with financial and scientific support from the Partnership for Economic Policy (PEP) (www.pep-net.org) with funding from the Department for International Development (DFID) of the UK (or UK Aid), and the Government of Canada through the International Development Research Center (IDRC). The authors are also grateful to Professor Marcelo Bergolo and Manuel Paradis for the technical support and guidance, as well as to Professor Luca Tiberti for the valuable comments and suggestions. The authors would also like to thank the helpful remarks from an anonymous referee and the editor.

Responsible editor: Hartmut F. Lehmann

Author information: Hongqin Chang, Ping Ai, and Yuan Li, Taiyuan University of Technology, Taiyuan, China. Correspondence to Hongqin Chang.

The IZA Journal of Development and Migration is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.

Appendix

Table 11 Statistical description of variables
Table 12 The effect of RLCL on employment outcomes. Estimates by clustering data at village level
Table 13 The effect of RLCL on employment outcomes of women. Estimates by using a probit model
Table 14 The effect of RLCL on employment outcomes of men. Estimates by using a probit model
Table 15 The effect of RLCL on employment outcomes of women
Table 16 The effect of RLCL on employment outcomes of men
Table 17 The effect of RLCL on employment outcomes of women. Estimates using cluster standard errors at village level
Table 18 The effect of RLCL on employment outcomes of men. Estimates using cluster standard errors at village level

Chang, H., Ai, P. & Li, Y. Land tenure policy and off-farm employment in rural China. IZA J Develop Migration 8, 9 (2018). https://doi.org/10.1186/s40176-017-0117-z

Keywords: Off-farm; Rural China
\begin{document} \title{Data Assimilation to the Primitive Equations with $L^p$-$L^q$-based Maximal Regularity Approach}
\abstract{ In this paper, we give a mathematical justification of data assimilation of nudging type in the $L^p$-$L^q$ maximal regularity setting. We prove that the approximate solution of the primitive equations obtained by data assimilation converges to the true solution with exponential order in the Besov space $B^{2/q}_{q,p}(\Omega)$ on the periodic layer domain $\Omega = \mathbb{T} ^2 \times (-l, 0)$. }
\section{Introduction} \label{intro}
The primitive equations are
\begin{equation} \label{eq_primitive} \begin{alignedat}{3} \partial_t v - \Delta v + u \cdot \nabla v + \nabla_H \pi & = f & \quad \text{in} \quad & \Omega \times (0, \infty), \\ \partial_3 \pi & = 0 & \quad \text{in} \quad & \Omega \times (0, \infty), \\ \mathrm{div} \, u & =0 & \quad \text{in} \quad & \Omega \times (0, \infty), \\ v(0) & =v_0 & \quad \text{in} \quad & \Omega, \end{alignedat} \end{equation}
where $v$ is an unknown horizontal velocity field with initial data $v_0$, $u = (v, w)$ is the full velocity field with vertical component $w$, $\pi$ is an unknown scalar pressure, and $f$ is a given external force. The domain $\Omega = \mathbb{T} ^2 \times (-l, 0)$ is the periodic layer. The differential operators $\nabla_H = (\partial_1, \partial_2)^T$, $\mathrm{div}_H = \nabla_H \cdot$, and $\Delta_H = \nabla_H \cdot \nabla_H$ are the horizontal gradient, the horizontal divergence, and the horizontal Laplacian, respectively. The vertical velocity $w$ is given by
\begin{align*} w (x^\prime, x_3, t) = - \int_{-l}^{x_3} \mathrm{div}_H \, v (x^\prime, z, t) dz. \end{align*}
We write $\Gamma_l$, $\Gamma_b$, and $\Gamma_u$ to denote the lateral, bottom, and upper boundaries, respectively. We impose periodicity for $u$ and $\pi$ on the lateral boundaries and
\begin{gather}\label{eq_bound_conditions} \begin{split} \partial_3 v = 0, \quad w = 0 \quad \text{on} \quad \Gamma_u, \\ v = 0, \quad w = 0 \quad \text{on} \quad \Gamma_b. \end{split} \end{gather}
The primitive equations are a fundamental model of geophysical flows. The global well-posedness of the primitive equations in $H^1$ was established by Cao and Titi \cite{CaoTiti2007}. They proved this by combining the local well-posedness in $H^1$ with an $H^1$ a priori estimate. The local well-posedness was proved by Guill\'{e}n-Gonz\'{a}lez, Masmoudi, and Rodr\'{i}guez-Bellido \cite{GuillenMasmoudiRodriguez2001}. There are generalizations of Cao and Titi's results. Hieber and Kashiwabara \cite{HieberKashiwabara2016} proved the global well-posedness in the Lebesgue space $L^p$-setting based on the theory of analytic semigroups. They used the following equations, equivalent to (\ref{eq_primitive}):
\begin{equation} \label{eq_primitive_evo} \begin{alignedat}{3} \partial_t v - P \Delta v + P (u \cdot \nabla v) & = P f & \quad \text{in} \quad & \Omega \times (0, \infty), \\ \mathrm{div}_H \, \overline{v} & =0 & \quad \text{in} \quad & \Omega \times (0, \infty), \\ v(0) & =v_0 & \quad \text{in} \quad & \Omega, \end{alignedat} \end{equation}
where $P: L^q(\Omega)^2 \rightarrow L^q_{\overline{\sigma}}(\Omega)$ is the hydrostatic Helmholtz projection given by $P = I + \nabla_H (- \Delta_H)^{-1} \mathrm{div}_H$. We write $C^\infty(\Omega)$ to denote the set of $C^\infty$-functions which are periodic in the horizontal variable $x^\prime$. We write the Lebesgue space $L^q(\Omega)$, the Sobolev space $H^{s, q}(\Omega)$, and the Besov space $B^{s}_{q, p}(\Omega)$ to denote the completions of $C^\infty(\Omega)$ with respect to the standard $L^q$-, $H^{s, q}$-, and $B^s_{q, p}$-norms for $s \geq 0$ and $1 < q < \infty$.
We denote the $\overline{\mathrm{div}_H}$-free $L^q$-vector fields $L^q_{\overline{\sigma}}(\Omega)$ by
\begin{align*} L^q_{\overline{\sigma}}(\Omega) = \overline{ \Set{ \varphi \in C^\infty(\Omega)^2 }{ \mathrm{div}_H \overline{\varphi} = 0 } }^{\Vert \cdot \Vert_{L^q(\Omega)^2}}, \end{align*}
where $\overline{\varphi} = \int_{-l}^0 \varphi(\cdot, \cdot, z) dz/l$ is the vertical average. We analogously denote the $\overline{\mathrm{div}_H}$-free Sobolev spaces $H^{s, q}_{\overline{\sigma}}(\Omega)$ and Besov spaces $B^{s}_{q, p, \overline{\sigma}}(\Omega)$. Note that $P: L^q(\Omega)^2 \rightarrow L^q_{\overline{\sigma}}(\Omega)$ is bounded. Giga, Gries, Hieber, Hussein, and Kashiwabara \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity} proved the global well-posedness in the $L^p$-$L^q$ setting under various boundary conditions, i.e., the periodic, Neumann, Dirichlet, and mixed Dirichlet-Neumann boundary conditions. For the basic flow $v$, which is the solution to the primitive equations, we consider the data assimilation (DA) problem
\begin{equation} \label{eq_nudging} \begin{alignedat}{3} \partial_t \tilde{v} - P \Delta \tilde{v} + P \left( \tilde{u} \cdot \nabla \tilde{v} \right) & = P J_\delta f + \mu P (J_\delta v - J_\delta \tilde{v}) & \quad \text{in} \quad & \Omega \times (0, \infty),\\ \mathrm{div}_H \, \overline{\tilde{v}} & = 0, & \quad \text{in} \quad & \Omega \times (0, \infty),\\ \tilde{v}(0) & = \tilde{v}_0 & \quad \text{in} \quad & \Omega, \end{alignedat} \end{equation}
where $\mu >0$ is a constant called an inflation parameter, $\delta >0$ is a constant representing the inverse of the observation density, and the bounded linear operator $J_\delta$ is a generalization of a low-pass filter satisfying
\begin{gather} \label{eq_J} \begin{split} \Vert J_\delta f \Vert_{L^q (\Omega)^2} & \leq C \Vert f \Vert_{L^q (\Omega)^2}, \quad \text{for} \quad f \in L^q (\Omega)^2, \\ \Vert J_\delta f - f \Vert_{L^q (\Omega)^2} & \leq C \delta \Vert \nabla f \Vert_{L^q (\Omega)^2} \quad \text{for} \quad f \in H^{1, q} (\Omega)^2, \end{split} \end{gather}
for some constant $C>0$. Note that $J_\delta$ is independent of the time variable. This kind of DA procedure is called nudging. A typical example of $J_\delta$ is a cube-wise averaging operator, where each cube is a piece of a homogeneous decomposition of $\Omega$ into small cubes of radius $O(\delta^{1/3})$. The term $J_\delta v - J_\delta \tilde{v}$ in (\ref{eq_nudging}) behaves like a forcing term that makes $\tilde{v}$ converge to $v$. A basic aim of DA is to predict the true state $v$ by using the observation $J_\delta v$. We remark that the information of $v$ itself is not used directly because we never obtain perfect observations in the real world. This can be seen from the right-hand side of (\ref{eq_nudging}): it contains only information from the observations and no direct information about $f$ and $v$. DA is typically used in meteorology to forecast physical variables in the atmosphere and the ocean, such as the velocity field, the temperature field, and other physically meaningful quantities. DA is strongly related to the partial differential equations of geophysical flows. Azouani, Olson, and Titi \cite{AzouaniOlsonTiti2014} gave a mathematical framework for DA and showed that the solution to the DA equations for the two-dimensional Navier-Stokes equations converges to the true solution in the energy space.
Albanez, Nussenzveig, and Titi \cite{AlbanezNussenzveigTiti2015} showed the same type of convergence result for the Navier-Stokes $\alpha$-model. Pei \cite{Pei2019} showed the convergence of the solution of the DA equations for the primitive equations in the energy space. There have been no convergence results in $L^q$-frameworks. The aim of this paper is to show that the solution to (\ref{eq_nudging}) converges to the solution of (\ref{eq_primitive}) with exponential order in the $L^p$-$L^q$-based maximal regularity setting. We set $V = v - \tilde{v}$, $W = w - \tilde{w}$, $U = (V, W)$, $V_0 = v_0 - \tilde{v}_0$, and $F = f - J_\delta f$. To show the convergence of $\tilde{v}$ to the true solution $v$, we consider the equations of the difference
\begin{equation} \label{eq_diff} \begin{aligned} \partial_t V - P \Delta V + P \left( u \cdot \nabla V + U \cdot \nabla v - U \cdot \nabla V \right) & = P F - \mu P J_\delta V & \text{in} \quad & \Omega \times (0, \infty), \\ \mathrm{div}_H \, \overline{V} & = 0 & \text{in} \quad & \Omega \times (0, \infty),\\ V(0) & = V_0 & \text{in} \quad & \Omega. \end{aligned} \end{equation}
For a Banach space $X$, we denote by $L^p_\eta (0, T; X)$ and $H^{m, p}_\eta (0, T; X)$ the $X$-valued $\eta$-weighted Lebesgue and Sobolev spaces, respectively, defined by
\begin{gather*} L^p_\eta (0, T; X) = \left \{ f \in L^1_{loc} (0, T; X) \, : \, \Vert f \Vert_{L^p_\eta (0, T; X)} < \infty \right \}, \\ H^{m,p}_\eta (0, T; X) = \left \{ f \in L^1_{loc} (0, T; X) \, : \, f, \, \partial_t f, \cdots, \partial_t^m f \in L^p_\eta (0, T; X) \right \}, \\ \Vert f \Vert_{L^p_\eta (0, T; X)} := \left( \int_0^T \left( t^{1 - \eta} \Vert f(t) \Vert_{X} \right)^p \frac{dt}{t} \right)^{\frac{1}{p}}. \end{gather*}
We use the standard modification when $p, q = \infty$. We write $A_q = P (- \Delta_q)$ to denote the hydrostatic Stokes operator with the domain
\begin{align} D(A_q) = \Set{ \varphi \in H^{2, q}(\Omega)^2 \cap L^q_{\overline{\sigma}}(\Omega) }{ \text{$\varphi$ satisfies (\ref{eq_bound_conditions})} }. \end{align}
The space of initial traces $X_{\theta, p, q} = (L^q_{\overline{\sigma}}(\Omega), D(A_q))_{\theta, p}$ is characterized by
\begin{align} X_{\theta, p, q} = \left \{ \begin{array}{lll} \Set{ v \in B^{2\theta}_{q, p}(\Omega)^2 \cap L^q_{\overline{\sigma}}(\Omega) }{ \partial_3 v |_{x_3 = 0} = 0, \, v|_{x_3 = -l} = 0 } & \text{if} & \frac{1}{2} + \frac{1}{2q} < \theta < 1,\\ \Set{ v \in B^{2\theta}_{q, p}(\Omega)^2 \cap L^q_{\overline{\sigma}}(\Omega) }{ v|_{x_3 = -l} = 0 } & \text{if} & \frac{1}{2q} < \theta < \frac{1}{2} + \frac{1}{2q},\\ B^{2\theta}_{q, p}(\Omega)^2 \cap L^q_{\overline{\sigma}}(\Omega) & \text{if} & \theta < \frac{1}{2q}, \end{array} \right . \end{align}
and we refer the reader to \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. The main result of this paper is
\begin{theorem} \label{thm_main_thoerem} Let $1 < p, q < \infty$ satisfy $\frac{1}{p} + \frac{1}{q} \leq 1$, and let $\eta = \frac{1}{p} + \frac{1}{q}$. Let $v_0, V_0 \in X_{1/q, p, q}$. Let $f \in L^p_\eta (0, \infty; L^q(\Omega)^2)$ satisfy
\begin{align*} \Vert e^{\gamma_0 t} f \Vert_{L^p_\eta (0, \infty; L^q(\Omega)^2)} + \Vert \partial_t f \Vert_{L^2_{loc}(0, \infty; L^2(\Omega)^2)} + \Vert e^{\gamma_0 t} f \Vert_{L^2(0, \infty; H^1(\Omega)^2)} < \infty, \end{align*}
for some constant $\gamma_0 > 0$.
Assume $v \in C(0, \infty; X_{1/q, p, q}) \hookrightarrow C(0, \infty; B^{2/q}_{q, p}(\Omega)^2)$ is the solution to (\ref{eq_primitive}) obtained by \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity} satisfying
\begin{gather} \label{eq_bounds_for_v} \begin{split} \Vert e^{\gamma_1 t} \partial_t v \Vert_{L^p_\eta (0, \infty; L^q(\Omega)^2)} + \Vert e^{\gamma_1 t} \nabla^2 v \Vert_{L^p_\eta (0, \infty; L^q(\Omega)^2)} < \infty, \\ \Vert v (t) \Vert_{X_{1/q, p, q}} \leq C_0 e^{- \gamma_1 t}, \end{split} \end{gather}
for some constant $C_0>0$ and $\gamma_1 < \gamma_0$. Then there exist $\mu_0, \delta_0>0$ such that, if $\mu \geq \mu_0$ and $\delta \leq \delta_0$, there exists a unique solution $V \in C(0, \infty;X_{1/q, p, q})$ to (\ref{eq_diff}) such that
\begin{align*} \Vert e^{\mu_\ast t} \partial_t V \Vert_{L^p_\eta(0, \infty; L^q(\Omega)^2)} + \Vert e^{\mu_\ast t} V \Vert_{L^p_\eta(0, \infty; H^{2,q}(\Omega)^2)} \leq C, \end{align*}
for some constants $\gamma_1 < \mu_\ast < \gamma_0$ and $C>0$. Moreover, $V$ is exponentially stable in the sense that
\begin{align*} \Vert V (t) \Vert_{X_{1/q, p, q}} \leq C e^{ - \mu_\ast t}. \end{align*}
\end{theorem}
\begin{remark} \begin{enumerate} \item The assumption $f \in H^1(0, T; L^2(\Omega)^2)$ is used to obtain time regularity, which ensures that the solution $V(t)$ belongs to $H^2(\Omega)^2$ for a.a. $t \in (0, T)$. \item The existence of the solution $v$ to (\ref{eq_primitive_evo}) satisfying (\ref{eq_bounds_for_v}) has been proved by Giga $et \, al.$ \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. One can also deduce the existence of a solution satisfying (\ref{eq_bounds_for_v}) from the proof of Theorem \ref{thm_main_thoerem} by taking $v \equiv 0$ and $\mu = \delta = 0$. \item The theorem is an extension of Pei's $L^2$-convergence result \cite{Pei2019}. When $p = q = 2$, Theorem \ref{thm_main_thoerem} implies $\tilde{v} \rightarrow v$ in $H^1(\Omega)^2$ as $t \rightarrow \infty$, which is a stronger convergence than Pei's result. \end{enumerate} \end{remark}
There are several steps to prove Theorem \ref{thm_main_thoerem}. We put $K_\delta = I - J_\delta$. We first show that the perturbed hydrostatic operator $\tilde{A}_{\mu, q} = A_q + \mu P J_\delta = (A_q + \mu I) - \mu P K_\delta$ admits a bounded $H^\infty$-calculus. We know that $K_\delta: H^{1, q}(\Omega)^2 \rightarrow L^q(\Omega)^2$ is bounded by (\ref{eq_J}), and that the hydrostatic Stokes operator $A_q = P (- \Delta)$ with $D(A_q) = H^{2, q}(\Omega)^2 \cap L^q_{\overline{\sigma}}(\Omega)$ is invertible and admits a bounded $H^\infty$-calculus by \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. Then we see that the perturbed hydrostatic operator $\tilde{A}_{\mu, q}$ admits a bounded $H^\infty$-calculus, provided we take $\delta > 0$ so small that $\delta = O(\mu^{-1})$, by perturbation arguments; see the book by Pr\"{u}ss and Simonett \cite{PrussSimonett2016} for the definition and properties of the bounded $H^\infty$-calculus. Therefore, we find that $\tilde{A}_{\mu, q}$ with $D(\tilde{A}_{\mu, q}) = D(A_q)$ has $L^p$-$L^q$ maximal regularity and generates an analytic semigroup $e^{- t \tilde{A}_{\mu, q}}$. Note that, if we take $\mu$ sufficiently large, $\tilde{A}_{\mu, q}$ is exponentially stable, and the decay rate is larger than the minimum of the spectrum of $A_q$. We construct the local-in-time strong solution to (\ref{eq_diff}) based on Banach's fixed point theorem in the $L^p$-$L^q$ maximal regularity setting.
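For orientation, the following is only an editorial sketch (not taken from the original argument) of the Duhamel fixed-point formulation behind this step, where $\Phi$ is ad hoc notation for the solution map:
\begin{align*} \Phi(V)(t) := e^{-t \tilde{A}_{\mu, q}} V_0 + \int_0^t e^{-(t-s) \tilde{A}_{\mu, q}} P \left( F - u \cdot \nabla V - U \cdot \nabla v + U \cdot \nabla V \right)(s) \, ds. \end{align*}
A fixed point of $\Phi$ in a suitable ball of the maximal regularity space is a strong solution to (\ref{eq_diff}); the maximal regularity of $\tilde{A}_{\mu, q}$ and bilinear estimates for the nonlinear terms would give the contraction property for small $T$ or small data.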
We follow the methods by Giga $et \, al.$ \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. We also prove $V \in H^{2, p}_{\eta, loc}(0, T; L^q(\Omega)^2) \cap H^{1, p}_{\eta, loc}(0, T; D(\tilde{A}_{\mu, q})) \hookrightarrow C(0, T; D(\tilde{A}_{\mu, q}))$ by using the assumption $f \in H^{1,p}_\eta (0, T; L^q(\Omega)^2)$, which is used to connect the $L^p$-$L^q$-based solutions to the $H^1$-global solution. We establish an $H^2$ a priori estimate for $V$. The strategy is based on Giga $et \, al.$ \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. They showed $v \in C(0, T; D(A_2))$ by proving an $H^1(0, T; D(A_2))$ bound using integration by parts and the time regularity of $f \in H^1(0, T; L^q(\Omega)^2)$. In contrast to this, Hieber and Kashiwabara \cite{HieberKashiwabara2016} obtained $v \in C(0, T; D(A_2))$ using the spatial regularity assumption $f \in L^2(0, T; H^1(\Omega)^2)$. In our case, Hieber and Kashiwabara's scenario does not work because the term $J_\delta V$ in (\ref{eq_diff}) does not always belong to $L^2(0, T; H^1(\Omega)^2)$ even if $V$ is smooth in the spatial variables. For this reason, we assume time regularity for the external force. The assumption $f \in L^2(0, T; H^1(\Omega)^2)$ is used to bound the convection term $U \cdot \nabla v$ in (\ref{eq_diff}). We do not need to estimate $U \cdot \nabla V$ and $u \cdot \nabla V$ to establish the $H^1$ a priori estimate because these terms vanish by integration by parts. However, the term $U \cdot \nabla v$ does not vanish by integration by parts, so we have to estimate it. We need spatial regularity of the basic flow $v \in L^2(0, T; H^3(\Omega)^2)$ to estimate the convection term. This additional regularity can be obtained if $f \in L^2(0, T; H^1(\Omega)^2)$, see Appendix A. Combining the local well-posedness and the $H^1$ a priori estimate, we establish the global well-posedness in the maximal regularity space. For the case $T = \infty$, we first derive the $H^2$ decay of $V$ when $p = q = 2$. For the other cases we use the embedding $H^2(\Omega)^2 \hookrightarrow B^{2/q}_{q, p}(\Omega)^2$ to get smallness of $V(T_0)$ for some $T_0 > 0$. By the choice of the critical time weight $\eta$, we can extend $V(t)$ to infinity. Although, in the previous studies \cites{AzouaniOlsonTiti2014,Pei2019}, the authors construct the solution to the nudging equations directly, we show the existence and the uniqueness of the solution to (\ref{eq_nudging}) by a contradiction argument, the global well-posedness of the primitive equations, and Theorem \ref{thm_main_thoerem}. Actually, we can show that there exists a unique solution $\tilde{v}$ to (\ref{eq_nudging}) in $H^{1,q}(t_0, t_0 + \tau; L^q(\Omega)) \cap L^q(t_0, t_0 + \tau; H^{2, q}(\Omega))$ for all $t_0 >0$ and small $\tau > 0$ with initial data $\tilde{v} (t_0)$. On the other hand, the vector field $v - V$ solves (\ref{eq_nudging}) in $H_\eta^{1,q}(0, \infty; L^q(\Omega)) \cap L^q_\eta(0, \infty; H^{2, q}(\Omega))$. By the uniqueness, the vector field $v - V$ must be the global solution to (\ref{eq_nudging}). In this paper we use the following notation and notions. We write $L^q_x L^r_{x^\prime} (\Omega \times \Omega^\prime) = L^q(\Omega; L^r(\Omega^\prime))$ for domains $\Omega, \Omega^\prime$ and $x \in \Omega, x^\prime \in \Omega^\prime$. We write
\begin{align*} \dot{H}^{m, q}(\Omega) = \Set{ \varphi \in L^1_{loc}(\Omega) }{ \Vert \varphi \Vert_{\dot{H}^{m}} := \sum_{|\alpha|=m} \Vert \partial^\alpha_x \varphi \Vert_{L^q(\Omega)} < \infty }.
\end{align*}
for $m \in \mathbb{Z} _{\geq 0}$ and $q \in (1, \infty)$. For a bounded operator $B: X \rightarrow Y$ between Banach spaces $X, Y$, we write $\Vert B \Vert_{X \rightarrow Y}$ to denote the operator norm of $B$. We denote the sector $\Sigma_\theta = \Set{\lambda \in \mathbb{C} }{ |\arg \lambda| < \theta}$ and define the set of analytic functions on $\Sigma_\theta$ decaying at the origin and at infinity by
\begin{align*}
H^\infty_0(\Sigma_\theta) = \Set{ \varphi\,:\, \text{analytic on $\Sigma_\theta$} }{ |\varphi(\lambda)| \leq C \left| \frac{\lambda}{1 + \lambda^2} \right|^\varepsilon \, \text{for $\lambda \in \Sigma_{\theta}$ and some $\varepsilon>0$} }.
\end{align*}
The space $H^\infty_0(\Sigma_\theta)$ is equipped with the uniform norm on $\Sigma_\theta$. We say a sectorial operator $S$ with the domain $D(S) \subset X$ and the spectral angle $\theta_S \in (0, \pi)$ admits a bounded $H^\infty$-calculus on $X$ if
\begin{align*}
\left \Vert \frac{1}{2 \pi i} \int_{\partial \Sigma_\theta} \varphi(\lambda) (\lambda - S)^{-1} d\lambda \right \Vert_{X \rightarrow X} \leq C \Vert \varphi \Vert_{H^\infty_0(\Sigma_\theta)}
\end{align*}
for all $\varphi \in H^\infty_0(\Sigma_\theta)$ with $\theta < \theta_S$. In particular, the bounded $H^\infty$-calculus of $S$ on $L^q(\Omega)$ with the $H^\infty$-angle $\theta_{S, H^\infty} < \pi/2$ implies the maximal regularity estimate
\begin{align*}
\Vert u \Vert_{H^{1, p}(0, T; L^q(\Omega))} + \Vert u \Vert_{L^{p}(0, T; D(S))} \leq C \left( \Vert u_0 \Vert_{(L^q(\Omega), D(S))_{1-1/p, p}} + \Vert g \Vert_{L^p(0, T; L^q(\Omega))} \right),
\end{align*}
for the solution to
\begin{align*}
\partial_t u + S u & = g \in L^p(0, T; L^q(\Omega)),\\
u(0) & = u_0 \in (L^q(\Omega), D(S))_{1-1/p, p}.
\end{align*}
The maximal regularity also holds in the time-weighted Lebesgue space $L^p_\eta(0, T; L^q(\Omega))$. We denote the trace space $D_S(\theta, p)$ for a sectorial operator $S$ admitting a bounded $H^\infty$-calculus with $H^\infty$-angle $\phi_{H^\infty} \in [0, \pi/2)$ by
\begin{align*}
D_S(\theta, p) & = \Set{ x \in L^q_{\overline{\sigma}}(\Omega) }{ \Vert x \Vert_{D_S(\theta, p)} < \infty }, \\
\Vert x \Vert_{D_S(\theta, p)} & = \Vert x \Vert_{L^q(\Omega)^2} + \left( \int_0^\infty \Vert t^{1 - \theta} S e^{- t S} x \Vert_{L^q(\Omega)^2}^p \frac{dt}{t} \right)^{\frac{1}{p}}.
\end{align*}
We denote the Lebesgue-Sobolev mixed spaces $\mathbb{E}_{0, \eta, p, q, T}$ and $\mathbb{E}_{1, \eta, p, q, T}$ by
\begin{align*}
\mathbb{E}_{0, \eta, p, q, T} & = L^{p}_\eta(0, T; L^q(\Omega)^2), \\
\mathbb{E}_{1, \eta, p, q, T} & = H^{1, p}_\eta(0, T; L^q(\Omega)^2) \cap L^{p}_\eta(0, T; D(A_q)).
\end{align*}
\section{Linear Theory and Well-posedness}
\subsection{Bounded $H^\infty$-calculus of the linearized operator}
The linearized equation of (\ref{eq_diff}) is
\begin{equation}
\label{eq_perturbed_hydrostatic_stokes}
\begin{aligned}
\partial_t v - P \Delta v + \mu P J_\delta v & = f & \quad \text{in} \quad \Omega \times (0, \infty) \\
\mathrm{div}_H \, \overline{v} & = 0 & \quad \text{in} \quad \Omega \times (0, \infty)
\end{aligned}
\end{equation}
with initial data $v (0) = v_0$. We show the maximal regularity estimates for (\ref{eq_perturbed_hydrostatic_stokes}).
\begin{lemma}
\label{lem_H_infty_for_tilde_A}
Let $q \in (1, \infty)$, $\mu > 0$, and $\delta > 0$.
Then there exists a constant $\alpha > 0$ such that, if $\mu \delta < \alpha$, the perturbed hydrostatic operator $\tilde{A}_{\mu, q} := A_q + \mu P J_\delta$ with $D(\tilde{A}_{\mu, q}) = D(A_q)$ admits a bounded $H^\infty$-calculus with $H^\infty$-angle $\theta_{H^\infty} \in (0, \pi/2)$. Furthermore, $\tilde{A}_{\mu, q}$ generates an analytic semigroup $e^{- t \tilde{A}_{\mu, q}}$ with a faster decay rate than $e^{- t A_q}$, i.e.
\begin{align*}
\Vert e^{- t \tilde{A}_{\mu, q}} \Vert_{L^q (\Omega)^2 \rightarrow L^q (\Omega)^2} \leq C e^{ - \mu_\ast t}
\end{align*}
for some $C, \mu_\ast > 0$, where $\mu_\ast$ can be taken larger than the minimum spectrum of $A_q$.
\end{lemma}
\begin{corollary}
Let $\frac{1}{p} < \eta \leq 1$. Under the same assumptions as in Lemma \ref{lem_H_infty_for_tilde_A},
\begin{align}
\Vert e^{\mu_\ast t} \partial_t v \Vert_{\mathbb{E}_{0, \eta, p, q, T}} + \Vert e^{\mu_\ast t} \tilde{A}_{\mu, q} v \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \leq C \left( \Vert v_0 \Vert_{X_{\eta - 1/p, p, q}} + \Vert e^{\mu_\ast t} f \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \right),
\end{align}
for the solution $v$ to $\partial_t v + \tilde{A}_{\mu, q} v = f$ with initial data $v_0 \in X_{\eta - 1/p, p, q}$ and $f$ satisfying $e^{\mu_\ast t} f \in \mathbb{E}_{0, \eta, p, q, T}$. The constant $C>0$ is independent of $T$.
\end{corollary}
\begin{proof}[Proof of Lemma \ref{lem_H_infty_for_tilde_A}]
We first invoke that the hydrostatic Stokes operator $A_q$ generates an analytic semigroup $e^{- t A_q}$ in $L^q_{\overline{\sigma}}(\Omega)$ and admits a bounded inverse, see \cite{HieberHusseingKashiwabara2016}. There exists a solution $\psi$ to $ \lambda \psi - A_q \psi = Pf$ such that
\begin{align}
\label{eq_resolvent_ineq_hydrostatic_stokes}
|\lambda| \Vert \psi \Vert_{L^q (\Omega)^2} + \Vert \psi \Vert_{\dot{H}^2 (\Omega)^{2}} \leq C \Vert Pf \Vert_{L^q (\Omega)^2} \leq C \Vert f \Vert_{L^q (\Omega)^2},
\end{align}
for all $\lambda \in \overline{\Sigma_\theta}^c \cup \{0\}$ and all $\theta \in (0, \pi/2)$. Since $J_\delta = I - K_\delta$, we have the resolvent problem for $\tilde{A}_{\mu, q}$
\begin{align}
\label{eq_resolvent_problem_A+muJ}
(\lambda + \mu) v + A_qv - \mu P K_\delta v = Pf.
\end{align}
By the assumption (\ref{eq_J}), we see that $K_\delta$ is a relatively compact perturbation of $A_q$ with domain $D(K_\delta) = H^{1, q} (\Omega)^2$ satisfying
\begin{align*}
\Vert \mu P K_\delta \varphi \Vert_{L^q(\Omega)^2} & \leq C \mu \delta \Vert \nabla \varphi \Vert_{L^q(\Omega)^2} \leq C \mu \delta \Vert \Delta \varphi \Vert_{L^q(\Omega)^2}^{\frac{1}{2}} \Vert \varphi \Vert_{L^q(\Omega)^2}^{\frac{1}{2}} \\
& \leq C \mu \delta \Vert A_q\varphi \Vert_{L^q(\Omega)^2}^{\frac{1}{2}} \Vert \varphi \Vert_{L^q(\Omega)^2}^{\frac{1}{2}} \\
& \leq \varepsilon \Vert A_q\varphi \Vert_{L^q(\Omega)^2} + \frac{(C \mu \delta)^2}{4 \varepsilon} \Vert \varphi \Vert_{L^q(\Omega)^2}
\end{align*}
for all $\varphi \in D (A_q)$ and some small $\delta, \varepsilon \in (0, 1)$, where the last step follows from Young's inequality. Therefore $A_q - \mu P K_\delta$ with the domain $D(A_q - \mu P K_\delta) = D(A_q)$ generates an analytic semigroup $e^{- t(A_q - \mu P K_\delta)}$ on $L^q_{\overline{\sigma}} (\Omega)$. We see that $\lambda = 0$ belongs to the resolvent set of $A_q$, that the embedding $D(A_q) \hookrightarrow D(K_\delta)$ holds, and that the inequality $\Vert \mu P K_\delta v \Vert_{L^q(\Omega)^2} \leq C \Vert A_q^{\frac{1}{2}+\varepsilon^\prime} v \Vert_{L^q(\Omega)^2}$ holds for all $v \in D(A_q)$, small $\delta >0$, and $ \varepsilon^\prime \in [0, 1/2)$.
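Let us also record, as a rough sketch, the algebraic identity behind the perturbation argument that follows: for $\lambda$ in a suitable sector on which $(\lambda + \mu + A_q)^{-1}$ exists, we may factor
\begin{align*}
\lambda + \tilde{A}_{\mu, q} = \left( I - \mu P K_\delta (\lambda + \mu + A_q)^{-1} \right) (\lambda + \mu + A_q),
\end{align*}
so that, whenever $\Vert \mu P K_\delta (\lambda + \mu + A_q)^{-1} \Vert_{L^q(\Omega)^2 \rightarrow L^q(\Omega)^2} < 1$, the resolvent of $\tilde{A}_{\mu, q}$ exists at $\lambda$ and inherits the bounds of $(\lambda + \mu + A_q)^{-1}$ via a Neumann series. For $\mu \delta$ small this smallness follows from the relative bound above, which indicates why the spectrum of $\tilde{A}_{\mu, q}$ is shifted by (roughly) $\mu$ and why the decay of $e^{- t \tilde{A}_{\mu, q}}$ improves accordingly.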
Using perturbation theory for the $H^\infty$-calculus, e.g. Lemma 4.1 in the book \cite{KunstmannWeis2004}, we see that there exists a constant $\lambda_0 > 0$ such that $\lambda_0 + A_q - \mu P K_\delta$ admits a bounded $H^\infty$-calculus on $L^q_{\overline{\sigma}} (\Omega)$ with $H^\infty$-angle $\theta^\prime \in (0, \pi/2)$. In view of (\ref{eq_resolvent_problem_A+muJ}), if we take $\mu$ sufficiently large, we find that $\tilde{A}_{\mu, q}$ generates an analytic semigroup $e^{- t \tilde{A}_{\mu, q}}$ with faster exponential decay than $e^{- t A_q}$ and also admits a bounded $H^\infty$-calculus in $L^q_{\overline{\sigma}}(\Omega)$.
\end{proof}
Hereafter, we fix the constant $\mu_\ast$ in Lemma \ref{lem_H_infty_for_tilde_A} and always assume $\mu > \mu_0$ and $\delta < \delta_0$ for large $\mu_0>0$ and small $\delta_0 >0$ so that the assertions of Lemma \ref{lem_H_infty_for_tilde_A} hold. By the same kind of perturbation arguments as in the proof of Lemma \ref{lem_H_infty_for_tilde_A}, we can show
\begin{proposition}
\label{prop_H_infty_for_A_plus_J}
Let $q, \mu, \delta$ satisfy the same assumptions as in Lemma \ref{lem_H_infty_for_tilde_A}. Then there exists a constant $\alpha > 0$ such that, if $\mu \delta < \alpha$, the perturbed hydrostatic operator $\tilde{\tilde{A}}_{\mu, q} = A_q + \mu J_\delta$ with $D(\tilde{\tilde{A}}_{\mu, q}) = D(A_q)$ admits a bounded $H^\infty$-calculus. Furthermore, $\tilde{\tilde{A}}_{\mu, q}$ generates an analytic semigroup $e^{- t \tilde{\tilde{A}}_{\mu, q}}$ such that
\begin{align*}
\Vert e^{- t \tilde{\tilde{A}}_{\mu, q}} \Vert_{L^q (\Omega)^2 \rightarrow L^q (\Omega)^2} \leq C e^{ c t}
\end{align*}
for some $C, c > 0$.
\end{proposition}
We define the exponentially-weighted maximal regularity space $\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}$ equipped with the norm
\begin{align*}
\Vert v \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}} := \Vert e^{\gamma t} \partial_t v \Vert_{L^p_\eta(0, T ; L^q(\Omega)^2)} + \Vert e^{\gamma t} v \Vert_{L^p_\eta(0, T ; D(\tilde{A}_{\mu, q}))}.
\end{align*}
We find that
\begin{align*}
\Vert \partial_t \left(e^{\gamma t} v\right) \Vert_{L^p_\eta(0, T ; L^q(\Omega)^2)} & \leq \gamma \Vert e^{\gamma t} v \Vert_{L^p_\eta(0, T ; L^q(\Omega)^2)} + \Vert e^{\gamma t} \partial_t v \Vert_{L^p_\eta(0, T ; L^q(\Omega)^2)} \\
& \leq C \Vert v \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}},
\end{align*}
where the constant $C > 0$ is independent of $T$.
\begin{proposition}
\label{prop_pointwise_estimate_semigroup}
Let $q \in (1, \infty)$ and $\theta \in [0, 1]$. Then
\begin{align}
\Vert \tilde{A}_{\mu, q}^\theta e^{- t \tilde{A}_{\mu, q}} f \Vert_{L^q(\Omega)^2} \leq C t^{ - \theta} e^{- \mu_\ast t} \Vert f \Vert_{L^q(\Omega)^2}
\end{align}
for all $f \in L^q_{\overline{\sigma}}(\Omega)^2$.
\end{proposition}
\begin{proof}
Since $\tilde{A}_{\mu, q}$ is sectorial, we see $\Vert \tilde{A}_{\mu, q}^{m} e^{- t \tilde{A}_{\mu, q}} f \Vert_{L^q(\Omega)^2} \leq C t^{ - m} e^{- \mu_\ast t} \Vert f \Vert_{L^q(\Omega)^2}$ for $m = 0,1$. Interpolating between these two cases by the moment inequality, we obtain the estimate.
\end{proof}
\begin{proposition}
\label{prop_H1_t_bound_linear_case}
Let $1 < p, q < \infty$, $\eta > 1/p$, and $T>0$.
Let $V \in \mathbb{E}_{1, \eta, p, q, T}$ be the solution to
\begin{equation}
\label{eq_linear_perturbed_hydrostatic_Stokes_eq_with_zero_id}
\begin{split}
\begin{aligned}
\partial_t V + \tilde{A}_{\mu, q} V & = P F \\
V(0) & = 0
\end{aligned}
\end{split}
\end{equation}
for $F \in H^{1,p}_\eta(0, T; L^q(\Omega)^2)$ satisfying $t \partial_t F \in \mathbb{E}_{0, \eta, p, q, T}$ and $V_0 \in X_{1/q, p, q}$. Then there exists a $T$-independent constant $C > 0$ such that
\begin{align*}
& \Vert t \partial_t V \Vert_{H^{1, p}_\eta(0, T; L^q(\Omega)^2)} + \Vert t \partial_t V \Vert_{L^p_\eta(0, T; D(\tilde{A}_{\mu, q}))} \\
& \leq C \left( \Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert F \Vert_{\mathbb{E}_{0, \eta, p, q, T}} + \Vert t \partial_t F \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \right).
\end{align*}
\end{proposition}
\begin{proof}
Since $F \in H^{1, p}_\eta(0, T; L^q(\Omega)^2)$, there exists a unique solution $V$ to (\ref{eq_linear_perturbed_hydrostatic_Stokes_eq_with_zero_id}) such that
\begin{align*}
\Vert V \Vert_{H^{2, p}_\eta(0, T; L^q(\Omega)^2) \cap H^{1, p}_\eta(0, T; D(\tilde{A}_{\mu, q}))} \leq C \Vert F \Vert_{H^{1, p}_\eta(0, T; L^q(\Omega)^2)}
\end{align*}
for some constant $C>0$. On the other hand, since $V \in \mathbb{E}_{1, \eta, p, q, T}$ and $t \partial_t F \in \mathbb{E}_{0, \eta, p, q, T}$, there exists a unique solution $\Phi \in \mathbb{E}_{1, \eta, p, q, T}$ to
\begin{align}
\label{eq_tdtV}
\begin{split}
\partial_t \Phi + \tilde{A}_{\mu, q} \Phi & = - \tilde{A}_{\mu, q} V + PF + t \partial_t P F, \\
\Phi(0) & = 0,
\end{split}
\end{align}
such that
\begin{align}
\label{eq_estimate_Phi}
\Vert \Phi \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C \left( \Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert F \Vert_{\mathbb{E}_{0, \eta, p, q, T}} + \Vert t \partial_t F \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \right),
\end{align}
where $C$ is independent of $T$. The vector field $t \partial_t V$ also satisfies (\ref{eq_tdtV}). By the uniqueness, we find $\Phi = t \partial_t V$. Therefore, $t \partial_t V$ satisfies (\ref{eq_estimate_Phi}).
\end{proof}
\subsection{Local well-posedness}
We establish the local well-posedness and the global well-posedness for small data for (\ref{eq_diff}) in the maximal regularity setting. We begin with the nonlinear estimates; see Giga et al. \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}.
\begin{proposition}
\label{prop_bilinear_estimate_sobolev}
There exists a constant $C>0$ such that
\begin{align*}
& \Vert v_1 \cdot \nabla_H v_2 \Vert_{H^{s, q}(\Omega)^2} + \left \Vert \int_{-l}^{x_3} \mathrm{div}_H v_1 dz \, \partial_3 v_2 \right \Vert_{{H^{s, q}(\Omega)^2}} \\
& \leq C \Vert v_1 \Vert_{H^{s + 1 + \frac{1}{q}, q}(\Omega)^2} \Vert v_2 \Vert_{H^{s + 1 + \frac{1}{q}, q}(\Omega)^2}
\end{align*}
for all $v_1, v_2 \in H^{s + 1 + \frac{1}{q}, q}(\Omega)^2$.
\end{proposition}
\begin{proposition}
\label{prop_bilinear_estimate_bilinear_maximal_regularity}
Let $1 < p, q < \infty$ satisfy $\eta = \frac{1}{p} + \frac{1}{q} \leq 1$. Let $0 \leq \gamma \leq \mu_\ast$.
(i) There exists a $T$-independent constant $C>0$ such that
\begin{align}
\label{eq_bilinear_estimate_bilinear_maximal_regularity_1}
\begin{split}
& \Vert \left( f \cdot \nabla_H g \right) \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} + \left \Vert \left( \int_{-l}^{x_3} \mathrm{div}_H f dz \, \partial_3 g \right) \right \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} \\
& \leq C \Vert f \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}} \Vert g \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}}
\end{split}
\end{align}
for $f, g \in \mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}$ satisfying $f|_{t = 0} = 0, g|_{t = 0} = 0$.
(ii) Assume $\tilde{f} = e^{ - \mathcal{A}_1 t} f_0 + f$ and $\tilde{g} = e^{ - \mathcal{A}_2 t} g_0 + g$ for $f_0, g_0 \in X_{1/q, p, q}$ and $\mathcal{A}_j = A_q, \tilde{A}_{\mu, q}$. For $\gamma \leq \mu_\ast$, unless $\mathcal{A}_1 = \mathcal{A}_2 = A_q$, it follows that
\begin{align}
\label{eq_bilinear_estimate_bilinear_maximal_regularity_2}
\begin{split}
& \Vert \tilde{f} \cdot \nabla_H \tilde{g} \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} + \Vert \int_{-l}^{x_3} \mathrm{div}_H \tilde{f} dz \, \partial_3 \tilde{g} \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} \\
& \leq C_1 \Vert f_0 \Vert_{X_{1/q, p, q}} \Vert g_0 \Vert_{X_{1/q, p, q}} + C_2 \Vert f_0 \Vert_{X_{1/q, p, q}} \Vert g \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}} \\
& + C_3 \Vert g_0 \Vert_{X_{1/q, p, q}} \Vert f \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}} + C_4 \Vert f \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}} \Vert g \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}}.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
The Hardy inequality reads
\begin{align*}
\Vert \phi \Vert_{L^p_{1 + \theta - s}(0, T)} \leq C \Vert \phi \Vert_{H^{s, p}_{1 + \theta}(0, T)}
\end{align*}
for $s \in [0, 1]$, $1/p < \theta < 1$, and $\phi \in H^{s, p}_{1 + \theta}(0, T)$ satisfying $\phi |_{t = 0} = 0$. Combining the Hardy inequality, Proposition \ref{prop_bilinear_estimate_sobolev}, and the mixed derivative theorem, we find
\begin{align*}
& \Vert f \cdot \nabla_H g \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} + \left \Vert \int_{-l}^{x_3} \mathrm{div}_H f dz \, \partial_3 g \right \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma t}}} \\
& \leq C \Vert e^{\gamma t} f \Vert_{L^{2p}_{(1 + \eta)/2}(0, T; H^{1 + 1/q, q}(\Omega)^2)} \Vert g \Vert_{L^{2p}_{(1 + \eta)/2}(0, T; H^{1 + 1/q, q}(\Omega)^2)}.
\end{align*}
Applying the Hardy inequality again, we have
\begin{align*}
& \Vert e^{\gamma t} f \Vert_{L^{2p}_{(1 + \eta)/2}(0, T; H^{1 + 1/q, q}(\Omega)^2)}\\
& \leq C \Vert e^{\gamma t} f \Vert_{H^{1/2p + (1 - \eta)/2, p}_{\eta}(0, T; H^{1 + 1/q, q}(\Omega)^2)} \leq C \Vert f \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\gamma t}}}.
\end{align*}
The same inequality holds for $g$, and we obtain (\ref{eq_bilinear_estimate_bilinear_maximal_regularity_1}). By the choice of the time weight index $\eta$, the embedding constants are independent of $T$. We use Proposition 3.4.3 in \cite{PrussSimonett2016} to see
\begin{align}
\label{eq_bound_for_linear_part_in_mixed_space}
\Vert e^{\gamma t} e^{- t \tilde{A}_{\mu, q}} f_0 \Vert_{L^{2p}_{(1 + \eta)/2}(0, T; H^{1 + 1/q, q}(\Omega)^2)} \leq C \Vert f_0 \Vert_{D_{\tilde{A}_{\mu, q}}(1 - \eta, 2p)} \leq C \Vert f_0 \Vert_{X_{1/q, p, q}}.
\end{align}
The embedding constants are independent of $T$. The same inequality holds for $e^{- t \tilde{A}_{\mu, q}} g_0$.
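For the reader's convenience, we record the exponent bookkeeping behind the mixed derivative step used above; this is only a brief check, under the convention $\eta = \frac{1}{p} + \frac{1}{q}$. The mixed derivative theorem yields
\begin{align*}
\mathbb{E}_{1, \eta, p, q, T} \hookrightarrow H^{\theta, p}_\eta(0, T; H^{2(1 - \theta), q}(\Omega)^2), \qquad \theta \in [0, 1],
\end{align*}
and the choice $2(1 - \theta) = 1 + \frac{1}{q}$, i.e. $\theta = \frac{1}{2} - \frac{1}{2q} = \frac{1}{2p} + \frac{1 - \eta}{2}$, produces exactly the space $H^{1/2p + (1 - \eta)/2, p}_{\eta}(0, T; H^{1 + 1/q, q}(\Omega)^2)$ appearing in the estimate above.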
Combining (\ref{eq_bilinear_estimate_bilinear_maximal_regularity_1}) with the above estimates, we obtain (\ref{eq_bilinear_estimate_bilinear_maximal_regularity_2}).
\end{proof}
\begin{remark}
\label{rmk_smallness_constant_for_small_T}
The estimate (\ref{eq_bound_for_linear_part_in_mixed_space}) implies that the constants $C_j$ for $j = 1, 2, 3$ can be made small if we take $T$ sufficiently small, since the norm $\Vert \cdot \Vert_{L^{2p}_{(1 + \eta)/2}(0, T; H^{1 + 1/q, q}(\Omega)^2)}$ is an integral norm with respect to $t$.
\end{remark}
\begin{proposition}
\label{prop_local_wellposedness_maximal_regularity}
Let $1 < p, q < \infty$ satisfy $\frac{1}{p} + \frac{1}{q} \leq 1$. Let $\eta = \frac{1}{p} + \frac{1}{q}$ and $T>0$. Let $V_0 \in X_{1/q, p, q}$ and $F \in \mathbb{E}_{0, \eta, p, q, T}$. Assume that $v \in \mathbb{E}_{1, \eta, p, q, T, e^{\gamma_1 t}}$ is the solution to (\ref{eq_primitive}) with initial data $v_0 \in X_{1/q, p, q}$ and an external force $f \in \mathbb{E}_{0, \eta, p, q, T, e^{\gamma_0 t}}$ such that
\begin{align*}
\Vert \partial_t v \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma_1 t}}} + \Vert A_q v \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\gamma_1 t}}} \leq C
\end{align*}
for some $T$-independent constants $C>0$ and $\gamma_0 \geq \gamma_1 > 0$.
(i) There exists $T_0>0$ such that, if $T \leq T_0$, there exists a unique solution $V \in \mathbb{E}_{1, \eta, p, q, T}$ to (\ref{eq_diff}) with initial data $V_0$ such that
\begin{align*}
\Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C
\end{align*}
for some constant $C>0$.
(ii) There exists $\varepsilon_0 > 0$ such that, if $\Vert v \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\mu_\ast t}}}, \Vert V_0 \Vert_{X_{1/q, p, q}}, \Vert F \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\mu_\ast t}}} \leq \varepsilon_0$, there exists a unique solution $V \in \mathbb{E}_{1, \eta, p, q, T, e^{\mu_\ast t}}$ to (\ref{eq_diff}) with initial data $V_0$ such that
\begin{align*}
\Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\mu_\ast t}}} \leq C
\end{align*}
for some constant $C>0$, where the case $T = \infty$ is included.
(iii) If $F \in H^{1, p}_\eta (0, T; L^q(\Omega)^2)$ and $t \partial_t F \in \mathbb{E}_{1, \eta, p, q, e^{\gamma t}}$, then there exists $T_0 > 0$ such that, if $T \leq T_0$,
\begin{align*}
\Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, e^{\gamma t}}} + \Vert t \partial_t V \Vert_{\mathbb{E}_{1, \eta, p, q, e^{\gamma t}}} \leq C
\end{align*}
holds for some constant $C>0$.
\end{proposition}
\begin{proof}
We prove (i) and (ii). We set
\begin{align*}
\mathcal{N}(V) = - U \cdot \nabla V + u \cdot \nabla V + V \cdot \nabla v.
\end{align*}
Let $\Phi^\prime \in \mathbb{E}_{1, \eta, p, q, T}$ be the solution to
\begin{align*}
\partial_t \Phi^\prime + \tilde{A}_{\mu, q} \Phi^\prime & = 0, \\
\Phi^\prime (0) & = V_0.
\end{align*}
We put $\Phi = (\Phi^\prime, \Phi_3)$ for $\Phi_3 = \int_{-l}^{x_3} \mathrm{div}_H \Phi^\prime dz$. By the maximal regularity of $\tilde{A}_{\mu, q}$, $\Phi^\prime$ satisfies
\begin{align*}
\Vert e^{\mu_\ast t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C \Vert V_0 \Vert_{X_{1/q, p, q}}
\end{align*}
for some $T$-independent constant $C > 0$.
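One way to see this weighted bound is the following sketch, assuming (as we may, by Lemma \ref{lem_H_infty_for_tilde_A}) that $\mu_\ast$ is chosen strictly below the exponential decay rate of $e^{- t \tilde{A}_{\mu, q}}$. The function $\Psi := e^{\mu_\ast t} \Phi^\prime$ solves
\begin{align*}
\partial_t \Psi + (\tilde{A}_{\mu, q} - \mu_\ast) \Psi = 0, \qquad \Psi(0) = V_0,
\end{align*}
and $\tilde{A}_{\mu, q} - \mu_\ast$ still generates a bounded analytic semigroup and enjoys $L^p$-$L^q$ maximal regularity on $(0, \infty)$, so the unweighted maximal regularity estimate applied to $\Psi$ gives the stated bound with a $T$-independent constant.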
Let $\tilde{V} = V - \Phi^\prime$, then $\tilde{V}$ satisfies \begin{align} \label{eq_for_V_tilde_abstract_formulations} \begin{split} \partial_t \tilde{V} + \tilde{A}_{\mu, q} \tilde{V} & = \tilde{\mathcal{N}}(\tilde{V}), \\ \tilde{V} (0) & = 0, \end{split} \end{align} where \begin{align*} \tilde{\mathcal{N}}(\tilde{V}) & = - \tilde{U} \cdot \nabla \tilde{V} - \Phi \cdot \nabla \tilde{V} - \tilde{U} \cdot \nabla \Phi^\prime - \Phi \cdot \nabla \Phi^\prime \\ & + u \cdot \nabla V + u \cdot \nabla \Phi^\prime + V \cdot \nabla v + \Phi \cdot \nabla v + F. \end{align*} By Proposition \ref{prop_bilinear_estimate_bilinear_maximal_regularity}, we find \begin{align} \label{eq_estimate_mathcalN_in_E0} \begin{split} \Vert e^{\tilde{\mu} t} \tilde{\mathcal{N}}(\tilde{V}) \Vert_{\mathbb{E}_{0, \eta, p, q, T}} & \leq C_2 \Vert e^{\tilde{\mu} t} \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}}^2 \\ & + C_1 \left( \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right) \Vert e^{\tilde{\mu} t} \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \\ & + C_0 \left( \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}}^2 + \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right. \\ & \left. \quad \quad \quad + \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}}^2 + \Vert e^{\tilde{\mu} t} F \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right). \end{split} \end{align} We write $\mathcal{T}: \mathbb{E}_{0, \eta, p, q, T, e^{\tilde{\mu}t}} \rightarrow \mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}$ to denote the solution operator $\mathcal{T} g = X$ for \begin{align*} \partial_t X + \tilde{A}_{\mu, q} X & = g, \\ X (0) & = 0. \end{align*} If we take $T$ small or take $\Vert V_0 \Vert_{X_{1/q, p, q}}$, $\Vert v_0 \Vert_{X_{1/q, p, q}}$, $\Vert F \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\tilde{\mu}t}}}$ small, then by Remark \ref{rmk_smallness_constant_for_small_T} we find that $\Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu} t}}}$, $\Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu} t}}}$, and $\Vert e^{\tilde{\mu} t} F \Vert_{\mathbb{E}_{0, \eta, p, q, T, e^{\tilde{\mu} t}}}$ can be sufficiently small. Combining with these observations and the estimate above, we see there exists small $R>0$ such that \begin{align*} \mathcal{T}\tilde{\mathcal{N}} : \mathbb{E}_{1, \eta, p, q, e^{\tilde{\mu}t}}(T) \rightarrow \mathbb{E}_{1, \eta, p, q, e^{\tilde{\mu}t}}(T), \quad \Vert \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu} t}}} \leq R \quad \text{for small $R>0$}, \end{align*} is a self-mapping. Let $\tilde{V}_1, \tilde{V}_2 \in \mathbb{E}_{1, \eta, p, q, T}$ satisfying \begin{align*} \Vert \tilde{V}_1 \Vert_{\mathbb{E}_{1, \eta, p, q, T}}, \Vert \tilde{V}_2 \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq R. 
\end{align*}
Using the formula
\begin{align*}
&\tilde{\mathcal{N}}(\tilde{V}_1) - \tilde{\mathcal{N}}(\tilde{V}_2) \\
&= - (\tilde{U}_1 - \tilde{U}_2) \cdot \nabla \tilde{V}_1 - \tilde{U}_2 \cdot \nabla (\tilde{V}_1 - \tilde{V}_2) \\
&- \Phi \cdot \nabla (\tilde{V}_1 - \tilde{V}_2) - (\tilde{V}_1 - \tilde{V}_2) \cdot \nabla \Phi^\prime \\
&+ u \cdot \nabla (\tilde{V}_1 - \tilde{V}_2) + (\tilde{V}_1 - \tilde{V}_2) \cdot \nabla v
\end{align*}
and Proposition \ref{prop_bilinear_estimate_bilinear_maximal_regularity}, we find
\begin{align}
\label{eq_contractivity_of_quadratic_terms_maximal_regularity_space}
\begin{split}
& \Vert e^{\tilde{\mu} t} \left( \tilde{\mathcal{N}}(\tilde{V}_1) - \tilde{\mathcal{N}}(\tilde{V}_2) \right) \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \\
& \leq C \left( \Vert e^{\tilde{\mu} t} \tilde{V}_1 \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} \tilde{V}_2 \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right) \\
& \quad \times \Vert e^{\tilde{\mu} t} \left( \tilde{V}_1 - \tilde{V}_2 \right) \Vert_{\mathbb{E}_{1, \eta, p, q, T}}.
\end{split}
\end{align}
If we take $\Vert \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu} t}}}$, $\Vert v \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu} t}}}$, and $R$ sufficiently small, then $\mathcal{T}\tilde{\mathcal{N}}$ is a contraction mapping on $\mathbb{E}_{1, \eta, p, q, e^{\tilde{\mu}t}}(T)$. Banach's fixed point theorem implies that there exists a unique solution $\tilde{V} \in \mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}$ to (\ref{eq_for_V_tilde_abstract_formulations}) such that $\Vert \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}} \leq R$. The solution to (\ref{eq_diff}) is given by $V = \tilde{V} + \Phi^\prime$. The solution $V$ satisfies
\begin{align*}
\Vert e^{\tilde{\mu} t} V \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C.
\end{align*}
Suppose there exist two solutions $V_1, V_2$ satisfying $V_1 \neq V_2$ and $\Vert V_j \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}} \leq R$. The estimate (\ref{eq_contractivity_of_quadratic_terms_maximal_regularity_space}) implies that, if $T$ is small or $R$, $v$, and $\Phi^\prime$ are small, then
\begin{align*}
\Vert e^{\tilde{\mu} t} (V_1 - V_2) \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq \varepsilon \Vert e^{\tilde{\mu} t} (V_1 - V_2) \Vert_{\mathbb{E}_{1, \eta, p, q, T}},
\end{align*}
for some $\varepsilon \in (0,1)$. This implies $V_1 = V_2$, a contradiction. Therefore, $V$ is unique in $\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}$. This proves (i) and (ii).
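Schematically, the smallness bookkeeping in the fixed point argument just given is as follows; this is only a sketch, where $M$ denotes the norm of the solution operator $\mathcal{T}$, $R$ the radius of the ball, $\epsilon$ a common bound for the norms of $\Phi^\prime$, $v$, and $F$ appearing on the right-hand side of (\ref{eq_estimate_mathcalN_in_E0}), and the constants are those of (\ref{eq_estimate_mathcalN_in_E0}) and (\ref{eq_contractivity_of_quadratic_terms_maximal_regularity_space}). The self-mapping and contraction properties follow once
\begin{align*}
M \left( C_2 R^2 + 2 C_1 \epsilon R + C_0 (3 \epsilon^2 + \epsilon) \right) \leq R
\quad \text{and} \quad
M C \left( 2 R + 2 \epsilon \right) \leq \frac{1}{2},
\end{align*}
which hold if we first take $R$ and then $\epsilon$ (depending on $R$) sufficiently small; smallness of $T$ or of the data guarantees this. We now show (iii).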
We find from Proposition \ref{prop_bilinear_estimate_bilinear_maximal_regularity}
\begin{align}
\label{eq_bound_partial_t_N_tilde_V_in_maximal_regularity_space}
\begin{split}
& \Vert e^{\tilde{\mu} t} t \partial_t \tilde{\mathcal{N}}(\tilde{V}) \Vert_{\mathbb{E}_{0, \eta, p, q, T}} \\
& \leq C_2 \Vert e^{\tilde{\mu} t} t \partial_t \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \Vert e^{\tilde{\mu} t} \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \\
& + C_{11} \left( \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right) \Vert e^{\tilde{\mu} t} t \partial_t \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \\
& + C_{12} \left( \Vert e^{\tilde{\mu} t} t \partial_t \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} t \partial_t v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \right) \Vert e^{\tilde{\mu} t} \tilde{V} \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \\
& + C_{01} ( \Vert e^{\tilde{\mu} t} t \partial_t \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} ) ( \Vert e^{\tilde{\mu} t} t \partial_t v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} + \Vert e^{\tilde{\mu} t} v \Vert_{\mathbb{E}_{1, \eta, p, q, T}} ) \\
& + C_{02} \Vert e^{\tilde{\mu} t} t \partial_t F \Vert_{\mathbb{E}_{0, \eta, p, q, T}}
\end{split}
\end{align}
for some constants $C_2>0, C_{ij}>0$ ($i = 0,1, j=1, 2$). Note that since $\partial_t \Phi^\prime = \partial_t e^{- t \tilde{A}_{\mu, q}/2} e^{- t \tilde{A}_{\mu, q}/2} V_0$, we find from Proposition \ref{prop_pointwise_estimate_semigroup} that
\begin{align*}
\Vert t \partial_t \Phi^\prime \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C \Vert V_0 \Vert_{X_{1/q, p, q}}
\end{align*}
for some constant $C > 0$. By Proposition \ref{prop_H1_t_bound_linear_case}, we have
\begin{align}
\label{eq_bound_for_t_partialt_T_N}
\begin{split}
& \Vert t \partial_t \mathcal{T} \tilde{\mathcal{N}} (\tilde{V}) \Vert_{\mathbb{E}_{1, \eta, p, q, e^{\tilde{\mu}t }}(T)} \\
& \leq C \left( \Vert \mathcal{T} \tilde{\mathcal{N}} (\tilde{V}) \Vert_{\mathbb{E}_{1, \eta, p, q, e^{\tilde{\mu}t }}(T)} + \Vert \tilde{\mathcal{N}} (\tilde{V}) \Vert_{\mathbb{E}_{0, \eta, p, q, e^{\tilde{\mu}t }}(T)} + \Vert t \partial_t \tilde{\mathcal{N}} (\tilde{V}) \Vert_{\mathbb{E}_{0, \eta, p, q, e^{\tilde{\mu}t }}(T)} \right).
\end{split}
\end{align}
We define the mixed Banach space $\mathcal{U}_{\eta, p, q, T}$ by
\begin{align*}
\mathcal{U}_{\eta, p, q, T} & = \Set{ \varphi \in H^{2, p}_{\eta} (0, T; L^q(\Omega)^2) \cap H^{1, p}_{\eta} (0, T; D(\tilde{A}_{\mu, q})) }{ \Vert \varphi \Vert_{\mathcal{U}_{\eta, p, q, T}} < \infty } \\
\Vert \varphi \Vert_{\mathcal{U}_{\eta, p, q, T}} & := \alpha \Vert \varphi \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}} + \Vert t \partial_t \varphi \Vert_{\mathbb{E}_{1, \eta, p, q, T, e^{\tilde{\mu}t}}}
\end{align*}
for some constant $\alpha >0$. This $\alpha$ can be taken so large that it absorbs the constant $C$ in (\ref{eq_bound_for_t_partialt_T_N}). Using the estimate (\ref{eq_bound_partial_t_N_tilde_V_in_maximal_regularity_space}), if we take $T>0$ sufficiently small, then we see that $\mathcal{T}\tilde{\mathcal{N}}: \mathcal{U}_{\eta, p, q, T} \rightarrow \mathcal{U}_{\eta, p, q, T}$ is a self-mapping. In the same way as in the proof of (i), we find that $\mathcal{T}\tilde{\mathcal{N}}: \mathcal{U}_{\eta, p, q, T} \rightarrow \mathcal{U}_{\eta, p, q, T}$ is a contraction mapping for small $T$. Then Banach's fixed point theorem implies the conclusion of (iii).
\end{proof}
\begin{proposition}
\label{prop_local_wellposedness_maximal_regularity_DAeq}
Under the same assumptions on $p, q, \eta, V_0, v_0, v, F$ as in Proposition \ref{prop_local_wellposedness_maximal_regularity}, there exists $T_0>0$ such that, if $T \leq T_0$, there exists a unique solution $V \in \mathbb{E}_{1, \eta, p, q, T}$ to (\ref{eq_nudging}) with initial data $V_0$ such that $\Vert V \Vert_{\mathbb{E}_{1, \eta, p, q, T}} \leq C$ for some constant $C>0$.
\end{proposition}
\begin{proof}
The proof is based on Proposition \ref{prop_H_infty_for_A_plus_J} and the same kind of contraction argument as in Proposition \ref{prop_local_wellposedness_maximal_regularity}. Since the proof is quite similar to that of Proposition \ref{prop_local_wellposedness_maximal_regularity}, we omit the details.
\end{proof}
\begin{remark}
We know the global well-posedness of the primitive equations, the local well-posedness of (\ref{eq_diff}), and the uniqueness of the solution to (\ref{eq_nudging}) in $\mathbb{E}_{1, \eta, p, q, T}$ for some $T>0$. Therefore, we can conclude that, if we establish the global well-posedness of (\ref{eq_diff}) in $\mathbb{E}_{1, \eta, p, q, \infty}$, the equations (\ref{eq_nudging}) are also globally well-posed in $\mathbb{E}_{1, \eta, p, q, \infty}$.
\end{remark}
\subsection{Global well-posedness and asymptotic behavior}
We assume the following a priori bounds for the solution $V$ to (\ref{eq_diff}):
\begin{gather}
\label{eq_a_priori_bounds}
\begin{split}
\Vert V (t) \Vert_{L^2(\Omega)^2}^2 \leq e^{- ct} \Vert V_0 \Vert_{L^2(\Omega)^2}^2 + C \delta \int_0^t e^{- c (t - s)} \Vert \nabla f (s) \Vert_{L^2(\Omega)^2}^{2} ds, \\
\Vert V (t) \Vert_{L^2(\Omega)^2}^2 + \int_0^t \Vert \nabla V(s) \Vert_{L^2(\Omega)^2}^2 ds \leq \Vert V_0 \Vert_{L^2(\Omega)^2}^2 + C \delta \int_0^t \Vert \nabla f (s) \Vert_{L^2(\Omega)^2}^{2} ds, \\
\Vert V (t) \Vert_{H^2(\Omega)^2}^2 \leq \phi(t)
\end{split}
\end{gather}
for some $c>0$ and $\phi \in BC((\varepsilon, T])$ for arbitrarily small $\varepsilon > 0$. These a priori estimates are proved in Section \ref{sec_a_priori_estimate}.
\begin{lemma}
\label{lem_pre_of_main_theorem}
Assume the hypotheses of Theorem \ref{thm_main_thoerem} and the a priori bounds (\ref{eq_a_priori_bounds}). Then there exist $\mu_0>0$, $\delta_0 >0$ such that, if $\mu \geq \mu_0$ and $\delta \leq \delta_0$, there exists a unique solution $V \in C(0, T; X_{1/q, p, q})$ to (\ref{eq_nudging}) such that
\begin{align*}
\Vert e^{\mu_\ast t} \partial_t V \Vert_{L^p_\eta(0, T; L^q(\Omega)^2)} + \Vert e^{\mu_\ast t} V \Vert_{L^p_\eta(0, T; H^{2,q}(\Omega)^2)} \leq C,
\end{align*}
for some constants $\mu_\ast > 0$ and $C>0$, where the case $T = \infty$ is included. Moreover, $V$ is exponentially stable in the sense
\begin{align*}
\Vert V (t) \Vert_{X_{1/q, p, q}} \leq C e^{ - \mu_\ast t}.
\end{align*}
\end{lemma}
\begin{proof}
We begin with the case $p = q = 2$. By (\ref{eq_a_priori_bounds}) and Proposition \ref{prop_local_wellposedness_maximal_regularity}(i), we have a global solution $V \in \mathbb{E}_{1, 1, 2, 2, T}$ for $T < \infty$. We consider the case $T = \infty$. Since $\int_0^\infty \Vert \nabla V(s) \Vert_{L^2(\Omega)^2}^2 ds$ is bounded, there exist $t_0 > 0$ and small $\varepsilon_0 > 0$ such that
\begin{align*}
\Vert V(t_0) \Vert_{H^1(\Omega)^2} \leq \varepsilon_0.
\end{align*}
Applying Proposition \ref{prop_local_wellposedness_maximal_regularity}(ii), we find that the solution $V$ with initial data $V(t_0)$ belongs to $\mathbb{E}_{1, 1, 2, 2, \infty, e^{\mu_\ast t}}$ and satisfies $\Vert V(t) \Vert_{X_{1/q, p, q}} = O(e^{- \mu_\ast t})$. This finishes the proof for the case $p = q = 2$.
Since
\begin{align*}
\int_0^\infty e^{2 \mu_\ast s} \Vert V(s) \Vert_{D(\tilde{A}_{\mu, 2})}^2 ds < \infty,
\end{align*}
if we take $t_1 >0$ sufficiently large, there exists small $\varepsilon_1 > 0$ such that $\Vert V(t_1) \Vert_{D(\tilde{A}_{\mu, 2})} \leq \varepsilon_1$. Using this bound, the assumption $F \in H^1_{loc}(0, \infty; L^2(\Omega)^2)$, and Proposition \ref{prop_local_wellposedness_maximal_regularity}(ii, iii), we see $V \in BC(0, \infty; D(\tilde{A}_{\mu, 2}))$ and $\Vert V(t) \Vert_{H^2(\Omega)^2}$ is small for some $t>0$. Let $T^\prime$ be the maximal existence time of the solution $V$ in $\mathbb{E}_{1, \eta, p, q, T^\prime}$ given by Proposition \ref{prop_local_wellposedness_maximal_regularity}(i). Proposition \ref{prop_local_wellposedness_maximal_regularity} implies $V(t) \in D(\tilde{A}_{\mu, q}) \hookrightarrow H^{2,q}_{\overline{\sigma}}(\Omega)$ for $t < T^\prime$. When $q \geq 6/5$, we find from the Sobolev embedding that $V(t) \in H^1(\Omega)^2$. For the case $1 < q < 6/5$, we use $F \in L^2(0, T; L^2(\Omega)^2)$ and a bootstrap argument as in \cite{HieberHusseingKashiwabara2016} to improve the regularity to $V(t) \in H^1(\Omega)^2$ for $\frac{T^\prime}{2} \leq t < T^\prime$. Therefore, by the result for $p = q = 2$, we obtain the global $H^1$-solution such that $V \in \mathbb{E}_{1, \eta, p, q, T^\prime} \cap BC(T^\prime/2, \infty; D(\tilde{A}_{\mu, 2}))$ for all $1 < q < \infty$. Since $H^{2,2}(\Omega)^2 \hookrightarrow X_{1/q, p, q}$, we see $V(t) \in X_{1/q, p, q}$ for all $t > 0$. If we take $t_2>0$ sufficiently large, then we find $\Vert V(t_2) \Vert_{H^2(\Omega)^2} \leq \varepsilon_2$ for some small $\varepsilon_2 > 0$. By Proposition \ref{prop_local_wellposedness_maximal_regularity} (ii), we deduce
\begin{align*}
\Vert e^{\mu_\ast t} \partial_t V \Vert_{L^p_\eta(0, \infty; L^q(\Omega)^2)} + \Vert e^{\mu_\ast t} V \Vert_{L^p_\eta(0, \infty; H^{2,q}(\Omega)^2)} \leq C
\end{align*}
for some constant $C>0$, and
\begin{align*}
\Vert V (t) \Vert_{X_{1/q, p, q}} = O(e^{ - \mu_\ast t}).
\end{align*}
\end{proof}
\section{A Priori Estimate}
\label{sec_a_priori_estimate}
\subsection{Global well-posedness and a priori estimate}
\subsubsection{$L^2$-estimates}
\begin{proposition}
\label{prop_trigonal_estimate}
Let $p \in (1, \infty)$ and $r_1, r_2 \in (1 - 1/p, \infty)$ satisfying $1 - 1/p \geq 1/r_1 + 1/r_2$. Then
\begin{align*}
&\left| \int_\Omega \left( \int_{-l}^{x_3} f dz \right) g h dx \right| \\
& \leq C \Vert f \Vert_{L^2(\Omega)}^{\frac{2}{p}} \Vert f \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)}^{1 - \frac{2}{p}} \Vert g \Vert_{L^2(\Omega)}^{\frac{2}{r_2}} \Vert g \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)}^{1 - \frac{2}{r_1}} \Vert h \Vert_{L^2(\Omega)}^{\frac{2}{r_2}} \Vert h \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)}^{1 - \frac{2}{r_2}}
\end{align*}
for some constant $C>0$ and all $f, g, h \in H^1(\Omega)$.
\end{proposition} \begin{proof} The Sobolev inequalities and the interpolation inequalities yield \begin{align*} & \left| \int_\Omega \left( \int_{-l}^{x_3} f dz \right) g h dx \right| \\ & \leq \left \Vert \int_{-l}^{0} |f| dz \right \Vert_{L^p_{x^\prime}} \left \Vert \int_{-l}^{0} |g| |h| dz \right \Vert_{L^{p^\prime}_{x^\prime}} \\ & \leq C \int_{-l}^0 \Vert f \Vert_{L^2(\Omega)^2}^{\frac{2}{p}} \Vert f \Vert_{H^1_{x^\prime}( \mathbb{T} ^2)}^{\frac{2}{p}} dz \int_{-l}^0 \Vert g \Vert_{L^2_{x^\prime}( \mathbb{T} ^2)}^{\frac{2}{r_2}} \Vert g \Vert_{H^1_{x^\prime}( \mathbb{T} ^2)}^{1 - \frac{2}{r_1}} \Vert h \Vert_{L^2_{x^\prime}( \mathbb{T} ^2)}^{\frac{2}{r_2}} \Vert h \Vert_{H^1_{x^\prime}( \mathbb{T} ^2)}^{1 - \frac{2}{r_2}} dz \\ & \leq C \Vert f \Vert_{L^2(\Omega)^2}^{\frac{2}{p}} \Vert f \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)^2}^{1 - \frac{2}{p}} \Vert g \Vert_{L^2(\Omega)^2}^{\frac{2}{r_2}} \Vert g \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)^2}^{1 - \frac{2}{r_1}} \Vert h \Vert_{L^2(\Omega)^2}^{\frac{2}{r_2}} \Vert h \Vert_{H^1_{x^\prime}L^2_{x_3}(\Omega)^2}^{1 - \frac{2}{r_2}}. \end{align*} \end{proof} We use $L^\infty_t H^1_x$- and $L^\infty_t H^2_x$-estimates and the exponential decay by Hieber and Kashiwabara \cite{HieberKashiwabara2016} and Giga et al \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. We use integration by parts to (\ref{eq_nudging}) and Proposition \ref{prop_trigonal_estimate} to see that \begin{align*} & \frac{\partial_t}{2} \Vert V \Vert_{L^2 (\Omega)^2}^2 + \Vert \nabla V \Vert_{L^2(\Omega)^2}^2 \\ & \leq \left | \int_\Omega (U \cdot \nabla v) \cdot V + F \cdot V dx \right | - \int_\Omega \mu J_\delta V \cdot V dx\\ & \leq \Vert V \Vert_{L^2(\Omega)^2} \Vert \nabla_H v \Vert_{L^6(\Omega)^4} \Vert V \Vert_{L^3(\Omega)^2} \\ & + \Vert \nabla_H V \Vert_{L^2(\Omega)^4} \Vert \partial_3 v \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert \partial_3 v \Vert_{H^1(\Omega)^2}^{\frac{1}{2}} \Vert V \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert V \Vert_{H^1(\Omega)^2}^{\frac{1}{2}} \\ & + \Vert F \Vert_{L^2(\Omega)^2} \Vert \nabla V \Vert_{L^2(\Omega)^2} - \mu \Vert V \Vert_{L^2(\Omega)^2}^2 + \mu \delta \Vert \nabla V \Vert_{L^2(\Omega)^2}^2. \end{align*} Using the interpolation inequality and the embedding \begin{align*} \Vert \varphi \Vert_{L^3(\Omega)^2} \leq C \Vert \varphi \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert \varphi \Vert_{\dot{H}^1(\Omega)}^{\frac{1}{2}} ,\quad \Vert \varphi \Vert_{L^6(\Omega)} \leq C \Vert \varphi \Vert_{H^1(\Omega)} \end{align*} for $\varphi \in H^1(\Omega)$ and taking $\delta > 0$ sufficiently small, we have \begin{align*} & \partial_t \Vert V \Vert_{L^2 (\Omega)^2}^2 + \Vert \nabla V \Vert_{L^2(\Omega)^2}^2 \\ & \leq C \left( \Vert \nabla_H v \Vert_{H^1(\Omega)}^{4/3} + \Vert \nabla v \Vert_{L^2(\Omega)^6} \Vert \nabla v \Vert_{H^1(\Omega)^6} + \Vert \nabla v \Vert_{L^2(\Omega)^6}^2 \Vert \nabla v \Vert_{H^1(\Omega)^6}^2 \right) \Vert V \Vert_{L^2(\Omega)^2}^2 \\ & - \mu \Vert V \Vert_{L^2(\Omega)^2}^2 + C \delta \Vert \nabla f \Vert_{L^2(\Omega)^2}^2. 
\end{align*}
Since $v \in L^\infty (0, \infty; H^2(\Omega)^2)$, we take $\mu > 0$ sufficiently large and apply the Gronwall inequality to obtain
\begin{align*}
\Vert V (t) \Vert_{L^2(\Omega)^2}^2 \leq e^{- ct} \Vert V_0 \Vert_{L^2(\Omega)^2}^2 + C \delta \int_0^t e^{- c (t - s)} \Vert \nabla f (s) \Vert_{L^2(\Omega)^2}^{2} ds,
\end{align*}
and
\begin{align*}
\Vert V (t) \Vert_{L^2(\Omega)^2}^2 + \int_0^t \Vert \nabla V(s) \Vert_{L^2(\Omega)^2}^2 ds \leq \Vert V_0 \Vert_{L^2(\Omega)^2}^2 + C \delta \int_0^t \Vert \nabla f (s) \Vert_{L^2(\Omega)^2}^{2} ds,
\end{align*}
for some $c > 0$. To establish the $L^\infty_t H^1_x$-estimate for $V$, we first get the $L^\infty_t H^1_{x^\prime}$-estimate for $\overline{V}$. The $L^\infty_t L^4_x$-estimate is used to cancel out the boundary trace terms arising from integration over $(-l, 0)$. The $L^\infty_t L^2_x$-estimate for $\partial_3 \tilde{V} = \partial_3 (V - \overline{V})$ is used to absorb nonlinear terms in the estimate for $\overline{V}$.
\subsubsection{$L^\infty_t H^1_{x^\prime}$-estimates for $\overline{V}$}
We divide (\ref{eq_diff}) into two parts. The vector $\overline{V}$ satisfies
\begin{align}
\label{eq_var_V}
\begin{split}
\partial_t \overline{V} - \Delta_H \overline{V} + \nabla_H \Pi & = \overline{F} - \mu \overline{J_\delta V} - \frac{1}{l} \gamma_- \partial_3 V - \overline{N} (v, \overline{V}, \tilde{V}),\\
\mathrm{div}_H \overline{V} & = 0,
\end{split}
\end{align}
where $\tilde{V} = V - \overline{V}$ and
\begin{align*}
\overline{N} (v, \overline{V}, \tilde{V}) & = \overline{v} \cdot \nabla_H \overline{V} + \overline{V} \cdot \nabla_H \overline{v} - \overline{V} \cdot \nabla_H \overline{V} + \frac{1}{l} \int_{-l}^0 \tilde{u} \cdot \nabla \tilde{V} + \tilde{U} \cdot \nabla \tilde{v} - \tilde{U} \cdot \nabla \tilde{V} dz.
\end{align*}
The vector field $\tilde{V}$ satisfies
\begin{equation}
\label{eq_tilde_V}
\begin{aligned}
\partial_t \tilde{V} - \Delta \tilde{V} & = \tilde{F} - \mu \widetilde{J_\delta V} - \frac{1}{l}\gamma_- \partial_3 V - \tilde{N} (v, \overline{V}, \tilde{V}), \\
\mathrm{div} \, \tilde{U} & = 0,
\end{aligned}
\end{equation}
where $\gamma_-$ is the bottom boundary trace operator,
\begin{align*}
\tilde{N} (v, \overline{V}, \tilde{V}) & = u \cdot \nabla \tilde{V} + \tilde{v} \cdot \nabla_H \overline{V} + U \cdot \nabla \tilde{v} + \tilde{V} \cdot \nabla_H \overline{v} - U \cdot \nabla \tilde{V} - \tilde{V} \cdot \nabla_H \overline{V} \\
& \quad - \frac{1}{l} \int_{-l}^0 \tilde{u} \cdot \nabla \tilde{V} + \tilde{U} \cdot \nabla \tilde{v} - \tilde{U} \cdot \nabla \tilde{V} dz,
\end{align*}
and
\begin{gather*}
\tilde{W} = \int^{x_3}_{-l} \mathrm{div}_H \tilde{V} dz = \int^{x_3}_{-l} \mathrm{div}_H V dz, \quad \tilde{U} = (\tilde{V}, \tilde{W}).
\end{gather*}
We use the $L^2_tL^2_x$-maximal regularity of the 2-D Stokes operator.
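For the reader's convenience, we recall the barotropic--baroclinic decomposition used throughout this section (with $\overline{\,\cdot\,}$ the vertical average over the depth $(-l, 0)$, as we assume from the notation fixed earlier in the paper):
\begin{align*}
\overline{\varphi}(x^\prime) = \frac{1}{l} \int_{-l}^{0} \varphi(x^\prime, x_3) \, dx_3, \qquad \tilde{\varphi} = \varphi - \overline{\varphi}, \qquad \int_{-l}^{0} \tilde{\varphi} \, dx_3 = 0.
\end{align*}
In particular, $\mathrm{div}_H \, \overline{V} = 0$, and the two parts $\overline{V}$ and $\tilde{V}$ of $V$ are estimated separately below.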
By the same way as \cite{HieberHusseingKashiwabara2016}, we find \begin{align*} & \partial_t \Vert \nabla_H \overline{V} \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \mu \Vert \nabla_H \overline{V} \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \Vert \Delta_H \overline{V} \Vert_{L^2( \mathbb{T} ^2)^2}^2 + \Vert \nabla_H \Pi \Vert_{L^2( \mathbb{T} ^2)^2}^2\\ & \leq C \left \Vert \overline{F} - \frac{1}{l} \gamma_- \partial_3 V - \overline{N}(v, \overline{V}, \tilde{V}) + \mu \overline{K_\delta V} \right \Vert_{L^2( \mathbb{T} ^2)^2}^2\\ & \leq C \Vert F \Vert_{L^2(\Omega)^2}^2 + C \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^2 + \frac{1}{100} \Vert \nabla \partial_3 V \Vert_{L^2(\Omega)^2}^2 \\ & + \left \Vert \overline{N}(v, \overline{V}, \tilde{V}) \right \Vert_{L^2( \mathbb{T} ^2)^2} + C \mu \delta \Vert \nabla V \Vert_{L^2(\Omega)^2}^2. \end{align*} By the Sobolev embeddings and interpolation inequalities, we have \begin{align*} & \left \Vert \overline{v} \cdot \nabla_H \overline{V} + \overline{V} \cdot \nabla_H \overline{v} - \overline{V} \cdot \nabla_H \overline{V} \right \Vert_{L^2( \mathbb{T} ^2)^2}^2 \\ & \leq C \Vert \overline{v} \Vert_{L^2( \mathbb{T} ^2)^2} \Vert \overline{v} \Vert_{H^1( \mathbb{T} ^2)^2} \Vert \nabla \overline{V} \Vert_{L^2( \mathbb{T} ^2)^6} \Vert \nabla \overline{V} \Vert_{H^1( \mathbb{T} ^2)^6} \\ & + C \Vert \overline{V} \Vert_{L^2( \mathbb{T} ^2)^2} \Vert \overline{V} \Vert_{H^1( \mathbb{T} ^2)^2} \Vert \nabla \overline{v} \Vert_{L^2( \mathbb{T} ^2)^6} \Vert \nabla \overline{v} \Vert_{H^1( \mathbb{T} ^2)^6} \\ & + C \Vert \overline{V} \Vert_{L^2( \mathbb{T} ^2)^2} \Vert \overline{V} \Vert_{H^1( \mathbb{T} ^2)^2} \Vert \nabla \overline{V} \Vert_{L^2( \mathbb{T} ^2)^6} \Vert \nabla \overline{V} \Vert_{H^1( \mathbb{T} ^2)^6}, \end{align*} and \begin{align*} & \left \Vert \int_{-l}^0 \tilde{v} \cdot \nabla_H \tilde{V} + \tilde{V} \cdot \nabla_H \tilde{v} - \tilde{V} \cdot \nabla_H \tilde{V} dz \right \Vert_{L^2( \mathbb{T} ^2)^2}^2 \\ & \leq C \Vert v \Vert_{H^2(\Omega)}^2 \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^2}^2 \\ & + C \left ( \int_{-l}^0 \Vert \tilde{V} \Vert_{L^2( \mathbb{T} ^2)^2}^{\frac{1}{2}} \Vert \tilde{V} \Vert_{H^1( \mathbb{T} ^2)^2}^{\frac{1}{2}} \Vert \nabla_H \tilde{v} \Vert_{L^2( \mathbb{T} ^2)^4}^{\frac{1}{2}} \Vert \nabla_H \tilde{v} \Vert_{H^1( \mathbb{T} ^2)^4}^{\frac{1}{2}} dz \right )^2 \\ & + C \Vert |\tilde{V}| | \nabla \tilde{V}| \Vert_{L^2(\Omega)}^2 \\ & \leq C \Vert v \Vert_{H^2(\Omega)}^2 \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^2}^2 \\ & + C \Vert \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \tilde{V} \Vert_{H^1(\Omega)^2} \Vert \nabla_H \tilde{v} \Vert_{L^2(\Omega)^4} \Vert \nabla_H \tilde{v} \Vert_{H^1(\Omega)^4} \\ & + C \Vert |\tilde{V}| | \nabla \tilde{V}| \Vert_{L^2(\Omega)}^2. 
\end{align*}
Since
\begin{align}
\label{eq_w_to_div_H}
\int_{-l}^0 \left ( \int_{-l}^{x_3} \mathrm{div}_H \varphi dz \right ) \partial_3 \psi dx_3 = - \int_{-l}^0 ( \mathrm{div}_H \varphi ) \psi dx_3
\end{align}
for all $\varphi, \psi \in H^1(\Omega)^2$ satisfying $\mathrm{div}_H \overline{\varphi} = 0$, we proceed as above to get
\begin{align*}
& \left \Vert \int_{-l}^0 w \partial_z \tilde{V} + \tilde{W} \partial_z \tilde{v} - \tilde{W} \partial_z \tilde{V} dz \right \Vert_{L^2( \mathbb{T} ^2)^2}^2 \\
& \leq C \Vert \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \tilde{V} \Vert_{H^1(\Omega)^2} \Vert \nabla_H \tilde{v} \Vert_{L^2(\Omega)^4} \Vert \nabla_H \tilde{v} \Vert_{H^1(\Omega)^4} \\
& + C \Vert v \Vert_{H^2}^2 \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^2}^2 + C \Vert |\tilde{V}| | \nabla \tilde{V}| \Vert_{L^2(\Omega)}^2
\end{align*}
Summing up the above bounds and using the Young inequality, we obtain
\begin{align}
\label{eq_var_V_H1}
\begin{split}
& \partial_t \Vert \nabla_H \overline{V} (t) \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \mu \Vert \nabla_H \overline{V} (t) \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \frac{1}{2} \Vert \Delta_H \overline{V} (t) \Vert_{L^2( \mathbb{T} ^2)^2}^2 + \Vert \nabla_H \overline{\Pi} (t) \Vert_{L^2( \mathbb{T} ^2)^2}^2\\
& \leq \phi_{\overline{H^1}, 1} (t) \Vert \nabla_H \overline{V} (t) \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \frac{1}{100} \Vert \partial_3 \nabla V \Vert_{L^2( \mathbb{T} ^2)^2}^2 + C \Vert |\tilde{V}| | \nabla \tilde{V}| \Vert_{L^2(\Omega)}^2 + \phi_{\overline{H^1}, 2} (t),
\end{split}
\end{align}
for some $L^1 (0, T)$-functions $\phi_{\overline{H^1}, 1}, \phi_{\overline{H^1}, 2}$ depending on $L^\infty_t H^2_x$- and lower-order bounds for $v$.
\subsubsection{$L^\infty_t L^4_x$-estimates for $\tilde{V}$}
Multiplying (\ref{eq_tilde_V}) by $|\tilde{V}|^2 \tilde{V}$ and integrating by parts yields
\begin{align*}
& \frac{\partial_t}{4} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + \Vert \nabla |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^2 + \Vert |\tilde{V}| |\nabla \tilde{V}| \Vert_{L^2(\Omega)^2}^2 \\
& \leq \left | \int_\Omega \left( \tilde{F} - \frac{1}{l} \gamma_- \partial_3 V - \tilde{N} (v, \overline{V}, \tilde{V}) \right) \cdot |\tilde{V}|^2 \tilde{V} dx \right | - \int_\Omega \mu \widetilde{J_\delta V} \cdot |\tilde{V}|^2 \tilde{V} dx \\
& =: I_1 + I_2 + I_3 + I_4.
\end{align*}
We first find from the Sobolev inequality that
\begin{align*}
& I_1 = \left | \int_\Omega \tilde{F} \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\
& \leq C \Vert F \Vert_{L^2(\Omega)^2} \Vert |\tilde{V}|^2 \Vert_{H^1(\Omega)} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}, \\
&I_4 = - \int_\Omega \mu \widetilde{J_\delta V} \cdot |\tilde{V}|^2 \tilde{V} dx \\
& \leq - \mu \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + C \mu \delta \Vert \nabla V \Vert_{L^2(\Omega)^4} \Vert |\tilde{V}|^2 \Vert_{H^1(\Omega)} \Vert \tilde{V} \Vert_{L^4(\Omega)^2},
\end{align*}
In the same way as in \cite{HieberKashiwabara2016}, we have
\begin{align*}
& I_2 = \left | \int_\Omega \gamma_- \partial_3 V \cdot |\tilde{V}|^2 \tilde{V} dx \right | \leq \left | \int_{ \mathbb{T} ^2} \gamma_- \partial_3 V \cdot \int_{-l}^0 |\tilde{V}|^2 \tilde{V} dz dx^\prime \right | \\
& \leq C \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert \nabla \partial_3 V \Vert_{L^2(\Omega)^4}^{\frac{1}{2}} \Vert |\tilde{V}|^2 \Vert_{H^1(\Omega)} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}.
\end{align*}
We estimate $I_3$.
We find from the H\"{o}lder inequality and the trivial inequality \begin{align*} r^{\theta} \leq 1 + r^{\theta^\prime}, \quad \text{for $r \in \mathbb{R} _{\geq 0}$, $1 \leq \theta \leq \theta^\prime$}, \end{align*} that \begin{align*} & \left | \int_\Omega (\tilde{v} \cdot \nabla_H\overline{V}) \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq C \Vert \nabla_H \overline{V} \Vert_{L^2(\Omega)^4} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^3 \Vert \tilde{v} \Vert_{L^4(\Omega)^2} \\ & \leq C \Vert \nabla_H \overline{V} \Vert_{L^2(\Omega)^4} ( 1 + \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 ) \Vert v \Vert_{H^1(\Omega)^2}. \end{align*} We see \begin{align*} &\left | \int_\Omega (U \cdot \nabla \tilde{v}) \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq \left | \int_\Omega (\tilde{V} \cdot \nabla_H \tilde{v} + \tilde{W} \partial_3 \tilde{v}) \cdot |\tilde{V}|^2 \tilde{V} dx \right | + \left | \int_\Omega (\overline{V} \cdot \nabla_H \tilde{v} + \overline{W} \partial_3 \tilde{v}) \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & =: I_1^\prime + I_2^\prime + I_3^\prime + I_4^\prime. \end{align*} By the Sobolev inequalities and the H\"{o}lder inequality, we have \begin{align*} I_1^\prime & \leq C \Vert \nabla_H v \Vert_{L^2(\Omega)^4} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)}^{1/2} \Vert |\tilde{V}|^2 \Vert_{H^1(\Omega)}^{3/2}, \\ I_2^\prime + I_4^\prime & \leq C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4} \Vert \partial_3 \tilde{v} \Vert_{L^2_{x_3} L^\infty_{x^\prime}(\Omega)^2} \Vert |\tilde{V}|^2 \tilde{V} \Vert_{L^2(\Omega)^2} \\ & \leq C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4} \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2}^{3/4} \Vert \partial_3 \tilde{v} \Vert_{H^2(\Omega)^2}^{1/4}\\ & \quad \quad \quad \times \left( \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^3 + \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^{\frac{3}{2}} \Vert \nabla |\tilde{V}|^2 \Vert_{L^2(\Omega)^3}^{\frac{3}{4}} \right) \\ & \leq C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4}^{\frac{4}{3}} \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + C \Vert \partial_3 \tilde{v} \Vert_{H^2(\Omega)^2} \\ & + C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4}^2 \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2}^2 \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4}^2 + C \Vert \partial_3 \tilde{v} \Vert_{H^2(\Omega)^2}^2 \\ & + \frac{1}{16} \Vert \nabla |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^2, \\ I_3^\prime & \leq C \Vert \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{v} \Vert_{L^2_{x_3} L^\infty_{x^\prime}(\Omega)^2} \Vert |\tilde{V}|^2 \tilde{V} \Vert_{L^2(\Omega)^2} \\ & \leq C \Vert \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2}^{1/2} \Vert \partial_3 \tilde{v} \Vert_{H^2(\Omega)^2}^{1/2} \\ & \quad \quad \times \left( \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^3 + \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^{\frac{3}{2}} \Vert \nabla |\tilde{V}|^2 \Vert_{L^2(\Omega)^3}^{\frac{3}{4}} \right) \\ & \leq C \Vert \tilde{V} \Vert_{L^2(\Omega)^2}^{\frac{4}{3}} \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2}^{\frac{2}{3}} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4\\ & + C \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4}^{\frac{8}{3}} \Vert \partial_3 \tilde{v} \Vert_{H^1(\Omega)^2}^{\frac{4}{3}} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + C \Vert \partial_3 \tilde{v} \Vert_{H^2(\Omega)^2}^2 \\ & + \frac{1}{16} \Vert \nabla |\tilde{V}|^2 \Vert_{L^2(\Omega)^3}^2. 
\end{align*} The term $\int_\Omega (\tilde{V} \cdot \nabla_H \overline{v}) \cdot |\tilde{V}|^2 \tilde{V} dx$ can be bounded by the same way to bound $I_1^\prime$. By the same as \cite{HieberKashiwabara2016}, we see that \begin{align*} & \left | \int_\Omega (\tilde{V} \cdot \nabla_H \overline{V}) \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq C \Vert \nabla\overline{V} \Vert_{L^2( \mathbb{T} ^2)^6} \left \Vert \int_{-l}^0 |\tilde{V}|^4 dz \right \Vert_{L^2( \mathbb{T} ^2)} \\ & \leq C \Vert \nabla\overline{V} \Vert_{L^2( \mathbb{T} ^2)^6} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} ( \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} + \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2} ) \\ & \leq C \Vert \nabla \overline{V} \Vert_{L^2(\Omega)^6} \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + C \Vert \nabla \overline{V} \Vert_{H^1(\Omega)^6}^2 \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^4 + \frac{1}{16} \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^2. \end{align*} We use (\ref{eq_w_to_div_H}) to get \begin{align*} & \left | \int_\Omega \int_{-l}^0 \tilde{u} \cdot \nabla \tilde{V} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq \left | \int_\Omega \int_{-l}^0 \tilde{v} \cdot \nabla_H \tilde{V} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right | + \left | \int_\Omega \int_{-l}^0 (\mathrm{div}_H \tilde{v}) \tilde{V} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq C \Vert \tilde{v} \Vert_{L^\infty(\Omega)^2} \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \\ & + C \Vert \tilde{v} \Vert_{L^3(\Omega)^2} \Vert \tilde{V} \Vert_{H^1(\Omega)^2} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^{\frac{1}{2}}. \end{align*} By the same way as above, we deduce \begin{align*} & \left | \int_\Omega \int_{-l}^0 \tilde{U} \cdot \nabla \tilde{v} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right | \\ & \leq \left | \int_\Omega \int_{-l}^0 \tilde{V} \cdot \nabla_H \tilde{v} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right | + \left | \int_\Omega \int_{-l}^0 (\mathrm{div}_H \tilde{V}) \tilde{v} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right |\\ & \leq C \Vert \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \tilde{v} \Vert_{H^3(\Omega)^2} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \\ & + C \Vert \nabla \tilde{V} \Vert_{L^2(\Omega)^3} \Vert \tilde{v} \Vert_{H^2(\Omega)^2} \Vert |\tilde{V}|^2 \Vert_{L^2(\Omega)} \Vert \nabla_H |\tilde{V}|^2 \Vert_{L^2(\Omega)^2}^{\frac{1}{2}}, \end{align*} and \begin{align*} & \left | \int_\Omega \int_{-l}^0 \tilde{U} \cdot \nabla \tilde{V} dz \cdot |\tilde{V}|^2 \tilde{V} dx \right |\\ & \leq C \int_{-l}^0 \Vert \tilde{V} \Vert_{L^4( \mathbb{T} ^2)^2} \Vert \nabla_H \tilde{V} \Vert_{L^2( \mathbb{T} ^2)^4} dz \int_{-l}^0 \Vert |\tilde{V}|^3 \Vert_{L^4( \mathbb{T} ^2)} dz \\ & \leq C \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^2 \Vert \nabla_H \tilde{V}^2 \Vert_{L^2(\Omega)^4} \Vert \nabla_H |\tilde{V}|^2 \Vert_{H^1(\Omega)^2}. 
\end{align*}
Summing up the above inequalities and using the Young inequality, we see that there exist some $L^1 (0, T)$-functions $\phi_{\widetilde{L^4}, 1}, \phi_{\widetilde{L^4}, 2}$ and a small scalar $\delta_{\widetilde{L^4}} > 0$ such that
\begin{align}
\label{eq_tilde_V_L4}
\begin{split}
& \frac{\partial_t}{4} \Vert \tilde{V} (t) \Vert_{L^4(\Omega)^2}^4 + \mu \Vert \tilde{V} (t) \Vert_{L^4(\Omega)^2}^4 + \Vert \nabla |\tilde{V} (t)|^2 \Vert_{L^2(\Omega)^3}^2 + \Vert |\tilde{V}(t)| |\nabla \tilde{V}(t)| \Vert_{L^2(\Omega)}^2 \\
& \leq \phi_{\widetilde{L^4}, 1} (t) \Vert \tilde{V} (t) \Vert_{L^4(\Omega)^2}^4 + \delta_{\widetilde{L^4}} \Vert \nabla \partial_3 V (t) \Vert_{L^2(\Omega)^6}^2 + \phi_{\widetilde{L^4}, 2} (t).
\end{split}
\end{align}
\subsubsection{$L^\infty_t L^2_x$-estimates for $\partial_3 V$}
Multiplying (\ref{eq_diff}) by $- \partial_3^2 V$ and integrating over $\Omega$, we have
\begin{align*}
& \frac{\partial_t}{2} \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^2 + \Vert \nabla \partial_3 V \Vert_{L^2(\Omega)^6}^2\\
& \leq - \mu \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^2 + \left| \int_\Omega (\partial_3 u \cdot \nabla V) \cdot \partial_3 V dx \right| + \left| \int_\Omega (\partial_3 U \cdot \nabla v) \cdot \partial_3 V dx \right| \\
& + \left| \int_\Omega (U \cdot \nabla \partial_3 v) \cdot \partial_3 V dx \right| + \left| \int_\Omega (\partial_3 U \cdot \nabla V) \cdot \partial_3 V dx \right| \\
& + \left| \int_\Omega F \cdot \partial_3^2 V dx \right| + \left| \int_\Omega K_\delta V \cdot \partial_3^2 V dx \right| \\
& =: I_1 + I_2 + I_3 + I_4 + I_5 + I_6 + I_7.
\end{align*}
Using the Sobolev inequality and the interpolation inequality, we have
\begin{align*}
I_2 & \leq \Vert v \Vert_{H^3(\Omega)^2} \Vert \nabla \tilde{V} \Vert_{L^2(\Omega)^6} \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2},\\
I_3 & \leq C \Vert v \Vert_{H^2(\Omega)^2} \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^{\frac{3}{2}} \Vert \partial_3 V \Vert_{H^1(\Omega)^2}^{\frac{1}{2}} \\
& + C \Vert \nabla_H V \Vert_{L^2(\Omega)^2} \Vert \nabla v \Vert_{H^1(\Omega)^2} \Vert \partial_3 V \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert \partial_3 V \Vert_{H^1(\Omega)^2}^{\frac{1}{2}}.
\end{align*}
We use Proposition \ref{prop_trigonal_estimate} to get
\begin{align*}
I_4 & \leq \left| \int_\Omega (V \cdot \nabla_H \partial_3 v) \cdot \partial_3 V dx \right| + \int_\Omega \int_{-l}^0 |\nabla_H \tilde{V}| dz \, |\partial_3^2 v| |\partial_3 V| dx \\
& \leq C \Vert |V| |\nabla V| \Vert_{L^2(\Omega)} \Vert \nabla_H \partial_3 v \Vert_{L^2(\Omega)^6}\\
&+ C \Vert \nabla \tilde{V} \Vert_{L^2(\Omega)^6} \Vert v \Vert_{H^2(\Omega)^2}^{\frac{1}{2}} \Vert v \Vert_{H^3(\Omega)^2}^{\frac{1}{2}} \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2}^{\frac{1}{2}} \Vert \partial_3 \tilde{V} \Vert_{H^1(\Omega)^2}^{\frac{1}{2}}.
\end{align*} Since \begin{align*} \int_\Omega (\partial_3 V \cdot \nabla_H V) \cdot \partial_3 V dz & = \int_\Omega (\partial_3 V \cdot \nabla_H \tilde{V}) \cdot \partial_3 V dz + \int_\Omega (\partial_3 V \cdot \nabla_H \overline{V}) \cdot \partial_3 V dz \\ & = \int_\Omega (\mathrm{div}_H \partial_3 V) \tilde{V} \cdot \partial_3 V - \int_\Omega (\partial_3 V \cdot \tilde{V}) \mathrm{div}_H \partial_3 V dz \\ & + \int_\Omega (\partial_3 V \cdot \nabla_H \overline{V}) \cdot \partial_3 V dz, \end{align*} we use Proposition \ref{prop_trigonal_estimate} to get \begin{align*} I_5 & \leq C \Vert |\tilde{V}| |\nabla \tilde{V}| \Vert_{L^2(\Omega)} \Vert \nabla_H \partial_3 \tilde{V} \Vert_{L^2(\Omega)^6} \\ & + C \Vert \partial_3 \overline{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{V} \Vert_{H^1(\Omega)^2} + C \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{V} \Vert_{H^1(\Omega)^2}. \end{align*} The H\"{o}lder inequality yields \begin{align*} I_6 + I_7 \leq C \Vert \nabla F \Vert_{L^2(\Omega)^2} \Vert \partial_3^2 V \Vert_{L^2(\Omega)^2} + C \Vert \nabla V \Vert_{L^2(\Omega)^6} \Vert \partial_3^2 V \Vert_{L^2(\Omega)^2}. \end{align*} Summing up the above inequalities and using the Young inequality, we see that there exist $L^1(0, T)$-functions $\phi_{\partial_3^{-1}L^2, 1}, \phi_{\partial_3^{-1}L^2, 2}$ and a scalar $\delta_{\partial_3^{-1}L^2} > 0$ such that \begin{align} \label{eq_L2_del3V} \begin{split} & \partial_t \Vert \partial_3 V (t) \Vert_{L^2(\Omega)^2}^2 + \mu \Vert \partial_3 V (t) \Vert_{L^2(\Omega)^2}^2 + \Vert \nabla \partial_3 V (t) \Vert_{L^2(\Omega)^6}^2\\ & \leq \phi_{\partial_3^{-1}L^2, 1}(t) \Vert \partial_3 V (t) \Vert_{L^2(\Omega)^2}^2 + \phi_{\partial_3^{-1}L^2, 2} (t) + \delta_{\partial_3^{-1}L^2} \Vert |\tilde{V}| |\nabla \tilde{V}| \Vert_{L^2(\Omega)}^2. \end{split} \end{align} By the estimates (\ref{eq_var_V_H1}), (\ref{eq_tilde_V_L4}), and (\ref{eq_L2_del3V}), we obtain \begin{align*} & \Vert \nabla_H \overline{V} (t) \Vert_{L^2( \mathbb{T} ^2)^4}^2 + \int_0^t \Vert \Delta_H \overline{V} (s) \Vert_{L^2( \mathbb{T} ^2)^2}^2 + \Vert \nabla_H \overline{\Pi} (t) \Vert_{L^2( \mathbb{T} ^2)^2}^2 ds \\ & + \Vert \tilde{V} (t) \Vert_{L^4(\Omega)^2}^4 + \int_0^t \Vert \nabla |\tilde{V} (s)|^2 \Vert_{L^2(\Omega)^3}^2 + \Vert |\tilde{V}(s)| |\nabla \tilde{V}(s)| \Vert_{L^2(\Omega)}^2 ds \\ & + \Vert \partial_3 V (t) \Vert_{L^2(\Omega)^2}^2 + \int_0^t \Vert \nabla \partial_3 V (s) \Vert_{L^2(\Omega)^6}^2 ds \\ & \leq C e^{-\mu c t + \phi_1 (t)} + \phi_2 (t) \end{align*} for some $\phi_1, \phi_2 \in L^\infty(0,T)$. \subsubsection{$L^\infty_t H^1_x$-estimates for $V$} Multiplying (\ref{eq_diff}) by $\Delta V$, integrating over $\Omega$, and applying the Young inequality, we have \begin{align*} & \frac{\partial_t}{2} \Vert \nabla V(t) \Vert_{L^2(\Omega)^6}^2 + \Vert \Delta V(t) \Vert_{L^2(\Omega)^2}^2 \\ & \leq - \mu \Vert \nabla V(t) \Vert_{L^2(\Omega)^6}^2 + C\Vert u \cdot \nabla V \Vert_{L^2(\Omega)^2}^2 + C \Vert U \cdot \nabla v \Vert_{L^2(\Omega)^2}^2 + C \Vert U \cdot \nabla V \Vert_{L^2(\Omega)^2}^2 \\ & + C \Vert \nabla_H \Pi \Vert_{L^2(\Omega)^2}^2 + C \Vert F \Vert_{L^2(\Omega)^2}^2 + \frac{1}{4} \Vert \Delta V \Vert_{L^2(\Omega)^2}^2 \\ & =: I_1 + I_2 + I_3 + I_4 + I_5 + I_6 + I_7.
\end{align*} Proceeding in the same way as in \cite{HieberKashiwabara2016}, we see \begin{align*} I_4 & \leq C \Vert \tilde{V} \cdot \nabla_H \tilde{V} \Vert_{L^2(\Omega)^2}^2 + C \Vert \overline{V} \cdot \nabla_H \tilde{V} \Vert_{L^2(\Omega)^2}^2\\ & + C \Vert \tilde{V} \cdot \nabla_H \overline{V} \Vert_{L^2(\Omega)^2}^2 + C \Vert \tilde{V} \cdot \nabla_H \overline{V} \Vert_{L^2(\Omega)^2}^2 + C \left \Vert \int_{-l}^{x_3} \mathrm{div}_H \, \tilde{V} dz \partial_3 \tilde{V} \right \Vert_{L^2(\Omega)^2}^2 \\ & \leq C \Vert |\tilde{V}||\nabla \tilde{V}| \Vert_{L^2(\Omega)}^2 + C \Vert \overline{V} \Vert_{H^1( \mathbb{T} ^2)^2}^2 \Vert \nabla_H \tilde{V} \Vert_{L^2(\Omega)^4} \Vert \nabla_H \tilde{V} \Vert_{H^1(\Omega)^4} \\ & + C \Vert \tilde{V} \Vert_{L^4(\Omega)^2}^2 \Vert \nabla_H \overline{V} \Vert_{L^2( \mathbb{T} ^2)^4} \Vert \nabla_H \overline{V} \Vert_{H^1( \mathbb{T} ^2)^4} \\ & + C \Vert \overline{V} \Vert_{H^1( \mathbb{T} ^2)^2}^2 \Vert \nabla_H \overline{V} \Vert_{L^2( \mathbb{T} ^2)^4} \Vert \nabla_H \overline{V} \Vert_{H^1( \mathbb{T} ^2)^4} \\ & + C \Vert \tilde{V} \Vert_{L^2(\Omega)^2}^2 \Vert \nabla_H \tilde{V} \Vert_{H^1(\Omega)^4} \Vert \partial_3 \tilde{V} \Vert_{L^2(\Omega)^2} \Vert \partial_3 \tilde{V} \Vert_{H^1(\Omega)^2} \end{align*} and also \begin{align*} I_2 + I_3 & \leq C \Vert v \Vert_{H^2(\Omega)^2}^2 \Vert \nabla V \Vert_{L^2}^2 + C \Vert \nabla_H v \Vert_{L^2(\Omega)^4} \Vert \nabla_H v \Vert_{H^1(\Omega)^4} \Vert \partial_3 V \Vert_{L^2(\Omega)^2} \Vert \partial_3 V \Vert_{H^1(\Omega)^2} \\ & + \Vert V \Vert_{L^2(\Omega)^2}^2 \Vert v \Vert_{H^3(\Omega)^2}^2 + \Vert \nabla_H V \Vert_{L^2(\Omega)^4} \Vert \nabla_H V \Vert_{H^1(\Omega)^4} \Vert \partial_3 v \Vert_{L^2(\Omega)^2} \Vert \partial_3 v \Vert_{H^1(\Omega)^2}. \end{align*} Therefore, there exist positive functions $\phi_{H^1, 1}\in L^\infty(0, T)$ and $\phi_{H^1, 2} \in L^1(0, T)$ such that \begin{align} & \Vert \nabla V(t) \Vert_{L^2(\Omega)^2}^2 + \Vert \Delta V(t) \Vert_{L^2(\Omega)^2}^2 \leq \phi_{H^1, 1} (t) + \int_0^t \phi_{H^1, 2} (s) ds. \end{align} \subsubsection{$\mathbb{E}_{1, 1, 2, 2}$-estimate for $V$} Using the $L^2_t$-$L^2_x$ maximal regularity of $\tilde{A}$, we see \begin{align*} & \Vert V \Vert_{\mathbb{E}_{1,1, 2, 2}(T)} \\ & \leq C \Vert (\partial_t + \tilde{A})V \Vert_{L^2(0, T; L^2(\Omega)^2)} \\ & \leq C \Vert U \cdot \nabla V \Vert_{L^2(0, T; L^2(\Omega)^2)} + C \Vert U \cdot \nabla v \Vert_{L^2(0, T; L^2(\Omega)^2)} + C \Vert u \cdot \nabla V \Vert_{L^2(0, T; L^2(\Omega)^2)} \\ & + C \mu \Vert J_\delta V \Vert_{L^2(0, T; L^2(\Omega)^2)} + C \Vert F \Vert_{L^2(0, T; L^2(\Omega)^2)}.
\end{align*} Since \begin{align*} & \Vert U \cdot \nabla V \Vert_{L^2(0, T; L^2(\Omega)^2)} \\ & \leq C \Vert V \Vert_{L^2(0, T; H^2(\Omega)^2)} \Vert \nabla V \Vert_{L^\infty(0, T; H^2(\Omega)^6)} + C \Vert \nabla V \Vert_{L^\infty(0, T; H^2(\Omega)^6)} \Vert \nabla V \Vert_{L^2(0, T; H^1(\Omega)^6)}, \\ & \Vert U \cdot \nabla v \Vert_{L^2(0, T; L^2(\Omega)^2)} \\ &\leq C \Vert V \Vert_{L^2(0, T; H^2(\Omega)^2)} \Vert \nabla v \Vert_{L^2(0, T; L^2(\Omega)^6)} \\ & + C \Vert \nabla V \Vert_{L^2(0, T; L^2(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla V \Vert_{L^\infty(0, T; H^1(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla v \Vert_{L^\infty(0, T; H^2(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla v \Vert_{L^\infty(0, T; H^1(\Omega)^6)}^{\frac{1}{2}},\\ & \Vert u \cdot \nabla V \Vert_{L^2(0, T; L^2(\Omega)^2)} \\ & \leq C \Vert v \Vert_{L^2(0, T; H^2(\Omega)^2)} \Vert \nabla V \Vert_{L^\infty(0, T; H^2(\Omega)^6)} \\ & + C \Vert \nabla v \Vert_{L^\infty(0, T; H^2(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla v \Vert_{L^\infty(0, T; H^1(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla V \Vert_{L^2(0, T; L^2(\Omega)^6)}^{\frac{1}{2}} \Vert \nabla V \Vert_{L^2(0, T; H^1(\Omega)^6)}^{\frac{1}{2}}, \end{align*} we conclude $V \in \mathbb{E}_{1,1,2,2}(T)$ for $T > 0$. Moreover, we find $V \in H^{2,2} (0, T; L^2(\Omega)^2) \cap H^{1,2} (0, T; H^{2, 2}(\Omega)^2)$ from $F \in H^{1, 2}(0, T; L^2(\Omega)^2)$ and Proposition \ref{prop_local_wellposedness_maximal_regularity} for $p = q = 2$. \begin{proof}[Proof of Theorem \ref{thm_main_thoerem}] The theorem is a direct consequence of Lemma \ref{lem_pre_of_main_theorem} and the a priori bounds in $H^2(\Omega)$. \end{proof} \begin{remark} The nudging-type DA does not seem to be used as a practical method for weather or ocean prediction. Scientists and engineers in these fields use other types of DA, $e.g.$, the ensemble Kalman filter (EnKF), 3D-VAR, 4D-VAR, etc. From the mathematical point of view, one reason may be the regularity of the solution. Consider a system of the form \begin{align} \label{eq_u_j+1=Fu_j} u(t_{j+1}) = F(u(t_j)). \end{align} In the 3D-VAR scenario, for example, we modify the estimate $\tilde{u}_j$ and error covariance $\tilde{\sigma}_j$ at $t_j>0$ when we obtain the observation $H u_j + (\text{noise})$ at $t_j$, where $H$ is an observation operator and $u_j$ is the true state at $t_j$. Then we forward $\tilde{u}_j$ to $t_{j+1}$ by (\ref{eq_u_j+1=Fu_j}), and repeat the procedures of 3D-VAR. If we consider a parabolic equation, $e.g.$ the Navier-Stokes equations or the primitive equations in a smooth domain, even if $\tilde{u}_j$ is not regular, the linear part $e^{-tA} \tilde{u}_j$ is immediately regularized and then becomes smooth in the time and spatial variables. On the other hand, in the nudging-type DA, we insert the effect of the observation into the right-hand side of the equation as an external force. Even if the true solution is smooth, the observation is not always smooth. As a result, the solution to the DA equation gains at most first-order time regularity and second-order spatial regularity, $i.e.$ $\tilde{u} \in H^{2,p}(0, T;L^q(\Omega)) \cap L^p(0, T;H^{2, q}(\Omega))$ if the observation belongs to $L^p(0,T; L^q(\Omega))$. The solution $\tilde{u}$ is therefore not as smooth as some weather and ocean prediction systems based on numerical simulations assume. When we seek the solution by numerical simulations, the regularity of the solution is crucial to the accuracy.
Some ocean model systems require fourth- or higher-order spatial approximation, which means the solution to the DA equation must have at least $H^{4, q}(\Omega)$ or $C^4$ regularity. Such systems are therefore not compatible with the DA equation based on the nudging method. \end{remark} \begin{appendices} \section{$L^2_t H^3_x$ estimate for the solution to the primitive equations} In this appendix, we show the $L^2_t H^3_x$ a priori estimate \begin{align} \label{eq_L2_H3} \Vert \Delta v (t) \Vert_{L^2(\Omega)^2} + \int_0^t \Vert \nabla \Delta v (s) \Vert_{L^2(\Omega)^6} ds < \infty \end{align} for all $t>0$ if $f \in L^2_t H^1_x ((0, \infty) \times \Omega)$. Although Giga et al. \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity} proved $L^\infty_t H^2_x$-estimates and lower-order estimates, they did not show the $L^2_t H^3_x$ a priori estimate. Let $\psi \in H^1_t L^2_{\overline{\sigma}, x}((0, \infty) \times \Omega) \cap L^2_t H^2_{\sigma, x} ((0, \infty)\times \Omega)$ be a solution to the hydrostatic Stokes equations \begin{align} \label{eq_hydrostatic_Stokes_appendix} \begin{split} \partial_t \psi + A_2 \psi & = g \in L^2_t H^1_{\overline{\sigma}, x} ((0, \infty) \times \Omega), \\ \psi(0) & = \psi_0 \in H^2_{\overline{\sigma}}(\Omega), \end{split} \end{align} with boundary conditions (\ref{eq_bound_conditions}). Assume $\psi$ satisfies \begin{align} \int_0^t \Vert \nabla \partial_t \psi (s) \Vert_{L^2 (\Omega)^6} ds < \infty \end{align} for all $t > 0$. We denote $D^h_j \psi (x) = (\psi(x + h e_j) - \psi(x))/h$ for $j = 1, 2, 3$ and $h > 0$. Then $D^h_j \psi$ satisfies (\ref{eq_hydrostatic_Stokes_appendix}) with an external force $D^h_j g$ and initial data $D^h_j \psi_0$. We find \begin{align*} & \int^t_0 \Vert \Delta D^h_j \psi \Vert_{L^2(\Omega)^2}^2 ds \leq C \int^t_0 \Vert A_2 D^h_j \psi \Vert_{L^2(\Omega)^2}^2 ds \\ & \leq C \int^t_0 \Vert D^h_j \partial_t \psi \Vert_{L^2(\Omega)^2}^2 ds + C \int^t_0 \Vert D^h_j g \Vert_{L^2(\Omega)^2}^2 ds\\ & \leq C \int^t_0 \Vert \partial_j \partial_t \psi \Vert_{L^2(\Omega)^2}^2 ds + C \int^t_0 \Vert \partial_j g \Vert_{L^2(\Omega)^2}^2 ds. \end{align*} Since the right-hand side is uniform in $h$, we take the $\limsup$ to get \begin{align*} \int^t_0 \Vert \partial_j \psi \Vert_{H^2(\Omega)^2}^2 ds \leq C \int^t_0 \Vert \partial_j \partial_t \psi \Vert_{L^2(\Omega)^2}^2 ds + C \int^t_0 \Vert \partial_j g \Vert_{L^2(\Omega)^2}^2 ds. \end{align*} We refer the reader to Sections 3.1.5 and 4.2.7 of the book \cite{Sohr2001} by Sohr. Therefore, we see \begin{align*} & \int_0^t \Vert \partial_j v \Vert_{H^2(\Omega)^2}^2 ds \\ & \leq C \int_0^t \Vert \partial_t \partial_j v \Vert_{L^2(\Omega)^2}^2 ds + C \int_0^t \Vert \partial_j \left( v \cdot \nabla_H v + w \partial_3 v \right) \Vert_{L^2(\Omega)^2}^2 ds + C \int_0^t \Vert \partial_j f \Vert_{L^2(\Omega)^2}^2 ds. \end{align*} The first term is bounded by the results of \cite{GigaGriesHieberHusseinKashiwabara2017_analiticity}. The Sobolev inequalities and the interpolation inequalities yield \begin{align*} & \int_\Omega \left| \nabla v \cdot \nabla_H v \right|^2 dx \leq C \Vert \nabla v \Vert_{L^4(\Omega)^6}^2 \leq C \Vert \nabla v \Vert_{L^2(\Omega)^6}^{3/2} \Vert \nabla v \Vert_{H^1(\Omega)^6}^{1/2}, \\ & \int_\Omega \left| v \cdot \nabla \nabla_H v \right|^2 dx \\ & \leq C \Vert v \Vert_{L^\infty(\Omega)^2} \Vert \nabla^2 v \Vert_{L^2(\Omega)^6} \leq C \Vert v \Vert_{H^1(\Omega)^2}^{\frac{1}{2}} \Vert v \Vert_{H^2(\Omega)^2}^\frac{1}{2} \Vert v \Vert_{\dot{H}^2(\Omega)}.
\end{align*} In the same way, we see \begin{align*} \int_\Omega \left | \nabla w \partial_3 v \right |^2 dx & \leq C \Vert \nabla v \Vert_{L^2(\Omega)^6} \Vert \nabla v \Vert_{H^1(\Omega)^6} \Vert \partial_3 v \Vert_{L^2(\Omega)^2} \Vert \partial_3 v \Vert_{H^1(\Omega)^2},\\ & \leq C \Vert v \Vert_{H^1(\Omega)^2}^2 \Vert \nabla v \Vert_{H^1(\Omega)^6}^2\\ \int_\Omega \left | w \nabla \partial_3 v \right |^2 dx & \leq C \Vert \nabla_H v \Vert_{L^2(\Omega)^4} \Vert \nabla_H v \Vert_{H^1(\Omega)^4} \Vert \partial_3 v \Vert_{L^2(\Omega)^2} \Vert \partial_3 v \Vert_{H^1(\Omega)^2}\\ & \leq C \Vert v \Vert_{H^1(\Omega)^2}^2 \Vert \nabla v \Vert_{H^1(\Omega)^6}^2. \end{align*} Using the Young inequality, we conclude (\ref{eq_L2_H3}). \section{Remarks for Semigroup Based Approach} The main result of this paper is a mathematical justification of the DA in $L^p$-$L^q$ based maximal regularity settings. However, an analytic semigroup approach, such as that of Hieber and Kashiwabara \cite{HieberKashiwabara2016}, is also applicable. As in \cite{HieberKashiwabara2016}, we set \begin{align} \begin{split} & \mathcal{S}_{q, \eta, T} = \Set{ v \in C([0, T]; H^{2/q, q}_{\overline{\sigma}}(\Omega)) \cap C((0, T]; H^{1 + 1/q, q}_{\overline{\sigma}}(\Omega)) }{\Vert v \Vert_{\mathcal{S}_{q, \eta, T}} < \infty}, \\ & \Vert v \Vert_{\mathcal{S}_{q, \eta, T}} := \sup_{0 < t < T} e^{\eta t} \Vert v(t) \Vert_{H^{2/q, q}_{\overline{\sigma}}(\Omega)} + \sup_{0 < t < T} e^{\eta t} t^{1/2 - 1/2q} \Vert v(t) \Vert_{H^{1 + 1/q, q}_{\overline{\sigma}}(\Omega)}, \\ & \widetilde{\mathcal{S}}_{q, \eta, T} = \Set{ v \in C((0, T]; H^{1 + 1/q, q}_{\overline{\sigma}}(\Omega)^2) }{\Vert v \Vert_{\widetilde{\mathcal{S}}_{q, \eta, T}} < \infty}, \\ & \Vert v \Vert_{\widetilde{\mathcal{S}}_{q, \eta, T}} := \sup_{0 < t < T} e^{\eta t} t^{1/2 - 1/2q} \Vert v(t) \Vert_{H^{1 + 1/q, q}_{\overline{\sigma}}(\Omega)}. \end{split} \end{align} We set $X_q = \Set{\varphi \in H^{2/q, q}_{\overline{\sigma}}(\Omega)^2}{\varphi|_{\Gamma_b} = 0}$. Using this kind of method based on analytic semigroup theory, as in \cites{HieberKashiwabara2016,HieberHusseingKashiwabara2016}, we can show \begin{theorem} \label{thm_supplement_thoerem} Let $1 < p < \infty$ and $0 < \alpha < 1$. Let $v_0 \in X_q$ be initial data. Let $f \in H^1_{loc} (0, \infty; L^q(\Omega)^2)$ satisfy \begin{align*} \sup_{t>0} e^{\gamma_0 t} \Vert f(t) \Vert_{L^q(\Omega)^2} + \sup_{0 < s < t} e^{\gamma_0 s} s^{1 - \frac{1}{q}} \Vert f (s) \Vert_{L^q(\Omega)^2} & < \infty,\\ \Vert f \Vert_{H^1_{loc}(0, \infty; L^2(\Omega)^2 \cap L^q(\Omega)^2)} + \Vert e^{\gamma_0 t} f \Vert_{L^2(0, \infty; H^1(\Omega)^2)} & < \infty, \end{align*} for some constant $\gamma_0 > 0$. Assume $v \in C(0, \infty; X_q)$ is the solution to (\ref{eq_primitive}), which is obtained by \cite{HieberHusseingKashiwabara2016} with zero temperature and salinity, satisfying \begin{gather} \label{eq_bounds_for_v_2} \Vert v \Vert_{\mathcal{S}_{q, \gamma_1, \infty}} < \infty \end{gather} for some constant $C_0>0$ and $\gamma_1 < \gamma_0$. Then there exist $\mu_0, \delta_0>0$ such that, if $\mu \geq \mu_0$ and $\delta \leq \delta_0$, there exists a unique solution $V \in C(0, \infty;X_{1/q, p, q})$ to (\ref{eq_nudging}) such that \begin{align*} \Vert V \Vert_{\mathcal{S}_{q, {\mu_\ast}, \infty}} < \infty \end{align*} for some constants $\gamma_0 < \mu_\ast < \gamma_1$ and $C>0$. Moreover, $V$ satisfies \begin{align*} \Vert \partial_t V (t) \Vert_{L^q(\Omega)^2} + \Vert V (t) \Vert_{D(\tilde{A}_{q, \mu})} = O(e^{- \mu_\ast t /2}).
\end{align*} \end{theorem} \begin{remark} The assumption on the regularity for the initial data in Theorem \ref{thm_supplement_thoerem} is stronger than that in Theorem \ref{thm_main_thoerem} since $H^{2/q, q}(\Omega) \hookrightarrow B^{2/q}_{q, p}(\Omega)$ for $p > 2$. \end{remark} We show the local well-posedness in the $C(0,T_\ast ; L^q(\Omega)^2)$ framework with the analytic semigroup $e^{- t \tilde{A}}$ and the Fujita-Kato principle. The mild solution is given by \begin{align} \label{eq_int_eq_of_V} \begin{split} V (t) &= e^{- t \tilde{A}} V_0 + \int_0^t e^{- (t - s) \tilde{A}} P \left( - U (s) \cdot \nabla V (s) + u (s) \cdot \nabla V (s) + U (s) \cdot \nabla v (s) \right) ds \\ & + \int_0^t e^{- (t - s) \tilde{A}} P F(s) ds \\ & =: N_1(V, v) + N_2(F) := N(V, v) \end{split} \end{align} \begin{lemma}\label{lem_local_or_small_data_wellposedness_semigroup_settings} Let $T>0$, $0 \leq \tilde{\mu} \leq \mu_\ast$, and $V_0, v_0 \in H^{2/q, q}(\Omega)^2$. Let $f \in C(0, T; L^q(\Omega)^2)$ satisfy \begin{align*} \sup_{0 < s < T} e^{\gamma_0 s} s^{1 - \frac{1}{q}} \Vert f (s) \Vert_{L^q(\Omega)^2} < \infty, \end{align*} for some constant $\gamma_0 > 0$. Let $v$ satisfy the same assumptions as Theorem \ref{thm_supplement_thoerem}. Then there exists $T_0 > 0$ such that, if $T \leq T_0$, the integral equation (\ref{eq_int_eq_of_V}) admits a unique solution $V \in C(0,T; H^{2/q, q}_{\overline{\sigma}}(\Omega)^2)$ such that \begin{align} \Vert V \Vert_{\mathcal{S}_T} \leq C. \end{align} Moreover, there exists $\varepsilon_0>0$ such that, if $\Vert v_0 \Vert_{H^{2/q, q}(\Omega)^2}, \sup_{0 < s < T} e^{\gamma_0 s} s^{1 - \frac{1}{q}} \Vert f (s) \Vert_{L^q(\Omega)^2}\leq \varepsilon_0$, then one can take $T = \infty$. \end{lemma} \begin{proof} We prove $N : \mathcal{S}_T \rightarrow \mathcal{S}_T$ is a contraction mapping if $T$ or $v_0$ is sufficiently small. We find from Proposition \ref{prop_bilinear_estimate_sobolev} that \begin{align} \label{eq_bound_H_2_q} \begin{split} & e^{\tilde{\mu}t} \Vert N_1(V, v) \Vert_{H^{2/q, q}(\Omega)^2} \\ & \leq \int_0^t e^{\tilde{\mu}t} e^{- \mu_\ast (t - s)} (t - s)^{ - \frac{1}{q}} \\ &\times \left( \Vert U (s) \cdot \nabla V (s) \Vert_{L^q(\Omega)^2} + \Vert u (s) \cdot \nabla V (s) \Vert_{L^q(\Omega)^2} + \Vert U (s) \cdot \nabla v (s) \Vert_{L^q(\Omega)^2} \right) ds \\ & \leq C \int_0^t e^{\tilde{\mu} (t - s)} e^{ - \mu_\ast (t - s)} (t - s)^{ - \frac{1}{q}} s^{- 1 - 1/q} ds\\ & \times \left( (\sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2})^2 \right. \\ & \left. + \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert v (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \right)\\ & \leq C (\sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2})^2 \\ & + C \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert v (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2}.
\end{split} \end{align} We find from Propositions \ref{prop_pointwise_estimate_semigroup} and \ref{prop_bilinear_estimate_sobolev} that \begin{align*} & e^{\tilde{\mu}t} \Vert N_1(V, v) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \\ & \leq \int_0^t e^{\tilde{\mu}t} e^{- \mu_\ast (t - s)} (t - s)^{ - \frac{1}{2} - \frac{1}{2q}} \\ & \quad \quad \times \left( \Vert U (s) \cdot \nabla V (s) \Vert_{L^q(\Omega)^2} + \Vert u (s) \cdot \nabla V (s) \Vert_{L^q(\Omega)^2} + \Vert U (s) \cdot \nabla v (s) \Vert_{L^q(\Omega)^2} \right) ds \\ & \leq C t^{- \frac{1}{2} + \frac{1}{2q}} (\sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}} \Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2})^2 \\ & + C \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert v (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2}. \end{align*} Similarly we have \begin{align*} & e^{\tilde{\mu}t} \Vert N_1(V_1, v) - N_1(V_2, v) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \\ & \leq C t^{- \frac{1}{2} + \frac{1}{2q}} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{\frac{1}{2} - \frac{1}{2q}}\Vert V_1 (s) - V_2 (s) \Vert_{H^{1 + 1/q, q}(\Omega)^2} \\ & \times \left( \Vert V_1 (s) \Vert_{\widetilde{\mathcal{S}}_{q, \tilde{\mu}, T}} + \Vert V_2 (s) \Vert_{\widetilde{\mathcal{S}}_{q, \tilde{\mu}, T}} + \Vert v (s) \Vert_{\widetilde{\mathcal{S}}_{q, \tilde{\mu}, T}} \right). \end{align*} Since $e^{- t \tilde{A}}$ is analytic, we see \begin{align*} e^{\tilde{\mu}t} \Vert e^{- t \tilde{A}} V_0 \Vert_{H^{2/q, q}(\Omega)^2} & \leq C \Vert V_0 \Vert_{H^{2/q, q}(\Omega)^2}, \\ e^{\tilde{\mu}t} \Vert e^{- t \tilde{A}} V_0 \Vert_{H^{1 + 1/q, q}(\Omega)^2} & \leq C t^{- \frac{1}{2} + \frac{1}{2q}} \Vert V_0 \Vert_{H^{2/q, q}(\Omega)^2}. \end{align*} Moreover, since $D(\tilde{A})$ is densely embedded into $H^{s, q}(\Omega)^2$ for $s \in [0, 2)$, we see \begin{align*} \Vert e^{- t \tilde{A}} V_0 \Vert_{H^{2/q, q}(\Omega)^2} \rightarrow 0 \quad \text{as} \quad t \rightarrow 0. \end{align*} We find from Proposition \ref{prop_pointwise_estimate_semigroup} that \begin{align*} & e^{\tilde{\mu}t} \left \Vert \int_0^t e^{- (t - s) \tilde{A}} F (s) ds \right \Vert_{H^{2/q, q}(\Omega)^2} \\ & \leq C \int_0^t e^{\tilde{\mu}t} e^{- \mu_\ast (t - s)} (t - s)^{- \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2} ds \\ & \leq C \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2}, \end{align*} and \begin{align*} & e^{\tilde{\mu}t} \left \Vert \int_0^t e^{- (t - s) \tilde{A}} F (s) ds \right \Vert_{H^{1 + 1/q, q}(\Omega)^2} \\ & \leq \int_0^t e^{\tilde{\mu}t} e^{- \mu_\ast (t - s)} (t - s)^{ - \frac{1}{2} - \frac{1}{2q}} \Vert F (s) \Vert_{L^q(\Omega)^2} ds \\ & \leq C t^{- \frac{1}{2} + \frac{1}{2q}} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2}. \end{align*} We show continuity of $\int_0^t e^{- (t - s) \tilde{A}} P F (s) ds$ with respect to $t$. For $h > 0$ \begin{align*} & \int_0^{t + h} e^{- (t + h - s) \tilde{A}} P F (s) ds - \int_0^t e^{- (t - s) \tilde{A}} P F (s) ds \\ & = \int_t^{t + h} e^{- (t + h - s) \tilde{A}} P F (s) ds + \int_0^t \left( e^{- h \tilde{A}} - \mathrm{id} \right) e^{- (t - s) \tilde{A}} P F (s) ds \\ & =: I_1 + I_2. 
\end{align*} By estimates similar to those above, we see \begin{align*} & e^{\tilde{\mu}t} \left \Vert I_1 \right \Vert_{H^{2/q, q}(\Omega)^2} \leq C \int_t^{t + h} (t - s)^{- \frac{1}{q}} s^{- 1 + \frac{1}{q}} ds \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2}. \end{align*} Since the integrand is in $L^1(t, t + h)$ uniformly for small $h > 0$, we have $\Vert I_1 \Vert_{H^{2/q, q}(\Omega)^2} \rightarrow 0$ as $h \rightarrow 0$ for $t \geq 0$. Moreover, the integrand of $I_2$ satisfies \begin{align*} & e^{\tilde{\mu}t} \left \Vert \left( e^{- h \tilde{A}} - \mathrm{id} \right) e^{- (t - s) \tilde{A}} P F (s) \right \Vert_{H^{2/q, q}(\Omega)^2} \\ & \leq C (t - s)^{- \frac{1}{q}} s^{- 1 + \frac{1}{q}} \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2} \in L^1(0,t). \end{align*} Lebesgue's dominated convergence theorem yields $\left \Vert I_2 \right \Vert_{H^{2/q, q}(\Omega)^2} \rightarrow 0$ for $t \geq 0$. In a similar way, we have $t^{\frac{1}{2} - \frac{1}{2q}}\left \Vert I_1 \right \Vert_{H^{1 + 1/q, q}(\Omega)^2} + t^{\frac{1}{2} - \frac{1}{2q}}\left \Vert I_2 \right \Vert_{H^{1 + 1/q, q}(\Omega)^2} \rightarrow 0$ for $t > 0$. The case $h<0$ can be treated similarly. Therefore, $\int_0^t e^{- (t - s) \tilde{A}} P F (s) ds$ is continuous in $t$. We also obtain continuity of $N_1(V, v)$ in $t$ in a similar way. Combining the above estimates, we obtain the quadratic inequality \begin{align*} & \Vert N(V, v) \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} \\ & \leq C_2 \Vert V \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}}^2 + C_1 \Vert V \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} \Vert v \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} \\ & + C_0 \left( \sup_{0 < t < T} e^{\tilde{\mu}t} t^{\frac{1}{2} - \frac{1}{2q}} \Vert e^{- t \tilde{A}} V_0 \Vert_{H^{1 + 1/q, q}(\Omega)^2} + C \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2} \right). \end{align*} We assume $\Vert v \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}}$ is sufficiently small, say $\Vert v \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} \leq \frac{1}{2C_1}$. The quadratic estimate implies that, if we take $T>0$ so small, or $\Vert V_0 \Vert_{H^{2/q, q}(\Omega)^2}$ and $\sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2}$ so small, that \begin{align*} & R := 16 C_2 C_0 \left( \sup_{0 < t < T} e^{\tilde{\mu}t} t^{\frac{1}{2} - \frac{1}{2q}} \Vert e^{- t \tilde{A}} V_0 \Vert_{H^{1 + 1/q, q}(\Omega)^2} + C \sup_{0 < s < t} e^{\tilde{\mu}s} s^{1 - \frac{1}{q}} \Vert F (s) \Vert_{L^q(\Omega)^2} \right) \leq \frac{1}{2}, \end{align*} then \begin{align*} \Vert V \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} \leq \frac{ 1 - \sqrt{ 1 - R } }{2 C_2} =: R_\ast. \end{align*} We find $N (\cdot, v)$ is a self-mapping on $\Set{V \in \widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}}{\Vert V \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}} < R_\ast}$ and, if we take $R$ and $\Vert v \Vert_{\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}}$ sufficiently small again, $N (\cdot, v)$ is a contraction mapping. Banach's fixed point theorem implies there exists a unique mild solution $V$ to (\ref{eq_int_eq_of_V}) in $\widetilde{\mathcal{S}}_{\tilde{\mu}, q, T}$. The estimate (\ref{eq_bound_H_2_q}) implies $N (V, v) \in \mathcal{S}_{\tilde{\mu}, q, T}$, and hence $V \in \mathcal{S}_{\tilde{\mu}, q, T}$. \end{proof} We improve the regularity of the solution $V$ to (\ref{eq_int_eq_of_V}) for regular $F$.
\begin{proposition} \label{prop_exponential_stability_semigroup_with_additional_regularity} Let $0 \leq \tilde{\mu} \leq \mu_\ast$, $T > 0$, and $1 < q < \infty$. Let $f \in C^\alpha(0, T; L^q(\Omega)^2)$ be such that \begin{gather}\label{eq_pointwise_estimate_f} \begin{split} \Vert f (t) \Vert_{L^q(\Omega)^2} \leq C t^{- \beta} e^{-\gamma t}, \\ \Vert f (t + \tau) - f (t) \Vert_{L^q(\Omega)^2} \leq C \tau^{\alpha} t^{- \beta} e^{-\gamma t}, \end{split} \end{gather} for $0 < \alpha < 1$ and $\beta, \gamma > 0$. Define $\phi$ by \begin{align*} \phi(t) = \int_{0}^t e^{- (t - s)\tilde{A}_{\mu, q}} P f (s) ds. \end{align*} Then $\phi$ satisfies \begin{align*} \Vert \phi (t) \Vert_{L^q(\Omega)^2} \leq C t^{1 - \beta} ( e^{- \gamma t} + e^{- \mu_\ast t/2} ) \end{align*} and \begin{align*} &\Vert \partial_t \phi (t) \Vert_{L^q(\Omega)^2} + \Vert \tilde{A}_{\mu, q} \phi (t) \Vert_{L^q(\Omega)^2} \\ &\leq C ( t^{\alpha - \beta}e^{- \gamma t/2} + t^{- \beta} e^{- (\mu_\ast /2 + \gamma) t} + t^{- \beta} e^{- \mu_\ast t /2} ) \end{align*} for some constant $C > 0$. In particular, if $\mu_\ast \leq \gamma$, it follows that \begin{align*} \Vert \partial_t \phi (t) \Vert_{L^q(\Omega)^2} + \Vert \tilde{A}_{\mu, q} \phi (t) \Vert_{L^q(\Omega)^2} = O(t^{-\beta} e^{- \mu_\ast t /2}). \end{align*} \end{proposition} \begin{proof} We first observe \begin{align*} & \left \Vert \int_{0}^t e^{- (t - s)\tilde{A}_{\mu, q}} P f (s) ds \right \Vert_{L^q(\Omega)^2}\\ & \leq \int_{t/2}^t s^{- \beta} e^{- \gamma s} ds + \int_0^{t/2} e^{- \mu_\ast t/2} s^{- \beta} ds \\ & \leq C t^{1 - \beta} ( e^{- \gamma t} + e^{- \mu_\ast t/2} ). \end{align*} We split $\tilde{A}_{\mu, q} \phi$ into three parts such that \begin{align*} \tilde{A}_{\mu, q} \phi & = \int_{\frac{t}{2}}^t \tilde{A}_{\mu, q} e^{- (t - s) \tilde{A}_{\mu, q}} P \left( - f (t) + f(s) \right) ds \\ & + \int_{\frac{t}{2}}^t \tilde{A}_{\mu, q} e^{- (t - s) \tilde{A}_{\mu, q}} P f (t) ds \\ & + \int_0^{\frac{t}{2}} \tilde{A}_{\mu, q} e^{- (t - s) \tilde{A}_{\mu, q}} P f (s) ds \\ & = : I_1 + I_2 + I_3. \end{align*} By the pointwise estimates (\ref{eq_pointwise_estimate_f}) and the estimate \begin{align*} & \Vert \tilde{A}_{\mu, q} e^{- (t - s) \tilde{A}_{\mu, q}} P \left( f (t) - f(s) \right) \Vert_{L^q(\Omega)^2} \\ & \leq C e^{- \mu_\ast (t - s)} (t - s)^{-1} \Vert f(t) - f(s) \Vert_{L^q(\Omega)^2} \\ & \leq C e^{- \mu_\ast (t - s)} (t - s)^{- 1 +\alpha} s^{- \beta} e^{- \gamma s}, \end{align*} we see \begin{align*} \Vert I_1 \Vert_{L^q(\Omega)^2} \leq C \int_{\frac{t}{2}}^t (t - s)^{- 1 + \alpha} s^{- \beta} e^{- \gamma s} ds \leq C t^{\alpha - \beta} e^{- \gamma t /2}. \end{align*} By (\ref{eq_pointwise_estimate_f}), we find \begin{align*} \Vert I_3 \Vert_{L^q(\Omega)^2} \leq C \int_0^{\frac{t}{2}} e^{- \mu_\ast t /2} (t - s)^{- 1} s^{- \beta} ds \leq C t^{- \beta} e^{- \mu_\ast t /2}. \end{align*} Since \begin{align*} \int_{\frac{t}{2}}^{t} \tilde{A}_{\mu, q} e^{ - (t - s) \tilde{A}_{\mu, q}} P f(t) ds & = \int_{\frac{t}{2}}^{t} \frac{d}{ds} e^{ - (t - s) \tilde{A}_{\mu, q}} P f(t) ds \\ & = P \left( e^{- t \tilde{A}_{\mu, q}} f (t) - e^{- \frac{t}{2} \tilde{A}_{\mu, q}} f (t) \right) \end{align*} and $e^{- t \tilde{A}_{\mu, q}}$ is an analytic semigroup, we have \begin{align*} \Vert I_2 \Vert_{L^q(\Omega)^2} \leq C t^{- \beta} e^{- (\mu_\ast /2 + \gamma) t}. \end{align*} Since $\Vert \partial_t \phi (t) \Vert_{L^q(\Omega)^2} \leq \Vert \tilde{A}_{\mu, q} \phi (t) \Vert_{L^q(\Omega)^2} + \Vert f (t) \Vert_{L^q(\Omega)^2}$, we have the conclusion.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm_supplement_thoerem}] By Proposition \ref{prop_bilinear_estimate_bilinear_maximal_regularity} for $p = q = 2$ we first deduce that there exists an $L^2$-solution $V \in C(0, \infty; H^1(\Omega)^2) \cap C(0, T; D(\tilde{A}_{2, \mu}))$ such that \begin{align*} \Vert V(t) \Vert_{H^1(\Omega)^2} \leq C e^{- \mu_\ast t} \end{align*} for some constant $C>0$. Moreover, we repeat the same argument as in the proof of Theorem \ref{thm_main_thoerem} for $p = q = 2$ to see that there exist a small $\varepsilon > 0$ and a large $T_\ast > 0$ such that $\Vert V (T_\ast) \Vert_{D(\tilde{A}_{2, \mu})} \leq \varepsilon$. Applying Lemma \ref{lem_local_or_small_data_wellposedness_semigroup_settings}, we have $V \in \mathcal{S}_{q, {\mu_\ast}, \infty}$. By the assumption on $f$ and Proposition \ref{prop_bilinear_estimate_sobolev} we have \begin{align} \label{eq_decay_of_external_forces} \begin{split} & \Vert F \Vert_{L^2(\Omega)} = O (e^{- \gamma_0 t}), \\ & \Vert U(t) \cdot \nabla V(t) \Vert_{L^2(\Omega)} = O(e^{- 2 \mu_\ast t}), \\ & \Vert u(t) \cdot \nabla V(t) \Vert_{L^2(\Omega)} + \Vert U(t) \cdot \nabla v(t) \Vert_{L^2(\Omega)} = O(e^{- (\gamma_1 + \mu_\ast) t}), \end{split} \end{align} as $t \rightarrow \infty$. We use Proposition \ref{prop_exponential_stability_semigroup_with_additional_regularity} to conclude \begin{align} \label{eq_decay_of_Lq_solution} \Vert \partial_t V (t) \Vert_{L^2(\Omega)^2} + \Vert V (t) \Vert_{D(\tilde{A}_{2, \mu})} = O(e^{- \mu_\ast t /2}). \end{align} Note that if we take $\mu$ sufficiently large, the decay rate $e^{- \mu_\ast t /2}$ is larger than that of $\Vert \partial_t v (t) \Vert_{L^2(\Omega)^2} + \Vert v(t) \Vert_{D(A_2)}$. Lemma \ref{lem_local_or_small_data_wellposedness_semigroup_settings} implies there exists a unique solution in $\mathcal{S}_{q, \mu_\ast, T_\ast^\prime}$ for small $T_\ast^\prime > 0$ and all $1 < q < \infty$. We consider the case $q > 2$. In this case we have $V(t) \in H^{1 + 1/q,q}(\Omega)^2 \hookrightarrow H^1(\Omega)^2$ for $0 < t < T_\ast^\prime$. We can extend $V$ as an $H^1$-solution with initial data $V(T_\ast^\prime)$ such that \begin{gather} \label{eq_estimate_to_extend_in_L2} \begin{split} V \in \mathcal{S}_{q, \mu_\ast, T_\ast^\prime} \cap C(T_\ast^\prime/2, \infty; H^1_{\overline{\sigma}}(\Omega)) \cap C(T_\ast^\prime/2, \infty; H^2(\Omega)^2), \\ \Vert V (t) \Vert_{H^2(\Omega)^2 } = O(e^{- \mu_\ast t /2}) \quad \text{as } t \rightarrow \infty. \end{split} \end{gather} Since $H^2(\Omega)^2 \hookrightarrow H^{2/q, q}(\Omega)^2$, we see $V \in \mathcal{S}_{q, \mu_\ast, \infty}$. By Proposition \ref{prop_bilinear_estimate_sobolev}, we have the same order decay estimate as (\ref{eq_decay_of_external_forces}) even for $L^q$ cases and conclude \begin{align*} \Vert \partial_t V (t) \Vert_{L^q(\Omega)^2} + \Vert V (t) \Vert_{D(\tilde{A}_{q, \mu})} = O(e^{- \mu_\ast t /2}) \end{align*} for $q > 2$. We consider the case $q < 2$. Since $f \in L^2(0, \infty; L^2)$, we find from the same type of bootstrapping argument by \cite{HieberHusseingKashiwabara2016} that $V$ is also an $L^2$-solution satisfying (\ref{eq_estimate_to_extend_in_L2}). Since $D(\tilde{A}_{\mu_\ast, 2}) \hookrightarrow X_q$, we can extend the solution $V$ globally in $X_q$ and find that $V$ satisfies (\ref{eq_decay_of_Lq_solution}). This proves the theorem. \end{proof} \end{appendices} \end{document}
The human population harbors 172 mutations per non-lethal genome position. What'll happen to them?

A recent Panda's Thumb post highlighted that, given the size of the human genome, the rate of de novo point mutations, and the total size of the population, every non-lethal position can be expected to vary - meaning that, for every genome position or site, there's very likely at least one person (and usually dozens or more) with a new mutation there, so long as it's non-lethal. It's a trivial calculation and, while we could refine it in various ways, the essential point is clear.

"We are all, regardless of race, genetically 99.9% the same." Right or wrong? Still, let's try to understand this a bit further. First, an equally simple, entirely compatible fact which might attenuate our surprise: the existence of a couple hundred people with new mutations in a certain site leaves about seven billion without a new mutation there. Indeed, at the vast majority of sites, almost all people are homozygous for the same allele - identical by descent from the hominid lineage.

In that light, here's a deep question one can ask about all those hundreds of billions of de novo mutations: what will be their ultimate fate? Will they all shuffle through the future human population, making our genome's future evolution look like the reels on a slot machine? Or is it going to be rather more like the pitch drop experiment?

Take any one of the 100 or so original mutations in my own genome. It could eventually go on to world domination - fixation, as it's usually called by the less ambitious - spreading by descent to every chromosome in the future population. Or at some point it could simply cease to exist - lost to the random vagaries of meiosis as well as my fecundity and that of my progeny. On a certain timescale, these are the only two possible fates (except under some special conditions I won't consider here).

Intuition tells us world domination must be very unlikely, and so the alternative - loss - must be very likely. But let's be generous, and assume the mutation in question is actually a selectively beneficial one. Our intuition might then hesitate, so it'll be helpful to recall a classic equation from population genetics, which effectively considers all of the mutation's possible evolutionary trajectories toward future fixation or loss in the Wright-Fisher model.

Let $s$ denote the selection coefficient associated with the beneficial mutation, which here we'll roughly understand to mean that the mutation additively confers an expected $\frac{s}{2(1-s)}$ more offspring. Also let $N$ be the human population size, and $N_e$ the effective population size. The probability of my mutation attaining world domination is:

$$ \mathbf{P}(\operatorname{Fix}|s,N,N_e) = \frac{1-\exp\left(- \frac{N_e}{N} s\right)}{1-\exp(-2 N_e s)} $$

Take $s = 0.001$, meaning that this mutation gives me a 0.05% better chance of passing it along. If that sounds small, consider everything that has to go just right for human reproduction to happen (giggity) and all the genomic loci that must play a role therein; you might then agree it's actually quite a large contribution from just a single point mutation! We can also take roughly $N = 10^{10}$.
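Here is a minimal Python sketch that simply evaluates the formula above for these parameter choices - the function name p_fix and the sweep over $N_e$ values are illustrative choices of mine, and the optional k argument anticipates the multi-copy generalization discussed further down:

```python
import math

def p_fix(s, N, Ne, k=1):
    """Fixation probability of a beneficial mutation present in k copies
    (k=1 for a brand-new mutation), using the formula quoted above."""
    num = 1.0 - math.exp(-(Ne / N) * s * k)
    den = 1.0 - math.exp(-2.0 * Ne * s)
    return num / den

s, N = 0.001, 1e10
for Ne in (1e5, 1e6, 1e7, 1e8, 1e9):
    print(f"Ne = {Ne:.0e}:  P(fix) ~ {p_fix(s, N, Ne):.0e}")
```

These are exactly the numbers tabulated next.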
It's much more difficult to estimate what $N_e$ will be going forward into many future generations, so let's consider a wide range of possible values from $10^5$ to $10^9$:

$x$ | $\mathbf{P}(\operatorname{Fix}|s=0.001, N=10^{10}, N_e = x)$
$10^5$ | $10^{-8}$
$10^6$ | $10^{-7}$
$10^7$ | $10^{-6}$
$10^8$ | $10^{-5}$
$10^9$ | $10^{-4}$

So, even under a very generous assumption of $N_e = 10^9$, my quite beneficial mutation will be lost with 99.99% certainty. Its chances are far worse under smaller - and perhaps more plausible - guesses of the future effective population size, e.g. $10^6$ish. And even worse still for any less-beneficial or deleterious mutation: as $s \rightarrow 0$, the fixation probability tends to $\frac{1}{2N} = 5 \times 10^{-11}$ (to verify this, recall $e^{-x} \approx 1-x$ for small $x$).

So if indeed very beneficial mutations are very rare, then this simple model suggests that, of all those hundreds of billions of de novo mutations that must exist throughout the human population today, all but a few thousand or so are destined for evolutionary oblivion. Of course, they'll be "replaced" by the new ones arising in every future person born, a similarly tiny fraction of which will go on to fixation, and so the molecular evolution show will go on.

Here are two nagging issues with what we've concluded so far. First, perhaps it's too extreme to expect my mutation to achieve total world domination; couldn't I die happy if I only knew it would ultimately propagate to, say, 1% of the future population? Second, while the mutation is a new one in my genome, there are probably dozens of other people in the world whose genomes fortuitously experienced the very same mutation, by the original argument from the PT post. What if we considered their mutations and mine interchangeably?

We can address both of these questions with a slightly more general formula for the fixation probability,

$$ \mathbf{P}(\operatorname{Fix}|s,N,N_e,k) = \frac{1-\exp\left(- \frac{N_e}{N} sk\right)}{1-\exp(-2 N_e s)} $$

where $k$ is the number of instances of the mutation found throughout the human population (previously we'd considered the special case $k=1$).

To the second question above, even with $k=100$, we have $\mathbf{P}(\operatorname{Fix}|s=0.001,N=10^{10},N_e=10^6,k=100) = 10^{-5}$; it's still merely a one-in-a-hundred-thousand shot, even though this is a fairly beneficial mutation.

The way to the answer for the first question is a bit more roundabout. Let's assume my mutation were to attain 1% frequency. Under that more modest, yet still glorious, scenario, what is its fate? $\mathbf{P}(\operatorname{Fix}|s=0.001,N=10^{10},N_e=10^6,k=0.01 \cdot 10^{10}) = 99.995\%$. Once it rises to 1%, fixation is essentially inevitable! But, since we know that world domination for my new mutation is extremely unlikely, it must follow that attaining even 1% frequency is also extremely unlikely. And, more unlikely still for any other less-beneficial mutation.

It's intriguing that, while dozens to hundreds of instances of the mutation stand almost no chance, capturing just 1% of the population makes fixation virtually certain. What's the source of this striking shift in the underlying dynamics? The answer has to do with exponential growth. Natural selection acts to grow the existing population of the beneficial mutation: if there are few instances of the mutation, growth is slow, and the more instances there are, the faster the growth. By 1% the train has left the station.
But, middling along with a few hundred copies, the slow rate of growth is more-or-less offset by the occasional losses to meiotic shuffling, infertility, premature death, abstinence, and contraception. As Gillespie wrote: "Think of all the great mutations that failed to get by the quagmire of rareness!"

Having gained a little intuition from a simple model, it's always wise to recall salient limitations of that model. Some are obvious, like its treatment of $s$, $N$, and $N_e$ as constant over time and geography, the uncertainty about their values going forward, and its ignorance of extensive human population stratification. I can think of two others that seem worthy of mention in closing.

Despite my previous argument that $s=0.001$ would be a large contribution for an individual point mutation, there are some fascinating epistasis theories about the potential for one mutation to operate synergistically with many other loci, essentially providing the "straw that broke the camel's back" to a much larger fitness advantage. The neutral network is an interesting and complementary way to conceptualize this powerful idea - that many previously neutral alleles can suddenly gain tremendous selective meaning in light of a single new mutation. If we can seriously entertain $s \gg 0.001$, then of course such a mutation stands a better chance.

Finally, aside from far higher selection coefficients, my mutations have at least one other potential route to glory, absent entirely from the simple model considered above: genetic linkage to other beneficial variants, whether mine or my parents', rare or common. As for the ease of that route, we're still collecting data!
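As a final numerical footnote (again just plugging numbers into the stated formula, nothing beyond it), the two multi-copy probabilities quoted earlier can be reproduced directly; the function name is the same illustrative one used in the earlier sketch:

```python
import math

def p_fix(s, N, Ne, k=1):
    # same fixation-probability formula as in the sketch above
    return (1.0 - math.exp(-(Ne / N) * s * k)) / (1.0 - math.exp(-2.0 * Ne * s))

print(f"{p_fix(0.001, 1e10, 1e6, k=100):.0e}")          # ~1e-05: a hundred copies are still a long shot
print(f"{p_fix(0.001, 1e10, 1e6, k=0.01 * 1e10):.5f}")  # ~0.99995: at 1% frequency, fixation is near-certain
```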
\begin{definition}[Definition:Class Equality/Definition 2] Let $A$ and $B$ be classes. $A$ and $B$ are '''equal''', denoted $A = B$, {{iff}}: :$A \subseteq B$ and $B \subseteq A$ where $\subseteq$ denotes the subclass relation. \end{definition}
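An editorial illustration, not part of the ProofWiki page: unfolding the subclass relation in the definition above gives the equivalent element-wise characterization
:$A = B \iff \forall x: \left({x \in A \iff x \in B}\right)$
which is essentially the alternative way of defining class equality directly in terms of elements.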
\begin{document} \title{Exact Diffusion for Distributed Optimization and Learning --- Part II: Convergence Analysis} \small \begin{abstract} Part I of this work \cite{yuan2017exact1} developed the exact diffusion algorithm to remove the bias that is characteristic of distributed solutions for deterministic optimization problems. The algorithm was shown to be applicable to a larger set of combination policies than earlier approaches in the literature. In particular, the combination matrices are not required to be doubly stochastic, a requirement that imposes stringent conditions on the graph topology and communications protocol. In this Part II, we examine the convergence and stability properties of exact diffusion in some detail and establish its linear convergence rate. We also show that it has a wider stability range than the EXTRA consensus solution, meaning that it is stable for a wider range of step-sizes and can, therefore, attain faster convergence rates. Analytical examples and numerical simulations illustrate the theoretical findings. \end{abstract} \begin{keywords} distributed optimization, diffusion, consensus, exact convergence, left-stochastic matrix, doubly-stochastic matrix, balanced policy, Perron vector. \end{keywords} \section{Introduction and review of Part I\cite{yuan2017exact1}} \setlength\abovedisplayskip{2.5pt} \setlength\belowdisplayskip{2.5pt} \setlength\abovedisplayshortskip{2.5pt} \setlength\belowdisplayshortskip{2.5pt} { For ease of reference, we provide a brief review of the main construction from Part I \cite{yuan2017exact1}. We consider a collection of $N$ networked agents working cooperatively to solve an aggregate optimization problem of the form: \eq{ \label{prob-dist} w^\star = \argmin_{w\in \mathbb{R}^M}\quad {\mathcal{J}}^\star(w)=\sum_{k=1}^{N} q_k J_k(w), } where the $\{q_k\}$ are positive weighting scalars, each $J_k(w)$ is convex and differentiable, and the aggregate cost ${\mathcal{J}}^\star(w)$ is strongly-convex. {When $q_1 = \cdots = q_N$, problem \eqref{prob-dist} reduces to \eq{ \label{prob-consensus} w^o = \argmin_{w\in \mathbb{R}^M}\quad {\mathcal{J}}^o(w)=\sum_{k=1}^{N} J_k(w). } Problems of the type \eqref{prob-dist}--\eqref{prob-consensus} find applications in a wide range of areas including wireless sensor networks \cite{braca2008running}, distributed adaptation and estimation strategies \cite{sayed2014adaptation,sayed2014adaptive}, distributed statistical learning\cite{chen2015dictionary,chouvardas2012sparsity} and clustering \cite{zhao2015distributed}. {Various algorithms have been proposed to solve problem \eqref{prob-consensus} such as \cite{dimakis2010gossip,sardellitti2010fast,kar2011convergence,yuan2016convergence,mota2013d,tsianos2012push,xi2015linear,shi2014linear,mokhtari2015dqm,shi2015extra,nedich2016achieving,xu2015augmented,nedic2016geometrically}. These algorithms employ either doubly-stochastic or right-stochastic combination matrices.} In Part I\cite{yuan2017exact1}, we derived the exact diffusion strategy \eqref{adapt}--\eqref{combine}. \begin{table}[h] \noindent \HRule\\ \noindent \textbf{\footnotesize Algorithm 1} {\footnotesize (Exact diffusion strategy for agent $k$)} \\ \HRule\\ \noindent \textbf{ {\scriptsize Setting:} } {\scriptsize Let $\overline{A}=(I_{N}+A)/2$, and $w_{k,\hspace{-0.2mm}-\hspace{-0.3mm}1}$ arbitrary. {\color{black}Set $\psi_{k,-1} = w_{k,-1}$.}} \noindent \textbf{ {\scriptsize {\color{white}Setting:}} } {\scriptsize {\color{black}Let $\mu_k = q_k \mu_o/p_k$}.
} \noindent \textbf{ \hspace{-1.3mm}{\scriptsize Repeat for $i=0,1,2,\cdots$}}\\ {\footnotesize \eq{ \psi_{k,i} &= w_{k,i-1} - \mu_k {\nabla} J_k(w_{k,i-1}), \hspace{5mm} \mbox{\footnotesize (adaptation)} \label{adapt}\\ \phi_{k,i} &= \psi_{k,i} + w_{k,i-1} - \psi_{k,i-1}, \hspace{8mm} \mbox{\footnotesize (correction)} \label{correct}\\ w_{k,i} &= \sum_{\ell\in {\mathcal{N}}_k} \overline{a}_{\ell k} \phi_{\ell,i}. \hspace{2.32cm} \mbox{\footnotesize (combination)} \label{combine} }} \HRule \end{table} The matrix $A=[a_{\ell k}]$ in the table refers to the combination policy with $a_{\ell k}\geq 0$ denoting the weight that scales the data arriving from agent $\ell$ to agent $k$. The matrix $A$ is not required to be symmetric but is left-stochastic, i.e., \eq{ A^{\mathsf{T}} \mathds{1}_N = \mathds{1}_N } where $\mathds{1}_N$ refers to a column vector with all entries equal to one. It is assumed that the network graph is strongly-connected, which translates into a primitive matrix $A$. This implies, in view of the Perron-Frobenius theorem \cite{sayed2014adaptation}, that there exists a Perron vector $p$ satisfying \eq{ Ap=p,\;\;\;\mathds{1}_N^{\mathsf{T}} p=1,\;\;\;p \succ0. } Furthermore, it was argued in Eq.(11) of Part I\cite{yuan2017exact1} that given $q$ and $A$ (and hence $p$), one can always adjust $\{\mu_k\}_{k=1}^N$ and find a positive constant $\beta$ such that \eq{\label{q-A} q = \beta \, \mbox{diag}\{\mu_1,\mu_2,\cdots,\mu_N\}p. } Let $P=\mbox{\rm diag}(p)$, the matrix $A$ is said to be {\em balanced} if \eq{\label{local-balance} PA^{\mathsf{T}} = AP. } We showed in Part I\cite{yuan2017exact1} that balanced left-stochastic matrices are common in practice and that condition \eqref{local-balance} endows $A$ with several useful properties that enabled the derivation of the above exact diffusion strategy, and which will be used again in this work to examine its convergence properties. The structure of the exact diffusion strategy listed in \eqref{adapt}--\eqref{combine} is very similar to the standard diffusion implementation \cite{sayed2014adaptive,sayed2014adaptation,chen2012diffusion,chen2015learning}, with the only difference being the addition of an extra correction step between the adaptation and combination steps. We can rewrite the recursions \eqref{adapt}--\eqref{combine} in an aggregate form by resorting to a block vector notation. First, we introduce the eigen-decomposition \eq{\label{syud} (P - AP)/{2} = U \Sigma U^{\mathsf{T}}, } where $\Sigma \in \mathbb{R}^{N\times N}$ is a non-negative diagonal matrix and $U\in \mathbb{R}^{N\times N}$ is an orthogonal matrix. Next, we select $V$ to be the symmetric square-root matrix defined as \eq{\label{V-defi} V \define U \Sigma^{1/2} U^{\mathsf{T}} \in \mathbb{R}^{N\times N}, } and introduce the quantities: \eq{ \overline{A} & = (A + I_N)/2, \quad \overline{{\mathcal{A}}} = \overline{A} \otimes I_M,\\ {\mathcal{P}} & = P \otimes I_M, \hspace{10mm} {\mathcal{V}} = V \otimes I_M, \\ {\scriptstyle{\mathcal{W}}}_i &= {\mathrm{col}}\{w_{1,i}, \cdots, w_{N,i}\}, \\ {\scriptstyle{\mathcal{Y}}}_i &= {\mathrm{col}}\{y_{1,i}, \cdots, y_{N,i}\}, \\ {\mathcal{M}} &= {\mathrm{diag}}\{\mu_1 I_M, \cdots, \mu_N I_M\}, \\ {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}) &= {\mathrm{col}}\{{\nabla} J_1(w_{1}), \cdots, {\nabla} J_N(w_{N})\},\label{grad-J^o}\\ {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}) &= {\mathrm{col}}\{q_1{\nabla} J_1(w_{1}), \cdots, q_N{\nabla} J_N(w_{N})\}. 
\label{J-star} } \noindent Using these variables, and as was already explained in Part I\cite{yuan2017exact1}, the recursions \eqref{adapt}--\eqref{combine} can be rewritten in the following equivalent so-called primal-dual form: \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_i &= \overline{{\mathcal{A}}}^{\mathsf{T}} \Big({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big)\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i-1} \label{zn-1}\\ {\scriptstyle{\mathcal{Y}}}_i &= {\scriptstyle{\mathcal{Y}}}_{i-1} + {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i \end{aligned} \right. \end{equation} For the initialization, we set $y_{-1}=0$ and ${\scriptstyle{\mathcal{W}}}_{-1}$ to be any value, and hence for $i=0$ we have \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_0 &= \overline{{\mathcal{A}}}^{\mathsf{T}} \Big({\scriptstyle{\mathcal{W}}}_{\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big), \label{zn-0}\\ {\scriptstyle{\mathcal{Y}}}_0 &= {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0. \end{aligned} \right. \end{equation} The following auxiliary lemma, which was established in Part I\cite{yuan2017exact1}, is used in the subsequent convergence analysis. \begin{lemma}[\sc Nullspace of $V$]\label{lm:null-V} It holds that \eq{\label{null-V} \mathrm{null}(V) &= \mathrm{null}(P-AP) = \mathrm{span}\{\mathds{1}_N\}, \\ \mathrm{null}( {\mathcal{V}} ) &= \mathrm{null}( {\mathcal{P}} - {\mathcal{A}} {\mathcal{P}} ) = \mathrm{span}\{ \mathds{1}_N \otimes I_M \}.\label{xcn987-2} } {$\blacksquare$} \end{lemma} In this article, we will establish the linear convergence of exact diffusion using the primal-dual form \eqref{zn-1}. This is a challenging task due to the coupled dynamics among the agents. To facilitate the analysis, we first apply a useful coordinate transformation and characterize the error dynamics in this transformed domain. Then, we show analytically that exact diffusion is stable, converges linearly, and has a wider stability range than the EXTRA consensus strategy\cite{shi2015extra}. We also compare the performance of exact diffusion to other existing linearly convergent algorithms besides EXTRA, such as DIGing\cite{nedich2016achieving} and Aug-DGM \cite{xu2015augmented,nedic2016geometrically}, with numerical simulations. } \section{Convergence of Exact Diffusion}\label{sec-convergence} The purpose of the analysis in this section is to establish the exact convergence of $w_{k,i}$ to $w^{\star}$, for all agents in the network, and to show that this convergence attains an exponential rate. \subsection{The Optimality Condition}\label{subsec-optimality} \begin{lemma}[\sc Optimality Condition]\label{lm-opt-cond-1} If condition \eqref{q-A} holds and block vectors $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star)$ exist that satisfy: \eq{ \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star & = 0, \label{KKT-1-1} \\ {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^\star &= 0.
\label{KKT-2-2} } then it holds that the block entries of ${{\scriptstyle{\mathcal{W}}}}^{\star}$ satisfy: \eq{\label{7280m} w_1^\star=w_2^\star=\cdots=w_N^\star=w^\star, } where $w^\star$ is the unique solution to problem \eqref{prob-dist}. \end{lemma} \begin{proof} From \eqref{xcn987-2}, we have \eq{\label{xcn78} {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^\star = 0 \Longleftrightarrow w_1^\star=w_2^\star=\cdots=w_N^\star. } Next we check $w_k^\star = w^\star$. Since ${\mathcal{P}} > 0$, condition \eqref{KKT-1-1} is equivalent to \eq{ {\mathcal{P}} \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star = 0. \label{KKT-1-2} } Let ${\mathcal{I}}=\mathds{1}_N \otimes I_M \in \mathbb{R}^{MN\times M}$. Multiplying by ${\mathcal{I}}^{\mathsf{T}}$ gives \eq{ 0=&\ {\mathcal{I}}^{\mathsf{T}} \big({\mathcal{P}} \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star\big) \overset{(a)}{=}{\mathcal{I}}^{\mathsf{T}} {\mathcal{P}} \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)\nonumber \\ =&\ {\sum_{k=1}^{N} p_k\mu_k {\nabla} J_k(w_k^\star) \overset{\eqref{q-A}}{=} \frac{1}{\beta} \sum_{k=1}^{N} q_k {\nabla} J_k(w^\star_k)}, } where equality (a) holds because ${\mathcal{V}}$ is symmetric and \eqref{xcn987-2}. Since $\beta \neq 0$, we conclude that $\sum_{k=1}^{N} q_k {\nabla} J_k(w^\star_k)=0$, which shows that the entries $\{w_{k}^{\star}\}$, which are identical, must coincide with the minimizer $w^{\star}$ of \eqref{prob-dist}. \end{proof} Observe that since ${{\mathcal{J}}}^{\star}(w)$ is assumed strongly-convex, then the solution to problem \eqref{prob-dist}, $w^\star$, is unique, and hence ${\scriptstyle{\mathcal{W}}}^\star$ is also unique. However, since ${\mathcal{V}}$ is rank-deficient, there can be multiple solutions ${{\scriptstyle{\mathcal{Y}}}}^{\star}$ satisfying \eqref{7280m}. Using an argument similar to \cite{shi2014linear,shi2015extra}, we can show that among all possible ${{\scriptstyle{\mathcal{Y}}}}^{\star}$, there is a unique solution ${{\scriptstyle{\mathcal{Y}}}}^{\star}_o$ lying in the column span of ${{\mathcal{V}}}$. \begin{lemma}[\sc Particular solution pair]\label{sy in V range sapce} When condition \eqref{q-A} holds and ${\mathcal{J}}^{o}(w)$ defined by \eqref{prob-consensus} is strongly-convex, there exists a unique pair of variables $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star_o)$, in which ${\scriptstyle{\mathcal{Y}}}^\star_o$ lies in the range space of ${\mathcal{V}}$, that satisfies conditions \eqref{KKT-1-1}-\eqref{KKT-2-2}. \end{lemma} \begin{proof} {First we prove that there always exist some block vectors $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star)$ satisfying \eqref{KKT-1-1}--\eqref{KKT-2-2}. Indeed, when ${\mathcal{J}}^o(w)$ is strongly-convex, the solution to problem \eqref{prob-dist}, $w^\star$, exists and is unique. Let ${{\scriptstyle{\mathcal{W}}}}^{\star}=\mathds{1}_{N}\otimes w^{\star}$. We conclude from Lemma \ref{lm:null-V} that condition \eqref{KKT-2-2} holds. 
Next we check whether there exists some ${\scriptstyle{\mathcal{Y}}}^\star$ such that \eq{ {\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star = - \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star), } or equivalently, \eq{\label{k29} {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star & = - {\mathcal{P}} \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) \nonumber \\ & = - \overline{{\mathcal{A}}} {\mathcal{P}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) = -\frac{1}{\beta}\overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star), } {where the last equality holds because \eq{ {\mathcal{P}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)&= \left[ \begin{array}{c} \mu_1 p_1 {\nabla} J_1(w^\star)\\ \vdots\\ \mu_N p_N {\nabla} J_N(w^\star) \\ \end{array} \right] \overset{\eqref{q-A}}{=} \left[ \begin{array}{c} \frac{q_1}{\beta} {\nabla} J_1(w^\star)\\ \vdots\\ \frac{q_N}{\beta} {\nabla} J_N(w^\star) \\ \end{array} \right] \nonumber \\ &\overset{\eqref{J-star}}{=} \frac{1}{\beta}{\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star), } } \hspace{-1.2mm}To prove the existence of ${\scriptstyle{\mathcal{Y}}}^\star$, we need to show that $\overline{{\mathcal{A}}} {\nabla} \hspace{-1mm} {\mathcal{J}}^\star\hspace{-0.5mm}(\hspace{-0.5mm}{\scriptstyle{\mathcal{W}}}^\star\hspace{-0.5mm})$ lies in $\mathrm{range}({\mathcal{V}})$. {Indeed, observe that \eq{\label{23nsd} \hspace{-2mm}{\mathcal{I}}^{\mathsf{T}} \overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star) = {\mathcal{I}}^{\mathsf{T}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star) \overset{(a)}{=} \sum_{k=1}^{N}q_k {\nabla} J_k(w^\star) = 0 } where the equality (a) holds because of equation \eqref{J-star}. Equality \eqref{23nsd} implies that $\overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star)$ is orthogonal to $\mathrm{span}({\mathcal{I}})$, i.e., $\mathrm{span}(\mathds{1}_N \otimes I_M)$. With \eqref{xcn987-2} we have \eq{ \overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star) \perp \mathrm{null}({\mathcal{V}}) \Leftrightarrow &\ \overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star) \in \mathrm{range}({\mathcal{V}}^{\mathsf{T}}) \nonumber \\ \Leftrightarrow &\ \overline{{\mathcal{A}}} {\nabla} {\mathcal{J}}^\star({\scriptstyle{\mathcal{W}}}^\star) \in \mathrm{range}({\mathcal{V}}), } where the last ``$\Leftrightarrow$" holds because ${\mathcal{V}}$ is symmetric.} } We now establish the existence of the unique pair $({ {\scriptstyle{\mathcal{W}}}}^{\star}, {{\scriptstyle{\mathcal{Y}}}}_o^{\star})$. Thus, let $({{\scriptstyle{\mathcal{W}}}}^{\star},{{\scriptstyle{\mathcal{Y}}}}^{\star})$ denote an arbitrary solution to \eqref{7280m}. Let further ${{\scriptstyle{\mathcal{Y}}}}_o^{\star}$ denote the projection of ${{\scriptstyle{\mathcal{Y}}}}^{\star}$ onto the column span of ${{\mathcal{V}}}$. It follows that ${\mathcal{V}}({\scriptstyle{\mathcal{Y}}}^\star - {\scriptstyle{\mathcal{Y}}}^\star_o) = 0$ and, hence, ${\mathcal{V}}{\scriptstyle{\mathcal{Y}}}^\star = {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_o^\star$. Therefore, the pair $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}_o^\star)$ also satisfies conditions \eqref{KKT-1-1}-\eqref{KKT-2-2}. 
Next we verify the uniqueness of ${\scriptstyle{\mathcal{Y}}}_o^\star$ by contradiction. Suppose there is a different ${\scriptstyle{\mathcal{Y}}}_1^\star$ lying in ${\mathcal{R}}({\mathcal{V}})$ that also satisfies condition \eqref{KKT-1-1}. We let ${\scriptstyle{\mathcal{Y}}}^\star_o={\mathcal{V}} {\scriptstyle{\mathcal{X}}}^\star_o$ and ${\scriptstyle{\mathcal{Y}}}^\star_1={\mathcal{V}} {\scriptstyle{\mathcal{X}}}^\star_1$. Substituting ${\scriptstyle{\mathcal{Y}}}^\star_o$ and ${\scriptstyle{\mathcal{Y}}}^\star_1$ into condition \eqref{KKT-1-1}, we have \eq{ \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{P}}^{-1}{\mathcal{V}}^2 {\scriptstyle{\mathcal{X}}}_o^\star & = 0, \label{ns-1}\\ \overline{{\mathcal{A}}}^{\mathsf{T}} {\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{P}}^{-1}{\mathcal{V}}^2 {\scriptstyle{\mathcal{X}}}_1^\star & = 0. \label{ns-2} } Subtracting \eqref{ns-2} from \eqref{ns-1} and recall ${\mathcal{P}}>0$, we have ${\mathcal{V}}^2({\scriptstyle{\mathcal{X}}}_o^\star - {\scriptstyle{\mathcal{X}}}_1^\star) = 0$, which leads to ${\mathcal{V}}({\scriptstyle{\mathcal{X}}}_o^\star - {\scriptstyle{\mathcal{X}}}_1^\star) = 0 \Longleftrightarrow {\scriptstyle{\mathcal{Y}}}_o^\star = {\scriptstyle{\mathcal{Y}}}_1^\star$. This contradicts the assumption that ${\scriptstyle{\mathcal{Y}}}_o^\star \neq {\scriptstyle{\mathcal{Y}}}_1^\star$. \end{proof} Using the above auxiliary results, we will show that $({\scriptstyle{\mathcal{W}}}_i, {\scriptstyle{\mathcal{Y}}}_i)$ generated through the exact diffusion \eqref{zn-1} will converge exponentially fast to $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star_o)$. \subsection{Error Recursion}\label{subsec-error-recursion} Let ${{\scriptstyle{\mathcal{W}}}}^{\star}=\mathds{1}_{N}\otimes w^{\star}$, which corresponds to a block vector with $w^{\star}$ repeated $N$ times. Introduce further the error vectors \eq{\label{error} \widetilde{{\scriptstyle{\mathcal{W}}}}_i={{\scriptstyle{\mathcal{W}}}}^{\star}-{{\scriptstyle{\mathcal{W}}}}_i,\;\;\;\; \widetilde{{\scriptstyle{\mathcal{Y}}}}_i={\scriptstyle{\mathcal{Y}}}_o^{\star}-{{\scriptstyle{\mathcal{Y}}}}_i. } The first step in the convergence analysis is to examine the evolution of these error quantities. Multiplying the second recursion of \eqref{zn-1} by ${\mathcal{V}}$ from the left gives: \eq{ {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_i = {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i-1} + \frac{1}{2}({\mathcal{P}}-{\mathcal{P}} {\mathcal{A}}) {\scriptstyle{\mathcal{W}}}_i. \label{V-tran y} } Substituting \eqref{V-tran y} into the first recursion of \eqref{zn-1}, we have \begin{equation} \left\{ \begin{aligned} \overline{{\mathcal{A}}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{W}}}_i &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}^{\mathsf{T}} \Big(\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-0.8mm}+\hspace{-0.8mm}{\mathcal{M}} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big) \hspace{-0.8mm}+\hspace{-0.8mm} {\mathcal{P}}^{-1} {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i}, \label{ed-1-subtracrt} \\ \widetilde{\scriptstyle{\mathcal{Y}}}_i &= \widetilde{\scriptstyle{\mathcal{Y}}} _{i-1} - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. 
\end{equation} Subtracting optimality conditions \eqref{KKT-1-1}--\eqref{KKT-2-2} from \eqref{ed-1-subtracrt} leads to \begin{equation} \hspace{-0.8mm}\left\{ \begin{aligned} \hspace{-1.5mm}\overline{{\mathcal{A}}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{W}}}_i &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}^{\mathsf{T}} \Big(\hspace{-0.8mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-0.8mm}+\hspace{-0.8mm}{\mathcal{M}} \big[{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\hspace{-0.8mm}-\hspace{-0.8mm}{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)\big]\hspace{-0.8mm}\Big) \hspace{-0.8mm}-\hspace{-0.8mm} {\mathcal{P}}^{-1} {\mathcal{V}} \widetilde{\scriptstyle{\mathcal{Y}}}_{i}, \label{ed-1-subtracrt-1} \\ \hspace{-1.5mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1} +{\mathcal{V}} \widetilde{\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. \end{equation} Next we examine the difference ${\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})-{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)$. To begin with, we get from \eqref{grad-J^o} that \eq{\label{237sdb} \hspace{-2mm}{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}) \hspace{-0.8mm}-\hspace{-0.8mm} {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) \hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{c} \hspace{-2mm}{\nabla} J_1(w_{1,i-1}) \hspace{-0.8mm}-\hspace{-0.8mm} {\nabla} J_1(w^\star)\hspace{-2mm}\\ \hspace{-2mm}\vdots\hspace{-2mm}\\ \hspace{-2mm}{\nabla} J_N(w_{N,i-1}) \hspace{-0.8mm}-\hspace{-0.8mm} {\nabla} J_N(w^\star)\hspace{-2mm} \\ \end{array} \right] } When ${\nabla} J_k(w)$ is twice-differentiable (see Assumption \ref{ass-lip}), we can appeal to the mean-value theorem from Lemma D.1 in \cite{sayed2014adaptation}, which allows us to express each difference in \eqref{237sdb} in the following integral form in terms of Hessian matrices for any~ $k=1,2,\ldots,N$: \eq{ {\nabla} J_k(w_{k,i-1}) \hspace{-1mm} - \hspace{-1mm}{\nabla} J_k(w^\star) \hspace{-0.8mm}=\hspace{-0.8mm} -\Big(\hspace{-1mm} \int_0^1 \hspace{-2mm}{\nabla}^2 \hspace{-1mm} J_k\hspace{-0.5mm} \big(\hspace{-0.5mm} w^\star \hspace{-1mm}-\hspace{-0.8mm} r \widetilde{w}_{k,i-1}\big)dr \hspace{-1mm}\Big)\widetilde{w}_{k,i-1}. 
\nonumber } If we let \eq{\label{H_k_i-1} H_{k,i-1} \hspace{-1.5mm}\define \hspace{-1.5mm} \int_0^1 {\nabla}^2 J_k\big(w^\star \hspace{-0.5mm}-\hspace{-0.5mm} r\widetilde{w}_{k,i-1}\big)dr \in \mathbb{R}^{M\times M}, } and introduce the block diagonal matrix: \eq{\label{H_i-1} {\mathcal{H}}_{i-1} \hspace{-1.5mm}\define \hspace{-1.5mm} \mathrm{diag}\{H_{1,i-1},H_{2,i-1},\cdots,H_{N,i-1}\}, } then we can rewrite \eqref{237sdb} in the form: \eq{ {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}) - {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) = - {\mathcal{H}}_{i-1} \widetilde{\scriptstyle{\mathcal{W}}}_{i-1}.\label{xcnh} } Substituting into \eqref{ed-1-subtracrt-1} we get \begin{equation} \left\{ \begin{aligned} \overline{{\mathcal{A}}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{W}}}_i &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}^{\mathsf{T}} (I_{MN}-{\mathcal{M}}{\mathcal{H}}_{i-1})\widetilde{\scriptstyle{\mathcal{W}}}_{i-1} - {\mathcal{P}}^{-1} {\mathcal{V}} \widetilde{\scriptstyle{\mathcal{Y}}}_{i}, \label{ed-1-subtracrt-1-1} \\ \widetilde{\scriptstyle{\mathcal{Y}}}_i &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1} +{\mathcal{V}} \widetilde{\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. \end{equation} which is also equivalent to \eq{\label{xcnh09} &\ \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & {\mathcal{P}}^{-1}{\mathcal{V}} \\ -{\mathcal{V}} & I_{MN} \\ \end{array} \right] \left[ \begin{array}{c} \widetilde{\scriptstyle{\mathcal{W}}}_i\\ \widetilde{\scriptstyle{\mathcal{Y}}}_i \\ \end{array} \right] \nonumber \\ =&\ \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} (I_{MN}-{\mathcal{M}}{\mathcal{H}}_{i-1}) & 0\\ 0 & I_{MN} \\ \end{array} \right] \left[ \begin{array}{c} \widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\\ \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1} \\ \end{array} \right]. } Using the relations $\overline{{\mathcal{A}}}^{\mathsf{T}} = \frac{I_{MN} + {\mathcal{A}}^{\mathsf{T}}}{2}$ and ${\mathcal{V}}^2=\frac{{\mathcal{P}} - {\mathcal{P}}{\mathcal{A}}^{\mathsf{T}}}{2}$, it is easy to verify that \eq{\label{nxcm987} \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & {\mathcal{P}}^{-1}{\mathcal{V}} \\ -{\mathcal{V}} & I_{MN} \\ \end{array} \right]^{-1} = \left[ \begin{array}{cc} I_{MN} & -{\mathcal{P}}^{-1}{\mathcal{V}} \\ {\mathcal{V}} & I_{MN}-{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}} \\ \end{array} \right]. } Substituting into \eqref{nxcm987} gives \eq{ \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] &\hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{{\mathcal{A}}}^{\mathsf{T}} (I_{MN}-{\mathcal{M}}{\mathcal{H}}_{i-1}) & -{\mathcal{P}}^{-1}{\mathcal{V}} \hspace{-2mm}\\ \hspace{-2mm}{\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}} (I_{MN}-{\mathcal{M}}{\mathcal{H}}_{i-1}) & I_{MN}-{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}}\hspace{-2mm} \\ \end{array} \right] \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\hspace{-1mm} \\ \end{array} \right]. 
} That is, the error vectors evolve according to: \eq{ \boxed{ \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] = ({\mathcal{B}} - {\mathcal{T}}_{i-1})\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\hspace{-1mm} \\ \end{array} \right]} \label{error-recursion} } where \eq{ {\mathcal{B}}&\define \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & -{\mathcal{P}}^{-1}{\mathcal{V}} \\ {\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}} & I_{MN}-{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}} \\ \end{array} \right], \\ {\mathcal{T}}_{i}&\define \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i} & 0 \\ {\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i} & 0 \\ \end{array} \right].\label{T-defi} } Relation \eqref{error-recursion} is the error dynamics for the exact diffusion algorithm. We next examine its convergence properties. \subsection{Proof of Convergence} \label{sec-convergence-proof} We first introduce a common assumption. \begin{assumption}[\sc Conditions on cost functions] \label{ass-lip} Each $J_k(w)$ is twice differentiable, and its Hessian matrix satisfies \eq{\label{xzh2300} {\nabla}^2 J_k(w) \le \delta I_M. } Moreover, there exists at least one agent $k_o$ such that $J_{k_o}(w)$ is $\nu$-strongly convex, i.e. \eq{\label{xcn23987} {\nabla}^2 J_{k_o}(w) > \nu I_M. } \rightline \mbox{\rule[0pt]{1.3ex}{1.3ex}} \end{assumption} {\color{black}Note that when $J_k(w)$ is twice differentiable, condition \eqref{xzh2300} is equivalent to requiring each ${\nabla} J_k(w)$ to be $\delta$-Lipschitz continuous \cite{sayed2014adaptation}. In addition,} condition \eqref{xcn23987} ensures the strong convexity of ${\mathcal{J}}^o(w)$ and ${{\mathcal{J}}}^{\star}(w)$, and the uniqueness of $w^o$ and $w^{\star}$. It follows from \eqref{xzh2300}--\eqref{xcn23987} and the definition \eqref{H_k_i-1} that \eq{\label{H-properties} H_{k,i-1} \le \delta I_M,\ \forall k\quad \mbox{and}\quad H_{k_o,i-1} \ge \nu I_M. } The direct convergence analysis of recursion \eqref{error-recursion} is challenging. To facilitate the analysis, we identify a convenient change of basis and transform \eqref{error-recursion} into another equivalent form that is easier to handle. To do that, we first let \eq{ B\define \left[ \begin{array}{cc} \overline{A}^{\mathsf{T}} & - P^{-1}V \\ V\overline{A}^{\mathsf{T}} & I_N - VP^{-1}V \\ \end{array} \right]\in \mathbb{R}^{2N\times 2N}. } It holds that ${\mathcal{B}}=B\otimes I_M$. In the following lemma we introduce a decomposition for matrix $B$ that will be fundamental to the subsequent analysis. \begin{lemma}[\sc Fundamental Decomposition]\label{lm-B-decomposition} The matrix $B$ admits the following eigendecomposition \eq{ B &= X D X^{-1}, \label{B-deco-0} } where \eq{ D=\left[ \begin{array}{ccc} I_2 & \vline &0 \\ \hline 0 & \vline & D_1 \\ \end{array} \right], } and $D_1\in \mathbb{R}^{(2N-2)\times (2N-2)}$ is a diagonal matrix with complex entries. The magnitudes of the diagonal entries satisfy \eq{\label{uwehn} &\hspace{-3mm} |D_1(2k\hspace{-0.8mm}-\hspace{-0.8mm}3,2k\hspace{-0.8mm}-\hspace{-0.8mm}3)|=|D_1(2k\hspace{-0.8mm}-\hspace{-0.8mm}2,2k\hspace{-0.8mm}-\hspace{-0.8mm}2)|=\sqrt{\lambda_{k}(\overline{A})}<1, \nonumber \\ & \hspace{4.7cm}\forall\ k=2,3,\cdots N. 
} Moreover, \eq{\label{X and X_inv} X = \left[ \begin{array}{ccc} R & \vline &X_R \\ \end{array} \right], \quad X^{-1}= \left[ \begin{array}{c} L\\ \hline X_L \\ \end{array} \right], } where $X_R\in \mathbb{R}^{2N\times (2N-2)}$ and $X_L\in \mathbb{R}^{(2N-2) \times 2N}$, and $R$ and $L$ are given by \eq{\label{R and L} R\hspace{-1mm}=\hspace{-1mm}\left[ \begin{array}{cc} \hspace{-2mm}\mathds{1}_N & 0\hspace{-2mm}\\ \hspace{-2mm}0 & \mathds{1}_N\hspace{-2mm} \\ \end{array} \right]\hspace{-1mm}\in \hspace{-0.5mm}\mathbb{R}^{2N\times 2}, L\hspace{-1mm}=\hspace{-1mm}\left[ \begin{array}{cc} \hspace{-1mm}p^{\mathsf{T}} & 0\hspace{-1mm} \\ \hspace{-1mm}0 & \frac{1}{N}\mathds{1}_N^{\mathsf{T}} \\ \end{array} \right]\in \mathbb{R}^{2\times 2N}. } \end{lemma} \begin{proof} See Appendix \ref{appdx-Lemma-fd}. \end{proof} {\noindent{\bf Remark 1. (Other possible decompositions)} The eigendecomposition \eqref{B-deco-0} for $B$ is not unique because we can always scale $X$ and $X^{-1}$ to achieve different decompositions. In this paper, we will study the following family of decompositions: \eq{ B = X^\prime D (X^\prime)^{-1}, } where \eq{\label{X and X_inv-prime} X^\prime = \left[ \begin{array}{ccc} R & \vline & \frac{1}{c}X_R \\ \end{array} \right], \quad (X^\prime)^{-1}= \left[ \begin{array}{c} L\\ \hline c X_L \\ \end{array} \right], } and $c$ can be set to any nonzero constant value. We will exploit later the choice of $c$ in identifying the stability range for exact diffusion.\hspace{7.15cm} \mbox{\rule[0pt]{1.3ex}{1.3ex}}} { For convenience, we introduce the vectors: \eq{\label{tsdhb} &r_1= \left[ \begin{array}{c} \hspace{-1.8mm}\mathds{1}_N \hspace{-1.8mm}\\ \hspace{-1.8mm}0\hspace{-1.8mm} \\ \end{array} \right], r_2= \left[ \begin{array}{c} \hspace{-1.8mm}0\hspace{-1.8mm}\\ \hspace{-1.8mm}\mathds{1}_N\hspace{-1.8mm} \\ \end{array} \right], \ell_1= \left[ \begin{array}{c} \hspace{-1.8mm}p\hspace{-1.8mm}\\ \hspace{-1.8mm}0\hspace{-1.8mm} \\ \end{array} \right], \ell_2= \left[ \begin{array}{c} \hspace{-1.8mm}0\hspace{-1.8mm}\\ \hspace{-1.8mm}\frac{1}{N}\mathds{1}_N\hspace{-1.8mm} \\ \end{array} \right], } so that \eq{\label{R and L-2} R = [r_1\ r_2], \quad L = \left[ \begin{array}{c}\ell_1^{\mathsf{T}}\\ \ell_2^{\mathsf{T}} \\ \end{array} \right]. 
} Using \eqref{B-deco-0}--\eqref{R and L-2}, we write \eq{\label{cB-decompoision} {\mathcal{B}}&=(X^\prime \otimes I_M) (D \otimes I_M) ( (X^\prime)^{-1} \otimes I_M) \define {\mathcal{X}}' {\mathcal{D}} ({\mathcal{X}}')^{-1} \nonumber \\ &= \left[ \begin{array}{ccc} \hspace{-2mm}{\mathcal{R}}_1 & {\mathcal{R}}_2 & \frac{1}{c}{\mathcal{X}}_{R}\hspace{-2mm} \\ \end{array} \right] \left[ \begin{array}{ccc} I_{M} & 0 & 0\\ 0 & I_M & 0\\ 0 & 0 & {\mathcal{D}}_1 \\ \end{array} \right] \left[ \begin{array}{c} {\mathcal{L}}_1^{\mathsf{T}} \\ {\mathcal{L}}_2^{\mathsf{T}} \\ c {\mathcal{X}}_L \\ \end{array} \right], } where ${\mathcal{D}}_1 = D_1\otimes I_M$, \eq{\label{R and L-kron} &{\mathcal{R}}_1=\left[ \begin{array}{c}\hspace{-1mm}{\mathcal{I}}\hspace{-1mm}\\ \hspace{-1mm}0\hspace{-1mm}\\ \end{array} \right] \in \mathbb{R}^{2NM\times M},\;\; {\mathcal{R}}_2 = \left[ \begin{array}{c}0 \\ {\mathcal{I}}\\ \end{array} \right]\in \mathbb{R}^{2NM\times M},} \eq{ {\mathcal{L}}_1 = \left[ \begin{array}{c}\overline{{\mathcal{P}}}\\0 \\ \end{array} \right]\in \mathbb{R}^{2NM\times M},\;\; {\mathcal{L}}_2 = \left[ \begin{array}{c}\hspace{-1.8mm}0\hspace{-1.8mm}\\\hspace{-1.8mm}\frac{1}{N}{\mathcal{I}} \hspace{-1.8mm}\\ \end{array} \right] \in \mathbb{R}^{2NM\times M}, } while ${\mathcal{X}}_R=X_R\otimes I_M\in \mathbb{R}^{2NM\times 2(N-1)M}$ and ${\mathcal{X}}_L=X_L\otimes I_M \in \mathbb{R}^{2(N-1)M\times 2NM}$. Moreover, we are also introducing \eq{ {\mathcal{I}}\hspace{-1mm}=\hspace{-1mm}\mathds{1}_N \otimes I_M \in \mathbb{R}^{NM \times M},\; \overline{{\mathcal{P}}} \hspace{-1mm}=\hspace{-1mm} p\otimes I_M \in \mathbb{R}^{NM \times M},\label{xcn287} } where the variable $\overline{{\mathcal{P}}}$ defined above is different from the earlier variable ${{\mathcal{P}}}=P\otimes I_M\in{\mathbb{R}}^{NM\times NM}$.
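The decomposition \eqref{cB-decompoision} can also be checked numerically. The short sketch below is only an illustrative sanity check (the ring network, the choice $N=5$, and the use of Python/NumPy are our own assumptions, not part of the analysis): it builds $B$ for a symmetric doubly-stochastic $A$ with $P=I_N/N$, a special case of the locally balanced policies covered by Lemma \ref{lm-B-decomposition}, and confirms that $B$ has a double eigenvalue at one while every remaining eigenvalue has magnitude $\sqrt{\lambda_k(\overline{A})}<1$.
\begin{verbatim}
# Sanity check of the fundamental decomposition lemma:
# for symmetric doubly-stochastic A and P = I/N,
#   B = [[Abar^T, -P^{-1} V], [V Abar^T, I - V P^{-1} V]]
# has eigenvalue 1 with multiplicity 2; the other eigenvalues
# come in pairs of magnitude sqrt(lambda_k(Abar)) < 1.
import numpy as np

N = 5
A = np.zeros((N, N))                      # ring with self-loops (illustrative)
for k in range(N):
    A[k, k] = 0.5
    A[k, (k + 1) % N] = 0.25
    A[k, (k - 1) % N] = 0.25

P = np.eye(N) / N                         # P = diag(p) with p = (1/N) * ones
Abar = (np.eye(N) + A) / 2                # Abar = (I + A)/2
lam, U = np.linalg.eigh((P - P @ A) / 2)  # V^2 = (P - P A)/2, symmetric PSD here
V = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T
Pinv = np.linalg.inv(P)

B = np.block([[Abar.T,      -Pinv @ V],
              [V @ Abar.T,  np.eye(N) - V @ Pinv @ V]])

mags = np.sort(np.abs(np.linalg.eigvals(B)))[::-1]
print(mags[:2])                           # two eigenvalues at 1
print(mags[2::2])                         # one magnitude per conjugate pair
print(np.sort(np.sqrt(np.linalg.eigvalsh(Abar)))[::-1][1:])  # sqrt(lambda_k(Abar))
\end{verbatim}
The last two printed arrays coincide (up to roundoff), which is the statement of \eqref{uwehn} for this particular network.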
Multiplying both sides of \eqref{error-recursion} by ${({\mathcal{X}}')}^{-1}$: \eq{ \hspace{-1mm}({\mathcal{X}}')^{-1} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] \hspace{-1mm}=\hspace{-1mm}&\; [({\mathcal{X}}')^{-1}({\mathcal{B}}-{\mathcal{T}}_{i-1}) {\mathcal{X}}'] ({\mathcal{X}}')^{-1} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\hspace{-1mm} \\ \end{array} \right] } leads to \eq{\label{recursion-transform-2} \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}_i \hspace{-1mm} \\ \end{array} \right] \hspace{-1mm}=\hspace{-1mm} &\; \left( \left[ \begin{array}{ccc} I_M & 0 & 0\\ 0 & I_M &0 \\0 & 0 & {\mathcal{D}}_1 \\ \end{array} \right] - {\mathcal{S}}_{i-1} \right) \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-1mm} \\ \end{array} \right], } where we defined \eq{\label{x-bar and x-check} \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}_i \hspace{-1mm} \\ \end{array} \right] \define&\; ({\mathcal{X}}')^{-1}\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] = \left[ \begin{array}{c} {\mathcal{L}}_1^{\mathsf{T}} \\ {\mathcal{L}}_2^{\mathsf{T}} \\ c {\mathcal{X}}_L \\ \end{array} \right] \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right], } and \eq{\label{S} {\mathcal{S}}_{i-1}\define&({\mathcal{X}}')^{-1}{\mathcal{T}}_{i-1}{\mathcal{X}}'\nonumber \\ \hspace{-2mm}=&\left[ \begin{array}{ccc} \hspace{-2mm}{\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_1 & {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_2 & \frac{1}{c}{\mathcal{L}}_1^{\mathsf{T}}{\mathcal{T}}_{i-1}{\mathcal{X}}_R \hspace{-2mm}\\ \hspace{-2mm}{\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_1& {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_2 & \frac{1}{c}{\mathcal{L}}_2^{\mathsf{T}}{\mathcal{T}}_{i-1}{\mathcal{X}}_R \hspace{-2mm}\\ \hspace{-2mm}c{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1 & c{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_2 & {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right]. } To evaluate the block entries of ${{\mathcal{S}}}_{i-1}$, we partition \eq{\label{usd9} {\mathcal{X}}_R = \left[ \begin{array}{cc} {\mathcal{X}}_{R,u} \\ {\mathcal{X}}_{R,d} \\ \end{array} \right], } where ${\mathcal{X}}_{R,u}\in \mathbb{R}^{NM \times 2(N-1)M}$ and ${\mathcal{X}}_{R,d}\in \mathbb{R}^{NM \times 2(N-1)M}$. 
Then, it can be verified that \eq{\label{S-1st} {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_1 &= \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{I}},\\ {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_2 &= 0,\\ \frac{1}{c}{\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{X}}_R &= \frac{1}{c} \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{X}}_{R,u}.\label{cxbwm8} } While \eq{ {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}_{i-1} = \left[ \begin{array}{cc} \hspace{-1.5mm}0 \hspace{-1mm}&\hspace{-1mm} \frac{1}{N}{\mathcal{I}}^{\mathsf{T}}\hspace{-1.5mm} \\ \end{array} \right]\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-1.5mm}\overline{{\mathcal{A}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1} & 0\hspace{-1.5mm} \\ \hspace{-1.5mm}{\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1} & 0\hspace{-1.5mm} \\ \end{array} \right] \overset{\eqref{xcn987-2}}{=} \left[ \begin{array}{cc} \hspace{-1.5mm}0 \hspace{-1mm}&\hspace{-1mm} 0\hspace{-1.5mm} \\ \end{array} \right], } Therefore, it follows that \eq{\label{S-2nd} {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_1 =0,\;\; {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}_{i-1}{\mathcal{R}}_2 =0,\;\; \frac{1}{c}{\mathcal{L}}_2^{\mathsf{T}}{\mathcal{T}}_{i-1}{\mathcal{X}}_R =0. } Substituting \eqref{S}, \eqref{S-1st}--\eqref{cxbwm8} and \eqref{S-2nd} into \eqref{recursion-transform-2}, we have \eq{\label{znhg}\footnotesize \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \end{array} \right] \hspace{-1.2mm}&=\hspace{-1.2mm} \footnotesize \left[ \begin{array}{ccc} \hspace{-3mm}I_{\hspace{-0.3mm} M} \hspace{-1.2mm}-\hspace{-1.2mm} \overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-1.2mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}} & \hspace{-0.8mm}0 &\hspace{-1.3mm} -\frac{1}{c}\overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-1.2mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{X}}_{R,u} \hspace{-2mm}\\ \hspace{-3mm}0 & \hspace{-1.3mm}I_{\hspace{-0.3mm} M} &\hspace{-3.3mm} 0 \hspace{-2mm} \\ \hspace{2mm}-c{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1 & \hspace{-1.3mm}-c{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_2 \hspace{-1.3mm} & {\mathcal{D}}_1 \hspace{-1mm}-\hspace{-1mm} {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right] \hspace{-2mm}\left[ \begin{array}{c} \hspace{-2.5mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2.5mm} \\ \hspace{-2.5mm}\widehat{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2.5mm} \\ \hspace{-2.5mm}\check{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2.5mm} \\ \end{array} \right] } From the second line of \eqref{znhg}, we get \eq{\label{28cn00} \widehat{{\scriptstyle{\mathcal{X}}}}_i = \widehat{{\scriptstyle{\mathcal{X}}}}_{i-1}. } As a result, $\widehat{{\scriptstyle{\mathcal{X}}}}_i$ will stay at $0$ only if the initial value $\widehat{{\scriptstyle{\mathcal{X}}}}_{0} = 0$. 
From the definition of ${\mathcal{L}}_2$ in \eqref{tsdhb} and \eqref{x-bar and x-check} we have \eq{\label{hx_0=0} \widehat{{\scriptstyle{\mathcal{X}}}}_0 &= {\mathcal{L}}_2^{\mathsf{T}} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_0\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_0\hspace{-1mm} \\ \end{array} \right] = \frac{1}{N}{\mathcal{I}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{Y}}}_0 \nonumber \\ &\overset{\eqref{error}}{=}\frac{1}{N}{\mathcal{I}}^{\mathsf{T}} ({\scriptstyle{\mathcal{Y}}}^\star_o - {\scriptstyle{\mathcal{Y}}}_0) \overset{\eqref{zn-0}}{=} \frac{1}{N}{\mathcal{I}}^{\mathsf{T}} ({\scriptstyle{\mathcal{Y}}}^\star_o - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0). } Recall from Lemma \ref{sy in V range sapce} that ${\scriptstyle{\mathcal{Y}}}_o^\star$ lies in the $\mathrm{range}({\mathcal{V}})$, so that ${\scriptstyle{\mathcal{Y}}}^\star_o - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0$ also lies in $\mathrm{range}({\mathcal{V}})$. From Lemma \ref{lm:null-V} we conclude that $\widehat{{\scriptstyle{\mathcal{X}}}}_0=0$. Therefore, from \eqref{28cn00} we have \eq{\label{xzcn} \widehat{{\scriptstyle{\mathcal{X}}}}_i=0, \quad \forall i\ge 0 } With \eqref{xzcn}, recursion \eqref{znhg} is equivalent to \eq{\label{final-recursion} \hspace{-3mm} \boxed{ \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \end{array} \right] \hspace{-1.5mm}=\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-2mm}I_M \hspace{-1mm}-\hspace{-1mm}{\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{I}}} & -\frac{1}{c}\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{X}}_{R,u}\hspace{-2mm}\\ \hspace{-2mm}- c{\mathcal{X}}_L {\mathcal{T}}_{i-1} {\mathcal{R}}_1 & {\mathcal{D}}_1 - {\mathcal{X}}_L {\mathcal{T}}_{i-1} {\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right] \hspace{-1.5mm} \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm} \\ \end{array} \right]} } The convergence of the above recursion is stated as follows. } \begin{theorem}[\sc Linear Convergence]\label{theom-convergence} Suppose each cost function $J_k(w)$ satisfies Assumption \ref{ass-lip}, the left-stochastic matrix $A$ satisfies the local balance condition \eqref{local-balance}, and also condition \eqref{q-A} holds. The exact diffusion recursion \eqref{zn-1} converges exponentially fast to $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star_o)$ for step-sizes satisfying \eq{\label{diffusion-range-step-size} \mu_{\max}\le \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2\sqrt{p_{\max}}\alpha_d \delta^2}, } where $\lambda \hspace{-1mm}=\hspace{-1mm} \sqrt{\lambda_2(\overline{A})}\hspace{-1mm}<\hspace{-1mm}1$, $\tau_{k_o}\hspace{-1mm}=\hspace{-1mm}\mu_{k_o}/\mu_{\max}$, $p_{\max}\hspace{-1mm}=\hspace{-1mm}\max_k\{p_k\}$ and \eq{\label{T-d} \alpha_d \hspace{-0.5mm} \define \hspace{-0.5mm} \|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|, \mbox{ where } {\mathcal{T}}_d \define \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & 0\\ {\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}} & 0 \\ \end{array} \right]. 
} The convergence rate for the error variables is given by \eq{ \left\| \left[ \begin{array}{cc} \widetilde{\scriptstyle{\mathcal{W}}}_i\\ \widetilde{\scriptstyle{\mathcal{Y}}}_i \\ \end{array} \right] \right\|^2 \le C \rho^i, } where $C$ is some constant and $\rho=1-O(\mu_{\max})$, namely, \eq{ \hspace{-1mm}\rho =& \max\Big\{ 1-p_{k_o}\tau_{k_o}\nu \mu_{\max} + \frac{2\sqrt{p_{\max}}\alpha_d \delta^2 \mu^2_{\max}}{1-\lambda},\nonumber \\ &\hspace{1cm} \lambda\hspace{-0.5mm}+\hspace{-0.5mm}\frac{\sqrt{p_{\max}}\alpha_d \delta^2\mu_{\max}}{p_{k_o}\tau_{k_o}\nu} \hspace{-0.5mm}+\hspace{-0.5mm} \frac{2\alpha_d^2 \delta^2\mu_{\max}^2}{1-\lambda} \Big\} < 1. } \end{theorem} \begin{proof} See Appendix \ref{app-theom-conv}. \end{proof} {\color{black} With similar arguments shown above, we can also establish the convergence property of the exact diffusion algorithm 1' from Part I \cite{yuan2017exact1}. Compared to the above convergence analysis, the error dynamics for algorithm 1' will now be perturbed by a mismatch term caused by the power iteration. Nevertheless, once the analysis is carried out we arrive at a similar conclusion. \begin{theorem}[\sc Linear convergence of Algorithm $1^\prime$] \label{them-algorithm-prime}Under the conditions of Theorem \ref{theom-convergence}, there exists a positive constant $\bar{\mu} > 0$ such that for step-sizes satisfying $\mu < \bar{\mu}$, the exact diffusion Algorithm 1' will converge exponentially fast to $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}_o^\star)$. \end{theorem} \begin{proof} See Appendix \ref{app-algorithm-prime}. \end{proof} } \section{Stability Comparison with EXTRA} \label{sec-comparision} \subsection{Stability Range of EXTRA} In the case where the combination matrix $A$ is symmetric and {\em doubly-stochastic}, and all agents choose the {\em same} step-size $\mu$, the exact diffusion recursion \eqref{zn-1} reduces to \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_i &= \overline{{\mathcal{A}}} \Big({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}\mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big)\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i-1}, \label{zn-2}\\ {\scriptstyle{\mathcal{Y}}}_i &= {\scriptstyle{\mathcal{Y}}}_{i-1} + {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. \end{equation} where ${\mathcal{P}}=I_{MN}/N$. In comparison, the EXTRA consensus algorithm \cite{shi2015extra} has the following form for the same ${{\mathcal{P}}}$ (recall though that exact diffusion \eqref{zn-1} was derived and is applicable to a larger class of balanced left-stochastic matrices and is not limited to symmetric doubly stochastic matrices; it also allows for heterogeneous step-sizes): \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_i^e &= \overline{{\mathcal{A}}} {\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^e\hspace{-0.8mm}-\hspace{-0.8mm}\mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^e)\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^e_{i-1}, \label{extra-pd-2}\\ {\scriptstyle{\mathcal{Y}}}^e_i &= {\scriptstyle{\mathcal{Y}}}^e_{i-1} + {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^e_i, \end{aligned} \right. 
\end{equation} where we are using the notation ${\scriptstyle{\mathcal{W}}}_i^e$ and ${\scriptstyle{\mathcal{Y}}}_i^e$ to refer to the primal and dual iterates in the EXTRA implementation. Similar to \eqref{zn-0}, the initial condition for \eqref{extra-pd-2} is \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}^e_0 &= \overline{{\mathcal{A}}} {\scriptstyle{\mathcal{W}}}^e_{\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}\mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^e_{\hspace{-0.3mm}-\hspace{-0.3mm}1}), \label{zn-0-extra}\\ {\scriptstyle{\mathcal{Y}}}^e_0 &= {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^e_0. \end{aligned} \right. \end{equation} Comparing \eqref{zn-2} and \eqref{extra-pd-2} we observe one key difference: the diffusion update in \eqref{zn-2} involves a traditional gradient descent step in the form of ${{\scriptstyle{\mathcal{W}}}}_{i-1}-\mu\nabla {\cal J}^{o} ({{\scriptstyle{\mathcal{W}}}}_{i-1})$. This step starts from ${{\scriptstyle{\mathcal{W}}}}_{i-1}$ and evaluates the gradient vector at the same location. The result is then multiplied by the combination policy $\overline{{\mathcal{A}}}$. The same is {\em not} true for exact consensus in \eqref{extra-pd-2}; we observe an asymmetry in its update: the gradient vector is evaluated at ${{\scriptstyle{\mathcal{W}}}}_{i-1}^{e}$ while the starting point is at a different location given by $\overline{{\mathcal{A}}}{{\scriptstyle{\mathcal{W}}}}_{i-1}^e$. This type of asymmetry was shown in \cite{sayed2014adaptive,sayed2014adaptation} to result in instabilities for the traditional consensus implementation in comparison to the traditional diffusion implementation. It turns out that a similar problem continues to exist for the EXTRA consensus solution \eqref{extra-pd-2}. In particular, we will show that its stability range is smaller than that of exact diffusion (i.e., the latter is stable for a larger range of step-sizes, which in turn helps attain faster convergence rates). We will illustrate this behavior in the simulations in some detail. Here, though, we establish these observations analytically. The arguments used to examine the stability range of EXTRA consensus are similar to what we did in Section \ref{sec-convergence} for exact diffusion; we shall therefore be brief and highlight only the differences. As already noted in \cite{shi2015extra}, the optimality conditions for the EXTRA consensus algorithm require the existence of block vectors $({{\scriptstyle{\mathcal{W}}}}^{\star},{{\scriptstyle{\mathcal{Y}}}}^{\star})$ such that \eq{ \mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) + {\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^\star & = 0, \label{extra-KKT-1-1} \\ {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^\star &= 0. \label{extra-KKT-2-2} } Moreover, as argued in Lemma \ref{sy in V range sapce}, there also exists a unique pair of variables $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star_o)$, in which ${\scriptstyle{\mathcal{Y}}}^\star_o$ lies in the range space of ${\mathcal{V}}$, that satisfies \eqref{extra-KKT-1-1}--\eqref{extra-KKT-2-2}. Now we introduce the block error vectors: \eq{\label{extra-error} \widetilde{{\scriptstyle{\mathcal{W}}}}^e_i={{\scriptstyle{\mathcal{W}}}}^{\star}-{{\scriptstyle{\mathcal{W}}}}_i^e,\;\;\;\; \widetilde{{\scriptstyle{\mathcal{Y}}}}^e_i={\scriptstyle{\mathcal{Y}}}_o^{\star}-{{\scriptstyle{\mathcal{Y}}}}_i^e, } and examine the evolution of these error quantities.
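Before deriving the error dynamics for \eqref{extra-pd-2}, we note that the consequence of this asymmetry can already be observed in a small simulation. The sketch below is only an illustrative toy example (the two-agent quadratic costs, the data, and the step-size are assumed values; this is not the simulation code of Section \ref{sec-simulation}): it runs the two primal-dual recursions \eqref{zn-2} and \eqref{extra-pd-2} with a deliberately large step-size, for which exact diffusion still converges while EXTRA does not, in agreement with the stability comparison developed in the remainder of this section.
\begin{verbatim}
# Toy comparison of the exact diffusion and EXTRA primal-dual recursions
# on two agents with quadratic costs J_k(w) = 0.5*sigma2*(w - b[k])**2.
import numpy as np

a, sigma2, mu = 0.2, 1.0, 1.5            # mu*sigma2 = 1.5 (large step-size)
b = np.array([1.0, -1.0])                # local minimizers; w* = mean(b) = 0
A = np.array([[a, 1 - a],
              [1 - a, a]])               # symmetric doubly-stochastic
N = 2
P = np.eye(N) / N
Pinv = np.linalg.inv(P)
Abar = (np.eye(N) + A) / 2
lam, U = np.linalg.eigh((P - P @ A) / 2) # V^2 = (P - P A)/2
V = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T

def grad(W):                             # stacked local gradients
    return sigma2 * (W - b)

Wd = np.zeros(N); Yd = np.zeros(N)       # exact diffusion (Y_0 = V W_0 = 0)
We = np.zeros(N); Ye = np.zeros(N)       # EXTRA           (Y_0 = V W_0 = 0)
for i in range(50):
    # exact diffusion: gradient step first, THEN combination by Abar
    Wd = Abar @ (Wd - mu * grad(Wd)) - Pinv @ (V @ Yd)
    Yd = Yd + V @ Wd
    # EXTRA: gradient evaluated at W_{i-1}, but added to Abar @ W_{i-1}
    We = Abar @ We - mu * grad(We) - Pinv @ (V @ Ye)
    Ye = Ye + V @ We

print(np.linalg.norm(Wd))                # exact diffusion: close to w* = 0
print(np.linalg.norm(We))                # EXTRA: grows without bound here
\end{verbatim}
For this choice of $a$ and $\mu$ the exact diffusion iterates converge while the EXTRA iterates diverge; the two-agent analysis at the end of this section quantifies the corresponding stability ranges.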
Using similar arguments in Section \ref{subsec-error-recursion}, and recalling the facts that $\overline{{\mathcal{A}}}$ is symmetric doubly-stochastic, and ${\mathcal{M}}=\mu I_{MN}$, we arrive at the error recursion for EXTRA consensus (see Appendix \ref{app-sta-EXTRA-error-recursion}): \eq{ \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i^e\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i^e\hspace{-1mm} \\ \end{array} \right] &\hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{{\mathcal{A}}} - \mu {\mathcal{H}}_{i-1} & -{\mathcal{P}}^{-1}{\mathcal{V}} \hspace{-2mm}\\ \hspace{-2mm}{\mathcal{V}} (\overline{{\mathcal{A}}} -\mu {\mathcal{H}}_{i-1}) & I_{MN}-{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}}\hspace{-2mm} \\ \end{array} \right] \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_{i-1}\hspace{-1mm} \\ \end{array} \right] \nonumber \\ &\hspace{-2mm}\define ({\mathcal{B}}^e - {\mathcal{T}}^e_{i-1})\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_{i-1}\hspace{-1mm} \\ \end{array} \right], \label{extra-error-recursion-2} } where \eq{ {\mathcal{B}}^e\define \left[ \begin{array}{cc} \hspace{-1.5mm}\overline{{\mathcal{A}}} & -{\mathcal{P}}^{-1}{\mathcal{V}} \hspace{-1.5mm}\\ \hspace{-1.5mm}{\mathcal{V}}\overline{{\mathcal{A}}} & I_{MN}\hspace{-1mm}-\hspace{-1mm}{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}}\hspace{-1.5mm} \\ \end{array} \right], {\mathcal{T}}^e_{i}\define \left[ \begin{array}{cc} \hspace{-1.5mm}\mu{\mathcal{H}}_{i} & 0 \hspace{-1.5mm}\\ \hspace{-1.5mm}\mu{\mathcal{V}}{\mathcal{H}}_{i} & 0 \hspace{-1.5mm} \\ \end{array} \right].\label{extra-T-defi} } It is instructive to compare \eqref{extra-error-recursion-2}--\eqref{extra-T-defi} with \eqref{error-recursion}--\eqref{T-defi}. These recursions capture the error dynamics for the exact consensus and diffusion strategies. Observe that ${\mathcal{B}}^e = {\mathcal{B}}$ when $\overline{{\mathcal{A}}}$ is symmetric and ${\mathcal{M}}=\mu I_{MN}$. Therefore, ${\mathcal{B}}^e$ has the same eigenvalue decomposition as in \eqref{cB-decompoision}--\eqref{xcn287}. 
With similar arguments to \eqref{B-deco-0}--\eqref{final-recursion}, we conclude that the reduced error recursion for EXTRA consensus takes the form (see Appendix \ref{app-sta-EXTRA-reduced}): \eq{\label{final-recursion-extra} \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-2mm} \\ \end{array} \right] \hspace{-1.5mm}=\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-2mm}I_M \hspace{-1mm}-\hspace{-1mm}{\mu \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{I}}} \hspace{-1mm}&\hspace{-1mm} -\frac{\mu}{c}\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{X}}_{R,u}\hspace{-2mm}\\ \hspace{-2mm}- c{\mathcal{X}}_L {\mathcal{T}}^e_{i-1} {\mathcal{R}}_1 \hspace{-1mm}&\hspace{-1mm} {\mathcal{D}}_1 - {\mathcal{X}}_L {\mathcal{T}}^e_{i-1} {\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right] \hspace{-1.5mm} \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm} \\ \end{array} \right]. } \noindent Following the same proof technique as for Theorem \ref{theom-convergence}, we can now establish the following result concerning stability conditions and convergence rate for EXTRA consensus. \begin{theorem}[\sc Linear Convergence of EXTRA]\label{lm-convergence-extra} Suppose each cost function $J_k(w)$ satisfies Assumption \ref{ass-lip}, and the combination matrix $A$ is primitive, symmetric and doubly-stochastic. The EXTRA recursion \eqref{extra-pd-2} converges exponentially fast to $({\scriptstyle{\mathcal{W}}}^\star, {\scriptstyle{\mathcal{Y}}}^\star_o)$ for step-sizes $\mu$ satisfying \eq{\label{extra-range-step-size} \mu\le \frac{\nu (1-\lambda)}{2\sqrt{N}\alpha_e\delta^2}, } where $\lambda = \sqrt{\lambda_2(\overline{A})}<1$ and \eq{\label{T-e} \alpha_e = \|{\mathcal{X}}_L\| \|{\mathcal{T}}_e\| \|{\mathcal{X}}_R\|,\mbox{ where } {\mathcal{T}}_e = \left[ \begin{array}{cc} I_{MN} & 0\\ {\mathcal{V}} & 0 \\ \end{array} \right]. } The convergence rate for the error variables is given by \eq{ \left\| \left[ \begin{array}{cc} \widetilde{\scriptstyle{\mathcal{W}}}^e_i\\ \widetilde{\scriptstyle{\mathcal{Y}}}^e_i \\ \end{array} \right] \right\|^2 \le C \rho_e^i, } where $C$ is some constant and $\rho_e=1-O(\mu_{\max})$, namely, \eq{\label{rho-e} \rho_e =& \max\Big\{ 1-\frac{\nu}{N} \mu_{\max} + \frac{2\alpha_e \delta^2 \mu^2_{\max}}{\sqrt{N}(1-\lambda)},\nonumber \\ &\hspace{1cm} \lambda+\frac{\sqrt{N}\alpha_e \delta^2\mu_{\max}}{\nu} + \frac{2\alpha_e^2 \delta^2\mu_{\max}^2}{1-\lambda} \Big\} < 1. } \end{theorem} \begin{proof} See Appendix \ref{app-conv-extra}. \end{proof} \subsection{Comparison of Stability Ranges} When $\overline{{\mathcal{A}}}$ is symmetric and ${\mathcal{M}}=\mu I_{MN}$, from Theorem \ref{theom-convergence} we get the stability range of exact diffusion: \eq{\label{diffusion-range-step-size-compare} \mu\le \frac{\nu (1-\lambda)}{2\sqrt{N}\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|\delta^2}, } where \eq{\label{T-d-compare} {\mathcal{T}}_d = \left[ \begin{array}{cc} \overline{{\mathcal{A}}} & 0\\ {\mathcal{V}}\overline{{\mathcal{A}}} & 0 \\ \end{array} \right].
} Comparing \eqref{diffusion-range-step-size-compare} with \eqref{extra-range-step-size}, we observe that the expressions differ by the terms $\|{\mathcal{T}}_e\|$ and $\|{\mathcal{T}}_d\|$. We therefore need to compare these two norms. Notice that \eq{ \|{\mathcal{T}}_e\|^2 &= \lambda_{\max}({\mathcal{T}}_e^{\mathsf{T}} {\mathcal{T}}_e)=\lambda_{\max}(I_{MN} + {\mathcal{V}}^2),\\% = \rho \left(I_{MN}+\frac{I - {\mathcal{A}}}{2}\right),\\ \|{\mathcal{T}}_d\|^2 &= \lambda_{\max}({\mathcal{T}}_d^{\mathsf{T}} {\mathcal{T}}_d)= \lambda_{\max}\big(\overline{{\mathcal{A}}}(I_{MN} + {\mathcal{V}}^2)\overline{{\mathcal{A}}} \big). } It is easy to recognize that $\lambda_{\max}(I_{MN} + {\mathcal{V}}^2) = \lambda_{\max}(I_{N} + V^2)$. Now, since $A$ is assumed symmetric doubly-stochastic and $P={1\over N} I_N$, we have \eq{ \hspace{-3mm} I_{N}+V^2 &= I_{N}+ \frac{P - P A}{2} \nonumber \\ &= I_{N}+\frac{I_{N} - A}{2N} = \frac{(2N+1)I_{N} - A}{2N}, \label{xcngyhu} } Moreover, since $A$ is primitive, symmetric and doubly stochastic, we can decompose it as \eq{\label{A-deco} A = U \Lambda\, U^{\mathsf{T}}, } where $U$ is orthogonal, $\Lambda={\mathrm{diag}}\{\lambda_1\hspace{-0.5mm}(\hspace{-0.5mm}A\hspace{-0.3mm}),\cdots\hspace{-0.5mm}, \lambda_N(A)\}$ and \eq{\label{23n8} 1 = \lambda_1(A) > \lambda_2(A) \ge \cdots \ge \lambda_N(A) >-1. } With this decomposition, expression \eqref{xcngyhu} can be rewritten as \eq{ I_{N}+V^2 = U \frac{(2N+1)I_{N} - \Lambda}{2N} U^{\mathsf{T}}. \label{xcnwedhg8} } from which we conclude that \eq{\label{zxc6hs90} \boxed{ \lambda_{\max}(I_{N} + V^2) = \frac{(2N+1) - \lambda_N(A)}{2N} } } Similarly, $\lambda_{\max}(\overline{{\mathcal{A}}} (I_{MN}+{\mathcal{V}}^2) \overline{{\mathcal{A}}})=\lambda_{\max}(\overline{A} (I_{N} + V^2) \overline{A})$. Using $\overline{A} = \frac{I_N + A}{2}$, and equations \eqref{A-deco} and \eqref{xcnwedhg8}, we have \eq{ &\hspace{-5mm} \overline{A} (I_{N}+ V^2) \overline{A} \nonumber \\ =&\ \left(\frac{I_{N} + A}{2}\right) \left( \frac{(2N+1)I_{N} - A}{2N} \right)\left(\frac{I_{N} + A}{2}\right) } \eq{= U \left(\hspace{-1mm}\frac{I_{N} + \Lambda}{2}\hspace{-1mm}\right) \left(\hspace{-1mm}\frac{(2N+1)I_{N} - \Lambda}{2N} \hspace{-1mm}\right) \left(\frac{I_{N} + \Lambda}{2}\right) U^{\mathsf{T}}. } Therefore, we have \eq{ &\ \lambda_{\max}\left(\overline{A} (I_{N}+ V^2) \overline{A}\right) \nonumber \\ =&\ \max_k\left\{\left(\frac{\lambda_k(A) + 1}{2}\right)^2 \left(\frac{2N+1 - \lambda_k(A)}{2N} \right) \right\} \label{xn3wh8-0} \nonumber \\ \overset{(a)}{\le}&\ \max_k\left\{\left(\frac{\lambda_k(A) + 1}{2}\right)^2\right\} \max_k\left\{ \frac{2N+1 - \lambda_k(A)}{2N} \right\} \nonumber \\ \overset{\eqref{23n8}}{=}&\ \frac{2N+1 - \lambda_N(A)}{2N}. } It is worth noting that the ``$=$" sign cannot hold in (a) because \eq{ \argmax_k\left\{\left(\frac{\lambda_k(A) + 1}{2}\right)^2\right\} &= 1,\\ \argmax_k\left\{ \frac{2N+1 - \lambda_k(A)}{2N} \right\} &= N. } In other words, $\left(\frac{\lambda_k(A) + 1}{2}\right)^2$ and $\frac{2N+1 - \lambda_k(A)}{2N}$ cannot reach their maximum values at the same $k$. As a result, \eq{\label{compare-Td-Te} \|{\mathcal{T}}_d\|^2 < \|{\mathcal{T}}_e\|^2 \Longrightarrow \alpha_d < \alpha_e. } This means that the upper bound on $\mu$ in \eqref{extra-range-step-size} is smaller than the upper bound on $\mu$ in \eqref{diffusion-range-step-size-compare}. We can also compare the convergence rates of EXTRA consensus and exact diffusion when both algorithms converge. 
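Before comparing the convergence rates, the norm gap \eqref{compare-Td-Te} can also be verified numerically. The sketch below is an illustrative check only (the ring network with $N=6$ is an assumed toy example, and the scalar case $M=1$ is used so that ${\mathcal{T}}_e$ and ${\mathcal{T}}_d$ reduce to their $N$-dimensional versions): it evaluates $\|{\mathcal{T}}_e\|^2$, the closed form \eqref{zxc6hs90}, and $\|{\mathcal{T}}_d\|^2$ for a symmetric doubly-stochastic $A$ with $P=I_N/N$.
\begin{verbatim}
# Numerical check that ||T_d|| < ||T_e|| for a symmetric doubly-stochastic A.
import numpy as np

N = 6
A = np.zeros((N, N))                      # ring with self-loops (illustrative)
for k in range(N):
    A[k, k] = 0.4
    A[k, (k + 1) % N] = 0.3
    A[k, (k - 1) % N] = 0.3

P = np.eye(N) / N
Abar = (np.eye(N) + A) / 2
lam, U = np.linalg.eigh((P - P @ A) / 2)  # V^2 = (P - P A)/2 = (I - A)/(2N)
V = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T

Z = np.zeros((N, N))
Te = np.block([[np.eye(N), Z], [V,        Z]])
Td = np.block([[Abar,      Z], [V @ Abar, Z]])

lamN = np.min(np.linalg.eigvalsh(A))      # lambda_N(A)
print(np.linalg.norm(Te, 2) ** 2)         # = (2N + 1 - lambda_N(A)) / (2N)
print((2 * N + 1 - lamN) / (2 * N))       # closed form, same value
print(np.linalg.norm(Td, 2) ** 2)         # strictly smaller
\end{verbatim}
The first two printed values coincide, confirming \eqref{zxc6hs90}, and the third is strictly smaller, in agreement with \eqref{compare-Td-Te}.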
When $\overline{{\mathcal{A}}}$ is symmetric and ${\mathcal{M}}=\mu I_{MN}$, from Theorem \ref{theom-convergence} we get the convergence rate of exact diffusion: \eq{\label{rho-d} \rho_d =& \max\Big\{ 1-\frac{\nu}{N} \mu_{\max} + \frac{2\alpha_d \delta^2 \mu^2_{\max}}{\sqrt{N}(1-\lambda)},\nonumber \\ &\hspace{1cm} \lambda+\frac{\sqrt{N}\alpha_d \delta^2\mu_{\max}}{\nu} + \frac{2\alpha_d^2 \delta^2\mu_{\max}^2}{1-\lambda} \Big\}. } It is clear from \eqref{rho-d} and \eqref{rho-e} that EXTRA consensus and exact diffusion have the same convergence rate to first-order in $\mu_{\max}$, namely, \eq{ \widehat{\rho}_d = 1 - \frac{\nu}{N}\mu_{\max} = \widehat{\rho}_e } More generally, when higher-order terms in $\mu_{\max}$ cannot be ignored, it holds that $\rho_d<\rho_e$ because $\alpha_d<\alpha_e$ (see \eqref{compare-Td-Te}). In this situation, exact diffusion converges faster than EXTRA. { \subsection{An Analytical Example} In this subsection we illustrate the stability of exact diffusion by considering the example of mean-square-error (MSE) networks \cite{sayed2014adaptation}. Suppose $N$ agents are observing streaming data $\{{\boldsymbol{d}}_k(i), {\boldsymbol{u}}_{k,i}\}$ that satisfy the regression model \eq{\label{regresson-model} {\boldsymbol{d}}_k(i)={\boldsymbol{u}}_{k,i}^{\mathsf{T}} w^o +{\boldsymbol{v}}_k(i), } where $w^o$ is unknown and ${\boldsymbol{v}}_k(i)$ is the noise process that is independent of the regression data ${\boldsymbol{u}}_{k,j}$ for any $k,j$. Furthermore, we assume ${\boldsymbol{u}}_{k,i}$ is zero-mean with covariance matrix $R_{u,k}=\mathbb{E}{\boldsymbol{u}}_{k,i}{\boldsymbol{u}}_{k,i}^{\mathsf{T}} > 0$, and ${\boldsymbol{v}}_k(i)$ is also zero-mean with power $\sigma_{v,k}^2=\mathbb{E} {\boldsymbol{v}}_k^2(i)$. We denote the cross covariance vector between ${\boldsymbol{d}}_k(i)$ and ${\boldsymbol{u}}_{k,i}$ by $r_{du,k} = \mathbb{E}{\boldsymbol{d}}_k(i){\boldsymbol{u}}_{k,i}$. To discover the unknown $w^o$, the agents cooperate to solve the following mean-square-error problem: \eq{\label{MSE-network} \min_{w\in \mathbb{R}^M}\ \textstyle{\frac{1}{2}\sum_{k=1}^{N}} \mathbb{E} \big({\boldsymbol{d}}_k(i)-{\boldsymbol{u}}_{k,i}^{\mathsf{T}} w\big)^2. } It was shown in Example 6.1 of \cite{sayed2014adaptation} that the global minimizer of problem \eqref{MSE-network} coincides with the unknown $w^o$ in \eqref{regresson-model}. When $R_{u,k}$ and $r_{du,k}$ are unknown and only realizations of ${\boldsymbol{u}}_{k,i}$ and ${\boldsymbol{d}}_k(i)$ are observed by agent $k$, one can employ the diffusion algorithm with stochastic gradient descent to solve \eqref{MSE-network}. However, when $R_{u,k}$ and $r_{du,k}$ are known in advance, problem \eqref{MSE-network} reduces to deterministic optimization problem: \eq{\label{MSE-network-determ} \min_{w\in \mathbb{R}^M}\ \frac{1}{2}\sum_{k=1}^{N} \big( w^{\mathsf{T}} R_{u,k}\hspace{0.3mm} w - 2 r_{du,k}^{\mathsf{T}} w \big). } We can then employ the exact diffusion or the EXTRA consensus algorithm to solve \eqref{MSE-network-determ}. To illustrate the stability issue, it is sufficient to consider a network with $2$ agents (see Fig. \ref{fig:2-agent-MSE}) and with diagonal Hessian matrices, i.e., \eq{\label{xcn99} R_{u,1}=R_{u,2}=\sigma^2 I_M. } We assume the agents use the combination weights {$\{a, 1-a\}$} with $a\in (0,1)$, so that \eq{\label{2-MSE-A} {A= \left[ \begin{array}{cc} a & 1-a\\ 1-a & a \\ \end{array} \right] \in \mathbb{R}^{2\times 2}, }} which is symmetric and doubly stochastic. 
The two agents employ the same step-size $\mu$ (or $\mu^e$ in the EXTRA recursion). It is worth noting that the following analysis can be extended to $N$ agents with some more algebra. Under \eqref{xcn99}, we have $H_1=H_2=\sigma^2 I_M$ and ${\mathcal{H}}={\mathrm{diag}}\{H_1,H_2\}=\sigma^2 I_{2M}$. For the matrix $A$ in \eqref{2-MSE-A}, we have \eq{\label{zn288} \lambda_1(A)=1,\quad \lambda_2(A)=2a-1\in (-1,1), } and $p=[0.5;0.5]$, $P=0.5 I_2$. \begin{figure} \caption{A two-agent network using combination weights $\{a, 1-a\}$} \label{fig:2-agent-MSE} \end{figure} Let $\widetilde{\scriptstyle{\mathcal{Z}}}_i = [\widetilde{\scriptstyle{\mathcal{W}}}_i;\widetilde{\scriptstyle{\mathcal{Y}}}_i]\in \mathbb{R}^{2M}$, and $\widetilde{\scriptstyle{\mathcal{Z}}}_i^e = [\widetilde{\scriptstyle{\mathcal{W}}}_i^e;\widetilde{\scriptstyle{\mathcal{Y}}}_i^e]\in \mathbb{R}^{2M}$. The exact diffusion error recursion \eqref{error-recursion} and the EXTRA error recursion \eqref{extra-error-recursion-2} reduce to \eq{ \widetilde{\scriptstyle{\mathcal{Z}}}_i = {\mathcal{Q}}_d \widetilde{\scriptstyle{\mathcal{Z}}}_{i-1},\label{special-ed} } \eq{ \widetilde{\scriptstyle{\mathcal{Z}}}_i^e = {\mathcal{Q}}_e \widetilde{\scriptstyle{\mathcal{Z}}}_{i-1}^e,\label{special-extra} } where \eq{ {\mathcal{Q}}_d \hspace{-1mm}&=\hspace{-1mm} \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}(1-\mu \sigma^2 )\overline{A} & -2 V \hspace{-2mm}\\ \hspace{-2mm} (1-\mu \sigma^2 )V \overline{A} & \overline{A} \hspace{-2mm} \\ \end{array} \right]}_{Q_d} \otimes I_M, \\ {\mathcal{Q}}_e \hspace{-1mm}&=\hspace{-1mm} \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}\overline{A}-\mu^e \sigma^2 I_{2} & -2 V \hspace{-2mm}\\ \hspace{-2mm} V(\overline{A}-\mu^e \sigma^2 I_{2}) & \overline{A} \hspace{-2mm} \\ \end{array} \right]}_{Q_e} \otimes I_M. } To guarantee the convergence of $\widetilde{\scriptstyle{\mathcal{Z}}}_i$ and $\widetilde{\scriptstyle{\mathcal{Z}}}_i^e$, we need to examine the eigenstructure of the $4\times 4$ matrices $Q_d$ and $Q_e$. {The proof of the next lemma is quite similar to Lemma \ref{lm-B-decomposition}; if desired, see Appendix F of the arXiv version\cite{yuan2017exact2}}. \begin{lemma}[\sc Eigenstructure of $Q_d$]\label{lm-Q_d-decom} The matrix $Q_d$ admits the following eigendecomposition \eq{\label{xn88} Q_d = X \overline{Q}_d X^{-1}, } where \eq{ \overline{Q}_d = \left[ \begin{array}{cc} 1 & 0 \\ 0 & E_d \\ \end{array} \right] } and \eq{\label{cn9999} E_d = \left[ \begin{array}{ccc} \hspace{-2mm}1-\mu\sigma^2 & 0 & 0\hspace{-2mm}\\ \hspace{-2mm}0 & (1-\mu\sigma^2)a & -\sqrt{2-2a}\hspace{-2mm}\\ \hspace{-2mm}0 & (1-\mu\sigma^2)a\sqrt{\frac{1-a}{2}} & a\hspace{-2mm} \\ \end{array} \right]. } Moreover, the matrices $X$ and $X^{-1}$ are given by \eq{\label{m87dh} X=\left[ \begin{array}{cc} r& X_R \\ \end{array} \right],\quad X^{-1}= \left[ \begin{array}{c} \ell^{\mathsf{T}}\\ X_L \\ \end{array} \right], } where $X_R\in \mathbb{R}^{4\times 3}$, $X_L\in \mathbb{R}^{3\times 4}$, and \eq{\label{xcn3} r=\frac{1}{2} \left[ \begin{array}{c} 0\\ \mathds{1}_2 \\ \end{array} \right]\in \mathbb{R}^{4}, \quad \ell= \left[ \begin{array}{cc} 0\\ \mathds{1}_2 \\ \end{array} \right]\in \mathbb{R}^{4}. } \rightline \mbox{\rule[0pt]{1.3ex}{1.3ex}} \end{lemma} It is observed that $Q_d$ always has an eigenvalue at $1$, which implies that $Q_d$ is not stable no matter what the step-size $\mu$ is. However, this eigenvalue does not influence the convergence of recursions \eqref{special-ed}. 
To see that, from Lemma \ref{lm-Q_d-decom} we have \eq{\label{ngy7} {\mathcal{Q}}_d &= {\mathcal{X}} \overline{{\mathcal{Q}}}_d {\mathcal{X}}^{-1} = \left[ \begin{array}{cc} \hspace{-2mm}R & {\mathcal{X}}_R\hspace{-2mm} \\ \end{array} \right] \left[ \begin{array}{cc} I_M & 0\\ 0 & {\mathcal{E}}_d \\ \end{array} \right] \left[ \begin{array}{c} L^{\mathsf{T}}\\ {\mathcal{X}}_L \\ \end{array} \right] } where ${\mathcal{X}}_R=X_R\otimes I_M$, ${\mathcal{X}}_L=X_L\otimes I_M$, ${\mathcal{E}}_d = E_d\otimes I_M$, and \eq{ R = \frac{1}{2} \left[ \begin{array}{c} 0 \\ \mathds{1}_2\otimes I_M \\ \end{array} \right], \quad L= \left[ \begin{array}{c} 0 \\ \mathds{1}_2\otimes I_M \\ \end{array} \right]. } Let \eq{ \left[ \begin{array}{c} \widehat{{\scriptstyle{\mathcal{Z}}}}_i\\ \check{{\scriptstyle{\mathcal{Z}}}}_i \\ \end{array} \right]={\mathcal{X}}^{-1} \widetilde{\scriptstyle{\mathcal{Z}}}_i= \left[ \begin{array}{c} L^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{Z}}}_i\\ {\mathcal{X}}_L \widetilde{\scriptstyle{\mathcal{Z}}}_i \\ \end{array} \right].\label{h82} } The exact diffusion recursion \eqref{special-ed} can be transformed into \eq{ \left[ \begin{array}{c} \widehat{{\scriptstyle{\mathcal{Z}}}}_i\\ \check{{\scriptstyle{\mathcal{Z}}}}_i \\ \end{array} \right] = \left[ \begin{array}{cc} I_M & 0\\ 0 & {\mathcal{E}}_d \\ \end{array} \right] \left[ \begin{array}{c} \widehat{{\scriptstyle{\mathcal{Z}}}}_{i-1}\\ \check{{\scriptstyle{\mathcal{Z}}}}_{i-1} \\ \end{array} \right], } which can be further divided into two separate recursions: \eq{ \widehat{{\scriptstyle{\mathcal{Z}}}}_i = \widehat{{\scriptstyle{\mathcal{Z}}}}_{i-1}, \quad \check{{\scriptstyle{\mathcal{Z}}}}_i = {\mathcal{E}}_d \check{{\scriptstyle{\mathcal{Z}}}}_{i-1}. } Therefore, $\widehat{{\scriptstyle{\mathcal{Z}}}}_i=0$ if $\widehat{{\scriptstyle{\mathcal{Z}}}}_0=0$. Since ${\scriptstyle{\mathcal{Y}}}_0={\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0$ and ${\scriptstyle{\mathcal{Y}}}_o^\star \in \mathrm{range}({\mathcal{V}})$, we have $\widetilde{\scriptstyle{\mathcal{Y}}}_0 = {\scriptstyle{\mathcal{Y}}}_o^\star - {\scriptstyle{\mathcal{Y}}}_0 \in \mathrm{range}({\mathcal{V}})$. Therefore, \eq{ \widehat{{\scriptstyle{\mathcal{Z}}}}_0 &\overset{\eqref{h82}}{=} L^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{Z}}}_0 = \left[ \begin{array}{cc} 0 & (\mathds{1}_2\otimes I_M)^{\mathsf{T}} \\ \end{array} \right] \left[ \begin{array}{c} \widetilde{\scriptstyle{\mathcal{W}}}_0\\ \widetilde{\scriptstyle{\mathcal{Y}}}_0 \\ \end{array} \right] \overset{\eqref{xcn987-2}}{=} 0, } As a result, we only need to focus on the other recursion: \eq{\label{cngw7} \check{{\scriptstyle{\mathcal{Z}}}}_i = {\mathcal{E}}_d \check{{\scriptstyle{\mathcal{Z}}}}_{i-1},\quad \mbox{where} \quad {\mathcal{E}}_d = E_d \otimes I_M. } If we select the step-size $\mu$ such that all eigenvalues of $E_d$ stay inside the unit-circle, then we guarantee the convergence of $\check{{\scriptstyle{\mathcal{Z}}}_i}$ and, hence, $\widetilde{\scriptstyle{\mathcal{Z}}}_i$. \begin{lemma}[\sc Stability of exact diffusion]\label{lm-sta-ed} When $\mu$ is chosen such that \eq{\label{mu-range} 0 < \mu \sigma^2 < 2, } all eigenvalues of $E_d$ will lie inside the unit-circle, which implies that $\widetilde{\scriptstyle{\mathcal{Z}}}_i$ in \eqref{special-ed} converges to $0$, i.e., $\widetilde{\scriptstyle{\mathcal{Z}}}_i \to 0$. \end{lemma} \begin{proof} See Appendix \ref{app-sta-ed}. \end{proof} Next we turn to the EXTRA error recursion \eqref{special-extra}. 
\begin{lemma}[\sc Instability of EXTRA]\label{lm-sta-ex} When $\mu^e$ is chosen such that \eq{\label{mu-range-extra} \mu^e \sigma^2 \ge a+1, } it holds that $\widetilde{\scriptstyle{\mathcal{Z}}}_i^e$ generated through EXTRA \eqref{special-extra} will diverge. \end{lemma} \begin{proof} See Appendix \ref{app-sta-extra}. \end{proof} Comparing the statements of Lemmas \ref{lm-sta-ed} and \ref{lm-sta-ex}, and since $1+a<2$, exact diffusion has a larger range of stability than EXTRA (i.e., exact diffusion is stable for a wider range of step-size values). In particular, {if agents place small weights on their own data}, i.e., when $a\approx 0$, the stability range for exact diffusion will be almost twice as large as that of EXTRA.} { \section{Numerical Experiments}\label{sec-simulation} In this section we compare the performance of the proposed exact diffusion algorithm with existing linearly convergent algorithms such as EXTRA\cite{shi2015extra}, DIGing\cite{nedich2016achieving}, and Aug-DGM\cite{xu2015augmented,nedic2016geometrically}. In all figures, the $y$-axis indicates the relative error, i.e., $\|{\scriptstyle{\mathcal{W}}}_i-{\scriptstyle{\mathcal{W}}}^o\|^2/\|{\scriptstyle{\mathcal{W}}}_0-{\scriptstyle{\mathcal{W}}}^o\|^2$, where ${\scriptstyle{\mathcal{W}}}_i={\mathrm{col}}\{w_{1,i},\cdots,w_{N,i}\}\in \mathbb{R}^{NM}$ and ${\scriptstyle{\mathcal{W}}}^o={\mathrm{col}}\{w^o,\cdots,w^o\}\in \mathbb{R}^{NM}$. All simulations employ the connected network topology with $N = 20$ nodes shown in Fig.4 of Part I\cite{yuan2017exact1}. \subsection{Distributed Least-squares}\label{subsec:expe-ls} In this experiment, we focus on the least-squares problem: \eq{\label{prob-ls} w^o = \argmin_{w\in \mathbb{R}^M}\quad \frac{1}{2}\sum_{k=1}^{N} \|U_k w - d_k\|^2. } The simulation setting is the same as Sec. VI.A of Part I\cite{yuan2017exact1}. In the simulation we compare exact diffusion with EXTRA, DIGing, and Aug-DGM. These algorithms work with symmetric doubly-stochastic or right-stochastic matrices $A$. Therefore, we now employ doubly-stochastic matrices for a proper comparison. Moreover, there are two information combinations per iteration in DIGing and Aug-DGM algorithms, and each information combination corresponds to one round of communication. In comparison, there is only one information combination (or round of communication) in EXTRA and exact diffusion. For fairness we will compare the algorithms based on the amount of communications, rather than the iterations. {\color{black}In the figures, we use one unit amount of communication to represent $2ME$ communicated variables, where $M$ is the dimension of the variable while $E$ is the number of edges in the network.} The problem setting is the same as in the simulations in Part I, except that $A$ is generated through the Metropolis rule \cite{sayed2014adaptation}. {\color{black}In the top plot in Fig. \ref{fig:4-algs}, all algorithms are carefully adjusted to reach their fastest convergence. It is observed that exact diffusion is slightly better than EXTRA, and both of them are more communication efficient than DIGing and Aug-DGM.} When a larger step-size $\mu=0.02$ is chosen for all algorithms, it is observed that EXTRA and DIGing diverge while exact diffusion and Aug-DGM converge, and exact diffusion is much faster than Aug-DGM algorithm. {\color{black} We also compare exact diffusion with Push-EXTRA \cite{xi2015linear,zeng2015extrapush} and Push-DIGing \cite{nedich2016achieving} for non-symmetric combination policies. 
We consider the unbalanced network topology shown in Fig. 6 in Part I \cite{yuan2017exact1}. The combination matrix is generated through the averaging rule. Note that the Perron eigenvector $p$ is known beforehand for such a combination matrix $A$, and we can therefore substitute $p$ directly into the recursions of Push-EXTRA and Push-DIGing. In the simulation, all algorithms are adjusted to reach their fastest convergence. In Fig. \ref{fig:5-algs}, it is observed that exact diffusion is the most communication efficient among all three algorithms. This figure illustrates that exact diffusion has superior performance for locally-balanced combination policies. }
\begin{figure}\label{fig:4-algs} \end{figure}
\begin{figure} \caption{\footnotesize Convergence comparison between exact diffusion, EXTRA, DIGing, and Aug-DGM for the distributed least-squares problem \eqref{prob-ls} with a non-symmetric combination policy.} \label{fig:5-algs} \end{figure} }

\subsection{Distributed Logistic Regression}
We next consider a pattern classification scenario. Each agent $k$ holds local data samples $\{h_{k,j}, \gamma_{k,j}\}_{j=1}^L$, where $h_{k,j}\in \mathbb{R}^M$ is a feature vector and $\gamma_{k,j}\in \{-1,+1\}$ is the corresponding label. Moreover, the value $L$ is the number of local samples at each agent. All agents will cooperatively solve the regularized logistic regression problem:
\eq{\label{prob-lr} w^o=\argmin_{w\in \mathbb{R}^M} \sum_{k=1}^{N} \Big[\frac{1}{L}\sum_{\ell=1}^{L}\ln\big(1\hspace{-1mm}+\hspace{-1mm}\exp(-\gamma_{k,\ell} h_{k,\ell}^{\mathsf{T}} w)\big) \hspace{-1mm}+\hspace{-1mm} \frac{\rho}{2}\|w\|^2 \Big]. }
The simulation setting is the same as Sec. VI.B of Part I\cite{yuan2017exact1}. In this simulation, we also compare exact diffusion with EXTRA, DIGing, and Aug-DGM. A symmetric doubly-stochastic $A$ is generated through the Metropolis rule. {\color{black}In the top plot in Fig. \ref{fig:lr-4-algs}, all algorithms are carefully adjusted to reach their fastest convergence. It is observed that exact diffusion is the most communication efficient among all algorithms.} When a larger step-size $\mu=0.04$ is chosen for all algorithms in the bottom plot in Fig. \ref{fig:lr-4-algs}, it is observed that both exact diffusion and Aug-DGM are still able to converge linearly to $w^o$, while EXTRA and DIGing fail to do so. Moreover, exact diffusion is observed to be much more communication efficient than Aug-DGM.
\begin{figure}\label{fig:lr-4-algs} \end{figure}

\appendices
\section{Proof of Lemma \ref{lm-B-decomposition}}\label{appdx-Lemma-fd}
Define $V^\prime \define V + \mathds{1}_N\, p^{\mathsf{T}} \in \mathbb{R}^{N\times N}$. We claim that $V^\prime$ is a full-rank matrix. Suppose to the contrary that there exists some $x\neq 0$ such that $V'x=0$, i.e., $(V + \mathds{1}_N\, p^{\mathsf{T}}) x = Vx + (p^{\mathsf{T}} x)\mathds{1}_N=0,$ which requires \eq{\label{xch388} Vx = - (p^{\mathsf{T}} x)\mathds{1}_N. } When $p^{\mathsf{T}} x \neq 0$, relation \eqref{xch388} implies that $\mathds{1}_N \in \mathrm{range}(V)$. {However, from Lemma \ref{lm:null-V} we know that \eq{ \mathrm{null}(V)=\mathrm{span}\{\mathds{1}_N\} \Longleftrightarrow&\ \mathrm{range}(V^{\mathsf{T}})^\perp = \mathrm{span}\{\mathds{1}_N\} \nonumber \\ \Longleftrightarrow&\ \mathrm{range}(V)^\perp = \mathrm{span}\{\mathds{1}_N\}, \label{i2378d} } where the last ``$\Leftrightarrow$" holds because $V$ is symmetric.} Relation \eqref{i2378d} contradicts $\mathds{1}_N \in \mathrm{range}(V)$. Therefore, $V^\prime x \neq 0$.
When $p^{\mathsf{T}} x =0$, relation \eqref{xch388} implies that $Vx=0$, which together with Lemma \ref{lm:null-V} implies that $x=c \mathds{1}_N$ for some constant $c\neq 0$. However, since $p^{\mathsf{T}} \mathds{1}_N=1$, we have $p^{\mathsf{T}} x = c \neq 0$, which also contradicts $p^{\mathsf{T}} x =0$. As a result, $V'$ has full rank and hence $\left(V'\right)^{-1}$ exists. With $V^\prime=V + \mathds{1}_N\, p^{\mathsf{T}}$ and the fact $V\mathds{1}_N = 0$ {(see Lemma \ref{lm:null-V})}, we also have \eq{ & VV^\prime =V (V + \mathds{1}_N\, p^{\mathsf{T}}) = V^2 + V \mathds{1}_N\, p^{\mathsf{T}} = V^2, \label{VVprime-1}\\ & V^\prime (I_N - \mathds{1}_N\, p^{\mathsf{T}}) = (V + \mathds{1}_N\, p^{\mathsf{T}})(I_N - \mathds{1}_N\, p^{\mathsf{T}})=V.\label{VVprime-2} } With relations \eqref{VVprime-1} and \eqref{VVprime-2}, we can verify that \eq{\label{B-deco-1} B \hspace{-1mm}&= \hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}I_N\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm}{V}^\prime\hspace{-2mm} \\ \end{array} \right]\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{A}^{\mathsf{T}} \hspace{-1mm} & \hspace{-0.5mm} - P^{-1}V^2\hspace{-2mm} \\ \hspace{-2mm}(V^\prime)^{-\hspace{-0.4mm}1}V\overline{A}^{\mathsf{T}} \hspace{-0.5mm} & \hspace{-1mm} I_N \hspace{-0.8mm}-\hspace{-0.8mm} (V^\prime)^{-\hspace{-0.4mm}1} V P^{-\hspace{-0.4mm}1} V^2\hspace{-2mm} \\ \end{array} \right]\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-2mm}I_N\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm}({V}^\prime)^{-1}\hspace{-2mm} \\ \end{array} \right] \nonumber \\ &\overset{(a)}{=} \hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}I_N\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm}{V}^\prime\hspace{-2mm} \\ \end{array} \right]\hspace{0mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{A}^{\mathsf{T}} \hspace{-1mm} & \hspace{-0.5mm} \overline{A}^{\mathsf{T}} - I_N \hspace{-2mm} \\ \hspace{-2mm}\overline{A}^{\mathsf{T}} - \mathds{1}_N\, p^{\mathsf{T}} \hspace{-0.5mm} & \hspace{-1mm} \overline{A}^{\mathsf{T}} \hspace{-2mm} \\ \end{array} \right]\hspace{0mm} \left[ \begin{array}{cc} \hspace{-2mm}I_N\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm}({V}^\prime)^{-1}\hspace{-2mm} \\ \end{array} \right] } where in (a) we used $V^2\hspace{-1mm}=\hspace{-1mm}(P \hspace{-1mm}-\hspace{-1mm} PA)/2$ and $\overline{A}^{\mathsf{T}} \hspace{-1mm}=\hspace{-1mm} (I_N \hspace{-1mm}+\hspace{-1mm} A^{\mathsf{T}})/2$. Using $A=Y\Lambda Y^{-1}$ from Lemma 3 of Part I\cite{yuan2017exact1}, we have \eq{\label{m88} \overline{A}^{\mathsf{T}} = (Y^{-1})^{\mathsf{T}} \overline{\Lambda} Y^{\mathsf{T}},\quad \overline{A}^{\mathsf{T}} \hspace{-1mm}-\hspace{-1mm} I_N \hspace{-0.5mm}=\hspace{-0.5mm} (Y^{-1})^{\mathsf{T}} (\overline{\Lambda} \hspace{-1mm}-\hspace{-1mm} I_N) Y^{\mathsf{T}} } where $\overline{\Lambda} \define \left(I_N + \Lambda\right)/2$. Obviously, $\overline{\Lambda}>0$ is also a real diagonal matrix. If we let $\overline{\Lambda}=\mathrm{diag}\{\lambda_1(\overline{A}),\cdots, \lambda_N(\overline{A})\}$, it holds that \eq{\label{tA&tA-I} \lambda_k(\overline{A}) = (\lambda_k(A)+1)/{2} >0,\quad \forall\, k =1,\cdots, N, } and $\lambda_1(\overline{A})=1$.
Moreover, we can also verify that \eq{\label{tA-1p'} \overline{A}^{\mathsf{T}} - \mathds{1}_N\, p^{\mathsf{T}} = (Y^{-1})^{\mathsf{T}} \overline{\Lambda}_1 Y^{\mathsf{T}}, } where $ \overline{\Lambda}_1={\rm diag}\{0, \lambda_2(\overline{A}),\cdots, \lambda_N(\overline{A})\}.$ This is because the vectors $\mathds{1}_N^{\mathsf{T}}$ and $p$ are the left- and right-eigenvectors of $\overline{A}$. Combining relations \eqref{tA&tA-I} and \eqref{tA-1p'}, we have \eq{\label{B-deco-2} & \left[ \begin{array}{cc} \hspace{-2mm}\overline{A}^{\mathsf{T}} \hspace{-1mm} & \hspace{-0.5mm} \overline{A}^{\mathsf{T}} - I_N \hspace{-2mm} \\ \hspace{-2mm}\overline{A}^{\mathsf{T}} - \mathds{1}_N\, p^{\mathsf{T}} \hspace{-0.5mm} & \hspace{-1mm} \overline{A}^{\mathsf{T}} \hspace{-2mm} \\ \end{array} \right] \nonumber \\ =& \left[ \begin{array}{cc} \hspace{-2mm}(Y^{-1})^{\mathsf{T}}\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm}(Y^{-1})^{\mathsf{T}}\hspace{-2mm} \\ \end{array} \right]\hspace{0mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{\Lambda} \hspace{-1mm} & \hspace{-0.5mm} \overline{\Lambda} - I_N \hspace{-2mm} \\ \hspace{-2mm}\ \overline{\Lambda}_1 \hspace{-0.5mm} & \hspace{-1mm} \overline{\Lambda} \hspace{-2mm} \\ \end{array} \right]\hspace{0mm} \left[ \begin{array}{cc} \hspace{-2mm}Y^{\mathsf{T}}\hspace{-1mm} & \hspace{-1mm}0\hspace{-2mm}\\ \hspace{-2mm}0\hspace{-1mm} & \hspace{-1mm} Y^{\mathsf{T}} \hspace{-2mm} \\ \end{array} \right]. } With permutation operations, it holds that \eq{\label{B-deco-3} \left[ \begin{array}{cc} \hspace{-2mm}\overline{\Lambda} \hspace{-1mm} & \hspace{-0.5mm} \overline{\Lambda} - I_N \hspace{-2mm} \\ \hspace{-2mm}\ \overline{\Lambda}_1 \hspace{-0.5mm} & \hspace{-1mm} \overline{\Lambda} \hspace{-2mm} \\ \end{array} \right] =& \Pi \left[ \begin{array}{cccc} E_1 & 0 & \cdots & 0 \\ 0 & E_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & E_N \\ \end{array} \right] \Pi^{\mathsf{T}}, } where $\Pi \in \mathbb{R}^{N\times N}$ is a permutation matrix, and \eq{ E_1 \hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}1 & 0 \hspace{-2mm}\\ \hspace{-2mm}0 & 1\hspace{-2mm} \\ \end{array} \right]\hspace{-1mm}, \ E_k\hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}\lambda_k(\overline{A}) & \lambda_k(\overline{A})-1\hspace{-2mm} \\ \hspace{-2mm}\lambda_k(\overline{A}) & \lambda_k(\overline{A})\hspace{-2mm} \\ \end{array} \right]\hspace{-1mm},\ \forall k = 2,\cdots, N. } Now we seek the eigenvalues of $E_k$. Let $d$ denote an eigenvalue of $E_k$. The characteristic polynomial of $E_k$ is \eq{ d^2-2\lambda_k(\overline{A}) d + \lambda_k(\overline{A}) = 0. } Therefore, we have \eq{ d=\frac{2\lambda_k(\overline{A}) \pm \sqrt{4\lambda_k^2(\overline{A}) - 4 \lambda_k(\overline{A})}}{2}. } Since $\lambda_k(\overline{A})\in(0,1)$ when $k=2,3,\cdots,N$, it holds that $4\lambda_k^2(\overline{A}) < 4 \lambda_k(\overline{A})$. Therefore, $d$ is a complex number, and its magnitude is $\sqrt{\lambda_k(\overline{A})}$. Therefore, $E_k$ can be diagonalized as \eq{\label{B-deco-5} E_k = Z_k \left[ \begin{array}{cc} d_{k,1} & 0\\ 0 & d_{k,2} \\ \end{array} \right] Z_k^{-1} } where $d_{k,1}$ and $d_{k,2}$ are complex numbers and \eq{\label{B-deco-6} |d_{k,1}| = |d_{k,2}| = \sqrt{\lambda_k(\overline{A})} < 1. 
} Define $Z$ and $\overline{X}$ as \eq{ Z\define& {\mathrm{diag}}\{I_2, Z_2,Z_3, \cdots, Z_N\} } \eq{ \overline{X} \define& \left[ \begin{array}{cc}I_N&0\\0&V'\\ \end{array} \right] \left[ \begin{array}{cc}(Y^{-1})^{\sf T}&0\\0& (Y^{-1})^{\sf T}\\ \end{array} \right] \Pi\, Z } Since each factor in $\overline{X}$ is invertible, $\overline{X}^{\hspace{0.2mm}-1}$ must exist. Combining \eqref{B-deco-1} and \eqref{B-deco-2}--\eqref{B-deco-6}, we finally arrive at \eq{\label{xcnh26} B = \overline{X} D \overline{X}^{-1}, \mbox{ where } D = \left[ \begin{array}{cc} I_2 & 0\\0 & D_1 \\ \end{array} \right], } and $D_1$ has the structure claimed in \eqref{uwehn}. Therefore, we have established so far the form of the eigenvalue decomposition of $B$. In this decomposition, each $k$-th column of $\overline{X}$ is a right-eigenvector associated with the eigenvalue $D(k,k)$, and each $k$-th row of $\overline{X}^{-1}$ is the left-eigenvector associated with $D(k,k)$. Recall, however, that eigenvectors are not unique. We now verify that we can find eigenvector matrices $\overline{X}$ and $\overline{X}^{-1}$ that have the structure shown in \eqref{X and X_inv} and \eqref{R and L}. To do so, it is sufficient to examine whether the two columns of $R$ are independent right-eigenvectors associated with eigenvalue $1$, and the two rows of $L$ are independent left-eigenvectors associated with $1$. Let \eq{\label{tsdhb-app-0} R= \left[ \begin{array}{cc} \hspace{-1mm}r_1 & r_2\hspace{-1mm} \\ \end{array} \right],\mbox{ where }\ r_1 \hspace{-1mm} \define \hspace{-1mm} \left[ \begin{array}{c} \hspace{-1.8mm}\mathds{1}_N \hspace{-1.8mm}\\ \hspace{-1.8mm}0\hspace{-1.8mm} \\ \end{array} \right], \quad r_2 \hspace{-1mm} \define \hspace{-1mm} \left[ \begin{array}{c} \hspace{-1.8mm}0\hspace{-1.8mm}\\ \hspace{-1.8mm}\mathds{1}_N\hspace{-1.8mm} \\ \end{array} \right]. } Obviously, $r_1$ and $r_2$ are independent. Since \eq{ Br_1 = r_1, \quad Br_2 = r_2, } we know $r_1$ and $r_2$ are right-eigenvectors associated with eigenvalue $1$. As a result, an eigenvector matrix $X$ can be chosen in the form $ X= \left[ \begin{array}{ccc} R& \vline & X_R \\ \end{array} \right], $ where each $k$-th column of $X_R$ corresponds to the right-eigenvector associated with eigenvalue $D_1(k,k)$. Similarly, we let \eq{\label{tsdhb-app-0-L} L= \left[ \begin{array}{c} \hspace{-1mm}\ell_1^{\mathsf{T}}\hspace{-1mm}\\ \hspace{-1mm}\ell_2^{\mathsf{T}}\hspace{-1mm} \\ \end{array} \right], \mbox{ where }\ \ell_1\define \left[ \begin{array}{c} \hspace{-1mm}p\hspace{-1mm}\\ \hspace{-1mm}0\hspace{-1mm} \\ \end{array} \right], \ell_2\define \left[ \begin{array}{c} \hspace{-1mm}0\hspace{-1mm}\\ \hspace{-1mm}\frac{1}{N}\mathds{1}_N\hspace{-1mm} \\ \end{array} \right]. } It is easy to verify that $\ell_1$ and $\ell_2$ are independent left-eigenvectors associated with eigenvalue $1$. Moreover, since $LR=I_2$, $X^{-1}$ has the structure \eq{\label{nh289} X^{-1}=\left[ \begin{array}{c} L\\ X_L \\ \end{array} \right], } where each $k$-th row of $X_L$ corresponds to a left-eigenvector associated with eigenvalue $D_1(k,k)$. 
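As a quick numerical sanity check of the key step above, one may verify that each block $E_k$ indeed has a complex-conjugate pair of eigenvalues of magnitude $\sqrt{\lambda_k(\overline{A})}<1$. A minimal Python sketch, using the arbitrary illustrative value $\lambda_k(\overline{A})=0.7$, is:
\begin{verbatim}
import numpy as np

lam = 0.7                                  # any lambda_k(bar{A}) in (0, 1)
E_k = np.array([[lam, lam - 1.0],
                [lam, lam      ]])         # block E_k defined in the proof
eigs = np.linalg.eigvals(E_k)
print(eigs)                                # complex-conjugate pair
print(np.abs(eigs), np.sqrt(lam))          # both magnitudes equal sqrt(lam) < 1
\end{verbatim}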
\section{Proof of Theorem \ref{theom-convergence}} \label{app-theom-conv} From the first line of recursion \eqref{final-recursion}, we have \eq{\label{x_bar-recursion} \hspace{-1mm} \bar{{\scriptstyle{\mathcal{X}}}}_i \hspace{-1mm}= \hspace{-1mm} \left(I_M \hspace{-1mm} - \hspace{-1mm} {\overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right)\bar{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1} \hspace{-1mm} - \hspace{-1mm}\frac{1}{c} \overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u} \check{{\scriptstyle{\mathcal{X}}}}_{i-1}. } Squaring both sides and using Jensen's inequality \cite{boyd2004convex} gives \eq{\label{x_bar_square} \hspace{-1mm} \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}=\hspace{-1mm}&\ \left\|\hspace{-1mm}\left(I_M \hspace{-1mm} - \hspace{-1mm} {\overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right) \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} \hspace{-1mm}-\hspace{-1mm} \frac{1}{c} \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u} \check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.5mm}\right\|^2 \nonumber \\ \le &\ \frac{1}{1-t}\left\|I_M \hspace{-1mm} - \hspace{-1mm} {\overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right\|^2\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \nonumber \\ &\quad + \frac{1}{t}\frac{1}{c^2}\|\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\|^2 \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 } for any $t\in (0,1)$. Using $\tau_k=\mu_k/\mu_{\max}$, we obtain \eq{ \hspace{-2mm} {\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{I}}} =&\ \mu_{\max}\sum_{k=1}^{N}p_k \tau_k H_{k,i-1} \nonumber \\ \hspace{2mm} \overset{\eqref{H-properties}}{\ge} &\ {\mu_{\max}p_{k_o}\tau_{k_o}\nu I_M} \define {\sigma_{11} \mu_{\max}I_M}, } where $\sigma_{11} = p_{k_o} \tau_{k_o}\nu$. Similarly, we can also obtain \eq{ {\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{I}}} & = \mu_{\max}\sum_{k=1}^{N}p_k \tau_k H_{k,i-1}\nonumber \\ &\overset{\eqref{H-properties}}{\le} \left(\sum_{k=1}^{N}p_k \tau_k \right)\delta\mu_{\max} I_M \hspace{-1.5mm} \overset{(a)}{\le} \delta\mu_{\max} I_M, } where inequality $(a)$ holds because $\tau_k<1$ and $\sum_{k=1}^{N}p_k=1$. It is obvious that $\delta >\sigma_{11}$. As a result, we have \eq{\label{xzcn8237} \hspace{-2mm}(1\hspace{-0.8mm}-\hspace{-0.8mm}\delta \mu_{\max})I_M \hspace{-0.8mm}\le\hspace{-0.8mm} I_{M} \hspace{-0.8mm}-\hspace{-0.8mm} {\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{I}}} \le (1\hspace{-0.8mm}-\hspace{-0.8mm}\sigma_{11}\mu_{\max})I_M } which implies that when the step-size satisfy \eq{\label{xchayu} \mu_{\max}<1/\delta, } it will hold that \eq{\label{uiox98} &\ \|I_{M} \hspace{-0.8mm}-\hspace{-0.8mm} {\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{I}}} \|^2 \le (1-\sigma_{11}\mu_{\max})^2. 
} On the other hand, we have \eq{\label{bguj87} \frac{1}{c^2}\|\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\|^2&\ \le \frac{1}{c^2}\|\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}\|^2 \|{\mathcal{H}}_{i-1}\|^2 \|{\mathcal{X}}_{R,u}\|^2 \nonumber } \eq{ &\ \overset{(a)}{\le} \frac{1}{c^2}\left(\sum_{k=1}^{N}(\tau_k p_k)^2\right)\delta^2 \|{\mathcal{X}}_{R,u}\|^2 \mu^2_{\max} \nonumber \\ &\ \overset{(b)}{\le} \frac{p_{\max}}{c^2} \delta^2 \|{\mathcal{X}}_{R,u}\|^2 \mu^2_{\max} } where inequality (b) holds because $\tau_k<1$, $p_k^2 < p_k p_{\max}$ (where $p_{\max}= \max_k\{p_k\}$) and $\sum_{k=1}^{N}p_k=1$. {Inequality (a) follows by noting that $\overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{M}} = \mu_{\max}[p_1\tau_1,\cdots,p_N \tau_N]\otimes I_M$. Introducing $s=[p_1\tau_1,p_2\tau_2,\cdots,p_N \tau_N]^{\mathsf{T}}\in \mathbb{R}^{N}$, we have \eq{ \|\overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-1mm} {\mathcal{M}}\|^2 \hspace{-1mm}=& \mu^2_{\max}\|s^{\mathsf{T}}\hspace{-1mm} \otimes\hspace{-0.7mm} I_M\|^2\hspace{-1mm}=\hspace{-1mm}\mu^2_{\max}\lambda_{\max}\Big(\hspace{-0.5mm}(s\otimes I_M)(s^{\mathsf{T}} \otimes I_M)\hspace{-0.5mm}\Big)\nonumber \\ =& \mu^2_{\max}\lambda_{\max}\Big(s s^{\mathsf{T}} \otimes I_M\Big) = \mu^2_{\max}\lambda_{\max}(s s^{\mathsf{T}}) \nonumber \\ =& \mu_{\max}^2 \|s\|^2 =\mu_{\max}^2 {\sum_{k=1}^{N}}(p_k\tau_k)^2. } Recall \eqref{usd9} and by introducing $E=\left[ \begin{array}{cc} I_{MN} & 0_{MN} \\ \end{array} \right]$, we have ${\mathcal{X}}_{R,u}=E {\mathcal{X}}_R$. Therefore, it holds that \eq{\label{xcn398} \|{\mathcal{X}}_{R,u}\|^2 \le \|E\|^2 \|{\mathcal{X}}_R\|^2=\|{\mathcal{X}}_R\|^2. } Substituting \eqref{xcn398} into \eqref{bguj87}, we have \eq{\hspace{-2mm} \frac{1}{c^2}\|\overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-2mm}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\hspace{-0.5mm}\|^2 \hspace{-1mm}\le\hspace{-1mm} \frac{p_{\max}\delta^2}{c^2}\hspace{-0.5mm} \|{\mathcal{X}}_{R}\|^2 \mu^2_{\max} \hspace{-1mm}\define\hspace{-1mm} \sigma_{12}^2 \mu^2_{\max} } where $\sigma_{12} \define \sqrt{p_{\max}}\delta \|{\mathcal{X}}_R\|/c$.} Notice that $\sigma_{12}$ is independent of $\mu_{\max}$. Substituting \eqref{uiox98} and \eqref{bguj87} into \eqref{x_bar_square}, we get \eq{\label{x_bar_square-2} \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}\le\hspace{-1mm} &\ \frac{1}{1-t}(1-\sigma_{11}\mu_{\max})^2\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \frac{1}{t} \sigma^2_{12}\mu_{\max}^2 \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \nonumber \\ \hspace{-1mm}=\hspace{-0.2mm} & (1\hspace{-1mm}-\hspace{-1mm}\sigma_{11}\mu_{\max})\|\hspace{-0.3mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.3mm}\|^2 \hspace{-1mm}+\hspace{-1mm} ({\sigma^2_{12}}/{\sigma_{11}})\mu_{\max}\|\hspace{-0.3mm}\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.3mm}\|^2 } where we are selecting $t = \sigma_{11}\mu_{\max}$. Next we check the second line of recursion \eqref{final-recursion}: \eq{ \check{{\scriptstyle{\mathcal{X}}}}_i =&\; -c{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}_{i-1}+ ({\mathcal{D}}_1 - {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R)\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\nonumber \\ =&\; {\mathcal{D}}_1\check{{\scriptstyle{\mathcal{X}}}}_{i-1} - {\mathcal{X}}_L{\mathcal{T}}_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}_{i-1}). 
\label{yujs99} } Squaring both sides and using Jensen's inequality again, \eq{ \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 =& \footnotesize\|{\mathcal{D}}_1 \check{{\scriptstyle{\mathcal{X}}}}_{i-1} - {\mathcal{X}}_L{\mathcal{T}}_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}_{i-1})\|^2\nonumber\\ \le & \frac{\|{\mathcal{D}}_1\|^2}{t} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-0.8mm}+\hspace{-0.8mm} \frac{1}{1-t}\| {\mathcal{X}}_L{\mathcal{T}}_{i-1}\hspace{-0.5mm}(\hspace{-0.5mm}c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} \hspace{-1mm}+\hspace{-1mm} {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.5mm})\hspace{-0.5mm}\|^2 \nonumber} \eq{ \le & \frac{\|{\mathcal{D}}_1\|^2}{t} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^{\hspace{-0.3mm}2} \hspace{-1mm}+\hspace{-1mm} \frac{2c^2}{1-t} \| {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1\|^2 \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\nonumber\\ &\hspace{0.63cm}+ \frac{2}{1-t} \|{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R\|^2 \| \check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2. } where $t\in(0,1)$. From Lemma \ref{lm-B-decomposition} we have that $\lambda \define \|D_1\| = \sqrt{\lambda_2(\overline{A})}<1$. By setting $t=\lambda$, we reach \eq{\label{yuwe9} \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \leq&\ \lambda \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + 2c^2 \| {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1\|^2 \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2/(1-\lambda)\nonumber\\ &\;\;\;\;\;\hspace{1.03cm}+ {2} \|{\mathcal{X}}_L{\mathcal{T}}{\mathcal{X}}_R\|^2 \| \check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2/({1-\lambda}). } We introduce the matrix $\Gamma={\rm diag}\{\tau_1 I_M,\cdots, \tau_N I_M\}$, and note that we can write ${\mathcal{M}}=\mu_{\max}\Gamma$. Substituting it into \eqref{T-defi}, \eq{\label{xha9867} {\mathcal{T}}_{i-1} &= \mu_{\max} \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} \Gamma {\mathcal{H}}_{i-1} & 0 \\ {\mathcal{V}} \overline{{\mathcal{A}}}^{\mathsf{T}} \Gamma {\mathcal{H}}_{i-1} & 0 \\ \end{array} \right] \nonumber \\ &= \mu_{\max} \underbrace{\left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & 0 \\ {\mathcal{V}} \overline{{\mathcal{A}}}^{\mathsf{T}} & 0 \\ \end{array} \right]}_{\define {\mathcal{T}}_d} \left[ \begin{array}{cc} \Gamma {\mathcal{H}}_{i-1} & 0 \\ 0 & \Gamma {\mathcal{H}}_{i-1} \\ \end{array} \right], } which implies that \eq{\label{nui89} \|{\mathcal{T}}_{i-1}\|^2 \le \mu_{\max}^2 \|{\mathcal{T}}_d\|^2 \left( \max_{1\le k\le N} \|H_{k,i-1}\|^2 \right) \le \|{\mathcal{T}}_d\|^2 \delta^2 \mu_{\max}^2. } We also emphasize that $\|{\mathcal{T}}_d\|^2$ is independent of $\mu_{\max}$. 
With inequality \eqref{nui89}, we further have \eq{ \hspace{-2mm}c^2\| {\mathcal{X}}_L{\mathcal{T}}_{i-1} {\mathcal{R}}_1\|^2 \hspace{-0.8mm} &\le \hspace{-0.8mm} c^2\mu_{\max}^2 \|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 \|{\mathcal{R}}_1\|^2\delta^2 \hspace{-0.8mm}\define\hspace{-0.8mm} \sigma^2_{21} \mu_{\max}^2 \label{xcbnb978-1}\\ \hspace{-5mm}\| {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R\|^2 \hspace{-0.8mm} &\le\hspace{-0.8mm} \mu_{\max}^2 \hspace{-0.5mm}\|{\mathcal{X}}_L\|^2\hspace{-0.5mm} \|{\mathcal{T}}_d\|^2\hspace{-0.5mm} \|\hspace{-0.3mm}{\mathcal{X}}_R\hspace{-0.5mm}\|^2\hspace{-0.5mm} \delta^2 \hspace{-1mm}\define\hspace{-1mm} \sigma^2_{22} \mu_{\max}^2\label{xcbnb978-2} } since $\|{\mathcal{R}}_1\|=1$, and where $\sigma_{21}$ and $\sigma_{22}$ are defined as \eq{ \sigma_{21}=c\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \delta,\ \ \sigma_{22}=\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|\delta. } With \eqref{xcbnb978-1} and \eqref{xcbnb978-2}, inequality \eqref{yuwe9} becomes \eq{\label{yuwe9-2} \| \check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}\leq\hspace{-1mm} \left(\hspace{-1mm}\lambda \hspace{-0.8mm}+\hspace{-0.8mm} \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\hspace{-1mm}\right)\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm}+\hspace{-1mm} \frac{2\sigma_{21}^2 \mu^2_{\max}}{1-\lambda} \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2. } Combining \eqref{x_bar_square-2} and \eqref{yuwe9-2}, we arrive at the inequality recursion \eq{\label{xngyi} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-2mm} \\ \end{array} \right] \preceq \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}1-\sigma_{11}\mu_{\max}\hspace{-1mm} & \hspace{-1mm} \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max}\hspace{-2mm}\\ \hspace{-2mm}\frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda} \hspace{-1mm}&\hspace{-1mm} \lambda + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\hspace{-2mm} \\ \end{array} \right]}_{\define G} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-2mm} \\ \end{array} \right]. } Now we check the spectral radius of the matrix $G$. Recall the fact that the spectral radius of a matrix is upper bounded by any of its norms. Therefore, \eq{ \hspace{-2mm}\rho(G) &\le \|G\|_1 = \max\Big\{1-\sigma_{11}\mu_{\max} + \frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda},\nonumber \\ &\hspace{2.8cm} \lambda + \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\Big\}, } where we already know that $\lambda<1$. To guarantee $\rho(G)<1$, it is enough to select the step-size parameter small enough to satisfy \eq{ 1-\sigma_{11}\mu_{\max} + \frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda} &< 1,\label{upper-bound-1} \\ \lambda + \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda} & <1. 
\label{upper-bound-2} } To get a simpler upper bound, we transform \eqref{upper-bound-2} as follows: \eq{ &\ \lambda + \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda} \nonumber \\ =&\ \lambda \hspace{-1mm}+\hspace{-1mm} \frac{2\sigma_{12}^2}{\sigma_{11}}\mu_{\max} \hspace{-1mm}-\hspace{-1mm} \left( \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} \hspace{-1mm}-\hspace{-1mm} \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda} \right) \hspace{-1mm}\le \lambda \hspace{-1mm}+\hspace{-1mm} \frac{2\sigma_{12}^2}{\sigma_{11}}\mu_{\max}, \label{xng8} } where the last inequality holds when \eq{\label{zxchsdj8} \mu_{\max} \le \frac{\sigma_{12}^2(1-\lambda)}{2\sigma_{11}\sigma_{22}^2}. } If, in addition, we let \eqref{xng8} be less than $1$, which is equivalent to selecting \eq{\label{zxgh8} \mu_{\max}\le \frac{\sigma_{11}(1-\lambda)}{2\sigma_{12}^2}, } then we guarantee inequality \eqref{upper-bound-2}. Combining \eqref{upper-bound-1}, \eqref{zxchsdj8} and \eqref{zxgh8}, we have \eq{\label{upper-bd} \hspace{-3mm}\mu_{\max}\le \min\left\{ \frac{\sigma_{11}(1-\lambda)}{2\sigma_{21}^2},\frac{\sigma_{12}^2(1-\lambda)}{2\sigma_{11}\sigma_{22}^2},\frac{\sigma_{11}(1-\lambda)}{2\sigma_{12}^2} \right\}. } This, together with \eqref{xchayu}, i.e., \eq{\label{upper-bd-2} \mu_{\max}< {1}/{\delta}, } guarantees that $\|G\|_1$ is less than $1$. In fact, the upper bound in \eqref{upper-bd} can be further simplified. From the definitions of $\sigma_{11}$, $\sigma_{12}$, $\sigma_{21}$ and $\sigma_{22}$, we have \eq{ \frac{\sigma_{11}(1-\lambda)}{2\sigma_{21}^2} = &\ \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2 c^2 \|{\mathcal{X}}_L\|^2\|{\mathcal{T}}_d\|^2\delta^2},\label{term-2}\\ \frac{\sigma_{12}^2(1-\lambda)}{2\sigma_{11}\sigma_{22}^2} = &\ \frac{p_{\max}(1-\lambda)}{2p_{k_o}\tau_{k_o}\nu \|{\mathcal{X}}_L\|^2\|{\mathcal{T}}_d\|^2 c^2}, \label{term-4}\\ \frac{\sigma_{11}(1-\lambda)}{2\sigma_{12}^2} = &\ \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)c^2}{2p_{\max}\|{\mathcal{X}}_R\|^2\delta^2}.\label{term-3} } First, notice that \eq{ \frac{\sigma_{11}(1-\lambda)}{2\sigma_{21}^2} \Big/\frac{\sigma_{12}^2(1-\lambda)}{2\sigma_{11}\sigma_{22}^2}= \frac{(p_{k_o}\tau_{k_o}\nu)^2}{p_{\max}\delta^2} < 1 } because $p_{k_o}<p_{\max}$, $\tau_{k_o}<1$ and $\nu<\delta$. Therefore, the inequality in \eqref{upper-bd} is equivalent to \eq{ \mu_{\max} &\le \min\left\{ \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2 c^2 \|{\mathcal{X}}_L\|^2\|{\mathcal{T}}_d\|^2\delta^2}, \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)c^2}{2p_{\max}\|{\mathcal{X}}_R\|^2\delta^2} \right\} \nonumber \\ &= \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2\delta^2} \min\left\{ \frac{1}{\|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 c^2},\frac{c^2}{p_{\max}\|{\mathcal{X}}_R\|^2}\right\}. \label{cxn2778} } It is observed that the constant value $c$ affects the upper bound in \eqref{cxn2778}. If $c$ is sufficiently large, then the first term in \eqref{cxn2778} dominates and $\mu_{\max}$ has a narrow feasible set. On the other hand, if $c$ is sufficiently small, then the second term dominates and $\mu_{\max}$ will also have a narrow feasible set. To make the feasible set of $\mu_{\max}$ as large as possible, we should optimize $c$ to maximize \eq{ \min\Big\{ \frac{1}{\|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 c^2},\ \ \frac{c^2}{p_{\max}\|{\mathcal{X}}_R\|^2}\Big\}. } Notice that the first term $1/(\|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 c^2)$ is monotone decreasing with $c^2$, while the second term $c^2/(p_{\max}\|{\mathcal{X}}_R\|^2)$ is monotone increasing with $c^2$.
Therefore, when \eq{ \frac{1}{\|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 c^2} = \frac{c^2}{p_{\max}\|{\mathcal{X}}_R\|^2} \Longleftrightarrow c^2 = \frac{\sqrt{p_{\max}}\|{\mathcal{X}}_R\|}{\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\|},\label{c2} } we get the maximum upper bound for $\mu_{\max}$, i.e. \eq{ \mu_{\max}\le \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2\sqrt{p_{\max}}\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|\delta^2}. } Next we compare the above upper bound with $1/\delta$. Recall that for any matrix $A$, its spectral radius is smaller than its $2-$induced norm so that \eq{\label{238ss} \|{\mathcal{T}}_d\| \ge \rho({\mathcal{T}}_d)\overset{\eqref{xha9867}}{=} \rho(\overline{{\mathcal{A}}})=1. } Moreover, recall from Lemma \ref{lm-B-decomposition} that $X_L X_R=I_{2(N-1)}$, so that ${\mathcal{X}}_L {\mathcal{X}}_R=X_L X_R \otimes I_{M} = I_{2M(N-1)}$, which implies that \eq{\label{2378n} \|{\mathcal{X}}_L\|\|{\mathcal{X}}_R\| \ge \|{\mathcal{X}}_L {\mathcal{X}}_R\|= 1. } Using relations \eqref{238ss} and \eqref{2378n}, and recalling that $p_{k_o}\le p_{\max} < \sqrt{p_{\max}}$, $\tau_{k_o}<1$, $1-\lambda<1$ and $\nu<\delta$, we have \eq{ \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2\sqrt{p_{\max}}\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|\delta^2}\le \frac{\nu }{\delta^2} < \frac{\delta}{\delta^2}=\frac{1}{\delta}. } Therefore, the upper bounds in \eqref{upper-bd}, \eqref{upper-bd-2} are determined by \eq{\label{final-upper-bd} \mu_{\max}\le \frac{p_{k_o}\tau_{k_o}\nu (1-\lambda)}{2\sqrt{p_{\max}}\|{\mathcal{X}}_L\| \|{\mathcal{T}}_d\| \|{\mathcal{X}}_R\|\delta^2}. } In other words, when $\mu_{\max}$ satisfies \eqref{final-upper-bd}, $\|G\|_1$ will be guaranteed to be less than $1$, i.e., \eq{\label{xsdn8} \hspace{-2mm} \|G\|_1&= \max\Big\{1-\sigma_{11}\mu_{\max} + \frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda},\nonumber \\ &\hspace{2.8cm} \lambda + \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\Big\} \nonumber \\ &= \max\Big\{ 1-p_{k_o}\tau_{k_o}\nu \mu_{\max} + \frac{2c^2\|{\mathcal{X}}_L\|^2\|{\mathcal{T}}_d\|^2\delta^2 \mu^2_{\max}}{1-\lambda}\nonumber \\ &\hspace{3mm}\lambda\hspace{-1mm}+\hspace{-1mm}\frac{p_{\max}\|{\mathcal{X}}_R\|^2\delta^2}{c^2p_{k_o}\tau_{k_o}\nu}\mu_{\max} \hspace{-1mm}+\hspace{-1mm} \frac{2\|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_d\|^2 \|{\mathcal{X}}_R\|^2\delta^2\mu_{\max}^2}{1-\lambda} \Big\} \nonumber \\ &\overset{\eqref{c2}}{=} \max\Big\{ 1-p_{k_o}\tau_{k_o}\nu \mu_{\max} + \frac{2\sqrt{p_{\max}} \alpha_d \delta^2 \mu^2_{\max}}{1-\lambda},\nonumber \\ &\hspace{1cm} \lambda \hspace{-1mm}+\hspace{-1mm} \frac{\sqrt{p_{\max}} \alpha_d \delta^2\mu_{\max}}{p_{k_o}\tau_{k_o}\nu} \hspace{-1mm}+\hspace{-1mm} \frac{2\alpha_d^2\delta^2\mu_{\max}^2}{1-\lambda} \Big\} < 1, } where $\alpha_d \define \|{\mathcal{X}}_L\|\|{\mathcal{T}}_d\|\|{\mathcal{X}}_R\|$. Let \eq{ z_i\define \left[ \begin{array}{c} \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2\\ \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \\ \end{array} \right] \succeq 0, } and note from \eqref{xngyi} that \eq{ z_i \preceq G z_{i-1}. 
} Computing the $1$-norm of both sides gives \eq{ \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 &\hspace{-1mm}=\hspace{-1mm} \|z_i\|_1 \le \|G\|_1 \|z_{i-1}\|_1 = \rho (\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2),\nonumber \\ &\hspace{-1mm}\le\hspace{-1mm} \rho^i(\|\bar{{\scriptstyle{\mathcal{X}}}}_{0}\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{0}\|^2), \label{xw3n} } where we define $\rho \define \|G\|_1$. Inequality \eqref{xw3n} is equivalent to \eq{\label{xngyi-2} \hspace{-3mm} \left\| \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i \hspace{-2mm} \\ \end{array} \right] \right\|^2 \le \rho^{i} \left\| \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_{0}\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_{0} \hspace{-2mm} \\ \end{array} \right] \right\|^2, } By re-incorporating $\widehat{{\scriptstyle{\mathcal{X}}}}_i=0$, relation \eqref{xngyi-2} also implies that \eq{\label{xngyi-3} \hspace{-3mm} \left\| \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i \hspace{-2mm} \\ \end{array} \right] \right\|^2 \le \rho^{i} \left\| \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_{0}\hspace{-2mm} \\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}_0\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_{0} \hspace{-2mm} \\ \end{array} \right] \right\|^2 \define C_0 \rho^i. } From \eqref{x-bar and x-check} we conclude that \eq{\label{237sdnkk} \left\| \left[ \begin{array}{c} \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-2mm}\\ \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-2mm} \\ \end{array} \right] \right\|^2 \le \left\|{\mathcal{X}} \right\|^2 \left\| \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm}\\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \end{array} \right] \right\|^2 \le C \rho^i, } where the constant $C = \|{\mathcal{X}}\|^2C_0$. {\color{black} \section{Proof of Theorem \ref{them-algorithm-prime}} \label{app-algorithm-prime} We define \eq{ {\mathcal{M}}_i^\prime \define \mu_o {\mathrm{diag}}\{ q_1 I_M/z_{1,i}(1), \cdots, q_N I_M/z_{N, i}(N) \}. 
} Substituting recursions (98) and (99) from Part I \cite{yuan2017exact1} into expression (100), we obtain (compare with (93) from Part I \cite{yuan2017exact1}): \eq{ {\scriptstyle{\mathcal{W}}}_i \hspace{-0.8mm}=\hspace{-0.8mm} \overline{{\mathcal{A}}}^{\mathsf{T}} \hspace{-1mm} \left[ 2{\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1} \hspace{-0.5mm}-\hspace{-0.5mm} {\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}2} \hspace{-0.5mm}-\hspace{-0.5mm}\left( \hspace{-0.5mm}{\mathcal{M}}_i^\prime {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}) \hspace{-0.5mm}-\hspace{-0.5mm} {\mathcal{M}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^\prime {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}2}) \right) \right], } which can be rewritten into a primal-dual form (compare with (89) from Part I \cite{yuan2017exact1}): \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_i &= \overline{{\mathcal{A}}}^{\mathsf{T}} \Big({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{M}}_i^\prime {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big)\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i-1}, \label{zn-1'}\\ {\scriptstyle{\mathcal{Y}}}_i &= {\scriptstyle{\mathcal{Y}}}_{i-1} + {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. \end{equation} For the initialization, we set ${\scriptstyle{\mathcal{Y}}}_{-1}=0$ and ${\scriptstyle{\mathcal{W}}}_{-1}$ to be any value, and hence for $i=0$ we have \begin{equation} \left\{ \begin{aligned} {\scriptstyle{\mathcal{W}}}_0 &= \overline{{\mathcal{A}}}^{\mathsf{T}} \Big({\scriptstyle{\mathcal{W}}}_{\hspace{-0.3mm}-\hspace{-0.3mm}1}\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{M}}_0^\prime {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big), \label{zn-0'}\\ {\scriptstyle{\mathcal{Y}}}_0 &= {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0. \end{aligned} \right. \end{equation} Recursions \eqref{zn-1'} and \eqref{zn-0'} are very close to the standard exact diffusion recursions \eqref{zn-1} and \eqref{zn-0}, except that the step-size matrix ${\mathcal{M}}_i^\prime$ is now changing with iteration $i$. Following the arguments in \eqref{error}--\eqref{ed-1-subtracrt}, we have \begin{equation} \left\{ \begin{aligned} \overline{{\mathcal{A}}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{W}}}_i &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}^{\mathsf{T}} \Big(\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-0.8mm}+\hspace{-0.8mm}{\mathcal{M}}_i^\prime {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\Big) \hspace{-0.8mm}+\hspace{-0.8mm} {\mathcal{P}}^{-1} {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i}, \label{ed-1-subtracrt-prime} \\ \widetilde{\scriptstyle{\mathcal{Y}}}_i &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1} - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right.
\end{equation} Subtracting optimality conditions \eqref{KKT-1-1}--\eqref{KKT-2-2} from \eqref{ed-1-subtracrt-prime} leads to \begin{equation} \hspace{-0.8mm}\left\{ \begin{aligned} \hspace{-1.5mm}\overline{{\mathcal{A}}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{W}}}_i &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}^{\mathsf{T}} \Big(\hspace{-0.8mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-0.8mm}+\hspace{-0.8mm}{\mathcal{M}} \big[{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1})\hspace{-0.8mm}-\hspace{-0.8mm}{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)\big]\hspace{-0.8mm}\Big) \hspace{-0.8mm}-\hspace{-0.8mm} {\mathcal{P}}^{-1} {\mathcal{V}} \widetilde{\scriptstyle{\mathcal{Y}}}_{i} \\ & \hspace{5mm} + \overline{{\mathcal{A}}}^{\mathsf{T}}({\mathcal{M}}_i^\prime - {\mathcal{M}}) {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}), \label{ed-1-subtracrt-1-prime} \\ \hspace{-1.5mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1} +{\mathcal{V}} \widetilde{\scriptstyle{\mathcal{W}}}_i. \end{aligned} \right. \end{equation} Comparing recursions \eqref{ed-1-subtracrt-1-prime} and \eqref{ed-1-subtracrt-1}, we observe that recursion \eqref{ed-1-subtracrt-1-prime} has an extra ``mismatch'' term, $ \overline{{\mathcal{A}}}^{\mathsf{T}}({\mathcal{M}}_i^\prime - {\mathcal{M}}) {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}). $ This mismatch arises because we do not know the Perron vector $p$ in advance. We need to run the power iteration (see recursion (97) from Part I\cite{yuan2017exact1}) to learn it. Intuitively, since ${\mathcal{M}}_i^\prime \rightarrow {\mathcal{M}}$ as $i\to \infty$, we can expect the mismatch term to vanish gradually. Let \eq{\label{ei} e_i \define ({\mathcal{M}}_i^\prime - {\mathcal{M}}) {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}). } By following the arguments in \eqref{237sdb}--\eqref{xcnh09}, recursion \eqref{ed-1-subtracrt-1-prime} is equivalent to \eq{\label{xcnh09-prime} &\ \left[ \begin{array}{cc} \overline{{\mathcal{A}}}^{\mathsf{T}} & {\mathcal{P}}^{-1}{\mathcal{V}} \\ -{\mathcal{V}} & I_{MN} \\ \end{array} \right] \left[ \begin{array}{c} \widetilde{\scriptstyle{\mathcal{W}}}_i\\ \widetilde{\scriptstyle{\mathcal{Y}}}_i \\ \end{array} \right] \nonumber \\ =&\ \left[ \begin{array}{cc} \hspace{-1mm}\overline{{\mathcal{A}}}^{\mathsf{T}} (I_{MN}-{\mathcal{M}}{\mathcal{H}}_{i-1}) & 0 \hspace{-1mm}\\ \hspace{-1mm}0 & I_{MN} \hspace{-1mm} \\ \end{array} \right] \hspace{-1mm} \left[ \begin{array}{c} \hspace{-1.5mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1} \hspace{-2mm}\\ \hspace{-1.5mm}\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\hspace{-2mm} \\ \end{array} \right] + \left[ \begin{array}{c} \hspace{-1.5mm}\overline{{\mathcal{A}}}^{\mathsf{T}}\hspace{-1.5mm}\\ \hspace{-1.5mm}0\hspace{-1.5mm} \\ \end{array} \right] e_i .
} By following \eqref{nxcm987}--\eqref{T-defi}, recursion \eqref{xcnh09-prime} can be rewritten as \eq{ \boxed{ \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] = ({\mathcal{B}} - {\mathcal{T}}_{i-1})\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\hspace{-1mm} \\ \end{array} \right] + {\mathcal{B}}_{\ell} e_i } \label{error-recursion-prime} } where ${\mathcal{B}}$ and ${\mathcal{T}}_{i}$ are defined in \eqref{T-defi}, and \eq{ {\mathcal{B}}_{\ell} = \left[ \begin{array}{c} \overline{{\mathcal{A}}}^{\mathsf{T}}\\ {\mathcal{V}}\overline{{\mathcal{A}}}^{\mathsf{T}} \\ \end{array} \right] } Relation \eqref{error-recursion-prime} is the error dynamics for the exact diffusion algorithm $1^\prime$. Comparing \eqref{error-recursion-prime} with \eqref{error-recursion}, we find that algorithm $1^\prime$ is essentially the standard exact diffusion with error perturbation. Using Lemma \eqref{lm-B-decomposition} and by following arguments from \eqref{tsdhb} to \eqref{final-recursion}, we can transform the error dynamics \eqref{error-recursion-prime} into \eq{\label{final-recursion-prime} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_i\hspace{-2mm} \\ \end{array} \right] \hspace{-0.5mm}= &\hspace{-0.5mm} \left[ \begin{array}{cc} \hspace{-2mm}I_M \hspace{-1mm}-\hspace{-1mm}{\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{I}}} & -\frac{1}{c}\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}{\mathcal{X}}_{R,u}\hspace{-2mm}\\ \hspace{-2mm}- c{\mathcal{X}}_L {\mathcal{T}}_{i-1} {\mathcal{R}}_1 & {\mathcal{D}}_1 - {\mathcal{X}}_L {\mathcal{T}}_{i-1} {\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right] \hspace{-1.5mm} \left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm}\\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}_{i\hspace{-0.4mm}-\hspace{-0.4mm}1}\hspace{-2mm} \\ \end{array} \right] \nonumber \\ & + \left[ \begin{array}{c} \hspace{-2mm}\overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-2mm}\\ \hspace{-2mm}c{\mathcal{X}}_L {\mathcal{B}}_{\ell}\hspace{-2mm} \\ \end{array} \right] \hspace{-1mm}e_i. } Next we analyze the convergence of the above recursion. From the first line we have \eq{ \hspace{-1mm} \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}=\hspace{-1mm}&\ \left\| \left(I_M \hspace{-1mm} - \hspace{-1mm} {\overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right) \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} \right. \nonumber \\ & \left. 
\hspace{1cm} - \frac{1}{c} \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u} \check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.5mm} + \overline{{\mathcal{P}}}^{\mathsf{T}} e_i \right\|^2 } \eq{\label{x_bar_square-prime} \le &\ \frac{1}{1-t}\left\|I_M \hspace{-1mm} - \hspace{-1mm} {\overline{{\mathcal{P}}}^{\mathsf{T}} \hspace{-0.8mm}{\mathcal{M}}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right\|^2\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \nonumber \\ &\quad + \frac{2}{t}\frac{1}{c^2}\|\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{M}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\|^2 \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \frac{2}{t}\|\overline{{\mathcal{P}}}^{\mathsf{T}}\|^2\|e_i\|^2 \nonumber \\ \le & (1\hspace{-1mm}-\hspace{-1mm}\sigma_{11}\mu_{\max})\|\hspace{-0.3mm}\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.3mm}\|^2 \hspace{-1mm}+\hspace{-1mm} \frac{\sigma_{12}^2\mu_{\max}}{\sigma_{11}}\|\hspace{-0.3mm}\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\hspace{-0.3mm}\|^2 + \frac{2\|e_i\|^2}{\sigma_{11}\mu_{\max}}, } where the last inequality follows the arguments in \eqref{x_bar-recursion}--\eqref{x_bar_square-2}. From the second line of recursion \eqref{final-recursion-prime}, we have \eq{\hspace{-2mm} \label{yuwe9-2-prime} &\|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \nonumber \\ =& \footnotesize\|{\mathcal{D}}_1 \check{{\scriptstyle{\mathcal{X}}}}_{i-1} \hspace{-1mm}-\hspace{-1mm} {\mathcal{X}}_L{\mathcal{T}}_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}_{i-1}) + c {\mathcal{X}}_L {\mathcal{B}}_{\ell}e_i\|^2\nonumber\\ \le & \frac{\|{\mathcal{D}}_1\|^2}{t} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^{\hspace{-0.3mm}2} \hspace{-1mm}+\hspace{-1mm} \frac{2c^2}{1-t} \| {\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{R}}_1\|^2 \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\nonumber\\ &+ \frac{2}{1-t} \|{\mathcal{X}}_L{\mathcal{T}}_{i-1}{\mathcal{X}}_R\|^2 \| \check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \frac{2c^2}{1-t}\|{\mathcal{X}}_L {\mathcal{B}}_{\ell}\|^2 \|e_i\|^2 \nonumber \\ \le & \left(\hspace{-1mm}\lambda \hspace{-0.8mm}+\hspace{-0.8mm} \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\hspace{-1mm}\right)\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm}+\hspace{-1mm} \frac{2\sigma_{21}^2 \mu^2_{\max}}{1-\lambda} \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm}+\hspace{-1mm} \frac{2c^2 d \|e_i\|^2}{1-\lambda}, } where $d\define \|{\mathcal{X}}_L {\mathcal{B}}_{\ell}\|^2$ is independent of iteration $i$. Moreover, the last inequality holds because of arguments in \eqref{yujs99}--\eqref{yuwe9-2}. 
Combining \eqref{x_bar_square-prime} and \eqref{yuwe9-2-prime}, we arrive at the inequality recursion (compare with \eqref{xngyi}): \eq{\label{xngyi-prime} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-2mm} \\ \end{array} \right] &\preceq \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}1-\sigma_{11}\mu_{\max}\hspace{-1mm} & \hspace{-1mm} \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max}\hspace{-2mm}\\ \hspace{-2mm}\frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda} \hspace{-1mm}&\hspace{-1mm} \lambda + \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda}\hspace{-2mm} \\ \end{array} \right]}_{\define G} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-2mm} \\ \end{array} \right] \nonumber \\ &+ \left[ \begin{array}{c} \frac{2}{\sigma_{11} \mu_{\max}} \\ \frac{2c^2d}{1-\lambda} \\ \end{array} \right] \|e_i\|^2. } Next let us bound the mismatch term $\|e_i\|^2$. From \eqref{ei} we have \eq{ e_i &= ({\mathcal{M}}_i^\prime - {\mathcal{M}}) \left( {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i-1}) - {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) \right) \nonumber \\ & \hspace{1cm} + ({\mathcal{M}}_i^\prime - {\mathcal{M}}){\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star) \nonumber \\ &\overset{\eqref{xcnh}}{=} -({\mathcal{M}}_i^\prime - {\mathcal{M}}) {\mathcal{H}}_{i-1} \widetilde{\scriptstyle{\mathcal{W}}}_{i-1} + ({\mathcal{M}}_i^\prime - {\mathcal{M}}){\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star), } which implies that \eq{\label{ei-upper-bound} \|e_i\|^2 & \le 2\delta^2 \|{\mathcal{M}}_i^\prime - {\mathcal{M}}\|^2 \|\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\|^2 + 2\|{\mathcal{M}}_i^\prime - {\mathcal{M}}\|^2 g, } where $g\define \|{\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}^\star)\|^2$ is a constant independent of the iteration. Recall that ${\mathcal{M}} = M \otimes I_M$ and ${\mathcal{M}}^\prime_i = M^\prime_i \otimes I_M$ where \eq{ M &= {\mathrm{diag}}\{\mu_1, \mu_2, \cdots, \mu_N\}, \nonumber \\ M^\prime_i &= {\mathrm{diag}}\left\{\frac{q_1\mu_o}{z_{1,i}(1)}, \cdots, \frac{q_N\mu_o}{z_{N,i}(N)}\right\}. } Using the relation $\mu_k = q_k \mu_o/p_k$ (see equation (13) from Part I\cite{yuan2017exact1}), we have \eq{\label{zxncnc} & M - M^\prime_i \nonumber \\ = &{\mathrm{diag}}\left\{\frac{q_1\mu_o}{p_1}\left(1 - \frac{p_1}{z_{1,i}(1)}\right),\cdots, \frac{q_N\mu_o}{p_N}\left(1 - \frac{p_N}{z_{N,i}(N)}\right)\right\}\nonumber \\ = &{\mathrm{diag}}\left\{\mu_1\left(1 - \frac{p_1}{z_{1,i}(1)}\right),\cdots, \mu_N \left(1 - \frac{p_N}{z_{N,i}(N)}\right)\right\} \nonumber \\ = &\mu_{\max}{\mathrm{diag}}\left\{\tau_1\left(1 - \frac{p_1}{z_{1,i}(1)}\right),\cdots, \tau_N \left(1 - \frac{p_N}{z_{N,i}(N)}\right)\right\}, } where $\tau_k = \mu_k/\mu_{\max}\le1$. Now we examine the convergence of $1-p_k/z_{k,i}(k)$. From the discussion in Policy 5 from Part I\cite{yuan2017exact1}, it is known that ${\scriptstyle{\mathcal{Z}}}_i$ generated from the power iteration (see equation (37) from Part I) will converge to $[(\mathds{1}_N \otimes I_N)(p^{\mathsf{T}} \otimes I_N)]{\scriptstyle{\mathcal{Z}}}_{-1}$.
Therefore, \eq{ &\hspace{-1cm} {\scriptstyle{\mathcal{Z}}}_i - [(\mathds{1}_N \otimes I_N)(p^{\mathsf{T}} \otimes I_N)]{\scriptstyle{\mathcal{Z}}}_{-1} \nonumber \\ = & \left[\left({\mathcal{A}}^{\mathsf{T}}\right)^{i+1} - (\mathds{1}_N \otimes I_N)(p^{\mathsf{T}} \otimes I_N)\right] {\scriptstyle{\mathcal{Z}}}_{-1} \nonumber \\ = & \left\{\left[ \left(A^{\mathsf{T}}\right)^{i+1} - \mathds{1}_N p^{\mathsf{T}} \right] \otimes I_N \right\} {\scriptstyle{\mathcal{Z}}}_{-1} \nonumber \\ = & \left\{ \left[ A^{\mathsf{T}} - \mathds{1}_N p^{\mathsf{T}} \right]^{i+1} \otimes I_N \right\} {\scriptstyle{\mathcal{Z}}}_{-1}. } Recall from the discussion in Policy 5 from Part I\cite{yuan2017exact1} that \eq{ [(\mathds{1}_N \otimes I_N)(p^{\mathsf{T}} \otimes I_N)]{\scriptstyle{\mathcal{Z}}}_{-1} = {\mathrm{col}}\{p,\cdots,p\} \in \mathbb{R}^{N^2}. } As a result, \eq{\label{sdj} |z_{k,i}(k) - p_k|^2 &\le \|{\scriptstyle{\mathcal{Z}}}_i - [(\mathds{1}_N \otimes I_N)(p^{\mathsf{T}} \otimes I_N)]{\scriptstyle{\mathcal{Z}}}_{-1} \|^2 \nonumber \\ & \le \|A^{\mathsf{T}} - \mathds{1}_N p^{\mathsf{T}}\|^{2(i+1)} \|{\scriptstyle{\mathcal{Z}}}_{-1}\|^2 \nonumber \\ & = h\cdot \rho_{A}^{2(i+1)}, \quad \forall k = 1, \cdots, N, } where $h\define \|{\scriptstyle{\mathcal{Z}}}_{-1}\|^2$ is a constant, and $\rho_A$ is the second largest eigenvalue magnitude of matrix $A$, i.e., $\rho_A = \max\{|\lambda_2(A)|, |\lambda_N(A)|\}$. Since $A$ is locally balanced, we know that $A$ is diagonalizable with real eigenvalues in $(-1, 1]$ and has a single eigenvalue at $1$ (see Table I from Part I \cite{yuan2017exact1}); we therefore conclude that $\rho_A < 1$. Also, recall from the discussion at the end of Policy 5 in Part I \cite{yuan2017exact1} that $z_{k,i}(k)>0$ is guaranteed when $\bar{a}_{kk}>0$. Let \eq{\label{23nsdb} \alpha_k \define \min_i\{z_{k,i}(k) \} > 0,\quad \forall\ k = 1, \cdots, N. } Combining \eqref{sdj} and \eqref{23nsdb}, it holds that for $k = 1, \cdots, N$, \eq{\label{2bdn} \left( 1 - \frac{p_k}{z_{k,i}(k)} \right)^2 \le \frac{h}{\alpha_k^2}\rho_A^{2(i+1)}= h_k \rho_A^{2(i+1)}, } where we define $h_k \define h/\alpha_k^2$. Substituting \eqref{2bdn} into \eqref{zxncnc}, it holds that \eq{\label{xnsd8} \|{\mathcal{M}}_i^\prime - {\mathcal{M}}\|^2 = \|M_i^\prime - M\|^2 \le \mu_{\max}^2 h^\prime \rho_{A}^{2(i+1)}, } where $h^\prime \define \max_k\{\tau^2_k h_k\}$ is a constant independent of the iteration. Substituting \eqref{xnsd8} into \eqref{ei-upper-bound}, we have \eq{\label{ei-upper-bound-2} \|e_i\|^2 & \le 2 \delta^2 \|{\mathcal{M}}_i^\prime - {\mathcal{M}}\|^2 \|\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\|^2 + 2\|{\mathcal{M}}_i^\prime - {\mathcal{M}}\|^2 g \nonumber \\ & \le 2 \delta^2 \mu_{\max}^2 h^\prime \rho_{A}^{2(i+1)} \|\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\|^2 + 2 \mu_{\max}^2 h^\prime g \rho_{A}^{2(i+1)} \nonumber \\ &\le 2 \delta^2 \mu_{\max}^2 h^\prime \rho_{A}^{2(i+1)} \hspace{-1mm} \left( \|\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}\|^2 + \|\widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}\|^2 \right) \nonumber \\ &\quad \quad + 2 \mu_{\max}^2 h^\prime g \rho_{A}^{2(i+1)}.
} Recall from \eqref{x-bar and x-check} that \eq{\label{x-bar and x-check-2} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i\hspace{-1mm} \\ \end{array} \right] = {\mathcal{X}}' \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}_i\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}_i \hspace{-1mm} \\ \end{array} \right]. } We therefore have \eq{\label{23bnd} \|\widetilde{\scriptstyle{\mathcal{W}}}_{i}\|^2 + \|\widetilde{\scriptstyle{\mathcal{Y}}}_{i}\|^2 \le &\ \|{\mathcal{X}}'\|^2 \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\widehat{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) \nonumber \\ = &\ \|{\mathcal{X}}'\|^2 \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right), } where the last equality holds because $\widehat{{\scriptstyle{\mathcal{X}}}}_i = 0$ for $i=0,1,\cdots$ (see \eqref{xzcn}). Substituting \eqref{23bnd} into \eqref{ei-upper-bound-2}, we have \eq{\label{2ndnd} \|e_i\|^2 &\le 2 \delta^2 \mu_{\max}^2 h^\prime \|{\mathcal{X}}'\|^2 \rho_{A}^{2(i+1)}\hspace{-1mm} \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \right) \nonumber \\ &\quad + 2 \mu_{\max}^2 h^\prime g \rho_{A}^{2(i+1)} } Substituting \eqref{2ndnd} into \eqref{xngyi-prime}, we have \eq{\label{xngyi-prime-2} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-2mm} \\ \end{array} \right] & \hspace{-1mm}\preceq \hspace{-1mm} {\left[ \begin{array}{cc} \hspace{-2mm}1 \hspace{-0.6mm}- \hspace{-0.6mm} \sigma_{11}\mu_{\max} \hspace{-0.6mm}+\hspace{-0.6mm} b^\prime \mu_{\max}\rho_{A}^{2(i+1)}\hspace{-1mm} & \hspace{-1mm} \frac{\sigma_{12}^2}{\sigma_{11}}\mu_{\max} \hspace{-0.6mm}+\hspace{-0.6mm} b^\prime \mu_{\max}\rho_{A}^{2(i+1)}\hspace{-2mm}\\ \hspace{-2mm}\frac{2\sigma_{21}^2\mu^2_{\max}}{1-\lambda}+c^\prime \mu^2_{\max}\rho_{A}^{2(i+1)} \hspace{-1mm}&\hspace{-1mm} \lambda \hspace{-0.6mm}+\hspace{-0.6mm} \frac{2\sigma_{22}^2 \mu^2_{\max}}{1-\lambda} \hspace{-0.6mm}+\hspace{-0.6mm} c^\prime \mu^2_{\max}\rho_{A}^{2(i+1)}\hspace{-2mm} \\ \end{array} \right]} \nonumber \\ & \quad \cdot \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-2mm} \\ \end{array} \right] + \left[ \begin{array}{c} d^\prime \mu_{\max}\rho_{A}^{2(i+1)} \\ e^\prime \mu^2_{\max}\rho_{A}^{2(i+1)} \\ \end{array} \right], } where $b',c',d',e'$ are constants defined as \eq{ b' &\hspace{-1mm}\define\hspace{-1mm} 4 \delta^2 h^\prime \|{\mathcal{X}}'\|^2/\sigma_{11}, \quad c' \hspace{-1mm}\define\hspace{-1mm} 4 \delta^2 h^\prime \|{\mathcal{X}}'\|^2 c^2d/(1 \hspace{-1mm}-\hspace{-1mm} \lambda), \\ d' &\hspace{-1mm}\define\hspace{-1mm} 4 h' g/\sigma_{11}, \hspace{1.4cm} e' \hspace{-1mm}\define\hspace{-1mm} 4 h' g c^2 d/(1-\lambda). } These constants are independent of iterations. 
It can be verified that when iteration $i$ is large enough such that \eq{ \rho_A^{2(i+1)} \le \min\left\{ \frac{\sigma_{11}}{2b'}, \frac{\sigma_{12}^2}{\sigma_{11} b'}, \frac{\sigma_{21}^2}{(1-\lambda)c^\prime}, \frac{\sigma_{22}^2}{(1-\lambda)c^\prime} \right\}, } the inequality \eqref{xngyi-prime-2} becomes \eq{\label{xngyi-prime-3} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-2mm} \\ \end{array} \right] &\preceq \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}1 \hspace{-0.6mm}- \hspace{-0.6mm} \frac{\sigma_{11}\mu_{\max}}{2}\hspace{-1mm} & \hspace{-1mm} \frac{2\sigma_{12}^2}{\sigma_{11}}\mu_{\max} \hspace{-2mm}\\ \hspace{-2mm}\frac{3\sigma_{21}^2\mu^2_{\max}}{1-\lambda} \hspace{-1mm}&\hspace{-1mm} \lambda \hspace{-0.6mm}+\hspace{-0.6mm} \frac{3\sigma_{22}^2 \mu^2_{\max}}{1-\lambda} \hspace{-2mm} \\ \end{array} \right]}_{G^\prime} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-2mm} \\ \end{array} \right] \nonumber \\ & \quad \quad + {\left[ \begin{array}{c} d^\prime \mu_{\max} \\ e^\prime \mu^2_{\max} \\ \end{array} \right]} \rho_{A}^{2(i+1)}, } where we can prove $\rho \define \|G'\|_1 = 1 - O(\mu_{\max}) < 1$ by following arguments \eqref{xsdn8}. Inequality \eqref{xngyi-prime-3} further implies that \eq{\label{23ndn} \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) \le \rho \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm} + \hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \right) + f' \rho_{A}^{2(i+1)} } where $f'\define d'\mu_{\max} + e'\mu_{\max}^2 > 0$. Let $\beta = \max\{\rho, \rho_A\} < 1$. Inequality \eqref{23ndn} becomes \eq{\label{23ndn-2} \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 \hspace{-1mm}+\hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) \le \beta \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \hspace{-1mm} + \hspace{-1mm} \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \right) \hspace{-1mm}+\hspace{-1mm} f' \beta^{2(i+1)}. } {\color{black}By adding $\gamma f' \beta^{2i+4}$, where $\gamma$ can be any positive constant to be chosen, to both sides of the above inequality, we get \eq{\label{2bs8} &\hspace{-8mm}\left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) + \gamma f' \beta^{2i+4}\nonumber \\ \le &\ \beta \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 \right) + f' \beta^{2i+2} + \gamma f' \beta^{2i+4}\nonumber\\ = &\ \beta \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \frac{1+ \gamma \beta^2}{\beta}f' \beta^{2i+2}\right) } By setting \eq{\label{23dns} \gamma = \frac{1}{\beta - \beta^2} > 0, } it can be verified that \eq{\label{xnaswbn} \gamma = \frac{1+ \gamma \beta^2}{\beta}. }} Substituting \eqref{xnaswbn} into \eqref{2bs8}, we have \eq{\label{2bs8-2} &\hspace{-1cm}\left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) + \gamma f' \beta^{2(i+2)}\nonumber \\ \le &\ \beta \left( \|\bar{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_{i-1}\|^2 + \gamma f' \beta^{2(i+1)}\right). 
} As a result, the quantity $\left( \|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2 \right) + \gamma f' \beta^{2(i+2)}$ converges to $0$ linearly. Since $f'>0, \gamma>0$ and $\beta>0$, we can conclude that $\|\bar{{\scriptstyle{\mathcal{X}}}}_i\|^2 + \|\check{{\scriptstyle{\mathcal{X}}}}_i\|^2$, and hence $\|\widetilde{\scriptstyle{\mathcal{W}}}_i\|^2 + \|\widetilde{\scriptstyle{\mathcal{Y}}}_i\|^2$, converges to $0$ linearly. } \section{Error Recursion for EXTRA Consensus} \label{app-sta-EXTRA-error-recursion} Multiplying the second recursion of \eqref{extra-pd-2} by ${\mathcal{V}}$ gives: \eq{ {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^e_i = {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^e_{i-1} + \frac{{\mathcal{P}}-{\mathcal{P}} {\mathcal{A}}}{2} {\scriptstyle{\mathcal{W}}}_i^e. \label{extra-V-tran y} } Substituting into the first recursion of \eqref{extra-pd-2} gives \eq{ \overline{{\mathcal{A}}} {\scriptstyle{\mathcal{W}}}_i^e &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}} {\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^e\hspace{-0.8mm}-\hspace{-0.8mm}\mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^e)\hspace{-0.8mm}-\hspace{-0.8mm}{\mathcal{P}}^{-1}{\mathcal{V}} {\scriptstyle{\mathcal{Y}}}_{i}^e, \label{extra-sdfu} } From \eqref{extra-sdfu} and the second recursion in \eqref{extra-pd-2} we conclude that \begin{equation} \left\{ \begin{aligned} \overline{{\mathcal{A}}} \widetilde{\scriptstyle{\mathcal{W}}}_i^e &\hspace{-0.5mm}=\hspace{-0.5mm} \overline{{\mathcal{A}}}\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}^e\hspace{-0.8mm}+\hspace{-0.8mm}\mu {\nabla} {\mathcal{J}}^o({\scriptstyle{\mathcal{W}}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}^e) \hspace{-0.8mm}+\hspace{-0.8mm} {\mathcal{P}}^{-1} {\mathcal{V}} {\scriptstyle{\mathcal{Y}}}^e_{i}, \label{extra-1-subtracrt} \\ \widetilde{\scriptstyle{\mathcal{Y}}}_i^e &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}^e - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_i^e. \end{aligned} \right. \end{equation} Subtracting the optimality condition \eqref{extra-KKT-1-1}--\eqref{extra-KKT-2-2} from \eqref{extra-1-subtracrt} leads to \begin{equation} \left\{ \begin{aligned} \overline{{\mathcal{A}}} \widetilde{\scriptstyle{\mathcal{W}}}_i^e &\hspace{-0.5mm}=\hspace{-0.5mm} (\overline{{\mathcal{A}}} \hspace{-0.8mm}-\hspace{-0.8mm}\mu {\mathcal{H}}_{i-1})\widetilde{\scriptstyle{\mathcal{W}}}_{i-1}^e\hspace{-0.8mm} \hspace{-0.0mm}-\hspace{-0.3mm} {\mathcal{P}}^{-1} {\mathcal{V}} \widetilde{\scriptstyle{\mathcal{Y}}}_{i}^e, \label{extra-1-subtracrt-1} \\ \widetilde{\scriptstyle{\mathcal{Y}}}_i^e &= \widetilde{\scriptstyle{\mathcal{Y}}}_{i-1}^e +{\mathcal{V}} \widetilde{\scriptstyle{\mathcal{W}}}_i^e. \end{aligned} \right. 
\end{equation} which is also equivalent to \eq{\label{extra-xcnh09} &\ \left[ \begin{array}{cc} \hspace{-2mm}\overline{{\mathcal{A}}} & {\mathcal{P}}^{-1}{\mathcal{V}} \hspace{-2mm}\\ \hspace{-2mm}-{\mathcal{V}} & I_{MN} \hspace{-2mm} \\ \end{array} \right]\hspace{-1mm} \left[ \begin{array}{c} \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{W}}}_i^e\hspace{-2mm}\\ \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{Y}}}_i^e\hspace{-2mm} \\ \end{array} \right] \hspace{-1mm}=\hspace{-1mm} \left[ \begin{array}{cc} \hspace{-2mm}\overline{{\mathcal{A}}}- \mu {\mathcal{H}}_{i-1} & 0\hspace{-2mm}\\ \hspace{-2mm}0 & I_{MN}\hspace{-2mm} \\ \end{array} \right]\hspace{-1mm} \left[ \begin{array}{c} \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_{i-1}\hspace{-2mm}\\ \hspace{-2mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_{i-1}\hspace{-2mm} \\ \end{array} \right]. } Using relations $\overline{{\mathcal{A}}} = \frac{I_{MN} + {\mathcal{A}}}{2}$ and ${\mathcal{V}}^2=\frac{{\mathcal{P}} - {\mathcal{P}}{\mathcal{A}}}{2}$, it is easy to verify that \eq{\label{extra-nxcm987} \left[ \begin{array}{cc} \overline{{\mathcal{A}}} & {\mathcal{P}}^{-1}{\mathcal{V}} \\ -{\mathcal{V}} & I_{MN} \\ \end{array} \right]^{-1} = \left[ \begin{array}{cc} I_{MN} & -{\mathcal{P}}^{-1}{\mathcal{V}} \\ {\mathcal{V}} & I_{MN}-{\mathcal{V}} {\mathcal{P}}^{-1} {\mathcal{V}} \\ \end{array} \right]. } Substituting \eqref{extra-nxcm987} into \eqref{extra-xcnh09} gives \eqref{extra-error-recursion-2}--\eqref{extra-T-defi}. \section{Error Recursion in Transformed Domain} \label{app-sta-EXTRA-reduced} Multiplying both sides of \eqref{extra-error-recursion-2} by $({\mathcal{X}}')^{-1}$: \eq{ ({\mathcal{X}}')^{-1} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_i\hspace{-1mm} \\ \end{array} \right] =&\; [({\mathcal{X}}')^{-1}({\mathcal{B}}^e-{\mathcal{T}}^e_{i-1}) {\mathcal{X}}'] ({\mathcal{X}}')^{-1} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_{i-1}\hspace{-1mm} \\ \end{array} \right] } leads to \eq{\label{extra-recursion-transform-2} \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}^e_i \hspace{-1mm} \\ \end{array} \right] \hspace{-1mm}=\hspace{-1mm} &\; \left( \left[ \begin{array}{ccc} I_M & 0 & 0\\ 0 & I_M &0 \\0 & 0 & {\mathcal{D}}_1 \\ \end{array} \right] - {\mathcal{S}}^e_{i-1} \right) \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\hspace{-1mm} \\ \end{array} \right], } where we defined \eq{\label{extra-x-bar and x-check} \left[ \begin{array}{c} \hspace{-1mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\widehat{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\check{{\scriptstyle{\mathcal{X}}}}^e_i \hspace{-1mm} \\ \end{array} \right] \define&\; ({\mathcal{X}}')^{-1}\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_i\hspace{-1mm} \\ \end{array} \right] = \left[ \begin{array}{c} {\mathcal{L}}_1^{\mathsf{T}} \\ {\mathcal{L}}_2^{\mathsf{T}} \\ {\mathcal{X}}_L \\ \end{array} \right] 
\left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}^e_i\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}^e_i\hspace{-1mm} \\ \end{array} \right], } and \eq{\label{S-extra} \hspace{-2mm}{\mathcal{S}}^e_{i-1}\define&({\mathcal{X}}')^{-1}{\mathcal{T}}^e_{i-1}{\mathcal{X}}'\nonumber \\ \hspace{-4mm}=&\left[ \begin{array}{ccc} \hspace{-2mm}{\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 & {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 & \frac{1}{c}{\mathcal{L}}_1^{\mathsf{T}}{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R \hspace{-2mm}\\ \hspace{-2mm}{\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_1& {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 & \frac{1}{c}{\mathcal{L}}_2^{\mathsf{T}}{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R \hspace{-2mm}\\ \hspace{-2mm}c{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 & c{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 & {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right]. } To compute each entry of ${\mathcal{S}}^e_{i-1}$, we let \eq{ {\mathcal{X}}_R = \left[ \begin{array}{cc} {\mathcal{X}}_{R,u} \\ {\mathcal{X}}_{R,d} \\ \end{array} \right], } where ${\mathcal{X}}_{R,u}\in \mathbb{R}^{NM \times 2(N-1)M}$ and ${\mathcal{X}}_{R,d}\in \mathbb{R}^{NM \times 2(N-1)M}$. For the first line of ${\mathcal{S}}^e_{i-1}$, it can be verified that \eq{\label{S-1st-extra} {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 &=\hspace{-1mm} \mu\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{I}},\\ {\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 &= \hspace{-1mm} 0,\\ \frac{1}{c}{\mathcal{L}}_1^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{X}}_R &= \hspace{-1mm} \frac{\mu}{c} \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{X}}_{R,u}. } Likewise, noting that \eq{ {\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}^e_{i-1} = \left[ \begin{array}{cc} \hspace{-1.5mm}0 \hspace{-1mm}&\hspace{-1mm} \frac{1}{N}{\mathcal{I}}^{\mathsf{T}}\hspace{-1.5mm} \\ \end{array} \right]\hspace{-1.5mm} \left[ \begin{array}{cc} \hspace{-1.5mm}\mu {\mathcal{H}}_{i-1} & 0\hspace{-1.5mm} \\ \hspace{-1.5mm}\mu {\mathcal{V}}{\mathcal{H}}_{i-1} & 0\hspace{-1.5mm} \\ \end{array} \right] \overset{\eqref{xcn987-2}}{=} \left[ \begin{array}{cc} \hspace{-1.5mm}0 \hspace{-1mm}&\hspace{-1mm} 0\hspace{-1.5mm} \\ \end{array} \right], } we find for the second line of ${{\mathcal{S}}}_{i-1}^e$ that \eq{\label{S-2nd-extra} c{\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 =0,\;\; c{\mathcal{L}}_2^{\mathsf{T}} {\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 =0,\;\; {\mathcal{L}}_2^{\mathsf{T}}{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R =0. 
} Substituting \eqref{S-extra}, \eqref{S-1st-extra} and \eqref{S-2nd-extra} into \eqref{extra-recursion-transform-2}, we rewrite \eqref{extra-recursion-transform-2} as \eq{\label{znhg-extra}\footnotesize \hspace{0mm}\left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-2mm} \\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}^e_i\hspace{-2mm} \\ \end{array} \right] \hspace{-1.2mm}&=\hspace{-1.2mm}\footnotesize \left[ \begin{array}{ccc} \hspace{-2.5mm}I_{\hspace{-0.3mm} M} \hspace{-1.2mm}-\hspace{-1.2mm} \mu \overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-0.3mm}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}} & \hspace{-0.8mm}0 &\hspace{-1.3mm} -\frac{\mu}{c}\overline{{\mathcal{P}}}^{\mathsf{T}}\hspace{-0.3mm}{\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{X}}_{R,u} \hspace{-2mm}\\ \hspace{-3mm}0 & \hspace{-1.3mm}I_{\hspace{-0.3mm} M} &\hspace{-3.3mm} 0 \hspace{-2mm} \\ \hspace{-2.5mm}-c{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 & \hspace{-1.3mm}-c{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_2 \hspace{-1.3mm} & {\mathcal{D}}_1 - {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R \hspace{-2mm} \\ \end{array} \right] \hspace{-2mm}\left[ \begin{array}{c} \hspace{-2mm}\bar{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2mm} \\ \hspace{-2mm}\widehat{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2mm} \\ \hspace{-2mm}\check{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.5mm}-\hspace{-0.5mm}1}\hspace{-2mm} \\ \end{array} \right] } From the second line of \eqref{znhg-extra}, we get \eq{\label{28cn00-extra} \widehat{{\scriptstyle{\mathcal{X}}}}^e_i = \widehat{{\scriptstyle{\mathcal{X}}}}^e_{i-1}. } As a result, $\widehat{{\scriptstyle{\mathcal{X}}}}^e_i$ will converge to $0$ only if the initial value $\widehat{{\scriptstyle{\mathcal{X}}}}^e_{0} = 0$. To verify that, from the definition of ${\mathcal{L}}_2$ in \eqref{tsdhb} and \eqref{extra-x-bar and x-check} we have \eq{\label{hx_0=0-extra} \widehat{{\scriptstyle{\mathcal{X}}}}_0^e &= {\mathcal{L}}_2^{\mathsf{T}} \left[ \begin{array}{c} \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{W}}}_0^e\hspace{-1mm}\\ \hspace{-1mm}\widetilde{\scriptstyle{\mathcal{Y}}}_0^e\hspace{-1mm} \\ \end{array} \right] = \frac{1}{N}{\mathcal{I}}^{\mathsf{T}} \widetilde{\scriptstyle{\mathcal{Y}}}_0^e \nonumber } \eq{ &\overset{\eqref{extra-error}}{=}\frac{1}{N}{\mathcal{I}}^{\mathsf{T}} ({\scriptstyle{\mathcal{Y}}}^\star_o - {\scriptstyle{\mathcal{Y}}}_0^e) \overset{\eqref{zn-0-extra}}{=} \frac{1}{N}{\mathcal{I}}^{\mathsf{T}} ({\scriptstyle{\mathcal{Y}}}^\star_o - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}^e_0). } Recall that ${\scriptstyle{\mathcal{Y}}}_o^\star$ lies in the ${\mathcal{R}}({\mathcal{V}})$, so that ${\scriptstyle{\mathcal{Y}}}^\star_o - {\mathcal{V}} {\scriptstyle{\mathcal{W}}}_0$ also lies in ${\mathcal{R}}({\mathcal{V}})$. Recall further from Lemma \ref{lm:null-V} that ${\mathcal{I}}^{\mathsf{T}} {\mathcal{V}} =0$, and conclude that $\widehat{{\scriptstyle{\mathcal{X}}}}^e_0=0$. Therefore, from \eqref{28cn00-extra} we have \eq{\label{xzcn-extra} \widehat{{\scriptstyle{\mathcal{X}}}}_i^e=0, \quad \forall i\ge 0 } With \eqref{xzcn-extra}, recursion \eqref{znhg-extra} is equivalent to \eqref{final-recursion-extra}. 
\section{Proof of Theorem \ref{lm-convergence-extra}} \label{app-conv-extra} From the first line of recursion \eqref{final-recursion-extra}, we have \eq{\label{x_bar-recursion-extra} \bar{{\scriptstyle{\mathcal{X}}}}^e_i =& \left(I_M \hspace{-1mm} - \hspace{-1mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right)\bar{{\scriptstyle{\mathcal{X}}}}^e_{i\hspace{-0.3mm}-\hspace{-0.3mm}1} \hspace{-1mm} - \hspace{-1mm} \frac{\mu}{c} \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u} \check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}. } Squaring both sides and using Jensen's inequality gives \eq{\label{x_bar_square-extra} \|\bar{{\scriptstyle{\mathcal{X}}}}^e_i\|^2=&\ \left\|\left(I_M \hspace{-1mm} - \hspace{-1mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right) \bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1} - \frac{\mu}{c} \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u} \check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\right\|^2 \nonumber \\ \le &\ \frac{1}{1-t}\left\|I_M \hspace{-1mm} - \hspace{-1mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}\right\|^2\|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 \nonumber \\ &\quad + \frac{1}{tc^2}\|\mu\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\|^2 \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 } for any $t\in (0,1)$. For the term ${\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i\hspace{-0.3mm}-\hspace{-0.3mm}1}{\mathcal{I}}}$, we have \eq{ \hspace{-3mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i-1}{\mathcal{I}}} = \mu \sum_{k=1}^{N}p_k H_{k,i-1} \overset{\eqref{H-properties}}{\ge} \frac{\mu}{N} \nu I_M \define {\sigma^e_{11} \mu I_M}, } where $\sigma_{11} =\nu/N$. Similarly, we can obtain the upper bound \eq{ {\mu \overline{{\mathcal{P}}}^{\mathsf{T}} {\mathcal{H}}_{i-1}{\mathcal{I}}} & = \mu \sum_{k=1}^{N}p_k H_{k,i-1}\overset{\eqref{H-properties}}{\le} \left(\sum_{k=1}^{N}p_k \right)\delta\mu I_M \hspace{-1.5mm} \overset{(a)}{=} \delta\mu I_M, } where equality $(a)$ holds because $\sum_{k=1}^{N}p_k=1$. It is obvious that $\delta >\sigma^e_{11}$. As a result, we have \eq{\label{xzcn8237-extra} (1\hspace{-0.8mm}-\hspace{-0.8mm}\delta \mu)I_M \hspace{-0.8mm}\le\hspace{-0.8mm} I_{M} \hspace{-0.8mm}-\hspace{-0.8mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{I}}} \le (1\hspace{-0.8mm}-\hspace{-0.8mm}\sigma^e_{11}\mu)I_M, } which implies that when the step-size is sufficiently small to satisfy \eq{\label{xchayu-extra} \mu<1/\delta, } it will hold that \eq{\label{uiox98-extra} &\ \left\|I_{M} \hspace{-0.8mm}-\hspace{-0.8mm} {\mu \overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{I}}} \right\|^2 \le (1-\sigma^e_{11}\mu_{\max})^2. 
} On the other hand, we have \eq{\label{bguj87-extra} &\frac{1}{c^2}\|\mu\overline{{\mathcal{P}}}^{\mathsf{T}}{\mathcal{H}}_{i-1}{\mathcal{X}}_{R,u}\|^2\nonumber \\ &\ \le \frac{\mu^2}{c^2} \|\overline{{\mathcal{P}}}^{\mathsf{T}}\|^2 \|{\mathcal{H}}_{i-1}\|^2 \|{\mathcal{X}}_{R,u}\|^2 \nonumber \\ &\ \le \frac{1}{c^2}\left(\sum_{k=1}^{N}p_k^2\right)\delta^2 \|{\mathcal{X}}_{R,u}\|^2 \mu^2 \nonumber \\ &\ = \frac{\delta^2}{c^2N}\|{\mathcal{X}}_{R,u}\|^2 \mu^2 \overset{\eqref{xcn398}}{\le} \frac{\delta^2}{c^2 N}\|{\mathcal{X}}_{R}\|^2 \mu^2 \define (\sigma^e_{12})^2 \mu^2, } where $\sigma^e_{12}= \delta \|{\mathcal{X}}_{R}\|/(c\sqrt{N})$ and the ``$=$" sign in the third line holds because $p_k = 1/N$. Notice that $\sigma^e_{12}$ is independent of $\mu$. Substituting \eqref{uiox98-extra} and \eqref{bguj87-extra} into \eqref{x_bar_square-extra}, we get \eq{\label{x_bar_square-2-extra} &\hspace{-5mm} \|\bar{{\scriptstyle{\mathcal{X}}}}^e_i\|^2 \nonumber \\ \le &\ \frac{1}{1-t}(1-\sigma^e_{11}\mu)^2\|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 + \frac{1}{t} (\sigma^e_{12})^2\mu^2 \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 \nonumber \\ = &\ (1-\sigma^e_{11}\mu)\|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 + \frac{(\sigma^e_{12})^2}{\sigma^e_{11}}\mu\|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2, } where we are selecting $t = \sigma^e_{11}\mu$. Next we check the second line of recursion \eqref{final-recursion-extra}, which amounts to \eq{ \check{{\scriptstyle{\mathcal{X}}}}^e_i =&\; -c{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}+ ({\mathcal{D}}_1 - {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R)\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\nonumber \\ =&\; {\mathcal{D}}_1\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1} - {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}). \label{yujs99-extra} } Squaring both sides of \eqref{yujs99-extra}, and using Jensen's inequality again, \eq{ \|\check{{\scriptstyle{\mathcal{X}}}}^e_i\|^2 =& \|{\mathcal{D}}_1 \check{{\scriptstyle{\mathcal{X}}}}^e_{i-1} - {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1})\|^2\nonumber\\ \le & \frac{\|{\mathcal{D}}_1\|^2}{t} \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 \hspace{-0.8mm}+\hspace{-0.8mm} \frac{1}{1-t}\| {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}(c{\mathcal{R}}_1 \bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1} + {\mathcal{X}}_R\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1})\|^2 \nonumber \\ \le & \frac{\|{\mathcal{D}}_1\|^2}{t} \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 + \frac{2c^2}{1-t} \| {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_1\|^2 \|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2\nonumber\\ &\hspace{0.63cm}+ \frac{2}{1-t} \|{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R\|^2 \| \check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2. } where $t\in(0,1)$. From Lemma \ref{lm-B-decomposition} we have that $\lambda \define \|D_1\| = \sqrt{\lambda_2(\widetilde{A})}<1$. 
By setting $t=\lambda$, we reach \eq{\label{yuwe9-extra} \|\check{{\scriptstyle{\mathcal{X}}}}^e_i\|^2 \leq&\ \lambda \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 + \frac{2c^2}{1-\lambda} \| {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{R}}_1\|^2 \|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2\nonumber\\ &\;\;\;\;\;\hspace{1.03cm}+ \frac{2}{1-\lambda} \|{\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R\|^2 \| \check{{\scriptstyle{\mathcal{X}}}}^2_{i-1}\|^2. } From the definition of ${\mathcal{T}}^e_{i-1}$ in \eqref{extra-T-defi}, we have \eq{\label{xha9867-extra} {\mathcal{T}}^e_{i-1} \hspace{-1mm}=\hspace{-1mm} \mu \left[ \begin{array}{cc} \hspace{-2mm}{\mathcal{H}}_{i-1} & 0\hspace{-2mm} \\ \hspace{-2mm}{\mathcal{V}} {\mathcal{H}}_{i-1} & 0 \hspace{-2mm} \\ \end{array} \right] \hspace{-1mm}=\hspace{-1mm} \mu \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}I_{MN} & 0\hspace{-2mm} \\ \hspace{-2mm}{\mathcal{V}} & 0\hspace{-2mm} \\ \end{array} \right]}_{\define {\mathcal{T}}_e} \left[ \begin{array}{cc} \hspace{-2mm}{\mathcal{H}}_{i-1} & 0\hspace{-2mm} \\ \hspace{-2mm}0 & {\mathcal{H}}_{i-1}\hspace{-2mm} \\ \end{array} \right], } which implies that \eq{\label{nui89-extra} \|{\mathcal{T}}^e_{i-1}\|^2 \le \mu^2 \|{\mathcal{T}}_e\|^2 \left( \max_{1\le k\le N} \|H_{k,i-1}\|^2 \right) \le \|{\mathcal{T}}_e\|^2 \delta^2 \mu^2. } We also emphasize that $\|{\mathcal{T}}_e\|^2$ is independent of $\mu$. With inequality \eqref{nui89-extra}, we further have \eq{ c^2\| {\mathcal{X}}_L{\mathcal{T}}^e_{i-1} {\mathcal{R}}_1\|^2 &\le c^2 \mu^2 \|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_e\|^2 \|{\mathcal{R}}_1\|^2\delta^2 \nonumber \\ &\define (\sigma^e_{21})^2 \mu^2 \label{xcbnb978-1-extra}\\ \| {\mathcal{X}}_L{\mathcal{T}}^e_{i-1}{\mathcal{X}}_R\|^2 &\le \mu^2 \|{\mathcal{X}}_L\|^2 \|{\mathcal{T}}_e\|^2 \|{\mathcal{X}}_R\|^2 \delta^2 \nonumber \\ &\define (\sigma^e_{22})^2 \mu^2,\label{xcbnb978-2-extra} } notice that $\|{\mathcal{R}}_1\|=1$, $\sigma^e_{21}$ and $\sigma^e_{22}$ are defined as \eq{ \sigma^e_{21}=c \|{\mathcal{X}}_L\| \|{\mathcal{T}}_e\| \delta,\ \ \sigma^e_{22}=\|{\mathcal{X}}_L\| \|{\mathcal{T}}_e\| \|{\mathcal{X}}_R\|\delta. } With \eqref{xcbnb978-1-extra} and \eqref{xcbnb978-2-extra}, inequality \eqref{yuwe9-extra} becomes \eq{\label{yuwe9-2-extra} \| \check{{\scriptstyle{\mathcal{X}}}}^e_i\|^2 \leq \left(\lambda + \frac{2(\sigma_{22}^e)^2 \mu^2}{1-\lambda}\right) \|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 + \frac{2(\sigma^e_{21})^2 \mu^2}{1-\lambda} \|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2. } Combining \eqref{x_bar_square-2-extra} and \eqref{yuwe9-2-extra}, we arrive at the inequality recursion: \eq{\label{xngyi-extra} \hspace{-3mm} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}^e_i\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}^e_i\|^2 \hspace{-2mm} \\ \end{array} \right] \preceq \underbrace{\left[ \begin{array}{cc} \hspace{-2mm}1-\sigma^e_{11}\mu\hspace{-1mm} & \hspace{-1mm} \frac{(\sigma_{12}^e)^2}{\sigma^e_{11}}\mu\hspace{-2mm}\\ \hspace{-2mm}\frac{2(\sigma_{21}^e)^2\mu^2}{1-\lambda} \hspace{-1mm}&\hspace{-1mm} \lambda + \frac{2(\sigma_{22}^e)^2 \mu^2}{1-\lambda}\hspace{-2mm} \\ \end{array} \right]}_{\define G_e} \left[ \begin{array}{c} \hspace{-2mm}\|\bar{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2\hspace{-2mm} \\ \hspace{-2mm}\|\check{{\scriptstyle{\mathcal{X}}}}^e_{i-1}\|^2 \hspace{-2mm} \\ \end{array} \right]. 
} From this point onwards, we follow exactly the same argument as in \eqref{upper-bound-1}--\eqref{237sdnkk} to arrive at the conclusion in Theorem \ref{lm-convergence-extra}. \section{Proof of Lemma \ref{lm-sta-ed}}\label{app-sta-ed} It is observed from expression \eqref{cn9999} for $E_d$ that one of the eigenvalues is $1-\mu\sigma^2$. It is easy to verify that when $\mu$ satisfies \eqref{mu-range}, it holds that \( -1< 1 - \mu \sigma^2 < 1. \) Next, we check the other two eigenvalues. Let $\theta$ denote a generic eigenvalue of $E_d$. From the right-bottom $2\times 2$ block of $E_d$ in \eqref{cn9999}, we know that $\theta$ will satisfy the following characteristic polynomial \eq{\label{ed-cp} \theta^2 - (2-\mu \sigma^2)a\, \theta + (1-\mu \sigma^2) a =0, } where $a\in(0,1)$ is a combination weight (see the expression for $A$ in \eqref{2-MSE-A}). Solving \eqref{ed-cp}, the two roots are \eq{\label{roots} \theta_{1,2} = \frac{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu \sigma^2)a \pm \hspace{-0.8mm} \sqrt{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a^2 \hspace{-0.8mm}-\hspace{-0.8mm} 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)a }}{2}. } Let \eq{\label{xcnweh8} \Delta = (2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a^2 \hspace{-0.8mm}-\hspace{-0.8mm} 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)a. } Based on the values of $\mu\sigma^2$ and $a$, $\Delta$ can be negative, zero, or positive. Recall from \eqref{mu-range} that $0<\mu\sigma^2<2$. In that case, over the smaller interval $1\leq \mu\sigma^2< 2$, it holds that $(1-\mu\sigma^2)\leq 0$ and, from \eqref{xcnweh8}, $\Delta >0$. For this reason, as indicated in cases 1 and 2 below, the scenarios corresponding to $\Delta<0$ or $\Delta=0$ can only occur over $0< \mu\sigma^2 < 1$: \noindent \textbf{Case 1: $\Delta <0$}. It can be verified that when \eq{\label{j98} 1-\mu\sigma^2 > 0,\quad \mbox{and}\quad a < \frac{4(1-\mu\sigma^2)}{(2-\mu\sigma^2)^2}, } it holds that $\Delta < 0$. In this situation, $\theta_1$ and $\theta_2$ form a complex-conjugate pair with squared magnitude \eq{ |\theta_1|^2 \hspace{-0.8mm}=\hspace{-0.8mm} |\theta_2|^2 \hspace{-0.8mm}=\hspace{-0.8mm} \frac{1}{4}\left( (2\hspace{-0.8mm}-\hspace{-0.8mm}\mu \sigma^2)^2a^2 \hspace{-0.8mm}+\hspace{-0.8mm} (-\hspace{-0.4mm}\Delta) \right) \hspace{-0.8mm}=\hspace{-0.8mm} (1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2 ) a < 1, } so that $|\theta_1| = |\theta_2| < 1$, where the last inequality holds because $0< \mu\sigma^2 <1$ (see \eqref{mu-range} and \eqref{j98}) and $a\in (0,1)$. \noindent \textbf{Case 2: $\Delta =0$}. It can be verified that when \eq{\label{bv6} 1-\mu\sigma^2 > 0,\quad \mbox{and}\quad a = \frac{4(1-\mu\sigma^2)}{(2-\mu\sigma^2)^2}, } it holds that $\Delta = 0$. In this situation, from \eqref{roots} we have \eq{ \theta_1 = \theta_2 = \frac{(2-\mu \sigma^2)a}{2} < 1, } where the last inequality holds because $0< \mu\sigma^2 <1$ (see \eqref{mu-range} and \eqref{bv6}) and $a\in (0,1)$. Observe further that the upper bound on $a$ in \eqref{j98} is positive and smaller than one when $0<\mu\sigma^2 <1$. \noindent \textbf{Case 3: $\Delta >0$}. It can be verified that when \eq{\label{n2g8} 1-\mu\sigma^2 > 0,\quad \mbox{and}\quad a > \frac{4(1-\mu\sigma^2)}{(2-\mu\sigma^2)^2}, } or when $1\le \mu \sigma^2 < 2$, it holds that $\Delta > 0$.
In this situation, both roots are real and \eq{ \theta_{1} &= \frac{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu \sigma^2)a + \hspace{-0.8mm} \sqrt{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a^2 \hspace{-0.8mm}-\hspace{-0.8mm} 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)a }}{2},\\ \theta_{2} &= \frac{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu \sigma^2)a - \hspace{-0.8mm} \sqrt{(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a^2 \hspace{-0.8mm}-\hspace{-0.8mm} 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)a }}{2}. } Moreover, since $(2\hspace{-0.8mm}-\hspace{-0.8mm}\mu \sigma^2)a>0$, we have \eq{ |\theta_2| < |\theta_1|=\theta_1. } We regard $\theta_1$ as a function of $a$, i.e., $\theta_1=f(a)$, and show that $f(a)$ is monotonically increasing in $a$. To prove it, note that \begin{equation} f^\prime(a)=\frac{2-\mu\sigma^2}{2} + \frac{2(2-\mu\sigma^2)^2 a - 4(1-\mu\sigma^2)}{4\sqrt{\Delta}}. \end{equation} Now since \eq{\label{xcnwj88} &\ \Delta = (2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a^2 \hspace{-0.8mm}-\hspace{-0.8mm} 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)a > 0 \nonumber \\ \Longleftrightarrow &\ (2\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2)^2 a > 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2) \;\;\; \mbox{(because $a>0$)} \nonumber \\ \Longrightarrow &\ 2(2-\mu\sigma^2)^2 a > 4(1\hspace{-0.8mm}-\hspace{-0.8mm}\mu\sigma^2), } we conclude that $f^\prime(a)>0$. Since $a<1$, it follows that \eq{ \theta_1 = f(a) < f(1) = 1. } In summary, when $\mu$ satisfies \eqref{mu-range}, for any $a\in (0,1)$ all three eigenvalues of $E_d$ lie strictly inside the unit circle, which implies that $\rho(E_d)<1$, and also $\rho({\mathcal{E}}_d)<1$. As a result, $\check{{\scriptstyle{\mathcal{Z}}}}_i$ in \eqref{cngw7} will converge to $0$. Since $\widehat{{\scriptstyle{\mathcal{Z}}}}_i=0$ for any $i$, we conclude that $\widetilde{\scriptstyle{\mathcal{Z}}}_i$ converges to $0$. \section{Proof of Lemma \ref{lm-sta-ex}}\label{app-sta-extra} Similar to the arguments used to establish Lemma \ref{lm-Q_d-decom} and \eqref{ngy7}--\eqref{cngw7}, the EXTRA error recursion \eqref{special-extra} can also be divided into two separate recursions \eq{ \widehat{{\scriptstyle{\mathcal{Z}}}}_i^e = \widehat{{\scriptstyle{\mathcal{Z}}}}_{i-1}^e, \quad \mbox{and} \quad \check{{\scriptstyle{\mathcal{Z}}}}_i^e = {\mathcal{E}}_e \check{{\scriptstyle{\mathcal{Z}}}}_{i-1}^e, } where ${\mathcal{E}}_e = E_e \otimes I_M$, and \eq{\label{cn9999-extra} E_e = \left[ \begin{array}{ccc} \hspace{-2mm}1-\mu^e\sigma^2 & 0 & 0\hspace{-2mm}\\ \hspace{-2mm}0 & a-\mu^e \sigma^2 & -\sqrt{2-2a}\hspace{-2mm}\\ \hspace{-2mm}0 & (a-\mu^e\sigma^2)\sqrt{\frac{1-a}{2}} & a\hspace{-2mm} \\ \end{array} \right]. } Also, since both ${\scriptstyle{\mathcal{Y}}}_0^e$ and ${\scriptstyle{\mathcal{Y}}}_o^\star$ lie in the range of ${\mathcal{V}}$, it can be verified that $\widehat{{\scriptstyle{\mathcal{Z}}}}_0^e=0$. Therefore, we only focus on the convergence of $\check{{\scriptstyle{\mathcal{Z}}}}_i^e$. Let $\theta^e$ denote a generic eigenvalue of $E_e$. From the right-bottom $2\times 2$ block of $E_e$ in \eqref{cn9999-extra}, we know that $\theta^e$ will satisfy the following characteristic polynomial \eq{\label{ed-cp-ex} (\theta^e)^2 - (2a-\mu^e \sigma^2)\, (\theta^e) + (a-\mu^e \sigma^2) =0. } Solving it, we have \eq{ \theta^e_{1,2}&=\frac{2a-\mu^e\sigma^2 \pm \sqrt{(2a-\mu^e\sigma^2)^2 - 4(a-\mu^e \sigma^2)}}{2}.
} Now suppose $\mu^e\sigma^2 \ge a+1$, as noted in \eqref{mu-range-extra}. It then follows that \eq{\label{xcn2hg} a-\mu^e\sigma^2\le -1 } and hence both $\theta_1^e$ and $\theta_2^e$ are real numbers with \eq{ \theta^e_{1}=\frac{2a\hspace{-1mm}-\hspace{-1mm}\mu^e\sigma^2 \hspace{-1mm}+\hspace{-1mm} \sqrt{(2a-\mu^e\sigma^2)^2 \hspace{-1mm}+\hspace{-1mm} 4(\mu^e \sigma^2-a)}}{2}, } \eq{ \theta^e_{2}=\frac{2a\hspace{-1mm}-\hspace{-1mm}\mu^e\sigma^2 \hspace{-1mm}-\hspace{-1mm} \sqrt{(2a-\mu^e\sigma^2)^2 \hspace{-1mm}+\hspace{-1mm} 4(\mu^e \sigma^2-a)}}{2}. } Moreover, with $\mu^e\sigma^2 \ge a+1$ we further have \eq{\label{b88}2a-\mu^e\sigma^2\le a-1<0,} which implies that \eq{ |\theta_2^e|= \frac{\mu^e\sigma^2\hspace{-1mm}-\hspace{-1mm}2a \hspace{-1mm}+\hspace{-1mm} \sqrt{(2a-\mu^e\sigma^2)^2 \hspace{-1mm}+\hspace{-1mm} 4(\mu^e \sigma^2-a)}}{2} > 1, } where the last inequality holds because of \eqref{xcn2hg} and \eqref{b88}. Therefore, when $\mu^e$ is chosen such that $\mu^e\sigma^2 \ge 1+a$, there always exists an eigenvalue $\theta^e$ with $|\theta^e|>1$, which implies that $\check{{\scriptstyle{\mathcal{Z}}}}_i^e$ diverges, and so does $\widetilde{\scriptstyle{\mathcal{Z}}}_i^e$. \end{document}
arXiv
The circumference of Earth is 40,000 kilometers. How many trips around Earth could you take if you travel one billion meters? First, convert one billion meters to kilometers. \[1000000000 \textnormal{ meters} \cdot \frac{1 \textnormal{ kilometer}}{1000 \textnormal { meters}} = 1000000 \textnormal{ kilometers}\] Next, we divide the total distance traveled by the circumference around the Earth to get the total number of trips around the world, or $\frac{1000000}{40000} = \boxed{25}$ trips.
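A quick sanity check of the same arithmetic (not part of the original solution), written as a few lines of Python:

meters_traveled = 1_000_000_000              # one billion meters
km_traveled = meters_traveled / 1000         # 1 km = 1000 m
earth_circumference_km = 40_000              # given in the problem
print(km_traveled / earth_circumference_km)  # 25.0 trips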
Math Dataset
Could a solar system with large amounts of dust and debris exist?
I'm working on a sci-fi/fantasy world set in a solar system with a few habitable, currently-being-colonized planets. Most of the action will be taking place on the surface of one of them, but I want background detail to add to the setting and give me info for future developments. I was wondering if a solar system could exist with massive amounts of natural debris and dust, such as asteroid belts and the like, that affected space travel in such a way that ships couldn't accelerate too fast for fear of losing maneuverability and hitting something. I'm talking dense. This would also add an element of uncertainty and surprise to space battles, as normally you would be able to see your opponent from far away using sensors, but in the dust and debris, stealth would be much more feasible. On another note, I want a Star Trek or Mass Effect kind of ship handling, and to me, imposing this sort of speed limit will help with that. Any suggestions along that vein would be most appreciated too!
EDIT: Wow, didn't expect so many helpful responses, especially on my first post ever! You guys are making my life a lot easier. I'm leaning toward a debris disk/field right now, as that seems perfect for what I had in mind. For additional info, the reason that there are people in this solar system is that that's where life appeared, and as they haven't developed FTL tech, they're basically stuck there. I think I'll call the system and the star Apex. The 'homeworld' colonizes a couple other planets in the system, and after a civil war between the colonies, one of them suffers orbital bombardment, plunging the few survivors into a post-apocalyptic mess hundreds of millions of kilometers from home.
Tags: space, science-fiction, space-travel, spaceships, travel (asked by AWOLDan)
You might want to consider a gas torus, rather than dust and rocks - check out Larry Niven's Integral Trees for an example. – John Feltz Aug 25 '16 at 20:30
We have this now. Isn't this crap exactly why the Star Trek Enterprise has a deflector dish? It's a snow plow, for space. – candied_orange Aug 25 '16 at 21:47
Two star systems with large Oort clouds could have passed close in the (relatively) recent past. With enough cloud objects getting disturbed, it could be messy closer in, perhaps for millions of years. – user2338816 Aug 26 '16 at 5:38
I think you're confusing acceleration and velocity. Or did «accelerate too fast» mean accelerate up to a too-high velocity rather than how quickly you change your speed up to the final value? – JDługosz Aug 26 '16 at 10:03
You could have a proto-planetary disk, but if it's proto-planetary, I doubt there would be any habitable planets. Also, spacecraft can fly out of the plane of the disk to get around it. And if the system is full of debris, the planet could receive monthly asteroids, which will make it very uneconomical to live there. – sampathsris Aug 26 '16 at 10:55
What you described looks just like a very young solar system, where planetary formation is a process still in progress.
Probably no native life there, but easier to mine. This might be a reason to go there at all. Can't find source, but I remember reading somewhere that thickness may be about 19% of diameter. That's quite a lot - and interesting stuff is inside. Also, you seem to care about stealth. Risk of being attacked may outweigh risk of traveling inside stealth-able environment. Or not, but you still need to get in and out. MołotMołot $\begingroup$ Invoking a protoplanetary disk is a chicken and egg problem. While the disk was present the planets would not have formed yet (or, if they were newly formed, they would likely have global magma oceans). $\endgroup$ – Sean Raymond Aug 26 '16 at 14:28 Give it a protoplanetary disk: Image in the public domain. Protoplanetary disks are circumstellar disks that form early on in the life of a planetary system, from the original protoplanetary nebula around the star. They can survive for over 10-20 million years (see Mamajek et al. (2009)), meaning that life likely could not develop, but you could have loads of protoplanets and related bodies around while still retaining plenty of dust. I would recommend a debris disk, but they're not necessarily as dense. Additionally, they won't necessarily have bodies as large as protoplanets inside them - although even asteroid-sized bodies can be hazardous to space travelers. What will the densities actually be like? They can vary quite a lot over time. Here's a graph of surface density $\Sigma$ vs. radius $R$ from Dullemond et al.: Initially, there are maximum surface densities in the order of 10,000 grams per square centimeter; after 3 million years, this peak has gone down to 100 grams per square centimeter. We can use Equation 5 from here to find the spatial density: $$\rho(R,Z)=\frac{\Sigma(R)}{\sqrt{2\pi} H}\exp\left(-\frac{Z^2}{2H^2}\right)$$ At an elevation of $Z=0$, and assuming a scale height consistent with that of Chiang & Goldreich (1997), then at a distance of about 1 AU, $H\sim0.045$ and at a time of about half a million years, the density is . . . quite substantial. Sean Raymond's suggestion of a debris disk might be better than my answer, for a few reasons that he pointed out: Debris disks aren't so short-lived. They can contain large rocky planets; the debris disk I suggested only holds small planetesimals You should strongly consider that. HDE 226868♦HDE 226868 $\begingroup$ If there's a disk, what's the reason to not just fly above/below it and avoid most of the danger? $\endgroup$ – OrangeDog Aug 26 '16 at 7:57 $\begingroup$ @OrangeDog Same reason all of our probes fly (at least initially) in the orbital plane - it takes a freakin' huge amount of energy to orbit the Sun, and if you start from a planet you get most of it for free - if you stay in the same plane. $\endgroup$ – Ordous Aug 26 '16 at 13:04 $\begingroup$ @OrangeDog Protoplanetary disks are not thin. Even at $Z=H$, the density will still be about 0.60 times the density at $Z=0$ at the same radius - and if $H\sim.045$, then that's still over 4 million miles - about 17.5 times the distance from Earth to the Moon. So trying to escape and fly above/below it once you're at $Z=0$ (where most of the protoplanets will be) will take a lot of time and effort. It's not so simple. $\endgroup$ – HDE 226868♦ Aug 26 '16 at 13:50 $\begingroup$ It's true that protoplanetary disks are not that thin, with typical aspect ratios of 5% or so. However, large planets tend to sink to the very thin mid-plane where there is the most action. 
That being said, as I mentioned in a comment above, this is a chicken-and-egg problem. While the protoplanetary disk is there, rocky planets do not exist yet; they are still forming. And even if they are fully-formed they cannot yet be habitable since their surfaces are likely molten. What you need is a debris disk, preferably during an instability (see answer below) $\endgroup$ – Sean Raymond Aug 26 '16 at 14:30 $\begingroup$ @SeanRaymond Fair point. I chose 3 million years as a decent intermediate point because it would land in the very early stages of planetesimal formation; I was willing to not have there be proper planets in order to gain some small rocky bodies while retaining large amounts of dust. I also misunderstood the composition of mature debris disks; I thought the dust fraction was much smaller. I think your proposal is better, then. $\endgroup$ – HDE 226868♦ Aug 26 '16 at 14:45 It's not plausible to invoke a protoplanetary disk: they are too short-lived. There's a ton of dust floating around in those disks because it's where planets are forming. If your planets are already formed (with non-molten surfaces) then the disk is gone. What you need is a "debris disk". In astronomer-speak, debris disks are belts or disks of rocky/icy leftovers of planet formation that produce enough dust that you can see them directly. Here are a couple of famous examples (note that debris is usually inferred from spectra rather than seen directly; these are prime examples): Most debris disks are relatively cold, made of bodies on very cold (think Neptune-ish) orbits. But a few do have warmer belts closer to where terrestrial planets live. The trick is this: very dense belts don't last long. The best way I can think of to have a super dense debris field is to make your story take place during a late heavy bombardment-type event. Like the Solar System's bombardment (https://en.wikipedia.org/wiki/Late_Heavy_Bombardment) but much heavier. This is totally plausible from an astrodynamical point of view. Sean RaymondSean Raymond $\begingroup$ +1; I like it. I did a little bit of reading, and I've seen varied statistics for debris disk lifetimes (many on orders of magnitude greater than the lifetimes of protoplanetary disks). Out of curiosity, what would be the lifetime of a typical debris disk here? $\endgroup$ – HDE 226868♦ Aug 26 '16 at 14:52 $\begingroup$ Well, debris disks -- meaning belts of comets that slowly grind to dust -- can last for billions of years. However, the instability phase, during which an instability in planets' orbits causes a massive deluge of debris throughout the planetary system, is much shorter, typically 0.1 to 10 million years long. The Solar System's late heavy bombardment was pretty wimpy compared to what is likely to have happened in many extrasolar planetary systems. $\endgroup$ – Sean Raymond Aug 26 '16 at 17:05 That solar system is just a slag pile from some Type 2 Civilization's strip mining operation. We thought it was natural when we arrived. Some kind of young solar system or a late-blooming proto-planetary disk; but its sun turned out to be too old for that. Also, most of the expected heavier metals were missing. It was a real mystery for a while. Then we found the first of the artifacts. No more than a thousand Earth-years ago, some very advanced miners reduced most of the system to dust and rock fragments. They left a couple planets intact. Probably used them as their mining camps. They even took most of the primary star's mass. It didn't used to be a dwarf. 
They left just enough mass to keep all this junk from floating away and causing havoc out in open space. This place would be a wasteland, except those ancient strip miners left some of their tools behind; mostly broken bits and warn-out parts. Probably junk to them, but precious artifacts and technological wonders for us. That whole solar system has become a technological boom town. It is the center of the 24th century gold rush! Henry TaylorHenry Taylor Such a system wouldn't be colonised in the first place. In a protoplanetary disk, there's a lot of debris floating around, which is what you asked for. But that large amount of debris crossing planet orbits also means lots of impact events. These tend to do bad things to real estate values. Really bad things. Most sensible colonists would avoid this as way too likely to get a rock dropped on you from orbit. Unless there's some reason why you can't just go to the star next door. LelielLeliel $\begingroup$ Indeed. It would be next to impossible to reach one of the planets at all, for the fear of colliding into something, let alone trying to colonize some planets! $\endgroup$ – Youstay Igo Aug 26 '16 at 5:05 $\begingroup$ @YoustayIgo nah, you just activate your pegasus cloak, and fly right through! $\endgroup$ – Benubird Aug 26 '16 at 11:16 You could make your solar system filled with a combination of frequent clouds of gas and frequent asteroid belts. This could potentially force slowness and allow stealth while still allowing for an inhabitable solar system. Gen_LukeGen_Luke $\begingroup$ The OP already said all of this in the question. (S)he wants to know if/how such a system could actually exist. $\endgroup$ – JBentley Aug 26 '16 at 8:36 $\begingroup$ Hi, Gen_Luke, welcome to Worldbuilding SE. Your answer can be improved by explaining how the dense gas, dust & asteroids form part of the solar system. Here we prefer facts, discussion, details & reasons why something is so. It takes time to learn the rules here. Just hang in there and you pick them up. $\endgroup$ – a4android Aug 26 '16 at 11:38 There's an example of this, a later stage star with a crap ton of debris. The suspicion is that two planets within it collided. http://articles.latimes.com/2014/mar/08/science/la-sci-sn-beta-pictoris-star-planet-gas-collision-comets-carbon-monoxide-20140307 Even if we're pretty far away, if Mercury hit Venus, we'd have a lot of junk floating around. Not sure, how long we'd live. TatarizeTatarize You can place an ancient megastructure on this system, like a Dyson sphere. Due to his abandonment, the structure collapsed long ago, creating numerous remains. Over time, theses remains started to collide at high speed, creating smaller remains. At last, you obtain a system with a lot of debris on various size, from dust to country-sized part. Any advanced civilization can easily and logically create such kind of structure. MetushaelMetushael I'm going to more or less disagree with everyone here. I don't think the option of protoplanetary disk would work. If it's proto-planetary, planets are not yet formed or still in the stage of big ugly magma balls, incapable of hosting any life. When a planet gradually cools down and terraforms itself to be capable of hosting life, several billion years would have passed and the debris would have been cleared by the planets. You could have another planet in the system to have collided with a rogue planet recently (i.e. few million years in the past). That will give you a nice debris field. 
But it would mean that the planet would be frequently and heavily bombarded with asteroids. It would not be suitable for life. If somehow you were able to have a debris field and a thin debris-free strip for the planet, the debris would not affect spacecraft, because they need not fly in the planetary plane; they can fly out of it and get out of the system. So the alternative for you is to have your planet in a debris-free zone, but give some incentives for the spacecraft to go into a dangerous zone full of debris in another part of the system. Look at this breathtaking animation of Jupiter herding the asteroids: https://www.youtube.com/watch?v=yt1qPCiOq-8 I got an idea from this. Have a star larger/hotter than the Sun, where your habitable planet is around 5-8 AU from the star. Have a gas giant closer to the star. And then have an inner planet collide with a rogue planet a few hundred thousand years ago. The gas giant will herd the debris so your planet will not be affected much. And oh, the rogue planet was crossing the planetary plane, so the debris is not a disk, but a big, massive ball of dust, sand, and rocks around the star. And maybe your spacecraft need to go near the star to recharge like Rama, or there's a valuable mineral that could be mined in the asteroid field. – sampathsris
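For readers who want to attach rough numbers to the midplane-density formula quoted in one of the answers above, rho(R, 0) = Sigma(R) / (sqrt(2*pi) * H), here is a small back-of-the-envelope Python sketch; the surface density and scale height used below are illustrative assumptions, not values taken from the thread or its references.

import math

AU_CM = 1.496e13               # one astronomical unit, in centimeters
sigma = 1000.0                 # assumed surface density near 1 AU, g/cm^2
H = 0.045 * AU_CM              # assumed scale height of 0.045 AU, in cm

rho_midplane = sigma / (math.sqrt(2 * math.pi) * H)
print(f"{rho_midplane:.1e} g/cm^3")   # about 6e-10 g/cm^3 for these inputs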
CommonCrawl
Regular polytope In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry. All its elements or j-faces (for all 0 ≤ j ≤ n, where n is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension ≤ n. Regular polytope examples A regular pentagon is a polygon, a two-dimensional polytope with 5 edges, represented by Schläfli symbol {5}. A regular dodecahedron is a polyhedron, a three-dimensional polytope, with 12 pentagonal faces, represented by Schläfli symbol {5,3}. A regular 120-cell is a polychoron, a four-dimensional polytope, with 120 dodecahedral cells, represented by Schläfli symbol {5,3,3}. (shown here as a Schlegel diagram) A regular cubic honeycomb is a tessellation, an infinite three-dimensional polytope, represented by Schläfli symbol {4,3,4}. The 256 vertices and 1024 edges of an 8-cube can be shown in this orthogonal projection (Petrie polygon) Regular polytopes are the generalized analog in any number of dimensions of regular polygons (for example, the square or the regular pentagon) and regular polyhedra (for example, the cube). The strong symmetry of the regular polytopes gives them an aesthetic quality that interests both non-mathematicians and mathematicians. Classically, a regular polytope in n dimensions may be defined as having regular facets ([n–1]-faces) and regular vertex figures. These two conditions are sufficient to ensure that all faces are alike and all vertices are alike. Note, however, that this definition does not work for abstract polytopes. A regular polytope can be represented by a Schläfli symbol of the form {a, b, c, ..., y, z}, with regular facets as {a, b, c, ..., y}, and regular vertex figures as {b, c, ..., y, z}. Classification and description Regular polytopes are classified primarily according to their dimensionality. They can be further classified according to symmetry. For example, the cube and the regular octahedron share the same symmetry, as do the regular dodecahedron and icosahedron. Indeed, symmetry groups are sometimes named after regular polytopes, for example the tetrahedral and icosahedral symmetries. Three special classes of regular polytope exist in every dimension: • Regular simplex • Measure polytope (Hypercube) • Cross polytope (Orthoplex) In two dimensions, there are infinitely many regular polygons. In three and four dimensions, there are several more regular polyhedra and 4-polytopes besides these three. In five dimensions and above, these are the only ones. See also the list of regular polytopes. In one dimension, the Line Segment simultaneously serves as all of these polytopes, and in two dimensions, the square can act as both the Measure Polytope and Cross Polytope at the same time. The idea of a polytope is sometimes generalised to include related kinds of geometrical object. Some of these have regular examples, as discussed in the section on historical discovery below. Schläfli symbols Main article: Schläfli symbol A concise symbolic representation for regular polytopes was developed by Ludwig Schläfli in the 19th century, and a slightly modified form has become standard. The notation is best explained by adding one dimension at a time. • A convex regular polygon having n sides is denoted by {n}. So an equilateral triangle is {3}, a square {4}, and so on indefinitely. 
A regular star polygon which winds m times around its centre is denoted by the fractional value {n/m}, where n and m are co-prime, so a regular pentagram is {5/2}. • A regular polyhedron having faces {n} with p faces joining around a vertex is denoted by {n, p}. The nine regular polyhedra are {3, 3}, {3, 4}, {4, 3}, {3, 5}, {5, 3}, {3, 5/2}, {5/2, 3}, {5, 5/2} and {5/2, 5}. {p} is the vertex figure of the polyhedron. • A regular 4-polytope having cells {n, p} with q cells joining around an edge is denoted by {n, p, q}. The vertex figure of the 4-polytope is a {p, q}. • A regular 5-polytope is an {n, p, q, r}. And so on. Duality of the regular polytopes The dual of a regular polytope is also a regular polytope. The Schläfli symbol for the dual polytope is just the original symbol written backwards: {3, 3} is self-dual, {3, 4} is dual to {4, 3}, {4, 3, 3} to {3, 3, 4} and so on. The vertex figure of a regular polytope is the dual of the dual polytope's facet. For example, the vertex figure of {3, 3, 4} is {3, 4}, the dual of which is {4, 3} — a cell of {4, 3, 3}. The measure and cross polytopes in any dimension are dual to each other. If the Schläfli symbol is palindromic, i.e. reads the same forwards and backwards, then the polytope is self-dual. The self-dual regular polytopes are: • All regular polygons, {a}. • All regular n-simplexes, {3,3,...,3}. • The regular 24-cell in 4 dimensions, {3,4,3}. • The great 120-cell ({5,5/2,5}) and grand stellated 120-cell ({5/2,5,5/2}) in 4 dimensions. • All regular n-dimensional cubic honeycombs, {4,3,...,3,4}. These may be treated as infinite polytopes. • Hyperbolic tilings and honeycombs (tilings {p,p} with p>4 in 2 dimensions, {4,4,4}, {5,3,5}, {3,5,3}, {6,3,6}, and {3,6,3} in 3 dimensions, {5,3,3,5} in 4 dimensions, and {3,3,4,3,3} in 5 dimensions). Regular simplices Graphs of the 1-simplex to 4-simplex. Line segment Triangle Tetrahedron Pentachoron Main article: Simplex Begin with a point A. Mark point B at a distance r from it, and join to form a line segment. Mark point C in a second, orthogonal, dimension at a distance r from both, and join to A and B to form an equilateral triangle. Mark point D in a third, orthogonal, dimension a distance r from all three, and join to form a regular tetrahedron. And so on for higher dimensions. These are the regular simplices or simplexes. Their names are, in order of dimensionality: 0. Point 1. Line segment 2. Equilateral triangle (regular trigon) 3. Regular tetrahedron 4. Regular pentachoron or 4-simplex 5. Regular hexateron or 5-simplex ... An n-simplex has n+1 vertices. Measure polytopes (hypercubes) Graphs of the 2-cube to 4-cube. Square Cube Tesseract Main article: Hypercube Begin with a point A. Extend a line to point B at distance r, and join to form a line segment. Extend a second line of length r, orthogonal to AB, from B to C, and likewise from A to D, to form a square ABCD. Extend lines of length r respectively from each corner, orthogonal to both AB and BC (i.e. upwards). Mark new points E,F,G,H to form the cube ABCDEFGH. And so on for higher dimensions. These are the measure polytopes or hypercubes. Their names are, in order of dimensionality: 0. Point 1. Line segment 2. Square (regular tetragon) 3. Cube (regular hexahedron) 4. Tesseract (regular octachoron) or 4-cube 5. Penteract (regular decateron) or 5-cube ... An n-cube has 2^n vertices. Cross polytopes (orthoplexes) Graphs of the 2-orthoplex to 4-orthoplex. Square Octahedron 16-cell Main article: Orthoplex Begin with a point O.
Extend a line in opposite directions to points A and B a distance r from O and 2r apart. Draw a line COD of length 2r, centred on O and orthogonal to AB. Join the ends to form a square ACBD. Draw a line EOF of the same length and centered on 'O', orthogonal to AB and CD (i.e. upwards and downwards). Join the ends to the square to form a regular octahedron. And so on for higher dimensions. These are the cross polytopes or orthoplexes. Their names are, in order of dimensionality: 0. Point 1. Line segment 2. Square (regular tetragon) 3. Regular octahedron 4. Regular hexadecachoron (16-cell) or 4-orthoplex 5. Regular triacontakaiditeron (Pentacross) or 5-orthoplex ... An n-orthoplex has 2n vertices. History of discovery Convex polygons and polyhedra The earliest surviving mathematical treatment of regular polygons and polyhedra comes to us from ancient Greek mathematicians. The five Platonic solids were known to them. Pythagoras knew of at least three of them and Theaetetus (c. 417 BC – 369 BC) described all five. Later, Euclid wrote a systematic study of mathematics, publishing it under the title Elements, which built up a logical theory of geometry and number theory. His work concluded with mathematical descriptions of the five Platonic solids. Platonic solids: Tetrahedron, Cube, Octahedron, Dodecahedron, Icosahedron. Star polygons and polyhedra Our understanding remained static for many centuries after Euclid. The subsequent history of the regular polytopes can be characterised by a gradual broadening of the basic concept, allowing more and more objects to be considered among their number. Thomas Bradwardine (Bradwardinus) was the first to record a serious study of star polygons. Various star polyhedra appear in Renaissance art, but it was not until Johannes Kepler studied the small stellated dodecahedron and the great stellated dodecahedron in 1619 that he realised these two were regular. Louis Poinsot discovered the great dodecahedron and great icosahedron in 1809, and Augustin Cauchy proved the list complete in 1812. These polyhedra are known collectively as the Kepler-Poinsot polyhedra. Main article: Regular polyhedron § History Kepler-Poinsot polyhedra: Small stellated dodecahedron, Great stellated dodecahedron, Great dodecahedron, Great icosahedron. Higher-dimensional polytopes It was not until the 19th century that a Swiss mathematician, Ludwig Schläfli, examined and characterised the regular polytopes in higher dimensions. His efforts were first published in full in Schläfli (1901), six years posthumously, although parts of it were published in Schläfli (1855) and Schläfli (1858). Between 1880 and 1900, Schläfli's results were rediscovered independently by at least nine other mathematicians — see Coxeter (1973, pp. 143–144) for more details. Schläfli called such a figure a "polyschem" (in English, "polyscheme" or "polyschema"). The term "polytope" was introduced by Reinhold Hoppe, one of Schläfli's rediscoverers, in 1882, and first used in English by Alicia Boole Stott some twenty years later. The term "polyhedroids" was also used in earlier literature (Hilbert, 1952). Coxeter (1973) is probably the most comprehensive printed treatment of Schläfli's and similar results to date. Schläfli showed that there are six regular convex polytopes in 4 dimensions.
Five of them can be seen as analogous to the Platonic solids: the 4-simplex (or pentachoron) to the tetrahedron, the hypercube (or tesseract) to the cube, the 4-orthoplex (or hexadecachoron or 16-cell) to the octahedron, the 120-cell to the dodecahedron, and the 600-cell to the icosahedron. The sixth, the 24-cell, can be seen as a transitional form between the hypercube and 16-cell, analogous to the way that the cuboctahedron and the rhombic dodecahedron are transitional forms between the cube and the octahedron. In five and more dimensions, there are exactly three regular polytopes, which correspond to the tetrahedron, cube and octahedron: these are the regular simplices, measure polytopes and cross polytopes. Descriptions of these may be found in the list of regular polytopes. Also of interest are the star regular 4-polytopes, partially discovered by Schläfli. By the end of the 19th century, mathematicians such as Arthur Cayley and Ludwig Schläfli had developed the theory of regular polytopes in four and higher dimensions, such as the tesseract and the 24-cell. The latter are difficult (though not impossible) to visualise through a process of dimensional analogy, since they retain the familiar symmetry of their lower-dimensional analogues. The tesseract contains 8 cubical cells. It consists of two cubes in parallel hyperplanes with corresponding vertices cross-connected in such a way that the 8 cross-edges are equal in length and orthogonal to the 12+12 edges situated on each cube. The corresponding faces of the two cubes are connected to form the remaining 6 cubical faces of the tesseract. The 24-cell can be derived from the tesseract by joining the 8 vertices of each of its cubical faces to an additional vertex to form the four-dimensional analogue of a pyramid. Both figures, as well as other 4-dimensional figures, can be directly visualised and depicted using 4-dimensional stereographs.[1] Harder still to imagine are the more modern abstract regular polytopes such as the 57-cell or the 11-cell. From the mathematical point of view, however, these objects have the same aesthetic qualities as their more familiar two and three-dimensional relatives. At the start of the 20th century, the definition of a regular polytope was as follows. • A regular polygon is a polygon whose edges are all equal and whose angles are all equal. • A regular polyhedron is a polyhedron whose faces are all congruent regular polygons, and whose vertex figures are all congruent and regular. • And so on, a regular n-polytope is an n-dimensional polytope whose (n − 1)-dimensional faces are all regular and congruent, and whose vertex figures are all regular and congruent. This is a "recursive" definition. It defines regularity of higher dimensional figures in terms of regular figures of a lower dimension. There is an equivalent (non-recursive) definition, which states that a polytope is regular if it has a sufficient degree of symmetry. • An n-polytope is regular if any set consisting of a vertex, an edge containing it, a 2-dimensional face containing the edge, and so on up to n−1 dimensions, can be mapped to any other such set by a symmetry of the polytope. So for example, the cube is regular because if we choose a vertex of the cube, and one of the three edges it is on, and one of the two faces containing the edge, then this triplet, or flag, (vertex, edge, face) can be mapped to any other such flag by a suitable symmetry of the cube. 
Thus we can define a regular polytope very succinctly: • A regular polytope is one whose symmetry group is transitive on its flags. In the 20th century, some important developments were made. The symmetry groups of the classical regular polytopes were generalised into what are now called Coxeter groups. Coxeter groups also include the symmetry groups of regular tessellations of space or of the plane. For example, the symmetry group of an infinite chessboard would be the Coxeter group [4,4]. Apeirotopes — infinite polytopes Main article: Regular skew polyhedron In the first part of the 20th century, Coxeter and Petrie discovered three infinite structures {4, 6}, {6, 4} and {6, 6}. They called them regular skew polyhedra, because they seemed to satisfy the definition of a regular polyhedron — all the vertices, edges and faces are alike, all the angles are the same, and the figure has no free edges. Nowadays, they are called infinite polyhedra or apeirohedra. The regular tilings of the plane {4, 4}, {3, 6} and {6, 3} can also be regarded as infinite polyhedra. In the 1960s Branko Grünbaum issued a call to the geometric community to consider more abstract types of regular polytopes that he called polystromata. He developed the theory of polystromata, showing examples of new objects he called regular apeirotopes, that is, regular polytopes with infinitely many faces. A simple example of a skew apeirogon would be a zig-zag. It seems to satisfy the definition of a regular polygon — all the edges are the same length, all the angles are the same, and the figure has no loose ends (because they can never be reached). More importantly, perhaps, there are symmetries of the zig-zag that can map any pair of a vertex and attached edge to any other. Since then, other regular apeirogons and higher apeirotopes have continued to be discovered. Regular complex polytopes Main article: Complex polytope A complex number has a real part, which is the bit we are all familiar with, and an imaginary part, which is a multiple of the square root of minus one. A complex Hilbert space has its x, y, z, etc. coordinates as complex numbers. This effectively doubles the number of dimensions. A polytope constructed in such a unitary space is called a complex polytope.[2] Abstract polytopes Main article: Abstract polytope Grünbaum also discovered the 11-cell, a four-dimensional self-dual object whose facets are not icosahedra, but are "hemi-icosahedra" — that is, they are the shape one gets if one considers opposite faces of the icosahedra to be actually the same face (Grünbaum 1976). The hemi-icosahedron has only 10 triangular faces, and 6 vertices, unlike the icosahedron, which has 20 and 12. This concept may be easier for the reader to grasp if one considers the relationship of the cube and the hemicube. An ordinary cube has 8 corners, they could be labeled A to H, with A opposite H, B opposite G, and so on. In a hemicube, A and H would be treated as the same corner. So would B and G, and so on. The edge AB would become the same edge as GH, and the face ABEF would become the same face as CDGH. The new shape has only three faces, 6 edges and 4 corners. The 11-cell cannot be formed with regular geometry in flat (Euclidean) hyperspace, but only in positively curved (elliptic) hyperspace. A few years after Grünbaum's discovery of the 11-cell, H. S. M. Coxeter independently discovered the same shape. He had earlier discovered a similar polytope, the 57-cell (Coxeter 1982, 1984). 
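The cube-to-hemicube identification described above is easy to verify by direct computation. The following Python sketch is purely illustrative (the helper names are ours, not drawn from any work cited here): it lists the vertices, edges and faces of an ordinary cube and then merges each element with its antipodal image, recovering the 4 corners, 6 edges and 3 faces of the hemicube.

from itertools import product

# Vertices of the ordinary 3-cube as (+/-1, +/-1, +/-1) triples.
verts = list(product((-1, 1), repeat=3))

def neg(v):
    return tuple(-x for x in v)

# Edges: vertex pairs differing in exactly one coordinate (12 of them).
edges = [frozenset((u, v)) for i, u in enumerate(verts) for v in verts[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]

# Faces: for each axis and sign, the 4 vertices with that coordinate fixed (6 faces).
faces = [frozenset(v for v in verts if v[axis] == sign)
         for axis in range(3) for sign in (-1, 1)]

def identify(cell):
    # Merge a cell with its antipodal image -- the hemicube quotient.
    if isinstance(cell, tuple):                       # a single vertex
        return frozenset((cell, neg(cell)))
    return frozenset((cell, frozenset(neg(v) for v in cell)))

print(len(verts), len(edges), len(faces))             # 8 12 6  (cube)
print(len({identify(v) for v in verts}),
      len({identify(e) for e in edges}),
      len({identify(f) for f in faces}))              # 4 6 3   (hemicube)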
By 1994 Grünbaum was considering polytopes abstractly as combinatorial sets of points or vertices, and was unconcerned whether faces were planar. As he and others refined these ideas, such sets came to be called abstract polytopes. An abstract polytope is defined as a partially ordered set (poset), whose elements are the polytope's faces (vertices, edges, faces etc.) ordered by containment. Certain restrictions are imposed on the set that are similar to properties satisfied by the classical regular polytopes (including the Platonic solids). The restrictions, however, are loose enough that regular tessellations, hemicubes, and even objects as strange as the 11-cell or stranger, are all examples of regular polytopes. A geometric polytope is understood to be a realization of the abstract polytope, such that there is a one-to-one mapping from the abstract elements to the geometric. Thus, any geometric polytope may be described by the appropriate abstract poset, though not all abstract polytopes have proper geometric realizations. The theory has since been further developed, largely by McMullen & Schulte (2002), but other researchers have also made contributions. Regularity of abstract polytopes Regularity has a related, though different meaning for abstract polytopes, since angles and lengths of edges have no meaning. The definition of regularity in terms of the transitivity of flags as given in the introduction applies to abstract polytopes. Any classical regular polytope has an abstract equivalent which is regular, obtained by taking the set of faces. But non-regular classical polytopes can have regular abstract equivalents, since abstract polytopes don't care about angles and edge lengths, for example. And a regular abstract polytope may not be realisable as a classical polytope. All polygons are regular in the abstract world, for example, whereas only those having equal angles and edges of equal length are regular in the classical world. Vertex figure of abstract polytopes The concept of vertex figure is also defined differently for an abstract polytope. The vertex figure of a given abstract n-polytope at a given vertex V is the set of all abstract faces which contain V, including V itself. More formally, it is the abstract section Fn / V = {F | V ≤ F ≤ Fn} where Fn is the maximal face, i.e. the notional n-face which contains all other faces. Note that each i-face, i ≥ 0 of the original polytope becomes an (i − 1)-face of the vertex figure. Unlike the case for Euclidean polytopes, an abstract polytope with regular facets and vertex figures may or may not be regular itself – for example, the square pyramid, all of whose facets and vertex figures are regular abstract polygons. The classical vertex figure will, however, be a realisation of the abstract one. Constructions Polygons The traditional way to construct a regular polygon, or indeed any other figure on the plane, is by compass and straightedge. Constructing some regular polygons in this way is very simple (the easiest is perhaps the equilateral triangle), some are more complex, and some are impossible ("not constructible"). The simplest few regular polygons that are impossible to construct are the n-sided polygons with n equal to 7, 9, 11, 13, 14, 18, 19, 21,... Constructibility in this sense refers only to ideal constructions with ideal tools. Of course reasonably accurate approximations can be constructed by a range of methods; while theoretically possible constructions may be impractical. 
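The constructible cases are characterised by the Gauss–Wantzel theorem: a regular n-gon (n ≥ 3) can be constructed with compass and straightedge exactly when n is a power of 2 multiplied by a product of distinct Fermat primes. The short Python sketch below is only an illustration of that criterion (it checks just the five Fermat primes currently known, which suffices for any practical n); it reproduces the list of impossible values quoted above.

FERMAT_PRIMES = (3, 5, 17, 257, 65537)    # the only Fermat primes currently known

def constructible(n):
    # Gauss-Wantzel: the n-gon is constructible iff
    # n = 2^a * (product of distinct Fermat primes).
    while n % 2 == 0:
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:                # a Fermat prime may not be repeated
                return False
    return n == 1

print([n for n in range(3, 25) if not constructible(n)])
# [7, 9, 11, 13, 14, 18, 19, 21, 22, 23]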
Polyhedra Euclid's Elements gave what amount to ruler-and-compass constructions for the five Platonic solids.[3] However, the merely practical question of how one might draw a straight line in space, even with a ruler, might lead one to question what exactly it means to "construct" a regular polyhedron. (One could ask the same question about the polygons, of course.) The English word "construct" has the connotation of systematically building the thing constructed. The most common way presented to construct a regular polyhedron is via a fold-out net. To obtain a fold-out net of a polyhedron, one takes the surface of the polyhedron and cuts it along just enough edges so that the surface may be laid out flat. This gives a plan for the net of the unfolded polyhedron. Since the Platonic solids have only triangles, squares and pentagons for faces, and these are all constructible with a ruler and compass, there exist ruler-and-compass methods for drawing these fold-out nets. The same applies to star polyhedra, although here we must be careful to make the net for only the visible outer surface. If this net is drawn on cardboard, or similar foldable material (for example, sheet metal), the net may be cut out, folded along the uncut edges, joined along the appropriate cut edges, and so forming the polyhedron for which the net was designed. For a given polyhedron there may be many fold-out nets. For example, there are 11 for the cube, and over 900000 for the dodecahedron.[4] Numerous children's toys, generally aimed at the teen or pre-teen age bracket, allow experimentation with regular polygons and polyhedra. For example, klikko provides sets of plastic triangles, squares, pentagons and hexagons that can be joined edge-to-edge in a large number of different ways. A child playing with such a toy could re-discover the Platonic solids (or the Archimedean solids), especially if given a little guidance from a knowledgeable adult. In theory, almost any material may be used to construct regular polyhedra.[5] They may be carved out of wood, modeled out of wire, formed from stained glass. The imagination is the limit. Higher dimensions In higher dimensions, it becomes harder to say what one means by "constructing" the objects. Clearly, in a 3-dimensional universe, it is impossible to build a physical model of an object having 4 or more dimensions. There are several approaches normally taken to overcome this matter. The first approach, suitable for four dimensions, uses four-dimensional stereography.[1] Depth in a third dimension is represented with horizontal relative displacement, depth in a fourth dimension with vertical relative displacement between the left and right images of the stereograph. The second approach is to embed the higher-dimensional objects in three-dimensional space, using methods analogous to the ways in which three-dimensional objects are drawn on the plane. For example, the fold out nets mentioned in the previous section have higher-dimensional equivalents.[6] One might even imagine building a model of this fold-out net, as one draws a polyhedron's fold-out net on a piece of paper. Sadly, we could never do the necessary folding of the 3-dimensional structure to obtain the 4-dimensional polytope because of the constraints of the physical universe. Another way to "draw" the higher-dimensional shapes in 3 dimensions is via some kind of projection, for example, the analogue of either orthographic or perspective projection. 
Coxeter's famous book on polytopes (Coxeter 1973) has some examples of such orthographic projections.[7] Note that immersing even 4-dimensional polychora directly into two dimensions is quite confusing. Easier to understand are 3-d models of the projections. Such models are occasionally found in science museums or mathematics departments of universities (such as that of the Université Libre de Bruxelles). The intersection of a four (or higher) dimensional regular polytope with a three-dimensional hyperplane will be a polytope (not necessarily regular). If the hyperplane is moved through the shape, the three-dimensional slices can be combined, animated into a kind of four dimensional object, where the fourth dimension is taken to be time. In this way, we can see (if not fully grasp) the full four-dimensional structure of the four-dimensional regular polytopes, via such cutaway cross sections. This is analogous to the way a CAT scan reassembles two-dimensional images to form a 3-dimensional representation of the organs being scanned. The ideal would be an animated hologram of some sort, however, even a simple animation such as the one shown can already give some limited insight into the structure of the polytope. Another way a three-dimensional viewer can comprehend the structure of a four-dimensional polytope is through being "immersed" in the object, perhaps via some form of virtual reality technology. To understand how this might work, imagine what one would see if space were filled with cubes. The viewer would be inside one of the cubes, and would be able to see cubes in front of, behind, above, below, to the left and right of himself. If one could travel in these directions, one could explore the array of cubes, and gain an understanding of its geometrical structure. An infinite array of cubes is not a polytope in the traditional sense. In fact, it is a tessellation of 3-dimensional (Euclidean) space. However, a 4-polytope can be considered a tessellation of a 3-dimensional non-Euclidean space, namely, a tessellation of the surface of a four-dimensional sphere (a 4-dimensional spherical tiling). Locally, this space seems like the one we are familiar with, and therefore, a virtual-reality system could, in principle, be programmed to allow exploration of these "tessellations", that is, of the 4-dimensional regular polytopes. The mathematics department at UIUC has a number of pictures of what one would see if embedded in a tessellation of hyperbolic space with dodecahedra. Such a tessellation forms an example of an infinite abstract regular polytope. Normally, for abstract regular polytopes, a mathematician considers that the object is "constructed" if the structure of its symmetry group is known. This is because of an important theorem in the study of abstract regular polytopes, providing a technique that allows the abstract regular polytope to be constructed from its symmetry group in a standard and straightforward manner. Regular polytopes in nature For examples of polygons in nature, see: Main article: Polygon Each of the Platonic solids occurs naturally in one form or another: Main article: Regular polyhedron See also • List of regular polytopes • Johnson solid • Bartel Leendert van der Waerden References Notes 1. Brisson, David W. (2019) [1978]. "Visual Comprehension in n-Dimensions". In Brisson, David W. (ed.). Hypergraphics: Visualizing Complex Relationships In Arts, Science, And Technololgy. AAAS Selected Symposium. Vol. 24. Taylor & Francis. pp. 109–145. ISBN 978-0-429-70681-3. 
2. Coxeter (1974) 3. See, for example, Euclid's Elements. 4. Some interesting fold-out nets of the cube, octahedron, dodecahedron and icosahedron are available here. 5. Instructions for building origami models may be found here, for example. 6. Some of these may be viewed at . 7. Other examples may be found on the web (see for example ). Bibliography • Coxeter, H.S.M. (1973) [1948]. Regular Polytopes (3rd ed.). Dover. ISBN 0-486-61480-8. • — (1974). Regular Complex Polytopes. Cambridge University Press. ISBN 052120125X. • — (1991). Regular Complex Polytopes (2nd ed.). Cambridge University Press. ISBN 978-0-521-39490-1. • Cromwell, Peter R. (1999). Polyhedra. Cambridge University Press. ISBN 978-0-521-66405-9. • Euclid (1956). Elements. Translated by Heath, T. L. Cambridge University Press. • Grünbaum, B. (1976). Regularity of Graphs, Complexes and Designs. Problèmes Combinatoires et Théorie des Graphes, Colloquium Internationale CNRS, Orsay. Vol. 260. pp. 191–197. • Grünbaum, B. (1993). "Polyhedra with hollow faces". In Bisztriczky, T.; et al. (eds.). POLYTOPES: abstract, convex, and computational. Mathematical and physical sciences, NATO Advanced Study Institute. Vol. 440. Kluwer Academic. pp. 43–70. ISBN 0792330161. • McMullen, P.; Schulte, S. (2002). Abstract Regular Polytopes. Cambridge University Press. • Sanford, V. (1930). A Short History Of Mathematics. The Riverside Press. • Schläfli, L. (1855). "Réduction d'une intégrale multiple, qui comprend l'arc de cercle et l'aire du triangle sphérique comme cas particuliers". Journal de Mathématiques. 20: 359–394. • Schläfli, L. (1858). "On the multiple integral ∫^ n dxdy... dz, whose limits are p_1= a_1x+ b_1y+…+ h_1z> 0, p_2> 0,..., p_n> 0, and x^ 2+ y^ 2+…+ z^ 2< 1". Quarterly Journal of Pure and Applied Mathematics. 2: 269–301. 3 (1860) pp54–68, 97–108. • Schläfli, L. (1901). "Theorie der vielfachen Kontinuität". Denkschriften der Schweizerischen Naturforschenden Gesellschaft. 38: 1–237. • Smith, J. V. (1982). Geometrical and Structural Crystallography (2nd ed.). Wiley. ISBN 0471861685. • Van der Waerden, B. L. (1954). Science Awakening. Translated by Dresden, Arnold. P Noordhoff. • Sommerville, D.M.Y. (2020) [1930]. "X. The Regular Polytopes". Introduction to the Geometry of n Dimensions. Courier Dover. pp. 159–192. ISBN 978-0-486-84248-6. External links • The Atlas of Small Regular Polytopes - List of abstract regular polytopes.
\begin{document} \title{Decompositions of Unit Hypercubes and the Reversion of a Generalized M\"obius Series} \maketitle \begin{abstract} Let $s_d(n)$ be the number of distinct decompositions of the $d$-dimensional hypercube with $n$ rectangular regions that can be obtained via a sequence of splitting operations. We prove that the generating series $y = \sum_{n \geq 1} s_d(n)x^n$ satisfies the functional equation $x = \sum_{n\geq 1} \mu_d(n)y^n$, where $\mu_d(n)$ is the $d$-fold Dirichlet convolution of the M\"obius function. This generalizes a recent result by Goulden et al., and shows that $s_1(n)$ also gives the number of natural exact covering systems of $\mathbb{Z}$ with $n$ residual classes. We also prove an asymptotic formula for $s_d(n)$ and describe a bijection between $1$-dimensional decompositions and natural exact covering systems. \end{abstract} \section{Introduction}\label{Sec:01} \subsection{Decomposing the $d$-dimensional unit hypercube} Suppose we start with $(0,1)^d$, the $d$-dimensional unit hypercube, and iteratively perform the following operation: \begin{itemize} \item choose a region in the current decomposition, a coordinate $i \in \set{1, \ldots, d}$, and an arity $p \geq 2$; \item partition the selected region into $p$ equal smaller regions with cuts orthogonal to the $i^{\textnormal{th}}$ axis. \end{itemize} \begin{figure} \caption{One way to partition the unit square $(0,1)^2$ into 8 regions.} \label{Fig:0101} \end{figure} For example, Figure~\ref{Fig:0101} illustrates one way to decompose the unit square $(0,1)^2$ into $8$ regions using a sequence of $4$ partitions. We are interested in the following question: Given integers $d,n \geq 1$, how many distinct decompositions of $(0,1)^d$ are there with $n$ regions? In Figure~\ref{Fig:0102}, we list all the possible decompositions of $(0,1)^2$ with $n \leq 4$ regions. \begin{figure} \caption{Elements in $\mathcal{S}_{2,n}$ for $n \leq 4$.} \label{Fig:0102} \end{figure} Let us introduce some notation so we can discuss these decompositions more precisely. Let $\mathbb{N} = \set{1, 2, \ldots}$ denote the set of natural numbers, and $[n] = \set{1, 2, \ldots, n}$ for every $n \in \mathbb{N}$. Given a region \[ R = ( a_1, b_1 ) \times \cdots \times ( a_d, b_d ) \subseteq (0,1)^d \] (all regions mentioned in this manuscript will be rectangular), a coordinate $i \in [d]$, and an arity $p \geq 2$, define \[ c_j = a_i + \frac{j}{p} (b_i-a_i) \] for every $j \in \set{0,1,\ldots, p}$, and \[ H_{i,p}(R) = \set{ \set{x \in R : c_{j-1} < x_i < c_{j}} : j \in [p]}. \] Thus, $H_{i,p}(R)$ is a set consisting of the $p$ regions obtained from splitting $R$ along the $i^{\textnormal{th}}$ coordinate. Then we define the set of hypercube decompositions $\mathcal{S}_{d}$ recursively as follows: \begin{itemize} \item $\set{(0,1)^d} \in \mathcal{S}_d$ --- this is the trivial decomposition with one region, the entire unit hypercube. \item For every $S \in \mathcal{S}_d$, region $R \in S$, coordinate $i \in [d]$, and arity $p \geq 2$, \[ (S \setminus \set{R}) \cup H_{i,p}(R) \in \mathcal{S}_d. \] \end{itemize} In other words, $\mathcal{S}_d$ consists of the decompositions of $(0,1)^d$ that can be achieved by a sequence of splitting operations. In particular, the coordinate and arity used in each splitting operation are arbitrary and can vary over the splitting sequence.
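For small parameters, the decompositions in $\mathcal{S}_d$ can be enumerated exhaustively, which gives a quick sanity check on the counts, by number of regions, taken up next. The Python sketch below is included purely as an illustration (the function names are ours): it represents a decomposition as a set of boxes with exact rational endpoints and applies every admissible split until no decomposition with at most $n_{\max}$ regions is missed.
\begin{verbatim}
from fractions import Fraction

def split(region, axis, p):
    # Split one box into p equal parts along the given axis.
    lo, hi = region[axis]
    return [region[:axis] + ((lo + (hi - lo) * j / p, lo + (hi - lo) * (j + 1) / p),)
            + region[axis + 1:] for j in range(p)]

def count_decompositions(d, n_max):
    # Return [s_d(1), ..., s_d(n_max)] by exhaustive search over splitting sequences.
    unit = ((Fraction(0), Fraction(1)),) * d
    seen = {frozenset({unit})}
    frontier = set(seen)
    while frontier:
        nxt = set()
        for S in frontier:
            room = n_max - len(S)                 # extra regions we may still add
            for R in S:
                for axis in range(d):
                    for p in range(2, room + 2):  # a p-split adds p - 1 regions
                        T = (S - {R}) | frozenset(split(R, axis, p))
                        if T not in seen:
                            seen.add(T)
                            nxt.add(T)
        frontier = nxt
    counts = [0] * (n_max + 1)
    for S in seen:
        counts[len(S)] += 1
    return counts[1:]

print(count_decompositions(1, 6))   # [1, 1, 3, 10, 39, 160]
print(count_decompositions(2, 4))   # [1, 2, 10, 59]
\end{verbatim}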
Furthermore, given $S \in \mathcal{S}_d$ we let $|S|$ denote the number of regions in $S$, and define \[ \mathcal{S}_{d,n} = \set{ S \in \mathcal{S}_d : |S| = n} \] for every integer $n \geq 1$. Note that distinct splitting sequences can result in the same decomposition. For an example, the decomposition $S = H_{1,6}( (0,1) ) \in \mathcal{S}_{1,6}$ can be obtained from simply $6$-splitting the unit interval $(0,1)$, or $2$-splitting $(0,1)$ followed by $3$-splitting each of $(0,1/2)$ and $(1/2,1)$. We are interested in enumerating the number of decompositions $s_d(n) = |\mathcal{S}_{d,n}|$. For small values of $d$, we obtain the following sequences: \[ \begin{array}{l|rrrrrrrrrrr} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \cdots \\ \hline \href{https://oeis.org/A050385}{s_1(n)} & 1 & 1 & 3 & 10 & 39 & 160 & 691 & 3081 & 14095 & 65757 & \cdots \\ s_2(n) & 1 & 2 & 10 & 59 & 394 & 2810 & 20998 & 162216 & 1285185 & 10384986 & \cdots \\ s_3(n) & 1 & 3 & 21 & 177 & 1677 & 17001 & 180525 & 1981909 & 22314339 & 256245783 & \cdots \end{array} \] Note that the hyperlink at $s_1(n)$ in the table above directs to the sequence's entry in the On-Line Encyclopedia of Integer Sequences (OEIS)~\cite{OEIS}. Similar hyperlinks are placed throughout this manuscript for all integer sequences that are already in the OEIS at the time of this writing. \subsection{A generalized M\"obius function} To state our main result, we will need to describe the following generalization of the well-studied M\"obius function. Given $n \in \mathbb{N}$, the \emph{M\"{o}bius function} is defined to be \[ \mu(n) = \begin{cases} (-1)^k & \textnormal{if $n$ is a product of $k$ distinct primes;}\\ 0 & \textnormal{if $p^2 | n$ for some $p>1$.} \end{cases} \] (The convention is that $\mu(1) = 1$.) A well-known property of $\mu(n)$ is that \begin{equation}\label{Eq:0101} \sum_{i | n} \mu(i) = \begin{cases} 1 & \textnormal{if $n=1$;}\\ 0 & \textnormal{if $n > 1$.} \end{cases} \end{equation} More generally, given two arithmetic functions $\alpha, \beta : \mathbb{N} \to \mathbb{R}$, their \emph{Dirichlet convolution} $\alpha \ast \beta : \mathbb{N} \to \mathbb{R}$ is the function \[ (\alpha \ast \beta)(n) = \sum_{i | n} \alpha(i) \beta \left(\frac{n}{i}\right). \] For an example, let $\mathbbm{1} : \mathbb{N} \to \mathbb{R}$ be the function where $\mathbbm{1}(n) = 1$ for all $n \geq 1$, and let $\delta : \mathbb{N} \to \mathbb{R}$ denote the function where $\delta(1)=1$ and $\delta(n) = 0$ for all $n \geq 2$. Then \eqref{Eq:0101} can be equivalently stated as \[ \mu \ast \mathbbm{1} = \delta. \] Next, given integer $d \geq 1$, we define the \emph{$d$-fold M\"obius function} where \[ \mu_d = \underbrace{ \mu \ast \mu \ast \cdots \ast \mu}_{\textnormal{$d$ times}}. \] The following table gives the first few terms of $\mu_d(n)$ for $d \leq 3$. \[ \begin{array}{l|rrrrrrrrrrrrrrrr} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9& 10 &11 & 12 & 13 & 14 & 15& \cdots \\ \hline \href{https://oeis.org/A008683}{\mu_1(n)} & 1 & -1 & -1 & 0 & -1 & 1 & -1 & 0& 0 & 1 & -1 & 0& -1& 1& 1& \cdots \\ \href{https://oeis.org/A007427}{\mu_2(n)} & 1 & -2 & -2 & 1 & -2 & 4 & -2 & 0& 1 & 4 & -2 &-2 & -2& 4 & 4 & \cdots \\ \mu_3(n) & 1 & -3 & -3 & 3 & -3 & 9 & -3 & -1& 3 & 9 & -3 & -9 & -3 & 9 & 9& \cdots \end{array} \] The generalized M\"obius function $\mu_d$ was independently discovered several times, first by Fleck in 1915~\cite[Section 2.2]{SandorC04}. 
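The entries of the table above are easy to regenerate. The following Python sketch (an illustration only; the helper names are ours) computes $\mu_d$ both by iterated Dirichlet convolution and by the closed-form product over prime powers given in the lemma that follows, and checks that the two computations agree.
\begin{verbatim}
from math import comb, prod

def factorint(n):
    # Prime factorisation of n as a dictionary {prime: multiplicity}.
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = factorint(n)
    return 0 if any(m > 1 for m in f.values()) else (-1) ** len(f)

def mu_d(d, n):
    # d-fold Dirichlet convolution of the Moebius function, by recursion on d.
    if d == 1:
        return mobius(n)
    return sum(mobius(i) * mu_d(d - 1, n // i) for i in range(1, n + 1) if n % i == 0)

def mu_d_closed(d, n):
    # Closed form: product over the prime powers p^m exactly dividing n of (-1)^m C(d, m).
    return prod((-1) ** m * comb(d, m) for m in factorint(n).values())

print([mu_d(2, n) for n in range(1, 16)])
# [1, -2, -2, 1, -2, 4, -2, 0, 1, 4, -2, -2, -2, 4, 4]
assert all(mu_d(d, n) == mu_d_closed(d, n) for d in (1, 2, 3) for n in range(1, 200))
\end{verbatim}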
The reader may refer to~\cite{SandorC04} for the historical developments of this (and other) generalizations of $\mu$. Next, we state two formulas for $\mu_d(n)$ that will be useful subsequently. \begin{lemma} For all integers $d,n \geq 1$, \begin{equation}\label{Eq:0102} \mu_d(n) = \sum_{\substack{ n_1, \ldots, n_d \geq 1 \\ \prod_{i=1}^d n_i = n}} \prod_{i=1}^d \mu(n_i). \end{equation} Moreover, suppose $n$ has prime factorization $n=p_1^{m_1} p_2^{m_2} \cdots p_k^{m_k}$. Then \begin{equation}\label{Eq:0103} \mu_d(n) = \prod_{i=1}^k (-1)^{m_i} \binom{d}{m_i}. \end{equation} \end{lemma} \begin{proof} \eqref{Eq:0102} follows immediately from the definition of the Dirichlet convolution and a simple induction on $d$. For~\eqref{Eq:0103}, one can use induction on $d$ to show that $\mu_d(p_1^{m_1}) = (-1)^{m_1} \binom{d}{m_1}$ for all prime powers $p_1^{m_1}$, and then use the facts that \begin{itemize} \item $\mu$ is multiplicative (i.e., $\mu(ab) = \mu(a)\mu(b)$ if $a,b$ are coprime); \item if $\alpha,\beta$ are multiplicative, so is their Dirichlet convolution $\alpha \ast \beta$. \end{itemize} Thus, $\mu_d$ is multiplicative as well, and so it follows that \[ \mu_d(n) = \mu_d(p_1^{m_1} \cdots p_k^{m_k}) = \prod_{i=1}^k (-1)^{m_i} \binom{d}{m_i}. \] \end{proof} The main result of this manuscript is the following. \begin{theorem}\label{Thm:0101} Given $d \in \mathbb{N}$, define $y = \sum_{n \geq 1} s_d(n)x^n$. Then $y$ satisfies the functional equation \begin{equation} x = \sum_{n \geq 1} \mu_d(n)y^n. \end{equation} \end{theorem} \subsection{Natural exact covering systems}\label{Sec:0103} Theorem~\ref{Thm:0101} generalizes a recent result on natural exact covering systems, which we describe here. Given $a \in \mathbb{Z}, n \in \mathbb{N}$, we let $\res{a}{n}$ denote the residue class $\set{ x \in \mathbb{Z}: x \equiv a~\textnormal{mod}~n}$. Given a residue class $\res{a}{n}$, $r \in \mathbb{N}$, and $i \in \set{0,1,\ldots, r-1}$, define \[ E_{i,r} (\res{a}{n}) = \res{in + a}{rn}. \] Observe that the classes \begin{equation}\label{Eq:010a} \set{ E_{i,r}(\res{a}{n}) : i \in \set{0, \ldots, r-1}} \end{equation} partition $\res{a}{n}$, and so we can think of~\eqref{Eq:010a} as an $r$-splitting of $\res{a}{n}$. We then define the set of natural exact covering systems (NECS) $\mathcal{C}$ recursively as follows: \begin{itemize} \item $\set{\res{0}{1}} \in \mathcal{C}$ --- this is the trivial NECS with one residual class. \item For all $C \in \mathcal{C}$, residue class $\res{a}{n} \in C$, and integer $r \geq 2$, \[ (C \setminus \set{ \res{a}{n} }) \cup \set{ E_{i,r}(\res{a}{n}) : i \in \set{0,\ldots, r-1}}\in \mathcal{C}. \] \end{itemize} Notice that $\res{0}{1} = \mathbb{Z}$, and each NECS $C \in \mathcal{C}$ consists of a collection of residue classes that partition $\mathbb{Z}$. Also, given $n \in \mathbb{N}$, let $\mathcal{C}_{n} \subseteq \mathcal{C}$ denote the set of NECS with exactly $n$ residue classes, and let $c(n) = |\mathcal{C}_n|$. For example, \[ \set{\res{0}{4}, \res{2}{8}, \res{6}{8}, \res{1}{6}, \res{3}{6}, \res{5}{6}} \] is an element in $\mathcal{C}_6$ as it can be obtained from $2$-splitting $\res{0}{1}$, then $2$-splitting $\res{0}{2}$, then $2$-splitting $\res{2}{4}$, then $3$-splitting $\res{1}{2}$. Recently, Goulden, Granville, Richmond, and Shallit~\cite{GouldenGRS18} showed the following: \begin{theorem}\label{Thm:0102} Let $y = \sum_{n \geq 1} c(n) x^n$. Then $y$ satisfies the functional equation \[ x = \sum_{n \geq 1} \mu(n)y^n.
\] \end{theorem} Not only does Theorem~\ref{Thm:0102} provide a formula for the number of NECS with a given residual classes, it also shows that the generating series with coefficients $c(n)$ is exactly the compositional inverse of the familiar M\"obius power series. In that light, Theorem~\ref{Thm:0101} is a generalization of Theorem~\ref{Thm:0102}, as we provide a formula that gives $s_d(n)$ while also showing that they are the coefficients of the compositional inverse of the generalized M\"obius series. Moreover, Theorem~\ref{Thm:0101} implies that $s_1(n) = c(n)$ for every $n \geq 1$ (i.e., the number of ways to split $(0,1)$ into $n$ subintervals using a sequence of splitting operations is equal to the number of NECS of $\mathbb{Z}$ with $n$ classes). To the best of our knowledge, NECS first appeared in the mathematical literature in Porubsk{\`y}'s work in 1974~\cite{Porubsky74}. More broadly, a set of residual classes $C = \set{ \res{a_i}{n_i} : i \in [k]}$ is a \emph{covering system} of $\mathbb{Z}$ if $\bigcup_{i=1}^k \res{a_i}{n_i} = \mathbb{Z}$ (i.e., the residual classes need not be disjoint or result from a sequence of splitting operations). An ample amount of literature has been dedicated to studying covering systems since they were introduced by Erd\H{o}s~\cite{Erdos50}. The reader may refer to~\cite{PorubskyS02, Sun21, ZaleskiZ20, Znam82} and others for broader expositions on the topic. \subsection{Roadmap of the manuscript} This manuscript is organized as follows. In Section~\ref{Sec:02}, we introduce some helpful notions (such as the gcd of a decomposition) and prove Theorem~\ref{Thm:0101}. Then in Section~\ref{Sec:03}, we take a small detour to study $a_d(n)$, the coefficients of the multiplicative inverse of the generalized M\"obius series. We establish a combinatorial interpretation for $a_d(n)$, and prove some results that we'll rely on in Section~\ref{Sec:04}, where we prove an asymptotic formula for $s_d(n)$ and study its growth rate. Finally, we establish a bijection between $\mathcal{S}_{1,n}$ and $\mathcal{C}_n$ in Section~\ref{Sec:05}, and give an example of how studying hypercube decompositions can lead to results on NECS. \section{Proof of Theorem~\ref{Thm:0101}}\label{Sec:02} In this section, we prove Theorem~\ref{Thm:0101} using a $d$-fold variant of M\"obius inversion. We remark that the structure of our proof is somewhat similar to Goulden et al.'s corresponding argument for NECS~\cite[Theorem 3]{GouldenGRS18}. Given integers $r_1, \ldots, r_d \in \mathbb{N}$, let $D_{(r_1, \ldots, r_d)} \in \mathcal{S}_d$ denote the decomposition obtained by \begin{itemize} \item starting with $(0,1)^d$; \item for $i= 1,\ldots, d$, $r_i$-split all regions in the current decomposition in coordinate $i$. \end{itemize} \begin{figure} \caption{Examples of decompositions of the form $D_{(r_1,\ldots, r_n)}$.} \label{Fig:0201} \end{figure} Figure~\ref{Fig:0201} illustrates a few examples of these decompositions. Notice that $D_{(r_1, \ldots, r_d)}$ has $\prod_{i=1}^d r_i$ regions of identical shape and size. Next, given $S,S' \in \mathcal{S}_d$, we say that $S'$ \emph{refines} $S$ --- denoted $S' \succeq S$ --- if $S'$ can be obtained from $S$ by a (possibly empty) sequence of splitting operations. For example, in Figure~\ref{Fig:0202}, we have $S_3 \succeq S_1$ and $S_3 \succeq S_2$, while $S_1, S_2$ are incomparable. It is not hard to see that $\succeq$ imposes a partial order on $\mathcal{S}_d$. 
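As a concrete illustration, the decomposition $D_{(r_1, \ldots, r_d)}$ can be written down directly as a grid of boxes; the short Python sketch below (illustrative only, with an ad hoc helper name) builds it and confirms that it consists of $\prod_{i=1}^d r_i$ congruent regions.
\begin{verbatim}
from fractions import Fraction
from itertools import product

def grid_decomposition(r):
    # D_{(r_1,...,r_d)}: coordinate i is cut into r_i equal slices.
    return frozenset(
        tuple((Fraction(j, r[i]), Fraction(j + 1, r[i])) for i, j in enumerate(idx))
        for idx in product(*(range(ri) for ri in r)))

D = grid_decomposition((3, 2))
print(len(D))                                             # 6 = 3 * 2 regions
print({tuple(hi - lo for lo, hi in cell) for cell in D})  # a single shape: 1/3 by 1/2
\end{verbatim}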
\begin{figure} \caption{Illustrating the refinement relation $\succeq$ on $\mathcal{S}_d$.} \label{Fig:0202} \end{figure} Also, it will be convenient to define the notion of scaling. Given regions $R, R' \subseteq (0,1)^d$, notice there is a unique affine function $L_{R \to R'}: \mathbb{R}^d \to \mathbb{R}^d$ that preserves the lexicographical order of vectors, while satisfying $\set{ L_{R \to R'}(x) : x \in R} = R'$. More explicitly, let \begin{align*} R &= (a_1, b_1) \times \cdots \times (a_d, b_d), \\ R' &= (a'_1, b'_1) \times \cdots \times (a'_d, b'_d). \end{align*} Then $L_{R \to R'} : \mathbb{R}^d \to \mathbb{R}^d$ where \[ \left[L_{R \to R'}(x) \right]_i = a'_i + \frac{b'_i-a'_i}{b_i-a_i}(x_i-a_i) \] for every $i \in [d]$ is the function that satisfies the aforementioned properties. Next, given regions $R, R', R'' \subseteq (0,1)^d$, we define \[ \scale_{R \to R'}(R'') = \set{L_{R \to R'}(x) : x \in R''}. \] We will always apply this scaling function in cases where $R'' \subseteq R$. Thus, intuitively, $\scale_{R \to R'}(R'')$ returns a region that is contained in $R'$ such that the relative position of $\scale_{R \to R'}(R'')$ inside $R'$ is the same as that of $R''$ inside $R$. More importantly, given a decomposition $S \in \mathcal{S}_d$, \[ \set{ \scale_{(0,1)^d \to R'}(R'') : R'' \in S} \] is a collection of sets that can be obtained from iteratively splitting $R'$. Conversely, if $\set{R_1,\ldots, R_n}$ is a collection of sets obtained from applying a sequence of splitting operations to region $R$, then \[ \set{ \scale_{R \to (0,1)^d}(R_i) : i \in [n]} \] is an element of $\mathcal{S}_{d,n}$. We then have the following: \begin{lemma}\label{Lem:0201} Let $y = \sum_{n \geq 1} s_{d}(n) x^n$. Then for all integers $r_1,\ldots, r_d \in \mathbb{N}$, \[ \sum_{\substack{S \in \mathcal{S}_d \\ S \succeq D_{(r_1, \ldots, r_d)}}} x^{|S|} = y^{ \prod_{i=1}^d r_i}. \] \end{lemma} \begin{proof} For convenience, let $n = \prod_{i=1}^d r_i$. First, observe the following bijection between $\set{S \in \mathcal{S}_d : S \succeq D_{(r_1, \ldots, r_d)}}$ and $\mathcal{S}_d^{n}$: Let $B_1, \ldots, B_n$ be the regions in $D_{(r_1, \ldots, r_d)}$. Then, given $n$ decompositions $S_1, \ldots, S_n \in \mathcal{S}_d$, \[ S = \bigcup_{j=1}^n \set{ \scale_{(0,1)^d \to B_j}(R) : R \in S_j} \] gives a decomposition of $(0,1)^d$ that refines $D_{(r_1, \ldots, r_d)}$. On the other hand, given $S \in \mathcal{S}_d$ that refines $D_{(r_1,\ldots, r_d)}$, define $D_j \subseteq S$ such that \[ D_j = \set{ R \in S : R \subseteq B_j} \] for every $j \in [n]$. By assumption that $S \succeq D_{(r_1,\ldots,r_d)}$, we know that $D_j$ can be obtained from applying a sequence of splitting operations to $B_j$ for every $j$. Hence, each of \[ S_j = \set{ \scale_{B_j \to (0,1)^d} (R) : R \in D_j} \] is an element of $\mathcal{S}_d$. Doing so for each of $D_1, \ldots, D_n$ results in an $n$-tuple of decompositions that corresponds to $S$. Thus, it follows that \[ y^n = \sum_{S_1, \ldots, S_n \in \mathcal{S}_d} x^{\sum_{i=1}^n |S_i|} = \sum_{\substack{ S \in \mathcal{S}_d \\ S \succeq D_{(r_1, \ldots, r_d)}}} x^{|S|}. \] \end{proof} Next, given $S \in \mathcal{S}_d$, we say that $\gcd(S) = (r_1, \ldots, r_d)$ if \begin{itemize} \item $S \succeq D_{(r_1, \ldots, r_d)}$; \item there does not exist $(r_1',\ldots, r_d') \neq (r_1, \ldots, r_d)$ where $(r_1',\ldots, r_d') \geq (r_1, \ldots, r_d)$ and $S \succeq D_{(r_1', \ldots, r_d')}$.
\end{itemize} For example, in Figure~\ref{Fig:0202}, $\gcd(S_1) = \gcd(S_3) = (3,2)$, while $\gcd(S_2) = (3,1)$. As we shall see later, this notion of the gcd of a decomposition corresponds well to the existing notion of the gcd of an NECS. Next, define \[ s_{d, (r_1,\ldots, r_d)}(n) = \left| \set{S \in \mathcal{S}_{d,n} : \gcd(S) = (r_1, \ldots, r_d) } \right|, \] and the series \[ S_{d,(r_1,\ldots, r_d)}(x) = \sum_{n \geq 1} s_{d, (r_1,\ldots, r_d)}(n) x^n. \] In particular, consider the case when $r_1 = \cdots = r_d = 1$. If $S \in \mathcal{S}_{d,n}$ for some $n \geq 2$, then the splitting sequence for $S$ is non-empty, and so the $\gcd(S) \neq (1, \ldots, 1)$. Thus, we see that the only decomposition $S$ where $\gcd(S) = (1, \ldots, 1)$ is the trivial decomposition $\set{ (0,1)^d}$. This implies that \[ s_{d, (1, \ldots, 1)}(n) = \begin{cases} 1 & \textnormal{if $n=1$;}\\ 0 & \textnormal{otherwise.} \end{cases} \] Hence, it follows that $S_{d,(1,\ldots, 1)}(x) = x$. For general $r_1, \ldots, r_d$, we have the following. \begin{lemma}\label{Lem:0202} For all $d,n, r_1, \ldots, r_d \in \mathbb{N}$, \[ \sum_{\substack{S \in \mathcal{S}_d \\ S \succeq D_{(r_1, \ldots, r_d)}}} x^{|S|} = \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1r_1,\ldots, a_dr_d)}(x) \] \end{lemma} \begin{proof} It suffices to show that, given fixed $d,n$, and $r_1, \ldots, r_d$, \[ \set{ S \in \mathcal{S}_{d,n} : S \succeq D_{(r_1, \ldots, r_d)}} = \bigcup_{a_1, \ldots, a_d \geq 1} \set{ S \in \mathcal{S}_{d,n} : \gcd(S) = (a_1r_1, \ldots, a_dr_d)}, \] since the sets making up the union on the right hand side are mutually disjoint. For $\supseteq$, notice that if $\gcd(S) = (a_1r_1, \ldots, a_dr_d)$, then $S \succeq D_{(r_1, \ldots, r_d)}$. It is also obvious that $D_{(a_1r_1, \ldots, a_dr_d)} \succeq D_{(r_1, \ldots, r_d)}$ for all $a_1, \ldots, a_d \geq 1$. Since $\succeq$ is a transitive relation, the containment holds. We next prove $\subseteq$. Let $S \succeq D_{(r_1, \ldots, r_d)}$ and suppose $\gcd(S) = (b_1,\ldots, b_d)$. Therefore, for every coordinate $i \in [d]$, $S \succeq \set{ H_{i, r_i}((0,1)^d)}$ and $S \succeq \set{ H_{i, b_i}((0,1)^d)}$. Thus, if we let $\ell_i$ be the least common multiple of $r_i$ and $b_i$, it follows that $S \succeq \set{ H_{i, \ell_i}((0,1)^d)}$. Since $b_i$ was chosen maximally (by the definition of gcd), it must be that $b_i = \ell_i$ for every $i$. This means that $r_i | b_i$, and so there exists $a_i \in \mathbb{N}$ where $b_i = a_ir_i$. This finishes our proof. \end{proof} With Lemmas~\ref{Lem:0201} and~\ref{Lem:0202}, we are now ready to prove a result that is slightly more general than Theorem~\ref{Thm:0101}. \begin{theorem}\label{Thm:0201} Let $d, r_1,\ldots, r_d \in \mathbb{N}$, and $y = \sum_{n \geq 1} s_d(n)x^n$. Then \[ S_{d, (r_1,\ldots, r_d)}(x) = \sum_{n \geq 1} \mu_d(n) y^{\left(\prod_{i=1}^d r_i\right)n}. \] \end{theorem} \begin{proof} From Lemmas~\ref{Lem:0201} and~\ref{Lem:0202}, we obtain that for every fixed $n \geq 1$, \begin{equation}\label{Eq:0201} y^{\left(\prod_{i=1}^d r_i \right)n} = \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1r_1,\ldots, a_dr_dn)}(x). \end{equation} In fact, we see that $y^{\left(\prod_{i=1}^d r_i \right)n} = \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1q_1,\ldots, a_dq_d)}(x)$ for all $q_1, \ldots, q_d \in \mathbb{N}$ where \begin{equation}\label{Eq:0202} \prod_{i=1}^d q_i = \left(\prod_{i=1}^d r_i \right)n. \end{equation} Now we multiply both sides of~\eqref{Eq:0201} by $\mu_d(n)$ and sum them over $n \geq 1$. 
On the left hand side, we obtain \[ \sum_{n \geq 1} \mu_d(n) y^{\left(\prod_{i=1}^d r_i\right)n}. \] For the right hand side, we have \begin{align*} & \sum_{n \geq 1} \mu_d(n) \left( \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1r_1,\ldots, a_dr_dn)}(x) \right)\\ \overset{\eqref{Eq:0102}}{=}{}& \sum_{n \geq 1} \sum_{\substack{n_1, \ldots, n_d \geq 1 \\ n_1n_2\cdots n_d=n}} \prod_{i=1}^d \mu(n_i)\left( \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1r_1,\ldots, a_dr_dn)}(x) \right)\\ ={}& \sum_{n_1, \ldots, n_d \geq 1} \prod_{i=1}^d \mu(n_i)\left( \sum_{a_1, \ldots, a_d \geq 1} S_{d,(a_1r_1,\ldots, a_dr_d \left( \prod_{i=1}^d n_i \right))}(x) \right)\\ \overset{\eqref{Eq:0202}}{=}{}& \sum_{n_1, \ldots, n_d \geq 1} \prod_{i=1}^d \mu(n_i)\left( \sum_{a_1, \ldots, a_d \geq 1} S_{d,(n_1a_1r_1,\ldots, n_da_dr_d)}(x) \right)\\ ={}& \sum_{b_1, \ldots, b_d \geq 1} S_{d,(b_1r_1,\ldots, b_dr_d)}(x) \left( \sum_{n_1 | b_1} \mu(n_1) \right) \cdots \left( \sum_{n_d | b_d} \mu(n_d) \right) \\ \overset{\eqref{Eq:0101}}{=}{}& S_{d, (r_1,\ldots, r_d)}(x). \end{align*} This finishes the proof. \end{proof} When $r_1= \cdots = r_d = 1$, Theorem~\ref{Thm:0201} specializes to $x = \sum_{n \geq 1} \mu_d(n) y^n$, and thus Theorem~\ref{Thm:0101} follows as a consequence. \section{A Small Detour: The Sequences $a_d(n)$}\label{Sec:03} Observe that, if we define the series \[ M_d(z) = \sum_{n \geq 1} \mu_d(n)z^n, \] then the functional equation in Theorem~\ref{Thm:0101} can be restated simply as $M_d(y) = x$. We have shown earlier that $s_d(n)$ gives the coefficient of $x^n$ in $y$, hence finding a combinatorial interpretation of the coefficients of the compositional inverse of the series $M_d(z)$. In this section, we'll mainly focus on the coefficients of what's essentially the multiplicative inverse of $M_d(z)$. For every $d \geq 1$ and $n \geq 0$, define \begin{equation}\label{Eq:0301} a_d(n) = [z^n] \frac{z}{M_d(z)}. \end{equation} Then it follows that \[ M_d(z) \left( \sum_{i \geq 0} a_d(i)z^i \right) = z ~\implies~ \sum_{k = 1}^n \mu_d(k) a_d(n-k) = [z^n] z. \] Since $\mu_d(1) = 1$ for all $d$, $a_d(n)$ satisfies the recurrence relation \begin{equation}\label{Eq:0302} a_d(n) = \sum_{k =2}^{n+1} -\mu_d(k)a_d(n+1-k) \end{equation} for all $n \geq 1$, with the initial condition $a_d(0) = 1$. The following table gives some values of $a_d(n)$ for small $d$ and $n$: \[ \begin{array}{l|rrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 &\cdots \\ \hline \href{https://oeis.org/A073776}{a_1(n)} & 1 & 1 & 2 & 3 & 6 & 9 & 17 & 28 & 50 & 83 & 147 & \cdots \\ a_2(n) & 1 & 2 & 6 & 15 & 42 & 108 & 291 & 766 & 2041 & 5395& 14328 & \cdots \\ a_3(n) & 1 & 3 & 12 & 42 & 156 & 558 & 2028 & 7318 & 26490 & 95730 & 346218 & \cdots \end{array} \] The main goal of this section is to establish two results (Proposition~\ref{Prop:0301} and Lemma~\ref{Lem:0303}) that we will need when we study the asymptotic behavior and growth rate of $s_d(n)$. While we do so, we will take somewhat of a scenic route and uncover some properties of $a_d(n)$ along the way. \subsection{A lower bound for $a_d(n+1)/a_d(n)$} When we prove an asymptotic formula for $s_d(n)$ in Section~\ref{Sec:04}, one of the requirements will be to show that $a_d(n) \geq 0$ for every $d \geq 1$ and $n \geq 0$. Likewise, Goulden et al.~\cite[Theorem 2]{GouldenGRS18} showed that $a_1(n) \geq 0$ with a complex analysis argument as a part of their work establishing an asymptotic formula for $c(n)$. 
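Before turning to the combinatorial interpretation, we note that both the recurrence~\eqref{Eq:0302} and the functional equation of Theorem~\ref{Thm:0101} are straightforward to check numerically. The Python sketch below is an illustration only (the helper names are ours): it computes $\mu_d$ from the closed-form product formula, generates $a_d(n)$ from the recurrence, and recovers $s_d(n)$ by repeatedly substituting $y \leftarrow x - \sum_{n \geq 2} \mu_d(n) y^n$ on truncated power series; the outputs agree with the tables given earlier.
\begin{verbatim}
from math import comb, prod

def mu_d(d, n):
    # Closed form: for n = p1^m1 ... pk^mk, mu_d(n) = prod_i (-1)^mi * C(d, mi).
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return prod((-1) ** m * comb(d, m) for m in f.values())

def a_seq(d, N):
    # a_d(0..N) from the recurrence a_d(n) = -sum_{k=2}^{n+1} mu_d(k) a_d(n+1-k).
    a = [1]
    for n in range(1, N + 1):
        a.append(-sum(mu_d(d, k) * a[n + 1 - k] for k in range(2, n + 2)))
    return a

def s_seq(d, N):
    # s_d(1..N): coefficients of the compositional inverse of M_d, obtained by
    # iterating y <- x - sum_{n>=2} mu_d(n) y^n on power series truncated at x^N.
    mu = [0] + [mu_d(d, n) for n in range(1, N + 1)]
    y = [0] * (N + 1)
    y[1] = 1                                  # first approximation: y = x
    for _ in range(N):                        # each pass fixes at least one more coefficient
        new = [0] * (N + 1)
        new[1] = 1
        power = y[:]                          # holds y^n, starting with n = 1
        for n in range(2, N + 1):
            nxt = [0] * (N + 1)               # nxt = power * y, truncated at x^N
            for i in range(1, N):
                if power[i]:
                    for j in range(1, N + 1 - i):
                        nxt[i + j] += power[i] * y[j]
            power = nxt
            if mu[n]:
                for k in range(N + 1):
                    new[k] -= mu[n] * power[k]
        y = new
    return y[1:]

print(a_seq(1, 10))   # [1, 1, 2, 3, 6, 9, 17, 28, 50, 83, 147]
print(a_seq(2, 6))    # [1, 2, 6, 15, 42, 108, 291]
print(s_seq(1, 8))    # [1, 1, 3, 10, 39, 160, 691, 3081]
print(s_seq(2, 6))    # [1, 2, 10, 59, 394, 2810]
\end{verbatim}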
In this section, we describe the set $\tilde{\mathcal{A}}_{d,n}$ and show that $|\tilde{\mathcal{A}}_{d,n}| = a_d(n)$ for all $d \geq 1$ and $n\geq 0$. Having a combinatorial interpretation for $a_d(n)$ implies (among other things) that these coefficients are indeed nonnegative. Before we are able to define $\tilde{\mathcal{A}}_{d,n}$, we first need to introduce some intermediate objects $\mathcal{B}_{d,n}, \mathcal{A}_{d,n}$ and mention some of their relevant properties along the way. Given $d \in \mathbb{N}$, we call a multiset $S= \set{s_1, \ldots, s_{\ell}}$ a \emph{$d$-coloured prime set} if the following hold: \begin{itemize} \item The elements $s_1, \ldots, s_{\ell}$ are (not necessarily distinct) prime numbers. \item Each of $s_1, \ldots, s_{\ell}$ is assigned a colour from $[d]$, such that each instance of the same prime number must be assigned distinct colours. \end{itemize} We also define the weight of $S$ to be $w(S) = \left( \prod_{i=1}^{\ell} s_i \right) -1$, and the sign of $S$ to be $v(S) = (-1)^{\ell+1}$. For every $i \geq 0$, let $\mathcal{B}_{d,i}$ denote the set of $d$-coloured prime sets with weight $i$. For instance, \[ \mathcal{B}_{2,11} = \set{ \set{2_1, 2_2, 3_1}, \set{2_1, 2_2, 3_2} } \] (where the assigned colour of a number is indicated in its subscript), and $\mathcal{B}_{2, 26} = \emptyset$. More generally, notice that if $n+1$ has prime factorization $n+1= p_1^{m_1}\cdots p_{k}^{m_k}$, then every element in $\mathcal{B}_{d,n}$ must contain exactly $m_i$ copies of $p_i$ for every $i \in [k]$. This implies that \begin{itemize} \item $|\mathcal{B}_{d,n}| = \prod_{i=1}^{k} \binom{d}{m_i} = | \mu_d(n+1)|$, since there are $\binom{d}{m_i}$ ways to colour the $m_i$ copies of $p_i$ for every $i \in [k]$; \item $v(B) = (-1)^{1+\sum_{i=1}^k m_i}$ for every element in $\mathcal{B}_{d,n}$. \end{itemize} Thus, we see that \begin{equation}\label{Eq:0303} \sum_{B \in \mathcal{B}_{d,n}} v(B) = -\mu_d(n+1). \end{equation} Next, for every $d \geq 1$ and $n \geq 0$, let $\mathcal{A}_{d,n}$ be the set of sequences $(A_1, \ldots, A_k)$ where \begin{itemize} \item $A_i$ is a $d$-coloured prime set for every $i \in [k]$; \item $\sum_{i=1}^k w(A_i) = n$. \end{itemize} Also, we define the sign of $A$ to be $v(A) = \prod_{i=1}^k v(A_i)$. In other words, $v(A) = -1$ if there is an odd number of even-sized sets in the sequence $A$, and $v(A) = 1$ otherwise. For example, the following table lists all elements of $\mathcal{A}_{1,n}$ for $0 \leq n \leq 6$, as well as their signs. To reduce clutter, we use $|$ to indicate separation of sets, and suppress the colour assignment of the numbers since there is only $d=1$ available colour in this case. \[ \begin{array}{r|r|l|l} n & a_1(n) & \set{A \in \mathcal{A}_{1,n} : v(A) = 1} & \set{A \in \mathcal{A}_{1,n} : v(A) = -1}\\ \hline 0 & 1 & () & \\ \hline 1 & 1 & (2)& \\ \hline 2 & 2 & (2|2), (3)& \\ \hline 3 & 3 & (2|2|2), (2|3), (3|2)& \\ \hline 4 & 6 & (2|2|2|2), (2|2|3), (2|3|2), (3|2|2), (3|3), (5)& \\ \hline 5 & 9 & (2|2|2|2|2), (2|2|2|3), (2|2|3|2), (2|3|2|2), (2|3|3), & (2,3) \\ && (2|5),(3|2|2|2), (3|2|3), (3|3|2), (5|2) & \\ \hline 6 & 17 & (2|2|2|2|2|2), (2|2|2|2|3), (2|2|2|3|2), (2|2|3|2|2), &(2|2,3), (2,3|2) \\ && (2|2|3|3), (2|2|5), (2|3|2|2|2), (2|3|2|3), (2|3|3|2), & \\ && (2|5|2), (3|2|2|2|2), (3|2|2|3), (3|2|3|2), (3|3|2|2),& \\ && (3|3|3), (3|5), (5|2|2), (5|3), (7) & \end{array} \] Then we have the following. \begin{lemma}\label{Lem:0301} For every $d \geq 1$ and $n \geq 0$, \[ a_d(n) = \sum_{A \in \mathcal{A}_{d,n}} v(A).
\] \end{lemma} \begin{proof} We prove the claim by induction on $n$. First, $a_d(0) = 1$, and $\mathcal{A}_{d,0}$ consists of just the empty sequence $A = ()$, and $v(A) = 1$. Thus, the base case holds. Now suppose $n \geq 1$, and let $A = (A_1, \ldots, A_k) \in \mathcal{A}_{d,n}$. Then $A_1 \in \mathcal{B}_{d,i}$ for some $1 \leq i \leq n$, and $A' = (A_2, \ldots, A_k) \in \mathcal{A}_{d,n-i}$. Thus, there is a natural bijection $g : \mathcal{A}_{d,n} \to \bigcup_{i=1}^{n} \mathcal{B}_{d,i} \times \mathcal{A}_{d, n-i}$. Moreover, notice that $v(A) = v(A_1)v(A')$. Thus, \begin{align*} \sum_{A \in \mathcal{A}_{d,n}} v(A) &= \sum_{i=1}^{n} \left( \sum_{B \in \mathcal{B}_{d,i}} v(B) \sum_{A' \in \mathcal{A}_{d,n-i}} v(A') \right) \\ & \overset{\eqref{Eq:0303}}{=} \sum_{i=1}^{n} - \mu_d(i+1) a_d(n-i) \\ &= \sum_{k=2}^{n+1} -\mu_d(k) a_d(n+1-k) \\ &\overset{\eqref{Eq:0302}}{=} a_d(n). \mbox{\qedhere} \end{align*} \end{proof} Next, we show that there are always more sequences in $\mathcal{A}_{d,n}$ with positive signs than those with negative signs. Given a sequence $A = (A_1, \ldots, A_k) \in \mathcal{A}_{d,n}$, we say that a subsequence $(A_j, A_{j+1}, \ldots, A_{j+\ell})$ of $A$ is \emph{odd, ascending, and repetitive} (OAR) if \begin{enumerate} \item (Odd) $A_j= \set{\ell}$, a singleton set, and $|A_{j+1}|, \ldots, |A_{j+\ell}|$ are all odd. \item (Ascending) Let $c_0$ be the colour assigned to the element $\ell \in A_j$. Then every element of $A_{j+1}$ is either greater than $\ell$, or is equal to $\ell$ and assigned a colour greater than $c_0$. \item (Repetitive) $A_{j+1} = A_{j+2} = \cdots = A_{j+\ell}$. \end{enumerate} For example, $A=(\set{3}, \set{2}, \set{2},\set{3,5,11},\set{3,5,11}, \set{2,3})$ contains an OAR subsequence starting at $j =3$ with $\ell = 2$ (regardless of $d$ and the colour assigned to the elements). Then we have the following. \begin{lemma}\label{Lem:0302} For every $d,n \geq 1$, \[ |\set{A \in \mathcal{A}_{d,n} : v(A) = -1} | \leq |\set{A \in \mathcal{A}_{d,n} : v(A) = 1}|. \] \end{lemma} \begin{proof} Let $A = (A_1, \ldots, A_k) \in \mathcal{A}_{d,n}$. Define indices $i_1(A), i_2(A)$ as follows: \begin{itemize} \item $i_1(A)$ is the smallest index where $|A_i|$ is even; \item $i_2(A)$ is the smallest index where an OAR subsequence begins. \end{itemize} Notice that, when both defined, $i_1(A)$ and $i_2(A)$ must be distinct for any sequence $A$, since an OAR subsequence must begin with a singleton set, which has odd size. Next, given sequence $A$, we define the sequence $f(A)$ as follows: \begin{itemize} \item Case 1: $i_1(A)$ is defined, and $i_2(A) > i_1(A)$ or is undefined. Let $i = i_1(A)$, and let $p_0 \in A_i$ be the smallest prime number that is assigned the minimum colour. Let $A'_{i} = A_i \setminus \set{p_0}$. Note that, since $|A_i|$ is even, $|A'_i|$ is necessarily odd (and hence nonempty). Define $f(A)$ by taking the sequence $A$ and replacing $A_i$ by the (OAR) subsequence \[ \set{p_0}, \underbrace{A'_i, \ldots, A'_i}_{\textnormal{$p_0$~times}}, \] with the colour assignment to elements unchanged. For example, given $A = (\set{2}, \set{3,11}, \set{5,7})$, $i_1(A) =2$ and $i_2(A)$ is undefined, and we have $f(A) = (\set{2}, \set{3}, \set{11}, \set{11}, \set{11}, \set{5,7})$. Notice that $v(A) = 1, v(f(A))=-1$, and both $A$ and $f(A)$ have weight $1+32+34 = 67$. \item Case 2: $i_2(A)$ is defined, and $i_1(A) > i_2(A)$ or is undefined. Let $i = i_2(A)$ and let $p_0$ be the unique element in $A_i$.
Define $f(A)$ by taking $A$ and replacing the OAR subsequence $A_j, A_{j+1}, \ldots, A_{j+p_0}$ by the set $A_j \cup A_{j+1}$, with the colour assignment to elements unchanged. For example, given $A = (\set{2}, \set{3}, \set{11}, \set{11}, \set{11}, \set{5,7})$, then $i_1(A) = 6$ and $i_2(A) = 2$, and we have $f(A) = (\set{2}, \set{3,11}, \set{5,7})$. \end{itemize} Notice that if $v(A) = -1$, then $i_1(A)$ must be defined, and so $f(A)$ is defined. Since $f$ either replaces one even set by a number of odd sets (Case 1) or vice versa (Case 2), we see that $v(f(A)) = -v(A)$ whenever $f(A)$ is defined. It is also not hard to check that $w(f(A)) = w(A)$ in both cases. Furthermore, notice that if $f(A)$ is defined, then $f(f(A)) = A$. This shows that $f$ is a bijection between the sets $\set{A \in \mathcal{A}_{d,n} : v(A) = -1}$ and $\set{f(A) : A \in \mathcal{A}_{d,n}, v(A) = -1}$. Thus, $f$ is injective when we consider it as a mapping from $\set{A \in \mathcal{A}_{d,n} : v(A) = -1}$ to $|\set{A \in \mathcal{A}_{d,n} : v(A) = 1}$, and our claim follows. \end{proof} Thus, the number of elements $A \in \mathcal{A}_{d,n}$ with $v(A)=-1$ never outnumber those with $v(A) = 1$. In fact, the function $f$ defined in the proof above can be seem as ``pairing up'' the sequences in $\mathcal{A}_{d,n}$ that contains an even set or an OAR subsequence (or both). Thus, we have obtained the following. \begin{corollary}\label{Cor:0301} Define $\tilde{\mathcal{A}}_{d,n} \subseteq \mathcal{A}_{d,n}$ to be the set of sequences that neither contains an even set nor an OAR subsequence. Then $a_d(n) = |\tilde{\mathcal{A}}_{d,n}|$ for every $d \geq 1$ and $n \geq 0$. \end{corollary} Corollary~\ref{Cor:0301} gives us a set whose elements is counted by $a_d(n)$. Using this combinatorial description, we prove the following result. \begin{proposition}\label{Prop:0301} For every $d \geq 1$, $a_d(0) =1$ and \[ \frac{a_d(n+1)}{a_d(n)} \geq d \] for all $n \geq 0$. \end{proposition} \begin{proof} Given $A = (A_1, \ldots, A_k) \in \tilde{\mathcal{A}}_{d,n}$, we define $f : \tilde{\mathcal{A}}_{d,n} \times [d] \to \tilde{\mathcal{A}}_{d, n+1}$ as follows: \[ f(A, c) = \begin{cases} (A_1, \ldots, A_{k-1}, \set{3_c}) & \textnormal{ if $A_{k} = \set{2_c}$ and $A_{k-1} = \set{2_{c'}}$ where $c' < c$;}\\ (A_1, \ldots, A_{k-1}, A_k, \set{2_c}) & \textnormal{otherwise.} \end{cases} \] (Again, we have used subscripts to denote assigned colours of elements.) The function $f$ can be seen as appending the set $\set{2_c}$ at the end of the given sequence (which increases the total weight by $1$ and does not create an even set). If this does create an OAR subsequence, it must be that the last three sets of the new sequence are $\set{2_{c'}}, \set{2_c}, \set{2_c}$ where $c' < c$. In this case, we further replace the two instances of $\set{2_c}$ by one instance of $\set{3_c}$, which does not change the weight of the whole sequence, and now guarantees that it does not have an OAR subsequence. Since $f(A,c)$ must have weight $n+1$ and does not contain any even sets nor OAR subsequences, it is an element of $\tilde{\mathcal{A}}_{d,n+1}$. It is also easy to see that $A$ and $c$ are uniquely recoverable from $f(A,c)$, and so $f(A,c)$ is injective. Thus, we conclude that \[ a_d(n+1) = |\tilde{\mathcal{A}}_{d,n+1}| \geq | \tilde{\mathcal{A}}_{d,n} \times [d]| = d a_d(n). 
\mbox{\qedhere} \] \end{proof} \subsection{An upper bound for $a_d(n+1)/a_d(n)$} \begin{figure} \caption{Plotting $\frac{a_d(n+1)}{a_d(n)}$ for $1 \leq d \leq 6, 0 \leq n \leq 20$.} \label{Fig:0301} \end{figure} We illustrate in Figure~\ref{Fig:0301} the values of $\frac{a_d(n+1)}{a_d(n)}$ for some small values of $d$ and $n$. While we have shown that the ratio is bounded below by $d$, the figure suggests that $d+1$ is a tight upper bound. We provide an algebraic proof that this is indeed true for all $d \geq 3$. \begin{proposition}\label{Prop:0302} For every $d \geq 3$ and $n \geq 0$, \[ \frac{a_d(n+1)}{a_d(n)} \leq d+1. \] \end{proposition} To do so, we'll need a lemma that will also be useful in Section~\ref{Sec:04}. \begin{lemma}\label{Lem:0303} Let $d \geq 2$ and $k \geq 3$ be integers. \begin{itemize} \item[(i)] For all $x \in [0, \frac{1}{d}]$, \[ \left| \sum_{n\geq 2^k} \mu_d(n)x^n \right| \leq 2d^k x^{2^k}. \] \item[(ii)] For all $x \in [0, \frac{1}{2d}]$, \[ \left| \sum_{n\geq 2^k} n \mu_d(n)x^{n-1} \right| \leq 2^{k+1} d^k x^{2^k-1}. \] \end{itemize} \end{lemma} \begin{proof} We first prove $(i)$. Notice that for all $n \in \set{ 2^k, \ldots, 2^{k+1}-1}$, $n$ has at most $k$ prime factors (counting multiplicities), and so $|\mu_d(n) | \leq d^k$. Thus, \begin{align*} \left| \sum_{n \geq 2^k} \mu_d(n) x^n \right| &= \left| \sum_{\ell \geq k} \sum_{n = 2^{\ell}}^{2^{\ell+1} -1} \mu_d(n) x^n \right| \\ &\leq \sum_{\ell \geq k} \sum_{n = 2^{\ell}}^{2^{\ell+1} -1} d^{\ell} x^n \\ &\leq \sum_{\ell \geq k} \sum_{n \geq 2^{\ell}} d^{\ell} x^n\\ &= \sum_{\ell \geq k} \frac{ d^{\ell}x^{2^{\ell}}}{1-x} \\ &\leq \sum_{\ell \geq k} \frac{ d^{\ell}x^{ 2^{k} (\ell-k+1)}}{1-x} \\ &= \frac{ d^k x^{2^k} }{(1-x) (1- dx^{2^k}) }. \end{align*} For the last inequality, notice that $2^{i-1} \geq i$ for all $i \geq 1$. Substituting $i = \ell-k+1$ and then multiplying both sides by $2^k$ gives $2^{\ell} \geq 2^{k}(\ell-k+1)$. Since $0 \leq x \leq \frac{1}{d} < 1$, it follows that $x^{2^{\ell}} \leq x^{2^{\ell}(\ell-k+1)}$ for all $\ell \geq k$. Next, when $d \geq 3, k \geq 3$, and $x \leq \frac{1}{3}$, $(1-x)(1-dx^{2^k}) \geq \frac{1}{2}$, and so $\frac{ d^k x^{2^k} }{(1-x) (1- dx^{2^k})} \leq 2d^kx^{2^k}$. For the case $d =2$, notice that $\mu_2(2^k) = 0$ for all $k\geq 3$. Thus, using essentially the same chain of inequalities above, we obtain that \[ \left| \sum_{n \geq 2^k} \mu_2(n) x^n \right| \leq \frac{ d^k x^{2^k} }{(1-x) (1- dx^{2^k}) } -d^kx^{2^k} \leq 2 d^kx^{2^k}. \] Next, we prove $(ii)$ using similar observations. Given $d\geq 2, k\geq 3$, and $x \leq \frac{1}{2d}$, \begin{align*} \left| \sum_{n \geq 2^k} \mu_d(n) n x^{n-1} \right| &= \left| \sum_{\ell \geq k} \sum_{n = 2^{\ell}}^{2^{\ell+1} -1} \mu_d(n) nx^{n-1} \right| \\ &\leq \sum_{\ell \geq k} \sum_{n = 2^{\ell}}^{2^{\ell+1} -1} d^{\ell} nx^{n-1} \\ &\leq \sum_{\ell \geq k} \sum_{n \geq 2^{\ell}} d^{\ell} nx^{n-1} \\ &= \sum_{\ell \geq k} d^{\ell}x^{2^{\ell}-1} \left( \frac{ 2^{\ell}}{1-x} + \frac{x}{(1-x)^2} \right) \\ &\leq \sum_{\ell \geq k} d^{\ell} x^{(2^k-1)+(\ell -k) 2^k} \left( \frac{ 2^{\ell}}{1-x} + \frac{x}{(1-x)^2} \right) \\ &= d^{\ell} x^{2^k-1} \left( \frac{ 2^k}{(1-2dx^{2^k})(1-x)} + \frac{x}{(1-dx^{2^k})(1-x)^2} \right).\\ & \leq d^{\ell} x^{2^k-1} \left( \frac{3}{2} 2^k + \frac{1}{2} \right)\\ & \leq 2^{k+1} d^{\ell} x^{2^k-1}. \mbox{\qedhere} \end{align*} \end{proof} We are now ready to prove Proposition~\ref{Prop:0302}. \begin{proof}[Proof of Proposition~\ref{Prop:0302}] We prove our claim by induction on $n$. 
First, using~\eqref{Eq:0302}, we obtain that \begin{align*} a_d(0) &= 1,\\ a_d(1) &= d,\\ a_d(2) &= d^2+d,\\ a_d(3) &= d^3+\frac{3}{2}d^2+\frac{1}{2}d,\\ a_d(4) &= d^4+2d^3+2d^2+d,\\ a_d(5) &= d^5+\frac{5}{2}d^4 + \frac{7}{2}d^3 + 2d^2,\\ a_d(6) &= d^6+3d^5+\frac{21}{4}d^4+\frac{9}{2}d^3+\frac{9}{4}d^2+d. \end{align*} From the above, one can check that $\frac{a_d(n+1)}{a_d(n)} \leq d+1$ for all $0 \leq n \leq 5$. Next, assume $n \geq 6$. Observe that \begin{align*} a_d(n) ={}& da_d(n-1) + da_d(n-2) - \binom{d}{2}a_d(n-3) + da_d(n-4) - d^2a_d(n-5) \\ &+ da_d(n-6) - \sum_{k \geq 8}^{n+1} \mu_d(k) a_d(n+1-k). \end{align*} Using the inductive hypothesis as well as Proposition~\ref{Prop:0301}, we see that \begin{align*} da_d(n-2) &\leq a_d(n-1), \\ -\binom{d}{2} a_d(n-3) & \leq -d^3 \binom{d}{2} a_d(n-6),\\ da_d(n-4) & \leq d(d+1)^2 a_d(n-6),\\ -d^2a_d(n-5) & \leq -d^3a_d(n-6). \end{align*} Also, from Proposition~\ref{Prop:0301} again, $a_d(n-6-i) \leq d^{-i}a_d(n-6) $ for every $i\geq 1$, and so \begin{align*} \left| \sum_{k \geq 8}^{n+1} \mu_d(k) a_d(n+1-k)\right| & \leq \left| \sum_{k \geq 8}^{n+1} \mu_d(k) d^{7-k} a_d(n-6) \right| \leq \left| \sum_{k \geq 8} \mu_d(k) d^{7-k} a_d(n-6) \right| \\ & \leq 2d^2 a_d(n-6) \end{align*} using Lemma~\ref{Lem:0303}(i). Thus, \begin{align*} a_d(n) ={}& da_d(n-1) + da_d(n-2) - \binom{d}{2}a_d(n-3)+ da_d(n-4) - d^2a_d(n-5) \\ & + da_d(n-6) - \sum_{k \geq 8}^{n+1} \mu_d(k) a_d(n+1-k) \\ \leq{}& (d+1)a_d(n-1) + \left( -d^3\binom{d}{2} + d(d+1)^2 - d^3 + d + 2d^2\right) a_d(n-6)\\ \leq{}& (d+1)a_d(n-1), \end{align*} where the last inequality relies on the assumption that $d \geq 3$. \end{proof} As seen in Figure~\ref{Fig:0301}, it appears that Proposition~\ref{Prop:0302} is also true for $d=1$ and $d=2$. A combinatorial proof for that would be interesting. \section{Asymptotic Formula and Growth Rate of $s_d(n)$}\label{Sec:04} In this section, we consider the asymptotic behavior and growth rate of $s_d(n)$. We will also end the section by describing a mapping from labelled plane rooted trees to hypercube decompositions that will help provide additional context to the growth rate of $s_d(n)$. \subsection{An asymptotic formula} The main tool we will rely on is the following result from Flajolet and Sedgewick~\cite[Theorem VI.6, p.~404]{FlajoletS09}: \begin{theorem}\label{Thm:0401} Let $y$ be a power series in $x$. Let $\phi\colon \mathbb{C} \to \mathbb{C}$ be a function with the following properties: \begin{enumerate} \item[(i)] $\phi$ is analytic at $z = 0$ and $\phi(0) > 0$; \item[(ii)] $y = x \phi(y)$; \item[(iii)] $[z^n] \phi(z) \geq 0$ for all $n \geq 0$, and $[z^n] \phi(z) \neq 0$ for some $n \geq 2$. \item[(iv)] There exists a (then necessarily unique) real number $s \in (0,r)$ such that $\phi(s) = s \phi'(s)$, where $r$ is the radius of convergence of $\phi$. \end{enumerate} Then, \[ [x^n] y \sim \sqrt{ \frac{\phi(s)}{2\pi \phi''(s)}} \, n^{-3/2} \left( \phi'(s) \right)^n. \] \end{theorem} Using Theorem~\ref{Thm:0401}, we obtain the following: \begin{theorem}\label{Thm:0402} Let $d \geq 1$, and let $s > 0$ be the smallest real number such that $M_d'(s) = 0$. Then, \[ s_d(n) \sim \frac{1}{ \sqrt{ -2\pi M_d''(s)}} \, n^{-3/2} \, M_d(s)^{\tfrac12-n}. \] \end{theorem} \begin{proof} We first verify the analytic conditions listed in Theorem~\ref{Thm:0401}. 
Since $M_d(y) = x$, to satisfy $(ii)$ we have $y=x\phi(y)$ where \[ \phi(z) = \frac{z}{M_d(z)} = \sum_{n \geq 0} a_d(n)z^n, \] where the coefficients $a_d(n)$ were defined in~\eqref{Eq:0301} and studied in Section~\ref{Sec:03}. It is easy to see that $\phi(0) = 1$ (for all $d$) and that $\Phi$ is analytic at $z=0$, and so $(i)$ holds. Condition $(iii)$ follows readily from Proposition~\ref{Prop:0301}. For $(iv)$, notice that \begin{equation}\label{Eq:0401} \phi'(z) = \frac{M_d(z) - zM_d'(z)}{M_d(z)^2}, \quad \phi''(z) = \frac{-zM_d(z)M_d''(z) - 2M_d(z)M_d'(z) + 2z(M_d'(z))^2}{M_d(z)^3}. \end{equation} Let $r$ be the radius of convergence of $\phi$ at $z = 0$. Since $\phi(z) = \frac{z}{M_d(z)}$, $r$ is the smallest positive solution to $M_d(r) = 0$. Given $M_d(0) = M_d(r)=0$ and that $M_d(z)$ is differentiable over $(0,r)$, there must exist $s \in (0,r)$ where $M_d'(s) =0$. Now observe that \[ M_d'(s) = 0 ~\implies~ \frac{s}{M_d(s)} = s \left( \frac{M_d(s) - sM_d'(s)}{M_d(s)^2} \right) ~\implies~ \phi(s) = s \phi'(s). \] Thus, condition $(iv)$ holds. Now that the analytic assumptions on $\phi(z)$ have been verified, we may establish the asymptotic formula. When $M_d'(s) = 0$, the expressions in~\eqref{Eq:0401} simplifies to \[ \phi'(s) = \frac{1}{M_d(s)}, \quad\quad\quad \phi''(s) = \frac{-sM_d''(s)}{M_d(s)^2}. \] Therefore, we have \begin{align*} s_d(n) = [x^n] y & \sim \sqrt{ \frac{\phi(s)}{2 \pi \phi''(s)} } \; n^{-3/2} \, \phi'(s)^n \\ &= \sqrt{ \frac{ s / M_d(s) } { 2 \pi \big( -s M_d''(s) / M_d(s)^2 \big) } } \; n^{-3/2} \left( \frac{1}{M_d(s)} \right)^n \\ &\;=\; \frac{1}{ \sqrt{ -2\pi M_d''(s) }} \, n^{-3/2} \, M_d(s)^{1/2-n}. \end{align*} This completes the proof. \end{proof} \subsection{Growth rate} We define the growth rate of $s_d(n)$ to be \[ \mathcal{K}_d = \lim_{n \to \infty} \frac{s_d(n+1)}{ s_d(n) }. \] Goulden et al.~\cite[Theorem 2]{GouldenGRS18} showed that $\mathcal{K}_1 \approx 5.487452$ in their work on NECS. Here, we show that $d=1$ turns out the only case where $\mathcal{K}_d < 4d + \frac{3}{2}$. \begin{proposition}\label{Prop:0401} For all integers $d \geq 2$, \[ 4d+ \frac{3}{2} \leq \mathcal{K}_d \leq 4d + \frac{3}{2} + \frac{1}{16d}. \]. \end{proposition} \begin{proof} It follows immediately from Theorem \ref{Thm:0402} that $\mathcal{K}_d = \frac{1}{M_d(s)}$ where $s$ is the smallest positive real number where $M_d'(s) =0$. For convenience, we define the polynomials \[ M_d^-(x) = \sum_{n \geq 1}^7 \mu_d(n)x^n - 2d^3x^8, \quad M_d^+(x) = \sum_{n \geq 1}^7 \mu_d(n)x^n + 2d^3x^8. \] Then from Lemma~\ref{Lem:0303} we know that $M_d^-(x) \leq M_d(x) \leq M_d^+(x)$ over $[0, \frac{1}{d}]$, and $(M_d^-)'(x) \leq M_d'(x) \leq (M_d^+)'(x)$ over $[0, \frac{1}{2d}]$. Let $k_1 = \frac{4d+5}{(4d+5)(2d+1) +1}$ and $k_2 = \frac{d-1}{d} k_1 + \frac{1}{d(2d+1)}$. Notice that $0 < k_1 < k_2 < \frac{1}{2d}$. Now observe that $(M_d^+)'(k_1) = \frac{1}{64(4d^2+7d+3)^7}c_1(d)$, where \begin{align*} c_1(d) ={}& 229376d^{10}+1966080d^9+7294976d^8+15323136d^7+20124288d^6+17555072d^5\\ &+11102496d^4+5917032d^3+2803847d^2+933639d+139968. \end{align*} (All polynomials computations in this proof were performed in Maple.) 
Likewise, one can check that $(M_d^+)'(k_2) = \frac{1}{64d^5(2d+1)^7(4d^2+7d+3)^7} c_2(d)$ where \begin{align*} c_2(d) ={}& -16777216d^{23}-209715200d^{22}-1265631232d^{21}-4968939520d^{20}\\ & -14345764864d^{19}-32330973184d^{18}-58457362432d^{17}-85738966016d^{16}\\ & -102477934592d^{15}-100110810240d^{14}-80143714368d^{13}-52681506144d^{12}\\ & -28456978128d^{11}-12623832456d^{10}-4590955884d^9-1365827366d^8\\ & -331953575d^7-65949629d^6-10735783d^5-1431097d^4-153709d^3-12647d^2\\ & -713d-21. \end{align*} Since $c_1(d)$ has exclusively positive coefficients, $(M_d^+)'(k_1) > 0$. Similarly, we see that $(M_d^+)'(k_2) < 0$. Thus, $M_d^+(x)$ has a local maximum at $s^+ \in (k_1, k_2)$. Next, observe that \[ (M_d^+)''(x) = -2d - 6dx + 6d(d-1)x^2 - 20dx^3 + 30d^2x^4 - 42dx^5 + 112d^3x^6, \] which is negative over $[0, \frac{1}{2d}]$. Thus, $M_d^+(x)$ is concave down over this interval. and $M_d^+(s^+)$ is the absolute maximum of $M_d^+(x)$ over $[0, \frac{1}{2d}]$. Now consider \[ L(x) = M_d^+(k_1) + (M_d^+)'(k_1)(x-k_1), \] the linearization of $M_d^+(x)$ at $x= k_1$. We argue that \[ M_d(s) \leq M_d^+(s) \leq M_d^+(s^+) \leq L(s^+) \leq L(k_2). \] For the first inequality, notice that $M_d'(0) > 0$, and $M_d'(k_2) < (M_d^+)'(k_2) < 0$. Thus, $s \in [0, \frac{1}{2d}]$, which implies that $M_d(s) \leq M_d^+(s)$ (since $M_d(x) \leq M_d^+(x)$ over $[0, k_2]$). The second inequality holds because we showed above that $s^+$ maximizes $M_d^+(x)$ over $[0, \frac{1}{2d}]$. The third inequality holds since $M_d^+(x)$ is concave down, and so $L(x) \geq M_d^+(x)$ over $[k_1, k_2]$. Finally, since $(M_d^+)'(k_1) >0$, $L(x)$ is an increasing function, and we obtain that $L(s^+) \leq L(k_2)$. Thus, we have $M_d(s) \leq L(k_2) = M_d^+(k_1) + (k_2-k_1) (M_d^+)'(k_1)$, and so \[ \mathcal{K}_d = \frac{1}{M_d(s)} \geq \frac{1}{M_d^+(k_1) + (k_2-k_1) (M_d^+)'(k_1)} = 4d+\frac{3}{2} + \frac{c_3(d)}{c_4(d)}, \] where \begin{align*} c_3(d) ={}& 524288d^{16}+6029312d^{15}+29491200d^{14}+76218368d^{13}+92368896d^{12}\\ & -45438976d^{11}-403496448d^{10}-849085696d^9-1125540408d^8-1083739620d^7\\ & -793966088d^6-450174888d^5-201247133d^4-73160887d^3-21407400d^2\\ & -4340565d-419904,\\ c_4(d)={}& 8388608d^{17}+118489088d^{16}+783810560d^{15}+3223584768d^{14}+9227272192d^{13}\\ & +19497541632d^{12}+31470530560d^{11}+39594894336d^{10}+39261180544d^9\\ & +30802044560d^8+19075944984d^7+9245407152d^6+3450501536d^5\\ & +966637518d^4+196684154d^3+28701088d^2+3266958d+279936. \end{align*} One can check that $c_3(d), c_4(d) \geq 0$ for all $d \geq 2$, and so we obtain that $\mathcal{K}_d \geq 4d + \frac{3}{2}$. Next, we prove the upper bound. Since $M_d(s) \geq M_d(k_1) \geq M_d^-(k_1)$, we have \[ \mathcal{K}_d = \frac{1}{M_d(s)} \leq \frac{1}{M_d^-(k_1)} = 4d + \frac{3}{2} + \frac{16}{d} - \frac{16}{d} \cdot \frac{c_5(d)}{c_6(d)}, \] where \begin{align*} c_5(d)={}& 5505024d^{14}+66846720d^{13}+376373248d^{12}+1307049984d^{11}+3139540992d^{10}\\ & +5534562304d^9+7393403136d^8+7593124096d^7+6001005236d^6\\ & +3610232652d^5+1615182134d^4+516014087d^3+109626523d^2+13506249d\\ & +699840,\\ c_6(d)&= 2097152d^{15}+28573696d^{14}+181665792d^{13}+715063296d^{12}+1949155328d^{11}\\ & +3898431488d^{10}+5912027136d^9+6925392128d^8+6322048032d^7\\ & +4502239636d^6+2485004860d^5+1046787214d^4+327172991d^3\\ & +72131011d^2+10147017d+699840. \end{align*} Since $c_5(d), c_6(d)$ have exclusively positive coefficients, we conclude that $\frac{c_5(d)}{c_6(d)} > 0$, and so we conclude that $\mathcal{K}_d \leq 4d + \frac{3}{2} + \frac{16}{d}$. 
\end{proof}
We plot in Figure~\ref{Fig:0401} the values of $\mathcal{K}_d - 4d - \frac{3}{2}$ for $2 \leq d \leq 30$, as well as the upper bound given in Proposition~\ref{Prop:0401} for comparison.
\begin{figure} \caption{Plotting $\mathcal{K}_d-4d-\frac{3}{2}$ for $2 \leq d \leq 30$ and the upper bound from Proposition~\ref{Prop:0401}.} \label{Fig:0401} \end{figure}
\subsection{Relating trees and hypercube decompositions}
We end this section by describing a mapping from plane rooted trees to hypercube decompositions, which will also add some perspective to the bounds we found for $\mathcal{K}_d$ in Proposition~\ref{Prop:0401}. Given an integer $d \geq 1$, let $\mathcal{T}_{d}$ be the set of plane rooted trees where each internal node has a label from $[d]$ and at least $2$ children, while the leaves of the tree are unlabelled. Furthermore, let $\mathcal{T}_{d,n} \subseteq \mathcal{T}_{d}$ denote the set of trees with exactly $n$ leaves. Then we can define a tree-to-decomposition mapping $\Psi : \mathcal{T}_{d,n} \to \mathcal{S}_{d,n}$ recursively as follows:
\begin{itemize}
\item $\Psi$ maps the tree with a single leaf node to the trivial decomposition $\set{(0,1)^d}$.
\item Now suppose $T \in \mathcal{T}_{d,n}$ has root node labelled $i$ with $r$ children. Let $T_1, \ldots, T_r$ be the subtrees of the root node ordered from left to right. Also, for each $j \in [r]$, let $B_j = \set{ x \in (0,1)^d : \frac{j-1}{r} < x_i < \frac{j}{r}}$. (Notice that $\set{B_1, B_2, \ldots, B_r} = H_{i,r}( (0,1)^d )$.) Define
\[
\Psi(T) = \bigcup_{j=1}^r \set{ \scale_{(0,1)^d \to B_j}(R) : R \in \Psi(T_j)}.
\]
\end{itemize}
Figure~\ref{Fig:0402} gives an example of this mapping.
\begin{figure} \caption{Illustrating the mapping $\Psi$ from trees to hypercube decompositions.} \label{Fig:0402} \end{figure}
The mapping $\Psi$ is a natural extension of another tree-to-decomposition mapping that Bagherzadeh, Bremner, and the author used to study a generalization of Catalan numbers~\cite{AuBB20}. It is not hard to see that the mapping $\Psi$ is onto --- given $S \in \mathcal{S}_{d,n}$, we can use the sequence of splitting operations that resulted in $S$ to generate a tree $T \in \mathcal{T}_{d,n}$ where $\Psi(T) = S$. On the other hand, $\Psi$ is not one-to-one. Figure~\ref{Fig:0403} gives two types of situations where two distinct trees are mapped to the same decomposition.
\begin{figure} \caption{Two types of situations where $\Psi : \mathcal{T}_{d,n} \to \mathcal{S}_{d,n}$ is not one-to-one.} \label{Fig:0403} \end{figure}
In general, if we let $t_d(n) = |\mathcal{T}_{d,n}|$, then we see that $t_d(n) \geq s_d(n)$ for every $d,n \geq 1$, with the inequality being strict for all $n \geq 4$. We remark that when $d=1$, we can consider the nodes of the trees in $\mathcal{T}_{1,n}$ as being unlabelled, and so $t_1(n)$ gives the well-studied small Schr\"oder numbers~\cite[\href{https://oeis.org/A001003}{A001003}]{OEIS}. Thus, the sequences $t_d(n)$ can be seen as a generalization of small Schr\"oder numbers, which the author recently studied in another manuscript~\cite{Au21}. In particular, we have the following for the growth rate of $t_d(n)$:
\begin{proposition}[\cite{Au21}, Proposition 4] For every $d \geq 1$,
\[
\lim_{n \to \infty} \frac{t_d(n+1)}{t_d(n)} = 2d+1+2\sqrt{d^2+d}.
\]
\end{proposition}
Thus, for large $d$, the growth rate of $s_d(n)$ behaves like $4d+\frac{3}{2}+O(1/d)$ by Proposition~\ref{Prop:0401}, while the growth rate of $t_d(n)$ satisfies $2d+1+2\sqrt{d^2+d} = 4d+2-O(1/d)$.
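As a concrete comparison (a quick numerical check using only the values quoted in this section), when $d = 1$ the two growth rates are $\mathcal{K}_1 \approx 5.4875$ and $3+2\sqrt{2} \approx 5.8284$, respectively.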
Also, since the growth rate of $t_d(n)$ is strictly greater than that of $s_d(n)$, it follows that $\displaystyle\lim_{n \to \infty} \frac{s_d(n)}{t_d(n)} = 0$ for every $d \geq 1$. \section{Relating Decompositions and NECS}\label{Sec:05} An immediate consequence of Theorem~\ref{Thm:0101} is that $s_1(n) = c(n)$ for all $n \geq 1$. In this section, we highlight some combinatorial connections between hypercube decompositions and NECS. \subsection{A bijection between $\mathcal{S}_{1,n}$ and $\mathcal{C}_n$} Given an NECS $C = \set{ \res{a_i}{n_i} : i \in [k]}$, define $\gcd(C)$ to be the greatest common divisor of the moduli $n_1, \ldots, n_k$. Furthermore, when we discuss the gcd of a decomposition in $S \in \mathcal{S}_{1,n}$, we'll slightly abuse notation and write $\gcd(S) = r$ (instead of $\gcd(S) = (r)$, for ease of comparing gcd's of decompositions and NECS. Next, we define the mapping $\Phi : \mathcal{S}_{1,n} \to \mathcal{C}_n$ recursively as follows: \begin{itemize} \item ($n=1$) $\Phi$ maps the trivial decomposition $\set{(0,1)}$ to $\set{ \res{0}{1}}$, the NECS with a single residual class. \item ($n \geq 2$) Given $S \in \mathcal{S}_{1,n}$, let $r = \gcd(S)$, and $B_j = (\frac{j}{r}, \frac{j+1}{r})$ for every $j \in \set{0, \ldots, r-1}$. Given that $\gcd(S) = r$, we know that for every region $R \in S$, there is a unique $j$ where $R \subseteq B_j$. Thus, \[ S_j = \set{ \scale_{B_j \to (0,1)}(R) : R \in S, R \subseteq B_j}, \] is a decomposition in its own right. We then define \[ \Phi(S) = \bigcup_{j=0}^{r-1} \set{ E_{j,r}(T) : T \in \Phi(S_j)}. \] (Recall that $E_{j,r}( \res{a}{n} ) = \res{jn+a}{rn}$, as defined in Section~\ref{Sec:0103}.) \end{itemize} \begin{figure} \caption{Illustrating the mapping $\Phi : \mathcal{S}_{1,n} \to \mathcal{C}_n$ for $n \leq 4$.} \label{Fig:0501} \end{figure} Figure~\ref{Fig:0501} illustrates the mapping $\Phi$ applied to elements in $\mathcal{S}_{1,n}$ for $n \leq 4$. Next, we prove a result that will help us show that $\Phi$ is in fact a bijection. \begin{lemma}\label{Lem:0501} Let $S \in \mathcal{S}_{1,n}$. If $\gcd(S) = r$, then $\gcd(\Phi(S)) = r$. \end{lemma} \begin{proof} We prove our claim by induction on $n$. When $n=1$, then $S$ must be the trivial decomposition $\set{ (0,1)}$, and the claim holds as $\gcd(S) = \gcd(\Phi(S))= 1$. Now assume that $n \geq 2$, and so $r \geq 2$. Consider the decompositions $S_0, \ldots, S_{r-1}$ as defined in the definition of $\Phi$. Since $r = \gcd(S)$ is chosen maximally, it follows that \[ \gcd \set{ \gcd(S_j) : j \in \set{0,\ldots, r-1}} = 1. \] (The outer $\gcd$ is the ordinary greatest common divisor operation applied to a set of natural numbers.) By the inductive hypothesis, $\gcd(S_j) = \gcd(\Phi(S_j))$ for all $j$, and so we obtain that \[ \gcd \set{ \gcd(\Phi(S_j)) : j \in \set{0,\ldots, r-1}} = 1. \] Since $E_{j,r}$ multiplies each modulus of the residual classes in $\Phi(S_j)$ by $r$, we see that \[ \gcd\set{ E_{j,r}(T) : T \in \Phi(S_j)} = r \gcd(\Phi(S_j)) \] for every $j \in \set{0, \ldots, r-1}$, and so it follows that \[ \gcd(\Phi(S)) = \gcd\set{ E_{j,r}(T) : j \in \set{0,\ldots, r-1}, T \in \Phi(S_j)} = r. \mbox{\qedhere} \] \end{proof} Since $\Phi$ is gcd-preserving from Lemma~\ref{Lem:0501}, we are able to use a single column in Figure~\ref{Fig:0501} to indicate the gcd of both the input decomposition and the output NECS. Next, we prove that $\Phi$ is indeed a bijection. \begin{proposition} $\Phi : \mathcal{S}_{1,n} \to \mathcal{C}_n$ is a bijection. 
\end{proposition}
\begin{proof}
From Theorem~\ref{Thm:0101}, we know that $|\mathcal{S}_{1,n}| = |\mathcal{C}_n|$, and so every function from $\mathcal{S}_{1,n}$ to $\mathcal{C}_n$ is either both one-to-one and onto, or neither. Thus, it suffices to show that $\Phi$ is one-to-one, which we will prove by induction on $n$. The base case $n=1$ obviously holds. For a contradiction, suppose there exist distinct $S, S' \in \mathcal{S}_{1,n}$ where $\Phi(S) = \Phi(S')$. From Lemma~\ref{Lem:0501}, $\Phi(S) = \Phi(S')$ implies that $\gcd(S) = \gcd(S') = r$ for some integer $r \geq 2$. Consider the decompositions $S_0, \ldots, S_{r-1}$ and $S'_0, \ldots, S'_{r-1}$ as defined in the definition of $\Phi$. Now $S \neq S'$ implies that there exists an index $j$ where $S_j \neq S'_j$. On the other hand, $\Phi(S) = \Phi(S')$ implies that $\Phi(S_j) = \Phi(S'_j)$. Since $S_j$ consists of strictly fewer regions than $S$, this violates our inductive hypothesis. Thus, it follows that $\Phi$ is bijective.
\end{proof}
\subsection{Counting decompositions/NECS with a given LCM}
We have established a map $\Phi$ between $1$-dimensional decompositions and NECS that is not only bijective, but also preserves some key combinatorial properties, such as the gcd as shown in Lemma~\ref{Lem:0501}. Thus, one can see hypercube decompositions as a generalization of NECS, and any result we prove for hypercube decompositions may also specialize to a corresponding implication for NECS. In this section, we provide one such example. Similar to how we defined the gcd of a given hypercube decomposition, we can also define its lcm. Given $S \in \mathcal{S}_{d}$, we say that $\lcm(S) = (r_1, \ldots, r_d)$ if
\begin{itemize}
\item $S \preceq D_{(r_1, \ldots, r_d)}$;
\item there doesn't exist $(r_1',\ldots, r_d') \neq (r_1, \ldots, r_d)$ where $(r_1',\ldots, r_d') \leq (r_1, \ldots, r_d)$ and $S \preceq D_{(r_1', \ldots, r_d')}$.
\end{itemize}
Using the decompositions in Figure~\ref{Fig:0202} as examples, we have $\lcm(S_1) = (3,2)$, $\lcm(S_2) = (6,4)$, and $\lcm(S_3) = (6, 12)$. Then, given $r_1, \ldots, r_d \in \mathbb{N}$, define the sets
\begin{align*} \mathcal{G}_{(r_1, \ldots, r_d)}&= \set{ S \in \mathcal{S}_d : S \preceq D_{(r_1, \ldots, r_d)} },\\ \mathcal{H}_{(r_1, \ldots, r_d)} &= \set{ S \in \mathcal{S}_d : \lcm(S) =(r_1, \ldots, r_d) }. \end{align*}
We also define $g(r_1, \ldots, r_d) = |\mathcal{G}_{(r_1, \ldots, r_d)}|$ and $h(r_1, \ldots, r_d) = |\mathcal{H}_{(r_1, \ldots, r_d)}|$. Since $\lcm(S) =(r_1, \ldots, r_d)$ implies that $S \preceq D_{(r_1, \ldots, r_d)}$, we see that $h(r_1, \ldots, r_d) \leq g(r_1,\ldots, r_d)$ for all $r_1, \ldots, r_d$. Also, when $r_1 = \cdots = r_d = 1$, it's obvious that $g(r_1,\ldots, r_d) = h(r_1,\ldots, r_d) = 1$. For the general case, we have the following recursive formulas:
\begin{proposition}\label{Prop:0501}
Given $r_1, \ldots, r_d \in \mathbb{N}$,
\begin{itemize}
\item[(i)]
\[
g(r_1, \ldots, r_d) = 1 - \sum \left( \prod_{i=1}^d \mu(q_i) \right) g\left(\frac{r_1}{q_1}, \ldots, \frac{r_d}{q_d} \right)^{\prod_{i=1}^d q_i},
\]
where the sum is over $q_1, \ldots, q_d \in \mathbb{N}$ where $q_i | r_i$ for every $i \in [d]$, and $\prod_{i=1}^d q_i \neq 1$.
\item[(ii)]
\[
h(r_1, \ldots, r_d) = \sum \left( \prod_{i=1}^d \mu(q_i) \right) g\left(\frac{r_1}{q_1}, \ldots, \frac{r_d}{q_d} \right),
\]
where the sum is over $q_1, \ldots, q_d \in \mathbb{N}$ where $q_i | r_i$ for every $i \in [d]$.
\end{itemize}
\end{proposition}
\begin{proof}
We first prove $(i)$.
Given $n \in \mathbb{N}$, let $\mathbb{P}(n)$ be the set of prime divisors of $n$ (e.g., $\mathbb{P}(40) = \set{2,5}$). Now suppose $S \preceq D_{(r_1, \ldots, r_d)}$. Then either $S = \set{ (0,1)^d}$, or $S$ refines $\set{H_{i,p}((0,1)^d)}$ for some $i \in [d]$ and $p \in \mathbb{P}(r_i)$. Thus, if we define \[ \mathcal{W}_{i,p} = \set{ D \in \mathcal{S}_d : \set{H_{i,p}((0,1)^d)} \preceq D \preceq D_{(r_1, \ldots, r_d)} }, \] then we see that \[ \mathcal{G}_{(r_1,\ldots, r_d)}= \set{ \set{(0,1)^d}} \cup \bigcup_{i \in [d], p \in \mathbb{P}(r_i)} \mathcal{W}_{i,p}. \] We apply the principle of inclusion-exclusion to the above to compute the size of $\mathcal{G}_{(r_1,\ldots, r_d)}$. For convenience, given a finite set $S \subseteq \mathbb{N}$, we let $\pi(S)$ denote the product of all elements in $S$. Then notice that each intersection of a collection of $k$ sets from $\set{ \mathcal{W}_{i,p} : i \in [d], p \in \mathbb{P}(r_i)}$ can be uniquely written as \begin{equation}\label{Eq:0501} \bigcap_{i=1}^d \left( \bigcap_{p \in P_i} \mathcal{W}_{i,p} \right) = \set{ D \in \mathcal{S}_d : D_{(\pi(P_1), \ldots, \pi(P_d))} \preceq D \preceq D_{(r_1, \ldots, r_d)} }, \end{equation} for some choice of $P_1, \ldots, P_d$ where $P_i \subseteq \mathbb{P}(r_i)$ for every $i \in [d]$, and $\sum_{i=1}^d |P_i| = k$ (note that some of the $P_i$'s could be empty, in which case $\pi(P_i) = 1$). Now, the decomposition $D_{(\pi(P_1), \ldots, \pi(P_d))}$ consists of $m = \prod_{i=1}^d \pi(P_i)$ regions --- let's denote them by $B_1, \ldots, B_m$. Now, if $S$ is a decomposition that belongs to~\eqref{Eq:0501}, then $S \succeq D_{(\pi(P_1), \ldots, \pi(P_d))}$, and so \[ S_j = \set{ \scale_{B_j \to (0,1)^d}(R) : R \in S, R \subseteq B_j} \] is a decomposition in its own right for every $j \in [m]$. Furthermore, $S \preceq D_{(r_1,\ldots, r_d)}$ implies that $S_j \preceq D_{(r_1/\pi(P_1), \ldots, r_d/\pi(P_d))}$. Thus, we see that there is a bijection between~\eqref{Eq:0501} and $\mathcal{G}_{(r_1/\pi(P_1), \ldots, r_d/\pi(P_d))}^m$, and so~\eqref{Eq:0501} has size $g\left( \frac{r_1}{\pi(P_1)}, \ldots, \frac{r_d}{\pi(P_d)}\right)^{\prod_{i=1}^d \pi(P_i)}$. This gives \begin{align*} & g(r_1,\ldots, r_d) \\ ={}&1 + \left| \bigcup_{i \in [d], p \in \mathbb{P}(r_i)} \mathcal{W}_{i,p} \right|\\ ={}& 1+ \sum_{k \geq 1} (-1)^{k-1} \left( \sum_{\substack{P_1 \subseteq \mathbb{P}(r_1), \ldots, P_d \subseteq \mathbb{P}(r_i) \\ \sum_{i=1}^d |P_i|= k}} \left| \set{ D \in \mathcal{S}_d : D_{(\pi(P_1), \ldots, \pi(P_d))} \preceq D \preceq D_{(r_1, \ldots, r_d)} } \right| \right) \\ ={}& 1- \sum_{k \geq 1} (-1)^{k} \left( \sum_{\substack{P_1 \subseteq \mathbb{P}(r_1), \ldots, P_d \subseteq \mathbb{P}(r_i) \\ \sum_{i=1}^d |P_i|= k}} g\left( \frac{r_1}{\pi(P_1)}, \ldots, \frac{r_d}{\pi(P_d)}\right)^{\prod_{i=1}^d \pi(P_i)} \right). \end{align*} Next, notice that if we let $q_i = \pi(P_i)$ for every $i \in [d]$, then \begin{equation}\label{Eq:0502} \prod_{i=1}^d \mu(q_i) = \prod_{i=1}^d (-1)^{|P_i|} = (-1)^k. \end{equation} Thus, the sum above can be rewritten as \[ 1 - \sum \left( \prod_{i=1}^d \mu(q_i) \right) g\left(\frac{r_1}{q_1}, \ldots, \frac{r_d}{q_d} \right)^{\prod_{i=1}^d q_i}, \] where the sum is over $q_1, \ldots, q_d$ where $q_i$ is a product of prime divisors of $r_i$ for every $i$, and the condition $\sum_{i=1}^d |P_i| = k \geq 1$ translates to $\prod_{i=1}^d q_i \neq 1$. 
Moreover, since $\mu(q_i) = 0$ when $q_i$ is divisible by a non-trivial square, the sum above would remain the same if we expanded it to include all $q_i$'s that are divisors of $r_i$, which results in the claimed formula. Next, we prove $(ii)$ using a similar inclusion-exclusion argument as above. Given $S \in \mathcal{G}_{(r_1, \ldots, r_d)}$, then either $\lcm(S) = (r_1, \ldots, r_d)$ and $S \in \mathcal{H}_{(r_1,\ldots,r_d)}$, or there exists $i \in [d]$ and $p \in \mathbb{P}(r_i)$ such that $S \in \mathcal{G}_{(r_1, \ldots, r_i/p, \ldots, r_d)}$. Thus, \[ \mathcal{H}_{(r_1, \ldots, r_d)} = \mathcal{G}_{(r_1, \ldots, r_d)} \setminus \bigcup_{i \in [d], p \in \mathbb{P}(r_i)} \mathcal{G}_{(r_1, \ldots, r_i/p, \ldots, r_d)}. \] Now each intersection of a collection of $k$ sets from $\set{ \mathcal{G}_{(r_1, \ldots, r_i/p, \ldots, r_d)} : i \in [d], p \in \mathbb{P}(r_i)}$ can be uniquely written as \[ \bigcap_{i=1}^d \left( \bigcap_{p \in P_i} \mathcal{G}_{(r_1, \ldots, r_i/p, \ldots, r_d)} \right) = \mathcal{G}_{(r_1/\pi(P_1), \ldots, (r_d/\pi(P_d))}, \] for some choice of $P_1, \ldots, P_d$ where $P_i \subseteq \mathbb{P}(r_i)$ for every $i \in [d]$, and $\sum_{i=1}^d |P_i| = k$. Thus, we obtain that \begin{align*} & h(r_1,\ldots, r_d) \\ ={}& g(r_1, \ldots, r_d) - \left| \bigcup_{i \in [d], p \in \mathbb{P}(r_i)} \mathcal{G}_{(r_1, \ldots, r_i/p, \ldots, r_d)} \right|\\ ={}& g(r_1, \ldots, r_d)- \sum_{k \geq 1} (-1)^{k-1} \left( \sum_{\substack{P_1 \subseteq \mathbb{P}(r_1), \ldots, P_d \subseteq \mathbb{P}(r_i) \\ \sum_{i=1}^d |P_i|= k}} \left| \mathcal{G}_{(r_1/\pi(P_1), \ldots, (r_d/\pi(P_d))} \right| \right) \\ ={}& g(r_1, \ldots, r_d)+ \sum_{k \geq 1} (-1)^{k} \left( \sum_{\substack{P_1 \subseteq \mathbb{P}(r_1), \ldots, P_d \subseteq \mathbb{P}(r_i) \\ \sum_{i=1}^d |P_i|= k}} g\left( \frac{r_1}{\pi(P_1)}, \ldots, \frac{r_d}{\pi(P_d)} \right) \right) \\ ={}& \sum_{k \geq 0} (-1)^{k} \left( \sum_{\substack{P_1 \subseteq \mathbb{P}(r_1), \ldots, P_d \subseteq \mathbb{P}(r_i) \\ \sum_{i=1}^d |P_i|= k}} g\left( \frac{r_1}{\pi(P_1)}, \ldots, \frac{r_d}{\pi(P_d)} \right) \right) \\ ={}& \sum \left( \prod_{i=1}^d \mu(q_i) \right) g\left( \frac{r_1}{q_1}, \ldots, \frac{r_d}{q_d} \right), \end{align*} where we made the substitution $q_i = \pi(P_i)$ in the last equality and applied~\eqref{Eq:0502}. For the same rationale as in the proof of $(i)$, the above summation can be taken over all $q_1,\ldots, q_d$ where $q_i | r_i$, and our claim follows. \end{proof} With Proposition~\ref{Prop:0501}, we obtain a recursive formula to compute the number of hypercube decompositions with a given lcm. In the case of $d=1$, we obtain the following sequences: \[ \begin{array}{l|rrrrrrrrrrrrrrrrr} n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10& 11 & 12 & 13 & 14 & 15 & 16 & \cdots \\ \hline g(n) & 1 & 2 & 2 & 5 & 2 & 12 & 2 & 26 & 9 & 36 & 2 & 206 & 2 &132& 40&677& \cdots \\ h(n) & 1 & 1 & 1 & 3 & 1 & 9 & 1 & 21 & 7 & 33 & 1 & 191 & 1 & 129 & 37 & 651 & \cdots \end{array} \] Moreover, given an NECS $C = \set{ \res{a_i}{n_i} : i \in [k]}$, if we define $\lcm(C)$ to be the least common multiple of $n_1, \ldots, n_k$, then it is not hard to adapt Lemma~\ref{Lem:0501} to show that $\Phi$ preserves lcm as well. For example, the last column in Figure~\ref{Fig:0501} gives the lcm of the corresponding decomposition $S$, as well as that of the NECS $\Phi(S)$. Thus, we see that $g(n)$ gives the number of NECS whose lcm divides $n$, and $h(n)$ gives the number of NECS whose lcm is exactly $n$. 
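As an illustration of how Proposition~\ref{Prop:0501} is used in practice, the following short Python sketch (an illustrative aid only; the values can equally be computed in Maple or by hand) specializes the two recursions to $d=1$, with the classical M\"obius function implemented by trial division, and reproduces the table of $g(n)$ and $h(n)$ displayed above.
\begin{verbatim}
from functools import lru_cache

def mobius(n):
    # classical Moebius function, computed by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a square factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [q for q in range(1, n + 1) if n % q == 0]

@lru_cache(maxsize=None)
def g(n):
    # Proposition (i) with d = 1:
    # g(n) = 1 - sum over q | n, q != 1 of mu(q) * g(n/q)^q
    if n == 1:
        return 1
    return 1 - sum(mobius(q) * g(n // q) ** q
                   for q in divisors(n) if q != 1)

def h(n):
    # Proposition (ii) with d = 1:
    # h(n) = sum over q | n of mu(q) * g(n/q)
    return sum(mobius(q) * g(n // q) for q in divisors(n))

print([g(n) for n in range(1, 17)])
# [1, 2, 2, 5, 2, 12, 2, 26, 9, 36, 2, 206, 2, 132, 40, 677]
print([h(n) for n in range(1, 17)])
# [1, 1, 1, 3, 1, 9, 1, 21, 7, 33, 1, 191, 1, 129, 37, 651]
\end{verbatim}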
(Goulden et al.~\cite[Proposition 6]{GouldenGRS18} also obtained a recursive formula for finding the number of NECS with a given lcm, gcd, and number of residual classes.) Also, recall that $\mu(ab) = \mu(a)\mu(b)$ whenever $a,b$ are coprime. Thus, if $r_1, \ldots, r_d \in \mathbb{N}$ are pairwise coprime, then it follows from Proposition~\ref{Prop:0501} that $g(r_1, \ldots, r_d) = g\left( \prod_{i=1}^d r_i \right)$ and $h(r_1, \ldots, r_d) = h\left( \prod_{i=1}^d r_i \right)$. Hence, given $n \in \mathbb{N}$ with prime factorization $n = p_1^{m_1} \cdots p_d^{m_d}$, there is a one-to-one correspondence between NECS with lcm $n$ and hypercube decompositions in $\mathcal{S}_d$ with lcm $(p_1^{m_1}, \ldots, p_d^{m_d})$. This geometric interpretation of NECS is similar to the ``low-dimensional mapping'' used by Berger, Felzenbaum, and Fraenkel, who obtained a series of results on covering systems by mapping them to lattice parallelotopes and employing geometric and combinatorial arguments. The reader can refer to~\cite{BergerFF87} and the references therein for their results. It would be interesting to investigate further what more we can learn about covering systems by relating them to hypercube decompositions.
\end{document}
Stability of thermoelastic Timoshenko beam with suspenders and time-varying feedback

Soh Edwin Mukiawa (ORCID: 0000-0002-6668-1107), Cyril Dennis Enyi (ORCID: 0000-0001-9658-5864) & Salim A. Messaoudi (ORCID: 0000-0003-1061-0075)

Advances in Continuous and Discrete Models, volume 2023, Article number 7 (2023)

Abstract

This paper considers a one-dimensional thermoelastic Timoshenko beam system with suspenders, general weak internal damping with time-varying coefficient, and time-varying delay terms. Under suitable conditions on the nonlinear terms, we prove a general stability result for the beam model, where exponential and polynomial decay are special cases. We also give some examples to illustrate our theoretical findings.

Introduction

In this paper, we consider a thermoelastic Timoshenko beam with suspension cables, weak internal damping, and a time-varying delay damping of the form $$ \textstyle\begin{cases} \rho u_{tt}(x,t)-\alpha u_{xx}(x,t)-\lambda ( \varphi -u )(x,t) \\ \quad {} + \gamma _{1}a(t)g_{1}(u_{t}(x,t))+ \gamma _{2}a(t)g_{2}(u_{t}(x,t- \tau (t)))=0, \\ \rho _{1}\varphi _{tt}(x,t)-k (\varphi _{x} +\psi )_{x}(x,t) +\lambda (\varphi -u )(x,t)+\gamma _{3}\varphi _{t}(x,t)=0, \\ \rho _{2} \psi _{tt}(x,t)-b \psi _{xx}(x,t) + k (\varphi _{x} + \psi )(x,t)- m\theta _{x}(x,t)=0, \\ \rho _{3} \theta _{t}(x,t)-\beta \theta _{xx}(x,t) -m \psi _{xt}(x,t)=0, \end{cases} $$ where \((x,t)\in (0,1)\times (0,\infty )\), φ represents the transverse displacement (in the vertical direction) of the beam cross section, and ψ is the angle of rotation of a cross section. The vertical displacement of the vibrating spring (the cable) is represented by the function u, θ depicts the thermal moment of the beam, \(\lambda >0\) is the common stiffness of the string, and \(\alpha >0\) is the elastic modulus of the string (holding the cable to the deck). The positive constants ρ, \(\rho _{1}\), \(\rho _{2}\) are the density of the mass material of the cable, the mass density, and the moment of mass inertia of the beam, respectively. Also, b, k, β, m represent the rigidity coefficient of the cross-section, the shear modulus of elasticity, the thermal diffusivity, and the coupling coefficient, which depends on the material properties, respectively. The function \(\tau (t)>0\) is the time-varying delay, \(\gamma _{1}\) and \(\gamma _{2} \) are real positive damping constants, \(g_{1}\) and \(g_{2}\) are the damping functions, and \(a(t)\) is a nonlinear weight function. We supplement (1.1) with the boundary conditions $$ \textstyle\begin{cases} u(0,t)=\varphi _{x}(0,t)=\psi (0,t)=\theta _{x}(0,t)=0, \quad t>0, \\ u(1,t)=\varphi (1,t)=\psi _{x}(1,t)=\theta (1,t)=0, \quad t>0, \end{cases} $$ and the initial data $$ \textstyle\begin{cases} u(x,0)=u_{0}(x),\qquad \varphi (x,0)= \varphi _{0}(x), \\ \psi (x,0)= \psi _{0}(x), \qquad \theta (x,0)=\theta _{0}(x), & \text{in } (0,1), \\ u_{t}(x,0)=u_{1}(x),\qquad \varphi _{t}(x,0)= \varphi _{1}(x), \\ \psi _{t}(x,0)= \psi _{1}(x), & \text{in } (0,1), \\ u_{t}(x,t-\tau (0))=f_{0}(x,t-\tau (0)),\quad \text{in } (0,1)\times (0, \tau (0)). \end{cases} $$ The stability of the above thermoelastic Timoshenko system with suspension cables is the focus of this work. The Timoshenko beam model is arguably the most popular model used when the vibration of a beam exhibits significant transverse shear strain. A model to describe this phenomenon was introduced by Timoshenko [35] in 1921; see also [15, 18].
The nonlinear vibrations of suspension bridges have captured the attention of many researchers, and a number of research articles have been written on the topic. The somewhat unpredictable large oscillations of suspension bridges have been modeled in diverse ways; see, e.g., [1, 14, 25]. In any attempt to adequately describe the complicated dynamics of a suspension bridge, a robust model would be one with a considerable number of degrees of freedom. Some simplified models have also been considered, but these do not account for a number of realistic behaviors of suspension bridges, e.g., torsional oscillations. An advantage, however, is that rigorous mathematical analysis is more easily carried out with such simpler models. A typical simplified model is the one-dimensional vibrating beam model, where torsional motion is neglected by ignoring the sectional dimensions of the beam when they are negligible compared to the length of the beam. String-beam systems, which model a nonlinear coupling of a beam and a main cable (the string), emerged from the pioneering works of Lazer, McKenna, and Walter [23, 25, 26] (see also [7] and its references). Though such systems were initially modeled through the classical Euler–Bernoulli beam theory, the Timoshenko beam theory has been shown to perform better in predicting the response of a beam to vibrations. Indeed, the Timoshenko beam theory takes into account both rotary inertia and shear deformation effects, which are often neglected when applying the Euler–Bernoulli beam theory. In the Timoshenko beam with suspenders modeled by (1.1), the suspenders are elastic cables attached to the beam with elastic springs. The temperature dissipation here is assumed to be governed by the Fourier law of heat conduction. For \(a(t)\equiv 1\), \(g_{1}(s)\equiv s\) and \(\gamma _{2}\equiv 0\), \(g_{2}\equiv 0\) in system (1.1), Bochichio et al. [6] proved a well-posedness and an exponential stability result. A number of works have been done on different thermoelastic Timoshenko models without suspenders (see [10, 12, 16, 17, 28] and the references therein). Time delays occur in systems modeling many phenomena in areas such as biosciences, medicine, physics, robotics, economics, and chemical, thermal, and structural engineering. These phenomena depend on both the present state and some past history of occurrences; see [8, 9, 13, 21, 34] and the examples therein. In the case of a constant delay and constant weight, the delay term usually accounts for the past history of strain only up to some finite time \(\tau (t)\equiv \tau \). A step further involves results in the literature on constant weights (\(\gamma _{1}a(t)\equiv \gamma _{1}\), \(\gamma _{2}a(t)\equiv \gamma _{2}\) constants) and a time-varying delay \(\tau (t)\). Works presenting exponential stability results for the wave equation with boundary or internal time-varying delay appeared in Nicaise et al. [32, 33]. Enyi and Mukiawa in [11] presented a general decay result for a viscoelastic plate equation under the condition \(|\gamma _{2}|<|\gamma _{1}|\sqrt{(1-d)}\). Furthermore, in [4, 24], the authors presented existence and stability results for the wave equation with internal time-varying delay and time-varying weights; for suspension bridge models, see Mukiawa [3, 27, 29, 30].
Motivated by the works in [3, 6, 29], in the current paper, we are concerned with the stability result for the thermoelastic Timoshenko system with suspension cables, time-varying internal feedback, and time-varying weight given in (1.1)–(1.3). The result in [6] is a particular case of our result in this paper. We arrange this paper in the following manner. In Sect. 2, we state the needed assumptions. In Sect. 3, we present the proof of some technical and needed lemmas for our main result. In the last Sect. 4, we present and prove our main stability result. Throughout this paper, c and \(c_{i}\), \(i=1,2,\dots \), are generic positive constants, which are not necessarily the same from line to line. Functional settings and assumptions In this section, we state some needed assumptions on the damping coefficients, nonlinear functions, and the time-varying delay. As in [5, 32, 33], we assume the following conditions: \((A_{1})\): Function \(a:[0,+\infty )\rightarrow (0,+\infty )\) is a nonincreasing \(C^{1}\)-function such that there exists a positive constant C satisfying $$ \bigl\vert a'(t) \bigr\vert \leq Ca(t),\qquad \int _{0}^{+\infty}a(t)\,dt=+\infty . $$ Fuction \(g_{1}:\mathbb{R}\rightarrow \mathbb{R}\) is a nondecreasing \(C^{0}\)-function such that there exist positive constants \(C_{1}\), \(C_{2}\), r and a convex increasing function \(\chi \in C^{1}([0,+\infty ))\cap C^{2}((0,+\infty ))\) satisfying \(\chi (0)=0\) or χ is a nonlinear strictly convex \(C^{2}\)-function on \((0, r]\) with \(\chi '(0), \chi ''>0\) such that $$\begin{aligned}& s^{2}+g_{1}^{2}(s) \le \chi ^{-1}\bigl(s g_{1}(s)\bigr),\quad \text{for all } \vert s \vert \le r, \end{aligned}$$ $$\begin{aligned}& C_{1}s^{2} \le sg_{1}(s)\le C_{2}s^{2}, \quad \text{for all } \vert s \vert \ge r. \end{aligned}$$ Function \(g_{2}:\mathbb{R}\rightarrow \mathbb{R}\) is an increasing and odd \(C^{1}\)-function such that for some positive constants \(C_{3}\), \(\alpha _{1}\), \(\alpha _{2}\), $$\begin{aligned}& \bigl\vert g_{2}'(s) \bigr\vert \leq C_{3}, \end{aligned}$$ $$\begin{aligned}& \alpha _{1}\bigl(sg_{2}(s)\bigr) \leq G(s) \leq \alpha _{2}\bigl(sg_{1}(s)\bigr), \end{aligned}$$ $$ G(s)= \int _{0}^{s}g_{2}(r)\,dr. $$ There exist \(\tau _{0},\tau _{1}>0\) such that $$\begin{aligned}& 0 < \tau _{0}\leq \tau (t)\leq \tau _{1},\quad \forall t>0, \end{aligned}$$ $$\begin{aligned}& \tau \in W^{2,\infty}(0,T), \quad \forall T>0, \end{aligned}$$ $$\begin{aligned}& \tau '(t) \leq d< 1,\quad \forall t>0. \end{aligned}$$ The damping coefficients satisfy $$ \gamma _{2} \alpha _{2}(1-d\alpha _{1})< \alpha _{1}(1-d)\gamma _{1}. $$ Remark 2.1 Using the monotonicity of \(g_{2}\) and the mean value theorem for integrals, we deduce that $$ G(s)= \int _{0}^{s}g_{2}(r)\,dr< sg_{2}(s). $$ It follows from (2.5) that \(\alpha _{1}<1\). Similarly, as in Nicaise and Pignotti [31], we introduce the following change of variable: $$ z(x,\sigma , t)=u_{t}\bigl(x,t-\tau (t)\sigma \bigr), \quad \text{for } (x, \sigma ,t)\in (0,1)\times (0,1)\times (0,\infty ). $$ It follows that $$ \tau (t)z_{t}(x,\sigma ,t)+\bigl(1-\tau '(t)\sigma \bigr)z_{\sigma}(x,\sigma ,t)=0. 
$$ Therefore, system (1.1) becomes $$ \textstyle\begin{cases} \rho u_{tt}(x,t)-\alpha u_{xx}(x,t)-\lambda ( \varphi -u )(x,t)+ \gamma _{1}a(t)g_{1}(u_{t}(x,t)) \\ \quad {}+ \gamma _{2}a(t)g_{2}(z(x,1,t))=0, \\ \rho _{1}\varphi _{tt}(x,t)-k (\varphi _{x} +\psi )_{x}(x,t) +\lambda (\varphi -u )(x,t)+\gamma _{3}\varphi _{t}(x,t)=0, \\ \rho _{2} \psi _{tt}(x,t)-b \psi _{xx}(x,t) + k (\varphi _{x} + \psi )(x,t)- m\theta _{x}(x,t)=0, \\ \rho _{3} \theta _{t}(x,t)-\beta \theta _{xx}(x,t) -m \psi _{xt}(x,t)=0, \\ \tau (t)z_{t}(x,\sigma ,t)+(1-\tau '(t)\sigma )z_{\sigma}(x,\sigma ,t)=0, \end{cases} $$ subjected to the boundary conditions $$ \textstyle\begin{cases} u(0,t)=\varphi _{x}(0,t)=\psi (0,t)=\theta _{x}(0,t)=0, \quad t>0, \\ u(1,t)=\varphi (1,t)=\psi _{x}(1,t)=\theta (1,t)=0, \quad t>0, \\ z(x,0,t)=u_{t}(x,t), \quad x\in (0,1), t>0, \end{cases} $$ and initial data $$ \textstyle\begin{cases} u(x,0)=u_{0}(x),\qquad \varphi (x,0)= \varphi _{0}(x), \\ \psi (x,0)= \psi _{0}(x),\qquad \theta (x,0)=\theta _{0}(x), & \text{in } (0,1), \\ u_{t}(x,0)=u_{1}(x),\qquad \varphi _{t}(x,0)= \varphi _{1}(x), \\ \psi _{t}(x,0)= \psi _{1}(x), & \text{in } (0,1), \\ z(x,\sigma ,0)= u_{t}(x,-\tau (0)\sigma )=f_{0}(x,-\tau (0)\sigma ), & \text{in } (0,1)\times (0,1). \end{cases} $$ We introduce the following spaces: $$\begin{aligned} & H_{a}^{1}(0,1)=\bigl\{ \phi \in H^{1}(0,1): \phi (0) = 0\bigr\} , \\ & H_{b}^{1}(0,1)= \bigl\{ \phi \in H^{1}(0,1): \phi (1) = 0\bigr\} , \\ & H^{2}_{a}(0,1)=\bigl\{ \phi \in H^{2}(0,1): \phi _{x}\in H_{a}^{1}(0,1)\bigr\} , \\ & H^{2}_{b}(0,1)=\bigl\{ \phi \in H^{2}(0,1): \phi _{x}\in H_{b}^{1}(0,1) \bigr\} . \end{aligned}$$ For completeness, we state without proof the existence and uniqueness result for problem (1.1)–(1.3). The result can be established using the Faedo–Galerkin approximation method, see [5] or standard nonlinear semigroup method, see [19, 20]. Theorem 2.1 $$ \begin{aligned} (u_{0},\varphi _{0},\psi _{0},\theta _{0})&\in H^{2}(0,1)\cap H^{1}_{0}(0,1)\times H^{2}_{a}(0,1)\cap H^{1}_{b}(0,1) \times H^{2}_{b}(0,1) \\ &\quad {}\cap H^{1}_{a}(0,1) \times H^{2}_{a}(0,1)\cap H^{1}_{b}(0,1) \end{aligned} $$ $$ (u_{1},\varphi _{1},\psi _{1})\in H^{1}_{0}(0,1)\times H^{1}_{a}(0,1) \times H^{1}_{b}(0,1),\qquad f_{0}\bigl(\cdot ,-\tau (0) \bigr)\in H^{1}_{0} \bigl( (0,1);H^{1}(0,1) \bigr) $$ be given such that $$ f_{0}(\cdot ,0)=u_{1}. $$ Suppose conditions \((A_{1})\)–\((A_{4})\) hold. Then, problem (1.1)–(1.3) has a unique global weak solution in the class $$\begin{aligned} &u\in L^{\infty} \bigl( [0,+\infty);H^{2}(0,1) \cap H^{1}_{0}(0,1) \bigr), \qquad u_{t}\in L^{\infty} \bigl( [0,+\infty); H^{1}_{0}(0,1) \bigr), \\ & u_{tt} \in L^{\infty} \bigl( (0,+\infty );L^{2}(0,1) \bigr), \\ &\varphi \in L^{\infty} \bigl( [0,+\infty);H^{2}_{a}(0,1) \cap H^{1}_{b}(0,1) \bigr),\qquad \varphi _{t} \in L^{\infty} \bigl( [0,+\infty);H^{1}_{b}(0,1) \bigr), \\ &\varphi _{tt} \in L^{\infty} \bigl( (0,+\infty );L^{2}(0,1) \bigr), \\ &\psi \in L^{\infty} \bigl( [0,+\infty);H^{2}_{b}(0,1) \cap H^{1}_{a}(0,1) \bigr), \qquad \psi _{t} \in L^{\infty} \bigl( [0,+\infty);H^{1}_{a}(0,1) \bigr), \\ &\psi _{tt} \in L^{\infty} \bigl( (0,+\infty );L^{2}(0,1) \bigr), \\ &\theta \in L^{\infty} \bigl( [0,+\infty);H^{2}_{a}(0,1) \cap H^{1}_{b}(0,1) \bigr),\qquad \theta _{t}\in L^{\infty} \bigl( (0,+\infty );L^{2}(0,1) \bigr). \end{aligned}$$ Technical lemmas In this section, we prove some important lemmas which will be essential in establishing the main result. 
Let μ̄ be a positive constant satisfying $$ \frac{\gamma _{2}(1-\alpha _{1})}{\alpha _{1}(1-d)}< \bar{\mu}< \frac{\gamma _{1}-\gamma _{2}\alpha _{2}}{\alpha _{2}} $$ $$ \mu (t)=\bar{\mu}a(t). $$ The energy functional of system (2.14)–(2.16) is defined by $$ \begin{aligned} E(t)={}& \frac{1}{2} \int _{0}^{1} \bigl[\rho u_{t}^{2} + \rho _{1} \varphi _{t}^{2} + \rho _{2} \psi _{t}^{2} + \alpha u_{x}^{2} + k( \varphi _{x}+\psi )^{2}+b\psi _{x}^{2}+ \lambda (\varphi -u)^{2} \bigr]\,dx \\ &{}+ \frac{1}{2} \int _{0}^{1}\rho _{3}\theta ^{2}\,dx + \mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1} G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned} $$ Lemma 3.1 Let \((u,\varphi , \psi , \theta ,z)\) be the solution of system (2.14)–(2.16). Then, the energy functional (3.2) satisfies $$ \begin{aligned} \frac{dE(t)}{dt}\leq {}&-a(t) [ \gamma _{1}-\bar{\mu}\alpha _{2}- \gamma _{2}\alpha _{2} ] \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx \\ &{}-a(t) \bigl[\bar{\mu}\bigl(1-\tau '(t)\bigr)\alpha _{1}- \gamma _{2}(1-\alpha _{1}) \bigr] \int _{0}^{1}z(x,1,t)g_{2}\bigl(z(x,1,t) \bigr)\,dx \\ & {}-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1} \theta _{x}^{2}\,dx \\ \leq{} &0, \quad \forall t\geq 0. \end{aligned} $$ Multiplying (2.14)1 by \(u_{t}\), (2.14)2 by \(\varphi _{t}\), (2.14)3 by \(\psi _{t}\), and (2.14)4 by θ, integrating the outcome over \((0,1)\), and applying integration by parts and the boundary conditions, we get $$\begin{aligned} \begin{aligned} &\frac {1}{2}\frac {d}{dt} \int _{0}^{1} \bigl[ \rho u^{2}_{t} + \alpha u_{x}^{2}+\lambda (\varphi -u)^{2} \bigr]\,dx \\ & \quad =\lambda \int _{0}^{1}\varphi _{t}(\varphi -u)\,dx- \gamma _{1}a(t) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx- \gamma _{2}a(t) \int _{0}^{1}u_{t} g_{2} \bigl(z(x,1,t)\bigr)\,dx, \end{aligned} \end{aligned}$$ $$\begin{aligned} \begin{aligned} &\frac {1}{2}\frac {d}{dt} \int _{0}^{1} \bigl[ \rho _{1}\varphi _{t}^{2} +k(\varphi _{x}+\psi )^{2} \bigr]\,dx \\ & \quad =-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \lambda \int _{0}^{1} \varphi _{t}(\varphi -u)\,dx +k \int _{0}^{1} \psi _{t}(\varphi _{x}+ \psi )\,dx, \end{aligned} \end{aligned}$$ $$\begin{aligned} &\frac {1}{2}\frac {d}{dt} \int _{0}^{1} \bigl[ \rho _{2}\psi ^{2}_{t} + b\psi _{x}^{2} \bigr]\,dx= m \int _{0}^{1} \psi _{t}\theta _{x}\,dx-k \int _{0}^{1} \psi _{t}(\varphi _{x}+\psi )\,dx, \end{aligned}$$ $$\begin{aligned} &\frac {1}{2} \int _{0}^{1} \rho _{3}\theta ^{2}\,dx=-\beta \int _{0}^{1} \theta _{x}^{2}\,dx-m \int _{0}^{1}\psi _{t}\theta _{x}\,dx. \end{aligned}$$ Adding (3.4)–(3.7), we arrive at $$\begin{aligned}& \begin{aligned} &\frac{1}{2} \int _{0}^{1} \bigl( \rho u^{2}_{t} +\alpha u_{x}^{2}+ \lambda (\varphi -u)^{2}+\rho _{1}\varphi _{t}^{2} +k(\varphi _{x}+ \psi )^{2}+\rho _{2}\psi ^{2}_{t} + b\psi _{x}^{2}+\rho _{3} \theta ^{2} \bigr)\,dx \\ &\quad = -\gamma _{1}a(t) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx- \gamma _{2}a(t) \int _{0}^{1}u_{t} g_{2} \bigl(z(x,1,t)\bigr)\,dx \\ &\qquad {}-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1} \theta _{x}^{2}\,dx. \end{aligned} \end{aligned}$$ Now, multiplying equation (2.14)5 by \(\mu (t)g_{2}(z(x,\sigma ,t))\) and integrating over \((0,1)\times (0,1)\), we obtain $$ \begin{aligned} &\mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1} z_{t}(x,\sigma ,t) g_{2}\bigl(z(x, \sigma ,t)\bigr)\,d\sigma \,dx \\ &\quad {} +\mu (t) \int _{0}^{1} \int _{0}^{1}\bigl(1-\tau '(t)\sigma \bigr)z_{\sigma}(x, \sigma , t) g_{2}\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx=0. 
\end{aligned} $$ On account of (2.6), we can write $$ \frac{\partial }{\partial \sigma} \bigl[ G\bigl(z(x,\sigma ,t)\bigr) \bigr]=z_{ \sigma}(x, \sigma , t) g_{2}\bigl(z(x,\sigma ,t)\bigr). $$ Therefore, (3.9) becomes $$ \begin{aligned} &\mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1}z_{t}(x,\sigma ,t) g_{2}\bigl(z(x, \sigma ,t)\bigr)\,d\sigma \,dx \\ &\quad =-\mu (t) \int _{0}^{1} \int _{0}^{1}\bigl(1-\tau '(t)\sigma \bigr) \frac{\partial }{\partial \sigma} \bigl[ G\bigl(z(x,\sigma ,t)\bigr) \bigr]\,d\sigma \,dx. \end{aligned} $$ $$ \begin{aligned} &\frac{d}{dt} \biggl(\mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x, \sigma ,t)\bigr)\,d\sigma \,dx \biggr) \\ &\quad =-\mu (t) \int _{0}^{1} \int _{0}^{1}\frac{\partial}{\partial \sigma} \bigl[\bigl(1-\tau '(t)\sigma \bigr)G\bigl(z(x,\sigma ,t)\bigr) \bigr]\,d\sigma \,dx \\ &\qquad {} + \mu '(t)\tau (t) \int _{0}^{l} \int _{0}^{1}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\ &\quad =\mu (t) \int _{0}^{1} \bigl( G\bigl(z(x,0,t)\bigr)-G \bigl(z(x,1,t)\bigr) \bigr)\,dx +\mu (t) \tau '(t) \int _{0}^{1} G\bigl(z(x,1,t)\bigr)\,dx \\ &\qquad {}+\mu '(t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\ &\quad =\mu (t) \int _{0}^{1}G\bigl(u_{t}(x,t)\bigr)\,dx-\mu (t) \bigl(1-\tau '(t)\bigr) \int _{0}^{1}G(z(x,1,t)\,dx \\ &\qquad {}+\mu '(t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned} $$ Recalling the definition of the energy functional (3.2), and adding (3.8) and (3.12), we obtain $$ \begin{aligned} \frac{dE(t)}{dt}={}& -\gamma _{1}a(t) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx- \gamma _{2}a(t) \int _{0}^{1}u_{t} g_{2} \bigl(z(x,1,t)\bigr)\,dx \\ &{} +\mu (t) \int _{0}^{1}G\bigl(u_{t}(x,t)\bigr)\,dx-\mu (t) \bigl(1-\tau '(t)\bigr) \int _{0}^{1}G(z(x,1,t)\,dx \\ & {}-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1} \theta _{x}^{2}\,dx +\mu '(t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x, \sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned} $$ On the account of \((A_{1})\) and (2.5), we get $$ \begin{aligned} \frac{dE(t)}{dt}\leq{} &- \bigl( \gamma _{1}a(t)-\mu (t)\alpha _{2} \bigr) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx- \gamma _{2}a(t) \int _{0}^{1}u_{t} g_{2} \bigl(z(x,1,t)\bigr)\,dx \\ &{} -\mu (t) \bigl(1-\tau '(t)\bigr) \int _{0}^{1}G(z(x,1,t)\,dx -\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1}\theta _{x}^{2}\,dx . \end{aligned} $$ Now, we consider the convex conjugate of G defined by $$ G^{*}(s)=s\bigl(G'\bigr)^{-1}(s)-G \bigl(\bigl(G'\bigr)^{-1}(s)\bigr), \quad \forall s\geq 0, $$ which satisfies the generalized Young inequality (see [2]) $$ AB\leq G^{*}(A)+G(B), \quad \forall A,B >0. $$ Using (2.5) and the definition of G, we get $$ G^{*}(s)=sg_{2}^{-1}(s)-G \bigl(g_{2}^{-1}(s)\bigr), \quad \forall s\geq 0. $$ Therefore, on account of (2.5) and (3.17), we have $$ \begin{aligned} G^{*}\bigl(g_{2} \bigl(z(x,1,t)\bigr)\bigr)&=z(x,1,t)g_{2}\bigl(z(x,1,t)\bigr)-G \bigl(z(x,1,t)\bigr) \\ &\leq (1-\alpha _{1})z(x,1,t)g_{2}\bigl(z(x,1,t)\bigr). 
\end{aligned} $$ A combination of (3.14), (3.16), and (3.18) leads to $$\begin{aligned} \frac{dE(t)}{dt} \leq &- \bigl( \gamma _{1}a(t)-\mu (t)\alpha _{2} \bigr) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx \\ &{} +\gamma _{2}a(t) \int _{0}^{1}(G(u_{t})+G^{*} \bigl( g_{2}\bigl(z(x,1,t)\bigr)\bigr)\,dx \\ &{} -\mu (t) \bigl(1-\tau '(t)\bigr) \int _{0}^{1}G(z(x,1,t)\,dx -\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1}\theta _{x}^{2}\,dx \\ \leq& - \bigl( \gamma _{1}a(t)-\mu (t)\alpha _{2} \bigr) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx +\gamma _{2}a(t)\alpha _{2} \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx \\ &{} +\gamma _{2}a(t) (1-\alpha _{1}) \int _{0}^{1}z(x,1,t)g_{2}\bigl(z(x,1,t) \bigr)\,dx \\ &{} -\mu (t) \bigl(1-\tau '(t)\bigr) \int _{0}^{1}G(z(x,1,t)\,dx-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1}\theta _{x}^{2}\,dx \\ \leq& - \bigl( \gamma _{1}a(t)-\mu (t)\alpha _{2}-\gamma _{2}a(t) \alpha _{2} \bigr) \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx \\ &{} - \bigl(\mu (t) \bigl(1-\tau '(t)\bigr)\alpha _{1}- \gamma _{2}a(t) (1- \alpha _{1}) \bigr) \int _{0}^{1}z(x,1,t)g_{2}\bigl(z(x,1,t) \bigr)\,dx \\ &{} -\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1} \theta _{x}^{2}\,dx. \end{aligned}$$ Recalling that \(\mu (t)=\bar{\mu}a(t)\), it follows from (3.19) that $$ \begin{aligned} \frac{dE(t)}{dt}\leq{}& -a(t) [ \gamma _{1}-\bar{\mu}\alpha _{2}- \gamma _{2}\alpha _{2} ] \int _{0}^{1}u_{t} g_{1}(u_{t})\,dx \\ & {}-a(t) \bigl[\bar{\mu}\bigl(1-\tau '(t)\bigr)\alpha _{1}-\gamma _{2}(1- \alpha _{1}) \bigr] \int _{0}^{1}z(x,1,t)g_{2}\bigl(z(x,1,t) \bigr)\,dx \\ & {}-\gamma _{3} \int _{0}^{1}\varphi _{t}^{2}\,dx- \beta \int _{0}^{1} \theta _{x}^{2}\,dx. \end{aligned} $$ Using (2.9) and (3.1), we obtain the desired result. This finishes the proof. □ The functional \(F_{1}\), defined by $$ F_{1}(t):=-\rho _{2}\rho _{3} \int _{0}^{1}\psi _{t} \int _{0}^{x} \theta (y,t)\,dy\,dx, $$ satisfies, along the solution of system (2.14)–(2.16) and for any \(\epsilon _{1}, \epsilon _{2}>0\), the estimate $$ \begin{aligned} F'_{1}(t)\le{}& - \frac {m\rho _{2}}{2} \int _{0}^{1}\psi _{t}^{2}\,dx + \epsilon _{1} \int _{0}^{1}\psi _{x}^{2}\,dx+ \epsilon _{2} \int _{0}^{1}( \varphi _{x}+\psi )^{2}\,dx \\ &{}+c \biggl(1+\frac {1}{\epsilon _{1}} + \frac {1}{\epsilon _{2}} \biggr) \int _{0}^{1}\theta _{x}^{2}\,dx. \end{aligned} $$ Differentiating \(F_{1}\), using (2.14)3 and (2.14)4, then integrating by parts and exploiting the boundary conditions lead to $$ \begin{aligned} F_{1}'(t)={}& b \rho _{3} \int _{0}^{1}\psi _{x}\theta \,dx +k\rho _{3} \int _{0}^{1}(\varphi _{x}+\psi ) \int _{0}^{x}\theta (y,t)\,dy\,dx \\ & {}+ m\rho _{3} \int _{0}^{1}\theta ^{2}\,dx -\rho _{2}\beta \int _{0}^{1} \psi _{t}\theta _{x}\,dx-\rho _{2}m \int _{0}^{1}\psi _{t}^{2}\,dx. \end{aligned} $$ Making use of Cauchy–Schwarz, Young's, and Poincaré's inequalities, we get (3.21). 
□ $$ F_{2}(t):= \int _{0}^{1} \biggl(\rho uu_{t} +\rho _{1}\varphi \varphi _{t} + \rho _{2}\psi \psi _{t}+\frac{\gamma _{3}}{2}\varphi ^{2} \biggr)\,dx, $$ satisfies, along the solution of system (2.14)–(2.16), the estimate $$ \begin{aligned} F'_{2}(t)\le{}& - \int _{0}^{1} \biggl(\frac{\alpha}{2} u_{x}^{2}+ \lambda (\varphi -u)^{2}+k(\varphi _{x} +\psi )^{2}+\frac {b}{2}\psi _{x}^{2} \biggr)\,dx \\ & {}+ \int _{0}^{1} \bigl(\rho u_{t}^{2}+ \rho _{1}\varphi ^{2}_{t} + \rho _{2} \psi _{t}^{2} \bigr)\,dx +c \int _{0}^{1}\theta _{x}^{2}\,dx \\ &{} +c \int _{0}^{1} \bigl\vert g_{1}(u_{t}) \bigr\vert ^{2}\,dx+ c \int _{0}^{1} \bigl\vert g_{2} \bigl(z(x,1,t)\bigr) \bigr\vert ^{2}\,dx, \quad \forall t\geq 0. \end{aligned} $$ Directly differentiating \(F_{2}\), using (2.14)1, (2.14)2, and (2.14)3, then applying integration by parts and boundary conditions, we obtain $$ \begin{aligned} F'_{2}(t)=&{}- \int _{0}^{1} \bigl( \alpha u_{x}^{2}+ \lambda (\varphi -u)^{2}+k( \varphi _{x} +\psi )^{2}+b\psi _{x}^{2} \bigr)\,dx \\ &{}+ \int _{0}^{1} \bigl(\rho u_{t}^{2} + \rho _{1}\varphi ^{2}_{t} + \rho _{2} \psi _{t}^{2} \bigr)\,dx +m \int _{0}^{1} \psi \theta _{x}\,dx \\ &{}-\gamma _{1}a(t) \int _{0}^{1} ug_{1}(u_{t})\,dx- \gamma _{2}a(t) \int _{0}^{1}ug_{2}\bigl(z(x,1,t)\bigr)\,dx. \end{aligned} $$ Using \((A_{1})\), Young's and Poincaré's inequalities, we obtain (3.23). □ The functional $$ F_{3}(t):=\bar{\mu}\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t) \sigma}G\bigl(z(x,\sigma ,t) \bigr)\,d\sigma \,dx, $$ $$ \begin{aligned} F'_{3}(t) \le &-2F_{3}(t)+ \frac{\bar{\mu}\alpha _{2}}{2} \int _{0}^{1} \bigl( u_{t}^{2}+ \bigl\vert g_{1}(u_{t}) \bigr\vert ^{2} \bigr)\,dx, \quad \forall t \geq 0. \end{aligned} $$ Differentiating \(F_{3}\), we get $$ \begin{aligned} F'_{3}(t) ={}& \bar{\mu}\tau '(t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t) \sigma}G\bigl(z(x,\sigma ,t) \bigr)\,d\sigma \,dx \\ &{} -2\bar{\mu}\tau (t)\tau '(t) \int _{0}^{1} \int _{0}^{1}\sigma e^{-2 \tau (t)\sigma}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\ & {}+\bar{\mu}\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t) \sigma}z_{t}(x, \sigma ,t)g_{2}\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned} $$ Using the last equation in (2.14), we can express the last term on the right hand-side of (3.26) as $$\begin{aligned}& \tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma}z_{t}(x, \sigma ,t)g_{2}\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\& \quad = \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma} \bigl(\tau '(t) \sigma -1 \bigr) z_{\sigma}(x,\sigma ,t)g_{2} \bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\& \quad = \int _{0}^{1} \int _{0}^{1}\frac{\partial}{\partial \sigma} \bigl[ e^{-2 \tau (t)\sigma} \bigl(\tau '(t)\sigma -1 \bigr)G\bigl(z(x,\sigma ,t)\bigr) \bigr]\,d\sigma \,dx \\& \begin{aligned}&\qquad {}+2\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma} \bigl(\tau '(t)\sigma -1 \bigr)G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\ &\qquad {}-\tau '(t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma}G\bigl(z(x, \sigma ,t) \bigr)\,d\sigma \,dx \end{aligned}\\& \quad =-\bigl(1-\tau '(t)\bigr)e^{-2\tau (t)} \int _{0}^{1}G\bigl(z(x,1,t)\bigr)\,dx + \int _{0}^{l}G(u_{t})\,dx \\& \qquad {}+2\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma} \bigl(\tau '(t)\sigma -1 \bigr)G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx \\& \qquad {}-\tau '(t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t)\sigma}G\bigl(z(x, \sigma ,t) \bigr)\,d\sigma \,dx. 
\end{aligned}$$ Substituting (3.27) into (3.26), we arrive at $$ \begin{aligned} F'_{3}(t) ={}&-2 \bar{\mu}\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2\tau (t) \sigma}G\bigl(z(x,\sigma ,t) \bigr)\,d\sigma \,dx +\bar{\mu} \int _{0}^{1}G(u_{t})\,dx \\ &{} -\bigl(1-\tau '(t)\bigr)e^{-2\tau (t)} \int _{0}^{1}G\bigl(z(x,1,t)\bigr)\,dx. \end{aligned} $$ Using condition (2.5) and Young's inequality, we obtain (3.26). □ Let \((u,\varphi , \psi ,\theta , z)\) be the solution of system (2.14)–(2.16). Then, for \(N, N_{1}, N_{2}>0\) sufficiently large, the Lyapunov functional L, defined by $$\begin{aligned} L(t):=NE(t)+N_{1}F_{1}(t)+ N_{2}F_{2}(t)+F_{3}(t), \end{aligned}$$ satisfies, for some positive constants \(c_{1}\), \(c_{2}\), η, $$ c_{1}E(t) \le L(t) \le c_{2}E(t), \quad \forall t \ge 0, $$ $$ \begin{aligned} L'(t)\le -\eta E(t)+c \int _{0}^{1}\bigl(u^{2}_{t}+ \bigl\vert g_{1}(u_{t}) \bigr\vert ^{2} \bigr)\,dx+ c \int _{0}^{1} \bigl\vert g_{2} \bigl(z(x,1,t)\bigr) \bigr\vert ^{2}\,dx, \quad \forall t \ge 0. \end{aligned} $$ Applying Cauchy–Schwarz, Young's, and Poincaré's inequalities, we have $$\begin{aligned} \bigl\vert L(t)-NE(t) \bigr\vert \leq& N_{1} \biggl\vert -\rho _{2} \int _{0}^{1}\psi _{t} \int _{0}^{x} \theta (y,t)\,dy\,dx \biggr\vert \\ &{} + N_{2} \biggl\vert \int _{0}^{1} \biggl(\rho uu_{t} +\rho _{1} \varphi \varphi _{t} + \rho _{2}\psi \psi _{t}+\frac{\gamma _{3}}{2} \varphi ^{2} \biggr)\,dx \biggr\vert \\ &{} + \biggl\vert \bar{\mu}\tau (t) \int _{0}^{1} \int _{0}^{1} e^{-2 \tau (t)\sigma}G\bigl(z(x,\sigma ,t) \bigr)\,d\sigma \,dx \biggr\vert \\ \leq& \frac{N_{2}\rho}{2} \int _{0}^{1}u_{t}^{2}\,dx + \frac{N_{2}\rho _{1}}{2} \int _{0}^{1}\varphi _{t}^{2}\,dx+ \frac{(N_{1}+N_{2})\rho _{2}}{2} \int _{0}^{1}\psi _{t}^{2}\,dx \\ &{} +\frac{N_{2}\gamma _{3}}{2} \int _{0}^{1}\varphi ^{2}\,dx+ \frac{N_{2}\rho}{2} \int _{0}^{1}u_{x}^{2}\,dx + \frac{N_{2}\rho _{1}}{2} \int _{0}^{1}\varphi _{x}^{2}\,dx \\ &{} + \frac{N_{2}\rho _{2}}{2} \int _{0}^{1}\psi _{x}^{2}\,dx + \frac{ N_{1}\rho _{2}}{2} \int _{0}^{1} \biggl( \int _{0}^{x}\theta (y,t)\,dy \biggr)^{2}\,dx \\ &{} +\bar{\mu}\tau (t) \int _{0}^{1} \int _{0}^{1} G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned}$$ Using the relations $$ \begin{aligned} & \int _{0}^{1}\varphi ^{2}\,dx\leq 2 \int _{0}^{l}(\varphi -u)^{2}\,dx + 2 \int _{0}^{1} u_{x}^{2}\,dx, \\ & \int _{0}^{1}\varphi _{x}^{2}\,dx \leq 2 \int _{0}^{1}(\varphi _{x}+ \psi )^{2}\,dx +2 \int _{0}^{1}\psi _{x}^{2}\,dx, \end{aligned} $$ we arrive at $$ \begin{aligned} \bigl\vert L(t)-NE(t) \bigr\vert \leq{} &\frac{N_{2}\rho}{2} \int _{0}^{1}u_{t}^{2}\,dx + \frac{N_{2}\rho _{1}}{2} \int _{0}^{1}\varphi _{t}^{2}\,dx+ \frac{(N_{1}+N_{2})\rho _{2}}{2} \int _{0}^{1}\psi _{t}^{2}\,dx \\ &{} +N_{2}\gamma _{3} \int _{0}^{l}(\varphi -u)^{2}\,dx+ \biggl( N_{2} \gamma _{3}+\frac{N_{2}\rho}{2} \biggr) \int _{0}^{1}u_{x}^{2}\,dx \\ & {}+N_{2}\rho _{1} \int _{0}^{1}(\varphi _{x}+\psi )^{2}\,dx + \biggl( \frac{N_{2}(\rho _{1}+\rho _{2})}{2} \biggr) \int _{0}^{1}\psi _{x}^{2}\,dx \\ &{} +\frac{ N_{1}\rho _{2}}{2} \int _{0}^{1}\theta ^{2}\,dx+\bar{\mu} \tau (t) \int _{0}^{1} \int _{0}^{1} G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx. \end{aligned} $$ From (3.33), we obtain $$ \bigl\vert L(t)-NE(t) \bigr\vert \leq \bar{c}E(t). $$ By choosing N large enough such that $$ c_{1}=N- \bar{c}>0,\qquad c_{2}=N+ \bar{c}>0, $$ estimate (4.14) follows. Next, we establish (3.31). 
Using Lemmas 3.1–3.4, we get $$ \begin{aligned} L'(t)\le{}& -\rho \int _{0}^{1}u_{t}^{2}\,dx- [N \gamma _{3}-N_{2} \rho _{1} ] \int _{0}^{1}\varphi _{t}^{2}\,dx- \biggl[N_{1} \frac{m\rho _{2}}{2}-N_{2}\rho _{2} \biggr] \int _{0}^{1}\psi _{t}^{2}\,dx \\ & {}-\frac{N_{2}\alpha}{2} \int _{0}^{1}u_{x}^{2}\,dx-N_{2} \lambda \int _{0}^{1}(\varphi -u)^{2}\,dx- [N_{2}k-N_{1}\epsilon _{2} ] \int _{0}^{1}(\varphi _{x}+\psi )^{2}\,dx \\ &{} - \biggl[N_{2}\frac{b}{2}-N_{1}\epsilon _{1} \biggr] \int _{0}^{1} \psi _{x}^{2}\,dx- \biggl[N\beta -N_{1}c \biggl(1+\frac{1}{\epsilon _{1}} + \frac{1}{\epsilon _{2}} \biggr)-N_{2}c \biggr] \int _{0}^{1}\theta _{x}^{2}\,dx \\ &{} -\frac{2e^{-2\tau _{1}}}{a(0)}\mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx + \biggl[\rho +N_{2}\rho + \frac{\bar{\mu }\alpha _{2}}{2} \biggr] \int _{0}^{1}u_{t}^{2}\,dx \\ &{} + \biggl[cN_{2}+ \frac{\bar{\mu}\alpha _{2}}{2} \biggr] \int _{0}^{1} \bigl\vert g_{1}(u_{t}) \bigr\vert ^{2}\,dx +cN_{2} \int _{0}^{1} \bigl\vert g_{2} \bigl(z(x,1,t)\bigr) \bigr\vert ^{2}\,dx. \end{aligned} $$ Choosing $$ N_{2}=1, \qquad \epsilon _{1}=\frac {N_{2}b}{4N_{1}}, \qquad \epsilon _{2}= \frac {N_{2}k}{2N_{1}}, $$ we obtain $$ \begin{aligned} L'(t)\le{}& -\rho \int _{0}^{1}u_{t}^{2}\,dx- [N \gamma _{3}-\rho _{1} ] \int _{0}^{1}\varphi _{t}^{2}\,dx- \biggl[N_{1}\frac{m\rho _{2}}{2}- \rho _{2} \biggr] \int _{0}^{1}\psi _{t}^{2}\,dx \\ & {}-\frac{\alpha}{2} \int _{0}^{1}u_{x}^{2}\,dx-\lambda \int _{0}^{1}( \varphi -u)^{2}\,dx- \frac {k}{2} \int _{0}^{1}(\varphi _{x}+\psi )^{2}\,dx \\ & {}-\frac{b}{4} \int _{0}^{1}\psi _{x}^{2}\,dx- \biggl[N\beta -N_{1}c \biggl(1+\frac{4N_{1}}{b} +\frac{2N_{1}}{k} \biggr)-c \biggr] \int _{0}^{1} \theta _{x}^{2}\,dx \\ & {}-\frac{2e^{-2\tau _{1}}}{a(0)}\mu (t)\tau (t) \int _{0}^{1} \int _{0}^{1}G\bigl(z(x,\sigma ,t)\bigr)\,d\sigma \,dx + \biggl[2\rho + \frac{\bar{\mu }\alpha _{2}}{2} \biggr] \int _{0}^{1}u_{t}^{2}\,dx \\ &{} + \biggl[c+ \frac{\bar{\mu }\alpha _{2}}{2} \biggr] \int _{0}^{1} \bigl\vert g_{1}(u_{t}) \bigr\vert ^{2}\,dx +c \int _{0}^{1} \bigl\vert g_{2} \bigl(z(x,1,t)\bigr) \bigr\vert ^{2}\,dx. \end{aligned} $$ Now, we choose \(N_{1}\) large such that $$ N_{1}\frac{m\rho _{2}}{2}-\rho _{2}>0. $$ Next, we select N very large so that (4.14) remains true and $$ N\gamma _{3}-\rho _{1}>0,\qquad N\beta -N_{1}c \biggl(1+ \frac{4N_{1}}{b} +\frac{2N_{1}}{k} \biggr)-c>0. $$ Therefore, using the energy functional defined by (3.2), we obtain (3.31). □ Stability result In this section, we are concerned with the main stability result (Theorem 4.1), which is stated as follows. Let \((u,\varphi , \psi , \theta , z)\) be the solution of system (2.14)–(2.16) and assume (A1)–(A4) hold. Then, for some positive constants \(\delta _{1}\), \(\delta _{2}\), \(\delta _{3}\), and \(r_{0}\), the energy functional (3.2) satisfies $$ E(t)\le \delta _{1}\chi _{1}^{-1} \biggl(\delta _{2} \int _{0}^{t} a(s)\,ds +\delta _{3} \biggr),\quad t \ge 0, $$ where $$ \chi _{1}(t)= \int _{t}^{1}\frac {1}{\chi _{0}(s)}\,ds\quad \textit{and}\quad \chi _{0}(t)=t \chi '(r_{0} t). $$ We divide the proof into two cases: Case I: χ is linear. Using (A2), we get $$ C_{1} \vert s \vert \le \bigl\vert g_{1}(s) \bigr\vert \le C_{2} \vert s \vert , \quad \forall s\in \mathbb{R}, $$ and hence $$\begin{aligned} g_{1}^{2}(s)\le C_{2}sg_{1}(s), \quad \forall s\in \mathbb{R}.
\end{aligned}$$ Therefore, multiplying (3.31) by \(a(t)\) and using (3.3) and (4.2), we conclude that $$\begin{aligned} a(t)L'(t)\le &-\eta a(t)E(t)+ca(t) \int _{0}^{1}u_{t}g_{1}(u_{t})\,dx+ca(t) \int _{0}^{1}z(x,1,t)g_{2}\bigl(z(x,1,t) \bigr)\,dx \\ \le & -\eta a(t)E(t)-cE'(t), \quad \forall t\in \mathbb{R}^{+}. \end{aligned}$$ Exploiting (A2) and (3.30), it follows that $$ L_{0}(t):= a(t)L(t)+cE(t)\sim E(t) $$ and, for some constant \(\eta _{1}>0\), the functional \(L_{0}\) satisfies $$ L'_{0}(t)\le -\eta _{1}a(t)L_{0}(t), \quad \forall t\ge 0. $$ A simple integration of (4.4) over \((0,t)\), using (4.3), yields $$ E(t)\le \delta _{1} \exp \biggl( -\delta _{2} \int _{0}^{t}a(s)\,ds \biggr)=\delta _{1} \chi _{1}^{-1} \biggl(\delta _{2} \int _{0}^{t} a(s)\,ds \biggr), \quad \forall t\geq 0. $$ Case II: χ is nonlinear on \([0,r]\). Here, as in [22], we select \(0< r_{1}\le r\) so that $$ sg_{1}(s)\le \min \bigl\{ r, \chi (r) \bigr\} , \quad \forall \vert s \vert \le r_{1}. $$ On account of (A2) and the continuity of \(g_{1}\) with the fact that \(|g_{1}(s)|>0\), for \(s\ne 0\), we conclude that $$ \textstyle\begin{cases} s^{2}+g_{1}^{2}(s) \le \chi ^{-1}(sg_{1}(s)), \quad \forall \vert s \vert \le r_{1}, \\ C'_{1} \vert s \vert \le \vert g_{1}(s) \vert \le C'_{2} \vert s \vert , \hfill \quad \forall \vert s \vert \ge r_{1}. \end{cases} $$ Now, we introduce the following partitions: $$\begin{aligned}& I_{1}= \bigl\{ x\in (0, 1): \vert u_{t} \vert \le r_{1} \bigr\} , \qquad I_{2}= \bigl\{ x\in (0, 1): \vert u_{t} \vert >r_{1} \bigr\} ,\\& \tilde{I_{1}}= \bigl\{ x\in (0, 1): \bigl\vert z(x,1,t) \bigr\vert \le r_{1} \bigr\} ,\qquad \tilde{I_{2}}= \bigl\{ x\in (0, 1): \bigl\vert z(x,1,t) \bigr\vert >r_{1} \bigr\} \end{aligned}$$ and the functional h, defined by $$ h(t)= \int _{I_{1}}u_{t}g_{1}(u_{t})\,dx. $$ Using the fact that \(\chi ^{-1}\) is concave and Jensen's inequality, it follows that $$ \chi ^{-1}\bigl(h(t)\bigr)\ge c \int _{I_{1}}\chi ^{-1} \bigl(u_{t}g_{1}(u_{t}) \bigr)\,dx. $$ Combining (4.7) and (4.8), we have $$\begin{aligned} a(t) \int _{0}^{1} \bigl(u_{t}^{2}+g_{1}^{2}(u_{t}) \bigr)\,dx&=a(t) \int _{I_{1}} \bigl(u_{t}^{2}+g_{1}^{2}(u_{t}) \bigr)\,dx+a(t) \int _{I_{2}} \bigl(u_{t}^{2}+g_{1}^{2}(u_{t}) \bigr)\,dx \\ &\le a(t) \int _{I_{1}}\chi ^{-1} \bigl(u_{t}g_{1}(u_{t}) \bigr)\,dx+ca(t) \int _{I_{2}}u_{t}g_{1}(u_{t})\,dx \\ &\le ca(t)\chi ^{-1}\bigl(h(t)\bigr)-cE'(t) . \end{aligned}$$ $$ \begin{aligned} a(t) \int _{0}^{1} g_{2}^{2} \bigl(z(x,1,t)\bigr)\,dx={}& a(t) \int _{\bar{I_{1}}}g_{2}^{2}\bigl(z(x,1,t)\bigr)\,dx + a(t) \int _{\bar{I_{2}}}g_{2}^{2}\bigl(z(x,1,t)\bigr)\,dx \\ \leq{}& ca(t) \int _{\bar{I_{1}}}z(x,1,t)g_{2}\bigl(z(x,1,t)\bigr)\,dx \\ &{} +a(t) \int _{\bar{I_{2}}}z(x,1,t)g_{2}\bigl(z(x,1,t)\bigr)\,dx \\ \leq{}& -cE'(t). \end{aligned} $$ Multiplying (3.31) by \(a(t)\) and using (4.9) and (4.10), we obtain $$ a(t)L'(t)+cE'(t)\le -\eta a(t)E(t)+ca(t)\chi ^{-1}\bigl(h(t)\bigr). $$ It follows from (A1) that $$ L_{1}'(t)\le -\eta a(t)E(t)+ca(t)\chi ^{-1}\bigl(h(t)\bigr), $$ $$ L_{1}(t)=a(t)L(t)+cE(t)\sim E (t)\quad \text{by virtue of (3.30)}. $$ Let \(r_{0}< r\) and \(\eta _{0}>0\) to be specified later. 
Then, combining (4.12) and the fact that $$ E'\le 0,\qquad \chi '>0,\qquad \chi ''>0 \quad \text{on } (0, r], $$ the functional \(L_{2}\), defined by $$ L_{2}(t):=\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)L_{1}(t)+\eta _{0}E(t), $$ satisfies $$ \kappa _{1}L_{2}(t) \le E(t) \le \kappa _{2}L_{2}(t) $$ for some positive constants \(\kappa _{1}\), \(\kappa _{2}\), and $$\begin{aligned} L_{2}'(t)&=r_{0} \frac {E'(t)}{E(0)}\chi '' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr)L_{1}(t)+\chi ' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr)L_{1}'(t)+ \eta _{0}E'(t) \\ &\le -\eta a(t) E(t)\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)+ \underbrace{ca(t)\chi ' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr)\chi ^{-1} \bigl(h(t) \bigr)}_{A}+ \eta _{0}E'(t). \end{aligned}$$ To estimate the term A in (4.15), we consider the convex conjugate of χ denoted by \(\chi ^{*}\), defined by $$ \chi ^{*}(y)=y\bigl(\chi ' \bigr)^{-1}(y)-\chi \bigl[\bigl(\chi '\bigr)^{-1}(y) \bigr]\le y\bigl(\chi ^{ \prime}\bigr)^{-1}(y),\quad \text{if }y\in (0, \chi '(r)], $$ and which satisfies the generalized Young's inequality $$ XY\le \chi ^{*}(X)+\chi (Y), \quad \text{if }X\in \big(0, \chi '(r)\big], Y \in (0, r]. $$ Taking \(X=\chi ' (r_{0}\frac {E(t)}{E(0)} )\) and \(Y=\chi ^{-1} (h(t) )\) and recalling Lemma 3.1 and (4.6), then (4.15)–(4.17) lead to $$ \begin{aligned} L_{2}'(t)\le {}& -\eta a(t) E(t) \chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)+ca(t) \biggl[ \chi * \biggl( \chi ' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr) \biggr) +\chi \bigl( \chi ^{-1} \bigl(h(t) \bigr) \bigr) \biggr] \\ &{}+\eta _{0}E'(t) \\ ={}&-\eta a(t) E(t)\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)+ca(t) \chi * \biggl( \chi ' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr) \biggr) \\ &{}+ca(t)h(t)+ \eta _{0}E'(t) \\ \le {}&-\eta a(t) E(t)\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)+ cr_{0}a(t) \biggl(\frac{E(t)}{E(0)} \biggr)\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr) \\ &{}-cE'(t)+ \eta _{0}E'(t) \\ \le {}&-\bigl(\eta E(0)-cr_{0}\bigr)a(t) \biggl(\frac{E(t)}{E(0)} \biggr)\chi ' \biggl(r_{0}\frac {E(t)}{E(0)} \biggr)+(\eta _{0}-c)E'(t). \end{aligned} $$ By choosing \(r_{0}=\frac {\eta E(0)}{2c}\), \(\eta _{0}=2c\), and recalling that \(E'(t)\le 0\), we arrive at $$\begin{aligned} L_{2}'(t)\le {}&-\eta _{1}a(t) \frac {E(t)}{E(0)}\chi ' \biggl(r_{0} \frac {E(t)}{E(0)} \biggr)=-\eta _{1}a(t)\chi _{0} \biggl( \frac {E(t)}{E(0)} \biggr), \end{aligned}$$ where \(\eta _{1}>0\) and \(\chi _{0}(t)=t\chi '(r_{0}t)\). Now, since χ is strictly convex on \((0,r]\), we conclude that \(\chi _{0}(t)>0\), \(\chi '_{0}(t) >0\) on \((0, 1]\). Using (4.14) and (4.19), it follows that the functional $$ L_{3}(t)=\frac{\kappa _{1}L_{2}(t)}{E(0)} $$ $$ L_{3}(t)\sim E(t) $$ and, for some \(\delta _{2}>0\), $$ L_{3}'(t)\le -\delta _{2}a(t) \chi _{0} \bigl(L_{3}(t) \bigr), $$ which yields $$ \bigl[\chi _{1} \bigl(L_{3}(t) \bigr) \bigr]'\ge \delta _{2}a(t), $$ $$ \chi _{1}(t)= \int _{t}^{1}\frac {1}{\chi _{0}(s)}\,ds, \quad t\in (0,1]. $$ Integrating (4.22) over \([0, t]\), keeping in mind the properties of \(\chi _{0}\), and the fact that \(\chi _{1}\) is strictly decreasing on \((0, 1]\), we obtain $$ L_{3}(t)\le \chi _{1}^{-1} \biggl( \delta _{2} \int _{0}^{t}a(s)\,ds+ \delta _{3} \biggr), \quad \forall t\in \mathbb{R}^{+}, $$ for some \(\delta _{3}>0\). Using (4.20) and (4.23), the proof of Theorem 4.1 is completed. □ We end this section by giving some examples to illustrate the obtained result. 
Let $$ g_{0}\in C^{2} \bigl( [0,+\infty) \bigr) $$ be a strictly increasing function such that \(g_{0}(0)=0\) and, for some positive constants \(c_{1}\), \(c_{2}\), and \(r\), the function \(g_{1}\) satisfies $$ \begin{aligned} &g_{0}\bigl( \vert s \vert \bigr)\leq \bigl\vert g_{1}(s) \bigr\vert \leq g_{0}^{-1}\bigl( \vert s \vert \bigr), \quad \forall \vert s \vert \leq r, \\ &c_{1}s^{2}\leq sg_{1}(s)\leq c_{2}s^{2}, \quad \forall \vert s \vert \geq r. \end{aligned} $$ We consider the function $$ \chi (s)= \biggl( \sqrt{\frac{s}{2}} \biggr)g_{0} \biggl(\sqrt{ \frac{s}{2}} \biggr). $$ It follows that χ is a \(C^{2}\)-strictly convex function on \((0,r]\) when \(g_{0}\) is nonlinear and therefore satisfies condition \((A_{2})\). Now, we give some examples of \(g_{0}\) such that \(g_{1}\) satisfies (5.1) near 0. Let \(g_{0}(s)= \lambda s\), where \(\lambda >0\) is a constant; then \(\chi (s)= \bar{\lambda} s\), with \(\bar{\lambda}=\frac {\lambda}{2}\), satisfies \((A_{2})\) near 0 and from (4.1), we get $$ E(t)\leq \bar{\delta}\exp \biggl( -\delta _{2} \int _{0}^{t}a(s)\,ds \biggr), \quad \forall t\geq 0. $$ Let \(g_{0}(s) = \frac {1}{s}e^{-\frac{1}{s^{2}}}\); then \(\chi (s)= e^{-\frac{2}{s}}\) satisfies \((A_{2})\) in the neighborhood of 0 and from (4.1), we obtain $$ E(t)\leq \delta _{1} \biggl( \ln \biggl(\delta _{2} \int _{0}^{t} a(s)\,ds+ \delta _{3} \biggr) \biggr)^{-1}, \quad \forall t\geq 0. $$ Let \(g_{0}(s) = e^{-\frac{1}{s}}\); then \(\chi (s)= \sqrt{\frac{s}{2}} e^{-\sqrt{\frac{2}{s}}}\) satisfies \((A_{2})\) near 0 and using (4.1), we obtain $$ E(t)\leq \delta _{1} \biggl( \ln \biggl(\delta _{2} \int _{0}^{t} a(s)\,ds + \delta _{3} \biggr) \biggr)^{-2}, \quad \forall t\geq 0. $$ In this work, we obtained some general decay results for a thermoelastic Timoshenko beam system with suspenders, general weak internal damping, a time-varying coefficient, and time-varying delay terms. The damping structure in system (2.14)–(2.16) is sufficient to stabilize the system without the additional conditions on the coefficient parameters that are required for many Timoshenko beam systems in the literature. The result of the present paper generalizes the one established in Bochicchio et al. [6] and allows a large class of functions that satisfy condition \((A_{2})\). We also gave some examples to illustrate our theoretical findings. Data sharing is not applicable to this article as no new data were created or analyzed in this study. Ahmed, N.U., Harbi, H.: Mathematical analysis of dynamic models of suspension bridges. SIAM J. Appl. Math. 109, 853–874 (1998) Arnold, V.I.: Mathematical Methods of Classical Mechanics. Springer, New York (1989) Audu, J., Mukiawa, S.E., Almeida Júnior, D.S.: General decay estimate for a two-dimensional plate equation with time-varying feedback and time-varying coefficient. Results Appl. Math. 12, 100219 (2021) Benaissa, A., Benguessoum, A., Messaoudi, S.A.: Energy decay of solutions for a wave equation with a constant weak delay and a weak internal feedback. Electron. J. Qual. Theory Differ. Equ. 11, 13 (2014) Benaissa, A., Messaoudi, S.A.: Global existence and energy decay of solutions for the wave equation with a time varying delay term in the weakly nonlinear internal feedbacks. J. Math. Phys. 53, 123514 (2012) Bochicchio, I., Campo, M., Fernández, J.R., Naso, M.G.: Analysis of a thermoelastic Timoshenko beam model. Acta Mech.
231(10), 4111–4127 (2020) Bochicchio, I., Giorgi, C., Vuk, E.: Long-term damped dynamics of the extensible suspension bridge. Int. J. Differ. Equ. 2010, Article ID 383420 (2010). https://doi.org/10.1155/2010/383420 Caraballo, T., Marin-Rubio, P., Valero, J.: Autonomous and non-autonomous attractors for differential equations with delays. J. Differ. Equ. 208, 9–41 (2005) Chen, Y.H., Fang, S.C.: Neurocomputing with time delay analysis for solving convex quadratic programming problems. IEEE Trans. Neural Netw. 11, 230–240 (2000) Enyi, C.D., Feng, B.: Stability result for a new viscoelastic-thermoelastic Timoshenko system. Bull. Malays. Math. Soc. 44, 1837–1866 (2021) Enyi, C.D., Mukiawa, S.E.: Decay estimate for a viscoelastic plate equation with strong time-varying delay. Ann. Univ. Ferrara 66, 339–357 (2020) Enyi, C.D., Mukiawa, S.E., Apalara, T.A.: Stabilization of a new memory-type thermoelastic Timoshenko system. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2027375 Feng, B.: Long-time dynamics of a plate equation with memory and time delay. Bull. Braz. Math. Soc. 49, 395–418 (2018) Giorgi, C., Pata, V., Vuk, E.: On the extensible viscoelastic beam. Nonlinearity 21, 713–733 (2008) Graff, K.F.: Wave Motion in Elastic Solids. Dover, New York (1975) Guesmia, A., Messaoudi, S.A.: General energy decay estimates of Timoshenko systems with frictional versus viscoelastic damping. Math. Methods Appl. Sci. 32(16), 2102–2122 (2009) Guesmia, A., Messaoudi, S.A., Soufyane, A.: Stabilization of a linear Timoshenko system with infinite history and applications to the Timoshenko-heat systems. Electron. J. Differ. Equ. 2012(193), 1 (2012) Han, S.M., Benaroya, H., Wei, T.: Dynamics of transversely vibrating beams using four engineering theories. J. Sound Vib. 225(5), 935–988 (1999) Kato, T.: Linear and quasilinear equations of evolution of hyperbolic type. In: C.I.M.E., II Ciclo, pp. 125–191 (1976) Kato, T.: Abstract Differential Equations and Nonlinear Mixed Problems Lezioni Fermiane [Fermi Lectures]. Scuola Normale Superiore, Pisa (1985) Kolmanoviskii, V., Mishkis, A.: Introduction of the Theory and Applications of Functional and Differential Equations, vol. 463. Kluwer Academic, Dordrecht (1999) Lasiecka, I., Tataru, D.: Uniform boundary stabilization of semilinear wave equations with nonlinear boundary damping. Differ. Integral Equ. 6, 507–533 (1993) Lazer, A.C., McKenna, P.J.: Large-amplitude periodic oscillations in suspension bridges: some new connections with nonlinear analysis. SIAM Rev. 32(4), 537–578 (1990) Liu, W.J.: General decay rate estimate for the energy of a weak viscoelastic equation with an internal time-varying delay term. Taiwan. J. Math. 17, 2101–2115 (2013) McKenna, P.J., Walter, W.: Nonlinear oscillations in a suspension bridge. Arch. Ration. Mech. Anal. 98, 167–177 (1987) McKenna, P.J., Walter, W.: Travelling waves in a suspension bridge. SIAM J. Appl. Math. 50(3), 703–715 (1990) Mukiawa, S.E.: Decay result for a delay viscoelastic plate equation. Bull. Braz. Math. Soc. 51, 333–356 (2020). https://doi.org/10.1007/s00574-019-00155-y Mukiawa, S.E.: The effect of time-varying delay damping on the stability of porous elastic system. Open J. Math. Sci. 5(1), 147–161 (2021) Mukiawa, S.E.: Stability result of a suspension bridge problem with time-varying delay and time-varying weight. Arab. J. Math. (2021). 
http://dx.doi.org/10.1007/s40065-021-00345-x Mukiawa, S.E.: A new optimal and general stability result for a thermoelastic Bresse system with Maxwell–Cattaneo heat conduction. Results Appl. Math. 10, 100152 (2021). https://doi.org/10.1016/j.rinam.2021.100152 Nicaise, S., Pignotti, C.: Stability and instability results of the wave equation with a delay term in the boundary or internal feedbacks. SIAM J. Control Optim. 45, 1561–1585 (2006) Nicaise, S., Pignotti, C.: Interior feedback stabilization of wave equations with time dependence delay. Electron. J. Differ. Equ. 2011, 41 (2011) Nicaise, S., Pignotti, C., Valein, J.: Exponential stability of the wave equation with boundary time-varying delay. Discrete Contin. Dyn. Syst. 4, 693–722 (2011) Richard, J.P.: Time-delay systems: an overview of some recent advances and open problems. Automatica 39, 1667–1694 (2003) Timoshenko, S.: On the correction for shear of the differential equation for transverse vibrations of prismatic bars. Philos. Mag. 41(245), 744–746 (1921) The authors gratefully acknowledge the technical and financial support from the Ministry of Education and University of Hafr Al Batin, Saudi Arabia. This research work was funded by institutional fund projects under No. IFP-A-2022-2-1-04.
\begin{document} \begin{frontmatter} \title{A sequential Monte Carlo approach to computing tail probabilities in stochastic models} \runtitle{Sequential Monte Carlo for tail probabilities} \begin{aug} \author[A]{\fnms{Hock Peng} \snm{Chan}\corref{}\thanksref{t1}\ead[label=e1]{[email protected]}} and \author[B]{\fnms{Tze Leung} \snm{Lai}\thanksref{t2}\ead[label=e2]{[email protected]}} \runauthor{H. P. Chan and T. L. Lai} \affiliation{National University of Singapore and Stanford University} \address[A]{Department of Statistics\\ \quad and Applied Probability\\ National University of Singapore\\ 6 Science Drive 2 \\ Singapore 117546\\ \printead{e1}} \address[B]{Department of Statistics\\ 390 Serra Mall\\ Stanford, California 94305\\ USA\\ \printead{e2}} \end{aug} \thankstext{t1}{Supported by the National University of Singapore Grant R-155-000-090-112.} \thankstext{t2}{Supported by the NSF Grant DMS-08-05879.} \received{\smonth{2} \syear{2009}} \revised{\smonth{12} \syear{2010}} \begin{abstract} Sequential Monte Carlo methods which involve sequential importance sampling and resampling are shown to provide a versatile approach to computing probabilities of rare events. By making use of martingale representations of the sequential Monte Carlo estimators, we show how resampling weights can be chosen to yield logarithmically efficient Monte Carlo estimates of large deviation probabilities for multidimensional Markov random walks. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{60F10} \kwd{65C05} \kwd[; secondary ]{60J22} \kwd{60K35}. \end{keyword} \begin{keyword} \kwd{Exceedance probabilities} \kwd{large deviations} \kwd{logarithmic efficiency} \kwd{sequential importance sampling and resampling}. \end{keyword} \end{frontmatter} \section{Introduction} In complex stochastic models, it is often difficult to evaluate probabilities of events of interest analytically, and Monte Carlo methods provide a practical alternative. When an event $A$ occurs with a~small probability (e.g., $10^{-4}$), direct Monte Carlo computation of $P(A)$ would require a very large number of simulated samples (e.g., 1 million) to observe even 100 occurrences of $A$. To circumvent this difficulty, one can use importance sampling instead of direct Monte Carlo, changing the measure $P$ to $Q$, under which $A$ is no longer a rare event, and evaluating $P(A) = E_Q(L {\mathbf1}_A)$ by $m^{-1} \sum_{i=1}^m L_i {\mathbf1}_{A_i}$, where $(L_1,{\mathbf1}_{A_1}),\ldots,(L_m,{\mathbf1}_{A_m})$ are $m$ independent samples drawn from the distribution $Q$, with $L_i$ being a realization of the likelihood ratio statistic $L:=dP/dQ$, which is the importance weight. While large deviations theory has provided important clues for the choice of $Q$ for Monte Carlo evaluation of exceedance probabilities, it has also been demonstrated that importance sampling measures that are consistent with large deviations can perform much worse than direct Monte Carlo (see Glasserman and Wang \cite{GW97}). Chan and Lai \cite{CL07} have recently resolved this problem by showing that certain mixtures of exponentially twisted measures are asymptotically optimal for importance sampling. For complex stochastic models, however, there are implementation difficulties in using these asymptotically optimal importance sampling measures.
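To fix ideas, the following self-contained sketch (ours, not part of the original development; all numerical values and variable names are illustrative assumptions) estimates the toy exceedance probability $P \{ S_n/n \geq b \}$ for i.i.d. standard normal increments by importance sampling, with $Q$ shifting the mean of each increment to $b$ so that $L=dP/dQ$ has the exponential form used below.
\begin{verbatim}
import numpy as np

# Illustrative importance sampling for P{ S_n/n >= b } with i.i.d. N(0,1)
# increments.  Under Q each increment is N(theta, 1) with theta = b, and
# the likelihood ratio is L = dP/dQ = exp(-theta*S_n + n*theta**2/2).
rng = np.random.default_rng(0)
n, b, m = 100, 0.5, 10_000       # illustrative choices, not from the paper
theta = b                        # exponential tilting parameter
S = rng.normal(loc=theta, scale=1.0, size=(m, n)).sum(axis=1)  # S_n under Q
L = np.exp(-theta * S + n * theta**2 / 2)                      # dP/dQ
terms = L * (S >= n * b)
alpha_hat = terms.mean()                      # estimate of P(A)
std_err = terms.std(ddof=1) / np.sqrt(m)      # Monte Carlo standard error
print(alpha_hat, std_err)
\end{verbatim}
For these values the target probability is of order $10^{-7}$, so direct Monte Carlo with $10^4$ samples of ${\mathbf1}_A$ would typically return zero, whereas the importance sampling estimate has a small relative error.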
Herein we introduce a \textit {sequential importance sampling and resampling} (SISR) procedure to attain a weaker form of asymptotic optimality, namely, logarithmic efficiency; the definitions of \textit{asymptotic optimality} and \textit{logarithmic efficiency} are given in Section~\ref{sec3}. Instead of applying directly the asymptotically optimal importance sampling measure $Q$ that is difficult to sample from, SISR generates $m$ sequential samples from a more tractable importance sampling measure $\widetilde Q$ and resamples at every stage $t$ the $m$ sequential sample paths, yielding a~modified sample path after resampling. The objective is to approximate the target measure $Q$ by the weighted empirical measure defined by the resampling weights. Details are given in Section \ref{sec2} for general resampling weights (not necessarily those associated with the asymptotically optimal resampling measure). Section \ref{sec4} illustrates the SISR method for Monte Carlo computation of exceedance probabilities in a variety of applications which include boundary crossing probabilities of generalized likelihood ratio statistics and tail probabilities of Markov random walks. These applications demonstrate the versatility of the SISR method and the relative ease of its implementation. Our SISR procedure to compute probabilities of rare events is closely related to (a) the \textit{interacting particle systems} (IPS) approach introduced by Del Moral and Garnier \cite{DG05} to compute tail probabilities of the form $P \{ V(X_t)\,{\geq}\,a \}$ for a possibly nonhomogeneous Markov chain $\{ X_t \}$ and (b)~the \textit{dynamic importance sampling} method introduced by Dupuis and Wang \cite {DW05,DW07} to compute $P \{ S_n/n \in A \}$, where $S_n=\sum_{t=1}^n g(X_t)$ and $\{ X_n \}$ is a uniformly recurrent Markov chain with stationary distribution $\pi$ such that $\int g(x) \,d \pi(x) \notin A$. Both IPS and dynamic importance sampling generate the $X_i$ sequentially. Dynamic importance sampling uses an adaptive change of measures based on the simulated paths up to each time $t \leq n$. A recent method closely related to dynamic importance sampling is sequential state-dependent change of measures introduced by Blanchet and Glynn \cite{BG08} for Monte Carlo evaluation of tail probabilities of the maximum of heavy-tailed random walks. The IPS approach uses ``mutation'' to sample~$\widetilde X^{(i)}_{t+1}$ (conditional on the $X_1^{(i)},\ldots,X_t^{(i)}$ already generated) from the original measure $P$ and then uses ``selection'' to draw $m$ i.i.d.\vspace*{1pt} samples from $\{ (X_1^{(i)},\ldots,X_t^{(i)}, \widetilde X_{t+1}^{(i)})\dvtx1 \leq i \leq m \}$ according to a Boltzmann--Gibbs particle measure. The theory of IPS in \cite{DG05} focuses on tail probabilities of $V(X_t)$ for fixed $t$ as described in Section \ref{sec2} rather than large deviation probabilities of $g(S_n/n)$ for large $n$ as considered in Section \ref{sec3}. Our SISR procedure is motivated by rare events of the general form $\{ \mathbf{X}_n \in \Gamma\}$ that involves the entire sample path $\mathbf{X}_n = (X_1,\ldots, X_n)$ and includes $\{ V(X_n) \geq a \}$ and $\{ S_n/n \in A \}$ considered by Del Moral and Garnier, Dupuis and Wang as special cases. The sequential importance sampling component of SISR uses an easily implementable approximation $\widetilde Q$ of $Q$; in many cases it simply uses $\widetilde Q=P$. 
Thus, it is quite different from dynamic importance sampling even though both yield logarithmically efficient Monte Carlo estimates of $P \{ S_n/n \in A \}$. \section{Sequential importance sampling and resampling (SISR) and martingale representations}\label{sec2} The events in this section are assumed to belong to the $\sigma$-field generated by $n$ random variables $Y_1,\ldots,Y_n$ on a probability space $(\Omega,\mathcal {F},P)$. Let $\mathbf{Y}_t=(Y_1,\ldots,Y_t)$ for $1 \leq t \leq n$. For direct Monte Carlo computation of $\alpha:=P \{ \mathbf{Y}_n \in\Gamma \}$, i.i.d. random vectors $\mathbf{Y}_n^{(1)},\ldots,\mathbf{Y}_n^{(m)}$ are generated from $P$ and $\alpha$ is estimated by \begin{equation} \label{2.1} \widehat\alpha_{\mathrm{D}} = m^{-1} \sum_{i=1}^m {\mathbf1}_{\{ \mathbf{Y}_n^{(i)} \in\Gamma\}}. \end{equation} The estimate $\widehat\alpha_{\mathrm{D}}$ is unbiased and its variance is $\alpha(1-\alpha)/m$ which can be consistently estimated by \begin{equation} \label{2.2} \widehat\sigma_{\mathrm{D}}^2 := \widehat\alpha_{\mathrm{D}} (1-\widehat\alpha_{\mathrm{D}})/m. \end{equation} In most stochastic models of practical interest, the $Y_t$ are either independent or are specified by the conditional densities $p_t(\cdot|\mathbf{Y}_{t-1})$ of $Y_t$ given~$\mathbf{Y}_{t-1}$, with respect to some measure $\nu$. Direct Monte Carlo computation of $P \{ \mathbf{Y}_n \in\Gamma\}$, therefore, involves $Y_1^{(i)},\ldots, Y_n^{(i)}$ that are generated sequentially from~the\-se conditional densities for $1\leq i\leq m$. In contrast, SISR first generates~ $m$~inde\-pendent random variables $\widetilde Y_t^{(1)}, \ldots, \widetilde Y_t^{(m)}$ at stage $t$, with~$\widetilde Y_t^{(i)}$ having density function $\widetilde q_t(\cdot|\mathbf{Y}_{t-1}^{(i)})$ to form $\widetilde \mathbf{Y}_t^{(i)}=(\mathbf{Y}_{t-1}^{(i)}, \widetilde Y_t^{(i)})$ and then uses resampling weights of the form $w_t(\widetilde\mathbf{Y}_t^{(i)})/ \sum_{j=1}^m w_t(\widetilde{\mathbf Y}_t^{(j)})$ to draw $m$ independent sample paths $\mathbf{Y}_t^{(j)}$, $1 \leq j \leq m$, from $\{ \widetilde\mathbf{Y}_t^{(i)}, 1 \leq i \leq m \}$. Here $\widetilde q_t$ are conditional density functions with respect to $\nu$ such that $\widetilde q_t > 0$ whenever $p_t > 0$; one particular choice is $\widetilde q_t=p_t$. In Section \ref{sec3}, we show how the weights~$w_t$ can be chosen to obtain logarithmically efficient SISR estimates of rare event probabilities. The preceding SISR procedure uses \textit{bootstrap resampling} that chooses~i.i.d. sample paths from a weighted empirical measure of \mbox{$\{ \widetilde\mathbf{Y}_t^{(i)}, 1 \leq i \leq m \}$}. It is, therefore, similar to the selection step of the IPS approach that chooses i.i.d. ``path-particles'' from some weighted empirical particle measure (see~ \cite{DG05}). 
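As a concrete illustration of the resampling step just described, the following minimal sketch (our own; the function name and interface are illustrative, and NumPy is assumed) draws a bootstrap resample of the $m$ particles with probabilities $w_t(\widetilde{\mathbf Y}_t^{(i)})/\sum_{j=1}^m w_t(\widetilde{\mathbf Y}_t^{(j)})$ and also returns the average weight $\bar w_t$ appearing in (\ref{2.4a}) below.
\begin{verbatim}
import numpy as np

def bootstrap_resample(particles, raw_weights, rng):
    """One bootstrap resampling step of the SISR procedure.

    particles   : array of shape (m, ...) holding the paths Y~_t^(i)
    raw_weights : array of shape (m,) holding w_t(Y~_t^(i))
    Returns the resampled particles Y_t^(j) and the average weight w_bar_t.
    """
    m = len(raw_weights)
    w_bar = raw_weights.mean()                 # \bar w_t
    probs = raw_weights / raw_weights.sum()    # resampling probabilities
    idx = rng.choice(m, size=m, replace=True, p=probs)
    return particles[idx], w_bar
\end{verbatim}
Indices drawn more than once correspond to particles duplicated in the next generation, while particles whose indices are never drawn are discarded.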
The Monte Carlo estimate of $\alpha$ using SISR with bootstrap resampling is \begin{equation} \label{2.3} \widehat\alpha_{\mathrm{B}} = m^{-1} \sum_{i=1}^m L\bigl(\widetilde{\mathbf Y}_n^{(i)}\bigr) h_{n-1}\bigl(\mathbf{Y}_{n-1}^{(i)}\bigr) {\mathbf1}_{\{ \tilde\mathbf{Y}_n^{(i)} \in\Gamma \}}, \end{equation} where $h_0 \equiv1$ and \begin{eqnarray} \label{2.4a} L({\mathbf y}_n)&=&\prod_{t=1}^n \frac{p_t(y_t|{\mathbf y}_{t-1})} {\widetilde q_t(y_t|{\mathbf y}_{t-1})},\qquad h_k({\mathbf y}_k)= \prod_{t=1}^k \frac{\bar w_t}{w_t({\mathbf y}_t)},\nonumber\\[-6pt]\\[-6pt] \bar w_t &=& \frac{1}{m} \sum_{i=1}^m w_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr).\nonumber \end{eqnarray} Chan and Lai \cite{CL08} have recently developed a general theory of sequential Monte Carlo filters in hidden Markov models by using a representation similar to the right-hand side of (\ref{2.3}) for these filters. The method of their analysis can be applied to analyze $m(\widehat\alpha_{\mathrm B}-\alpha)$, decomposing it into a sum of $(2n-1)m$ terms so that the summands form a martingale difference sequence. Let $E^*$ denote expectation under the probability measure $\widetilde Q$ from which the $\widetilde{\mathbf Y}^{(i)}_t$ and $\mathbf{Y}_t^{(i)}$ are drawn and define for $1 \leq t < n$, \begin{equation} \label{2.5} f_t({\mathbf y}_t)=E^*\bigl[L(\mathbf{Y}_n) {\mathbf1}_{\{ \mathbf{Y}_n \in\Gamma\}}| \mathbf{Y}_t={\mathbf y}_t\bigr]=L({\mathbf y}_t) P(\mathbf{Y}_n \in\Gamma|\mathbf{Y}_t = {\mathbf y}_t), \end{equation} setting $f_0 \equiv\alpha$ and $f_n(\widetilde{\mathbf Y}_n)=L(\widetilde{\mathbf Y}_n) {\mathbf1}_{\{ \tilde\mathbf{Y}_n \in\Gamma\}}$. An important\vspace*{1pt} ingredient in the analysis is the ``ancestral origin'' $a_t^{(i)}$ of $\mathbf{Y}_t^{(i)}$. Specifically, recall that the ``first generation'' of the $m$ particles consists of $\widetilde Y_1^{(1)},\ldots,\widetilde Y_1^{(m)}$ (before resampling) and set $a_t^{(i)}=j$ if the first component of $\mathbf{Y}_t^{(i)}$ is $\widetilde Y_1^{(j)}$. Let $\#_k^{(i)}$ denote the number of copies of $\widetilde{\mathbf Y}_k^{(i)}$ generated from $\{ \widetilde{\mathbf Y}_k^{(1)},\ldots,\widetilde{\mathbf Y}_k^{(m)} \}$ to form the $m$ particles in the $k$th generation and let $w_k^{(i)}=w_k(\widetilde{\mathbf Y}_k^{(i)})/\sum_{j=1}^m w_k(\widetilde{\mathbf Y}_k^{(j)})$. Then it follows from (\ref{2.4a}) and simple algebra that for $1 \leq i \leq m$, \begin{eqnarray*} mw_t^{(i)} & = & h_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr)/ h_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr), \\[3pt] \sum_{i\dvtx a_t^{(i)}=j} f_t\bigl(\mathbf{Y}_t^{(i)}\bigr) h_t\bigl(\mathbf{Y}_t^{(i)}\bigr) & = & \sum_{i\dvtx a_{t-1}^{(i)}=j} \#_t^{(i)} f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) h_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr),\\[-15pt] \end{eqnarray*} \begin{eqnarray*} && \sum_{t=1}^n \sum_{i\dvtx a_{t-1}^{(i)}=j} \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)- f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr)\bigr] h_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr) \\[3pt] &&\quad{} + \sum_{t=2}^n \sum_{i\dvtx a_{t-2}^{(i)}=j} \bigl(\#_{t-1}^{(i)}-mw_{t-1}^{(i)}\bigr) f_{t-1} \bigl(\widetilde{\mathbf Y}_{t-1}^{(i)}\bigr) h_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(i)}\bigr) \\[3pt] &&\qquad = \sum_{i\dvtx a_{n-1}^{(i)}=j} f_n\bigl(\widetilde{\mathbf Y}_n^{(i)}\bigr) h_{n-1}\bigl(\mathbf{Y}_{n-1}^{(i)}\bigr) -\alpha, \end{eqnarray*} recalling that $f_0 \equiv\alpha$, $h_0 \equiv 1$ and defining $a_0^{(i)}=i$. 
Let \begin{eqnarray} \label{2.10} \varepsilon_{2t-1}^{(j)} &=& \sum_{i\dvtx a_{t-1}^{(i)}=j} \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)- f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr)\bigr]h_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr) \quad\mbox{for } 1 \leq t \leq n, \nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt] \varepsilon_{2t}^{(j)} &=& \sum_{i\dvtx a_{t-1}^{(i)}=j} \bigl(\#_t^{(i)}-mw_t^{(i)}\bigr) \bigl[ f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)h_t(\widetilde{\mathbf Y}_t^{(i)}) -\alpha\bigr] \quad\mbox{for } 1 \leq t \leq n-1. \nonumber\hspace*{-35pt} \end{eqnarray} Then for each fixed $j$, $\{ \varepsilon_t^{(j)}, 1 \leq t \leq2n-1 \}$ is a martingale difference sequence with respect to the filtration $\{ \mathcal{F}_t, 1 \leq t \leq2n-1 \}$ defined below and \begin{equation} \label{2.9} m(\widehat\alpha_{\mathrm{B}} - \alpha) = \sum_{j=1}^m \bigl(\varepsilon_1^{(j)} + \cdots+ \varepsilon_{2n-1}^{(j)}\bigr). \end{equation} The martingale representation (\ref{2.9}) that involves the ancestral origins of the genealogical particles is useful for estimating the standard error of $\widehat\alpha_{\mathrm B}$, as shown by Chan and Lai \cite{CL08} who have also introduced the $\sigma$-fields \begin{eqnarray} \label{2.11} \mathcal{F}_{2t-1} & = & \sigma\bigl( \bigl\{ \widetilde Y_1^{(i)}\dvtx1 \leq i \leq m \bigr\} \nonumber\\ &&\hphantom{\sigma\bigl(} {}\cup\bigl\{ \bigl(\mathbf{Y}_s^{(i)}, \widetilde{\mathbf Y}_{s+1}^{(i)},a_s^{(i)}\bigr)\dvtx1 \leq s < t, 1 \leq i \leq m \bigr\} \bigr), \\ \mathcal{F}_{2t} & = & \sigma\bigl( \mathcal{F}_{2t-1} \cup\bigl\{ \bigl(\mathbf{Y}_t^{(i)},a_t^{(i)}\bigr)\dvtx 1 \leq i \leq m \bigr\}\bigr) \nonumber \end{eqnarray} with respect to which (\ref{2.10}) forms a martingale difference sequence. Since\vspace*{1pt} $f_n(\widetilde{\mathbf Y}_n^{(i)})=L(\widetilde{\mathbf Y}_n^{(i)}) {\mathbf1}_{\{ \tilde \mathbf{Y}_n^{(i)} \in\Gamma \}}$ and $\sum_{i=1}^m (\#_t^{(i)}-mw_t^{(i)})=0$ for $1 \leq t \leq n-1$, summing (\ref{2.10}) over $t$ and $j$ yields (\ref{2.9}). Without tracing their ancestral origins, we can also use the successive generations of the $m$ particles to form martingale differences directly. Specifically, in analogy with (\ref{2.10}), define for $i=1,\ldots,m$, \begin{eqnarray} \label{list} Z_{2t-1}^{(i)} &=& \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr)\bigr] h_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr) \quad\mbox{for } 1 \leq t \leq n, \nonumber\hspace*{-35pt}\\[-8pt]\\[-8pt] Z_{2t}^{(i)} &=& f_t\bigl(\mathbf{Y}_t^{(i)}\bigr) h_t\bigl(\mathbf{Y}_t^{(i)}\bigr)-\sum_{j=1}^m w_t^{(j)} f_t\bigl(\widetilde{\mathbf Y}_t^{(j)}\bigr) h_t\bigl(\widetilde{\mathbf Y}_t^{(j)}\bigr) \quad\mbox{for } 1 \leq t \leq n-1. \nonumber\hspace*{-35pt} \end{eqnarray} As noted by Chan and Lai \cite{CL08}, $\{ (Z_t^{(1)}, \ldots, Z_t^{(m)}), 1 \leq t \leq2n-1 \}$ is a martingale difference sequence with respect to the filtration $\{ \mathcal{F}_t, 1 \leq t \leq2n-1 \}$ and $Z_t^{(1)}, \ldots, Z_t^{(m)}$ are conditionally independent given $\mathcal{F}_{t-1}$; moreover, \begin{equation} \label{martingale2} m(\widehat\alpha_{\mathrm B}-\alpha) = \sum_{t=1}^{2n-1} \bigl(Z_t^{(1)} + \cdots+ Z_t^{(m)}\bigr). \end{equation} From the martingale representation (\ref{martingale2}) it follows that $E^*(\widehat\alpha_{\mathrm B})=\alpha$. 
Moreover, under the assumption that \begin{equation} \label{finitew}\qquad \quad\sigma_{\mathrm B}^2 := \sum_{t=1}^n E^* \Biggl[ f_t^2(\mathbf{Y}_t) \Big/ \prod_{k=1}^{t-1} w_k(\mathbf{Y}_k) \Biggr] E^* \Biggl[ \prod_{k=1}^{t-1} w_k(\mathbf{Y}_k) \Biggr] -n\alpha^2 < \infty, \end{equation} application of the central limit theorem yields \begin{equation} \label{clt} \sqrt{m} (\widehat\alpha_B -\alpha) \Rightarrow N ( 0, \sigma_{\mathrm B}^2) \qquad\mbox{as } m \rightarrow\infty. \end{equation} A consistent estimate of $\sigma_{\mathrm B}^2$ is given by \begin{eqnarray} \label{2.13} \widehat\sigma_{\mathrm B}^2 & := & m^{-1} \sum_{j=1}^m \Biggl\{ \biggl[ \sum_{i\dvtx a_{n-1}^{(i)}=j} f_n\bigl(\widetilde\mathbf{Y}_n^{(i)}\bigr) h_{n-1} \bigl(\mathbf{Y}_{n-1}^{(i)}\bigr) \biggr] \nonumber\\[-8pt]\\[-8pt] &&\hphantom{m^{-1} \sum_{j=1}^m \Biggl\{} {} - \Biggl[1+\sum_{t=1}^{n-1} \sum_{i\dvtx a_{t-1}^{(i)}=j} \bigl(\#_t^{(i)}- mw_t^{(i)}\bigr) \Biggr] \widehat\alpha_{\mathrm B} \Biggr\}^2, \nonumber \end{eqnarray} which can be shown to converge to $\sigma_{\mathrm B}^2$ in probability as $m \rightarrow\infty$ by making use of the martingale representation (\ref{2.9}) (see \cite{CL08} for details). Del Moral and Jacod \cite {DJ01} have derived by a different method a martingale representation similar\vspace*{1pt} to (\ref{martingale2}) (see \cite{DJ01}, (3.3.7) and (3.3.8)), in which the term $Z_{2t-1}^{(i)}$ in~(\ref{list}) corresponds to the $t$th mutation on the $i$th particle and $Z_{2t}^{(i)}$ the $t$th selection by the $i$th particle. In~\cite{DJ01}, these two terms are combined into a sum and a central limit theorem similar to (\ref{clt}) is proved under the assumption of bounded $f_n$. Note that in (\ref{clt}) on the asymptotic normality of $\widehat\alpha _{\mathrm B}$ and in the consistency result $\widehat\sigma_{\mathrm B}^2 \stackrel {p}{\rightarrow} \sigma_{\mathrm B}^2$, the sample size $n$ in the probability $\alpha= P \{ \mathbf{Y}_n \in\Gamma\}$ is assumed to be fixed whereas the number $m$ of Monte Carlo samples approaches $\infty$. The consistent estimate $\widehat\sigma_\mathrm{B}^2$ of $\sigma_\mathrm{B}^2$ in (\ref{2.13}) provides an estimate $\widehat \sigma_\mathrm{B}/\sqrt{m}$ of the standard error (s.e.)($\widehat\alpha_\mathrm{B} $) of the Monte Carlo estimate $\widehat\alpha_{\mathrm B}$. Note that the usual estimate $\sqrt{\widehat\alpha_{\mathrm{B}} (1-\widehat\alpha_{\mathrm{B}})}$ is inconsistent for $\sqrt m$ s.e.($\widehat\alpha_\mathrm{B}$) because of the dependence among the $m$ sample paths due to resampling in the SISR procedure as in \cite{DDJ06,DG05}. The case of $n$ approaching $\infty$ will be considered in the next section in which the representation (\ref{2.10}) will still play a~pivotal role, but which requires new methods and large deviation principles rather than central limit theorems. Instead of bootstrap resampling, we can use the residual resampling sche\-me introduced by Baker \cite{Bak85,Bak87} which often leads to smaller asymptotic variance than that of bootstrap resampling. We consider here a variant of this scheme introduced by Crisan, Del Moral and Lyons \cite{CDL99} that can result in further reduction of the asymptotic variance. Let $\lfloor\cdot\rfloor$ denote the greatest integer function and let $m_t$ be the sample size at stage $t$ with $m_1=m$. 
We modify the bootstrap resampling step of the SISR procedure as follows: let $U_t^{(1)}, \ldots, U_t^{(m_t)}$ be independent Bernoulli random variables satisfying $P \{ U_t^{(i)} = 1 \} = m_t w_t^{(i)} - \lfloor m_t w_t^{(i)} \rfloor$. For each $1 \leq i \leq m_t$ and $t<n$, make $\#_t^{(i)}:=\lfloor m_t w_t^{(i)} \rfloor+ U_t^{(i)}$ copies of $(\widetilde{\mathbf Y}_t^{(i)}, a_{t-1}^{(i)},h_{t-1}^{(i)}, w_t^{(i)})$. These copies constitute an augmented sample $\{ (\mathbf{Y}_t^{(j)},a_t^{(j)}, h_t^{(j)},w_t^{(j)})\dvtx1 \leq j \leq m_{t+1} \}$, where $m_{t+1} = \sum_{i=1}^{m_t} \#_t^{(i)}$ and $h_t^{(i)} = h_{t-1}^{(i)}/(m_t w_t^{(i)})$. Estimate $\alpha$ by \[ \widehat\alpha_{\mathrm R} := m_n^{-1} \sum_{i=1}^{m_n} L\bigl(\widetilde{\mathbf Y}_n^{(i)}\bigr) h_{n-1}^{(i)}\bigl(\mathbf{Y}_{n-1}^{(i)}\bigr) {\mathbf1}_{\{ \tilde\mathbf{Y}_n^{(i)} \in \Gamma\}}. \] Define $\varepsilon_k^{(j)}$ by (\ref{2.10}) in which $m$ is replaced by $m_t$ and define $\mathcal{F}_{2t-1}$ (or $\mathcal{F}_{2t}$) by (\ref{2.11}) in which $m$ is replaced by $m_{s+1}$ (or by $m_{t+1}$). Moreover, define \begin{eqnarray*} \widetilde Z_{2t-1}^{(i)} & = & \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr)\bigr] h_{t-1}\bigl(\mathbf{Y}_{t-1}^{(i)}\bigr) \qquad\mbox{ for } 1 \leq t \leq n, \\ \widetilde Z_{2t}^{(i)} & = & \bigl(\#_t^{(i)}-m_t w_t^{(i)}\bigr)\bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) h_t\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)- \alpha\bigr] \qquad\mbox{for } 1 \leq t \leq n-1, \end{eqnarray*} for $i=1,\ldots,m_t$. Recall that the first generation of particles consists of $\widetilde Y_1^{(1)},\allowbreak\ldots, \widetilde Y_1^{(m)}$ and that $a_t^{(i)}=j$ if the first component of $\mathbf{Y}_t^{(i)}$ is $\widetilde Y_1^{(j)}$ for $j=\allowbreak1,\ldots,m$ and $i=1,\ldots,m_{t+1}$. Analogous to (\ref{2.9}) and (\ref{martingale2}), we have the martingale representations \begin{eqnarray} \label{2.15} m_n(\widehat\alpha_{\mathrm R}-\alpha) &=& \sum_{j=1}^m \bigl(\varepsilon_1^{(j)}+ \cdots+ \varepsilon_{2n-1}^{(j)}\bigr)\nonumber\\[-8pt]\\[-8pt] &=& \sum_{k=1}^{2n-1} \bigl(\widetilde Z_k^{(1)} + \cdots+ \widetilde Z_k^{(m_{\lfloor(k+1)/2 \rfloor})}\bigr).\nonumber \end{eqnarray} Analogous to (\ref{2.13}), define \begin{eqnarray*} \widehat\sigma_{\mathrm R}^2 & = & m^{-1} \sum_{j=1}^m \Biggl\{ \biggl[ \sum_{i\dvtx a_{n-1}^{(i)} = j} f_n\bigl(\widetilde\mathbf{Y}_n^{(i)}\bigr) h_{n-1}\bigl(\mathbf{Y}_{n-1}^{(i)}\bigr) \biggr] \\ &&\hphantom{m^{-1} \sum_{j=1}^m \Biggl\{} {} - \Biggl[ 1+ \sum_{t=1}^{n-1} \sum_{i\dvtx a_{t-1}^{(i)}=j} \bigl(\#_t^{(i)}-m_t w_t^{(i)}\bigr) \Biggr] \widehat\alpha_{\mathrm R} \Biggr\}^2. \end{eqnarray*} From (\ref{2.15}) it follows that $E^*[m_n(\widehat\alpha_{\mathrm R}-\alpha )]=0$. Let \[ \eta_t = E^* \Biggl[ \prod_{k=1}^t w_k(\mathbf{Y}_k) \Biggr],\qquad h_t^*({\mathbf y}_t)= \eta_t \Big/ \prod_{k=1}^t w_k({\mathbf y}_k), \] and let $\gamma(x)=(x-\lfloor x \rfloor) (1-x+\lfloor x \rfloor)/x$ for $x > 0$. 
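The counts $\#_t^{(i)}$ of this residual-type scheme are easy to generate directly; the sketch below (ours, with illustrative names, assuming NumPy) gives each particle $\lfloor m_t w_t^{(i)} \rfloor$ copies plus an independent Bernoulli extra copy whose success probability is the fractional part, so that the next sample size $m_{t+1}=\sum_i \#_t^{(i)}$ is random with mean $m_t$.
\begin{verbatim}
import numpy as np

def residual_resample_counts(raw_weights, rng):
    """Replication counts #_t^(i) for the residual-type resampling step:
    floor(m_t * w_t^(i)) plus an independent Bernoulli(fractional part)."""
    m_t = len(raw_weights)
    w = raw_weights / raw_weights.sum()   # normalized weights w_t^(i)
    expected = m_t * w                    # m_t * w_t^(i)
    base = np.floor(expected).astype(int)
    frac = expected - base
    extra = (rng.random(m_t) < frac).astype(int)   # the Bernoulli U_t^(i)
    return base + extra                   # #_t^(i); their sum is m_{t+1}
\end{verbatim}
The resampled generation is then obtained by repeating particle $i$ exactly $\#_t^{(i)}$ times, for example with \texttt{np.repeat(np.arange(m\_t), counts)}.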
If (\ref{finitew}) holds, then analogous to corresponding results for $\widehat\alpha_{\mathrm B}$ and $\widehat\sigma_{\mathrm B}^2$ in the bootstrap resampling case, we now have as $m \rightarrow\infty$, \begin{eqnarray*} &\displaystyle \widehat\sigma_{\mathrm R}^2 \stackrel{p}{\rightarrow} \sigma_{\mathrm R}^2,\qquad m_t/m \stackrel{p}{\rightarrow} 1\qquad \mbox{for every } t \geq1,& \\ &\displaystyle \sqrt{m} (\widehat\alpha_{\mathrm R}-\alpha) \Rightarrow N(0,\sigma ^2_{\mathrm R}),& \end{eqnarray*} where $\sigma_{\mathrm R}^2 < \sigma_{\mathrm B}^2$ and \begin{eqnarray*} \sigma_{\mathrm R}^2 & := & \sum_{t=1}^n E^* \{ [f^2_t(\mathbf{Y}_t)-f_{t-1}^2(\mathbf{Y}_{t-1})] h_{t-1}^*(\mathbf{Y}_{t-1}) \} \\ &&{} + \sum_{t=1}^{n-1} E^* \biggl\{ \gamma\biggl( \frac{h_{t-1}^*(\mathbf{Y} _{t-1})}{h_t^*(\mathbf{Y}_t)} \biggr) \frac{[f_t(\mathbf{Y}_t) h_t^*(\mathbf{Y}_t) -\alpha]^2}{h_t^*(\mathbf{Y}_t)} \biggr\}. \end{eqnarray*} Details are given in \cite{CL08}. Note the additional variance reduction if residual resampling is used instead of bootstrap resampling. \section{Logarithmically efficient SISR for Monte Carlo computation of small tail probabilities}\label{sec3} Let $\xi, \xi_1, \xi_2, \ldots$ be i.i.d. $d$-dimensional random vectors with a common distribution function $F$ such that $\psi(\theta):= \log( E e^{\theta' \xi}) < \infty$ for $\| \theta\| < \theta_0$. Let $S_n=\xi_1+ \cdots+\xi_n$, $\mu_0 = E \xi$, $\Theta= \{ \theta\dvtx \psi(\theta) < \infty\}$ and let $\Lambda$ be the closure of $\nabla\psi(\Theta)$ and $\Lambda^o$ be its interior. Assume that for any $\theta_0 \in\Theta^o$ and $\theta \in\Theta\setminus\Theta^o$, \[ \lim_{\rho\uparrow1} (\theta- \theta_0)' \nabla\psi\bigl(\theta_0+ \rho(\theta-\theta_0)\bigr) = \infty. \] Then by convex analysis (see, e.g., \cite{Bro86}, Chapter 3), $\Lambda$ contains the convex hull of the support of $\{ S_n/n, n \geq1 \}$. The gradient vector $\nabla\psi$ is a diffeomorphism from $\Theta^o$ onto $\Lambda^o$. For given $\mu\in\Lambda^o$ let $\theta_\mu= (\nabla \psi)^{-1}(\mu)$ and define the \textit{rate function} \begin{equation} \label{3.1} \phi(\mu) = \sup_{\theta\in\Theta} \{ \theta' \mu- \psi(\theta) \} = \theta_\mu' \mu-\psi(\theta_\mu). \end{equation} We can embed $F$ in an exponential family $\{ F_\theta, \theta\in \Theta\}$ with \[ dF_\theta(x) = e^{\theta' x-\psi(\theta)} \,dF(x). \] Under certain regularity conditions on $g\dvtx\Lambda\rightarrow\mathbf{R}$, Chan and Lai \cite{CL00} have developed asymptotic approximations, which involve both $g$ and $\phi$, to the exceedance probabilities \begin{eqnarray} \label{3.2} p_n & = & P \{ g(S_n/n) \geq b \} \qquad\mbox{with } b > g(\mu_0), \\ \label{3.3} p_c & = & P \Bigl\{ \max_{n_0 \leq n \leq n_1} n g(S_n/n) \geq c \Bigr\}, \end{eqnarray} where $n_0 \sim\rho_0 c$ and $n_1 \sim\rho_1 c$ such that $g(\mu_0) < \rho_1^{-1}$. Making use of these approximations, Chan and Lai \cite{CL07} have shown that certain mixtures of exponentially twisted measures are asymptotically optimal for Monte Carlo evaluation of (\ref{3.2}) or (\ref{3.3}) by importance sampling. 
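For orientation (this special case is not worked out in the text, but follows directly from (\ref{3.1})), suppose $d=1$ and $\xi$ is $N(\mu_0,\sigma^2)$. Then $\psi(\theta)=\theta \mu_0+\theta^2 \sigma^2/2$, $\theta_\mu=(\mu-\mu_0)/\sigma^2$ and
\[
\phi(\mu)=\theta_\mu \mu-\psi(\theta_\mu)=\frac{(\mu-\mu_0)^2}{2 \sigma^2},
\]
so that for $g(x)=x$ and $b > \mu_0$, the probability (\ref{3.2}) is $p_n = P \{ S_n/n \geq b \} = e^{-n \phi(b)+O(\log n)}$, in agreement with the exact normal tail.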
Specifically, for $A = \{ g(S_n/n) \geq b \}$ in the case of (\ref{3.2}) or $A = \{ \max_{n_0 \leq n \leq n_1} n g(S_n/n) \geq c \}$ in the case of (\ref{3.3}), an importance sampling measure $Q$ (which may depend on $n$ or $c$) is said to be \textit{asymptotically optimal} if \begin{equation} \label{3.4} m \operatorname{Var}\Biggl( m^{-1} \sum_{i=1}^m L_i {\mathbf1}_{A_i} \Biggr) = O\bigl(\sqrt{n} p_n^2\bigr) \qquad\mbox{as } n \rightarrow\infty \end{equation} in the case of (\ref{3.2}) and if \begin{equation} \label{3.5} m \operatorname{Var}\Biggl( m^{-1} \sum_{i=1}^m L_i {\mathbf1}_{A_i} \Biggr) = O(p_c^2) \qquad\mbox{as } c \rightarrow\infty \end{equation} in the case of (\ref{3.3}), where $(L_1,{\mathbf1}_{A_1}), \ldots, (L_m, {\mathbf1}_{A_m})$ are $m$ independent realizations of ($L:=dP/dQ$, ${\mathbf1}_A$). For the case of (\ref{3.3}), since $E_Q(L {\mathbf1}_A) = P(A)=p_c$, $E_Q(L^2 {\mathbf1}_A) \geq p_c^2$ by the Cauchy--Schwarz inequality and, therefore, $Q$ is an asymptotically optimal importance sampling measure if $E_Q(L^2 {\mathbf1}_A) = O(p_c^2)$, which leads to the definition (\ref{3.5}) of asymptotic optimality for the Monte Carlo estimates. Chan and Lai \cite{CL07} have also shown that $\sqrt{n} p_n^2$ is an asymptotically minimal order of magnitude for $E_Q(L^2 {\mathbf1}_A)$ in the case of (\ref{3.2}). They have also extended this theory to Markov random walks $S_n$ whose increments~$\xi_i$ have distributions $F(\cdot|X_i, X_{i-1})$ depending on a Markov chain~$\{ X_t \}$. The asymptotically optimal mixtures of exponentially twisted measures $\int P_{\theta_\mu} \omega(\mu) \,d \mu$ in \cite{CL07} involve normalizing constants $\beta_n$ (or $\beta_c$) that may be difficult to compute. Moreover, it may even be difficult to sample from the twisted measure $P_{\theta_\mu}$, especially in multidimensional and Markovian settings. In this section we show that by choosing the resampling weights suitably, the SISR estimates $\widehat\alpha_{\mathrm{B}}$ can still attain \begin{equation} \label{3.6} m \operatorname{Var}(\widehat\alpha_{\mathrm{B}}) = p_n^2 e^{o(n)} \qquad\mbox{ as } m \rightarrow\infty\mbox{ and } n \rightarrow\infty \end{equation} for Monte Carlo estimation of $p_n$ and \begin{equation} \label{3.7} m \operatorname{Var}(\widehat\alpha_{\mathrm{B}}) =p_c^2 e^{o(c)} \qquad\mbox{as } m \rightarrow\infty\mbox{ and } c \rightarrow\infty \end{equation} for Monte Carlo estimation of (\ref{3.3}). Moreover, (\ref{3.6}) and (\ref{3.7}) still hold with~$\widehat\alpha_\mathrm{B}$ replaced by $\widehat \alpha_{\mathrm R}$. The properties (\ref{3.6}) and (\ref{3.7}) are called \textit{logarithmic efficiency}; the variance of the Monte Carlo estimate differs from the asymptotically optimal value by a factor of $e^{o(n)}$ (or $e^{o(c)}$) noting that $-n^{-1} \log p_n$ and $-c^{-1} \log p_c$ converge to positive limits. To begin with, suppose the asymptotically optimal importance sampling measure $Q$ has conditional densities $q_t(\cdot|\mathbf{Y}_{t-1})$ with respect to $\nu$. To achieve log efficiency,\vadjust{\goodbreak} the resampling functions $w_t$ can be chosen to satisfy approximately \begin{equation} \label{wtby} w_t({\mathbf y}_t) \propto q_t(y_t|{\mathbf y}_{t-1})/ \widetilde q_t(y_t|{\mathbf y}_{t-1}) \end{equation} as illustrated by the following example, after which a heuristic explanation for (\ref{wtby}) will be given. \begin{exa}\label{exam1} Suppose $\xi_1, \xi_2, \ldots$ are i.i.d. 
random variables ($d=1$) and $g(x)=x$ in (\ref{3.2}), so that $\alpha= p_n = P \{ S_n/n \geq b \}$, where $b> E \xi_1$ and $2 \theta_b \in\Theta$. Consider the SISR procedure with $\widetilde Q=P$ (and, therefore, $E^*=E$) and resampling weights \begin{equation} \label{3.8} w_t(\mathbf{Y}_t) = e^{\theta_b \xi_t-\psi(\theta_b)}. \end{equation} Then $L=1$ and hence, by (\ref{2.5}), \begin{equation} \label{3.9} f_t(\mathbf{Y}_t) = P\{ S_n/n \geq b |\mathbf{Y}_t \} = P \{S_n - S_t \geq n b - S_t | S_t\}. \end{equation} Therefore, standard Markov's inequality involving moment generating functions yields \begin{equation} \label{3.10} f_t(\mathbf{Y}_t) \leq e^{-\theta_b (nb - S_t) + (n-t) \psi(\theta_b)} = e^{\theta_b S_t-t \psi(\theta_b) - n \phi(b)}. \end{equation} By (\ref{2.10}) and the martingale decomposition (\ref{2.9}), \begin{eqnarray} \label{3.11} E(\widehat\alpha_\mathrm{B}- \alpha)^2 &\leq& m^{-1} \sum_{t=1}^n E \bigl\{\bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr) \bigr\} \nonumber\\[-8pt]\\[-8pt] &&{} + m^{-1} \sum_{t=1}^{n-1} E \bigl[\bigl( \#_t^{(1)} - mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)\bigr], \nonumber \end{eqnarray} in which the superscript $^{(1)}$ can be replaced by $^{(i)}$ since the expectations are the same for all $i$. The derivation of (\ref{3.11}) uses the independence of $[f_t(\widetilde{\mathbf Y}_t^{(i)})- f_{t-1}(\mathbf{Y}_{t-1}^{(i)})] h_t(\mathbf{Y}_{t-1}^{(i)})$ for $1 \leq i \leq m$ when conditioned on $\mathcal{F}_{2t-2}$ and the pairwise negative correlations of $(\#_t^{(i)}-mw_t^{(i)}) f_t(\widetilde{\mathbf Y}_t^{(i)}) h_t(\widetilde{\mathbf Y}_t^{(i)})$ for $i=1,\ldots,m$ when conditioned on $\mathcal{F}_{2t-1}$. By (\ref{2.4a}), (\ref{3.8}) and (\ref{3.10}), \begin{eqnarray} \label{3.12} && E \bigl\{ \bigl[ f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)- f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr) \bigr\} \nonumber\\ &&\qquad = E \bigl\{ \bar w_1^2 \cdots\bar w_{t-1}^2 \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2/ e^{2 \theta_b S_{t-1}^{(1)}- 2(t-1) \psi(\theta_b)} \bigr\} \\ &&\qquad \leq\biggl( 1+\frac{E(e^{\theta_b \xi_1- \psi(\theta_b)}-1)^2}{m} \biggr)^{t-1} e^{-2n \phi(b)} E\bigl(e^{2 \theta_b \xi_t-2 \psi(\theta_b)}\bigr). \nonumber \end{eqnarray} To see the inequality in (\ref{3.12}), condition on $\mathcal{F}_{2t-1}$. Since $E[f_t(\widetilde\mathbf{Y}_t^{(1)})|\mathcal{F}_{2t-1}]=f_{t-1}(\mathbf{Y}_{t-1}^{(1)})$, it follows from (\ref{3.10}) that \begin{eqnarray*} && E \bigl\{ \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2/e^{2 \theta _b S_{t-1}^{(1)} -2(t-1) \psi(\theta_b)} | \mathcal{F}_{2t-1} \bigr\}\\ &&\qquad \leq E\bigl[f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)/e^{2 \theta_b S_{t-1}^{(1)}-2(t-1)\psi(\theta_b)}|\mathcal{F}_{2t-1}\bigr] \leq e^{-2n \phi(b)} E\bigl(e^{2 \theta_b \xi_t-2 \psi(\theta_b)}\bigr). \end{eqnarray*} Moreover, $\bar w_1^2, \ldots, \bar w_{t-1}^2$ are i.i.d. 
random variables with mean \begin{equation} \label{resample}\quad \quad E \Biggl[ m^{-1} \sum_{i=1}^m \bigl(e^{\theta_b \xi_1^{(i)}-\psi(\theta_b)}-1\bigr)+1 \Biggr]^2 = 1+m^{-1} E\bigl(e^{\theta_b \xi_1-\psi(\theta_b)}-1\bigr)^2 \end{equation} and their product $\bar w_1^2 \cdots\bar w_{t-1}^2$ in the second term of (\ref{3.12}) is $\mathcal{F}_{2t-1}$-measurable. This yields the inequality in (\ref{3.12}). Since the conditional distribution of $\#_t^{(i)}$ given $\mathcal{F}_{2t-1}$ is Binomial$(m,w_t^{(i)})$, $E[(\#_t^{(i)}-mw_t^{(i)})^2|\mathcal{F}_{2t-1}] \!\leq\!mw_t^{(i)}$. By (\ref{2.4a}), (\ref{3.8}) and (\ref{3.10}), $f_t(\widetilde{\mathbf Y}_t^{(i)}) h_t(\widetilde{\mathbf Y}_t^{(i)})\!{\leq}\allowbreak\bar w_1 \cdots\bar w_t e^{-n \phi(b)}$. Since $\sum_{i=1}^m w_t^{(i)}\,{=}\,1$, it then follows by conditioning on~$\mathcal{F}_{2t-1}$ that \begin{eqnarray*} && E \bigl\{ \bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) \bigr\} \\ &&\qquad = m^{-1} \sum_{i=1}^m E \bigl\{ \bigl(\#_t^{(i)}-mw_t^{(i)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) \bigr\} \\ &&\qquad \leq E \Biggl\{ \Biggl( \sum_{i=1}^m w_t^{(i)} \Biggr)\bigl(\bar w_1 \cdots\bar w_t e^{-n \phi(b)}\bigr)^2 \Biggr\} = e^{-2n \phi(b)} E(\bar w_1^2 \cdots\bar w_t^2), \end{eqnarray*} which can be combined with (\ref{resample}) to yield \begin{equation} \label{3.13} \qquad E\bigl[\bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)\bigr] = O \biggl( \biggl(1+ \frac{K}{m} \biggr)^t e^{-2n \phi(b)} \biggr), \end{equation} where $K = E(e^{\theta_b \xi_1- \psi(\theta_b)}-1)^2$. By (\ref{3.11}), (\ref{3.12}) and (\ref{3.13}), \[ \liminf_{n \rightarrow\infty} - \frac{1}{n} \log [m \operatorname{Var}(\widehat\alpha_\mathrm{B})] \geq2 \phi(b) - \frac{K}{m} \] for any fixed $m$. Since $p_n/[n^{-1/2} e^{-n \phi(b)}]$ is bounded away from 0 and $\infty$ (see~\cite{CL07}, page 451), (\ref{3.6}) holds. \end{exa} \subsection{A heuristic principle for efficient SISR procedures}\label{sec3.1} The asymptotically optimal importance\vspace*{1pt} sampling measure for $p_n = P \{ S_n/n \geq b \}$ is $Q$ under which $\xi_1, \xi_2, \ldots$ are i.i.d. with density function $e^{\theta_b \xi- \psi(\theta_b)}$ with respect to $P$ (see \cite{CL07}). Since we have used $\widetilde Q=P$ in Example \ref{exam1}, (\ref{3.8}) actually follows the prescription (\ref{wtby}) to choose resampling weights that can achieve an effect similar to asymptotically optimal importance sampling. We now give a heuristic principle underlying this prescription. The SISR procedure uses the importance weights $p_t^{(i)}/\widetilde q_t^{(i)}$ (for the change of measures from $P$ to $\widetilde Q$) and resampling weights $w_t^{(i)}$, $1 \leq i \leq m$, for the $m$ simulated trajectories at stage $t$. The resampling\vspace*{1pt} step at stage $t$ basically converts $(\widetilde{\mathbf Y}_t^{(i)}, p_t^{(i)}/\widetilde q_t^{(i)},w_t^{(i)})$ to $(\mathbf{Y}_t^{(i)},p_t^{(i)}/ (\widetilde q_t^{(i)} w_t^{(i)}),1)$, and, therefore, the prescription (\ref{wtby}) for choosing resampling weights (satisfying $\widetilde q_t^{(i)} w_t^{(i)} = q_t^{(i)}$) is intended to yield the desired importance weights $p_t^{(i)}/q_t^{(i)}$. 
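The heuristic principle and the resampling weights (\ref{3.8}) of Example \ref{exam1} can be implemented in a few lines; the following self-contained sketch (ours, not from the paper; the standard normal increments and all numerical values are illustrative assumptions) estimates $p_n=P \{ S_n/n \geq b \}$ with $\widetilde Q=P$, so that $L \equiv 1$, $\psi(\theta)=\theta^2/2$ and $\theta_b=b$, using bootstrap resampling at every stage $t < n$ and the estimate (\ref{2.3}).
\begin{verbatim}
import numpy as np

def sisr_tail_prob(n=100, b=0.5, m=10_000, seed=0):
    """SISR estimate of P{ S_n/n >= b } for i.i.d. N(0,1) increments,
    with proposals from P itself and resampling weights
    w_t = exp(theta_b*xi_t - psi(theta_b)), theta_b = b, psi(th) = th**2/2."""
    rng = np.random.default_rng(seed)
    theta, psi = b, b**2 / 2.0
    S = np.zeros(m)        # partial sums S_t^(i) along the resampled paths
    log_h = np.zeros(m)    # log h_t(Y_t^(i)) = sum_k log(wbar_k / w_k)
    for t in range(1, n + 1):
        xi = rng.normal(size=m)          # propose increments from P
        log_w = theta * xi - psi         # log w_t(Y~_t^(i)) as in (3.8)
        S_tilde = S + xi
        if t < n:                        # bootstrap resampling step
            w = np.exp(log_w)
            log_h_tilde = log_h + np.log(w.mean()) - log_w
            idx = rng.choice(m, size=m, replace=True, p=w / w.sum())
            S, log_h = S_tilde[idx], log_h_tilde[idx]
        else:                            # no resampling at the last stage
            return np.mean(np.exp(log_h) * (S_tilde >= n * b))

alpha_hat = sisr_tail_prob()
\end{verbatim}
For $n=100$ and $b=0.5$ the target probability is of order $10^{-7}$, which direct Monte Carlo with $m=10^4$ paths would typically estimate as zero; the condition $2 \theta_b \in \Theta$ of Example \ref{exam1} holds trivially here since $\Theta=\mathbf{R}$ in the Gaussian case.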
To transform this heuristic principle into a rigorous proof of logarithmic efficiency, one needs to be able to bound the second moments of the importance weights and resampling weights. This explains the requirement $2 \theta_b \in\Theta$ in Example \ref{exam1}. Example \ref{exam1} indicates the key role played by the martingale decomposition~(\ref{2.9}) and large deviation bounds for $P(\Gamma_n | \mathbf{Y}_k)$, $1 \leq k < n$, in the derivation of asymptotically efficient resampling weights. To generalize the basic ideas to the more general tail probability (\ref{3.2}) with nonlinear $g$, we provide large deviation bounds in Lemma \ref{lemma1}, whose proof is given in the \hyperref[app]{Appendix}, for \begin{equation} \label{3.14} P \bigl\{g\bigl((x + S_{n, k}) / n\bigr) \geq b\bigr\}, \end{equation} where $S_{n, k} = S_n - S_k$; note that (\ref{3.14}) is equal to $P\{g (S_n / n) \geq b | S_k = x\}$. The special case $k=0$ and $x = 0$ has been analyzed by Chan and Lai (see Theorem 2 of \cite{CL00}) under certain regularity conditions that yield precise saddlepoint approximations. The probability (\ref{3.14}) is more complicated than this special case because it involves additional parameters $x$ and $k$, but we only need large deviation bounds rather than saddlepoint approximations for logarithmic efficiency. Let $\mu_\theta= \nabla\psi(\theta)$ and define \begin{eqnarray} \label{I} I & = & \inf\{ \phi(\mu)\dvtx g(\mu) \geq b \}, \\ \label{Htheta} M & = & \{ \theta\dvtx\phi(\mu_\theta) \leq I \}. \end{eqnarray} \begin{lem}\label{lemma1} Let $b > g(\mu_0)$. Then as $n \rightarrow\infty$, \begin{equation} \label{3.16} P \bigl\{ g\bigl((x+S_{n,k})/n\bigr) \geq b \bigr\} \leq e^{-n I+o(n)} \int_M e^{\theta' x-k \psi(\theta)} \,d \theta, \end{equation} where the $o(n)$ term is uniform in $x$ and $k$. \end{lem} The proof of (\ref{3.16}) in the \hyperref[app]{Appendix} uses a change-of-measure argument that involves the measure $Q$ for which \[ (dQ/dP)({\mathbf Y}_n) = \int_M e^{\theta' S_n-n \psi(\theta)} \,d \theta/\operatorname{vol}(M). \] The bound (\ref{3.16}) is used in conjunction with the inequality $\int_M e^{\theta' x-k \psi(\theta)} \,d \theta\leq\operatorname{vol}(M) \exp\{ k \max_{\theta\in M} [\theta' x/k-\psi(\theta)] \}$ to prove the following theorem. \begin{theorem}\label{theo1} Letting $b > g(\mu_0)$, assume: \begin{longlist}[(C1)] \item[(C1)] $g$ is twice continuously differentiable and $\nabla g \not= 0$ on $N : = \{\mu\in\Lambda^o \dvtx g (\mu) = b\}$. \item[(C2)] $Ee^{2 \kappa\| \xi_1 \|} < \infty$, where $\kappa= {\sup_{\theta\in M}} \| \theta\|$ and $M$ is defined in (\ref{Htheta}). \end{longlist} Let $\widehat\theta_0=0$ and define for $1 \leq t \leq n$, \begin{eqnarray} \label{3.17} \widehat\theta_t & = & \mathop{\arg\max}_{\theta\in M} \{ \theta' S_t/t-\psi(\theta) \}, \nonumber\\[-8pt]\\[-8pt] w_t(\mathbf{Y}_t) & = & \exp\{ \widehat\theta'_t S_t -t \psi(\widehat\theta_t)-[\widehat\theta'_{t-1} S_{t-1} -(t-1) \psi(\widehat \theta_{t-1})] \}. \nonumber \end{eqnarray} With $\widetilde Q=P$ and the resampling weights thus defined, the SISR estimates~$\widehat\alpha_\mathrm{B}$ and $\widehat\alpha_\mathrm{R}$ are logarithmically efficient, that is, (\ref{3.6}) holds for $\widehat \alpha_\mathrm{B}$ and also with~$\widehat\alpha_\mathrm{R}$ in place of $\widehat \alpha_\mathrm{B}$ if $m \rightarrow\infty$ and $n \rightarrow\infty$. \end{theorem} Besides (\ref{3.16}), the proof of Theorem \ref{theo1} also uses the bounds in the following lemma. 
These bounds enable us to bound $E(\bar w_{t-1}^2|\mathcal{F}_{2(t-1)-2})$ in the proof of Theorem \ref{theo1}. \begin{lem}\label{lemma2} With the same notation and assumptions in Theorem \ref{theo1}, there exist nonrandom constants $\varepsilon_t$ and $K > 0$ such that \begin{eqnarray} \label{6--1} &\displaystyle \lim_{t \rightarrow\infty} \varepsilon_t = 0,\qquad E[ w_t(\mathbf{Y}_t)|S_{t-1} ] \leq e^{\varepsilon_t} \quad\mbox{and }&\nonumber\\[-8pt]\\[-8pt] &\displaystyle E[w_t^2(\mathbf{Y}_t)|S_{t-1}] \leq K \qquad\mbox{for all } t \geq1.&\nonumber \end{eqnarray} \end{lem} \begin{pf} Let $\eta= {\sup_{\theta\in M}} |\psi(\theta)|$. Then \begin{eqnarray} \label{A.16} \widehat\theta_t' S_t-t \psi(\widehat\theta_t) & = & [\widehat\theta_t' S_{t-1}-(t-1) \psi(\widehat\theta_t)]+[\widehat\theta_t' \xi_t-\psi(\widehat\theta_t)] \nonumber\\[-8pt]\\[-8pt] & \leq& [\widehat\theta_{t-1}' S_{t-1}-(t-1) \psi(\widehat\theta_{t-1})]+[\widehat \theta_t' \xi_t-\psi(\widehat\theta_t)] \nonumber \end{eqnarray} and, therefore, it follows from (\ref{3.17}) that $w_t(\mathbf{Y}_t) \leq e^{\kappa\| \xi_t \|+\eta}$. Hence, by (C2), \begin{equation} \label{A.17}\qquad \quad E\bigl[w_t(\mathbf{Y}_t) {\mathbf1}_{\{ \| \xi_t \| > \zeta\}}| S_{t-1}\bigr] \leq E\bigl[e^{\kappa\| \xi_1 \|+\eta} {\mathbf1}_{\{ \| \xi_1 \| > \zeta \}}\bigr] \rightarrow0 \qquad\mbox{as } \zeta\rightarrow\infty. \end{equation} It will be shown that for any fixed $\zeta> 0$, \begin{equation} \label{A.18} \gamma_{t,\zeta} := {\operatorname{ess} \sup}\| \widehat \theta_t -\widehat\theta_{t-1} \| {\mathbf1}_{\{ \| \xi_t \| \leq\zeta\}} \rightarrow0 \qquad\mbox{as } t \rightarrow\infty. \end{equation} Let $\widetilde\eta= {\sup_{\theta\in M}} \| \nabla\psi(\theta) \|$. Combining (\ref{A.18}) with (\ref{3.17}) and (\ref{A.16}) yields \begin{eqnarray} \label{A.19} E\bigl[w_t(\mathbf{Y}_t) {\mathbf1}_{\{ \| \xi_t \| \leq\zeta\}}|S_{t-1}\bigr] &\leq& E\bigl[e^{\hat\theta_t' \xi_t-\psi(\hat\theta_t)} {\mathbf1}_{\{ \| \xi _t \| \leq\zeta\}}|S_{t-1}\bigr] \nonumber\\ &\leq& e^{\gamma_{t,\zeta} (\zeta+\tilde\eta)} E\bigl[e^{\hat\theta_{t-1}' \xi_t-\psi(\hat\theta _{t-1})}|S_{t-1}\bigr]\\ &=&1+o(1) \nonumber \end{eqnarray} as $t \rightarrow\infty$. Moreover, by (C2) and (\ref{A.18}), as $\zeta\rightarrow\infty$, \begin{eqnarray} \label{wt2} \quad E\bigl[w_t^2(\mathbf{Y}_t) {\mathbf1}_{\{ \| \xi_t \| > \zeta\}}|S_{t-1}\bigr] & \leq& E\bigl[e^{2 \kappa\| \xi_1 \|+2 \eta} {\mathbf1}_{\{ \| \xi_1 \| > \zeta\}}\bigr] \rightarrow0 \nonumber\\ \quad E\bigl[w_t^2(\mathbf{Y}_t) {\mathbf1}_{\{ \| \xi_t \| \leq\zeta\}}| S_{t-1}\bigr] &\leq& e^{2 \gamma_{t,\zeta} (\zeta+\tilde\eta)} E\bigl[e^{2 \hat\theta_{t-1}' \xi_t-2 \psi(\hat\theta_{t-1})}|S_{t-1}\bigr] \\ \quad & \leq& \sup_{\theta\in M} e^{\psi(2 \theta)-2 \psi(\theta )}+o(1), \nonumber \end{eqnarray} and (\ref{6--1}) follows from (\ref{A.17}), (\ref{A.19}) and\vadjust{\goodbreak} (\ref{wt2}). To prove (\ref{A.18}), let $f_{x,t}(\theta)=\theta' x-t \psi(\theta )$ and let $\theta_{x,t}$ be the unique maximizer of $f_{x,t}(\theta)$ over $M$. Let $\lambda_{\min}(\cdot)$ denote the smallest eigenvalue of a~symmetric matrix. Since $\nabla^2 \psi(\theta)$ is continuous and positive definite for all $\theta\in M$, and since $M$ is compact and $\lambda_{\min}$ is a continuous function of the entries of $\nabla^2 \psi(\theta)$, $\inf_{\theta \in M} \lambda_{\min}(\nabla^2 \psi(\theta)) \geq2 \beta$ for some $\beta> 0$. 
Therefore, by Taylor's theorem, $f_{x,t-1}(\theta) \leq f_{x,t-1}(\theta_{x,t-1}) -\beta t \| \theta_{x,t-1} - \theta\|^2$ for all $\theta\in M$. It then follows that for $\| y-x \| \leq\zeta$, \begin{eqnarray*} f_{y,t}(\theta_{x,t-1}) &\leq& f_{y,t}(\theta_{y,t}) = f_{x,t-1}(\theta_{y,t})+ \theta_{y,t}'(y-x)-\psi(\theta_{y,t}) \\ &\leq& f_{x,t-1}(\theta_{x,t-1})-\beta t \| \theta_{x,t-1} -\theta_{y,t} \|^2 + \theta_{y,t}'(y-x)-\psi(\theta_{y,t}) \\ &\leq& f_{y,t}(\theta_{x,t-1})-\beta t \| \theta_{x,t-1} - \theta_{y,t} \|^2 +(\zeta+\widetilde\eta) \| \theta_{x,t-1}-\theta _{y,t} \| \end{eqnarray*} and, therefore, $\| \theta_{x,t-1} - \theta_{y,t} \| \leq (\zeta+\widetilde\eta)/(\beta t)$. Hence, (\ref{A.18}) holds by setting $x=S_{t-1}$ and $y=S_t$. \end{pf} \begin{pf*}{Proof of Theorem \ref{theo1}} To simplify the notation, we will suppress the superscript $^{(1)}$ in $\widehat\theta_{t-1}^{(1)}$ below. By (\ref{2.4a}) and (\ref{3.17}), \begin{equation}\label{3.18} h_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(1)}\bigr) = \Biggl( \prod_{k=1}^{t-1} \bar w_k \Biggr) \exp\bigl[-\widehat\theta_{t-1}' \widetilde S_{t-1}^{(1)}+(t-1) \psi(\widehat \theta_{t-1})\bigr]. \end{equation} Making use of $E[f(\widetilde{\mathbf Y}_t^{(1)})|\mathcal{F}_{2t-2}]\,{=}\,f_{t-1}(\mathbf{Y}_{t-1}^{(1)})$, $E(\sup_{\theta\in M} e^{2 \theta' \xi_t-2 \psi(\theta)})\,{<}\, \infty$~and the independence of $\bar w_1^2 \cdots\bar w_{t-1}^2$ and $\xi_t$, we obtain from Lemma \ref{lemma1} and~(\ref{3.18}) that \begin{eqnarray} \label{3.19}\quad && E \bigl\{ \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr) \bigr\} \nonumber\\ &&\qquad \leq E \bigl\{ \bar w_1^2 \cdots \bar w_{t-1}^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)/\exp\bigl[2 \widehat\theta_{t-1}' S_{t-1}^{(1)}-2(t-1) \psi(\widehat\theta_{t-1})\bigr] \bigr\} \\ &&\qquad \leq e^{-2n I+o(n)} E(\bar w_1^2 \cdots\bar w_{t-1}^2). \nonumber \end{eqnarray} By (\ref{2.4a}) and Lemma \ref{lemma2}, \begin{eqnarray*} E\bigl(\bar w_{t-1}^2|\mathcal{F}_{2(t-1)-2}\bigr) &=& \Biggl( m^{-1} \sum_{i=1}^m E\bigl[w_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(i)}\bigr) |S_{t-2}^{(i)}\bigr] \Biggr)^2 \\ &&{} +m^{-2} \sum_{i=1}^m \operatorname{Var}\bigl[w_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(i)}\bigr)| S_{t-2}^{(i)}\bigr] \\ &\leq& (1+Km^{-1}) e^{2 \varepsilon_{t-1}} \end{eqnarray*} and proceeding inductively yields \begin{equation} \label{3.20}\qquad \quad E(\bar w_1^2 \cdots\bar w_{t-1}^2) \leq(1+K m^{-1})^{t-1} \exp\Biggl( \sum_{k=1}^{t-1} 2 \varepsilon_k \Biggr) \leq e^{K (t-1)/m+o(n)}. \end{equation} Similarly, under bootstrap or residual resampling, \begin{eqnarray} \label{3.21} && E\bigl[\bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)\bigr] \nonumber\\ &&\qquad = m^{-1} \sum_{i=1}^m E\bigl[\bigl(\#_t^{(i)}-mw_t^{(i)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)\bigr] \\ &&\qquad \leq e^{-2nI+o(n)} E(\bar w_1^2 \cdots\bar w_t^2). \nonumber \end{eqnarray} By (C1), $p_n = e^{-nI+o(n)}$ (see \cite{CL00}, Theorem 2) and hence, it follows from~(\ref{3.11}) and (\ref{3.19})--(\ref{3.21}) that both $\widehat \alpha_{\mathrm R}$ and $\widehat\alpha_{\mathrm B}$ are logarithmically efficient. 
\end{pf*} The heuristic principle described in the paragraph following Example \ref{exam1} can also be used to construct logarithmically efficient SISR procedures for Monte Carlo evaluation of (\ref{3.3}) as illustrated in the following example. \begin{exa}\label{exam2} Let $T_c = \inf\{ n\dvtx S_n \geq c \}$. Consider the estimation of $p_c = P \{ T_c \leq n_1 \}$ [i.e., with $d=1$ and $g(x)=x$] when $\mu_0 <0$ and $n_1 \sim ac$ for some $a > 1/ \psi'(\theta_*)$, where $\theta_*$ is the unique positive root of $\psi(\theta_*)=0$. We shall assume $2 \theta_* \in\Theta$ and use the importance measure $\widetilde Q=P$ and resampling weights \[ w_t(\mathbf{Y}_t) = \cases{e^{\theta_* \xi_t}, &\quad if $t \leq T_c$, \cr 1, &\quad if $n_1 > t > T_c$.} \] Let $\eta(\mathbf{Y}_{T_c \wedge n_1}) = e^{\theta_*(S_{T_c \wedge n_1}-c)}$. Since $\eta(\mathbf{Y}_{T_c \wedge n_1}) \geq{\mathbf1}_{\{ \max_{n \leq n_1} S_n \geq c \}}$, it follows that \begin{equation} \label{3.14a} \qquad f_t(\mathbf{Y}_t) = P \Bigl\{ \max_{n \leq n_1} S_n \geq c \big| \mathbf{Y}_t \Bigr\} \leq E[\eta(\mathbf{Y}_{T_c \wedge n_1})|\mathbf{Y}_t] = e^{\theta_* (S_{T_c \wedge t}-c)}. \end{equation} Making use of (\ref{3.14a}) in place of (\ref{3.10}), we obtain that, analogous to (\ref{3.12}), \begin{eqnarray} \label{3.15a} &&E \bigl\{ \bigl[ f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2 \bigl(\mathbf{Y}_{t-1}^{(1)}\bigr) \bigr\} \nonumber\\[-8pt]\\[-8pt] &&\qquad\leq\biggl( 1 +\frac{K_*}{m} \biggr)^{t-1} e^{-2 \theta_* c} E(e^{2 \theta_* \xi_t}),\nonumber \end{eqnarray} where $K_* = E(e^{\theta_* \xi_1}-1)^2$ and that, analogous to (\ref{3.13}), \begin{equation} \label{3.16a}\qquad \quad E\bigl[\bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)\bigr] =O \biggl( \biggl( 1 + \frac{K_*}{m} \biggr)^{t-1} e^{-2 \theta_* c} \biggr). \end{equation} Hence, by (\ref{3.11}) (with $n_1$ in place of $n$), (\ref{3.15a}) and (\ref{3.16a}), \[ m \operatorname{Var}(\widehat\alpha_\mathrm{B}) = O \bigl( n_1 \exp[ (n_1 K_*/m) - 2 \theta_* c ] \bigr). \] Since $n_1=O(c)$ and $p_c/ e^{-\theta_* c}$ is bounded away from 0 and $\infty$, as shown in~\cite{Sie75}, (\ref{3.7}) also holds. \end{exa} In Theorem \ref{theo2}, we provide the resampling weights for logarithmically efficient simulation of (\ref{3.3}), for which the counterparts of (\ref{I}) and (\ref{Htheta}) are also provided. The basic idea is to use the resampling weights (\ref{3.17}) up to the stopping time \begin{equation} \label{3.31} T_c = \inf\{ n \geq n_0\dvtx n g(S_n/n) \geq c \} \wedge n_1. \end{equation} \begin{theorem}\label{theo2} $\!\!$Let $g(\mu_0)\,{<}\,a^{-1}$, $n_0\,{=}\,\delta c+O(1)$ and $n_1\,{=}\,ac+O(1)$ as $c\,{\rightarrow}\,\infty$ for some $a > \delta> 0$. Let $I = \inf\{ \phi(\mu)\dvtx g(\mu) \geq\delta^{-1} \}$ and\vspace*{1pt} $M = \{ \theta\dvtx\phi(\mu_\theta) \leq I \}$. Let $\widetilde Q= P$ and assume that \textup{(C1)--(C2)} hold for all $a^{-1} \leq b \leq\delta^{-1}$ and that \begin{longlist}[(C3)] \item[(C3)] $r:= \sup_{\mu\dvtx g(\mu) \geq a^{-1}} \min\{ g(\mu), \delta^{-1} \}/\phi(\mu) < \infty$. 
\end{longlist} Let $\widehat\theta_0=0$ and define for $1 \leq t \leq n_1-1$, $\widehat\theta_t = \arg\max_{\theta\in M} [\theta' S_t/t-\psi(\theta)]$ and \begin{equation} \label{thetahat}\qquad w_t(\mathbf{Y}_t) = \cases{ e^{ \hat\theta_t' S_t-t \psi(\widehat\theta_t) - [\hat \theta_{t-1}' S_{t-1} - (t-1) \psi(\widehat\theta_{t-1})]}, &\quad if $t \leq T_c$,\cr 1, &\quad if $n_1 > t > T_c$.} \end{equation} Then (\ref{3.7}) holds for $\widehat\alpha_{\mathrm B}$ and with $\widehat\alpha_{\mathrm B}$ replaced by $\widehat\alpha_{\mathrm R}$ if $m \rightarrow\infty$ and $c \rightarrow\infty$. \end{theorem} \begin{pf} Let $u=(t-1) \wedge T_c^{(1)}$. By (\ref{2.4a}) and (\ref{thetahat}), \begin{equation} \label{hseq} h_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(1)}\bigr) = \Biggl( \prod_{k=1}^{t-1} \bar w_k \Biggr) \exp\bigl[ -\bigl(\widehat\theta_u^{(1)}\bigr)' \widetilde S_u^{(1)}+u \psi\bigl(\widehat\theta_u^{(1)}\bigr)\bigr]. \end{equation} Let $I_b = \inf\{ \phi(\mu)\dvtx g(\mu) \geq b \}$. By Lemma \ref{lemma1}, \begin{eqnarray} \label{ftseq}\qquad\qquad f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) &=& P \bigl\{ T_c^{(1)} \leq n_1 | \widetilde{\mathbf Y}_t^{(1)} \bigr\} \nonumber\\[-8pt]\\[-8pt] &\leq& \cases{\displaystyle \sum_{n=t+1}^{n_1} e^{-n I_{c/n}+o(n)} \int_M e^{\theta' \tilde S_t^{(1)}-t \psi(\theta)} \,d \theta, &\quad if $t < T_c^{(1)}$, \vspace*{2pt}\cr 1, &\quad if $t \geq T_c^{(1)}$.} \nonumber \end{eqnarray} Note that \[ \inf_{a^{-1} \leq b \leq\delta^{-1}} b^{-1} I_b = \min\biggl\{ \inf_{\mu\dvtx a^{-1} \leq g(\mu) \leq\delta^{-1}} \frac{\phi(\mu)}{g(\mu)}, \inf _{\mu\dvtx g(\mu) > \delta^{-1}} \frac{\phi(\mu)}{\delta^{-1}} \biggr\} = r^{-1} \] by (C3). Hence, by (\ref{hseq}) and (\ref{ftseq}), \begin{equation} \label{Fseq} E \bigl\{ \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2\bigl(\mathbf{Y} _{t-1}^{(1)}\bigr) \bigr\} \leq e^{-2c/r+o(c)} E(\bar w_1^2 \cdots\bar w_{t-1}^2).\hspace*{-35pt} \end{equation} Similarly, it can be shown that under either bootstrap or residual resampling, \begin{equation} \label{hexseq}\quad E\bigl[\bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)\bigr] \leq e^{-2c/r+o(c)} E(\bar w_1^2 \cdots\bar w_{t-1}^2). \end{equation} By (C1) and Theorem 2 of \cite{CL00}, $p_c=e^{-c/r+o(c)}$ and hence, it follows from (\ref{3.20}), (\ref{Fseq}) and (\ref{hexseq}) that both $\widehat\alpha_{\mathrm R}$ and $\widehat\alpha _{\mathrm B}$ are logarithmically efficient. \end{pf} \subsection{Markovian extensions}\label{sec3.2} Let $\{ (X_t,S_t)\dvtx t=0,1,\ldots, \}$ be a Markov additive process on $\mathcal{X}\times\mathbf{R}^d$ with transition kernel \begin{eqnarray*} P(x,A \times B) :\!&=& P \{ (X_1,S_1) \in A \times(B+s) | (X_0,S_0)=(x,s) \} \\ &=& P \{ (X_1,S_1) \in A \times B| (X_0,S_0) = (x,0) \}. \end{eqnarray*} Let $\{ X_n \}$ be aperiodic and irreducible with respect to some maximal irreducibility measure $\varphi$ and assume that the transition kernel satisfies the minorization condition \begin{equation} \label{m1} P(x,A \times B) \geq h(x,B) \nu(A) \end{equation} for any measurable set $A \subset\mathcal{X}$, Borel set $B \subset \mathbf{R}^d$ and $s \in{\mathbf R}^d$ for some probability measure $\nu $ and measure $h(x,\cdot)$ that is positive for all $x$ belonging to a $\varphi$-positive set. 
Ney and Nummelin \cite{NN87} developed a theory to analyze large deviations properties of $S_n$ under (\ref{m1}) or when its variant $P(x,A \times B) \geq h(x) \nu(A \times B)$ holds. Let $\tau$ be the first regeneration time and assume that $\Omega:= \{ (\theta,\zeta)\dvtx E_\nu e^{\theta' S_\tau-\tau\zeta} < \infty\}$ is an open neighborhood of 0. Then for all $\theta\in\Theta:= \{ \theta\dvtx(\theta,\zeta) \in \Omega$ for some $\zeta\}$, the kernel \begin{equation} \label{whtPtheta} \widehat P_\theta(x,A) := \int e^{\theta' s} P(x,A \times ds) \end{equation} has a unique maximum eigenvalue $e^{\psi(\theta)}$, for which $\zeta=\psi(\theta)$ is the unique solution of the equation $E_\nu e^{\theta' S_\tau-\tau\zeta}=1$, with corresponding right eigenfunctions $r(\cdot; \theta)$ and left eigenmeasures $\ell_\nu (\theta, \cdot)$ defined by \begin{eqnarray} \label{rxtheta} r(x;\theta) & = & E_x e^{\theta' S_\tau-\tau \psi(\theta)}, \nonumber\\ \ell_x(\theta;A) & = & E_x \Biggl( \sum_{n=0}^{\tau-1} e^{\theta' S_n-n \psi(\theta)} {\mathbf1}_{\{ X_n \in A \}} \Biggr), \\ \ell_\nu(\theta; A) & = & \int\ell_x(\theta; A) \,d \nu(x). \nonumber \end{eqnarray} Let $\pi$ denote the stationary distribution of $\{ X_n \}$ and let \begin{equation} \label{thetamu} \theta_\mu= (\nabla\psi)^{-1}(\mu). \end{equation} To begin with, consider the special case $d=1$ and $g(x)=x$ for which the importance sampling measure with transition kernel \begin{equation} \label{Ptheta} P_\theta(x,dy \times ds) := e^{\theta' s-\psi(\theta)} \{ r(y;\theta)/r(x;\theta) \} P(x, dy \times ds) \end{equation} has been shown to be logarithmically efficient by Dupuis and Wang \cite{DW05} and asymptotically optimal by Chan and Lai \cite{CL07} for simulating the tail probability $P_{x_0} \{ S_n/n \geq b \}$ when $\theta$ is chosen to be $\theta_b$ in (\ref{Ptheta}). We shall show that by using SISR with $\widetilde Q=P$ and resampling weights $w_t(\mathbf{Y}_t)=e^{\theta_b \xi_t-\psi(\theta_b)}$, we can avoid computation of the eigenfunctions. To bring out the essence of the method, we first assume instead of the minorization condition (\ref{m1}) the stronger uniform recurrence condition \begin{equation} \label{ur} a_0 \nu(A \times B) \leq P(x,A \times B) \leq a_1 \nu(A \times B) \end{equation} for some $0 < a_0 < a_1$ and probability measure $\nu$ and for all $x \in\mathcal{X}$, measurable sets $A \subset\mathcal{X}$ and Borel sets $B \subset{\mathbf R}$. At the end of this section, we show how this assumption can be removed. Note that $\mathbf{Y}_t$ consists of $(X_i,\xi_i)$, $i \leq t$, in the Markov case. \begin{exa}\label{exam3} Let $b> E_\pi\xi_1$ and assume that $\theta_b \in \Theta$ and $E_\nu(e^{2 \theta_b \xi_1-2 \psi(\theta_b)})$ $< \infty$. We now extend Example \ref{exam1} to Markov additive processes by showing that the choice $\widetilde Q=P$ and \begin{equation} \label{weights} w_t(\mathbf{Y}_t) = e^{\theta_b \xi_t-\psi(\theta_b)} \end{equation} results in logarithmically efficient simulation of $P_{x_0} \{ S_n/n \geq b \}$. The dependence of the weights $w_t^{(i)}$ and $w_t^{(j)}$ for $i \neq j$, created from a combination of the Markovian structure of the underlying process and bootstrap resampling, requires a more delicate peeling and induction argument than that in Example \ref {exam1}. By considering $\xi_t-\psi(\theta_b)/\theta_b$ instead of $\xi_t$, we may assume without loss of generality that $\psi(\theta_b)=0$. 
Let $\kappa= \sup_{x \in\mathcal{X}} r(x;\theta_b)/\inf_{x \in\mathcal{X}} r(x;\theta_b)$ and let $E_\theta$ be expectation with respect to $P_\theta$. Then by (\ref{2.5}) and (\ref{Ptheta}), \begin{eqnarray*} f_t(\mathbf{Y}_t) & = & P_{x_0} \{ S_n/n \geq b | \mathbf{Y}_t \} = P \{ S_n-S_t \geq nb-S_t|X_t,S_t \} \\ & = & r(X_t;\theta_b) E_{\theta_b} \bigl[e^{-\theta_b(S_n-S_t)} \mathbf{1}_{\{ S_n-S_t \geq nb-S_t \}}/ r(X_n; \theta_b)|X_t, S_t\bigr] \\ & \leq& \kappa e^{-\theta_b(nb-S_t)}. \end{eqnarray*} We shall show that \begin{equation} \label{moment}\quad E(\bar w_1^2 \cdots\bar w_t^2) = e^{o(t)} \qquad\mbox{as } m \rightarrow \infty \mbox{ uniformly over } 1 \leq t \leq n-1. \end{equation} Then logarithmic efficiency of bootstrap resampling follows from (\ref{3.11})--(\ref{3.13}). We first show that for any $k<t$ and $i \neq j$, \begin{eqnarray} \label{peel} && E \bigl\{ \bar w_k^2 \bigl(E_{X_k^{(i)}} e^{\theta_b S_{t-k}}\bigr)\bigl( E_{X_k^{(j)}} e^{\theta_b S_{t-k}}\bigr)| \mathcal{F}_{2k-2} \bigr\} \nonumber\\[-8pt]\\[-8pt] &&\qquad \leq m^{-2} \sum_{u \neq v} \bigl(E_{X_{k-1}^{(u)}} e^{\theta_b S_{t-k+1}}\bigr) \bigl(E_{X_{k-1}^{(v)}} e^{\theta_b S_{t-k+1}}\bigr) + m^{-1} \beta, \nonumber \end{eqnarray} where $\beta=\sup_{h \geq0, x \in\mathcal{X}} E_x \{ e^{2 \theta_b \xi _1} (E_{X_1} e^{\theta_b S_h})^2 \}$, which is finite by (\ref{ur}). Note that $\bar w_k$ is measurable with respect to $\mathcal{F}_{2k-1}$ and that under bootstrap resampling, $X_k^{(i)}$ and $X_k^{(j)}$ are independent conditioned on $\mathcal{F}_{2k-1}$. Moreover, since $X_k^{(1)} = \widetilde X_k^{(\ell)}$ with probability $w_k^{(\ell)} = w_k(\widetilde\mathbf{Y}^{(\ell)}_k)/\sum_{j=1}^m w_k(\widetilde\mathbf{Y}_k^{(j)})$, \[ E \bigl\{ \bar w_k \bigl(E_{X_k^{(1)}}e^{\theta_b S_{t-k}}\bigr)|\mathcal{F}_{2k-1} \bigr\} = \bar w_k \sum_{u=1}^m w_k^{(u)} E_{\tilde X_k^{(u)}} e^{\theta_b S_{t-k}}, \] which is equal to $m^{-1} \sum_{u=1}^m e^{\theta_b \tilde\xi_k^{(u)}} E_{\tilde X_k^{(u)}} e^{\theta_b S_{t-k}}$ in view of (\ref{weights}) and that $\psi(\theta_b)=0$. Hence, \begin{eqnarray} \label{step1} && E \bigl\{ \bar w_k^2 \bigl(E_{X_k^{(i)}} e^{\theta_b S_{t-k}}\bigr)\bigl( E_{X_k^{(j)}} e^{\theta_b S_{t-k}}\bigr) | \mathcal{F}_{2k-1} \bigr\} \nonumber\\ &&\qquad = \Biggl(m^{-1} \sum_{u=1}^m e^{\theta_b \tilde\xi_k^{(u)}} E_{\tilde X_k^{(u)}} e^{\theta_b S_{t-k}} \Biggr)^2 \nonumber\\[-8pt]\\[-8pt] &&\qquad = m^{-2} \sum_{u \neq v} \bigl(e^{\theta_b \tilde\xi_k^{(u)}} E_{\tilde X_k^{(u)}} e^{\theta_b S_{t-k}}\bigr)\bigl( e^{\theta_b \tilde\xi_k^{(v)}} E_{\tilde X_k^{(v)}} e^{\theta_b S_{t-k}}\bigr) \nonumber\\ &&\qquad\quad{} + m^{-2} \sum_{u=1}^m e^{2 \theta_b \tilde\xi_k^{(u)}} \bigl(E_{\tilde X_k^{(u)}} e^{\theta_b S_{t-k}}\bigr)^2. \nonumber \end{eqnarray} Since $(\widetilde\xi_k^{(u)},\widetilde X_k^{(u)})$ and $(\widetilde\xi_k^{(v)},\widetilde X_k^{(v)})$ are independent conditioned on $\mathcal{F}_{2k-2}$ for $u \neq v$ and $E[e^{\theta_b \tilde \xi_k^{(i)}}(E_{\tilde X_k^{(i)}} e^{\theta_b S_{t-k}})| \mathcal{F}_{2k-2}] = E_{X_{k-1}^{(i)}} e^{\theta_b S_{t-k+1}}$, (\ref{peel}) follows from (\ref{step1}). We shall show using (\ref{peel}) and induction, that \begin{equation} \label{bound}\qquad E(\bar w_1^2 \cdots\bar w_k^2) \leq\gamma^2 (1+m^{-1} \beta)^k \quad\mbox{where } \gamma= \sup_{x \in\mathcal{X}, h \geq0} E_x e^{\theta_b S_h} (\mbox{$\geq$}1). 
\end{equation} For $k=1$, \[ E \bar w_1^2 = m^{-2} \sum_{i \neq j} E_{x_0} e^{\theta_b \xi_1^{(i)}} E_{x_0} e^{\theta_b \xi_1^{(j)}} +m^{-2} \sum_{i=1}^m E_{x_0} e^{2 \theta_b \xi_1^{(i)}} \leq\gamma^2 +m^{-1} \beta \] and indeed (\ref{bound}) holds. If (\ref{bound}) holds for all $k < t$, then by repeated application of (\ref{peel}), starting from $k=t$, we obtain \begin{eqnarray*} E(\bar w_1^2 \cdots\bar w_t^2) & \leq& (E_{x_0} e^{\theta_b S_t})^2+m^{-1} \beta\sum_{k=0}^{t-1} E(\bar w_1^2 \cdots\bar w_k^2) \\[-2pt] & \leq& \gamma^2 \Biggl\{ 1+m^{-1} \beta \sum_{k=0}^{t-1} (1+m^{-1} \beta)^k \Biggr\} = \gamma^2 (1+m^{-1} \beta )^t \end{eqnarray*} and (\ref{bound}) indeed holds for $k=t$. Hence, (\ref{moment}) is true and logarithmic efficiency is attained. \end{exa} The peeling argument used to derive (\ref{peel}) and (\ref{bound}) can also be used to extend Theorems \ref{theo1} and \ref{theo2}, which hold for general $g$, to the following.\vadjust{\goodbreak} \begin{theorem}\label{thm3} \textup{(a)} Let $M$, $\widehat\theta_t$ and $w_t(\mathbf{Y}_t)$ be the same as in Theorem~\ref{theo1}. Then Theorem \ref{theo1} still holds when the i.i.d. assumption on $\xi_t$ is replaced by the uniform recurrence condition (\ref{ur}) on the Markov additive process $(X_t,S_t=\xi_1+\cdots+\xi_t)$ and assumption \textup{(C2)} is generalized to \begin{equation} \label{expo} \int_{{\mathbf R}^d} e^{2 \kappa\| \xi\|} \nu(\mathcal{X}, d \xi) < \infty \qquad\mbox{where } \kappa= {\sup_{\theta\in M}} \| \theta\|. \end{equation} \textup{(b)} Let $M$, $\widehat\theta_t$ and $w_t(\mathbf{Y}_t)$ be the same as in Theorem \ref{theo2}. Then Theorem~\ref{theo2} still holds when the i.i.d. assumption on $\xi_t$ is replaced by the uniform recurrence condition (\ref{ur}) and assumption \textup{(C2)} is generalized to (\ref{expo}).\vspace*{-3pt} \end{theorem} Note that $\widetilde Q=P$ in Theorem \ref{thm3}. We next show how the uniform recurrence assumption (\ref{ur}) can be removed, extending the preceding results on the logarithmic efficiency of suitably chosen SISR procedures to more general Markov additive processes such that for some $\theta\in\Theta$, $0 < \beta< 1$, function $u\dvtx\mathcal{X} \rightarrow[1,\infty)$ and measurable set $C$: \begin{longlist}[(U1)] \item[(U1)] $\sup_{x \in C} u(x) < \infty$, $\int _\mathcal{X} u(x) \,d \nu(x) < \infty$, $\sup_{x \in C} \ell_x(\theta;C) < \infty$, $\int _\mathcal{X} \ell_x(\theta$; $C) \,d \nu(x) < \infty$,\vspace*{1pt} \item[(U2)] $E_x \{ e^{\theta' \xi_1-\psi(\theta)} u(X_1) \} \leq(1-\beta ) u(x)$ for $x \notin C$,\vspace*{1pt} \item[(U3)] $a:=\sup_{x \in C} E_x \{ e^{\theta' \xi_1-\psi(\theta)} u(X_1) \} < \infty$, \item[(U4)] $K_1:= \sup_{x \in\mathcal{X}} E_x \{ e^{2 \theta' \xi_1-2 \psi (\theta)} u^2(X_1) /u^2(x) \} < \infty$. \end{longlist} We illustrate in Section \ref{sec4}, Example \ref{exam5}, how (U1)--(U4) can be checked in a~concrete example. Condition (U1) [in which $\ell_x$ is defined in (\ref {rxtheta})] holds when $C$ is bounded and $\nu$ has support on a compact set. Conditions (U2)--(U4) are often called ``drift conditions'' (see \cite{CL07}). Although the arguments are essentially modifications of the peeling idea in Example \ref{exam3} by making use of (U1)--(U4), they are considerably more complicated than those in the uniformly recurrent case. 
We, therefore, only consider the univariate linear case [$d=1$, $g(y)=y$] in the following theorem to indicate the basic ideas without getting into the details of these modifications, such as replacing for general $g$ the $\theta_b$ in (\ref{uweights}) by sequential estimates $\widehat\theta _t$, as in (\ref{3.17}) and (\ref{thetahat}).\vspace*{-3pt} \begin{theorem}\label{theo4} Let $b > E_\pi\xi_1$ and assume that \textup{(U1)--(U4)} hold for $\theta=\theta_b$. Let $\widetilde Q=P$ and \begin{equation} \label{uweights} w_t(\mathbf{Y}_t) = e^{\theta_b \xi_t-\psi(\theta_b)} u(X_t)/u(X_{t-1}). \end{equation} Then (\ref{3.6}) holds with $p_n = P_{x_0} \{ S_n/n \geq b \}$, for $\widehat\alpha_{\mathrm B}$ or $\widehat\alpha_{\mathrm R}$, as $n \rightarrow\infty$ and $m \rightarrow \infty$. \end{theorem} \begin{pf} By considering $\xi_t-\psi(\theta_b)/\theta_b$ instead of $\xi_t$, we assume without loss of generality that $\psi(\theta_b)=0$. By (\ref{2.4a}) and (\ref{uweights}), \begin{equation} \label{hu} h_{t-1}\bigl(\widetilde{\mathbf Y}_{t-1}^{(1)}\bigr) = \Biggl( \prod_{k=1}^{t-1} \bar w_k \Biggr) e^{-\theta_b \tilde S_{t-1}^{(1)}} u(x_0)/u\bigl(\widetilde X_{t-1}^{(1)}\bigr).\vadjust{\goodbreak} \end{equation} It will be shown in the \hyperref[app]{Appendix} that \begin{equation} \label{ku} K_2:= \sup_{x \in\mathcal{X}, h \geq0} E_x \{ e^{\theta_b S_h} u(X_h)/u(x) \} < \infty. \end{equation} Note that \begin{eqnarray} \label{fu} f_t(\mathbf{Y}_t) & = & E_{x_0}\bigl({\mathbf1}_{\{ S_n/n \geq b \}}|\mathbf{Y}_t\bigr) \leq e^{-\theta_b nb} E_{x_0} (e^{\theta_b S_n}|\mathbf{Y}_t) \nonumber\\[-8pt]\\[-8pt] & = & e^{\theta_b (S_t-nb)} E_{X_t}(e^{\theta_b S_{n-t}}) \leq K_2 e^{\theta_b (S_t-nb)} u(X_t). \nonumber \end{eqnarray} Since $E_{x_0} [f_t(\widetilde{\mathbf Y}_t^{(1)})|\mathcal{F}_{2t-2}]\,{=}\,f_{t-1}(\mathbf{Y} _{t-1}^{(1)})$, it follows from (\ref{hu}), (\ref{fu}) and~(U3) that \begin{eqnarray} \label{sequ}\quad && E_{x_0} \bigl\{ \bigl[f_t\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)-f_{t-1}\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr)\bigr]^2 h_{t-1}^2\bigl(\mathbf{Y}_{t-1}^{(1)}\bigr) \bigr\} \nonumber\\ &&\qquad \leq K_2^2 e^{-2n \theta_b b} E_{x_0} \bigl\{ (\bar w_1 \cdots\bar w_{t-1})^2 e^{2 \theta_b \tilde\xi_t^{(1)}} u^2(x_0) u^2\bigl(\widetilde X_t^{(1)}\bigr)/u^2\bigl(X_{t-1}^{(1)}\bigr) \bigr\} \\ &&\qquad \leq \beta e^{-2n \theta_b b} E_{x_0} (\bar w_1^2 \cdots\bar w_{t-1}^2), \nonumber \end{eqnarray} where $\beta=K_1 K_2^2 u^2(x_0)$. By (\ref{hu}) and (\ref{fu}), under either bootstrap or residual resampling, \begin{eqnarray} \label{resampleu} && E_{x_0} \bigl[\bigl(\#_t^{(1)}-mw_t^{(1)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(1)}\bigr)h_t^2\bigl(\widetilde{\mathbf Y} _t^{(1)}\bigr)\bigr] \nonumber\\ &&\qquad = m^{-1} \sum_{i=1}^m E_{x_0} \bigl[\bigl(\#_t^{(i)}-mw_t^{(i)}\bigr)^2 f_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr) h_t^2\bigl(\widetilde{\mathbf Y}_t^{(i)}\bigr)\bigr] \\ &&\qquad \leq K_2^2 E_{x_0} (\bar w_1^2 \cdots\bar w_t^2) e^{-2n \theta_b b} u^2(x_0). \nonumber \end{eqnarray} In view of (\ref{3.11}), it now remains to show (\ref{moment}). 
It follows from the proof of (\ref{peel}) that for any $k<t$ and $i \neq j$, \begin{eqnarray*} && E_{x_0} \biggl\{ \bar w_k^2 \biggl(\frac{E_{X_k^{(i)}} [e^{\theta_b S_{t-k}} u(X_{t-k})]}{ u(X_k^{(i)})} \biggr) \biggl(\frac{E_{X_k^{(j)}} [e^{\theta_b S_{t-k}} u(X_{t-k})]}{ u(X_k^{(j)})} \biggr) \Big| \mathcal{F}_{2k-2} \biggr\} \\ &&\qquad \leq m^{-2} \sum_{v \neq w} \biggl(\frac{E_{X_{k-1}^{(v)}} [e^{\theta_b S_{t-k+1}} u(X_{t-k+1})]}{ u(X_{k-1}^{(v)})} \biggr)\!\biggl(\frac{E_{X_{k-1}^{(w)}} [e^{\theta_b S_{t-k+1}} u(X_{t-k+1})]}{ u(X_{k-1}^{(w)})} \biggr) \\ &&\qquad\quad{} + m^{-1} \beta. \end{eqnarray*} An argument similar to that in (\ref{peel}) and (\ref{bound}) can be used to show that \[ E_{x_0} (\bar w_1^2 \cdots\bar w_k^2) \leq K_2^2 (1+m^{-1} \beta)^k. \] Hence, (\ref{moment}) again holds and (\ref{3.6}) follows from (\ref {sequ}) and (\ref{resampleu}). \end{pf} \subsection{Implementation, estimation of standard errors and discussion}\label{sec3.3} As explained in the first paragraph of Section \ref{sec3.1}, at every stage $t$, the SISR procedure carries out importance sampling sequentially within each simulated trajectory but performs resampling across the $m$ trajectories. Since the computation\vadjust{\goodbreak} time for resampling increases with $m$, it is more efficient to divide the $m$ trajectories into $r$ subgroups of size $k$ so that $m=kr$ and resampling is performed within each subgroup of $k$ trajectories, independently of the other subgroups. This method also has the advantage of providing a direct estimate of the standard error of the Monte Carlo estimate $\bar\alpha := r^{-1} \sum_{i=1}^r \widehat \alpha_i$, where $\widehat\alpha_i$ denotes the SISR estimate of $\alpha $ (using either bootstrap or residual resampling) based on the $i$th subgroup of simulated trajectories. Specifically, we can estimate the standard error of $\bar\alpha$ by $\widehat\sigma /\sqrt{r}$, where \begin{equation} \label{sub} \widehat\sigma^2 = (r-1)^{-1} \sum_{i=1}^r (\widehat\alpha_i - \bar\alpha)^2. \end{equation} In Section \ref{sec2} we considered the case of fixed $n$ as $m \rightarrow \infty$ and provided estimates of the standard errors of the asymptotically normal $\widehat \alpha_{\mathrm B}$ and $\widehat\alpha_{\mathrm R}$. The validity of these estimates is unclear for the case $n \rightarrow \infty$ and $m \rightarrow\infty$ as considered in this section that involves large deviations theory instead of central limit theorems. By choosing $m=kr$ with $k \rightarrow\infty$ and $r \rightarrow\infty$ in~(\ref{sub}), we still have a consistent estimate $\widehat \sigma/\sqrt{r}$ of the standard error in the large deviations setting with $n \rightarrow \infty$. The resampling weights in Theorems \ref{theo1} and \ref{theo2} have closed-form expressions in terms of the cumulant generating function $\psi(\theta)$ in the i.i.d. case or the logarithm $\psi(\theta)$ of the largest eigenvalue of the kernel (\ref{whtPtheta}) in the Markov case. When $\psi(\theta)$ does not have an explicit formula, we can use numerical approximations and thereby approximate the logarithmically efficient resampling weights, as will be illustrated in Example \ref{exam5}. This is, therefore, much more flexible than logarithmically efficient importance sampling which involves sampling from the efficient importance measure that involves both the eigenvalue and corresponding eigenfunction in the Markov case (see \cite{BNS90,CL07,Col02,DW05,SB90}). 
Note that approximating the eigenvalue and eigenfunction usually does not result in an importance (probability) measure and, therefore, requires an additional task of computing the normalizing constants. The basic ideas in Examples \ref{exam1} and \ref{exam2} and Sections \ref{sec3.1} and \ref{sec3.2} can be extended to more general rare events of the form $\{ \mathbf{X}_T \in\Gamma\}$ and more general stochastic sequences $\mathbf{X}_t$ and stopping times $T$. To evaluate $P \{ \mathbf{X}_T \in\Gamma\}$ by Monte Carlo, it would be ideal to sample from the importance measure $Q$ for which \begin{equation} \label{3.60} \frac{dQ}{dP}(\mathbf{X}_t) = P \{ \mathbf{X}_T \in\Gamma| \mathbf{X}_t \}/P \{ \mathbf{X}_T \in\Gamma\} \qquad\mbox{for } t \leq T, \end{equation} because the corresponding Monte Carlo estimate of $P \{ \mathbf{X}_T \in\Gamma\}$ would have variance 0 (see \cite{DW05}, page 2). This is clearly not feasible because the right-hand side of (\ref{3.60}) involves the conditional probabilities $P \{ \mathbf{X}_T \in\Gamma| \mathbf{X}_t \}$ and its expectation $P \{ \mathbf{X}_T \in\Gamma\}$ which is an unknown quantity to be determined. On the other hand, SISR enables one to ignore the normalizing factor $P \{ \mathbf{X}_T \in\Gamma\}$ and to use tractable approximations to $P \{ \mathbf{X}_T \in\Gamma| \mathbf{X}_t \}$, as in Example \ref{exam1}, in coming up with a logarithmically efficient Monte Carlo estimate of $P \{ \mathbf{X}_T \in\Gamma\}$. \section{Illustrative examples}\label{sec4} We use the following two examples to illustrate Theorems \ref{theo1} and \ref{theo4}. \begin{exa}\label{exam4} Let $X_1, X_2, \ldots$ be i.i.d. random variables with $EX_1=0$. Let $\xi_i=(X_i,X_i^2)$ and $S_n = \xi_1 + \cdots+ \xi_n$. Define $g(y,v)=y/\sqrt{v}$ for $y \in{\mathbf R}$ and $v>0$ and note that $g(S_n/n)$ is the self-normalized sum of the $X_i$'s. There is extensive literature on the large deviation probability $p_n = P \{ g(S_n/n) \geq b \}$ (see \cite{DLS09}). Consider the case $b=1/\sqrt{2}$ and $X_i$ having the density function \[ f(x) = \frac{1}{2 \sqrt{2 \pi}} \bigl(e^{-(x-1)^2/2}+e^{-(x+1)^2/2}\bigr),\qquad x \in{\mathbf R}, \] with respect to the Lebesgue measure. Thus, $X_i$ is a mixture of $N(1,1)$ and $N(-1,1)$. In this case, $\Theta= \{ (\theta_1,\theta_2)\dvtx\theta_2 < 1/2 \}$, $\Lambda= \{ (y,v)\dvtx v \geq y^2 \}$ and \[ \log(Ee^{\theta_1 X_1+\theta_2 X_1^2}) = \log\biggl( \frac{1}{2} \biggr) + \frac{1}{2} -\frac{\theta_1^2+1}{2-4 \theta_2} + \log\biggl(\frac{e^{\theta_1/(1-2 \theta_2)} + e^{-\theta_1/(1-2 \theta_2) }}{\sqrt{1-2 \theta_2}} \biggr) \] for $\theta\in\Theta$. The infimum of the rate function over the one-dimensional manifold $N = \{ (y,v): y = \sqrt{v/2} \}$ is $I=0.324$ and is attained at $(y,v)=(1,2)$. Then $M = \{ \theta=(\theta_1, \theta_2)\dvtx \phi(y_\theta,v_\theta) \leq I \}$ [see (\ref{Htheta}) and Theorem \ref{theo1}]. We implement SISR with bootstrap resampling as described in Section \ref {sec3.3}, with $m=10$,000 particles, divided into 100 groups each having 100 particles. 
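As a side remark on implementation, the subgroup scheme of Section \ref{sec3.3} is straightforward to code. The outline below is only an illustrative sketch and is not the code used to produce the tables: \texttt{group\_estimate} is a stand-in for one subgroup's SISR estimate (here replaced by plain Monte Carlo of a toy probability simply to keep the sketch self-contained), while \texttt{grouped\_estimate} assembles $\bar\alpha$ and the standard error estimate $\widehat\sigma/\sqrt{r}$ of (\ref{sub}) from the $r$ independent subgroup estimates.
\begin{verbatim}
import numpy as np

def group_estimate(k, rng):
    # Stand-in for one subgroup's SISR estimate; here plain Monte Carlo of
    # P{N(0,1) >= 2} on k samples, used only to make the sketch runnable.
    return np.mean(rng.standard_normal(k) >= 2.0)

def grouped_estimate(run_group, r, k, rng):
    # m = k*r particles split into r independent subgroups of size k;
    # resampling (if any) is carried out only within a subgroup, and the
    # standard error is read off from the spread of the r subgroup estimates.
    alphas = np.array([run_group(k, rng) for _ in range(r)])
    alpha_bar = alphas.mean()
    sigma_hat = alphas.std(ddof=1)  # sqrt of (r-1)^{-1} sum (alpha_i - alpha_bar)^2
    return alpha_bar, sigma_hat / np.sqrt(r)

rng = np.random.default_rng(0)
print(grouped_estimate(group_estimate, r=100, k=100, rng=rng))
\end{verbatim}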
The results, in the form of mean${}\pm{}$standard error and for $n=15, 20$ and 25, are summarized in Table \ref{table1}, which also compares them to corresponding results obtained \begin{table}[b] \tablewidth=280pt \caption{Monte Carlo estimates of $P \{ g(S_n/n) \geq1/\sqrt{2} \}$}\label{table1} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccc@{}} \hline & \multicolumn{3}{c@{}}{$\bolds{n}$}\\[-4pt] & \multicolumn{3}{c@{}}{\hrulefill}\\ & \textbf{15} & \textbf{20} & \textbf{25} \\ \hline SISR & $(1.10 \pm0.07) \times10^{-3}$ & $(1.9\pm0.2)\times10^{-4}$ & $(4.0\pm0.7) \times10^{-5}$ \\ Direct & $(0.9 \pm 0.3) \times10^{-3}$ & $(1\pm1) \times10^{-4}$ & 0 \\ \hline \end{tabular*} \end{table} by direct Monte Carlo with $m=10$,000 in (\ref{2.1}) and (\ref{2.2}). Table \ref{table1} shows 18-fold variance reduction by using SISR when $n=15$, 25-fold variance reduction when $n=20$ and that direct Monte Carlo fails when $n=25$. \end{exa} \begin{exa}\label{exam5} Let $\zeta_1, \zeta_2, \ldots, \gamma_1, \gamma_2, \ldots$ be i.i.d. standard normal random variables and let \begin{equation} \label{nonlinear} X_{n+1} = \lambda(X_n) + \zeta_{n+1}, \qquad\xi_n = X_n + \gamma_n, \end{equation} where $\lambda(x)$ is a monotone increasing, piecewise linear function given by \[ \lambda(x) = x {\mathbf1}_{\{ |x| \leq1 \}}+ \biggl( \frac{x+1}{2} \biggr) \mathbf{1}_{\{ x>1 \}} + \biggl( \frac{x-1}{2} \biggr) {\mathbf1}_{\{ x<-1 \}}. \] Let $\theta> 0$. We now show that (U1)--(U4) hold for $u(x) = e^{2.1 \theta x^+}$ and $C=(-\infty,\rho ]$, where $\rho\geq1$ is chosen large enough so that (U2) holds, as shown below. Since $(a+b)^+ \leq a+b^+$ for $a > 0$ and since $e^{2.05 \theta x} \leq e^{-0.05 \theta x} u(x)$, it follows that for $x > \rho$, \begin{eqnarray*} E_x \bigl\{ e^{\theta\xi_1-\psi(\theta)} u(X_1) \bigr\} & = & E \bigl\{ e^{\theta x + \theta\gamma_1 - \psi(\theta) + 2.1 \theta(({x+1})/{2}+\zeta _1)^+} \bigr\} \\ & \leq& u(x) e^{-0.05 \theta x} E \bigl\{ e^{\theta\gamma_1 - \psi(\theta) +1.05 \theta+2.1 \theta\zeta_1^+} \bigr\} \end{eqnarray*} and, therefore, (U2) holds if $\rho$ is large enough. It is easy to check that\vspace*{1pt} (U3) holds. Note that $\sup_{x \in(-\infty,1]} E_x[e^{2 \theta\xi_1-2 \psi(\theta)} u^2(X_1)] < \infty$ and that for $x>1$, \begin{eqnarray*} && E_x\bigl[e^{2 \theta\xi_1-2 \psi(\theta)} u^2(X_1)\bigr]/u^2(x) \\ &&\qquad = E \bigl[e^{2 \theta x+2 \theta\gamma_1 - 2 \psi(\theta)+4.2 \theta (({x+1})/{2}+ \zeta_1)^+}\bigr]/e^{4.2 \theta x^+} \\ &&\qquad \leq (e^{-0.1 \theta x} \wedge e^{2 \theta x}) E \bigl[e^{2 \theta\gamma _1 -2 \psi(\theta)+ 2.1 \theta+4.2 \theta \zeta_1^+}\bigr] \rightarrow0 \qquad\mbox{as } x \rightarrow\infty \end{eqnarray*} and, therefore, (U4) holds. Since $\lim_{x \rightarrow-\infty} E_x(e^{\theta\xi_1-\psi(\theta)})=0$, it follows that $\lim_{x \rightarrow-\infty} \ell_x(\theta;C)=0$; moreover, $u(x)=1$ for all $x \leq0$ and hence, (U1) also holds. We compute $P_0 \{ S_n/n \geq2.5 \}$ for SISR using resampling, with $m=10$,000 particles divided into 100 groups, each having 100 particles, and with resampling weights (\ref{uweights}) for which the following procedure is used to provide a numerical approximation for $\theta _{2.5}$. First note that by (\ref{nonlinear}), \begin{equation} \label{Exetheta} E_x e^{\theta\xi_1} = e^{\theta^2/2} E_x e^{\theta X_1}. 
\end{equation} The procedure involves a finite-state Markov chain approximation to (\ref{nonlinear}) with states $x_i$ and transition probabilities $p_{ij}$ ($1 \leq i, j \leq 1\mbox{,}000$) given by \[ x_i = \frac{i}{100}-2.505,\qquad p_{ij} = e^{-(x_j-\lambda(x_i))^2/2} \Big/ \sum_{k=1}^{1\mbox{,}000} e^{-(x_k-\lambda(x_i))^2/2}. \] For given $\theta$, it approximates $\psi(\theta)$ by $\theta^2/2 + \widetilde\psi(\theta)$, where $e^{\tilde\psi(\theta)}$ is the largest eigenvalue of the matrix $(e^{\theta x_j} p_{ij})_{1 \leq i,j \leq1\mbox{,}000}$, in view of (\ref {whtPtheta}) and (\ref{Exetheta}). Since $\psi'(\theta_{2.5})=2.5$ by (\ref {thetamu}), it uses Brent's method \cite{PFTV92} that involves bracketing followed by\vspace*{1pt} efficient search to find the positive root $\widetilde\theta_{2.5}$ of the equation $\widetilde\psi(\theta)+\theta^2/2 = 2.5 \theta$, noting that $\widetilde\psi(0)=0$. The root $\widetilde\theta _{2.5}=0.273$ is then used as an approximation to $\theta_{2.5}$ in (\ref{uweights}). Table \ref{table2} gives the results, \begin{table} \caption{Monte Carlo estimates of $P_0 \{ S_n/n \geq2.5 \}$}\label{table2} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}ld{1.3}ccc@{}} \hline & & \multicolumn{3}{c@{}}{$\bolds{n}$} \\[-4pt] & & \multicolumn{3}{c@{}}{\hrulefill}\\ & \multicolumn{1}{c}{$\bolds\theta$} & \textbf{15} & \textbf{20} & \textbf{25} \\ \hline SISR & 0.1 & $(9.68\pm1.37) \times10^{-4}$ & $(2.81\pm0.57) \times 10^{-4}$ & $(4.70\pm1.22) \times10^{-5}$ \\ & 0.2 & $(9.65\pm0.75) \times10^{-4}$ & $(2.45\pm0.24) \times10^{-4}$ & $(6.70\pm0.64) \times10^{-5}$ \\ & 0.273 & $(8.31\pm0.48) \times10^{-4}$ & $(2.42\pm0.19) \times 10^{-4}$ & $(6.33\pm0.44) \times10^{-5}$ \\ & 0.3 & $(9.11\pm0.51) \times10^{-4}$ & $(2.54\pm0.20) \times10^{-4}$ & $(5.27\pm0.38) \times10^{-5}$ \\ & 0.4 & $(9.78\pm0.80) \times10^{-4}$ & $(2.60\pm0.20) \times10^{-4}$ & $(6.58\pm0.67) \times10^{-5}$ \\ Direct & & $(8\pm3) \times10^{-4}$ & $(3\pm2) \times10^{-4}$ & 0 \\ \hline \end{tabular*} \end{table} in the form of mean${}\pm{}$standard error, for the SISR [with several choices of $\theta$ in (\ref{uweights}), including $\theta=\widetilde\theta_{2.5}$] and direct Monte Carlo estimates of $P_0 \{ S_n/n \geq2.5 \}$. It shows a variance reduction of 35 times for $n=15$ and 80 times for $n=20$ over direct Monte Carlo when $\widetilde \theta_{2.5}$ is used as an approximation to $\theta_{2.5}$ in the resampling weights (\ref {uweights}) for SISR. When $n=25$, direct Monte Carlo fails while the SISR estimate still has a reasonably small standard error. \end{exa} \begin{appendix}\label{app} \section*{\texorpdfstring{Appendix: Proof of (\lowercase{\protect\ref{3.16}}) and (\lowercase{\protect\ref{ku}})} {Appendix: Proof of (3.19) and (3.54)}} \begin{pf*}{Proof of (\ref{3.16})} For $0 < \varepsilon < I$, let \[ M_\varepsilon= \{ \theta\dvtx\phi(\mu_\theta)=I-\varepsilon\},\qquad H(\theta) = \{ \mu\in\Lambda^o\dvtx\theta'(\mu-\mu_\theta) \geq0 \}. \] If $\mu\in H(\theta)$, then $\theta' \mu\geq\theta' \mu_\theta$ and, therefore, \setcounter{equation}{0} \begin{equation} \label{phibound} \phi(\mu) = \sup_{\tilde\theta} \{ \widetilde\theta' \mu- \psi(\widetilde \theta) \} \geq \theta' \mu-\psi(\theta) \geq\theta' \mu_\theta- \psi(\theta) = I-\varepsilon. \end{equation} Moreover, for $\theta\in M_\varepsilon$, $H(\theta)$ is a closed half-space whose boundary is the tangent space of $\{ \mu\dvtx\phi(\mu) = I-\varepsilon\}$ at $\mu_\theta$. 
Hence, \begin{equation} \label{A.2} \phi(\mu) \neq I-\varepsilon\qquad\mbox{for } \mu\in\Lambda^o \bigm\backslash \bigcup_{\theta\in M_\varepsilon} H(\theta). \end{equation} Making use of this and (\ref{phibound}), we next show that \begin{equation} \label{cupH} \bigcup_{\theta\in M_\varepsilon} H(\theta) = \{ \mu\dvtx\phi(\mu) \geq I-\varepsilon\} \end{equation} and, therefore, by (\ref{I}), \begin{equation} \label{3.15} \Gamma:= \{ \mu\dvtx g(\mu) \geq b \} \subset\{ \mu\dvtx\phi(\mu) \geq I -\varepsilon\} = \bigcup_{\theta\in M_\varepsilon} H(\theta). \end{equation} By (\ref{phibound}), $\bigcup_{\theta\in M_\varepsilon} H(\theta) \subset\{ \mu\dvtx \phi(\mu) \geq I-\varepsilon\}$. Therefore, it suffices for the proof of (\ref{cupH}) to show that $\{ \mu\dvtx\phi(\mu) < I-\varepsilon\} \supset\Lambda^o \setminus \bigcup_{\theta\in M_\varepsilon} H(\theta)$. Suppose this is not the case. Then there exists $\mu_1 \in\Lambda^o \setminus\bigcup_{\theta\in M_\varepsilon} H(\theta)$ such that $\phi(\mu_1) \geq I-\varepsilon$. Since $\Lambda ^o \setminus \bigcup_{\theta\in M_\varepsilon} H(\theta) \supset\{ \mu\dvtx\phi(\mu ) < I-\varepsilon\}$, there exists $\mu_2 \in\Lambda^o \setminus\bigcup_{\theta\in M_\varepsilon} H(\theta)$ such that $\phi(\mu_2) < I-\varepsilon$. By continuity of $\phi$, there exists $\rho \in(0,1)$ such that $\phi(\rho\mu_1+(1-\rho) \mu_2)=I-\varepsilon$. Since $\Lambda^o \setminus H(\theta)$ is a half-space, $\Lambda^o \setminus\bigcup_{\theta\in M_\varepsilon} H(\theta) = \bigcap_{\theta\in M_\varepsilon} (\Lambda^o \setminus H(\theta))$ is convex and, therefore, $\rho\mu _1+(1-\rho) \mu_2 \in \Lambda^o \setminus\bigcup_{\theta\in M_\varepsilon} H(\theta)$, but this contradicts (\ref{A.2}), thereby proving (\ref{cupH}). Define the measure $Q$ by \[ \frac{dQ}{dP}({\mathbf Y}_n) = \int_M e^{\theta' S_n-n \psi(\theta)} \,d \theta/ \operatorname{vol}(M), \] where vol$(M)$ is the volume of $M$. Let $\mu_n = S_n/n$ and $h_n(\theta) = \theta' \mu_n - \psi(\theta)$. From (\ref{3.15}), it follows that if $\mu_n \in\Gamma$, then there exists $\theta_* \in M_\varepsilon$ such that $\theta_*'(\mu_n - \mu_{\theta_*}) \geq0$ and, therefore, \begin{equation} \label{hstar} h_n(\theta_*) = \theta_*' \mu_n - \psi(\theta_*) \geq\theta_*' \mu_{\theta_*} - \psi(\theta_*) = \phi(\mu_{\theta_*}) = I-\varepsilon, \end{equation} since $\theta_* \in M_\varepsilon$. Let $B_n = \{ \theta\dvtx(\theta-\theta_*)' \nabla h_n(\theta_*) \geq 0, \| \theta- \theta_* \| \leq n^{-1/2} \}$. Then for all $\theta\in B_n$, $h_n(\theta) = h_n(\theta^*)+(\theta-\theta^*)' \nabla h_n(\theta_*)- (\theta-\theta_*)' \nabla^2 \psi(\theta_*)(\theta-\theta_*)/2 +o(\| \theta-\theta_* \|^2)$ and, therefore, by (\ref{hstar}) and the definition of $B_n$, \[ h_n(\theta) \geq I - \varepsilon- (K+1)/(2n) \qquad\mbox{for all large } n, \] where $K = \sup_{\theta\in M} \| \nabla^2 \psi(\theta) \|$. Hence, for all large $n$, \begin{eqnarray} \label{volM} \frac{dQ}{dP}(\mathbf{Y}_n) & \geq& {\mathbf1}_{\{ \mu_n \in\Gamma\}} \int_{B_n} \exp\{ n h_n(\theta) \} \,d \theta/ \operatorname{vol}(M) \nonumber\\[-8pt]\\[-8pt] & \geq& {\mathbf1}_{\{ \mu_n \in\Gamma\}} (c_d/2) e^{nI-n \varepsilon -(K+1)/2} n^{-d/2}/\operatorname{vol}(M), \nonumber \end{eqnarray} in which $c_d$ denotes the volume of the $d$-dimensional unit ball. Letting $\varepsilon\rightarrow0$ in (\ref{volM}) yields $(dQ/dP)(\mathbf{Y}_n) \geq e^{nI+o(n)} {\mathbf1}_{\{ \mu_n \in\Gamma\}}$ in which $o(n)$ is uniform in~$\mathbf{Y}_n$. 
Hence, \begin{eqnarray*} P \{ g(S_n/n) \geq b | \mathbf{Y}_k \} & = & E_Q \biggl[ \frac{dP}{dQ} (\mathbf{Y}_n) {\mathbf1}_{\{ S_n/n \in\Gamma\}} \,\frac{dQ}{dP}(\mathbf{Y}_k) \Big| \mathbf{Y}_k \biggr] \\ & \leq& e^{-n I+o(n)} \,\frac{dQ}{dP}(\mathbf{Y}_k), \end{eqnarray*} proving (\ref{3.16}). \end{pf*} To prove (\ref{ku}), we use ideas similar to those in the proof of Lemma 1 of~\cite{CL03} and the following result of \cite{NN87}, page 568. \begin{lem}\label{lemma3} Let $\tau(0)=0$. Under (\ref{m1}), there exist regeneration times~$\tau(i)$, $i \geq1$, such that: \begin{longlist} \item $\tau(i+1)-\tau(i)$, $i \geq0$, are i.i.d. random variables, \item$\{ X_{\tau(i)}, \ldots, X_{\tau(i+1)-1}, \xi _{\tau(i)+1}, \ldots, \xi_{\tau(i+1)} \}$, $i=0,1,\ldots,$ are independent blocks, \item$X_{\tau(i)}$ has distribution $\nu$ for all $i \geq1$. \end{longlist} \end{lem} \begin{pf*}{Proof of (\ref{ku})} Let $\widetilde\ell_x = E_x \{ \sum_{n=1}^{\tau} e^{\theta_b S_n} u(X_n) \}$, $\widetilde\ell _\nu= \int\widetilde\ell_x \,d \nu(x)$ and $A = \{ \tau(i)\dvtx i \geq1 \}$. Since $u \geq1$, \begin{eqnarray} \label{expand}\quad E_x \{ e^{\theta_b S_k} u(X_k) \} &=& E_x \bigl\{ e^{\theta_b S_k} u(X_k) {\mathbf1}_{\{ \tau\geq k \}} \bigr\} \nonumber\\ &&{} + \sum_{j=1}^{k-1} E_x\bigl(e^{\theta_b S_j} {\mathbf1}_{\{ j \in A \} }\bigr) E_\nu\bigl(e^{\theta_b S_{k-j}} u(X_{k-j}) {\mathbf1}_{\{ \tau\geq k-j \}}\bigr) \\ &\leq& \widetilde\ell_x + \widetilde\ell_\nu\Bigl[\sup_{j \geq1} E_x\bigl(e^{\theta_b S_j} {\mathbf1}_{\{ j \in A \}}\bigr)\Bigr]. \nonumber \end{eqnarray} Let $0 < \sigma=\sigma(1) < \sigma(2) < \cdots$ be the hitting times of $C$. Then \begin{eqnarray} \label{ltilde} \widetilde\ell_x &\leq& E_x \Biggl\{ \sum_{n=1}^{\sigma} e^{\theta_b S_n} u(X_n) \Biggr\}\nonumber\\[-8pt]\\[-8pt] &&{} + E_x \Biggl\{ \sum_{k\dvtx\sigma(k) < \tau} e^{\theta_b S_{\sigma(k)}} \sum_{n=\sigma(k)+1}^{\sigma(k+1)} e^{\theta_b(S_n-S_{\sigma(k)})} u(X_n) \Biggr\}.\nonumber \end{eqnarray} Let $y \in\mathcal{X}$. By (U2), for all $n \geq2$, \[ E_y \bigl\{ e^{\theta_b S_n} u(X_n) {\mathbf1}_{\{ n \leq\sigma\}} \bigr\} \leq(1-\beta) E_y \bigl(e^{\theta_b S_{n-1}} u(X_{n-1}) {\mathbf1}_{\{ n-1 \leq \sigma\}}\bigr), \] from which it follows by proceeding inductively and applying (U3) that \begin{equation} \label{recursive} E_y \Biggl\{ \sum_{n=1}^{\sigma} e^{\theta_b S_n} u(X_n) \Biggr\} \leq\beta^{-1} \max\{ a, (1-\beta) u(y) \} \leq\alpha u(y), \end{equation} where $\alpha=\beta^{-1} \max\{ a, (1-\beta) \}$. Substitution of (\ref{recursive}) into (\ref{ltilde}) then yields \begin{equation} \label{lbound} \widetilde\ell_x\,{\leq}\,\alpha\Biggl\{\!u(x)\,{+}\,E_x \Biggl( \sum_{n=0}^{\tau-1} e^{\theta_b S_n} u(X_n) {\mathbf1}_{\{ X_n \in C \}} \Biggr)\!\Biggr\}\,{\leq}\,\alpha u(x) \,{+}\,\eta\ell_x(\theta_b;C),\hspace*{-45pt} \end{equation} where $\eta= \sup_{y \in C} u(y)$. Since $\int_\mathcal{X} u(x) \,d \nu(x) < \infty$ and $\int_\mathcal{X} \ell_x(\theta_b;C) \,d \nu(x) < \infty$, it follows from (\ref{lbound}) that $\widetilde\ell_\nu< \infty$. Combining \[ \ell_x(\theta_b;C) \leq E_x (e^{\theta_b S_\sigma}) \Bigl[\sup_{y \in C} \ell_y(\theta_b;C)\Bigr] \] with (\ref{recursive}) yields \begin{equation} \label{U3} \sup_{x \in\mathcal{X}} \{ \ell_x(\theta_b;C)/u(x) \} < \infty. \end{equation} Let $Q^*$ be a probability measure under which \[ \frac{dQ^*}{dP_\nu}\bigl( \{ (X_t,S_t)\dvtx t \leq\tau(i) \}\bigr) = e^{\theta_b S_{\tau(i)}}. 
\] Then \begin{equation} \label{A.6} \sup_{k \geq1} E_\nu\bigl(e^{\theta_b S_k} {\mathbf1}_{\{ k \in A \}}\bigr) = \sup_{k \geq1} Q^* \{ \tau(i)=k \mbox{ for some } i \} \leq1. \end{equation} From (\ref{expand}), (\ref{lbound}), (\ref{U3}) and \begin{eqnarray*} E_x \bigl(e^{\theta_b S_j} {\mathbf1}_{\{ j \in A \}}\bigr) & = & E_x \bigl(e^{\theta_b S_\tau} {\mathbf1}_{\{ \tau=j \}}\bigr) \\ & &{} + \sum_{h=1}^{j-1} E_x\bigl(e^{\theta_b S_\tau} {\mathbf1}_{\{ \tau=h \}}\bigr) E_\nu\bigl(e^{\theta_b S_{j-h}} {\mathbf1}_{\{ j-h \in A \}}\bigr) \\ & \leq& E_x(e^{\theta_b S_\tau}) \Bigl\{ 1+\sup_{k \geq1} E_\nu \bigl(e^{\theta_b S_k} {\mathbf1}_{\{ k \in A \} }\bigr) \Bigr\}, \end{eqnarray*} (\ref{ku}) follows from (\ref{A.6}). \end{pf*} \end{appendix} \printaddresses \end{document}
\begin{document} \title{A general inertial proximal point method for mixed variational inequality problem} \author{Caihua Chen\footnotemark[1] \and Shiqian Ma\footnotemark[2] \and Junfeng Yang\footnotemark[3]} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext[1]{International Center of Management Science and Engineering, School of Management and Engineering, Nanjing University, China (Email: {\tt [email protected]}). Research supported in part by Natural Science Foundation of Jiangsu Province under project grant No. BK20130550 and the Natural Science Foundation of China NSFC grant 11371192. } \footnotetext[2]{Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong (Email: {\tt [email protected]}). This author was supported by Hong Kong Research Grants Council General Research Fund Early Career Scheme (Project ID: CUHK 439513).} \footnotetext[3]{Corresponding author (Email: {\tt [email protected]}). Department of Mathematics, Nanjing University, China. This author was supported by the Natural Science Foundation of China NSFC-11371192 and a grant from Jiangsu Key Laboratory for Numerical Simulation of Large Scale Complex Systems. The work was done while this author was visiting the Chinese University of Hong Kong.} \renewcommand{\thefootnote}{\arabic{footnote}} \date{\today} \maketitle \begin{abstract} In this paper, we first propose a general inertial proximal point method for the mixed variational inequality (VI) problem. To the best of our knowledge, no convergence rate result is known in the literature for inertial type proximal point methods without stronger assumptions. Under certain conditions, we are able to establish the global convergence and a $o(1/k)$ convergence rate result (under a certain measure) of the proposed general inertial proximal point method. We then show that the linearized alternating direction method of multipliers (ADMM) for separable convex optimization with linear constraints is an application of a general proximal point method, provided that the algorithmic parameters are properly chosen. As byproducts of this finding, we establish global convergence and $O(1/k)$ convergence rate results of the linearized ADMM in both the ergodic and nonergodic sense. In particular, by applying the proposed inertial proximal point method for mixed VI to linearly constrained separable convex optimization, we obtain an inertial version of the linearized ADMM for which the global convergence is guaranteed. We also demonstrate the effect of the inertial extrapolation step via experimental results on the compressive principal component pursuit problem. \end{abstract} \begin{keywords} proximal point method, inertial proximal point method, mixed variational inequality, linearized alternating direction method of multipliers, inertial linearized alternating direction method of multipliers. \end{keywords} \begin{AMS} 65K05, 65K10, 65J22, 90C25 \end{AMS} \thispagestyle{plain} \section{Introduction} Let $T: \Re^n \rightrightarrows \Re^n$ be a set-valued maximal monotone operator from $\Re^n$ to its power set. The maximal monotone operator inclusion problem is to find $w^*\in\Re^n$ such that \begin{equation}\label{prob:0inTw} 0 \in T(w^*).
\end{equation} Due to the mathematical generality of maximal monotone operators, the problem \eqref{prob:0inTw} is very inclusive and serves as a unified model for many problems of fundamental importance, for example, fixed point problems, variational inequality problems, the minimization of closed proper convex functions, and their extensions. Therefore, it becomes extremely important in many cases to solve \eqref{prob:0inTw} in practical and efficient ways. The classical proximal point method, which converts the maximal monotone operator inclusion problem to a fixed point problem of a firmly nonexpansive mapping via resolvent operators, is one of the most influential approaches for solving \eqref{prob:0inTw} and has been studied extensively both in theory and in practice. The proximal point method was originally proposed by Martinet \cite{Mar70} based on the work of Moreau \cite{Mor65} and was popularized by Rockafellar \cite{Roc76a}. It turns out that the proximal point method is a very powerful algorithmic tool and contains many well-known algorithms as special cases. In particular, it was shown that the classical augmented Lagrangian method for constrained optimization \cite{Hes69, Pow69}, the Douglas-Rachford operator splitting method \cite{DR56} and the alternating direction method of multipliers (ADMM, \cite{GM75, GM76}) are all applications of the proximal point method; see \cite{Roc76b, EB92}. Various inexact, relaxed and accelerated variants of the proximal point method have also been studied extensively in the literature; see, e.g., \cite{Roc76a, EB92, Gul92}. The proximal point method for minimizing a differentiable function $f:\; \Re^n\rightarrow \Re$ can be interpreted as an implicit one-step discretization method for the ordinary differential equation \begin{equation}\label{ode-order1-nabla-f} w' + \nabla f(w) = 0, \end{equation} where $w: \Re\rightarrow \Re^n$ is differentiable, $w'$ denotes its derivative, and $\nabla f$ is the gradient of $f$. Suppose that $f$ is closed proper and convex and its minimum value is attained; then every solution trajectory $\{w(t): \; t\geq 0\}$ of the differential system \eqref{ode-order1-nabla-f} converges to a minimizer of $f$ as $t$ goes to infinity. A similar conclusion can be drawn for \eqref{prob:0inTw} by considering the evolution differential inclusion problem $0 \in w'(t) + T(w(t))$ almost everywhere on $\Re_+$, provided that the operator $T$ satisfies certain conditions; see, e.g., \cite{Bru75}. The proximal point method is a one-step iterative method, i.e., each new iterate does not depend on any iterates already generated other than the current one. To speed up convergence, multi-step methods have been proposed in the literature by discretizing a second-order ordinary differential system of the form \begin{equation} \label{HBF} w'' + \gamma w' + \nabla f(w) = 0, \end{equation} where $\gamma>0$. Studies in this direction can be traced back to at least \cite{Pol64}, which examined the system \eqref{HBF} in the context of optimization. In the two-dimensional case, the system \eqref{HBF} characterizes roughly the motion of a heavy ball which rolls under its own inertia over the graph of $f$ until friction stops it at a stationary point of $f$. The three terms in \eqref{HBF} denote, respectively, the inertial force, the friction force and the gravity force. Therefore, the system \eqref{HBF} is usually referred to as the \emph{heavy-ball with friction} (HBF) system.
It is easy to show that the energy function $E(t) = \frac{1}{2} \|w'(t)\|^2 + f(w(t))$ is always decreasing with time $t$ unless $w'$ vanishes, which implies that the HBF system is dissipative. It was proved in \cite{Alv00} that if $f$ is convex and its minimum value is attained then each solution trajectory $\{w(t): t\geq 0\}$ of \eqref{HBF} converges to a minimizer of $f$. In theory the convergence of the solution trajectories of the HBF system to a stationary point of $f$ can be faster than that of the first-order system \eqref{ode-order1-nabla-f}, while in practice the second-order inertial term $w''$ can be exploited to design faster algorithms \cite{APZ84,Ant94}. Motivated by the properties of \eqref{HBF}, an implicit discretization method was proposed in \cite{Alv00}. Specifically, given $w^{k-1}$ and $w^k$, the next point $w^{k+1}$ is determined via \[ \frac{w^{k+1} - 2w^k + w^{k-1}}{h^2} + \gamma\frac{w^{k+1} - w^k}{h} + \nabla f(w^{k+1}) = 0, \] which results in an iterative algorithm of the form \begin{equation} \label{iPPA-min-f} w^{k+1} = (I + \lambda \nabla f)^{-1} (w^k + \alpha (w^k-w^{k-1})), \end{equation} where $\lambda = h^2/(1+\gamma h)$ and $\alpha = 1/(1+\gamma h)$. Note that \eqref{iPPA-min-f} is nothing but a proximal point step applied to the extrapolated point $w^k + \alpha (w^k-w^{k-1})$, rather than $w^k$ as in the classical proximal point method. Thus the resulting iterative scheme \eqref{iPPA-min-f} is a two-step method and is usually referred to as an inertial proximal point algorithm (PPA). Convergence properties of \eqref{iPPA-min-f} were studied in \cite{Alv00} under some assumptions on the parameters $\alpha$ and $\lambda$. Subsequently, this inertial technique was extended to solve the inclusion problem \eqref{prob:0inTw} of maximal monotone operators in \cite{AA01}. See also \cite{ME03} for an approximate inertial PPA and \cite{Alv04, MM07, MM10} for some inertial type hybrid proximal algorithms. Recently, there has been increasing interest in studying inertial type algorithms. Some recent references are inertial forward-backward splitting methods for certain separable nonconvex optimization problems \cite{OCBP14} and for strongly convex problems \cite{OBP14, APR14}, inertial versions of the Douglas-Rachford operator splitting method and the ADMM for the maximal monotone operator inclusion problem \cite{BCH14, BC14a}, and an inertial forward-backward-forward method \cite{BC14d} based on Tseng's approach \cite{Tse00}. See also \cite{BC14b, BC14c}. \subsection{Contributions} In this paper, we focus on the mixed variational inequality (VI) problem and study an inertial PPA in a more general setting. In particular, a weighting matrix $G$ in the proximal term is introduced. In our setting the matrix $G$ is allowed to be positive semidefinite, as long as it is positive definite on the null space of a certain matrix. We establish its global convergence and a $o(1/k)$ convergence rate result under certain conditions. To the best of our knowledge, without stronger assumptions, no convergence rate result is known in the literature for general inertial type proximal point methods. This general setting allows us to propose an inertial version of the linearized ADMM, a practical variant of the well-known ADMM which has recently found numerous applications \cite{Boyd+11}. We show that the linearized ADMM for separable convex optimization is an application of a general PPA to the primal-dual optimality conditions, as long as the parameters are properly chosen.
As byproducts of this finding, we establish global convergence and $O(1/k)$ convergence rate results for the linearized ADMM. Another aim of this paper is to study the effect of the inertial extrapolation step via numerical experiments. Finally, we connect inertial type algorithms with the popular accelerated methods pioneered by Nesterov \cite{Nes83} and give some concluding remarks.
The main reason that we restrict our analysis to the mixed VI problem rather than the apparently more general problem \eqref{prob:0inTw} is that it is very convenient to represent the optimality conditions of linearly constrained separable convex optimization as a mixed VI. In fact, our analysis for Theorems \ref{Theorem1} and \ref{Theorem2} can be generalized to the maximal monotone operator inclusion problem \eqref{prob:0inTw} without any difficulty.
\subsection{Notation} We use the following notation. The standard inner product and $\ell_2$ norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. The sets of symmetric, symmetric positive semidefinite and symmetric positive definite matrices of order $n$ are, respectively, denoted by $S^n, S^n_+$ and $S^n_{++}$. For any matrix $A \in S^n_{+}$ and vectors $u, v\in \Re^n$, we let $\langle u, v\rangle_A := u^TAv$ and $\|u\|_A := \sqrt{\langle u, u\rangle_A}$. The Frobenius norm is denoted by $\|\cdot\|_F$. The spectral radius of a square matrix $M$ is denoted by $\rho(M)$.
\section{A general inertial PPA for mixed VI} Let $\Omega\subseteq \Re^n$ be a closed and convex set, $\theta: \Re^n\rightarrow \Re$ be a closed proper convex function, and $F: \Re^n\rightarrow \Re^n$ be a monotone mapping. In this paper, we consider the mixed VI problem: find $w^*\in\Omega$ such that \begin{equation}\label{mVI} \theta(w) - \theta(w^*) + \langle w-w^*, F(w^*)\rangle \geq 0,\; \forall w\in \Omega. \end{equation} Let $G\in S^n_{+}$ and two sequences of parameters $\{\alpha_k\geq 0: k=0,1,2,\ldots\}$ and $\{\lambda_k > 0: k=0,1,2,\ldots\}$ be given. We study a general inertial PPA of the following form: given any $w^0 = w^{-1}\in \Re^n$, for $k=0,1,2,\ldots$, find $w^{k+1}\in \Omega$ such that \begin{subequations}\label{gippa} \begin{eqnarray} \label{gippa1} &\hspace{-6.5cm}\bar w^k := w^k + \alpha_k (w^k - w^{k-1}), \\ \label{gippa2} &\theta(w) - \theta(w^{k+1}) + \langle w-w^{k+1}, F(w^{k+1}) + \lambda_k^{-1}G(w^{k+1} - \bar w^k) \rangle \geq 0, \; \forall\; w\in \Omega. \end{eqnarray} \end{subequations} We make the following assumptions. \begin{assumption}\label{Omega-ast-nonempty} The set of solutions of \eqref{mVI}, denoted by $\Omega^*$, is nonempty. \end{assumption} \begin{assumption}\label{assumption-H-monotone} The mapping $F$ is $H$-monotone in the sense that \begin{equation}\label{H-monotonicity} \langle u-v, F(u)-F(v)\rangle \geq \|u-v\|_H^2, \quad \forall u, v\in \Re^n, \end{equation} where $H \in S^n_{+}$. Note that one can take $H=0$ if $F$ is merely monotone, and $H\in S^n_{++}$ if $F$ is strongly monotone. \end{assumption} \begin{assumption}\label{M>0} The sum of $G$ and $H$, denoted by $M$, is positive definite, i.e., $M := G+H \in S^n_{++}$. \end{assumption} Under Assumptions \ref{assumption-H-monotone} and \ref{M>0}, it can be shown that $w^{k+1}$ is uniquely determined in \eqref{gippa2}. Therefore, the algorithm \eqref{gippa1}-\eqref{gippa2} is well defined. Clearly, the algorithm reduces to the classical PPA if $G\in S^n_{++}$ and $\alpha_k = 0$ for all $k$. It is called an inertial PPA because $\alpha_k$ can be greater than $0$.
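To make the scheme concrete, the following is a minimal, non-authoritative Python/NumPy sketch of the inertial PPA \eqref{gippa} for the special case $\theta \equiv 0$, $\Omega = \Re^n$, $F(w) = Qw - c$ with $Q$ symmetric positive definite, and $G = I$. In this case step \eqref{gippa2} reduces to the linear system $(I + \lambda_k Q)\, w^{k+1} = \bar w^k + \lambda_k c$; all function names and parameter values below are illustrative only.
\begin{verbatim}
import numpy as np

def inertial_ppa_quadratic(Q, c, w0, lam=1.0, alpha=0.25, iters=200):
    # Sketch of (gippa) with theta = 0, Omega = R^n, F(w) = Q w - c, G = I.
    # Step (gippa2) then reduces to solving (I + lam*Q) w_{k+1} = wbar_k + lam*c.
    n = len(c)
    M = np.eye(n) + lam * Q          # resolvent system matrix (lam kept constant here)
    w_prev, w = w0.copy(), w0.copy() # start with w^{-1} = w^0
    for _ in range(iters):
        wbar = w + alpha * (w - w_prev)              # inertial extrapolation (gippa1)
        w_next = np.linalg.solve(M, wbar + lam * c)  # proximal step at wbar (gippa2)
        w_prev, w = w, w_next
    return w

# Toy usage: the unique solution of F(w) = 0 is w = Q^{-1} c.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A.T @ A + 0.1 * np.eye(5)   # symmetric positive definite, so F is strongly monotone
c = rng.standard_normal(5)
w = inertial_ppa_quadratic(Q, c, np.zeros(5))
print(np.linalg.norm(Q @ w - c))  # residual should be small
\end{verbatim}
The constant choice $\alpha_k \equiv 0.25 < 1/3$ in this sketch is consistent with the parameter conditions imposed in Theorem \ref{Theorem2} below.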
We will impose conditions on $\alpha_k$ to ensure global convergence of the inertial PPA framework \eqref{gippa}. Our convergence results are extensions of those in \cite{AA01}.
\begin{thm}\label{Theorem1} Assume that Assumptions \ref{Omega-ast-nonempty}, \ref{assumption-H-monotone} and \ref{M>0} hold. Let $\{w^k\}_{k=0}^\infty\subseteq \Re^n$ conform to Algorithm \eqref{gippa1}-\eqref{gippa2}, and suppose that the parameters $\{\alpha_k, \lambda_k\}_{k=0}^\infty$ satisfy, for all $k$, $0\leq \alpha_k \leq \alpha$ for some $\alpha \in [0,1)$ and $\lambda_k \geq \lambda$ for some $\lambda >0$. If \begin{equation}\label{cond-alp-k} \sum_{k=1}^\infty \alpha_k \Gnorm{w^k-w^{k-1}}^2 < \infty, \end{equation} then the sequence $\{w^k\}_{k=0}^\infty$ converges to some point in $\Omega^*$ as $k\rightarrow\infty$. \end{thm}
\begin{proof} First, we show that, for any $w^* \in \Omega^*$, $\lim_{k\rightarrow \infty} \Mnorm{w^k - w^*} $ exists. As a result, $\{w^k\}_{k=0}^\infty$ is bounded and must have a limit point. Then, we show that any limit point of $\{w^k\}_{k=0}^\infty$ must lie in $\Omega^*$. Finally, we establish the convergence of $\{w^k\}_{k=0}^\infty$ to a point in $\Omega^*$ as $k\rightarrow\infty$.
Let $w^*\in \Omega^*$ be arbitrarily chosen and $k\geq 0$. It follows from setting $w=w^* \in\Omega^*$ in \eqref{gippa2} and the $H$-monotonicity \eqref{H-monotonicity} of $F$ that \begin{eqnarray}\label{ineq1} \nonumber \lambda_k^{-1}\langle w^{k+1}-w^*, w^{k+1} - \bar w^k\rangle_G &\leq& \theta(w^*) - \theta(w^{k+1}) - \langle w^{k+1} - w^*, F(w^{k+1})\rangle \\ \nonumber &\leq& \theta(w^*) - \theta(w^{k+1}) - \langle w^{k+1} - w^*, F(w^*)\rangle - \|w^{k+1} - w^*\|_H^2 \\ &\leq& - \|w^{k+1} - w^*\|_H^2. \end{eqnarray} Define $\varphi_k := \Gnorm{w^k - w^*}^2$ and recall that $\bar w^k = w^k + \alpha_k(w^k-w^{k-1})$. Plugging the identities \begin{eqnarray*} 2\Gid{w^{k+1} - w^*}{w^{k+1} - w^k} &=& \varphi_{k+1} - \varphi_{k} + \Gnorm{w^{k+1}-w^k}^2, \\ 2\Gid{w^{k+1} - w^*}{w^{k} - w^{k-1}} &=& \varphi_{k} - \varphi_{k-1} + \Gnorm{w^{k}-w^{k-1}}^2 + 2\Gid{w^{k+1} - w^k}{w^{k}-w^{k-1}}, \end{eqnarray*} into \eqref{ineq1} and reorganizing, we obtain \begin{eqnarray} \label{ineq2.0} \nonumber \psi_k & :=& \varphi_{k+1} - \varphi_{k} -\alpha_k \left(\varphi_{k} - \varphi_{k-1}\right) \\ \nonumber &\leq& - \Gnorm{w^{k+1}-w^k}^2 + 2 \alpha_k \Gid{w^{k+1} - w^k}{w^{k}-w^{k-1}} + \alpha_k \Gnorm{w^{k}-w^{k-1}}^2 - 2 \lambda_k\|w^{k+1} - w^*\|_H^2 \\ \nonumber &=& - \Gnorm{w^{k+1}-\bar w^k}^2 + (\alpha_k^2 + \alpha_k) \Gnorm{w^{k}-w^{k-1}}^2 - 2 \lambda_k\|w^{k+1} - w^*\|_H^2 \\ \nonumber & \leq & - \Gnorm{w^{k+1}- \bar w^k}^2 + 2 \alpha_k \Gnorm{w^{k}-w^{k-1}}^2 - 2 \lambda_k\|w^{k+1} - w^*\|_H^2 \\ & \leq & - \Gnorm{w^{k+1}- \bar w^k}^2 + 2 \alpha_k \Gnorm{w^{k}-w^{k-1}}^2, \end{eqnarray} where the first inequality is due to \eqref{ineq1} and the second follows from $0\leq \alpha_k < 1$. Define \[\theta_k := \varphi_{k} - \varphi_{k-1} \text{~~and~~} \delta_k := 2\alpha_k \Gnorm{w^{k}-w^{k-1}}^2.\] Then, the inequality \eqref{ineq2.0} implies that $\theta_{k+1} \leq \alpha_k \theta_k + \delta_k \leq \alpha [\theta_k]_+ + \delta_k$, where $[t]_+ := \max\{t,0\}$ for $t\in \Re$. Therefore, we have \begin{eqnarray}\label{ineq+7} [\theta_{k+1}]_+ \leq \alpha [\theta_k]_+ + \delta_k \leq \alpha^{k+1} [\theta_0]_+ + \sum_{j=0}^{k} \alpha^j \delta_{k-j}. \end{eqnarray} Note that by our assumption $w^0=w^{-1}$. This implies that $\theta_0 = [\theta_0]_+ =0$ and $\delta_0=0$.
Therefore, it follows from \eqref{ineq+7} that \begin{eqnarray}\label{sum-th-k+finite} \sum_{k=0}^\infty [\theta_{k}]_+ \leq \frac{1}{1-\alpha} \sum_{k=0}^{\infty} \delta_{k} = \frac{1}{1-\alpha} \sum_{k=1}^{\infty} \delta_{k} < \infty. \end{eqnarray} Here the second inequality is due to the assumption \eqref{cond-alp-k}. Let $\gamma_k := \varphi_k - \sum_{j=1}^k [\theta_j]_+$. From \eqref{sum-th-k+finite} and $\varphi_k\geq 0$, it follows that $\gamma_k$ is bounded below. On the other hand, \[ \gamma_{k+1} = \varphi_{k+1} - [\theta_{k+1}]_+ - \sum_{j=1}^k [\theta_j]_+ \leq \varphi_{k+1} - \theta_{k+1} - \sum_{j=1}^k [\theta_j]_+ = \varphi_{k} - \sum_{j=1}^k [\theta_j]_+ = \gamma_k, \] i.e., $\gamma_k$ is nonincreasing. As a result, $\{\gamma_k\}_{k=0}^\infty$ converges as $k\rightarrow\infty$, and the following limit \[ \lim_{k\rightarrow\infty}\varphi_k = \lim_{k\rightarrow\infty} \left(\gamma_k + \sum_{j=1}^k [\theta_j]_+\right) = \lim_{k\rightarrow\infty} \gamma_k + \sum_{k=1}^\infty [\theta_k]_+ \] exists. That is, $\lim_{k\rightarrow \infty} \Gnorm{w^k - w^*} $ exists for any $w^*\in \Omega^*$. Furthermore, it follows from the second ``$\leq$" of \eqref{ineq2.0} and the definition of $\theta_k$ and $\delta_k$ that \begin{eqnarray}\label{ineq+4} \nonumber \Gnorm{w^{k+1}- \bar w^k}^2 + 2\lambda_k\|w^{k+1}-w^*\|_H^2 &\leq& \varphi_{k} - \varphi_{k+1} + \alpha_k \left(\varphi_{k} - \varphi_{k-1}\right) + \delta_k \\ &\leq& \varphi_{k} - \varphi_{k+1} + \alpha [\theta_k]_+ + \delta_k. \end{eqnarray} By taking sum over $k$ and noting that $\varphi_k \geq 0$, we obtain \begin{equation} \label{ch-add1} \sum_{k=1}^\infty \left(\|w^{k+1}-\bar{w}^k\|_G^2 + 2\lambda_k\|w^{k+1}-w^*\|_H^2 \right) \leq \varphi_{1} + \sum_{k=1}^\infty \left(\alpha [\theta_k]_+ + \delta_k\right) < \infty, \end{equation} where the second inequality follows from \eqref{sum-th-k+finite} and assumption \eqref{cond-alp-k}. Since $\lambda_k\geq \lambda >0$ for all $k$, it follows from \eqref{ch-add1} that \begin{equation} \label{ch-add2} \lim_{k\rightarrow \infty}\|w^k-w^*\|_H = 0. \end{equation} Recall that $M= G+H$. Thus, $\lim_{k\rightarrow \infty} \Mnorm{w^k - w^*}$ exists. Since $M$ is positive definite, it follows that $\{w^k\}_{k=0}^\infty$ is bounded and must have at least one limit point. Again from \eqref{ch-add1} we have \begin{eqnarray*} \lim_{k\rightarrow\infty}\Gnorm{w^{k+1}- \bar w^k} = 0. \end{eqnarray*} Thus, the positive semidefiniteness of $G$ implies that $\lim_{k\rightarrow \infty} G(w^{k+1}- \bar w^k) = 0$. On the other hand, for any fixed $w\in \Omega$, it follows from \eqref{gippa2} that \begin{eqnarray}\label{gippa2-k} \theta(w) - \theta(w^{k}) + \langle w-w^{k}, F(w^{k}) \rangle \geq \lambda_{k-1}^{-1}\langle w^{k}-w, G(w^{k} - \bar w^{k-1})\rangle. \end{eqnarray} Suppose that $w^\star$ is any limit point of $\{w^k\}_{k=0}^\infty$ and $w^{k_j} \rightarrow w^\star$ as $j\rightarrow \infty$. Since $\Omega$ is closed, $w^\star\in \Omega$. Furthermore, by taking the limit over $k=k_j \rightarrow \infty$ in \eqref{gippa2-k} and noting that $G(w^{k} - \bar w^{k-1})\rightarrow 0$ and $\lambda_{k-1} \geq \lambda >0$, we obtain \[ \theta(w) - \theta(w^\star) + \langle w-w^\star, F(w^\star) \rangle \geq 0. \] Since $w$ can vary arbitrarily in $\Omega$, we conclude that $w^\star\in \Omega^*$. That is, any limit point of $\{w^k\}_{k=0}^\infty$ must also lie in $\Omega^*$. Finally, we establish the uniqueness of limit points of $\{w^k\}_{k=0}^\infty$. 
Suppose that $w^*_1$ and $w^*_2$ are two limit points of $\{w^k\}_{k=0}^\infty$ and $\lim_{j\rightarrow \infty} w^{i_j} = w^*_1$, $\lim_{j\rightarrow \infty} w^{k_j} = w^*_2$. Since both limit points lie in $\Omega^*$ by the preceding argument, the limits $v_i := \lim_{k\rightarrow \infty} \|w^k - w^*_i\|_M$, $i=1,2$, exist by the first part of the proof. By taking the limit over $k=i_j\rightarrow \infty$ and $k=k_j\rightarrow \infty$ in the equality \[ \Mnorm{w^k - w^*_1}^2 - \Mnorm{w^k - w^*_2}^2 = \Mnorm{w^*_1 - w^*_2}^2 + 2\Mid{w^*_1 - w^*_2}{w^*_2 - w^k}, \] we obtain $v_1^2 - v_2^2 = - \Mnorm{w^*_1 - w^*_2}^2 = \Mnorm{w^*_1 - w^*_2}^2$. Thus, $\Mnorm{w^*_1 - w^*_2} = 0$. Since $M$ is positive definite, this implies that $w^*_1 = w^*_2$. Therefore, $\{w^k\}_{k=0}^\infty$ converges to some point in $\Omega^*$, and the proof of the theorem is complete. \end{proof}
We have the following remarks on the assumptions and results of Theorem \ref{Theorem1}. \begin{remark} In practice, it is not hard to select $\alpha_k$ online such that the condition \eqref{cond-alp-k} is satisfied. \end{remark} \begin{remark} If $\alpha_k=0$ for all $k$, then the condition \eqref{cond-alp-k} is obviously satisfied. In this case, we have reestablished the convergence of the classical PPA under the weaker condition that $G \in S^n_{+}$, provided that $\lambda_k\geq \lambda >0$ and $H+G\in S^n_{++}$, e.g., when $F$ is strongly monotone, i.e., $H\in S^n_{++}$. \end{remark} \begin{remark} Suppose that $H=0$ and $G\in S^n_{+}$, but $G \notin S^n_{++}$. Then, the sequence $\{w^k\}_{k=0}^\infty$ may not be well defined since \eqref{gippa2} does not necessarily have a solution in general. In the case that $\{w^k\}_{k=0}^\infty$ is indeed well defined (which is possible), the conclusion that $\lim_{k\rightarrow \infty} \Gnorm{w^k - w^*} $ exists for any $w^*\in \Omega^*$ still holds under condition \eqref{cond-alp-k}. However, since $G$ is only positive semidefinite, the boundedness of $\{w^k\}_{k=0}^\infty$ cannot be guaranteed. If a limit point $w^\star$ of $\{w^k\}_{k=0}^\infty$ does exist, then the conclusion $w^\star \in \Omega^*$ still holds. Moreover, if $w^\star_1$ and $w^\star_2$ are any two limit points of $\{w^k\}_{k=0}^\infty$, then it holds that $Gw^\star_1=Gw^\star_2$. \end{remark}
In the following theorem, we remove the assumption \eqref{cond-alp-k} by assuming that the sequence $\{\alpha_k\}_{k=0}^\infty$ satisfies some additional easily implementable conditions. Moreover, we establish an $o(1/k)$ convergence rate result for the general inertial proximal point method \eqref{gippa}. The trick used here to improve the convergence rate from $O(1/k)$ to $o(1/k)$ seems to have been first introduced in \cite{DLY13,DY14}. To the best of our knowledge, no convergence rate result is known in the literature for inertial type proximal point methods without stronger assumptions. \begin{thm}\label{Theorem2} Assume that Assumptions \ref{Omega-ast-nonempty}, \ref{assumption-H-monotone} and \ref{M>0} hold. Suppose that the parameters $\{\alpha_k, \lambda_k\}_{k=0}^\infty$ satisfy, for all $k$, $0\leq \alpha_k \leq \alpha_{k+1}\leq \alpha < \frac{1}{3}$ and $\lambda_k\geq \lambda$ for some $\lambda>0$. Let $\{w^k\}_{k=0}^\infty$ be the sequence generated by Algorithm \eqref{gippa1}-\eqref{gippa2}. Then, we have the following results.
\begin{enumerate} \item $\{w^k\}_{k=0}^\infty$ converges to some point in $\Omega^*$ as $k\rightarrow\infty$; \item For any $w^*\in\Omega^*$ and positive integer $k$, it holds that \begin{equation} \label{ineq+2} \min_{0\leq i\leq k-1} \|w^{i+1}- \bar w^i\|_G^2 \leq \frac{\left(1 + \frac{2}{1-3\alpha}\right)\Gnorm{w^0 - w^*}^2}{k}. \end{equation} Moreover, it holds as $k\rightarrow\infty$ that \begin{equation} \label{ineq+6} \min_{0\leq i\leq k-1} \|w^{i+1}- \bar w^i\|_G^2 = o\left(\frac{1}{k}\right). \end{equation} \end{enumerate} \end{thm}
\begin{proof} Let $w^*\in \Omega^*$ be arbitrarily fixed and, for all $k\geq 0$, retain the notation $\varphi_k = \Gnorm{w^k - w^*}^2$, \[ \psi_k = \varphi_{k+1} - \varphi_{k} -\alpha_k \left(\varphi_{k} - \varphi_{k-1}\right) \text{~~and~~} \theta_k = \varphi_k - \varphi_{k-1}. \] It follows from the first ``$\leq$'' in \eqref{ineq2.0} and $\lambda_k\geq 0$ that \begin{eqnarray} \label{ineq5} \nonumber \psi_k &\leq& - \Gnorm{w^{k+1}- w^k}^2 + 2\alpha_k \Gid{w^{k+1}- w^k}{w^{k}- w^{k-1}} + \alpha_k \Gnorm{w^{k}-w^{k-1}}^2 \\ \nonumber &\leq& - \Gnorm{w^{k+1}- w^k}^2 + \alpha_k \left(\Gnorm{w^{k+1}- w^k}^2 + \Gnorm{w^{k}- w^{k-1}}^2\right) + \alpha_k \Gnorm{w^{k}-w^{k-1}}^2 \\ &=& - (1-\alpha_k) \|w^{k+1}- w^k\|_G^2 + 2\alpha_k \Gnorm{w^{k}- w^{k-1}}^2, \end{eqnarray} where the second ``$\leq$" follows from the Cauchy-Schwarz inequality. Define \[ \mu_k := \varphi_k - \alpha_k \varphi_{k-1} + 2\alpha_k \Gnorm{w^{k}- w^{k-1}}^2.\] From $0\leq \alpha_k \leq \alpha_{k+1} \leq \alpha < \frac{1}{3}$, the fact that $\varphi_k\geq 0$ and \eqref{ineq5}, we have \begin{eqnarray} \label{ineq7} \nonumber \mu_{k+1} - \mu_k &=& \varphi_{k+1} - \alpha_{k+1} \varphi_{k} + 2\alpha_{k+1} \Gnorm{w^{k+1}- w^{k}}^2 - \left(\varphi_k - \alpha_k \varphi_{k-1} + 2\alpha_k \Gnorm{w^{k}- w^{k-1}}^2\right)\\ \nonumber &\leq& \psi_k + 2\alpha_{k+1} \Gnorm{w^{k+1}- w^{k}}^2 - 2\alpha_k \Gnorm{w^{k}- w^{k-1}}^2\\ \nonumber &\leq& - (1-\alpha_k) \|w^{k+1}- w^k\|_{G}^2 + 2 \alpha_{k+1} \Gnorm{w^{k+1}- w^{k}}^2 \\ \nonumber &\leq& - (1-3\alpha ) \|w^{k+1}- w^k\|_{G}^2 \\ & \leq& 0. \end{eqnarray} Thus, $\mu_{k+1} \leq \mu_k$ for all $k\geq 0$. Note that $w^0=w^{-1}$ by our assumption. It follows from the definitions of $\mu_k$ and $\varphi_k$ that $\mu_0 = (1-\alpha_0) \varphi_0 \leq \varphi_0 := \Gnorm{w^0-w^*}^2$. Therefore, we have \begin{eqnarray}\label{ineq+1} - \alpha \varphi_{k-1} \leq \varphi_k - \alpha \varphi_{k-1} \leq \varphi_k - \alpha_k \varphi_{k-1} \leq \mu_k \leq \mu_0 \leq \varphi_0. \end{eqnarray} Applying \eqref{ineq+1} recursively, we obtain \begin{equation} \label{ineq8} \varphi_k \leq \alpha \varphi_{k-1} + \varphi_0 \leq \alpha^k \varphi_0 + \varphi_0 \sum_{j=0}^{k-1}\alpha^j \leq \alpha^k \varphi_0 + \frac{\varphi_0}{1-\alpha}. \end{equation} The second-to-last ``$\leq$" in \eqref{ineq7} implies that $(1-3\alpha ) \|w^{k+1}- w^k\|_G^2 \leq \mu_k - \mu_{k+1}$ for $k\geq 0$. Together with \eqref{ineq+1} and \eqref{ineq8}, this implies \begin{equation} \label{ineq+3} (1-3\alpha ) \sum_{j=0}^k \|w^{j+1}- w^j\|_G^2 \leq \mu_0 - \mu_{k+1} \leq \varphi_0 + \alpha \varphi_k \leq \alpha^{k+1}\varphi_0 + \frac{\varphi_0}{1-\alpha} \leq 2\varphi_0, \end{equation} where the second inequality is due to $\mu_0\leq \varphi_0$ and $- \alpha \varphi_{k} \leq \mu_{k+1}$, the next one follows from \eqref{ineq8}, and the last one is due to $\alpha < 1/3$.
By taking the limit $k\rightarrow \infty$, we obtain \begin{equation}\label{def:C1} \frac{1}{2}\sum_{k=1}^\infty \delta_k = \sum_{k=1}^\infty \alpha_k\|w^{k}- w^{k-1}\|_G^2 \leq \alpha\sum_{k=1}^\infty \|w^{k}- w^{k-1}\|_G^2 \leq \frac{2\varphi_0\alpha}{ 1-3\alpha } := C_1 < \infty. \end{equation} The convergence of $\{w^k\}_{k=0}^\infty$ to a solution point in $\Omega^*$ follows from the proof of Theorem \ref{Theorem1}. It follows from \eqref{ineq+4} that, for $i\geq 0$, $ \Gnorm{w^{i+1}- \bar w^i}^2 \leq \varphi_{i} - \varphi_{i+1} + \alpha [\theta_i]_+ + \delta_i $, from which we obtain \begin{eqnarray}\label{ineq+5} \sum_{i=0}^{k-1} \Gnorm{w^{i+1}- \bar w^i}^2 \leq \varphi_{0} - \varphi_{k} + \alpha \sum_{i=1}^{k-1} [\theta_i]_+ + \sum_{i=1}^{k-1} \delta_i \leq \varphi_{0} + \alpha C_2 + 2C_1, \end{eqnarray} where $C_1$ is defined in \eqref{def:C1} and $C_2$ is defined as \[ C_2 := \frac{ 2C_1}{1-\alpha} \geq \frac{1}{1-\alpha} \sum_{i=1}^{\infty} \delta_{i} \geq \sum_{i=1}^{\infty} [\theta_i]_+. \] Here the first ``$\geq$" follows from the definition of $C_1$ in \eqref{def:C1} and the second one follows from \eqref{sum-th-k+finite}. Direct calculation shows that \begin{eqnarray}\label{calculateC} \varphi_{0} + \alpha C_2 + 2C_1 = \left[1 + \left(\frac{ 2\alpha }{1-\alpha} + 2 \right) \frac{2 \alpha}{ 1-3\alpha }\right]\varphi_0 \leq \left(1 + \frac{2}{1-3\alpha}\right)\varphi_0, \end{eqnarray} where the ``$\leq$" follows from $\alpha<1/3$. The estimate \eqref{ineq+2} follows immediately from \eqref{ineq+5} and \eqref{calculateC}. The $o\left(1/k\right)$ result \eqref{ineq+6} follows from \begin{equation}\label{y-add1} \frac{k-1}{2} \min_{0\leq i\leq k-1} \Gnorm{w^{i+1}- \bar w^i}^2 \leq \sum_{i= \lfloor {k-1 \over 2} \rfloor}^{k-1} \Gnorm{w^{i+1}- \bar w^i}^2, \end{equation} where $\lfloor {(k-1)/2} \rfloor$ denotes the greatest integer no greater than $(k-1)/2$, and the fact that the right-hand-side of \eqref{y-add1} converges to $0$ as $k\rightarrow\infty$ because $\sum_{i=0}^{\infty}\Gnorm{w^{i+1}- \bar w^i}^2 < \infty$. \end{proof} \begin{remark} Note that $w^{k+1}$ is obtained via a proximal point step from $\bar w^k$. Thus, the equality $w^{k+1} = \bar w^k$ implies that $w^{k+1}$ is already a solution of \eqref{mVI} (even if $G$ is only positive semidefinite, see \eqref{gippa2}). In this sense, the error estimate given in \eqref{ineq+2} can be viewed as a convergence rate result of the general inertial proximal point method \eqref{gippa}. In particular, \eqref{ineq+2} implies that, to obtain an $\varepsilon$-optimal solution in the sense that $\Gnorm{w^{k+1} -\bar w^k}^2 \leq \varepsilon$, the upper bound of iterations required by \eqref{gippa} is \[ \frac{\left(1 + \frac{2}{1-3\alpha}\right)\Gnorm{w^0 - w^*}^2}{\varepsilon}. \] \end{remark} \begin{remark} In general Hilbert space, weak convergence of $\{w^k\}_{k=0}^\infty$ to a point in $\Omega^*$ can still be guaranteed under similar assumptions. The analysis is similar to that of Theorems \ref{Theorem1} and \ref{Theorem2} by using a well-known result, called Opial's lemma \cite{Opi67}, in functional analysis of Banach space. \end{remark} \section{Inertial linearized ADMM} In this section, we prove that under suitable conditions the linearized ADMM is an application of PPA with weighting matrix $G \in S^n_{++}$. As byproducts of this result, we establish convergence, ergodic and nonergodic convergence rate results for linearized ADMM within the PPA framework. 
Furthermore, an inertial version of the linearized ADMM is proposed, whose convergence is guaranteed by Theorems \ref{Theorem1} and \ref{Theorem2}.
Let $f: \Re^{n_1}\rightarrow\Re$ and $g: \Re^{n_2}\rightarrow\Re$ be closed convex functions, and let ${\cal X}\subseteq \Re^{n_1}$ and ${\cal Y}\subseteq \Re^{n_2}$ be closed convex sets. Consider the linearly constrained separable convex optimization problem of the form \begin{equation}\label{min-(f+g)} \min_{x,y} \left\{ f(x) + g(y): \ \hbox{s.t. } Ax + By = b, x\in {\cal X}, y\in {\cal Y}\right\}, \end{equation} where $A\in \Re^{m\times n_1}$, $B\in \Re^{m\times n_2}$ and $b\in \Re^m$ are given. We assume that the set of KKT points of \eqref{min-(f+g)} is nonempty. Under very mild assumptions, see, e.g., \cite{Eck89}, \eqref{min-(f+g)} is equivalent to the mixed variational inequality problem \eqref{mVI} with $\Omega$, $w$, $\theta$ and $F$ given, respectively, by $\Omega := {\cal X} \times {\cal Y} \times \Re^m$, \begin{equation}\label{def-w-F-theta} w = \left( \begin{array}{c} x \\ y \\ p \\ \end{array} \right), \quad \theta(w) := f(x) + g(y), \quad F(w) = \left( \begin{array}{ccc} 0 & 0 & -A^T \\ 0 & 0 & -B^T \\ A & B & 0 \\ \end{array} \right) \left( \begin{array}{c} x \\ y \\ p \\ \end{array} \right) - \left( \begin{array}{c} 0 \\ 0 \\ b \\ \end{array} \right). \end{equation} Since the coefficient matrix defining $F$ is skew-symmetric, $F$ is monotone, and thus Assumption \ref{assumption-H-monotone} is satisfied with $H=0$. Let $\beta >0$ and define the Lagrangian and the augmented Lagrangian functions, respectively, as \begin{subequations}\label{def:L-AL} \begin{eqnarray}\label{def:L} {\cal L}(x,y,p) &:=& f(x) + g(y) - \langle p, Ax + By - b\rangle,\\ \label{def:AL} {\cal \bar L}(x,y,p) &:=& {\cal L}(x,y,p) + \frac{\beta}{2}\|Ax+By-b\|^2. \end{eqnarray} \end{subequations} Given $(y^k,p^k)$, the classical ADMM in ``$x-p-y$" order iterates as follows: \begin{subequations}\label{ADM-xpy} \begin{eqnarray} \label{ADM-xpy-x} x^{k+1} &=& \arg\min_{x\in {\cal X}} {\cal \bar L}(x,y^k,p^k), \\ \label{ADM-xpy-p} p^{k+1} &=& p^k - \beta (Ax^{k+1}+By^k-b), \\ \label{ADM-xpy-y} y^{k+1} &=& \arg\min_{y\in {\cal Y}} {\cal \bar L}(x^{k+1},y,p^{k+1}). \end{eqnarray} \end{subequations} Note that here we still use the latest value of each variable in each step of the alternating computation. Therefore, it is equivalent to the commonly seen ADMM in ``$y-x-p$" order in a cyclic sense. We use the order ``$x-p-y$" because the resulting algorithm can be easily explained as a PPA-like algorithm applied to the primal-dual optimality conditions, see \cite{CGHY13}. Given $(x^k,y^k,p^k)$ and two parameters $\tau, \eta>0$, the iteration of the linearized ADMM in ``$x-p-y$" order appears as \begin{subequations} \label{LADM-xpy} \begin{eqnarray} \label{LADM-xpy-u} u^k &=& A^T(Ax^k+By^k-b),\\ \label{LADM-xpy-x} x^{k+1} &=& \arg\min_{x\in {\cal X} } f(x) - \langle p^k, Ax \rangle + \frac{\beta}{2\tau}\|x - (x^k -\tau u^k) \|^2, \\ \label{LADM-xpy-p} p^{k+1} &=& p^k - \beta (Ax^{k+1}+By^k-b), \\ \label{LADM-xpy-v} v^k &=& B^T(Ax^{k+1}+By^k-b),\\ \label{LADM-xpy-y} y^{k+1} &=& \arg\min_{y\in {\cal Y} } g(y) - \langle p^{k+1}, By\rangle + \frac{\beta}{2\eta}\|y - (y^k -\eta v^k) \|^2. \end{eqnarray} \end{subequations} In the following, we prove that under suitable assumptions $(x^{k+1},y^{k+1},p^{k+1})$ generated by \eqref{LADM-xpy} conforms to the classical PPA with an appropriate symmetric and positive definite weighting matrix $G$.
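Purely as an illustration, the following is a minimal, non-authoritative Python/NumPy sketch of the linearized ADMM \eqref{LADM-xpy} for the particular instance $f(x) = \frac{1}{2}\|x\|^2$, $g(y) = \mu\|y\|_1$, ${\cal X} = \Re^{n_1}$, ${\cal Y} = \Re^{n_2}$, in which both subproblems admit closed form solutions. The parameter choices and names are illustrative only and this is not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def soft(z, t):
    # Componentwise soft-thresholding: the proximal mapping of t*||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ladmm(A, B, b, mu, beta=1.0, iters=500):
    # Sketch of (LADM-xpy) for f(x) = 0.5*||x||^2 and g(y) = mu*||y||_1.
    m, n1 = A.shape
    n2 = B.shape[1]
    tau = 0.99 / np.linalg.norm(A, 2) ** 2   # tau < 1/rho(A^T A)
    eta = 0.99 / np.linalg.norm(B, 2) ** 2   # eta < 1/rho(B^T B)
    x, y, p = np.zeros(n1), np.zeros(n2), np.zeros(m)
    for _ in range(iters):
        u = A.T @ (A @ x + B @ y - b)                       # (LADM-xpy-u)
        # x-subproblem: closed form solution of a strongly convex quadratic
        x = (A.T @ p + (beta / tau) * (x - tau * u)) / (1.0 + beta / tau)
        p = p - beta * (A @ x + B @ y - b)                  # (LADM-xpy-p)
        v = B.T @ (A @ x + B @ y - b)                       # (LADM-xpy-v)
        # y-subproblem: solved by soft-thresholding
        y = soft(y - eta * v + (eta / beta) * (B.T @ p), mu * eta / beta)
    return x, y, p
\end{verbatim}
Here $\tau$ and $\eta$ are chosen strictly below $1/\rho(A^TA)$ and $1/\rho(B^TB)$, matching the conditions used in the convergence results below; the corresponding inertial variant is obtained by simply applying the same updates to extrapolated points.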
\begin{thm}\label{Theorem-LADM-mVI-k+1} Given $w^k = (x^k,y^k,p^k) \in \Omega$, then $w^{k+1} = (x^{k+1},y^{k+1},p^{k+1})$ generated by the linearized ADMM framework \eqref{LADM-xpy} satisfies \begin{equation}\label{LADM-mVI-k+1} w^{k+1} \in \Omega, \; \theta(w) - \theta(w^{k+1}) + \langle w - w^{k+1}, F(w^{k+1}) + G (w^{k+1} - w^k)\rangle \geq 0, \; \forall w\in \Omega, \end{equation} where \begin{equation}\label{G-LADM} G = \left( \begin{array}{ccc} \beta \left(\frac{1}{\tau}I - A^TA\right) & {\bf 0} & {\bf 0} \\ {\bf 0} & \frac{\beta}{\eta} I & -B^T \\ {\bf 0} & -B & \frac{1}{\beta}I \\ \end{array} \right). \end{equation} Here $I$ denotes identity matrix of appropriate size. \end{thm} \begin{proof} The optimality conditions of \eqref{LADM-xpy-x} and \eqref{LADM-xpy-y} imply that \begin{eqnarray*} f(x) - f(x^{k+1}) + (x-x^{k+1})^T \left\{-A^T p^k + \frac{\beta}{\tau} (x^{k+1} - x^k) + \beta A^T(Ax^k+By^k-b)\right\} \geq 0, \; \forall x\in {\cal X} ,\\ g(y) - g(y^{k+1}) + (y-y^{k+1})^T \left\{-B^T p^{k+1} + \frac{\beta}{\eta}(y^{k+1} - y^k) + \beta B^T(Ax^{k+1}+By^k-b)\right\} \geq 0, \; \forall y\in {\cal Y} . \end{eqnarray*} By noting \eqref{LADM-xpy-p}, the above relations can be rewritten as \begin{subequations}\label{LADM-op-xy} \begin{eqnarray} \label{LADM-op-x} f(x) - f(x^{k+1}) + (x-x^{k+1})^T \left\{-A^Tp^{k+1} + \beta \left(\frac{1}{\tau}I - A^TA\right) (x^{k+1} - x^k) \right\} \geq 0, \; \forall x\in {\cal X} ,\\ \label{LADM-op-y} g(y) - g(y^{k+1}) + (y-y^{k+1})^T \left\{ -B^T p^{k+1} + \frac{\beta}{\eta}(y^{k+1} - y^k) - B^T (p^{k+1} - p^k)\right\} \geq 0, \; \forall y\in {\cal Y} . \end{eqnarray} \end{subequations} Note that \eqref{LADM-xpy-p} can be equivalently represented as \begin{equation}\label{LADM-op-p} (p- p^{k+1})^T \left\{(Ax^{k+1}+By^{k+1}-b) - B (y^{k+1} - y^k) + \frac{1}{\beta} (p^{k+1} - p^k) \right\} \geq 0, \;\forall p\in\Re^m. \end{equation} By the notation defined in \eqref{def-w-F-theta}, we see that the addition of \eqref{LADM-op-x}, \eqref{LADM-op-y} and \eqref{LADM-op-p} yields \eqref{LADM-mVI-k+1}, with $G$ defined in \eqref{G-LADM}. \end{proof} \begin{remark} Clearly, the matrix $G$ defined in \eqref{G-LADM} is symmetric and positive definite provided that the parameters $\tau$ and $\eta$ are reasonably small. In particular, $G$ is positive definite if $\tau < 1/\rho(A^TA)$ and $\eta<1/\rho(B^TB)$. Using similar analysis, it is easy to verify that $w^{k+1} = (x^{k+1},y^{k+1},p^{k+1})$ generated by the ADMM framework \eqref{ADM-xpy} conforms to \eqref{LADM-mVI-k+1} with $G$ defined by \begin{equation}\label{G-ADM} G = \left( \begin{array}{ccc} {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & \beta B^TB & -B^T \\ {\bf 0} & -B & \frac{1}{\beta}I \\ \end{array} \right), \end{equation} which is clearly never positive definite. See \cite{CGHY13} for details. \begin{comment} If we merely linearize the $x$ subproblem in \eqref{ADM-xpy}, then the resulting $w^{k+1}$ satisfies \eqref{LADM-mVI-k+1} with $G$ defined by \begin{equation}\label{G-LxADM} G = \left( \begin{array}{ccc} \beta \left(\frac{1}{\tau}I - A^TA\right) & {\bf 0} & {\bf 0} \\ {\bf 0} & \beta B^TB & -B^T \\ {\bf 0} & -B & \frac{1}{\beta}I \\ \end{array} \right). \end{equation} It is clear that the matrices defined in \eqref{G-ADM} and \eqref{G-LxADM} are never positive definite. \end{comment} \end{remark} For the linearized ADMM framework \eqref{LADM-xpy}, we have the following convergence results. Their proofs are given in the Appendix for convenience of readers. 
Similar convergence analysis and complexity results can be found in \cite{HY12a,HY12b}, and also \cite{ST14}, where a unified analysis of the proximal method of multipliers is given. \begin{thm}\label{Theorem-convergence-LADM} Assume that $0 < \tau < 1/\rho(A^TA)$ and $0<\eta < 1/\rho(B^TB)$. Let $\{w^k = (x^k,y^k,p^k)\}_{k=0}^\infty$ be generated by the linearized ADMM framework \eqref{LADM-xpy} from any starting point $w^0 = (x^0,y^0,p^0)$. The following results hold. \begin{enumerate} \item The sequence $\{w^k = (x^k,y^k,p^k)\}_{k=0}^\infty$ converges to a solution of \eqref{mVI}, i.e., there exists $w^\star = (x^\star,y^\star,p^\star) \in\Omega^*$ such that $\lim_{k\rightarrow\infty}w^k = w^\star$. Moreover, $(x^\star,y^\star) $ is a solution of \eqref{min-(f+g)}. \item For any fixed integer $k>0$, define $\bar w^k := \frac{1}{k+1} \sum_{i=0}^k w^{i+1}$. Then, it holds that \begin{equation}\label{ergodic-rate} \bar w^k\in\Omega, \; \theta(w) - \theta(\bar w^k) + ( w - \bar w^k)^T F(w) \geq - \frac{\|w - w^0\|_G^2}{2(k+1)}, \; \forall w\in \Omega, \end{equation} or, equivalently, \begin{equation}\label{saddle-conditions-approximate} \bar w^k = (\bar x^k, \bar y^k, \bar p^k)\in\Omega, \; {\cal L}(\bar x^k,\bar y^k,p) - {\cal L}(x,y,\bar p^k) \leq \frac{\|w - w^0\|_G^2}{2(k+1)}, \; \forall w = (x,y,p) \in \Omega. \end{equation} Here ${\cal L}$ is the Lagrangian function defined in \eqref{def:L}. \item After $k>0$ iterations, we have \begin{equation}\label{non-ergodic-rate} \| w^{k} - w^{k-1}\|_G^2 \leq \frac{\|w^0 - w^*\|_G^2} {k}. \end{equation} Moreover, it holds as $k\rightarrow\infty$ that \begin{eqnarray}\label{non-ergodic-rate-o} \|w^k - w^{k-1}\|_G^2 = o\left(1/k\right). \end{eqnarray} \end{enumerate} \end{thm} \begin{remark} It is not hard to show that the set of solutions $\Omega^*$ of the mixed VI problem \eqref{mVI} can be expressed as the intersection of $\Omega_w := \left\{ \bar w\in\Omega \ |\ \theta(w) - \theta(\bar w) + (w-\bar w)^TF(w) \geq 0 \right\}$ for all $w\in \Omega$, i.e., \[\Omega^* = \bigcap_{w\in \Omega} \Omega_w = \bigcap_{w\in \Omega} \left\{ \bar w\in\Omega \ |\ \theta(w) - \theta(\bar w) + (w-\bar w)^TF(w) \geq 0 \right\}.\] See, e.g., \cite{FP03book}. Therefore, the result \eqref{ergodic-rate} essentially assures that after $k$ iterations an approximate solution $\bar w^k$ with accuracy $O(1/k)$ can be found. On the other hand, it is easy to show that $w^* = (x^*,y^*,p^*) \in \Omega^*$ if and only if \begin{equation}\label{saddle-conditions} {\cal L}(x^*,y^*,p) - {\cal L}(x,y,p^*) \leq 0, \; \forall w = (x,y,p) \in \Omega. \end{equation} Thus, \eqref{saddle-conditions-approximate} can be viewed as an approximation to the optimality condition \eqref{saddle-conditions}. Since $\bar w^k$ is the average of all the points generated in the first $(k+1)$ iterations, the result \eqref{ergodic-rate} or \eqref{saddle-conditions-approximate} is usually called an ergodic convergence rate. \end{remark} \begin{remark} It is easy to see from \eqref{LADM-mVI-k+1} that $w^{k+1}$ must be a solution if $w^{k+1} = w^k$. As such, the difference of two consecutive iterations can be viewed in some sense as a measure of how close the current point is to the solution set. Therefore, the result \eqref{non-ergodic-rate} estimates the convergence rate of $w^k$ to the solution set using the measure $\|w^k-w^{k-1}\|_G^2$. 
\end{remark} \begin{remark} We note that all the results given in Theorem \ref{Theorem-convergence-LADM} remain valid if the conditions on $\tau$ and $\eta$ are relaxed to $0 < \tau \leq 1/\rho(A^TA)$ and $0<\eta \leq 1/\rho(B^TB)$, respectively. The proof is a little more involved and we refer interested readers to \cite{FPST13,Chen12Thesis}. \end{remark}
Now we state the inertial version of the linearized ADMM, which is new to the best of our knowledge. Given $\beta, \tau, \eta>0$, a sequence $\{\alpha_k\geq 0\}_{k=0}^{\infty}$, $(x^k,y^k,p^k)$ and $(x^{k-1},y^{k-1},p^{k-1})$, the inertial linearized ADMM iterates as follows: \begin{subequations} \label{iLADM-xpy} \begin{eqnarray} \label{iLADM-xpy-bar-p} (\bar x^k,\bar y^k,\bar p^k) &=& (x^k,y^k,p^k) + \alpha_k (x^k-x^{k-1},y^k-y^{k-1},p^k-p^{k-1}), \\ \label{iLADM-xpy-u} u^k &=& A^T(A\bar x^k+B\bar y^k-b),\\ \label{iLADM-xpy-x} x^{k+1} &=& \arg\min_{x\in {\cal X} } f(x) - \langle \bar p^k, Ax \rangle + \frac{\beta}{2\tau}\|x - (\bar x^k -\tau u^k) \|^2, \\ \label{iLADM-xpy-p} p^{k+1} &=& \bar p^k - \beta (Ax^{k+1}+B\bar y^k-b), \\ \label{iLADM-xpy-v} v^k &=& B^T(Ax^{k+1}+B\bar y^k-b),\\ \label{iLADM-xpy-y} y^{k+1} &=& \arg\min_{y\in {\cal Y} } g(y) - \langle p^{k+1}, By\rangle + \frac{\beta}{2\eta}\|y - (\bar y^k -\eta v^k) \|^2. \end{eqnarray} \end{subequations} The following convergence result is a consequence of Theorems \ref{Theorem2} and \ref{Theorem-LADM-mVI-k+1}. \begin{thm}\label{Theorem-iLADM-xpy} Let $G$ be defined in \eqref{G-LADM} and $\{(x^k,y^k,p^k)\}_{k=0}^\infty\subseteq \Re^n$ be generated by \eqref{iLADM-xpy} from any starting point $(x^0,y^0,p^0)=(x^{-1},y^{-1},p^{-1})$. Suppose that $0<\tau<1/\rho(A^TA)$, $0<\eta<1/\rho(B^TB)$ and $\{\alpha_k\}_{k=0}^\infty$ satisfies, for all $k$, $0\leq \alpha_k \leq \alpha_{k+1}\leq \alpha < \frac{1}{3}$. Then, the sequence $\{(x^k,y^k,p^k)\}_{k=0}^\infty$ converges to some point in $\Omega^*$, the set of solutions of \eqref{mVI}, as $k\rightarrow\infty$. Moreover, it holds that \begin{equation} \label{ineq+3forLADMM} \min_{0\leq i\leq k-1} \|(x^{i+1},y^{i+1},p^{i+1}) - (\bar x^i,\bar y^i,\bar p^i)\|_G^2 = o\left(\frac{1}{k}\right). \end{equation} \end{thm}
\section{Numerical Results} In this section, we present numerical results to compare the performance of the linearized ADMM \eqref{LADM-xpy} (abbreviated as LADMM) and the proposed inertial linearized ADMM \eqref{iLADM-xpy} (abbreviated as iLADMM). Both algorithms were implemented in MATLAB. All the experiments were performed with Microsoft Windows 8 and MATLAB v7.13 (R2011b), running on a 64-bit Lenovo laptop with an Intel Core i7-3667U CPU at 2.00 GHz and 8 GB of memory.
\subsection{Compressive principal component pursuit} In our experiments, we focused on the compressive principal component pursuit problem \cite{WGMM13}, which aims to recover low-rank and sparse components from compressive or incomplete measurements. Let ${\cal A}: \Re^{m\times n} \rightarrow \Re^q$ be a linear operator, and let $L_0$ and $S_0$ be, respectively, low-rank and sparse matrices of size $m\times n$. The incomplete measurements are given by $b = {\cal A}(L_0+S_0)$.
Under certain technical conditions, such as $L_0$ is $\mu$-incoherent, the support of $S_0$ is randomly distributed with nonzero probability $\rho$ and the signs of $S_0$ conform to Bernoulli distribution, it was proved in \cite{WGMM13} that the low-rank and the sparse components $L_0$ and $S_0$ can be exactly recovered with high probability via solving the convex optimization problem \begin{equation}\label{prob:cpca} \min_{L,S} \left\{\|L\|_* + \lambda \|S\|_1: \; \hbox{s.t. } {\cal A}(L+S) = b\right\}, \end{equation} as long as the range space of the adjoint operator ${\cal A}^*$ is randomly distributed according to the Haar measure and its dimension $q$ is in the order $O\left((\rho mn + mr) \log^2m\right)$. Here $\lambda = 1/\sqrt{m}$ is a constant, $\|L\|_*$ and $\|S\|_1$ denote the nuclear norm of $L$ (sum of all singular values) and the $\ell_1$ norm of $S$ (sum of absolute values of all components), respectively. Note that to determine a rank $r$ matrix, it is sufficient to specify $(m+n-r)r$ elements. Let the number of nonzeros of $S_0$ be denoted by $\mathrm{nnz}(S_0)$. Without considering the distribution of the support of $S_0$, we define the \emph{degree of freedom} of the pair $(L_0,S_0)$ by \begin{equation}\label{def:dof} \mathrm{dof} := (m+n-r)r + \mathrm{nnz}(S_0). \end{equation} The augmented Lagrangian function of \eqref{prob:cpca} is given by \begin{equation*} {\cal \bar L}(L, S, p) := \|L\|_* + \lambda \|S\|_1 - \langle p, {\cal A}(L+S) - b\rangle + \frac{\beta}{2} \|{\cal A}(L+S) - b\|^2. \end{equation*} One can see that the minimization of ${\cal \bar L}$ with respect to either $L$ or $S$, with the other two variables being fixed, does not have closed form solution. To avoid inner loop for iteratively solving ADMM-subproblems, the linearized ADMM framework \eqref{LADM-xpy} and its inertial version \eqref{iLADM-xpy} can obviously be applied. Note that it is necessary to linearize both ADMM-subproblems in order to avoid inner loops. Though the iterative formulas of LADMM and inertial LADMM for solving \eqref{prob:cpca} can be derived very easily based on \eqref{LADM-xpy} and \eqref{iLADM-xpy}, we elaborate them below for clearness and subsequent references. Let $(L^k,S^k,p^k)$ be given. The LADMM framework \eqref{LADM-xpy} for solving \eqref{prob:cpca} appears as \begin{subequations} \label{LADM-LyS} \begin{eqnarray} \label{LADM-LyS-U} U^k &=& {\cal A}^*({\cal A} (L^k+ S^k)-b),\\ \label{LADM-LyS-L} L^{k+1} &=& \arg\min_{L} \|L\|_* - \langle p^k, {\cal A}(L) \rangle + \frac{\beta}{2\tau}\|L - (L^k -\tau U^k) \|_F^2, \\ \label{LADM-LyS-y} p^{k+1} &=& p^k - \beta ({\cal A} (L^{k+1}+S^k)-b), \\ \label{LADM-LyS-V} V^k &=& {\cal A}^*({\cal A} (L^{k+1}+S^k)-b),\\ \label{LADM-LyS-S} S^{k+1} &=& \arg\min_{S} \lambda\|S\|_1 - \langle p^{k+1}, {\cal A}(S)\rangle + \frac{\beta}{2\eta}\|S - (S^k -\eta V^k) \|_F^2. 
\end{eqnarray} \end{subequations} The inertial LADMM framework \eqref{iLADM-xpy} for solving \eqref{prob:cpca} appears as \begin{subequations} \label{iLADM-LyS} \begin{eqnarray} \label{iLADM-LyS-bar} (\bar L^k,\bar S^k,\bar p^k) &=& (L^k, S^k, p^k) + \alpha_k (L^k - L^{k-1}, S^k - S^{k-1}, p^k - p^{k-1}), \\ \label{iLADM-LyS-U} U^k &=& {\cal A}^*({\cal A} (\bar L^k+ \bar S^k)-b),\\ \label{iLADM-LyS-L} L^{k+1} &=& \arg\min_{L} \|L\|_* - \langle \bar p^k, {\cal A}(L) \rangle + \frac{\beta}{2\tau}\|L - (\bar L^k -\tau U^k) \|_F^2, \\ \label{iLADM-LyS-y} p^{k+1} &=& \bar p^k - \beta ({\cal A} (L^{k+1}+\bar S^k)-b), \\ \label{iLADM-LyS-V} V^k &=& {\cal A}^*({\cal A} (L^{k+1}+\bar S^k)-b),\\ \label{iLADM-LyS-S} S^{k+1} &=& \arg\min_{S} \lambda\|S\|_1 - \langle p^{k+1}, {\cal A}(S)\rangle + \frac{\beta}{2\eta}\|S - (\bar S^k -\eta V^k) \|_F^2. \end{eqnarray} \end{subequations} Note that the subproblems \eqref{LADM-LyS-L} (or \eqref{iLADM-LyS-L}) and \eqref{LADM-LyS-S} (or \eqref{iLADM-LyS-S}) have closed form solutions given, respectively, by the shrinkage operators of matrix nuclear norm and vector $\ell_1$ norm, see, e.g., \cite{MGC11,YZ11}. The main computational cost per iteration of both algorithms is one singular value decomposition (SVD) required in solving the $L$-subproblem. \subsection{Generating experimental data} In our experiments, we set $m=n$ and tested different ranks of $L_0$ (denoted by $r$), sparsity levels of $S_0$ (i.e., $\text{nnz}(S_0)/(mn)$) and sample ratios (i.e., $q/(mn)$). The low-rank matrix $L_0$ was generated by $\text{randn}(m,r)*\text{randn}(r,n)$ in MATLAB. The support of $S_0$ is randomly determined by uniform distribution, while the values of its nonzeros are uniformly distributed in $[-10,10]$. Such type of synthetic data are roughly those tested in \cite{WGMM13}. As for the linear operator ${\cal A}$, we tested three types of linear operators, i.e., two-dimensional partial DCT (discrete cosine transform), FFT (fast Fourier transform) and WHT (Walsh-Hadamard transform). The rows of these transforms are selected uniformly at random. \subsection{Parameters, stopping criterion and initialization} The model parameter $\lambda$ was set to $1/\sqrt{m}$ in our experiments, which is determined based on the exact recoverability theory in \cite{WGMM13}. As for the other parameters ($\beta$, $\tau$ and $\eta$) common to LADMM and iLADMM, we used the same set of values and adaptive rules in all the tests. Now we elaborate how the parameters are chosen. Since ${\cal A}$ contains rows of orthonormal transforms, it holds that ${\cal A}{\cal A}^* = {\cal I}$, the identity operator. Therefore, it holds that $\rho({\cal A}^*{\cal A})=1$. We set $\tau = \eta = 0.99$, which satisfies the convergence requirement specified in Theorems \ref{Theorem-convergence-LADM} and \ref{Theorem-iLADM-xpy}. The penalty parameter $\beta$ was initialized at $0.1q/\|b\|_1$ and was tuned at the beginning stage of the algorithm. Specifically, we tuned $\beta$ within the first $30$ iterations according to the following rule: \[ \beta_{k+1} = \left\{ \begin{array}{ll} \max(0.5\beta_k,10^{-3}), & \hbox{if $r_k < 0.1$;} \\ \min(2\beta_k, 10^2), & \hbox{if $r_k > 5$;} \\ \beta_k, & \hbox{otherwise,} \end{array} \right. \text{~~where~~} r_k := \frac{\beta_k\|{\cal A}(L^k+S^k)-b\|^2}{2s_k (\|L^k\|_* + \lambda \|S^k\|_1)}. 
\] Here $s_k$ is a parameter attached to the objective function $\|L\|_* + \lambda\|S\|_1$ and was chosen adaptively so that the quadratic term $\frac{\beta}{2}\|{\cal A}(L+S)-b\|^2$ and the objective term $\|L\|_* + \lambda \|S\|_1$ remain roughly of the same order. Note that the choice of $\beta$ is not well supported by theory and is usually determined via numerical experiments; see, e.g., \cite{YY13} for the influence of different $\beta$'s in the linearized ADMM for the matrix completion problem. The extrapolation parameter $\alpha_k$ for iLADMM was set to $0.28$ and was held constant in all our experiments. Note that this value of $\alpha_k$ was determined based on experiments and may be far from optimal. How to select $\alpha_k$ adaptively to achieve stable and faster convergence remains a research issue. Here our main goal is to illustrate the effect of the extrapolation steps. We also present some numerical results to compare the performance of iLADMM with different constant strategies for $\alpha_k$.
It is easy to see from \eqref{LADM-mVI-k+1} that if two consecutive iterates generated by the proximal point method are identical then a solution is already obtained. Since LADMM is an application of a general PPA, we terminated it by the following rule \begin{equation}\label{stop-rule} \frac{\|(L^{k+1},S^{k+1},p^{k+1}) - (L^k,S^k,p^k)\|}{1 + \|(L^k,S^k,p^k)\|} < \varepsilon, \end{equation} where $\varepsilon > 0$ is a tolerance parameter. Here $\|(L,S,p)\| := \sqrt{\|L\|_F^2 + \|S\|_F^2 + \|p\|^2}$. Since iLADMM generates the new point $(L^{k+1},S^{k+1},p^{k+1})$ by applying the proximal point method to $ (\bar L^k,\bar S^k,\bar p^k)$, we used the same stopping rule as \eqref{stop-rule} except that $(L^k,S^k,p^k)$ is replaced by $ (\bar L^k,\bar S^k,\bar p^k)$. That is, \begin{equation}\label{stop-rule-iLADMM} \frac{\|(L^{k+1},S^{k+1},p^{k+1}) - (\bar L^k,\bar S^k,\bar p^k)\|}{1 + \|(\bar L^k,\bar S^k,\bar p^k)\|} < \varepsilon. \end{equation} In our experiments, we initialized all variables $L$, $S$ and $p$ at zero.
\subsection{Experimental results} Recall that the matrix size is $m\times n$, the number of measurements is $q$, the rank of $L_0$ is $r$, and the degree of freedom of the pair $(L_0,S_0)$ is defined in \eqref{def:dof}. In our experiments, we tested $m=n=1024$. Let $k$ be the number of nonzeros of $S_0$. We tested four different ranks for $L_0$, three levels of sparsity for $S_0$ and three levels of sample ratios. Specifically, in our experiments we tested $r\in \{5,10,15,20\}$, $k/m^2 \in \{0.01, 0.05, 0.10\}$ and $q/m^2 \in \{0.4,0.6,0.8\}$. Let $(L,S)$ be the recovered solution. For each setting, we report the relative errors of $L$ and $S$ with respect to the true low-rank and sparse matrices $L_0$ and $S_0$, i.e., $\|L-L_0\|_F/\|L_0\|_F$ and $\|S-S_0\|_F/\|S_0\|_F$, and the number of iterations to meet the condition \eqref{stop-rule} or \eqref{stop-rule-iLADMM}, which are denoted by iter1 and iter2 for LADMM and iLADMM, respectively. We terminated both algorithms if the number of iterations reached 1000 but the stopping rule \eqref{stop-rule} or \eqref{stop-rule-iLADMM} still did not hold. For each problem scenario, we ran 10 random trials for both algorithms and report the averaged results. Detailed experimental results for $\varepsilon=10^{-5}$ and $r=5,10,15$ and $20$ are given in Tables \ref{table-r=5}-\ref{table-r=20}, respectively. In each table, a dashed line ``---'' indicates that the maximum iteration number was reached.
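For reference, the adaptive rule for $\beta$ and the stopping tests \eqref{stop-rule} and \eqref{stop-rule-iLADMM} described above can be sketched in Python as follows. This is a simplified, non-authoritative sketch rather than the exact MATLAB code used in our experiments, and the helper names are illustrative only.
\begin{verbatim}
import numpy as np

def update_beta(beta, r_k, k):
    # Adaptive tuning of beta within the first 30 iterations, following the rule above.
    if k >= 30:
        return beta
    if r_k < 0.1:
        return max(0.5 * beta, 1e-3)
    if r_k > 5:
        return min(2.0 * beta, 1e2)
    return beta

def relative_change(Lk1, Sk1, pk1, Lk, Sk, pk):
    # ||(L^{k+1},S^{k+1},p^{k+1}) - (L,S,p)|| / (1 + ||(L,S,p)||), cf. (stop-rule);
    # for iLADMM, (Lk, Sk, pk) should be the extrapolated point (Lbar, Sbar, pbar).
    diff = np.sqrt(np.linalg.norm(Lk1 - Lk, 'fro')**2
                   + np.linalg.norm(Sk1 - Sk, 'fro')**2
                   + np.linalg.norm(pk1 - pk)**2)
    base = 1.0 + np.sqrt(np.linalg.norm(Lk, 'fro')**2
                         + np.linalg.norm(Sk, 'fro')**2
                         + np.linalg.norm(pk)**2)
    return diff / base

# Usage: stop when relative_change(...) < eps, e.g., eps = 1e-5.
\end{verbatim}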
\begin{table}\caption{Results of $\text{rank}(L_0) = 5$: $\varepsilon=10^{-5}$, average results of 10 random trials.} \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c||c|c|r||c|c|r||c|}\hline \multicolumn{4}{|c||}{$m=n=1024$} & \multicolumn{3}{c||}{LADMM} & \multicolumn{3}{c||}{iLADMM} & \\ [1pt]\cline{1-10} \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{c|}{$k/m^2$ } & \multicolumn{1}{c|}{$(q/m^2, q/\text{dof})$} & \multicolumn{1}{c||}{${\cal A}$} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter1} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter2} & $\frac{\text{iter2}}{\text{iter1}}$ \\ [2pt] \hline\hline \input{results/18-Jul-2014/r_5.dat} \end{tabular}\label{table-r=5} \end{footnotesize} \end{center} \end{table} \begin{table}\caption{Results of $\text{rank}(L_0) = 10$: $\varepsilon=10^{-5}$, average results of 10 random trials.} \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c||c|c|r||c|c|r||c|}\hline \multicolumn{4}{|c||}{$m=n=1024$} & \multicolumn{3}{c||}{LADMM} & \multicolumn{3}{c||}{iLADMM} & \\ [1pt]\cline{1-10} \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{c|}{$k/m^2$ } & \multicolumn{1}{c|}{$(q/m^2, q/\text{dof})$} & \multicolumn{1}{c||}{${\cal A}$} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter1} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter2} & $\frac{\text{iter2}}{\text{iter1}}$ \\ [2pt] \hline\hline \input{results/18-Jul-2014/r_10.dat} \end{tabular}\label{table-r=10} \end{footnotesize} \end{center} \end{table} \begin{table}\caption{Results of $\text{rank}(L_0) = 15$: $\varepsilon=10^{-5}$, average results of 10 random trials.} \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c||c|c|r||c|c|r||c|}\hline \multicolumn{4}{|c||}{$m=n=1024$} & \multicolumn{3}{c||}{LADMM} & \multicolumn{3}{c||}{iLADMM} & \\ [1pt]\cline{1-10} \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{c|}{$k/m^2$ } & \multicolumn{1}{c|}{$(q/m^2, q/\text{dof})$} & \multicolumn{1}{c||}{${\cal A}$} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter1} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter2} & $\frac{\text{iter2}}{\text{iter1}}$ \\ [2pt] \hline\hline \input{results/18-Jul-2014/r_15.dat} \end{tabular}\label{table-r=15} \end{footnotesize} \end{center} \end{table} \begin{table}\caption{Results of $\text{rank}(L_0) = 20$: $\varepsilon=10^{-5}$, average results of 10 random trials.} \begin{center} \begin{footnotesize} \begin{tabular}{|c|c|c|c||c|c|r||c|c|r||c|}\hline \multicolumn{4}{|c||}{$m=n=1024$} & \multicolumn{3}{c||}{LADMM} & \multicolumn{3}{c||}{iLADMM} & \\ [1pt]\cline{1-10} \multicolumn{1}{|c|}{$r$} & \multicolumn{1}{c|}{$k/m^2$ } & \multicolumn{1}{c|}{$(q/m^2, q/\text{dof})$} & \multicolumn{1}{c||}{${\cal A}$} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter1} & \multicolumn{1}{c|}{$\frac{\|L-L_0\|_F}{\|L_0\|_F}$} & \multicolumn{1}{c|}{$\frac{\|S-S_0\|_F}{\|S_0\|_F}$} & \multicolumn{1}{c||}{iter2} & $\frac{\text{iter2}}{\text{iter1}}$ \\ [2pt] \hline\hline 
\input{results/18-Jul-2014/r_20.dat} \end{tabular}\label{table-r=20} \end{footnotesize} \end{center} \end{table}
It can be seen from Tables \ref{table-r=5}-\ref{table-r=20} that iLADMM is generally faster than LADMM in obtaining solutions satisfying the aforementioned stopping conditions. Specifically, within our setting the numbers of iterations consumed by iLADMM range roughly from $60\%$ to $80\%$ of those consumed by LADMM. If we take into account all the tests (except those cases where either LADMM or iLADMM failed to terminate within 1000 iterations, e.g., $(r,k/m^2,q/m^2) = (5,0.1,40\%)$ and ${\cal A}$ is partial DCT), the overall average number of iterations used by iLADMM is about $74\%$ of that used by LADMM. Note that in some cases iLADMM obtained satisfactory results within the number of allowed iterations (1000 in our setting), while LADMM did not. For example, $(r,k/m^2,q/m^2) = (5,0.1,40\%)$ and ${\cal A}$ is partial DCT or partial WHT. In most cases, the recovered matrices $L$ and $S$ are close to the true low-rank and sparse components $L_0$ and $S_0$, respectively. The relative errors are usually of the order $10^{-5}$--$10^{-6}$. For some cases, the recovered solutions are not of high quality (relative errors are large), which is mainly because the number of samples is small relative to the degree of freedom of $(L_0,S_0)$. This can be seen from the values of $q/\text{dof}$ listed in the tables. Roughly speaking, the recovered solutions are satisfactory (say, relative errors are less than $10^{-3}$) provided that $q/\text{dof}$ is no less than $3.5$.
We note that the per iteration cost of both LADMM and iLADMM for the compressive principal component pursuit model \eqref{prob:cpca} is dominated by one SVD and thus is roughly identical. The extra cost of the extrapolation inertial step in \eqref{iLADM-LyS-bar} is negligible compared to the computational load of SVD. This is the main reason that we only reported the number of iterations but not the CPU time consumed by both algorithms. The inertial technique actually accelerates the original algorithm to a large extent without increasing the total computational cost.
To better understand the behavior of iLADMM relative to LADMM, we also tested different matrix sizes ($m=n=256, 512$ and $1024$) with different levels of stopping tolerance ($\varepsilon=10^{-3},10^{-4}$ and $10^{-5}$ in \eqref{stop-rule}). For each case, we tested $r\in\{5,10,15,20\}$ and $k/m^2\in\{0.01,0.05,0.10\}$ for a fixed $q$ such that $q/m^2 \in \{0.4,0.6,0.8\}$. For each $q$, we accumulated the iteration numbers for different $(r,k)$ and the three types of linear operators and finally took an average. The results are summarized in Figure \ref{Fig-m3}. Again, these results are averages of 10 random trials for each case. From the results we can see that iLADMM is faster and terminates earlier than LADMM at all tested levels of stopping tolerance. Roughly speaking, iLADMM reduced the cost of LADMM by about 30\%. \begin{figure} \caption{Comparison results on different matrix sizes and stopping tolerance: Average results of 10 random trials ($m=n=256,512,1024$, and from left to right $\varepsilon = 10^{-3}, 10^{-4}, 10^{-5}$, respectively).
} \label{Fig-m3} \end{figure} \begin{figure} \caption{Comparison results on different $\alpha_k\equiv\alpha$ and stopping tolerance: Average results of 10 random trials ($m=n=512$, $\alpha$ ranges from $0.05$ to $0.35$, and from left to right $\varepsilon = 10^{-3}, 10^{-4}, 10^{-5}$, respectively).} \label{Fig-alp6} \end{figure}
We also ran iLADMM with various constant strategies for $\alpha_k$. In particular, we set $m=n=512$ and tested different values of $q$ such that $q/\text{dof}\in \{5,10,15\}$. For each case, we varied $r \in \{5,10,15,20\}$ and $k/m^2 \in \{0.01,0.05,0.10\}$ for the three types of aforementioned measurement matrices. We accumulated the number of iterations and finally took an average. The detailed average results of 10 random trials for $\alpha_k\equiv \alpha$ from $0.05$ to $0.35$ are given in Figure \ref{Fig-alp6}. From the results in Figure \ref{Fig-alp6} we see that, for the 7 tested values of $\alpha$, iLADMM is slightly faster if $\alpha$ is larger, provided that $\alpha$ does not exceed $0.3$. We have also observed that for $\alpha > 0.3$ iLADMM either slows down or becomes less stable, especially when $q/\text{dof}$ is small. This is the main reason that we set $\alpha_k$ to a constant value near $0.3$ but not larger.
\section{Concluding remarks} In this paper, we proposed and analyzed a general inertial proximal point method within the setting of the mixed VI problem \eqref{mVI}. The proposed method adopts a weighting matrix and allows more flexibility. Our convergence results require weaker conditions in the sense that the weighting matrix $G$ need not be positive definite, as long as the function $F$ is $H$-monotone and $G$ is positive definite in the null space of $H$. The convergence analysis can be easily adapted to the monotone inclusion problem \eqref{prob:0inTw}. We also showed that the linearized ADMM for the linearly constrained separable convex optimization problem is a proximal point method applied to the primal-dual optimality conditions, as long as the parameters are reasonably small. As byproducts of this finding, we established, using standard analytic techniques for the proximal point method, global convergence and convergence rate results for the LADMM. This proximal reformulation also allows us to propose an inertial version of LADMM, whose convergence is guaranteed under suitable conditions. Our preliminary implementation of the algorithm and extensive experimental results on the compressive principal component pursuit problem have shown that the inertial LADMM is generally faster than the original LADMM. Though in a sense the acceleration is not very significant, we note that the inertial LADMM does not incur any non-negligible additional computational cost either.
Throughout our experiments the extrapolation steplength $\alpha_k$ was held constant. How to select $\alpha_k$ adaptively based on the current information so that the overall algorithm performs more efficiently and stably is a practically important question that deserves further investigation. Another theoretical issue is to investigate worst-case complexity analysis for general inertial type algorithms. In fact, complexity results of inertial type algorithms for minimizing closed proper convex functions already exist in the literature. The pioneering work in this direction is due to Nesterov \cite{Nes83}, where the algorithm can also be viewed from the perspective of inertial algorithms. Refined analyses for more general problems can be found in \cite{BT09,Gul92}.
Let $f: \Re^n\rightarrow \Re$ be a closed proper convex function that is bounded below. Based on \cite{Nes83,BT09,Gul92}, the following algorithm can be studied. Let $w^0 \in\Re^n$ be given. Set $w^{-1} = w^0$, $t_0=1$ and $k=0$. For $k\geq 0$ the algorithm iterates as \begin{subequations}\label{Alg-optimal-iPPA} \begin{eqnarray} \label{Alg-optimal-t} t_{k+1} &=& \frac{1+\sqrt{1+4t_k^2}}{2}, \\ \label{Alg-optimal-bar-w} \bar w^k &=& w^k + \frac{t_k-1}{t_{k+1}} (w^k - w^{k-1}), \\ \label{Alg-optimal-w} w^{k+1} &=& \arg\min_w f(w) + \frac{1}{2\lambda_k}\|w - \bar w^k\|^2. \end{eqnarray} \end{subequations} Using analyses similar to those in \cite{Nes83, Gul92, BT09}, one can show that the sequence $\{w^k\}_{k=0}^\infty$ satisfies \[f(w^k) - \min_{w\in\Re^n} f(w) = O(1/k^2).\] Algorithm \eqref{Alg-optimal-iPPA} is nothing but an inertial PPA with steplength $\alpha_k = \frac{t_k-1}{t_{k+1}}$. It is interesting to note that $\alpha_k$ is monotonically increasing as $k\rightarrow \infty$ and converges to $1$, which is much larger than the upper bound condition $\alpha<1/3$ required in Theorem \ref{Theorem2}. Also note that the convergence for \eqref{Alg-optimal-iPPA} is measured by the objective residual. Without further assumptions on $f$, it seems difficult to establish convergence of the sequence $\{w^k\}_{k=0}^\infty$, see, e.g., \cite{Gul92}. In comparison, our results impose a smaller upper bound on $\alpha_k$ but guarantee the convergence of the sequence of iterates $\{w^k\}_{k=0}^{\infty}$. Even so, there seems to be a certain gap between the classical results \cite{Nes83,BT09,Gul92} for minimizing closed proper convex functions and the results presented in the present paper. Further research in this direction would be interesting.
\def$'${$'$} \begin{appendix} \section{Proof of Theorem~\ref{Theorem-convergence-LADM}} First, we sketch the proof of convergence of the sequence $\{w^k\}$ to a solution of \eqref{mVI}. Clearly, the matrix $G$ defined in \eqref{G-LADM} is symmetric and positive definite under our assumptions that $\tau < 1/\rho(A^TA)$ and $\eta < 1/\rho(B^TB)$. Let $w^*\in \Omega^*$ be arbitrarily fixed. It follows from setting $w=w^*$ in \eqref{LADM-mVI-k+1} that \begin{eqnarray*} \langle w^{k+1} - w^*, G(w^k - w^{k+1})\rangle &\geq& \theta(w^{k+1}) - \theta(w^*) + \langle w^{k+1}-w^*, F(w^{k+1})\rangle \\ &\geq& \theta(w^{k+1}) - \theta(w^*) + \langle w^{k+1}-w^*, F(w^*)\rangle \\ &\geq& 0, \end{eqnarray*} where the second ``$\geq$" follows from the monotonicity of $F$. Therefore, we obtain \begin{eqnarray} \label{PPA-decrease} \nonumber \|w^{k+1}-w^*\|_G^2 &=& \|w^k-w^*\|_G^2 - \|w^k-w^{k+1}\|_G^2 - 2 \langle w^{k+1}-w^*, G(w^k-w^{k+1})\rangle \\ &\leq& \|w^k-w^*\|_G^2 - \|w^k-w^{k+1}\|_G^2. \end{eqnarray} Since $G$ is positive definite, this implies that, measured in the $G$-norm, the sequence $\{w^k\}$ is strictly contractive with respect to $\Omega^*$ unless $w^k=w^{k+1}$, in which case $w^k$ is already a solution. The convergence of $\{ w^k \}$ to some solution $w^\star\in \Omega^*$ follows directly from standard analyses for PPA and the key inequality \eqref{PPA-decrease}. We omit the details. Second, we prove \eqref{ergodic-rate}. Let $w^{i+1} \in \Omega$ be generated via \eqref{LADM-xpy}. It follows from the monotonicity of $F$ and \eqref{LADM-mVI-k+1} that, for any $w\in \Omega$, there holds \begin{eqnarray*} \theta(w) - \theta(w^{i+1}) + (w - w^{i+1})^T F(w) &\geq& \theta(w) - \theta(w^{i+1}) + (w - w^{i+1})^T F(w^{i+1}) \\ &\geq& (w - w^{i+1})^T G ( w^i - w^{i+1}).
\end{eqnarray*} By noting the relation $2(w - w^{i+1})^T G ( w^i - w^{i+1}) \geq \|w - w^{i+1}\|_G^2 -\|w - w^i\|_G^2 $, we obtain \[ \theta(w) - \theta(w^{i+1}) + (w - w^{i+1})^T F(w) \geq \frac{1}{2}\left( \|w - w^{i+1}\|_G^2 -\|w - w^i\|_G^2 \right), \forall w\in \Omega. \] Taking the sum over $i = 0,1,\ldots,k$ and dividing both sides by $(k+1)$, we get \begin{equation} \label{O(1/k)-final} \theta(w) - \frac{1}{k+1}\sum_{i=0}^k\theta(w^{i+1}) + \left(w - \frac{1}{k+1}\sum_{i=0}^k w^{i+1}\right)^T F(w) \geq - \frac{\|w - w^0\|_G^2}{2(k+1)}, \; \forall w\in \Omega. \end{equation} The conclusion \eqref{ergodic-rate} follows directly from \eqref{O(1/k)-final} by noting the definition of $\bar w^k$ and the fact that \[ \frac{1}{k+1}\sum_{i=0}^k \theta(w^{i+1}) \geq \theta\left(\frac{1}{k+1}\sum_{i=0}^k w^{i+1}\right) = \theta(\bar w^k). \] The equivalence of \eqref{saddle-conditions-approximate} and \eqref{ergodic-rate} can be verified directly from the notation defined in \eqref{def-w-F-theta} and the definition of ${\cal L}$ in \eqref{def:L}. Finally, we prove \eqref{non-ergodic-rate} and \eqref{non-ergodic-rate-o}. Since \eqref{LADM-mVI-k+1} holds for all $k$, it also holds for $k := k-1$, i.e., \begin{equation}\label{LADM-mVI-k} \theta(w) - \theta(w^{k}) + \langle w - w^{k}, F(w^{k}) + G (w^{k} - w^{k-1})\rangle \geq 0, \; \forall w\in\Omega. \end{equation} By setting $w=w^k$ and $w=w^{k+1}$ in \eqref{LADM-mVI-k+1} and \eqref{LADM-mVI-k}, respectively, and adding the resulting inequalities, we obtain \[ \langle G(w^{k+1} - w^{k}), (w^{k} - w^{k-1}) - (w^{k+1} - w^k)\rangle \geq \langle w^{k+1} - w^{k}, F(w^{k+1}) - F(w^{k})\rangle \geq 0. \] In addition, by taking into account the fact that \begin{eqnarray*} \|w^k - w^{k-1}\|_G^2-\|w^{k+1} - w^k\|_G^2 \geq 2\langle G(w^{k+1}-w^k), (w^k - w^{k-1})-(w^{k+1}-w^k)\rangle, \end{eqnarray*} we obtain $\|w^k - w^{k-1}\|_G \geq \|w^{k+1} - w^k\|_G$, i.e., $\|w^k - w^{k-1}\|_G$ is monotonically nonincreasing with respect to $k$. By further considering \eqref{PPA-decrease}, we obtain \[ k\|w^{k} - w^{k-1}\|_G^2 \le \sum_{i=0}^{k-1}\| w^{i+1} - w^{i}\|_G^2 \leq \sum_{i=0}^{k-1} \left(\|w^i - w^*\|_G^2 -\|w^{i+1} - w^*\|_G^2\right) \leq \|w^0 - w^*\|_G^2, \] which implies the relation \eqref{non-ergodic-rate}. By using the trick introduced in \cite{DLY13,DY14}, we can derive the $o\left(1/k\right)$ result \eqref{non-ergodic-rate-o}. Specifically, we have \begin{equation}\label{ch-add3} {k\over 2} \|w^k-w^{k-1}\|_G^2 \leq \sum_{i =\lfloor {k\over 2} \rfloor}^k \|w^{i} -w^{i-1}\|_G^2, \end{equation} where $\lfloor {k/2} \rfloor$ denotes the greatest integer no greater than $k/2$. The result \eqref{non-ergodic-rate-o} follows by further noting that $\sum_{k=0}^{\infty}\| w^{k+1} - w^{k}\|_G^2 < \infty$, and thus the right-hand side of \eqref{ch-add3} converges to $0$ as $k\rightarrow\infty$. \end{appendix} \end{document}
arXiv
Height

Height is a measure of vertical distance, either vertical extent (how "tall" something or someone is) or vertical position (how "high" a point is). For example, "The height of that building is 50 m" or "The height of an airplane in-flight is about 10,000 m"; similarly, "Christopher Columbus is 5 foot 2 inches in vertical height." When the term is used to describe vertical position (of, e.g., an airplane) from sea level, height is more often called altitude.[1] Furthermore, if the point is attached to the Earth (e.g., a mountain peak), then altitude (height above sea level) is called elevation.[2] In a two-dimensional Cartesian space, height is measured along the vertical axis (y) between a specific point and another that does not have the same y-value. If both points happen to have the same y-value, then their relative height is zero. In the case of three-dimensional space, height is measured along the vertical z axis, describing a distance from (or "above") the x-y plane.

Etymology
The English-language word high is derived from Old English hēah, ultimately from Proto-Germanic *xauxa-z, from a PIE base *keuk-. The derived noun height, also the obsolete forms heighth and highth, is from Old English híehþo, later héahþu, as it were from Proto-Germanic *xaux-iþa.

In mathematics
In elementary models of space, height may indicate the third dimension, the other two being length and width. Height is normal to the plane formed by the length and width. Height is also used as a name for some more abstract definitions. These include:
1. The altitude of a triangle, which is the length from a vertex of a triangle to the line formed by the opposite side;
2. A measurement in a circular segment of the distance from the midpoint of the arc of the circular segment to the midpoint of the line joining the endpoints of the arc (see diagram in circular segment);
3. In a rooted tree, the height of a vertex is the length of the longest downward path to a leaf from that vertex;
4. In algebraic number theory, a "height function" is a measurement related to the minimal polynomial of an algebraic number, among other uses in commutative algebra and representation theory;
5. In ring theory, the height of a prime ideal is the supremum of the lengths of all chains of prime ideals contained in it.

In geosciences
Although height is normally relative to a plane of reference, most measurements of height in the physical world are based upon a zero surface, known as sea level. Both altitude and elevation, two synonyms for height, are usually defined as the position of a point above the mean sea level. One can extend the sea-level surface under the continents: naively, one can imagine a lot of narrow canals through the continents. In practice, the sea level under a continent has to be computed from gravity measurements, and slightly different computational methods exist; see Geodesy, heights. In addition to vertical position, the vertical extent of geographic landmarks can be defined in terms of topographic prominence. For example, the highest mountain (by elevation in reference to sea level) is Mount Everest, located on the border of Nepal and Tibet, China; however, the tallest mountain, by measurement of apex to base, is Mauna Kea in Hawaii, United States.

In geodesy
Geodesists formalize mean sea level (MSL) by means of the geoid, the equipotential surface that best fits MSL. Then various types of height (normal, dynamic, orthometric, etc.)
can be defined, based on the assumption of density of topographic masses necessary in the continuation of MSL under the continents. A purely geometric quantity is the ellipsoidal height, reckoned from the surface of a reference ellipsoid; see Geodetic system, vertical datum.

In aviation
In aviation terminology, the terms height, altitude, and elevation are not synonyms. Usually, the altitude of an aircraft is measured from sea level, while its height is measured from ground level. Elevation is also measured from sea level, but is most often regarded as a property of the ground. Thus, elevation plus height can equal altitude, but the term altitude has several meanings in aviation.

In human culture
Human height is one of the areas of study within anthropometry. While environmental factors have some effect on variations in human height, these influences are insufficient to account for all differences between populations, suggesting that genetic factors are important for explaining variations between human populations.[3] The United Nations uses height (among other statistics) to monitor changes in the nutrition of developing nations. In human populations, average height can distill down complex data about the group's birth, upbringing, social class, diet, and health care system. In their research, Baten, Stegl and van der Eng came to the conclusion that a change in average height is a sign of a change in economic development. Using broad data on Indonesia, the researchers state that several incidents in the history of the country have led not only to a change in the economy but also to a change in the population’s average height.[4]

See also
• Acrophobia (fear of heights)
• Centimetre–gram–second system of units
• Chinese units of measurement
• Elevation
• Height gauge
• Imperial units
• International System of Units
• United States customary units
• Vertical metre

References
1. Strahler, Alan (2013). Introducing Physical Geography (6th ed.). Hoboken, N.J.: Wiley. p. 42. ISBN 9781118396209. OCLC 940600903.
2. Petersen, James F.; Sack, Dorothy; Gabler, Robert E. (4 February 2016). Physical Geography. Cengage Learning. p. 113. ISBN 978-1-305-65264-4. Note that altitude usually refers to a height in the air (above sea level) and elevation refers to height on the surface [of the Earth] above (or below) sea level.
3. Stulp, G; Barrett, L (February 2016). "Evolutionary perspectives on human height variation" (PDF). Biological Reviews. 91 (1): 206–34. doi:10.1111/brv.12165. PMID 25530478. S2CID 5257723.
4. van der Eng, Pierre; Baten, Joerg; Stegl, Mojgan (2010). "Long-Term Economic Growth and the Standard of Living in Indonesia" (PDF). SSRN Electronic Journal. doi:10.2139/ssrn.1699972. S2CID 127728911.

External links
• Media related to Height at Wikimedia Commons
• The dictionary definition of height at Wiktionary
• The dictionary definition of high at Wiktionary
Wikipedia
\begin{document} \date{\today} \title{Semiclassical inverse spectral problem for elastic Love waves in isotropic media} \author{Maarten V. de Hoop \thanks{Simons Chair in Computational and Applied Mathematics and Earth Science, Rice University, Houston, TX 77005, USA ([email protected])} \and Alexei Iantchenko \thanks{Department of Materials Science and Applied Mathematics, Faculty of Technology and Society, Malm\"{o} University, SE-205 06 Malm\"{o}, Sweden ([email protected])} \and Robert D. van der Hilst \thanks{Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ([email protected])} \and Jian Zhai \thanks{Institute for Advanced Study, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China ([email protected])}} \maketitle \pagestyle{myheadings} \thispagestyle{plain} \markboth{DE HOOP, IANTCHENKO, VAN DER HILST and ZHAI}{Semiclassical inverse spectral problem for Love waves} \begin{abstract} We analyze the inverse spectral problem on the half line associated with elastic surface waves. Here, we focus on Love waves. Under certain generic conditions, we establish uniqueness and present a reconstruction scheme for the \textit{S}-wavespeed with multiple wells from the semiclassical spectrum of these waves. \end{abstract} \section{Introduction} \label{intro} We analyze the inverse spectral problem on the half line associated with elastic surface waves. Here, we focus on Love waves. In a follow-up paper we present the corresponding inverse problem for Rayleigh waves. Surface waves have played a key role in revealing Earth's structure from the shallow near-surface to several hundred kilometers deep into the mantle, depending on the frequencies and data acquisition configurations considered. \subsection{Seismology} The inverse spectral problem for surface waves fits in the seismological framework of surface-wave tomography. Surface-wave tomography has a long history. Since pioneering work on inference from the dispersion of surface waves half a century ago \cite{Haskell, Press, BD, Knopoff, Toksoz, Schwab, Dziewonski, Woodhouse, Nolet1}, surface wave tomography based on dispersion of waveforms from earthquake data has played an important role in studies of the structure of the Earth's crust and upper mantle on both regional and global scales \cite{Lerner, Woodhouse2, Nataf, Nolet2, MT, Trampert, GJG, Ritzwoller-2001, Simons, BE, Ritsema-2004, Lebedev-2008, Yao2}. In order to avoid the effects of scattering due to complex crustal structure, these studies focused on the analysis, measurement, and inversion of surface wave dispersion at relatively low frequencies (that is, $4-20$ mHz, or periods between $50$ and $250$ s) at which the fundamental modes sense mantle structure to $200-300$ km depth and higher modes reach across the upper mantle and transition zone to some $660$ km depth. Most methods assume some form of (WKB) asymptotic and path-average approximation \cite{DT} in line with our semiclassical point of view. More than a decade ago, Campillo and his collaborators discovered that cross correlation of ambient noise yields the Green's function for surface waves \cite{Campillo1,Campillo2,Campillo3}. This made it possible to extend the applicability of surface-wave tomography not only to any area where seismic sensors can be placed, but also to short-path measurements and frequencies at which the data are most sensitive to shallow depths.
Crustal studies based on ambient noise tomography are typically conducted in the period band of $5–40$ s, but shorter period surface waves ($\sim 1$ s, using station spacing of $\sim 20$ km or less) have been used to investigate shallow crustal or even near surface shear-wave speed variations \cite{Sabra, Yao1, Yao2, Yang, Lin, Huang}. \subsection{Semiclassical analysis perspective} In a separate contribution \cite{dHINZ}, we presented the semiclassical analysis of surface waves. Such an analysis leads to a geometric-spectral description of the propagation of these waves \cite{Babich, Woodhouse}. This semiclassical analysis is built on the work of Colin de Verdi\`{e}re \cite{CDV2, CDV3}. \textit{The main contribution of this paper is the construction of the Bohr-Sommerfeld quantization for Love waves.} Colin de Verdi\`{e}re also considered the inverse spectral problem of scalar surface waves allowing wavespeed profiles that contain a well \cite{CdV2011}. His result does not account for the Neumann boundary condition at the surface, although a reflection principle could be invoked, but his methodology directly applies once the Bohr-Sommerfeld quantization is obtained. The reflection principle does not apply to general elastic surface waves and the remedy is presented in this paper. In the process, we show that with the Neumann boundary condition at the surface, in fact, ambiguities arising in the recovery of the \textit{S}-wave speed on the line (that is, without this boundary condition) can be resolved. We study the elastic wave equation in $X = {\mathbb R}^2 \times (-\infty,0]$. In coordinates, $$ (x,z) ,\quad x = (x_1,x_2) \in {\mathbb R}^2 ,\ z \in {\Bbb R}^{-} = (-\infty,0], $$ we consider solutions, $u = (u_1,u_2,u_3)$, satisfying the Neumann boundary condition at $\partial X = \{z=0\}$, to the system \begin{equation}\label{elaswaeq} \begin{split} \partial^2_t u_i + M_{il} u_l &= 0 ,\\ u(t=0,x,z) &= 0,~~\partial_tu(t=0,x,z)=h(x,z) ,\\ \frac{c_{i3kl}}{\rho}\partial_k u_l(t,x,z=0) &= 0 , \end{split} \end{equation} where \begin{multline*} M_{il} = -\frac{\partial}{\partial z}\frac{c_{i33l}(x,z)}{\rho(x,z)} \frac{\partial}{\partial z} - \sum_{j,k=1}^{2} \frac{c_{ijkl}(x,z)}{\rho(x,z)} \frac{\partial}{\partial x_j} \frac{\partial}{\partial x_k} - \sum_{j=1}^{2} \frac{\partial}{\partial x_j}\frac{c_{ij3l}(x,z)}{\rho(x,z)} \frac{\partial}{\partial z} \\ - \sum_{k=1}^{2} \frac{c_{i3kl}(x,z)}{\rho(x,z)} \frac{\partial}{\partial z} \frac{\partial}{\partial x_k} -\sum_{k=1}^{2} \left( \frac{\partial}{\partial z} \frac{c_{i3kl}(x,z)}{\rho(x,z)} \right) \frac{\partial}{\partial x_k} - \sum_{j,k=1}^{2} \left( \frac{\partial}{\partial x_j} \frac{c_{ijkl}(x,z)}{\rho(x,z)} \right) \frac{\partial}{\partial x_k}. \end{multline*} Here, the stiffness tensor, $c_{ijkl}$, and density, $\rho$, are smooth and obey the following scaling: Introducing $Z = \frac{z}{\epsilon}$, $$ \frac{c_{ijkl}}{\rho}(x,z) = C_{ijkl}\left(x,\frac{z}{\epsilon}\right) , ~~\epsilon \in (0,\epsilon_0]; $$ $$ C_{ijkl}(x,Z)=C_{ijkl}(x,Z_I)=C_{ijkl}^I(x),\quad Z \leq Z_I<0. $$ As discussed in \cite{dHINZ}, surface waves travel along the surface $z = 0$. \noindent The remainder of the paper is organized as follows. In Section \ref{semi}, we give the formulation of the inverse problems as an inverse spectral problem on the half line. In Section \ref{S-decreasing}, we treat the simple case of recovery of a monotonic profile of wave speed. 
In Section \ref{BS rule}, we discuss the relevant Bohr-Sommerfeld quantization, which is the corner stone in the study of the inverse spectral problem. In Section \ref{inverse}, we give the reconstruction scheme under generic assumptions. \section{Semiclassical description of Love waves}\label{semi} \subsection{Surface wave equation, trace and the data} For the convenience of the readers, we briefly summarize the semiclassical description of elastic surface waves. The leading-order Weyl symbol associated with $M_{il}$ above is given by \begin{multline}\label{H0} H_{0,il}(x,\xi) = -\frac{\partial}{\partial Z}C_{i33l}(x,Z) \frac{\partial}{\partial Z} \\ - \mathrm{i}\sum_{j=1}^{2} C_{ij3l}(x,Z) \xi_j \frac{\partial}{\partial Z} - \mathrm{i}\sum_{k=1}^{2} C_{i3kl}(x,Z) \frac{\partial }{\partial Z} \xi_k - \mathrm{i} \sum_{k=1}^{2} \left( \frac{\partial}{\partial Z} C_{i3kl}(x,Z) \right) \xi_k \\ + \sum_{j,k=1}^{2} C_{ijkl}(x,Z) \xi_j \xi_k. \hspace*{4.0cm} \end{multline} We view $H_0(x,\xi)$ as ordinary differential operators in $Z$, with domain $$ \mathcal{D} = \left\{ v \in H^2(\mathbb{R}^-)\ \bigg|\ \sum_{l=1}^3\left(C_{i33l}(x,0) \frac{\partial v_l}{\partial Z}(0) + \mathrm{i} \sum_{k=1}^2C_{i3kl}\xi_kv_l(0)\right) = 0 \right\} . $$ For an isotropic medium, $$ C_{ijkl} = \hat{\lambda} \delta_{ij} \delta_{kl} + \hat{\mu} (\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}), $$ where $\hat{\lambda}=\frac{\lambda}{\rho}$ and $\hat{\mu}=\frac{\mu}{\rho}$. The \textit{S}-wavespeed, $C_S$, is then $C_S=\sqrt{\hat{\mu}}$. The decoupling of Love and Rayleigh waves is observed in practice, and explained in \cite{dHINZ}. We denote $$ P(\xi) = \left(\begin{array}{ccc} |\xi|^{-1}\xi_2& |\xi|^{-1}\xi_1 & 0 \\ -|\xi|^{-1}\xi_1 &|\xi|^{-1}\xi_2 & 0 \\ 0 & 0 & 1 \end{array}\right) . $$ Then $$ P(\xi)^{-1}H_0(x,\xi)P(\xi)=\left(\begin{array}{cc} H_0^L(x,\xi) &\\ &H_0^R(x,\xi) \end{array}\right), $$ where \begin{equation}\label{love} H_0^L(x,\xi)\varphi_1 = -\pdpd{}{Z} \hat{\mu} \pdpd{\varphi_1}{Z} + \hat{\mu} \, |\xi|^2 \varphi_1 \end{equation} supplemented with boundary condition \begin{equation*}\label{love_b} \pdpd{\varphi_1}{Z}(0) = 0 , \end{equation*} for Love waves. We will consider only the Love waves in this paper. We assume that $\Lambda_\alpha(x,\xi)$ is an eigenvalue of $H_0(x,\xi)$ with eigenfunction $\Phi_{\alpha,0}(Z,x,\xi)$. By \cite[Theorem 2.1]{dHINZ}, we have \begin{equation}\label{eq:comm_sys} H_0^L \circ \Phi_{\alpha,0}\, = \Phi_{\alpha,0} \circ \Lambda_\alpha+\mathcal{O}(\epsilon). \end{equation} We define \begin{equation}\label{defchi} J_{\alpha,\epsilon}(Z,x,\xi) = \frac{1}{\sqrt{\epsilon}}\Phi_{\alpha,0}(Z,x,\xi) . \end{equation} Microlocally (in $x$), we can construct approximate solutions of the system \eqref{elaswaeq} with initial values $$ h(x,\epsilon Z)=\sum_{\alpha=1}^{\mathfrak{M}} J_{\alpha,\epsilon}(Z,x,\epsilon D_x) W_{\alpha,\epsilon}(x,Z) , $$ representing surface waves. We assume that all eigenvalues $\Lambda_1 < \cdots < \Lambda_\alpha < \cdots < \Lambda_{\mathfrak{M}}$ are eigenvalues of the operator given in (\ref{love}). We let $W_{\alpha,\epsilon} $ solve the initial value problems (up to leading order) \begin{eqnarray} [\epsilon^2 \partial_t^2 + \Lambda_{\alpha}(x,D_x)] W_{\alpha,\epsilon}(t,x,Z) &=& 0 , \label{wqeaj_sys}\\ W_{\alpha,\epsilon}(0,x,Z) &=& 0 ,\quad \partial_t W_{\alpha,\epsilon}(0,x,Z) = J_{\alpha,\epsilon}W_{\alpha}(x,Z) , \label{eq:initrange} \end{eqnarray} $\alpha = 1,\ldots,\mathfrak{M}$. 
We let $\mathcal{G}_0(Z,x,t,Z',\xi;\epsilon)$ denote the approximate Green's function (microlocalized in $x$), up to leading order, for Love waves. We may write \cite{dHINZ} \begin{multline} \label{G0} \mathcal{G}_0(Z,x,t,Z',\xi;\epsilon) = \sum_{\alpha=1}^{\mathfrak{M}} J_{\alpha,\epsilon}(Z,x,\xi) \\ \left(\frac{\mathrm{i}}{2}\mathcal{G}_{\alpha,+,0}(x,t,\xi,\epsilon)-\frac{\mathrm{i}}{2}\mathcal{G}_{\alpha,-,0}(x,t,\xi,\epsilon)\right)\Lambda^{-1/2}_\alpha(x,\xi)J_{\alpha,\epsilon}(Z',x,\xi) , \end{multline} where $\mathcal{G}_{\alpha,\pm,0}$ are Green's functions for half wave equations associated with (\ref{wqeaj_sys})-(\ref{eq:initrange}). We have the trace \[\label{trace_from_normal_modes} \int_{\mathbb{R}^-} \widehat{\epsilon \partial_t \mathcal{G}_0}(Z,x,\omega,Z,\xi;\epsilon) \mathrm{d} \epsilon Z = \sum_{\alpha=1}^{\mathfrak{M}} \delta(\omega^2 - \Lambda_\alpha(x,\xi)) \Lambda^{1/2}_\alpha(x,\xi) + {\mathcal O}(\epsilon^{-1}) \] from which we can extract the eigenvalues $\Lambda_\alpha,\,\alpha=1,2,\cdots,\mathfrak{M}$ as functions of $\xi$. We use these to recover the profile of $C_S^2$. In practice, these eigenvalues are obtained from surface-wave tomography and, to ensure that all eigenvalues are observed, measurements of surface waveforms should be taken in boreholes. Most seismic observations are made at or near Earth’s surface, but modern networks do increasingly include borehole sensors. For example, the Hi-net seismographic network in Japan\footnote{http://www.hinet.bosai.go.jp} includes more than $750$ sensors located in $> 100$ m deep boreholes, and permanent sites of USArray\footnote{http://www.usarray.org} include sensors placed at around $100$ m depth. \subsection{Semiclassical spectrum} From here on, we only consider the operator $H_0^L(x,\xi)$ for Love waves. We suppress the dependence on $x$, and introduce $h = |\xi|^{-1}$ as another semiclassical parameter. Within this setting, we also change the notation from $\frac{\partial}{\partial Z}$ to $\frac{\mathrm{d}}{\mathrm{d}Z}$. We arrive at the operator \begin{equation*} L_h = -h^2 \frac{\mathrm{d}}{\mathrm{d}Z} \left(\hat{\mu}(Z) \frac{\mathrm{d}}{\mathrm{d}Z}\right) + \hat{\mu}(Z) \end{equation*} with a Neumann boundary condition at $Z = 0$. The assumption on the stiffness tensor gives us the following assumption on $\hat{\mu}$: \begin{assumption} \label{assu1} The (unknown) function $\hat{\mu}$ satisfies $\hat{\mu}(Z) = \hat{\mu}(Z_I)$ for all $Z \leq Z_I$ and \begin{equation*} 0 < \hat{\mu}(0) = E_0 = \inf_{Z \le 0} \hat{\mu}(Z) < \hat{\mu}_I = \sup_{Z \leq 0} \hat{\mu}(Z) = \hat{\mu}(Z_I) . \end{equation*} \end{assumption} \noindent The assumption that $\hat{\mu}$ attains its minimum at the boundary, and its maximum in some deep zone, is realistic in practice. We first observe that the spectrum of $L_h$ is divided into two parts, $$ \sigma(L_h) = \sigma_{pp}(L_h) \cup \sigma_{ac}(L_h) , $$ where the point spectrum $\sigma_{pp}(L_h)$ consists of a finite number of eigenvalues in $(E_0,\hat{\mu}_I)$ and the continuous spectrum $\sigma_{ac}(L_h) = [\hat{\mu}_I,\infty)$. We write $\lambda_\alpha = h^2 \Lambda_\alpha$. Since this is a one-dimensional problem, the eigenvalues are simple and satisfy \begin{gather*} E_0 < \lambda_1(h) < \lambda_2(h) < \ldots < \lambda_{\mathfrak{M}}(h) < \hat{\mu}_I ; \end{gather*} the number of eigenvalues, $\mathfrak{M}$, increases as $h$ decreases. We will study how to reconstruct the profile $\hat{\mu}$ using only the asymptotic behavior of $\lambda_\alpha(h)$ in $h$.
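Although the reconstruction developed below relies only on semiclassical asymptotics, it is instructive to compute the discrete spectrum numerically for a given profile. The following sketch is an illustration under our own assumptions (the profile, the truncation depth and the grid are not taken from the analysis): it discretizes $L_h$ in divergence form, imposes the Neumann condition at $Z=0$, truncates the half line at a depth where $\hat{\mu}$ is essentially constant, and returns the eigenvalues below $\hat{\mu}_I$.
\begin{verbatim}
import numpy as np

def love_spectrum(mu, h, Z_min=-12.0, n=1200):
    # Finite-difference sketch of L_h u = -h^2 (mu u')' + mu u on [Z_min, 0]
    # with a Neumann condition at Z = 0; the half line is truncated at Z_min,
    # where eigenfunctions of the point spectrum have already decayed.
    Z, dZ = np.linspace(Z_min, 0.0, n, retstep=True)
    mu_mid = mu(0.5 * (Z[:-1] + Z[1:]))        # mu at the cell interfaces
    c = (h / dZ) ** 2
    L = np.zeros((n, n))
    for j in range(n):
        left = mu_mid[j - 1] if j > 0 else 0.0   # zero-flux closure at the truncated bottom
        right = mu_mid[j] if j < n - 1 else 0.0  # Neumann condition at the surface Z = 0
        L[j, j] = c * (left + right) + mu(Z[j])
        if j > 0:
            L[j, j - 1] = L[j - 1, j] = -c * left
    evals = np.linalg.eigh(L)[0]
    return evals[evals < mu(Z_min)]              # approximate point spectrum below mu_I

# Illustrative profile: decreasing near the surface, with one interior well (an assumption).
mu_profile = lambda Z: 1.0 + 0.5 * np.tanh(-Z - 3.0) - 0.3 * np.exp(-(Z + 6.0) ** 2)
print(love_spectrum(mu_profile, h=0.05))
\end{verbatim}
The returned values play the role of the data $\lambda_\alpha(h)$ whose asymptotic behavior in $h$ is exploited in the sequel.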
To this end, we introduce the semiclassical spectrum as in \cite{CdV2011} \begin{definition} For a given $E$ with $E_0 < E \leq \hat{\mu}_I$ and a positive real number $N$, a sequence $\mu_\alpha(h)$, $\alpha=1,2,\ldots$ is a semiclassical spectrum of $L_h$ mod $o(h^N)$ in $]-\infty,E[$ if, for all $\lambda_\alpha(h) < E$, $$ \lambda_\alpha(h) = \mu_\alpha(h) + o(h^N) $$ uniformly on every compact subset $K$ of $]-\infty,E[\,$. \end{definition} \section{Reconstruction of a monotonic profile} \label{S-decreasing} In this section, we give a reconstruction scheme for the simple situation where the profile $\hat{\mu}$ is monotonic. First, it is well known that \begin{lemma} \label{first} The first eigenvalue of $L_h$ satisfies $\lim_{h \rightarrow 0} \lambda_1(h) = E_0$. \end{lemma} Similar to Theorem 3 in \cite{CDV2}, we have \begin{theorem} \label{CdV-decr} Assume that $\hat{\mu}$ is decreasing in $[Z_I,0]$. Then the asymptotics of the discrete spectrum $\lambda_j(h)$, $1 \leq j \leq \mathfrak{M}$, as $h \to 0$ determine the function $\hat{\mu}$. \end{theorem} \noindent Before giving the proof, we recall the Abel transform and its inverse. We introduce \[ \mathcal{A} g(E) = \int_{E_0}^E \sqrt{E - u} g(u) \, \mathrm{d}u . \] Then \[ \frac{\mathrm{d}}{\mathrm{d}E} \mathcal{A} g(E) = \frac{1}{2} T g(E) , \quad T g(E) = \int_{E_0}^E \frac{g(u)}{\sqrt{E - u}} \, \mathrm{d}u , \] where $T g$ signifies the Abel transform of $g$. By the inversion formula for the Abel transform, \[ \frac{\mathrm{d}}{\mathrm{d}E} T^2 g(E) = \pi g(E) , \] we get \begin{equation} \label{eq:Atinv} \left(\frac{4}{\pi} \frac{\mathrm{d}^2}{\mathrm{d}E^2} \mathcal{A} \frac{\mathrm{d}}{\mathrm{d}E} \mathcal{A}\right) g(E) = g(E) . \end{equation} \begin{proof} First, we note that $E_0 = \hat{\mu}(0)$ is determined by the first semiclassical eigenvalue $\lambda_1(h)$ by Lemma~\ref{first}. Then, we invoke Weyl's law. For $E < \hat{\mu}_I$, let $N(h,E) = \#\{ \lambda_j(h) \leq E \}$, where $\lambda_j(h)$ is an eigenvalue for $L_h$. Then \cite{dHINZ} \begin{equation} \label{countingf} N(h,E) = \frac{1}{2\pi h} \left[ {\rm Area}(\{(Z,\zeta)\ :\ \hat{\mu}(Z) (1 + \zeta^2) \leq E\}) + o(1)\right] . \end{equation} Thus, from the leading order behavior (in $h$) of $\lambda_j(h)$ we can recover \[ {\rm Area}(\{(Z,\zeta)\ :\ \hat{\mu}(Z) (1 + \zeta^2) \leq E\}) = 2 \tilde{S}^1_0(E) ,\quad \tilde{S}^1_0(E) = \int_{f(E)}^0 \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \, \mathrm{d}Z , \] with $\hat{\mu}(f(E)) = E$. We change the variable of integration, $Z = f(u)$, with \begin{equation} \label{sC} \left.\frac{\mathrm{d}}{\mathrm{d}Z} \hat{\mu}(Z) \right|_{Z = f(u)} = \frac{1}{f'(u)} \end{equation} and get \[ \tilde{S}^1_0(E) = \mathcal{A} g(E) ,\quad g(u) = \frac{f'(u)}{\sqrt{u}} . \] Applying (\ref{eq:Atinv}) above, we recover $g$, that is, \[ f'(E) = \left(\frac{4}{\pi} \sqrt{E} \frac{\mathrm{d}^2}{\mathrm{d}E^2} \mathcal{A} \frac{\mathrm{d}}{\mathrm{d}E} \right) \tilde{S}^1_0(E) ,\quad E_0 < E < \hat{\mu}_I . \] Then \[ f(E) = \int_{E_0}^E f'(u) \, \mathrm{d}u , \] using that $f(E_0) = 0$ and knowledge of $E_0 = \hat{\mu}(0)$ from the first eigenvalue (Lemma \ref{first}), from which we recover $\hat{\mu}$ by the inverse function theorem. \end{proof} \section{Bohr-Sommerfeld quantization} \label{BS rule} The Bohr-Sommerfeld rules give a quantization for the semiclassical spectrum \cite{CdV2005}. We will derive these rules making use of the WKB-Maslov Ansatz for the eigenfunctions.
We obtain an alternative proof to the one given in \cite{CDV3, CdV2011}, which enables us to explicitly incorporate Neumann boundary conditions at the surface. It opens the way for studying inverse problems also for Rayleigh waves; these will be investigated in a follow-up paper. We construct WKB solutions of the form \begin{equation} \label{WKB} u_h(Z) = C \exp\left[\frac{1}{h} \sum_{j=0}^\infty h^j \mathcal{S}_j(Z) \right] \end{equation} that satisfy \begin{equation}\label{eq10} -h^2 \hat{\mu}(Z) u_h''(Z) - h^2 \hat{\mu}'(Z) u_h'(Z) + \hat{\mu}(Z) u_h(Z) = E u_h(Z) . \end{equation} We will follow various calculations from \cite{Bender} in the following analysis. \subsection{Half well} We consider the eigenvalue problem on the half line $\mathbb{R}^-$, with a Neumann boundary condition at $Z = 0$. We fix a real number $E$ and assume that there exists a unique $Z_E$ such that $\hat{\mu}(Z_E) = E$. For exposition of the construction, we change the variable $Z \rightarrow Z_E - Z$ such that $\hat{\mu}(0) = E$ and $Z_E$ is the boundary point. Furthermore, we assume that $\hat{\mu}(Z) - E > 0$ for $Z > 0$ and $\hat{\mu}(Z) - E < 0$ for $Z_E < Z < 0$. The original domain $]-\infty,0]$ changes to $[Z_E,\infty[$. We divide the domain $[Z_E,\infty[$ into three regions: region $\mathrm{I}$ $(Z > 0)$, region $\mathrm{II}$ ($|Z|$ is small) and region $\mathrm{III}$ $(Z_E \leq Z < 0)$. We will construct WKB solutions in each region and glue them together. First, we construct the WKB solution, $u_\mathrm{I}(Z)$, in \textbf{region I}. We substitute solutions of the form (\ref{WKB}), collect terms of equal orders in $h$, and arrive at an infinite family of equations which may be solved recursively. The $\mathcal{O}(h^0)$ terms give the eikonal equation for $\mathcal{S}_0$, $$ \hat{\mu}(Z) (1 - (\mathcal{S}_0'(Z))^2) = E . $$ We select the solution \begin{equation} \mathcal{S}_0(Z) = -\int_0^Z \sqrt{\frac{\hat{\mu} - E}{\hat{\mu}}} \mathrm{d}Z' . \end{equation} Then the $\mathcal{O}(h)$ term yields $$ \hat{\mu} \mathcal{S}_0'' + 2 \hat{\mu} \mathcal{S}_0' \mathcal{S}_1' + \hat{\mu}' \mathcal{S}_0' = 0 , $$ which implies that $$ \mathcal{S}_1' = -\frac{1}{2} (\log(\hat{\mu} \mathcal{S}_0'))' = -\frac{1}{4} \left(\log[\hat{\mu} (\hat{\mu} - E)]\right)' ; $$ we select the solution \begin{equation} \mathcal{S}_1 = -\frac{1}{4} \log[\hat{\mu} (\hat{\mu} -E)] . \end{equation} The lower order terms give us a sequence of equations, $$ 2 \hat{\mu} \mathcal{S}_0' \mathcal{S}_j' + (\hat{\mu} \mathcal{S}_{j-1}')' + \hat{\mu} \sum_{k=1}^{j-1} \mathcal{S}_{j-k}' \mathcal{S}_k' = 0 ,\quad j \geq 2 . $$ We write down the explicit form of $\mathcal{S}_2$ for later use: \begin{equation} \label{eq:calS2} \mathcal{S}_2(\delta,Z) = \int_\delta^Z \left[\frac{(E \hat{\mu}' - 2 \hat{\mu} \hat{\mu}')^2}{ 32 \hat{\mu}^{3/2} (\hat{\mu} - E)^{5/2}} + \frac{-E^2 \hat{\mu}'' + 3 E \hat{\mu} \hat{\mu}'' - 2 \hat{\mu}^2 \hat{\mu}'' + E (\hat{\mu}')^2}{ 8 (\hat{\mu} - E)^{5/2} \hat{\mu}^{1/2}}\right] \mathrm{d}Z' , \end{equation} up to a constant difference; here $\delta$ is any small fixed positive constant. Upon integrating by parts, we obtain \begin{equation} \begin{split} \mathcal{S}_2(\delta,Z) = -\frac{(3 E + 2 \hat{\mu}) \hat{\mu}'}{48 \hat{\mu}^{1/2} (\hat{\mu} - E)^{3/2}} -&\frac{\hat{\mu}'}{24 (\hat{\mu} - E)^{1/2} \hat{\mu}^{1/2}} \\ &+ \int_\delta^Z \left[-\frac{(\hat{\mu}')^2}{24 \hat{\mu}^{3/2} (\hat{\mu} - E)^{1/2}} + \frac{(7 E - 8 \hat{\mu}) \hat{\mu}''}{ 48 \hat{\mu}^{1/2}(\hat{\mu} - E)^{3/2}}\right] \mathrm{d}Z' .
\end{split} \end{equation} Next, we consider \textbf{region II} containing the turning point. When $|Z|$ is small, we expand $$ \hat{\mu}(Z) - E = a_1 Z + a_2 Z^2 + a_3 Z^3 + \cdots . $$ Here, $a_1 > 0$. We write $u_{\mathrm{II}}(Z) = \hat{\mu}^{-1/2}(Z) \, v_{\mathrm{II}}(Z) = (E + a_1 Z + a_2 Z^2 + a_3 Z^3 + \cdots)^{-1/2} v_{\mathrm{II}}(Z)$. Then we obtain $$ -h^2 \frac{\mathrm{d}}{\mathrm{d}Z} \left(\hat{\mu} \frac{\mathrm{d} u_{\mathrm{II}}}{\mathrm{d}Z}\right) = -h^2 \frac{\mathrm{d}}{\mathrm{d}Z} \left(\hat{\mu} \frac{\mathrm{d}}{ \mathrm{d}Z} \hat{\mu}^{-1/2}(Z) v_{\mathrm{II}}(Z)\right) = -h^2 \hat{\mu}^{1/2} v_{\mathrm{II}}'' + h^2 (\hat{\mu}^{1/2})'' v_{\mathrm{II}} . $$ Thus, by (\ref{eq10}), we have the following equation for $v_{\mathrm{II}}$, \begin{equation} \label{eq11} h^2 v_{\mathrm{II}}'' = \left( 1 - E \hat{\mu}^{-1} + h^2 \frac{(\hat{\mu}^{1/2})''}{\hat{\mu}^{1/2}} \right) \, v_{\mathrm{II}} . \end{equation} We further employ the simple asymptotic expansion, $$ 1 - E \hat{\mu}^{-1}(Z) = b_1 Z + b_2 Z^2 + \cdots , $$ where $b_1 = \frac{a_1}{E}$ and $b_2 = \frac{a_2 E - a_1^2}{E^2}$. Temporarily, we introduce the scaling $Z = h^{2/3} b_1^{-1/3} Y$. With abuse of notation for $v_{\mathrm{II}}$, (\ref{eq11}) gives $$ h^2 h^{-4/3} b_1^{2/3} \frac{\mathrm{d}^2 v_{\mathrm{II}}}{\mathrm{d}Y^2} = (b_1 h^{2/3} b_1^{-1/3} Y + b_2 h^{4/3} b_1^{-2/3} Y^2 + \cdots) \, v_{\mathrm{II}} , $$ which can be simplified to \begin{equation} \label{eq12} \frac{\mathrm{d}^2 v_{\mathrm{II}}}{\mathrm{d}Y^2} \sim (Y + h^{2/3} b_1^{-4/3} b_2 Y^2) \, v_{\mathrm{II}} , \end{equation} keeping the second-order approximation. We then seek an approximate solution of the form $$ v_{\mathrm{II}}(Y) \sim D (1 + \alpha_1 h^{2/3} Y) \, \mathrm{Ai}(Y + \beta_1 h^{2/3} Y^2) , $$ where $\mathrm{Ai}$ is the Airy function and $D$, $\alpha_1$ and $\beta_1$ are constants to be determined. By tedious calculations, we find that \begin{multline*} \frac{\mathrm{d}^2 v_{\mathrm{II}}}{\mathrm{d}Y^2} \sim D \Big[ \alpha_1 h^{2/3} \mathrm{Ai}'(Y + \beta_1 h^{2/3} Y^2) + \alpha_1 h^{2/3} (1 + 2 \beta_1 h^{2/3} Y) \, \mathrm{Ai}'(Y + \beta_1 h^{2/3} Y^2) \\ + (1 + \alpha_1 h^{2/3} Y) (1 + 2 \beta_1 h^{2/3} Y)^2 \mathrm{Ai}''(Y + \beta_1 h^{2/3} Y^2) \\ + (1 + \alpha_1 h^{2/3} Y) \, 2 \beta_1 h^{2/3} \mathrm{Ai}'(Y + \beta_1 h^{2/3} Y^2) \Big] . \end{multline*} Comparing this equation with differential equation (\ref{eq12}), and using that $$ \mathrm{Ai}''(Y + \beta_1 h^{2/3} Y^2) = (Y + \beta_1 h^{2/3} Y^2) \, \mathrm{Ai}(Y + \beta_1 h^{2/3} Y^2) , $$ we must have $$ \alpha_1 + \beta_1 = 0 , $$ and $$ 5 \beta_1 = b^{-4/3}_1 b_2 . $$ Hence, undoing the scaling and returning to the original (depth) coordinate, $$ v_{\mathrm{II}}(Z) \sim D \left(1 - \frac{b_2}{5 b_1} Z \right) \, \mathrm{Ai}\left[b_1^{1/3} h^{-2/3} \left(Z + \frac{b_2Z^2}{5 b_1}\right)\right] $$ so that \begin{multline} u_{\mathrm{II}}(Z) \sim D \left(E^{-1/2} - \frac{1}{2} E^{-3/2} a_1 Z\right) \\ \left(1 - \frac{a_2 E - a_1^2}{5 E a_1} Z\right) \mathrm{Ai}\left[\left( \frac{a_1}{E}\right)^{1/3} h^{-2/3} \left(Z + \frac{a_2 E - a_1^2}{5 E a_1} Z^2 \right)\right] . \end{multline} Now, we examine $u_{\mathrm{I}}(Z)$ for small $Z$.
We make the following approximations: \begin{eqnarray*} [\hat{\mu}(Z) (\hat{\mu}(Z) - E)]^{-1/4} &\sim& Z^{-1/4} (E a_1)^{-1/4} \left( 1 - \frac{1}{4} \frac{E a_2 + a_1^2}{E a_1} Z \right) , \\ \int_0^Z \sqrt{\frac{\hat{\mu} - E}{\hat{\mu}}} \mathrm{d}Z' &\sim& \frac{2}{3} Z^{3/2} \left(\frac{a_1}{E}\right)^{1/2} + \frac{E a_2 - a_1^2}{5 E a_1} \left( \frac{a_1}{E} \right)^{1/2} Z^{5/2} , \\ \mathcal{S}_2(\delta,Z) &\sim& - \frac{5}{48} E^{1/2} a_1^{-1/2} Z^{-3/2} - \frac{E^{1/2} a_2 a_1^{-3/2}}{12} \delta^{-1/2} . \end{eqnarray*} In the asymptotic expansion of $\mathcal{S}_2$, we neglect terms $\mathcal{O}(Z^{-1/2})$, which is justified because $h Z^{-1/2}$ is small (compared to $h \delta^{-1/2}$, $h Z^{-3/2}$) in the limit $h \to 0$. Substituting these formulas into $u_{\mathrm{I}}$ gives \begin{multline*} u_{\mathrm{I}} \sim C Z^{-1/4} (E a_1)^{-1/4} \left(1 - \frac{1}{4} \frac{E a_2 + a_1^2}{E a_1} Z\right) \\ \exp\left[-\frac{2}{3h} Z^{3/2} \left(\frac{a_1}{E}\right)^{1/2} - \frac{1}{5h} \frac{E a_2 - a_1^2}{E a_1} \left(\frac{a_1}{E}\right)^{1/2} Z^{5/2}\right. \\ \left. - \frac{h}{48} E^{1/2} a_1^{-5/2} Z^{-3/2} - \frac{h}{12} a_1^{-3/2} (2 a_1 - E) E^{-1/2}Z^{-3/2} - \frac{h E^{1/2} a_2 a_1^{-3/2}}{12} \delta^{-1/2}\right] . \end{multline*} Revisiting $u_{\mathrm{II}}(Z)$, for large positive $Z$, we employ the asymptotic behavior of $\mathrm{Ai}$, \[ \mathrm{Ai}(s) \sim \frac{1}{2\sqrt{\pi}} s^{-1/4} \left(1 - \frac{5}{48} s^{-3/2}\right) \exp\left[-\frac{2}{3} s^{3/2}\right] \] and obtain \begin{multline*} u_{\mathrm{II}}(Z) \sim D \frac{1}{2 \sqrt{\pi}} \left(\frac{a_1}{E}\right)^{-1/12} h^{1/6} Z^{-1/4} E^{-1/2} \left(1 - \frac{E a_2 + a_1^2}{4 E a_1} Z\right) \\ \left(1 - \frac{5}{48} h Z^{-3/2} \left(\frac{a_1}{E}\right)^{-1/2} \right) \exp\left[-\frac{2}{3} \left(\frac{a_1}{E}\right)^{1/2} h^{-1} Z^{3/2} \left(1 + \frac{3}{2} \left(\frac{a_2 E - a_1^2}{5 E a_1} \right) Z \right)\right] . \end{multline*} Uniformly asymptotically matching $u_{\mathrm{I}}$ and $u_{\mathrm{II}}$ then leads to the condition, \begin{equation} C = \frac{D}{2\sqrt{\pi}} h^{1/6} \exp\left[\frac{h E^{1/2} a_2 a_1^{-3/2}}{12} \delta^{-1/2}\right] a_1^{1/6} E^{-1/3} . \end{equation} In \textbf{region III}, we construct the (oscillatory) WKB solution, \begin{multline} \label{eq14} u_{\mathrm{III}}(Z) \sim F [(E - \hat{\mu}) \hat{\mu}]^{-1/4} \exp\left[\frac{\mathrm{i}}{h} \mathcal{S}_0(Z) + \mathrm{i} h \mathcal{S}_2(\delta,Z)\right] \\ + G [(E - \hat{\mu}) \hat{\mu}]^{-1/4} \exp\left[-\frac{\mathrm{i}}{h} \mathcal{S}_0(Z) - \mathrm{i} h \mathcal{S}_2(\delta,Z)\right] , \quad\quad Z,h\rightarrow 0^+ , \end{multline} where \begin{equation} \label{eq:calS0b} \mathcal{S}_0(Z) = \int_{Z}^0 \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \mathrm{d}Z' \end{equation} and \begin{multline} \label{eq:calS2b} \mathcal{S}_2(\delta,Z) = -\int_{Z}^{-\delta} \left[\frac{(E \hat{\mu}' - 2 \hat{\mu} \hat{\mu}')^2}{ 32 \hat{\mu}^{3/2} (E - \hat{\mu})^{5/2}} + \frac{-E^2 \hat{\mu}'' + 3 E \hat{\mu} \hat{\mu}'' - 2 \hat{\mu}^2 \hat{\mu}'' + E (\hat{\mu}')^2}{8 (E - \hat{\mu})^{5/2} \hat{\mu}^{1/2}}\right] \mathrm{d}Z' \\ = \frac{(3 E + 2 \hat{\mu}) \hat{\mu}'}{ 48 \hat{\mu}^{1/2} (E - \hat{\mu})^{3/2}} + \frac{\hat{\mu}'}{24 (E - \hat{\mu})^{1/2} \hat{\mu}^{1/2}} + \int_{Z}^{-\delta} \left[\frac{(\hat{\mu}')^2}{ 24 \hat{\mu}^{3/2} (E - \hat{\mu})^{1/2}} - \frac{(7 E - 8 \hat{\mu}) \hat{\mu}''}{ 48 \hat{\mu}^{1/2} (E - \hat{\mu})^{3/2}} \right] \mathrm{d}Z' .
\end{multline} Next, we uniformly asymptotically match $u_{\mathrm{II}}$ and $u_{\mathrm{III}}$. To this end, we consider the asymptotic behavior of $\mathrm{Ai}(s)$ for large negative $s$, \[ \mathrm{Ai}(s) \sim \frac{1}{\sqrt{\pi}} s^{-1/4} \sin\left[-\frac{2}{3} s^{3/2} + \frac{\pi}{4}\right] , \] and obtain \begin{multline*} u_{\mathrm{II}}(Z) \sim D \frac{1}{\sqrt{\pi}} \left(\frac{a_1}{E}\right)^{-1/12} h^{1/6} Z^{-1/4} E^{-1/2} \left(1 - \frac{E a_2 + a_1^2}{4 E a_1} Z\right) \\ \sin\left[-\frac{2}{3} \left(\frac{a_1}{E}\right)^{1/2} h^{-1} Z^{3/2} \left(1 + \frac{3}{2} \left(\frac{a_2 E - a_1^2}{5 E a_1}\right) Z\right) + \frac{\pi}{4}\right] ,\quad\quad Z,h\rightarrow 0^+ . \end{multline*} Matching requires that $u_{\mathrm{III}}(Z)$ has the form \begin{multline*} u_{\mathrm{III}}(Z) \sim \frac{D}{\sqrt{\pi}} \left(\frac{a_1}{E}\right)^{-1/12} h^{1/6} E^{-1/2} [(E - \hat{\mu}) \hat{\mu}]^{-1/4} \\ \sin\left[\frac{1}{h} \mathcal{S}_0(Z) + \frac{\pi}{4} + h \mathcal{S}_2(\delta,Z) - \frac{h E^{1/2} a_2 a_1^{-3/2}}{12} \delta^{-1/2}\right] , \quad\quad Z,h\rightarrow 0^+ . \end{multline*} Thus \begin{eqnarray} F &=&\frac{D}{2 \sqrt{\pi}} \left(\frac{a_1}{E}\right)^{-1/12} h^{1/6} E^{-1/2} \exp\left[ \frac{\mathrm{i} \pi}{4} - \mathrm{i} \frac{h E^{1/2} a_2 a_1^{-3/2} \delta^{-1/2}}{ 12} \right] , \\ G &=&-\frac{D}{2\sqrt{\pi}} \left(\frac{a_1}{E}\right)^{-1/12} h^{1/6} E^{-1/2} \exp\left[ -\frac{\mathrm{i} \pi}{4} + \mathrm{i} \frac{h E^{1/2} a_2 a_1^{-3/2} \delta^{-1/2}}{ 12} \right] . \end{eqnarray} The \textbf{Neumann boundary condition} pertains to region III; it is applied at $Z = Z_E$ in the shifted coordinate and yields the Bohr-Sommerfeld rule. It takes the implicit form \begin{multline} \label{eq:BS-1} \cot\left[ \frac{1}{h} \mathcal{S}_0(Z_E) + \frac{\pi}{4} + h \mathcal{S}_2(\delta,Z_E) - \frac{h E^{1/2} a_2 a_1^{-3/2}}{24} \delta^{-1/2} \right] \\ = \mathfrak{F}(h,E) ,\quad \mathfrak{F}(h,E) = \left.\frac{h (E - 2 \hat{\mu}) \hat{\mu}'}{4 (E - \hat{\mu}) \hat{\mu} \left(-\sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} + h^2 \mathcal{S}_2'\right)} \right|_{Z = Z_E} . \end{multline} We carry out an asymptotic expansion of $\cot^{-1} (\mathfrak{F}(h,E))$ in the small $h$ limit, \[ \cot^{-1}(\mathfrak{F}(h,E)) = \frac{\pi}{2} + h \mathfrak{F}_1(E) + \mathcal{O}(h^2) , \] where \[ \mathfrak{F}_1(E) = \left. \frac{(E - 2 \hat{\mu}) \hat{\mu}'}{4 (E - \hat{\mu})^{3/2} \hat{\mu}^{1/2}}\right|_{Z = Z_E} . \] We undo the shift and return to the original (depth) coordinate. We consider, again, a function $f$ such that $\hat{\mu}(f(E)) = E$, that is, $Z_E = f(E)$. Substituting (\ref{eq:calS0b}) and (\ref{eq:calS2b}), (\ref{eq:BS-1}) takes the form \begin{multline*} \frac{1}{h} \int_{f(E)}^0 \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \mathrm{d}Z + \frac{\pi}{4} + \frac{(3 E + 2 \hat{\mu}(0)) \hat{\mu}'(0)}{ 48 \hat{\mu}^{1/2}(0) (E - \hat{\mu}(0))^{3/2}} + \frac{\hat{\mu}'(0)}{ 24 (E - \hat{\mu}(0))^{1/2} \hat{\mu}^{1/2}(0)} \\ + \int_{f(E) + \delta}^{0} \left[ \frac{(\hat{\mu}')^2}{24 \hat{\mu}^{3/2}(E - \hat{\mu})^{1/2}} - \frac{(7 E - 8 \hat{\mu}) \hat{\mu}''}{ 48 \hat{\mu}^{1/2} (E - \hat{\mu})^{3/2}} \right] \mathrm{d}Z - \frac{h E^{1/2} a_2 a_1^{-3/2}}{24} \delta^{-1/2} \\ = \left(\alpha - \frac{1}{2}\right) \pi + h \mathfrak{F}_1(E) , \quad \alpha = 1, 2, \cdots, \end{multline*} where \[ \mathfrak{F}_1(E) = \frac{(-E + 2 \hat{\mu}(0)) \hat{\mu}'(0)}{ 4 (E - \hat{\mu}(0))^{3/2} \hat{\mu}^{1/2}(0)} .
\] By letting $\delta \downarrow 0$, using that \[ \hat{\mu}(f(E) + \delta) - E \sim -a_1 \delta + a_2 \delta^2 , \] where $a_1 > 0$, and that \[ \frac{E^{1/2} a_2 a_1^{-3/2}}{24} \delta^{-1/2} \sim -\frac{E \hat{\mu}''(f(E))}{ 12 \sqrt{\hat{\mu}(f(E)) (E - \hat{\mu}(f(E) - \delta))}} \frac{1}{\hat{\mu}'(f(E))} , \] we obtain the quantization rule, \begin{equation*} \frac{1}{h} \frac{1}{4}\widetilde{S}_0(E) + \frac{\pi}{4} + h \frac{1}{4}\widetilde{S}_2(E) = \left(\alpha - \frac{1}{2}\right) \pi + \mathcal{O}(h^2) , \end{equation*} where \begin{equation} \label{tSfirst} \widetilde{S}_0(E) = 4\int_{f(E)}^0 \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \mathrm{d}Z \end{equation} and \begin{equation} \frac{1}{4}\widetilde{S}_2(E) = \frac{(3 E + 2 \hat{\mu}(0)) \hat{\mu}'(0)}{ 48 \hat{\mu}^{1/2}(0) (E - \hat{\mu}(0))^{3/2}} + \frac{\hat{\mu}'(0)}{24 (E - \hat{\mu}(0))^{1/2} \hat{\mu}^{1/2}(0)} - \frac{1}{24} \frac{\mathrm{d}}{\mathrm{d}E} \widetilde{J}(E) - \frac{1}{8} \widetilde{K}(E) - \mathfrak{F}_1(E) , \end{equation} in which \begin{eqnarray} \widetilde{J}(E) &=& \int_{f(E)}^0 \left( E \hat{\mu}'' - \frac{2 (E - \hat{\mu})}{\hat{\mu}} (\hat{\mu}')^2 \right) \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \\ \widetilde{K}(E) &=& \int_{f(E)}^0 \hat{\mu}'' \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} . \label{tSlast} \end{eqnarray} This quantization rule is satisfied by $E = \nu_{\alpha}(h)$ for the half well. \begin{remark} The above quantization rule suggests that $\lambda_1 = E_0 + \mathcal{O}(h^{2/3})$ under Assumption \ref{assu1}, since the first eigenvalue is (semiclassically) associated with the half well. This would give us an improved version of Lemma~\textnormal{\ref{first}}. If $\hat{\mu}'(0) = 0$, then the same quantization rule would lead to $\lambda_1 = E_0 + \mathcal{O}(h)$. \end{remark} \subsection{Full well} In anticipation that the Neumann boundary condition will not play a role here, we consider the eigenvalue problem on the entire real line. We assume that there are two simple turning points, at $Z = f_-(E)$ and at $Z = f_+(E)$; that is, $\hat{\mu} < E$ on $]f_-(E),f_+(E)[$, and $\hat{\mu} > E$ on $]-\infty,f_-(E)[$ and $]f_+(E),+\infty[$. Clearly, $\hat{\mu}(f_-(E)) = \hat{\mu}(f_+(E)) = E$. Similar to the half well case, we now construct WKB solutions in the different regions and match them in the neighborhoods of the two turning points $f_-(E)$ and $f_+(E)$. We let $a_{1,-}, a_{2,-}$ and $a_{1,+}, a_{2,+}$ be the expansion coefficients of $\hat{\mu} - E$ in the neighborhoods of $f_-(E)$ and $f_+(E)$, respectively.
We now have \begin{equation*} \begin{split} & \lim_{\delta \downarrow 0} \int_{f_-(E) + \delta}^{f_+(E) - \delta} -\frac{(7 E - 8 \hat{\mu}) \hat{\mu}''}{ 48 \hat{\mu}^{1/2} (E - \hat{\mu})^{3/2}} \mathrm{d}Z - \frac{E^{1/2} a_{2,-} a_{1,-}^{-3/2}}{12} \delta^{-1/2} - \frac{E^{1/2} a_{2,+} a_{1,+}^{-3/2}}{12} \delta^{-1/2} \\ =& \lim_{\delta \downarrow 0} \int_{f_-(E) + \delta}^{f_+(E) - \delta} \left( -\frac{\hat{\mu}''}{ 24 \hat{\mu}^{1/2}(E - \hat{\mu})^{1/2}} + \frac{E \hat{\mu}''}{48 \hat{\mu}^{1/2}(E - \hat{\mu})^{3/2}} \right) \mathrm{d}Z \\ & \quad\quad + \frac{E \hat{\mu}''(f_-(E))}{24 \sqrt{\hat{\mu}(f_-(E)) (E - \hat{\mu}(f_-(E) + \delta))}} \frac{1}{\hat{\mu}'(f_-(E))} \\ & \quad\quad - \frac{E \hat{\mu}''(f_+(E))}{24 \sqrt{\hat{\mu}(f_+(E)) (E - \hat{\mu}(f_+(E) - \delta))}} \frac{1}{\hat{\mu}'(f_+(E))} \\ & \quad\quad - \int_{f_-(E)}^{f_+(E)} \frac{\hat{\mu}''}{8 \hat{\mu}^{1/2}(E - \hat{\mu})^{1/2}} \mathrm{d}Z + \int_{f_-(E)}^{f_+(E)} \frac{(\hat{\mu}')^2}{ 24 \hat{\mu}^{3/2}(E - \hat{\mu})^{1/2}} \mathrm{d}Z \\ =& -\frac{1}{24} \frac{\mathrm{d}}{\mathrm{d}E}J(E) - \frac{1}{8}K(E) , \end{split} \end{equation*} where \begin{eqnarray} J(E) &=& \int_{f_{-}(E)}^{f_{+}(E)} \left( E \hat{\mu}'' - \frac{2 (E - \hat{\mu})}{\hat{\mu}} (\hat{\mu}')^2 \right) \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \label{eq:JEsw}\\ K(E) &=& \int_{f_{-}(E)}^{f_{+}(E)} \hat{\mu}'' \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} . \label{eq:KEsw} \end{eqnarray} That is, we arrive at the quantization \begin{equation*} \frac{1}{h} \frac{1}{2}S_0(E) + h \frac{1}{2}S_2(E) \sim \left(\alpha - \frac{1}{2}\right) \pi , \end{equation*} where \begin{equation} \label{formSfirst} S_0(E) = \frac{1}{2}\int_{f_-(E)}^{f_+(E)} \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \mathrm{d}Z \end{equation} and \begin{equation} \label{formSsecond} S_2(E) = -\frac{1}{12} \frac{\mathrm{d}}{\mathrm{d}E} J(E) - \frac{1}{4} K(E) . \end{equation} This quantization rule is satisfied by $E = \mu_{\alpha}(h)$ for the full well. We note that the above form has also been derived in \cite{CdV2011} using the method introduced in \cite{CdV2005}. \subsection{Multiple wells} In the case of multiple wells we invoke \begin{assumption} \label{assu_boundary well} There is a $Z^* < 0$ such that $\hat{\mu}'(Z^*) = 0$, $\hat{\mu}''(Z^*) < 0$ and $\hat{\mu}'(Z) < 0$ for $Z \in \,]Z^*,0[\,$. \end{assumption} \begin{assumption} \label{assu2jag} The function $\hat{\mu}(Z)$ has critical points only at a finite set $$\{ Z_1, Z_2, \cdots, Z_M \}$$ in $]Z_I,0[$, and all critical points are non-degenerate extrema. None of the critical values of $\hat{\mu}(Z)$ are equal, that is, $\hat{\mu}(Z_j) \not= \hat{\mu}(Z_k)$ if $j \not= k$. \end{assumption} \noindent We label the critical values of $\hat{\mu}(Z)$ as $E_1 < \ldots < E_M < \hat{\mu}_I$ and the corresponding critical points by $Z_1,\cdots,Z_M$. We use the fact that $\hat{\mu}(0) = \inf_{Z \leq 0} \hat{\mu}(Z)$ and denote $Z_0 = 0$ and $E_0 = \hat{\mu}(Z_0)$. We define a well of order $k$ as a connected component of $\{ Z \in (Z_I,0)\ :\ \hat{\mu}(Z) < E_k \}$ that is not connected to the boundary, $Z = 0$. We refer to the connected component connected to the boundary as a half well of order $k$. We denote $J_k = ]E_{k-1},E_k[$, $k = 1,2,3,\cdots$ and let $N_k$ ($\leq k$) be the number of wells of order $k$ (see Figure~\ref{trajectory2} top).
The set $\{Z \in (Z_I,0)\ :\ \hat{\mu}(Z) < E_k\}$ consists of $N_k$ wells and one half well \[ W_j^k(E) ,\quad j=1,2,\cdots,N_k ,\text{ and } \widetilde{W}^k(E) ,\quad (\cup_{j=1}^{N_k} W_j^k(E)) \cup \widetilde{W}^k(E) \subset [Z_I,0[ \, . \] The half well $\widetilde{W}^k(E)$ is connected to the boundary $Z = 0$. \begin{figure} \caption{Wells of different orders and periodic trajectories.} \label{trajectory2} \end{figure} Similar to Proposition 10.1 in \cite{CdV2011}, we have \begin{proposition}\label{CdV Pr.10.1bis} The semiclassical spectrum mod $o(h^{5/2})$ in $J_k$ is the union of $N_k + 1$ spectra: $\cup_{j=1}^{N_k} \Sigma_j^k(h) \cup \widetilde{\Sigma}^k(h)$. Here, $\Sigma_j^k(h)$ is the semiclassical spectrum associated to well $W_j^k$, and $\widetilde{\Sigma}^k(h)$ is the semiclassical spectrum for half well $\widetilde{W}^k$. \end{proposition} The above separation of semiclassical spectra comes from the fact that the eigenfunctions are $\mathcal{O}(h^\infty)$ outside the wells, and is related to the exponentially small ``tunneling'' effects \cite{Helffer, Zelditch}. We refer further to \cite{Bender} for more details. Therefore, we have Bohr-Sommerfeld rules for separated wells, that is, \begin{equation} \label{sepa} \Sigma_j^k(h) = \{\mu_\alpha(h)\ :\ E_{k-1} < \mu_\alpha(h) < E_k\text{ and }S^{k,j}(\mu_\alpha(h)) = 2\pi h \alpha\} , \end{equation} where $S^{k,j} = S^{k,j}(E) :\ ]E_{k-1},E_k[ \to \mathbb{R}$ admits the asymptotics in $h$ $$ S^{k,j}(E) = S^{k,j}_0(E) + h \pi + h^2 S_2^{k,j}(E) + \cdots $$ and \begin{equation} \widetilde{\Sigma}^k(h) = \{\nu_\alpha(h)\ :\ E_{k-1} < \nu_\alpha(h) < E_k\text{ and } \widetilde{S}^k(\nu_\alpha(h)) = 2\pi h \alpha\}, \end{equation} where $\widetilde{S}^k = \widetilde{S}^k(E) :\ ]E_{k-1},E_k[ \to \mathbb{R}$ admits the asymptotics $$ \widetilde{S}^k(E) = \frac{1}{2} \widetilde{S}^k_0(E) + \frac{3}{2} h \pi + \frac{1}{2} h^2 \widetilde{S}_2^k(E) + \cdots . $$ The form of $S^{k,j}$ is similar to the one given in (\ref{formSfirst})-(\ref{eq:KEsw}) and the form of $\widetilde{S}^k$ is similar to the one given in (\ref{tSfirst})-(\ref{tSlast}). We will give more details below. For alternative representations of $S^{k,j}$ and $\widetilde{S}^k$, we introduce the classical Hamiltonian $p_0(Z,\zeta) = \hat{\mu}(Z) (1 + \zeta^2)$. For any $k$, $p_0^{-1}(J_k)$ is a union of $N_k$ topological annuli $A_j^k$ and a half annulus $\widetilde{A}^k$. The map $p_0 :\ A_j^k \to J_k$ is a fibration whose fibers $p_0^{-1}(E) \cap A_j^k$ are topological circles $\gamma_j^k(E)$ that are periodic trajectories of classical dynamics (illustrated in Figure~\ref{trajectory2} bottom). Similarly, the fibers $p_0^{-1}(E) \cap \widetilde{A}^k$ of the map $p_0 :\ \widetilde{A}^k \to J_k$ are topological half circles $\widetilde{\gamma}^k(E)$. If $E \in J_k$ then $p_0^{-1}(E) = (\cup_{j=1}^{N_k} \gamma_j^k(E)) \cup \widetilde{\gamma}^k(E)$. The corresponding classical periods are \[ T_j^k(E) = \int_{\gamma_j^k(E)}|\mathrm{d}t| . \] We let $t$ be the parametrization of $\gamma^k_j(E)$ by time evolution in \begin{equation} \label{traj} \frac{\mathrm{d}Z}{\mathrm{d}t} = \partial_\zeta p_0 ,\quad \frac{\mathrm{d}\zeta}{\mathrm{d}t} = -\partial_Z p_0 \end{equation} for a realized energy level $E$. \begin{figure} \caption{Behavior of a half trajectory.} \label{trajectory} \end{figure} For the half well $\widetilde{W}^k$, $(Z,\zeta)$ follows a periodic (half) trajectory as shown in Figure~\ref{trajectory}.
After one (half-) period $T$, the trajectory reaches the boundary $Z(T) = 0$, and encounters a perfect reflection, so that \[ \zeta(T +) = -\zeta(T -) = \sqrt{\frac{E - \hat{\mu}(0)}{\hat{\mu}(0)}} , \] and then continues following the Hamilton system (\ref{traj}). \subsubsection{Wells separated from the boundary} For a well $W^k_j$ separated from the boundary, the associated semiclassical spectrum mod $o(h^{5/2})$ follows from (\ref{sepa}) and (\ref{eq:JEsw})-(\ref{formSsecond}). We have \begin{equation} \label{Actions} S^{k,j}(E) = S_0^{k,j}(E) + h \pi + h^2 S_2^{k,j}(E) , \end{equation} where \begin{equation} \label{eq:S0} S_0^{k,j}(E) = \int_{\gamma_j^k(E)} \zeta \mathrm{d}Z = {\rm Area} \{(Z,\zeta)\ :\ p_0(Z,\zeta) \leq E,\, Z\in W^k_j \} \end{equation} and \begin{equation} \label{eq:S2} S_2^{k,j}(E) = -\frac{1}{12} \frac{\mathrm{d}}{\mathrm{d}E} \int_{\gamma^k_j(E)} \left( E \hat{\mu}'' - 2 \frac{(E - \hat{\mu})}{\hat{\mu}} (\hat{\mu}')^2 \right) |\mathrm{d}t| - \frac14 \int_{\gamma^k_j(E)} \hat{\mu}'' |\mathrm{d}t| . \end{equation} The explicit forms of $S_0^{k,j}$ and $S_2^{k,j}$ are equivalent to those given in (\ref{formSfirst})-(\ref{eq:KEsw}). Here, the integration over $]f_-(E),f_+(E)[$, $E \in [E_{k-1},E_k]$, in the $Z$ coordinate has been changed into integration along the periodic trajectory $\gamma$. One can get the same results by using the method in \cite{CdV2005, CdV2011}. From (\ref{eq:S0}) it is immediate that \begin{equation} \label{eq:TkjS} (S^{k,j}_0)'(E) = T^k_j(E) . \end{equation} \subsubsection{Half well connected to the boundary} For the half well $\widetilde{W}^k$ connected to the boundary, we have, mod $\mathcal{O}(h^2)$, \begin{equation} \label{eq:tS} \widetilde{S}^k(E) = \frac12 \widetilde{S}^k_0(E) + h \frac32 \pi , \end{equation} where \begin{equation} \label{eq:tS0} \widetilde{S}^k_0(E) = 2 \int_{\widetilde{\gamma}^k(E)} \zeta \mathrm{d}Z . \end{equation} The explicit form of $\widetilde{S}^k_0$ is equivalent to the one given in (\ref{tSfirst}). Here, the integration over $]f(E),0[$, $E \in [E_{k-1},E_k]$, in the $Z$ coordinate has been changed into integration along the (half) periodic trajectory $\tilde{\gamma}$. As before, it follows that \begin{equation} \label{eq:tTkS} \frac12 (\widetilde{S}^k_0)'(E) = \frac{1}{2}\widetilde{T}^k(E) . \end{equation} The explicit form of $\widetilde{S}_2^k$ will not be needed in the following and hence we omit it. \noindent We note that $S_0^{k,j}$ and $\widetilde{S}^k_0$ depend only on periodic trajectories. \begin{remark} In the further analysis of the inverse problem, the explicit form of $S_2^k$ is only needed for the wells separated from the boundary (between two turning points) and there the formulas are exactly as in \textnormal{\cite{CdV2011}} (on the whole line without boundary conditions). Near the boundary (between a turning point and the boundary) the function $\hat{\mu}$ is strictly decreasing and only $S_0^k$ or the counting function for semiclassical eigenvalues suffice to reconstruct the profile. \end{remark} \section{Unique recovery of $\hat{\mu}$ from the semiclassical spectrum} \label{inverse} \subsection{Trace formula} The inverse problem is addressed with a trace formula as it reflects the data. \begin{lemma}[\cite{CdV2011}, Lemma 11.1] \label{CdV L11.1bis} Let $S :\ J \to {\Bbb R}$ be a smooth function with $S' > 0$. 
Then we have the following identity as Schwartz distributions in $J$, meaning that the equality holds when applying both sides to a test function $\phi \in C_0^\infty(J)$, \begin{equation} \label{CdV5bis} \sum_{\alpha \in {\Bbb Z}} \delta(E - S^{-1}(2\pi h \alpha)) = \frac{1}{2\pi h} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m S(E) / h} S'(E) . \end{equation} \end{lemma} Substituting the action in (\ref{Actions}), (\ref{eq:tS}) and the Bohr-Sommerfeld rules in (\ref{CdV5bis}) yields, on $J_k$ with $\{\mu_\alpha(h)\}_\alpha = \cup_{j=1}^{N_k} \Sigma_j^k(h)$, \begin{align*} \sum_{\alpha \in {\Bbb Z}} \delta(E - \mu_\alpha(h)) =& \frac{1}{2\pi h} \sum_{j=1}^{N_k} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m (S_0^{k,j}(E) h^{-1} + \pi + h S_2^{k,j}(E) + {\mathcal O}(h^2))} (({S^{k,j}_0})'(E) + h^2 ({S^{k,j}_2})'(E) + {\mathcal O}(h^3)) \\ =& \frac{1}{2\pi h} \sum_{j=1}^{N_k} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m S_0^{k,j}(E) h^{-1}} e^{\mathrm{i} m \pi} ({S^{k,j}_0})'(E) (1 + \mathrm{i} m h S_2^{k,j}(E) + {\mathcal O}(h^2)) \end{align*} and with $\{\nu_\alpha(h)\}_\alpha \subset \widetilde{\Sigma}^k(h)$, \begin{align*} \sum_{\alpha \in {\Bbb Z}} \delta(E - \nu_\alpha(h)) =& \frac{1}{2\pi h} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m (\frac{1}{2} \widetilde{S}_0^{k}(E) h^{-1} + \frac{3}{2} \pi + h \frac{1}{2} \widetilde{S}_2^{k}(E) + {\mathcal O}(h^2))} \left( \frac{1}{2} (\widetilde{S}^{k}_0)'(E) + \frac{h^2}{2} (\widetilde{S}^k_2)'(E) + {\mathcal O}(h^3) \right) \\ =& \frac{1}{2\pi h} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m \frac{1}{2} \widetilde{S}_0^{k}(E) h^{-1}} e^{\mathrm{i} m \frac{3}{2} \pi} \frac{1}{2} (\widetilde{S}^{k}_0)'(E) \left( 1 + \mathrm{i} m h \frac{1}{2} \widetilde{S}_2^{k}(E) + {\mathcal O}(h^2) \right) . \end{align*} Using (\ref{eq:TkjS}) and (\ref{eq:tTkS}), and writing $\mu_\alpha$ for $\nu_\alpha$ in a unified notation, we obtain the trace formula in \begin{theorem} \label{CdV Th.11.1boundary} Let $\mu_\alpha(h)$ be the semiclassical spectrum modulo $o(h^{5/2})$. As distributions on $J_k$, we have \begin{align} \sum_{\alpha \in {\Bbb Z}} \delta(E - \mu_\alpha(h)) =& \frac{1}{2\pi h} \sum_{j = 1}^{N_k} \sum_{m \in {\Bbb Z}} (-1)^m e^{\mathrm{i} m S_0^{k,j}(E) h^{-1}} T^k_j(E) (1 + \mathrm{i} m h S_2^{k,j}(E)) \nonumber\\ &+ \frac{1}{2\pi h} \sum_{m \in {\Bbb Z}} e^{\mathrm{i} m \frac12 \widetilde{S}_0^k(E) h^{-1}} e^{\mathrm{i} m \frac32\pi} \widetilde{T}^k(E) \left( 1 + \mathrm{i} m h \frac12 \widetilde{S}^k_2(E) \right) + o(1) . \label{CdV6boundary} \end{align} \end{theorem} \noindent The direct way to obtain this trace formula is starting from (\ref{trace_from_normal_modes}), that is, \begin{equation*} \int_{\mathbb{R}^-} \widehat{\epsilon \partial_t \mathcal{G}_0}(Z,x,\omega,Z,\xi;\epsilon) \mathrm{d}(\epsilon Z) \sim \frac{1}{2 h^2} \sum_{\alpha \in {\Bbb Z}} \delta(E - \mu_\alpha(h)) , \end{equation*} upon substituting $E = h^2 \omega^2$. We then expand the parametrix $(\ref{G0})$ in the WKB eigenfunctions $(\ref{WKB})$ from the previous section. We will use the notation \begin{eqnarray} Z^k_{m,j}(E) &=& \frac{1}{2\pi h} (-1)^m e^{\mathrm{i} m S^{k,j}_0(E) h^{-1}} T^k_j(E) (1 + \mathrm{i} m h S^{k,j}_2(E)) ,\quad j = 1,\cdots,N_k , \\ Z^k_{m,N_k+1}(E) &=& \frac{1}{2\pi h} e^{\mathrm{i} m \frac32 \pi} e^{\mathrm{i} m \frac12 \widetilde{S}^k_0(E) h^{-1}} T^k_{N_k+1}(E) \left( 1 + \mathrm{i} m h \frac12 \widetilde{S}^k_2(E) \right) , \\ T^k_{N_k+1}(E) :&=& \widetilde{T}^k(E) \end{eqnarray} for $m \in {\Bbb Z}$. 
To further unify the notation, we write $$ S^{k,N_k+1}_{0,2}(E) := \frac12 \widetilde{S}^k_{0,2}(E) . $$ The micro-support of $Z^k_{m,j}$, $j=1,\cdots,N_k+1$, is given by the Lagrangian submanifold $$ L^k_{m,j} = \{ (E,m T^k_j(E))\ :\ E \in J_k \} $$ of $T^*J_k$ associated with phase function $mS^{k,j}_0(E)$. \subsection{Separation of clusters and the weak transversality condition} \label{ssec:sepclu} We observe that the singular points of the counting function, $\int_{p_0(Z,\zeta) \le E} |\mathrm{d}Z \mathrm{d}\zeta|$, are precisely the critical values, $E_1, E_2, \cdots, E_M$, of $\hat{\mu}$ \cite[Lemma~11.1]{CdV2011} and, hence, are determined using the Weyl asymptotics first. From the singularity at $E_k$ one can extract the value of $\hat{\mu}''(Z_k)$. We then invoke \begin{assumption} For any $k = 1,2,\cdots$ and any $j, l$ with $1 \leq j < l \leq N_k+1$, {\em the classical periods (half-period if $j=N_k+1$) $T^k_j(E)$ and $T^k_l(E)$ are weakly transverse in $J_k$}, that is, there exists an integer $N$ such that the $N$th derivative $(T^k_j - T^k_l)^{(N)}(E)$ does not vanish. \end{assumption} \noindent We introduce the sets $$ B = \{ E \in J_k\ :\ \exists j \neq l ,\quad T^k_j(E) = T^k_l(E) \} , $$ while suppressing $k$ in the notation. By the weak transversality assumption, it follows that $B$ is a discrete subset of $J_k$. We let the distributions $D_h(E) = \sum_{\alpha \in {\Bbb Z}} \delta(E - \mu_\alpha(h))$ be given on intervals $J = J_k$ modulo $o(1)$ using (\ref{CdV6boundary}). These distributions are determined mod $o(1)$ by the semiclassical spectra mod $o(h^{5/2})$. We denote by $Z_h$ the finite sum defined by the right-hand side of (\ref{CdV6boundary}) restricted to $m = 1$, that is, $$ Z_h(E) = \sum_{j=1}^{N_k+1} Z^k_{1,j}(E) . $$ By analyzing the micro-support of $D_h$ and $Z_h$ \cite[Lemmas 12.2 and 12.3]{CdV2011}, we find \begin{lemma} Under the weak transversality assumption, the sets $B$ and the distributions $Z_h$ mod $o(1)$ are determined by the distributions $D_h$ mod $o(1)$. \end{lemma} \begin{lemma} Assuming that the $S^j$'s are smooth and the $a_j$'s do not vanish, there is a unique splitting of $Z_h$ as a sum $$ Z_h(E) = \frac{1}{2\pi h} \sum_{j=1}^{N_k+1} (a_j(E) + h b_j(E)) e^{\mathrm{i} S^j(E)/h} + o(1) . $$ \end{lemma} \noindent It follows that the spectrum in $J_k$ mod $o(h^{5/2})$ determines the actions $S^{k,j}_0(E)$, $S^{k,j}_2(E)$ and $\widetilde{S}^k_0(E)$. This provides the separation of the data for the $N_k$ wells and the half well. For the reconstruction of $\hat{\mu}$ from these actions, we need one more assumption \begin{assumption} \label{defect} The function $\hat{\mu}$ has a generic symmetry defect: If there exist $X_\pm$ satisfying $\hat{\mu}(X_-) = \hat{\mu}(X_+) < E$, and for all $N \in {\mathbb N}$, $\hat{\mu}^{(N)}(X_-) = (-1)^N \hat{\mu}^{(N)}(X_+)$, then $\hat{\mu}$ is globally even with respect to ${\frac{1}{2}} (X_+ + X_-)$ in the interval $\{ Z\ :\ \hat{\mu}(Z) < E \}$. \end{assumption} \noindent We will carry out the reconstruction of $\hat{\mu}$ successively in intervals $J_k$, $k = 1,\cdots,M$ and then on the interval $[E_M,E_{M+1}]$ with $E_{M+1} = \hat{\mu}_I$. \subsection{Reconstruction of a single well, with barrier and decreasing profile} We discuss in detail the case of one local minimum for $Z < 0$ and global minimum at $Z = 0$ ($\hat{\mu}(0) < \hat{\mu}(Z)$ $\forall$ $Z < 0$, $\hat{\mu}'(0) \leq 0$). This means that the global minimum occurs at $Z = 0$ and $E_1 = \hat{\mu}(Z_1)$ is the local minimum.
Then $E_2 = \hat{\mu}(Z_2)$ is attained at $Z_2 \in (Z_1,0)$ and $E_3 = \hat{\mu}_I$. \textit{Step 1}. For $E \in ]E_0,E_1[$, there is only one (half) well, $\widetilde{W}^1(E)$, of order $1$ with $\widetilde{W}^1(E_1) = [Z_1',0]$. Since $\hat{\mu}$ is strictly decreasing in $\widetilde{W}^1(E_1)$, we may reconstruct $\hat{\mu}$ on this interval as in Section~\ref{S-decreasing}. This is illustrated in Figure~\ref{separation2} in green. \begin{figure} \caption{Reconstruction Step 1 in green.} \label{separation2} \end{figure} \textit{Step 2}. We note that $Z_2$ in this case is the $Z^*$ defined above Assumption~\ref{assu_boundary well}. We consider $E \in \,]E_1,E_2[$ which corresponds to wells of order $k = 2$ with $N_k = 1$ (one connected component for $Z < 0$ separated from the boundary). The two wells are $W^{2,1}(E)$ and $\widetilde{W}^2(E)$ with $W^{2,1}(E_2) = [Z_-,Z_2]$ and $\widetilde{W}^2(E_2) = [Z_2,0]$. Here, $Z_-$ is the unique point in $[Z_I,Z_1]$ such that $E_2 = \hat{\mu}(Z_-)$. We are given $S^{2,1}_0$, $S^{2,1}_2$ and $\widetilde{S}^2_0$ (and $\widetilde{S}^2_2$). We continue to reconstruct $\hat{\mu}$ from $[Z'_1,0]$ to $[Z_2,0]$ from $\widetilde{S}_0^2$. For the reconstruction of $\hat{\mu}$ on the interval $I = [Z_-,Z_2]$, more effort is needed. We note that, up to this point, $I$ itself cannot be determined yet. The following theorem is a version of \cite[Theorem 5.1]{CdV2011} \begin{theorem} \label{CdV-well} Under Assumption \ref{defect}, the function $\hat{\mu}$ is determined on $I$ by $S_0^{2,1}$ and $S_2^{2,1}$ up to a symmetry $\hat{\mu}(Z) \rightarrow \hat{\mu}(c - Z)$, where $\frac{c}{2}$ is the midpoint of $I$. \end{theorem} \begin{proof} For any $E \in [E_1,E_2[$ the functions $f_\pm :\ [E_1, E_2[ \to I$, are defined so that $W^2_1(E) = [f_-(E),f_+(E)]$. We have $\hat{\mu}'(Z) < 0$ for $Z \in\, ]f_-(E),Z_1[$ and $\hat{\mu}'(Z) > 0$ for $Z \in\, ]Z_1,f_+(E)[$. We introduce \[ \Phi(E) = f_+'(E) - f_-'(E)\quad\text{and}\quad \Psi(E) = \frac{1}{f_+'(E)} - \frac{1}{f_-'(E)} . \] As in the proof of Theorem~\ref{CdV-decr}, we have \[ (S^{2,1}_0)'(E) = T g(E) ,\quad T g(E) = \int_{E_1}^E \frac{g(u)}{\sqrt{E - u}} \, \mathrm{d}u\quad\text{with}\quad g(u) = \frac{\Phi(u)}{\sqrt{u}} . \] The inversion formula for the Abel transform yields $\Phi(E)$ for $E \in [E_1,E_2[$. Concerning the recovery of $\Psi$, we have \[ S^{2,1}_2(E) = -\frac{1}{12}\frac{\mathrm{d}}{\mathrm{d}E}\mathcal{B} \Psi(E) ,\quad \mathcal{B} \Psi(E) = \int_{E_1}^E \left((7 E - 6 u) \Psi'(u) - 2 \left(\frac{E}{u} - 1 \right) \Psi(u)\right) \frac{\mathrm{d}u}{\sqrt{u (E - u)}} , \] which follows from (\ref{formSsecond}) with (\ref{eq:JEsw})-(\ref{eq:KEsw}) upon changing variable of integration, $Z = f_\pm(u)$. Thus, from $S^{2,1}_2(E)$ and the fact $\mathcal{B} \Psi(E_1) = \pi \sqrt{2 E_1 \hat{\mu}''(Z_1)}$, we can recover $\mathcal{B} \Psi(E)$. It can be shown that \[ \frac{\pi}{E^{3/2}} \frac{\mathrm{d}^2}{\mathrm{d} E^2} (T \circ \mathcal{B} \Psi)(E) = E^2 \Psi''(E) + 4 E \Psi'(E) - \Psi(E) . \] That is, we obtain a second-order inhomogeneous ordinary differential equation for $\Psi$ on the interval $[E_1,E_2[$. This equation is supplemented with the ``initial'' conditions \[ \Psi(E_1) = 0 ,\quad \lim_{E \downarrow E_1} \sqrt{E - E_1} \Psi'(E) = \sqrt{2 \hat{\mu}''(Z_1)} \] As mentioned in Subsection~\ref{ssec:sepclu}, this second derivative is obtained from the limiting behavior of the counting function which coincides with $S^{2,1}_0(E)$ as $E \downarrow E_1$. 
We use that the period of small oscillations of the ``pendulum'' associated to the local minimum of $\hat{\mu}$ at $Z_1$ is given by \[ (S^{2,1}_0)'(E) = \int_{f_-(E)}^{f_+(E)} \frac{\mathrm{d}Z}{\sqrt{\hat{\mu}(E - \hat{\mu})}} = \pi \sqrt{\frac{2}{E_1 \hat{\mu}''(Z_1)}} + o(1)\quad \text{as}\ E \downarrow E_1 . \] Thus we obtain $\Psi(E)$ for $E \in [E_1,E_2[$. With $\pm f_\pm'(E) > 0$ for $E \in\, ]E_1,E_2[$, we then find \begin{equation} \label{eq:fpm} 2 f_{\pm}' = \pm \Phi + \sigma \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}} \end{equation} with \begin{equation*} \sigma = \mathop{\rm sign}\nolimits(f_+' + f_-') = \left\{\begin{array}{ccc} +1 &\text{if}& f_+' + f_-' > 0 \\ 0 &\text{if}& f_+' + f_-' = 0 \\ -1 &\text{if}& f_+' + f_-' < 0 \end{array}\right. \end{equation*} We note that the sign is not (yet) determined, and only if the well is mirror-symmetric with respect to its vertex then $f_+' + f_-' = 0$ and the square root in (\ref{eq:fpm}) vanishes. However, later, we will find the sign by a gluing argument. By Assumption~\ref{defect}, the function $\sigma = \sigma(E)$ is constant for $E \in\, ]E_1,E_2[$. Hence, in what follows we will exchange $\sigma$ with $\pm$. We have \begin{eqnarray*} f_+(E) &=& Z_1 + \frac12 \int_{E_1}^E \left( \Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E , \\ f_-(E) &=& Z_1 + \frac12 \int_{E_1}^E \left( -\Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E . \end{eqnarray*} Since $f_+(E_2) = Z_2$ and $f_-(E_2) = Z_-$, we find that \begin{eqnarray*} Z_2 &=& Z_1 + \frac12 \int_{E_1}^{E_2} \left( \Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E , \\ Z_- &=& Z_1 + \frac12 \int_{E_1}^{E_2} \left( -\Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E . \end{eqnarray*} Hence, the distance, $Z_2 - Z_1$, between the two critical points is recovered (modulo mirror symmetry of $Z_1$ with respect to $\frac{c}{2}$). Since $f_\pm$ are both monotonic on $]E_1,E_2[$, $\hat{\mu}$ can be recovered (up to mirror symmetry) on $I$. \end{proof} With this result, the reconstructions on $[Z'_1,0]$ and $I$ can be smoothly glued together, and the uncertainty in the translation of $I$ and the ``orientation'' of $\hat{\mu}$ on $I$ are eliminated. Thus $\hat{\mu}$ is uniquely determined on the interval $[Z_-,0]$. This is illustrated in Figure~\ref{separation3}. \begin{figure} \caption{Reconstruction Step 2, first part in green and second part in blue.} \label{separation3} \end{figure} \textit{Step 3}. On the interval $[Z_I,Z_-]$ we may use the Weyl asymptotics again to recover $\hat{\mu}$. The counting function in the interval $[E_2,E_3]$ is obtained from $\widetilde{S}^3$ which corresponds with $$ {\rm Area}(\{ (Z,\zeta)\ :\ \hat{\mu}(Z) (1 + \zeta^2) \leq E \}) = A_1(E) + A_2(E) , $$ where $$ A_1(E) = {\rm Area}(\{ (Z,\zeta)\ :\ \hat{\mu}(Z) (1 + \zeta^2) \leq E ,\ Z_- \leq Z \leq 0 \}) $$ is already known, and $$ A_2(E) = 2 \int_{f(E)}^{Z_-} \sqrt{\frac{E - \hat{\mu}}{\hat{\mu}}} \mathrm{d}Z , $$ $Z_I \leq f(E) < Z_-$ since $E_2 \leq E \leq E_3 = \hat{\mu}_I$. Thus we may recover $\hat{\mu}$ on the interval $[Z_I,Z_-]$ where $\hat{\mu}$ is decreasing while applying Theorem~\ref{CdV-decr}. Step 3 is illustrated in Figure~\ref{separation4}. \begin{figure} \caption{Reconstruction Step 3 in light blue.} \label{separation4} \end{figure} The two profiles for $\hat{\mu}$ on $[Z_I, Z_-]$ and on $[Z_-,Z_2]$ are then glued together at $Z=Z_-$ which is already known. This completes the reconstruction procedure. 
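\noindent As a purely numerical illustration of the two mechanical steps used in the proof above---inverting the Abel-type transform $Tg(E) = \int_{E_1}^{E} g(u) (E-u)^{-1/2}\, \mathrm{d}u$, and recovering $f_\pm'$ from $\Phi$ and $\Psi$ through (\ref{eq:fpm})---one may run the short Python script below. The test function, the values of $E_1$ and the asymmetric model well, as well as all discretization parameters, are chosen arbitrarily for the example; the script only checks the formulas numerically and is not part of the reconstruction procedure itself. Since $Tg(E_1) = 0$, the derivative in the inversion formula may be taken inside the integral.
\begin{verbatim}
import numpy as np

trap = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule

# Step A: numerical inversion of T g(E) = int_{E1}^{E} g(u)/sqrt(E-u) du
E1 = 1.0                                            # illustrative left endpoint
g_true = lambda u: np.sqrt(np.maximum(u - E1, 0.0)) # test function; T g(E) = (pi/2)(E - E1)

def T(g, E, n=800):                                 # forward transform; u = E - t^2
    t = np.linspace(0.0, np.sqrt(max(E - E1, 0.0)), n)
    return trap(2.0 * g(E - t**2), t)

E_grid = np.linspace(E1, E1 + 2.0, 400)
F_grid = np.array([T(g_true, e) for e in E_grid])
dF = lambda E: np.interp(E, E_grid, np.gradient(F_grid, E_grid))

def T_inv(u, n=800):                                # since F(E1) = 0,
    s = np.linspace(0.0, np.sqrt(u - E1), n)        # g(u) = (1/pi) int F'(E)/sqrt(u-E) dE
    return trap(2.0 * dF(u - s**2), s) / np.pi

for u in (1.3, 1.7, 2.3, 2.8):
    print(T_inv(u), g_true(u))                      # recovered vs. true values of g

# Step B: recovering f_+' and f_-' from Phi and Psi via the quadratic relation
x = np.linspace(1e-4, 1.0, 1000)                    # x = E - E_1
fp, fm = 0.5 / np.sqrt(x), -0.75 / np.sqrt(x)       # an asymmetric model well
Phi, Psi = fp - fm, 1.0 / fp - 1.0 / fm
root = np.sqrt(Phi**2 - 4.0 * Phi / Psi)
sigma = -1.0                                        # sign of f_+' + f_-', fixed by the gluing
print(np.max(np.abs(0.5 * ( Phi + sigma * root) - fp)),
      np.max(np.abs(0.5 * (-Phi + sigma * root) - fm)))
\end{verbatim}
The values recovered in Step A agree with the exact ones to about three decimal places for the grids chosen above, and both differences printed in Step B are at the level of rounding error.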
\subsection{Reconstruction of multiple wells} If $\hat{\mu}$ has multiple wells, we follow an inductive procedure. First, we consider the reconstruction of the half well $\widetilde{W}^k$ of order $k$ between $E_{k-1}$ and $E_k$. We note that $\widetilde{W}^k$ must be a continuation of the half well $\widetilde{W}^{k-1}$, or be joined with some well $W^{k-1}_{j'}$ of order $k-1$. This can be done in a fashion similar to the process presented above (on $[Z_I,Z_-]$). Secondly, we consider the reconstruction of a well, $W^k_j$, separated from the boundary, of order $k$. The well $W^k_j$ might be a new well, and can be reconstructed as in Theorem \ref{CdV-well}. The well, $W^k_j$, might also be joining two wells of order $k-1$, or extending a single well of order $k-1$. Let the profile under $E_{k-1}$ already be recovered. The smooth joining of two wells can be carried out under Assumption~\ref{defect}. We consider now functions $f_-(E)$ and $f_+(E)$ for $E \in [E_{k-1},E_{k}]$ such that $W^k_j$ is the union of three connected intervals, $$ W^k_j(E_k) = [f_-(E_k),f_-(E_{k-1})[ \, \cup \, [f_-(E_{k-1}),f_+(E_{k-1})] \, \cup \, ]f_+(E_{k-1}),f_+(E_{k})] ; $$ see Figure~\ref{fpm}. The semiclassical spectrum in $]E_{k-1},E_{k}[$ up to $o(h^{5/2})$ gives the actions $S^{k,j}_0$ and $S^{k,j}_2$. \begin{figure} \caption{Illustration of $f_\pm$.} \label{fpm} \end{figure} From $S^{k,j}_0$ we obtain \begin{equation*} T^k_j(E) = (S^{k,j}_0)'(E) = \int_{f_{-}(E)}^{f_{+}(E)} \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} \end{equation*} which signify the periods of the trajectories of energy $E$. We write $Z_- = f_-(E_{k-1})$ and $Z_+ = f_+(E_{k-1})$, and decompose the interval, $$ [f_-(E),f_+(E)] = [f_-(E),Z_-] \cup [Z_-,Z_+] \cup [Z_+,f_+(E)] . $$ In accordance with this decomposition, \begin{equation*} T^k_j(E) = T_-(E) + T_{k-1}(E) + T_+(E) , \end{equation*} where \begin{eqnarray*} T_-(E) &=& \int_{f_{-}(E)}^{Z_-} \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \\ T_{k-1}(E) &=& \int_{Z_-}^{Z_+} \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \\ T_+(E) &=& \int_{Z_+}^{f_{+}(E)} \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} . \end{eqnarray*} We note that $T_{k-1}(E)$ is already known. In $T_{\mp}(E)$ we change the variable of integration, $Z = f_{\mp}(u)$. Using that $\hat{\mu}(f_{\mp}(u)) = u$, we get \begin{equation*} T_{\mp}(E) = \mp \int_{E_{k-1}}^E \frac{f_{\mp}'(u)}{\sqrt{u (E - u)}} \mathrm{d}u ; \end{equation*} then \begin{equation*} T^k_j(E) - T_{k-1}(E) = T g(E) ,\quad T g(E) = \int_{E_{k-1}}^E \frac{g(u)}{\sqrt{E - u}} \mathrm{d}u\quad \text{with}\quad g(u) = \frac{\Phi(u)}{\sqrt{u}} \end{equation*} and $\Phi(u) = f_+'(u) - f_-'(u)$ as before. Inverting this Abel transform, we obtain $\Phi$ on $[E_{k-1},E_k[$. From $S^{k,j}_2$ we obtain \[ -\frac{1}{12} \frac{\mathrm{d}}{\mathrm{d}E} J(E) - \frac{1}{4} K(E) , \] where \begin{eqnarray*} J(E) &=& \int_{f_{-}(E)}^{f_{+}(E)} \left( E \hat{\mu}'' - 2 \frac{(E - \hat{\mu})}{\hat{\mu}} (\hat{\mu}')^2 \right) \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \\ K(E) &=& \int_{f_{-}(E)}^{f_{+}(E)} \hat{\mu}'' \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} . 
\end{eqnarray*} Using that $$ \hat{\mu}(f_\pm(E)) = E ,\quad \hat{\mu}'|_{Z = f_\pm(E)} = \frac{1}{f_\pm'(E)} ,\quad \hat{\mu}''|_{Z = f_\pm(E)} = \left(\frac{1}{f_\pm'}\right)'(E) \frac{1}{f_\pm'(E)} , $$ changing variables of integration in $J$ and $K$, $Z = f_\pm(u)$, and introducing \[ \Psi(E) = \frac{1}{f_{+}'(E)} - \frac{1}{f_{-}'(E)} , \] we have \begin{eqnarray*} J(E) - J_{k-1}(E) &=& \int_{E_{k-1}}^E \left(E \Psi'(u) - 2 \left(\frac{E}{u} - 1\right) \Psi(u)\right) \frac{\mathrm{d}u}{\sqrt{u (E - u)}} , \\ K(E) - K_{k-1}(E) &=& \int_{E_{k-1}}^E \Psi'(u) \frac{\mathrm{d}u}{\sqrt{u (E - u)}} , \end{eqnarray*} where \begin{eqnarray*} J_{k-1}(E) &=& \int_{Z_-}^{Z_+} \left( E \hat{\mu}'' - 2 \frac{(E - \hat{\mu})}{\hat{\mu}} (\hat{\mu}')^2 \right) \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} , \\ K_{k-1}(E) &=& \int_{Z_-}^{Z_+} \hat{\mu}'' \frac{\mathrm{d}Z}{\sqrt{\hat{\mu} (E - \hat{\mu})}} \end{eqnarray*} are already known. Thus, from $S^{k,j}_2$, we recover \[ \mathcal{B} \Psi(E) = \int_{E_{k-1}}^E \left( (7 E - 6 u) \Psi'(u) - 2 \left(\frac{E}{u} - 1\right) \Psi(u) \right) \frac{\mathrm{d}u}{\sqrt{u (E - u)}} . \] Then, similarly to the proof of Theorem \ref{CdV-well}, we recover $\Psi$ on $[E_{k-1},E_k[$ by inverting $\mathcal{B}$ through the introduction of a second-order ordinary differential equation. From $\Phi$ and $\Psi$ we obtain $$ 2 f_+' = \Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}} ,\quad 2 f_-' =-\Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}} $$ and then \begin{eqnarray*} f_+(E) &=& Z_+ + \frac12 \int_{E_{k-1}}^E \left( \Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E , \\ f_-(E) &=& Z_- + \frac12 \int_{E_{k-1}}^E \left( -\Phi \pm \sqrt{\Phi^2 - 4 \frac{\Phi}{\Psi}}\right) \mathrm{d}E . \end{eqnarray*} From $f_-$ we recover $\hat{\mu}$ on the interval $[f_-(E),Z_-]$ and from $f_+$ we recover $\hat{\mu}$ on the interval $[Z_+,f_+(E)]$. The $\pm$ signs in $f_{\pm}$ are disentangled by smoothly joining the newly reconstructed pieces to the previously reconstructed part, using Assumption~\ref{defect}, as in the previous subsection. Since the profile in $[Z_-,Z_+]$ can only be determined up to translation and symmetry, the determination of the profile in $W^k_j$ is up to the same translation and symmetry. The symmetry and translation freedom for all the wells will be gradually eliminated during the whole process. At the final step, there is a single half well connected to the boundary, and then we can reconstruct exactly the entire profile. \end{document}
Strong mixing conditions
This article was adapted from an original article by Richard C. Bradley (Department of Mathematics, Indiana University, Bloomington, Indiana, USA), which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies (http://statprob.com/encyclopedia/StrongMixingConditions.html). The original article is copyrighted by the author(s) and has been donated to the Encyclopedia of Mathematics under a Creative Commons Attribution Share-Alike License. 2010 Mathematics Subject Classification: Primary: 60G10 Secondary: 60G99.
There has been much research on stochastic models that have a well defined, specific structure --- for example, Markov chains, Gaussian processes, or linear models, including ARMA (autoregressive -- moving average) models. However, it became clear in the middle of the last century that there was a need for a theory of statistical inference (e.g. central limit theory) that could be used in the analysis of time series that did not seem to "fit" any such specific structure but which did seem to have some "asymptotic independence" properties. That motivated the development of a broad theory of "strong mixing conditions" to handle such situations. This note is a brief description of that theory. The field of strong mixing conditions is a vast area, and a short note such as this cannot even begin to do justice to it. Journal articles (with one exception) will not be cited; and many researchers who made important contributions to this field will not be mentioned here. All that can be done here is to give a narrow snapshot of part of the field.
The strong mixing ($\alpha$-mixing) condition. Suppose $X := (X_k, k \in {\mathbf Z})$ is a sequence of random variables on a given probability space $(\Omega, {\cal F}, P)$. For $-\infty \leq j \leq \ell \leq \infty$, let ${\cal F}_j^\ell$ denote the $\sigma$-field of events generated by the random variables $X_k, j \leq k \leq \ell (k \in {\mathbf Z})$. For any two $\sigma$-fields ${\cal A}$ and ${\cal B} \subset {\cal F}$, define the "measure of dependence" $$ \alpha({\cal A}, {\cal B}) := \sup_{A \in {\cal A}, B \in {\cal B}} |P(A \cap B) - P(A)P(B)|. \tag{1} $$ For the given random sequence $X$, for any positive integer $n$, define the dependence coefficient $$\alpha(n) = \alpha(X,n) := \sup_{j \in {\mathbf Z}} \alpha({\cal F}_{-\infty}^j, {\cal F}_{j + n}^{\infty}). \tag{2} $$ By a trivial argument, the sequence of numbers $(\alpha(n), n \in {\mathbf N})$ is nonincreasing. The random sequence $X$ is said to be "strongly mixing", or "$\alpha$-mixing", if $\alpha(n) \to 0$ as $n \to \infty$. This condition was introduced in 1956 by Rosenblatt [Ro1], and was used in that paper in the proof of a central limit theorem. (The phrase "central limit theorem" will henceforth be abbreviated CLT.) In the case where the given sequence $X$ is strictly stationary (i.e. its distribution is invariant under a shift of the indices), eq. (2) also has the simpler form $$\alpha(n) = \alpha(X,n) := \alpha({\cal F}_{-\infty}^0, {\cal F}_n^{\infty}). \tag{3} $$ For simplicity, in the rest of this note, we shall restrict to strictly stationary sequences. (Some comments below will have obvious adaptations to nonstationary processes.)
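As a small concrete illustration of definitions (1) and (2), the quantity $\alpha({\cal A}, {\cal B})$ can be computed by brute force when ${\cal A}$ and ${\cal B}$ are the $\sigma$-fields generated by two random variables taking finitely many values. The following Python snippet does this for an arbitrary $2 \times 2$ joint probability table; the numbers are invented for the example, and for a strictly stationary finite-state sequence one would apply it to the joint law of $(X_0, X_n)$.

import itertools
import numpy as np

# joint law of (X_0, X_n) for a toy two-state example (arbitrary numbers)
p = np.array([[0.35, 0.15],
              [0.15, 0.35]])

def alpha(p):
    # brute force over all events A, B in the two generated (finite) sigma-fields
    px, py = p.sum(axis=1), p.sum(axis=0)
    subsets = lambda n: [list(s) for k in range(n + 1)
                         for s in itertools.combinations(range(n), k)]
    best = 0.0
    for A in subsets(p.shape[0]):
        for B in subsets(p.shape[1]):
            pab = p[np.ix_(A, B)].sum()
            best = max(best, abs(pab - px[A].sum() * py[B].sum()))
    return best

print(alpha(p))   # approximately 0.10 for the table above

(For this table the supremum is attained at the singleton events, giving $|0.35 - 0.5 \cdot 0.5| = 0.10$.)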
In particular, for strictly stationary sequences, the strong mixing ($\alpha$-mixing) condition implies Kolmogorov regularity (a trivial "past tail" $\sigma$-field), which in turn implies "mixing" (in the ergodic-theoretic sense), which in turn implies ergodicity. (None of the converse implications holds.) For further related information, see e.g. [Br, v1, Chapter 2]. Comments on limit theory under $\alpha$-mixing. Under $\alpha$-mixing and other similar conditions (including ones reviewed below), there has been a vast development of limit theory --- for example, CLTs, weak invariance principles, laws of the iterated logarithm, almost sure invariance principles, and rates of convergence in the strong law of large numbers. For example, the CLT in [Ro1] evolved through subsequent refinements by several researchers into the following "canonical" form. (For its history and a generously detailed presentation of its proof, see e.g. [Br, v1, Theorems 1.19 and 10.2].) Theorem 1. Suppose $(X_k, k \in {\mathbf Z})$ is a strictly stationary sequence of random variables such that $EX_0 = 0$, $EX_0^2 < \infty$, $\sigma_n^2 := ES_n^2 \to \infty$ as $n \to \infty$, and $\alpha(n) \to 0$ as $n \to \infty$. Then the following two conditions (A) and (B) are equivalent: (A) The family of random variables $(S_n^2/\sigma_n^2, n \in {\mathbf N})$ is uniformly integrable. (B) $S_n/\sigma_n \Rightarrow N(0,1)$ as $n \to \infty$. If (the hypothesis and) these two equivalent conditions (A) and (B) hold, then $\sigma_n^2 = n \cdot h(n)$ for some function $h(t), t \in (0, \infty)$ which is slowly varying as $t \to \infty$. Here $S_n := X_1 + X_2 + \dots + X_n$; and $\Rightarrow$ denotes convergence in distribution. The assumption $ES_n^2 \to \infty$ is needed here in order to avoid trivial $\alpha$-mixing (or even 1-dependent) counterexamples in which a kind of "cancellation" prevents the partial sums $S_n$ from "growing" (in probability) and becoming asymptotically normal. In the context of Theorem 1, if one wants to obtain asymptotic normality of the partial sums (as in condition (B)) without an explicit uniform integrability assumption on the partial sums (as in condition (A)), then as an alternative, one can impose a combination of assumptions on, say, (i) the (marginal) distribution of $X_0$ and (ii) the rate of decay of the numbers $\alpha(n)$ to 0 (the "mixing rate"). This involves a "trade-off"; the weaker one assumption is, the stronger the other has to be. One such CLT of Ibragimov in 1962 involved such a "trade-off" in which it is assumed that for some $\delta > 0$, $E|X_0|^{2 + \delta} < \infty$ and $\sum_{n=1}^\infty [\alpha(n)]^{\delta/(2 + \delta)} < \infty$. Counterexamples of Davydov in 1973 (with just slightly weaker properties) showed that that result is quite sharp. However, it is not at the exact "borderline". From a covariance inequality of Rio in 1993 and a CLT (in fact a weak invariance principle) of Doukhan, Massart, and Rio in 1994, it became clear that the "exact borderline" CLTs of this kind have to involve quantiles of the (marginal) distribution of $X_0$ (rather than just moments). For a generously detailed exposition of such CLTs, see [Br, v1, Chapter 10]; and for further related results, see also Rio [Ri]. Under the hypothesis (first sentence) of Theorem 1 (with just finite second moments), there is no mixing rate, no matter how fast (short of $m$-dependence), that can insure that a CLT holds. 
That was shown in 1983 with two different counterexamples, one by the author and the other by Herrndorf. See [Br, v1&3, Theorem 10.25 and Chapter 31]. Several other classic strong mixing conditions. As indicated above, the terms "$\alpha$-mixing" and "strong mixing condition" (singular) both refer to the condition $\alpha(n) \to 0$. (A little caution is in order; in ergodic theory, the term "strong mixing" is often used to refer to the condition of "mixing in the ergodic-theoretic sense", which is weaker than $\alpha$-mixing as noted earlier.) The term "strong mixing conditions" (plural) can reasonably be thought of as referring to all conditions that are at least as strong as (i.e. that imply) $\alpha$-mixing. In the classical theory, five strong mixing conditions (again, plural) have emerged as the most prominent ones: $\alpha$-mixing itself and four others that will be defined here. Recall our probability space $(\Omega, {\cal F}, P)$. For any two $\sigma$-fields ${\cal A}$ and ${\cal B} \subset {\cal F}$, define the following four "measures of dependence": $$ \eqalignno{ \phi({\cal A}, {\cal B}) &:= \sup_{A \in {\cal A}, B \in {\cal B}, P(A) > 0} |P(B|A) - P(B)|; & (4) \cr \psi({\cal A}, {\cal B}) &:= \sup_{A \in {\cal A}, B \in {\cal B}, P(A) > 0, P(B) > 0} |P(B \cap A)/[P(A)P(B)]\thinspace -\thinspace 1|; & (5) \cr \rho({\cal A}, {\cal B}) &:= \sup_{f \in {\cal L}^2({\cal A}),\thinspace g \in {\cal L}^2({\cal B})} |{\rm Corr}(f,g)|; \quad {\rm and} & (6) \cr \beta ({\cal A}, {\cal B}) &:= \sup (1/2) \sum_{i=1}^I \sum_{j=1}^J |P(A_i \cap B_j) - P(A_i)P(B_j)| & (7) \cr } $$ where the latter supremum is taken over all pairs of finite partitions $(A_1, A_2, \dots, A_I)$ and $(B_1, B_2, \dots, B_J)$ of $\Omega$ such that $A_i \in {\cal A}$ for each $i$ and $B_j \in {\cal B}$ for each $j$. In (6), for a given $\sigma$-field ${\cal D} \subset {\cal F}$, the notation ${\cal L}^2({\cal D})$ refers to the space of (equivalence classes of) square-integrable, ${\cal D}$-measurable random variables. Now suppose $X := (X_k, k \in {\mathbf Z})$ is a strictly stationary sequence of random variables on $(\Omega, {\cal F}, P)$. For any positive integer $n$, analogously to (3), define the dependence coefficient $$\phi(n) = \phi(X,n) := \phi({\cal F}_{-\infty}^0, {\cal F}_n^{\infty}), \tag{8} $$ and define analogously the dependence coefficients $\psi(n)$, $\rho(n)$, and $\beta(n)$. Each of these four sequences of dependence coefficients is trivially nonincreasing. The (strictly stationary) sequence $X$ is said to be "$\phi$-mixing" if $\phi(n) \to 0$ as $n \to \infty$; "$\psi$-mixing" if $\psi(n) \to 0$ as $n \to \infty$; "$\rho$-mixing" if $\rho(n) \to 0$ as $n \to \infty$; and "absolutely regular", or "$\beta$-mixing", if $\beta(n) \to 0$ as $n \to \infty$. The $\phi$-mixing condition was introduced by Ibragimov in 1959 and was also studied by Cogburn in 1960. The $\psi$-mixing condition evolved through papers of Blum, Hanson, and Koopmans in 1963 and Philipp in 1969; and (see e.g. [Io]) it was also implicitly present in earlier work of Doeblin in 1940 involving the metric theory of continued fractions. The $\rho$-mixing condition was introduced by Kolmogorov and Rozanov 1960. (The "maximal correlation coefficient" $\rho({\cal A}, {\cal B})$ itself was first studied by Hirschfeld in 1935 in a statistical context that had no particular connection with "stochastic processes".) 
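In the same spirit as the snippet following (2), the four measures of dependence in (4)--(7) can be evaluated exactly for finite $\sigma$-fields; the short Python computation below does this for an arbitrary $2 \times 2$ joint probability table (the numbers are again invented). For $\beta$ the supremum over partitions is attained at the partition into atoms; $\phi$ and $\psi$ are obtained by brute force over all events of positive probability; for $\rho$ we use the standard fact that, for finitely-valued random variables, the maximal correlation equals the second-largest singular value of the matrix with entries $P(X=i, Y=j)/\sqrt{P(X=i)P(Y=j)}$.

import itertools
import numpy as np

p = np.array([[0.35, 0.15],            # arbitrary joint law of two binary random variables
              [0.15, 0.35]])
px, py = p.sum(axis=1), p.sum(axis=0)
subsets = lambda n: [list(s) for k in range(1, n + 1)
                     for s in itertools.combinations(range(n), k)]

beta = 0.5 * np.abs(p - np.outer(px, py)).sum()        # eq. (7), evaluated on the atoms

phi = psi = 0.0                                        # eqs. (4) and (5), brute force
for A in subsets(p.shape[0]):
    for B in subsets(p.shape[1]):
        pa, pb, pab = px[A].sum(), py[B].sum(), p[np.ix_(A, B)].sum()
        phi = max(phi, abs(pab / pa - pb))
        psi = max(psi, abs(pab / (pa * pb) - 1.0))

rho = np.linalg.svd(p / np.sqrt(np.outer(px, py)),     # eq. (6), maximal correlation
                    compute_uv=False)[1]

print(beta, phi, psi, rho)                             # 0.20, 0.20, 0.40, 0.40 here

For this particular table one also has $\alpha = 0.10$ from the earlier snippet, so all of the inequalities displayed below hold for it, most of them with equality in this symmetric example.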
The absolute regularity ($\beta$-mixing) condition was introduced by Volkonskii and Rozanov in 1959, and in the ergodic theory literature it is also called the "weak Bernoulli" condition. For the five measures of dependence in (1) and (4)--(7), one has the following well known inequalities: $$ \eqalignno{ 2\alpha({\cal A}, {\cal B}) \thinspace & \leq \thinspace \beta({\cal A}, {\cal B}) \thinspace \leq \thinspace \phi({\cal A}, {\cal B}) \thinspace \leq \thinspace (1/2) \psi({\cal A}, {\cal B}); \cr 4 \alpha({\cal A}, {\cal B})\thinspace &\leq \thinspace \rho({\cal A}, {\cal B}) \thinspace \leq \thinspace \psi({\cal A}, {\cal B}); \quad {\rm and} \cr \rho({\cal A}, {\cal B}) \thinspace &\leq \thinspace 2 [\phi({\cal A}, {\cal B})]^{1/2} [\phi({\cal B}, {\cal A})]^{1/2} \thinspace \leq \thinspace 2 [\phi({\cal A}, {\cal B})]^{1/2}. \cr } $$ For a history and proof of these inequalities, see e.g. [Br, v1, Theorem 3.11]. As a consequence of these inequalities and some well known examples, one has the following "hierarchy" of the five strong mixing conditions here: (i) $\psi$-mixing implies $\phi$-mixing. (ii) $\phi$-mixing implies both $\rho$-mixing and $\beta$-mixing (absolute regularity). (iii) $\rho$-mixing and $\beta$-mixing each imply $\alpha$-mixing (strong mixing). (iv) Aside from "transitivity", there are in general no other implications between these five mixing conditions. In particular, neither of the conditions $\rho$-mixing and $\beta$-mixing implies the other. For all of these mixing conditions, the "mixing rates" can be essentially arbitrary, and in particular, arbitrarily slow. That general principle was established by Kesten and O'Brien in 1976 with several classes of examples. For further details, see e.g. [Br, v3, Chapter 26]. The various strong mixing conditions above have been used extensively in statistical inference for weakly dependent data. See e.g. [DDLLLP], [DMS], [Ro3], or [Žu].
Ibragimov's conjecture and related material. Suppose (as in Theorem 1) $X := (X_k, k \in {\mathbf Z})$ is a strictly stationary sequence of random variables such that $$ EX_0 = 0, \ EX_0^2 < \infty, \ {\ \rm and\ } ES_n^2 \to \infty {\ \rm as\ } n \to \infty. \tag{9} $$ In the 1960s, I.A. Ibragimov conjectured that under these assumptions, if also $X$ is $\phi$-mixing, then a CLT holds. Technically, this conjecture remains unsolved. Peligrad showed in 1985 that it holds under the stronger "growth" assumption $\liminf_{n \to \infty} n^{-1} ES_n^2 > 0$. (See e.g. [Br, v2, Theorem 17.7].) Under (9) and $\rho$-mixing (which is weaker than $\phi$-mixing), a CLT need not hold (see [Br, v3, Chapter 34] for counterexamples). However, if one also imposes either the stronger moment condition $E|X_0|^{2 + \delta} < \infty$ for some $\delta > 0$, or else the "logarithmic" mixing rate assumption $\sum_{n=1}^\infty \rho(2^n) < \infty$, then a CLT does hold (results of Ibragimov in 1975). For further limit theory under $\rho$-mixing, see e.g. [LL] or [Br, v1, Chapter 11]. Under (9) and an "interlaced" variant of the $\rho$-mixing condition (i.e. with the two index sets allowed to be "interlaced" instead of just "past" and "future"), a CLT does hold. For this and related material, see e.g. [Br, v1, Sections 11.18-11.28]. There is a vast literature on central limit theory for random fields satisfying various strong mixing conditions. See e.g. [Ro3], [Žu], [Do], and [Br, v3].
In the formulation of mixing conditions for random fields --- and also "interlaced" mixing conditions for random sequences --- some caution is needed; see e.g. [Br, v1&3, Theorems 5.11, 5.13, 29.9, and 29.12]. Connections with specific types of models. Now let us return briefly to a theme from the beginning of this write-up: the connection between strong mixing conditions and specific structures. Markov chains. Suppose $X := (X_k, k \in {\mathbf Z})$ is a strictly stationary Markov chain. In the case where $X$ has finite state space and is irreducible and aperiodic, it is $\psi$-mixing, with at least exponentially fast mixing rate. In the case where $X$ has countable (but not necessarily finite) state space and is irreducible and aperiodic, it satisfies $\beta$-mixing, but the mixing rate can be arbitrarily slow. In the case where $X$ has (say) real (but not necessarily countable) state space, (i) Harris recurrence and "aperiodicity" (suitably defined) together are equivalent to $\beta$-mixing, (ii) the "geometric ergodicity" condition is equivalent to $\beta$-mixing with at least exponentially fast mixing rate, and (iii) one particular version of "Doeblin's condition" is equivalent to $\phi$-mixing (and the mixing rate will then be at least exponentially fast). There exist strictly stationary, countable-state Markov chains that are $\phi$-mixing but not "time reversed" $\phi$-mixing (note the asymmetry in the definition of $\phi({\cal A}, {\cal B})$ in (4)). For this and other information on strong mixing conditions for Markov chains, see e.g. [Ro2, Chapter 7], [Do], [MT], and [Br, v1&2, Chapters 7 and 21]. Stationary Gaussian sequences. For stationary Gaussian sequences $X := (X_k, k \in {\mathbf Z})$, Ibragimov and Rozanov [IR] give characterizations of various strong mixing conditions in terms of properties of spectral density functions. Here are just a couple of comments: For stationary Gaussian sequences, the $\alpha$- and $\rho$-mixing conditions are equivalent to each other, and the $\phi$- and $\psi$-mixing conditions are each equivalent to $m$-dependence. If a stationary Gaussian sequence has a continuous positive spectral density function, then it is $\rho$-mixing. For some further closely related information on stationary Gaussian sequences, see also [Br, v1&3, Chapters 9 and 27]. Dynamical systems. Many dynamical systems have strong mixing properties. Certain one-dimensional "Gibbs states" processes are $\psi$-mixing with at least exponentially fast mixing rate. A well known standard "continued fraction" process is $\psi$-mixing with at least exponentially fast mixing rate (see [Io]). For certain stationary finite-state stochastic processes built on piecewise expanding mappings of the unit interval onto itself, the absolute regularity condition holds with at least exponentially fast mixing rate. For more detains on the mixing properties of these and other dynamical systems, see e.g. Denker [De]. Linear and related processes. There is a large literature on strong mixing properties of strictly stationary linear processes (including strictly stationary ARMA processes and also "non-causal" linear processes and linear random fields) and also of some other related processes such as bilinear, ARCH, or GARCH models. For details on strong mixing properties of these and other related processes, see e.g. Doukhan [Do, Chapter 2]. However, many strictly stationary linear processes fail to be $\alpha$-mixing. 
A well known classic example is the strictly stationary AR(1) process (autoregressive process of order 1) $X := (X_k, k \in {\mathbf Z})$ of the form $X_k = (1/2)X_{k-1} + \xi_k$ where $(\xi_k, k \in {\mathbf Z})$ is a sequence of independent, identically distributed random variables, each taking the values 0 and 1 with probability 1/2 each. It has long been well known that this random sequence $X$ is not $\alpha$-mixing. For more on this example, see e.g. [Br, v1, Example 2.15] or [Do, Section 2.3.1]. Further related developments. The AR(1) example spelled out above, together with many other examples that are not $\alpha$-mixing but seem to have some similar "weak dependence" quality, have motivated the development of more general conditions of weak dependence that have the "spirit" of, and most of the advantages of, strong mixing conditions, but are less restrictive, i.e. applicable to a much broader class of models (including the AR(1) example above). There is a substantial development of central limit theory for strictly stationary sequences under weak dependence assumptions explicitly involving characteristic functions in connection with "block sums"; much of that theory is codified in [Ja]. There is a substantial development of limit theory of various kinds under weak dependence assumptions that involve covariances of certain multivariate Lipschitz functions of random variables from the "past" and "future" (in the spirit of, but much less restrictive than, say, the dependence coefficient $\rho(n)$ defined analogously to (3) and (8)); see e.g. [DDLLLP]. There is a substantial development of limit theory under weak dependence assumptions that involve dependence coefficients similar to $\alpha(n)$ in (3) but in which the classes of events are restricted to intersections of finitely many events of the form $\{X_k > c\}$ for appropriate indices $k$ and appropriate real numbers $c$; for the use of such conditions in extreme value theory, see e.g. [LLR]. In recent years, there has been a considerable development of central limit theory under "projective" criteria related to martingale theory (motivated by Gordin's martingale-approximation technique --- see [HH]); for details, see e.g. [Pe]. There are far too many other types of weak dependence conditions, of the general spirit of strong mixing conditions but less restrictive, to describe here; for more details, see e.g. [DDLLLP] or [Br, v1, Chapter 13]. [Br] R.C. Bradley. Introduction to Strong Mixing Conditions, Vols. 1, 2, and 3. Kendrick Press, Heber City (Utah), 2007. [DDLLLP] J. Dedecker, P. Doukhan, G. Lang, J.R. León, S. Louhichi, and C. Prieur. Weak Dependence: Models, Theory, and Applications. Lecture Notes in Statistics 190. Springer-Verlag, New York, 2007. [DMS] H. Dehling, T. Mikosch, and M. Sørensen, eds. Empirical Process Techniques for Dependent Data. Birkhäuser, Boston, 2002. [De] M. Denker. The central limit theorem for dynamical systems. In: Dynamical Systems and Ergodic Theory, (K. Krzyzewski, ed.), pp. 33-62. Banach Center Publications, Polish Scientific Publishers, Warsaw, 1989. [Do] P. Doukhan. Mixing: Properties and Examples. Springer-Verlag, New York, 1995. [HH] P. Hall and C.C. Heyde. Martingale Limit Theory and its Application. Academic Press, San Diego, 1980. [IR] I.A. Ibragimov and Yu.A. Rozanov. Gaussian Random Processes. Springer-Verlag, New York, 1978. [Io] M. Iosifescu. Doeblin and the metric theory of continued fractions: a functional theoretic solution to Gauss' 1812 problem. 
In: Doeblin and Modern Probability, (H. Cohn, ed.), pp. 97-110. Contemporary Mathematics 149, American Mathematical Society, Providence, 1993. [Ja] A. Jakubowski. Asymptotic Independent Representations for Sums and Order Statistics of Stationary Sequences. Uniwersytet Mikołaja Kopernika, Toruń, Poland, 1991. [LL] Z. Lin and C. Lu. Limit Theory for Mixing Dependent Random Variables. Kluwer Academic Publishers, Boston, 1996. [LLR] M.R. Leadbetter, G. Lindgren, and H. Rootzén. Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag, New York, 1983. [MT] S.P. Meyn and R.L. Tweedie. Markov Chains and Stochastic Stability (3rd printing). Springer-Verlag, New York, 1996. [Pe] M. Peligrad. Conditional central limit theorem via martingale approximation. In: Dependence in Probability, Analysis and Number Theory, (I. Berkes, R.C. Bradley, H. Dehling, M. Peligrad, and R. Tichy, eds.), pp. 295-309. Kendrick Press, Heber City (Utah), 2010. [Ri] E. Rio. Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants. Mathématiques & Applications 31. Springer, Paris, 2000. [Ro1] M. Rosenblatt. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 42 (1956) 43-47. [Ro2] M. Rosenblatt. Markov Processes, Structure and Asymptotic Behavior. Springer-Verlag, New York, 1971. [Ro3] M. Rosenblatt. Stationary Sequences and Random Fields. Birkhäuser, Boston, 1985. [Žu] I.G. Žurbenko. The Spectral Analysis of Time Series. North-Holland, Amsterdam, 1986. Strong mixing conditions. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Strong_mixing_conditions&oldid=38548 Retrieved from "https://encyclopediaofmath.org/index.php?title=Strong_mixing_conditions&oldid=38548" Statprob
Moving shadow detection based on stationary wavelet transform Kavitha Nagarathinam1 & Ruba Soundar Kathavarayan2 Many surveillance and forensic applications face problems in identifying shadows and their removal. The moving shadow points overlap with the moving objects in a video sequence leading to misclassification of the exact object. This article presents a novel method for identifying and removing moving shadows using stationary wavelet transform (SWT) based on a threshold determined by wavelet coefficients. The multi-resolution property of the stationary wavelet transform leads to the decomposition of the frames into four different bands without the loss of spatial information. The conventional discrete wavelet transform (DWT), which has the same property, suffers from the problem of shift invariance due to the decimation operation leading to a shift in the original signal during reconstruction. Since SWT does not have the decimation operation, the problem of shift invariance is solved which makes it feasible for change detection, pattern recognition and feature extraction and retrieves the original signal without the loss of phase information also. For detection and removal of shadow, a new threshold in the form of a variant statistical parameter—"skewness"—is proposed. The value of threshold is determined through the wavelet coefficients without the requirement of any supervised learning or manual calibration. Normally, the statistical parameters like mean, variance and standard deviation does not show much variation in complex environments. Skewness shows a unique variation between the shadow and non-shadow pixels in various environments than the previously used thresholds—standard deviation and relative standard deviation. The experimental results prove that the proposed method works better than other state-of-art-methods. The automated visualization applications are shaped mainly to acquire the appearance of moving points from a video sequence using various pattern recognition and machine learning techniques. In particular, people and vehicles are the most important subjects for monitoring in any typical surveillance and video forensics applications. They usually describe humans with their appearance, size and behaviour and vehicles by their make, model and type. Similarly annotations of humans and vehicles help in video forensic applications to search for people or vehicle with a recognized portrayal by a witness. The major issue in such surveillance and forensic videos is moving shadows. They prevent the correct identification of the suspects in case of any unlawful activities. Serious flaws like merging of more than one object, incorrect object shape and deformation of its contour and cues may happen if moving shadows are not removed from the target object. Proper induction of the moving object segmentation algorithms may help the video surveillance and forensic applications, object tracking applications, etc., to perform well up to the customer satisfaction. If not, categorization or evaluation of moving object position will produce erroneous results. Sanin et al. [1] classified the moving shadow detection methods under various categories. Readily available cues which make a distinction between the target object, its shadow and the place where it is cast (background) are discussed to get a clear understanding of the shadow detection concept. Sanin et al. 
[1] discussed few shadow detection algorithms which consider the physical and geometrical properties of the shadows to identify them. They are categorized under model-based techniques. Many of the shadow detection algorithms consider the colour-based methods [2,3,4,5,6,7,8] and texture-based [9,10,11,12] methods to differentiate shadows from the foreground objects. They are categorized under property-based techniques. The colour-based methods use the following facts about the shadow and background. The red-green-blue (RGB) values, hue and saturation in hue-saturation-value (HSV) colour space and grey level of a shadow are inferior to the background in the corresponding pixel [12, 13]. The variation between those values of shadow and background show a gradual growth between adjacent pixels [14]. The shadow and the background have similar texture. Texture-rich object has a texture-less shadow [12, 14]. The entropy value or the Gaussian filter derivatives are helpful in extracting the texture of the image segments [15]. There is a difference in the illumination of shadow and background which also helps to differentiate shadow pixels. Lesser boundary when compared to background and not many interior edges when compared to objects are used to identify shadow pixels. But still, the shadows remain attached to their objects [16]. Even though object and shadow have the same movement, their position separates them [17]. Skewness is a variant statistical feature with respect to shadow and non-shadow areas for locating shadows provided a proper edge detection algorithm is used [15]. An invariant feature across shadow boundary is the image gradient. For majority of the moving shadow detection methods, chromatic analysis is the first step. The hue to intensity ratio [18] is used to categorize shadow and non-shadow pixels. Also, c1c2c3 colour space is used to distinguish shadow areas by introducing a variation in the colour space transformation. But still, if the RGB values remain the same, the variation in the transformation formula will not show any difference. Colour space-based methods [19] such as HSI, HSV, YIQ and YCbCr analyse the intensity and colour properties of shadows in aerial images and calculate a hue to intensity ratio for each pixel. The shadow regions are extracted based on a threshold, which is not clearly classifying the dark blue and dark green surfaces. To improve the shadow detection results, a consecutive thresholding [20] can be done. Based on the lighting conditions, an invariant colour model [21] which identifies the shadow can be used, but this may not work for all types of images. HSI colour space along with the colour attenuation relationship [22] is analysed to detect the shadow in colour aerial images. The shadow properties discussed so far can be utilized in image and video forensics to expose the falsification of photos and videos by detecting the shadows using their geometric and shading information [23]. Those images and videos can then be properly annotated for future forensic analysis after the removal of fake content. In image and video forensics field, the detection of shadow regions decides whether they are really formed or manipulated. The inconsistency of the texture between the shadow and background regions reveal the tampered content from the images [24] since it is known that the shadow does not change the texture of the background [12, 14]. 
Farid [25] gave three geometric techniques for detecting the traces of digital manipulation: vanishing points, reflections and shadows. The author reported with both real-time image and video forgery by analysing the shadows in them. This supports the need of detection of shadow in image and video forensics by which we were motivated to propose a moving shadow detection method. All the methods discussed so far dealt the shadow detection or removal process in spatial domain techniques that suffer from inaccurate segmentation due to the non-removal of noise or failure to detect new appearance automatically. Only few methods are available in wavelet domain that rectifies the problems in spatial domain techniques for a better identification of the moving cast shadows. Guan [26] explored the properties of the HSV colour model for shadow detection and removal using dyadic multi-scale wavelet transform. The standard deviation of the wavelet coefficients from the value component helps to identify the shadow pixels from the foreground pixels. In a similar fashion, Khare et al. [27] used relative standard deviation of the wavelet coefficients to separate the shadows from the foreground pixels. Since the wavelet transforms decompose the input into high- and low-frequency values, the threshold values get updated automatically. Combining the saturation component with the value component, the threshold identifies the region of the shadows to be removed. The algorithms discussed so far still have the deficiency of the accurate shadow detection and removal. In the proposed approach, we have also used the HSV colour space as it corresponds closely to the human perception of colour and separates chromaticity and luminosity easily. We also used variant statistical parameter "skewness" of the wavelet coefficients to detect and remove the shadows. In the rest of the paper, Section 2 describes stationary wavelet transform (SWT); Sections 3 and 4 explains the threshold selection and the proposed method respectively. Section 5 explains the experimental results, and Section 6 gives the comparisons of the proposed method and other existing and state-of-art-methods [1, 7, 12, 26, 27]. Finally, discussions and conclusions of the work are given in Sections 7 and 8. Stationary wavelet transform (SWT) In this section, we have briefly discussed SWT and the reasons for using SWT for detection and removal of shadow from moving objects. Guan [26], Khare et al. [27] used discrete wavelet transform (DWT) that lacks in phase information and translation invariance and that creates problems in reconstruction of the image. So, we propose a method which uses SWT. Fourier transform (FT) analyses a signal by decomposing it into constituent sinusoids of different frequencies. It has a major drawback of losing the temporal information. But cooperation of short-time Fourier transform (STFT) with a predefined window helps to retrieve the temporal information along with their frequencies. DWT has an advantage over the Fourier transform in terms of localisation in frequency domain as well as in spatial domain. The DWT can be applied on a discrete signal containing N samples. The signal is decomposed into low-frequency band (L) using low-pass filter and high-frequency band (H) using high-pass filter. Each band is sub-sampled by a factor of two. In the case of 2D signal (image), each row of an image is filtered by a low-pass filter l[m] and high-pass filter h[m]. 
But the decimation operation of DWT makes it a shifted version of signal but not equivalent to the shift in DWT of signal. SWT solves this problem of shift invariance [28]. SWT differs from conventional DWT in terms of decimation and shift invariance, which makes it feasible for change detection, pattern recognition and feature extraction. In SWT, the input signal is convolved with low l[m]- and high h[m]- pass filter in a similar manner as in DWT, but no decimation is performed to obtain wavelet coefficients of different sub-bands. As there is no decimation involved in SWT, the number of coefficients is twice that of the samples in the input signal that helps for a better reconstruction of the given image. Threshold selection An optimal threshold has to be determined for the accurate shadow detection process. Guan [26] used standard deviation (σ) as the threshold, Khare et al. [27] used relative standard deviation (σ/μ) as the threshold and we have chosen skewness as the threshold in SWT domain. For a sample of n values, skewness (Skew) parameter is defined as: $$ \mathrm{Skewness}=E\left[{\left(\frac{x-\mu }{\sigma}\right)}^3\right] $$ where x is the one of the sample value, μ is the sample mean and σ is the standard deviation of the n values. Skewness is a measure of the degree of asymmetry of a distribution. The skewness value can be positive, negative or even undefined [29]. A negative skewness indicates that the tail on the left side of the probability density function is longer than the right side and the bulk of the values lie to the right of the mean. A positive skewness indicates that the tail on the right side is longer than the left side and the bulk of the values lie to the left of the mean as shown in Fig. 1. a Negative skewness—the left tail is longer and the mass of the distribution is concentrated on the right of the figure. b Positive skewness—the right tail is longer and the mass of the distribution is concentrated on the left of the figure The motivation for selecting skewness as one of the distinguishing features to represent shadow is it always shows a variation from the skewness of the creating object and the casted background [30]. For an image having homogeneous reflective surface, the skewness of the luminance histogram and sub-band histograms are correlated with the shadows regions present in it [29]. In Eq. 1, μ is the average luminance of the object and σ is the standard deviation of the luminance of the object along with its neighbours in wavelet domain. The acceleration of (x − μ) with the cubic value enhances the edges around the object present in a scene; the summation and normalization make the edges of shadows blurry since they are soft and fasten the process of shadow detection. So, the skewness of the wavelet coefficients has the boundary information of the objects which helps us to clearly segregate them from the shadows. Also, the existing thresholds may not show any differentiation of either as object or shadow for those pixels whose values of saturation and hue are undefined. The mean μ and standard deviation σ of such patterns as shown in Fig. 2 are identical leading to misclassification. They differ only in the sign of skewness because of the cubic acceleration and normalization of the luminance value of the object. a Example Image1 having similar patterns. b Histogram of images in a. c Example Image2 having similar patterns. 
d Histogram of images in c Therefore, skewness is selected as a stable threshold in our work for classification of moving shadow pixels from the objects. Though the thresholds σ and (σ/μ) are able to detect and remove shadow, their performance degrades when the foreground object and the background share the same or dark colour. In the proposed method, we have applied the threshold on stationary wavelet coefficients of value component of HSV to detect moving object with shadow. For removal of shadow, we have applied logical AND on thresholded stationary wavelet coefficients of value component and those of saturation component. Instead of manual calibration of threshold which needs some predefined parameters, we propose a variant statistical parameter, i.e. skewness as a new threshold for detection and removal of shadow in SWT domain. Guan [26] and Khare et al. [27] applied the thresholds to the discrete wavelet coefficients (DWT) that losses the phase information. The loss of phase information in DWT and the difficulty faced when there are similar patterns of input frames are rectified by applying SWT and skewness as a threshold respectively. Proposed algorithm The proposed approach considers a reference frame as the background model from the video sequence and the consecutive frames are processed one by one to detect the foreground and shadow pixels. The reference frame and the current frame are converted from RGB colour space to HSV colour space. The absolute difference of the two frames is taken with respect to hue, saturation and value component. They are represented as ΔH, ΔV and ΔS respectively. The stationary wavelet transform is applied to the absolute difference components of value (ΔV) and saturation (ΔS) to get the wavelet coefficients denoted as WΔV and WΔS. Also, the variant statistical parameter skewness is calculated for the wavelet coefficients for the value and saturation component denoted as (Skew)WΔV and (Skew)WΔS respectively. They act as an automatic threshold to classify the moving pixels as foreground or shadow pixels. The pixels having a greater value than the automatic threshold "variant statistical value − skewness" of the value component are categorized as the moving pixels which include the foreground and shadow pixels. Equation 2 represents this situation. $$ M=\left\{\begin{array}{l}1,\mathrm{if}\ {W}_{\varDelta V}\ge {\left(\mathrm{Skew}\right)}_{W_{\varDelta V}}\\ {}0,\mathrm{otherwise}\end{array}\right. $$ Also, to remove the shadow pixels, we consider the automatic threshold "variant statistical value − skewness" of the saturation component. Equation 3 represents this situation clearly. $$ M=\left\{\begin{array}{l}1,\mathrm{if}\ {W}_{\varDelta V}\ge {\left(\mathrm{Skew}\right)}_{W_{\varDelta V}}\varLambda {W}_{\varDelta S}\ge {\left(\mathrm{Skew}\right)}_{W_{\varDelta S}}\\ {}0,\mathrm{otherwise}\end{array}\right. $$ Finally, the reconstruction of the shadow-detected image and shadow-removed image is done by the inverse wavelet transform. Binary closing morphological operations are applied to smoothen the reconstructed images. To make our performance effective and systematic, an extensive result on several well-known benchmarks for which ground truth data was available is presented. The chosen benchmarks consist of indoor and outdoor scenes from UCSD CVRR Laboratory, CAVIAR Test Scenarios and Institute for Infocomm Research (I2R). 
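The steps above can be summarized in the following short Python sketch. It is only one possible reading of the pipeline: the Haar wavelet, the use of the level-1 approximation band of the SWT as the coefficient image W, the closing of the binary masks directly (instead of reconstructing through the inverse SWT first) and the file names in the usage example are illustrative choices rather than requirements of the method, and the frame dimensions are assumed to be even, as needed for a level-1 SWT.

import cv2
import numpy as np
import pywt
from scipy.stats import skew

def shadow_masks(reference_bgr, current_bgr, wavelet="haar"):
    # uint8 BGR frames -> HSV (S and V in 0..255); absolute differences of S and V
    ref = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    cur = cv2.cvtColor(current_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    dS = np.abs(cur[..., 1] - ref[..., 1])
    dV = np.abs(cur[..., 2] - ref[..., 2])

    # one-level stationary wavelet transform; the undecimated approximation band
    # plays the role of the coefficient image W in this sketch
    wV = pywt.swt2(dV, wavelet, level=1)[0][0]
    wS = pywt.swt2(dS, wavelet, level=1)[0][0]

    moving = wV >= skew(wV, axis=None)                   # Eq. 2: object plus shadow pixels
    no_shadow = moving & (wS >= skew(wS, axis=None))     # Eq. 3: shadow pixels suppressed

    kernel = np.ones((5, 5), np.uint8)                   # binary closing to smoothen the masks
    close = lambda m: cv2.morphologyEx(m.astype(np.uint8) * 255,
                                       cv2.MORPH_CLOSE, kernel) > 0
    return close(moving), close(no_shadow)

# example usage with hypothetical file names:
# ref, cur = cv2.imread("background.png"), cv2.imread("frame_0100.png")
# object_with_shadow, object_only = shadow_masks(ref, cur)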
A regional language movie's song of our country named "Sarvam" is also used to test the proposed method with the other existing methods. Specifically, Intelligent Room, Hallway and CAVIAR are typical indoor environments while Highway I is an outdoor scenario of roadways. In addition, comparison with several existing methods to prove the superiority from the aspects of quality and quantity of our method, including Sanin et al. [12], Cucchiara et al. [7], Guan et al. [26] methods is done. The results are shown in Fig. 3. The first image (a) displays the original frames and (b) gives the ground truths of the corresponding frames in (a). Then the figure (c), (d) and (e) presents the results of the various methods [7, 12, 26] and finally (f) gives a visualization of the results of the proposed method. Visualized comparison results of the proposed method with others. a Input frame from Highway I, Intelligent Room, and Hallway. b Ground truths of the respective frames from video sequence. c Results of Sanin et al. [12]. d Results of Cucchiara et al. [7]. e Results of Guan et al. [26]. f Results of the proposed method The performance of the proposed method against previously used thresholds is discussed. Figures 4 and 5 shows the vertical, horizontal and diagonal wavelet coefficients of the absolute difference of the reference frame and current frame of Highway I video sequence (frame no: 100, outdoor) and Intelligent Room video sequence (frame no: 130, indoor) plotted against their frequency of occurrence respectively. The shadow regions have lower values than the object region regions [12, 13, 29]. In both the figures, the original shadow regions are highlighted with the grey line and arrow mark indicates the end of the shadow points. The blue square indicates the detected shadow pixels using our proposed method—skewness as threshold. The black and red squares indicate the detected shadow pixels using standard deviation [26] (σ) and relative standard deviation [27] (σ/μ). The plots clearly indicate that our proposed method detects perfectly the shadow pixels than the previously used thresholds. Plot of a vertical, b horizontal and c diagonal wavelet coefficients of frame no. 100 from Highway I video sequence: The grey colour line shows the original shadow pixels, and the arrow mark in grey colour indicates the end of the shadow points. The blue colour square indicates the detected shadow pixels using skewness, and the arrow mark in blue colour indicates the end of the shadow points. The black and red squares indicate the detected shadow pixels using standard deviation [26] (σ) and relative standard deviation [27] (σ/μ) Plot of a vertical, b horizontal and c diagonal wavelet coefficients of frame no. 130 from Intelligent Room video sequence: The grey colour line shows the original shadow pixels, and the arrow mark in grey colour indicates the end of the shadow points. The blue colour square indicates the detected shadow pixels using skewness, and the arrow mark in blue colour indicates the end of the shadow points. The black and red squares indicate the detected shadow pixels using standard deviation [26] (σ) and relative standard deviation [27] (σ/μ) In order to analyse the proposed method objectively and quantitatively, the shadow detection rate η and shadow discrimination rate ε [20] are considered. The effectiveness of the proposed method is shown along with the other methods in Table 1 giving the shadow detection rate (η) and Table 2 giving the shadow discrimination rate (ε). 
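The two rates can be computed from binary masks as in the Python sketch below; it reflects one common reading of the definitions, namely the fraction of ground-truth shadow pixels correctly labelled as shadow for η and the fraction of ground-truth object pixels kept out of the shadow class for ε, and the mask names are placeholders.

import numpy as np

def shadow_rates(pred_shadow, gt_shadow, gt_object):
    # pred_shadow: boolean shadow mask produced by the detector
    # gt_shadow, gt_object: ground-truth shadow and moving-object masks of the same shape
    tp_s = (pred_shadow & gt_shadow).sum()        # shadow pixels correctly labelled as shadow
    fn_s = (~pred_shadow & gt_shadow).sum()       # shadow pixels that were missed
    tp_f = (~pred_shadow & gt_object).sum()       # object pixels kept out of the shadow class
    fn_f = (pred_shadow & gt_object).sum()        # object pixels wrongly labelled as shadow
    eta = tp_s / max(tp_s + fn_s, 1)
    eps = tp_f / max(tp_f + fn_f, 1)
    return eta, eps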
Also, the computation time needed to process for a single frame in seconds of the proposed method along with the other methods is given in Table 3. Table 1 Shadow detection rate (η) for representative video sequences Table 2 Shadow discrimination rate (ε) for representative video sequences Table 3 Total computation time of various methods The shadow discrimination rate of our method for various videos compared with the thresholds used by Guan [26] and Khare et al. [27] is shown in Fig. 6. The visualized comparison of the proposed method with the state-of-art-methods is shown in Fig. 7. In all the sub-figures, the first image shows the input frame, the second shows the corresponding ground truth, the third shows the proposed method results and the fourth and fifth show the results of the state-of-art-methods [26, 27] respectively. The video sequences used are Highway sequence, regional language movie song (Sarvam) and Lobby Hall. All the performance measures show that the proposed method outperforms than the existing and state-of-art-methods in the literature. Shadow discrimination rate of various videos using skewness (proposed method), standard deviation [26] and relative standard deviation [27]. a Shadow discrimination rate in Highway I sequence. b Shadow discrimination rate in Shopping Mall. c Shadow discrimination rate in "Sarvam" regional movie song. d Shadow discrimination rate in Lobby Hall Visualized comparison results of the proposed method with the state-of-art-methods. The first frame is the input, second is the ground truth, third is the proposed method result, fourth is the Guan [26] method result and fifth is the Khare et al. [27] method result. a Highway I video sequence and Lobby Hall video sequence. b "Sarvam" regional movie song—long and short shadows The existing moving shadow detection methods in wavelet domain using DWT lack in translation invariance and loss phase information during reconstruction. To address these issues, we have used stationary wavelet transform. The novelty of our method lies in the usage of a variant statistical parameter skewness that shows a difference for any kind of pattern. Previously used thresholds like standard deviation (σ) and relative standard deviation (σ/μ) will remain the same for undefined values of saturation and hue leading to misclassification. Whenever there is equality in RGB values, the conversion of such pixels to HSV colour space produces an undefined saturation and hue value [31]. The proposed threshold has been tested with various benchmark videos and compared with the existing thresholds to prove its usability. The limitation of the proposed method is the redundancy of the wavelet coefficients which can be reduced by incorporating global feature preservation techniques like principal component analysis (PCA) or linear discriminant analysis (LDA) [32] with the wavelet transform as in many object and face detection methods. In this article, a stationary wavelet transform (SWT)-based shadow detection method is proposed. A new threshold based on the variant statistical feature, calculated with the help of the wavelet coefficients, is used to classify the moving objects and shadows. When compared with the discrete wavelet transform (DWT), the shift invariance and multi-resolution property of SWT supports the reconstruction of the image from the sub-bands without the loss of phase information. Also, the determined threshold from the wavelet coefficients helps in a unique discrimination of the shadows and objects. 
The promising results of the proposed method highlight the advantages of both the SWT and the variant statistical threshold. The performance of the method degrades for fast-moving objects and non-stationary backgrounds; extending this work to non-stationary backgrounds is therefore left for future work. The redundancy of the wavelet coefficients, which could be reduced by combining the wavelet transform with PCA or LDA, is another direction for future work.

References

[1] A. Sanin, C. Sanderson, B. C. Lovell, Shadow detection: a survey and comparative evaluation of recent methods. Pattern Recogn. 45(4), 1684–1695 (2012)
[2] A. Prati, R. Cucchiara, I. Mikic, M. M. Trivedi, Analysis and detection of shadows in video streams: a comparative evaluation. Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR) 2, 571–576 (2001)
[3] C.-T. Chen, C.-Y. Su, W.-C. Kao, An enhanced segmentation on vision-based shadow removal for vehicle detection. Proc. Int. Conf. on Green Circuits and Systems, 679–682 (2010)
[4] K. Nagarathinam, R. S. Kathavarayan, A survey of moving cast shadow detection methods. Int. J. Sci. Eng. Res. 5(5), 93–98 (2014)
[5] E. Salvador, A. Cavallaro, T. Ebrahimi, Shadow aware object-based video processing. Proc. IET Int. Conf. on Vision, Image and Signal Processing 152(4), 398–406 (2005)
[6] E. Salvador, A. Cavallaro, T. Ebrahimi, Cast shadow segmentation using invariant color features. Comput. Vis. Image Underst. 95(2), 238–259 (2004)
[7] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, Detecting moving objects, ghosts and shadows in video streams. IEEE Trans. Pattern Anal. Mach. Intell. 25(10), 1337–1342 (2003)
[8] A. Prati, R. Cucchiara, I. Mikic, M. M. Trivedi, Detecting moving shadows: algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 25(7), 918–923 (2003)
[9] L. Pei, R. Wang, Moving cast shadow detection based on PCA. Proc. Int. Conf. on Natural Computation 2, 581–584 (2009)
[10] A. Leone, C. Distante, Shadow detection for moving objects based on texture analysis. Pattern Recogn. 40(4), 1222–1233 (2007)
[11] R. Qin, S. Liao, Z. Lei, S. Z. Li, Moving cast shadow removal based on local descriptors. Proc. Int. Conf. on Pattern Recognition (ICPR), 1377–1380 (2010)
[12] A. Sanin, C. Sanderson, B. C. Lovell, Improved shadow removal for robust person tracking in surveillance scenarios. Proc. Int. Conf. on Pattern Recognition (ICPR), 141–144 (2010)
[13] L. Zhang, X. M. He, Fake shadow detection based on SIFT features matching. Proc. WASE Int. Conf. on Information Engineering, 216–220 (2010). doi:10.1109/ICIE.2010.58
[14] S. Kumar, A. Kaur, Algorithm for shadow detection in real-colour images. Int. J. Comput. Sci. Eng. 2, 2444–2446 (2010)
[15] J. Zhu, K. G. G. Samuel, S. Z. Masood, M. F. Tappen, Learning to recognize shadows in monochromatic natural images. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 223–230 (2010). doi:10.1109/CVPR.2010.5540209
[16] J. Vinitha Panicker, M. Wilscy, Detection of moving cast shadows using edge information. Proc. 2nd Int. Conf. on Computer and Automation Engineering, 817–821 (2010). doi:10.1109/ICCAE.2010.5451878
[17] F. Cogun, A. E. Cetin, Moving shadow detection in video using cepstrum. Int. J. Adv. Robot. Syst. 10(18) (2013). doi:10.5772/52942
[18] B. Sun, S. Li, Moving cast shadow detection of vehicle using combined color models. Proc. Chinese Conf. on Pattern Recognition, 1–5 (2010). doi:10.1109/CCPR.2010.5659321
[19] V. J. D. Tsai, A comparative study on shadow compensation of color aerial images in invariant color models. IEEE Trans. Geosci. Remote Sens. 44, 1661–1671 (2006)
[20] K.-L. Chung, Y.-R. Lin, Y.-H. Huang, Efficient shadow detection of color aerial images based on successive thresholding scheme. IEEE Trans. Geosci. Remote Sens. 47, 671–682 (2009)
[21] G. D. Finlayson, C. Fredembach, M. S. Drew, Detecting illumination in images. Proc. IEEE 11th Int. Conf. on Computer Vision (ICCV), 1–8 (2007)
[22] W. Shi, J. Li, Shadow detection in color aerial images based on HSI space and color attenuation relationship. EURASIP J. Adv. Signal Process. 2012(1), 141 (2012)
[23] E. Kee, J. F. O'Brien, H. Farid, Exposing photo manipulation from shading and shadows. ACM Trans. Graph. 33(5), 165:1–165:21 (2014)
[24] Y. Ke, F. Qin, W. Min, G. Zhang, Exposing image forgery by detecting consistency of shadow. Sci. World J., Article ID 364501, 9 pages (2014). doi:10.1155/2014/364501
[25] H. Farid, How to detect faked photos. American Scientist, 77–81, March–April issue (2017)
[26] Y. P. Guan, Spatio-temporal motion-based foreground segmentation and shadow suppression. IET Comput. Vis. 4(1), 50–60 (2010)
[27] M. Khare, R. K. Srivastava, A. Khare, Moving shadow detection and removal—a wavelet transform based approach. IET Comput. Vis. 8(6), 701–717 (2014)
[28] Y. Zhang, S. Wang, Y. Huo, L. Wu, A. Liu, Feature extraction of brain MRI by stationary wavelet transform and its applications. J. Biol. Syst. 18(spec01), 115–132 (2010)
[29] L. Sharan, E. H. Adelson, I. Motoyoshi, S. Nishida, Non-oriented filters are better than oriented filters for skewness detection. Perception 36, 6a (2007)
[30] M. Golchin, F. Khalid, L. N. Abdullah, S. H. Davarpanah, Shadow detection using color and edge information. J. Comput. Sci. 9(11), 1575–1588 (2013)
[31] K. Nagarathinam, R. S. Kathavarayan, Moving shadow detection in videos using HSI color space along hue singular pixels. International Journal of Printing, Packaging & Allied Sciences 4(2), 1217–1225 (2016)
[32] K. Ruba Soundar, K. Murugesan, Preserving global and local features—a combined approach for recognizing face images. Int. J. Pattern Recognit. Artif. Intell. 24(1), 39–53 (2010)

Author details: Kavitha Nagarathinam, Assistant Professor, CSE Department, Saranathan College of Engineering, Trichy, Tamil Nadu, India. Ruba Soundar Kathavarayan, Professor, CSE Department, P.S.R Engineering College, Sivakasi, Tamil Nadu, India. Correspondence to Kavitha Nagarathinam.
Funding: We declare that there is no funding source for this work as of now.
Authors' contributions: KN contributed major effort in the study conception, design, acquisition and interpretation of the data, and manuscript preparation. RSK contributed major effort in the critical revision of the design and in the interpretation and review of the manuscript. Both authors read and approved the final manuscript.
Keywords: Object tracking; Shadow detection; Video forensics and surveillance; Wavelet transform.
Citation: Nagarathinam, K., Kathavarayan, R. S.: Moving shadow detection based on stationary wavelet transform. J. Image Video Proc. 2017, 49 (2017). doi:10.1186/s13640-017-0198-x
Global attractors, exponential attractors and determining modes for the three dimensional Kelvin-Voigt fluids with "fading memory"

Manil T. Mohan, Department of Mathematics, Indian Institute of Technology Roorkee-IIT Roorkee, Haridwar Highway, Roorkee, Uttarakhand 247667, India (corresponding author).
Evolution Equations & Control Theory, February 2022, 11(1): 125–167. doi: 10.3934/eect.2020105. Received October 2019; revised September 2020; early access December 2020; published February 2022.

Abstract. The three-dimensional viscoelastic fluid flow equations, arising from the motion of Kelvin-Voigt (Kelvin-Voight) fluids in bounded domains, are considered in this work. We investigate the long-term dynamics of such viscoelastic fluid flow equations with "fading memory" (non-autonomous). We first establish the existence of an absorbing ball in appropriate spaces for the semigroup defined by the Kelvin-Voigt fluid flow equations of order one with "fading memory" (transformed autonomous coupled system). Then we prove that the semigroup is asymptotically compact, and hence we establish the existence of a global attractor for the semigroup. We provide estimates for the number of determining modes, both asymptotically and for trajectories on the global attractor. Once the differentiability of the semigroup with respect to the initial data is established, we show that the global attractor has finite Hausdorff as well as fractal dimension. We also show the existence of an exponential attractor for the semigroup associated with the transformed (equivalent) autonomous Kelvin-Voigt fluid flow equations with "fading memory". Finally, we show that the semigroup has Ladyzhenskaya's squeezing property and hence is quasi-stable, which also implies the existence of global as well as exponential attractors having finite fractal dimension.

Keywords: Kelvin-Voigt fluids, global attractors, exponential attractors, quasi-stability, determining modes, fractal dimension, Hausdorff dimension.
Mathematics Subject Classification: Primary 37L30; Secondary 35Q35, 35Q30, 35B40.
\begin{document} \begin{abstract} An explicit formula for the mean value of $\vert L(1,\chi)\vert^2$ is known, where $\chi$ runs over all odd primitive Dirichlet characters of prime conductors $p$. Bounds on the relative class number of the cyclotomic field ${\mathbb Q}(\zeta_p)$ follow. Lately the authors obtained that the mean value of $\vert L(1,\chi)\vert^2$ is asymptotic to $\pi^2/6$, where $\chi$ runs over all odd primitive Dirichlet characters of prime conductors $p\equiv 1\pmod{2d}$ which are trivial on a subgroup $H$ of odd order $d$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$, provided that $d\ll\frac{\log p}{\log\log p}$. Bounds on the relative class number of the subfield of degree $\frac{p-1}{2d}$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$ follow. Here, for a given integer $d_0>1$ we consider the same questions for the non-primitive odd Dirichlet characters $\chi'$ modulo $d_0p$ induced by the odd primitive characters $\chi$ modulo $p$. We obtain new estimates for Dedekind sums and deduce that the mean value of $\vert L(1,\chi')\vert^2$ is asymptotic to $\frac{\pi^2}{6}\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right )$, where $\chi$ runs over all odd primitive Dirichlet characters of prime conductors $p$ which are trivial on a subgroup $H$ of odd order $d\ll\frac{\log p}{\log\log p}$. As a consequence we improve the previous bounds on the relative class number of the subfield of degree $\frac{p-1}{2d}$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$. Moreover, we give a method to obtain explicit formulas and use Mersenne primes to show that our restriction on $d$ is essentially sharp. \end{abstract} \maketitle \footnotetext{ 2010 Mathematics Subject Classification. Primary. 11F20. 11R42. 11M20, 11R20, 11R29. 11J71. Key words and phrases. Dirichlet character. $L$-function. Mean square value. Relative class number. Dedekind sums. Cyclotomic field. Discrepancy. Multiplicative subgroup.} \section{Introduction} For the beginning of this introduction, we refer the reader to \cite{Was} and to Lemma \ref{kernel} below (which is probably well known but for which we found no reference in the literature).\\ Let $X_f$ be the multiplicative group of the $\phi (f)$ Dirichlet characters modulo $f>2$. Let $X_f^- =\{\chi\in X_f;\ \chi (-1)=-1\}$ be the set of the $\phi (f)/2$ odd Dirichlet characters modulo $f$. Let $L(s,\chi)$ be the Dirichlet $L$-function associated with $\chi\in X_f$. Let $H$ denote a subgroup of index $m$ in the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. We assume that $-1\not\in H$. Hence $m$ is even. We set $X_f(H) =\{\chi\in X_f;\ \chi_{/H}=1\}$, a subgroup of order $m$ of $X_f$ isomorphic to the group of Dirichlet characters of the abelian quotient group $G/H$ of order $m$. Define $X_f^-(H)=\{\chi\in X_f^-;\ \chi_{/H}=1\}$, a set of cardinal $m/2$. The mean square value of $L(1,\chi)$ as $\chi$ ranges in $X_f^-(H)$ is defined by \begin{equation}\label{M(f,H)} M(f,H) :={1\over\# X_f^-(H)}\sum_{\chi\in X_f^-(H)}\vert L(1,\chi)\vert^2. \end{equation} Let $K$ be an abelian number field of degree $m$ and prime conductor $p\geq 3$, i.e. let $K$ be a subfield of the cyclotomic number field ${\mathbb Q}(\zeta_p)$ (Kronecker-Weber's theorem). The Galois group ${\rm Gal}({\mathbb Q}(\zeta_p)/{\mathbb Q})$ is canonically isomorphic to the multiplicative cyclic group $({\mathbb Z}/p{\mathbb Z})^*$ and $H :={\rm Gal}({\mathbb Q}(\zeta_p)/K)$ is a subgroup of $({\mathbb Z}/p{\mathbb Z})^*$ of index $m$ and order $d=(p-1)/m$. 
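For instance, for $p=7$ one may take $H=\{1,2,4\}$, the subgroup of order $d=3$ of $({\mathbb Z}/7{\mathbb Z})^*$; then $m=2$, the field $K$ is the imaginary quadratic subfield ${\mathbb Q}(\sqrt{-7})$ of ${\mathbb Q}(\zeta_7)$, and $X_7^-(H)$ consists of the single odd character trivial on $H$, namely the Legendre symbol modulo $7$.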
Now, assume that $K$ is imaginary. Then $d$ is odd, $m$ is even, $-1\not\in H$ and the set $$X_K^- :=X_p^-(H) :=\{\chi\in X_p^-;\hbox{ and } \chi_{/H}=1\}$$ is of cardinal $(p-1)/(2d) =m/2$. Let $K^+$ be the maximal real subfield of $K$ of degree $m/2$ fixed by the complex conjugation. The class number $h_{K^+}$ of $K^+$ divides the class number $h_K$ of $K$. The relative class number of $K$ is defined by $h_K^- =h_K/h_{K^+}$. The analytic class number formula and the arithmetic-geometric mean inequality give \begin{equation}\label{boundhKminus} h_K^- =w_K\left (\frac{p}{4\pi^2}\right )^{m/4}\prod_{\chi\in X_K^-} L(1,\chi) \leq w_K\left ({pM(p,H)\over 4\pi^2}\right )^{m/4}, \end{equation} where $w_K$ is the number of complex roots of unity in $K$. Hence $w_K=2p$ for $K ={\mathbb Q}(\zeta_p)$ and $w_K=2$ otherwise. In \cite[Theorem 1.1]{LMQJM} we proved that \begin{equation}\label{asymptoticMpH} M(p,H) =\frac{\pi^2}{6}+o(1) \end{equation} as $p$ tends to infinity and $H$ runs over the subgroups of $({\mathbb Z}/p{\mathbb Z})^*$ of odd order $d\leq\frac{\log p}{3(\log\log p)}$ \footnote{This restriction on $d$ is probably optimal, by (\ref{Mpd}).}. Hence, by \eqref{boundhKminus} we have \begin{equation}\label{boundhKgen} h_{K}^{-} \leq w_K\left(\frac{(1+o(1))p}{24}\right)^{(p-1)/4d}. \end{equation} In some situations it is even possible to give an explicit formula for $M(p,H)$ implying a completely explicit bound for $h_K^-$. Indeed, by \cite{W} and \cite{Met} (see also (\ref{M(f,1)})), we have: \begin{equation}\label{Mp1} M(p,\{1\}) ={\pi^2\over 6}\left (1-{1\over p}\right )\left (1-{2\over p}\right ) \leq\frac{\pi^2}{6} \ \ \ \ \ \hbox{($p\geq 3$)}. \end{equation} Hence, \begin{equation}\label{boundhpminus24} h_{{\mathbb Q}(\zeta_p)}^- \leq 2p\left ({pM(p,\{1\})\over 4\pi^2}\right )^{(p-1)/4} \leq 2p\left ({p\over 24}\right )^{(p-1)/4}. \end{equation} We refer the reader to \cite{Gra} for more information about the expected size of $h_{{\mathbb Q}(\zeta_p)}^-$. The only other situation where a similar explicit result is known is the following one (see Theorem \ref{thp3M2}) for a new proof): \begin{theorem*}\label{thp3} (See \footnote{Note the misprint in the exponent in \cite[(8)]{LouBPASM64}.} \cite[Theorem 1]{LouBPASM64}). Let $p\equiv 1\pmod 6$ be a prime integer. Let $K$ be the imaginary subfield of degree $(p-1)/3$ of the cyclotomic number field ${\mathbb Q}(\zeta_p)$. Let $H$ be the subgroup of order $3$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. We have (compare with (\ref{Mp1}) and (\ref{boundhpminus24})) $$M(p,H) ={\pi^2\over 6}\left (1-{1\over p}\right ) \leq\frac{\pi^2}{6} \hbox{ and } h_K^-\leq 2\left (\frac{p}{24}\right )^{(p-1)/12}.$$ \end{theorem*} In \cite{LouCMB36/37} (see also \cite{LouPMDebr78}), the following simple argument allowed to improve on \eqref{boundhpminus24}. Let $d_0>1$ be a given integer. Assume that $\gcd (d_0,f)=1$. For $\chi$ modulo $f$ let $\chi'$ be the character modulo $d_0f$ induced by $\chi$. Then, \begin{equation}\label{L1XXprime} L(1,\chi) =L(1,\chi')\prod_{q\mid d_0}\left (1-\frac{\chi(q)}{q}\right )^{-1} \end{equation} (throughout the paper this notation means that $q$ runs over the prime divisors of $d_0$). Let $H$ be a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$, with $-1\not\in H$. 
We define \begin{equation}\label{Md0(f,H)} M_{d_0}(f,H) :={1\over\# X_f^-(H)}\sum_{\chi\in X_f^-(H)}\vert L(1,\chi')\vert^2 \end{equation} and\footnote{Note that $\Pi_{d_0}(f,H)\in {\mathbb Q}_+^*$, by Lemma \ref{formulaPi}.} \begin{equation}\label{Pi} \Pi_{d_0}(f,H) :=\prod_{q\mid d_0}\prod_{\chi\in X_f^-(H)}\left (1-\frac{\chi(q)}{q}\right ) \text{ and } D_{d_0}(f,H) :=\Pi_{d_0}(f,H)^{4/m}. \end{equation} Clearly there is no restriction in assuming from now on that $d_0$ is square-free. Using (\ref{L1XXprime}), we obtain (compare with (\ref{boundhKminus})): \begin{equation}\label{boundhKminusd0} h_K^- =\frac{w_K}{\Pi_{d_0}(p,H)} \left (\frac{p}{4\pi^2}\right )^{m/4}\prod_{\chi\in X_K^-}L(1,\chi') \leq w_K\left (\frac{pM_{d_0}(p,H)}{4\pi^2D_{d_0}(p,H)}\right )^{m/4}. \end{equation} Assume that no prime divisor $q$ of $d_0$ is in $H$ (see Remarks \ref{restrictiond0}). By Corollary \ref{PiKminus}, we know that \begin{equation}\label{Dpestimate} D_{d_0}(p,H) =1+o(1)\end{equation} and we expect that \begin{equation}\label{asymptoticMd0(p,H)M(p,H)} M_{d_0}(p,H) \sim\left\{\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right )\right\} \times M(p,H). \end{equation} Hence, (\ref{boundhKminusd0}) should indeed improve on (\ref{boundhKminus}). The aim of this paper is two-fold. Firstly we give an asymptotic formula for $M_{d_0}(p,H)$ when $d$ satisfies the same restriction as in \eqref{asymptoticMpH} allowing us to improve on the bound \eqref{boundhKgen}. Secondly we treat the case of groups of order $1$ and $3$ for small $d_0$'s as well as the case of Mersenne primes and groups of size $\approx \log p$. In both cases an explicit description of these subgroups allows us to obtain explicit formulas for $M_{d_0}(p,H)$. Our main result is the following: \noindent\frame{\vbox{ \begin{theorem}\label{asympd0} Let $d_0\geq 1$ be a given square-free integer. As $p$ tends to infinity and $H$ runs over the subgroups of $({\mathbb Z}/p{\mathbb Z})^*$ of odd order $d\leq \frac{\log p}{3(\log\log p)}$, we have the asymptotic formula $$M_{d_0}(p,H) =\frac{\pi^2}{6}\prod_{q\mid d_0}\left (1-{1\over q^2}\right ) +O(d(\log p)^2 p^{-\frac{1}{d-1}}) =\frac{\pi^2}{6}\prod_{q\mid d_0}\left (1-{1\over q^2}\right ) +o(1).$$ This implies that for any $C>4\pi ^2$ and $p$ large enough, we have the bound \begin{equation}\label{boundhKopt} h_K^- \leq w_K\left (\frac{p}{C}\right )^{(p-1)/4d}, \end{equation} by \eqref{boundhKminusd0}, \eqref{Dpestimate} and choosing $d_0$ large enough as the product of the first primes. \end{theorem} }} \begin{Remark} The special case $d_0=1$ was proved in \cite[Theorem $1$]{LMQJM}. Note that the restriction on $d$ cannot be extended further to the range $d=O(\log p)$ as shown by Theorem \ref{TheoremMersenne}. Moreover the constant in \eqref{boundhKopt} cannot be taken smaller than $4\pi^2$, see the discussion about Kummer's conjecture in \cite{MP01}. \end{Remark} In the first part of the paper, the presentation goes as follows: \begin{itemize} \item In Section \ref{genlemmas}, we explain the condition about the prime divisors of $d_0$ and prove that $D_{d_0}(p,H) =1+o(1)$. \item In Section \ref{Dedekind}, we review some results on Dedekind sums and prove a new bound of independent interest for Dedekind sums $s(h,f)$ with $h$ being of small order modulo $f$ (see Theorem \ref{indivbound}). To do so we use techniques from uniform distribution and discrepancy theory. Then we relate $M_{d_0}(p,H)$ to twisted moments of $L$- functions which we further express in terms of Dedekind sums. 
For the sake of clarity, we first treat separately the case $H=\{1\}$. Note that we found that this case is related to elementary sums of maxima that we could not estimate directly, see Section \ref{sectionsummax}. Using our estimates on Dedekind sums we deduce the asymptotic formula of Theorem \ref{asympd0} and the related class number bounds. \end{itemize} In the second part of the paper, we focus on the explicit aspects. Let us describe briefly our presentation: \begin{itemize} \item In Section \ref{Casetrivialgroup} we establish formulas for $M_{d_0}(f,\{1\})$ for $d_0\in\{1,2,3,6\}$ and $\gcd (d_0,f)=1$ (such formulae become harder to come by as $d_0$ gets larger). For example, for $p\geq 5$ and $d_0=6$, using Theorem \ref{M3M6} we obtain the following formula for $M_6(p,\{1\})$: $$M_6(p,\{1\}) ={\pi^2\over 9}\left (1-\frac{c_p}{p}\right ) \leq\frac{\pi^2}{9}, \hbox{ where } c_p =\begin{cases} 1&\hbox{if $p\equiv 1\pmod 3$}\\ 0&\hbox{if $p\equiv 2\pmod 3$}\\ \end{cases}$$ which by (\ref{boundhKminusd0}) and Corollary \ref{PiKminus} give improvements on (\ref{boundhpminus24}) (see also \cite{Feng} and \cite{LouCMB36/37}) $$h_{{\mathbb Q}(\zeta_p)}^- \leq 3p\left ({p\over 36}\right )^{(p-1)/4}.$$ Based on these results and on a lot of numerical computations, we also state a conjecture for a general $d_0$. In Section \ref{sectionformulad0} we obtain an explicit formula of the form \begin{equation}\label{Md0pHintro} M_{d_0}(p,H) =\frac{\pi^2}{6} \left\{\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right )\right\} \left (1+\frac{N_{d_0}(p,H)}{p}\right ), \end{equation} where $N_{d_0}(p,H)$ defined in (\ref{defNd0fH}) is an explicit average of Dedekind sums. In Proposition \ref{H=1} we prove that $N_{d_0}(p,\{1\})\in {\mathbb Q}$ depends only on $p$ modulo $d_0$ and is easily computable. \item For $H\neq\{1\}$ explicit formulae for $M_{d_0}(p,H)$ seem difficult to come by. In Section \ref{genMersenne}, we focus on Mersenne primes $p=2^d-1$, with $d$ odd. We take $H=\{2^k;\ 0\leq k\leq d-1\}$, a subgroup of odd order $d$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. For $d_0\in\{1,3,15\}$ we prove in Theorem \ref{Mersenne} that $$M_{d_0}(p,H) =\frac{\pi^2}{2}\left\{\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right )\right\} \left (1+\frac{N_{d_0}'(p,H)}{p}\right ),$$ where $N_{d_0}'(p,H)=a_1(p)d+a_0(p)$ with $a_1(p),a_0(p)\in {\mathbb Q}$ depending only on $p=2^d-1$ modulo $d_0$ and easily computable. In this range $d \gg \log p$, we see that $M_{d_0}(p,H)$ has a different asymptotic behavior than in Theorem \ref{asympd0}. \item In Section \ref{Sectiond=3}, we turn to the specific case of subgroups of order $3$. Writing $f=a^2+ab+b^2$ not necessarily prime, and taking $H=\{1,a/b,b/a\}$, the subgroup of order $3$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$, we prove in Proposition \ref{boundsSab} that $N_{d_0}(f,H) =O(\sqrt f)$ in (\ref{Md0pHintro}) for $d_0\in\{1,2,3,6\}$. To do so we obtain bounds for the Dedekind sums stronger than Theorem \ref{indivbound}. Note that this cannot be expected in general for subgroups of order $3$ modulo composite $f$ (see Remark \ref{compositebound} and \ref{dedekindtwosizes}). Furthermore we show that these bounds are sharp in the case of primes $p=a^2+a+1$, in accordance with Conjecture \ref{conjDedekind}. 
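For instance, $p=13=3^2+3+1$ is such a prime, and the corresponding subgroup is $H=\{1,3,9\}=\langle 3\rangle$, the subgroup of order $3$ of $({\mathbb Z}/13{\mathbb Z})^*$ obtained from $(a,b)=(3,1)$ since $a/b\equiv 3$ and $b/a\equiv 9\pmod{13}$.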
\end{itemize} \section{Preliminaries}\label{genlemmas} \subsection{Algebraic considerations} Since there are infinitely many prime integers in the arithmetic progressions $a+f{\mathbb Z}$ for $\gcd (a,f)=1$, the canonical morphism $s_{d_0}:({\mathbb Z}/d_0f{\mathbb Z})^*\longrightarrow ({\mathbb Z}/f{\mathbb Z})^*$ is surjective and its kernel is of order $\phi(d_0)$. Then $H_{d_0} =s_{d_0}^{-1}(H)$ is a subgroup of order $\phi(d_0)d$ of $({\mathbb Z}/d_0f{\mathbb Z})^*$ and as $\chi$ runs over $X_f^-(H)$ the $\chi'$'s run over $X_{d_0f}^-(H_{d_0})$ and by (\ref{M(f,H)}) and (\ref{Md0(f,H)}) we have: \begin{equation}\label{MM} M_{d_0}(f,H) =M(d_0f,H_{d_0}). \end{equation} \begin{lemma}\label{kernel} Let $f>2$. Let $H$ be a subgroup of index $m =(G:H)$ in the multiplicative group $G:=({\mathbb Z}/f{\mathbb Z})^*$. Then $\# X_f(H) =m$ and $H=\cap_{\chi\in X_f(H)}\ker\chi$. Moreover, if $-1\not\in H$, then $m$ is even, $\# X_f^-(H)=m/2$ and $H=\cap_{\chi\in X_f^-(H)}\ker\chi$. \end{lemma} \begin{proof} Since $X_f(H)$ is isomorphic to the group of Dirichlet characters of the abelian quotient group $G/H$, it is of order $m$, by \cite[Chapter VI, Proposition 2]{Ser}. Clearly, $H\subseteq \cap_{\chi\in X_f(H)}\ker\chi$. Conversely, take $g\not\in H$, of order $n\geq 2$ in the abelian quotient group $G/H$. Define a character $\chi$ of the subgroup $\langle g,H\rangle$ of $G$ generated by $g$ and $H$ by $\chi(g^kh) =\exp(2\pi ik/n)$, $(k,h)\in {\mathbb Z}\times H$. It extends to a character of $G$ still denoted $\chi$, by \cite[Chapter VI, Proposition 1]{Ser}. Since $g\not\in\ker\chi$ and $\chi\in X_f(H)$ we have $g\not\in \cap_{\chi\in X_f(H)}\ker\chi$, i.e. $ \cap_{\chi\in X_f(H)}\ker\chi\subseteq H$. Now, assume that $-1\not\in H$. Set $H'=\langle -1,H\rangle$, of index $m/2$ in $G$. Then $X_f^-(H) =X_f(H)\setminus X_f(H')$ is indeed of order $m-m/2 =m/2$, by the first assertion. Clearly, $H\subseteq \cap_{\chi\in X_f^-(H)}\ker\chi$. Conversely, take $g\not\in H$. Set $H'':=\langle g,H\rangle=\{g^kh;\ k\in {\mathbb Z},\ h\in H\}$, of index $m''$ in $G$, with $m>m''$. If $-1=g^kh\in H''$ then clearly $\chi (g)\neq 1$ for $\chi\in X_f^-(H)$, hence $g\not\in \cap_{\chi\in X_f^-(H)}\ker\chi$. If $-1\not\in H''$ and $\chi\in X_f^-(H)\setminus X_f^-(H'')$, a non-empty set or cardinal $m/2-m''/2 =(H'':H)/2\geq 1$, then clearly $\chi (g)\neq 1$, hence $g\not\in \cap_{\chi\in X_f^-(H)}\ker\chi$. Therefore, $\cap_{\chi\in X_f^-(H)}\ker\chi\subseteq H$. \end{proof} \begin{Remark}\label{restrictiond0} We have $M_{d_0}(p,H)/D_{d_0}(p,H) =M_{d_0/q}(p,H)/D_{d_0/q}(p,H)$ whenever a prime $q$ dividing $d_0$ is in $\cap_{\chi\in X_p^-(H)}\ker\chi$. Hence, by Lemma \ref{kernel}, when applying (\ref{boundhKminusd0}) we may assume that no prime divisor of $d_0$ is in $H$. \end{Remark} \subsection{On the size of $\Pi_{d_0}(f,H)$ and $D_{d_0}(f,H)$ defined in (\ref{Pi})}\label{sizeproduct} \begin{lemma}\label{formulaPi} Let $H$ be a subgroup of order $d\geq 1$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$, where $f>2$. Assume that $-1\not\in H$. Let $g$ be the order of a given prime integer $q$ in the multiplicative quotient group $({\mathbb Z}/f{\mathbb Z})^*/H$. Let $X_f(H)$ be the multiplicative group of the $\phi(f)/d$ Dirichlet characters modulo $f$ for which $\chi_{/H} =1$. Define $X_f^-(H) =\{\chi\in X_f(H);\ \chi (-1)=-1\}$, a set of cardinal $\phi(f)/2$. 
Then $$\Pi_{q}(f,H) :=\prod_{\chi\in X_f^-(H)}\left (1-\frac{\chi (q)}{q}\right ) =\begin{cases} \left (1-\frac{1}{q^g}\right )^{\frac{\phi(f)}{2dg}}&\hbox{if $g$ is odd,}\\ \left (1+\frac{1}{q^{g/2}}\right )^{\frac{\phi(f)}{dg}}&\hbox{if $g$ is even.} \end{cases}$$ \end{lemma} \begin{proof} Let $\alpha$ be of order $g$ in an abelian group $A$ of order $n$. Let $B=\langle\alpha\rangle$ be the cyclic group generated by $\alpha$. Let $\hat B$ be the group of the $g$ characters of $B$. Then $P_{B}(X) :=\prod_{\chi\in\hat B} (X-\chi (\alpha)) =X^g-1$. Now, the restriction map $\chi\in\hat A\rightarrow\chi_{/B}\in\hat B$ is surjective, by \cite[Proposition 1]{Ser}, and of kernel isomorphic to $\widehat{A/B}$ of order $n/g$, by \cite[Proposition 2]{Ser}. Therefore, $P_{A}(X) :=\prod_{\chi\in\hat A} (X-\chi (\alpha)) =P_B(X)^{n/g} =(X^g-1)^{n/g}$.\\ With $A=({\mathbb Z}/f{\mathbb Z})^*/H$ of order $n=\phi(f)/d$, we have $\hat A =X_f(H)$ and $\prod_{\chi\in X_f(H)}(X-\chi (q)) =(X^g-1)^{\frac{\phi(f)}{dg}}$. Let $H'$ be the subgroup of order $2d$ generated by $-1$ and $H$. With $A'=({\mathbb Z}/p{\mathbb Z})^*/H'$ of order $n'=\phi(f)/(2d)$, we have $\hat A' =X_f(H') =X_f^+(H):=\{\chi\in X_f(H);\ \chi (-1)=+1\}$ and $\prod_{\chi\in X_f^+(H)}(X-\chi (q)) =(X^{g'}-1)^{\frac{\phi(f)}{2dg'}}$, where $q$ is of order $g'$ in $A'$.\\ Since $X_f^-(H) =X_f(H)\setminus X_f^+(H)$, it follows that $$\prod_{\chi\in X_f^-(H)} (X-\chi (q)) =\frac{(X^g-1)^{\frac{\phi(f)}{dg}}}{(X^{g'}-1)^{\frac{\phi(f)}{2dg'}}}.$$ Since $g'=g$ if $g$ is odd and $g'=g/2$ if $g$ is even, the assertion follows. \end{proof} \begin{corollary}\label{PiKminus} Fix $d_0>1$ square-free. Let $p\geq 3$ run over the prime integers that do not divide $d_0$. Let $H$ run over the subgroups of odd order $d$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$, with $d =o(\log p)$. Then, \begin{equation}\label{behaviorPid0pH} D_{d_0}(p,H)\ =1+o(1). \end{equation} Moreover, $$\Pi _{d_0}(p,\{1\}) \geq\exp\left (\frac{\log d_0}{2}F(p+1)\right ), \hbox{ where for $x>1$ we set } F(x) :=\frac{(x-2)\log\left (1-\frac{1}{x}\right )}{\log x}.$$ In particular, $\Pi_{6}(p,\{1\}) \geq 2/3 \,\, \hbox{for $p\geq 5$.}$ \end{corollary} \begin{proof} Let $q$ be a prime divisor of $d_0$. Then $\left (1-\frac{1}{q^g}\right )^{\frac{2}{g}} \leq D_q(p,H) =\Pi_{q}(f,H)^{\frac{4d}{p-1}} \leq\left (1+\frac{1}{q^{g/2}}\right )^{\frac{4}{g}}$, by (\ref{Pi}) and Lemma \ref{formulaPi}, with $f=p$, $\phi(f)=p-1$ and $m =(p-1)/d$. Either $q^g\equiv 1\pmod p$, in which case $q^g\geq p+1$, or $q^g\equiv h\pmod p$ for some $h\in\{2,\cdots,p-1\}\cap H$, in which case $p$ divides $S:=1+h+\cdots +h^{d-1}$ which satisfies $p\leq S\leq 2h^{d-1}$. Therefore, in both cases, we have $q^{g}\geq (p/2)^{\frac{1}{d-1}}$. Hence $q^g$ tends to infinity in the range $d =o(\log p)$ and (\ref{behaviorPid0pH}) follows.\\ Notice that if $p=2^d-1$ runs over the Mersenne primes and $H=\langle 2\rangle$, we have $d=O(\log p)$ but $D_2(p,H) =\left (1-\frac{1}{2^2}\right )^2$ does not satisfy (\ref{behaviorPid0pH}). Now, assume that $H=\{1\}$. Then, $K={\mathbb Q}(\zeta_p)$ and $q^g\geq p+1$. Hence, $$\Pi_q(p,\{1\}) \geq\left (1-\frac{1}{p+1}\right )^{\frac{p-1}{2g}} \geq\left (1-\frac{1}{p+1}\right )^{\frac{(p-1)\log q}{2\log (p+1)}} =\exp\left (\frac{\log q}{2}F(p+1)\right ).$$ The desired lower bound easily follows. 
\end{proof} \section{Dedekind sums and mean square values of $L$-functions}\label{Dedekind} \subsection{Dedekind sums and Dedekind-Rademacher sums} The Dedekind sums is the rational integer defined by \begin{equation}\label{defscd} s(c,d) ={1\over 4d}\sum_{n=1}^{\vert d\vert -1}\cot\left ({\pi n\over d}\right )\cot\left ({\pi nc\over d}\right ) \ \ \ \ \ (c\in {\mathbb Z},\ d\in {\mathbb Z}\setminus\{0\},\ \gcd (c,d)=1), \end{equation} with the convention $s(c,-1)=s(c,1)=0$ for $c\in {\mathbb Z}$ (see \cite{Apo} or \cite{RG} where it is however assumed that $d>1$). It depends only on $c$ mod $\vert d\vert$ and $c\mapsto s(c,d)$ can therefore be seen as an application from $({\mathbb Z}/\vert d\vert {\mathbb Z})^*$ to ${\mathbb Q}$. Notice that \begin{equation}\label{cc*} s(c^*,d)=s(c,d) \text{ whenever } cc^*\equiv 1\pmod d \end{equation} (make the change of variables $n\mapsto nc$ in $s(c^*,d)$). Recall the reciprocity law for Dedekind sums \begin{equation}\label{scddc} s(c,d)+s(d,c) ={c^2+d^2-3\vert cd\vert +1\over 12cd}, \ \ \ \ \ (c,d\in {\mathbb Z}\setminus\{0\},\ \gcd(c,d)=1). \end{equation} In particular, \begin{equation}\label{s1d} s(1,d) =\frac{d^2-3\vert d\vert +2}{12d} \text{ and } s(2,d) =\frac{d^2-6\vert d\vert +5}{24d} \ \ \ \ \ (d\in {\mathbb Z}\setminus\{0\}). \end{equation} For $b,c\in {\mathbb Z},\ d\in {\mathbb Z}\setminus\{-1,0,1\}$ such that $\gcd (b,d) =\gcd (c,d)=1$, the Dedekind-Rademacher sum is the rational integer defined by $$s(b,c,d) ={1\over 4d}\sum_{n=1}^{\vert d\vert -1}\cot\left ({\pi nb\over d}\right )\cot\left ({\pi nc\over d}\right ),$$ with the convention $s(b,c,-1)=s(b,c,1)=0$ for $b,c\in {\mathbb Z}$. Hence, $s(c,d) =s(1,c,d)$, if $\alpha\in ({\mathbb Z}/\vert d\vert {\mathbb Z})^*$ is represented as $\alpha =b/c$ with $\gcd(b,d)=\gcd(c,d) =1$, then $s(\alpha,d) =s(b,c,d)$, and \begin{equation}\label{bcd=abacd} \hbox{$s(b,c,d) =s(ab,ac,d)$ for any $a\in {\mathbb Z}$ with $\gcd (a,d)=1$.} \end{equation} For $\gcd( b,c) =\gcd (c,d) =\gcd (d,b) =1$ we have a reciprocity law for Dedekind-Rademacher sums (see \cite{Rad} or \cite{BR}): \begin{equation}\label{sbcddcb} s(b,c,d)+s(d,b,c)+s(c,d,b) ={b^2+c^2+d^2-3\vert bcd\vert\over 12bcd}. \end{equation} The Cauchy-Schwarz inequality and (\ref{s1d}) yield \begin{equation}\label{boundscd} \vert s(c,d)\vert \leq s(1,\vert d\vert) \leq\vert d\vert/12 \hbox { and } \vert s(b,c,d)\vert \leq s(1,\vert d\vert) \leq\vert d\vert/12. \end{equation} \subsection{Non trivial bounds on Dedekind sums}\label{boundsDedekind} In this section we will use the alternative definition of the Dedekind sums given by $$s(c,d) =\sum_{a=1}^{d-1} \left(\left(\frac{a}{d}\right)\right) \left(\left(\frac{ac}{d}\right)\right) \ \ \ \ \ (c\in {\mathbb Z},\ d\geq 1,\ \gcd (c,d)=1)$$ where $ \left(\left(\right)\right):\mathbb{R} \rightarrow \mathbb{R}$ stands for the sawtooth function defined by $$\left(\left(x\right)\right):= \begin{cases} x-\lfloor x\rfloor - 1/2& \text{ if } x \in \mathbb{R} \backslash \mathbb{Z},\\ 0 &\text{ if } x \in \mathbb{Z}. \end{cases} $$ In order to prove Theorem \ref{asympd0}, we need general bounds on Dedekind sums depending on the multiplicative order of the argument. This is a new type of bounds for Dedekind sums and the following result that improves upon \eqref{boundscd} for $k =o\left(\frac{\log p}{\log\log p}\right)$ might be of independent interest (see also Conjecture \ref{conjDedekind} for further discussions). 
\noindent\frame{\vbox{ \begin{theorem} \label{indivbound} Let $p>1$ be a prime integer and assume that $h$ has odd order $k\geq 3$ in the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. We have $$\vert s(h,p)\vert \ll (\log p)^2 p^{1-\frac{1}{\phi(k)}}.$$ \end{theorem}}} \begin{Remark} Let us notice that by a result of Vardi \cite{Var}, for any function $f$ such that $\lim_{n\to +\infty} f(n)=+\infty$ we have $s(c,d) \ll f(d)\log d$ for almost all $(c,d)$ with $\gcd (c,d)=1$. However Dedekind sums take also very large values (see for instance \cite{CEK,Gir03} for more information). \end{Remark} Our proof builds from ideas of the proof of \cite[Theorem $10$]{LMQJM} where some tools from equidistribution theory and the theory of pseudo-random generators were used. We refer for more information to \cite{Korobov}, \cite{niederreiter1977pseudo} or the book of Konyagin and Shparlinski \cite[Chapter $12$]{igorkonya} (see \cite[Section $4$]{LMQJM} for more details and references). Let us recall some notations. For any fixed integer $s$, we consider the $s$-dimensional cube $I_s=\left[0,1\right]^s$ equipped with its $s$-dimensional Lebesgue measure $\lambda_s$. We denote by $\mathcal{B}$ the set of rectangular boxes of the form $$\prod_{i=1}^{s}[\alpha_i,\beta_i) =\left\{x\in I_s, \alpha_i\leq x_i <\beta_i\right\}$$ where $0\leq \alpha_i<\beta_i\leq 1.$ If $S$ is a finite subset of $I^s$, we define the discrepancy $D(S)$ by $$D(S) =\sup_{B \in \mathcal{B}}\left\vert \frac{\# (B\cap S)}{\# S}-\lambda_s(B)\right\vert.$$ Let us introduce the following set of points: $$S_{h,p}=\left\{\left(\frac{x}{p},\frac{x h}{p}\right) \in I_2, x \bmod p\right\}.$$ For good choice of $h$, the points are equidistributed and we expect for ``nice'' functions $f$ $$\lim_{p\to \infty} \frac{1}{p}\sum_{x \bmod p} f\left(\frac{x}{p},\frac{hx}{p}\right) = \int_{I_2} f(x,y)dx dy.$$ \begin{lemma}\label{discrepancy} For any $h$ of odd order $k\geq 3$ we have the following discrepancy bound $$D(S_{h,p}) \leq (\log p)^{2}p^{-1/\phi(k)}.$$ \end{lemma} \begin{proof} It follows from the proof of \cite[Theorem 10]{LMQJM} where the bound was obtained as a consequence of Erd\H{o}s-Turan inequality and tools from pseudo random generators theory. \end{proof} \subsubsection{Proof of Theorem \ref{indivbound}} Observe that $$s(h,p)= \sum_{x \bmod p} f\left(\frac{x}{p},\frac{hx}{p}\right)$$ where $f(x,y)= ((x))((y))$. By Koksma-Hlawka inequality \cite[Theorem $1.14$]{tichy} we have $$\left\vert \frac{1}{p} \sum_{ x \bmod p} f\left(\frac{x}{p},\frac{x h}{p}\right) -\int_{I_{2}} f(u,v)dudv \right\vert \leq V(f)D(S_{h,p})$$ where $V(f)$ is the Hardy-Krause variation of $f$. Moreover we have $$\int_{I_{2}} f(u,v)dudv =0.$$ The readers can easily convince themselves that $V(f) \ll 1$. Hence the result follows from Lemma \ref{discrepancy}. \begin{Remark}\label{compositebound} The same method used to bound the discrepancy leads to a similar bound for composite $f$. Indeed for $h \in ({\mathbb Z}/f{\mathbb Z})^*$ of order $k\geq 3$, we have $s(h,f)= O\left((\log f)^2 f/E(f)\right)$ with $E(f)=\max\{P^{+}(f)^{1/\phi(k*)},\textrm{rad}(f)^{1/k}\}$ where $P^{+}(f)$ is the largest prime factor of $f$, $k^*$ is the order of $h$ modulo $P^+(f)$ and $\displaystyle{rad(f)=\prod_{\ell \mid f \atop \ell \textrm{prime}}\ell}$ is the radical of $f$. If $f=h^3-1$ is squarefree, then we have $E(f) = f^{1/3}$ and $s(h,f)=O\left((\log f)^2 f^{2/3}\right)$ which is close to the truth by a logarithmic factor (see Remark \ref{dedekindtwosizes}). 
\end{Remark} For $\gcd(b,p)=\gcd(c,p)=1$ we recall the other definition of Dedekind-Rademacher sums $$s(b,c,p)= \sum_{a=1}^{p-1} \left(\left(\frac{ab}{p}\right)\right) \left(\left(\frac{ac}{p}\right)\right).$$ A similar argument as in the proof of Theorem \ref{indivbound} leads to a bound on these generalized sums: \begin{theorem} \label{indivboundgen} Let $q_1$, $q_2$ and $k\geq 3$ be given natural integers. Let $p$ run over the primes and $h$ over the elements of order $k$ in the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. Then, we have $$\vert s(q_1,q_2 h,p)\vert \ll (\log p)^2 p^{1-\frac{1}{\phi(k)}}.$$ \end{theorem} \begin{proof} The proof follows exactly the same lines as the proof of Theorem \ref{indivbound} except for the fact that the function $f$ is replaced by the function $g(x,y)= ((q_1 x))((q_2 y))$. Hence we have $$s(q_1,q_2 h,p)= g\left(\frac{x}{p},\frac{hx}{p}\right)$$ and by symmetry we remark that $$\int_{I_{2}} g(u,v)dudv =0.$$ Again $V(g) \ll 1$ and the result follows from Lemma \ref{discrepancy} and Koksma-Hlawka inequality. \end{proof} \subsection{Twisted second moment of $L$- functions and Dedekind sums}\label{twistedsection} We illustrate the link between Dedekind sums and twisted moments of $L$- functions by first proving Theorem \ref{asympd0} in the case $H=\{1\}$ with a stronger error term. For any integers $q_1,q_2 \geq 1$ and any prime $p\geq 3$, we define the twisted moment \begin{equation}\label{twistedmomentrivial} M_{q_1,q_2}(p) :=\frac{2}{\phi(p)} \sum_{\chi \in X_p^-}\chi(q_1)\overline{\chi}(q_2)\vert L(1,\chi)\vert^2. \end{equation} The following formula (see \cite[Proposition 1]{LouCMB36/37}) will help us to relate $L$- functions to Dedekind sums: \begin{equation}\label{formulaL1X} L(1,\chi) =\frac{\pi}{2f}\sum_{a=1}^{f-1}\chi (a)\cot\left (\frac{\pi a}{f}\right ) \ \ \ \ \ (\chi\in X_f^-).\end{equation} \begin{theorem} \label{twistedmoment} Let $q_1$ and $q_2$ be given coprime integers. Then when $p$ goes to infinity $$ M_{q_1,q_2}(p)= \frac{\pi^2}{6q_1q_2}+O(1/p). $$ \end{theorem} \begin{Remark} It is worth to notice that in the case $q_2=1$, explicit formulas are known by \cite[Theorem $4$]{LouBKMS52} (see also \cite{Lee17}). This also gives a new and simpler proof of \cite[Theorem $1.1$]{Lee19} in a special case. \end{Remark} \begin{proof} Let us define $$\epsilon(a,b) :=\frac{2}{\phi(p)}\sum_{\chi \in X_p^-}\chi(a)\overline{\chi}(b) =\begin{cases} 1&\text{ if } \gcd(a,p)=\gcd(b,p)=1 \text{ and } a=b \bmod p, \\ -1&\text{ if } \gcd(a,p)=\gcd(b,p)=1 \text{ and } a=-b \bmod p, \\ 0 &\text{ otherwise.} \end{cases}$$ Using orthogonality relations and \eqref{formulaL1X} we arrive at \begin{align*} M_{q_1,q_2}(p) &=\frac{\pi^2}{4p^2} \sum_{a=1}^{p-1} \sum_{b=1}^{p-1}\epsilon(q_1 a,q_2 b)\cot\left(\frac{\pi a}{p}\right)\cot\left(\frac{\pi b}{p}\right) \\ & = \frac{\pi^2}{2p^2} \sum_{a=1}^{p-1}\cot\left(\frac{\pi q_1a}{p}\right)\cot\left(\frac{\pi q_2 a}{p}\right) = \frac{2\pi^2}{p}s(q_1,q_2,p). \end{align*} When $q_1$ and $q_2$ are coprime integers and $p$ goes to infinity, we infer from \eqref{sbcddcb} and \eqref{boundscd} that $$s(q_1,q_2,p)= \frac{p}{12q_1 q_2}+O(1). $$ The result follows immediatly. \end{proof} \begin{corollary}\label{twistednoncoprime} Let $q_1$ and $q_2$ be given natural integers. Then when $p$ goes to infinity $$ M_{q_1,q_2}(p)= \frac{\pi^2}{6}\frac{ \gcd(q_1,q_2)^2}{q_1q_2}+O(1/p). $$ \end{corollary} \begin{proof} Let $\delta= \gcd(q_1,q_2)$. 
We clearly have $M_{q_1,q_2}(p)=M_{q_1/\delta,q_2/\delta}(p)$ and the result follows from Theorem \ref{twistedmoment}. \end{proof} The proof of Theorem \ref{asympd0} in the case of the trivial subgroup follows easily. \begin{corollary}\label{theotrivialcase} Let $d_0$ be a given square-free integer. We have the following asymptotic formula $$M_{d_0}(p,\{1\}) = \frac{\pi^2}{6}\prod_{q \mid d_0} \left(1-\frac{1}{q^2}\right)+ O(1/p).$$ \end{corollary} \begin{proof} For $\chi$ modulo $p$, let $\chi'$ be the character modulo $d_0p$ induced by $\chi$. By \eqref{L1XXprime} and Corollary \ref{twistednoncoprime} we have \begin{align*} M_{d_0}(p,\{1\})& =\frac{2}{\# X_p^-} \sum_{\chi \in X_p^-} |L(1,\chi')|^2 =\sum_{\delta_1 \mid d_0} \sum_{\delta_2 \mid d_0} \frac{\mu(\delta_1)}{\delta_1}\frac{\mu(\delta_2)}{\delta_2} M_{\delta_1,\delta_2}(p) \\ &=\frac{\pi^2}{6}\sum_{\delta_1 \mid d_0} \sum_{\delta_2 \mid d_0} \frac{\mu(\delta_1)}{\delta_1^2} \frac{\mu(\delta_2)}{\delta_2^2}\gcd(\delta_1,\delta_2)^2 + O(1/p) =\frac{\pi^2}{6} \prod_{q \mid d_0} \left(1-\frac{1}{q^2}\right)+O(1/p). \end{align*} \end{proof} \subsection{An interesting link with sums of maxima} Before turning to the general case of Theorem \ref{asympd0}, we explain how to use Theorem \ref{twistedmoment} to estimate the seemingly innocuous sum\footnote{In \cite{Sun} the author uses lattice point interpretation to study sums with a similar flavour.} defined for any integers $q_1,q_2 \geq 1$ by \begin{equation}\label{defsummax} \text{Ma}_{q_1,q_2,p}:=\sum_{x \bmod p}\max(q_1 x,q_2 x) \end{equation} where here and below $q_1 x, q_2 x$ denote the representatives modulo $p$ taken in $\left[1,p\right]$. \noindent\frame{\vbox{ \begin{theorem}\label{summax} Let $q_1$ and $q_2$ be natural integers such that $q_1 \neq q_2$. Then we have the following asymptotic formula $$\text{Ma}_{q_1,q_2,p} = p^2\left(\frac{2}{3}-\frac{\gcd(q_1,q_2)^2}{12q_1q_2}\right)(1+o(1)).$$ \end{theorem}}} \begin{Remark} In the special case $q_1=1$, we are able to evaluate the sum directly without the need of Dedekind sums and $L$- functions. However, we could not prove Theorem \ref{summax} in the general case using elementary counting methods. \end{Remark} \begin{Remark} Let us notice that $\int_{0}^{1}\int_{0}^{1} \max(x,y) dx dy = 2/3$. Hence using the same method as in Section \ref{boundsDedekind}, we can show that if the points $\left(\left\{\frac{x}{p}\right\},\left\{\frac{qx}{p}\right\}\right)$ are equidistributed in the square $[0,1]^2$ then $$ \sum_{x \bmod p} \max(x,qx) \sim \frac{2}{3}p^2.$$ For $q$ fixed and $p \rightarrow +\infty$, the points are not equidistributed in the square and we see that the correcting factor $\frac{\gcd(q_1,q_2)^2}{12q_1q_2}$ from equidistribution is related to the Dedekind sum $s(q_1,q_2,p)$. \end{Remark} We need the following result of \cite[Theorem $4$]{LMQJM}: \begin{proposition}\label{relation} Let $\chi$ be a primitive Dirichlet character modulo $f>2$, its conductor. Set $\displaystyle{S(k,\chi) =\sum_{l=0}^k\chi (l)}$. Then $$\sum_{k=1}^{p-1}\vert S(k,\chi)\vert^2 =\frac{f^2}{12}\prod_{p\mid f}\left (1-\frac{1}{p^2}\right ) +a_\chi\frac{f^2}{\pi^2}\vert L(1,\chi)\vert^2, \hbox { where } a_\chi :=\begin{cases} 0&\hbox{if $\chi (-1)=+1$,}\\ 1&\hbox{if $\chi (-1)=-1$.} \end{cases}$$ \end{proposition} \subsubsection{Proof of Theorem \ref{summax}}\label{sectionsummax} We follow a strategy similar to the proof of \cite[Corollary $5$]{LMQJM}. We denote by $\chi_0$ the trivial character. 
Using Proposition \ref{relation} and recalling the definition \eqref{twistedmomentrivial} we arrive at: $$ \sum_{\chi \in X_p \backslash \chi_0}\chi(q_1)\overline{\chi}(q_2)\sum_{k=1}^{p-1}\vert S(k,\chi)\vert^2 = \sum_{\chi \in X_p\backslash \chi_0}\chi(q_1)\overline{\chi}(q_2)\frac{p^2-1}{12} +\frac{p^3}{2\pi^2}M_{q_1,q_2}(p). $$ Adding the contribution of the trivial character $$ \chi_0(q_1)\overline{\chi_0}(q_2)\sum_{k=1}^{p-1}\left\vert\sum_{l=1}^k 1\right\vert^2 =\sum_{k=1}^{p-1} k^2 =\frac{(p-1)p(2p-1)}{6},$$ we obtain \begin{align}\label{shift} \sum_{\chi \in X_p}\chi(q_1)\overline{\chi}(q_2)\sum_{k=1}^{p-1}\vert S(k,\chi)\vert^2 & = \sum_{\chi \in X_p}\chi(q_1)\overline{\chi}(q_2)\frac{p^2-1}{12} + \frac{(p-1)p(2p-1)}{6} \\ & + \frac{p^3}{2\pi^2}M_{q_1,q_2}(p) +O(p^2). \nonumber \end{align} Using the fact that $q_1 \neq q_2 \bmod p$ and the orthogonality relations, we have $$ \sum_{\chi \in X_p}\chi(q_1)\overline{\chi}(q_2)\frac{p^2-1}{12}=0. $$ We now follow the method used in the proof of \cite[Theorem $10$]{LMQJM} (see also \cite{Elma}) with some needed changes to treat the left hand side of \eqref{shift}. Again by orthogonality, we obtain \begin{align*} \sum_{\chi \in X_p}\chi(q_1)\overline{\chi}(q_2)\sum_{k=1}^{p-1}\vert S(k,\chi)\vert^2 = \sum_{\chi \in X_p}\chi(q_1)\overline{\chi}(q_2)\sum_{k=1}^{p-1}\left\vert\sum_{l=1}^k\chi (l)\right\vert^2 \\ = \sum_{\chi \in X_p}\sum_{k=1}^{p-1}\sum_{1\leq l_1,l_2\leq k}\chi(q_1 l_1)\overline{\chi(q_2 l_2)} =(p-1)^2{\mathcal {A}}(q_1,q_2,p), \end{align*} where \begin{equation*} {\cal A}(q_1,q_2,p) =\frac{1}{p-1} \sum_{N=1}^{p-1}\left (\sum_{1 \leq n_1,n_2 \leq N \atop q_1 n_1=q_2 n_2 \bmod p} 1\right ). \end{equation*} Changing the order of summation and making the change of variables $n_1=q_2m_1$ we arrive at $$(p-1){\cal A}(q_1,q_2,p) = \sum_{1\leq m_1 \leq p}(p- \max(q_1 m_1,q_2 m_1)) =p^2- \sum_{x \bmod p}\max(q_1 x,q_2 x)+o(p^2).$$ By symmetry, injecting this into \eqref{shift}, we arrive at \begin{align}\label{formulacp} p^3 -p\sum_{x \bmod p}\max(q_1 x,q_2 x) =\frac{(p-1)p(2p-1)}{6}+ \frac{p^3}{2\pi^2}M_{q_1,q_2}(p) +o(p^3). \end{align} Hence comparing the terms of order $p^3$ in the above formula \eqref{formulacp} and using Corollary \ref{twistednoncoprime}, we have $$\sum_{x \bmod p}\max(q_1 x,q_2 x) = c_{q_1,q_2}(p^2 +o(p^2))$$ where $$1-c_{q_1,q_2} =\frac{1}{3}+\frac{1}{12}\frac{\gcd(q_1,q_2)^2}{q_1 q_2}.$$ This concludes the proof. \\ We know turn to the general case of Theorem 1.1. Let $d_0$ be a given square-free integer such that $\gcd(d_0,p)=1$. For $\chi$ modulo $p$, let $\chi'$ be the character modulo $d_0p$ induced by $\chi$. Recall that we want to show for $H$ a subgroup of $\left(\mathbb{Z}/p\mathbb{Z}\right)^*$ of odd order $d \ll \frac{\log p}{\log \log p}$ that \begin{align*} M_{d_0}(p,H) &=\frac{1}{\# X_p^-(H)} \sum_{\chi \in X_p^-(H)} |L(1,\chi')|^2 =(1+o(1))\frac{\pi^2}{6}\prod_{q \mid d_0}\left(1-\frac{1}{q^2}\right). \end{align*} \subsection{Twisted average of $L$- functions over subgroups} For any integers $q_1,q_2 \geq 1$ and any prime $p\geq 3$, we define \begin{equation}\label{defMdtwist} M_{q_1,q_2}(p,H) :=\frac{1}{\# X_p^-(H)} \sum_{\chi \in X_p^-(H)} \chi(q_1)\overline{\chi}(q_2)|L(1,\chi)|^2. \end{equation} Our main result is the following: \begin{theorem}\label{MTtwist} Let $q_1$ and $q_2$ be given coprime integers. 
When $H$ runs over the subgroups of $\left(\mathbb{Z}/p\mathbb{Z}\right)^*$ of odd order $d$, we have the following asymptotic formula $$M_{q_1,q_2}(p,H) =\frac{\pi^2}{6q_1 q_2} + O\left( d (\log p)^2p^{-\frac{1}{\phi(d)}} \right).$$ \end{theorem} \begin{proof} The proof follows the same lines as the proof of Theorem \ref{twistedmoment}. Let us define $$\epsilon_H(a,b) :=\frac{1}{\# X_p^-(H)}\sum_{\chi \in X_p^-(H)}\chi(a)\overline{\chi}(b) =\begin{cases} 1 &\text{ if } \gcd(a,p)=\gcd(b,p)=1 \text{ and } a\in bH, \\ -1 &\text{ if } \gcd(a,p)=\gcd(b,p)=1 \text{ and } a\in -bH, \\ 0 &\text{ otherwise.} \end{cases}$$ Hence we obtain similarly \begin{align*} M_{q_1,q_2}(p,H) &=\frac{\pi^2}{4p^2} \sum_{a=1}^{p-1} \sum_{b=1}^{p-1}\epsilon_H(q_1 a,q_2 b)\cot\left(\frac{\pi a}{p}\right)\cot\left(\frac{\pi b}{p}\right) \\ & = \frac{\pi^2}{2p^2}\sum_{h\in H} \sum_{a=1}^{p-1}\cot\left(\frac{\pi q_1a}{p}\right)\cot\left(\frac{\pi q_2h a}{p}\right) \\ & = \frac{2\pi^2}{p}s(q_1,q_2,p)+ O\left(p^{-1}\sum_{1\neq h\in H} s(q_1,q_2h,p)\right) \\ & = \frac{\pi^2}{6q_1q_2} + O(1/p) + O\left( \vert H\vert (\log p)^2p^{-\frac{1}{\phi(d)}} \right)=\frac{\pi^2}{6q_1q_2} + O\left( d(\log p)^2p^{-\frac{1}{\phi(d)}} \right), \end{align*} where we used Theorem \ref{indivboundgen} in the last line and noticed that $\phi(k)$ divides $\phi(d)$ whenever $k$ divides $d$. \end{proof} \begin{Remark} The error term is negligible as soon as $d\leq \frac{\log p}{3(\log \log p)}$. \end{Remark} It follows directly that \noindent\frame{\vbox{ \begin{corollary}\label{twistednoncoprimeH} Let $q_1$ and $ q_2$ be given integers. When $H$ runs over the subgroups of $\left(\mathbb{Z}/p\mathbb{Z}\right)^*$ of odd order $d$, we have the following asymptotic formula $$M_{q_1,q_2}(p,H) = \frac{\pi^2}{6}\frac{\gcd(q_1,q_2)^2}{q_1 q_2} + O\left( d (\log p)^2p^{-\frac{1}{\phi(d)}} \right).$$ \end{corollary} }} \subsection{Proof of Theorem \ref{asympd0}} As in the proof of Corollary \ref{theotrivialcase} and using Corollary \ref{twistednoncoprimeH} \begin{align*} M_{d_0}(p,H) & =\frac{1}{\# X_p^-(H)} \sum_{\chi \in X_p^-(H)} |L(1,\chi')|^2 =\sum_{\delta_1 \mid d_0} \sum_{\delta_2 \mid d_0} \frac{\mu(\delta_1)}{\delta_1}\frac{\mu(\delta_2)}{\delta_2} M_{\delta_1,\delta_2}(p,H) \\ &= \frac{\pi^2}{6} \sum_{\delta_1 \mid d_0} \sum_{\delta_2 \mid d_0} \frac{\mu(\delta_1)}{\delta_1^2}\frac{\mu(\delta_2)}{\delta_2^2}\gcd(\delta_1,\delta_2)^2 + O\left( d (\log p)^2p^{-\frac{1}{\phi(d)}} \right) \\ & = \frac{\pi^2}{6} \prod_{q \mid d_0} \left(1-\frac{1}{q^2}\right) + O\left( d (\log p)^2p^{-\frac{1}{\phi(d)}} \right) = (1+o(1))\frac{\pi^2}{6} \prod_{q \mid d_0}\left(1-\frac{1}{q^2}\right) \end{align*} using the condition on $d$. \section{Explicit formulas for $M_{d_0}(f,H)$} Recall that by \eqref{formulaL1X} \begin{equation*} L(1,\chi) =\frac{\pi}{2f}\sum_{a=1}^{f-1}\chi (a)\cot\left (\frac{\pi a}{f}\right ) \ \ \ \ \ (\chi\in X_f^-). \end{equation*} Hence using the definition of Dedekind sums we obtain (see \cite[Proof of Theorem 2]{LouBPASM64}): \begin{equation}\label{formulaM(f,H)} M(f,H) ={2\pi^2\over f}\sum_{\delta\mid f}\frac{\mu (\delta)}{\delta}\sum_{h\in H}s(h,f/\delta). \end{equation} \subsection{A formula for $M_{d_0}(f,\{1\})$ for $d_0=1,2,3,6$}\label{Casetrivialgroup} The first consequence of (\ref{formulaM(f,H)}) is a short proof of \cite[Th\'eor\`emes 2 and 3]{LouCMB36/37} by taking $H=\{1\}$, the trivial subgroup of the multiplicative group $({\mathbb Z}/f{\mathbb Z}^*)$. 
Indeed, (\ref{formulaM(f,H)}) and (\ref{s1d}) give $$M(f,\{1\}) ={2\pi^2\over f}\sum_{\delta\mid f}\frac{\mu (\delta)}{\delta}s(1,f/\delta) =\frac{\pi^2}{6}\sum_{\delta\mid f}\mu (\delta)\left (\frac{1}{\delta^2}-\frac{3}{\delta f}+\frac{2}{f^2}\right ).$$ The arithmetic functions $f\mapsto\sum_{\delta\mid f}\mu (\delta)\delta^k$ being multiplicative, we obtain (see also \cite{Qi}): \begin{equation}\label{M(f,1)} M(f,\{1\}) ={\pi^2\over 6} \times\left\{ \prod_{q\mid f}\left (1-{1\over q^2}\right )-{3\over f}\prod_{q\mid f}\left (1-{1\over q}\right ) \right\} \ \ \ \ \ \hbox{($f>2$).} \end{equation} Now, it is clear by \eqref{MM} that for $d_0$ odd and square-free and $f$ odd we have \begin{equation}\label{MMbis} M_{2d_0}(f,\{1\}) =M_{d_0}(2f,\{1\}). \end{equation} Hence, on applying (\ref{M(f,1)}) to $2f$ instead of $f$ we therefore obtain: \begin{equation}\label{M2(f,1)} M_2(f,\{1\}) ={\pi^2\over 8} \times\left\{ \prod_{q\mid f}\left (1-{1\over q^2}\right )-{1\over f}\prod_{q\mid f}\left (1-{1\over q}\right ) \right\} \ \ \ \ \ \hbox{($f>2$ odd)}. \end{equation} The following explicit formulas generalize \cite[Th\'eor\`eme 4]{LouCMB36/37} to composite moduli: \noindent\frame{\vbox{ \begin{theorem}\label{M3M6} For $f>2$ we have \begin{equation*} M_3(f,\{1\}) =\frac{4\pi^2}{27} \times\left\{ \prod_{q\mid f}\left (1-\frac{1}{q^2}\right ) -\frac{3}{2f}\prod_{q\mid f}\left (1-\frac{1}{q}\right ) +\frac{\varepsilon(f)}{2f} \prod_{q\mid f}\left (1-\frac{\varepsilon(q)}{q}\right ) \right\} \end{equation*} whenever $(\gcd(f,3)=1)$ and \begin{equation*} M_6(f,\{1\}) =\frac{\pi^2}{9} \times\left\{ \prod_{q\mid f}\left (1-\frac{1}{q^2}\right ) -\frac{1}{2f}\prod_{q\mid f}\left (1-\frac{1}{q}\right ) -\frac{\varepsilon(f)}{2f} \prod_{q\mid f}\left (1-\frac{\varepsilon(q)}{q}\right ) \right\} \end{equation*} whenever $(\gcd(f,6)=1)$, where $\varepsilon (f) =\left (\frac{f}{3}\right )$ (Legendre's symbol) is such that $f\equiv\varepsilon(f)\pmod 3$. \end{theorem} }} \begin{proof} By (\ref{MMbis}), the formula for $M_6(f,\{1\})$ follows from the one for $M_3(f,\{1\})$ applied to $2f$. Let us prove the formula for $M_{3}(f,\{1\})$. By (\ref{MM}) and (\ref{formulaM(f,H)}), we have $$M_3(f,\{1\}) =M(3f,H_3) =\frac{2\pi^2}{3f}S,$$ where $H_3=\{1,1+\varepsilon(f) f\}$ and $$S :=\sum_{\delta\mid 3f}\frac{\mu(\delta)}{\delta}\sum_{h\in H_3}s(h,3f/\delta) =\sum_{\delta\mid 3f}\frac{\mu(\delta)}{\delta}s(1,3f/\delta) +\sum_{\delta\mid 3f}\frac{\mu(\delta)}{\delta}s(1+\varepsilon(f) f,3f/\delta).$$ Now, $\{\delta;\ \delta\mid 3f\} =\{\delta';\ \delta'\mid f\} \cup\{3\delta';\ \delta'\mid f\}$ (disjoint union). Moreover, if $\delta\in \{3\delta';\ \delta'\mid f\}$, then $1+\varepsilon(f) f\equiv 1\pmod {3f/\delta}$ and $s(1+\varepsilon(f) f,3f/\delta) =s(1,3f/\delta)$. Therefore, $$S =\sum_{\delta\mid 3f}\frac{\mu(\delta)}{\delta}s(1,3f/\delta)\\ +\sum_{\delta'\mid f}\frac{\mu(\delta')}{\delta'}s(1+\varepsilon(f) f,3f/\delta') -\sum_{\delta'\mid f}\frac{\mu(\delta')}{3\delta'}s(1,f/\delta').$$ To cope with the first and third terms of this identity, we notice that for $n>1$ we have $$\sum_{\delta\mid n}\frac{\mu(\delta)}{\delta}s(1,n/\delta) =\frac{nF_{-1}(f)-3F_0(n)}{12}, \hbox{ where } F_k(n) =\sum_{\delta\mid n}\frac{\mu (\delta)}{\delta}\delta^k =\prod_{q\mid n}(1-q^{k-1}),$$ by (\ref{s1d}) and by noticing that $F_1(n)=0$ for $n>1$. 
To cope with its second term, setting $f'=f/\delta'$, we have $1+\varepsilon(f) f \equiv 1+\varepsilon(f')f'\pmod{3f'}$ and Lemma \ref{step1} below gives $$s(1+\varepsilon(f) f,3f') =s(1+\varepsilon(f')f',3f') =\frac{f/\delta'}{36}-\frac{1}{4}+\frac{\delta'}{18f} +\frac{\varepsilon(f)\varepsilon(\delta')}{9}.$$ Therefore, $$S =\frac{3fF_{-1}(3f)-3F_0(3f)}{12} +\frac{fF_{-1}(f)}{36} -\frac{F_0(f)}{4} +\frac{F(f)}{9} -\frac{fF_{-1}(f)-3F_0(f)}{36},$$ where $$F(f) :=\sum_{\delta'\mid f}\frac{\mu(\delta')}{\delta'}\varepsilon(f)\varepsilon(\delta') =\varepsilon(f)\prod_{q\mid f}\left (1-\frac{\varepsilon(q)}{q}\right ).$$ Noticing that $F_{-1}(3f) =\frac{8}{9}F_{-1}(f)$ and $F_0(3f) =\frac{2}{3}F_0(f)$, we obtain $$S =\frac{2}{9}fF_{-1}(f) -\frac{1}{3}F_0(f) +\frac{1}{9}F(f).$$ The desired formula for $M_3(f,\{1\}) =M(3f,H_3) =\frac{2\pi^2}{3f}S$ follows. \end{proof} \begin{lemma}\label{step1} Let $f>2$ be such that $\gcd(f,3)=1$. Set $\varepsilon =\left (\frac{f}{3}\right )$ (Legendre's symbol). Then, $$s(1+\varepsilon f,3f) =\frac{f^2-9f+2}{36f}+\frac{\varepsilon}{9}.$$ \end{lemma} \begin{proof} Set $Q(X,Y)=\frac{X^2+Y^2+1}{12XY}$. Then $s(c,d) =Q(c,d)-\frac{{\rm sign}(cd)}{4}-s(d,c)$, by (\ref{scddc}). Therefore, \begin{eqnarray*} s(1+\varepsilon f,3f) &=&Q(1+\varepsilon f,3f) -\frac{\varepsilon}{4} -s(3f,1+\varepsilon f),\\ s(3f,1+\varepsilon f) &=&s(-3\varepsilon,1+\varepsilon f) =Q(-3\varepsilon,1+\varepsilon f) +\frac{1}{4} -s(1+\varepsilon f,-3\varepsilon)\\ s(1+\varepsilon f,-3\varepsilon) &=&s(-1,-3\varepsilon) =\varepsilon s(1,3) =\frac{\varepsilon}{18} \end{eqnarray*} and $s(1+\varepsilon f,3f) =Q(1+\varepsilon f,3f) -\frac{\varepsilon}{4} -\bigl ( Q(-3\varepsilon,1+\varepsilon f) +\frac{1}{4} -\frac{\varepsilon}{18} \bigr )$. The formula for $s(1+\varepsilon f,3f)$ follows, by noticing that $Q(1+\varepsilon f,3f) -Q(-3\varepsilon,1+\varepsilon f) =\frac{f^2+11\varepsilon f+2}{36f}$. \end{proof} Inspired by Theorem \ref{M3M6} and bearing on a lot of numerical computations we came up with the following conjecture: \noindent\frame{\vbox{ \begin{Conjecture} Let $d_0>2$ be a given square-free integer. Set $$\kappa_{d_0} :=\frac{\pi^2}{6} \prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right ) \hbox{ and } c :=3\prod_{q\mid d_0}\frac{q-1}{q+1}.$$ For $n\in {\mathbb Z}$, set $\varepsilon(n) =+1$ if $n\equiv +1\pmod {d_0}$ and $\varepsilon(n) =-1$ if $n\equiv -1\pmod {d_0}$.\\ Then for $f>2$ such that all its prime divisors $q$ satisfy $q\equiv\pm 1\pmod {d_0}$ we have $$M_{d_0}^{}(f,\{1\}) =\kappa_{d_0} \times\left\{ \prod_{q\mid f}\left (1-\frac{1}{q^2}\right ) -\frac{c}{f}\prod_{q\mid f}\left (1-\frac{1}{q}\right ) +\varepsilon(f) \frac{c-1}{f}\prod_{q\mid f}\left (1-\frac{\varepsilon(q)}{q}\right ) \right\}.$$ In particular, for $f>2$ such that all its prime divisors $q$ satisfy $q\equiv 1\pmod {d_0}$ we have $$M_{d_0}^{}(f,\{1\}) =\kappa_{d_0} \times\left\{ \prod_{q\mid f}\left (1-\frac{1}{q^2}\right ) -\frac{1}{f}\prod_{q\mid f}\left (1-\frac{1}{q}\right ) \right\}.$$ \end{Conjecture} }} This conjecture can be checked on any given pair of coprime integers $d_0>2$ and $f>2$ with $\gcd (d_0,f)=1$. 
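As an illustration of how such a check can be organized in practice, here is a minimal computational sketch (in Python, using exact rational arithmetic so that both sides are compared as multiples of $\pi^2$; the helper names and the test pair $d_0=5$, $f=11\cdot 19$ are ours and serve only as an example). It evaluates $M_{d_0}(f,\{1\})/\pi^2$ through (\ref{MM}) and (\ref{formulaM(f,H)}), computing each Dedekind sum by the Euclidean-division procedure recalled in the next paragraph, and compares the result with the conjectured closed form:
\begin{verbatim}
# Numerical check of the conjecture (illustrative sketch; helper names are ours).
# Everything is computed in exact rational arithmetic, as a multiple of pi^2.
from fractions import Fraction
from math import gcd

def prime_factors(n):
    """Distinct prime factors of n (trial division; enough for small examples)."""
    ps, q = [], 2
    while q * q <= n:
        if n % q == 0:
            ps.append(q)
            while n % q == 0:
                n //= q
        q += 1
    if n > 1:
        ps.append(n)
    return ps

def mobius(n):
    ps = prime_factors(n)
    return 0 if any(n % (q * q) == 0 for q in ps) else (-1) ** len(ps)

def dedekind_s(c, d):
    """Dedekind sum s(c,d) for gcd(c,d)=1, computed by successive Euclidean
    divisions via the reciprocity law s(c,d)+s(d,c) = -1/4 + (c^2+d^2+1)/(12cd)."""
    c %= d
    if c == 0:
        return Fraction(0)
    return Fraction(-1, 4) + Fraction(c * c + d * d + 1, 12 * c * d) - dedekind_s(d, c)

def lhs(d0, f):
    """M_{d0}(f,{1})/pi^2, via M(d0*f, H_{d0}) and formula (formulaM(f,H))."""
    total = Fraction(0)
    for delta in range(1, d0 * f + 1):          # divisors of d0*f by brute force
        if (d0 * f) % delta:
            continue
        mu = mobius(delta)
        if mu == 0:
            continue
        inner = sum(dedekind_s(1 + h * f, (d0 * f) // delta)
                    for h in range(d0) if gcd(1 + h * f, d0) == 1)
        total += Fraction(mu, delta) * inner
    return Fraction(2, d0 * f) * total

def rhs(d0, f):
    """Conjectured closed form / pi^2 (assumes every prime divisor of f is +-1 mod d0)."""
    eps = lambda n: 1 if n % d0 == 1 else -1
    kappa, c = Fraction(1, 6), Fraction(3)
    for q in prime_factors(d0):
        kappa *= 1 - Fraction(1, q * q)
        c *= Fraction(q - 1, q + 1)
    t1 = t2 = t3 = Fraction(1)
    for q in prime_factors(f):
        t1 *= 1 - Fraction(1, q * q)
        t2 *= 1 - Fraction(1, q)
        t3 *= 1 - Fraction(eps(q), q)
    return kappa * (t1 - (c / f) * t2 + eps(f) * ((c - 1) / f) * t3)

# Example: d0 = 5, f = 11*19 = 209 (11 = +1 and 19 = -1 modulo 5).
print(lhs(5, 209), rhs(5, 209))  # equal exactly when the conjecture holds for this pair
\end{verbatim}
The reduction to Dedekind sums used by this sketch is justified as follows.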
Indeed by (\ref{MM}) and (\ref{formulaM(f,H)}), we have $$M_{d_0}^{}(f,\{1\}) =M(d_0f,H_{d_0}) =\frac{2\pi^2}{d_0f} \sum_{\delta\mid d_0f}\frac{\mu(\delta)}{\delta} \sum_{H=0\atop\gcd (1+Hf,d_0)=1}^{d_0-1} s(1+Hf,d_0f/\delta).$$ Now, by (\ref{scddc}) and (\ref{s1d}) any Dedekind sum $s(c,d)\in {\mathbb Q}$ with $c,d\geq 1$ can be easily computed by successive euclidean divisions of $c$ by $d$ and exchanges of $c$ and $d$, until we reach $c=1$. Therefore, this conjecture might have been hard to come by but it can be easily verified for any given pair of coprime integers $d_0>2$ and $f>2$. \subsection{A formula for $M(p,H)$} The second immediate consequence of (\ref{formulaM(f,H)}) and (\ref{s1d}) is: \noindent\frame{\vbox{ \begin{proposition}\label{TheoremM(p,H)} For $f>2$ and $H$ a subgroup of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$, set \begin{equation}\label{N(f,H)} S'(H,f)=\sum_{1\neq h\in H}s(h,f) \hbox{ and } N(f,H) :=-3+\frac{2}{f}+12S'(H,f). \end{equation} Then, for $p\geq 3$ a prime and $H$ a subgroup of odd order of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$, we have \begin{equation}\label{M(p,H)} M(p,H) =\frac{\pi^2}{6} \left (1+\frac{N(p,H)}{p}\right ) =\frac{\pi^2}{6}\left (\left (1-\frac{1}{p}\right )\left (1-\frac{2}{p}\right )+\frac{12S'(H,p)}{p}\right ). \end{equation} \end{proposition} }} \begin{Remark} In particular, $N(f,\{1\}) =-3+2/f$ and (\ref{M(p,H)}) implies (\ref{Mp1}). Notice also that $N(p,H)\in {\mathbb Z}$ for $H\neq\{1\}$, by \cite[Theorem 6]{LouBKMS56}. Moreover, by \cite[Theorem $1$]{LMQJM}, the asymptotic formula $M(p,H) =\frac{\pi^2}{6}+o(1)$ holds as $p$ tends to infinity and $H$ runs over the subgroup of $({\mathbb Z}/p{\mathbb Z})^*$ of odd order $d\leq \frac{\log p}{\log \log p}$. Hence we have $N(p,H)=o(p)$ under this restriction. \end{Remark} \subsection{A formula for $M_{d_0}(p,H)$}\label{sectionformulad0} We will now derive a third consequence of (\ref{formulaM(f,H)}): a formula for the mean square value $M_{d_0}(p,H)$. \noindent\frame{\vbox{ \begin{theorem} Let $d_0>1$ be a square-free integer. Let $f>2$ be coprime with $d_0$. Let $H$ be a subgroup of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Whenever $\delta$ divides $d_0$, let $s_\delta:({\mathbb Z}/\delta f{\mathbb Z})^*\longrightarrow ({\mathbb Z}/f{\mathbb Z})^*$ be the canonical surjective morphism and set $H_{\delta}=s_\delta^{-1}(H)$ and $H_{\delta}' =s_\delta^{-1}(H\setminus\{1\})$. Define the rational number \begin{equation}\label{defNd0fH} N_{d_0}(f,H) =-f +\frac{12\mu(d_0)}{\prod_{q\mid d_0}(q^2-1)} \sum_{\delta\mid d_0}\delta\mu (\delta)\sum_{h\in H_{d_0}}s(h,\delta f). \end{equation} Then, for $p\geq 3$ a prime which does not divide $d_0$ and $H$ a subgroup of odd order of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$, we have \begin{equation}\label{formulaMd0pH} M_{d_0}(p,H) =\frac{2\pi^2\mu(d_0)\phi(d_0)}{d_0^2p} \sum_{\delta\mid d_0}\frac{\delta\mu(\delta)}{\phi(\delta)} S(H_\delta,\delta p), \hbox{ where } S(H_\delta,\delta f) =\sum_{h\in H_\delta}s(h,\delta f), \end{equation} and \begin{equation}\label{Md0pH} M_{d_0}(p,H) =\kappa_{d_0} \times\left ( 1 +\frac{N_{d_0}(p,H)}{p} \right ), \hbox{ where } \kappa_{d_0} :=\frac{\pi^2}{6} \prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right ). 
\end{equation} Moreover, \begin{align} N_{d_0}(f,H) &=-f +\frac{12\mu(d_0)}{\prod_{q\mid d_0}(q+1)} \sum_{\delta\mid d_0}\frac{\delta\mu(\delta)}{\phi(\delta)}S(H_\delta,\delta f)\nonumber \\ & =N_{d_0}(f,\{1\}) +\frac{12\mu(d_0)}{\prod_{q\mid d_0}(q+1)} \sum_{\delta\mid d_0}\frac{\delta\mu(\delta)}{\phi(\delta)} S'(H_{\delta},\delta f) \hbox{ where } S'(H_{\delta},\delta f) :=\sum_{h\in H_{\delta}'}s(h,\delta f).\label{defNd0fHter} \end{align} \end{theorem} }} \begin{proof} Using (\ref{MM}) and making the change of variables $\delta\mapsto d_0f/\delta$ in (\ref{formulaM(f,H)}), we obtain: \begin{equation}\label{Md0fH} M_{d_0}(f,H) =M(d_0f,H_{d_0}) ={2\pi^2\over d_0^2f^2}\sum_{\delta\mid d_0f}\delta\mu (d_0f/\delta)\sum_{h\in H_{d_0}}s(h,\delta). \end{equation} Since $\{\delta;\ \delta\mid d_0p\}$ is the disjoint union of $\{\delta;\ \delta\mid d_0\}$ and $\{\delta p;\ \delta\mid d_0\}$, by (\ref{Md0fH}) we obtain: $$M_{d_0}(p,H) =-{2\pi^2\mu(d_0)\over d_0^2p^2}\sum_{\delta\mid d_0}\delta\mu (\delta)\sum_{h\in H_{d_0}}s(h,\delta) +{2\pi^2\mu(d_0)\over d_0^2p}\sum_{\delta\mid d_0}\delta\mu (\delta)\sum_{h\in H_{d_0}}s(h,\delta p).$$ Now, $S:=\sum_{h\in H_{d_0}}s(h,\delta)=0$ whenever $\delta\mid d_0$, which gives \begin{equation}\label{stepMersenne1} M_{d_0}(p,H) ={2\pi^2\mu(d_0)\over d_0^2p}\sum_{\delta\mid d_0}\delta\mu (\delta)\sum_{h\in H_{d_0}}s(h,\delta p) \end{equation} and implies (\ref{Md0pH}). Indeed, let $\sigma:({\mathbb Z}/d_0f{\mathbb Z})^*\longrightarrow ({\mathbb Z}/\delta{\mathbb Z})^*$ be the canonical surjective morphism. Its restriction $\tau$ to the subgroup $H_{d_0}$ is surjective, by the Chinese remainder theorem. Hence, $S =(H_{d_0}:\ker\tau) \times S'$, where $S':=\sum_{c\in ({\mathbb Z}/\delta{\mathbb Z})^*}s(c,\delta) =\sum_{c\in ({\mathbb Z}/\delta{\mathbb Z})^*}s(-c,\delta) =-S'$, which yields $S'=0$. In the same way, whenever $\delta\mid d_0$, the kernel of the canonical surjective morphism $s:({\mathbb Z}/d_0f{\mathbb Z})^*\longrightarrow ({\mathbb Z}/\delta f{\mathbb Z})^*$ being a subgroup of order $\phi(d_0f)/\phi(\delta f) =\phi(d_0)/\phi(\delta)$, we have \begin{equation}\label{stepMersenne2} \sum_{h\in H_{d_0}}s(h,\delta f) =\frac{\phi(d_0)}{\phi(\delta)}\sum_{h\in H_{\delta}}s(h,\delta f) \end{equation} and (\ref{formulaMd0pH}) follows from (\ref{stepMersenne1}) and (\ref{stepMersenne2}). Then, (\ref{Md0pH}) is a direct consequence of (\ref{formulaMd0pH}) and (\ref{defNd0fH}). Finally, (\ref{defNd0fHter}) is an immediate consequence of (\ref{defNd0fH}) and (\ref{stepMersenne2}). \end{proof} \subsubsection{A new proof of Theorem \ref{asympd0}} We split the sum in \eqref{stepMersenne1} into two cases, depending on whether $h=1$ or not. By \eqref{s1d} we have $s(1,\delta p)=\frac{p\delta}{12}+O(1)$, giving a contribution to the sum of order $$ \frac{\pi^2 \mu(d_0)}{6d_0^2} \sum_{\delta \mid d_0}\delta^2\mu(\delta) + O(1/p) = \frac{\pi^2}{6} \prod_{q \mid d_0}\left(1-\frac{1}{q^2}\right)+O(1/p).$$ When $h\neq 1$ and $h\in H_{d_0}$, it is clear that the order of $h$ modulo $p$ is between $3$ and $d$. Hence it follows from Theorem \ref{indivbound} (see the Remark following it) that $s(h,\delta p)=O((\log p)^2 p^{1-\frac{1}{\phi(d)}})$. The integer $d_0$ being fixed, we can sum up these error terms and the proof is finished. \subsection{An explicit way to compute $N_{d_0}(f,\{1\})$} \begin{proposition}\label{H=1} Let $d_0>1$ be a given square-free integer. Set $B=\prod_{q\mid d_0}(q^2-1)$.
For $f\in{\mathbb Z}$ with $\gcd (d_0,f)=1$, set \begin{equation}\label{Ad0f}A(d_0,f) =\sum_{a\mod d_0\atop\gcd (d_0,a)=1} \sum_{b\mod d_0\atop{\gcd (d_0,b)=1\atop b\neq a}} \cot\left (\frac{\pi (b-a)}{d_0}\right ) \left ( \cot\left (\frac{\pi fa}{d_0}\right ) -\cot\left (\frac{\pi fb}{d_0}\right ) \right ),\end{equation} a rational number depending only on $f$ modulo $d_0$. Then for $f>2$ we have $$N_{d_0}(f,\{1\}) =\frac{3}{B} \left (A(d_0,f)-\phi(d_0)^2\right ).$$ Consequently, $N_{d_0}(f,\{1\})$ is a rational number depending only on $f$ modulo $d_0$. \end{proposition} \begin{proof} By (\ref{defNd0fH}) we have $$N_{d_0}(f,\{1\}) =-f+\frac{12\mu(d_0)}{B}\sum_{\delta\mid d_0}\delta\mu (\delta) \sum_{h\in ({\mathbb Z}/d_0f{\mathbb Z})^*\atop h\equiv 1\pmod f}s(h,\delta f).$$ Using (\ref{s1d}) to evaluate the contribution of $h=1$ in this expression and $\sum_{\delta\mid d_0}\mu(\delta) =0$, we get $$N_{d_0}(f,\{1\}) =-\frac{3\phi(d_0)}{B} +\frac{12\mu(d_0)}{B}\sum_{\delta\mid d_0}\delta\mu (\delta) \sum_{1\neq h\in ({\mathbb Z}/d_0f{\mathbb Z})^*\atop h\equiv 1\pmod f}s(h,\delta f)$$ and $$N_{d_0}(f,\{1\}) =-\frac{3\phi(d_0)^2}{B} +\frac{3\mu(d_0)}{Bf} \sum_{1\neq h\in ({\mathbb Z}/d_0f{\mathbb Z})^*\atop h\equiv 1\pmod f} \sum_{\delta\mid d_0}\mu (\delta) \sum_{n=1}^{\delta f -1}\left (1+\cot\left ({\pi n\over\delta f}\right )\cot\left ({\pi nh\over\delta f}\right )\right ),$$ by (\ref{defscd}) and by noticing that $\#\{1\neq h\in ({\mathbb Z}/d_0f{\mathbb Z})^*,\ h\equiv 1\pmod f\} =\phi(d_0)-1$. Therefore, \begin{equation}\label{S} N_{d_0}(f,\{1\}) =-\frac{3\phi(d_0)^2}{B} +\frac{3}{Bf}S, \end{equation} where $$S :=\sum_{1\neq h\in ({\mathbb Z}/d_0f{\mathbb Z})^*\atop h\equiv 1\pmod f} \sum_{n=1\atop\gcd(d_0,n)=1}^{d_0f -1} \left (1+\cot\left ({\pi n\over d_0f}\right )\cot\left ({\pi nh\over d_0f}\right )\right )$$ (make the change of variable $\delta\mapsto d_0/\delta$). Now, using $$1+\cot(x)\cot(y) =\cot(x-y)(\cot(y)-\cot(x)) =2i\cot(x-y)\left (\frac{1}{e^{2iy}-1}-\frac{1}{e^{2ix}-1}\right )$$ for $x,y,x-y\not\in\pi {\mathbb Z}$, writing $h =1+bf$ and $n=a+Ad_0$ and setting $\zeta_k=\exp(2i\pi /k)$, we obtain $$1+\cot\left ({\pi n\over d_0f}\right )\cot\left ({\pi nh\over d_0f}\right ) =2i\cot\left (\frac{\pi ab}{d_0}\right ) \left (\frac{1}{\zeta_f^A\zeta_{d_0f}^a-1}-\frac{1}{\zeta_f^A\zeta_{d_0f}^{a(1+bf)}-1}\right )$$ and $$S =2i\sum_{a\in ({\mathbb Z}/d_0{\mathbb Z})^*} \sum_{b=1\atop\gcd (d_0,1+bf=1)}^{d_0-1} \sum_{A=0}^{f-1} \cot\left (\frac{\pi ab}{d_0}\right ) \left (\frac{1}{\zeta_f^A\zeta_{d_0f}^a-1}-\frac{1}{\zeta_f^A\zeta_{d_0f}^{a(1+bf)}-1}\right ).$$ Now, if $\lambda^f\neq 1$, then $$\sum_{A=0}^{f-1}\frac{1}{\zeta_f^A\lambda-1} =\frac{f}{\lambda^f-1}$$ (evaluate the logarithmic derivative of $x^f-1$ at $x=\lambda^{-1}$, if $\lambda\neq 0$). Hence, \begin{multline*} S =2if\sum_{a\in ({\mathbb Z}/d_0{\mathbb Z})^*} \sum_{b=1\atop\gcd (d_0,1+bf=1)}^{d_0-1} \cot\left (\frac{\pi ab}{d_0}\right ) \left (\frac{1}{\zeta_{d_0}^a-1}-\frac{1}{\zeta_{d_0}^{a(1+bf)}-1}\right )\\ =-f\sum_{a\in ({\mathbb Z}/d_0{\mathbb Z})^*} \sum_{b=1\atop\gcd (d_0,1+bf=1)}^{d_0-1} \cot\left (\frac{\pi ab}{d_0}\right ) \left ( \cot\left (\frac{\pi a}{d_0}\right ) -\cot\left (\frac{\pi a(1+bf)}{d_0}\right ) \right ). \end{multline*} Finally, if $ff^*\equiv aa^*\equiv 1\pmod {d_0}$, the change of variables $a\mapsto fa$ and $b\mapsto f^*(a^*b-1)$ gives $ab\mapsto b-a$, $a\mapsto fa$ and $a(1+bf)\mapsto fb$. Therefore, $S=fA(d_0,f)$, by (\ref{S}), and the desired result follows. 
\end{proof} \begin{Remark} As a consequence we obtain $M_{d_0}(p,\{1\}) =\frac{\pi^2}{6}\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right ) +O(p^{-1})$, using (\ref{Md0pH}) and the fact that $N_{d_0}(p,\{1\})$ depends only on $p$ modulo $d_0$. This gives in this extreme situation another proof of Theorem \ref{asympd0} with a better error term. Moreover, in that situation we have $K ={\mathbb Q}(\zeta_p)$ and in (\ref{boundhKminusd0}) the term $\Pi_{d_0}(p,\{1\})$ is bounded from below by a constant independent of $p$, by Corollary \ref{PiKminus}. \end{Remark} \section{The case where $f=a^{d-1}+\cdots+a^2+a+1$}\label{genMersenne} \subsection{Explicit formulas for $d_0=1,2$} \begin{lemma}\label{(a^d-1)/(a-1)} Let $f>1$ be a rational integer of the form $f=(a^d-1)/(a-1)$ for some $a\neq -1,0,1$ and some odd integer $d\geq 3$. Hence $f$ is odd. Set $H=\{a^k;\ 0\leq k\leq d-1\}$, a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Then, $$S(H,f) =\frac{a+1}{a-1}\times\frac{f-(d-1)a-1}{12}$$ and $$S(H_2,2f) =\begin{cases} \frac{a+1}{a-1}\times\frac{4f-(d-1)a-3d-1}{24}&\hbox{if $a$ is odd}\\ \frac{2a-1}{a-1}\times\frac{f-(d-1)a-1}{12}&\hbox{if $a$ is even.} \end{cases}$$ \end{lemma} \begin{proof} We have $S(H,f) =\sum_{k=0}^{d-1}s(a^k,f)$. Moreover, $S(H_2,2f) =\sum_{k=0}^{d-1}s(a^k,2f)$ if $a$ is odd and $S(H_2,2f) =s(1,2f)+\sum_{k=1}^{d-1}s(a^k+f,2f)$ if $a$ is even. Now, we claim that for $0\leq k\leq d-1$ we have $$s(a^k,f) =\frac{a^k}{12f}+\frac{(f^2+1)a^{-k}}{12f} +\frac{a^k+a^{-k}(a^2-2a+2)}{12(a-1)} -\frac{a(a+1)}{12(a-1)} \hbox{ whatever the parity of $a$,}$$ $$s(a^k,2f) =\frac{a^k}{24f}+\frac{(4f^2+1)a^{-k}}{24f} +\frac{4a^k+a^{-k}(a^2-2a+5)}{24(a-1)} -\frac{(a+1)(a+3)}{24(a-1)} \hbox{ if $a$ is odd,}$$ and that for $1\leq k\leq d-1$ we have $$s(a^k+f,2f) =\frac{a^k}{24f}+\frac{(f^2+1)a^{-k}}{24f} +\frac{a^k+a^{-k}(a^2-2a+2)}{24(a-1)} -\frac{a(2a-1)}{12(a-1)} \hbox{ if $a$ is even.}$$ Noticing that $\sum_{k=1}^{d-1}a^k =f-1$ and $\sum_{k=1}^{d-1}a^{-k} =\frac{f-1}{(a-1)f+1}$, we then get the assertions on $S(H,f)$ and $S(H_2,2f)$. Now, let us for example prove the third claim. Hence, assume that $a$ is even and that $1\leq k\leq d-1$. Then $f_k :=(a^k-1)/(a-1)$ is odd, ${\rm sign}(f_k)={\rm sign}(a)^k$ and $a^k+f>0$.\\ First, since $2f\equiv -2a^{k}\pmod {a^k+f}$, using (\ref{scddc}) we have $$s(a^k+f,2f) =\frac{a^k+f}{24f} +\frac{f}{6(a^k+f)} -\frac{1}{4} +\frac{1}{24(a^k+f)f} +s(2a^{k},a^k+f).$$ Second, noticing that $a^k+f\equiv f_k\pmod {2a^{k}}$ and using (\ref{scddc}) we have $$s(2a^{k},a^k+f) =\frac{a^{k}}{6(a^k+f)} +\frac{a^k+f}{24a^{k}} -\frac{{\rm sign}(a)^k}{4} +\frac{1}{24a^{k}(a^k+f)} -s(f_k,2a^{k}).$$ Finally, noticing that $2a^{k}\equiv 2\pmod{f_k}$ and using (\ref{scddc}) and (\ref{s1d}) we have \begin{eqnarray*} s(f_k,2a^{k}) &=&\frac{f_k}{24a^{k}} +\frac{a^{k}}{6f_k} -\frac{{\rm sign}(a)^k}{4} +\frac{1}{24f_ka^{k}} -s(2,f_k)\\ &=&\frac{f_k}{24a^{k}} +\frac{a^{k}}{6f_k} -\frac{{\rm sign}(a)^k}{4} +\frac{1}{24f_ka^{k}} -\frac{f_k^2-6f_k+5}{24f_k}. \end{eqnarray*} After some simplifications, we obtain the desired formula for $s(a^k+f,2f)$. Notice that for $d=3$ we obtain $S(H,f) =\frac{f-1}{12}$, in accordance with (\ref{SfabH}). \end{proof} Using \eqref{formulaMd0pH} and Lemma \ref{(a^d-1)/(a-1)} we readily obtain: \noindent\frame{\vbox{ \begin{theorem}\label{TheoremMersenne} Let $d\geq 3$ be a prime integer. Let $p\equiv 1\pmod{2d}$ be a prime integer of the form $p=(a^d-1)/(a-1)$ for some $a\neq -1,0,1$. 
Let $K$ be the imaginary subfield of degree $(p-1)/d$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$. Set $H=\{a^k;\ 0\leq k\leq d-1\}$, a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. We have the mean square value formulas \begin{equation}\label{Mpd} M(p,H) =\frac{\pi^2}{6}\times\frac{a+1}{a-1}\times\left (1-\frac{(d-1)a+1}{p}\right ). \end{equation} and \begin{equation}\label{M2pd} M_2(p,H) =\frac{\pi^2}{8}\times \begin{cases} \frac{a+1}{a-1}\times\left (1-\frac{d}{p}\right )&\hbox{ if $a$ is odd,}\\ 1-\frac{(d-1)a+1}{p}&\hbox{ if $a$ is even.}\\ \end{cases} \end{equation} Consequently, for a given $d$, as $p\rightarrow\infty$ we have \begin{equation*} M(p,H) =\frac{\pi^2}{6}+o(1) \hbox{ and } M_2(p,H) =\frac{\pi^2}{8}+o(1). \end{equation*} On the other hand, for a given $a$, as $p\rightarrow\infty$ we have \begin{equation*} M(p,H) =\frac{\pi^2}{6}\times\frac{a+1}{a-1}+o(1) \hbox{ and } M_2(p,H) =\begin{cases} \frac{\pi^2}{8}\times\frac{a+1}{a-1}+o(1) &\hbox{if $a$ is odd,}\\ \frac{\pi^2}{8}+o(1) &\hbox{if $a$ is even.} \end{cases} \end{equation*} \end{theorem} }} \begin{Remark} Assertion (\ref{Mpd}) was initially proved\footnote{Note the misprint in the exponent in \cite[Theorem 5]{LouBPASM64}}in \cite[Theorem 5]{LouBPASM64} for $d=5$ and then generalized in \cite[Proposition 3.1]{LMQJM} to any $d\geq 3$. However, (\ref{Mpd}) is much simpler than \cite[(22)]{LMQJM}. Notice that if $p$ runs over the prime of the form $p=(a^d-1)/(a-1)$ with $a\neq 0,2$ even then $M_2(p,H) =\frac{6}{8}\times\frac{a-1}{a+1}\times M(p,H)$ and the asymptotic (\ref{asymptoticMd0(p,H)M(p,H)}) is not satisfied. \end{Remark} \subsection{The case where $p$ is a Mersenne prime and $d_0=1,3,15$}\label{Mersennesection} In the setting of Theorem \ref{Mersenne}, we have $2\in H$. Hence, by Remark \ref{restrictiond0} we assume that $d_0$ is odd. \noindent\frame{\vbox{ \begin{theorem}\label{Mersenne} Let $p=2^d-1>3$ be a Mersenne prime. Hence, $d$ is odd and $H=\{2^k;\ 0\leq k\leq d-1\}$ is a subgroup of odd order $d$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. Let $K$ be the imaginary subfield of degree $m =(p-1)/d$ of ${\mathbb Q}(\zeta_p)$. Then $$M(p,H) =\frac{\pi^2}{2}\left (1-\frac{2d-1}{p}\right ) \leq\frac{\pi^2}{2} \text{ and } h_K^-\leq 2\left (\frac{p}{8}\right )^{m/4},$$ $$M_3(p,H) =\frac{4\pi^2}{9}\left (1-\frac{d}{p}\right ) \leq\frac{4\pi^2}{9} \text{ and } h_K^-\leq 2\left (\frac{p}{9}\right )^{m/4}$$ and $$M_{15}(p,H) =\frac{32\pi^2}{75}\left (1-\frac{c_d}{48p}\right ) \leq\frac{32\pi^2}{75}, \hbox{ where } c_d =\begin{cases} 47d+1&\hbox{if $d\equiv 1\pmod 4$,}\\ 17d-3&\hbox{if $d\equiv 3\pmod 4$.} \end{cases}$$ In particular, for $d\equiv 3\pmod 4$ we have $h_K^-\leq 2\left (\frac{8p}{75}\right )^{m/4}.$ \end{theorem} }} \begin{proof} By (\ref{formulaMd0pH}) we have \begin{equation}\label{Md0Nd0'(f,H)} M_{d_0}(p,H) =\frac{\pi^2}{2}\left\{\prod_{q\mid d_0}\left (1-\frac{1}{q^2}\right )\right\} \left (1+\frac{N_{d_0}'(p,H)}{p}\right ), \end{equation} where for $H$ a subgroup of odd order of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$ we set \begin{equation}\label{Nd0'(f,H)} N_{d_0}'(f,H) :=-f +\frac{4\mu(d_0)}{\prod_{q\mid d_0}(q+1)} \sum_{\delta\mid d_0}\frac{\delta\mu(\delta)}{\phi(\delta)}S(H_\delta,\delta f) . \end{equation} The formulas for $M(p,H),M_3(p,H)$ and $M_{15}(p,H)$ follow from (\ref{Md0Nd0'(f,H)}) and Lemma \ref{Mersennesq=1,3,5,15} below. 
The upper bounds on $h_K^-$ follow from (\ref{boundhKminusd0}) and Lemma \ref{formulaPi} according to which $\Pi_q(p,H)\geq 1$ if $q$ is of even order in the quotient group $G/H$, where $G=({\mathbb Z}/p{\mathbb Z})^*$, hence if $q$ is of even order in the group $G$. Now, since $p\equiv 3\pmod 4$ the group $G$ is of order $p-1=2N$ with $N$ odd and $q$ is of even order in $G$ if and only $q^N=-1$ in $G$, i.e. if and only if the Legendre symbol $\left (\frac{q}{p}\right )$ is equal to $-1$. Now, since $p=2^d-1\equiv-1\equiv 3 \pmod 4$ for $d\geq 3$, the law of quadratic reciprocity gives $\left (\frac{3}{p}\right ) =-\left (\frac{p}{3}\right ) =-\left (\frac{1}{3}\right ) =-1$, as $p\equiv (-1)^d-1\equiv -2\equiv 1\pmod 3$. Hence, $\Pi_3(p,H)\geq 1$. In the same way, if $d\equiv 3\pmod 4$ then $p=2^d-1 =2\cdot 4^\frac{d-1}{2}-1\equiv 2\cdot (-1)^\frac{d-1}{2}-1\equiv -3\equiv 2\pmod 5$ and $\left (\frac{5}{p}\right ) =\left (\frac{p}{5}\right ) =\left (\frac{2}{5}\right ) =-1$ and $\Pi_5(p,H)\geq 1$. \end{proof} \begin{lemma}\label{Mersennesq=1,3,5,15} Set $f=2^d-1$ and $\varepsilon_d =(-1)^{(d-1)/2}$ with $d\geq 2$ odd. Hence $\gcd(f,15)=1$. Set $H =\{2^k;\ 0\leq k\leq d-1\}$, a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Then, \begin{equation}\label{S(H,fMersenne)} S(H,f) =\frac{f-2d+1}{4} \hbox{ and } N'(f,H) =-2d+1, \end{equation} \begin{equation}\label{S(H3,3fMersenne)} S(H_3,3f) =\frac{5f-6d+1}{6} \hbox{ and } N_3'(f,H) =-d, \end{equation} \begin{equation}\label{S(H5,5fMersenne)} S(H_5,5f) =\frac{7f-10d+2+\varepsilon_d}{5} \hbox{ and } N_5'(f,H) =-\frac{4}{3}d+\frac{1+\varepsilon_d}{6}, \end{equation} \begin{equation}\label{S(H15,15fMersenne)} S(H_{15},15f) =\frac{14f-\left (12+3\varepsilon_d\right )d+1}{3} \hbox{ and } N_{15}'(f,H) =-\frac{32+15\varepsilon_d}{48}d +\frac{1-2\varepsilon_d}{48}. \end{equation} \end{lemma} \begin{proof} The first assertion is the special case $a=2$ of Lemma \ref{(a^d-1)/(a-1)}. Let us now deal with the second assertion. Here $H_3 =\{2^k;\ 0\leq k\leq d-1\} \cup\{2^k+(-1)^kf;\ 0\leq k\leq d-1\}$. We assume that $0\leq k\leq d-1$. Hence, ${\rm sign}(2^k+(-1)^kf)=(-1)^k$. \noindent {\bf 1.} Noticing that $3f\equiv -3\pmod {2^k}$, by (\ref{scddc}) we obtain $$s(2^k,3f) =\frac{4^k+9f^2-9\cdot 2^k\cdot f+1}{36\cdot 2^k\cdot f}+s(3,2^k).$$ Noticing that $2^k\equiv (-1)^k\pmod 3$, by (\ref{scddc}) and (\ref{s1d}) we obtain $$s(3,2^k) =\frac{9+4^k-9\cdot 2^k+1}{36\cdot 2^k}-(-1)^ks(1,3) =\frac{9+4^k-9\cdot 2^k+1}{36\cdot 2^k}-\frac{(-1)^k}{18}.$$ Hence $$s(2^k,3f) =\frac{f+1}{36f}2^k +\frac{(f+1)(9f+1)}{36f}2^{-k} -\frac{1}{2} -\frac{(-1)^k}{18}.$$ \noindent {\bf 2.} Noticing that $3f\equiv -3\cdot (-1)^k2^k\pmod {2^k+(-1)^kf}$, by (\ref{scddc}) we obtain \begin{align*} s(2^k+(-1)^kf,3f) =&\frac{2^k+(-1)^kf}{36f} +\frac{f}{4(2^k+(-1)^kf)} -\frac{(-1)^k}{4} +\frac{1}{36(2^k+(-1)^kf)f}\\ &+(-1)^ks(3\cdot 2^k,2^k+(-1)^kf) \end{align*} and noticing that $2^k+(-1)^kf\equiv (-1)^{k-1}\pmod {3\cdot 2^k}$, by (\ref{scddc}) we obtain \begin{align*} s(3\cdot 2^k,2^k+(-1)^kf) =&\frac{3\cdot 2^k}{12(2^k+(-1)^kf)} +\frac{2^k+(-1)^kf}{36\cdot 2^k} -\frac{(-1)^k}{4} +\frac{1}{36\cdot 2^k\cdot (2^k+(-1)^kf)}\\ &+(-1)^ks(1,3\cdot 2^k). 
\end{align*} Using (\ref{s1d}) we finally obtain $$s(2^k+(-1)^kf,3f) =\frac{9f+1}{36f}2^k +\frac{(f+1)^2}{36f}2^{-k} -\frac{1}{2} +\frac{(-1)^k}{18}.$$ \noindent {\bf 3.} Using $\sum_{k=0}^{d-1}2^k =f$, $\sum_{k=0}^{d-1}2^{-k}=\frac{2f}{f+1}$ and $\sum_{k=0}^{d-1}(-1)^k=1$, we obtain $$\sum_{k=0}^{d-1}s(2^k,3f) =\frac{19f-18d+1}{36} \hbox{ and } \sum_{k=0}^{d-1}s(2^k+(-1)^kf,3f) =\frac{11f-18d+5}{36}.$$ Hence, we do obtain $$S(H_3,3f) =\sum_{h\in H_3}s(h,3f) =\frac{19f-18d+1}{36} +\frac{11f-18d+5}{36} =\frac{5f-6d+1}{6}$$ and $N_3'(f,H) =-d$, by (\ref{Nd0'(f,H)}). Let us finally deal with the third and fourth assertions. The proof involves tedious and repetitive computations. For this reason we will restrict ourselves to a specific case. Let us for example give some details for the proof of (\ref{S(H15,15fMersenne)}) in the case that $d\equiv 1\pmod 4$. We have $f=2^d-1\equiv 1\pmod {30}$ and $H_{15}=\cup_{l=0}^{14}E_l$, where $E_l := \{2^k+lf;\ 0\leq k\leq d-1,\ \gcd(2^k+l,15)=1\}$ for $0\leq l\leq 14$. We have to compute the sums $s_l:=\sum_{n\in E_l}s(n,15f)$. Let us for example give some details in the case that $l=1$. We have $\gcd (2^k+1,15)=1$ if and only if $k\equiv 0\pmod 4$. Hence $s_1=\sum_{k=0}^{(d-1)/4}s(16^{k}+f,15f)$. Using (\ref{scddc}) and (\ref{s1d}) we obtain $$s(16^k+f,15f) =\frac{9f+1}{180f}16^k +\frac{14}{45} +\frac{(f+1)^2}{180f}16^{-k}.$$ Finally, using $\sum_{k=0}^{(d-1)/4}16^k =\frac{8f+7}{15}$ and $\sum_{k=0}^{(d-1)/4}16^{-k} =\frac{2(8f+7)}{15(f+1)}$ we obtain $$s_1 =\sum_{k=0}^{(d-1)/4}s(16^k+f,15f) =\frac{88f^2+(210d+731)f+21}{2700f}.$$ Finally, using (\ref{Nd0'(f,H)}), (\ref {S(H,fMersenne)}), (\ref{S(H3,3fMersenne)}) and (\ref{S(H5,5fMersenne)}) we get (\ref{S(H15,15fMersenne)}). \end{proof} We conclude this Section with the following result for $d_0 =3\cdot 5\cdot 7 =105$, whose long proof we omit\footnote{The formulas have been checked on numerous examples using a computer algebra system. }: \begin{lemma}\label{Mersennesq=105} Set $f =2^d-1$ with $d>1$ odd. Assume $\gcd (f,105)=1$, i.e. that $d\equiv 1, 5, 7, 11\pmod{12}$. Set $H =\{2^k;\ 0\leq k\leq d-1\}$, a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Then \begin{equation}\label{N105(Mersenne)} N_{105}'(f,H) =-\frac{1}{576} \times\begin{cases} 437d+139&\hbox{if $d\equiv 1\pmod {12}$},\\ 535d-644&\hbox{if $d\equiv 5\pmod{12}$},\\ 97d-324&\hbox{if $d\equiv 7\pmod{12}$},\\ 195d+13&\hbox{if $d\equiv 11\pmod {12}$.} \end{cases} \end{equation} \end{lemma} Lemmas \ref {Mersennesq=1,3,5,15}-\ref{Mersennesq=105} show that the following Conjecture holds true for $d_0\in\{1,3,5,15,105\}$: \noindent\frame{\vbox{ \begin{Conjecture}\label{conjectureMersenne} Let $d_0\geq 1$ be odd and square-free. Let $N$ be the order of $2$ in the multiplicative group $({\mathbb Z}/d_0{\mathbb Z})^*$. Set $f =2^d-1$ with $d>1$ odd and $H =\{2^k;\ 0\leq k\leq d-1\}$, a subgroup of order $d$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Assume $\gcd (f,d_0)=1$. Then $N_{d_0}'(f,H) =A_1(d)d+A_0(d)$, where $A_1(d)$ and $A_0(d)$ are rational numbers which depend only on $d$ modulo $N$, i.e. only on $f$ modulo $d_0$. Hence for a prime $p\geq 3$ we expect $$M_{d_0}(p,H) =\frac{\pi^2}{2}\left\{\prod_{q \mid d_0} \left(1-\frac{1}{q^2}\right)\right\}\left(1+\frac{A_1(d)d}{p}+\frac{A_0(d)}{p}\right),$$ confirming again that the restriction on $d$ in Theorem \ref{asympd0} should be sharp. 
\end{Conjecture} }} \section{The case of subgroups of order $d=3$}\label{Sectiond=3} \subsection{Formulas for $d_0=1,2,6$} Let $p\equiv 1\pmod 6$ be a prime integer. Let $K$ be the imaginary subfield of degree $m=(p-1)/3$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$. Since $p$ splits completely in the quadratic field ${\mathbb Q}(\sqrt{-3})$ of class number one, there exists an algebraic integer $\alpha =a+b\frac{1+\sqrt{-3}}{2}$ with $a,b\in {\mathbb Z}$ such that $p =N_{{\mathbb Q}(\sqrt{-3})/{\mathbb Q}}(\alpha) =a^2+ab+b^2$. Then, $H=\{1,a/b,b/a\}$, is the unique subgroup of order $3$ of the cyclic multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. So we consider the integers $f>3$ of the form $f=a^2+ab+b^2$, with $a,b\in {\mathbb Z}\setminus\{0\}$ and $\gcd(a,b)=1$, which implies $\gcd(a,f) =\gcd (b,f)=1$ and the oddness of $f$. We set \begin{equation}\label{defH} H=\left\{1,a/b,b/a\right\}, \end{equation} a subgroup of order $3$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. We have the following explicit formula. \begin{lemma}\label{sabf} Let $f>3$ be of the form $f=a^2+ab+b^2$, with $a,b\in {\mathbb Z}$ and $\gcd(a,b)=1$. Let $H$ be as in (\ref{defH}). Then, \begin{equation}\label{SfabH} s(a,b,f) =\frac{f-1}{12f}, \ S(H,f) =\frac{f-1}{12} \hbox{ and } N(f,H) =-1+12S(H,f) =-1. \end{equation} \end{lemma} \begin{proof} Noticing that $s(b,f,a) =s(b,b^2,a) =s(1,b,a) =s(b,a)$, by (\ref{bcd=abacd}), and $s(f,a,b) =s(a^2,a,b) =s(a,1,b) =s(a,b)$, and using (\ref{scddc}), we obtain \begin{eqnarray*} s(a,b,f) &=&\frac{a^2+b^2+f^2-3\vert ab\vert f}{12abf}-s(b,f,a)-s(f,a,b) \ \ \ \ \ \text{(by (\ref{sbcddcb}))}\\ &=&\frac{a^2+b^2+f^2-3\vert ab\vert f}{12abf} -s(b,a)-s(a,b)\\ &=&\frac{a^2+b^2+f^2-3\vert ab\vert f}{12abf} -\frac{a^2+b^2-3\vert ab\vert+1}{12ab} =\frac{f-1}{12f}. \end{eqnarray*} Finally, $S(H,f)=s(1,f)+s(a,b,f)+s(b,a,f) =s(1,f)+2s(a,b,f)$ and use (\ref{s1d}) and (\ref{defNd0fHter}). \end{proof} \begin{Remark}\label{dedekindtwosizes} Take $f_1=A^2+AB+B^2>0$, where $3\nmid f_1$ and $\gcd (A,B)=1$. Set $f=(f_1+1)^3-1$. Then $f=a^2+ab+b^2$, where $a=Af_1+A-B$, $b=Bf_1+A+2B$ and $\gcd(a,b)=1$. By Lemmas \ref{sabf} we have an infinite family of moduli $f$ for which the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$ contains at the same time an element $h=a/b$ of order $d=3$ for which $s(h,f)$ is asymptotic to $1/12$ and an element $h'=f_1+1$ of order $d=3$ for which $s(h',f)$ is asymptotic to $f^{2/3}/12$. Indeed by \eqref{scddc} and \eqref{s1d} for $f=(f_1+1)^3-1$ we have $s(h',f) =\frac{h'^5+h'^4-6h'^3+6}{12f}$. \end{Remark} To deal with the case $d_0>1$, we notice that by (\ref{defNd0fHter}) we have: \begin{proposition}\label{Nd0abf} Let $d_0\geq 1$ be a given squarefree integer. Take $f>3$ odd of the form $f =a^2+ab+b^2$, where $\gcd (a,b)=1$ and $\gcd (d_0,f)=1$. Set $H=\{1,a/b,b/a\}$, a subgroup of order $3$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. Let $N_{d_0}(f,H)$ be the rational number defined in (\ref{defNd0fH}). 
Then $$N_{d_0}(f,H) =N_{d_0}(f,\{1\}) +\frac{24\mu(d_0)}{\prod_{q\mid d_0}(q+1)} \sum_{\delta\mid d_0}\frac{\delta\mu(\delta)}{\phi(\delta)}S(a,b,\delta f),$$ where $N_{d_0}(f,\{1\})$ is a rational number which depends only on $f$ modulo $d_0$, by Proposition \ref{H=1}, and where $$S(a,b,\delta f) =\sum_{h\in ({\mathbb Z}/\delta f{\mathbb Z})^*\atop h\equiv a/b\pmod f} s(h,\delta f) =\sum_{h\in ({\mathbb Z}/\delta f{\mathbb Z})^*\atop h\equiv b/a\pmod f} s(h,\delta f).$$ \end{proposition} It seems that there are no explicit formulas for $S(a,b,\delta f)$, $S(H_\delta,\delta f)$ or $N_\delta(f,H)$ for $\delta>1$ (however, assuming that $b=1$ we will obtain such formulas in Section \ref{d3b1} for $\delta\in\{2,3,6\}$). Instead, our aim is to prove in Proposition \ref{boundsSab} that $N_\delta(f,H) =O(\sqrt f)$ for $\delta\in\{2,3,6\}$. Let $f>3$ be of the form $f=a^2+ab+b^2$, with $a,b\in {\mathbb Z}$ and $\gcd(a,b)=1$. Hence, $a$ or $b$ is odd. Since $a^2+ab+b^2 =a'^2+a'b'+b'^2$ $=a''^2+a''b''+b''^2$ and $a'/b' =a/b$ and $a''/b'' =a/b$ in $({\mathbb Z}/f{\mathbb Z})^*$, where $(a',b') =(-b,a+b)$ and $(a'',b'') =(-a-b,a)$, we may assume that both $a$ and $b$ are odd. Moreover, assume that $\gcd (3,f)=1$. If $3\nmid ab$, by swapping $a$ and $b$ as needed, which does not change neither $H$ nor $S(a,b,H)$, we may assume that $a\equiv -1\pmod 6$ and $b\equiv 1\pmod 6$. If $3\mid ab$, by swapping $a$ and $b$ and then changing both $a$ and $b$ to their opposites as needed, which does not change neither $H$ nor $S(a,b,H)$, we may assume that $a\equiv 3\pmod 6$ and $b\equiv 1\pmod 6$. So in Proposition \ref{Nd0abf} we may restrict ourselves to the integers of the form \begin{multline}\label{deff} f>3 \hbox{ is odd of the form } f=a^2+ab+b^2, \hbox{ with } a,b\in {\mathbb Z} \hbox{ odd } \hbox{ and } \gcd(a,b)=1\\ \hbox{and if $\gcd(3,f)=1$ then $a\equiv -1\hbox{ or }3\pmod 6$ and $b\equiv 1\pmod 6$}. \end{multline} \noindent\frame{\vbox{ \begin{proposition}\label{boundsSab} Let $\delta\in\{2,3,6\}$ be given. Let $f$ be as in (\ref{deff}), with $\gcd (f,\delta)=1$. Then, $s(h,\delta f) =O(\sqrt f)$ for any $h\in ({\mathbb Z}/\delta f{\mathbb Z})^*$ such that $h\equiv a/b\pmod f$. Consequently, for a given $d_0\in\{1,2,3,6\}$, in Proposition \ref{Nd0abf} we have $N_{d_0}(f,H) =O(\sqrt f)$, and we cannot expect great improvements on these bounds, by (\ref{N2fa1H}), (\ref{N3fa1H}) and (\ref{N6fa1H}). \end{proposition}}} \begin{proof} First, by (\ref{SfabH}) we have $$S(a,b,f) =s(a,b,f) =\frac{f-1}{12f}.$$ Second, $f$ being odd, recalling \eqref{Ad0f} we have $A(2,f) =A(2,1) =0$, $N_2(f,\{1\}) =-1$, \begin{equation}\label{Sab2f} S(a,b,2f) =s(a,b,2f) \end{equation} and $$N_2(f,H) =-1 -8S(a,b,f) +16S(a,b,2f).$$ Third, assume that $d_0\in\{3,6\}$. Then $\gcd(f,3)=1$. Hence, $f\equiv 1\pmod 6$. Therefore, $A(3,f) =A(3,1) =4/3$, $A(6,f) =A(6,1) =-4$, $N_3(f,\{1\}) =N_6(f,\{1\}) =-1$, $$N_3(f,H) =-1-6S(a,b,f) +9S(a,b,3f)$$ and $$N_6(f,H) =-1+2S(a,b,f)-4S(a,b,2f)-3S(a,b,3f)+6S(a,b,6f).$$ \noindent If $a\equiv -1\pmod 6$, $b\equiv 1\pmod 6$ and $\delta\in\{1,2\}$, then $\{h\in ({\mathbb Z}/3\delta f{\mathbb Z})^*;\ h\equiv a/b\pmod f\} =\{a/b,(a+2f)/b\}$ and \begin{equation}\label{Sab3deltaf1} S(a,b,3\delta f) =s(a,b,3\delta f) +s(a+2f,b,3\delta f). 
\end{equation} \noindent If $a\equiv 3\pmod 6$, $b\equiv 1\pmod 6$ and $\delta\in\{1,2\}$, then $\{h\in ({\mathbb Z}/3\delta f{\mathbb Z})^*;\ h\equiv a/b\pmod f\} =\{(a-\delta f)/b,(a+\delta f)/b\}$ and \begin{equation}\label{Sab3deltaf2} S(a,b,3\delta f) =s(a-\delta f,b,3\delta f) +s(a+\delta f,b,3\delta f). \end{equation} \noindent Let us now bound the Dedekind-Rademacher sums in (\ref{Sab2f}), (\ref{Sab3deltaf1}) and (\ref{Sab3deltaf2}). We will need the bounds: \begin{equation}\label{boundsab} \hbox{if } f=a^2+ab+b^2, \hbox{ then } \vert a\vert+\vert b\vert\leq\sqrt{4f} \hbox{ and } \vert ab\vert\geq\sqrt{f/3}. \end{equation} Indeed, $4f-(\vert a\vert+\vert b\vert)^2 \geq 3(\vert a\vert-\vert b\vert)^2\geq 0$ and $f\leq a^2+a^2b^2+b^2 =3a^2b^2$. \noindent {\bf First}, we deal with the Dedekind-Rademacher sums $s(a,b,\delta f)$ in (\ref{Sab2f}) and (\ref{Sab3deltaf1}), where $\delta\in\{2,3,6\}$. Here, $\gcd (a,b) =\gcd(a,\delta f)=\gcd (b,\delta f) =1$. Then (\ref{boundscd}) and (\ref{boundsab}) enable us to write (\ref{sbcddcb}) as follows: $$s(a,b,\delta f)+O(\sqrt f)+O(\sqrt f) =O(\sqrt f).$$ Hence, in (\ref{Sab2f}) and (\ref{Sab3deltaf1}) we have $s(a,b,2f),\ s(a,b,3f),\ s(a,b,6f)=O(\sqrt f)$. \noindent {\bf Second}, the remaining and more complicated Dedekind-Rademacher sums in (\ref{Sab3deltaf1}) and (\ref{Sab3deltaf2}) are of the form $s(a+\varepsilon\delta f,b,3\delta f)$, where $\varepsilon\in\{\pm 1\}$, $\delta\in\{1,2\}$ and $\gcd (a+\varepsilon\delta f,3\delta f) =\gcd(b,3\delta f)=1$. Set $\delta' =\gcd(a+\varepsilon\delta f,b)$. Then $\gcd(\delta',3\delta f)=1$. Thus, $s(a+\varepsilon\delta f,b,3\delta f)$ $=s((a+\varepsilon\delta f)/\delta',b/\delta',3\delta f)$, where now the three terms in this latter Dedekind-Rademacher are pairwise coprime. Then (\ref{boundscd}) and (\ref{boundsab}) enable us to write (\ref{sbcddcb}) as follows: $$s((a+\varepsilon\delta f)/\delta',b/\delta',3\delta f) +O(\sqrt f) +s(b/\delta',3\delta f,(a+\varepsilon\delta f)/\delta') =O(\delta'^2/b) =O(b) =O(\sqrt f).$$ Now, $3\delta f\equiv -3\varepsilon a\pmod {a+\varepsilon\delta f}$ gives $s(b/\delta',3\delta f,(a+\varepsilon\delta f)/\delta') =-\varepsilon s(b/\delta',3a,(a+\varepsilon\delta f)/\delta')$. Since the three rational integers in this latter Dedekind-Rademacher are pairwise coprime, the bounds (\ref{boundsab}) and (\ref{boundscd}) enable us to write (\ref{sbcddcb}) as follows: $$s(b/\delta',3a,(a+\varepsilon\delta f)/\delta')+O(\sqrt f)+O(\sqrt f) =O(\sqrt f).$$ It follows that $s(a+\varepsilon\delta f,b,3\delta f) =s((a+\varepsilon\delta f)/\delta',b/\delta',3\delta f) =O(\sqrt f)$, i.e., in (\ref{Sab3deltaf1}) and (\ref{Sab3deltaf2}) we have $s(a+2f,b,6f),\ s(a-2f,b,6f),\ s(a+2f,b,3f), s(a-f,b,3f), s(a+f,b,3f) =O(\sqrt f)$. \end{proof} \noindent\frame{\vbox{ \begin{Conjecture}\label{conjecturesab} Let $\delta$ be a given square-free integer. Let $f>3$ run over the odd integers of the form $f=a^2+ab+b^2$ with $\gcd(a,b)=1$ and $\gcd (\delta,f)=1$. Then $s(h,\delta f) =O(\sqrt f)$ for any $h\in ({\mathbb Z}/\delta f{\mathbb Z})^*$ such that $h\equiv a/b\pmod f$. Consequently, for a given square-free integer $d_0$, in Proposition \ref{Nd0abf}, we would have $N_{d_0}(f,H) =O(\sqrt f)$ for $\gcd (d_0,f)=1$. \end{Conjecture} }} Putting everything together we obtain: \noindent\frame{\vbox{ \begin{theorem}\label{thp3M2} Let $p\equiv 1\pmod 6$ be a prime integer. Let $K$ be the imaginary subfield of degree $(p-1)/3$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$. 
Let $H$ be the subgroup of order $3$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. We have \begin{equation*} M(p,H) ={\pi^2\over 6}\left (1+\frac{N(p,H)}{p}\right ) ={\pi^2\over 6}\left (1-\frac{1}{p}\right ) \hbox{ and } h_K^- \leq 2\left (\frac{p}{24}\right )^{(p-1)/12}, \end{equation*} and the following effective asymptotics and upper bounds \begin{equation}\label{pc3M2} M_2(p,H) ={\pi^2\over 8}\left (1+\frac{N_2(p,H)}{p}\right ) ={\pi^2\over 8}\left (1+O(p^{-1/2})\right ) \hbox{ and } h_K^- \leq 2\left (\frac{p+o(p)}{32}\right )^{(p-1)/12}, \end{equation} \begin{equation*} M_6(p,H) ={\pi^2\over 9}\left (1+\frac{N_6(p,H)}{p}\right ) ={\pi^2\over 9}\left (1+O(p^{-1/2})\right ) \hbox{ and } h_K^- \leq 2\left (\frac{p+o(p)}{36}\right )^{(p-1)/12}. \end{equation*} \end{theorem} }} \begin{proof} The formulas for $M(p,H)$, $M_2(p,H)$ and $M_6(p,H)$ follow from (\ref{M(p,H)}), (\ref{Md0pH}), (\ref{SfabH}) and Proposition \ref{boundsSab}. The inequalities on $h_K^-$ are consequences as usual of (\ref{boundhKminusd0}) and Corollary \ref{PiKminus}. \end{proof} \subsection{The special case $p=a^2+a+1$ and $d_0=1,2,6$}\label{d3b1} \begin{theorem}\label{thp3M2M6} Let $p\equiv 1\pmod 6$ be a prime integer of the form $p=a^2+a+1$ with $a\in {\mathbb Z}$. Let $K$ be the imaginary subfield of degree $(p-1)/3$ of the cyclotomic field ${\mathbb Q}(\zeta_p)$. Let $H$ be the subgroup of order $3$ of the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$. We have \begin{equation}\label{pc3M2a} M_2(p,H) ={\pi^2\over 8}\left (1-(-1)^{a}\frac{2a+1}{p}\right ), \end{equation} showing that the error term in (\ref{pc3M2}) is optimal, \begin{equation}\label{pc3M6a} M_6(p,H) ={\pi^2\over 9}\left (1+\frac{c_a}{p}\right ), \hbox{ where } c_a =\begin{cases} -2a-1&\hbox{if $a\equiv 0\pmod 6$},\\ -3&\hbox{if $a\equiv 2,3\pmod 6$},\\ 2a+1&\hbox{if $a\equiv 5\pmod 6$}. \end{cases} \end{equation} \end{theorem} \begin{proof} The formula \eqref{pc3M2a} is a special case of \eqref{M2pd} for $d=3$. By (\ref{Md0pH}), we have $$ M_6(p,H) ={\pi^2\over 9}\left (1+\frac{N_6(p,H)}{p}\right ).$$ Hence (\ref{pc3M6a}) follows from Lemma \ref{sab6f} below. \end{proof} \begin{lemma}\label{sab2f} Let $f>3$ be of the form $f=a^2+a+1$, $a\in {\mathbb Z}$. Set $H=\{1,a,a^2\}$, a subgroup of order $3$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. We have \begin{equation}\label{S2fa1H} S(H,f) =\frac{f-1}{12} \hbox { and } S(H_2,2f) =\frac{2f+c_a'}{12} \hbox{ where } c_a' :=\begin{cases} -3a-2&\hbox{if $a\equiv 0\pmod 2$},\\ 3a+1&\hbox{if $a\equiv 1\pmod 2$} \end{cases} \end{equation} and \begin{equation}\label{N2fa1H} N_2(f,H) =-f-4S(H,f)+8S(H_2,2f) =(-1)^{a-1}(2a+1). \end{equation} \end{lemma} \begin{proof} Apply Lemma \ref{(a^d-1)/(a-1)} with $d=3$ and $f=a^2+a+1$ and then conclude using (\ref{defNd0fHter}). \end{proof} \begin{lemma}\label{sab6f} Let $f>3$ be of the form $f=a^2+a+1$, $a\in {\mathbb Z}$. Assume that $\gcd(f,6)=1$, i.e. that $a\equiv 0, 2, 3, 5\pmod 6$. Set $H=\{1,a,a^2\}$, a subgroup of order $3$ of the multiplicative group $({\mathbb Z}/f{\mathbb Z})^*$. 
Then \begin{equation}\label{S3fa1H} S(H_3,3f) =\frac{5f+c_a''}{18} \hbox{ where } c_a'' :=\begin{cases} -8a-5&\hbox{if $a\equiv 0\pmod 3$},\\ 8a+3&\hbox{if $a\equiv 2\pmod 3$}, \end{cases} \end{equation} \begin{equation}\label{N3fa1H} N_3(f,H) =-f-3S(H,f)+\frac{9}{2}S(H_3,3f) =\frac{c_a''+1}{4} =\begin{cases} -2a-1&\hbox{if $a\equiv 0\pmod 3$},\\ 2a+1&\hbox{if $a\equiv 2 \pmod 3$}, \end{cases} \end{equation} \begin{equation}\label{S6fa1H} S(H_6,6f) =\frac{10f+c_a'''}{18} \hbox{ where } c_a''' :=\begin{cases} -19a-10&\hbox{if $a\equiv 0\pmod 6$},\\ a-18&\hbox{if $a\equiv 2\pmod 6$},\\ -a-19&\hbox{if $a\equiv 3\pmod 6$},\\ 19a+9&\hbox{if $a\equiv 5\pmod 6$}, \end{cases} \end{equation} and \begin{equation}\label{N6fa1H} N_6(f,H) =-f +S(H,f)-2S(H_2,2f)-\frac{3}{2}S(H_3,3f)+3S(H_6,6f) =\begin{cases} -2a-1&\hbox{if $a\equiv 0\pmod 6$,}\\ -3&\hbox{if $a\equiv 2,3\pmod 6$,}\\ 2a+1&\hbox{if $a\equiv 5\pmod 6$.} \end{cases} \end{equation} \end{lemma} \begin{proof} Let us for example detail the computation of $S(H_6,6f)$ in the case that $a\equiv 0\pmod 6$. We have $f\equiv 1\pmod 6$ and $H_6=\{1,1+4f,a+f,a+5f,a^2+f,a^2+5f\}$. Since $a^2+f =(a+f)^{-1}$ and $a^2+5f=(a+5f)^{-1}$ in $({\mathbb Z}/f{\mathbb Z})^*$, we have $S(H_6,6f) =s(1,6f)+s(1+4f,6f)+2s(a+f,6f)+2s(a+5f,6f)$, by (\ref{cc*}). Using (\ref{scddc}) and (\ref{s1d}) we obtain $s(1,6f) =\frac{18f^2-9f+1}{36f}$, $s(1+4f,6f) =\frac{2f^2-13f+1}{36f}$, $s(a+f,6f) =-\frac{(3a-21)f+1}{72f}$ and $s(a+5f,6f) =-\frac{(35a+19)f+1}{72f}$. Formula (\ref{S6fa1H}) follows. Using (\ref{SfabH}), (\ref{S2fa1H}), (\ref{S3fa1H}) and (\ref{S6fa1H}), we obtain $N_6(f,H) =\frac{-1-2c_a'-c_a''+2c_a'''}{12}$ and \eqref{N6fa1H}. \end{proof} \section{Conclusion and a conjecture}\label{conclusion} The proof of Lemma \ref{(a^d-1)/(a-1)} gives \begin{equation}\label{s(a,f)} s\left (a,\frac{a^d-1}{a-1}\right ) =\frac{(f-1)(f-a^2-1)}{12af} =O\left (f^{1-\frac{1}{d-1}}\right ) \hbox{ for $d\geq 3$ odd and $a\neq -1,0,1$.} \end{equation} Our numerical computations suggest the following stronger version of Theorem \ref{indivbound}: \begin{Conjecture}\label{conjDedekind} There exists $C>0$ such that for any odd $d>1$ dividing $p-1$ and any $h$ of order $d$ in the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$ we have \begin{equation}\label{conjecture} \vert s(h,p)\vert \leq Cp^{1-\frac{1}{\phi(d)}}. \end{equation} \end{Conjecture} Indeed, for $p\leq 10^6$ we checked on a desk computer that for any odd $d>1$ dividing $p-1$ and any $h$ of order $d$ in the multiplicative group $({\mathbb Z}/p{\mathbb Z})^*$ we have $$Q(h,p) :=\frac{\vert s(h,p)\vert}{p^{1-\frac{1}{\phi(d)}}} \leq Q(2,2^7-1) =0.08903\cdots$$ The estimate \eqref{conjecture} would allow us to slightly extend the range of validity of Theorem \ref{asympd0} to $d \leq (1-\varepsilon)\frac{\log p}{\log \log p}$. Moreover, the choice $a=2$ in (\ref{s(a,f)}), for which $s(2,f)$ is asymptotic to $\frac{1}{24}f$ with $f=2^d-1$, shows that $s(h,p) =o(p)$ cannot hold true in the range $d \asymp \log p$. Notice that we cannot expect a better bound than (\ref{conjecture}), by (\ref{s(a,f)}). Finally, the restriction that $p$ be prime in (\ref{conjecture}) is essential, by Remark \ref{dedekindtwosizes}, where $s(a,f) \sim f^{2/3}/12$ for $a$ of order $3$ in $({\mathbb Z}/(a^3-1){\mathbb Z})^*$. } \end{document}
The issues involved are complex, and the debate is closely entangled with a number of other philosophical disputes, including those about the nature and ontology of time, parts and wholes, material constitution, causation and properties, and vagueness. (shrink) Endurance in Metaphysics Regarding a Regress.Yuri Cath - 2013 - Pacific Philosophical Quarterly 94 (3):358-388.details Is there a successful regress argument against intellectualism? In this article I defend the negative answer. I begin by defending Stanley and Williamson's (2001) critique of the contemplation regress against Noë (2005). I then identify a new argument – the employment regress – that is designed to succeed where the contemplation regress fails, and which I take to be the most basic and plausible form of a regress argument against intellectualism. However, I argue that the employment regress still fails. Drawing (...) on the previous discussion, I criticise further regress arguments given by Hetherington (2006) and Noë (2005). (shrink) Ongoing Commercialization of Gestational Surrogacy due to Globalization of the Reproductive Market before and after the Pandemic.Yuri Hibino - 2022 - Asian Bioethics Review 14 (4):349-361.details Surrogacy tourism in Asian countries has surged in recent decades due to affordable prices and favourable regulations. Although it has recently been banned in many countries, it is still carried out illegally across borders. With demand for surrogacy in developed countries increasing and economically vulnerable Asian women lured by lucrative compensation, there are efforts by guest countries to ease the strict surrogacy regulations in host countries. Despite a shift toward "altruistic surrogacy", commercial surrogacy persists. Recent research carried out by international (...) organizations that seek to establish a legal relationship between the commissioning parents and children in cross-border surrogacy arrangements, under the guise of the "best interests of the child," appears to promote a resurgence of overseas commercial surrogacy rather than restrict it. Further commercialization of surrogacy should be prevented by carefully investigating the reality of the surrogacy process. (shrink) Globalization in Social and Political Philosophy Social Epistemology and Knowing-How.Yuri Cath - forthcoming - In Jennifer Lackey & Aidan McGlynn (eds.), Oxford Handbook of Social Epistemology. Oxford University Press.details This chapter examines some key developments in discussions of the social dimensions of knowing-how, focusing on work on the social function of the concept of knowing-how, testimony, demonstrating one's knowledge to other people, and epistemic injustice. I show how a conception of knowing-how as a form of 'downstream knowledge' can help to unify various phenomena discussed within this literature, and I also consider how these ideas might connect with issues concerning wisdom, moral knowledge, and moral testimony. Epistemic Injustice in Epistemology The Concept of Knowledge in Epistemology Kul'tura I Vzryv (Moscow).Yuri Lotman - forthcoming - Gnosis.details Russian Philosophy in European Philosophy Hemiface Differences in Visual Exploration Patterns When Judging the Authenticity of Facial Expressions.Yuri Busin, Katerina Lukasova, Manish K. Asthana & Elizeu C. Macedo - 2018 - Frontiers in Psychology 8.details Cultural sensitivity in brain death determination: a necessity in end-of-life decisions in Japan.Yuri Terunuma & Bryan J. 
Mathis - 2021 - BMC Medical Ethics 22 (1):1-6.details Background In an increasingly globalized world, legal protocols related to health care that are both effective and culturally sensitive are paramount in providing excellent quality of care as well as protection for physicians tasked with decision making. Here, we analyze the current medicolegal status of brain death diagnosis with regard to end-of-life care in Japan, China, and South Korea from the perspectives of front-line health care workers. Main body Japan has legally wrestled with the concept of brain death for decades. (...) An inability to declare brain death without consent from family coupled with cultural expectations of family involvement in medical care is mirrored in other Confucian-based cultures and may complicate care for patients from these countries when traveling or working overseas. Within Japan, China, and South Korea, medicolegal shortcomings in the diagnosis of brain death act as a great source of stress for physicians and expose them to potential public and legal scorn. Here, we detail the medicolegal status of brain death diagnosis within Japan and compare it to China and South Korea to find common ground and elucidate the impact of legal ambiguity on health care workers. Conclusion The Confucian cultural foundation of multiple Asian countries raises common issues of family involvement with diagnosis and cultural considerations that must be met. Leveraging public education systems may increase awareness of brain death issues and lead to evolving laws that clarify such end-of-life issues while protecting physicians from sociocultural backlash. (shrink) Medical Ethics in Applied Ethics The ability hypothesis and the new knowledge-how.Yuri Cath - 2009 - Noûs 43 (1):137-156.details What follows for the ability hypothesis reply to the knowledge argument if knowledge-how is just a form of knowledge-that? The obvious answer is that the ability hypothesis is false. For the ability hypothesis says that, when Mary sees red for the first time, Frank Jackson's super-scientist gains only knowledge-how and not knowledge-that. In this paper I argue that this obvious answer is wrong: a version of the ability hypothesis might be true even if knowledge-how is a form of knowledge-that. To (...) establish this conclusion I utilize Jason Stanley and Timothy Williamson's well-known account of knowledge-how as "simply a species of propositional knowledge" . I demonstrate that we can restate the core claims of the ability hypothesis – that Mary only gains new knowledge-how and not knowledge-that – within their account of knowledge-how as a species of knowledge-that. I examine the implications of this result for both critics and proponents of the ability hypothesis. (shrink) Specific Expressions, Misc in Philosophy of Language The Knowledge Argument in Philosophy of Mind On the chiral anomaly in non-Riemannian spacetimes.Yuri N. Obukhov, Eckehard W. Mielke, Jan Budczies & Friedrich W. Hehl - 1997 - Foundations of Physics 27 (9):1221-1236.details Thetranslation Chern-Simons type three-formcoframe∧torsion on a Riemann-Cartan spacetime is related to the Nieh-Yan fourform. Following Chandia and Zanelli, two spaces with nontrivial translational Chern-Simons forms are discussed. We then demonstrate, first within the classical Einstein-Cartan-Dirac theory and second in the quantum heat kernel approach to the Dirac operator, how the Nieh-Yan form surfaces in both contexts, in contrast to what has been assumed previously. 
Space and Time in Philosophy of Physical Science Genome reduction as the dominant mode of evolution.Yuri I. Wolf & Eugene V. Koonin - 2013 - Bioessays 35 (9):829-837.details Genetics and Molecular Biology in Philosophy of Biology Mechanisms of Evolution in Philosophy of Biology BHASKAR'S PHILOSOPHY AS ANTI-ANTHROPISM A Comparative Study of Eastern and Western Thought.Seo Mingyu - 2008 - Journal of Critical Realism 7 (1):5-28.details This article aims to contribute to the understanding of Roy Bhaskar's philosophical evolution from critical realism to the philosophy of meta-Reality. Following Bhaskar's own terminology, I define his intellectual journey as the 'identification of dualism and duality within non-duality' by proposing that anti-anthropism plays a key role in the developmental consistency of his system from critical realism via dialectical critical realism to meta-Reality. For this purpose, I compare Bhaskar's philosophy with Andrew Collier's theory of human rationality and spiritual emancipation based (...) on Christianity and Tu Wei-Ming's anthropocosmic understanding of human beings and nature based on Confucianism. (shrink) Asian Philosophy, Misc in Asian Philosophy Who is Dr. Frankenstein? Or, what Professor Hayek and his friends have done to science.Yuri Lazebnik - 2018 - Organisms 2 (2):9-42.details This commentary suggests that the ongoing malaise of biomedical research results from adopting a doctrine that is incompatible with the principles of creative scientific discovery and thus should be treated as a mental rather than somatic disorder. I overview the progression of the malaise, outline the doctrine and the history of its marriage to science, formulate the diagnosis, justify it by reviewing the symptoms of the malaise, and suggest how to begin to cure the disease. A knowledge representation based on the Belnap's four-valued logic.Yuri Kaluzhny & Alexei Yu Muravitsky - 1993 - Journal of Applied Non-Classical Logics 3 (2):189-203.details Nonclassical Logics in Logic and Philosophy of Logic Attenuated sensitivity to the emotions of others by insular lesion.Yuri Terasawa, Yoshiko Kurosaki, Yukio Ibata, Yoshiya Moriguchi & Satoshi Umeda - 2015 - Frontiers in Psychology 6.details James V. Neel and Yuri E. Dubrova: Cold War Debates and the Genetic Effects of Low-Dose Radiation.Magdalena E. Stawkowski & Donna M. Goldstein - 2015 - Journal of the History of Biology 48 (1):67-98.details This article traces disagreements about the genetic effects of low-dose radiation exposure as waged by James Neel, a central figure in radiation studies of Japanese populations after World War II, and Yuri Dubrova, who analyzed the 1986 Chernobyl nuclear power plant accident. In a 1996 article in Nature, Dubrova reported a statistically significant increase in the minisatellite DNA mutation rate in the children of parents who received a high dose of radiation from the Chernobyl accident, contradicting studies that found (...) no significant inherited genetic effects among offspring of Japanese A-bomb survivors. Neel's subsequent defense of his large-scale longitudinal studies of the genetic effects of ionizing radiation consolidated current scientific understandings of low-dose ionizing radiation. The article seeks to explain how the Hiroshima/nagasaki data remain hegemonic in radiation studies, contextualizing the debate with attention to the perceived inferiority of Soviet genetic science during the Cold War. 
(shrink) Quasi-matrix logic as a paraconsistent logic for dubitable information.Yury V. Ivlev - 2000 - Logic and Logical Philosophy 8:91.details Paraconsistent Logic in Logic and Philosophy of Logic The genetic structure of SARS‐CoV‐2 is consistent with both natural or laboratory origin: Response to Tyshkovskiy and Panchin.Yuri Deigin & Rossana Segreto - 2021 - Bioessays 43 (9):2100137.details Yuri Lotman on metaphors and culture as self-referential semiospheres.Winfried Nöth - 2006 - Semiotica 2006 (161):249-263.details Semiotics in Social Sciences Elasticity to atomistics: Predictive modeling of defect behavior.Yuri Osetsky, Ron Scattergood, Anna Serra & Roger Stoller - 2010 - Philosophical Magazine 90 (7-8):803-804.details Knowing How and 'Knowing How'.Yuri Cath - 2015 - In Christopher Daly (ed.), The Palgrave Handbook of Philosophical Methods. Palgrave-Macmillan. pp. 527-552.details What is the relationship between the linguistic properties of knowledge-how ascriptions and the nature of knowledge-how itself? In this chapter I address this question by examining the linguistic methodology of Stanley and Williamson (2011) and Stanley (2011a, 2011b) who defend the intellectualist view that knowledge-how is a kind of knowledge-that. My evaluation of this methodology is mixed. On the one hand, I defend Stanley and Williamson (2011) against critics who argue that the linguistic premises they appeal to—about the syntax and (...) semantics of knowledge-how and knowledge-wh ascriptions—do not establish their desired conclusions about the nature of knowledge-how itself. But, on the other hand, I also criticize the role that linguistic considerations play in Stanley's (2011a) response to apparent Gettier-style counterexamples to intellectualism. (shrink) Epistemology of Philosophy, Misc in Metaphilosophy Linguistic Analysis in Philosophy in Metaphilosophy Transformative experiences and the equivocation objection.Yuri Cath - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy:1-22.details Paul (2014, 2015a) argues that one cannot rationally decide whether to have a transformative experience by trying to form judgments, in advance, about (i) what it would feel like to have that experience, and (ii) the subjective value of having such an experience. The problem is if you haven't had the experience then you cannot know what it is like, and you need to know what it is like to assess its value. However, in earlier work I argued that 'what (...) it is like'-knowledge comes in degrees, and I briefly suggested that, consequently, some instances of Paul's argument schema might commit a fallacy of equivocation. The aim of this paper is to further explore and strengthen this objection by, first, offering a new argument—the modelling argument—in support of it, and then by evaluating a range of replies that might be given to this objection on Paul's behalf. I conclude that each reply either fails or, at best, only partially succeeds in defending some but not all instances of Paul's argument schema. In closing, I consider how we might revise Paul's concepts of transformative experiences and choices in response to this conclusion. (shrink) Epistemology of Imagination in Philosophy of Mind Epistemology of Mind in Philosophy of Mind Imagination and Memory in Philosophy of Mind Knowledge by Acquaintance in Epistemology Persistence and Spacetime.Yuri Balashov - 2010 - Oxford and New York: Oxford University Press.details Background and assumptions. 
Persistence and philosophy of time ; Atomism and composition ; Scope ; Some matters of methodology -- Persistence, location, and multilocation in spacetime. Endurance, perdurance, exdurance : some pictures ; More pictures ; Temporal modification and the "problem of temporary intrinsics" ; Persistence, location and multilocation in generic spacetime ; An alternative classification -- Classical and relativistic spacetime. Newtonian spacetime ; Neo-Newtonian (Galilean) spacetime ; Reference frames and coordinate systems ; Galilean transformations in spacetime ; Special relativistic (...) spacetime ; Length contraction and time dilation ; Invariant properties of special relativistic spacetime -- Persisting objects in classical spacetime. Enduring, perduring, and exduring objects in Galilean spacetime ; The argument from vagueness ; From minimal D-fusions to temporal parts ; Motivating a sharp cutoff ; Some objections and replies ; Implications -- Persisting objects in Minkowski spacetime. Enduring, perduring, and exduring objects in Minkowski spacetime ; Flat and curved achronal regions in Minkowski spacetime ; Early reflections on persisting objects in Minkowski spacetime : Quine and Smart ; "Profligate ontology"? ; Is achronal universalism tenable in Minkowski spacetime? ; "Crisscrossing" and immanent causation -- Coexistence in spacetime. The notion of coexistence ; Desiderata ; Coexistence in Galilean spacetime ; Coexistence in Minkowski spacetime : CASH ; Alexandrov-Stein present and Alexandrov-Stein coexistence ; AS-Coexistence v. CASH : symmetry, multigrade, and objectivity ; As-coexistence v. CASH : relevance ; The mixed past of coexistence ; No need in the extended now -- Strange coexistence? Coexistence and [email protected] ; The asymmetry thesis ; The absurdity thesis ; Collective CASH value of coexistence ; Collective [email protected] and coexistence in classical spacetime ; Collective [email protected] and coexistence in Minkowski spacetime ; Contextuality ; Chronological incoherence ; Some objections -- Shapes and other arrangements in Minkowski spacetime. How rigid is a granite block? ; Perspectives in space ; Perspectives in spacetime ; Are shapes intrinsic to objects? ; The causal objection ; The micro-reductive objection ; Pegs, boards, and shapes ; Perduring objects exist. (shrink) Intuitionistic logic with strong negation.Yuri Gurevich - 1977 - Studia Logica 36 (1-2):49 - 59.details This paper is a reaction to the following remark by grzegorczyk: "the compound sentences are not a product of experiment. they arise from reasoning. this concerns also negations; we see that the lemon is yellow, we do not see that it is not blue." generally, in science the truth is ascertained as indirectly as falsehood. an example: a litmus-paper is used to verify the sentence "the solution is acid." this approach gives rise to a (very intuitionistic indeed) conservative extension of (...) the heyting logic satisfying natural duality laws. (shrink) Intuitionistic Logic in Logic and Philosophy of Logic Verification of Karl Markx's Revolutionary Insights.Yury Oleinikov - 2018 - Russian Journal of Philosophical Sciences 4:27-44.details The Monadic Theory of ω 1 2.Yuri Gurevich, Menachem Magidor & Saharon Shelah - 1983 - Journal of Symbolic Logic 48 (2):387-398.details Assume ZFC + "There is a weakly compact cardinal" is consistent. 
Then: For every $S \subseteq \omega, \mathrm{ZFC} +$ "S and the monadic theory of ω 2 are recursive each in the other" is consistent; and ZFC + "The full second-order theory of ω 2 is interpretable in the monadic theory of ω 2 " is consistent. Model Theory in Logic and Philosophy of Logic Маска в художественном мире гоголя и маски анатолия каплана.Yuri M. Lotman - 2002 - Σημιοτκή-Sign Systems Studies 2:695-705.details Teaching Rape Texts in Classical Literature.Yurie Hong - 2013 - Classical World: A Quarterly Journal on Antiquity 106 (4):669-675.details Charles Peirce and Pragmatism.Yuri K. Melvil - 1971 - Philosophy and Phenomenological Research 32 (2):271-272.details Charles Sanders Peirce in 19th Century Philosophy Koenraad kortmulder (1998). Play and evolution - second thoughts on the behaviour of animals.Yuri Robbers - 2001 - Acta Biotheoretica 49 (1):75-76.details Evolutionary Biology in Philosophy of Biology Presentism and relativity. [REVIEW]Yuri Balashov & Michel Janssen - 2003 - British Journal for the Philosophy of Science 54 (2):327-346.details In this critical notice we argue against William Craig's recent attempt to reconcile presentism (roughly, the view that only the present is real) with relativity theory. Craig's defense of his position boils down to endorsing a 'neo-Lorentzian interpretation' of special relativity. We contend that his reconstruction of Lorentz's theory and its historical development is fatally flawed and that his arguments for reviving this theory fail on many counts. 1 Rival theories of time 2 Relativity and the present 3 Special relativity: (...) one theory, three interpretations 4 Theories of principle and constructive theories 5 The relativity interpretation: explanatorily deficient? 6 The relativity interpretation: ontologically fragmented? 7 The space-time interpretation: does God need a preferred frame of reference? 8 The neo-Lorentzian interpretation: at what price? 9 The neo-Lorentzian interpretation: with what payoff? 10 Why we should prefer the space-time interpretation over the neo-Lorentzian interpretation 11 What about general relativity? 12 Squaring the tenseless space-time interpretation with our tensed experience. (shrink) Eternalism in Metaphysics Presentism in Metaphysics Special Relativity in Philosophy of Physical Science Direct download (12 more)
CommonCrawl
Think about how best to approximate these things from the physical world around us. You will need to make some estimations and find information from friends or other sources, as would any scientist! Take care to represent all of your answers using a sensible number of decimal places and be sure to note all of your assumptions clearly.
1. Light travels at $c=3\times 10^8$ metres per second. How fast is this in miles per hour? How many times faster is this than a sports car?
2. The Milky Way is a spiral galaxy with diameter about 100,000 light years and thickness about 1000 light years. There are estimated to be between 100 billion and 400 billion stars in the galaxy. Estimate the average distance between these stars.
3. The density of lead is $11.34$ g/cm$^3$. How big would a tonne of lead be?
4. Estimate the mass of ore it takes to produce a roll of aluminum kitchen foil.
5. How many AA batteries contain enough charge between them to run a laptop for an hour?
6. Estimate how many atoms there are in a staple.
7. Einstein's equation tells us that the energy $E$ stored in matter equals $mc^2$, where $m$ is the mass and $c$ is the speed of light. How much energy is contained in the staple from question 6? How long could this energy run your laptop for?
8. How much energy would it take to raise the air temperature of the room you are in by 1$^\circ$C? How much gas must be burned to produce this much energy? What is the cost of that much gas?
An obvious part of the skill in applying mathematics to physics is to know the fundamental formulae and constants relevant to a problem. By not providing these pieces of information directly, these questions make you engage at a deeper level with the problems. You might not necessarily know all of the required formulae, but working out which parts you can and cannot do is all part of the problem-solving process!
Vectors. Physics. Biology. Investigations. Maths Supporting SET. Chemistry. STEM - physical world. Standard index form/Scientific notation. Engineering. Mathematical modelling.
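A rough worked sketch for question 1 (the sports-car top speed of about 200 mph is an assumption, not given in the problem): $c = 3\times 10^8$ m/s corresponds to $3\times 10^8 \times 3600 \approx 1.1\times 10^{12}$ metres per hour, and dividing by roughly $1600$ metres per mile gives about $6.7\times 10^{8}$ miles per hour — around three million times faster than the sports car.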
CommonCrawl
\begin{definition}[Definition:Transmission Error] Let $c$ be a transmitted codeword of a message sent over a transmission line $T$. Let $c'$ be the corresponding word received across $T$. Each term of $c'$ may or may not match the corresponding term of $c$. Each term of $c'$ which does ''not'' match the corresponding term of $c$ is referred to as an instance of a '''transmission error'''. Category:Definitions/Information Theory \end{definition}
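For example (an illustrative pair of codewords, not taken from the definition above): if the codeword $c = 10110$ is transmitted over $T$ and the word $c' = 10010$ is received, then the third term of $c'$ does not match the corresponding term of $c$, so $c'$ contains exactly one transmission error.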
ProofWiki
Quick access to articles on this page:
- March 2021 - Cryptography and assembly code
- March 2021 - Key wrapping and nonce-misuse resistance
- March 2021 - Signature forgeries in Golang ECDSA library?
- March 2021 - The Let's Encrypt duplicate signature key selection attack
- March 2021 - A flamegraph of Real-World Cryptography
- March 2021 - ''This destroyes the RSA cryptosystem''
- February 2021 - I was on the Technoculture podcast
- January 2021 - I'm on the develomentor podcast to talk about what applied cryptography is!
more on the next page...

Cryptography and assembly code
posted March 2021

Thanks to Filippo streaming his adventures rewriting Golang assembly code into "cleaner" Golang assembly code, I discovered the Avo assembly generator for Golang. This post is not necessarily about Golang, but Golang is a good example as its standard library is probably the best cryptographic standard library of any programming language. At dotGo 2019, Michael McLoughlin presented on his Avo tool. In the talk he mentions that there's 24,962 lines of x86 assembly in Golang's standard library, and most of it is in the crypto package. A very "awkward" place where "we need very high performance, and absolute correctness". He then shows several examples that he describes as "write-once code". The talk is really interesting and I recommend you check it out. I personally spent days trying to understand Golang's SHA-3 assembly implementation. I even created a Go Assembly by Example page to help me in this journey. And I ended up giving up. I just couldn't understand how it worked; the thing didn't make sense. Someone had written it with their own mental model of how they wanted to pass data around. It was horrible. It's not just a problem with Golang. Look at OpenSSL, for example, which most cryptographic applications and libraries rely on. It contains a huge amount of assembly code to implement cryptography, and that assembly code is sometimes generated by unintelligible Perl code. There are many more good examples out there: the BearSSL TLS implementation by Thomas Pornin, the libsodium cryptographic library by Frank Denis, the extended keccak code package by the Keccak team, all use assembly code to produce fast cryptography. We're making such a fuss about readable, auditable, simple and clear cryptographic implementations, but most of that has been thrown out of the window in the quest for performance. The real problem, from a reviewer's perspective, is that assembly is getting us much further away from the specification. As the role of a reviewer is to match the implementation to the specification, it makes the job hard, perhaps impossible. Food for thought...

Key wrapping and nonce-misuse resistance
posted March 2021

If you know about authenticated encryption, you could stop reading here, understand that you can just use AES-GCM or Chacha20-Poly1305 whenever you need to encrypt something, and move on with your life. Unfortunately, real-world cryptography is not always about the agreed standard; it is also about constraints. Constraints in size, constraints in speed, constraints in format, and so on. For this reason, we need to look at scenarios where these AEADs won't fit, and what solutions have been invented by the field of cryptography.

Wrapping keys: how to encrypt secrets

One of the problems of nonce-based AEADs is that they all require a nonce, which takes additional space.
In worse scenarios, the nonce to-be-used for encryption comes from an untrusted source and can thus lead to nonce repetition that would damage the security of the encryption. From these assumptions, it was noticed that encrypting a key might not necessarily need randomization, since what is encrypted is already random. Encrypting keys is a useful paradigm in cryptography, and is used in a number of protocols as you will see in the second part of this book. The most widely adopted standard is NIST Special Publication 800-38F, Recommendation for Block Cipher Modes of Operation: Methods for Key Wrapping. It specifies two key wrapping algorithms based on AES: the AES Key Wrap (KW) mode and the AES Key Wrap With Padding (KWP) mode. These two algorithms are often implemented in Hardware Security Modules (HSM). HSMs are devices that are capable of performing cryptographic operations while ensuring that keys they store cannot be extracted by physical attacks. That's at least if you're under a certain budget. These key-wrapping algorithms do not take an additional nonce or IV, and randomize their encryption based on what they are encrypting. Consequently, they do not have to store an additional nonce or IV next to the ciphertexts.

AES-GCM-SIV and nonce-misuse resistant authenticated encryption

In 2006, Rogaway published a new key-wrapping algorithm called Synthetic initialization vector (SIV), as part of Deterministic Authenticated-Encryption: A Provable-Security Treatment of the Key-Wrap Problem. In the white paper, Rogaway notes that the algorithm is not only useful for encrypting keys, but also works as a general AEAD algorithm that is resistant to nonce repetitions. As you probably know, a repeating nonce in AES-GCM or Chacha20-Poly1305 has catastrophic consequences. It not only reveals the XOR of the two plaintexts, but it also allows an attacker to recover an authentication key and to forge more messages. In Nonce-Disrespecting Adversaries: Practical Forgery Attacks on GCM in TLS, a group of researchers found 184 HTTPS servers guilty of reusing nonces. (I even wrote here about their super-cool live demo.) The point of a nonce-misuse resistant algorithm is that encrypting two plaintexts with the same nonce will only reveal if the two plaintexts are equal or not. It is sometimes hard to obtain good randomness on constrained devices, and mistakes can be made. In this case, nonce-misuse resistant algorithms solve real problems. In the rest of this section, I describe the scheme standardized by Google in RFC 8452, AES-GCM-SIV: Nonce Misuse-Resistant Authenticated Encryption.

The idea of AES-GCM-SIV is to generate the encryption and authentication keys separately via a main key every time a message has to be encrypted (or decrypted). This is done by producing a keystream long enough with AES-CTR, the main key and a nonce:

The main key of AES-GCM-SIV is used solely with AES-CTR to derive the encryption key K and the authentication key H.

Notice that if the same nonce is used to encrypt two different messages, the same keys will be derived here. Next, AES-GCM-SIV authenticates the plaintext, instead of the ciphertexts as we have seen in the previous schemes. This creates an authentication tag over the associated data and the plaintext (and their respective lengths). Instead of GMAC, AES-GCM-SIV defines a new MAC called Polyval. It is quite similar and only attempts to optimize some of GMAC's operations.

The Polyval function is used to hash the plaintext and the associated data.
It is then encrypted with the encryption key K to produce an authentication tag. Importantly, notice that if the same nonce is reused, two different messages will of course produce two different tags. This is important because in AES-GCM-SIV, the tag is then used as a nonce to AES-CTR in order to encrypt the plaintext.

AES-GCM-SIV uses the authentication tag (created with Polyval over the plaintext and the associated data) as a nonce for AES-CTR to encrypt the plaintext.

This is the trick behind SIV: the nonce used to encrypt in the AEAD is generated from the plaintext itself, which makes it highly unlikely that two different plaintexts will end up being encrypted under the same nonce. To decrypt, the same process is done in reverse: AES-GCM-SIV decrypts a ciphertext by using the authentication tag as a nonce for AES-CTR. The plaintext recovered is then used along with the associated data to validate the authentication tag. Both tags need to be compared (in constant-time) before releasing the plaintext to the application.

As the authentication tag is computed over the plaintext, the ciphertext must first be decrypted (using the tag as an effective nonce). Then, the plaintext must be validated against the authentication tag.

Because of that, one must realize two things:
- The plaintext is released by the algorithm before it can be authenticated. For this reason it is extremely important that nothing is done with the plaintext until it is actually deemed valid by the authentication tag verification.
- Since the algorithm works by decrypting the ciphertext (respectively encrypting the plaintext) as well as authenticating the plaintext, we say that it is two-pass -- it must go over the data twice. Because of this, it is usually slower than its counterpart AES-GCM.

Signature forgeries in Golang ECDSA library?
posted March 2021

Take a look at the following program that you can run in Golang's playground.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// key pair setup (the original post's key generation is not shown in this excerpt; any P-256 key will do)
	privateKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	// sign a message
	hash, _ := hex.DecodeString("ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632552")
	r, s, err := ecdsa.Sign(rand.Reader, privateKey, hash[:])
	if err != nil {
		panic(err)
	}

	// print the signature
	signature := r.Bytes()
	signature = append(signature, s.Bytes()...)
	fmt.Println("signature:", hex.EncodeToString(signature))

	// verify the signature
	if !ecdsa.Verify(&privateKey.PublicKey, hash[:], r, s) {
		panic("wrong signature")
	}
	fmt.Println("signature valid for", hex.EncodeToString(hash[:]))

	// I modify the message, this should invalidate the signature
	var hash2 [32]byte
	hash2[31] = 1
	if !ecdsa.Verify(&privateKey.PublicKey, hash2[:], r, s) {
		panic("wrong signature")
	}
	fmt.Println("signature valid for", hex.EncodeToString(hash2[:]))
}

this should print out:

signature: 4f3e60dc53ab470d23e82567909f01557f01d521a0b2ae96a111d107741d8ebb885332d790f0691bdc900661bf40c595a07750fa21946ed6b88c61c43fbfc1f3
signature valid for ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632552
signature valid for 0000000000000000000000000000000000000000000000000000000000000001

Can you tell what's the problem? Is ECDSA broken? Is Golang's standard library broken? Is everything fine?

The Let's Encrypt duplicate signature key selection attack
posted March 2021

On August 11th, 2015, Andrew Ayer sent an email to the IETF mailing list starting with the following words:

I recently reviewed draft-barnes-acme-04 and found vulnerabilities in the DNS, DVSNI, and Simple HTTP challenges that would allow an attacker to fraudulently complete these challenges.
The draft-barnes-acme-04 mentioned by Andrew Ayer is a document specifying ACME, one of the protocols behind the Let's Encrypt certificate authority. A certificate authority is the thing that your browser trusts and that signs the public keys of websites you visit. It is called a "certificate" authority due to the fact that it does not sign public keys, but certificates. A certificate is just a blob of data bundling a website's public key, its domain name, and some other relevant metadata. The attack was found merely 6 weeks before major browsers were supposed to start trusting Let's Encrypt's public key. The draft has since become RFC 8555: Automatic Certificate Management Environment (ACME), mitigating the issues. Since then no cryptographic attacks are known on the protocol. This blog post will go over the accident, and explain why it happened, why it was a surprising bug, and what you should watch for when using signatures in cryptography.

How Let's Encrypt used signatures

Let's Encrypt is a pretty big deal. Created in 2014, it is a certificate authority run as a nonprofit, providing trust to hundreds of millions of websites. The key to Let's Encrypt's success is twofold:
- It is free. Before Let's Encrypt, most certificate authorities charged fees to webmasters who wanted to obtain certificates.
- It is automated. If you follow their standardized protocol, you can request, renew and even revoke certificates via a web interface. Contrast that with other certificate authorities, which did most processing manually and took time to issue certificates.

If a webmaster wants her website example.com to provide a secure connection to her users (via HTTPS), she can request a certificate from Let's Encrypt (essentially a signature over its domain name and public key), and after proving that she owns the domain example.com and getting her certificate issued, she will be able to use it to negotiate a secure connection with any browser trusting Let's Encrypt. That's the theory. In practice the flow goes like this:
1. Alice registers on Let's Encrypt with an RSA public key.
2. Alice asks Let's Encrypt for a certificate for example.com.
3. Let's Encrypt asks Alice to prove that she owns example.com; for this she has to sign some data and upload it to example.com/.well-known/acme-challenge/some_file.
4. Once Alice has signed and uploaded the signature, she asks Let's Encrypt to go check it.
5. Let's Encrypt checks if it can access the file on example.com; if it successfully downloads the signature and the signature is valid, then Let's Encrypt issues a certificate to Alice.

In 2015, Alice could request a signed certificate from Let's Encrypt by uploading a signature (from the key she registered with) on her domain. The certificate authority verifies that Alice owns the domain by downloading the signature from the domain and verifying it. If it is valid, the authority signs a certificate (which contains the domain's public key, the domain name example.com, and some other metadata) and sends it to Alice, who can then use it to secure her website in a protocol called TLS. Let's see next how the attack worked.

How did the Let's Encrypt attack work?
In the attack that Andrew Ayer found in 2015, Andrew proposes a way to gain control of a Let's Encrypt account that has already validated a domain (let's pick example.com as an example). The attack goes something like this (keep in mind that I'm simplifying):
- Alice registers and goes through the process of verifying her domain example.com by uploading some signature over some data on example.com/.well-known/acme-challenge/some_file. She then successfully manages to obtain a certificate from Let's Encrypt.
- Later, Eve signs up to Let's Encrypt with a new account and an RSA public key, and requests to recover the example.com domain.
- Let's Encrypt asks Eve to sign some new data, and upload it to example.com/.well-known/acme-challenge/some_file (note that the file is still lingering there from Alice's previous domain validation).
- Eve crafts a new malicious keypair, and updates her public key on Let's Encrypt. She then asks Let's Encrypt to check the signature.
- Let's Encrypt obtains the signature file from example.com; the signature matches, so Eve is granted ownership of the domain example.com. She can then ask Let's Encrypt to issue valid certificates for this domain and any public key.

The 2015 Let's Encrypt attack allowed an attacker (here Eve) to successfully recover an already approved account on the certificate authority. To do this, she simply forges a new keypair that can validate the already existing signature and data from the previous valid flow.

Take a few minutes to understand the attack. It should be quite surprising to you. Next, let's see how Eve could craft a new keypair that worked like the original one did.

Key substitution attacks on RSA

In the previously discussed attack, Eve managed to create a valid public key that validates a given signature and message. This is quite a surprising property of RSA, so let's see how this works.

A digital signature does not uniquely identify a key or a message. -- Andrew Ayer, Duplicate Signature Key Selection Attack in Let's Encrypt (2015)

Here is the problem given to the attacker: for a fixed signature and (PKCS#1 v1.5 padded) message, a public key $(e, N)$ must satisfy the following equation to validate the signature: $$signature = message^e \pmod{N}$$ One can easily craft a key pair that will (most of the time) satisfy the equation:
- a public exponent $e = 1$
- a private exponent $d = 1$
- a public modulus $N = \text{signature} - \text{message}$

You can easily verify that the validation works with this keypair: $$\begin{align} &\text{signature} = \text{message}^e \mod{N} \\ \iff &\text{signature} = \text{message} \mod{\text{signature} - \text{message}} \\ \iff &\text{signature} - \text{message} = 0 \mod{\text{signature} - \text{message}} \end{align}$$

Is this issue surprising? It should be. This property, called "key substitution", comes from the fact that there exists a gap between the theoretical cryptography world and the applied cryptography world, between the security proofs and the implemented protocols. Signatures in cryptography are usually analyzed with the EUF-CMA model, which stands for Existential Unforgeability under Adaptive Chosen Message Attack. In this model YOU generate a key pair, and then I request YOU to sign a number of arbitrary messages. While I observe the signatures you produce, I win if I can at some point in time produce a valid signature over a message I hadn't requested.
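To make the key-substitution trick above concrete, here is a small Go sketch (my own addition, not code from the original post) with made-up toy numbers standing in for the padded message and the existing signature. With e = 1 the verification equation reduces to signature and message being congruent modulo N = signature - message, which holds by construction; as noted, this only works "most of the time", and the toy values below are chosen so that it does.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// toy stand-ins for a fixed (padded) message and an existing signature
	message := big.NewInt(1234)
	signature := big.NewInt(987654321)

	// craft the malicious key: e = 1, d = 1, N = signature - message
	e := big.NewInt(1)
	N := new(big.Int).Sub(signature, message)

	// the verifier computes signature^e mod N and compares it to the message
	lhs := new(big.Int).Exp(signature, e, N) // signature^1 mod N
	rhs := new(big.Int).Mod(message, N)      // message mod N
	fmt.Println("signature^e mod N =", lhs)  // 1234
	fmt.Println("message mod N     =", rhs)  // 1234
	fmt.Println("crafted key accepts the pair:", lhs.Cmp(rhs) == 0)
}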
Unfortunately, even though our modern signature schemes seem to pass the EUF-CMA test fine, they tend to exhibit some surprising properties like the key substitution one. To learn more about key substitution attacks and other signature shenanigans, take a look at my book Real-World Cryptography.

A flamegraph of Real-World Cryptography
posted March 2021

I've now spent 2 years writing my introduction to applied cryptography: Real-World Cryptography, which you can already read online here. (If you're wondering why I'm writing another book on cryptography, check this post.) I've written all the chapters, but there's still a lot of work to be done to make sure that it's good (collecting feedback), that it's consistent (unification of diagrams, of style, etc.), and that it's well structured. For the latter point, I thought I would leverage the fact that I'm an engineer and use a tool that's commonly used to measure performance: a flamegraph! It looks like this, and you can click around to zoom on different chapters and sections:

The bottom layer shows all the chapters in order, and the width of the boxes shows how lengthy they are. The more you go up, the more you "nest" yourself into a section. For example, clicking on chapter 9: Secure transport, you can see that it is composed of several sections with the longest being "How does TLS work", which itself is composed of several subsections with the longest being "The TLS handshake". Using this flamegraph, I can now analyze how consistent the book is. The good news is that the chapters all seem pretty evenly distributed, with the exception of the shorter chapters 3 (MACs), 6 (asymmetric encryption), and 16 (final remarks). This is also expected, as these chapters are much more straightforward than the rest of the book.

Too lengthy

Looks like the bigger chapters are, in order: post-quantum crypto, authenticated encryption, hardware cryptography, user authentication, secure transport. This is not great, as post-quantum crypto is supposed to be a chapter for the curious people who get to the end of the book, not a chapter to make the book bigger... The other chapters are also unnecessarily long. My goal is going to be to reduce these chapters' length in the coming weeks.

Too nested

This flamegraph is also useful to quickly see if there are sections that are way too nested. For example, Chapter 9 on secure transport has a lot of mini sections on TLS. Also, look at some of the sections in chapter 5: Key exchanges > Key exchange standards > ECDH > ECDH standard. That's too much.

Not nested enough

Some chapters have almost no nested sections at all. For example, chapter 8 (randomness) and 16 (conclusion) are just successions of depth-1 sections. Is this a bad thing? Not necessarily, but if a section becomes too large it makes sense to either split it into several sections, or have subsections in it. I've noticed, for example, that the first section of chapter 3 on MACs, titled "What is a MAC?", is quite long and doesn't have subsections. (Same for section 6.2, asymmetric encryption in practice, and section 8.2, what is a PRNG.) I also managed to spot some errors in nested sections by doing this! So that was pretty cool as well :)

EDIT: If you're interested in doing something like this with your own project, I published the script here.

''This destroyes the RSA cryptosystem''
posted March 2021

Schnorr just released a new paper Fast Factoring Integers by SVP Algorithms with the words "This destroyes the RSA cryptosystem." (spelling included) in the abstract.
What does this really mean? The paper is honestly quite dense to read and there's no conclusion in there.

UPDATE: Several people have pointed out that the "This destroyes the RSA cryptosystem" sentence is not present in the paper itself; that is, until the paper was updated to include the sentence without the typo.

UPDATE: There was some discussion about a potential fake, but several people in the industry are confirming that this is from Schnorr himself:

UPDATE: Sweis is calling for a proof of concept: According to the claims in Schnorr's paper, it should be practical to set significant new factoring records. There is a convenient 862-bit RSA challenge that has not been factored yet. Posting its factors, as done for the CADO-NFS team's records, would lend credence to Schnorr's paper and encourage more review of the methodology.

UPDATE: Léo Ducas has been trying to implement the claim, without success.

UPDATE: Geoffroy Couteau thinks the claim is wrong: several top experts on SVP and CVP algorithms have looked at the paper and concluded that it is incorrect (I cannot provide names, since it was in the context of anonymous reviews).

UPDATE: Daniel Shiu pointed out an error in the paper.

UPDATE: Pedro Fortuny Ayuso is very skeptical of the claim. Will he end up eating his shirt? Schnorr is 78 years old. I am not gerontophobic (being 50 I am approaching that age) but: Atiyah claimed the Riemann Hypothesis, Hironaka has claimed full resolution of singularities in any characteristic... And I am speaking of Fields medalists. So: you really do need peer review for strong arguments.

I was on the Technoculture podcast
posted February 2021

Hey reader! I was on the Technoculture podcast (or videocast?) to talk about cryptography in general. The host Federica Bressan is releasing excerpts bit by bit. You can watch the first part (Theoretical vs. Real-World Cryptography) here:

And here's the rest that I will update as they get posted:
2/5: Real-World Cryptography & cryptocurrencies
3/5: Real-World Cryptography & applications
4/5: Real-World Cryptography & usable security
5/5: Real-World Cryptography & COVID-19

I'm on the develomentor podcast to talk about what applied cryptography is!
posted January 2021

I had a lot of fun talking about applied cryptography on the develomentor podcast a few months ago and the episode just came out today! It also looks like you can get a free copy of my book by listening to it :)

https://develomentor.com/2021/01/07/david-wong-what-is-applied-cryptography-121/
CommonCrawl
JFM JFM can refer to: • Jack FM, a radio network brand • Justice for Myanmar, a Burmese activist group • Joseph Menna, an American sculptor-engraver who sculpted the Union Shield design used on the reverse of the 2010 Lincoln Cent • Journal of Fluid Mechanics, a scientific journal in the field of fluid mechanics • Jahrbuch über die Fortschritte der Mathematik, a project which has been incorporated into Zentralblatt MATH • JFM committee, joint forest management committee
Wikipedia
What is the value of $$ (3x-2)(4x+1)-(3x-2)4x+1 $$ when $x=4$? Since \begin{align*} (3x-2)(4x+1)-(3x-2)4x+1 &=(3x-2)(4x+1-4x)+1 \\ &=(3x-2) \cdot 1 +1 =3x-1, \end{align*} when $x=4$ we have the value $3 \cdot 4 -1 =\boxed{11}$.
Math Dataset
\begin{document}
\title{Some notions of subharmonicity over the quaternions}
\begin{abstract}
This work introduces several notions of subharmonicity for real-valued functions of one quaternionic variable. These notions are related to the theory of slice regular quaternionic functions introduced by Gentili and Struppa in 2006. The interesting properties of these new classes of functions are studied and applied to construct the analogs of Green's functions.
\end{abstract}
{\small \noindent{\bf Acknowledgements.} This work stems from a question posed by Filippo Bracci at Universit\`a di Roma ``Tor Vergata'', where the author was the recipient of the ``Michele Cuozzo'' prize. The author warmly thanks the Cuozzo family, Universit\`a di Roma ``Tor Vergata'' and Filippo Bracci for the remarkable research opportunity.\\ The author is partly supported by GNSAGA of INdAM and by Finanziamento Premiale FOE 2014 ``Splines for accUrate NumeRics: adaptIve models for Simulation Environments'' of MIUR.}
\section{Introduction}
Let ${\mathbb{H}} = {\mathbb{R}}+i{\mathbb{R}}+j{\mathbb{R}}+k{\mathbb{R}}$ denote the real algebra of quaternions and let \[{\mathbb{S}} := \{q \in {\mathbb{H}} : q^2=-1\} = \{\alpha i+\beta j+\gamma k : \alpha^2+\beta^2+\gamma^2=1\}\] denote the $2$-sphere of quaternionic imaginary units. For each $I \in{\mathbb{S}}$, the subalgebra $L_I={\mathbb{R}}+I {\mathbb{R}}$ generated by $1$ and $I$ is isomorphic to ${\mathbb{C}}$. In recent years, this elementary fact has been the basis for the introduction of a theory of quaternionic functions.
\begin{definition}[\cite{advances}]\label{regularfunction}
Let $f$ be a quaternion-valued function defined on a domain $\Omega$. For each $I \in {\mathbb{S}}$, let $\Omega_I = \Omega \cap L_I$ and let $f_I = f_{|_{\Omega_I}}$ be the restriction of $f$ to $\Omega_I$. The restriction $f_I$ is called \emph{holomorphic} if it has continuous partial derivatives and \begin{equation} \frac{1}{2} \left( \frac{\partial}{\partial x} + I \frac{\partial}{\partial y} \right) f_I(x+yI) \equiv 0. \end{equation} The function $f$ is called \emph{(slice) regular} if, for all $I \in {\mathbb{S}}$, $f_I$ is holomorphic.
\end{definition}
The study of regular quaternionic functions has then grown into a full theory, described in the monograph~\cite{librospringer}. It resembles the theory of holomorphic complex functions, but in a many-sided way that reflects the richness of the non-commutative setting. In the present work, we consider several notions of subharmonicity related to the class of regular quaternionic functions. In contrast with the work~\cite{pharmonicity}, which studies the relation between quaternion-valued (or Clifford-valued) regular functions and real harmonicity, we consider real-valued functions of a quaternionic variable and look for new notions of subharmonicity compatible with composition with regular functions. The first attempt is \emph{${\mathbb{J}}$-plurisubharmonicity}. However, this property is quite restrictive, besides being preserved by composition with a regular function $f$ only if $f$ is \emph{slice preserving}, that is, if $f(\Omega_I)\subseteq L_I$ for all $I\in{\mathbb{S}}$. For this reason, the alternative notions of \emph{weakly subharmonic} and \emph{strongly subharmonic} function are introduced. Composition with regular functions turns out to map strongly subharmonic functions into weakly subharmonic ones. Moreover, composition with slice preserving regular functions is proven to preserve weak subharmonicity.
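As a simple guiding example: every polynomial $q \mapsto \sum_{n=0}^{d} q^n a_n$ with quaternionic coefficients on the right is regular, and when all the coefficients $a_n$ are real, as for $q \mapsto q^2-1$, it is moreover slice preserving, since each slice $L_I$ is a real subalgebra of ${\mathbb{H}}$; on the other hand, $q \mapsto qi$ is regular but not slice preserving.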
These new notions of subharmonicity turn out to have many nice properties that recall the complex and pluricomplex cases, including mean-value properties and versions of the maximum modulus principle. These results are finally applied to construct quaternionic analogs of Green's functions, which reveal many peculiarities due to the non-commutative setting. \section{Prerequisites}\label{sectiondifferential} Let us recall a few properties of the algebra of quaternions ${\mathbb{H}}$, on which we consider the standard Euclidean metric and topology. \begin{itemize} \item For each $I \in{\mathbb{S}}$, the couple $1,I$ can be completed to a (positively oriented) orthonormal basis $1,I,J,K$ by choosing $J \in {\mathbb{S}}$ with $I \perp J$ and setting $K = IJ$. \item The coordinates of any $q\in {\mathbb{H}}$ with respect to such a basis can be recovered as \begin{align*} x_0(q)&=\frac{1}{4}(q-IqI-J qJ-KqK)\\ x_1(q)&=\frac{1}{4I}(q-I qI+JqJ+KqK)\\ x_2(q)&=\frac{1}{4J}(q+I qI-J qJ+KqK)\\ x_3(q)&=\frac{1}{4K}(q+IqI+JqJ-KqK). \end{align*} \item Mapping each $v \in T_{q_0}{\mathbb{H}} \cong {\mathbb{H}}$ to $Iv$ for all $q_0 \in {\mathbb{H}}$ defines an (orthogonal) complex structure on ${\mathbb{H}}$, called \emph{constant}. A biholomorphism between $({\mathbb{H}},I)$ and $(L_I^2,I)\cong({\mathbb{C}}^2,i)$ can be constructed by mapping each $q$ to $(z_1(q),z_2(q))$, where \begin{align*} z_1(q)&=x_0(q)+Ix_1(q) = (q-IqI) \frac{1}{2},\\ z_2(q)&=x_2(q)+Ix_3(q) = (q+IqI) \frac{1}{2J}, \end{align*} are such that $z_1(q)+z_2(q)J = q$. Both $z_1$ and $z_2$ depend on the choice of $I$; $z_2$ also depends on $J$, but only up to a multiplicative constant $c\in L_I$. \end{itemize} For every domain $\Omega$ and every function $f : \Omega \to {\mathbb{H}}$, let us denote by $f=f_1+f_2 J$ the corresponding decomposition with $f_1,f_2$ ranging in $L_I$. Furthermore $\partial_1,\partial_2, \bar \partial_1, \bar \partial_2:C^1(\Omega,L_I)\to C^0(\Omega,L_I)$ will denote the corresponding complex derivatives. In other words, \begin{align*} \partial_1 &=\frac{1}{2}\left(\frac{\partial}{\partial x_0} - I \frac{\partial}{\partial x_1}\right)\\ \bar \partial_1 &=\frac{1}{2}\left(\frac{\partial}{\partial x_0} + I \frac{\partial}{\partial x_1}\right)\\ \partial_2 &=\frac{1}{2}\left(\frac{\partial}{\partial x_2} - I \frac{\partial}{\partial x_3}\right)\\ \bar \partial_2 &=\frac{1}{2}\left(\frac{\partial}{\partial x_2} + I \frac{\partial}{\partial x_3}\right). \end{align*} We notice that these derivatives commute with each other, and that $\partial_1,\bar \partial_1$ depend only on $I$, while $\partial_2, \bar \partial_2$ depend on both $I$ and $J$. The definition of regular function (Definition~\ref{regularfunction}) amounts to requiring that the restriction to $\Omega_I$ be holomorphic from $(\Omega_I, I)$ to $({\mathbb{H}},I)$ for all $I \in {\mathbb{S}}$. Curiously, if the domain is carefully chosen then a stronger property holds. \begin{definition} Let $\Omega$ be a domain in ${\mathbb{H}}$. $\Omega$ is a \emph{slice domain} if it intersects the real axis ${\mathbb{R}}$ and if, for all $I \in {\mathbb{S}}$, the intersection $\Omega_I$ with the complex plane $L_I$ is a connected. Moreover, $\Omega$ is termed \emph{symmetric} if it is axially symmetric with respect to the real axis ${\mathbb{R}}$. 
\end{definition} If we denote by $\partial_cf$ the \emph{slice derivative} \begin{equation*} \partial_cf(x+Iy) = \frac{1}{2} \left( \frac{\partial}{\partial x} - I \frac{\partial}{\partial y} \right) f_I(x+yI) \end{equation*} introduced in \cite{advances} and by $\partial_sf$ the \emph{spherical derivative} \begin{equation*} \partial_sf(q) =(q - \bar q)^{-1} \left(f(q) -f(\bar q)\right) \end{equation*} introduced in \cite{perotti}, then the aforementioned property can be stated as follows. \begin{theorem}[\cite{expansion}] Let $\Omega$ be a symmetric slice domain, let $f : \Omega \to {\mathbb{H}}$ be a regular function and let $q_0 \in \Omega$. Chosen $I,J \in {\mathbb{S}}$ so that $q_0 \in L_I$ and $I \perp J$, let $z_1,z_2,\bar z_1, \bar z_2$ be the induced coordinates and let $\partial_1,\partial_2,\bar \partial_1, \bar \partial_2$ be the corresponding derivations. Then \begin{equation}\label{complexholomorphy} \left.\left( \begin{array}{cc} \bar\partial_1 f_1 & \bar \partial_2 f_1\\ \bar \partial_1 f_2 & \bar \partial_2 f_2 \end{array}\right)\right|_{q_0} = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array}\right). \end{equation} Furthermore, if $q_0 \not \in {\mathbb{R}}$ then \begin{equation}\label{complexjacobian} \left.\left( \begin{array}{cc} \partial_1 f_1 & \partial_2 f_1 \\ \partial_1 f_2 & \partial_2 f_2 \end{array}\right)\right|_{q_0} = \left( \begin{array}{cr} \partial_cf_1(q_0) & - \overline{\partial_sf_2(q_0)} \\ \partial_cf_2(q_0) & \overline{\partial_sf_1(q_0)} \end{array}\right). \end{equation} If, on the contrary, $q_0 \in {\mathbb{R}}$ then \begin{equation} \left.\left( \begin{array}{cc} \partial_1 f_1 & \partial_2 f_1 \\ \partial_1 f_2 & \partial_2 f_2 \end{array}\right)\right|_{q_0} = \left( \begin{array}{cr} \partial_cf_1(q_0) & - \overline{\partial_cf_2(q_0)} \\ \partial_cf_2(q_0) & \overline{\partial_cf_1(q_0)} \end{array}\right). \end{equation} \end{theorem} We point out that we have not proven that $f$ is holomorphic with respect to the constant structure $I$: with respect to the basis $1,I,J, IJ$, equality \eqref{complexholomorphy} only holds at those points $q_0$ that lie in $L_I$. In fact, regularity is related to a different notion of holomorphy, which involves non-constant orthogonal complex structures. Let us recall the notations $Re(q) = x_0(q), Im(q) = q-Re(q)$ for $q \in {\mathbb{H}}$ and let us set \begin{equation*} {\mathbb{J}}_{q_0}v := \frac{Im(q_0)}{|Im(q_0)|} v\qquad \forall\ v \in T_{q_0}({\mathbb{H}}\setminus{\mathbb{R}}) \cong {\mathbb{H}}. \end{equation*} Then $\pm{\mathbb{J}}$ are orthogonal complex structures on \begin{equation*} {\mathbb{H}}\setminus{\mathbb{R}} = \bigcup_{I \in {\mathbb{S}}} ({\mathbb{R}}+I{\mathbb{R}}^+) \end{equation*} and they are induced by the natural identification with the complex manifold ${\mathbb{CP}}^1 \times ({\mathbb{R}}+i{\mathbb{R}}^+)$. \begin{theorem}[\cite{ocs}]\label{ocs} Let $\Omega$ be a symmetric slice domain, and let $f: \Omega \to {\mathbb{H}}$ be an injective regular function. Then the real differential of $f$ is invertible at each $q \in \Omega$ and the push-forward of ${\mathbb{J}}$ via $f$, that is, \begin{equation} {\mathbb{J}}^f_{f(q)}v = \frac{Im(q)}{|Im(q)|} v \qquad \forall\ v \in T_{f(q)} f(\Omega\setminus {\mathbb{R}}) \cong {\mathbb{H}}\,, \end{equation} is an orthogonal complex structure on $f(\Omega \setminus {\mathbb{R}})$. 
\end{theorem} In the hypotheses of the previous theorem, $f$ is (obviously) a holomorphic map from $(\Omega \setminus {\mathbb{R}}, {\mathbb{J}})$ to $\left(f(\Omega \setminus {\mathbb{R}}), {\mathbb{J}}^f\right)$. Furthermore, there is a special class of regular functions such that ${\mathbb{J}}^f = {\mathbb{J}}$. \begin{remark}\label{J-holomorphic} Let $f: \Omega \to {\mathbb{H}}$ be a \emph{slice preserving} regular function, namely a regular function such that $f(\Omega_I)\subseteq L_I$ for all $I \in {\mathbb{S}}$. Then $f(\Omega \setminus {\mathbb{R}}) = f(\Omega) \setminus {\mathbb{R}}$ and $f$ is a holomorphic map from $(\Omega \setminus {\mathbb{R}}, {\mathbb{J}})$ to $\left( f(\Omega) \setminus {\mathbb{R}}, {\mathbb{J}}\right)$ \end{remark} \section{Quaternionic notions of subharmonicity}\label{sectionsubharmonicity} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let \begin{equation*} \mathsf{us}(\Omega) = \{u: \Omega \to [-\infty, +\infty), u \mathrm{\ upper\ semicontinuous,\ } u \not \equiv -\infty\}. \end{equation*} For $u \in \mathsf{us}(\Omega)$, we aim at defining some notion of subharmonicity that behaves well when we compose $u$ with a regular function. Remark \ref{J-holomorphic} encourages us to consider \emph{${\mathbb{J}}$-plurisubharmonic} and \emph{${\mathbb{J}}$-harmonic} functions, i.e., functions that are pluri(sub)harmonic with respect to the complex structure ${\mathbb{J}}$. However, the notion of ${\mathbb{J}}$-plurisubharmonicity on a symmetric slice domain $\Omega$ is induced by plurisubharmonicity in ${\mathbb{CP}}^1 \times D_{\Omega}$ with \begin{equation*} D_{\Omega}=\{x+iy \in {\mathbb{R}}+i{\mathbb{R}}^+ : x+y{\mathbb{S}} \subset\Omega\}, \end{equation*} which amounts to constance in the first variable and subharmonicity in the second variable. We conclude: \begin{proposition} Let $\Omega$ be a symmetric domain in ${\mathbb{H}}$. A function $u \in \mathsf{us}(\Omega)$ is ${\mathbb{J}}$-plurisubharmonic in $\Omega \setminus {\mathbb{R}}$ if, and only if, there exists a subharmonic function $\upsilon : D_{\Omega} \to {\mathbb{R}}$ such that $u(x+Iy) = \upsilon(x+iy)$ for all $I \in {\mathbb{S}}$ and for all $x+iy \in D_{\Omega}$. In particular, a function $u \in C^2(\Omega,{\mathbb{R}})$ is ${\mathbb{J}}$-plurisubharmonic in $\Omega \setminus {\mathbb{R}}$ if, and only if, $u(x+Iy)$ does not depend on $I$ and \begin{equation}\label{weaksubharmonicity} \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u(x+Iy) \geq 0. \end{equation} \end{proposition} Analogous observations can be made for ${\mathbb{J}}$-harmonicity. In order to get a richer class of functions, we need to suitably weaken the notion of subharmonicity considered. We are thus encouraged to give the following definition. \begin{definition} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. We call $u$ \emph{weakly subharmonic} if for all $I \in {\mathbb{S}}$ the restriction $u_I=u_{|_{\Omega_I}}$ is subharmonic (after the natural identification between $L_I$ and ${\mathbb{C}}$). We say that $u$ is \emph{weakly harmonic} if, for all $I \in {\mathbb{S}}$, $u_I$ is harmonic. \end{definition} \begin{remark} A function $u \in C^2(\Omega,{\mathbb{R}})$ is weakly subharmonic if, and only if, for all $I \in {\mathbb{S}}$, the associated $\partial_1,\bar\partial_1$ are such that $\bar \partial_1 \partial_1u_I\geq 0$ in $\Omega_I$; that is, inequality \eqref{weaksubharmonicity} holds for all $I \in {\mathbb{S}}$. 
Furthermore, $u$ is weakly harmonic if, and only if, equality holds at all points. \end{remark} By construction: \begin{proposition} Let $\Omega$ be a symmetric domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. If $u$ is ${\mathbb{J}}$-pluri(sub)harmonic in $\Omega \setminus {\mathbb{R}}$ then it is weakly (sub)harmonic in $\Omega \setminus {\mathbb{R}}$. If, moreover, $u$ is continuous at all points of $\Omega\cap{\mathbb{R}}$ then $u$ is weakly (sub)harmonic in $\Omega$. \end{proposition} The converse implication is not true, as shown by the next example. Here, $\langle\cdot,\cdot\rangle$ denotes the Euclidean scalar product on $Im({\mathbb{H}})\cong{\mathbb{R}}^3$. \begin{example}\label{coordinates} All real affine functions $u: {\mathbb{H}} \to {\mathbb{R}}$ are weakly harmonic, including the coordinates $x_0,x_1,x_2,x_3$ with respect to any basis $1,I,J,IJ$ with $I,J \in {\mathbb{S}}, I\perp J$. On the other hand, $x_1,x_2,x_3$ are not ${\mathbb{J}}$-plurisubharmonic in ${\mathbb{H}} \setminus {\mathbb{R}}$, as $x_1(x+Iy) = y \langle I,i \rangle, x_2(x+Iy) = y \langle I,j \rangle, x_3(x+Iy) = y \langle I,k \rangle$ are not constant in $I$. \end{example} Actually, a stronger property holds for real affine functions $u: {\mathbb{H}} \to {\mathbb{R}}$: they are pluriharmonic with respect to any constant orthogonal complex structure. This motivates the next definition. \begin{definition} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. We say that $u$ is \emph{strongly (sub)harmonic} if it is pluri(sub)harmonic with respect to every constant orthogonal complex structure on $\Omega$. \end{definition} \begin{remark} A function $u \in C^2(\Omega,{\mathbb{R}})$ is strongly subharmonic if, for all $I,J \in {\mathbb{S}}$ with $I \perp J$, \begin{equation} H_{I,J}(u)= \left( \begin{array}{cc} \bar\partial_1\partial_1 u & \bar\partial_1\partial_2 u \\ \bar\partial_2\partial_1 u & \bar\partial_2\partial_2 u \end{array}\right) \end{equation} is a positive semidefinite matrix at each $q \in \Omega$. The function $u$ is strongly harmonic if for all $I,J \in {\mathbb{S}}$ with $I \perp J$ the matrix $H_{I,J}(u)$ has constant rank $0$. \end{remark} Clearly, if the matrix $H_{I,J}(u)$ is positive semidefinite then its $(1,1)$-entry $\bar \partial_1 \partial_1u$ is non-negative. Similarly, if $H_{I,J}(u)$ has constant rank $0$ then $\bar \partial_1 \partial_1u\equiv 0$. This leads to the next result, which, however, is not only true for $u \in C^2(\Omega,{\mathbb{R}})$ but also for $u \in \mathsf{us}(\Omega)$. \begin{proposition} Let $u \in \mathsf{us}(\Omega)$. If $u$ is strongly subharmonic then it is weakly subharmonic. \end{proposition} \begin{proof} Let $u \in \mathsf{us}(\Omega)$ be strongly subharmonic, let $I\in{\mathbb{S}}$ and let us prove that $u_I$ is subharmonic. By construction, $u$ is plurisubharmonic with respect to the constant orthogonal complex structure $I$. Moreover, the inclusion map $incl:\Omega_I\to\Omega$ is a holomorphic map from $(\Omega_I,I)$ to $(\Omega,I)$. As a consequence, $u_I=u\circ incl$ is subharmonic, as desired. \end{proof} Example \ref{coordinates} shows that strong (sub)harmonicity does not imply ${\mathbb{J}}$-pluri(sub)harmonicity.
The converse implication does not hold, either: \begin{example}\label{x^2-y^2} The function $u(q) = Re(q^2)$ is ${\mathbb{J}}$-pluriharmonic in ${\mathbb{H}}\setminus{\mathbb{R}}$, as $u(x+Iy) = x^2-y^2$ does not depend on $I$ and $\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) u(x+Iy) \equiv 0$. On the other hand, $u$ is not strongly subharmonic. Actually, it is not plurisubharmonic with respect to any constant orthogonal complex structure $I$: after choosing $J \in {\mathbb{S}}$ with $J \perp I$, we compute \begin{equation*} u(z_1+z_2J) = Re\left(z_1^2-z_2\bar z_2 +(z_1z_2+z_2\bar z_1)J\right) = \frac{z_1^2+\bar z_1^2}{2}-z_2\bar z_2 \end{equation*} for all $z_1,z_2 \in L_I$, so that $H_{I,J}(u)\equiv \left( \begin{array}{cc} 0 & 0\\ 0 & -1 \end{array}\right)$. \end{example} We have proven the following implications (none of which can be reversed): \begin{equation*} \begin{array}{rlcrl} &&\mathrm{plurisubharmonic\ }&&\\ &&\mathrm{w.r.t.\ all\ OCS's\ in\ }\Omega \setminus {\mathbb{R}}&&\\ &\swarrow&&\searrow&\\ {\mathbb{J}}\mathrm{-plurisubharmonic}&&&&\mathrm{strongly\ subh.}\\ \mathrm{in\ } \Omega \setminus {\mathbb{R}}&&&&\mathrm{in\ } \Omega\setminus {\mathbb{R}}\\ &\searrow&&\swarrow&\\ &&\mathrm{weakly\ subharmonic\ in\ } \Omega\setminus {\mathbb{R}}&& \end{array} \end{equation*} A similar scheme can be drawn for the quaternionic notions of harmonicity. We show with a further example that a strongly subharmonic function is not necessarily strongly \emph{harmonic} when it is weakly harmonic, or even ${\mathbb{J}}$-pluriharmonic. \begin{example}\label{log} Consider the function $u : {\mathbb{H}} \to {\mathbb{R}}$ with $u(q):=\log|q|$ for all $q\in{\mathbb{H}}\setminus\{0\}$ and $u(0):=-\infty$. $u$ is ${\mathbb{J}}$-pluriharmonic in ${\mathbb{H}}\setminus{\mathbb{R}}$, as $u(x+Iy) = \frac12 \log(x^2+y^2)$. As a consequence, $u$ is also weakly harmonic. On the other hand, for any choice of $I,J \in {\mathbb{S}}$, the fact that $u(z_1+z_2J) = \frac{1}{2}\log(z_1\bar z_1 + z_2 \bar z_2)$ implies that \begin{align*} H_{I,J}(u)_{|_{z_1+z_2J}} &= \frac{1}{2 (z_1\bar z_1 + z_2 \bar z_2)^2} \left( \begin{array}{cc} z_2 \bar z_2 & -z_1 \bar z_2\\ -z_2\bar z_1 & z_1\bar z_1 \end{array}\right)\\ &= \frac{1}{2 (|z_1|^2 + |z_2|^2)^2} \left( \begin{array}{cc} |z_2|^2 & -z_1 \bar z_2\\ -z_2\bar z_1 & |z_1|^2 \end{array}\right)\,. \end{align*} Hence, $u$ is strongly subharmonic but it is not strongly harmonic. \end{example} Let us review a few classical constructions in our new environment. \begin{remark} On a given domain $\Omega$, let us denote by $\mathsf{wsh}(\Omega)$ the set of weakly subharmonic functions, by $\mathsf{ssh}(\Omega)$ that of strongly subharmonic functions, and by $\mathsf{psh}_{\mathbb{J}}(\Omega)$ that of ${\mathbb{J}}$-plurisubharmonic functions on $\Omega$ (if $\Omega$ equals a symmetric domain minus ${\mathbb{R}}$). 
If $S$ is any of these sets then: \begin{enumerate} \item $S$ is a convex cone; \item for all $u \in S$, if $\varphi$ is a real-valued $C^2$ function on a neighborhood of $u(\Omega)$ and if $\varphi$ is increasing and convex then $\varphi \circ u :\Omega \to {\mathbb{R}}$ also belongs to $S$; \item for all $u_1,u_2 \in S$, the function $u(q) = \max\{u_1(q),u_2(q)\}$ belongs to $S$; \item if $\{u_\alpha\}_{\alpha \in A}$ (with $A \neq \emptyset$) is a family in $S$, locally bounded from above, and if $u(q) = \sup_{\alpha \in A} u_\alpha(q)$ for all $q \in \Omega$ then the upper semicontinuous regularization $u^*$ belongs to $S$. \end{enumerate} \end{remark} \begin{example} For any $\alpha>0$, the function $u : {\mathbb{H}} \to {\mathbb{R}},\ q \mapsto |q|^\alpha$ is strongly subharmonic in ${\mathbb{H}}$ and it is ${\mathbb{J}}$-plurisubharmonic in ${\mathbb{H}} \setminus {\mathbb{R}}$. \end{example} \begin{example} The functions $Re^2(q) = x_0^2(q)$ and $|Im(q)|^2 = x_1^2(q)+x_2^2(q)+x_3^2(q)$ are strongly subharmonic in ${\mathbb{H}}$. (They are also ${\mathbb{J}}$-plurisubharmonic in ${\mathbb{H}}\setminus{\mathbb{R}}$, as $Re^2(x+Iy) = x^2$ and $|Im(x+Iy)|^2=y^2$). \end{example} We conclude this section by showing that any given subharmonic function on an axially symmetric planar domain extends to a weakly subharmonic function on the corresponding symmetric domain of ${\mathbb{H}}$. \begin{remark} If we start with a domain $D \subseteq {\mathbb{C}}$ that is symmetric with respect to the real axis and a (sub)harmonic function $\upsilon$ on $D$, we may define a weakly (sub)harmonic function $u$ on the symmetric domain $\Omega = \bigcup_{x+iy \in D} x+y{\mathbb{S}}$ by setting \begin{equation*} u(x+Iy) := \frac{1+\langle I,i \rangle}2 \upsilon(x+iy) + \frac{1-\langle I,i \rangle}2 \upsilon(x-iy) \end{equation*} for all $x \in {\mathbb{R}},I\in {\mathbb{S}}, y>0$ such that $x+Iy \in \Omega$ and $u(x):=\upsilon(x)$ for all $x\in\Omega\cap{\mathbb{R}}$. \end{remark} \section{Composition with regular functions}\label{sectioncomposition} We now want to understand the behavior of the different notions of subharmonicity we introduced, under composition with regular functions. For ${\mathbb{J}}$-pluri(sub)harmonicity, Remark \ref{J-holomorphic} immediately implies: \begin{proposition} Let $\Omega$ be a symmetric domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. $u$ is ${\mathbb{J}}$-pluri(sub)harmonic in $\Omega\setminus{\mathbb{R}}$ if, and only if, for every slice preserving regular function $f : \Omega'\setminus{\mathbb{R}} \to \Omega\setminus{\mathbb{R}}$, the composition $u \circ f$ is ${\mathbb{J}}$-pluri(sub)harmonic in $\Omega'\setminus{\mathbb{R}}$. \end{proposition} It is essential to restrict to slice preserving regular functions. If we compose $u$ with an injective regular function $f$ then the only sufficient condition we know in order for $u\circ f$ to be ${\mathbb{J}}$-pluri(sub)harmonic is that $u$ be ${\mathbb{J}}^f$-pluri(sub)harmonic (see Theorem \ref{ocs}). For weak (sub)harmonicity, we can prove the next result. \begin{theorem}\label{compositionweakregular} Let $u \in \mathsf{us}(\Omega)$. $u$ is weakly (sub)harmonic in $\Omega$ if, and only if, for every slice preserving regular function $f : \Omega' \to \Omega$, the composition $u \circ f$ is weakly (sub)harmonic in $\Omega'$. \end{theorem} \begin{proof} If $u \circ f$ is weakly (sub)harmonic for all slice preserving regular $f$ then in particular $u=u\circ id$ is weakly (sub)harmonic.
Conversely, let $u \in \mathsf{us}(\Omega)$ be weakly (sub)harmonic, let $f : \Omega' \to \Omega$ be a slice preserving regular function and let us prove that $u \circ f$ is weakly (sub)harmonic. For each $I\in{\mathbb{S}}$, $u_I$ is (sub)harmonic in $\Omega_I$ and the restriction $f_I$ is a holomorphic map from $(\Omega'_I,I)$ to $(\Omega_I,I)$. As a consequence, $(u \circ f)_I=u_I\circ f_I$ is (sub)harmonic in $\Omega'_I$. \end{proof} As for strong harmonicity and subharmonicity, they are preserved under composition with quaternionic affine transformations, as the latter are holomorphic with respect to any constant structure $I \in {\mathbb{S}}$: \begin{remark}\label{affine} If $u$ is a strongly (sub)harmonic function on a domain $\Omega \subseteq {\mathbb{H}}$ then, for any $a,b \in {\mathbb{H}}$ with $b\neq0$, the function $v(q)=u(a+qb)$ is strongly (sub)harmonic in $\Omega b^{-1}-a$. \end{remark} However, strong harmonicity and subharmonicity are not preserved by composition with other regular functions, not even slice preserving regular functions (see Example \ref{x^2-y^2}). For this reason, we address the study of their composition with regular functions by direct computation, starting with the $C^2$ case. \begin{lemma}\label{laplacian} Let $I,J \in{\mathbb{S}}$ with $I \perp J$ and let us consider the associated $\partial_1,\partial_2, \bar \partial_1, \bar \partial_2$. Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u \in C^2(\Omega,{\mathbb{R}})$. For every symmetric slice domain $\Omega'$ and for every regular function $f : \Omega' \to \Omega$, we have \begin{equation} \bar \partial_1 \partial_1(u\circ f)_{|_{q_0}} = (\overline{\partial_1 f_1}, \overline{\partial_1f_2})|_{q_0}\cdot H_{I,J}(u)_{|_{f(q_0)}} \cdot \left.\left( \begin{array}{c} \partial_1 f_1\\ \partial_1 f_2 \end{array}\right)\right|_{q_0} \end{equation} at each $q_0 \in \Omega'_I$. If, moreover, $f$ is slice preserving then \begin{equation} \bar \partial_1 \partial_1(u\circ f)_{|_{q_0}} = |\partial_1 f_1|^2_{|_{q_0}} \cdot \bar\partial_1\partial_1 u_{|_{f(q_0)}} \end{equation} at each $q_0 \in \Omega'_I$. The same is true if there exists a constant $c \in {\mathbb{H}}$ such that $f_I+c$ maps $\Omega'_I$ to $L_I$. \end{lemma} \begin{proof} We compute: \[\partial_1(u\circ f) = (\partial_1 u) \circ f \cdot \partial_1f_1 + (\partial_2 u) \circ f \cdot \partial_1f_2 + (\bar \partial_1 u) \circ f \cdot \partial_1\bar f_1 + (\bar \partial_2 u) \circ f \cdot \partial_1\bar f_2\] and \begin{align*} \bar \partial_1 \partial_1(u\circ f) =\ & \bar \partial_1((\partial_1 u) \circ f) \cdot \partial_1f_1 + (\partial_1 u) \circ f \cdot \bar \partial_1\partial_1f_1+\\ &+ \bar \partial_1((\partial_2 u) \circ f) \cdot \partial_1f_2 + (\partial_2 u) \circ f \cdot \bar \partial_1\partial_1f_2 +\\ &+ \bar \partial_1((\bar \partial_1 u) \circ f) \cdot \partial_1\bar f_1 +(\bar \partial_1 u) \circ f \cdot \bar \partial_1\partial_1\bar f_1 +\\ &+ \bar \partial_1((\bar \partial_2 u) \circ f) \cdot \partial_1\bar f_2 + (\bar \partial_2 u) \circ f \cdot \bar \partial_1\partial_1\bar f_2\,. \end{align*} If we evaluate the previous expression at a point $q \in L_I$, equality \eqref{complexholomorphy} guarantees the vanishing of all terms but the first and the third. 
Hence, \[\bar \partial_1 \partial_1(u\circ f)_{|_q} = \bar \partial_1((\partial_1 u) \circ f)_{|_q} \cdot {\partial_1f_1}_{|_q}+ \bar \partial_1((\partial_2 u) \circ f)_{|_q} \cdot {\partial_1f_2}_{|_q}\] where \begin{align*} &\bar \partial_1((\partial_1 u) \circ f)_{|_q} =\\ &=\partial_1\partial_1 u _{|_{f(q)}} \cdot {\bar \partial_1 f_1}_{|_q}+ \partial_2\partial_1 u _{|_{f(q)}} \cdot {\bar \partial_1 f_2}_{|_q}+\bar\partial_1\partial_1 u _{|_{f(q)}} \cdot {\bar \partial_1 \bar f_1}_{|_q} + \bar\partial_2\partial_1 u _{|_{f(q)}} \cdot {\bar \partial_1 \bar f_2}_{|_q} =\\ &= \bar\partial_1\partial_1 u _{|_{f(q)}} \cdot \overline{\partial_1 f_1}_{|_q} + \bar\partial_2\partial_1 u _{|_{f(q)}} \cdot \overline{\partial_1 f_2}_{|_q} \end{align*} and \begin{align*} &\bar \partial_1((\partial_2 u) \circ f)_{|_q} =\\ &=\partial_1\partial_2 u _{|_{f(q)}} \cdot {\bar \partial_1 f_1}_{|_q}+ \partial_2\partial_2 u _{|_{f(q)}} \cdot {\bar \partial_1 f_2}_{|_q}+\bar\partial_1\partial_2 u _{|_{f(q)}} \cdot {\bar \partial_1 \bar f_1}_{|_q} + \bar\partial_2\partial_2 u _{|_{f(q)}} \cdot {\bar \partial_1 \bar f_2}_{|_q} =\\ &=\bar\partial_1\partial_2 u _{|_{f(q)}} \cdot \overline{\partial_1 f_1}_{|_q} + \bar\partial_2\partial_2 u _{|_{f(q)}} \cdot \overline{\partial_1 f_2}_{|_q}\,. \end{align*} Thus, \begin{align*} \bar \partial_1 \partial_1(u\circ f)_{|_q} =\ &\bar\partial_1\partial_1 u _{|_{f(q)}} \cdot |\partial_1 f_1|^2_{|_q} + \bar\partial_2\partial_1 u _{|_{f(q)}} \cdot \overline{\partial_1 f_2}_{|_q}\cdot {\partial_1f_1}_{|_q} +\\ &+\bar\partial_1\partial_2 u _{|_{f(q)}} \cdot \overline{\partial_1 f_1}_{|_q}{\partial_1f_2}_{|_q} + \bar\partial_2\partial_2 u _{|_{f(q)}} \cdot |\partial_1 f_2|^2_{|_q}\,; \end{align*} that is, \[\bar \partial_1 \partial_1(u\circ f)_{|_q}= (\overline{\partial_1 f_1}, \overline{\partial_1f_2})|_q\cdot \left. \left( \begin{array}{cc} \bar\partial_1\partial_1 u & \bar\partial_1\partial_2 u \\ \bar\partial_2\partial_1 u & \bar\partial_2\partial_2 u \end{array}\right)\right|_{f(q)} \cdot \left.\left( \begin{array}{c} \partial_1 f_1\\ \partial_1 f_2 \end{array}\right)\right|_{q}\,.\] Finally, if there exists $c = c_1+c_2J \in {\mathbb{H}}$ such that $f_I+c$ maps $\Omega_I$ to $L_I$. then $f_2\equiv -c_2$ in $\Omega_I$ so that $\partial_1 f_2$ vanishes identically in $\Omega_I$ and \[\bar \partial_1 \partial_1(u\circ f)_{|_{q}} = |\partial_1 f_1|^2_{|_{q}} \cdot \bar\partial_1\partial_1 u_{|_{f(q)}}\,,\] as desired. \end{proof} We are now ready to study the composition of strongly (sub)harmonic $C^2$ functions with regular functions. \begin{theorem}\label{compositionstrongregularc2} Let $u\in C^2(\Omega,{\mathbb{R}})$. $u$ is strongly (sub)harmonic if, and only if, for every symmetric slice domain $\Omega'$ and for every regular function $f : \Omega' \to \Omega$, the composition $u \circ f$ is weakly (sub)harmonic. \end{theorem} \begin{proof} If $u : \Omega \to {\mathbb{R}}$ is strongly subharmonic then, for all $I,J \in {\mathbb{S}}$ (with $I \perp J$), the matrix $H_{I,J}(u)$ is positive semidefinite. For every regular function $f : \Omega' \to \Omega$ and for all $I \in {\mathbb{S}}$, Lemma \ref{laplacian} implies $\bar \partial_1 \partial_1(u\circ f)_{|_{z_1}}\geq 0$ at each $z_1 \in \Omega_I$. Hence, $u\circ f$ is weakly subharmonic. Conversely, if $u \circ f$ is weakly subharmonic for every regular function $f : \Omega' \to \Omega$ then we can prove that $u$ is strongly subharmonic in the following way. 
Let us fix $I,J \in {\mathbb{S}}, p \in \Omega$ (with $I \perp J$) and prove that $H_{I,J}(u)$ is positive semidefinite at $p$, i.e., that \[(\bar v_1, \bar v_2)\cdot H_{I,J}(u)_{|_{p}} \cdot \left( \begin{array}{c} v_1\\ v_2 \end{array}\right) \geq 0\] for arbitrary $v_1,v_2 \in L_I$. Let us set $v := v_1+v_2J$ and $f(q):=qv+p$ for $q\in B(0,R)$ (with $R>0$ small enough to guarantee the inclusion of $f(B(0,R))=B(p,|v|R)$ into $\Omega$). By direct computation, $\partial_cf\equiv v$. Formula \eqref{complexjacobian} yields the equalities ${\partial_1 f_1}_{|_{0}}= v_1, {\partial_1 f_2}_{|_{0}}= v_2$. Taking into account Lemma \ref{laplacian} and the fact that $f(0)=p$, we conclude that \[(\bar v_1, \bar v_2)\cdot H_{I,J}(u)_{|_{p}} \cdot \left( \begin{array}{c} v_1\\ v_2 \end{array}\right) = \bar \partial_1 \partial_1(u\circ f)_{|_{0}}\,.\] Since $u\circ f$ is weakly subharmonic, $\bar \partial_1 \partial_1(u\circ f)_{|_{0}}\geq0$ and we have proven the desired inequality. Analogous reasonings characterize strong \emph{harmonicity}. \end{proof} The previous result allows us to construct a large class of examples of weakly subharmonic functions. \begin{example} For any regular function $f: \Omega \to {\mathbb{H}}$ on a symmetric slice domain $\Omega$, the components of $f$ with respect to any basis $1,I,J, IJ$ with $I,J \in {\mathbb{S}},I\perp J$ are weakly harmonic. Furthermore, for all $\alpha>0$ the functions $\log|f|, |f|^\alpha,Re^2f, |Im f|^2$ are weakly subharmonic. \end{example} \section{Mean-value property and consequences}\label{sectionmeanvalue} We can characterize weak and strong (sub)harmonicity of $u \in \mathsf{us}(\Omega)$ in terms of mean-value properties. For each $I \in {\mathbb{S}}, a \in \Omega, b \in {\mathbb{H}}$ such that $\Omega$ includes the circle $\Gamma_{I,a,b}:=\{a+e^{I\vartheta}b: \vartheta\in{\mathbb{R}}\}$, we will use the notation \begin{equation} l_I(u; a, b) := \frac{1}{2\pi} \int_0^{2\pi} u(a+e^{I\vartheta}b) d\vartheta. \end{equation} \begin{proposition}\label{weaklymean} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. $u$ is weakly subharmonic (resp., weakly harmonic) if, and only if, the inequality \[u(a) \leq l_I(u; a, b)\] (resp., the equality $u(a) = l_I(u; a, b)$) holds for all $I \in {\mathbb{S}}, a \in \Omega_I, b\in L_I\setminus\{0\}$ such that $\Gamma_{I,a,b}\subset\Omega_I$. \end{proposition} \begin{proof} Fix any $I\in{\mathbb{S}}$. By~\cite[Theorem 2.4.1]{klimek}, $u_I$ is subharmonic in $\Omega_I$ if, and only if, $u(a) \leq l_I(u; a, b)$ for all $a \in \Omega_I, b\in L_I\setminus\{0\}$ such that $\Gamma_{I,a,b}\subset\Omega_I$. The corresponding equalities characterize harmonicity. \end{proof} \begin{proposition}\label{stronglymean} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u \in \mathsf{us}(\Omega)$. $u$ is strongly subharmonic (resp., strongly harmonic) if, and only if, the inequality \[u(a) \leq l_I(u; a, b)\] (resp., the equality $u(a) = l_I(u; a, b)$) holds for all $I \in {\mathbb{S}}, a \in \Omega, b \in {\mathbb{H}}\setminus\{0\}$ such that $\Gamma_{I,a,b}\subset\Omega$. \end{proposition} \begin{proof} For each $I\in{\mathbb{S}}$, let us apply~\cite[Theorem 2.9.1]{klimek} to establish whether $u$ is $I$-plurisubharmonic. This happens if, and only if, $u(a) \leq l_I(u; a, b)$ for all $a \in \Omega, b\in {\mathbb{H}}\setminus\{0\}$ such that $\Gamma_{I,a,b}\subset\Omega$. The corresponding equalities characterize $I$-pluriharmonicity.
\end{proof} As an application of the previous results, we can extend Theorem~\ref{compositionstrongregularc2} to all $u \in \mathsf{us}(\Omega)$. \begin{theorem}\label{compositionstrongregular} Let $u \in \mathsf{us}(\Omega)$. $u$ is strongly (sub)harmonic if, and only if, for every regular function $f : \Omega' \to \Omega$ the composition $u \circ f$ is weakly (sub)harmonic. \end{theorem} \begin{proof} Let us suppose the composition $u \circ f$ with any regular function $f:\Omega' \to\Omega$ to be weakly subharmonic and let us prove that $u$ is strongly subharmonic. By Proposition~\ref{stronglymean}, it suffices to prove that, for any $I \in{\mathbb{S}}, a\in\Omega, b\in{\mathbb{H}}\setminus\{0\}$ such that $\Gamma_{I,a,b}\subset\Omega$, the inequality \[u(a) \leq l_I(u; a, b)\] holds. If we set $f(q):=a+qb$, then $f(0)=a$ and $f$ maps the circle $\Gamma_{I,0,1}$ into the circle $\Gamma_{I,a,b}$. Thus, it suffices to prove that \[u(f(0)) \leq l_I(u\circ f; 0, 1).\] But this inequality is true by Proposition~\ref{weaklymean}, since $u \circ f$ is weakly subharmonic in a domain $\Omega'$ such that $\Gamma_{I,0,1}\subset\Omega'_I$. Analogous considerations can be made for the harmonic case. Conversely, let $u \in \mathsf{us}(\Omega)$ be strongly (sub)harmonic, let $f : \Omega' \to \Omega$ be a regular function and let us prove that $u \circ f$ is weakly (sub)harmonic. For each $I\in{\mathbb{S}}$, $u$ is $I$-pluri(sub)harmonic and the restriction $f_I$ is a holomorphic map from $(\Omega'_I,I)$ to $(\Omega,I)$. As a consequence, $(u \circ f)_I=u\circ f_I$ is (sub)harmonic in $\Omega'_I$, as desired. \end{proof} A form of maximum modulus principle holds for weakly or strongly plurisubharmonic functions. \begin{proposition} Let $\Omega$ be a domain in ${\mathbb{H}}$ and suppose $u \in \mathsf{us}(\Omega)$ to be weakly subharmonic. If $u$ has a local maximum point $p \in \Omega_I$ then $u_I$ is constant in the connected component of $\Omega_I$ that includes $p$. If, moreover, $u$ is strongly subharmonic, then $u$ is constant in $\Omega$. \end{proposition} \begin{proof} In our hypotheses, $u_I$ is a subharmonic function with a local maximum point $p \in \Omega_I$. Thus, $u_I$ is constant in the connected component of $\Omega_I$ that includes $p$ by the maximum modulus principle for subharmonic functions~\cite[Theorem 2.4.2]{klimek}. If, moreover, $u$ is strongly subharmonic then it is $I$-plurisubharmonic. Since we assumed $\Omega$ to be connected, $u$ is constant in $\Omega$ by the maximum modulus principle for plurisubharmonic functions~\cite[Corollary 2.9.9]{klimek}. \end{proof} Let us now consider maximality. \begin{definition} Let $S$ be a class of real-valued functions on an open set $D$ and let $\upsilon$ be an element of $S$. Suppose that, for any relatively compact subset $G$ of $D$ and for all $\nu\in S$ with $\nu \leq \upsilon$ in $\partial G$, the inequality $\nu \leq \upsilon$ holds throughout $G$. In this situation, we say that $\upsilon$ is \emph{maximal} in $S$ (or among the elements of $S$). \end{definition} The following characterization of weak harmonicity immediately follows from \cite[\S3.1]{klimek}. \begin{remark} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u\in\mathsf{wsh}(\Omega)$. $u$ is weakly harmonic if, and only if, for all $I \in {\mathbb{S}}$, the restriction $u_I$ is maximal among subharmonic functions on $\Omega_I$. As a consequence, if $u$ is weakly harmonic then $u$ is maximal in $\mathsf{wsh}(\Omega)$. 
\end{remark} Let us now consider strongly subharmonic functions. Since they are plurisubharmonic with respect to all constant structures, we can make the following observation. \begin{remark} Let $\Omega$ be a domain in ${\mathbb{H}}$ and let $u\in \mathsf{ssh}(\Omega)$. If $u$ is strongly harmonic then it is maximal in $\mathsf{ssh}(\Omega)$. Furthermore, if $u \in C^2(\Omega)$ then $u$ is maximal in $\mathsf{ssh}(\Omega)$ if and only if $\det H_{I,J}(u) \equiv 0$ for all $I,J \in {\mathbb{S}}$ with $I \perp J$. \end{remark} It is easy to exhibit a maximal element of $\mathsf{ssh}(\Omega)$ that is not strongly harmonic. \begin{example} The function $u(q) = \log|q|$ is a strongly subharmonic function on ${\mathbb{H}}$. The explicit computations in Example \ref{log} show that $u$ is maximal but not strongly harmonic. \end{example} \section{Approximation}\label{sectionmapproximation} An approximation result holds for strongly subharmonic functions. For all $\varepsilon >0$, let \[\Omega_{\varepsilon}:=\left\{ \begin{array}{ll} \{q \in \Omega : \mathrm{dist}(q, \partial \Omega)>\varepsilon\}&\mathrm{\ if\ }\Omega\neq{\mathbb{H}}\\ \,{\mathbb{H}}&\mathrm{\ if\ }\Omega={\mathbb{H}} \end{array} \right.\] and let $u*\chi_\varepsilon$ denote the convolution of $u$ with the standard smoothing kernel $\chi_\varepsilon$ of ${\mathbb{H}}\cong{\mathbb{R}}^4$. For further details, see \cite[\S2.5]{klimek}. \begin{proposition}\label{approximation} Let $u \in \mathsf{ssh}(\Omega)$. If $\varepsilon>0$ is such that $\Omega_{\varepsilon}$ is not empty, then $u*\chi_\varepsilon \in C^{\infty} \cap \mathsf{ssh}(\Omega_\varepsilon)$. Moreover, $u*\chi_\varepsilon$ monotonically decreases with decreasing $\varepsilon$ and \begin{equation} \lim_{\varepsilon \to 0^+} u*\chi_\varepsilon (q) = u(q) \end{equation} for each $q \in \Omega$. \end{proposition} \begin{proof} Fix any $I\in{\mathbb{S}}$, so that $u$ is $I$-plurisubharmonic. By~\cite[Theorem 2.9.2]{klimek}, $u*\chi_\varepsilon$ is both an element of $C^{\infty}$ and an $I$-plurisubharmonic function. For the same reason, our second statement also holds true. \end{proof} On the other hand, convolution with the standard smoothing kernel $\chi_\varepsilon$ does not preserve weak subharmonicity. This can be shown with an example. \begin{example} The function $u(q) = Re(q^2)$ is in $\mathsf{wsh}({\mathbb{H}})$, but $u*\chi_\varepsilon$ does not belong to $\mathsf{wsh}({\mathbb{H}})$. Indeed, we saw that for each orthonormal basis $1,I,J,IJ$ of ${\mathbb{H}}$ we have $H_{I,J}(u)\equiv \left( \begin{array}{cc} 0 & 0\\ 0 & -1 \end{array}\right)$. Hence, $-u$ is strongly subharmonic and the same is true for $-u*\chi_\varepsilon$ by the previous proposition. In particular, $-u*\chi_\varepsilon \in \mathsf{wsh}({\mathbb{H}})$ so that $u*\chi_\varepsilon$ can only be in $\mathsf{wsh}({\mathbb{H}})$ if it is weakly \emph{harmonic}. This amounts to requiring that for each $I \in {\mathbb{S}}, a \in \Omega_I, b \in L_I\setminus\{0\}$ such that $\Gamma_{I,a,b} \subset \Omega$ the equality \begin{equation*} u*\chi_\varepsilon(a) = l_I(u*\chi_\varepsilon; a, b) = l_I(u; \cdot, b) * \chi_\varepsilon (a), \end{equation*} holds. But this happens if, and only if, $u(a-q) = l_I(u; a-q, b)$ for all $q$ in the support $\overline{B(0,\varepsilon)}$ of $\chi_\varepsilon$. This cannot be true, since (if $z_1,z_2$ denote the complex variables with respect to the orthonormal basis $1,I,J,IJ$) the function $-u$ is strictly subharmonic in $z_2$.
\end{example} \section{Green's functions} We now consider the analogs of Green's functions in the context of weakly and strongly subharmonic functions. \begin{definition}\label{green} Let $\Omega$ be a domain in ${\mathbb{H}}$, let $q_0 \in \Omega$, and set \begin{align*} &\mathsf{wsh}_{q_0}(\Omega):=\left\{u \in \mathsf{wsh}(\Omega) : u<0, \limsup_{q \to q_0} \big|u(q)-\log|q-q_0|\big| < \infty\right\}\\ &\mathsf{ssh}_{q_0}(\Omega):=\mathsf{wsh}_{q_0}(\Omega) \cap \mathsf{ssh}(\Omega)\,. \end{align*} For all $q\in\Omega$, let us define \begin{align*} w(q) &:= \left\{ \begin{array}{ll} -\infty& \mathrm{\ if\ }\mathsf{wsh}_{q_0}(\Omega) = \emptyset \\ \sup\{u(q): u\in \mathsf{wsh}_{q_0}(\Omega) \} &\mathrm{\ otherwise} \end{array} \right.\\ s(q) &:= \left\{ \begin{array}{ll} -\infty& \mathrm{\ if\ }\mathsf{ssh}_{q_0}(\Omega) = \emptyset \\ \sup\{u(q): u\in \mathsf{ssh}_{q_0}(\Omega) \} &\mathrm{\ otherwise} \end{array} \right. \end{align*} The \emph{Green function} of $\Omega$ with logarithmic pole at $q_0$, denoted $g^\Omega_{q_0}$, is the upper semicontinuous regularization $w^*$ of $w$. The \emph{strongly subharmonic Green function} of $\Omega$ with logarithmic pole at $q_0$, denoted $G^\Omega_{q_0}$, is the upper semicontinuous regularization $s^*$ of $s$. \end{definition} \begin{remark} By construction, $G^\Omega_{q_0}(q) \leq g^\Omega_{q_0}(q)$ for all $q\in\Omega$. Moreover, any inclusion $\Omega'\subseteq\Omega$ implies $g^\Omega_{q_0}(q) \leq g^{\Omega'}_{q_0}(q)$ and $G^\Omega_{q_0}(q) \leq G^{\Omega'}_{q_0}(q)$. \end{remark} Let us construct a basic example. We will use the notations ${\mathbb{B}} := B(0,1)$, where \begin{equation*} B(q_0,R) := \{q \in {\mathbb{H}} : |q-q_0|<R\} \end{equation*} for all $q_0 \in {\mathbb{H}}, R>0$, and ${\mathbb{B}}_I := {\mathbb{B}} \cap L_I$. \begin{example} We can easily prove that \[G^{\mathbb{B}}_{0}(q)=g^{\mathbb{B}}_{0}(q)=\log|q|\] for all $q\in{\mathbb{B}}$. Indeed, $q\mapsto\log|q|$ is clearly an element of $\mathsf{ssh}_{0}({\mathbb{B}})\subseteq\mathsf{wsh}_{0}({\mathbb{B}})$. Furthermore, for each $u\in\mathsf{wsh}_{0}({\mathbb{B}})$, the inequality $u(q)\leq\log|q|$ holds throughout ${\mathbb{B}}$. Indeed, for all $I\in{\mathbb{S}}$ it holds $u_I(z)\leq\log|z|$ for all $z\in{\mathbb{B}}_I$ because $z \mapsto \log|z|$ is the (complex) Green function of the disc ${\mathbb{B}}_I$. \end{example} Further examples can be derived by means of the next results. \begin{lemma}\label{stonggreenequality} Let $f$ be any affine transformation of ${\mathbb{H}}$, let $\Omega$ be a domain in ${\mathbb{H}}$ and fix $q_0 \in \Omega$. Then \[G^{f(\Omega)}_{f(q_0)}(f(q)) = G^{\Omega}_{q_0}(q)\] for all $q\in\Omega$. \end{lemma} \begin{proof} By repeated applications of Remark \ref{affine}, we conclude that \[\mathsf{ssh}_{q_0}(\Omega)=\{u\circ f : u\in \mathsf{ssh}_{f(q_0)}(f(\Omega))\}.\] Thanks to this equality, the statement immediately follows from Definition~\ref{green}. \end{proof} \begin{lemma}\label{greeninequalities} Let $\Omega$ be a symmetric slice domain in ${\mathbb{H}}$, fix $q_0 \in \Omega$ and take a regular function $f:\Omega\to{\mathbb{H}}$. Then \[G^{f(\Omega)}_{f(q_0)}(f(q)) \leq g^{\Omega}_{q_0}(q)\] for all $q\in\Omega$. If, moreover, $f$ is slice preserving then \[g^{f(\Omega)}_{f(q_0)}(f(q)) \leq g^{\Omega}_{q_0}(q)\] for all $q\in\Omega$. If, additionally, $f$ admits a regular inverse $f^{-1}:f(\Omega)\to\Omega$, then the last inequality becomes an equality at all $q\in\Omega$. 
\end{lemma} \begin{proof} By Theorem~\ref{compositionstrongregular}, \[\mathsf{wsh}_{q_0}(\Omega)\supseteq\{u\circ f : u\in \mathsf{ssh}_{f(q_0)}(f(\Omega))\}.\] If $f$ is a slice preserving regular function then, by Theorem~\ref{compositionweakregular}, \[\mathsf{wsh}_{q_0}(\Omega)\supseteq\{u\circ f : u\in \mathsf{wsh}_{f(q_0)}(f(\Omega))\}.\] The last inclusion is actually an equality if $f$ admits a regular inverse $f^{-1}:f(\Omega)\to\Omega$, which is necessarily slice preserving. The three statements now follow from Definition~\ref{green}. \end{proof} In the last statement, we assumed $\Omega$ to be a symmetric slice domain for the sake of simplicity. The result could, however, be extended to all slice domains. Lemmas~\ref{stonggreenequality} and~\ref{greeninequalities}, along with the preceding example, yield what follows. \begin{example} For each $x_0\in{\mathbb{R}}$ and each $R>0$, the equalities \[\log\frac{|q-x_0|}{R}=G^{B(x_0,R)}_{x_0}(q)=g^{B(x_0,R)}_{x_0}(q)\] hold for all $q\in B(x_0,R)$. We point out that this function is strongly subharmonic and weakly harmonic in $B(x_0,R)$. \end{example} \begin{example}\label{ball} For each $q_0\in{\mathbb{H}}$ and each $R>0$ it holds \[\log\frac{|q-q_0|}{R}=G^{B(q_0,R)}_{q_0}(q)\leq g^{B(q_0,R)}_{q_0}(q)\] for all $q\in B=B(q_0,R)$. We point out that $G^{B}_{q_0}$, though strongly and weakly subharmonic, is not weakly \emph{harmonic} if $q_0 \not \in {\mathbb{R}}$. Indeed, if we fix an orthonormal basis $1,I,J,IJ$ and write $q_0 = q_1+ q_2J$ with $q_1,q_2 \in L_I$, then by Lemma \ref{laplacian} \[\left(\bar \partial_1 \partial_1 G^B_{q_0}\right)_{|_z} = \frac1{R^2} \left( \bar \partial_1 \partial_1 G^{\mathbb{B}}_{0}\right)_{|_{\frac{z-q_0}R}} = \frac{|q_2|^2}{2 (|z-q_1|^2 + |q_2|^2)^2}\] for $z \in L_I$. This expression only vanishes when $q_2 = 0$, that is, when $q_0 \in L_I$. \end{example} We are now in a position to make the next remarks. \begin{remark} Let $\Omega$ be a bounded domain in ${\mathbb{H}}$ and let $q_0 \in \Omega$. For all $r,R>0$ such that $B(q_0,r) \subseteq \Omega \subseteq B(q_0,R)$, we have that \[\log\frac{|q-q_0|}{R} \leq G^\Omega_{q_0}(q)\leq \log\frac{|q-q_0|}{r}.\] As a consequence, $G^\Omega_{q_0}$ is not identically equal to $-\infty$, it belongs to $\mathsf{ssh}_{q_0}(\Omega)$ and it coincides with the supremum $s$ appearing in Definition \ref{green}. \end{remark} \begin{remark} Let $\Omega$ be a bounded domain in ${\mathbb{H}}$ and let $q_0 \in \Omega$. For all $R>0$ such that $\Omega \subseteq B(q_0,R)$, we have that \[\log\frac{|q-q_0|}{R} \leq g^\Omega_{q_0}(q)\,,\] whence $g^\Omega_{q_0}$ is not identically equal to $-\infty$. If, moreover, $q_0=x_0\in{\mathbb{R}}$ then for all $r>0$ such that $B(x_0,r) \subseteq \Omega$ we have that \[g^\Omega_{x_0}(q)\leq \log\frac{|q-x_0|}{r}.\] In this case, $g^\Omega_{x_0}$ belongs to $\mathsf{wsh}_{x_0}(\Omega)$ and it coincides with the supremum $w$ appearing in Definition \ref{green}. \end{remark} When $\Omega$ admits a well-behaved exhaustion function, we can prove a few further properties. \begin{theorem} Let $\Omega$ be a bounded domain in ${\mathbb{H}}$. Suppose there exists $\rho\in C^0(\Omega, (-\infty,0)) \cap \mathsf{ssh}(\Omega)$ such that $\{q \in \Omega: \rho(q) <c\} \subset\subset \Omega$ for all $c<0$. Then for all $q_0 \in \Omega$ and for all $p \in \partial \Omega$ \[\lim_{q \to p} G^\Omega_{q_0}(q) =0.\] Moreover, $G^\Omega_{q_0}$ is continuous in $\Omega\setminus\{q_0\}$. 
\end{theorem} \begin{proof} Let $B(q_0,r) \subseteq \Omega \subseteq B(q_0,R)$ and let $c>0$ be such that $c \rho < \log \frac r R$ in $\overline{B(q_0,r)}$. If we set \begin{equation*} u(q) = \left\{ \begin{array}{ll} \log\frac{|q-q_0|}{R} & q \in \overline{B(q_0,r)}\\ \max\left\{c \rho(q), \log\frac{|q-q_0|}{R}\right\} & q \in \Omega \setminus B(q_0,r) \end{array} \right. \end{equation*} then $u \in \mathsf{ssh}_{q_0}(\Omega)$. Thus, $u \leq G^\Omega_{q_0}\leq 0$, where $\lim_{q \to p} u(q) =0$ for all $p \in \partial \Omega$. This prove the first statement. To prove the second statement, we only need to prove the lower semicontinuity of $G^\Omega_{q_0}$. Let us choose $\lambda \in (0,1)$ such that $\rho < -\lambda$ in $\overline{B(q_0,\lambda)}$. For any $\varepsilon \in (0,\lambda)$ such that $\log \frac \varepsilon R>(1-\varepsilon) \log \varepsilon^2$ (whence $\log \frac \varepsilon R > \varepsilon - \frac 1 \varepsilon$), let us set \begin{equation*} \alpha(q) = (1-\varepsilon) \log\left(\varepsilon |q-q_0|\right) - \varepsilon \end{equation*} on $\overline{B(q_0, \varepsilon)}$. By Proposition~\ref{approximation}, for each sufficiently small $\delta>0$, the convolution $G^\Omega_{q_0} * \chi_\delta$ is an element of $C^\infty\cap\mathsf{ssh}(\Omega_\delta)$. If we choose $\eta \in (0,\varepsilon)$ so that $(1-\varepsilon) \log (\varepsilon \eta)>\log \frac \eta R$ then we may choose $\delta = \delta_\varepsilon>0$ so that: \begin{itemize} \item $(1-\varepsilon) \log (\varepsilon \eta)>G^\Omega_{q_0} * \chi_\delta$ in $\partial B(q_0,\eta)$, \item $\Omega_\delta$ includes $\rho^{-1}([-\infty,-\varepsilon^3])$, and \item $G^\Omega_{q_0} * \chi_\delta<0$ in $\rho^{-1}(-\varepsilon^3)$. \end{itemize} We may then set \[\beta(q) := G^\Omega_{q_0} * \chi_{\delta_\varepsilon}(q) - \varepsilon\] for all $q\in\Omega_{\delta_\varepsilon}$ and \[\gamma(q) := \varepsilon^{-2} \rho(q)\] for all $q\in\Omega$. By construction, $\alpha,\beta,\gamma$ can be patched together in a continuous strongly subharmonic function defined on $\Omega$, namely \begin{equation*} u_\varepsilon := \left\{ \begin{array}{ll} \alpha & \mathrm{in\ } \overline{B(q_0,\eta)}\\ \max\left\{\alpha,\beta\right\} & \mathrm{in\ } \overline{B(q_0,\varepsilon)} \setminus B(q_0,\eta)\\ \beta & \mathrm{in\ } \rho^{-1}([-\infty,-\varepsilon]) \setminus B(q_0,\varepsilon)\\ \max\left\{\beta,\gamma\right\} & \mathrm{in\ } \rho^{-1}([-\varepsilon,-\varepsilon^3])\\ \gamma & \mathrm{in\ } \Omega \setminus \rho^{-1}([-\infty,-\varepsilon^3)) \end{array} \right. \end{equation*} We remark that $\rho^{-1}([-\infty,-\varepsilon]) \setminus B(q_0,\varepsilon)$ increases as $\varepsilon \to 0$ and that \[\bigcup_{\varepsilon \in (0,\lambda)}\left( \rho^{-1}([-\infty,-\varepsilon]) \setminus B(q_0,\varepsilon) \right)= \Omega \setminus \{q_0\}\,.\] Thus, for all $q \in \Omega \setminus \{q_0\}$, \[\lim_{\varepsilon \to 0} u_\varepsilon(q) = \lim_{\varepsilon \to 0} G^\Omega_{q_0} * \chi_{\delta_\varepsilon}(q) - \varepsilon = G^\Omega_{q_0}(q)\,.\] Moreover, for each $\varepsilon \in (0, \lambda)$ it holds $\frac{u_\varepsilon}{1-\varepsilon} \in \mathsf{ssh}_{q_0}(\Omega)$, whence $\frac{u_\varepsilon}{1-\varepsilon} \leq G^\Omega_{q_0}$ in $\Omega$. Thus, \begin{equation*} G^\Omega_{q_0}(q) = \sup_{\varepsilon \in (0, \lambda)} \frac{u_\varepsilon(q)}{1-\varepsilon}\,, \end{equation*} whence the lower semicontinuity of $G^\Omega_{q_0}$ immediately follows. 
\end{proof} Similarly: \begin{proposition} Let $\Omega$ be a bounded domain in ${\mathbb{H}}$. Suppose there exists $\rho\in C^0(\Omega, (-\infty,0)) \cap \mathsf{wsh}(\Omega)$ such that $\{q \in \Omega: \rho(q) <c\} \subset\subset \Omega$ for all $c<0$. Then for all $q_0 \in \Omega$ and for all $p \in \partial \Omega$ \[\lim_{q \to p} g^\Omega_{q_0}(q) =0.\] \end{proposition} \begin{proof} Let $B(q_0,r) \subseteq \Omega \subseteq B(q_0,R)$ and let $c>0$ be such that $c \rho < \log \frac r R$ in $\overline{B(q_0,r)}$. If we set \begin{equation*} u(q) = \left\{ \begin{array}{ll} \log\frac{|q-q_0|}{R} & q \in \overline{B(q_0,r)}\\ \max\left\{c \rho(q), \log\frac{|q-q_0|}{R}\right\} & q \in \Omega \setminus B(q_0,r) \end{array} \right. \end{equation*} then $u \in \mathsf{wsh}_{q_0}(\Omega)$. Thus, $u \leq g^\Omega_{q_0}\leq 0$, where $\lim_{q \to p} u(q) =0$ for all $p \in \partial \Omega$. \end{proof} For a special class of domains $\Omega$ and points $q_0$, the Green function $g^\Omega_{q_0}$ can be easily determined, as follows. \begin{theorem}\label{greenonsymmetricslice} Let $\Omega$ be a bounded symmetric slice domain and let $x_0 \in \Omega \cap {\mathbb{R}}$. Consider the slice $\Omega_i = \Omega \cap {\mathbb{C}}$ of the domain and the (complex) Green function of $\Omega_i$ with logarithmic pole at $x_0$, which we will denote as $\gamma^{\Omega_i}_{x_0}$. If we set \[u(x+Iy) := \gamma^{\Omega_i}_{x_0}(x+iy)\] for all $x,y\in{\mathbb{R}}$ and $I \in {\mathbb{S}}$ such that $x+Iy\in\Omega$, then: \begin{itemize} \item $u$ is a well-defined function on $\Omega$; \item $u$ is ${\mathbb{J}}$-plurisubharmonic in $\Omega \setminus {\mathbb{R}}$ and it belongs to $\mathsf{wsh}_{x_0}(\Omega)$; \item $u$ coincides with $g^\Omega_{x_0}$. \end{itemize} \end{theorem} \begin{proof} The slice $\Omega_i$ of the domain is a bounded domain in ${\mathbb{C}}$. By~\cite[Proposition 6.1.1]{klimek}, the function $\gamma^{\Omega_i}_{x_0}$ is a negative plurisubharmonic function on $\Omega_i$ with a logarithmic pole at $x_0$. Moreover, since $\Omega_i$ is symmetric with respect to the real axis, it holds $\gamma^{\Omega_i}_{x_0}(x+iy)=\gamma^{\Omega_i}_{x_0}(x-iy)$ for all $x+iy\in\Omega_i$. It follows at once that $u$ is well-defined, that it belongs to $\mathsf{wsh}_{x_0}(\Omega)$, and that it is ${\mathbb{J}}$-plurisubharmonic in $\Omega\setminus{\mathbb{R}}$. Moreover, let us fix any other $v\in \mathsf{wsh}_{x_0}(\Omega)$: we can prove that $v(q) \leq u(q)$ for all $q\in\Omega$, as follows. For each $I \in {\mathbb{S}}$, the inequality $v_I\leq u_I$ holds in $\Omega_I$ because (up to identifying $L_I$ with ${\mathbb{C}}$) the function $u_I$ is the (complex) Green function of $\Omega_I$ with logarithmic pole at $x_0$ and $v_I$ is a negative subharmonic function on $\Omega_I$ with a logarithmic pole at $x_0$. As a consequence, $u$ coincides with $g^\Omega_{x_0}$. \end{proof} \subsection{A significant example} An interesting example to consider is that of the unit ball ${\mathbb{B}}$ with a pole other than $0$. It is natural to address it by means of the \emph{classical M\"obius transformations} of ${\mathbb{B}}$, namely the conformal transformations $v^{-1}M_{q_0}u$, where $u,v$ are constants in $\partial{\mathbb{B}}$, $q_0\in{\mathbb{B}}$ and $M_{q_0}$ is the transformation of ${\mathbb{B}}$ defined as \[M_{q_0}(q) := (1-q\bar q_0)^{-1}(q-q_0)\,.\] The transformation $M_{q_0}$ has inverse $M_{q_0}^{-1}=M_{-q_0}$.
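The two properties of $M_{q_0}$ just recalled (it maps ${\mathbb{B}}$ to itself and is inverted by $M_{-q_0}$) are easy to spot-check numerically. The following Python sketch is an added illustration, not part of the paper: quaternions are encoded as $4$-vectors $(x_0,x_1,x_2,x_3)$ with the Hamilton product written out explicitly, and the random sampling of ${\mathbb{B}}$ is an ad-hoc choice.
\begin{verbatim}
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (x0, x1, x2, x3) = x0 + x1 i + x2 j + x3 k
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

qconj = lambda q: q * np.array([1.0, -1.0, -1.0, -1.0])
qinv  = lambda q: qconj(q) / (q @ q)          # q^{-1} = conj(q) / |q|^2
one   = np.array([1.0, 0.0, 0.0, 0.0])

def moebius(q0, q):
    # classical Moebius transformation M_{q0}(q) = (1 - q conj(q0))^{-1} (q - q0)
    return qmul(qinv(one - qmul(q, qconj(q0))), q - q0)

rng = np.random.default_rng(1)
for _ in range(5):
    q0 = rng.uniform(-0.45, 0.45, 4)          # random points of the open unit ball B
    q  = rng.uniform(-0.45, 0.45, 4)
    w  = moebius(q0, q)
    print(np.linalg.norm(w) < 1.0,            # M_{q0} maps B into B
          np.allclose(moebius(-q0, w), q))    # M_{-q0} inverts M_{q0}
\end{verbatim}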
It is regular if, and only if, $q_0=x_0\in{\mathbb{R}}$; in this case, it is also slice preserving. For more details, see~\cite{poincare,moebius}. Our first observation can be derived from either Lemma~\ref{greeninequalities} or Theorem~\ref{greenonsymmetricslice}. \begin{example} For each $x_0\in{\mathbb{B}}\cap{\mathbb{R}}$, we have \[G^{{\mathbb{B}}}_{x_0}(q)\leq g^{{\mathbb{B}}}_{x_0}(q)=\log\frac{|q-x_0|}{|1-qx_0|}\] for all $q\in {\mathbb{B}}$. \end{example} The same techniques do not work when the logarithmic pole $q_0$ is not real, as a consequence of the fact that $M_{q_0}$ is not a regular function. Nevertheless, we can make the following observation. \begin{example} Let us fix $q_0\in{\mathbb{B}}\setminus{\mathbb{R}}$. We will prove that \[\log\frac{\left|q-q_0\right|}{\left|1-q\bar q_0\right|}\leq g^{\mathbb{B}}_{q_0}(q)\] by showing that $u(q) = \log\frac{\left|q-q_0\right|}{\left|1-q\bar q_0\right|}$ is weakly subharmonic. We will also prove that: (a) the restriction $u_I$ is harmonic if, and only if, $q_0\in{\mathbb{B}}_I$; (b) the function $u$ is not strongly subharmonic. \begin{itemize} \item We first observe that \[u(q)=\log|q-q_0|-\log|q-\tilde q_0|-\log|\bar q_0|\,,\] with $\tilde q_0:=\bar q_0^{-1} = q_0|q_0|^{-2}$. With respect to any orthonormal basis $1,I,J,IJ$ and to the associated coordinates $z_1,z_2,\bar z_1,\bar z_2$, if we split $q_0,\tilde q_0$ as $q_0 = q_1 + q_2J,\tilde q_0=\tilde q_1 + \tilde q_2J$ with $q_1,q_2,\tilde q_1,\tilde q_2 \in {\mathbb{B}}_I$ then \begin{align*} H_{I,J}(u)_{|_q}&= \frac{1}{2|q-q_0|^4} \left( \begin{array}{cc} |z_2-q_2|^2 & -(z_1-q_1)(\bar z_2-\bar q_2)\\ -(z_2-q_2)(\bar z_1-\bar q_1) & |z_1-q_1|^2 \end{array}\right)+\\ &-\frac{1}{2\left|q-\tilde q_0\right|^4} \left( \begin{array}{cc} |z_2-\tilde q_2|^2 & -(z_1-q_1)(\bar z_2-\overline{\tilde q_2})\\ -(z_2-\tilde q_2)(\bar z_1-\overline{\tilde q_1}) & |z_1-\tilde q_1|^2 \end{array}\right)\,. \end{align*} \item For all $z\in{\mathbb{B}}_I$ (that is, $z_1=z,z_2=0$) it holds \begin{align*} \left(\bar \partial_1 \partial_1 u \right)_{|_z} &= \frac{|q_2|^2}{2 (|z-q_1|^2 + |q_2|^2)^2} - \frac{|q_2|^2 |q_0|^4}{2 \left(\big|z\,|q_0|^2-q_1\big|^2 + |q_2|^2\right)^2}\,. \end{align*} If $q_0\in{\mathbb{B}}_I$ then $q_2=0$ and $\left(\bar \partial_1 \partial_1 u \right)$ vanishes identically in ${\mathbb{B}}_I$. Otherwise, $\left(\bar \partial_1 \partial_1 u \right)_{|_z}>0$ for all $z \in {\mathbb{B}}_I$ because \begin{align*} &\big|z\,|q_0|^2-q_1\big|^2 + |q_2|^2-|q_0|^2(|z-q_1|^2 + |q_2|^2)\\ &=|z|^2|q_0|^4+|q_0|^2-|q_0|^2|z|^2-|q_0|^4\\ &=|q_0|^2(1-|q_0|^2)(1-|z|^2)>0\,. \end{align*} \item In general, $H_{I,J}(u)$ is not positive semidefinite. To see this, let us choose a basis $1,I,J,IJ$ so that $q_1\neq0\neq q_2$ and let us choose $z_1=0,z_2=q_2$. We get \begin{align*} H_{I,J}(u)_{|_{q_2J}}&= \frac{1}{2|q_1|^4} \left( \begin{array}{cc} 0 & 0\\ 0& |q_1|^2 \end{array}\right)+\\ &-\frac{1}{2\left(|\tilde q_1|^2+|q_2-\tilde q_2|^2\right)^2} \left( \begin{array}{cc} |q_2-\tilde q_2|^2 & q_1(\bar q_2-\overline{\tilde q_2})\\ (q_2-\tilde q_2)\overline{\tilde q_1} & |\tilde q_1|^2 \end{array}\right)\,, \end{align*} where $q_2-\tilde q_2\neq0$ by construction. \end{itemize} \end{example} We would now like to consider a different approach, through regular transformations of ${\mathbb{B}}$. Indeed, the work~\cite{moebius} proved the following facts. 
\begin{itemize} \item The only regular bijections ${\mathbb{B}}\to{\mathbb{B}}$ are the so-called \emph{regular M\"obius transformations} of ${\mathbb{B}}$, namely the transformations ${\mathcal{M}}_{q_0}*u={\mathcal{M}}_{q_0}u$ with $u\in\partial{\mathbb{B}}, q_0 \in{\mathbb{B}}$ and \[{\mathcal{M}}_{q_0}(q) := (1-q\bar q_0)^{-*}*(q-q_0)\,.\] Here, the symbol $*$ denotes the multiplicative operation among regular functions and $f^{-*}$ is the inverse of $f$ with respect to this multiplicative operation. \item For all $q\in{\mathbb{B}}$, it holds \[{\mathcal{M}}_{q_0}(q) = M_{q_0}(T_{q_0}(q))\,,\] where $T_{q_0}: {\mathbb{B}} \to {\mathbb{B}}$ is defined as $T_{q_0}(q) = (1-qq_0)^{-1}q (1-qq_0)$ and has inverse $T_{q_0}^{-1}(q)=T_{\bar q_0}(q)$. \item ${\mathcal{M}}_{q_0}$ is slice preserving if, and only if, $q_0=x_0\in{\mathbb{R}}$ (in which case, $T_{q_0}={id}_{\mathbb{B}}$ and ${\mathcal{M}}_{x_0}=M_{x_0}$). \end{itemize} If we fix $q_0\in{\mathbb{B}}\setminus{\mathbb{R}}$ then, by Lemma~\ref{greeninequalities}, \begin{equation}\label{greenball1} \log \left|{\mathcal{M}}_{q_0}(q)\right|\leq g^{\mathbb{B}}_{q_0}(q)\,. \end{equation} for all $q\in{\mathbb{B}}$. The work~\cite{schwarzpick} proved the quaternionic Schwarz-Pick Lemma and, in particular, the inequality \begin{align*} \log|f(q)| \leq \log \left|{\mathcal{M}}_{q_0}(q)\right| \end{align*} valid for all regular $f:{\mathbb{B}} \to {\mathbb{B}}$ with $f(q_0)=0$. It is therefore natural to ask ourselves whether an equality may hold in \eqref{greenball1}. However, this is not the case: as a consequence of the next result, inequality~\eqref{greenball1} is strict at all $q$ not belonging to the same slice ${\mathbb{B}}_I$ as $q_0$. As a byproduct, we conclude that the set \[\{\log|f| : f : {\mathbb{B}} \to {\mathbb{H}}\mathrm{\ regular,\ } f(q_0)=0\}\] is not a dense subset of $\mathsf{wsh}_{q_0}({\mathbb{B}})$. \begin{theorem} If $q_0\in{\mathbb{B}}_I$, then \begin{equation}\label{greenball2} |M_{q_0}(T_{q_0}(q))|=|{\mathcal{M}}_{q_0}(q)|\leq|M_{q_0}(q)|\,, \end{equation} for all $q\in{\mathbb{B}}$. Equality holds if, and only if, $q\in{\mathbb{B}}_I$. \end{theorem} \begin{proof} Inequality \eqref{greenball2} is equivalent to \[|T_{q_0}(q)-q_0|\,|1-q\bar q_0|\leq|q-q_0|\,|1-T_{q_0}(q)\bar q_0|\,.\] Since $|T_{q_0}(q)|=|q|$, the last inequality is equivalent to \begin{align*} 0\leq&\left(|q|^2-2\langle q,q_0 \rangle+|q_0|^2\right)\left(1-2\langle T_{q_0}(q),q_0 \rangle+|q|^2|q_0|^2\right)+\\ &-\left(|q|^2-2\langle T_{q_0}(q),q_0 \rangle+|q_0|^2\right)\left(1-2\langle q,q_0 \rangle+|q|^2|q_0|^2\right)\\ =&\,2\left(-|q|^2-|q_0|^2+1+|q|^2|q_0|^2\right)\langle T_{q_0}(q),q_0 \rangle+\\ &\,2\left(-1-|q|^2|q_0|^2+|q|^2+|q_0|^2\right)\langle q,q_0 \rangle\\ =&\,2(1-|q_0|^2)(1-|q|^2))\langle T_{q_0}(q)-q,q_0 \rangle\,. \end{align*} Thus, inequality \eqref{greenball2} holds for $q\in{\mathbb{B}}$ if, and only if, $0\leq\langle T_{q_0}(q)-q,q_0 \rangle$. 
This is equivalent to the non-negativity of the real part of $(T_{q_0}(q)-q)\bar q_0=((1-qq_0)^{-1}q(1-qq_0)-q)\bar q_0$ or, equivalently, of the real part of \begin{align*} &\left(\overline{(1-qq_0)}q(1-qq_0)-|1-qq_0|^2q\right)\bar q_0\\ &=\overline{(1-qq_0)}(q-q^2q_0-q+qq_0q)\bar q_0\\ &=(1-\bar q_0\bar q)(qq_0q\bar q_0-q^2|q_0|^2)\\ &=qq_0q\bar q_0-q^2|q_0|^2+|q|^2|q_0|^2(\bar q_0q-q\bar q_0)\\ &=(|q_0|^2-\bar q_0^2)(|z_2|^2-z_1z_2J)+(\bar q_0-q_0)z_2J\,, \end{align*} where the last equality can be obtained by direct computation after splitting $q$ as $q=z_1+z_2J$, with $z_1,z_2\in{\mathbb{B}}_I$ and $J\perp I$. If $q_0=x_0+Iy_0$ then the real part of the last expression equals $(x_0^2+y_0^2-x_0^2+y_0^2)|z_2|^2=2y_0^2|z_2|^2$, which is clearly non-negative. Moreover, it vanishes if, and only if, $z_2=0$, i.e., $q\in{\mathbb{B}}_I$. \end{proof} An example wherein inequality \eqref{greenball2} holds and is strict had been constructed in~\cite{volumeindam}. That construction was used to prove that regular M\"obius transformations are not isometries for the Poincar\'e distance of ${\mathbb{B}}$, defined as \[\delta_{\mathbb{B}}(q,q_0):=\frac12\log\left(\frac{1+|M_{q_0}(q)|}{1-|M_{q_0}(q)|}\right)\] for all $q,q_0\in{\mathbb{B}}$. Our new inequality~\eqref{greenball2} is equivalent to $\delta_{\mathbb{B}}(T_{q_0}(q),q_0)\leq\delta_{\mathbb{B}}(q,q_0)$. In other words, we have proven the following property of the transformation $T_{q_0}$ of ${\mathbb{B}}$: while all points $q\in{\mathbb{B}}_I$ are fixed, all points $q\in{\mathbb{B}}\setminus{\mathbb{B}}_I$ are attracted to $q_0$ with respect to the Poincar\'e distance. \end{document}
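As a standalone numerical spot-check of the last theorem (again an added illustration, not taken from the paper), the Python sketch below samples points of ${\mathbb{B}}$ and verifies the inequality $|M_{q_0}(T_{q_0}(q))|\leq|M_{q_0}(q)|$, the sign condition $\langle T_{q_0}(q)-q,q_0\rangle\geq 0$ used in the proof, and the equality case on the slice ${\mathbb{B}}_I$ containing $q_0$. The quaternion helpers repeat those of the earlier sketch so that the script runs on its own; the pole and sample points are arbitrary choices.
\begin{verbatim}
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (x0, x1, x2, x3) = x0 + x1 i + x2 j + x3 k
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return np.array([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

qconj = lambda q: q * np.array([1.0, -1.0, -1.0, -1.0])
qinv  = lambda q: qconj(q) / (q @ q)
one   = np.array([1.0, 0.0, 0.0, 0.0])

def M(q0, q):
    # classical Moebius transformation (1 - q conj(q0))^{-1} (q - q0)
    return qmul(qinv(one - qmul(q, qconj(q0))), q - q0)

def T(q0, q):
    # T_{q0}(q) = (1 - q q0)^{-1} q (1 - q q0)
    return qmul(qinv(one - qmul(q, q0)), qmul(q, one - qmul(q, q0)))

q0 = np.array([0.3, 0.4, 0.0, 0.0])     # a non-real pole lying in the slice B_i

rng = np.random.default_rng(2)
for _ in range(5):
    q = rng.uniform(-0.45, 0.45, 4)     # generic point of B, generically outside B_i
    print(np.linalg.norm(M(q0, T(q0, q))) <= np.linalg.norm(M(q0, q)) + 1e-12,
          (T(q0, q) - q) @ q0 >= -1e-12)          # <T_{q0}(q) - q, q0> >= 0

q = np.array([-0.2, 0.6, 0.0, 0.0])     # a point of the same slice B_i: T fixes it
print(np.isclose(np.linalg.norm(M(q0, T(q0, q))), np.linalg.norm(M(q0, q))))
\end{verbatim}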
EXPONENTIAL MODELS
Recent questions in Exponential models

Ben Shaver 2022-01-06 Answered
Which set of ordered pairs could be generated by an exponential function?
A. (1, 1), (2, 1/2), (3, 1/3), (4, 1/4)
B. (1, 1), (2, 1/4), (3, 1/9), (4, 1/16)
C. (1, 1/2), (2, 1/4), (3, 1/8), (4, 1/16)
D. (1, 1/2), (2, 1/4), (3, 1/6), (4, 1/8)

zakinutuzi 2021-12-31 Answered
Furthermore, why is it that \(e^{x}\) is used in exponential modelling? Why aren't other exponential functions used, i.e. \(2^{x}\), etc.?

David Lewis 2021-12-30 Answered
I have a real-valued number \(y_{t}\). At each time step t, \(y_{t}\) is multiplied by \((1+\epsilon)\) with probability p and multiplied by \((1-\epsilon)\) with probability \(1-p\). What is the expected value of \(y_{t+n}\)? What is the variance?

Ikunupe6v 2021-12-21 Answered
What is the difference between linear growth and exponential growth?

Patricia Crane 2021-12-15 Answered
Find the derivative of \(y=e^{5x}\).

Inyalan0 2021-12-09 Answered
Solve \(e^{x}=0\).

Julia White 2021-12-09 Answered
Differentiate, please: \(y=x^{x}\).

Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. For those that are exponential, find an exponential function that models the data.
x: -1, 0, 1, 2, 3
F(x): 1/2, 1/4, 1/8, 1/16, 1/32

Lennie Carroll 2021-09-20 Answered
A researcher is trying to determine the doubling time for a population of the bacterium Giardia lamblia. He starts a culture in a nutrient solution and estimates the bacteria count every four hours. His data are shown in the table.
Time (hours): 0, 4, 8, 12, 16, 20, 24
Bacteria count (CFU/mL): 37, 47, 63, 78, 105, 130, 173
Use a graphing calculator to find an exponential curve \(f(t)=a\cdot b^{t}\) that models the bacteria population t hours later.

Suman Cole 2021-09-20 Answered
Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. For those that are exponential, find an exponential function that models the data.
x: -1, 0, 1, 2, 3
g(x): 2, 5, 8, 11, 14

avissidep 2021-09-18 Answered
James rents an apartment with an initial monthly rent of $1,600. He was told that the rent goes up 1.75% each year. Write an exponential function that models this situation to calculate the rent after 15 years. Round the monthly rent to the nearest dollar.

he298c 2021-09-16 Answered
A researcher is trying to determine the doubling time of a population of the bacterium Giardia lamblia. He starts a culture in a nutrient solution and estimates the bacteria count every four hours. His data are shown in the table. Use a graphing calculator to find an exponential curve \(f(t)=a\cdot b^{t}\) that models the bacteria population t hours later.
Time (hours): 0, 4, 8, 12, 16, 20, 24
Bacteria count (CFU/mL): 37, 47, 63, 78, 105, 130, 173

DofotheroU 2021-09-13 Answered
The exponential growth models describe the population of the indicated country, A, in millions, t years after 2006:
Canada: \(A=33.1e^{0.009t}\)
Uganda: \(A=28.2e^{0.034t}\)
Use this information to determine whether each statement is true or false. If the statement is false, make the necessary change(s) to produce a true statement. Uganda's growth rate is approximately 3.8 times that of Canada's.

Determine whether the given function is linear, exponential, or neither. For those that are linear functions, find a linear function that models the data. For those that are exponential, find an exponential function that models the data.
x: -1, 0, 1, 2, 3
g(x): 3, 6, 12, 18, 30

Mylo O'Moore 2021-08-16 Answered
The following question considers the Gompertz equation, a modification for logistic growth, which is often used for modeling cancer growth, specifically the number of tumor cells. When does population increase the fastest in the threshold logistic equation \(P'(t)=rP\left(1-\frac{P}{K}\right)\left(1-\frac{T}{P}\right)\)?

Khadija Wells 2021-08-13 Answered
The table shows the annual service revenues \(R_1\) in billions of dollars for the cellular telephone industry for the years 2000 through 2006.
Year: 2000, 2001, 2002, 2003, 2004, 2005, 2006
\(R_1\): 52.5, 65.3, 76.5, 87.6, 102.1, 113.5, 125.5
(a) Use the regression capabilities of a graphing utility to find an exponential model for the data. Let t represent the year, with t=10 corresponding to 2000. Use the graphing utility to plot the data and graph the model in the same viewing window.
(b) A financial consultant believes that a model for service revenues for the years 2010 through 2015 is \(R_2=6+13.9^{0.14t}\). What is the difference in total service revenues between the two models for the years 2010 through 2015?

Phoebe 2021-08-11 Answered
Find the exponential function \(f(x)=a^{x}\) whose graph is given.

nicekikah 2021-08-08 Answered
For the following exercises, use a graphing utility to create a scatter diagram of the data given in the table. Observe the shape of the scatter diagram to determine whether the data is best described by an exponential, logarithmic, or logistic model. Then use the appropriate regression feature to find an equation that models the data. When necessary, round values to five decimal places.
x: 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13
f(x): 13.98, 17.84, 20.01, 22.7, 24.1, 26.15, 27.37, 28.38, 29.97, 31.07, 31.43

illusiia 2021-08-06 Answered
Graph each function and tell whether it represents exponential growth, exponential decay, or neither. \(y=(2.5)^{x}\)

Line 2021-07-04 Answered
Transform the given differential equation or system into an equivalent system of first-order differential equations. \(x''+2x'+26x=82\cos 4t\)
\begin{document} \author{Kahar El-Hussein \\ Department of Mathematics, Faculty of Science and Arts at Al Qurayat,\\ Al Jouf University, KSA \& Forat University, Deir el Zore, Syria\\ E-mail: [email protected], [email protected]} \title{Plancherel Theorem on the Symplectic Group $SP(4,\mathbb{R})$} \maketitle

\begin{abstract}
Let $SL(4,\mathbb{R})$ be the $15$-dimensional connected semisimple Lie group and let $SL(4,\mathbb{R})=KAN$ be its Iwasawa decomposition. Let $\mathbb{R}^{4}\rtimes SL(4,\mathbb{R})$ be the semidirect product of $SL(4,\mathbb{R})$ with the real vector group $\mathbb{R}^{4}$. The goal of this paper is to define the Fourier transform on $SL(4,\mathbb{R})$ in order to obtain the Plancherel theorem on $SL(4,\mathbb{R})$, and hence on $\mathbb{R}^{4}\rtimes SL(4,\mathbb{R})$. Since the symplectic group $SP(4,\mathbb{R})$ is a subgroup of $SL(4,\mathbb{R})$, the Plancherel theorem on $SP(4,\mathbb{R})$, and on its inhomogeneous group, then follows readily. To this end, we also obtain some results on its nilpotent symplectic subgroup.
\end{abstract}

\textbf{Key words}: Semisimple Lie group $SL(4,\mathbb{R})$, Symplectic Lie group $SP(4,\mathbb{R})$, Nilpotent symplectic group, Fourier transform and Plancherel theorem on $SL(4,\mathbb{R})$, Plancherel theorem on $SP(4,\mathbb{R})$ and on its inhomogeneous group.

\textbf{AMS 2000 Subject Classification:} $43A30$ \& $35D05$

\section{\protect Introduction}

\textbf{1. }As is well known, the connected semisimple Lie group $SL(n,\mathbb{R})$ consists of the matrices
\begin{equation}
SL(n,\mathbb{R})=\{A\in GL(n,\mathbb{R});\det A=1\}
\end{equation}
The group $SP(2n,\mathbb{R})$ is the subgroup of $SL(2n,\mathbb{R})$ given by
\begin{equation}
SP(2n,\mathbb{R})=\{g\in SL(2n,\mathbb{R});g\jmath g^{t}=\jmath \}
\end{equation}
where $\jmath $ is the symplectic matrix defined by
\begin{equation}
\jmath =\left(
\begin{array}{cc}
0 & I_{n} \\
-I_{n} & 0
\end{array}
\right)
\end{equation}
and $0$ and $I_{n}$ are the $n\times n$ zero and identity matrices. It is clear that $\det \jmath =1,$ $\jmath ^{2}=-I_{2n},$ and $\jmath ^{t}=\jmath ^{-1}=-\jmath .$ The symplectic group $SP(4,\mathbb{R})$ is $10$-dimensional; an element $X$ of its Lie algebra $\mathfrak{sp}(4,\mathbb{R})$ has the form
\begin{equation}
X=\left(
\begin{array}{cccc}
x_{11} & x_{12} & x_{13} & x_{14} \\
x_{21} & x_{22} & x_{14} & x_{24} \\
x_{31} & x_{32} & -x_{11} & -x_{21} \\
x_{32} & x_{42} & -x_{12} & -x_{22}
\end{array}
\right) ,\quad x_{ij}\in \mathbb{R}
\end{equation}
The goal of this paper is to define the Fourier transform in order to obtain the Plancherel theorem on the group $SP(4,\mathbb{R})$ and on its inhomogeneous group. Therefore, I will define the Fourier transform on $SL(4,\mathbb{R})$, and I will prove its Plancherel theorem. Besides, I will establish an existence theorem and the hypoellipticity of certain partial differential operators on its nilpotent symplectic subgroup.

\section{\protect Notation and Results}

\textbf{2.1. 
}In the following and far away from the representations theory of Lie groups we use the Iwasawa decomposition of $SL(4,\mathbb{R}),$ to define the Fourier transform and to get the Plancherel formula on the connected real semisimple Lie group $SL(4,\mathbb{R}).$ Therefore let $SL(4, \mathbb{R})$\ be the complex Lie group, which is \begin{equation} SL(4,\mathbb{R})=\{\left( A= \begin{array}{cccc} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{array} \right) :\text{ }a_{ij}\in \mathbb{R},1\leq i,j\leq 4\text{ \textit{and} } \det A=1\} \end{equation} Let $G=SL(4,\mathbb{R})=KNA$ be the Iwasawa decomposition of $G$, where \begin{eqnarray} K &=&SO(4) \notag \\ N &=&\{\text{ }\left( \begin{array}{cccc} 1 & x_{1} & x_{3} & x_{6} \\ 0 & 1 & x_{2} & x_{5} \\ 0 & 0 & 1 & x_{4} \\ 0 & 0 & 0 & 1 \end{array} \right) _{i}x_{i}\in \mathbb{R}\text{ },1\leq i\leq 6\} \notag \\ A &=&\{\left( \begin{array}{cccc} a_{1} & 0 & 0 & 0 \\ 0 & a_{1} & 0 & 0 \\ 0 & 0 & a_{1} & 0 \\ 0 & 0 & 0 & a_{1} \end{array} \right) :\text{ }a_{i}\in \mathbb{R}_{+}^{\star },1\leq i\leq 4,a_{1}a_{2}a_{3}a_{4}=1\} \end{eqnarray} Hence every $g\in SL(4,\mathbb{R})$ can be written as $g=kan\in SL(4,\mathbb{R}),$ where $k\in K,$ $a\in A,$ $n\in N.$ We denote by $ L^{1}(SL(4,\mathbb{R}))$ the Banach algebra that consists of all complex valued functions on the group $SL(4,\mathbb{R})$, which are integrable with respect to the Haar measure $dg$ of $SL(4,\mathbb{R})$ and multiplication is defined by convolution product on $SL(4,\mathbb{R})$ , and we denote by $ L^{2}(SL(4,\mathbb{R}))$ the Hilbert space of $SL(4,\mathbb{R})$. So we have for any $f\in L^{1}(SL(4,\mathbb{R}))$ and $\phi \in L^{1}(SL(4,\mathbb{R}))$ \begin{equation} \phi \ast f(h)=\int\limits_{G}f(g^{-1}h)\phi (g)dg \end{equation} The Haar measure $dg$ on a connected real semi-simple Lie group $G$ $=SL(4, \mathbb{R)}$, can be calculated from the Haar measures $dn,$ $da$ and $dk$ on $N;A$ and $K;$respectively, by the formula \begin{equation} \int\limits_{SL(4,\mathbb{R})}f(g)dg=\int\limits_{A}\int\limits_{N}\int \limits_{K}f(ank)dadndk \end{equation} \textbf{2.2.} Keeping in mind that $a^{-2\rho }$ is the modulus of the automorphism $n\rightarrow $ $ana^{-1}$ of $N$ we get also the following representation of $dg$ \begin{equation} \int\limits_{SL(4,\mathbb{R})}f(g)dg=\int\limits_{A}\int\limits_{N}\int \limits_{K}f(ank)dadndk=\int\limits_{N}\int\limits_{A}\int \limits_{K}f(nak)a^{-2\rho }dndadk \end{equation} where \begin{equation*} \rho =2^{-1}\sum_{\alpha \rangle 0}m(\alpha )\alpha \end{equation*} and $m(\alpha )$ denotes the multiplicity of the root $\alpha $ and $\rho =$ the dimension of the nilpotent group $N.$ Furthermore, using the relation $ \int\limits_{G}f(g)dg=\int\limits_{G}f(g^{-1})dg,$ we receive \begin{equation} \int\limits_{SL(4,\mathbb{R})}f(g)dg=\int\limits_{K}\int\limits_{A}\int \limits_{N}f(kan)a^{2\rho }dndadk \end{equation} \section{\protect \textbf{Fourier Transform and Plancherel Theorem On } $N$} \textbf{3.1.} Let $N$ be the real group consisting of all matrices of the form \begin{equation} \left( \begin{array}{cccc} 1 & x_{1} & x_{3} & x_{6} \\ 0 & 1 & x_{2} & x_{5} \\ 0 & 0 & 1 & x_{4} \\ 0 & 0 & 0 & 1 \end{array} \right) \end{equation} where $(x_{1},x_{2},$ $x_{3},$ $x_{4},$ $x_{5},x_{6})\in \mathbb{R}^{6}$ . 
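The Iwasawa factorization $g=kan$ recalled above can be computed concretely from a QR decomposition: writing $g=qr$ with $q$ orthogonal and $r$ upper triangular with positive diagonal forces $q\in SO(4)$, and splitting $r$ into its diagonal part $a$ and the unit upper-triangular part $n=a^{-1}r$ gives factors of exactly the types $K$, $A$ and $N$ described in this section. The following short numerical sketch is added purely for illustration; it is not part of the original argument, it assumes the NumPy library, and the helper name \texttt{iwasawa\_kan} is ours.
\begin{verbatim}
import numpy as np

def iwasawa_kan(g, tol=1e-9):
    """Factor g in SL(4, R) as g = k a n (Iwasawa decomposition)."""
    q, r = np.linalg.qr(g)             # g = q r, r upper triangular
    s = np.sign(np.diag(r))            # flip signs so that diag(r) > 0
    q, r = q * s, s[:, None] * r
    a = np.diag(np.diag(r))            # positive diagonal factor (in A)
    n = np.diag(1.0 / np.diag(r)) @ r  # unit upper-triangular factor (in N)
    assert np.allclose(q @ a @ n, g, atol=tol)         # g = k a n
    assert np.allclose(q.T @ q, np.eye(4), atol=tol)   # k orthogonal
    assert np.linalg.det(q) > 0                        # in fact k in SO(4)
    assert np.allclose(np.tril(n, -1), 0, atol=tol)    # n unipotent
    assert np.allclose(np.diag(n), 1, atol=tol)
    return q, a, n

# Example: normalise a random 4 x 4 matrix to determinant one.
m = np.random.randn(4, 4)
g = m / abs(np.linalg.det(m)) ** 0.25
if np.linalg.det(g) < 0:
    g[:, [0, 1]] = g[:, [1, 0]]        # swap two columns to land in SL(4, R)
k, a, n = iwasawa_kan(g)
print(np.prod(np.diag(a)))             # approximately 1, since det(g) = 1
\end{verbatim}
Since $\det g=1$ and $n$ is unipotent, the diagonal entries of $a$ multiply to $1$, so $a$ satisfies the constraint $a_{1}a_{2}a_{3}a_{4}=1$ defining the subgroup $A$ above.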
The group can be identified with the group $(\mathbb{R}^{3}\underset{\rho _{2}}{\rtimes }\mathbb{R}^{2}$ $)\underset{\rho _{1}}{\rtimes }\mathbb{R}$\ be the semidirect product of the real vector groups $\mathbb{R}$, $\mathbb{R} ^{2}$ and $\mathbb{R}^{3}$, where $\rho _{2}$ is the group homomorphism \hspace{0.05in}$\rho _{2}:\mathbb{R}^{2}\rightarrow Aut(\mathbb{R}^{3}),$ which is defined by \begin{equation} \rho _{2}(x_{3},x_{2}\text{ } )(y_{6},y_{5},y_{4})=(y_{6}+x_{3}y_{4},y_{5}+x_{2}y_{4},y_{4}) \end{equation} and $\rho _{1}$ is the group homomorphism\hspace{0.05in}$\rho _{1}:\mathbb{R} \rightarrow Aut(\mathbb{R}^{3}\underset{\rho _{2}}{\rtimes }\mathbb{R}^{2}),$ which is given by \begin{equation} \rho _{1}(x_{1}\text{ } )(y_{6},y_{5},y_{4},y_{3},y_{2})=(y_{6}+x_{1}y_{5},y_{5},y_{4},y_{3}+x_{1}y_{2},y_{2}) \end{equation} where $Aut(\mathbb{R}^{3})$ $(resp.Aut(\mathbb{R}^{3}\underset{\rho _{2}}{ \rtimes }\mathbb{R}^{2}))$ is the group of all automorphisms of $\mathbb{R} ^{3}$ $(resp.(\mathbb{R}^{3}\underset{\rho _{2}}{\ltimes }\mathbb{R}^{2})),$ see $[6].$ Let $L=\mathbb{R}^{3}\times \mathbb{R}^{2}\times \mathbb{R}^{2}\mathbb{ \times R}\times \mathbb{R}$\ be the group with law: \begin{eqnarray} &&X.Y=(x_{6},x_{5},x_{4},x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})(y_{6},y_{5},y_{4},y_{3},y_{2},s_{3},s_{2},y_{1},s_{1}) \notag \\ &=&((x_{6},x_{5},x_{4},x_{3},x_{2},t_{3},t_{2})(\rho _{1}(t_{1})(y_{6},y_{5},y_{4},y_{3},y_{2},s_{3},s_{2}),y_{1}+x_{1},s_{1}+t_{1}) \notag \\ &=&((x_{6},x_{5},x_{4},x_{3})\rho _{2}(t_{3},t_{2})(y_{6}+t_{1}y_{5},y_{5},y_{4},y_{3},s_{3}+t_{1}s_{2},s_{2}),x_{2}+y_{2},y_{1}+x_{1},s_{1}+t_{1}) \notag \\ &=&((x_{6},x_{5},x_{4})+(y_{6}+t_{1}y_{5}+t_{3}y_{4},y_{5}+t_{2}y_{4},y_{4}),x_{3}+y_{3},s_{3}+t_{1}s_{2}, \notag \\ &&s_{2}+t_{2},x_{2}+y_{2},y_{1}+x_{1},s_{1}+t_{1}) \notag \\ &=&(x_{6}+y_{6}+t_{1}y_{5}+t_{3}y_{4},x_{5}+y_{5}+t_{2}y_{4},x_{4}+y_{4},x_{3}+y_{3},t_{3}+s_{3}+t_{1}s_{2}, \notag \\ &&y_{2}+x_{2},s_{2}+t_{2},y_{1}+x_{1},s_{1}+t_{1}) \end{eqnarray} for all $(X,Y)\in L^{2}.$ In this case the group $N$ can be identified with the closed subgroup $\mathbb{R}^{3}\times \{0\}\underset{\rho _{1}}{\rtimes } \mathbb{R}^{2}\mathbb{\times }$ $\{0\}\ \underset{\rho _{1}}{\rtimes } \mathbb{R}$ of $L$ and $B$ with the closed subgroup $\mathbb{R}^{3}\times \mathbb{R}^{2}\times \{0\}\mathbb{\times R}\times \{0\}$\ of $L,$ where $B= \mathbb{R}^{3}\times \mathbb{R}^{2}\mathbb{\times }$ $\mathbb{R}$ the group, which is the direct product of the real vector groups $\mathbb{R}$, $\mathbb{ R}^{2}$ and $\mathbb{R}^{3}$ Let $C^{\infty }(N),$ $\mathcal{D}(N),$ $\mathcal{D}^{\prime }(N),$ $ \mathcal{E}^{\prime }(N)$ be the space of $C^{\infty }$- functions, $ C^{\infty }$ with compact support, distributions and distributions with compact support on $N$ respectively$.$We denote by $L^{1}(N)$ the Banach algebra that consists of all complex valued functions on the group $N$, which are integrable with respect to the Haar measure of $N$ and multiplication is defined by convolution on $N$, and we denote by$L^{2}(N)$ the Hilbert space of $N$. \newline \textbf{Definition 3.1.}\textit{\ For every }$f\in C^{\infty }(N)$, \textit{ one can define function} $\widetilde{f}\in C^{\infty }(L)$ \textit{as follows:} \begin{equation} \widetilde{f}(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})=f(\rho _{1}(x_{1})(\rho _{2}(x_{3},x_{2})(x),t_{3}+x_{3},t_{2}+x_{2}),t_{1}) \end{equation} \textit{for all} $(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})\in L,$ here $ x=(x_{6},x_{5},x_{4})\in \mathbb{R}^{3}.$ \textbf{Remark 3.1. 
}\textit{The function} $\widetilde{f}$ \textit{is invariant in the following sense:} \begin{eqnarray} &&\widetilde{f}((\rho _{1}(h)((\rho _{2}(r,k)(x),x_{3}-r,x_{2}-k,t_{3}+r,t_{2}+k),x_{1}-h,t_{1}+h) \notag \\ &=&\widetilde{f}(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \end{eqnarray} \textit{for any} $(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})\in L$\textit{, }$ h\in \mathbb{R}$ \textit{and }$(r,k)\in \mathbb{R}$, where $ x=(x_{6},x_{5},x_{4})\in \mathbb{R}^{3}$. \textit{So every function} $\psi (x,x_{3},x_{2},x_{1})$ \textit{on} $N$\textit{\ extends uniquely as an invariant function} $\widetilde{\psi } (x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) $ \textit{on} $L.$ \textbf{Theorem 3.1.}\textit{\ For every function} $F\in C^{\infty }(L)$ \textit{invariant in sense} $(16)$ \textit{and for every }$\varphi \mathcal{ \in }$ $\mathcal{D}(N)$, \textit{we have} \begin{equation} u\text{ }\ast F(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})=u\text{ }\ast _{c}F(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \end{equation} \textit{for every} $(X,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})$ $\in L$, \textit{where} $\ast $\textit{\ signifies the convolution product on} $N$\ \textit{with respect the variables} $(x,t_{3},t_{2},t_{1})$\ \textit{and} $ \ast _{c}$\textit{signifies the commutative convolution product on} $B$ \textit{with respect the variables }$(x,x_{3},x_{2},x_{1}).$ \textit{Proof}\textbf{:}\textit{\ }\textbf{\ }In fact we have\ \begin{eqnarray} &&\varphi \ast F(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \notag \\ &=&\int\limits_{N}F\left[ (y,y_{3},y_{2},s)^{-1}(X,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})\right] u(y,y_{3},y_{2},s)dydy_{3}dy_{2}ds \notag \\ &=&\int\limits_{N}F\left[ (\rho _{1}(s^{-1})(y,y_{3},y_{2})^{-1},-s)(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \right] u(y,y_{3},y_{2},s)dydy_{3}dy_{2}ds \notag \\ &=&\int\limits_{N}F[(\rho _{1}(s^{-1})((\rho _{2}(y_{3},y_{2})^{-1}((-y)+(x))),x_{3},x_{2},t_{3}-y_{3},t_{2}-y_{2}),x_{1},t_{1}-s)] \notag \\ &&u(y,y_{3},y_{2},s)dydy_{3}dy_{2}ds \end{eqnarray} Since $F$ is invariant in sense $(16),$ then for every $ (x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1})\in L$ we get \begin{eqnarray} &&\varphi \ast F(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \notag \\ &=&\int\limits_{N}F[(\rho _{1}(s^{-1})(\rho _{2}(y_{3},y_{2})^{-1}(-y+x),x_{3},x_{2},t_{3}-y_{3},t_{2}-y_{2}), \notag \\ &&x_{1},t_{1}-s)]u(y,y_{3},y_{2},s)dydy_{3}dy_{2}ds \notag \\ &=&\int\limits_{N}F\left[ x-y,x_{3}-y_{3},x_{2}-y_{2},t_{3},t_{2},x_{1}-s,t \right] u(y,y_{3},y_{2},s)dydy_{3}dy_{2}ds \notag \\ &=&\varphi \ast _{c}F(x,x_{3},x_{2},t_{3},t_{2},x_{1},t_{1}) \end{eqnarray} \textbf{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } \textbf{\ \ \ \ \ \ \ \ \ \ } As in $[6]$, we will define the Fourier transform on $G$. 
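Before doing so, it is worth noting that the maps $\rho _{2}$ and $\rho _{1}$ of equations $(12)$ and $(13)$ are group homomorphisms into the corresponding automorphism groups, i.e. $\rho _{i}(s+t)=\rho _{i}(s)\circ \rho _{i}(t)$, which is what makes the semidirect products defining $N$ and $L$ well defined; each $\rho _{i}(x)$ is linear and invertible, hence an automorphism of the underlying vector group. The following short sketch, added here only as an illustration and assuming the NumPy library, checks this numerically.
\begin{verbatim}
import numpy as np

def rho2(x3, x2, y):
    # Equation (12): rho_2(x3, x2) acting on (y6, y5, y4) in R^3.
    y6, y5, y4 = y
    return np.array([y6 + x3 * y4, y5 + x2 * y4, y4])

def rho1(x1, y):
    # Equation (13): rho_1(x1) acting on (y6, y5, y4, y3, y2) in R^3 x R^2.
    y6, y5, y4, y3, y2 = y
    return np.array([y6 + x1 * y5, y5, y4, y3 + x1 * y2, y2])

rng = np.random.default_rng(0)
for _ in range(1000):
    a3, a2, b3, b2, a1, b1 = rng.normal(size=6)
    u, v = rng.normal(size=3), rng.normal(size=5)
    # rho_2(a + b) = rho_2(a) o rho_2(b) on R^3
    assert np.allclose(rho2(a3 + b3, a2 + b2, u),
                       rho2(a3, a2, rho2(b3, b2, u)))
    # rho_1(a + b) = rho_1(a) o rho_1(b) on R^3 x R^2
    assert np.allclose(rho1(a1 + b1, v), rho1(a1, rho1(b1, v)))
print("rho_2 and rho_1 are additive in their parameters, as required.")
\end{verbatim}
We now return to the Fourier transform on $N$.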
Therefore let $ \mathcal{S}(N)$ be the Schwartz space of $N$ which\hspace{0.05in}can be considered as the Schwartz space of $\mathcal{S}(B),$ and let $\mathcal{S} ^{\prime }(N)$ be the space of all tempered distributions on $N.$ \textbf{Definition 3.2.}\textit{\ If }$f\in $\textit{\ }$\mathcal{S}(N),$ \textit{\ one can define its Fourier transform }$\mathcal{F}f$\textit{\ \hspace{0.05in}by the Fourier transform on its vector group}: \begin{equation} \mathcal{F}f\text{ \ }(\xi )=\int\limits_{N}f(X)\text{ }e^{-\text{ }i\text{ } \langle \xi \text{,}X\rangle }\text{ }dX \end{equation} \textit{for any} $\xi =(\xi _{6},\xi _{5},\xi _{4},\xi _{3},\xi _{2},\xi _{1})\in \mathbb{R}^{6},$ and $X=(x_{6},x_{5},x_{4},x_{3},x_{2},x_{1})\in \mathbb{R}^{6}$, \textit{where} $\langle \xi $,$X\rangle =\xi _{6}x_{6}+\xi _{5}x_{5}+\xi _{4}x_{4}+\xi _{3}x_{3}+\xi _{2}x_{2}+\xi _{1}x_{1}$ \textit{ and} $dX=dx_{6}dx_{5}dx_{4}dx_{3}dx_{2}dx_{1}$\textit{is the Haar measure on} $N$. \textit{The mapping}\ $f$ \ $\rightarrow $ \ $\mathcal{F}f$ \ \textit{ is isomorphism of the topological vector space} \ $\mathcal{S}(N)$ \textit{ onto} $\mathcal{S}(\mathbb{R}^{6}).$ \textbf{Theorem 3.2. }\textit{The Fourier transform }$\mathcal{F}$ \textit{ satisfies :} \begin{equation} \overset{\vee }{\varphi }\ast f(0)=\int\limits_{\mathbb{R}^{4}}\mathcal{F} f(\xi )\overline{\mathcal{F}u(\xi )}d\xi \end{equation} \textit{for every }$f\in S(N)$\textit{\ and }$\varphi \in S(N),$ \textit{ where\ }$\overset{\vee }{\varphi }(X)=\overline{u(X^{-1})},$\textit{\hspace{ 0.05in}}$\xi =(\xi _{6},\xi _{5},\xi _{4},\xi _{3},\xi _{2},\xi _{1}),$ \textit{\ }$d\xi =d\xi _{1}d\xi _{2}d\xi _{3}d\xi _{4}d\xi _{5}d\xi _{6},$ \textit{\ is the Lebesgue\ measure on }$\mathbb{R}^{6},$\textit{\ and } $ \ast $\textit{\ denotes the convolution product on }$N$\textit{.} \textit{Proof:\ }By the classical Fourier transform, we have: \begin{eqnarray} &&\overset{\vee }{\varphi }\ast f(0)=\int\limits_{\mathbb{R}^{4}}\mathcal{F}( \overset{\vee }{\varphi }\ast f)(\xi )d\xi \notag \\ &=&\int\limits_{\mathbb{R}^{6}}\int\limits_{N}\overset{\vee }{\varphi }\ast f(X)\text{ }e^{-i\left\langle \xi ,X\right\rangle }\text{ }dXd\xi \notag \\ &=&\int\limits_{\mathbb{R}^{6}}\int\limits_{N}\int\limits_{N}f(YX)\overline{ u(Y)}e^{-i\left\langle \xi ,X\right\rangle }\text{ }dY\text{ }dXd\xi . 
\end{eqnarray} By change of variable $YX=X^{\prime }$ with $ Y=(x_{6},x_{5},x_{4},x_{3},x_{2},x_{1})$ and $X^{\prime }=(y_{6},y_{5},y_{4},y_{3},y_{2},y_{1})$, we get \begin{eqnarray*} X &=&Y^{-1}X^{\prime }=(x_{6},x_{5},x_{4},x_{3},x_{2},x_{1})^{-1}(y_{6},y_{5},y_{4},y_{3},y_{2},y_{1}) \\ &=&(y_{6}-x_{6}+x_{1}x_{5}-x_{1}y_{5}-x_{1}x_{2}x_{4}+x_{3}x_{4}-x_{3}y_{4}+x_{1}x_{2}y_{4},y_{5}-x_{5}+x_{2}y_{4}-x_{2}x_{4}, \\ &&x_{4}+y_{4},y_{3}-x_{3}-x_{1}y_{2}+x_{1}x_{2},y_{2}-x_{2},y_{1}-x_{1}) \end{eqnarray*} and \begin{eqnarray*} &&-i\left\langle \xi ,X\right\rangle \\ &=&-i\left\langle \xi ,Y^{-1}X^{\prime }\right\rangle \\ &=&-i[(y_{6}-x_{6}+x_{1}x_{5}-x_{1}y_{5}-x_{1}x_{2}x_{4}+x_{3}x_{4}-x_{3}y_{4}+x_{1}x_{2}y_{4})\xi _{6}+(y_{5}-x_{5}+x_{2}y_{4})\xi _{5} \\ &&-x_{2}x_{4}\xi _{5}+(y_{4}-x_{4})\xi _{4}+(y_{3}-x_{3}-x_{1}y_{2}+x_{1}x_{2})\xi _{3}+(y_{2}-x_{2})\xi _{2}+(y_{1}-x_{1})\xi _{1}] \\ &=&-i[(x_{6}\xi _{6}-y_{6}\xi _{6})+(-x_{2}x_{4}\xi _{6}+x_{2}y_{4}\xi _{6}-y_{2}\xi _{3}+x_{2}\xi _{3}+x_{5}\xi _{6}-y_{5}\xi _{6}-\xi _{1})x_{1}-y_{1}\xi _{1} \\ &&+(y_{5}\xi _{5}-x_{5}\xi _{5})+(y_{4}\xi _{5}-x_{4}\xi _{5}-\xi _{2})x_{2}+y_{2}\xi _{2}+y_{3}\xi _{3}+(x_{4}\xi _{6}-y_{4}\xi _{6}-\xi _{3})x_{3}+(y_{4}-x_{4})\xi _{4} \end{eqnarray*} So we obtain \begin{eqnarray*} &&e^{-i(y_{6}-x_{6}+x_{1}x_{5}-x_{1}y_{5}-x_{1}x_{2}x_{4}+x_{3}x_{4}-x_{3}y_{4}+x_{1}x_{2}y_{4})\xi _{6}} \text{ }e^{-i((y_{5}-x_{5}+x_{2}y_{4}-x_{2}x_{4})\xi _{5}+(y_{4}-x_{4})\xi _{4})} \\ &&e^{-i((y_{3}-x_{3}-x_{1}y_{2}+x_{1}x_{2})\xi _{3}+(y_{2}-x_{2})\xi _{2}+(y_{1}-x_{1})\xi _{1})}\text{ } \\ &=&e^{-i((x_{6}\xi _{6}-y_{6}\xi _{6})+(-x_{2}x_{4}\xi _{6}+x_{2}y_{4}\xi _{6}-y_{2}\xi _{3}+x_{2}\xi _{3}+x_{5}\xi _{6}-y_{5}\xi _{6}-\xi _{1})x_{1}-y_{1}\xi _{1})}\text{ } \\ &&e^{-i((y_{5}\xi _{5}-x_{5}\xi _{5})+(y_{4}\xi _{5}-x_{4}\xi _{5}-\xi _{2})x_{2}+y_{2}\xi _{2})-i(y_{3}\xi _{3}+(x_{4}\xi _{6}-y_{4}\xi _{6}-\xi _{3})x_{3}+(y_{4}-x_{4})\xi _{4})}\text{ } \end{eqnarray*} By the invariance of the Lebesgue,s measures $d\xi _{1},d\xi _{2}$ \hspace{ 0.05in}and $d\xi _{3}$, we get \begin{eqnarray*} &&\overset{\vee }{\varphi }\ast f(0)=\int\limits_{N}\int\limits_{N}\int\limits_{\mathbb{R}^{6}}f(X^{\prime })e^{-i((x_{6}\xi _{6}-y_{6}\xi _{6})+(-x_{2}x_{4}\xi _{6}+x_{2}y_{4}\xi _{6}-y_{2}\xi _{3}+x_{2}\xi _{3}+x_{5}\xi _{6}-y_{5}\xi _{6}-\xi _{1})x_{1}+y_{1}\xi _{1})} \\ &&e^{-i((y_{5}\xi _{5}-x_{5}\xi _{5})+(y_{4}\xi _{5}-x_{4}\xi _{5}-\xi _{2})x_{2}+y_{2}\xi _{2})}e^{-i(y_{3}\xi _{3}+(x_{4}\xi _{6}-y_{4}\xi _{6}-\xi _{3})x_{3}+(y_{4}-x_{4})\xi _{4})}\overline{\varphi (Y)}dY\hspace{ 0.05in}dX^{\prime }d\xi \\ &=&\int\limits_{N}\int\limits_{N}\int\limits_{\mathbb{R}^{6}}f(X^{\prime })e^{-i(y_{6}\xi _{6}-x_{6}\xi _{6}+y_{5}\xi _{5}-x_{5}\xi _{5}+y_{4}\xi _{4}-x_{4}\xi _{4}+y_{3}\xi _{3}-\xi _{3}x_{3}-\xi _{2}x_{2}+y_{2}\xi _{2}-\xi _{1}x_{1}+y_{1}\xi _{1})}\overline{\varphi (Y)}dY\hspace{0.05in} dX^{\prime }d\xi \\ &=&\int\limits_{\mathbb{R}^{6}}\mathcal{F}f(\xi )\text{ }\overline{\mathcal{F }\varphi (\xi )}d\xi \end{eqnarray*} for any $Y$ $=(x_{6},x_{5},x_{4},x_{3},x_{2},x_{1})\in $ $\mathbb{R}^{6}$ and $X^{\prime }=(y_{6},y_{5},y_{4},y_{3},y_{2},y_{1})\in \mathbb{R}^{6},$ where $0=(0,0,0,0,0,0)$ is the identity of $N$. The theorem is proved. \textbf{Corollary 3.1. 
}\textit{In theorem \textbf{3.2.} if\hspace{0.05in}we replace }$\varphi $ \textit{by }$f,$\textit{\ we obtain the Plancherel,s formula on }$N$ \begin{equation} \overset{\vee }{f}\ast f(0)=\int\limits_{N}\left\vert f(X)\right\vert ^{2}dX=\int\limits_{\mathbb{R}^{6}}\left\vert \mathcal{F}f(\xi )\right\vert ^{2}d\xi \end{equation} \section{\textbf{Fourier Transform and Plancherel Theorem On} $SL(4,\mathbb{ R)}$} \textbf{4.1. }Let $\underline{k}$ be the Lie algebra of $K=SO(4)$. Let $ (X_{1},X_{2},X_{3},X_{4})$ a basis of $\underline{k}$ , such that the both operators \begin{equation} \Delta =\sum\limits_{i=1}^{3}X_{i}^{2},D_{q}=\sum\limits_{0\leq l\leq q}\left( -\sum\limits_{i=1}^{3}X_{i}^{2}\right) ^{l} \end{equation} are left and right invariant (bi-invariant) on $K,$ this basis exist see $ [4, $ $p.564)$. For $l\in \mathbb{N} $. Let $D^{l}=(1-\Delta )^{l}$, then the family of semi-norms $\{\sigma _{l}$ , $l\in \mathbb{N} \}$ such that \begin{equation} \sigma _{l}(f)=\int_{K}\left\vert D^{l}f(y)\right\vert ^{2}dy)^{\frac{1}{2}}, \text{ \ \ \ \ \ \ \ \ }f\in C^{\infty }(K) \end{equation} define on $C^{\infty }(K)$ the same topology of the Frechet topology defined by the semi-normas $\left\Vert X^{\alpha }f\right\Vert _{2}$ defined as \begin{equation} \left\Vert X^{\alpha }f\right\Vert _{2}=\int_{K}(\left\vert X^{\alpha }f(y)\right\vert ^{2}dy)^{\frac{1}{2}},\text{ \ \ \ \ \ \ \ \ }f\in C^{\infty }(K) \end{equation} where $\alpha =(\alpha _{1},$.....,$\alpha _{m})\in \mathbb{N} ^{m},$ see $[4,p.565]$ Let $\widehat{K}$ be the set of all irreducible unitary representations of $ K.$ If $\gamma \in \widehat{K}$, we denote by $E_{\gamma }$ the space of the representation $\gamma $ and $d_{\gamma }$\ its dimension then we get\qquad \textbf{Definition 4.1.} \textit{The Fourier transform of a function }$f\in C^{\infty }(K)$\textit{\ is defined as} \begin{equation} Tf(\gamma )=\int\limits_{K}f(x)\gamma (x^{-1})dx \end{equation} \textit{where }$T$\textit{\ is the Fourier transform on} $K$ \textbf{Theorem (\textbf{A. Cerezo }}$[4]$\textbf{) 4.1.} \textit{Let} $f\in C^{\infty }(K),$ \textit{then we have the inversion of the Fourier transform} \begin{equation} f(x)=\sum\limits_{\gamma \in \widehat{K}}d\gamma tr[Tf(\gamma )\gamma (x)] \end{equation} \begin{equation} f(I_{K})=\sum\limits_{\gamma \in \widehat{K}}d\gamma tr[Tf(\gamma )] \end{equation} \textit{and the Plancherel formula} \begin{equation} \left\Vert f(x)\right\Vert _{2}^{2}=\int_{K}\left\vert f(x)\right\vert ^{2}dx=\sum\limits_{\gamma \in \widehat{K}}d_{\gamma }\left\Vert Tf(\gamma )\right\Vert _{H.S}^{2} \end{equation} \textit{for any }$f\in L^{1}(K),$ \textit{where }$I_{K}$ \textit{is the identity element of \ }$K$ \textit{and} $\left\Vert Tf(\gamma )\right\Vert _{H.S}^{2}$ \textit{is the Hilbert- Schmidt norm of the operator} $Tf(\gamma )$\ \textbf{Definition 4.2}. \textit{For any function} $f\in \mathcal{D} (G),$ \textit{we can define a function} $\Upsilon (f)$\textit{on }$G\times K$ $=G\times SO(4)$ \textit{by} \begin{equation} \Upsilon (f)(g,k_{1})=\Upsilon (f)(kna,k_{1})=f(gk_{1})=f(knak_{1}) \end{equation} \textit{for }$g=kna\in G,$ \textit{and} $k_{1}\in K$ . \textit{The restriction of} $\ \Upsilon (f)\ast \psi (g,k_{1})$ \textit{on} $K(G)$ \textit{is }$\Upsilon (f)\ast \psi (g,k_{1})\downarrow _{K(G)}=f(nak_{1})=f(g)\in \mathcal{D}(G),$ \textit{and }$\Upsilon (f)(g,k_{1})\downarrow _{SO(4)}=f(kna)$ $\in \mathcal{D}(G)$ \textbf{Remark 4.1}. 
$\Upsilon (f)$ is invariant in the following sense \begin{equation} \Upsilon (f)(gh,h^{-1}k_{1})=\Upsilon (f)(g,k_{1}) \end{equation} \textbf{Definition 4.3}.\textit{\ If }$f$ \textit{and }$\psi $ \textit{are two functions belong to} $\mathcal{D}(G),$ \textit{then we can define the convolution of } $\Upsilon (f)\ $\textit{and} $\psi $\ \textit{on} $G\times SO(4)$ \textit{as} \begin{eqnarray*} &&\Upsilon (f)\ast \psi (g,k_{1}) \\ &=&\int\limits_{G}\Upsilon (f)(gg_{2}^{-1},k_{1})\psi (g_{2})dg_{2} \\ &=&\int\limits_{SO(4)}\int\limits_{N}\int\limits_{A}\Upsilon (f)(knaa_{2}^{-1}n_{2}^{-1}k^{-1}k_{1})\psi (k_{2}n_{2}a_{2})dk_{2}dn_{2}da_{2} \end{eqnarray*} and so we get \begin{eqnarray*} \Upsilon (f)\ast \psi (g,k_{1}) &\downarrow &_{K(G)}=\Upsilon (f)\ast \psi (I_{K}na,k_{1}) \\ &=&\int\limits_{SO(4)}\int\limits_{N}\int \limits_{A}f(naa_{2}^{-1}n_{2}^{-1}k^{-1}k_{1})\psi (k_{2}n_{2}a_{2})dk_{2}dn_{2}da_{2} \\ &=&\Upsilon (f)\ast \psi (na,k_{1}) \end{eqnarray*} where $g_{2}=k_{2}n_{2}a_{2}$ \textbf{Definition 4.3}. \textit{For }$f\in \mathcal{D}(G)$, \textit{let} $\Upsilon (f)$ \textit{be its associated function}, \textit{we define the Fourier transform of \ }$\Upsilon (f)(g,k_{1})$ \textit{by } \begin{eqnarray} &&\mathcal{F}\Upsilon (f))(I_{SO(4)},\xi ,\lambda ,\gamma ) \notag \\ &=&\int_{N}\int_{A}[\int_{SO(4)}(T\Upsilon (f)(I_{SO(4))}na,k_{1})\gamma (k_{1}^{-1})dk_{1}] \notag \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }\text{ } dadn \notag \\ &=&\int\limits_{SO(3)}\int_{N}\int_{A}[\Upsilon (f)(I_{SO(3)}na,k_{1})]a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }\text{ }\gamma (k_{1}^{-1})dadndk_{1} \end{eqnarray} \textit{where }$\mathcal{F}$ \textit{is the Fourier transform on }$AN$ \textit{and }$T$ \textit{is the Fourier transform on} $SO(4),$ $I_{SO(3)}$ \textit{is the identity element of }$SO(4)$, \textit{and }$ n=(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}),n_{2}=(y_{1},y_{2},y_{3},y_{4},y_{5},y_{6}),\xi =(\xi _{1},\xi _{2},\xi _{3},\xi _{4},\xi _{5},\xi _{6}),a=b_{1}b_{2}b_{3} $ \textit{and} $a=a_{1}a_{2}a_{3}$ \textbf{Plancherel's Theorem 4.2\textit{\textbf{.} }}\textit{For any function\ }$f\in $\textit{\ }$L^{1}(G)\cap $\textit{\ }$L^{2}(G),$\textit{we get } \begin{equation} \int_{G}\left\vert f(g)\right\vert ^{2}dg=\int\limits_{A}\int\limits_{N}\int\limits_{SO(4)}\left\vert f(kna)\right\vert ^{2}dadndk=\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int\limits_{\mathbb{R}^{3}}\int\limits_{\mathbb{R} ^{6}}\left\Vert T\mathcal{F}f(\alpha ,\xi ,\gamma )\right\Vert _{2}^{2}d\alpha d\xi \end{equation} \textit{\ } \begin{equation} f(I_{A}I_{N}I_{S^{1}})=\int\limits_{N}\int\limits_{A}\sum\limits_{\gamma \in \widehat{K}}d_{\gamma }T\mathcal{F}f(\alpha ,\xi ,\gamma )]d\alpha d\xi =\sum\limits_{\gamma \in \widehat{K}}d_{\gamma }\int\limits_{\mathbb{R} ^{3}}\int\limits_{\mathbb{R}^{6}}T\mathcal{F}f(\alpha ,\xi ,\gamma )d\alpha d\xi \end{equation} \textit{where }$,$ $I_{A},I_{N},$ and $I_{K}$ \textit{are the identity elements of} $A$, $N$ \textit{and }$K$ \textit{respectively, }$\mathcal{F}$ \textit{is the Fourier transform on }$AN$ \textit{and }$T$ \textit{is the Fourier transform on} $K,$ \textit{Proof: }First let $\overset{\vee }{f}$ \ be the function defined by \begin{equation} \ \overset{\vee }{f}(kna)=\overline{f((kna)^{-1})}=\overline{ f(a^{-1}n^{-1}k^{-1})} \end{equation} Then we have \begin{eqnarray} &&\int_{SL(4,\mathbb{R})}\left\vert f(g)\right\vert ^{2}dg \notag \\ &=&\Upsilon (f)\ast \overset{\vee }{f}(I_{SO(4)}I_{N}I_{A},I_{SO(4)}) \notag 
\\ &=&\int\limits_{G}\Upsilon (f)(I_{SO(4)}I_{N}I_{A}g_{2}^{-1},I_{SO(4)}) \overset{\vee }{f}(g_{2})dg_{2} \notag \\ &=&\int\limits_{A}\int\limits_{N}\int_{SO(4)}\Upsilon (f)(a_{2}^{-1}n_{2}^{-1}k_{2}^{-1},I_{SO(4)})\overset{\vee }{f} (k_{2}n_{2}a_{2})da_{2}dn_{2}dk_{2} \notag \\ &=&\int\limits_{A}\int\limits_{N} \int_{SO(4)}f(a_{2}^{-1}n_{2}^{-1}k_{2}^{-1})\overline{ f((k_{2}n_{2}a_{2})^{-1})}da_{2}dn_{2}dk_{2} \notag \\ &=&\int\limits_{A}\int\limits_{N}\int_{SO(4)}\left\vert f(a_{2}n_{2}k_{2})\right\vert ^{2}da_{2}dn_{2}dk_{2} \end{eqnarray} Secondly \begin{eqnarray*} &&\Upsilon (f)\ast \overset{\vee }{f}(I_{SO(4)}I_{N}I_{A},I_{SO(4)}) \\ &=&\int\limits_{\mathbb{R}^{18}}\sum\limits_{\gamma \in \widehat{SO(4)} }d\gamma \int_{SO(4)}tr(\Upsilon (f)\ast \overset{\vee }{f} (I_{SO(4)}na,k_{1})\gamma (k_{1}^{-1})) \\ &&a^{-i\alpha }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dadndk_{1}d\lambda d\xi \\ &=&\sum\limits_{\gamma \in \widehat{SO(4)}}d\gamma \int_{SO(4)}\int\limits_{ \mathbb{R}^{18}}tr[\Upsilon (f)\ast \overset{\vee }{f} (I_{SO(3}na,k_{1})dka^{-i\alpha }e^{-\text{ }i\langle \text{ }\xi ,\text{ } n\rangle }\text{ }\gamma (k_{1}^{-1})]dadndk_{1}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}^{27}}\sum\limits_{\gamma \in \widehat{SO(4)} }\int_{SO(4)}tr[\Upsilon (f)(I_{SO(4)}nab^{-1}n_{2}^{-1}k_{2}^{-1},k_{1}) \overset{\vee }{f}(k_{2}n_{2}b)\gamma (k_{1}^{-1})dk_{1}] \\ &&a^{-i\alpha }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dndadn_{2}dbd\lambda d\xi \end{eqnarray*} where \begin{eqnarray*} &&e^{-i(y_{6}-x_{6}+x_{1}x_{5}-x_{1}y_{5}-x_{1}x_{2}x_{4}+x_{3}x_{4}-x_{3}y_{4}+x_{1}x_{2}y_{4})\xi _{6}} \text{ }e^{-i((y_{5}-x_{5}+x_{2}y_{4}-x_{2}x_{4})\xi _{5}+(y_{4}-x_{4})\xi _{4})} \\ &&e^{-i((y_{3}-x_{3}-x_{1}y_{2}+x_{1}x_{2})\xi _{3}+(y_{2}-x_{2})\xi _{2}+(y_{1}-x_{1})\xi _{1})}\text{ } \\ &=&e^{-i(y_{6}\xi _{6}-x_{6}\xi _{6}+y_{5}\xi _{5}-x_{5}\xi _{5}+y_{4}\xi _{4}-x_{4}\xi _{4}+y_{3}\xi _{3}-\xi _{3}x_{3}-\xi _{2}x_{2}+y_{2}\xi _{2}-\xi _{1}x_{1}+y_{1}\xi _{1})} \end{eqnarray*} $ n=(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}),n_{2}=(y_{1},y_{2},y_{3},y_{4},y_{5},y_{6}),\xi =(\xi _{1},\xi _{2},\xi _{3},\xi _{4},\xi _{5},\xi _{6}),a=b_{1}b_{2}b_{3} $ and $a=a_{1}a_{2}a_{3}$ Using the fact that \begin{equation} \int\limits_{A}\int\limits_{N}\int_{SO(4)}f(kna)dadndk=\int\limits_{N}\int \limits_{A}\int_{SO(4)}f(kan)a^{2}dndadk \end{equation} and \begin{eqnarray} &&\int\limits_{\mathbb{R}^{6}}\int\limits_{A}\int\limits_{N} \int_{SO(4)}f(kna)e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dadndkd\xi \notag \\ &=&\int\limits_{\mathbb{R}^{6}}\int\limits_{A}\int\limits_{N} \int_{SO(4)}f(kan)e^{-\text{ }i\langle \text{ }\xi ,\text{ }an_{1}a^{-1} \text{ }\rangle }a^{2}dadndkd\xi \notag \\ &=&\int\limits_{\mathbb{R}^{6}}\int\limits_{A}\int\limits_{N} \int_{SO(4)}f(kan)e^{-\text{ }i\langle \text{ }a\xi a^{-1},\text{ }n\rangle }a^{2}dadndkd\xi \notag \\ &=&\int\limits_{\mathbb{R}^{6}}\int\limits_{A}\int\limits_{N} \int_{SO(4)}f(kan)e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dadndkd\xi \end{eqnarray} Then we have \begin{eqnarray*} &&\Upsilon (f)\ast \overset{\vee }{f}(I_{SO(4)}I_{N}I_{A},I_{SO(4)}) \\ &=&\int\limits_{\mathbb{R}27}\int_{SO(4)}\int\limits_{A}\int\limits_{N}\sum \limits_{\gamma \in \widehat{SO(3)}}d_{\gamma }\int_{SO(4)}f(nab^{-1}n_{2}^{-1}k_{2}^{-1},k_{1})\overset{\vee }{f} (k_{2}n_{2}b)\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in 
\widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ab^{-1}nn_{2}^{-1}k_{2}^{-1},k_{1}) \overset{\vee }{f}(k_{2}n_{2}b)\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{2}^{-1},k_{1})\overset{\vee }{f} (k_{2}n_{2}b)\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&ab^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }nn_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{2}^{-1}k_{1})\overset{\vee }{f} (k_{2}n_{2}b)\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&ab^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }nn_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{1}^{-1})\overset{\vee }{f} (k_{2}n_{2}b)\gamma (k_{1}^{-1})\gamma (k_{2}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }b^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ } n\rangle }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \end{eqnarray*} So, we get \begin{eqnarray*} &&\Upsilon (f)\ast \overset{\vee }{f}(I_{SO(4)}I_{N}I_{A},I_{SO(4)}) \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{1}^{-1})\overset{\vee }{f} (k_{2}n_{2}b)\gamma (k_{1}^{-1})\gamma (k_{2}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }b^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ } n+n_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{1}^{-1})\overline{ f(b{}^{-1}n_{2}^{-1}k_{2}^{-1})}\gamma (k_{1}^{-1})\gamma (k_{2}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }b^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{1}^{-1})\overline{ f(b{}n_{2}k_{2})\gamma (k_{2}^{-1})}\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }b^{-i\lambda }e^{\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }dndadn_{2}da_{2}d\lambda d\xi \\ &=&\int\limits_{\mathbb{R}27}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }\int_{SO(4)}\int_{SO(4)}f(ank_{1}^{-1})\overline{ f(b{}n_{2}k_{2})\gamma (k_{2}^{-1})}\gamma (k_{1}^{-1})dk_{1}dk_{2} \\ &&a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle } \overline{b^{-i\lambda }e^{\text{ }-i\langle \text{ }\xi ,\text{ } n_{2}\rangle }}dndadn_{2}dbd\lambda d\xi \\ &=&\int\limits_{\mathbb{R}^{9}}\sum\limits_{\gamma \in \widehat{SO(4)} }d_{\gamma }T\mathcal{F}f(\lambda ,\xi ,\gamma )\overline{T\mathcal{F} f(\lambda ,\xi ,\gamma )}d\lambda d\xi =\int\limits_{\mathbb{R} ^{9}}\sum\limits_{\gamma \in \widehat{SO(4)}}d_{\gamma }\left\vert T\mathcal{ F}(f)(\lambda ,\xi ,\gamma )\right\vert ^{2}d\lambda d\xi \end{eqnarray*} Hence theorem of Plancherel on $SL(4,\mathbb{R})$ is Proved Let $SP(4,\mathbb{R})=KNA$ be the Iwasawa decomposition of the symplectic $ SP(4,\mathbb{R}).$ My state result is \textbf{Corollary\textit{\ }4.1.\textit{\ }}\textit{For any function }$f\in $\textit{\ }$L^{1}(SP(4,\mathbb{R}))\cap $\textit{\ }$ L^{2}(SP(4,\mathbb{R})),$\textit{we get} \begin{equation} 
\int_{SP(4,\mathbb{R})}\left\vert f(g)\right\vert ^{2}dg=\int\limits_{N}\int\limits_{A}\sum_{\gamma \in \widehat{K}}d_{\gamma }\left\Vert T\mathcal{F}f(\xi ,\lambda ,\gamma )\right\Vert _{H.S}^{2}d\lambda d\xi
\end{equation}
which is the Plancherel theorem on the symplectic group $SP(4,\mathbb{R})$.

\section{Plancherel Theorem on the Group $\mathbb{R}^{4}\rtimes SL(4,\mathbb{R})$}

Let $P=\mathbb{R}^{4}\rtimes _{\rho }SL(4,\mathbb{R})$ be the $19$-dimensional affine group. Let $(v,g)$ and $(v^{\prime },g^{\prime })$ be two elements of $P$; then the product of $(v,g)$ and $(v^{\prime },g^{\prime })$ is given by
\begin{equation}
(v,g)(v^{\prime },g^{\prime })=(v+\rho (g)(v^{\prime }),\ gg^{\prime })=(v+gv^{\prime },\ gg^{\prime })
\end{equation}
for any $(v,v^{\prime })\in \mathbb{R}^{4}\times \mathbb{R}^{4}$ and $(g,g^{\prime })\in SL(4,\mathbb{R})\times SL(4,\mathbb{R})$, where $gv^{\prime }=\rho (g)(v^{\prime })$. To define the Fourier transform on $P$, we introduce the following new group.

\textbf{Definition 5.1}. \textit{Let }$Q=\mathbb{R}^{4}\times SL(4,\mathbb{R})\times SL(4,\mathbb{R})$ \textit{be the group with law}:
\begin{eqnarray}
X\cdot Y &=&(v,h,g)(v^{\prime },h^{\prime },g^{\prime })  \notag \\
&=&(v+gv^{\prime },hh^{\prime },\ gg^{\prime })
\end{eqnarray}
\textit{for all} $X=(v,h,g)\in Q$ \textit{and} $Y=(v^{\prime },h^{\prime },g^{\prime })\in Q.$ Denote by $A=\mathbb{R}^{4}\times SL(4,\mathbb{R})$ the direct product of $\mathbb{R}^{4}$ with the group $SL(4,\mathbb{R}).$ Then the group $A$ can be regarded as the subgroup $\mathbb{R}^{4}\times SL(4,\mathbb{R})\times \{I_{SL(4,\mathbb{R})}\}$ of $Q$, and $P$ can be regarded as the subgroup $\mathbb{R}^{4}\times \{I_{SL(4,\mathbb{R})}\}\times SL(4,\mathbb{R})$ of $Q.$

\textbf{Definition 5.2}. \textit{For any function} $f\in \mathcal{D}(P),$ \textit{we can define a function} $\widetilde{f}$ \textit{on }$Q$ \textit{by}
\begin{equation}
\widetilde{f}(v,g,h)=f(gv,gh)
\end{equation}

\textbf{Remark 5.1. }\textit{The function} $\widetilde{f}$ \textit{is invariant in the following sense}
\begin{equation}
\widetilde{f}(q^{-1}v,g,q^{-1}h)=\widetilde{f}(v,gq^{-1},h)
\end{equation}

\textbf{Theorem 5.1. 
}\textit{For any function }$\psi \in \mathcal{D}(P)$ \textit{and }$\widetilde{f}\in C^{\infty }(Q)$ \textit{invariant in sense }$ (32)$\textit{, we get} \begin{equation} \psi \ast \widetilde{f}(v,h,g)=\widetilde{f}\ast _{c}\psi (v,h,g) \end{equation} \textit{\ where} $\ast $ \textit{signifies the convolution product on} $P$ \textit{with respect the variable} $(v,g),$ \textit{and }$\ast _{c}$\textit{ signifies the convolution product on} $A$ \textit{with respect the variable} $(v,h)$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textit{Proof : }In fact for each $\psi \in \mathcal{D}(P)$ \textit{and }$ \widetilde{f}\in C^{\infty }(Q),$ we have \begin{eqnarray} &&\psi \ast \widetilde{f}(v,h,g) \notag \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\widetilde{f} ((v^{\prime },g^{\prime })^{-1}(v,h,g))\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \notag \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\widetilde{f} [(g^{\prime }{}^{-1}(-v^{\prime }),g^{\prime }{}^{-1})(v,h,g)]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \notag \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\widetilde{f} [(g^{\prime }{}^{-1}(-v^{\prime }),g^{\prime }{}^{-1})(v,h,g)]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\widetilde{f} [(g^{\prime }{}^{-1}(v-v^{\prime }),h,g^{\prime }{}^{-1}g)]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \notag \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\widetilde{f} [v-v^{\prime },hg^{\prime }{}^{-1},g]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \notag \\ &=&\widetilde{f}\ast _{c}\psi (v,h,g) \end{eqnarray} \textbf{Corollary 5.1.\ }\textit{From theorem \textbf{5.1}, the equation \ turns as} \begin{eqnarray} &&\psi \ast \widetilde{f}(v,h,I_{G}) \notag \\ &=&\psi \ast _{c}\widetilde{f}(v,h,I_{SL(3,\mathbb{R})})=\int\limits_{ \mathbb{R}^{2}}\int_{SL(3,\mathbb{R})}\widetilde{f}[v-v^{\prime },hg^{\prime }{}^{-1},g]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \notag \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}f[hg^{\prime }{}^{-1}(v-v^{\prime }),hg^{\prime }{}^{-1}]\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime }=h(f)\ast _{c}\psi (v,h) \end{eqnarray} where \begin{equation} h(f)(v,g)=f(gv,g) \end{equation} \textbf{Definition 5}.\textbf{3}. \textit{Let }$\Upsilon F$ \textit{be the function on }$P\times SL(4,\mathbb{R})$ \textit{defined by} \begin{equation} \Upsilon F(v,(g,k_{1}))=F(v,gk_{1}) \end{equation} \textbf{Definition 5.4. \ }\textit{Let }$\psi \in \mathcal{D}(P)$ \textit{ and }$F\in \mathcal{D}(P),$\textit{then we can define a convolution product on the Affine group }$P$ \textit{as} \begin{eqnarray*} &&\psi \ast _{c}\Upsilon F(v,(g,k_{1})) \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\Upsilon F(v-v^{\prime },(gg^{\prime }{}^{-1},k_{1}))\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{K}\int_{N}\int_{A}F(v-v^{\prime },kna(k^{\prime }n^{\prime }a^{\prime })^{-1}k_{1}))\psi (v^{\prime },k^{\prime }n^{\prime }a^{\prime })dv^{\prime }dk^{\prime }dn^{\prime }da^{\prime } \end{eqnarray*} \textit{where }$g=kna$ \textit{and} $g^{\prime }=k^{\prime }n^{\prime }a^{\prime }$ \textbf{Corollary 5.2. 
}\textit{For any function }$F$ \textit{belongs to} $ \mathcal{D}(P)$ , \textit{we obtain} \begin{eqnarray*} &&\psi \ast _{c}\Upsilon h(F)(v,(g,k_{1})) \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\Upsilon h(F)(v-v^{\prime },(gg^{\prime }{}^{-1},k_{1})\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\Upsilon h(F)(v-v^{\prime },(gg^{\prime }{}^{-1},k_{1})\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}h(F)(v-v^{\prime },gg^{\prime }{}^{-1}k_{1})\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}F(gg^{\prime }{}^{-1}k_{1}(v-v^{\prime }),gg^{\prime }{}^{-1}k_{1})\psi (v^{\prime },g^{\prime })dv^{\prime }dg^{\prime } \end{eqnarray*} \textbf{Corollary 5.3. }\textit{For any function }$F$ \textit{belongs to} $ \mathcal{D}(P)$ , \textit{we obtain}\textbf{\ } \begin{equation} F\ast \Upsilon h(\overset{\vee }{F})(0,(I_{SL(4,\mathbb{R} )},I_{K}))=\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R} )}|f(v,g)|^{2}dgdv=\left\Vert f\right\Vert _{2}^{2} \end{equation} \textit{Proof: }If\ $F\in \mathcal{D}(P),$then we get \begin{eqnarray*} &&F\ast \Upsilon h(\overset{\vee }{F})(0,(I_{SL(4,\mathbb{R})},I_{K})) \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\Upsilon \hbar ( \overset{\vee }{F})[(0-v),(I_{G}g^{-1},I_{S^{1}})]F(v,g)dgdv \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\mathit{\ }\hbar ( \overset{\vee }{F})[(0-v),I_{G}g^{-1}I_{S^{1}}]F(v,g)dgdv \\ &=&\int\limits_{SO(3)}\int\limits_{\mathbb{R}^{3}}\mathit{\ }\hbar (\overset{ \vee }{F})[(-v),g^{-1}]F(v,g)dgdv \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\overset{\vee }{F} [g^{-1}(-v),g^{-1}]F(v,g)dgdv \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\overline{ F[g^{-1}(-v),g^{-1}]^{-1}}F(v,g)dgdv \\ &=&\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\overline{F[v,g]} F(v,g)dgdv=\int\limits_{\mathbb{R}^{4}}\int_{SL(4,\mathbb{R})}\left\vert f(v,g)\right\vert ^{2}dgdv \end{eqnarray*} \textbf{Definition 5.5.}\textit{\ Let} $f\in \mathcal{D}(P),$ \textit{we define its Fourier transform by} \begin{equation} \mathcal{F}_{\mathbb{R}^{4}}T\mathcal{F}f(\eta ,\gamma ,\xi ,\lambda )=\int\limits_{\mathbb{R}^{4}}\int_{A}\int_{N}\int_{K}f(v,kna)e^{-\text{ } i\langle \text{ }\eta ,\text{ }v\rangle }\text{ }\gamma (k^{-1})a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dkdadndv \notag \end{equation} \textit{where} $\mathcal{F}_{\mathbb{R}^{4}}$ \textit{is the Fourier transform on} $\mathbb{R}^{3},$ $kna=g,$ $\eta =(\eta _{1},\eta _{2},\eta _{3},\eta _{4})\in \mathbb{R}^{4},$ $v=(v_{1},v_{2},v_{3},v_{4})\in \mathbb{R }^{4},$ \textit{and} $dv=dv_{1}dv_{2}dv_{3}dv_{4}$ \textit{is the Lebesgue measure on} $\mathbb{R}^{4}$ and \begin{equation} \langle (\eta _{1},\eta _{2},\eta _{3},\eta _{4}),(v_{1},v_{2},v_{3},v_{4})\rangle =\sum_{i=1}^{4}\eta _{i}v_{i} \end{equation} \textbf{Plancherel's Theorem 5.2\textit{\textbf{\textit{.}} }}\textit{For any function }$f\in $\textit{\ }$L^{1}(P)\cap $\textit{\ }$L^{2}(P),$\textit{ we get} \begin{equation} \int_{P}\left\vert f(v,g)\right\vert ^{2}dvdg=\int\limits_{\mathbb{R} ^{13}}\sum_{\gamma \in \widehat{SO(4)}}d_{\gamma }\left\Vert \mathcal{F}_{ \mathbb{R}^{2}}T\mathcal{F}F(\eta ,\gamma ,\xi ,\lambda )\right\Vert ^{2}d\eta d\lambda d\xi \end{equation} \textit{Proof: }Let\ $\Upsilon \hbar (\overset{\vee }{F})$ be the function defined as\textit{\ } \begin{eqnarray} &&\mathit{\ 
}\Upsilon \hbar (\overset{\vee }{F})\text{\ }(v;(g,k_{1}))= \mathit{\ }\hbar (\overset{\vee }{F})\text{\ }(v;gk_{1}) \notag \\ &=&\overset{\vee }{F}(gk_{1}v;gk_{1})=\mathit{\ }\overline{ F(gk_{1}v;gk_{1})^{-1})} \end{eqnarray} then, we have \begin{eqnarray*} &&F\ast \Upsilon \mathit{\ }\hbar (\overset{\vee }{F})\text{\ }(0,(I_{SL(4, \mathbb{R})}I_{N}I_{A},I_{SO(4)})) \\ &=&\int\limits_{\mathbb{R}^{2}}\int\limits_{\mathbb{R}^{3}}\int\limits_{ \mathbb{R}^{3}}\mathcal{F}_{\mathbb{R}^{4}}\mathcal{F(}F\ast \Upsilon \mathit{\ }\hbar (\overset{\vee }{F}\text{\ })(\eta ,(I_{SO(4)}\xi ,\lambda ,I_{SO(4)}))d\lambda d\xi d\eta \\ &=&\int\limits_{\mathbb{R}^{26}}\sum_{\gamma \in \widehat{SO(4)}}d_{\gamma }\int\limits_{SO(4)}\mathcal{F}_{\mathbb{R}^{4}}T\mathcal{F(}F\ast \Upsilon \mathit{\ }\hbar (\overset{\vee }{F}))((\eta ,(I_{SO(4)}na,k_{1}))\gamma (k_{1}^{-1})dk_{1}) \\ &&e^{-\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }a^{-i\lambda }e^{- \text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dadndvd\lambda d\xi d\eta \\ &=&\int\limits_{SL(4,\mathbb{R})}\int\limits_{\mathbb{R}^{30}}\sum_{\gamma \in \widehat{SO(3)}}d_{\gamma }\int\limits_{SO(4)}\Upsilon \mathit{\ }\hbar ( \overset{\vee }{F})((v-w),(I_{SO(4)}nag_{2}^{-1},k_{1}))\gamma (k_{1}^{-1})dk_{1}F(w,g_{2})dg_{2} \\ &&e^{-\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }a^{-i\lambda }e^{- \text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dadndvdwd\lambda d\xi d\eta \\ &=&\int\limits_{\mathbb{R}^{40}}\int\limits_{SO(4)}\sum_{\gamma \in \widehat{ SO(4)}}d_{\gamma }\int\limits_{SO(4)}\hbar (\overset{\vee }{F} )((v-w),(nab^{-1}n_{2}^{-1}k_{2}^{-1}k_{1}))\gamma (k_{1}^{-1})dk_{1} \\ &&F(w,k_{2}n_{2}b)dk_{2}e^{-\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }a^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }dbdn_{2}dadndwdvd\lambda d\xi d\eta \\ &=&\int\limits_{\mathbb{R}^{40}}\int\limits_{SO(4)}\sum_{\gamma \in \widehat{ SO(4)}}d_{\gamma }\int\limits_{SO(4)}\hbar (\overset{\vee }{F} )(v,(ank_{1}))\gamma (k_{1}^{-1})dk_{1}F(w,k_{2}n_{2}b)\gamma (k_{2}^{-1})dk_{2} \\ &&e^{-\text{ }i\langle \text{ }\eta ,\text{ }v+w\rangle }a^{-i\lambda }b^{-i\lambda }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }da_{2}dn_{2}dadndwdvd\lambda d\xi d\eta \end{eqnarray*} So, we get \begin{eqnarray*} &=&\int\limits_{\mathbb{R}^{40}}\int\limits_{SO(4)}\sum_{\gamma \in \widehat{ SO(4)}}d_{\gamma }\int\limits_{SO(4)}(\overset{\vee }{F})(ank_{1}v,ank_{1}) \gamma (k_{1}^{-1})dk_{1}F(w,k_{2}n_{2}a_{2})\gamma (k_{2}^{-1})dk_{2} \\ &&e^{-\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }e^{-\text{ }i\langle \text{ }\eta ,\text{ }w\rangle }e^{-i\langle \lambda ,a\rangle }e^{-i\langle \lambda ,a_{2}\rangle }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }da_{2}dn_{2}dadndwdvd\lambda d\xi d\eta \\ &=&\int\limits_{\mathbb{R}^{40}}\int\limits_{SO(4)}\sum_{\gamma \in \widehat{ SO(4)}}d_{\gamma }\int\limits_{SO(4)}\overline{F(-v,k_{1}^{-1}n^{-1}a^{-1})} F(w,k_{2}n_{2}a_{2})\gamma (k_{1}^{-1})\gamma (k_{2}^{-1})\gamma (k_{2}^{-1}) \\ &&e^{-\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }e^{-\text{ }i\langle \text{ }\eta ,\text{ }w\rangle }a^{-i\lambda }a_{2}^{-i\lambda }e^{-\text{ } i\langle \text{ }\xi ,\text{ }n\rangle }e^{-\text{ }i\langle \text{ }\xi , \text{ }n_{2}\rangle }dk_{1}dk_{2}da_{2}dn_{2}dadndwdvd\lambda d\xi d\eta \\ &=&\int\limits_{\mathbb{R}0}\int\limits_{SO(4)}\sum_{\gamma \in \widehat{ SO(4)}}d_{\gamma 
}\int\limits_{SO(4)}\overline{F(v,k_{1}na)\gamma (k_{1})} F(w,k_{2}n_{2}a_{2})\gamma (k_{2}^{-1})dk_{1}dk_{2} \\ &&e^{\text{ }i\langle \text{ }\eta ,\text{ }v\rangle }e^{-\text{ }i\langle \text{ }\eta ,\text{ }w\rangle }e^{i\langle \lambda ,a\rangle }e^{-i\langle \lambda ,a_{2}\rangle }e^{\text{ }i\langle \text{ }\xi ,\text{ }n\rangle }e^{-\text{ }i\langle \text{ }\xi ,\text{ }n_{2}\rangle }da_{2}dn_{2}dadndwdvd\lambda d\xi d\eta \\ &&\int\limits_{\mathbb{R}^{13}}\sum_{\gamma \in \widehat{SO(4)}}d_{\gamma }\left\Vert \mathcal{F}_{\mathbb{R}^{4}}T\mathcal{F}F(\eta ,\gamma ,\xi ,\lambda )\right\Vert _{H.S}^{2}d\eta d\lambda d\xi \end{eqnarray*} Hence the theorem is proved on the $\mathbb{R}^{4}$ $\rtimes SL(4, \mathbb{R}).$ \textbf{Corollary 5.3.\textit{\ }}\textit{For any function }$f\in $\textit{\ }$L^{1}(\mathbb{R}^{4}\rtimes _{\rho }SP(4,\mathbb{R}))\cap $\textit{\ }$ L^{2}(\mathbb{R}^{4}\rtimes _{\rho }SP(4,\mathbb{R})),$\textit{we get} \begin{equation} \int_{\mathbb{R}^{4}\rtimes _{\rho }SP(4,\mathbb{R})}\left\vert f(v,g)\right\vert ^{2}dvdg=\int\limits_{\mathbb{R}^{4}}\int_{N}\int_{A} \sum_{\gamma \in \widehat{K}}d_{\gamma }\left\Vert \mathcal{F}_{\mathbb{R} ^{2}}T\mathcal{F}F(\eta ,\gamma ,\xi ,\lambda )\right\Vert ^{2}d\eta d\lambda d\xi \end{equation} Which is the Plancherel theorem on the inhomogeneous group $\mathbb{R} ^{4}\rtimes _{\rho }SP(4,\mathbb{R})$ of the symplectic $SP(4,\mathbb{R}),$ where $KNA$ is the Iwasawa decomposition of the symplectic group $SP(4, \mathbb{R}).$ \section{Hypoellipticity of Differential Operators on the Symplectic} \textbf{6.1.} Denote by $SP_{N}$ the nilpotent symplectic subgroup of the group $SP(4,\mathbb{R)}$ consists of all matrices of the form \begin{equation} SP_{N}=\left\{ \left( \begin{array}{cccc} 1 & x & y & z \\ 0 & 1 & z-xt & t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -x & 1 \end{array} \right) ,(x,y,z,t)\in \mathbb{R}^{4}\right\} \end{equation} We denote by $N$ the nilpotent symplectic subgroup of $SP_{N}$, formed by the following matrix \begin{equation} N=\left\{ \left( \begin{array}{cccc} 1 & x & y & z \\ 0 & 1 & z & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -x & 1 \end{array} \right) ,(x,y,z)\in \mathbb{R}^{3}\right\} \end{equation} \ \ \ The group $N$ is isomorphic onto the group $G=\mathbb{R}^{3}\rtimes _{\rho }\mathbb{R}$ semidirect of two groups $\mathbb{R}^{2}$ and $\mathbb{R} $ $,$ where $\rho :\mathbb{R\rightarrow }$ $Aut(\mathbb{R}^{2})$ is the group homomorphism defined by \ $\rho (x)(z,y)=(z+xy,y).$\ The multiplication of two elements $X=(z,y,x)$ and $Y=(c,b,a)$ is given by \begin{eqnarray} &&(z,y,x)(c,b,a) \notag \\ &=&(z+c+xb-ay,y+b,x+a) \notag \\ &=&(z+c+\left\vert \begin{array}{c} x\text{ \ \ }y \\ a\text{ \ \ }b \end{array} \right\vert ,y+b,x+a) \end{eqnarray} $.$Our aim is to prove the solvability and hypoellipticity of the following Lewy operators \begin{equation} L=(-\partial _{x}-i\partial _{y}-2y\partial _{z}+2ix\partial _{z}) \end{equation} \begin{equation} L_{\star }=(-\partial _{x}+i\partial _{y}-2y\partial _{z}-2ix\partial _{z}) \end{equation} \textbf{Definition 6.1. }\textit{One can define a transformation }$\hbar :$ \textit{\ }$\mathcal{D}^{\prime }(\mathbb{R}^{3})\rightarrow \mathcal{D} ^{\prime }(\mathbb{R}^{3})$ \begin{equation} \hbar \Psi (z,y\text{ },x)=\Psi (z-2xy\text{ },y\text{ ,}-x) \end{equation} It results from this definition that $\hbar ^{2}=\hbar $ \textbf{Theorem 6.1. 
}\textit{Let }$Q=\partial _{x}-i\partial _{y}$ \textit{be the Cauchy}-\textit{Riemann operator, then we have for any }$f\in C^{\infty }(\mathbb{R}^{3})$ \begin{equation} (Lf)(z,y,-x)=\hbar Q\hbar f(z,y,-x)\ \end{equation} \newline For the proof of this theorem see $[6].$ \textbf{Corollary 6.1. }\textit{The Lewy operator }$L$\textit{\ is solvable} \textit{Proof: }In fact the Cauchy-Riemann operator $Q=\partial _{x}-i\partial _{y}$ is solvable, because $QC^{\infty }(\mathbb{R} ^{3})=C^{\infty }(\mathbb{R}^{3}),$ and $\hbar C^{\infty }(\mathbb{R} ^{3})=C^{\infty }(\mathbb{R}^{3}).$ So, I have $LC^{\infty }(H)=C^{\infty }(H)$ \textbf{Definition 6.1. }\textit{Let }$G$ \textit{be a Lie group an}\textbf{ \ }\textit{operator \ }$\Gamma :\mathcal{D}^{\prime }(G)\rightarrow \mathcal{ D}^{\prime }(G)$ \textit{is called hypoelliptic if } \begin{equation} \Gamma \varphi \in C^{\infty }(G)\Longrightarrow \varphi \in C^{\infty }(G) \end{equation} \textit{\ for every distribution} $\varphi \in \mathcal{D}^{\prime }(G).$ \textbf{Theorem 6.2. }\textit{The Lewy operator is hypoelliptic } \textit{Proof: }First\textit{\ }the operator\textit{\ }$\hbar $ is hypoelliptic, and the Cauchy- Riemann operator $\partial _{x}-i\partial _{y}$ is hypoelliptic. So if $\varphi \in \mathcal{D}^{\prime }(\mathbb{R}^{3}) \mathcal{\ }$and if $L\varphi (z,y,-x)=\hbar Q\hbar \varphi (z,y,-x)\in C^{\infty }(\mathbb{R}^{3}),$ then I get \begin{eqnarray} L\varphi &\in &C^{\infty }(\mathbb{R}^{3})\Longrightarrow \hbar Q\hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \notag \\ &\Longrightarrow &Q\hbar \varphi \in C^{\infty }(\mathbb{R}^{3})\text{ } \Longrightarrow \hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \notag \\ &\Longrightarrow &\varphi \in C^{\infty }(\mathbb{R}^{3}) \end{eqnarray} \textbf{Theorem 6.3.}\textit{\ Let }$Q_{\star }$\textit{\ be the operator} \begin{equation} L_{\star }=(-\partial _{x}+i\partial _{y}-2y\partial _{z}-2ix\partial _{z}) \end{equation} \begin{equation} Q_{\star }=\partial _{x}+i\partial _{y} \end{equation} \textit{then for every }$\varphi \in C^{\infty }(\mathbb{R}^{3}),$\textit{\ I have} \begin{eqnarray} &&\hbar (\partial _{x}-i\partial _{y})(\partial _{x}+i\partial _{y})\hbar \varphi (z,y,-x)=\hbar \Delta \hbar \varphi (z,y,-x) \notag \\ &=&[(-\partial _{x}-2y\partial _{z})+(-i\partial _{y}+2ix\partial _{z})((-\partial _{x}-2y\partial _{z})+(i\partial _{y}-2ix\partial _{z})\varphi ](z,y,-x) \notag \\ &=&LL_{\star }\varphi (z,y,-x) \end{eqnarray} where $\Delta $ and $L_{\star }$ are the operators \begin{equation} \Delta =\frac{\partial ^{2}}{\partial _{x^{2}}}+\frac{\partial ^{2}}{ \partial _{y^{2}}} \end{equation} \begin{equation} L_{\star }=(i\partial _{y}-2ix\partial _{z})+(-\partial _{x}-2y\partial _{z}) \end{equation} $L_{\star }$ is called the conjugate of the Lewy operator, which can be considered another form of the Lewy operator. As in theorem \textbf{6.2,} we can easily see that $L_{\star }C^{\infty }(\mathbb{R}^{3})=C^{\infty }( \mathbb{R}^{3}).$ The operator $LL_{\star }$ can be regarded as the square of the Lewy operator on the $3-$dimensional Heisenberg group. \textbf{Corollary 6.1. 
}\textit{The operators }$LL_{\star }$ \textit{and }$ L_{\star }$ \textit{are hypoelliptic} \textit{Proof: }From the above we deduce the following \begin{eqnarray} L_{\star }\varphi &\in &C^{\infty }(\mathbb{R}^{3})\Longrightarrow \hbar Q_{\star }\hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \notag \\ &\Longrightarrow &Q_{\star }\hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \text{ }\Longrightarrow \hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \notag \\ &\Longrightarrow &\varphi \in C^{\infty }(\mathbb{R}^{3}) \end{eqnarray} In other hand we have \begin{eqnarray} LL_{\star }\varphi &\in &C^{\infty }(\mathbb{R}^{3})\Longrightarrow \hbar QQ_{\star }\hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \notag \\ &\Longrightarrow &QQ_{\star }\hbar \varphi \in C^{\infty }(\mathbb{R}^{3}) \text{ }\Longrightarrow Q_{\star }\hbar \varphi \in C^{\infty }(\mathbb{R} ^{3}) \notag \\ &\Longrightarrow &\hbar \varphi \in C^{\infty }(\mathbb{R} ^{3})\Longrightarrow \varphi \in C^{\infty }(\mathbb{R}^{3}) \end{eqnarray} \textbf{Theorem 6.4.}\textit{\ The following left invariant differential operators on }$G$ \begin{equation} y\partial _{z}+\partial _{x}+i\partial _{y}+ix\partial _{z} \end{equation} \begin{equation} \frac{\partial ^{2}}{\partial _{x^{2}}}-\frac{\partial ^{2}}{\partial _{y^{2}}}-2x\frac{\partial }{\partial _{z}}\frac{\partial }{\partial y}+2y \frac{\partial }{\partial _{z}}\frac{\partial }{\partial x}+(y^{2}-x^{2}) \frac{\partial ^{2}}{\partial _{z^{2}}}+\frac{\partial ^{2}}{\partial _{z^{2}}} \end{equation} \textit{are solvable and hypoelliptic} \textit{Proof: }The solvability results from theorem \textbf{6.1.} For the hypoellipticity, we consider the mapping $\Gamma :\mathcal{D} ^{\prime }(G)\rightarrow \mathcal{D}^{\prime }(G)$ defined by \begin{equation} \Gamma \phi (z,y,x)=\phi (z-xy,y,x) \end{equation} The operator $\Gamma $ is hypoelliptic and its inverse is \begin{equation} \Gamma ^{-1}\phi (z,y,x)=\phi (z+xy,y,x) \end{equation} thus we get \begin{equation} \Gamma (\partial _{x}+i\partial _{y})\Gamma ^{-1}\phi (z,y,x)=(y\partial _{z}+\partial _{x}+i\partial _{y}+ix\partial _{z})\phi (z,y,x \end{equation} and \begin{eqnarray} &&\Gamma (\frac{\partial ^{2}}{\partial _{x^{2}}}+\frac{\partial ^{2}}{ \partial _{y^{2}}}+\frac{\partial ^{2}}{\partial _{z^{2}}})\Gamma ^{-1}\phi (z,y,x) \\ &=&(\frac{\partial ^{2}}{\partial _{x^{2}}}-\frac{\partial ^{2}}{\partial _{y^{2}}}-2x\frac{\partial }{\partial _{z}}\frac{\partial }{\partial y}+2y \frac{\partial }{\partial _{z}}\frac{\partial }{\partial x}+(y^{2}-x^{2}) \frac{\partial ^{2}}{\partial _{z^{2}}}+\frac{\partial ^{2}}{\partial _{z^{2}}})\phi (z,y,x) \end{eqnarray} Since the operators $\Gamma ,\partial _{x}+i\partial _{y},\frac{\partial ^{2} }{\partial _{x^{2}}}+\frac{\partial ^{2}}{\partial _{y^{2}}}+\frac{\partial ^{2}}{\partial _{z^{2}}}$ and $\Gamma ^{-1}$ are hypoelliptic, then the hypoellipticity of the operators $(y\partial _{z}+\partial _{x}+i\partial _{y}+ix\partial _{z})$ and $\frac{\partial ^{2}}{\partial _{x^{2}}}-\frac{ \partial ^{2}}{\partial _{y^{2}}}-2x\frac{\partial }{\partial _{z}}\frac{ \partial }{\partial y}+2y\frac{\partial }{\partial _{z}}\frac{\partial }{ \partial x}+(y^{2}-x^{2})\frac{\partial ^{2}}{\partial _{z^{2}}}+\frac{ \partial ^{2}}{\partial _{z^{2}}}$ is fulfilled \textbf{Hormander condition for the hypoellipticity } By the sufficient condition of the hypoellipticity given by the Hormander theorem $[3,$ page $11]$, we oblige already quoted the sublaplacian \begin{equation} \frac{\partial ^{2}}{\partial 
\textbf{Hormander condition for the hypoellipticity }

By the sufficient condition for hypoellipticity given by the Hormander theorem $[3,$ page $11]$, we recall the sublaplacian already quoted above
\begin{equation}
\frac{\partial ^{2}}{\partial _{x^{2}}}+\frac{\partial ^{2}}{\partial _{y^{2}}}+4x\frac{\partial }{\partial _{z}}\frac{\partial }{\partial y}-4y\frac{\partial }{\partial _{z}}\frac{\partial }{\partial x}+4(y^{2}+x^{2})\frac{\partial ^{2}}{\partial _{z^{2}}}
\end{equation}
which is hypoelliptic by the Hormander theorem, while the operator
\begin{equation}
\frac{\partial ^{2}}{\partial _{x^{2}}}+\frac{\partial ^{2}}{\partial _{y^{2}}}+4x\frac{\partial }{\partial _{z}}\frac{\partial }{\partial y}-4y\frac{\partial }{\partial _{z}}\frac{\partial }{\partial x}+4(y^{2}+x^{2})\frac{\partial ^{2}}{\partial _{z^{2}}}-4i\frac{\partial }{\partial _{z}}
\end{equation}
is not hypoelliptic because the Hormander condition is not fulfilled. By contrast, all our results, which are obtained by the theorems above, contradict the Hormander conditions for solvability and hypoellipticity.

The basis of the Lie algebra of the group $N$ is given by the following vector fields $Z=\frac{\partial }{\partial _{z}},Y=(x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial y}),$ $X=(-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial x})$. Since $[X,Y]=2Z$, the fields $X,Y,[X,Y]$ span the Lie algebra of $N$, so the Hormander theorem in $[5]$ gives the hypoellipticity of the operator
\begin{equation}
X^{2}+Y^{2}=(x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial y})^{2}+(-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial x})^{2}
\end{equation}
while my results prove the solvability and hypoellipticity of the operator
\begin{equation}
X^{2}+Y^{2}+Z^{2}=(x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial y})^{2}+(-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial x})^{2}+\frac{\partial ^{2}}{\partial _{z^{2}}}
\end{equation}
As is well known, the Laplace operator
\begin{equation}
\Delta =\dsum\limits_{i=1}^{3}\frac{\partial ^{2}}{\partial _{x_{i}^{2}}}
\end{equation}
on the real vector group $\mathbb{R}^{3}$ is solvable and hypoelliptic. This operator, regarded as a left invariant differential operator on the group $N$, is nothing but the operator
\begin{equation}
\Delta _{h_{1}}=\frac{\partial ^{2}}{\partial _{z^{2}}}+(x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial _{y}})^{2}+(-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial _{x}})^{2}
\end{equation}
and, regarded as a right invariant operator on $N$, it is the operator
\begin{equation}
\Delta _{h_{2}}=\frac{\partial ^{2}}{\partial _{z^{2}}}+(-x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial _{y}})^{2}+(y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial _{x}})^{2}
\end{equation}
where $\Delta _{h_{1}}$ (resp. $\Delta _{h_{2}}$) is the left (resp. right) invariant differential operator associated to $\Delta .$ The operators $\Delta _{h_{1}}$ and $\Delta _{h_{2}}$ can be regarded as the Laplacian operators on the $3$-dimensional symplectic nilpotent group $N$. My main result is

\textbf{Theorem 6.5. 
}\textit{The} \textit{Laplace operators }$\Delta _{h_{1}}$ \textit{and} $\Delta _{h_{2}}$ \textit{on the Heisenberg group are hypoelliptic}

\textit{Proof: }We consider the following mappings from $\mathcal{D}^{\prime }(N)\rightarrow \mathcal{D}^{\prime }(N)$ defined by
\begin{equation}
\Lambda \Psi (z,y,x)=\Psi (z+xy,y,x)
\end{equation}
\begin{equation}
\tau \Psi (z,y,x)=\Psi (z+xy,-y,x)
\end{equation}
\begin{equation}
\pi \Psi (z,y,x)=\Psi (z+xy,y,-x)
\end{equation}
These operators have the property of hypoellipticity, because if $\Lambda \Psi (z,y,x)=\Psi (z+xy,y,x)\in C^{\infty }(N),$ then $\Psi (z,y,x)\in C^{\infty }(N)$, and similarly for $\tau $ and $\pi .$ On the other hand, we have
\begin{equation}
\tau \Delta \Lambda \Psi (z,y,x)=\Delta _{h_{1}}\Psi (z,-y,x)
\end{equation}
\begin{equation}
\pi \Delta \Lambda \Psi (z,y,x)=\Delta _{h_{2}}\Psi (z,y,-x)
\end{equation}
Since $\Delta $, $\Lambda ,\tau $ and $\pi $ are hypoelliptic, the hypoellipticity of $\Delta _{h_{1}}$ and $\Delta _{h_{2}}$ is established.

\section{\protect On the Existence Theorem on $N$}

Independently of the proofs in my book $[6]$, I solve here, by a different method, the equation
\begin{equation*}
PC^{\infty }(G)=C^{\infty }(G)
\end{equation*}
For this, I introduce two groups. The first is the group $G\times \mathbb{R},$ which is the direct product of the group $G$ with the real vector group $\mathbb{R}.$ The second is the group $E=\mathbb{R}^{2}\times \mathbb{R}\times \mathbb{R}$ with the law
\begin{equation}
g\cdot g^{\prime }=(X,x,y)(X^{\prime },x^{\prime },y^{\prime })=(X+X^{\prime }+yX^{\prime },x+x^{\prime },y+y^{\prime })
\end{equation}
for all $g=(X,x,y)\in \mathbb{R}^{4},g^{\prime }=(X^{\prime },x^{\prime },y^{\prime })\in \mathbb{R}^{4},X\in \mathbb{R}^{2}$ and $X^{\prime }\in \mathbb{R}^{2}$. In this case the group $G$ can be identified with the closed subgroup $\mathbb{R}^{2}\times \left\{ 0\right\} \times \mathbb{R}$ of $E$, and the group $A=\mathbb{R}^{2}\times \mathbb{R}$, the direct product of the group $\mathbb{R}^{2}$ by the group $\mathbb{R}$, can be identified with the closed subgroup $\mathbb{R}^{2}\times \mathbb{R}\times \left\{ 0\right\} $ of $E$.

\textbf{Definition 7.1. }\textit{For every\ }$\phi \in C^{\infty }(G),$\textit{\ one can define functions\ }$\tau \phi $ \textit{belonging to }$C^{\infty }(G\times \mathbb{R})$ \textit{and }$\iota \phi $ \textit{belonging to} $C^{\infty }(E)$\textit{\ as follows:}
\begin{equation}
\tau \phi (X,x,y)=\phi (x^{-1}X,x+y)
\end{equation}
\begin{equation}
\iota \phi (X,x,y)=\phi (xX,x+y)
\end{equation}
\textit{for any\ }$(X,x,y)\in G\times \mathbb{R}.$ \textit{The functions}\ $\tau \phi $ \textit{and }$\iota \phi $\ \textit{are invariant in the following sense}
\begin{equation}
\tau \phi (kX,x+k,y-k)=\tau \phi (X,x,y)
\end{equation}
\begin{equation}
\iota \phi (kX,x-k,y+k)=\iota \phi (X,x,y)
\end{equation}
Now, I state my theorem

\textbf{Theorem 7.1. 
}\textit{Let }$P$ \textit{be a right invariant differential on} $G,$ \textit{and let} $u$ \textit{be the} \textit{ distribution associated to} $P.$ \textit{Then the equation} \begin{equation} P\phi (X,x)=u\ast \phi (X,x)=\int_{G}\phi ((w,v)^{-1}(X,x)u(X,x)dwdv=\varphi (X,x) \end{equation} \textit{has a solution} $\phi \in C^{\infty }(G),$\textit{for any function }$ \varphi \in $ $C^{\infty }(G),$\textit{\ where }$\ast $ \textit{signifies the convolution product on }$G.$ \textit{Proof: }Consider the operator $P$ as a differential operator $Q$ on the abelian group $A=\mathbb{R}^{2}\times \left\{ 0\right\} \times \mathbb{R} .$ By the theory of partial differential equations with constant coefficients on $\mathbb{R}^{2}\times \mathbb{R},$ then for any function $ g\in C^{\infty }(\mathbb{R}^{2}\times \mathbb{R}),$\ there exist a function $ \psi $ on $\mathbb{R}^{2}\times \mathbb{R},$ such that \begin{equation} Q\psi (X,x)=u\ast _{c}\psi (X,x)=\int_{\mathbb{R}^{3}}\psi (X-a,x-b)u(X,x)dadb=g(X,x) \end{equation} Using the extension of the function $\psi $ on the group $G\times \mathbb{R} , $then for each $f\in C^{\infty }(\mathbb{R}^{2}\times \mathbb{R}),$ I get \begin{equation} =(u\ast _{c}\tau \psi )(X,0,y)\downarrow _{A}=f(X,y) \end{equation} Let $\tau f$ be the extension of the function $f$ \ on the group $ G\times \mathbb{R},$ that means \begin{eqnarray} Q\tau \psi (X,0,y) &\downarrow &_{A}=(u\ast _{c}\tau \psi )(X,0,y)\downarrow _{A}= \\ \tau f(X,0,y) &\downarrow &_{A}=f(X,y) \notag \end{eqnarray} where \begin{eqnarray} (u\ast _{c}\psi )(X,y) &=&\int_{\mathbb{R}^{3}}\psi (X-a,y-b)u(X,y)dadb \\ &=&\tau f(X,0,y)\downarrow _{A}=f(X,y) \end{eqnarray} Let $\top _{x}$ be the right translation of the group $G$, which is defined as \begin{equation} =\top _{x}\Psi (X,t)=\Psi ((X,t)((0,x))=\Psi (X,t+x) \end{equation} Then I have $\tau \psi $ is the solution of the equation \begin{equation} (u\ast \tau \psi )(X,x,0)\downarrow _{G}=f(x^{-1}X,x) \end{equation} In fact, we have \begin{eqnarray} &=&\top _{x}(u\ast _{c}\tau \psi )(X,0,0) \notag \\ &=&(u\ast \tau \psi )(X,x,0)\downarrow _{G}=(u\ast \tau \psi )(X,x,0) \notag \\ &=&\top _{x}\tau f(X,0,0)\downarrow _{G}=\tau f(X,x,0)=f(x^{-1}X,x) \end{eqnarray} So I get, if $\psi $ is the solution of the equation on the abelian group $A= \mathbb{R}^{2}\times \mathbb{R}$ \begin{equation} (Q\psi )(X,y)=f(X,y) \end{equation} on the abelian group $A=\mathbb{R}^{2}\times \mathbb{R}$, then the function $ \tau \psi $ is the solution of the equation \begin{equation} (P\tau \psi )(X,x)=f(x^{-1}X,x) \end{equation} on the group $G.$ Let$\ \widetilde{\psi }(X,x)$ be the function, which is defined as \begin{equation} \widetilde{\psi }(X,x)=\psi (xX,x) \end{equation} In the same way, I have proved by in $[6],$ if $\widetilde{\psi }(X,x)$ is the solution of the equation \begin{equation} Q\text{ }\widetilde{\psi }(X,x)=\widetilde{\varphi }(X,x) \end{equation} on the group $A,$ then the function $\psi $ is the solution of the equation \begin{equation} P\psi (X,x)=\varphi (X,x) \end{equation} on the group $G.$ \textbf{Corollary 7.1. 
}\textit{The Lewy equation is solvable in the sense that, for any }$g\in C^{\infty }(\mathbb{R}^{3})$ \textit{there is a function }$f\in C^{\infty }(\mathbb{R}^{3}),$ \textit{such that }
\begin{equation}
Lf=(-\frac{\partial }{\partial x}-i\frac{\partial }{\partial y}+2i\text{ }(x+i\text{ }y)\frac{\partial }{\partial z})f=g
\end{equation}
The Lewy equation is invariant on the $3$-dimensional nilpotent symplectic group $N=\mathbb{R}^{2}\rtimes _{\rho }\mathbb{R}$. So it is solvable.

\textbf{The example of Hormander for the non-solvability}

Hormander considered in his book $[16,$ p.$156]$ another form of the Lewy operator, namely
\begin{equation}
P(x,D)=(-i\partial _{x}+\partial _{y}-2x\partial _{z}-2iy\partial _{z})
\end{equation}
From it he constructed as his example the operator with real variable coefficients
\begin{equation}
Q(x,D)=P(x,D)\overline{P(x,D)}\overline{P(x,D)}P(x,D)
\end{equation}
and proved that $Q(x,D)$ is unsolvable; see $[16,$ p.$164]$, where $\overline{P(x,D)}$ is the operator defined by
\begin{equation}
\overline{P(x,D)}=(i\partial _{x}+\partial _{y}-2x\partial _{z}+2iy\partial _{z})
\end{equation}
My result is:

\textbf{Theorem 7.2. }\textit{The operator }$Q(x,D)$ \textit{is solvable}

\textit{Proof: }Let $R$ be the following Cauchy-Riemann operator
\begin{equation}
R=-i\partial _{x}+\partial _{y}
\end{equation}
and let $\phi $ be any function infinitely differentiable on $\mathbb{R}^{3}$; then we get
\begin{eqnarray}
\hbar (-i\partial _{x})\hbar \phi \ (z,y,-x) &=&-i\partial _{x}\hbar \phi (z+2yx,y,x)  \notag \\
&=&(-i\frac{d}{dt})_{0}\hbar \phi \ (z+2yx,y,x+t)  \notag \\
&=&(-i\frac{d}{dt})_{0}\phi \ (z-2yt,y,-x-t)  \notag \\
&=&(-i\partial _{x}-2yi\partial _{z})\phi \ (z,y,-x)
\end{eqnarray}
and
\begin{eqnarray*}
\hbar (\partial _{y})\hbar \phi \ (z,y,-x) &=&\partial _{y}\hbar \phi \ (z+2xy,y,x) \\
&=&(\frac{d}{ds})_{0}\hbar \phi \ (z+2yx,y+s,x) \\
&=&(\frac{d}{ds})_{0}\phi \ (z-2sx,y+s,-x) \\
&=&(\partial _{y}-2x\partial _{z})\phi \ (z,y,-x)
\end{eqnarray*}
So we get
\begin{equation}
(P(x,D)\phi )(z,y,-x)=(-i\partial _{x}+\partial _{y}-2x\partial _{z}-2iy\partial _{z})\phi =\hbar R\hbar \phi (z,y,-x)
\end{equation}
In the same manner, I prove
\begin{equation}
(\overline{P(x,D)}\phi )(z,y,-x)=(i\partial _{x}+\partial _{y}-2x\partial _{z}+2iy\partial _{z})\phi =\hbar R_{\star }\hbar \phi (z,y,-x)
\end{equation}
where $R_{\star }$ is the operator
\begin{equation}
R_{\star }=i\partial _{x}+\partial _{y}
\end{equation}
Finally, I find
\begin{equation}
(\overline{P(x,D)}P(x,D)\phi )(z,y,-x)=\hbar R_{\star }R\hbar \phi (z,y,-x)
\end{equation}
\begin{equation}
(P(x,D)\overline{P(x,D)}\phi )(z,y,-x)=\hbar RR_{\star }\hbar \phi (z,y,-x)
\end{equation}
\begin{eqnarray}
&&(P(x,D)\overline{P(x,D)}\,\overline{P(x,D)}P(x,D)\phi )(z,y,-x)  \notag \\
&=&\hbar RR_{\star }R_{\star }R\hbar \phi (z,y,-x)=Q(x,D)\phi (z,y,-x)
\end{eqnarray}
Hence the solvability of the operator $Q(x,D).$ Also the operator
\begin{equation}
X+iY-4iZ=ix\frac{\partial }{\partial _{z}}+i\frac{\partial }{\partial y}-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial x}-4i\frac{\partial }{\partial _{z}}
\end{equation}
is solvable.
This shows the invalidity of the Hormander condition for non-solvability.

\section{\protect \textbf{Conclusion}}

\textbf{8.1.} Any invariant differential operator has the form
\begin{equation}
P=\sum\limits_{\alpha ,\beta }a_{\alpha ,\beta }X^{\alpha }Y^{\beta }
\end{equation}
on the Lie group $G=\mathbb{R}^{2}\times _{\rho }\mathbb{R}$, where $X^{\alpha }=(X_{1}^{\alpha _{1}},X_{2}^{\alpha _{2}}),$ $\alpha _{i}\in \mathbb{N}$ $(1\leq i\leq 2)$, $\beta \in \mathbb{N}$, $X=(X_{1},X_{2})$ and $Y$ are the invariant vector fields on $G$ which form the basis of the Lie algebra \underline{$g$} of $G$, and $a_{\alpha ,\beta }\in \mathbb{C}.$ Any invariant partial differential equation on the $3$-dimensional group $G=\mathbb{R}^{2}\times _{\rho }\mathbb{R}$ is solvable. This shows the invalidity of the Hormander condition for non-existence.

Over the past fifty years there have been many books and published papers by many mathematicians, such as $[2,5,9,17,18,20,21,25]$, all based on careless mathematical ideas, especially the research published after $2006$, the date of the opening of my new way in Fourier analysis on non-abelian Lie groups. Unfortunately, some of those research books and articles were published by famous scientific publishers such as Springer $[3,16]$, Elsevier $[23]$, AMS $[22,27]$, Wiley $[26]$, Taylor \& Francis $[1]$, etc.

\textbf{8.2. Open question. }Can the operator $[3,$ $p.2]$
\begin{eqnarray}
&&(y^{2}-z^{2})\frac{\partial ^{2}u}{\partial x^{2}}+(1+x^{2})(\frac{\partial ^{2}u}{\partial y^{2}}-\frac{\partial ^{2}u}{\partial z^{2}})-xy\frac{\partial ^{2}u}{\partial x\partial y}  \notag \\
&&-\frac{\partial ^{2}(xyu)}{\partial x\partial y}+xz\frac{\partial ^{2}u}{\partial x\partial z}+\frac{\partial ^{2}(xyu)}{\partial x\partial z}
\end{eqnarray}
be solved?

\textbf{8.3. Open question. }Is the operator
\begin{equation}
X^{2}+Y^{2}-4iZ=(x\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial y})^{2}+(-y\frac{\partial }{\partial _{z}}+\frac{\partial }{\partial x})^{2}-4i\frac{\partial }{\partial _{z}}
\end{equation}
solvable and hypoelliptic, and is the operator
\begin{equation}
L-\alpha i\partial _{z}=\partial _{x}+2y\partial _{z}+i\partial _{y}-2ix\partial _{z}-\alpha i\partial _{z},\quad \alpha \in \mathbb{R}
\end{equation}
hypoelliptic?

\textbf{8.4. Open question. }Consider the Kannai operators $[3,$ $p.5]$
\begin{equation}
D_{1}=\frac{\partial }{\partial _{x}}+x\frac{\partial ^{2}}{\partial _{y^{2}}},\text{ }D_{2}=\frac{\partial }{\partial _{x}}-x\frac{\partial ^{2}}{\partial _{y^{2}}}
\end{equation}
I believe that the first can be solved on the $3$-dimensional Heisenberg group $H$ and that the second is hypoelliptic.

\end{document}
3.4: Solve Systems of Equations with Three Variables
By the end of this section, you will be able to:

Determine whether an ordered triple is a solution of a system of three linear equations with three variables
Solve a system of linear equations with three variables by elimination
Solve applications using systems of linear equations with three variables

Before you get started, take this readiness quiz.

Evaluate \(5x−2y+3z\) when \(x=−2, y=−4,\) and \(z=3.\) If you missed this problem, review [link].

Classify the equations as a conditional equation, an identity, or a contradiction and then state the solution. \( \left\{ \begin{array} {l} −2x+y=−11 \\ x+3y=9 \end{array} \right. \)

Classify the equations as a conditional equation, an identity, or a contradiction and then state the solution. \(\left\{ \begin{array} {l} 7x+8y=4 \\ 3x−5y=27 \end{array} \right. \)

In this section, we will extend our work of solving a system of linear equations. So far we have worked with systems of equations with two equations and two variables. Now we will work with systems of three equations with three variables. But first let's review what we already know about solving equations and systems involving up to two variables.

We learned earlier that the graph of a linear equation, \(ax+by=c\), is a line. Each point on the line, an ordered pair \((x,y)\), is a solution to the equation. For a system of two equations with two variables, we graph two lines. Then we can see that all the points that are solutions to each equation form a line. And, by finding what the lines have in common, we'll find the solution to the system.

Most linear equations in one variable have one solution, but we saw that some equations, called contradictions, have no solutions, and for other equations, called identities, all numbers are solutions.

We know when we solve a system of two linear equations represented by a graph of two lines in the same plane, there are three possible cases, as shown.

Similarly, for a linear equation with three variables \(ax+by+cz=d\), every solution to the equation is an ordered triple, \((x,y,z)\), that makes the equation true.

LINEAR EQUATION IN THREE VARIABLES

A linear equation with three variables, where a, b, c, and d are real numbers and a, b, and c are not all 0, is of the form \[ ax+by+cz=d\nonumber \] Every solution to the equation is an ordered triple, \((x,y,z)\) that makes the equation true.

All the points that are solutions to one equation form a plane in three-dimensional space. And, by finding what the planes have in common, we'll find the solution to the system.

When we solve a system of three linear equations represented by a graph of three planes in space, there are three possible cases.

To solve a system of three linear equations, we want to find the values of the variables that are solutions to all three equations. In other words, we are looking for the ordered triple \((x,y,z)\) that makes all three equations true. These are called the solutions of the system of three linear equations with three variables.

SOLUTIONS OF A SYSTEM OF LINEAR EQUATIONS WITH THREE VARIABLES

Solutions of a system of equations are the values of the variables that make all the equations true. A solution is represented by an ordered triple \((x,y,z)\).

To determine if an ordered triple is a solution to a system of three equations, we substitute the values of the variables into each equation. If the ordered triple makes all three equations true, it is a solution to the system.
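The same substitution check is easy to carry out in code. The short Python sketch below is an illustrative addition (it is not part of the original page) and tests the two ordered triples from the example that follows.

```python
# Illustrative check: substitute an ordered triple into each equation of the
# first example system below and report whether every equation is satisfied.
def is_solution(triple, equations):
    x, y, z = triple
    return all(abs(lhs(x, y, z) - rhs) < 1e-9 for lhs, rhs in equations)

system = [
    (lambda x, y, z: x - y + z,       2),   # x - y + z = 2
    (lambda x, y, z: 2*x - y - z,    -6),   # 2x - y - z = -6
    (lambda x, y, z: 2*x + 2*y + z,  -3),   # 2x + 2y + z = -3
]

print(is_solution((-2, -1, 3), system))   # True  -> (-2, -1, 3) is a solution
print(is_solution((-4, -3, 4), system))   # False -> (-4, -3, 4) is not
```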
Determine whether the ordered triple is a solution to the system: \( \left\{ \begin{array} {l} x−y+z=2 \\ 2x−y−z=−6 \\ 2x+2y+z=−3 \end{array} \right. \) ⓐ \((−2,−1,3)\) ⓑ \((−4,−3,4)\) ⓐ ⓑ Determine whether the ordered triple is a solution to the system: \( \left\{ \begin{array} {l} 3x+y+z=2 \\ x+2y+z=−3 \\ 3x+y+2z=4 \end{array} \right. \) ⓐ \((1,−3,2)\) ⓑ \((4,−1,−5)\) ⓐ yes ⓑ no Determine whether the ordered triple is a solution to the system: \( \left\{ \begin{array} {l} x−3y+z=−5 \\ −3x−y−z=1 \\ 2x−2y+3z=1 \end{array} \right. \) ⓐ \((2,−2,3)\) ⓑ \((−2,2,3)\) ⓐ no ⓑ yes To solve a system of linear equations with three variables, we basically use the same techniques we used with systems that had two variables. We start with two pairs of equations and in each pair we eliminate the same variable. This will then give us a system of equations with only two variables and then we know how to solve that system! Next, we use the values of the two variables we just found to go back to the original equation and find the third variable. We write our answer as an ordered triple and then check our results. Solve the system by elimination: \( \left\{ \begin{array} {l} x−2y+z=3 \\ 2x+y+z=4 \\ 3x+4y+3z=−1 \end{array} \right. \) Solve the system by elimination: \( \left\{ \begin{array} {l} 3x+y−z=2 \\ 2x−3y−2z=1 \\ 4x−y−3z=0 \end{array} \right.\) \((2,−1,3)\) Solve the system by elimination: \( \left\{ \begin{array} {l} 4x+y+z=−1 \\ −2x−2y+z=2 \\ 2x+3y−z=1 \end{array} \right. \) \((−2,3,4)\) The steps are summarized here. Write the equations in standard form If any coefficients are fractions, clear them. Eliminate the same variable from two equations. Decide which variable you will eliminate. Work with a pair of equations to eliminate the chosen variable. Multiply one or both equations so that the coefficients of that variable are opposites. Add the equations resulting from Step 2 to eliminate one variable Repeat Step 2 using two other equations and eliminate the same variable as in Step 2. The two new equations form a system of two equations with two variables. Solve this system. Use the values of the two variables found in Step 4 to find the third variable. Write the solution as an ordered triple. Check that the ordered triple is a solution to all three original equations. Solve: \( \left\{ \begin{array} {l} 3x−4z=0 \\ 3y+2z=−3 \\ 2x+3y=−5 \end{array} \right. \) \[ \left\{ \begin{array} {ll} 3x−4z=0 &(1) \\ 3y+2z=−3 &(2) \\ 2x+3y=−5 &(3) \end{array} \right. \nonumber \] We can eliminate \(z\) from equations (1) and (2) by multiplying equation (2) by 2 and then adding the resulting equations. Notice that equations (3) and (4) both have the variables \(x\) and \(y\). We will solve this new system for \(x\) and \(y\). To solve for y, we substitute \(x=−4\) into equation (3). We now have \(x=−4\) and \(y=1\). We need to solve for z. We can substitute \(x=−4\) into equation (1) to find z. We write the solution as an ordered triple. \((−4,1,−3)\) We check that the solution makes all three equations true. \(\begin{array} {lll} {3x-4z=0 \space (1)} &{3y+2z=−3 \space (2)} &{2x+3y=−5 \space (3)} \\ {3(−4)−4(−3)\overset{?}{=} 0} &{3(1)+2(−3)\overset{?}{=} −3} &{2(−4)+3(1)\overset{?}{=} −5} \\ {0=0 \checkmark} &{−3=−3 \checkmark} &{−5=−5 \checkmark} \\ {} &{} &{\text{The solution is }(−4,1,−3)} \end{array}\) Solve: \( \left\{ \begin{array} {l} 3x−4z=−1 \\ 2y+3z=2 \\ 2x+3y=6 \end{array} \right. 
\) \((−3,4,−2)\) When we solve a system and end up with no variables and a false statement, we know there are no solutions and that the system is inconsistent. The next example shows a system of equations that is inconsistent. Solve the system of equations: \( \left\{ \begin{array} {l} x+2y−3z=−1 \\ x−3y+z=1 \\ 2x−y−2z=2 \end{array} \right. \) \[\left\{ \begin{array} {ll} x+2y−3z=−1 &(1) \\ x−3y+z=1 &(2) \\ 2x−y−2z=2 &(3) \end{array} \right.\nonumber \] Use equation (1) and (2) to eliminate z. Use (2) and (3) to eliminate \(z\) again. Use (4) and (5) to eliminate a variable. There is no solution. We are left with a false statement and this tells us the system is inconsistent and has no solution. Solve the system of equations: \( \left\{ \begin{array} {l} x+2y+6z=5 \\ −x+y−2z=3 \\ x−4y−2z=1 \end{array} \right. \) no solution Solve the system of equations: \( \left\{ \begin{array} {l} 2x−2y+3z=6 \\ 4x−3y+2z=0 \\ −2x+3y−7z=1 \end{array} \right. \) When we solve a system and end up with no variables but a true statement, we know there are infinitely many solutions. The system is consistent with dependent equations. Our solution will show how two of the variables depend on the third. Solve the system of equations: \( \left\{ \begin{array} {l} x+2y−z=1 \\ 2x+7y+4z=11 \\ x+3y+z=4 \end{array} \right. \) \[\left\{ \begin{array} {ll} x+2y−z=1 &(1) \\ 2x+7y+4z=11 &(2) \\ x+3y+z=4 &(3) \end{array} \right.\nonumber \] Use equation (1) and (3) to eliminate x. Use equation (1) and (2) to eliminate x again. Use equation (4) and (5) to eliminate \(y\). There are infinitely many solutions. Solve equation (4) for y. Represent the solution showing how x and y are dependent on z. \( \begin{aligned} y+2z &= 3 \\ y &= −2z+3 \end{aligned} \) Use equation (1) to solve for x. \( x+2y−z=1\) Substitute \(y=−2z+3\). \( \begin{aligned} x+2(−2z+3)−z &= 1 \\ x−4z+6−z &= 1 \\ x−5z+6 &= 1 \\ x &= 5z−5 \end{aligned} \) The true statement \(0=0\) tells us that this is a dependent system that has infinitely many solutions. The solutions are of the form (x,y,z)(x,y,z) where \(x=5z−5;\space y=−2z+3\) and z is any real number. Solve the system by equations: \( \left\{ \begin{array} {l} x+y−z=0 \\ 2x+4y−2z=6 \\ 3x+6y−3z=9 \end{array} \right. \) infinitely many solutions \((x,3,z)\) where \(x=z−3;\space y=3;\space z\) is any real number Solve the system by equations: \( \left\{ \begin{array} {l} x−y−z=1 \\ −x+2y−3z=−4 \\ 3x−2y−7z=0 \end{array} \right. \) infinitely many solutions \((x,y,z)\) where \(x=5z−2;\space y=4z−3;\space z\) is any real number Applications that are modeled by a systems of equations can be solved using the same techniques we used to solve the systems. Many of the application are just extensions to three variables of the types we have solved earlier. The community college theater department sold three kinds of tickets to its latest play production. The adult tickets sold for $15, the student tickets for $10 and the child tickets for $8. The theater department was thrilled to have sold 250 tickets and brought in $2,825 in one night. The number of student tickets sold is twice the number of adult tickets sold. How many of each type did the department sell? We will use a chart to organize the information. Number of students is twice number of adults. Rewrite the equation in standard form. \(\begin{aligned} y &= 2x \\ 2x−y &= 0 \end{aligned} \) Use equations (1) and (2) to eliminate z. Use (3) and (4) to eliminate \(y\). Solve for x. \(x=75 \) adult tickets Use equation (3) to find y. 
\(−2x+y=0\) Substitute \(x=75\). \(\begin{aligned} −2(75)+y &= 0 \\ −150+y &= 0 \\ y &= 150\text{ student tickets}\end{aligned} \)

Use equation (1) to find z. \(x+y+z=250\) Substitute in the values \(x=75, \space y=150.\) \(\begin{aligned} 75+150+z &= 250 \\ 225+z &= 250 \\ z &= 25\text{ child tickets} \end{aligned} \)

Write the solution. The theater department sold 75 adult tickets, 150 student tickets, and 25 child tickets.

The community college fine arts department sold three kinds of tickets to its latest dance presentation. The adult tickets sold for $20, the student tickets for $12 and the child tickets for $10. The fine arts department was thrilled to have sold 350 tickets and brought in $4,650 in one night. The number of child tickets sold is the same as the number of adult tickets sold. How many of each type did the department sell?

The fine arts department sold 75 adult tickets, 200 student tickets, and 75 child tickets.

The community college soccer team sold three kinds of tickets to its latest game. The adult tickets sold for $10, the student tickets for $8 and the child tickets for $5. The soccer team was thrilled to have sold 600 tickets and brought in $4,900 for one game. The number of adult tickets is twice the number of child tickets. How many of each type did the soccer team sell?

The soccer team sold 200 adult tickets, 300 student tickets, and 100 child tickets.

Access these online resources for additional instruction and practice with solving a linear system in three variables with no or infinite solutions: Solving a Linear System in Three Variables with No or Infinite Solutions; 3 Variable Application.

Linear Equation in Three Variables: A linear equation with three variables, where a, b, c, and d are real numbers and a, b, and c are not all 0, is of the form \[ax+by+cz=d\nonumber \]

How to solve a system of linear equations with three variables.

The solutions of a system of equations are the values of the variables that make all the equations true; a solution is represented by an ordered triple (x,y,z).

This page titled 3.4: Solve Systems of Equations with Three Variables is shared under a CC BY license and was authored, remixed, and/or curated by OpenStax.
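As an illustrative cross-check (not part of the original page), the theater-ticket system worked above can also be solved numerically; a minimal NumPy sketch:

```python
# Solve the theater-ticket system with NumPy.
# Unknowns: x = adult, y = student, z = child tickets.
#    x +   y +  z = 250    (total tickets sold)
#  15x + 10y + 8z = 2825   (total revenue in dollars)
#  -2x +   y      = 0      (student tickets are twice the adult tickets)
import numpy as np

A = np.array([[1, 1, 1],
              [15, 10, 8],
              [-2, 1, 0]], dtype=float)
b = np.array([250, 2825, 0], dtype=float)

x, y, z = np.linalg.solve(A, b)
print(x, y, z)  # 75.0 150.0 25.0 -- matching the worked solution above
```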
The ratio of the length of a rod and its shadow (Height and Distance; posted by ApexTeam, June 2, 2021)

The ratio of the length of a rod and its shadow is $1: \sqrt{3}$. The angle of elevation of the sun is:
A. 30°
B. 45°
C. 60°
D. 90°

Answer: Option A

Solution (By Apex Team): Let AB be the rod and BC be its shadow, so that $A B: B C=1: \sqrt{3}$. Let $\theta$ be the angle of elevation.
$\begin{array}{l} \therefore \tan \theta=\frac{A B}{B C}=\frac{1}{\sqrt{3}}=\tan 30^{\circ} \\ \left(\because \tan 30^{\circ}=\frac{1}{\sqrt{3}}\right) \\ \therefore \theta=30^{\circ} \end{array}$
∴ Hence the angle of elevation is $30^{\circ}$.
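For readers who want a quick numerical confirmation (not part of the original post), one line of Python reproduces the answer:

```python
# The elevation angle whose tangent is 1/sqrt(3) is 30 degrees.
import math
print(math.degrees(math.atan(1 / math.sqrt(3))))  # approximately 30.0
```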
Upon examining the photographs, I noticed no difference in eye color, but it seems that my move had changed the ambient lighting in the morning and so there was a clear difference between the two sets of photographs! The before photographs had brighter lighting than the after photographs. Regardless, I decided to run a small survey on QuickSurveys/Toluna to confirm my diagnosis of no-change; the survey was 11 forced-choice pairs of photographs (before-after), with the instructions as follows: Several studies have assessed the effect of MPH and d-AMP on tasks tapping various other aspects of spatial working memory. Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al. (2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task. They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum. If you could take a pill that would help you study and get better grades, would you? Off-label use of "smart drugs" – pharmaceuticals meant to treat disorders like ADHD, narcolepsy, and Alzheimer's – are becoming increasingly popular among college students hoping to get ahead, by helping them to stay focused and alert for longer periods of time. But is this cheating? Should their use as cognitive enhancers be approved by the FDA, the medical community, and society at large? Do the benefits outweigh the risks? That study is also interesting for finding benefits to chronic piracetam+choline supplementation in the mice, which seems connected to a Russian study which reportedly found that piracetam (among other more obscure nootropics) increased secretion of BDNF in mice. See also Drug heuristics on a study involving choline supplementation in pregnant rats.↩ Theanine can also be combined with caffeine as both of them work in synergy to increase memory, reaction time, mental endurance, and memory. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules.  A natural option would be to use an excellent green tea brand which constitutes of tea grown in the shade because then Theanine would be abundantly present in it. 
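One natural way to analyze the forced-choice photograph survey described at the start of this passage is a simple two-sided binomial test against chance. The sketch below is purely illustrative: the number of correct before/after identifications is made up, since the survey's actual responses are not reported here.

```python
# Illustrative only: a two-sided binomial test for an 11-pair forced-choice survey.
from scipy.stats import binomtest

n_pairs = 11     # forced-choice before/after photograph pairs, as described above
n_correct = 9    # HYPOTHETICAL count -- the text does not report the actual result

result = binomtest(n_correct, n=n_pairs, p=0.5, alternative='two-sided')
print(result.pvalue)  # a small p-value would suggest judgments differ from chance
```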
Omega-3 fatty acids: DHA and EPA – two Cochrane Collaboration reviews on the use of supplemental omega-3 fatty acids for ADHD and learning disorders conclude that there is limited evidence of treatment benefits for either disorder.[42][43] Two other systematic reviews noted no cognition-enhancing effects in the general population or middle-aged and older adults.[44][45] Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment. Racetams, specifically Piracetam, an ingredient popular in over-the-counter nootropics, are synthetic stimulants designed to improve brain function. Patel notes Piracetam is the granddaddy of all racetams, and the term "nootropic" was originally coined to describe its effects. However, despite its popularity and how long it's been around and in use, researchers don't know what its mechanism of action is. Patel explained that the the most prominent hypothesis suggests Piracetam enhances neuronal function by increasing membrane fluidity in the brain, but that hasn't been confirmed yet. And Patel elaborated that most studies on Piracetam aren't done with the target market for nootropics in mind, the young professional: "There seems to be a growing percentage of intellectual workers in Silicon Valley and Wall Street using nootropics. They are akin to intellectual professional athletes where the stakes and competition is high," says Geoffrey Woo, the CEO and co-founder of nutrition company HVMN, which produces a line of nootropic supplements. Denton agrees. "I think nootropics just make things more and more competitive. The ease of access to Chinese, Russian intellectual capital in the United States, for example, is increasing. And there is a willingness to get any possible edge that's available." Smart drugs, formally known as nootropics, are medications, supplements, and other substances that improve some aspect of mental function. In the broadest sense, smart drugs can include common stimulants such as caffeine, herbal supplements like ginseng, and prescription medications for conditions such as ADHD, Alzheimer's disease, and narcolepsy. These substances can enhance concentration, memory, and learning. A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. 
Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes." The peculiar tired-sharp feeling was there as usual, and the DNB scores continue to suggest this is not an illusion, as they remain in the same 30-50% band as my normal performance. I did not notice the previous aboulia feeling; instead, around noon, I was filled with a nervous energy and a disturbingly rapid pulse which meditation & deep breathing did little to help with, and which didn't go away for an hour or so. Fortunately, this was primarily at church, so while I felt irritable, I didn't actually interact with anyone or snap at them, and was able to keep a lid on it. I have no idea what that was about. I wondered if it might've been a serotonin storm since amphetamines are some of the drugs that can trigger storms but the Adderall had been at 10:50 AM the previous day, or >25 hours (the half-lives of the ingredients being around 13 hours). An hour or two previously I had taken my usual caffeine-piracetam pill with my morning tea - could that have interacted with the armodafinil and the residual Adderall? Or was it caffeine+modafinil? Speculation, perhaps. A house-mate was ill for a few hours the previous day, so maybe the truth is as prosaic as me catching whatever he had. Tyrosine (Examine.com) is an amino acid; people on the Imminst.org forums (as well as Wikipedia) suggest that it helps with energy and coping with stress. I ordered 4oz (bought from Smart Powders) to try it out, and I began taking 1g with my usual caffeine+piracetam+choline mix. It does not dissolve easily in hot water, and is very chalky and not especially tasty. I have not noticed any particular effects from it. Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don't feel so hot, although my conversation and arguments seem as cogent as ever. I'm also having a terrible time focusing on any actual work. At 8 I take another; I'm behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). 
Come 12:30, and I disconsolately note that I don't seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it's just that I don't remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual. Soldiers should never be treated like children; because then they will act like them. However, There's a reason why the 1SG is known as the Mother of the Company and the Platoon Sergeant is known as a Platoon Daddy. Because they run the day to day operations of the household, get the kids to school so to speak, and focus on the minutia of readiness and operational execution in all its glory. Officers forget they are the second link in the Chain of Command and a well operating duo of Team Leader and Squad Leader should be handling 85% of all Soldier issues, while the Platoon sergeant handles the other 15% with 1SG. Platoon Leaders and Commanders should always be present; training, leading by example, focusing on culture building, tracking and supporting NCO's. They should be focused on big business sides of things, stepping in to administer punishment or award and reward performance. If an officer at any level is having to step into a Soldier's day to day lives an NCO at some level is failing. Officers should be junior Officers and junior Enlisted right along side their counterparts instead of eating their young and touting their "maturity" or status. If anything Officers should be asking their NCO's where they should effect, assist, support or provide cover toward intitiatives and plans that create consistency and controlled chaos for growth of individuals two levels up and one level down of operational capabilities at every echelon of command. Bought 5,000 IU soft-gels of Vitamin D-333 (Examine.com; FDA adverse events) because I was feeling very apathetic in January 2011 and not getting much done, even slacking on regular habits like Mnemosyne spaced repetition review or dual n-back or my Wikipedia watchlist. Introspecting, I was reminded of depression & dysthymia & seasonal affective disorder. This is one of the few times we've actually seen a nootropic supplement take a complete leverage on the nootropic industry with the name Smart Pill. To be honest, we don't know why other companies haven't followed suit yet – it's an amazing name. Simple, and to the point. Coming from supplement maker, Only Natural, Smart Pill makes some pretty bold claims regarding their pills being completely natural, whilst maintaining good quality. This is their niche – or Only Natural's niche, for that matter. They create supplements, in this case Smart Pill, with the… Learn More... A record of nootropics I have tried, with thoughts about which ones worked and did not work for me. 
These anecdotes should be considered only as anecdotes, and one's efforts with nootropics a hobby to put only limited amounts of time into due to the inherent limits of drugs as a force-multiplier compared to other things like programming1; for an ironic counterpoint, I suggest the reader listen to a video of Jonathan Coulton's I Feel Fantastic while reading. "Such an informative and inspiring read! Insight into how optimal nutrients improved Cavin's own brain recovery make this knowledge-filled read compelling and relatable. The recommendations are easy to understand as well as scientifically-founded – it's not another fad diet manual. The additional tools and resources provided throughout make it possible for anyone to integrate these enhancements into their nutritional repertoire. Looking forward to more from Cavin and Feed a Brain!!!!!!" L-Alpha glycerylphosphorylcholine or choline alfoscerate, also known as Alpha GPC is a natural nootropic which works both on its own and also in combination with other nootropics. It can be found in the human body naturally in small amounts. It's also present in some dairy products, wheat germ, and in organic meats. However, these dietary sources contain small quantities of GPC, which is why people prefer taking it through supplements. Piracetam is well studied and is credited by its users with boosting their memory, sharpening their focus, heightening their immune system, even bettering their personalities. But it's only one of many formulations in the racetam drug family. Newer ones include aniracetam, phenylpiracetam and oxiracetam. All are available online, where their efficacy and safety are debated and reviewed on message boards and in podcasts. Many people prefer the privacy and convenience of ordering brain boosting supplements online and having them delivered right to the front door. At Smart Pill Guide, we have made the process easier, so you can place your order directly through our website with your major credit card or PayPal. Our website is secure, so your personal information is protected and all orders are completely confidential. Smart pills have huge potential and several important applications, particularly in diagnosis. Smart pills are growing as a highly effective method of endoscopy, particularly for gastrointestinal diseases. Urbanization and rapid lifestyle changes leaning toward unhealthy diets and poor eating habits have led to distinctive increasing lifestyle disorders such as gastroesophageal reflux disease (GERD), obesity, and gastric ulcers. Nootrobox co-founder Geoffrey Woo declines a caffeinated drink in favour of a capsule of his newest product when I meet him in a San Francisco coffee shop. The entire industry has a "wild west" aura about it, he tells me, and Nootrobox wants to fix it by pushing for "smarter regulation" so safe and effective drugs that are currently unclassified can be brought into the fold. Predictably, both companies stress the higher goal of pushing forward human cognition. "I am trying to make a smarter, better populace to solve all the problems we have created," says Nootroo founder Eric Matzner. A Romanian psychologist and chemist named Corneliu Giurgea started using the word nootropic in the 1970s to refer to substances that improve brain function, but humans have always gravitated toward foods and chemicals that make us feel sharper, quicker, happier, and more content. 
Our brains use about 20 percent of our energy when our bodies are at rest (compared with 8 percent for apes), according to National Geographic, so our thinking ability is directly affected by the calories we're taking in as well as by the nutrients in the foods we eat. Here are the nootropics we don't even realize we're using, and an expert take on how they work. Aniracetam is known as one of the smart pills with the widest array of uses. From benefits for dementia patients and memory boost in adults with healthy brains, to the promotion of brain damage recovery. It also improves the quality of sleep, what affects the overall increase in focus during the day. Because it supports the production of dopamine and serotonin, it elevates our mood and helps fight depression and anxiety. Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It's a little amusing to read academic descriptions of caffeine addiction9; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically: Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement. Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185). "It is important to note that Abilify MyCite's prescribing information (labeling) notes that the ability of the product to improve patient compliance with their treatment regimen has not been shown. Abilify MyCite should not be used to track drug ingestion in "real-time" or during an emergency because detection may be delayed or may not occur," the FDA said in a statement. Flow diagram of cognitive neuroscience literature search completed July 2, 2010. Search terms were dextroamphetamine, Aderrall, methylphenidate, or Ritalin, and cognitive, cognition, learning, memory, or executive function, and healthy or normal. Stages of subsequent review used the information contained in the titles, abstracts, and articles to determine whether articles reported studies meeting the inclusion criteria stated in the text. I have personally found that with respect to the NOOTROPIC effect(s) of all the RACETAMS, whilst I have experienced improvements in concentration and working capacity / productivity, I have never experienced a noticeable ongoing improvement in memory. COLURACETAM is the only RACETAM that I have taken wherein I noticed an improvement in MEMORY, both with regards to SHORT-TERM and MEDIUM-TERM MEMORY. 
To put matters into perspective, the memory improvement has been mild, yet still significant; whereas I have experienced no such improvement at all with the other RACETAMS. Hall, Irwin, Bowman, Frankenberger, & Jewett (2005) Large public university undergraduates (N = 379) 13.7% (lifetime) 27%: use during finals week; 12%: use when party; 15.4%: use before tests; 14%: believe stimulants have a positive effect on academic achievement in the long run M = 2.06 (SD = 1.19) purchased stimulants from other students; M = 2.81 (SD = 1.40) have been given stimulants by other studentsb Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'. A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. If you want to make sure that whatever you're taking is safe, search for nootropics that have been backed by clinical trials and that have been around long enough for any potential warning signs about that specific nootropic to begin surfacing. There are supplements and nootropics that have been tested in a clinical setting, so there are options out there. The power calculation indicates a 20% chance of getting useful information. My quasi-experiment has <70% chance of being right, and I preserve a general skepticism about any experiment, even one as well done as the medical student one seems to be, and give that one a <80% chance of being right; so let's call it 70% the effect exists, or 30% it doesn't exist (which is the case in which I save money by dropping fish oil for 10 years). 
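The arithmetic behind that judgment can be laid out explicitly. The sketch below is only one way to organize it, and the dollar figure is hypothetical, since the cost of ten years of fish oil is not stated here.

```python
# A rough value-of-information sketch for the fish-oil decision above.
# The structure and the dollar figure are illustrative assumptions, not from the text.
p_no_effect = 0.30      # from the text: ~30% chance the effect doesn't exist
p_useful_info = 0.20    # from the text: ~20% chance the experiment yields useful information
annual_cost = 60.0      # HYPOTHETICAL cost of a year's fish oil, in dollars
years = 10              # horizon mentioned in the text

# Expected savings are realized only if there is no effect AND the experiment is informative.
expected_savings = p_no_effect * p_useful_info * annual_cost * years
print(expected_savings)  # 36.0 -- to weigh against the cost and effort of running the experiment
```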
On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do. If smart drugs are the synthetic cognitive enhancers, sleep, nutrition and exercise are the "natural" ones. But the appeal of drugs like Ritalin and modafinil lies in their purported ability to enhance brain function beyond the norm. Indeed, at school or in the workplace, a pill that enhanced the ability to acquire and retain information would be particularly useful when it came to revising and learning lecture material. But despite their increasing popularity, do prescription stimulants actually enhance cognition in healthy users? Nor am I sure how important the results are - partway through, I haven't noticed anything bad, at least, from taking Noopept. And any effect is going to be subtle: people seem to think that 10mg is too small for an ingested rather than sublingual dose and I should be taking twice as much, and Noopept's claimed to be a chronic gradual sort of thing, with less of an acute effect. If the effect size is positive, regardless of statistical-significance, I'll probably think about doing a bigger real self-experiment (more days blocked into weeks or months & 20mg dose) The compound is one of the best brain enhancement supplements that includes memory enhancement and protection against brain aging. Some studies suggest that the compound is an effective treatment for disorders like vascular dementia, Alzheimer's, brain stroke, anxiety, and depression. However, there are some side effects associated with Alpha GPC, like a headache, heartburn, dizziness, skin rashes, insomnia, and confusion. (I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.) The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer. Nootropics include natural and manmade chemicals that produce cognitive benefits. These substances are used to make smart pills that deliver results for enhancing memory and learning ability, improving brain function, enhancing the firing control mechanisms in neurons, and providing protection for the brain. College students, adult professionals, and elderly people are turning to supplements to get the advantages of nootropic substances for memory, focus, and concentration. 
With all these studies pointing to the nootropic benefits of some essential oils, it can logically be concluded then that some essential oils can be considered "smart drugs." However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however. Most research on these nootropics suggest they have some benefits, sure, but as Barbara Sahakian and Sharon Morein-Zamir explain in the journal Nature, nobody knows their long-term effects. And we don't know how extended use might change your brain chemistry in the long run. Researchers are getting closer to what makes these substances do what they do, but very little is certain right now. If you're looking to live out your own Limitless fantasy, do your research first, and proceed with caution. Accordingly, we searched the literature for studies in which MPH or d-AMP was administered orally to nonelderly adults in a placebo-controlled design. Some of the studies compared the effects of multiple drugs, in which case we report only the results of stimulant–placebo comparisons; some of the studies compared the effects of stimulants on a patient group and on normal control subjects, in which case we report only the results for control subjects. The studies varied in many other ways, including the types of tasks used, the specific drug used, the way in which dosage was determined (fixed dose or weight-dependent dose), sample size, and subject characteristics (e.g., age, college sample or not, gender). Our approach to the classic splitting versus lumping dilemma has been to take a moderate lumping approach. We group studies according to the general type of cognitive process studied and, within that grouping, the type of task. The drug and dose are reported, as well as sample characteristics, but in the absence of pronounced effects of these factors, we do not attempt to make generalizations about them. The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. (2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of Catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at Codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. 
(2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007). The original "smart drug" is piracetam, which was discovered by the Romanian scientist Corneliu Giurgea in the early 1960s. At the time, he was looking for a chemical that could sneak into the brain and make people feel sleepy. After months of testing, he came up with "Compound 6215". It was safe, it had very few side effects – and it didn't work. The drug didn't send anyone into a restful slumber and seemed to work in the opposite way to that intended. There is no official data on their usage, but nootropics as well as other smart drugs appear popular in the Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, a LA based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says. The infinite promise of stacking is why, whatever weight you attribute to the evidence of their efficacy, nootropics will never go away: With millions of potential iterations of brain-enhancing regimens out there, there is always the tantalizing possibility that seekers haven't found the elusive optimal combination of pills and powders for them—yet. Each "failure" is but another step in the process-of-elimination journey to biological self-actualization, which may be just a few hundred dollars and a few more weeks of amateur alchemy away. Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend. "As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a god-send! The very basic concept of good nutrition among all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog much less a brain injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients including those without brain injuries because we all need optimum health and well-being and it starts with proper nourishment! Kudos to Cavin Balaster!" Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. 
In addition, users have reported a subtle psychostimulatory effect. Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress. Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor to do so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale). Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices are higher, then there will be an arbitrage incentive to simply buy the cheaper human version and downgrade them to veterinary drugs. With just 16 predictions, I can't simply bin the predictions and say yep, that looks good. Instead, we can treat each prediction as equivalent to a bet and see what my winnings (or losses) were; the standard such proper scoring rule is the logarithmic rule which pretty simple: you earn the logarithm of the probability if you were right, and the logarithm of the negation if you were wrong; he who racks up the fewest negative points wins. We feed in a list and get back a number: Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world. Adderall is an amphetamine, used as a drug to help focus and concentration in people with ADHD, and promote wakefulness for sufferers of narcolepsy. Adderall increases levels of dopamine and norepinephrine in the brain, along with a few other chemicals and neurotransmitters. It's used off-label as a study drug, because, as mentioned, it is believed to increase focus and concentration, improve cognition and help users stay awake. Please note: Side Effects Possible. The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. 
A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate in lower doses could have adverse effects on working memory and cognitive functions.) At this point I began to get bored with it and the lack of apparent effects, so I began a pilot trial: I'd use the LED set for 10 minutes every few days before 2PM, record, and in a few months look for a correlation with my daily self-ratings of mood/productivity (for 2.5 years I've asked myself at the end of each day whether I did more, the usual, or less work done that day than average, so 2=below-average, 3=average, 4=above-average; it's ad hoc, but in some factor analyses I've been playing with, it seems to load on a lot of other variables I've measured, so I think it's meaningful). Popular among computer programmers, oxiracetam, another racetam, has been shown to be effective in recovery from neurological trauma and improvement to long-term memory. It is believed to effective in improving attention span, memory, learning capacity, focus, sensory perception, and logical thinking. It also acts as a stimulant, increasing mental energy, alertness, and motivation. We reached out to several raw material manufacturers and learned that Phosphatidylserine and Huperzine A are in short supply. We also learned that these ingredients can be pricey, incentivizing many companies to cut corners. A company has to have the correct ingredients in the correct proportions in order for a brain health formula to be effective. We learned that not just having the two critical ingredients was important – but, also that having the correct supporting ingredients was essential in order to be effective. Brain focus pills mostly contain chemical components like L-theanine which is naturally found in green and black tea. It's associated with enhancing alertness, cognition, relaxation, arousal, and reducing anxiety to a large extent. Theanine is an amino and glutamic acid that has been proven to be a safe psychoactive substance. Some studies suggest that this compound influences, the expression in the genes present in the brain which is responsible for aggression, fear, and memory. This, in turn, helps in balancing the behavioral responses to stress and also helps in improving specific conditions, like Post Traumatic Stress Disorder (PTSD). Overall, the studies listed in Table 1 vary in ways that make it difficult to draw precise quantitative conclusions from them, including their definitions of nonmedical use, methods of sampling, and demographic characteristics of the samples. 
For example, some studies defined nonmedical use in a way that excluded anyone for whom a drug was prescribed, regardless of how and why they used it (Carroll et al., 2006; DeSantis et al., 2008, 2009; Kaloyanides et al., 2007; Low & Gendaszek, 2002; McCabe & Boyd, 2005; McCabe et al., 2004; Rabiner et al., 2009; Shillington et al., 2006; Teter et al., 2003, 2006; Weyandt et al., 2009), whereas others focused on the intent of the user and counted any use for nonmedical purposes as nonmedical use, even if the user had a prescription (Arria et al., 2008; Babcock & Byrne, 2000; Boyd et al., 2006; Hall et al., 2005; Herman-Stahl et al., 2007; Poulin, 2001, 2007; White et al., 2006), and one did not specify its definition (Barrett, Darredeau, Bordy, & Pihl, 2005). Some studies sampled multiple institutions (DuPont et al., 2008; McCabe & Boyd, 2005; Poulin, 2001, 2007), some sampled only one (Babcock & Byrne, 2000; Barrett et al., 2005; Boyd et al., 2006; Carroll et al., 2006; Hall et al., 2005; Kaloyanides et al., 2007; McCabe & Boyd, 2005; McCabe et al., 2004; Shillington et al., 2006; Teter et al., 2003, 2006; White et al., 2006), and some drew their subjects primarily from classes in a single department at a single institution (DeSantis et al., 2008, 2009; Low & Gendaszek, 2002). With few exceptions, the samples were all drawn from restricted geographical areas. Some had relatively high rates of response (e.g., 93.8%; Low & Gendaszek 2002) and some had low rates (e.g., 10%; Judson & Langdon, 2009), the latter raising questions about sample representativeness for even the specific population of students from a given region or institution. The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reportedly moves by its manufacture to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So \frac{200 - 0}{\ln 1.05} \times 0.50 \times 0.25 = 512! The experiment probably used up no more than an hour or two total. The Stroop task tests the ability to inhibit the overlearned process of reading by presenting color names in colored ink and instructing subjects to either read the word (low need for cognitive control because this is the habitual response to printed words) or name the ink color (high need for cognitive control). 
Barch and Carter (2005) administered this task to normal control subjects on placebo and d-AMP and found speeding of responses with the drug. However, the speeding was roughly equivalent for the conditions with low and high cognitive control demands, suggesting that the observed facilitation may not have been specific to cognitive control. Before you try nootropics, I suggest you start with the basics: get rid of the things in your diet and life that reduce cognitive performance first. That is easiest. Then, add in energizers like Brain Octane and clean up your diet. Then, go for the herbals and the natural nootropics. Use the pharmaceuticals selectively only after you've figured out your basics.
Two identical charged spheres are suspended by strings of equal lengths. The strings make an angle of $${30^ \circ }$$ with each other. When suspended in a liquid of density $$0.8\,g\,c{m^{ - 3}}$$, the angle remains the same. If the density of the material of the spheres is $$1.6\,g\,c{m^{ - 3}}$$, what is the dielectric constant of the liquid?
Solution. In air, each string makes $${15^ \circ }$$ with the vertical, so
$${F_e} = T\sin {15^ \circ }\,\,;\,\,mg = T\cos {15^ \circ } \Rightarrow \tan {15^ \circ } = {{{F_e}} \over {mg}}\,\,\,\,\,\,...(1)$$
In the liquid, the electrostatic force becomes $${F_e}'$$ and the buoyant force $${F_B}$$ acts upward:
$${F_e}' = T'\sin {15^ \circ }\,\,;\,\,mg = {F_B} + T'\cos {15^ \circ }$$
$${F_B} = V\rho g = {m \over d}\rho g = {{0.8} \over {1.6}}mg = {{mg} \over 2}$$
$$\therefore \,\,mg - {F_B} = {{mg} \over 2} = T'\cos {15^ \circ } \Rightarrow \tan {15^ \circ } = {{2{F_e}'} \over {mg}}\,\,\,\,\,\,...(2)$$
From $$(1)$$ and $$(2)$$, $${F_e} = 2{F_e}'$$, i.e. $${F_e}' = {{{F_e}} \over 2}$$. Since the liquid weakens the electrostatic force by a factor equal to its dielectric constant $$K$$, $${F_e}' = {{{F_e}} \over K}$$, so $$K = 2$$.
Two wires are made of the same material and have the same volume. However, wire $$1$$ has cross-sectional area $$A$$ and wire $$2$$ has cross-sectional area $$3A$$. If the length of wire $$1$$ increases by $$\Delta x$$ on applying force $$F$$, how much force is needed to stretch wire $$2$$ by the same amount?
Solution. The wires have the same Young's modulus (same material), and because the volumes are equal, wire $$2$$ has length $$\ell /3$$ if wire $$1$$ has length $$\ell$$.
For wire $$1$$: $$Y = {{F/A} \over {\Delta x/\ell }}\,\,\,\,\,\,...(i)$$
For wire $$2$$: $$Y = {{F'/3A} \over {\Delta x/\left( {\ell /3} \right)}}\,\,\,\,\,\,...(ii)$$
From $$(i)$$ and $$(ii)$$: $${F \over A} \times {\ell \over {\Delta x}} = {{F'} \over {3A}} \times {\ell \over {3\Delta x}} \Rightarrow F' = 9F$$
A jar is filled with two non-mixing liquids $$1$$ and $$2$$ having densities $${\rho _1}$$ and $${\rho _2}$$ respectively. A solid ball, made of a material of density $${\rho _3}$$, is dropped in the jar. It comes to equilibrium in the position shown in the figure (resting at the interface between the two liquids). Which relation holds among $${\rho _1}$$, $${\rho _2}$$ and $${\rho _3}$$?
Solution. From the figure it is clear that liquid $$1$$ floats on liquid $$2$$; the lighter liquid floats over the heavier one, so $${\rho _1} < {\rho _2}$$. Also $${\rho _3} < {\rho _2}$$, otherwise the ball would have sunk to the bottom of the jar, and $${\rho _3} > {\rho _1}$$, otherwise the ball would have floated in liquid $$1$$. From the above discussion we conclude that $${\rho _1} < {\rho _3} < {\rho _2}$$.
A capillary tube (A) is dipped in water. Another identical tube (B) is dipped in a soap-water solution. Which of the following shows the relative nature of the liquid columns in the two tubes?
Solution. In the case of water, the meniscus is concave upwards. According to the ascent formula $$h = {{2T\,\cos \,\theta } \over {r\rho g}}$$, and since the surface tension $$T$$ of the soap solution is less than that of water, the rise of the soap solution in the capillary tube is smaller than that of water. As with water, the meniscus of the soap solution is also concave upwards.
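As a quick sanity check on the wire-stretching result above, the short symbolic computation below reproduces $$F' = 9F$$ from the two Young's modulus relations. This is an illustrative sketch only (it assumes Python with sympy installed; the symbol names are mine, not part of the original solution):

```python
import sympy as sp

# F: force on wire 1, Fp: force on wire 2, A: area of wire 1,
# l: length of wire 1, dx: common elongation, Y: Young's modulus
F, Fp, A, l, dx, Y = sp.symbols("F F_prime A l dx Y", positive=True)

# Wire 1: area A,  length l    ->  Y = (F/A) / (dx/l)
# Wire 2: area 3A, length l/3  ->  Y = (F'/(3A)) / (dx/(l/3))
eq_wire1 = sp.Eq(Y, (F / A) / (dx / l))
eq_wire2 = sp.Eq(Y, (Fp / (3 * A)) / (dx / (l / 3)))

# Eliminate Y using the first relation, then solve the second for F'
Y_value = sp.solve(eq_wire1, Y)[0]
F_prime = sp.solve(eq_wire2.subs(Y, Y_value), Fp)[0]
print(F_prime)  # prints 9*F
```

The same pattern (write down each constraint, eliminate the shared quantity) also recovers $$K = 2$$ for the charged-sphere problem.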
Paclitaxel and curcumin coadministration in novel cationic PEGylated niosomal formulations exhibit enhanced synergistic antitumor efficacy Ashraf Alemi1, Javad Zavar Reza1,2, Fateme Haghiralsadat3, Hossein Zarei Jaliani4, Mojtaba Haghi Karamallah2, Seyed Ahmad Hosseini5 & Somayeh Haghi Karamallah6 The systemic administration of cytotoxic chemotherapeutic agents for cancer treatment often has toxic side effects, limiting the usage dose. To increase chemotherapeutic efficacy while reducing toxic effects, a rational design for synergy-based drug regimens is essential. This study investigated the augmentation of therapeutic effectiveness with the co-administration of paclitaxel (PTX; an effective chemotherapeutic drug for breast cancer) and curcumin (CUR; a chemosensitizer) in an MCF-7 cell line. We optimized niosome formulations in terms of surfactant and cholesterol content. Afterward, the novel cationic PEGylated niosomal formulations containing Tween-60: cholesterol:DOTAP:DSPE-mPEG (at 59.5:25.5:10:5) were designed and developed to serve as a model for better transfection efficiency and improved stability. The optimum formulations represented potential advantages, including extremely high entrapment efficiency (~ 100% for both therapeutic drug), spherical shape, smooth-surface morphology, suitable positive charge (zeta potential ~ + 15 mV for both CUR and PTX), sustained release, small diameter (~ 90 nm for both agents), desired stability, and augmented cellular uptake. Furthermore, the CUR and PTX kinetic release could be adequately fitted to the Higuchi model. A threefold and 3.6-fold reduction in CUR and PTX concentration was measured, respectively, when the CUR and PTX was administered in nano-niosome compared to free CUR and free PTX solutions in MCF-7 cells. When administered in nano-niosome formulations, the combination treatment of CUR and PTX was particularly effective in enhancing the cytotoxicity activity against MCF-7 cells. Most importantly, CUR and PTX, in both free form and niosomal forms, were determined to be less toxic on MCF-10A human normal cells in comparison to MCF-7 cells. The findings indicate that the combination therapy of PTX with CUR using the novel cationic PEGylated niosome delivery is a promising strategy for more effective breast cancer treatment. Chemotherapy is the standard treatment for various types of cancers. However, chemotherapy is associated with high systemic toxicity and low therapeutic effectiveness [1]. Nanotechnology has revolutionized the diagnosis and treatment of cancer [2]. A nano-sized drug delivery system (DDS), or nanocarrier, is designed to deliver therapeutic and/or diagnostic agents to their target sites [3]. Over recent decades, drug delivery systems using vesicular carriers have attracted great interest because these carriers provide high encapsulation efficiency, control drug release, enhance drug solubility, carry both hydrophilic and hydrophobic drugs, reduce side effects, prolong circulation in blood, and possess the ability to target a specific area [4, 5]. Vesicles made of natural or synthetic phospholipids are called liposomes, while transferosomes are modified liposomal systems that, in addition to phospholipids, contain a single chain surfactant as an edge activator; ethosomes contain ethanol as an edge activator instead of a single chain surfactant. 
Despite having some advantages over conventional dosage forms, vesicular carriers present many problems in practical applications, such as high cost, the use of organic solvents for preparation, and a limited shelf life due to lipid rancidification [6]. Therefore, a continuous endeavor has been made to find an alternative vesicular carrier. Niosomes meet this requirement [7]. Niosomes, or non-ionic surfactant vesicles, are unilateral or multilamellar spheroidal structures. Niosomes are preferred as an effective alternative to conventional liposomes, as they offer several advantages, including greater stability, lower cost, biodegradability, biocompatibility, non-immunogenic, and low toxicity, and they can be stored more easily for industrial production in pharmaceutical applications [5, 8,9,10,11,12]. To improve stability and circulation half-life, niosomes may be coated with appropriate polymer coatings, such as polyethylene glycol (PEG), creating PEGylated niosomes. PEG coating also helps reduce systemic phagocytosis, which results in prolonged systemic circulation, as well as reduced toxicity profiles [13, 14]. Paclitaxel (PTX) is an important antineoplastic drug, and it is isolated from the bark of Taxus brevifolia. PTX demonstrates an effective chemotherapeutic and cytotoxic activity against breast, ovarian, colon, lung, prostate, and brain cancers. However, the wide therapeutic effects of PTX are limited due to the low therapeutic index and poor water-solubility [15, 16]. Curcumin (CUR) is a hydrophobic polyphenol compound obtained from the rhizome of the plant Curcuma longa. CUR exhibits various pharmacological activities, such as anti-inflammatory, anti-oxidant, and anti-tumor effects. Particularly, CUR has been demonstrated to be highly effective against a variety of different malignancies, including leukemia and lymphoma, as well as colorectal, breast, lung, prostate, and pancreatic carcinoma. However, the pharmacological application of CUR has been impeded due to its extremely low aqueous solubility, instability, extremely poor bioavailability, and high metabolic rate [17,18,19]. As a result, nanotechnology is considered one of the most significant methods to design and develop various nano-carrier formulations for curcumin and paclitaxel, such as polymeric micelles, liposomes, self-assemblies, nanogels, niosome biodegradable microspheres, and cyclodextrin [18, 20, 21]. In this study, we loaded both curcumin and paclitaxel into cationic PEGylated niosomal formulations for enhanced efficacy in MCF-7 human breast adenocarcinoma cells. In addition to formulation design and optimization, we have examined release profile, intracellular delivery, and enhancement of cytotoxicity appears. The effect of surfactant:cholesterol ratio on CUR/PTX niosome formulations To specify the optimal formulation for attaining high entrapment efficiency, controlled release (at 37 °C and pH 7.4), and small vesicle size, various niosomal CUR/PTX formulations were evaluated (Table 1). As shown in Table 1, cholesterol had a profound effect on CUR/PTX entrapment efficiency in niosomes: by increasing the amount of cholesterol content from 10% in formulation 1 (F1) to 30% in formulation 4 (F4), PTX/CUR entrapments into nano-niosomes were constantly increased. However, adding cholesterol from F1 to F4 decreased the percentage of CUR/PTX released over 12 h. 
Furthermore, as can be seen from the presented results, the mean diameter of the niosomes increased with increasing the cholesterol content (F1 → F5, Table 1). However, the addition of up to 50% cholesterol to niosomes in F5 decreased niosomal efficiency in trapping curcumin/paclitaxel compared to the 30% cholesterol content in F4. Based on high entrapment efficiency and sustained drug release, the F4 formula has chosen as the formulation for further studies. Table 1 Effect of the non-ionic surfactant Tween 60: cholesterol with various molar ratios on entrapment efficiency (EE %), size and % release (R) in CUR/PTX loaded Niosomes The effect of DSPE-mPEG (2000) and DOTAP in niosomal formulation For attaining less aggregation, smaller niosomes, higher entrapment efficiency, and improved stability, 5% PEG was added to F4. According to Table 2, the F6 niosomal formula containing 5% PEG showed higher drug entrapment, smaller diameter, smaller Poly-Dispersity Index (PDI), and lower drug release than the F4 formula. Table 2 shows the number of positive charge particles and the entrapment efficiency were increased by adding 10% DOTAP to F6. However, vesicle size and PDI declined with a 10% increase in the molar amount of DOTAP. The obtained results showed the CUR/PTX niosomal formulations containing Tween-60: cholesterol:DOTAP:PEG with a 59.5:25.5:10:5 molar ratio (F7) had the desired feature based on high entrapment efficiency, sustained drug release, small diameter, and improved transfection efficiency (Table 2). Table 2 Effect cationic phospholipid DOTAP and DSPE-mPEG (2000) on entrapment efficiency (EE %), size and % release (R) in CUR/PTX loaded Niosomes Physical characterization of niosomal vesicles The internal structure of CUR/PTX niosomes was evaluated by cryogenic transmission electron microscopy (Cryo-TEM). As illustrated in Fig. 1a, b, the optimum formula of CUR/PTX niosomes was spherically shaped. Furthermore, the niosomes structures' rigid boundaries were indicated. According to SEM photographs, the niosomal vesicles were found to be round with smooth surfaces (Fig. 1c, d). Morphological assessment: a niosomal paclitaxel; b niosomal curcumin by cryogenic transmission electron microscopy (Cryo-TEM). Scanning electron microscopy (SEM) of c curcumin niosome; and d paclitaxel niosome In-vitro drug release study Evaluation of in vitro drug release was performed using the dialysis method. The results of a 72-h release profile of CUR and PTX from the optimum formulation (F7) in PBS pH 7.4 at 37 °C are displayed in Fig. 2. After 72 h, 29.93 and 28.16% of the loaded drugs were released for CUR and PTX, respectively. The cumulative release profile of CUR and PTX was apparently biphasic, with an initial rapid release period followed by a slower release phase. The in vitro release profile of curcumin and paclitaxel from niosomal optimum formula Release kinetics modeling Figure 3 shows the CUR/PTX release data were analyzed mathematically according to: zero-order, first-order, Hixson–Crowell, and Higuchi's equations. Table 3 summarizes the correlation coefficients (R2) calculated for niosomal formulations. The results revealed that the release of CUR and PTX from niosomal films is most fitted to the Higuchi model, according to the higher correlation coefficient. Curcumin and paclitaxel comparative plots. 
a Zero-order release kinetics; b first-order release kinetics; c Higuchi (SQRT) release kinetics; and d Hixson–Crowell model
Table 3 Release kinetics data of CUR and PTX from the niosomal optimum formulae
Fourier transform infrared (FTIR) spectral evaluation
To confirm the drug presence in CUR/PTX nano-niosome formulations, FTIR analysis was performed. Figure 4a shows the FTIR spectrum of free paclitaxel. There were characteristic peaks in this spectrum: O–H stretching and N–H stretching in 2° amine at 3445 cm−1, –CH3 asymmetric and symmetric stretching at 2923 cm−1, conjugation of C=O with the phenyl group at 1733 cm−1, C–O stretch at 1122 cm−1, and the C–H out-of-plane bending vibrations for monosubstituted rings in the paclitaxel molecule in the region of 900–500 cm−1.
FTIR spectra. a Free paclitaxel; b free curcumin; c blank niosome; d niosomal paclitaxel; e niosomal curcumin; f comparison of blank niosome and niosomal paclitaxel; g comparison of blank niosome and niosomal curcumin
Figure 4b demonstrates the FTIR spectrum of free curcumin. The bands exhibited in this spectrum can be assigned to: C–H stretching and O–H stretching at 3507 cm−1, aromatic ring C=C stretching at 1506 cm−1, C=O stretch at 1152 cm−1, and the C–H out-of-plane bending vibrations for ortho-disubstituted rings in the curcumin molecule in the region of 800–600 cm−1.
The FTIR pattern for the blank niosome (Fig. 4c) demonstrates various characteristic peaks of DOTAP, Tween-60, cholesterol, and DSPE-mPEG in the range of 3500–1115 cm−1. The band observed at 3435 cm−1 was assigned to cholesterol and Tween-60 (O–H stretching in phenols and N–H stretching in 2°-amines). The C–N stretch and C–O stretch occur at 1148 cm−1 and belong to DOTAP and Tween-60, respectively. The carbonyl group exhibits a strong absorption band at 1642.15 cm−1 due to the C=O stretching vibration in DSPE-mPEG, Tween-60, and DOTAP. All of these peaks were repeated in the FTIR spectra of the PTX/CUR niosome formulations.
The niosomal paclitaxel FTIR spectrum (Fig. 4d) shows the out-of-plane bending peaks in the range of 900–500 cm−1, which can be used to assign mono-substitution on the paclitaxel ring and thus confirms paclitaxel loading in the niosome formulation. Furthermore, in the niosomal curcumin FTIR spectrum (Fig. 4e), the out-of-plane bending peaks in the 800–600 cm−1 range can be used to assign ortho substitution on the curcumin ring, corroborating curcumin loading in the niosome formulation. When compared to the blank niosome, the sharper band in the 1600 cm−1 region and the broader bands in the 3500 cm−1 and 900–500 cm−1 regions of the CUR/PTX niosomal formulations (Fig. 4f, g) affirm curcumin and paclitaxel entrapment in the nano-niosomes.
Physical stability examination
To determine physical stability, the optimum formulation of curcumin/paclitaxel-loaded niosomes was stored at 4 °C and monitored in terms of encapsulation efficiency, vesicle size, PDI, and zeta potential. After storage for 60 days, the encapsulation efficiency, vesicle size, PDI, and zeta potential of the optimized formulation (F7) showed no significant change from the freshly prepared samples (p > 0.05). These results confirmed the stability of the F7 formula.
IC50s for individual curcumin and paclitaxel on MCF-7 and MCF-10A cells
To determine the inhibitory effect of individual curcumin and paclitaxel as a free form and as a niosomal form on MCF-7 and MCF-10A cells, we first performed dose–response experiments for curcumin and paclitaxel. As indicated in Fig.
5, individual treatments with the free form and the niosomal form resulted in growth inhibition of MCF-7 and MCF-10A cells in a dose-dependent pattern. Table 4 evaluates the IC50 values of these agents. The IC50 values of free PTX solution and free CUR solution was 13.54 and 44.60 μg mL−1, respectively, against MCF-7 cells and 30.75 and 76.71 μg mL−1, respectively, against MCF-10A cells (Fig. 5a, b). This revealed that MCF-10A cells needed at least a ∼ 2.27-fold higher concentration of PTX solution and a ∼ 1.7-fold higher concentration of CUR solution to attain IC50 compared to their counterpart MCF-7 cancer cells. As depicted in Fig. 5c, d, nano-niosomes were highly efficient in delivering the PTX and CUR drugs to both MCF-7 and MCF-10A cells. A threefold and 3.6-fold reduction in CUR and PTX concentration were measured, respectively, when the CUR and PTX were administered in nano-niosomes compared to free CUR and free PTX solutions in MCF-7 cells. Similarly, the CUR and PTX delivered in nano-niosomes to MCF-10A cells demonstrated a 1.2- and 1.5-fold lowered concentration, respectively. These results indicated that PTX and CUR in free and niosomal forms had less cytotoxicity on MCF-10A cells as a model for normal human mammary epithelial cells. The IC50 concentrations were then utilized to generate fixed ratios for subsequent combination experiments and for the calculation of combination index (CI). Inhibition of cell growth by curcumin (CUR) and paclitaxel (PTX) individual as a drug free form and drug niosomal form in MCF-7 and MCF-10A cell. a Free CUR; b Free PTX; c Nio CUR; d Nio PTX for MCF-7 (filled square) and MCF-10A (filled triangle) cells Table 4 The IC50 values of paclitaxel, curcumin alone and in combination on MCF-7 and MCF-10A cells, administered in the forms of free drug and drug niosomal form Growth inhibitory effects of paclitaxel in combination with curcumin To determine the synergistic antitumor effects of curcumin and paclitaxel, we performed a combination study, and the results are presented in Table 5. Figure 6a, b showed the dose–response curves for MCF-7 and MCF-10A cell lines exposed to paclitaxel and curcumin combination therapy. According to the results, curcumin could significantly increase the cell growth inhibition of paclitaxel; in the presence of free CUR solution, the IC50 of free PTX solution was diminished to ∼ 1.6-fold in MCF-7 cells and ∼ 1.4-fold in MCF-10A cells. This combination therapy regimen was significantly efficacious (p value < 0.05) when the PTX and CUR was delivered in nano-niosome formulations compared to a free solution (Table 4). Thus, the use of PTX and CUR together resulted in enhanced therapeutic potential. Figure 6 also illustrates the combination index analysis of the PTX and CUR interaction in MCF-7 and MCF-10A cells. Values of CI < 1 were obtained from the paclitaxel and curcumin combination in both free forms and niosomal forms for MCF-7 and MCF-10A cells, demonstrating that the two drugs interact synergistically to inhibit cell growth (Fig. 6c–f). Table 5 Paclitaxel and curcumin combination index (CI) against MCF-7 and MCF-10A cells Analysis of synergy between curcumin and paclitaxel for MCF-7 (filled triangle) and MCF-10A (filled square) cells. a Dose–response curve of free CUR + Free PTX; b dose–response curve of Nio CUR + Nio PTX. 
CI values at different levels of growth inhibition effect (fraction affected, Fa): c Free CUR + Free PTX in MCF-7 cells; d Nio CUR + Nio PTX in MCF-7 cells; e Free CUR + Free PTX in MCF-10A cells; f Nio CUR + Nio PTX in MCF-10A cells
Nano-niosomal CUR/PTX cellular uptake experiments
Cellular uptake experiments were performed to evaluate the cellular uptake behavior of the different CUR/PTX niosomal formulations in the following cells: MCF-7 cells as a cancer cell model and MCF-10A cells as a model for normal human mammary epithelial cells. Figures 7, 8 and 9 illustrate the cellular uptake images of F6 and F7 CUR/PTX-loaded niosome formulations on MCF-7 and MCF-10A cell lines monitored by fluorescence microscope. As depicted in Fig. 7b, d, the MCF-7 cells treated with the CUR/PTX F7 formula containing 10% DOTAP showed greater green and cyan (blue–green) color intensity compared to cells treated with the CUR/PTX F6 formula (without DOTAP, Fig. 7a, c). By adding 10% DOTAP to the F6 formula, the drug release, vesicle size, and polydispersity index decreased, while the transfection efficiency was enhanced. Similar results were observed in MCF-10A cells (Fig. 9a–d); however, the intensity of the green and cyan color in these cells was much less than in the MCF-7 cells. These findings indicate that the CUR/PTX-loaded niosome formulations entered healthy cells much less than cancerous cells. These results are consistent with the cytotoxicity experiments.
Cellular uptake of F6 and F7 CUR/PTX-loaded niosome formulations on the MCF-7 cell line. MCF-7 cell line [a1 F6 Nio CUR Nucleus, a2 F6 Nio CUR, a3 F6 Nio CUR merged; b1 F7 Nio CUR Nucleus, b2 F7 Nio CUR, b3 F7 Nio CUR merged; c1 F6 Nio PTX Nucleus, c2 F6 Nio PTX, c3 F6 Nio PTX merged; d1 F7 Nio PTX Nucleus, d2 F7 Nio PTX, d3 F7 Nio PTX merged]
Cellular uptake of F6 and F7 CUR/PTX-loaded niosome formulations on the MCF-10A cell line. MCF-10A cell line [a1 F6 Nio CUR Nucleus, a2 F6 Nio CUR, a3 F6 Nio CUR merged; b1 F7 Nio CUR Nucleus, b2 F7 Nio CUR, b3 F7 Nio CUR merged; c1 F6 Nio PTX Nucleus, c2 F6 Nio PTX, c3 F6 Nio PTX merged; d1 F7 Nio PTX Nucleus, d2 F7 Nio PTX, d3 F7 Nio PTX merged]
Apoptosis analysis
Apoptosis was measured by annexin V-fluorescein isothiocyanate (FITC)/propidium iodide (PI) double staining (Sigma-Aldrich, USA). MCF-7 cells were seeded in six-well plates at a density of 1 × 10^5 cells per well. Apoptosis was induced by treating the cells with PTX and CUR, either as single agents or as a PTX + CUR combination, administered in aqueous solution or in nano-niosome formulations at an IC50 concentration for each drug. After 24 h of incubation, the cells were detached using 0.25% trypsin/EDTA (Sigma-Aldrich, USA) and centrifuged at 1500 rpm for 3 min, after which the pellet was resuspended in ice-cold, phosphate-buffered saline (PBS, pH 7.4). Annexin V-FITC solution (3 µL) was added to each cell suspension. In addition, 3 µL of propidium iodide stock solution was added to the cells to identify necrotic cells. After 30 min of incubation on ice, the stained cells were analyzed by flow cytometry using the BD FACSCalibur instrument. Cells that did not receive any drug treatment served as the control.
Discussion
Plants have been employed as medicines for centuries, and the usage of plant-derived chemicals has been extended into anticancer drugs. Lately, chemotherapeutic strategies have advanced to the utilization of combined active compounds because they are believed to be more active than a single agent; the combination index used to quantify such interactions is illustrated in the short sketch below.
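The following is a minimal sketch of how such a combination index is computed, not part of the study's actual analysis pipeline (which used CompuSyn). The single-agent IC50 values are taken from Table 4; the combination doses are hypothetical placeholders:

```python
def combination_index(a, A, b, B):
    """Chou-Talalay combination index: CI = a/A + b/B.

    A, B: IC50 of each drug used alone.
    a, b: doses of the two drugs that, used together, give the same effect.
    """
    return a / A + b / B

# Single-agent IC50s in MCF-7 cells (free PTX and free CUR, Table 4), in ug/mL
IC50_PTX, IC50_CUR = 13.54, 44.60
# Hypothetical combination doses producing 50% inhibition (placeholders)
dose_ptx, dose_cur = 6.0, 15.0

ci = combination_index(dose_ptx, IC50_PTX, dose_cur, IC50_CUR)
verdict = "synergistic" if ci < 1 else ("additive" if ci == 1 else "antagonistic")
print(f"CI = {ci:.2f} ({verdict})")
```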
Hence, treatment effectiveness could increase, and the toxic side effects may be reduced, due to the extremely low use of drugs. Curcumin (diferuloylmethane), a yellow pigment isolated from the rhizome of turmeric, has been reported to have an extensive spectrum of pharmacological activities. Furthermore, curcumin is currently involved in the early phase of a clinical trial as a potential chemo-preventive agent [22, 23]. Therefore, it is logical to evaluate whether curcumin, as a new antiproliferative agent, can sensitize tumors to the chemotherapeutic drug paclitaxel for breast cancer cells. Paclitaxel (PTX) has been used as an effective chemotherapeutic drug for a wide range of tumors, such as breast, lung, prostate, ovarian, and pancreatic cancers [24, 25]. The CUR and PTX combination is a remarkable anticancer drug therapy. PTX is a powerful microtubule-stabilizing agent that commences cell cycle arrest, while CUR attacks biologically by regulating several signal transduction pathways [26,27,28]. Despite these good therapeutic effects, the wide therapeutic range of PTX and CUR is limited due to poor aqueous solubility and low therapeutic index. A promising approach for circumventing these issues is the use of a vesicular nanocarrier, such as niosomes, which are an alternative to phospholipid vesicles for the encapsulation of hydrophobic drugs due to providing high encapsulation efficiency, biocompatibility, biodegradation, low preparation cost, and sufficient stability, as well as being free from organic solvents and offering easy storage [7]. In this study, we have developed a novel cationic PEGylated niosomal formulation for encapsulating paclitaxel and curcumin. The vesicular systems were prepared from the nonionic surfactant Tween-60, as a commercial surfactant, and all formulations were compared in terms of entrapment efficiency, drug release, vesicle size, and polydispersity index. Furthermore, niosomes formulated without cholesterol formed a gel, and only the addition of cholesterol was a homogenous niosome obtained [29]. The hydrophilic–lipophilic balance (HLB) of the nonionic surfactant, the chemical structure of the components, and the critical packing parameter (CPP) are important in forming bilayer vesicles instead of micelles. The HLB value of a surfactant plays a key role in controlling the drug entrapment efficiency of the vesicle it forms. A surfactant, such as Tween-60, with an HLB value in the 14–17 range is inappropriate for creating niosomes. For HLB > 6, cholesterol must be added to the surfactant until forming a bilayer vesicle. Also, the presence of cholesterol in the formulation of niosomes is necessary for the physical stability of these nano-sized vesicles (i.e., suppressing the surfactant's tendency to form aggregates, decreasing drug leakage, vesicle size, and dispersion). This was primarily ascribed to the increase in hydrophobicity (particularly with higher HLB surfactant molecules, such as Tween-60) that augmented the structural affinity of the bilayer membrane for CUR/PTX molecules [6, 11, 12, 30,31,32]. Therefore, cholesterol is added to the formulations as a membrane-stabilizing factor. As a result, by increasing the amount of cholesterol content from 10 to 30%, PTX/CUR entrapments in nano-niosomes were increased, while the percentage of CUR/PTX release was decreased. Furthermore, the mean diameter of niosomes increased with increasing the cholesterol content. 
However, the addition of cholesterol content to niosomes up to 50% decreased niosomal efficiency in trapping curcumin/paclitaxel compared to 30% cholesterol content. This finding can be explained by the possible competition between curcumin and paclitaxel as lipophilic drugs and the cholesterol incorporation into the niosomes. A further increase in cholesterol tends to deposit between the bilayers, excluding the drug from the niosomal bilayers. Above a certain level of cholesterol, entrapment efficiency decreased possibly due to a decrease in CPP [6, 11, 12, 30,31,32]. Improving stability, increasing the drug encapsulation, decreasing mean size diameter, and reducing drug release is due to the presence of PEGylation in the niosomal formulations [33, 34]. Therefore, 5% PEG was added to the F4 formula. According to the findings, the F6 niosomal formula demonstrated higher drug entrapment, smaller diameter, smaller PDI, and lower drug release than the F4 formula. Additionally, cationic lipids added to the niosomal formulations enhanced the niosomes' physicochemical properties and the transfection efficiency. The addition of DOTAP decreased the drug's release and vesicle size due to a decline in the cholesterol content. This effect also decreased the polydispersity index, which is relevant to the further reciprocal repel force between the particles with the same sign charge in the suspension system [35,36,37,38]. To hamper the aggregation of vesicular systems, it is essential to introduce a charge on the surface of the vesicle. A good indicator for the size of this barrier is zeta potential. If all the particles possess large enough zeta potential, they presumably repel each other strongly enough that they will not have the tendency to aggregate [39]. After storage for 60 days, the presence of DOTAP and PEG in niosomal formulations of CUR and PTX demonstrated no significant changes when compared to freshly prepared samples in terms of encapsulation efficiency, vesicle size, PDI, and zeta potential of the optimized formulation (F7). This implies that the new F7 niosome formulation could minimize problems associated with niosome instability, including aggregation, fusion, and drug leakage. The rate of drug release from a delivery system is a crucial factor and must be appraised to attain an optimal system with the desired drug-release profile. The in vitro release study was conducted to predict how a delivery system may function under the ideal status, which might display some indication of its in vivo efficiency. In-vitro drug release demonstrated that the cumulative release profile of CUR and PTX were apparently biphasic, with an initial rapid release period followed by a slower release phase. Because CUR and PTX are small molecules and the permeability cut-off of the dialysis bag was 12 kDa, the released CUR and PTX poured easily from the bag. As a result, neither the dialysis bag nor the drug size restricted the drug's release. The initial fast rate of release was regulated by the diffuse mechanism (concentration gradient of CUR/PTX between noisome and buffer), while the later slow release resulted from the drug's sustained release from the inner layer [40,41,42]. The in vitro release of CUR and PTX from the niosomal formulation was assessed by fitting the cumulative drug release into mathematical release models, which are commonly applied to elucidate release kinetics and to compare release profiles. The CUR/PTX niosomal formulations followed the Higuchi model. 
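As a brief aside, the kind of model comparison that supports this conclusion can be reproduced with a short script. The sketch below linearizes each model and compares correlation coefficients; it assumes numpy and scipy are available, and the time points and release percentages are placeholders rather than the measured CUR/PTX data:

```python
import numpy as np
from scipy import stats

# Placeholder cumulative-release data: time (h) and cumulative release (%).
t = np.array([1, 2, 4, 8, 12, 24, 48, 72], dtype=float)
q = np.array([5, 8, 11, 15, 18, 22, 26, 29], dtype=float)

def r2(x, y):
    """R^2 of a straight-line fit of y on x."""
    return stats.linregress(x, y).rvalue ** 2

remaining = 100.0 - q  # drug still in the formulation (%)
results = {
    "zero-order (Q vs t)": r2(t, q),
    "first-order (log C vs t)": r2(t, np.log10(remaining)),
    "Higuchi (Q vs t^1/2)": r2(np.sqrt(t), q),
    "Hixson-Crowell (Q0^1/3 - Qt^1/3 vs t)": r2(t, 100.0 ** (1 / 3) - remaining ** (1 / 3)),
}

for model, value in results.items():
    print(f"{model}: R^2 = {value:.3f}")
print("Best-fitting model:", max(results, key=results.get))
```

With the study's actual release profiles, this same comparison of correlation coefficients is what identifies the Higuchi model as the best fit.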
These findings indicated that CUR and PTX molecules were diffused in the niosome matrix and that there were no possible interactions between the niosome components and the drugs [5, 43,44,45]. In this study, we have investigated the effect of PTX and CUR combination therapy, in both free forms and niosomal forms, on MCF-7 cells as a cancer cell model and MCF10A cells as a model for normal human mammary epithelial cells (Tables 4 and 5). The ratiometric combination of PTX and CUR significantly suppressed the growth of MCF-7 cells. When the free drugs were administered in nano-niosome formulations, the cytotoxicity effects manifested even more. The enhanced therapeutic activity achieved with the combination therapy was ascribed to the P-glycoprotein (P-gp) downregulation and to the inhibition of the NFκB pathway by CUR. Most importantly, CUR downregulates the NF-ĸB signaling pathways, thus inhibiting cancer cell growth and inducing apoptosis. Therefore, CUR sensitizes cancer cells to increase the cancer cells' response to anticancer drugs. Increasing the accumulation of PTX within the cancer cell due to P-gp downregulation can overcome the MDR phenomenon [1, 27, 46]. We observed a similar trend for MCF-10a cells. Nevertheless, as expected, CUR and PTX had fewer side effects in both free form and niosomal form on MCF10A human mammary epithelial cells. The cellular uptake experiments were demonstrated by the addition of DOTAP, which enhanced the transfection efficiency of the CUR/PTX F7 formula; it is well known that cationic lipids enhance the transfection efficiency of niosomal formulations [35,36,37,38]. Quantitative apoptotic activity measurements were made by flow cytometry analysis in PTX and CUR treated cells. Statistically significant when apoptotic activity of paclitaxel NanoNiosome formulation is compared with free paclitaxel and curcumin NanoNiosome formulation is compared with free curcumin solution in MCF-7 cells (p < 0.05). In addition to these findings, flow cytometry analysis also revealed that the apoptosis was significantly greater with the combination therapy and with drugs administered NanoNiosome formulations at p < 0.05. These results collaborate with the cell viability experiment to affirm that NanoNiosomes were effective in delivering the PTX and CUR to the cells, and combination therapy with PTX and CUR delivered in NanoNiosome formulations indeed demonstrated higher therapeutic efficacy in MCF-7 cells. Our successful findings suggest novel cationic PEGylated niosomal formulations for paclitaxel and curcumin co-administration. The encapsulation efficiency of both drugs was extremely successful. The drugs' release profile demonstrated burst release followed by a sustained drug release for both agents. The combination of PTX (a powerful anticancer drug) with CUR (an effective chemosensitizer), particularly in nano-niosome formulations, can improve the therapeutic effectiveness of cancer treatments. Our experimental evidence indicated that a nanocarrier-based approach adopted for the delivery of CUR/PTX combinations was efficient in battling cancer cells in vitro. CUR/PTX niosomes preparation We used the thin-film hydration method to prepare the curcumin and paclitaxel-loaded niosomes [47]. Tween-60 (DaeJung Chemicals & Metals, South Korea) and cholesterol (Sigma-Aldrich, USA) were dissolved in chloroform to obtain the different molar ratio molarities (as illustrated in Table 1). 
PTX (Stragen, Switzerland) and CUR (Sigma-Aldrich, USA) were dissolved in chloroform and added to the mixture of surfactant and lipids. The fluorescent label DiI (Sigma, USA) was added to the lipid phase at 0.1 mol% for lipid staining in order to evaluate cellular uptake. Niosomal formulations were screened for particle size, controlled release, and high entrapment efficiency. After attaining optimized synthetic conditions, the cationic lipid DOTAP (1,2-dioleoyl-3-trimethylammonium-propane, Sigma-Aldrich, USA) and polyethylene glycol (Lipoid PE 18:0/18:0–PEG2000, DSPE-mPEG 2000, Lipoid GmbH, Germany) were added to improve the stability and transfection efficiency of the niosomal formulations. The organic solvent was removed by rotary evaporator (Heidolph, Germany) at 50 °C until a thin-layered film formed. The dry lipid films were hydrated by adding phosphate-buffered saline (PBS, pH = 7.4) at 60 °C for 60 min to obtain the niosomal suspensions. After hydration, the prepared vesicles were sonicated for 30 min using a microtip probe sonicator (model UP200St, Hielscher Ultrasonics GmbH, Germany) to reduce the vesicles' mean size. Thereafter, free (unloaded) drugs were separated from the niosomal vesicles using a dialysis bag diffusion technique against PBS for 1 h at 4 °C (MW = 12 kDa, Sigma-Aldrich, USA) [48]. Drug-free niosomes were produced in a similar manner without adding curcumin and paclitaxel. The dose of both drugs was 0.5 mg mL−1 for all formulations.
Analysis of encapsulation efficiency
To evaluate entrapment efficiency, spectroscopic measurements were performed. The amounts of niosome-encapsulated CUR and PTX were analyzed with a UV spectrophotometer (model T80+, PG Instruments, United Kingdom) at 429 and 236 nm (λmax), respectively [7]. The encapsulation efficiency was determined as follows (a small numerical illustration is given at the end of this section):
$$\text{Encapsulation efficiency}\ (\%) = \frac{\text{Amount of CUR/PTX encapsulated within niosomes}}{\text{Total amount of CUR/PTX added}} \times 100$$
The particle size distribution, zeta potential, and polydispersity index (PDI) of the obtained niosomes were measured by the dynamic light scattering technique using a ZetaPALS zeta potential and particle size analyzer (Brookhaven Instruments, Holtsville, NY, USA). Scattered light was detected at room temperature at an angle of 90°, and diluted samples in 1700 µL of deionized water (0.1 mg mL−1) were prepared and measured immediately after preparation. All measurements were carried out three times, and their mean values were calculated.
The internal structure of the NanoNiosome formulations was determined by cryogenic transmission electron microscopy (FEI Tecnai 20, type Sphera, Oregon, USA) equipped with a LaB6 filament at 200 kV. A drop of NanoNiosome solution was placed over a 200-mesh copper-coated TEM grid, and the TEM measurement was performed. The surface morphology of the niosomes was characterized using scanning electron microscopy (SEM). To prepare the sample used in SEM, a small amount of the NanoNiosome solution dispersed in water was placed on a 400-mesh copper grid. Then, the copper grid was placed in an evacuated desiccator to evaporate the solvent. Finally, the samples were coated with gold to make them conductive, followed by evaluation of the surface morphology using a 100 W SEM instrument (model KYKY-EM3200, 30 kV, China).
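As noted above, here is a minimal numerical sketch of the entrapment-efficiency calculation. In practice the encapsulated amount would come from the UV absorbance (429 nm for CUR, 236 nm for PTX) converted through a calibration curve; the numbers used here are placeholders rather than measured values:

```python
def entrapment_efficiency(encapsulated_ug: float, total_added_ug: float) -> float:
    """EE (%) = amount of drug encapsulated within niosomes / total drug added * 100."""
    return 100.0 * encapsulated_ug / total_added_ug

# Placeholder example for a 1 mL batch at the 0.5 mg/mL dose used for all formulations
total_cur_ug = 500.0          # total curcumin added (ug)
encapsulated_cur_ug = 485.0   # hypothetical amount recovered in the niosomal fraction (ug)

print(f"EE = {entrapment_efficiency(encapsulated_cur_ug, total_cur_ug):.1f} %")  # -> 97.0 %
```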
The in vitro release of CUR/PTX from niosomes was monitored using a dialysis bag (MW = 12 kDa) against PBS (containing 2% Tween-20 to imitate a physiological environment) for 72 h at 37 °C and 7.4 pH [42]. First, the CUR/PTX niosome samples were suspended in a dialysis tube, and the release of both drugs was evaluated in 10 mL of PBS with continuous stirring. Then, 2 mL of the sample was collected from the incubation medium at specific time intervals and immediately substituted with an equal volume of fresh PBS. The amount of CUR/PTX released was determined using a UV–Vis spectrometer at 429 and 236 nm, respectively. Mathematical modeling of drug release kinetic Cumulative percentages of the drug released from the niosomes were calculated by the following Eq. (1): $${\text{Release}} = \frac{{{\text{M}}_{\text{t}} }}{{{\text{M}}_{\text{f}} }}$$ where Mt and Mf are the cumulative amounts of drug released at any time (t) and the final amounts of drug released, respectively. To determine the release kinetic, the release data were fitted to the mathematical models by the linear regression analysis of Graph pad prism 6.0, as follows: Zero-order rate equation: $$Q_{t} = Q_{0} + K_{0} t$$ where Qt is the amount of the remaining drug in the formulation at time t; Q0 is the initial amount of drug in the formulation; and K0 is the zero-order release constant. First-order rate equation: $$\log {\text{C}} = \log {\text{C}}_{0 } - \frac{{{\text{K}}_{\text{t}} }}{2.303}$$ where C0 is the initial drug concentration; K is the first-order release constant; and t is time. Higuchi's model: $${\text{Q}} = {\text{ K}}_{\text{H}} {\text{t}}^{ 1/ 2}$$ where Q is the amount of drug released in time t per unit area, and KH is the Higuchi dissolution constant. Hixson–Crowell model: $${\text{Q}}_{0}^{ 1/ 3} - {\text{ Q}}_{\text{t}}^{ 1/ 3} = {\text{ K}}_{\text{s}} {\text{t}}$$ Q0 is the initial amount of the drug in the niosomes; Qt is the cumulative amount of the drug released at time t; and Ks is the Hixson–Crowell release constant. Finally, the correlation coefficients' values were compared to determine the release model that best fits the data [40, 42]. The samples' functional group characterizations were investigated using FTIR spectrometer (Model 8300, Shimadzu Corporation, Tokyo, Japan) for pure CUR, pure PTX, blank noisome, niosomal-CUR, and niosomal-PTX. For preparation, the samples were lyophilized as a dry powder and mixed with potassium bromide (KBr). Then, the samples were placed in a hydraulic press to form the pellets. The FTIR spectrum was scanned in the wavelength range of 400–4000 cm−1. To determine the physical stability of niosomal curcumin/paclitaxel during storage, the change in particle size, zeta potential, PDI, and the remaining amount of the drug in vesicle was assessed over 14-, 28-, and 60-day intervals [9, 39]. Cell lines and culture conditions Human breast cancer MCF-7 cells (the Iranian Biological Resource Center, Tehran, Iran) were cultured in DMEM/F12 Ham's mixture (InoClon, Iran) supplemented with 2 mM GlutaMAX™-I (100X, Gibco, USA), 10% FBS (Fetal Bovine Serum, Gibco, USA), and 1 mg mL−1 penicillin/streptomycin (Gibco, USA). 
Non-tumorigenic human breast epithelial MCF-10A cells (the Iranian Biological Resource Center, Tehran, Iran) were grown in DMEM/F12 Ham's mixture supplemented with 2 mM GlutaMAX™-I, 5% horse serum (Gibco, USA), 20 ng mL−1 EGF (epidermal growth factor, Sigma, USA), 10 μg mL−1 insulin (Sigma, USA), 0.5 μg mL−1 hydrocortisone (Sigma, USA), 100 ng mL−1 cholera toxin (Sigma, USA), and 1 mg mL−1 penicillin/streptomycin. The MCF-10A cell line was used for comparison in all experiments.

The cytotoxicity of the various formulations was determined by the MTT (Sigma, USA) assay [49,50,51]. Briefly, MCF-7 and MCF-10A cells were seeded in 96-well plates at 10,000 cells per well. Following attachment for 24 h, the cells were treated with 200 μL of fresh medium containing serial dilutions of the different drug/niosome formulations: free PTX solution, free CUR solution, free PTX + free CUR physical mixture, niosomal CUR, niosomal PTX, and the co-administration of niosomal CUR + niosomal PTX. After incubation for 48 h, 20 μL of MTT (5 mg mL−1 in PBS) was added to each well and incubated for 3 h at 37 °C. Finally, the medium was carefully removed, and 180 μL of DMSO was added to each well to dissolve the formazan crystals formed. The absorbance of each well was recorded with an Epoch microplate spectrophotometer (Synergy HTX, BioTek, USA) at 570 nm. The cytotoxicity of the different formulations was expressed as the half-maximal inhibitory concentration (IC50), defined as the drug concentration required to inhibit cell growth by 50% relative to the control. The IC50 values of PTX and CUR as single drugs or in combination were calculated using GraphPad Prism 6. The curcumin–paclitaxel combination was evaluated by calculating the combination index (CI) with the CompuSyn software, using the method of Chou and Talalay:

$$\text{CI} = \frac{a}{A} + \frac{b}{B}$$

where $a$ is the PTX IC50 in combination with CUR at concentration $b$; $A$ is the PTX IC50 without CUR; and $B$ is the CUR IC50 in the absence of PTX. According to the Chou and Talalay equation, when CI < 1 the interaction between the two drugs is synergistic; when CI = 1 the interaction is additive; and when CI > 1 the two drugs are antagonistic [52,53,54].

For the cellular uptake study, MCF-7 and MCF-10A cells were seeded at a density of 2 × 10^5 cells per well in a 6-well plate and incubated for 24 h to allow them to attach. The cells were then treated with the different NioCUR and NioPTX formulations. After 3 h of incubation, the cells were washed three times with cold PBS and fixed with a 4% paraformaldehyde solution (Sigma, USA). Then, the cells were stained with DAPI (0.125 µg mL−1, Thermo Fisher Scientific, USA) and imaged with a fluorescence microscope (BX61, Olympus, Japan) [48, 49, 51].

An annexin V-FITC/PI double-staining assay was carried out to determine whether apoptosis was induced by curcumin or paclitaxel, alone or in combination, when administered in aqueous solution or in nano-niosome formulations. The results in Fig. 9 show quantitative apoptotic activity in MCF-7 cells, assessed by flow cytometry following treatment of the cells for 24 h. In apoptotic cells, the membrane phospholipid phosphatidylserine (PS) is translocated from the inner to the outer surface of the plasma membrane, thereby exposing PS to the external cellular environment. Annexin V is a 35–36 kDa Ca2+-dependent phospholipid-binding protein with high affinity for PS, and it binds to exposed apoptotic cell-surface PS.
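Returning briefly to the combination-index analysis above, the minimal sketch below applies the Chou–Talalay equation and classifies the interaction. The IC50 values and concentrations are hypothetical, not results from this study.

```python
# Chou-Talalay combination index, as defined in the equation above.
# All numerical values are hypothetical placeholders.
def combination_index(a, A, b, B):
    """CI = a/A + b/B.
    a: PTX IC50 in combination, A: PTX IC50 alone,
    b: CUR concentration paired with a, B: CUR IC50 alone."""
    return a / A + b / B

ci = combination_index(a=2.0, A=8.0, b=5.0, B=25.0)   # illustrative concentrations
verdict = "synergistic" if ci < 1 else ("additive" if ci == 1 else "antagonistic")
print(f"CI = {ci:.2f} -> {verdict}")
```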
Annexin V can be conjugated to fluorochromes such as FITC while retaining its high affinity for PS, thus serving as a sensitive probe for the flow cytometric analysis of cells undergoing apoptosis. Furthermore, propidium iodide (PI) is a fluorescent intercalating agent that can be used as a DNA stain in flow cytometry. PI cannot cross the intact membrane of live or early apoptotic cells, but it does stain dead and late apoptotic cells, making it useful for differentiating healthy, early apoptotic, late apoptotic, and necrotic cells. In the two-parameter flow cytometry scatter plot, the Q4 quadrant (FITC−/PI−) shows living cells; the Q2 quadrant (FITC+/PI+) corresponds to late apoptotic cells; the Q3 quadrant (FITC+/PI−) represents early apoptotic cells; and the Q1 quadrant (FITC−/PI+) shows necrotic cells. The flow cytometry plots demonstrate that apoptosis of MCF-7 cells was enhanced when PTX and CUR were administered in nano-niosome formulations as compared to the free drugs (p < 0.05). Furthermore, when PTX and CUR were co-administered in nano-niosome formulations, there was a significant increase in apoptosis (i.e., 15.27% early apoptosis for niosomal curcumin and 31.03% early apoptosis for niosomal paclitaxel versus 49.79% early apoptosis for niosomal curcumin + niosomal paclitaxel, p < 0.05). These results are consistent with the growth-inhibitory effects of paclitaxel in combination with curcumin.

Fig. 9. Apoptosis assay using flow cytometry following treatment of the cells for 24 h. (a) Control; (b) free curcumin + free paclitaxel; (c) free curcumin; (d) free paclitaxel; (e) niosomal curcumin; (f) niosomal paclitaxel; (g) niosomal curcumin + niosomal paclitaxel.

Statistical data analyses were performed with GraphPad Prism 6 software and expressed as mean ± SD. A Student's t test was used when comparing two independent groups, and an ANOVA test was used when comparing multiple samples. A p value < 0.05 was considered significant.

Abbreviations
PTX: paclitaxel; CUR: curcumin; DDS: drug delivery system; PEG: polyethylene glycol; EE: entrapment efficiency; Cryo-TEM: cryogenic transmission electron microscopy; FTIR: Fourier-transform infrared; CI: combination index.

References
Ruttala HB, Ko YT. Liposomal co-delivery of curcumin and albumin/paclitaxel nanoparticle for enhanced synergistic antitumor efficacy. Colloids Surf B. 2015;128:419–26.
Jaishree V, Gupta PD. Nanotechnology: a revolution in cancer diagnosis. Indian J Clin Biochem. 2012;27:214–20.
Eldar-Boock A, Polyak D, Scomparin A, Satchi-Fainaro R. Nano-sized polymers and liposomes designed to deliver combination therapy for cancer. Curr Opin Biotechnol. 2013;24:682–9.
Jain S, Jain V, Mahajan SC. Lipid based vesicular drug delivery systems. Adv Pharm. 2014;2014:12.
Sezgin-Bayindir Z, Yuksel N. Investigation of formulation variables and excipient interaction on the production of niosomes. AAPS PharmSciTech. 2012;13:826–35.
Kumar GP, Rajeshwarrao P. Nonionic surfactant vesicular systems for effective drug delivery—an overview. Acta Pharm Sinica B. 2011;1:208–19.
Sharma V, Anandhakumar S, Sasidharan M. Self-degrading niosomes for encapsulation of hydrophilic and hydrophobic drugs: an efficient carrier for cancer multi-drug delivery. Mater Sci Eng C. 2015;56:393–400.
Abdelbary AA, AbouGhaly MH. Design and optimization of topical methotrexate loaded niosomes for enhanced management of psoriasis: application of Box-Behnken design, in vitro evaluation and in vivo skin deposition study. Int J Pharm. 2015;485:235–43.
Shilakari Asthana G, Sharma PK, Asthana A. In vitro and in vivo evaluation of niosomal formulation for controlled delivery of clarithromycin.
Scientifica. 2016;2016:6492953. Tavano L, Aiello R, Ioele G, Picci N, Muzzalupo R. Niosomes from glucuronic acid-based surfactant as new carriers for cancer therapy: preparation, characterization and biological properties. Colloids Surf B Biointerfaces. 2014;118:7–13. Tavano L, Muzzalupo R, Picci N, de Cindio B. Co-encapsulation of antioxidants into niosomal carriers: gastrointestinal release studies for nutraceutical applications. Colloids Surf B Biointerfaces. 2014;114:82–8. Tavano L, Muzzalupo R, Picci N, de Cindio B. Co-encapsulation of lipophilic antioxidants into niosomal carriers: percutaneous permeation studies for cosmeceutical applications. Colloids Surf B Biointerfaces. 2014;114:144–9. Ahmed M, Moussa M, Goldberg SN. Synergy in cancer treatment between liposomal chemotherapeutics and thermal ablation. Chem Phys Lipids. 2012;165:424–37. Wang AZ, Langer R, Farokhzad OC. Nanoparticle delivery of cancer drugs. Annu Rev Med. 2012;63:185–98. Bansal A, Kapoor DN, Kapil R, Chhabra N, Dhawan S. Design and development of paclitaxel-loaded bovine serum albumin nanoparticles for brain targeting. Acta Pharm. 2011;61:141–56. Heo DN, Yang DH, Moon HJ, Lee JB, Bae MS, Lee SC, et al. Gold nanoparticles surface-functionalized with paclitaxel drug and biotin receptor as theranostic agents for cancer therapy. Biomaterials. 2012;33:856–66. Ferreira N, Goncalves NP, Saraiva MJ, Almeida MR. Curcumin: a multi-target disease-modifying agent for late-stage transthyretin amyloidosis. Sci Rep. 2016;6:26623. Yang X, Li Z, Wang N, Li L, Song L, He T, et al. Curcumin-encapsulated polymeric micelles suppress the development of colon cancer in vitro and in vivo. Sci Rep. 2015;5:10322. Zaman MS, Chauhan N, Yallapu MM, Gara RK, Maher DM, Kumari S, et al. Curcumin nanoformulation for cervical cancer treatment. Sci Rep. 2016;6:20051. Naksuriya O, Okonogi S, Schiffelers RM, Hennink WE. Curcumin nanoformulations: a review of pharmaceutical properties and preclinical studies and clinical data related to cancer treatment. Biomaterials. 2014;35:3365–83. Yallapu MM, Jaggi M, Chauhan SC. Curcumin nanoformulations: a future nanomedicine for cancer. Drug Discov Today. 2012;17:71–80. Dhillon N, Aggarwal BB, Newman RA, Wolff RA, Kunnumakkara AB, Abbruzzese JL, et al. Phase II trial of curcumin in patients with advanced pancreatic cancer. Clin Cancer Res. 2008;14:4491–9. Shehzad A, Wahid F, Lee YS. Curcumin in cancer chemoprevention: molecular targets, pharmacokinetics, bioavailability, and clinical trials. Arch Pharm (Weinheim). 2010;343:489–99. Baek JS, Cho CW. Controlled release and reversal of multidrug resistance by co-encapsulation of paclitaxel and verapamil in solid lipid nanoparticles. Int J Pharm. 2015;478:617–24. Jia L, Li Z, Shen J, Zheng D, Tian X, Guo H, et al. Multifunctional mesoporous silica nanoparticles mediated co-delivery of paclitaxel and tetrandrine for overcoming multidrug resistance. Int J Pharm. 2015;489:318–30. Esatbeyoglu T, Huebbe P, Ernst IM, Chin D, Wagner AE, Rimbach G. Curcumin—from molecule to biological function. Angew Chem Int Ed Engl. 2012;51:5308–32. Ganta S, Amiji M. Coadministration of paclitaxel and curcumin in nanoemulsion formulations to overcome multidrug resistance in tumor cells. Mol Pharm. 2009;6:928–39. Muthoosamy K, Abubakar IB, Bai RG, Loh HS, Manickam S. Exceedingly higher co-loading of curcumin and paclitaxel onto polymer-functionalized reduced graphene oxide for highly potent synergistic anticancer treatment. Sci Rep. 2016;6:32808. 
Balakrishnan P, Shanmugam S, Lee WS, Lee WM, Kim JO, Oh DH, et al. Formulation and in vitro assessment of minoxidil niosomes for enhanced skin delivery. Int J Pharm. 2009;377:1–8. Imran M, Shah MR, Ullah F, Ullah S, Elhissi AMA, Nawaz W, et al. Glycoside-based niosomal nanocarrier for enhanced in vivo performance of Cefixime. Int J Pharm. 2016;505:122–32. Marianecci C, Di Marzio L, Rinaldi F, Celia C, Paolino D, Alhaique F, et al. Niosomes from 80s to present: the state of the art. Adv Coll Interface Sci. 2014;205:187–206. Tavano L, Aiello R, Ioele G, Picci N, Muzzalupo R. Niosomes from glucuronic acid-based surfactant as new carriers for cancer therapy: preparation, characterization and biological properties. Colloids Surf B. 2014;118:7–13. Gabizon A, Shmeeda H, Grenader T. Pharmacological basis of pegylated liposomal doxorubicin: impact on cancer therapy. Eur J Pharm Sci. 2012;45:388–98. Kim JY, Kim JK, Park JS, Byun Y, Kim CK. The use of PEGylated liposomes to prolong circulation lifetimes of tissue plasminogen activator. Biomaterials. 2009;30:5751–6. Ojeda E, Puras G, Agirre M, Zarate J, Grijalvo S, Eritja R, et al. The role of helper lipids in the intracellular disposition and transfection efficiency of niosome formulations for gene delivery to retinal pigment epithelial cells. Int J Pharm. 2016;503:115–26. Ojeda E, Puras G, Agirre M, Zarate J, Grijalvo S, Eritja R, et al. The influence of the polar head-group of synthetic cationic lipids on the transfection efficiency mediated by niosomes in rat retina and brain. Biomaterials. 2016;77:267–79. Ojeda E, Puras G, Agirre M, Zarate J, Grijalvo S, Pons R, et al. Niosomes based on synthetic cationic lipids for gene delivery: the influence of polar head-groups on the transfection efficiency in HEK-293, ARPE-19 and MSC-D1 cells. Org Biomol Chem. 2015;13:1068–81. Zhi D, Zhang S, Wang B, Zhao Y, Yang B, Yu S. Transfection efficiency of cationic lipids with different hydrophobic domains in gene delivery. Bioconjug Chem. 2010;21:563–77. Ertekin ZC, Bayindir ZS, Yuksel N. Stability studies on piroxicam encapsulated niosomes. Curr Drug Deliv. 2015;12:192–9. Kamboj S, Saini V, Bala S. Formulation and characterization of drug loaded nonionic surfactant vesicles (niosomes) for oral bioavailability enhancement. Sci World J. 2014;2014:8. Panwar P, Pandey B, Lakhera PC, Singh KP. Preparation, characterization, and in vitro release study of albendazole-encapsulated nanosize liposomes. Int J Nanomed. 2010;5:101–8. Shaker DS, Shaker MA, Hanafy MS. Cellular uptake, cytotoxicity and in vivo evaluation of Tamoxifen citrate loaded niosomes. Int J Pharm. 2015;493:285–94. Bayindir ZS, Be AB, Yuksel N. Paclitaxel-loaded niosomes for intravenous administration: pharmacokinetics and tissue distribution in rats. Turk J Med Sci. 2015;45:1403–12. Bayindir ZS, Yuksel N. Characterization of niosomes prepared with various nonionic surfactants for paclitaxel oral delivery. J Pharm Sci. 2010;99:2049–60. Sezgin-Bayindir Z, Onay-Besikci A, Vural N, Yuksel N. Niosomes encapsulating paclitaxel for oral bioavailability enhancement: preparation, characterization, pharmacokinetics and biodistribution. J Microencapsul. 2013;30:796–804. Duan J, Mansour HM, Zhang Y, Deng X, Chen Y, Wang J, et al. Reversion of multidrug resistance by co-encapsulation of doxorubicin and curcumin in chitosan/poly(butyl cyanoacrylate) nanoparticles. Int J Pharm. 2012;426:193–201. Uchegbu IF, Vyas SP. Non-ionic surfactant based vesicles (niosomes) in drug delivery. Int J Pharm. 1998;172:33–70. 
Lin YL, Liu YK, Tsai NM, Hsieh JH, Chen CH, Lin CM, et al. A Lipo-PEG-PEI complex for encapsulating curcumin that enhances its antitumor effects on curcumin-sensitive and curcumin-resistance cells. Nanomedicine. 2012;8:318–27.
Baek JS, Cho CW. A multifunctional lipid nanoparticle for co-delivery of paclitaxel and curcumin for targeted delivery and enhanced cytotoxicity in multidrug resistant breast cancer cells. Oncotarget. 2017;8:30369–82.
Lv S, Tang Z, Li M, Lin J, Song W, Liu H, et al. Co-delivery of doxorubicin and paclitaxel by PEG-polypeptide nanovehicle for the treatment of non-small cell lung cancer. Biomaterials. 2014;35:6118–29.
Wang J, Wang F, Li F, Zhang W, Shen Y, Zhou D, et al. A multifunctional poly(curcumin) nanomedicine for dual-modal targeted delivery, intracellular responsive release, dual-drug treatment and imaging of multidrug resistant cancer cells. J Mater Chem B. 2016;4:2954–62. https://doi.org/10.1039/c5tb02450a.
Chou TC, Motzer RJ, Tong Y, Bosl GJ. Computerized quantitation of synergism and antagonism of taxol, topotecan, and cisplatin against human teratocarcinoma cell growth: a rational approach to clinical protocol design. J Natl Cancer Inst. 1994;86:1517–24.
Chou TC, Talalay P. Quantitative analysis of dose-effect relationships: the combined effects of multiple drugs or enzyme inhibitors. Adv Enzyme Regul. 1984;22:27–55.
Foucquier J, Guedj M. Analysis of drug combinations: current methodological landscape. Pharmacol Res Perspect. 2015;3:e00149.
All authors had equal role in design, work, statistical analysis and manuscript writing. All authors read and approved the final manuscript.
All data generated or analyzed during this study are included in this article.
The manuscript was approved by the Shahid Sadoughi University of Medical Sciences Internal Review Board. There are no human subjects or animals involved in the study.
This study was financially supported by grant from the Shahid Sadoughi University of Medical Sciences, Yazd, Iran.
Department of Clinical Biochemistry, Faculty of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran: Ashraf Alemi & Javad Zavar Reza.
Biotechnology Research Center, International Campus, Shahid Sadoughi University of Medical Science, Yazd, Iran: Javad Zavar Reza & Mojtaba Haghi Karamallah.
Department of Life Science Engineering, Faculty of New Sciences & Technologies, University of Tehran, Tehran, Iran: Fateme Haghiralsadat.
Protein Engineering Laboratory, Department of Medical Genetics, School of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd, Iran: Hossein Zarei Jaliani.
Nutrition and Metabolic Diseases Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran: Seyed Ahmad Hosseini.
Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran: Somayeh Haghi Karamallah.
Correspondence to Javad Zavar Reza.
Alemi, A., Zavar Reza, J., Haghiralsadat, F. et al. Paclitaxel and curcumin coadministration in novel cationic PEGylated niosomal formulations exhibit enhanced synergistic antitumor efficacy. J Nanobiotechnol 16, 28 (2018). doi:10.1186/s12951-018-0351-4
CommonCrawl
Measure and integration: a concise introduction to real analysis, by Leonard F. Richardson. Published 2008 by Wiley in Hoboken, N.J. Subjects: Lebesgue integral, Measure theory. LC Classification: QA312 .R45 2008.

This open access textbook welcomes students into the fundamental theory of measure, integration, and real analysis.

Measure and Integration, Introduction, Chapter 1: The most important analytic tool used in this book is integration. The student of analysis meets this concept in a calculus course where an integral is defined as a Riemann integral. While this point of view of integration may …

Measure and integration, by Leonard F. Richardson. The book includes a self-contained proof of the Calderón–Zygmund inequality in Chapter 7 and an existence and uniqueness proof for (left and right) Haar measures on locally compact Hausdorff groups in Chapter 8. The book is intended as a companion for a foundational one-semester lecture course on measure and integration, and there are many topics that it …

Measure and Integration. This graduate-level lecture note covers Lebesgue's integration theory with applications to analysis, including an introduction to convolution and the Fourier transform. This book describes the following topics: standard forms, change of the independent variable, integration by parts, and powers of sines and cosines.

Motivated by a brief review of Riemann integration and its deficiencies, the text begins by immersing students in the concepts of measure and integration. Lebesgue measure and abstract measures are developed together, with each providing key insight into the ideas of the other approach. Lebesgue integration links into results such as …

Measure, Integration & Real Analysis (Graduate Texts in Mathematics), by Sheldon Axler. This book seeks to provide students with a deep understanding of the definitions, theorems, and proofs related to measure, integration, and real analysis. The content and level of this book fit well with the first-year graduate course on these topics at most American universities.

Contents: 1. Measure on a σ-Algebra of Sets; 2. Lebesgue Measure on R; 3. Measurable Functions; 4. Convergence a.e.
and Convergence in Measure; 5. Integration of Bounded Functions on Sets of Finite Measure; 6. Integration of Nonnegative Functions; 7. Integration of Measurable Functions; 8. Signed Measures and Radon-Nikodym Theorem.

This open access textbook welcomes students into the fundamental theory of measure, integration, and real analysis. Focusing on an accessible approach, Axler lays the foundations for further study by promoting a deep understanding of key results. Content is carefully curated to suit a single course, or a two-semester sequence of courses, creating a versatile entry point for graduate … (Springer International Publishing).

A very good book is "Measure and Integration Theory" from Heinz Bauer, especially if you are planning to study probability theory. One of its strengths is that the theory is first developed without using topology and then applied to topological spaces. In my opinion this leads to a better understanding of Radon measures, for example.

A measure space is denoted by (X, M, μ), where X is the space of points, M is the σ-algebra of measurable sets, and μ is the measure, defined on M. A measure on a topological space for which the measurable sets form the Borel algebra BX is called a Borel measure. Borel measures play a pre-eminent role in measure theory on R^n. Defining the integral in terms of step functions provides an immediate link to elementary integration theory as taught in calculus courses. The more abstract concept of Lebesgue measure, which generalises the primitive notions of length, area and volume, is deduced later. The explanations are simple and detailed, with particular stress on …

If you are looking for a book in measure theory, you should certainly get a copy of the book of that title by Halmos. You may need a second book for details on stochastic processes, but for the underlying analysis it will be hard to find a more comprehensive book, or a better-regarded author.

Measure and Integration: A Concise Introduction to Real Analysis presents the basic concepts and methods that are important for successfully reading and understanding proofs. Blending coverage of both fundamental and specialized topics, this book serves as a practical and thorough introduction to measure and integration, while also facilitating …
A user-friendly introduction to Lebesgue measure and integration / Gail S. Nelson. pages cm. – (Student mathematical library ; volume 78) Includes bibliographical references and index. ISBN (alk. paper) 1. Measure theory. Lebesgue integral. Integration, Functional. Title. QCM43N45 –dc23 File Size: 1MB. This open access textbook welcomes students into the fundamental theory of measure, integration, and real analysis. Focusing on an accessible approach, Axler lays the foundations for further study by promoting a deep understanding of key results. Content is carefully curated to suit a single Brand: Springer International Publishing. The book is very understandable, requiring only a basic knowledge of analysis. It can be warmly recommended to a broad spectrum of readers, to graduate students as well as young researchers. -- EMS Newsletter. This monograph provides a quite comprehensive presentation of measure and integration theory and of some of their applications. An Introduction to Measure Theory. Terence Tao. This is a preliminary version of the book An Introduction to Measure Theory published by the American Mathematical Society (AMS). This preliminary version is made available with sure and integration theory, both in Euclidean spaces and in abstract measure spaces. This text is based on my. measure, integration, and real analysis. This book aims to guide you to the wonders of this subject. You cannot read mathematics the way you read a novel. If you zip through a page in less than an hour, you are probably going too fast. When you encounter the phrase as you should verify, you should indeed do the verification, which will usually File Size: 5MB. This text approaches integration via measure theory as opposed to measure theory via integration, an approach which makes it easier to grasp the subject. Apart from its central importance to pure mathematics, the material is also relevant to applied mathematics and probability, with proof of the mathematics set out clearly and in considerable. who belongs to the target group of integration policies and b. what exactly is meant by the term "integration" are of great importance. Without common standards as to what is meant by "migrant" and by "integration" all attempts to measure migrants' integration in different countries are likely to be of little meaning. The fundamentals of measure and integration theory are discussed, along with the interplay between measure theory and topology. Comprised of four chapters, this book begins with an overview of the basic concepts of the theory of measure and integration as a prelude to the study of probability, harmonic analysis, linear space theory, and other Book Edition: 1. Measure, Integration & Real Analysis book. Read reviews from world's largest community for readers.5/5(2). The book starts with a review of Riemann integration as a motivation for the necessity of introducing the concepts of measure and integration in a general setting. Integration, Measure and Probability (Dover Books on Mathematics) by Pitt, H. and a great selection of related books, art and collectibles available now at This textbook provides a thorough introduction to measure and integration theory, fundamental topics of advanced mathematical analysis. Proceeding at a leisurely, student-friendly pace, the authors begin by recalling elementary notions of real analysis before proceeding to measure theory and Lebesgue integration. The lecture notes were prepared in LaTeX by Ethan Brown, a former student in the class. 
He used Professor Viaclovsky's handwritten notes in producing them. Why Measure Theory. Measure Spaces and Sigma-algebras. Operations on Measurable Functions (Sums, Products, Composition) Real-valued Measurable Functions. Limits of Measurable Functions. This book gives a straightforward introduction to the field as it is nowadays required in many branches of analysis and especially in probability theory. The first three chapters (Measure Theory, Integration Theory, Product Measures) basically follow the clear and approved exposition given in the author's earlier book on "Probability Theory and Measure Theory".5/5(1). "This book is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace in a form that is suitable for self-study, with an emphasis on clear explanations and concrete examples rather than abstract theory. This text approaches integration via measure theory as opposed to measure theory via integration, an approach which makes it easier to grasp the subject. Apart from its central importance to pure mathematics, the material is also relevant to applied mathematics and probability, with proof of the mathematics set out clearly and in considerable detail. Numerous. The elements of integration and Lebesgue measure. Wiley Classics Library. New York: John Wiley & Sons Inc. xii+ ISBN MR Bauer, Heinz (). Measure and Integration Theory. De Gruyter Studies in Mathematics Berlin: De Gruyter. ISBN Bourbaki, Nicolas (). Integration. Chapters 1–6. An Introduction to Measure & Integration by Rana, Inder K. and a great selection of related books, art and collectibles available now at lebesgue measure and integration Download lebesgue measure and integration or read online books in PDF, EPUB, Tuebl, and Mobi Format. Click Download or Read Online button to get lebesgue measure and integration book now. This site is like a library, Use search box in the widget to get ebook that you want. about abstract measure and integration, while Chapter 5 and 6 complement the material in two opposite directions. Chapter 3 is a little more demanding. Finally, in Chapter 7, we are ready to see the results of the theory. This book is written for the instructor rather than for the student in aAuthor: Jose L Menaldi. Lebesgue Measure and Integration is the ideal text for an advanced undergraduate analysis course or for a first-year graduate course in mathematics, statistics, probability, and other applied areas. It will also serve well as a supplement to courses in advanced measure theory and integration and as an invaluable reference long after course work. Here are some solutions to exercises in the book: Measure and Integral, An Introduction to Real Analysis by Richard L. Wheeden and Antoni Zygmund. Chapter 1,2: analysis1 Chapter 3: analysis2 Chapter 4, 5: analysis3 Chapter 5,6: analysis4 Chapter 6,7: analysis5 Chapter 8: analysis6 Chapter 9: analysis7 Measure and Integral: An Introduction to Real. This text contains a basic introduction to the abstract measure theory and the Lebesgue integral. Most of the standard topics in the measure and integration theory are discussed. In addition, topics on the Hewitt-Yosida decomposition, the Nikodym and Vitali-Hahn-Saks theorems and material on finitely additive set functions not contained in standard texts are explored. Chapter 7. 
Measure and integration on product spaces: § Introduction; § Product of measure spaces; § Integration on product spaces: Fubini's theorems; § Lebesgue measure on R^2 and its properties; § Product of finitely many measure spaces. Chapter 8.

The book is short and very readable, and it introduces Lebesgue integration on the real line in a very understandable way. In general, I think that it is much better to introduce measure theory and Lebesgue integration in the specific context of the real line and $\mathbb{R}^n$, perhaps moving on to general measure spaces after this is done. Thinking back very far, to when I was a student learning measure theory, I really liked "Introduction to measure and probability" by Kingman and Taylor. The measure theory part was also published as a separate book, "Introduction to measure and integration" by (only) Taylor.

Principles of Analysis: Measure, Integration, Functional Analysis, and Applications prepares readers for advanced courses in analysis, probability, harmonic analysis, and applied mathematics at the doctoral level. The book also helps them prepare for qualifying exams in real analysis. It is designed …
CommonCrawl
\begin{document} \title{Effect of depolarizing and quenching collisions\\ on contrast of the coherent population trapping resonance} \author{K.\,M.~Sabakar$^{1}$} \author{M.\,I.~Vaskovskaya$^{1}$} \author{D.\,S.~Chuchelov$^{1}$} \author{E.\,A.~Tsygankov$^{1}$} \email[]{[email protected]} \author{V.\,V.~Vassiliev$^{1}$} \author{S.\,A.~Zibrov$^{1}$} \author{V.\,L.~Velichansky$^{1,2}$} \affiliation{1. Lebedev Physical Institute of the Russian Academy of Sciences, Leninsky Prospect 53, Moscow, 119991 Russia} \affiliation{2.~National~Research~Nuclear~University~MEPhI,~Kashirskoye~Highway~31,~Moscow,~115409~Russia} \begin{abstract} We investigate the effect of buffer gases on the coherent population trapping resonance induced by a $\sigma$-polarized optical field in $^{87}$Rb atoms. Our experimental results show that inert gases, which depolarize the excited state of the alkali-metal atoms, provide higher contrast than nitrogen that effectively quenches their fluorescence. We also demonstrate that elimination of the spontaneous radiation does not significantly decrease the width at moderate temperatures of an atomic medium. Therefore, a mixture of inert gases can be preferable over a mixture with nitrogen for atomic clocks. \end{abstract} \maketitle All-optical interrogation schemes utilizing the effect of coherent population trapping (CPT)~\cite{ARIMONDO1996257}, progress in miniature vapor-cell production technology and advances in diode lasers~\cite{michalzik2013vcsel} have led to the development of chip-scale atomic clocks~\cite{10.1117/12.726792}. Their main advantages over other frequency standards are smaller size and lower power consumption, but they have lower frequency stability. Currently, many research groups are seeking for new approaches to improve the long-term frequency stability of such atomic clocks~\cite{Zhang:s,8786876,yanagimachi,gozzelino2020kr}. In the absence of a frequency drift, the stability is proportional to $1/\sqrt{\tau}$, where $\tau$ is the averaging time~\cite{shah2010advances}. In this case, a further improvement of the long-term frequency stability can be achieved only by the increase of the short-term stability, which depends on the contrast-to-width-ratio of the CPT resonance. In what follows, we call it the quality factor, or Q-factor. The standard approach to reduce the relaxation rate of the ground-state coherence occurring due to collisions of alkali-metal atoms with atomic cell walls is the usage of a buffer gas. It provides diffusion of atoms to the walls with a slower speed than their unperturbed movement with the thermal velocity. The probability to lose the polarization upon collisions with buffer gas particles is less than under interaction with the cell walls. However, an increase of the buffer gas pressure results eventually in the collisional rebroadening of the CPT resonance. These opposite dependencies give the value of buffer gas pressure that provides the minimum width. This value depends on the dimensions and geometry of the cell~\cite{vanier1989quantum}. The buffer gas induces a temperature-dependent shift of the CPT resonance frequency, which can be suppressed by using a mixture of two gases with linear temperature coefficients of opposite signs~\cite{doi:10.1063/1.331467}. Most often, a mixture of argon and nitrogen is used. 
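The partial-pressure ratio that cancels the linear temperature dependence of such a two-gas mixture follows directly from the pressure-weighted sum of the two linear temperature coefficients. The sketch below illustrates the arithmetic only; the coefficient values are rough placeholders for the sake of the example, not data from this work.

```python
# Illustrative estimate of the Ar/N2 partial-pressure ratio that cancels the linear
# temperature coefficient of the buffer-gas frequency shift. The delta values below
# are assumed placeholders, not measured coefficients from this paper.
def zero_tc_ratio(delta_1, delta_2):
    """Return P1/P2 such that delta_1*P1 + delta_2*P2 = 0 (requires opposite signs)."""
    return -delta_2 / delta_1

delta_ar = -0.35   # Hz/(Torr*K), assumed magnitude and sign for Ar
delta_n2 = +0.50   # Hz/(Torr*K), assumed magnitude and sign for N2
print("P_Ar / P_N2 for a zero linear temperature coefficient:",
      round(zero_tc_ratio(delta_ar, delta_n2), 2))
```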
\vbox{It is known that inert gases depolarize the excited state of alkali-metal atoms and tend to equalize populations of its magnetic sublevels~\cite{franzen1957atomic,zhitnikov1970optical,okunevich1970relaxation,RevModPhys.44.169}. The equalization of populations should increase the CPT resonance amplitude detected in $\sigma^+$-$\sigma^+$ scheme. Indeed, the depolarization reduces the number of atoms pumped to the sublevel $5$P$_{1/2}$ \hbox{$m_{F_e}=2$} (we consider the case of $^{87}$Rb atoms). Therefore, a smaller amount of atoms is optically pumped to the non-absorbing sublevel \hbox{$m_{F_g}=2$} of the ground state and more atoms arrive at the working sublevels \hbox{$F_g={1,\,2},\,m_{F_g}=0$} due to spontaneous transitions. Fig.~\ref{LevelsScheme} demonstrates distribution of populations over magnetic sublevels for the case of zero and complete excited-state depolarization. They were obtained by solving density-matrix equations accounting for all electric-dipole transitions of D$_1$ line induced by a bichromatic $\sigma^+$-polarized optical field. Power broadening of the CPT resonance was set to be $3$-times greater than relaxation rate of ground-state elements to make difference in populations evident. Details of calculations are given in \hyperref[Appendix]{Appendix}.} Nitrogen quenches fluorescence of alkali-metal atoms due to the transfer of the excited-state energy to molecular vibrations. This prevents broadening of the CPT resonance induced by the spontaneous radiation, therefore, nitrogen is often considered as a preferable buffer gas for atomic clocks~\cite{vanier1989quantum,doi:10.1063/1.5026238}. Transitions from the excited to the ground states during quenching have the same selection rules as spontaneous decay~\cite{PhysRevA.11.1,PhysRevA.25.2985}, but the effect of nitrogen on the population distribution of the excited state has not been studied in detail and is poorly described in the literature. We assume that molecular gases can to some extent prevent the excited-state depolarization. In this case, nitrogen should reduce the CPT resonance amplitude compared to inert gases while improving its width due to the elimination of the spontaneous radiation. The influence of these two factors on Q-factor is opposite. The goal of this paper is to estimate which of them is more significant. To check this, we compared the contrast and width of the CPT resonance in \hbox{Ar, Ne, and N$_2$}. \begin{figure} \caption{Energy level structure and electric-dipole transitions to sublevels of hyperfine component \hbox{$F_e=2$} induced by an optical field with $\sigma^+$ polarization. Columns show distribution of populations over $5$S$_{1/2}$ and $5$P$_{1/2}$ states. They were calculated in a model without (a) and with (b) accounting for the depolarization; see \hyperref[Appendix]{Appendix} for details. The heights of the columns for the excited state are increased by about five orders of magnitude.} \label{LevelsScheme} \end{figure} \section{Experiment} \label{SectionExamples} The experimental setup is schematically shown in Fig.~\ref{ExpSetup}. We used a single-mode vertical-cavity surface-emitting laser generating at \hbox{$\simeq795$~nm}. The DC and RF components of the injection current were fed to the laser via a bias tee. The modulation frequency was close to $3.417$~GHz, and the first sidebands of the polychromatic optical field were tuned to transitions $F_g=2\rightarrow F_e=2$, $F_g=1\rightarrow F_e=2$ of the $^{87}$Rb D$_1$~line. 
The power of the RF field was set to provide the highest amplitudes of the first-order sidebands. A polarizer and a quarter-wave plate were used to form the CPT resonance in the $\sigma^+$-$\sigma^+$ scheme. The diameter of the laser beam was $3$~mm. The laser wavelength was stabilized by a feedback loop that controls the temperature of the laser diode. \begin{figure} \caption{The layout of experimental setup.} \label{ExpSetup} \end{figure} An atomic cell was placed in a longitudinal magnetic field of $0.02$~G to separate the metrological CPT resonance from magneto-sensitive ones at the transitions between sublevels $m_{F_g}=\pm1$. The temperature of the atomic cell was maintained with an accuracy of $0.01$~$^{\circ}$C. The cell, heater, and solenoid were placed in a three-layer $\mu$-metal magnetic shield, providing better than $500$-fold suppression of the laboratory magnetic field. We have manufactured three sets of cylindrical atomic cells with CO$_2$ laser-welded windows ($8$~mm diameter, $15$~mm length, $0.7$~mm wall thickness) and filled them with isotopically enriched $^{87}$Rb and one of the buffer gases: N$_2$, Ar, or Ne. The buffer gas pressures are $30$, $60$, and $90$~Torr. We used pinch-off glass welding to seal the stem at a distance of about $20$~mm from the cell body so as not to heat it. This ensures that the actual gas pressure inside the cell differs from the pressure in the filling chamber by no more than $1$\%. \begin{figure} \caption{Metrological (central) and magneto-sensitive CPT resonances obtained in atomic cells filled with nitrogen and argon at a pressure of $90$~Torr. The dashed lines serve as a guide for the eye and show the difference in the amplitudes of the resonances.} \label{Magneto-sensitive} \end{figure} \begin{figure*} \caption{Dependencies of the CPT resonance contrast on the laser intensity for different temperatures and pressures of nitrogen, argon, and neon.} \label{Contrasts} \end{figure*} \begin{figure*}\label{Qfactors} \end{figure*} Fig.~\ref{Magneto-sensitive} shows metrological and magneto-sensitive CPT resonances obtained in two atomic cells filled with nitrogen and argon at a pressure of $90$~Torr. The inhomogeneity of the magnetic field did not lead to a noticeable broadening of magneto-sensitive resonances, thus, we consider their amplitudes to be determined by populations of the corresponding magnetic sublevels. Experimental conditions are the same for both cells: temperature and optical field intensity are $65$~$^\circ$C and $0.3$~mW/cm$^2$. The non-resonant radiation losses in both cells are almost equal. However, some differences in the signals can be seen. First, resonances on sublevels $F_g=1,2, m_{F_g}=-1$ (left) and $F_g=1,2, m_{F_g}=0$ (central) have noticeably greater amplitudes in Ar than in N$_2$. Second, the background transmission in nitrogen is higher when the microwave frequency is detuned from the CPT resonance ($\simeq1.96$~V in N$_2$, $\simeq1.82$~V in Ar). On the contrary, nitrogen should provide a slightly smaller transmission level due to the lower collisional broadening of the $^{87}$Rb D$_1$~line~\cite{pitz2014pressure}. We attribute the mentioned above features to the negative impact of the fluorescence quenching in nitrogen, which reduces efficiency of the excited state depolarization and enhances pumping to the non-absorbing sublevel. 
Dependencies of the contrast (the ratio of the resonance amplitude to the transmission level at resonance peak) of the metrological CPT resonance on the laser field intensity for all pressures are shown in Fig.~\ref{Contrasts}. The difference in contrast between gases is negligible at intensities below~$0.1$~mW/cm$^{2}$ for all pressures. As the intensity increases, the rate of optical pumping becomes significant and a divergence between the dependencies arises. For N$_2$, the contrast reaches a maximum and then slightly decreases. For Ar and Ne the contrast does not decrease even at the highest available intensity of $1$~mW/cm$^{2}$. Neon provides the highest contrast, reaching a value of $6.5$\%, while the maximal contrasts for argon and nitrogen are $5$\% and $3.6$\%, respectively. As the pressure increases, the dependencies remain almost the same and the relation $C_{\text{max}}^{\text{Ne}}>C_{\text{max}}^{\text{Ar}}>C_{\text{max}}^{\text{N}_2}$ does not change, but the maximal contrasts decline. This happens due to the growth of homogeneous broadening, which leads to a decrease in the number of atoms optically pumped into the dark state. The increase in temperature cannot fully compensate for the loss in the number of atoms, since the relaxation rate of the ground-state coherence becomes greater due to the spin-exchange mechanism. Therefore, the maximal absorption contrast is achieved at a higher temperature for a greater buffer gas pressure and falls with the growth of the latter. The difference in contrasts between the inert gases is due to the smaller broadening of the $^{87}$Rb D$_1$ line by Ne~\cite{pitz2014pressure}. \begin{figure*} \caption{Dependencies of the CPT resonance width on the optical field intensity for different temperatures in neon~(a) and nitrogen~(b) and on the temperature for atomic cells with nitrogen, argon and neon~(c). Pressure of all buffer gases is $90$~Torr. The dashed line in (c) is the spin-exchange broadening for $^{87}$Rb calculated via~Eq.~\eqref{SE}.} \label{IWidth} \end{figure*} Fig.~\ref{IWidth}~(a,$\,$b) shows dependencies of the CPT resonance width on the optical field intensity for different temperatures in neon and nitrogen. In neon the width decreases with temperature at intensities above $0.4$~mW/cm$^2$. This feature is due to the light narrowing effect~\cite{godone2002dark}, which takes place when a sufficiently large number of atoms are coherently trapped in a dark state. In nitrogen, as the temperature increases, the width increases in the same way for all intensities, which indicates the much smaller impact of this effect.
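To make the contrast, width, and Q-factor definitions used above concrete, the following sketch extracts all three from a transmission trace. The Lorentzian line shape is only an assumed approximation for the CPT resonance here, and the trace is synthetic.

```python
# Sketch of extracting contrast, width, and Q-factor from a transmission trace.
# The Lorentzian shape and all numerical values are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def transmission(delta, background, amplitude, fwhm, center):
    """Background transmission plus a Lorentzian CPT peak of given FWHM."""
    return background + amplitude / (1.0 + ((delta - center) / (fwhm / 2.0))**2)

delta = np.linspace(-4000, 4000, 801)                      # two-photon detuning, Hz
true = transmission(delta, 1.82, 0.09, 1200.0, 0.0)
noisy = true + np.random.normal(0.0, 0.002, delta.size)    # synthetic "measured" trace

popt, _ = curve_fit(transmission, delta, noisy, p0=[1.8, 0.1, 1000.0, 0.0])
background, amplitude, fwhm, _ = popt
contrast = amplitude / (background + amplitude)   # amplitude over transmission at the peak
print(f"contrast = {100*contrast:.1f} %, FWHM = {fwhm:.0f} Hz, Q = {contrast/fwhm:.2e} 1/Hz")
```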
Therefore, more atoms reside at the non-absorbing sublevel in N$_2$ compared to Ne. We have studied the potential benefit of fluorescence quenching in nitrogen for the CPT resonance width. For this, we made three additional cells with the same diameter of $8$~mm, but a smaller internal length of $2.5$~mm. The decrease of length prevents complete absorption of low-intensity laser radiation at high temperatures and Rb concentrations, when the influence of spontaneous photons on the width should be more evident. The cells were filled with $90$~Torr of Ar, Ne, and N$_2$. The beam was expanded to $6$~mm in diameter and the operating radiation intensity was $0.1$~mW/cm$^2$. The measured dependencies of the CPT resonance width on temperature in these cells are shown in Fig.~\ref{IWidth}(c). As one would expect, the width should grow faster with temperature in inert gases than in nitrogen due to the spontaneous radiation. However, the dependencies in all gases reveal the same behavior, which is typical for spin-exchange broadening. The contribution of this mechanism to the coherence relaxation is given by~\cite{RevModPhys.44.169} \begin{equation} \Gamma_{se}=\dfrac{6I+1}{8I+4}\sigma_{se}v_rn, \label{SE} \end{equation} \noindent where $I$ is the nuclear spin, $\sigma_{se}$ is the spin-exchange cross-section, $v_r$ is the average relative velocity, and $n$ is the concentration of the alkali-metal atoms. The dashed line in Fig.~\ref{IWidth}(c) is the dependence of $\Gamma_{se}/\pi$ on temperature for $^{87}$Rb plotted for the concentration taken from~\cite{Steck} and the most reliable value of $\sigma_{se}$, equal to $1.9\cdot10^{-14}$~cm$^2$~\cite{walter2002magnetic}. However, we have not observed a noticeable broadening of the CPT resonance in inert gases at high temperatures compared to nitrogen. A similar result was obtained earlier in~\cite{knappe2002temperature} for $^{133}$Cs in the temperature range $20$--$65$~$^{\circ}$C. Thus, we do not associate the difference in widths with the quenching effect; it is probably related to the lower diffusion coefficient in nitrogen~\cite{Arditi&Carver,Pouliot}, which determines the rate of Rb coherence relaxation as a result of collisions with the cell walls. \section{Discussion} Considering the choice of a buffer gas for CPT-based atomic clocks, we believe that a proper mixture of Ar and Ne is preferable to a mixture containing N$_2$. When using inert gases, it is possible to achieve a higher resonance contrast due to depolarization of the $^{87}$Rb excited state. Moreover, the maximal Q-factor of the resonance in neon is achieved at a lower temperature than in nitrogen, which reduces the clock power consumption. Another advantage of an Ar--Ne mixture is the ability to suppress the light shift at higher buffer gas pressures. As we demonstrated in~\cite{BGpaper}, the AC Stark shift of the CPT resonance frequency cannot be eliminated if the homogeneous broadening of optical lines exceeds a certain value. Since the $^{87}$Rb D$_1$ line collisional broadening rate for Ne is about $1.5$~times smaller than that for N$_2$~\cite{pitz2014pressure}, it is possible to obtain the minimal relaxation rate of the coherence and retain the ability to suppress the light shift in miniature atomic cells, which is significant for chip-scale atomic clocks. Finally, the depolarization of the $^{87}$Rb excited state in inert gases leads to a smaller difference in populations of the ground-state working sublevels due to the repopulation pumping mechanism.
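As a numerical aside on the spin-exchange formula quoted above, the short sketch below evaluates the expected broadening for $^{87}$Rb (I = 3/2) given an atom density. The density in the example is an assumed value for illustration; only the cross-section is the literature value cited in the text.

```python
# Numerical sketch of the spin-exchange broadening formula for 87Rb (I = 3/2).
# The Rb number density is passed in directly; 1e12 cm^-3 below is an assumed example value.
import numpy as np

KB = 1.380649e-23                       # J/K
M_RB87 = 86.909 * 1.66053906660e-27     # kg

def spin_exchange_width_hz(n_cm3, temperature_k, sigma_se_cm2=1.9e-14, nuclear_spin=1.5):
    """Return Gamma_se / pi in Hz, with Gamma_se = (6I+1)/(8I+4) * sigma_se * v_r * n."""
    mu = M_RB87 / 2.0                                        # reduced mass, identical atoms
    v_r = np.sqrt(8.0 * KB * temperature_k / (np.pi * mu))   # mean relative speed, m/s
    prefactor = (6 * nuclear_spin + 1) / (8 * nuclear_spin + 4)
    gamma_se = prefactor * (sigma_se_cm2 * 1e-4) * v_r * (n_cm3 * 1e6)   # rate in s^-1
    return gamma_se / np.pi

print(round(spin_exchange_width_hz(1e12, 273.15 + 80.0), 1), "Hz")
```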
Unequal populations are the source of the CPT resonance asymmetry and the nonlinear dependence of its frequency on the laser field intensity~\cite{https://doi.org/10.48550/arxiv.2012.13731}. This hinders the light shift suppression methods based on the laser field intensity modulation (see, for example,~\cite{doi:10.1063/1.2360921}). \section{Summary} We have demonstrated that argon and neon provide a higher contrast of the CPT resonance than nitrogen in the $\sigma^+$-$\sigma^+$ scheme. The difference in contrast is significant when the optical pumping rate dominates over the ground-state relaxation. We explain this effect as follows. Quenching of alkali-metal atoms fluorescence reduces the degree of the excited-state depolarization, which increases the population of the non-absorbing sublevel. As a result, the amount of atoms that can be optically pumped to the dark state becomes smaller and the amplitude of CPT resonance decreases. We have not found a benefit from quenching of the fluorescence for the width of the CPT resonance at temperatures providing the maximal Q-factor. The difference in Q-factor between Ne and N$_2$ increases with the buffer gas pressure, reaching a factor of $2$ at $90$~Torr. Hence, a mixture of inert gases can be more advantageous for CPT-based atomic clocks than a mixture with nitrogen. \section{Acknowledgments} \label{Acknowledgments} The authors receive funding from Russian Science Foundation (grant No.~19-12-00417). \section{Appendix} \label{Appendix} Here we consider the following model of optical pumping in four-level system with moments $F=2,1$ in excited and ground states; see Fig.~\ref{LevelsScheme}. Components of bichromatic optical field \begin{equation*} \mathbf{E}(t)=\mathbf{e}\dfrac{\mathcal{E}}{2}\left(e^{-i\omega_rt}+e^{-i\omega_bt}+c.c.\right), \end{equation*} \noindent where $\mathbf{e}=(\mathbf{e}_x+i\mathbf{e}_y)/\sqrt{2}$, induce transitions between levels $F_g=2$, $F_e=2$ and $F_g=1$, $F_e=2$, respectively, having optical detuning $\Delta$. The corresponding detuning from $F_e=1$ level is $\Delta+\omega_e$, where $\omega_e$ is the hyperfine splitting of the excited state. Frequency spacing between components is close to hyperfine splitting of the ground state: $\omega_b-\omega_r=\omega_g+\delta$, where detuning $\delta$ is much smaller than $\omega_g$. Phenomenological relaxation constant $\Gamma$ was introduced in equations for optical coherences to account for homogeneous broadening of absorption line occurring under collision of alkali-metal atoms with particles of a buffer gas. We assume that $\gamma\ll\Gamma$, where $\gamma$ denotes natural width of the excited state. Rabi frequency $V=d\mathcal{E}/2\hbar$ contains the reduced dipole matrix element. For simplicity, one phenomenological constant $\Gamma_g$ is used to describe relaxation of the ground-state sublevels. Finally, we do not account for magneto-sensitive CPT resonances and consider only the one between sublevels $m_{F_g}=0$. Initial equations for elements of the density matrix are solved under approximations of low saturation regime and the resonant one for optical field (also known as the rotating-wave approximation), which allows us to obtain the following system for the steady-state regime. 
Namely, there are equations for populations of the excited state under absence of its depolarization: \begin{subequations} \begin{equation} \rho^{uu}_{-2-2}=0, \end{equation} \begin{equation} \rho^{uu}_{-1-1}=\dfrac{V^2}{6}\dfrac{\Gamma/(\gamma/2)}{\Delta^2+\Gamma^2}\rho^{22}_{-2-2}, \end{equation} \begin{equation} \rho^{dd}_{-1-1}=\dfrac{V^2}{2}\dfrac{\Gamma/(\gamma/2)}{(\Delta+\omega_e)^2+\Gamma^2}\rho^{22}_{-2-2}, \end{equation} \begin{equation} \rho^{uu}_{00}=\dfrac{V^2}{4}\dfrac{\Gamma/(\gamma/2)}{\Delta^2+\Gamma^2}\left(\rho^{22}_{-1-1}+\dfrac{1}{3}\rho^{11}_{-1-1}\right), \end{equation} \begin{equation} \rho^{dd}_{00}=\dfrac{V^2}{4}\dfrac{\Gamma/(\gamma/2)}{(\Delta+\omega_e)^2+\Gamma^2}\left(\rho^{22}_{-1-1}+\dfrac13\rho^{11}_{-1-1}\right), \end{equation} \begin{equation} \rho^{uu}_{11}=\dfrac{V^2}{4}\dfrac{\Gamma/(\gamma/2)}{\Delta^2+\Gamma^2}\left[\rho^{22}_{00}+\rho^{11}_{00}-2\text{Re}\left(\rho^{21}_{00}\right)\right], \end{equation} \begin{equation} \rho^{dd}_{11}=\dfrac{V^2}{12}\dfrac{\Gamma/(\gamma/2)}{(\Delta+\omega_e)^2+\Gamma^2}\left[\rho^{22}_{00}+\rho^{11}_{00}-2\text{Re}\left(\rho^{21}_{00}\right)\right], \end{equation} \begin{equation} \rho^{uu}_{22}=\dfrac{V^2}{2}\dfrac{\Gamma/(\gamma/2)}{\Delta^2+\Gamma^2}\left(\dfrac13\rho^{22}_{11}+\rho^{11}_{11}\right), \end{equation} \end{subequations} \noindent where upper indices ``u,'' ``d'' of the density matrix elements denote upper-state levels $F_e=2,1$ and indices ``2,'' ``1'' denote ground-state levels $F_g=2,1$. Lower indices denote $m_F$ value. Equations for elements of the ground state are the following: \onecolumngrid \begin{subequations} \begin{equation} \Gamma_g\rho^{22}_{-2-2}=-\Gamma\left[\dfrac13\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\rho^{22}_{-2-2}+\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac13\rho^{uu}_{-2-2}+\dfrac16\rho^{uu}_{-1-1}+\dfrac12\rho^{dd}_{-1-1}\right), \end{equation} \begin{equation} \Gamma_g\rho^{22}_{-1-1}=-\dfrac12\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\Gamma\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\rho^{22}_{-1-1}\\ +\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac16\rho^{uu}_{-2-2}+\dfrac{1}{12}\rho^{uu}_{-1-1}+\dfrac14\rho^{uu}_{00}+\dfrac14\rho^{dd}_{-1-1}+\dfrac14\rho^{dd}_{00}\right), \end{equation} \begin{equation} \Gamma_g\rho^{11}_{-1-1}=-\dfrac16\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\Gamma\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\rho^{11}_{-1-1}+\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac12\rho^{uu}_{-2-2}+\dfrac14\rho^{uu}_{-1-1}+\dfrac{1}{12}\rho^{uu}_{00}+\dfrac{1}{12}\rho^{dd}_{-1-1}+\dfrac{1}{12}\rho^{dd}_{00}\right), \end{equation} \begin{equation} \begin{gathered} \Gamma_g\rho^{22}_{00}=-\dfrac12\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac13\Gamma\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\rho^{22}_{00}\\ +\dfrac{1}{2}\dfrac{V^2}{\Delta^2+\Gamma^2}\left[\Delta\cdot\text{Im}(\rho^{21}_{00})+\Gamma\cdot\text{Re}(\rho^{21}_{00})\right]+\dfrac{1}{6}\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\left[\left(\Delta+\omega_e\right)\cdot\text{Im}(\rho^{21}_{00})+\Gamma\cdot\text{Re}(\rho^{21}_{00})\right]\\ +\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac14\rho^{uu}_{-1-1}+\dfrac14\rho^{uu}_{11}+\dfrac{1}{12}\rho^{dd}_{-1-1}+\dfrac13\rho^{dd}_{00}+\dfrac{1}{12}\rho^{dd}_{11}\right), \end{gathered} \end{equation} \begin{equation} \begin{gathered} \Gamma_g\rho^{11}_{00}=-\dfrac12\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac13\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\rho^{11}_{00}\\ 
-\dfrac{1}{2}\dfrac{V^2}{\Delta^2+\Gamma^2}\left[\Delta\cdot\text{Im}(\rho^{21}_{00})-\Gamma\cdot\text{Re}(\rho^{21}_{00})\right]-\dfrac{1}{6}\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\left[\left(\Delta+\omega_e\right)\cdot\text{Im}(\rho^{21}_{00})-\Gamma\cdot\text{Re}(\rho^{21}_{00})\right]\\ +\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac14\rho^{uu}_{-1-1}+\dfrac13\rho^{uu}_{00}+\dfrac14\rho^{uu}_{11}+\dfrac{1}{12}\rho^{dd}_{-1-1}+\dfrac{1}{12}\rho^{dd}_{11}\right), \end{gathered} \end{equation} \begin{equation} \Gamma_g\rho^{22}_{11}=-\dfrac13\Gamma\dfrac{V^2}{\Delta^2+\Gamma^2}\rho^{22}_{11}+\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac16\rho^{uu}_{22}+\dfrac{1}{12}\rho^{uu}_{11}+\dfrac14\rho^{uu}_{00}+\dfrac14\rho^{dd}_{11}+\dfrac14\rho^{dd}_{00}\right), \end{equation} \begin{equation} \Gamma_g\rho^{11}_{11}=-\Gamma\dfrac{V^2}{\Delta^2+\Gamma^2}\rho^{11}_{11}+\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac12\rho^{uu}_{22}+\dfrac14\rho^{uu}_{11}+\dfrac{1}{12}\rho^{uu}_{00}+\dfrac{1}{12}\rho^{dd}_{11}+\dfrac{1}{12}\rho^{dd}_{00}\right), \end{equation} \begin{equation} \Gamma_g\rho^{22}_{22}=\dfrac{\Gamma_g}{8}+\gamma\left(\dfrac13\rho^{uu}_{22}+\dfrac16\rho^{uu}_{11}+\dfrac12\rho^{dd}_{11}\right), \end{equation} \begin{equation} \begin{gathered} \left\{\delta+i\Gamma_g+\dfrac{i}{2}\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac{1}{3}\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right]\right\}\rho^{21}_{00}=\\ \dfrac{i}{4}\Gamma\left[\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac{1}{3}\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right](\rho^{22}_{00}+\rho^{11}_{00})+\dfrac14\left[\Delta\dfrac{V^2}{\Delta^2+\Gamma^2}+\dfrac{1}{3}(\Delta+\omega_e)\dfrac{V^2}{(\Delta+\omega_e)^2+\Gamma^2}\right](\rho^{22}_{00}-\rho^{11}_{00}). \end{gathered} \label{coherence} \end{equation} \label{groundstate} \end{subequations} \twocolumngrid The light shift of the ground-state microwave transition frequency is neglected in system of equations~\eqref{groundstate} to simplify calculations. To account for complete depolarization of the excited state, we replaced all its populations with the arithmetic mean: $\rho^{uu}_{ii},\,\rho^{dd}_{ii}\rightarrow\left(\sum^{2}_{i=-2}\rho^{uu}_{ii}+\sum^{1}_{i=-1}\rho^{dd}_{ii}\right)/8$. Solution for significant rate of optical pumping, $V^2/\Gamma\gg\Gamma_g$, demonstrated that physical contrast, which we define here as $\left[\rho^{ee}(|\delta|\gg V^2/\Gamma)-\rho^{ee}(\delta=0)\right]/\rho^{ee}(|\delta|\gg V^2/\Gamma)$, is two-times greater when the excited state is depolarized. Here $\rho^{ee}$ is the sum of populations of the excited-state sublevels. Fig.~\ref{LevelsScheme} demonstrates distributions of populations of $^{87}$Rb $5$S$_{1/2}$ and $5$P$_{1/2}$ states for two cases: without (a) and with (b) excited-state depolarization. They were calculated for $\Gamma/2\pi=1$~GHz, $\omega_e/2\pi=817$~MHz, $\Delta/2\pi=-30$~MHz, $\delta=0$. The value of Rabi frequency was set to provide power broadening of the CPT resonance three-times greater than $\Gamma_g$. In case (b) population of sublevel \hbox{$m_{F_e}=2$} decreases and optical pumping of the non-absorbing sublevel \hbox{$m_{F_g}=2$} becomes smaller. Populations of excited-state sublevels \hbox{$F_e=2,\,m_{F_e}=-2,-1,0$}, \hbox{$F_e=1,\,m_{F_e}=-1,0$} grow, which increases amount of spontaneous transitions to working sublevels \hbox{$F_g=1,2,\,m_{F_g}=0$}. 
We note that the population of sublevel \hbox{$F_g=1,\,m_{F_g}=1$} is smaller than that of \hbox{$F_g=1,\,m_{F_g}=-1$}, since the probability of the transition \hbox{$|F_g=1,\,m_{F_g}=1\rangle\rightarrow|F_e=2,\,m_{F_e}=2\rangle$} is greater, while the repopulation rate of these sublevels by spontaneous transitions is the same. We also note that equation~\eqref{coherence} for the coherence $\rho^{21}_{00}$ contains terms $\propto(\rho^{22}_{00}-\rho^{11}_{00})$ on its right-hand side. Although the components of the bichromatic field have equal intensities, the populations of the working sublevels are not equal because of spontaneous transitions. As a consequence, the real part of the coherence $\rho^{21}_{00}$ acquires a term proportional to $\delta$. The CPT resonance is then neither an even nor an odd function of $\delta$, i.e., it turns out to be asymmetric. In contrast, under complete depolarization of the excited state, spontaneous transitions populate the working sublevels equally, yielding a symmetric CPT resonance. \end{document}
arXiv
\begin{definition}[Definition:False] A statement has a truth value of '''false''' {{iff}} what it says does not match the way that things are. \end{definition}
ProofWiki
Pappus graph

In the mathematical field of graph theory, the Pappus graph is a bipartite 3-regular undirected graph with 18 vertices and 27 edges, formed as the Levi graph of the Pappus configuration.[1] It is named after Pappus of Alexandria, an ancient Greek mathematician who is believed to have discovered the "hexagon theorem" describing the Pappus configuration. All the cubic distance-regular graphs are known; the Pappus graph is one of the 13 such graphs.[2]

Pappus graph (summary of parameters): Named after: Pappus of Alexandria. Vertices: 18. Edges: 27. Radius: 4. Diameter: 4. Girth: 6. Automorphisms: 216. Chromatic number: 2. Chromatic index: 3. Book thickness: 3. Queue number: 2. Properties: bipartite, symmetric, distance-transitive, distance-regular, cubic, Hamiltonian.

The Pappus graph has rectilinear crossing number 5, and is the smallest cubic graph with that crossing number (sequence A110507 in the OEIS). It has girth 6, diameter 4, radius 4, chromatic number 2, chromatic index 3 and is both 3-vertex-connected and 3-edge-connected. It has book thickness 3 and queue number 2.[3]

The Pappus graph has a chromatic polynomial equal to: $(x-1)x(x^{16}-26x^{15}+325x^{14}-2600x^{13}+14950x^{12}-65762x^{11}+229852x^{10}-653966x^{9}+1537363x^{8}-3008720x^{7}+4904386x^{6}-6609926x^{5}+7238770x^{4}-6236975x^{3}+3989074x^{2}-1690406x+356509)$.

The name "Pappus graph" has also been used to refer to a related nine-vertex graph,[4] with a vertex for each point of the Pappus configuration and an edge for every pair of points on the same line; this nine-vertex graph is 6-regular, is the complement graph of the union of three disjoint triangle graphs, and is the complete tripartite graph K3,3,3. The first Pappus graph can be embedded in the torus to form a self-Petrie dual regular map with nine hexagonal faces; the second, to form a regular map with 18 triangular faces. The two regular toroidal maps are dual to each other.

Algebraic properties

The automorphism group of the Pappus graph is a group of order 216. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore the Pappus graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Pappus graph, referenced as F018A, is the only cubic symmetric graph on 18 vertices.[5][6] The characteristic polynomial of the Pappus graph is $(x-3)x^{4}(x+3)(x^{2}-3)^{6}$. It is the only graph with this characteristic polynomial, making it a graph determined by its spectrum.

Gallery (image captions)
• Pappus graph coloured to highlight various cycles.
• The chromatic index of the Pappus graph is 3.
• The chromatic number of the Pappus graph is 2.
• The Pappus graph embedded in the torus, as a regular map with nine hexagonal faces.
• The Pappus graph and associated map embedded in the torus.

References
1. Weisstein, Eric W. "Pappus Graph". MathWorld.
2. Brouwer, A. E.; Cohen, A. M.; Neumaier, A. Distance-Regular Graphs. New York: Springer-Verlag, 1989.
3. Jessica Wolz, Engineering Linear Layouts with SAT. Master Thesis, University of Tübingen, 2018.
4. Kagno, I. N. (1947), "Desargues' and Pappus' graphs and their groups", American Journal of Mathematics, The Johns Hopkins University Press, 69 (4): 859–863, doi:10.2307/2371806, JSTOR 2371806.
5. Royle, G. "Cubic Symmetric Graphs (The Foster Census)."
6. Conder, M. and Dobcsányi, P. "Trivalent Symmetric Graphs Up to 768 Vertices." J. Combin. Math. Combin. Comput. 40, 41-63, 2002.
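Several of the parameters listed above (18 vertices, 27 edges, cubic, bipartite, radius 4, diameter 4) can be checked programmatically. The following Python sketch assumes the networkx library, whose small-graph generators include pappus_graph(); it is an editorial illustration, not part of the article.

import networkx as nx

# Build the Pappus graph from networkx's small-graph generators.
G = nx.pappus_graph()

assert G.number_of_nodes() == 18 and G.number_of_edges() == 27
assert all(deg == 3 for _, deg in G.degree())   # 3-regular (cubic)
assert nx.is_bipartite(G)                       # bipartite
assert nx.diameter(G) == 4 and nx.radius(G) == 4
# Girth (6), automorphism count (216), chromatic index (3), book thickness and
# queue number are not checked here; they need more specialized tools.
print("basic Pappus graph parameters verified")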
Wikipedia
Studies on phytochemical, antioxidant, anti-inflammatory and analgesic activities of Euphorbia dracunculoides

Muhammad Majid1, Muhammad Rashid Khan1, Naseer Ali Shah2, Ihsan Ul Haq3, Muhammad Asad Farooq1, Shafi Ullah1, Anam Sharif1, Zartash Zahra1, Tahira Younis1 and Moniba Sajid1

© Majid et al. 2015. Published: 7 October 2015

Plants provide an alternative source for managing various human disorders owing to their diverse metabolites. Euphorbia dracunculoides of the family Euphorbiaceae is used by local practitioners for rheumatism, epilepsy, edema, snake bite and warts, and also possesses diuretic and purgative effects. The present study evaluated the antioxidant, anti-inflammatory and analgesic activities of various extracts of E. dracunculoides. Further, the phytochemical constituents of the leading extracts were also investigated. Dry powder of E. dracunculoides was extracted with n-hexane (EDH), acetone (EDA), ethanol (EDE), ethanol + water (1:1) (EDEW) and methanol (EDM) and screened for phytochemical classes, total phenolic content (TPC) and total flavonoid content (TFC). Antioxidant effects of the extracts were assessed by multiple in vitro assays. The anti-inflammatory and analgesic activities of the extracts were evaluated through the carrageenan-induced paw edema and hot plate tests in rats. In addition, GC-MS analysis of EDH and HPLC-DAD analysis of EDEW were carried out to determine the presence of active constituents. Qualitative analysis of the various extracts of E. dracunculoides confirmed the presence of tannins and coumarins, whereas anthraquinones and anthocyanins were not detected. The highest TPC and TFC were recorded in EDEW, followed by EDE. EDEW and EDE showed significant antioxidant activities, with therapeutic potential in the hydroxyl radical, phosphomolybdate, β-carotene bleaching and iron-reducing assays, while moderate to low scavenging abilities were recorded for DPPH, nitric oxide and iron chelation. In the anti-inflammatory test, 4 h after drug administration the 300 mg/kg body weight dose of EDH (68.660 ± 10.502 %) and EDE (51.384 ± 8.623 %) strongly reduced carrageenan-induced paw edema in rats, compared with the standard drug diclofenac sodium (78.823 ± 6.395 %). Treatment of rats with EDH (70.206 ± 5.445 %) and EDE (56.508 ± 6.363 %) produced, after 90 min, a significant increase in percent latency time in the hot plate test, compared with morphine (63.632 ± 5.449 %). GC-MS analysis of EDH indicated the presence of 30 compounds, predominantly steroids and terpenoids. HPLC-DAD analysis against known standards established the presence of rutin, catechin, caffeic acid and myricetin in EDEW. Our results suggest that the various polyphenolics, terpenoids and steroids present give E. dracunculoides therapeutic potential for oxidative stress and inflammation-related disorders.

Keywords: Euphorbia dracunculoides

Medicinal plants play an appealing role in the management of several human disorders. Many civilizations and medical traditions, such as the Chinese, Ayurvedic, Unani and Hindi systems, place firm trust in these cures to treat their health-related issues. Plant-derived chemicals are an important source of clinical agents with sedative, anti-depressant, antioxidant, antispasmodic, anxiolytic, anti-inflammatory, immunomodulatory, analgesic, anti-pyretic and cardioprotective activities [1].
This customary herbal medicinal system is deep rooted in their cultures and habitats and knowledge of home remedies is conveyed accordingly to their descendants as the time goes [2]. During the normal metabolic process human body produce reactive oxygen species (ROS) and reactive nitrogen species (RNS) as byproduct to accomplish the normal physiological processes. Oxidative stress can be declared as a root cause for many diseases especially chronic inflammatory disorders such as rheumatism, diabetes, carcinomas, mutagenesis, sarcomas, aging and circulatory disorders [3]. Living cells possess an excellent defense mechanism to cope with damaging effects of free radicals and this is made possible by a variety of detoxifying enzymes and metabolites inside the body that scavenge free radicals in a professional way. Among them superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) are the main stream enzymes supported with scavengers and chelating agents. The indirectly acting antioxidants are very important in protecting body from deleterious effects of ROS/RNS. This group comprised thousands of compounds involved in free radical scavenging and most of them are derived from dietary sources such as vegetables and fruits. We are much more endangered by free radicals than being secured by natural scavenging entities producing in our body in stressed conditions. Phytochemicals play a unique role in avoiding hazardous effects of free radicals [4]. Polyphenols are excellent free radical scavengers and inhibitors of lipid peroxidation. Terpenoids and steroids are useful metabolites for various metabolic disorders. These properties make their role crucial from therapeutic and pharmacologic point of views. The trend of finding the pharmacological activities of the plants of known medicinal uses in folk medicines is quite much effective than going randomly after it [5, 6]. The genus Euphorbia contributes as the largest amongst the spurge family with over 2000 species, with awesome use value in folk Chinese medicinal system used mainly for skin diseases and edemas [7]. Several species of Euphorbia have been used in local system of medicine; for the treatment of various ailments. Rhizome of Euphorbia neriifolia and aerial parts of Euphorbia royleana has been used for the treatment of anti-inflammatory disorders [8, 9]. In local system of medicine such as Africa and Australia, Euphorbia hirta has used as a remedy for various ailments especially in hypertension and edema. Previous studies have evaluated E. hirta for antipyretic, analgesic, anti-inflammatory and diuretic activities [10, 11]. Strong antioxidant activity of E. macroclada and E. acanthothamnos has been determined in previous studies [12]. Euphorbia dracunculoides Lam., (Euphorbiaceae) is distributed in Southwest Asia, North Africa and South Europe. It is an annual herb and is usually found along riverbanks, in valleys and roadsides of sandy areas in Khyber Pakhtunkhwa Province of Pakistan. Fruits are used to remove warts from skin [13]. Leaves are used in snake bite and epilepsy [14]. A decoction of whole plant is applied on body of cattle for lice killing [15]. Euphorbia dracunculoides has been used by local practitioners for its diuretic and purgative properties. Structurally diversified 19 diterpenoids have been isolated from aerial parts of E. dracunculoides [16]. Because of similar morphology of the dried aerial parts of E. drancunculoides to Ruta graveolens, it is sold or used clinically as replacement of R. 
graveolens for analgesic and inflammatory disorders; gout and arthritis [17, 18]. To our knowledge the scientific validation of E. dracunculoides for the use in inflammation related disorders has not been reported earlier. For this purpose we investigated preliminary phytochemical composition, antioxidant and anti-inflammatory activities of various extracts of E. dracunculoides. EDEW showed significant antioxidant activity; was subjected to HPLC-DAD analysis for the presence of flavonoid constituents. EDH exhibited remarkable analgesic and anti-inflammatory activities; was explored for GC-MS analysis. Preparation of extract Plant was recognized by its local name and collected from the District Lakki Marwat in March 2014. The plant sample after authentication was deposited (Accession No. 127962) at the Herbarium of Pakistan, Quaid-i-Azam University, Islamabad, Pakistan. The fully shade dried aerial parts of E. dracunculoides were first powdered followed by two extraction (36 h) with n-hexane, acetone, ethanol, ethanol + water (1:1 v/v) and 95 % methanol in 2 : 1 ratio (v/w). Filtered extracts; EDH, EDA, EDE, EDEW and EDM were dried under vacuum in a rotary evaporator at 40 °C and stored at 4 °C for in vitro and in vivo experiments. Phytochemical analysis Different qualitative tests were employed to identify the phytochemical classes present in various extract of the plant E. dracunculoides. Assessment of phenols For the presence of phenols previously reported methodology was followed [19]. Each sample (1 mg) was suspended in 2 ml of distilled water containing 10 % ferric chloride. The confirmation sign for the presence of phenol was the development of blue or green color. Assessment of flavonoids Protocol of Trease and Evans [20] was followed to establish the presence of flavonoids in each sample. Briefly, 1 mg of each sample was allowed to react with 1 ml of 2 N sodium hydroxide and appearance of yellow color was considered as the confirmation sign of flavonoid presence. Assessment of coumarins An aliquot of each sample (1 mg/ml) was mixed with 1 ml of 10 % sodium hydroxide. Appearance of yellow color formation in the test tube was the proof of coumarins presence in test sample [19]. Assessment of saponins Each sample (2 mg) was suspended in 2 ml of distilled water and vigorously shaken. The formation of a soapy layer of almost 1–2 cm was the indication of saponins presence [19]. Assessment of tannins Confirmative sign of tannins was the development of dark blue or greenish black color on the mixing of 1 mg of each sample and 2 ml of 5 % ferric chloride [20]. Assessment of terpenoids Each sample 0.5 mg was mixed with 2 ml of chloroform and 2 ml of concentrated sulphuric acid. The appearance of a red brown colored layer in the middle of two layers confirmed the existence of terpenoids [20]. Assessment of anthraquinone Development of red color was considered as indication for the presence of anthraquinone after mixing of 1 mg of each sample with 2 ml of diluted 2 % hydrochloric acid [19]. Assessment of anthocyanin and betacyanin Each sample (1 mg) was boiled for 10 min in 2 ml of 1 N sodium hydroxide. Formation of bluish green color was the sign of anthocyanin and yellow color formation of betacyanin presence [20]. Assessment of alkaloids An amount of 2 mg of each sample was mixed with concentrated sulphuric acid. The reaction mixture was allowed to react with Mayer's reagent. Appearance of green color or formation of white precipitates was the symbol of alkaloid presence [20]. 
Total phenolic as well as flavonoid contents were quantified by the following narrated procedures. Total phenolic contents (TPC) To determine the total phenolic content in each sample spectrophotometric method already reported was followed [21]. Briefly, 1 ml of each sample (1 mg/ml) was mixed with 9 ml of distilled water and I ml of Folin-Ciocalteu reagent. The mixture obtained was mixed rigorously for 5 min and 10 ml of 7 % Na2CO3 was added to the mixture. By the addition of distilled water the final volume of mixture was made to 25 ml and was incubated at room temperature for 90 min. Optical density was ensured at wavelength of 750 nm in triplicate for each sample. By using gallic acid as standard the estimation of TPC was carried out as mg of gallic acid equivalents (GAE) per gram of dry extract/fraction. Total flavonoid content (TFC) To estimate the TFC in the test samples, 0.3 ml of each sample was mixed with 0.15 ml of 0.5 M NaNO2 followed by the addition of 0.1 ml of 0.3 M AlCl3.6H2O, and 3.4 ml of 30 % methanol [22]. An aliquot of 1 ml of 1 M NaOH was added to it after a lapse of 5 min. The optical density of the reaction mixture was recorded at 506 nm wavelength against the reagent blank. Total flavonoid content as mg rutin equivalents per gram of dry extract/fraction was estimated while using the calibration curve of rutin. Gas chromatography-Mass spectrometry (GC-MS) analysis EDH was analyzed for the presence of active constituents on "Thermo GC-Trace Ultra Ver; 5.0" gas chromatograph coupled with a "Thermo MS DSQ II" for mass determination. Components were separated on a "ZB 5-MS Capillary Standard Non-polar Column" with 60 m length having 0.25 μm film thickness. During the experiment the temperature was raised from 70 to 260 °C at a rate of 6 °C/min. The flow rate of the carrier gas; helium was 1 ml/min while injection volume of the samples was 1 μl. The identification of chemical constituents was based on comparison of their relative retention times and mass spectra with those obtained from authentic sample and/or the NIST/NBS and Wiley libraries spectra [23]. High performance liquid chromatography (HPLC) analysis On account of significant antioxidant activity for most of the in vitro assays the EDEW extract was selected for HPLC-DAD analysis. HPLC analysis of EDEW was carried out by using HPLC-DAD (Agilent Germany) equipment using Sorbex RXC8 (Agilent USA) analytical column with 5 μm particle size and 25 ml capacity. Mobile phase was consisted of eluent A, (acetonitrile-methanol-water-acetic acid /5: 10: 85: 1) and eluent B (acetonitrile-methanol-acetic acid/40: 60: 1). The gradient (A: B) utilized was the following: 0–20 min (0 to 50 % B), 20–25 min (50 to 100 % B), and then isocratic 100 % B (25–40 min) at flow rate of 1 ml/min. The injection volume of the sample was 20 μl. Before the injection samples were filtered through 0.45 μm membrane filter. Among the standards rutin and gallic acid were analyzed at 257 nm, catechin at 279 nm, caffeic acid at 325 nm and quercetin, myricetin, kampferol were analyzed at 368 nm [24]. Each time the column was reconditioned for 10 min before the next analysis. All samples were assayed in triplicates. Quantification was carried out by the integration of the peak using the external standard method. All chromatographic operations were carried out at an ambient temperature. 
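As a concrete illustration of how the TPC/TFC quantification described above works, the short sketch below inverts a gallic-acid calibration line of the form y = a·x + b (absorbance versus standard concentration) to express a sample reading as gallic acid equivalents. The slope and intercept are those quoted later in the Results for gallic acid; the sample absorbance, extract mass and assay volume are hypothetical placeholders.

# Invert the gallic-acid calibration line (absorbance = a*concentration + b)
a, b = 0.0103, 0.1875          # calibration coefficients reported for gallic acid
abs_sample = 0.62              # hypothetical absorbance at 750 nm

conc_ug_per_ml = (abs_sample - b) / a          # gallic acid equivalents, ug/ml

# Hypothetical bookkeeping: mass of dry extract dissolved in the assayed volume
extract_mass_g = 0.0025
volume_ml = 1.0
tpc_mg_gae_per_g = conc_ug_per_ml * volume_ml / 1000.0 / extract_mass_g
print(round(conc_ug_per_ml, 1), "ug/ml GAE ->", round(tpc_mg_gae_per_g, 1), "mg GAE/g extract")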
In vitro antioxidant assays The in vitro antioxidant assays were carried out by preparing the plant samples (1 mg/ml) in 95 % methanol and then making its serial dilutions. The specific protocol was followed for finding specific scavenging activities of the plant samples. DPPH (1, 1-diphenyl-2-picryl-hydrazyl) radical scavenging assay The DPPH scavenging capabilities of deleterious effects of free radical were determined by following the methodology of [25]. An amount of 24 mg of DPPH was dissolved in 100 ml methanol and the stock solution was kept at 20 °C temperature for further use. The optical density of DPPH was optimized at 0.908 (±0.02) at 517 nm by diluting the pre made DPPH stock solution with methanol. An aliquot of 3 ml of DPPH was mixed with 100 ml plant samples with different concentrations (25–250 μg/ml). Following continuous stirring the test tubes were incubated for 15 min at room temperature. Optical density was recorded at wavelength of 517 nm. Ascorbic acid was used as standard to compare the antioxidant activity. Antioxidant capacity was determined by the following equation: $$ \mathrm{Inhibition}\ \% = \left[\frac{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{control}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{the}\kern0.5em \mathrm{sample}}{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{control}}\right]\times 100 $$ Hydroxyl radical scavenging assay The power of scavenging hydroxyl free radicals refers to the antioxidant potential of plant samples using methodology practiced by [26]. This technique involved the mixing of 500 μl of 2-deoxyribose (2.8 mM) prepared in 50 mM phosphate buffer and its pH was maintained at 7.4. The reaction mixture was prepared by addition of 100 μl of 0.1 M EDTA, 200 μl of ferric chloride (100 mM) and 100 μl of 200 mM H2O2 and 100 μl of plant sample. The initiation of reaction was brought by the introduction of 100 μl of ascorbic acid (300 mM) and incubated for 1 h at 37 °C. Then 1 ml of 2.8 % trichloroacetic acid and 1 ml of 1 % w/v thiobarbituric acid prepared in 50 mM NaOH were added to the reaction mixture. The whole recipe was heated in water bath for 15 min. After cooling to room temperature the absorbance of the reaction mixture was recorded at 532 nm. The hydroxyl radical scavenging activity was analyzed by the following formula: $$ \mathrm{Scavenging}\ \mathrm{effect}\ \% = \left[\frac{1-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{the}\kern0.5em \mathrm{sample}}{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{control}}\right]\times 100 $$ Nitric oxide scavenging assay The method of Bhaskar and Balakrishnan [27] was used to assess the nitric oxide scavenging potential of the plant samples. The Griess reagent was prepared by adding equimolar quantity of 0.1 % napthylenediamine in distilled water and 1 % of sulphanilamide in 5 % phosphoric acid. A volume of 100 μl of sample was added to 100 μl of sodium nitroprusside (10 mM) being prepared in saline phosphate buffer. An aliquot of 1 ml of the Griess reagent was added to the reaction mixture. After incubation at room temperature for 3 h the optical density of the reaction mixture was noted spectrophotometrically at 546 nm using ascorbate as a positive control. Following formula was used for determining the percentage inhibition of nitric oxide radical formation. Chelating power assay The iron (II) binding capability at multiple sites confers the antioxidant potential of plant samples [28]. 
The plant sample with its serial dilutions was prepared in methanol and 200 μl of each dilution was mixed with 900 μl of methanol and 100 μl FeCl2.2H2O (2.0 mM) and incubated for 5 min. The reaction was initiated by introducing 400 μl of ferrozine (5.0 mM). After incubation for 10 min the optical density was recorded at 562 nm using EDTA as standard in comparison. The chelating power was determined by the following formula: $$ \mathrm{Chelating}\ \mathrm{effect}\ \% = \left[\frac{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{control}-\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{the}\kern0.5em \mathrm{sample}}{\mathrm{Absorbance}\ \mathrm{of}\ \mathrm{control}}\right]\times 100 $$ Inhibition ability of β-carotene bleaching was recorded by the method of [29]. The reagent was prepared by dissolving 2 mg of β-carotene in 10 ml of chloroform followed by addition of 200 mg of Tween 80 and 20 mg of linoleic acid. After evaporation of chloroform, 50 ml of distilled water was added to the reaction mixture and vigorously vortexed to get a uniform emulsion of β-carotene linoleate. In freshly prepared 250 μl of emulsion, 30 μl of plant sample was added and optical density was measured at 470 nm at 0 h. Then after keeping the reaction mixture at 45 °C for 2 h the final optical density was again recorded. Catechin was served as standard in this assay and % inhibition of β-carotene was determined by the formula: $$ \%\ \mathrm{inhibition} = \left[\left({\mathrm{A}}_{\mathrm{A}(120)}\hbox{--} {\mathrm{A}}_{\mathrm{C}\ (120)}\right)/\left({\mathrm{A}}_{\mathrm{C}\ (0)}\hbox{--} {\mathrm{A}}_{\mathrm{A}(120)}\right)\right] \times 100 $$ Where AA (120) is the absorbance of the antioxidant at t = 120 min, AC (120) is the absorbance of the control at t = 120 min, and AC (0) is the absorbance of the control at t = 0 min. Reducing power assay Reducing power of various extracts was estimated by the method of [30, 31]. Briefly, 2 ml of plant extract was mixed with 2 ml of 0.2 M phosphate buffer (pH 6.6) and 2 ml of potassium ferricyanide (10 mg/l) and the reaction mixture was incubated at 50 °C for 20 min. After addition of 2 ml of trichloroacetic acid (100 mg/l) in the reaction mixture, 2 ml was of it was diluted with 2 ml of distilled water and 0.4 ml of FeCl3 (0.1 %). Optical density of the reaction mixture was measured at 700 nm after 10 min of incubation. Gallic acid was used as standard. Phosphomolybedenum assay The methodology of [31] was used to assess the antioxidant capabilities of the plant sample. Accordingly, 0.1 ml of the plant sample was mixed with 1 ml of the reagent solution (prepared by adding 28 mM Na3PO4 and 0.6 M H2SO4 with that of 4 mM ammonium molybdate). After incubation at 95 °C in a water bath for 90 min the reaction mixture was cooled down to room temperature and optical density was recorded at 765 nm. Ascorbic acid served as standard in this assay. Anti-inflammatory activity Protocol of [32] was followed to estimate the anti-inflammatory activity of extracts by carrageenan-induced paw edema in rat. Sprague-Dawley rats (six weeks old) weighing about 150–200 g were randomly divided into six groups containing 6 rats in each group. Rats had free access to the laboratory feed and water. National institute of health (NIH) guidelines were strictly carried out using test animals for experimentation. Ethical Committee of Quaid-i-Azam University Islamabad approved the study protocol (Bch#0267) for the animal care and experimentation. 
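Before turning to the in vivo assays, the simple arithmetic shared by the in vitro assays above can be spelled out. The sketch below implements the control-versus-sample percent-inhibition formula and the β-carotene bleaching formula as written in the text; the absorbance readings are hypothetical.

def percent_inhibition(abs_control, abs_sample):
    # (control - sample) / control * 100, as used for the DPPH and chelation assays
    return (abs_control - abs_sample) / abs_control * 100.0

def beta_carotene_inhibition(a_a_120, a_c_120, a_c_0):
    # [(A_A(120) - A_C(120)) / (A_C(0) - A_A(120))] * 100
    return (a_a_120 - a_c_120) / (a_c_0 - a_a_120) * 100.0

print(percent_inhibition(0.908, 0.45))              # hypothetical DPPH reading
print(beta_carotene_inhibition(0.62, 0.35, 0.95))   # hypothetical readings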
Group I was negative control and received DMSO, Group II orally received 10 mg/kg of drug diclofenac sodium (1:1 W/V DMSO). Group III to Group VII were given 300 mg/kg dose of EDH, EDA, EDE, EDEW and EDM respectively. To develop the paw edema, 1 ml/kg body weight of carrageenan solution (0.9 % w/v; saline) was injected in the right paw of each rat after 1 h of the administration of plant sample. Paw volume was measured plethysmographically at 0, 1st, 2nd, 3rd and 4th h after carrageenan injection and the following formulae for calculating percent inhibition of edema were used; $$ \mathrm{E}\mathrm{V}=\mathrm{P}\mathrm{V}\mathrm{A}-\mathrm{P}\mathrm{V}\mathrm{I} $$ Where, EV = Edema volume, PVI = Paw volume before carrageenan administration (i.e. initial paw volume) and, PVA = Paw volume after carrageenan administration. $$ \mathrm{Percent}\ \mathrm{inhibition}=\left[\frac{\left(\mathrm{E}\mathrm{V}\mathrm{c}\ \hbox{--}\ \mathrm{E}\mathrm{V}\mathrm{t}\right)\ }{\left(\mathrm{E}\mathrm{V}\mathrm{c}\right)} \times 100\right]. $$ EVc = Edema volume of control animals, EVt = Edema volume of test sample animals. Analgesic activity The procedure described by [33], was followed to perform this test. Sprague-Dawley rats of either sex (n = 6) weighing 180–220 g were used. Animals were subjected to pre-testing on a hot plate analgesimeter (Harvard apparatus Ltd., UK) maintained at 55 ± 0.1 oC. Animals having latency time greater than 15 s on hot plate during pre-testing were excluded. Animals were divided randomly into 7 groups, each consisting of six rats. Group I was negative control and received 2 ml/kg p.o. DMSO, Group II received 10 mg/kg i.p. of the standard drug morphine sulphate. Group III to Group VII were given 300 mg/kg p.o. dose of EDH, EDA, EDE, EDEW and EDM respectively. The latency time was recorded for each group at 0, 30, 60 and 90 min following drug administration. In order to prevent the tissue damage the cut off time of 30 s was set for all animals. Percent analgesia was calculated using the following formula. $$ \%\ \mathrm{Analgesia}=\left[\frac{\left(\mathrm{Test}\ \mathrm{latency}\ \hbox{--}\ \mathrm{control}\ \mathrm{latency}\right)\ }{\left(\mathrm{Cut}\ \mathrm{off}\ \mathrm{time}-\mathrm{control}\ \mathrm{latency}\right)} \times 100\right] $$ Data obtained in this study was presented as mean ± SD. One way analysis of variance was performed to determine the variability among groups by Statistix 8.1. GraphPad Prim 5 was used to determine the correlation of IC50 values of antioxidant assays with TPC and TFC by Pearson's correlation coefficient. Significant differences among groups were calculated by Tukey's multiple comparison tests. Statistical significance was set at P > 0.05. Extraction yield, total phenolic and flavonoid content By using 100 g of dry powder of E. dracunculoides for extraction, the maximum yield 5462 mg powder was obtained for EDEW followed by 1703 mg (EDM), 1340 mg (EDA), 724 mg (EDE) and 592 mg (EDH). On the basis of standard regression lines for gallic acid (y = 0.0103x + 0.1875; R2 = 0.9978) and rutin (y = 0.00028x + 0.497; R2 = 0.998), the equivalents of TPC and TFC were calculated (Table 1). EDEW showed maximum quantity of TPC (17.35 ± 0.62 mg GAE/g dry sample) followed by EDE (16.41 ± 0.54 mg GAE/g dry sample), EDM (14.11 ± 0.37 mg GAE/g dry sample), EDA (10.62 ± 0.33 mg GAE/g dry sample) and EDH (8.21 ± 0.49 mg GAE/g dry sample). 
Flavonoids were found to be rich in EDEW (7.57 ± 0.42 mg RE/g dry sample) followed by EDE (6.77 ± 0.31 mg RE/g dry sample), EDM (4.84 ± 0.29 mg RE/g dry sample), EDA (4.52 ± 0.37 mg RE/g dry sample) and EDA (4.18 ± 0.25 mg RE/g dry sample). Extraction yield, total phenolic and flavonoid contents of E. dracunculoides Yield (mg/100 g powder) Total phenolic content (mg gallic acid equivalent/ g dry sample) Total flavonoid content (mg rutin equivalent/ g dry sample) 8.21 ± 0.49e 10.62 ± 0.33d EDEW EDH E. dracunculoides n-hexane extract, EDA E. dracunculoides aqueous extract, EDE E. dracunculoides ethyl acetate, EDEW E. dracunculoides ethyl acetate + water extract, EDM E. dracunculoides methanol extract. Each value is represented as mean ± SD (n = 3). Means with different superscript (a-e) letters in the column are significantly (P < 0.05) different from one another Phytochemical classes The results of phytochemical analysis of all the extracts of E. dracunculoides are listed in Table 2. Qualitative analysis of E. dracunculoides ensured the presence of tannins, phenols, flavonoids and coumarins in all extracts of E. dracunculoides except coumarins were absent in EDH. Presence of anthraquinones was not recorded in all the extracts. EDEW contained the maximum phytochemical classes and EDA showed the least number of existent phytochemical classes. Phytochemical analysis of E. dracunculoides Phytochemical Coumarins Tannins Anthraquinones Betacyanin (+) present, (-) absent, (++) moderate concentration, (+++) abundant concentration EDH E. dracunculoides n-hexane extract, EDA E. dracunculoides aqueous extract, EDE E. dracunculoides ethyl acetate, EDEW E. dracunculoides ethyl acetate + water extract, EDM E. dracunculoides methanol extract GC-MS analysis of n-hexane extract The n-hexane extract of E. dracunculoides was selected for GC-MS analysis due to its significant analgesic and anti-inflammatory activities. GC-MS analysis indicated that EDH contained 30 chemical constituents eluted between 6.36 and 40.10 min (Fig. 1). The identification of chemical constituents was based on comparison of their relative retention times and mass spectra with those obtained from authentic sample and/or the NIST/NBS and Wiley libraries spectra (Table 3). Of these 30 chemical constituents, there were 7 terpenoides (13.25 % on the basis of peak area), 4 lactones (2.12 %), 3 steroids (2.59 %), 3 phenols (1.60 %), 2 hetrocycles (73.91 %), 1 fatty acid (2.63 %), 1 ester (1.30 %), 1 carboxylic acid (0.58 %), 1 lactam (0.51 %), 1 nitrile (0.40 %), 1 alkyne (0.36 %), 1 hydrazone (0.19 %), 1 terpene (0.18 %), 1 aldehyde (0.14 %), 1 ketone (0.13 %) and 1 amino acid (0.1 %) in EDH. GC-MS analysis of n-hexane extract of E. 
dracunculoides GC-MS analysis of EDH Compound name O,N-Permethylated N-Acetyllysine Phenol, 2-Methyl-5-(1-Methylethyl)- 2,4-Decadienal, (E,E)- (Cas) Phenol, 2-Methoxy-4-(2-Propenyl)- (Cas) 3,4-Dihydro-2H-1,5-(3″-t-butyl) benzodioxepine Heterocycle 4-(Phenylethynyl)-1-methoxybenzene (-)-Loliolide 2-Pentadecanone, 6,10,14-trimethyl- Terpenoid (E,E)-Farnesyl Acetone Hexadecanoic acid, methyl ester (CAS) Hexadecanoic acid (CAS) Hexadecanoic acid, ethyl ester 9,12,15-Octadecatrienoic acid, methyl ester, (Z,Z,Z)-(CAS) 1H-Imidazole, 1-ethyl- (CAS) 3-(2-Methylaminophenyl)-1H-benzopyran-1-one 4,8,12,16-Tetramethylheptadecan-4-olide Estra-1,3,5(10)-trien-17-one, 3-hydroxy-2-methoxy- 3à,6á-Dihydroxyandrost-4-Ene-17-One Di-(2-ethylhexyl)phthalate 10-Methylundeca-2,4,8-triene Austrobailignan-6 Butyl 9,12,15-octadecatrienoate 1,7,7-trimethyl-3-thiocyanatomethylen-bicyclo[2,2,1]hepta 2-Acetamido-3-(3,4-dihydroxyphenyl) propenoic acid 2,7-Diphenyl-1,6-dioxopyridazino[4,5-2′,3′] pyrrolo[4′,5′-D pyridazine Lactam 4-(cis-6-Methoxymethyl-3,4-dimethyl-3-cyclohexenyl)-trans-3-buten-2-one 2,4-dinitrophenylhydrazone Hydrazone 1-Heneicosyl formate 2-Methoxy-6-hydroxy-1,3-dicyanoazulene Xanthinin Stigmasta-5,22-dien-3-ol, (3á,22E)- (CAS) HPLC-DAD analysis of ethanol + water extract HPLC-DAD profile of EDEW was illustrated in Fig. 2. EDEW showed the existence of rutin, catechin, caffeic acid and myricetin. Rutin showed maximum quantity (65.8 ± 2.2 μg/mg dry extract) followed by catechin (15.3 ± 1.2 μg/mg dry extract), caffeic acid (8.5 ± 0.75 μg/mg dry extract) and myricetin (4.54 ± 0.35 μg/mg dry extract) as demonstrated in Table 4. HPLC-DAD profile of E. dracunculoides ethanol + water extract (EDEW) at different wavelengths. Signal 1: 257λ, Signal 2:279λ, Signal 3: 325λ, Signal 4; 368λ.Conditions: Mobile Phase A-ACN: MEOH: H2O: AA:: 5:10:85:1, Mobile phase B-ACN: MEOH: AA:: 40:60:1, Injection volume 20 μL, Flow rate 1 ml/min, Agilent RP-C8 HPLC-DAD results for EDEW of E. dracunculoides Flavonoid/Phenolics Signal wavelength Quantity (μg/mg dry extract) 65.8 ± 2.2 Catechin Caffeic acid 8.5 ± 0.75 4.54 ± 0.35 EDEW ethanol + water extract. Each value is represented as mean ± SD (n = 3) In vitro antioxidant activities DPPH radical scavenging activity Moderate to low scavenging activity for DPPH was exhibited by all the extracts of E. dracunculoides (Table 5). Minimum IC50 values were exhibited by EDEW (144.7 ± 3.4 μg/ml) followed by EDE (263.6 ± 3.7 μg/ml) and EDM (351.3 ± 4.5 μg/ml) while EDA and EDH showed the higher IC50 values (> 1000 μg/ml). Overall, order of IC50 of EDEW < EDE < EDM < EDA < EDH was observed. The DPPH radical scavenging activity of extract/fractions showed significant correlation with TPC (R2 = 0.9028, P < 0.01) and non-significant with TFC (R2 = 0.5687, P > 0.05, Table 6). All the extracts showed the higher IC50 values than ascorbic acid (26.65 ± 2.4 μg/ml). However, concentration dependent activity was recorded for all the extracts. IC50 values of different antioxidant activities of E. dracunculoides extracts IC50(μg/ml) Plant sample DPPH scavenging Hydroxyl scavenging Iron chelating 297.12 ± 3.22a 202.20 ± 3.41b 181.10 ± 1.93c 366.81 ± 3.12d 279.51 ± 2.57e 446.3 ± 3.52c 244.45 ± 2.35f 45.97 ± 1.92e 78.57 ± 2.15f Values are presented as means ± SD (n = 3). Means with different superscript (a-f) letters in the column are significantly (P < 0.01) different from one another Correlation of IC50 values of different antioxidant activities of E. 
dracunculoides with total phenolic and total flavonoid contents Correlation R2 0.9028** Hydroxyl radical scavenging activity 0.7224* Iron chelating assay Nitric Oxide radical scavenging Activity β - carotene bleaching scavenging activity Phosphomolybdenum assay 0.893** Column with different superscripts are significantly different *, **, indicate P < 0.05, P < 0.01. TFC total flavonoid content, TPC total phenolic content Hydroxyl radical (•OH) scavenging assay All the extracts of E. dracunculoides scavenged •OH radicals and prevented 2-deoxyribose breakdown (Table 5). A concentration-dependent pattern was observed for hydroxyl radical scavenging activity. Lowest IC50 values were recorded for EDEW (78.02 ± 1.21 μg/ml) followed by EDE (121.70 ± 2.05 μg/ml), EDM (142.43 ± 2.51 μg/ml), EDA (202.20 ± 3.4 μg/ml) and EDA (297.12 ± 3.22 μg/ml). IC50 values of EDH, EDA, EDE, EDEW and EDM were significantly higher from rutin (80.74 ± 2.25 μg/ml) and gallic acid (45.97 ± 1.92 μg/ml). Significant correlation of IC50 values of hydroxyl radical scavenging was determined for TPC (R2 = 0.9505, P < 0.01) as well as for TFC (R2 = 0.7224, P <0.05) (Table 6). Nitric oxide (NO−) scavenging assay In the present study, moderate level of nitric oxide scavenging activity was observed for all the extracts with IC50 values for EDEW (317.8 ± 3.35 μg/ml), EDE (366.81 ± 3.12 μg/ml) and EDM (405.01 ± 3.57 μg/ml) as compared to standard ascorbic acid (244.45 ± 2.35 μg/ml). IC50 values for other extracts were 583.4 ± 3.65 μg/ml and 756.31 ± 4.23 μg/ml for EDA and EDH, respectively (Table 5). IC50 values obtained for nitric oxide scavenging activity exhibited a significant correlation with TPC (R2 = 0.9638, P <0.01) and TFC (R2 = 0.6822, P <0.05) (Table 6). Iron chelating activity Iron chelating activity (IC50 values) of E. dracunculoides extracts are given in Table 5. In current study, the best IC50 values for iron chelation were exhibited by EDE (279.51 ± 2.57 μg/ml) while the least by EDH (616.57 ± 3.3 μg/ml). IC50 value for other extracts was; EDEW (331.46 ± 2.42 μg/ml), EDM (446.3 ± 3.52 μg/ml) and EDA (531.43 ± 2.94 μg/ml). IC50 of standard EDTA was 189.85 ± 1.53 μg/ml (Table 5). Significant correlation (R2 = 0.9361, P < 0.01) of IC50 values was observed with TFC (R2 = 0.9361, P < 0.01) and for TPC (R2 = 0.8232, P < 0.05) (Table 6 ). β-Carotene scavenging activity Ethanol + water extract of E. dracunculoides (EDEW) showed the lowest IC50 value (100.40 ± 1.8 μg/ml) as compared to other extracts viz. EDE (120.53 ± 2.52 μg/ml), EDM (210.23 ± 3.41 μg/ml) and EDA (181.10 ± 1.9 μg/ml). However, the maximum IC50 was shown by EDH (244.06 ± 3.8 μg/ml) as compared to the standard catechin (78.57 ± 2.15 μg/ml) shown in Table 5. The concentration dependent inhibition in β-carotene bleaching power pattern was observed for all the extracts. The assay showed significant correlation of IC50 with both TPC (R2 = 0.7579, P < 0.05) and TFC (R2 = 0.8965, P < 0.01) as shown in Table 6. Ethanol + water extract (EDEW) showed the highest reducing power with 792.59 mg ascorbic acid equivalent/g sample measured at 250 μg/ml of extract followed by EDE (777.77 mg ascorbic acid equivalents/g sample), EDM (770.37 mg ascorbic acid equivalents/g sample), EDA (711.11 mg ascorbic acid equivalents/g sample) and EDH (688.88 mg ascorbic acid equivalents/g sample) as shown in Fig. 3. There was recorded a significant correlation between the reducing power and with both TPC (R2 = 0.9812, P < 0.01) and TFC (R2 = 0.7349, P < 0.05) shown in Table 6. 
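The correlations reported here and in Table 6 are Pearson coefficients between assay IC50 values and the phenolic/flavonoid contents (computed in the original work with GraphPad Prism). A minimal sketch of that computation follows; the TPC values are those reported in Table 1, while the IC50 vector is a hypothetical placeholder used only to show the calculation.

import numpy as np

tpc  = np.array([17.35, 16.41, 14.11, 10.62, 8.21])   # mg GAE/g (EDEW, EDE, EDM, EDA, EDH)
ic50 = np.array([145.0, 260.0, 350.0, 580.0, 760.0])  # ug/ml, hypothetical placeholders

r = np.corrcoef(ic50, tpc)[0, 1]
print("Pearson r =", round(r, 3), " R^2 =", round(r**2, 3))
# A strong activity-content relationship appears as a large negative r,
# since higher phenolic content corresponds to a lower IC50 (stronger scavenging).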
Reducing power activity of different concentrations of Euphobia dracunculoids extracts Total antioxidant capacity (phosphomolybdenum assay) Total antioxidant capacity of various extracts was determined by phosphomolybdate method and expressed as equivalents of ascorbic acid (mg/g of extract) as shown in Fig. 4. Total antioxidant capacity was found to decrease in the order, EDEW > EDE > EDM > EDA > EDH. All the samples exhibited an increase in antioxidant capacity with increase in concentration. The assay showed significant correlation with TPC (R2 = 0.7429, P < 0.05) and TFC (R2 = 0.893, P < 0.01) as shown in Table 6. Total antioxidant activity of different concentrations of Euphobia dracunculoids extracts Anti-inflammatory activity of E. dracunculoides The results of carrageenan-induced rat paw edema are summarized in Table 7. EDH, EDA, EDE, EDEW and EDM of E. dracunculoides were tested for their anti-inflammatory effects. The results obtained indicated that EDH possessed significant (P < 0.05) anti-inflammatory activity in rats followed by EDE and EDEW. The EDH at the test dose i.e. 300 mg/kg body weight reduced the carrageenan-induced edema up to 68.660 ± 10.502 % whereas the standard drug diclofenac sodium showed 78.823 ± 6.395 % of edema inhibition after 4 h of carrageenan injection. Similarly EDE and EDEW showed 51.384 ± 8.623 % and 46.302 ± 8.975 % inhibition of edema formation after 4 h of carrageenan administration. Effect of E. dracunculoides on carrageenan-induced paw edema in rat mg/kg Edema volume (ml)/ Percent edema inhibition DMSO (2 ml) 0.506 ± 0.046 DIC (10) 42.862 ± 5.614ab 69.975 ± 5.730a EDH (300) 68.660 ± 10.502a EDA (300) 24.960 ± 7.283c 27.492 ± 5.730d EDE (300) 37.307 ± 8.113abc 43.014 ± 12.271bc 51.384 ± 8.623b EDEW (300) 31.752 ± 3.883bc 39.746 ± 10.125cd EDM (300) 28.665 ± 11.396c 32.394 ± 8.443cd EDH E. dracunculoides n-hexane extract, EDA E. dracunculoides aqueous extract, EDE E. dracunculoides ethyl acetate, EDEW E. dracunculoides ethyl acetate + water extract, EDM E. dracunculoides methanol extract (EDM), DIC diclofenac sodium Data values shown represent mean ± SD (n = 6). Means with different superscript (a-d) letters in the column are significantly (P < 0.05) different from one another Analgesic activity of E. dracunculoides In this assay E. dracunculoides extracts; EDH, EDA, EDE, EDEW and EDM were evaluated in rats for their analgesic potential in hot plate test (Table 8). All the extracts showed increase in the percent latency period with the maximum was recorded with 74.309 ± 5.864 % for EDH as compared to 78.889 ± 5.853 % with morphine after 60 min of drug administration. The EDE and EDM also exhibited more than 50 % of analgesia. The extracts and the standard drug showed less analgesia after 90 min of drug administration except the EDA and EDEW where more analgesia was recorded after 90 min of drug administration. Analgesic activity of E. dracunculoides of hot plate test in rat Treatment (mg/kg) Latency time sec Percent analgesia 10.167 ± 0.752 3.971 ± 3.588d Morphine (10) Data values shown represent mean ± SD (n = 6). Means with different superscript (a-e) letters in the column are significantly (P < 0.05) different from one another The role of medicinal plants in curing diseases is increasing due to the presence of versatile compounds that have the ability to cure a variety of diseases and helping physicians to cope with increasing ratio of ailments these days [34]. Plants are rich in variety of compounds with different polarities [35]. 
Maximum yield was obtained in EDEW followed by EDM and the minimum yield was recorded in EDE following the rule that polar solvents dissolve more compounds in comparison to that of non-polar in short time at room temperature [36]. Present studies are in agreement to the concept of polarity based extraction with the experiment run by [37] on Maytenus royleanus leaves and [38] evaluating Rumex hastatus roots. Preliminary qualitative phytochemical screening gives a clue for the medicinal aptitude of the herb. In the conducted study bioactive components that impart biologically active nature to the plant were screened and results ensured the presence of terpenoids, coumarins, flavonoids, tannins, phenols, alkaloids, saponins, and betacyanin. These variant compounds ranging from low polarity to high polarity were extracted in their respective solvents. The EDH was analyzed through GC-MS and it was found to contain 30 chemical constituents eluted between 6.36 and 40.10 min. Maximum proportion was contributed by terpenoids followed by lactones, steroids, phenols and heterocycles. GC-MS analyses of EDH validated the concept of polarity based extraction as the most of the compounds identified were of non-polar nature. In the present study through HPLC analysis of EDEW presence of four compounds; rutin, catechin, caffeic acid and quercetin was evaluated. Rutin was present in maximum amount (65.8 ± 2.2 μg/mg dry extract) followed by catechin (15.3 ± 1.2 μg/mg dry extract). Rutin is a known and well reputed secondary metabolite of plants with admirable hepatoprotective, anti-inflammatory and antioxidant activities [39]. Another well-known polyphenolic compound, catechin has good antioxidant potential and provides a reliable defense wall against free radicals. Catechin provides protection against neurological disorders, inflammation and apoptosis. The admiring amount of standard antioxidant phenolic compounds in EDEW is the justification of raised antioxidant potential of this extract. Significant correlation between TPC and TFC with that of antioxidant assays also validates the HPLC results. Natural antioxidants have attained worth reputation in treating several diseases and have raised the value of folk herbal medicines in the modern era. The ability of DPPH radical scavenging is considered as a milestone while assessing the antioxidant aptitude of a crude plant extract or an isolated pure compound. It is a very short timed assay for appraising the antioxidant abilities and has worth economic values [40]. In this study EDEW showed the maximum scavenging activity with lowest IC50 value, followed by EDE < EDM < EDA < EDH. Greater quantity of phenolics and flavonoids extracted in EDEW might exhibit good scavenging abilities due to donation of electron or hydrogen to stabilize DPPH free radicals. None of the five extracts showed IC50 below ascorbic acid used as standard. Though [41] and [42] used different techniques for extraction yet our results are consolidating their findings. The results reported in this study indicated a good correlation with the TFC while non-significant correlation was recorded with TPC. The strong correlation of IC50 values with TFC might be attributed by the presence of active flavonoids such as myricetin, rutin, caffeic acid and catechin. Antioxidant potential of a plant crude extract or compound can also be estimated by its ability to oxidize linoleic acid. β-carotene is a fat soluble hydrocarbon with yellowish orange color. 
Linoleic acid hydroperoxides on reaction with β-carotene bleaches its color. This is due to the fact that β-carotene forms a complex with linoleic acid and oxidizes it. This consumption of β-carotene causes the reduction of bright yellow color to light milky color. Now the presence of an antioxidant in the reaction mixture checks the consumption of β-carotene by acting on linoleic acid free radicals. As the β-carotene is released from the complex between it and linoleic acid, the yellow color of the reaction mixture is regained. Hence stronger the antioxidant present in the reaction mixture brighter will be the color and higher will be its optical density [43]. In the present study, EDEW and EDE showed the best activity with lower IC50 values. Significantly good correlation of IC50 values was observed with TPC (P < 0.01) and TFC (P < 0.05) which defines that more active constituents exhibiting inhibition of β-carotene bleaching belong to the TPC while constituents having moderate activity are present in TFC. Hydroxyl radical is generated during various biochemical reactions in the body. It is a short lived, very toxic free radical having affinity for biomolecules like lipids, proteins, amino acids, sugars, deoxyribonucleic acids, leading to cancer, mutagenesis and cytotoxicity [44]. Superoxide dismutase converts superoxide radical into hydrogen peroxide which is converted to a highly reactive hydroxyl radical. Evidence of •OH scavenging activity by various extracts was obtained through deoxyribose system [45]. In vivo hydroxyl radicals are probably produced through Haber–Weiss reaction where Fe+3 is reduced to Fe+2 with the help of O2•- which leads to initiation of Fenton reaction between H2O2 and Fe+2 [46]. In the present study EDEW and EDE extracts showed the best activity with lower IC50 values. A significant correlation of IC50 values was present with both TPC and TFC. Our results are in well accordance to the findings of [47] who reported methanol extract of Dicliptera roxburghiana as the most active to scavenge hydroxyl radicals. At pH 7.4 a vigorous generation of nitric oxide from sodium nitroprusside occurs which further in favorable conditions viz aerobic conditions and aqueous solution react with oxygen and convert to nitrite ions, that can be appraised by Griess reagent. These nitrite ions impart pink color to the reaction mixture. Entities with NO− scavenging abilities hinder the nitrite ion production by consuming the available oxygen. In our study EDEW and EDE showed comparatively good results than rest of the extracts and significant correlation of IC50 values was observed with TPC and TFC. This is due to the fact that EDEW and EDE are rich in bioactive phenolic and polyphenolics which have a fine tendency to scavenge NO− free radicals that are responsible for oxidative stress. Research of [48] and [49] is in agreement to our findings. Idea of iron chelating assay is based on the principle of scavenger's ability to decolorize the iron-ferrozine complex. Ferrozine quickly reacts with the iron (II) to form water soluble colored complex. Scavenging entity present in the extract forms chelates with iron (II) hindering the iron-ferrozine complex formation and ultimately lowering the color intensity of the solution. In the present study EDE exhibited the best performance with lowest IC50 among all extracts in comparison to EDTA used as standard. 
Significant correlation of IC50 values was expressed with TFC (P < 0.01) and with TPC (P < 0.05) indicating that more active flavonoids constituents for iron chelation activity. The basic principle of phosphomolybdate assay is that antioxidant species reduces Mo (IV) to Mo (V) and this reduced form of Mo forms complex with phosphate at acidic pH and raised temperature impart dark green color to the final solution [50]. The electron/ hydrogen donating pattern of antioxidants depends upon its structure and series of redox reactions occurring in the activity [51]. EDEW and EDE showed admiring results than that of EDM, EDA and EDH. The assay showed a good correlation with TPC as well as TFC. Jan et al. [52] also reported aqueous extract as the best extract in phosphomolybdenum assay, followed by methanol. Moreover, significant correlation has also been reported with TPC which is in consensus with the present study but the method of extraction was a bit different, which suggests that this methodology made no difference in extracting bioactive compounds responsible for total antioxidant activity of the plant extract. Reducing power of E. dracunculoides was determined by using the potassium ferricynide reduction method. In this assay iron (Fe+3) in ferric chloride is converted to ferrous (Fe+2) by antioxidant compound/extract resulting in conversion of yellow color of the test solution to green. The intensity of green colour is directly proportional to the reducing power of the sample. Basically reducing power of the sample is due to hydrogen donating ability of antioxidants to the free radicals [53]. The assay was significantly correlated with TPC and TFC. This assay results in a pattern of EDEW > EDE > EDM > EDA > EDH at 250 μg/ml. Our study has been supported by the report of [38] that crude methanol extract was the best sample in reducing power assay after butanol. Carrageenan is widely used to induce paw edema in rodents to demonstrate anti-inflammatory effect of drugs or herbs. Carrageenan-induced inflammation is considered to be a biphasic model. Initial stage (1–2 h) contributes to the release of histamine, bradykinin and serotonin which mediates the increased synthesis of prostaglandins from surrounding tissues of the injured site. The later phase (3–4 h) is characterized by the elevated level of prostaglandins mediated by the elevated release of leukotrienes and bradykinin. During this phase the cyclo-oxygenase-2 (COX-2) converts arachidonic acid into prostaglandins which is a key factor of inflammation maintenance. In this experiment carrageenan induced edema in the hind paw of rats was inhibited by all the extracts. EDH significantly inhibited the edema formation; 1, 2, 3 and 4 h after the injection of carrageenan in the hind paw of rats. These results indicated that the phyto-constituents present in EDH inhibited the inflammatory mediators of the initial as well as late phase of inflammation induced with carrageenan. The other extracts however, more effectively inhibited the edema during the initial phase. Steroids are established anti-inflammatory agents which inhibit the production of prostaglandin not only by inducing the biosynthesis of phospholipase A2 inhibitor but also by raising the level of cyclo-oxygenase/PGE isomerase [54–56]. 
In the present investigation GC-MS analysis of EDH revealed the existence of stigmasta-5, 22-dien-3-ol, (3á,22E)- (CAS); a phytosterol possesses strong anti-inflammatory and analgesic properties probably contributing towards the anti-inflammatory properties of EDH [57, 58]. The standard drug also exhibited the strong anti-inflammatory potential after 2, 3 and 4 h of carrageenan injection to rats. Diclofenac sodium like other NSAIDs targets the COX-2 enzyme thereby inhibiting the formation of the paw edema. The anti-inflammatory results obtained with EDH in both phases might be attributed by the counteraction of anti-inflammatory agents such as sterols and terpenoids [59]. The anti-inflammatory effects obtained by the polar extracts; EDE, EDA, EDEW and EDM during the initial phase might be attributed by the presence of flavonoids (rutin, catechin, caffeic acid and myricetin) and other constituents. Anti-inflammatory activities of rutin, catechin and caffeic acid have been well documented [60]. Our studies are in consensus with [60] and [61, 62] who generated the same anti-inflammatory results of Acacia hydaspica and Boerhavia procumbens in rats. Thermal nociception models such as, hot plate test was used to evaluate the central analgesic activity. In this study all the extracts of E. dracunculoides showed analgesic effect in the hot plate test. EDH exhibited the best potent analgesic activity among the extracts with 74.309 ± 5.864 % inhibition of pain sensation followed by EDE (61.811 ± 4.528 %) in comparison to morphine (78.889 ± 5.853 %) after 60 min of drug administration. Morphine induces analgesic effect through activation of opioid receptors and the apparent similarity between the results of extracts with standard morphine, indicates that they might work in a same manner to reduce pain sensation. The presence of steroids in EDH might induce the biosynthesis of phospholipase A2 inhibitor and cyclo-oxygenase/PGE isomerase which in turn inhibits the pain producing prostaglandins. Our results are in agreements to the findings of Mondal et al. [63] evaluating Alternanthera sessilis as analgesic agent. Backhouse et al. [64] also reported the same results while evaluating Buddleja globosa as analgesic and anti-inflammatory agent. The present results suggest that the flavonoids in E. dracunculoides might be the key players in scavenging of oxidative stress inducing species while flavonoids along with sterols and terpenoids alleviate the inflammation and pain inducing mediators. MRK is strongly acknowledged for his kind supervision, expert guidance and generous facilitation of all necessary materials and equipments. Dr. Bushra Mirza is also highly acknowledged for her assistance in HPLC. MM collect the plant and data for various activities. MRK provide the nacessary facilities and edited the manuscript. IH provided help in HPLC. NAS,MAF, SU, AS, ZZ, TY and MS provide help in experimentation, data collection and analysis. All authors read and approved the final manuscript. Department of Biochemistry, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, Pakistan Department of Biosciences, COMSATS Institute of Information Technology, Islamabad, Pakistan Department of Pharmacy, Faculty of Biological Sciences, Quaid-i-Azam University, Islamabad, Pakistan Okwu DE, Uchenna NF. Exotic multifaceted medicinal plants of drugs and pharmaceutical industries. Afr J Biotechnol. 2009;8:25.Google Scholar Mahmood A, Mahmood A, Tabassum A. 
Ethnomedicinal survey of plants from District Sialkot, Pakistan. J App Pharm. 2011;2:212–20.Google Scholar Birben E, Sahiner UM, Sackesen C, Erzurum S, Kalayci O. Oxidative stress and antioxidant defense. World Allergy Organ J. 2012;5(1):9.View ArticlePubMedPubMed CentralGoogle Scholar Shah NA, Khan MR, Naz K, Khan MA. Antioxidant potential, DNA protection, and HPLC-DAD analysis of neglected medicinal Jurinea dolomiaea roots. BioMed Res Int. 2014;2014:726241.PubMedPubMed CentralGoogle Scholar Cordell GA. Changing strategies in natural products chemistry. Phytochemistry. 1995;40(6):1585–612.View ArticleGoogle Scholar Kang JY, Khan MN, Park NH, Cho JY, Lee MC, Fujii H. Antipyretic, analgesic, and anti-inflammatory activities of the seaweed Sargassum fulvellum and Sargassum thunbergii in mice. J Ethnopharmacol. 2008;116(1):187–90.View ArticlePubMedGoogle Scholar Jassbi AR. Chemistry and biological activity of secondary metabolites in Euphorbia from Iran. Phytochemistry. 2006;67(18):1977–84.View ArticlePubMedGoogle Scholar Bigoniya P, Rana A. A comprehensive phyto-pharmacological review of Euphorbia neriifolia Linn. Phcog Rev. 2008;2(4):57.Google Scholar Bani S, Kaul A, Jaggi B, Suri K, Suri O, Sharma O. Anti-inflammatory activity of the hydrosoluble fraction of Euphorbia royleana latex. Fitoterapia. 2000;71(6):655–62.View ArticlePubMedGoogle Scholar Kumar S, Malhotra R, Kumar D. Euphorbia hirta: Its chemistry, traditional and medicinal uses, and pharmacological activities. Phcog Rev. 2010;4(7):58.View ArticleGoogle Scholar Patil SB, Naikwade NS, Magdum CS. Review on phytochemistry and pharmacological aspects of Euphorbia hirta Linn. JPRHC. 2009;1(1):113–33.Google Scholar Barla A, Öztürk M, Kültür Ş, Öksüz S. Screening of antioxidant activity of three Euphorbia species from Turkey. Fitoterapia. 2007;78(6):423–5.View ArticlePubMedGoogle Scholar Rahman MA, Mossa JS, Al-Said MS, Al-Yahya MA. Medicinal plant diversity in the flora of Saudi Arabia 1: a report on seven plant families. Fitoterapia. 2004;75(2):149–61.View ArticlePubMedGoogle Scholar Sharma J, Painuli R, Gaur R. Plants used by the rural communities of district Shahjahanpur, Uttar Pradesh. Indian J Tradit Know. 2010;9(4):798–803.Google Scholar Sikarwar R, Bajpai A, Painuli R. Plants used as veterinary medicines by aboriginals of Madhya Pradesh, India. Pharma Biol. 1994;32(3):251–5.View ArticleGoogle Scholar Wang L, Ma Y-T, Sun Q-Y, Li X-N, Yan Y, Yang J, et al. Structurally diversified diterpenoids from Euphorbia dracunculoides. Tetrahedron. 2015;71(34):5484–93.View ArticleGoogle Scholar Qurainy F, Khan S, Ali MA, Hemaid M, Ashraf M. Authentication of Ruta graveolens and its adulterant using internal transcribed spacer (ITS) sequences of nuclear ribosomal DNA. Pak J Bot. 2011;43(3):1613–20.Google Scholar Parray SA, Bhat J, Ahmad G, Jahan N, Sofi G, IFS M. Ruta graveolens: from traditional system of medicine to modern pharmacology: an overview. Am J Pharm Tech Res. 2012;2(2):239–52.Google Scholar J H: Phytochemical Methods. A guide to modern techniques of plant analysis. 3rd ed. New York: Chapman & Hall Co; 1998. p. 1–302.Google Scholar Trease GE, Evans EW. Pharmacognosy. 11th ed. London: Braillar Tiridel Can Macmillian Publishers; 1989. p. 60–75.Google Scholar Kim D-O, Jeong SW, Lee CY. Antioxidant capacity of phenolic phytochemicals from various cultivars of plums. Food chem. 2003;81(3):321–6.View ArticleGoogle Scholar Park Y-S, Jung S-T, Kang S-G, Heo BG, Arancibia-Avila P, Toledo F, et al. 
Antioxidants and proteins in ethylene-treated kiwifruits. Food Chem. 2008;107(2):640–8. Cha J-D, Jeong M-R, Jeong S-I, Moon S-E, Kim J-Y, Kil B-S, et al. Chemical composition and antimicrobial activity of the essential oils of Artemisia scoparia and A. capillaris. Planta Med. 2005;71(2):186–90. Zu Y, Li C, Fu Y, Zhao C. Simultaneous determination of catechin, rutin, quercetin, kaempferol and isorhamnetin in the extract of sea buckthorn (Hippophae rhamnoides L.) leaves by RP-HPLC with DAD. J Pharmaceut Biomed. 2006;41(3):714–9. Brand-Williams W, Cuvelier M, Berset C. Use of a free radical method to evaluate antioxidant activity. Food Sci Technol-Leb. 1995;28(1):25–30. Halliwell B, Gutteridge JM. Formation of a thiobarbituric-acid-reactive substance from deoxyribose in the presence of iron salts: the role of superoxide and hydroxyl radicals. FEBS Lett. 1981;128(2):347–52. Bhaskar H, Balakrishnan N. In vitro antioxidant property of laticiferous plant species from western ghats Tamilnadu, India. Int J Health Res. 2009;2:2. Dastmalchi K, Dorman HD, Oinonen PP, Darwis Y, Laakso I, Hiltunen R. Chemical composition and in vitro antioxidative activity of a lemon balm (Melissa officinalis L.) extract. Food Sci Technol-Leb. 2008;41(3):391–400. Elzaawely AA, Xuan TD, Koyama H, Tawata S. Antioxidant activity and contents of essential oil and phenolic compounds in flowers and seeds of Alpinia zerumbet (Pers.) BL Burtt. & RM Sm. Food Chem. 2007;104(4):1648–53. Siddhuraju P, Mohan P, Becker K. Studies on the antioxidant activity of Indian Laburnum (Cassia fistula L.): a preliminary assessment of crude extracts from stem bark, leaves, flowers and fruit pulp. Food Chem. 2002;79(1):61–7. Umamaheswari M, Chatterjee T. In vitro antioxidant activities of the fractions of Coccinia grandis L. leaf extract. Afr J Tradit Complem Med. 2008;5(1):61–73. Winter CA, Risley EA, Nuss GW. Anti-inflammatory and antipyretic activities of indomethacin, 1-(p-chlorobenzoyl)-5-methoxy-2-methyl-indole-3-acetic acid. J Pharmacol Exp Ther. 1963;141(3):369–76. Muhammad N, Saeed M, Khan H. Antipyretic, analgesic and anti-inflammatory activity of Viola betonicifolia whole plant. BMC Complem Altern Med. 2012;12(1):59. Petrovska BB. Historical review of medicinal plants' usage. Pharmacogn Rev. 2012;6(11):1. Jones WP, Kinghorn AD. Extraction of plant secondary metabolites. In: Natural products isolation. Volume 20. Springer; 2005: 323-351. Pin K, Chuah AL, Rashih AA, Mazura M, Fadzureena J, Vimala S, et al. Antioxidant and anti-inflammatory activities of extracts of betel leaves (Piper betle) from solvents with different polarities. J Trop For Sci. 2010;448–455. Shabbir M, Khan MR, Saeed N. Assessment of phytochemicals, antioxidant, anti-lipid peroxidation and anti-hemolytic activity of extract and various fractions of Maytenus royleanus leaves. BMC Complem Altern Med. 2013;13(1):143. Sahreen S, Khan MR, Khan RA. Comprehensive assessment of phenolics and antiradical potential of Rumex hastatus D. Don. roots. BMC Complem Altern Med. 2014;14(1):47. Kubola J, Siriamornpun S.
Phytochemicals and antioxidant activity of different fruit fractions (peel, pulp, aril and seed) of Thai gac (Momordica cochinchinensis Spreng). Food Chem. 2011;127(3):1138–45. Khan MA, Rahman AA, Islam S, Khandokhar P, Parvin S, Islam MB, et al. A comparative study on the antioxidant activity of methanolic extracts from different parts of Morus alba L. (Moraceae). BMC Res Notes. 2013;6(1):24. Khan RA, Khan MR, Sahreen S, Ahmed M. Evaluation of phenolic contents and antioxidant activity of various solvent extracts of Sonchus asper (L.) Hill. Chem Cent J. 2012;6(1):12. Bokhari J, Khan MR, Shabbir M, Rashid U, Jan S, Zai JA. Evaluation of diverse antioxidant activities of Galium aparine. Spectrochim Acta A. 2013;102:24–9. Tan Z, Shahidi F. Antioxidant activity of phytosteryl phenolates in different model systems. Food Chem. 2013;138(2):1220–4. Kohen R, Nyska A. Invited review: Oxidation of biological systems: oxidative stress phenomena, antioxidants, redox reactions, and methods for their quantification. Toxicol Pathol. 2002;30(6):620–50. Wu N, Kong Y, Fu Y, Zu Y, Yang Z, Yang M, et al. In vitro antioxidant properties, DNA damage protective activity, and xanthine oxidase inhibitory effect of cajaninstilbene acid, a stilbene compound derived from pigeon pea [Cajanus cajan (L.) Millsp.] leaves. J Agr Food Chem. 2010;59(1):437–43. Kehrer JP. The Haber–Weiss reaction and mechanisms of toxicity. Toxicology. 2000;149(1):43–50. Ahmad B, Khan MR, Shah NA, Khan RA. In vitro antioxidant potential of Dicliptera roxburghiana. BMC Complem Altern Med. 2013;13(1):140. Duenas M, Hernandez T, Estrella I. Assessment of in vitro antioxidant capacity of the seed coat and the cotyledon of legumes in relation to their phenolic contents. Food Chem. 2006;98(1):95–103. Kilani S, Sghaier MB, Limem I, Bouhlel I, Boubaker J, Bhouri W, et al. In vitro evaluation of antibacterial, antioxidant, cytotoxic and apoptotic activities of the tubers infusion and extracts of Cyperus rotundus. Bioresource Technol. 2008;99(18):9004–8. Prieto P, Pineda M, Aguilar M. Spectrophotometric quantitation of antioxidant capacity through the formation of a phosphomolybdenum complex: specific application to the determination of vitamin E. Anal Biochem. 1999;269(2):337–41. Lee SK, Mbwambo Z, Chung H, Luyengi L, Gamez E, Mehta R, et al. Evaluation of the antioxidant potential of natural products. Comb Chem High T Scr. 1998;1(1):35–46. Jan S, Khan MR, Rashid U, Bokhari J. Assessment of antioxidant potential, total phenolics and flavonoids of different solvent fractions of Monotheca buxifolia fruit. Osong Public Health and Research Perspectives. 2013;4(5):246–54. Gordon M. The mechanism of antioxidant action in vitro. In: Food antioxidants. Springer; 1990: 1-18. Flower R, Blackwell G. Anti-inflammatory steroids induce biosynthesis of a phospholipase A2 inhibitor which prevents prostaglandin generation. Nature. 1979;278:456–9. Uma B, Yegnanarayan R, Pophale P, Zambare M, Somani R.
Antinociceptive evaluation of an ethanol extract of Achyranthes aspera (agadha) in animal models of nociception. Int J Phyto. 2010;2:4. Barnes PJ, Adcock I, Spedding M, Vanhoutte PM. Anti-inflammatory actions of steroids: molecular mechanisms. Trends Pharmacol Sci. 1993;14(12):436–41. Gomez M, Saenz M, Garcia M, Fernandez M. Study of the topical anti-inflammatory activity of Achillea ageratum on chronic and acute inflammation models. Z Naturforsch C. 1999;54(11):937–41. Conforti F, Sosa S, Marrelli M, Menichini F, Statti GA, Uzunov D, et al. In vivo anti-inflammatory and in vitro antioxidant activities of Mediterranean dietary plants. J Ethnopharmacol. 2008;116(1):144–51. Shah SMM, Sadiq A, Shah SMH, Ullah F. Antioxidant, total phenolic contents and antinociceptive potential of Teucrium stocksianum methanolic extract in different animal models. BMC Complem Altern Med. 2014;14(1):181. Afsar T, Khan MR, Razak S, Ullah S, Mirza B. Antipyretic, anti-inflammatory and analgesic activity of Acacia hydaspica R. Parker and its phytochemical analysis. BMC Complem Altern Med. 2015;15(1):136. Bokhari J, Khan MR, Haq IU. Assessment of phytochemicals, antioxidant, and anti-inflammatory potential of Boerhavia procumbens Banks ex Roxb. Toxicol Ind Health. 2015;0748233714567183. Bokhari J, Khan MR. Evaluation of anti-asthmatic and antioxidant potential of Boerhavia procumbens in toluene diisocyanate (TDI) treated rats. J Ethnopharmacol. 2015;172:377–85. Mondal H, Saha S, Awang K, Hossain H, Ablat A, Islam MK, et al. Central-stimulating and analgesic activity of the ethanolic extract of Alternanthera sessilis in mice. BMC Complem Altern Med. 2014;14:398. Backhouse N, Rosales L, Apablaza C, Goïty L, Erazo S, Negrete R, et al. Analgesic, anti-inflammatory and antioxidant properties of Buddleja globosa, Buddlejaceae. J Ethnopharmacol. 2008;116(2):263–9.
CommonCrawl
\begin{document} \title[Some Aspects of Multifractal analysis] {Some Aspects of Multifractal analysis} \author[]{Ai-Hua Fan} \address{Ai Hua FAN, LAMFA, UMR 7352, CNRS, Universit\'e de Picardie, 33 rue Saint Leu, 80039 Amiens, France. E-mail: [email protected]} \subjclass[2010]{Primary 37C45} \keywords{Dynamical system, Ergodic average, Multifractal analysis, Hausdorff dimension. } \begin{abstract} The aim of this survey is to present some aspects of multifractal analysis around the recently developed subject of multiple ergodic averages. Related topics include dimensions of measures, oriented walks, Riesz products, etc. The exposition on the multifractal analysis of multiple ergodic averages is mainly based on \cite{FLM,KPS,FSW}. {\em This paper is published in Geometry and Analysis of Fractals (pp 115-145), Ed. D. J. Feng and K. S. Lau, Springer Proceedings in Mathematics and Statistics Vol. 88, Springer (2014).} \end{abstract} \maketitle \section{Introduction} \label{sec:introduction} Multifractal problems can be put into the following framework. Let $(X, d)$ be a metric space and $\mathcal{P}(x)$ a property (quantitative or qualitative) depending on a point $x$ of the space $X$. For any prescribed property $\mathbf{P}$, we look at the set of those points $x$ which have the property $\mathbf{P}$: $$ E(\mathbf{P}) =\{x \in X: \mathcal{P}(x) = \mathbf{P}\}. $$ The determination of the size of the sets $E(\mathbf{P})$ for different $\mathbf{P}$'s is the central problem of multifractal analysis. According to popular folklore, the function $x \mapsto \mathcal{P}(x)$ is multifractal if $E(\mathbf{P})$ is not empty for uncountably many properties $\mathbf{P}$. Usually the size of a set $A$ in $X$ is described by its Hausdorff dimension $\dim_H A$ or its packing dimension $\dim_P A$, or its topological entropy in a dynamical setting. See \cite{Falc,Mattila1995} for the dimension theory and \cite{Bowen73,Pesin} for the notion of topological entropy. Seeds of multifractals were sown in B. Mandelbrot's works on multiplicative chaos in the 1970s \cite{Mandelbrot1974a,Mandelbrot1974b}. The first rigorous results are due to J. P. Kahane and J. Peyri\`ere \cite{KP1976}. The concept of multifractality came from geophysics and theoretical physics. At the beginning, U. Frisch and G. Parisi~\cite{FrishParisi}, H. G. Hentschel and I. Procaccia~\cite{hentchel} had the rather vague idea of a mixture of subsets of different dimensions, each of which has a given H\"{o}lder singularity exponent. The multifractal formalism became clearer in the 1980--90s in the works of T. C. Halsey-M. H. Jensen-L. P. Kadanoff-I. Procaccia-B. I. Shraiman \cite{Halsey}, of P. Collet-J. L. Lebowitz-A. Porzio~\cite{collet}, and of G. Brown-G. Michon-J. Peyri\`ere \cite{BMP}. The multifractal formalism is tightly related to thermodynamics, and D. Ruelle \cite{Ruelle} was the first to use the thermodynamical formalism to compute the Hausdorff dimensions of some Julia sets. Research on the subject has been very active and fruitful for four decades. The first and most studied multifractal quantity is the local dimension of a Borel measure $\mu$ on $X$. Recall that the local lower dimension of $\mu$ at $x\in X$ is defined by $$ \underline{D}_\mu(x) = \liminf_{r\to 0} \frac{\log \mu(B(x, r))}{\log r} $$ where $B(x, r)$ denotes the ball centered at $x$ with radius $r$. The local upper dimension $\overline{D}_\mu(x)$ is similarly defined. There is a huge literature on this subject. Let us mention another example, the H\"{o}lder exponent of a function (see \cite{Jafsiam}).
Let $\alpha >0$. Let $f: \mathbb{R}^d \to \mathbb{R}$ be a function and $x\in \mathbb{R}^d$ be a fixed point. We say $f$ is $\alpha$-H\"{o}lder at $x$ and we write $f\in C^\alpha(x)$ if there exist two constants $ \delta >0$ and $C>0$ and a polynomial $P$ of degree strictly smaller than $\alpha$ such that $$ |y-x|<\delta \Rightarrow |f(y) -f(x) - P(y-x)|\le C |y-x|^\alpha. $$ The H\"{o}lder exponent of $f$ at $x$ is then defined by $$ h_f(x) =\sup\{\alpha>0: f\in C^\alpha(x)\}. $$ Multifractal analysis has now become a set of tools applicable in analysis, probability and stochastic processes, number theory, ergodic theory and dynamical systems, not to mention applications in physics and other sciences. The main goal of this paper is to present some problems in the multifractal analysis of Birkhoff ergodic averages, especially of multiple Birkhoff ergodic averages, together with some related topics such as dimensions of measures and Riesz products as tools, and oriented walks as a similar subject. Let $T: X \to X$ be a map from $X$ into itself. We consider the dynamical system $(X,T)$. The main concern about the system is the behavior of an orbit $\{T^n x\}$ of a given point $x\in X$. Some aspects of the behavior of the orbit may be described by the so-called Birkhoff averages \begin{equation} \label{def_Sn} A_nf (x) = \frac{1}{n} \sum_{k=0}^{n-1} f(T^k x) \end{equation} where $f: X \to \mathbb{R}$ is a given function, called an observable. We refer to \cite{Walters1982} for basic facts in the theory of dynamical systems and ergodic theory. The famous and fundamental Birkhoff Ergodic Theorem states that for any $T$-invariant ergodic probability measure $\mu$ on $X$ and for any integrable function $ f\in L^1(\mu)$, the limit $$ \lim_{n\to +\infty} \ A_nf(x) = \int_X f \, d\mu $$ holds for $\mu$-almost all points $x\in X$. Even if $\mu$ is only $T$-invariant but not ergodic, the limit still exists for $\mu$-almost all points $x\in X$. For many dynamical systems, there is a rich class of invariant measures, so that the limit of the Birkhoff averages $A_nf(x)$ may vary for different points $x$. This variety reflects the chaotic feature of the dynamical system. A typical example is the doubling dynamics $x \mapsto 2 x (\!\!\!\mod 1)$ on the unit circle $\mathbb{T}:=\mathbb{R}/\mathbb{Z}$. The multifractal analysis of Birkhoff ergodic averages provides a way to study the chaotic feature of the dynamics. Let $\alpha\in \mathbb{R}$. We define the $\alpha$-level set $$E_f(\alpha) = \left\{ x\in X: \lim_{n\to +\infty} A_nf(x) = \alpha \right\}.$$ The purpose of the multifractal analysis is the determination of the size of the sets $ E_f(\alpha)$. If the Hausdorff dimension is used as the measuring device, we are led to the Hausdorff multifractal spectrum of $f$: $$ \mathbb{R} \ni \alpha \mapsto d_H(\alpha): = \dim_H E_f(\alpha). $$ The existing works show that in many cases it is possible to compute the spectrum $d_H(\cdot)$ and it is also possible to distinguish a nice invariant measure sitting on the set $E_f(\alpha)$ for each $\alpha$. By ``nice" we mean that the measure is supported by $E_f(\alpha)$ and its dimension is equal to that of $E_f(\alpha)$. The dimension of a measure is defined to be the dimension of the ``smallest" Borel support of the measure. Therefore the nice measure is a maximal measure in the sense that its dimension attains the maximum among all measures supported by $E_f(\alpha)$. This maximal measure may be invariant, ergodic and even mixing.
Some other nice properties are also shared by the maximal measure. For these well-studied classical ergodic averages, see for example \cite{BSS,FF,FFW,FLP2008,FS2003,FengLauWu,Olivier,TV}. Let us also mention some useful tools for dimension estimation \cite{BarralSeuret,BV,Durand,FST}. The multiple ergodic theory started almost at the same time as the development of multifractal analysis. It started with F\"{u}rstenberg's proof of Szemer\'edi's theorem on the existence of arbitrarily long arithmetic progressions in a set of integers of positive density \cite{Furstenberg}. This theory involves several dynamics rather than a single one. Let $T_1$, $T_2$, ..., $T_d$ be $d$ transformations on a space $X$. We assume that they commute with each other and preserve a given probability measure $\mu$. For $d$ measurable functions $F=(f_1, \cdots, f_d)$ we define the multiple Birkhoff averages by \begin{equation} \label{def_generalergaverage-0} A_n F(x) :=\frac{1}{n} \sum_{k=0}^{n-1} f_1(T_1^k x)f_2( T_2^k x)\cdots f_d(T_d^k x). \end{equation} An inspiring example is the couple $(\tau_2, \tau_3)$ on the circle $\mathbb{T}$ where $$ \tau_2 x = 2x \ (\!\!\!\!\!\mod 1), \qquad\tau_3 x = 3x \ (\!\!\!\!\!\mod 1). $$ The mixture of the dynamics $T_1, T_2, \cdots, T_d$ is much more difficult to understand. After the first works of F\"{u}rstenberg-Weiss \cite{FurstenbergWeiss} and of Conze-Lesigne \cite{ConzeLesigne}, B. Host and B. Kra \cite{HK1} proved the $L^2$-convergence of $A_nF$ when $T_j=T^j$ (powers of a fixed dynamics) and $f_j \in L^\infty(\mu)$. For almost everywhere convergence, results are sparse. J. Bourgain proved the almost everywhere convergence when $d=2$. We should point out that even if the limit exists, it is not easy to identify it. In particular, the limit may not be constant for some ergodic measures. For nilsystems, an explicit formula for the limit was known to E. Lesigne \cite{Lesigne} and T. Ziegler \cite{Ziegler}. Anyway, unlike the ``simple'' ergodic theory, the multiple ergodic theory has not yet reached its maturity. Although the multiple ergodic theory is still developing, this situation does not prevent us from investigating the multifractal features of multiple systems. Let us consider the following general set-up. For a given observable $\Phi:X^d \to \mathbb{R}$, we consider the multiple ergodic averages \begin{equation} \label{def_generalergaverage} A_n \Phi(x) :=\frac{1}{n} \sum_{k=0}^{n-1} \Phi(T_1^k x, T_2^k x, ..., T_d^k x). \end{equation} The Birkhoff averages (\ref{def_generalergaverage-0}) correspond to the special case of a tensor product $\Phi=f_1\otimes f_2\otimes \cdots \otimes f_d$. It is natural to introduce the following generalization of the multifractal spectrum. In this paper, we will only consider the case where $T_j = T^j$ for $1\le j \le d$. So \begin{equation} \label{def_ergaverage} A_n \Phi(x)=\frac{1}{n} \sum_{k=0}^{n-1} \Phi(T^k x, T^{2k} x, ..., T^{dk} x). \end{equation} The multiple Hausdorff spectrum of the observable $\Phi$ is defined by $$ d_H(\alpha) = \dim_H E_\Phi(\alpha), \qquad (\alpha \in \mathbb{R}) $$ where \begin{equation} \label{def_Ealpha} E_\Phi(\alpha)=\left\{ x\in X: \lim_{n\to +\infty} \ \frac{1}{n} \sum_{k=0}^{n-1} \Phi(T^k x, T^{2k} x, ..., T^{dk} x) =\alpha\right\}. \end{equation} As we said, the classical theory ($d=1$) is well developed. When $d>1$, there are several results on the multifractal analysis of the limit of the averages $A_n\Phi$ in some special cases; but questions remain largely unanswered.
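Before presenting the known results, let us fix ideas with a small numerical illustration (it is only an illustration, not part of the theory). The Python sketch below takes the shift $T$ on $\{0,1\}^{\mathbb{N}^*}$, draws the digits of $x$ i.i.d.\ with parameter $p$ (an assumption made only for this sketch), and computes the frequency of the digit $1$, i.e.\ the Birkhoff average (\ref{def_Sn}) of $f(x)=x_1$, together with the multiple average $\frac1n\sum_{k=1}^n x_kx_{2k}$, which corresponds to (\ref{def_ergaverage}) with $d=2$ and $\Phi(x,y)=x_1y_1$, a potential of the kind studied later in the paper.
\begin{verbatim}
import random

def simple_average(x, n):
    # (1/n) * sum_{k=1}^{n} x_k : Birkhoff average of f(x) = x_1 (frequency of 1's)
    return sum(x[k - 1] for k in range(1, n + 1)) / n

def multiple_average(x, n):
    # (1/n) * sum_{k=1}^{n} x_k * x_{2k} : multiple average with Phi(x, y) = x_1 * y_1
    return sum(x[k - 1] * x[2 * k - 1] for k in range(1, n + 1)) / n

if __name__ == "__main__":
    random.seed(0)
    n = 10 ** 5
    for p in (0.5, 0.8):  # digits drawn i.i.d. Bernoulli(p): an assumption of the sketch
        x = [1 if random.random() < p else 0 for _ in range(2 * n)]
        print("p = %.1f :  A_n f = %.4f   A_n Phi = %.4f"
              % (p, simple_average(x, n), multiple_average(x, n)))
    # Empirically the simple average settles near p and the multiple average near p*p;
    # non-typical points x produce other limits, and the level sets E_Phi(alpha)
    # collect the points with a prescribed limit alpha.
\end{verbatim}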
In Section 2, we will present the first results obtained in \cite{FLM} in a very special case, which give a feeling for the problem and illustrate its difficulty. One result is the Hausdorff spectrum obtained by using Riesz products, a tool borrowed from Fourier analysis. Another result concerns the box dimension of a multiplicatively invariant set. These two kinds of result are respectively generalized in \cite{FSW} and \cite{KPS,PSSS} in more general settings. In Section 5, we will present these results. As we mentioned, the local dimension of a measure was the first object of study in multifractal analysis. In Section 3, we will give an account of dimensions of measures, which are related to the local dimension and have their own interest. The idea of using Riesz products is inspired by a work on oriented walks \cite{Fan2000} to which Section 4 will be devoted. In the last section, we will collect some remarks and open problems. \section{First multifractal results on the multiple ergodic averages} The question of computing the dimension of $E_\Phi(\alpha)$ was raised by Fan, Liao and Ma in \cite{FLM}, where the following special case was studied: $X := \mathbb{M}_2:=\{-1, 1\}^{\mathbb{N}^*}$, $T$ is the shift, $d\ge 1$ and $\Phi$ is the function $$ \Phi(x^{(1)}, x^{(2)}, \cdots, x^{(d)})= x^{(1)}_1 x^{(2)}_1\cdots x^{(d)}_1, \quad ( x^{(1)}, x^{(2)}, \cdots, x^{(d)})\in \mathbb{M}_2^d. $$ Note that $x_1^{(j)}$ is the first coordinate of $x^{(j)}$. We consider $\mathbb{M}_2$ as the infinite product of the multiplicative group $\{-1, 1\}$. Then the function $x\mapsto x_j$ is a group character of $\mathbb{M}_2$, called a Rademacher function, and $$ \Phi( T^k x, T^{2k}x, \cdots, T^{dk}x) = x_k x_{2k}\cdots x_{dk} $$ is also a group character, called a Walsh function. Recall that $\mathbb{M}_2=\{-1,1\}^{\mathbb{N}^*}$, considered as a symbolic space, is endowed with the metric $$d(x,y) = 2 ^{-\min\{k: x_k\neq y_k\}}, \mbox{ for $x,y \in \mathbb{M}_2$.}$$ \subsection{Multifractal spectrum of a sequence of Walsh functions} Under the above assumption, we have the following result. \begin{theorem}{\rm (\cite{FLM})}\label{Thm_FLM} For every $\alpha\in [-1,1]$, the set $$B_\alpha := \left\{x\in \mathbb{M}_2: \lim_{n\to +\infty} \frac 1 n \sum_{k=1}^n x_kx_{2k} \cdots x_{dk} = \alpha\right\}$$ has the Hausdorff dimension $$\dim_H B_\alpha = 1 - \frac 1 d + \frac{1} {d\log 2} H \left( \frac{1+\alpha}{2} \right),$$ where $H(t):= -t\log t -(1-t) \log(1-t)$. \end{theorem} A key observation is that all these Walsh functions constitute a dissociated system in the sense of Hewitt-Zuckerman \cite{HZ}. This allows us to define probability measures called Riesz products on the group $\mathbb{M}_2$: $$ \mu_b := \prod_{k=1}^\infty (1+ b x_k x_{2k}\cdots x_{dk}) := w^*\!-\!\lim_{N\to\infty} \prod_{k=1}^N (1+ b x_k x_{2k}\cdots x_{dk} )dx $$ for $b \in [-1, 1]$, where $dx$ denotes the Haar measure on $\mathbb{M}_2$. Fortunately, the Riesz product $\mu_\alpha$ ($b=\alpha$) is a maximizing measure on $B_\alpha$. So, by computing the dimension of the measure $\mu_\alpha$, we get the stated formula. We will go back to dimensions of measures in Section~\ref{Sect_Dim} and to Riesz products in Section~\ref{Sect_OW}. Note that the case $d=1$ is nothing but the well-known Besicovitch-Eggleston theorem dating back to the 1940s, which may be considered the first result of multifractal analysis.
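To make Theorem~\ref{Thm_FLM} concrete, here is a short numerical evaluation of the spectrum (a Python sketch; the chosen values of $\alpha$ and $d$ are arbitrary). For $d=1$ it returns the classical Besicovitch--Eggleston value $H\!\left(\frac{1+\alpha}{2}\right)/\log 2$, and for every $d$ the maximal value $1$ is attained at $\alpha=0$.
\begin{verbatim}
from math import log

def H(t):
    # H(t) = -t log t - (1 - t) log(1 - t), with the convention 0 log 0 = 0
    return 0.0 if t in (0.0, 1.0) else -t * log(t) - (1 - t) * log(1 - t)

def dim_B_alpha(alpha, d):
    # Theorem (Fan-Liao-Ma): dim_H B_alpha = 1 - 1/d + H((1 + alpha)/2) / (d log 2)
    return 1.0 - 1.0 / d + H((1.0 + alpha) / 2.0) / (d * log(2.0))

if __name__ == "__main__":
    for d in (1, 2, 3):
        values = ["%+.1f -> %.4f" % (a, dim_B_alpha(a, d))
                  for a in (-1.0, -0.5, 0.0, 0.5, 1.0)]
        print("d =", d, ":", ";  ".join(values))
    # For each d the spectrum equals 1 at alpha = 0 and drops to 1 - 1/d at alpha = -1, +1.
\end{verbatim}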
\subsection{Box dimension of some multiplicatively invariant set}\label{X_2} The very motivation of \cite {FLM} was the multiple ergodic averages in the following case where $X: = \mathbb{D}_2:=\{0,1\}^{\mathbb{N}^*}$, $T$ is the shift on $\mathbb{D}_2$ and $\Phi(x,y) = x_1y_1$ for $x=x_1x_2... $ and $ y=y_1y_2...\in \mathbb{D}_2$. The space $\mathbb{D}_2$ may also be considered as the infinite product group of $\mathbb{Z}/2\mathbb{Z}$. But the function $x \mapsto x_1$ is no longer group character and the Fourier method fails. Then the authors of \cite{FLM} proposed to look at a subset of the $0$-level set \begin{equation} \label{def:E0} E_\Phi(0) = \left\{x\in \Sigma_2: \lim_{n\to +\infty} \frac 1 n \sum_{k=1}^n x_k x_{2k} =0\right\}. \end{equation} The proposed subset is \begin{equation} \label{def:X2} X_2 = \{x\in \Sigma_2: \ \forall \, k\geq 1, \ x_k x_{2k}=0\}. \end{equation} This set $X_2$ has a nicer structure than $E_\Phi(0)$. The condition $x_k x_{2k}=0$ is imposed to all integers $k$ without exception for all points $x$ in $X_2$, while the same condition is imposed to "most" integers $k$ for points $x$ in $E_\Phi(0)$. \begin{theorem}\label{box-dim} {\rm (\cite{FLM})} \label{prop_FLM} The box dimension of $X_2$ is equal to $$ \dim_B X_2=\frac{1}{2\log 2} \sum_{n=1}^{+\infty} \frac{ \log F_n}{2^n} ,$$ where $F_n$ is the Fibonacci sequence: $F_0=1$, $F_1=2$ and $F_{n+2}=F_{n+1}+F_n$ for $n\geq 0$. \end{theorem} The key idea to prove the formula is the following observation, which is also one of the key points for all the obtained results up to now in different cases. Look at the definition \eqref{def:X2} of $X_2$. The value of the digit $x_1$ of an element $x=(x_k)\in X_2$ has an impact on the value of $x_2$, which in turn on the value of $x_4$, ... and so forth on the values of $x_{2^k}$ for all $k\geq 1$. But it has no influence on $x_3$, $x_5$, ... . Similarly, the value of $x_i$ for an odd integer $i$ only has influence on $x_{ i 2^k}$. This suggests us the following partition $$\mathbb{N}^* = \bigsqcup _{i \mbox{\rm \ \small odd}} \ \Lambda_i, \ \ \mbox { with } \Lambda_i : =\{i\,2^n: n\geq 0\}.$$ We could say that the defining conditions of $X_2$ restricted to different $\Lambda_i$ are independent. We are then led to investigate, for each odd number $i$, the restriction of $x$ to $\Lambda_i$ which will be denoted by $$ x_{|\Lambda_i} = x_{i}x_{i2}x_{i2^2} \ldots x_{i2^n} \ldots.$$ If we rewrite $x_{|\Lambda_i} = z_1z_2 ...$, which is considered as a point in $\mathbb{D}_2$, then $(z_n)$ belongs to the subshift of finite type subjected to $z_k z_{k+1} =0$. It is clear that $$\dim_B X_2=\lim_{n\to\infty}\frac{\log_2N_n}{n}$$ where $N_n$ is the cardinality of the set $$ \{(x_1x_2\cdots x_{n}): x_{k}x_{2{k}}=0 \ \text{ for } k\geq 1 \text{ such that } \ 2{k}\leq n\}. $$ Let us decompose the set of the first $n$ integers as follows $\{1, \cdots, n\}=C_0 \sqcup C_1\sqcup \cdots\sqcup C_m$ with \begin{align*} C_0:&=\left\{1, \ \ \ \ \ \ 3,\ \ \ \ \ \, 5, \ \ \ \ \dots, \ 2 n_0-1 \right\}, \\ C_1:&=\left\{1\cdot 2, \ \ 3\cdot 2, \ \ 5\cdot 2, \ \dots, \ 2\cdot\big(2n_1-1\big) \right\}, \\ &\dots\\ C_k:&=\left\{1\cdot 2^k , \ 3\cdot 2^k, \ 5\cdot 2^k, \ \dots, \ 2^k\cdot (2n_k-1) \right\}, \\ &\dots\\ C_m:&=\left\{1\cdot 2^m \right\}. \end{align*} These finite sequences have different length $n_k$ ($0\le k\le m$). Actually $n_k$ is the biggest integer such that $2^k(2n_k-1)\leq n$, i.e. $ n_k=\left\lfloor\frac{n}{2^{k+1}}+\frac{1}{2}\right\rfloor. 
$ The number $m$ is the biggest integer such that $2^m\leq n$, i.e. $m=\lfloor\log_2 n\rfloor.$ It clear that $n_0>n_1>\cdots > n_{m-1}>n_m=1$. The conditions $x_{\ell}x_{2\ell}=0$ with $\ell$ in different columns in the table defining $C_0, \cdots, C_m$ are independent. This independence allows us to count the number of possible choices for $(x_1,\cdots, x_n)$ by the multiplication principle. First, we have $n_m(=1)$ column which has $m+1$ elements. We have $F_{m+1}$ choices for those $x_{\ell}$ with $\ell$ in the first column because $x_{\ell}x_{2\ell}$ is conditioned to be different from the word $11$. We repeat this argument for other columns. Each of the next $n_{m-1}-n_{m}$ columns has $m$ elements, so we have $F_{m}^{n_{m-1}-n_m}$ choices for those $x_{\ell}$ with $\ell$ in these columns. By induction, we get $$N_n=F_{m+1}^{n_m}F_{m}^{n_{m-1}-n_m}F_{m-1}^{n_{m-2}-n_{m-1}}\cdots F_1^{n_0-n_1}.$$ To finish the computation we need to note that $\frac{n_k}{n}$ tends to $2^{-(k+1)}$ as $n$ tends to the infinity. The set $X_2$ is not invariant under the shift. But as observed by Kenyon, Peres and Solomyak \cite{KPS}, it is multiplicatively invariant in the sense that $M_r X_2 \subset X_2$ for all integers $r\ge 1$ where $$ M_r((x_n)) = (x_{rn}). $$ The Hausdorff dimension of $X_2$ was obtained in \cite{KPS} and the gauge function of $X_2$ was obtained in \cite{PS2}. We will discuss the work done in \cite{KPS} in Section~\ref{Sect_MBA}. \section{Dimensions of measures}\label{Sect_Dim} Multifractal properties were first investigated for measures. The multifractal analysis of a measure is the analysis of the local dimension on the whole space, while the dimensions of a measure concern with what happens on a Borel support of the measure. In \cite{Fan1989c}, lower and upper Hausdorff dimensions of a measure were introduced and systematically studied, inspired by \cite{Peyriere1975} and \cite{Kahane1987b}. The lower and upper packing dimensions were later studied independently in \cite{Tamashiro1995} and \cite{He1}. Some aspects were also considered in \cite{Haase}. A fundamental theorem in the theory of fractals is Frostman theorem. Howroyd \cite{Howroyd1994} and Kaufman \cite{Kaufman} generalized it from Euclidean spaces to complete separable metric spaces. This fundamental theorem allows us to employ the potential theory. \subsection{Potential theory} Let $(X, d)$ be a complete separable metric space, called Polish space. Let $0<\alpha<\infty$. For any locally finite Borel measure $\mu$ on $X$, we define its potential of order $\alpha$ by $$ U_{\alpha}^{\mu} (x) := \int_X \frac{\text{d}\mu(y)}{(\text{d}(x, y))^\alpha}\quad \,\,\,\, (x \in X) $$ and its energy of order $\alpha$ by $$ I_{\alpha}^{\mu} := \int_X U_{\alpha}^{\mu}(x) \text{d}\mu(x) = \int_X \int_X \frac{\text{d}\mu(x) \text{d}\mu(y)}{(\text{d}(x, y))^\alpha}. $$ The capacity of order $\alpha$ of a compact set $K$ in $X$ is defined by $$ {\rm Cap}_\alpha K = \left( \inf_{\mu \in \mathcal{M}^+_1(K)} I_{\alpha}^{\mu} \right)^{-1}. $$ For an arbitrary set $E$ of $X$, we define its capacity of order $\alpha$ by $$ {\rm Cap}_\alpha E = \sup\{ {\rm Cap}_\alpha K: K \, \mbox{compact\ contained\ in }\ E\} . $$ For a set $E$ in $X$, we define its capacity dimension by $$ \dim_C E = \inf \{\alpha>0: {\rm Cap}_\alpha (E) = 0\} = \sup \{\alpha>0: {\rm Cap}_\alpha (E) > 0\}. $$ The following is the theorem of Frostman-Kaufman-Howroyd. Frostman initially dealt with the Euclidean space. 
Kaufman proved the result by generalizing a min-max theorem on quadratic function to the mutual potential energy functional, while Howroyd used the technique of weighted Hausdorff measures. \begin{theorem}[\cite{Kaufman,Howroyd1994}] Let $(X, d)$ be a complete metric space. For any Borel $E\subset X$, we have $ \dim_C E = \dim_H E. $ \end{theorem} \subsection{Hausdorff dimensions of a measures} An important tool to study a measure is its dimensions, which attempt to estimate the size of the "supports" of the measure. The idea finds its origin in J. Peyri\`ere's works on Riesz products \cite{Peyriere1975} and also in that of J.-P. Kahane on Dvoretzky covering \cite{Kahane1987b}. The following definitions were introduced in \cite{Fan1989c} (see also \cite{Fan1994, Falc}). Let $(X, d)$ be a complete separable metric space. Let $\mu$ be a Borel measure on $X$. The {\em lower Hausdorff dimension} and the {\em upper Hausdorff dimension} of a measure $\mu$ are respectively defined by \begin{eqnarray*} \dim_* \mu = \inf \{ \dim_H A: \mu(A)\ >0\},\qquad \dim^* \mu = \inf \{ \dim_H A: \mu(A^c)=0\}. \end{eqnarray*} It is evident that $ \dim_* \mu \le \dim^* \mu. $ When the equality holds, $\mu$ is said to be {\em unidimensional} or $\alpha$-{\em dimensional} where $\alpha$ is the common value of $\dim_* \mu$ and $\dim^* \mu$. The Hausdorff dimensions $\dim_* \mu$ and $\dim^* \mu$ are described by the lower local dimension function $\underline{D}_\mu(x)$ in the following way. \begin{theorem}[\cite{Fan1989c, Fan1994}]\label{HDM} $$ \dim_* \mu = {\mbox{\rm ess} \inf}_\mu \ \underline{D}_\mu(x), \quad \dim^* \mu = {\mbox{\rm ess} \sup}_\mu \ \underline{D}_\mu(x). $$ \end{theorem} There are also a continuity-singularity criterion using Hausdorff measures and a energy-potential criterion. Sometimes these criteria are more practical. \begin{theorem}[\cite{Fan1989c, Fan1994}] \begin{eqnarray*} \dim_* \mu &=& \sup \{ \alpha>0: \mu \ll H^\alpha\} = \sup \{ \alpha>0: \mu=\sum \mu_k, I_\alpha^{\mu_k}<\infty \}\\% \right\}\\ \dim^* \mu & =& \inf \{ \alpha>0: \ \mu \perp H^\alpha \} \, = \inf \, \{ \alpha>0: U_\alpha^{\mu}(x) =\infty, \ \mu\!-\!\mbox{\rm p.p.} \}. \end{eqnarray*} \end{theorem} Theorem~\ref{HDM} holds for lower and upper packing dimensions of a measures, similarly defined, if we replace $\underline{D}(\mu, x)$ by $\overline{D}(\mu, x)$ (see \cite{Tamashiro1995} and \cite{He1}). They are denoted by $\mbox{\rm Dim}_*\mu$ and $\mbox{\rm Dim}^*\mu$. We say $\mu$ is exact if $\dim_*\mu=\dim^*\mu =\mbox{\rm Dim}_*\mu =\mbox{\rm Dim}^*\mu $. \subsection{Sums, products, convolutions, projections of measures} What we present in this subsection was in the first unpublished version of \cite{Fan1994}. Some part was restated by in \cite{Tamashiro1995} and some part was used in \cite{Fan1994b,Fan1994c}. {\em Sum.} The absolute continuity $\nu\ll \mu$ is a partial order on the space of positive Borel measures $M^+(X)$ on the space $X$. Using the continuity-singularity criterion, it is easy to see that \begin{equation}\label{dim-equiv} 0<\nu \ll \mu \Rightarrow \dim_*\mu \le \dim_*\nu \le \dim^* \nu \le \dim^* \mu. \end{equation} The relation $\nu\sim \mu$ (meaning $\nu\ll \mu \ll \nu$) is an equivalent relation. Since two equivalent measures have the same lower and upper Hausdorff dimensions, both $\dim_*\mu$ and $\dim^*$ are well defined for equivalent classes. Given a family of positive measures $\{\mu_i\}_{i\in I}$ which is bounded under the order $\ll$, we denote its supremum by $\bigvee_{i\in I} \mu_i$. 
If the family is finite, we have $\bigvee_{i\in I} \mu_i \sim \sum_{i\in I} \mu_i$. Such equivalence also holds when the family is countable. In general, there is a countable sub-family $\mu_{i_k}$ such that $ \bigvee_{i\in I} \mu_i \sim \sum_k \mu_{i_k}. $ This is what we mean by sum of measures. \begin{theorem} If $\{\mu_i\}_{i\in I}$ is a bounded family of measures in $M^+(X)$, we have $$ \dim_* \bigvee_{i\in I} \mu_i = \inf_{i \in I} \dim_*\mu_i,\quad \dim^* \bigvee_{i\in I} \mu_i = \sup_{i \in I} \dim^*\mu_i. $$ \end{theorem} Let us see how to prove the first formula for a family of two measures $\mu$ and $\nu$. For any $\alpha < \dim_* (\mu+\nu)$, we have $ U_\alpha^\mu(x) + U_\alpha^\mu(x) = U_\alpha^{\mu+\nu}(x)<\infty \quad (\mu+\nu)\!-\!a.e. $ This implies $\dim_*(\mu+\nu) \le \min\{\dim_*\mu, \dim_* \nu\}$. The inverse inequality follows from the fact that if $ \beta < \min\{\dim_*\mu, \dim_* \nu\}$, then $ \mu \ll H^\beta \ \mbox{\rm and}\ \nu\ll H^\beta $ which implies $\mu+\nu \ll H^\beta$. Let us consider now the infimum $\bigwedge_{i \in I}$ of a family of measure $\{\mu_i\}_{i\in I}$. Recall that by definition we have $$ \bigwedge_{i\in I} \mu_i \sim \bigvee_{\mu: \forall i\in I, \mu\ll \mu_i} \mu. $$ For a family of two measures $\mu$ and $\nu$, we have $ \mu\bigwedge\nu \sim \frac{d\mu}{d \nu} \bigvee \frac{d\nu}{d\mu}. $ \begin{theorem} If $\{\mu_i\}_{i\in I}$ is family of measures in $M^+(X)$ such that $\bigwedge_{i \in I} \mu_i\not=0$, we have $$ \sup_{i\in I}\dim_* \mu_i \le \dim_* \bigwedge_{i \in I} \mu_i \le \dim^* \bigwedge_{i \in I} \mu_i \le \inf_{i\in I}\dim^* \mu_i. $$ \end{theorem} Consequently, $\bigvee_{i \in I} \mu_i$ (resp. $\bigwedge_{i \in I} \mu_i$) is unidimensional if and only if all measures $\mu_i$ are unidimensional and have the same dimension. {\em Product.} Let $(X,\delta_X)$ and $(Y, \delta_Y)$ be two Polish spaces. Then $d:=\delta_X \vee \delta_Y$ is a compatible metric on the product space. Let $\mu \in M^+(X)$ and $\nu\in M^+(Y)$. Let us consider the product measure $\mu \otimes\nu$. \begin{theorem} For $\mu \in M^+(X)$ and $\nu\in M^+(Y)$, we have $$ \dim_* \mu\otimes \nu \ge \dim_*\mu + \dim_*\nu, \qquad \dim^* \mu\otimes\nu \ge \dim^*\mu + \dim^*\nu. $$ \end{theorem} We say a measure $\mu\in M^+(X)$ is regular if $\underline{D}(\mu, x)= \overline{D}(\mu, x)$ $\mu$-a.e. \begin{theorem} If $\mu \in M^+(X)$ or $\nu\in M^+(Y)$ is regular, we have $$ \dim_* \mu\otimes \nu = \dim_*\mu + \dim_*\nu, \qquad \dim^* \mu\otimes\nu = \dim^*\mu + \dim^*\nu. $$ \end{theorem} {\em Convolution.} Assume that the Polish space $X$ is a locally compact abelian group $G$. Assume further that $G$ satisfies the following hypothesis $$ I_\alpha^{\mu*\nu} \le C(\alpha, \nu) I_\alpha^\mu $$ for all measures $\mu, \nu \in M^+(X)$, where $C(\alpha, \nu)$ is a constant independent of $\mu$. For example, if $G=\mathbb{R}^d$, we have $$ I_\alpha^\mu = \int |\xi|^{-(d-\alpha)} |\widehat{\mu}(\xi)|^2 d\xi $$ which implies that the hypothesis is satisfied by $\mathbb{R}^d$. The hypothesis is also satisfied by the group $\prod_{n=1}^\infty \mathbb{Z}/m_n\mathbb{Z}$ \cite{Fan1989c}. \begin{theorem} For any measures $\mu, \nu \in M^+(G)$, we have $$ \dim_* \mu*\nu \ge \max\{\dim_*\mu, \dim_*\nu\}, \qquad \dim^* \mu*\nu \ge \max\{\dim^*\mu, \dim^*\nu\}. $$ \end{theorem} For a given measure $\mu\in M^+(G)$, we consider the following two subgroups $$ H_-:=\{t \in G: \mu \ll \mu *\delta_t\}, \qquad H_+:=\{t \in G: \mu*\delta_t \ll \mu\}. 
$$ We could call $H = H_-\cap H_+$ the quasi-invariance group of $\mu$. \begin{theorem} Under the above assumption, if $\nu(H_-)>0$, we have the equality $ \dim_* \mu*\nu = \dim_*\mu $ and consequently $\dim_*\nu\le \dim_*\mu$ and $\dim_H H_-\le \dim_*\mu$; if $\nu(H_+)>0$, we have the equality $ \dim^* \mu*\nu = \dim^*\mu $ and consequently $\dim^*\nu\le \dim^*\mu$ and $\dim_H H_+\le \dim^*\mu$. \end{theorem} {\em Projection.} Let $\mu \in M^+(\mathbb{R}^2)$ with $\mathbb{R}^2=\mathbb{C}$. Let $L_\theta$ ($0\le \theta<2\pi$) be the line passing the origin and having angle $\theta$ with the line of abscissa. The orthogonal projection $P_\theta$ on $L_\theta$ is defined by $P_\theta(x)= \langle x, e^{i\theta} \rangle$. Let $\mu_\theta: = \mu \circ P_\theta^{-1}$ be the projection of $\mu$ on $L_\theta$. \begin{theorem} Let $\mu \in M^+(\mathbb{R}^2)$. For almost all $\theta$, we have $\dim_*\mu_\theta = \dim_* \mu \wedge 1$ and $\dim^*\mu_\theta = \dim^* \mu \wedge 1$. \end{theorem} \subsection{Ergodicity and dimension} L. S. Young \cite{Young1982} considered diffeomorphisms of surfaces leaving invariant an ergodic Borel probability measure $\mu$. She proved that $\mu$ is exact and found a formula relating $\dim \mu$ to the entropy and Lyapunov exponents of $\mu$. One of the main problems in the interface of dimension theory and dynamical systems is the Eckmann-Ruelle conjecture on the dimension of hyperbolic ergodic measures: the local dimension of every hyperbolic measure invariant under a $C^{1+\alpha}$-diffeomorphism exists almost everywhere. This conjecture was proved by L. Barreira, Y. Pesin and J. Schmeling~\cite{BPS} based on the fundamental fact that such a measure possesses asymptotically "almost" local product structure. But, in general, the ergodicity of the measure doesn't imply that the measure is exact~\cite{Cutler}. In \cite{Fan1994b}, $D$-ergodicity and unidimensionality were studied. \section{Oriented walks and Riesz products}\label{Sect_OW} \subsection{Oriented walks} Let $(\epsilon_n)_{n\ge 1} \subset [0, 2\pi)^\mathbb{N}$ be a sequence of angles. For $n\ge 1$, define $$ S_n(\epsilon) = \sum_{k=1}^n e^{i(\epsilon_1 + \epsilon_2 + \cdots + \epsilon_k)}. $$ We call $(S_n(\epsilon))_{n\ge 1}$ an oriented walk on the plan $\mathbb{C}$. In his book \cite{Feller} (vol. 1, pages 240-241), Feller mentioned a model describing the length of long polymer molecules in chemistry. It is a random chain consisting of $n$ links, each of unit length, and the angle between two consecutive links is $\pm \alpha$ where $\alpha$ is a positive constant. Then the distance $L_n$ from the beginning to the end of the chain can be expressed by $$ L_n = |S_n(\epsilon)| $$ where $(\epsilon_n)$ is an i.i.d. sequence of random variables taking values in $\{-\alpha, \alpha\}$. If $\alpha=0$, $L_n=n$ is deterministic. If $0<\alpha<2\pi$, the random variable $L_n$ is not expressed as sums of independent variables. However Feller succeeded in computing the second order moment of $L_n$. It is actually proved in \cite{Feller} that $\|L_n\|_2$ is of order $\sqrt{n}$. More precisely, for $0<\alpha<2\pi$ we have $$ \mathbb{E} L_n(\alpha)^2 = n \frac{1 + \cos \alpha}{1-\cos \alpha } - 2 \cos \alpha \frac{1 - \cos^n \alpha}{(1-\cos \alpha)^2 }. $$ Observe that $$ \mathbb{E}L_n^2 = \frac{1-(-1)^n}{2} \ \mbox{\rm if}\ \alpha=\pi; \qquad \mathbb{E}L_n^2\sim n \frac{1+\cos \alpha}{1- \cos \alpha} \ \mbox{\rm if}\ 0<\alpha<2\pi, \alpha\not=\pi. 
$$ What is the behavior of $S_n(\epsilon)$ as $n\to \infty$ for individuals $\epsilon$ ? We could study the behavior from the multifractal point of view. Let us consider a more general setting. Fix $d\in \mathbb{N}^*$. Let $\tau \in GL(\mathbb{R}^d)$, $v \in \mathbb{R}^d$ and $A$ a finite subset of $\mathbb{Z}$. For any $x=(x_n) \in \mathbb{D}:=A^\mathbb{N}$, we define the oriented walk $$ S_0(x)=v, \qquad S_n(x)=\sum_{k=1}^n \tau^{x_1+x_2+\cdots +x_k}v. $$ For $\alpha\in \mathbb{R}^d$, we define the $\alpha$-{\em level set} $$E_{\tau}(\alpha):=\left\{x\in \mathbb{D} : \lim_{n\to\infty }\frac1n S_n(x)=\alpha\right\}.$$ Let $L_\tau:=\{\alpha\in\mathbb{R}^d : E_{\tau}(\alpha)\neq \emptyset \}$. The following two cases were first studied in \cite{Fan2000}. Case 1: $d=2$, $\tau = -1$ and $A=\{0, 1\}$; Case 2: $d=1$, $\tau = e^{i \pi/2}$ and $A=\{-1, 1\}$. \begin{theorem}\cite{Fan2000}. In the first case, we have $L_\tau=[-1,1]$ and for $\alpha\in L_\tau$ we have $$\dim_H E_{\tau}(\alpha)= \dim_P E_{\tau}(\alpha)=H\left(\frac{1+\alpha}{2}\right).$$ In the seconde case, we have $L_\tau=\{z=a+ib : |a|\leq 1/2, |b|\leq 1/2 \}$ and for $\alpha=a+bi\in L_\tau$ we have $$ \dim_H E_{\tau}(\alpha)= \dim_P E_{\tau}(\alpha)=\frac{1}{2\log 2}\left[H\left(\frac{1}{2}+a\right)+H\left(\frac{1}{2}+b\right)\right].$$ \end{theorem} This theorem was proved by using Riesz products which will be described in the following subsection. A new construction of measures allows us to deal with a class of oriented walks. We assume that $\tau \in GL(\mathbb{R}^d)$ is idempotent. That is to say $\tau^p = Id$ for some integer $p>1$ (the case $p=1$ is trivial). The least $p$ is called the order of $\tau$. The above two cases are special cases. In fact, $\theta= -1$ is idempotent with order $p=2$ and $\theta=e^{i\pi/2}$ is idempotent with order $p=4$. The following rotations in $\mathbb{R}^3$ $$ \tau_1 = \left( \begin{array}{ccc} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right), \qquad \tau_2 = \left( \begin{array}{ccc} 0 & 0 & -1 \\ 1& 0 & 0 \\ 0 & 1 & 0 \end{array} \right) $$ are idempotent with order respectively equal to $3$ and $6$. Also remark that $\tau \in SO_3(\mathbb{R})$ and $\tau_2 \in O_3(\mathbb{R})\setminus SO_3(\mathbb{R})$. Since $\tau^p=Id$, the sum $x_1 + x_2+\cdots +x_k$ in the definition of $S_n(x)$ can be made modulo $p$. For $s\in\mathbb{R}^d$, we define a $p\times p$-matrix $M_s = (M_s(i, j))$: for $(i, j) \in \mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}$ define $$ M_s(i,j) = 1_A(j-i) \exp[\langle s, \tau^j\rangle v]. $$ where $\langle \cdot, \cdot \rangle$ denotes the scalar product in $\mathbb{R}^d$. It is clear that $M_s$ is irreducible iff $M_0$ is so. We consider $A$ as a subset (modulo $p$) of $\mathbb{Z}/p\mathbb{Z}$. It is easy to see that $M_0$ is irreducible iff $A$ generates the group $\mathbb{Z}/p\mathbb{Z}$. Assume that $A$ generates the group $\mathbb{Z}/p\mathbb{Z}$. Then $M_s$ is irreducible and by the Perron-Frobenius theorem, the spectral radius $\lambda(s)$ of $M_s$ is a simple eigenvalue and there is a unique corresponding probability eigenvector $t(s) = (t_s(0), t_s(1) \cdots, t_s(p-1)$. Let $$ P(s) = \log \lambda(s). $$ It is real analytic and strictly convex function on $\mathbb{R}^d$. We call it the pressure function associated to the oriented walk. \begin{theorem} \label{Thm_OW}\cite{FW} Assume $\tau$ is idempotent with order $p$ and $A$ generates the group $\mathbb{Z}/p \mathbb{Z}$. 
Then $L_\tau = \overline{\{\triangledown P(s) : s\in \mathbb{R}^d \}}$ and for $\alpha\in \bigtriangleup$, we have $$\dim_H E_{\tau}(\alpha)= \dim_P E_{\tau}(\alpha)=\frac{1}{\log p}\inf_{s\in\mathbb{R}^d}\{P(s)-\langle s,\alpha\rangle\}= \frac{P(s_\alpha)-\langle s_\alpha,\alpha\rangle}{\log p},$$ where $s_\alpha$ is the unique $s\in\mathbb{R}^d$ such that $\triangledown P(s)=\alpha$. \end{theorem} \subsection{Riesz products} Theorem~\ref{Thm_FLM} was proved by using Riesz products. While Hausdorff introduced the Hausdorff dimension (1919), F. Riesz constructed a class of continuous but singular measures on the circle (1918), called Riesz products. Riesz products are used as tool in harmonic analysis and some of them are Gibbs measures in the sense of dynamical systems. Let us recall the definition of Riesz product on a compact abelian group $G$, due to Hewitt-Zuckerman \cite{HZ} (1966). Let $\widehat{G}$ be the dual group of $G$. A sequence of characters $\Lambda = (\gamma_n )_{n \geq 1} \subset \widehat{G}$ is said to be {\em dissociated} if for any $n \ge 1$, the following characters are all distinct: $$ \gamma_1^{\epsilon_1} \gamma_2^{\epsilon_2}\cdots \gamma_n^{\epsilon_n} $$ where $\epsilon_j \in \{-1, 0, 1\}$ if $\gamma_j$ is not of order $2$, or $\epsilon_j \in \{0,1\}$ otherwise. Given such a dissociated sequence $\Lambda = (\gamma_n )_{n \geq 1}$ and a sequence of complex numbers $a = (a_n)_{n \geq 1}$ such that $|a_n | \leq 1$, we can define a probability measure on $G$, called {\em Riesz product}, \begin{equation}\label{Riesz} \mu_a =\prod_{n=1}^\infty \bigl(1 + \mbox{\rm Re}\ a_n \gamma_n (t)\bigr) \end{equation} as the weak* limit of $\prod_{n=1}^N \bigl(1 + \mbox{\rm Re}\ a_n \gamma_n (t)\bigr) \text{d}t$ where $\text{d}t$ denotes the Haar measure on $G$. A very useful fact is that the Fourier coefficients of the Riesz product $\mu_a$ can be explicitly expressed in term of the coefficients $a_n$'s: $$ \widehat{\mu}_a (\gamma) = \prod_{k=1}^n a_k^{(\epsilon_k)} \ \mathrm{if} \ \gamma=\gamma_1^{\epsilon_1} \gamma_2^{\epsilon_2}\cdots \gamma_n^{\epsilon_n}, \quad \widehat{\mu}_a (\gamma) =0 \ \mathrm{otherwise}. $$ where $a_n^{(\epsilon)} = 1, a_n/2$ or $\bar{a}_n/2$ according to $\epsilon = 0, 1$ or $-1$. Consequently the sequence $\{ \gamma_n - \bar{a}_n/2 \}_{n\ge 1}$ is an orthogonal system in $L^2(\mu_a)$. Here are some properties of $\mu_a$. \begin{theorem}[\cite{Zygmund1968}] \label{ACS} The measure $\mu_a$ is either absolutely continuous or singular (with respect to the Haar measure) according to $\sum_{n=1}^\infty |a_n|^2<\infty$ or $=\infty$. \end{theorem} \begin{theorem} [\cite{Fan1993},\cite{Peyriere1990}] \label{FP} Let $\{\alpha_n\}$ be a sequence of complex numbers. The orthogonal series $\sum_{n=1}^\infty \alpha_n (\gamma_n(t) - \bar{a}_n/2)$ converges $\mu_a$-everywhere iff $\sum_{n=1}^\infty |\alpha_n|^2 <\infty.$ \end{theorem} The proof of Theorem~\ref{FP} in \cite{Fan1993} involved the following Riesz product with $\omega =(\omega_n) \in G^\mathbb{N}$ as phase translation: \begin{equation}\label{RandomRiesz} \mu_{a,\omega} =\prod_{n=1}^\infty (1 + \mbox{\rm Re} \ a_n \gamma_n (t+\omega_n)). \end{equation} Actually $\mu_{a, \omega}$ was considered as a random measure and $(\omega_n)$ was considered as an i.i.d. random sequence with Haar measure as common probability law. When two Riesz products $\mu_a$ and $\mu_b$ are singular or mutually absolutely continuous ? It is a unsolved problem. 
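As a small numerical check of these facts in the setting of Theorem~\ref{Thm_FLM} (case $d=2$), the following Python sketch builds the truncated Riesz product $\prod_{k=1}^{K}(1+b\,x_kx_{2k})$ on the finite cube $\{-1,1\}^{2K}$ and verifies that it is a probability measure whose mean against each Walsh character $x_kx_{2k}$ equals $b$ (for these order-two, real-valued characters the coefficient is $b$ itself); consequently the averages $\frac1K\sum_k x_kx_{2k}$ are centred at $b$ under $\mu_b$. The truncation level $K$ and the value of $b$ are assumptions made only for this illustration.
\begin{verbatim}
from itertools import product as cube
from math import prod

def riesz_weights(b, K):
    """Truncated Walsh Riesz product prod_{k=1..K} (1 + b * x_k * x_{2k})
    on {-1,1}^{2K}; returns a dict mapping each point to its mass."""
    n = 2 * K
    weights = {}
    for x in cube((-1, 1), repeat=n):
        w = prod(1 + b * x[k - 1] * x[2 * k - 1] for k in range(1, K + 1))
        weights[x] = w / 2 ** n        # the Haar mass of each point of the cube is 2^(-n)
    return weights

if __name__ == "__main__":
    b, K = 0.6, 7
    mu = riesz_weights(b, K)
    print("total mass :", sum(mu.values()))       # = 1, by dissociation of the characters
    for k in (1, 3, 7):
        coeff = sum(m * x[k - 1] * x[2 * k - 1] for x, m in mu.items())
        print("coefficient at x_%d x_%d : %+.6f (expected %.1f)" % (k, 2 * k, coeff, b))
    mean_avg = sum(m * sum(x[k - 1] * x[2 * k - 1] for k in range(1, K + 1)) / K
                   for x, m in mu.items())
    print("mean of (1/K) sum_k x_k x_2k under mu_b : %+.6f" % mean_avg)
\end{verbatim}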
Bernoulli infinite product measures can be viewed as Riesz products on the group $(\mathbb{Z}/m\mathbb{Z})^{\mathbb{N}}$. For these Bernoulli infinite product measures, the Kakutani dichotomy theorem \cite{Kakutani} applies and there is a complete solution. But there is no complete solution for other groups. The classical Riesz products are of the form \begin{equation}\label{RealRiesz} \mu_a = \prod_{n=1}^\infty (1 + \mbox{\rm Re} \ a_n e^{i \lambda_n x}) \end{equation} where $\{\lambda_n\}\subset \mathbb{N}$ is a lacunary sequence in the sense that $\lambda_{n+1} \ge 3 \lambda_n$. J. Peyri\`ere was the first to study the lower and upper dimensions of $\mu_a$, without formally introducing the notion of dimension of measures. Let us mention an estimation for the energy integrals of $\mu_a$ \cite{Fan1989c}: $$ \int\int \frac{d \mu_a(x)d\mu_a(y)}{|x-y|^\alpha} \approx \lambda_1^{\alpha -1}|a_1|^2 + \sum_{n=2}^\infty \lambda_n^{\alpha -1}|a_n|^2 \prod_{k=1}^{n-1} \left(1 + \frac{|a_k|^2}{2}\right). $$ \subsection{Evolution measures} The key for the proof of Theorem~\ref{Thm_OW} is the construction of the following measures on $A^\mathbb{N}$, which describe the evolution of the oriented walk. Such a measure is similar to a Markov measure, but it is not one. It plays the role of a Gibbs measure, but it is not a Gibbs measure either. Recall that $M_s t(s) = \lambda(s) t(s)$. In other words, for every $i \in \mathbb{Z}/p\mathbb{Z}$ we have $$ \lambda(s) t_i(s) =\sum_{j} 1_A(j -i) t_j(s) \exp[\langle s, \tau^j v\rangle]. $$ Denote, for $a\in A$ and for $(x_1, \cdots, x_{k+1}) \in A^{k+1}$, $$ \pi(a)=\frac{t_a(s)}{\sum_{b \in A}t_{b}(s)}; $$ $$ Q_k(x_1,x_2,\cdots, x_{k+1})= \frac{t_{x_1 + x_2+\cdots +x_{k+1}}(s) \exp[\langle s,\tau^{x_1 + x_2+\cdots +x_{k+1}}v \rangle]} {\lambda(s) t_{x_1 + x_2+\cdots +x_{k}}(s)}.$$ Then we define a probability measure $\mu_s$ on $A^\mathbb{N}$ as follows. For any word $x_1x_2\cdots x_n\in A^n$, let $$\mu_s([x_1x_2\cdots x_n])=\pi(x_1)Q_1(x_1,x_2)Q_2(x_1,x_2,x_3)\cdots Q_{n-1}(x_1,x_2,\cdots, x_n).$$ For $x = (x_n) \in A^\mathbb{N}$, let $$ w_n(x) = x_1 + x_2 + \cdots + x_n \quad (\!\!\!\!\!\mod p). $$ The mass $\mu_s([x_1x_2\cdots x_n])$ and the partial sum $S_n(x)$ are directly related as follows: $$\log\mu_s([x_1x_2\cdots x_n])=\left\langle s,S_n(x)\right\rangle-(n-1)\log \lambda(s)-\log \sum_{a\in A} t_{a}(s)+\log t_{w_n(x)}(s).$$ As the $t_i(s)$'s are bounded, we deduce the following relation between the measure $\mu_s$ and the oriented walk $S_n$. \begin{proposition}\label{prop local dimension1} For any $x\in \mathbb{D}$, we have $$\log\mu_s([x_1x_2\cdots x_n]) - \left\langle s,S_n(x)\right\rangle= - n \log \lambda(s)+ O(1).$$ \end{proposition} \section{Multiple Birkhoff averages}\label{Sect_MBA} Let $\mathcal{A}=\{0, 1,2, \cdots, m-1\}$ be a set of $m$ symbols ($m\ge 2$). Denote $\Sigma_m=\mathcal{A}^{\mathbb{N}^*}$. Let $q\ge 2$ be an integer. Fan, Schmeling and Wu made a step forward in \cite{FSW} by obtaining a Hausdorff spectrum of multiple ergodic averages for a class of potentials. They consider an arbitrary function $\varphi: \mathcal{A}^d \to \mathbb{R}$ and study the sets $$E(\alpha)=\left\{x\in \Sigma_m : \lim_{n\to\infty}A_n\varphi(x)=\alpha\right\}$$ for $\alpha\in \mathbb{R}$, where \begin{equation}\label{GMEA} A_n\varphi(x)=\frac{1}{n}\sum_{k=1}^n \varphi(x_k, x_{qk}, \cdots, x_{q^{d-1}k}).
\end{equation} Let $$\alpha_{\min}=\min_{a_1,\cdots,a_{d}\in \mathcal{A} }\varphi(a_1,\cdots,a_{d}), \quad \alpha_{\max}=\max_{a_1,\cdots,a_{d}\in \mathcal{A} }\varphi(a_1,\cdots,a_{d}).$$ It is assumed that $\alpha_{\min}<\alpha_{\max}$ (otherwise $\varphi$ is constant and the problem is trivial). A key ingredient of the proof is a class of measures constructed by Kenyon, Peres and Solomyak \cite{KPS} that we call telescopic product measures. In \cite{FSW}, a nonlinear thermodynamic formalism was developed. \subsection{ Thermodynamic formalism} \label{M-sets}\ \\ The Hausdorff dimension of $E(\alpha)$ is determined through the following thermodynamic formalism. Let $\mathcal{F}(\mathcal{A}^{d-1}, \mathbb{R}^+)$ be the cone of functions defined on $\mathcal{A}^{d-1}$ taking non-negative real values. For any $s \in \mathbb{R}$, consider the transfer operator $\mathcal{L}_s$ defined on $\mathcal{F}(\mathcal{A}^{d-1}, \mathbb{R}^+)$ by \begin{equation}\label{transer-operator} \mathcal{L}_s \psi (a) = \sum_{j \in \mathcal{A}} e^{s \varphi(a, j)} \psi (Ta, j) \end{equation} where $T: \ \mathcal{A}^{d-1}\to \mathcal{A}^{d-2}$ is defined by $T(a_1,\cdots,a_{d-1})=(a_2,\cdots,a_{d-1})$. Then define the non-linear operator $\mathcal{N}_s$ on $\mathcal{F}(\mathcal{A}^{d-1}, \mathbb{R}^+)$ by $ \mathcal{N}_s \psi (a)= (\mathcal{L}_s \psi (a))^{1/q}. $ It is proved in \cite{FSW} that the equation \begin{equation}\label{transer_equation} \mathcal{N}_s \psi_s = \psi_s \end{equation} admits a unique strictly positive solution $\psi_s=\psi_s^{(d -1)} : \mathcal{A}^{d-1}\to \mathbb{R}_+^*$. Extend the function $\psi_s$ onto $\mathcal{A}^{k}$ for all $1\le k \le d -2$ by induction: \begin{equation}\label{transer_equation2} \psi_s^{(k)} (a)=\left(\sum_{j \in \mathcal{A}} \psi_s^{(k+1)} (a, j)\right)^{\frac{1}{q}}, \ \ (a\in \mathcal{A}^{k}). \end{equation} For simplicity, we write $\psi_s(a)=\psi_s^{(k)}(a)$ for $a\in \mathcal{A}^k$ with $1\leq k\leq d-1$. Then the pressure function is defined by \begin{equation}\label{pressure function} P_{\varphi}(s) = (q-1)q^{d-2} \log \sum_{j\in \mathcal{A}}\psi_s(j). \end{equation} It is proved \cite{FSW} that $P_{\varphi}(s)$ is an analytic and convex function of $s\in \mathbb{R}$ and even strictly convex since $\alpha_{\min} <\alpha_{\max}$. The Legendre transform of $P_{\varphi}$ is defined as $$ P^*_{\varphi}(\alpha)=\inf_{s\in \mathbb{R}}(P_{\varphi}(s)-s\alpha). $$ We denote by $L_{\varphi}$ the set of levels $\alpha\in \mathbb{R}$ such that $E(\alpha)\neq \emptyset$. \begin{theorem} {\rm (\cite{FSW})}\label{thm principal} We have $L_{\varphi}=[P'_{\varphi}(-\infty),P'_{\varphi}(+\infty)].$ If $\alpha =P'_{\varphi}(s_\alpha)$ for some $s_\alpha \in \mathbb{R}\cup\{-\infty,+\infty\}$, then $E(\alpha)\neq \emptyset$ and the Hausdorff dimension of $E(\alpha)$ is equal to $$\dim_H E(\alpha)=\frac{P_{\varphi}^*(\alpha)}{q^{d-1}\log m}.$$ \end{theorem} Similar results hold for vector valued functions $\varphi$ \cite{FSW}. Y. Peres and B. Solomyak \cite{PS2} have obtained a result for the special case $\varphi(x,y) = x_1y_1$ on $\Sigma_2$. Y. Kifer \cite{Kifer} has obtained a result on the multiple recurrence sets for some frequency of product form. Let us consider two examples. Let $q=2$ and $\ell=2$ and let $\varphi_1(x,y)=x_1y_1$ (See Figure 1) and $\varphi_2(x,y) = (2x_1-1)(2y_1-1)$ (see Figure 2) be two potentials on $\Sigma_2$. The invariant spectra (see \S \ref{InvSpectrum}) are also shown in the figures. 
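For the first example $\varphi_1(x,y)=x_1y_1$ (so $q=2$, $d=2$, $m=2$), the fixed point equation (\ref{transer_equation}) and the spectrum of Theorem~\ref{thm principal} can be computed numerically: one solves $\psi_s=(\mathcal{L}_s\psi_s)^{1/q}$ by iteration, evaluates the pressure (\ref{pressure function}) on a grid of $s$, and takes a discrete Legendre transform. The following Python sketch does this; the grid, the number of iterations and the target values of $\alpha$ are ad hoc choices made only for the illustration.
\begin{verbatim}
import numpy as np

M, Q = 2, 2                          # alphabet {0,1}, q = 2, d = 2

def phi(a, j):
    return a * j                     # varphi_1(x, y) = x_1 * y_1

def psi_fixed_point(s, iters=400):
    # solve  psi(a) = ( sum_j exp(s * phi(a, j)) * psi(j) )^(1/q)  on A = {0, 1}
    psi = np.ones(M)
    for _ in range(iters):
        psi = np.array([sum(np.exp(s * phi(a, j)) * psi[j] for j in range(M))
                        for a in range(M)]) ** (1.0 / Q)
    return psi

def pressure(s):
    # P(s) = (q - 1) * q^(d-2) * log sum_j psi_s(j); here q^(d-2) = 1 since d = 2
    return (Q - 1) * np.log(psi_fixed_point(s).sum())

if __name__ == "__main__":
    s = np.linspace(-20.0, 20.0, 801)
    P = np.array([pressure(t) for t in s])
    alpha = np.gradient(P, s)                    # alpha = P'(s)
    dims = (P - s * alpha) / (Q * np.log(M))     # dim E(alpha) = P*(alpha) / (q^(d-1) log m)
    i = int(np.argmin(np.abs(s)))                # s = 0 : top of the spectrum
    print("P(0) = %.4f (log 4 = %.4f)" % (P[i], np.log(4.0)))
    print("alpha = %.4f (close to 1/4), dim = %.4f (close to 1)" % (alpha[i], dims[i]))
    for target in (0.05, 0.5, 0.9):
        j = int(np.argmin(np.abs(alpha - target)))
        print("alpha ~ %.3f  ->  dim ~ %.4f" % (alpha[j], dims[j]))
\end{verbatim}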
\begin{figure} \caption{Spectra $\alpha\mapsto \dim_HE(\alpha)$ and $\alpha\mapsto F_{\rm inv}(\alpha)$ for $\varphi_1$.} \label{figure 1} \end{figure} \begin{figure} \caption{Spectra $\alpha\mapsto \dim_HE(\alpha)$ and $\alpha\mapsto F_{\rm inv}(\alpha)$ for $\varphi_2$.} \label{figure 2} \end{figure} \subsection{Telescopic product measures} One of the key points in the proof of the Hausdorff spectrum (Theorem \ref{thm principal}) is the observation that the coordinates $x_1,\cdots , x_n,\cdots $ of $x$ appearing in the definition of ${A}_n\varphi(x)$ share the following independence. Consider the partition of $\mathbb{N}^*$: $$ \mathbb{N}^*=\bigsqcup_{i\geq 1,q\nmid i}\Lambda_i\ \ {\rm with}\ \Lambda_i=\{iq^j\}_{j\ge 0}. $$ Observe that if $k=iq^j$ with $q\nmid i$, then $\varphi(x_k,x_{kq},\cdots ,x_{kq^{d-1}})$ depends only on $x_{|_{\Lambda_i}}$, the restriction of $x$ on $\Lambda_i$. So the summands in the definition of ${A}_n\varphi(x)$ can be put into different groups, each of which depends on one restriction $x_{|_{\Lambda_i}}$. For this reason, we decompose $\Sigma_m$ as follows: $$ \Sigma_m =\prod_{i\geq1,q\nmid i}\mathcal{A}^{\Lambda_i}. $$ Telescopic product measures are now constructed as follows. Let $\mu$ be a probability measure on $\Sigma_m$. Notice that $\mathcal{A}^{\Lambda_i}$ is nothing but a copy of $\Sigma_m$. We consider $\mu$ as a measure on $\mathcal{A}^{\Lambda_i}$ for every $i$ with $q\nmid i$. Then we define the infinite product measure $\mathbb{P}_\mu$ on $\prod_{i\geq1,q\nmid i}\mathcal{A}^{\Lambda_i}$ of the copies of $\mu$. More precisely, for any word $u$ of length $n$ we define $$ \mathbb{P}_{\mu}([u])=\prod_{i\leq n,q\nmid i}\mu([u_{|_{\Lambda_i}}]), $$ where $[u]$ denotes the cylinder of all sequences starting with $u$. The probability measure $\mathbb{P}_{\mu}$ is called {\em telescopic product measure}. Kenyon, Peres and Solomyak \cite{KPS} have first constructed these measures. The Hausdorff dimension of every telescopic product measure is computable. \begin{theorem}{\rm (\cite{KPS, FSW})}\label{dim-meas} For any given measure $\mu$, the telescopic product measure $\mathbb{P}_\mu$ is exact and its dimension is equal to $$\dim_{H}\mathbb{P}_{\mu}=\dim_P \mathbb{P}_{\mu}=\frac{(q-1)^2}{\log m}\sum_{k=1}^{\infty}\frac{H_k(\mu)}{q^{k+1}}$$ where $H_k(\mu)=-\sum_{a_1,\cdots,a_k\in S}\mu([a_1\cdots a_k])\log \mu([a_1\cdots a_k]). $ \end{theorem} \subsection{Dimension formula of Ruelle-type} The function $\psi_s$ defined by (\ref{transer_equation}) and (\ref{transer_equation2}) determine a special telescopic product measure which plays the role of Gibbs measure in the proof of the Hausdorff spectrum. First we define a $(d-1)$-step Markov measure on $\Sigma_m$, which will be denoted by $\mu_s$, with the initial law \begin{equation}\label{def measure 1} \pi_s([a_1,\cdots,a_{d-1}])=\prod_{j=1}^{d-1}\frac{\psi_s(a_1,\cdots,a_j)}{\psi_s^q(a_1,\cdots,a_{j-1})} \end{equation} and the transition probability \begin{equation}\label{def measure 2} Q_s\left([a_1,\cdots,a_{d-1}],[a_2,\cdots,a_{d}]\right)=e^{s\varphi(a_{1},\cdots,a_{d})} \frac{\psi_s(a_2,\cdots,a_{d})}{\psi_s^q(a_{1},\cdots,a_{d-1})}. \end{equation} The corresponding telescopic product measure $\mathbb{P}_{\mu_s}$ is proved to be a dimension maximizing measure of $E(\alpha)$ if $s$ is chosen to be the solution of $ P'_\varphi(s) =\alpha. $ The dimension of $\mathbb{P}_{\mu_s}$ is simply expressed by the pressure function. In other words, we have the following formula of Ruelle-type. 
\begin{theorem}{\rm (\cite{FSW})} \label{prop formula dim P_s} For any $s\in \mathbb{R}$, we have $$\dim_H\mathbb{P}_{\mu_s}=\frac{1}{q^{d-1}}[P_\varphi(s)-sP'_\varphi(s)].$$ \end{theorem} \subsection{ Multiplicatively invariant sets} \label{M-sets}\ \\ Kenyon, Peres and Solomyak \cite{KPS} were able to compute both the Hausdorff dimension and the box dimension of $X_2$, already considered in \S \ref{X_2}, and of a class of generalizations of $X_2$. Peres, Schmeling, Seuret and Solomyak \cite{PSSS} generalized the results to a more general class of sets. Recall that $\Sigma_m=\{0,1,..., m-1\}^{\mathbb{N}^*}$, $q$ is an integer greater than 2 and $\Lambda_i : =\{i\,q^n: n\geq 0\}$. For any subset $\Omega \subset \Sigma_m$, define \begin{equation} \label{defXOmega} X_\Omega := \{x=(x_k)_{k\geq 1}\in \Sigma_m: \forall \, i\neq 0 \!\! \!\mod q, \ x_{|\Lambda_i} \in \Omega\}. \end{equation} We get $X_\Omega =X_2$ when $q=2$ and $\Omega$ is the Fibonacci set $\{y\in \Sigma_m: \forall \, k\geq 1, \ y_ky_{k+1}=0\}$. The set $X_\Omega$ is not shift-invariant but is multiplicatively invariant in the sense that $M_r X_\Omega \subset X_\Omega$ for every integer $r\in \mathbb{N}^*$ where $M_r$ maps $(x_n)$ to $(x_{rn})$. The generating set $\Omega$ has a tree of prefixes, which is a directed graph $\Gamma$. The set $V(\Gamma)$ of vertices consists of all possible prefixes of finite length in $\Omega$, i.e. $$V(\Gamma) := \bigcup_{k\geq 0} \mbox{Pref}_k(\Omega) ,$$ where $\mbox{Pref}_0(\Omega) =\{\emptyset\}$, $\mbox{Pref}_k(\Omega) := \{u\in \{0,1,..., m-1\}^k: \Omega \cap [u]\neq \emptyset\}$. There is a directed edge from a vertex $u$ to another $v$ if and only if $v=ui$ for some $i\in \{0,1,..., m-1\}$. \begin{theorem}\label{thKPS}{\rm (\cite{KPS})} There exists a unique vector $\bar t=(t_v)_{v\in \Gamma} \in [1,m^{\frac{1}{q-1}}]^{V(\Gamma)}$ defined on the tree such that \begin{equation} \label{deftoptimal} \forall v\in V(\Gamma), \ \ \ (t_v)^q = \sum_{i\in\{0,1,...,m-1\}: \ vi\in \Omega} \ t_{vi}. \end{equation} The Hausdorff dimension and the box dimension of $X_\Omega$ are respectively equal to \begin{eqnarray} \dim_H (X_\Omega ) & = & (q-1) \log_m t_\emptyset\\ \dim_B (X_\Omega ) & = & (q-1)^2 \sum_{k=1}^{+\infty} \frac{ \log_m \left| \text{{\em Pref}}_k(\Omega) \right|}{q^{k+1}}. \end{eqnarray} The two dimensions coincide if and only if the tree $\Gamma$ is spherically symmetric, i.e. all prefixes of length $k$ in $\Omega$ have the same number of continuations of length $k+1$ in $\Omega$. \end{theorem} The vector $\bar{t}$ defines a measure $\mu$ on $\Omega\subset \{0, 1, \cdots, m-1\}^{\mathbb{N}^*}$. Then a telescopic product measure can be built on $X_\Omega$. It is proved in \cite{KPS} that there is a maximizing measure on $X_\Omega$ of this form. A typical example of the class of sets studied by Peres, Schmeling, Solomyak and Seuret \cite{PSSS} is $$X_{2,3}=\{x\in \Sigma_m: x_kx_{2k}x_{3k} = 0\}.$$ The construction of the sets is as follows. Let $\kappa \ge 1$ be an integer and let $p_1,\ldots,p_\kappa$ be $\kappa$ primes, which generates a semigroup $S$ of $\mathbb{N}^*$: $$S=\langle p_1, p_2, \cdots, p_\kappa\rangle = \{p_1^{\alpha_1}p_2^{\alpha_2} \cdots p_\kappa^{\alpha_\kappa}: \alpha_1, ..., \alpha_\kappa\in \mathbb{N}\}.$$ The elements of $S$ are arranged in increasing order and denoted by $\ell_k$ the $k$-th element of $ S = \{\ell_k\}_{k=1}^\infty: 1=\ell_1 < \ell_2 < \cdots. $ Define \begin{equation} \label{eq-fs} \gamma(S):= \sum_{k=1}^\infty \frac{1}{\ell_k}\,. 
\end{equation} Write $ (i,S)=1 $ when $(i, p_j)=1$ for all$ j\le \kappa$. We have the following partition of $\mathbb{N}^*$: \begin{equation} \label{eq-disj} \mathbb{N}^* = \bigsqcup_{(i,S)=1} iS. \end{equation} For each element $x = {(x_k)}_{k=1}^\infty $, $x|_{iS}$ denotes the restriction $ x|_{iS}:={(x_{i\ell_k})}_{k=1}^\infty $, which is also viewed as an element of $\Sigma_m$. Given a closed subset $\Omega\subset \Sigma_m$, we define a new subset of $\Sigma_m$: \begin{equation} \label{def-Xom} X_\Omega^{(S)}:= \Bigl\{x = {(x_k)}_{k=1}^\infty \in \Sigma_m:\ x|_{iS} \in \Omega\ \ \mbox{for all}\ i,\ (i,S)=1\Bigr\}. \end{equation} \begin{theorem} \label{thPSSS} {\rm(\cite{PSSS})} There exists a vector $ \overline{t}=(t(u))_{u\in {\rm Pref}(\Omega)}\in [1,+\infty)^{{\rm Pref}(\Omega)} $ defined on the tree of prefixes of $\Omega$ such that \begin{equation*} t(\varnothing) \in [1,m],\ \ t(u) \in [1, m^{\ell_k(\ell_{k+1}^{-1} +\ell_{k+2}^{-1}+\cdots)}],\ |u|=k,\ k\ge 1, \end{equation*} which is the solution of the system \begin{eqnarray*} t(\varnothing)^{\gamma(S)} & = & \sum_{j=0}^{m-1} t(j),\\ t(u)^{\ell_{k+1}/\ell_{k}} & = & \sum_{j:\ uj\in {\rm Pref}_{k+1}(\Omega)} t(uj),\ \ \ \forall\ u \in {\rm Pref}_k(\Omega),\ \forall k\ge 1. \end{eqnarray*} The Hausdorff dimension and the box dimension of $X_\Omega^{(S)}$ are respectively equal to \begin{eqnarray*} \dim_H(X_\Omega^{(S)}) & = & \log_m t(\varnothing)\\ \dim_B(X_\Omega^{(S)}) & = & \gamma(S)^{-1} \sum_{k=1}^\infty \Bigl(\frac{1}{\ell_k} - \frac{1}{\ell_{k+1}}\Bigr) \log_m |{\rm Pref}_{k}(\Omega)|. \end{eqnarray*} We have $\dim_H(X_\Omega^{(S)}) = \dim_B(X_\Omega^{(S)})$ if and only if the tree of prefixes of $\Omega$ is spherically symmetric. \end{theorem} Ban, Hu and Lin \cite{BHL} studied the Minkowski dimension of $X_{2,3}$ and of some other multiplicative sets as pattern generating problem. \section{Remarks and Problems} \subsection{Vector valued potential} The non-linear thermodynamic formalism can be generalized to vectorial potentials. Let $\varphi, \gamma$ be two functions defined on $\mathcal{A}^\ell$ taking real values. Instead of considering the transfer operator $\mathcal{L}_s$ as defined in (\ref{transer-operator}), we consider the following one: $$ \mathcal{L}_s \psi (a) = \sum_{j \in S} e^{s \varphi(a, j)+\gamma(a, j)} \psi (Ta, j),\ a\in S^{\ell-1},\ s\in \mathbb{R}. $$ There exists a unique solution to the equation $$(\mathcal{L}_s \psi)^{\frac{1}{q}}=\psi.$$ Then we define the pressure function $P_{\varphi,\gamma}(s)$ as indicated in $P_{\varphi,\gamma}(s)$. The function $s\mapsto P_{\varphi,\gamma}(s)$ is convex and analytic. Now, let $\underline{\varphi}=(\varphi_1,\cdots,\varphi_d)$ be a function defined on $S^\ell$ taking values in $\mathbb{R}^d$. For $\underline{s}=(s_1,\cdots,s_d)\in \mathbb{R}^d$, we consider the following transfer operator. $$ \mathcal{L}_{\underline{s}} \psi (a) = \sum_{j \in S} e^{\langle \underline{s},\underline{\varphi}\rangle} \psi (Ta, j),\ a\in S^{\ell-1}, $$ where $\langle \cdot,\cdot\rangle$ denotes the scalar product in $\mathbb{R}^d$. We denote the associated pressure function by $P(\underline{\varphi})(\underline{s})$. Then for any vectors $u,v\in \mathbb{R}^d$ the function $$\mathbb{R} \ni s\ \longmapsto \ P(\underline{\varphi})(us+v)$$ is analytic and convex. We deduce from this that the function $\underline{s}\ \mapsto \ P(\underline{\varphi})(\underline{s})$ is infinitely differentiable and convex on $\mathbb{R}^d$. 
Similarly, we define the level sets $E(\underline{\alpha})$ $(\underline{\alpha}\in \mathbb{R}^d)$ of $\underline{\varphi}$. A vector version of Theorem \ref{thm principal} is obtained by simply replacing the derivative of the pressure function by its gradient.
\subsection{Invariant spectrum and mixing spectrum}\label{InvSpectrum} The set $E_\Phi(\alpha)$ defined by (\ref{def_ergaverage}) is not invariant. The size of the invariant part of $E_\Phi(\alpha)$ can be measured by $$ d_{\rm inv}(\alpha) = \sup\{\dim^* \mu: \mu \ {\rm invariant}, \ \mu(E_\Phi(\alpha))=1\}. $$ The function $\alpha \mapsto d_{\rm inv}(\alpha)$ is called the invariant spectrum of $\Phi$. Similarly we define the mixing spectrum of $\Phi$ by $$ d_{\rm mix}(\alpha) = \sup\{\dim^* \mu: \mu \ {\rm mixing}, \ \mu(E_\Phi(\alpha))=1\}. $$ Examples in \cite{FSW} show that it is possible to have $$ d_{\rm mix}(\alpha) < d_{\rm inv}(\alpha) <d_H(\alpha). $$
\subsection{Semigroups} The semigroup $\{q^n\}_{n\ge 0}$ of $\mathbb{N}^*$ appeared in \cite{FSW,KPS}. Other semigroup structures appeared in \cite{PSSS}. Combining the ideas in \cite{PSSS,FSW}, averages like $$ \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \varphi(x_k, x_{2k}, x_{3k}) $$ can be treated \cite{Wu}. The Riesz product method used in \cite{FLM} is well adapted to the study of the special limit on $\Sigma_2$: $$ \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n (2x_k-1) (2 x_{2k} -1)\cdots (2 x_{\ell k}-1) $$ where $\ell \ge 2$ is any integer.
\subsection{Subshifts of finite type} What we have presented is strictly restricted to the full shift dynamics. It is a challenging problem to study the dynamics of subshifts of finite type and dynamics with the Markov property. New ideas are needed to deal with these dynamics. It is also a challenging problem to deal with potentials depending on more than one coordinate. The doubling dynamics $Tx = 2 x$ $\!\!\!\!\mod 1$ on the interval $[0,1)$ is essentially a shift dynamics. Cookie-cutter maps are the first interval maps that come to mind after the doubling map. If the cookie-cutter map is not linear, the problem becomes difficult: a cookie-cutter can be coded, but the non-linearity means that the derivative is a potential depending on more than one coordinate of the coding. Based on the computation made in \cite{PS2}, Liao and Rams \cite{LR} considered a special piecewise linear map with two branches defined on two intervals $I_0$ and $I_1$ and studied the following limit $$ \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n 1_{I_1}(T^kx)1_{I_1}(T^{2k}x). $$ The techniques presented in \cite{FSW} can be used to treat the problem for general piecewise {\em linear} cookie-cutter dynamics \cite{FLW, Wu}.
\subsection{Discontinuity of spectrum for V-statistics} The limit of V-statistics $$ \lim_{n\to \infty} n^{-r}\sum_{1\le i_1, \cdots, i_r\le n} \Phi(T^{i_1}x, \cdots, T^{i_r} x) $$ was studied in \cite{FSW_V}, where it is proved that the multifractal spectrum of topological entropy of the above limit is expressed by a variational principle when the system satisfies the specification property. Unlike the classical case ($r=1$), where the spectrum is an analytic function when $\Phi$ is H\"{o}lder continuous, the spectrum of the limit of higher order V-statistics ($r\ge 2$) may be discontinuous even for very regular kernels $\Phi$. It is an interesting problem to determine the number of discontinuities. M. Rauch \cite{Rauch} has recently established a variational principle relative to $V$-statistics.
\subsection{Mutual absolute continuity of two Riesz products} Let us state two conjectures. See \cite{BFP} for a discussion of these conjectures.
{\em Conjecture 1.} {\em Let $\mu_a$ and $\mu_b$ be two Riesz products and let $\omega :=(\omega_n) \in G^\mathbb{N}$. Then $\mu_a \ll \mu_b \Rightarrow \mu_{a,\omega} \ll \mu_{b,\omega}$, and $\mu_a \perp \mu_b \Rightarrow \mu_{a,\omega} \perp \mu_{b,\omega}$}. For a function $f$ defined on $G$, we use $\mathbb{E} f$ to denote the integral of $f$ with respect to the Haar measure. In fact, the preceding conjecture implies the following one.
{\em Conjecture 2.} {\em Let $\mu_a$ and $\mu_b$ be two Riesz products. Then $$ \prod_{n=1}^\infty \mathbb{E} \sqrt{(1 + \mbox{\rm Re}\ a_n \gamma_n ) (1 + \mbox{\rm Re} \ b_n \gamma_n ) } >0 \Longrightarrow \mu_a \ll \mu_b;$$ $$ \prod_{n=1}^\infty \mathbb{E} \sqrt{(1 + \mbox{\rm Re}\ a_n \gamma_n ) (1 + \mbox{\rm Re} \ b_n \gamma_n ) } =0 \Longrightarrow \mu_a \perp \mu_b.$$}
\subsection{Doubling and tripling} For any integer $m\ge 2$, we define the dynamics $\tau_m x = mx $ ($\!\!\!\!\mod 1$) on $[0, 1)$. A typical pair of commuting transformations is $(\tau_2, \tau_3)$. Let us take, for example, $\Phi(x, y) = e^{2\pi i (a x + b y)}$ with $a, b$ being two fixed integers. We are then led to the multiple ergodic averages, a special case of \eqref{def_generalergaverage}, \begin{equation}\label{2_3} A_n^{(2, 3)}(x): = \frac{1}{n}\sum_{k=1}^n e^{2\pi i (a 2^k + b 3^k) x}. \end{equation} This is an object not yet well studied in the literature (but if $a=0$, we get a classical Birkhoff average). We propose to develop a thermodynamic formalism by studying Gibbs type measures which are weak limits $\mu_{s, t}$ ($s, t \in\mathbb{R}$) of $$ Z_n(s, t)^{-1} Q_n(x) d x $$ where $$ Q_n(x) := \prod_{k=1}^n e^{s \cos (2\pi (a 2^k + b 3^k) x) + t \sin (2\pi (a 2^k + b 3^k) x)}. $$ The pressure function defined by $$ P(s, t) := \lim_{n\to \infty} \frac{\log Z_n(s, t)}{n} $$ would be differentiable, but first we have to prove the existence of the limit defining $P(s,t)$. More generally, let $(c_n)$ be a sequence of complex numbers and $(\lambda_n)$ a lacunary sequence of positive integers (by lacunary we mean $\inf_n \frac{\lambda_{n+1}}{\lambda_n}>1$). We can consider the following weighted lacunary trigonometric averages $$ \frac{1}{n}\sum_{k=1}^n c_k e^{2\pi i \lambda_k x}. $$ Under the divisibility condition $\lambda_n | \lambda_{n+1}$, such averages and more general averages were studied in \cite{Fan1997b}. For example, if $c_k = e^{2\pi i \omega_k}$ with $(\omega_k)$ being an i.i.d. sequence of Lebesgue distributed random variables, from the results obtained in \cite{Fan1997b} we deduce that almost surely the pressure is well defined and equal to the following deterministic function $$ P(s, t) = \log \int_0^{2\pi} e^{\sqrt{t^2+s^2} \cos x}\frac{ d x}{2\pi }. $$ Recall that $$ J_0(r) = \frac{1}{2\pi}\int_{0}^{2\pi} e^{r\cos x} d x = \sum_{n=0}^\infty \frac{r^{2 n}}{(n!)^2 2^{2n}} $$ is the Bessel function, so that the above pressure is $P(s,t)=\log J_0\big(\sqrt{s^2+t^2}\big)$. However, the divisibility condition $\lambda_n \mid \lambda_{n+1}$ is not satisfied for $\lambda_n = 2^n + 3^n$, nor is it satisfied for $\lambda_n =2^n + 4^n$. No rigorous results are known for the multifractal analysis of the averages defined by (\ref{2_3}). As conjectured by F\"urstenberg, the Lebesgue measure is the unique continuous probability measure which is both $\tau_2$-invariant and $\tau_3$-invariant.
However, common $\tau_2$- and $\tau_3$-periodic points (different from the trivial one $0$) do exist. Given two integers $n\ge 1$ and $m\ge 1$, we can prove that there is a point $x (\not=0)$ which is $n$-periodic with respect to $\tau_2$ and $m$-periodic with respect to $\tau_3$ if and only if \begin{equation}\label{2_3condition} (2^n -1, 3^m-1) >1. \end{equation} Let $d = (2^n -1, 3^m-1)$. When the above condition on the GCD is satisfied, there are $d-1$ such common periodic points. These common periodic points $x (\not=0)$ are of the form $ x = \frac{k}{2^n-1}= \frac{j}{3^m-1} $ for some $1\le k < 2^n-1$ and $1\le j <3^m-1$. In fact, the choices for $k$ are $$ 1 \cdot \frac{2^n-1}{d},\ \ \ 2 \cdot \frac{2^n-1}{d},\ \cdots, \ (d-1) \cdot \frac{2^n-1}{d}, $$ and the choices for $j$ are $ 1 \cdot \frac{3^m-1}{d},\ \ \ 2 \cdot \frac{3^m-1}{d},\ \cdots, \ (d-1) \cdot \frac{3^m-1}{d}. $ Thus the $d-1$ common periodic points are $\frac{1}{d}, \frac{2}{d}, \cdots, \frac{d-1}{d}$. For such a point $x$, the following limit exists: $$ \lim_{N\to \infty} A_N^{(2,3)}(x) = A_{nm}^{(2,3)}(x). $$ Note that there are infinitely many pairs $(n, m)$ for which (\ref{2_3condition}) holds. There should be some relation between these common periodic points and the multifractal behavior of $A_N^{(2,3)}(x)$.
\end{document}
Loss of atmosphere on Mars

If the atmosphere on Mars was once much thicker, how was it likely lost? Was it due to interaction with the solar wind, the small size of the planet, both, or something else, and approximately how long did it take to reach its current thickness?

Becky Ericson

The loss of the Martian atmosphere can be mostly attributed to the planet's small mass. The reason Earth still has an atmosphere containing lighter elements is that a larger mass means a larger escape velocity, the speed at which a particle's kinetic energy overcomes the gravitational potential energy of its planet. The distribution of speeds (equivalently, of kinetic energies) of most gases is described by the Maxwell–Boltzmann distribution, which gives the probability of finding a particle with a given energy at a given temperature:
$$P(\epsilon) = \frac{2}{\sqrt{\pi}\,k_{B}T} \left( \frac{\epsilon}{k_{B}T} \right)^{1/2} \exp\left(-\frac{\epsilon}{k_{B}T}\right)$$
where $\epsilon$ is the energy of the particle, equal to $\frac{p^{2}}{2m}$ in the absence of a gravitational field (see this lecture for how to include the gravitational term). Holding temperature constant, lighter molecules have a higher probability of being found at high speeds. Integrated over time, the lighter gases therefore exceed the escape velocity more often than their heavier counterparts. This is why the larger planets like Jupiter and Saturn still have atmospheres dominated by hydrogen and helium.

astromax
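To put rough numbers on the thermal (Jeans) escape argument above, the short Python sketch below (our own illustration; the escape velocities, exospheric temperatures, and the often-quoted "factor of about 6" retention rule of thumb are representative assumptions) compares the most probable Maxwell-Boltzmann speed v_p = sqrt(2 k_B T / m) of a few gases with the escape velocities of Earth and Mars:

import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
u   = 1.66053907e-27          # atomic mass unit, kg

gases = {"H2": 2.016, "O2": 31.998, "CO2": 44.01}               # molecular masses in u
planets = {"Earth": (11.2e3, 1000.0), "Mars": (5.0e3, 270.0)}   # (v_esc in m/s, assumed T in K)

for planet, (v_esc, T) in planets.items():
    print(planet)
    for gas, mass_u in gases.items():
        m = mass_u * u
        v_p = math.sqrt(2 * k_B * T / m)                        # most probable speed
        print(f"  {gas:>3}: v_p = {v_p:6.0f} m/s, v_esc / v_p = {v_esc / v_p:5.1f}")

With these rough inputs, hydrogen falls below the commonly quoted retention threshold (escape velocity at least about six times the thermal speed) on both planets, while the heavier gases sit well above it. The sketch only captures thermal escape; it says nothing about non-thermal processes such as solar-wind stripping, which the question also asks about.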
Finite Mathematics and Calculus with Applications Margaret L. Lial, Raymond N. Greenwell, Nathan P. Ritchey Functions of Several Variables Select Section 17.1: Functions of Several Variables 17.2: Partial Derivatives 17.3: Continuous Money Flow 17.4: Lagrange Multipliers 17.5: Total Differentials and Approximations 17.6: Double Integrals Let $f(x, y)=2 x-3 y+5 .$ Find the following. a. $f(2,-1) \quad$ b. $f(-4,1) \quad$ c. $f(-2,-3) \quad$ d. $f(0,8)$ Thomas E. Let $g(x, y)=x^{2}-2 x y+y^{3} .$ Find the following. a. $g(-2,4) \quad$ b.g $(-1,-2) \qquad$ c.g $(-2,3) \quad$ d. $g(5,1)$ Let $h(x, y)=\sqrt{x^{2}+2 y^{2}} .$ Find the following. a. $h(5,3) \quad$ b. $h(2,4) \quad$ c. $h(-1,-3) \quad$ d. $h(-3,-1)$ Let $f(x, y)=\frac{\sqrt{9 x+5 y}}{\log x} .$ Find the following. a. $f(10,2)$ b. $f(100,1)$ c. $f(1000,0)$ d. $f\left(\frac{1}{10}, 5\right)$ Graph the first-octant portion of each plane. $x+y+z=9$ $x+y+z=15$ $2 x+3 y+4 z=12$ $x+y=4$ $y+z=5$ $z=4$ Graph the level curves in the first quadrant of the $x y$ -plane for the following functions at heights of $z=0, z=2,$ and $z=4$. $3 x+2 y+z=24$ Rommel E. $3 x+y+2 z=8$ $y^{2}-x=-z$ $2 y-\frac{x^{2}}{3}=z$ Discuss how a function of three variables in the form $w=f(x, y, z)$ might be graphed. Suppose the graph of a plane $a x+b y+c z=d$ has a portion in the first octant. What can be said about $a, b, c,$ and $d ?$ In the chapter on Nonlinear Functions, the vertical line test was presented, which tells whether a graph is the graph of a function. Does this test apply to functions of two variables? Explain. A graph that was not shown in this section is the hyperboloid of one sheet, described by the equation $x^{2}+y^{2}-z^{2}=1$ Describe it as completely as you can. With its graph in a–f on the next page $z=x^{2}+y^{2}$ $z^{2}-y^{2}-x^{2}=1$ $x^{2}-y^{2}=z$ $z=y^{2}-x^{2}$ $\frac{x^{2}}{16}+\frac{y^{2}}{25}+\frac{z^{2}}{4}=1$ $z=5\left(x^{2}+y^{2}\right)^{-1 / 2}$ Let $f(x, y)=4 x^{2}-2 y^{2},$ and find the following. a. $\frac{f(x+h, y)-f(x, y)}{h}$ b. $\frac{f(x, y+h)-f(x, y)}{h}$ c. $$\lim _{h \rightarrow 0} \frac{f(x+h, y)-f(x, y)}{h}$$ d. $$\lim _{h \rightarrow 0} \frac{f(x, y+h)-f(x, y)}{h}$$ Let $f(x, y)=5 x^{3}+3 y^{2},$ and find the following. Let $f(x, y)=x y e^{x^{2}+y^{2}} .$ Use a graphing calculator or spread- sheet to find each of the following and give a geometric interpretation of the results. (Hint: First factor $e^{2}$ from the limit and then evaluate the quotient at smaller and smaller values of $h . )$ a. $$\lim _{h \rightarrow 0} \frac{f(1+h, 1)-f(1,1)}{h}$$ b. $$\lim _{h \rightarrow 0} \frac{f(1,1+h)-f(1,1)}{h}$$ The following table provides values of the function $f(x, y)$ However, because of potential errors in measurement, the functional values may be slightly inaccurate. Using the statistical package included with a graphing calculator or spreadsheet and critical thinking skills, find the function $f(x, y)=$ $a+b x+c y$ that best estimates the table where $a, b,$ and $c$ are integers. (Hint: Do a linear regression on each column with the value of $y$ fixed and then use these four regression equations to determine the coefficient $c .$ ) Production of a digital camera is given by $$P(x, y)=100\left(\frac{3}{5} x^{-2 / 5}+\frac{2}{5} y^{-2 / 5}\right)^{-5}$$ where $x$ is the amount of labor in work-hours and $y$ is the amount of capital. Find the following. a. What is the production when 32 work-hours and 1 unit of capital are provided? b. Find the production when 1 work-hour and 32 units of capital are provided. c. 
If 32 work-hours and 243 units of capital are used, what is the production output? The multiplier function $$M=\frac{(1+i)^{n}(1-t)+t}{[1+(1-t) i]^{n}}$$ compares the growth of an Individual Retirement Account (IRA) with the growth of the same deposit in a regular savings account. The function $M$ depends on the three variables $n, i$ and $t,$ where $n$ represents the number of years an amount is left at interest, $i$ represents the interest rate in both types of accounts, and $t$ t represents the income tax rate. Values of $M>1$ indicate that the IRRA grows faster than the savings account. Let $M=f(n, i, t)$ and find the following. Find the multiplier when funds are left for 25 years at 5$\%$ interest and the income tax rate is 33$\% .$ Which account grows faster? What is the multiplier when money is invested for 40 years at 6$\%$ interest and the income tax rate is 28$\% ?$ Which account grows faster? Find the level curve at a production of 500 for the production functions. Graph each level curve in the $xy$-plane. In their original paper, $\mathrm{Cobb}$ and Douglas estimated the pro- duction function for the United States to be $z=1.01 x^{3 / 4} y^{1 / 4}$ , where $x$ represents the amount of labor and $y$ the amount of capital. Source: American Economic Review. A study of the connection between immigration and the fiscal problems associated with the aging of the baby boom generation considered a production function of the form $z=x^{0.6} y^{0.4}$ where $x$ represents the amount of labor and $y$ the amount of capital. Source: Journal of Political Economy. For the function in Exercise $34,$ what is the effect on z of doubling $x ?$ Of doubling $y ?$ Of doubling both? If labor $(x)$ costs $\$ 250$ per unit, materials $(y)$ cost $\$ 150$ per unit, and capital $(z)$ costs $\$ 75$ per unit, write a function for total cost. The rate of heat loss (in watts) in harbor seal pups has been approximated by $$H(m, T, A)=\frac{15.2 m^{0.67}(T-A)}{10.23 \ln m-10.74}$$ where $m$ is the body mass of the pup (in $\mathrm{kg} ),$ and $T$ and $A$ are the body core temperature and ambient water temperature, respectively $\left(\text { in }^{\circ} \mathrm{C}\right) .$ Find the heat loss for the following data. Source: Functional Ecology. a. Body mass $=21 \mathrm{kg} ;$ body core temperature $=36^{\circ} \mathrm{C} ;$ ambient water temperature $=4^{\circ} \mathrm{C}$ b. Body mass $=29 \mathrm{kg} ;$ body core temperature $=38^{\circ} \mathrm{C} ;$ ambient water temperature $=16^{\circ} \mathrm{C}$ The surface area of a human (in square meters) has been approximated by $$A=0.024265 h^{0.3964} m^{0.5378}$$ where $h$ is the height $(\mathrm{in} \mathrm{cm})$ and $m$ is the mass (in $\mathrm{kg} ) .$ Find $A$ for the following data. Source: The Journal of Pediatrics. a. Height, $178 \mathrm{cm} ;$ mass, 72 $\mathrm{kg}$ b. Height, $140 \mathrm{cm} ;$ mass, 65 $\mathrm{kg}$ c. Height, $160 \mathrm{cm} ;$ mass, 70 $\mathrm{kg}$ d. Using your mass and height, find your own surface area. An article entitled "How Dinosaurs Ran" explains that the locomotion of different sized animals can be compared when they have the same Froude number, defined as $$F=\frac{v^{2}}{g l}$$ where $v$ is the velocity, $g$ is the acceleration of gravity $\left(9.81 \mathrm{m} \text { per } \sec ^{2}\right),$ and $l$ is the leg length (in meters). Source: Scientific American. a. One result described in the article is that different animals change from a trot to a gallop at the same Froude number, roughly 2.56. 
Find the velocity at which this change occurs for a ferret, with a leg length of 0.09 m, and a rhinoceros, with a leg length of 1.2 m. b. Ancient footprints in Texas of a sauropod, a large herbivo- rous dinosaur, are roughly 1 m in diameter, corresponding to a leg length of roughly 4 m. By comparing the stride divided by the leg length with that of various modern creatures, it can be determined that the Froude number for these dinosaurs is roughly 0.025. How fast were the sauropods traveling? According to research at the Great Swamp in New York, the percentage of fish that are intolerant to pollution can be estimated by the function $$P(W, R, A)=48-2.43 W-1.81 R-1.22 A$$ where $W$ is the percentage of wetland, $R$ is the percentage of residential area, and $A$ is the percentage of agricultural area surrounding the swamp. Source: Northeastern Naturalist. a. Use this function to estimate the percentage of fish that will be intolerant to pollution if 5 percent of the land is classified as wetland, 15 percent is classified as residential, and 0 percent is classified as agricultural. (Note: The land can also be classified as forest land.) b. What is the maximum percentage of fish that will be intolerant to pollution? c. Develop two scenarios that will drive the percentage of fish that are intolerant to pollution to zero. d. Which variable has the greatest influence on P? In tropical regions, dengue fever is a significant health problem that affects nearly 100 million people each year. Using data collected from the 2002 dengue epidemic in Colima, Mexico, researchers have estimated that the incidence $I$ (number of new cases in a given year) of dengue can be predicted by the following function. $$\begin{aligned} I(p, a, m, n, e)=&(25.54+0.04 p-7.92 a+2.62 m\\ &+4.46 n+0.15 e )^{2} \end{aligned}$$ where $p$ is the precipitation $(\mathrm{mm}), a$ is the mean temperature $\left(^{\circ} \mathrm{C}\right), m$ is the maximum temperature $\left(^{\circ} \mathrm{C}\right), n$ is the minimum temperature $\left(^{\circ} \mathrm{C}\right),$ and $e$ is the evaporation $(\mathrm{mm}) .$ Source: Journal of Environmental Health. a. Estimate the incidence of a dengue fever outbreak for a region with 80 $\mathrm{mm}$ of rainfall, average temperature of $23^{\circ} \mathrm{C}$ , maximum temperature of $34^{\circ} \mathrm{C}$ , minimum temperature of $16^{\circ} \mathrm{C},$ and evaporation of 50 $\mathrm{mm} .$ b. Which variable has a negative influence on the incidence of dengue? Describe this influence and what can be inferred mathematically about the biology of the fever. Using data collected by the U.S. Forest Service, the annual number of deer-vehicle accidents for any given county in Ohio can be estimated by the function $$\begin{aligned} A(L, T, U, C)=& 53.02+0.383 L+0.0015 T+0.0028 U \\ &-0.0003 C \end{aligned}$$ where $A$ is the estimated number of accidents, $L$ is the road length (in kilometers), $T$ is the total county land area (in hundred-acres (Ha)), $U$ is the urban land area (in hundred- acres), and $C$ is the number of hundred-acres of crop land. Source: Ohio Journal of Science. a. Use this formula to estimate the number of deer-vehicle accidents for Mahoning County, where $L=266 \mathrm{km}, T=$ $107,484 \mathrm{Ha}, U=31,697 \mathrm{Ha},$ and $C=24,870$ Ha. The actual value was $396 .$ b. Given the magnitude and nature of the input numbers, which of the variables have the greatest potential to influence the number of deer-vehicle accidents? Explain your answer. 
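A quick numerical check of part (a) of the deer-vehicle accident exercise above (our own verification, not part of the textbook): evaluating the given linear model at the Mahoning County data,

def A(L, T, U, C):
    # A(L, T, U, C) = 53.02 + 0.383 L + 0.0015 T + 0.0028 U - 0.0003 C
    return 53.02 + 0.383 * L + 0.0015 * T + 0.0028 * U - 0.0003 * C

print(A(266, 107484, 31697, 24870))   # about 397.4

which is close to the reported observed value of 396 accidents.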
Using data collected by the U.S. Forest Service, the annual number of deer that are harvested for any given county in Ohio can be estimated by the function $$N(R, C)=329.32+0.0377 R-0.0171 C$$ where $N$ is the estimated number of harvested deer, $R$ is the rural land area (in hundred-acres), and $C$ is the number of hundred-acres of crop land. Source: Ohio Journal of Science. a. Use this formula to estimate the number of harvested deer for Tuscarawas County, where $R=141,319$ Ha and $\mathrm{C}=$ $37,960$ Ha. The actual value was 4925 deer harvested. b. Sketch the graph of this function in the first octant. Pregnant sows tethered in stalls often show high levels of repetitive behavior, such as bar biting and chain chewing, indicating chronic stress. Researchers from Great Britain have developed a function that estimates the relationship between repetitive behavior, the behavior of sows in adjacent stalls, and food allowances such that $$\ln (T)=5.49-3.00 \ln (F)+0.18 \ln (C)$$ where $T$ is the percent of time spent in repetitive behavior, $F$ is the amount of food given to the sow (in kilograms per day), and $C$ is the percent of time that neighboring sows spent bar biting and chain chewing. Source: Applied Animal Behaviour Science. a. Solve the above expression for $T$ b. Find and interpret $T$ when $F=2$ and $C=40$ . Extra postage is charged for parcels sent by U.S. mail that are more than 84 in. in length and girth combined. (Girth is the distance around the parcel perpendicular to its length. See the figure.) Express the combined length and girth as a function of $L, W,$ and $H$ Refer to the figure for Exercise $46 .$ Assume $L, W,$ and $H$ are in feet. Write a function in terms of $L, W,$ and $H$ that gives the total area of the material required to build the box. The holes cut in a roof for vent pipes require elliptical templates. A formula for determining the length of the major axis of the ellipse is given by $$L=f(H, D)=\sqrt{H^{2}+D^{2}}$$ where $D$ is the (outside) diameter of the pipe and $H$ is the "rise "of the roof per $D$ units of "run'; that is, the slope of the roof is $H / D$ . (See the figure below.) The width of the ellipse (minor axis) equals $D .$ Find the length and width of the ellipse required to produce a hole for a vent pipe with a diameter of 3.75 in. in roofs with the following slopes. a. 3$/ 4$ b. 2$/ 5$
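The last vent-pipe exercise is a direct plug-in to $L=f(H, D)=\sqrt{H^{2}+D^{2}}$ with $D=3.75$ in. and $H$ determined by the roof slope $H/D$; a small script (our own check, not the textbook's solution) evaluates both parts:

import math

D = 3.75                       # outside pipe diameter, in inches
for slope in (3/4, 2/5):
    H = slope * D              # rise per D units of run
    L = math.sqrt(H**2 + D**2) # major axis of the elliptical template
    print(slope, round(L, 4))  # the minor axis equals D = 3.75

giving a major axis of 4.6875 in. for slope 3/4 and about 4.0389 in. for slope 2/5, with minor axis 3.75 in. in both cases.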
\begin{document} \title{The Structure of Masses of rank $n$ Quadratic Lattices of varying determinant over number fields} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{defn}[thm]{Definition} \newtheorem{rem}[thm]{Remark} \newtheorem{conj}[thm]{Conjecture} \newtheorem{question}[thm]{Question} \newtheorem{sublem}[thm]{Sublemma} \newcommand{\mathrm{Mat}}{\mathrm{Mat}} \renewcommand{\mathbb{O}}{\mathrm{O}} \newcommand{\mathrm{SO}}{\mathrm{SO}} \newcommand{\mathrm{SL}}{\mathrm{SL}} \newcommand{\mathrm{GL}}{\mathrm{GL}} \newcommand{\mathrm{ord}}{\mathrm{ord}} \newcommand{\mathrm{Gen}}{\mathrm{Gen}} \newcommand{\mathrm{Sym}}{\mathrm{Sym}} \newcommand{\mathrm{sgn}}{\mathrm{sgn}} \newcommand{\mathrm{SqCl}}{\mathrm{SqCl}} \newcommand{\mathrm{Mass}}{\mathrm{Mass}} \newcommand{\mathrm{Aut}}{\mathrm{Aut}} \newcommand{\mathrm{Disc}}{\mathrm{Disc}} \newcommand{\mathrm{DiscSq}}{\mathrm{DiscSq}} \newcommand{\mathrm{Pic}}{\mathrm{Pic}} \newcommand{\mathrm{Gal}}{\mathrm{Gal}} \newcommand{\mathrm{Vol}}{\mathrm{Vol}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{F}}{\mathbb{F}} \renewcommand{\mathbb{S}}{\mathbb{S}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{\mathbb{U}}{\mathbb{U}} \renewcommand{\mathbb{O}}{\mathbb{O}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{V}}{\mathbb{V}} \newcommand{\mathrm{res}}{\mathrm{res}} \newcommand{\mathrm{Norm}}{\mathrm{Norm}} \newcommand{\mathrm{Image}}{\mathrm{Image}} \newcommand{\vec x}{\vec x} \newcommand{\vec y}{\vec y} \renewcommand{\vec c}{\vec c} \newcommand{\mathcal{M}}{\mathcal{M}} \renewcommand{\mathcal{H}}{\mathcal{H}} \renewcommand{\[}{\left[} \renewcommand{\right]}{\right]} \renewcommand{\(}{\left(} \renewcommand{\)}{\right)} \newcommand{\rightarrow}{\rightarrow} \newcommand{\alpha}{\alpha} \newcommand{\varepsilon}{\varepsilon} \newcommand{\leg}[2]{\(\frac{#1}{#2}\)} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{q}}{\mathfrak{q}} \newcommand{\mathcal{O}_F}{\mathcal{O}_F} \newcommand{\mathcal{O}_K}{\mathcal{O}_K} \newcommand{\mathcal{O}_\p}{\mathcal{O}_\mathfrak{p}} \newcommand{\mathcal{O}_\q}{\mathcal{O}_\mathfrak{q}} \newcommand{\mathcal{O}_v}{\mathcal{O}_v} \newcommand{\mathrm{Supp}}{\mathrm{Supp}} \newcommand{\mathrm{Cond}}{\mathrm{Cond}} \newcommand{\mathfrak{n}}{\mathfrak{n}} \newcommand{\mathfrak{s}}{\mathfrak{s}} \renewcommand{\mathfrak{v}}{\mathfrak{v}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathfrak{I}}{\mathfrak{I}} \renewcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})}{\mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})} \newcommand{\SqCl(\A_{F, \mathbf{f}}^\times)}{\mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times)} \newcommand{\SqCl(F^\times, \OF^\times)}{\mathrm{SqCl}(F^\times, \mathcal{O}_F^\times)} \newcommand{\SqCl(F^\times)}{\mathrm{SqCl}(F^\times)} \newcommand{\SqCl(F_\p^\times, \Op^\times)}{\mathrm{SqCl}(F_\mathfrak{p}^\times, \mathcal{O}_\p^\times)} \newcommand{\SqCl(F_\p^\times)}{\mathrm{SqCl}(F_\mathfrak{p}^\times)} \newcommand{\SqCl(F_v^\times, \Ov^\times)}{\mathrm{SqCl}(F_v^\times, \mathcal{O}_v^\times)} \newcommand{\SqCl(F_v^\times)}{\mathrm{SqCl}(F_v^\times)} \newcommand{\SqCl(\Op^\times)}{\mathrm{SqCl}(\mathcal{O}_\p^\times)} \newcommand{\SqCl(\OF^\times)}{\mathrm{SqCl}(\mathcal{O}_F^\times)} 
\section{Overview and Notation} \subsection{Introduction} The notion of the ``mass'' of a positive definite integral quadratic form $Q$ was first introduced as such by Smith in his 1867 paper \cite{Smith:1867} and also by Minkowski in his 1885 dissertation \cite{Mink}, though special cases can be found in earlier works of Gauss, Dirichlet and Eisenstein. In this setting, the {\bf mass} of $Q$ is defined to be the rational number $$ \mathrm{Mass}(Q) := \sum_{[Q'] \in \mathrm{Gen}(Q)} \frac{1}{|\mathrm{Aut}(Q')|} \in \mathbb{Q} > 0 $$ given by summing the reciprocals of the sizes of the automorphism groups $\mathrm{Aut}(Q')$ of all $\mathbb{Z}$-equivalence classes of quadratic forms $Q'$ (denoted by $[Q']$) where $Q'$ is equivalent to $Q$ by some invertible linear change of variables over $\mathbb{Z}/m\mathbb{Z}$ for each $m \in \mathbb{N}$ and also over the real numbers $\mathbb{R}$. (The set of such $Q'$ is called the {\bf genus} $\mathrm{Gen}(Q)$ of $Q$, and it is well-known that this sum is finite.) The mass is an interesting quantity because it is closely related to the number of classes in the genus (called the {\bf class number} $h(Q)$ of $Q$), but also because it can be computed analytically as an infinite product over all primes $p \in \mathbb{N}$. In 1935-1937, Siegel \cite{Si1, Si2, Si3, Siegel:1963vn} revolutionized the analytic theory of quadratic forms by providing a general framework for understanding previous mass formulas, as well as exact formulas for representing numbers by certain quadratic forms. All of these formulas have a very ``volume-theoretic" character, and express certain weighted sums over classes $[Q'] \in \mathrm{Gen}(Q)$ as a product over all places $v$ (of our given number field, e.g. $\mathbb{Q}$) as a product of ``local densities'' which measure the ``volume of solutions'' of some given quadratic diophantine equation. In particular Siegel's Mass formula states that $$ \mathrm{Mass}(Q) = 2\prod_v \beta_{Q,v}(Q)^{-1} $$ where the $\beta_{Q,v}(Q)$ are the local densities (for $Q$ representing itself) at the place $v$. This perspective was used by Tamagawa \cite{Tamagawa:1966ys, Weil:1982zr} in the 1960's to give a new proof of Siegel's formula for $\mathrm{Mass}(Q)$ in terms of a canonical (Tamagawa) measure on the associated adelic special orthogonal group $\mathrm{SO}_\mathbb{A}(Q)$. To use the mass formula to evaluate $\mathrm{Mass}(Q)$ for any given quadratic form $Q$ involves evaluating the local density factors at all places $v$, which can be rather complicated at the primes $p \mid 2\det(Q)$. These factors have been worked out in various cases by many authors, though the state of the literature about these formulas is not entirely adequate. In particular, a correct formula at the place $p=2$ is given in \cite{CS} without proof, and this appears not to agree with the proven formula \cite{Watson:1976dq}. For $p \neq 2$ correct formulas are given in \cite{Pall, CS}, and though the analogous formulas holds at any prime ideal $\mathfrak{p}\nmid 2$ of a number field $F$, a reference for this is difficult to find (perhaps in \cite{Pfeuffer:1971pd}?). One very natural question that has not received much attention is how to understand the {\bf total mass} $\mathrm{TMass}_n(S)$ of positive definite $\mathbb{Z}$-valued quadratic forms in $n$ variables with (Hessian) determinant $S$, defined as $$ \mathrm{TMass}_n(S) := \sum_{[Q] \text{ with} \det(Q) = S} \frac{1}{|\mathrm{Aut}(Q)|}, $$ and how $\mathrm{TMass}_n(S)$ varies with $S$. 
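As a concrete illustration of this quantity over $F=\mathbb{Q}$ when $n=2$ (our own numerical aside, not part of the arguments below), $\mathrm{TMass}_2(S)$ can be computed directly from the definition: each $\mathrm{GL}_2(\mathbb{Z})$-class of positive definite $\mathbb{Z}$-valued binary forms $ax^2+bxy+cy^2$ with Hessian determinant $4ac-b^2=S$ has a unique reduced representative with $0\le b\le a\le c$, and the automorphisms of a reduced form have small entries, so both the class list and $|\mathrm{Aut}(Q)|$ can be found by brute force. The Python sketch below recovers, for instance, $\mathrm{TMass}_2(3)=\tfrac{1}{12}$ and $\mathrm{TMass}_2(4)=\tfrac{1}{8}$.
\begin{verbatim}
# Brute-force computation of TMass_2(S) over Q (illustration only).
from fractions import Fraction
from itertools import product

def reduced_forms(S):
    """Reduced triples (a, b, c) with 0 <= b <= a <= c and 4ac - b^2 = S,
    one representative per GL_2(Z)-class of positive definite forms."""
    forms = []
    a = 1
    while 3 * a * a <= S:              # reduction forces 3a^2 <= 4ac - b^2 = S
        for b in range(0, a + 1):
            if (S + b * b) % (4 * a) == 0:
                c = (S + b * b) // (4 * a)
                if c >= a:
                    forms.append((a, b, c))
        a += 1
    return forms

def aut_order(a, b, c):
    """|Aut(Q)|: count M in GL_2(Z) with M^T H M = H, where H = [[2a, b], [b, 2c]].
    For reduced forms the automorphs have tiny entries; we search the box
    {-2,...,2}^4 to be safe."""
    count = 0
    for p, q, r, s in product(range(-2, 3), repeat=4):
        if p * s - q * r not in (1, -1):
            continue
        # columns of M are (p, r) and (q, s); compare M^T H M with H entrywise
        if (2*a*p*p + 2*b*p*r + 2*c*r*r == 2*a and
                2*a*q*q + 2*b*q*s + 2*c*s*s == 2*c and
                2*a*p*q + b*(p*s + q*r) + 2*c*r*s == b):
            count += 1
    return count

def total_mass(S):
    return sum(Fraction(1, aut_order(a, b, c)) for (a, b, c) in reduced_forms(S))

# Sanity checks: x^2 + xy + y^2 (S = 3) has 12 automorphs, x^2 + y^2 (S = 4) has 8.
for S in (3, 4, 11, 16, 20):
    print(S, reduced_forms(S), total_mass(S))
\end{verbatim}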
These are the main questions that we address in this paper, for any fixed $n \geq 2$. From the perspective of Siegel's mass formula, the total mass $\mathrm{TMass}_n(S)$ is a somewhat less natural quantity to study than $\mathrm{Mass}(Q)$ since it involves summing masses of quadratic forms across all genera of a given determinant, giving a complicated sum of complicated infinite products. However, rather remarkably, this summation has the effect of smoothing out much of the variation of the mass formula, and allows us to give a formula for the total mass $\mathrm{TMass}_n(S)$ for any given determinant $S$. Since it is well-known that the variation of the archimedean local density is given by $\beta_{Q,\infty}(Q)^{-1} = C_n(Q) \cdot S^{\frac{n(n-1)}{2}}$ for some constant $C_n(Q)$, we instead study the {\bf total non-archimedean mass} $T_n(S)$ defined by $$ T_n(S) := \beta_\infty(Q) \cdot \mathrm{TMass}_n(S) = \sum_{\substack{[Q] \text{ with } \\ \det(Q) = S}} 2 \cdot \prod_p \beta_p(Q)^{-1}, $$ and the {\bf primitive total non-archimedean mass} $T^*_n(S)$ defined by instead summing over classes $[Q]$ of primitive integer-valued forms with $\det(Q) = S$. Our main results, stated for simplicity in the special case of positive definite forms over $F=\mathbb{Q}$ with $n$ odd, show that \begin{thm} When $n$ is odd, the formal Dirichlet series $$ D_{T^*; n}(s) := \sum_{S\in\mathbb{N}} \frac{T^*_n(S)}{S^{s}} $$ can be written as a sum $$ D_{T^*; n}(s) = \kappa_n \cdot \[ D_{A^*; n}(s) + D_{B^*; n}(s) \right] $$ of two Eulerian Dirichlet series $D_{A^*; n}(s)$ and $D_{B^*; n}(s)$, with some explicit constant $\kappa_n$ (Corollaries \ref{Cor:Formal_Dirichlet_M} and \ref{Cor:Formal_Dirichlet_decomp_n_odd}). \end{thm} \noindent When $n$ is even this sum of Dirichlet series is slightly more complicated, and naturally gives such a decomposition where the overall constant $\kappa_n$ depends on the squareclass $t\mathbb{N}^2$ that $S$ lies within. We also show that \begin{thm} The Euler factors at $p$ in the Dirichlet series $D_{A^*; n}(s)$ and $D_{B^*; n}(s)$ above are each rational functions in $p^{-s}$ (Theorem \ref{Thm:Rationality_of_Euler_factors}, Corollary \ref{Cor:Rationality_of_AB_Euler_factors}). \end{thm} \noindent When $n=2$ we explicitly compute these Euler factors at all primes $p$ (Theorem \ref{Thm:Explicit_A_and_B_for_n=2}). Finally, we use these explicit local computations to recover Dirichlet's class number formula for imaginary quadratic fields $K/\mathbb{Q}$ (Theorem \ref{Thm:Analytic_class_number_formula}), and discuss some similarities between the total non-archimedean mass series $D_{T; n}(s)$ and modular forms of half-integral weight. We actually prove our main theorems over a general number field $F$, which causes us to make several technical distinctions that are blurred when $F=\mathbb{Q}$. Over $F$ we see that not all lattices are free, (leading us to replace quadratic forms by the more general notion of quadratic lattices), the discriminant becomes an integral squareclass (or more accurately, a non-archimedean tuple of local integral squareclasses indexed by the prime ideals $\mathfrak{p}$ of $F$), and there is no natural notion of how to associate a squareclass to an ideal (leading us to define the notions of a ``formal squareclass series'' and of a ``family of distinguished squareclasses''). 
Also, substantial technical work must be done to ``normalize'' the local densities within a squareclass over $F$ (where over $\mathbb{Q}$ we have a natural notion of the ``squarefree part'' of a number and a simpler local theory of genera) and to establish an analytic class number formula generalizing Dirichlet's class number formula to the setting of CM-extensions $K/F$. All of our main results include the freedom to discuss quadratic lattices of any fixed signature (which is now a vector), and also allow one to specify their Hasse invariants at finitely many primes $\mathfrak{p}$. One important motivation for studying the total mass of given determinant comes from the arithmetic implications of the ``discriminant-preserving" correspondences introduced by Bhargava \cite{Bh1, Bh2, Bh3, Bh4} that generalize Gauss composition for binary quadratic forms. Also, masses of certain ternary quadratic forms summed across several genera of fixed determinant (called $S$-genera) were studied in \cite{Berk-H-Jagy} to establish a family of ``$S$-genus identities'', though there the determinants were essentially squarefree. A forthcoming paper \cite{Hanke_n_equals_3_masses} explicitly computes the local Euler factors for $D_{A^*; n}(S)$ and $D_{B^*; n}(S)$ when $n=3$, in preparation for an investigation of the growth of the $2$-parts of class groups of cubic fields \cite{Bh-Ha-Sh} jointly with M. Bhargava and A. Shankar. {\bf Acknowledgements:} The author would like to thank Manjul Bhargava for posing a question that led to this work, and MSRI for their hospitality during their Spring 2011 semester in Arithmetic Statistics. The author would also like to warmly thank Robert Varley for his continuing interest in this work, and for several helpful conversations. This work was completed at the University of Georgia between December 2009 and Summer 2011, and was partially supported by the NSF Grant DMS-0603976. \subsection{Detailed Summary} In this paper we study the ``primitive total non-archimedean mass'' $T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S)$ of rank $n$ quadratic lattices over a number field $F$ of a fixed Hessian determinant squareclass $S$, signature vector $\vec{\sigma}_\infty = (\sigma_v)$ for all real archimedean places of $F$, and specified Hasse invariants $c_\mathfrak{p}$ at some finite set $\mathbb{S}$ of primes $\mathfrak{p}$ of $F$. Our main interest is in understanding how the total non-archimedean mass varies as we vary its determanant squareclass $S$, and its behaviour as ``$S \rightarrow \infty$'' in various ways. In \textsection2 we define local, global, and adelic notions of (integral and rational) squareclasses associated to genera of quadratic lattices, and describe some related structures and normalizations. We also go to some effort to show that there is at most one local genus of integral quadratic forms associated to any normalized determinant squareclass, and to characterize the squareclasses that arise from quadratic lattices. In \textsection3 we define and study the total non-archimedean mass, and show that it can be studied formally as a purely local object. In \textsection4 we introduce the notion of a formal squareclass series to encode how the total non-archimedean mass varies with its determinant squareclass. 
In this language we show that the associated formal squareclass series is a precise linear combination of two formal squareclass series admitting Euler product expansions, and determine how this linear combination depends on the signature and chosen Hasse invariants. One of these terms is independent of our fixed signature and Hasse invariant conditions, while the other oscillates (as a sum or difference) depending on them. We regard the first term as the ``main term'', and the second as the ``error term'' or ``secondary term''. This is particularly interesting because one usually regards local densities via the Siegel-Weil formula as themselves contributing the ``main term'' of the theta series, so the fact that these exhibit further structure is a very interesting feature. We also give a way of associating a (family of) formal Dirichlet series to these formal squareclass series, and prove a similar structure theorem in that context. Once this structural result is established, the main question is to understand the rationality of the Euler factors of each of these series. For this we give a purely local formulation of these Euler factors as a weighted sum over local genera of quadratic forms of with a fixed determinant squareclass. This formulation allows one in principle to use the theory of $\mathfrak{p}$-adic integral invariants for quadratic forms and $\mathfrak{p}$-adic local density formulas to explicitly compute these factors in any case of interest, though such computations rapidly become non-trivial. In \textsection5 we use aspects of the theory of local genus invariants to show that the Euler factors previously considered can be written as rational functions, and when $n$ is odd we show that these Euler factors can be fully understood as we vary our local normalized squareclass (and that this dependence is almost constant). In \textsection6 we establish a precise connection between our total non-archimedean mass and the mass of quadratic lattices defined as a weighted sum over classes. When the class number $h(\mathcal{O}_F)=1$ we show that formal Dirichlet series made from these masses decompose nicely. Finally, in \textsection7 we perform an case-by-case analysis with local genus invariants to exactly compute the Euler factors whose rationality was previously established. At primes $\mathfrak{p}\mid 2$ we use the ``train/compartment'' formalism of Conway \cite[p381]{CS-book} to describe the $2$-adic genus invariants, and the (stated but not proven) mass formula of Conway and Sloane \cite{CS} to compute the local density $\beta_{Q,\mathfrak{p} }(Q)^{-1}$ for primes $\mathfrak{p}\mid2$. Because of this, our computations are valid for any number field $F$ where $p=2$ splits completely. To treat number fields with different splitting behavior at $p=2$ by these methods one must give both a good theory of integral invariants and a formula for computing local densities. There is a rather complicated theory of dyadic integral invariants that has been worked out by O'Meara \cite{OMeara:1955rr, OMeara:1957cr}, but these papers have received little attention in the literature. The theory of explicit dyadic masses for arbitrary quadratic forms has not been fully worked out. We conclude in \textsection8 by using results of Kneser and of \textsection7 to establish an analytic class number formula for $h(\mathcal{O}_K)$, where $K/F$ is a CM extension of number fields and $p=2$ splits completely in $F$. 
We also remark how this formula can be generalized to allow quadratic orders, and show it is compatible with the more traditional class number formula arising from ratios of Dedekind zeta functions. \subsection{Notation} Throughout this paper we let $\mathbb{Z} := \{\cdots, -2, -1, 0, 1, 2, \cdots\}$ denote the integers, $\mathbb{Q}$ the rational numbers, $\mathbb{R}$ the real numbers, $\mathbb{C}$ the complex numbers, and $\mathbb{N}$ the natural numbers (i.e. positive integers). We also denote by $\mathbb{Z}_{\geq 0}$ the non-negative integers. We denote the units (i.e. invertible elements) of a ring $R$ by $R^\times$, and let $\mathrm{char}(K)$ denote the characterisitic of a field $K$. We also let $\mathrm{Mat}_{m \times n}(R)$ denote the ring of $m \times n$ matrices over $R$, and set $\mathrm{GL}_n(R) := \mathrm{Mat}_{n \times n}(R)^\times$. For any object, we let the subscript $\bullet$ refer to an unspecified set of extra parameters for the object. We write $A \equiv B_{(m)}$ to mean that the elements/sets given by $A$ and $B$ are equal in the ring $R/mR$, where the ambient ring $R$ is implicitly known. {\bf Number Fields:} We let $F$ denote a number field with ring of integers $\mathcal{O}_F$, $v$ is a place of $F$, $\mathfrak{p}$ is a prime ideal (or more simply, a {\bf prime}) of $F$, $F_v$ is the completion of $F$ at $v$, $\mathcal{O}_v$ is the ring of integers of $F_v$ (which is $F_v$ itself when $v$ is archimedean). We let $[F:\mathbb{Q}]$ denote the absolute degree of $F$ (giving its dimension as a $\mathbb{Q}$-vectorspace) and let $\Delta_F \in \mathbb{Z}$ denote the absolute discriminant of $F$. For a prime $\mathfrak{p}$, we denote its {\bf residue field} at $\mathfrak{p}$ by $k_\mathfrak{p} := \mathcal{O}_\p/\mathfrak{p}\mathcal{O}_\p$, which is a finite field of size $q := |\mathcal{O}_\p/\mathfrak{p}\mathcal{O}_\p|$. We denote by $\infty$ the archimedean place of the rational numbers $\mathbb{Q}$, and write $v\mid\infty$ (resp. $v\mid\infty_\mathbb{R}$) to denote that $v$ is archimedean (resp. real archimedean). We also identify the two conjugate embeddings $v$ where $F_v = \mathbb{C}$. We denote the set of non-archimedean (finite) places (i.e. primes $\mathfrak{p}$) of $F$ by $\mathbf{f}$. For any finite set $\mathbb{T} \subset \mathbf{f}$ we let $I^{\mathbb{T}}(\mathcal{O}_F)$ denote the set of invertible (integral) ideals of $\mathcal{O}_F$, relatively prime to all $\mathfrak{p}\in\mathbb{T}$. We also adopt the general convention that quantities denoted by Fraktur letters (e.g. $\mathfrak{p}, \mathfrak{a}, \mathfrak{n}, \mathfrak{s}, \mathfrak{v}$, etc.) will be (possibly fractional) ideals of $F$. We denote by $\mathbb{A}_{F, \mathbf{f}}^\times := \prod'_{\mathfrak{p} \in \mathbf{f}} F_\mathfrak{p}^\times$ the non-archimedean ideles of $F$, where the restricted direct product $\prod'$ requires that all but finitely many components lie in $\mathcal{O}_\p^\times$. For convenience we will often write products $\prod_{\mathfrak{p}\in\mathbf{f}}$ more simply as unquantified products $\prod_\mathfrak{p}$, and similarly write $\prod_v$ for a product over all places $v$ of $F$. When $F = \mathbb{Q}$, we define the {\bf conductor} $\mathrm{Cond}(\chi) \in \mathbb{N}$ of a Dirichlet character $\chi: (\mathbb{Z}/m\mathbb{Z})^\times \rightarrow \mathbb{C}^\times$, as the smallest $f \in \mathbb{N}$ so that $\chi$ factors through a character on $(\mathbb{Z}/f\mathbb{Z})^\times$. 
We also say that $D \in \mathbb{Z}$ is a {\bf fundamental discriminant} if $D$ is the absolute discriminant of a quadratic number field. We refer to an extension of number fields $K/F$ of degree $[K:F]=2$ where $F$ is totally real and $K$ is totally complex as a {\bf CM extension}, where CM here is an abbreviation for ``complex multiplication''. {\bf Squareclasses:} For any abelian group $G$ with subgroup $H$, we let $\mathrm{SqCl}(G,H):= G / (H^2)$ denote the group of {\bf $H$-squareclasses of $G$}, and write $\mathrm{SqCl}(G) := \mathrm{SqCl}(G,G)$ for the group of {\bf squareclasses of $G$}. We will frequently refer to the group of {\bf (integral) non-archimedean squareclasses} $\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ where $U_\mathbf{f} := \prod_\mathfrak{p} \mathcal{O}_\p^\times$, and let $S_\mathfrak{p}$ denote the component of $S$ at $\mathfrak{p}$. We define the {\bf valuation at $\mathfrak{p}$} of a non-archimedean squareclass $S$ by the expression $\mathrm{ord}_\mathfrak{p}(S) = \mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) := \mathrm{ord}_\mathfrak{p}(\alpha)$ when $S_\mathfrak{p} = \alpha(\mathcal{O}_\p^\times)^2$ for some $\alpha \in F_\mathfrak{p}^\times$, and we define its {\bf support} $\mathrm{Supp}(S)$ as the set of primes $\mathfrak{p}$ of $F$ where $\mathrm{ord}_\mathfrak{p}(S) \neq 0$. {\bf Basic Quadratic Objects:} For any $n \in \mathbb{N}$ we define an $n$-dimensional {\bf quadratic space} over a field $K$ (assuming that $\mathrm{char}(K) \neq 2$) to be a pair $(V, Q)$ where $V$ is an $n$-dimensional vectorspace (which we assume is equipped with a basis $\mathcal{B}$) over $K$ and $Q$ is a quadratic form on $V$ (relative to $\mathcal{B}$). Given a quadratic form $Q(x_1, \cdots, x_n)$ we define its Hessian and Gram matrices respectively as $H = (h_{ij}) := (\frac{\partial^2 Q}{\partial x_i \partial x_j})$ and Gram matrix $G := \frac{1}{2}H$. We respectively define the Gram and Hessian determinants $\det_G(Q)$ and $\det_H(Q)$ of the quadratic space $(V,Q)$ as the squareclass in $\mathrm{SqCl}(K^\times)$ given by taking the determinant of the Gram and Hessian matrices of $Q$. We say that $(V, Q)$ is {\bf non-degenerate} if the Gram determinant $\det_G(Q) \neq 0$ (which happens iff $\det_H(Q) \neq 0$ since $\mathrm{char}(K)\neq 2$). We say that $L$ is a {\bf quadratic $R$-lattice} for some ring $R$ if $R$ is a subring of $F$, $L$ is a finitely generated $R$-submodule of some quadratic space $(V,Q)$ over $F$, and $L \otimes_R F = V$. Note that a quadratic lattice $L$ inherits the values of its ambient quadratic space, and for any set $\mathbb{T}$ we say that $L$ is {\bf $\mathbb{T}$-valued} if $Q(L) \subset \mathbb{T}$. We say that a quadratic ($\mathcal{O}_F$- or $\mathcal{O}_\p$-)lattice $L$ in {\bf primitive} if $L$ is ($\mathcal{O}_F$- or $\mathcal{O}_\p$-)valued and any scaling $Q \mapsto c \cdot Q$ of the ambient quadratic space for which $L$ is still ($\mathcal{O}_F$- or $\mathcal{O}_\p$-)valued must have $c \in$ ($\mathcal{O}_F$ or $\mathcal{O}_\p$). Given a quadratic $\mathcal{O}_F$-lattice $L$, we denote its localizations $L_v := L \otimes_{\mathcal{O}_F} \mathcal{O}_v$ in the quadratic spaces $(V_v := \otimes_{F} F_v, Q_v)$ over $F_v$. 
We let $\mathfrak{s}(L)$ denote its {\bf scale ideal} (generated locally by the entries of its Gram matrix in a basis for $L_\mathfrak{p}$), $\mathfrak{n}(L)$ its {\bf norm ideal} (generated by its values), its {\bf volume ideal} $\mathfrak{v}(L)$ (generated locally by $\det_G(L_\mathfrak{p})$ relative to a basis for $L_\mathfrak{p}$), and its {\bf norm group} $\mathcal{G}(L)$ (generated by its values). {\bf Decorated Quadratic Objects:} We define $\mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n)$, $\mathbf{Cls}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n)$ and $\mathbf{Cls}^{*,+}(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n)$ respectively to be the set of genera, classes, and proper classes $G$ of primitive $\mathcal{O}_F$-valued rank $n$ quadratic $\mathcal{O}_F$-lattices with fixed signature $\sigma_v(G) = (\vec{\sigma}_\infty)_v$ at all places $v \mid \infty$, fixed Hasse invariants $c_\mathfrak{p}(G) = (\vec c_\mathbb{S})_\mathfrak{p}$ at the finitely many primes $\mathfrak{p}\in\mathbb{S}$, and Hessian determinant $\det_H(G) = S$. We define $\mathbf{Gen}^*_{\mathfrak{p}}(S, \varepsilon; n)$ as the set of local genera $G_\mathfrak{p}$ of primitive $\mathcal{O}_\p$-valued rank $n$ quadratic forms with Hessian determinant $\det_H(G_\mathfrak{p}) = S$ and Hasse invariant $c_\mathfrak{p}(G_\mathfrak{p}) = \varepsilon$. We also let $\mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)$ be the set of tuples $(G_\mathfrak{p})$ of $G_\mathfrak{p} \in \mathbf{Gen}^*_{\mathfrak{p}}(S_\mathfrak{p}, \{\pm1\}; n)$ over all primes $\mathfrak{p}$ where we require that $\det_H(G_\mathfrak{p}) \in \mathcal{O}_\p^\times$ for all but finitely many primes $\mathfrak{p}$, and also $\prod_\mathfrak{p} c_\mathfrak{p}(G_\mathfrak{p}) = \varepsilon_\infty$. (Note that here $c_\mathfrak{p}(G_\mathfrak{p}) = 1$ for all but finitely many $\mathfrak{p}$ since the Hilbert symbol $(x,y)_\mathfrak{p} = 1$ when $\mathfrak{p}\nmid 2$ and $x,y\in \mathcal{O}_\p^\times$.) \section{Facts about local genera of integer-valued quadratic forms} In this section we are interested in defining and understanding the Hessian determinant squareclasses of (non-degenerate) $\mathcal{O}_F$-valued quadratic $\mathcal{O}_F$-lattices. We classify their associated ideals, show locally that when the associated ideal is maximal there is a unique genus of quadratic forms giving rise to this squareclass, and finally establish exactly which squareclasses arise from quadratic lattices. In later sections these observations will allow us to define certain ``normalized'' local densities, and to sensibly parametrize Hessian determinant squareclasses by integral ideals. \begin{defn} Given a quadratic $\mathcal{O}_F$-lattice $L$, we define its {\bf (non-archimedean) Hessian determinant squareclass} $\det_H(L)$ as the squareclass $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ so that $S_\mathfrak{p} = \det_H(Q_\mathfrak{p})$ where $Q_\mathfrak{p}$ is the quadratic form associated with the quadratic lattice $L_\mathfrak{p}$ with respect to some choice of independent generators for $L_\mathfrak{p}$ as a free $\mathcal{O}_\p$-module. Note that $S_\mathfrak{p}$ does not depend on this choice of generators, and that $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p})=0$ for all but finitely many primes $\mathfrak{p}$. 
\end{defn} In order to better understand the Hessian determinant squareclass $\det_H(L)$, we first study the relationships between local and global squareclasses under both rational and integral equivalence. \begin{lem} We can express the integral and rational non-archimedean squareclasses as restricted direct products of local integral and rational squareclasses by $$ \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) = \sideset{}{'} \prod_\mathfrak{p} \SqCl(F_\p^\times, \Op^\times) $$ $$ \SqCl(\A_{F, \mathbf{f}}^\times) = \sideset{}{'} \prod_\mathfrak{p} \SqCl(F_\p^\times) $$ where both restricted direct products $\prod'_p$ are with respect to the family of open subgroups $\SqCl(\Op^\times)$, using the inclusion $\SqCl(\Op^\times) \hookrightarrow \SqCl(F_\p^\times)$ in the second product. \end{lem} \begin{proof} The restricted direct product decompositions with respect to the subgroups $\SqCl(\Op^\times)$ follow from the restricted direct product definition of $\mathbb{A}_{F, \mathbf{f}}^\times$, and the local squareclass group factors $\mathrm{SqCl}(F_\mathfrak{p}^\times, \cdot)$ are obtained by looking at the surjective restriction map to the component at any given prime $\mathfrak{p}$, since $U_\mathbf{f} \cap F_\mathfrak{p}^\times = \mathcal{O}_\p^\times$ and $\mathbb{A}_{F, \mathbf{f}}^\times \cap F_\mathfrak{p}^\times = F_\mathfrak{p}^\times$. \end{proof} We notice that passing from integral to rational equivalence gives {\bf rational reduction maps}, denoted by $\rho_{*}$, giving the commutative diagram \begin{equation} \label{Eq:Rational_reduction} \begin{tikzpicture} [baseline=(current bounding box.center)] [description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] { \SqCl(F^\times, \OF^\times) & \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) \\ \SqCl(F^\times) & \SqCl(\A_{F, \mathbf{f}}^\times) \\ }; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$\Delta$} (m-1-2) (m-2-1) edge node[auto] {$\Delta$} (m-2-2); \path[->>] (m-1-2) edge node[auto] {$ \rho_\mathbf{f}$} (m-2-2); \path[->>] (m-1-1) edge node[auto] {$ \rho$} (m-2-1); \end{tikzpicture} \end{equation} where $\Delta$ denotes the diagonal map $x \mapsto (x, x, \cdots)$. We can also realize the non-archimedean rational reduction map $\rho_\mathbf{f}$ as the product $\rho_\mathbf{f} = \prod_\mathfrak{p} \rho_\mathfrak{p}$ of the local rational reduction maps $\rho_\mathfrak{p}: \SqCl(F_\p^\times, \Op^\times) \twoheadrightarrow \SqCl(F_\p^\times)$. \begin{defn} We say that a non-archimedean integral squareclass $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is {\bf globally rational} if its rational reduction $\rho_\mathbf{f}(S)$ is in the image of $\SqCl(F^\times)$ in (\ref{Eq:Rational_reduction}). \end{defn} We begin by studying the possible valuations attained by the Hessian determinant squareclasses, which we phrase in the language of ideals. \begin{defn} For any non-archimedean integral squareclass $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ we define its {\bf associated (valuation) ideal} $\mathfrak{I}(S)$ by $$ \mathfrak{I}(S) := \prod_\mathfrak{p} \mathfrak{p}^{\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p})}. $$ This product is finite since $S_\mathfrak{p} \subseteq \mathcal{O}_\p^\times$ (i.e. $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) = 0$) for all but finitely many primes $\mathfrak{p}$. \end{defn} \begin{lem} \label{Lem:Hessian_det_ideals} Suppose that $L$ is a rank $n$ $\mathcal{O}_F$-valued quadratic $\mathcal{O}_F$-lattice.
Then the associated ideal $\mathfrak{I}(\det_H(L)) \subseteq \mathcal{O}_F$ and for each $n$, the sum $\mathfrak{h}_n$ of all such ideals is \begin{align*} \mathfrak{h}_n := \sum \biggl\{ \textstyle{\prod_\mathfrak{p} \det_H}(L_\mathfrak{p}) \mathcal{O}_\p \biggm\vert \begin{aligned} &\text{$L_\mathfrak{p}$ is an $\mathcal{O}_\p$-valued } \\ &\text{rank $n$ quadratic form} \end{aligned} \biggr\} = \begin{cases} \mathcal{O}_F & \quad \text{if $n$ is even,} \\ 2\mathcal{O}_F & \quad \text{if $n$ is odd.} \\ \end{cases} \end{align*} \end{lem} \begin{proof} The localization $L_\mathfrak{p}:= L\otimes_{\mathcal{O}_F} \mathcal{O}_\p$ of $L$ is a free $\mathcal{O}_\p$-module, so by taking an $\mathcal{O}_\p$-basis of $L_\mathfrak{p}$ we can represent it by an $\mathcal{O}_\p$-valued quadratic form whose Hessian matrix is in $\mathrm{Mat}_{n\times n}(\mathcal{O}_\p)$ with even diagonal, hence $\det_H(L_\mathfrak{p}) \in \mathcal{O}_\p$. The Leibniz formula $$ \det(A) = \sum_{\sigma\in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^n a_{i \sigma(i)} $$ for a square matrix $A = (a_{ij})$ with even diagonal tells us that the terms from permutations $\sigma$ with no fixed points determine how $\mathfrak{h}_n$ sits between $\mathcal{O}_F$ and $2\mathcal{O}_F$ (every term coming from a permutation with a fixed point contains an even diagonal entry, and so lies in $2\mathcal{O}_\p$). When $A$ is symmetric the terms from $\sigma$ and $\sigma^{-1}$ are the same, hence they contribute twice when $\sigma \neq \sigma^{-1}$. Since a fixed-point-free involution $\sigma \in S_n$ exists $\iff$ $n$ is even, we see that $\mathfrak{h}_n = 2\mathcal{O}_F$ when $n$ is odd. If $n$ is even then we can choose the involution $\sigma(i) := n-i + 1$ and define the symmetric matrix $A = (a_{ij}) := (\delta_{j\, \sigma(i)})$ (the permutation matrix of $\sigma$, which has even diagonal since $\sigma$ is fixed-point-free, and is the Hessian matrix of the $\mathcal{O}_\p$-valued form $\sum_{i < \sigma(i)} x_i x_{\sigma(i)}$), which has $\det(A) \in \{\pm1\}$, so $\mathfrak{h}_n = \mathcal{O}_F$. \end{proof} \begin{thm}[Uniqueness of Normalized Local Genera] \label{Thm:Uniqueness_for_normalized_QF} Suppose that $n \in \mathbb{N}$ and that $S_\mathfrak{p} \in \mathrm{SqCl}(F_\mathfrak{p}^\times, \mathcal{O}_\p^\times)$ with $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) = \mathrm{ord}_\mathfrak{p}(\mathfrak{h}_n)$. Then there exists at most one local genus $G_\mathfrak{p}$ of primitive $\mathcal{O}_\p$-valued quadratic forms in $n$ variables with (Hessian) determinant $\det_H(G_\mathfrak{p}) = S_\mathfrak{p}$. \end{thm} \begin{proof} The condition that $G_\mathfrak{p}$ is $\mathcal{O}_\p$-valued is equivalent to the local norm ideal $\mathfrak{n}(G_\mathfrak{p})$ being integral, and the ideal generated by the Hessian determinant squareclass is the volume ideal $\mathfrak{v}(2G_\mathfrak{p})$ (defined in \cite[\textsection82:9, p221]{OM}). The volume ideal $\mathfrak{v}(L)$ can be expressed in terms of any Jordan splitting $L = \bigoplus_{i \in \mathbb{Z}} L_i$ as $$ \mathfrak{v}(L) = \mathfrak{p}^{ \sum_{i \in \mathbb{Z}} i \cdot \mathrm{rank}_{\mathcal{O}_\p}(L_i) }. $$ We also have that $$ \mathfrak{s}(L) = \mathfrak{p}^{ \min\{i \in \mathbb{Z} \,\mid\, \mathrm{rank}_{\mathcal{O}_\p}(L_i) \neq 0\} } $$ and the scale and norm ideals are related by the containments $\mathfrak{s}(G_\mathfrak{p}) \supseteq \mathfrak{n}(G_\mathfrak{p}) \supseteq 2\mathfrak{s}(G_\mathfrak{p})$. When $\mathfrak{p}\nmid 2$ this shows that $\mathfrak{s}(G_\mathfrak{p}) = \mathfrak{n}(G_\mathfrak{p}) = \mathcal{O}_\p$ and the largest possible ideal $\mathfrak{v}(2G_\mathfrak{p})$ is $\mathcal{O}_\p$, which is attained by the unimodular forms.
When $\mathfrak{p}\nmid 2$ unimodular forms exist for every $n \in \mathbb{N}$, and they are exactly characterized up to isomorphism by their determinant squareclass \cite[\textsection92:2, p247]{OM}. When $\mathfrak{p}\mid 2$ the condition $2\mathfrak{s}(G_\mathfrak{p}) \subseteq \mathfrak{n}(G_\mathfrak{p}) = \mathcal{O}_\p$ implies that $\mathfrak{s}(2G_\mathfrak{p}) \subseteq \mathcal{O}_\p$ and so $\mathfrak{v}(2G_\mathfrak{p})\subseteq \mathcal{O}_\p$. If $\mathfrak{v}(2G_\mathfrak{p}) = \mathcal{O}_\p$ then $2G_\mathfrak{p}$ is unimodular and satisfies $\mathfrak{n}(2G_\mathfrak{p}) \subsetneq \mathfrak{s}(2G_\mathfrak{p})$, so by \cite[\textsection93:15, p 258]{OM} we see that $2G_\mathfrak{p}$ is an orthogonal direct sum of binary forms, and so $n$ is even. If $n$ is odd then the largest volume ideal is given by $\mathfrak{v}(2G_\mathfrak{p}) = 2\mathcal{O}_\p$ because we can take a rank $n-1$ unimodular form that must be a direct sum of binary forms of norm ideal $2\mathcal{O}_\p$, and then since $\mathfrak{n}(2G_\mathfrak{p}) = 2\mathcal{O}_\p$ the remaining unary form $\alpha x^2$ must have $\mathfrak{s}(\alpha x^2) = \mathfrak{n}(\alpha x^2) \subseteq 2\mathcal{O}_\p$, so the volume ideal is largest when $\mathfrak{v}(2G_\mathfrak{p}) = \mathfrak{s}(\alpha x^2) = 2\mathcal{O}_\p$ (which can be attained). We now examine each of these two possibilities. \centerline{\rule{5in}{0.1mm}} {\bf Case 1 ($n$ even):} When $n$ is even, we define $$ \mathcal{L}_{\mathfrak{p}; \text{ even}} := \mathcal{L}_{\mathfrak{p}; n} := \left\{ \begin{tabular}{c} Local genera of \\ non-degenerate rank $n$ \\ quadratic $\mathcal{O}_\p$-lattices $L := L_0$ \\ \end{tabular} \middle| \begin{tabular}{c} $L_0$ is unimodular \\ with $\mathfrak{n}(L_0) = 2\mathcal{O}_\p$ \end{tabular} \right\}. $$ \begin{sublem} Suppose that $n\in \mathbb{N}$ is even and $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$. Then $$ \textstyle \det_H(L) = \det_H(L') \qquad \Longrightarrow \qquad L \cong L'. $$ \end{sublem} \begin{proof}[Proof of Sublemma] We first analyze the Hasse invariants of the unimodular non-diagonalizable binary quadratic lattices with $\mathfrak{n}(L) = 2\mathcal{O}_\p$. (From \cite[\textsection93:15, p258]{OM} and $\mathfrak{n}(L_0) = 2\mathcal{O}_\p$ we know that $L_0$ must be a sum of lattices of this form.) We know from \cite[\textsection93:11, pp255-6]{OM} that the only such lattices (up to equivalence) are $A(0,0)$ and $A(2, 2\rho)$ where $A(\alpha, \beta) := \alpha x^2 + 2xy + \beta y^2$, $\rho \in \mathcal{O}_\p^\times$ is in the squareclass defined by the relation $\Omega =: 1 + 4\rho$, and $\Omega\in \SqCl(\Op^\times)$ is the unique squareclass with quadratic defect $4\mathcal{O}_\p$. {\bf Case a)} Suppose that $Q = A(0,0) = 2xy$ over $\mathcal{O}_\p$. Then $Q \sim_{F_\mathfrak{p}} x^2 - y^2$, giving $\det_H(Q) = (-1)(\mathcal{O}_\p^\times)^2$ and $c_\mathfrak{p}(Q) = (1,-1)_\mathfrak{p} = 1$. {\bf Case b)} Suppose that $Q = A(2, 2\rho) = 2x^2 + 2xy + 2\rho y^2$ over $\mathcal{O}_\p$. Then $Q \sim_{F_\mathfrak{p}} 2x^2 + (2\rho - \frac{1}{2})y^2$, giving $\det_H(Q) = (4\rho - 1)(\mathcal{O}_\p^\times)^2 = (\Omega - 2)(\mathcal{O}_\p^\times)^2$ and $c_\mathfrak{p}(Q) = (2, 2\rho - \frac{1}{2})_\mathfrak{p} = (2, 2\cdot (4\rho - 1))_\mathfrak{p} = \cancel{(2,2)_\mathfrak{p}} \cdot (2, \Omega - 2)_\mathfrak{p}$. We note here that the two determinant squareclasses $\det_H(Q)$ are the two possible lifts of the (mod 4) squareclass $-1 \in \mathrm{SqCl}((\mathcal{O}_\p/4\mathcal{O}_\p)^\times)$.
From \cite[\textsection93:18(ii), p260]{OM} (and since there $\mathrm{ord}_\mathfrak{p}(a) + \mathrm{ord}_\mathfrak{p}(b) = \mathrm{ord}_\mathfrak{p}(2) + \mathrm{ord}_\mathfrak{p}(2)$ is even) we see that any $L \in \mathcal{L}_{\mathfrak{p}; \text{ even}}$ can be written as a direct sum of copies of $A(0,0)$ and at most one $A(2, 2\rho)$, where the presence or absence of $A(2, 2\rho)$ is determined by $\det_H(L)$. Therefore the lattice $L$ is completely determined by $\det_H(L)$ and $n$. \end{proof} \centerline{\rule{5in}{0.1mm}} {\bf Case 2 ($n$ odd):} When $n \in \mathbb{N}$ is odd, we let $$ \mathcal{L}_{\mathfrak{p}; n} := \left\{ \begin{tabular}{c} Local genera of \\ non-degenerate rank $n$ \\ quadratic $\mathcal{O}_\p$-lattices \\ $L := L_0 \oplus L_1$ \\ \end{tabular} \middle| \begin{tabular}{c} $L_0$ is unimodular of rank $n-1$, \\ $L_1 =: 2u x^2$ is $2\mathcal{O}_\p$-modular, \\ and $\mathfrak{n}(L_0) = \mathfrak{n}(L_1) = 2\mathcal{O}_\p$ \\ \end{tabular} \right\}. $$ Notice that if $G' \in \mathcal{L}_{\mathfrak{p}; n}$ then $G'$ is $2\mathcal{O}_\p$-valued and so $\frac{1}{2}G'$ is a primitive $\mathcal{O}_\p$-valued genus with normalized Hessian determinant $\det_H(\frac{1}{2}G')$. From above, we also see that $\mathcal{L}_{\mathfrak{p}; n}$ contains all local genera $2G_\mathfrak{p}$ where $G_\mathfrak{p}$ is a primitive $\mathcal{O}_\p$-valued local genus and $\det_H(G_\mathfrak{p})$ is normalized. This shows that if $G_\mathfrak{p}$ is a primitive $\mathcal{O}_\p$-valued local genus then $$ \text{$\textstyle\det_H(G_\mathfrak{p})$ is normalized $\Longleftrightarrow 2G_\mathfrak{p} \in \mathcal{L}_{\mathfrak{p}; n}$.} $$ \begin{sublem} Suppose that $n\in \mathbb{N}$ is odd and $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$. Then $$ \textstyle \det_H(L) = \det_H(L') \qquad \Longrightarrow \qquad V \cong V', $$ where $V$ and $V'$ are the quadratic spaces associated to $L$ and $L'$ respectively. \end{sublem} \begin{proof}[Proof of Sublemma] When $n$ is odd, the decomposition $V = V_0 \oplus V_1$ has local invariants $$ \textstyle \det_H(V) = \det_H(V_0) \cdot \det_H(V_1), $$ $$ \textstyle c_\mathfrak{p}(V) = c_\mathfrak{p}(V_0) \cdot c_\mathfrak{p}(V_1) \cdot (\det_G(V_0), \det_G(V_1))_\mathfrak{p}. $$ We know that $\det_H(V_1) = u \in \SqCl(\Op^\times)$ and $c_\mathfrak{p}(V_1) = 1$ since $V_1 = 2u x^2$, and our previous computations show that $\det_H(V_0) = (-1)^\frac{n-1}{2} \Omega^\epsilon$ with $c_\mathfrak{p}(V_0)$ uniquely determined by $\epsilon \in \{0,1\}$. This gives the local invariants of $V$ as $$ \textstyle \det_H(V) = (-1)^\frac{n-1}{2} \Omega^\epsilon u \qquad \text{ and } \qquad c_\mathfrak{p}(V) = c_\mathfrak{p}(V_0) \cdot ((-1)^\frac{n-1}{2} \Omega^\epsilon, 2u)_\mathfrak{p}. $$ Now suppose that $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$ with $n$ odd, and that their associated quadratic spaces have direct sum decompositions as $V = V_0 \oplus V_1$ and $V' = V'_0 \oplus V'_1$, where $V_i$ and $V'_i$ are the quadratic spaces associated to $L_i$ and $L'_i$ respectively. If $\det_H(V) = \det_H(V')$ then, since there are exactly two squareclasses in $\SqCl(\Op^\times)$ with any given reduction in $\mathrm{SqCl}((\mathcal{O}_\p/4\mathcal{O}_\p)^\times)$, we have that either $\epsilon' = \epsilon$ and $u' = u$ (giving $V' \cong V$), or \begin{equation} \epsilon' \neq \epsilon \quad\text{ and }\quad u' = u \cdot \Omega. \end{equation} In the second case we check that $c_\mathfrak{p}(V') = c_\mathfrak{p}(V)$ since by \cite[\textsection63:11a, p165]{OM} we have that $$ ((-1)^\frac{n-1}{2} \Omega, 2u)_\mathfrak{p} = -((-1)^\frac{n-1}{2}, 2u)_\mathfrak{p} = ((-1)^\frac{n-1}{2}, 2u\cdot \Omega)_\mathfrak{p}, $$ and so again $V'\cong V$. This shows that if $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$ with $n$ odd and $\det_H(L) = \det_H(L')$, then their associated quadratic spaces are equivalent. \end{proof} \begin{sublem} Suppose that $n\in \mathbb{N}$ is odd and $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$. Then $$ \textstyle \det_H(L) = \det_H(L') \qquad \Longrightarrow \qquad L \cong L'. $$ \end{sublem} \begin{proof}[Proof of Sublemma] To finish the proof, we compute O'Meara's ideal $\mathfrak{f}_0(L)$ (which is the ideal $\mathfrak{f}$ associated to the unimodular lattice $L_0$) and the weight ideals $\mathfrak{w}_i(L)$, then use the dyadic integral equivalence conditions in \cite[\textsection93:28, pp267-8]{OM}. Since $u_i = \mathrm{ord}_\mathfrak{p}(\mathfrak{s}(L_i)) = \mathrm{ord}_\mathfrak{p}(2)$ for $i \in \{0,1\}$ (so $u_0 + u_1 \in 2\mathbb{Z}$), the defining equation for $\mathfrak{f}_0$ on \cite[p264]{OM} becomes $$ \mathcal{O}_\p^2 \cdot \mathfrak{f}_0 = \sum_{\alpha \in 2\mathcal{O}_\p, \beta \in 2u\mathcal{O}_\p^2} \mathfrak{d}(\alpha\beta) + \underbrace{2\mathfrak{p}^{\frac{2 \mathrm{ord}_\mathfrak{p}(2)}{2} + 0}}_{=\, 4\mathcal{O}_\p}. $$ Since the quadratic defect satisfies $\mathfrak{d}(\alpha^2 \cdot \xi) = \alpha^2\mathfrak{d}(\xi)$ (\cite[p160]{OM}), we see that the sum above is $4\sum_{\gamma \in \mathcal{O}_\p^\times} \mathfrak{d}(\gamma) \subseteq 4\mathcal{O}_\p$, so $\mathfrak{f}_0 = 4\mathcal{O}_\p$. We also compute that $\mathfrak{w}_0(L) = 2\mathcal{O}_\p$ and $\mathfrak{w}_1(L) = \mathcal{O}_\p$ from the known norms and scales. With these, we see that the respective conditions in \cite[\textsection93:28, pp267-8]{OM} for $L, L' \in \mathcal{L}_{\mathfrak{p}; n}$ to be equivalent are: \begin{enumerate} \item $\det_H(L_0) \equiv \det_H(L'_0) \pmod {4\mathcal{O}_\p}$, \item There is an isometry $V_0 \rightarrow V$, \item no condition. \end{enumerate} When $\det_H(L) = \det_H(L')$ we have that $V \cong V'$ so the second condition certainly holds, and the first condition follows from our previous analysis since $\Omega \equiv 1 \pmod {4\mathcal{O}_\p}$. This shows that $L\cong L'$, proving the sublemma. \end{proof} \centerline{\rule{5in}{0.1mm}} These sublemmas together show that there is at most one local genus of lattices of any given normalized local Hessian determinant. \end{proof} \begin{lem}[Classifying Hessian squareclasses] \label{Lem:Globally_rational} Given $n \in\mathbb{N}$, a non-archimedean squareclass $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is the Hessian determinant squareclass of some non-degenerate rank $n$ (possibly not $\mathcal{O}_F$-valued) quadratic $\mathcal{O}_F$-lattice $L$ iff $S$ is globally rational. \end{lem} \begin{proof} If $L \subset (V,Q)$ is a non-degenerate quadratic $\mathcal{O}_F$-lattice then its Hessian determinant $\det_H(L) =: S = \prod_\mathfrak{p} S_\mathfrak{p}$ where $S_\mathfrak{p} := \det_H(Q'_\mathfrak{p})$ is the Hessian determinant of a quadratic form $Q'_\mathfrak{p}$ obtained by choosing $n$ independent generators for $L_\mathfrak{p}$ as an $\mathcal{O}_\p$-module (though $S_\mathfrak{p}$ doesn't depend on this choice).
Since these generators for $L_\mathfrak{p}$ also give a basis for the local quadratic space $(V_\mathfrak{p}, Q_\mathfrak{p})$, we see that $S_\mathfrak{p} = \det_H(Q_\mathfrak{p}) \in \SqCl(F_\p^\times)$ and so $S_\mathfrak{p} = (\det_H(Q))_\mathfrak{p}$ where $\det_H(Q) \in \SqCl(F^\times)$, which shows that $\det_H(L)$ is always globally rational. Now suppose that $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is globally rational. Then $S$ must differ from the squareclass $\alpha(\mathcal{O}_\p^\times)^2$ for some $\alpha\in F^\times$ at only finitely many primes, and we denote the set of such primes by $\mathbb{T}$. At each $\mathfrak{p}\in\mathbb{T}$ we have that $S_\mathfrak{p} / \alpha \in (F_\mathfrak{p}^\times)^2$, so we can find some $\beta_\mathfrak{p} \in F_\mathfrak{p}^\times$ so that $S_\mathfrak{p} / \alpha = (\beta_\mathfrak{p})^2$. By the Hasse principle for quadratic spaces \cite[\textsection63:23, p171]{OM} we can find an $n$-dimensional quadratic space $(V, Q)$ over $F$ with $\det_H(Q) = \alpha$. We also have that $\alpha =\det_H(L)$ for the standard free lattice $L := (\mathcal{O}_F)^n$ (all relative to a given fixed basis for $V$). By choosing for each $\mathfrak{p}\in \mathbb{T}$ some $\lambda_\mathfrak{p} \in \mathrm{GL}_n(F_\mathfrak{p})$ with $\det(\lambda_\mathfrak{p}) = \beta_\mathfrak{p}$, we see that the local lattices $L'_\mathfrak{p} := \lambda_\mathfrak{p} L_\mathfrak{p}$ have $\det_H(L'_\mathfrak{p}) = S_\mathfrak{p}$ and these can be assembled (together with $L_\mathfrak{p}$ for all $\mathfrak{p}\notin\mathbb{T}$) to a quadratic lattice $L' \subset (V, Q)$ with $\det_H(L') = S$. This shows that every globally rational non-archimedean squareclass arises as $\det_H(L)$. \end{proof} \begin{rem} \label{Rem:Local_genus_existence_when_2_splits_completely} From the proof of Theorem \ref{Thm:Uniqueness_for_normalized_QF} we see that a local genus $G_\mathfrak{p}$ with $\det_H(G_\mathfrak{p}) = S_\mathfrak{p}$ exists for any squareclass $S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)$ with $\mathfrak{I}(S_\mathfrak{p}) \subseteq \mathfrak{h}_n \mathcal{O}_\p$, both when $n$ is odd and when $n$ is even and $\mathfrak{p}\nmid 2$. When $n$ is even and $F_\mathfrak{p} = \mathbb{Q}_2$, we have existence iff either $\mathfrak{I}(S_\mathfrak{p}) \subseteq \mathcal{O}_\p$ and $S_\mathfrak{p} \equiv (-1)^\frac{n}{2} \pmod{4\mathcal{O}_\p}$, or $\mathfrak{I}(S_\mathfrak{p}) \subseteq 4\mathcal{O}_\p$. \end{rem} \section{The total non-archimedean mass} In this section we define our main object of study, the ``primitive total non-archimedean mass'' $T^*$ of primitive integer-valued quadratic lattices of rank $n$ with given signature, Hasse invariants (at finitely many primes), and Hessian determinant squareclass over a number field $F$. We show that this primitive total non-archimedean mass can be naturally considered as a purely local object $M^*$, and we examine its convergence properties in this local context. \begin{defn} \label{Def:Signature_vector} Given a number field $F$ and some $n \in \mathbb{N}$ we let $\vec{\sigma}_\infty$ denote a {\bf vector of signatures} for $F$ of rank $n$, by which we mean a vector $\vec{\sigma}_\infty := ((\sigma_{v, +}, \sigma_{v, -}))_{\{v \mid \infty_\mathbb{R}\}} \in \prod_{v \mid \infty_\mathbb{R}} (\mathbb{Z}_{\geq 0} \times \mathbb{Z}_{\geq 0})$ where for each real archimedean place $v$ of $F$ we have $\sigma_{v, +} + \sigma_{v, -}= n$.
\end{defn} \begin{rem} Notice that one obtains a vector of signatures of rank $n$ naturally from any $n$-dimensional non-degenerate quadratic space $(V, Q)$ over $F$, and that this vector determines the product of (non-degenerate) quadratic spaces $\prod_{v\mid\infty} (V_v, Q_v)$ up to isomorphism. \end{rem} \begin{defn} \label{Def:Hasse_vector} Let $\mathbb{S}$ be a finite set of non-archimedean places of a number field $F$ and define a {\bf vector of Hasse invariants}, or {\bf Hasse vector}, as a vector $\vec c_\mathbb{S} \in \prod_{\mathfrak{p}\in \mathbb{S}} \{\pm 1\}$. \end{defn} \begin{defn} \label{Def:M_rational} For a fixed number field $F$, $n \in \mathbb{N}$, vectors $\vec{\sigma}_\infty$ and $\vec c_\mathbb{S}$ of signatures of rank $n$ and Hasse invariants, and some globally rational non-archimedean squareclass $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$, we define the {\bf primitive total rational non-archimedean mass} of (Hessian) determinant $S$ by $$ T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S) := \sum_{G \in \mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n)} \prod_\mathfrak{p} \beta_{G, \mathfrak{p}}(G)^{-1}. $$ \end{defn} To study the global quantity $T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S)$ we notice that this can be recovered from a more local quantity that is a little easier to study, though there are some convergence problems for arbitrary products when $n=2$. Notice that in the following formulation the signature vector $\vec{\sigma}_\infty$ above is replaced by a single sign $\varepsilon_\infty \in \{\pm1\}$, which will help to clarify the dependence of the total mass on the choice of signatures. \begin{defn} \label{Def:M_idelic} For a fixed number field $F$, $n \in \mathbb{N}$, some $\varepsilon_\infty \in \{\pm1\}$, a vector $\vec c_\mathbb{S}$ of Hasse invariants, and some non-archimedean squareclass $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$, we define the {\bf primitive total adelic non-archimedean mass} of (Hessian) determinant $S$ by $$ M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) := \sum_{G \in \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)} \prod_\mathfrak{p} \beta_{G, \mathfrak{p}}(G)^{-1}, $$ when the infinite product converges, and zero otherwise. Notice that by Theorem \ref{Thm:Uniqueness_for_normalized_QF} the local genera $G_\mathfrak{p}$ are uniquely determined for $\mathfrak{p} \notin \mathrm{Supp}(S) \cup \{\mathfrak{p}\mid2\}$, so the sum over $\mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)$ is finite. \end{defn} We now address some convergence issues for $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$, and see how it recovers $T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S)$ as a special case. \begin{lem} The quantity $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$ converges for all $S\in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ when $n\neq 2$. If $n=2$ and $S$ agrees with (the diagonal image of) some element of $\SqCl(F^\times)$ at all but finitely many places $\mathfrak{p}$, then $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$ also converges. \end{lem} \begin{proof} Suppose that $S\in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$. If $\mathfrak{I}(S) \nsubseteq \mathfrak{h}_n$ then the sum over $G$ is empty, so $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) = 0$ and there is nothing to check.
If $\mathfrak{I}(S) \subseteq \mathfrak{h}_n$ then by Theorem \ref{Thm:Uniqueness_for_normalized_QF} we know that for every $\mathfrak{p} \notin \mathbb{T} := \mathrm{Supp}(S) \cup \{\mathfrak{p}\mid 2\}$ there is a unique (unimodular) local genus $G_\mathfrak{p}$ over $\mathcal{O}_\p$ with $\det_H(G_\mathfrak{p}) = S_\mathfrak{p}$, and the infinite products converge iff the product $\prod_{\mathfrak{p}\notin\mathbb{T}} \beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})$ converges. For $\mathfrak{p} \notin \mathbb{T}$ we have by Hensel's Lemma that $\beta_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p}) = \frac{|SO_{Q_\mathfrak{p}}(k_\mathfrak{p})|}{q^\frac{n(n-1)}{2}}$, and the sizes of orthogonal groups (of types $B$ and $D$) over $k_\mathfrak{p}$ are given in \cite[p72]{Carter}. This leads to the formulas $$ \beta_{Q_\mathfrak{p}, \mathfrak{p}}(Q_\mathfrak{p}) = \begin{cases} (1-\frac{1}{q^2})(1-\frac{1}{q^4})\cdots (1-\frac{1}{q^{2r}}) & \text{if $n = 2r+1$ is odd,} \\ (1-\frac{1}{q^2})(1-\frac{1}{q^4})\cdots (1-\frac{1}{q^{2r-2}})(1\pm\frac{1}{q^{r}}) & \text{if $n = 2r$ is even,} \\ \end{cases} $$ where the choice of sign when $n$ is even is given by $\pm = - \chi_\mathfrak{p}(S_\mathfrak{p})$ where $\chi_\mathfrak{p}$ is the non-trivial quadratic character of $\mathrm{SqCl}(k_\mathfrak{p}^\times)$. This tells us that the generic product $\prod_{\mathfrak{p}\notin\mathbb{T}} \beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})$ is given by $\zeta_F^\mathbb{T}(2) \zeta_F^\mathbb{T}(4) \cdots \zeta_F^\mathbb{T}(n-1)$ when $n\geq 3$ is odd, and the bounds $$ 1 \leq \prod_{\mathfrak{p}\notin\mathbb{T}} \beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p}) \leq \zeta_F^\mathbb{T}(2) \zeta_F^\mathbb{T}(4) \cdots \zeta_F^\mathbb{T}(n-2) \zeta_F^\mathbb{T}(\tfrac{n}{2}) / \zeta_F^\mathbb{T}(n) $$ when $n$ is even. This ensures convergence for arbitrary $S$ when $n\neq 2$ (as $n=1$ has generic factor $1$), but shows that when $n=2$ we can have divergent products if almost all signs are $+$. However if $n=2$ and $S$ agrees with the squareclass of some fixed $a \in F^\times$ at all $\mathfrak{p}\notin\mathbb{T}$ (after possibly enlarging $\mathbb{T}$), then we see that $\prod_{\mathfrak{p}\notin\mathbb{T}} \beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p}) = L_F^\mathbb{T}(1, \chi)$ where $\chi$ is the finite order Hecke character of $F$ given by the Legendre symbol $\mathfrak{p} \mapsto \leg{a}{\mathfrak{p}}$ for all $\mathfrak{p}\nmid 2$, and this $L$-series converges. \end{proof} \begin{lem} \label{Lem:T_is_M} Suppose the non-archimedean squareclass $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is globally rational. Then $$ T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S) = M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) $$ where $\varepsilon_\infty := \prod_{v\mid\infty} c_v((\vec{\sigma}_\infty)_v)$. \end{lem} \begin{proof} We will prove this by establishing a bijection $\mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n) \overset{\sim}{\rightarrow} \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)$ when $S$ is globally rational, and use the identification of (local or global) genera with the orthogonal group orbits of lattices in a (local or global) quadratic space to describe them. (See \cite[\textsection1.2 and \textsection4.5]{Ha-AWS} for more details.)
We have a localization map $\lambda: L \mapsto (L_\mathfrak{p})_\mathbf{f}$ that takes a genus of lattices $L$ to a non-archimedean tuple of genera of lattices $L_\mathfrak{p}$ in their respective localized quadratic spaces. Notice that $\det_H(L) = (\det_H(L_\mathfrak{p}))_\mathbf{f}$, $c_\mathfrak{p}(V) = c_\mathfrak{p}(V_\mathfrak{p})$, and $\mathrm{rank}_{\mathcal{O}_F}(L) = \mathrm{rank}_{\mathcal{O}_\p}(L_\mathfrak{p}) = n$ for all primes $\mathfrak{p}$, and the product formula for Hasse invariants gives $\prod_\mathfrak{p} c_\mathfrak{p}(V) = \prod_{v\mid\infty} c_v(V)$. Therefore by restricting to primitive and $\mathcal{O}_F$-valued genera (which are local properties of $L$), we obtain a localization map $\lambda: \mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n) \rightarrow \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)$. To see that the map is injective, notice that if $G_\mathbf{f} := \lambda(G)$ then by the Hasse principle for quadratic spaces there is a unique quadratic space $(V,Q)$ localizing to the quadratic spaces $(V_\mathfrak{p},Q_\mathfrak{p})$ of $G_\mathbf{f}$, which is the quadratic space of $L$ (up to global isomorphism). Also, because global lattices are uniquely determined by their localizations, we see that $L$ is also determined (up to equivalence), so $\lambda$ is injective. To see that $\lambda$ is surjective when $S$ is globally rational, first notice that by Lemma \ref{Lem:Globally_rational} any $G \in \mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n)$ must have $S$ globally rational, so the condition is necessary. However when $S$ is globally rational, then by the Hasse principle \cite[\textsection72:1, p203]{OM} any $G_\mathbf{f} \in \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)$ will have a unique quadratic space $(V,Q)$ over $F$ that localizes to the quadratic spaces of $(G_\mathbf{f})_\mathfrak{p}$ at all primes $\mathfrak{p}$, and then by the local-global principle for lattices \cite[\textsection81:14, p218]{OM} these $L_\mathfrak{p}$ can be assembled to some $\mathcal{O}_F$-lattice on $V$ since $\det_H(G_\mathbf{f}) \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ implies that $L_\mathfrak{p} = (\mathcal{O}_\p)^n$ for almost all $\mathfrak{p}$. \end{proof} \section{The Structure of certain Formal Series} In this section we use the language of formal series to study the primitive total adelic non-archimedean mass $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$ locally, and show that certain formal series associated to the total non-archimedean mass (as the determinant squareclass varies) naturally decompose (along squareclasses of squareclasses) as a linear combination of two series, each of which admits an Euler product. In Section \ref{Sec:Binary_Forms} we will compute these Euler factors explicitly when $n=2$. Given a function $X_{\bullet}:\mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f}) \rightarrow \mathbb{C}$, we define the {\bf formal non-archimedean squareclass series} associated to $X_{\bullet}$ as a formal sum of the form $$ \mathcal{F}_{X, \bullet} := \sum_{S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})} \frac{X_{\bullet}(S)}{S} $$ where the formal symbols $S$ run over idelic squareclasses $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$.
If $X_{\bullet}(S)$ is a {\bf multiplicative function} on $\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ (meaning that $X_{\bullet}(S) = \prod_\mathfrak{p} X_{\mathfrak{p}, \bullet}(S_\mathfrak{p})$ for some functions $X_{\mathfrak{p}, \bullet}$ on $\SqCl(F_\p^\times, \Op^\times)$) that is trivial on the local squareclasses $\SqCl(\Op^\times)$ for all but finitely many primes $\mathfrak{p}$ (which is required for the multiplicativity to make sense formally), then we can express it as local product of the form $$ \sum_{S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})} \frac{X_{\bullet}(S)}{S} = \prod_\mathfrak{p} \underbrace{ \sum_{S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)} \frac{X_{\mathfrak{p}, \bullet}(S_\mathfrak{p})}{S_\mathfrak{p}} }_\text{Euler factor at $\mathfrak{p}$} $$ where $X_{\mathfrak{p}, \bullet}(S_\mathfrak{p})$ is defined by the natural inclusion map $\SqCl(F_\p^\times, \Op^\times) \hookrightarrow \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$. We say that such a product over primes is an {\bf Euler product}, and refer to the sum for any given prime $\mathfrak{p}$ as the {\bf Euler factor at $\mathfrak{p}$}. For any finite set of primes $\mathbb{S}$, we define the {\bf partial (non-archimedean) formal squareclass series} away from $\mathbb{S}$ by \begin{align} \mathcal{F}^\mathbb{S}_{X, \bullet} & := \[ \prod_{\mathfrak{p} \in\mathbb{S}} \frac{1}{(\mathcal{O}_\p^\times)^2} \right] \cdot \[ \prod_{\mathfrak{p} \notin\mathbb{S}} \sum_{S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)} \frac{X_{\mathfrak{p}, \bullet}(S_\mathfrak{p})}{S_\mathfrak{p}} \right] \\ &= \sum_{ \substack{S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) \text{ where} \\ \text{$S_\mathfrak{p} = (\mathcal{O}_\p^\times)^2$ for all $\mathfrak{p} \in \mathbb{S}$} } } \frac{X_{\bullet}(S)}{S}. \end{align} For any subset $\mathbb{K} \subseteq \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ we also define the {\bf restriction to $\mathbb{K}$} of a formal squareclass series by $$ \mathcal{F}_{X, \bullet} \Big|_{\mathbb{K}} := \sum_{S\in \mathbb{K}} \frac{X_{\bullet}(S)}{S}. $$ In preparation for our main structure theorem (Theorem \ref{Theorem:M_N_Dirichlet}), we give some definitions useful for normalizing squareclasses and define some arithmetically interesting local quantities that will be used later for defining Euler factors. \begin{defn} \label{Defn:Normalized_squareclass} Given $n \in \mathbb{N}$, we say that a non-archimedean squareclass $\widetilde{S} \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is {\bf normalized for $n$} if its associated ideal $\mathfrak{I}(\widetilde{S}) = \mathfrak{h}_n$. Similarly we say that a local squareclass $\widetilde{S_\mathfrak{p}} \in \SqCl(F_\p^\times, \Op^\times)$ is {\bf normalized at $\mathfrak{p}$ for $n$} if its associated ideal $\mathfrak{I}(\widetilde{S_\mathfrak{p}}) = \mathfrak{h}_n \mathcal{O}_\p$. \end{defn} \begin{defn} \label{Defn:Uniformizing_squareclass} We say $\pi_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)$ is a {\bf uniformizing squareclass (at $\mathfrak{p}$)} if its associated ideal $\mathfrak{I}(\pi_\mathfrak{p}) = \mathfrak{p}$. A set $\mathcal{P} = \{\pi_\mathfrak{p}\}$ consisting of one such $\pi_\mathfrak{p}$ for each prime $\mathfrak{p}$ of $F$ is called a {\bf family of uniformizing squareclasses}. 
\end{defn} \begin{defn} \label{Defn:Associated_normalized_squareclass} Given $n \in \mathbb{N}$ and a family of uniformizing squareclasses $\mathcal{P} = \{\pi_\mathfrak{p}\}$, for any $S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)$ we can define the {\bf normalized squareclass $\widetilde{S_\mathfrak{p}}$ associated to $S_\mathfrak{p}$} by the formula $\widetilde{S_\mathfrak{p}} := S_\mathfrak{p} \cdot \pi_\mathfrak{p}^{\mathrm{ord}_\mathfrak{p}(\mathfrak{h}_n / \mathfrak{I}(S_\mathfrak{p}))}$. Similarly, for any $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ we can define the {\bf normalized squareclass $\widetilde{S}$ associated to $S$} as $\widetilde{S} := S \cdot \prod_{\mathfrak{p}} \pi_\mathfrak{p}^{\mathrm{ord}_\mathfrak{p}(\mathfrak{h}_n / \mathfrak{I}(S))}$, by using the natural inclusion $\SqCl(F_\p^\times, \Op^\times) \hookrightarrow \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$. For $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ these two normalizations satisfy the local compatibility relation $(\widetilde{S})_\mathfrak{p} = \widetilde{S_\mathfrak{p}}$, and both are normalized squareclasses in the sense of Definition \ref{Defn:Normalized_squareclass}. \end{defn} \begin{rem} When defining the normalized squareclass $\widetilde{S}$ we often implicitly assume a fixed choice of a family $\mathcal{P}$ of uniformizing squareclasses, and we will only refer to $\mathcal{P}$ explicitly when we are choosing a particular kind of family. \end{rem} \begin{defn} \label{Def:generic_density_and_product} Given $n \in \mathbb{N}$ and a normalized local squareclass $\widetilde{S_\mathfrak{p}} \in \SqCl(F_\p^\times, \Op^\times)$ we define the {\bf generic local density at $\mathfrak{p}$} of rank $n$ to be $$ \beta_{n, \mathfrak{p}}(\widetilde{S_\mathfrak{p}}) := \beta_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p}) $$ where $G_\mathfrak{p}$ is the unique local genus of primitive $\mathcal{O}_\p$-valued quadratic forms with $\det_H(G_\mathfrak{p}) = \widetilde{S_\mathfrak{p}}$, described in Theorem \ref{Thm:Uniqueness_for_normalized_QF}. At the finitely many primes $\mathfrak{p}\mid 2$ where such a genus may not exist for certain $S_\mathfrak{p}$, we make an arbitrary (but fixed once and for all) choice of these generic local densities (e.g. say $\beta_{n, \mathfrak{p}}(\widetilde{S_\mathfrak{p}}) := 1$). Similarly, if $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ then we define the {\bf generic density product} of rank $n$ as the product $$ \beta_{n, \mathbf{f}}(\widetilde{S}) := \prod_\mathfrak{p} \beta_{n, \mathfrak{p}}(\widetilde{S}_\mathfrak{p}) $$ where $\widetilde{S}$ is the normalized squareclass associated to $S$. \end{defn} \begin{defn} \label{Def:normalized_mass_sums} Given $n\in\mathbb{N}$ and $\varepsilon \in \{\pm1\}$, we define the {\bf normalized local mass sums} at $\mathfrak{p}$ to be the quantities $$ \widetilde{M}_{\mathfrak{p}; n}^{*, \varepsilon}(S_\mathfrak{p}) := \sum_{ G_\mathfrak{p} \in \mathbf{Gen}^*_{\mathfrak{p}}(S_\mathfrak{p}, \varepsilon; n) } \frac{\beta_{n, \mathfrak{p}}(\widetilde{S_\mathfrak{p}})}{\beta_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})}.
$$ For convenience we also define the quantities $$ A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) := \widetilde{M}_{\mathfrak{p}; n}^{*, +}(S_\mathfrak{p}) + \widetilde{M}_{\mathfrak{p}; n}^{*, -}(S_\mathfrak{p}) \qquad \text{and} \qquad B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) := \widetilde{M}_{\mathfrak{p}; n}^{*, +}(S_\mathfrak{p}) - \widetilde{M}_{\mathfrak{p}; n}^{*, -}(S_\mathfrak{p}), $$ which allow us to better understand the $\varepsilon$-dependence of $\widetilde{M}_{\mathfrak{p}; n}^{*, \varepsilon}(S_\mathfrak{p})$ via the formula \begin{equation} \label{Eq:M_as_A_and_B} \widetilde{M}_{\mathfrak{p}; n}^{*, \varepsilon}(S_\mathfrak{p}) = \frac{A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) + \varepsilon \cdot B^*_{\mathfrak{p}; n}(S_\mathfrak{p})}{2}. \end{equation} \end{defn} \begin{lem} \label{Lem:A_and_B_generically_one} If $\mathfrak{p}\nmid 2$ and $S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)$, then $$ A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = \begin{cases} 1 & \qquad\text{if $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) = 0$,} \\ 0 & \qquad\text{if $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) < 0$,} \\ \end{cases} $$ so the formal Euler products $\mathcal{F}_{A^*; n}$ and $\mathcal{F}_{B^*; n}$ respectively associated to the functions $A^*_n(S) := \prod_\mathfrak{p} A^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ and $B^*_n(S) := \prod_\mathfrak{p} B^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ make sense term-by-term as formal non-archimedean squareclass series. \end{lem} \begin{proof} By Lemma \ref{Lem:Hessian_det_ideals}, any local genus $G_\mathfrak{p}$ of non-degenerate $\mathcal{O}_\p$-valued quadratic forms has associated ideal $\mathfrak{I}(\det_H(G_\mathfrak{p})) \subseteq \mathfrak{h}_n \mathcal{O}_\p \subseteq \mathcal{O}_\p$, so there are no such local genera with $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) < 0$ and so $A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = 0$ in this case. If $\mathfrak{p}\nmid 2$ and $S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)$ is a local squareclass with $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) = 0$ then by Theorem \ref{Thm:Uniqueness_for_normalized_QF} there is a unique local genus $G_\mathfrak{p}$ of quadratic forms with $\det_H(G_\mathfrak{p}) = S_\mathfrak{p} = \widetilde{S_\mathfrak{p}}$. Since $G_\mathfrak{p}$ is unimodular and $\mathfrak{p}\nmid 2$ we also have that $c_\mathfrak{p}(G_\mathfrak{p})= 1$, which shows that $A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = 1$. \end{proof} We now give an explicit decomposition formula for $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$ in the language of formal non-archimedean squareclass series.
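The key combinatorial input in the proof of the following theorem is the elementary sign-averaging identity of Lemma \ref{Lemma:polynomial_id} below. Purely as an illustrative sanity check (and not as part of any argument), the short Python sketch below verifies the $N=2$ case of that identity for one arbitrary choice of integer placeholder values standing in for the local quantities $A^*_{\mathfrak{p}; n}$ and $B^*_{\mathfrak{p}; n}$:
\begin{verbatim}
# Illustrative check of the N = 2 sign-averaging identity (Lemma polynomial_id):
#   sum over (eps_i) in {+1,-1}^T with prod_i eps_i = c of prod_i (X_i + eps_i*Y_i)
#     = 2^(|T|-1) * ( prod_i X_i + c * prod_i Y_i ).
# The integer lists X and Y are arbitrary placeholders, not actual local masses.
from itertools import product
from math import prod

X = [3, 1, 4, 2]
Y = [1, 5, -2, 3]

for c in (+1, -1):
    lhs = sum(
        prod(x + e * y for x, y, e in zip(X, Y, signs))
        for signs in product((+1, -1), repeat=len(X))
        if prod(signs) == c
    )
    rhs = 2 ** (len(X) - 1) * (prod(X) + c * prod(Y))
    print(c, lhs == rhs)   # expected output: 1 True and -1 True
\end{verbatim}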
\begin{thm} \label{Theorem:M_N_Dirichlet} The formal non-archimedean squareclass series $$ \mathcal{F}_{M^*, \varepsilon_\infty, \vec c_{\mathbb{S}}; n} := \sum_{S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})} \frac{M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)}{S} $$ can be written as $$ \mathcal{F}_{M^*, \varepsilon_\infty, \vec c_{\mathbb{S}}; n} = \hspace{-.2in} \sum_{\substack{\text{normalized} \\ \widetilde{S} \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})}} \hspace{-.2in} \tfrac{1}{2} \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \Biggl( \mathcal{K}_{\vec c_\mathbb{S}; n} \cdot \[ \mathcal{F}^\mathbb{S}_{A^*; n} + C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} \cdot \mathcal{F}^\mathbb{S}_{B^*; n} \right] \Biggr|_{\widetilde{S}\cdot \langle\mathcal{P}\rangle} $$ where $$ \mathcal{K}_{\vec c_\mathbb{S}; n} := \prod_{\mathfrak{p} \in \mathbb{S}} \, \sum_{S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times)} \frac{\widetilde{M}_{\mathfrak{p}; n}^{*, (\vec c_{\mathbb{S}})_\mathfrak{p}}(S_\mathfrak{p})}{S_\mathfrak{p}}, $$ $C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} := \varepsilon_\infty \cdot \prod_{\mathfrak{p}\in\mathbb{S}} (\vec c_{\mathbb{S}})_\mathfrak{p} \in \{\pm1\}$, and $\langle\mathcal{P}\rangle$ is the multiplicative subset of $\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ generated by the (implicitly fixed) uniformizing family of squareclasses $\mathcal{P}$. Here the series $\mathcal{F}^\mathbb{S}_{A^*; n}$ and $\mathcal{F}^\mathbb{S}_{B^*; n}$ are both given as Euler products over primes $\mathfrak{p}\notin\mathbb{S}$. \end{thm} \begin{proof} Given $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$, from Definition \ref{Def:M_idelic} we have \begin{align} \allowdisplaybreaks[4] M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) &= \sum_{G_\mathbf{f} \in \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)} \prod_\mathfrak{p} \beta_{G, \mathfrak{p}}(G)^{-1} = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \cdot \sum_{G_\mathbf{f} \in \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)} \prod_\mathfrak{p} \frac{\beta_{n, \mathfrak{p}}(\widetilde{S}_\mathfrak{p})}{\beta_{G, \mathfrak{p}}(G)} \\ & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \cdot \sum_{G_\mathbf{f} \in \mathbf{Gen}^*_{\mathbf{f}}(S, \varepsilon_\infty, \vec c_{\mathbb{S}}; n)} \prod_{\substack{ \mathfrak{p} \in\mathbb{T} \text{ where} \\ \mathbb{T} := \{ \mathfrak{p}\mid 2\} \cup \mathrm{Supp}(S) \cup \mathbb{S} }} \frac{\beta_{n, \mathfrak{p}}(\widetilde{S}_\mathfrak{p})}{\beta_{G, \mathfrak{p}}(G)} \\ & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \sum_{\substack{ (\varepsilon_\mathfrak{p})_{\mathfrak{p}\in\mathbb{T}} \in \{\pm1\}^{|\mathbb{T}|} \\ \text{satisfying} \\ \varepsilon_\mathfrak{p} = (\vec c_{\mathbb{S}})_\mathfrak{p} \text{ if $\mathfrak{p}\in \mathbb{S}$ and } \\ \prod_{\mathfrak{p}\in\mathbb{T} - \mathbb{S}} \varepsilon_\mathfrak{p} = C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} }} \, \sum_{\substack{ \text{Tuples of primitive local genera $G_\mathfrak{p}$} \\ \text{of $\mathcal{O}_\p$-integral quadratic forms} \\ \text{in $n$-variables with $\mathfrak{p}\in\mathbb{T}$} \\ \text{$c_\mathfrak{p}(G_\mathfrak{p}) = \varepsilon_\mathfrak{p}$ and $\det_H(G_\mathfrak{p}) = S_\mathfrak{p}$} \\ }} \,\, \prod_{\mathfrak{p} \in \mathbb{T}} \,\, \frac{\beta_{n, \mathfrak{p}}(\widetilde{S}_\mathfrak{p})}{\beta_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})} \\ & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \sum_{\substack{ 
(\varepsilon_\mathfrak{p})_{\mathfrak{p}\in\mathbb{T}} \in \{\pm1\}^{|\mathbb{T}|} \\ \text{satisfying} \\ \varepsilon_\mathfrak{p} = (\vec c_{\mathbb{S}})_\mathfrak{p} \text{ if $\mathfrak{p}\in \mathbb{S}$ and } \\ \prod_{\mathfrak{p}\in\mathbb{T} - \mathbb{S}} \varepsilon_\mathfrak{p} = C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} }} \, \prod_{\mathfrak{p} \in \mathbb{T}} \,\, \underbrace{ \sum_{ G_\mathfrak{p} \in \mathbf{Gen}^*_{\mathfrak{p}}(S_\mathfrak{p}, \varepsilon_\mathfrak{p}; n) } \frac{\beta_{n, \mathfrak{p}}(\widetilde{S}_\mathfrak{p})}{\beta_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})} }_{=\, \widetilde{M}_{\mathfrak{p}; n}^{*, \varepsilon_\mathfrak{p}}(S_\mathfrak{p})}. \end{align} By using (\ref{Eq:M_as_A_and_B}) to re-express $\widetilde{M}_{\mathfrak{p}; n}^{*, \varepsilon_\mathfrak{p}}(S_\mathfrak{p})$, and Lemma \ref{Lemma:polynomial_id} below, we obtain \begin{align} \allowdisplaybreaks[4] M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \sum_{\substack{ (\varepsilon_\mathfrak{p})_{\mathfrak{p}\in\mathbb{T}} \in \{\pm1\}^{|\mathbb{T}|} \\ \text{satisfying} \\ \varepsilon_\mathfrak{p} = (\vec c_{\mathbb{S}})_\mathfrak{p} \text{ if $\mathfrak{p}\in \mathbb{S}$ and } \\ \prod_{\mathfrak{p}\in\mathbb{T} - \mathbb{S}} \varepsilon_\mathfrak{p} = C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} }} \, \prod_{\mathfrak{p} \in \mathbb{T}} \, \frac{A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) + \varepsilon_\mathfrak{p} B^*_{\mathfrak{p}; n}(S_\mathfrak{p})}{2} \\ & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \[ \prod_{\mathfrak{p} \in \mathbb{S}} \, \widetilde{M}_{\mathfrak{p}; n}^{*, (\vec c_{\mathbb{S}})_\mathfrak{p}}(S_\mathfrak{p}) \right] \sum_{\substack{ (\varepsilon_\mathfrak{p})_{\mathfrak{p}\in\mathbb{T}-\mathbb{S}} \in \{\pm1\}^{|\mathbb{T}-\mathbb{S}|} \\ \text{satisfying} \\ \prod_{\mathfrak{p}\in\mathbb{T} - \mathbb{S}} \varepsilon_\mathfrak{p} = C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} }} \, \prod_{\mathfrak{p} \in \mathbb{T}-\mathbb{S}} \, \frac{A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) + \varepsilon_\mathfrak{p} B^*_{\mathfrak{p}; n}(S_\mathfrak{p})}{2} \\ & = \beta_{n, \mathbf{f}}^{-1}(\widetilde{S}) \[ \prod_{\mathfrak{p} \in \mathbb{S}} \, \widetilde{M}_{\mathfrak{p}; n}^{*, (\vec c_{\mathbb{S}})_\mathfrak{p}}(S_\mathfrak{p}) \right] \,\,\, \frac{2^{{|\mathbb{T}-\mathbb{S}|}-1}}{2^{|\mathbb{T}-\mathbb{S}|}} \, \[ \, \prod_{\mathfrak{p} \in \mathbb{T}-\mathbb{S}} A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) + C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} \prod_{\mathfrak{p} \in \mathbb{T}-\mathbb{S}} B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) \right]. \end{align} Since all of the valuations $\mathrm{ord}_\mathfrak{p}(\det_H(Q)) \geq 0$ for any $\mathcal{O}_F$-valued quadratic form $Q$, we see that $A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = 0$ when $\mathrm{ord}_\mathfrak{p}(S) < 0$. Also, if $\mathfrak{p}\nmid 2$ and $\mathrm{ord}_\mathfrak{p}(S)=0$ then there is a unique local genus of $\mathcal{O}_\p$-valued rank $n$ quadratic forms of determinant $S_\mathfrak{p}$, and this local genus has Hasse invariant $c_\mathfrak{p}=1$, which shows that $A^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) = 1$ when $\mathrm{ord}_\mathfrak{p}(S) = 0$.
By Lemma \ref{Lem:Hessian_det_ideals} every $S \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ with $M^*_{\varepsilon_\infty, \vec c_\mathbb{S}; n}(S) \neq 0$ has $\mathfrak{I}(S) \subseteq \mathfrak{h}_n$ and becomes normalized by dividing by a product of powers of the uniformizing squareclasses $\pi_\mathfrak{p} \in \mathcal{P}$, so $$ M^*_{\varepsilon_\infty, \vec c_\mathbb{S}; n}(S) \neq 0 \implies S \in \bigsqcup_{\substack{ \text{normalized} \\ \widetilde{S} \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) }} \widetilde{S} \cdot \langle\mathcal{P}\rangle. $$ This shows that the formal non-archimedean squareclass series $\mathcal{F}_{M^*, \varepsilon_\infty, \vec c_{\mathbb{S}}; n}$ can be written as a sum over squareclasses with the same normalization $\widetilde{S}$ of a linear combination of two formal squareclass series, each of which admits an Euler product expansion given by the desired formulas. \end{proof} We now prepare to associate a formal Dirichlet series to a formal non-archimedean squareclass series. \begin{defn}[Distinguished squareclasses] \label{Def:distinguished_squareclasses} Given $n \in \mathbb{N}$, we say that a homomorphism $\lambda: I(\mathcal{O}_F) \rightarrow \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ defines a {\bf distinguished family of (non-archimedean) squareclasses} $\lambda(\mathfrak{a})$ if $\mathfrak{I}(\lambda(\mathfrak{a})) = \mathfrak{a}$ for all $\mathfrak{a} \in I(\mathcal{O}_F)$. A family of distinguished squareclasses gives us a good way of parametrizing (a family of) Hessian determinant squareclasses of $\mathcal{O}_F$-valued rank $n$ quadratic $\mathcal{O}_F$-lattices by integral ideals. \end{defn} \begin{rem}[Normalizing distinguished squareclasses] \label{Rem:normalizing_distinguished_squarecalsses} Because of Theorem \ref{Theorem:M_N_Dirichlet} it is important to understand how the normalization map $\sim_n \,:S \mapsto \widetilde{S}$ affects our family of distinguished squareclasses $\lambda(\mathfrak{a})$. We have the following commutative diagram \begin{equation} \label{Eq:Normalizing_distinguished_squareclasses} \begin{tikzpicture} [baseline=(current bounding box.center)] [description/.style={fill=white,inner sep=2pt}] \matrix (m) [matrix of math nodes, row sep=3em, column sep=2.5em, text height=1.5ex, text depth=0.25ex] { I(\mathcal{O}_F) & \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) \\ \mathrm{SqCl}(I(\mathcal{O}_F)) & \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f}) \\ }; \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$\lambda$} (m-1-2); \path[dashed,->,font=\scriptsize] (m-2-1) edge node[auto] {$\widetilde{\lambda}$} (m-2-2); \path[->,font=\scriptsize] (m-1-2) edge node[auto] {$\sim_n$} (m-2-2); \path[->>,font=\scriptsize] (m-1-1) edge node[auto] {id} (m-2-1); \path[->,font=\scriptsize] (m-1-1) edge node[auto] {$\widetilde{\lambda}$} (m-2-2); \end{tikzpicture} \end{equation} where the dashed map $\widetilde{\lambda}$ is well-defined because $$ \widetilde{\lambda(\mathfrak{a})} = \lambda(\mathfrak{a}) \prod_\mathfrak{p} {\pi_\mathfrak{p}}^{\mathrm{ord}_\mathfrak{p}(\mathfrak{h}_n) - \mathrm{ord}_\mathfrak{p}(\mathfrak{a})}, $$ so by (strict) multiplicativity we have that $$ \widetilde{\lambda(\mathfrak{a} \mathfrak{b}^2)} = \widetilde{\lambda(\mathfrak{a})} \cdot \underbrace{ \lambda(\mathfrak{b})^2 \prod_\mathfrak{p} {\pi_\mathfrak{p}}^{- 2\mathrm{ord}_\mathfrak{p}(\mathfrak{b})} }_{= 1 \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})} = \widetilde{\lambda(\mathfrak{a})}.
$$ The map $\widetilde{\lambda}$ is usually non-constant (see Remark \ref{Rem:canonical_distinguished_squareclasses} for the exception), and to account for this variation we write $I(\mathcal{O}_F)$ as the disjoint union of squareclasses $\mathfrak{t} \cdot I(\mathcal{O}_F)^2$ where $\mathfrak{t}$ varies over all squarefree ideals $\mathfrak{t} \in I(\mathcal{O}_F)$. \end{rem} \begin{rem}[Canonical ``local'' distinguished squareclasses] \label{Rem:canonical_distinguished_squareclasses} Given $n \in \mathbb{N}$, a normalized non-archimedean squareclass $\widetilde{S} \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$, and a family $\mathcal{P}$ of uniformizing squareclasses, we have a canonical ``local'' distinguished family of squareclasses defined by the rule $\mathfrak{a} =: \prod_\mathfrak{p} \mathfrak{p}^{\nu_\mathfrak{p}} \mapsto \lambda'(\mathfrak{a})$ where $$ \lambda'(\mathfrak{a}) := \lambda'_{\widetilde{S}, \mathcal{P} ;n}(\mathfrak{a}) := \begin{cases} \prod_\mathfrak{p} \pi_\mathfrak{p}^{\nu_\mathfrak{p}} \cdot \widetilde{S} & \qquad \text{if $n$ is even,} \\ \prod_\mathfrak{p} \pi_\mathfrak{p}^{\nu_\mathfrak{p} - \mathrm{ord}_\mathfrak{p}(2)} \cdot \widetilde{S} & \qquad \text{if $n$ is odd.} \end{cases} $$ However because this family is defined purely locally, it will not be as interesting as some of the more globally defined squareclasses that we will consider in Section \ref{Sec:Binary_Forms}. These are the only families of distinguished squareclasses where the map $\widetilde{\lambda}$ is constant (having value $\widetilde{\lambda}(\mathfrak{a}) = \widetilde{\lambda}(\mathcal{O}_F) = \widetilde{S}$). \end{rem} This notion of distinguished squareclasses will allow us to naturally associate formal Dirichlet series to a formal non-archimedean squareclass series. \begin{defn}[Associated Dirichlet series] \label{Defn:Associated_Dirichlet_series} Given a family $\lambda$ of distinguished squareclasses, we can associate to any formal non-archimedean squareclass series $\mathcal{F}_{X, \bullet; n}$ the formal Dirichlet series $D_{X, \lambda, \bullet ;n}$ by considering the terms associated to the distinguished squareclasses $\lambda(\mathfrak{a})$. More generally, for any finite (possibly empty) set $\mathbb{T}$ of primes, we let $$ D^\mathbb{T}_{X, {\lambda}, \bullet; n}(s) := \sum_{\mathfrak{a}\in I^\mathbb{T}(\mathcal{O}_F)} \frac{X_{\bullet; n}(\lambda(\mathfrak{a}))}{\mathfrak{a}^s}. $$ \end{defn} The following corollary shows that these formal Dirichlet series inherit much of the structure of the formal squareclass series they are derived from.
\begin{cor} \label{Cor:Formal_Dirichlet_M} Given $n\in\mathbb{N}$, $\varepsilon_\infty \in \{\pm1\}$, a vector of Hasse invariants $\vec c_\mathbb{S}$, and a family of distinguished squareclasses $\lambda$, the formal Dirichlet series $D_{M^*, \lambda, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s)$ can be written as $$ D_{M^*, {\lambda}, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s) = \sum_{\mathfrak{t}} \tfrac{1}{2} \beta_{n, \mathbf{f}}^{-1}(\widetilde{\lambda(\mathfrak{t})}) \cdot \left( K_{\vec c_\mathbb{S}; n} \cdot \[ D^\mathbb{S}_{A^*, \lambda; n}(s) + C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} \cdot D^\mathbb{S}_{B^*, {\lambda}; n}(s) \right] \right|_{\mathfrak{t} \, I(\mathcal{O}_F)^2} $$ where $\mathfrak t$ runs over all squarefree ideals in $I(\mathcal{O}_F)$, $$ K_{\vec c_\mathbb{S}; n} := \prod_{\mathfrak{p} \in \mathbb{S}} \, \sum_{i=0}^\infty \frac{\widetilde{M}_{\mathfrak{p}; n}^{*, (\vec c_{\mathbb{S}})_\mathfrak{p}}(\lambda(\mathfrak{p}^i))}{\mathfrak{p}^{is}}, $$ and $C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} := \varepsilon_\infty \cdot \prod_{\mathfrak{p}\in\mathbb{S}} (\vec c_{\mathbb{S}})_\mathfrak{p} \in \{\pm1\}$. Here $D_{A^*, {\lambda}; n}^\mathbb{S}(s)$ and $D_{B^*, {\lambda}; n}^\mathbb{S}(s)$ are both given as Euler products over primes $\mathfrak{p}\notin\mathbb{S}$. \end{cor} \begin{proof} The statement of the Corollary follows directly by taking the formal Dirichlet series of the statement in Theorem \ref{Theorem:M_N_Dirichlet} relative to the distinguished family of squareclasses $\lambda$, and by using Remark \ref{Rem:normalizing_distinguished_squarecalsses} to write the sum over normalized $\widetilde{S}$ as a sum over squareclasses of integral ideals. The Euler product expansions follow because the maps $\mathfrak{a} \mapsto A^*_{\mathfrak{p};n}(\lambda(\mathfrak{a}))$ and $\mathfrak{a} \mapsto B^*_{\mathfrak{p};n}(\lambda(\mathfrak{a}))$ are multiplicative functions, and by Lemma \ref{Lem:A_and_B_generically_one} the formal infinite products involve only finitely many non-trivial factors. \end{proof} \begin{rem}[Computing the generic product] We note that an exact formula for the generic product $\beta_{n, \mathbf{f}}^{-1}(\widetilde{S})$ can be derived from \cite[\textsection6, p115-120]{GHY} because the associated genera of local quadratic forms $Q_\mathfrak{p}$ of determinant $\det_H(Q_\mathfrak{p}) = \widetilde{S}_\mathfrak{p}$ correspond to locally maximal $\mathcal{O}_\p$-valued quadratic lattices $L_\mathfrak{p}$. (However these $L_\mathfrak{p}$ only assemble to a global maximal lattice $L$ when $\widetilde{S}$ is globally rational.) For each prime $\mathfrak{p}$ these factors differ from the ``generic'' unimodular factors at $\mathfrak{p}$ in $L(M)$ in \cite[Eq(7.2), p121]{GHY} by the factors $\lambda_\mathfrak{p}$ given in \cite[Prop 4.4 and 4.5, p121]{GHY}. When $n$ is odd this formula shows that $\beta_{n, \mathbf{f}}^{-1}(\widetilde{S})$ is constant as $\widetilde{S}$ varies, essentially because there is only one orthogonal group over each finite field. \end{rem} When $n$ is odd the previous remark considerably simplifies the statement of Corollary \ref{Cor:Formal_Dirichlet_M} by removing the dependence on the squarefree ideals $\mathfrak{t}$. \begin{cor} \label{Cor:Formal_Dirichlet_decomp_n_odd} Suppose that $n$ is odd.
Then the formal Dirichlet series $D_{M^*, \lambda, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s)$ of Corollary \ref{Cor:Formal_Dirichlet_M} can be written as $$ D_{M^*, {\lambda}, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s) = \tfrac{1}{2} \beta_{n, \mathbf{f}}^{-1}(\widetilde{\lambda(\mathcal{O}_F)}) \cdot K_{\vec c_\mathbb{S}; n} \cdot \[ D^\mathbb{S}_{A^*, \lambda; n}(s) + C_{\varepsilon_\infty, \vec c_{\mathbb{S}}} \cdot D^\mathbb{S}_{B^*, {\lambda}; n}(s) \right], $$ with the notation as defined there. \end{cor} \begin{rem}[Variation of the signature vector] Notice that while there are $(n+1)^{r}$ possible signature vectors $\vec{\sigma}_\infty$ for non-degenerate quadratic spaces over $F$ of given dimension $n$ (where $r$ is the number of real embeddings of $F$), there are only two possible series $\mathcal{F}_M$ and the dependence on $\vec{\sigma}_\infty$ is encoded in the single sign $\varepsilon_\infty \in \{\pm1\}$. This observation will allow us to translate questions about the masses of indefinite quadratic forms into questions about totally definite forms, where explicit computations are more tractable. This is done in later sections to numerically verify our local mass computations of $A^*_{\mathfrak{p}; n}$ and $B^*_{\mathfrak{p}; n}$. \end{rem} Finally we state and prove the main technical lemma of this section, used in the proof of Theorem \ref{Theorem:M_N_Dirichlet}. \begin{lem} \label{Lemma:polynomial_id} Suppose $\mathbb{T}$ is a nonempty finite set, $X_i$ and $Y_i$ are indeterminates for all $i\in\mathbb{T}$, and let $\mu_N$ denote the set of all $N^\text{th}$ roots of unity in $\mathbb{C}$. Then for any $c\in\mu_N$ we have the polynomial identity \begin{equation} \label{Eq:Polynomial_identity} \sum_{\substack{ (\varepsilon_i)_{i\in\mathbb{T}} \in {\mu_N}^{|\mathbb{T}|} \\ \text{satisfying} \\ \prod_{i\in\mathbb{T}} \varepsilon_i = c}} \, \prod_{i\in\mathbb{T}} \,\, (X_i + \varepsilon_i Y_i) = N^{|\mathbb{T}|-1} \[ \prod_{i\in\mathbb{T}} X_i + c \prod_{i\in\mathbb{T}} Y_i \right] \, . \end{equation} \end{lem} \begin{proof} By dividing the identity by $\prod_{i\in\mathbb{T}} Y_i$ and replacing $X_i/Y_i$ by $X_i$, we can assume without loss of generality that all $Y_i = 1$. For any finite set $\mathbb{V}$ we define a norm map $\mathrm{Norm}_{\mathbb{V}}:(\mu_N)^{\mathbb{V}} \twoheadrightarrow \mu_N$ on $\mathbb{V}$-tuples by $\mathrm{Norm}_{\mathbb{V}}(\vec x_{\mathbb{V}}) := \prod_{i \in \mathbb{V}} x_i$, and for finite sets $\mathbb{V}' \subseteq \mathbb{V}$ we define restriction maps $\mathrm{res}^{\mathbb{V}}_{\mathbb{V}'}: (\mu_N)^{\mathbb{V}} \twoheadrightarrow (\mu_N)^{\mathbb{V}'}$ by $\mathrm{res}^{\mathbb{V}}_{\mathbb{V}'}(\vec x_{\mathbb{V}}) := (x_i)_{i\in \mathbb{V}'}$. Now consider the term $a_\mathbb{U} \prod_{i\notin\mathbb{U}} X_i$ on the left-hand side of (\ref{Eq:Polynomial_identity}) for some fixed $\mathbb{U} \subseteq \mathbb{T}$. Then we have $$ a_{\mathbb{U}} = \sum_{\substack{ (\varepsilon_i)_{i\in\mathbb{T}} \in {\mu_N}^{|\mathbb{T}|} \\ \text{satisfying} \\ \prod_{i\in\mathbb{T}} \varepsilon_i = c}} \, \prod_{i\in\mathbb{U}} \,\, \varepsilon_i = \sum_{\vec x \in \mathrm{Norm}_\mathbb{T}^{-1}(c)} \mathrm{Norm}_{\mathbb{U}}(\mathrm{res}^{\mathbb{T}}_{\mathbb{U}}(\vec x)). $$ For convenience, we let $\varphi_{\mathbb{U}} := (\mathrm{Norm}_{\mathbb{U}} \circ \mathrm{res}^{\mathbb{T}}_{\mathbb{U}})|_{\mathrm{Norm}_\mathbb{T}^{-1}(c)}$.
If $\mathbb{U} = \mathbb{T}$ then $\mathrm{Image}(\varphi_{\mathbb{U}}) = \{c\}$ and $\varphi_{\mathbb{U}}$ has multiplicity $N^{|\mathbb{T}| - 1}$, so $a_\mathbb{T} = c \cdot N^{|\mathbb{T}| - 1}$.
If $\mathbb{U} = \emptyset $ then $\mathrm{Image}(\varphi_{\mathbb{U}}) = \{1\}$ and $\varphi_{\mathbb{U}}$ has multiplicity $N^{|\mathbb{T}| - 1}$, so $a_\emptyset = N^{|\mathbb{T}| - 1}$.
If $\emptyset \subsetneq \mathbb{U} \subsetneq \mathbb{T}$ then $\mathrm{Image}(\varphi_{\mathbb{U}}) = \mu_N$ and each fibre $\varphi_{\mathbb{U}}^{-1}(\varepsilon)$ has multiplicity $N^{|\mathbb{T}| - 2}$, so $a_\mathbb{U} = N^{|\mathbb{T}| - 2} \sum_{\varepsilon \in \mu_N} \varepsilon = 0$.
\end{proof}
\begin{rem}[Removing Primitivity]
For some applications it is much more natural to remove the condition that we consider only primitive quadratic lattices $(L, Q)$ in the primitive total non-archimedean mass $M^*_{\vec{\sigma}_\infty; n}(S)$, say where $\mathbb{S} = \emptyset$ for simplicity.
This has the effect of multiplying the formal associated Dirichlet series $D_{M^*, \lambda, \varepsilon_\infty; n}(s)$ by the factor $\zeta_F(ns + \frac{n(n-1)}{2})$ for every choice $\lambda$ of a distinguished family of squareclasses.
This follows from the scale invariance of the mass $\mathrm{Mass}(L,Q)$, with the $ns$ term coming from the fact that scaling a quadratic space by $c$ scales its determinant by $c^n$, and the $\frac{n(n-1)}{2}$ term arising from the variation of the local densities at $v\mid\infty$.
\end{rem}
\begin{rem}[Relation to modular forms]
There are some structural similarities between our Dirichlet series in Corollary \ref{Cor:Formal_Dirichlet_M} and the (naive) Dirichlet series associated to a Hecke eigenform $f(z)$ of half-integral weight via the Mellin transform.
In particular, as pointed out by Shimura in \cite{Shimura:1973kx}, the action of the half-integral weight Hecke algebra on a Hecke eigen-cuspform $f(z) = \sum_{m\geq 1} a_m e^{2\pi i m z}$ produces relations between Fourier coefficients $a_m$ for $m$ within any fixed squareclass $t\mathbb{N}^2$ (with $t$ squarefree), and offers no insight about the squarefree coefficients $a_t$, which must be investigated separately.
This similarity might lead one to hope for the existence of a modular form whose $L$-function is essentially the Dirichlet series $D_{M^*, \lambda, \varepsilon_\infty; n}(s)$ or $D_{M, \lambda, \varepsilon_\infty; n}(s)$ of Definition \ref{Defn:Associated_Dirichlet_series}.
In \cite[Thrm2, p91]{Hirzebruch:1976fk} Hirzebruch and Zagier, following ideas of H. Cohen \cite{Cohen:1975uq} in higher weights, show that there is a non-holomorphic function $\mathcal{H}(z)$ that transforms as a weight $3/2$ modular form for $\Gamma_0(4)$, whose holomorphic part is a Fourier series generating function for the positive definite total masses $T_{n=2}(m)$ (written there as the Hurwitz numbers $H(m)$) when $m \in \mathbb{N}$.
Furthermore, the function $\mathcal{H}(z)$ naturally arises as a linear combination of Eisenstein series from the two regular cusps.
Here the total mass Dirichlet series $D_{T; n=2}(s)$ over $\mathbb{Q}$ agrees with the usual $L$-function $L(\mathcal{H}, s)$ of $\mathcal{H}(z)$.
Similarly, when $n=1$ the total mass Dirichlet series $D_{T; n=1}(s)$ for positive definite forms over $\mathbb{Q}$ agrees with the Riemann zeta function $\zeta(s)$, which can be thought of as the $L$-function of an Eisenstein series on $\mathrm{GL}_1$.
These results and structural similarities suggest that the total mass Dirichlet series $D_{M, \lambda, \varepsilon_\infty; n}(s)$ arise as $L$-functions of Eisenstein series on $\mathrm{GL}_n$, and that when $n$ is even the weight should be half-integral (i.e. these are automorphic forms on a double cover of $\mathrm{GL}_n$).
This phenomenon will be investigated further in future papers, and as a first step in this direction, we give explicit formulas for the ternary case $D_{T; n=3}(s)$ in \cite{Hanke_n_equals_3_masses}.
\end{rem}
\section{Structural results for the Euler factors $A^*_{\mathfrak{p}; n}$ and $B^*_{\mathfrak{p}; n}$}
In this section we establish the rationality of the Euler factors for $A^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ and $B^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ as $S_\mathfrak{p}$ varies but its normalized squareclass $\widetilde{S_\mathfrak{p}}$ is fixed (under some mild removable conditions when $\mathfrak{p}\mid 2$).
We also use local scaling symmetries when $n$ is odd to gain precise information about how the variation of $\widetilde{S_\mathfrak{p}}$ affects these Euler factors (at all primes $\mathfrak{p}$).
\begin{defn} \label{Defn:Jordan_structure_and_PLGS}
If $\mathfrak{p}\nmid 2$ we define a {\bf Jordan block structure} of size $n \in \mathbb{N}$ as a tuple $(n_1, \cdots, n_r)$ of $n_i \in \mathbb{N}$ where $n_1 + \cdots + n_r = n$, but $r \in \mathbb{N}$ is not specified.
If $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$ then we define a {\bf partial local genus symbol} of size $n$ as a Jordan block structure where
\begin{enumerate}
\item the separators between the elements of the tuple can be either a comma `,', a semicolon `;', or a pair of colons `::',
\item every $n_i \in 2\mathbb{N}$ may appear either with a bar above it or not,
\item the semicolon separator may only appear between two (adjacent) unbarred numbers.
\end{enumerate}
For example, the symbol $(1; 2, 1, \bar2, 1 :: 3)$ is a partial local genus symbol of size 10.
\end{defn}
\begin{rem}[Relation to genus symbols] \label{Rem:CS_PLGS_translation}
The partial local genus symbols are in bijective correspondence with the local genus symbols in \cite[Ch 10.7, pp378-384]{CS} for the prime $p=2$, where the oddities and signs are not specified.
In the Conway-Sloane notation, the `::' symbols indicate a separation between trains, and the barred numbers $\bar{n_i}$ indicate Type II Jordan blocks of dimension $n_i$ (which separate compartments iff they are between odd numbers in the same train), and the `;' indicates a separation of compartments (but not trains) between Type I blocks, indicating that their scale ideals differ by a factor of 4.
When $p \neq 2$, the Jordan block structures are in correspondence with the local genus symbols where the signs are not specified.
The invariants described over $\mathbb{Q}$ there are known to hold for any $\mathfrak{p}\nmid 2$ for any number field $F$.
\end{rem}
\begin{rem}[Counting Jordan Block Structures]
One can show that there are exactly $2^{n-1}$ Jordan block structures of size $n$ (for $\mathfrak{p}\nmid 2$), because they satisfy the recursion that $J(n) = \sum_{i=0}^{n-1} J(i)$ where $J(0)=J(1) = 1$ and $J(n)$ is the number of Jordan block structures of size $n$.
Counting partial local genus symbols (for $\mathfrak{p}\mid 2$ where $F_\mathfrak{p}= \mathbb{Q}_2$) can also be done, but it is a much more complicated task.
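As a quick computational aside (purely illustrative, and not part of the formal development), the recursion and the count $2^{n-1}$ are easy to confirm for small $n$ by directly enumerating Jordan block structures as ordered tuples of positive integers summing to $n$; a short Python sketch doing this is:
\begin{verbatim}
# Illustrative check: enumerate Jordan block structures of size n,
# i.e. tuples (n_1, ..., n_r) of positive integers with sum n, and
# compare with the recursion J(n) = J(0) + ... + J(n-1), J(0) = 1.

def jordan_block_structures(n):
    if n == 0:
        return [()]
    return [(k,) + rest
            for k in range(1, n + 1)
            for rest in jordan_block_structures(n - k)]

def J_recursive(n):
    J = [1]                      # J(0) = 1
    for _ in range(n):
        J.append(sum(J))         # J(m) = J(0) + ... + J(m-1)
    return J[n]

for n in range(1, 10):
    assert len(jordan_block_structures(n)) == J_recursive(n) == 2 ** (n - 1)
print("J(n) = 2^(n-1) confirmed for n = 1, ..., 9")
\end{verbatim}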
\end{rem}
\begin{thm}[Rationality of ``fixed unit'' local series] \label{Thm:Rationality_of_Euler_factors}
If $\mathfrak{p}\nmid 2$ or $F_\mathfrak{p} = \mathbb{Q}_2$, then the formal local Dirichlet series
$$
\sum_{ \substack{S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times) \\ \text{with $\widetilde{S_\mathfrak{p}}$ fixed}} } \frac{A^*_{\mathfrak{p}; n}(S_\mathfrak{p})}{N_{F/\mathbb{Q}}(\mathfrak{I}(S_\mathfrak{p}))^{s}}
\quad \text{ and } \quad
\sum_{ \substack{S_\mathfrak{p} \in \SqCl(F_\p^\times, \Op^\times) \\ \text{with $\widetilde{S_\mathfrak{p}}$ fixed}} } \frac{B^*_{\mathfrak{p}; n}(S_\mathfrak{p})}{N_{F/\mathbb{Q}}(\mathfrak{I}(S_\mathfrak{p}))^{s}}
$$
are rational functions of $X := q^{-s}$, where the norm $N_{F/\mathbb{Q}}(\mathfrak{a}) := |\mathcal{O}_F/\mathfrak{a}\mathcal{O}_F|$.
\end{thm}
\begin{proof}
First assume that $\mathfrak{p}\nmid 2$ and consider a given Jordan block structure $J$ of size $n$ (i.e. dimensions and scales of the modular lattice summands) arising from $n$-variable primitive $\mathcal{O}_\p$-valued quadratic forms over $F_\mathfrak{p}$, say with $r$ non-zero blocks.
From \cite[\textsection92:2, p147]{OM} we see that by decorating each (non-zero) Jordan block by a valuation zero squareclass of $(F_\mathfrak{p})^\times$, we parametrize all $\mathcal{O}_\p$-equivalence classes of quadratic forms with that Jordan block structure $J$.
This decoration process gives rise to a distribution of Hasse invariants and normalized local densities for each Jordan block structure that gives a contribution to $A^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ and $B^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ (which are the sum of all such contributions).
We observe that this contribution $c_J$ varies with the scales of its components as
$$
c_J = \widetilde{c}_J \cdot \prod_{i} q^{\kappa_i \alpha_i}
\qquad \qquad
\text{where $\kappa_i := \sum_{j < i} \tfrac{n_i n_j}{2} - \sum_{j > i} \tfrac{n_i n_j}{2}$, }
$$
$n_i := \dim(L_i)$, $\mathfrak{p}^{\alpha_i} := \mathfrak{s}(L_i)$ and $\widetilde{c}_J \in \mathbb{Q}$ is independent of the scale valuations $\alpha_i$.
More precisely, $\widetilde{c}_J$ depends only on the tuple $\vec{n}_J := (n_i)$ of Jordan block dimensions $n_i \in \mathbb{N}$ with $\sum_i n_i = n$ (and in any associated Jordan block structure $J$ we have $\mathfrak{s}(L_i) \supsetneq \mathfrak{s}(L_j)$ when $i < j$).
Since there are only finitely many such tuples $\vec{n}_J$ for any given $n\in \mathbb{N}$, we have a finite sum of contributions $\widetilde{c}_J$ (indexed by $\vec{n}_J$) which vary with $\alpha := \mathrm{ord}_\mathfrak{p}(S_\mathfrak{p}) = \sum_i \alpha_i$ as sums of the form
$$
c_J = \widetilde{c}_J \cdot \sum_{ \substack{ 0=\alpha_1 < \alpha_2 < \cdots < \alpha_r \\ \text{ where } \sum_i \alpha_i = \alpha } } \prod_{i=1}^r q^{\kappa_i \alpha_i}.
$$
By using Ferrers diagrams, we can use an affine change of variables to rewrite these sums as free sums over $\vec{\beta} := (\beta_1, \cdots, \beta_{r})$ with $\beta_i \geq 0$ where each $\alpha_i$ is an affine function of $\vec{\beta}$.
Therefore the contribution of each of these terms to the formal Euler factors is a product of geometric series, and a finite sum of such products is again a rational function.
When $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$ then the same argument applies where we replace the Jordan block structure by partial local genus symbols $J$ of size $n$, by using the $2$-adic integral local invariants in \cite[\textsection7.3-6, pp380-3]{CS-book}.
For each partial local genus symbol, using \cite[\textsection4]{CS} we notice that the variation of $c_J$ with the scales of the trains comes purely from the ``cross product'' factor, and the sum over all such (finitely many) terms formally gives a sum of geometric series.
\end{proof}
The rationality result for ``fixed unit'' local series in Theorem \ref{Thm:Rationality_of_Euler_factors} allows us to show that the Euler factors of the Dirichlet series $D_{A^*, \lambda, \varepsilon_\infty; n}(s)$ and $D_{B^*, \lambda, \varepsilon_\infty; n}(s)$ are also rational.
\begin{cor}[Rationality of Euler factors] \label{Cor:Rationality_of_AB_Euler_factors}
If $\mathfrak{p}\nmid 2$ or $F_\mathfrak{p} = \mathbb{Q}_2$, $\varepsilon \in \{\pm1\}$ and $\lambda$ is a distinguished family of squareclasses, then the formal local Dirichlet series
$$
\sum_{i=0}^\infty \frac{A^*_{\mathfrak{p}; n}(\lambda(\mathfrak{p}^i))}{N_{F/\mathbb{Q}}(\mathfrak{p}^i)^{s}}
\quad \text{ and } \quad
\sum_{i=0}^\infty \frac{B^*_{\mathfrak{p}; n}(\lambda(\mathfrak{p}^i))}{N_{F/\mathbb{Q}}(\mathfrak{p}^i)^{s}}
$$
are rational functions of $X := q^{-s} = N_{F/\mathbb{Q}}(\mathfrak{p})^{-s}$.
These are the Euler factors at $\mathfrak{p}$ of the Dirichlet series $D_{A^*, \lambda, \varepsilon_\infty; n}(s)$ and $D_{B^*, \lambda, \varepsilon_\infty; n}(s)$ described above.
\end{cor}
\begin{proof}
These local Dirichlet series agree with the local series described in Theorem \ref{Thm:Rationality_of_Euler_factors}, up to an overall scaling and possibly an alternating sign.
Since any rescaling of the coefficients of a power series $F(x)= \sum_{i} a_i x^i$ along congruence classes of the index $i$ can be accomplished by a rational transformation of $F(x)$, the result follows.
\end{proof}
\begin{rem}[Rationality for dyadic primes]
The condition that $F_\mathfrak{p} = \mathbb{Q}_2$ for $\mathfrak{p}\mid 2$ in Theorem \ref{Thm:Rationality_of_Euler_factors} is only present because the invariant theory of integral quadratic forms over a general dyadic field \cite{OMeara:1955rr, OMeara:1957cr} has not been formulated in a way that makes it readily applicable here.
However one can formulate it in a language of partial local genus symbols (``trains and compartments'') that are decorated by rational invariants of the underlying quadratic spaces (under some equivalences which specialize to ``oddity fusion'' and ``sign walking'' over $\mathbb{Q}_2$) that allow the proof above to work for all $\mathfrak{p}\mid 2$.
This reformulation will appear in a forthcoming paper \cite{Hanke:dyadic}.
\end{rem}
We now examine the effect of local unit scalings when $n$ is odd, which allows us to establish symmetries of the Euler factors for $A^*_{\mathfrak{p}; n}$ and $B^*_{\mathfrak{p}; n}$ across normalized local squareclasses.
\begin{thm}[Normalized local squareclass invariance]
Suppose that $n \in \mathbb{N}$ is odd and that $u_\mathfrak{p} \in \mathcal{O}_\p^\times$.
Then
$$
A^*_{\mathfrak{p}; n}(u_\mathfrak{p} S_\mathfrak{p}) = A^*_{\mathfrak{p}; n}(S_\mathfrak{p})
$$
and
$$
B^*_{\mathfrak{p}; n}(u_\mathfrak{p} S_\mathfrak{p}) =
\begin{cases}
B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) & \text{if $n \equiv 1 (\text{mod } 4)$,} \\
(u_\mathfrak{p}, u_\mathfrak{p})_\mathfrak{p} \cdot B^*_{\mathfrak{p}; n}(S_\mathfrak{p}) & \text{if $n \equiv 3 (\text{mod } 4)$.} \\
\end{cases}
$$
\end{thm}
\begin{proof}
Consider the effect of the local scaling $G_\mathfrak{p} \mapsto u_\mathfrak{p} \cdot G_\mathfrak{p}$ on the local genera $G_\mathfrak{p}$ of $\det_H(G_\mathfrak{p}) = S_\mathfrak{p}$.
The scaled determinant is given by $\det_H(u_\mathfrak{p} G_\mathfrak{p}) = u_\mathfrak{p}^{n} S_\mathfrak{p} = u_\mathfrak{p} S_\mathfrak{p}$, and by Lemma \ref{Lem:Hilbert_symbol_scaling} we see that the scaled Hasse invariant is given by \begin{align*} c_\mathfrak{p}(u_\mathfrak{p} S_\mathfrak{p}) & = \textstyle{ (u_\mathfrak{p}, u_\mathfrak{p})_\mathfrak{p}^\frac{n(n-1)}{2} \cdot (u, 2^n \det_H(G_\mathfrak{p}))_\mathfrak{p}^{n-1} \cdot c_\mathfrak{p}(S_\mathfrak{p}) } \\ & = c_\mathfrak{p}(S_\mathfrak{p}) \cdot \begin{cases} 1 & \text{if $n \equiv 1 (\text{mod } 4)$,} \\ (u_\mathfrak{p}, u_\mathfrak{p})_\mathfrak{p} & \text{if $n \equiv 3 (\text{mod } 4)$.} \\ \end{cases} \end{align*} These formulas show the desired symmetries because the local unit scaling is an involution on local genera $G_\mathfrak{p}$ that gives a bijection between genera of determinants $S_\mathfrak{p}$ and $u_\mathfrak{p} S_\mathfrak{p}$, and also preserves their local densities (by the definition of $\beta_{Q, \mathfrak{p}}(Q)$). \end{proof} \begin{rem} If $\mathfrak{p} \nmid 2$ and $n$ is odd then we see that the Euler factors $A^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ and $B^*_{\mathfrak{p}; n}(S_\mathfrak{p})$ depend only on $\mathrm{ord}_\mathfrak{p}(S_\mathfrak{p})$, or equivalently, on the local ideal associated to $S_\mathfrak{p}$. When $n \equiv 1\pmod4$ this invariance holds at all primes $\mathfrak{p}$. \end{rem} \begin{rem}[Symmetries for $n$ even] When $n$ is even it is still possible to gain some symmetry information about the series $\mathcal{F}_{M^*, \varepsilon_\infty, \vec c_{\mathbb{S}}; n}$ by examining the effect of global unit scalings by $u \in \mathcal{O}_F^\times / (\mathcal{O}_F^\times)^2$, but this only gives finitely many symmetries. \end{rem} \begin{lem} \label{Lem:Hilbert_symbol_scaling} Given non-degenerate $n$-dimensional quadratic space $(V, Q)$ over a local field $K_v$ of characteristic $\neq 2$, we have $$ \textstyle{ c_v(u\cdot V) = (u,u)^\frac{n(n-1)}{2}_v \cdot (u, \det_G(V))_v^{n-1} \cdot c_v(V). } $$ \end{lem} \begin{proof} Since $K_v$ has characteristic $\neq 2$, we can assume that $Q \sim_{K_v} a_1x_1^2 + \cdots + a_n x_n^2$ with $a_i \in K_v^\times$, giving $c_v(V) = \prod_{1 \leq i < j \leq n} (a_i, a_j)_v$. From basic properties of the Hilbert symbol we have that $$ (u a_i, u a_j)_v = (u,u)_v \cdot (u, a_i a_j)_v \cdot (a_i, a_j)_v, $$ which gives the desired formula by noticing that the factors $(u, a_i a_j)_v$ for a given index $i$ (with varying index $j$) appear exactly $n-1$ times. \end{proof} \section{Relationship with the Mass formula} In this section we relate the primitive total adelic non-archimedean mass $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$ with the total mass of quadratic lattices (as a sum over global classes) of any given signature vector $\vec{\sigma}_\infty$ and Hessian determinant squareclass $S$, where we may restrict the allowed Hasse invariants $c_\mathfrak{p}$ at finitely many primes. \begin{defn} We define the {\bf proper mass} $\mathrm{Mass}^{+}(L)$ of an $\mathcal{O}_F$-valued non-degenerate quadratic lattice $L$ to be the quantity $\mathrm{Mass}(\Lambda, \varphi)$ defined in \cite[eq (5.4), p26]{Ha-Thesis}. Since $\mathrm{Mass}^{+}(L)$ only depends on the genus $G := \mathrm{Gen}(L)$, we also define the proper mass of a genus $G$ as $\mathrm{Mass}^{+}(G) := \mathrm{Mass}^{+}(L)$ for any $L\in G$. 
\end{defn}
\begin{thm} \label{Thm:Mass_theorem}
Given a signature vector $\vec{\sigma}_\infty$ of rank $n$, a Hasse vector $\vec c_{\mathbb{S}}$ for $F$, and a globally rational non-archimedean squareclass $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$, we have
\begin{align}
M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)
&= \frac{C_{\vec{\sigma}_\infty, n}}{2 \cdot |\Delta_F|^\frac{n(n-1)}{4}} \cdot \frac{N_{F/\mathbb{Q}}(2\mathcal{O}_F)^\frac{n(n+1)}{2}}{N_{F/\mathbb{Q}}(\mathfrak{I}(S))^\frac{n+1}{2}} \cdot \sum_{ G \in \mathbf{Gen}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n) } \mathrm{Mass}^{+}(G)
\end{align}
where
$$
C_{\vec{\sigma}_\infty, n} := \[ \prod_{\text{$v$ real}} 2 \cdot 2^\frac{-n \cdot \min\{\sigma_{v,+}, \sigma_{v,-}\} }{2} \cdot V(\sigma_{v,+}) \cdot V(\sigma_{v,-}) \right] \[ \prod_{\text{$v$ complex}} 2^\frac{n(n-3)}{2} \cdot V(n) \right],
$$
$\varepsilon_\infty := \prod_{v\mid\infty} c_v((\vec{\sigma}_\infty)_v)$, and $V(r) := \frac{1}{2} \pi^\frac{r(r+1)}{4} \cdot \(\prod_{i=1}^r \Gamma(\frac{i}{2})\)^{-1}$ for $r \in \mathbb{Z}_{\geq 0}$.
\end{thm}
\begin{proof}
By \cite[eq (5.7), p27]{Ha-Thesis} (which uses the convention that $\sigma_{v,+} \geq \sigma_{v,-}$ for every real place) we see that
$$
\mathrm{Mass}^{+}(L) = 2 \cdot |\Delta_F|^\frac{n(n-1)}{4} \cdot \prod_{v\mid\infty} \mathrm{Vol}_{Q_v}(C_{v})^{-1} \cdot \prod_\mathfrak{p} \beta_{\mathfrak{p}}(L, Q)^{-1},
$$
where $\mathrm{Vol}_{Q_v}(C_{v})$ is the volume on the fibre $C_v \subseteq SO(\phi_v)$ defined in \cite[\textsection4]{Ha-Thesis}, computed using the volume form on $SO(Q_v)$.
Since $\prod_{v\mid\infty}\mathrm{Vol}_{\phi_v}(C_{v}) = C_{\vec{\sigma}_\infty, n}$, we can use \cite[Lemma 2.2]{Ha-Thesis} to write
$$
\prod_{v\mid\infty}\mathrm{Vol}_{Q_v}(C_{v})^{-1} = \prod_{v\mid\infty} \(\frac{|\det_G(Q)|_v}{\cancel{|\det_G(\phi_v)|_v}} \)^{\frac{n+1}{2}} C_{\vec{\sigma}_\infty, n}^{-1}.
$$
We can similarly use \cite[Lemma 2.2]{Ha-Thesis} to express the lattice local densities $\beta_{\mathfrak{p}}(L, Q)$ in terms of the local densities of the quadratic form $\psi_\mathfrak{p}$ induced by restricting $Q$ to $L_\mathfrak{p}$ in some local basis of $L_\mathfrak{p}$, giving
$$
\prod_\mathfrak{p} \beta_{\mathfrak{p}}(L, Q)^{-1} = \prod_\mathfrak{p} \beta_{\psi_\mathfrak{p}, \mathfrak{p}}(\psi_\mathfrak{p})^{-1} \cdot \prod_\mathfrak{p} \(\sqrt{\frac{|\det_G(\psi_\mathfrak{p})|_\mathfrak{p}}{|\det_G(Q)|_\mathfrak{p}}} \)^{-(n+1)}.
$$
Now using the product formula and observing that $\prod_\mathfrak{p} |\det_G(\psi_\mathfrak{p})|_\mathfrak{p}^{-1} = N_{F/\mathbb{Q}}( \frac{1}{2^n}\mathfrak{I}(S))$, we obtain
$$
\mathrm{Mass}^{+}(L) = \frac{2 \cdot |\Delta_F|^\frac{n(n-1)}{4}}{C_{\vec{\sigma}_\infty, n}} \cdot \frac{N_{F/\mathbb{Q}}(\mathfrak{I}(S))^\frac{n+1}{2}}{N_{F/\mathbb{Q}}(2\mathcal{O}_F)^\frac{n(n+1)}{2}} \cdot \prod_\mathfrak{p} \beta_{\psi_\mathfrak{p}, \mathfrak{p}}(\psi_\mathfrak{p})^{-1}.
$$
Finally, summing over all primitive quadratic rank $n$ lattices with the given signature vector and Hasse invariants at $\mathfrak{p} \in \mathbb{S}$ gives $T^*_{\vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n}(S)$, and using Lemma \ref{Lem:T_is_M} we recover $M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S)$.
\end{proof} \begin{cor} \label{Cor:Mass_formula_result_for_globally_rational} If the signature vector $\vec{\sigma}_\infty$ is totally definite and $S \in \mathrm{SqCl}(\mathbb{A}_{F, \mathbf{f}}^\times, U_\mathbf{f})$ is globally rational, then $$ M^*_{\varepsilon_\infty, \vec c_{\mathbb{S}}; n}(S) = \frac{V(n)^{[F:\mathbb{Q}]}}{ |\Delta_F|^\frac{n(n-1)}{4}} \cdot \frac{N_{F/\mathbb{Q}}(2\mathcal{O}_F)^\frac{n(n+1)}{2}}{N_{F/\mathbb{Q}}(\mathfrak{I}(S))^\frac{n+1}{2}} \cdot \sum_{ L \in \mathbf{Cls}^*(S, \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n) } \frac{1}{|\mathrm{Aut}(L)|} $$ where $\varepsilon_\infty$ and $V(n)$ are defined in Theorem \ref{Thm:Mass_theorem}. \end{cor} \begin{proof} This follows from Theorem \ref{Thm:Mass_theorem} by taking all signatures $\sigma_v := (n, 0)$, and noticing that $C_{\vec{\sigma}_\infty, n} = \prod_{v\mid\infty} V(n) = V(n)^{[F:\mathbb{Q}]}$. We can replace the proper mass by the mass because $\mathrm{Mass}^+(G) = 2\mathrm{Mass}(G)$, which can be proved by analyzing how classes split into proper classes. \end{proof} \begin{cor} Suppose that $\lambda$ gives a family of distinguished squareclasses where each $\lambda(\mathfrak{a})$ is globally rational and let $\vec{\sigma}_\infty$ be a totally definite signature vector, then the formal Dirichlet series $$ \sum_{\mathfrak{a} \in I(\mathcal{O}_F)} \( \sum_{ Q \in \mathbf{Cls}^*(\lambda(\mathfrak{a}), \vec{\sigma}_\infty, \vec c_{\mathbb{S}}; n) } \frac{1}{|\mathrm{Aut}(Q)|} \) \mathfrak{a}^{-s} = \frac{|\Delta_F|^\frac{n(n-1)}{4}}{\( 2^\frac{n(n+1)}{2} \cdot V(n) \)^{[F:\mathbb{Q}]}} \cdot D_{M^*, \lambda, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s - \tfrac{n+1}{2}) $$ where $\varepsilon_\infty$, $\vec c_\mathbb{S}$ and $V(n)$ are defined in Theorem \ref{Thm:Mass_theorem}. The Dirichlet series $D_{M^*, \lambda, \varepsilon_\infty, \vec c_\mathbb{S}; n}(s)$ is given explicitly in Corollary \ref{Cor:Formal_Dirichlet_M} as a linear combination of two Dirichlet series each of which is defined by an Euler product. \end{cor} \begin{proof} This follows directly from Corollary \ref{Cor:Mass_formula_result_for_globally_rational}. \end{proof} \section{Results for binary quadratic forms} \label{Sec:Binary_Forms} In this section we work out the formal non-archimedean squareclass series for binary quadratic forms, which is closely related to class numbers of quadratic rings and modular forms of weight $\frac{3}{2}$. \begin{lem} \label{Lem:Count_binary_Jordan_and_PLGS} When $n=2$ there are exactly two Jordan block structures and five partial local genus symbols. Explicitly, they are $\{(2), (1,1)\}$ and $\{(\bar{2}), (2), (1,1), (1;1), (1::1) \}$. \end{lem} \begin{proof} This follows from Definition \ref{Defn:Jordan_structure_and_PLGS} by counting and decorating the ordered partitions of $n=2$. \end{proof} \begin{lem}[Distributions of Hasse invariants] Suppose that $G_\mathfrak{p}$ varies over all primitive local genera of $\mathcal{O}_\p$-valued quadratic forms in 2 variables where $\det_H(G_\mathfrak{p})$ is fixed. 
The distribution of Hasse invariants $c_\mathfrak{p}$ arising from genera $G_\mathfrak{p}$ above with fixed Jordan block structures (for $\mathfrak{p}\nmid 2$) is given by $$ \begin{tabular}{c | c | c | c | c} \text{Jordan block structure} & $\det_H(G_\mathfrak{p})$ & $c_\mathfrak{p} =1$ & $c_\mathfrak{p} =-1$ & Conditions\\ \hline $(2)$ & -- & $1$ & $0$ & --\\ \hline \multirow{2}{*}{$(1,1)$} & \multirow{2}{*}{$u \pi_\mathfrak{p}^\nu$} & $1$ & $1$ & if $\nu$ is odd \\ \cline{3-5} & & $2$ & $0$ & if $\nu$ is even \\ \end{tabular} $$ Similarly, the distribution of Hasse invariants arising from genera $G_\mathfrak{p}$ above with fixed partial local genus symbols (when $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$) is given by $$ \begin{tabular}{c | c | c | c | c} \text{Partial local genus symbols} & $\det_H(G_\mathfrak{p})$ & $c_\mathfrak{p} =1$ & $c_\mathfrak{p} =-1$ & Conditions\\ \hline \multirow{2}{*}{$(\bar{2})$} & \multirow{2}{*}{$u$} & $0$ & $0$ & if $u \equiv 1_{(4)}$ \\ \cline{3-5} & & $0$ & $1$ & if $u \equiv 3_{(4)}$ \\ \hline \multirow{2}{*}{$(2)$} & \multirow{2}{*}{$u$} & $1$ & $1$ & if $u \equiv 1_{(4)}$ \\ \cline{3-5} & & $1$ & $0$ & if $u \equiv 3_{(4)}$ \\ \hline $(1,1)$ & -- & $1$ & $1$ & --\\ \hline \multirow{2}{*}{$(1;1)$} & \multirow{2}{*}{$u$} & $1$ & $1$ & if $u \equiv 1_{(4)}$ \\ \cline{3-5} & & $2$ & $0$ & if $u \equiv 3_{(4)}$ \\ \hline \multirow{2}{*}{$(1::1)$} & \multirow{2}{*}{$u \pi_\mathfrak{p}^\nu$} & $2$ & $2$ & if $\nu$ is odd or $u \equiv 1_{(4)}$ \\ \cline{3-5} & & $4$ & $0$ & if $\nu$ is even and $u \equiv 3_{(4)}$ \\ \end{tabular} $$ \end{lem} \begin{proof} {\bf Case 1:} When $\mathfrak{p}\nmid 2$, the (modular) Jordan blocks are summands determined up to isomorphism by their determinant squareclass (and dimension and scale). The Jordan block structure $J = (2)$ has one block and $Q \sim_{\mathcal{O}_\p} x^2 + uy^2$, so there is one genus for each allowed determinant $u$. Here the Hasse invariant is $c_\mathfrak{p} = (1,u)_\mathfrak{p}= 1$. When $J = (1,1)$ then $Q \sim_{\mathcal{O}_\p} u_1x^2 + \pi_\mathfrak{p}^\nu u_2 y^2$, where the $u_i\in \mathcal{O}_\p^\times$ can be freely chosen so that $u_1u_2 = u$ is fixed and $\nu$ is determined by the determinant. This gives two forms, and their Hasse invariants are given by the Hilbert symbol $$ c_\mathfrak{p}(Q) = (u_1, \pi_\mathfrak{p}^\nu u_2)_\mathfrak{p} = \cancel{(u_1, u_2)_\mathfrak{p}} \cdot (u_1, \pi_\mathfrak{p})^\nu_\mathfrak{p} = \leg{u_1}{\mathfrak{p}}^\nu. $$ When $\nu$ is even this is 1, but when $\nu$ is odd this takes both values $\pm1$ once. {\bf Case 2:} Now suppose that $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$, so $\mathcal{O}_\p = \mathbb{Z}_2$ and we can use the ``train/compartment'' invariant theory of quadratic forms over $\mathbb{Z}_2$ in \cite[\textsection7.5, p381--2]{CS-book} to enumerate local genera. This amount to decorating the partial local genus symbols (translated via Remark \ref{Rem:CS_PLGS_translation}) with ``signs'' and ``oddities'' to obtain a local genus symbol. Note that in general a change of sign does not affect the reduction of the determinant (mod 4), and allows us to freely vary among these two squareclasses. When $J = (\bar{2})$ then there are exactly two quadratic forms, having determinants 3 and 7 (mod 8), both with $c_\mathfrak{p} = -1$. When $J = (2)$ then there are three possible oddities and one choice of sign. 
When $u \equiv 1$ (mod 4) the two oddities have $c_\mathfrak{p} =\pm1$ (one each), but when $u \equiv 3$ (mod 4) we have one oddity with $c_\mathfrak{p} = 1$.
If there are no barred numbers in the partial local genus symbol $J$, then it corresponds to a diagonalizable quadratic form $Q \sim_{\mathcal{O}_\p} ax^2 + 2^\nu by^2$ for some $a,b \in \mathcal{O}_\p^\times$.
Also all forms of this determinant are scalings of $Q$ by some $u \in \SqCl(\Op^\times)$, and we can easily compute their Hasse invariants using the formula
$$
c_\mathfrak{p}(u Q) = (u a, u 2^\nu b)_\mathfrak{p} = (u,u)_\mathfrak{p} \cdot (u, 2^\nu)_\mathfrak{p} \cdot (u, ab)_\mathfrak{p} \cdot (a,2^\nu b)_\mathfrak{p}.
$$
When $J = (1,1)$ we have four possible oddities $(\{0,2,4,6\})$ and two signs, and we see that the change of oddity and sign to preserve the determinant preserves $(u,u)_\mathfrak{p}$, but reverses $(u, 2)_\mathfrak{p}$, so we have $c_\mathfrak{p} = \pm1$ once for each determinant.
When $J = (1;1)$ then $\nu = 2$ so scaling to account for different oddities (with fixed sign) changes $c_\mathfrak{p}$ by $(u,u)_\mathfrak{p} \cdot (u,ab)_\mathfrak{p}$, which is constant for all $u$ iff $ab \equiv u \equiv 3$ (mod 4).
By choosing $a\equiv 1$ (mod 8) we see that we always have at least one $c_\mathfrak{p} = 1$, giving the $c_\mathfrak{p}$-distribution above.
Finally when $J=(1::1)$ we have two choices of sign and four choices of unsigned oddities, giving four possible rescalings.
When $\nu$ is odd or $ab \equiv 1$ (mod 4) we can use the previous cases to see that $c_\mathfrak{p}$ will change sign, and so be equidistributed; otherwise all $c_\mathfrak{p}$ values are 1.
\end{proof}
\begin{lem}[$\mathfrak{p}$-masses and local densities]
Given a primitive local genus $G_\mathfrak{p}$ of non-degenerate binary quadratic forms with associated ideal $\mathfrak{I}(\det_H(G_\mathfrak{p})) = \mathfrak{p}^\nu$, the $\mathfrak{p}$-mass $m_\mathfrak{p}(G_\mathfrak{p})$ given by replacing $p$ by $q$ in \cite[eq (3), p263]{CS} is related to the inverse local density $\beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p})$ by the formula
$$
\beta^{-1}_{G_\mathfrak{p}, \mathfrak{p}}(G_\mathfrak{p}) = 2m_\mathfrak{p}(G_\mathfrak{p}) \cdot q^{-\frac{3\nu}{2} + 3 \mathrm{ord}_\mathfrak{p}(2)},
$$
where either $\mathfrak{p}\nmid 2$ or $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$.
\end{lem}
\begin{proof}
The formula \cite[\textsection12, pp281-2]{CS} adapted to $\mathfrak{p}\nmid 2$ or where $F_\mathfrak{p} = \mathbb{Q}_2$ gives
$$
2 \beta_{Q,\mathfrak{p}}(Q) m_\mathfrak{p}(Q) = q^{\frac{1}{2}\textstyle (n+1)\cdot \mathrm{ord}_\mathfrak{p}(\mathfrak{I}(\det_G(Q)))},
$$
and the formula follows since $\mathfrak{p}^\nu = \mathfrak{I}(\det_H(G_\mathfrak{p}))$ and $\det_H(G_\mathfrak{p}) = 2^n \cdot \det_G(G_\mathfrak{p})$.
\end{proof}
\begin{rem}
The $p$-mass formulas given in \cite[eq (3) and \textsection12]{CS} are still valid for any $\mathfrak{p}\nmid 2$ because they have been independently derived in \cite{Pall}, and the argument given there holds for computing the (non-degenerate) local densities $\beta_{Q,\mathfrak{p}}(Q)$ for any $\mathfrak{p}\nmid 2$ by replacing the expression ``$p$'' there by either $\mathfrak{p}$ or $q$ as appropriate.
\end{rem}
We now adopt a particularly convenient convention for defining the generic local densities when no normalized local genus exists (see Definition \ref{Def:generic_density_and_product}).
\begin{defn} \label{Def:generic_local_density_convention_at_n=2}
Suppose that $p=2$ splits completely in $F$, and $n=2$.
Then for normalized $\widetilde{S_\mathfrak{p}} \in \SqCl(F_\p^\times, \Op^\times)$ where $\mathfrak{p}\mid 2$ and $\widetilde{S_\mathfrak{p}} \equiv 1_{(4)}$, we set
$$
\beta^{-1}_{n=2; \mathfrak{p}}(\widetilde{S_\mathfrak{p}}) := 2 \cdot \gamma_\mathfrak{p}(\widetilde{S_\mathfrak{p}})^{-1} = \frac{2}{1 - \frac{\chi_{\widetilde{S_\mathfrak{p}}}(\mathfrak{p})}{2}} = \frac{2}{1 - \frac{1}{2}\leg{\widetilde{S_\mathfrak{p}}}{2}},
$$
where $\gamma_\mathfrak{p}(u) := 1 - \frac{\chi_{u}(\mathfrak{p})}{2}$, $\chi_{u}(\mathfrak{p}) := \leg{-u}{\mathfrak{p}}$, and $\leg{u}{2}$ is the extended Kronecker symbol defined as
$$
\leg{u}{2} :=
\begin{cases}
1 & \qquad\text{if $u \equiv \pm 1_{(8)}$,} \\
-1 & \qquad\text{if $u \equiv \pm 3_{(8)}$,} \\
0 & \qquad\text{otherwise.} \\
\end{cases}
$$
In the proof of Theorem \ref{Thm:Explicit_A_and_B_for_n=2} we will see that this definition also gives $\beta^{-1}_{n=2; \mathfrak{p}}(\widetilde{S_\mathfrak{p}})$ when $\widetilde{S_\mathfrak{p}} \equiv -1_{(4)}$, and it is this uniformity that motivates our choice here.
\end{defn}
We now compute the normalized Euler factors at $\mathfrak{p}\nmid 2$ and also at $\mathfrak{p}\mid2$ where $F_\mathfrak{p} =\mathbb{Q}_2$.
\begin{thm} \label{Thm:Explicit_A_and_B_for_n=2}
When $n=2$ and the prime $p=2$ splits completely in $F$, we have explicit rational functions for the ``fixed unit'' local Dirichlet series described in Theorem \ref{Thm:Rationality_of_Euler_factors}, given by
$$
\sum_{\nu=0}^\infty \frac{A^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu)}{q^{\nu s}} =
\begin{cases}
\frac{1 - \chi_u(\mathfrak{p}) \, q^{-(s+2)}}{1 - q^{-(s+1)}} & \qquad \text{if $\mathfrak{p}\nmid 2$,} \\
\frac{1 - \chi_u(\mathfrak{p}) \, q^{-(s+3)}}{1 - q^{-(s+1)}} & \qquad \text{if $\mathfrak{p}\mid 2$ and $F_\mathfrak{p} = \mathbb{Q}_2$,} \\
\end{cases}
$$
and
$$
\sum_{\nu=0}^\infty \frac{B^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu)}{q^{\nu s}} =
\begin{cases}
\frac{1 - \chi_u(\mathfrak{p}) \, q^{-(2s+3)}}{1 - q^{-(2s+2)}} & \qquad \text{if $\mathfrak{p}\nmid2$,} \\
0 & \qquad \text{if $\mathfrak{p}\mid 2$, $F_\mathfrak{p} = \mathbb{Q}_2$, and $u\equiv 1$ (mod $4\mathcal{O}_\p$),} \\
\frac{-1 + 2 q^{-(2s+2)} - \chi_u(\mathfrak{p}) \, q^{-(2s+3)}}{1 - q^{-(2s+2)}} & \qquad \text{if $\mathfrak{p}\mid 2$, $F_\mathfrak{p} = \mathbb{Q}_2$, and $u\equiv 3$ (mod $4\mathcal{O}_\p$),} \\
\end{cases}
$$
where $\chi_u(\mathfrak{p}) := \leg{-u}{\mathfrak{p}}$ and where $\textstyle \leg{\cdot}{\mathfrak{p}}$ is defined as the non-trivial quadratic character on $\mathrm{SqCl}(k_\mathfrak{p}^\times)$ when $\mathfrak{p}\nmid 2$, and as the Kronecker symbol at $\mathfrak{p}\mid2$ when $F_\mathfrak{p} = \mathbb{Q}_2$ (i.e. which takes values 1 if $u\equiv \pm1_{(8)}$ and $-1$ if $u\equiv \pm3_{(8)})$.
\end{thm}
\begin{proof}
For convenience, we let $\gamma_{\mathfrak{p}}(u) := \textstyle{1 - \frac{\chi_u(\mathfrak{p})}{q}}$.
From Lemma \ref{Lem:Count_binary_Jordan_and_PLGS}, when $\mathfrak{p}\nmid 2$ there are two cases to consider and when $F_\mathfrak{p} = \mathbb{Q}_2$ there are five cases to consider.
The normalized densities and Hasse invariant distributions for each of these cases, and local factor computations are given in the following tables: \resizebox{0.98\hsize}{!}{ \centering \hspace{-0.2in} \begin{tabular}{ | c | c | c | c | c | c | c | c | c |} \multirow{2}{*}{prime $\mathfrak{p}$} & \multirow{2}{*}{\text{\#}} & Allowed & Jordan& \multirow{2}{*}{Cases }& \multicolumn{2}{c|}{\# of Hasse Invariants} & \multirow{2}{*}{$\mathfrak{p}$-mass $m_\mathfrak{p}$} & Normalized Densities \\ & & $\nu = \mathrm{ord}_\mathfrak{p}(S)$ & Blocks & & $\hspace{.13in}c_\mathfrak{p} = 1\hspace{.13in}$ & $c_\mathfrak{p} = -1$ & & $\gamma_\mathfrak{p}(u) \cdot\beta_{Q, \mathfrak{p}}^{-1}(Q)$ \\ \hline \multirow{3}{*}{$\mathfrak{p}\nmid 2$} & 1 & \multirow{1}{*}{$\nu = 0$} & I${}_2$ & -- & 1 & 0 & $\frac{1}{2}\gamma_\mathfrak{p}(u)^{-1}$ & \multirow{1}{*}{1}\\ \cline{2-9} & \multirow{2}{*}{2} & \multirow{2}{*}{$\nu \geq 1$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $\nu$ odd & 1 & 1 & \multirow{2}{*}{$\frac{1}{4}\,q^{\frac{\nu}{2}}$} &\multirow{2}{*}{$\frac{1}{2}\, q^{-\nu} \cdot \gamma_\mathfrak{p}(u)$} \\ \cline{5-7} & & & & $\nu$ even & 2 & 0 & & \\ \hline \hline \multirow{10}{.6in}{$\mathfrak{p}\mid2$ and $F_\mathfrak{p} = \mathbb{Q}_2$} & \multirow{3}{*}{1} & \multirow{3}{*}{$\nu = 0$} & \multirow{3}{*}{II${}_2$} & $u\equiv 1_{(4)}$ & 0 & 0 & \multirow{1}{*}{--} & \multirow{1}{*}{--} \\ \cline{5-9} & & & & $u\equiv 3_{(8)}$ & \multirow{2}{*}{0} & \multirow{2}{*}{1} & \multirow{2}{*}{$2^{-2} \gamma_\mathfrak{p}(u)^{-1}$} & \multirow{2}{*}{2}\\ \cline{5-5} & & & & $u\equiv 7_{(8)}$ & & & & \\ \cline{2-9} & \multirow{2}{*}{2} & \multirow{2}{*}{$\nu = 2$} & \multirow{2}{*}{I{}$_2$ } & $u\equiv 1_{(4)}$ & 1 & 1 & \multirow{1}{*}{$2^{-3}$} & $2^{-2} \gamma_\mathfrak{p}(u)$\\ \cline{5-9} & & & & $u\equiv 3_{(4)}$ & 1 & 0 & $2^{-2}$ & $2^{-1} \gamma_\mathfrak{p}(u)$\\ \cline{2-9} & \multirow{1}{*}{3} & \multirow{1}{*}{$\nu = 3$} & \multirow{1}{*}{I{}$_1 \oplus $I${}_1$} & -- & 1 & 1 & \multirow{1}{*}{$2^{-\frac{5}{2}}$} & $2^{-3} \, \gamma_\mathfrak{p}(u)$\\ \cline{2-9} & \multirow{2}{*}{4} & \multirow{2}{*}{$\nu = 4$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $u\equiv 1_{(4)}$ & 1 & 1 & \multirow{2}{*}{$2^{-2}$} & \multirow{2}{*}{$2^{-4} \, \gamma_\mathfrak{p}(u)$} \\ \cline{5-7} & & & & $u\equiv 3_{(4)}$ & 2 & 0 & & \\ \cline{2-9} & \multirow{2}{*}{5} & \multirow{2}{*}{$\nu \geq 5$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $\nu$ odd or $u\equiv 1_{(4)}$ & 2 & 2 & \multirow{2}{*}{$2^{\frac{\nu}{2} - 5}$} & \multirow{2}{*}{$2^{-\nu-1} \, \gamma_\mathfrak{p}(u)$} \\ \cline{5-7} & & & & $\nu$ even and $u\equiv 3_{(4)}$ & 4 & 0 & & \\ \hline \end{tabular} } \resizebox{0.98\hsize}{!}{ \centering \hspace{-0.2in} \begin{tabular}{ | c | c | c | c | c | c | c |} \multirow{2}{*}{prime $\mathfrak{p}$} & \multirow{2}{*}{\text{\#}} & Allowed & Jordan & \multirow{2}{*}{Cases } & \multirow{2}{*}{$A^*_{\mathfrak{p}; n=2}(S)$} & \multirow{2}{*}{$B^*_{\mathfrak{p}; n=2}(S)$} \\ & & $\nu = \mathrm{ord}_\mathfrak{p}(S)$ & Blocks & & & \\ \hline \multirow{3}{*}{$\mathfrak{p}\nmid2$} & 1 & \multirow{1}{*}{$\nu = 0$} & I${}_2$ & -- & 1 & 1 \\ \cline{2-7} & \multirow{2}{*}{2} & \multirow{2}{*}{$\nu \geq 1$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $\nu$ odd &\multirow{2}{*}{$q^{-\nu} \cdot \gamma_\mathfrak{p}(u)$} &\multirow{1}{*}{$0$} \\ \cline{5-5} \cline{7-7} & & & & $\nu$ even & & $q^{-\nu} \cdot \gamma_\mathfrak{p}(u)$ \\ \hline \hline \multirow{10}{.6in}{$\mathfrak{p}\mid2$ and $F_\mathfrak{p} = \mathbb{Q}_2$} 
& \multirow{3}{*}{1} & \multirow{3}{*}{$\nu = 0$} & \multirow{3}{*}{II${}_2$} & $u\equiv 1_{(4)}$ & \multirow{1}{*}{--} & \multirow{1}{*}{--} \\
\cline{5-7}
& & & & $u\equiv 3_{(8)}$ & \multirow{2}{*}{$1$} & \multirow{2}{*}{$-1$} \\
\cline{5-5}
& & & & $u\equiv 7_{(8)}$ & & \\
\cline{2-7}
& \multirow{2}{*}{2} & \multirow{2}{*}{$\nu = 2$} & \multirow{2}{*}{I{}$_2$ } & $u\equiv 1_{(4)}$ & \multirow{2}{*}{$2^{-2} \gamma_\mathfrak{p}(u)$} & 0\\
\cline{5-5} \cline{7-7}
& & & & $u\equiv 3_{(4)}$ & & $2^{-2} \gamma_\mathfrak{p}(u)$\\
\cline{2-7}
& \multirow{1}{*}{3} & \multirow{1}{*}{$\nu = 3$} & \multirow{1}{*}{I{}$_1 \oplus $I${}_1$} & -- & $2^{-3} \, \gamma_\mathfrak{p}(u)$ & 0 \\
\cline{2-7}
& \multirow{2}{*}{4} & \multirow{2}{*}{$\nu = 4$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $u\equiv 1_{(4)}$ & \multirow{2}{*}{$2^{-4} \, \gamma_\mathfrak{p}(u)$} & 0 \\
\cline{5-5} \cline{7-7}
& & & & $u\equiv 3_{(4)}$ & & $2^{-4} \, \gamma_\mathfrak{p}(u)$ \\
\cline{2-7}
& \multirow{2}{*}{5} & \multirow{2}{*}{$\nu \geq 5$} & \multirow{2}{*}{I{}$_1 \oplus $I${}_1$} & $\nu$ odd or $u\equiv 1_{(4)}$ & \multirow{2}{*}{$2^{-\nu} \, \gamma_\mathfrak{p}(u)$} & 0 \\
\cline{5-5} \cline{7-7}
& & & & $\nu$ even and $u\equiv 3_{(4)}$ & & $2^{-\nu} \, \gamma_\mathfrak{p}(u)$ \\
\hline
\end{tabular}
}
\noindent
(Note: Here our convention for the generic local density at $\mathfrak{p} \mid 2$ requires an extra division by 2 for the normalized local densities before using them in the second table.)
When $\mathfrak{p}\nmid2$ these tables give
\begin{align*}
\sum_{\nu \geq 0} A^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu) X^\nu &= 1 + \(1- \tfrac{\chi_u(\mathfrak{p})}{q}\) \sum_{\nu \geq 1} q^{-\nu} X^\nu = 1 + \frac{(1- \frac{\chi_u(\mathfrak{p})}{q})\frac{X}{q}}{1 - \frac{X}{q}} = \frac{1- \frac{\chi_u(\mathfrak{p}) X}{q^2}}{1 - \frac{X}{q}} \\
\end{align*}
and
\begin{align*}
\sum_{\nu \geq 0} B^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu) X^\nu &= 1 + \(1- \tfrac{\chi_u(\mathfrak{p})}{q}\) \sum_{\substack{\nu \geq 2 \\ \nu \text{ even}}} q^{-\nu} X^\nu = 1 + \frac{(1- \frac{\chi_u(\mathfrak{p})}{q})\frac{X^2}{q^2}}{1 - \frac{X^2}{q^2}} = \frac{1 - \frac{\chi_u(\mathfrak{p}) X^2}{q^3}}{1 - \frac{X^2}{q^2}}
\end{align*}
which give the desired formulas for $\mathfrak{p}\nmid2$ after substituting $X := q^{-s}$.
When $\mathfrak{p}\mid2$ and $F_\mathfrak{p}= \mathbb{Q}_2$ we have
\begin{align*}
\sum_{\nu \geq 0} A^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu) X^\nu = 1 + \(1- \tfrac{\chi_u(\mathfrak{p})}{q}\) \sum_{\nu \geq 2} q^{-\nu} X^\nu = 1 + \frac{(1- \frac{\chi_u(\mathfrak{p})}{q})\frac{X^2}{q^2}}{1 - \frac{X}{q}} = \frac{1- \frac{\chi_u(\mathfrak{p}) X}{q^3}}{1 - \frac{X}{q}}
\end{align*}
and when $\mathfrak{p}\mid 2$ and $u \equiv 3$ (mod $4\mathcal{O}_\p$) we have
\begin{align*}
\sum_{\nu \geq 0} B^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu) X^\nu &= -1 + \(1- \tfrac{\chi_u(\mathfrak{p})}{q}\) \sum_{\substack{\nu \geq 2 \\ \nu \text{ even}}} q^{-\nu} X^\nu = -1 + \frac{(1- \frac{\chi_u(\mathfrak{p})}{q})\frac{X^2}{q^2}}{1 - \frac{X^2}{q^2}} = \frac{-1 + \frac{2 X^2}{q^2} - \frac{\chi_u(\mathfrak{p}) X^2}{q^3}}{1 - \frac{X^2}{q^2}}
\end{align*}
which give the desired formulas for $\mathfrak{p}\mid2$ after substituting $X := q^{-s}$.
When $\mathfrak{p}\mid 2$ and $u\equiv1$ (mod $4\mathcal{O}_\p$) then $B^*_{\mathfrak{p}; 2}(u\, \pi_\mathfrak{p}^\nu) = 0$ for all $\nu$.
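Finally, as an elementary numerical sanity check (purely illustrative, and not needed for the argument), the $\mathfrak{p}\nmid2$ closed forms above can be compared against truncated partial sums of the tabulated coefficients; a short Python sketch doing this is:
\begin{verbatim}
# Illustrative check of the p-not-dividing-2 closed forms: the tabulated
# coefficients A*(u pi^nu), B*(u pi^nu) are summed against X = q^{-s}
# and compared with the stated rational functions.

def A_coeff(nu, q, chi):
    return 1.0 if nu == 0 else (1.0 - chi / q) * q ** (-nu)

def B_coeff(nu, q, chi):
    if nu == 0:
        return 1.0
    return (1.0 - chi / q) * q ** (-nu) if nu % 2 == 0 else 0.0

def check(q, chi, X, terms=400, tol=1e-12):
    A_series = sum(A_coeff(nu, q, chi) * X ** nu for nu in range(terms))
    B_series = sum(B_coeff(nu, q, chi) * X ** nu for nu in range(terms))
    A_closed = (1.0 - chi * X / q ** 2) / (1.0 - X / q)
    B_closed = (1.0 - chi * X ** 2 / q ** 3) / (1.0 - X ** 2 / q ** 2)
    assert abs(A_series - A_closed) < tol
    assert abs(B_series - B_closed) < tol

for q in (3, 5, 7, 9):          # residue field sizes
    for chi in (1, -1):         # possible values of chi_u(p)
        check(q, chi, X=0.5)
print("closed forms at p not dividing 2 agree with the tabulated coefficients")
\end{verbatim}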
\end{proof}
\begin{rem} \label{Rem:B_is_zero_for_nonsquare_ideals}
Notice from our table of local computations in the proof of Theorem \ref{Thm:Explicit_A_and_B_for_n=2} that $B^*_{n=2}(S) = 0$ unless $\mathfrak{I}(S)$ is a square and $S \equiv 3$ (mod $4\mathcal{O}_F$).
In this case we have $A^*_{n=2}(S) = \pm B^*_{n=2}(S)$, where $\pm = (-1)^\tau$ and $\tau$ is the number of primes $\mathfrak{p}\mid 2$ where $\mathfrak{p}\nmid \mathfrak{I}(S)$.
\end{rem}
We now compute the total mass of totally definite quadratic lattices of any given determinant over number fields $F$ where $p=2$ splits completely.
These results give an independent (global) way of computing the local series in Theorem \ref{Thm:Explicit_A_and_B_for_n=2}.
\begin{lem} \label{Lem:Unevaluated_mass_for_binary_lattices}
Suppose that $F$ is a totally real number field.
Then for every $S\in\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ for which there exists a totally definite $\mathcal{O}_F$-valued rank $2$ quadratic $\mathcal{O}_F$-lattice $L$ with $\det_H(L) = S$ and signature vector $\vec{\sigma}_\infty$, we have
$$
\sum_{ L \in \mathbf{Cls}^*(S, \vec{\sigma}_\infty; n=2) } \frac{1}{|\mathrm{Aut}(L)|}
=
\frac{ \beta^{-1}_{n=2, \mathbf{f}}(\widetilde{S}) \cdot |\Delta_F|^\frac{1}{2} \cdot N_{F/\mathbb{Q}}(\mathfrak{I}(S))^{\frac{3}{2}} }{2 \cdot (4\pi)^{[F:\mathbb{Q}]}}
\biggl[ A^*_{n=2}(S) + \varepsilon_\infty \cdot B^*_{n=2}(S) \biggr],
$$
where $\varepsilon_\infty := (-1)^\alpha$ and $\alpha$ is the number of archimedean places $v$ where the local signature $\sigma_v$ is negative definite.
\end{lem}
\begin{proof}
Since $V(2) = \frac{\pi}{2}$ in Corollary \ref{Cor:Mass_formula_result_for_globally_rational}, we have that
$$
M^*_{\varepsilon_\infty; 2}(S) = \frac{(4\pi)^{[F:\mathbb{Q}]}}{|\Delta_F|^\frac{1}{2} N_{F/\mathbb{Q}}(\mathfrak{I}(S))^{\frac{3}{2}}} \cdot \sum_{ L \in \mathbf{Cls}^*(S, \vec{\sigma}_\infty; 2) } \frac{1}{|\mathrm{Aut}(L)|}
$$
with $\varepsilon_\infty = 1$.
When $\mathbb{S} = \emptyset$ in Corollary \ref{Cor:Mass_formula_result_for_globally_rational}, looking at the $S$-coefficient gives
$$
M^*_{\varepsilon_\infty; 2}(S) = \tfrac{1}{2} \beta_{n=2, \mathbf{f}}^{-1}(\widetilde{S}) \cdot \[A^*_{n=2}(S) + \varepsilon_\infty \cdot B^*_{n=2}(S)\right]
$$
which proves the lemma.
\end{proof}
\begin{rem}
The issue of existence of global genera of lattices in Lemma \ref{Lem:Unevaluated_mass_for_binary_lattices} is equivalent to the local existence (at all $\mathfrak{p}$) together with the condition that $S$ is globally rational.
The local existence question is discussed in Remark \ref{Rem:Local_genus_existence_when_2_splits_completely}, and gives the exact existence criterion when $p=2$ splits completely in $F$.
\end{rem}
\begin{lem} \label{Lem:Generic_local_product_when_n_is_two}
Suppose that $p=2$ splits completely in $F$.
Then with the conventions in Definition \ref{Def:generic_local_density_convention_at_n=2}, the generic density product can be evaluated as
$$
\beta^{-1}_{n=2, \mathbf{f}}(S) = 2^{[F:\mathbb{Q}]} \cdot \prod_\mathfrak{p} \gamma_\mathfrak{p}(\widetilde{S})^{-1}.
$$
\end{lem}
\begin{proof}
When $\mathfrak{p}\nmid 2$ we have a unique local genus $G_\mathfrak{p}$ with $\det_H(G_\mathfrak{p}) = \widetilde{S}_\mathfrak{p}$, and $\beta^{-1}_{{G_\mathfrak{p}}, \mathfrak{p}}(G_\mathfrak{p}) = \gamma_\mathfrak{p}(S)^{-1} := \frac{1}{1-\frac{\chi_{\widetilde{S}}(\mathfrak{p})}{q}}$.
Now suppose that $\mathfrak{p}\mid2$, $F_\mathfrak{p} = \mathbb{Q}_2$.
When $\widetilde{S}_\mathfrak{p} = -1 \in \mathrm{SqCl}((\mathcal{O}_\p/4\mathcal{O}_\p)^\times)$ there is also a unique genus $G_\mathfrak{p}$ with $\det_H(G_\mathfrak{p}) = \widetilde{S}_\mathfrak{p}$, but here the doubled $\mathfrak{p}$-mass $2m_\mathfrak{p} = \frac{1}{4} \gamma_\mathfrak{p}(S)^{-1}$, giving $\beta^{-1}_{{G_\mathfrak{p}}, \mathfrak{p}}(G_\mathfrak{p}) = 2 \gamma_\mathfrak{p}(S)^{-1}$.
When $\widetilde{S}_\mathfrak{p} = 1 \in \mathrm{SqCl}((\mathcal{O}_\p/4\mathcal{O}_\p)^\times)$ then there is no local genus $G_\mathfrak{p}$ with $\det_H(G_\mathfrak{p}) = \widetilde{S}_\mathfrak{p}$, but following Definition \ref{Def:generic_local_density_convention_at_n=2} we define the normalized local density in this case to again be $2 \gamma_\mathfrak{p}(S)^{-1}$.
Since $2$ splits completely in $F$, we have that $\beta^{-1}_{n=2, \mathbf{f}}(\widetilde{S}) = 2^{[F:\mathbb{Q}]} \prod_\mathfrak{p} \gamma^{-1}_\mathfrak{p}(S)$, which proves the lemma.
\end{proof}
\begin{thm}
Suppose that $F$ is totally real, $p=2$ splits completely in $F$, and $\vec{\sigma}_\infty^+$ is the totally definite signature vector of rank 2 (i.e. $\sigma_v^+ = (2,0)$ for all $v\mid\infty$).
Then for every $S\in\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ where $\mathbf{Cls}^*(S, \vec{\sigma}_\infty^+; n = 2) \neq \emptyset$, we have
$$
\sum_{ L \in \mathbf{Cls}^*(S, \vec{\sigma}_\infty^+; n = 2) } \frac{1}{|\mathrm{Aut}(L)|}
=
\frac{\kappa(S) \cdot |\Delta_F|^\frac{1}{2} \cdot N_{F/\mathbb{Q}}(\mathfrak{I}(S))^{\frac{1}{2}}}{2 \cdot (2\pi)^{[F:\mathbb{Q}]}}
\cdot \prod_{\mathfrak{p}\nmid \mathfrak{I}(S)} \gamma_\mathfrak{p}(S)^{-1}
$$
where
$$
\kappa(S) :=
\begin{cases}
2 & \qquad \text{if $\mathfrak{I}(S) = \square$, $\widetilde{S} \equiv 3_{(4)}$, and $\tau$ is even,} \\
0 & \qquad \text{if $\mathfrak{I}(S) = \square$, $\widetilde{S} \equiv 3_{(4)}$, and $\tau$ is odd,} \\
1 & \qquad \text{otherwise,} \\
\end{cases}
$$
and $\tau := $ the number of primes $\mathfrak{p}\mid 2$ with $\mathfrak{p}\nmid \mathfrak{I}(S)$.
\end{thm}
\begin{proof}
By Remark \ref{Rem:B_is_zero_for_nonsquare_ideals} when $\mathfrak{I}(S) \neq \square$ or $\widetilde{S} \equiv 1_{(4)}$ we only need to consider $A^*_2(S)$ since there $B^*_2(S) = 0$.
From the table in the proof of Theorem \ref{Thm:Explicit_A_and_B_for_n=2}, we see that $A^*_\mathfrak{p}(S)$ is $q^{-\nu_\mathfrak{p}}$ where $\nu_\mathfrak{p} := \mathrm{ord}_\mathfrak{p}(\mathfrak{I}(S))$, with a possible factor of $\gamma_\mathfrak{p}(u)$ which appears iff $\mathfrak{p}\mid \mathfrak{I}(S)$.
Combining these we have
$$
A^*_{n=2}(S) = \frac{1}{N_{F/\mathbb{Q}}(\mathfrak{I}(S))} \cdot \prod_{\mathfrak{p}\mid \mathfrak{I}(S)} \gamma_\mathfrak{p}(\widetilde{S}).
$$
Thus by Lemma \ref{Lem:Generic_local_product_when_n_is_two} the product
$$
\beta^{-1}_{n=2, \mathbf{f}}(\widetilde{S}) \cdot A^*_{n=2}(S) = \frac{2^{[F:\mathbb{Q}]}}{N_{F/\mathbb{Q}}(\mathfrak{I}(S))} \cdot \prod_{\mathfrak{p}\nmid \mathfrak{I}(S)} \gamma_\mathfrak{p}(\widetilde{S})^{-1},
$$
and the result follows from Lemma \ref{Lem:Unevaluated_mass_for_binary_lattices}.
When $\mathfrak{I}(S) = \square$ and $\widetilde{S} \equiv 3_{(4)}$ then from Remark \ref{Rem:B_is_zero_for_nonsquare_ideals} we have $B^*_{n=2}(S) = (-1)^\tau A^*_{n=2}(S)$, proving the cases where $\kappa(S) = 0$ and $2$.
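As an elementary consistency check (purely illustrative), take $F = \mathbb{Q}$ and let $S$ be the Hessian determinant squareclass of $x^2 + y^2$, so that $\mathfrak{I}(S) = 4\mathbb{Z}$, $\widetilde{S} \equiv 1_{(4)}$ and $\kappa(S) = 1$.
Since $\chi_{\widetilde{S}}(p) = \leg{-1}{p} = \chi_{-4}(p)$ for odd primes $p$, the right-hand side is
$$
\frac{1 \cdot 1 \cdot \sqrt{4}}{2 \cdot 2\pi} \cdot \prod_{p \nmid 2} \left(1 - \tfrac{\chi_{-4}(p)}{p}\right)^{-1} = \frac{1}{2\pi} \cdot L(1, \chi_{-4}) = \frac{1}{2\pi} \cdot \frac{\pi}{4} = \frac{1}{8},
$$
which agrees with the left-hand side, since the only class here is $x^2 + y^2$ and its automorphism group has order $8$.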
\end{proof}
\section{The analytic class number formula}
In this section we explain how to interpret our previous results about binary quadratic lattices in terms of class numbers of relative quadratic orders, by using Kneser's generalized Dedekind correspondence between quadratic lattices and ideal classes of quadratic extensions.
One interesting corollary of this formula is to recover the Dirichlet class number formula for CM extensions of totally real fields $F$ where $p=2$ splits completely (in $F$).
This elucidates a comment of Siegel \cite[p11, pp124-5]{Siegel:1963vn} where he states that his general mass formula recovers Dirichlet's class number formula when applied to binary quadratic forms.
\begin{defn}
Suppose that $R$ is a quadratic $\mathcal{O}_F$-algebra in the sense of \cite[\textsection2, p407]{Kneser:1982kx}.
Then we define its {\bf (non-archimedean) discriminant squareclass} $\mathrm{Disc}_{\mathcal{O}_F}(R) \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ by requiring that the local squareclasses $\mathrm{Disc}_{\mathcal{O}_F}(R)_\mathfrak{p}$ are the discriminant squareclasses $(b^2 - 4c)(\mathcal{O}_\p^\times)^2$ of the local free quadratic algebras $R_\mathfrak{p} \cong \mathcal{O}_\p[x]/(x^2 + bx + c)$.
\end{defn}
\begin{defn}
We say that two signature vectors $\vec{\sigma}_\infty$ and $\vec{\sigma}'_\infty$ for $F$ of rank $n$ are {\bf locally similar} if for every real place $v$ of $F$ we have that $\sigma_v = (\sigma_{v, +}, \sigma_{v, -})$ is equal to either $(\sigma'_{v, +}, \sigma'_{v, -})$ or $(\sigma'_{v, -}, \sigma'_{v, +})$, which corresponds to the relation that the associated local quadratic spaces are similar for each $v\mid\infty$.
\end{defn}
To connect classes of binary quadratic forms with ideal classes in a relative quadratic extension, we translate some results of Kneser on composition laws for binary quadratic forms into our language.
\begin{lem}[Translating Kneser's Quadratic Lattices] \label{Lem:Kneser_G(C)_translation}
Suppose that $C$ is a quadratic $\mathcal{O}_F$-algebra, and let $G(C)$ denote the group of projective rank two primitive quadratic $\mathcal{O}_F$-lattices $(L, Q)$ which are rank one $C$-modules and satisfy the norm-compatibility condition $Q(c\cdot \vec x) = \mathrm{Norm}(c) \cdot Q(\vec x)$ for all $\vec x \in L$ and all $c \in C$, as described in \cite[\textsection6, p441]{Kneser:1982kx}.
Then
$$
G(C) = \bigsqcup_{\substack{\vec{\sigma}'_\infty \text{ is locally } \\ \text{ similar to } \vec{\sigma}_\infty(C)}} \mathbf{Cls}^*(\textstyle{\det_H(C)}, \vec{\sigma}'_\infty; n=2)
$$
where $\det_H(C) \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ and $\vec{\sigma}_\infty(C)$ are respectively the non-archimedean Hessian determinant squareclass and signature vector of the binary quadratic $\mathcal{O}_F$-lattice $C$ equipped with its (quadratic) norm form $\mathrm{Norm}_{C/\mathcal{O}_F}$, and $\vec{\sigma}'_\infty$ runs over all signature vectors of rank $2$ which are definite at exactly the real places $v$ where $\vec{\sigma}_\infty(C)$ is definite.
\end{lem}
\begin{proof}
If $(L,Q) \in G(C)$ then by the norm-compatibility condition at each place $v$ we have the similarity condition $(L_v, Q_v) \cong_{\mathcal{O}_v} u_v \cdot (C_v, \mathrm{Norm}_{C_v / \mathcal{O}_v})$ where $u_v := Q(\vec x_v)$ for any $\vec x_v \in L_v$ that generates $L_v$ as a (free) rank one $C_v$-module.
At non-archimedean places $\mathfrak{p}$, since the local determinant squareclass of an even rank quadratic lattice is unaffected by local unit scaling, this shows that $\det_H(L) = \det_H(C) \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$.
At real places $v$ we have that the local signatures $\sigma_v(C_v, \mathrm{Norm}_v) = (2,0)$ or $(1,1)$ when $(C_v, \mathrm{Norm}_v)$ is respectively definite or indefinite since $\mathrm{Norm}(1) = 1 > 0$.
Similarly, the local similarity of $L$ and $C$ at real places $v$ shows that $\sigma_v(L, Q)$ is definite $\iff \sigma_v(C, \mathrm{Norm})$ is definite, so we have the inclusion $\subseteq$.
Conversely, suppose that $(L, Q) \in \mathbf{Cls}^*(S, \vec{\sigma}_\infty; n=2)$ for some choice of $S$ and $\vec{\sigma}_\infty$.
Then from \cite[p407, top]{Kneser:1982kx} we know that $L$ is a rank 2 module over its even Clifford algebra $C := C(L)$ satisfying the norm compatibility condition above, and $C$ is a quadratic $\mathcal{O}_F$-algebra.
However since a quadratic $\mathcal{O}_F$-algebra is determined locally up to isomorphism by its non-archimedean discriminant squareclass in $\SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$ (hence determined globally), and locally at all primes $\mathfrak{p}$ the discriminant squareclass of $C$ must agree with $S$ (from the previous argument), we see that $C$ is independent of our choice of $L$.
(Also the previous argument now shows that $\vec{\sigma}_\infty$ is locally similar to $\vec{\sigma}_\infty(C)$.)
This shows the opposite inclusion $\supseteq$, proving the claim.
\end{proof}
\begin{lem}[Generalized Dedekind Correspondence] \label{Lem:Dedekind_correspondence}
Suppose that $K/F$ is a CM extension of number fields.
Then, with $G(C)$ as in Lemma \ref{Lem:Kneser_G(C)_translation} and $C = \mathcal{O}_K$, we have
$$
|G(\mathcal{O}_K)| = \frac{h(\mathcal{O}_K)}{h(\mathcal{O}_F)} \cdot \frac{2^{[F:\mathbb{Q}]}}{Q_{K/F}},
$$
where $Q_{K/F} := [\mathcal{O}_K^\times: \mathcal{O}_F^\times \cdot (\mathcal{O}_K^\times \cap \mu_\infty)]$ and $\mu_\infty$ denotes the group of roots of unity in $\bar{\mathbb{Q}}$.
\end{lem}
\begin{proof}
From Kneser \cite[p412, top]{Kneser:1982kx} we have the exact sequence
$$
\xymatrix{
\mathcal{O}_K^\times \ar[r]^{\mathrm{Norm}} & \mathcal{O}_F^\times \ar[r] & G(\mathcal{O}_K) \ar[r] & \mathrm{Pic}(\mathcal{O}_K) \ar[r]^{\mathrm{Norm}} & \mathrm{Pic}(\mathcal{O}_F) \ar[r] & 1,
}
$$
where the last entry follows from the surjectivity result \cite[Thrm 10.1, p184]{Washington:1982uq}, giving
$$
|G(\mathcal{O}_K)| = \frac{h(\mathcal{O}_K)}{h(\mathcal{O}_F)} \cdot |\mathcal{O}_F^\times / \mathrm{Norm}_{K/F}(\mathcal{O}_K^\times)|.
$$
To compute the size of this last group, notice that the norm map gives an isomorphism
$$
\xymatrix{
\mathcal{O}_K^\times / (\mathcal{O}_F^\times \cdot (\mathcal{O}_K^\times \cap \mu_\infty)) \ar[r]^{\sim\qquad} & \mathrm{Norm}_{K/F}(\mathcal{O}_K^\times) / \mathrm{Norm}_{K/F}(\mathcal{O}_F^\times),
}
$$
of groups of size $Q_{K/F}$, and that $\mathcal{O}_F^\times / \mathrm{Norm}_{K/F}(\mathcal{O}_F^\times) = \mathcal{O}_F^\times / (\mathcal{O}_F^\times)^2$ has size $2^{([F:\mathbb{Q}] - 1) + 1}$ by Dirichlet's unit theorem and because the only roots of unity in $F$ are $\{\pm 1\}$.
Combining these gives the desired formula.
\end{proof}
We now count automorphisms of binary lattices in $G(\mathcal{O}_K)$ in preparation for generalizing Dirichlet's class number formula.
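As a purely illustrative orientation (an elementary special case that is not used in what follows), take $F = \mathbb{Q}$ and $K = \mathbb{Q}(i)$: here $h(\mathcal{O}_K) = h(\mathcal{O}_F) = 1$, $Q_{K/F} = 1$ and $\mu_K = \{\pm 1, \pm i\}$, so Lemma \ref{Lem:Dedekind_correspondence} gives $|G(\mathcal{O}_K)| = 2$, matching the two classes $\pm(x^2 + y^2)$ of Hessian determinant $4$ allowed by Lemma \ref{Lem:Kneser_G(C)_translation}; for either class one finds $|\mathrm{Aut}^{+}(L)| = 4 = |\mu_K|$ and $|\mathrm{Aut}(L)| = 8 = 2\,|\mu_K|$, in agreement with the next lemma.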
\begin{lem}[Computing Automorphisms] \label{Lem:binary_lattice_automorphisms}
Suppose that $K/F$ is a CM extension of number fields and that $(L, Q) \in G(\mathcal{O}_K)$ as in Lemmas \ref{Lem:Kneser_G(C)_translation} and \ref{Lem:Dedekind_correspondence}.
Then $\mathrm{Aut}^+(L) = \mu_K := K^\times \cap \mu_\infty$ and $|\mathrm{Aut}(L)| = 2 \cdot |\mu_K|$.
\end{lem}
\begin{proof}
Since $\mathrm{Aut}(L)$ consists exactly of the automorphisms $\mathrm{Aut}(V,Q)$ of the ambient quadratic space $(V,Q)$ stabilizing $L$, we first determine $\mathrm{Aut}(V,Q)$.
Notice that the norm-compatibility condition ensures $L$ corresponds to a (not necessarily free) lattice in a scaled version of the quadratic space $(K, \mathrm{Norm}_{K/F})$, and that scaling a quadratic space does not affect its automorphisms.
By taking the (ordered) $F$-basis $\{1, \sqrt{\Delta}\}$ for $K$ (giving the matrix representation $\alpha := a + b \sqrt{\Delta} \leftrightarrow M_\alpha := \begin{bmatrix} a & b\Delta \\ b & a \end{bmatrix}$ and $\mathrm{Norm}_{K/F}(\alpha) = \det(M_\alpha)$) and explicitly solving for all $\gamma \in GL_2(F)$ satisfying ${}^t\gamma M \gamma = M$ with $M = \begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix}$ and $\Delta \in F$ with $K = F(\sqrt{\Delta})$, we see that the (rational) automorphisms of the quadratic space have the form
$$
\mathrm{Aut}(K, \mathrm{Norm}_{K/F}) \cong \mathrm{Gal}(K/F) \times \{\alpha \in K^\times \mid \mathrm{Norm}_{K/F}(\alpha) = 1\}.
$$
Since for $\alpha \in K^\times$ with $\mathrm{Norm}_{K/F}(\alpha) = 1$ we have $\det(M_\alpha) = \mathrm{Norm}_{K/F}(\alpha) = 1$, and the non-trivial Galois element $\sigma$ has $\det(\sigma) = -1$, we see that
$$
\mathrm{Aut}^+(K, \mathrm{Norm}_{K/F}) = \{\alpha \in K^\times \mid \mathrm{Norm}_{K/F}(\alpha) = 1\}.
$$
Now suppose that $\gamma \in \mathrm{Aut}^+(L,Q)\subseteq \mathrm{GL}_2(F)$.
By Lemma \ref{Lem:Kneser_G(C)_translation} we know that $(L,Q)$ is totally definite, hence $|\mathrm{Aut}(L)| < \infty$.
Since $\gamma \cdot L = L$ we know that $\gamma \in \mathcal{O}_K^\times$, but since $\gamma$ has finite order by Dirichlet's unit theorem we see that $\gamma \in \mu_K$.
Conversely, the norm compatibility condition shows that $\mu_K \subseteq \mathrm{Aut}^+(L)$, proving the first claim.
From the Kneser exact sequence at $\mathrm{Pic}(C)$, we see that the underlying $\mathcal{O}_K$-modules $I$ of all $L \in G(\mathcal{O}_K)$ are exactly those which (as ideals) have $N_{K/F}(I) = (I \cdot \sigma(I)) \cap F = a\mathcal{O}_F$ for some $a \in F^\times$, and so $I \cdot \sigma(I) = a\mathcal{O}_K$ giving $\sigma(I) = I$ in $\mathrm{Pic}(\mathcal{O}_K)$.
Therefore $\sigma \in \mathrm{Aut}(L)$, proving the second claim.
\end{proof}
We are now in a position to derive a relative version of Dirichlet's class number formula for CM extensions from our previous results.
\begin{thm}[Analytic Class Number Formula] \label{Thm:Analytic_class_number_formula}
Suppose that $K/F$ is a CM extension of number fields and $S := \det_H(\mathcal{O}_K, \mathrm{Norm}_{K/F}) \in \SqCl(\A_{F, \mathbf{f}}^\times, U_\mathbf{f})$, and that $p=2$ splits completely in $F$.
Then we have the analytic class number formula $$ h(\mathcal{O}_K) = \frac{|\mu_K| \cdot h(\mathcal{O}_F) \cdot |\Delta_F|^\frac{1}{2} \cdot Q_{K/F} \cdot N_{F/\mathbb{Q}}(\mathfrak{I}(S))^{\frac{1}{2}}} {(2\pi)^{[F:\mathbb{Q}]}} \cdot L_F(1, \chi_{K/F}), $$ where $Q_{K/F} := |\mathcal{O}_K^\times / \mu_K \cdot \mathcal{O}_F^\times| \in \{1,2\}$ and $\chi_{K/F}$ is the non-trivial order 2 Hecke character over $F$ associated to the extension $K/F$ by class field theory. When $F= \mathbb{Q}$, we have $h(\mathcal{O}_F) = Q = 1$ and we recover Dirichlet's analytic class number formula for imaginary quadratic fields $K$. \end{thm} \begin{proof} From Lemmas \ref{Lem:Dedekind_correspondence} and \ref{Lem:binary_lattice_automorphisms} we have that $$ h(\mathcal{O}_K) = \frac{h(\mathcal{O}_F) \cdot Q_{K/F}}{2^{[F:\mathbb{Q}]}} \cdot |G(\mathcal{O}_K)| = \frac{2 \cdot |\mu_K| \cdot h(\mathcal{O}_F) \cdot Q_{K/F}}{2^{[F:\mathbb{Q}]}} \cdot \frac{|G(\mathcal{O}_K)|}{|\mathrm{Aut}(L)|} $$ for any $L \in G(\mathcal{O}_K)$. By Lemma \ref{Lem:Kneser_G(C)_translation} we can re-express this as a sum over proper masses of binary quadratic lattices with totally definite signature vectors $\vec{\sigma}'_\infty$, giving $$ h(\mathcal{O}_K) = \frac{2 \cdot |\mu_K| \cdot h(\mathcal{O}_F) \cdot Q_{K/F}}{2^{[F:\mathbb{Q}]}} \cdot \sum_{\substack{\text{totally} \\ \text{definite $\vec{\sigma}'_\infty$}}} \sum_{L \in \mathrm{Cls}(S, \vec{\sigma}'_\infty; n=2)} \frac{1}{|\mathrm{Aut}(L)|}. $$ Since $\vec{\sigma}'_\infty$ freely runs over all both the $(2,0)$ and $(0,2)$ local signatures $\sigma'_v$ at each archimedean place $v$, we see that when applying Lemmas \ref{Lem:Unevaluated_mass_for_binary_lattices} and \ref{Lem:Generic_local_product_when_n_is_two} to evaluate these sums the contribution from the $B$-series cancels out due to the variation of $\varepsilon_\infty \in \{\pm1\}$, giving $$ h(\mathcal{O}_K) = \frac{\cancel{2} \cdot |\mu_K| \cdot h(\mathcal{O}_F) \cdot Q_{K/F}}{\cancel{2^{[F:\mathbb{Q}]}}} \cdot \cancel{2^{[F:\mathbb{Q}]}} \cdot \frac{|\Delta_F|^\frac{1}{2} \cdot N_{F/\mathbb{Q}}(\mathfrak{I}(S))^{\frac{1}{2}}}{\cancel{2} \cdot (2\pi)^{[F:\mathbb{Q}]}} \cdot \prod_{\mathfrak{p}\nmid\mathfrak{I}(S)} \gamma_\mathfrak{p}(\widetilde{S})^{-1}. $$ Finally, since $-S$ is the fundamental discriminant squareclass for $\mathcal{O}_K$ as a quadratic $\mathcal{O}_F$-algebra, we have that the conductor of $\chi_{K/F}$ is $\mathfrak{I}(-S) = \mathfrak{I}(S)$, and so $L_F(1, \chi_{K/F}) = \prod_{\mathfrak{p} \nmid \mathfrak{I}(S)} \gamma_\mathfrak{p}(\widetilde{S})^{-1}$. \end{proof} \begin{rem}[Cancellation of $B^*$-terms] It is interesting to see that the Dirichlet series for $B^*$ will a priori cancel out whenever the signature is not totally indefinite (i.e. not indefinite at all real places), and even then, our explicit computations (Remark \ref{Rem:B_is_zero_for_nonsquare_ideals}) show that we only have a contribution from $B^*$ when $\mathfrak{I}(S)$ is a square and $S \equiv 3$ (mod $4\mathcal{O}_F$). \end{rem} \begin{rem}[Class numbers of orders] Suppose that $R_K$ is some order in $\mathcal{O}_K$, where $K/F$ is a CM extension and $p=2$ splits completely in $F$. Then one can also establish an analytic class number formula for $h(R_K)$ analogous to Theorem \ref{Thm:Analytic_class_number_formula} for $h(\mathcal{O}_K)$. 
By comparing these formulas for $h(R_K)$ and $h(\mathcal{O}_K)$ one can show the relation $$ h(R_K) = h(\mathcal{O}_K) \cdot \prod_{\mathfrak{p}\mid \frac{\mathfrak{I}(S')}{\mathfrak{I}(S)}} \(1 - \frac{\chi_S(\mathfrak{p})}{q} \) $$ where $S = \det_H(\mathcal{O}_K, N_{K/F})$ and $S' = \det_H(R_K, N_{K/F})$, which agrees with the (more general) class number formula for orders given by Shimura in \cite[\textsection{12.5}, pp116-7]{Shimura_Clifford}. \end{rem} \begin{rem}[Class number formula from $L$-functions] The analytic class number formula in Theorem \ref{Thm:Analytic_class_number_formula} also agrees with the formula arising from taking $\mathrm{res}_{s=1} \frac{\zeta_K(s)}{\zeta_F(s)}$ for CM extensions $K/F$, which states that $$ h(\mathcal{O}_K) = \frac{h(\mathcal{O}_F) \cdot |\Delta_K|^\frac{1}{2} \cdot Q_{K/F} \cdot |\mu_K|} {(2\pi)^{[F:\mathbb{Q}]} \cdot |\Delta_F|^\frac{1}{2}} \cdot L_F(1,\chi_{K/F}). $$ (Here the ratio of regulators is computed using \cite[Prop. 4.16, p41]{Washington:1982uq}.) To see this we apply the relative discriminant formula \cite[Cor 2.10, p202]{Neukirch:1999cr} to the tower of extensions $K/F/\mathbb{Q}$, giving $$ \Delta_K = (\Delta_F)^2 \cdot N_{F/\mathbb{Q}}(\Delta_{K/F}), $$ and notice that the relative discriminant ideal $\Delta_{K/F} = \mathfrak{I}(-S) = \mathfrak{I}(S)$. \end{rem} \begin{rem}[Indefinite forms and non-CM extensions] One can also perform similar computations for indefinite binary forms to obtain an analytic class number formula (specializing to Dirichlet's class number formula for real quadratic fields when $F=\mathbb{Q}$) for non-CM quadratic extensions $K/F$, where $p=2$ splits completely in $F$. This is somewhat more complicated since it would require one to normalize the symmetric space and regulator measures appropriately, and perform the relevant (possibly non-finite index) unit group computations for $K/F$ in that setting. For simplicity here we only treat the totally definite case, which both illustrates how one would proceed in the more general case and shows the explicit connection between our work and analytic class number formulas. \end{rem} \end{document}
Load balancing mechanism for clustered PMIPv6 protocol. Safwan M. Ghaleb, Shamala Subramaniam, Zuriati Ahmad Zukarnain & Abdullah Muhammed. EURASIP Journal on Wireless Communications and Networking, volume 2018, Article number: 135 (2018). Published: 30 May 2018. Proxy Mobile IPv6 (PMIPv6) has become a prominent subject in pertinent research areas. This is attributed mainly to its capability of enabling mobility without imposing constraints or requirements on the mobile node (MN). This shielding of the MN is enabled by transferring mobility-related signaling to a new entity called the Mobile Access Gateway (MAG). However, associating MNs with a specific MAG inside the PMIPv6 network increases the probability of overloading that MAG. Thus, several research efforts have enhanced the PMIPv6 protocol to improve its basic specifications and performance. Strategies include protocols that apply clustering techniques to enhance the overall performance of PMIPv6 in terms of routing, scalability, lifetime, and load balancing. Load balancing mechanisms have been considered for non-clustered protocols, but they have not been adopted in clustering-based protocols, where managing the load and the respective MN assignments is critical. In this article, to address these issues, a new load balancing mechanism among MAGs is proposed for the Cluster-based Proxy Mobile IPv6 (CSPMIPv6) protocol. The signaling within CSPMIPv6 has been enhanced to support the proposed load balancing mechanism, which exploits inter- and intra-domain information on a frequent basis to select the best MAG among the candidate MAGs. The new mechanism yields an evident improvement in terms of average queuing delay, handover latency, transmission rate, end-to-end delay, and packet loss as compared to the LBM-PMIPv6 mechanism and the CSPMIPv6 protocol. Introduction In the Mobile Internet Protocol (MIP), the high involvement of MNs in mobility-related signaling causes several serious issues, among them long handover latency and excessive signaling [1]. The MN is required to register with the home agent (HA) whenever it changes its point of attachment. To address these problems associated with the MIP protocol, Proxy Mobile IPv6 (PMIPv6) has been developed by the Internet Engineering Task Force (IETF) in order to handle handoff operations on behalf of MNs [2]. This is done by adding a new entity, named the Mobile Access Gateway (MAG), that takes over the responsibility of mobility configuration from the MN. The main role of the MAG entity is to detect MN movement within the Local Mobility Anchor (LMA) domain. In addition, the MAG initiates the required signaling with the authentication, authorization, and accounting (AAA) server to register the MN with the respective LMA. The main role of the LMA in the PMIPv6 protocol is to maintain MN reachability whenever the MN changes its point of attachment within the PMIPv6 network. This removal of responsibility from the MN allows the PMIPv6 protocol to enhance the performance of the MIPv6 protocol, especially in terms of traffic signaling, service disruption, and tunneling overhead, making PMIPv6 a significant mobility management protocol for wireless sensor networks (WSNs). 
However, ignoring load balancing among the MAGs and using a single LMA to process or forward the MNs' packets within the LMA domain have resulted in many drawbacks (e.g., a single point of failure, long handover latencies, and intense signaling [3–5]). To tackle these issues, research efforts such as Sensor Proxy MIPv6 (SPMIPv6) [6–8], Cluster-based PMIPv6 for wireless mesh networks [9], and Cluster-based Proxy Mobile IPv6 (CSPMIPv6) [4] have been developed to mitigate these problems. All these protocols employ a clustering strategy in order to be more efficient for mobile users. The CSPMIPv6 [4] protocol solved a high number of issues associated with the PMIPv6 and SPMIPv6 protocols. Thus, the protocol can be used in a wider variety of applications compared to other protocols [3, 10]. Nevertheless, CSPMIPv6 has inherited other drawbacks due to its dependency on a central, single LMA. The Fast Handovers for Proxy MIPv6 (PFMIPv6) [11] protocol has been developed by the IETF to reduce handover latency. However, the serving network can cause false handover initiation, due to the prediction of the target network to which the MN will move [12]. Contrary to the benefits of the PMIPv6 protocol and its extensions, constraints are caused by the MNs, which have to connect to a particular MAG within the PMIPv6 network. This can cause the MAG to become overloaded, especially in large networks. An overloaded MAG incurs queuing delay, which in turn degrades performance in terms of packet loss, end-to-end delay, and throughput. There has been no consideration of load balancing in the basic specification of PMIPv6 and its extensions. Thus, many studies such as [13–18] have attempted to solve this issue by applying a load sharing mechanism between the MAGs, thereby increasing the performance of the overall system. Their proposed mechanisms, which are elaborated in Section 3, deploy the load balancing action by selecting the best target MAG, in addition to selecting low-priority traffic MNs for the handoff process. These protocols have achieved good results in terms of striking a balance of load between the MAGs. However, these proposed mechanisms apply only to non-clustered protocols. The clustering-based protocols have not been researched in this respect, despite being widely used. Several issues such as high queuing delay, end-to-end delay, and packet loss arise when these mechanisms are applied without considering the division into clusters, subsequently leading to serious disruption. As a result, it is evident that the MAG selection has enormous potential for enhancement, which is the focus of this article. The ability of the serving network to select the Target MAG (TMAG) according to its domain will lead to a reduction of the handover latency, end-to-end delay, and average queuing delay. This is the result of reducing the registration signaling and avoiding LMA involvement. In order to realize these potentials and increase the performance of the system, a load balancing mechanism based on the clustered PMIPv6 protocol, named LB-CSPMIPv6, is proposed to provide seamless mobility management and lower queuing delay. In the initial registration process of the MAGs and HMAGs, LB-CSPMIPv6 enables the LMA to assign a number to every sub-local domain in the clustered PMIPv6 domain. 
This domain is carried out by the heartbeat message along with the load status in order to select the best MAG for the handoff MN with the same domain of the MN's serving MAG, which is different from existing schemes where each TMAG is selected based on its domain and load. In the handoff process, LB-CPMIPv6 comprehensively considers the scenarios of intra- and inter-handoff mobility to provide a seamless mobility support to MHs roaming across various access networks, and low buffering cost, which reduces handoff delay and prevents packet loss. In this work, the CSPMIPv6 protocol handover signaling forms the core of the newly proposed load balancing mechanism. The performance analysis of the proposed load balancing mechanism with an extensive simulation has been developed using Network Simulator (NS2) to show that the proposed load balancing mechanism (LB-CSPMIPv6) achieves an improved quality of service (QoS) demands. In this work, the unique adoption of a load balancing mechanism is developed to improve the overall system performance of clustered PMIPv6 domain. The main contributions of this article are as follows: A detailed analysis of the CSPMIPv6 protocol in terms of merits, demerits, and its architecture, which represents the underlying of the LB-CPMIPv6 mechanism, is presented. The benchmark that has been selected for comparison purpose is reviewed extensively. Providing an extensive overview of proposed mechanisms within the PMIPv6 domain. The development of a new load balancing mechanism for clustered PMIPv6 enhances the load distribution among the MAGs within the CSPMIPv6 domain. The focal point in this new mechanism is exploiting the clustering benefits inside the PMIPv6 domain to enhance the process of selecting the TMAG during the handoff action. This article is organized as follows: Section 2 presents an extensive review of the CSPMIPv6 protocol focusing on its advantages, disadvantage, and in particular the handover signaling. Section 3 deliberates in detail the related work on the loading balancing in the PMIPv6 protocol. Section 4 discusses in detail the proposed LB-CPMIPv6 mechanism. In Section 5, a detailed explanation of the load balancing signaling for the clustered PMIPv6 domain is done and followed by Section 6, where the system architecture that is used as the environment for the LB-CPMIPv6 mechanism is presented and the performance evaluation for LB-CPMIPv6 mechanism is discussed. Section 7 concludes the contributions of the proposed work. An overview of the CSPMIPv6 protocol In this section, an extensive description of the CSPMIPV6 protocol, which has been used as a basis for the LB-CPMIPv6 mechanism, is presented. Jabir et al. [4] proposes the clustered PMIPv6 architecture to overcome problems associated with the Proxy MIPv6 (SPMIPv6) [6] and Proxy Mobile IPv6 (PMIPv6) [2] protocols respectively. In this developed solution, the PMIPv6 domain was divided into local sub-domains, as shown in Fig. 1. Each sub-domain contains several MAG clusters and each cluster is controlled and managed by a cluster Head MAG (HMAG). As deliberated in the earlier sections, the CSPMIPv6 is derived from the PMIPv6, so functionalities of entities such as LMA, MAG, MN, and corresponding node (CN) are identical to those in PMIPv6 protocol. The new entity HMAG in the CSPMIPv6 protocol has been configured to take the responsibility of the local cluster handoff from the LMA, in order to mitigate the load and the signaling on the LMA. 
In addition, the (AAA) functionalities are provided by the HMAG to reduce the registration time that is needed to register the MN. The registration processes of the new MN in the CSPMIPv6 protocol are performed according to the following steps: Once the movement of MN has been detected by MAG i, it sends a request message authentication to the AAA server including the MN identifier (MN-ID). Overall CSPMIPv6 system architecture Then the MAG i registers the MN in its domain cluster by sending a Local Proxy Binding Update (LPBU) to the HMAG j. Upon the successful authentication by the HMAG j, the HMAG j registers the MN on the LMA by sending a Proxy Binding Update (PBU) including the MN-ID and the HMAG-ID. Once the PBU message is received successfully by the LMA, a new Binding Cash Entry (BCE) is created to store the MN-ID and HMAG j identifier. Subsequently, the LMA sends a Proxy Binding Acknowledgment (PBA) reply to the HMAG j. The PBA message includes the Home Network Prefix (HNP) of the MN, which hereafter will be used for maintaining the MN reachability within the PMIPv6 domain. The LMA configures the routing path with the HMAG by setting a bi-directional tunnel between them to send and receive the traffic. The HMAG j adds the MN information to its Binding Update List (BUL) in order to register the MN and sends a Local Proxy Binding Acknowledgment (LPBA) message to the MAG i containing the MN prefix. Then, routing configuration is performed to make the MN accessible. When MAG i gets the LPBA message from the HMAG j, its BUL will be modified by adding the MN and forward the HNP to the MN through the advertisement message. Now, the MN has the ability to send and receive traffic. The MN information at the end of registration will be stored in the MAG i, HMAG j, and LMA tables as stated in the aforementioned registration operation. Furthermore, The HMAG j exchanges the MN information with the MAG i to perform a routing configuration for the MN. Thus, there will be no need for a bi-directional tunnel set up between the HMAG j and MAG i [19]. Moreover, the idea of integrating the AAA functionalities with the LMA functions proposed by [6] is reused in this CSPMIPv6 protocol to reduce the signaling cost during the MN registration. The handoff procedure within the CSPMIPv6 domain functions is illustrated in Fig. 2 and deliberated as follows. Handoff procedure in the CSPMIPv6 domain When the MN decides to move from its serving network to another within the CSPMIPv6 domain, the MN movements could be either an intra- or inter-cluster handoff. In the intra-cluster handoff, the MN is supposed to move to another MAG within the same cluster domain. In other words, the MN movement is still controlled by the same cluster head HMAG. Therefore, the handover here is performed by the HMAG through updating its binding table without any intervention from the LMA. To do so, the destination MAG to which the MN decides to move, will send an LPBU message to the respective HMAG including the MN-ID. Here, the HMAG will only need to update its table by setting the new MAG address in its MAG field as opposed to the inter-cluster handoff. This is done once the MN information has already been recorded. Then, the respective HMAG sends back an LPBA message, including the HNP to the requesting MAG as well as configures the routing performed with the requesting MAG in order to forward the MN packets. 
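Before turning to the inter-cluster handoff case, the registration steps listed above can be summarized in a minimal Python sketch of the message flow between the MAG, HMAG, and LMA when a new MN attaches. This is not the authors' NS2 implementation; the message names (LPBU/LPBA, PBU/PBA, HNP) follow the text, but the class layout, method names, and prefix format are hypothetical, and the AAA exchange is omitted.

```python
# Minimal sketch of the CSPMIPv6 new-MN registration flow described above.
# Entity/message names follow the text; all data structures are illustrative.

class LMA:
    def __init__(self):
        self.bce = {}                      # Binding Cache Entries: MN-ID -> HMAG-ID

    def handle_pbu(self, mn_id, hmag_id):
        self.bce[mn_id] = hmag_id          # create/update the BCE for this MN
        hnp = f"2001:db8:{hash(mn_id) & 0xffff:x}::/64"   # hypothetical Home Network Prefix
        return {"type": "PBA", "mn_id": mn_id, "hnp": hnp}

class HMAG:
    def __init__(self, hmag_id, lma):
        self.hmag_id, self.lma = hmag_id, lma
        self.bul = {}                      # Binding Update List: MN-ID -> (MAG-ID, HNP)

    def handle_lpbu(self, mn_id, mag_id):
        # Register the MN with the LMA (PBU/PBA exchange), then record it locally.
        pba = self.lma.handle_pbu(mn_id, self.hmag_id)
        self.bul[mn_id] = (mag_id, pba["hnp"])
        return {"type": "LPBA", "mn_id": mn_id, "hnp": pba["hnp"]}

class MAG:
    def __init__(self, mag_id, hmag):
        self.mag_id, self.hmag = mag_id, hmag
        self.bul = {}                      # MN-ID -> HNP

    def on_mn_attach(self, mn_id):
        # (AAA authentication is omitted in this sketch.)
        lpba = self.hmag.handle_lpbu(mn_id, self.mag_id)
        self.bul[mn_id] = lpba["hnp"]
        return lpba["hnp"]                 # advertised to the MN in a Router Advertisement

# Usage: a new MN attaching to MAG1 under HMAG1.
lma = LMA()
hmag1 = HMAG("HMAG1", lma)
mag1 = MAG("MAG1", hmag1)
print(mag1.on_mn_attach("MN1"))           # prints the HNP assigned to MN1
```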
In the inter-cluster handoff, the MN movement is detected by another MAG located outside the cluster domain of the serving MAG. Thus, when the destination MAG accepts to register the MN, an LPBU message is sent to the respective HMAG including the MN-ID. However, the HMAG will not find the MN-ID in its binding table as the MN comes from another cluster. Therefore, the LMA must be involved in the process. This is done by the requesting HMAG sending a PBU message to the LMA advertising the new location of the MN. Subsequently, the LMA will update its BCE tables and send a reply to the requesting HMAG. Once the requesting HMAG receives the PBA, a new binding table for the MN will be created and a reply will also be sent to the respective MAG. Finally, a new entry for the MN will be created by the MAG in its binding table and an HNP message will be sent to the MN. The CSPMIPv6 has gained several substantial benefits as a result of dividing the PMIPv6 domain into sub-local networks. These advantages have increased the MN user performance concerning the mobility management. This performance enhancement comes as a result of reducing the LMA load by relieving it from the local mobility signaling within the HMAG cluster. Furthermore, the signaling cost has been reduced as a result of integrating the AAA functionalities with the HMAG. Another critical benefit is shortening the routing path when the MN moves inside its cluster (i.e., intra-cluster handoff) while performing the handoff process by the HMAG without any involvement from the LMA. Despite all of these merits mentioned above, the CSPMIPv6 still suffers from several issues such as the one point of failure (single LMA), end-to-end delay, and excessive signaling [3]. Related works on PMIPv6 protocol load balancing mechanisms In the PMIPv6, the mobility-related signaling responsibility is undertaken by the MAGs on behalf of the MN. All the MNs must be connected to a particular MAG which makes the MAG overloaded easily. The overloading on the MAG leads to an increase of packet loss, end-to-end delay, and the decrease of the transmission rate. Consequently, several works have been proposed to reduce the load on the overloaded MAGs via applying the load sharing mechanism between the MAGs to avoid the negative effect on the overall system performance. Kim and Lee [14] propose a load-balancing mechanism to equitably distribute the load among different MAGs within the PMIPv6 domain. The proposed work led to improving the overall system performance in terms of average queuing delay, packet loss, and end-to-end delay, while increasing the transmission rate. The authors utilized the heartbeat message in order to allow for a specific MAG to learn the load status of its neighboring MAGs. The heartbeat message is modified in order to store the load field for the load balancing action. Similarly, the MAGs also sends a heartbeat message to the LMA, including their load status. The LMA stores the received loads in its BCE used to measure the overall system performance. The description of this is shown in Fig. 3. When the LMA load exceeds a specific threshold, a heartbeat message is sent by the LMA to the overloaded MAG. Then, the overloaded MAG performs a load balancing and chooses the MNs that have the option to change their point of attachment. The target MAG is selected by the serving MAG based on the received signal strength (RSS) and the load status reported from the MNs. 
The signaling process that is performed during the load balancing action is presented in Fig. 4. This work restricts the procedure of choosing the handover MN (HMN) for the handover process by preventing the serving MAG to select the MNs that have a real-time session. Numerical and simulation analysis has been conducted by the authors to evaluate their proposed mechanism, and their result shows significantly enhanced performance over the original PMIPv6. The abovementioned mechanism forms the core of the proposed LB-CPMIPv6 mechanism. Furthermore, all the paper variables and assumptions are also reused in this work to create an identical platform for comparative purpose. Load balancing operation in the PMIPv6 domain [13] Load balancing signaling in the PMIPv6 domain Another work in this area has been done by Kim and Lee [13, 15] to enhance the load balancing by utilizing the IEEE 802.21 standard. The IEEE 802.21 optimizes the handover between the heterogeneous technologies via facilitating media-independent handover by providing up layers with network-related information. This work aims to determine the load on a candidate point of attachment (PoA). There are cases where the PoA suffers from heavy loads as compared to the TMAG that experiences a lower amount of load. This happens if the MAG load concentrates on only one of its PoA (BS/AP). Thus, the target PoA load is very important to knowing to reduce the overall load overhead. This proposed technique has proven to have a remarkable enhancement in terms of queuing delay and transmission rate. Another load balancing approach has been proposed by Kong et al. [16] for efficient migration of the load between the MAGs. Their approach determines the target MAG which needs a low signaling requirement. Each MAG learns the load status of its neighboring MAGs by exchanging their load among each other in the domain. Then, the MAGs create a list of candidate MAGs based on the received load information in order to select the best TMAG for the HMN. A proactive load balancing is performed during the initial attachment of the MN by selecting the MAG that has the lowest load according to the load information. This is done before the current MAG becomes overloaded. Therefore, by avoiding the overloaded MAG, benefits such as low packet loss and low signaling will be achieved. However, in this mechanism, the HMN experiences an extra delay, especially in the proactive scenario caused by the time needed by the serving MAG to determine the best TMAG to which the HMN moves according to the MAGs loads. Real-time sessions have not been considered in [16], which in turn degrades the system performance. Moreover, this mechanism requires MNs with multi-interface to be connected with two different networks, which makes it restricted to this scenario. Also, multi-domains within the same domain has not been considered in this work, which makes the MN moves to a different domain that requires extra signaling, which in turn leads to high queuing delay and low transmission rate. An agent-based scheme was proposed by Dimple and Kailash [17] to mitigate the overloaded MAG issue within the PMIPv6 network. Their mechanism works by moving the mobile agent from one location to another to reduce the load on the overloaded MAG. The mobile agent achieves this through visiting one MN to collect its data and moves to the other MNs associated to the MAG to take the only relevant data for transmission, in order to reduce the overhead communication. 
The MN selection is performed according to certain criteria such that the MNs that have real-time session will not be selected, while the MNs that have high-rate data connection become a target for a handoff. Despite the benefits gained by employing the MN agent, several issues arise. Anticipating the MN in the load balancing adds some burden to the MNs and increase the function complicity. This is done by selecting one MN to visit the other MNs within the MAG domain to collect the similar data packets which require some signaling messages between the MN and the associated MN. Moreover, the employed θ threshold by the LMA in the mechanism depends on the size of the data reduction by the MAG that sent to the LMA. This leads to overload the MAG that has numerous attached MNs but does not have any similar data between them or have less than the specified threshold, which not reflect the reality load state of the MAG. Furthermore, clustered PMIPv6 protocol not considered in their implementation, which in turn may lead to effect the intra-domain mobility advantages in a contrary manner. Qutub and Anjali [18] introduce an efficient mechanism to balance the load among the overloaded and low-load MAGs. Their mechanism selects the target MAG according to its geographical serving area and its current load. Also, the MN selection for handover is performed based on the MN's QoS profile, location, direction, and multi-interface capability. This selection has proven to reduce the overload and provide the service, which satisfies the QoS. In this work, not only the overloaded is avoided but also the services are provided with a level of QoS that satisfy the mobile users. However, employing the Global Position System (GPS) expedites the power of the MNs, which is not acceptable in the critical applications. Besides, this work consecrates on the overloaded MAGs and ignore the overloaded LMA. The overloaded LMA is determined according to the all MAGs load in the system, which may be accrued even when the MAGs are not overloaded. This definitely degrades the overall system performance through increasing the time of registering/de-registering the MNs (large queuing delay). Furthermore, divided domains do not consider in their work, which affects the QoS regarding the mobility management. The load balancing problem also has been researched by the Internet Engineering Task Force (IETF) and for which a Request for Comments (RFC) was introduced by Jiang [20]. Each MAG sends its load periodically to the LMA and hereafter is used by the LMA to create a list of candidate MAGs for performing load balancing. The factors have been used in their mechanism to select the target MAG are specified in [18]. The process of selecting the HMN is performed as follows: When the MAG becomes overloaded, the MAG starts load balancing action by selecting the HMN according to its service type to avoid selecting the MN that has a real-time session. Then, the MAG sends Load State Message (LSM) to the LMA in order to inform the LMA about its loads. Accordingly, the LMA gives a feedback to the MAG about the overloaded MAGs and the non-overloaded MAGs. This is done by sending a Load State Acknowledgment Message (LSAM) to the MAG in order to migrate the HMN to a new MAG that is not overloaded. Once the MAG receives the LSAM message, it sends a request message to non-overloaded MAGs. 
The non-overloaded MAGs upon receiving the request messages reply to the requested MAG along with the acceptance or the rejection of its request according to their status. Then, the overloaded MAG sends a notification to the HMN including the information about the TMAG. The MN once receives the notification from the MAG, it sends Router Solicitation (RS) message to the TMAG to inform it about its movements. A PBU and PBA messages are exchanged between the MAG and the LMA to register the handoff MN. Finally, the TMAG sends a Router Advertisement (RA) message to the HMN including the new IP addresses in order to complete its registration. According to the registration procedures in this mechanism, the MAG should send a request to the all non-overloaded MAGs and await their responses to select the TMAG for the HMN based these responses. This process consumes the bandwidth due to the messages exchanged between the MAGS during the load balancing mechanism. In addition, this work does not target the overloaded LMA, which is responsible for the acceptability of all the MNs connected to the MAGs that may be overloaded. Moreover, the intra domain mobility in case of dividing the PMIPv6 into sub-local domains is not considered, which lead to long handoff delay through increasing the path recovery and signaling cost. Nguyen and Bonnet [21, 22] introduce a solution mechanism to solve the issue of load balancing in the PMIPv6 by considering the IP multicast session. Their solution caters two scenarios, which are named as proactive-multicast and the reactive-multicast. For the former, when the MN starts a new multicast connection, a load balancing action will be triggered to select the suitable LMA to manage this connection. However, in the latter, when the LMA becomes overloaded, the LMA starts to select some of the multicast sessions for a load balancing purposes. Then, the LMA selects the suitable target LMA for these sessions according to same criteria as specified in [21]. Their experiments and numerical results that have been conducted have shown significant improvements in terms of load distribution as well as reducing the multicast service disruption. However, moving the MN's multicast session to another LMA (LMA has least load), which may be located far away from the MN affects the system performance. This is due to the long path between the MN and the new LMA that leads to long delay or packets drops high signaling cost. Moreover, this work focuses on balancing the load between the LMAs and ignores the overloaded MAGs, which may overload even when the load on LMAs is balanced. Another work focuses on distributing the load between the LMAs introduced in [23]. The primary aim of this work is moving the load from the overloaded LMA to the LMA has the least load. This is done when the load at LMA reached the specified threshold. Accordingly, the LMA sends load balancing (LB) warning to the MAG that serves the selected MN. Then, the MAG sends refresh binding to the LMA and hereafter the LMA communicates with the new LMA to bind the selected MN to another LMA. Now, the MN anchored at the new LMA. This work shows remarkable improvements regarding the blocking probability and dropping probability than PMIPv6 with no load control. However, the authors do not take into consideration the overloaded MAG, which is easily susceptible to be overloaded any time when the attached MNs very high or when the MN requires a high stream session. This leads to service disruption through increasing the queuing delay. 
In addition, the hierarchical domain is not considered also in their work, which may lead to high queuing delay. A load balancing scheme is introduced in [24] to improve the overall system performance in terms of handoff delay and throughput. The IEEE 802.21 Media Independent Handover Services (MIH) functionalities are utilized with the proper selection of the MN new network to provide a seamless handover in the heterogeneous networks. In this scheme, when the signal of the MN becomes very weak, a report from MN is sent to the MN serving MAG. Then, the PMAG upon receiving the report sends handover initiate (HI) message to the LMA including all candidates MAG/APs information. Accordingly, the LMA forwards the HI message to the candidates and these candidates response to the LMA with sending a Handover Acknowledgment (HAck) messages to inform the LMA about their status and their acceptance to serve the MN. The LMA forwards the received HAck messages to the serving MAG in order to select the proper network for the MN. Despite the enhancements that are made in this scheme regarding the handover time and throughput, additional signaling messages are required between the PMAG, LMA, and the candidates MAGs/APs, which negatively impact the system. The reactive load balancing is not considered in this scheme, which leads to increasing the blocking probability in the overloaded MAGs and service disruption due to increasing the queuing delay in the overloaded MAGs, in addition to ignoring the divided domain as the previous works do. Raza et al. [25] employ the Software Defined Networking (SDN)-based solution in order to mitigate the loads between the LMAs. This works depend on central mobility controller that is responsible for monitoring the load at the LMAs. The controller upon detecting the load crossing over the predefined threshold on any of the LMA starts moving some traffic from the massive LMA load to the lower LMA load. According to the analytical analysis, their scheme has significant improvements regarding disruption period of uplink and downlink traffic during load balancing action compared to their benchmark. However, adding extra element is costly. In addition, the massive MAG load is not considered in their scheme, which affects the system performance in terms of handover delay. Moreover, LMA domains also not consider in this scheme, which leads to moving the track to another LMA located far away from the serving LMA. Furthermore, scalability issue has arisen as a result of using a central controller. SDN also used by [26] to reduce the blocking probability and increase the resource utilization through using mobility-aware load distribution for multiple controllers. The objective of this work is handling the handover messages as fast as possible. This is performed by distinguishing the handover messages (gives them high priority) and manipulate them by the controller has the least load among the other controllers if the serving controller suffers from heavy load. However, the main consideration is given to the loads on the LMA and is ignored the loads on the MAGs. In addition, the clustered domain also is not considered in their scheme. These ignoring lead to serious issues regarding the mobility, which in turn affect the service delivered to the mobile users. The review of these deliberated algorithms raises some major concerns which have to be considered for the load sharing mechanism. 
A list of candidate MAGs to be created with a fewer message exchange to avoid the network overloading issue and choosing the HMNs should be performed based on their traffic type to avoid the selection of the HMNs that have an arguing critical-session. Unfortunately, proposed works above metioned have not proposed such solution for the clustered-based protocol during the formation of the candidate MAG list. Thus, is effected the overall system performance as the selection of TMAG from another cluster or in the case, there is another target MAG from the same cluster of the serving MAG. Thus, this work has been motivated by these open issues focused on the clustered protocol and to provide solutions. In this work, the CSPMIPv6 protocol is implemented to make it as the central referral platform for the proposed LB-CPMIPv6 mechanism. The proposed mechanism in [13] has been implemented in this research work whereas it is applied on the CSPMIPv6 for a comparison purpose. The CSPMIPv6 architecture is shown in Fig. 1. This protocol divides the PMIPv6 domain into sub-domains. Each domain encompasses some MAGs that form a cluster within the PMIPv6 domain. Subsequently, each cluster elects one MAG to act as the cluster head (HMAG). The MAG in the CSPMIPv6 can be easily overloaded as in PMIPv6 protocol. Figure 5 shows an example of the CSPMIPv6-based inter-architecture of its overlapped area among its clusters. The overlapped area between the sub-domains contains a number of HMN candidates. These candidates must have another optional network to connect with for the handover purpose. As seen in Fig. 5, MAG1 and MAG2 are located in the same cluster as the HMAG1, while MAG3 is located in a different cluster HMAG2. The solid lines represent the current connection of the HMN candidate, while the dotted lines represent the optional connection for it. The selection process of TMAG and HMNs must be performed to provide better performance to the HMNs. Likewise, choosing the appropriate network for the HMNs in the overlapped area leads to the balance of load between the MAGs, which in turn avoids the overlapped MAGs. The proposed enhanced load balancing algorithm is presented in the next section. An example of CSPMIPv6 inter-structure for load balancing movement The proposed LB-CPMIPv6 mechanism In this paper, a new mechanism, named LB-CPMIPv6 is proposed to enhance the overall system performance of IP-WSNs by considering a load balancing approach in the clustered network. The proposed LB-CPMIPv6 mechanism expands the MAG capability to avoid overloading issue by developing a new load balancing mechanism. In addition, the proposed mechanism reduces the time needed to recover path between the communicating entities. In the proposed LB-CPMIPv6 mechanism, a domain number should be assigned to every sub-domain in order to distinguish between the clusters within the PMIPv6 domain. The proposed LB-CPMIPv6 mechanism provides an efficient way to balance the load between the MAGs, by predicting the proper TMAG to which HMN moves accurately, as illustrated in Algorithms 1, 2, and 3. Algorithms 1, 2, and 3 explain the functionalities of MAG, HMAG, and LMA respectively within the proposed mechanism. The control flow diagram of LB-CSPMIPv6 mechanism is illustrated in Fig. 6. 
Load balancing operations within CSPMIPv6 domain Load balancing mechanism for clustered PMIPv6 domain (LB-CPMIPv6) In this proposed mechanism, a load balancing mechanism is developed for a cluster PMIPv6 to improve the efficiency of MNs and accordingly the overall system performance is improved. This is due to the need to take into consideration the intra- and inter-domain mobility during the load balancing process. The MAG located within the CSPMIPv6 domain acts as the gateway between the MNs and the HMAG. Thus, the MNs must be connected to the MAG to be connected to the network. Subsequently, the MAG could become overloaded if the number of the connected MNs increases in the network. This constraint has motivated, a new load balancing mechanism, which is applied to reduce the load at mainly the overloaded MAGs. In order to ensure the standardization of the performance analysis as a comparative platform, the LB-CPMIPv6 mechanism performance analysis has reused the parameter values and assumptions that have been presented by Kim and Lee in [13]. Hereafter, the proposed load balancing mechanism for the PMIPv6 network in [13] should be referred as the "LBM-PMIPv6 mechanism" for the sake of simplicity. In the CSPMIPv6 system model, the load at MAG i is measured according to the average packet arrival rate in a particular interval time. The similar measurement is used for measuring the arriving rate at a certain jth time interval and will hereafter be denoted by λi where the Nm is the number of the measurements. The Nm measurements are used to estimate the $ \bar {\lambda }$ for MAG i, which is computed as the average arrival rate at a certain time interval. After that, the MAG i calculates the average packet arrival rate using the weighted moving average technique under the assumption that μi is the average service rate of the MAG i, which is used by [13] and is mathematicaly expressed as follows: $$ \bar{\lambda} = \frac{\sum_{j=1}^{N_{m}} (N_{m} - j + 1) \lambda_{N_{m}-j+1}}{\sum_{j=1}^{N_{m}} j} $$ The reason to utilize the weighted moving average method is to reveal the uncontrolled action. In addition, it gives a higher weight to the current traffic sample as compared to the old traffic sample in the measurement as proposed in [27] in order to compute the MAG load precisely. Then, the pi can be expressed as $ \frac {\bar {\lambda }_{i}}{\mu _{i}} $ where the $ \bar {\lambda } $ is the average arrival rate and the μi is the average services rate at a certain time. By considering the MAG i processing capacity into account, θi is the maximum acceptable load on the MAG i. If pi>θi, MAG i becomes overloaded and will perform a load balancing technique. In the LB-CPMIPv6 mechanism, every MAG is supposed to send its load and its domain number in a periodical manner to the related HMAG using the heartbeat message, which has been introduced in [28]. The heartbeat message is exchanged periodically by the MAG information related to their related HMAGs within the CSPMIPv6 domain. This is done to inform the HMAGs with their loads as well as to detect the reachability of the other end links. In this work, the heartbeat message is extended to include the load status and the domain number, as shown in Fig. 7. In addition, the Proxy Binding Acknowledgment (PBA) and the Local Proxy Binding Acknowledgment (LPBA) messages, which are presented in [2] and [4] respectively are extended, as shown in Figs. 8 and 9. 
This extension includes the domain number during the initial attachment of the MAGs, while in the other control signaling the domain number is set to zero. The domain numbers are either assigned dynamically by the LMA according to the number of its related domains or configured statically during installation. (Fig. 7: Heartbeat message, including the domain number. Fig. 8: PBA message, including the domain number. Fig. 9: LPBA message, including the domain number.) Each domain number and load information received by the HMAGs, LMA, and MAGs is saved in their databases, which are then used in the intra- and inter-domain load balancing. Subsequently, every HMAG k also measures its load status, FI, by employing the stored load information in its database that is received from its related MAGs within its domain. The total load for each HMAG domain pt can be expressed as $p_{t} = \sum _{i=1}^{M} p_{i}$, where M is the number of MAGs within the HMAG k domain in the CSPMIPv6 domain. If pt>θ, the HMAG k measures the Fairness Index (FI) according to Eq. (2), which is given by [29]: $$ f=\frac{\left(\sum^{M}_{i=1}p_{i}\right)^{2}}{M \cdot \sum^{M}_{i=1}p_{i}^{2}} $$ The FI lies between 0 and 1. If all the MAGs within the HMAG k domain have the same load, the FI is 1. Subsequently, the HMAG k uses the MAGs' load information, which is stored in its policy database, to compute the f value and then compares it with θf. If f <θf, the HMAG k will send a heartbeat message to its related MAGs to inform the most overloaded MAGs to perform a load balancing action. In this work, the extended fields (F flag and load status field) in the heartbeat message that are given by [13] are reused in the same way. Finally, every HMAG in the CSPMIPv6 domain has to send the load it receives from its related MAGs to the LMA. This is performed using the heartbeat message to compute the overall system performance and operates as follows. Once the HMAG receives the total load pt from its related MAGs, the HMAG sends this load to the related LMA periodically using the heartbeat message. Upon receiving the pt loads from the associated HMAGs, the LMA computes the load status FI, using the received loads to measure the overall system performance within the CSPMIPv6 domain. The total load can be expressed as: $$p_{T}= \sum_{k=1}^{N} p_{t} $$ where N is the number of HMAGs in the CSPMIPv6 domain and pt is the total load at the HMAG k. After that, the LMA in the CSPMIPv6 domain measures the FI according to the MAG loads received from the related HMAGs in the entire system, as expressed in Eq. (2), where M now denotes the number of HMAGs in the system. If f is less than θfL, a heartbeat message request is sent by the LMA to the most overloaded HMAGs to inform them to perform a load balancing action. The LMA uses the received loads from all the related HMAGs to determine the most overloaded HMAGs. A new flag, named F, is also added to the heartbeat message and is set to 1 if the heartbeat message comes from the LMA entity to the related HMAGs, or to zero if the heartbeat message comes from the HMAG to the related MAGs. Once the overloaded HMAG receives the heartbeat message, a load balancing action is performed by sending a heartbeat message to the related MAGs, which in turn select the HMNs in the overlapped area. The HMNs must change their point of attachment to another MAG. The criteria of the HMN selection are discussed in the next subsection. 
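To make the load-monitoring computations above concrete, the following Python sketch implements Eq. (1) (the weighted moving average of the packet arrival rate), the resulting MAG load $p_i = \bar{\lambda}_i / \mu_i$, and the fairness index of Eq. (2). This is only an illustrative sketch; the threshold values used below are placeholders, not the values used in the paper's experiments.

```python
# Sketch of the load measurements used by LB-CPMIPv6: the weighted moving
# average of Eq. (1), the MAG load p_i = lambda_bar / mu_i, and Jain's
# fairness index of Eq. (2). Threshold values are illustrative placeholders.

def weighted_moving_average(samples):
    """Eq. (1): the j-th most recent sample gets weight N_m - j + 1, so newer
    samples weigh more.  `samples` is ordered oldest -> newest."""
    n_m = len(samples)
    weights = range(1, n_m + 1)                  # oldest gets weight 1, newest gets n_m
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

def mag_load(arrival_samples, service_rate):
    """Load of one MAG: p_i = lambda_bar_i / mu_i."""
    return weighted_moving_average(arrival_samples) / service_rate

def fairness_index(loads):
    """Eq. (2), Jain's fairness index: 1 means perfectly balanced MAG loads."""
    m = len(loads)
    return sum(loads) ** 2 / (m * sum(p * p for p in loads))

def hmag_should_trigger_balancing(loads, theta_total=0.7, theta_f=0.9):
    """HMAG-side check: total domain load above the threshold AND an unfair
    distribution across its MAGs (placeholder thresholds)."""
    return sum(loads) > theta_total and fairness_index(loads) < theta_f

# Usage with illustrative numbers (arrival samples in packets/ms, mu = 4 packets/ms):
samples = [1.8, 2.1, 2.0, 2.4, 2.6]
loads = [mag_load(samples, 4.0), 0.2, 0.15, 0.7]
print(fairness_index(loads), hmag_should_trigger_balancing(loads))
```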
According to the above explanation, the LB-CPMIPv6 mechanism only affects the protocols with multiple domains. In other protocols, there is no impact because the LB-CPMIPv6 mechanism deals with the whole domain as one domain. Thus, selecting the MAG will be performed based on the LBM-PMIPv6 mechanism [13]. HMN selection The wrong selection procedure of the HMNs candidate significantly degrades the performance of the system. Thus, the system must be adapted to appropriately choose the HMNs in a better manner. In this work, the process of HMNs selection by the overloaded MAG is performed based on some criteria as follows. The MAG chooses the HMN that has an option to change its connection. This indicates that the HMN is located between different networks that are advertising their services to such HMN in order to maintain a continuous IPv6 session. This is achieved if the HMN is located in the MAGs overlapped area and it receives a Signal Strength (SS) from all of them. The MAG creates a candidate list for the HMNs that exist in its overlapped and receive an RSS from another MAG. Then, the MAG should select the HMNs that require the highest data rate from the candidate HMNs. The MAG should not select the HMNs that have a real-time connection (e.g., audio and video) due to the sensitivity to delay leading to increase the handoff latency. To determine the non-real HMN session by the MAG, the "Traffic Class" or the "Flow Label" field of IPv6 must be examined by the MAG for all the candidate HMNs in the MAG overlapped area [30]. So disturbing the real-time session during the handover latency will be avoided. If the MAG overlapped area does not contain any non-real HMN session, the HMNs with the highest data rate will be selected. After the HMNs selection by the MAG from its candidate list, the MAG now is ready to choose the best target MAG to where the HMNs bind. The selection of the target MAG is performed as described in the subsection of the target MAG selection. Target MAG selection For an enhanced load distribution, the selection of the target MAG must be performed as accurately as possible. Therefore, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) algorithm is modified to determine the target MAG in this research. Furthermore, in this study, additional operations are proposed to adopt the algorithm with the clustered PMIPv6. The intra- and inter-cluster handoffs within the CSPMIPv6 domain have been considered due to the adaptation process. The system has achieved better selection technique to the target MAG among the candidates MAGs, which have been reflected in the system performance in terms of handover latency, end-to-end delay and queuing delay as presented in Section 6. The enhanced processes of selecting the best target MAG are performed as follows: The MAG utilizes the RSS, which is reported by the HMN to determine the available network for the HMN. The MAG then places the available networks as candidate networks (i.e., candidate MAGs). This is performed to select the best candidate MAG in terms of RSS, load status and the domain number during the load balancing action. The technique for order preference is adapted from the TOPSIS algorithm that is presented in [27]. The TOPSIS algorithm is used by the serving MAG to determine the target MAG. This algorithm used to choose the optimal MAG as possible according to the Signal Strength reported by the HMNs and the load status of that MAG. 
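Before detailing the TOPSIS-based target MAG selection, the HMN selection criteria listed above (overlapped-area candidates only, real-time sessions excluded, highest data rate preferred) can be sketched as follows. The MN record fields used here are hypothetical and only serve to illustrate the policy.

```python
# Sketch of the HMN selection policy described above.  Each MN record is a
# dict with hypothetical fields: 'rss' maps neighbouring MAG IDs to received
# signal strength (an MN in the overlapped area hears more than one MAG),
# 'real_time' flags audio/video sessions, and 'data_rate' is its demand.

def select_hmns(mns, serving_mag, max_hmns=1):
    # Candidates: MNs in the overlapped area, i.e. hearing at least one other MAG.
    candidates = [mn for mn in mns
                  if any(mag != serving_mag for mag in mn["rss"])]
    # Prefer non-real-time sessions; fall back to all candidates if none exist.
    non_rt = [mn for mn in candidates if not mn["real_time"]]
    pool = non_rt if non_rt else candidates
    # Among the pool, hand over the MNs with the highest data rate first.
    pool.sort(key=lambda mn: mn["data_rate"], reverse=True)
    return pool[:max_hmns]

# Usage with illustrative MNs attached to MAG1:
mns = [
    {"id": "MN1", "rss": {"MAG1": -60, "MAG2": -70}, "real_time": False, "data_rate": 512},
    {"id": "MN2", "rss": {"MAG1": -55},              "real_time": False, "data_rate": 1024},
    {"id": "MN3", "rss": {"MAG1": -65, "MAG3": -72}, "real_time": True,  "data_rate": 2048},
]
print([mn["id"] for mn in select_hmns(mns, "MAG1")])   # -> ['MN1']
```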
Observation shows that the TOPSIS algorithm is not suitable when applied within the clustered PMIPv6 domain. This is due to the fact of dividing the domains into local sub-domains, which is not considered in the TOPSIS algorithm, subsequently increasing the communication overhead. Therefore, some enhancement has been implemented by the LB-CPMIPv6 mechanism starting with modification of the exchanges of messages between the system entities until of change the HMN its points of attachment to ideal TMAG. The TOPSIS algorithm steps including the additional operations of selecting the optimal target MAG for the HMNs are described as follows: Step 1: The TOPSIS algorithm constructs the decision matrix D, which is a [1×n] matrix, as: $$\begin{array}{*{20}l} D = \left\{C_{1},C_{2}, \ldots, C_{n}\right\}, \end{array} $$ where Ci represents a pair of pi and Si for the ith candidate MAG in the matrix D, as: $$\begin{array}{*{20}l} C_{i} = \left(p_{i}, S_{i}\right), \forall_{i} \in \{1, 2, \ldots, n\}, \end{array} $$ where pi denotes the load status and Si represents the signal strength of candidate MAG i and n is the number of candidate MAGs in the matrix D. Moreover, the TOPSIS algorithm is designed to avoid the candidate MAGs, which load is more than a predefined threshold θ. Step 2: The TOPSIS algorithm computes the normalized decision matrix $\bar {D}$, which is a [1×n] matrix, as: $$\begin{array}{*{20}l} \bar{D} = \left\{\bar{C}_{1},\bar{C}_{2}, \ldots, \bar{C}_{n}\right\}, \end{array} $$ where $\bar {C}_{i}$ represents a pair of $\bar {p}_{i}$ and $\bar {S}_{i}$ for the each candidate MAG in the matrix $\bar {D}$, as: $$\begin{array}{*{20}l} \bar{C}_{i} = \left(\bar{p}_{i}, \bar{S}_{i}\right), \forall_{i} \in \{1, 2, \ldots, n\}, \end{array} $$ where $\bar {p}_{i}$ = $\bar {p}_{i}\slash {\sum \limits _{i=1}^{n} \bar {p}^{2}_{i}} $ and denotes the value of normalized load status, while the $\bar {S}_{i} = {\bar {S}_{i}} \slash {\sum \limits _{i=1}^{n} \bar {S}^{2}_{i}}$ and represents the value of normalized signal strength of candidate MAG i and n is the number of candidate MAGs in the matrix $\bar {D}$. Step 3: The TOPSIS algorithm uses the weight value (w), which is a system parameter. This weight (w) and its complement to 1 (1−w) are used to weight the pi and Si, respectively. Since the load at the candidate MAG (pi) has higher priority than the signal strength (Si), the weight value (w) have to always be greater than 0.5. Step 4: The TOPSIS algorithm calculates the weighting decision matrix: V, as: $$\begin{array}{*{20}l} V = \left\{w\bar{C}_{1},w\bar{C}_{2}, \ldots, w\bar{C}_{n}\right\}, \end{array} $$ where $w\bar {C}_{i}$ represents a pair of $ w\bar {p}_{i}$ and $w\bar {S}_{i}$ for the each candidate MAG in the matrix $\bar {D}$, as: $$\begin{array}{*{20}l} w\bar{C}_{i} = \left(w\bar{p}_{i}, w\bar{S}_{i}\right), \forall_{i} \in \{1, 2, \ldots, n\}, \end{array} $$ where the $ w\bar {p}_{i} $ is the result of the multiplication of the weight value with the normalized load status value and the $ w\bar {S}_{i}$ is the result of the multiplication of the weight value with the normalized signal strength. For the sake of simplicity the $ w\bar {p}_{i} $ is replaced by vp, where the $ w\bar {S}_{i}$ is replaced by vS Step 5: Determine the optimal network C∗ (min vp, max vS and the worst network $ \hat {C} $ (max vp, min vS) from V matrix: $$ C^{*} = \left\{v^{*}_{p}, v^{*}_{S} \right\} \hat C = \{\hat{v}_{p}, \hat{v}_{S} \}, \forall_{i} \in \{1, 2, \ldots, n\}. 
$$ Step 6: The TOPSIS algorithm calculates the separation measures using the Euclidean distance. The separation of each candidate MAG from the optimal and the worst MAG, $C^{*}$ and $\hat{C}$, is calculated as: $$\begin{array}{*{20}l} S^{*}_{i} = \sqrt{\left(v_{p} - v^{*}_{p}\right)^{2} + \left(v_{S} - v^{*}_{S}\right)^{2}}, \forall_{i} \in \{1, 2, \ldots, n\}. \end{array} $$ $$\begin{array}{*{20}l} \hat{S}_{i} = \sqrt{\left(v_{p} - \hat{v}_{p}\right)^{2} + \left(v_{S} - \hat{v}_{S}\right)^{2}}, \forall_{i} \in \{1, 2, \ldots, n\}. \end{array} $$ Step 7: Next, the MAG ranks the preference order after calculating the relative separation measure as: $$\begin{array}{*{20}l} x_{i} = \hat{S}_{i} / \left(S^{*}_{i} + \hat{S}_{i}\right), \forall_{i} \in \{1, 2, \ldots, n\}. \end{array} $$ Step 8: This step is performed according to two different cases. First, if the candidate MAG closest to the ideal network environment has the same domain as the serving MAG, the serving MAG selects it without hesitation as the target MAG. Second, if the candidate MAG closest to the ideal network is from a different domain, the serving network looks into the list of near-ideal candidate MAGs to see whether there is another candidate with the same domain, in order to choose it instead of the ideal candidate from a different domain. Selecting the TMAG from the near-ideal candidates maintains the system stability in terms of signal strength and load status. This means that selecting a TMAG from the same domain will not violate the signal strength and load thresholds, which maintains the connection session without any service disruption. Step 9: When the serving MAG has selected the candidate MAG closest to the ideal network environment from the preference ranking, the serving MAG starts a load balancing process by sending a handover command message to the HMN. In order to select the ideal TMAG, a new load balancing signaling is proposed. In the next section, the new load balancing signaling within the clustered protocols is explained in detail. Load balancing signaling for the clustered PMIPv6 domain As mentioned earlier, every MN within the PMIPv6 domain must connect to a particular MAG to communicate with other MNs. This can lead to overloading the MAG, especially in large networks, when the number of MNs is substantial. This section presents the signaling required by our proposed work to mitigate the load of the overloaded MAG. Figure 10 shows that this signaling is extended from the CSPMIPv6 signaling framework, which includes the inter- and intra-handover signaling operation [4]. Given that, the MN sends a report to the serving network, which includes the MN-ID, MN-IID, the new MAG-ID, and the RSS. This report is sent only if the RSS exceeds a threshold, as performed in the Third Generation Partnership Project (3GPP). The scenario of performing a load balancing handover in the clustered PMIPv6 domain proceeds as follows. (Fig. 10: Load balancing signaling in the CSPMIPv6 domain.) The MAG i performs a load balancing procedure in three cases: when its own load exceeds a specific threshold, when its cluster head HMAG j sends a load balancing request, or when the LMA orders the HMAG j to perform load balancing, which in turn forwards this order to the related MAG i. After that, the MAG i starts the load balancing procedure by choosing the appropriate HMN from the overlapped area to perform a handover action. This HMN selection is performed as presented earlier. 
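Before walking through the signaling exchange in detail, the following Python sketch ties Steps 1–9 above together: it ranks candidate MAGs with the standard TOPSIS vector normalization and then applies the domain-aware selection of Step 8. The candidate fields, thresholds, and the `slack` tolerance (introduced here to approximate the "near-ideal candidate" notion of Step 8) are illustrative assumptions, not values from the paper.

```python
# Sketch of the TOPSIS-based target-MAG ranking (Steps 1-9 above) with the
# domain-aware selection of Step 8.  Candidates are dicts with hypothetical
# fields: 'id', 'load' (p_i), 'rss' (S_i), and 'domain'.  w weights the load
# criterion and must exceed 0.5, as required in Step 3.

import math

def rank_candidates(cands, w=0.6, theta=0.7):
    cands = [c for c in cands if c["load"] <= theta]            # Step 1: drop overloaded MAGs
    norm_p = math.sqrt(sum(c["load"] ** 2 for c in cands)) or 1.0   # Step 2: vector normalization
    norm_s = math.sqrt(sum(c["rss"] ** 2 for c in cands)) or 1.0
    v = [(w * c["load"] / norm_p, (1 - w) * c["rss"] / norm_s, c) for c in cands]  # Steps 3-4
    best = (min(vp for vp, _, _ in v), max(vs for _, vs, _ in v))   # Step 5: ideal point
    worst = (max(vp for vp, _, _ in v), min(vs for _, vs, _ in v))  #         and worst point
    ranked = []
    for vp, vs, c in v:
        s_star = math.hypot(vp - best[0], vs - best[1])          # Step 6: separation measures
        s_hat = math.hypot(vp - worst[0], vs - worst[1])
        ranked.append((s_hat / (s_star + s_hat), c))             # Step 7: relative closeness x_i
    ranked.sort(key=lambda t: t[0], reverse=True)
    return ranked

def select_target_mag(cands, serving_domain, w=0.6, theta=0.7, slack=0.05):
    ranked = rank_candidates(cands, w, theta)
    top_score, top = ranked[0]
    if top["domain"] == serving_domain:                          # Step 8, case 1
        return top
    for score, c in ranked[1:]:                                  # Step 8, case 2: prefer a
        if c["domain"] == serving_domain and score >= top_score - slack:   # near-ideal same-domain MAG
            return c
    return top

# Usage: candidates reported by an HMN whose serving MAG is in domain 1.
cands = [
    {"id": "MAG2", "load": 0.30, "rss": 0.8, "domain": 1},
    {"id": "MAG3", "load": 0.25, "rss": 0.9, "domain": 2},
]
print(select_target_mag(cands, serving_domain=1)["id"])
```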
When the MAG i receives the handover report from the HMN, the MAG i extracts the IDs of the candidate MAGs given in the report and stores them as a list in its policy database. Then, the MAG i uses the Handover Initiate (HI) and Handover Acknowledgment (HAck) messages, which were introduced by [11] and extended by [13], to determine the load of the candidate MAGs. This is done by the MAG i sending an HI message, including the MN-ID, to all candidate MAGs. Once the HI message is received, each candidate MAG replies by sending an HAck message back to the MAG i that includes its current load information. A flag N is used, as introduced by [13], to distinguish between the HI and HAck messages. The MAG i uses the load information received from the candidate MAGs together with the domain number to choose the TMAG to which the HMN will move, as described in Section 2.2. Subsequently, a Handover Command Message (HCM) is sent by the MAG i to the HMN. Upon receiving the HCM, the HMN starts a new link connection with the TMAG. When the TMAG detects the attachment of the HMN, it sends an LPBU message to the HMAG k. Once the HMAG k receives the LPBU message, it searches its binding table for the HMN information. If the HMN moves to another MAG within the HMAG k cluster domain, which is called intra-cluster handoff mobility, the HMAG k updates its Binding Cache Entry (BCE) by setting the New MAG (NMAG) address instead of the TMAG address in its MAG field. On the other hand, if the HMN moves to another MAG in a new HMAG (NHMAG) cluster domain within the CSPMIPv6 domain, which is called inter-cluster handoff mobility, the NHMAG will not find any matching entry in its BUL table. Since the HMN comes from another cluster, the NHMAG sends a PBU to the LMA to update its BCE for the HMN. Then, the HMAG field entry in the BCE of the LMA is modified by setting the NHMAG address and releasing the old one for the HMN. After that, the LMA replies to the NHMAG by sending a PBA message. Upon receiving the PBA message, the NHMAG sends an LPBA message to the NMAG after updating its binding entry for the HMN. The NMAG consequently updates its BUL for the HMN and advertises the HNP to the HMN.

This article presents the development of a load balancing mechanism among MAGs in the CSPMIPv6 domain. The performance analysis of this algorithm and the comparative analysis have been carried out using discrete event simulation, in particular NS2. For the evaluation, the work in [13] and the CSPMIPv6 protocol presented in [4] are re-implemented. Also, this work reuses the parameter values and assumptions used in [13] to ensure a comparable platform, as shown in Table 1. Table 1 also shows the new setup values needed by the LB-CPMIPv6 mechanism for performing load balancing. In addition, the performance gain is calculated as in [31] to show the difference between the results produced by the proposed LB-CPMIPv6 and LBM-PMIPv6 mechanisms based on Eq. (13), where x and $ \acute {x} $ represent the results produced using the LB-CPMIPv6 and LBM-PMIPv6 mechanisms, respectively. $$\begin{array}{*{20}l} \text{Performance gain} = \left|\left(\frac{\sum (x-\acute{x})}{\sum (\acute{x}) }\right) \times 100 \right| \end{array} $$

Table 1 Parameters for experimental results

This section describes the simulation setup used in the experiments to evaluate the proposed load balancing mechanism. The experiments are performed using the network simulator NS2 [32, 33].
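For reference, the performance-gain metric of Eq. (13) reduces to a one-line computation over two result series. The following Python sketch is purely illustrative (the function name and the NumPy-based interface are assumptions; it is not part of the NS2 evaluation scripts):

```python
import numpy as np

def performance_gain(x, x_acute):
    """Percentage gain of the LB-CPMIPv6 results (x) over the LBM-PMIPv6
    results (x'), following Eq. (13): |(sum(x - x') / sum(x')) * 100|."""
    x = np.asarray(x, dtype=float)
    x_acute = np.asarray(x_acute, dtype=float)
    return abs(np.sum(x - x_acute) / np.sum(x_acute) * 100.0)

# Example with hypothetical queuing-delay samples (ms):
# performance_gain([3.1, 3.4, 4.0], [3.8, 4.1, 4.6])  ->  16.0
```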
As illustrated in Fig. 11, which shows the system topology, the MNs communicate with each other and with the CN that is connected to the CSPMIPv6 domain through the core network. A Poisson process is used by the CN and the MNs to generate packets with a mean rate of 2 packets/ms. Each MAG measures the packet arrival rate (λ) every 50 ms. When the MAG has collected 20 measurements, it calculates $ \bar \lambda $ using the weighted average method given in Eq. (1). In the proposed network topology, the MNs are randomly scattered over 10 MAGs. In addition, the MAGs are equally connected to two HMAGs (HMAG1, HMAG2), while the HMAGs are associated with one LMA. The load status of each MAG is carried by the heartbeat message that is sent every second to its related HMAG. The time between the first heartbeat and the next should be small, as recommended in [28]. Similarly, the HMAGs send their load status to the LMA to measure the overall system performance. Several MNs are scattered randomly over the MAG areas. The total load varies between 0.05 and 0.8. Every MAG is associated with a limited queue K, as given in Table 1. For simplicity, each MAG in the network topology is considered to have the same service rate μs and threshold value θs. The simulation is conducted under three different scenarios. In the first scenario, the LB-CPMIPv6 mechanism is applied in the PMIPv6 network (without clustering) and is compared with the LBM-PMIPv6 mechanism [13]. This scenario shows the impact of the LB-CPMIPv6 mechanism in the PMIPv6 protocol, which uses no clustering technique. In the second scenario, each MN can connect to one, two or three MAGs with equal probability within the CSPMIPv6 domain. This scenario is performed to inject a high number of MNs into the overlapped area in order to show the actual impact of the intra-cluster handoff process on the system performance. In the last scenario, each MN is connected to one MAG or to no MAG with probabilities of 0.2 and 0.8, respectively, without any concentration on the overlapped area between the MAGs.

The network topology used for simulation

In this section, the performance metrics evaluated are the average queuing delay, handover latency, transmission rate and packet loss. The queuing delay is defined as the sum of the waiting times of the packets in the queue per MAG. The transmission rate is measured by calculating the average amount of data transmitted from the MAGs over the entire simulation. Three scenarios are conducted to demonstrate the improvement of the LB-CPMIPv6 mechanism in the clustered protocol over the LBM-PMIPv6 mechanism, in addition to the situation with no load balancing. The scenarios are conducted as follows. In the first scenario, the proposed mechanism is applied to the standard PMIPv6 protocol, which means that the MAGs belong to the same domain (no hierarchical domain). In this scenario, a comparison between the LB-CPMIPv6 mechanism and the LBM-PMIPv6 mechanism in the PMIPv6 domain is carried out in terms of the average queuing delay. The result of this comparison is depicted in Fig. 12. As observed from the figure, both the LB-CPMIPv6 mechanism and the LBM-PMIPv6 mechanism achieve similar results in terms of average queuing delay, while the PMIPv6 protocol, which has no load balancing mechanism, shows a higher average queuing delay.
In the LB-CPMIPv6 mechanism, if there is only one domain (no clustering), the serving MAG selects the TMAG according to its load and the RSS reported by the HMN. In this case, the LB-CPMIPv6 mechanism behaves exactly like the LBM-PMIPv6 mechanism, while the PMIPv6 protocol shows the highest average queuing delay since it does not support load balancing. Thus, the packet service time inside the overloaded MAG buffer increases as a result of the traffic from the connected MNs.

The average queuing delay per MAG in the PMIPv6 protocol

In the second scenario, Fig. 13 illustrates the average queuing delay per packet at the MAG versus the overall system load for the entire simulation time within a clustered environment. In this scenario, the CSPMIPv6 protocol is employed as the experimental environment to show the effectiveness of the proposed LB-CPMIPv6 mechanism in the clustered domain, which differs entirely from the first scenario applied in the non-clustered domain. It is evident that the LB-CPMIPv6 and LBM-PMIPv6 mechanisms achieve a lower average queuing delay than the CSPMIPv6 protocol, which has no load balancing. Performing the load balancing action reduces the packet buffering time at the overloaded MAGs. Furthermore, LB-CPMIPv6, LBM-PMIPv6 and CSPMIPv6 show the same queuing delay results before the predetermined thresholds are reached.

The average queuing delay obtained from scenario 2

When the system load reaches 0.105 and 0.175, the HMAG and/or the LMA start to perform a load balancing action, because the values in the figure are influenced only after the predefined thresholds are reached. In addition, when the total load reaches 0.35 for θf=0.5, a few MAGs suffer from the overload issue. Subsequently, the overloaded MAGs start performing the load balancing procedure, as depicted in Fig. 13. The LB-CPMIPv6 mechanism shows queuing delay results identical to those of the LBM-PMIPv6 mechanism before the respective thresholds are reached. On the other hand, once the respective thresholds are reached, the average queuing delay increases as the load increases. This is due to the increase in the number of packets exchanged among the MNs. However, the LBM-PMIPv6 mechanism shows a higher queuing delay than the LB-CPMIPv6 mechanism, as illustrated in Fig. 13. This is because the LBM-PMIPv6 mechanism does not utilize the sub-domains of the clustered protocol, which penalizes the intra-cluster handoff. The performance gain of the LB-CPMIPv6 mechanism over the LBM-PMIPv6 mechanism is 15.66% on average. The LB-CPMIPv6 mechanism selects the ideal MAG from the best candidates by employing the domain number, which in turn shortens the time needed to deliver packets to their destination after the handoff. For example, when the load on the HMAG reaches 0.105, the HMAG performs load balancing by sending a message to the overloaded MAG located in its domain, and each overloaded MAG selects the TMAG based on its load, domain number and the RSS. Figure 14 illustrates the packet loss ratio of the LB-CPMIPv6 and LBM-PMIPv6 mechanisms and the CSPMIPv6 protocol. The advantage of applying the load balancing mechanisms in the protocols can be seen clearly in the overall reduction of load at the overloaded MAGs. For example, load balancing reduces the level of buffer utilization, which in turn reduces the number of lost packets. In addition, the LB-CPMIPv6 mechanism achieves better results in terms of packet loss than the LBM-PMIPv6 mechanism.
This is because the LB-CPMIPv6 mechanism moves the HMNs within the same domain whenever possible, which shortens the handoff process and thus reduces packet loss. In other words, shortening the time needed to perform the handoff process reduces the time packets wait in the buffer, which in turn reduces the packet loss.

Number of packet loss for LB-CPMIPv6 mechanism, Kim and Lee [13] and non-load balancing

When the total load reaches 0.175, the LMA and at least one of the HMAGs exceed their thresholds, which are given in Table 1 ($ \theta _{f_{L}} $ and θ), and therefore the balancing function is triggered by the LMA and/or the respective HMAG. The TMAG that belongs to the same domain as the MAG performing the load balancing action is selected according to the MAG selection criteria of the LB-CPMIPv6 mechanism. This definitely shortens the time needed to register the HMN at the TMAG, which in turn decreases the packet waiting time in the queue. This enhancement reduces packet congestion from the point of view of the queuing system. Moreover, selecting the TMAG based only on the RSS and load status, as in the LBM-PMIPv6 mechanism, increases the packet loss ratio. This is due to the long registration time needed to register the HMN in a different domain, which increases the buffering time. This results in increased congestion from the queuing system perspective. A related example is the movement of the HMN to another cluster domain, which causes an extra delay due to the time needed to exchange its information among the HMAGs. After this, the information has to be forwarded to the TMAG, which in turn requires a buffering technique to preserve the packets during the handover process. Figure 15 compares the effects of the LB-CPMIPv6 mechanism and the LBM-PMIPv6 mechanism on the handover latency. The handover latency is the interval between the time of the last packet received by the HMN over the old path and the time of the first packet received by the HMN over the new path. The LB-CPMIPv6 mechanism outperforms the LBM-PMIPv6 mechanism in terms of handover latency. This is due to the fact that the TMAG is selected based on the domain factor. In other words, the time needed for the HMNs to perform the handoff process is reduced. This is achieved by eliminating the authentication process at the TMAG and performing the handover without the involvement of the LMA, which may be located far from the HMNs. The handover process is performed by the overloaded MAG if the load at the LMA, HMAG or MAG exceeds its predetermined threshold. As shown in Fig. 15, the handover starts when the total load reaches 0.175 and is performed again when some MAGs become overloaded. The performance gain of the LB-CPMIPv6 mechanism over the LBM-PMIPv6 mechanism is almost 32.68%.

Impact of intra-cluster handoff on the handover latency

In the third scenario, the average queuing delay, transmission rate and end-to-end delay are measured, as shown in Figs. 16, 17, and 18, respectively.

The average queuing delay obtained from the third scenario

The transmission rate obtained from the third scenario

End-to-end delay per MAG versus the total load

Figure 16 depicts the impact of the LB-CPMIPv6 mechanism on the average queuing delay in comparison with the LBM-PMIPv6 mechanism and the original CSPMIPv6 protocol.
The figure shows that the LB-CPMIPv6 mechanism improves on the performance of the LBM-PMIPv6 mechanism even when the overlapped area contains only a small number of MNs. This can be attributed to the proposed mechanism selecting the TMAG from the same domain when the overloaded MAG performs a load balancing action. This definitely shortens the time needed to register and authenticate the MN at the TMAG, which in turn decreases the packet waiting time in the queue, especially in the limited queue. The performance gain of the LB-CPMIPv6 mechanism over the LBM-PMIPv6 mechanism is almost 9%. Figure 17 shows the average data transmission rate from the MAGs per MN in the third scenario, where the MNs are scattered randomly within the CSPMIPv6 domain. It is obvious that the LB-CPMIPv6 mechanism has a higher data transmission rate than the other mechanisms, while CSPMIPv6 with no load balancing has the lowest data transmission rate. We observe from Fig. 17 that the data transmission rate is roughly stable in the LB-CPMIPv6 and LBM-PMIPv6 mechanisms. However, in the case of no load balancing, the data transmission rate decreases whenever the MN arrival rate increases. This is due to the absence of load balancing, which leads to an unbalanced situation at the MAGs within the CSPMIPv6 domain. Furthermore, the LB-CPMIPv6 mechanism shows a significant enhancement of the data transmission rate compared to the LBM-PMIPv6 mechanism, as shown in Fig. 17. This is due to the fact that the LB-CPMIPv6 mechanism gives higher priority to selecting the TMAG based on its domain without violating the load status or the SS threshold. This increases the traffic handled by the MAGs, resulting in an increased data transmission rate. Furthermore, forwarding the HMN traffic to a TMAG located in the same cluster reduces the time needed for registering and authenticating the HMN at the TMAG, which in turn increases the number of packets delivered to their targets. In the LBM-PMIPv6 mechanism, however, the traffic usually suffers an extra delay as a result of sending packets to another cluster. Figure 18 presents the measured average end-to-end delay per MAG in CSPMIPv6 versus the total load on the overall system. Interestingly, the LB-CPMIPv6 mechanism outperforms the LBM-PMIPv6 mechanism and CSPMIPv6, which has no load balancing, despite the reduced overlapped area. This is due to the overloaded HMAGs performing the load balancing action, which distributes the load within their clusters. This, in turn, shortens the routing path and selects the MAG closest to the serving MAG. Selecting the TMAG based on its domain number has a positive impact on the overall system performance. The result shows that utilizing the domain number during the TMAG selection reduces the end-to-end delay, because the path for the HMN is recovered faster after the handoff. Furthermore, the original CSPMIPv6 protocol has a higher end-to-end delay due to the unfair distribution of the load. The unbalanced MAGs suffer from heavy load, which increases the overhead on the MAG queues and causes extra packet delay. Moreover, CSPMIPv6 tends to use relatively long paths, which also contributes to increasing the end-to-end delay. The PMIPv6 protocol and its extensions have been proposed to provide seamless handover within a localized management network.
This is achieved by relieving the MN of any signaling related to the mobility process when the MN changes its link: the new MAG performs the mobility-related signaling with the LMA on behalf of the MN. Furthermore, the MAG establishes a tunnel with the LMA to send and receive the packets of the MN. However, to establish a new link connection, the MN has to be associated with a specific MAG. This association could overload the MAG. Consequently, the LB-CPMIPv6 mechanism has been proposed in this article to distribute the load fairly among the MAGs. The main advantage of LB-CPMIPv6 is that it takes the clustered domain structure of the clustered protocols into account, which is not considered by the other competitive mechanisms. In LB-CPMIPv6, an HMN that has a real-time session will not be selected during the load balancing process; this restriction protects critical applications from service disruption. Furthermore, the CSPMIPv6 handover signaling has been extended to accommodate the newly proposed load balancing mechanism. Moreover, the LPBA, PBA, and heartbeat messages are modified to enable sharing of the domain number for the new load balancing mechanism. The LB-CPMIPv6 mechanism is implemented and simulated using the well-known NS2 simulator. The evaluation of the LB-CPMIPv6 mechanism against the LBM-PMIPv6 mechanism and the CSPMIPv6 protocol is performed in terms of queuing delay, packet loss ratio, end-to-end delay, and transmission rate. The results show that the new load balancing mechanism achieves better performance by reducing the average queuing delay, packet loss and end-to-end delay, and by increasing the transmission rate.

Following publication of the original article [1], an error was noticed in the article: the third author's name was inadvertently misspelled.

D Johnson, C Perkins, J Arkko, Mobility support in IPv6. Technical report, RFC 3775 (June 2004). https://www.rfc-editor.org/info/rfc3775. S Gundavelli, K Leung, V Devarapalli, K Chowdhury, B Patil, Proxy Mobile IPv6. Technical report, IETF, RFC 5213 (2008). https://www.rfc-editor.org/info/rfc5213. SM Ghaleb, S Subramaniam, ZA Zukarnain, A Muhammed, Mobility management for IoT: a survey. EURASIP J. Wirel. Commun. Netw. 2016(1), 165 (2016). AJ Jabir, SK Subramaniam, ZZ Ahmad, NAWA Hamid, A cluster-based proxy mobile IPv6 for IP-WSNs. EURASIP J. Wirel. Commun. Netw. 2012(1), 1–17 (2012). JH Lee, KD Singh, JM Bonnin, S Pack, Mobile data offloading: a host-based distributed mobility management approach. IEEE Internet Comput. 18(1), 20–29 (2014). MM Islam, E-N Huh, Sensor proxy mobile IPv6 (SPMIPv6)—a novel scheme for mobility supported IP-WSNs. Sensors. 11(2), 1865–1887 (2011). MM Islam, S-H Na, S-J Lee, E-N Huh, A novel scheme for PMIPv6 based Wireless Sensor Network. Future Generation Information Technology: Second International Conference, FGIT 2010, Jeju Island, Korea, December 13-15, 2010. Proceedings (Springer, Berlin, 2010). MM Islam, TD Nguyen, AA Al Saffar, S-H Na, E-N Huh, in Computational Collective Intelligence. Technologies and Applications: Second International Conference, ICCCI 2010, Kaohsiung, Taiwan, November 10-12, 2010. Proceedings, Part III, ed. by J-S Pan, S-M Chen, and NT Nguyen. Energy efficient framework for mobility supported smart IP-WSN (Springer, Berlin, Heidelberg, 2010), pp. 282–291. http://dx.doi.org/10.1007/978-3-642-16696-9_31. H-N Nguyen, C Bonnet, in 2008 5th IEEE International Conference on Mobile Ad Hoc and Sensor Systems.
Proxy Mobile IPv6 for cluster based heterogeneous wireless mesh networks (IEEE, Atlanta, 2008). https://doi.org/10.1109/MAHSS.2008.4660097. AJ Jabir, S Shamala, Z Zuriati, N Hamid, A comprehensive survey of the current trends and extensions for the Proxy Mobile IPv6 Protocol. IEEE Syst. J. PP(99), 1–17 (2015). H Yokota, K Chowdhury, R Koodli, B Patil, F Xia, Fast Handovers for Proxy Mobile IPv6. Technical report, RFC 5949 (2010). https://www.rfc-editor.org/info/rfc5949. MS Kim, S Lee, D Cypher, N Golmie, in Global Telecommunications Conference (GLOBECOM 2010). Fast Handover Latency Analysis in Proxy Mobile IPv6 (IEEE, 2010), pp. 1–5. M-S Kim, S Lee, Load balancing and its performance evaluation for layer 3 and IEEE 802.21 frameworks in PMIPv6-based wireless networks. Wirel. Commun. Mob. Comput. 10(11), 1431–1443 (2010). K Mun-Suk, L SuKyoung, A novel load balancing scheme for PMIPv6-based wireless networks. AEU - Int. J. Electron. Commun. 64(6), 579–583 (2010). MS Kim, S Lee, in 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications. Load balancing based on layer 3 and IEEE 802.21 frameworks in PMIPv6 networks (IEEE, Tokyo, 2009), pp. 788–792. https://doi.org/10.1109/PIMRC.2009.5450177. H Kong, Y Jang, H Choo, in Computational Science and Its Applications (ICCSA), 2010 International Conference On. An efficient load balancing of Mobile Access Gateways in Proxy Mobile IPv6 Domains (IEEE, Fukuoka, 2010), pp. 289–292. https://doi.org/10.1109/ICCSA.2010.67. J Dimple, C Kailash, A load reduction and balancing scheme for MAG operating in PMIPv6 domain (Springer, Berlin, Heidelberg, 2013). http://dx.doi.org/10.1007/978-3-642-35864-7_19. S Qutub, T Anjali, in Electro/Information Technology (EIT), 2012 IEEE International Conference On. Load sharing mechanism for Mobile Access Gateways in PMIPv6 (IEEE, Indianapolis, 2012). https://doi.org/10.1109/EIT.2012.6220760. F Teraoka, T Arita, in 2011 Third International Conference on Ubiquitous and Future Networks (ICUFN). PNEMO: a network-based localized mobility management protocol for mobile networks (IEEE, Dalian, 2011), pp. 168–173. https://doi.org/10.1109/ICUFN.2011.5949156. H Jiang, Load sharing support for MAGs in Proxy Mobile IPv6. Technical report, IETF-Draft (expired) (December, 2011). T-T Nguyen, C Bonnet, Considerations of IP multicast for load balancing in Proxy Mobile IPv6 networks. Comput. Netw. 72(Supplement C), 113–126 (2014). TT Nguyen, C Bonnet, in 2014 International Conference on Computing, Networking and Communications (ICNC). Load balancing mechanism for Proxy Mobile IPv6 networks: an IP multicast perspective (IEEE, Honolulu, 2014), pp. 766–770. https://doi.org/10.1109/ICCNC.2014.6785433. S Jeon, RL Aguiar, N Kang, Load-balancing Proxy Mobile IPv6 Networks with mobility session redirection. IEEE Commun. Lett. 17(4), 808–811 (2013). C-M Huang, M-S Chiang, PB Chau, in 2015 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). A load-considered fast media independent handover control scheme for Proxy Mobile IPv6 (LC-FMIH-PMIPv6) in the multiple-destination environment (IEEE, Constanta, 2015), pp. 171–175. https://doi.org/10.1109/BlackSeaCom.2015.7185109. SM Raza, D Park, Y Park, K Lee, H Choo, in Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. IMCOM '16. Dynamic load balancing of local mobility anchors in software defined networking based Proxy Mobile IPv6 (ACM, New York, 2016), p. 106. http://doi.acm.org/10.1145/2857546.2857654.
Y Kyung, Y Kim, K Hong, H Choi, M Joo, J Park, in 2016 IEEE Symposium on Computers and Communication (ISCC). Mobility-aware load distribution scheme for scalable SDN-based mobile networks (IEEE, Messina, 2016), pp. 119–124. https://doi.org/10.1109/ISCC.2016.7543725. K Papagiannaki, N Taft, ZL Zhang, C Diot, in INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications. IEEE Societies, 2. Long-term forecasting of Internet backbone traffic: observations and initial models (IEEE, San Francisco, 2003), pp. 1178–1188. https://doi.org/10.1109/INFCOM.2003.1208954. V Devarapalli, R Koodli, H Lim, N Kant, S Krishnan, J Laganier, Heartbeat mechanism for Proxy Mobile IPv6. Technical report, RFC 5847, IETF Network Working Group (June 2010). https://www.rfc-editor.org/info/rfc5847. R Jain, The art of computer systems performance analysis: techniques for experimental design, measurement, simulation, and modeling (Wiley, New York, 1991). S Deering, R Hinden, Internet Protocol, Version 6 (IPv6) Specification. Technical report, IETF, RFC 2460 (1998). https://www.rfc-editor.org/info/rfc2460. M Ghaleb, S Subramaniam, M Othman, Z Zukarnain, Predetermined path of mobile data gathering in wireless sensor networks based on network layout. EURASIP J. Wirel. Commun. Netw. 2014(1), 51 (2014). T Issariyakul, E Hossain, Introduction to Network Simulator 2 (NS2) (Springer, Boston, 2012). K Fall, K Varadhan, The ns manual (formerly ns notes and documentation). The VINT project. 47, 19–231 (2005).

Thanks to all my family, sons, friends, and colleagues who helped me to achieve this work. A special thanks to my supervisor Dato' Prof. Shamala Subramaniam. The NS2 simulator and the PMIPv6 patch are employed to test the work in this paper.

Department of Communication Technology and Network, Universiti Putra Malaysia, Serdang, Selangor D.E., 43400, Malaysia: Safwan M. Ghaleb, Shamala Subramaniam, Zuriati Ahmad Zukarnain & Abdullah Muhammed. Sports Academy, Universiti Putra Malaysia, Serdang, Selangor D.E., 43400, Malaysia: Shamala Subramaniam.

SMG designed the methods, conducted the experiments, evaluated performance, and wrote the paper. SS defined the research area, problems and objectives. ZAZ and AM contributed equally by giving final approval of the version to be published and contributed to the analysis. All authors read and approved the final manuscript. Correspondence to Safwan M. Ghaleb.

Safwan Ghaleb received his bachelor degree in Computer Science from the University of Jordan, Amman, Jordan, in 2009 and the master degree in computer science from Jordan University of Science and Technology, Irbid, Jordan, in 2012. He is working towards a Ph.D. in computer networks at Universiti Putra Malaysia. His research interests include the Internet of Things (IoT), wireless and mobile networks, and data mining. S. Shamala received the B.S. degree in Computer Science from Universiti Putra Malaysia (UPM) in 1996, the M.S. (UPM) in 1999, and the Ph.D. (UPM) in 2002. Her research interests are computer networks, simulation and modeling, scheduling and real-time systems. Dr. Shamala is now a Professor at the Department of Communication Technology and Networks, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia (UPM), Malaysia. Dr. Zuriati Ahmad Zukarnain is a professor at the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia.
She has served as Head of the Department of Communication Technology and Networks at the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia. She received her PhD from the University of Bradford, UK. Her research interests include: efficient multiparty QKD protocols for classical networks and the cloud, load balancing in wireless ad hoc networks, quantum processor units for quantum computers, authentication time of IEEE 802.15.4 with a multiple-key protocol, intra-domain mobility handling schemes for wireless networks, efficiency and fairness for new AIMD algorithms, and a kernel model to improve computation speedup and workload performance. She has been actively involved as a member of the editorial board of several international peer-reviewed and cited journals. Dr. Zuriati is currently undertaking nationally funded projects on a QKD protocol for the cloud environment as well as routing and load balancing in wireless ad hoc networks. Dr. Zuriati is the founder of ZA Quantum Sdn Bhd, a start-up company from Universiti Putra Malaysia that produces a software design tool for quantum communication known as the Quantum Communication Simulator (QuCS). Abdullah Muhammed received the bachelor degree in computer science from Universiti Putra Malaysia in 1998, the master degree in computer science from Universiti Malaya in 2004 and the PhD degree in computer science from the University of Nottingham, United Kingdom, in 2014. He is a senior lecturer at the Department of Communication Technology and Networks, Faculty of Computer Science and Information Technology, Universiti Putra Malaysia and is currently the HoD. His main research interests include cloud computing, mobile and wireless networks, scheduling, heuristics and optimization.

The original version of this article was revised; for further details, please see the Correction article 10.1186/s13638-018-1231-1 (corrected publication, October 2018).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Keywords: Proxy Mobile IPv6, IPv6 protocol, Queuing delay
\begin{document} \date{\today} \title[Good Pair of Lattices]{Pairs of Full-Rank Lattices With Parallelepiped-Shaped Common Fundamental Domains} \author[H. Burgiel]{Heidi Burgiel} \address{Dept.\ of Mathematics \& Computer Science\\ Bridgewater State University\\ Bridgewater, MA 02325 U.S.A.\\ } \email{[email protected]} \author[V. Oussa]{Vignon Oussa} \address{Dept.\ of Mathematics \\ Bridgewater State University\\ Bridgewater, MA 02325 U.S.A.\\ } \keywords{Lattices, Fundamental Domains, Tiling, Packing, Orthonormal bases} \subjclass[2000]{52C22,52C17, 42B99, 42C30.} \maketitle \begin{abstract} We provide a complete characterization of pairs of full-rank lattices in $\mathbb{R}^{d}$ which admit common connected fundamental domains of the type $N\left[ 0,1\right) ^{d}$ where $N$ is an invertible matrix of order $d.$ Using our characterization, we construct several pairs of lattices of the type $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ which admit a common fundamental domain of the type $N\left[ 0,1\right) ^{d}.$ Moreover, we show that for $d=2,$ there exists an uncountable family of pairs of lattices of the same volume which do not admit a common connected fundamental domain of the type $N\left[ 0,1\right) ^{2}.$ \end{abstract} \section{Introduction} Let $\left\{ e_{k}:1\leq k\leq d\right\} $ be a basis for the vector space $\mathbb{R}^{d}.$ A full-rank lattice $\Gamma$ is a discrete subgroup of $\mathbb{R}^{d}$ which is generated by the set $\left\{ e_{k}:1\leq k\leq d\right\} .$ The number of generators of the lattice is called the rank of the lattice, and the set $\left\{ e_{k}:1\leq k\leq d\right\} $ is called a basis for the lattice. Adopting the convention that vectors in $\mathbb{R} ^{d}$ are written as $d\times1$ matrices, it is convenient to describe the lattice $\Gamma$ as $\Gamma=M\mathbb{Z}^{d}=\left\{ Mk:k\in\mathbb{Z} ^{d}\right\} $ where the $j^{th}$ column of the matrix $M$ corresponds to the vector $e_{j}.$ For a full-rank lattice $M\mathbb{Z}^{d},$ the positive number $\left\vert \det M\right\vert $ which is equal to the volume of the parallelepiped $M\left[ 0,1\right) ^{d}$ is conveniently called the volume of the lattice $M\mathbb{Z}^{d}.$ Let $E$ be a Lebesgue measurable subset of $\mathbb{R}^{d}.$ We say that $E$ packs $\mathbb{R}^{d}$ by $\Gamma$ if and only if for any $\lambda,\gamma \in\Gamma,\gamma\neq\lambda,$ $\left( E+\lambda\right) \cap\left( E+\gamma\right) =\emptyset.$ Moreover, we say that $E$ is a measurable fundamental domain of $\Gamma$ if and only if for any $\lambda,\gamma\in \Gamma,\gamma\neq\lambda,$ $\left( E+\lambda\right) \cap\left( E+\gamma\right) =\emptyset$ and $\cup_{\gamma\in\Gamma}\left( E+\gamma \right) =\mathbb{R}^{d}.$ It is worth noticing that if $E$ packs $\mathbb{R}^{d}$ by $\Gamma$ and if the Lebesgue measure of $E$ is equal to the volume of $\Gamma$ then $E$ is a fundamental domain of $\Gamma.$ According to a remarkable result of Deguang and Wang (Theorem $1.1$ of \cite{Han Yang Wang}), it is known that two full-rank lattices in $\mathbb{R}^{d}$ of the same volume have a common fundamental domain. This result has profound applications in time-frequency analysis \cite{Heil,Pfander, Grog}. In \cite{Han Yang Wang}, the authors provide a general procedure for constructing a fundamental domain for any given pair of lattices of the same volume. However, it is often the case that the fundamental domains obtained in \cite{Han Yang Wang} are disconnected, unbounded and difficult to describe. 
It is therefore natural to ask if it is possible to characterize pairs of lattices which admit `simple' common fundamental domains. Let us be more precise about what we mean by a `simple' fundamental domain for a lattice. Let $\Gamma_{1}$ and $\Gamma_{2}$ be two full-rank lattices of the same volume. We say that the pair $\left( \Gamma_{1},\Gamma_{2}\right) $ is a \textbf{good pair of lattices} if and only if there exists an invertible matrix $N$ of order $d$ such that the parallelepiped $N\left[ 0,1\right) ^{d}$ is a common fundamental domain for $\Gamma_{1},\Gamma_{2}.$ Clearly, such a fundamental domain is a simple set in the sense that it is connected, star-shaped, convex and easily described. Although the investigation of good pairs of lattices is an interesting problem in its own right, it is also worth noting that common fundamental domains for pairs of lattices which are bounded and star-shaped are of central importance in the construction of smooth frames which are compactly supported \cite{Pfander}. \subsection{Short overview of the paper} Our main objective in this paper is to provide solutions to the following problems: \begin{problem} \label{One}Is it possible to obtain a simple characterization of good pairs of lattices? \end{problem} \begin{problem} \label{Two} For which unimodular matrices $M$ is $\left( M\mathbb{Z} ^{d},\mathbb{Z}^{d}\right) $ a good pair of lattices (or not)? \end{problem} On the one hand, we are able to address Problem \ref{One} in a way that we judge satisfactory. On the other hand, while we are able to describe several non-trivial families of good pairs of lattices of the type $\left( M \mathbb{Z}^{d},\mathbb{Z}^{d}\right) $, to the best of our knowledge Problem \ref{Two} is still open. Here is a summary of the results obtained in this paper: \begin{itemize} \item We present a simple yet powerful characterization of good pairs of lattices in Proposition \ref{Main}, and we describe various properties (Proposition \ref{changebasis}) of good pairs of lattices. \item Addressing Problem \ref{Two}, in Propositions \ref{unipotent} and \ref{order3} we construct several non-trivial families of good pairs of lattices of the type $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ in any given dimension. Moreover, in Proposition \ref{notgood} we establish the existence of an uncountable collection of pairs of lattices in dimension 2 which have the same volume and are not good pairs. \item We provide methods that can be exploited to construct good pairs of lattices in higher dimensions from good pairs of lattices in lower dimensions (Proposition \ref{tensor}). \end{itemize} Among several results obtained in this work, here are the main ones. \begin{proposition} \label{Main} Let $\mathbf{0}$ be the zero vector in $\mathbb{R}^{d}.$ Let $\Gamma_{1}=M_{1}\mathbb{Z}^{d}$ and $\Gamma_{2}=M_{2}\mathbb{Z}^{d}$ be two full-rank lattices of the same volume.
$\left( \Gamma_{1},\Gamma_{2}\right) $ is a good pair of lattices if and only if there exists a unimodular matrix $N$ ($\left\vert \det N\right\vert =1$) such that $N\left( -1,1\right) ^{d}\cap\left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}=\left\{ \mathbf{0} \right\} $ and $N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ \end{proposition} Notice that for any given invertible matrix $M$ of order $d,$ the zero vector is always an element of the set $\left( \left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}\cup\mathbb{Z}^{d}\right) \cap M\left( -1,1\right) ^{d}.$ Thus, $\left( M_{1}\mathbb{Z}^{d},M_{2}\mathbb{Z}^{d}\right) $ is a good pair of full-rank lattices if and only if there exists a matrix $N$ of order $d$ such that $\left\vert \det N\right\vert =1$ and the set $\left( \left( \left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}\right) \cup\mathbb{Z} ^{d}\right) \cap\left( N\left( -1,1\right) ^{d}\right) $ is a singleton. We also observe that the condition described in Proposition \ref{Main} is easily checked (especially in lower dimensional vector spaces) and will be exploited to derive other results. Additionally, we would like to point out that since the volume of the set $N\left( -1,1 \right) ^{d}$ must be equal to $2^{d},$ according to a famous theorem of Minkowski (Theorem $2,$ \cite{Til}) the closure of the set $N\left( -1,1 \right) ^{d}$ must contain points of the lattices $\left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}, \mathbb{Z}^{d}$ other than the zero vector. Thus, $\left( M_{1}\mathbb{Z} ^{d},M_{2}\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if there exists a unimodular matrix $N$ such that the only nonzero elements of $\left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}$ and $\mathbb{Z}^{d}$ which belong to the closure of the set $N\left( -1,1\right) ^{d}$ are on the boundary of the $N\left( -1,1\right) ^{d}$. \begin{proposition} \label{unipotent}Let $M$ be a triangular matrix of order $d$ with ones on the diagonal and let $P,Q$ be unimodular integral matrices of order $d.$ Then $\left( PMQ\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices with common fundamental domain $PM\left[ 0,1\right) ^{d}.$ \end{proposition} Put $\mathbf{p}=\left( p_{1},\cdots,p_{d-1}\right) \in\mathbb{R}^{d-1}$ and define the matrix-valued functions $\mathbf{p}\mapsto M(\mathbf{p})$ and $\mathbf{p}\mapsto N(\mathbf{p})$ as follows: \[ M\left( \mathbf{p}\right) =\left[ \begin{array} [c]{cccc} p_{1} & 1 & & \\ & \ddots & \ddots & \\ & & p_{d-1} & 1\\ & & & {\prod\limits_{k=1}^{d-1}}\frac{1}{p_{k}} \end{array} \right] ,\text{ }N\left( \mathbf{p}\right) =\left[ \begin{array} [c]{ccccc} & & & & 1\\ & & & 1 & p_{2}\\ & & \udots & \udots & \\ & 1 & p_{d-1} & & \\ 1 & {\prod\limits_{k=1}^{d-1}}\frac{1}{p_{k}} & & & \end{array} \right] . \] Furthermore, given $\mathbf{m}=\left( m_{1},m_{2},\cdots,m_{d-1}\right) \in\mathbb{Z}^{d-1}$, we define the matrix-valued function: \[ \mathbf{m}\mapsto D\left( \mathbf{m}\right) =\left[ \begin{array} [c]{ccccc} \frac{1}{m_{1}} & & & & \\ & \frac{1}{m_{2}} & & & \\ & & \ddots & & \\ & & & \frac{1}{m_{d-1}} & \\ & & & & {\prod\limits_{k=1}^{d-1}}m_{k} \end{array} \right] . 
\] \begin{proposition} \label{order3}Let $P,Q$ be unimodular integral matrices of order $d,$ and let $U,V$ be unimodular integral matrices of order $2.$ \begin{enumerate} \item If $p_{1},\cdots,p_{d-1}\neq0$ then $\left( \left( PM\left( \mathbf{p}\right) Q\right) \mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices with common fundamental domain $PN\left( \mathbf{p}\right) \left[ 0,1\right) ^{d}.$ \item If $m_{1}m_{2}\cdots m_{d-1}\neq0$ then $\left( \left( PD\left( \mathbf{m}\right) Q\right) \mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. \item If $m,n$ are nonzero integers such that $\gcd\left( m,n\right) =1\ $then \[ \left( \left( U\left[ \begin{array} [c]{cc} \frac{m}{n} & 0\\ 0 & \frac{n}{m} \end{array} \right] V\right) \mathbb{Z}^{2},\mathbb{Z}^{2}\right) \] is a good pair of lattices. \end{enumerate} \end{proposition} It is worth mentioning that Part $3$ of Proposition \ref{order3} has also been proved in \cite{Pfander}, Proposition $5.3.$ However, the novelty here lies in our proof. Next, for any real number $r,$ we define the matrix-valued function \[ r\mapsto R\left( r\right) = \begin{bmatrix} \sqrt{r} & 0\\ 0 & \frac{1}{\sqrt{r}} \end{bmatrix} . \] \begin{proposition} \label{notgood} For any unimodular integral matrices $P,Q,$ if $r$ is a natural number such that $\sqrt{r}$ is irrational then $\left( \left( PR\left( r\right) Q\right) \mathbb{Z}^{2},\mathbb{Z}^{2}\right) $ is not a good pair of lattices. \end{proposition} We remark that Proposition \ref{notgood} is consistent with Proposition $5.3,$ \cite{Pfander} where it is proved that it is not possible to find a star-shaped common fundamental domain for the lattices $R(2) \mathbb{Z} ^{2},\mathbb{Z}^{2}$. The present work is organized around the proofs of the results mentioned above. In the second section we fix notations and present several results crucial to the third section of the paper, in which we prove our main propositions. \section{Generalities and Intermediate Results} We remark that the investigation of good pairs of lattices in $\mathbb{R}^{d}$ where $d=1$ is not interesting. In fact, let us suppose that $\Gamma _{1},\Gamma_{2}$ are two full-rank lattices of the same volume in $\mathbb{R}.$ Then there exist nonzero real numbers $a,b$ such that $\Gamma_{1}=a\mathbb{Z}\text{ and }\Gamma_{2}=b\mathbb{Z}$ and $\left\vert a\right\vert =\left\vert b\right\vert .$ Thus, the half-open interval $\left\vert a\right\vert \left[ 0,1\right) $ is a common fundamental domain for the pair $\left( \Gamma_{1},\Gamma_{2}\right) ,$ and clearly $\left( \Gamma_{1},\Gamma_{2}\right) $ is a good pair of lattices. As such, in the one-dimensional case every pair of lattices of the same volume is a good pair. However, as we shall see in Proposition \ref{notgood}, there exist lattices of the same volume in dimension two which are not good pairs. \subsection{Notation and Terminology} Throughout this paper, we shall assume that $d$ is a natural number strictly greater than one. Let $M$ be a matrix. The transpose of the matrix $M$ is denoted $M^{tr}.$ Let $v$ be a vector (in column form) in $\mathbb{R}^{d}.$ The Euclidean norm of $v$ is given by $\left\Vert v\right\Vert _{2}=\left( \sum_{k=1}^{d}v_{k}^{2}\right) ^{1/2},$ where \[ v=\left[ \begin{array} [c]{ccc} v_{1} & \cdots & v_{d} \end{array} \right] ^{tr}. 
\] Given two vectors $v,w\in\mathbb{R}^{d},$ the inner product of $v$ and $w$ is $\left\langle v,w\right\rangle =\sum_{k=1}^{d}v_{k}w_{k}.$ All subsets of $\mathbb{R}^{d}$ that we are concerned with in this paper will be assumed to be Lebesgue measurable. Let $E$ be a subset of $\mathbb{R}^{d}.$ Then $\chi_{E}$ stands for the \textbf{indicator function} of the set $E.$ That is, $\chi_{E}:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is the function defined by \[ \chi_{E}\left( x\right) =\left\{ \begin{array} [c]{c} 1\text{ if }x\in E\\ 0\text{ if }x\notin E \end{array} \right. . \] For any subset $E\subseteq$ $\mathbb{R}^{d}$ we define the set $E-E$ as follows: \[ E-E=\left\{ x-y\in\mathbb{R}^{d}:x,y\in E\right\} . \] Throughout this paper, $\mathbf{0}$ stands for the zero vector in $\mathbb{R}^{d},$ and we recall that $M$ is a \textbf{unimodular matrix} if and only if $\det M=\pm1.$ \subsection{General Facts about Lattices and Good Pairs of Lattices} \begin{lemma} \label{First}Let $P,M$ be two matrices of the same order such that $\left\vert \det P\right\vert =\left\vert \det M\right\vert .$ Then $\left( P\mathbb{Z}^{d},M\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if for any invertible matrix $N$ of order $d,$ $\left( \left( NP\right) \mathbb{Z}^{d},\left( NM\right) \mathbb{Z}^{d}\right) $ is a good pair of lattices. \end{lemma} \begin{proof} Assume that $\left( P\mathbb{Z}^{d},M\mathbb{Z}^{d}\right) $ is a good pair of lattices. Then from \cite{Pfander}, Page $3$ we know that $\left( P\mathbb{Z}^{d},M\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if there exists a set $E=Z[0,1)^{d}$ such that $\sum_{k\in\mathbb{Z}^{d} }\chi_{E}\left( x+Pk\right) =\sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( x+Mk\right) =1$ for all $x\in\mathbb{R}^{d},$ where $Z$ is a matrix of order $d$ and $\left\vert \det Z\right\vert =\left\vert \det P\right\vert =\left\vert \det M\right\vert $. We shall show that the functions \[ \mathbb{R}\ni x\mapsto\sum_{k\in\mathbb{Z}^{d}}\chi_{NE}\left( x+NPk\right) \text{ and }\mathbb{R}\ni x\mapsto\sum_{k\in\mathbb{Z}^{d}}\chi_{NE}\left( x+NMk\right) \] are each equal to the constant function $\mathbb{R}\ni x\mapsto1.$\newline Indeed, given any $x\in\mathbb{R}^{d},$ since $\sum_{k\in\mathbb{Z}^{d}} \chi_{E}\left( x+Pk\right) $ is equal to one, it follows that \[ \sum_{k\in\mathbb{Z}^{d}}\chi_{NE}\left( x+NPk\right) =\sum_{k\in \mathbb{Z}^{d}}\chi_{NE}\left( NN^{-1}x+NPk\right) =\sum_{k\in\mathbb{Z} ^{d}}\chi_{E}\left( N^{-1}x+Pk\right) =1. \] Similarly, using the fact that $\sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( x+Mk\right) =1$ for all $x\in\mathbb{R}^{d},$ we obtain: \[ \sum_{k\in\mathbb{Z}^{d}}\chi_{NE}\left( x+NMk\right) =\sum_{k\in \mathbb{Z}^{d}}\chi_{NE}\left( N\left( N^{-1}x+Mk\right) \right) =\sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( N^{-1}x+Mk\right) =1. \] Therefore, $NE$ is a fundamental domain for $NP\mathbb{Z}^{d}$ and for $NM\mathbb{Z}^{d}$ as well. Now, let us assume that $\left( NP\mathbb{Z}^{d},NM\mathbb{Z}^{d}\right) $ is a good pair of lattices. 
That is, there is a set $E=Z[0,1)^{d}$ for some matrix $Z$ such that \[ \sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( x+NPk\right) =\sum_{k\in \mathbb{Z}^{d}}\chi_{E}\left( x+NMk\right) =1 \] $\text{ for all }x\in\mathbb{R}^{d}.$ Next, \begin{align*} \sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( x+NPk\right) & =\sum _{k\in\mathbb{Z}^{d}}\chi_{N^{-1}E}\left( N^{-1}x+Pk\right) =1\\ \sum_{k\in\mathbb{Z}^{d}}\chi_{E}\left( x+NMk\right) & =\sum _{k\in\mathbb{Z}^{d}}\chi_{N^{-1}E}\left( N^{-1}x+Mk\right) =1 \end{align*} and $N^{-1}E$ is a common fundamental domain for $M\mathbb{Z}^{d}$ and $P\mathbb{Z}^{d}.$ \end{proof} \begin{lemma} \label{integral}The following holds true: \begin{enumerate} \item Let $\Gamma=M\mathbb{Z}^{d}$ where $M$ is an invertible matrix with entries in $\mathbb{Z}$. If $\left\vert \det M\right\vert =1$ then $\Gamma=M\mathbb{Z}^{d}=\mathbb{Z}^{d}.$ \item Let $\Gamma_{1}=M_{1}\mathbb{Z}^{d},$ $\Gamma_{2}=M_{2}\mathbb{Z}^{d}$ be two full-rank lattices of the same volume. Then $\Gamma_{1}=\Gamma_{2}$ if and only if $M_{1}=M_{2}U$ for some integral unimodular matrix $U.$ \end{enumerate} \end{lemma} \begin{proof} For the first part, if $M$ is an integral matrix, then clearly, $\Gamma$ is a subgroup of $\mathbb{Z}^{d}.$ In order to prove that $\mathbb{Z}^{d}$ is a subgroup of $\Gamma,$ it is enough to show that the canonical basis elements of the lattice $\mathbb{Z}^{d}$ are also elements of $M\mathbb{Z}^{d}.$ Let $\left\{ e_{1},\cdots,e_{d}\right\} $ be the canonical basis for the lattice $\mathbb{Z}^{d}.$ That is, the matrix $\left[ \begin{array} [c]{ccc} e_{1} & \cdots & e_{d} \end{array} \right] $ is the identity matrix of order $d$. Now, let $b_{j}=M^{-1}e_{j}$ for $j\in\left\{ 1,\cdots,d\right\} .$ Since $\left\vert \det M\right\vert =1$ and $M^{-1}$ is an integral matrix, it is clear that each $b_{j}$ is an integral vector and $Mb_{j}=e_{j}.$ Thus the set containing vectors $e_{1},\cdots,e_{d}$ is a subset of $M\mathbb{Z}^{d}$ and $\mathbb{Z}^{d}$ is a subgroup of $\Gamma$. For the second part, assume that $\Gamma_{1}=\Gamma_{2}$. For each $k\in\left\{ 1,2,\cdots,d\right\} $ there exists $\ell_{k}\in\mathbb{Z}^{d}$ such that $M_{1}e_{k}=M_{2}\ell_{k}.$ Next, let $U=\left[ \begin{array} [c]{ccc} \ell_{1} & \cdots & \ell_{d} \end{array} \right] $ be a matrix of order $d$. By assumption, $M_{1}=M_{2}U.$ Moreover, since $U=M_{2}^{-1}M_{1}$ then $\left\vert \det U\right\vert =1.$ Next, let us suppose that $M_{1}=M_{2}U$ for some integral matrix $U$ where $\left\vert \det U\right\vert =1.$ For any $\ell\in\mathbb{Z}^{d},$ $M_{1}\ell =M_{2}\left( U\ell\right) \in M_{2}\mathbb{Z}^{d}.$ It follows that for any $\ell\in\mathbb{Z}^{d},$ $M_{2}\ell=M_{1}\left( U^{-1}\ell\right) \in M_{1}\mathbb{Z}^{d}.$ Therefore, $\Gamma_{1}=\Gamma_{2}.$ This completes the proof. \end{proof} \begin{proposition} \label{changebasis} Let $M,M_{1},M_{2}$ be invertible matrices of order $d.$ Then \begin{enumerate} \item $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if for any unimodular integral matrices $P$ and $T,$ $\left( PMT\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. \item $\left( M_{1}\mathbb{Z}^{d},M_{2}\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if $\left( \mathbb{Z}^{d},M_{1}^{-1} M_{2}\mathbb{Z}^{d}\right) $ is a good pair of lattices. \item $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if $\left( M^{-1}\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. 
\end{enumerate} \end{proposition} \begin{proof} For Part $1,$ assume that $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. Let $T,P$ be two unimodular integral matrices. Then $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) =\left( M\left( T\mathbb{Z}^{d}\right) ,\mathbb{Z}^{d}\right) .$ By applying Lemma \ref{First} we see that $\left( PMT\mathbb{Z}^{d},P\mathbb{Z}^{d}\right) $ is a good pair of lattices. However, according to Lemma \ref{integral} Part $1,$ we have $P\mathbb{Z}^{d}=\mathbb{Z}^{d}.$ Therefore, $\left( PMT\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. Now, for the converse, let us assume that $\left( PMT\mathbb{Z}^{d},\mathbb{Z} ^{d}\right) $ is a good pair of lattices. Since the inverse of $P$ is an integral unimodular matrix, then \[ \left( MT\mathbb{Z}^{d},P^{-1}\mathbb{Z}^{d}\right) =\left( MT\mathbb{Z} ^{d},\mathbb{Z}^{d}\right) =\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) \] is a good pair of lattices. This completes the proof of Part $1.$ Part $2$ follows from Lemma \ref{First}; indeed, $\left( M_{1}\mathbb{Z} ^{d},M_{2}\mathbb{Z}^{d}\right) $ is a good pair of lattices if and only if \[ \left( M_{1}^{-1}\left( M_{1}\mathbb{Z}^{d}\right) ,M_{1}^{-1}\left( M_{2}\mathbb{Z}^{d}\right) \right) =\left( \mathbb{Z}^{d},\left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}\right) \] is a good pair of lattices. Similarly, Part $3$ follows from Lemma \ref{First} as well and is simply due to the fact that \[ \left( M^{-1}M\mathbb{Z}^{d},M^{-1}\mathbb{Z}^{d}\right) =\left( \mathbb{Z}^{d},M^{-1}\mathbb{Z}^{d}\right) . \] \end{proof} The following lemmas play a central role in the proof of our main results. \begin{lemma} \label{common}Let $\Gamma_{1}=M\mathbb{Z}^{d}$ such that $\left\vert \det M\right\vert =1.$ $E$ is a common fundamental domain for $\Gamma_{1}$ and $\Gamma_{2}=\mathbb{Z}^{d}$ if and only if $\left( E-E\right) \cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} ,\text{ }\left( E-E\right) \cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ and the volume of $E$ is equal to one. \end{lemma} \begin{proof} Assume that $E$ is a common fundamental domain for $\Gamma_{1}$ and $\Gamma_{2}=\mathbb{Z}^{d}.$ Then clearly, the volume of the set $E$ must be equal to one. Next, given distinct $k,l\in\mathbb{Z}^{d},$ it is clear that $\left( E+Mk\right) \cap\left( E+Ml\right) $ is an empty set. Therefore, given any $x,y\in E,$ it must be true that $x-y$ can never be equal to $Mn$ for some $n\in\mathbb{Z}^{d}$ unless $n=\mathbf{0}.$ Therefore, $\left( E-E\right) \cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ If $M$ is the identity matrix, a similar argument allows us to derive that $\left( E-E\right) \cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ as well. Next, assuming that $\left( E-E\right) \cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ and $\left( E-E\right) \cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} ,$ a calculation similar to that found in the proof of Lemma~\ref{times} shows that $\left( E+Mk\right) \cap\left( E+Ml\right) $ is an empty set for $l$ not equal to $k$. Finally, since it is assumed that the volume of $E$ is equal to one then $E$ is a common fundamental domain for $\Gamma_{1}$ and $\Gamma_{2}=\mathbb{Z}^{d}.$ This completes the proof. 
\end{proof} \begin{lemma} \label{good pair}Assume that $\left\vert \det M\right\vert =1.$ Then $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair if and only if there exists a unimodular matrix $N$ such that $N\left( -1,1\right) ^{d}\cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} ,\text{ and }N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ \end{lemma} \begin{proof} $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair if and only if there exists a common fundamental domain $E=N\left[ 0,1\right) ^{d}$ for the lattices $M\mathbb{Z}^{d},\mathbb{Z}^{d}$ where $N$ is a unimodular matrix. Now, appealing to Lemma \ref{common}, this holds if and only if $\left( E-E\right) \cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ and $\left( E-E\right) \cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ Finally, the proof is completed by observing that $\left( E-E\right) =N\left[ 0,1\right) ^{d}-N\left[ 0,1\right) ^{d}=N\left( -1,1\right) ^{d}.$ \end{proof} Appealing to Lemma \ref{common}, the following is immediate: \begin{lemma} \label{condition}Let $\Gamma_{1}=M\mathbb{Z}^{d}$ such that $\left\vert \det M\right\vert =1.$ Then $M\left[ 0,1\right) ^{d}$ is a common fundamental domain for $M\mathbb{Z}^{d}$ and $\mathbb{Z}^{d}$ if and only if $M\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ \end{lemma} \subsection{Constructing Good Pairs from Known Good Pairs} \begin{lemma} \label{times}Let $\Gamma=\Gamma_{1}\times\Gamma_{2}$ where $\Gamma_{1}$ is a full-rank lattice in $\mathbb{R}^{n}$ and $\Gamma_{2}$ is a full-rank lattice in $\mathbb{R}^{m}.$ If $E_{1}$ is a common fundamental domain for $\Gamma _{1}$ and $\mathbb{Z}^{n}$ in $\mathbb{R}^{n}$ and $E_{2}$ is a common fundamental domain for $\Gamma_{2}$ and $\mathbb{Z}^{m}$ in $\mathbb{R}^{m}$ then $E=E_{1}\times E_{2}$ is a common fundamental domain for $\Gamma$ and $\mathbb{Z}^{n}\times\mathbb{Z}^{m}$ in $\mathbb{R}^{n}\times\mathbb{R}^{m}.$ \end{lemma} \begin{proof} Indeed, let us assume that $E_{1}$ is a common fundamental domain for $\Gamma_{1}$ and $\mathbb{Z}^{n}$ in $\mathbb{R}^{n},$ $E_{2}$ is a common fundamental domain for $\Gamma_{2}$ and $\mathbb{Z}^{m}$ in $\mathbb{R}^{m},$ and there exist distinct $\gamma,k\in\mathbb{Z}^{n}\times\mathbb{Z}^{m}$ such that the set $\left( E+\gamma\right) \cap\left( E+k\right) $ is not empty ($E=E_{1}\times E_{2}$). Then there exist $z,z^{\prime}\in E$ such that $z+\gamma=z^{\prime}+k.$ Now, we write $z=\left( x,y\right) $, $z^{\prime }=\left( x^{\prime},y^{\prime}\right) ,$ $\gamma=\left( \gamma_{1} ,\gamma_{2}\right) $ and $k=\left( k_{1},k_{2}\right) .$ Thus, \[ \left( x,y\right) +\gamma=\left( x+\gamma_{1},y+\gamma_{2}\right) =\left( x^{\prime}+k_{1},y^{\prime}+k_{2}\right) . \] As a result, $x+\gamma_{1}=x^{\prime}+k_{1}$ and $y+\gamma_{2}=y^{\prime }+k_{2}$. Since $\gamma\neq k$ then either $\gamma_{1}\neq k_{1}$ or $\gamma_{2}\neq k_{2}.$ So, we obtain that either $\gamma_{1}\neq k_{1}$ and $x+\gamma_{1}=x^{\prime}+k_{1},$ or $\gamma_{2}\neq k_{2}$ and $y+\gamma _{2}=y^{\prime}+k_{2}.$ This contradicts our assumption that $E_{1}$ is a common fundamental domain for $\Gamma_{1}$ and $\mathbb{Z}^{n}$ in $\mathbb{R}^{n}$ and $E_{2}$ is a common fundamental domain for $\Gamma_{2}$ and $\mathbb{Z}^{m}$ in $\mathbb{R}^{m}.$ \end{proof} Appealing to Lemma \ref{times}, the following is immediate. 
\begin{lemma} \label{direct sum} Let $M_{1}$ and $M_{2}$ be two invertible matrices of order $d.$ Assume that $\left( M_{1}\mathbb{Z}^{n},\mathbb{Z}^{n}\right) $ and $\left( M_{2}\mathbb{Z}^{m},\mathbb{Z}^{m}\right) $ are good pairs of lattices. If \[ M=M_{1}\oplus M_{2}=\left[ \begin{array} [c]{cc} M_{1} & \\ & M_{2} \end{array} \right] \] then $\left( M\left( \mathbb{Z}^{n}\times\mathbb{Z}^{m}\right) ,\mathbb{Z}^{n}\times\mathbb{Z}^{m}\right) $ is a good pair of lattices. \end{lemma} Given two matrices $A,B$ of order $a$ and $b$ respectively, such that $A=\left[ A_{i,j}\right] _{1\leq i,j\leq a}$ the \textbf{tensor product} (or \textbf{Kronecker product}) of the matrices $A\otimes B$ is a matrix of order $ab$ given by \[ A\otimes B=\left[ \begin{array} [c]{ccc} A_{11}B & \cdots & A_{1d}B\\ \vdots & \cdots & \vdots\\ A_{d1}B & \cdots & A_{dd}B \end{array} \right] . \] \begin{lemma} \label{tsor}Let $I_{p}$ be the identity matrix of order $p,$ and let $M$ be an invertible matrix of order $d.$ \begin{enumerate} \item If $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices then $\left( \left( I_{p}\otimes M\right) \mathbb{Z} ^{pd},\mathbb{Z}^{pd}\right) $ is a good pair of lattices. \item If $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices then $\left( \left( M\otimes I_{p}\right) \mathbb{Z} ^{pd},\mathbb{Z}^{pd}\right) $ is a good pair of lattices. \end{enumerate} \end{lemma} \begin{proof} For Part $1$, we observe that \[ I_{p}\otimes M=\left[ \begin{array} [c]{ccc} M & & \\ & \ddots & \\ & & M \end{array} \right] . \] Applying Lemma \ref{direct sum} an appropriate number of times gives us the desired result. For Part $2,$ let \[ M=\left[ \begin{array} [c]{ccc} m_{11} & \cdots & m_{1d}\\ \vdots & \cdots & \vdots\\ m_{d1} & \cdots & m_{dd} \end{array} \right] \text{ and }I_{p}=\left[ \begin{array} [c]{ccc} 1 & & \\ & \ddots & \\ & & 1 \end{array} \right] . \] Then \[ M\otimes I_{p}=\left[ \begin{array} [c]{ccc} m_{11}I_{p} & \cdots & m_{1d}I_{p}\\ \vdots & \cdots & \vdots\\ m_{d1}I_{p} & \cdots & m_{dd}I_{p} \end{array} \right] . \] We shall show that $M\otimes I_{p}$ and $I_{p}\otimes M$ are similar matrices. In other words, there exists an integral unimodular matrix $P$ such that: \begin{equation} P\left( M\otimes I_{p}\right) P^{-1}=\left( I_{p}\otimes M\right) . \label{dirsum} \end{equation} Indeed, let $\left\{ e_{i}\otimes e_{j}:1\leq i\leq d,1\leq j\leq p\right\} $ be a basis for $\mathbb{R}^{d}\otimes\mathbb{R}^{p}.$ Define $Q:\mathbb{R} ^{d}\otimes\mathbb{R}^{p}\rightarrow\mathbb{R}^{p}\otimes\mathbb{R}^{d}$ such that $Q\left( e_{i}\otimes e_{j}\right) =e_{j}\otimes e_{i}.$ It is easy to see that $Q$ is a linear isomorphism whose matrix is an integral unimodular matrix. Moreover, \begin{align*} Q^{-1}\left( M\otimes I_{p}\right) Q\left( e_{i}\otimes e_{j}\right) & =Q^{-1}\left( M\otimes I_{p}\left( e_{j}\otimes e_{i}\right) \right) \\ & =Q^{-1}\left( Me_{j}\otimes e_{i}\right) \\ & =e_{i}\otimes Me_{j}\\ & =\left( I_{p}\otimes M\right) \left( e_{i}\otimes e_{j}\right) . \end{align*} Formula (\ref{dirsum}) is finally obtained by setting $Q=P^{-1}.$ Now, since $\left( \left( I_{p}\otimes M\right) \mathbb{Z}^{pd},\mathbb{Z} ^{pd}\right) $ is a good pair of lattices by Part $1$, it follows that $\left( \left( M\otimes I_{p}\right) \mathbb{Z}^{pd},\mathbb{Z} ^{pd}\right) $ is a good pair of lattices. 
\end{proof}

\begin{proposition}
\label{tensor}Let $M$ be an invertible matrix of order $d.$ If $N$ is a unimodular integral matrix of order $n$ and if $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices then $\left( \left( M\otimes N\right) \mathbb{Z}^{dn},\mathbb{Z}^{dn}\right) $ is a good pair of lattices.
\end{proposition}

\begin{proof}
We observe that $M\otimes N=\left( M\otimes I_{n}\right) \left( I_{d}\otimes N\right) .$ If $N$ is a unimodular integral matrix of order $n$ then $I_{d}\otimes N$ is a unimodular integral matrix of order $dn.$ Next, since $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices, it follows from Lemma \ref{tsor} Part $2$ that $\left( \left( M\otimes I_{n}\right) \mathbb{Z}^{dn},\mathbb{Z}^{dn}\right) $ is a good pair. Now, since $I_{d}\otimes N$ is a unimodular integral matrix, appealing to Lemma \ref{integral} we obtain the desired result: $\left( M\otimes I_{n}\right) \mathbb{Z}^{dn}=\left( M\otimes N\right) \mathbb{Z}^{dn}.$
\end{proof}

\section{Proofs of Main Results}

\subsection{Proof of Proposition \ref{Main}}

The fact that $\left( \Gamma_{1},\Gamma_{2}\right) $ is a good pair of lattices if and only if $\left( \left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair is due to Part $2$ of Proposition \ref{changebasis}. That $\left( \Gamma_{1},\Gamma_{2}\right) $ is a good pair of lattices if and only if there exists a unimodular matrix $N$ such that $N\left( -1,1\right) ^{d}\cap\left( M_{1}^{-1}M_{2}\right) \mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ and $N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ then follows from Lemma \ref{good pair} applied to the matrix $M_{1}^{-1}M_{2}.$

\subsection{Proof of Proposition \ref{unipotent}}

It suffices to show that $M\left[ 0,1\right) ^{d}$ is a common fundamental domain for $M\mathbb{Z}^{d}$ and $\mathbb{Z}^{d}.$ First, let us assume that $M$ is an upper triangular unipotent matrix. We will offer a proof by induction on $d.$ For the base case, let us assume that $d=2.$ We define
\[
M_{s}=\left[
\begin{array}
[c]{cc}
1 & s\\
0 & 1
\end{array}
\right]
\]
for some $s\in\mathbb{R}.$ The fact that $M_{s}\left[ 0,1\right) ^{2}$ is a fundamental domain for the lattice $M_{s}\mathbb{Z}^{2}$ is obvious. Now, let $z=\left[
\begin{array}
[c]{cc}
x & y
\end{array}
\right] ^{tr}\in\left( -1,1\right) ^{2}$ such that $M_{s}z=k=\left[
\begin{array}
[c]{cc}
k_{1} & k_{2}
\end{array}
\right] ^{tr}\in\mathbb{Z}^{2}.$ We would like to show that $k_{1}=k_{2}=0.$ First, we observe that
\[
z=M_{s}^{-1}k=\left[
\begin{array}
[c]{cc}
k_{1}-sk_{2} & k_{2}
\end{array}
\right] ^{tr}\in\left( -1,1\right) ^{2}.
\]
This is only possible if $k_{2}=k_{1}=0.$ Therefore $M_{s}\left( -1,1\right) ^{2}\cap\mathbb{Z}^{2}=\left\{ \mathbf{0}\right\} $ and by Lemma \ref{condition}, $M_{s}\left[ 0,1\right) ^{2}$ is a common fundamental domain for $M_{s}\mathbb{Z}^{2}$ and $\mathbb{Z}^{2}.$ Now, let us suppose that for all $d\leq m-1\in\mathbb{N}$ we have that $M\left[ 0,1\right) ^{d}$ is a common fundamental domain for $M\mathbb{Z}^{d}$ and $\mathbb{Z}^{d}$ whenever $M$ is a unipotent matrix. For the inductive step, let
\[
M=\left[
\begin{array}
[c]{ccccc}
1 & a_{1} & a_{2} & \cdots & a_{m-1}\\
& 1 & a_{m} & \cdots & a_{2m-3}\\
& & \ddots & \ddots & \vdots\\
& & & 1 & a_{\frac{m^{2}-m}{2}}\\
& & & & 1
\end{array}
\right]
\]
be an arbitrary unipotent matrix of order $m$ with real entries.
Let \[ v=\left[ \begin{array} [c]{cccc} a_{1} & a_{2} & \cdots & a_{m-1} \end{array} \right] ,\text{ }M_{1}=\left[ \begin{array} [c]{cccc} 1 & a_{m} & \cdots & a_{2m-3}\\ & \ddots & \ddots & \vdots\\ & & 1 & a_{\frac{_{m^{2}-m}}{2}}\\ & & & 1 \end{array} \right] \] so that \[ M=\left[ \begin{array} [c]{cc} 1 & v\\ 0 & M_{1} \end{array} \right] . \] Next, assume that for any given $z\in\left( -1,1\right) ^{m}$ we have that $Mz\in\mathbb{Z}^{m}.$ We want to show that $z$ is the zero vector. Writing \[ Mz=\left[ \begin{array} [c]{cc} 1 & v\\ 0 & M_{1} \end{array} \right] \left[ \begin{array} [c]{c} z_{1}\\ z_{2} \end{array} \right] =\left[ \begin{array} [c]{c} z_{1}+\left\langle v,z_{2}\right\rangle \\ M_{1}z_{2} \end{array} \right] , \] where $\left\langle v,z_{2}\right\rangle $ is the dot product of the vectors $v,z_{2}$, it follows that $M_{1}z_{2}\in\mathbb{Z}^{m-1}.$ By the assumption of the induction, then $z_{2}=0$ and it follows that $z_{1}+\left\langle v,z_{2}\right\rangle =z_{1}\in\mathbb{Z}.$ Since $z\in\left( -1,1\right) ^{m}$ then $z_{1}=0$ and this completes the induction. Now, let us suppose that $M$ is a lower triangular unipotent matrix. Put \[ J=\left[ \begin{array} [c]{ccc} & & 1\\ & \udots & \\ 1 & & \end{array} \right] . \] Notice that $JMJ^{-1}$ is an upper triangular matrix. Since $\left( JMJ^{-1}\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices, using the fact that $J$ is a unimodular integral matrix together with Proposition \ref{changebasis}, Part $1,$ it follows that $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices as well. This completes the proof of the first part. \subsection{Proof of Proposition \ref{order3}} Put \[ M=M\left( \mathbf{p}\right) =\left[ \begin{array} [c]{cccc} p_{1} & 1 & & \\ & \ddots & \ddots & \\ & & p_{d-1} & 1\\ & & & \frac{1}{p_{1}\cdots p_{d-1}} \end{array} \right] , \] and \[ N=N\left( \mathbf{p}\right) =\left[ \begin{array} [c]{ccccc} & & & & 1\\ & & & 1 & p_{2}\\ & & \udots & \udots & \\ & 1 & p_{d-1} & & \\ 1 & \frac{1}{p_{1}\cdots p_{d-1}} & & & \end{array} \right] . \] We would like to show that $N\left[ 0,1\right) ^{d}$ is a common fundamental domain for the pair $\left( M\mathbb{Z}^{d},\mathbb{Z}^{d}\right) .$ For this purpose, it is enough to show (see Lemma \ref{common}) that $N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} $ and \[ N\left( -1,1\right) ^{d}\cap M\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} . \] In order to prove that $N\left( -1,1\right) ^{d}\cap M\mathbb{Z} ^{d}=\left\{ \mathbf{0}\right\} ,$ let us suppose that\ $Nv=Mk$, $v\in\left( -1,1\right) ^{d}$ and $k\in\mathbb{Z}^{d}.$ It follows that $M^{-1}Nv\in\mathbb{Z}^{d}.$ Computing the inverse of $M,$ we obtain \begin{equation} M^{-1}=\left[ \begin{array} [c]{cccccc} \frac{1}{p_{1}} & \dfrac{\left( -1\right) ^{1}}{p_{1}p_{2}} & \dfrac{\left( -1\right) ^{2}}{p_{1}p_{2}p_{3}} & \cdots & \dfrac{\left( -1\right) ^{d-2} }{p_{1}p_{2}\cdots p_{d-1}} & \left( -1\right) ^{d-1}\\ & \dfrac{1}{p_{2}} & \dfrac{\left( -1\right) ^{1}}{p_{2}p_{3}} & \cdots & \dfrac{\left( -1\right) ^{d-3}}{p_{2}\cdots p_{d-1}} & \left( -1\right) ^{d-2}p_{1}\\ & & \dfrac{1}{p_{3}} & \cdots & \dfrac{\left( -1\right) ^{d-2}}{p_{3}\cdots p_{d-1}} & \left( -1\right) ^{d-3}p_{1}p_{2}\\ & & & \ddots & \vdots & \vdots\\ & & & & \dfrac{1}{p_{d-1}} & \left( -1\right) ^{d-\left( d-1\right) }p_{1}\cdots p_{d-2}\\ & & & & & p_{1}\cdots p_{d-1} \end{array} \right] . 
\end{equation} Next, with some formal calculations we obtain that \[ M^{-1}Nv=\left[ \begin{array} [c]{cccccc} \left( -1\right) ^{d-1} & & & & & \\ \left( -1\right) ^{d-2}p_{1} & & & & & 1\\ \left( -1\right) ^{d-3}p_{1}p_{2} & & & & \udots & \\ \vdots & & & 1 & & \\ \left( -1\right) ^{d-\left( d-1\right) }p_{1}\cdots p_{d-2} & & 1 & & & \\ p_{1}\cdots p_{d-1} & 1 & & & & \end{array} \right] v=k\in\mathbb{Z}^{d}. \] Therefore, $v$ must be the zero vector. To show that $N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} ,$ let $z\in\left( -1,1\right) ^{d}$ such that $Nz=k\in\mathbb{Z}^{d}.$ More precisely, we have \[ \left[ \begin{array} [c]{ccccc} & & & & 1\\ & & & 1 & p_{2}\\ & & \udots & \udots & \\ & 1 & p_{d-1} & & \\ 1 & \frac{1}{p_{1}\cdots p_{d-1}} & & & \end{array} \right] \left[ \begin{array} [c]{c} z_{1}\\ z_{2}\\ \vdots\\ z_{d-1}\\ z_{d} \end{array} \right] =\left[ \begin{array} [c]{c} z_{d}\\ z_{d-1}+z_{d}\\ \vdots\\ z_{2}+z_{3}\\ z_{1}+\frac{z_{2}}{p_{1}\cdots p_{d-1}} \end{array} \right] =\left[ \begin{array} [c]{c} k_{1}\\ k_{2}\\ \vdots\\ k_{d-1}\\ k_{d} \end{array} \right] . \] Now using the fact that $k\in\mathbb{Z}^{d}$ together with $z\in\left( -1,1\right) ^{d}$ gives us that $z$ must be equal to the zero vector. Therefore, $N\left( -1,1\right) ^{d}\cap\mathbb{Z}^{d}=\left\{ \mathbf{0}\right\} .$ In light of Proposition \ref{changebasis} Part $1$, $\left( \left( PM\left( \mathbf{p}\right) Q\right) \mathbb{Z} ^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices with common fundamental domain $\left( PN\left( \mathbf{p}\right) \right) \left[ 0,1\right) ^{d}$ whenever $P,Q$ are integral unimodular matrices. For Part $2,$ appealing again to Proposition \ref{changebasis} Part $1$ it is enough to show that $\left( D\left( \mathbf{m}\right) \mathbb{Z} ^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. First, let \[ Z=\left[ \begin{array} [c]{ccccc} \dfrac{1}{m_{1}} & 1 & & & \\ & \dfrac{1}{m_{2}} & 1 & & \\ & & \ddots & \ddots & \\ & & & \dfrac{1}{m_{d-1}} & 1\\ & & & & {\displaystyle\prod\limits_{i=1}^{d-1}}m_{i} \end{array} \right] . \] Next, applying the first part of the proposition it is clear that $\left( Z\mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. Now, put \[ U=\left[ \begin{array} [c]{cccccc} 1 & \left( -1\right) ^{1}m_{1} & \left( -1\right) ^{2}m_{1}m_{2} & \left( -1\right) ^{3}m_{1}m_{2}m_{3} & \cdots & \left( -1\right) ^{d-1} {\displaystyle\prod\limits_{i=1}^{d-1}}m_{i}\\ & 1 & \left( -1\right) ^{1}m_{2} & \left( -1\right) ^{2}m_{2}m_{3} & \cdots & \left( -1\right) ^{d-2}{\displaystyle\prod\limits_{i=2}^{d-1}} m_{i}\\ & & 1 & \left( -1\right) ^{1}m_{3} & \cdots & \left( -1\right) ^{d-3}{\displaystyle\prod\limits_{i=3}^{d-1}}m_{i}\\ & & & \ddots & \ddots & \vdots\\ & & & & 1 & \left( -1\right) ^{1}m_{d-1}\\ & & & & & 1 \end{array} \right] . \] Since $U$ is a unimodular integral matrix, then $Z\mathbb{Z}^{d} =ZU\mathbb{Z}^{d}$ (Lemma \ref{integral}). It is easy to check that $ZU$ is equal to the diagonal matrix \[ \left[ \begin{array} [c]{ccccc} \dfrac{1}{m_{1}} & & & & \\ & \dfrac{1}{m_{2}} & & & \\ & & \ddots & & \\ & & & \dfrac{1}{m_{d-1}} & \\ & & & & {\displaystyle\prod\limits_{i=1}^{d-1}}m_{i} \end{array} \right] =D\left( \mathbf{m}\right) . \] Therefore $\left( ZU\mathbb{Z}^{d},\mathbb{Z}^{d}\right) =\left( D\left( \mathbf{m}\right) \mathbb{Z}^{d},\mathbb{Z}^{d}\right) $ is a good pair of lattices. 
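The identity $ZU=D\left( \mathbf{m}\right) $ claimed above is also easy to confirm numerically. The following Python sketch, which is included only as an illustration and plays no role in the proof (the choice $\mathbf{m}=\left( 2,3,5\right) $, hence $d=4$, is arbitrary), builds $Z,$ $U$ and $D\left( \mathbf{m}\right) $ and checks that $ZU=D\left( \mathbf{m}\right) $ and that $U$ is an integral matrix of determinant $1.$
\begin{verbatim}
import numpy as np

def Z_matrix(m):
    # 1/m_i on the first d-1 diagonal entries, ones on the superdiagonal,
    # and m_1*...*m_{d-1} in the lower-right corner
    d = len(m) + 1
    Z = np.zeros((d, d))
    for i in range(d - 1):
        Z[i, i] = 1.0 / m[i]
        Z[i, i + 1] = 1.0
    Z[d - 1, d - 1] = np.prod(m)
    return Z

def U_matrix(m):
    # upper triangular unipotent; above the diagonal,
    # U[i, j] = (-1)**(j - i) * prod(m[i:j]), matching the display for U
    d = len(m) + 1
    U = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            U[i, j] = (-1) ** (j - i) * np.prod(m[i:j])
    return U

m = [2, 3, 5]                                  # arbitrary positive integers
Z, U = Z_matrix(m), U_matrix(m)
D = np.diag([1.0 / mi for mi in m] + [float(np.prod(m))])

assert np.allclose(Z @ U, D)                   # Z U = D(m)
assert np.allclose(U, np.round(U))             # U is integral
assert np.isclose(np.linalg.det(U), 1.0)       # U is unimodular
\end{verbatim}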
For the last part of the proposition, put
\[
S=\left[
\begin{array}
[c]{cc}
0 & 1\\
1 & \frac{n}{m}
\end{array}
\right] ,S^{\prime}=\left[
\begin{array}
[c]{cc}
\frac{m}{n} & 1\\
0 & \frac{n}{m}
\end{array}
\right] .
\]
We claim that $S\left[ 0,1\right) ^{2}$ is a common fundamental domain for the lattices $S^{\prime}\mathbb{Z}^{2}$ and $\mathbb{Z}^{2}.$ To see this, it suffices (see Proposition \ref{Main}) to check that $S\left( -1,1\right) ^{2}\cap\mathbb{Z}^{2}=\left\{ \mathbf{0}\right\} $ and $S\left( -1,1\right) ^{2}\cap S^{\prime}\mathbb{Z}^{2}=\left\{ \mathbf{0}\right\} .$ Let $z=\left[
\begin{array}
[c]{cc}
x & y
\end{array}
\right] ^{tr}\in\left( -1,1\right) ^{2}$ such that $Sz\in\mathbb{Z}^{2}.$ Then $z=S^{-1}k=\left[
\begin{array}
[c]{cc}
k_{2}-\frac{nk_{1}}{m} & k_{1}
\end{array}
\right] ^{tr}$ where $k=\left[
\begin{array}
[c]{cc}
k_{1} & k_{2}
\end{array}
\right] ^{tr}\in\mathbb{Z}^{2}.$ So, $z=\mathbf{0}.$ Next, let us assume that $Sz\in S^{\prime}\mathbb{Z}^{2}.$ That is, $Sz=S^{\prime}k$ for some $k=\left[
\begin{array}
[c]{cc}
k_{1} & k_{2}
\end{array}
\right] ^{tr}\in\mathbb{Z}^{2}.$ It follows that
\[
z=\left[
\begin{array}
[c]{cc}
x & y
\end{array}
\right] ^{tr}=S^{-1}S^{\prime}\left[
\begin{array}
[c]{cc}
k_{1} & k_{2}
\end{array}
\right] ^{tr}=\left[
\begin{array}
[c]{cc}
-k_{1} & \frac{mk_{1}+nk_{2}}{n}
\end{array}
\right] ^{tr}.
\]
Thus $z=\mathbf{0}$ as well. \newline Next, since $\gcd\left( m,n\right) =1$ there exist $\ell_{1},\ell_{2}\in\mathbb{Z}$ such that $1-n\ell_{2}-m\ell_{1}=0.$ As such, it follows that
\begin{align*}
\left[
\begin{array}
[c]{cc}
1 & -m\ell_{2}\\
0 & 1
\end{array}
\right] \left[
\begin{array}
[c]{cc}
\frac{m}{n} & 1\\
0 & \frac{n}{m}
\end{array}
\right] \left[
\begin{array}
[c]{cc}
1 & -\ell_{1}n\\
0 & 1
\end{array}
\right] & =\left[
\begin{array}
[c]{cc}
\frac{m}{n} & 1-n\ell_{2}-m\ell_{1}\\
0 & \frac{n}{m}
\end{array}
\right] \\
& =\left[
\begin{array}
[c]{cc}
\frac{m}{n} & 0\\
0 & \frac{n}{m}
\end{array}
\right] .
\end{align*}
By Proposition \ref{changebasis}, Part $1,$
\begin{equation}
\left( \left[
\begin{array}
[c]{cc}
\frac{m}{n} & 0\\
0 & \frac{n}{m}
\end{array}
\right] \mathbb{Z}^{2},\mathbb{Z}^{2}\right) \label{gp}
\end{equation}
is a good pair of lattices. Using the fact that
\[
\left[
\begin{array}
[c]{cc}
1 & -\ell_{1}n\\
0 & 1
\end{array}
\right] \mathbb{Z}^{2}=\mathbb{Z}^{2}
\]
together with Lemma \ref{First}, we conclude that
\[
\left[
\begin{array}
[c]{cc}
1 & -m\ell_{2}\\
0 & 1
\end{array}
\right] \left[
\begin{array}
[c]{cc}
0 & 1\\
1 & \frac{n}{m}
\end{array}
\right] \left[ 0,1\right) ^{2}=\left[
\begin{array}
[c]{cc}
-m\ell_{2} & 1-n\ell_{2}\\
1 & \frac{n}{m}
\end{array}
\right] \left[ 0,1\right) ^{2}
\]
is a common connected fundamental domain for the pair (\ref{gp}). Finally, the first part of Proposition \ref{changebasis} gives the desired result.

\subsection{Proof of Proposition \ref{notgood}}

According to Proposition \ref{changebasis}, it is enough to show that if $r$ is a natural number such that $\sqrt{r}$ is irrational then $\left( R\left( r\right) \mathbb{Z}^{2},\mathbb{Z}^{2}\right) $ is not a good pair of lattices. Put $N=\left[
\begin{array}
[c]{cc}
a & b\\
c & d
\end{array}
\right] $ such that $\left\vert \det N\right\vert =1.$ Let us suppose that $\Omega=N\left[ 0,1\right) ^{2}$ is a common fundamental domain for $\mathbb{Z}^{2}$ and $R\mathbb{Z}^{2}=R\left( r\right) \mathbb{Z}^{2}.$ There must exist a non-zero element $k$ in $\mathbb{Z}^{2}$ such that one corner of the closure of the set $\Omega+k$ meets $\Omega$ at the origin.
Similarly, since $\Omega$ tiles the plane by $R(r)\mathbb{Z}^{2},$ there must exist a non-trivial element $k$ of $R(r)\mathbb{Z}^{2}$ such that one corner of the closure of $\Omega+k$ intersects $\Omega$ at the origin as well (see Figure below) \begin{center} \begin{figure} \caption{Behavior of a tiling around the origin of the plane} \end{figure} \end{center} Hence, there exist \[ p,q\in\left\{ \left[ \begin{array} [c]{c} 1\\ 0 \end{array} \right] ,\left[ \begin{array} [c]{c} 0\\ 1 \end{array} \right] \right\} \] such that \begin{equation} \left\{ \begin{array} [c]{c} Np+k=\mathbf{0}\\ Nq+j=\mathbf{0} \end{array} \right. \label{system} \end{equation} for some $k\in\mathbb{Z}^{2}-\left\{ \mathbf{0}\right\} $ and $j\in R\mathbb{Z} ^{2}-\left\{ \mathbf{0}\right\} .$ By assumption, $N$ is a unimodular matrix. However, without loss of generality, we may assume that $\det N=1$. Indeed, if $\det N=-1,$ then (\ref{system}) is equivalent to $\left\{ \begin{array} [c]{c} JNp+Jk=\mathbf{0}\\ JNq+Jj=\mathbf{0} \end{array} \right. $ where $\det\left( JN\right) =1,$ $Jk\in\mathbb{Z}^{2}-\left\{ \mathbf{0}\right\} ,Jj\in R\mathbb{Z}^{2}-\left\{ \mathbf{0}\right\} $ and $J=\left[ \begin{array} [c]{cc} -1 & 0\\ 0 & 1 \end{array} \right] .$ We shall prove that if $\sqrt{r}\notin\mathbb{Q}$ then (\ref{system}) has no solution. There are several possible cases that may arise from all the possible choices for $p,q$. First of all, since $R\mathbb{Z}^{2}\cap \mathbb{Z}^{2}=\left\{ \mathbf{0}\right\} $, it is easy to see that (\ref{system}) has no solution whenever $p=q.$ Therefore, we should only focus on the cases where $p$ is not equal to $q.$ Put \[ k=\left[ \begin{array} [c]{cc} k_{1} & k_{2} \end{array} \right] ^{tr}\text{ and }j=\left[ \begin{array} [c]{cc} \sqrt{r}j_{1} & \frac{1}{\sqrt{r}}j_{2} \end{array} \right] ^{tr}\text{ where }k_{1,}k_{2},j_{1},j_{2}\in\mathbb{Z}. \] \textbf{Case $1.1$} If \[ p=\left[ \begin{array} [c]{c} 1\\ 0 \end{array} \right] ,q=\left[ \begin{array} [c]{c} 0\\ 1 \end{array} \right] ,N=\left[ \begin{array} [c]{cc} \frac{bc+1}{d} & b\\ c & d \end{array} \right] \] and $d\neq0$ then \[ k_{1}=\frac{j_{1}k_{2}r+\sqrt{r}}{j_{2}},k_{2}=-c,j_{1}=\frac{-b}{\sqrt{r} },j_{2}=-d\sqrt{r} \] and $k_{1}j_{2}-j_{1}k_{2}r=\sqrt{r}.$ Thus, System (\ref{system}) has no solution since $\sqrt{r}$ is irrational. \newline\textbf{Case $1.2$} If \[ p=\left[ \begin{array} [c]{c} 1\\ 0 \end{array} \right] ,q=\left[ \begin{array} [c]{c} 0\\ 1 \end{array} \right] ,N=\left[ \begin{array} [c]{cc} a & b\\ -\frac{1}{b} & 0 \end{array} \right] \] and $b\neq0$ then \[ k_{1}=a,k_{2}=-\frac{1}{b},j_{1}=\frac{b}{\sqrt{r}},j_{2}=0 \] and $j_{1}=-\frac{1}{k_{2}\sqrt{r}},k_{2}\neq0.$ This is absurd since $j_{1}$ is an integer. \newline\textbf{Case $2.1$} If \[ p=\left[ \begin{array} [c]{c} 0\\ 1 \end{array} \right] ,q=\left[ \begin{array} [c]{c} 1\\ 0 \end{array} \right] ,N=\left[ \begin{array} [c]{cc} \frac{bc+1}{d} & b\\ c & d \end{array} \right] ,d\neq0 \] then \[ k_{1}=-b,k_{2}=-d,j_{1}=\frac{-1-bc}{d\sqrt{r}},j_{2}=-c\sqrt{r} \] and $k_{2}j_{1}r-j_{2}k_{1}=\sqrt{r}$ which is absurd. \newline\textbf{Case $2.2$} If \[ p=\left[ \begin{array} [c]{c} 0\\ 1 \end{array} \right] ,q=\left[ \begin{array} [c]{c} 1\\ 0 \end{array} \right] ,N=\left[ \begin{array} [c]{cc} a & b\\ -\frac{1}{b} & 0 \end{array} \right] ,b\neq0 \] then \[ k_{1}=b,k_{2}=0,j_{1}=\frac{a}{\sqrt{r}},j_{2}=-\frac{\sqrt{r}}{b}. \] Therefore, $j_{2}=-\frac{\sqrt{r}}{k_{1}}$ and this is absurd. \end{document}
arXiv
\begin{document} \begin{center} {\Large\bf The Real-Rootedness and Log-concavities of \\[6pt] Coordinator Polynomials of Weyl Group Lattices} \end{center} \begin{center} David G. L. Wang$^{1}$ and Tongyuan Zhao$^{2}$\\[6pt] $^{1}$Beijing International Center for Mathematical Research\\ $^{2}$School of Mathematics, LMAM\\ $^{1,2}$Peking University, Beijing 100871, P.R. China {\tt $^{1}[email protected]}$\quad $ {\tt $^{2}[email protected]} \end{center} \begin{abstract} It is well-known that the coordinator polynomials of the classical root lattice of type~$A_n$ and those of type~$C_n$ are real-rooted. They can be obtained, either by the Aissen-Schoenberg-Whitney theorem, or from their recurrence relations. In this paper, we develop a trigonometric substitution approach which can be used to establish the real-rootedness of coordinator polynomials of type~$D_n$. We also find the coordinator polynomials of type $B_n$ are not real-rooted in general. As a conclusion, we obtain that all coordinator polynomials of Weyl group lattices are log-concave. \end{abstract} \noindent\textbf{Keywords:} coordinator polynomial; log-concavity; real-rootedness; trigonometric substitution; Weyl group lattice \noindent\textbf{AMS Classification:} 65H04 \section{Introduction} Let $f(x)=\sum_{i=1}^na_ix^i$ be a polynomial of degree $n$ with nonnegative coefficients. We say that $f(x)$ is {\em real-rooted} if all its zeros are real. Real-rooted polynomials have attracted much attention during the past decades. One of the most significant reasons is that for any polynomial, the real-rootedness implies the log-concavity of its coefficients, which in turn implies the unimodality of the coefficients. Indeed, unimodal and log-concave sequences occur naturally in combinatorics, algebra, analysis, geometry, computer science, probability and statistics. We refer the reader to the survey papers, Brenti~\cite{Bre94} and Stanley~\cite{Sta89}, for various results on the unimodality and log-concavity. There is a characterization of real-rooted polynomials in the theory of total positivity; see Karlin~\cite{Kar68}. A matrix $(a_{ij})_{i,j\ge0}$ is said to be {\em totally positive} if all its minors have nonnegative determinants. The sequence $\{a_k\}_{k=0}^n$ is called a {\em P\'olya frequency sequence} if the lower triangular matrix $(a_{i-j})_{i,j=0}^n$ is totally positive, where $a_k$ is set to be zero if $k<0$. A basic link between P\'olya frequency sequences and real-rooted polynomial was given by the Aissen-Schoenberg-Whitney theorem~\cite{ASW52}, which stated that the polynomial $f(x)$ is real-rooted if and only if the sequence $\{a_k\}_{k=0}^n$ is a P\'olya frequency sequence. Another characterization from the probabilistic point of view can be found in Pitman~\cite{Pit97}, see also Schoenberg~\cite{Sch55}. Polynomials arising from combinatorics are often real-rooted. Basic examples include the generating functions of binomial coefficients, of Stirling numbers of the first kind and of the second kind, of Eulerian numbers, and the matching polynomials; see, for example, Brenti~\cite{Bre95,Bre96}, Liu and Wang~\cite{LW07}, Stanley~\cite{Sta00} and Wang and Yeh~\cite{WY05}. This paper is concerned with the real-rootedness and the log-concavities of coordinator polynomials of Weyl group lattices. Following Ardila et al.~\cite{ABHPS11}, we give an overview of the notions. Let $\mathcal{L}$ be a lattice, that is, a discrete subgroup of a finite-dimensional Euclidean vector space $E$. 
The dimension of the subspace spanned by $\mathcal{L}$ is called its rank. A lattice is said to be generated as a monoid if there exists a finite collection~$M$ of vectors such that every vector in the lattice is a nonnegative integer linear combination of the vectors in~$M$. Suppose that $\mathcal{L}$ is a lattice of rank~$d$, generated by~$M$. For any vector $v$ in~$\mathcal{L}$, define the length of~$v$ with respect to~$M$ to be the minimum sum of the coefficients among all nonnegative integer linear combinations, denoted by~$\ell(v)$. In other words, \[ \ell(v)=\min\biggl\{\ \sum_{m\in M}c_m\ \, \bigg|\ \,v=\sum_{m\in M} c_mm,\ c_m\geq 0\ \biggr\}. \] Let $S(k)$ be the number of vectors of length $k$ in $\mathcal{L}$. Benson~\cite{Ben83} proved that the generating function \begin{equation}\label{def-h} \sum_{k\ge0}S(k)x^{k}={h(x)\over(1-x)^d} \end{equation} is rational, where $h(x)$ is a polynomial of degree at most $d$. Following Conway and Sloane~\cite{CS97}, we call $h(x)$ the {\em coordinator polynomial} with respect to~$M$. We concern ourselves with the classical root lattices as $\mathcal{L}$. Let $e_i$ denote the vector in $E$, having the $i$th entry one, and all other entries zero, where the space $E$ is taken to be~$\mathbb{R}^{n+1}$ for the root lattice $A_n$, and to be~$\mathbb{R}^{n}$ for the root lattices $B_n$, $C_n$, and $D_n$. The root lattices can be defined to be generated as monoids respectively by \begin{align*} M_{A_n}&=\bigl\{\pm(e_i-e_j)\,\big|\,1\le i<j\le n+1\bigr\},\\[5pt] M_{B_n}&=\bigl\{\pm e_i\pm e_j\,\big|\,1\le i<j\le n\bigr\} \cup\bigl\{\pm e_i\,\big|\,1\le i\le n\bigr\},\\[5pt] M_{C_n}&=\bigl\{\pm e_i\pm e_j\,\big|\,1\le i<j\le n\bigr\} \cup\bigl\{\pm 2e_i\,\big|\,1\le i\le n\bigr\},\\[5pt] M_{D_n}&=\bigl\{\pm e_i\pm e_j\,\big|\,1\le i<j\le n\bigr\}. \end{align*} We denote the coordinator polynomial of type $T$ by $h_T(x)$. Conway and Sloane~\cite{CS97} established the explicit expression \begin{equation}\label{a} h_{A_n} (x) = \sum_{k=0}^{n}{n\choose k}^2 x^k, \end{equation} which were also called the Narayana polynomials of type $B$ by Chen, Tang, Wang and Yang~\cite{CTWY10}. In fact, these polynomials appeared as the rank generating function of the lattice of noncrossing partitions of type $B$ on the set $\{1,2,\ldots,n\}$. With Colin Mallows's help, Conway and Sloane~\cite{CS97} conjectured that \begin{equation}\label{d} h_{D_n}(x)= \frac{(1+\sqrt{x})^{2n}+(1-\sqrt{x})^{2n}}{2}-2n x(1+x)^{n-2}. \end{equation} Baake and Grimm~\cite{BG97} pointed out that the methods outlined in~\cite{CS97} can be used to deduce that the coordinator polynomials of type $C$ have the expression \begin{equation}\label{c} h_{C_n}(x)=\sum_{k=0}^{n}\binom{2n}{2k} x^k. \end{equation} They also conjectured that \begin{equation}\label{b} h_{B_n}(x)=\sum_{k=0}^{n} \binom{2n+1}{2k}x^k - 2n x(1+x)^{n-1}. \end{equation} Bacher, de la Harpe and Venkov~\cite{BHV97} rederived~\eqref{a} and proved the formulas~\eqref{d}, \eqref{c} and~\eqref{b}. Recently, Ardila et al.~\cite{ABHPS11} gave alternative proofs for~\eqref{a}, \eqref{d} and \eqref{c} by computing the $f$-vectors of a unimodular triangulation of the corresponding root polytope. The real-rootedness of coordinator polynomials has received much attention. As pointed out by Conway and Sloane~\cite{CS97}, coordinator polynomials of type~$A$ can be expressed as \[ h_{A_n}(x)=(1-x)^{n}L_n\biggl(\frac{1+x}{1-x}\biggr), \] where $L_n(x)$ denotes the $n$th Legendre polynomial. 
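The formulas \eqref{a}, \eqref{d}, \eqref{c} and \eqref{b}, as well as the Legendre expression above, are easy to experiment with numerically. The following Python sketch, included only as an illustration (it plays no role in any proof; the degree $n=7$ and the evaluation points are arbitrary), evaluates the four families of coordinator polynomials from their coefficient formulas and checks the identity $h_{A_n}(x)=(1-x)^{n}L_n\bigl(\frac{1+x}{1-x}\bigr)$ at a few random points.
\begin{verbatim}
import numpy as np
from math import comb
from numpy.polynomial import legendre

def c(a, b):
    # binomial coefficient with the convention C(a, b) = 0 outside 0 <= b <= a
    return comb(a, b) if 0 <= b <= a else 0

def h_A(n): return [c(n, k) ** 2 for k in range(n + 1)]
def h_C(n): return [c(2 * n, 2 * k) for k in range(n + 1)]
def h_D(n): return [c(2 * n, 2 * k) - 2 * n * c(n - 2, k - 1) for k in range(n + 1)]
def h_B(n): return [c(2 * n + 1, 2 * k) - 2 * n * c(n - 1, k - 1) for k in range(n + 1)]

def poly_eval(coeffs, x):
    # coefficients are listed in increasing order of the degree
    return sum(cf * x ** k for k, cf in enumerate(coeffs))

n = 7
rng = np.random.default_rng(0)
for x in rng.uniform(-2.0, 0.5, size=5):
    lhs = poly_eval(h_A(n), x)
    # L_n via numpy's Legendre module: coefficient vector (0, ..., 0, 1)
    rhs = (1 - x) ** n * legendre.legval((1 + x) / (1 - x), [0] * n + [1])
    assert np.isclose(lhs, rhs)
\end{verbatim}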
Since Legendre polynomials are orthogonal, and thus real-rooted, we are led to the following result. \begin{thm}\label{thm-A} The coordinator polynomials of type $A$ are real-rooted. \end{thm} In fact, Theorem~\ref{thm-A} follows immediately from a classical result of Schur~\cite{Sch14}, see also Theorems 2.4.1 and 3.5.3 in Brenti~\cite{Bre89}. For $h_{C_n}(x)$, one may easily deduce the real-rootedness by the Aissen-Schoenberg-Whitney theorem. \begin{thm}\label{thm-C} The coordinator polynomials of type $C$ are real-rooted. \end{thm} Liang and Yang~\cite{LY} reproved both Theorems~\ref{thm-A} and~\ref{thm-C} by establishing recurrences of the coefficients. Moreover, they verified the real-rootedness of coordinator polynomials of types $E_6$, $E_7$, $F_4$ and $G_2$. In contrast, $h_{E_8}(x)$ is not real-rooted. They also conjectured that $h_{D_n}(x)$ is real-rooted. For coordinator polynomials of type $B_n$, we find $h_{B_{16}}(x)$ has $14$ real roots and $2$ non-real roots. So $h_{B_n}(x)$ are not real-rooted in general. In the next section, we develop a trigonometric substitution approach which enables us to confirm Liang-Yang's conjecture. In Section 3, we establish the log-concavities of all coordinator polynomials of Weyl group lattices. \section{The real-rootedness} In this section, we show the real-rootedness of $h_{D_n}(x)$. \begin{thm}\label{thm-D} The coordinator polynomials of type $D$ are real-rooted. \end{thm} We shall adopt a technique of trigonometric transformation. To be precise, we transform the polynomial $h_{D_n}(x)$ into a trigonometric function, say, $g_n(\theta)$, and then consider the roots of $g_n(\theta)$. It turns out that the signs of $g_n(\theta)$ at a sequence of $n+1$ fixed values of $\theta$ are interlacing. Hence $g_n(\theta)$ has $n$ distinct zeros in a certain domain, and so does $h_{D_n}(x)$. \begin{proof} We are going to show that the polynomial $h_{D_n}(x)$ has $n$ distinct negative roots for $n\ge2$. For this purpose, we let $y>0$ and substitute $x=-y^2$ in the expression~\eqref{d} of $h_{D_n}(x)$. Note that $\sqrt{-y^2}$ is two-valued, denoting $\pm yi$. However, taking $\sqrt{-y^2}=yi$ and taking $\sqrt{-y^2}=-yi$ yields the same expression of $h_{D_n}(-y^2)$, that is, \begin{equation}\label{eq-f} h_{D_n}(-y^2)={(1+yi)^{2n}+(1-yi)^{2n}\over2}+2ny^2(1-y^2)^{n-2}. \end{equation} Without loss of generality, we can suppose that \[ y=\tan{\phi\over2}, \] where $\phi\in(0,\pi)$. Then $1+yi=\sqrt{1+y^2}\,e^{i\phi/2}$, and thus \[ (1+yi)^{2n}+(1-yi)^{2n}=(1+y^2)^ne^{in\phi}+(1+y^2)^ne^{-in\phi}=2(1+y^2)^n\cos{n\phi}. \] It follows that \[ h_{D_n}(-y^2)=(1+y^2)^n\cos{n\phi}+2ny^2(1-y^2)^{n-2} =\bigl(1+y^2\bigr)^ng_n(\phi), \] where \begin{equation}\label{g} g_n(\phi)=\cos{n\phi}+{n\over2}\sin^2\phi\cos^{n-2}\phi. \end{equation} Now it suffices to prove that the function $g_n(\phi)$ has $n$ distinct roots $\phi$ in the interval $(0,\pi)$. Let \[ h_n(\phi)={n\over2}\sin^2\phi\cos^{n-2}\phi. \] We claim that \begin{equation}\label{h} |h_n(\phi)|<1. 
\end{equation}
In fact, by the arithmetic-geometric mean inequality,
\begin{align}
h_n^2(\phi) & = \Bigl({n\over2}\sin^2\phi\Bigr)\Bigl({n\over2}\sin^2\phi\Bigr)
\bigl(\cos^2\phi\bigr)^{n-2}\notag \\
& \le \Bigl({n\over2}\sin^2\phi+{n\over2}\sin^2\phi+(n-2)\cos^2\phi\Bigr)^n\Big/n^n\label{eq-11}\\[5pt]
&=\Bigl(1-{2\cos^2\phi\over n}\Bigr)^n\le1.\label{eq-12}
\end{align}
Note that the equality in~\eqref{eq-11} holds if and only if
\begin{equation}\label{cond1}
{n\over2}\sin^2\phi=\cos^2\phi,
\end{equation}
while the equality in~\eqref{eq-12} holds if and only if
\begin{equation}\label{cond2}
\cos\phi=0.
\end{equation}
However, the conditions~\eqref{cond1} and~\eqref{cond2} contradict each other, so the equalities in~\eqref{eq-11} and~\eqref{eq-12} cannot hold simultaneously. This confirms the claim~\eqref{h}.

Let $j$ be an integer. From~\eqref{g}, we see that
\[
g_n\Bigl({j\pi\over n}\Bigr)=(-1)^j+h_n\Bigl({j\pi\over n}\Bigr).
\]
Since $|h_n(\phi)|<1$ for any $\phi$, we have
\[
(-1)^jg_n\Bigl({j\pi\over n}\Bigr)
=1+(-1)^jh_n\Bigl({j\pi\over n}\Bigr)>0.
\]
By the continuity of $g_n(\phi)$ and the intermediate value theorem, we obtain that $g_n(\phi)$ has roots $\phi_0,\phi_1,\ldots,\phi_{n-1}$ such that
\[
0<\phi_0<{\pi\over n}<\phi_1<{2\pi\over n}<\phi_2<{3\pi\over n}
<\cdots<{(n-1)\pi\over n}<\phi_{n-1}<\pi.
\]
In conclusion, the polynomial $h_{D_n}(x)$ has $n$ distinct negative roots
\[
x_j=-\tan^2{\phi_j\over2},\qquad j=0,1,\ldots,n-1.
\]
This completes the proof.
\end{proof}

\section{The log-concavity}

In this section, we consider the log-concavities of coordinator polynomials of Weyl group lattices. For basic notions on the Weyl group, see Humphreys~\cite{Hum72B}. By the definition~(\ref{def-h}), it is easy to see that the coordinator polynomial of any Weyl group lattice is the product of the coordinator polynomials of the Weyl group lattices determined by the irreducible components. This has been noticed by, for instance, Conway and Sloane~\cite[Page 2373]{CS97}. By the Cauchy-Binet theorem, the product of log-concave polynomials with nonnegative coefficients and no internal zero coefficients is log-concave; see Stanley~\cite[Proposition 2]{Sta89}. Therefore, we are led to consider the coordinator polynomials of the Weyl group lattices which are determined by irreducible root systems. By the Cartan-Killing classification, irreducible root systems can be classified into types $A_n$, $B_n$, $C_n$, $D_n$, $E_6$, $E_7$, $E_8$, $F_4$ and $G_2$. For historical notes, see Bourbaki~\cite{Bou02}. To conclude, we have the following result.

\begin{thm}
All coordinator polynomials of Weyl group lattices are log-concave.
\end{thm}

\begin{proof}
It is straightforward to verify the log-concavity of the coordinator polynomials of types $E_6$, $E_7$, $E_8$, $F_4$ and $G_2$. By Theorems~\ref{thm-A}, \ref{thm-C} and~\ref{thm-D}, it suffices to show the log-concavity of $h_{B_n}(x)$. Let $b_k$ be the coefficient of $x^k$ in $h_{B_n}(x)$, and let $b_k'=b_k\big/{n\choose k}$. By~(\ref{b}), we have
\[
b_k'={(2n+1)!!\over (2k-1)!!(2n-2k+1)!!}-2k.
\]
It is easy to verify that the sequence $\bigl\{b_k'\bigr\}_{k=0}^n$ is log-concave, which implies the log-concavity of $\bigl\{b_k\bigr\}_{k=0}^n$. This completes the proof.
\end{proof}

\noindent{\bf Acknowledgments.} This work was supported by the National Natural Science Foundation of China (Grant No.~$11101010$). We are grateful to Arthur Yang for kindly telling us about the real-rootedness conjecture, and to Chunwei Song for his encouragement.
We also thank the anonymous referee for detailed comments that improved the organization of this material. \end{document}
arXiv
Classification theorem In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class. A few issues related to classification are the following. • The equivalence problem is "given two objects, determine if they are equivalent". • A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it. • A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem. • A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class. There exist many classification theorems in mathematics, as described below. Geometry • Classification of Euclidean plane isometries • Classification theorems of surfaces • Classification of two-dimensional closed manifolds • Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four) • Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface • Thurston's eight model geometries, and the geometrization conjecture • Berger classification • Classification of Riemannian symmetric spaces • Classification of 3-dimensional lens spaces • Classification of manifolds Algebra • Classification of finite simple groups • Classification of Abelian groups • Classification of Finitely generated abelian group • Classification of Rank 3 permutation group • Classification of 2-transitive permutation groups • Artin–Wedderburn theorem — a classification theorem for semisimple rings • Classification of Clifford algebras • Classification of low-dimensional real Lie algebras • Bianchi classification • ADE classification • Langlands classification Linear algebra • Finite-dimensional vector spaces (by dimension) • Rank–nullity theorem (by rank and nullity) • Structure theorem for finitely generated modules over a principal ideal domain • Jordan normal form • Sylvester's law of inertia Analysis • Classification of discontinuities Complex analysis • Classification of Fatou components Mathematical physics • Classification of electromagnetic fields • Petrov classification • Segre classification • Wigner's classification See also • Representation theorem • List of manifolds
Wikipedia
Graph convolutional networks: a comprehensive review Si Zhang1, Hanghang Tong1, Jiejun Xu2 & Ross Maciejewski3 Computational Social Networks volume 6, Article number: 11 (2019) Cite this article Graphs naturally appear in numerous application domains, ranging from social analysis, bioinformatics to computer vision. The unique capability of graphs enables capturing the structural relations among data, and thus allows to harvest more insights compared to analyzing data in isolation. However, it is often very challenging to solve the learning problems on graphs, because (1) many types of data are not originally structured as graphs, such as images and text data, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, the representation learning has achieved great successes in many areas. Thereby, a potential solution is to learn the representation of graphs in a low-dimensional Euclidean space, such that the graph properties can be preserved. Although tremendous efforts have been made to address the graph representation learning problem, many of them still suffer from their shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and other related areas, and demonstrated the superior performance in various problems. In this survey, despite numerous types of graph neural networks, we conduct a comprehensive review specifically on the emerging field of graph convolutional networks, which is one of the most prominent graph deep learning models. First, we group the existing graph convolutional network models into two categories based on the types of convolutions and highlight some graph convolutional network models in details. Then, we categorize different graph convolutional networks according to the areas of their applications. Finally, we present several open challenges in this area and discuss potential directions for future research. Graphs naturally arise in many real-world applications, including social analysis [1], fraud detection [2, 3], traffic prediction [4], computer vision [5], and many more. By representing the data as graphs, the structural information can be encoded to model the relations among entities, and furnish more promising insights underlying the data. For example, in a transportation network, nodes are often the sensors and edges represent the spatial proximity among sensors. In addition to the temporal information provided by the sensors themselves, the graph structure modeled by the spatial correlations leads to a prominent improvement in the traffic prediction problem [4]. Moreover, by modeling the transactions among people as a graph, the complex transaction patterns can be mined for synthetic identity detection [3] and money laundering detection [6]. However, the complex structure of graphs [7] often hampers the capability of gaining the true insights underlying the graphs. Such complexity, for example, resides in the non-Euclidean nature of the graph-structured data. A potential solution to dealing with the complex patterns is to learn the graph representations in a low-dimensional Euclidean space via embedding techniques, including the traditional graph embedding methods [8,9,10], and the recent network embedding methods [11, 12]. Once the low-dimensional representations are learned, many graph-related problems can be easily done, such as the classic node classification and link prediction [12]. 
There exist many thorough reviews on both traditional graph embedding and recent network embedding methods. For example, Yan et al. review several well-established traditional graph embedding methods and discuss the general framework for graph dimensionality reduction [13]. Hamilton et al. review the general graph representation learning methods, including node embedding and subgraph embedding [14]. Furthermore, Cui et al. discuss the differences between the traditional graph embedding and the recent network embedding methods [15]. One notable difference is that the recent network embedding is more suitable for the task-specific network inference. Other existing literature reviews on network embedding include [16, 17]. Despite some successes of these embedding methods, many of them suffer from the limitations of the shallow learning mechanisms [11, 12] and might fail to discover the more complex patterns behind the graphs. Deep learning models, on the other hand, have been demonstrated their power in many applications. For example, convolution neural networks (CNNs) achieve a promising performance in many computer vision [18] and natural language processing [19] applications. One key reason of such successes is that CNN models can highly exploit the stationarity and compositionality properties of certain types of data. In particular, due to the grid-like nature of images, the convolutional layers in CNNs enable to take advantages of the hierarchical patterns and extract high-level features of the images to achieve a great expressive capability. The basic CNN models aim to learn a set of fixed-size trainable localized filters which scan every pixel in the images and combine the surrounding pixels. The core components include the convolutional and pooling layers that can be operated on the data with an Euclidean or grid-like structure. However, the non-Euclidean characteristic of graphs (e.g., the irregular structure) makes the convolutions and filtering on graphs not as well-defined as on images. In the past decades, researchers have been working on how to conduct convolutional operations on graphs. One main research direction is to define graph convolutions from the spectral perspective, and thus, graph signal processing, such as graph filtering and graph wavelets, has attracted lots of research interests. Shuman et al. give a comprehensive overview of graph signal processing, including the common operations and analyses on graphs [20]. Briefly speaking, spectral graph convolutions are defined in the spectral domain based on graph Fourier transform, an analogy of 1-D signal Fourier transform. In this way, the spectral-based graph convolutions can be computed by taking the inverse Fourier transform of the multiplication between two Fourier transformed graph signals. On the other hand, graph convolution can be also defined in the spatial domain (i.e., vertex domain) as the aggregations of node representations from the node neighborhoods. The emergence of these operations opens a door to graph convolutional networks. Generally speaking, graph convolutional network models are a type of neural network architectures that can leverage the graph structure and aggregate node information from the neighborhoods in a convolutional fashion. Graph convolutional networks have a great expressive power to learn the graph representations and have achieved a superior performance in a wide range of tasks and applications. 
Note that in the past few years, many other types of graph neural networks have been proposed, including (but are not limited to): (1) graph auto-encoder [21], (2) graph generative model [22, 23], (3) graph attention model [24, 25], and (4) graph recurrent neural networks [26, 27]. There exist several other related surveys on the topic of graph neural networks. Bronstein et al. review the mathematical details and a number of early approaches of geometric deep learning for both graphs and manifolds [28]. Zhang et al. present a detailed review that covers many existing graph neural networks beyond graph convolutional networks, such as graph attention networks and gated graph neural network [29]. In addition, Wu et al. also review the studies on graph generative models and neural networks for spatial-temporal networks [30]. Besides, Lee et al. present an overview of graph neural networks with a special focus on graph attention networks [31]. However, since graph convolutional network is a very hot and fast developing research area, these existing surveys may not cover the most up-to-date models. In this survey, we focus specifically on reviewing the existing literature of the graph convolutional networks and cover the recent progress. The main contributions of this survey are summarized as follows: We introduce two taxonomies to group the existing graph convolutional network models (Fig. 1). First, we categorize graph convolutional networks into spectral-based and spatial-based models depending on the types of convolutions. Then, we introduce several graph convolutional networks according to their application domains. We motivate each taxonomy by surveying and discussing the up-to-date graph convolutional network models. We discuss the challenges of the current models that need to be addressed and highlight some promising directions for the future work. The rest of the paper is organized as follows. We start by summarizing the notations and introducing some preliminaries of graph convolutional networks in "Notations and preliminaries" section. Then, in "Spectral graph convolutional networks" and "Spatial graph convolutional networks" sections, we categorize the existing models into the spectral-based methods and the spatial-based methods by the types of graph filtering with some detailed examples. "Applications of graph convolutional networks" section presents the methods from a view of applications. In "Challenges and future researches" section, we discuss some challenges of the existing graph convolutional network models and provide some directions for the future work. Finally, we conclude our survey in "Concluding remarks" section. Notations and preliminaries In this section, we present the notations and some preliminaries for the graph convolutional networks. In general, we use bold uppercase letters for matrices, bold lowercase letters for vectors, and lowercase letters for scalars. For matrix indexing, we use \(\mathbf{A}(i,j)\) to denote the entry at the intersection of the ith row and jth column. We denote the transpose of a matrix \(\mathbf{A}\) as \(\mathbf{A}^T\). Graphs and graph signals In this survey, we are interested in the graph convolutional network models on an undirected connected graph \(\mathcal {G}=\{\mathcal {V},\mathcal {E},\mathbf{A}\}\), which consists of a set of nodes \(\mathcal {V}\) with \(|\mathcal {V}|=n\), a set of edges \(\mathcal {E}\) with \(|\mathcal {E}|=m\) and the adjacency matrix \(\mathbf{A}\). 
If there is an edge between node i and node j, the entry \(\mathbf{A}(i,j)\) denotes the weight of the edge; otherwise, \(\mathbf{A}(i,j)=0\). For unweighted graphs, we simply set \(\mathbf{A}(i,j)=1\). We denote the degree matrix of \(\mathbf{A}\) as a diagonal matrix \({\mathbf{D}}\), where \({\mathbf{D}}(i,i)=\sum _{j=1}^{n}\mathbf{A}(i,j)\). Then, the Laplacian matrix of \(\mathbf{A}\) is denoted as \({\mathbf{L}}={\mathbf{D}}-\mathbf{A}\). The corresponding symmetrically normalized Laplacian matrix is \(\tilde{{\mathbf{L}}}=\mathbf{I}-{\mathbf{D}}^{-\frac{1}{2}}\mathbf{A}{\mathbf{D}}^{-\frac{1}{2}}\), where \(\mathbf{I}\) is an identity matrix. A graph signal defined on the nodes is represented as a vector \({\mathbf{x}}\in \mathbb {R}^n\), where \({\mathbf{x}}(i)\) is the signal value on the node i [20]. Node attributes, for instance, can be considered as the graph signals. Denote \(\mathbf{X}\in \mathbb {R}^{n\times d}\) as the node attribute matrix of an attributed graph, and then, the columns of \(\mathbf{X}\) are the d signals of the graph. Graph Fourier transform It is well-known that the classic Fourier transform of an 1-D signal f is computed by \(\hat{f}(\xi )=\langle f,e^{2\pi i\xi t}\rangle\), where \(\xi\) is the frequency of \(\hat{f}\) in the spectral domain and the complex exponential is the eigenfunction of the Laplace operator. Analogously, the graph Laplacian matrix \({\mathbf{L}}\) is the Laplace operator defined on a graph. Hence, an eigenvector of \({\mathbf{L}}\) associated with its corresponding eigenvalue is an analog to the complex exponential at a certain frequency. Note that the symmetrically normalized Laplacian matrix \(\tilde{{\mathbf{L}}}\) and the random-walk transition matrix can be also used as the graph Laplace operator. In particular, denote the eigenvalue decomposition of \(\tilde{{\mathbf{L}}}\) as \(\tilde{{\mathbf{L}}}=\mathbf{U}\varvec{\Lambda }\mathbf{U}^T\) where the lth column of \(\mathbf{U}\) is the eigenvector \(\mathbf{u}_l\) and \(\varvec{\Lambda }(l,l)\) is the corresponding eigenvalue \(\lambda _l\), and then, we can compute the Fourier transform of a graph signal \({\mathbf{x}}\) as: $$\begin{aligned} \hat{{\mathbf{x}}}(\lambda _l)=\langle {\mathbf{x}},\mathbf{u}_l\rangle =\sum _{i=1}^{n}{\mathbf{x}}(i)\mathbf{u}^*_l(i). \end{aligned}$$ The above equation represents in the spectral domain a graph signal defined in the vertex domain. Then, the inverse graph Fourier transform can be written as: $$\begin{aligned} {\mathbf{x}}(i)=\sum _{l=1}^{n}\hat{{\mathbf{x}}}(\lambda _l)\mathbf{u}_l(i). \end{aligned}$$ Graph filtering Graph filtering is a localized operation on graph signals. Analogous to the classic signal filtering in the time or spectral domain, one can localize a graph signal in its vertex domain or spectral domain, as well. (1) Frequency filtering: Recall that the frequency filtering of a classic signal is often represented as the convolution with the filter signal in the time domain. However, due to the irregular structure of the graphs (e.g., different nodes having different numbers of neighbors), graph convolution in the vertex domain is not as straightforward as the classic signal convolution in the time domain. Note that for classic signals, the convolution in the time domain is equivalent to the inverse Fourier transform of the multiplication between the spectral representations of two signals. 
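To make Eqs. (1) and (2) concrete, the following short NumPy sketch (an illustration only; the 4-node toy graph and the random signal are arbitrary choices, not taken from any of the cited papers) computes the graph Fourier transform of a signal with respect to the symmetrically normalized Laplacian and verifies that the inverse transform recovers the signal.

```python
import numpy as np

# a small undirected, connected toy graph on 4 nodes (adjacency matrix)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian

# graph Fourier basis: eigenvectors of the normalized Laplacian
lam, U = np.linalg.eigh(L_norm)   # lam: graph frequencies, columns of U: u_1, ..., u_n

x = np.random.default_rng(0).standard_normal(len(A))   # a graph signal
x_hat = U.T @ x                   # graph Fourier transform, Eq. (1)
x_back = U @ x_hat                # inverse graph Fourier transform, Eq. (2)

assert np.allclose(x, x_back)
```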
Therefore, the spectral graph convolution is defined analogously as: $$\begin{aligned} ({\mathbf{x}}*_{\mathcal {G}}\mathbf{y})(i)=\sum _{l=1}^{n}\hat{{\mathbf{x}}}(\lambda _l)\hat{\mathbf{y}}(\lambda _l)\mathbf{u}_l(i). \end{aligned}$$ Note that \(\hat{{\mathbf{x}}}(\lambda _l)\hat{\mathbf{y}}(\lambda _l)\) indicates the filtering in the spectral domain. Thus, the frequency filtering of a signal \({\mathbf{x}}\) on graph \(\mathcal {G}\) with a filter \(\mathbf{y}\) is exactly same as Eq. (3) and is further re-written as: $$\begin{aligned} {\mathbf{x}}_{out}={\mathbf{x}}*_{\mathcal {G}}\mathbf{y}=\mathbf{U} \begin{bmatrix} \hat{\mathbf{y}}(\lambda _1)&0 \\&\ddots&\\ 0&\hat{\mathbf{y}}(\lambda _n) \end{bmatrix} \mathbf{U}^T{\mathbf{x}}. \end{aligned}$$ (2) Vertex filtering: The graph filtering of a signal \({\mathbf{x}}\) in the vertex domain is generally defined as a linear combination of the signal components in the nodes neighborhood. Mathematically, the vertex filtering of a signal \({\mathbf{x}}\) at node i is: $$\begin{aligned} {\mathbf{x}}_{out}(i)=w_{i,i}{\mathbf{x}}(i)+\sum _{j\in \mathcal {N}(i,K)}w_{i,j}{\mathbf{x}}(j), \end{aligned}$$ where \(\mathcal {N}(i,K)\) represents the K-hop neighborhood of node i in the graph and the parameters \(\{w_{i,j}\}\) are the weights used for the combination. It can be shown that using a K-polynomial filter, the frequency filtering can be interpreted from the vertex filtering perspective [20]. Spectral graph convolutional networks An overview of graph convolutional networks In this section and the subsequent "Spatial graph convolutional networks" section, we categorize the graph convolutional neural networks into the spectral-based methods and the spatial-based methods, respectively. We consider the spectral-based methods to be those methods that start with constructing the frequency filtering. The first notable spectral-based graph convolutional network is proposed by Bruna et al. [32]. Motivated by the classic CNN, this deep model on graphs contains several spectral convolutional layers that take a vector \(\mathbf{X}^p\) of size \(n\times d_{p}\) as the input feature map of the pth layer and output a feature map \(\mathbf{X}^{p+1}\) of size \(n\times d_{p+1}\) by: $$\begin{aligned} \mathbf{X}^{p+1}(:, j)=\sigma \left( \sum _{i=1}^{d_p}\mathbf{V} \begin{bmatrix} (\varvec{\theta }_{i,j}^p)(1)& 0 \\& \ddots& \\ 0& (\varvec{\theta }_{i,j}^p)(n) \end{bmatrix} \mathbf{V}^T\mathbf{X}^p(:,i)\right), \quad \forall j=1,\cdots ,d_{p+1}, \end{aligned}$$ where \(\mathbf{X}^p(:,i)\) (\(\mathbf{X}^{p+1}(:,j)\)) is the ith (jth) dimension of the input (output) feature map, respectively; \(\varvec{\theta }_{i,j}^p\) denotes a vector of learnable parameters of the filter at the pth layers. Each column of \(\mathbf{V}\) is the eigenvector of \({\mathbf{L}}\) and \(\sigma (\cdot )\) is the activation function. However, there are several issues with this convolutional structure. First, the eigenvector matrix \(\mathbf{V}\) requires the explicit computation of the eigenvalue decomposition of the graph Laplacian matrix, and hence suffers from the \(O(n^3)\) time complexity which is impractical for large-scale graphs. Second, though the eigenvectors can be pre-computed, the time complexity of Eq. (6) is still \(O(n^2)\). Third, there are O(n) parameters to be learned in each layer. Besides, these non-parametric filters are not localized in the vertex domain. 
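The frequency filtering of Eq. (4) is equally easy to prototype. The sketch below is again only an illustration: the heat-kernel filter \(\hat{\mathbf{y}}(\lambda)=e^{-\tau\lambda}\), the value \(\tau=2\), and the impulse signal are arbitrary choices. Replacing the exponential with a degree-K polynomial in \(\lambda\) would make each output value depend only on the corresponding node's K-hop neighborhood, which is the vertex-domain interpretation mentioned above.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

lam, U = np.linalg.eigh(L_norm)

def frequency_filter(x, y_hat):
    # Eq. (4): x_out = U diag(y_hat(lambda_1), ..., y_hat(lambda_n)) U^T x
    return U @ np.diag(y_hat(lam)) @ U.T @ x

x = np.array([1.0, 0.0, 0.0, 0.0])                        # an impulse on node 0
x_out = frequency_filter(x, lambda f: np.exp(-2.0 * f))   # low-pass (heat-kernel) filter
print(np.round(x_out, 3))
```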
To overcome the limitations, the authors also propose to use a rank-r approximation of eigenvalue decomposition. To be specific, they use the first r eigenvectors of \(\mathbf{V}\) that carry the most smooth geometry of the graph and consequently reduce the number of parameters of each filter to O(1). Moreover, if the graph contains the clustering structure that can be explored via such a rank-r factorization, the filters are potentially localized. Building upon [32], Henaff et al. propose to apply an input smoothing kernel (e.g., splines) and use the corresponding interpolated weights as the filter parameters for graph spectral convolutions [33]. As claimed in [33], the spatial localization in the vertex domain can be somewhat achieved. However, the computational complexity and the localization power still hinder learning better representations of the graphs. To address these limitations, Defferrard et al. propose the ChebNet that uses K-polynomial filters in the convolutional layers for localization [34]. Such a K-polynomial filter is represented by \(\hat{\mathbf{y}}(\lambda _l)=\sum _{k=1}^{K}\theta _k\lambda _l^k\). As mentioned in "Notations and preliminaries" section, the K-polynomial filters achieve a good localization in the vertex domain by integrating the node features within the K hop neighborhood [20], and the number of the trainable parameters decreases to \(O(K)=O(1)\). In addition, to further reduce the computational complexity, the Chebyshev polynomial approximation [35] is used to compute the spectral graph convolution. Mathematically, the Chebyshev polynomial \(T_k(x)\) of order k can be recursively computed by \(T_k(x)=2xT_{k-1}(x)-T_{k-2}(x)\) with \(T_0=1,~T_1(x)=x\). Defferrard et al. normalize the filters by \(\tilde{\lambda }_l=2\frac{\lambda _l}{\lambda _{\text {max}}}-1\) to make the scaled eigenvalues lie within \([-1, 1]\). As a result, the convolutional layer is: $$\begin{aligned} \mathbf{X}^{p+1}(:,j)=\sigma \left( \sum _{i=1}^{d_p}\sum _{k=0}^{K-1} (\varvec{\theta }_{i,j}^p)(k+1)T_k(\tilde{{\mathbf{L}}})\mathbf{X}^p(:,i)\right) , \quad \forall j=1,\ldots ,d_{p+1}, \end{aligned}$$ where \(\varvec{\theta }_{i,j}^p\) is a K-dimensional parameter vector for the ith column of input feature map and the jth column of output feature map at the \(-p\)th layer. The authors also design a max-pooling operation with the multilevel clustering method Graclus [36] which is quite efficient to uncover the hierarchical structure of the graphs. As a special variant, the graph convolutional network proposed by Kipf et al. (named as GCN) aims at the semi-supervised node classification task on graphs [37]. In this model, the authors truncate the Chebyshev polynomial to first-order (i.e., \(K=2\) in Eq. (7)) and specifically set \((\varvec{\theta })_{i,j}(1)=-(\varvec{\theta })_{i,j}(2)=\theta _{i,j}\). Besides, since the eigenvalues of \(\tilde{{\mathbf{L}}}\) are within [0, 2], relaxing \(\lambda _{\text {max}}=2\) still guarantees \(-1\le \tilde{\lambda }_l\le 1,~\forall l=1,\cdots ,n\). 
This leads to the simplified convolution layer as: $$\begin{aligned} \mathbf{X}^{p+1}=\sigma \left( \tilde{{\mathbf{D}}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{{\mathbf{D}}}^{-\frac{1}{2}}\mathbf{X}^p\varvec{\Theta }^p\right), \end{aligned}$$ where \(\tilde{\mathbf{A}}=\mathbf{I}+\mathbf{A}\) is equivalent to adding self-loops to the original graph and \(\tilde{{\mathbf{D}}}\) is the diagonal degree matrix of \(\tilde{\mathbf{A}}\), and \(\varvec{\Theta }^p\) is a \(d_{p+1}\times d_p\) parameter matrix. Besides, Eq. (8) has a close relationship with the Weisfeiler–Lehman isomorphism test [38]. In addition, since Eq. (8) is essentially equivalent to aggregating node representations from their direct neighborhood, GCN has a clear meaning of vertex localization and, thus, is often considered as bridging the gap between the spectral-based methods and spatial-based methods. However, the training process could be costly in terms of memory for large-scale graphs. Moreover, the transduction of GCN interferes with the generalization, making the learning of representations of the unseen nodes in the same graph and the nodes in an entirely different graph more difficult [37]. To address the issues of GCN [37], FastGCN [39] improves the original GCN model by enabling the efficient minibatch training. It first assumes that the input graph \(\mathcal {G}\) is an induced subgraph of a possibly infinite graph \(\mathcal {G}'\), such that the nodes \(\mathcal {V}\) of \(\mathcal {G}\) are i.i.d. samples of the nodes of \(\mathcal {G}'\) (denoted as \(\mathcal {V}'\)) under some probability measure \(\mathcal {P}\). This way, the original convolution layer represented by Eq. (8) can be approximated by Monte Carlo sampling. Denote some i.i.d. samples \(u_1^p,\ldots ,u_{t_p}^p\) at layer-p, the graph convolution can be estimated by: $$\begin{aligned} \mathbf{X}^{p+1}(v,:)=\sigma \left( \frac{1}{t_p}\sum _{i=1}^{t_p}\tilde{\mathbf{A}}(v,u_i^p)\mathbf{X}^p(u_i^p,:)\varvec{\Theta }^p\right). \end{aligned}$$ Note that this Monte Carlo estimator of graph convolution could lead to a high variance of estimation. To reduce the variance, the authors formulate the variance and solve for an importance sampling distribution \(\mathcal {P}\) of nodes. In addition, Chen et al. develop control variate-based algorithms to approximate GCN model [37] and propose an efficient sampling-based stochastic algorithm for training [40]. Besides, the authors theoretically prove the convergence of the algorithm regardless of the sampling size in the training phase [40]. Recently, Huang et al. develop an adaptive layer-wise sampling method to accelerate the training process in GCN models [41]. They first construct the layers in a graph convolutional network in a top-down way and then propose a layer-wise sampler to avoid the over-expansion of the neighborhoods due to the fixed-size sampling. To further reduce the variance, an explicit importance sampling is derived. In parallel to the above models built upon Chebyshev polynomial approximations, other localized polynomial filters and their corresponding graph convolutional network models have also been proposed. For example, Levie et al. propose to use a more complex approximation method, namely Cayley polynomial, to approximate filters [42]. 
The proposed CayleyNet model is motivated by the fact that as the eigenvalues of the Laplacian matrix used in Chebyshev polynomials are scaled to the band \([-1, 1]\), the narrow frequency bands (i.e., eigenvalues concentrated around one frequency) are hard to be detected. Given that this narrow-band characteristic often appears in the community-structured graphs, ChebNet has limited flexibility and performance in a broader range of graph mining problems. Specifically, the Cayley filters of order K have the following form:Footnote 1 $$\begin{aligned} \hat{\mathbf{y}}_{{\mathbf{c}},h}(\lambda _l)=c_0+2\text {Re}\left\{ \sum _{k=1}^{K} c_k(h\lambda _l-i)^j(h\lambda _l+i)^{-j}\right\}, \end{aligned}$$ where \({\mathbf{c}}=[c_0,\cdots ,c_K]\) are the parameters to be learned and \(h>0\) is a spectral zoom parameter used to dilate graph spectrum, so that the Cayley filters can specialize different frequency bands. The localization property as well as the linear complexity can be achieved by further using Jacobi approximation [42]. In addition, LanczosNet [43] is proposed to encode the multi-scale characteristic naturally resided in graphs and penetrates the computation bottleneck of most existing models that involve the exponentiated graph Laplacian in the graph convolution operators to capture multi-scale information (e.g., [34]). In detail, the authors first compute the low rank approximation of matrix \(\tilde{\mathbf{A}}\) by Lanczos algorithm, such that \(\tilde{\mathbf{A}}\approx \mathbf{V}\mathbf{R} \mathbf{V}^T\), where \(\mathbf{V}=\mathbf{Q}\mathbf{B}\), \(\mathbf{Q}\in \mathbb {R}^{n\times K}\) contains the first K Lanczos vectors, and \(\mathbf{B}\mathbf{R}\mathbf{B}^T\) is the eigen-decomposition of a tridiagonal matrix \(\mathbf{T}\). In this way, the tth power of \(\tilde{\mathbf{A}}\) can be simply approximated by \(\tilde{\mathbf{A}}^t\approx \mathbf{V}\mathbf{R}^t\mathbf{V}^T\). Based on this, the proposed spectral filter in LanczosNet is formulated as: $$\begin{aligned} \mathbf{X}^{p+1}(:,j)=[\mathbf{X}^p(:,i),\mathbf{V}\hat{\mathbf{R}}(1)\mathbf{V}^T\mathbf{X}^p(:,i), \ldots ,\mathbf{V}\hat{\mathbf{R}}(K-1)\mathbf{V}^T\mathbf{X}^p(:,i)]\varvec{\Theta }_{i,j}, \end{aligned}$$ where \(\hat{\mathbf{R}}(k)=f_k([\mathbf{R}^0,\ldots ,\mathbf{R}^{K-1}])\) is a diagonal matrix and \(f_k\) is a multi-layer perceptron (MLP). To leverage the multi-scale information, the above spectral filter is modified by adding short-scale parameters and long-scale parameters. A variant for node representation learning is also proposed in [43]. Beyond the Fourier transform-based spectral filters, Xu et al. propose to use the spectral wavelet transform on graphs, such that the consequent model can vary different scales of graphs to be captured [44]. Moreover, since many graph structures are manually constructed based upon the similarities among data points (e.g., kNN graphs), these fixed graphs may not have the best learning capability for some specific tasks. To this end, Li et al. propose a spectral graph convolution layer that can simultaneously learn the graph Laplacian [45]. In particular, instead of directly parameterizing the filter coefficients, the spectral graph convolution layer parameterizes a function over the graph Laplacian by introducing a notion of residual Laplacian. However, the main drawback of this method is the inevitable \(O(n^2)\) complexity. 
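As a rough illustration of the low-rank idea behind LanczosNet, the sketch below approximates powers of the normalized adjacency matrix through a rank-K spectral factorization, mirroring \(\tilde{\mathbf{A}}^t\approx \mathbf{V}\mathbf{R}^t\mathbf{V}^T\). For simplicity it keeps the K leading eigenpairs instead of running an explicit Lanczos iteration; all names, shapes, and the toy graph are assumptions, not the actual model.

```python
import numpy as np

def low_rank_powers(A_tilde, K, t_list):
    """Approximate powers of a symmetric matrix via a rank-K spectral factorization,
    so that A~^t ~ V R^t V^T.  (A true Lanczos iteration would build V from K Lanczos
    vectors; keeping the K largest-magnitude eigenpairs is an illustrative shortcut.)"""
    evals, evecs = np.linalg.eigh(A_tilde)
    idx = np.argsort(-np.abs(evals))[:K]
    R, V = evals[idx], evecs[:, idx]
    return {t: (V * R**t) @ V.T for t in t_list}   # columns of V scaled by R^t

# toy usage: normalized adjacency (with self-loops) of a small random graph
n = 6
A = (np.random.rand(n, n) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T
A_tilde = A + np.eye(n)
d = A_tilde.sum(1)
A_norm = A_tilde / np.sqrt(np.outer(d, d))         # D~^{-1/2} A~ D~^{-1/2}
powers = low_rank_powers(A_norm, K=3, t_list=[1, 2, 4])
```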
Spatial graph convolutional networks As the spectral graph convolution relies on the specific eigenfunctions of Laplacian matrix, it is still nontrivial to transfer the spectral-based graph convolutional network models learned on one graph to another graph whose eigenfunctions are different. On the other hand, according to the graph filtering in vertex domain (i.e., Eq. (5)), graph convolution can be alternatively generalized to some aggregations of graph signals within the node neighborhood. In this section, we categorize the spatial graph convolutional networks into the classic CNN-based models, propagation-based models, and other related general frameworks. Classic CNN-based spatial graph convolutional networks Classic CNN models on grid-like data, such as images, have been shown great successes in many related applications, including images classification [46,47,48], object detection [18, 49], semantic segmentation [50, 51], etc. The basic properties of grid-like data that are exploited by convolution architectures include: (1) the number of neighboring pixels for each pixel is fixed, and (2) the spatial order of scanning images is naturally determined, i.e., from left to right and from top to bottom. However, different from images, neither the number of neighboring units nor the spatial order among them is fixed in the arbitrary graph data. To address these issues, many works have been proposed to build graph convolutional networks directly upon the classic CNNs. Niepert et al. propose to address the aforementioned challenges by extracting locally connected regions from graphs [52]. The proposed PATCHY-SAN model first determines the nodes ordering by a given graph labeling approach such as centrality-based methods (e.g., degree, PageRank, betweenness, etc.) and selects a fixed-length sequence of nodes. Second, to address the issue of arbitrary neighborhood size of nodes, a fixed-size neighborhood for each node is constructed. Finally, the neighborhood graph is normalized according to graph labeling procedures, so that nodes of similar structural roles are assigned similar relative positions, followed by the representation learning with classic CNNs. However, as the spatial order of nodes is determined by the given graph labeling approach that is often solely based on graph structure, PATCHY-SAN lacks the learning flexibility and generality to a broader range of applications. Different from PATCHY-SAN that order nodes by structural information [52], LGCN model [53] is proposed to transform the irregular graph data to grid-like data by using both structural information and input feature map of the p-th layer. In particular, for a node \(u\in \mathcal {V}\) in \(\mathcal {G}\), it stacks the input feature map of the node u's neighbors into a single matrix \(\mathbf{M}\in \mathbb {R}^{|\mathcal {N}(u)|\times d_p}\), where \(|\mathcal {N}(u)|\) represents the number of 1-hop neighboring nodes of node u. For each column of \(\mathbf{M}\), the first r largest values are preserved and form a new matrix \(\tilde{\mathbf{M}}\in \mathbb {R}^{r\times d_p}\). In such a simple way, the input feature map along with the structural information of the graph can be transformed to an 1-D grid-like data \(\tilde{\mathbf{X}}_p\in \mathbb {R}^{n\times (r+1)\times d_p}\). Then, the classic 1-D CNN can be applied to \(\tilde{\mathbf{X}}^p\) and learn new node representations \(\mathbf{X}^{p+1}\). Note that a subgraph-based training method is also proposed to scale the model to large-scale graphs. 
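A minimal sketch of the LGCN-style transformation to grid-like data described above: for each node, the neighbors' feature vectors are stacked, the r largest values per feature column are kept (with zero-padding when a node has fewer than r neighbors), and the node's own features are prepended, so that an ordinary 1-D convolution can be applied afterwards. The function name, padding choice, and shapes are illustrative assumptions.

```python
import numpy as np

def lgcn_grid(A, X, r):
    """LGCN-style k-largest node selection producing an (n, r + 1, d) grid."""
    n, d = X.shape
    out = np.zeros((n, r + 1, d))
    for u in range(n):
        out[u, 0] = X[u]                      # the node itself
        nbrs = np.nonzero(A[u])[0]
        if len(nbrs) == 0:
            continue
        M = X[nbrs]                           # |N(u)| x d neighbor feature map
        M_sorted = -np.sort(-M, axis=0)       # each column sorted in descending order
        k = min(r, M_sorted.shape[0])
        out[u, 1:1 + k] = M_sorted[:k]
    return out                                # ready for a classic 1-D CNN over axis 1

# toy usage
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.random.randn(3, 4)
grid = lgcn_grid(A, X, r=2)                   # shape (3, 3, 4)
```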
As the convolution in the classic CNNs can only manage the data with the same topological structures, another way to extend the classic CNNs to graph data is to develop a structure-aware convolution operation for both Euclidean and non-Euclidean data. Chang et al. first build the connection between the classical filters and univariate functions (i.e., functional filters) and then model the graph structure into the generalized functional filters to be structural aware [54]. Since this structure-aware convolution requires infinite parameters to be learned, the Chebyshev polynomial [35] is used for approximation. Another work [55] re-architects the classic CNN by designing a set of fixed-size learnable filters (e.g., size-1 up to size-K) and shows that these filters are adaptive to the topology of the graph. Propagation-based spatial graph convolutional networks In this subsection, we focus on the spatial graph convolutions that propagate and aggregate the node representations from neighboring nodes in the vertex domain. One notable work is [56] where the graph convolution for node u at the pth layer is designed as: $$\begin{aligned} {\mathbf{x}}_{\mathcal {N}(u)}^p&=\mathbf{X}^p(u,:)+\sum _{v\in \mathcal {N}(u)} \mathbf{X}^p(v,:) \end{aligned}$$ $$\begin{aligned} \mathbf{X}^{p+1}(u,:) =\sigma \left({\mathbf{x}}_{\mathcal {N}(u)}^p \varvec{\Theta }_{|\mathcal {N}(u)|}^p\right), \end{aligned}$$ where \(\varvec{\Theta }_{|\mathcal {N}(u)|}^p\) is the weight matrix for nodes with the same degree as \(|\mathcal {N}(u)|\) at the p-=th layer. However, for arbitrarily large graphs, the number of unique values of node degree is often a very large number. Consequently, there will be many weight matrices to be learned at each layer, possibly leading to the overfitting problem. Atwood et al. propose a diffusion-based graph convolutional network (named as DCNN) which evokes the propagations and aggregations of node representations by graph diffusion processes [57]. A k-step diffusion is conducted by the kth power of transition matrix \({\mathbf{P}}^k\), where \({\mathbf{P}}={\mathbf{D}}^{-1}\mathbf{A}\). Then, the diffusion–convolution operation is formulated as: $$\begin{aligned} {\mathbf{Z}}(u,k,i)=\sigma \left( \varvec{\Theta }(k,i)\sum _{v=1}^{n} {\mathbf{P}}^k(u,v)\mathbf{X}(v,i)\right), \end{aligned}$$ where \({\mathbf{Z}}(u,k,i)\) is the ith output feature of node u aggregated based on \({\mathbf{P}}^k\) and the nonlinear activation function \(\sigma (\cdot )\) is chosen as the hyperbolic tangent function. Suppose that K hops diffusion is considered, and then, the K-th power of transition matrix requires an \(O(n^2K)\) computational complexity which is prohibited especially for large-scale graphs. Monti et al. propose a generic graph convolution network framework named MoNet [5] by designing a universe patch operator which integrates the signals within the node neighborhood. In particular, for a node i and its neighboring node \(j\in \mathcal {N}(i)\), they define a d-dimensional pseudo-coordinates \(\mathbf{u}(i,j)\) and feed it into P learnable kernel functions \(\left( w_1(\mathbf{u}),\ldots ,w_P(\mathbf{u})\right)\). Then, the patch operator is formulated as \(D_p(i)=\sum _{j\in \mathcal {N}(i)} w_p(\mathbf{u}(i,j)){\mathbf{x}}(j),~p=1,\ldots ,P\), where \({\mathbf{x}}(j)\) is the signal value at the node j. The graph convolution in the spatial domain is then based on the patch operator as: $$\begin{aligned} ({\mathbf{x}}*_{s} \mathbf{y})(i) =\sum _{l=1}^{P}\mathbf{g}(p)D_p(i){\mathbf{x}}. 
\end{aligned}$$ It is shown that by carefully selection of \(\mathbf{u}(i,j)\) and the kernel function \(w_p(\mathbf{u})\), many existing graph convolutional network models [37, 57] can be viewed as a specific case of MoNet. SplineCNN [58] follows the same framework [i.e., Eq. (15)], but uses a different convolution kernel based on B-splines. For graphs accompanied with edge attribute information, the weight parameters of filters are often conditioned on the specific edge attributes in the neighborhood of a node. To exploit edge attributes, an edge-conditioned convolution (ECC) operation [59] is designed by borrowing the idea of dynamic filter network [60]. For the edge between node v and node u at the p-th ECC layer, with the corresponding filter-generating network \(F^p:~\mathbb {R}^{s}\rightarrow \mathbb {R}^{d_{p+1}\times d_{p}}\) that generates edge-specific weights matrix \(\varvec{\Theta }_{v,u}^p\), the convolution operation is mathematically formalized as: $$\begin{aligned} \mathbf{X}^{p+1}(u,:)=\frac{1}{|\mathcal {N}(u)|}\sum _{v\in \mathcal {N}(u)} \varvec{\Theta }_{v,u}^p\mathbf{X}^p(v,:)+\mathbf{b}^p, \end{aligned}$$ where \(\mathbf{b}^p\) is a learnable bias and the filtering–generating network \(F^p\) is implemented by multi-layer perceptrons. In addition, Hamilton et al. propose an aggregation-based inductive representation learning model, named GraphSAGE [61]. The full batch version of the algorithm is straightforward: for a node u, the convolution layer in GraphSAGE (1) aggregates the representation vectors of all its immediate neighbors in the current layer via some learnable aggregator, (2) concatenates the representation vector of node u with its aggregated representation, and then (3) feeds the concatenated vector to a fully connected layer with some nonlinear activation function \(\sigma (\cdot )\), followed by a normalization step. Formally, the p-th convolutional layer in GraphSAGE contains: $$\begin{aligned}&{\mathbf{x}}_{\mathcal {N}(u)}^{p} \leftarrow \text {AGGREGATE}_{p}(\{\mathbf{X}^p(v,:),~\forall v\in \mathcal {N}(u)\}); \end{aligned}$$ $$\begin{aligned}&\mathbf{X}^{p+1}(u,:) \leftarrow \sigma \left( \text {CONCAT}(\mathbf{X}^p(u,:), {\mathbf{x}}_{\mathcal {N}(u)}^{p})\varvec{\Theta }^{p}\right). \end{aligned}$$ There are several choices of the aggregator functions, including the mean aggregator, LSTM aggregator, and the pooling aggregator. By using mean aggregators, Eq. (17) can be simplified to: $$\begin{aligned} \mathbf{X}^{p+1}(u,:)\leftarrow \sigma \left( \text {MEAN}(\{\mathbf{X}^p(u,:)\}\cup \{\mathbf{X}^p(v,:),~\forall v\in \mathcal {N}(u)\})\varvec{\Theta }^{p}\right), \end{aligned}$$ which approximately resembles the GCN model [37]. Besides, pooling aggregator is formulated as: $$\begin{aligned} \text {AGGREGATE}_{p}^{\text {pool}}=\max \left( \{\sigma (\mathbf{X}^p(v,:) \varvec{\Theta }^{p}+\mathbf{b}^p),~\forall v\in \mathcal {N}(u)\}\right). \end{aligned}$$ To allow the minibatch training, the authors also provide a variant by uniformly sampling a fixed size of the neighboring nodes for each node [61]. However, the performance in node representation learning is often degraded as the graph convolutional models become deeper. In practice, it has been shown that a two-layer graph convolution model often achieves the best performance in GCN [37] and GraphSAGE [61]. 
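The following is a minimal full-batch sketch of one GraphSAGE layer with the mean aggregator, following Eqs. (17)–(18): aggregate the neighbors' representations, concatenate with the node's own representation, apply a linear map with a nonlinearity, and normalize. Neighbor sampling, minibatching, and the LSTM/pooling aggregators are omitted; names and shapes are assumptions.

```python
import numpy as np

def graphsage_mean_layer(A, X, Theta):
    """One GraphSAGE layer with a mean aggregator (full batch, no sampling).
    Theta has shape (2 * d_in, d_out): it acts on [h_u, mean of neighbor h_v]."""
    n, d = X.shape
    H = np.zeros((n, Theta.shape[1]))
    for u in range(n):
        nbrs = np.nonzero(A[u])[0]
        h_nbr = X[nbrs].mean(axis=0) if len(nbrs) else np.zeros(d)
        h = np.concatenate([X[u], h_nbr]) @ Theta
        h = np.maximum(h, 0.0)                      # ReLU
        H[u] = h / (np.linalg.norm(h) + 1e-12)      # normalization step
    return H

# toy usage
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = np.random.randn(3, 5)
H1 = graphsage_mean_layer(A, X, np.random.randn(10, 8))
```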
According to [62], the convolution in GCN [37] is related to Laplacian smoothing [63] and more convolution layers result in less distinguishable representations even for nodes from different clusters. From a different perspective, Xu et al. analyze different expansion behaviors for two types of nodes, including the nodes in an expander-like core part and nodes in the tree part of the graphs, and show that the same number of propagation steps can lead to different effects [64]. For example, for nodes within the core part, the influence of their features spreads much faster than the nodes in the tree part and thereby this rapid average causes the node representations indistinguishable. To mitigate this issue and make the graph convolutional models deeper, by borrowing the idea of the residual network [65] in computer vision, Xu et al. propose a skip connection architecture named Jumping Knowledge Network [64]. The Jumping Knowledge Network can adaptively select the aggregations from the different convolution layers. In other words, the last layer of the model can selectively aggregate the intermediate representations for each node independently. The layer-wise aggregators include concatenation aggregator, max-pooling aggregator, and LSTM-attention aggregator. In addition, the Jumping Knowledge Network model admits the combination with the other existing graph neural network models, such as GCN [37], GraphSAGE [61], and GAT [24]. Related general graph neural networks Graph convolutional networks that use convolutional aggregations are a special type of the general graph neural networks. Other variants of graph neural networks based on different types of aggregations also exist, such as gated graph neural networks [26] and graph attention networks [24]. In this subsection, we briefly cover some general graph neural network models of which graph convolutional networks can be viewed as special variants. One of the earliest graph neural networks is [66] which defines the parametric local transition function f and local output function g. Denote \(\mathbf{X}^0(u,:)\) as the input attributes of node u and \(\mathbf{E}_u\) as the edge attributes of the edges incident to node u. Then, the local transition function and local output function are formulated as: $$\begin{aligned}&\mathbf{H}(u,:) =f\left( \mathbf{X}^0(u,:), \mathbf{E}_u, \mathbf{H}(u,:), \mathbf{X}^0(\mathcal {N}(u),:)\right) \end{aligned}$$ $$\begin{aligned}&\mathbf{X}(u,:) =g\left( \mathbf{X}^0(u,:), \mathbf{H}(u,:)\right) , \end{aligned}$$ where \(\mathbf{H}(u,:)\), \(\mathbf{X}(u,:)\) are the hidden state and output representation of node u. Eq. (19) defines one general form of aggregations in graph neural network. In [66], the function f is restricted to a contraction mapping to ensure convergence and suggested by the Banach's fixed point theorem [67]. In this way, a classic iterative scheme is applied to update the hidden states. However, it is inefficient and less effective to update the states in an iterative manner to obtain steady states. In contrast, SSE [68] aims to learn the steady states of node representations iteratively but in a stochastic way. 
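Before turning to the stochastic steady-state updates of SSE below, here is a minimal sketch of the Jumping Knowledge aggregation just described: several propagation layers are run, the intermediate representations are kept, and the final layer aggregates them per node by concatenation or element-wise max (the LSTM-attention aggregator is omitted). The simple mean-neighborhood `propagate` helper is only a stand-in for an actual GCN/GraphSAGE/GAT layer, and all names and shapes are illustrative.

```python
import numpy as np

def propagate(A, X):
    """One simple mean-neighborhood propagation step (stand-in for any GCN-style layer)."""
    A_tilde = A + np.eye(A.shape[0])
    return A_tilde @ X / A_tilde.sum(axis=1, keepdims=True)

def jumping_knowledge(A, X, num_layers=3, mode="concat"):
    """Collect per-layer node representations and aggregate them at the last layer."""
    reps, H = [X], X
    for _ in range(num_layers):
        H = propagate(A, H)
        reps.append(H)
    if mode == "concat":
        return np.concatenate(reps, axis=1)        # each node keeps all scales
    return np.max(np.stack(reps, axis=0), axis=0)  # element-wise max across layers

# toy usage
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
Z = jumping_knowledge(A, X, num_layers=2, mode="concat")   # shape (3, 12)
```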
Specifically, for a node u, SSE first samples a set of nodes \(\tilde{\mathcal {V}}\) from \(\mathcal {V}\) and updates the node representations for T iterations to be close to stability by: $$\begin{aligned} \mathbf{X}(u,:)\leftarrow (1-\alpha )\mathbf{X}(u,:)+\alpha \mathcal {T}_{\varvec{\Theta }} \left[ \{\mathbf{X}(v,:),~\forall v\in \mathcal {N}(u)\}\right], \end{aligned}$$ where node \(u\in \tilde{\mathcal {V}}\) and \(\mathcal {T}_{\varvec{\Theta }}\) is the aggregation function defined by: $$\begin{aligned} \mathcal {T}_{\varvec{\Theta }}\left[ \{\mathbf{X}(v,:),~\forall v\in \mathcal {N}(u)\}\right] =\sigma \left( \left[ \mathbf{X}^0(u,:), \sum _{v\in \mathcal {N}(u)}[\mathbf{X}(v,:),\mathbf{X}^0(v,:)]\right] \varvec{\Theta }_2\right) \varvec{\Theta }_1, \end{aligned}$$ where \(\mathbf{X}^0(u,:)\) denotes the input attributes of node u. Message-Passing Neural Networks (MPNNs) proposed in [69] generalize many variants of graph neural networks, such as graph convolutional networks (e.g., [37, 56, 61]) and gated graph neural networks [26]. MPNN can be viewed as a two-phase model, including message-passing phase and readout phase. In the message-passing phase, the model runs node aggregations for P steps and each step contains the following two functions: $$\begin{aligned} \mathbf{H}^{p+1}(u,:)&=\sum _{v\in \mathcal {N}(u)}M^p(\mathbf{X}^p(u,:),\mathbf{X}^p(v,:), \mathbf{e}_{u,v}) \end{aligned}$$ $$\begin{aligned} \mathbf{X}^{p+1}(u,:)&=U^p(\mathbf{X}^p(u,:),\mathbf{H}^{p+1}(u,:)), \end{aligned}$$ where \(M^p, U^p\) are the message function and the update function at the pth step, respectively, and \(\mathbf{e}_{u,v}\) denotes the attributes of edge (u, v). Then, the readout phase computes the feature vector for the whole graph by: $$\begin{aligned} \hat{\mathbf{y}}=R\left( \{\mathbf{X}^P(u,:)|u\in \mathcal {V}\}\right), \end{aligned}$$ where R denotes the readout function. In addition, Xu et al. theoretically analyze the expressive power of the existing neighborhood aggregation-based graph neural networks [70]. They analyze how powerful the existing graph neural networks are based on the close relationship between graph neural networks and the Weisfeiler–Lehman graph isomorphism test, and conclude that the existing neighborhood aggregation-based graph neural networks (e.g., [37, 61]) can be at most as powerful as the one-dimensional Weisfeiler–Lehman isomorphism test. To achieve the equal expressive power of Weisfeiler–Lehman test, Xu et al. propose a simple architecture named Graph Isomorphism Network [70]. Applications of graph convolutional networks Graph convolutional networks can be also categorized according to their application domains. In this section, we mainly introduce the applications of graph convolutional networks in computer vision, natural language processing, science, and other domains. Applications in computer vision Computer vision has been one of the hottest research areas in the past decades. Many existing deep learning architectures used in computer vision problems are built upon the classic convolution neural networks (CNNs). Despite the great successes of CNNs, they are difficult to encode the intrinsic graph structures in the specific learning tasks. In contrast, the graph convolutional networks have been applied to solve some computer vision problems and shown a comparable or even better performance. In this subsection, we further categorize these applications based on the type of data. 
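Before moving to the computer-vision applications, the skeleton below illustrates the two MPNN phases described above: P message-passing steps followed by a graph-level readout. The particular message, update, and readout functions used here (sum of neighbor states plus edge attributes, a ReLU update, and a sum readout) are illustrative stand-ins rather than the choices made in [69].

```python
import numpy as np

def mpnn_forward(A, X, E, P=2):
    """Two-phase MPNN skeleton: P message-passing steps, then a graph readout."""
    n, d = X.shape
    H = X.copy()
    for _ in range(P):
        M = np.zeros_like(H)
        for u in range(n):
            for v in np.nonzero(A[u])[0]:
                M[u] += H[v] + E[u, v]       # message from v to u (uses edge attribute)
        H = np.maximum(H + M, 0.0)           # update function
    return H.sum(axis=0)                     # readout: one feature vector per graph

# toy usage: 3-node graph with scalar edge attributes broadcast over features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)
E = np.random.rand(3, 3, 1) * A[..., None]   # edge attributes e_{u,v}
y_hat = mpnn_forward(A, X, E)
```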
Image classification is of a great importance in many real-world applications. By some carefully hand-crafted graph construction methods (e.g., kNN similarity graphs) or other supervised approaches, the unstructured images can be converted to the structured graph data and thereby are able to be applied to graph convolutional networks. Existing models for image classification include, but are not limited to [5, 32, 34, 71, 72]. Another application on images is visual question answering that explores the answers to the questions on images. Narasimhan et al. propose a graph convolutional network-based deep learning model to use the information from multiple facts of the images from knowledge bases to aid question answering, which relies less on retrieving the single correct fact of images [73]. In addition, as images often contain multiple objects, understanding the relationships (i.e., visual relationships) among the objects helps to characterize the interactions among them, which makes visual reasoning a hot topic in computer vision. For visual relationships detection, Cui et al. propose a graph convolutional network to leverage both the semantic graphs of words and spatial scene graph [74]. Besides, Yao et al. propose an architecture of graph convolutional networks and LSTM to explore the visual relationships for image captioning [75]. To generate scene graphs, despite some existing message-passing-based methods [76, 77], many of them may not handle the unreliable visual relationships. Yang et al. propose an attentional graph convolutional model that can place attention on the reliable edges while dampening the influence of unlikely edges [78]. In the opposite direction, Johnson et al. use a graph convolutional network model to process the input scene graph and generate the images by a cascaded refinement network [79] trained adversarially [80]. One of the high-impact applications of videos is the action recognition which can help video understanding. In [81], a spatial-temporal graph convolutional model is designed to eliminate the need of hand-crafted part assignment and can achieve a greater expressive power. Another skeleton-based method is [82], where a generalized graph construction process is proposed to capture the variation in the skeleton sequences and the generalized graph is then fed to a graph convolutional network for variation learning. Wang and Gupta [83] represents the input video as a space-time region graph which builds two types of connections (i.e., appearance similarity and spatial-temporal proximity), and then recognizes actions by applying graph convolutional networks. Zhang et al. propose a tensor convolutional network for action recognition [84]. Point clouds provide a flexible geometric representation for many applications in computer graphics and computer vision. Followed by the pioneering PointNet [85], the state-of-the-art deep neural networks consider the local features of point clouds [85, 86]. However, these works ignore the geometric relationships among points. EdgeConv [87], on the other hand, is proposed to capture the local geometric structure while maintaining the permutation invariance property and outperforms other existing approaches in the point cloud segmentation task. A regularized graph convolutional network model has been proposed for segmentation on point clouds in [88] in which the graph Laplacian is dynamically updated to capture the connectivity of the learned features. 
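As a rough sketch of the EdgeConv-style operation mentioned above, the code below builds a k-nearest-neighbor graph over a point cloud, forms edge features from each point and its neighbor offsets, and max-aggregates them, which is what gives the operation its locality awareness and permutation invariance. The linear map standing in for the learnable MLP, the value of k, and the output width are assumptions and do not reproduce the exact parameterization of [87].

```python
import numpy as np

def edgeconv(X, k=2):
    """EdgeConv-style layer on a point cloud X (n x d): kNN graph, edge features
    from (x_i, x_j - x_i), shared linear map as a stand-in for the MLP h_Theta,
    and a symmetric (max) aggregation over neighbors."""
    n, d = X.shape
    W = np.random.randn(2 * d, 8)                          # stand-in for h_Theta
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    out = np.zeros((n, 8))
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]
        feats = [np.concatenate([X[i], X[j] - X[i]]) @ W for j in nbrs]
        out[i] = np.max(np.stack(feats), axis=0)           # permutation-invariant
    return out

# toy usage: 10 random 3-D points
P = np.random.randn(10, 3)
H = edgeconv(P, k=3)
```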
FeaStNet [89] built upon graph convolutional networks dynamically determines the association between filter weights and graph neighborhood, showing a comparable performance in part labeling. Wang et al. propose a local spectral graph convolutional network for both point cloud classification and segmentation [90]. For point cloud classification, other graph convolution-based methods include [45, 59]. Valsesia et al. propose a localized generative model by using graph convolution to generate 3D point clouds [91]. One application on meshes which we consider in this paper is the shape correspondence, i.e., to find correspondences between collections of 3D shapes. Beyond the classic CNN-based methods (e.g., [92, 93]), several graph convolutional network-based approaches have been proposed, including [5, 89]. In addition, Litany et al. propose to combine graph convolutional networks with variational auto-encoder for the shape completion task [94]. Applications in natural language processing Text classification is one of the most classical problems in natural language processing. With the documents as nodes and the citation relationships among them as edges, the citation network can be constructed, in which node attributes are often modeled by the bag-of-words. In this scenario, the straightforward way to classify documents into different categories is by node classification. Many graph convolutional network models have been proposed, to name a few, including [5, 37, 42, 61, 95]. Another way is to view the documents at the graph-level (i.e., each document is modeled as a graph) and classify the texts by graph classification [33, 34]. Besides, TextGCN [96] models a whole corpus to a heterogeneous graph and learn word embedding and document embedding simultaneously, followed by a softmax classifier for text classification. Gao et al. use a graph pooling layer and the hybrid convolutions of graph convolution and classic convolution to incorporate node ordering information, achieving a better performance over the traditional CNN-based and GCN-based methods [97]. When there are lots of labels at different topical granularities, these single-granularity methods may achieve a suboptimal performance. In [98], a graph-of-words is constructed to capture long-distance semantics, and then, a recursively regularized graph convolution model is applied to leverage the hierarchy of labels. Information extraction is often the cornerstone of many NLP-related applications and graph convolutional networks have been broadly applied in it and its variant problems. For example, GraphIE [99] first uses a recurrent neural network to generate local context-aware hidden representations of words or sentences and then learns non-local dependencies between textual units, followed by a decoder for labeling at the word level. GraphIE can be applied to information extraction such as named entity extraction. Graph convolutional networks have been designed to the relation extraction between words [100, 101] and event extraction [102, 103]. In addition, Marcheggiani et al. develop a syntactic graph convolutional network model that can be used on top of syntactic dependence trees, which is suitable for various NLP applications such as semantic role labeling [104], and neural machine translation [105]. For semantic machine translation, graph convolutional networks can be used to inject a semantic bias into sentence encoders and achieve performance improvements [106]. 
Moreover, the dilated iterated graph convolutional network model is designed for dependence parsing [107]. Applications in science In particle physics, jets are referred to the collimated sprays of energetic hadrons and many tasks are related to jets, including the classification and regression problems associated with the progenitor particles giving rise to the jets. Recently, variants of the message-passing neural network [69] have been designed to classify jets into two classes: quantum chromodynamics-based jets and W-boson-based jets [108]. ParticleNet, built upon edge convolutions [87], is a customized neural network architecture that operates directly on particle clouds for jet tagging [109]. Besides, graph convolutional network model has been also applied for IceCube signal classification [110]. Another interesting application is to predict the physical dynamics, e.g., how a cube deforms as it collides with the ground. Mrowca et al. propose a hierarchical graph-based object representation that decomposes an object into particles and connects particles within the same group, or to the ancestors and descendants [111]. They then propose a hierarchical graph convolutional network to learn the physics predictions. Chemistry, biology, and materials science Learning on molecules has attracted lots of attention in chemistry, drug discovery, and materials science. For example, graph convolutional networks have been used for molecular fingerprints prediction [56, 112]. In drug discovery, DeepChemStable [113], an attention-based graph convolutional network mode, is used for chemical stability prediction of a compound. Besides, by modeling the protein–protein interactions, drug–protein target interactions into a multimodal graph, graph convolutions can be applied to predict polypharmacy side effects [114]. Another important application in chemistry is the molecular property prediction. Message-Passing Neural Networks (MPNNs) [69], a general graph neural network framework, can be used to predict the quantum properties of a molecular. PotentialNet [115] first entails graph convolutions over chemical bonds to learn the features of atoms, then entails both bond-based and spatial distance-based propagation and finally conducts graph gathering over the ligand atoms, followed by a fully connected layer for molecular property predictions. Protein interface prediction is a challenging problem with important applications in drug discovery. Fout et al. construct a graph where each residue in a protein is considered as a node and nodes are accompanied with features computed from amino acid sequence as well as structure [116]. To predict protein interface, graph convolution layers are used for different protein graphs, followed by one or more fully connected layers. In addition, [117] proposes a so-called crystal graph convolutional neural network to directly learn material properties from the connection of atoms in the crystal. Beyond the applications in classical problems of social science, such as community detection [42, 118], and link prediction [21, 119, 120], graph convolutional networks have been applied in many other problems. DeepInf [121] aims to predict social influences by learning users latent features. Vijayan et al. propose to use graph convolutional networks for retweet count forcasting [122]. Moreover, fake news can be also detected by graph convolutions [123]. 
Graph convolutional networks have been widely used for social recommendation which aims to leverage the user–item interactions and/or user–user interactions to boost the recommendation performance. Wu et al. propose a neural influence diffusion model that takes how users are influenced by their trusted friends into considerations for better social recommendations [124]. Ying et al. propose a very efficient graph convolutional network model PinSage[125] based on GraphSAGE [61] which exploits the interactions between pins and boards in Pinterest. Wang et al. propose a neural graph collaborative filtering framework that integrates the user–item interactions into the graph convolutional network and explicitly exploits the collaborative signals [126]. Challenges and future researches Deep graph convolutional networks Although the initial objective of graph convolutional network models is to leverage the deep architecture for better representation learning, most of the current models still suffer from their shallow structure. For example, GCN [37] in practice only uses two layers and using more graph convolution layers may even hurt the performance. This is also intuitive due to its simple propagation procedure. As deeper the architecture is, the representations of nodes may become smoother even for those nodes that are distinct and far from each other. This issue violates the purpose of using deep models. Although few works have been proposed to address this issue (e.g., skip connection based models), how to build a deep architecture that can better adaptively exploits the deeper structural patterns of graphs is still an open challenge. Graph convolutional networks for dynamic graphs Most of the existing graph convolutional networks explicitly assume the input graphs are static. However, in the real cases, networks are often changing dynamically. For example, social networks are essentially dynamic networks as users are joining/quiting the networks frequently and friendships among users are also changing dynamically. To this end, learning graph convolutional networks on static graphs may not provide an optimal performance. Thus, the efficient dynamic graph convolutional network models are important to be studied. More powerful graph convolutional networks Most of the existing spatial graph convolutional network models are based on neighborhood aggregations. These models have been proved theoretically to be at most as powerful as one-dimensional Weisfeiler–Lehman graph isomorphism test, and the graph isomorphism network has been proposed to reach the limit [70]. However, one natural question to be asked is: can we break the limit of 1-dimensional Weisfeiler–Lehman graph isomorphism test? A few works have studied the related questions such as [127,128,129]. However, further researches on this problem are still quite challenging. Multiple graph convolutional networks As already mentioned before, the major drawback of the spectral graph convolutional networks is its inability of adaptation from one graph to another graph if two graphs have different Fourier basis (i.e., eigenfunctions of the Laplacian matrix). The existing work [130] alternatively learns the filter parameters by generalizing the eigenfunctions of a single graph to the eigenfunctions of the Kronecker product graph of multiple input graphs. As a different track, inductive learning is possible for many spatial graph convolutional network models, such that one model learned on one or several graphs can be applied to other graphs. 
However, a drawback of these methods is that the interactions (e.g., anchor links, cross-network node similarities) or correlations (e.g., correlations among multiple views) across multiple graphs are not exploited. In fact, given multiple graphs, the representation learning of a given node should be able to benefit from the additional information provided across graphs or views. However, to the best of our knowledge, there is no existing model aimed at problems in this setting.
Conclusions
Graph convolutional network models, as one category of graph neural network models, have become a very hot topic in both machine learning and other related areas, and a substantial number of models have been proposed to solve different problems. In this survey, we conduct a comprehensive literature review on the emerging field of graph convolutional networks. Specifically, we introduce two intuitive taxonomies to group the existing works based on the types of graph filtering operations and also on the areas of applications. For each taxonomy, we highlight some detailed examples from a unique standpoint. We also discuss some open challenges and potential issues of the existing graph convolutional networks and provide some future directions.
Footnote 1: The symbol i here represents the imaginary unit, rather than an index as used in other parts of this paper.
Footnote 2: Distribution Statement "A" (Approved for Public Release, Distribution Unlimited).
Abbreviations: CNN: convolution neural network; ChebNet: Chebyshev polynomial based graph convolution (model proposed in [34]); GCN: graph convolutional network (model proposed in [37]); FastGCN: minibatch training for GCN (model proposed in [39]); CayleyNet: Cayley polynomial based graph convolution (model proposed in [42]); LanczosNet: multiscale graph convolutional network by Lanczos algorithm (model proposed in [43]); PATCHY-SAN: graph CNN (model proposed in [52]); LGCN: large-scale graph CNN (model proposed in [53]); DCNN: diffusion-based graph convolutional network (model proposed in [57]); MoNet: pseudo-coordinates based graph convolutional network (model proposed in [5]); SplineCNN: B-splines based convolution kernel for graphs and meshes (model proposed in [58]); ECC: edge-conditioned convolution (model proposed in [59]); GraphSAGE: mean/LSTM/pooling aggregation based graph convolutional network (model proposed in [61]); GAT: graph attention network (model proposed in [24]); MPNN: message passing neural networks (model proposed in [69]).
References
Backstrom L, Leskovec J. Supervised random walks: predicting and recommending links in social networks. In: WSDM. New York: ACM; 2011. p. 635–44. Akoglu L, Tong H, Koutra D. Graph based anomaly detection and description: a survey. Data Min Knowl Discov. 2015;29(3):626–88. Zhang S, Zhou D, Yildirim MY, Alcorn S, He J, Davulcu H, Tong H. Hidden: hierarchical dense subgraph detection with application to financial fraud detection. In: SDM. Philadelphia: SIAM; 2017. p. 570–8. Li Y, Yu R, Shahabi C, Liu Y. Diffusion convolutional recurrent neural network: data-driven traffic forecasting. 2018. Monti F, Boscaini D, Masci J, Rodola E, Svoboda J, Bronstein MM. Geometric deep learning on graphs and manifolds using mixture model cnns. In: CVPR, vol. 1. 2017. p. 3. Zhou D, Zhang S, Yildirim MY, Alcorn S, Tong H, Davulcu H, He J. A local algorithm for structure-preserving graph cut. In: KDD. New York: ACM; 2017. p. 655–64. Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang D-U. Complex networks: structure and dynamics. Phys Rep. 2006;424(4–5):175–308. Tenenbaum JB, De Silva V, Langford JC.
A global geometric framework for nonlinear dimensionality reduction. Science. 2000;290(5500):2319–23. Roweis ST, Saul LK. Nonlinear dimensionality reduction by locally linear embedding. Science. 2000;290(5500):2323–6. Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: NIPS. 2002. p. 585–91. Perozzi B, Al-Rfou R, Skiena S. Deepwalk: online learning of social representations. In: KDD. New York: ACM; 2014. p. 701–10. Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: KDD. New York: ACM; 2016. p. 855–64. Yan S, Xu D, Zhang B, Zhang H-J, Yang Q, Lin S. Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans Pattern Anal Mach Intell. 2007;29(1):40–51. Hamilton WL, Ying R, Leskovec J. Representation learning on graphs: methods and applications. 2017. arXiv preprint arXiv:1709.05584 Cui P, Wang X, Pei J, Zhu W. A survey on network embedding. TKDE. 2018. Goyal P, Ferrara E. Graph embedding techniques, applications, and performance: a survey. Knowl Based Syst. 2018;151:78–94. Cai H, Zheng VW, Chang K. A comprehensive survey of graph embedding: problems, techniques and applications. TKDE. 2018. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. p. 580–7. Gehring J, Auli M, Grangier D, Dauphin YN. A convolutional encoder model for neural machine translation. 2016. arXiv preprint arXiv:1611.02344 Shuman DI, Narang SK, Frossard P, Ortega A, Vandergheynst P. The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process Mag. 2013;30(3):83–98. Kipf TN, Welling M. Variational graph auto-encoders. 2016. arXiv preprint arXiv:1611.07308 Wang H, Wang J, Wang J, Zhao M, Zhang, W, Zhang F, Xie X, Guo M. Graphgan: graph representation learning with generative adversarial nets. In: Thirty-second AAAI conference on artificial intelligence. 2018. You J, Ying R, Ren X, Hamilton WL, Leskovec J. Graphrnn: a deep generative model for graphs. 2018. arXiv preprint arXiv:1802.08773 Velickovic P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y. Graph attention networks. 2017. arXiv preprint arXiv:1710.10903 Lee JB, Rossi R, Kong X. Graph classification using structural attention. In: KDD. New York: ACM; 2018. p. 1666–74. Li Y, Tarlow D, Brockschmidt M, Zemel R. Gated graph sequence neural networks. 2015. arXiv preprint arXiv:1511.05493 Tai KS, Socher R, Manning CD. Improved semantic representations from tree-structured long short-term memory networks. 2015. arXiv preprint arXiv:1503.00075 Bronstein MM, Bruna J, LeCun Y, Szlam A, Vandergheynst P. Geometric deep learning: going beyond euclidean data. IEEE Signal Process Mag. 2017;34(4):18–42. Zhou J, Cui G, Zhang Z, Yang C, Liu Z, Sun M. Graph neural networks: a review of methods and applications. 2018. arXiv preprint arXiv:1812.08434. Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS. A comprehensive survey on graph neural networks. 2019. arXiv preprint arXiv:1901.00596 Lee JB, Rossi RA, Kim S, Ahmed NK, Koh E. Attention models in graphs: a survey. 2018. arXiv preprint arXiv:1807.07984 Bruna J, Zaremba W, Szlam A, LeCun Y. Spectral networks and locally connected networks on graphs. 2013. arXiv preprint arXiv:1312.6203 Henaff M, Bruna J, LeCun Y. Deep convolutional networks on graph-structured data. 2015. 
arXiv preprint arXiv:1506.05163 Defferrard M, Bresson X, Vandergheynst P. Convolutional neural networks on graphs with fast localized spectral filtering. In: NIPS. 2016. p. 3844–52. Hammond DK, Vandergheynst P, Gribonval R. Wavelets on graphs via spectral graph theory. Appl Comput Harmon Anal. 2011;30(2):129–50. Dhillon IS, Guan Y, Kulis B. Weighted graph cuts without eigenvectors a multilevel approach. IEEE Trans Pattern Anal Mach Intell. 2007;29(11):1944–57. Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. 2016. arXiv preprint arXiv:1609.02907. Shervashidze N, Schweitzer P, Leeuwen EJ, Mehlhorn K, Borgwardt KM. Weisfeiler-lehman graph kernels. JMLR. 2011;12:2539–61. Chen J, Ma T, Xiao C. Fastgcn: fast learning with graph convolutional networks via importance sampling. 2018. arXiv preprint arXiv:1801.10247. Chen J, Zhu J, Song L. Stochastic training of graph convolutional networks with variance reduction. In: ICML. 2018. p. 941–9. Huang W, Zhang T, Rong Y, Huang J. Adaptive sampling towards fast graph representation learning. In: Advances in neural information processing systems. 2018. p. 4563–72. Levie R, Monti F, Bresson X, Bronstein MM. Cayleynets: graph convolutional neural networks with complex rational spectral filters. IEEE Trans Signal Process. 2017;67(1):97–109. Liao R, Zhao Z, Urtasun R, Zemel RS. Lanczosnet: multi-scale deep graph convolutional networks. 2019. arXiv preprint arXiv:1901.01484. Xu B, Shen H, Cao Q, Qiu Y, Cheng X. Graph wavelet neural network. In: International conference on learning representations. 2019. https://openreview.net/forum?id=H1ewdiR5tQ. Li R, Wang S, Zhu F, Huang J. Adaptive graph convolutional neural networks. In: Thirty-second AAAI conference on artificial intelligence. 2018. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. 2012. p. 1097–105. Wang J, Yang Y, Mao J, Huang Z, Huang C, Xu W. Cnn-rnn: a unified framework for multi-label image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 2285–94. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014. arXiv preprint arXiv:1409.1556. Girshick R. Fast r-cnn. In: Proceedings of the IEEE international conference on computer vision. 2015. p. 1440–8. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. p. 3431–40. Chen L.-C, Papandreou, G, Kokkinos, I, Murphy, K, Yuille, AL. Semantic image segmentation with deep convolutional nets and fully connected crfs. 2014. arXiv preprint arXiv:1412.7062. Niepert M, Ahmed M, Kutzkov K. Learning convolutional neural networks for graphs. In: International conference on machine learning. 2016. p. 2014–23. Gao H, Wang Z, Ji S. Large-scale learnable graph convolutional networks. In: KDD. New York: ACM; 2018. p. 1416–24. Chang J, Gu J, Wang L, Meng G, Xiang S, Pan C. Structure-aware convolutional neural networks. In: Advances in neural information processing systems. 2018. p. 11–20. Du J, Zhang S, Wu G, Moura J.M, Kar S. Topology adaptive graph convolutional networks. 2017. arXiv preprint arXiv:1710.10370 Duvenaud DK, Maclaurin D, Iparraguirre J, Bombarell R, Hirzel T, Aspuru-Guzik A, Adams RP. Convolutional networks on graphs for learning molecular fingerprints. 
In: Advances in neural information processing systems. 2015. p. 2224–32. Atwood J, Towsley D. Diffusion-convolutional neural networks. In: NIPS. 2016. Fey M, Lenssen JE, Weichert F, Müller H. Splinecnn: Ffast geometric deep learning with continuous b-spline kernels. In: CVPR. 2018. p. 869–77. Simonovsky M, Komodakis N. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 3693–702. Jia X, De Brabandere B, Tuytelaars T, Gool L.V. Dynamic filter networks. In: Advances in neural information processing systems. 2016. p. 667–75. Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs. In: NIPS. 2017. p. 1024–34. Li Q, Han Z, Wu X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In: Thirty-second AAAI conference on artificial intelligence. 2018. Taubin G. A signal processing approach to fair surface design. In: Proceedings of the 22nd annual conference on computer graphics and interactive techniques. New York: ACM; 1995. p. 351–8. Xu K, Li C, Tian, Y, Sonobe, T, Kawarabayashi K.-i, Jegelka S. Representation learning on graphs with jumping knowledge networks. 2018. arXiv preprint arXiv:1806.03536 He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. p. 770–8. Scarselli F, Gori M, Tsoi AC, Hagenbuchner M, Monfardini G. The graph neural network model. IEEE Trans Neural Netw. 2009;20(1):61–80. Khamsi MA, Kirk WA. An introduction to metric spaces and fixed point theory, vol. 53. New York: Wiley; 2011. Dai H, Kozareva Z, Dai B, Smola A, Song L. Learning steady-states of iterative algorithms over graphs. In: International conference on machine learning. 2018. p. 1114–22. Gilmer J, Schoenholz SS, Riley PF, Vinyals O, Dahl GE. Neural message passing for quantum chemistry. In: Proceedings of the 34th international conference on machine learning, vol. 70. 2017. p. 1263–72. http://JMLR.org Xu K, Hu W, Leskovec J, Jegelka S. How powerful are graph neural networks? 2018. arXiv preprint arXiv:1810.00826 Garcia V, Bruna J. Few-shot learning with graph neural networks. 2017. arXiv preprint arXiv:1711.04043 Kampffmeyer M, Chen Y, Liang X, Wang H, Zhang Y, Xing EP. Rethinking knowledge graph propagation for zero-shot learning. 2018. arXiv preprint arXiv:1805.11724 Narasimhan M, Lazebnik S, Schwing A. Out of the box: reasoning with graph convolution nets for factual visual question answering. In: Advances in neural information processing systems. 2018. p. 2659–70. Cui Z, Xu C, Zheng W, Yang J. Context-dependent diffusion network for visual relationship detection. In: 2018 ACM multimedia conference on multimedia conference. New York: ACM; 2018. p. 1475–82. Yao T, Pan Y, Li Y, Mei T. Exploring visual relationship for image captioning. In: Proceedings of the European conference on computer vision (ECCV). 2018. p. 684–99. Xu D, Zhu Y, Choy C.B, Fei-Fei L. Scene graph generation by iterative message passing. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 5410–9. Dai B, Zhang Y, Lin D. Detecting visual relationships with deep relational networks. In: Proceedings of the IEEE conference on computer vision and Pattern recognition. 2017. p. 3076–86 Yang J, Lu J, Lee S, Batra D, Parikh D. Graph r-cnn for scene graph generation. In: Proceedings of the European conference on computer vision (ECCV). 
2018. p. 670–85. Chen Q, Koltun V. Photographic image synthesis with cascaded refinement networks. In: Proceedings of the IEEE international conference on computer vision. 2017. p. 1511–20 Johnson J, Gupta A, Fei-Fei L. Image generation from scene graphs. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. p. 1219–28 Yan S, Xiong Y, Lin D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In: Thirty-second AAAI conference on artificial intelligence. 2018. Gao X, Hu W, Tang J, Pan P, Liu J, Guo Z. Generalized graph convolutional networks for skeleton-based action recognition. 2018. arXiv preprint arXiv:1811.12013. Wang X, Gupta A. Videos as space-time region graphs. In: Proceedings of the European conference on computer vision (ECCV). 2018. p. 399–417 Zhang T, Zheng W, Cui Z, Li Y. Tensor graph convolutional neural network. 2018; arXiv preprint arXiv:1803.10071 Qi C.R, Su H, Mo K, Guibas LJ. Pointnet: deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. p. 652–60 Shen Y, Feng C, Yang Y, Tian D. Neighbors do help: deeply exploiting local structures of point clouds. 2017. arXiv preprint arXiv:1712.06760 Wang Y, Sun Y, Liu Z, Sarma SE, Bronstein MM, Solomon JM. Dynamic graph cnn for learning on point clouds. 2018. arXiv preprint arXiv:1801.07829 Te G, Hu W, Guo Z, Zheng A. Rgcnn: regularized graph cnn for point cloud segmentation. 2018. arXiv preprint arXiv:1806.02952 Verma, N, Boyer, E, Verbeek, J. Feastnet: feature-steered graph convolutions for 3d shape analysis. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. p. 2598–606. Wang C, Samari B, Siddiqi K. Local spectral graph convolution for point set feature learning. In: Proceedings of the European conference on computer vision (ECCV). 2018. p. 52–66. Valsesia D, Fracastoro G, Magli E. Learning localized generative models for 3d point clouds via graph convolution. In: International conference on learning representations. 2019. Boscaini D, Masci J, Rodolà E, Bronstein M. Learning shape correspondence with anisotropic convolutional neural networks. In: Advances in neural information processing systems. 2016. p. 3189–97. Wang P-S, Liu Y, Guo Y-X, Sun C-Y, Tong X. O-cnn: octree-based convolutional neural networks for 3d shape analysis. ACM Trans Graph (TOG). 2017;36(4):72. Litany O, Bronstein A, Bronstein M, Makadia A. Deformable shape completion with graph convolutional autoencoders. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. p. 1886–95. Zhuang C, Ma Q. Dual graph convolutional networks for graph-based semi-supervised classification. In: Proceedings of the 2018 world wide web conference on world wide web. International World Wide Web Conferences Steering Committee; 2018. p. 499–508. Yao L, Mao C, Luo Y. Graph convolutional networks for text classification. 2018. arXiv preprint arXiv:1809.05679 Gao H, Chen Y, Ji S. Learning graph pooling and hybrid convolutional operations for text representations. 2019. arXiv preprint arXiv:1901.06965 Peng H, Li J, He Y, Liu Y, Bao M, Wang L, Song Y, Yang Q. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In: Proceedings of the 2018 world wide web conference on world wide web. International World Wide Web Conferences Steering Committee; 2018. p. 1063–72. Qian Y, Santus E, Jin Z, Guo J, Barzilay R. 
Graphie: a graph-based framework for information extraction. 2018. arXiv preprint arXiv:1810.13083 Zhang Y Qi P, Manning C.D. Graph convolution over pruned dependency trees improves relation extraction. 2018. arXiv preprint arXiv:1809.10185 Zhang N, Deng S, Sun Z, Wang G, Chen X, Zhang W, Chen H. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. 2019. arXiv preprint arXiv:1903.01306 Liu X, Luo Z, Huang H. Jointly multiple events extraction via attention-based graph information aggregation. 2018. arXiv preprint arXiv:1809.09078 Nguyen T.H, Grishman R. Graph convolutional networks with argument-aware pooling for event detection. In: Thirty-second AAAI conference on artificial intelligence. 2018. Marcheggiani D, Titov I. Encoding sentences with graph convolutional networks for semantic role labeling. 2017. arXiv preprint arXiv:1703.04826 Bastings J, Titov I, Aziz W, Marcheggiani D, Sima'an K. Graph convolutional encoders for syntax-aware neural machine translation. 2017. arXiv preprint arXiv:1704.04675 Marcheggiani D, Bastings J, Titov I. Exploiting semantics in neural machine translation with graph convolutional networks. 2018. arXiv preprint arXiv:1804.08313 Strubell E, McCallum A. Dependency parsing with dilated iterated graph cnns. 2017. arXiv preprint arXiv:1705.00403 Henrion I, Brehmer J, Bruna J, Cho K, Cranmer K, Louppe G, Rochette G. Neural message passing for jet physics. 2017. Qu H, Gouskos L. Particlenet: jet tagging via particle clouds. 2019. arXiv preprint arXiv:1902.08570 Choma N, Monti F, Gerhardt L, Palczewski T, Ronaghi Z, Prabhat P, Bhimji W, Bronstein M, Klein S, Bruna J. Graph neural networks for icecube signal classification. In: 2018 17th IEEE international conference on machine learning and applications (ICMLA). New York: IEEE; 2018. p. 386–91. Mrowca D, Zhuang C, Wang E, Haber N, Fei-Fei LF, Tenenbaum J, Yamins DL. Flexible neural representation for physics prediction. In: Advances in neural information processing systems. 2018. p. 8799–810 Kearnes S, McCloskey K, Berndl M, Pande V, Riley P. Molecular graph convolutions: moving beyond fingerprints. J Comput Aided Mol Des. 2016;30(8):595–608. Li X, Yan X, Gu Q, Zhou H, Wu D, Xu J. Deepchemstable: chemical stability prediction with an attention-based graph convolution network. J Chem Inf Model. 2019. Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics. 2018;34(13):457–66. Feinberg E.N, Sur D, Wu Z, Husic BE, Mai H, Li Y, Sun S, Yang J, Ramsundar B, Pande VS. Potentialnet for molecular property prediction. 2018. arXiv preprint arXiv:1803.04465 Fout A, Byrd J, Shariat B, Ben-Hur A. Protein interface prediction using graph convolutional networks. In: Advances in neural information processing systems. 2017. p. 6530–9. Xie T, Grossman JC. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys Rev Lett. 2018;120(14):145301. Bruna J, Li X. Community detection with graph neural networks. 2017. arXiv preprint arXiv:1705.08415 Harada S, Akita H, Tsubaki M, Baba Y, Takigawa I, Yamanishi Y, Kashima H. Dual convolutional neural network for graph of graphs link prediction. 2018. arXiv preprint arXiv:1810.02080 Chen J, Xu X, Wu Y, Zheng H. Gc-lstm: graph convolution embedded lstm for dynamic link prediction. 2018. arXiv preprint arXiv:1812.04206 Qiu J, Tang J, Ma H, Dong Y, Wang K, Tang J. Deepinf: social influence prediction with deep learning. 
In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. New York: ACM; 2018. p. 2110–9. Vijayan R, Mohler G. Forecasting retweet count during elections using graph convolution neural networks. In: 2018 IEEE 5th international conference on data science and advanced analytics (DSAA). New York: IEEE; 2018. p. 256–62. Monti F, Frasca F, Eynard D, Mannion D, Bronstein MM. Fake news detection on social media using geometric deep learning. 2019. arXiv preprint arXiv:1902.06673. Wu L, Sun P, Fu Y, Hong R, Wang X, Wang M. A neural influence diffusion model for social recommendation. 2019. arXiv preprint arXiv:1904.10322. Ying R, He R, Chen K, Eksombatchai P, Hamilton WL, Leskovec J. Graph convolutional neural networks for web-scale recommender systems. 2018. arXiv preprint arXiv:1806.01973. Wang X, He X, Wang M, Feng F, Chua T-S. Neural graph collaborative filtering. 2019. arXiv preprint arXiv:1905.08108. Maron H, Ben-Hamu H, Serviansky H, Lipman Y. Provably powerful graph networks. 2019. arXiv preprint arXiv:1905.11136. Keriven N, Peyré G. Universal invariant and equivariant graph neural networks. 2019. arXiv preprint arXiv:1905.04943. Chen Z, Villar S, Chen L, Bruna J. On the equivalence between graph isomorphism testing and function approximation with GNNs. 2019. arXiv preprint arXiv:1905.12560. Monti F, Bronstein M, Bresson X. Geometric matrix completion with recurrent multi-graph neural networks. In: NIPS. 2017. p. 3697–707.
This material is supported by the National Science Foundation under Grant Nos. IIS-1651203, IIS-1715385, IIS-1743040, and CNS-1629888, by DTRA under Grant Number HDTRA1-16-0017, by the United States Air Force and DARPA under Contract Number FA8750-17-C-0153, by the Army Research Office under Contract Number W911NF-16-1-0168, and by the U.S. Department of Homeland Security under Grant Award Number 2017-ST-061-QA0001. The content of the information in this document does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
CommonCrawl
\begin{document} \title{Regular Sequences of Quasi-Nonexpansive Operators and Their Applications} \author{Andrzej Cegielski\thanks{ Faculty of Mathematics, Computer Science and Econometrics, University of Zielona Gora, ul. Szafrana 4a, 65-516 Zielona Gora, Poland, e-mail: [email protected]}, Simeon Reich\thanks{ Department of Mathematics, The Technion - Israel Institute of Technology, 3200003 Haifa, Israel, e-mail: [email protected]} \ and Rafa\l\ Zalas \thanks{ Department of Mathematics, The Technion - Israel Institute of Technology, 3200003 Haifa, Israel, e-mail: [email protected]} } \maketitle \begin{abstract} In this paper we present a systematic study of regular sequences of quasi-nonexpansive operators in Hilbert space. We are interested, in particular, in weakly, boundedly and linearly regular sequences of operators. We show that the type of the regularity is preserved under relaxations, convex combinations and products of operators. Moreover, in this connection, we show that weak, bounded and linear regularity lead to weak, strong and linear convergence, respectively, of various iterative methods. This applies, in particular, to block iterative and string averaging projection methods, which, in principle, are based on the above-mentioned algebraic operations applied to projections. Finally, we show an application of regular sequences of operators to variational inequality problems. \noindent \textbf{Key words and phrases:} Convex feasibility problem, demi-closed operator, linear rate of convergence, metric projection, regular family of sets, subgradient projection, variational inequality. \noindent \textbf{2010 Mathematics Subject Classification:} 41A25, 47J25, 41A28, 65K15. \end{abstract} \section{Introduction} \label{sec:intro} Let $\mathcal{H}$ be a real Hilbert space equipped with inner product $\langle \cdot ,\cdot \rangle $ and induced norm $\left\Vert \cdot \right\Vert $. We denote by $\limfunc{Fix}U:= \{x\in \mathcal{H}\mid Ux=x\}$ the \textit{fixed point set} of an operator $U\colon\mathcal{H} \rightarrow \mathcal{H}$. We recall that for given closed and convex sets $ C_i\subseteq\mathcal{H}$, $i=1,\ldots, m$, the \textit{convex feasibility problem} (CFP) is to find a point $x$ in $C:=\bigcap_{i=1}^m C_i$. In this paper we assume that the CFP is consistent, that is, $C\neq\emptyset$. \textbf{Motivation.} Below we formulate a prototypical convergence theorem for the methods of cyclic and simultaneous projections: \begin{theorem}[\protect\cite{BB96}] \label{intro:th:basic} Let $U:=\prod_{i=1}^m P_{C_i}$ or $U:=\frac 1 m \sum_{i=1}^m P_{C_i}$ and for each $k=0,1,2\ldots,$ let $x^{k+1}:=Ux^k$, where $x^0\in\mathcal{H}$. Then: \begin{enumerate} \item[(i)] $x^k$ converges weakly to some point $x^*\in C$. \item[(ii)] If the family of sets $\{C_1,\ldots,C_m\}$ is boundedly regular, then the convergence is in norm. \item[(iii)] If the family of sets $\{C_1,\ldots,C_m\}$ is boundedly linearly regular, then the convergence is linear. \end{enumerate} \end{theorem} It is not difficult to see that both algorithmic operators $U$ in the above theorem, due to the demi-closedness of $U-\limfunc{Id}$ at 0 \cite[Theorem 1] {Opi67}, for each $\{x^{k}\}_{k=0}^{\infty }\subseteq \mathcal{H}$ and $ \{n_{k}\}_{k=0}^{\infty }\subseteq \{k\}_{k=0}^{\infty }$, satisfy \begin{equation} \left. \begin{array}{l} x^{n_{k}}\rightharpoonup y \\ Ux^{k}-x^{k}\rightarrow 0 \end{array} \right\} \quad \Longrightarrow \quad y\in \limfunc{Fix}U, \label{intro:WR} \end{equation} where $\limfunc{Fix}U=C$. 
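For orientation, we include a minimal illustrative instance of this setting (the data $a_{i}$, $\beta _{i}$ below are introduced only for this illustration and are not used in the sequel). Suppose that each $C_{i}$ is a half-space, say $C_{i}=\{x\in \mathcal{H}\mid \langle a_{i},x\rangle \leq \beta _{i}\}$ with $a_{i}\neq 0$ and $\beta _{i}\in \mathbb{R}$. Then the metric projection admits the explicit form
\begin{equation}
P_{C_{i}}x=x-\frac{\max \{\langle a_{i},x\rangle -\beta _{i},0\}}{\Vert a_{i}\Vert ^{2}}\,a_{i},
\end{equation}
and, since a finite family of half-spaces is linearly regular (see Theorem \ref{t-BRsets}(iii) below), case (iii) of Theorem \ref{intro:th:basic} already guarantees the linear convergence of both the cyclic and the simultaneous projection methods in this case.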
Moreover, note that in case (ii), by \cite[Theorems 4.10 and 4.11]{CZ14}, we have \begin{equation} \lim_{k\rightarrow \infty }\Vert Ux^{k}-x^{k}\Vert =0\quad \Longrightarrow \quad \lim_{k\rightarrow \infty }d(x^{k},\limfunc{Fix}U)=0, \label{intro:BR} \end{equation} which holds for any bounded sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq \mathcal{H}$. Finally, in case (iii), we have observed, as will be shown below, that for any bounded subset $S\subseteq \mathcal{H}$, there is $\delta >0$ such that for all $x\in S$, we have \begin{equation} d(x,\limfunc{Fix}U)\leq \delta \Vert Ux-x\Vert \text{.} \label{intro:LR} \end{equation} It turns out that, in principle, conditions \eqref{intro:WR}, \eqref{intro:BR} and \eqref{intro:LR} are intrinsic abstract properties of $U$ which, when combined with strong quasi-nonexpansivity, lead to weak, strong and linear convergence; see, for example, \cite{BNP15} and \cite{KRZ17}. In this paper we refer to them as \textit{weak, bounded} and \textit{linear regularity} of the given operator $U$, respectively; see Definition \ref{d-R}. Note that the iterative methods described in Theorem \ref{intro:th:basic} are static, that is, we iterate one fixed algorithmic operator $U$. Nevertheless, in many cases, the iterative methods applied to solving CFPs are dynamic in the sense that the algorithmic operators may change from iteration to iteration. More precisely, one considers the following general form of the iterative method: \begin{equation} x^{0}\in \mathcal{H};\qquad x^{k+1}:=U_{k}x^{k}, \label{intro:genIter} \end{equation} where for each $k=0,1,2,\ldots $, $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ is quasi-nonexpansive and satisfies $C\subseteq \limfunc{Fix} U_{k}$. Examples of \eqref{intro:genIter}, together with an extensive survey, can be found in \cite{Ceg12}; see also Example \ref{ex-BRk}. The study of dynamic iterative methods necessitates a systematic investigation of the abstract properties of sequences of regular operators. The main properties that we are interested in are related not only to convex combinations and products of regular operators, as in Theorem \ref{intro:th:basic}, but also to relaxation, that is, to operators of the form $\limfunc{Id}+\alpha (U-\limfunc{Id})$, where $\alpha \in (0,2)$. All three of these algebraic operations are, in principle, the building bricks for block-iterative \cite{AC89, Com96, Com97, BB96}, dynamic string averaging \cite{AR08, BRZ18, CZ13} and even more sophisticated algorithms, such as modular string averaging \cite{RZ16}. \textbf{Contribution.} The main contribution of this paper consists in extending the notion of weakly, boundedly and linearly regular operators described in \eqref{intro:WR}, \eqref{intro:BR} and \eqref{intro:LR} by replacing one fixed operator $U$ with a sequence of operators $\{U_{k}\}$. Within the framework of this extension, we provide a systematic study of sequences of regular operators, where we establish their basic properties and give some examples. The main result in this direction is that the convex combination and product operations, when applied to regular sequences of operators, preserve the initial regularity under certain conditions; see Theorems \ref{t-Rk1} and \ref{t-Rk2}. 
Although the preservation of weak \cite[Theorems 4.1 and 4.2]{Ceg15a} and bounded regularity \cite[Theorems 4.10 and 4.11]{CZ14} was known for one fixed operator, the preservation of linear regularity, even in this simple case, seems to be new; see Corollaries \ref{c-Rk1b} and \ref{c-Rk2b}. Next, we extend Theorem \ref{intro:th:basic} by showing that weak, bounded and linear regularity, when combined with appropriate regularity of sets and strong quasi-nonexpansivity, lead to weak, strong and linear convergence of the method \eqref{intro:genIter}; see Theorems \ref{th:main} and \ref{th:main2}. Moreover, following recent work in the field of variational inequalities \cite{Ceg15, CZ13, CZ14, GRZ15, GRZ17}, we provide an application of regular sequences of operators in this direction as well; see Theorem \ref{t-uku}. \textbf{Historical overview.} The regularity properties described in \eqref{intro:WR}, \eqref{intro:BR} and \eqref{intro:LR} have reappeared in the literature under various names, as we now recall. Clearly, a weakly regular operator $U$ is an operator for which $U-\limfunc{Id}$ is demi-closed at 0. This type of demi-closedness condition goes back to the papers by Browder and Petryshyn \cite{BP66} and by Opial \cite{Opi67}. The term \textit{weakly regular operator} was introduced in \cite[Def. 12]{KRZ17}. The concept of weak regularity has recently been extended to \textit{fixed point closed mappings} in \cite[Lemma 2.1]{BCW14}, where the weak convergence was replaced by the strong one. Weakly regular sequences of operators appeared already in \cite[Sec. 2]{AK14}, where they were called sequences satisfying \textit{condition (Z)} and applied to a viscosity approximation process for solving variational inequalities. Weakly regular sequences of operators were also studied in \cite{Ceg15}, where they were introduced through sequences satisfying a \textit{demi-closedness principle}, again, with applications to variational inequalities. Some properties of weakly regular sequences of operators can be found in \cite{RZ16}. A prototypical version of a regular operator can be found in \cite[Theorem 1.2]{PW73} by Petryshyn and Williamson, where it was assumed, in addition, that the operator was continuous. As far as we know, the definition of boundedly regular operators as well as their properties were first proposed by Cegielski and Zalas in \cite[Definition 16]{CZ13} under the name \textit{approximately shrinking}, because of their relation to the \textit{quasi-shrinking} operators defined in \cite[Section 3]{YO04}. The term ``boundedly regular operator'' was proposed in \cite[Definition 7.1]{BNP15}. Because of the relationship of boundedly/linearly regular operators to boundedly/linearly regular families of sets (see Remark \ref{r-BRsets}), in this paper we have replaced the term ``approximately shrinking'' by ``boundedly regular''. Many properties of these operators under the name ``approximately shrinking'' were presented in \cite{CZ14} with some extensions in \cite{Zal14}, \cite{RZ16} and \cite{Ceg16}, and with more applications in \cite{Ceg15} and \cite{CM16}. It is worth mentioning that regular operators were applied even in Hadamard spaces to solving common fixed point problems \cite{RS17}. The phrase \textit{boundedly linearly regular} in connection with operators was proposed by Bauschke, Noll and Phan, who applied such operators to establish a linear rate of convergence for some block iterative fixed point algorithms \cite[Theorem 6.1]{BNP15}. 
To the best of our knowledge, the concept of this type of operator goes back to Outlaw \cite[Theorem 2]{Out69}, and Petryshyn and Williamson \cite[Corollary 2.2]{PW73}. A closely related condition, called a \textit{linearly focusing} algorithm, was studied by Bauschke and Borwein \cite[Definition 4.8]{BB96}. The concept of a focusing algorithm goes back to Fl\aa m and Zowe \cite[Section 2]{FZ90}, and can also be found in \cite[Definition 1.2]{Com97} by Combettes. Linearly regular operators appeared in \cite[Definition 3.3]{CZ14} by Cegielski and Zalas as \textit{linearly shrinking} ones. We would like to mention that in the literature one can find concepts similar to our concepts of regularity of operators; see, for example, \textit{H\"{o}lder regular operators} in \cite[Definition 2.4]{BLT17} or \textit{modulus of regularity} in \cite[Definition 3.1]{KLN17}. \textbf{Organization of the paper.} In Section \ref{s-prel} we introduce the reader to our notation and to basic facts regarding quasi-nonexpansive operators, Fej\'{e}r monotone sequences and regular families of sets. In Section \ref{s-RO} we formulate the definition of regular operators and give several examples. In Section \ref{s-RS} we extend this definition to sequences of operators and show their basic properties. The main properties related to sequences, but not limited to them, are presented in Section \ref{s-4}. Applications to convex feasibility problems and variational inequalities are shown in Section \ref{s-app}. \section{Preliminaries\label{s-prel}} \textbf{Notation.} Sequences of elements of $\mathcal{H}$ will be denoted by $x^{k},y^{k},z^{k}$, etc. Sequences of real parameters will usually be denoted by $\alpha _{k},\lambda _{k},\omega _{k}$ or by $\rho _{i}^{k},\omega _{i}^{k}$, etc. Sequences of operators will be denoted by $\{T_{k}\}_{k=0}^{\infty },\{U_{k}\}_{k=0}^{\infty }$ or by $\{U_{i}^{k}\}_{k=0}^{\infty }$ etc. In order to distinguish $\rho _{i}^{k}$ and $U_{i}^{k}$ from the $k$-th power of $\rho _{i}$ and $U_{i}$, the latter will be denoted by $(\rho _{i})^{k}$ and $(U_{i})^{k}$, respectively. We denote the \textit{identity} operator by $\limfunc{Id}$. For a family of operators $U_{i}:\mathcal{H}\rightarrow \mathcal{H}$, $i\in I:=\{1,2,...,m\}$, and an ordered set $K:=(i_{1},i_{2},...,i_{s})$, we denote $\mathop{\textstyle \prod }_{i\in K}U_{i}:=U_{i_{s}}U_{i_{s-1}}...U_{i_{1}}$. For an operator $T$ and for $\lambda \geq 0$ we define $T_{\lambda }:=\limfunc{Id}+\lambda (T-\limfunc{Id})$ and call it a $\lambda $-\textit{relaxation} of $T$, while $\lambda $ is called the \textit{relaxation parameter}. For $\alpha \in \mathbb{R} $, denote $\alpha _{+}:=\max \{0,\alpha \}$. Similarly, for a function $f: \mathcal{H}\rightarrow \mathbb{R} $, denote $f_{+}:=\max \{0,f\}$, that is, $f_{+}(x)=[f(x)]_{+}$, $x\in \mathcal{H}$. For a fixed $x\in \mathcal{H}$, denote $\limfunc{Argmin}_{i\in I}f_{i}(x)=\{j\in I \mid f_{j}(x)\leq f_{i}(x)$ for all $i\in I\}$. $\square$ Let $C\subseteq \mathcal{H}$ be nonempty, closed and convex. It is well known that for any $x\in \mathcal{H}$, there is a unique point $y\in C$ such that $\Vert x-y\Vert \leq \Vert x-z\Vert $ for all $z\in C$. This point is called the \textit{metric projection} of $x$ onto $C$ and is denoted by $P_{C}x$. 
The operator $P_{C}:\mathcal{H}\rightarrow \mathcal{H}$ is nonexpansive and $\limfunc{Fix}P_{C}=C$. Moreover, $P_{C}x$ is characterized by: $y\in C$ and $\langle z-y,x-y\rangle \leq 0$ for all $z\in C$. Let $f:\mathcal{H}\rightarrow \mathbb{R} $ be a convex continuous function. Then for any $x\in \mathcal{H}$, there exists a point $g_{f}(x)\in \mathcal{H}$ satisfying $\langle g_{f}(x),y-x\rangle \leq f(y)-f(x)$ for all $y\in \mathcal{H}$. This point is called a \textit{subgradient} of $f$ at $x$. Suppose that $S(f,0):=\{x:f(x)\leq 0\}\neq \emptyset $. For each $x\in \mathcal{H}$, we fix a subgradient $g_{f}(x)\in \mathcal{H}$ and define an operator $P_{f}: \mathcal{H}\rightarrow \mathcal{H}$ by \begin{equation} P_{f}(x):=\left\{ \begin{array}{ll} x-\frac{f(x)}{\Vert g_{f}(x)\Vert ^{2}}g_{f}(x)\text{,} & \text{if }f(x)>0\text{,} \\ x\text{,} & \text{otherwise.} \end{array} \right. \end{equation} In order to simplify the notation we also write $P_{f}(x)=x-\frac{f_{+}(x)}{\Vert g_{f}(x)\Vert ^{2}}g_{f}(x)$ for short. The operator $P_{f}$ is called a \textit{subgradient projection}. Clearly, $\limfunc{Fix}P_{f}=S(f,0)$. Now we recall an inequality related to convex functions in $\mathbb{R}^{n}$. \begin{lemma}[{\protect\cite[Lemma 3.3]{Fuk84}}] \label{l-Fuk} Let $f\colon \mathbb{R}^{n}\rightarrow \mathbb{R}$ be convex and assume that the Slater condition is satisfied, that is, $f(z)<0$ for some $z\in \mathbb{R}^{n}$. Then for each compact subset $K$ of $\mathbb{R}^{n}$, there is $\delta >0$ such that the inequality \begin{equation} \delta d(x,S(f,0))\leq f_{+}(x) \label{e-Fuk} \end{equation} holds for every $x\in K$. \end{lemma} \subsection{Strongly quasi-nonexpansive operators} In this subsection we recall the notion of a strongly quasi-nonexpansive operator as well as several properties of these operators. \begin{definition} \rm\ We say that $T$ is $\rho $-\textit{strongly quasi-nonexpansive} ($\rho $-SQNE), where $\rho \geq 0$, if $\limfunc{Fix}T\neq \emptyset $ and \begin{equation} \left\Vert Tu-z\right\Vert ^{2}\leq \left\Vert u-z\right\Vert ^{2}-\rho \left\Vert Tu-u\right\Vert ^{2} \label{e-SQNE} \end{equation} for all $u\in \mathcal{H}$ and all $z\in \limfunc{Fix}T$. If $\rho =0$ in (\ref{e-SQNE}), then $T$ is called \textit{quasi-nonexpansive} (QNE). If $\rho >0$ in (\ref{e-SQNE}), then we say that $T$ is \textit{strongly quasi-nonexpansive} (SQNE). \end{definition} Clearly, a nonexpansive operator having a fixed point is QNE. We say that $T$ is a \textit{cutter} if $\limfunc{Fix}T\neq \emptyset $ and $\langle x-Tx,z-Tx\rangle \leq 0$ for all $x\in \mathcal{H}$ and for all $z\in \limfunc{Fix}T$. Now we recall well-known facts which we employ in the sequel. \begin{fact} \label{f-3}If $T$ is QNE, then $\limfunc{Fix}T$ is closed and convex. \end{fact} \begin{fact} \label{f-0}If $T$ is a cutter, then $\left\Vert Tx-x\right\Vert \leq \left\Vert P_{\limfunc{Fix}T}x-x\right\Vert $ for all $x\in \mathcal{H}$. 
\end{fact} \begin{fact} \label{f-2}The following conditions are equivalent: \begin{enumerate} \item[$\mathrm{(i)}$] $T$ is a cutter; \item[$\mathrm{(ii)}$] $\langle Tx-x,z-x\rangle \geq \left\Vert Tx-x\right\Vert ^{2}$ for all $x\in \mathcal{H}$ and for all $z\in \limfunc{Fix}T$; \item[$\mathrm{(iii)}$] $T$ is $1$-SQNE; \item[$\mathrm{(iv)}$] $T_{\lambda }$ is $(2-\lambda )/\lambda $-SQNE, where $\lambda \in (0,2]$. \end{enumerate} \end{fact} For proofs of Facts \ref{f-3}--\ref{f-2}, see, for example, \cite[Section 2.1.3]{Ceg12}. \begin{corollary} \label{c-1}The following conditions are equivalent: \begin{enumerate} \item[$\mathrm{(i)}$] $T$ is QNE; \item[$\mathrm{(ii)}$] $T_{\lambda }$ is $(1-\lambda )/\lambda $-SQNE, where $\lambda \in (0,1]$; \item[$\mathrm{(iii)}$] $T_{1/2}$ is a cutter. \end{enumerate} \end{corollary} The most important examples of cutter operators are the metric projection $P_{C}$ onto a nonempty, closed and convex subset $C\subseteq \mathcal{H}$ (see, e.g., \cite[Sections 1.2 and 2.2]{Ceg12}) and a subgradient projection $P_{f}$ related to a continuous convex function $f:\mathcal{H}\rightarrow \mathbb{R} $ with $S(f,0):=\{x\in \mathcal{H}:f(x)\leq 0\}\neq \emptyset $ (see, for instance, \cite[Section 4.2]{Ceg12}). The following two facts play an important role in the sequel. \begin{fact} \label{f-5} Let $U_{i}:\mathcal{H}\rightarrow \mathcal{H}$ be $\rho _{i}$-SQNE, $\rho _{i}\geq 0$, $i\in I:=\{1,2,...,m\}$, with $\bigcap_{i\in I} \func{Fix}U_{i}\neq \emptyset $, $U:=\sum_{i\in I}\omega _{i}U_{i}$, where $\omega _{i}\geq 0$, $i\in I$, and $\sum_{i\in I}\omega _{i}=1$. \begin{enumerate} \item[$\mathrm{(i)}$] If $\omega _{i},\rho _{i}>0$ for all $i\in I$, then $\limfunc{Fix}U=\bigcap_{i=1}^{m}\limfunc{Fix}U_{i}$ and $U$ is $\rho $-SQNE with $\rho =\min_{i\in I}\rho _{i}$; \item[$\mathrm{(ii)}$] For any $x\in \mathcal{H}$ and $z\in \bigcap_{i\in I} \func{Fix}U_{i}$ we have \begin{equation} \Vert Ux-z\Vert ^{2}\leq \Vert x-z\Vert ^{2}-\sum_{i=1}^{m}\omega _{i}\rho _{i}\Vert U_{i}x-x\Vert ^{2}\text{.} \label{e-SQNE1} \end{equation} \item[$\mathrm{(iii)}$] For any $z\in \bigcap_{i\in I}\limfunc{Fix}U_i$, $x\in\mathcal{H}$ and positive $R\geq \Vert x-z\Vert$, we have \begin{equation} \frac{1}{2R}\sum_{i=1}^{m}\omega _{i}\rho _{i}\Vert U_{i}x-x\Vert ^{2}\leq \Vert Ux-x\Vert \text{.} \label{e-SQNE2} \end{equation} \end{enumerate} \end{fact} \begin{proof} For (i), see \cite[Theorems 2.1.26(i) and 2.1.50]{Ceg12}. Parts (ii) and (iii) were proved in \cite[Proposition 4.5]{CZ14} in the case where $\rho >0$, but it follows from the proof that the statement is also true if $\rho \geq 0$. \end{proof} \begin{fact} \label{f-6} Let $U_{i}:\mathcal{H}\rightarrow \mathcal{H}$ be $\rho _{i}$-SQNE, $\rho _{i}\geq 0$, $i\in I:=\{1,2,...,m\}$, with $\bigcap_{i\in I} \func{Fix}U_{i}\neq \emptyset $, and let $U:=U_{m}U_{m-1}...U_{1}$. 
\begin{enumerate} \item[$\mathrm{(i)}$] If $\rho =\min_{i\in I}\rho _{i}>0$, then $\limfunc{Fix}U=\bigcap_{i=1}^{m}\limfunc{Fix}U_{i}$ and $U:=U_{m}U_{m-1}...U_{1}$ is $\rho /m$-SQNE; \item[$\mathrm{(ii)}$] For any $x\in \mathcal{H}$ and $z\in \bigcap_{i\in I} \func{Fix}U_{i}$ we have \begin{equation} \Vert Ux-z\Vert ^{2}\leq \Vert x-z\Vert ^{2}-\sum_{i=1}^{m}\rho _{i}\Vert Q_{i}x-Q_{i-1}x\Vert ^{2}\text{,} \label{e-SQNE3} \end{equation} where $Q_{i}:=U_{i}U_{i-1}\ldots U_{1}$, $i\in I$, $Q_{0}:=\func{Id}$. \item[$\mathrm{(iii)}$] For any $z\in \bigcap_{i\in I}\limfunc{Fix}U_i$, $x\in\mathcal{H}$ and positive $R\geq \Vert x-z\Vert$, we have \begin{equation} \frac{1}{2R}\sum_{i=1}^{m}\rho _{i}\Vert Q_{i}x-Q_{i-1}x\Vert ^{2}\leq \Vert Ux-x\Vert \text{.} \label{e-SQNE4} \end{equation} \end{enumerate} \end{fact} \begin{proof} For (i), see \cite[Theorems 2.1.26(ii) and 2.1.48(ii)]{Ceg12}. Parts (ii) and (iii) were proved in \cite[Proposition 4.6]{CZ14} in the case where $\rho >0$, but it follows from the proof that the statement is also true if $\rho \geq 0$. \end{proof} \subsection{Fej\'{e}r monotone sequences} \begin{definition} We say that a sequence $\{x^{k}\}_{k=0}^{\infty }$ is Fej\'{e}r monotone with respect to a subset $C\subseteq \mathcal{H}$ if $\Vert x^{k+1}-z\Vert \leq \Vert x^{k}-z\Vert $ for all $z\in C$ and $k=0,1,2,\ldots$. \end{definition} \begin{lemma} \label{f-6a}Let $T_{k}$ be $\rho _{k}$-SQNE, $k=0,1,2,\ldots$, with $\rho :=\inf_{k}\rho _{k}\geq 0$ and $F:=\bigcap_{k=0}^{\infty }\limfunc{Fix}T_{k}\neq \emptyset $, and let a sequence $\{x^{k}\}_{k=0}^{\infty }$ be generated by $x^{k+1}=T_{k}x^{k}$, where $x^{0}\in \mathcal{H}$ is arbitrary. \begin{enumerate} \item[$\mathrm{(i)}$] The sequence $\{x^{k}\}_{k=0}^{\infty }$ is Fej\'{e}r monotone with respect to $F$. \item[$\mathrm{(ii)}$] If $\rho >0$, then $\lim_{k}\Vert T_{k}x^{k}-x^{k}\Vert =0$. \end{enumerate} \end{lemma} \begin{proof} Part (i) follows directly from the definition of a QNE operator, while part (ii) follows from (i) and from the definition of an SQNE operator. \end{proof} \begin{fact} \label{f-7}If a sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq \mathcal{H}$ is Fej\'{e}r monotone with respect to a nonempty subset $C\subseteq \mathcal{H}$, then \begin{enumerate} \item[$\mathrm{(i)}$] $x^{k}$ converges weakly to a point $z\in C$ if and only if all its weak cluster points belong to $C$; \item[$\mathrm{(ii)}$] $x^{k}$ converges strongly to a point $z\in C$ if and only if $\lim_{k}d(x^{k},C)=0$; \item[$\mathrm{(iii)}$] if there is a constant $q\in (0,1)$ such that $d(x^{k+1},C)\leq qd(x^{k},C)$ holds for all $k=0,1,2,\ldots$, then $\{x^{k}\}_{k=0}^{\infty }$ converges linearly to a point $z\in C$ and \begin{equation} \Vert x^{k}-z\Vert \leq 2d(x^{0},C)q^{k}\text{.} \end{equation} \end{enumerate} \end{fact} \begin{proof} See \cite[Theorem 2.16(ii), (v) and (vi)]{BB96}. \end{proof} \begin{lemma} \label{th:Fejer2} Let $\{x^{k}\}_{k=0}^{\infty }$ be Fej\'{e}r monotone with respect to $C$ and let $s\in \mathbb{N}$. \begin{enumerate} \item[$\mathrm{(i)}$] If $x^{ks}\rightharpoonup z$ for some $z\in C$ and $\lim_{k}\Vert x^{k+1}-x^{k}\Vert =0$, then $x^{k}\rightharpoonup z$. 
\item[$\mathrm{(ii)}$] If $x^{ks}\rightarrow z$ for some $z\in C$, then $x^{k}\rightarrow z$. \item[$\mathrm{(iii)}$] If there are $c>0$, $q\in (0,1)$ and $z\in C$ such that $\Vert x^{ks}-z\Vert \leq cq^{k}$ for each $k=0,1,2,\ldots$, then \begin{equation} \Vert x^{k}-z\Vert \leq \frac{c}{(\sqrt[\scriptstyle{s}]{q})^{s-1}}\left( \sqrt[\scriptstyle{s}]{q}\right) ^{k}. \end{equation} \end{enumerate} \end{lemma} \begin{proof} Suppose that the assumptions of (i) are satisfied. Let $n=n_{k}=\lfloor \frac{k}{s}\rfloor :=\max \{m\mid ms\leq k\}$ and $p=k-ns$. Clearly, $n\rightarrow \infty $ if and only if $k\rightarrow \infty $. By the assumption, we have \begin{equation} 0\leq \lim_{k}\Vert x^{k}-x^{ns}\Vert =\lim_{n}\Vert x^{k}-x^{ns}\Vert \leq \lim_{n}\sum_{l=ns}^{ns+p-1}\Vert x^{l+1}-x^{l}\Vert =0\text{.} \end{equation} This yields $\lim_{k}\Vert x^{k}-x^{ns}\Vert =0$ and $x^{k}=x^{ns}+(x^{k}-x^{ns})\rightharpoonup z$ as $k\rightarrow \infty $. Note that (i) is true even without the Fej\'{e}r monotonicity of $\{x^{k}\}_{k=0}^{\infty }$. Part (ii) follows from Fact \ref{f-7}(ii). For a proof of (iii), see \cite[Prop. 1.6]{BB96}. \end{proof} \subsection{Regular families of sets} Below we recall the notion of regularity of a finite family of sets as well as several properties of regular families. \begin{definition} \rm\ (\cite[Def. 5.1]{BB96}, \cite[Def. 5.7]{BNP15}) Let $S\subseteq \mathcal{H}$ be nonempty and $\mathcal{C}$ be a family of closed convex subsets $C_{i}\subseteq \mathcal{H}$, $i\in I:=\{1,2,...,m\}$, with $C:=\bigcap_{i\in I}C_{i}\neq \emptyset $. We say that $\mathcal{C}$ is: \begin{enumerate} \item[(a)] \textit{regular }over $S$ if for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, we have \begin{equation} \lim_{k}\max_{i\in I}d(x^{k},C_{i})=0\Longrightarrow \lim_{k}d(x^{k},C)=0\text{;} \end{equation} \item[(b)] \textit{linearly regular }over $S$ if there is a constant $\kappa >0$ such that for every $x\in S$, we have \begin{equation} d(x,C)\leq \kappa \max_{i\in I}d(x,C_{i})\text{.} \end{equation} We call the constant $\kappa $ a \textit{modulus} of the linear regularity of $\mathcal{C}$ over $S$. \end{enumerate} If any of the above regularity conditions holds for $S=\mathcal{H}$, then we omit the phrase \textquotedblleft over $S$\textquotedblright . \noindent If any of the above regularity conditions holds for every bounded subset $S\subseteq \mathcal{H}$, then we precede the corresponding term with the adverb \textit{boundedly} while omitting the phrase \textquotedblleft over $S$\textquotedblright . \end{definition} The theorem below gives a small collection of sufficient conditions for a family $\mathcal{C}$ to be (boundedly, linearly) regular. \begin{theorem}[\protect\cite{BB96, BNP15}] \label{t-BRsets} Let $C_{i}\subseteq \mathcal{H}$, $i\in I:=\{1,\ldots ,m\}$, be closed convex with $C:=\bigcap_{i\in I}C_{i}\neq \emptyset $ and let $\mathcal{C}:=\{C_{i}\mid i\in I\}$. 
\begin{enumerate} \item[$\mathrm{(i)}$] If $\dim \mathcal{H}<\infty $, then $\mathcal{C}$ is boundedly regular; \item[$\mathrm{(ii)}$] If $C_{j}\cap \limfunc{int}(\bigcap_{i\in I\setminus \{j\}}C_{i})\neq \emptyset $, then $\mathcal{C}$ is boundedly linearly regular; \item[$\mathrm{(iii)}$] If all $C_{i}$, $i\in I$, are half-spaces, then $\mathcal{C}$ is linearly regular; \item[$\mathrm{(iv)}$] If $\dim \mathcal{H}<\infty $, $C_{j}$ is a half-space, $j\in J\subseteq I$, and $\bigcap_{j\in J}C_{j}\cap \bigcap_{i\in I\setminus J}\limfunc{ri}C_{i}\neq \emptyset $, then $\mathcal{C}$ is boundedly linearly regular. \end{enumerate} \end{theorem} More sufficient conditions can be found, for example, in \cite[Fact 5.8]{BNP15}. Note that the bounded linear regularity of a family $\{C_{i}\mid i\in I\}$ has no inheritance property even if each $C_{i}$, $i\in I$, is a closed linear subspace \cite{RZ14}. \section{Regular operators\label{s-RO}} \begin{definition} \label{d-R} \rm\ Let $S\subseteq \mathcal{H}$ be nonempty, and $C\subseteq \mathcal{H}$ be nonempty, closed and convex. We say that a quasi-nonexpansive operator $U\colon\mathcal{H}\rightarrow\mathcal{H}$ is: \begin{enumerate} \item[(a)] \textit{weakly }$C$-\textit{regular} over $S$ if for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$ and $\{n_k\}_{k=0}^\infty\subseteq \{k\}_{k=0}^\infty$, we have \begin{equation} \left . \begin{array}{l} x^{n_k}\rightharpoonup y \\ \|U x^k-x^k\|\rightarrow 0 \end{array} \right\}\quad\Longrightarrow\quad y\in C; \end{equation} \item[(b)] $C$-\textit{regular} over $S$ if for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, we have \begin{equation} \lim_{k\rightarrow \infty }\Vert Ux^{k}-x^{k}\Vert =0\quad \Longrightarrow \quad \lim_{k\rightarrow \infty }d(x^{k},C)=0\text{;} \end{equation} \item[(c)] \textit{linearly }$C$-\textit{regular} over $S$ if there is $\delta >0$ such that for all $x\in S$, we have \begin{equation} d(x,C)\leq \delta \Vert Ux-x\Vert \text{.} \end{equation} The constant $\delta $ is called a \textit{modulus} of the linear $C$-regularity\textit{\ of }$U$ over $S$. \end{enumerate} If any of the above regularity conditions holds for $S=\mathcal{H}$, then we omit the phrase ``over $S$''. If any of the above regularity conditions holds for every bounded subset $S\subseteq \mathcal{H}$, then we precede the corresponding term with the adverb ``boundedly'' while omitting the phrase ``over $S$'' (we allow $\delta $ to depend on $S$ in (c)). We say that $U$ is \textit{(boundedly) weakly regular}, \textit{regular} or \textit{linearly regular} (over $S$) if $C=\limfunc{Fix}U$ in (a), (b) or (c), respectively. \end{definition} The most common setting of the above definition, in which we are interested, is where \begin{equation} C=\limfunc{Fix} U, \quad \limfunc{Fix} U\cap S \neq \emptyset \quad \text{and } \quad S \text{ is bounded}. \end{equation} \begin{remark}[Weak regularity] \label{r-WR} \rm\ Note that if the operator $U$ is weakly regular, then this means that $U-\limfunc{Id}$ is demi-closed at 0. Observe that: \begin{enumerate} \item[(i)] $U$ is weakly $C$-regular over $S$ if and only if for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, we have \begin{equation} \left . 
\begin{array}{l} x^k\rightharpoonup y \\ \|U x^k-x^k\|\rightarrow 0 \end{array} \right\}\quad\Longrightarrow\quad y\in C. \end{equation} This type of equivalence is no longer true for a weakly $C$-regular sequence of operators, as we show in the next section; see Remark \ref{r-WR2}. \item[(ii)] $U$ is boundedly weakly $C$-regular if and only if $U$ is weakly $C$-regular. This follows from (i) and the fact that any weakly convergent sequence $\{x^k\}_{k=0}^\infty$ must be bounded. Therefore there is no need to distinguish between boundedly weakly ($C$-)regular and weakly ($C$-)regular operators. \end{enumerate} \end{remark} \begin{remark}[Regular operators and regular sets] \label{r-BRsets} \rm\ The notion of regular operators is closely related to the notion of a regular family of subsets. Indeed, for a family $\mathcal{C}$ of closed convex subsets $C_{i}\subseteq \mathcal{H}$, $i\in I:=\{1,2,...,m\}$, having a common point, denote by $P$ the metric projection onto the furthest subset $C_{i}$, that is, for any $x\in \mathcal{H}$ and for some $i(x)\in \limfunc{Argmax}_{i\in I}d(x,C_{i})$, $P(x)=P_{C_{i(x)}}x$. Note that, in general, $P$ is not uniquely defined, because, in general, $i(x)$ is not uniquely defined. Therefore we suppose that for any $x\in \mathcal{H}$, the index $i(x)\in \limfunc{Argmax}_{i\in I}d(x,C_{i})$ is fixed, for example, $i(x)=\min \{i\in I\mid i\in \limfunc{Argmax}_{i\in I}d(x,C_{i})\}$. It is easily seen that the operator $P$ is (linearly) regular over $S$ (with modulus $\delta $) if and only if the family $\mathcal{C}$ is (linearly) regular over $S$ (with modulus $\delta $). \end{remark} Clearly, the metric projection $P_{C}$ onto a nonempty closed convex subset $C\subseteq \mathcal{H}$ is linearly regular with a modulus $\delta =1$. Below we give a few examples of weakly (boundedly, boundedly linearly) regular operators. \begin{example} \label{ex-NE-BR} \rm\ A nonexpansive operator $U\colon \mathcal{H}\rightarrow \mathcal{H}$ with a fixed point is weakly regular. This follows from the fact that a nonexpansive operator satisfies the demi-closedness principle \cite[Lemma 2]{Opi67}. If $\mathcal{H}=\mathbb{R}^{n}$, then, by \cite[Proposition 4.1]{CZ14}, $U$ is boundedly regular. This, in principle, follows from the fact that in $\mathbb{R}^{n}$ the weak convergence is equivalent to the strong one. In this paper we extend \cite[Proposition 4.1]{CZ14} to sequences of regular operators; see Theorem \ref{t-reg} and Corollary \ref{c-reg}. \end{example} \begin{example}[Subgradient projection] \label{ex-subProj} \rm\ Let $f\colon \mathcal{H}\rightarrow \mathbb{R}$ be continuous and convex with a nonempty sublevel set $S(f,0)$ and let $P_{f}:\mathcal{H}\rightarrow \mathcal{H}$ be a subgradient projection. \begin{enumerate} \item[(a)] If $f$ is Lipschitz continuous on bounded sets, then $P_{f}$ is weakly regular \cite[Theorem 4.2.7]{Ceg12}. We recall that $f$ is Lipschitz continuous on bounded sets if and only if $f$ maps bounded sets onto bounded sets if and only if $\partial f$ is uniformly bounded on bounded sets \cite[Proposition 7.8]{BB96}. All three conditions hold true if $\mathcal{H}=\mathbb{R}^{n}$. If, in addition, $f$ is strongly convex, then $P_{f}$ is boundedly regular. A detailed proof of this fact will be presented elsewhere. \item[(b)] If $\mathcal{H}=\mathbb{R}^{n}$, then, by (a) and by the equivalence of weak and strong convergence in a finite dimensional space, $P_{f}$ is boundedly regular. 
See also \cite[Lemma 24]{CZ13}. \item[(c)] If $\mathcal{H}=\mathbb{R}^{n}$ and $f(z)<0$ for some $z\in \mathbb{R}^{n}$, then $P_{f}$ is boundedly linearly regular. Indeed, by (a) and by Lemma \ref{l-Fuk}, for every compact $K\subseteq \mathbb{R}^{n}$, there are $\delta ,\Delta >0$ such that $\Vert \partial f(x)\Vert \leq \Delta $ and $\delta d(x,S(f,0))\leq f_{+}(x)$ for any $x\in K$. Thus, \begin{equation} \Vert x-P_{f}x\Vert =\frac{f_{+}(x)}{\Vert g_{f}(x)\Vert }\geq \frac{\delta }{\Delta }d(x,S(f,0))\text{.} \label{e-subProj} \end{equation} Since $\limfunc{Fix}P_{f}=S(f,0)$, inequality (\ref{e-subProj}) proves the bounded linear regularity of $P_{f}$. \end{enumerate} \end{example} The operators presented in Examples \ref{ex-NE-BR} and \ref{ex-subProj}(b) need not be boundedly regular if $\dim \mathcal{H}=\infty $, as the following example shows. \begin{example}[Subgradient projection which is not regular] \rm\ Let $C_{1},C_{2}\subseteq \mathcal{H}$ be closed convex and $x^{0}\in \mathcal{H}$. Suppose that: \begin{enumerate} \item[(i)] $C:=C_{1}\cap C_{2}=\{0\}$, \item[(ii)] $d(x^{0},C_{2})\leq d(x^{0},C_{1})$, \item[(iii)] the sequence $\{x^{k}\}_{k=0}^{\infty }$ defined by the recurrence $x^{k+1}=P_{C_{2}}P_{C_{1}}x^{k}$ converges weakly to $0$, but $\{x^{k}\}_{k=0}^{\infty }$ does not converge in norm. \end{enumerate} \noindent A construction of $C_{1},C_{2}$ and a point $x^{0}$ satisfying (i)-(iii) is due to Hundal \cite{Hun04}; see also \cite{MR03}. Define a function $f:\mathcal{H}\rightarrow \mathbb{R} $ as follows: \begin{equation} f(x)=\max \{d(x,C_{1}),d(x,C_{2})\}\text{.} \end{equation} Clearly, $f$ is continuous and convex as the maximum of continuous and convex functions. It is easy to check that for $x\in C_{1}\setminus C$ we have $f(x)=d(x,C_{2})$ and $g_{f}(x)=\nabla f(x)=\frac{x-P_{C_{2}}x}{d(x,C_{2})}$, and that for $x\in C_{2}\setminus C$ we have $f(x)=d(x,C_{1})$ and $g_{f}(x)=\nabla f(x)=\frac{x-P_{C_{1}}x}{d(x,C_{1})}$. Let $u^{k}$ be defined by the recurrence \begin{equation} u^{k+1}=P_{f}u^{k}\text{,} \end{equation} with $u^{0}=x^{0}$. Then we have \begin{equation} u^{k+1}=\left\{ \begin{array}{ll} P_{C_{1}}u^{k}\text{,} & \text{for }k=2n\text{,} \\ P_{C_{2}}u^{k}\text{,} & \text{for }k=2n+1\text{.} \end{array} \right. \end{equation} By Hundal's construction, $u^{k}$ converges weakly to $0$ but does not converge in norm, that is, $\limsup_{k}\Vert u^{k}\Vert >0$. Now it is easily seen that $P_{f}$ is not boundedly regular. Indeed, $\{u^{k}\}_{k=0}^{\infty }$ is bounded as a weakly convergent sequence. Moreover, $\lim_{k}\left\Vert u^{k}-P_{f}u^{k}\right\Vert =0$, because $P_{f}$ is SQNE (see Lemma \ref{f-6a}). But \begin{equation} \limsup_{k}d(u^{k},C)=\limsup_{k}\Vert u^{k}\Vert >0\text{.} \end{equation} Thus, $P_{f}$ is not boundedly regular. \end{example} \section{Regular sequences of operators} \label{s-RS} \begin{definition} \label{d-RS} \rm\ Let $S\subseteq \mathcal{H}$ be nonempty, and $C\subseteq \mathcal{H}$ be nonempty, closed and convex. 
We say that the sequence $\{U_{k}\}_{k=0}^{\infty }$ of quasi-nonexpansive operators $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ is: \begin{enumerate} \item[(a)] \textit{weakly }$C$-\textit{regular} over $S$ if for each $\{x^k\}_{k=0}^\infty\subseteq S$ and $\{n_k\}_{k=0}^\infty\subseteq \{k\}_{k=0}^\infty$, we have \begin{equation} \left . \begin{array}{l} x^{n_k}\rightharpoonup y \\ \|U_k x^k-x^k\|\rightarrow 0 \end{array} \right\}\quad\Longrightarrow\quad y\in C; \label{e-Wk} \end{equation} \item[(b)] $C$-\textit{regular} over $S$ if for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, we have \begin{equation} \lim_{k\rightarrow \infty }\Vert U_{k}x^{k}-x^{k}\Vert =0\quad \Longrightarrow \quad \lim_{k\rightarrow \infty }d(x^{k},C)=0\text{;} \label{e-Rk} \end{equation} \item[(c)] \textit{linearly }$C$-\textit{regular} over $S$ if there is $\delta >0$ such that for all $x\in S$ and $k=0,1,2,\ldots$, we have \begin{equation} d(x,C)\leq \delta \Vert U_{k}x-x\Vert \text{.} \label{e-LRk} \end{equation} The constant $\delta $ is called a \textit{modulus} of the linear $C$-regularity\textit{\ of }$\{U_{k}\}_{k=0}^{\infty }$ over $S$. \end{enumerate} \noindent If any of the above regularity conditions holds for $S=\mathcal{H}$, then we omit the phrase ``over $S$''. If any of the above regularity conditions holds for every bounded subset $S\subseteq \mathcal{H}$, then we precede the corresponding term with the adverb ``boundedly'' while omitting the phrase ``over $S$''\ (we allow $\delta $ to depend on $S$ in (c)). We say that $\{U_{k}\}_{k=0}^{\infty }$ is \textit{(boundedly) weakly regular, regular or linearly regular} (over $S$), if \begin{equation} C=\bigcap_{k=0}^{\infty }\limfunc{Fix}U_{k}\neq\emptyset \end{equation} in (a), (b) or (c), respectively. \end{definition} Setting $U_{k}=U$ for all $k\geq 0$ in Definition \ref{d-RS}, we arrive at Definition \ref{d-R} of a (weakly, linearly) $C$-regular operator. Although all the three sets $C$, $F:=\bigcap_{k=0}^\infty \limfunc{Fix} U_k$ and $S$ are not formally related in the above definition, similarly to the case of a single operator, the most common setting that we are interested in is where \begin{equation} C=F, \quad S\cap F \neq \emptyset \quad \text{and} \quad S \text{ is bounded}. \end{equation} We now adjust Remark \ref{r-WR} to the case of a sequence of operators. \begin{remark}[Weak regularity] \label{r-WR2} \rm\ Observe that: \begin{enumerate} \item[(i)] If $\{U_k\}_{k=0}^\infty$ is weakly $C$-regular over $S$ then, obviously, for any sequence $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, we have \begin{equation} \label{e-WR2} \left . \begin{array}{l} x^k\rightharpoonup y \\ \|U_k x^k-x^k\|\rightarrow 0 \end{array} \right\}\quad\Longrightarrow\quad y\in C. \end{equation} The above condition \eqref{e-WR2} is no longer equivalent to \eqref{e-Wk}, as it was in the case of a constant sequence of operators. To see this, following \cite[Sec. 4]{Ceg15}, we consider $U_{2k}:=T$ and $U_{2k+1}:=V$, $k=0,1,2,\ldots$, where $T,V\colon\mathcal{H}\rightarrow\mathcal{H}$ have a nonempty common fixed point set $C=\limfunc{Fix}T\cap\limfunc{Fix}V$. Assume that $V$ and $T$ are weakly regular. Then, clearly, $\{U_k\}_{k=0}^\infty$ satisfies \eqref{e-WR2}. 
Assume now that there is $y\in \limfunc{Fix} T\setminus \limfunc{Fix} V$. Then, by taking $z\in \limfunc{Fix} V$ and setting $x^{2k}=y$, $x^{2k+1}=z$, we see that $y$ is a weak cluster point of $\{x^k\}_{k=0}^\infty$ and $\|U_kx^k-x^k\|= 0$, but $y\notin \limfunc{Fix} T\cap \limfunc{Fix} V$. Consequently, $\{U_k\}_{k=0}^\infty$ is not weakly regular. \item[(ii)] Assume that $F\neq \emptyset $. Then $\{U_{k}\}_{k=0}^{\infty }$ is boundedly weakly $C$-regular if and only if $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular. Indeed, assume that $\{U_{k}\}_{k=0}^{\infty }$ is boundedly weakly $C$-regular and let $\{x^{k}\}_{k=0}^{\infty }$ be such that $\Vert U_{k}x^{k}-x^{k}\Vert \rightarrow 0$ and $x^{n_{k}}\rightharpoonup y$. Then, for any $z\in F$, the sequence \begin{equation} y^{n}:=\left\{ \begin{array}{ll} x^{k}\text{,} & \text{if }n=n_{k}\text{ for some }k\text{,} \\ z\text{,} & \text{otherwise} \end{array} \right. \end{equation} is bounded, $y^{n_{k}}\rightharpoonup y$ and $\Vert U_{k}y^{k}-y^{k}\Vert \rightarrow 0$. Consequently, by the bounded weak $C$-regularity of $\{U_{k}\}_{k=0}^{\infty }$, we have $y\in C$. This shows that $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular. Therefore again, as it was for the case of a single operator, there is no need to distinguish between boundedly weakly ($C$-)regular and weakly ($C$-)regular sequences of operators whenever $F\neq \emptyset $. \end{enumerate} \end{remark} \begin{theorem} \label{t-reg} Let $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be quasi-nonexpansive, $k=0,1,2,\ldots$, let $S\subseteq \mathcal{H}$ be nonempty and let $C\subseteq \mathcal{H}$ be nonempty, closed and convex. Then the following statements hold true: \begin{enumerate} \item[$\mathrm{(i)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is linearly $C$-\textit{regular} over $S$, then $\{U_{k}\}_{k=0}^{\infty }$ is $C$-\textit{regular} over $S$. \item[$\mathrm{(ii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$, then $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$. \item[$\mathrm{(iii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$, $\mathcal{H}=\mathbb{R}^n$ and $S$ is bounded, then $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. \end{enumerate} \end{theorem} \begin{proof} Part (i) follows directly from Definition \ref{d-RS}. (ii) Suppose that $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. Let $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, $y$ be a weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$ and $\Vert U_{k}x^{k}-x^{k}\Vert \rightarrow 0$. Then $\lim_{k}d(x^{k},C)=0$. Let $\{x^{n_{k}}\}_{k=0}^{\infty }\subseteq \{x^{k}\}_{k=0}^{\infty }$ be a subsequence converging weakly to $y$. By the weak lower semicontinuity of $d(\cdot ,C)$, we have \begin{equation} 0=\lim_{k}d(x^{k},C)=\lim_{k}d(x^{n_{k}},C)\geq d(y,C)\geq 0\text{.} \end{equation} Now the closedness of $C$ yields $y\in C$, which proves the weak $C$-regularity of $\{U_{k}\}_{k=0}^{\infty }$ over $S$. (iii) Suppose that $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$ and $\mathcal{H}=\mathbb{R}^n$. Let $\{x^{k}\}_{k=0}^{\infty }\subseteq S$ and $\lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$. We prove that $\lim_{k}d(x^{k},C)=0$. 
By the boundedness of $S$, there is a subsequence $\{x^{n_{k}}\}_{k=0}^{\infty }\subseteq\{x^{k}\}_{k=0}^{\infty }$ which converges to $y\in \mathcal{H}$ and such that $\limsup_{k}d(x^{k},C)=\lim_{k}d(x^{n_{k}},C)$. The weak $C$-regularity of $\{U_{k}\}_{k=0}^{\infty }$ over $S$ yields that $y\in C$. The continuity of $d(\cdot ,C)$ implies that \begin{equation} 0\leq \limsup_{k}d(x^{k},C)=\lim_{k}d(x^{n_{k}},C)=d(y,C)=0\text{.} \end{equation} Thus $\lim_{k}d(x^{k},C)=0$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. \end{proof} Parts (ii) and (iii) of Theorem \ref{t-reg} in the case $U_{k}=U$, $k=0,1,2\ldots$, were proved in \cite[Theorem 4.1]{CZ14}. \begin{corollary} \label{c-reg} Let $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be quasi-nonexpansive, $k=0,1,2,\ldots$, and assume that $\bigcap_{k=0}^\infty \limfunc{Fix}U_k\neq\emptyset$. Then the following statements hold true: \begin{enumerate} \item[$\mathrm{(i)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is boundedly linearly regular, then $\{U_{k}\}_{k=0}^{\infty }$ is boundedly regular. \item[$\mathrm{(ii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is boundedly regular, then $\{U_{k}\}_{k=0}^{\infty }$ is weakly regular. \item[$\mathrm{(iii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is weakly regular and $\mathcal{H}=\mathbb{R}^{n}$, then $\{U_{k}\}_{k=0}^{\infty }$ is boundedly regular. \end{enumerate} \end{corollary} \begin{remark} \label{r-RSa} \rm\ Let $\{U_{k}\}_{k=0}^{\infty }$ be a sequence of quasi-nonexpansive operators and let $C\subseteq \mathcal{H}$ be nonempty, closed and convex. Clearly, the sequence $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C$-regular over any nonempty bounded subset $S\subseteq C$ because if $x\in C$, then $d(x,C)=0$. Let $S_{i}\subseteq \mathcal{H}$, $i=1,2$, be nonempty. If $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C$-regular over $S_{i}$, $i=1,2$, then $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C$-regular over $S:=S_{1}\cup S_{2}$. Thus, without loss of generality, we can add to $S$ an arbitrary bounded subset of $C$. Moreover, if $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C$-regular over $S$, then $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C$-regular over an arbitrary nonempty subset of $S$. Thus in the definition of boundedly (weakly, linearly) $C$-regular sequences of operators we can restrict the bounded subsets $S$ to balls $B(z,R)$, where $z\in C$ is fixed and $R>0$. \end{remark} \begin{remark} \label{r-RS} \rm\ Let $C_{1},C_{2}$ $\subseteq \mathcal{H}$ be nonempty, closed and convex, $C_{1}\subseteq C_{2}$, and $S\subseteq \mathcal{H}$ be nonempty. Clearly, $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C_{2}$-regular over $S$ if $\{U_{k}\}_{k=0}^{\infty }$ is (weakly, linearly) $C_{1}$-regular over $S$. \end{remark} We finish this section with two natural properties of (weakly, linearly) $C$-regular sequences of operators. \begin{proposition}[Relaxation] \label{l-rel-BR} Let $T_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be quasi-nonexpansive, $k=0,1,2,\ldots $, let $S\subseteq \mathcal{H}$ be nonempty and let $C\subseteq \mathcal{H}$ be nonempty, closed and convex. Suppose that $\{T_{k}\}_{k=0}^{\infty }$ is weakly (boundedly, boundedly linearly) $C$-regular over $S$ (with modulus $\delta $) and $U_{k}:=\limfunc{Id}+\lambda _{k}(T_{k}-\limfunc{Id})$, where $0<\lambda =\inf_{k}\lambda _{k}\leq \lambda _{k}\leq 1$. 
Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is weakly (boundedly, boundedly linearly) $C$-regular over $S$ (with modulus $\delta /\lambda $). \end{proposition} \begin{proof} The proposition follows directly from Definition \ref{d-RS}. \end{proof} \begin{proposition}[Subsequences of regular operators] \label{l-sub-BR} Let $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be quasi-nonexpansive, $k=0,1,2,\ldots$, let $S\subseteq \mathcal{H}$ be nonempty and let $C\subseteq \mathcal{H}$ be nonempty, closed and convex. Moreover, let $F:=\bigcap_{k=0}^{\infty }\limfunc{Fix}U_{k}$. Then the following statements hold true: \begin{enumerate} \item[$\mathrm{(i)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$, then any of its subsequences $\{U_{n_{k}}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$, whenever $S\cap F\neq\emptyset$. \item[$\mathrm{(ii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$, then any of its subsequences $\{U_{n_{k}}\}_{k=0}^{\infty }$ is $C$-regular over $S$, whenever $S\cap F\neq \emptyset $. \item[$\mathrm{(iii)}$] If $\{U_{k}\}_{k=0}^{\infty }$ is linearly $C$-regular over $S$ with modulus $\delta $, then any of its subsequences $\{U_{n_{k}}\}_{k=0}^{\infty }$ is linearly $C$-regular over $S$ with a modulus $\delta $. \end{enumerate} Moreover, if $\{U_{k}\}_{k=0}^{\infty }$ is weakly, boundedly or boundedly linearly regular, then $F=\bigcap_k \limfunc{Fix} U_{n_k}$ for every subsequence $\{n_k\}_{k=0}^\infty\subseteq \{k\}_{k=0}^\infty$. \end{proposition} \begin{proof} (i) Suppose that $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$. Let $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, $\lim_{k}\Vert U_{n_{k}}x^{k}-x^{k}\Vert =0$, $y$ be a weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$ and $\{x^{m_{k}}\}_{k=0}^{\infty }\subseteq \{x^{k}\}_{k=0}^{\infty }$ be a subsequence converging weakly to $y$. We claim that $y\in C$. To show this, let $z\in S\cap F$ and define \begin{equation} y^{n}:=\left\{ \begin{array}{ll} x^{m_{k}}\text{,} & \text{if }n=n_{m_{k}}\text{ for some }k\text{,} \\ z\text{,} & \text{otherwise.} \end{array} \right. \end{equation} Then $\{y^{n}\}_{n=0}^{\infty }\subseteq S$ and moreover, we have \begin{equation} \Vert U_{n}y^{n}-y^{n}\Vert =\left\{ \begin{array}{ll} \Vert U_{n_{m_{k}}}x^{m_{k}}-x^{m_{k}}\Vert \text{,} & \text{if }n=n_{m_{k}}\text{ for some }k\text{,} \\ 0\text{,} & \text{otherwise.} \end{array} \right. \end{equation} By assumption, $\lim_{k}\Vert U_{n_{m_{k}}}x^{m_{k}}-x^{m_{k}}\Vert =0$. Consequently, $\lim_{n}\Vert U_{n}y^{n}-y^{n}\Vert =0$. Since $\{U_{n}\}_{n=0}^{\infty }$ is weakly $C$-regular over $S$ and $y$ is a weak cluster point of $\{y^{n}\}_{n=0}^{\infty }$, we have $y\in C$. (ii) Suppose that $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. Let $\{x^{k}\}_{k=0}^{\infty }\subseteq S$, $\lim_{k}\Vert U_{n_{k}}x^{k}-x^{k}\Vert =0$. We claim that $\lim_{k}d(x^{k},C)=0$. 
To show this, let $z\in S\cap F\cap C$ and define \begin{equation} y^{n}:=\left\{ \begin{array}{ll} x^{k}\text{,} & \text{if }n=n_{k}\text{ for some }k\text{,} \\ z\text{,} & \text{otherwise.} \end{array} \right. \end{equation} Then, as in (i), $\{y^n\}_{n=0}^\infty\subseteq S$ and we have \begin{equation} \Vert U_{n}y^{n}-y^{n}\Vert =\left\{ \begin{array}{ll} \Vert U_{n_{k}}x^{k}-x^{k}\Vert \text{,} & \text{if }n=n_{k}\text{,} \\ 0\text{,} & \text{otherwise.} \end{array} \right. \end{equation} By assumption, $\lim_{k}\Vert U_{n_{k}}x^{k}-x^{k}\Vert =0$. Consequently, $\lim_{n}\Vert U_{n}y^{n}-y^{n}\Vert =0$. Since $\{U_{n}\}_{n=0}^{\infty }$ is $C$-regular over $S$, we have $\lim_{n}d(y^{n},C)=0$, which yields $\lim_{k}d(x^{k},C)=0$. The proof of part (iii) is straightforward. Assume that $\{U_{k}\}_{k=0}^{\infty }$ is weakly regular, $\{n_{k}\}_{k=0}^{\infty }\subseteq \{k\}_{k=0}^{\infty }$ and let $z\in \bigcap_{k}\limfunc{Fix}U_{n_{k}}$. We show that $z\in F$. Define $y^{k}=z$ for all $k=0,1,2,\ldots $. Then, by (i), we see that $z$ has to be in $F$. Since bounded and bounded linear regularity imply weak regularity (Corollary \ref{c-reg}), the proof is complete. \end{proof} A variant of part (i), as well as the last statement from the above proposition, were observed in \cite[Lemma 4.6, Remark 4.7]{Ceg15}. \section{Convex combinations and products of regular sequences of operators \label{s-4}} Theorems \ref{t-Rk1} and \ref{t-Rk2} below show that a family of (weakly, linearly) regular sequences of operators having a common fixed point is closed under convex combinations and compositions. We consider here $p$ sequences of operators $\{U_{j}^{k}\}_{k=0}^{\infty }$, $j=1,2,...,p$, and $m$ sets $C_{i}$, $i=1,2,...,m$. \begin{theorem} \label{t-Rk1} For each $k=0,1,2,\ldots ,$ let $U_{k}:=\sum_{j=1}^{p}\omega _{j}^{k}U_{j}^{k}$, where $U_{j}^{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ is $\rho _{j}^{k}$-strongly quasi-nonexpansive, $\rho _{j}^{k}\geq 0$, $\omega _{j}^{k}\geq 0$, $j\in J:=\{1,\ldots ,p\}$, $\sum_{j\in J}\omega _{j}^{k}=1$. Moreover, for each $i\in I:=\{1,\ldots ,m\}$, let $C_{i}\subseteq \mathcal{H}$ be closed and convex. Moreover, let $S\subseteq \mathcal{H}$ be bounded, $F_{0}:=\bigcap_{j\in J}\bigcap_{k\geq 0}\limfunc{Fix}U_{j}^{k}$, $C:=\bigcap_{i\in I}C_{i}$ and assume that $C\subseteq F_{0}$ is nonempty. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for some $i\in I$, there is $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$ and $\sigma _i:=\inf_k\omega _{j_k}^k\rho _{j_k}^k>0$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. 
If this property holds for all $i\in I$, then $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$; \item[$\mathrm{(ii)}$] Suppose that for some $i\in I$, there is $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$ and $\sigma _i:=\inf_k\omega _{j_k}^k\rho _{j_k}^k>0$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$. If the property holds for all $i\in I$ and $\{C_i\mid i\in I\}$ is regular over $S$, then $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. \item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, there is $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is linearly $C_{i}$-regular over $S$ with modulus $\delta _{i}$, $\sigma _i:=\inf_k\omega _{j_k}^k\rho _{j_k}^k>0$ and $\{C_i\mid i\in I\}$ is linearly regular over $S$ with modulus $\kappa >0$. Then $\{U_{k}\}_{k=0}^{\infty }$ is linearly $C$-regular over $S$ with modulus $2\kappa ^{2}\delta ^{2}/\sigma $, where $\sigma :=\min_i\sigma _i$ and $\delta :=\min_{i\in I}\delta _{i}$. \end{enumerate} \end{theorem} \begin{proof} Let $z\in C$ and $\{x^{k}\}_{k=0}^{\infty }\subseteq S$. By Fact \ref{f-5}(iii), for any $k=0,1,2,\ldots,$ and $j\in J$ we have \begin{equation} \frac{\omega _{j}^{k}\rho _{j}^{k}}{2R}\Vert U_{j}^{k}x^{k}-x^{k}\Vert ^{2}\leq \frac{1}{2R}\sum_{i=1}^{p}\omega _{i}^{k}\rho _{i}^{k}\Vert U_{i}^{k}x^{k}-x^{k}\Vert ^{2}\leq \Vert U_{k}x^{k}-x^{k}\Vert \text{,} \label{e-1} \end{equation} where $R>0$ is such that $S\subseteq B(z,R)$. (i) Let $y$ be a weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$, $i\in I$ and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. Suppose that $\lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$. Inequalities (\ref{e-1}) with $j=j_{k}$, $k=0,1,2,\ldots,$ and the inequality $\sigma _i>0$ yield $\lim_{k}\Vert U_{j_{k}}^{k}x^{k}-x^{k}\Vert =0$. Thus $y\in C_{i}$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. If this property holds for all $i\in I$, then $y\in C$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$. (ii) Let $i\in I$ and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$. Suppose that $\lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$. By (\ref{e-1}) with $j=j_{k}$, $k=0,1,2,\ldots,$ and since $\sigma _i>0$, we have $\lim_{k}\Vert U_{j_{k}}^{k}x^{k}-x^{k}\Vert =0$. Consequently, $\lim_{k}d(x^{k},C_{i})=0$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$. The proof of the second part of (ii) follows now directly from the definition of a regular family of sets. (iii) Let $i\in I$ be arbitrary and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is linearly $C_{i}$-regular over $S$ with modulus $\delta _{i}$. 
By (\ref{e-1}) with $ j=j_{k}$, $x^{k}=x$, $z=P_{C}x$ and $R=\Vert x-z\Vert =d(x,C)$, we get \begin{equation} \Vert U_{j_{k}}^{k}x-x\Vert ^{2}\leq \sum_{j\in J}\frac{{\Greekmath 0121} _{j}^{k}{\Greekmath 011A} _{j}^{k}}{{\Greekmath 011B}}\Vert U_{j}^{k}x-x\Vert ^{2}\leq \frac{2d(x,C)}{{\Greekmath 011B}} \Vert U_{k}x-x\Vert \label{e-1a} \end{equation} for all $x\in S$. Since the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is linearly $C_{i}$-regular over $S$ with modulus ${\Greekmath 010E} _{i}$, we also have $ d(x,C_{i})\leq {\Greekmath 010E} _{i}\Vert U_{j_{k}}^{k}x-x\Vert $, $x\in S$, $ k=0,1,2,\ldots,$ and thus, by (\ref{e-1a}), we arrive at \begin{equation} d^{2}(x,C_{i})\leq \frac{2{\Greekmath 010E} _{i}^{2}d(x,C)}{{\Greekmath 011B}}\Vert U_{k}x-x\Vert \label{e-1b} \end{equation} for all $x\in S$ and $k=0,1,2,\ldots$. Since $\{C_i\mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} $, we get \begin{equation} d^{2}(x,C)\leq {\Greekmath 0114} ^{2}d^{2}(x,C_{i})\leq \frac{2{\Greekmath 0114} ^{2}{\Greekmath 010E} _{i}^{2}d(x,C)}{{\Greekmath 011B}}\Vert U_{k}x-x\Vert \end{equation} for all $i\in I$, $x\in S$ and $k=0,1,2,\ldots$. This yields \begin{equation} d(x,C)\leq \frac{2{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}}{{\Greekmath 011B}}\Vert U_{k}x-x\Vert \end{equation} for all $x\in S$ and $k=0,1,2,\ldots,$ which means that $\{U_{k}\}_{k=0}^{ \infty }$ is linearly $C$-regular over $S$ with modulus $2{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/{\Greekmath 011B}$, as asserted. \end{proof} \begin{corollary} \label{c-Rk1a} For each $k=0,1,2,\ldots ,$ let $U_{k}:=\sum_{i=1}^{m}{\Greekmath 0121} _{i}^{k}U_{i}^{k}$, where $U_{i}^{k}\colon \mathcal{H}\rightarrow \mathcal{H} $ is ${\Greekmath 011A} _{i}^{k}$-strongly quasi-nonexpansive, $i\in I:=\{1,\ldots ,m\}$. Assume that ${\Greekmath 011A} :=\min_{i\in I}\inf_{k}{\Greekmath 011A} _{i}^{k}>0$, ${\Greekmath 0121} :=\min_{i}\inf_{k}{\Greekmath 0121} _{i}^{k}>0$, $\sum_{i\in I}{\Greekmath 0121} _{i}^{k}=1$ and $ F_{0}:=\bigcap_{i\in I}F_{i}\neq \emptyset $, where $F_{i}:=\bigcap_{k\geq 0} \limfunc{Fix}U_{i}^{k}$. Moreover, let $S\subseteq \mathcal{H}$ be bounded. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for any $i\in I$, $\{U_{i}^{k}\}_{k=0}^{ \infty }$ is weakly regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{ \infty }$ is also weakly regular over $S$. \item[$\mathrm{(ii)}$] Suppose that for any $i\in I$, $\{U_{i}^{k}\}_{k=0}^{ \infty }$ is regular over $S$ and the family $\{F_i \mid i\in I\}$ is regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is also regular over $S$. \item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, $\{U_{i}^{k} \}_{k=0}^{\infty }$ is linearly regular over $S$ with modulus ${\Greekmath 010E} _{i}$, ${\Greekmath 010E} :=\min_{i\in I}{\Greekmath 010E} _{i}>0$, and the family $\{F_i \mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} >0$. Then the sequence $ \{U_{k}\}_{k=0}^{\infty }$ is regular over $S$ with modulus $2{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/({\Greekmath 0121} {\Greekmath 011A} )$. \end{enumerate} \end{corollary} \begin{proof} It suffices to substitute $J=I$, $C_{i}=F_{i}$ and $j_{k}=i$, $ k=0,1,2,\ldots $ in Theorem \ref{t-Rk1}. \end{proof} \begin{corollary} \label{c-Rk1b} Let $U:=\sum_{i=1}^{m}{\Greekmath 0121} _{i}U_{i}$, where $U_{i}\colon \mathcal{H}\rightarrow \mathcal{H}$ is ${\Greekmath 011A} _{i}$-strongly quasi-nonexpansive, $i\in I:=\{1,\ldots ,m\}$. 
Assume that ${\Greekmath 011A} :=\min_{i\in I}{\Greekmath 011A} _{i}>0$, ${\Greekmath 0121} :=\min_{i}{\Greekmath 0121} _{i}>0$, $\sum_{i\in I}{\Greekmath 0121} _{i}=1$ and $F_{0}:=\bigcap_{i\in I}\limfunc{Fix}U_{i}\neq \emptyset $. Moreover, let $S\subseteq \mathcal{H}$ be bounded. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for any $i\in I$, $U_{i}$ is weakly regular over $S$. Then $U$ is also weakly regular over $S$. \item[$\mathrm{(ii)}$] Suppose that for any $i\in I$, $U_{i}$ is regular over $S$ and the family $\{\limfunc{Fix}U_{i}\mid i\in I\}$ is regular over $ S$. Then $U$ is also regular over $S$. \item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, $U_{i}$ is linearly regular over $S$ with modulus ${\Greekmath 010E} _{i}$, ${\Greekmath 010E} :=\min_{i\in I}{\Greekmath 010E} _{i}>0$, and the family $\{\limfunc{Fix}U_{i}\mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} >0$. Then $U$ is linearly regular over $S$ with modulus $2{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/({\Greekmath 0121} {\Greekmath 011A} )$. \end{enumerate} \end{corollary} \begin{proof} It suffices to substitute $U_{i}^{k}=U_{i}$ and ${\Greekmath 0121} _{i}^{k}={\Greekmath 0121} _{i}$ for all $k=0,1,2,\ldots ,$ and $i\in I$ in Corollary \ref{c-Rk1a}. \end{proof} Since $S\subseteq \mathcal{H}$ is an arbitrary nonempty and bounded subset in Theorem \ref{t-Rk1} and in Corollaries \ref{c-Rk1a} and \ref{c-Rk1b}, these three results are also true for boundedly (weakly, linearly) ($C_{i}$ -)regular sequences of operators. Note that if an operator (or sequence of operators) is boundedly linearly regular with modulus ${\Greekmath 010E} $, then the same property holds with any modulus ${\Greekmath 010D} >{\Greekmath 010E} $. Therefore, without any loss of generality, we can restrict the analysis to boundedly linearly regular operators (or sequence of operators) with modulus ${\Greekmath 010E} \geq 1$. \begin{theorem} \label{t-Rk2} For each $k=0,1,2,\ldots ,$ let $U_{k}:=U_{p}^{k}U_{p-1}^{k} \ldots U_{1}^{k}$, where $U_{j}^{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ is ${\Greekmath 011A} _{j}^{k}$-strongly quasi-nonexpansive, $j\in J:=\{1,\ldots ,p\}$ and ${\Greekmath 011A} :=\min_{j\in J}\inf_{k}{\Greekmath 011A} _{j}^{k}>0$. Moreover, for each $i\in I:=\{1,\ldots ,m\}$, let $C_{i}\subseteq \mathcal{H}$ be closed and convex. Let $F_{0}:=\bigcap_{j\in J}\bigcap_{k\geq 0}\limfunc{Fix}U_{j}^{k}$, $ C:=\bigcap_{i\in I}C_{i}$ and assume that $C\subseteq F_{0}$ is nonempty. Moreover, let $S:=B(z,R)$ for some $z\in C$ and $R>0$. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for some $i\in I$, there is $ \{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $ \{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. If this property holds for all $i\in I$, then $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$. \item[$\mathrm{(ii)}$] Suppose that for some $i\in I$, there is $ \{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $ \{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is $C_{i}$-regular over $S$. If this property holds for all $i\in I$ and $\{C_i\mid i\in I\}$ is regular over $S$ , then $\{U_{k}\}_{k=0}^{\infty }$ is $C$-regular over $S$. 
\item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, there is $ \{j_{k}\}_{k=0}^{\infty }\subseteq J$ such that the sequence $ \{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is linearly $C_{i}$-regular over $S$ with modulus ${\Greekmath 010E} _{i}\geq 1$, ${\Greekmath 010E} :=\min_{i\in I}{\Greekmath 010E} _{i}$, and $ \{C_{i}\mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} >0$. Then $\{U_{k}\}_{k=0}^{\infty }$ is linearly $C$-regular over $S$ with modulus $2p{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/{\Greekmath 011A} $. \end{enumerate} \end{theorem} \begin{proof} Let $z\in C$ and $\{x^{k}\}_{k=0}^{\infty }\subseteq S$. Denote $ Q_{j}^{k}:=U_{j}^{k}U_{j-1}^{k}...U_{1}^{k}$, $Q_{0}^{k}:=\limfunc{Id}$ and $ x_{j}^{k}:=Q_{j}^{k}x^{k}$, $j\in J$, $k=0,1,2,\ldots$. Clearly, $ Q_{j}^{k}=U_{j}^{k}Q_{j-1}^{k}$, $x_{0}^{k}=x^{k}$ and $ x_{j}^{k}=U_{j}^{k}x_{j-1}^{k}$, $j\in J$. By Fact \ref{f-6}(iii), for any $ j\in J$, we have \begin{equation} 0\leq \frac{{\Greekmath 011A} _{j}^{k}}{2R}\Vert U_{j}^{k}x_{j-1}^{k}-x_{j-1}^{k}\Vert ^{2}\leq \frac{1}{2R}\sum_{l=1}^{p}{\Greekmath 011A} _{l}^{k}\Vert U_{l}^{k}x_{l-1}^{k}-x_{l-1}^{k}\Vert ^{2}\leq \Vert U_{k}x^{k}-x^{k}\Vert \RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{e-2} \end{equation} $j\in J$. Suppose that $\lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$. Note that the assumption $C\neq \emptyset $ and the quasi-nonexpansivity of $U_{l}^{k}$ , $l\in J$, imply $\{x_{l}^{k}\}_{k=0}^{\infty }\subseteq S$, $l\in J$. Inequalities (\ref{e-2}) and the inequality ${\Greekmath 011A} >0$ yield \begin{equation} \lim_{k}\Vert U_{l}x_{l-1}^{k}-x_{l-1}^{k}\Vert =0 \end{equation} for all $l\in J$. For a sequence $\{j_{k}\}_{k=0}^{\infty }$, denote $ y^{k}=x_{j_{k}-1}^{k}$. Clearly, $\{y^{k}\}_{k=0}^{\infty }\subseteq S$. (i) Let $y$ be a weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$, $i\in I$ and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that the sequence $ \{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. Suppose that $\lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$. Inequalities (\ref {e-2}) with $j=j_{k}$, $k=0,1,2,\ldots,$ and the inequality ${\Greekmath 011A} >0$ yield $ \lim_{k}\Vert U_{j_{k}}^{k}y^{k}-y^{k}\Vert =0$. Thus $y\in C_{i}$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. If this property holds for all $i\in I$, then $y\in C$, that is, $ \{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular over $S$. (ii) Let $i\in I$ and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular over $S$. 
Since ${\Greekmath 011A} >0$, inequalities (\ref{e-2}) yield \begin{equation} \lim_{k}\Vert U_{j_{k}}^{k}y^{k}-y^{k}\Vert =0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-Uik1} \end{equation} By the $C_{i}$-regularity of $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ over $S$ and by (\ref{e-Uik1}), we have \begin{equation} \lim_{k}d(y^{k},C_{i})=0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-dyjk1} \end{equation} The definition of the metric projection and the triangle inequality yield \begin{eqnarray} d(x^{k},C_{i}) &=&\Vert x^{k}-P_{C_{i}}x^{k}\Vert \leq \Vert x^{k}-P_{C_{i}}y^{k}\Vert =\Vert \sum_{j=0}^{j_{k}-2}(x_{j}^{k}-x_{j+1}^{k})+(y^{k}-P_{C_{i}}y^{k})\Vert \notag \\ &\leq &\sum_{j=0}^{j_{k}-2}\left\Vert x_{j}^{k}-x_{j+1}^{k}\right\Vert +\left\Vert y^{k}-P_{C_{i}}y^{k}\right\Vert \leq \sum_{j=0}^{p-1}\Vert U_{j+1}^{k}x_{j}^{k}-x_{j}^{k}\Vert +d(y^{k},C_{i})\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \end{eqnarray} By (\ref{e-2}), the inequality ${\Greekmath 011A} >0$ and the assumption that $ \lim_{k}\Vert U_{k}x^{k}-x^{k}\Vert =0$, we have \begin{equation} \lim_{k}\sum_{j=0}^{p-1}\Vert U_{j+1}^{k}x_{j}^{k}-x_{j}^{k}\Vert =0. \end{equation} This together with (\ref{e-dyjk1}) leads to $\lim_{k}d(x^{k},C_{i})=0$, that is, $\{U_{k}\}_{k=0}^{\infty }$ is boundedly $C_{i}$-regular. The proof of the second part of (ii) follows directly from the definition of a regular family of sets. (iii) Let $i\in I$ be arbitrary and $\{j_{k}\}_{k=0}^{\infty }\subseteq J$ be such that the sequence $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ is linearly $ C_{i}$-regular over $S$. Let $x\in S$. By (\ref{e-2}) with $x^{k}=x$, $ z=P_{C}x$ and $R=\Vert x-z\Vert =d(x,C)$, we get \begin{equation} \sum_{j=1}^{p}\Vert x_{j}^{k}-x_{j-1}^{k}\Vert ^{2}\leq \frac{2d(x,C)}{{\Greekmath 011A} } \Vert U_{k}x-x\Vert \RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-2a} \end{equation} By the linear $C_{i}$-regularity of $\{U_{j_{k}}^{k}\}_{k=0}^{\infty }$ over $S$ with modulus ${\Greekmath 010E} _{i}$, \begin{equation} d(y^{k},C_{i})\leq {\Greekmath 010E} _{i}\Vert U_{j_{k}}^{k}y^{k}-y^{k}\Vert \label{e-2b} \end{equation} for all $k=0,1,2,\ldots $. By the definition of the metric projection, the triangle inequality, inequality (\ref{e-2b}) and the assumption that ${\Greekmath 010E} _{i}\geq 1$, we have \begin{eqnarray} d^{2}(x,C_{i}) &\leq &\Vert x-P_{C_{i}}y^{k}\Vert ^{2}\leq \left( \sum_{l=1}^{j_{k}-1}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert +\Vert y^{k}-P_{C_{i}}y^{k}\Vert \right) ^{2} \notag \\ &=&\left( \sum_{l=1}^{j_{k}-1}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert +d(y^{k},C_{i})\right) ^{2}\leq \left( \sum_{l=1}^{j_{k}-1}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert +{\Greekmath 010E} _{i}\Vert U_{j_{k}}^{k}y^{k}-y^{k}\Vert \right) ^{2} \notag \\ &\leq &{\Greekmath 010E} _{i}^{2}\left( \sum_{l=1}^{j_{k}-1}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert +\Vert U_{j_{k}}^{k}y^{k}-y^{k}\Vert \right) ^{2}={\Greekmath 010E} _{i}^{2}\left( \sum_{l=1}^{j_{k}}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert \right) ^{2} \notag \\ &\leq &{\Greekmath 010E} _{i}^{2}\left( \sum_{l=1}^{p}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert \right) ^{2}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \end{eqnarray} $x\in S$, $k=0,1,2,\ldots $. 
The above inequalities and the Cauchy-Schwarz inequality $\langle e,a\rangle ^{2}\leq p\Vert a\Vert ^{2}$ with $ e=(1,1,...,1)\in \mathbb{R} ^{p}$ and $a=(a_{1},a_{2},...a_{p})\in \mathbb{R} ^{p}$, where $a_{j}:=\Vert x_{j}^{k}-x_{j-1}^{k}\Vert $, $j\in J$, yield \begin{equation} d^{2}(x,C_{i})\leq p{\Greekmath 010E} _{i}^{2}\sum_{l=1}^{p}\Vert x_{l}^{k}-x_{l-1}^{k}\Vert ^{2}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{e-2c} \end{equation} $x\in S$, $k=0,1,2,\ldots $. Now the linear bounded regularity of $ \{C_{i}\mid \in I\}$ with modulus ${\Greekmath 0114} $, (\ref{e-2a}) and (\ref{e-2c}) imply that \begin{equation} d^{2}(x,C)\leq {\Greekmath 0114} ^{2}d^{2}(x,C_{i})\leq \frac{2p{\Greekmath 010E} _{i}^{2}{\Greekmath 0114} ^{2}}{{\Greekmath 011A} }d(x,C)\Vert U_{k}x-x\Vert \RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \end{equation} $x\in S$, $k=0,1,2,\ldots $ for all $i\in I$. This gives \begin{equation} d(x,C)\leq \frac{2p{\Greekmath 010E} ^{2}{\Greekmath 0114} ^{2}}{{\Greekmath 011A} }\Vert U_{k}x-x\Vert \RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \end{equation} $x\in S$, $k=0,1,2,\ldots ,$ that is, $\{U_{k}\}_{k=0}^{\infty }$ is linearly $C$-regular over $S$ with modulus $2p{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/{\Greekmath 011A} $. \end{proof} \begin{corollary} \label{c-Rk2a} For each $k=0,1,2,\ldots ,$ let $U_{k}:=U_{m}^{k}U_{m-1}^{k} \ldots U_{1}^{k}$, where $U_{i}^{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ is ${\Greekmath 011A} _{i}^{k}$-strongly quasi-nonexpansive, $i\in I:=\{1,\ldots ,m\}$. Assume that ${\Greekmath 011A} :=\min_{i\in I}\inf_{k}{\Greekmath 011A} _{i}^{k}>0$ and $ F_{0}:=\bigcap_{i\in I}F_{i}\neq \emptyset $, where $F_{i}:=\bigcap_{k\geq 0} \limfunc{Fix}U_{i}^{k}$. Moreover, let $S:=B(z,R)$ for some $z\in F_{0}$ and $R>0$. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for any $i\in I$, $\{U_{i}^{k}\}_{k=0}^{ \infty }$ is weakly regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{ \infty }$ is also weakly regular over $S$. \item[$\mathrm{(ii)}$] Suppose that for any $i\in I$, $\{U_{i}^{k}\}_{k=0}^{ \infty }$ is regular over $S$ and the family $\{F_{i}\mid i\in I\}$ is regular over $S$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is also regular over $S$. \item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, $\{U_{i}^{k} \}_{k=0}^{\infty }$ is linearly regular over $S$ with modulus ${\Greekmath 010E} _{i}\geq 1$, ${\Greekmath 010E} :=\min_{i\in I}{\Greekmath 010E} _{i}>0$, and the family $ \{F_{i}\mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} >0$. Then the sequence $\{U_{k}\}_{k=0}^{\infty }$ is regular over $S$ with modulus $2m{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/{\Greekmath 011A} $. \end{enumerate} \end{corollary} \begin{proof} It suffices to substitute $J=I$, $C_{i}=F_{i}$ and $j_{k}=i$, $ k=0,1,2,\ldots $ in Theorem \ref{t-Rk2}. \end{proof} \begin{corollary} \label{c-Rk2b} Let $U:=U_{m}U_{m-1}\ldots U_{1}$, where $U_{i}\colon \mathcal{H}\rightarrow \mathcal{H}$ is ${\Greekmath 011A} _{i}$-strongly quasi-nonexpansive, $i\in I:=\{1,\ldots ,m\}$. Assume that ${\Greekmath 011A} :=\min_{i\in I}{\Greekmath 011A} _{i}>0$ and $F_{0}:=\bigcap_{i\in I}\limfunc{Fix} U_{i}\neq \emptyset $. Moreover, let $S:=B(z,R)$ for some $z\in F_{0}$ and $ R>0$. \begin{enumerate} \item[$\mathrm{(i)}$] Suppose that for any $i\in I$, $U_{i}$ is weakly regular over $S$. Then $U$ is also weakly regular over $S$. 
\item[$\mathrm{(ii)}$] Suppose that for any $i\in I$, $U_{i}$ is regular over $S$ and the family $\{\limfunc{Fix}U_{i}\mid i\in I\}$ is regular over $ S$. Then $U$ is also regular over $S$. \item[$\mathrm{(iii)}$] Suppose that for any $i\in I$, $U_{i}$ is linearly regular over $S$ with modulus ${\Greekmath 010E} _{i}\geq 1$, ${\Greekmath 010E} :=\min_{i\in I}{\Greekmath 010E} _{i}>0$, and the family $\{\limfunc{Fix}U_{i}\mid i\in I\}$ is linearly regular over $S$ with modulus ${\Greekmath 0114} >0$. Then $U$ is linearly regular over $S$ with modulus $2m{\Greekmath 0114} ^{2}{\Greekmath 010E} ^{2}/{\Greekmath 011A} $. \end{enumerate} \end{corollary} \begin{proof} It suffices to substitute $U_{i}^{k}=U_{i}$ for all $k=0,1,2,\ldots $ and $ i\in I$ in Corollary \ref{c-Rk2a}. \end{proof} \begin{example} \label{ex-BRk} \rm\ Let $S_{i}:\mathcal{H}\rightarrow \mathcal{H}$ be QNE, $C_{i}:=\limfunc{Fix} S_{i}$, $i\in I:=\{1,2,...,m\}$ and $C:=\bigcap_{i\in I}C_{i}\neq \emptyset $ . Set $J:=\{1,2,...,p\}$ and $S_{i,{\Greekmath 0115} }:=\limfunc{Id}+{\Greekmath 0115} (S_i- \limfunc{Id})$, where ${\Greekmath 0115} \in \lbrack 0,1]$. \begin{enumerate} \item[(a)] (\textit{Block iterative sequence}) Let $J_{k}\subseteq J$ be an ordered subset, $k=0,1,2,\ldots $. Let \begin{equation} T_{j}^{k}:=\limfunc{Id}+{\Greekmath 0115} _{j}^{k}(\sum_{i\in I_{j}^{k}}{\Greekmath 0121} _{ij}^{k}S_{i}-\limfunc{Id})=\sum_{i\in I_{j}^{k}}{\Greekmath 0121} _{ij}^{k}S_{i,{\Greekmath 0115} _{j}^{k}}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{e-Tjk1} \end{equation} where $I_{j}^{k}\subseteq I$, ${\Greekmath 0121} _{ij}^{k}\geq {\Greekmath 010E} >0$ for all $i\in I_{j}^{k}$, $\sum_{i\in I_{j}^{k}}{\Greekmath 0121} _{ij}^{k}=1$, $j\in J_{k}$, $ k=0,1,2,\ldots $, and \begin{equation} T_{k}:=\prod_{j\in J_{k}}T_{j}^{k}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-Tk1} \end{equation} The block iterative methods for solving the consistent convex feasibility problem \cite{Ceg12} can be represented in the form $x^{k+1}=T_{k}x^{k}$ with a sequence of operators $T_{k}$ given by (\ref{e-Tk1}), where $ T_{j}^{k} $ are defined by (\ref{e-Tjk1}). Suppose that $\underline{{\Greekmath 0115} } :=\inf_{k\geq 0}\min_{j\in J_{k}}{\Greekmath 0115} _{j}^{k}>0$ and $\bar{{\Greekmath 0115}} :=\sup_{k\geq 0}\max_{j\in J_{k}}{\Greekmath 0115} _{j}^{k}<1$. Put ${\Greekmath 011A} _{j}^{k}:=(1-{\Greekmath 0115} _{j}^{k})/{\Greekmath 0115} _{j}^{k}$, $j\in J_{k}$, $ k=0,1,2,\ldots $, and ${\Greekmath 011A} :=\inf_{k\geq 0}\min_{j\in J_{k}}{\Greekmath 011A} _{j}^{k}$. Then $S_{i,{\Greekmath 0115} _{j}^{k}}$ is ${\Greekmath 011A} _{j}^{k}$-SQNE, $i\in I_{j}^{k}$ (see Corollary \ref{c-1}(ii)). Clearly, ${\Greekmath 011A} _{j}^{k}\geq {\Greekmath 011A} \geq (1-\bar{ {\Greekmath 0115}})/\bar{{\Greekmath 0115}}>0$ for all $j\in J_{k}$ and $k=0,1,2,\ldots $. Suppose that $I^{k}:=\bigcup_{j\in J_{k}}I_{j}^{k}=I$ for all $ k=0,1,2,\ldots $. Let $i\in I$ be arbitrary but fixed, and $j_{k}\in J_{k}$ and $i_{k}\in I_{j_{k}}^{k}$ be such that $i_{i_{k},j_{k}}=i$. By Facts \ref {f-2}(iv) and \ref{f-5}(i), $T_{j}^{k}$ is ${\Greekmath 011A} _{j}^{k}$-SQNE, where ${\Greekmath 011A} _{j}^{k}\geq {\Greekmath 011A} >0$ for all $j\in J_{k}$ and $k=0,1,2,\ldots $. Moreover, $ F_{0}:=\bigcap_{k\geq 0}\bigcap_{j\in J_{k}}\limfunc{Fix}T_{j}^{k}=C$. Suppose that each $S_{i}$, $i\in I$, is weakly (boundedly, boundedly linearly) regular. 
Then, by Proposition \ref{l-rel-BR}, the sequence $ \{S_{i,{\Greekmath 0115} _{j_{k}}^{k}}\}_{k=0}^{\infty }$ is weakly (boundedly, boundedly linearly) $C_{i}$-regular. Let us now separately consider the above three different types of regularity. \begin{enumerate} \item[(i)] Suppose first that each $S_{i}$, $i\in I$, is weakly regular. Then, by Theorem \ref{t-Rk1} (i), the sequence $\{T_{j_{k}}^{k}\}_{k=0}^{ \infty }$ is weakly $C_{i}$-regular and, consequently, by Theorem \ref{t-Rk2} (i), the sequence $\{T_{k}\}_{k=0}^{\infty }$ is weakly $C_{i}$-regular. Moreover, since $i\in I$ is arbitrary, the sequence $\{T_{k}\}_{k=0}^{\infty }$ is also weakly $C$-regular. \item[(ii)] Suppose now that each $S_{i}$, $i\in I$, is boundedly regular. Then, by Theorem \ref{t-Rk1} (ii), the sequence $\{T_{j_{k}}^{k}\}_{k=0}^{ \infty }$ is boundedly $C_{i}$-regular and, consequently, by Theorem \ref {t-Rk2} (ii), the sequence $\{T_{k}\}_{k=0}^{\infty }$ is boundedly $C_{i}$ -regular. Moreover, if we assume that the family $\{C_{i}\mid i\in I\}$ is boundedly regular, then the sequence $\{T_{k}\}_{k=0}^{\infty }$ is boundedly $C$-regular. \item[(iii)] Finally, suppose that each $S_{i}$, $i\in I$, is boundedly linearly regular and that each subfamily of $\{C_{i}\mid i\in I\}$ is boundedly linearly regular. Then, by Theorem \ref{t-Rk1} (iii), the sequence $\{T_{j_{k}}^{k}\}_{k=0}^{\infty }$ is boundedly linearly $C_{i}$-regular and, consequently, by Theorem \ref{t-Rk2} (iii), the sequence $ \{T_{k}\}_{k=0}^{\infty }$ is also boundedly linearly $C_{i}$-regular. Moreover, the sequence $\{T_{k}\}_{k=0}^{\infty }$ is boundedly linearly $C$ -regular too. \end{enumerate} \item[(b)] (\textit{String averaging sequence}) Let $ I_{j}^{k}:=(i_{1j}^{k},i_{2j}^{k},...,i_{sj}^{k})\subseteq I$ be an ordered subset, where $s\geq 1$, $j\in J$ and $k=0,1,2,\ldots $. Let \begin{equation} T_{j}^{k}:=\prod_{i\in I_{j}^{k}}S_{i,{\Greekmath 0115} _{ij}^{k}}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{e-Tjk2} \end{equation} where $I_{j}^{k}\subseteq I$, ${\Greekmath 0115} _{ij}^{k}\in (0,1)$, $i\in I_{j}^{k}$ , $j\in J$, $k=0,1,2,\ldots $, and \begin{equation} T_{k}:=\sum_{j\in J_{k}}{\Greekmath 0117} _{j}^{k}T_{j}^{k}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{e-Tk2} \end{equation} where $J_{k}\subseteq J$, ${\Greekmath 0117} _{j}^{k}\geq {\Greekmath 010E} >0$, $j\in J_{k}$, $ \sum_{j\in J_{k}}{\Greekmath 0117} _{j}^{k}=1$ and $k=0,1,2,\ldots $. The string averaging methods for solving the convex feasibility problem \cite{RZ16} can be represented in the form $x^{k+1}=T_{k}x^{k}$ with a sequence of operators $ T_{k}$ given by (\ref{e-Tk2}), where $T_{j}^{k}$ are defined by (\ref{e-Tjk2} ). Denote ${\Greekmath 011A} _{ij}^{k}:=(1-{\Greekmath 0115} _{ij}^{k})/{\Greekmath 0115} _{ij}^{k}$, $i\in I_{j}^{k}$, $j\in J$, $k=0,1,2,\ldots $, and ${\Greekmath 011A} :=\inf_{k\geq 0}\min_{i\in I_{j}^{k},j\in J}{\Greekmath 011A} _{ij}^{k}$, and suppose that $\underline{ {\Greekmath 0115} }:=\inf_{k\geq 0}\min_{i\in I_{j}^{k},j\in J}{\Greekmath 0115} _{ij}^{k}>0$ and $\bar{{\Greekmath 0115}}:=\sup_{k\geq 0}\max_{i\in I_{j}^{k},j\in J}{\Greekmath 0115} _{ij}^{k}<1$. Similarly to the situation in (a), $S_{i,{\Greekmath 0115} _{ij}^{k}}$ is ${\Greekmath 011A} _{ij}^{k}$-SQNE and ${\Greekmath 011A} _{ij}^{k}\geq {\Greekmath 011A} \geq (1-\bar{{\Greekmath 0115}})/ \bar{{\Greekmath 0115}}>0$ for all $i\in I_{j}^{k}$, $j\in J$ and $k\geq 0$. 
By Fact \ref{f-6}(i), $T_{j}^{k}$ is ${\Greekmath 011A} /m$-SQNE and $\limfunc{Fix} T_{j}^{k}=\bigcap_{i\in I_{j}^{k}}C_{i}$. Suppose that $I^{k}:=\bigcup_{j\in J_{k}}I_{j}^{k}=I$ for all $k=0,1,2,\ldots $. Then $F_{0}:=\bigcap_{k\geq 0}\bigcap_{j\in J_{k}}\limfunc{Fix}T_{j}^{k}=C$. Let $i\in I$ be arbitrary but fixed, and $i_{k}\in \{1,2,...,s\}$ and $j_{k}\in J$ be such that $ i_{i_{k},j_{k}}^{k}=i$. Similarly to the situation in (a), by interchanging Theorem \ref{t-Rk1} with Theorem \ref{t-Rk2}, one can obtain corresponding statements to (i), (ii) and (iii), respectively. \end{enumerate} \end{example} \begin{remark} \rm\ We now comment on the existing literature, where one can find preservation of regularity properties under convex combinations and compositions of operators. The preservation of weak regularity for a single operator presented in Corollaries \ref{c-Rk1b} (i) and \ref{c-Rk2b} (i) can be found in \cite[ Theorem 4.1 and 4.2]{Ceg15a}, respectively. The results concerning bounded regularity from Corollaries \ref{c-Rk1b} (ii) and \ref{c-Rk2b} (ii) were shown in \cite[Theorem 4.10 and Theorem 4.11]{CZ14}. Statement (iii) from the above-mentioned corollaries regarding linear regularity is new, as far as we know, even in this simple setting. The preservation of weak regularity for a sequence of operators (Corollaries \ref{c-Rk1a} (i) and \ref{c-Rk2a} (i)) was established in \cite[Example 4.5] {Ceg15}. These results also follow from \cite[Lemma 3.4]{RZ16}. Preservation of bounded regularity for a sequence of operators (Corollaries \ref{c-Rk1a} (ii) and \ref{c-Rk2a} (ii)) was proved in \cite[Lemma 4.10]{Zal14} and \cite[ Lemma 3.5]{RZ16}. The preservation of linear regularity for a sequence of operators has not been studied so far. We would like to emphasize that Theorems \ref{t-Rk1} and \ref{t-Rk2} are more general than all of the above results. \end{remark} \section{Applications} \label{s-app} In this section we show how to apply weakly (boundedly, boundedly linearly) regular sequences of operators to methods for solving convex feasibility and variational inequality problems. \subsection{Applications to convex feasibility problems} \begin{theorem} \label{th:main} Let $C\subseteq \mathcal{H}$ be nonempty, closed and convex, and for each $k=0,1,2,\ldots $, let $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be ${\Greekmath 011A} _{k}$-strongly quasi-nonexpansive with ${\Greekmath 011A} :=\inf_{k}{\Greekmath 011A} _{k}>0$ and $C\subseteq F:=\bigcap_{k=0}^{\infty }\limfunc{Fix }U_{k}$. Moreover, let $x^{0}\in \mathcal{H}$ and for each $k=0,1,2,\ldots $ , let $x^{k+1}:=U_{k}x^{k}$. \begin{enumerate} \item[(i)] If $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular, then $x^{k}$ converges weakly to some $x^{\ast }\in C$. \item[(ii)] If $\{U_{k}\}_{k=0}^{\infty }$ is boundedly $C$-regular, then the convergence to $x^{\ast }$ is in norm. \item[(iii)] If $\{U_{k}\}_{k=0}^{\infty }$ is boundedly linearly $C$ -regular, then the convergence is $R$-linear, that is, $\Vert x^{k}-x^{\ast }\Vert \leq 2d(x^{0},C)q^{k}$, $k=0,1,2,\ldots,$ for some $q\in (0,1)$. \end{enumerate} \end{theorem} \begin{proof} By the definition of an SQNE operator, for any $z\in C$ we have \begin{equation} \Vert x^{k+1}-z\Vert ^{2}=\Vert U_{k}x^{k}-z\Vert ^{2}\leq \Vert x^{k}-z\Vert ^{2}-{\Greekmath 011A} _{k}\Vert U_{k}x^{k}-x^{k}\Vert ^{2} \label{e-sSQNE} \end{equation} and Lemma \ref{f-6a}(ii) yields that \begin{equation} \lim_{k\rightarrow \infty }\Vert U_{k}x^{k}-x^{k}\Vert =0. 
\label{e-AR} \end{equation} (i) Let $\{U_{k}\}_{k=0}^{\infty }$ be weakly $C$-regular, $x^{\ast }$ be an arbitrary weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$ and let $ x^{n_{k}}\rightharpoonup x^{\ast }$. Then $x^{\ast }\in C$. Since $x^{\ast }$ is an arbitrary weak cluster point of $\{x^{k}\}_{k=0}^{\infty }$, Fact \ref {f-7}(i) yields the weak convergence of the whole sequence $ \{x^{k}\}_{k=0}^{\infty }$ to $x^{\ast }$. (ii) Assume that $\{U_{k}\}_{k=0}^{\infty }$ is boundedly $C$-regular. This, when combined with (\ref{e-AR}), gives $d(x^{k},C)\rightarrow 0$, which by Fact \ref{f-7}(ii) implies that the convergence to $x^{\ast }$ is in norm. (iii) The bounded linear $C$-regularity of $\{U_{k}\}_{k=0}^{\infty }$ and the boundedness of $\{x^{k}\}_{k=0}^{\infty }$ imply that there is ${\Greekmath 010E} >0 $ such that $\Vert U_{k}x^{k}-x^{k}\Vert \geq {\Greekmath 010E} ^{-1}d(x^{k},C)$ holds for each $k=0,1,2,\ldots $. Consequently, by substituting $z=P_{C}x^{k} $ into (\ref{e-sSQNE}) and by the inequality $d(x^{k+1},C)\leq \Vert x^{k+1}-P_{C}x^{k}\Vert $, we arrive at \begin{equation} {\Greekmath 011A} {\Greekmath 010E} ^{-2}d^{2}(x^{k},C)\leq d^{2}(x^{k},C)-d^{2}(x^{k+1},C)\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \end{equation} $k=0,1,2,\ldots $. This, when combined with Fact \ref{f-7}(iii), leads to \begin{equation} \Vert x^{k}-x^{\ast }\Vert \leq 2d(x^{0},C)\left( \sqrt{1-{\Greekmath 011A} /{\Greekmath 010E} ^{2}} \right) ^{k}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \end{equation} $k=0,1,2,\ldots $, which completes the proof. \end{proof} The assumption that $\{U_{k}\}_{k=0}^{\infty }$ is weakly regular (boundedly regular, boundedly linearly regular) is quite strong. Indeed, by Proposition \ref{l-sub-BR}, we have $\bigcap_{l=0}^{\infty}\limfunc{Fix}U_{m_{l}} = \bigcap_{k=0}^{\infty }\limfunc{Fix}U_{k}$ for any sequence $ \{m_{k}\}_{k=0}^{\infty }\subseteq \{k\}_{k=0}^\infty$. Consequently, we cannot directly apply Theorem \ref{th:main} in the case of (almost) cyclic or intermittent control. This is due to the fact that for some subsequence $ \{m_k\}_{k=0}^\infty$, one could have $\bigcap_k\limfunc{Fix}U_{m_k}\neq \bigcap_k \limfunc{Fix}U_k$. Nevertheless, Theorem \ref{th:main} can still be indirectly applied to the above-mentioned controls as we now show. \begin{theorem} \label{th:main2} Let $C_{i}\subseteq \mathcal{H}$ be closed and convex, $ i\in I:=\{1,\ldots ,m\}$, such that $C:=\bigcap_{i\in I}C_{i}\neq \emptyset $ . For each $k=0,1,2,\ldots,$ let $U_{k}\colon \mathcal{H}\rightarrow \mathcal{H}$ be ${\Greekmath 011A}_{k}$-strongly quasi-nonexpansive with $C\subseteq F:=\bigcap_{k=0}^{\infty }\limfunc{Fix}U_{k}$, and let ${\Greekmath 011A} :=\inf_{k}{\Greekmath 011A} _{k}>0$. Moreover, let $x^{k}$ be generated by $x^{k+1}:=U_{k}x^{k}$, $ k=0,1,2,\ldots$, where $x^{0}\in \mathcal{H}$ is arbitrary. In addition, let $\{n_{k}^{i}\}_{k=0}^{\infty }$, $i\in I$, be increasing sequences of nonnegative integers with bounded growth, that is, $0<n_{k+1}^{i}-n_{k}^{i} \leq s$, $k=0,1,2,\ldots$, for some $s>0$. If for each $i\in I$, the subsequence $\{U_{n_{k}^{i}}\}_{k=0}^{\infty }$ is: \begin{enumerate} \item[$\mathrm{(i)}$] weakly $C_{i}$-regular, then $\{x^{k}\}_{k=0}^{\infty } $ converges weakly to some $x^{\ast }\in C$. \item[$\mathrm{(ii)}$] boundedly $C_{i}$-regular and the family $\{C_{i}\mid i\in I\}$ is boundedly regular, then the convergence to $x^{\ast }$ is in norm. 
\item[$\mathrm{(iii)}$] boundedly linearly $C_{i}$-regular and the family $ \{C_{i}\mid i\in I\}$ is boundedly linearly regular, then the convergence is $R$-linear, that is, $\Vert x^{k}-x^{\ast }\Vert \leq 2d(x^{0},C)q^{k}$, $ k=0,1,2,\ldots$, for some $q\in (0,1)$. \end{enumerate} \end{theorem} \begin{proof} Let \begin{equation} T_{k}:=U_{k+s-1}U_{k+s-2}...U_{k}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{,} \label{proof:th:main2:Tk} \end{equation} $k\geq 0$, and define a sequence $\{y^{k}\}_{k=0}^{\infty }$ by \begin{equation} y^{0}:=x^{0};\quad y^{k+1}:=T_{k}y^{k}. \label{proof:th:main2:yk} \end{equation} Obviously, we have $y^{k}=x^{ks}$, $k=0,1,2,\ldots$.. (i) By Theorem \ref{t-Rk2}(i), the sequence $\{T_{k}\}_{k=0}^{\infty }$ is weakly $C$-regular. Now Theorem \ref{th:main}(i) yields the weak convergence of $y^{k}$ to a point $x^{\ast }\in C$. Moreover, by the assumption, $U_{k}$ is ${\Greekmath 011A} _{k}$-SQNE, and ${\Greekmath 011A} >0$ yields that $\lim_{k}\Vert x^{k+1}-x^{k}\Vert =0$ (see Lemma \ref{f-6a}(ii)). In view of Lemma \ref {th:Fejer2}(i), these facts yield the weak convergence of $x^{k}$ to $ x^{\ast }\in C$. (ii) By Theorem \ref{t-Rk2}(ii), the sequence $\{T_{k}\}_{k=0}^{\infty }$ is $C$-regular. Now Theorem \ref{th:main}(ii) yields the convergence in norm of $y^{k}$ to $x^{\ast }\in C$. In view of Lemma \ref{th:Fejer2}(ii), this yields the convergence in norm of $x^{k}$ to $x^{\ast }\in C$. (iii) By Theorem \ref{t-Rk2}(iii), the sequence $\{T_{k}\}_{k=0}^{\infty }$ is linearly $C$-regular. Now Theorem \ref{th:main}(iii) yields the $R$ -linear convergence of $y^{k}$ to $x^{\ast }\in C$. In view of Lemma \ref {th:Fejer2}(iii), this yields the $R$-linear convergence of $x^{k}$ to $ x^{\ast }\in C$. \end{proof} \subsection{Applications to variational inequality problems} Let $G\colon\mathcal{H}\rightarrow\mathcal{H}$ be monotone and let $ C\subseteq\mathcal{H}$ be nonempty, closed and convex. We recall that the \textit{variational inequality problem} governed by $G$ and $C$, which we denote by VI($G$,$C$), is to \begin{equation} \RIfM@\expandafter\text@\else\expandafter\mbox\fi{find }\bar{u}\in C\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ with }\langle G\bar{u},u-\bar{u}\rangle \geq 0 \RIfM@\expandafter\text@\else\expandafter\mbox\fi{ for all }u\in C. \label{e-VIP1} \end{equation} It is well known that VI($G$,$C$) has a unique solution if, for example, $G$ is ${\Greekmath 0114} $-Lipschitz continuous and ${\Greekmath 0111}$-strongly monotone, where $ {\Greekmath 0114}, {\Greekmath 0111} >0$ \cite[Theorem 46.C]{Zei85}. In this section we show how one can apply the results of Section \ref{s-4} to an iterative scheme for solving VI($G$, $C$). We begin with recalling some known results. \begin{theorem} \label{t-uk-WR} Let $G\colon\mathcal{H}\rightarrow\mathcal{H}$ be ${\Greekmath 0114}$ -Lipschitz continuous and ${\Greekmath 0111}$-strongly monotone, where ${\Greekmath 0114},{\Greekmath 0111}>0$, and let $C\subseteq\mathcal{H}$ be nonempty, closed and convex. Moreover, for each $k=0,1,2,\ldots,$ let $U_k\colon\mathcal{H}\rightarrow\mathcal{H}$ be ${\Greekmath 011A}_k$-strongly quasi-nonexpansive such that $C\subseteq \limfunc{Fix} U_k$ and ${\Greekmath 0115}_k\in[0,\infty)$. Consider the following method: \begin{equation} u^0\in\mathcal{H}; \quad u^{k+1}=U_{k}u^{k}-{\Greekmath 0115} _{k}GU_{k}u^{k}. \label{e-uk} \end{equation} Assume that ${\Greekmath 011A}:=\inf_k{\Greekmath 011A}_k>0$, $\lim_k{\Greekmath 0115}_k =0$ and $ \sum_k{\Greekmath 0115}_k =\infty$. 
If $\{U_{k}\}_{k=0}^{\infty }$ is weakly $C$ -regular (in particular, boundedly $C$-regular), then $u^{k}$ converges strongly to the unique solution of VI($G$, $C$). \end{theorem} \begin{proof} See \cite[Theorem 4.8]{Ceg15}. For related results see also \cite[Theorem 4.3 ]{AK14} and \cite[Th. 2.4]{Hir06}. The part regarding a boundedly regular sequence of operators follows from Corollary \ref{c-reg}(ii) in view of which a boundedly $C$-regular sequence is also weakly $C$-regular. \end{proof} As we mentioned in the previous subsection, the assumption that $ \{U_{k}\}_{k=0}^{\infty }$ is weakly regular is quite strong. In particular, this assumption excludes (almost) cyclic and intermittent controls. In the next result we show that one can still establish norm convergence for method \eqref{e-uk} in the case of the above-mentioned controls, but at the cost of imposing bounded regularity of both families of operators and sets. \begin{theorem} \label{t-uku} Let $G\colon\mathcal{H}\rightarrow\mathcal{H}$ be ${\Greekmath 0114}$ -Lipschitz continuous and ${\Greekmath 0111}$-strongly monotone, where ${\Greekmath 0114},{\Greekmath 0111}>0$, and let $C:=\bigcap_{i\in I}C_i\subseteq\mathcal{H}$ be nonempty, where for each $i\in I:=\{1\ldots,m\}$, $C_i$ is closed and convex. Moreover, for each $k=0,1,2,\ldots,$ let $U_k\colon\mathcal{H}\rightarrow\mathcal{H}$ be $ {\Greekmath 011A}_k $-strongly quasi-nonexpansive such that $C\subseteq \limfunc{Fix} U_k$ , ${\Greekmath 0115}_k\in[0,\infty)$ and consider the following method: \begin{equation} u^0\in\mathcal{H}; \quad u^{k+1}=U_{k}u^{k}-{\Greekmath 0115} _{k}GU_{k}u^{k}. \label{e-uk2} \end{equation} Assume that ${\Greekmath 011A}:=\inf_k{\Greekmath 011A}_k>0$, $\lim_k{\Greekmath 0115}_k =0$ and $ \sum_k{\Greekmath 0115}_k =\infty$. Moreover, assume that there is $s\geq 0$ such that for any $i\in I$ and $k=0,1,2,\ldots$, there is $l_{k}\in \{k,k+1,...,k+s-1\} $ such that the subsequence $\{U_{l_{k}}\}_{k=0}^{\infty }$ is $C_{i}$-regular and that the family $\{C_{i}\mid i\in I\}$ is boundedly regular. Then $u^{k}$ converges strongly to the unique solution of VI($G$, $C$). \end{theorem} \begin{proof} By \cite[Theorem 2.17]{GRZ17}, it suffices to show that the implication \begin{equation} \lim_{k}\sum_{l=n_{k}}^{n_{k}+s-1}\Vert U_{l}u^{l}-u^{l}\Vert=0 \Longrightarrow \lim_{k}d(u^{n_{k}},C)=0 \end{equation} holds true for any arbitrary subsequence $\{n_{k}\}_{k=0}^{\infty }\subseteq \{k\}_{k=0}^{\infty }$. To this end, choose $\{n_{k}\}_{k=0}^{\infty }\subseteq \{k\}_{k=0}^{\infty }$ and assume that \begin{equation} \label{t-uku:proof:1} \lim_{k}\sum_{l=n_{k}}^{n_{k}+s-1}\Vert U_{l}u^{l}-u^{l}\Vert=0. \end{equation} Let $i\in I$ be arbitrary. By assumption, for each $k=0,1,2,\ldots,$ there is $l_{k}\in \{n_{k},n_{k}+1,...,n_{k}+s-1\}$ such that $\{U_{l_{k}} \}_{k=0}^{\infty }$ is $C_{i}$-regular. 
So, by the boundedness of $ \{u^k\}_{k=0}^\infty$ (see \cite[Lemma 9]{CZ13}) and \eqref{t-uku:proof:1}, we have \begin{equation} \lim_{k}d(u^{l_{k}},C_{i})=0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-lim-dujk} \end{equation} Observe that the boundedness of $u^{k}$ and $\lim_{k}{\Greekmath 0115} _{k}=0$ lead to \begin{equation} \lim_{k}\sum_{l=k}^{k+s-1}{\Greekmath 0115} _{l}\Vert GU_{l}u^{l}\Vert =0\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.} \label{e-sum-GUk} \end{equation} Moreover, the triangle inequality, \eqref{e-uk2}, \eqref{t-uku:proof:1} and \eqref{e-sum-GUk} imply that \begin{align} \Vert u^{n_{k}}-u^{l_{k}}\Vert & \leq \sum_{l=n_{k}}^{l_{k}}\Vert u^{l+1}-u^{l}\Vert \leq \sum_{l=n_{k}}^{n_{k}+s-1}\Vert u^{l+1}-u^{l}\Vert \notag \\ & \leq \sum_{l=n_{k}}^{n_{k}+s-1}\Vert U_{l}u^{l}-u^{l}\Vert +\sum_{l=n_{k}}^{n_{k}+s-1}{\Greekmath 0115} _{l}\Vert GU_{l}u^{l}\Vert \rightarrow_k 0. \end{align} This, the definition of the metric projection, the triangle inequality and ( \ref{e-lim-dujk}) yield \begin{align} d(u^{n_{k}},C_{i}) & =\Vert u^{n_{k}}-P_{C_{i}}u^{n_{k}}\Vert \leq \Vert u^{n_{k}}-P_{C_{i}}u^{l_{k}}\Vert \notag \\ & \leq \Vert u^{n_{k}}-u^{l_{k}}\Vert +\Vert u^{l_{k}}-P_{C_{i}}u^{l_{k}}\Vert \rightarrow_k 0. \end{align} Since $i\in I$ is arbitrary and the family $\{C_{i}\mid i\in I\}$ is boundedly regular, $\lim_{k}d(u^{n_k},C)=0$, which completes the proof. \end{proof} \begin{remark} \rm\ \cite[Theorem 2.17]{GRZ17}, which we have used in order to prove Theorem \ref {t-uku}, appeared for the first time in \cite[Theorem 3.16]{Zal14}. Since this result was presented in Polish, we refer here to a paper which has been published in English. Related results can be found, for example, in \cite[ Theorem 12]{CZ13} or \cite[Theorem 4.13]{Ceg15}. \end{remark} \noindent \textbf{Funding.} The research of the second author was supported in part by the Israel Science Foundation (Grants no. 389/12 and 820/17), the Fund for the Promotion of Research at the Technion and by the Technion General Research Fund. \textbf{Acknowledgments.} We are grateful to an anonymous referee for his/her comments and remarks which helped us to improve our manuscript. \end{document}
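As a self-contained numerical illustration of the convex feasibility results above (added here, not part of the paper itself), the following Python sketch runs the iteration $x^{k+1}=U_k x^k$ with cyclic control, where $U_k$ alternates between the metric projections onto two convex sets in $\mathbb{R}^2$; the sets, the starting point and the tolerances are arbitrary choices made only for this example. Metric projections are firmly nonexpansive (hence 1-SQNE) and satisfy $\Vert P_C x - x\Vert = d(x,C)$, so each of them is linearly regular with modulus $1$.

```python
import numpy as np

# Illustrative sets (assumptions for this sketch only):
# C1 = closed unit ball, C2 = half-space {x : x[0] >= 0.5}; C1 ∩ C2 is nonempty.
def P1(x):                            # metric projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def P2(x):                            # metric projection onto the half-space
    return np.array([max(x[0], 0.5), x[1]])

x = np.array([-3.0, 4.0])             # arbitrary starting point
for k in range(50):
    U_k = P1 if k % 2 == 0 else P2    # cyclic (intermittent) control with s = 2
    x = U_k(x)

# The iterate ends up (numerically) in C1 ∩ C2.
print(x, np.linalg.norm(x) <= 1 + 1e-9 and x[0] >= 0.5 - 1e-9)
```

In this finite-dimensional example the iterates converge in norm to a point of $C_1\cap C_2$, in line with the norm-convergence statements for sequences built from regular operators with intermittent control discussed above.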
arXiv
If the numbers on the third diagonal of Pascal's Triangle are triangular numbers, what is the value of the $50$th triangular number? (The $n$th triangular number is $1+2+3+\ldots+n$.)

If the triangular numbers are found on the third diagonal of Pascal's Triangle, the triangular numbers are \[\binom{2}{0}, \binom{3}{1}, \binom{4}{2}, \cdots,\] where the $n$th triangular number is $\binom{n+1}{n-1}$. We're looking for the $50$th triangular number, which is $$\binom{51}{49}=\frac{51!}{49!\,2!}=\frac{51 \cdot 50}{2\cdot 1}=51\cdot 25=\boxed{1275}.$$
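A quick computational check of the answer (not part of the original solution; any one of the three expressions below suffices):

```python
import math

# 50th triangular number: direct sum, closed form n(n+1)/2, and the Pascal's-triangle binomial.
print(sum(range(1, 51)), 50 * 51 // 2, math.comb(51, 49))  # -> 1275 1275 1275
```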
Math Dataset
Frobenius's theorem (group theory)

In mathematical group theory, Frobenius's theorem states that if n divides the order of a finite group G, then the number of solutions of x^n = 1 is a multiple of n. It was introduced by Frobenius (1903).

Statement

A more general version of Frobenius's theorem states that if C is a conjugacy class with h elements of a finite group G with g elements and n is a positive integer, then the number of elements k such that k^n is in C is a multiple of the greatest common divisor (hn, g) (Hall 1959, theorem 9.1.1).

Applications

One application of Frobenius's theorem is to show that the coefficients of the Artin–Hasse exponential are p-integral, by interpreting them in terms of the number of elements of order a power of p in the symmetric group S_n.

Frobenius conjecture

Frobenius conjectured that if in addition the number of solutions to x^n = 1 is exactly n, where n divides the order of G, then these solutions form a normal subgroup. This has been proved as a consequence of the classification of finite simple groups. The symmetric group S_3 has exactly 4 solutions to x^4 = 1, but these do not form a normal subgroup; this is not a counterexample to the conjecture, as 4 does not divide the order of S_3.

References

• Frobenius, G. (1903), "Über einen Fundamentalsatz der Gruppentheorie", Berl. Ber.: 987–991, JFM 34.0153.01
• Hall, Marshall (1959), Theory of Groups, Macmillan, LCCN 59005035, MR 0103215
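The divisibility statement is easy to verify by brute force for a small group. The Python sketch below (an illustration added here, not part of the article) checks it for G = S_3, whose order is 6, and also reproduces the remark about x^4 = 1:

```python
from itertools import permutations

G = list(permutations(range(3)))          # the symmetric group S3 as permutation tuples
identity = tuple(range(3))

def compose(p, q):                        # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def power(p, n):
    r = identity
    for _ in range(n):
        r = compose(r, p)
    return r

for n in (1, 2, 3, 6):                    # divisors of |G| = 6
    count = sum(power(x, n) == identity for x in G)
    print(f"n={n}: {count} solutions of x^n = 1, multiple of n: {count % n == 0}")

# n = 4 does not divide 6, and indeed there are 4 solutions of x^4 = 1
# (the identity and the three transpositions), so S3 is not a counterexample
# to the Frobenius conjecture.
print(sum(power(x, 4) == identity for x in G))
```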
Wikipedia
\begin{definition}[Definition:Monster Group] A group $G$ is a '''Monster group''' and the largest sporadic simple group {{iff}} it has the order: :$808017424794512875886459904961710757005754368000000000 = 2^{46}.3^{20}.5^9.7^6.11^2.13^3.17.19.23.29.31.41.47.59.71$ {{questionable|The {{iff}} seems wrong. For example, the cyclic group of order $808017424794512875886459904961710757005754368000000000$ is not the Monster group.}} \end{definition}
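As a quick arithmetic check (added here, not part of the ProofWiki entry), the prime factorisation above multiplies out to the stated order:

```python
# Prime factorisation of the order of the Monster group.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3, 17: 1, 19: 1, 23: 1,
                 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}
order = 1
for p, e in factorization.items():
    order *= p ** e
assert order == 808017424794512875886459904961710757005754368000000000
print(order)
```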
ProofWiki
Lev Tumarkin

Lev Abramovich Tumarkin (14 January 1904 – 1 August 1974) was a Russian mathematician. He was dean of the Faculty of Mechanics and Mathematics of Moscow State University.[1] He was a student of Pavel Aleksandrov.[2] He attended the First International Topological Conference in Moscow in 1935 as a host but made no presentation.[1]

References

1. Apushkinskaya, D. E.; Nazarov, A. I.; Sinkevich, G. I. (2019). "In Search of Shadows: the First Topological Conference, Moscow 1935". arXiv:1903.02065 [math.HO].
2. "Lev Tumarkin - The Mathematics Genealogy Project". www.genealogy.math.ndsu.nodak.edu. Mathematics Genealogy Project, North Dakota State University.
Wikipedia
\begin{document} \maketitle \begin{abstract} We consider several models (including both multidimensional ordinary differential equations (ODEs) and partial differential equations (PDEs), possibly ill-posed), subject to very strong damping and quasi-periodic external forcing. We study the existence of response solutions (i.e., quasi-periodic solutions with the same frequency as the forcing). Under some regularity assumptions on the nonlinearity and forcing, without any arithmetic condition on the forcing frequency $\omega$, we show that the response solutions indeed exist. Moreover, the solutions we obtained possess optimal regularity in $\varepsilon$ (where $\varepsilon$ is the inverse of the coefficients multiplying the damping) when we consider $\varepsilon$ in a domain that does not include the origin $\varepsilon=0$ but has the origin on its boundary. We get that the response solutions depend continuously on $\varepsilon$ when we consider $\varepsilon $ tends to $0$. However, in general, they may not be differentiable at $\varepsilon=0$. In this paper, we allow multidimensional systems and we do not require that the unperturbed equations under consideration are Hamiltonian. One advantage of the method in the present paper is that it gives results for analytic, finitely differentiable and low regularity forcing and nonlinearity, respectively. As a matter of fact, we do not even need that the forcing is continuous. Notably, we obtain results when the forcing is in $L^2$ space and the nonlinearity is just Lipschitz as well as in the case that the forcing is in $H^1$ space and the nonlinearity is $C^{1 + \text{Lip}}$. In the proof of our results, we reformulate the existence of response solutions as a fixed point problem in appropriate spaces of smooth functions. \end{abstract} \textbf{Keywords.} Strong dissipation; Response solutions; Singular perturbations. \textbf{2010 Mathematics Subject Classification.} 35R25, 37L10, 35Q56, 34D35, 37L25. \section{Introduction}\label{sec:intro} In recent times, there has been much interest in the study of response solutions for nonlinear mechanical models subject to strong dissipation and quasi-periodic external forcing. We recall that response solutions are solutions with the same frequency as the forcing. The mechanical systems are second order equations. Since the large coefficients of dissipation are factors of terms involving the first derivative, this is a singular perturbation. We are interested in finding response solutions for two kinds of equations. We first consider an ODE model of the form: \begin{equation}\label{000.0} \begin{split} x_{tt}+\frac{1}{\varepsilon} x_t+ g(x)= f(\omega t),\,\, x\in \mathbb{R}^n. \end{split} \end{equation} The equation \eqref{000.0} is referred as \emph{``varactor''} equations in the literature \cite{Rafael13,Gentile10, Gentile14,Guido17,Gentile17}. We also consider PDE models. One particular example is obtained from the Boussinesq equation (derived in the paper \cite{Bou72}) by adding a singular friction proportional to the velocity: \begin{equation}\label{b-e} \begin{split} u_{tt}+\frac{1}{\varepsilon}u_t-\beta u_{xxxx}-u_{xx}=(u^{2})_{xx}+f(\omega t,x), \,\,x\in\mathbb{T}=\mathbb{R}/2 \pi \mathbb{Z},\,\, \beta>0, \end{split} \end{equation} where $\beta>0$ is a parameter. Of course, the equation \eqref{b-e} will be supplemented with periodic boundary conditions. We note that the positive sign of $\beta$ makes equation \eqref{b-e} ill-posed. 
That is, there are many initial conditions that do not lead to solutions. It is, however, possible that there is a systematic way to construct many special solutions, for some ill-posed Boussinesq equations, which are physically observed (we refer to the papers \cite{Rafael16, Llave09, Honngyu1, Honngyu2}). In both equations \eqref{000.0} and \eqref{b-e}, $\varepsilon $ is a small parameter in $\mathbb{R}$ and $\omega \in \mathbb{R}^d$ with $d\in \mathbb{N}_{+}:=\mathbb{N}\setminus\{0\}$. The forcing $f$ is quasi-periodic with respect to time $t$. Note that in the PDE \eqref{b-e}, the forcing may depend on the space variable. At this moment, we think of the forcing as a quasi-periodic function taking values in a space of functions. In equation \eqref{000.0}, one considers the nonlinearity $g$ as a function from $\mathbb{R}^n$ to $\mathbb{R}^n$ with $n\in \mathbb{N}_{+}$ and the forcing $f$ as a function from $ \mathbb{T}^d$ to $ \mathbb{R}^n$. We will obtain several results depending on the regularity assumed for $f$ and $g$. First, we will consider that the functions $f$ and $g$ are real analytic such that they take real values for real arguments, which are what appears in physical applications, with $\varepsilon \in \mathbb{R}$. We will also consider highly differentiable functions $f$ and $g$, such as $f\in H^{m}(m>\frac{d}{2})$ and $g$ is $C^{m+l},\,l=1,2,\cdots$. In addition, we will obtain results for rather irregular functions $f$ and $g$. For example, the forcing $f$ is in the $L^2$ space, the nonlinearity $g$ is just Lipschitz or $f$ is in the $H^1$ space, $g$ is $C^{1 + \text{Lip}}$. In equation \eqref{b-e}, we consider the function $f:\mathbb{T}^d\times \mathbb{T}\rightarrow \mathbb{R}$. Analogously to the case of \eqref{000.0}, we will present results for $f$ being real analytic and finitely differentiable with high regularity. Note that in the study of the PDE model \eqref{b-e}, we will just focus in the physically relevant case of a specific nonlinearity $(u^{2})_{xx}$. It is possible to discuss general nonlinearities in a regularity class, but being unaware of a physical motivation, we leave these generalizations to the readers. We emphasize that, in \eqref{b-e}, the nonlinearity $(u^{2})_{xx}$ is unbounded from one space to itself, but the fixed point problem we consider overcomes this problem since there will be smoothing factors. From the point of physical view, the parameter $\varepsilon$ is real. However, it is natural to consider $\varepsilon$ in a complex domain when we consider our problem in an analytic setting. It is important to notice that the complex domain we use does not include the origin but accumulates on it. Indeed, the solutions fail to be differentiable at $\varepsilon = 0$ in the generality considered in the present paper (see Remark~\ref{noo-de}). However, we will show that the response solutions depend continuously on $\varepsilon$ as $\varepsilon$ tends to $0$. \subsection{Some remarks on the literature} The problem of the response solutions for dissipative systems has been studied by several methods. One method is based on developing asymptotic series and then show that they can be resummed using combinatorial arguments, which are established using the so-called \emph{``tree formalism''}. This can be found in the literature \cite{Gentile05, Gentile06,GG10,GGO10}. Recent papers developing this method are \cite{Guido17,Gentile17}. 
We point out that one important novelty of the papers \cite{Guido17, Gentile17} is that no arithmetic condition is required in the frequency of the forcing. A later method is to reduce the existence of response solutions to a fixed point problem, which is analyzed in a ball in an appropriate Banach space, centered in the solution predicted by the asymptotic expansion. In this direction, we refer to \cite{Rafael13,Rafael17} and references there. Note that the papers \cite{Rafael13,Rafael17} considered the perturbative expansion to low orders on $\varepsilon$ and obtains a reasonably approximate solutions in a neighborhood of $\varepsilon=0$. Nevertheless, to obtain the asymptotic expansions, one needs to solve equations involving small divisors and assume some non-degeneracy conditions. Note that the small divisors assumed in \cite{Rafael13,Rafael17} are weaker than the Diophantine conditions in KAM theory. In this paper, we will not assume any small divisors conditions since we do not attempt to get the approximate solution through an asymptotic expansion. Since the literature is growing, it is interesting to compare systematically results. There are several figures of merit for results on the existence of response solutions. \begin{enumerate} \item The arithmetic properties required in the external forcing frequency, such as Diophantine condition, Bryuno condition, or even weaker conditions, etc. \item The analyticity domain in $\varepsilon$ established. Since we do not expect that the asymptotic series converges, this domain does not include a ball centered at the origin. Note that the shape of this analyticity domain is very important to study properties of the asymptotic series. For example, Borel summability in \cite{Gentile05,Gentile06}. In the generality we consider in this paper, the solutions we construct fail to be even differentiable at the origin $\varepsilon=0$. (See Remark~\ref{noo-de}). \item Whether the method gives some asymptotic expansions for the solutions. \item Whether the method can deal with the forcing function $f$ which has low regularity (e.g. $f\in L^2$ or $f\in H^1$) and the nonlinearity function $g$ of low regularity (the case of piecewise differentiable functions appears in some applications). \item The generality of the models considered (e.g. whether the method requires that the system is Hamiltonian, Reversible, etc.) \item Smallness conditions imposed on functions $f$ and $g$. \end{enumerate} Notice that all these figures of merit cannot be accomplished at the same time. Obtaining more conclusions on the solutions (e.g. the existence of asymptotic expansions) will require more regularity and some arithmetic conditions on the frequency. \subsection{The method in the present paper} From the strictly logical point of view, our paper and \cite{Guido17,Gentile17} are completely different even if they are motivated by the same physical problem for the model \eqref{000.0}. More precisely, the present paper deals with not only analytic problems but also finitely differentiable problems and even just Lipschitz problems by the method of fixed point theorem. In contrast, the papers \cite{Guido17,Gentile17} apply resummation methods to establish the existence of response solutions under analytic condition. In the multidimensional case in equation \eqref{000.0}, compared with \cite{Guido17}, the methods presented in this paper do not need that the oscillators without dissipation are Hamiltonian or that the linearization of $g$ at the origin (i.e. 
$Dg(0)$, which is a $n\times n$ matrix) is positive definite. Further, we do not assume that the matrix $Dg(0)$ is diagonalizable or symmetric. We allow Jordan blocks that appear naturally in problems at resonance \cite{gazzo15,gazz15}. However, we note that our method for analytic case involves smallness assumptions in the forcing $f$ but not in the nonlinear part $\hat{g}$ of $g$. In the case of $L^2$ and $H^1$, we involve just smallness assumptions on $\hat{g}$ but not $f$. For the highly differentiable case (i.e. $H^{m},\,m>\frac{d}{2}$), we choose either smallness assumption for $f$ or $\hat{g}$. (See Section~\ref{sec:small}). As a further application, we consider adding dissipative terms to the Boussinesq equation of water waves in \eqref{b-e}. We note that the equation \eqref{b-e} is ill-posed and not all initial conditions lead to solutions. Nevertheless, we construct special solutions which are response. The approach followed in \cite{Rafael13, Rafael17} for similar problems \eqref{b-e} has two steps. In the first step, one constructed series expansions in $\varepsilon$ that produced approximate solutions. In a second step, one used a contraction mapping principle for an operator defined in a small ball near the approximate solutions obtained in the first step. Of course, this approach requires a very careful choice of the spaces in which the approximate solutions lie and the fixed point problems are formulated. One important consideration is that the spaces are chosen such that the operators involved map the spaces into themselves. Since some of the operators involved are diagonal in Fourier series, it is important that the norms can be read off from the Fourier coefficients. It will also be convenient that we have Banach algebras properties and that the nonlinear composition operators can be readily estimated. We have to say that it is the idea in \cite{Rafael13, Rafael17} that inspires our present treatment for the equations \eqref{000.0} and \eqref{b-e}. To motivate the procedure adopted in this paper, we note that in the method of \cite{Rafael13,Rafael17}, the fixed point part does not depend on any arithmetic condition on the forcing frequency. We will modify slightly the fixed point part to get response solutions with some regularity for our model \eqref{000.0}. In this way, we first reformulate the existence of response solutions for equation \eqref{000.0} as a fixed point problem. Then, under certain regularity assumptions for the nonlinearity and the forcing, we obtain the response solutions with corresponding regularity on $\varepsilon$ when $\varepsilon$ ranges over an appropriate domain without any circle centered at the origin $\varepsilon=0$. It is quite possible that the response solutions constructed are not differentiable with respect to $\varepsilon$ at $\varepsilon = 0$ (see Remark~\ref{noo-de}) since we do not assume any Diophantine conditions for the frequency $\omega$. Therefore, when we consider $\varepsilon$ goes to $0$, we just get the response solutions depend continuously on $\varepsilon$. The method of the proof in this paper (very different from resumming expansions) consists in transforming the original equations \eqref{000.0} and \eqref{b-e} into fixed point equations (see \eqref{fixeq3} and \eqref{plo1}, respectively). The main observation that allows us to solve the fixed point equations is that we are allowed to use the strong dissipation in the contraction mapping principle. Our method also works for finitely differentiable problems. 
In such case, we will introduce Sobolev spaces, in which the norms of functions are measured by size of the Fourier coefficients. We think that the regularity results obtained in this paper are close to optimal. As for the optimality for the domain, we find that there exist arbitrarily small values of $\varepsilon$ for which the map we constructed is not a contraction and the method of the proof breaks down. Therefore, we conjecture that this is optimal and that indeed, regular solutions do not exist for these small parameter values and general forcing and nonlinearity. We also show in Remark~\ref{noo-de} that, both in the analytic and in the finitely differentiable case, there are examples in which the solution is not differentiable in $\varepsilon$ at $\varepsilon = 0$ when we remove the Diophantine condition on the forcing frequency $\omega$. The lack of differentiability at $\varepsilon = 0$ is a reflection of the problem being a singular perturbation. In the case considered here that there are no non-resonance conditions on the frequency, the problem is more severe than in previously considered cases. \subsection{Some possible generalization} Our method could deal easily with the general case with the form of \begin{equation}\label{general} \begin{split} \mathbf{p}x_{tt}+\frac{1}{\varepsilon} \mathbf{q}x_t+ g(x,\omega t)= f(\omega t),\quad x\in \mathbb{R}^n, \end{split} \end{equation} where $\mathbf{p},\,\mathbf{q}$ are diagonal constant matrix and $g(x,\omega t)=Ax+\hat{g}(x,\omega t)$, where $A$ is a matrix in Jordan Block form and $\hat{g}(x,\omega t)\,:\mathbb{R}^n\times \mathbb{T}^d\rightarrow \mathbb{R}^n$ is sufficiently regular. We leave the easy details to the interested readers. See Remark~\ref{in-jordan}, which gives some simplified calculations after we have carried out the case in \eqref{000.0}. \subsection{Organization of this paper} Our paper is organized as follows: In Section~\ref{sec:formulation}, we present the idea of reformulating the existence of response solutions for equation \eqref{000.0} as a fixed point problem. To solve this fixed point equation, in Section~\ref{sec:spacesode}, we give the precise function spaces that we work in and we list their important properties, such as Banach algebra properties and the regularity of the composition operators. We state our three main results: analytic case, highly differentiable case and low regularity in Section ~\ref{sec:statement}. Section ~\ref{sec:analytic} is mainly devoted to the proof of our analytic result by contraction mapping principle. In the process, we need to pay more attention to the invertibility of operators and regularity of composition operators. In Section~\ref{sec:finitely}, we prove our regular result in the finitely differentiable case by the contraction argument and the implicit function theorem. Section~\ref{sec:pde} is an application to the ill-posed PDE \eqref{b-e} by a similar idea to used for ODE \eqref{000.0}. \section{The formulation for equation \eqref{000.0}}\label{sec:formulation} In this section, we give an overview of our treatment for ODE model \eqref{000.0}, which can be rewritten as \begin{equation}\label{000.1} \begin{split} \varepsilon x_{tt}+x_t+\varepsilon g(x)=\varepsilon f(\omega t),\,\, x\in \mathbb{R}^n, \end{split} \end{equation} where, as indicated before, the mappings $g:\mathbb{R}^n\rightarrow \mathbb{R}^n,\,\,f:\mathbb{T}^d\rightarrow \mathbb{R}^n$. We will reduce the existence of response solutions of equation \eqref{000.1} to an equivalent fixed point problem. 
To this end, it is crucial to make some assumptions on equation \eqref{000.1}. \subsection{Preliminaries} For the analytic and highly differentiable functions $f$ and $g$ defining the equation \eqref{000.1}, we assume that: $\mathbf{H}$: The average of $f$ is $0$ and $g(0)=0$. Denote $A=Dg(0)$, an $n\times n$ matrix; the spectrum $\lambda_j\,(j=1,\cdots,n)$ of $A$ is real and $\lambda_j\neq 0$. Actually, we could weaken the assumptions on the regularity of the function $g$ when considering low regularity results (e.g., $L^2$ or $H^1$). As we will see in Section~\ref{sec:lowregularity}, instead of assuming that $g$ is differentiable, we just assume that: $\mathbf{\widetilde{H}}$: $g$ is Lipschitz in $\mathbb{R}^n$ and it can be expressed in the form \begin{equation*} g(x)=Ax+\hat{g}(x), \end{equation*} where $A$ is an $n\times n$ matrix and its spectrum is real and nonzero. Moreover, the nonlinear part $\hat{g}$ satisfies $\text{Lip}(\hat{g})\ll 1$ on the whole of $\mathbb{R}^n$. Note that in both assumptions $\mathbf{H}$ and $\mathbf{\widetilde{H}}$, we do not require that the matrix $A$ be diagonalizable. Non-diagonalizable matrices appear naturally when considering oscillators at resonance, which is often a design goal in several applications in electronics, or when considering mechanical systems with several nodes. We emphasize that the assumption $\mathbf{\widetilde{H}}$ involves assumptions on $\hat g$ for all values of its argument. This is needed when we consider solutions in $L^2(\mathbb{T}^d)$ which may be unbounded. It is important to note that, once we have established the conclusion for $g$ under the assumption $\mathbf{\widetilde{H}}$, we can accommodate several physical situations such as piecewise linear nonlinearities with small breaks. Without loss of generality, we assume that \begin{equation}\label{non-res} \omega \cdot k\neq 0,\,\forall k\in \mathbb{Z}^d\setminus\{0\}. \end{equation} Indeed, if there is a $k_0\in \mathbb{Z}^d\setminus\{0\}$ such that $\omega \cdot k_0=0$, we could reformulate the forcing with only $(d-1)-$dimensional variables which are orthogonal to $k_0$. Namely, we would consider the map $f:\,\mathbb{T}^{d-1}\rightarrow \mathbb{R}^n$. The condition \eqref{non-res} is called the \emph{``non-resonance''} condition. If the non-resonance condition \eqref{non-res} is satisfied, the set $\{\omega t\}_{t \in \mathbb{R}}$ is dense in $\mathbb{T}^d$. \subsection{Quasi-periodic solutions, hull functions} In this paper, we are interested in finding quasi-periodic solutions with frequency $\omega\in \mathbb{R}^d$. These are functions of time $t$ of the form \begin{equation}\label{realso} x_\varepsilon(t) = U_\varepsilon(\omega t) \end{equation} for a suitable function $U_{\varepsilon}:\mathbb{T}^{d} \rightarrow \mathbb{R}^n$, indexed by the small parameter $\varepsilon$. The function $U_{\varepsilon}$ is often called the \emph{``hull function''}. Substituting \eqref{realso} into equation \eqref{000.1} and using that $\{\omega t\}_{t \in \mathbb{R}}$ is dense in $\mathbb{T}^d$, we obtain that \eqref{000.1} holds for a continuous function $U_\varepsilon$ if and only if the hull function $U_\varepsilon$ satisfies \begin{equation}\label{fixeq} \varepsilon\left( \omega\cdot \partial_{\theta}\right) ^2 U_{\varepsilon}(\theta)+\left( \omega\cdot \partial_{\theta}\right) U_{\varepsilon}(\theta) +\varepsilon g(U_{\varepsilon}(\theta))=\varepsilon f(\theta). 
\end{equation} Hence, our treatment of equation \eqref{000.1} will be based on finding $U_{\varepsilon}$ which solves \eqref{fixeq}. We will manipulate \eqref{fixeq} to reformulate it as a fixed point problem that can be solved by the contraction argument (or the implicit function theorem). The equation \eqref{fixeq} we will solve involves the parameter $\varepsilon$ (the inverse of the coefficient multiplying the damping). We will obtain solutions with delicate regularity in $\varepsilon$, which are objects in a space of functions. Precisely, in the analytic case (see Section~\ref{sec:analytic}), we will get a solution $U_{\varepsilon}$ of equation \eqref{fixeq} depending analytically on $\varepsilon$ when $\varepsilon$ ranges over a complex domain $\Omega$ which does not include the origin $\varepsilon = 0$ but is such that the origin is in the closure of $\Omega$. In the finitely differentiable case (see Section~\ref{sec:finitely}), the solution $U_{\varepsilon}$ is differentiable in $\varepsilon$ when $\varepsilon$ is in a real domain $\widetilde{\Omega}$ which also does not include zero but includes it in its closure. However, when we consider the regularity of the solution $U_{\varepsilon}$ of equation \eqref{fixeq} as $\varepsilon$ goes to $0$, we get that $U_{\varepsilon}$ is continuous in $\varepsilon$ in the topologies used in the fixed point problem (see Lemma~\ref{continuous}). Moreover, we will show that, in the generality considered in this paper, there are cases in which the solution is not differentiable at $\varepsilon = 0$ (see Remark~\ref{noo-de}). Later, we will develop analogous procedures for the PDE model \eqref{b-e} (see Section~\ref{sec:pde}). We anticipate that the treatment there is inspired by the formulation for the ODE presented in this section. The unknowns will not take values in $\mathbb{R}^n$, but rather will take values in a Banach space of functions. In addition, the partial differential equation \eqref{b-e} is ill-posed and its nonlinearity is unbounded, which forces us to perform a more drastic rearrangement of its fixed point equation (see \eqref{ffixeq}). \subsection{Formulation of the fixed point problem} \label{sec:formulationfixed} In this part, we just present the formal manipulations. The precise setup will follow, but it is natural to present the formal manipulations first since the rigorous setting is chosen to make them precise. Our goal is to transform equation \eqref{fixeq} into an equivalent fixed point problem. We rewrite \eqref{fixeq} as \begin{equation}\label{fixeq1} \begin{split} \varepsilon\left( \omega\cdot \partial_{\theta}\right) ^2 U_{\varepsilon}(\theta)+\left( \omega\cdot \partial_{\theta}\right) U_{\varepsilon}(\theta) +\varepsilon A U_{\varepsilon}(\theta)=\varepsilon f(\theta)-\varepsilon\hat{g}(U_{\varepsilon}(\theta)), \end{split} \end{equation} where $A$ is a constant matrix and \begin{equation*} \hat{g}(x)=g(x)-Ax. \end{equation*} Note that, in both the analytic case and the highly differentiable case, we use assumption {\bf{H}}. It is obvious that \begin{equation}\label{nont0} \hat{g}(0)=0,\,\,D\hat{g}(0)=0. \end{equation} Namely, \begin{equation*} \hat{g}(x)=O(x^2),\,\,D\hat{g}(x)=O(x), \end{equation*} where $O(x)$ denotes a quantity of the same order as $x$. As a consequence, $D\hat{g}$ is small (in many senses) in a small neighborhood of the origin $x=0$. We could also assume that $D\hat{g}$ is globally small, although in the sense of complex analyticity this assumption is trivial by Liouville's theorem. 
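A purely illustrative scalar example (not one of the models treated later) may help fix ideas: for $n=1$, take $g(x)=\lambda x+x^{2}$ with $\lambda\in\mathbb{R}\setminus\{0\}$. Then $A=\lambda$, $\hat{g}(x)=x^{2}$ and $D\hat{g}(x)=2x$, so that $\hat{g}(0)=D\hat{g}(0)=0$ and the Lipschitz constant of $\hat{g}$ on a ball of radius $r$ around the origin is $2r$, which is as small as we wish provided $r$ is small. Note, however, that $D\hat{g}$ is not globally small, in agreement with the Liouville obstruction mentioned above.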
When $g$ is just Lipschitz, we need that $\text{Lip}(\hat{g})$ is globally small, as in condition $\mathbf{\widetilde H}$. Based on equation \eqref{fixeq1} and denoting by $Id$ the $n\times n$ identity matrix, we introduce the linear operator $\mathcal{L}_{\varepsilon}$ as \begin{equation}\label{ope} \mathcal{L}_{\varepsilon}=\varepsilon\left( \omega\cdot \partial_{\theta}\right) ^2 Id +\left( \omega\cdot \partial_{\theta}\right)Id +\varepsilon A, \end{equation} defined on $n-$dimensional periodic functions of $\theta\in \mathbb{T}^d$. Then, \eqref{fixeq1} can be rewritten as \begin{equation}\label{fixeq2} \mathcal{L}_{\varepsilon}(U_{\varepsilon}(\theta)) =\varepsilon f(\theta)-\varepsilon\hat{g}(U_{\varepsilon}(\theta)). \end{equation} As shown in Section~\ref{sec:linverse}, the operator $\mathcal{L}_{\varepsilon}$ is boundedly invertible in the space $H^{\rho,m}$ defined in Section~\ref{sec:spacesode} when $\varepsilon$ ranges over a suitable complex domain. This allows the equation \eqref{fixeq2} to be transformed into a fixed point problem as \begin{equation}\label{fixeq3} U_{\varepsilon}(\theta) =\varepsilon\mathcal{L}_{\varepsilon}^{-1}\left[ f(\theta)-\hat{g}(U_{\varepsilon}(\theta))\right]\equiv \mathcal{T}_{\varepsilon}(U_{\varepsilon})(\theta), \end{equation} where we have introduced the operator $\mathcal{T}_{\varepsilon}$. For a fixed $\varepsilon$, we can obtain a solution $U_{\varepsilon}$ of equation \eqref{fixeq3} by the contraction mapping principle. Further, we want to get a solution $U_{\varepsilon}$ possessing optimal regularity in $\varepsilon$. This can be achieved by considering the operator $\mathcal{T}$ above in a function space consisting of functions regular in $\varepsilon$ (see Section~\ref{sec:analyticitysolution} for the analytic case and Section~\ref{sec:finisolution} for the highly differentiable case). In particular, in the highly differentiable case, we will use the classical implicit function theorem to get the results with optimal regularity in $\varepsilon$. For convenience, we now introduce the operator $\mathbf{T}$ involving the arguments $\varepsilon$ and $U$ as follows: \begin{equation}\label{impli} \begin{split} \mathbf{T}(\varepsilon,U):=U-\mathcal{T}(\varepsilon,U). \end{split} \end{equation} This makes it clear that, by the classical implicit function theorem, we can obtain the solution $U=U_{\varepsilon}$, as a function of $\varepsilon$, with the same regularity as $\mathbf{T}$. Two subtle points appear in this strategy. One is the invertibility of the linear operator $\mathcal{L}_\varepsilon$ and the bounds on its inverse. The other is the regularity of the composition operator $\hat{g}\circ U$ in \eqref{fixeq3}. We also need to study the dependence on the parameter $\varepsilon$ of the solution $U_{\varepsilon}$ satisfying equation \eqref{fixeq3}. We observe that the linear operator $\mathcal{L}_{\varepsilon}$ is diagonal in the basis of Fourier functions. This suggests that we use some variants of Sobolev (or Bergman) spaces which provide analyticity -- or, in the low regularity case, $L^2$ or $H^1$. Hence, it will be useful that the spaces we consider have norms that can be estimated very easily by estimating the Fourier coefficients. The estimates of the Fourier coefficients involve the assumptions that the eigenvalues of $A$ are nonzero real numbers and that the range of $\varepsilon$ is restricted to a domain accumulating at the origin $\varepsilon=0$. (See Section~\ref{sec:coe} for details). 
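To illustrate the diagonal structure in the simplest situation (anticipating the computations carried out in Section~\ref{sec:multiplier}), consider the scalar case $n=1$ with $A=\lambda\in\mathbb{R}\setminus\{0\}$. Acting on the Fourier basis, we have
\begin{equation*}
\mathcal{L}_{\varepsilon}\big(e^{\mathrm{i} k\cdot\theta}\big)=\big(-\varepsilon (k\cdot\omega)^2+\mathrm{i}(k\cdot\omega)+\varepsilon\lambda\big)e^{\mathrm{i} k\cdot\theta},
\end{equation*}
so that $\mathcal{L}_{\varepsilon}^{-1}$ just multiplies the $k$-th Fourier coefficient by the reciprocal of the symbol above. Hence, bounds on $\mathcal{L}_{\varepsilon}^{-1}$ in spaces whose norms are read off from the Fourier coefficients reduce to lower bounds on this symbol, uniform in $k$.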
For the nonlinear estimates, we need that the composition operator defined by $\hat{g}\circ U$ is smooth considered as a mapping acting on the spaces we consider. The regularity of the composition on the left by a smooth functions acting on variants of Sobolev spaces have been widely studied \cite{MR74,AZ90, kappe03}. In Sections~\ref{sec:spacesode}, we will present the precise spaces and some properties in these spaces used to implement our program. \subsection{Some heuristic considerations on the smallness conditions required for the present method}\label{sec:small} Recall the fixed point equation \eqref{fixeq3}, the operator we consider has the structure \begin{equation*} U =\varepsilon\mathcal{L}_{\varepsilon}^{-1} f-\varepsilon\mathcal{L}_{\varepsilon}^{-1}\hat{g}(U)\equiv \mathcal{T}_{\varepsilon}(U). \end{equation*} To solve it by iteration, roughly, we need that the map $U\rightarrow \varepsilon\mathcal{L}_{\varepsilon}^{-1}\hat{g}\circ U$ is a contraction in a domain that contains a ball around $\varepsilon\mathcal{L}_{\varepsilon}^{-1}f$. Of course, the notions of contraction and smallness depend on the spaces under consideration. The results of existence are sharper if we consider spaces of more regular functions and the results of local uniqueness are sharper if we consider spaces of less regular functions. Both the contraction properties of $\varepsilon\mathcal{L}_{\varepsilon}^{-1}\hat{g}\circ U$ and the smallness properties of $\varepsilon\mathcal{L}_{\varepsilon}^{-1}f$ are formulated in appropriate norms (which change with the regularity considered). As we will see in Section~\ref{sec:linverse}, the operator $\varepsilon\mathcal{L}_{\varepsilon}^{-1}$ can be bounded in appropriate norms, which allows us to just consider the smallness of $f$ and the properties of the composition $\hat{g}\circ U$. To this end, it is clear that we can trade off some of the smallness assumptions in $\hat{g}$ and $f$. If we are willing to make global assumptions of smallness on $\hat{g}$, we do not need any smallness assumption on $f$. If, on the other hand, we assume that $\hat{g}$ is smooth and $\hat{g}(0)=D\hat{g}(0)=0$, we have that $\hat{g}$ is small (in many senses) in a small neighborhood at the origin. From this point of view, it is necessary to impose smallness condition on $f$ in this small neighborhood. There are some caveats to these arguments: In the analytic case, assuming that $D\hat{g}$ is small globally (even bounded) in the whole complex space $\mathbb{C}^n$, Liouville's theorem shows that it is constant, namely, $\hat{g}$ is linear. This makes our result true, but it is trivial and we will not state it. Of course, Liouville's theorem is only a concern for analytic results. In the low regularity cases (e.g. $L^2$ or $H^1$ when $d\geq 2$), the range of $f$ may be the whole of $\mathbb{R}^n$, hence we need to make global assumptions on smallness in $\hat{g}$. In the case of $H^m$ regularity with $m>\frac{d}{2}$, we prove our results under two types of smallness assumptions (See Section~\ref{sec:proof-fi}). We also advance that in the case of $H^1$ regularity, the contraction argument we use will be somewhat more sophisticated. (See Section~\ref{sec:lowregularity}). \section{Function spaces} \label{sec:spacesode} \subsection{Choice of spaces}\label{sec:choice} To implement the fixed point problem outlined in Section~\ref{sec:formulation}, we need to define precisely function spaces with appropriate norms. 
As the discussion in Section~\ref{sec:analytic} will make clear, it is very convenient that the norms can be expressed in terms of the Fourier coefficients of functions. In such a case, the inverse of the linear operator $\mathcal{L}_{\varepsilon}$ can be easily estimated just by estimating its Fourier coefficients. We are allowed to choose the basis in such a way that the Fourier coefficients of the multiplier operator $\mathcal{L}_{\varepsilon}$ have the Jordan standard form. (See Section~\ref{sec:multiplier}). We also need the spaces to possess other properties allowing us to control the composition $\hat{g}\circ U$ in \eqref{fixeq3} with ease, such as Banach algebra properties under multiplication and the properties of the composition operators. To study the analyticity in $\varepsilon$, we will define spaces of analytic functions of $\varepsilon$ in Section~\ref{sec:analyticitysolution}. In this section, we use the same notations for Banach spaces as in \cite{Llave09, Rafael13,Rafael16}. For $\rho \geq 0$, we denote \begin{equation*} \mathbb{T}^{d}_{\rho}=\left\lbrace \theta\in \mathbb{C}^d/(2\pi\mathbb{Z})^d\,:\, \mathrm{Re}(\theta_j)\in \mathbb{T},\,\,|\mathrm{Im}(\theta_j)|\leqslant \rho,\,\,j=1,\ldots,d \right\rbrace. \end{equation*} We denote the Fourier expansion of a periodic function $f(\theta)$ on $\mathbb{T}^{d}_{\rho}$ by \begin{equation*} \begin{split} f(\theta)=\sum_{k\in\mathbb{Z}^{d}}\widehat{f}_{k}e^{\mathrm{i} k\cdot\theta}, \end{split} \end{equation*} where $k\cdot\theta=k_1 \theta_1+\cdots+k_d \theta_d$ represents the Euclidean product in $\mathbb{C}^d$ and $\widehat{f}_{k}$ are the Fourier coefficients of $f$. If $f$ is analytic and bounded on $\mathbb{T}^{d}_{\rho}$, then the Fourier coefficients satisfy the Cauchy bounds \begin{equation*} |\widehat{f}_{k}|\leq M e^{-|k|\rho} \end{equation*} with $M$ being the maximum of $|f(\theta)|$ on $\mathbb{T}^{d}_{\rho}$ and $|k|=|k_1|+\ldots+|k_d|$. \begin{definition}\label{space} For $\rho \geq 0,\,m,\,d,\,n \in \mathbb{N}_{+}$, we denote by $H^{\rho,m}$ the space of analytic functions $U$ in $\mathbb{T}_\rho^d$ with finite norm: \begin{equation*} \begin{aligned} H^{\rho,m}:&=H^{\rho,m}(\mathbb{T}^d)\\ &=\bigg\lbrace U:\,\mathbb{T}_\rho^d\rightarrow \mathbb{C}^n\,\mid\,\|U\|_{\rho,m}^{2}=\sum_{k\in \mathbb{Z}^d}|\widehat{U}_{k}|^{2}e^{2\rho |k|} (|k|^{2}+1)^{m}<+\infty\bigg\rbrace. \end{aligned} \end{equation*} \end{definition} It is obvious that the space $\big(H^{\rho,m},\,\,\|\cdot \|_{\rho,m}\big)$ is a Banach space and indeed a Hilbert space. From the real analytic point of view, we consider the Banach space $H^{\rho,m}$ of functions that take real values for real arguments. For $\rho=0$, $H^{m}(\mathbb{T}^d):=H^{0,m}(\mathbb{T}^d)$ is the standard Sobolev space; we refer to the references \cite{taylor3,sobolev} for more details. In this case, when $m>\frac{d}{2}$, by the Sobolev embedding theorem (see Chapters $2$ and $6$ in \cite{taylor3}), we obtain that $H^{m+l}(\mathbb{T}^d)\,(l=1,2,\cdots)$ embeds continuously into $C^l(\mathbb{T}^d)$. For $\rho>0$, functions in the space $H^{\rho,m}$ are analytic in the interior of $\mathbb{T}_\rho^d$ and extend to Sobolev functions on the boundary of $\mathbb{T}_\rho^d$. \begin{remark} As a matter of fact, when $\rho>0$ and $m>d$, the space $H^{\rho,m}$ can be identified with a closed space of the standard Sobolev space $H^{m}(\mathbb{T}_\rho^d)$ consisting of functions which are complex differentiable. 
The manifold $\mathbb{T}^d_\rho$ has $2d$ real dimension so that, when $m>d$, the standard Sobolev embedding theorem shows that $H^{\rho,m+l}\,(l=1,2,\cdots)$ embeds continuously into $C^l(\mathbb{T}^d_\rho)$. Since the uniform limit of complex differentiable functions is also complex differentiable, we conclude that our space is a closed space of the standard Sobolev space of $\mathbb{T}^d_\rho$ considered as a $2d-$dimensional real manifold. Several variants of this idea appear already in Bergman spaces in \cite{ReedS75,ReedS72}. We also point out that the set of functions in $H^{\rho,m}$ which take real values for real arguments is a closed set in $H^{\rho,m}$ (this set is also a linear space over the reals). Since we will show that our operators map this set into itself, we get that the fixed point we produce will be such that they give real values for real arguments. \end{remark} \subsection{Properties of the chosen spaces $H^{\rho,m}$ above} We note several well-known properties of the space $H^{\rho,m}$ defined in the Section~\ref{sec:choice}, which will play a crucial role in what follows. \begin{lemma}[Interpolation inequalities]\label{interpolation} For any $0\leq i\leq m, \,0\leq \nu \leq 1$, denote $s=(1-\nu)i+\nu m$, there exist constants $C_{i,m}$ depending on $i,m$ such that \begin{enumerate} \item -\emph{Sobolev case:} for $f\in H^{m}$, we have that \begin{equation}\label{Sobolevint} \|f\|_{H^s}\leq C_{i,m}\cdot\|f\|_{H^i}^{1-\nu}\cdot\|f\|_{H^m}^{\nu}, \end{equation} \item -\emph{Analytic case:} for $\rho>0,\,g\in H^{\rho,m},$ we have that \begin{equation}\label{Sobolevint2} \|g\|_{H^{\rho, s}}\leq C_{i,m}\cdot\|g\|_{H^{\rho, i}}^{1-\nu}\cdot\|g\|_{H^{\rho, m}}^{\nu}. \end{equation} \end{enumerate} \end{lemma} The inequality \eqref{Sobolevint} is the very standard Sobolev interpolation inequality in the literature \cite{taylor3,Zehnder75}. Since, as mentioned before, the spaces $H^{\rho,m}(\mathbb{T}^d)$ can be considered as a subspace of the standard Sobolev space in $\mathbb{T}_\rho^d$, we also have \eqref{Sobolevint2}. \begin{lemma}[Banach algebra properties]\label{alge} We have the following properties in two cases: \begin{enumerate} \item -\emph{Sobolev case (see \cite{sobolev,taylor3}):} Let $m>\frac{d}{2}$, there exists a constant $C_{m,d}$ depending only on $m,d$ such that for $u_1,\,u_2\in H^{m}$, the product $u_1\cdot u_2\in H^{m}$ and \begin{equation*} \|u_1u_2\|_{H^{m}}\leq C_{m,d}\|u_1\|_{H^{m}}\|u_2\|_{H^{m}}. \end{equation*} \item -\emph{Analytic case:} For $\rho>0,\,m>d$, there exists a constant $C_{\rho,m,d}$ depending on $\rho,m,d$ such that for $u_1,\,u_2\in H^{\rho,m}$, the product $u_1\cdot u_2\in H^{\rho,m}$ and \begin{equation*} \|u_1u_2\|_{H^{\rho,m}}\leq C_{\rho,m,d}\|u_1\|_{H^{\rho,m}}\|u_2\|_{H^{\rho,m}}. \end{equation*} \end{enumerate} In particular, $H^{\rho,m}$ is a Banach algebra when $\rho,\,m,\,d$ are as above. \end{lemma} To analyze the operator defined in \eqref{fixeq3}, we also need to estimate the properties of the composition operator $\hat{g}\circ U$. The following are well known consequence of Gagliardo-Nirenberg inequalities. \begin{lemma}[Composition properties]\label{gag-nir} We have the following properties in two case: \begin{enumerate} \item -\emph{Sobolev case (see \cite{taylor3,Cala10}):} Let $g\in C^m(\mathbb{R}^n,\,\mathbb{R}^n)$ and assume that $g(0)=0$. 
Then, for $u\in H^{m}(\mathbb{T}^d,\,\mathbb{R}^n)\cap L^{\infty}(\mathbb{T}^d,\,\mathbb{R}^n)$, we have \begin{equation*} \begin{split} \|g(u)\|_{ H^{m}}\leq \mathbf{c}\|u\|_{L^{\infty}}\left(1+\|u\|_{ H^{m}}\right), \end{split} \end{equation*} where $\mathbf{c}:=\mathbf{c}(\eta)=\sup_{|x|\leq \eta,\,\alpha\leq m}|D^{\alpha}g(x)|$. Particularly, when $m>\frac{d}{2}$ (so that, by the Sobolev embedding theorem $H^m \subset L^{\infty}$), if $g\in C^{m+2},$ then \begin{equation}\label{com-sob} \begin{split} \|g\circ(u+v)-g\circ u-Dg\circ u\cdot v\|_{ H^{m}}\leq C_{m,d}\|u\|_{L^{\infty}}\left(1+\|u\|_{ H^{m}}\right)\|g\|_{C^{m+2}}\|v\|_{ H^{m}}^2, \end{split} \end{equation} \item -\emph{Analytic case:} Let $g:B \rightarrow \mathbb{C}^n $ with $B$ being an open ball around the origin in $\mathbb{C}^n$ and assume that $g$ is analytic in $B$. Then, for $u\in H^{\rho,m}(\mathbb{T}_{\rho}^d,\,\mathbb{C}^n)\cap L^{\infty}(\mathbb{T}_{\rho}^d,\,\mathbb{C}^n)$ with $u(\mathbb{T}_{\rho}^d)\subset B$, we have \begin{equation*} \begin{split} \|g(u)\|_{ H^{\rho,m}}\leq C_{u}\|u\|_{L^{\infty}(\mathbb{T}_{\rho}^d)}\left(1+\|u\|_{ H^{\rho,m}}\right), \end{split} \end{equation*} where $C_{u}$ is a constant depending on the norm of $u$. In the case of $m>d$, we have that \begin{equation*} \begin{split} \|g\circ(u+v)-g\circ u-Dg\circ u\cdot v\|_{ H^{\rho,m}}\leq C_{\rho,m,d}\|u\|_{L^{\infty}(\mathbb{T}_{\rho}^d)}\left(1+\|u\|_{ H^{\rho,m}}\right)\|v\|_{ H^{\rho,m}}^2. \end{split} \end{equation*} \end{enumerate} \end{lemma} The complete proof of Lemma~\ref{gag-nir} can be found in Proposition $3.9$ in \cite{taylor3} or Proposition $2.20$ in \cite{kappe03}, Proposition $1$ in \cite{MR74}. To make the paper self-contained, we just give an sketch of the ideas for the inequality \eqref{com-sob}, but refer the interested readers to the references above. Since \begin{equation*} \begin{split} g\circ(u+v)(\theta)-g\circ u(\theta)-Dg\circ u(\theta)\cdot v(\theta)=\int_{0}^1\int_{0}^{t} D^2g\circ(u+tsv)(\theta)\cdot v^2(\theta)dsdt, \end{split} \end{equation*} we get the desired result by the facts that $D^2g\circ(u+tsv)\in H^m$ and its $H^m$ norm is bounded uniformly in $t,s$ and that $H^m$ is a Banach algebra under multiplication by Lemma~\ref{alge}. The range of the derivative $D\hat{g}$ is a $n \times n$ matrix, which can be identified with $\mathbb{R}^{n^2}$. Note that the dimension of the range of $g$ does not play any role in our arguments. The proof of Lemma~\ref{gag-nir} is rather elementary in the analytic case. As a matter of fact, Lemma~\ref{gag-nir} gives not only the composition operator is differentiable but also presents formula for the derivative. It is easy to check that the same argument leads to higher derivatives of the composition operator if we assume more regularity for function $g$. More precisely, we have the following proposition: \begin{proposition}[Regularity of composition operators]\label{com-regu} We have that following two cases: \begin{enumerate} \item -\emph{Sobolev case:} Let $m>\frac{d}{2}$. Then, the left composition operator \begin{equation*} \mathcal{C}_g:\,H^{m}(\mathbb{T}^d,\mathbb{R}^n)\rightarrow H^{m}(\mathbb{T}^d,\mathbb{R}^n) \end{equation*} defined by \begin{equation*} \mathcal{C}_{g}[u](\theta)=g(u(\theta)), \end{equation*} has the following properties: If $g\in C^{m+1}(\mathbb{R}^n,\mathbb{R}^n)$, then $\mathcal{C}_g$ is Lipschitz. If $g\in C^{m+l+1}(\mathbb{R}^n,\mathbb{R}^n),\, (l=1,2,\cdots)$, then $\mathcal{C}_g$ is $C^{l}$. 
Moreover, the derivative of the operator $\mathcal{C}_{g}$ is given by \begin{equation*} (D\mathcal{C}_{g}[u]v)(\theta)=Dg(u)v(\theta). \end{equation*} \item -\emph{Analytic case:} Let $\rho>0$. Assume that $m > d$ and $g\,: B \rightarrow \mathbb{C}^n $, where $B$ is an open ball around the origin in $\mathbb{C}^n$, is analytic in $B$. Let $u_0\in H^{\rho,m}$ be such that $u_0(\mathbb{T}_\rho^d)\subset B$. Then for all $u$ in a neighborhood $\mathcal{U}$ of $u_0$ in $H^{\rho,m}$, the operator $\mathcal{C}_{g}\,:\,\mathcal{U}\rightarrow H^{\rho,m}$ is analytic. Moreover, for $v\in H^{\rho,m}$, the derivative of the operator $\mathcal{C}_{g}$ is given by \begin{equation*} (D\mathcal{C}_{g}[u]v)(\theta)=Dg(u)v(\theta). \end{equation*} \end{enumerate} \end{proposition} \begin{proof} In fact, Lemma~\ref{gag-nir} shows that the operator $\mathcal{C}_g$ is $C^1$ when $g\in C^{m+2}$. For $g\in C^{m+l+1}$, we can proceed by induction. If we have proved the result for $l$ and the formula for the derivative, we obtain the case for $l+1$. Indeed, if $g\in C^{m+l+1}$, we have $\mathcal{C}_g$ is $C^l$. Then, for $g\in C^{m+l+2}$, $Dg\in C^{m+l+1}$, we get $D\mathcal{C}_g$ is $C^l$ by induction. Namely, $\mathcal{C}_g$ is $C^{l+1}$. In the analytic case, we start by observing that $u(\mathbb{T}_\rho^d)\subset B$ is a compact set by the Sobolev embedding theorem. Hence, it is at a bounded distance from the boundary of $B$. If the neighborhood of $u$ is sufficiently small, the range of all the functions will also be contained in $B$. Then, we obtain our result by Lemma~\ref{gag-nir}. We can also refer to \cite{Rafael17} for more details. \end{proof} Note that, for the Sobolev case in Proposition~\ref{com-regu}, the regularity of $\mathcal{C}_g$ is not optimal, we refer to \cite{Rs96,AZ90,kappe03} for more results. Note also that, for the analytic case in Proposition~\ref{com-regu}, the result is not the most general result. There are results in the case of regularity that the Sobolev embedding theorem does not give continuity. In these cases, we need to take more care of the ranges of the functions. Since the functions are differentiable in the complex sense, we obtain that the composition operator $\mathcal{C}_{g}$ is differentiable in the complex sense by the chain rule to obtain the derivative. Further, to get that the operator $\mathcal{C}_g$ is analytic, we just recall the Cauchy result that also holds for functions whose arguments range over a complex Banach space. See~\cite{Hil57}. \section{Statement of the main results} \label{sec:statement} In this section, we state several results for the model \eqref{000.1}. These results are aimed at different regularity of the forcing $f$: analyticity (Theorem~\ref{mainthm}), finite (but high enough) number of derivatives (Theorem~\ref{theom-fi}) and low regularity (Theorem~\ref{theom-conti}). \begin{theorem}\label{mainthm} Suppose that $f\in H^{\rho,m}(\mathbb{T}^d)$ for some $\rho>0,\,m>d$ and $g$ is analytic in an open ball around the origin in the space $\mathbb{C}^n$. 
If the condition $\mathbf{H}$ is satisfied, then, for $\varepsilon \in \Omega(\sigma,\mu)$, where \begin{equation}\label{domain0} \begin{split} \Omega:=\Omega(\sigma, \mu)=\left\lbrace \varepsilon\in \mathbb{C}\,:\,\mathrm{Re}(\varepsilon)\geq \mu\,|\mathrm{Im}(\varepsilon)|,\,\, \sigma\le|\varepsilon|\leq 2\sigma\right\rbrace \end{split} \end{equation} with $\mu>\mu_0$ for $\mu_0>0$ sufficiently large and $\sigma>0$ sufficiently small, there is a unique solution $U_{\varepsilon}\in H^{\rho,m}(\mathbb{T}^d)$ for equation \eqref{fixeq}. Furthermore, considering $U_{\varepsilon}$ as a function of $\varepsilon$, the mapping $\varepsilon\rightarrow U_{\varepsilon}:\,\Omega\rightarrow H^{\rho,m}(\mathbb{T}^d)$ is analytic when $m>(d+2)$. In addition, as $\varepsilon\rightarrow 0$, the solution $U_{\varepsilon}\rightarrow 0$ and the mapping $\varepsilon\rightarrow U_{\varepsilon}$ is continuous. \end{theorem} \begin{remark} The statement of Theorem~\ref{mainthm} does not impose any Diophantine condition on the forcing frequency $\omega$. Since we do not expand the solution as a power series in $\varepsilon$, there is no equation involving the small divisor appearing. We will, however, not get that the solution is differentiable with respect to $\varepsilon$ at the origin $\varepsilon = 0$ and this may indeed be false in the generality considered in this paper. (See Remark~\ref{noo-de}). \end{remark} \begin{theorem}\label{theom-fi} Suppose that $f\in H^{m}(\mathbb{T}^d)$ with $m>\frac{d}{2}$ and $g\in C^{m+l}(\mathbb{R}^n,\mathbb{R}^n) \,\,(l=1,2,\cdots)$. If the condition $\mathbf{H}$ is satisfied, then, for $\varepsilon\in \widetilde{\Omega}(\sigma)$, where \begin{equation}\label{fini-para} \widetilde{\Omega}:=\widetilde{\Omega}(\sigma)=\left\lbrace \varepsilon\in \mathbb{R}:\, \sigma\le|\varepsilon|\leq 2\sigma\right\rbrace \end{equation} with sufficiently small $\sigma>0$, there exists a unique solution $U_{\varepsilon}\in H^{m}(\mathbb{T}^d)$ for equation \eqref{fixeq}. Moreover, we have the following regularity in $\varepsilon$: If $g\in C^{m+1}(\mathbb{R}^n,\mathbb{R}^n)$, then the mapping $\varepsilon\rightarrow U_{\varepsilon}:\widetilde{\Omega}\rightarrow H^m(\mathbb{T}^d)$ is Lipschitz. If $g\in C^{m+l+1}(\mathbb{R}^n,\mathbb{R}^n)$, then the mapping $\varepsilon\rightarrow U_{\varepsilon}:\widetilde{\Omega}\rightarrow H^m(\mathbb{T}^d)$ is $C^{l}$.\\ In addition, when $\varepsilon\rightarrow 0$, the solution $U_{\varepsilon}\rightarrow 0$ and $\varepsilon\rightarrow U_{\varepsilon}$ is continuous. \end{theorem} We note that the regularity in $\varepsilon$ in Theorem~\ref{theom-fi} depends on the regularity of the composition operator $g\circ u$ in Proposition~\ref{com-regu}. Even if we show that the derivatives with respect to $\varepsilon$ exist for all $\varepsilon>0$, we do not make any claim about the limit of the derivatives as $\varepsilon$ goes to $0$. The following Theorem~\ref{theom-conti} is for the situation when the forcing and the nonlinearity are rather irregular. \begin{theorem}\label{theom-conti} Suppose that $f\in L^2(\mathbb{T}^d)$ and $g$ is globally Lipschitz continuous on $\mathbb{R}^n$ satisfying the condition $\mathbf{\widetilde{H}}$. Then, for $\varepsilon\in \widehat{\Omega}\subset\mathbb{R}\setminus\{0\} $ being the sufficiently small domain, there is a unique solution $U_{\varepsilon}\in L^2(\mathbb{T}^d)$ for equation \eqref{fixeq}. The solution $U_{\varepsilon}$ is continuous in $\varepsilon$. 
Under the above assumptions, if $f\in H^1(\mathbb{T}^d)$ and $g\in C^{1+Lip}$, then the unique solution $U_{\varepsilon}$ constructed above is in $\cap_{0 \le s < 1} H^s$. \end{theorem} Note that Theorem~\ref{theom-conti} applies to some piecewise linear models (the Lipschitz constant of the derivatives has to be sufficiently small). Such models appear naturally in many areas. We also stress that in Theorem~\ref{theom-conti}, for $f\in H^1(\mathbb{T}^d)$, we cannot claim that the solution is in $H^1$, but only that it belongs to the intersection $\cap_{0 \leq s < 1}H^s$. We do not have a contraction argument in this case, but we can estimate the speed of convergence of the iterative procedure in the space $H^s$ for $0\leq s <1 $. In the analytic case (Theorem~\ref{mainthm}) and in the highly differentiable case (Theorem~\ref{theom-fi}), when $m>(\frac{d}{2}+2)$, we have that the solution $U_{\varepsilon}$ is $C^2$ with respect to the argument $\theta$. Hence, the quasi-periodic solution $x(t)$ obtained through \eqref{realso} is also a twice differentiable function of time. As a consequence, the solutions we have produced satisfy the differential equation \eqref{000.0} in the classical sense. In the lower regularity case, the solutions we produce solve the equation in the sense that the Fourier coefficients of \eqref{fixeq} are the same on both sides. This is equivalent to solving \eqref{000.0} in the weak sense since the trigonometric polynomials are dense in the space of $C^{\infty}$ test functions. In this paper, we also present some results for the PDE model \eqref{b-e}. Since the formulation requires new definitions and auxiliary lemmas, we postpone the formulation of the results until Section~\ref{sec:pde}. \section{Analytic case: Proof of Theorem~\ref{mainthm}} \label{sec:analytic} We prove Theorem~\ref{mainthm} in the analytic sense by considering the fixed point equation \eqref{fixeq3} in the Banach space $H^{\rho,m}$ for any $\varepsilon\in \Omega(\sigma,\mu)$. Recall equation \eqref{fixeq3}: \begin{equation}\label{ffix} U_{\varepsilon}(\theta) =\mathcal{L}_{\varepsilon}^{-1}\left[ \varepsilon f(\theta)-\varepsilon\hat{g}(U_{\varepsilon}(\theta))\right]\equiv \mathcal{T}_{\varepsilon}(U_{\varepsilon})(\theta). \end{equation} The first concern is the invertibility of the linear operator $\mathcal{L}_{\varepsilon}$ and the quantitative bounds on its inverse when $\varepsilon$ ranges over the complex domain $\Omega(\sigma, \mu)$ defined in \eqref{domain0}. We remark that it is impossible to obtain the same bounds if $\varepsilon$ belongs to the imaginary axis. In fact, we conjecture that the optimal domain of $\varepsilon$, when the solution $U_{\varepsilon}$ of equation \eqref{ffix} is considered as a function of $\varepsilon$, does not extend to the imaginary axis. Secondly, since we want to obtain a solution $U_{\varepsilon}$ analytic in $\varepsilon$, we will define a space consisting of functions analytic in $\varepsilon$. (See the space $H^{\rho,m,\Omega}$ defined in Section~\ref{sec:analyticitysolution}). By reinterpreting the fixed point problem in the space $H^{\rho,m, \Omega}$, we obtain rather directly the analytic dependence on $\varepsilon$ of the solutions $U_{\varepsilon}$. The delicate steps are to show that the operator $\mathcal{T}$ defined in \eqref{ffix} maps a ball centered at the origin in the space $H^{\rho,m, \Omega}$ into itself and that it is a contraction in this ball. 
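Schematically, both steps rest on two ingredients established below: a bound $\|\varepsilon\mathcal{L}_{\varepsilon}^{-1}\|\leq C$ uniform in $\varepsilon\in\Omega$ (Section~\ref{sec:linverse}) and the smallness of the Lipschitz constant of $\hat{g}$ on a small ball $\mathcal{B}_r(0)$, a consequence of \eqref{nont0}. Granting these, one gets, roughly,
\begin{equation*}
\|\mathcal{T}(U)\|\leq C\big(\|f\|+\mathrm{Lip}(\hat{g})\,r\big),\qquad \|\mathcal{T}(U_1)-\mathcal{T}(U_2)\|\leq C\,\mathrm{Lip}(\hat{g})\,\|U_1-U_2\|,
\end{equation*}
so that $\mathcal{T}$ maps $\mathcal{B}_r(0)$ into itself and is a contraction as soon as $C\,\mathrm{Lip}(\hat{g})<\frac{1}{2}$ and $\|f\|\leq \frac{r}{2C}$. The details, in the precise norms, are carried out in Section~\ref{sec:existencefp}.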
\subsection{Estimates on the inverse operator $\mathcal{L}_{\varepsilon}^{-1}$} \label{sec:linverse} For the analytic nonlinearity $g$, the linear part $A$ is dominant with respect to the nonlinear part $\hat{g}$. Moreover, the Lipschitz constant of $\hat{g}$ can be made small enough on a sufficiently small domain. We now study the linear operator defined by \begin{equation*} \mathcal{L}_{\varepsilon}=\varepsilon\left( \omega\cdot \partial_{\theta}\right) ^2 Id+\left( \omega\cdot \partial_{\theta}\right) Id +\varepsilon A. \end{equation*} Our main result in this section is that $\mathcal{L}_{\varepsilon}$ is boundedly invertible from the analytic function space $H^{\rho,m}$ to itself when $\varepsilon$ ranges over a complex conical domain $\Omega(\sigma, \mu)$, which stays away from the imaginary axis. Of course, this result requires the assumptions on $A$ in $\mathbf{H}$. A key ingredient for the result is that the norms of the functions can be read off from the sizes of the Fourier coefficients and that the operator $\mathcal{L}_\varepsilon$ acts in a very simple manner on Fourier series. Indeed, if the matrix $A$ were diagonal, the operator $\mathcal{L}_\varepsilon$ would be just a Fourier multiplier in each component (this case is worth keeping in mind as a heuristic guide). \subsubsection{Some elementary manipulations}\label{sec:multiplier} A consequence of the assumption $\mathbf{H}$ is that there exists a basis of generalized eigenvectors $\Phi_i\in \mathbb{C}^n\,(i=1,2,\cdots,n)$ such that \begin{equation}\label{jordan} \begin{split} A\Phi=J\Phi,\,\, \Phi=(\Phi_1,\cdots,\Phi_n)^{\top}, \end{split} \end{equation} where $J$ is the standard Jordan normal form. That is, \begin{equation*} \begin{split} J= \left( \begin{matrix} J_1& \,\,& \,\,& 0 \\ \,\,& J_2& \,\,&\,\,\\ \,\,&\,\,& \ddots&\,\,\\ \,0&\,\,& \,\,& J_p \end{matrix} \right) ,\,\,J_j= \left( \begin{matrix} \lambda_j& \,\,& \,\,0 \\ 1& \lambda_j& \,\,\\ \,\,&\ddots& \ddots\\ \,\,\,\,\,\,\,0&\,\,& 1& \lambda_j \end{matrix} \right) ,\,\,1\leq j\leq p,\,1\leq p \leq n. 
\end{split} \end{equation*} When we write a function $U_{\varepsilon}(\theta)\in H^{\rho,m}$ in the Fourier expansion as \begin{equation*} U_{\varepsilon}(\theta) =\sum_{k\in\mathbb{Z}^{d}}\widehat{U}_{k,\,\varepsilon}e^{\mathrm{i} k\cdot\theta} =\sum_{k\in\mathbb{Z}^{d} } \widetilde{\widehat{U}}_{k,\,\varepsilon}\Phi e^{\mathrm{i} k\cdot\theta}, \end{equation*} with $\widehat{U}_{k,\,\varepsilon},\,\widetilde{\widehat{U}}_{k,\,\varepsilon}\in \mathbb{C}^n$ and $\Phi$ being in \eqref{jordan}, the operator $\mathcal{L}_{\varepsilon}$ acting on the Fourier basis becomes \begin{equation*} \begin{split} \mathcal{L}_{\varepsilon}(\Phi e^{\mathrm{i} k\cdot\theta})=\left( -\varepsilon(k\cdot \omega)^2 Id+\mathrm{i}( k\cdot \omega)Id +\varepsilon J\right)\Phi e^{\mathrm{i} k\cdot\theta}=:L_\varepsilon(k\cdot \omega)\Phi e^{\mathrm{i} k\cdot\theta}, \end{split} \end{equation*} where \begin{equation}\label{ma-value} \begin{split} L_\varepsilon(a)&=-\varepsilon a^2 Id +\mathrm{i}a Id +\varepsilon J\\ &=\left( \begin{matrix} L_{\varepsilon,1}(a)& \,\,& \,\,&\,\,0 \\ &L_{\varepsilon,2}(a)& \,\,&\,\,\\ \,\,&\,\,& \ddots&\,\,\\ \,0&\,\,&\,\,& L_{\varepsilon,p}(a) \end{matrix} \right) \end{split} \end{equation} with \begin{equation*} \begin{split} L_{\varepsilon,j}(a)=\left( \begin{matrix} l_{\varepsilon,j}(a)& \,\,& \,\,0 \\ \varepsilon& l_{\varepsilon,j}(a)& \,\,\\ \,&\ddots& \ddots\\ \,0&\,\,& \varepsilon& l_{\varepsilon,j}(a) \end{matrix} \right)\,\,(1\leq j\leq p) \end{split} \end{equation*} and \begin{equation}\label{l-value} \begin{split} l_{\varepsilon,j}(a)=-\varepsilon a^2 +\mathrm{i}a+\varepsilon \lambda_j,\,\, j=1,2,\cdots,p. \end{split} \end{equation} The formula \eqref{ma-value} gives that \begin{equation}\label{invert0} \begin{split} L^{-1}_\varepsilon(a)=\left( \begin{matrix} L^{-1}_{\varepsilon,1}(a)& \,\,& \,\,&\,\,0 \\ &L^{-1}_{\varepsilon,2}(a)& \,\,&\,\,\\ \,\,&\,\,& \ddots&\,\,\\ \,0&\,\,&\,\,& L^{-1}_{\varepsilon,p}(a) \end{matrix} \right) \end{split} \end{equation} with \begin{equation}\label{invert} \begin{split} L_{\varepsilon,j}^{-1}(a)= \left( \begin{matrix} l_{\varepsilon,j}^{-1}(a)& \,\,& \,\,&\,\,0&\,\,\\ -\varepsilon l_{\varepsilon,j}^{-2}(a)& l_{\varepsilon,j}^{-1}(a)& \,\,&\,\,&\\ \varepsilon^2 l_{\varepsilon,j}^{-3}(a)&\,\, -\varepsilon l_{\varepsilon,j}^{-2}(a)&\,\, l_{\varepsilon,j}^{-1}(a)&\,\,&\\ \vdots&\,\,\ddots&\,\, \ddots&\,\, \ddots&\\ (-1)^{n-1}\varepsilon^{n-1} l_{\varepsilon,j}^{-n}(a)&\,\,\cdots&\varepsilon^2 l_{\varepsilon,j}^{-3}(a)&\,\, -\varepsilon l_{\varepsilon,j}^{-2}(a)\,\,& \,\,l_{\varepsilon,j}^{-1}(a) \end{matrix} \right). \end{split} \end{equation} Consequently, to estimate the inverse of $\mathcal{L}_{\varepsilon}$, it suffices to estimate \begin{equation}\label{t-value} \begin{split} \Gamma_{\varepsilon}:=\sup_{a\in\mathbb{R}}|L^{-1}_\varepsilon(a)|\geq \sup_{k\in\mathbb{Z}^{d}}|L^{-1}_\varepsilon(k\cdot \omega)|. \end{split} \end{equation} In the following part, for ease of notation, we will drop the index $j$ in $l_{\varepsilon,j}(a)$ defined in \eqref{l-value}. That means $l_{\varepsilon}(a)$ stands for $l_{\varepsilon,j}(a)$. \subsubsection{Estimating the Fourier coefficients $L^{-1}_\varepsilon$ in \eqref{invert0} of the inverse operator $\mathcal{L}^{-1}_{\varepsilon}$}\label{sec:coe} For the matrix $L_{\varepsilon}(a)$ with special form defined in \eqref{ma-value}, once we obtain the infimum of $|l_\varepsilon(a)|$ in \eqref{l-value} for $a\in \mathbb{R}$, we get the estimates of $\Gamma_{\varepsilon}$ defined in \eqref{t-value}. 
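To see why such an infimum suffices, consider as an illustration a single $2\times 2$ Jordan block. By \eqref{invert},
\begin{equation*}
|L_{\varepsilon,j}^{-1}(a)|\leq C\left( |l_{\varepsilon,j}(a)|^{-1}+|\varepsilon|\,|l_{\varepsilon,j}(a)|^{-2}\right),
\end{equation*}
so a lower bound of the form $\inf_{a\in\mathbb{R}}|l_{\varepsilon,j}(a)|\geq c\,\sigma$, together with $|\varepsilon|\leq 2\sigma$, yields $|L_{\varepsilon,j}^{-1}(a)|\leq C(c^{-1}+2c^{-2})\,\sigma^{-1}$. The same bookkeeping for a block of size $n$ produces the factors $\sigma^{n-1}\cdot\sigma^{-n}$ appearing in \eqref{l-inverse} below.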
The following estimates are similar to those in \cite{Rafael13}, which considered only the $1-$dimensional case. We now present the details for the $n-$dimensional case. Note that the estimates we obtain also apply to the standard Sobolev space $H^m$, which allows us to conclude very quickly the results for the finitely differentiable case presented in Section~\ref{sec:finitely}. We first deal with two special cases, which throw some light on the general case. Of course, from the purely logical point of view, these special cases can be omitted since they are covered in the general discussion. We note that $\mathbf{Case~ 1}$ with $\varepsilon\in \mathbb{R}$ is the only case needed in the finite differentiability result, so it is worth dealing with it explicitly. $\mathbf{Case~ 1}$. When $\varepsilon\in \mathbb{R}$, we have \begin{equation*} \begin{split} |l_\varepsilon(a)|^{2}&=|-\varepsilon a^2 +\mathrm{i}a +\varepsilon \lambda_j|^{2}\\ &=(-\varepsilon a^2 +\varepsilon \lambda_j)^2+a^2\\ &=\varepsilon^2 a^4 +(1-2\varepsilon^2 \lambda_j)a^2+\varepsilon^2 \lambda_j^2. \end{split} \end{equation*} Take $G(v)=\varepsilon^2 v^2 +(1-2\varepsilon^2 \lambda_j)v+\varepsilon^2 \lambda_j^2$ with $v=a^2\geq 0$. It is obvious that $G(v)\geq G(0) =\varepsilon^2 \lambda_j^2$ since $DG(v)=2\varepsilon^2 v+(1-2\varepsilon^2 \lambda_j)> 0$ due to the smallness of $\varepsilon$. Therefore, we have \begin{equation}\label{lowbound} \begin{split} \inf_{a\in \mathbb{R}}|l_\varepsilon(a)|\geq |\varepsilon \lambda_j|. \end{split} \end{equation} Namely, \begin{equation*} \begin{split} \sup_{a\in\mathbb{R}}|l_\varepsilon(a)|^{-1}\leq|\varepsilon \lambda_j|^{-1}. \end{split} \end{equation*} Together with \eqref{invert}, we have that \begin{equation*} \Gamma_{\varepsilon}=\sup_{a\in\mathbb{R}}|L^{-1}_\varepsilon(a)|\leq |\varepsilon|^{-1}C_{\lambda} \end{equation*} for a positive constant $C_{\lambda}$ depending on the eigenvalues $\lambda_1,\lambda_2,\cdots,\lambda_n$. $\mathbf{Case~ 2}$. When $\varepsilon$ is purely imaginary, i.e. $\varepsilon=\mathrm{i}s$ with $ \sigma\leq|s|\leq2\sigma$. In this case, there exists a real root $a$ of $-sa^2+a+s\lambda_j=0$ such that $|l_\varepsilon(a)|=0$, since the discriminant $1+4s^2\lambda_j>0$ (by the smallness of $s$). Hence, the operator $\mathcal{L}_{\varepsilon}$ is unbounded if the small parameter $\varepsilon$ lies on the imaginary axis, which makes the contraction mapping principle inapplicable. We conjecture that no solutions of the equation \eqref{fixeq1} exist when $\varepsilon$ is purely imaginary because zero divisors can be considered as resonances. To study the analyticity in $\varepsilon$ of the function $U_{\varepsilon}$ satisfying \eqref{ffix}, it will be interesting to study the inverse of $\mathcal{L}_{\varepsilon}$ when $\varepsilon$ ranges over the complex domain $\Omega(\sigma,\mu)$. \begin{proposition}\label{control1} For $\Gamma_{\varepsilon}$ defined in \eqref{t-value}, when $\varepsilon \in \Omega(\sigma,\mu)$, we have \begin{equation*} \Gamma_{\varepsilon}\leq \sigma^{-1}C_{\lambda,\mu} \end{equation*} with a positive constant $C_{\lambda,\mu}$ depending on the eigenvalues $\lambda_1,\lambda_2,\cdots,\lambda_n$ and $\mu$. \end{proposition} \begin{proof} Fix \begin{equation*} \begin{split} \varepsilon=s_1+\mathrm{i}s_2. \end{split} \end{equation*} For $\varepsilon$ lying in the conical domain $\Omega(\sigma,\,\mu)$, we have $s_1\geq \mu|s_2|$, where $\mu>\mu_0$ with some sufficiently large positive constant $\mu_0$, and $\sigma ^2\leq s_1^2+s_2^2\leq 4\sigma^2$. 
Namely, \begin{equation}\label{small-var} \begin{split} \sqrt{1+\frac{1}{\mu^2}}\cdot\sigma\leq s_1\leq \sqrt{1+\frac{1}{\mu^2}}\cdot 2 \sigma. \end{split} \end{equation} Then, one obtains that \begin{equation}\label{inver-o} \begin{split} |l_\varepsilon(a)|^{2}&=|-\varepsilon a^2 +\mathrm{i}a +\varepsilon \lambda_j|^{2}\\ &=\left[ -s_1 (a^2-\lambda_j) -\mathrm{i}(s_2a^2-a-s_2\lambda_j)\right] ^2\\ &=s_1^2(a^2-\lambda_j)^2 +\left[s_2(a^2-\lambda_j)-a\right]^2. \end{split} \end{equation} If $\lambda_j<0$, it is obvious that \begin{equation}\label{negative} |l_\varepsilon(a)|^{2}\geq s_1^2(a^2-\lambda_j)^2 \geq s_1^2\lambda_j^2. \end{equation} The remaining task is to estimate $|l_\varepsilon(a)|^{2}$ in the case of $\lambda_j>0$. Since $a^2-\lambda_j=0$ holds at the point $a=\pm \sqrt{\lambda_j}$, we divide the domain of $a$ into two parts, for $0<\delta\ll 1$, denoted by \begin{equation*} \begin{split} I_{1}=[(1-\delta)\sqrt{\lambda_j},\,(1+\delta)\sqrt{\lambda_j}]\cup [(-1-\delta)\sqrt{\lambda_j},\,(-1+\delta)\sqrt{\lambda_j}],\quad I_{2}=\mathbb{R}\setminus I_{1}. \end{split} \end{equation*} When $a\in I_{2}$, we obtain the estimate \begin{equation}\label{i2} \begin{split} |l_\varepsilon(a)|^{2}\geq s_1^2(a^2-\lambda_j)^2 \geq s_1^2C_{\lambda}, \end{split} \end{equation} where $C_{\lambda}$ depends on the choice of $\delta$ as well. When $a\in I_{1}$, it is clear that $\left[s_2(a^2-\lambda_j)-a\right]=O(s_2)-a$. Therefore, \begin{equation}\label{i1} \begin{split} |l_\varepsilon(a)|^{2}&\geq \left[s_2(a^2-\lambda_j)-a\right]^2=\left[O(s_2)-a\right]^2\geq \frac{a^2}{2} \geq C_{\lambda}\geq C_{\lambda}s_1^2 \end{split} \end{equation} by the smallness of $s_1$ and $s_2$. Note that the last inequality in above estimate is very wasteful but we want to get estimates comparable to the ones we have in the other pieces. The inequalities \eqref{small-var}, \eqref{negative}, \eqref{i2} and \eqref{i1} allow that \begin{equation}\label{sup-l} \sup_{a\in\mathbb{R}}|l_{\varepsilon}(a)|^{-1}\leq s_1^{-1} C_{\lambda}\leq \sigma^{-1}C_{\lambda,\mu}. \end{equation} Combing with the formulas in \eqref{invert0} and \eqref{invert}, we obtain that \begin{equation}\label{l-inverse} \begin{split} \Gamma_{\varepsilon}=\sup_{a\in\mathbb{R}}|L^{-1}_\varepsilon(a)|\leq \sigma^{n-1}\cdot \sigma^{-n} C_{\lambda,\mu}\leq \sigma^{-1}C_{\lambda,\mu}. \end{split} \end{equation} \end{proof} It follows from Proposition~\ref{control1} that, for $\varepsilon\in \Omega(\sigma,\mu)$, \begin{equation}\label{inverse1} \begin{split} |\varepsilon L_{\varepsilon}^{-1}(a)| \leq \sigma\cdot \sigma^{-1} C_{\lambda,\mu}. \end{split} \end{equation} This inequality is crucial in the contraction mapping argument used in Section~\ref{sec:analyticitysolution}. \begin{remark} By \eqref{l-inverse}, we see that $\Gamma_{\varepsilon}$ can be bounded by $\sigma^{-1}$ when $\sigma$ is the minimum distance to the origin in the domain $\Omega(\sigma,\mu)$. Then it follows from \eqref{inverse1} that the bad factors $\sigma^{-1}$ can be dominated by the good factor $\sigma$. This is the reason why we choose $\sigma\leq |\varepsilon|\leq 2\sigma$, whose maximum and minimum distance to the origin are comparable. Note, however, that the estimate for $\varepsilon \mathcal{L}_{\varepsilon}^{-1}$ are independent of $\sigma$, so we obtain uniqueness of solutions for different $\sigma$, i.e. the solutions obtained for different $\sigma$ agree for the $\varepsilon$ in the intersection. 
\end{remark} \begin{remark}\label{in-jordan} We note that the method presented in this paper can accommodate small modifications leading to several generalizations. For example, we have the general equation \eqref{general} with $\mathbf{p}=\mathrm{diag}(\mathbf{p}_1,\cdots,\mathbf{p}_n),\,\mathbf{q}=\mathrm{diag}(\mathbf{q}_1,\cdots,\mathbf{q}_n)$ being constant diagonal matrices satisfying $\mathbf{p}_j,\,\mathbf{q}_j\in \mathbb{R}\setminus \{0\},\,j=1,\,\cdots,n$. In this general case, the only modification to the present exposition is that the calculation for $l_{\varepsilon}(a)$ in \eqref{inver-o} becomes \begin{equation*} \begin{split} |l_\varepsilon(a)|^{2}&=|-\varepsilon \mathbf{p}_ja^2 +\mathrm{i}\mathbf{q}_ja +\varepsilon \lambda_j|^{2}\\ &=\left[ -s_1 (\mathbf{p}_ja^2-\lambda_j) -\mathrm{i}(s_2\mathbf{p}_ja^2-\mathbf{q}_ja-s_2\lambda_j)\right] ^2\\ &=s_1^2(\mathbf{p}_ja^2-\lambda_j)^2 +\left[s_2(\mathbf{p}_ja^2-\lambda_j)-\mathbf{q}_ja\right]^2, \end{split} \end{equation*} which makes no difference in our discussion in Proposition~\ref{control1}. \end{remark} \subsection{Analyticity in $\varepsilon$ of the solution $U_{\varepsilon}$} \label{sec:analyticitysolution} As in the discussion in Section~\ref{sec:formulation}, we rewrite \eqref{fixeq3} as \begin{equation}\label{fin-fix} U(\theta) =\varepsilon\mathcal{L}_{\varepsilon}^{-1}\left[ f(\theta)-\hat{g}(U(\theta))\right]\equiv \mathcal{T}(U)(\theta) \end{equation} with $U$ being a function of $\varepsilon$ defined by $U_{\varepsilon}=U_{\varepsilon}(\theta)$. In addition, we define the operator $\mathcal{T}$, acting on functions analytic in $\varepsilon$, given by \begin{equation}\label{con-op} \mathcal{T}(U) \equiv\varepsilon\mathcal{L}_{\varepsilon}^{-1} \left[f-\hat{g}(U)\right] \end{equation} with $\mathcal{T}$ being a function of $(\varepsilon,U)$. Since we want to obtain the solution $U_{\varepsilon}$ depending analytically on $\varepsilon$, we reinterpret $\mathcal{T}$ above as an operator acting on the space $H^{\rho,m,\Omega}$ consisting of analytic functions of $\varepsilon$ taking values in $H^{\rho,m}$ with $\varepsilon$ ranging over the domain $\Omega(\sigma, \mu)$. We endow the space \begin{equation*} H^{\rho,m,\Omega}=\bigg\lbrace U:\varepsilon \rightarrow U_{\varepsilon}:\,\Omega\rightarrow H^{\rho,m} \ \text{is analytic and bounded} \bigg\rbrace \end{equation*} with the supremum norm \begin{equation*} \|U\|_{\rho,m,\Omega}=\sup_{\varepsilon \in \Omega}\|U_{\varepsilon}\|_{\rho,m}. \end{equation*} The supremum norm in $\varepsilon$ makes $H^{\rho,m,\Omega}$ a Banach space. Moreover, it is also a Banach algebra under multiplication when $m>d$ by Lemma~\ref{alge}. We now show that the operator $\mathcal{T}$ defined in \eqref{con-op} maps the space $H^{\rho,m,\Omega}$ into itself. \begin{lemma}\label{diff1} Assume $m>(d+2)$. If $U \in H^{\rho,m,\Omega}$, then $\mathcal{T}(U)\in H^{\rho,m,\Omega}$. Precisely, if the mapping $\varepsilon \rightarrow U_{\varepsilon}: \Omega\rightarrow H^{\rho,m}$ is complex differentiable, then the mapping $\varepsilon\rightarrow \mathcal{T}_{\varepsilon} (U_{\varepsilon}): \Omega\rightarrow H^{\rho,m}$ is complex differentiable as well. \end{lemma} \begin{proof} From the definition \eqref{con-op}, we know that the operator $\mathcal{T}$ is composed of the operators $\varepsilon \mathcal{L}_{\varepsilon}^{-1}$ and $\hat{g}$. 
It is clear that the map $\varepsilon \rightarrow \hat{g}(U_{\varepsilon}):\,\Omega\rightarrow H^{\rho,m}$ is complex differentiable since $\hat{g}$ is analytic and it does not depend on $\varepsilon$ explicitly. Therefore, it suffices to show that the map $\varepsilon\rightarrow \varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon}):\,\Omega\rightarrow H^{\rho,m}$ is complex differentiable when $ V_{\varepsilon}$, considered as a function from $\Omega$ to $H^{\rho,m}$, is a complex differentiable. We prove that the derivatives of $\varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon})$ with respect to $\varepsilon$ exist in the space $H^{\rho,m-2}$ instead of $H^{\rho,m}$. Then, we apply somewhat surprising Lemma~\ref{bootstrap} in the Appendix~\ref{sec:appendix} to conclude that the derivatives we consider indeed exist in the space $H^{\rho,m}$. For a fixed $\varepsilon\in \Omega$, we expand $V_{\varepsilon}(\theta)$ as \begin{equation*} V_{\varepsilon}(\theta) =\sum_{k\in\mathbb{Z}^{d}}\widehat{V}_{k,\,\varepsilon}e^{\mathrm{i} k\cdot\theta}, \end{equation*} with \begin{equation}\label{coeff0} \begin{split} \widehat{V}_{k,\,\varepsilon}=\int_{\mathbb{T}_{\rho}^d} V_{\varepsilon}(\theta)e^{-\mathrm{i} k\cdot\theta}d\theta \end{split} \end{equation} satisfying \begin{equation}\label{coeff-es} \begin{split} \left|\widehat{V}_{k,\,\varepsilon}\right|\leq \left\|V_{\varepsilon}\right\|_{\rho,m} e^{-\rho |k|} \left(|k|^2+1\right)^{-\frac{m}{2}}. \end{split} \end{equation} Taking the derivative with respect to $\varepsilon$ for \eqref{coeff0}, we have that \begin{equation}\label{coeff1} \begin{split} \frac{d}{d\varepsilon}\widehat{V}_{k,\,\varepsilon}=\int_{\mathbb{T}_{\rho}^d} \left(\frac{d}{d\varepsilon}V_{\varepsilon}\right)(\theta) e^{-\mathrm{i} k\cdot\theta}d\theta \end{split} \end{equation} with \begin{equation}\label{coeff-es1} \begin{split} \left|\frac{d}{d\varepsilon}\widehat{V}_{k,\,\varepsilon}\right|\leq \left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m}e^{-\rho |k|}(|k|^2+1)^{-\frac{m}{2}}. \end{split} \end{equation} It follows from Section~\ref{sec:linverse} that \begin{equation*} \begin{split} \varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon})=\sum_{k\in\mathbb{Z}^{d} } \varepsilon L_{\varepsilon}^{-1}(\omega\cdot k)\widehat{V}_{k,\,\varepsilon}e^{\mathrm{i} k\cdot\theta} \end{split} \end{equation*} with $L_{\varepsilon}^{-1}$ defined in \eqref{invert0}. By \eqref{sup-l}, we have that \begin{equation*} \begin{split} &\left|\frac{d}{d\varepsilon}\left[\varepsilon^n l_{\varepsilon}^{-n}(\omega\cdot k)\right]\right|\\ &=\big|n\cdot \varepsilon^{n-1}\cdot l_{\varepsilon}^{-n}(\omega\cdot k)-n\cdot \varepsilon^{n}\cdot l_{\varepsilon}^{-n-1}(\omega\cdot k)\cdot \left((\omega\cdot k)^2+\lambda\right)\big|\\ & \leq C_{n,\lambda,\mu}\cdot\sigma^{-1}|k|^2. 
\end{split} \end{equation*} Together with the formulas \eqref{invert0} and \eqref{invert}, we have that \begin{equation}\label{de-invert} \begin{split} &\left|\frac{d}{d\varepsilon}\left(\varepsilon L_{\varepsilon}^{-1}(\omega\cdot k)\widehat{V}_{k,\,\varepsilon}\right)\right|\\ &\leq\left|\frac{d}{d\varepsilon}\left(\varepsilon L_{\varepsilon}^{-1}(\omega\cdot k)\right)\right|\left|\widehat{V}_{k,\,\varepsilon}\right|+ \left|\varepsilon L_{\varepsilon}^{-1}(\omega\cdot k)\right|\left|\frac{d}{d\varepsilon}\widehat{V}_{k,\,\varepsilon}\right|\\ &\leq C_{n,\lambda,\mu}\cdot \sigma^{-1}|k|^2\left(\left|\widehat{V}_{k,\,\varepsilon}\right|+\left|\frac{d}{d\varepsilon}\widehat{V}_{k,\,\varepsilon}\right|\right). \end{split} \end{equation} Hence, \eqref{coeff-es}, \eqref{coeff-es1} and \eqref{de-invert} yield that \begin{equation*} \begin{split} &\left\|\frac{d}{d\varepsilon}\left(\varepsilon L_{\varepsilon}^{-1}(\omega\cdot k)\widehat{V}_{k,\,\varepsilon}\right)e^{\mathrm{i} k\cdot\theta} \right\|_{\rho,\,m-\tau}\\&\leq C_{n,\lambda,\mu}\cdot \sigma^{-1}|k|^2\left(\left|\widehat{V}_{k,\,\varepsilon}\right|+\left|\frac{d}{d\varepsilon}\widehat{V}_{k,\,\varepsilon}\right|\right) \left\|e^{\mathrm{i} k\cdot\theta}\right\|_{\rho,\,m-\tau}\\ &\leq C_{n,\lambda,\mu}\cdot\sigma^{-1} |k|^2\left(\left\|V_{\varepsilon}\right\|_{\rho,m}+\left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m} \right)e^{-\rho |k|}(|k|^2+1)^{-\frac{m}{2}}\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot e^{\rho |k|}(|k|^2+1)^{\frac{m-\tau}{2}}\\ & \leq C_{n,\lambda,\mu}\cdot\sigma^{-1}\left(\|V_{\varepsilon}\|_{\rho,m}+\left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m} \right)(|k|^2+1)^{-(\frac{\tau}{2}-1)}. \end{split} \end{equation*} By $\sum_{|k|=\kappa}1=2^d \kappa^{d-1},\,k=(k_1,\cdots,k_d)\in \mathbb{Z}^d$ and choosing $d+2<\tau\leq m$, we obtain that \begin{equation*} \begin{split} \sum_{k\in \mathbb{Z}^d}(|k|^2+1)^{-(\frac{\tau}{2}-1)}\leq C_d\sum_{\kappa=0}^{\infty}(\kappa^2+1)^{-\frac{\tau-d-1}{2}} < \infty. \end{split} \end{equation*} As a consequence, it follows from Weierstrass M-test that the series \begin{equation*} \begin{split} \sum_{k\in \mathbb{Z}^d} \frac{d}{d\varepsilon}\left(\varepsilon L_{\varepsilon}^{-1} (\omega\cdot k)\widehat{V}_{k,\,\varepsilon}\right)e^{\mathrm{i} k\cdot\theta} \end{split} \end{equation*} converge uniformly on $\varepsilon \in \Omega$ in the space $H^{\rho,m-\tau}$. The fact that these formal derivatives are uniformly convergent shows that they are the true derivatives. Namely, \begin{equation*} \begin{split} \frac{d}{d\varepsilon}\left(\varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon})\right)= \sum_{k\in \mathbb{Z}^d} \frac{d}{d\varepsilon}\left(\varepsilon L_{\varepsilon}^{-1} (\omega\cdot k)\widehat{V}_{k,\,\varepsilon}\right)e^{\mathrm{i} k\cdot\theta}. \end{split} \end{equation*} Therefore, we have that the mapping $\varepsilon \rightarrow \varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon}):\Omega\rightarrow H^{\rho,m-\tau}$ is complex differentiable. Since $H^{\rho,m} \subset H^{\rho,m-\tau}$, we conclude that the mapping $\varepsilon \rightarrow \varepsilon \mathcal{L}_{\varepsilon}^{-1}(V_{\varepsilon}):\Omega\rightarrow H^{\rho,m}$ is complex differentiable with derivatives in $H^{\rho,m-\tau}$ by Lemma~\ref{bootstrap} in Appendix~\ref{sec:appendix}. 
\end{proof} \subsection{Existence of the fixed point} \label{sec:existencefp} The proof of the existence of the solutions for equation \eqref{fin-fix} is based on the fixed point theorem in the Banach space $H^{\rho,m,\Omega}$. We consider a ball $\mathcal{B}_{r}(0)$ around the origin in $H^{\rho,m,\Omega}$ with radius $r>0$ such that $\mathcal{T}(\mathcal{B}_{r}(0))\subset \mathcal{B}_{r}(0)$ and $\mathcal{T}$ is a contraction in the ball $\mathcal{B}_{r}(0)$. By \eqref{inverse1}, we get \begin{equation}\label{cons} \|\varepsilon \mathcal{L}_{\varepsilon}^{-1}\|_{\rho,m,\Omega}\leq C_{\lambda,\mu}. \end{equation} Moreover, it follows from \eqref{nont0} ( $\hat{g}(0)=D\hat{g}(0)=0$) and Proposition~\ref{com-regu} that the Lipschitz constant of the composition operator $\hat{g}\circ U$ is bounded by a constant times the radius $r$ when $U\in \mathcal{B}_{r}(0)$. Therefore, for $U\in \mathcal{B}_{r}(0)$, one has \begin{equation*} \begin{split} \|\mathcal{T}(U)\|_{\rho, m,\Omega} &\leq\|\mathcal{T}(0)\|_{\rho, m,\Omega}+ \|\mathcal{T}(U)- \mathcal{T}(0)\|_{\rho, m,\Omega}\\ &\leq \|\varepsilon \mathcal{L}_{\varepsilon}^{-1}\|_{\rho,m,\Omega}\left( \|f\|_{\rho, m,\Omega}+ \| \hat{g}(U)-\hat{g}(0)\|_{\rho, m,\Omega}\right) \\ &\leq C_{\lambda,\mu}\left( \|f\|_{\rho, m,\Omega}+Lip( \hat{g})\cdot \|U\|_{\rho, m,\Omega}\right) \\ &\leq C_{\lambda,\mu}\left( \|f\|_{\rho, m,\Omega}+O(r)\cdot \|U\|_{\rho, m,\Omega}\right) \leq r, \end{split} \end{equation*} whenever we take $f$ and $r$ such that \begin{equation}\label{small-r} \|f\|_{\rho, m,\Omega}\leq \frac{r}{2C_{\lambda,\mu}},\, C_{\lambda,\mu}O(r)<\frac{1}{2}. \end{equation} Note that we need the smallness assumption for $f$ in this case. Thus, $\mathcal{T}(\mathcal{B}_{r}(0)) \subset\mathcal{B}_{r}(0)$. For any elements $U_1,\,U_2\in\mathcal{B}_{r}(0)$, we have that \begin{equation*} \begin{split} \|\mathcal{T}(U_1)-\mathcal{T}(U_2)\|_{\rho, m,\Omega} &=\|\varepsilon\mathcal{L}_{\varepsilon}^{-1} \hat{g}(U_1)-\varepsilon\mathcal{L}_{\varepsilon}^{-1} \hat{g}(U_2)\|_{\rho, m,\Omega}\\ &\leq C_{\lambda,\mu} O(r)\|U_1-U_2\|_{\rho, m,\Omega}\\ &\leq \frac{1}{2}\|U_1-U_2\|_{\rho, m,\Omega}. \end{split} \end{equation*} Therefore, $\mathcal{T}$ is a contraction in the ball $ \mathcal{B}_{r}(0)$ satisfying \eqref{small-r}. It follows from the fixed point theorem in the Banach space $H^{\rho,m,\Omega}$ that there exists a unique solution $U\in H^{\rho,m,\Omega}$ analytic in $\varepsilon$ for equation \eqref{fixeq}. \begin{remark} When we consider the operator $\mathcal{T}$ defined in \eqref{con-op} in the Banach space $H^{\rho,m,\Omega}$, the solution $U_{\varepsilon}$ obtained via fixed point theorem does not lose any regularity on $\varepsilon$. That is, the solution $U_{\varepsilon}$ naturally depends analytically on the parameter $\varepsilon$. However, in the finitely differentiable case, when we take $\varepsilon\in \widetilde{\Omega}\subset \mathbb{R}$ instead of $\varepsilon\in\Omega \subset \mathbb{C}$, the contraction mapping principle is not enough to get a solution $U_{\varepsilon}$ with optimal regularity in $\varepsilon$ since when $\rho=0$, the space $H^{\rho,m,\widetilde{\Omega}}$ is no longer a Banach space with supremum in $\varepsilon$. We will combine with the implicit function theorem to get the optimal regularity. (See Section~\ref{sec:proof-fi} for more details). 
It is worth pointing out that in the low regularity case, especially in $H^1$, we need a more sophisticated contraction argument since there is no Lipschitz property for the composition operator $\hat{g}\circ u$ in $H^1$. (See Section~\ref{sec:lowregularity}). \end{remark} \begin{remark}\label{noo-de} We emphasize that the general solution $U_{\varepsilon}$ obtained above may fail to be differentiable in $\varepsilon$ at the origin $\varepsilon=0$ since we do not impose any Diophantine condition on the frequency $\omega$. Indeed, if $U_{\varepsilon}$ were differentiable, denoting the derivative $U^{(1)}(\theta):=\frac{dU_{\varepsilon}(\theta)}{d\varepsilon}\mid_{\varepsilon=0}$ and assuming $U_{\varepsilon}=0$ at $\varepsilon=0$, then, taking the derivative in $\varepsilon$ at $\varepsilon=0$ in equation \eqref{fixeq}, $U^{(1)}$ would satisfy
\begin{equation}\label{dio}
\begin{split}
\left( \omega\cdot \partial_{\theta}\right) U^{(1)}(\theta) =f(\theta).
\end{split}
\end{equation}
If $\omega$ is sufficiently Liouvillean (e.g., $|\omega\cdot k|\leq \exp (-|k|^2)$ for infinitely many $k$; such $\omega$ can be easily constructed), we can easily construct an analytic function $f$ so that $U^{(1)}(\theta)$ solving \eqref{dio} cannot be even a distribution. Indeed, the formal solution of \eqref{dio} has Fourier coefficients $\widehat{U^{(1)}}_{k}=\widehat{f}_{k}/(\mathrm{i}\,\omega\cdot k)$, so choosing, say, $|\widehat{f}_{k}|=e^{-|k|}$ along a subsequence with $|\omega\cdot k|\leq \exp(-|k|^2)$ produces coefficients growing faster than any power of $|k|$, whereas the Fourier coefficients of a distribution grow at most polynomially. \end{remark} \begin{lemma}\label{continuous} For the solution $U_{\varepsilon}$ constructed above, we have that the mapping $\varepsilon\rightarrow U_{\varepsilon}$ is continuous when $\varepsilon \rightarrow 0$. \end{lemma} \begin{proof} We take $\rho_1>\rho>0$ so that both the space $H^{\rho_1,m}$ and the space $H^{\rho,m}$ satisfy the assumptions of Theorem~\ref{mainthm}. Denote by $U^1_{\varepsilon},\,U_{\varepsilon}$ the solutions obtained by applying Theorem~\ref{mainthm} to $H^{\rho_1,m}$, $H^{\rho,m}$ respectively. Then, we observe that $U^1_{\varepsilon}=U_{\varepsilon}$ by $U^1_{\varepsilon}\in H^{\rho_1,m}\subseteq H^{\rho,m}$ and the uniqueness conclusion in $H^{\rho,m}$. Moreover, we note that the set $\left\lbrace U^1_{\varepsilon}\,|\,\varepsilon\in \overline{\Omega} \right\rbrace $, where $\overline{\Omega}$ denotes the closure of $\Omega$, is bounded in $H^{\rho_1,m}$ and hence it is precompact in the $H^{\rho,m}$ topology. To show that $U_{\varepsilon}$ is continuous in $\varepsilon$ at $\varepsilon=0$, it suffices to verify that the graph $\mathcal{G}$ of $U$, that is,
\begin{equation*}
\mathcal{G}:=\left\lbrace (\varepsilon,\,U_{\varepsilon})|\, \varepsilon\in \overline{\Omega} \right\rbrace,
\end{equation*}
is compact in the $H^{\rho,m}$ topology. Since a ball in $H^{\rho_1,m}$ is precompact in $H^{\rho,m}$, we just need to prove that $\mathcal{G}$ is closed. Indeed, the sequence $(\varepsilon_n,\,U_{\varepsilon_n})\in \mathcal{G}$ if and only if \eqref{fixeq2} is satisfied, that is
\begin{equation*}
\mathcal{L}_{\varepsilon_n}(U_{\varepsilon_n}(\theta)) =\varepsilon_n f(\theta)-\varepsilon_n\hat{g}(U_{\varepsilon_n}(\theta)).
\end{equation*}
Taking the limits $\varepsilon_n\rightarrow \varepsilon_{*},\,U_{\varepsilon_n}\rightarrow U^{*}$ as $n\rightarrow \infty$, one can obtain that
\begin{equation*}
\mathcal{L}_{\varepsilon_*}(U^{*}(\theta)) =\varepsilon_* f(\theta)-\varepsilon_*\hat{g}(U^{*}(\theta)).
\end{equation*}
Hence, we conclude that $(\varepsilon_*,\,U^{*})\in \mathcal{G}$.
\end{proof} \section{Finitely differentiable case: Proofs of Theorem~\ref{theom-fi} and Theorem~\ref{theom-conti}} \label{sec:finitely} In this section we present the proof of Theorem~\ref{theom-fi}, which concerns the highly differentiable forcing $f$. We also prove Theorem~\ref{theom-conti} in which the forcing is assumed to be $L^2$ or $H^1$. The method used for the finitely differentiable case, especially $H^1$, is different from that for the analytic case. \subsection{Proof of Theorem~\ref{theom-fi}}\label{sec:proof-fi} When the forcing term $f$ and the nonlinear term $g$ are finitely differentiable, we consider $\varepsilon \in \mathbb{R}$ in equation \eqref{000.0}. \subsubsection{Regularity in $\varepsilon$}\label{sec:finisolution} In order to get solutions $U_{\varepsilon}$ with some regularity in $\varepsilon$, we need to consider the operator $\mathcal{T}$ defined in \eqref{fixeq3} acting on the space $H^{m,\widetilde{\Omega}}$ consisting of differentiable functions of $\varepsilon$ taking values in $H^{m}$ with $\varepsilon$ ranging over the domain $\widetilde{\Omega}$ defined in \eqref{fini-para}. Moreover, we endow $H^{m,\widetilde{\Omega}}$ with the supremum norm \begin{equation}\label{suprenorm2} \|U\|_{H^{m,\widetilde{\Omega}}}=\sup_{\varepsilon \in \widetilde{\Omega}}\|U_{\varepsilon}\|_{H^{m}}, \end{equation} which is similar to the analytic case in Section~\ref{sec:analyticitysolution}. Note that $H^m$ is a Banach space and it is a Banach algebra when $m>\frac{d}{2}$ by Proposition~\ref{alge}. However, $H^{m,\widetilde{\Omega}}$ (in contrast with the analytic version $H^{\rho,m,\widetilde{\Omega}}$) is not a Banach space with the supremum norm defined in \eqref{suprenorm2}. In this case, if we just apply the fixed point theorem to the proof of Theorem~\ref{theom-fi} in the space $H^{m,\widetilde{\Omega}}$, we may lose some regularity in the argument $\varepsilon$. To avoid this shortcoming, we will combine the contraction argument with the implicit function theorem such that the solution $U_{\varepsilon}$ with optimal regularity in $\varepsilon$ can be obtained. More precisely, as shown in Section~\ref{sec:fini} at $\mathbf{Step~1}$, for some $\varepsilon_0 \in \widetilde{\Omega}$, we first produce a solution $U_{\varepsilon_0}$ of equation \eqref{fixeq3} such that $\mathbf{T}(\varepsilon_0,U_{\varepsilon_0})=0$, where $\mathbf{T}$ is defined in \eqref{impli}. To get the optimal regularity of the map taking $\widetilde{\Omega}$ to $H^m$, we apply the classic implicit function theorem (we refer to the references \cite{dieu, loo, Krantz}) for the operator $\mathbf{T}$. In this process, it is crucial to study the differentiability of the operator $\mathbf{T}$, mapping $\widetilde{\Omega}\times H^m $ to $H^{m}$, with respect to the arguments $(\varepsilon,U)$ as well as the invertibility of $D_2\mathbf{T}(\varepsilon_0, U_{\varepsilon_0})$. By equation \eqref{fixeq3}, we can easily get the differentiability of the operator $\mathcal{T}$ with respect to the argument $U\in H^m$ since the operator $\mathcal{L}_\varepsilon$ are linear and the differentiability properties of the left composition operator $\hat g \circ U$ are already studied carefully in \cite{kappe03,AZ90}. The key to our results will be the differentiability of the operator $\mathcal{T}$ in \eqref{fixeq3} with respect to $\varepsilon$ as the following: \begin{proposition}\label{derivatives} Fix any $m \in \mathbb{N}$ with $m>\frac{d}{2}$ and $\sigma > 0$. 
We consider the map that $\varepsilon \mathcal{L}^{-1}_\varepsilon \in B(H^m, H^m)$ for every $\varepsilon \in \widetilde{\Omega}$, where $B(H^m, H^m)$ denotes the set of bounded operators from the space $H^m$ to itself. For any $l \in \mathbb{N}$, the map $\varepsilon \rightarrow \varepsilon \mathcal{L}^{-1}_\varepsilon$ is $C^l$ considered as a mapping from $\widetilde{\Omega}$ to $B(H^m, H^m)$. Moreover, for any $l\in \mathbb{N}$ and $\varepsilon \in \widetilde{\Omega}$, $\frac{d^l}{d \varepsilon^l} (\varepsilon \mathcal{L}_\varepsilon^{-1})\in B(H^m, H^m)$. As a matter of fact, something stronger is true. The map $\varepsilon \rightarrow \varepsilon \mathcal{L}^{-1}_\varepsilon$ is real analytic for $\varepsilon \in \widetilde{\Omega}$ and the radius of analyticity can be bounded uniformly for all $\varepsilon \in \widetilde{\Omega}$. \end{proposition} \begin{proof} The key to the proof is the observation that, as noted in \eqref{lowbound} in Section~\ref{sec:coe}, $|l_\varepsilon(a)| \ge |\varepsilon| |\lambda_j|\ge \sigma |\lambda_j| $ for $\varepsilon\in \widetilde{\Omega}$. To study the expansion in powers of $\delta$ for $l^{-1}_{\varepsilon + \delta}(a) $, we rewrite: \begin{equation}\label{goodformu} \begin{split} l^{-1}_{\varepsilon + \delta}(a) &= \left( (\varepsilon +\delta)( \lambda_j -a^2) + \mathrm{i} a \right)^{-1} \\ &=\left(\varepsilon ( \lambda_j -a^2) + \mathrm{i} a + \delta( \lambda_j -a^2) \right)^{-1} \\ &= \left(\varepsilon ( \lambda_j -a^2) + \mathrm{i} a \right)^{-1} \left( 1 + \delta \frac{ \lambda_j -a^2}{\varepsilon ( \lambda_j -a^2) + \mathrm{i} a} \right)^{-1}. \end{split} \end{equation} It is easy to see that the factor $ \frac{ \lambda_j -a^2}{\varepsilon ( \lambda_j -a^2) + \mathrm{i} a}$ is bounded uniformly in $a$ (compute the limit as $|a|$ tends to infinity and observe that the function is continuous in $a$ since the denominator does not vanish) and uniformly in $\varepsilon$ when $\varepsilon$ ranges in an interval bounded away from zero. Therefore, we can expand $\left( 1 + \delta \frac{ \lambda_j -a^2}{\varepsilon ( \lambda_j -a^2) + \mathrm{i} a} \right)^{-1} $ in \eqref{goodformu} in powers of $\delta$ using the geometric series formula. Moreover, the radii of convergence are bounded uniformly in $\varepsilon\in \widetilde{\Omega}$ and the values of the coefficients in the expansion are also bounded uniformly in $a \in \mathbb{R}, \varepsilon \in \widetilde{\Omega}$. Using the formula~\eqref{invert} in Section~\ref{sec:multiplier} for the inverse $\mathcal{L}^{-1}_{\varepsilon}$, we also obtain that the matrices $L_{\varepsilon + \delta}^{-1}$ can be expanded in powers of $\delta$ with coefficients that are bounded uniformly in $a \in \mathbb{R}, \varepsilon \in \widetilde{\Omega}$. We note that the operator $\mathcal{L}^{-1}_{\varepsilon}$ are multiplier operators (in the sense used in Fourier series). That is, for $\widehat{f}_{k}$ being the Fourier coefficients of function $f$ in the space $H^{m}$, the Fourier coefficients $\widehat{(\mathcal{L}^{-1}_{\varepsilon}f)}_{k}$ of function $(\mathcal{L}^{-1}_{\varepsilon}f)$ in the space $H^{m}$ have the structure: \begin{equation}\label{fou-coe} \widehat{(\mathcal{L}^{-1}_{\varepsilon}f)}_{k}=L^{-1}_{\varepsilon,k}\widehat{f}_{k}, \end{equation} where each $L^{-1}_{\varepsilon,k}$ is $n\times n$ matrix (see \eqref{invert} for details). From the discussion in above paragraph, we know that, for each $k$, $L^{-1}_{\varepsilon,k}$ is uniformly analytic in $\varepsilon$. 
Thus, we conclude that the operator $\mathcal{L}^{-1}_{\varepsilon}$ is analytic in $\varepsilon$ by \eqref{fou-coe}. In addition, we know that the Fourier indices $k$ only enter into the multipliers $L_{\varepsilon,k}^{-1}$ through $\omega \cdot k$ and the supremum of $L_{\varepsilon,k}^{-1}$ over the Fourier index is bounded by the supremum in $a$, which is studied in the previous Section~\ref{sec:linverse}. Together with the fact that the norms of functions in Sobolev spaces are measured by size of the Fourier coefficients, we have that, for all $m>\frac{d}{2}$, the norm of $\mathcal{L}^{-1}_{\varepsilon}$ considered as an operator from the Sobolev space $H^m$ to itself is defined by \begin{equation}\label{sup} \left\|\mathcal{L}^{-1}_{\varepsilon} \right\|_{H^m\rightarrow H^m}=\sup_{k\in \mathbb{Z}^d}\|L^{-1}_{\varepsilon,k}\|. \end{equation} Note that the norms of $L^{-1}_{\varepsilon,k}$ are just finite-dimensional norms. As a consequence, we can bound $\| \mathcal{L}_{\varepsilon }^{-1} \|_{H^m\rightarrow H^m}$ by the supremum of the multipliers defined in \eqref{sup}. Therefore, when we write $\mathcal{L}^{-1}_{\varepsilon+\delta}=\sum_{n=0}^{\infty}\mathcal{L}^{-1}_{\varepsilon,n}\delta^n$, $\| \mathcal{L}_{\varepsilon,n}^{-1} \|_{H^m\rightarrow H^m}$ can be bounded by the way of \eqref{sup}. That means $\frac{d^l}{d \varepsilon^l} (\varepsilon \mathcal{L}_\varepsilon^{-1})\in B(H^m, H^m)$ for every $\varepsilon\in \widetilde{\Omega}$. \end{proof} \subsubsection{Existence of the solutions}\label{sec:fini} With all the above preliminaries established, now we turn to finishing the proof of Theorem~\ref{theom-fi}. We divide the proof into two steps. First, for a fixed $\varepsilon\in \widetilde{\Omega}$, we find a fixed point $U_{\varepsilon}$ of $\mathcal{T}$ defined in \eqref{fixeq3} by considering a domain $\mathcal{P} \subset H^{m}$ with $\mathcal{T}(\mathcal{P})\subset \mathcal{P} $ on which $\mathcal{T}$ is a contraction. Secondly, we use the classical implicit function theorem to verify that the solution $U_{\varepsilon}$ we obtained in the first step possesses the optimal regularity in $\varepsilon$. Namely, we conclude that $U_{\varepsilon} \in H^{m,\widetilde{\Omega}}$. $\mathbf{Step~1}$. As we state in Section~\ref{sec:small}, there are two ways to prove that $\mathcal{T}$ is a contraction. One is that we choose a small ball in $H^{m}$ such that $Lip(\hat{g})$ is small in this ball. Meanwhile, we impose smallness condition on $f$ in this ball. In this way, the operator $\mathcal{T}$ maps this ball into itself and it is a contraction in this ball. (We omit the details here since it is similar to Section~\ref{sec:existencefp}). Another is that we assume that $\emph{Lip}(\hat{g})$ (or $D\hat{g}$) is globally small in the whole of $\mathbb{R}^n$. In this case, for a fixed $\varepsilon\in \widetilde{\Omega}$ and $U_1,\,U_2\in H^{m}$, it follows from \eqref{cons} that \begin{equation*} \begin{split} \|\mathcal{T}(U_1)-\mathcal{T}(U_2)\|_{H^{m}} &= \|\varepsilon\mathcal{L}_{\varepsilon}^{-1}(\hat{g}(U_1)-\hat{g}(U_2))\|_{H^{m}}\\ &\leq C_{\lambda,\mu} Lip(\hat{g})\cdot \|U_1-U_2\|_{H^{m}}\\ &\leq \frac{1}{2}\|U_1-U_2\|_{H^{m}}. \end{split} \end{equation*} This makes $\mathcal{T}$ a contraction in the whole space $H^{m}$. In summary, we get a fixed point $U_{\varepsilon_0}\in H^m$ of the equation \eqref{con-op} for some $\varepsilon_0\in \widetilde{\Omega}$. $\mathbf{Step ~2}$. 
It follows from Proposition~\ref{com-regu} and Proposition~\ref{derivatives} that the operator $\mathcal{T}$ is $C^l$ with respect to the argument $(\varepsilon,U)$. Namely, $\mathbf{T}(\varepsilon,U):=U-\mathcal{T}(\varepsilon,U)$ is $C^l$ on the domain $\widetilde{\Omega}\times H^{m}$. Based on $\mathbf{Step ~1}$, we have $\mathbf{T}(\varepsilon_{0}, U_{\varepsilon_0})=0$. Moreover, $D_2\mathbf{T}(\varepsilon_{0},U_{\varepsilon_0})=Id-D_2\mathcal{T}(\varepsilon_{0},U_{\varepsilon_0})=Id-\varepsilon_0\mathcal{L}_{\varepsilon_0}^{-1}D\hat{g}(U_{\varepsilon_0})$ is invertible since $\varepsilon_0\mathcal{L}_{\varepsilon_0}^{-1}$ is bounded and $D\hat{g}(U_{\varepsilon_0})$ is sufficiently small. Therefore, by the implicit function theorem, there exists an open neighborhood of $(\varepsilon_0,U_{\varepsilon_0})$ contained in $\widetilde{\Omega}\times H^{m}$ and a $C^l$ function $U_{\varepsilon}$ satisfying $\mathbf{T}(\varepsilon,U_{\varepsilon})=0$ on this neighborhood. \subsection{Proof of Theorem~\ref{theom-conti}} \label{sec:lowregularity} In this section, we will prove Theorem~\ref{theom-conti} in a different way from the first two cases (the analytic and highly differentiable cases). The key issue concerns the properties of the composition operator $\hat{g}\circ u$ in the space $H^{1}(\mathbb{T}^d)$ or the space $L^2(\mathbb{T}^d)$. \begin{proposition}\label{continuouscomposition} For the composition operator defined by: \begin{equation}\label{compositiondefined} \mathcal{C}_{\hat g}[u](\theta)={\hat g}(u(\theta)), \end{equation} we have the following properties: If we consider $\mathcal{C}_{\hat{g}}$ acting on $L^2(\mathbb{T}^d,\mathbb{R}^n)$ and assume that $\hat{g}$ is globally Lipschitz continuous on $\mathbb{R}^n$, then \begin{equation*} \mathcal{C}_{\hat{g}}:\,L^{2}(\mathbb{T}^d,\mathbb{R}^n)\rightarrow L^{2}(\mathbb{T}^d,\mathbb{R}^n) \end{equation*} is Lipschitz continuous. If we consider $\mathcal{C}_{\hat g}$ acting on $H^1(\mathbb{T}^d, \mathbb{R}^n)$ and assume that $\hat{g}\in C^{1+Lip}$, then \begin{equation*} \mathcal{C}_{\hat g}:\,H^{1}(\mathbb{T}^d,\mathbb{R}^n)\rightarrow H^{1}(\mathbb{T}^d,\mathbb{R}^n) \end{equation*} is bounded and continuous. In particular, given $\epsilon > 0$, there is $\delta:=\delta(\epsilon, \text{Lip}(\hat g), \hat g(0) ) > 0$ so that $\|u\|_{H^1} \le \delta $ implies $\| \mathcal{C}_{\hat g}(u) \|_{H^1} \le \epsilon$. \end{proposition} \begin{proof} Since $\hat{g}$ is globally Lipschitz continuous on $\mathbb{R}^n$, we denote $M=\text{Lip}(\hat g)$ (for ease of notation, we will use $M$ in what follows). For $u,v\in L^{2}(\mathbb{T}^d,\mathbb{R}^n)$, we get \begin{equation*} |\hat{g}( u(\theta))-\hat{g}(v(\theta))|\leq M |u(\theta)-v(\theta)|. \end{equation*} Therefore, \begin{equation*} \|\hat{g}\circ u-\hat{g}\circ v\|_{L^2}\leq M\|u-v\|_{L^2}. \end{equation*} We refer to \cite{AZ90,Kinder00} for the properties of the operator $\mathcal{C}_{\hat g}$ mapping the space $H^1(\mathbb{T}^d,\mathbb{R}^n)$ to itself. \end{proof} \begin{remark}\label{small-f} We emphasize that for our results in $L^2$ and $H^{m}\,\,(m>\frac{d}{2})$, we need to assume that $M=\text{Lip}(\hat g)$ is globally arbitrarily small. This allows us to obtain that the operator $\mathcal{T}$ in \eqref{fixeq3} is a contraction in the whole space. However, due to the lack of Lipschitz regularity for the operator $\mathcal{C}_{\hat g}$ acting on the space $H^1$ (see \cite{AZ90}), we need to choose a ball belonging to $H^1$ so that the operator $\mathcal{T}$ maps this ball into itself.
Note that the chosen ball does not need to be small. We also do not require that the forcing is small in $H^1$. \end{remark} Now, we go back to the proof of Theorem~\ref{theom-conti}. \begin{proof} First we give the proof for the result in the space $L^2$. By Parseval's identity, we know that the $L^2$-norm is also expressible in terms of the Fourier coefficients. Together with the bound of $\varepsilon\mathcal{L}_{\varepsilon}^{-1}$ in \eqref{cons}, we have that $\mathcal{T}(L^2)\subset L^2$. Also, for $u,v\in L^2$, one has \begin{equation*} \|\mathcal{T}(u)-\mathcal{T}(v)\|_{L^2}=\|\varepsilon\mathcal{L}_{\varepsilon}^{-1}\big(\hat{g}\circ u-\hat{g}\circ v\big)\|_{L^2}\leq C_{\lambda,\mu}M\|u-v\|_{L^2}. \end{equation*} It follows from $M:=\text{Lip}(\hat g)\ll 1$ in assumption $\mathbf{\widetilde{H}}$ that $\mathcal{T}$ is a contraction in $L^2$. This gives the $L^2$ result. Now, we present the proof for the result in $H^1$. Using the interpolation inequality in Lemma~\ref{interpolation}, we obtain, for $0<s<1$, that \begin{equation}\label{sconvergence} \begin{split} \|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^s} &\leq C_{\lambda,\mu}\|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{L^2}^{1-s}\|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^1}^{s}\\ &\leq C_{\lambda,\mu} (M^n)^{1-s}\|\mathcal{T}(u)-u\|_{L^2}^{1-s}\|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^1}^{s}. \end{split} \end{equation} Note that $(M^n)^{1-s}$ decreases exponentially in $n$. The remaining task is to show that $\|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^1}$ in \eqref{sconvergence} can be bounded independently of the iteration step $n$. As a matter of fact, from Proposition~\ref{continuouscomposition}, we know that $u\in H^1$ implies $\hat{g}\circ u\in H^1$. Moreover, it is easy to check that \begin{equation*} \|\hat{g}\circ u\|_{H^1} \leq M\|u\|_{H^1}. \end{equation*} Therefore, for the operator $\mathcal{T}$ defined in \eqref{fixeq3}, we get \begin{equation*} \|\mathcal{T}(u)\|_{H^1} =\|\varepsilon\mathcal{L}_{\varepsilon}^{-1}\left(f+\hat{g}\circ u\right)\|_{H^1}\leq C_{\lambda,\mu}\|f\|_{H^1}+C_{\lambda,\mu}M\|u\|_{H^1}. \end{equation*} We now choose a ball $B_r(0)$ centered at the origin in $H^1$ such that $B_r(0)$ is mapped by $\mathcal{T}$ into itself. This can be achieved whenever we take $r$ such that $C_{\lambda,\mu}\|f\|_{H^1}+C_{\lambda,\mu}Mr\leq r$, which is equivalent to \begin{equation}\label{rad} r\geq \frac{C_{\lambda,\mu}\|f\|_{H^1}}{1-C_{\lambda,\mu}M}. \end{equation} This can be done since $M$ is small enough. Note that the radius $r$ chosen by \eqref{rad} depends on the function $f$, which can be any function in $H^1$. As a consequence, for every $u\in B_r(0)$ and $n\in \mathbb{N}$, we obtain that $\mathcal{T}^n(u)\in B_r(0)$ and \begin{equation*} \|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^1} \leq 2r. \end{equation*} Thus, \eqref{sconvergence} becomes \begin{equation}\label{sconvergence1} \begin{split} \|\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\|_{H^s} \leq C_{\lambda,\mu} (M^n)^{1-s}(2r)^s\|\mathcal{T}(u)-u\|_{L^2}^{1-s}, \end{split} \end{equation} which indicates that the sequence $\mathcal{T}^{n}(u)$ has a limit $u^*\in H^s$, so the fixed point obtained by the contraction mapping in $L^2$ actually lies in $H^s$. Note that \eqref{sconvergence1} allows one to bound the distance in $H^s$ from an initial guess to the true solution.
That is, \begin{equation*} \begin{split} \|u^*-u\|_{H^s} &=\|\lim_{n\rightarrow \infty}\mathcal{T}^n(u)-u\|_{H^s}\\ &=\|\sum_{n=0}^{\infty}\left[\mathcal{T}^{n+1}(u)-\mathcal{T}^n(u)\right]\|_{H^s} \\ &\leq C_{\lambda,\mu}(2r)^{s}\|\mathcal{T}(u)-u\|_{L^2}^{1-s} \sum_{n=0}^{\infty}(M^n)^{1-s}\\ &\leq C_{\lambda,\mu}(2r)^{s}(1-M^{1-s})^{-1}\|\mathcal{T}(u)-u\|_{L^2}^{1-s}. \end{split} \end{equation*} \end{proof} \begin{remark} As shown in \cite{AZ90}, the conditions for composition operators mapping $H^{1+\delta}$ to itself are very strict. There are many mapping results for the composition in $H^{1 +\delta} \cap L^\infty$, but it is not clear how the $L^\infty$ norm behaves under the Fourier multipliers. Therefore, using the methods of this paper, it seems that there is a gap between the treatments possible for the forcing: either $H^s\,\,(0<s \leq 1)$ or $H^m\,\,(m > d/2)$. \end{remark} \section{Results for PDEs} \label{sec:pde} An important observation is that, since the treatment of \eqref{000.0} did not use any properties of the dynamics of the equation, we can treat even ill-posed partial differential equations. The ill-posed equation \eqref{b-e} is a showcase of the possibilities of our method for model \eqref{000.0}. The heuristic principle is that we can think of evolutionary PDEs as models similar to \eqref{000.0} in which the role of the phase space $\mathbb{R}^n$ is taken up by a function space (of functions of the spatial variable $x$). Note that the nonlinearities in PDE models can be not just compositions but more complicated operators (even unbounded). For example, the non-linearity $(u^2)_{xx}$ in equation \eqref{b-e} is an unbounded operator from a function space to itself. However, formulating the fixed point problem in the Banach space we choose overcomes this difficulty. (See Section~\ref{sec:regularityoperator}). The solutions produced in this section point in the direction that ill-posed equations, even if they lack a general theory of existence and uniqueness of solutions, may admit many solutions that have a good physical interpretation. For convenience, we rewrite equation \eqref{b-e} as \begin{equation}\label{b-e1} \begin{split} \varepsilon u_{tt}+u_t-\varepsilon \beta u_{xxxx}-\varepsilon u_{xx}=\varepsilon(u^{2})_{xx}+\varepsilon f(\omega t,x),\, x\in\mathbb{T}, \ t\in\mathbb{R},\ \beta>0 \end{split} \end{equation} with periodic boundary condition. We define the full Lebesgue measure set \begin{equation}\label{full} \begin{split} \mathcal {O}=\left\{\beta>0: \frac{1}{\sqrt{\beta}} \textrm{ is not an integer}\right\}. \end{split} \end{equation} Note that we shall only work with values of $\beta$ in $\mathcal {O}$ so that the eigenvalues of the linear operator $\varepsilon \beta \partial_{xxxx}+\varepsilon \partial_{xx}$ in \eqref{b-e1} are different from zero in such a way that the operator $\mathcal{N}_{\varepsilon}$ defined in \eqref{plo} is invertible. (See Section~\ref{sec:regularityoperator} for the details). \begin{remark} There are other models of friction besides the $u_t$ term in \eqref{b-e1} that one could consider. The treatment given in the present paper is a very general method and could be applied to several friction models, such as $u_{txx}$. We note also that our method for the ill-posed equation \eqref{b-e1} with positive parameter $\beta$ also applies to the well-posed equation \eqref{b-e1} with negative parameter $\beta$.
It is even easier in the well-posed case since the eigenvalues of the linear operator $\varepsilon \beta \partial_{xxxx}+\varepsilon \partial_{xx}$ in \eqref{b-e1} are not zero, so that we can invert the operator $\mathcal{N}_{\varepsilon}$ defined in \eqref{plo}. However, we only consider the ill-posed model \eqref{b-e1}, which serves as motivation for the reader. This ill-posed case is what appears in water wave theory \cite{Bou72}. \end{remark} \subsection{Formulation of the fixed point problem} Similar to Section~\ref{sec:formulation} for the ODE model, we need to reduce the equation \eqref{b-e1} to a fixed point problem. In this section, we just present the formal manipulations, omitting the specification of spaces. Indeed, the precise spaces defined in Section~\ref{sec:spaces} will be motivated by the desire to justify the formal manipulations and to ensure that the operators considered are contractions. Our goal is to find response solutions of the form \begin{equation}\label{realso1} u_{\varepsilon}(t,x)=U_{\varepsilon}(\omega t,x), \end{equation} where, for each fixed $\varepsilon$, $U_{\varepsilon}: \mathbb{T}^d \times \mathbb{T}\rightarrow \mathbb{R}$. Inserting \eqref{realso1} into \eqref{b-e1}, we get the following functional equation for $U_{\varepsilon}:$ \begin{equation}\label{ffixeq} \begin{split} \varepsilon\left( \omega\cdot \partial_{\theta}\right) ^2 U_{\varepsilon}(\theta,x)+\left( \omega\cdot \partial_{\theta}\right) U_{\varepsilon}(\theta,x) -\varepsilon\beta\partial_{x}^4 U_{\varepsilon}(\theta,x)&-\varepsilon\partial_{x}^2 U_{\varepsilon}(\theta,x) \\ &=\varepsilon (U_{\varepsilon}^{2})_{xx}+\varepsilon f(\theta,x). \end{split} \end{equation} The solution of equation \eqref{ffixeq} will be the centerpiece of our treatment. Denote by $\mathcal{N}_{\varepsilon}$ the linear operator \begin{equation}\label{plo} \mathcal{N}_{\varepsilon}U_{\varepsilon}(\theta,x)=\left[ \varepsilon\left(\omega\cdot \partial_{\theta}\right) ^2 +\left( \omega\cdot \partial_{\theta}\right)-\varepsilon\beta\partial_{x}^4 -\varepsilon\partial_{x}^2\right]U_{\varepsilon}(\theta,x). \end{equation} Then, \eqref{ffixeq} can be rewritten as \begin{equation}\label{ffixeq1} \mathcal{N}_{\varepsilon}U_{\varepsilon}(\theta,x)=\varepsilon (U_{\varepsilon}^{2})_{xx}+\varepsilon f(\theta,x). \end{equation} As we will see in Section~\ref{sec:regularityoperator}, the operator $\mathcal{N}_{\varepsilon}$ is boundedly invertible in some appropriate space for $\varepsilon \in \Omega(\sigma,\mu)$ defined in \eqref{domain0}. Namely, \eqref{ffixeq1} becomes \begin{equation}\label{plo1} U_{\varepsilon}(\theta,x)=\varepsilon\mathcal{N}_{\varepsilon}^{-1}\left[ (U_{\varepsilon}^{2})_{xx}+f(\theta,x)\right] \equiv \mathcal{T}_{\varepsilon}(U_{\varepsilon}(\theta,x)), \end{equation} where, for convenience, we introduce the operator $\mathcal{T}_{\varepsilon}$. In Section~\ref{sec:mainpr}, dealing with the analytic case, we will show that there exists a solution $U_{\varepsilon}$ analytic in $\varepsilon$ for equation \eqref{plo1} by the contraction mapping argument. Moreover, in Section~\ref{sec:pde-finitely}, which carries out the finitely differentiable case, we will combine the contraction mapping principle with the classical implicit function theorem to get the regularity results. From the formal manipulation above, we find that the first key point is to study the invertibility of the operator $ \mathcal{N}_{\varepsilon}$ and give quantitative estimates on its inverse for $\varepsilon$ in a complex domain.
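For orientation, we record the action of $\mathcal{N}_{\varepsilon}$ on the Fourier basis, which follows directly from \eqref{plo} and is the quantity whose inverse is estimated in Section~\ref{sec:regularityoperator}:
\begin{equation*}
\mathcal{N}_{\varepsilon}\, e^{\mathrm{i} (k\cdot\theta+j\cdot x)}=\left[-\varepsilon(k\cdot \omega)^2+\mathrm{i} (k\cdot \omega)-\varepsilon(\beta j^4-j^2)\right]e^{\mathrm{i} (k\cdot\theta+j\cdot x)},\qquad k\in\mathbb{Z}^d,\ j\in\mathbb{Z}.
\end{equation*}
Thus, the invertibility of $\mathcal{N}_{\varepsilon}$ on functions with zero average in $x$ reduces to lower bounds on these multipliers, uniform in $k\in\mathbb{Z}^d$ and $j\in\mathbb{Z}\setminus\{0\}$.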
Note that the linear operator $ \mathcal{N}_{\varepsilon}$ defined in \eqref{plo} used to study PDE models is much more complicated than the linear operator $\mathcal{L}_{\varepsilon}$ defined in \eqref{ope} for ODE models since $\mathcal{N}_{\varepsilon}$ involves not only the angle variable $\theta\in \mathbb{T}^d$ but also the space variable $x\in \mathbb{T}$. This leads to a different calculation for the inverse of $\mathcal{N}_{\varepsilon}$ (see Section~\ref{sec:regularityoperator}). The second crucial point is that the nonlinearity $(U^2_{\varepsilon})_{xx}$ may be unbounded from one space to itself. However, it happens that $\varepsilon\mathcal{N}_{\varepsilon}^{-1}(U^2)_{xx}$ is bounded. (See Lemma~\ref{derivative-non} and Lemma~\ref{ppo} for more details). To get a fixed point for equation \eqref{plo1}, analogous to the smallness arguments in Section~\ref{sec:small} for the ordinary differential equation \eqref{000.0}, we also need to impose some smallness conditions for the partial differential model. However, we only require that the specific nonlinear map $U\rightarrow \varepsilon\mathcal{N}_{\varepsilon}^{-1}(U^2)_{xx}$, which is analytic, be a contraction in a domain that contains a ball around $\varepsilon\mathcal{N}_{\varepsilon}^{-1}f$. To this end we choose a sufficiently small ball, and the forcing $f$ is assumed to be small in this ball. \subsection{Choice of spaces and the statement of our results} \label{sec:spaces} In this section, we give the concrete spaces we work in. Again, we note that the main principle is that the norms of the functions need to be expressed in terms of the Fourier coefficients associated to the Fourier basis in the arguments $\theta$ and $x$. This permits us to estimate the inverse of the linear operator $\mathcal{N}_{\varepsilon}$ just by estimating its Fourier coefficients. We also need these spaces to possess the Banach algebra properties and the composition operator properties so that the nonlinear terms can be controlled. From the point of view of analyticity in $\varepsilon$, it is necessary to define spaces consisting of analytic functions with respect to $\varepsilon$. In a way analogous to the definition in Section~\ref{sec:spacesode}, for $\rho\geq 0,\,m,d\in \mathbb{Z}_{+}$, we define the space of analytic functions $U$ in $\mathbb{T}_\rho^{d+1}$ with finite norm \begin{equation*} \begin{aligned} \mathcal{H}^{\rho,m}:&=\mathcal{H}^{\rho,m}(\mathbb{T}^{d+1})\\ &=\bigg\lbrace U\,:\,\mathbb{T}_\rho^{d+1}\rightarrow \mathbb{C}\,\mid\, U(\theta,x)=\sum_{k\in \mathbb{Z}^d,\,j\in\mathbb{Z}}\widehat{U}_{k,j}e^{\mathrm{i} (k\cdot\theta+j\cdot x)},\\ &\ \ \ \ \ \ \ \ \ \ \ \|U\|_{\rho,m}^{2}=\sum_{k\in \mathbb{Z}^d,\,j\in\mathbb{Z}}|\widehat{U}_{k,j}|^{2}e^{2\rho (|k|+|j|)} (|k|^{2}+|j|^2+1)^{m}<+\infty\bigg\rbrace. \end{aligned} \end{equation*} It is obvious that the space $\big(\mathcal{H}^{\rho,m},\,\,\|\cdot \|_{\rho,m}\big)$ is a Banach space as well as a Hilbert space. We actually consider $\mathcal{H}_0^{\rho,m}$, which is a subspace of $\mathcal{H}^{\rho,m}$, consisting of functions $U\in\mathcal{H}^{\rho,m}$ with \begin{equation}\label{0-ave} \int_{0}^{2\pi} U(\theta, x)dx=0. \end{equation} In the physical applications, we also consider the closed subspace of $\mathcal{H}^{\rho,m}$ in which the functions take real values for real arguments. Note that the choice of the normalization condition \eqref{0-ave} is motivated by the assumption that \begin{equation*} \int_{0}^{2\pi} f(\theta, x)dx=0.
\end{equation*} Hereafter, we consider our fixed point problems in the space $\mathcal{H}_0^{\rho,m}$. To simplify the notation, we still write $\mathcal{H}^{\rho,m}$ for $\mathcal{H}_0^{\rho,m}$. For $\rho>0$, $\mathcal{H}^{\rho,m}$ consists of functions which are analytic in the domain $\mathbb{T}_{\rho}^{d+1}$. For $\rho=0$, $\mathcal{H}^{m}:=\mathcal{H}^{0,m}$ is just the regular Sobolev space. Similar to Proposition~\ref{alge}, when $\rho>0,\,m>(d+1)$ or $\rho=0,\,m>\frac{d+1}{2}$, we still have the Banach algebra properties in the space $\mathcal{H}^{\rho,m}$. Now we are ready to state our main results on the existence of quasi-periodic solutions for PDE \eqref{b-e1} in the cases of analyticity and finite differentiability. \begin{theorem}\label{main-p} Assume that $f \in \mathcal{H}^{\rho,m}(\mathbb{T}^{d+1})$ with $\rho>0,\,\,m>(d+1)$. Then, for $\varepsilon\in \Omega(\sigma,\mu)$ defined in \eqref{domain0}, there exists a unique solution $U_{\varepsilon}\in \mathcal{H}^{\rho,m}(\mathbb{T}^{d+1})$ for equation \eqref{ffixeq}. Furthermore, considering $U_{\varepsilon}$ as a function of $\varepsilon$, we have that $\varepsilon\rightarrow U_{\varepsilon}:\,\Omega\rightarrow \mathcal{H}^{\rho,m}$ is analytic when $m>(d+5)$. In addition, when $\varepsilon\rightarrow 0$, the solution $U_{\varepsilon}\rightarrow 0$ and $\varepsilon\rightarrow U_{\varepsilon}$ is continuous. \end{theorem} Our method also applies to finitely differentiable forcing, but we leave some of the details to the reader. \begin{theorem}\label{pde-fi} Assume that $f\in \mathcal{H}^{m}(\mathbb{T}^{d+1})$ with $m>\frac{d+1}{2}$. Then, for $\varepsilon\in \widetilde{\Omega}$ defined in \eqref{fini-para}, there exists a unique solution $U_{\varepsilon}\in \mathcal{H}^{m}(\mathbb{T}^{d+1})$ for equation \eqref{ffixeq}. Furthermore, for any $l\in \mathbb{N}$, the map $\varepsilon\rightarrow U_{\varepsilon}$ is $C^{l}$ (even real analytic) considered as a mapping from $\widetilde{\Omega}$ to $\mathcal{H}^{m}$. In addition, when $\varepsilon\rightarrow 0$, the solution $U_{\varepsilon}\rightarrow 0$ and the map $\varepsilon\rightarrow U_{\varepsilon}$ is continuous. \end{theorem} \subsection{The boundedness of the operator $\mathcal{T}_{\varepsilon}$ defined in \eqref{plo1} taking $\mathcal{H}^{\rho,m}$ into itself} \label{sec:regularityoperator} For the PDE model \eqref{b-e1}, the nonlinear map $U\rightarrow (U^2)_{xx}$ (which in the ODE case was the composition operator $\hat{g}\circ U$) is an unbounded operator from a space to itself. We will show, however, that the map $U\rightarrow \varepsilon\mathcal{N}_{\varepsilon}^{-1}(U^2)_{xx}$ is bounded from a space to itself. To this end, we give the following lemmas and propositions. Some of the results would generalize to a nonlinearity of the form $U\rightarrow (g(U))_{xx}$. We will not pursue these specialized results in this paper, but we think it would be an interesting subject. \begin{lemma}\label{derivative-non} Let $U\in \mathcal{H}^{\rho,m}$. Denote by \begin{equation}\label{non-term} h(U)=(U^2)_{xx}. \end{equation} Then, $h$ is analytic from the space $\mathcal{H}^{\rho,m}$ to the space $\mathcal{H}^{\rho,m-2}$. Moreover, for $V\in \mathcal{H}^{\rho,m}$, we have that \begin{equation*} \|Dh(U)V\|_{\rho,m-2}\leq 2\|U\|_{\rho,m}\|V\|_{\rho,m}.
\end{equation*} \end{lemma} \begin{proof} We rewrite $h=h_1\circ h_2$ with \begin{equation*} \begin{split} h_1:\,&\mathcal{H}^{\rho,m}\rightarrow\mathcal{H}^{\rho,m-2}\\ &\ \ \ U\rightarrow U_{xx} \end{split} \end{equation*} and \begin{equation*} \begin{split} h_2:\,&\mathcal{H}^{\rho,m}\rightarrow\mathcal{H}^{\rho,m}\\ &\ \ \ U\rightarrow U^{2}. \end{split} \end{equation*} It is obvious that both $h_1$ and $h_2$ are analytic. Therefore, the composition operator $h:\mathcal{H}^{\rho,m}\rightarrow\mathcal{H}^{\rho,m-2}$ is analytic. Moreover, \begin{equation*} Dh(U)V=\frac{d}{d\xi}h(U+\xi V)\bigg\lvert_{\xi=0}=\frac{d}{d\xi}\left((U+\xi V)^2\right)_{xx}\bigg\lvert_{\xi=0}=2(U V)_{xx}, \end{equation*} which shows that \begin{equation*} \|Dh(U)V\|_{\rho,m-2}\leq 2\|UV\|_{\rho,m}\leq2\|U\|_{\rho,m}\|V\|_{\rho,m} \end{equation*} by the Banach algebra property in the space $\mathcal{H}^{\rho,m}$. \end{proof} Lemma~\ref{derivative-non} allows that the map $U\rightarrow (U^2)_{xx}$ is bounded from the space $\mathcal{H}^{\rho,m}$ to $\mathcal{H}^{\rho,m-2}$. To prove the boundedness of the operator $\mathcal{T}_{\varepsilon}$ defined in \eqref{plo1}, the remaining task is to show that $\varepsilon\mathcal{N}_{\varepsilon}^{-1}: \mathcal{H}^{\rho,m-2}\rightarrow \mathcal{H}^{\rho,m}$ is bounded. \begin{lemma}\label{ppo} For a fixed $\varepsilon \in \Omega(\sigma,\mu)$, the operator $\varepsilon\mathcal{N}_{\varepsilon}^{-1}$ taking the space $\mathcal{H}^{\rho,m-2}$ into $\mathcal{H}^{\rho,m}$ is bounded. \end{lemma} \begin{proof} We verify that $\|\varepsilon\mathcal{N}_{\varepsilon}^{-1}\|_{\mathcal{H}^{\rho,m-2}\rightarrow \mathcal{H}^{\rho,m}}$ can be bounded by the supremum of its multipliers, as we argued in the proof of Proposition~\ref{derivatives}. For $V\in\mathcal{H}^{\rho,m-2}$, by \eqref{plo} and \eqref{0-ave}, we have the following Fourier expansion \begin{equation*} \begin{split} \mathcal{N}^{-1}_{\varepsilon}(V(\theta,x))=\sum_{k\in \mathbb{Z}^d \atop j\in\mathbb{Z} \setminus \{0\}}\frac{1}{-\varepsilon(k\cdot \omega)^2+\mathrm{i} (k\cdot \omega) -\varepsilon(\beta j^4-j^2)}\widehat{V}_{k,j}e^{\mathrm{i} (k\cdot\theta+j\cdot x)}. \end{split} \end{equation*} Note that, by \eqref{full}, $\beta j^4-j^2\neq 0$ for $j\in\mathbb{Z} \setminus \{0\}$. To obtain the desired results, we now estimate the supremum of $\widetilde{\mathbf{N}}_{\varepsilon}$ defined by \begin{equation}\label{t-es} \begin{split} &\widetilde{\mathbf{N}}_{\varepsilon}(k,j)\\ &:= \frac{k^2+j^2}{-\varepsilon (k\cdot \omega)^2+\mathrm{i}(k\cdot \omega)-\varepsilon(\beta j^4-j^2)}\\ &=\left( \frac{k^2}{-\varepsilon (k\cdot \omega)^2+\mathrm{i}(k\cdot \omega)-\varepsilon(\beta j^4-j^2)}+ \frac{j^2}{-\varepsilon (k\cdot \omega)^2+\mathrm{i}(k\cdot \omega)-\varepsilon(\beta j^4-j^2)}\right) \end{split} \end{equation} for $k\in \mathbb{Z}^d,\,j\in\mathbb{Z}\setminus \{0\}$. In fact, \eqref{t-es} includes two terms, which have similar estimates, we just give the details for the second term. Note that it is an easy case for $k=0$. We will estimate the infimum of \begin{equation*} \begin{split} N_{\varepsilon}(a,t):=\frac{-\varepsilon a^2+\mathrm{i}a-\varepsilon(\beta t^2-t)}{t}, \,\,a:=(k\cdot \omega)\in \mathbb{R}\setminus\{0\},\,t:=j^2\in \mathbb{Z}_{+}. 
\end{split} \end{equation*} Taking $\varepsilon=s_1+\mathrm{i}s_2\in \Omega(\sigma,\mu)$, we have \begin{equation}\label{simi-cal} \begin{split} |N_{\varepsilon}(a,t)|^2=s_1^2\left[\frac{a^2}{t}-(1-\beta t)\right]^2+ \left[s_2\left(\frac{a^2}{t}-(1-\beta t)\right)-\frac{a}{t}\right]^2, \end{split} \end{equation} which has an infimum controlled by $\sigma$ by a similar argument to Proposition~\ref{control1}. We now estimate \eqref{simi-cal}. When $\beta>1$, we have that $1-\beta t<0$. Thus, $ |N_{\varepsilon}(a,t)|^2\geq s_1^2\left[\frac{a^2}{t}-(1-\beta t)\right]^2\geq (\beta -1)^2s_1^2:=s_1^2C_{\beta}$ for a positive constant $C_{\beta }$ depending on $\beta$. In the following part, to simplify the notation, we denote $C_{\beta}$ by all constants depending on $\beta$. We focus mainly on the case of $0<\beta<1$. We divide $t\in \mathbb{Z}_{+}$ into two regions as the following: $\mathbf{Case~1}.$ When $t\geq[\frac{1}{\beta}]+1$, we have that $1-\beta t<0$. Therefore \begin{equation*} |N_{\varepsilon}(a,t)|^2\geq s_1^2\left[\frac{a^2}{t}-(1-\beta t)\right]^2\geq s_1^2C_{\beta }. \end{equation*} $\mathbf{Case~2}$. When $1\leq t\leq [\frac{1}{\beta}]$, we get that $t(1-\beta t)\in [C_{\beta }^{1},\,C_{\beta }^{2}]$ with $C_{\beta }^{2}\geq C_{\beta }^{1}>0$. It is clear that $\frac{a^2}{t}-(1-\beta t)=0$ holds at $a^2=t(1-\beta t)\in [C_{\beta }^{1},\,C_{\beta }^{2}]$, namely, $a\in[-\sqrt{C_{\beta }^{2}},-\sqrt{C_{\beta }^{1}}]\cup[\sqrt{C_{\beta }^{1}},\sqrt{C_{\beta }^{2}}]$. Now, we define two regions in $a\in \mathbb{R}$, by choosing a constant $0<\delta\ll 1$, as follows \begin{equation*} \begin{split} I_1=[(-1-\delta)\sqrt{C_{\beta }^{2}},(-1+\delta)\sqrt{C_{\beta }^{1}}]\cup [(1-\delta)\sqrt{C_{\beta }^{1}},(1+\delta)\sqrt{C_{\beta }^{2}}],\,\,I_2=\mathbb{R}\setminus I_1. \end{split} \end{equation*} The case of $a\in I_2$ yields that \begin{equation*} |N_{\varepsilon}(a,t)|^2\geq s_1^2\left[\frac{a^2}{t}-(1-\beta t)\right]^2\geq s_1^2C_{\beta}. \end{equation*} If $a\in I_1$, $\frac{a^2}{t}-(1-\beta t)$ can be bounded so that we can bound the second term in $ |N_{\varepsilon}(a,t)|^2$, that is \begin{equation*} \begin{split} |N_{\varepsilon}(a,t)|^2&\geq \left[s_2(\frac{a^2}{t}-(1-\beta t))-\frac{a}{t}\right]^2\\ &=\left[O(s_2)-\frac{a}{t}\right]^2\geq s_1^2C_{\beta} \end{split} \end{equation*} whenever $|\varepsilon|$ is sufficiently small. The above estimates for $|N_{\varepsilon}(a,t)|$ give that \begin{equation*} |N_{\varepsilon}(a,t)|\geq s_1C_{\beta}. \end{equation*} Therefore, \begin{equation}\label{n-value} \inf_{a\in \mathbb{R},\,t\in \mathbb{Z}\setminus\{0\}}|N_{\varepsilon}(a,t)|\geq s_1C_{\beta}\geq \sigma C_{\beta,\mu} \end{equation} for a positive constant $C_{\beta,\mu}$ depending on $\beta$ and $\mu$, by the domain of $\varepsilon\in \Omega(\sigma,\mu)$. Consequently, for $\widetilde{\mathbf{N}}_{\varepsilon}(k,j)$ defined in \eqref{t-es}, we obtain \begin{equation}\label{p-bound} \sup_{k\in \mathbb{Z}^d,\,j\in \mathbb{Z}\setminus\{0\}}|\widetilde{\mathbf{N}}_{\varepsilon}(k,j)|\leq \sup_{a\in \mathbb{R},\,t\in \mathbb{Z}_{+}} |\widetilde{\mathbf{N}}_{\varepsilon}(a,t)|\leq \sigma^{-1}C_{\beta,\mu}. \end{equation} It follows that \begin{equation*} \begin{split} \|\mathcal{N}_{\varepsilon}^{-1}(V)\|_{\rho,m}\leq \sigma^{-1}C_{\beta,\mu}\|V\|_{\rho,m-2}. 
\end{split} \end{equation*} This allows us to define \begin{equation*} \begin{split} \left\|\mathcal{N}^{-1}_{\varepsilon} \right\|_{\mathcal{H}^{m-2}\rightarrow \mathcal{H}^{m}}=\sup_{k\in \mathbb{Z}^d,j\in \mathbb{Z}\setminus\{0\}}|\widetilde{\mathbf{N}}_{\varepsilon}(k,j)|. \end{split} \end{equation*} That means $\varepsilon\mathcal{N}^{-1}_{\varepsilon}$ can be bounded from $\mathcal{H}^{m-2}$ to $ \mathcal{H}^{m}$. \end{proof} As a matter of fact, Lemma~\ref{derivative-non} and Lemma~\ref{ppo} give that the operator $\mathcal{T}$ defined in \eqref{plo1} is analytic from the space $\mathcal{H}^{\rho,m}$ to itself. \begin{remark} Note that the previous Lemma~\ref{ppo} includes the case of $\varepsilon\in \mathbb{R}$, which will be used in the later finitely differentiable case (see Lemma~\ref{ppo1}). Note also that for the equation \eqref{b-e1}, the nonlinearity will always be regular. Therefore, we just consider the finitely differentiable version with $m>\frac{d+1}{2}$. The analogue of the low regularity results for ODE case would be easier to consider. \end{remark} \subsection{Proof of Theorem~\ref{main-p}}\label{sec:mainpr} In this section, we give the proof of Theorem~\ref{main-p}. \subsubsection{Regularity in $\varepsilon$}\label{sec:regupde} Since we want to obtain solutions depending analytically on $\varepsilon$, proceeding as in Section~\ref{sec:analyticitysolution}, we consider $\mathcal{T}:=\mathcal{T}_{\varepsilon}$ defined in \eqref{plo1} acting on the space $\mathcal{H}^{\rho,m,\Omega}$ consisting of analytic functions of $\varepsilon$ taking values in $\mathcal{H}^{\rho,m}$ with $\varepsilon$ ranging over the domain $\Omega(\sigma, \mu)$. We endow $\mathcal{H}^{\rho,m,\Omega}$ with supremum norm \begin{equation*} \|U\|_{\rho,m,\Omega}=\sup_{\varepsilon \in \Omega(\sigma, \mu)}\|U_{\varepsilon}\|_{\rho,m}, \end{equation*} which makes $\mathcal{H}^{\rho,m,\Omega}$ a Banach space. Moreover, it is also a Banach algebra when $m>(d+1)$. Based on Lemma~\ref{ppo}, we show that the operator $\mathcal{T}$ maps the space $\mathcal{H}^{\rho,m,\Omega}$ into itself. The idea of the proof is similar to Lemma~\ref{diff1}, but the details are different since PDE model \eqref{b-e1} involves a space variable $x$. \begin{proposition}\label{diff2} If $m>(d+5)$, then the operator $\mathcal{T}$ defined in \eqref{plo1} maps the analytic Banach space $ \mathcal{H}^{\rho,m,\Omega}$ into itself. Precisely, if the mapping $\varepsilon \rightarrow U_{\varepsilon}: \Omega\rightarrow \mathcal{H}^{\rho,m}$ is complex differentiable, then, $\varepsilon\rightarrow \mathcal{T}_{\varepsilon} (U_{\varepsilon}): \Omega\rightarrow \mathcal{H}^{\rho,m}$ is also complex differentiable. \end{proposition} \begin{proof} From the fixed point equation \eqref{plo1}, we know that $\mathcal{T}_{\varepsilon}$ is composed by $\varepsilon\mathcal{N}^{-1}_{\varepsilon}$ and $h$ defined in Lemma~\ref{derivative-non}. Lemma~\ref{derivative-non} gives that $h(\mathcal{H}^{\rho,m,\Omega}) \subset \mathcal{H}^{\rho,m-2,\Omega}$. Hence, it suffices to verify that $\varepsilon\mathcal{N}^{-1}_{\varepsilon}(\mathcal{H}^{\rho,m-2,\Omega})\subset \mathcal{H}^{\rho,m,\Omega}$. In the following step, we use a similar method as that in the proof of Proposition~\ref{diff1}. 
For a fixed $\varepsilon\in \Omega$, we expand $V_{\varepsilon}(\theta,x)\in \mathcal{H}^{\rho,m-2}$ as \begin{equation*} V_{\varepsilon}(\theta,x) =\sum_{k\in\mathbb{Z}^{d},\,j\in\mathbb{Z}\setminus \{0\}}\widehat{V}_{k,j,\varepsilon}e^{\mathrm{i} (k\cdot\theta+j\cdot x)} \end{equation*} with \begin{equation}\label{coeff-es2} \begin{split} \left|\widehat{V}_{k,j,\varepsilon}\right|\leq \left\|V_{\varepsilon}\right\|_{\rho,m-2}e^{-\rho (|k|+|j|)}(|k|^2+|j|^2+1)^{-\frac{m-2}{2}} \end{split} \end{equation} and \begin{equation}\label{coeff-es22} \begin{split} \left|\frac{d}{d\varepsilon}\widehat{V}_{k,j,\varepsilon}\right|\leq \left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m-2}e^{-\rho (|k|+|j|)}(|k|^2+|j|^2+1)^{-\frac{m-2}{2}}. \end{split} \end{equation} It follows from \eqref{plo} that \begin{equation*} \begin{split} \varepsilon \mathcal{N}_{\varepsilon}^{-1}(V_{\varepsilon})=\sum_{k\in\mathbb{Z}^{d},\,j\in\mathbb{Z}\setminus \{0\}} \varepsilon \mathbf{N}_{\varepsilon}^{-1}(k\cdot \omega,\,j)\widehat{V}_{k,j,\varepsilon}e^{\mathrm{i} (k\cdot\theta+j\cdot x)}, \end{split} \end{equation*} where \begin{equation*} \begin{split} \mathbf{N}_{\varepsilon}^{-1}(k\cdot \omega,\,j) =\frac{1}{-\varepsilon(k\cdot \omega)^2+\mathrm{i} (k\cdot \omega) -\varepsilon(\beta j^4-j^2)}=:\mathbf{N}_{\varepsilon}^{-1}. \end{split} \end{equation*} By \eqref{p-bound}, one has \begin{equation*} \begin{split} &\left|\frac{d}{d\varepsilon}\left(\varepsilon \mathbf{N}_{\varepsilon}^{-1}\widehat{V}_{k,j,\varepsilon}\right)\right|\\ &\leq |\mathbf{N}_{\varepsilon}^{-1}|\left|\widehat{V}_{k,j,\varepsilon}\right|+\left|\varepsilon\frac{d}{d\varepsilon} \mathbf{N}_{\varepsilon}^{-1}\right|\left|\widehat{V}_{k,j,\varepsilon}\right|+ \left|\varepsilon \mathbf{N}_{\varepsilon}^{-1}\right|\left|\frac{d}{d\varepsilon}\widehat{V}_{k,j,\varepsilon}\right|\\ &\leq C_{\beta,\mu}\cdot \sigma^{-1}|j|^2\left(\left|\widehat{V}_{k,j,\varepsilon}\right|+\left|\frac{d}{d\varepsilon}\widehat{V}_{k,j,\varepsilon}\right|\right). \end{split} \end{equation*} Together with \eqref{coeff-es2} and \eqref{coeff-es22}, we get \begin{equation*} \begin{split} &\left\|\frac{d}{d\varepsilon}\left(\varepsilon \mathbf{N}^{-1}_{\varepsilon}\widehat{V}_{k,j,\varepsilon}\right)e^{\mathrm{i} (k\cdot\theta+j\cdot x)} \right\|_{\rho,\,m-\tau}\\ &\leq C_{\beta,\mu}\cdot\sigma^{-1}|j|^2\left(\left|\widehat{V}_{k,j,\varepsilon}\right|+\left|\frac{d}{d\varepsilon}\widehat{V}_{k,j,\varepsilon}\right|\right) \|e^{\mathrm{i} (k\cdot\theta+j\cdot x)}\|_{\rho,\,m-\tau}\\ &\leq C_{\beta,\mu}\cdot\sigma^{-1} |j|^2\left(\|V_{\varepsilon}\|_{\rho,m-2}+\left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m-2} \right)e^{-\rho (|k|+|j|)}\\ &\ \ \ \ \ \ \ \ \ \ \cdot (|k|^2+|j|^2+1)^{-\frac{m-2}{2}} e^{\rho (|k|+|j|)}(|k|^2+|j|^2+1)^{\frac{m-\tau}{2}}\\ & \leq C_{\beta,\mu}\cdot \sigma^{-1}\left(\|V_{\varepsilon}\|_{\rho,m-2}+\left\|\frac{d}{d\varepsilon}V_{\varepsilon}\right\|_{\rho,m-2} \right)(|k|^2+|j|^2+1)^{-(\frac{\tau}{2}-2)}. \end{split} \end{equation*} By choosing $d+5<\tau\leq m$, we obtain that \begin{equation*} \begin{split} \sum_{k\in \mathbb{Z}^d,\,j\in\mathbb{Z}\setminus \{0\}}(|k|^2+|j|^2+1)^{-(\frac{\tau}{2}-2)}\leq C_d \sum_{\kappa=0}^{\infty}(\kappa^2+1)^{-\frac{\tau-d-4}{2}} < \infty. 
\end{split} \end{equation*} As a consequence, by Weierstrass M-test, we conclude that the series \begin{equation*} \begin{split} \sum_{k\in \mathbb{Z}^d,\,j\in\mathbb{Z}\setminus \{0\}} \frac{d}{d\varepsilon}\left(\varepsilon \mathbf{N}^{-1}_{\varepsilon}\widehat{V}_{k,j,\varepsilon}\right)e^{\mathrm{i} (k\cdot\theta+j\cdot x)} \end{split} \end{equation*} converge uniformly on $\varepsilon\in \Omega$ in the space $\mathcal{H}^{\rho,m-\tau}$. Therefore, \begin{equation*} \begin{split} \frac{d}{d\varepsilon}\left(\varepsilon \mathcal{N}_{\varepsilon}^{-1}(V_{\varepsilon})\right)= \sum_{k\in \mathbb{Z}^d,\,j\in\mathbb{Z}\setminus \{0\}} \frac{d}{d\varepsilon}\left(\varepsilon \mathbf{N}^{-1}_{\varepsilon}\widehat{V}_{k,j,\varepsilon}\right)e^{\mathrm{i} (k\cdot\theta+j\cdot x)}. \end{split} \end{equation*} In conclusion, we have that the map $\varepsilon \rightarrow \varepsilon \mathcal{N}_{\varepsilon}^{-1}(V_{\varepsilon}):\Omega\rightarrow \mathcal{H}^{\rho,m}$ is complex differentiable with derivatives in $\mathcal{H}^{\rho,m-\tau}$ by $\mathcal{H}^{\rho,m} \subset \mathcal{H}^{\rho,m-\tau}$ and Lemma~\ref{bootstrap} in Appendix. \end{proof} \subsubsection{Proof of Theorem~\ref{main-p}} \label{sec:proofmain-p} We now start to deal with the fixed point equation \begin{equation*} U(\theta,x)=\mathcal{N}_{\varepsilon}^{-1}\left[ \varepsilon (U^{2})_{xx}+\varepsilon f(\theta,x)\right] \equiv \mathcal{T}(U)(\theta,x). \end{equation*} in the space $\mathcal{H}^{\rho,m,\Omega}$. We will find a fixed point of $\mathcal{T}$ by considering a small ball $\mathbb{B}_{\mathbf{r}}(0) \subset \mathcal{H}^{\rho,m,\Omega}$ with $C_{\beta,\mu}\cdot \mathbf{r}<\frac{1}{2}$ such that $\mathcal{T}(\mathbb{B}_{\mathbf{r}}(0))\subset \mathbb{B}_{\mathbf{r}}(0)$and $\mathcal{T}$ is a contraction in this ball. It follows from Lemma~\ref{ppo} that \begin{equation*} \|\varepsilon \mathcal{N}_{\varepsilon}^{-1}\|_{\rho,m,\Omega}\leq C_{\beta,\mu}. \end{equation*} Hence, if $U\in \mathbb{B}_{\mathbf{r}}(0)$, Lemma~\ref{derivative-non} shows that \begin{equation*} \begin{split} \|\mathcal{T}(U)\|_{\rho, m,\Omega} &\leq\|\mathcal{T}(0)\|_{\rho, m,\Omega}+ \|\mathcal{T}(U)- \mathcal{T}(0)\|_{\rho, m,\Omega}\\ &\leq\|\varepsilon\mathcal{N}_{\varepsilon}^{-1}\|_{\rho, m,\Omega}\left( \|f\|_{\rho, m,\Omega}+ \| Dh(V)U\|_{\rho, m-2,\Omega}\right) \\ &\leq C_{\beta,\mu}\left( \|f\|_{\rho, m,\Omega}+\|V\|_{\rho,m,\Omega}\|U\|_{\rho,m,\Omega}\right)\\ &\leq C_{\beta,\mu}\left( \|f\|_{\rho, m,\Omega}+\mathbf{r}^2\right) \leq \mathbf{r}, \end{split} \end{equation*} provided that we impose the smallness condition for $f$ satisfying \begin{equation*} \|f\|_{\rho, m,\Omega}\leq \frac{\mathbf{r}}{2C_{\beta,\mu}}. \end{equation*} Moreover, for $U_1,U_2\in \mathbb{B}_{\mathbf{r}}(0)$, we get that \begin{equation*} \begin{split} \|\mathcal{T}(U_1)-\mathcal{T}(U_2)\|_{\rho, m,\Omega} &=\|\varepsilon\mathcal{N}_{\varepsilon}^{-1} h(U_1)-\varepsilon\mathcal{N}_{\varepsilon}^{-1} h(U_2)\|_{\rho, m,\Omega}\\ &\leq C_{\beta,\mu}\cdot\mathbf{r}\|U_1-U_2\|_{\rho, m,\Omega}\\ &<\frac{1}{2}\|U_1-U_2\|_{\rho, m,\Omega}, \end{split} \end{equation*} which implies that $\mathcal{T}$ is a contraction in the ball $\mathbb{B}_{\mathbf{r}}(0)$. In conclusion, there is a unique fixed point $U$ in the space $\mathcal{H}^{\rho,m,\Omega}$ for equation \eqref{plo1}. Namely, we obtain a solution $U_{\varepsilon}$ analytic in $\varepsilon$ for equation \eqref{ffixeq}. 
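We note, purely as an illustration of how the constants above fit together, that the two smallness requirements $C_{\beta,\mu}\cdot \mathbf{r}<\frac{1}{2}$ and $\|f\|_{\rho, m,\Omega}\leq \frac{\mathbf{r}}{2C_{\beta,\mu}}$ can be satisfied simultaneously as soon as
\begin{equation*}
\|f\|_{\rho, m,\Omega}<\frac{1}{4C_{\beta,\mu}^{2}},
\end{equation*}
for instance by taking $\mathbf{r}=2C_{\beta,\mu}\|f\|_{\rho, m,\Omega}$ (when $f\neq 0$).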
Moreover, the map $\varepsilon\rightarrow U_{\varepsilon}$ is continuous as $\varepsilon \rightarrow 0$; the proof is similar to that of Lemma~\ref{continuous}. \subsection{Proof of Theorem~\ref{pde-fi}}\label{sec:pde-finitely} In this section, we consider $\mathcal{T}:=\mathcal{T}_{\varepsilon}$ defined in \eqref{plo1} acting on the space $\mathcal{H}^{m,\widetilde{\Omega}}$ consisting of differentiable functions of $\varepsilon$ taking values in $\mathcal{H}^{m}$ with $\varepsilon$ ranging over the domain $\widetilde{\Omega}(\sigma, \mu)$. We endow $\mathcal{H}^{m,\widetilde{\Omega}}$ with the supremum norm \begin{equation}\label{suprenorm-pde2} \|U\|_{m,\Omega}=\sup_{\varepsilon \in \widetilde{\Omega}(\sigma, \mu)}\|U_{\varepsilon}\|_{m}. \end{equation} Note that the space $\mathcal{H}^{m}$ is a Banach space, and also a Banach algebra when $m>\frac{d+1}{2}$, but the space $\mathcal{H}^{m,\widetilde{\Omega}}$ equipped with the supremum norm in $\varepsilon$ defined in \eqref{suprenorm-pde2} is not a Banach space. Consequently, the contraction mapping principle is not enough to get the solution $U_{\varepsilon}$ with optimal regularity in $\varepsilon$. We will combine the contraction argument with the implicit function theorem to obtain solutions with this regularity. In order to use the implicit function theorem, analogous to Section~\ref{sec:finitely}, the main issue is to study the differentiability of the operator $\mathcal{T}(\varepsilon,U)$ in \eqref{plo1} considered as an operator from $\widetilde{\Omega}\times \mathcal{H}^m$ to $\mathcal{H}^{m}$ as well as the invertibility of $D_2\mathcal{T}(\varepsilon, U)$. We first present the result with respect to the argument $U$. Since Lemma~\ref{derivative-non} and Lemma~\ref{ppo} also hold in the finitely differentiable setting, we have the following result when we work in the space $\mathcal{H}^{m}$. \begin{lemma}\label{ppo1} For a fixed $\varepsilon \in \widetilde{\Omega}(\sigma,\mu)$, the operator $\mathcal{T}_{\varepsilon}$ is analytic from the space $\mathcal{H}^{m}$ to itself. \end{lemma} Now, we give the following proposition, which states that the operator $\mathcal{T}$ in \eqref{plo1} is differentiable in the argument $\varepsilon$. Note that $\mathcal{T}$ is the composition of $\varepsilon\mathcal{N}^{-1}_{\varepsilon}$ and $h$ defined in \eqref{non-term}. Since $h(\mathcal{H}^{m}) \subset \mathcal{H}^{m-2}$, we need to verify that the derivatives of $\varepsilon\mathcal{N}^{-1}_{\varepsilon}$ with respect to $\varepsilon$ are bounded from the space $\mathcal{H}^{m-2}$ to the space $\mathcal{H}^{m}$. Similar to Proposition~\ref{derivatives}, we have: \begin{proposition}\label{derivatives1} Fix any $m \in \mathbb{N}$ with $m>\frac{d+1}{2}$ and $\sigma>0$. We consider the map $\varepsilon \rightarrow \varepsilon \mathcal{N}^{-1}_\varepsilon$, where $\varepsilon \mathcal{N}^{-1}_\varepsilon \in B(\mathcal{H}^{m-2}, \mathcal{H}^m)$ for every $\varepsilon \in \widetilde{\Omega}$. Moreover, for any $l \in \mathbb{N}$ and $\varepsilon \in \widetilde{\Omega}$, the map $\varepsilon \rightarrow \varepsilon \mathcal{N}^{-1}_\varepsilon$ is $C^l$ considered as a mapping from $\widetilde{\Omega}$ to $B(\mathcal{H}^{m-2}, \mathcal{H}^m)$. Namely, $\frac{d^l}{d \varepsilon^l} (\varepsilon \mathcal{N}_\varepsilon^{-1})\in B(\mathcal{H}^{m-2}, \mathcal{H}^m)$. As a matter of fact, something stronger is true. The mapping $\varepsilon \rightarrow \varepsilon \mathcal{N}^{-1}_\varepsilon$ is real analytic for $\varepsilon \in \widetilde{\Omega}$ and the radius of analyticity can be bounded uniformly for all $\varepsilon \in \widetilde{\Omega}$.
\end{proposition} \begin{proof} The idea of the proof is similar to that of Proposition~\ref{derivatives}. Based on the estimate $|N_\varepsilon(a,t)| \geq \sigma C_{\beta,\mu}$ in \eqref{n-value} in Lemma~\ref{ppo}, we now expand $N_{\varepsilon+\delta}^{-1}(a,t)$ in powers of $\delta$ as \begin{equation}\label{goodformu1} \begin{split} N^{-1}_{\varepsilon + \delta}(a,t) &=\left(-(\varepsilon+\delta)\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t}\right)^{-1}\\ &= \left( -\varepsilon\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t}-\delta \left[\frac{a^2}{t}-(1-\beta t)\right] \right)^{-1} \\ &=\left(-\varepsilon\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t} \right)^{-1}\left(1-\delta\frac{\left[\frac{a^2}{t}-(1-\beta t)\right]}{-\varepsilon\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t}}\right)^{-1}. \end{split} \end{equation} By the estimates in Lemma~\ref{ppo}, we observe that the factor $ \frac{\left[\frac{a^2}{t}-(1-\beta t)\right]}{-\varepsilon\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t}}$ is bounded uniformly in $a\in \mathbb{R},t\in \mathbb{Z}_{+}$ and $\varepsilon\in \widetilde{\Omega}$. Therefore, we can expand $\left(1-\delta\frac{\left[\frac{a^2}{t}-(1-\beta t)\right]}{-\varepsilon\left[\frac{a^2}{t}-(1-\beta t)\right]+\mathrm{i}\frac{a}{t}}\right)^{-1} $ in \eqref{goodformu1} in powers of $\delta$ using the geometric series formula; the radii of convergence are bounded uniformly, and the values of the coefficients in the expansion are also bounded uniformly in $a \in \mathbb{R}, t\in \mathbb{Z}_{+}$ and $\varepsilon \in \widetilde{\Omega}$. That means $N_{\varepsilon}^{-1}$ is uniformly analytic in $\varepsilon$ for each $a \in \mathbb{R}, t\in \mathbb{Z}_{+}$. In the Fourier space, we know that $\mathcal{N}_{\varepsilon}^{-1}$ is a multiplier operator with the multiplier $N_{\varepsilon,k,j}^{-1}$. Precisely, for $\widehat{f}_{k,j}$ being the Fourier coefficients of a function $f$ in the space $\mathcal{H}^{m-2}$, the Fourier coefficients $\widehat{(\mathcal{N}^{-1}_{\varepsilon}f)}_{k,j}$ of the function $(\mathcal{N}^{-1}_{\varepsilon}f)$ in the space $\mathcal{H}^{m}$ have the structure: \begin{equation*} \widehat{(\mathcal{N}^{-1}_{\varepsilon}f)}_{k,j}=N^{-1}_{\varepsilon,k,j}\widehat{f}_{k,j}. \end{equation*} Hence, we get that $\mathcal{N}^{-1}_{\varepsilon}$ is analytic in $\varepsilon$. Moreover, we can bound $\| \mathcal{N}_{\varepsilon}^{-1} \|_{\mathcal{H}^{m-2}\rightarrow \mathcal{H}^m}$ by the norm defined by \begin{equation}\label{sup11} \left\|\mathcal{N}^{-1}_{\varepsilon} \right\|_{\mathcal{H}^{m-2}\rightarrow \mathcal{H}^m}=\sup_{k\in \mathbb{Z}^{d},\,j\in \mathbb{Z}\setminus\{0\}}\|N^{-1}_{\varepsilon,k,j}\| \end{equation} thanks to the uniform boundedness of $N^{-1}_{\varepsilon,k,j}$ in $k\in \mathbb{Z}^{d},\,j\in \mathbb{Z}\setminus\{0\}$. Therefore, when we write $\mathcal{N}^{-1}_{\varepsilon+\delta}=\sum_{n=0}^{\infty}\mathcal{N}^{-1}_{\varepsilon,n}\delta^n$, $\| \mathcal{N}_{\varepsilon,n}^{-1} \|_{\mathcal{H}^{m-2}\rightarrow \mathcal{H}^m}$ can be bounded by the definition in \eqref{sup11}. That means $\frac{d^l}{d \varepsilon^l} (\varepsilon \mathcal{N}_\varepsilon^{-1})\in B(\mathcal{H}^{m-2}, \mathcal{H}^{m})$ for every $\varepsilon\in \widetilde{\Omega}$.
\end{proof} Now, we start to prove Theorem~\ref{pde-fi} by constructing a fixed point $U_{\varepsilon_0}$ for $\varepsilon_0\in \widetilde{\Omega}$ first and then using the implicit function theorem to obtain the optimal regularity of $U_{\varepsilon}$ in $\varepsilon$. It is similar to the proof in Section~\ref{sec:fini}. We omit some details here. \begin{proof} First, when we choose a small ball $\mathbb{B}_{\mathbf{r}}(0) \subset \mathcal{H}^{m,\widetilde{\Omega}}$, a process similar to that of Section~\ref{sec:proofmain-p} allows us to obtain a fixed point $U_{\varepsilon_0}\in\mathcal{H}^{m}$ for some $\varepsilon_0\in \widetilde{\Omega}$ by the contraction argument in this ball. Then, according to Lemma~\ref{ppo1} and Proposition~\ref{derivatives1}, we obtain that the operator $\mathcal{T}$ defined on $\widetilde{\Omega}\times \mathcal{H}^{m}$ is $C^l$ in the arguments $\varepsilon$ and $U$. Namely, $\mathbf{T}(\varepsilon,U):=U-\mathcal{T}(\varepsilon,U)$ is $C^l$ in $\widetilde{\Omega}\times \mathcal{H}^{m}$. Based on the first step, we have $\mathbf{T}(\varepsilon_{0},U_{\varepsilon_0})=0$. Moreover, $D_2\mathbf{T}(\varepsilon_{0},U_{\varepsilon_0})=Id-D_2\mathcal{T}(\varepsilon_{0},U_{\varepsilon_0})=Id-\varepsilon_0\mathcal{N}_{\varepsilon_0}^{-1}Dh(U_{\varepsilon_0})$ is invertible since $\varepsilon_0\mathcal{N}_{\varepsilon_0}^{-1}Dh(U_{\varepsilon_0})$ is sufficiently small in a small neighborhood of the origin. Therefore, by the implicit function theorem, there exists an open neighborhood of $(\varepsilon_0,U_{\varepsilon_0})$ included in $\widetilde{\Omega}\times \mathcal{H}^{m}$ and a $C^l$ function $U_{\varepsilon}$ satisfying $\mathbf{T}(\varepsilon,U_{\varepsilon})=0$ on this neighborhood. \end{proof} \appendix \section{Some properties in analytic and finitely differentiable Banach spaces} \label{sec:appendix} \subsection{Analytic functions in Banach space} \begin{definition}\label{analy-def} Let $X,\,Y$ be complex Banach spaces and let $O\subset X$ be open. We say that $f:\,O\rightarrow Y$ is analytic if it is differentiable at all points of $O$ and there exists a function $\gamma:=\gamma_{x}(\|z\|)$, with $\frac{\gamma_{x}(\|z\|)}{\|z\|}\rightarrow 0$ as $\|z\|\rightarrow 0$, such that \begin{equation*} \|f(x+z)-f(x)-Df(x)\cdot z \| \leq \gamma_{x}(\|z\|) \end{equation*} for all $x\in O$ and $z\in X$ such that $(x+z)\in O$. \end{definition} Note that Definition~\ref{analy-def} is a rather weak version of differentiability, but it is enough for this paper. For more on the analyticity of nonlinear functions in Banach spaces, we refer to \cite{Hil57,Muji86}. The main result of this appendix concerns the theory of complex analytic functions in Banach spaces, bootstrapping the meaning of derivatives of analytic functions. The result could be deduced from stronger results in \cite{Hil57,ReedS72}, but we thought it would be useful to present a self-contained proof since this lemma could be useful in other applications. \begin{lemma}\label{bootstrap} Let $U \subseteq \mathbb{C}$ be open and $X,\,Y$ be complex Banach spaces, $X\subseteq Y$ with continuous embedding. Let $f: U\rightarrow X$ be differentiable in $Y$ for all $x\in U$, that is, \begin{equation}\label{ydiff} \lim_{h\rightarrow 0}\left\| \frac{f(x+h)-f(x)}{h}-f'(x)\right\|_{Y}=0. \end{equation} Then, $f'(x)\in X$ and \begin{equation}\label{xdiff} \lim_{h\rightarrow 0}\left\| \frac{f(x+h)-f(x)}{h}-f'(x)\right\|_{X}=0. \end{equation} \end{lemma} We start by proving the Cauchy-Goursat theorem for functions satisfying \eqref{ydiff}. 
The proof is rather straightforward. This will lead to a Cauchy formula, from which we can deduce \eqref{xdiff}. \begin{proposition}\label{prop} Let $g:U\rightarrow X\subseteq Y$ be differentiable at every point of $U$ in the sense of $Y$-differentiability \eqref{ydiff}. Let $\gamma$ be a triangular contour contained in $U$. Then \begin{equation*} \int_{\gamma}g(z)dz=0. \end{equation*} \end{proposition} Of course, by the usual approximation procedures, one can get the result for more general paths. This will not be needed for our purposes. Note that, since $g$ is continuous as a function from $U$ to $Y$, the integrals over the paths involved can be understood as Riemann integrals. \begin{proof} Suppose that $\gamma$ is a triangular contour with positive orientation. We construct four positively oriented contours that are triangles obtained by joining the midpoints of the sides of $\gamma$. Then, we have \begin{equation*} \int_{\gamma}g(z)dz=\sum_{i=1}^{4}\int_{\gamma_i}g(z)dz. \end{equation*} Let $\gamma_1$ be selected such that \begin{equation*} \left\| \int_{\gamma}g(z)dz\right\|_{Y} \leq\sum_{i=1}^{4}\left\| \int_{\gamma_i}g(z)dz\right\|_{Y} \leq 4\left\| \int_{\gamma_1}g(z)dz\right\|_{Y}. \end{equation*} If $\int_{\gamma}g(z)dz=b\neq 0$, we get \begin{equation*} \left\| \int_{\gamma_1}g(z)dz\right\|_{Y}\geq \frac{1}{4} \|b\|_{Y}. \end{equation*} Proceeding by induction, we get a sequence of triangular contours $\{\gamma_n\}$, whose length equals $2^{-n}|\gamma|$, where $|\gamma|$ denotes the length of $\gamma$, such that \begin{equation}\label{lowerbound} \left\| \int_{\gamma_n}g(z)dz\right\|_{Y}\geq \frac{1}{4^n} \|b\|_{Y}. \end{equation} By the choice of $\gamma_n$, we have \begin{equation*} \overline{Interior\, of\, \gamma_{n+1}}\subset \overline{Interior\, of\, \gamma_{n}} \end{equation*} and the length of the sides of $\gamma_n$ goes to $0$ as $n\rightarrow \infty$. Therefore there exists a unique point $z_0\in \bigcap_{n} \overline{Interior\, of\, \gamma_{n}}\subset U$. Since $g$ is differentiable at $z_0$, there is a function $R$ such that \begin{equation*} g(z)=g(z_0)+g'(z_0)(z-z_0)+R(z,z_0), \end{equation*} where \begin{equation*} \|R(z,z_0)\|_{Y} \leq |z-z_0|w(|z-z_0|) \end{equation*} with $w(|z-z_0|) \rightarrow 0$ when $|z-z_0|\rightarrow 0$. Integrating $g$ along $\gamma_n$, we find that \begin{equation*} \begin{split} \int_{\gamma_n}g(z)dz&=\int_{\gamma_n}g(z_0)dz+ \int_{\gamma_n}g'(z_0)(z-z_0)dz+\int_{\gamma_n}R(z,z_0)dz\\ &=[g(z_0)-g'(z_0)z_0]\int_{\gamma_n}1dz+ g'(z_0)\int_{\gamma_n}zdz+\int_{\gamma_n}R(z,z_0)dz \\ &=\int_{\gamma_n}R(z,z_0)dz. \end{split} \end{equation*} Therefore, \begin{equation}\label{upperbound} \begin{split} \left\|\int_{\gamma_n}g(z)dz\right\|_{Y}&\leq |\gamma_n|\cdot\sup_{z\in \gamma_n}\|R(z,z_0)\|_{Y}\\ &\leq |\gamma_n|\cdot \frac{|\gamma_n|}{2}\cdot w\left(\frac{|\gamma_n|}{2}\right)\\ &\leq \frac{|\gamma|^2}{2\cdot 4^n}w\left(\frac{|\gamma_n|}{2}\right) \end{split} \end{equation} since $|z-z_0|<\frac{1}{2}|\gamma_n|$ for $z \in \gamma_n$. Comparing \eqref{lowerbound} and \eqref{upperbound}, we get $b=0$ as desired. \end{proof} As a corollary, we obtain the same conclusion assuming only that $g$ is differentiable at all points inside the triangle except possibly at its center. Now we begin to prove Lemma~\ref{bootstrap}. 
As is standard, for the function $f$ in Lemma~\ref{bootstrap} and $\epsilon$ fixed in the interior of $\gamma$, we define \begin{equation*} \begin{split} g_{\epsilon}(z)=\left\{ \begin{array}{l} \begin{split} \frac{f(z)-f(\epsilon)}{z-\epsilon},\,\,z\neq \epsilon, \end{split} \\ \\ \begin{split} f'(z),\,\,z=\epsilon, \end{split} \end{array} \right. \end{split} \end{equation*} which satisfies the hypotheses of Proposition~\ref{prop} or of its corollary. If $\gamma$ is a triangle centered at $\epsilon$, then \begin{equation*} 0=\int_{\gamma}g_{\epsilon}(z)dz= \int_{\gamma}\frac{f(z)}{z-\epsilon}dz-f(\epsilon)\int_{\gamma}\frac{1}{z-\epsilon}dz. \end{equation*} Hence we obtain the Cauchy formula \begin{equation*} f(\epsilon)=\frac{1}{2\pi\mathrm{i}} \int_{\gamma}\frac{f(z)}{z-\epsilon}dz. \end{equation*} Now, we can compute the derivative with respect to $\epsilon$ in the space $X$ and obtain \begin{equation*} f'(\epsilon)=\frac{1}{2\pi\mathrm{i}} \int_{\gamma}\frac{f(z)}{(z-\epsilon)^2}dz. \end{equation*} Of course, since the derivative is obtained as a limit of quotients, if the limit exists in $X$, it has to agree with the limit in $Y$. \subsection{Finitely differentiable functions in Banach space} For arbitrary Banach spaces $X_1, \cdots, X_i, Y, i\geq 1$, we denote by $A(X^{\otimes i}, Y)$ the space of symmetric continuous $i$-linear forms on $X^{\otimes i}:=X_1\times \cdots \times X_i$ taking values in $Y$. Now we present the converse to Taylor's theorem (see page $6$ in the book \cite{robbin67}). \begin{lemma} Let $O\subset X$ be a convex set and $F:\,O\rightarrow Y,\,f_{i}:\,O\rightarrow A(X^{\otimes i}, Y),\,i=0,\cdots,r.$ For any $x\in O$ and $h\in X$ such that $(x+h)\in O$, we define $R(x,h)$ by \begin{equation*} F(x+h)=F(x)+\sum_{i=1}^r\frac{f_{i}(x)(h,\cdots,h)}{i!}+R(x,h). \end{equation*} If for any $0\leq i\leq r$, $f_{i}$ is continuous and for any $x\in O$, $\frac{\|R(x,h)\|_{Y}}{\|h\|^r_{X}}\rightarrow 0$ as $\|h\|_{X}\rightarrow 0$, then $F$ is of class $C^r$ on $O$ and $D^iF=f_i$ for any $0\leq i\leq r$. \end{lemma} \begin{definition} We denote by $C^{r}(O,Y)$ the space of functions $f:O \rightarrow Y$ with continuous derivatives up to order $r$. We endow $C^{r}(O, Y )$ with the norm of the supremum of all the derivatives. Namely, \begin{equation}\label{normCr} \begin{aligned} \|f\|_{C^{r}}=\max_{0\leq i\leq r} \sup_{x\in O}|[D^{i}f](x)|_{X^{\otimes i}, Y} \end{aligned} \end{equation} with \begin{equation*} \begin{aligned} |\cdot|_{X^{\otimes i}, Y} \equiv \sup_{\|x_1\|_{X_1}=1,\ldots \|x_i\|_{X_i}=1} \|A(x_1,\ldots, x_i)\|_Y. \end{aligned} \end{equation*} As it is well known, the norm \eqref{normCr} makes $C^{r}(O,Y)$ a Banach space. \end{definition} \begin{definition} We denote by $C^{r+Lip} ( O, Y )$ the space of functions in $C^{r}(O, Y )$ whose $r$-th derivative is Lipschitz. The Lipschitz constant is \begin{equation*} \begin{aligned} Lip_{O,Y}D^{r}f=\sup_{x_1,\,x_2 \in O \atop x_1\neq x_2} \frac{|D^{r}f(x_1)-D^{r}f(x_2) |_{X^{\otimes r},Y}}{\|x_1-x_2\|_{X}}. \end{aligned} \end{equation*} \end{definition} We note that since $O$ may not be compact, this definition is different from the Whitney definition in which the topology is given by semi-norms of supremum in compact sets. We will not use the Whitney definition of $C^r$ in this paper. \begin{definition} An open set $O$ is called a compensated domain if there is a constant $C$ such that given $x, y \in O$ there is a $C^1$ path $\gamma$ contained in $O$ joining $x,y$ satisfying $|\gamma| \le C \| x - y\|$. 
\end{definition} For $O$ a compensated domain, we have the mean value theorem \begin{equation*} \| f(x) - f(y) \|_Y \le C \| f\|_{C^1(O, Y)} \| x - y\|_X. \end{equation*} In particular, $C^1$ functions in a compensated domain are Lipschitz. It is not difficult to construct non-compensated domains with $C^1$ functions which are not Lipschitz. Of course a convex set is compensated and the compensation constant is $1$. In our paper, we will just be considering domains which are balls or full spaces. See \cite{Rafael99} for the effects of the compensation constants in many problems of function theory. \subsection{The standard Sobolev space} We define \begin{equation*} \begin{aligned} H^m(\mathbb{T}^d):=H^m(\mathbb{T}^d,\mathbb{R}^n):= \{U=(U_1,\cdots,U_n)|U_i\in H^m(\mathbb{T}^d,\mathbb{R}),\,i=1,\cdots,n\} \end{aligned} \end{equation*} equipped with the norm \begin{equation}\label{sobol} \begin{aligned} \|U\|_{H^m}=\sum_{1\leq i\leq n}\|U_i\|_{H^m}. \end{aligned} \end{equation} Here \begin{equation*} \begin{aligned} H^m(\mathbb{T}^d,\mathbb{R})= \{U\in L^2(\mathbb{T}^d,\mathbb{R}):\,D^{\alpha} U\in L^2(\mathbb{T}^d,\mathbb{R}), \,\,\,0\leq |\alpha|\leq m\}, \end{aligned} \end{equation*} where we use multi-index notation $\alpha=(\alpha_1,\cdots,\alpha_d)\in \mathbb{N}^d$, $|\alpha|=\sum_{i=1}^d \alpha_i$ and $x=(x_1,\cdots,x_d)\in \mathbb{T}^d$, $D^{\alpha}:=D^{\alpha}_x=D_{x_1}^{\alpha_1}\cdots D_{x_d}^{\alpha_d}$. We define \begin{equation*} \begin{aligned} \|U\|_{H^m(\mathbb{T}^d,\mathbb{R})}=\sum_{0\leq |\alpha|\leq m}\|D^{\alpha} U\|_{L^2} \end{aligned} \end{equation*} with \begin{equation*} \begin{aligned} \|U\|_{L^2}=\left(\int_{\mathbb{T}^d}|U(\theta)|^2d\theta\right)^{\frac{1}{2}}. \end{aligned} \end{equation*} Indeed, by Fourier transformation, the norm defined in \eqref{sobol} is equivalent to the norm defined by Definition~\ref{space} based on the Fourier coefficients. We refer to the books \cite{sobolev, taylor3} for more details. \end{document}
\begin{document} \title{Cartesian Coherent Differential Categories} \begin{abstract} We extend to general cartesian categories the idea of Coherent Differentiation recently introduced by Ehrhard in the setting of categorical models of Linear Logic. The first ingredient is a summability structure which induces a partial left-additive structure on the category. Additional functoriality and naturality assumptions on this summability structure implement a differential calculus which can also be presented in a formalism close to Blute, Cockett and Seely's cartesian differential categories. We show that a simple term language equipped with a natural notion of differentiation can easily be interpreted in such a category. \end{abstract} \tableofcontents \newcommand\Real{\mathbb{R}} \newcommand\Realp{\Real_{\geq 0}} \newcommand\COH{\mathbf{Coh}} \newcommand\cC{\mathcal C} \newcommand\Ob[1]{\mathsf{Ob}(#1)} \newcommand\Comp{\mathrel\circ} \input{T-not} \section*{Introduction} \addcontentsline{toc}{section}{\protect\numberline{}Introduction} This article is a long version of a paper, with the same title and by the same authors, accepted at the \emph{ACM/IEEE Symposium on Logic in Computer Science} 2023. In particular, all the proofs which are missing in the conference version are provided in the present article. \medbreak Linear Logic (LL) and its models~\cite{Girard87} strongly suggest that differentiation of proofs should be a natural operation extracting their best ``local'' linear approximation. Remember that for any \(E,F\) Banach spaces, \(f:E\to F\) is differentiable at \(x \in E \) if there is a neighborhood \(U\) of \(0\) in \(E\) and a linear and continuous function \(\phi:E\to F\) such that, for all \(u\in U\) \begin{align} \label{eq:derivative-def} f(x+u)=f(x)+\phi(u)+o(\|u\|)\,. \end{align} When \(\phi\) exists, it is unique and is denoted as \(f'(x)\). When \(f'(x)\) exists for all \(x\in E\), the function \(f':E\to\mathcal L(E,F)\), where \(\mathcal L(E,F)\) is the Banach space of linear and continuous functions \(E\to F\), is called the \emph{differential} of \(f\). This function can itself admit a differential and so on. When all these iterated differentials exist one says that \(f\) is \emph{smooth} and the \(n\)th derivative of \(f\) is a function \(f^{(n)}:E\to\mathcal L_n(E,F)\) where \(\mathcal L_n(E,F)\) is the space of \(n\)-linear symmetric functions \(E^n\to F\). It can even happen that \(f\) is locally (or even globally) expressed using its iterated derivatives by means of the \emph{Taylor Formula} \(f(x+u)=\sum_{n=0}^\infty\frac1{n!}f^{(n)}(x)(u,\dots,u)\); when this holds locally at any point \(x\), \(f\) is said to be \emph{analytic}. Based on categorical models of LL where morphisms are analytic functions, the differential \(\lambda\)-calculus and differential LL provide a logical and syntactical account of differentiation. A program of type \(A\Rightarrow B\) can be turned into a program of type \(A\Rightarrow(A\multimap B)\). This provides a new approach of finite approximations of functions by a syntactical version of the Taylor Formula which has shown relevance in the study of the \(\lambda\)-calculus and of LL. Differentiation is deeply connected with \emph{addition}, as it can already be seen in its definition~\cref{eq:derivative-def}. 
This connection also appears when writing the differential of \(f:\Real^n\to\Real\) as a sum of \emph{partial derivatives}: \[ f'(x_1,\dots,x_n)\cdot(u_1,\dots,u_n)= \sum_{i=1}^n\frac{\partial f(x_1,\dots,x_n)}{\partial x_i}u_i \] and, of course, in the Taylor formula itself. For this reason, until recently, all categorical models of the differential \(\lambda\)-calculus and of differential LL~\cite{Blute06, Blute09} were using categories where hom-sets have a structure of commutative monoid and both formalisms feature a formal and unrestricted addition operation on terms or proofs of the same type. The only available operational interpretation of such a sum being erratic choice, these formalisms are inherently non-deterministic. Recently, \Ehrhard{} observed~\cite{Ehrhard22a} that, in a setting where all coefficients are non-negative, differentiation survives strong restrictions on the use of addition. Consider for instance a function \(f:[0,1]\to[0,1]\) which is smooth on \([0,1)\) and all of whose iterated derivatives are everywhere \(\geq 0\) \footnote{This actually implies that \(f\) is analytic.}. If \(x,u\in[0,1]\) are such that \(x+u\in[0,1]\) then \(f(x)+f'(x)u\leq f(x+u)\in[0,1]\) (this makes sense even if \(f'(1)=\infty\), which can happen: take \(f(x)=1-\sqrt{1-x}\)). So if \(S\) is the set of all such pairs \((x,u)\) that we call \emph{summable}, we can consider the function \(\D(f):(x,u)\mapsto(f(x),f'(x)u)\) as a map \(S\to S\). This basic observation is generalized in~\cite{Ehrhard21} to a wide range of categorical models \(\cL\) of LL including coherence spaces, probabilistic coherence spaces \emph{etc.}~where hom-sets have only a \emph{partially defined} addition. In these \emph{summable categories}, \(S\) becomes an endofunctor \(\cL\to\cL\) equipped with an additional structure which allows one to define summability and (partial) sums in a very general way and turns out to induce a monad. Differentiation is then axiomatized as a distributive law between this monad (similar to the tangent bundle monad of a tangent category~\cite{Rosicky84}) and the resource comonad \(\oc\_\) of the LL structure of the category \footnote{Which by the way need not be a fully-fledged LL model.} \(\cL\). Indeed, this distributive law allows one to extend \(S\) to $\kleisliExp$, the Kleisli category of \(\oc\_\), and this extension \(\D:\Kl\cL\to\Kl\cL\) turns out to be a monad which has all the required properties of differentiation. It is well known that $\kleisliExp$ is a cartesian closed category, and it can be interesting to drift away from the LL structure of $\categoryLL$ by only looking at the structure of its Kleisli category. This is what happened with differentiation. It was first axiomatized in a typical LL setting with additive categories and differential categories~\cite{Blute06}. It was then carried to the setting of cartesian categories with left-additive categories and cartesian differential categories (CDC)~\cite{Blute09}. Unsurprisingly, the Kleisli categories of the former provide instances of the latter, but cartesian differential categories cover a wider range of models. As mentioned in~\cite{Ehrhard21}, differential categories can be seen as a special instance of summable categories equipped with differentiation (we will call those \emph{coherent differential categories}) in which addition is unrestricted. 
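To make the observation on \([0,1]\) above concrete, here is a small numerical sanity check (ours, not part of the formal development) for the function \(f(x)=1-\sqrt{1-x}\) mentioned above: on a grid of summable pairs \((x,u)\), i.e.\ pairs with \(x+u\in[0,1]\), it verifies that \(f(x)+f'(x)u\leq f(x+u)\leq 1\), so that \((f(x),f'(x)u)\) is again a summable pair.
\begin{verbatim}
# Numerical sanity check (illustration only): for f smooth on [0,1)
# with non-negative iterated derivatives, x + u <= 1 implies
# f(x) + f'(x)*u <= f(x+u) <= 1.  Here f(x) = 1 - sqrt(1 - x).
import math

def f(x):
    return 1.0 - math.sqrt(1.0 - x)

def df(x):
    # derivative of f, finite for x < 1
    return 0.5 / math.sqrt(1.0 - x)

steps = 200
for i in range(steps):
    for j in range(steps - i):
        x, u = i / steps, j / steps      # x + u < 1 by construction
        lhs = f(x) + df(x) * u           # first-order prediction
        rhs = f(x + u)                   # actual value
        assert lhs <= rhs + 1e-12 and rhs <= 1.0
print("f(x) + f'(x)u <= f(x+u) <= 1 on the whole grid")
\end{verbatim}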
Naturally, we can wonder if there is a notion of \emph{cartesian coherent differential categories}, which arise as the Kleisli categories of coherent differential categories and which generalize CDC to a partial setting. We provide a positive answer to this question. We define coherent differentiation in an \emph{arbitrary category}, whose morphisms are intuitively thought of as smooth. So we start from a category \(\cC\) equipped with a map \footnote{It will become a functor and even a monad later.} \(\D:\Ob\cC\to\Ob\cC\) given together with morphisms \(\Dproj_{0,X},\Dproj_{1,X},\Dsum_X\in\cC(\D(X),X)\) (for each \(X\in\Obj\cC\)). The intuition is that \(\D(X)\) is the object of summable pairs of elements of \(X\), that \(\Dproj_i\) are the obvious projections and that \(\Dsum\) computes the sums. We assume \(\Dproj_0,\Dproj_1\) to be jointly monic and this is sufficient to say when \(f_0,f_1\in\cC(X,Y)\) are summable: this is when there is a necessarily unique \(h\in\cC(X,\D(Y))\) such that \(\Dproj_i\circ h=f_i\) and when this holds we set \(f_0+f_1=\Dsum\Comp h\). Under suitable assumptions this very light structure suffices to equip hom-sets of \(\cC\) with a structure of partial commutative monoid which is compatible with composition on the left \footnote{And not on the right in general since, intuitively, the morphisms of \(\cC\) are not assumed to be linear.}. This structure is a convenient setting for differentiation: it suffices to furthermore equip \(\D\) with a functorial action on morphisms with respect to which some morphisms (definable in terms of \(\Dproj_0,\Dproj_1,\Dsum\)) are natural. This is the notion of \emph{coherent differential category} whose axioms are in one-to-one correspondence with those of a CDC. Just as in tangent categories~\cite{Rosicky84,Cockett14}, our functor \(\D\) can be equipped with a monad structure. Contrary to the additive framework of CDC, our differentiation functor \(\D\) is not defined in terms of the cartesian product so it is important to understand how it interacts with the cartesian product when available: this is formalized by the concept of \emph{cartesian coherent differential category} (CCDC). This compatibility can be expressed in terms of a strength with which \(\D\) can be equipped, turning it into a commutative monad. This induces a satisfactory theory of partial derivatives. We provide a concrete example of such a category based on probabilistic coherence spaces and illustrate our formalism by interpreting a simple term language equipped with a notion of differentiation in a CCDC. \section{Left summability structure} We introduce in this section the notion of \emph{left summability structure} in order to generalize the notion of summability structure introduced in \cite{Ehrhard21} to a setting where morphisms are not necessarily additive. \subsection{Left pre-summability structures} Let $\category$ be a category with objects $\objects$ and hom-sets $\category(X, Y)$ for any $X, Y \in \objects$. We assume that any hom-set $\category(X, Y)$ contains a distinguished morphism $0^{X,Y}$ (usually $X$ and $Y$ are kept implicit) such that for any $f \in \category(Z, X)$, $0^{X,Y} \comp f = 0^{Z, Y}$. 
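Before giving the formal definitions, it may help to keep in mind the simplest instance we have in mind, the interval \([0,1]\) of the introduction. The following informal sketch (ours, with hypothetical names, at the level of elements rather than morphisms) previews the structure defined below: a pair \((x,u)\) is summable when \(x+u\leq 1\), the pair itself plays the role of the witness, \(\Dproj_0,\Dproj_1\) are the two projections and \(\Dsum\) returns the sum; the resulting partial sum has \(0\) as neutral element and is commutative.
\begin{verbatim}
# Informal element-level sketch (ours) of a summable pairing structure
# on the interval [0,1]: D(X) is the set of pairs (x,u) with x+u <= 1,
# pi0 and pi1 are the projections and sigma is the sum.
def summable(x, u):
    return x + u <= 1.0              # the witness is simply the pair (x, u)

def pi0(p): return p[0]
def pi1(p): return p[1]
def sigma(p): return p[0] + p[1]

def plus(x, u):
    # partial sum: defined only on summable pairs
    assert summable(x, u), "pair is not summable"
    return sigma((x, u))

samples = [0.0, 0.1, 0.25, 0.5]
for x in samples:
    assert plus(x, 0.0) == plus(0.0, x) == x        # 0 is neutral
for x in samples:
    for u in samples:
        if summable(x, u):
            assert plus(x, u) == plus(u, x)          # commutativity
print("0 is neutral and the partial sum is commutative on the samples")
\end{verbatim}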
\begin{definition} \label{def:pre-presummability-structure} A \emph{summable pairing structure} on a category \(\cC\) is a tuple $(\D, \Dproj_0, \Dproj_1, \Dsum)$ where: \begin{itemize} \item $\D: \objects \arrow \objects$ is a map (a functional class) on objects; \item $(\Dproj_{0, X})_{X \in \objects}, (\Dproj_{1, X})_{X \in \objects}$ and $(\Dsum_X)_{X \in \objects}$ are families of morphisms in $\category(\D X, X)$. The object $X$ will usually be kept implicit; \item $\Dproj_0$ and $\Dproj_1$ are jointly monic: for any $f, g \in \category(Y, \D X)$, if $\Dproj_0 \comp f = \Dproj_0 \comp g$ and $\Dproj_1 \comp f = \Dproj_1 \comp g$ then $f = g$. \end{itemize} \end{definition} We assume in what follows that $\category$ is equipped with a summable pairing structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$. \begin{definition} Two morphisms $f_0, f_1 \in \category(X, Y)$ are said to be \emph{summable} if there exists $h \in \category(X, \D Y)$ such that $\Dproj_i \comp h = f_i$. The joint monicity of the $\Dproj_i$'s ensures that when $h$ exists, it is unique. We set $\Dpair{f_0}{f_1} \defEq h$, and we call it the \emph{witness} of the sum. By definition, $\Dproj_i \comp \Dpair{f_0}{f_1} = f_i$. Then we set $f_0 + f_1 \defEq \Dsum \comp \Dpair{f_0}{f_1}$. \end{definition} \begin{remark} A more standard approach to notations would be to write $\Dproj_1$ and $\Dproj_2$ instead of $\Dproj_0$ and $\Dproj_1$. The reason we proceed that way is that \Cref{eq:derivative-def} will be formalized in our setting with the use of a pair $\Dpair{f(x)}{f'(x) \cdot u}$. That is, the left element of this pair is of order $0$, and the right element is of order $1$. \end{remark} \begin{notation}\label{notation:sum-defined} We write $f_0 \summable f_1$ for the property that $f_0$ and $f_1$ are summable. We say that an algebraic expression containing binary sums is \emph{well defined} if each pair of morphisms involved in these sums is summable. For example, $(f_0 + f_1) + f_2$ is well defined if $f_0 \summable f_1$ and $(f_0 + f_1) \summable f_2$. \end{notation} \begin{proposition} \label{prop:proj-sum} The morphisms $\Dproj_0$ and $\Dproj_1$ are summable with witness $\Dpair{\Dproj_0}{\Dproj_1} = \id$ and sum $\Dproj_0 + \Dproj_1 = \Dsum$. \end{proposition} \begin{proof} $\Dproj_i \comp \id = \Dproj_i$ so by definition, $\Dproj_0 \summable \Dproj_1$ with witness $\id$ and sum $\Dsum \comp \id = \Dsum$. \end{proof} \begin{proposition}[Left compatibility of sum] \label{prop:sum-left-compatible} For any $f_0, f_1 \in \category(Y, Z)$ and $g \in \category(X, Y)$, if $f_0 \summable f_1$, then $(f_0 \comp g) \summable (f_1 \comp g)$ with witness $\Dpair{f_0 \comp g}{f_1 \comp g} =\Dpair{f_0}{f_1} \comp g$. Moreover, $(f_0 \comp g)+(f_1 \comp g) = (f_0 + f_1) \comp g$. \end{proposition} \begin{proof} Let $\witness = \Dpair{f_0}{f_1} \comp g$. Then $\Dproj_i \comp \witness = f_i \comp g$ so $\witness$ is a witness for the summability of $f_0 \comp g$ and $f_1 \comp g$. And $f_0 \comp g + f_1 \comp g \defEq \Dsum \comp \witness = (f_0 + f_1) \comp g$. \end{proof} An important class of morphisms is that of additive morphisms, for which addition is compatible with composition on the right. \begin{definition} \label{def:additive} A morphism $h \in \category(Y, Z)$ is \emph{additive} if $h \comp 0 = 0$ and if for any $f_0, f_1 \in \category(X, Y)$, if $f_0 \summable f_1$ then $h \comp f_0 \summable h \comp f_1$ and $h \comp (f_0 + f_1) = h \comp f_0 + h \comp f_1$. 
Note that \(\id\) is additive and that the composition of two additive morphisms is an additive morphism. \end{definition} \begin{proposition} \label{prop:additive} A morphism $h$ such that $h \comp 0 = 0$ is additive if and only if $h \comp \Dproj_0 \summable h \comp \Dproj_1$ with sum $h \comp \Dsum$. \end{proposition} \begin{proof} For the forward implication, recall that $\Dproj_0 \summable \Dproj_1$ with sum $\Dsum$. Thus by additivity of $h$, $h \comp \Dproj_0 \summable h \comp \Dproj_1$ with sum $h \comp \Dsum$. For the reverse implication, assume that $f_0 \summable f_1$. Since $h \comp \Dproj_0 \summable h \comp \Dproj_1$, \Cref{prop:sum-left-compatible} ensures that $h \comp f_0 = h \comp \Dproj_0 \comp \Dpair{f_0}{f_1}$ and $h \comp f_1 = h \comp \Dproj_1 \comp \Dpair{f_0}{f_1}$ are summable, with sum $(h \comp \Dproj_0 + h \comp \Dproj_1) \comp \Dpair{f_0}{f_1} = h \comp \Dsum \comp \Dpair{f_0}{f_1} = h \comp (f_0 + f_1)$. \end{proof} \begin{definition} \label{def:presummability-structure} The summable pairing structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is a \emph{left pre-summability structure} if $\Dproj_0, \Dproj_1$ and $\Dsum$ are additive. \end{definition} The additivity of the projections implies that the sum behaves well with respect to the operation $\Dpair{\_}{\_}$ itself. \begin{proposition} \label{prop:pair-sum} Assume that $\Dproj_0$ and $\Dproj_1$ are additive. Then for any $f_0, f_1, g_0, g_1 \in \category(X, Y)$, if $f_0 \summable f_1$, $g_0 \summable g_1$ and $\Dpair{f_0}{f_1} \summable \Dpair{g_0}{g_1}$, then $f_0 \summable g_0$, $f_1 \summable g_1$, $(f_0 + g_0) \summable (f_1 + g_1)$ and $\Dpair{f_0}{f_1} + \Dpair{g_0}{g_1} = \Dpair{f_0 + g_0}{f_1 + g_1}$. \end{proposition} \begin{proof} By additivity of $\Dproj_i$, $\Dproj_i \comp \Dpair{f_0}{f_1} = f_i$ and $\Dproj_i \comp \Dpair{g_0}{g_1} = g_i$ are summable with sum $f_i + g_i = \Dproj_i \comp (\Dpair{f_0}{f_1} + \Dpair{g_0}{g_1})$. Since \(\Dproj_0\summable\Dproj_1\) this entails by \Cref{prop:sum-left-compatible} that $f_0 + g_0$, $f_1 + g_1$ are summable with witness $\Dpair{f_0}{f_1} + \Dpair{g_0}{g_1}$. \end{proof} The additivity of $\Dsum$ implies that whenever $\Dpair{f_0}{f_1} \summable \Dpair{g_0}{g_1}$, one has $\Dsum \comp \Dpair{f_0}{f_1} \summable \Dsum \comp \Dpair{g_0}{g_1}$ and \[\Dsum \comp (\Dpair{f_0}{f_1} + \Dpair{g_0}{g_1}) = (\Dsum \comp \Dpair{f_0}{f_1}) + (\Dsum \comp \Dpair{g_0}{g_1})\,.\] Assuming the additivity of the projections, the additivity of \(\Dsum\) implies that whenever $\Dpair{\Dpair{f_0}{f_1}}{\Dpair{g_0}{g_1}}$ exists, the two sums below are well defined (see \Cref{notation:sum-defined}) and \begin{equation} \label{eq:medial} (f_0 + g_0) + (f_1 + g_1) = (f_0 + f_1) + (g_0 + g_1)\,. \end{equation} \begin{proposition} \label{prop:zero-additive} The morphisms $0$ and $0$ are summable with witness $0$ and sum $0$. In particular, $0$ is additive. \end{proposition} \begin{proof} On the one hand, $\Dproj_i \comp 0 = 0$ by additivity of $\Dproj_i$, so $0 \summable 0$ with witness $0$. On the other hand, $\Dsum \comp 0 = 0$ by additivity of $\Dsum$ so $0+0 = 0$. In particular, $0$ is additive thanks to \Cref{prop:additive} because $0 \comp \Dproj_0 = 0$ and $0 \comp \Dproj_1 = 0$ are summable with witness $0$ and sum $0 = 0 \comp \Dsum$. \end{proof} \subsection{Left summability structures} We consider a category $\category$ equipped with a left pre-summability structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$. 
The goal of this section is to make $(\category(X, Y), +, 0)$ a \emph{partial} commutative monoid. Similar structures appear in~\cite{Arbib80} or more recently in~\cite{Hines13}, in a setting where sums can be infinitary. Our partial monoids have only finite sums \footnote{Although the extension of the finite sum to an infinitary operations will have to be considered when dealing with fixpoints.}. More crucially, the categorical notion of summability defined above is essential for us whereas it is not categorically formalized in these works. \begin{definition} \labeltext{($\D$-com)}{ax:D-com} The left pre-summability structure is \emph{commutative} if for any object $X$, $\Dproj_1, \Dproj_0 \in \category(\D X, X)$ are summable with sum $\Dsum$. Then we set $\Dsym=\Dpair{\Dproj_1}{\Dproj_0} \in \category(\D X, \D X)$ so that \(\Dproj_i\comp\Dsym=\Dproj_{1-i}\). This property is called \ref{ax:D-com}. \end{definition} \begin{proposition}[Commutativity] The left pre-summability structure is commutative if and only if for any $f_0, f_1 \in \category(X, Y)$, if $f_0 \summable f_1$ then $f_1 \summable f_0$ and $f_0 + f_1 = f_1 + f_0$. \end{proposition} \begin{proof} For the direct implication, assume that $f_0 \summable f_1$. Then $\Dproj_i \comp \Dsym \comp \Dpair{f_0}{f_1} = \Dproj_{1-i} \comp \Dpair{f_0}{f_1} = f_{1-i}$ so $f_1 \summable f_0$ with witness $\Dsym \comp \Dpair{f_0}{f_1}$. Furthermore, $f_1 + f_0 = \Dsum \comp \Dsym \comp \Dpair{f_0}{f_1} = \Dsum \comp \Dpair{f_0}{f_1} = f_0 + f_1$. Conversely, $\Dproj_0 \summable \Dproj_1$ so by commutativity $\Dproj_1 \summable \Dproj_0$ and $\Dproj_1 + \Dproj_0 = \Dproj_0 + \Dproj_1 = \Dsum$. \end{proof} \begin{definition} \label{def:injections} \labeltext{($\D$-zero)}{ax:D-zero} The left pre-summability structure \emph{has $0$ as a neutral element} if for any object $X$, $\id_X \summable 0$ and $0 \summable \id_X$ with sums equal to $\id_X$. We call this property \ref{ax:D-zero}. We define $\Dinj_0, \Dinj_1 \in \category(X, \D X)$ as $\Dinj_0 \defEq \Dpair{\id_X}{0}$ and $\Dinj_1 \defEq \Dpair{0}{\id_X}$. \end{definition} \begin{proposition}[Neutrality of \(0\)] The left pre-summability structure has $0$ as a neutral element if and only if for any morphism $f \in \category(X, Y)$, $0 \summable f$, $f \summable 0$ and $f + 0 = 0 + f = f$. \end{proposition} \begin{proof} By definition of $\Dinj_0$, $\Dproj_0 \comp \Dinj_0 \comp f = \id \comp f = f$ and $\Dproj_1 \comp \Dinj_0 \comp f = 0 \comp f = 0$. So $f \summable 0$ with witness $\Dinj_0 \comp f$ and $f + 0 = \Dsum \comp \Dinj_0 \comp f = \id \comp f = f$. We do the same for $0 + f$ with $\Dinj_1$. Conversely, we apply the neutrality of 0 on $\id$ to get that $\id \summable 0$ and $0 \summable \id$, with sum $\id$. \end{proof} Associativity is not that straightforward, as there are two possible notions. The situation is similar in the infinitary setting of~\cite{Hines13} with the distinction between Weak Partition Associativity and Partition Associativity. \begin{definition}[Weak Associativity] The operation $+$ is called \emph{weakly associative} if whenever $(f_0 + f_1) + f_2$ and $f_0 + (f_1 + f_2)$ are well defined (recall \Cref{notation:sum-defined}), we have $(f_0 + f_1) + f_2 = f_0 + (f_1 + f_2)$. \end{definition} \begin{definition}[Associativity] \label{def:associative} The operation $+$ is called \emph{associative} if whenever $(f_0 + f_1) + f_2$ or $f_0 + (f_1 + f_2)$ is well defined, the other expression is also well defined and $(f_0 + f_1) + f_2 = f_0 + (f_1 + f_2)$. 
\end{definition} We need to work in a partial setting in which addition is associative: this is required for instance in \Cref{sec:differential} to define $\DmonadSum = \Dpair{\Dproj_0 \comp \Dproj_0}{\Dproj_0 \comp \Dproj_1 + \Dproj_1 \comp \Dproj_0}$. This associativity seems related to a kind of positivity of morphisms. \begin{example} Let $x, y \in\Intcc{-1}{1}$ be summable when $|x|+|y| \leq 1$, with \(x+y\) as sum. Then $+$ is weakly associative, but is not associative. Indeed, take $x_0 = -\frac{1}{2}, x_1 = \frac{1}{2}, y_1 = 1$. Then $(x_0 + x_1) + y_1$ is defined, but $x_0 + (x_1 + y_1)$ is not since $|x_1| + |y_1| = \frac{3}{2} > 1$. However, the same definition on $[0, 1]$ yields an associative operation. \end{example} Recall from~\Cref{eq:medial} that whenever $\Dpair{\Dpair{f_0}{f_1}}{\Dpair{g_0}{g_1}}$ exists, the expressions \((f_0 + g_0) + (f_1 + g_1)\) and \((f_0 + f_1) + (g_0 + g_1)\) are well defined and equal. Taking $g_0 = 0$ and assuming \ref{ax:D-zero}, this means that whenever $\Dpair{\Dpair{f_0}{f_1}}{\Dpair{0}{g_1}}$ exists, $(f_0 + f_1) + g_1$ and $f_0 + (f_1 + g_1)$ are well defined and equal. Taking $f_1 = 0$ and assuming \ref{ax:D-zero}, this means that whenever $\Dpair{\Dpair{f_0}{0}}{\Dpair{g_0}{g_1}}$ exist, $ f_0 + (g_0 + g_1)$ and $(f_0 + g_0) + g_1$ are well defined and equal. Thus associativity holds if \ref{ax:D-zero} holds and if whenever $(f_0 + f_1) + g_1$ is defined (respectively $f_0 + (g_0 + g_1)$ is defined), then $\Dpair{\Dpair{f_0}{f_1}}{\Dpair{0}{g_1}}$ exists (respectively $\Dpair{\Dpair{f_0}{0}}{\Dpair{g_0}{g_1}}$ exists). This shows that associativity follows from the following axiom. \begin{definition} \labeltext{($\D$-witness)}{ax:D-witness} The left pre-summability structure \emph{admits witnesses} if for any $f, g \in \category(Y, \D X)$, if $\Dsum \comp f \summable \Dsum \comp g$ then $f \summable g$. We call this property \ref{ax:D-witness}. \end{definition} \begin{theorem} The properties \ref{ax:D-zero}, \ref{ax:D-com} and \ref{ax:D-witness} give to $\category(X, Y)$ the structure of a \emph{partial commutative monoid} for any objects $X,Y$. That is, for any $f, f_0, f_1, f_2 \in \category(X, Y)$: \begin{itemize} \item $f \summable 0$, $0 \summable f$ and $0 + f = f + 0 = f$; \item If $f_0 \summable f_1$ then $f_1 \summable f_0$ and $f_0 + f_1 = f_1 + f_0$; \item If $(f_0 + f_1) + f_2$ or $f_0 + (f_1 + f_2)$ is defined, then both are defined and $(f_0 + f_1) + f_2 = f_0 + (f_1 + f_2)$. \end{itemize} \end{theorem} One can define inductively from this binary sum a notion of arbitrary finite sum. The empty family is always summable with sum $0$. The family $(f_i)_{i \in I}$ for $I \neq \emptyset$ is summable if $\exists i_0 \in I$ such that $(f_i)_{i \in I/\{i_0\}}$ is summable and if $(\sum_{i \in I/\{i_0\}} f_i) \summable f_{i_0}$. Then we set $\sum_{i \in I} f_i \defEq \sum_{i \in I/\{i_0\}} f_i + f_{i_0}$. \Cref{prop:arbitrary-sum} shown in \cite{Ehrhard21} ensures that the choice of order for the sum is irrelevant. \begin{theorem} \label{prop:arbitrary-sum} A family $(f_i)_{i \in I}$ is summable if and only if for all partition \footnote{Where we admit that some \(I_j\)s can be empty.} $I_1, \ldots, I_n$ of $I$, we have that for all $j \in \interval{1}{n} := \{1, \ldots, n \}$, $(f_i)_{i \in I_j}$ is summable and $(\sum_{i \in I_j} f_i)_{j \in \interval{1}{n}}$ is summable. Moreover, $\sum_{i \in I} f_i = \sum_{j \in \interval{1}{n}} \sum_{i \in I_j} f_i$. 
\end{theorem} \begin{definition} \label{def:left-summability-struct} A \emph{left summability structure} is a left pre-summability structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ such that \ref{ax:D-zero}, \ref{ax:D-com} and \ref{ax:D-witness} hold. \end{definition} \subsection{Comparison with summability structures} \label{sec:comparison-ss} In the $\LL{}$ setting of~\cite{Ehrhard21}, \Ehrhard{} introduced a notion of pre-summability structure $(\S, \Sproj_0, \Sproj_1, \Ssum)$ as a summable pairing structure (recall \Cref{def:pre-presummability-structure}) where $\S$ is a functor for which $\Sproj_0, \Sproj_1, \Ssum$ are natural transformations. \begin{theorem} \label{prop:summability-struct-equivalence} The following are equivalent \begin{itemize} \item $(\S, \Sproj_0, \Sproj_1, \Ssum)$ is a left pre-summability structure and every morphism is additive; \item $(\S, \Sproj_0, \Sproj_1, \Ssum)$ is a pre-summability structure \cite{Ehrhard21}. \end{itemize} \end{theorem} Remember that in~\cite{Ehrhard21}, the underlying category \(\category\) is assumed to be enriched over the monoidal category of pointed sets, the distinguished element of \(\category (X,Y)\) being denoted \(0\). In particular \(f\Comp 0=0\) always holds. \begin{proof} Let $(\S, \Sproj_0, \Sproj_1, \Ssum)$ be a left pre-summability structure in which every morphism is additive. By \Cref{prop:additive}, for any $f \in \category(X, Y)$ we can define $\S f \defEq \Spair{f \comp \Sproj_0}{f \comp \Sproj_1}$ and the following equations hold: $\Dproj_i \circ \S f = f \circ \Dproj_i$, $\Dsum \circ \S f = f \circ \Dsum$. Furthermore, $\S$ is a functor: $\Sproj_i \comp \S \id = \id \comp \Sproj_i = \Sproj_i \comp \id$ and $\Sproj_i \comp \S f \comp \S g = f \comp \Sproj_i \comp \S g = f \comp g \comp \Sproj_i = \Sproj_i \comp \S (f \comp g)$. Thus, by joint monicity of the $\Dproj_i$, $\S \id = \id$ and $\S (f \comp g) = \S f \comp \S g$. Then the equations $\Sproj_i \comp \S f = f \comp \Sproj_i$ and $\Ssum \comp \S f = f \comp \Ssum$ introduced above correspond to the naturality of $\Dproj_0, \Dproj_1$ and $\Dsum$. Conversely, let $(\S, \Sproj_0, \Sproj_1, \Ssum)$ be a pre-summability structure in the sense of~\cite{Ehrhard21}. The naturality of $\Sproj_0$ and $\Sproj_1$ ensures that for any $f$, $f \comp \Sproj_0 \summable f \comp \Sproj_1$ with witness $\S f$. The naturality of $\Ssum$ ensures that the sum of those two morphisms is $\Ssum \comp \S f = f \comp \Ssum$. Finally, $f \comp 0 = 0$ by assumption. So every morphism is additive by \Cref{prop:additive}. In particular, $\Sproj_0, \Sproj_1$ and $\Ssum$ are additive, so $(\S, \Sproj_0, \Sproj_1, \Ssum)$ is a left pre-summability structure. \end{proof} \begin{cor} The summability structures of~\cite{Ehrhard21} are the left summability structures where all morphisms are additive. \end{cor} \section{Differential} \subsection{Differential Structure} \label{sec:diff-structure} \label{sec:differential} Recall from~\Cref{eq:derivative-def} the main idea of the differential calculus. We generalize it to a partial additive setting: $f$ is differentiable at $x$ if for any $u$, if $x \summable u$ then $f'(x) \cdot u$ is defined, $f(x) \summable f'(x) \cdot u$ and, intuitively, $f(x + u) \simeq f(x) + f'(x) \cdot u$. Hence the differential of $f$ can be seen as a function $\D f$ that maps a pair of two summable elements $\Dpair{x}{u}$ to a pair of two summable elements $\D f(x, u) = \Dpair{f(x)}{f'(x) \cdot u}$. 
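As an informal illustration (ours) of this intuition, the following sketch implements \(\D f\) on the element-level interval model of the introduction, for the hypothetical sample choice \(f(x)=x^2\) (a smooth function \([0,1]\to[0,1]\) with non-negative derivatives), and checks that it maps summable pairs to summable pairs.
\begin{verbatim}
# Informal sketch (ours): D(f)(x, u) = (f(x), f'(x)*u) on summable
# pairs of [0,1], for the sample choice f(x) = x**2, f'(x) = 2*x.
def f(x):  return x * x
def df(x): return 2.0 * x

def summable(x, u):
    return x + u <= 1.0 + 1e-12

def D(f, df):
    # the map (x, u) |-> (f(x), f'(x)*u), defined on summable pairs
    def Df(x, u):
        assert summable(x, u)
        return f(x), df(x) * u
    return Df

Df = D(f, df)
steps = 100
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        x, u = i / steps, j / steps          # x + u <= 1
        y, v = Df(x, u)
        assert summable(y, v), (x, u)        # D(f) preserves summability
print("D(f) maps summable pairs to summable pairs on the grid")
\end{verbatim}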
\begin{definition} \label{def:differential-struct} A \emph{pre-differential structure} is a left summability structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ together with, for each \(X,Y\in\Obj\category\), an operator \(\category(X,Y)\to\category(\D X,\D Y)\), also denoted as \(\D\), and such that $\Dproj_0 \comp \D f = f \comp \Dproj_0$. We define the \emph{differential} of $f$ as $\dcoh f \defEq \Dproj_1 \comp \D f \in \category(\D X, Y)$. By our assumptions $\D f = \Dpair{f \comp \Dproj_0}{\dcoh f}$. \end{definition} At this point we do not assume $\D$ to be a functor, this will be the Chain Rule. Then the equation $\Dproj_0 \comp \D f = f \comp \Dproj_0$ will be the naturality of $\Dproj_0$. We can also introduce three families of morphisms $\DmonadSum$, $\Dlift$ and $\Dswap$ whose naturality will correspond to some axioms of differentiation. This is very similar to what happens in tangent categories \cite{Cockett14}, the difference being the structure of the functor $\D$ itself \footnote{There might be a way to combine tangent categories and coherent differentiation in one notion allowing to axiomatize objects similar to manifolds where the tangent spaces have an addition of vectors which is only partially defined. The first step should be to develop convincing concrete examples of such objects, which might be related to the semantics of Type Theory.}. The additivity of $\Dsum$ ensures that $\Dsum \comp \Dproj_0 \summable \Dsum \comp \Dproj_1$. That is, $(\Dproj_0 \comp\Dproj_0 + \Dproj_1 \comp\Dproj_0) \summable (\Dproj_0\comp \Dproj_1 + \Dproj_1\comp \Dproj_1)$. By associativity, this implies that $((\Dproj_0\comp \Dproj_0 + \Dproj_1\comp \Dproj_0) + \Dproj_0\comp \Dproj_1) + \Dproj_1\comp \Dproj_1$ is well defined, so $(\Dproj_0\comp \Dproj_0 + \Dproj_1\comp \Dproj_0) + \Dproj_0\comp \Dproj_1$ is well defined. By associativity again, $\Dproj_0\comp \Dproj_0 + (\Dproj_1\comp \Dproj_0 + \Dproj_0\comp \Dproj_1)$ is well defined, so~\Cref{def:DmonadSum} below makes sense. \begin{definition} \label{def:DmonadSum} For any object $X$, define $\DmonadSum \in \category(\D^2 X, \D X)$ as $\DmonadSum \defEq \Dpair{\Dproj_0 \comp \Dproj_0}{\Dproj_1 \comp \Dproj_0 + \Dproj_0 \comp \Dproj_1}$. \end{definition} By \ref{ax:D-zero}, $(\Dproj_0 + 0) \summable (0 + \Dproj_1)$ so by \ref{ax:D-witness} $\Dpair{\Dproj_0}{0} \summable \Dpair{0}{\Dproj_1}$. So \Cref{def:Dlift} below makes sense. \begin{definition} \label{def:Dlift} For any object $X$, define $\Dlift \in \category(\D X, \D^2 X)$ as $\Dlift \defEq \Dpair{\Dpair{\Dproj_0}{0}}{\Dpair{0}{\Dproj_1}}$. \end{definition} By \Cref{prop:sum-left-compatible} (left compatibility) $\Dproj_0 \comp (\Dproj_0 + \Dproj_1) \summable \Dproj_1 \comp (\Dproj_0 + \Dproj_1)$. By additivity of $\Dproj_0$ and $\Dproj_1$, it means that $(\Dproj_0 \comp \Dproj_0 + \Dproj_0 \comp \Dproj_1) \summable (\Dproj_1 \comp \Dproj_0 + \Dproj_1 \comp \Dproj_1)$. So by \ref{ax:D-witness}, $\Dpair{\Dproj_0 \comp \Dproj_0}{\Dproj_0 \comp \Dproj_1} \summable \Dpair{\Dproj_1 \comp \Dproj_0}{\Dproj_1 \comp \Dproj_1}$ and \Cref{def:Dswap} below makes sense. \begin{definition} \label{def:Dswap} For any object $X$, we can define $\Dswap \in \category(\D^2 X, \D^2 X)$ as $\Dswap \defEq\Dpair{\Dpair{\Dproj_0 \comp \Dproj_0}{\Dproj_0 \comp \Dproj_1}} {\Dpair{\Dproj_1 \comp \Dproj_0}{\Dproj_1 \comp \Dproj_1}}$. \end{definition} It is probably easier to understand those morphisms by how they operate on witnesses. This corresponds to \Cref{prop:family-on-pairs} below. 
The proof is a straightforward computation using the joint monicity of $\Dproj_0$ and $\Dproj_1$. \begin{proposition} \label{prop:family-on-pairs} For any $x, u, v, w \in \category(U, X)$ such that $\Dpair{\Dpair{x}{u}}{\Dpair{v}{w}}$ is defined, \begin{align*} \DmonadSum \comp \Dpair{\Dpair{x}{u}}{\Dpair{v}{w}} &= \Dpair{x}{u+v} \\ \Dswap \comp \Dpair{\Dpair{x}{u}}{\Dpair{v}{w}} &= \Dpair{\Dpair{x}{v}}{\Dpair{u}{w}} \\ \Dlift \comp \Dpair{x}{u} &= \Dpair{\Dpair{x}{0}}{\Dpair{0}{u}} \end{align*} \end{proposition} \begin{definition} \label{def:CDC} \labeltext{(Dproj-lin)}{ax:Dproj-lin} \labeltext{(D-chain)}{ax:D-chain} \labeltext{(Dsum-lin)}{ax:Dsum-lin} \labeltext{(D-add)}{ax:D-add} \labeltext{(D-Schwarz)}{ax:D-schwarz} \labeltext{(D-lin)}{ax:D-lin} \labeltext{(D-struct)}{ax:D-struct} A \emph{differential structure} is a pre-differential structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ where the following axioms hold, using the associated notation \(\dcoh f\) introduced in~\Cref{def:differential-struct}: \begin{enumerate} \item \ref{ax:Dproj-lin} $\dcoh \Dproj_0 = \Dproj_0 \comp \Dproj_1$, $\dcoh \Dproj_1 = \Dproj_1 \comp \Dproj_1$; \item \ref{ax:Dsum-lin} $\dcoh \Dsum = \Dsum \comp \Dproj_1$, $\dcoh 0 = 0$; \item \ref{ax:D-chain} $\D$ is a functor (Chain Rule); \item \ref{ax:D-add} $\Dinj_0, \DmonadSum$ are natural transformations (additivity of the derivative); \item \ref{ax:D-lin} $\Dlift$ is a natural transformation (linearity of the derivatives); \item \ref{ax:D-schwarz} $\Dswap$ is a natural transformation (Schwarz Rule). \end{enumerate} A \emph{coherent differential category} is a category $\category$ equipped with a differential structure. \end{definition} The axiom \ref{ax:Dproj-lin} corresponds to an important structural property of $\D$ with regard to $\Dpair{\_}{\_}$. The axiom \ref{ax:Dsum-lin} corresponds to the additivity of the derivative operator, that is, $(f+g)' = f' + g'$. The axiom \ref{ax:D-chain} corresponds to the Chain Rule of the differential calculus. The axiom \ref{ax:D-add} says that $u \mapsto f'(x) \cdot u$ is additive. The axiom \ref{ax:D-lin} says that $u \mapsto f'(x) \cdot u$ is not only additive, but also equal to its own derivative in 0. It is shown in Prop.~4.2 of~\cite{Cockett14} (in the left-additive setting of cartesian differential categories) that it implies that $u \mapsto f'(x) \cdot u$ is equal to its own derivative in any points. The same reasoning can be generalized to our setting, but it would require too much technical development to be developed in this paper. Finally, the axiom \ref{ax:D-schwarz} corresponds to the Schwarz Rule, that is, the second derivative $f''(x)$ (a bilinear map) is symmetric. An account of these axioms as properties of $\dcoh$ can be found in \Cref{sec:equivalence-lemmas} and might help the reader understand the ideas mentioned above. \subsection{Linearity} For the rest of this section, $\category$ is only assumed to be equipped with a pre-differential structure. Any use of an axiom of coherent differential categories will be made explicit. \begin{definition}[$\D$-linearity] \label{def:linear} A morphism $f \in \category(X,Y)$ is \emph{$\D$-linear} if the following diagrams commute. 
\begin{center} \begin{tikzcd} \D X \arrow[r, "\D f"] \arrow[d, "\Dproj_1"'] & \D Y \arrow[d, "\Dproj_1"] \\ X \arrow[r, "f"'] & Y \end{tikzcd}\quad \begin{tikzcd} \D X \arrow[r, "\D f"] \arrow[d, "\Dsum"'] & \D Y \arrow[d, "\Dsum"] \\ X \arrow[r, "f"'] & Y \end{tikzcd}\quad \begin{tikzcd} X \arrow[rd, "0"] \arrow[d, "0"'] & \\ X \arrow[r, "f"'] & Y \end{tikzcd} \end{center} \end{definition} \begin{remark} \label{rem:linear} The first diagram can also be written as $\dcoh(f) = f \comp \Dproj_1$ and means that $\D f = \Dpair{f \comp \Dproj_0}{f \comp \Dproj_1}$. \end{remark} \begin{proposition} \label{prop:linear-and-additive} A morphism $f$ is $\D$-linear if and only if it is additive and $\dcoh f = f \comp \Dproj_1$ (that is, $\D f = \Dpair{f \comp \Dproj_0}{f \comp \Dproj_1}$). \end{proposition} \begin{proof} Assume that $f$ is $\D$-linear. Then $f \comp 0 = 0$ and, by \Cref{rem:linear}, $f \comp \Dproj_0 \summable f \comp \Dproj_1$ with witness $\D f$. Thus $f \comp \Dproj_0 + f \comp \Dproj_1 \defEq \Dsum \comp \D f = f \comp \Dsum$ by assumption. So $f$ is additive by~\Cref{prop:additive}, and $\dcoh f = f \circ \Dproj_1$ by assumption. Conversely, only the second diagram is not already part of the assumptions. \begin{align*} \Dsum \comp \D f &= (\Dproj_0 + \Dproj_1) \comp \D f \\ &= \Dproj_0 \comp \D f + \Dproj_1 \comp \D f \quad \text{by \Cref{prop:sum-left-compatible}} \\ &= f \comp \Dproj_0 + f \comp \Dproj_1 \quad \text{ by assumption} \\ &= f \comp (\Dproj_0 + \Dproj_1) = f \comp \Dsum \quad \text{ by additivity of $f$} \end{align*} Thus $f$ is $\D$-linear. \end{proof} Thus $\D$-linear morphisms are in particular additive. \begin{cor} \label{prop:constructors-linear} \ref{ax:Dproj-lin} is equivalent to the linearity of $\Dproj_0$ and $\Dproj_1$. \ref{ax:Dsum-lin} is equivalent to the linearity of $\Dsum$ and $0$. \end{cor} Our notion of additive and $\D$-linear morphisms ultimately coincides with the one of~\cite{Blute09} thanks to \Cref{prop:linear-equation} below, so this distinction between additivity and linearity is as relevant as it is in their setting. \begin{proposition} \label{prop:linear-equation} Assuming \ref{ax:Dproj-lin}, \ref{ax:D-chain} and \ref{ax:D-add}, any morphism $h \in \category(X, Y)$ such that $\dcoh h = h \comp \Dproj_1$ is additive, hence $\D$-linear. \end{proposition} \begin{proof} The proof relies on \Cref{prop:derivative-additive-zero,prop:derivative-additive-sum} of \Cref{sec:equivalence-lemmas}. If $\dcoh h = h \comp \Dproj_1$, then for any $g \in \category(Z, X)$, $h \comp g = h \comp \Dproj_1 \comp \Dpair{0}{g} = \dcoh h \comp \Dpair{0}{g}$. Thus, $h \comp 0 = \dcoh h \comp \Dpair{0}{0} = 0$ by \Cref{prop:derivative-additive-zero}, and $h \comp (f_0 + f_1) = \dcoh h \comp \Dpair{0}{f_0 + f_1} = \dcoh h \comp \Dpair{0}{f_0} + \dcoh h \comp \Dpair{0}{f_1} = h \comp f_0 + h \comp f_1$ by \Cref{prop:derivative-additive-sum}. Thus, $h$ is additive, so $h$ is $\D$-linear by \Cref{prop:linear-and-additive}. \end{proof} \label{sec:Dstruct-lin} Thanks to \ref{ax:D-chain}, \ref{ax:Dproj-lin} and \ref{ax:Dsum-lin}, we can show that linear morphisms are closed under composition, witnesses and sum. 
\begin{proposition} \label{prop:composition-linear} Assuming \ref{ax:D-chain}, $\D$-linear morphisms are closed under composition and inverses. \end{proposition} \begin{proof} Easy verification using the functoriality of $\D$. \iffalse When $\D$ is a functor, the $\D$-linearity of $f \in \category(X, Y)$ means that it is an algebra homomorphism between the $\D$-algebras $(X, \Dproj_1)$ and $(Y, \Dproj_1)$, between the $\D$-algebras $(X, \Dsum)$ and $(Y, \Dsum)$ and between the $\Idfunctor$-algebras $(X, 0)$ and $(Y, 0)$. Here we use the notion of algebra of a \emph{functor} and not of a \emph{monad}. Thus, the fact that linear morphisms are closed under composition and inverses corresponds to a straightforward and standard property about homomorphisms of algebras. \fi \iffalse Assume that $f$ and $g$ are linear. Then $\Dproj_1 \comp \D (f \comp g) = \Dproj_1 \comp \D f \comp \D g = f \comp \Dproj_1 \comp \D g = f \comp g \comp \Dproj_1$. The same goes for $\Dsum$. Finally, $f \comp g \comp 0 = f \comp 0 = 0$. Assume that $f$ is a $\D$-linear isomorphism. Then $f^{-1} \comp 0 = 0$ if and only if $0 = f \comp 0$ which hold by $\D$-linearity of $f$. Besides, $\Dproj_1 \comp \D (f^{-1}) = f^{-1} \comp \Dproj_1$ if and only if $f \comp \Dproj_1 \comp \D (f^{-1}) = \Dproj_1$, that is, if and only if $\Dproj_1 \comp \D f \comp \D (f^{-1}) = \Dproj_1$ by $\D$-linearity of $f$. But this always hold, because $\D (f^{-1}) = (\D f)^{-1}$ by functoriality of $\D$. The same reasoning shows that $\Dsum \comp \D (f^{-1}) = f^{-1} \comp \Dsum$. Thus, $f^{-1}$ is $\D$-linear. \fi \end{proof} \begin{proposition}[$\D$-linearity and pairing] \label{prop:pairing-linear} Assume \ref{ax:D-chain} and \ref{ax:Dproj-lin}. Assume that $h_0, h_1 \in \category(X, Y)$ are summable and both $\D$-linear. Then $\Dpair{h_0}{h_1}$ is $\D$-linear. \end{proposition} \begin{proof} Let us do the diagram involving $\Dsum$, the other two being very similar. By joint monicity of the $\Dproj_i$'s, it suffices to solve the diagram chase below for $i=0,1$. \begin{center} \begin{tikzcd}[column sep = large, row sep = 2em] \D X \arrow[r, "\D \Dpair{h_0}{h_1}"] \arrow[dd, "\Dsum"'] \arrow[rd, "\D h_i"'] \arrow[rdd, "(c)", phantom] \arrow[rd, "(a)" description, phantom, bend left = 25] & \D^2 Y \arrow[r, "\Dsum"] \arrow[d, "\D \Dproj_i"'] \arrow[rd, "(b)", phantom] & \D Y \arrow[d, "\Dproj_i"] \\ & \D Y \arrow[r, "\Dsum"] & Y \\ X \arrow[rr, "\Dpair{h_0}{h_1}"'] \arrow[rru, "h_i"] & {} & \D Y \arrow[u, "\Dproj_i"'] \end{tikzcd} \end{center} (a)~is a consequence of \ref{ax:D-chain}, (b)~is a consequence of \ref{ax:Dproj-lin} and (c)~is the \(\D\)-linearity of $h_i$. \end{proof} \begin{proposition} Assuming \ref{ax:D-chain} and \ref{ax:Dproj-lin}, $\Dsum$ is $\D$-linear if and only if for all $h_0, h_1 \in \category(X, Y)$ summable and both $\D$-linear, $h_0 + h_1$ is $\D$-linear. \end{proposition} \begin{proof} Assume that $h_0, h_1$ are \(\D\)-linear. By \Cref{prop:pairing-linear}, $\Dpair{h_0}{h_1}$ is \(\D\)-linear so $h_0 + h_1 = \Dsum \comp \Dpair{h_0}{h_1}$ is \(\D\)-linear (\(\D\)-linearity is closed under composition). Conversely, $\Dsum = \Dproj_0 + \Dproj_1$ and $\Dproj_0$, $\Dproj_1$ are \(\D\)-linear so $\Dsum$ is \(\D\)-linear. \end{proof} \begin{cor} \label{prop:everyone-linear} Assuming \ref{ax:Dproj-lin}, \ref{ax:Dsum-lin} and \ref{ax:D-chain}, $\Dinj_0, \Dinj_1, \Dswap, \Dlift, \DmonadSum$ are all $\D$-linear. \end{cor} \begin{proof} All these morphisms are obtained through pairing, sums and composition of $\D$-linear maps. 
\end{proof} On a side note, by \Cref{rem:linear} the $\D$-linearity of $\Dproj_i$ means that $\D \Dproj_i = \Dpair{\Dproj_i \comp \Dproj_0}{\Dproj_i \comp \Dproj_1}$. In particular, it implies that $\Dswap = \Dpair{\D \Dproj_0}{\D \Dproj_1}$. This is very useful because the differential of a pair can then be obtained from the pair of the differentials. \begin{proposition} \label{prop:pair-derivative} Assume \ref{ax:Dproj-lin}, \ref{ax:D-chain}. Let $f_0, f_1 \in \category(X, Y)$ such that $f_0 \summable f_1$. Then $\D f_0 \summable \D f_1$ and $\Dpair{\D f_0}{\D f_1} = \Dswap \comp \D \Dpair{f_0}{f_1}$. \end{proposition} \begin{proof} $\Dproj_i \comp \Dswap \comp \D \Dpair{f_0}{f_1} = \D \Dproj_i \comp \D \Dpair{f_0}{f_1} = \D f_i$. \end{proof} \subsection{The Differentiation Monad} \begin{proposition} Assuming \ref{ax:D-chain}, \ref{ax:Dproj-lin} and \ref{ax:Dsum-lin}, the following diagrams commute. \begin{center} \begin{tikzcd} \D X \arrow[r, "\D \Dinj_0"] \arrow[rd, "\id_{\D X}"'] & \D^2 X \arrow[d, "\DmonadSum" description] & \D X \arrow[l, "\Dinj_0"'] \arrow[ld, "\id_{\D X}"] \\ & \D X & \end{tikzcd}\quad \begin{tikzcd} \D^3 X \arrow[r, "\D \DmonadSum_{X}"] \arrow[d, "\DmonadSum_{\D X}"'] & \D^2 X \arrow[d, "\DmonadSum_X"] \\ \D^2 X \arrow[r, "\DmonadSum_X"'] & \D X \end{tikzcd} \end{center} \end{proposition} \begin{proof} By \Cref{prop:everyone-linear}, $\Dinj_0$ is $\D$-linear. Thus by \Cref{rem:linear}, $\D \Dinj_0 = \Dpair{\Dinj_0 \comp \Dproj_0}{\Dinj_0 \comp \Dproj_1} = \Dpair{\Dpair{\Dproj_0}{0}}{\Dpair{\Dproj_1}{0}}$. Hence $\DmonadSum \comp \D \Dinj_0 = \Dpair{\Dproj_0}{0 + \Dproj_1} = \Dpair{\Dproj_0}{\Dproj_1} = \id_{\D X}$ by \Cref{prop:family-on-pairs}. Next $\Dinj_0^{\D X} = \Dpair{\Dpair{\Dproj_0}{\Dproj_1}}{\Dpair{0}{0}}$ since $\Dpair{\Dproj_0}{\Dproj_1} = \id$ and $\Dpair{0^{X,X}}{0^{X,X}} = 0^{\D X, \D X}$. By \Cref{prop:family-on-pairs} again, $\DmonadSum \comp \Dinj_0 = \Dpair{\Dproj_0}{\Dproj_1 + 0} = \Dpair{\Dproj_0}{\Dproj_1} = \id_{\D X}$ so the triangles commute. The square is a direct computation. We use simple juxtaposition for the composition of projections for the sake of readability. The bottom path can be reduced using left compatibility of addition (\Cref{prop:sum-left-compatible}) and additivity of the projections: \begin{align*} \DmonadSum \comp \DmonadSum &= \Dpair{\Dproj_0 \comp \Dproj_0 \comp \DmonadSum}{ \Dproj_1 \comp \Dproj_0 \comp \DmonadSum + \Dproj_0 \comp \Dproj_1 \comp \DmonadSum} \\ &= \Dpair{\Dproj_0\comp \Dproj_0\comp \Dproj_0}{\Dproj_1\comp \Dproj_0\comp \Dproj_0 + \Dproj_0 \comp (\Dproj_1\comp \Dproj_0 + \Dproj_0\comp \Dproj_1)} \\ &= \Dpair{\Dproj_0\comp \Dproj_0\comp \Dproj_0}{\Dproj_1\comp \Dproj_0\comp \Dproj_0 + (\Dproj_0\comp \Dproj_1\comp \Dproj_0 + \Dproj_0\comp \Dproj_0\comp \Dproj_1)} \,. 
\end{align*} The upper path can be reduced by $\D$-linearity of $\DmonadSum$ and left compatibility of sum (\Cref{prop:sum-left-compatible}): \begin{align*} \DmonadSum \comp \D \DmonadSum &= \Dpair{\Dproj_0\comp \Dproj_0 \comp \D \DmonadSum}{ \Dproj_1\comp \Dproj_0 \comp \D \DmonadSum + \Dproj_0 \Dproj_1 \comp \D \DmonadSum}\\ &= \Dpair{\Dproj_0 \comp \DmonadSum \comp \Dproj_0}{ \Dproj_1 \comp \DmonadSum \comp \Dproj_0 + \Dproj_0 \comp \DmonadSum \comp \Dproj_1} \\ &= \Dpair{\Dproj_0\comp \Dproj_0\comp \Dproj_0}{(\Dproj_1\comp \Dproj_0 + \Dproj_0\comp \Dproj_1) \comp \Dproj_0 + \Dproj_0\comp \Dproj_0\comp \Dproj_1)} \\ &= \Dpair{\Dproj_0\comp \Dproj_0\comp \Dproj_0}{(\Dproj_1\comp \Dproj_0\comp \Dproj_0 + \Dproj_0\comp \Dproj_1\comp \Dproj_0) + \Dproj_0\comp \Dproj_0\comp \Dproj_1)} \,. \end{align*} We conclude that those two morphisms are equal, using the associativity of the partial sum. \end{proof} \begin{cor} \ref{ax:Dproj-lin}, \ref{ax:Dsum-lin}, \ref{ax:D-chain} and \ref{ax:D-add} imply that $(\D,\Dinj_0,\DmonadSum)$ is a monad. \end{cor} \section{Interpreting the axioms as properties of the derivative} \label{sec:equivalence-lemmas} In this section, $\category$ is only assumed to be a category equipped with a pre-differential structure~(\Cref{def:differential-struct}). We show that the various axioms of a coherent differential category correspond to standard rules of the differential calculus, written as properties about $\dcoh(f)$. The results of this section are only necessary for \Cref{sec:CDC} but they also provide some intuitions on the axioms of coherent differentiation. All the proofs are similar, and consist in using the joint monicity of $\Dproj_0$ and $\Dproj_1$ to reduce the axioms to a set of equations, then show that only one of those equations is non trivial. In what follows, ``linear'' always means \(\D\)-linear. \begin{proposition} \label{prop:D-chain} $\D$ is a functor if and only if $\dcoh (\id) = \Dproj_1$ and $\dcoh (g \comp f) = \dcoh(g) \comp \Dpair{f \comp \Dproj_0}{\dcoh(f)}$. \end{proposition} \begin{proof} $\D$ is a functor if and only if $\D \id_X = \id_{\D X}$ and for any $g, f$, $\D (g \comp f) = \D g \comp \D f$. By joint monicity of the $\Dproj_i$, $\D \id = \id$ if and only if $\Dproj_i \circ \D \id = \Dproj_i \circ \id = \Dproj_i$. But $\Dproj_0 \circ \D \id = \id \circ \Dproj_0 = \Dproj_0$ by assumptions on Pre-Differential Structures. So $\D \id = \id$ if and only if $\Dproj_1 \circ \D \id = \Dproj_1$, that is, if and only if $\dcoh(\id) = \Dproj_1$. Similarly, $\Dproj_0 \comp \D g \comp \D f = g \comp \Dproj_0 \comp \D f = g \comp f \comp \Dproj_0 = \Dproj_0 \comp \D (g \comp f)$ by assumption on pre-differential-structures. So by joint monicity of the $\Dproj_i$, $\D (g \comp f) = \D g \comp \D f$ if and only if $\Dproj_1 \comp \D (g \comp f) = \Dproj_1 \comp \D g \comp \D f$. By definition of $\dcoh$, this corresponds exactly to the equation $\dcoh (g \comp f) = \dcoh(g) \comp \D f = \dcoh(g) \comp \Dpair{f \comp \Dproj_0}{\dcoh(f)}$ \end{proof} \begin{proposition} \label{prop:D-sum-com} Assuming \ref{ax:Dproj-lin}, $\Dsum$ is linear if and only if $\D \Dsum = \D \Dproj_0 + \D \Dproj_1$. Assuming \ref{ax:Dproj-lin} and \ref{ax:D-chain}, $\Dsum$ is linear if and only if for any $f_0, f_1$ that are summable, $\D(f_0+f_1) = \D f_0 + \D f_1$ (recall that $\D f_0 \summable \D f_1$ by \Cref{prop:pair-derivative}). 
\end{proposition} \begin{proof} By linearity of $\Dproj_i$, $\D \Dproj_i = \Dpair{\Dproj_i \comp \Dproj_0}{\Dproj_i \comp \Dproj_1}$ so by \Cref{prop:pair-sum}, $\D \Dproj_0 + \D \Dproj_1 = \Dpair{\Dproj_0 \comp \Dproj_0 + \Dproj_1 \comp \Dproj_0} {\Dproj_0 \comp \Dproj_1 + \Dproj_1 \comp \Dproj_1} = \Dpair{(\Dproj_0 + \Dproj_1) \comp \Dproj_0}{(\Dproj_0 + \Dproj_1) \comp \Dproj_1} = \Dpair{\Dsum \comp \Dproj_0}{\Dsum \comp \Dproj_1}$. But $\Dsum$ is linear if and only if $\D \Dsum = \Dpair{\Dsum \comp \Dproj_0}{\Dsum \comp \Dproj_1}$ by \Cref{prop:linear-and-additive}, that is, if and only if $\D \Dsum = \D \Dproj_0 + \D \Dproj_1$. For the second part of the proposition, notice that the right-hand statement for $f_0 = \Dproj_0$ and $f_1 = \Dproj_1$ is exactly $\D \Dsum = \D \Dproj_0 + \D \Dproj_1$, so the converse direction holds. For the forward direction, notice that \begin{align*} \D (f_0 + f_1) &= \D (\Dsum \comp \Dpair{f_0}{f_1}) \\ &= \D \Dsum \comp \D \Dpair{f_0}{f_1} \text{\quad by \ref{ax:D-chain}} \\ &= (\D \Dproj_0 + \D \Dproj_1) \comp \D \Dpair{f_0}{f_1} \text{\quad by assumptions} \\ &= \D \Dproj_0 \comp \D \Dpair{f_0}{f_1} + \D \Dproj_1 \comp \D \Dpair{f_0}{f_1} \\ &= \D f_0 + \D f_1 \text{\quad by \ref{ax:D-chain}} \end{align*} \end{proof} \begin{cor} \label{prop:Dsum-lin} Assuming \ref{ax:Dproj-lin} and \ref{ax:D-chain}, $\Dsum$ is linear if and only if for any $f_0, f_1$ that are summable, $\dcoh (f_0+f_1) = \dcoh (f_0) + \dcoh (f_1)$. \end{cor} \begin{proof} The linearity of $\Dsum$ is equivalent to $\D (f_0 + f_1) = \D f_0 + \D f_1$ for any $f_0, f_1$ summable. By \Cref{prop:pair-sum}, this is equivalent to $\Dpair{(f_0 + f_1) \comp \Dproj_0}{\dcoh (f_0 + f_1)} = \Dpair{f_0 \comp \Dproj_0 + f_1 \comp \Dproj_0}{\dcoh (f_0) + \dcoh (f_1)}$. The left compatibility of addition (\Cref{prop:sum-left-compatible}) ensures that the first coordinates are always equal. So $\Dsum$ is linear if and only if for all $f_0 \summable f_1$, $\dcoh (f_0+f_1) = \dcoh (f_0) + \dcoh (f_1)$. \end{proof} \begin{proposition} \label{prop:derivative-additive-zero} The following assertions are equivalent: \begin{itemize} \item[(1)] $\Dinj_0$ is natural; \item[(2)] For any $f \in \category (X, Y)$, $ \dcoh f \comp \Dinj_0 = 0$; \item[(3)] For any $f \in \category (X, Y)$, any object $U$ and $x \in \category(U, X)$, $\dcoh f \comp \Dpair{x}{0} = 0$. \end{itemize} \end{proposition} \begin{proof} (1) $\Equiv$ (2). By joint monicity of the $\Dproj_i$, for any $f \in \category(X, Y)$, $\D f \comp \Dinj_0 = \Dinj_0 \comp f$ if and only if $\Dproj_0 \comp \D f \comp \Dinj_0 = \Dproj_0 \comp \Dinj_0 \comp f = f$ and $\Dproj_1 \comp \D f \comp \Dinj_0 = \Dproj_1 \comp \Dinj_0 \comp f = 0$. The first condition always holds by naturality of $\Dproj_0$ and by definition of $\Dinj_0$. So $\Dinj_0$ is natural if and only if the second identity holds. This equation is precisely~(2). (2) $\Equiv$ (3). The forward direction is directly obtained by composing the identity of~(2) with $x$ on the right. The reverse is directly obtained by applying the equation of~(3) to $x = \id_X$.
\end{proof} \begin{proposition} \label{prop:derivative-additive-sum} Assuming \ref{ax:Dproj-lin} and \ref{ax:D-chain}, the following assertions are equivalent: \begin{itemize} \item[(1)] $\DmonadSum$ is natural; \item[(2)] for any $f \in \category (X, Y)$, $\dcoh f \comp \D \Dproj_0 \summable \dcoh f \comp \Dproj_0 $ and $ \dcoh f \comp \DmonadSum = \dcoh f \comp \D \Dproj_0 + \dcoh f \comp \Dproj_0 $; \item[(3)] for any $f \in \category (X, Y)$, any object $U$ and any $x, u, v \in \category(U, X)$ that are summable, $\dcoh f \comp \Dpair{x}{u} \summable \dcoh f \comp \Dpair{x}{v}$ and \[ \dcoh f \comp \Dpair{x}{u+v} = \dcoh f \comp \Dpair{x}{u} + \dcoh f \comp \Dpair{x}{v}\,. \] \end{itemize} \end{proposition} \begin{proof} (1) $\Equiv$ (2). By joint monicity of the $\Dproj_i$, for any $f \in \category(X, Y)$, $\D f \comp \DmonadSum = \DmonadSum \comp \D^2 f$ if and only if $\Dproj_0 \comp \D f \comp \DmonadSum = \Dproj_0 \comp \DmonadSum \comp \D^2 f$ and $\Dproj_1 \comp \D f \comp \DmonadSum = \Dproj_1 \comp \DmonadSum \comp \D^2 f$. The equation $\Dproj_0 \comp \D f \comp \DmonadSum = \Dproj_0 \comp \DmonadSum \comp \D^2 f$ always holds. Indeed \begin{align*} \Dproj_0 \comp \D f \comp \DmonadSum &= f \comp \Dproj_0 \comp \DmonadSum \text{\quad by naturality of $\Dproj_0$} \\ &= f \comp \Dproj_0 \comp \Dproj_0 \text{\quad by definition of $\DmonadSum$} \\ \Dproj_0 \comp \DmonadSum \comp \D^2 f &= \Dproj_0 \comp \Dproj_0 \comp \D^2 f \text{\quad by definition of $\DmonadSum$} \\ &= f \comp \Dproj_0 \comp \Dproj_0 \text{\quad by naturality of $\Dproj_0$} \, . \end{align*} The left hand side of the equation $\Dproj_1 \comp \D f \comp \DmonadSum = \Dproj_1 \comp \DmonadSum \comp \D^2 f$ is $\dcoh(f) \circ \DmonadSum$ by definition. The right hand side rewrites as follows. \begin{align*} \Dproj_1 \comp \DmonadSum \comp \D^2 f &= (\Dproj_0 \comp \Dproj_1 + \Dproj_1 \comp \Dproj_0) \comp \D^2 f & \\ &= \Dproj_0 \comp \Dproj_1 \comp \D^2 f + \Dproj_1 \comp \Dproj_0 \comp \D^2 f \text{\quad by \Cref{prop:sum-left-compatible}} \\ &= \Dproj_1 \comp \D \Dproj_0 \comp \D^2 f + \Dproj_1 \comp \Dproj_0 \comp \D^2 f \text{\quad by $\D$-linearity of $\Dproj_0$} \\ &= \Dproj_1 \comp \D (\Dproj_0 \comp \D f) + \Dproj_1 \comp \Dproj_0 \comp \D^2 f \text{\quad by \ref{ax:D-chain}} \\ &= \Dproj_1 \comp \D (f \comp \Dproj_0) + \Dproj_1 \comp \D f \comp \Dproj_0 \text{\quad by naturality of $\Dproj_0$} \\ &= \Dproj_1 \comp \D f \comp \D \Dproj_0 + \Dproj_1 \comp \D f \comp \Dproj_0 \text{\quad by \ref{ax:D-chain}} \\ &= \dcoh f \comp \D \Dproj_0 + \dcoh f \circ \Dproj_0 \text{\quad by definition} \end{align*} So this second equation under consideration is equivalent to the equation of~(2). (2) $\Equiv$ (3). Recall that $\D \Dproj_0 = \Dpair{\Dproj_0 \comp \Dproj_0}{\Dproj_0 \comp \Dproj_1}$ by linearity of $\Dproj_0$. Then the forward direction is directly obtained by composing the equation of~(2) with $\Dpair{\Dpair{x}{v}}{\Dpair{u}{0}}$ on the right. The converse is directly obtained by applying the equation of~(3) to $x = \Dproj_0 \comp \Dproj_0$, $u = \Dproj_1 \comp \Dproj_0$ and $v = \Dproj_0 \comp \Dproj_1$. \end{proof} \begin{remark} \label{rem:dcoh-dcoh} Notice that $\dcoh(\dcoh(f)) = \Dproj_1 \comp \D (\Dproj_1 \comp \D f) = \Dproj_1 \comp \D \Dproj_1 \comp \D^2 f = \Dproj_1 \comp \Dproj_1 \comp \D^2 f$ assuming \ref{ax:D-chain} and \ref{ax:Dproj-lin}. Thus, $\dcoh(\dcoh(f))$ is nothing more than the rightmost coordinate of $\D^2 f$. This will be useful for what follows in this part. 
\end{remark} \begin{proposition} \label{prop:D-lin} Assuming \ref{ax:Dproj-lin}, \ref{ax:D-chain} and the naturality of $\Dinj_0$, the following assertions are equivalent: \begin{enumerate} \item $\Dlift$ is natural; \item for every morphism $f \in \category(X, Y)$, $\dcoh(\dcoh (f)) \comp \Dlift = \dcoh (f)$; \item for every morphism $f \in \category(X, Y)$ and all morphisms $x, u \in \category(U, X)$ that are summable, \[ \dcoh(\dcoh(f)) \comp \Dpair{\Dpair{x}{0}}{\Dpair{0}{u}} = \dcoh(f) \comp \Dpair{x}{u}\,. \] \end{enumerate} \end{proposition} \begin{proof} By joint monicity of the $\Dproj_i$, $\Dlift$ is natural if and only if for all $f$ and for all $i, j \in \{0, 1\}, \Dproj_i \comp \Dproj_j \comp \D^2 f \comp \Dlift = \Dproj_i \comp \Dproj_j \comp \Dlift \comp \D f$. By \Cref{rem:dcoh-dcoh} (and because $\Dproj_1 \comp \Dproj_1 \comp \Dlift = \Dproj_1$), the equation for $i = j = 1$ corresponds exactly to the equation $\dcoh(\dcoh (f)) \comp \Dlift = \dcoh (f)$. Thus, it suffices to show that $\Dproj_i \comp \Dproj_j \comp \D^2 f \comp \Dlift = \Dproj_i \comp \Dproj_j \comp \Dlift \comp \D f$ always holds when $(i, j) \neq (1, 1)$ to conclude that $(1)$ is equivalent to $(2)$. \begin{itemize} \item Case $i = 0, j = 0$: $\Dproj_0 \comp \Dproj_0 \comp \Dlift \comp \D f = \Dproj_0 \comp \D f = f \comp \Dproj_0$ and $\Dproj_0 \comp \Dproj_0 \comp \D^2 f \comp \Dlift = f \comp \Dproj_0 \comp \Dproj_0 \comp \Dlift = f \comp \Dproj_0$; \item Case $i = 1, j = 0$: $\Dproj_0 \comp \Dproj_1 \comp \Dlift \comp \D f = 0 \comp \D f = 0$ and $\Dproj_1 \comp \Dproj_0 \comp \D^2 f \comp \Dlift = \Dproj_1 \comp \D f \comp \Dproj_0 \comp \Dlift = \Dproj_1 \comp \D f \comp \Dinj_0 \comp \Dproj_0 = \Dproj_1 \comp \Dinj_0 \comp f \comp \Dproj_0 = 0$ thanks to the naturality of $\Dinj_0$; \item Case $i = 0, j = 1$: $\Dproj_0 \comp \Dproj_1 \comp \Dlift \comp \D f = 0 \comp \D f = 0$ and $\Dproj_0 \comp \Dproj_1 \comp \D^2 f \comp \Dlift = \Dproj_1 \comp \D \Dproj_0 \comp \D^2 f \comp \Dlift = \Dproj_1 \comp \D f \comp \D \Dproj_0 \comp \Dlift = \Dproj_1 \comp \D f \comp \Dinj_0 \comp \Dproj_0 = \Dproj_1 \comp \Dinj_0 \comp f \comp \Dproj_0 = 0$ thanks to the naturality of $\Dinj_0$. \end{itemize} Next~(2) is a particular case of~(3) for $x = \Dproj_0 $ and $u = \Dproj_1$. Conversely, assuming~(2) we have that $\dcoh(\dcoh(f)) \comp \Dpair{\Dpair{x}{0}}{\Dpair{0}{u}} = \dcoh(\dcoh(f)) \comp \Dlift \comp \Dpair{x}{u} = \dcoh(f) \comp \Dpair{x}{u}$. \end{proof} \begin{proposition} \label{prop:D-schwarz} Assuming \ref{ax:Dproj-lin} and \ref{ax:D-chain}, the following assertions are equivalent: \begin{enumerate} \item $\Dswap$ is natural; \item for every morphism $f \in \category(X, Y)$, $\dcoh(\dcoh (f)) \comp \Dswap = \dcoh(\dcoh (f))$; \item for every morphism $f \in \category(X, Y)$ and $x, u, v, w \in \category(U, X)$ that are summable, \begin{align*} \dcoh(\dcoh (f)) \comp \Dpair{\Dpair{x}{u}}{\Dpair{v}{w}} = \dcoh(\dcoh (f)) \comp \Dpair{\Dpair{x}{v}}{\Dpair{u}{w}}\,. \end{align*} \end{enumerate} \end{proposition} \begin{proof} By joint monicity of the $\Dproj_i$, $\Dswap$ is natural if and only if for all $f$ and for all $i, j \in \{0, 1\}, \Dproj_i \comp \Dproj_j \comp \D^2 f \comp \Dswap = \Dproj_i \comp \Dproj_j \comp \Dswap \comp \D^2 f$. But $\Dproj_i \comp \Dproj_j \comp \Dswap \comp \D^2 f = \Dproj_j \comp \Dproj_i \comp \D^2 f$. Then, by \Cref{rem:dcoh-dcoh}, the equation for $i = j = 1$ corresponds exactly to the equation $\dcoh(\dcoh (f)) \comp \Dswap = \dcoh(\dcoh(f))$.
Thus, it suffices to show that $\Dproj_i \comp \Dproj_j \comp \D^2 f \comp \Dswap = \Dproj_j \comp \Dproj_i \comp \D^2 f$ when $(i, j) \neq (1, 1)$ to conclude that~(1) is equivalent to~(2). \begin{itemize} \item $i = 0, j = 0$: The equation holds by reflexivity of equality. \item $i=1, j=0$: $\Dproj_0 \comp \Dproj_1 \comp \D^2 f = \Dproj_1 \comp \D \Dproj_0 \comp \D^2 f = \Dproj_1 \comp \D f \comp \D \Dproj_0$ and $\Dproj_1 \comp \Dproj_0 \comp \D^2 f \comp \Dswap = \Dproj_1 \comp \D f \comp \Dproj_0 \comp \Dswap = \Dproj_1 \comp \D f \comp \D \Dproj_0$ so both sides are equal. \item $i=0, j=1$: $\Dproj_0 \comp \Dproj_1 \comp \D^2 f \comp \Dswap = \Dproj_1 \comp \Dproj_0 \comp \D^2 f$ if and only if $\Dproj_0 \comp \Dproj_1 \comp \D^2 f = \Dproj_1 \comp \Dproj_0 \comp \D^2 f \comp \Dswap$ because $\Dswap$ is involutive. But this equation holds, as seen above. \end{itemize} Next, (2) is a particular case of~(3) for $x = \Dproj_0 \comp \Dproj_0$, $u = \Dproj_0 \comp \Dproj_1$, $v = \Dproj_1 \comp \Dproj_0$ and $w = \Dproj_1 \comp \Dproj_1$. Conversely, if (2) holds then $\dcoh(\dcoh (f)) \comp \Dpair{\Dpair{x}{u}}{\Dpair{v}{w}} = \dcoh(\dcoh(f)) \comp \Dswap \comp \Dpair{\Dpair{x}{v}}{\Dpair{u}{w}} = \dcoh(\dcoh(f)) \comp \Dpair{\Dpair{x}{v}}{\Dpair{u}{w}}$. \end{proof} \iffalse \subsection{Link to Coherent Differentiation} One can define a category $\categoryLin$ whose objects are the objects of $\category$ and whose morphisms are the $\D$-linear morphisms. This is a category, because the composition of two $\D$-linear morphisms is a $\D$-linear morphisms thanks to \ref{ax:D-chain}. Besides, there is a forgetful functor $\forget : \categoryLin \arrow \category$. Let us denote $g \compl f$ the composition of $f \in \categoryLin(X, Y)$ with $g \in \categoryLin(Y, Z)$. Then, the linearity of $\Dproj_0, \Dproj_1, \Dsum$ and $0$ ensures that $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is a structure on $\categoryLin$. Besides, two morphisms $f, g \in \categoryLin(X, Y)$ are summable if and only if $\forget(f), \forget(g)$ are summable as morphisms in $\category$, with sum such that $\forget(f + g) = \forget(f) + \forget(g)$. In particular, $\Dproj_0, \Dproj_1, \Dsum$ are still additive in $\categoryLin$ so $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is a Pre-Summability structure on $\categoryLin$. Furthermore, the axioms of Summability Structure in $\category$ imply the same axioms in $\categoryLin$ so $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is a Summability Structure on $\categoryLin$. Also, \Cref{prop:constructors-linear} ensures that $\Dinj_0, \Dinj_1, \Dswap, \Dlift, \DmonadSum$ are all linear. Because linear morphisms are additive, the Summability Structure on $\categoryLin$ is a Summability Structure in the sense of \Ehrhard{} \cite{Ehrhard21} and for any $f \in \categoryLin(X, Y)$, $\D (\forget f) = \forget(\S f)$ where $\S$ is the functor that coincides with $\D$ on the objects and is defined on the morphisms as $\S f = \Dpair{f \comp \Dproj_0}{f \comp \Dproj_1}$. So, \ref{ax:D-chain} states that $\D$ is a functor that extends the functor $\S$ to $\category$. Besides, one can check that $\Dinj_0, \DmonadSum$ as morphisms of $\categoryLin$ are natural transformations for $\S$, that is $\Dinj_0 \compl f = \S f \compl \Dinj_0$ and $\DmonadSum \compl \S^2 f = \S f \compl \DmonadSum$. Thus, $(\S, \Dinj_0, \DmonadSum)$ is a monad on $\categoryLin$. Then, \ref{ax:D-add} states that the Monad $(\D, \Dinj_0, \DmonadSum)$ extends the Monad $(\S, \Dinj_0, \DmonadSum)$ to $\category$.
This is not surprising, as the equivalent of the Chain Rule and the additive rule in Coherent Differentiation as introduced by \Ehrhard{} are precisely rules that allows to extend the monad $\S$ on a model of $\LL{}$ $\categoryLL$ to the Kleisli Category of the resource modality $\kleisliExp$. \fi \section{Compatibility with the cartesian product} We assume in this section that $\category$ is cartesian and is equipped with a left summability structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$. \begin{notation} \label{notation:product} We use $\with$ for the cartesian product, following the notations of $\LL{}$. For any objects $Y_0, Y_1$, the projection will be written as $\prodProj_i \in \category(Y_0 \with Y_1, Y_i)$ and the pairing of $f_0 \in \category(X, Y_0)$ and $f_1 \in \category(X, Y_1)$ as $\prodPair{f_0}{f_1}$. Finally, the terminal object will be written $\top$. Note that the uniqueness of the pairing in the universal property of the cartesian product can be understood as the joint monicity of the $\prodProj_i$. \end{notation} \subsection{Cartesian product and summability structure} \label{sec:cartesian-summability} \begin{definition} \label{def:prod-compatible} The summability structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is \emph{compatible with the cartesian product} if $\prodPair{0}{0} = 0$ and, for all $f_0, g_0 \in \category(X, Y_0)$ and $f_1, g_1 \in \category(X, Y_1)$: \begin{itemize} \item $\prodPair{f_0}{f_1} \summable \prodPair{g_0}{g_1}$ if and only if $f_0 \summable g_0$ and $f_1 \summable g_1$ \item and then \( \prodPair{f_0}{f_1} + \prodPair{g_0}{g_1} = \prodPair{f_0+g_0}{f_1+g_1} \). \end{itemize} \end{definition} That is, sums are computed componentwise. Let us break down this definition in more detail. \begin{proposition} \label{prop:projections-additive} The following are equivalent: \begin{itemize} \item $\prodProj_0, \prodProj_1$ are additive; \item $\prodPair{0}{0} = 0$ and for all $f_0, g_0 \in \category(X, Y_0)$ and $f_1, g_1 \in \category(X, Y_1)$, if $\prodPair{f_0}{f_1} \summable \prodPair{g_0}{g_1}$ then $f_0 \summable g_0$, $f_1 \summable g_1$ and $\prodPair{f_0}{f_1} + \prodPair{g_0}{g_1} = \prodPair{f_0+g_0}{f_1+g_1}$. \end{itemize} \end{proposition} \begin{proof} Assume that $\prodProj_0, \prodProj_1$ are additive. Then $\prodProj_i \comp 0 = 0 = \prodProj_i \comp \prodPair{0}{0}$. Thus by joint monicity, $0 = \prodPair{0}{0}$. Furthermore, assume that $\prodPair{f_0}{f_1} \summable \prodPair{g_0}{g_1}$. Then by additivity of $\prodProj_i$, $\prodProj_i \comp \prodPair{f_0}{f_1} = f_i$ and $\prodProj_i \comp \prodPair{g_0}{g_1} = g_i$ are summable and $f_i + g_i = \prodProj_i \comp (\prodPair{f_0}{f_1} + \prodPair{g_0}{g_1})$. So the joint monicity of the $\prodProj_i$ implies that $\prodPair{f_0}{f_1} + \prodPair{g_0}{g_1} = \prodPair{f_0+g_0}{f_1+g_1}$. Conversely, since $\prodPair{0}{0} = 0$ we have $\prodProj_i \comp 0 = \prodProj_i \comp \prodPair{0}{0} = 0$. Let $f, g \in \category(X, Y_0 \with Y_1)$ be summable. One can write $f = \prodPair{\prodProj_0 \comp f}{\prodProj_1 \comp f}$ and $g = \prodPair{\prodProj_0 \comp g}{\prodProj_1 \comp g}$. Since $f \summable g$ we have $\prodProj_i \comp f \summable \prodProj_i \comp g$ and $f + g = \prodPair{\prodProj_0 \comp f + \prodProj_0 \comp g}{\prodProj_1 \comp f + \prodProj_1 \comp g}$. Applying $\prodProj_i$ on this equation yields that $\prodProj_i \comp (f+g) = \prodProj_i \comp f + \prodProj_i \comp g$ so $\prodProj_i$ is additive.
\end{proof} \begin{cor} \label{prop:with-sum} If $\prodProj_0$ and $\prodProj_1$ are additive, then $0 \with 0 = 0$ and for all $f_0, g_0 \in \category(X_0, Y_0)$ and $f_1, g_1 \in \category(X_1, Y_1)$, if $f_0 \with f_1 \summable g_0 \with g_1$ then $f_0 \summable g_0$, $f_1 \summable g_1$ and $f_0 \with f_1 + g_0 \with g_1 = (f_0 + g_0) \with (f_1 + g_1)$. \end{cor} \begin{proof} We simply use the fact that $f \with g = \prodPair{f \comp \prodProj_0}{g \comp \prodProj_1}$ and~\Cref{prop:projections-additive} together with the left compatibility of sum with regard to composition (\Cref{prop:sum-left-compatible}). \end{proof} We now assume that the projections $\prodProj_0$ and $\prodProj_1$ are additive. This allows us to define a morphism $\prodSwap \in \category(\D (X_0 \with X_1), \D X_0 \with \D X_1)$ for any objects $X_0, X_1$ as $\prodSwap \defEq \prodPair{\Dpair{\prodProj_0 \comp \Dproj_0}{\prodProj_0 \comp \Dproj_1}} {\Dpair{\prodProj_1 \comp \Dproj_0}{\prodProj_1 \comp \Dproj_1}}$. In other words, $\Dproj_i \comp \prodProj_j \comp \prodSwap = \prodProj_j \comp \Dproj_i$ , that is \[ \prodSwap \comp \Dpair{\prodPair{f_0}{f_1} }{\prodPair{g_0}{g_1}} = \prodPair{\Dpair{f_0}{g_0}}{\Dpair{f_1}{g_1}}\,. \] This is very reminiscent of the flip $\Dswap$ (it swaps the two middle coordinates), except that there are no summability conditions associated with the $\prodPair{\_}{\_}$ pairing. \begin{theorem} \label{prop:prodSwap-inverse} The following assertions are equivalent \begin{enumerate} \item $\prodSwap$ is an isomorphism; \item $\Dproj_0 \with \Dproj_0 \summable \Dproj_1 \with \Dproj_1$; \item for any $f_0, g_0 \in \category(X, Y_0)$, $ f_1, g_1 \in \category(X, Y_1)$, if $f_0 \summable g_0$ and $f_1 \summable g_1$ then $f_0 \with f_1 \summable g_0 \with g_1$; \item for any $f_0, g_0 \in \category(X, Y_0)$, $ f_1, g_1 \in \category(X, Y_1)$, if $f_0 \summable g_0$ and $f_1 \summable g_1$ then $\prodPair{f_0}{f_1} \summable \prodPair{g_0}{g_1}$ \end{enumerate} and then \(\Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1}=\prodSwap^{-1}\). \end{theorem} \begin{proof} $(1) \Rightarrow (2)$: Assume that $\prodSwap$ is an isomorphism with inverse $\witness$. Then $\Dproj_i \comp \prodProj_j = \Dproj_i \comp \prodProj_j \comp \prodSwap \comp \witness = \prodProj_j \comp \Dproj_i \comp \witness$. But $\Dproj_i \comp \prodProj_j = \prodProj_j \comp (\Dproj_i \with \Dproj_i)$ by naturality of $\prodProj_j$ so $\prodProj_j \comp \Dproj_i \comp \witness = \prodProj_j \comp (\Dproj_i \with \Dproj_i)$. By joint monicity of the $\prodProj_j$'s we have $\Dproj_i \comp \witness = (\Dproj_i \with \Dproj_i)$. That is $\witness = \Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1}$. $(2) \Rightarrow (1)$: Assume that $\Dproj_0 \with \Dproj_0 \summable \Dproj_1 \with \Dproj_1$, of witness $\witness$. Then, $\prodProj_j \comp \Dproj_i \comp \witness = \prodProj_j \comp (\Dproj_i \with \Dproj_i) = \Dproj_i \comp \prodProj_j$. Hence \begin{align*} \prodProj_j \comp \Dproj_i \comp \witness \comp \prodSwap = \Dproj_i \comp \prodProj_j \comp \prodSwap = \prodProj_j \comp \Dproj_i \\ \Dproj_i \comp \prodProj_j \comp \prodSwap \comp \witness = \prodProj_j \comp \Dproj_i \comp \witness = \Dproj_i \comp \prodProj_j \end{align*} By joint monicity of the $\prodProj_j$'s and of the $\Dproj_i$'s we get $\witness \comp \prodSwap = \id_{\D(X_0 \with X_1)}$ and $\prodSwap \comp \witness = \id_{\D X_0 \with \D X_1}$. 
$(2) \Rightarrow (3)$: We have $\Dpair{f_0}{g_0}\in \category(X, \D Y_0)$ and $\Dpair{f_1}{g_1} \in \category(X, \D Y_1)$. Let \( \witness = \Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1} \comp (\Dpair{f_0}{g_0} \with \Dpair{f_1}{g_1}) \). We have $\Dproj_0 \comp\witness = f_0 \with f_1$ and $\Dproj_1\comp \witness = g_0 \with g_1$ so that $f_0 \with f_1 \summable g_0 \with g_1$. $(3) \Rightarrow (2)$: $(2)$ is a particular case of case $(3)$. $(3) \Rightarrow (4)$: Assume that $f_0 \summable g_0$ and $f_1 \summable g_1$. Then by assumption, $f_0 \with f_1 \summable g_0 \with g_1$. Let $\witness = \Dpair{f_0 \with f_1}{g_0 \with g_1} \comp \prodPair{\id}{\id}$. Then $\Dproj_0\comp \witness = \prodPair{f_0}{f_1}$ and $\Dproj_1\comp \witness = \prodPair{g_0}{g_1}$ so that $\prodPair{f_0}{f_1} \summable \prodPair{g_0}{g_1} $. $(4) \Rightarrow (3)$: Assume that $f_0 \summable g_0$ and $f_1 \summable g_1$. Then $f_0 \comp \prodProj_0 \summable g_0 \comp \prodProj_0$ and $f_1 \comp \prodProj_1 \summable g_1\comp \prodProj_1$ by left compatibility wrt.~composition (\Cref{prop:sum-left-compatible}). Hence, by assumption, $\prodPair{f_0\comp \prodProj_0}{ f_1\comp \prodProj_1} \summable \prodPair{g_0\comp \prodProj_0}{g_1\comp \prodProj_1}$. That is $f_0 \with f_1 \summable g_0 \with g_1$. \end{proof} \begin{cor} \label{cor:summability-cartesian-compat} A summability structure is compatible with the cartesian product if and only if $\prodProj_0, \prodProj_1$ are additive and $\prodSwap$ is an isomorphism. \end{cor} \iffalse \begin{proposition} If a Summability Structure is compatible with the product, then the pairing of two additive morphisms is additive. \end{proposition} \begin{proof} Assume that $h_0 \in \category(X, Y_0), h_1 \in \category(X, Y_1)$ are additive. Then $\prodPair{h_0}{h_1} \comp 0 = \prodPair{h_0 \comp 0}{h_1 \comp 0} = \prodPair{0}{0} = 0$. Furthermore, assume that $f_0, f_1 \in \category(Z, X)$ are summable. Then: $h_0 \comp f_0 \summable h_0 \comp f_1$ and $h_1 \comp f_0 \summable h_1 \comp f_1$. By \Cref{prop:prodSwap-inverse}, $\prodPair{h_0 \comp f_0}{h_1 \comp f_0} \summable \prodPair{h_0 \comp f_1}{h_1 \comp f_1}$ of sum $\prodPair{h_0 \comp f_0 + h_0 \comp f_1 }{h_1 \comp f_0 + h_1 \comp f_1} = \prodPair{h_0 \comp (f_0 + f_1)}{h_1 \comp (f_0 + f_1)}$ by additivity of the $h_i$. That is, $\prodPair{h_0}{h_1} \comp f_0 \summable \prodPair{h_0}{h_1 }\comp f_1$ of sum $\prodPair{h_0}{h_1} \comp (f_0 + f_1)$. \end{proof} \fi \subsection{Cartesian product and differential structure} We now assume that $\category$ is a cartesian category with a pre-differential structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$. \begin{definition} The (pre-)differential structure $(\D, \Dproj_0, \Dproj_1, \Dsum)$ is \emph{compatible with the cartesian product} if the underlying summability structure is compatible with the cartesian product, and if $\prodProj_0, \prodProj_1$ are $\D$-linear. A \emph{cartesian coherent differential category} (CCDC) is a coherent differential category whose cartesian product is compatible with the differential structure. \end{definition} We assume that $\category$ is a CCDC. By $\D$-linearity of $\prodProj_0$ and $\prodProj_1$, all constructions involving only the cartesian product are $\D$-linear. \begin{proposition} \label{prop:prod-pairing-linear} If $h_0 \in \category(X, Y_0)$ and $h_1 \in \category(X, Y_1)$ are $\D$-linear, then $\prodPair{h_0}{h_1}$ is $\D$-linear. 
If $f_0 \in \category(X_0, Y_0)$ and $f_1 \in \category(X_1, Y_1)$ are $\D$-linear, then $f_0 \with f_1$ is $\D$-linear. \end{proposition} \begin{proof} For the first statement we proceed as for \Cref{prop:pairing-linear} except that the paring as a summable pair is replaced by the pairing of the cartesian product. The second statement follows from the first one, because $f_0 \with f_1 = \prodPair{f_0 \comp \prodProj_0}{f_1 \comp \prodProj_1}$, the projections are $\D$-linear, and \(\D\)-linearity is closed under composition. \end{proof} For any objects $X_0, X_1$, there is a natural transformation $\prodPair{\D \prodProj_0}{\D \prodProj_1} \in \category(\D(X_0\with X_1), \D X_0 \with \D X_1)$. By \(\D\)-linearity of $\prodProj_0$ and $\prodProj_1$ this natural transformation is equal to $\prodPair{\Dpair{\prodProj_0 \comp \Dproj_0}{\prodProj_0 \comp \Dproj_1}} {\Dpair{\prodProj_1 \comp \Dproj_0}{\prodProj_1 \comp \Dproj_1}} = \prodSwap$. Whence a result similar to~\Cref{prop:pair-derivative}. \begin{proposition} \label{prop:prodpair-derivative} For any $f_0 \in \category(X, Y_0)$ and $f_1 \in \category(X, Y_1)$, $\prodPair{\D f_0}{\D f_1} = \prodSwap \comp \D \prodPair{f_0}{f_1}$ \end{proposition} \begin{proof} $\prodProj_i \comp \prodSwap \comp \D \prodPair{f_0}{f_1} = \D \prodProj_i \comp \D \prodPair{f_0}{f_1} = \D f_i$. \end{proof} \subsection{Partial derivatives} Using $\prodSwap^{-1}$, we define two natural transformations \begin{align*} \strengthL &= (\prodSwap)^{-1} \comp (\id_{\D X_0} \with \Dinj_0) \in \category (\D X_0 \with X_1, \D (X_0 \with X_1))\\ \strengthR &= (\prodSwap)^{-1} \comp (\Dinj_0 \with \id_{\D X_1}) \in \category (X_0 \with \D X_1, \D (X_0 \with X_1)) \end{align*} Note that $\prodSwap$, $(\prodSwap)^{-1}$, $\strengthL$ and $\strengthR$ are all $\D$-linear, thanks to \Cref{prop:prod-pairing-linear,prop:everyone-linear,prop:composition-linear}. \begin{proposition} \label{prop:strength} $\strength{0} = \Dpair{\Dproj_0 \with \id_{X_1}}{\Dproj_1 \with 0}$ and $\strength{1} = \Dpair{\id_{X_0} \with \Dproj_0}{0 \with \Dproj_1}$ \end{proposition} \begin{proof} By \Cref{prop:prodSwap-inverse}, $(\prodSwap)^{-1} = \Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1}$ and the result follows by a straightforward computation. \end{proof} \begin{definition}[Partial derivative] If $f \in \category(X_0 \with X_1, Y)$ one can define $\D_0 f \defEq \D f \comp \strengthL \in \category(\D X_0 \with X_1, \D Y)$ and $\D_1 f \defEq \D f \comp \strengthR \in \category(X_0 \with \D X_1, \D Y)$, the \emph{partial derivatives} of $f$. \end{definition} \begin{proposition} \label{prop:partial-derivative-Dproj0} For any $f \in \category(X_0\with X_1,Y)$, $\Dproj_0 \comp \D_0 f = f \comp (\Dproj_0 \with \id)$ and $\Dproj_0 \comp \D_1 f = f \comp (\id \with \Dproj_0)$. \end{proposition} \begin{proof} $\Dproj_0 \comp \D_0 f = \Dproj_0 \comp \D f \comp \strengthL = f \comp \Dproj_0 \comp \strengthL = f \comp (\Dproj_0 \with \id)$ by \Cref{prop:strength}. The proof for $\strengthR$ is similar. \end{proof} \begin{proposition} \label{prop:commutative-monad-more} The following diagram commutes. 
\begin{center} \begin{tikzcd} \D (X_0 \with \D X_1) \arrow[d, "\D \strengthR"'] & \D X_0 \with \D X_1 \arrow[l, "\strengthL"'] \arrow[r, "\strengthR"] & \D(\D X_0 \with X_1) \arrow[d, "\D \strengthL"] \\ \D^2(X_0 \with X_1) \arrow[rr,swap, "\Dswap"'] & & \D^2(X_0 \with X_1) \end{tikzcd} \end{center} \end{proposition} \begin{proof} We use \Cref{prop:strength} to compute $\D \strengthR \comp \strengthL$ and $\D \strengthL \comp \strengthR$. Since $\strengthL$ is $\D$-linear, $\D \strengthL = \Dpair{\strengthL \comp \Dproj_0}{\strengthL \comp \Dproj_1}$ by \Cref{rem:linear}. Thus \begin{align*} \D \strengthL \comp \strengthR &= \D \strengthL \comp \Dpair{\id_{X_0} \with \Dproj_0}{0 \with \Dproj_1} \\ &= \Dpair{\strengthL \comp (\id_{X_0} \with \Dproj_0)}{\strengthL \comp (0 \with \Dproj_1)} \\ &= \Dpair{\Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with 0}} {\Dpair{0 \with \Dproj_1}{0 \with 0}} \end{align*} Similarly, $\D \strengthR \comp \strengthL = \Dpair{\Dpair{\Dproj_0 \with \Dproj_0}{0 \with \Dproj_1}}{\Dpair{\Dproj_1 \with 0}{0 \with 0}}$. The commutation results from~\Cref{prop:family-on-pairs}. \end{proof} \begin{proposition} \label{prop:commutative-monad} The following diagram commutes \begin{center} \begin{tikzcd} \D (X_0 \with \D X_1) \arrow[d, "\D \strengthR"'] & \D X_0 \with \D X_1 \arrow[l, "\strengthL"'] \arrow[r, "\strengthR"] \arrow[d, "\prodSwap^{-1}"] & \D(\D X_0 \with X_1) \arrow[d, "\D \strengthL"] \\ \D^2(X_0 \with X_1) \arrow[r, "\DmonadSum"] & \D (X_0 \with X_1) & \D^2(X_0 \with X_1) \arrow[l, "\DmonadSum"'] \end{tikzcd} \end{center} \end{proposition} \begin{proof} Thanks to the computation of $\D \strengthL \comp \strengthR$ in the proof of \Cref{prop:commutative-monad-more}, we know that $\DmonadSum \comp \D \strengthL \comp \strengthR = \Dpair{\Dproj_0 \with \Dproj_0} {\Dproj_1 \with 0 + 0 \with \Dproj_1} = \Dpair{\Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1}$ by \Cref{prop:with-sum}. So $\DmonadSum \comp \D \strengthL \comp \strengthR = (\prodSwap)^{-1}$ by \Cref{prop:prodSwap-inverse}. A similar computation yields the result for $\DmonadSum \comp \D \strengthR \comp \strengthL$. \end{proof} \begin{remark} We can check that the natural morphisms $\strengthL, \strengthR$ are \emph{strenghts}~\cite{Kock72, Moggi91} for the monad $(\D, \Dinj_0, \DmonadSum)$. Then the diagram of \Cref{prop:commutative-monad} means that this strong monad is a \emph{commutative monad}. The diagrams can be checked by hand, but are also a consequence of very generic properties about strong monads on cartesian categories. As mentioned in~\cite{Aguiar18} in paragraph 2.3, any monad $(\monad, \monadUnit, \monadSum)$ on a cartesian category can be endowed with the structure of \emph{a colax symmetric monoidal monad} \footnote{Also called oplax symmetric monoidal monad, or symmetric comonoidal monad, or Hopf monad, see~\cite{Moerdijk02}} taking \begin{itemize} \item $\smfProdOne$ is the unique element of $\category(\monad \top, \top)$ \item $\smfProdTwo_{X_1, X_2} \defEq \prodPair{\monad \prodProj_1}{\monad \prodProj_2} \in \category(\monad (X_1 \with X_2), \monad X_1 \with \monad X_2)$ \end{itemize} If $\smfProdTwo$ and $\smfProdOne$ are isos, $\monad$ becomes a \emph{(strong) symmetric monoidal monad}. This is what happens here for $\monad = \D$, because $\smfProdTwo=\prodSwap$ and we can show that $\smfProdOne$ is an isomorphism with inverse $\Dinj_0$ using the join monicity of the $\Dproj_i$. 
But symmetric monoidal monads are the same as commutative monads as shown in \cite{Kock70, Kock72}, and it turns out that the strengths induced from the symmetric monoidal structure are exactly $\strengthL$ and $\strengthR$. \end{remark} The axioms \ref{ax:D-schwarz} and \ref{ax:D-add} carry over to the setting of partial derivatives very naturally thanks to \Cref{prop:commutative-monad-more,prop:commutative-monad} respectively, giving the full-fledged Schwarz and Leibniz rules. The fact that the Leibniz rule is a consequence of the additivity of the derivative is not surprising, as it is also the case in the usual differential calculus: $f'(x, y) \cdot (u, v) = f'(x, y) \cdot (u, 0) + f'(x, y) \cdot (0, v) = \frac{\partial f}{\partial x}(x, y) \cdot u + \frac{\partial f}{\partial y}(x, y) \cdot v$. \begin{proposition}[Leibniz rule] \label{prop:leibniz} $\D f \comp \prodSwap^{-1} = \DmonadSum \comp \D_0 \D_1 f = \DmonadSum \comp \D_1 \D_0 f$ \end{proposition} \begin{proof} Let us prove that $\D f \comp \prodSwap^{-1} = \DmonadSum \comp \D_0 \D_1 f$; the proof of $\D f \comp \prodSwap^{-1} = \DmonadSum \comp \D_1 \D_0 f$ is similar. \begin{align*} \DmonadSum \comp \D_0 \D_1 f &= \DmonadSum \comp \D (\D f \comp \strengthR) \comp \strengthL \quad \text{by definition} \\ &= \DmonadSum \comp \D^2 f \comp \D \strengthR \comp \strengthL \quad \text{by \ref{ax:D-chain}} \\ &= \D f \comp \DmonadSum \comp \D \strengthR \comp \strengthL \quad \text{by \ref{ax:D-add}} \\ &= \D f \comp \prodSwap^{-1} \quad \text{by \Cref{prop:commutative-monad}} \tag*{\qedhere} \end{align*} \end{proof} \begin{proposition}[Schwarz rule] $\D_0 \D_1 f = \Dswap \comp \D_1 \D_0 f$ \end{proposition} \begin{proof} Very similar to that of \Cref{prop:leibniz}, except that it uses the naturality of $\Dswap$ of \ref{ax:D-schwarz} instead of the naturality of $\DmonadSum$. \end{proof} \subsection{Generalization to arbitrary finite products} \begin{notation} Recall that the existence of arbitrary finite products is equivalent to the existence of a binary product and a terminal object. In order to stay consistent with the current notations, we write the finite products starting from $0$: $X_0 \with \cdots \with X_n$. We allow empty products, with the convention that taking $n = -1$ yields a product $X_0 \with \cdots \with X_{-1} := \top$. \end{notation} The constructions above can be extended to arbitrary finite products. One can indeed define a (symmetric monoidal) natural transformation $\prodSwap[n] \in \category(\D (X_0 \with \cdots \with X_n), \D X_0 \with \cdots \with \D X_{n})$ inductively by \((\prodSwap[-1])_X \defEq \prodFinal_{\D \top} \in \category(\D \top, \top)\), \((\prodSwap[0])_X \defEq \id_{\D X} \in \category(\D X, \D X)\) and \(\prodSwap[n+1] \defEq (\prodSwap[n] \with \id_{\D X_{n+1}}) \comp \prodSwap\). By associativity of the cartesian product, this definition does not depend on the actual parenthesizing of $X_0 \with \cdots \with X_n$. \begin{notation} Let $X_0, Y_0, \ldots, X_n, Y_n\in\Obj\category$, $i \in \interval{0}{n}$ and $f_k \in \category(X_k, Y_k)$ for each $k\neq i$. Let $g \in \category(X_i, Y_i)$. Define $\singleApp{i}{f}{g} \defEq f_0 \with \cdots \with f_{i-1} \with g \with f_{i+1} \with \cdots \with f_n$, in which we use $f_k$ at every position $k \neq i$ and $g$ at position $i$.
\end{notation} Similarly to the binary case, one can then define a strength $\strength{i} \in \category(X_0 \with \cdots \with \D X_i \with \cdots \with X_n, \D (X_0 \with \cdots \with X_n))$ as \[ \strength{i} \defEq (\prodSwap[n])^{-1} \comp \singleApp{i}{(\Dinj_0)}{\id_{\D X_i}}\,. \] \begin{proposition} \label{prop:strength-n} $\prodSwap[n]$ is an isomorphism and $(\prodSwap[n])^{-1} = \Dpair{\Dproj_0 \with \cdots \with \Dproj_0}{\Dproj_1 \with \cdots \with \Dproj_1}$. Hence, $\strength{i} = \Dpair{\singleApp{i}{\id}{\Dproj_0}}{\singleApp{i}{0}{\Dproj_1}}$. \end{proposition} \begin{proof} The equation on $\prodSwap[n]$ is obtained by unfolding the inductive definition and using~\Cref{prop:prodSwap-inverse}. The equations on the $\strength{i}$'s follow from this, as in~\Cref{prop:strength}. \end{proof} \begin{definition} For any $f \in \category(X_0 \with \cdots \with X_n, Y)$ one can define the \emph{i-th partial derivative of $f$} as $\D_i f \defEq \D f \comp \strength{i} \in \category(X_0 \with \cdots \with \D X_i \with \cdots \with X_n, \D Y)$. \end{definition} \begin{proposition} \label{prop:partial-derivative-Dproj0-n} $\Dproj_0 \comp \D_i f = f \comp \singleApp{i}{\id}{\Dproj_0}$. \end{proposition} \begin{proof} Same as \Cref{prop:partial-derivative-Dproj0}. \end{proof} \begin{definition} \label{def:natural-trans-iterate} For any $X\in\Obj\category$ and $k \geq 0$, we can define $\DmonadSum_X^k \in \category(\D^{k+1} X, \D X)$ as the composition of \(k\) copies of \(\DmonadSum\): $\DmonadSum_X^0 = \id_{\D X}$ and $\DmonadSum_X^{k+1} = \DmonadSum_X^k \comp \DmonadSum_{\D^k X}$. We define similarly $\Dproj_i^k \in \category(\D^k X, X)$. \end{definition} Note that $\DmonadSum^{k} = \Dpair{\Dproj_0^{k+1}}{\sum_{j = 0}^{k} \Dproj_0^{j} \comp \Dproj_1 \comp \Dproj_0^{k-j}}$. In other words, the right component of $\DmonadSum^k$ sums over all of the possible combinations of $k$ left projections and one right projection. One can prove a generalization of \Cref{prop:commutative-monad} for $n \geq 1$, \[ (\prodSwap[n])^{-1} = \DmonadSum^{n} \comp \D^n \strength{\alpha(n)} \comp \cdots \comp \D \strength{\alpha(1)} \comp \strength{\alpha(0)} \] for any permutation $\alpha$ of $\llbracket 0, n \rrbracket$. As in \Cref{prop:leibniz}, this generalizes the Leibniz Rule to the $n$-ary case. \begin{proposition}[Leibniz, generalized] \label{prop:leibniz-n} For any $n \geq 1$, any $f \in \category(X_0 \with \cdots \with X_n, Y)$ and any permutation $\alpha$ of $\llbracket 0, n \rrbracket$, \[ \D f \comp (\prodSwap[n])^{-1} = \DmonadSum^{n} \comp \D_{\alpha(n)} \ldots \D_{\alpha(0)} f\,. \] \end{proposition} \subsection{Multilinear morphisms} We generalize to multivariate functions the notions of additivity and $\D$-linearity. \begin{definition} A morphism $f \in \category(Y_0 \with \cdots \with Y_n, Z)$ is additive in its $i^{th}$ argument (for $i \in \interval{0}{n}$) if $f \comp \singleApp{i}{\id}{0} = 0$ and if, for all $h_0, h_1 \in \category(X, Y_i)$ such that $h_0 \summable h_1$, we have $f \comp \singleApp{i}{\id}{h_0} \summable f \comp \singleApp{i}{\id}{h_1}$ and \[ f \comp \singleApp{i}{\id}{h_0} + f \comp \singleApp{i}{\id}{h_1} = f \comp \singleApp{i}{\id}{h_0 + h_1}\,. \] \end{definition} \begin{proposition} A morphism $f \in \category(Y_0 \with \cdots \with Y_n, Z)$ such that $f \comp \singleApp{i}{\id}{0} = 0$ is additive in its $i^{th}$ argument if and only if $f \comp \singleApp{i}{\id}{\Dproj_0} \summable f \comp \singleApp{i}{\id}{\Dproj_1}$ with sum $f \comp \singleApp{i}{\id}{\Dsum}$.
\end{proposition} \begin{proof} The proof is the same as \Cref{prop:additive}, using the fact that for any $k \in \{0,1\}$, $f \comp \singleApp{i}{\id}{h_k} = f \comp \singleApp{i}{\id}{\Dproj_k} \comp \singleApp{i}{\id}{\Dpair{h_0}{h_1}}$. \end{proof} \begin{definition} A morphism $f \in \category(X_0 \with \cdots \with X_n, Y)$ is \emph{linear in its $i^{th}$ argument} if it is additive in this argument and if $\Dproj_1 \comp \D_i f = f \comp \singleApp{i}{\id}{\Dproj_1}$. \end{definition} As in \Cref{prop:linear-equation}, \ref{ax:D-add} ensures that the equation $\Dproj_1 \comp \D_i f = f \comp \singleApp{i}{\id}{\Dproj_1}$ is a sufficient condition for linearity in the $i^{th}$ argument. \begin{proposition} \label{prop:additive-i} Assume that $\Dproj_1 \comp \D_i f = f \comp \singleApp{i}{\id}{\Dproj_1}$. Then $f$ is additive in its $i^{th}$ argument, hence linear in that argument. \end{proposition} \begin{proof} The equation allows rewriting $f \comp \singleApp{i}{\id}{h}$ as follows. \begin{align*} f \comp \singleApp{i}{\id}{h} &= f \comp \singleApp{i}{\id}{\Dproj_1} \comp \singleApp{i}{\id}{\Dpair{0}{h}} \\ &= \Dproj_1 \comp \D_i f \comp \singleApp{i}{\id}{\Dpair{0}{h}} \quad \text{by assumption} \\ &= \Dproj_1 \comp \D f \comp \strength{i} \comp \singleApp{i}{\id}{\Dpair{0}{h}} \\ &= \Dproj_1 \comp \D f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h}} \quad \text{by \Cref{prop:strength-n}} \\ &= \dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h}} \quad \text{by definition of $\dcoh$} \end{align*} In particular, $f \comp \singleApp{i}{\id}{0} = \dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{0}}$. But $\singleApp{i}{0}{0} = 0$ by \Cref{prop:with-sum}. So by \ref{ax:D-add} and \Cref{prop:derivative-additive-zero}, $f \comp \singleApp{i}{\id}{0} = 0$. Similarly, if $h_0 \summable h_1$, \begin{align*} &f \comp \singleApp{i}{\id}{h_0 + h_1} \\ &=\dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h_0 + h_1}} \\ &= \dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h_0} + \singleApp{i}{0}{h_1}} \quad \text{by \Cref{prop:with-sum}}\\ &= \dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h_0}} + \dcoh f \comp \Dpair{\singleApp{i}{\id}{0}}{\singleApp{i}{0}{h_1}} \quad \text{by \ref{ax:D-add} and \Cref{prop:derivative-additive-sum}} \\ &= f \comp \singleApp{i}{\id}{h_0} + f \comp \singleApp{i}{\id}{h_1} \, . \end{align*} \end{proof} \begin{definition} \label{def:multilinear} A morphism $f \in \category(X_0 \with \cdots \with X_n, Y)$ is \emph{multilinear} (and more precisely, \emph{$(n+1)$-linear}) if it is linear in all of its arguments. Note that the \(1\)-linear morphisms are exactly the \(\D\)-linear ones. \end{definition} As a sanity check of the notion, we can use the result below together with the Leibniz rule to show a result similar to the fact that in differential calculus, if $\Phi$ is a bilinear map, then $\Phi'(x, y) \cdot (u, v) = \Phi(x, v) + \Phi(u, y)$.
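To make the analogy concrete, here is the corresponding computation in ordinary differential calculus for the simplest bilinear map, multiplication on $\Real$ (this worked example is standard calculus and is included only as an illustration, not as part of the categorical development): taking $\Phi(x, y) = x y$, we get
\[
\Phi'(x, y) \cdot (u, v) = \lim_{t \to 0} \frac{(x + t u)(y + t v) - x y}{t} = x v + u y = \Phi(x, v) + \Phi(u, y)\,.
\]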
\begin{lemma} \label{lemma:Dproj0-partial-commute} For any $f \in \category(X_0 \with \cdots \with X_n, Y)$ and $i, j \in \interval{0}{n}$ such that $i \neq j$, \[ \D \Dproj_0 \comp \D_i \D_j f = \D_i f \comp \singleApp{j}{\id}{\Dproj_0} \] \end{lemma} \begin{proof} This is a direct computation: \begin{align*} \D \Dproj_0 \comp \D_i \D_j f &= \D \Dproj_0 \comp \D (\D_j f) \comp \strength{i} \\ &= \D (\Dproj_0 \comp \D_j f) \comp \strength{i} \quad \text{by \ref{ax:D-chain}} \\ &= \D (f \comp \singleApp{j}{\id}{\Dproj_0}) \comp \strength{i} \quad \text{by \Cref{prop:partial-derivative-Dproj0-n}} \\ &= \D f \comp \D \singleApp{j}{\id}{\Dproj_0} \comp \strength{i} \quad \text{by \ref{ax:D-chain}} \\ &= \D f \comp \strength{i} \comp \singleApp{j}{\id}{\Dproj_0} \quad \text{since $\strength{i}$ natural and $i \neq j$} \\ &= \D_i f \comp \singleApp{j}{\id}{\Dproj_0} \end{align*} \end{proof} \begin{theorem} For any $(n+1)$-linear morphism $f \in \category(X_0 \with \cdots \with X_n, Y)$ \begin{align*} \Dproj_0 \comp \D f \comp (\prodSwap[n])^{-1} &= f \comp (\Dproj_0 \with \cdots \with \Dproj_0) \\ \Dproj_1 \comp \D f \comp (\prodSwap[n])^{-1} &=f \comp (\Dproj_1 \with \Dproj_0 \with \cdots \with \Dproj_0) + \cdots + \: f \comp (\Dproj_0 \with \cdots \with \Dproj_0 \with \Dproj_1) \end{align*} \end{theorem} \begin{proof} We will write the proof for $n=1$. The general case relies on the same arguments. The first equation is just a direct consequence of the naturality of $\Dproj_0$ and \Cref{prop:strength}. For the second equation, Leibniz (\Cref{prop:leibniz}) ensures that $\Dproj_1 \comp \D f \comp \prodSwap^{-1} = \Dproj_1 \comp \DmonadSum \comp \D_0 \D_1 f = \Dproj_1 \comp \Dproj_0 \comp \D_0 \D_1 f + \Dproj_0 \comp \Dproj_1 \comp \D_0 \D_1 f$. We can compute those two summands separately. \begin{align*} \Dproj_1 \comp{} \Dproj_0 \comp \D_0 \D_1 f &= \Dproj_1 \comp \D_1 f \comp (\Dproj_0 \with \id) \quad \text{by \Cref{prop:partial-derivative-Dproj0}}\\ &= f \comp (\id \with \Dproj_1) \comp (\Dproj_0 \with \id) \quad \text{by bilinearity of $f$} \\ &= f \comp (\Dproj_0 \with \Dproj_1) \end{align*} \begin{align*} \Dproj_0 \comp{} \Dproj_1 \comp \D_0 \D_1 f &= \Dproj_1 \comp \D \Dproj_0 \comp \D_0 \D_1 f \quad \text{by linearity of $\Dproj_0$} \\ &= \Dproj_1 \comp \D_0 f \comp (\id \with \Dproj_0) \quad \text{by \Cref{lemma:Dproj0-partial-commute}} \\ &= f \comp (\Dproj_1 \with \id) \comp (\id \with \Dproj_0) \quad \text{by bilinearity of $f$} \\ &= f \comp (\Dproj_1 \with \Dproj_0) \end{align*} This concludes the proof. \end{proof} We can expand on the ideas of the proof of \Cref{lemma:Dproj0-partial-commute} to show the following result. This result is crucial, as it explains how to project on a series of partial derivatives. \begin{proposition} \label{prop:DDproj-commute-partial} Let $n \geq 0$, $f \in \category(X_0 \with \cdots \with X_n, Y)$, $d \geq 0$ and $i, i_1, \ldots, i_d \in \interval{0}{n}$. Then, \[ \D^d \Dproj_0 \comp \D_{i_d} \ldots \D_{i_1} \D_i f = \D_{i_d} \ldots \D_{i_1} f \comp \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \] where $h_d(i) = \# \{k \in \interval{1}{d} \mid i_k = i\}$. Furthermore, if $f$ is $(n+1)$-linear, then \[ \D^d \Dproj_1 \comp \D_{i_d} \ldots \D_{i_1} \D_i f = \D_{i_d} \ldots \D_{i_1} f \comp \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_1} \] \end{proposition} \begin{proof} By induction on $d$. The case $d = 0$ is \Cref{prop:partial-derivative-Dproj0-n} for $\Dproj_0$, and the definition of $(n+1)$-linearity for $\Dproj_1$. We deal with the inductive step for $\Dproj_0$.
The inductive step for $\Dproj_1$ is dealt with similarly. \begin{align*} \D^{d+1} \Dproj_0 \comp \D_{i_{d+1}} \ldots \D_{i_1} \D_i f &= \D(\D^d \Dproj_0) \comp \D(\D_{i_d} \ldots \D_{i_1} \D_i f) \comp \strength{i_{d+1}} \quad \text{by definition} \\ &= \D (\D^d \Dproj_0 \comp \D_{i_d} \ldots \D_{i_1} \D_i f) \comp \strength{i_{d+1}} \quad \text{by \ref{ax:D-chain}} \\ &= \D (\D_{i_d} \ldots \D_{i_1} \D_i f \comp \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0}) \comp \strength{i_{d+1}} \quad \text{by inductive hypothesis} \\ &= \D \D_{i_d} \ldots \D_{i_1} \D_i f \comp \D \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \comp \strength{i_{d+1}} \quad \text{by \ref{ax:D-chain}} \end{align*} The next step is to use the naturality of $\strength{i_{d+1}}$: \[ \D (f_0 \with \cdots \with f_n) \comp \strength{i_{d+1}} = \strength{i_{d+1}} \comp \singleApp{i_{d+1}}{f}{\D f_{i_{d+1}}} \] If $i_{d+1} = i$, then \[ \D \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \comp \strength{i_{d+1}} = \strength{i_{d+1}} \comp \singleApp{i}{\id}{\D^{h_d(i) + 1} \Dproj_0} \] If $i_{d+1} \neq i$ then \[ \D \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \comp \strength{i_{d+1}} = \strength{i_{d+1}} \comp \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \] In both cases, \[ \D \singleApp{i}{\id}{\D^{h_d(i)} \Dproj_0} \comp \strength{i_{d+1}} = \strength{i_{d+1}} \comp \singleApp{i}{\id}{\D^{h_{d+1}(i)} \Dproj_0}\] Consequently: \begin{align*} \D^{d+1} \Dproj_0 \comp \D_{i_{d+1}} \ldots \D_{i_1} \D_i f &= \D \D_{i_d} \ldots \D_{i_1} \D_i f \comp \strength{i_{d+1}} \comp \singleApp{i}{\id}{\D^{h_{d+1}(i)} \Dproj_0} \\ &= \D_{i_{d+1}} \D_{i_d} \ldots \D_{i_1} \D_i f \comp \singleApp{i}{\id}{\D^{h_{d+1}(i)} \Dproj_0} \end{align*} which concludes the proof. \end{proof} This property instantiated at $d = 1$ gives back something similar to \Cref{lemma:Dproj0-partial-commute}. \begin{cor} \label{cor:Dproj-commute-partial} If $f \in \category(X_0 \with \cdots \with X_n, Y)$ is $(n+1)$-linear, then for any $i, j \in \interval{0}{n}$ such that $i \neq j$ and for any $k \in \{0,1\}$, \[ \D \Dproj_k \comp \D_i \D_j f = \D_i f \comp \singleApp{j}{\id}{\Dproj_k} \] \[ \D \Dproj_k \comp \D_i \D_i f = \D_i f \comp \singleApp{i}{\id}{\D \Dproj_k} \] \end{cor} We can use this corollary to show that the partial derivative of an $(n+1)$-linear morphism is also $(n+1)$-linear. \begin{theorem} \label{prop:partial-preserve-multilinearity} If $f \in \category(X_0 \with \cdots \with X_n, Y)$ is $(n+1)$-linear, then for any $i \in \interval{0}{n}$, $\D_i f$ is $(n+1)$-linear. \end{theorem} \begin{proof} Let $j \in \interval{0}{n}$. The goal is to prove that $\Dproj_1 \comp \D_j \D_i f = \D_i f \comp \singleApp{j}{\id}{\Dproj_1}$. By joint monicity of the $\Dproj_k$, it suffices to prove that $\Dproj_k \comp \Dproj_1 \comp \D_j \D_i f = \Dproj_k \comp \D_i f \comp \singleApp{j}{\id}{\Dproj_1}$ for any $k \in \{0,1\}$.
If $i \neq j$, \begin{align*} \Dproj_k \comp \Dproj_1 \comp \D_j \D_i f &= \Dproj_1 \comp \D \Dproj_k \comp \D_j \D_i f \text{\quad by $\D$-linearity of $\Dproj_1$} \\ &= \Dproj_1 \comp \D_j f \comp \singleApp{i}{\id}{\Dproj_k} \text{\quad by \Cref{cor:Dproj-commute-partial}} \\ &= f \comp \singleApp{j}{\id}{\Dproj_1} \comp \singleApp{i}{\id}{\Dproj_k} \text{\quad since $f$ is $(n+1)$-linear} \\ &= f \comp \singleApp{i}{\id}{\Dproj_k} \comp \singleApp{j}{\id}{\Dproj_1} \text{\quad since } i \neq j \\ &= \Dproj_k \comp \D_i f \comp \singleApp{j}{\id}{\Dproj_1} \text{\quad since $f$ is $(n+1)$-linear} \end{align*} The case $i = j$ is very similar \begin{align*} \Dproj_k \comp \Dproj_1 \comp \D_i \D_i f &= \Dproj_1 \comp \D \Dproj_k \comp \D_i \D_i f \text{\quad by $\D$-linearity of $\Dproj_1$} \\ &= \Dproj_1 \comp \D_i f \comp \singleApp{i}{\id}{\D \Dproj_k} \text{\quad by \Cref{cor:Dproj-commute-partial}} \\ &= f \comp \singleApp{i}{\id}{\Dproj_1} \comp \singleApp{i}{\id}{\D \Dproj_k} \text{\quad since $f$ is $(n+1)$-linear} \\ &= f \comp \singleApp{i}{\id}{\Dproj_k} \comp \singleApp{i}{\id}{\Dproj_1} \text{\quad since $\Dproj_k$ is $\D$-linear} \\ &= \Dproj_k \comp \D_i f \comp \singleApp{i}{\id}{\Dproj_1} \text{\quad since $f$ is $(n+1)$-linear.} \end{align*} \end{proof} Composition with a linear morphism preserves multilinearity. Thus, the Leibniz rule ensures that if $f$ is multilinear then $\D f$ is also multilinear. \begin{proposition} \label{prop:composition-preserve-multilinearity} If $f \in \category(X_0 \with \cdots \with X_n, Y)$ is $(n+1)$-linear and $h \in \category(Y, Z)$ is linear, then $h \comp f$ is $(n+1)$-linear. \end{proposition} \begin{proof} This follows from a straightforward computation $\Dproj_1 \comp \D_i (h \comp f) = \Dproj_1 \comp \D (h \comp f) \comp \strength{i} = \Dproj_1 \comp \D h \comp \D f \comp \strength{i} = h \comp \Dproj_1 \comp \D_i f = h \comp f \comp \singleApp{i}{\id}{\Dproj_1}$. \end{proof} \begin{theorem} If $f \in \category(X_0 \with \cdots \with X_n, Y)$ is $(n+1)$-linear, then $\D f \comp (\prodSwap[n])^{-1}\in\category(\D X_0 \with \cdots \with \D X_n, \D Y)$ is also $(n+1)$-linear. \end{theorem} \begin{proof} By Leibniz (\Cref{prop:leibniz-n}), $\D f \comp (\prodSwap[n])^{-1} = \DmonadSum^{n} \comp \D_{\alpha(n)} \ldots \D_{\alpha(0)} f$. But the partial derivatives preserves multilinearity by \Cref{prop:partial-preserve-multilinearity} and composition by $\DmonadSum^{n}$ on the left preserves multilinearity by \Cref{prop:composition-preserve-multilinearity}. \end{proof} \section{Kleisli category of the exponential of a model of $\LL{}$} \subsection{Coherent differentiation in a linear setting} \label{sec:CD-induces-CCDC} Let $\categoryLL$ be a symmetric monoidal closed category that is a model of $\LL{}$, and more precisely a Seely category in the sense of~\cite{Mellies09}. We write the composition of $f \in \categoryLL(X, Y)$ with $g \in \categoryLL(Y, Z)$ as $g \compl f$ to stress the intuition that the morphisms of \(\cL\) are linear. The axioms of a Seely category include the existence of a cartesian product $\with$ and a comonad $(\oc, \der, \dig)$ on $\categoryLL$, where $\der_X \in \categoryLL(\Excl X, X)$ and $\dig_X \in \categoryLL(\Excl X, \Excll X)$ are natural transformations. The \emph{Kleisli category} \(\Kl\cL\) of this comonad is the category whose objects are the objects of $\categoryLL$ and whose hom-sets are $\kleisliExp(X, Y) = \categoryLL(\Excl X, Y)$. 
Composition is defined in this category as $g \comp f = g \compl \Excl f \compl \dig$ and the identity at $X$ is $\der_X$, the counit of the comonad. It is well known that $\kleisliExp$ is a cartesian (closed) category, with the same cartesian product $\with$ as $\categoryLL$. The goal of this section is to show that coherent differentiation on $\categoryLL$ as introduced in~\cite{Ehrhard21} in the setting of $\LL{}$ gives $\kleisliExp$ a CCDC structure. \begin{theorem} \label{prop:cd-induces-cdc} Any differential structure on a summable category $\categoryLL$ (see~\cite{Ehrhard21}) induces a CCDC structure on $\kleisliExp$. \end{theorem} Let us first detail what this assumption means. The category $\categoryLL$ is said to be \emph{summable}~\cite{Ehrhard21} if it has a summability structure $(\S, \Sproj_0, \Sproj_1, \Ssum)$ in the sense of \Ehrhard. By \Cref{prop:summability-struct-equivalence}, this means that $(\S, \Sproj_0, \Sproj_1, \Ssum)$ is a left summability structure in the sense of \Cref{def:left-summability-struct} where every morphism is additive and the functorial action of $\S$ is given by $\S f \defEq \Spair{f \comp \Sproj_0}{f \comp \Sproj_1}$. Then, we can define $\Sinj_i$, $\SmonadSum$, $\Slift$ and $\Sswap$ as usual\footnote{Note that in~\cite{Ehrhard21}, $\SmonadSum$ is called $\tau$}. The difference is that the additivity of every morphism ensures that those families are natural transformations for the functor $\S$. In particular, $(\S, \Sinj_0, \SmonadSum)$ is \emph{de facto} a monad. The category $\categoryLL$ is said to be \emph{summable as a cartesian category} if $\prodPair{\S \prodProj_0}{\S \prodProj_1} = \prodSwap$ is an isomorphism\footnote{We can show that the condition required in~\cite{Ehrhard21} that $0 \in \categoryLL(\S \top, \top)$ is an isomorphism always holds, using the joint monicity of the $\Dproj_i$}. Because every morphism of $\categoryLL$ is additive, this corresponds by \Cref{cor:summability-cartesian-compat} to the fact that the cartesian product is compatible with the left summability structure as in~\Cref{def:prod-compatible}. It is well known that there is a faithful functor $\kleisliCastExp : \categoryLL \arrow \kleisliExp$ which maps $X$ to $X$ and $f \in \categoryLL(X, Y)$ to $f \Compl \der_X \in \categoryLL(\Excl X, Y)$. We can show that this functor induces a left summability structure $(\D, \lin(\Sproj_0), \lin(\Sproj_1), \lin(\Ssum))$ on $\kleisliExp$ (where $\D X \defEq \S X$) compatible with the cartesian product $\with$ of $\kleisliExp$. The reason is that $\lin$ preserves monicity and additivity, thanks to the well known fact that $\lin(h) \comp f = h \Compl f$. Finally, the definition of $\lin$ ensures that $\lin(\Spair{f_0}{f_1}) = \Dpair{\lin(f_0)}{\lin(f_1)}$. In particular, the families of morphisms generated by the Left Summability Structure $(\D, \lin(\Sproj_0), \lin(\Sproj_1), \lin(\Ssum))$ in \Cref{def:injections,def:DmonadSum,def:Dlift,def:Dswap} are $\lin(\Sinj_i)$, $\lin(\SmonadSum)$, $\lin(\Slift)$ and $\lin(\Sswap)$ respectively. Then a \emph{differential structure} on a summable category $\categoryLL$ is a natural transformation $\devM_X \in \categoryLL(\Excl \S X, \S \Excl X)$ satisfying some equations called ($\devM$-chain), ($\devM$-local), ($\devM$-lin), ($\devM$-$\with$) and ($\devM$-Schwarz) (see~\cite{Ehrhard21}). The first axiom, ($\devM$-chain), is a compatibility condition of $\devM$ with regard to $\dig$ and $\der$, making $\devM$ a \emph{distributive law} between the functor $\S$ and the comonad $\Excl\_$.
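Before giving the formal definition, the following self-contained Haskell-style sketch may help fix intuitions; it is only an illustration under toy assumptions and is not part of the development (the comonad, the functor and the law \texttt{dist} are ad hoc choices, with \texttt{extract} and \texttt{duplicate} playing the roles of $\der$ and $\dig$). It shows co-Kleisli composition and how a distributive law lifts a functor to co-Kleisli morphisms, in the spirit of \Cref{prop:dl-extension} below.
\begin{verbatim}
{-# LANGUAGE DeriveFunctor #-}

-- A toy comonad: the "environment" comonad  W a = (Env, a).
type Env = Int
data W a = W Env a deriving (Functor, Show)

extract :: W a -> a            -- counterpart of the counit der
extract (W _ x) = x

duplicate :: W a -> W (W a)    -- counterpart of dig
duplicate (W e x) = W e (W e x)

-- co-Kleisli composition, mirroring  g o f = g . W f . duplicate
cokleisli :: (W b -> c) -> (W a -> b) -> (W a -> c)
cokleisli g f = g . fmap f . duplicate

-- A functor F to be lifted (a pair functor, chosen arbitrarily).
data F a = F a a deriving (Functor, Show)

-- A distributive law  dist : W (F a) -> F (W a).
dist :: W (F a) -> F (W a)
dist (W e (F x y)) = F (W e x) (W e y)

-- The lifting of F to co-Kleisli morphisms induced by dist:
-- a co-Kleisli arrow  f : W a -> b  is sent to  fmap f . dist : W (F a) -> F b.
liftF :: (W a -> b) -> W (F a) -> F b
liftF f = fmap f . dist

main :: IO ()
main = print (liftF (\(W e x) -> x + e) (W 1 (F 10 20)))  -- prints F 11 21
\end{verbatim}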
\begin{definition} A \emph{distributive law} between a functor $F : \categoryLL \arrow \categoryLL$ and the comonad $\Excl\_$ on $\categoryLL$ is a natural transformation $\lambda^F \in \categoryLL(\Excl FX, F\Excl X)$ such that the two following diagrams commute. \begin{center} \begin{tikzcd} \Excl FX \arrow[r, "\lambda^F_X"] \arrow[rd, "\der_{FX}"'] & F\Excl X \arrow[d, "F \der_X"] \\ & FX \end{tikzcd} \begin{tikzcd} \Excl FX \arrow[d, "\dig_{FX}"'] \arrow[rr, "\lambda_X^F"] & & F\Excl X \arrow[d, "F\dig_X"] \\ \Excll FX \arrow[r, "\Excl\lambda^F_X"'] & \Excl F\Excl X \arrow[r, "\lambda^F_{\Excl X}"'] & F\Excll X \end{tikzcd} \end{center} \end{definition} A definition of distributive laws can be found in~\cite{Power02}, together with a proof of \Cref{prop:dl-extension,prop:dl-morphism} stated below (corollary 5.11 of~\cite{Power02}) \footnote{These observations are made in the more general setting of $2$-categories}. \begin{proposition} \label{prop:dl-extension} Let $F : \category \arrow \category$ be an endofunctor. There is a bijection between distributive laws $\lambda^F \in \categoryLL(\Excl F X, F \Excl X)$ and \emph{liftings}\footnote{The word ``extension'' is also used. We use the term lifting in order to stick to the terminology of~\cite{Power02}} $\lift{F}$ of $F$ on $\kleisliExp$. A lifting $\lift{F}$ of $F$ is a functor $\lift{F} : \kleisliExp \arrow \kleisliExp$ such that $\lift{F} X = F X$ and $\lift{F} (\kleisliCastExp(h)) = \kleisliCastExp(F h)$. \end{proposition} \begin{proof} Given a distributive law $\lambda^F \in \categoryLL(\Excl FX, F\Excl X)$, one can define a lifting mapping $X$ to $F X$ and $f \in \kleisliExp(X, Y)$ to $F (f) \Compl \lambda^F_X \in \kleisliExp(FX, FY)$. We can check that it is a functor using the diagrams of distributive laws, and a lifting of $F$ using the naturality of $\lambda^F$. Conversely, any lifting $\lift{F}$ of $F$ induces a family $\lambda^F_{X} = \lift{F} \id_{\Excl X} \in \categoryLL(\Excl FX, F\Excl X)$. The two diagrams of distributive laws come from the functoriality of $\lift{F}$, and the naturality comes from the fact that $\lift{F}$ is a lifting of $F$. \end{proof} \begin{remark} \label{rem:dl-composite} Let $F, G : \categoryLL \arrow \categoryLL$ be two functors, with respective liftings $\lift{F}$ and $\lift{G}$ associated to the distributive laws $\lambda^F \in \categoryLL(\Excl FX, F\Excl X)$ and $\lambda^G \in \categoryLL(\Excl GX, G\Excl X)$. Then $\lift{G} \lift{F}$ is a lifting of $G F$ and the distributive law associated with $\lift{G} \lift{F}$ is the following natural transformation: $\lambda^{GF}_X = G (\lambda^F_X) \Compl \lambda^G_{F X} \in \categoryLL(\Excl GFX, GF \Excl X)$. \end{remark} The result below, proved in~\cite{Power02}, is rather overlooked. While the proof is indeed quite simple, it provides a very interesting perspective on the idea of extending structure to a Kleisli (or similarly to an Eilenberg-Moore) category. \begin{proposition} \label{prop:dl-morphism} Let $F, G : \categoryLL \arrow \categoryLL$ be two endofunctors. Assume that $\lift{F}$ and $\lift{G}$ are liftings of $F$ and $G$ respectively, and let $\lambda^F$ and $\lambda^G$ be their respective associated distributive laws. Let $\alpha_X \in \categoryLL(F X, G X)$ be a natural transformation. Then $\kleisliCastExp(\alpha_X) \in \kleisliExp(\lift{F} X, \lift{G} X)$ is natural if and only if the following diagram commutes.
\begin{equation} \label{eq:dl-morphism} \begin{tikzcd} \Excl F X \arrow[r, "\lambda^F"] \arrow[d, "\Excl \alpha"'] & F \Excl X \arrow[d, "\alpha"] \\ \Excl GX \arrow[r, "\lambda^G"'] & G\Excl X \end{tikzcd} \end{equation} \end{proposition} \begin{proof} Straightforward computation. \end{proof} In the case of differentiation, the axiom ($\devM$-chain) implies that $\devM \in \categoryLL(\Excl \S X, \S \Excl X)$ is a distributive law between the comonad $\Excl\_$ and the functor $\S$. This means that $\S$ can be lifted to an endofunctor $\D$ on $\kleisliExp$. Besides, there is a trivial distributive law $\id_{\Excl X} \in \categoryLL(\Excl X,\Excl X)$ associated to the lifting of the identity functor on $\categoryLL$ to the identity functor on $\kleisliExp$. Then ($\devM$-local) is an instance of~\Cref{eq:dl-morphism} in which $F = \S$, $G = \Id$ and $\alpha = \Sproj_0$. This means that ($\devM$-local) holds if and only if $\lin(\Sproj_0) \in \kleisliExp(\D X, X)$ is a natural transformation. Thus, $(\D, \lin(\Sproj_0), \lin(\Sproj_1), \lin(\Ssum))$ is a pre-differential structure on $\kleisliExp$ (in the sense of \Cref{def:differential-struct}) and \ref{ax:D-chain} holds. Moreover, since $\D$ is a lifting of $\S$, for any $h \in \categoryLL(X, Y)$, the morphism $\lin(h) \in \kleisliExp(X, Y)$ is $\D$-linear. Indeed, $\lin(\Sproj_0) \comp \D (\lin(h)) = \lin(\Sproj_0) \comp \lin(\S h) = \lin(\Sproj_0 \Compl \S h) = \lin(h \Compl \Sproj_0) = \lin(h) \comp \lin(\Sproj_0)$. As a result, $\lin(\Sproj_i), \lin(\Ssum), \lin(\prodProj_i)$ are all linear so \ref{ax:Dproj-lin}, \ref{ax:Dsum-lin} hold and the pre-differential structure is compatible with the cartesian product. Furthermore, ($\devM$-lin) consists of two instances of \Cref{eq:dl-morphism}. The first one is an instance in which $F = \Id$, $G = \S$ and $\alpha = \Sinj_0 \in \categoryLL(X, \S X)$. The second one is an instance in which $F = \S^2$, $G = \S$ and $\alpha = \SmonadSum \in \categoryLL(\S^2 X, \S X)$. Indeed, as we saw in \Cref{rem:dl-composite}, there is a distributive law $\S (\devM_X) \Compl \devM_{\S X} \in \categoryLL(\Excl{\S^2} X, \S^2 \Excl X)$ associated to $\D^2$, the lifting of $\S^2$ to $\kleisliExp$. So ($\devM$-lin) holds if and only if $\lin(\Sinj_0) \in \kleisliExp(X, \D X)$ and $\lin(\SmonadSum) \in \kleisliExp(\D^2 X, \D X)$ are natural transformations, that is, if and only if \ref{ax:D-add} holds\footnote{As we saw, this gives to $\D$ the structure of a monad on $\kleisliExp$. In fact, ($\devM$-chain) and ($\devM$-lin) taken together make $\devM$ a distributive law between the \emph{monad} $\D$ and the comonad $\Excl\_$. There is a striking symmetry, because it also makes it possible to lift $\Excl\_$ to a comonad on $\kleisliS$, the Kleisli category of $\S$.}. Finally, ($\devM$-Schwarz) consists of an instance of \Cref{eq:dl-morphism} in which $F = \S^2$, $G = \S^2$ and $\alpha = \Sswap$. So ($\devM$-Schwarz) holds if and only if $\lin(\Sswap) \in \kleisliExp(\D^2 X, \D^2 X)$ is natural. The only axiom left is~\ref{ax:D-lin}, which corresponds to the naturality of $\lin(\Slift)$. Thanks to~\Cref{prop:dl-morphism}, it would hold if and only if the diagram below commutes. \begin{equation} \begin{tikzcd} \Excl \S X \arrow[d, "\Excl \Slift"'] \arrow[rr, "\devM_X"] & & \S\Excl X \arrow[d, "\Slift_{\Excl X}"] \\ \Excl{\S^2} X \arrow[r, "\devM_{\S X}"'] & \S \Excl
\S X \arrow[r, "\S \devM_X"'] & \S^2 \Excl X \end{tikzcd} \end{equation} This diagram is not mentioned in~\cite{Ehrhard21} but makes perfect sense in the setting of coherent differentiation in \LL{} and holds in all known \LL{} models of coherent differentiation. The study of the consequences of this diagram is left for further work. This ends the proof of \cref{prop:cd-induces-cdc}. \begin{remark} The only remaining axiom is ($\devM$-$\with$) that deals with the Seely isomorphisms $\seely{n} \in \categoryLL(\Excl X_0 \tensor \ldots \tensor \Excl X_n, \Excl (X_0 \with \ldots \with X_n))$ of the Seely category $\categoryLL$. It is possible to define in $\LL{}$ a notion of multilinearity: given any $l \in \categoryLL(X_0 \tensor \ldots \tensor X_n, Y)$, one can define $\mlin{l} \in \kleisliExp(X_0 \with \ldots \with X_n, Y)$ as $\mlin{l} = l \Compl (\der \tensor \ldots \tensor \der) \Compl (\seely{n})^{-1}$. Then a morphism in $\kleisliExp(X_0 \with \ldots \with X_n, Y)$ is \emph{$(n+1)$-linear} (in the sense of $\LL{}$) if it can be written as $\mlin{h}$ for some $h$. The axiom ($\devM$-$\with$) makes it possible to show that any $(n+1)$-linear morphism in the sense of \LL{} is also $(n+1)$-linear in the sense of \Cref{def:multilinear}. A proof of this fact is implicitly contained in Theorem 4.26 of~\cite{Ehrhard22-pcf}. This is a crucial fact, because it shows that what really matters is the $(n+1)$-linearity in terms of CCDC rather than the $(n+1)$-linearity in terms of $\LL{}$. \end{remark} Many models of $\LL{}$ have a coherent differential structure, such as coherence spaces, non-uniform coherence spaces and probabilistic coherence spaces. Thus, their Kleisli categories are all CCDCs. This provides a rich variety of examples. We present here the example of probabilistic coherence spaces. \subsection{The example of probabilistic coherence spaces} \label{sec:apcoh} A \emph{probabilistic coherence space} (PCS)~\cite{DanosEhrhard08} is a pair \(X=(\Web X,\Pcoh X)\) where \(\Web X\) is a set and \(\Pcoh X\subseteq\Realpto{\Web X}\) satisfies \(\Pcoh X=\{x\in\Realpto{\Web X}\St\forall x'\in\cP'\ \Eval{x}{x'}\defEq\sum_{a\in\Web X}x_ax'_a\leq 1\}\) for some \(\cP'\subseteq\Realpto{\Web X}\) called a \emph{predual} of \(X\). To avoid \(\infty\) coefficients it is also assumed that \(\forall a\in\Web X\ 0<\sup_{x'\in\cP'}x'_a<\infty\), and then it is easily checked that \(\forall a\in\Web X\ 0<\sup_{x\in\Pcoh X}x_a<\infty\). A multiset of elements of a set \(I\) is a function \(m:I\to\Nat\) such that the set \(\Supp m=\{i\in I\St m(i)\not=0\}\) is finite. The set \(\Mfin I\) of these multisets is the free commutative monoid generated by \(I\). We use \(\Mset{\List i1k}\) for the \(m\in\Mfin I\) such that \(m(i)=\Card{\{j\St i_j=i\}}\), for \(\List i1k\in I\). Given PCSs \(X\) and \(Y\), a function \(f:\Pcoh X\to\Pcoh Y\) is \emph{analytic} \footnote{There is also a purely functional characterization of these functions as those which are totally monotone and Scott continuous, see~\cite{Crubille18}.} if there is a \emph{matrix} \(t\in\Realpto{\Mfin{\Web X}\times\Web Y}\) such that, for all \(x\in\Pcoh X\) and \(b\in\Web Y\), one has \(f(x)_b=\sum_{m\in\Mfin{\Web X}}t_{m,b}x^m\) where \(x^m=\prod_{a\in\Web X}x_a^{m(a)}\). Thanks to the fact that all the coefficients in \(t\) are finite, it is not difficult to see that they can be recovered from the function \(f\) itself by means of iterated differentiation, see~\cite{DanosEhrhard08}. So an analytic function has \emph{exactly one} associated matrix.
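As a small illustration of this notation (an example added here for convenience, with \(a\) and \(b\) two distinct elements of \(\Web X\)): the multiset \(m=\Mset{a,a,b}\) satisfies \(m(a)=2\), \(m(b)=1\) and \(\Supp m=\{a,b\}\), so that \[ x^m=\prod_{c\in\Web X}x_c^{m(c)}=x_a^2\,x_b\,, \] and the coefficient \(t_{m,b'}\) of the matrix is simply the coefficient of this monomial in the power series defining \(f(x)_{b'}\).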
The identity function \(\Pcoh X\to\Pcoh X\) is analytic (of matrix \(t\) given by \(t_{m,a}=\Kronecker{m}{\Mset a}\)) and the composition of two analytic functions is still analytic. We use \(\ACOH\) for the category whose objects are PCSs and morphisms are analytic functions. For instance, if \(\Sone\) is the PCS \((\Eset\ast,\Intercc01)\) then \(f_1,f_2:\Intercc01\to\Intercc01\) given by \(f_1(x)=1-\sqrt{1-x^2}\) and \(f_2(x)=e^{x-1}\) are in \(\ACOH(\Sone,\Sone)\), but \(f_3(x)=2x-x^2\) is not because of the negative coefficient. The (pointwise) sum of two analytic functions \(\Pcoh X\to\Pcoh Y\) is always a well-defined function \(\Pcoh X\to\Realp^{\Web Y}\), but it is not necessarily in \(\ACOH(X,Y)\), so \(\ACOH\) is not left-additive \footnote{At least for this most natural addition.}. If \(X\) is a PCS then \(\D X =(\{0,1\}\times\Web X, \Pcohp{\D X} =\{z\in\Realpto{\{0,1\}\times\Web X} \St\Dproj_0(z)+\Dproj_1(z)\in\Pcoh X\})\), where \(\Dproj_i(z)_a=z_{i,a}\), is a PCS. Then \(\Dproj_0,\Dproj_1\in\ACOH(\D X,X)\) and we also have \(\Dsum\in\ACOH(\D X,X)\) given by \(\Dsum(z)=\Dproj_0(z)+\Dproj_1(z)\). In other words \(\D X\) is the PCS whose elements are the pairs \((x,u)\in\Pcoh X^2\) such that \(x+u\in\Pcoh X\). In that way we have equipped \(\ACOH\) with a left pre-summability structure and the associated notion of summability is the obvious one: \(f_0,f_1\in\ACOH(X,Y)\) are summable if their pointwise sum \(f_0+f_1\) is in \(\ACOH(X,Y)\) (the matrix of this sum is the sum of the matrices of \(f_0\) and \(f_1\)). It is easily checked that this left pre-summability structure is a left summability structure (see~\Cref{def:left-summability-struct}). As explained in~\Cref{sec:differential}, differentiation boils down to extending the operation \(\D\) to morphisms in such a way that the conditions of~\Cref{def:CDC} be satisfied. Given \(f\in\ACOH(X,Y)\) of matrix \(t\) and \((x,u)\in\Pcohp{\D X}\) we have \begin{align*} f(x+u)_b &=\sum_{m\in\Mfin{\Web X}}t_{m,b}(x+u)^m \\ &=\sum_{m\in\Mfin{\Web X}}t_{m,b} \sum_{p\leq m}\Binom mpx^{m-p}u^p \\ &=f(x)_b+\sum_{m\in\Mfin{\Web X}}t_{m,b}\sum_{a\in\Supp m}\Binom m{\Mset a}x^{m-\Mset a}u_a+r(x,u)_b \\ &=f(x)_b+\sum_{m\in\Mfin{\Web X}}t_{m,b}\sum_{a\in\Supp m}m(a)x^{m-\Mset a}u_a+r(x,u)_b \end{align*} where \(\Binom mp=\prod_{a\in\Web X}\Binom{m(a)}{p(a)}\in\Nat\) when \(p\leq m\) for the pointwise order. In these expressions the remainder \(r(x,u)\) is a power series in \(x\) and \(u\) all of whose monomials have total degree \(>1\) in \(u\) (such as \(x_au_bu_c\) if \(a,b,c\in\Web X\)). In particular \(\Norm{r(x,u)}\in o(\Norm u)\) where \(\Norm x=\sup\{\Eval x{x'}\St x'\in\cP'\}\in\Intercc 01\) for any predual of \(X\) (this norm does not depend on the choice of \(\cP'\)). Using~\Cref{def:differential-struct} we set \[ \dcoh f(x,u)_b=\sum_{m\in\Mfin{\Web X}}t_{m,b}\sum_{a\in\Supp m}m(a)x^{m-\Mset a}u_a.\] Since all coefficients of \(t\) are \(\geq 0\) we have \(f(x)+\dcoh f(x,u)\leq f(x+u)\) for the pointwise order so that \(\D f(x,u)=(f(x),\dcoh f(x,u))\in\Pcoh{(\D Y)}\). In that way we have defined an analytic function \(\D f\in\ACOH(\D X,\D Y)\) and it is easily checked that \(\ACOH\) is a coherent differential category in the sense of~\Cref{def:CDC}. For the two examples above we get \(\dcoh f_2(x,u)=e^{x-1}u\) and \(\dcoh f_1(x,u)=xu/\sqrt{1-x^2}\) which seems to be undefined when \(x=1\) but is not because then \emph{we must have} \(u=0\) and so \(\dcoh f_1(1,0)=0\).
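Here is one more instance of this computation, which we add for illustration (it uses only the definitions above): consider \(f(x)=x^2\) in \(\ACOH(\Sone,\Sone)\), whose matrix has a single non-zero coefficient \(t_{\Mset{\ast,\ast},\ast}=1\). The formula above yields \[ \dcoh f(x,u)=2xu \qquad\text{and hence}\qquad \D f(x,u)=(x^2,\,2xu)\,. \] Indeed, whenever \((x,u)\in\Pcohp{\D\Sone}\), that is \(x+u\leq 1\), we have \(x^2+2xu\leq(x+u)^2\leq 1\), in accordance with the inequality \(f(x)+\dcoh f(x,u)\leq f(x+u)\).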
An analytic \(f\in\ACOH(X,Y)\) is \emph{linear} if its matrix \(t\) satisfies that whenever \(t_{m,b}\not=0\), one has \(m=\Mset a\) for some \(a\in\Web X\). This notion of linearity \footnote{Which arises from the fact that \(\ACOH\) is the Kleisli category of the comonad ``\(\oc\)'' on the PCS model of LL of~\cite{DanosEhrhard08}.} coincides with both additivity (\Cref{def:additive}) and \(\D\)-linearity (\Cref{def:linear}). The category \(\ACOH\) is cartesian, with \(\top=(\emptyset,\{0\})\) and \(X\with Y =(\{0\}\times\Web X\cup\{1\}\times\Web Y, \{z\in\Realpto{\{0\}\times\Web X\cup\{1\}\times\Web Y}\St\prodProj_0(z)\in\Pcoh X\text{ and }\prodProj_1(z)\in\Pcoh Y\})\) which is easily seen to be a PCS (\(\prodProj_i\) is defined exactly as \(\Dproj_i\)) such that \(\Pcohp{X\with Y}=\Pcoh X\times\Pcoh Y\) up to a trivial bijection. The projections \(\prodProj_i\) are additive, and \(\prodSwap\) (see~\Cref{sec:cartesian-summability}) is an iso: if \(((x,u),(y,v))\in\Pcohp{\D X\with\D Y}\) then \(((x,y),(u,v))\in\Pcohp{\D(X\with Y)}\) since \((x,y)+(u,v)=(x+u,y+v)\), so the summability structure is compatible with the cartesian product by~\Cref{cor:summability-cartesian-compat}. An \(f\in\ACOH(X_0\with X_1,Y)\) is \emph{bilinear} in \(X_0,X_1\) if it is linear (or additive) separately in both inputs, which is equivalent to saying that its matrix \(t\) satisfies that if \(t_{m,b}\not=0\) then \(m=\Mset{(0,a_0),(1,a_1)}\) with \(a_i\in\Web{X_i}\) for \(i=0,1\). Let \(\SNat=(\Nat,\{x\in\Realpto\Nat\St\sum_{n\in\Nat}x_n\leq 1\})\), which represents the type of integers in \(\ACOH\); then the function \(h\in\ACOH(\SNat\with\SNat\with\SNat,\SNat)\) given by \(h(u,x,y)=u_0x+(\sum_{n=1}^\infty u_n)y\) is bilinear in \(\SNat\), \(\SNat\with\SNat\) and can be understood as an \(\mathtt{ifzero}\) operator. The function \(k\in\ACOH(\SNat,\SNat)\) such that \(k(x)_0=0\) and \(k(x)_{n+1}=x_n\) is linear and represents the successor operation. \section{Link with cartesian differential categories} \label{sec:CDC} We show in this section that CCDCs are a generalization of cartesian differential categories~\cite{Blute09}. \subsection{Cartesian left additive categories} \label{sec:comparison-sum} We rely on the presentation of~\cite{Lemay18} for left additive categories, since this article uses a minimal set of assumptions. \begin{definition} \label{def:left-additive} A left additive category is a category such that each hom-set is a commutative monoid, with addition $+$ and zero $0$ commuting with composition on the right, that is $(f+g) \comp h = f \comp h + g \comp h$ and $0 \comp f = 0$. \end{definition} \begin{definition} A morphism $h$ is additive if addition is compatible with composition with $h$ on the left, that is $h \comp (f + g) = h \comp f + h \comp g$ and $h \comp 0 = 0$. Note that the identity is additive, and additive morphisms are closed under addition and composition. \end{definition} \begin{definition} \label{def:additive-total} A cartesian left additive category is a left additive category such that the projections are additive. \end{definition} Given a cartesian left additive category $\category$, one can define a summable pairing structure (\Cref{def:pre-presummability-structure}) $(\Dwith, \prodProj_0, \prodProj_1, \prodProj_0 + \prodProj_1)$ with $\Dwith X = X \with X$. Then one can check that all morphisms are summable (the witness of $f \summable g$ is $\prodPair{f}{g}$).
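For a concrete instance of these notions (a standard example, which we add here only for illustration and which plays no role in the sequel), one may consider the category whose objects are commutative monoids and whose morphisms are \emph{arbitrary} functions between them, with the pointwise sum on hom-sets: this category is cartesian left additive, a morphism is additive in the sense above precisely when it is a monoid homomorphism, and the witness of \(f\summable g\) is the pairing \(x\mapsto(f(x),g(x))\), whose composite with \(\prodProj_0+\prodProj_1\) is indeed the pointwise sum \(f+g\).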
Moreover, the left additivity of the category ensures that the notion of sum induced by $(\Dwith, \prodProj_0, \prodProj_1, \prodProj_0 + \prodProj_1)$ coincides with the native structure of monoid on the hom-sets. In particular, a morphism is additive in the sense of \Cref{def:additive} if and only if it is additive in the sense of \Cref{def:additive-total}. Consequently, $\prodProj_0, \prodProj_1$ and $\prodProj_0 + \prodProj_1$ are additive. Thus, $(\Dwith, \prodProj_0, \prodProj_1, \prodProj_0 + \prodProj_1)$ is a left pre-summability structure. Finally, it is a left summability structure because \ref{ax:D-witness} trivially holds (everything is summable), and \ref{ax:D-zero}, \ref{ax:D-com} hold thanks to the fact that everything is summable and that $(\category(X, Y), +, 0)$ is a commutative monoid. Conversely, any left summability structure on $\category$ of shape $(\Dwith, \prodProj_0, \prodProj_1, \Dsum)$ with $\D_{\with} X = X \with X$ endows each hom-set with a commutative monoid structure and \Cref{prop:sum-left-compatible} ensures that the category is left additive. Then, as above, a morphism is additive in the sense of \Cref{def:additive} if and only if it is additive in the sense of \Cref{def:additive-total}. Thus $\prodProj_0, \prodProj_1$ are additive so the category is cartesian left additive. Moreover $\Dsum = \prodProj_0 + \prodProj_1$ by \Cref{prop:proj-sum} so the left summability structure induced by the monoid on the hom-sets coincides with the left summability structure we started from. We just proved \Cref{prop:summability-total} below. \begin{theorem} \label{prop:summability-total} Let $\category$ be a cartesian category. Define $\Dwith X = X \with X$. There is a bijection between the monoid structures on the hom-sets that make $\category$ a cartesian left additive category and the left summability structures $(\D, \Dproj_0, \Dproj_1, \Dsum)$ on $\category$ such that $\D = \Dwith$, $\Dproj_0 = \prodProj_0$ and $\Dproj_1 = \prodProj_1$. \end{theorem} \begin{remark} Any left summability structure on $\category$ of shape $(\Dwith, \prodProj_0, \prodProj_1, \Dsum)$ with $\Dwith X = X \with X$ is \emph{de facto} compatible with the cartesian product. The additivity of $\prodProj_0$ and $\prodProj_1$ is part of the axioms of summability, and $\prodSwap$ is an isomorphism thanks to \Cref{prop:prodSwap-inverse} and the fact that everything is summable. \end{remark} \subsection{Cartesian differential categories} \label{sec:comparison-diff} We give the axioms of a cartesian differential category following the alternative formulation of~\cite{Cockett14} for convenience. \begin{definition} \label{def:CartesianDC} A cartesian differential category is a cartesian left additive category $\category$ equipped with a differential combinator $\d$ that maps each morphism $f \in \category(X, Y)$ to a morphism $\d{f} \in \category(X \with X, Y)$ such that \begin{enumerate} \item $\d{\prodProj_0} = \prodProj_0 \comp \prodProj_1$, $\d{\prodProj_1} = \prodProj_1 \comp \prodProj_1$; \item $\d{0} = 0$ and $\d{(f+g)} = \d{f} + \d{g}$; \item $\d{\id} = \prodProj_1$ and $\d {(g \comp f)} = \d{g} \comp \prodPair{f \comp \prodProj_0}{\d{f}}$; \item $\d{f} \comp \prodPair{x}{0} = 0$ and $\d{f} \comp \prodPair{x}{u+v} = \d{f} \comp \prodPair{x}{u} + \d{f} \comp \prodPair{x}{v}$; \item $\d{\d{f}} \comp \prodPair{\prodPair{x}{0}}{\prodPair{0}{u}} = \d{f} \comp \prodPair{x}{u}$; \item $\d{\d{f}} \comp \prodPair{\prodPair{x}{u}}{\prodPair{v}{w}} = \d{\d f} \comp \prodPair{\prodPair{x}{v}}{\prodPair{u}{w}}$.
\end{enumerate} \end{definition} Note that the axiom $\d{\id} = \prodProj_1$ seems to be missing from the axioms given in~\cite{Cockett14}, although it can be found in the original formulation in~\cite{Blute09}. There is usually another axiom, which states that $\d{\prodPair{f}{g}} = \prodPair{\d{f}}{\d{g}}$. But as observed in \cite{Lemay18}, this axiom is a consequence of the linearity of the projections and of the chain rule, so we discard it. Let $\category$ be a cartesian left additive category. As stated in \Cref{prop:summability-total}, the monoid structure on the hom-sets arises from a summability structure $(\Dwith, \prodProj_0, \prodProj_1, \prodProj_0 + \prodProj_1)$ compatible with the cartesian product. Then, there is a bijection between pre-differential structures on top of this summability structure and differential combinators in the sense of \Cref{def:CartesianDC}: we can define the functorial action of $\Dwith$ from $\d$ as $\Dwith{f} \defEq \prodPair{f \comp \prodProj_0}{\d {f}}$, and we can define $\d$ from $\Dwith$ as $\d{f} = \prodProj_1 \comp \Dwith{f}$. Besides, we have shown in \Cref{sec:equivalence-lemmas} that the axioms of coherent differentiation are equivalent to some equational properties on $\dcoh$. When the underlying left summability structure is $(\Dwith, \prodProj_0, \prodProj_1, \prodProj_0 + \prodProj_1)$, those properties turn out to be exactly the axioms of cartesian differential categories. The axiom (1) corresponds to \ref{ax:Dproj-lin}. By \Cref{prop:Dsum-lin},~(2) corresponds to \ref{ax:Dsum-lin}. By \Cref{prop:D-chain},~(3) corresponds to \ref{ax:D-chain}. By \Cref{prop:derivative-additive-zero,prop:derivative-additive-sum},~(4) corresponds to \ref{ax:D-add}. By \Cref{prop:D-lin},~(5) corresponds to \ref{ax:D-lin}. By \Cref{prop:D-schwarz},~(6) corresponds to \ref{ax:D-schwarz}. Finally, the differential structures on top of the left summability structure $(\Dwith, \prodProj_0, \prodProj_1, \Dsum)$ are \emph{de facto} compatible with the cartesian product, because the linearity of $\prodProj_0$ and $\prodProj_1$ is included in~(1). \begin{theorem} \label{prop:ccdc-cdc} The cartesian differential categories are exactly the cartesian coherent differential categories in which $\D X = X \with X$, $\Dproj_0 = \prodProj_0$, $\Dproj_1 = \prodProj_1$. \end{theorem} \begin{remark} In \cite{Blute09}, $h$ is said to be linear if $\d (h) = h \comp \prodProj_1$. Then \Cref{prop:linear-equation} ensures that this notion of linearity exactly corresponds through \Cref{prop:ccdc-cdc} to our notion of $\D$-linearity introduced in \Cref{def:linear}. \end{remark} \begin{remark} \label{rk:tangent-categories} Every cartesian differential category is also a tangent category~\cite{Cockett14}, and the tangent functor induced from $\d$ is exactly the same functor as $\D_{\with}$. This makes sense, as coherent differentiation and tangent categories are very similar: they extend cartesian differential categories by generalizing addition in two different ways. \end{remark} \section{A first-order coherent differential language} We introduce a first-order language associated to these models. Note that a whole coherent differential PCF, of which our language can roughly be considered a fragment, has already been developed in \cite{Ehrhard22-pcf}, with a semantics based on~\cite{Ehrhard21}. Our main contribution here is that CCDCs provide the tools for a more principled and synthetic treatment of the semantics.
This tighter connection between syntax and semantics allows for the development of new ideas, such as a more systematic treatment of multilinearity. \subsection{Terms} \begin{definition} Let $\varTypes$ be a set of ground type symbols, ranged over by $\alpha, \beta, \ldots$ For any $\alpha \in \varTypes$ and $h \in \N$, $\D^h \alpha$ is a ground type. General types are inductively defined by \[ A, B, C \defEq \D^h \alpha \mid A \with B\,. \] \end{definition} For any type $A$, we define the type $\D A$ inductively on $A$ by $\D \D^h \alpha = \D^{h+1} \alpha$ and $\D (A \with B) = \D A \with \D B$. \begin{definition} Let $\multiLinVar, \multiLinVar[1], \ldots$ be function symbols. Each function symbol $\multiLinVar$ is uniquely assigned a \emph{function type} of the form $A_0, \ldots, A_n \arrow B$ where $A_i$ and $B$ are types. Then, $n+1$ is called the arity of $\multiLinVar$, denoted as $\arr(\multiLinVar)$. \end{definition} A function symbol $\multiLinVar$ of type $A_0, \ldots, A_n \arrow B$ will be interpreted in \Cref{sec:semantics} as an $(n+1)$-linear morphism $\sem{\multiLinVar} \in \category(\sem{A_0} \with \cdots \with \sem{A_n}, \sem{B})$ (recall \Cref{def:multilinear}). Note that the types $A_i$ can themselves be products and need not be ground types. For example, a $2$-linear map in $\category((A \with B) \with C, D)$ can by no means be seen as a $3$-linear map in $\category(A \with B \with C, D)$. \begin{definition} \label{def:functions} Define \emph{functions} as \[ \multiLin, \multiLin[1], \ldots \defEq \multiLinVar \mid \Dproj_i^{A} \mid \prodProj_i^{A, B} \mid \Dinj_i^{A} \mid \DmonadSum_n^{A} \] where $i \in \{0, 1\}$, $n \geq 0$, $\multiLinVar$ are function symbols and $A, B$ are types. Each function $\multiLin$ has a function type: $\Dproj_0^A, \Dproj_1^A$ have type $\D A \arrow A$, $\Dinj_0^A, \Dinj_1^A$ have type $A \arrow \D A$, the $\DmonadSum_n^A$ have type $\D^{n+1} A \arrow \D A$ and $\prodProj_0^{A,B}, \prodProj_1^{A, B}$ have types $A \with B \arrow A$ and $A \with B \arrow B$ respectively. Notice that projections have arity $1$ and not $2$. The type attached to the constructors $\Dproj_i$, $\prodProj_i$, $\Dinj_i$ and $\DmonadSum_n$ will always be kept implicit in what follows. \end{definition} \begin{remark} Taking $n=-1$ makes it possible to write constants. \end{remark} \begin{definition} Let $\Var$ be a set of variable symbols. The set $\terms$ of terms is defined inductively as follows \[ t, u, \ldots \defEq \prodPair{t_0}{t_1} \mid \multiLin^{\word}(t_0, \ldots, t_n) \mid x \] where $x \in \Var$, $\multiLin$ are function symbols of arity $n+1$ and $\word \in \interval{0}{n}^\ast $, the set of finite words \footnote{Such a word represents a successive application of partial derivatives on the multilinear symbol $f$; more on this in~\Cref{sec:semantics}.} of elements of \(\interval0n\). \end{definition} \begin{remark} Nothing prevents us from adding to this calculus non-multilinear function symbols, assuming that the formal derivatives for the function symbols are also provided. We focus on multilinear functions though, due to the nature of the basic operations of PCF. A coherent differential PCF would contain a base type $\nat$, two function symbols $\predCons$ and $\succCons$ of type $\nat \arrow \nat$, a family of function symbols $\ifCons^A$ of type $\nat, A \with A \arrow A$ (conditional) and a family of function symbols $\letCons^A$ of type $\nat, (\nat \arrow A) \arrow A$ (call-by-value on the type of integers).
An analysis of the semantics of these symbols in coherent differentiation in the $\LL{}$ setting of~\cite{Ehrhard22-pcf} or in the example of~\Cref{sec:apcoh} indeed shows that $\predCons$ and $\succCons$ should be interpreted as linear morphisms, and that $\ifCons^A$ and $\letCons^A$ should be interpreted as $2$-linear morphisms. Using the fact that variables can be used in a non-linear way, as well as the PCF fixpoint operator, it is then possible to write terms whose interpretation is not multilinear. For instance, \(f_1\) of~\Cref{sec:apcoh} is the semantics of such a term; see~\cite{Ehrhard22a}. \end{remark} \begin{notation} For any word $\word$, we write $\wordLength{\word}$ for its length, and $\wordLetter{\word}{j}$ for the number of occurrences of the letter $j$. We will write $f$ for $f^{\emptyWord}$, where $\emptyWord$ is the empty word. Notice that when $\arr (\multiLin) = 1$, a word $\word \in \interval{0}{0}^\ast$ can be uniquely seen as an integer $d = |\word|$. We will then write $\depth{\multiLin}{d}$ for $\multiLin^{\word}$. \end{notation} We introduce the typing rules in \Cref{fig:typing}. The systematic treatment of multilinear morphisms allows for a significant factorization of the rules. We write $\multiLin: A_0, \ldots, A_n \arrow B$ if $\multiLin$ has type $A_0, \ldots, A_n \arrow B$. \begin{figure} \caption{Typing rules} \label{fig:typing} \end{figure} Given any term $t$, one can define a term $\diffTerm{t}{x}$ by induction on $t$. The inductive steps are given in \Cref{fig:differential}. \begin{figure} \caption{Differential of a term} \label{fig:differential} \end{figure} \begin{proposition} \label{prop:typing-differential} If $\Gamma, x : A \vdash t : B$ then $\Gamma, x : \D A \vdash \diffTerm{t}{x} : \D B$. \end{proposition} \begin{proof} By induction on the typing derivation. \begin{itemize} \item If the last rule applied is \varRule{} then the first possibility is that $t = x$ and $\Gamma, x : A \vdash x : A$. But then, $\diffTerm{x}{x} = x$ and $\Gamma, x : \D A \vdash x : \D A$. The second possibility is that $t = y$ with $y \neq x$ and $\Gamma \vdash y : B$. But then, $\diffTerm{y}{x} = \Dinj_0(y)$ and $\Gamma \vdash \Dinj_0(y) : \D B$, hence $\Gamma, x : \D A \vdash \Dinj_0(y) : \D B$. Thus, $\Gamma, x : \D A \vdash \diffTerm{t}{x} : \D B$ in both cases. \item If the last rule applied is \pairRule, then $t = \prodPair{t_0}{t_1}$, $t$ is of type $B_0 \with B_1$, $\Gamma, x : A \vdash t_0 : B_0$ and $\Gamma, x : A \vdash t_1 : B_1$. But $\diffTerm{t}{x} = \prodPair{\diffTerm{t_0}{x}}{\diffTerm{t_1}{x}}$. By induction hypothesis $\Gamma, x : \D A \vdash \diffTerm{t_0}{x} : \D B_0$ and $\Gamma, x : \D A \vdash \diffTerm{t_1}{x} : \D B_1$. Thus, by applying \pairRule, $\Gamma, x : \D A \vdash \prodPair{\diffTerm{t_0}{x}}{\diffTerm{t_1}{x}} : \D B_0 \with \D B_1$. But $\D B_0 \with \D B_1 = \D (B_0 \with B_1)$ so $\Gamma, x : \D A \vdash \diffTerm{\prodPair{t_0}{t_1}}{x} : \D (B_0 \with B_1)$. \item If the last rule applied is \appRule{} then $t = f^{\word}(t_0, \ldots, t_n)$, $f$ has some type $A_0, \ldots, A_n \arrow B$, and $\Gamma, x : A \vdash t : \D^{\wordLength{\word}} B$. Besides, for any $i$, $\Gamma, x : A \vdash t_i : \D^{\wordLetter{\word}{i}} A_i$. By induction hypothesis, $\Gamma, x : \D A \vdash \diffTerm{t_i}{x} : \D^{\wordLetter{\word}{i} + 1} A_i$. But $\wordLetter{\word n \cdots 1 0}{i} = \wordLetter{\word}{i} + 1$ so applying the \appRule{} rule gives a derivation for $\Gamma, x : \D A \vdash f^{\word n \cdots 1 0}(\diffTerm{t_0}{x}, \ldots, \diffTerm{t_n}{x}) : \D^{\wordLength{\word}+n+1} B$.
Applying the \appRule{} rule again for $f=\DmonadSum_n$ yields a derivation of $\Gamma, x : \D A \vdash \DmonadSum_n(f^{\word n \cdots 1 0}(\diffTerm{t_0}{x}, \ldots, \diffTerm{t_n}{x})) : \D^{\wordLength{\word}+1} B$, which concludes the proof. \end{itemize} \end{proof} \subsection{Semantics} \label{sec:semantics} Let $\category$ be a CCDC. For the sake of simplicity, we assume that $\D (X \with Y) = \D X \with \D Y$ and $\prodSwap=\id$ \footnote{This assumption is by no means necessary, but it simplifies the notations and the results}. Assume that we are given an object $\sem{\alpha}$ of \(\category\) for any ground type symbol $\alpha$. Then one can interpret any type as an object: $\sem{\D^h \alpha} = \D^h \sem{\alpha}$ and $\sem{A \with B} = \sem{A} \with \sem{B}$. It follows by a straightforward induction that $\sem{\D A} = \D \sem{A}$. This interpretation extends as usual to contexts, setting $\sem{x_0 : A_0, \ldots, x_n : A_n} = \sem{A_0} \with \cdots \with \sem{A_n}$. The semantics of the empty context is $\top$. Assume that we are given an $(n+1)$-linear morphism $\sem{\multiLinVar} \in \category(\sem{A_0} \with \cdots \with \sem{A_n}, \sem{B})$ for any function symbol $\multiLinVar : A_0, \ldots, A_n \arrow B$. Then any function $\multiLin : A_0, \ldots, A_n \arrow B$ can be interpreted as an $(n+1)$-linear morphism $\sem{\multiLin}$ by setting $\sem{\Dproj_i} = \Dproj_i$, $\sem{\Dinj_i} = \Dinj_i$, $\sem{\DmonadSum_n} = \DmonadSum^n$ (as defined in \Cref{def:natural-trans-iterate}) and $\sem{\prodProj_i} = \prodProj_i$. \begin{remark}\label{rem:pair-differential} Since $\prodSwap=\id$, we have $\D \prodProj_i = \D \prodProj_i \comp (\prodSwap)^{-1} = \D \prodProj_i \comp \Dpair{ \Dproj_0 \with \Dproj_0}{\Dproj_1 \with \Dproj_1} = \Dpair{\prodProj_i \comp (\Dproj_0 \with \Dproj_0)}{\prodProj_i \comp (\Dproj_1 \with \Dproj_1)} = \Dpair{\Dproj_0 \comp \prodProj_i}{\Dproj_1 \comp \prodProj_i} = \prodProj_i$. Notice also that $\prodPair{\D f_0}{\D f_1} = \D \prodPair{f_0}{f_1}$ by \Cref{prop:prodpair-derivative}. \end{remark} \begin{theorem} For any term $t$ such that $\Gamma \vdash t : A$, we can define $\sem{t}_{\Gamma} \in \category(\sem{\Gamma}, \sem{A})$. \end{theorem} \begin{proof} We proceed by induction on the term. \begin{itemize} \item If $t = x$ then the last typing rule must be (Var) so that $\Gamma = \Gamma_0, x : A, \Gamma_1$. Define $\sem{x}_{\Gamma} = \prodProj_{|\Gamma_0|} \in \category(\sem{\Gamma_0} \with \sem{A} \with \sem{\Gamma_1}, \sem{A})$. \item If $t = \prodPair{t_0}{t_1}$ then the last typing rule must be (Pair), so $t$ is of type $A \with B$, $\Gamma \vdash t_0 : A$ and $\Gamma \vdash t_1 : B$. By induction, one can define $\sem{t_0}_{\Gamma} \in \category(\sem{\Gamma}, \sem{A})$ and $\sem{t_1}_{\Gamma} \in \category(\sem{\Gamma}, \sem{B})$. Then we define $\sem{\prodPair{t_0}{t_1}}_{\Gamma} = \prodPair{\sem{t_0}_{\Gamma}}{\sem{t_1}_{\Gamma}} \in \category(\sem{\Gamma}, \sem{A \with B})$. \item If $t = f^{\word}(t_0, \ldots, t_n)$ with $f : A_0, \ldots, A_n \arrow B$ then the last typing rule must be (App). That is, $t$ must be of type $\D^{\wordLength{\word}} B$ and for \(i=0,\dots,n\) we have a derivation of $\Gamma \vdash t_i : \D^{\wordLetter{\word}{i}}A_i$. By inductive hypothesis, we can define $\sem{t_i}_{\Gamma} \in \category(\sem{\Gamma}, \sem{\D^{\wordLetter{\word}{i}} A_i})$.
But $\sem{\D^{\wordLetter{\word}{i}} A_i} = \D^{\wordLetter{\word}{i}} \sem{A_i}$ and $\D_{\word_k} \ldots \D_{\word_1} \sem{f} \in \category(\D^{\wordLetter{\word}{0}} \sem{A_0} \with \cdots \with \D^{\wordLetter{\word}{n}} \sem{A_n}, \D^{\wordLength{\word}} \sem{B})$. Thus, we can set $\sem{f^{\word_1 \cdots \word_k}(t_0, \ldots, t_n)}_{\Gamma} = \D_{\word_k} \ldots \D_{\word_1} \sem{f} \comp \prodPair{\sem{t_0}_{\Gamma}, \ldots} {\sem{t_n}_{\Gamma}}$. \end{itemize} \end{proof} \begin{notation} We use $\sem{x}_{\Gamma} = \prodProj_x$ for the projection of $\sem{\Gamma}$ onto the coordinate where $x$ appears in \(\Gamma\). \end{notation} \begin{remark} In particular, $\sem{\depth{\Dproj_i}{d}(t)} = \D^d \Dproj_i \comp \sem{t}$, $\sem{\depth{\Dinj_i}{d}(t)} = \D^d \Dinj_i \comp \sem{t}$, $\sem{\depth{\DmonadSum_n}{d}(t)} = \D^d{\DmonadSum^n} \comp \sem{t}$. More importantly, $\sem{\depth{\prodProj_i}{d}(t)} = \D^d \prodProj_i \comp \sem{t} = \prodProj_i \comp \sem{t}$ because of our assumption that $\prodSwap$ is the identity. \end{remark} \begin{notation} For any word $\word = \word_1 \cdots \word_k$ in $\interval{0}{n}^k$, define $\D_{\word} \defEq \D_{\word_k} \ldots \D_{\word_1}$. Then for any $f \in \category(X_0 \with \ldots \with X_n, Y)$, $\D_{\word} f \in \category(\D^{\wordLetter{\word}{0}} X_0 \with \cdots \with \D^{\wordLetter{\word}{n}} X_n, \D^{\wordLength{\word}} Y)$. Note that $\D_{\word \cdot \word[1]} = \D_{\word[1]} \D_{\word}$. Then, \Cref{prop:DDproj-commute-partial} can be seen as the property that for any $(n+1)$-linear $f$ and any word $\word[1]$ of length $d$, $\D^d \Dproj_i \comp \D_{\word[1]} \D_j f = \D_{\word[1]} f \comp \singleApp{j}{\id}{\D^{\wordLetter{\word[1]}{j}} \Dproj_i}$. \end{notation} The main result of this section on the calculus consists in showing that the semantics of this syntactical derivative operation corresponds to the derivative in the model. \begin{theorem} If $\Gamma, x : A \vdash t : B$ then $\sem{\diffTerm{t}{x}}_{\Gamma, x : \D A} = \D_1 \sem{t}_{\Gamma, x : A}$ where $\sem{t}_{\Gamma, x : A}$ is seen as a morphism of $\category(\sem{\Gamma} \with \sem{A}, \sem{B})$. \end{theorem} \begin{proof} By induction on $t$. \begin{itemize} \item If $t = x$ then $\sem{t}_{\Gamma, x : A} = \prodProj_1 \in \category(\sem{\Gamma} \with \sem{A}, \sem{A})$. Then $\D_1 \prodProj_1 = \D \prodProj_1 \comp \strengthR = \D \prodProj_1 \comp \Dpair{\id \with \Dproj_0}{0 \with \Dproj_1} = \Dpair{\prodProj_1 \comp (\id \with \Dproj_0)}{\prodProj_1 \comp (0 \with \Dproj_1)} = \Dpair{\Dproj_0 \comp \prodProj_1}{\Dproj_1 \comp \prodProj_1} = \prodProj_1$ using \Cref{prop:strength} and the linearity of $\prodProj_1$. \item If $t = y \neq x$ then $\sem{t}_{\Gamma, x : A} = \sem{y}_{\Gamma} \comp \prodProj_0 = \prodProj_y \comp \prodProj_0 \in \category(\sem{\Gamma} \with \sem{A}, \sem{B})$. Then $\D_1 (\prodProj_y \comp \prodProj_0) = \D \prodProj_y \comp \D \prodProj_0 \comp \strengthR = \D \prodProj_y \comp \D \prodProj_0 \comp \Dpair{\id \with \Dproj_0}{0 \with \Dproj_1} = \D \prodProj_y \comp \Dpair{\prodProj_0 \comp (\id \with \Dproj_0)} {\prodProj_0 \comp (0 \with \Dproj_1)} =\D \prodProj_y \comp \Dpair{\prodProj_0}{0} = \Dpair{\prodProj_y \comp \prodProj_0}{0} = \sem{\Dinj_0(y)} = \sem{\diffTerm{y}{x}}$. \item If $t = \prodPair{t_0}{t_1}$, then $\sem{\diffTerm{t}{x}} = \sem{\prodPair{\diffTerm{t_0}{x}}{\diffTerm{t_1}{x}}} = \prodPair{\sem{\diffTerm{t_0}{x}}}{\sem{\diffTerm{t_1}{x}}} $.
By inductive hypothesis, $\sem{\diffTerm{t}{x}} = \prodPair{\D_1 \sem{t_0}}{\D_1 \sem{t_1}}$. But $\prodPair{\D_1 \sem{t_0}}{\D_1 \sem{t_1}} = \prodPair{\D \sem{t_0} \comp \strengthR}{\D \sem{t_1} \comp \strengthR} = \prodPair{\D \sem{t_0}}{\D \sem{t_1}} \comp \strengthR$. By \Cref{rem:pair-differential}, this is equal to $\D \prodPair{\sem{t_0}}{\sem{t_1}} \comp \strengthR = \D_1 \prodPair{\sem{t_0}}{\sem{t_1}} = \D_1 \sem{t}$. \item If $t = f^{\word}(t_0, \ldots, t_n)$ then by definition $\diffTerm{t}{x} = \DmonadSum_n(f^{\word n \cdots 1 0}( \diffTerm{t_0}{x}, \ldots, \diffTerm{t_n}{x}))$. Thus, $\sem{\diffTerm{t}{x}} = \DmonadSum^n \comp \D_{n \cdots 1 0} \D_{\word} \sem{f} \comp \prodPair{\sem{\diffTerm{t_0}{x}}, \ldots}{\sem{\diffTerm{t_n}{x}}} = \DmonadSum^n \comp \D_{n \cdots 1 0} \D_{\word} \sem{f} \comp \prodPair{\D_1 \sem{t_0}, \ldots}{\D_1 \sem{t_n}}$ by inductive hypothesis. But then, the Leibniz rule (\Cref{prop:leibniz-n}) states that $\DmonadSum^n \comp \D_{n \cdots 1 0} \D_{\word} \sem{f} = \D \D_{\word} \sem{f}$. Thus, $\sem{\diffTerm{t}{x}} = \D \D_{\word} \sem{f} \comp \prodPair{\D \sem{t_0} \comp \strengthR, \ldots}{\D \sem{t_n} \comp \strengthR} = \D \D_{\word} \sem{f} \comp \prodPair{\D \sem{t_0}, \ldots}{\D \sem{t_n}} \comp \strengthR = \D (\D_{\word} \sem{f} \comp \prodPair{\sem{t_0}, \ldots}{\sem{t_n}}) \comp \strengthR = \D \sem{t} \comp \strengthR = \D_1 \sem{t}$. \end{itemize} \end{proof} \subsection{Reduction} We introduce in this section a set of reduction rules that deal with the differential content of the terms. The set of rules is more compact than the one given in~\cite{Ehrhard22-pcf}, but covers all of the rules concerning the fragment we are looking at. \begin{remark} We could have added a construct $t[u/x]$ for explicit substitutions, with the typing rule \begin{center} \begin{prooftree} \hypo{\Gamma, x : A \vdash t : B} \hypo{\Gamma \vdash u : A} \infer2{\Gamma \vdash t[u/x] : B} \end{prooftree}(Cut) \end{center} as well as reduction rules that perform the substitution steps (for example, $x[u/x] \red u$). We decided not to do so because, in a higher-order \(\lambda\)-calculus setting, such explicit substitutions are not necessary. \end{remark} The main difference with the differential \(\lambda\)-calculus of~\cite{Ehrhard03} is the absence of sums, because we do not want a non-deterministic typing rule such as \begin{center} \begin{prooftree} \hypo{\Gamma \vdash t : A} \hypo{\Gamma \vdash u : A} \infer2{\Gamma \vdash t + u : A} \end{prooftree} \end{center} But the reduction of a $\Dproj_1$ against a $\DmonadSum$ will introduce sums. Handling sums without the typing rule above is tricky because of subject reduction: there would indeed be no guarantee that if $\Gamma \vdash t + u : A$ and $t \red t'$ then $\Gamma \vdash t' + u : A$. For this reason, we chose a conservative approach, by keeping sums as a formal multiset on top of the terms. \begin{definition} A term multiset is a finite multiset of terms. \end{definition} See \Cref{sec:apcoh} for the notations we use on multisets. We define a reduction $\red$ from terms to term multisets. The reduction rules are given in \Cref{fig:reduction-diff}. Then we define $\redSym$ as the ``reflexive'' closure of $\red$. That is, $t \redSym \multisetVar$ if $t \red \multisetVar$ or if $\multisetVar = [t]$.
This allows us to lift $\red$ to a reduction from term multisets to term multisets in a monadic fashion: if $t_1 \red L_1$ and for all $i \neq 1$, $t_i \redSym L_i$, then \[ [t_1, \ldots, t_n] \redMultiset \sum_{i=1}^{n} L_i \] where $\sum$ is the multiset union, that is, the pointwise sum of the functions $L_i : \terms \arrow \N$. \begin{figure} \caption{Reduction rules} \label{fig:reduction-diff} \end{figure} \begin{definition} A term multiset $[t_1, \ldots, t_n ]$ of type $A$ in context $\Gamma$ is $\category$-summable if $\sem{t_1}_{\Gamma}, \ldots, \sem{t_n}_{\Gamma}$ are summable (in the sense of \Cref{prop:arbitrary-sum}). Then, we define $\sem{[t_1, \ldots, t_n ]}_{\Gamma} = \sem{t_1}_{\Gamma} + \cdots + \sem{t_n}_{\Gamma}$. Note that $[\,]$ is always $\category$-summable, and $\sem{[\,]} = 0$. \end{definition} The main point of coherent differentiation is that the reduction $\red$ will always introduce term multisets that are $\category$-summable, for any model $\category$. \begin{theorem}[Invariance of semantics under reduction] \label{prop:semantic-invariance} For any $\Gamma \vdash t : A$, if $t \red L$ then $L$ is $\category$-summable and $\sem{L}_{\Gamma} = \sem{t}_{\Gamma}$. \end{theorem} \begin{proof} Let us consider each rule of \Cref{fig:reduction-diff} in turn. Note that when a term multiset has one element, it is always $\category$-summable and $\sem{[t]} = \sem{t}$. \begin{align*} \sem{\depth{\prodProj_i}{d}(\prodPair{t_0}{t_1})} &= \D^d \prodProj_i \comp \prodPair{\sem{t_0}}{\sem{t_1}} \\ &= \prodProj_i \comp \prodPair{\sem{t_0}}{\sem{t_1}} \\ &= \sem{t_i}\,. \end{align*} The rule below is the one where most of the differential content appears. Recall that $\sem{f}$ is assumed to be multilinear, for any function $f$. It implies that $\D_{\word} \sem{f}$ is also multilinear by \Cref{prop:partial-preserve-multilinearity}, so it is possible to apply \Cref{prop:DDproj-commute-partial} to it. \begin{align*} & \sem{\depth{\Dproj_i}{d} (f^{\word j \word[1]}(t_0, \ldots, t_n))} \\ &\quad= \D^d \Dproj_i \comp \D_{\word[1]} \D_j \D_{\word} \sem{f} \comp \prodPair{\sem{t_0}, \ldots}{\sem{t_n}} \\ &\quad= \D_{\word[1]} \D_{\word} \sem{f} \comp \singleApp{j}{\id}{\D^{\wordLetter{\word[1]}{j}} \Dproj_i} \comp \prodPair{\sem{t_0}, \ldots}{\sem{t_n}} \quad \text{by \Cref{prop:DDproj-commute-partial}} \\ &\quad= \D_{\word \word[1]} \sem{f} \comp \prodPair{\sem{t_0}, \ldots, \D^{\wordLetter{\word[1]}{j}} \Dproj_i \comp \sem{t_j}, \ldots}{\sem{t_n}} \\ &\quad= \sem{f^{\word \word[1]} (t_0, \ldots, \depth{\Dproj_i}{\wordLetter{\word[1]}{j}}(t_j), \ldots, t_n)}\,. \end{align*} The next three rules are rather standard and are consequences of the definition of $\Dproj_i$, $\Dinj_j$ and $\DmonadSum^n$.
\begin{align*} \sem{\depth{\Dproj_i}{d}(\depth{\Dinj_i}{d}(t))} &= \D^d \Dproj_i \comp \D^d \Dinj_i \comp \sem{t} \\ &= \D^d(\Dproj_i \comp \Dinj_i) \comp \sem{t} \text{\quad by \ref{ax:D-chain}} \\ &= \D^d \id \comp \sem{t} = \sem{t} \text{\quad by \ref{ax:D-chain}} \end{align*} \begin{align*} \sem{\depth{\Dproj_i}{d}(\depth{\Dinj_{1-i}}{d}(t))} &= \D^d \Dproj_i \comp \D^d \Dinj_{1-i} \comp \sem{t} \\ &= \D^d(\Dproj_i \comp \Dinj_{1-i}) \comp \sem{t} \text{\quad by \ref{ax:D-chain}} \\ &= \D^d 0 \comp \sem{t} = 0 \comp \sem{t} \text{\quad by \ref{ax:Dsum-lin}} \\ &= 0 = \sem{[\,]} \end{align*} \begin{align*} \sem{\depth{\Dproj_0}{d}(\depth{\DmonadSum_n}{d}(t))} &= \D^d \Dproj_0 \comp \D^d \DmonadSum^n \comp \sem{t} \\ &= \D^d(\Dproj_0 \comp \DmonadSum^n) \comp \sem{t} \text{\quad by \ref{ax:D-chain}} \\ &= \D^d(\Dproj_0^{n+1}) \comp \sem{t} \\ &= (\D^d \Dproj_0)^{n+1} \comp \sem{t} \text{\quad by \ref{ax:D-chain}} \\ &= \sem{(\depth{\Dproj_0}{d})^{n+1}(t)}\,. \end{align*} The last rule is where finite multisets of size greater than $1$ are introduced. Most lines in the following sequence of equations should be understood as follows: ``the sum above is well defined, so the sum below is well defined and they are equal''. \begingroup \allowdisplaybreaks \begin{align*} \sem{\depth{\Dproj_1}{d}(\depth{\DmonadSum_n}{d}(t))} &= \D^d \Dproj_1 \comp \D^d \DmonadSum^n \comp \sem{t} \\ &= \D^d(\Dproj_1 \comp \DmonadSum^n) \comp \sem{t} \quad \text{by \ref{ax:D-chain}} \\ &= \left( \D^d(\sum_{k=0}^n \Dproj_0^k \comp \Dproj_1 \comp \Dproj_0^{n-k}) \right) \comp \sem{t} \\ &= \left( \sum_{k=0}^n \D^d (\Dproj_0^k \comp \Dproj_1 \comp \Dproj_0^{n-k}) \right) \comp \sem{t} \quad \text{by \ref{ax:Dsum-lin} and \Cref{prop:D-sum-com}} \\ &= \sum_{k=0}^n \D^d (\Dproj_0^k \comp \Dproj_1 \comp \Dproj_0^{n-k}) \comp \sem{t} \quad \text{by \Cref{prop:sum-left-compatible}} \\ &= \sum_{k=0}^n (\D^d \Dproj_0)^k \comp \D^d \Dproj_1 \comp (\D^d \Dproj_0)^{n-k} \comp \sem{t} \quad \text{by \ref{ax:D-chain}} \\ &= \sum_{k=0}^n \sem{(\depth{\Dproj_0}{d})^k \depth{\Dproj_1}{d} (\depth{\Dproj_0}{d})^{n-k}(t)} \,. \end{align*} \endgroup Thus, $\sum_{k=0}^n [(\depth{\Dproj_0}{d})^k \depth{\Dproj_1}{d} (\depth{\Dproj_0}{d})^{n-k}(t)]$ is $\category$-summable of semantics $\sem{\depth{\Dproj_1}{d}(\depth{\DmonadSum_n}{d}(t))}$\,. \end{proof} \begin{cor} \label{prop:semantic-invariance-lift} For any term multiset $\Gamma \vdash L : A$ that is $\category$-summable, if $L \redMultiset L'$ then $L'$ is $\category$-summable and $\sem{L'}_{\Gamma} = \sem{L}_{\Gamma}$. \end{cor} \begin{proof} Assume that $[t_1, \ldots, t_n]$ is $\category$-summable and that $[t_1, \ldots, t_n] \redMultiset \multisetVar$. That is, for any $i$, $t_i \redSym [t_i^1, \ldots, t_i^{k_i}]$ and $\multisetVar = \sum_{i=1}^{n} [t_i^1, \ldots, t_i^{k_i}]$. Then by \Cref{prop:semantic-invariance}, for any $i$, $\sem{t_i^1}, \ldots, \sem{t_i^{k_i}}$ are summable of sum $\sem{t_i}$. By assumption, $\sem{t_1}, \ldots, \sem{t_n}$ are summable, that is, $\sum_{j=1}^{k_1} \sem{t_1^j}, \ldots, \sum_{j=1}^{k_n} \sem{t_n^j}$ are summable. By \Cref{prop:arbitrary-sum}, this means that the family $\sem{t_1^1}, \ldots, \sem{t_1^{k_1}}, \ldots, \sem{t_n^1}, \ldots, \sem{t_n^{k_n}}$ is summable of sum \[ \sum_{i=1}^{n} \sum_{j=1}^{k_i} \sem{t_i^j} = \sum_{i=1}^n \sem{t_i} \] Thus $\multisetVar$ is $\category$-summable and $ \sem{\multisetVar} = \sem{[t_1, \ldots, t_n]}$. \end{proof} The usage of such term multisets may seem somewhat non-deterministic.
But any multiset generated by reductions of the calculus can be interpreted as a summable family in deterministic models such as probabilistic coherence spaces\footnote{Probabilistic branching is by no means a form of non-determinism} (see Section~\ref{sec:apcoh}) or non-uniform coherence spaces. This determinism of the models makes it possible to prove in \cite{Ehrhard22-pcf} a result that roughly states that whenever a closed term of \emph{type integer} reduces to a term multiset $C + [\intTerm]$ (where the $\intTerm$ are the usual numerals of \PCF{}), then $\sem{C} = 0$. That is, only one of the branches of the reduction rule \[ \depth{\Dproj_1}{d}(\depth{\DmonadSum_n}{d}(t)) \red \sum_{k=0}^{n} [ (\depth{\Dproj_0}{d})^k \depth{\Dproj_1}{d} (\depth{\Dproj_0}{d})^{n-k} (t)] \] produces a non-empty multiset. The proof relies on the fact that any term of type integer will be interpreted in $\ACOH$ as a Dirac distribution $\dirac{n}$ on $\N$ or as the zero distribution, because the calculus does not feature any form of probabilistic branching. Thus, a term multiset of type integer is $\ACOH$-summable if and only if there is at most one term in the multiset whose semantics is not $0$. In particular, $\semPcoh{\intTerm} = \dirac{\intVar}$ and $C + [\intTerm]$ is $\ACOH$-summable (by \cref{prop:semantic-invariance-lift}), so $\semPcoh{C} = 0$. One can also use non-uniform coherence spaces for proving the same result in a similar way. This observation led to the development of a completely deterministic Krivine machine for a coherent differential version of PCF in~\cite{Ehrhard22-pcf}, extending the projections path with a writable memory structure. \section*{Conclusion} We have introduced and studied a general categorical framework for coherent differentiation, a new approach to the differential calculus which does not require the ambient category to be (left-)additive. We have also proposed some basic syntactical constructs accounting in a term language for these new categorical constructs. These are the foundations for a principled and systematic approach to the denotational semantics of functional programming languages like (probabilistic) PCF extended with coherent differentiation. As shown in~\cite{Ehrhard22-pcf}, such an extension can perfectly well feature general recursive definitions as well as deterministic or probabilistic behaviors, in sharp contrast with the Differential \(\lambda\)-calculus~\cite{EhrhardRegnier02}, which is inherently non-deterministic. Accordingly, the next step will be to specialize the present general axiomatization to the case where the category is cartesian closed. \section*{Acknowledgment} We thank the reviewers for their careful reading and helpful comments. This work was partly supported by the ANR project \emph{Probabilistic Programming Semantics (PPS)} ANR-19-CE48-0014. \end{document}
\begin{document} \title{Stratifications for moduli of sheaves and moduli of quiver representations} \begin{abstract} We study the relationship between two stratifications on parameter spaces for coherent sheaves and for quiver representations: a stratification by Harder--Narasimhan types and a stratification arising from the geometric invariant theory construction of the associated moduli spaces of semistable objects. For quiver representations, both stratifications coincide, but this is not quite true for sheaves. We explain why the stratifications on various Quot schemes do not coincide and see that the correct parameter space to compare such stratifications is the stack of coherent sheaves. Then we relate these stratifications for sheaves and quiver representations using a generalisation of a construction of \'{A}lvarez-C\'{o}nsul and King. \end{abstract} \section{Introduction} Many moduli spaces can be described as a quotient of a reductive group $G$ acting on a scheme $B$ using the methods of geometric invariant theory (GIT) developed by Mumford \cite{mumford}. This depends on a choice of linearisation of the $G$-action which determines an open subscheme $B^{ss}$ of $B$ of semistable points such that the GIT quotient $B/\!/ G$ is a good quotient of $B^{ss}$. In this article, we are interested in two moduli problems with such a GIT construction: \begin{enumerate} \item moduli of representations of a quiver $Q$ with relations $\cR$; \item moduli of coherent sheaves on a polarised projective scheme $(X,\cO_X(1))$. \end{enumerate} In the first case, King \cite{king} uses a stability parameter $\theta$ to construct moduli spaces of $\theta$-semistable quiver representations of dimension vector $d$ as a GIT quotient of an affine variety by a reductive group action linearised by a character $\chi_\theta$. In the second case, Simpson \cite{simpson} constructs a moduli space of semistable sheaves with Hilbert polynomial $P$ as a GIT quotient of a projective scheme by a reductive group action linearised by an ample line bundle. For both moduli problems, we compare two stratifications: a Hesselink stratification arising from the GIT construction and a stratification by Harder--Narasimhan types. For quiver representations, both stratifications coincide, but this is not the case for sheaves. We explain why these stratifications do not agree for sheaves and how to rectify this. Then we relate these stratifications for sheaves and quiver representations using a construction of \'{A}lvarez-C\'{o}nsul and King \cite{ack}. \subsection{Harder--Narasimhan stratifications} In both of the above moduli problems, there is a notion of semistability for objects that involves verifying an inequality for all subobjects; in fact, this arises from the GIT notion of semistability appearing in the construction of these moduli spaces. The idea of a Harder--Narasimhan (HN) filtration is to construct a unique maximally destabilising filtration for each object in a moduli problem \cite{hn}. Every coherent sheaf has a unique HN filtration: for pure sheaves, this result is well-known (cf. \cite{huybrechts} Theorem 1.3.4) and the extension to coherent sheaves is due to Rudakov \cite{rudakov}. For quiver representations, there is no canonical notion of HN filtration with respect to the stability parameter $\theta$. Rather, the notion of HN filtration depends on a collection of positive integers $\alpha_v$ indexed by the vertices $v$ of the quiver (see Definition \ref{defn HN}). 
We note that in many previous works, only the choice $\alpha_v =1$, for all vertices $v$, is considered. For both quiver representations and sheaves, we can stratify the associated moduli stacks by HN types (which encode the invariants of the successive quotients in the HN filtrations) and we can stratify the parameter spaces used in the GIT construction of these moduli spaces. \subsection{Hesselink stratifications} Let $G$ be a reductive group acting on a scheme $B$ such that \begin{enumerate} \item $B$ is affine and the action on the structure sheaf is linearised by a character of $G$, or \item $B$ is projective with an ample $G$-linearisation. \end{enumerate} Then, by the Hilbert--Mumford criterion \cite{mumford}, it suffices to check GIT semistability on 1-parameter subgroups (1-PSs) of $G$. Associated to this action and a choice of norm on the set of conjugacy classes of 1-PSs, there is a stratification of $B$, due to Hesselink \cite{hesselink}, \[ B = \bigsqcup_{\beta \in \mathcal{I}} S_\beta\] into finitely many disjoint $G$-invariant locally closed subschemes. Furthermore, there is a partial order on the strata such that the closure of a stratum is contained in the union over all higher strata, the lowest stratum is $B^{ss}$ and the higher strata parametrise points of different instability types. In order to construct this stratification, the idea is to associate to each unstable point a conjugacy class of adapted 1-PSs of $G$, in the sense of Kempf \cite{kempf}, that are most responsible for the instability of this point, and then to stratify by these conjugacy classes. These stratifications have a combinatorial nature: the index set can be determined from the weights of the action of a maximal torus of $G$ and the strata $S_\beta$ can be constructed from simpler subschemes $Z_\beta$, known as limit sets, which are semistable sets for smaller reductive group actions (cf. \cite{kirwan,ness} for the projective case and \cite{hoskins_quivers} for the affine case). Hesselink stratifications have a diverse range of applications. For a smooth projective variety $B$, the cohomology of $B/\!/G$ can be studied via these stratifications \cite{kirwan}. In variation of GIT, these stratifications are used to describe the birational transformations between quotients \cite{dolgachevhu,thaddeus} and to provide orthogonal decompositions in the derived categories of GIT quotients \cite{bfk,halpern}. \subsection{Comparison results} For quiver representations, the choice of parameter $\alpha$ used to define HN filtrations corresponds to a choice of norm $|| - ||_{\alpha}$ used to define the Hesselink stratification. The corresponding HN and Hesselink stratifications on the space of quiver representations agree: for a quiver without relations, this result is \cite{hoskins_quivers} Theorem 5.5 and we deduce the corresponding result for a quiver with relations in Theorem \ref{quiver HN is Hess}. Simpson constructs the moduli space of semistable sheaves on $(X,\cO_X(1))$ with Hilbert polynomial $P$ as a GIT quotient of a closed subscheme $R_n$ of the Quot scheme \[\quot_n:=\quot(k^{P(n)} \otimes \cO_X(-n),P)\] by the natural $\SL_{P(n)}$-action for $n$ sufficiently large. The associated Hesselink stratification of $R_n$ was partially described in \cite{hoskinskirwan} and compared with the stratification by HN types.
It is natural to expect these stratifications to coincide, as in the case of quiver representations, but it is only shown in \cite{hoskinskirwan} that a pure HN stratum is contained in a Hesselink stratum for $n$ sufficiently large. In this article, we complete the description of the Hesselink stratification of $\quot_n$ and extend the inclusion result of \cite{hoskinskirwan} to non-pure HN types (cf. Theorem \ref{thm comp strat}). Moreover, we explain why the Hesselink and HN stratifications on the Quot scheme do not agree. The underlying moral reason is that the Quot scheme does not parametrise all sheaves on $X$ with Hilbert polynomial $P$ and so is only a truncated parameter space. The natural solution is to instead consider the moduli stack $\cC oh_P(X)$ of coherent sheaves on $X$ with Hilbert polynomial $P$. As every coherent sheaf is $n$-regular for $n$ sufficiently large, the open substacks of $n$-regular sheaves form an increasing cover \[ \cC oh_P(X) = \bigcup_{n \geq 0} \cC oh_P^{n-\reg}(X)\] such that $\cC oh^{n-\reg}_P(X) $ is a stack quotient of an open subscheme of $\quot_n$ by $\GL_{P(n)}$. Using this description, we can view the Quot schemes as finite-dimensional approximations to $\cC oh_P(X)$. We use the Hesselink stratifications on each $\quot_n$ to construct an associated Hesselink stratification on $\cC oh_P(X)$, which can be seen as a limit over $n$ of the stratifications on $\quot_n$. The main result is that the Hesselink and HN stratifications on $\cC oh_P(X)$ coincide (cf. Corollary \ref{cor Hess is HN on stack}). \subsection{The functorial construction of \'{A}lvarez-C\'{o}nsul and King} We relate these stratifications for sheaves and quivers using a functor considered by \'{A}lvarez-C\'{o}nsul and King \cite{ack} \[ \Phi_{n,m}:= \text{Hom} (\cO_X(-n) \oplus \cO_X(-m),\:-\:) : \textbf{Coh}(X) \ra \textbf{Reps}({K_{n,m}})\] from the category of coherent sheaves on $X$ to the category of representations of a Kronecker quiver $K_{n,m}$ with two vertices $n$ and $m$ and $\dim H^0(\cO_X(m-n))$ arrows from $n$ to $m$. Explicitly, a sheaf $\mathcal{E}$ is sent to the $K_{n,m}$-representation $W_{\mathcal{E}}=(W_{\mathcal{E},n}, W_{\mathcal{E},m}, \text{ev}_{\mathcal{E}})$ where $W_{\mathcal{E},l}:= H^0(\mathcal{E}(l))$ and the morphisms are given by the evaluation map \[ \text{ev}_{\mathcal{E}}: H^0(\mathcal{E}(n)) \otimes H^0(\cO_X(m-n)) \ra H^0(\mathcal{E}(m)).\] \'{A}lvarez-C\'{o}nsul and King \cite{ack} show that, for $m >\!> n >\!> 0$, this functor embeds the subcategory of semistable sheaves with Hilbert polynomial $P$ into the subcategory of $\theta_{n,m}(P)$-semistable quiver representations of dimension vector $d_{n,m}(P)$ and construct the moduli space of semistable sheaves on $X$ with Hilbert polynomial $P$ by using King's construction of the moduli spaces of quiver representations. In the final section of this article, we relate the stratifications for moduli of quivers and sheaves using this map of \'{A}lvarez-C\'{o}nsul and King.
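To give a simple feel for this functor (a toy example that we add here for orientation; the choices of $X$ and of the sheaf are ours and are not taken from \cite{ack}), let $X = \mathbb{P}^1$ with its standard polarisation $\cO_X(1)$ and take $\mathcal{E} = \cO_X$. For $m > n \geq 0$, the quiver $K_{n,m}$ has $\dim H^0(\cO_X(m-n)) = m-n+1$ arrows and \[ \Phi_{n,m}(\cO_X) = \left( H^0(\cO_X(n)),\, H^0(\cO_X(m)),\, \text{ev}_{\cO_X} \right)\] is the representation of dimension vector $(n+1,m+1)$ whose evaluation map is simply multiplication of homogeneous polynomials in two variables.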
We prove that a HN stratum for sheaves is sent to a HN stratum for quiver representations (cf. Theorem \ref{thm ack HN}); however, multiple HN strata for sheaves can be sent to the same HN stratum for quiver representations when $\dim X > 1$. \subsection*{Notation and conventions} We work over an algebraically closed field $k$ of characteristic zero and by scheme, we mean scheme of finite type over $k$. By sheaf, we always mean coherent algebraic sheaf. For schemes $X$ and $S$, by a family of sheaves on $X$ parametrised by $S$, we mean a sheaf $\cF$ over $X \times S$ that is flat over $S$ and we write $\cF_s := \cF|_{X \times \{s\}}$. We use the term stratification in a weaker sense than usual to mean a decomposition into disjoint locally closed subsets with a partial order on the strata such that the closure of a given stratum is contained in the union of all higher strata (usually for a stratification, one requires the closure of a given stratum to be the union of all higher strata). For natural numbers $m$ and $n$, we write \lq for $m >\!> n$' or \lq for $m$ sufficiently larger than $n$' to mean that there exists $N \geq n$ such that the statement in question holds for all $m \geq N$. Similarly, by \lq for $n_r >\!> n_{r-1} >\!> \cdots >\!> n_0$', we mean $\exists N_0 \geq n_0 \: \forall n_1 \geq N_0 \: \exists N_1 \geq n_1 \: \forall n_2 \geq N_1 \dots \: \exists N_{r-1} \geq n_{r-1} \: \forall n_r \geq N_{r-1}$. \section{Hesselink stratifications of affine varieties}\label{sec hess aff} Let $G$ be a reductive group acting linearly on an affine variety $V$ and let $L_\rho$ denote the $G$-linearisation on the structure sheaf of $V$ obtained by twisting by a character $\rho : G \ra \GG_m$. \begin{defn}\label{defn GIT ss wrt char} A point $v \in V$ is $\rho$-{semistable} if there exists $\sigma \in H^0(V, {L}_\rho^{\otimes n})^G $ for some $n > 0$ such that $\sigma(v) \neq 0$. We let $V^{\rho-\text{ss}}$ denote the subset of $\rho$-semistable points. \end{defn} By \cite{mumford} Theorem 1.10, the GIT quotient of $G$ acting on $V$ with respect to $L_\rho$ \[ V^{\rho-\text{ss}} \ra V/\!/_\rho G := \proj \bigoplus_{n \geq 0} H^0(V, {L}_\rho^{\otimes n})^G \] is a good quotient. Let $(-,-) : X^*(G) \times X_*(G) \ra \ZZ$ be the natural pairing between characters and cocharacters; then semistability can be checked on 1-parameter subgroups $\lambda : \GG_m \ra G$. \begin{thm}[The Hilbert--Mumford criterion \cite{king,mumford}] A point $v \in V$ is $\rho$-semistable if and only if we have $(\rho,\lambda) \geq 0$ for every 1-PS $\lambda : \GG_m \ra G$ such that $\lim_{t \ra 0 } \lambda(t) \cdot v$ exists. \end{thm} The unstable locus $V - V^{\rho-ss}$ can be stratified by instability types by assigning to each unstable point a conjugacy class of 1-PSs that is \lq most responsible' for the instability of this point and then stratifying by these conjugacy classes. To give a precise meaning to the term \lq most responsible', we use a norm $|| - ||$ on the set $\overline{X}_*(G)$ of conjugacy classes of 1-PSs of $G$. More precisely, we fix a maximal torus $T$ of $G$ and a Weyl invariant norm $|| - ||_T$ on $X_*(T)_\RR$; then, for $\lambda \in X_*(G)$, we let $|| \lambda ||:= || g\lambda g^{-1}||_T$ for $g \in G$ such that $g\lambda g^{-1} \in X_*(T)$. \begin{ex}\label{ex norms} (a) Let $T$ be the diagonal maximal torus in $G = \GL(n)$; then the norm associated to the dot product on $\RR^n \cong X_*(T)_\RR $ is invariant under the Weyl group $W = S_n$.
(b) For a product of general linear groups $G=\GL(n_1)\times\cdots \times\GL(n_r)$, we can use positive numbers $\alpha \in \NN^r$ to weight the norms $|| - ||$ for each factor constructed by part (a); that is, \[ || (\lambda_1,\dots , \lambda_r) ||_\alpha := \sum_{i=1}^r \alpha_i ||\lambda_i||.\] \end{ex} We fix a norm $|| - ||$ on $\overline{X}_*(G)$ such that $|| -||^2$ is $\ZZ$-valued. In the study of instability in projective GIT, Kempf used such a norm to define a notion of adapted 1-PSs \cite{kempf}. In the affine setting, we use an appropriately modified definition as given in \cite{hoskins_quivers}. For $v \in V$, we let \[ M^\rho_G(v) := \inf\left\{\frac{(\rho, \lambda)}{|| \lambda ||}:\:\text{1-PSs}\: \lambda \text{ of } G \text{ such that } \lim_{t \to 0} \lambda(t) \cdot v\: \text{exists}\right\}.\] \begin{defn} A 1-PS $\lambda$ is $\rho$-adapted to a $\rho$-unstable point $v$ if $\lim_{t \to 0} \lambda(t) \cdot v$ exists and \[ M^\rho_G(v) = \frac{(\rho,\lambda)}{|| \lambda ||}. \] We let $\wedge^\rho(v) $ denote the set of primitive 1-PSs which are $\rho$-adapted to $v$. Let $[\lambda]$ denote the conjugacy class of a 1-PS $\lambda$; then the Hesselink stratum associated to $[\lambda]$ is \[S_{[\lambda]}:=\{ v \in V - V^{\rho\text{-}\mathrm{ss}}: \wedge^\rho(v) \cap [\lambda] \neq \emptyset\}.\] We define a strict partial ordering $<$ on $\overline{X}_*(G)$ by $[\lambda] < [\lambda']$ if \[ \frac{(\rho,\lambda)}{||\lambda||} > \frac{(\rho,\lambda')}{||\lambda'||}. \] \end{defn} Let $V^\lambda_+ $ be the closed subvariety of $V$ consisting of points $v$ such that $\lim_{t \to 0} \lambda(t) \cdot v$ exists; then we have a natural retraction $p_\lambda : V^\lambda_+ \ra V^\lambda$ onto the $\lambda$-fixed locus. The following theorem describing the Hesselink strata appears in \cite{hoskins_quivers} for the case when $V$ is an affine space. \begin{thm}\label{hess strat thm} Let $G$ be a reductive group acting linearly on an affine variety $V$ with respect to a character $\rho$. Let $|| - ||$ be a norm on $\overline{X}_*(G)$; then there is a decomposition \[ V -V^{\rho\text{-}\mathrm{ss}} = \bigsqcup_{[\lambda] \in \cB} S_{[\lambda]} \] into finitely many disjoint $G$-invariant locally closed subvarieties of $V$. Moreover, we have \begin{enumerate} \item $S_{[\lambda]}= GS_\lambda$ where $S_{\lambda}:= \{ v: \lambda \in \wedge^\rho(v)\}$, \item $S_\lambda = p_\lambda^{-1}(Z_\lambda) $ where $ Z_{\lambda} := \{ v : \lambda \in \wedge^\rho(v) \cap G_v\} $, \item $Z_{\lambda}$ is the semistable subset for $G_\lambda:= \Stab_G(\lambda)$ acting on $V^{\lambda}$ with respect to the character $\rho_\lambda:= || \lambda||^2 \rho - (\rho,\lambda)\lambda^*$, where $\lambda^*$ is the $|| - ||$-dual character to $\lambda$, \item $\partial S_{[\lambda]} \cap S_{[\lambda']} \neq \emptyset$ only if $[\lambda] < [\lambda']$. \end{enumerate} \end{thm} \begin{proof} We deduce the result for a closed $G$-invariant subvariety $V$ of an affine space $W$ from the result for $W$ given in \cite{hoskins_quivers}. 
Since taking invariants for a reductive group $G$ is exact and $V \subset W$ is closed, it follows that $V^{\rho-ss} = V \cap W^{\rho-ss}$ and the Hesselink stratification of $V- V^{\rho-ss}$ is the intersection of $V- V^{\rho-ss}$ with the Hesselink stratification of $W - W^{\rho-ss}$; that is, \[ V = \bigsqcup_{[\lambda]} S_{[\lambda]}^V \quad \text{and} \quad W = \bigsqcup_{[\lambda]} S_{[\lambda]}^W, \quad \quad \text{where} \quad S_{[\lambda]}^V =V \cap S_{[\lambda]}^W .\] We use properties of the strata for $W$ to prove the analogous properties of the strata for $V$. \begin{enumerate} \item As $V$ is $G$-invariant, we have that $S_{[\lambda]}^V : =V \cap S_{[\lambda]}^W = G(V \cap S_\lambda^W)=G S_\lambda^V$. \item As $V\subset W$ is closed, we have that $S_\lambda^V=V \cap S_\lambda^W= p_\lambda^{-1}(V \cap Z_\lambda^W)=p_\lambda^{-1}(Z_\lambda^V)$. \item As $G_\lambda$ is reductive and $V \subset W$ is closed, we have that $Z_\lambda^V = V \cap Z_\lambda^W$ is the GIT semistable locus for $G_\lambda$ acting on $V^\lambda= V \cap W^\lambda$. \item As $V \subset W$ is closed, we have that $\emptyset \neq \partial S_{[\lambda]}^V \cap S_{[\lambda']}^V \subset \partial S_{[\lambda]}^W \cap S_{[\lambda']}^W $ only if $[\lambda] < [\lambda']$. \end{enumerate} This completes the proof of the theorem. \end{proof} By setting $S_{[0]}:= V^{\rho\text{-ss}}$, we obtain a Hesselink stratification of $V$ \begin{equation}\label{hess strat} V = \bigsqcup_{[\lambda]} S_{[\lambda]}. \end{equation} \subsection{Computing the Hesselink stratification}\label{sec comp Hess strat} The task of computing these stratifications is greatly simplified by Theorem \ref{hess strat thm}, as we can construct the strata from the limit sets $Z_\lambda$. Furthermore, we will see that, by fixing a maximal torus $T$ of $G$, we can determine the indices for the unstable strata from the $T$-weights of the action on $V$. \begin{defn} For $v \in V$, we let $W_v$ denote the set of $T$-weights of $v$. For each subset $W$ of $T$-weights, we consider an associated cone in $ X_*(T)_\RR = X_*(T) \otimes_\ZZ \RR$ \[ C_W := \bigcap_{\chi \in W} H_\chi \quad \quad \text{where} \: \: H_\chi:=\{ \lambda \in X_*(T)_\RR : (\lambda,\chi) \geq 0 \}. \] Let $\rho_T \in X^*(T)$ denote the restriction of $\rho$ to $T$; then a subset $W$ of the $T$-weights is called $\rho_T$-semistable if $C_W \subseteq H_{\rho_T}$ and otherwise we say $W$ is $\rho_T$-unstable. If $W$ is $\rho_T$-unstable, we let $\lambda_W$ be the unique primitive 1-PS in $C_W \cap X_*(T)$ for which $\frac{(\lambda,\rho)}{||\lambda||}$ is minimal (for the existence and uniqueness of this 1-PS, see \cite{hoskins_quivers} Lemma 2.13). \end{defn} If $W = \emptyset$, then $C_W = X_*(T)_\RR$ and $W$ is $\rho_T$-unstable for all non-trivial $\rho_T$ with $\lambda_W = - \rho_T$. \begin{prop}\label{prop strat for torus} Let $T =(\GG_m)^n$ act linearly on an affine variety $V$ with respect to a character $\rho : T \ra \GG_m$ and let $|| - ||$ be the norm associated to the dot product on $\ZZ^n$; then the Hesselink stratification for the torus $T$ is given by \[ V - V^{\rho-ss} = \bigsqcup_{W \in \cB} S_{\lambda_W}\] where $\cB = \{ W : W \text{ is } \rho \text{-unstable} \}$ and $S_{\lambda_W}= \{ v \in V : \: W = W_v \}$. Therefore, the stratification is determined by the $T$-weights and, moreover, $ v \in V$ is $\rho$-semistable if and only if its $T$-weight set $W_v$ is $\rho$-semistable (that is, $C_{W_v} \subseteq H_{\rho}$). 
\end{prop} \begin{proof} By construction, $C_{W_v}$ is the cone of (real) 1-PSs $\lambda$ such that $\lim_{t \ra 0} \lambda(t) \cdot v$ exists. By the Hilbert--Mumford criterion, $v$ is $\rho$-semistable if and only if $(\lambda,\rho) \geq 0$ for all $\lambda \in C_{W_v}$; that is, if and only if $C_{W_v} \subseteq H_{\rho}$. Therefore, $v$ is $\rho$-semistable if and only if $W_v$ is $\rho$-semistable. As the conjugation action is trivial, $[\lambda_W]=\lambda_W$ and every $\rho$-unstable point has a unique $\rho$-adapted primitive 1-PS. If $v$ is $\rho$-unstable, then $\lambda_{W_v}$ is $\rho$-adapted to $v$ by \cite{hoskins_quivers} Lemma 2.13. \end{proof} For a reductive group $G$, we fix a maximal torus $T$ of $G$ and a positive Weyl chamber. Let $\cB$ denote the set of 1-PSs $\lambda_W$ corresponding to elements in this positive Weyl chamber, where $W$ is a $\rho_T$-unstable set of weights. \begin{cor}\label{cor index set from torus weights} Let $G$ be a reductive group acting linearly on an affine variety $V$ with respect to a character $\rho : G \ra \GG_m$. Then the Hesselink stratification is given by \[ V - V^{\rho-ss} = \bigsqcup_{\lambda_W \in \cB} S_{[\lambda_W]}.\] \end{cor} \begin{proof} It is clear that the right hand side is contained in the left hand side. Conversely, suppose that $v \in V - V^{\rho-ss}$ and $\lambda$ is a primitive 1-PS of $G$ that is $\rho$-adapted to $v$. Then there exists $g \in G$ such that $\lambda':=g\lambda g^{-1} \in X_*(T)$ and, moreover, $\lambda'$ is $\rho$-adapted to $v':= g \cdot v$ for the $G$-action on $V$. As every 1-PS of $T$ is a 1-PS of $G$, it follows that $M^\rho_G(v') \leq M_T^{\rho_T}(v')$. Therefore, $\lambda'$ is $\rho_T$-adapted to $v'$ for the $T$-action on $V$ and $\lambda' = \lambda_W \in \cB$, where $W = W_{v'}$. In particular, we have that $v \in S_{[\lambda]}= S_{[\lambda_W]}$. \end{proof} This gives an algorithm to compute the Hesselink stratification: the index set $\cB$ is determined from the $T$-weights and, for each $[\lambda] \in \cB$, we compute $S_{[\lambda]}$ from $Z_\lambda$ using Theorem \ref{hess strat thm}. \begin{rmk} In general, the Hesselink stratifications for $G$ and its maximal torus $T$ are not easily comparable. The $\rho$-semistable locus for the $G$-action is contained in the $\rho$-semistable locus for the $T$-action, but the $G$-stratification does not refine the $T$-stratification. Often there are more $T$-strata, since 1-PSs of $T$ may be conjugate in $G$ but not in $T$. \end{rmk} \begin{ex}\label{ex grass} Let $G:=\GL_r$ act on $V:=\text{Mat}_{r \times n} \cong \mathbb{A}^{rn}$ by left multiplication, linearised with respect to the character $\rho := \det : G \ra \GG_m$. It is well known that the GIT quotient is \[ \text{Mat}_{r \times n}/\!/_{\det} \GL_r =\text{Gr}(r,n), \] the Grassmannian of $r$-planes in $\mathbb{A}^n$, and the GIT semistable locus is given by matrices of rank $r$. In this example, we show that the Hesselink stratification is given by rank. We fix the maximal torus $T \cong \GG_m^r$ of diagonal matrices in $G$ and use the dot product on $\ZZ^r \cong X_*(T)$ to define a norm $|| -||$ on conjugacy classes of 1-PSs. The weights of the $T$-action on $V$ are $\chi_1, \dots , \chi_r$ where $\chi_i$ denotes the $i^\text{th}$ standard basis vector in $\ZZ^r$ and the Weyl group $S_r$ acts transitively on the $T$-weights.
The restriction of the determinant character to $T$ is given by $\rho_T = (1, \dots , 1) \in \ZZ^r \cong X^*(T)$. As the Weyl group is the full permutation group on the set of $T$-weights, it follows that $\lambda_W$ and $\lambda_{W'}$ are conjugate under $S_r$ whenever $|W| = |W'|$. Hence, for $k =0, \dots, r$, we let \[W_k: = \{ \chi_1, \dots , \chi_k \}\] and note that $W_k$ is $\rho_T$-unstable if and only if $k < r$ with corresponding $\rho_T$-adapted 1-PS \[\begin{smallmatrix}\lambda_k := &(\underbrace{0, \dots, 0,} & \underbrace{-1, \dots, -1}). \\ & k & r-k \end{smallmatrix}\] By Corollary \ref{cor index set from torus weights}, the index set for the stratification is $\cB= \{ \lambda_k : 0 \leq k \leq r-1 \}$. We use Theorem \ref{hess strat thm} to calculate the unstable strata. Since \[ V^{\lambda_k}_+ = V^{\lambda_k} = \left\{ v = \left( \begin{array}{ccccc} v_{11} & v_{12} & \cdots & \cdots & v_{1n} \\ \vdots & \cdots & \cdots &\cdots &\vdots \\ v_{k1} & v_{k2} & \cdots &\cdots & v_{kn} \\ 0 & \cdots & \cdots & \cdots & 0 \\ \vdots & \cdots & \cdots & \cdots & \vdots \\ 0 & \cdots & \cdots & \cdots & 0 \end{array} \right) \in \text{Mat}_{r \times n} \right\} \cong \text{Mat}_{k \times n},\] we also have that $S_{\lambda_k} = Z_{\lambda_k}$. By definition, $Z_{\lambda_k}$ is the GIT semistable set for $G_{\lambda_k}\cong\GL_k \times \GL_{r-k}$ acting on $V^{\lambda_k}$ with respect to the character \[ \begin{array}{rcc}\rho_{\lambda_k} := || \lambda_k ||^2 \rho - (\lambda_k, \rho) \lambda_k = (r-k) \rho - (k-r) \lambda_k = &(\underbrace{r-k, \dots, r-k,} & \underbrace{0, \dots, 0}). \\ & k & r-k \end{array} \] As positive rescaling of the character does not alter the semistable locus, we can without loss of generality assume that $\rho_{\lambda_k}$ is the product of the determinant character on $\GL_k$ and the trivial character on $\GL_{r-k}$. Therefore, $Z_{\lambda_k}$ is the GIT semistable set for $\GL_k$ acting on $\text{Mat}_{k \times n} \cong V^{\lambda_k} $ with respect to $\det : \GL_k \ra \GG_m$; that is, $Z_{\lambda_k}$ is the set of matrices in $\text{Mat}_{k \times n}$ whose final $r-k$ rows are zero and whose top $k$ row vectors are linearly independent. Then, $S_{[\lambda_k]} = GS_{\lambda_k}$ is the locally closed subvariety of rank $k$ matrices and the Hesselink stratification is given by rank. \end{ex} \section{Stratifications for moduli of quiver representations}\label{sec quiver} Let $Q=(V,A,h,t)$ be a quiver with vertex set $V$, arrow set $A$ and head and tail maps $h,t : A \ra V$ giving the direction of the arrows. A $k$-representation of a quiver $Q=(V,A,h,t)$ is a tuple $W = (W_v,\phi_a)$ consisting of a $k$-vector space $W_v$ for each vertex $v$ and a linear map $\phi_a : W_{t(a)} \ra W_{h(a)}$ for each arrow $a$. The dimension vector of $W$ is $ \dim W = (\dim W_v) \in \NN^V$. \subsection{Semistable quiver representations} For quiver representations of a fixed dimension vector $d = (d_v) \in \NN^V$, King introduced a notion of semistability depending on a stability parameter $\theta= (\theta_v) \in \ZZ^V$ such that $\sum_{v\in V} \theta_v d_v = 0$. \begin{defn}[King, \cite{king}]\label{theta ss defn} A representation $W$ of $Q$ of dimension $d$ is $\theta$-semistable if for all subrepresentations $W' \subset W$ we have $\theta(W'):= \sum_v \theta_v \dim W_v' \geq 0$. \end{defn} We recall King's construction of moduli of $\theta$-semistable quiver representations \cite{king}. 
Let \[ \rep_d(Q) := \bigoplus_{a \in A} \text{Hom}(k^{d_{t(a)}},k^{d_{h(a)}}) \quad \text{and} \quad G_d(Q) := \prod_{v \in V} \GL(d_v,k);\] then $G_d(Q)$ acts on $\rep_d(Q)$ by conjugation. Since the diagonal subgroup $\GG_m \cong \Delta \subset G_d(Q)$, given by $t \in \GG_m \mapsto (tI_{d_v})_{v \in V} \in G_d(Q)$, acts trivially on $\rep_d(Q)$, we often consider the quotient group $\overline{G}_d(Q): = G_d(Q)/\Delta$. We note that every representation of $Q$ of dimension $d$ is isomorphic to an element of $\rep_d(Q)$ and, moreover, two representations in $\rep_d(Q)$ are isomorphic if and only if they lie in the same $\overline{G}_d(Q)$ orbit. Hence, the moduli stack $\cR ep_d(Q)$ of representations of $Q$ of dimension $d$ is isomorphic to the quotient stack of $\overline{G}_d(Q)$ acting on $\rep_d(Q)$: \[ \cR ep_d(Q) \cong [\rep_d(Q)/\overline{G}_d(Q)].\] The $G_d(Q)$-action on $\rep_d(Q)$ is linearised by $\rho_{\theta} :G_d(Q) \ra \GG_m$ where $\rho_\theta (g_v) := \Pi_v \det(g_v)^{\theta_v}$. \begin{thm}[King, \cite{king}] The moduli space of $\theta$-semistable quiver representations is given by \[ M^{\theta-ss}_d(Q):=\rep_d(Q)/\!/_{\rho_\theta} G_d(Q).\] \end{thm} In particular, King shows that the notion of $\theta$-semistability for quiver representations agrees with the notion of GIT semistability for $G_d(Q)$ acting on $\rep_d(Q)$ with respect to $\rho_\theta$. \begin{rmk} We can also consider moduli spaces of $\theta$-semistable quiver representations for quivers with relations. A path in a quiver is a sequence of arrows in the quiver $(a_1, \dots, a_n)$ such that $h(a_i) = t(a_{i+1})$ and a relation is a linear combination of paths all of which start at some common vertex $v_t$ and end at a common vertex $v_h$. Let $\cR$ be a set of relations on $Q$ and let $\rep_d(Q,\cR)$ be the closed subvariety of $\rep_d(Q)$ consisting of quiver representations that satisfy the relations $\cR$. The moduli space of $\rho$-semistable representations of $(Q,\cR)$ is a closed subvariety of the moduli space of $\rho$-semistable representations of $Q$: \[M^{\theta-ss}_d(Q,\cR):=\rep_d(Q,\cR)/\!/_{\rho_\theta}G_d(Q)\subset M^{\theta-ss}_d(Q).\] \end{rmk} \subsection{HN filtrations for quivers representations}\label{sec HN filt quiv} We fix a quiver $Q$, a dimension vector $d \in \NN^V$ and a stability parameter $\theta= (\theta_v) \in \ZZ^V$ such that $\sum_v \theta_v d_v = 0$. To define a Harder--Narasimhan filtration, we need a notion of semistability for quiver representations of all dimension vectors and for this we use a parameter $\alpha = (\alpha_v) \in \NN^V$. \begin{defn}\label{defn HN} A representation $W$ of $Q$ is $(\theta,\alpha)$-semistable if for all $0 \neq W' \subset W$, we have \[ \frac{\theta(W')}{\alpha(W')} \geq \frac{\theta(W)}{\alpha(W)} \] where $\alpha(W) := \sum_{v \in V} \alpha_v \dim W_v$. A Harder--Narasimhan (HN) filtration of $W$ (with respect to $\theta$ and $\alpha$) is a filtration $0 = W^{(0)} \subset W^{(1)} \subset \cdots \subset W^{(s)} =W$ by subrepresentations, such that the quotient representations $W_i := W^{(i)}/W^{(i-1)}$ are $(\theta,\alpha)$-semistable and \[ \frac{\theta(W_1)}{\alpha(W_1)} < \frac{\theta(W_2)}{\alpha(W_2)} <\cdots < \frac{\theta(W_s)}{\alpha(W_s)} .\] The Harder--Narasimhan type of $W$ (with respect to $\theta$ and $\alpha$) is $\gamma(W) := (\dim W_1, \dots , \dim W_s)$. \end{defn} If $W$ is a representation of dimension $d$, then $\theta(W) = 0$ and so $W$ is $(\theta,\alpha)$-semistable if and only if it is $\theta$-semistable. 
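To make Definition \ref{defn HN} concrete, we include an elementary (and standard) example; it is only an illustration and is not needed in the sequel. \begin{ex} Let $Q$ be the quiver with two vertices $1,2$ and a single arrow $a : 1 \ra 2$, and take $d = (1,1)$, $\theta = (-1,1)$ and $\alpha = (1,1)$. A representation $W = (k,k,\phi_a)$ of dimension $d$ is $\theta$-semistable if and only if $\phi_a \neq 0$: if $\phi_a = 0$, then $W' = (k,0)$ is a subrepresentation with $\theta(W') = -1 < 0$. For $\phi_a = 0$, the HN filtration of $W$ with respect to $(\theta,\alpha)$ is \[ 0 \subset (k,0) \subset W,\] whose subquotients $W_1 = (k,0)$ and $W_2 = (0,k)$ are $(\theta,\alpha)$-semistable (having no non-zero proper subrepresentations) and satisfy \[ \frac{\theta(W_1)}{\alpha(W_1)} = -1 < 1 = \frac{\theta(W_2)}{\alpha(W_2)},\] so the HN type of $W$ is $\gamma(W) = ((1,0),(0,1))$. \end{ex}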
The proof of the existence and uniqueness of the Harder--Narasimhan filtration of quiver representations is standard (for example, see \cite{reineke} for the case when $\alpha_v = 1$ for all $v$). By the existence and uniqueness of the HN filtration with respect to $(\theta,\alpha)$, we have a decomposition \[ \rep_d(Q) = \bigsqcup_{\gamma} R_{\gamma} \] where $R_{\gamma}$ denotes the subset of representations with HN type $\gamma$. Reineke \cite{reineke} proves that the $R_{\gamma}$ are locally closed subvarieties of $\rep_d(Q)$ (when $\alpha_v =1$ for all $v$; the general case is analogous). We refer to the above decomposition as the HN stratification of the representation space (with respect to $\theta$ and $\alpha$). As isomorphic quiver representations have the same HN type, each HN stratum $R_\gamma$ is invariant under the action of $\overline{G}_d(Q)$. Hence, we obtain a HN stratification of the moduli stack of quiver representations: \[\cR ep_d(Q)=\bigsqcup_{\gamma}\cR ep^{\gamma}_d(Q)\] where $\cR ep^{\gamma}_d(Q)\cong [R_{\gamma}/\overline{G}_d(Q)]$. We emphasise again that this stratification depends on both $\theta$ and $\alpha$. In particular, by varying the parameter $\alpha$, we obtain different HN stratifications. \begin{rmk}\label{rmk HN relns} For a quiver with relations $(Q,\cR)$, there is also a HN stratification of the representation space $\rep_d(Q,\cR)$. If $W$ is a representation of $Q$ that satisfies the relations $\cR$, then any subrepresentation of $W$ also satisfies these relations. Therefore, the HN strata of $\rep_d(Q,\cR)$ are the intersection of the HN strata in $\rep_d(Q)$ with $\rep_d(Q,\cR)$. \end{rmk} \subsection{The HN stratification is the Hesselink stratification} Let $T$ be the maximal torus of $G_d(Q)$ given by the product of the maximal tori $T_v \subset \GL(d_v)$ of diagonal matrices. For a rational 1-PS $\lambda \in X_*(G)_{\mathbb{Q}}$, there is a unique $C>0$ such that $C\lambda \in X_*(G)$ is primitive. \begin{defn} For a HN type $\gamma = (d_1, \dots ,d_s)$ of a quiver representation of dimension $d$, let $\lambda_\gamma$ be the unique primitive 1-PS associated to the rational 1-PS $\lambda'_\gamma = (\lambda'_{\gamma,v})$ of $T$ given by \[ \lambda'_{\gamma,v}(t) = \text{diag} (t^{r_1}, \dots , t^{r_1},t^{r_2}, \dots , t^{r_2}, \dots \dots, t^{r_s}, \dots , t^{r_s} )\] where the rational weight $r_i:= - \theta(d_i)/\alpha(d_i)$ appears $(d_i)_v$ times. \end{defn} \begin{rmk} For $\gamma \neq \gamma'$, the conjugacy classes of $\lambda_\gamma$ and $\lambda_{\gamma'}$ are distinct. \end{rmk} Let $|| - ||_\alpha$ be the norm on 1-PSs of $G_d(Q)$ associated to $\alpha \in \NN^V$ (cf. Example \ref{ex norms}). For a quiver without relations, the following result is \cite{hoskins_quivers} Theorem 5.5. We can deduce the case with relations from this result using Theorem \ref{hess strat thm} and Remark \ref{rmk HN relns}. \begin{thm}\label{quiver HN is Hess} Let $(Q,\cR)$ be a quiver with relations and $d \in \NN^V$ a dimension vector. For stability parameters $\theta \in \ZZ^V$ satisfying $\sum_{v \in V} \theta_v d_v = 0$ and $\alpha \in \NN^V$, we let \[\rep_d(Q,\cR) = \bigsqcup_{\gamma} R_{\gamma} \quad \quad \text{and} \quad \quad \rep_d(Q,\cR) = \bigsqcup_{[\lambda]} S_{[\lambda]} \] denote the HN stratification with respect to $(\theta,\alpha)$ and the Hesselink stratification with respect to $\rho_\theta$ and $||-||_\alpha$ respectively.
Then $R_\gamma = S_{[\lambda_\gamma]}$ and the stratifications coincide. \end{thm} The moduli stack admits a finite stratification by HN types or, equivalently, Hesselink strata \[ \cR ep_d(Q,\cR) = \bigsqcup_{\gamma} \cR ep_d^\gamma(Q,\cR) = \bigsqcup_{[\lambda]} [S_{[\lambda]}/\overline{G}_d(Q)].\] \section{Projective GIT and Hesselink stratifications} Let $G$ be a reductive group acting on a projective scheme $X$ with respect to an ample $G$-linearisation $L$. The projective GIT quotient \[X^{ss}(L) \ra X/\!/_L G:= \proj \bigoplus_{n \geq 0} H^0(X, L^{\otimes n})^G\] is a good quotient of the open subscheme $X^{ss}(L) $ of $ X$ of $L$-semistable points. For a 1-PS $\lambda$ of $G$ and $x \in X$, we let $\mu^L(x,\lambda) $ denote the weight of the $\GG_m$-action on the fibre of $L$ over the fixed point $\lim_{t \ra 0} \lambda(t) \cdot x$. \begin{thm}[Hilbert--Mumford criterion \cite{mumford}] A point $x$ is $L$-semistable if and only if we have $\mu^L(x,\lambda) \geq 0$ for all 1-PSs $\lambda : \GG_m \ra G$. \end{thm} Fix a norm $|| - ||$ on the set $\overline{X}_*(G)$ of conjugacy classes of 1-PSs such that $|| - ||^2$ is $\ZZ$-valued. \begin{defn}[Kempf, \cite{kempf}] For $x \in X$, let \[ M^L(x) := \inf\left\{\frac{\mu^L(x,\lambda)}{|| \lambda ||} : \:\text{1-PSs} \: \lambda \text{ of } G \right\}.\] A 1-PS $\lambda$ is $L$-adapted to an $L$-unstable point $x$ if \[ M^L(x) = \frac{\mu^L(x,\lambda)}{|| \lambda ||} . \] Let $\wedge^L(x) $ denote the set of primitive 1-PSs which are $L$-adapted to $x$. \end{defn} \begin{defn}[Hesselink, \cite{hesselink}] For a conjugacy class $[\lambda]$ of 1-PSs of $G$ and $ d \in \mathbb{Q}_{> 0}$, we let \[S_{[\lambda],d}:= \{x: M^L(x)=-d \: \text{ and } \wedge^L(x) \cap [\lambda] \neq \emptyset\}.\] \end{defn} Let $p_\lambda : X \ra X^\lambda$ denote the retraction onto the $\lambda$-fixed locus given by $x \mapsto \lim_{t \ra 0} \lambda(t) \cdot x$ and let $X_d^\lambda \subset X^\lambda$ be the union of components on which $\mu^L(-,\lambda)=-d||\lambda||$. Let $G_\lambda$ be the subgroup of elements of $G$ that commute with $\lambda$ and let $P(\lambda) $ be the parabolic subgroup of elements $g \in G$ such that $\lim_{t \ra 0} \lambda(t)g \lambda(t)^{-1}$ exists in $G$. \begin{thm}[cf. \cite{hesselink,kirwan,ness}]\label{hess thm proj} Let $G$ be a reductive group acting on a projective scheme $X$ with respect to an ample $G$-linearisation $L$ and let $|| - ||$ be a norm on conjugacy classes of 1-PSs of $G$. Then there is a Hesselink stratification \[ X -X^{ss}(L)= \bigsqcup_{([\lambda],d)} S_{[\lambda],d} \] into finitely many disjoint $G$-invariant locally closed subschemes of $X$. Moreover: \begin{enumerate} \item $S_{[\lambda],d} = GY^{\lambda}_d \cong G \times_{P(\lambda)} Y^\lambda_d$ where $Y^{\lambda}_d =\{ x \in X: M^L(x) = -d \: \text{ and } \: \lambda \in \wedge^L(x)\}$; \item $Y^\lambda_d = p_\lambda^{-1}(Z_d^\lambda)$ where $Z_d^\lambda= \{ x \in X^\lambda: M^L(x) = -d \: \text{ and } \: \lambda \in \wedge^L(x)\}$; \item $Z_d^\lambda$ is the GIT-semistable locus for the reductive group $G_\lambda$ acting on $X_d^\lambda$ with respect to a modified linearisation $L_\beta$; \item $\partial S_{[\lambda],d} \cap S_{{[\lambda'],d'}} \neq \emptyset$ only if $d > d'$.
\end{enumerate} \end{thm} Let $S_0:= X^{ss}(L)$; then we obtain a Hesselink stratification of $X$ \[ X = \bigsqcup_{\beta \in \cB} S_\beta\] where the lowest stratum is $S_0 $ and the higher strata are indexed by pairs $\beta = ([\lambda],d)$ as above. We note that if $\beta = ([\lambda],d)$ indexes a non-empty stratum, then $d ||\lambda|| \in \ZZ$ as $\mu(-,\lambda)$ is $\ZZ$-valued. \begin{rmk}\label{new indices for H strat} An unstable Hesselink index $\beta =([\lambda],d)$ determines a rational 1-PS \[\lambda_\beta := \frac{d}{||\lambda||} \lambda\] (where $d/||\lambda||$ is rational, as $||\lambda||^2$ and $d||\lambda||$ are both integral). In fact, we can recover $([\lambda],d)$ from $\lambda_\beta$, as $d = ||\lambda_\beta||$ and $\lambda $ is the unique primitive 1-PS lying on the ray spanned by $\lambda_\beta$. \end{rmk} \begin{rmk}\label{rmk on hess strat} The additional index $d$ is redundant in the affine case, as it can be determined from the character $\rho$ and 1-PS $\lambda$ (more precisely, we have $d = - (\rho,\lambda)/||\lambda||$). Moreover, projective Hesselink stratifications share the following properties with affine Hesselink stratifications. \begin{enumerate} \item If $X \subset \PP^n$ and $G$ acts via a representation $\rho : G \ra \GL_{n+1}$, then the unstable indices can be determined from the weights of a maximal torus of $G$; for details, see \cite{kirwan}. \item For a closed $G$-invariant subscheme $X' $ of $X$, the Hesselink stratification of $X'$ is the intersection of the Hesselink stratification of $X$ with $X'$. \item The Hesselink strata may not be connected; however, if $Z_{d,i}^\lambda$ denote the connected components of $Z_d^\lambda$, then $S_{[\lambda],d,i} := G p_\lambda^{-1}(Z_{d,i}^\lambda)$ are the connected components of $S_{[\lambda],d}.$ \end{enumerate} \end{rmk} \section{Stratifications for moduli of sheaves}\label{sec sheaves} Let $(X, \cO_X(1))$ be a projective scheme of finite type over $k$ with a very ample line bundle. \subsection{Preliminaries} For a sheaf $\mathcal{E}$ over $X$, we let $\mathcal{E}(n) := \mathcal{E} \otimes \cO_X(n)$ and let \[ P(\mathcal{E},n) = \chi(\mathcal{E}(n)) = \sum_{i \geq 0} (-1)^i \dim H^i(\mathcal{E}(n))\] be the Hilbert polynomial of $\mathcal{E}$ with respect to $\cO_X(1)$. We recall that the degree of $P(\mathcal{E})$ is equal to the dimension of $\mathcal{E}$ (i.e., the dimension of the support of $\mathcal{E}$) and a sheaf $\mathcal{E}$ is pure if all its non-zero subsheaves have the same dimension as $\mathcal{E}$.
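For orientation, we recall a standard example of these notions (it plays no role in what follows). On $X = \PP^1$ with the usual polarisation $\cO_X(1)$, the Hilbert polynomial of the structure sheaf is $P(\cO_{\PP^1},n) = n+1$, of degree $1 = \dim \cO_{\PP^1}$, while a skyscraper sheaf $k(p)$ supported at a point $p$ has constant Hilbert polynomial $P(k(p),n) = 1$. The sheaf $\cO_{\PP^1} \oplus k(p)$ has Hilbert polynomial $n+2$ and dimension $1$, but it is not pure, as it contains the $0$-dimensional subsheaf $k(p)$.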
For a non-zero sheaf $\mathcal{E}$, the leading coefficient $r(\mathcal{E})$ of $P(\mathcal{E})$ is positive and we define the reduced Hilbert polynomial of $\mathcal{E}$ to be $P^{\red}(\mathcal{E}):=P(\mathcal{E})/r(\mathcal{E})$. \begin{defn}[Castelnuovo--Mumford \cite{mumfordcurves}] A sheaf $\mathcal{E}$ over $X$ is $n$-regular if \[ H^{i}(\mathcal{E}(n-i)) = 0 \quad \text{ for all } \: i > 0. \] \end{defn} This is an open condition and, by Serre's Vanishing Theorem, any sheaf is $n$-regular for $n > \!> 0$. Furthermore, any bounded family of sheaves is $n$-regular for $n > \!> 0$. \begin{lemma}[cf. \cite{mumfordcurves}] For an $n$-regular sheaf $\mathcal{E}$ over $X$, we have the following results. \begin{enumerate} \renewcommand{\labelenumi}{\roman{enumi})} \item $\mathcal{E}$ is $m$-regular for all $m \geq n$. \item $\mathcal{E}(n)$ is globally generated with vanishing higher cohomology, i.e., the evaluation map $H^0(\mathcal{E}(n)) \otimes \cO_X(-n) \ra \mathcal{E}$ is surjective and $H^i(\mathcal{E}(n)) = 0$ for $i > 0$. \item The natural multiplication maps $H^0(\mathcal{E}(n)) \otimes H^0(\cO_X(m-n)) \ra H^0(\mathcal{E}(m))$ are surjective for all $m \geq n$. \end{enumerate} If we have an exact sequence of sheaves over $X$ \[ 0 \ra \cF' \ra \cF \ra \cF'' \ra 0,\] such that $\cF'$ and $\cF''$ are both $n$-regular, then $\cF$ is also $n$-regular. \end{lemma} \subsection{Construction of the moduli space of semistable sheaves}\label{sec simp constr} In this section, we outline Simpson's construction \cite{simpson} of the moduli space of semistable sheaves on $(X,\cO_X(1))$ with Hilbert polynomial $P$. We define an ordering $ \leq $ on rational polynomials in one variable by $P \leq Q$ if $P(x) \leq Q(x)$ for all sufficiently large $x$. For polynomials of a fixed degree with positive leading coefficient, this is equivalent to the lexicographic ordering on the coefficients. \begin{defn}\label{defn ss} A pure sheaf $\cF$ on $X$ is semistable (in the sense of Gieseker) if for all non-zero subsheaves $\mathcal{E} \subset \cF$, we have $P^{\red}(\mathcal{E}) \leq P^{\red}(\cF).$ \end{defn} By the Simpson--Le Potier bounds (cf. \cite{simpson} Theorem 1.1), the set of semistable sheaves with Hilbert polynomial $P$ is bounded; therefore, for $n >\!> 0$, all semistable sheaves with Hilbert polynomial $P$ are $n$-regular.
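Before turning to the construction, we illustrate Definition \ref{defn ss} with a standard example (again, it is not needed in the sequel). On $X = \PP^1$, the pure sheaf $\cF = \cO_{\PP^1}(a) \oplus \cO_{\PP^1}(b)$ with $a < b$ is not semistable, since the subsheaf $\mathcal{E} = \cO_{\PP^1}(b)$ satisfies \[ P^{\red}(\mathcal{E}) = n + b + 1 > n + \tfrac{a+b}{2} + 1 = P^{\red}(\cF),\] whereas $\cO_{\PP^1}(a) \oplus \cO_{\PP^1}(a)$ is semistable: any non-zero subsheaf is either a line bundle $\cO_{\PP^1}(c)$ with $c \leq a$, or has rank $2$ and degree at most $2a$, and in both cases its reduced Hilbert polynomial is at most $n + a + 1$.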
Let $V_n := k^{P(n)}$ be the trivial $P(n)$-dimensional vector space and let $\quot(V_n \otimes \cO_X(-n), P)$ denote the Quot scheme parametrising quotient sheaves of $V_n \otimes \cO_X(-n)$ with Hilbert polynomial $P$. Let $Q_n$ denote the open subscheme of this Quot scheme consisting of quotients $q : V_n \otimes \cO_X(-n) \onto \mathcal{E}$ such that $H^0(q(n))$ is an isomorphism, let $Q_n^{\pur} \subset Q_n$ denote the open subscheme consisting of quotient sheaves that are pure and let $R_n$ denote the closure of $Q_n^{\pur}$ in the Quot scheme. Let $\mathcal{E}$ be an $n$-regular sheaf; then $\mathcal{E}(n)$ is globally generated and $\dim H^0(\mathcal{E}(n)) = P(\mathcal{E},n)$. In particular, every $n$-regular sheaf on $X$ with Hilbert polynomial $P$ can be represented as a quotient sheaf in $Q_n$ by choosing an isomorphism $H^0(\mathcal{E}(n)) \cong V_n$ and using the surjective evaluation map $H^0(\mathcal{E}(n) ) \otimes \cO_X(-n) \onto \mathcal{E}$. The group $G_n:=\GL(V_n)$ acts on $V_n$ and $Q_n$ such that the $G_n$-orbits in $Q_n$ are in bijection with isomorphism classes of sheaves in $Q_n$. As the diagonal $\GG_m$ acts trivially, we often consider the action of $\SL(V_n)$. The action is linearised using Grothendieck's embedding of the Quot scheme into a Grassmannian: the corresponding line bundle is given by \[ L_{n,m} := \det(\pi_{*}(\mathcal{U}_n \otimes \pi_X^*\cO_X(m))) \] for $m >\!> n$, where $\mathcal{U}_n$ denotes the universal quotient sheaf on the Quot scheme. \begin{thm}[Simpson \cite{simpson}, Theorem 1.21]\label{simthm} For $m>\!>n>\!>0$, the moduli space of semistable sheaves on $X$ with Hilbert polynomial $P$ is the GIT quotient of $R_n$; that is, \[ M^\text{ss}(X,P)=R_n /\!/_{L_{n,m}} \SL(V_n).\] \end{thm} In his proof, Simpson shows, for $m>\!>n>\!>0$ (depending on $X$ and $P$), that an element $q : V_n \otimes \cO_X(-n) \onto \mathcal{E}$ in $R_n$ is GIT semistable if and only if $q$ belongs to the open subscheme $Q_n^{ss}$ of $Q_n$ consisting of quotient sheaves $q: V_n \otimes \cO_X(-n) \onto \mathcal{E}$ such that $\mathcal{E}$ is a semistable sheaf. \subsection{The Hesselink stratification of the Quot scheme}\label{sec Hess quot} Associated to the action of $\SL(V_n)$ on $\quot(V_n \otimes \cO_X(-n),P)$ with respect to $L_{n,m}$ and the norm $||-||$ coming from the dot product on the diagonal torus $T$ (cf. Example \ref{ex norms}), there is a Hesselink stratification \[ \quot(V_n \otimes \cO_X(-n),P) = \bigsqcup_{\beta \in \cB} S_\beta. \] In this section, for fixed $n$, we describe this stratification of $\quot := \quot(V_n \otimes \cO_X(-n),P)$; this completes the partial description of the Hesselink stratification on $R_n$ given in \cite{hoskinskirwan}.
By Remark \ref{new indices for H strat}, the unstable indices $\beta$ can equivalently be viewed as conjugacy classes of rational 1-PSs $[\lambda_\beta]$ of $\SL(V_n)$. We take a representative $\lambda_\beta \in X_*(T)$ and, by using the Weyl group action, assume that the weights are decreasing; that is, \[ \lambda_\beta(t) = \diag (t^{r_1}, \dots , t^{r_1},t^{r_2}, \dots , t^{r_2} , \dots , t^{r_s} ,\dots ,t^{r_s}) \] where $r_i$ are strictly decreasing rational numbers that occur with multiplicities $l_i$. Then $\beta$ is equivalent to the decreasing sequence of rational weights $r(\beta): = (r_1, \dots , r_s)$ and multiplicities $l(\beta):=(l_1, \dots ,l_s)$. Since $\lambda_\beta $ is a rational 1-PSs of $\SL(V_n) $, we note that \[ \sum_{i=1}^s l_i = \dim V_n =P(n)\quad\quad \text{and} \quad\quad \sum_{i=1}^s r_il_i = 0.\] We first describe the fixed locus for $\lambda_\beta$. The 1-PS $\lambda_\beta$ determines a filtration of $V:=V_n$ \[ 0 = V^{(0)} \subset V^{(1)} \subset \cdots \subset V^{(s)} =V, \quad \text{where} \:\: V^{(i)}:=k^{l_1 + \cdots +l_i} \] with $V^i := V^{(i)}/V^{(i-1)}$ of dimension $l_i$. For a quotient $q: V \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ in $\quot$, we let $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}:=q(V^{(i)}\otimes \cO_X(-n))$; then we have exact sequences of quotient sheaves \[\xymatrix{ 0 \ar[r] & V^{(i-1)} \otimes \cO_X(-n) \ar@{->>}[d]^{q^{(i-1)}} \ar[r] & V^{(i)} \otimes \cO_X(-n)\ar@{->>}[d]^{q^{(i)}} \ar[r] & V^i \otimes \cO_X(-n) \ar@{->>}[d]^{q^{i}} \ar[r] & 0 \\ 0 \ar[r] & \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)} \ar[r] & \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)} \ar[r] & \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^i:= \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}/\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)} \ar[r] & 0.}\] We fix an isomorphism $V \cong \oplus_{i=1}^s V^i$; then via this isomorphism we have $\lim_{t \ra 0} \lambda_\beta(t) \cdot q = \oplus_{i=1}^s q_i $ by \cite{huybrechts} Lemma 4.4.3. \begin{lemma}\label{lemma on fixed locus} For $\beta=([\lambda],d)$ with $r(\beta)=(r_1, \dots, r_s)$ and $l(\beta)=(l_1, \dots, l_s)$ as above, we have \[ \quot^{\lambda} \: \: = \bigsqcup_{(P_1, \dots ,P_s) \: : \: \sum_{i=1}^s P_i = P } \: \: \: \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i).\] and \[\quot^{\lambda}_d \:\:\cong \bigsqcup_{\begin{smallmatrix}(P_1, \dots ,P_s)\: : \: \sum_{i=1}^s P_i = P, \\ \sum_{i=1}^s r_i P_i(m) + r_i^2 l_i = 0 \end{smallmatrix}} \: \: \: \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i).\] \end{lemma} \begin{proof} The description of the $\lambda$-fixed locus follows from the discussion above. By definition, $\quot^{\lambda}_d$ is the union of fixed locus components on which $\mu(-,\lambda)=-d ||\lambda||$ or, equivalently, \[ \mu(-,\lambda_\beta) = - ||\lambda_\beta||^2 = - \sum_{i=1}^s r_i^2 l_i,\] since $\lambda_\beta = (d/|| \lambda||) \lambda$. For a $\lambda$-fixed quotient sheaf $q : V \otimes \cO_X(-n) \onto \oplus_{i=1}^s \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^i$, we have that $ \mu(q,\lambda_\beta) = \sum_{i=1}^s r_i P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^i,m)$ by \cite{huybrechts} Lemma 4.4.4 and so this completes the proof. 
\end{proof} By Theorem \ref{hess thm proj}, for $\beta = ([\lambda],d)$ as above, we can construct the stratum $S_\beta$ from the limit set $Z_d^\lambda$ which is the GIT semistable set for the subgroup \[ G_\lambda \cong \left\{ (g_1, \dots , g_s) \in \prod_{i=1}^s \GL({V^i}) : \prod_{i=1}^s \det g_i = 1 \right\}\] acting on $\quot^{\lambda}_d$ with respect to a modified linearisation $L_\beta$, obtained by twisting the original linearisation $L = L_{n,m}$ by the (rational) character $\chi_\beta : G_\lambda \ra \GG_m$ where \[ \chi_\beta(g_1, \dots , g_s) =\prod_{i=1}^s (\det g_i)^{r_i}.\] \begin{prop}\label{Zdlambda} Let $\beta=([\lambda],d)$ with $r(\beta)=(r_1, \dots, r_s)$ and $l(\beta)=(l_1, \dots, l_s)$ as above; then \[ Z_d^\lambda\: \: \cong \bigsqcup_{(P_1, \dots ,P_s) } \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i)^{\SL(V^i)-ss}(L_{n,m})\] where this is a union over tuples $(P_1, \dots ,P_s)$ such that $\sum_{i=1}^s P_i = P$ and \begin{equation}\label{num cond} r_i = \frac{P(m)}{P(n)} - \frac{P_i(m)}{l_i} \quad \quad \text{for } \: i = 1, \dots , s. \end{equation} \end{prop} \begin{proof} The centre $ Z(G_\lambda) \cong \{ (t_1, \dots , t_s) \in (\GG_m)^s : \Pi_{i=1}^s t_i^{l_i} = 1\}$ acts trivially on $\quot^{\lambda} $ and acts on the fibre of $L$ over $q \in \quot^\lambda$, lying in a component indexed by $(P_1, \dots , P_s)$, by a character \[ \chi_q : Z(G_\lambda) \ra \GG_m \quad \text{ given by } \quad \chi_{q}(t_1, \dots , t_s)= \prod_{i=1}^s t_i^{P_i(m)}. \] Therefore, $Z(G_\lambda)$ acts on the fibre of $L_\beta$ over $q$ by $\chi_q \chi_\beta(t_1, \dots , t_s) = \prod_{i=1}^s t_i^{P_i(m) +r_i l_i}$. By the Hilbert--Mumford criterion, $q$ is unstable for the action of $G_\lambda$ with respect to $L_\beta$ unless $\chi_q \chi_\beta$ is trivial; i.e., there is a constant $C$ such that $ C l_i = P_i(m) + r_i l_i$, for $i=1, \dots , s$. In this case \[ C P(n) =\sum_{i=1}^s C l_i = \sum_{i=1}^s P_i(m) + r_i l_i = P(m)\] and we see that the conditions given at (\ref{num cond}) are necessary for $q$ to belong to $Z_d^\lambda$. We suppose that $(P_1, \dots ,P_s)$ is a tuple satisfying $\sum_{i=1}^s P_i = P$ and the conditions given at (\ref{num cond}). Since $G_\lambda$ is, modulo a finite group, $\Pi_{i=1}^s \SL(V^i) \times Z(G_\lambda)$ and $Z(G_\lambda)$ acts on both \[ \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i)\] and $L_\beta$ trivially, the semistable locus for $G_\lambda$ is equal to the semistable locus for $\Pi_{i=1}^s \SL(V^i)$. As $\Pi_{i=1}^s \SL(V^i)$-linearisations, we have that $L_\beta=L$. It then follows that \[ \left(\prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i) \right)^{\Pi_{i=1}^s \SL(V^i)-ss}(L) = \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i)^{\SL(V^i)-ss}(L)\] (for example, see \cite{hoskinskirwan} Lemma 6.6) which completes the proof of the proposition. \end{proof} The following corollary is an immediate consequence of Theorem \ref{hess thm proj}. 
\begin{cor}\label{cor Hess quot} For $\beta$ as above, the scheme $S_\beta$ parametrises quotients $q : V \otimes \cO_X(-n) \onto \mathcal{E}$ with a filtration $0 = W^{(0)} \subset W^{(1)} \subset \cdots \subset W^{(s)} =V$ such that, for $ i =1 , \dots , s$, we have \begin{enumerate} \item $\dim W^i = l_i$, where $W^{i}:=W^{(i)}/W^{(i-1)}$, \item $P(n)P(\mathcal{E}^i,m)=l_i(P(m) - r_iP(n))$, and \item the quotient sheaf $q^i : W^i \otimes \cO_X(-n) \onto \mathcal{E}^i$ is $\SL(W^i)$-semistable with respect to $L_{n,m}$, \end{enumerate} where $\mathcal{E}^{(i)}:=q(W^{(i)} \otimes \cO_X(-n))$ and $\mathcal{E}^i:=\mathcal{E}^{(i)}/\mathcal{E}^{(i-1)}$. Furthermore, $S_\beta = \SL(V)Y_d^\lambda$ where $Y_d^\lambda$ parametrises quotients $q$ with a filtration $W^\bullet$ as above such that $W^\bullet = V^\bullet$. \end{cor} \begin{rmk}\label{refine Hess} We see from the above description of the Hesselink strata that the strata are not always connected (cf. Remark \ref{rmk on hess strat}) and it is natural to further decompose the limit sets $Z_d^\lambda$ as \[ Z_d^\lambda = \bigsqcup_{\underline{P} \in \cC_\beta} Z_{\underline{P}} \quad \quad \text{where } \: Z_{\underline{P}} \cong \prod_{i=1}^s \quot(V^i \otimes \cO_X(-n),P_i)^{\SL(V^i)-ss}(L_{n,m}).\] Here $\cC_\beta$ is the set of tuples $\underline{P}=(P_1, \dots ,P_s)$ such that $\sum_{i=1}^s P_i = P$, the conditions (\ref{num cond}) hold and $Z_{\underline{P}}$ is non-empty. For $\underline{P}=(P_1, \dots ,P_s) \in \cC_\beta$, we note that \begin{equation}\label{cond on tuple} \frac{P_1(n)}{P_1(m)} > \cdots > \frac{P_s(n)}{P_s(m)}. \end{equation} Then $S_\beta$ is a disjoint union of the schemes $S_{\underline{P}}:= \SL(V) p_\lambda^{-1}(Z_{\underline{P}})$ over $\underline{P} \in \cC_\beta$ and we obtain a refinement of the Hesselink stratification: \[ \quot(V_n \otimes \cO_X(-n),P)= \bigsqcup_{\underline{P}} S_{\underline{P}}.\] \end{rmk} \subsection{HN filtrations for coherent sheaves} In this section, we describe a canonical destabilising filtration for each coherent sheaf, known as its Harder--Narasimhan (HN) filtration \cite{hn}. \begin{defn}\label{def HN usual} A pure HN filtration of a sheaf $\cF$ is a filtration by subsheaves \[ 0 = \cF^{(0)} \subset \cF^{(1)} \subset \cdots \subset \cF^{(s)} = \cF \] such that $\cF_i := \cF^{(i)}/\cF^{(i-1)}$ are semistable and $ P^{\red}(\cF_1) > P^{\red}(\cF_2) > \cdots > P^{\red}(\cF_s)$. \end{defn} By \cite{huybrechts} Theorem 1.3.4, every pure sheaf has a unique pure HN filtration. To define HN filtrations for non-pure sheaves, we need an alternative definition of semistability that does not use reduced Hilbert polynomials, as every non-pure sheaf $\mathcal{E}$ has a non-zero subsheaf whose Hilbert polynomial has degree strictly less than that of $\mathcal{E}$. For this, we use an extended notion of semistability due to Rudakov \cite{rudakov}.
We define a partial ordering $\preccurlyeq$ on $\mathbb{Q}[x]$ by \[ P \preccurlyeq Q \iff \frac{P(n)}{P(m)} \leq \frac{Q(n)}{Q(m)} \quad \quad \text{for } \: m >\!> n >\!> 0\] and similarly define a strict partial order $\prec$ by replacing $\leq$ with $<$. This ordering allows us to compare polynomials with positive leading coefficient of different degrees, in such a way that polynomials of lower degree are bigger with respect to this ordering. \begin{rmk} Rudakov formulated this preordering using the coefficients of the polynomials: for rational polynomials $P(x) = p_d x^d + \cdots + p_0$ and $Q(x) = q_e x^e + \cdots +q_0$, let \[ \Lambda(P,Q):=(\lambda_{f,f-1}, \dots , \lambda_{f,0}, \lambda_{f-1,f-2}, \dots , \lambda_{f-1,0}, \dots , \lambda_{1,0}) \quad \text{where} \quad \lambda_{i,j}:=p_iq_j - q_ip_j\] and $f := \max(d,e)$. We write $\Lambda(P,Q)>0$ if the first non-zero $\lambda_{i,j}$ appearing in $\Lambda(P,Q)$ is positive. Then $ P \preccurlyeq Q$ is equivalent to $ \Lambda(P,Q) \geq 0 $. \end{rmk} \begin{defn}\label{def rud ss} A sheaf $\cF$ is semistable if $P(\mathcal{E}) \preccurlyeq P(\cF)$ for all non-zero subsheaves $\mathcal{E} \subset \cF$. \end{defn} This definition of semistability implies purity, as polynomials of smaller degree are bigger with respect to $\preccurlyeq$. Moreover, for Hilbert polynomials $P(\mathcal{E})$ and $P(\cF)$ of the same degree, we have $P(\mathcal{E}) \preccurlyeq P(\cF)$ if and only if $P^{\red}(\mathcal{E}) \leq P^{\red}(\cF)$. Thus, a sheaf is semistable in the sense of Definition \ref{def rud ss} if and only if it is semistable in the sense of Definition \ref{defn ss}. \begin{defn}\label{def HN gen} A HN filtration of a sheaf $\mathcal{E}$ is a filtration by subsheaves \[ 0 = \mathcal{E}^{(0)} \subset \mathcal{E}^{(1)} \subset \cdots \subset \mathcal{E}^{(s)} = \mathcal{E} \] such that $\mathcal{E}_i := \mathcal{E}^{(i)}/\mathcal{E}^{(i-1)}$ are semistable and $ {P(\mathcal{E}_1)} \succ {P(\mathcal{E}_2)} \succ \cdots \succ {P(\mathcal{E}_s)}$. \end{defn} Using this definition of HN filtration, every coherent sheaf has a unique HN filtration (cf. \cite{rudakov}, Corollary 28). We can construct the HN filtration of a sheaf from its torsion filtration and the pure HN filtrations of the subquotients in the torsion filtration as follows.
\begin{prop}\label{prop HN gen} Let $0 \subset T^{(0)}(\mathcal{E}) \subset \cdots \subset T^{(d)}(\mathcal{E}) = \mathcal{E}$ be the torsion filtration of $\mathcal{E}$ and let \begin{equation}\label{HN tor succ} 0 = \cF^{(0)}_i \subset \cF^{(1)}_i \subset \cdots \subset \cF^{(s_i)}_i = T_i(\mathcal{E}) :=T^{(i)}(\mathcal{E})/T^{(i-1)}(\mathcal{E}) \end{equation} be the pure HN filtrations of the subquotients in the torsion filtration. Then $\mathcal{E}$ has HN filtration \[0 = \mathcal{E}_0^{(0)} \subset \mathcal{E}_0^{(1)} \subset \mathcal{E}_1^{(1)} \subset \cdots \subset \mathcal{E}_1^{(s_1)} \subset \cdots \subset \mathcal{E}_d^{(1)} \subset \cdots \subset \mathcal{E}_d^{(s_d)} =\mathcal{E} \] where $\mathcal{E}^{(j)}_i$ is the preimage of $\cF^{(j)}_i$ under the quotient map $T^{(i)}(\mathcal{E}) \ra T_i(\mathcal{E})$. \end{prop} \begin{proof} As the HN filtration is unique, it suffices to check that the subquotients are semistable with decreasing Hilbert polynomials for $\prec$. First, we note that $\mathcal{E}^{(0)}_i = T^{(i-1)}(\mathcal{E})= \mathcal{E}^{(s_{i-1})}_{i-1}$ and \[ \mathcal{E}_i^j:=\mathcal{E}_i^{(j)}/\mathcal{E}_i^{(j-1)} \cong \cF_i^{(j)}/\cF_i^{(j-1)}=:\cF_i^{j}.\] Since (\ref{HN tor succ}) is the pure HN filtration of $T_i(\mathcal{E})$, we have inequalities $P^{\red}(\cF_i^1)> \cdots > P^{\red}(\cF_i^{s_i})$ and the subquotients $\cF_i^{j}$ are semistable. Moreover, it follows that \[ P(\mathcal{E}_0^1) \succ P(\mathcal{E}_1^1) \succ \cdots \succ P(\mathcal{E}_1^{s_1}) \succ \cdots \succ P(\mathcal{E}_d^1) \succ \cdots \succ P(\mathcal{E}_d^{s_d}), \] as $\deg P(\cF_i^j) =i$ and polynomials of lower degree are bigger with respect to this ordering.
\end{proof} \begin{defn} Let $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ be a sheaf with HN filtration $ 0 = \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(0)} \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(1)} \subset \cdots \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(s)} = \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H $; then the HN type of $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ is the tuple $\tau(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H):=(P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_{1}),\dots ,P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_{s}))$ where $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i := \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}/\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)}$. We say $\tau = (P_1,\dots, P_s)$ is a pure HN type if all polynomials $P_i$ have the same degree. \end{defn} \subsection{The HN stratification on the stack of coherent sheaves} Let $\cC oh_{P}(X)$ denote the stack of sheaves on $X$ with Hilbert polynomial $P$; this is an Artin stack such that \begin{equation}\label{coh as union} \cC oh_{P}(X) \cong \bigcup_{n} [Q_n^o/G_n] \end{equation} where $G_n = \GL(V_n)$ and $Q_n^o$ is the open subscheme of $\quot(V_n \otimes \cO_X(-n),P)$ consisting of quotient sheaves $q : V_n \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ such that $H^0(q(n))$ is an isomorphism and $H^i(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n)) = 0 $ for $i > 0$ (cf. \cite{LMB} Theorem 4.6.2.1). Let \[ \cH_{P}:= \left\{ \tau= (P_1,\dots, P_s) : \: P=\sum_{i=1}^s P_i \: \text{and} \: {P_1} \succ {P_2} \succ \cdots \succ {P_s} \right\}\] be the set of HN types of sheaves with Hilbert polynomial $P$. For a HN type $\tau= (P_1,\dots, P_s)$, we define the $(n,m)$th Shatz polytope $\Gamma(\tau,n,m)$ to be the union of the line segments joining $x_k: = (\sum_{j=1}^k P_j(m), \sum_{j=1}^k P_j(n))$ to $x_{k+1}$ for $ k =0, \dots , s-1$. We define a partial order $\leq$ on the set of HN types $\cH_P$ by $\tau \leq \tau'$ if $\Gamma(\tau,n,m) $ lies above $\Gamma(\tau',n,m) $ for $m >\! > n >\!> 0$. \begin{thm}[Shatz \cite{shatz}; see also \cite{nitsureshn} Theorem 5] Let $\cF$ be a family of sheaves on $X$ with Hilbert polynomial $P$ parametrised by a scheme $S$; then the HN type function $ S \ra \cH_P$ given by $s \mapsto \tau(\cF_s)$ is upper semi-continuous. Therefore, \begin{enumerate} \item $S_{> \tau}= \{ s \in S : \tau(\cF_s) > \tau \}$ is closed in $S$, \item $S_\tau= \{ s \in S : \tau(\cF_s)= \tau \}$ is locally closed in $S$, \end{enumerate} and, moreover, there is a finite Shatz stratification of $S$ into disjoint subschemes $S_\tau$ such that \[\overline{S_\tau} \subset \bigsqcup_{\tau' \geq \tau} S_{\tau'}.\] \end{thm} The universal quotient sheaf $\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_n$ over $X \times Q_n^o$ is a family of sheaves on $X$ with Hilbert polynomial $P$ parametrised by $Q_n^o$; therefore, we have an associated Shatz stratification \begin{equation}\label{n shatz strat of quot} Q_n^o = \bigsqcup_\tau Q_{n,\tau}. 
\end{equation} As the Shatz strata $Q_{n,\tau}$ are $G_n$-invariant, this stratification descends to the stack quotient \[ [Q_n^o/G_n] = \bigsqcup_\tau [Q_{n,\tau}/G_n].\] From the description (\ref{coh as union}) of $\cC oh_P(X)$, we have the following corollary. \begin{cor}\label{Shatz strat} There is a HN stratification on the stack of coherent sheaves \[\cC oh_{P}(X) = \bigsqcup_{\tau \in \cH_P} \cC oh_{P}^\tau(X) \] into disjoint locally closed substacks $\cC oh_{P}^\tau(X)$ such that $ \overline{\cC oh_{P}^\tau(X)} \subset \bigsqcup_{\tau' \geq \tau} \cC oh_{P}^{\tau'}(X).$ \end{cor} \begin{rmk}\label{mspace} If $\tau = (P)$ is the trivial HN type, then $\cC oh^\tau_{X,P}=\cC oh^{ss}_{X,P}$ and, for $n >\!> 0$, \[\cC oh^{ss}_{X,P} \cong [Q^{ss}_n / G_n]\] where $Q^{ss}_n =Q_{n,(P)}$ is an open subscheme of $\quot(V_n \otimes \cO_X(-n),P)$. In fact, an analogous statement holds for all HN types (cf. Proposition \ref{HN strat quot pres}). \end{rmk} \subsection{The Hesselink and Shatz stratifications} In this section, we prove that every Shatz stratum for a HN type $\tau$ is contained in a Hesselink stratum of $\quot(V_n \otimes \cO_X(-n),P)$ for $n >\!> 0$; this generalises a corresponding result for pure HN types given in \cite{hoskinskirwan}. \begin{defn}\label{def hess index from HN type} For a HN type $\tau = (P_1, \dots , P_s) \in \cH_P$ and natural numbers $(n,m)$, we let $\beta_{n,m}(\tau)$ denote the conjugacy class of the rational 1-PS \[\begin{array}{rcccc} \lambda_{\beta_{n,m}(\tau)}(t) = \text{diag} &(\underbrace{t^{r_1}, \dots , t^{r_1},} & \dots & \underbrace{t^{r_s}, \dots , t^{r_s}})& \quad \text{where} \quad r_i:= \frac{P(m)}{P(n)} - \frac{P_i(m)}{P_i(n)} . \\ & P_1(n) & & P_s(n) & \end{array}\] \end{defn} Since $\tau$ is a HN type, we have $P_1 \succ P_2 \succ \cdots \cdots \succ P_s$ and thus, for $m > \!> n > \!> 0$, \begin{equation}\label{ineq needed} \frac{P_1(n)}{P_1(m)}> \frac{P_2(n)}{P_2(m)} > \cdots \cdots > \frac{P_s(n)}{P_s(m)} \end{equation} i.e., the weights $r_i$ are decreasing. \begin{thm}[see also \cite{hoskinskirwan}]\label{thm comp strat} Let $\tau$ be a HN type; then, for $m > \!> n > \! > 0$, the Shatz stratum $Q_{n,\tau}$ is a closed subscheme of the Hesselink stratum $S_{\beta_{n,m}(\tau)}$ in $\quot(V_n \otimes \cO_X(-n),P)$. \end{thm} \begin{proof} If $\tau = (P_1, \dots , P_s)$, we take $n > \! > 0$ so all semistable sheaves with Hilbert polynomial $P_i$ are $n$-regular for $i =1, \dots , s$. Then every sheaf with HN type $\tau$ is $n$-regular, as it admits a filtration whose successive quotients are $n$-regular, and so can be parametrised by a point in the Shatz stratum $Q_{n,\tau}$. Let $V_n^i := k^{P_i(n)}$ and let $Q_{n}^i \subset \quot(V_n^i \otimes \cO_X(-n),P_i)$ be the open subscheme consisting of quotients $q : V_n^i \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^i$ such that $H^0(q(n))$ is an isomorphism. Let $(Q_n^i)^{\pur}$ be the open subscheme of $Q_n^i$ parametrising pure sheaves and let $R_n^i$ be the closure of this subscheme in the Quot scheme. By \cite{simpson} Theorem 1.19, we can take $m >\!>n$ so that the GIT semistable set for $\SL(V_n^i)$ acting on $R_n^i$ with respect to the linearisation $L_{n,m}$ is the lowest Shatz stratum in $Q_n^i$ parametrising semistable sheaves; that is, \[ (R_n^i)^{\SL(V_n^i)-ss}(L_{n,m}) = (Q_n^i)^{ss} := (Q_n^i)_{(P_i)}\] for $i=1, \dots, s$. Furthermore, we assume $m > \!> n > \!> 0$, so that the inequalities (\ref{ineq needed}) hold. 
As described above, the index $\beta = \beta_{n,m}(\tau)$ determines a filtration $0=V^{(0)}_n \subset \cdots \subset V_n^{(s)}=V_n$ where $V_n^i := V_n^{(i)}/V_n^{(i-1)} = k^{P_i(n)}$. By construction, the conditions (\ref{num cond}) hold for $r(\beta)$ and $l(\beta)$; hence \[ Z_{\tau,n,m}:= \prod_{i=1}^s (Q_n^i)^{ss}\subset \prod_{i=1}^s \quot(V^i_n \otimes \cO_X(-n),P_i)^{\SL(V^i_n)-ss}(L_{n,m}) \subset Z^\lambda_d.\] Both inclusions are closed inclusions: the first, as $R_n^i$ is closed in $\quot(V^i_n \otimes \cO_X(-n),P_i)$. Therefore, $Y_{\tau,n,m} := p_\lambda^{-1}(Z_{\tau,n,m}) \subset Y_d^\lambda$ and $\SL(V_n)Y_{\tau,n,m} \subset S_{\beta}$ are both closed subschemes. To complete the proof, we show that $Q_{n,\tau} = \SL(V_n)Y_{\tau,n,m}$. By construction, $ Y_{\tau,n,m}$ consists of quotient sheaves $q: V_n \otimes \cO_X(-n) \onto \cF$ with a filtration \[ 0 \subset \cF^{(1)} \subset \cdots \subset \cF^{(s)} = \cF \quad \text{ where } \quad \cF^{(i)}: = q(V^{(i)}_n \otimes \cO_X(-n))\] whose successive quotients $\cF^{(i)}/\cF^{(i-1)}$ are semistable with Hilbert polynomial $P_i$; that is, $\cF$ has HN type $\tau$ and thus $\SL(V_n^i) Y_{\tau,n,m} \subset Q_{n,\tau}$. Conversely, for $q : V_n \otimes \cO_X(-n) \onto \cF$ in $Q_{n,\tau}$ with HN filtration given by \[0 = \cF^{(0)} \subset \cF^{(1)} \subset \cdots \subset \cF^{(s)}= \cF,\] we can choose $g \in \SL(V_n)$ that sends the filtration $V^{(i)}_n$ to $W_n^{(i)}:= H^0(q(n))^{-1} (H^0(\cF^{(i)}(n)))$; then $g \cdot q \in Y_{\tau,n,m}$ and this finishes the proof. \end{proof} \begin{cor}\label{cor Shatz strata} Let $\tau$ be a HN type for sheaves on $X$ with Hilbert polynomial $P$. For $m>\!> n >\!> 0$, all sheaves with HN type $\tau$ are $n$-regular and \[Q_{n,\tau}=\SL(V_n)Y_{\tau,n,m}\cong \SL(V_n) \times_{F_{n,\tau}} Y_{\tau,n,m}\] where \begin{enumerate} \item $F_{n,\tau}=P(\lambda)$ is a parabolic subgroup of $\SL(V_n)$, \item $Y_{\tau,n,m} = p_\lambda^{-1}(Z_{\tau,n,m})$, and \item $Z_{\tau,n,m}:= \prod_{i=1}^s (Q_n^i)^{ss}$, \end{enumerate} where the subschemes $(Q_n^i)^{ss} \subset \quot(V_n^i \otimes \cO_X(-n),P_i)$ are as above. \end{cor} \subsection{The structure of the HN strata} We now describe the HN strata $\cC oh^\tau_{P}(X)$. \begin{prop}\label{HN strat quot pres} Let $\tau$ be a HN type. Then, for $n >\!> 0$, we have isomorphisms \[\cC oh^\tau_{P}(X) \cong [Q_{n,\tau} /G_n] \cong [Y_{\tau,n,m}/F_{n,\tau}]\] where $Q_{n,\tau}$ and $Y_{\tau,n,m}$ are locally closed subschemes of $\quot( V_n \otimes \cO_X(-n),P)$ and $F_{n,\tau}$ is a parabolic subgroup of $G_n = \GL(V_n)$. \end{prop} \begin{proof} By Corollary \ref{cor Shatz strata}, there exists $n >\!>0$, so all sheaves with HN type $\tau$ are $n$-regular and so can be parametrised by a quotient sheaf in the Shatz stratum $Q_{n,\tau} \subset Q_n^o$. The restriction $\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_{n,\tau}$ of the universal family to $Q_{n,\tau}\times X$ has the local universal property for families of sheaves on $X$ of HN type $\tau$ by our assumption on $n$. Therefore, we obtain a map \[Q_{n,\tau} \ra \cC oh^\tau_P(X) \] that is an atlas for $\cC oh^\tau_P(X)$. Two morphisms $f_i : S \ra Q_{n,\tau}$ define isomorphic families of sheaves of HN type $\tau$ if and only if they are related by a morphism $\varphi : S \ra \GL(V_n)$; i.e. $f_1(s) = \varphi(s) \cdot f_2(s)$ for all $s \in S$. Hence, the above morphism descends to an isomorphism $[Q_{n,\tau}/G_n] \ra \cC oh^\tau_{P}(X)$. 
The final isomorphism follows from Corollary \ref{cor Shatz strata}. \end{proof} \subsection{Stratifications on the stack}\label{sec sheaf strat compare} It is natural to expect an agreement between the Hesselink and Shatz stratifications, as we saw in the case of quiver representations. Further evidence that supports such an expectation comes from the agreement between the Yang--Mills and HN stratification in gauge theory (cf. \cite{atiyahbott,daskalopoulos}). However, we only have a containment result: the $\tau$-Shatz stratum is contained in a Hesselink stratum for $m$ and $n$ sufficiently large. In this section, we explain why these stratifications do not agree and what should be done to rectify this. First we note that multiple HN strata can be contained in a single Hesselink stratum; the proof follows from Definition \ref{def hess index from HN type}. \begin{lemma} Let $\tau =(P_1 , \dots , P_s)$ and $\tau' = (P_1', \dots , P_t')$ be HN types; then \[ \beta_{n,m}(\tau) = \beta_{n,m}(\tau') \] if and only if $s =t$ and, for $i =1, \dots, s$, we have that $P_i(n)=P_i'(n)$ and $P_i(m) = P_i'(m)$. \end{lemma} \begin{rmk} We note that if $\dim X \leq 1$, then the assignment $\tau \mapsto \beta_{n,m}(\tau)$ is injective for any $m > n$, as the Hilbert polynomial $P$ of any sheaf on $X$ has at most degree 1 and so $P$ is uniquely determined by the pair $(P(n),P(m))$. \end{rmk} For distinct HN types $\tau \neq \tau'$, we note that \[ \beta_{n,m}(\tau) \neq \beta_{n,m}(\tau') \quad \text{for } \: m >\! > n >\!> 0.\] However, as there are infinite many HN types, we cannot pick $m >\! > n >\!> 0$ so that the assignment $\tau \mapsto \beta_{n,m}(\tau)$ is injective for all HN types. We can overcome this problem by refining the Hesselink strata as suggested in Remark \ref{refine Hess}; however, even after such a refinement, the stratifications do not coincide (cf. Proposition \ref{relation of strata} and the later comments). The deeper reason underlying the failure of these stratifications to coincide is that the Quot scheme is only a truncated parameter space for sheaves with Hilbert polynomial $P$, in the sense that it does not parametrise all sheaves on $X$ with Hilbert polynomial $P$. However, the family of all sheaves on $X$ with Hilbert polynomial $P$ is unbounded and so there is no scheme that parametrises all such sheaves. In fact, we really need to consider the quot schemes $\quot(V_n \otimes \cO_X(-n),P)$ for all $n$ simultaneously. In the case of vector bundles over a smooth projective curve, Bifet, Ghione and Letizia \cite{bgl} constructed an ind-variety of matrix divisors and used a HN stratification on this ind-variety to derive inductive formulae for the Betti numbers. Therefore, one may hope to analogously construct an ind-quot scheme. Unfortunately, in our setting, there are no natural maps between these quot schemes which allow us to construct such an ind-quot scheme. The correct space to compare the Hesselink and HN stratifications is on the stack $\cC oh_P(X)$. By Corollary \ref{Shatz strat}, there is an infinite HN stratification \[\cC oh_{P}(X) = \bigsqcup_{\tau} \cC oh_{P}^\tau(X). 
\] We have an increasing open cover of $\cC oh_P(X)$ by the substacks $\cC oh_{P}^{n-\reg}(X)$ of $n$-regular sheaves: \[ \cC oh_P(X) = \bigcup_{n \geq 0} \cC oh_P^{n-\reg}(X)\] such that $\cC oh^{n-\reg}_P(X) \cong [Q^{n-\reg}/G_n]$ where $Q^{n-\reg}$ is the open subscheme of $Q_n^o$ consisting of quotient sheaves $q: V_n \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ such that $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ is $n$-regular. In the remainder of this section, by using the Hesselink stratification on each Quot scheme $\quot(V_n \otimes \cO_X(-n),P)$, we construct a Hesselink stratification on $\cC oh_P(X)$. For each $n$, we fix $m_n > \!> n$ so the $\SL(V_n)$-linearisation $L_{n,m_n}$ on $\quot(V_n \otimes \cO_X(-n),P)$ is ample. Then we have an associated Hesselink stratification \[ \quot(V_n \otimes \cO_X(-n),P) = \bigsqcup_{\beta \in \cB_n} S^n_\beta \] which we refine, as in Remark \ref{refine Hess}, by tuples $\underline{P} = (P_1, \dots ,P_s)$ of Hilbert polynomials \[ \quot(V_n \otimes \cO_X(-n),P) = \bigsqcup_{\underline{P} \in \cC_n} S^n_{\underline{P}}.\] Let $S_{\underline{P}}^{n-\reg}$ be the fibre product of $Q^{n-\reg}$ and $S^n_{\underline{P}}$ in this Quot scheme. We have a decomposition \[ Q^{n-\reg} = \bigsqcup_{\underline{P} \in \cC_n} S_{\underline{P}}^{n-\reg}\] into finitely many locally closed subschemes. Since $Q^{n-\reg}$ is an open, rather than a closed, subscheme of the Quot scheme, the strata $S_{\underline{P}}^{n-\reg}$ do not admit descriptions as in Theorem \ref{hess thm proj}. However, as each stratum is $G_n$-invariant, we obtain an induced decomposition \[ \cC oh^{n-\reg}_P(X) = \bigsqcup_{\underline{P} \in \cC_n} {\cS}_{\underline{P}}^n \quad \quad \text{where } \: {\cS}_{\underline{P}}^n \cong [S_{\underline{P}}^{n-\reg}/G_n]\] into finitely many disjoint locally closed substacks. \begin{prop}\label{relation of strata} Let $\tau \in \cH_P$ be a HN type. Then, for $n >\! > 0$ and for $n' > \! > n$, we have \[ \cC oh^\tau_P(X) \subset \cS_{\tau}^n \subset \bigsqcup_{\tau' \in \cB_\tau^n} \cC oh^{\tau'}_P(X) \subset \bigsqcup_{\tau' \in \cB_\tau^n} \cS_{\tau'}^{n'}\] where $\cB_\tau^n$ is a finite set of HN types $\tau' \geq \tau$. \end{prop} \begin{proof} By Theorem \ref{thm comp strat}, for $n>\!> 0$, the Shatz stratum indexed by $\tau$ is contained in the Hesselink stratum indexed by $\beta_{n,m_n}(\tau)$. Moreover, $\cC oh^\tau_P(X)$ is contained in the refined Hesselink stratum $\cS^n_\tau$ indexed by the tuple of polynomials $\tau=(P_1, \dots ,P_s)$. 
For the second inclusion, we note that every quotient sheaf $q : V_n \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ in the refined Hesselink stratum $S_\tau^n \subset \quot(V_n \otimes \cO_X(-n),P)$ has a filtration \[ 0 \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(1)} \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(2)} \subset \cdots \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(s)} =\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H\] such that $P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}/\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)}) = P_i$; that is $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ has HN type greater than or equal to $\tau$ and \[ \cS_{\tau}^n \subset \bigsqcup_{\tau' \geq \tau} \cC oh^{\tau'}_P(X). \] As $S_\tau^{n-\reg}$ has a finite Shatz stratification by HN types, we can take a finite index set $\cB_\tau^n$ here. The final inclusion follows by applying Theorem \ref{thm comp strat} to the finite set of HN types $\cB_\tau^n$. \end{proof} It is not the case that for $n' >\!> n$, the $n'$th Hesselink stratification refines the $n$th Hesselink stratification. For example, we could have two sheaves $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ and $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H'$ which belong to distinct strata $\cS^n_{\underline{P}}$ and $\cS^n_{\underline{P}'}$, but both have the same HN type. Moreover, we could have a stratum $\cS_{\underline{P}}^n$ indexed by a tuple of polynomials $\underline{P} = (P_1, \dots , P_s)$ which is not a HN type: i.e. we have \[ \frac{P_1(n)}{P_1(m_n)} > \cdots > \frac{P_s(n)}{P_s(m_n)} \] without $P_1 \succ \cdots \succ P_s$; although, then $\underline{P}$ will not index a $n'$th Hesselink stratum for $n' >\!> n$. \begin{lemma} A tuple of polynomials $\underline{P}$ is a surviving Hesselink index (i.e. $\cS^n_{\underline{P}}$ is non-empty for $n >\!> 0$) if and only if it is a HN type of a sheaf on $X$ with Hilbert polynomial $P$. \end{lemma} We want to define an infinite Hesselink stratification on $\cC oh_{{P}}(X)$ such that the infinite strata $\cH ess_\tau$ are indexed by the surviving Hesselink indices $\tau$. Furthermore, we want this stratification to be a limit over $n$ of the finite Hesselink stratifications on the stack of $n$-regular sheaves; more precisely, we want a sheaf $\cF$ to be represented by a point of $\cH ess_\tau$ if and only if $\cF$ is represented by a point of $\cS^n_\tau$ for $n > \! > 0$. From this intuitive picture concerning the points we would like the infinite Hesselink strata to parametrise, it is not clear whether these infinite Hesselink strata $\cH ess_\tau$ are locally closed substacks of $\cC oh_{{P}}(X)$. To give a more concrete definition, we define the infinite Hesselink strata as an inductive limit of stacks. This is slightly delicate as there is no reason for the finite Hesselink strata $S^n_{\tau}$ to stabilise for large $n$, or even to be comparable. For example, we know that there are some sheaves in $S^n_{\tau}$ that do not belong to $S^{n'}_{\tau}$ for any $n' >\!> n$; however, as $n$ increases there are more and more sheaves that become $n$-regular and so can appear in each stratum. 
To circumvent this difficulty, we restrict our attention to sufficiently large substacks $\cR^n_{\tau} \subset \cS^n_{\tau}$ that form a descending chain. \begin{defn} For each surviving Hesselink index $\tau$, we fix a natural number $N_\tau$ such that $\cC oh^{\tau}_P(X) \subset \cS^n_{\tau}$ for all $n \geq N_\tau$. Then let $\cR^{N_\tau}_\tau := \cS_\tau^{N_\tau}$ and \[ \cR^n_{\tau} := \cR^{n-1}_{\tau} \times_{\cC oh^{n-\reg}_P(X)} \cS^n_{\tau}\] for $n > N_\tau$. Then we have a descending chain $\cdots \cdots \hookrightarrow \cR^{n}_{\tau} \hookrightarrow \cR^{n-1}_{\tau} \hookrightarrow \cdots \cdots \hookrightarrow \cR^{N_{\tau}}_{\tau}$. We let $\cH ess_{\tau}$ be the inductive limit of this chain and call this the infinite Hesselink stratum for $\tau$. \end{defn} \begin{prop} Let $\tau$ be a surviving Hesselink index; then $\cR^{n'}_{\tau} = \cC oh_P^\tau(X)$ for $n' >\! > 0$. In particular, $\cH ess_{\tau} = \cC oh_P^\tau(X)$. \end{prop} \begin{proof} By Proposition \ref{relation of strata}, we have that \[ \cS^{N_\tau}_\tau \subset \bigsqcup_{\tau' \in \cB_\tau} \cC oh^{\tau'}_P(X)\] where the index set $\cB_\tau$ is finite and $ \cC oh^{\tau'}_P(X) \subset \cS_{\tau'}^{n'}$ for all $\tau' \in \cB_\tau$ and $n' >\!> N_\tau$. By construction, $ \cC oh^{\tau}_P(X)$ is a substack of $\cR^n_\tau$ and $\cR^{n}_\tau$ is a substack of $\cS^n_\tau$ for $n \geq N$. We claim that, $\cR_\tau^{n'}$ is a substack of $\cC oh_\tau^{n'}$ for $n' >\!> N_\tau$ and, therefore, $\cR_\tau^{n'} = \cC oh^\tau_P(X)$ for $n' >\!> N_\tau$. To prove the claim, we recall that \[ \cR^{n'}_\tau \subset \cS^{N_\tau}_\tau \subset \bigsqcup_{\tau' \in \cB_\tau} \cC oh^{\tau'}_P(X)\] and $ \cC oh^{\tau'}_P(X) \subset \cS^{n'}_{\tau'}$, for $n'>\!>N_{\tau}$ and $\tau' \in \cB_\tau$. If $\cR^{n'}_\tau$ meets $ \cC oh^{\tau'}_P(X)$ for some $\tau' \neq \tau$, then this implies that $\cS^{n'}_\tau$ meets $\cS^{n'}_{\tau'}$, which contradicts the disjointness of these finite Hesselink strata. Therefore, $\cR^{n'}_\tau$ is a substack of $\cC oh_\tau^{n'}$ for $n' >\!> N_\tau$ and this completes the proof. \end{proof} \begin{rmk} It is easy to check that a sheaf is represented by a point of $\cH ess_\tau$ if and only if it is represented by a point of $\cS^n_\tau$ for $n >\!> 0$.\end{rmk} \begin{cor}\label{cor Hess is HN on stack} The infinite Hesselink stratification on the stack $\cC oh_P(X)$ \[ \cC oh_P(X) = \bigsqcup_{\tau} \cH ess_{\tau}\] coincides with the stratification by Harder--Narasimhan types. \end{cor} \section{A functorial point of view} In \cite{ack}, \'{A}lvarez-C\'{o}nsul and King give a functorial construction of the moduli space of semistable sheaves on $(X, \cO_X(1))$, using a functor \[ \Phi_{n,m} : \textbf{Coh}(X) \ra \textbf{Reps}({K_{n,m}})\] from the category of coherent sheaves on $X$ to the category of representations of a Kronecker quiver $K_{n,m}$. They prove that, for $m >\!> n > \!> 0$, this functor embeds the subcategory of semistable sheaves with Hilbert polynomial $P$ into a subcategory of $\theta_{n,m}(P)$-semistable quiver representations of dimension $d_{n,m}(P)$ and construct the moduli space of semistable sheaves on $X$ with Hilbert polynomial $P$ by using King's construction of the moduli spaces of quiver representations. In this section, we consider an associated morphism of stacks and describe the relationship between the Hesselink and HN strata for sheaves and quivers. 
\subsection{Overview of the construction of the functor} Let $X$ be a projective scheme of finite type over $k$ with very ample line bundle $\cO_X(1)$ and let $\textbf{Coh}(X)$ denote the category of coherent sheaves on $X$. For natural numbers $m > n$, we let $K_{n,m}$ be a Kronecker quiver with vertex set $V:=\{n,m\}$ and $\dim H^0(\cO_X(m-n))$ arrows from $n$ to $m$: \[ K_{n,m} = \quad \left( \begin{array}{ccc} & \longrightarrow & \\ n & \vdots & m \\ \bullet & H^0(\cO_X(m-n)) & \bullet \\ & \vdots & \\ & \longrightarrow & \end{array} \right).\] For natural numbers $m >n$, we consider the functor \[ \Phi_{n,m}:= \text{Hom} (\cO_X(-n) \oplus \cO_X(-m),\:-\:) : \textbf{Coh}(X) \ra \textbf{Reps}({K_{n,m}})\] that sends a sheaf $\mathcal E$ to the representation $W_{\mathcal E}$ of $K_{n,m}$ where \[ W_{\mathcal E,n}:=H^0(\mathcal E(n)) \quad \quad \quad \quad W_{\mathcal E,m}:=H^0(\mathcal E(m)) \] and the evaluation map $H^0(\mathcal E(n)) \otimes H^0(\cO_X(m-n)) \ra H^0(\mathcal E(m))$ gives the arrows. We fix a Hilbert polynomial $P$ and let $\textbf{Coh}_{P}^{n-\reg}(X)$ be the subcategory of $\textbf{Coh}(X)$ consisting of $n$-regular sheaves with Hilbert polynomial $P$. Then the image of $\Phi_{n,m}$ restricted to $\textbf{Coh}_{P}^{n-\reg}(X)$ is contained in the subcategory of quiver representations of dimension vector $d_{n,m}(P)=(P(n),P(m))$. Furthermore, by \cite{ack} Theorem 3.4, if $\cO_X(m-n)$ is regular, then the functor \[ \Phi_{n,m} : \textbf{Coh}_{P}^{n-\reg}(X) \ra \textbf{Reps}_{d_{n,m}(P)}(K_{n,m})\] is fully faithful. Let us consider the stability parameter $ \theta_{n,m}(P): = (-P(m),P(n))$ for representations of $K_{n,m}$ of dimension $d_{n,m}(P)$. \begin{thm}[\cite{ack}, Theorem 5.10]\label{ack thm} For $m >\!> n > \!> 0$, depending on $X$ and $P$, a sheaf $\mathcal E$ with Hilbert polynomial $P$ is semistable if and only if it is pure, $n$-regular and the quiver representation $\Phi_{n,m}(\mathcal E)$ is $\theta_{n,m}(P)$-semistable. \end{thm} We briefly describe how to pick $m >\!> n > \!> 0$ as required for this theorem to hold; for further details, we refer the reader to the conditions (C1) - (C5) stated in \cite{ack} $\S$5.1. First, we take $n$ so all semistable sheaves with Hilbert polynomial $P$ are $n$-regular and the Le Potier--Simpson estimates hold. Then we choose $m$ so $\cO_X(m-n)$ is regular and, for all $n$-regular sheaves $\mathcal E$ and vector subspaces $V' \subset H^0(\mathcal E(n))$, the subsheaf $\mathcal E'$ generated by $V'$ under the evaluation map $H^0(\mathcal E(n)) \otimes \cO(-n) \ra \mathcal E$ is $m$-regular.
Finally, we take $m$ is sufficiently large so a finite list of polynomial inequalities can be determined by evaluation at $m$ (see (C5) in \cite{ack}). We can alternatively consider the functor $\Phi_{n,m}$ as a morphism of stacks. We recall we have isomorphisms of stacks \[ \cC oh_{P}^{n-\reg}(X) \cong [Q^{n-\reg}/G_n],\] where $G_n = \GL(V_n)$, and \[\cR \text{eps}_{d_{n,m}(P)}(K_{n,m})\cong[\rep_{d_{n,m}(P)}(K_{n,m})/\overline{G}_{d_{n,m}(P)}(K_{n,m})].\] Let $\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_n$ be the universal quotient sheaf over $Q^{n-\reg} \times X$ and $p : Q^{n-\reg} \times X \ra Q^{n-\reg}$ be the projection map. By definition of $Q^{n-\reg}$, we have that $R^ip_* (\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_n(n)) = 0$ for $ i > 0$; therefore, by the semi-continuity theorem, $p_*(\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_n(n))$ is a vector bundle over $Q^{n-\reg}$ of rank $P(n)$ and similarly $p_*(\mathcal U}\def\cV{\mathcal V}\def\cW{\mathcal W}\def\cX{\mathcal X_n(m))$ is a rank $P(m)$ vector bundle. Hence, by using the evaluation map, we have obtain a family of representations of $K_{n,m}$ of dimension $d_{n,m}(P)$ parametrised by $Q^{n-\reg}$ that induces a morphism \[ Q^{n-\reg} \ra \cR \text{eps}_{d_{n,m}(P)}(K_{n,m}).\] As this morphism is $G_n$-invariant, it descends to a morphism \[\Phi_{n,m} : \cC oh_{P}^{n-\reg}(X) \ra \cR \text{eps}_{d_{n,m}(P)}(K_{n,m})\] where we continue to use the notation $\Phi_{n,m}$ to mean the morphism of stacks. \subsection{Image of the Hesselink strata} In this section, we study the image of the Hesselink strata under the map \[\Phi_{n,m} : \cC oh_{P}^{n-\reg}(X) \ra \cR \text{eps}_{d_{n,m}(P)}(K_{n,m}).\] Let \[ \quot(V_n \otimes \cO_X(-n),P) = \bigsqcup_{\beta \in \cB_{n,m}} S_\beta\] be the Hesselink stratification associated to the $\SL(V_n)$-action on this Quot scheme with respect to $L_{n,m}$ as we described in $\S$\ref{sec Hess quot}. As in $\S$\ref{sec sheaf strat compare}, we consider the induced stratification on the stack of $n$-regular sheaves \begin{equation}\label{hess quot} \cC oh^{n-\reg}_P(X) = \bigsqcup_{\beta} \cS_\beta^{n,m} \end{equation} where $\cS^{n,m}_\beta = [S^{n-\reg}_\beta/G_n]$ and $S_\beta^{n-\reg} $ is the fibre product of $ Q^{n-\reg}$ and $S_\beta$ in this Quot scheme. We recall that the unstable Hesselink strata are indexed by conjugacy classes of rational 1-PSs $\lambda_\beta$ of $\SL(V_n)$. Equivalently, the index $\beta$ is given by a collection of strictly decreasing rational weights $r(\beta) = (r_1, \dots , r_s)$ and multiplicities $l(\beta) = (l_1, \dots , l_s)$ satisfying $\sum_{i=1}^s l_i = P(n)$ and $\sum_{i=1}^s r_i l_i = 0$. More precisely, we recall that the rational 1-PS associated to $r(\beta)$ and $l(\beta)$ is \[\lambda_\beta(t)=\diag (t^{r_1},\dots , t^{r_1}, \dots , t^{r_s} ,\dots ,t^{r_s})\] where $r_i$ appears $l_i$ times. To define a Hesselink stratification on the stack of representations of $K_{n,m}$ of dimension vector $d_{n,m}(P)$, we need to choose a parameter $\alpha \in \NN^2$ which defines a norm $|| - ||_\alpha$. We choose $\alpha =\alpha_{n,m}(P):=(P(m),P(n))$ due to the following lemma. \begin{lemma}\label{how to pick alpha} Let $\theta=\theta_{n,m}(P)$ and $\alpha=\alpha_{n,m}(P)$. 
Then, for sheaves $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H $ and $\cF$, we have \[ \frac{\theta(W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)}{\alpha(W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)} \geq \frac{\theta(W_\cF)}{\alpha(W_\cF)} \iff \frac{H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n))}{H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(m))} \leq \frac{H^0(\cF(n))}{H^0(\cF(m))}.\] The same statement holds if we replace these inequalities with strict inequalities. \end{lemma} \begin{proof} By definition of these parameters, we have \[ \frac{\theta(W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)}{\alpha(W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)}:= \frac{-P(m)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n)) + P(n)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(m))}{P(m)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n)) + P(n)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(m))} = 1 - \frac{2P(m)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n))}{P(m)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n)) + P(n)H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(m))}.\] From this, it is easy to check the desired equivalences of inequalities. \end{proof} By Theorem \ref{quiver HN is Hess}, we can equivalently view the Hesselink stratification (with respect to $\theta$ and $\alpha$) as a stratification by HN types: \[\cR ep_{d_{n,m}(P)}(K_{n,m})=\bigsqcup_\gamma \cR \text{eps}_{d_{n,m}(P)}^{\gamma}(K_{n,m}).\] \begin{defn} For an index $\beta$ of the Hesselink stratification (\ref{hess quot}) on the stack of $n$-regular sheaves, we let $\gamma(\beta):=(d_1(\beta), \dots ,d_s(\beta))$ where \[ d_i(\beta) = \left(l_i,l_i \frac{P(m)}{P(n)} - l_i r_i\right)\] and $r(\beta) = (r_1, \dots , r_s)$ and $l(\beta) = (l_1, \dots , l_s)$. \end{defn} \begin{lemma} Let $\beta$ be an index for the Hesselink stratification of $Q^{n-\reg}$; then $\gamma(\beta)$ is a HN type for a representation of $K_{n,m}$ of dimension $d_{n,m}$ with respect to $\theta$ and $\alpha$. \end{lemma} \begin{proof} Let $r(\beta) = (r_1, \dots , r_s)$ and $l(\beta) = (l_1, \dots , l_s)$ be as above; then $r_1 > \dots > r_s$ and \[ \sum_{i=1}^s l_i = \dim V_n = P(n) \quad \text{and} \quad \sum_{i=1}^s r_il_i = 0.\] Let $\gamma(\beta):=(d_1(\beta), \dots ,d_s(\beta))$ be as above; then \[ \sum_{i=1}^s d_i(\beta) =\left(\sum_{i=1}^n l_i, \sum_{i=1}^n l_i \frac{P(m)}{P(n)} - l_i r_i \right) = (P(n),P(m)).\] Since $r_1 > \cdots > r_s$, it follows that \[ \frac{\theta(d_1(\beta))}{\alpha(d_1(\beta))} < \frac{\theta(d_2(\beta))}{\alpha(d_2(\beta))} < \cdots \cdots < \frac{\theta(d_s(\beta))}{\alpha(d_s(\beta))}.\] To complete the proof, we need to verify that \[ d_i(\beta) = \left(l_i,l_i \frac{P(m)}{P(n)} - l_i r_i\right) \in \NN^2.\] The first number $l_i$ is a multiplicity and so is a natural number, but the second number is a priori only rational. As $\beta$ is an index for the Hesselink stratification, it indexes a non-empty stratum $S_\beta$ and, as this stratum is constructed from its associated limit set $Z_d^\lambda$, by Theorem \ref{hess thm proj}, it follows that this limit set must also be non-empty. 
Hence this limit set contains a quotient sheaf $q : V_n \otimes \cO_X(-n) \ra \oplus_{i=1}^s \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i$ such that, by Proposition \ref{Zdlambda}, for $i=1, \dots, s$, we have \[ P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i,m) = l_i \frac{P(m)}{P(n)} - l_i r_i.\] Then, as $P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i,m) \in \NN$, this completes the proof. \end{proof} \begin{prop} For a Hesselink index $\beta$, we have \[ \Phi_{n,m} \left(\cS^{n,m}_\beta \right) \subset \bigsqcup_{\gamma \geq \gamma(\beta)} \cR \text{eps}^\gamma_{d_{n,m}(P)}(K_{n,m}).\] \end{prop} \begin{proof} Let $q : V_n \otimes \cO_X(-n) \onto \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ be a quotient sheaf in $S_\beta^{n-\reg}$ and let $r(\beta) = (r_1, \dots , r_s)$ and $l(\beta) = (l_1, \dots , l_s)$ be the associated rational weights and multiplicities; thus, $r_1 > \cdots > r_s$ and \[ \lambda_\beta(t) = \diag (t^{r_1}, \dots , t^{r_1}, \dots , t^{r_s} ,\dots ,t^{r_s}) \] where $r_i$ appears $l_i$ times. If $\lambda$ is the unique integral primitive 1-PS associated to $\lambda_\beta$, then $\beta = ([\lambda],d)$ where $d=||\lambda_\beta||$. The 1-PS $\lambda$ induces a filtration $0=V^{(0)}_n \subset V^{(1)}_n \subset \cdots \subset V^{(s)}_n = V_n$ such that the successive quotients $V^{i}_n$ have dimension $l_i$. By Corollary \ref{cor Hess quot}, there exists $g \in \SL(V_n)$ such that we have a filtration \[ 0 = \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(0)} \subset \cdots \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)} := g \cdot q(V^{(i)}_n \otimes \cO_X(-n)) \subset \cdots \subset \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(s)}=\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H\] where the Hilbert polynomials of $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^i := \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}/\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)}$ satisfy \[ P(n){P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i,m)}=l_i({P(m)} - r_iP(n)) \quad \quad \text{for } \: i = 1, \dots ,s. \] Let $W^{(i)}:= W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}}$ be the quiver representation associated to $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}$; then we have a filtration \begin{equation}\label{filtr of W} 0 = W^{(0)} \subset W^{(1)} \subset \cdots \subset W^{(s)} =W:=W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H \end{equation} with $ \dim W^{(i)} := (\dim H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}(n)),\dim H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}(m)) ) = (\dim V_n^{(i)}, P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)},m) )$, due to the fact that $H^0(q(n))$ is an isomorphism and $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i)}$ is $m$-regular. 
Let $W_i := W^{(i)}/W^{(i-1)}$; then \[\dim W_i=(\dim V_n^i,P(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i,m))=\left(l_i,l_i \frac{P(m)}{P(n)}-l_i r_i \right).\] As we have a filtration (\ref{filtr of W}) of $W$ whose successive quotients have dimension vectors specified by $\gamma(\beta)$, it follows that $W_\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ has HN type greater than or equal to $\gamma(\beta)$. \end{proof} In the above proof, we note that the quiver representation $W_i$ is only isomorphic to $W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i}$ if $H^{1}(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H^{(i-1)}(n)) = 0$. In general, this is not the case, but it is always the case that $W_i$ is a subrepresentation of $W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i}$, as $W_{i,n} \subset H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i(n))$ and $W_{i,m} \cong H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i(m))$. In particular, it was not possible to use GIT semistability properties of the quotient sheaves $q_i : V_n^i \otimes \cO_X(-n) \ra \mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H_i$ to deduce $(\theta,\alpha)$-semistability of $W_i$ (that is, to show that (\ref{filtr of W}) is the HN filtration of $W$). It is not clear to the author if such a result should hold. A more natural way to state the above result is the following. \begin{cor} For a Hesselink index $\beta$, we have \[ \Phi_{n,m} \left( \bigsqcup_{\beta' \geq \beta}\cS^n_{\beta'} \right) \subset \bigsqcup_{\gamma' \geq \gamma(\beta)} \cR \text{eps}^{\gamma'}_{d_{n,m}(P)}(K_{n,m}).\] \end{cor} \subsection{Image of the HN strata} In this section, we study the image of the HN strata $\cC oh^{\tau}_P(X)$ under the map \[\Phi_{n,m} : \cC oh_{P}^{n-\reg}(X) \ra \cR \text{eps}_{d_{n,m}(P)}(K_{n,m})\] for $n$ and $m$ sufficiently large (depending on $\tau$). In fact, we show that a HN stratum for sheaves is mapped to a HN stratum for quiver representations. Let $\theta=\theta_{n,m}(P): = (-P(m),P(n))$ and $\alpha= \alpha_{n,m}(P) := (P(m),P(n))$ be as above. \begin{defn} For a HN type $\tau = (P_1, \dots , P_s) \in \cH_P$ of a sheaf and natural numbers $(n,m)$, we let \[\gamma_{n,m}({\tau}) := (d_{n,m}(P_1), \dots , d_{n,m}(P_s))\] where $d_{n,m}(P_i) = (P_i(n),P_i(m))$. \end{defn} As $\tau$ is a HN type of sheaves, we have that ${P_1} \succ {P_2} \succ \cdots \cdots \succ {P_s}$; thus, \begin{equation}\label{HN type inequal} \frac{P_1(n)}{P_1(m)}> \frac{P_2(n)}{P_2(m)} > \cdots \cdots > \frac{P_s(n)}{P_s(m)} \quad \quad \text{for } \: m >\!> n>\!> 0. \end{equation} Therefore, by Lemma \ref{how to pick alpha}, for $m >\!> n>\!> 0$, we have \[ \frac{\theta(d_{n,m}(P_1))}{\alpha(d_{n,m}(P_1))} < \frac{\theta(d_{n,m}(P_2))}{\alpha(d_{n,m}(P_2))} < \cdots \cdots < \frac{\theta(d_{n,m}(P_s))}{\alpha(d_{n,m}(P_s))};\] i.e., $\gamma_{n,m}(\tau)$ is a HN type for representations of $K_{n,m}$ of dimension $d_{n,m}(P)$ for $m >\!> n>\!> 0$. \begin{thm}\label{thm ack HN} Let $\tau = (P_1,\dots ,P_s) \in \cH_P$ be a HN type. Then, for $m >\!> n > \!> 0$, \[\Phi_{n,m} \left( \cC oh_{P}^\tau(X)\right) \subset \cR \text{eps}_{d_{n,m}(P)}^{\gamma_{n,m}({\tau})}(K_{n,m}).\] \end{thm} \begin{proof} We take $m >\!> n > \!> 0$ as needed for Theorem \ref{ack thm} for the Hilbert polynomials $P_1, \dots , P_s$. 
Furthermore, we assume that $m$ and $n$ are sufficiently large so the inequalities (\ref{HN type inequal}) hold. Let $\mathcal E$ be a sheaf on $X$ of HN type $\tau$ and HN filtration given by \begin{equation} \label{given filtr} 0 = \mathcal E^{(0)} \subset \mathcal E^{(1)} \subset \cdots \subset \mathcal E^{(s)}= \mathcal E. \end{equation} Let $W_{\mathcal E}$ and $W^{(i)}:=W_{\mathcal E^{(i)}}$ be the associated quiver representations; then we claim that the induced filtration \begin{equation}\label{filtr to show} 0 = W^{(0)} \subset W^{(1)} \subset \cdots \subset W^{(s)} = W_{\mathcal E} \end{equation} is the HN filtration of $ W_{\mathcal E}$ with respect to $(\theta,\alpha)$ and, moreover, that $W_{\mathcal E}$ has HN type $\gamma_{n,m}(\tau)$. Let $\mathcal E_i$ and $W_i$ denote the successive subquotients in the above filtrations. Our assumptions on $n$ imply that each $\mathcal E_i$ is $n$-regular and so, by induction, each $\mathcal E^{(i)}$ is $n$-regular. Therefore, we have exact sequences \[ 0 \ra H^0(\mathcal E^{(i-1)}(n)) \ra H^0(\mathcal E^{(i)}(n)) \ra H^0(\mathcal E_i(n)) \ra 0\] that give isomorphisms $W_{\mathcal E_i} \cong W_i$. By Theorem \ref{ack thm}, as $\mathcal E_i$ is semistable and $n$-regular with Hilbert polynomial $P_i$, the quiver representation $W_i$ is $\theta_i:=\theta_{n,m}(P_i)$-semistable. For a subrepresentation $W' \subset W_i$, we note that \[ \theta_i(W') \geq 0 \iff P_i(n)\dim W'_{v_m}\geq P_i(m)\dim W'_{v_n} \iff \frac{\theta(W')}{\alpha(W')} \geq \frac{\theta(W_i)}{\alpha(W_i)}.\] Therefore, $\theta_i$-semistability of $W_i$ implies $(\theta,\alpha)$-semistability of $W_i$. To finish the proof of the claim, i.e.
to prove that (\ref{filtr to show}) is the HN filtration of $ W_{\mathcal E}$, it suffices to check that \[ \frac{\theta(W_1)}{\alpha(W_1)} < \frac{\theta(W_2)}{\alpha(W_2)} < \cdots \cdots < \frac{\theta(W_s)}{\alpha(W_s)} \] or, equivalently, by Lemma \ref{how to pick alpha}, that \[ \frac{H^0(\mathcal E_1(n))}{H^0(\mathcal E_1(m))} > \frac{H^0(\mathcal E_2(n))}{H^0(\mathcal E_2(m))} > \cdots \cdots > \frac{H^0(\mathcal E_s(n))}{H^0(\mathcal E_s(m))}.\] Since $\mathcal E_i$ is $n$-regular, we have that $H^0(\mathcal E_i(n)) = P_i(n)$ and $H^0(\mathcal E_i(m))=P_i(m)$; thus, the above inequalities are equivalent to (\ref{HN type inequal}). Moreover, this shows that $ W_{\mathcal E}$ has HN type $\gamma_{n,m}(\tau)$. \end{proof} \begin{rmk} The assignment $\tau \mapsto \gamma_{n,m}(\tau)$ is not injective for exactly the same reason as mentioned in $\S$\ref{sec sheaf strat compare}. In fact, more generally, $P \mapsto d_{n,m}(P)$ is not injective, unless $ \dim X \leq 1$. \end{rmk} \subsection{Adding more vertices} To determine a polynomial $P(x)$ in one variable of degree at most $d$, it suffices to know the value $P$ takes at $d+1$ different values of $x$. In this final section, by using this observation, we generalise the construction of \'{A}lvarez-C\'{o}nsul and King by adding more vertices so that the map $\tau \mapsto \gamma(\tau)$ is injective.
For a tuple $\underline{n}=(n_0, \cdots , n_d)$ of increasing natural numbers, we define a functor \[ \Phi_{\underline{n}}:= \text{Hom} (\bigoplus_{i=0}^d\cO_X(-n_i),\:-\:) : \cC oh(X) \ra \cR \text{eps}(K_{\underline{n}})\] where $K_{\underline{n}}$ denotes the quiver with vertex set $V=\{n_0, \dots , n_d\}$ and $\dim H^0(\cO_X(n_{i+1} - n_i))$ arrows from $n_i$ to $n_{i+1}$: \[ K_{\underline{n}} = \quad \left( \begin{array}{ccccccc} & \longrightarrow & & & & \longrightarrow & \\ n_0 & \vdots & n_1 & & n_{d-1} & \vdots & n_d \\ \bullet & H^0(\cO_X(n_1-n_0)) & \bullet & \quad \cdots \quad & \bullet & H^0(\cO_X(n_d -n_{d-1})) & \bullet \\ & \vdots & \\ & \longrightarrow & & & & \longrightarrow & \end{array} \right).\] More precisely, if $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ is a coherent sheaf on $X$, then $\Phi_{\underline{n}}(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)$ is the quiver representation denoted $W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H}=(W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H,n_0}, \dots , W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H,n_d}, \text{ev}_1, \dots , \text{ev}_d)$ where we let $W_{\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H,l} := H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(l))$ and define the maps $\text{ev}_l : H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_{l-1})) \otimes H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_l - n_{l-i})) \ra H^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i))$ using the evaluation map on sections. For the dimension vector $d_{\underline{n}}(P):=(P(n_0), \dots , P(n_d))$, we observe that \[\Phi_{\underline{n}}: \cC oh_{P}^{n_0-\reg}(X) \ra \cR \text{eps}_{d_{\underline{n}}(P)}(K_{\underline{n}}).\] Moreover, we note that the assignment that maps a Hilbert polynomial $P$ of a sheaf on $X$ to the corresponding dimension vector $d_{\underline{n}}(P)$ is injective. We want to choose a stability parameter $\theta \in \ZZ^{d+1}$ for representations of $K_{\underline{n}}$ of dimension vector $d_{\underline{n}}(P)$ such that $\Phi_{\underline{n}}$ sends semistable sheaves to $\theta$-semistable quiver representations for $\underline{n} > \! > 0$ (that is, for $n_d > \! > n_{d-1} > \! > \cdots > \! > n_0 > \! > 0$). Let \[ \theta= \theta_{\underline{n}}(P):= (\theta_0, \dots , \theta_d) \quad \text{where} \quad \theta_i:= \sum_{j <i} P(n_j) - \sum_{j>i} P(n_j); \] then $\sum_{i=0}^d \theta_i P(n_i) = 0$. The following lemma demonstrates this is a suitable choice. \begin{lemma}\label{correct theta} Let $\theta = \theta_{\underline{n}}(P)$ as above and $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ be a sheaf on $X$. If, for all $j < i$, we have \[ \frac{h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_j))}{ h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i))} \leq \frac{P(n_j)}{P(n_i)},\] then $\theta(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H) \geq 0$. 
\end{lemma} \begin{proof} From the definition $\theta (\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H) := \sum_{i=0}^d \theta_i h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i))$, it follows that \begin{align*} \theta (\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H) & = \sum_{i} \left( \sum_{j< i} P(n_j) - \sum_{j>i} P(n_j) \right) h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i)) \\ & =\sum_i \sum_{j<i} \left( P(n_j) h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i)) - P(n_i) h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_j) \right) \geq 0 \end{align*} by using the given inequalities. \end{proof} Using this lemma, it is easy to prove the following corollary analogously to \cite{ack} Theorem 5.10. \begin{cor} Let $P$ be a fixed Hilbert polynomial of a sheaf on $X$; then for $\underline{n} >\! > 0$, we have \[ \Phi_{\underline{n}} (\cC oh_{P}^{\text{ss}}(X) ) \subset \cR \text{eps}_{d_{\underline{n}}(P)}^{\theta_{\underline{n}}(P)}(K_{\underline{n}}).\] \end{cor} Finally, we need to find the value of the parameter $\alpha \in \NN^{d+1}$ needed to define the correct notion of HN filtrations of representations of $K_{\underline{n}}$ in order to prove an analogue of Theorem \ref{thm ack HN}. The following lemma, whose proof is analogous to Lemma \ref{correct theta}, shows that \[ \alpha= \alpha_{\underline{n}}(P):= (\alpha_0, \dots , \alpha_d) \quad \text{where} \quad \alpha_i:= \sum_{j <i} P(n_j) + \sum_{j>i} P(n_j) \] is a suitable choice. \begin{lemma} Let $\theta = \theta_{\underline{n}}(P)$ and $\alpha= \alpha_{\underline{n}}(P)$ be as above and $\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H$ and $\cF$ be two sheaves on $X$. If, for all $j < i$, we have \[\frac{h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_j))}{ h^0(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H(n_i))} \leq \frac{h^0(\cF(n_j))}{h^0(\cF(n_i))}, \] then \[ \frac{\theta(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)}{\alpha(\mathcal E}\def\cF{\mathcal F}\def\cG{\mathcal G}\def\cH{\mathcal H)} \geq \frac{\theta(\cF)}{\alpha(\cF)}.\] \end{lemma} Finally, we deduce the required corollary in the same way as the proof of Theorem \ref{thm ack HN}, by using the above lemma. \begin{cor} Fix a Hilbert polynomial $P$ and let $\theta = \theta_{\underline{n}}(P)$ and $\alpha= \alpha_{\underline{n}}(P)$. For a HN type $\tau=(P_1, \dots , P_s)$ of sheaves on $X$ with Hilbert polynomial $P$, we have, for $\underline{n} > \! > 0$, that \[ \Phi_{\underline{n}} (\cC oh_{P}^{\tau}(X) ) \subset \cR \text{eps}_{d_{\underline{n}}(P)}^{\gamma_{\underline{n}}(\tau)}(K_{\underline{n}})\] where \[ \gamma_{\underline{n}}(\tau): = (d_{\underline{n}}(P_1), \dots , d_{\underline{n}}(P_s)) \] and the HN stratification on the stack of quiver representations is taken with respect to $(\theta, \alpha)$. In particular, the assignments $P \mapsto d_{\underline{n}}(P)$ and $\tau \mapsto \gamma_{\underline{n}}(\tau)$ are both injective. \end{cor} \noindent{Freie Universit\"{a}t Berlin, Arnimallee 3, Raum 011, 14195 Berlin, Germany} \noindent{\texttt{[email protected]}} \end{document}
arXiv
Consistent estimator In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one. In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one would obtain a sequence of estimates indexed by n, and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent. Consistency as defined here is sometimes referred to as weak consistency. When we replace convergence in probability with almost sure convergence, then the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency. Definition Formally speaking, an estimator Tn of parameter θ is said to be weakly consistent, if it converges in probability to the true value of the parameter:[1] ${\underset {n\to \infty }{\operatorname {plim} }}\;T_{n}=\theta .$ i.e. if, for all ε > 0 $\lim _{n\to \infty }\Pr {\big (}|T_{n}-\theta |>\varepsilon {\big )}=0.$ An estimator Tn of parameter θ is said to be strongly consistent, if it converges almost surely to the true value of the parameter: $\Pr {\big (}\lim _{n\to \infty }T_{n}=\theta {\big )}=1.$ A more rigorous definition takes into account the fact that θ is actually unknown, and thus, the convergence in probability must take place for every possible value of this parameter. Suppose {pθ: θ ∈ Θ} is a family of distributions (the parametric model), and Xθ = {X1, X2, … : Xi ~ pθ} is an infinite sample from the distribution pθ. Let { Tn(Xθ) } be a sequence of estimators for some parameter g(θ). Usually, Tn will be based on the first n observations of a sample. Then this sequence {Tn} is said to be (weakly) consistent if [2] ${\underset {n\to \infty }{\operatorname {plim} }}\;T_{n}(X^{\theta })=g(\theta ),\ \ {\text{for all}}\ \theta \in \Theta .$ This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example, we estimate the location parameter of the model, but not the scale: Examples Sample mean of a normal random variable Suppose one has a sequence of statistically independent observations {X1, X2, ...} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, one can use the sample mean: Tn = (X1 + ... + Xn)/n. This defines a sequence of estimators, indexed by the sample size n. From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ2/n. 
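To make the weak-consistency definition concrete, here is a small Monte Carlo sketch (an illustration added to this discussion, not part of the original article; the values μ = 5, σ = 2, the tolerance ε = 0.1, and the sample sizes are arbitrary assumptions). It estimates the probability that the sample mean falls more than ε away from μ for increasing n.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, eps = 5.0, 2.0, 0.1   # assumed illustrative values
n_trials = 1_000                 # Monte Carlo repetitions per sample size

for n in [10, 100, 1_000, 10_000]:
    # n_trials independent samples of size n, and the sample mean T_n of each
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    T_n = samples.mean(axis=1)
    # empirical estimate of P(|T_n - mu| > eps); weak consistency says this -> 0
    p_far = np.mean(np.abs(T_n - mu) > eps)
    print(f"n = {n:>6}:  estimated P(|T_n - mu| > {eps}) = {p_far:.4f}")
```

The estimated probabilities fall toward zero as n increases, matching the closed-form expression 2(1 − Φ(√n ε/σ)) derived in the next paragraph.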
Equivalently, $\scriptstyle (T_{n}-\mu )/(\sigma /{\sqrt {n}})$ has a standard normal distribution: $\Pr \!\left[\,|T_{n}-\mu |\geq \varepsilon \,\right]=\Pr \!\left[{\frac {{\sqrt {n}}\,{\big |}T_{n}-\mu {\big |}}{\sigma }}\geq {\sqrt {n}}\varepsilon /\sigma \right]=2\left(1-\Phi \left({\frac {{\sqrt {n}}\,\varepsilon }{\sigma }}\right)\right)\to 0$ as n tends to infinity, for any fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ (recalling that $\Phi $ is the cumulative distribution of the normal distribution). Establishing consistency The notion of asymptotic consistency is very close, almost synonymous to the notion of convergence in probability. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove the consistency. Many such tools exist: • In order to demonstrate consistency directly from the definition one can use the inequality [3] $\Pr \!{\big [}h(T_{n}-\theta )\geq \varepsilon {\big ]}\leq {\frac {\operatorname {E} {\big [}h(T_{n}-\theta ){\big ]}}{h(\varepsilon )}},$ the most common choice for function h being either the absolute value (in which case it is known as Markov inequality), or the quadratic function (respectively Chebyshev's inequality). • Another useful result is the continuous mapping theorem: if Tn is consistent for θ and g(·) is a real-valued function continuous at point θ, then g(Tn) will be consistent for g(θ):[4] $T_{n}\ {\xrightarrow {p}}\ \theta \ \quad \Rightarrow \quad g(T_{n})\ {\xrightarrow {p}}\ g(\theta )$ • Slutsky’s theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If Tn →dα, and Sn →pβ, then [5] ${\begin{aligned}&T_{n}+S_{n}\ {\xrightarrow {d}}\ \alpha +\beta ,\\&T_{n}S_{n}\ {\xrightarrow {d}}\ \alpha \beta ,\\&T_{n}/S_{n}\ {\xrightarrow {d}}\ \alpha /\beta ,{\text{ provided that }}\beta \neq 0\end{aligned}}$ • If estimator Tn is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used: for a sequence {Xn} of random variables and under suitable conditions, ${\frac {1}{n}}\sum _{i=1}^{n}g(X_{i})\ {\xrightarrow {p}}\ \operatorname {E} [\,g(X)\,]$ • If estimator Tn is defined implicitly, for example as a value that maximizes certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.[6] Bias versus consistency Unbiased but not consistent An estimator can be unbiased but not consistent. For example, for an iid sample {x 1 ,..., x n } one can use T n (X) = x n as the estimator of the mean E[X]. Note that here the sampling distribution of T n is the same as the underlying distribution (for any n, as it ignores all points but the last), so E[T n (X)] = E[X] and it is unbiased, but it does not converge to any value. However, if a sequence of estimators is unbiased and converges to a value, then it is consistent, as it must converge to the correct value. Biased but consistent Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by ${1 \over n}\sum x_{i}+{1 \over n}$ it is biased, but as $n\rightarrow \infty $, it approaches the correct value, and so it is consistent. Important examples include the sample variance and sample standard deviation. 
Without Bessel's correction (that is, when using the sample size $n$ instead of the degrees of freedom $n-1$), these are both negatively biased but consistent estimators. With the correction, the corrected sample variance is unbiased, while the corrected sample standard deviation is still biased, but less so, and both are still consistent: the correction factor converges to 1 as sample size grows. Here is another example. Let $T_{n}$ be a sequence of estimators for $\theta $. $\Pr(T_{n})={\begin{cases}1-1/n,&{\mbox{if }}\,T_{n}=\theta \\1/n,&{\mbox{if }}\,T_{n}=n\delta +\theta \end{cases}}$ We can see that $T_{n}{\xrightarrow {p}}\theta $, $\operatorname {E} [T_{n}]=\theta +\delta $, and the bias does not converge to zero. See also • Efficient estimator • Fisher consistency — alternative, although rarely used concept of consistency for the estimators • Regression dilution • Statistical hypothesis testing • Instrumental variables estimation Notes 1. Amemiya 1985, Definition 3.4.2. 2. Lehman & Casella 1998, p. 332. 3. Amemiya 1985, equation (3.2.5). 4. Amemiya 1985, Theorem 3.2.6. 5. Amemiya 1985, Theorem 3.2.7. 6. Newey & McFadden 1994, Chapter 2. References • Amemiya, Takeshi (1985). Advanced Econometrics. Harvard University Press. ISBN 0-674-00560-0. • Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. ISBN 0-387-98502-6. • Newey, W. K.; McFadden, D. (1994). "Chapter 36: Large sample estimation and hypothesis testing". In Robert F. Engle; Daniel L. McFadden (eds.). Handbook of Econometrics. Vol. 4. Elsevier Science. ISBN 0-444-88766-0. S2CID 29436457. • Nikulin, M. S. (2001) [1994], "Consistent estimator", Encyclopedia of Mathematics, EMS Press • Sober, E. (1988), "Likelihood and convergence", Philosophy of Science, 55 (2): 228–237, doi:10.1086/289429. External links • Econometrics lecture (topic: unbiased vs. consistent) on YouTube by Mark Thoma
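To make the bias-versus-consistency contrast above concrete, the following simulation (an illustrative sketch added here, not from the article; the N(0, 1) sampling distribution, seed, and sample sizes are arbitrary assumptions) compares the two running examples: the last-observation estimator Tn(X) = xn, which is unbiased but not consistent, and the perturbed sample mean (1/n)Σxi + 1/n, which is biased but consistent.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, n_trials = 0.0, 5_000

for n in [10, 100, 1_000]:
    x = rng.normal(true_mean, 1.0, size=(n_trials, n))
    last_obs = x[:, -1]                      # unbiased, but its spread never shrinks
    shifted_mean = x.mean(axis=1) + 1.0 / n  # bias of 1/n, vanishing as n grows
    print(f"n = {n:>5} | last obs: bias = {last_obs.mean():+.3f}, sd = {last_obs.std():.3f}"
          f" | mean + 1/n: bias = {shifted_mean.mean():+.3f}, sd = {shifted_mean.std():.3f}")
```

The last observation stays centred on the true mean but its spread does not shrink with n, so it never concentrates; the perturbed mean carries a small bias of 1/n that vanishes while its spread shrinks like 1/√n, so it converges to the true value.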
Wikipedia
Cash Flow-to-Debt Ratio: Definition, Formula, and Example
By Will Kenton; reviewed by David Kindness

What Is the Cash Flow-to-Debt Ratio?
The cash flow-to-debt ratio is the ratio of a company's cash flow from operations to its total debt. This ratio is a type of coverage ratio and can be used to determine how long it would take a company to repay its debt if it devoted all of its cash flow to debt repayment. Cash flow is used rather than earnings because cash flow provides a better estimate of a company's ability to pay its obligations.

The Formula for the Cash Flow-to-Debt Ratio
Cash Flow to Debt = Cash Flow from Operations / Total Debt
The ratio is less frequently calculated using EBITDA or free cash flow. The cash flow-to-debt ratio compares a company's generated cash flow from operations to its total debt. It indicates how much time it would take a company to pay off all of its debt if it used all of its operating cash flow for debt repayment (although this is a very unrealistic scenario).

What the Cash Flow-to-Debt Ratio Can Tell You
While it is unrealistic for a company to devote all of its cash flow from operations to debt repayment, the cash flow-to-debt ratio provides a snapshot of the overall financial health of a company. A high ratio indicates that a company is better able to pay back its debt, and is thus able to take on more debt if necessary. Another way to calculate the cash flow-to-debt ratio is to look at a company's EBITDA rather than the cash flow from operations. This option is used less often because it includes investment in inventory, and since inventory may not be sold quickly, it is not considered as liquid as cash from operations. Without further information about the make-up of a company's assets, it is difficult to determine whether a company is as readily able to cover its debt obligations using the EBITDA method.

The Difference Between Free Cash Flow and Cash Flow From Operations
Some analysts use free cash flow instead of cash flow from operations because this measure subtracts cash used for capital expenditures. Using free cash flow instead of cash flow from operations may, therefore, indicate that the company is less able to meet its obligations. The cash flow-to-debt ratio examines the ratio of cash flow to total debt. Analysts sometimes also examine the ratio of cash flow to just long-term debt. This ratio may provide a more favorable picture of a company's financial health if it has taken on significant short-term debt. In examining either of these ratios, it is important to remember that they vary widely across industries.
A proper analysis should compare these ratios with those of other companies in the same industry.

Example of How to Use the Cash Flow-to-Debt Ratio
Assume that ABC Widgets, Inc. has total debt of $1,250,000 and cash flow from operations for the year of $312,500. Calculate the company's cash flow-to-debt ratio as follows:

$$\text{Cash Flow to Debt} = \frac{\$312{,}500}{\$1{,}250{,}000} = 0.25 = 25\%$$

The company's ratio result of 25% indicates that, assuming it has stable, constant cash flows, it would take approximately four years to repay its debt since it would be able to repay 25% each year. Dividing the number 1 by the ratio result (1 / 0.25 = 4) confirms that it would take four years to repay the company's debt. If the company had a higher ratio result, with its cash flow from operations higher relative to its total debt, this would indicate a financially stronger business that could increase the dollar amount of its debt repayments if needed.
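A minimal sketch of this calculation in Python (the function and variable names are illustrative, not from the article):

```python
def cash_flow_to_debt(cash_flow_from_operations: float, total_debt: float) -> float:
    """Cash flow-to-debt ratio: operating cash flow divided by total debt."""
    return cash_flow_from_operations / total_debt

# ABC Widgets, Inc. example from the article
ratio = cash_flow_to_debt(312_500, 1_250_000)
years_to_repay = 1 / ratio  # assumes stable, constant cash flows
print(f"ratio = {ratio:.0%}, roughly {years_to_repay:.0f} years to repay")  # 25%, 4 years
```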
CommonCrawl
npj climate and atmospheric science Wildfire-driven thunderstorms cause a volcano-like stratospheric injection of smoke David A. Peterson1, James R. Campbell1, Edward J. Hyer1, Michael D. Fromm2, George P. Kablick III2, Joshua H. Cossuth1 & Matthew T. DeLand3 npj Climate and Atmospheric Science volume 1, Article number: 30 (2018)

Intense heating by wildfires can generate deep, smoke-infused thunderstorms, known as pyrocumulonimbus (pyroCb), which can release a large quantity of smoke particles above jet aircraft cruising altitudes. Injections of pyroCb smoke into the lower stratosphere have gained increasing attention over the past 15 years due to the rapid proliferation of satellite remote sensing tools. Impacts from volcanic eruptions and other troposphere-to-stratosphere exchange processes on stratospheric radiative and chemical equilibrium are well recognized and monitored. However, the role of pyroCb smoke in the climate system has yet to be acknowledged. Here, we show that the mass of smoke aerosol particles injected into the lower stratosphere from five near-simultaneous intense pyroCbs occurring in western North America on 12 August 2017 was comparable to that of a moderate volcanic eruption, and an order of magnitude larger than previous benchmarks for extreme pyroCb activity. The resulting stratospheric plume encircled the Northern Hemisphere over several months. By characterizing this event, we conclude that pyroCb activity, considered as either large singular events or a full fire season inventory, significantly perturbs the lower stratosphere in a manner comparable with infrequent volcanic intrusions.

Fire-triggered thunderstorms, or pyrocumulonimbus (pyroCb), are an extreme weather phenomenon associated with large wildfires at temperate latitudes. PyroCbs can release a large quantity of smoke particles into the lower stratosphere, often above the tropopause by several kilometers.1,2 The weather conditions driving pyroCb occurrence,3 along with increasingly active fire seasons,4,5 indicate that pyroCbs are a significant and endemic summertime feature in several regions worldwide. A single fire season in western North America, for instance, can include more than 25 intense single or multi-updraft pyroCb events.6,7 Since the discovery of pyroCb in the early 2000s, several stratospheric aerosol layers first thought to be of volcanic origin have been reclassified as originating from pyroCb activity.2 The significance of volcanic plumes in the lower stratosphere and their role in the climate system has been recognized for several decades. To date, however, the impact of pyroCbs on climate has never been systematically explored, and remains almost completely unquantified in recent studies of the lower stratosphere.8,9,10 Here, we quantify the mass of smoke aerosol particles injected into the lower stratosphere from five near-simultaneous intense pyroCbs observed in western North America on 12 August 2017, referred to hereafter as the "Pacific Northwest Event". Systematic satellite observations are employed to examine the evolution of the event and to obtain essential input parameters, including the spatial extent and vertical profile of the pyroCb plume within the lower stratosphere. 
This study establishes an observational benchmark for extreme pyroCbs, with the goal of motivating modeling and in situ study of lower-stratospheric plume composition, chemical and physical particle evolution, and radiative properties that may influence lower-stratospheric chemistry and dynamic circulation on seasonal and hemispheric scales. Stratospheric smoke mass estimates The stratospheric smoke intrusion from the Pacific Northwest Event was associated with five distinct pyroCb updrafts. Calculations based on the combination of lidar and passive remote sensing observations reveal that this event injected an estimated 0.1–0.3 Tg of total aerosol particle mass into the lower stratosphere (Fig. 1). The Pacific Northwest Event was comparable to the total stratospheric particle mass injected by the initial plume of a moderate volcanic eruption, characterized by a Volcanic Explosivity Index (VEI) between 3 and 4.9,10 The Kasatochi eruption (VEI of 4) in the Aleutian Islands of Alaska (United States, 7–8 August 2008) serves as a suitable reference event, given its proximity in latitude, occurrence during the same month, and injection to similar stratospheric altitudes. Kasatochi yielded an estimated 0.2–0.5 Tg of stratospheric aerosol particle mass during its initial eruptive cycles (see Methods for details). This plume was accompanied by approximately 1.2–2.2 Tg of sulfur dioxide (SO2) vapor mass.11,12 Total accumulated stratospheric particle mass caused by Kasatochi, accounting for the secondary conversion of SO2 to sulfate-based compounds over the weeks and months post-eruption,13 ranged between 2.0 and 3.0 Tg. Comparisons with stratospheric particle mass estimates from other significant events. Bars indicate the approximate uncertainty range of stratospheric aerosol particle mass injected. All mass estimates are displayed using a logarithmic scale (x-axis). Color scheme indicates event type and characteristics The stratospheric smoke injection from the Pacific Northwest Event was an order of magnitude larger (0.1 vs. 0.01 Tg) than the most significant single-event stratospheric intrusion of smoke aerosol recorded to date (Fig. 1), associated with multiple pyroCb updrafts (pulses) observed during the Chisholm Fire in Alberta, Canada in 2001.1 The Pacific Northwest Event also likely exceeded the total aerosol particle mass injected by all pyroCb activity (26 events) inventoried during the fire season of 2013 across western North America.7 Details of the extreme fire behavior and pyroCb activity observed during 2013 have been examined previously.3,6 Stratospheric intrusions from each individual pyroCb, such as those associated with the Silver and Pony/Elk fires, were two orders of magnitude lower in total smoke particle mass than the Pacific Northwest Event. However, the cumulative seasonal impact (0.04–0.12 Tg over 3 months) is still comparable to a relatively small volcanic eruption on its own. Evolution of the Pacific Northwest Event Each pyroCb contributing to the 12 August Pacific Northwest Event was detected by the constellation of Geostationary Operational Environmental Satellites (GOES) observing North America7 (Fig. 2). The first pyroCb occurred over the Diamond Creek Fire (https://inciweb.nwcg.gov/incident/5409/) in northern Washington State, United States at 21:00 UTC (14:00 local time [LT]). 
The four remaining pyroCbs were caused by a broad complex of wildfires in British Columbia, Canada, with the second and third pyroCbs developing at 23:00 UTC (16:00 LT), the fourth at 00:00 UTC (17:00 LT), and the fifth at 00:30 UTC (17:30 LT). All five pyroCb updrafts were active for 1–4 h over a total 5 h period, which is comparable to the duration of the Kasatochi eruption.11 Estimated stratospheric mass (0.1–0.3 Tg) is higher than the range of estimates for total dry particulate mass derived from the area burned by these fires (0.02–0.26 Tg). However, this latter estimate does not account for the total amount of condensed water mass simultaneously lofted into the stratosphere by each updraft or secondary particle formation within the plume.

Evolution of the 12 August 2017 Pacific Northwest pyroCb event from satellite. Grayscale shading indicates the thermal infrared (11 µm) brightness temperature (GOES-16), with colder, high-altitude cloud tops displayed in white. Green shading indicates the solar reflectivity of smaller cloud top particles relative to the 11 µm brightness temperature. PyroCb smoke particle perturbations therefore correspond with larger green values. Pink shading indicates all satellite fire detections for the preceding 24 h in native resolution (MODIS). The approximate position of the approaching surface cold front is highlighted in blue.

During the preceding days, an intensifying upper-level cyclone and surface cold front approached the west coast of North America. Wildfire behavior in the Pacific Northwest intensified ahead of this disturbance, driven by enhanced southwesterly surface winds and a relatively dry surface air mass, peaking on the afternoon of 12 August (Fig. 2). The approaching storm also induced transport of relatively moist and unstable air from the nearby Pacific Ocean over the dry, deep, and unstable near-surface boundary layer, thus providing a mechanism for large-scale rising motion in the middle and upper troposphere. These combined characteristics are a potent recipe for high-based convection (relatively dry thunderstorms over elevated terrain), which is consistent with a conceptual model for pyroCb development.3 Strong thermal buoyancy induced by the large and intense wildfires sustained smoke column updrafts necessary to penetrate the tropopause, thus providing a direct pathway for smoke aerosols to enter the lower stratosphere. At least two smaller pyroCbs occurred in the same area on 11 August, but they were not as deep as the 12 August Pacific Northwest Event, reinforcing the importance of meteorological context in supporting deep pyroCb updrafts. PyroCb cloud tops are characterized by relatively small particles induced by significant smoke aerosol loading and the concomitant seeding/nucleation from ambient water vapor.14,15 The relative reduction in cloud particle size compared with more typical regional thunderstorms causes an unusually large solar reflection by pyroCb cloud tops in the GOES shortwave infrared channel (4 µm). This characteristic is used to distinguish the pyroCb clouds in Fig. 2 through subtraction of the 11 µm thermal infrared brightness temperature from that at 4 µm. This brightness temperature difference (BTD) highlights the pyroCb clouds, and reveals that the four Canadian pyroCbs dominated the event. The Canadian pyroCbs also exhibited the lowest observed 11 µm cloud top brightness temperatures (below −60 °C, not evident in Fig. 2), indicating relatively stronger convective updrafts. 
Relating these cloud-top brightness temperatures to height using local thermodynamic profiles from radiosondes (weather balloons) reveals that these cold cloud tops reached altitudes of 11.5–12.5 km, which exceeded the local cold-point tropopause. Each of the four Canadian pyroCbs contributed an estimated 0.03–0.08 Tg to the initial total particle mass (0.1–0.3 Tg), which is roughly comparable to experiencing four Chisholm pyroCb events on a single afternoon (Fig. 1). Characteristics of the stratospheric smoke plume An expansive thunderstorm outflow anvil cloud, comprised of both smoke and water ice, persisted for nearly 12 h in GOES infrared imagery after cessation of the parent pyroCb updrafts. The Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), flown aboard NASA's polar-orbiting CALIPSO satellite,16 passed over the decaying cloud shield approximately 8 h after pyroCb cessation (10:45 UTC on 13 August), confirming the injection of smoke and water ice at least 1 km into the stratosphere (not shown). By the afternoon of 14 August (19:30 UTC), ice crystal influence on the CALIOP backscatter profile within the plume had diminished, and a distinct 1.5 km deep residual layer was present over northern Canada (Fig. 3). CALIOP attenuated laser backscatter measurements were converted to smoke particle mass density averages over the two successive orbital passes incident upon the young stratospheric plume at this time, providing an estimated range of 73–220 µg m−3 (see Methods). Characteristics of the young stratospheric smoke plume. Top panel shows profiles of 532 nm attenuated backscatter (km−1 sr−1) observed by CALIOP on 14 August 2017 for a daytime (ascending) CALIPSO overpass beginning 19:27 UTC. The dashed white line denotes the approximate tropopause altitude. Bottom panel shows near-coincident ultra-violet (UV) aerosol index (AI) observations from OMPS, with the CALIPSO satellite track superimposed in black. The horizontal extent of the stratospheric smoke plume (AI ≥ 15) is displayed in shades of yellow and red The horizontal extent of the stratospheric plume was estimated by applying the ultra-violet Aerosol Index (AI, dimensionless) retrieved from Ozone Mapping Profiler Suite (OMPS) Nadir Mapper (NM), flown aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite.17 AI is sensitive to the altitude of the light-absorbing smoke particles that make up pyroCb plumes (e.g., black and brown carbon), with stratospheric layers corresponding to the largest values (e.g., >15–20).1 On 14 August, AI values measured over a large portion of northern Canada exceeded those of any known pyroCb plume on record2 (Fig. 3). Comparison with coincident CALIOP profiling showed that pixels with an AI at or above 15 were consistent with smoke particles in the stratosphere, which mirrors the AI threshold applied to the 2001 Chisholm pyroCb smoke plume1 (Fig. 1). Integration of each individual pixel area coinciding with stratospheric smoke particles yielded an instantaneous plume area of nearly 800,000 km2. Stratospheric smoke transport and residence time The plume was transported downwind of North America by relatively strong winds associated with the upper-tropospheric jet stream, which also influences the circulation of the lower stratosphere. 
Figure 4a shows a true color satellite image (GeoColor) from the Advanced Baseline Imager (ABI) onboard the GOES-16 satellite over eastern North America on 17 August (image used with permission from the Cooperative Institute for Research in the Atmosphere [CIRA]). Smoke from the Pacific Northwest Event is evident between two mid-latitude cyclones aligned with the upper-tropospheric jet stream, extending from the Canadian Arctic to the Atlantic Ocean, near the coast of Maine and Nova Scotia. This high-altitude plume reached Europe by 19 August, Asia by 24 August, and encircled the entire Northern Hemisphere by 31 August (not shown). Stratospheric smoke transport and residence time. Top panel shows true color imagery from the GOES-16 GeoColor Algorithm (developed by CIRA) near sunrise on 17 August 2017 (11:45 UTC). The stratospheric smoke plume extends from Hudson Bay to the northern Atlantic Ocean. Bottom panel provides a time series of CALIOP relative attenuated 532 nm scattering ratio averaged over the Northern Hemisphere between 40°N and 80°N. The gaps in the time series represent CALIOP data outages in early August and mid-September 2017 A time-height analysis of CALIOP relative attenuated molecular scattering ratio averaged over the Northern Hemisphere between 40°N and 80°N shows background values near 1.0 in the lower stratosphere (16–20 km) prior to the Pacific Northwest Event (Fig. 4b). The initial impact of the Pacific Northwest Event is evidenced by its rapid increase above 1.2 from mid-August to early September. Scattering ratios remained higher than background through the middle of December, suggesting that significant levels of smoke remained in the lower stratosphere for approximately 4 months. In the absence of any significant volcanic particle mass reaching the stratosphere between August and December 2017, as well as any additional extreme pyroCb events, the prolonged period of elevated scattering ratios highlighted in Fig. 4b reflects a sustained perturbation of the lower stratospheric aerosol layer from the 12 August Pacific Northwest Event. Similar characteristics were observed following the Kasatochi eruption.10 The Pacific Northwest Event therefore constitutes a stratospheric intrusion similar to a moderate volcano, including total aerosol particle mass initially injected, the nature of its downwind transport, and seasonal persistence. This study provides a unique quantitative analysis of the aerosol particle mass injected into the lower stratosphere by a single extreme pyroCb event, revealing that its impact is comparable in magnitude to the initial stages of a moderate volcanic eruption. The cumulative stratospheric impact from all pyroCb activity observed during the fire season of 2013 in western North America was also significant. These results indicate that pyroCb activity, occurring as either large singular events or smaller events accumulated over a fire season, influence the lower stratosphere in a manner consistent with infrequent volcanic intrusions. 
Meteorological conditions driving pyroCb development suggest that stratospheric intrusions of smoke particles can be expected every fire season in favored regions of the Northern Hemisphere.3 The expansive stratospheric plume associated with the Australian "Black Saturday" pyroCb event18 (7 February 2009) shows that pyroCbs can also impact the Southern Hemisphere.19 While the community already monitors surface-based and lower tropospheric aerosol and chemical perturbing agents that influence the lower stratosphere, such as volcanic eruptions and halogen compounds, the Pacific Northwest Event demonstrates that pyroCbs also likely play a significant role in the climate system. The physical particle properties and chemical evolution of pyroCb smoke in the stratosphere remain highly uncertain, as do their optical characteristics and the potential for significant solar dimming effects. PyroCb smoke is comprised primarily of carbonaceous aerosol, with physical and optical properties very different from volcanic plumes comprised of ash and sulfate-based particles. Processing of pyroCb smoke during the lofting process into the stratosphere will change its composition and properties relative to surface or tropospheric smoke plumes. In addition, since pyroCb updrafts begin with strong surface inflow winds in a dry environment, additional aerosol particles such as mineral dust, may also be contributing factors.20,21 Detailed airborne sampling, in combination with ground and spaceborne observations, is therefore essential for improved understanding of pyroCb impacts on chemistry, radiation, secondary particle formation, and dynamic circulation. This research is further motivated by the recent increase in large wildfires observed across many boreal and temperate ecosystems,4,5 which suggests that stratospheric intrusions of smoke aerosol from pyroCbs may also be increasing. PyroCb detection PyroCb detection for the 2017 Pacific Northwest Event was based on the Advanced Baseline Imager (ABI) onboard the recently launched GOES-16 satellite. The ABI provides higher spatial resolution and temporal sampling compared with the previous generation of GOES sensors. Close proximity to one or more satellite-based fire detections (within 40–60 km) is the first requirement for pyroCb detection.7 This study is based on the fire products for the MODerate Resolution Imaging Spectroradiometer (MODIS) sensors aboard the Terra and Aqua satellites (MYD14/MOD14, collection 6).22 PyroCb detection was initiated only for areas with MODIS fires detected during a 24-h period preceding each GOES-16 observation. To be classified as pyroCb, a cloudy pixel must exhibit a thermal infrared (11 µm) brightness temperature below an approximated homogeneous liquid-water freezing threshold,14 which implies a high likelihood of vertical cloud development near the tropopause altitude. 
Extreme smoke aerosol loading and strong updrafts within a pyroCb induce a discernable microphysical shift (from indirect aerosol effects) toward smaller cloud droplets and ice particles when compared with those of typical convective storms occurring over elevated terrain.15 This effect is resolved from satellite by observing a relatively large daytime near-infrared (4 µm) reflectivity.14 Application of a 4 and 11 µm BTD therefore allows for distinction of pyroCb clouds from traditional convection.7 This methodology relies on reflected sunlight, but captures the overwhelming majority of pyroCb activity, since initiation after sunset is relatively infrequent.2,3 Testing with previous GOES imagers (e.g., GOES-15) has demonstrated the successful detection of individual pyroCb events, pyroCbs embedded within traditional convection, and multiple, short-lived pulses of activity.7 Imagery products based on these methods are posted in near-real-time on the Naval Research Laboratory's pyroCb website: http://www.nrlmry.navy.mil/pyrocb-bin/pyrocb.cgi. Stratospheric smoke mass Quantifying the total particle mass of the stratospheric aerosol layers injected by pyroCbs involves the combination of active and passive remote sensing techniques. Each calculation was based on observations 24–48 h after pyroCb cessation, when the stratospheric plume was comprised primarily of residual smoke aerosol rather than ice particles. Visible and infrared imagery from the GOES-16 ABI was examined to confirm that the study region was devoid of cloud contamination. Regional tropopause heights were determined using temperature profiles from local radiosondes. The vertical extent of the plume above the tropopause was constrained using vertical profiles of 532 nm backscatter and linear laser depolarization ratio from the CALIOP.16 The average particle mass density (Mρ) of the stratospheric smoke layer was calculated by $$M_\rho = \frac{{\beta R}}{\varepsilon },$$ where β is the average CALIOP Level 1 backscatter (units of m−1 sr−1), R is an assumed particulate extinction-to-backscatter lidar ratio (units of sr), and ε is the particle mass extinction coefficient (units of m2 g−1). The calculation of Mp is sensitive to the values of ε and R, which are dependent on the physical and optical properties of smoke particles. While smoke plumes from temperate and boreal forest fires have been extensively studied,23,24 particles lofted by pyroCbs are transported to the stratosphere through an environment of ice phase or possibly mixed-phase condensation. This process will likely alter the effective particle properties significantly from those observed at ground level. 
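As a rough numerical sketch of this mass estimate (Python): the layer-mean backscatter value below is an assumed, illustrative input rather than a value reported in this study, while the ε and R ranges, plume depth, and plume area are those described for this event.

```python
# Illustrative back-of-the-envelope version of the CALIOP-based mass estimate.
beta_mean = 0.011e-3        # layer-mean attenuated backscatter, m^-1 sr^-1 (assumed)
eps_range = (3.0, 6.0)      # particle mass extinction coefficient epsilon, m^2 g^-1
lidar_ratio = (40.0, 60.0)  # extinction-to-backscatter lidar ratio R, sr

# Mass density M_rho = beta * R / epsilon (g m^-3); min/max over the parameter ranges
m_rho_min = beta_mean * lidar_ratio[0] / eps_range[1]
m_rho_max = beta_mean * lidar_ratio[1] / eps_range[0]

depth_m = 1.5e3             # plume depth, m
area_m2 = 800_000 * 1e6     # plume area, m^2 (~800,000 km^2 from OMPS AI >= 15)

total_min_tg = m_rho_min * depth_m * area_m2 / 1e12  # 1 Tg = 1e12 g
total_max_tg = m_rho_max * depth_m * area_m2 / 1e12
print(f"M_rho: {m_rho_min*1e6:.0f}-{m_rho_max*1e6:.0f} ug m^-3")                  # ~73-220
print(f"Total stratospheric mass: {total_min_tg:.2f}-{total_max_tg:.2f} Tg")      # ~0.09-0.26
```

With this assumed backscatter the sketch reproduces the 73–220 µg m−3 density range and, after integrating over depth and area, the ~0.1–0.3 Tg total quoted above.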
The amount of mineral dust lofted within pyroCb plumes and its impact on optical properties also remains unresolved.20,21 An estimated range of 3.0–6.0 m2 g−1 was therefore used for ε, based on the available literature for aged boreal smoke plumes25 and accounting for potential entrainment of mineral dust.26 Similarly, a range of 40–60 sr was used for R to account for a potential mix of smoke particles, water/ice, and mineral dust.27 The horizontal area of the pyroCb plume was constrained using the ultra-violet (UV) AI (dimensionless) retrieved from the Ozone Mapping Profiler Suite (OMPS) Nadir Mapper (NM), which is sensitive to the altitude of light-absorbing aerosols.1 All OMPS pixels along each CALIOP overpass track were examined to derive an AI threshold for stratospheric aerosol presence (AI ≥ 15), which mirrored the value applied to the 2001 Chisholm pyroCb smoke plume.1 By assuming a constant altitude of the stratospheric smoke plume, this AI threshold was applied to identify all pixels that contain stratospheric smoke particles. Integration of each individual pixel area (OMPS pixels are ~50 km × 50 km at nadir) provided the total area of the stratospheric smoke plume. Integration of Mρ over the smoke plume depth and area provided an estimate of the total mass of smoke aerosol injected into the stratosphere by the pyroCb event. Adjustments were required to account for the fully attenuating smoke particle layer of the Pacific Northwest Event. An average smoke plume layer depth of 1.5 km was employed and Mρ was augmented by ~10% (based on increasing β) to account for additional smoke particles between the detection limit and the tropopause altitude. CALIOP backscatter data at 1064 nm (not shown) support this augmentation. As a closure experiment, radiative transfer calculations were employed to produce simulated AI values from the inputs for the CALIOP-based smoke particle mass estimates, including the range for ε and observed plume depth. AI was simulated using the OMPS method for two UV radiances (330 and 390 nm) computed by the Santa Barbara DISORT Atmospheric Radiative Transfer (SBDART) model.28 Particulate single scatter albedo and asymmetry parameters were assumed to be 0.9 and 0.6, respectively, with an Angstrom exponent of 1.6.29 This simulation produced AI values of 15–20, which match the pyroCb AI threshold for stratospheric smoke particles, thus reinforcing confidence in the CALIOP-based aerosol particle mass estimates. The simulated AI varies between −10 and +15% when choosing different optical properties relevant for a smoke and ice plume mixture in the UV. Calculations of the total smoke particle mass emitted by the fires contributing to pyroCb development were based on burned area observations derived from satellite and airborne observations. The Canadian fires burned 300,000 hectares during 9–16 August 2017 (http://cwfis.cfs.nrcan.gc.ca/report/archives?year=2017&month=08&day=16&process=Submit). The Diamond Creek fire in Washington State burned over 10,000 hectares (https://inciweb.nwcg.gov/incident/5409/). Total area burned on 12 August 2017 was estimated at 25–75% of each total, based on the tendency of extreme fire days to dominate total area burned in boreal regions.30 Fuel consumption was estimated in a range of 22–53.2 Mg per hectare, and smoke emissions (PM2.5) per fuel consumed were estimated in a range of 9.4–21.2 g kg−1.31
Combining these three factors and their associated ranges provided a range of the total dry smoke particle mass released on 12 August (0.02–0.26 Tg). Weather and fuel conditions associated with the rapid fire growth observed for these fires are consistent with high fuel consumption and smoke production per area burned, but this effect is poorly quantified. Surface-based estimates were therefore calculated using the full climatological range of fuel consumption and smoke release for fires in the study region. While these emission factors include some secondary particle production in the plume,32 they exclude coarse particles and sulfate aerosol particles converted from gaseous SO2. Literature estimates indicate these two components together contribute less than 15% to total particulate mass.24,31

2013 fire season
An inventory of pyroCb activity was developed for western North America using the fire season (June–August) of 2013.7 It contains 26 individual events, comprised of 31 individual pulses of intense pyroCb activity (e.g., separate updrafts/anvil clouds), all of which were capable of injecting smoke particles into the upper troposphere and/or lower stratosphere. A first-order estimate of the stratospheric smoke particle mass injected across the entire inventory was derived based on three pyroCb events that spanned the spectrum of observed activity during 2013. For example, the Pony/Elk Fire in Idaho (United States) on 9 August was a relatively small pyroCb. A wildfire complex in Manitoba, Canada produced a larger pyroCb on 3 July. The Silver Fire in New Mexico (United States) produced three pulses (convective updrafts) of pyroCb activity in rapid succession, and was likely the most extreme event in the sample. Each of these three events produced a stratospheric plume of aerosol particles that was well observed by CALIOP and OMPS from 24 to 48 h after initial injection. Stratospheric particle mass estimates were produced for each case using the same methodology as the 2017 Pacific Northwest Event. The range of mean mass estimates for these three events (based on ε and R) was used as an estimate for the remaining pyroCb events in the inventory. Summation over the full inventory provided the total seasonal stratospheric smoke particle mass injected.

Volcanic aerosol mass
Similar to the pyroCb methodology, stratospheric aerosol particle mass estimates for the eruption of Mt. Kasatochi (Alaska, 7–8 August 2008) began with CALIOP observations nearly 48 h after the eruption (at least one complete diurnal cycle). Tropopause heights were determined using temperature profiles from the closest radiosonde observations to the plume. However, several modifications were required. 
Mass estimates for the initial eruptive plume injection account for two layers of stratospheric aerosol particles observed at 11.5–13 and 15–16 km.11 A range of 0.6–0.8 m2 g−1 was used for ε, based on observations of volcanic plumes within 1–3 days of an eruption.33 The value for R was fixed at 60 sr.34 Initial estimates based on these methods (0.4–0.5 Tg) are consistent with particulate mass estimates retrieved independently from infrared satellite sensors within 48 h of the Kasatochi eruption.35 The Kasatochi stratospheric plume of aerosol particles was accompanied by approximately 1.2–2.2 Tg of sulfur dioxide (SO2) vapor mass.11,12 Conversion of this SO2 component (molar mass = 64) to sulfates in the lower stratosphere increases the total particulate mass estimate over the ensuing weeks and months post-eruption.36 To estimate this additional mass, it was assumed that 75% of the SO2 converted to sulfuric acid (molar mass = 98) and 25% to water,13 producing a total accumulated stratospheric particle mass estimate between 2.0 and 3.0 Tg. The lower bound of the initial particle mass estimated from CALIOP (0.4 Tg) was also modified to account for potential SO2 to sulfate-based secondary particle formation between the initial injection and the CALIOP observation. Over a single diurnal cycle, we estimate a conversion of 0.1 Tg of SO2 vapor to 0.2 Tg particle mass.12,13 When subtracted, this yields a 0.2 Tg lower bound value for the initial stratospheric injection of Kasatochi. Neither the lower nor upper-bound estimates consider potential particle fallout during this period. CALIOP data processing Stratospheric relative 532 nm attenuated molecular scattering ratios were derived from CALIOP based on methods described in the available literature.37 Level 1B attenuated 532 nm lidar backscatter (km−1 sr−1) data were aggregated at 1-day intervals for all measurements collected within the lower stratosphere (16–20 km above mean sea level) over the Northern Hemisphere between 40°N and 80°N. Molecular profiles were derived using Goddard Modeling and Assimilation Office (GMAO) Goddard Earth Observing System Model, Version 5 (GEOS-5), meteorological reanalysis thermal profiles included in the CALIOP Level 1 files. The relative attenuated molecular scattering ratio is defined as total attenuated particle backscatter normalized by the corresponding attenuated molecular profile. Ozone is not explicitly defined, however, nor is any coincident aerosol transmission occurring between 20 and 35 km, and thus these profiles are specifically defined as relative. To suppress noise, these data were passed through a Gaussian smoothing filter, the so-called "Von Hann Window", and rendered at 25 m vertical resolution and 0.05-day temporal resolution using corresponding spatial and temporal half-widths of 375 m and 1.0 days. The CALIOP data that support the findings of this study are available from https://eosweb.larc.nasa.gov/project/calipso/calipso_table. OMPS data are available from https://ozoneaq.gsfc.nasa.gov/data/omps/. GOES satellite data are available from NOAA CLASS archive https://class.noaa.gov. Imagery products based on GOES data, including the pyroCb detection product, are posted in near-real-time on the Naval Research Laboratory's pyroCb website: http://www.nrlmry.navy.mil/pyrocb-bin/pyrocb.cgi. The standard MODIS fire products (MOD14) can be obtained from https://lpdaac.usgs.gov/dataset_discovery/modis/modis_products_table/mod14_v006. 
Derived stratospheric mass estimates are available from the corresponding author upon request. Fromm, M. et al. Stratospheric impact of the Chisholm pyrocumulonimbus eruption: 1. Earth-viewing satellite perspective. J. Geophys. Res. Atmos. 113, https://doi.org/10.1029/2007jd009153 (2008). Fromm, M. et al. The untold story of pyrocumulonimbus. Bull. Am. Meteorol. Soc. 91, 1193–1209 (2010). Peterson, D. A., Hyer, E. J., Campbell, J. R., Solbrig, J. E. & Fromm, M. D. A conceptual model for development of intense pyrocumulonimbus in western North America. Mon. Weather Rev. 145, 2235–2255 (2017). Kasischke, E. S. & Turetsky, M. R. Recent changes in the fire regime across the North American boreal region—spatial and temporal patterns of burning across Canada and Alaska (vol 33, art no L09703, 2006). Geophys. Res. Lett. 33, https://doi.org/10.1029/2006gl026946 (2006). Westerling, A. L. Increasing western US forest wildfire activity: sensitivity to changes in the timing of spring (vol 371, 20150178, 2016). Philos. Trans. R. Soc. B 371, https://doi.org/10.1098/rstb.2016.0373 (2016). Peterson, D. A. et al. The 2013 rim fire implications for predicting extreme fire spread, pyroconvection, and smoke emissions. Bull. Am. Meteorol. Soc. 96, 229–247 (2015). Peterson, D. A. et al. Detection and inventory of intense pyroconvection in western North America using GOES-15 Daytime Infrared Data. J. Appl. Meteorol. Climatol. 56, 471–493 (2017). Solomon, S. et al. The persistently variable "background" stratospheric aerosol layer and global climate change. Science 333, 866–870 (2011). Ridley, D. A. et al. Total volcanic stratospheric aerosol optical depths and implications for global climate change. Geophys. Res. Lett. 41, 7763–7769 (2014). Andersson, S. M. et al. Significant radiative impact of volcanic aerosol in the lowermost stratosphere. Nat. Commun. 6, https://doi.org/10.1038/ncomms8692 (2015). Kristiansen, N. I. et al. Remote sensing and inverse transport modeling of the Kasatochi eruption sulfur dioxide cloud. J. Geophys. Res. Atmos. 115, https://doi.org/10.1029/2009jd013286 (2010). Krotkov, N. A., Schoeberl, M. R., Morris, G. A., Carn, S. & Yang, K. Dispersion and lifetime of the SO2 cloud from the August 2008 Kasatochi eruption. J. Geophys. Res. Atmos. 115, https://doi.org/10.1029/2010jd013984 (2010). Yue, G. K., Poole, L. R., Wang, P. H. & Chiou, E. W. Stratospheric aerosol acidity, density, and refractive-index deduced from SAGE-II and NMC temperature data. J. Geophys. Res. Atmos. 99, 3727–3738 (1994). Rosenfeld, D. et al. The Chisholm firestorm: observed microstructure, precipitation and lightning activity of a pyro-cumulonimbus. Atmos. Chem. Phys. 7, 645–659 (2007). Chang, D. et al. Comprehensive mapping and characteristic regimes of aerosol effects on the formation and evolution of pyro-convective clouds. Atmos. Chem. Phys. 15, 10325–10348 (2015). Winker, D. M. et al. The CALIPSO mission: a global 3D view of aerosols and clouds. Bull. Am. Meteorol. Soc. 91, 1211–1229 (2010). Flynn, L. et al. Performance of the Ozone Mapping and Profiler Suite (OMPS) products. J. Geophys. Res. Atmos. 119, 6181–6195 (2014). Cruz, M. G. et al. Anatomy of a catastrophic wildfire: The Black Saturday Kilmore East fire in Victoria, Australia. For. Ecol. Manag. 284, 269–285 (2012). Vernier, J. P. et al. Major influence of tropical volcanic eruptions on the stratospheric aerosol layer during the last decade. Geophys. Res. Lett. 38, https://doi.org/10.1029/2011gl047563 (2011). Lindsey, D. T., Miller, S. D. & Grasso, L. 
The impacts of the 9 April 2009 dust and smoke on convection. Bull. Am. Meteorol. Soc. 91, 991–995 (2010). Lerach, D. G. & Cotton, W. R. Simulating southwestern US desert dust influences on supercell thunderstorms. Atmos. Res. 204, 78–93 (2018). Giglio, L., Schroeder, W. & Justice, C. O. The collection 6 MODIS active fire detection algorithm and fire products. Remote Sens. Environ. 178, 31–41 (2016). Reid, J. S. et al. A review of biomass burning emissions. Part III: Intensive optical properties of biomass burning particles. Atmos. Chem. Phys. 5, 827–849 (2005). Reid, J. S., Koppmann, R., Eck, T. F. & Eleuterio, D. P. A review of biomass burning emissions. Part II: Intensive physical properties of biomass burning particles. Atmos. Chem. Phys. 5, 799–825 (2005). Nikonovas, T., North, P. R. J. & Doerr, S. H. Particulate emissions from large North American wildfires estimated using a new top-down method. Atmos. Chem. Phys. 17, 6423–6438 (2017). Maring, H., Savoie, D. L., Izaguirre, M. A., Custals, L. & Reid, J. S. Mineral dust aerosol size distribution change during atmospheric transport. J. Geophys. Res. Atmos. 108, https://doi.org/10.1029/2002jd002536 (2003). Ackermann, J. The extinction-to-backscatter ratio of tropospheric aerosol: a numerical study. J. Atmos. Ocean. Technol. 15, 1043–1050 (1998). Ricchiazzi, P., Yang, S. R., Gautier, C. & Sowle, D. SBDART: a research and teaching software tool for plane-parallel radiative transfer in the Earth's atmosphere. Bull. Am. Meteorol. Soc. 79, 2101–2114 (1998). Calvo, A. I. et al. Radiative forcing of haze during a forest fire in Spain. J. Geophys. Res. Atmos. 115, https://doi.org/10.1029/2009jd012172 (2010). Flannigan, M. D., Logan, K. A., Amiro, B. D., Skinner, W. R. & Stocks, B. J. Future area burned in Canada. Clim. Change 72, 1–16 (2005). Akagi, S. K. et al. Emission factors for open and domestic biomass burning for use in atmospheric models. Atmos. Chem. Phys. 11, 4039–4072 (2011). Reid, J. S. et al. Physical, chemical, and optical properties of regional hazes dominated by smoke in Brazil. J. Geophys. Res. Atmos. 103, 32059–32080 (1998). Turnbull, K. et al. A case study of observations of volcanic ash from the Eyjafjallajokull eruption: 1. In situ airborne observations. J. Geophys. Res. Atmos. 117, https://doi.org/10.1029/2011jd016688 (2012). Prata, A. T., Young, S. A., Siems, S. T. & Manton, M. J. Lidar ratios of stratospheric volcanic ash and sulfate aerosols retrieved from CALIOP measurements. Atmos. Chem. Phys. 17, 8599–8618 (2017). Corradini, S., Merucci, L., Prata, A. J. & Piscini, A. Volcanic ash and SO2 in the 2008 Kasatochi eruption: retrievals comparison from different IR satellite sensors. J. Geophys. Res. Atmos. 115, https://doi.org/10.1029/2009jd013634 (2010). Robock, A. Volcanic eruptions and climate. Rev. Geophys. 38, 191–219 (2000). Vernier, J. P. et al. Tropical stratospheric aerosol layer from CALIPSO lidar observations. J. Geophys. Res. Atmos. 114, https://doi.org/10.1029/2009jd011946 (2009). We are grateful to Daniel Lindsey at the National Oceanic and Atmospheric Administration (NOAA) Center for Satellite Applications and Research and the Cooperative Institute for Research in the Atmosphere (CIRA) for providing GOES-16 GeoColor imagery of the stratospheric smoke plume. We also thank Melinda Surratt at the Naval Research Laboratory for assistance with satellite data processing, as well as Daniel Stern (UCAR) and Sam Brand (SAIC) for assistance with editing. This research was performed while D.A.P. 
held a Karle's Research Fellowship Award at the Naval Research Laboratory in Monterey, CA. Additional support was provided by ONR32, under award N0001417WX00777. M.T.D. was supported by NASA contract NNG17HP01C. Naval Research Laboratory, 7 Grace Hopper Avenue, Monterey, CA, 93943, USA David A. Peterson, James R. Campbell, Edward J. Hyer & Joshua H. Cossuth Naval Research Laboratory, 4555 Overlook Avenue SW, Washington, DC, 20375, USA Michael D. Fromm & George P. Kablick III Science Systems and Applications, Inc. (SSAI), 10210 Greenbelt Road, Suite 600, Lanham, MD, 20706, USA Matthew T. DeLand David A. Peterson James R. Campbell Edward J. Hyer Michael D. Fromm George P. Kablick III Joshua H. Cossuth D.A.P assembled each data source, designed the experiments, conducted meteorological analyses, and wrote the narrative. J.R.C. provided all lidar data and supported each step of the analysis. E.J.H. provided total smoke mass estimates for relevant fires and supported data interpretation. M.D.F. and G.P.K supported stratospheric aerosol mass calculations for both pyroCb and volcanic plumes. J.H.C. produced GOES-16 satellite imagery and M.T.D. provided and analyzed OMPS aerosol index data. Correspondence to David A. Peterson. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Peterson, D.A., Campbell, J.R., Hyer, E.J. et al. Wildfire-driven thunderstorms cause a volcano-like stratospheric injection of smoke. npj Clim Atmos Sci 1, 30 (2018). https://doi.org/10.1038/s41612-018-0039-3 Revised: 21 June 2018 Pyrocumulonimbus Stratospheric Plume Injections Measured by the ACE‐FTS C. D. Boone , P. F. Bernath & M. D. Fromm Geophysical Research Letters (2020) Wildfire smoke in the lower stratosphere identified by in situ CO observations Joram J. D. Hooghiem , Maria Elena Popa , Thomas Röckmann , Jens-Uwe Grooß , Ines Tritscher , Rolf Müller , Rigel Kivi & Huilin Chen Atmospheric Chemistry and Physics (2020) A Decadal Climatology of Chemical, Physical, and Optical Properties of Ambient Smoke in the Western and Southeastern United States Qijing Bian , Bonne Ford , Jeffrey R. Pierce & Sonia M. 
Kreidenweis Journal of Geophysical Research: Atmospheres (2020) Smoke of extreme Australian bushfires observed in the stratosphere over Punta Arenas, Chile, in January 2020: optical thickness, lidar ratios, and depolarization ratios at 355 and 532 nm Kevin Ohneiser , Albert Ansmann , Holger Baars , Patric Seifert , Boris Barja , Cristofer Jimenez , Martin Radenz , Audrey Teisseire , Athina Floutsi , Moritz Haarig , Andreas Foth , Alexandra Chudnovsky , Ronny Engelmann , Félix Zamorano , Johannes Bühl & Ulla Wandinger Evaluation of Stratospheric Intrusions and Biomass Burning Plumes on the Vertical Distribution of Tropospheric Ozone Over the Midwestern United States J. L. Wilkins , B. Foy , A. M. Thompson , D. A. Peterson , E. J. Hyer , C. Graves , J. Fishman & G. A. Morris Editorial Summary ATMOSPHERIC SCIENCE: Fire-driven injection of smoke into the stratosphere Pyrocumulonimbus—thunderstorms spawned from fire—inject 10× the mass of smoke particles into the upper atmosphere than once thought. When hot enough, wildfires can trigger convective updrafts, the depths of which extend well into the lower stratosphere. David Peterson and colleagues from the Naval Research Laboratory use satellite observations to quantify the impact of pyrocumulonimbus on troposphere-to-stratosphere exchange of smoke aerosols, focusing on five events that occurred in the Pacific Northwest in August 2017. The mass of smoke aerosols injected into the lower stratosphere is estimated to be 0.1–0.3 Tg, an order of magnitude larger than previous assessments, and roughly equal to that expected from a medium-sized volcanic eruption. With observed and projected increases in wildfires, any subsequent intrusions of smoke particles into the stratosphere could have considerable impacts on the global climate. Climate and Weather Extremes For Authors & Referees npj Climate and Atmospheric Science ISSN 2397-3722 (online)
CommonCrawl
\begin{document} \renewcommand{\addcontentsline}[3]{} \title{Landauer vs. Nernst: What is the True Cost of Cooling a Quantum System?} \author{Philip Taranto\,\orcidlink{0000-0002-4247-3901}} \email{[email protected]} \thanks{P. T. and F. B. contributed equally.} \affiliation{Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo City, Tokyo 113-0033, Japan} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Faraj Bakhshinezhad\,\orcidlink{0000-0002-0088-0672}} \thanks{P. T. and F. B. contributed equally.} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Department of Physics and Nanolund, Lund University, Box 118, 221 00 Lund, Sweden} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Andreas Bluhm\,\orcidlink{0000-0003-4796-7633}} \thanks{A. B. and R. S. contributed equally.} \affiliation{Univ.\ Grenoble Alpes, CNRS, Grenoble INP, LIG, 38000 Grenoble, France} \affiliation{QMATH, Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen, Denmark} \author{Ralph Silva\,\orcidlink{0000-0002-4603-747X}} \thanks{A. B. and R. S. contributed equally.} \affiliation{Institute for Theoretical Physics, ETH Z\"urich, Wolfgang-Pauli-Str. 27, Z\"urich, Switzerland} \author{Nicolai Friis\,\orcidlink{0000-0003-1950-8640}} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Maximilian P.~E. Lock\,\orcidlink{0000-0002-8241-8202}} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Giuseppe Vitagliano\,\orcidlink{0000-0002-5563-3222}} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Felix C. 
Binder\,\orcidlink{0000-0003-4483-5643}} \affiliation{School of Physics, Trinity College Dublin, Dublin 2, Ireland} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \author{Tiago Debarba\,\orcidlink{0000-0001-6411-3723}} \affiliation{Departamento Acad{\^ e}mico de Ci{\^ e}ncias da Natureza, Universidade Tecnol{\'o}gica Federal do Paran{\'a} (UTFPR), Campus Corn{\'e}lio Proc{\'o}pio, Avenida Alberto Carazzai 1640, Corn{\'e}lio Proc{\'o}pio, Paran{\'a} 86300-000, Brazil} \author{Emanuel Schwarzhans\,\orcidlink{0000-0001-8259-9720}} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Fabien Clivaz\,\orcidlink{0000-0003-0694-8575}} \affiliation{Institut f{\"u}r Theoretische Physik und IQST, Universit{\"a}t Ulm, Albert-Einstein-Allee 11, D-89069 Ulm, Germany} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Marcus Huber\,\orcidlink{0000-0003-1985-4623}} \email{[email protected]} \affiliation{Atominstitut, Technische Universit{\"a}t Wien, 1020 Vienna, Austria} \affiliation{Institute for Quantum Optics and Quantum Information - IQOQI Vienna, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \date{\today} \begin{abstract} Thermodynamics connects our knowledge of the world to our capability to manipulate and thus to control it. This crucial role of control is exemplified by the third law of thermodynamics, Nernst's unattainability principle, which states that infinite resources are required to cool a system to absolute zero temperature. But what are these resources and how should they be utilised? And how does this relate to Landauer's principle that famously connects information and thermodynamics? We answer these questions by providing a framework for identifying the resources that enable the creation of pure quantum states. We show that perfect cooling is possible with Landauer energy cost given infinite time or control complexity. However, such optimal protocols require complex unitaries generated by an external work source. Restricting to unitaries that can be run solely via a heat engine, we derive a novel Carnot-Landauer limit, along with protocols for its saturation. This generalises Landauer's principle to a fully thermodynamic setting, leading to a unification with the third law and emphasises the importance of control in quantum thermodynamics. \end{abstract} \maketitle \pdfbookmark[1]{Introduction}{Introduction} \label{sec:introduction} \section{Introduction} \emph{What is the cost of creating a pure state?} Pure states appear as ubiquitous idealisations in quantum information processing and preparing them with high fidelity is essential for quantum technologies such as reliable quantum communication~\cite{GisinRibordyTittelZbinden2002, PirandolaEtAl2020}, high-precision quantum parameter estimation~\cite{GiovannettiLloydMaccone2011, TothApellaniz2014, DemkowiczDobrzanskiJarzynaKolodynski2015}, and fault-tolerant quantum computation~\cite{Preskill1997, Preskill2018}. 
Fundamentally, pure states are prerequisites for ideal measurements~\cite{Guryanova2020} and precise timekeeping~\cite{Erker_2017,SchwarzhansLockErkerFriisHuber2021}. To answer the above question, one could turn to Landauer's principle, stating that erasing a bit of information has an \emph{energy} cost of at least $k_B T \log(2)$~\cite{Landauer_1961}. Alternatively, one could consult Nernst's unattainability principle (the third law of thermodynamics)~\cite{Nernst_1906}, stating that cooling a physical system to its ground state requires diverging resources. At the outset, it seems that these statements are at odds with one another. However, Landauer's protocol requires infinite time, thus identifying \emph{time} as a resource according to the third law~\cite{Ticozzi_2014,Masanes_2017,Wilming_2017,Freitas_2018,Scharlau_2018}. \emph{Does this mean either infinite energy or time are needed to prepare a pure state?} The perhaps surprising answer we give here is: \emph{no}. We show that finite energy and time suffice to perfectly cool any quantum system and we identify the previously hidden resource\textemdash \emph{control complexity}\textemdash that must diverge (in the spirit of Nernst's principle) to do so. Intuitively, the control complexity of a protocol refers to the structure of machine energy gaps that the cooling unitary must couple the system to; we demonstrate that this energy-level spectrum must approximate a continuum in order to cool with minimal time and energy costs. In short, the ultimate limit on the energetic cost of cooling is still provided by the Landauer limit, but in order to achieve it, either time or control complexity must diverge. At the same time, heat fluctuations and short coherence times in quantum technologies~\cite{Acin_2018} demand that both energy and time are not only finite, but minimal. Therefore, in addition to proving the necessity of diverging control complexity for perfect cooling with minimal time and energy, we develop explicit protocols that saturate the ultimate limits. We demonstrate that mitigating overall heat dissipation comes at the practical cost of controlling fine-tuned interactions that require a \emph{coherent} external work source, i.e., a quantum battery~\cite{Aberg2013,Skrzypczyk_2014,LostaglioJenningsRudolph2015,Friis2018,CampaioliPollockVinjanampathy2019}. From a thermodynamic perspective, this may seem somewhat unsatisfactory: nonequilibrium resources imply that the total system is not closed, and the optimal protocol (saturating the Landauer bound) is reminiscent of a Maxwellian demon with perfect control. \begin{figure*}\label{fig:schematic} \end{figure*} Accordingly, we also consider an \emph{incoherent} control setting restricted to global energy-conserving unitaries with a heat bath as thermodynamic energy source. This setting corresponds to minimal overall control, where interactions need only be switched on and off to generate transformations, i.e., a heat engine alone drives the dynamics~\cite{Scovil1959,Kosloff2014,Uzdin2015,Mitchison2019,Woods2019maximumefficiencyof}. The incoherent-control setting is therefore fully thermodynamically consistent inasmuch as both the machine state is assumed to be thermal (and to rethermalize between control steps) \emph{and} the permitted control operations are those implementable solely via a heat engine. 
In this paradigm, we show that the Landauer bound is not attainable, subsequently derive a novel limit---which we dub the \emph{Carnot-Landauer} bound---and construct protocols that saturate it, thereby establishing its significance. The Carnot-Landauer bound follows from an equality phrased in terms of entropic and energetic quantities that must hold for any state transformation in the incoherent control paradigm; in this sense, the Carnot-Landauer equality adapts the equality version of Landauer's principle developed in Ref.~\cite{Reeb_2014} to a fully (quantum) thermodynamic setting. Our work thus both generalises Landauer’s erasure principle and, at the same time, unifies it with the laws of thermodynamics. By accounting for control complexity, we emphasise a crucial resource that is oftentimes overlooked but, as we show, must be taken into account for any operationally meaningful theory of thermodynamics. Here, we focus on the asymptotic setting that allows us to connect this resource with Nernst's unattainability principle. Beyond the asymptotic case, the gained insights also open the door to a better understanding of the intricate relationship between energy, time, and control complexity when all resources are finite, which will be crucial for practical applications; we additionally provide a preliminary analysis to this end. Lastly, our protocols saturating the Carnot-Landauer bound pave the way for thermodynamically driven (i.e., minimal-control) quantum technologies, which, by mitigating the cost of control at the very outset, could lead to tangible advantages. \pdfbookmark[2]{Overview and Summary of Results}{Overview and Summary of Results} \subsection*{Overview \& Summary of Results} \label{sec:results} Loosely speaking, there are two types of thermodynamic laws: those, like the second law, that bound (changes of) characteristic quantities during thermodynamic processes, and those, like the third law, which state the impossibility of certain tasks. Landauer's principle is of the former kind (indeed, it can be rephrased as a version of the second law), associating a minimal heat dissipation to any logically irreversible process, thereby placing a fundamental limit on the energy cost of computation. The paradigmatic logically irreversible process is that of erasing information, i.e., resetting an arbitrary state to a blank register. From a physics perspective, said task can be rephrased as \emph{perfectly cooling} a system to the ground state, or more generally, taking an initially full-rank state to a rank-deficient one.\footnote{Low-temperature thermal states correspond to those with low information content, as they have low entropy or small effective support; viewing cooling more broadly (i.e., not restricting to thermal states and allowing for arbitrary Hamiltonians), we see that cooling indeed encompasses information erasure: States with smaller effective support are ``colder'' than those with greater support according to any meaningful notion of ``cool'' (see Ref.~\cite{Clivaz2020Thesis}).} Note that although there is, in general, a distinction between physical cooling and information erasure, in this paper we focus on erasing quantum information encoded in fundamental degrees of freedom rather than in logical macrostate sectors, and accordingly use the terms somewhat interchangeably. 
This is justified because in either case, the ultimate limitation (be it cooling to absolute zero or perfectly erasing information) requires a rank-decreasing process, which is what we formally analyse. Nernst's unattainability principle is of the latter kind of thermodynamic law, stating that perfectly cooling a system requires diverging resources. The resources typically considered are energy and time, whose asymptotic trade-off relation is relatively well established: on the one hand, perfect cooling can be achieved in finite time at the expense of an energy cost that diverges as the ground state is approached; on the other hand, the energy cost can be minimised by implementing a quasistatic process that saturates the Landauer limit but takes infinitely long.\footnote{Note, however, that although the asymptotic trade-off relationship is known, the connection between energy and time in the finite-resource setting remains unresolved: For instance, if one uses twice the amount of energy, it is not clear how much faster a given protocol can be implemented; we provide some preliminary insight to such questions in Sec.~\ref{subsec:imperfectcooling}.} These two types of thermodynamic laws are intimately related, but details of their interplay have remained elusive: under which conditions can the Landauer bound be saturated and what are the minimal resources required to do so? Which protocols asymptotically create pure states with given (diverging) resources? What type of control do such protocols require and how difficult are they to implement in practice? We address these questions by considering the task of cooling a quantum system in two extremal control paradigms (see Fig.~\ref{fig:schematic}): One driven by a \emph{coherent} work source and the other by an \emph{incoherent} heat engine. After laying out the framework, we proceed to analyse the relationship between the aforementioned three resources for cooling. A core idea of this paper originates from the observation that it is possible to perfectly cool a physical system with both finite energy and time. Although said observation is simple in nature inasmuch as it can be obtained by a shift in perspective of Landauer's original protocol, its consequences run deep: indeed, the apparent tension between Landauer cooling and Nernst's unattainability principle that arises when only energy and time are considered as resources is resolved via the inclusion of control complexity as a consideration. Subsequently, we define a meaningful notion of control complexity in terms of the energy-level structure of the machine that the system must be coupled to throughout the cooling protocol and demonstrate its thermodynamic consistency by showing that it indeed must diverge to cool the system to the ground state at minimal energy cost, thereby reconciling the viewpoints of Landauer and Nernst. Having established the trinity of relevant resources, we present three main results: \begin{enumerate} \item Perfect cooling is possible with coherent control provided either energy, time, or control complexity diverge. In particular, it is possible in finite time and at Landauer energy cost with diverging control complexity. \item Perfect cooling is possible with incoherent control, i.e., with a heat engine, provided either time or control complexity diverge. On the other hand, it is impossible with both finite time and control complexity, regardless of the amount of energy drawn from the heat bath. 
\item No process driven by a finite-temperature heat engine can (perfectly) cool a quantum system at the Landauer limit. Nonetheless, the Carnot-Landauer limit, which we introduce here (as a consequence of a stronger equality), can be saturated for any heat bath, given either diverging time or control complexity. \end{enumerate} In the following, we discuss each of these results in turn in more detail and provide a systematic study concerning the asymptotic interplay of energy, time, and control complexity as thermodynamic resources in two extremal control paradigms, as well as develop insight into the finite-resource regime for some special cases. We begin by outlining the framework. \pdfbookmark[2]{Framework: Cooling a Physical System}{Framework: Cooling a Physical System} \section{Framework: Cooling a Physical System} \label{subsec:framework} Consider a target system $\mathcal{S}$ in an initial state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ described by a unit-trace, positive semidefinite operator with associated Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. An auxiliary machine $\mathcal{M}$, initially uncorrelated with $\mathcal{S}$ and in equilibrium with a reservoir at inverse temperature $\beta := \tfrac{1}{k_B T}$, is used to cool the target system. The initial state of $\mathcal{M}$ is thus of Gibbs form, \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) := \frac{e^{-\beta H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})}, \end{align} where $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ is the machine Hamiltonian and $\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}):=\tr{e^{-\beta H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}}$ its partition function. Throughout this paper we consider only Hamiltonians with discrete spectra, i.e., with an associated separable Hilbert space that has a countable energy eigenbasis. Moreover, for the most part we consider finite-dimensional systems (or sequences thereof) and deal with infinite-dimensional systems separately. As shown in Fig.~\ref{fig:schematic}, a single step of a cooling process comprises two subprocedures: first, a joint unitary is implemented during the \emph{control} step; second, the machine \emph{rethermalises} to the ambient temperature. A cooling \emph{protocol} is determined by the initial conditions and any concatenation of such primitives\footnote{One could refer to both $\mathcal{M}$ \emph{and} the transformations applied as the \emph{machine} and call the system $\mathcal{M}$ itself the working \emph{medium} inasmuch as the latter passively facilitates the process, in line with conventional parlance; however, we use the terminology established in the pertinent literature.}. We consider two extremal control paradigms corresponding to two classes of allowed global transformations. The \emph{coherent control} paradigm permits arbitrary unitaries on $\mathcal{S}\mathcal{M}$; in general, these change the total energy but leave the global entropy invariant and thus require an external work source $\mathcal{W}$. At the other extreme is the \emph{incoherent control} paradigm, where the energy source is a heat bath. 
Here, the machine $\mathcal{M}$ is bipartitioned: one part, $\mathcal{C}$, is connected to a \emph{cold} bath at inverse temperature $\beta$, which serves as a sink for all energy and entropy flows; the other, $\mathcal{H}$, is connected to a \emph{hot} bath at inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \leq \beta$, which provides energy. The composite system $\mathcal{S}\mathcal{C}\mathcal{H}$ is closed and thus global unitary transformations are restricted to be energy conserving. The temperature gradient causes a natural heat flow away from the hot bath, which carries maximal entropic change with it. Cooling protocols in this setting can be run with minimal external control, i.e., they require only the switching on and off of interactions. \pdfbookmark[1]{Coherent Control}{Coherent Control} \section{Coherent Control } \label{sec: coherentcontrol} We begin by considering cooling with coherently controlled resources (see Fig.~\ref{fig:schematic}, top panel). We first analyse energy, time, and control complexity as resources that can be traded off against one another in order to optimise cooling performance, before focusing more specifically on the nature and role of control complexity. \pdfbookmark[2]{Energy, Time, and Control Complexity as Resources}{Energy, Time, and Control Complexity as Resources} \subsection{Energy, Time, and Control Complexity as Resources} \label{subsec:energytimecontrolcomplexity} In the coherent-control setting, a transformation $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \to \varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ is enacted via a unitary $U$ on $\mathcal{S}\mathcal{M}$ involving a thermal machine $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$, i.e., \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime := \ptr{\mathcal{M}}{U (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) U^\dagger}. \end{align} For such a transformation, there are two energy costs contributing to the total energy change, which must be drawn from a work source $\mathcal{W}$. The first is the energy change of the target $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} := \tr{H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})}$; the second is that of the machine $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} := \tr{H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})}$, where $\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} := \ptr{\mathcal{S}}{U (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) U^\dagger}$. 
The latter is associated with the heat dissipated into the environment and is given by~\cite{Reeb_2014} \begin{align}\label{eq:landauerequality} \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + I(\mathcal{S}: \mathcal{M})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}), \end{align} where $S(\varrho) := - \tr{\varrho \log (\varrho)}$ is the von Neumann entropy, $\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} := S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\prime)$\footnote{Note the differing sign conventions (denoted by the tilde) that we use for changes in energies, $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} := E_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^\prime - E_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$, and in entropies, $\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} := S_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} - S_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^\prime$, such that energy \emph{increases} and entropy \emph{decreases} are positive.}, $I(\mathcal{A}:\mathcal{B})_{\varrho_{\mathcal{A} \mathcal{B}}} := S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}})+S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{B}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A} \mathcal{B}$}}})$ (with marginals $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$/$\mathcal{B}$}}} := \ptr{\mathcal{B}/\mathcal{A}}{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A} \mathcal{B}$}}}}$) is the mutual information between $\mathcal{A}$ and $\mathcal{B}$, and $D(\varrho\|\sigma) := \tr{\varrho \log(\varrho)} - \tr{\varrho \log(\sigma)}$ is the relative entropy of $\varrho$ with respect to $\sigma$, with $D(\varrho\|\sigma) := \infty$ if $\textup{supp}[\varrho] \nsubseteq \textup{supp}[\sigma]$. We derive Eq.~\eqref{eq:landauerequality} and its generalisation to the incoherent-control setting in Appendix~\ref{app:equalityformsofthecarnot-landauerlimit}. The mutual information is non-negative and vanishes iff $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A} \mathcal{B}$}}} = \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{B}$}}}$; similarly, the relative entropy is non-negative and vanishes iff $\varrho = \sigma$. Dropping these terms leads to the Landauer bound~\cite{Landauer_1961} \begin{align}\label{eq:landauerlimit} \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \geq \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}. \end{align} \setlength{\tabcolsep}{8pt} \begin{table}[t!] \begin{tabular}{llll} & \textbf{Energy} & \textbf{Time} & \textbf{Complexity} \\\colrule \parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{Qudit}}} & $\to\infty$ & $1$ & $\tfrac{1}{2} d(d-1)$ \\ & Landauer & $\to\infty$ & $\tfrac{1}{2} d(d-1)$ \\ & Landauer & $1$ & $\to\infty$\\\colrule \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{H. 
O.}}} &$\to\infty$ & $1$ & $ \to \infty$ (Gaussian) \\ &Landauer & $\to\infty$ & $ \to \infty$ (Gaussian) \\ &Finite ($>$ Landauer) & $\to\infty$ & 1 (Non-Gaussian) \\ &Landauer & $1$ & $\to\infty$ (Gaussian)\\\botrule \end{tabular} \caption{\emph{Coherent-control cooling protocols for finite-dimensional (qudit) and harmonic oscillator systems.} Landauer energy cost refers to saturation of Eq.~\eqref{eq:landauerlimit} and complexity refers to the proxy measure effective dimension (see Def.~\ref{def:effectivedimension}); time is measured as the number of unitary operations with a fixed complexity. In the qudit case, the system and machine dimensions are equal: $d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} =: d$.} \label{tab:coherentcontrol} \end{table} The Landauer limit holds \emph{independently} of the protocol implemented, i.e., it assumes only that \emph{some} unitary was applied to the target and thermal machine. For large machines, the dissipated heat is typically much greater than the energy change of the target; nonetheless, the contributions can be comparable at the microscopic scale. We assume that the target begins in equilibrium with the reservoir at inverse temperature $\beta$, i.e., in the initial thermal state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta , H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, with no loss of generality since such a relaxation can be achieved for free (by swapping the target with a suitable part of the environment; however, see Ref.~\cite{Riechers2021} for a discussion of initial state dependency of the bound). We track all energetic and entropic quantities and refer to the asymptotic saturation of Eq.~\eqref{eq:landauerlimit} with $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime$ pure as \emph{perfect cooling at the Landauer limit}. Although Landauer's limit sets the minimum heat that must be dissipated---and thereby the minimum energy cost---for cooling any physical system, the third law makes no specification that energy must be the resource minimised (or that time must diverge). One might instead consider using a source of unbounded energy to perfectly cool a system as quickly as possible. Additionally, control complexity plays an important role as a resource, inasmuch as its divergence permits perfect cooling at the Landauer limit in finite time (see below). As summarised in Table~\ref{tab:coherentcontrol}, we now present coherently controlled protocols that perfectly cool an arbitrary finite-dimensional target system using thermal machines when any one of the three considered resources\textemdash energy, time or control complexity\textemdash diverges; moreover, the resources that are kept finite saturate protocol-independent ultimate bounds. The following thus provides a comprehensive analysis of cooling with respect to the trinity of resources that can be traded off amongst each other. \subsection{Perfect Cooling at the Ultimate Limits with Infinite Resources} \label{subsec:methods-coherentcontrol} \emph{1. Diverging Energy.---}We first consider the situation in which time and control complexity are fixed to be finite, while the energy cost is allowed to diverge. Here, we present the following: \begin{thm}\label{thm:divergingenergycoherent} With diverging energy, any finite-dimensional quantum system can be perfectly cooled using a single interaction of finite complexity. 
\end{thm} \noindent The cooling protocol using diverging energy is the simplest. Here, one exchanges all populations of the target system with those of a thermal machine with suitably large energy gaps to sufficiently concentrate the initial machine population in the ground state subspace of the target system. This exchange requires a single system-machine unitary and is of finite complexity (in a sense discussed below). Nonetheless, the energy drawn from the work source in this protocol diverges. Moreover, diverging energy is not only sufficient but also necessary: any protocol that cools perfectly with both finite time and control complexity requires it. See Appendix~\ref{app:divergingenergy} for details. We now move to consider the situations in which the energy cost is minimised at the expense of either diverging time or control complexity. Equation~\eqref{eq:landauerequality} provides insight for understanding the conditions required for saturating the Landauer bound. Although for finite-dimensional machines only trivial processes of the form $U_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} = U_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ saturate the Landauer limit~\cite{Reeb_2014}, we show how it can be asymptotically saturated with nontrivial processes by considering diverging machine and interaction properties, as we elaborate shortly. Any such process must asymptotically exhibit no correlations such that $I(\mathcal{S}:\mathcal{M})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime} \to 0$ and effectively not disturb the machine, i.e., yield $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \to \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ such that $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) \to 0$. Indeed, any correlations created between initially thermal systems would come at the expense of an additional energetic cost~\cite{HuberPerarnauHovhannisyanSkrzypczykKloecklBrunnerAcin2015, BruschiPerarnauLlobetFriisHovhannisyanHuber2015,VitaglianoKloecklHuberFriis2019} whose minimisation is a problem that has so far only been partially resolved~\cite{BakhshinezhadEtAl2019}. However, it has been shown that for any (strictly) rank nondecreasing process, there exists a thermal machine and joint unitary such that for any $\epsilon > 0$, the heat dissipated satisfies $\beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \leq \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \epsilon$~\cite{Reeb_2014}, thereby saturating the Landauer limit. Here, we present protocols that asymptotically achieve both this and perfect cooling (in particular, effectively decrease the rank), and provide necessary conditions on the underlying resources required to do so. \emph{2. Diverging Time.---}We now present a protocol that uses a diverging number of operations of finite complexity to asymptotically attain perfect cooling at the Landauer limit~\cite{Anders_2013,Reeb_2014,Skrzypczyk_2014}. \begin{thm}\label{thm:inftimeFinTepFinDim} With diverging time, any finite-dimensional quantum system can be perfectly cooled at the Landauer limit via interactions of finite complexity. 
\end{thm} \emph{Sketch of proof.\textemdash }We first show that any system can be cooled from $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ to $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta^{*}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, with $\beta^* \geq \beta$, using only $\beta^{-1}\, \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ units of energy. Our proof is constructive in the sense that we provide a protocol that achieves the Landauer energy cost as the number of operations diverges. The individual interactions in this protocol are of finite control complexity as they simply swap the target system with one of a sequence of thermal machines with increasing energy gaps. In this way, the final state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta^*, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ can be made to be arbitrarily close to $\ket{0}\!\bra{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ for any initial temperature. \qed The proof is presented in Appendix~\ref{app:divergingtimecoolingprotocolfinitedimensionalsystems}, along with a more detailed dimension-dependent energy cost function for the special case of equally spaced Hamiltonians. Through the protocol described above, we see that given a diverging amount of time, the target system can be sequentially coupled with a machine of finite complexity that rethermalizes between control steps in such a way that the final target system state is arbitrarily close to the ground state for any initial temperature. This trade-off between energy and time is well known, and we discuss it only briefly in order to help build intuition and highlight the versatility of our framework. Alternatively, one can compress all the operations applied in the diverging-time protocol into one global unitary that achieves the same final states, thereby achieving perfect cooling at the Landauer limit in a single unit of time but with an infinitely complex interaction. That is, the diverging temporal resource of repeated interactions with a single, finite-size machine is replaced by a single interaction with a larger machine of diverging control complexity. \emph{3. Diverging Control Complexity.---}By reconsidering the diverging-time protocol above, a trade-off can be made between time and control complexity. As illustrated in Fig.~\ref{fig:complexity}, one can consider all of the operations $\{ U_k=e^{-iH_k t_k} \}_{k=1,\dots,N}$ required in said protocol to make up one single joint interaction $U_{\textup{tot}}:=\lim_{N\to\infty}\prod_{k=1}^{N}U_k=e^{-iH_{\textup{tot}} t_{\textup{tot}}}$ acting on a larger machine, thus setting the time required to be unity (in terms of the number of control operations before the machine rethermalises). In other words, for any finite number $N$ of unitary transformations $U_k$, there exists a total Hamiltonian $H_{\textup{tot}}\suptiny{0}{0}{(N)}$ and a finite time $t\subtiny{0}{0}{N}$ that generates the overall transformation $U_{\textup{tot}}\suptiny{0}{0}{(N)} := \prod_{k=1}^{N}U_k$; since $t\subtiny{0}{0}{N}$ is finite, we can set it equal to one without loss of generality by rescaling the Hamiltonian as $\widetilde{H}_{\textup{tot}}\suptiny{0}{0}{(N)}= t\subtiny{0}{0}{N} H_{\textup{tot}}\suptiny{0}{0}{(N)}$. Here, we refer to the limit $N \to\infty$ as diverging control complexity. Compressing a diverging number of finite-complexity operations thus yields a protocol of diverging control complexity. 
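To make the proof sketch concrete, consider a qubit target with energy gap $\omega_{0}$ (a minimal worked example; the symbols $\nu_{k}$ for the machine gaps and $p(\nu)$ for thermal populations are introduced here purely for illustration and differ from the notation of the appendices). Each step swaps the target with a fresh thermal machine qubit whose gap is slightly larger than the previous one, $\omega_{0} = \nu_{0} < \nu_{1} < \dots < \nu_{N} = \omega_{\textup{max}}$. Writing $p(\nu) := (1 + e^{\beta \nu})^{-1}$ for the thermal excited-state population associated with gap $\nu$, the $k$-th swap raises the machine energy by $\Delta E_{\mathcal{M}}^{(k)} = \nu_{k} [ p(\nu_{k-1}) - p(\nu_{k}) ] \geq 0$ (subsequently dissipated upon rethermalisation), so that the total heat dissipated over the protocol satisfies
\begin{align}
\beta \Delta E_{\mathcal{M}} = \beta \sum_{k=1}^{N} \nu_{k} \left[ p(\nu_{k-1}) - p(\nu_{k}) \right] \;\xrightarrow{N \to \infty}\; -\beta \int_{\omega_{0}}^{\omega_{\textup{max}}} \nu \, \mathrm{d}p(\nu) = S\big(p(\omega_{0})\big) - S\big(p(\omega_{\textup{max}})\big) = \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}},
\end{align}
where we use $\mathrm{d}S = \beta \nu \, \mathrm{d}p$ for a thermal qubit and $S(p) := -p \log p - (1-p) \log (1-p)$. Letting $\omega_{\textup{max}} \to \infty$ then yields perfect cooling at the Landauer cost, with each individual swap being of finite complexity; compressing the $N$ swaps into a single unitary reproduces the same final state and energy cost in unit time.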
The fact that there exists such an operation that minimises both the time and energy requirements follows from our constructive proof of Theorem~\ref{thm:inftimeFinTepFinDim}. We therefore have the following: \begin{cor}\label{cor:infcomplexity} With diverging control complexity, any finite-dimensional quantum system can be perfectly cooled at the Landauer limit in finite time. \end{cor} However, this particular way of constructing complex control protocols is not necessarily unique. It is thus natural to ask whether diverging control complexity is a generic feature necessary to achieve perfect cooling at the Landauer limit in unit time and, indeed, how to quantify control complexity in an operationally meaningful way between the extreme cases of being either very small or divergent, as we now turn to discuss. Indeed, the inclusion of an explicit quantifier of control complexity regarding thermodynamic tasks---which, although crucial for practical purposes, is oftentimes overlooked---is one of the main novelties of our present work. \pdfbookmark[2]{Control Complexity in Quantum Thermodynamics}{Control Complexity in Quantum Thermodynamics} \section{Control Complexity in Quantum Thermodynamics} Although the protocol described above has diverging control complexity by construction, one need not construct complex protocols in this way, and so the natural concern becomes understanding the generic features that enable perfect cooling at the Landauer limit in unit time. To address this issue, we first provide protocol-independent structural conditions that must be fulfilled by the machine to enable \emph{(1) perfect cooling} and \emph{(2) cooling at Landauer cost}; combined, these independent conditions provide a necessary requirement, namely that the machine must have an unbounded spectrum (from above) and be infinite-dimensional (respectively) for the \emph{possibility} of \emph{(3) perfect cooling at the Landauer limit}. Such properties of the machine Hamiltonian define the \emph{structural complexity}, which sets the potential for how cool the target system can be made and at what energy cost. As the name suggests, this is entailed by the structure of the machine, e.g., the number of energy gaps and their arrangement, and as such provides a static notion of complexity. However, given a machine with particular structural complexity, one may not be able to utilise said potential due to constraints on the dynamics that can be implemented. For instance, one may be restricted to only two-body interactions, or operations involving only a few energy levels at a time. Even assuming sufficient structural complexity at hand, such constraints prevent one from optimally manipulating the systems. Thus, the extent to which a machine's potential is utilised depends on properties of the dynamics of a given protocol, i.e., the \emph{control complexity}. We provide a detailed study of structural and control complexity in Appendix~\ref{app:conditionsstructuralcontrolcomplexity}, and here summarise the key methods. \subsection{Structural \& Dynamical Notions of Complexity} We split the consideration of complexity into two parts: first, the protocol-independent \emph{structural} conditions that must be fulfilled by the machine and, second, the dynamic \emph{control complexity} properties of the interaction that implements a given protocol (see Fig.~\ref{fig:complexity}). \begin{figure} \caption{\emph{Complexity.} We consider structural (left) and control complexity (right). 
Structural complexity concerns properties of the machine Hamiltonian. For perfect cooling it is necessary that the largest energy gap diverges [see Eq.~\eqref{eq:maingeneralpuritybound}]. Moreover, an infinite-dimensional machine with particular energy-level structure is required for saturation of the Landauer bound. Control complexity refers to properties of the unitary that represents a protocol. The yellow box in the foreground represents a unitary $U$ involving the entire machine, whereas the smaller yellow columns in the background represent a potential decomposition (e.g., of the diverging-time protocol) into unitaries $U_{i}$ involving certain subspaces of the overall machine. Not only must the target system interact with all levels of an infinite-dimensional machine for Landauer-cost cooling, it must do so in a fine-tuned way.} \label{fig:complexity} \end{figure} \subsubsection{Structural Complexity} Regarding the former, first note that one can lower bound the smallest eigenvalue $\lambda_{\textup{min}}$ of the final state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}'$ (and hence how cold the system can become) after \emph{any} unitary interaction with a thermal machine by~\cite{Reeb_2014} \begin{align}\label{eq:maingeneralpuritybound} \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) \geq e^{-\beta\, \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}} \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}), \end{align} where $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}:=\max_{i,j}|\omega_{j}-\omega_{i}|$ denotes the largest energy gap of the machine Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ with eigenvalues $\omega_{i}$. It follows that perfect cooling is only possible under two conditions: either the machine begins in a pure state ($\beta\to\infty$), or $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ is unbounded, i.e., $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}\to\infty$. Requiring $\beta<\infty$, a diverging energy gap in the machine Hamiltonian is thus a necessary structural condition for perfect cooling. Independently, another condition required to saturate the Landauer limit can be derived for any amount of cooling: in Ref.~\cite{Reeb_2014}, it was shown that for any finite-dimensional machine, there are correction terms to the Landauer bound which imply that it cannot be saturated; these terms only vanish in the limit where the machine dimension diverges. We thus have two independent necessary conditions on the structure of the machine that must be asymptotically fulfilled to achieve relevant goals for cooling: the former is required for perfect cooling; the latter for cooling at the Landauer limit. Together, these conditions imply the following: \begin{cor}\label{cor:structuralcondition} To perfectly cool a target system with energy cost at the Landauer limit using a thermal machine $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$, the machine must be infinite dimensional and $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}$, the maximal energy gap of $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}\,$, must diverge. \end{cor} The unbounded structural properties of the machine support the \emph{possibility} for perfect cooling at the Landauer limit; we now move to focus on the control properties of the interaction that \emph{realise} said potential (see Fig.~\ref{fig:complexity}). 
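Before doing so, note that the first structural condition can be made quantitative: Eq.~\eqref{eq:maingeneralpuritybound} directly bounds the machine gap needed for a given amount of cooling (a minimal illustration for a qubit target; the error parameter $\epsilon$ is introduced only for this example). Starting from a maximally mixed qubit, $\lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) = \tfrac{1}{2}$, so reaching a final excited-state population of at most $\epsilon$ requires
\begin{align}
\epsilon \geq \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) \geq \tfrac{1}{2}\, e^{-\beta\, \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}} \qquad \Longrightarrow \qquad \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}} \geq \frac{1}{\beta} \log\!\left(\frac{1}{2 \epsilon}\right),
\end{align}
which diverges (albeit only logarithmically) as $\epsilon \to 0$, in accordance with Corollary~\ref{cor:structuralcondition}. Such bounds concern only what the machine structure permits; whether a given protocol actually exploits this potential is a question of control.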
This leads to the distinct notion of control complexity, which differentiates between protocols that access the machine in a more or less complex manner. The structural complexity properties are protocol independent and related to the energy spectrum and dimensionality of the machine, whereas the control complexity concerns dynamical properties of the unitary that represents a particular protocol. \subsubsection{Control Complexity} Although it is intuitive that a unitary coupling the system to many degrees of freedom of the machine should be considered complex, it is \emph{a priori} unclear how to quantify control complexity in a manner that both \begin{enumerate} \item corresponds to our intuitive understanding of the word ``complex'', meaning ``difficult to implement''; and \item is consistent with Nernst's third law in the sense that its divergence is necessary to reach a pure state (when all other considered resources are restricted to be finite). \end{enumerate} Many notions of complexity put forth throughout the literature to capture the first point above do not necessarily satisfy the second, as we discuss later. Here, we take the opposite approach and seek a \emph{minimal} notion of complexity that is first and foremost consistent with the third law of thermodynamics, which we hope to develop further to incorporate the idea of quantifying how difficult a protocol is to implement. In the following sections, we begin by demonstrating that any cooling protocol that achieves perfect cooling with minimal time and energy resources requires coupling the target system to an infinite-dimensional machine, thereby capturing a notion of control complexity that satisfies the second point above. However, by subsequently analysing the sufficient conditions for such optimal cooling, we see that such a condition is in general insufficient to achieve said goal; furthermore, coupling to an infinite-dimensional machine is not necessarily difficult to implement in practice in certain experimental platforms. The insights gained here finally motivate our more refined notion of control complexity, namely that the system must be coupled to a spectrum of machine energy gaps that approximate a continuum. This condition is indeed difficult to achieve in all experimental settings and therefore provides a reasonable definition of control complexity inasmuch as it satisfies both desiderata outlined above. \subsection{Effective Dimension as a Notion of Control Complexity} As a first step in this direction, a good proxy measure of control complexity is the effective dimension of a unitary operation, i.e., the dimension of the subspace of the global Hilbert space upon which the unitary acts nontrivially.\\ \begin{defn}\label{def:effectivedimension} The \emph{effective dimension} is the minimum dimension of a subspace $\mathcal{A}$ of the joint Hilbert space $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}$ in terms of which the unitary can be decomposed as $U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{A}^\perp$}}}$: \begin{align}\label{eq:maineffectivedimension} d^{\,\textup{eff}} := \min \mathrm{dim}(\mathcal{A}) : U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{A}^\perp$}}}. 
\end{align} \end{defn} Intuitively, given any (sufficiently large) machine dimension, the effective dimension captures how much of the machine takes part in the controlled interaction. While any dynamics that requires a high amount of control must accordingly have large effective dimension, the converse does not necessarily follow: there exist dynamics with corresponding large (even infinite) effective dimensions (e.g., Gaussian operations on two harmonic oscillators, such as those enacted by a beam splitter) that are easily implementable and do not require high levels of control, as we discuss further below. Nevertheless, using the definition above, we show that any protocol achieving perfect cooling at the Landauer limit necessarily involves interactions between the target and infinitely many energy levels of the machine. In other words, no interaction restricted to a finite-dimensional subspace suffices. We begin by demonstrating that the effective dimension (nontrivially) accessed by a unitary (see Def.~\ref{def:effectivedimension}) must diverge to achieve perfect cooling at the Landauer limit, thereby providing a good proxy for control complexity in the sense that it aligns with Nernst's third law and provides a necessary condition. Intuitively, the effective dimension of a unitary operation is the dimension of the subspace of the global Hilbert space upon which the unitary acts nontrivially, in other words the part of the joint space that is actually accessed by the control protocol. This quantity can be computed by considering a given cooling protocol and finite unit of time $T$ (which we can set equal to unity without loss of generality) with respect to which the target and total machine transform unitarily by decomposing the Hamiltonian in $U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = e^{-i H_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} T}$ in terms of local and interaction terms, i.e., $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}}$. The effective dimension then corresponds to $\mathrm{rank}(H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}})$. With this definition at hand, we have the following: \begin{thm}\label{thm:variety} The unitary representing a cooling protocol that saturates the Landauer limit must act nontrivially on an infinite-dimensional subspace of $\operatorname{supp}(H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$. This implies $d^{\,\textup{eff}} \to \infty$. \end{thm} \noindent Intuitively, we show that if a protocol accesses only a finite-dimensional subspace of the machine, then the machine is effectively finite-dimensional inasmuch as a suitable replacement can be made while keeping all quantities relevant for cooling invariant. Invoking the main result of Ref.~\cite{Reeb_2014} then implies that there are finite-dimensional correction terms such that the Landauer limit cannot be saturated. The effective dimension therefore provides a minimal quantifier for control complexity: it is the quantity that must diverge in order to (perfectly) cool at minimal energy cost---thus, it satisfies the above point 2. 
Moreover, it requires no assumption on the underlying structure of the machine, with the results holding for either collections of finite-dimensional systems or harmonic oscillators without any restrictions on the types of individual operations allowed. This highlights a certain level of generality regarding the definition put forth, inasmuch as it is not tied to any presupposed structure of the systems at hand or the ability of the agent to control them. Additionally, as we discuss below, in many situations of interest, such as a machine comprising a collection of qubits and/or natural gate-set limitations, said definition also corresponds to protocols that are difficult to implement in practice, therefore also satisfying the above point~1. However, such additional restrictions are by no means generic. Moreover, it is \emph{a priori} unclear if having a diverging effective dimension is enough to permit perfect cooling with minimal time and energy cost. We now move on to discuss the connection to practical difficulty in general before analysing sufficient conditions regarding control complexity. \subsubsection{Correspondence to Practical Difficulty} Importantly, if one supposes that the system and machines are finite dimensional, then diverging effective dimension implies diverging circuit complexity, where the latter is defined in terms of the minimum number of gates (from a predetermined set of possibilities) required to implement the overall circuit representing a particular protocol. For instance, considering a qubit system and machines, and the ability to perform arbitrary two-qubit gates, the logarithm of the effective dimension is simply the number of distinct machine qubits that the system interacts with throughout the protocol. For any cooling protocol that achieves Landauer energy cost, it is clear that every one of a diverging number of qubit machines must take part in the overall transformation. Moreover, the particular interactions applied can be taken to be \texttt{SWAP} gates, which require the agent to be able to perform a \texttt{CNOT} gate, which in turn permits universal quantum computation with two-qubit interactions. Thus, given a universal two-qubit gate set, the circuit required to perform perfect cooling at minimal energy cost has a complexity that scales with the number of machine qubits. For higher-dimensional architectures or further restrictions on the gate set, any meaningful notion of control complexity will increase accordingly. This means that cooling a finite-dimensional system with finite-dimensional machines at the Landauer limit is---even with a perfect quantum computer---an impossibly difficult task. However, although our proposed definition of effective dimension as a notion of control complexity is flexible inasmuch as it applies to arbitrary system-machine structures, the price of such generality is that it tends to overestimate the difficulty of implementing a particular protocol in practice. In other words, without imposing any additional assumptions regarding the situation at hand, the effective dimension does not necessarily satisfy the above point~1. 
For example, whilst the effective dimension and the circuit complexity coincide for qubits, in higher-dimensional settings, the former overestimates the latter because not all system-machine subspaces are necessarily required to implement a particular protocol (i.e., although using all such subspaces provides one way to achieve it, this is not unique). Thus, the extent to which the circuit complexity is overestimated depends on the allowed gate set that is considered ``simple'' in general. At the extreme end, i.e., for harmonic-oscillator systems and machines, this can be seen from the fact that a single beam-splitter operation (which is a two-mode Gaussian operation, corresponding to a simple circuit complexity in the usual sense considered for infinite-dimensional quantum circuit architectures) already has infinite effective dimension, but is far from sufficient to achieve perfect cooling at Landauer cost. As a representative for infinite-dimensional systems, we treat harmonic oscillator target systems separately in Appendix~\ref{app:harmonicoscillators}. In the infinite-dimensional setting, the difficulty of implementing an operation is often related to the polynomial degree of its generators. Here, we see some friction with respect to Eq.~\eqref{eq:maineffectivedimension}: as mentioned above, a generic Gaussian unitary operation (i.e., one generated by a Hamiltonian at most quadratic in the mode operators) between a harmonic oscillator target and machine already implies infinite effective dimensionality. In light of this, we first construct a protocol that achieves perfect cooling at the Landauer limit with diverging time using only sequences of Gaussian operations [i.e., those typically considered to be practically easily implementable (cf. Refs.~\cite{BrownFriisHuber2016,Friis2018}), but nonetheless with infinite effective dimensionality according to Def.~\ref{def:effectivedimension}]. This result highlights that the polynomial degree of the generators of a particular protocol would---somewhat counterintuitively, since operations corresponding to high polynomial degree are difficult to achieve in practice---\emph{not} provide a suitable measure of control complexity inasmuch as its divergence is not necessary for Landauer-cost cooling. In contrast, we then present a protocol that demonstrates that perfect cooling is possible given diverging time and operations acting on only a finite effective dimensionality (i.e., using non-Gaussian operations), with a finite energy cost that is greater than the Landauer limit; whether or not a similar protocol that saturates the Landauer limit exists in this setting remains an open question. \subsubsection{Sufficiency for Optimal Cooling} Thus, in general, accessing an infinite-dimensional machine subspace is not sufficient for reaching the Landauer limit. Indeed, in all of the protocols that we present, the degrees of freedom of the machine must be individually addressed in a fine-tuned manner to permute populations optimally, which intuitively corresponds to complicated multipartite gates and demonstrates that an operationally meaningful notion of control complexity must take into account factors beyond the effective dimensionality accessed by an operation. In particular, the interactions couple the target system to a diverging number of subspaces of the machine corresponding to distinct energy gaps. Moreover, there are a diverging number of energy levels of the machine both above and below the first excited level of the target. 
These observations highlight that fine-tuned control plays an important role. Indeed, both the final temperature of the target and the energy cost required to achieve it depend upon how the global eigenvalues are permuted via the cooling process. First, how cool the target becomes depends on the sum of the eigenvalues that are placed into the subspace spanned by the ground state. Second, for any fixed amount of cooling, the energy cost depends on the constrained distribution of eigenvalues within the machine. Thus, in general, the optimal permutation of eigenvalues depends upon properties of both the target and machine. To highlight this, in Appendix~\ref{app:conditionsstructuralcontrolcomplexity}, we consider the task of cooling a maximally mixed target system with the additional constraint that the operation implemented lowers the temperature as much as possible. This allows us to derive a closed-form expression for the distribution of machine eigenvalues alone that must be asymptotically satisfied as the machine dimension diverges. Drawing from these insights, in the coming section we propose a stronger notion of control complexity (in the sense that it bounds the effective dimension from below and that it corresponds to practical difficulty in virtually every setting imaginable) in terms of the energy-gap structure of the machine and demonstrate that this measure too must diverge to cool perfectly with minimal time and energy costs. This concept is even more important in the case where all resources are finite, as particular structures of machines and the types of interactions permitted play a crucial role in both how much time or energy is spent cooling a system and how cold the system can ultimately become (see, e.g., Refs.~\cite{Clivaz_2019E,Taranto_2020,zhen2021}). \subsection{Energy-Gap Variety as a Notion of Control Complexity} This analysis motivates searching for a more detailed notion of control complexity that takes the energy-level structure of the machine into account, which should hold across all platforms and dimension scales. The discussion above illustrates some key challenges in defining a measure of control complexity that satisfies natural desiderata: such a measure should correspond to the difficulty of implementing operations in practice and simultaneously cover all possible physical platforms, including finite-dimensional systems such as specific optical transitions of electrons in the shell of trapped ions, and infinite-dimensional systems such as the state-space-specific modes of the electromagnetic field. The effective dimension that we introduce above as a proxy manages to cover all such systems and provides a rigorous mathematical criterion that every physical protocol will necessarily have to fulfil in order to cool at minimal energy cost. As we have seen, however, infinite effective dimension is insufficient for cooling at the Landauer limit and it may not be all that difficult to achieve in continuous-variable setups. This raises the question of how this minimal definition of control complexity can be extended in order to more faithfully represent what permits saturation of the ultimate limitations and is difficult to achieve in practice. 
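To illustrate how easily infinite effective dimension can arise in continuous-variable setups, the beam-splitter operation mentioned above can be written explicitly (with a target mode $\hat{a}$, a machine mode $\hat{b}$, and an illustrative mixing angle $\theta$) as
\begin{align}
U_{\textup{BS}} = \exp\!\left[ \theta \left( \hat{a}^{\dagger} \hat{b} - \hat{a}\, \hat{b}^{\dagger} \right) \right],
\end{align}
which is generated by a Hamiltonian that is only quadratic in the mode operators and is routinely implemented in optical platforms, yet for generic $\theta$ acts nontrivially on every fixed-total-excitation sector apart from the vacuum and therefore has infinite effective dimension according to Def.~\ref{def:effectivedimension}.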
Looking at all of our cooling protocols, a common property that seems to be important in minimising the energy cost of cooling is that the system is coupled to a set of machine energy gaps that are distributed in such a way that they (approximately) densely cover the interval $[\omega_1, \omega^*]$, where $\omega_1$ is the first energy gap of the target system and $\omega^*$ is the maximal energy gap, which sets the final achievable temperature of the system (for perfect cooling to the ground state, note that one requires $\omega^* \to \infty$). Let us denote the number of distinct energy gaps in a (fixed) interval as the \emph{energy-gap variety}. More formally, we have the following: \begin{defn}\label{def:energygapvariety} Consider an interval $[\omega_a, \omega_b) \subseteq \mathbbm{R}$. We define the energy-gap variety in terms of the set of machine energy gaps that lie in said interval, i.e., first construct the set \begin{align} \mathcal{E}_{[\omega_a, \omega_b)} := \{ \omega_\gamma := \omega_i-\omega_j \, | \, \omega_i-\omega_j \in [\omega_a, \omega_b) \}_{\gamma}. \end{align} The number of distinct elements in such a set is the \emph{energy-gap variety}. \end{defn} On the one hand, it is clear that coupling a system to a large number of distinct and/or closely spaced energy gaps requires fine-tuned control that is difficult in any experimental setting. On the other, the energy-gap variety lower bounds the effective dimension, and thus it is not clear that it needs to diverge in order to cool at Landauer energy cost. In Appendix~\ref{app:conditionsstructuralcontrolcomplexity}, we demonstrate that the energy-gap variety must indeed diverge and, additionally, that the set of energy gaps must densely cover a relevant interval (whose endpoints set the amount of cooling possible) in order to perfectly cool at the Landauer limit by proving the following: \begin{thm}\label{thm:energygapvariety-main} In order to cool $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \mapsto \ketbra{0}{0}$ with a thermal machine $\tau_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}})$ at Landauer energy cost with a single control operation, the global unitary $U$ must couple the system to a diverging number of distinct energy gaps that densely cover the interval $[\omega_0,\infty)$, where $\omega_0$ is the smallest energy gap of the target system. \end{thm} Taken in combination with its sufficiency to achieve said task, this result posits the energy-gap variety as a better quantifier of control complexity than the effective dimension, constituting the best thermodynamically meaningful notion of control complexity that we have put forth so far. The above theorem establishes the relevance of the energy-gap variety regarding the ultimate limitations of perfect cooling. In reality, of course, experimental imperfections abound, and so naturally the question arises: \emph{how robust is the energy-gap variety and to what extent can it incorporate errors?} Regarding the former: note that the above theorem posits the impossibility of cooling at Landauer energy cost unless one has control over an (infinitely) fine-grained energy-gap structure. Any perturbation away from said structure will result in some additional energy requirement for cooling; however, intuitively, small perturbations will correspond to small increases in energy costs. 
Properly accounting for such impacts, e.g., by bounding the additional energy cost in terms of a difference from the optimal energy-gap structure, is an important next step to understand the practical limitations of cooling. Regarding the latter point, in reality one never has perfect control over microscopic degrees of freedom. For instance, an immediate experimental imperfection that should be accounted for is the fact that two energy gaps which are very close together will be practically indistinguishable. Although a full-fledged error analysis here would constitute a major follow-up work, note that such cases can be formally dealt with within our framework by suitably modifying the definition, i.e., by discretising energy bands to suitably capture the indistinguishability of energy gaps and/or error margins. Aside from introducing and highlighting the important role of control complexity, we now take a step back to consider the notion of overall control at a higher level. It is clear that the protocols that saturate the Landauer limit for the energy cost of cooling require highly controlled microstate interactions between the system and machine; in turn, such transformations necessitate that the agent has access to a versatile \emph{work source}, i.e., either a quantum battery~\cite{Aberg2013,Skrzypczyk_2014,LostaglioJenningsRudolph2015,Friis2018,CampaioliPollockVinjanampathy2019} or a classical work source with a precise clock~\cite{Erker_2017,SchwarzhansLockErkerFriisHuber2021}. Such control is reminiscent of Maxwell's demon, who can indeed address all microscopic configurations at hand. This level of control is, however, in some sense at odds with the true spirit of thermodynamics. Indeed, the very reason that the machine is taken to begin as a thermal (Gibbs) state in thermodynamics is precisely because it provides the microscopic description that is \emph{both} consistent with macroscopic observations (in particular, average energy) and makes minimal assumptions regarding the information that the agent has about the initial state; thermodynamics as a whole is largely concerned with what can be done with minimal information requirements. Beginning with this, and then going on to permit dynamical interactions that address the full complex microstructure is somewhat contradictory, at least in essence; indeed, it has been argued that ``Maxwell's demon cannot operate''~\cite{Brillouin_1951} as an autonomous thermal being. Thus, a more thermodynamically sound setting would also restrict the transformations themselves to be ones that can be driven with minimal overall control. We now move to analyse the task of cooling within such a context. \pdfbookmark[1]{Incoherent Control}{Incoherent Control} \section{Incoherent Control (Heat Engine)} \label{subsec:methods-incoherentcontrol} The results presented so far pertain to cooling with the only restriction being that the machines are initially thermal. In particular, there are no restrictions on the allowed unitaries. In general, the operations required for cooling are not energy conserving and require an external work source. With respect to standard considerations of thermodynamics, this may seem somewhat unsatisfactory, as the joint system is, in the coherent setting, open to the universe. When quantifying thermodynamic resources, one typically restricts the permitted transformations to be energy conserving, thereby closing the joint system and yielding a self-contained theory. 
We therefore analyse protocols using energy-conserving unitaries. With this restriction, it is in general not possible to cool a target system with machines that are initially thermal at a single temperature, as was considered in the coherent-control paradigm~\cite{Clivaz_2019L}. Instead, cooling can be achieved by partitioning the machine into one cold subsystem $\mathcal{C}$ that begins in equilibrium at inverse temperature $\beta$ and another hot subsystem $\mathcal{H}$ coupled to a heat bath at inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} < \beta$~\cite{Clivaz_2019L,Clivaz_2019E} (see Fig.~\ref{fig:schematic}, bottom panel). In other words, one uses a hot and a cold bath to construct a heat engine that cools the target. As we demonstrate, perfect cooling can be achieved in this setting as pertinent resources diverge. However, the structure of the hot bath plays a crucial role regarding the resource requirements. In particular, we present a no-go theorem that states that perfect cooling with a heat engine using a single unitary of finite control complexity is impossible, even given diverging energy drawn from the hot bath. This result is in stark contrast to its counterpart in the coherent-control setting, where diverging energy is sufficient for perfect cooling and serves to highlight the fact that the incoherent-control setting is a fundamentally distinct paradigm that must be considered independently. Here, we focus on finite-dimensional systems and leave the analysis of infinite-dimensional ones to future work. \subsection{Ultimate Limits in the Incoherent Control Paradigm} In the incoherent-control setting, an adaptation of the (equality-form) Landauer bound on the minimum heat dissipated (or, as we phrase it here, the minimum amount of energy drawn from the hot bath) can be derived, which we dub the \emph{Carnot-Landauer limit}: \begin{thm}\label{thm:main-landauer-incoherent} Let $F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}) := \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}] - \beta^{-1} S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}})$ be the free energy of a state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$ with respect to a heat bath at inverse temperature $\beta$, $\Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} := F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, and let $\eta := 1 - \beta_{\raisebox{-1pt}{\tiny{$H$}}}/\beta \in (0,1)$ be the Carnot efficiency with respect to the hot and cold baths. In the incoherent-control setting, the quantity \begin{align}\label{eq:landauerincoherent1} &\Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} + \eta\, \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \\ &= -\frac{1}{\beta}[\Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})] \notag \end{align} satisfies the inequality \begin{align}\label{eq:landauerincoherent2} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} + \eta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &\leq 0. 
\end{align} \end{thm} \noindent Equation~\eqref{eq:landauerincoherent2} holds due to the non-negativity of the sum of local entropy changes and the relative-entropy terms. The derivation is provided in Appendix~\ref{app:equalityformsofthecarnot-landauerlimit}, where we also show that the usual Landauer bound is recovered in the limit of an infinite-temperature heat bath (a short calculation illustrating this reduction is given below). The incoherent-control setting is fundamentally distinct from the coherent-control setting in terms of what can (or cannot) be achieved with given resources. For instance, consider the case where one wishes to achieve perfect cooling in unit time and with finite control complexity, given access to diverging energy. In the coherent-control setting, this task is possible in principle (see Theorem~\ref{thm:divergingenergycoherent}). On the other hand, in the incoherent-control setting, we have the following no-go theorem (see Appendix~\ref{app:coolingprotocolsincoherentcontrolparadigm} for a proof): \begin{thm}\label{thm:infenergyauto} In the incoherent control scenario, it is not possible to perfectly cool any quantum system of finite dimension in unit time and with finite control complexity, even given diverging energy drawn from the hot bath, for any non-negative inverse temperature heat bath $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \in [0, \beta < \infty)$. \end{thm} \noindent This result follows from the fact that in the incoherent-control setting, the target system can only interact with subspaces of the joint hot-and-cold machine with respect to which it is energy degenerate. For any operation of fixed control complexity, there is always a finite amount of population remaining outside of the accessible subspace, implying that perfect cooling cannot be achieved, independent of the amount of energy drawn from the hot bath. \subsection{Saturating the Carnot-Landauer Limit} The above result emphasises the difference between coherent and incoherent control; in light of this difference, it is \emph{a priori} unclear whether the Carnot-Landauer bound is attainable and, if so, how to attain it. Indeed, the restriction to energy-conserving unitaries generally makes it difficult to tell whether the ultimate bounds can be saturated in the incoherent-control setting, and which resources would be required to do so. We present a detailed study of cooling in the incoherent-control setting in Appendix~\ref{app:coolingprotocolsincoherentcontrolparadigm}, where we prove the following results. We begin by demonstrating incoherent cooling protocols that saturate the Landauer bound in the regime where the heat-bath temperature goes to infinity. We do so by fine-tuning the machine structure such that the desired cooling transitions between the target system and the cold and hot parts of the machine are rendered energy conserving. In particular, we prove the following: \begin{thm}\label{thm:autoinftempinftime} In the incoherent control scenario, for an infinite-temperature hot bath $\beta_{\raisebox{-1pt}{\tiny{$H$}}} = 0$, any finite-dimensional system can be perfectly cooled at the Landauer limit with diverging time via interactions of finite control complexity. Similarly, the goal can be achieved in unit time with diverging control complexity. \end{thm} Following our analysis of infinite-temperature heat baths, we study the more general case of finite-temperature heat baths. In Appendix~\ref{app:incoherentcoolingfinitetemperature}, we detail cooling protocols that saturate the Carnot-Landauer limit for any finite-temperature heat bath.
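To see how the infinite- and finite-temperature regimes connect, note that for $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \to 0$ one has $\eta \to 1$, and---using only the energy conservation of the global unitary, $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = 0$---Eq.~\eqref{eq:landauerincoherent2} reduces to
\begin{align}
\Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leq 0 \quad\Longleftrightarrow\quad \beta\, \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \geq - \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}},
\end{align}
i.e., the energy dissipated into the cold part of the machine is lower bounded by $\beta^{-1}$ times the entropy reduction of the target, which is precisely the usual Landauer bound (cf. Appendix~\ref{app:equalityformsofthecarnot-landauerlimit}). For $\beta_{\raisebox{-1pt}{\tiny{$H$}}} > 0$, the factor $\eta < 1$ instead penalises the energy drawn from the hot bath, and the full free-energy form of the bound must be used; the protocols mentioned above nonetheless saturate it.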
More precisely, we prove: \begin{thm}\label{thm:autofintempinftime} In the incoherent control scenario, for any finite-temperature hot bath $0 < \beta_{\raisebox{-1pt}{\tiny{$H$}}} < \beta$, any finite-dimensional quantum system can be perfectly cooled at the Carnot-Landauer limit given diverging time via interactions of finite control complexity. Similarly, the goal can be achieved in unit time with diverging control complexity. \end{thm} \noindent As in the coherent-control setting, these protocols use either diverging time or control complexity to asymptotically saturate the Carnot-Landauer bound. The results presented in this section therefore provide a comprehensive understanding of the resources required to perfectly cool at minimum energy cost in a setting that aligns with the resource theories of thermodynamics. \pdfbookmark[1]{Imperfect Cooling with Finite Resources}{Imperfect Cooling with Finite Resources} \section{Imperfect Cooling with Finite Resources} \label{subsec:imperfectcooling} The above results set the ultimate limitations for cooling inasmuch as the protocols saturate optimal bounds by using diverging resources. In reality, however, any practical implementation is limited to having only finite resources at its disposal. According to the third law, a perfectly pure state cannot be achieved in this scenario. Nonetheless, one can prepare a state of finite temperature by investing said resources appropriately. In this finite-resource setting, the interplay between energy, time, and control complexity is rather complicated. First, the cooling performance is contingent upon the chosen figure of merit for the notion of cool---the ground-state population, purity, average energy, or temperature of the nearest thermal state are all reasonable candidates, but they differ in general~\cite{Clivaz_2019L}. Second, the total amount of resources available bounds the reachable temperature in any given protocol. Third, the details of the protocol itself influence the energy cost of achieving a desired temperature. In other words, determining the optimal distribution of resources is an extremely difficult task in general and remains an open question. We therefore focus here on the paradigmatic special case of cooling a qubit target system by increasing its ground-state population in order to highlight some salient points regarding cooling to finite temperatures. First, we compare the finite performance of two distinct coherent-control protocols that both asymptotically saturate the Landauer limit; nonetheless, at any finite time, their performance varies. The first protocol simply swaps the target qubit with one of a sequence of machine qubits whose energy gaps are distributed linearly; the second involves interacting the target with a high-dimensional machine with a particular degeneracy structure. Although the latter cannot be decomposed easily into a qubit circuit (thereby making it more difficult to implement in practice), one can compare the two protocols fairly by fixing the total (and effective) dimension to be equal, i.e., comparing the performance of the linear sequential qubit machine protocol after $N+1$ qubits have been accessed with that of the latter protocol with machine dimension $2^{N+1}$. In doing so, we see that the simpler former protocol outperforms the more difficult latter one in terms of the energy cost at finite times, emphasising the fact that difficulty in practice does not necessarily correspond to complexity as a thermodynamic resource.
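To make the first of these comparisons concrete, the energy cost of the swap-based scheme can be estimated with a minimal back-of-the-envelope calculation; the linear spacing below, and the assumption that the target also begins thermal at inverse temperature $\beta$, are illustrative choices rather than unique conventions. Consider a target qubit with energy gap $\omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and a sequence of $N$ machine qubits with gaps $\omega_k = \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \tfrac{k-1}{N-1}\,(\omega_{\textup{max}} - \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, each initially thermal at inverse temperature $\beta$. Writing $f(\omega) := (1 + e^{\beta \omega})^{-1}$ for the excited-state population of a thermal qubit with gap $\omega$, the target carries population $f(\omega_k)$ after the $k$th \texttt{SWAP}, and the total energy drawn from the work source is
\begin{align}
W_{\raisebox{-1pt}{\tiny{$N$}}} = \sum_{k=1}^{N} \big[ f(\omega_{k-1}) - f(\omega_k) \big] \, (\omega_k - \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}), \qquad \omega_0 := \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}.
\end{align}
As $N \to \infty$, this sum converges to
\begin{align}
\int_{\omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}}^{\omega_{\textup{max}}} (\omega - \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) \big[ -f^\prime(\omega) \big] \, \mathrm{d}\omega = \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)},
\end{align}
i.e., the Landauer limit for the corresponding final state, while at finite $N$ the excess cost above $\Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)}$ vanishes only as the number of steps grows (for this spacing, at a rate of order $1/N$).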
Additionally, we derive an analytic expression for the rate at which energy and time can be traded off against one another in the linear qubit sequence protocol. Lastly, we compare the performance of a coherent and an incoherent control protocol that use a similar machine structure to achieve a desired final temperature. We see that the price one must pay for running the protocol via a heat engine is that either more steps or more complex operations are required to match the performance of the coherent-control setting. This example serves to elucidate the connection between the two extremal control scenarios relevant for thermodynamics. Although throughout most of the paper we focus on the asymptotic achievability of optimal cooling strategies, the protocols that we construct provide insight into how said asymptotic limits are approached. This facilitates a better understanding of the more practically relevant questions that are constrained when all resources are restricted to be finite: i) \emph{how cold can the target system be made?} and ii) \emph{at what energy cost?} In line with Nernst's third law, the answer to the former question cannot be perfectly cold (i.e., zero temperature); beyond this, the answer depends upon how said resources are configured and utilised. For instance, given a single unitary interaction of finite complexity in the coherent-control setting, the ground-state population of the output state can be upper bounded in terms of the largest energy gap of the machine, $\omega_{\textup{max}}$ [see Eq.~\eqref{eq:maingeneralpuritybound}]. On the other hand, supposing that one can reuse a single machine system multiple times, then as the number of operation steps increases, the ground-state population of the output state approaches $(1+e^{-\beta \omega_{\textup{max}}})^{-1}$ from below~\cite{Clivaz_2019L}. There is clearly a trade-off relation here between time and complexity, and a systematic analysis of the rate at which these quantities can be traded off against one another warrants further investigation. Similarly, the energy cost to reach a desired final temperature also depends upon the distribution of resources, as we now examine. Given access to a machine of a certain size (as measured by its dimension), one could ask: \emph{what is the optimal configuration of machine energy spectrum and global unitary to cool a system as efficiently as possible?} Here, we compare two contrasting constructions for the cooling unitary in the coherent-control setting for a qubit target system (with energy gap $\omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$)---both of which asymptotically achieve Landauer cost cooling, but whose finite behaviour differs. The first protocol considers a machine of $N$ qubits whose energy gaps increase linearly from the energy gap of the system, $\omega_1 = \omega_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, to some maximum value $\omega_{\raisebox{-1pt}{\tiny{$N$}}} = \omega_{\textup{max}}$, which dictates the final achievable temperature. In this protocol, the target system is swapped sequentially with each of the $N$ qubits in order of increasing energy gaps; we hence refer to it as the \emph{linear qubit machine sequence}. The second protocol we consider is presented in full in Appendix~\ref{app:finetunedcontrolconditions} and is inspired by one presented in Ref.~\cite{Reeb_2014} (see Appendix D therein); we hence refer to it as the Reeb \& Wolf \textbf{(RW)} protocol.
Here, the global unitary acts on the system and a high-dimensional machine with an equally spaced Hamiltonian whose degeneracy doubles with each increasing energy level, i.e., it has a nondegenerate ground state, a twofold degenerate first excited state, a fourfold degenerate second excited state, and so on; the final energy level has an extra state so that the total dimension is $2^{N+1}$ (where $N$ is the number of energy levels). In particular, the unitary performs the permutation that places the maximal amount of population in the ground state of the target system. Due to the structure of both protocols, one can make a fair comparison between them, contrasting the single unitary on a $2^{N}$-dimensional machine in the RW protocol versus the composition of $N$ two-qubit \texttt{SWAP} unitaries in the linear machine sequence, i.e., such that both protocols access a machine of the same size overall. \begin{figure}\label{fig:RWqubits} \end{figure} As shown in Fig.~\ref{fig:RWqubits}, although both protocols asymptotically tend to the Landauer limit, their finite behaviour differs. Indeed, in terms of work cost, the linear qubit machine sequence protocol outperforms the RW protocol. This is somewhat surprising, as the latter is a complex high-dimensional unitary whereas the former is a composition of qubit swaps; although both protocols have the same effective dimension in this comparison overall, this highlights that difficulty in the lab setting need not correspond to resourcefulness in a thermodynamic sense. Indeed, developing optimal finite cooling strategies for arbitrary systems and machines is difficult in general and remains an important open question. Nonetheless, in Appendix~\ref{app:imperfectcooling}, we derive the rate of resource divergence of the sequential qubit protocol to further clarify the trade-off between time and energy for this protocol. Finally, we contrast the two extremal thermodynamic paradigms considered by comparing the energy cost of a coherently controlled cooling protocol to an incoherently controlled one that achieves the same final ground-state population. Intuitively, the latter setting requires more resources to achieve the same performance as the former due to the fact that only energy-resonant subspaces can be accessed by the unitary, and hence only a subspace of the full machine is usable. This implies that a greater number of operations (of fixed control complexity) is required to achieve results similar to those of the coherent setting, as demonstrated explicitly in Appendix~\ref{app:imperfectcooling}. Indeed, determining the optimal cooling protocols for a range of realistic assumptions remains a major open avenue. \pdfbookmark[1]{Discussion}{Discussion} \section{Discussion} \label{sec:discussion} \pdfbookmark[2]{Relation to Previous Works}{Relation to Previous Works} \subsection*{Relation to Previous Works} \label{subsec:relationpreviousworks} A vast amount of the literature concerning quantum thermodynamics considers resource theories (see Refs.~\cite{Ng_2018,Lostaglio_2019} and references therein), whose central question is: \emph{what transformations are possible given particular resources, and how can one quantify the value of a resource?} While this perspective sheds light on what is possible in principle, it does not per se concern itself with the potential implementation of said transformations.
Yet, the unitary operations considered in a resource theory will themselves require certain resources to implement in practice. Focusing only on a resource-theoretic perspective would thus overlook the question: \emph{how does one optimally use said resources?} Our results focus on this latter question and highlight the role of control complexity in optimising resource use. Concurrently, by considering arbitrary unitary operations (akin to our coherent-control paradigm without limitations on machine size), Refs.~\cite{Anders_2013,Skrzypczyk_2014} and~\cite{Reeb_2014} studied the potential saturation of the second law of thermodynamics and Landauer's limit, respectively. References~\cite{Skrzypczyk_2014} and~\cite{Anders_2013} develop a protocol similar to our diverging-time protocol in the context of work extraction and demonstrate its optimality for saturating the second law. However, these works do not discuss the practical viewpoint that the goal can be achieved in a smaller number of operations by allowing the latter to be more complex, as we emphasise. On the other hand, Ref.~\cite{Reeb_2014} considers the resources required for saturation of the Landauer limit and shows an important result regarding structural complexity, namely that the machine must be infinite dimensional to cool at the Landauer limit. Our analysis regarding complexity begins here and continues to elucidate the key complexity properties that enhance the efficiency of a cooling protocol. In particular, we show that an infinite-dimensional machine is not sufficient unless the controlled unitary indeed accesses the entire machine. This leads first to the notion of ``effective dimension'', which provides a good proxy for control complexity that is consistent with Nernst's third law for all types of quantum machines---from finite-dimensional systems to harmonic oscillators. Moreover, we highlight that the optimal interactions must be fine-tuned, i.e., they must couple the system to particular energy gaps of the machine in a specific configuration, paving the way for a more nuanced definition of control complexity that takes into account the complicated and precise level of control required, as we present in terms of the ``energy-gap variety''. Lastly, we emphasise that the latter discussion concerns the coherent-control scenario, which is only one of the extremal control paradigms that we consider. In addition, we consider the task of cooling in a more thermodynamically consistent setting, namely the incoherent-control paradigm. There we derive the Carnot-Landauer equality and consequent inequality, which are adaptations of the Landauer equality~\cite{Reeb_2014} and inequality~\cite{Landauer_1961}, respectively, to the setting where the protocol can only be run via a heat engine. On the more practical side, note that our work here concerns erasing quantum information encoded in fundamental rather than logical degrees of freedom. Our reasoning here is twofold: firstly, the ultimate limitations that we aim to understand are the same whether one wishes to cool a physical system or erase information; in other words, although it may be possible to save some finite trade-off costs for imperfect erasure in the coarse-grained setting, the resources required to perform a rank-reducing process asymptotically diverge in both cases.
Secondly, it is much more difficult to create coherent superpositions in the case where information is redundantly encoded in macrostates, as this would require all microstates to be in phase (indeed, this is a major reason why quantum computers aim to encode information in fundamental degrees of freedom). For erasing quantum information using bulk (classical) cooling (i.e., coupling to a suitably engineered cold bath), the relevant condition is nondegeneracy of the ground state; additionally, many of the original Landauer thought experiments consider degenerate Hamiltonians for the computational states. In contrast, our protocols are based upon directly controlled cooling, which works independently of the target system Hamiltonian and as such bridges the gap between various perspectives. Moving forward, it will be interesting to explore how information can be erased more cheaply if it is encoded in a coarse-grained fashion, in order to better square our fundamental results presented here with experimental demonstrations. Doing so would require finite versions of all of the systems and resources that we analyse here, which we leave for future exploration. \pdfbookmark[2]{Conclusions and Outlook}{Conclusions and Outlook} \subsection*{Conclusions \& Outlook} \label{subsec:conclusionsoutlook} The results of this work have wide-ranging implications. We have both generalised and unified Landauer's bound with respect to the laws of thermodynamics. In particular, we have established the ultimate limitations for cooling quantum systems or erasing quantum information in terms of resource costs and presented protocols that asymptotically saturate these limits. Indeed, while it is well known that heat and time requirements must be minimised to combat the detrimental effects of fluctuation-induced errors and short decoherence times on quantum technologies~\cite{Acin_2018}, we have shown that this comes at a practical cost of greater control. In particular, we have demonstrated the necessity of implementing fine-tuned interactions involving a diverging number of energy levels to minimise energy and time costs, which serves to deliver a cautionary message: control complexity must be accounted for to build operationally meaningful resource theories of quantum thermodynamics. This result posits the energy-gap variety accessed by a unitary protocol as a meaningful quantifier of control complexity that is fully consistent with the third law of thermodynamics and chimes well with what is difficult to achieve in practice. Our analysis of the incoherent-control setting further provides pragmatic ultimate limitations for the scenario where minimal control is required, in the sense that all transformations are driven by thermodynamic energy and entropy flows between two heat baths, which could be viewed as a thermodynamically driven quantum computer~\cite{Bennett_1982}. Nevertheless, the intricate relationship between various resources here will need to be further explored. Looking forward, we believe it will be crucial to go beyond asymptotic limits. While Landauer erasure and the third law of thermodynamics conventionally deal with the creation of pure states, practical results would need to consider cooling to a finite temperature (i.e., creating approximately pure states) with a finite amount of invested resources~\cite{Clivaz_2019E,Taranto_2020,zhen2021}.
In this context, the trade-off between time and control complexity will gain more practical relevance, as realistic quantum technologies have limited coherence times and interaction Hamiltonians are limited to few-body terms. Here, operational measures of control complexity that fit the envisioned experimental setup present an important challenge that must be overcome to apply our results across various platforms. Our results strengthen the view that, in contrast to classical thermodynamics, the role of control is one of the most crucial issues to address before a true understanding of the limitations and potential of quantum machines is revealed. On the one hand, in classical systems, control is only ever achieved over few bulk degrees of freedom, whereas addressing and designing particular microstate control is within reach of current quantum technological platforms, offering additional routes towards operations enhanced by fine-tuned control. On the other hand, the cost of such control itself can quickly exceed the energy scale of the system, potentially rendering any perceived advantages a mirage. This is exacerbated by the fact that it is not possible to observe (measure) a quantum machine without incurring significant additional thermodynamic costs~\cite{Guryanova2020,DebarbaEtAl2019} and non-negligible backaction on the operation of the machine itself~\cite{Manzano_2019}. A fully developed theory of quantum thermodynamics would need to take these into account and we hope that our study sheds light on the role of control complexity in this endeavour. \pdfbookmark[1]{Acknowledgments}{Acknowledgments} \begin{acknowledgments} The authors thank Elizabeth Agudelo and Paul Erker for very insightful discussions at the early stages of the project. P.T. acknowledges support from the Austrian Science Fund (FWF) project: Y879-N27 (START), the European Research Council (Consolidator grant `Cocoquest' 101043705), and the Japan Society for the Promotion of Science (JSPS) by KAKENHI Grant No. 21H03394. F.B. is supported by FQXi Grant No. FQXi-IAF19-07 from the Foundational Questions Institute Fund, a donor advised fund of Silicon Valley Community Foundation. A.B. acknowledges support from the VILLUM FONDEN via the QMATH Centre of Excellence (Grant no. 10059) and from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme (QuantAlgo project) via the Innovation Fund Denmark. R.S. acknowledges funding from the Swiss National Science Foundation via an Ambizione grant PZ00P2\_185986. N.F. is supported by the Austrian Science Fund (FWF) projects: P 36478-N and P 31339-N27. M.P.E.L. acknowledges financial support by the ESQ (Erwin Schr{\"o}dinger Center for Quantum Science \& Technology) Discovery programme, hosted by the Austrian Academy of Sciences ({\"O}AW) and TU Wien. G.V. is supported by the Austrian Science Fund (FWF) projects ZK 3 (Zukunftskolleg) and M 2462-N27 (Lise-Meitner). F.C.B. acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie Grant Agreement No. 801110 and the Austrian Federal Ministry of Education, Science and Research (BMBWF). This project reflects only the authors' view, the EU Agency is not responsible for any use that may be made of the information it contains. T.D. acknowledges support from the Brazilian agency CNPq INCT-IQ through the project (465469/2014-0). E.S. is supported by the Austrian Science Fund (FWF) project: Y879-N27 (START). F.C. 
is supported by the ERC Synergy grant HyperQ (Grant No. 856432). M.H. is supported by the European Research Council (Consolidator grant `Cocoquest' 101043705), the Austrian Science Fund (FWF) project: Y879-N27 (START), and acknowledges financial support by the ESQ (Erwin Schr{\"o}dinger Center for Quantum Science \& Technology) Discovery programme, hosted by the Austrian Academy of Sciences ({\"O}AW). \end{acknowledgments} \pdfbookmark[1]{References}{References} \begin{thebibliography}{58} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{#1} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Gisin}\ \emph {et~al.}(2002)\citenamefont {Gisin}, \citenamefont {Ribordy}, \citenamefont {Tittel},\ and\ \citenamefont {Zbinden}}]{GisinRibordyTittelZbinden2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {Gr{\'e}goire}\ \bibnamefont {Ribordy}}, \bibinfo {author} {\bibfnamefont {Wolfgang}\ \bibnamefont {Tittel}}, \ and\ \bibinfo {author} {\bibfnamefont {Hugo}\ \bibnamefont {Zbinden}},\ }\emph {\enquote {\bibinfo {title} {Quantum cryptography},}\ }\href {https://doi.org/10.1103/RevModPhys.74.145} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {145} (\bibinfo {year} {2002})},\ \Eprint {http://arxiv.org/abs/quant-ph/0101098} {arXiv:quant-ph/0101098}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pirandola}\ \emph {et~al.}(2020)\citenamefont {Pirandola}, \citenamefont {Andersen}, \citenamefont {Banchi}, \citenamefont {Berta}, \citenamefont {Bunandar}, \citenamefont {Colbeck}, \citenamefont {Englund}, \citenamefont {Gehring}, \citenamefont {Lupo}, \citenamefont {Ottaviani}, \citenamefont {Pereira}, \citenamefont {Razavi}, \citenamefont {Shamsul~Shaari}, \citenamefont {Tomamichel}, \citenamefont {Usenko}, \citenamefont {Vallone}, \citenamefont {Villoresi},\ and\ \citenamefont {Wallden}}]{PirandolaEtAl2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Stefano}\ \bibnamefont {Pirandola}}, \bibinfo {author} {\bibfnamefont {Ulrik~L.}\ \bibnamefont {Andersen}}, \bibinfo {author} {\bibfnamefont {Leonardo}\ \bibnamefont {Banchi}}, \bibinfo {author} {\bibfnamefont {Mario}\ \bibnamefont {Berta}}, \bibinfo {author} {\bibfnamefont {Darius}\ \bibnamefont {Bunandar}}, \bibinfo {author} {\bibfnamefont {Roger}\ \bibnamefont {Colbeck}}, \bibinfo {author} {\bibfnamefont {Dirk}\ \bibnamefont {Englund}}, \bibinfo {author} {\bibfnamefont {Tobias}\ \bibnamefont {Gehring}}, \bibinfo {author} {\bibfnamefont {Cosmo}\ \bibnamefont {Lupo}}, \bibinfo {author} {\bibfnamefont {Carlo}\ \bibnamefont {Ottaviani}}, \bibinfo {author} {\bibfnamefont {Jason~L.}\ \bibnamefont {Pereira}}, \bibinfo {author} {\bibfnamefont {Mohsen}\ \bibnamefont {Razavi}}, \bibinfo {author} {\bibfnamefont {Jesni}\ \bibnamefont {Shamsul~Shaari}}, \bibinfo {author} {\bibfnamefont {Marco}\ \bibnamefont {Tomamichel}}, \bibinfo {author} {\bibfnamefont {Vladyslav~C.}\ \bibnamefont {Usenko}}, \bibinfo {author} {\bibfnamefont {Giuseppe}\ \bibnamefont {Vallone}}, \bibinfo {author} {\bibfnamefont {Paolo}\ \bibnamefont {Villoresi}}, \ and\ \bibinfo {author} {\bibfnamefont {Petros}\ \bibnamefont {Wallden}},\ }\emph {\enquote {\bibinfo {title} {Advances in quantum cryptography},}\ }\href {https://doi.org/10.1364/AOP.361502} {\bibfield {journal} {\bibinfo {journal} {Adv. Opt. Photon.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {1012} (\bibinfo {year} {2020})},\ \Eprint {http://arxiv.org/abs/1906.01645} {arXiv:1906.01645}\BibitemShut {NoStop} \bibitem [{\citenamefont {Giovannetti}\ \emph {et~al.}(2011)\citenamefont {Giovannetti}, \citenamefont {Lloyd},\ and\ \citenamefont {Maccone}}]{GiovannettiLloydMaccone2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Vittorio}\ \bibnamefont {Giovannetti}}, \bibinfo {author} {\bibfnamefont {Seth}\ \bibnamefont {Lloyd}}, \ and\ \bibinfo {author} {\bibfnamefont {Lorenzo}\ \bibnamefont {Maccone}},\ }\emph {\enquote {\bibinfo {title} {{Advances in quantum metrology}},}\ }\href {https://doi.org/10.1038/nphoton.2011.35} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Photonics}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {222} (\bibinfo {year} {2011})},\ \Eprint {http://arxiv.org/abs/1102.2318} {arXiv:1102.2318}\BibitemShut {NoStop} \bibitem [{\citenamefont {T{\'o}th}\ and\ \citenamefont {Apellaniz}(2014)}]{TothApellaniz2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G{\'e}za}\ \bibnamefont {T{\'o}th}}\ and\ \bibinfo {author} {\bibfnamefont {Iagoba}\ \bibnamefont {Apellaniz}},\ }\emph {\enquote {\bibinfo {title} {Quantum metrology from a quantum information science perspective},}\ }\href {https://doi.org/10.1088/1751-8113/47/42/424006} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A: Math. Theor.}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {424006} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1405.4878} {arXiv:1405.4878}\BibitemShut {NoStop} \bibitem [{\citenamefont {Demkowicz-Dobrza{\'n}ski}\ \emph {et~al.}(2015)\citenamefont {Demkowicz-Dobrza{\'n}ski}, \citenamefont {Jarzyna},\ and\ \citenamefont {Ko{\l}ody{\'n}ski}}]{DemkowiczDobrzanskiJarzynaKolodynski2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Rafa{\l}}\ \bibnamefont {Demkowicz-Dobrza{\'n}ski}}, \bibinfo {author} {\bibfnamefont {Marcin}\ \bibnamefont {Jarzyna}}, \ and\ \bibinfo {author} {\bibfnamefont {Janek}\ \bibnamefont {Ko{\l}ody{\'n}ski}},\ }\emph {\enquote {\bibinfo {title} {Quantum limits in optical interferometry},}\ }\href {https://doi.org/10.1016/bs.po.2015.02.003} {\bibfield {journal} {\bibinfo {journal} {Prog. Optics}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {345} (\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1405.7703} {arXiv:1405.7703}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(1997)}]{Preskill1997} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {John}\ \bibnamefont {Preskill}},\ }\emph {\enquote {\bibinfo {title} {{Fault-tolerant quantum computation}},}\ }in\ \href {https://doi.org/10.1142/9789812385253_0008} {\emph {\bibinfo {booktitle} {Introduction to Quantum Computation}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {editor} {\bibfnamefont {Sandu}\ \bibnamefont {Popescu}}, \ and\ \bibinfo {editor} {\bibfnamefont {T.~P.}\ \bibnamefont {Spiller}}}\ (\bibinfo {publisher} {World-Scientific, Singapore},\ \bibinfo {year} {1997})\ Chap.~\bibinfo {chapter} {8}, pp.\ \bibinfo {pages} {213--269},\ \Eprint {http://arxiv.org/abs/quant-ph/9712048} {arXiv:quant-ph/9712048}\BibitemShut {NoStop} \bibitem [{\citenamefont {Preskill}(2018)}]{Preskill2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {John}\ \bibnamefont {Preskill}},\ }\emph {\enquote {\bibinfo {title} {{Quantum Computing in the NISQ era and beyond}},}\ }\href {https://doi.org/10.22331/q-2018-08-06-79} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {79} (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/1801.00862} {arXiv:1801.00862}\BibitemShut {NoStop} \bibitem [{\citenamefont {Guryanova}\ \emph {et~al.}(2020)\citenamefont {Guryanova}, \citenamefont {Friis},\ and\ \citenamefont {Huber}}]{Guryanova2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Yelena}\ \bibnamefont {Guryanova}}, \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Ideal Projective Measurements Have Infinite Resource Costs}},}\ 
}\href {https://doi.org/10.22331/q-2020-01-13-222} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {222} (\bibinfo {year} {2020})},\ \Eprint {http://arxiv.org/abs/1805.11899} {arXiv:1805.11899}\BibitemShut {NoStop} \bibitem [{\citenamefont {Erker}\ \emph {et~al.}(2017)\citenamefont {Erker}, \citenamefont {Mitchison}, \citenamefont {Silva}, \citenamefont {Woods}, \citenamefont {Brunner},\ and\ \citenamefont {Huber}}]{Erker_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Erker}}, \bibinfo {author} {\bibfnamefont {Mark~T.}\ \bibnamefont {Mitchison}}, \bibinfo {author} {\bibfnamefont {Ralph}\ \bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {Mischa~P.}\ \bibnamefont {Woods}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Autonomous Quantum Clocks: Does Thermodynamics Limit Our Ability to Measure Time?}}}\ }\href {https://doi.org/10.1103/physrevx.7.031022} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {031022} (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/1609.06704} {arXiv:1609.06704}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schwarzhans}\ \emph {et~al.}(2021)\citenamefont {Schwarzhans}, \citenamefont {Lock}, \citenamefont {Erker}, \citenamefont {Friis},\ and\ \citenamefont {Huber}}]{SchwarzhansLockErkerFriisHuber2021} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Emanuel}\ \bibnamefont {Schwarzhans}}, \bibinfo {author} {\bibfnamefont {Maximilian P.~E.}\ \bibnamefont {Lock}}, \bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Erker}}, \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Autonomous Temporal Probability Concentration: Clockworks and the Second Law of Thermodynamics}},}\ }\href {https://doi.org/10.1103/PhysRevX.11.011046} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {011046} (\bibinfo {year} {2021})},\ \Eprint {http://arxiv.org/abs/2007.01307} {arXiv:2007.01307}\BibitemShut {NoStop} \bibitem [{\citenamefont {Landauer}(1961)}]{Landauer_1961} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Rolf}\ \bibnamefont {Landauer}},\ }\emph {\enquote {\bibinfo {title} {{Irreversibility and Heat Generation in the Computing Process}},}\ }\href {https://doi.org/10.1147/rd.53.0183} {\bibfield {journal} {\bibinfo {journal} {IBM J. Res. 
Dev.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {183} (\bibinfo {year} {1961})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nernst}(1906)}]{Nernst_1906} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Walther}\ \bibnamefont {Nernst}},\ }\emph {\enquote {\bibinfo {title} {{{\"{U}}ber die Beziehung zwischen W{\"{a}}rmeentwicklung und maximaler Arbeit bei kondensierten Systemen.}}}\ }in\ \href {https://archive.org/details/mobot31753002089495} {\emph {\bibinfo {booktitle} {Sitzungsberichte der K{\"{o}}niglich Preussischen Akademie der Wissenschaften}}}\ (\bibinfo {address} {Berlin},\ \bibinfo {year} {1906})\ pp.\ \bibinfo {pages} {933--940}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ticozzi}\ and\ \citenamefont {Viola}(2014)}]{Ticozzi_2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Francesco}\ \bibnamefont {Ticozzi}}\ and\ \bibinfo {author} {\bibfnamefont {Lorenza}\ \bibnamefont {Viola}},\ }\emph {\enquote {\bibinfo {title} {{Quantum resources for purification and cooling: fundamental limits and opportunities}},}\ }\href {https://doi.org/10.1038/srep05192} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {5192} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1403.8143} {arXiv:1403.8143}\BibitemShut {NoStop} \bibitem [{\citenamefont {Masanes}\ and\ \citenamefont {Oppenheim}(2017)}]{Masanes_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Llu{\'\i}s}\ \bibnamefont {Masanes}}\ and\ \bibinfo {author} {\bibfnamefont {Jonathan}\ \bibnamefont {Oppenheim}},\ }\emph {\enquote {\bibinfo {title} {{A general derivation and quantification of the third law of thermodynamics}},}\ }\href {https://doi.org/10.1038/ncomms14538} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {14538} (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/1412.3828} {arXiv:1412.3828}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilming}\ and\ \citenamefont {Gallego}(2017)}]{Wilming_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Henrik}\ \bibnamefont {Wilming}}\ and\ \bibinfo {author} {\bibfnamefont {Rodrigo}\ \bibnamefont {Gallego}},\ }\emph {\enquote {\bibinfo {title} {{Third Law of Thermodynamics as a Single Inequality}},}\ }\href {https://doi.org/10.1103/PhysRevX.7.041033} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {041033} (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/1701.07478} {arXiv:1701.07478}\BibitemShut {NoStop} \bibitem [{\citenamefont {Freitas}\ \emph {et~al.}(2018)\citenamefont {Freitas}, \citenamefont {Gallego}, \citenamefont {Masanes},\ and\ \citenamefont {Paz}}]{Freitas_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Nahuel}\ \bibnamefont {Freitas}}, \bibinfo {author} {\bibfnamefont {Rodrigo}\ \bibnamefont {Gallego}}, \bibinfo {author} {\bibfnamefont {Llu{\'{i}}s}\ \bibnamefont {Masanes}}, \ and\ \bibinfo {author} {\bibfnamefont {Juan~Pablo}\ \bibnamefont {Paz}},\ }\emph {\enquote {\bibinfo {title} {{Cooling to Absolute Zero: The Unattainability Principle}},}\ }in\ \href {https://doi.org/10.1007/978-3-319-99046-0_25} {\emph {\bibinfo {booktitle} {{Thermodynamics in the Quantum Regime}}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {Felix}\ \bibnamefont {Binder}}, \bibinfo {editor} {\bibfnamefont {Luis~A}\ \bibnamefont {Correa}}, \bibinfo {editor} {\bibfnamefont {Christian}\ \bibnamefont {Gogolin}}, \bibinfo {editor} {\bibfnamefont {Janet}\ \bibnamefont {Anders}}, \ and\ \bibinfo {editor} {\bibfnamefont {Gerardo}\ \bibnamefont {Adesso}}}\ (\bibinfo {publisher} {Springer International Publishing},\ \bibinfo {address} {Cham, Switzerland},\ \bibinfo {year} {2018})\ Chap.~\bibinfo {chapter} {25}, pp.\ \bibinfo {pages} {597--622},\ \Eprint {http://arxiv.org/abs/1911.06377} {arXiv:1911.06377}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scharlau}\ and\ \citenamefont {M{\"u}ller}(2018)}]{Scharlau_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Jakob}\ \bibnamefont {Scharlau}}\ and\ \bibinfo {author} {\bibfnamefont {Markus~P.}\ \bibnamefont {M{\"u}ller}},\ }\emph {\enquote {\bibinfo {title} {{Quantum Horn's lemma, finite heat baths, and the third law of thermodynamics}},}\ }\href {https://doi.org/10.22331/q-2018-02-22-54} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {54} (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/1605.06092} {arXiv:1605.06092}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ac{\'{\i}}n}\ \emph {et~al.}(2018)\citenamefont {Ac{\'{\i}}n}, \citenamefont {Bloch}, \citenamefont {Buhrman}, \citenamefont {Calarco}, \citenamefont {Eichler}, \citenamefont {Eisert}, \citenamefont {Esteve}, \citenamefont {Gisin}, \citenamefont {Glaser}, \citenamefont {Jelezko}, \citenamefont {Kuhr}, \citenamefont {Lewenstein}, \citenamefont {Riedel}, \citenamefont {Schmidt}, \citenamefont {Thew}, \citenamefont {Wallraff}, \citenamefont {Walmsley},\ and\ \citenamefont {Wilhelm}}]{Acin_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Antonio}\ \bibnamefont {Ac{\'{\i}}n}}, \bibinfo {author} {\bibfnamefont {Immanuel}\ \bibnamefont {Bloch}}, \bibinfo {author} {\bibfnamefont {Harry}\ \bibnamefont {Buhrman}}, \bibinfo {author} {\bibfnamefont {Tommaso}\ \bibnamefont {Calarco}}, \bibinfo {author} {\bibfnamefont {Christopher}\ \bibnamefont {Eichler}}, \bibinfo {author} {\bibfnamefont {Jens}\ \bibnamefont {Eisert}}, \bibinfo {author} {\bibfnamefont {Daniel}\ \bibnamefont {Esteve}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont {Steffen~J.}\ \bibnamefont {Glaser}}, \bibinfo {author} {\bibfnamefont {Fedor}\ \bibnamefont {Jelezko}}, \bibinfo {author} {\bibfnamefont {Stefan}\ \bibnamefont {Kuhr}}, \bibinfo {author} {\bibfnamefont 
{Maciej}\ \bibnamefont {Lewenstein}}, \bibinfo {author} {\bibfnamefont {Max~F.}\ \bibnamefont {Riedel}}, \bibinfo {author} {\bibfnamefont {Piet~O.}\ \bibnamefont {Schmidt}}, \bibinfo {author} {\bibfnamefont {Rob}\ \bibnamefont {Thew}}, \bibinfo {author} {\bibfnamefont {Andreas}\ \bibnamefont {Wallraff}}, \bibinfo {author} {\bibfnamefont {Ian}\ \bibnamefont {Walmsley}}, \ and\ \bibinfo {author} {\bibfnamefont {Frank~K.}\ \bibnamefont {Wilhelm}},\ }\emph {\enquote {\bibinfo {title} {{The quantum technologies roadmap: a European community view}},}\ }\href {https://doi.org/10.1088/1367-2630/aad1ea} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {080201} (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/1712.03773} {arXiv:1712.03773}\BibitemShut {NoStop} \bibitem [{\citenamefont {{\AA}berg}(2013)}]{Aberg2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {{\AA}berg}},\ }\emph {\enquote {\bibinfo {title} {Truly work-like work extraction via a single-shot analysis},}\ }\href {https://doi.org/10.1038/ncomms2712} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {1925} (\bibinfo {year} {2013})},\ \Eprint {http://arxiv.org/abs/1110.6121} {arXiv:1110.6121}\BibitemShut {NoStop} \bibitem [{\citenamefont {Skrzypczyk}\ \emph {et~al.}(2014)\citenamefont {Skrzypczyk}, \citenamefont {Short},\ and\ \citenamefont {Popescu}}]{Skrzypczyk_2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {Anthony~J.}\ \bibnamefont {Short}}, \ and\ \bibinfo {author} {\bibfnamefont {Sandu}\ \bibnamefont {Popescu}},\ }\emph {\enquote {\bibinfo {title} {Work extraction and thermodynamics for individual quantum systems},}\ }\href {https://doi.org/10.1038/ncomms5185} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4185} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1307.1558} {arXiv:1307.1558}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lostaglio}\ \emph {et~al.}(2015)\citenamefont {Lostaglio}, \citenamefont {Jennings},\ and\ \citenamefont {Rudolph}}]{LostaglioJenningsRudolph2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Matteo}\ \bibnamefont {Lostaglio}}, \bibinfo {author} {\bibfnamefont {David}\ \bibnamefont {Jennings}}, \ and\ \bibinfo {author} {\bibfnamefont {Terry}\ \bibnamefont {Rudolph}},\ }\emph {\enquote {\bibinfo {title} {{Description of quantum coherence in thermodynamic processes requires constraints beyond free energy}},}\ }\href {https://doi.org/10.1038/ncomms7383} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun.}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {6383} (\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1405.2188} {arXiv:1405.2188}\BibitemShut {NoStop} \bibitem [{\citenamefont {Friis}\ and\ \citenamefont {Huber}(2018)}]{Friis2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}}\ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {Precision and {W}ork {F}luctuations in {G}aussian {B}attery {C}harging},}\ }\href {https://doi.org/10.22331/q-2018-04-23-61} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {61} (\bibinfo {year} {2018})},\ \Eprint {http://arxiv.org/abs/1708.00749} {arXiv:1708.00749}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campaioli}\ \emph {et~al.}(2018)\citenamefont {Campaioli}, \citenamefont {Pollock},\ and\ \citenamefont {Vinjanampathy}}]{CampaioliPollockVinjanampathy2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Francesco}\ \bibnamefont {Campaioli}}, \bibinfo {author} {\bibfnamefont {Felix~A.}\ \bibnamefont {Pollock}}, \ and\ \bibinfo {author} {\bibfnamefont {Sai}\ \bibnamefont {Vinjanampathy}},\ }\emph {\enquote {\bibinfo {title} {{Quantum Batteries}},}\ }in\ \href {https://doi.org/10.1007/978-3-319-99046-0_8} {\emph {\bibinfo {booktitle} {Thermodynamics in the Quantum Regime}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {Felix}\ \bibnamefont {Binder}}, \bibinfo {editor} {\bibfnamefont {Luis~A.}\ \bibnamefont {Correa}}, \bibinfo {editor} {\bibfnamefont {Christian}\ \bibnamefont {Gogolin}}, \bibinfo {editor} {\bibfnamefont {Janet}\ \bibnamefont {Anders}}, \ and\ \bibinfo {editor} {\bibfnamefont {Gerardo}\ \bibnamefont {Adesso}}}\ (\bibinfo {publisher} {Springer International Publishing},\ \bibinfo {address} {Cham, Switzerland},\ \bibinfo {year} {2018})\ Chap.~\bibinfo {chapter} {8}, pp.\ \bibinfo {pages} {207--225},\ \Eprint {http://arxiv.org/abs/1805.05507} {arXiv:1805.05507}\BibitemShut {NoStop} \bibitem [{\citenamefont {Scovil}\ and\ \citenamefont {Schulz-DuBois}(1959)}]{Scovil1959} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Henry E.~D.}\ \bibnamefont {Scovil}}\ and\ \bibinfo {author} {\bibfnamefont {Erich~O.}\ \bibnamefont {Schulz-DuBois}},\ }\emph {\enquote {\bibinfo {title} {{Three-Level Masers as Heat Engines}},}\ }\href {https://doi.org/10.1103/PhysRevLett.2.262} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {262} (\bibinfo {year} {1959})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kosloff}\ and\ \citenamefont {Levy}(2014)}]{Kosloff2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Ronnie}\ \bibnamefont {Kosloff}}\ and\ \bibinfo {author} {\bibfnamefont {Amikam}\ \bibnamefont {Levy}},\ }\emph {\enquote {\bibinfo {title} {{Quantum Heat Engines and Refrigerators: Continuous Devices}},}\ }\href {https://doi.org/10.1146/annurev-physchem-040513-103724} {\bibfield {journal} {\bibinfo {journal} {Annu. Rev. Phys. 
Chem.}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {365} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1310.0683} {arXiv:1310.0683}\BibitemShut {NoStop} \bibitem [{\citenamefont {Uzdin}\ \emph {et~al.}(2015)\citenamefont {Uzdin}, \citenamefont {Levy},\ and\ \citenamefont {Kosloff}}]{Uzdin2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Raam}\ \bibnamefont {Uzdin}}, \bibinfo {author} {\bibfnamefont {Amikam}\ \bibnamefont {Levy}}, \ and\ \bibinfo {author} {\bibfnamefont {Ronnie}\ \bibnamefont {Kosloff}},\ }\emph {\enquote {\bibinfo {title} {{Equivalence of Quantum Heat Machines, and Quantum-Thermodynamic Signatures}},}\ }\href {https://doi.org/10.1103/PhysRevX.5.031044} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {031044} (\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1502.06592} {arXiv:1502.06592}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mitchison}(2019)}]{Mitchison2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Mark~T.}\ \bibnamefont {Mitchison}},\ }\emph {\enquote {\bibinfo {title} {Quantum thermal absorption machines: refrigerators, engines and clocks},}\ }\href {https://doi.org/10.1080/00107514.2019.1631555} {\bibfield {journal} {\bibinfo {journal} {Contemp. Phys.}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {164} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1902.02672} {arXiv:1902.02672}\BibitemShut {NoStop} \bibitem [{\citenamefont {Woods}\ \emph {et~al.}(2019)\citenamefont {Woods}, \citenamefont {Ng},\ and\ \citenamefont {Wehner}}]{Woods2019maximumefficiencyof} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Mischa~P.}\ \bibnamefont {Woods}}, \bibinfo {author} {\bibfnamefont {Nelly Huei~Ying}\ \bibnamefont {Ng}}, \ and\ \bibinfo {author} {\bibfnamefont {Stephanie}\ \bibnamefont {Wehner}},\ }\emph {\enquote {\bibinfo {title} {The maximum efficiency of nano heat engines depends on more than temperature},}\ }\href {https://doi.org/10.22331/q-2019-08-19-177} {\bibfield {journal} {\bibinfo {journal} {{Quantum}}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {177} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1506.02322} {arXiv:1506.02322}\BibitemShut {NoStop} \bibitem [{\citenamefont {Reeb}\ and\ \citenamefont {Wolf}(2014)}]{Reeb_2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {David}\ \bibnamefont {Reeb}}\ and\ \bibinfo {author} {\bibfnamefont {Michael~M.}\ \bibnamefont {Wolf}},\ }\emph {\enquote {\bibinfo {title} {{An improved Landauer principle with finite-size corrections}},}\ }\href {https://doi.org/10.1088/1367-2630/16/10/103011} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {103011} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1306.4352} {arXiv:1306.4352}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clivaz}(2020)}]{Clivaz2020Thesis} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Fabien}\ \bibnamefont {Clivaz}},\ }\emph {\bibinfo {title} {{Optimal Manipulation Of Correlations And Temperature In Quantum Thermodynamics}}},\ \href {https://doi.org/10.13097/archive-ouverte/unige:145933} {Ph.D. 
thesis},\ \bibinfo {school} {University of Geneva} (\bibinfo {year} {2020}),\ \Eprint {http://arxiv.org/abs/2012.04321} {arXiv:2012.04321}\BibitemShut {NoStop} \bibitem [{\citenamefont {Riechers}\ and\ \citenamefont {Gu}(2021)}]{Riechers2021} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Paul~M.}\ \bibnamefont {Riechers}}\ and\ \bibinfo {author} {\bibfnamefont {Mile}\ \bibnamefont {Gu}},\ }\emph {\enquote {\bibinfo {title} {{Impossibility of achieving Landauer's bound for almost every quantum state}},}\ }\href {https://doi.org/10.1103/PhysRevA.104.012214} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {012214} (\bibinfo {year} {2021})},\ \Eprint {http://arxiv.org/abs/2103.02337} {arXiv:2103.02337}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huber}\ \emph {et~al.}(2015)\citenamefont {Huber}, \citenamefont {Perarnau-Llobet}, \citenamefont {Hovhannisyan}, \citenamefont {Skrzypczyk}, \citenamefont {Kl{\"o}ckl}, \citenamefont {Brunner},\ and\ \citenamefont {Ac{\'i}n}}]{HuberPerarnauHovhannisyanSkrzypczykKloecklBrunnerAcin2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}}, \bibinfo {author} {\bibfnamefont {Mart{\'i}}\ \bibnamefont {Perarnau-Llobet}}, \bibinfo {author} {\bibfnamefont {Karen~V.}\ \bibnamefont {Hovhannisyan}}, \bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Skrzypczyk}}, \bibinfo {author} {\bibfnamefont {Claude}\ \bibnamefont {Kl{\"o}ckl}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {Antonio}\ \bibnamefont {Ac{\'i}n}},\ }\emph {\enquote {\bibinfo {title} {Thermodynamic cost of creating correlations},}\ }\href {https://doi.org/10.1088/1367-2630/17/6/065008} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {065008} (\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1404.2169} {arXiv:1404.2169}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bruschi}\ \emph {et~al.}(2015)\citenamefont {Bruschi}, \citenamefont {Perarnau-Llobet}, \citenamefont {Friis}, \citenamefont {Hovhannisyan},\ and\ \citenamefont {Huber}}]{BruschiPerarnauLlobetFriisHovhannisyanHuber2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {David~E.}\ \bibnamefont {Bruschi}}, \bibinfo {author} {\bibfnamefont {Mart{\'i}}\ \bibnamefont {Perarnau-Llobet}}, \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}}, \bibinfo {author} {\bibfnamefont {Karen~V.}\ \bibnamefont {Hovhannisyan}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {The thermodynamics of creating correlations: Limitations and optimal protocols},}\ }\href {https://doi.org/10.1103/PhysRevE.91.032118} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {032118} (\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1409.4647} {arXiv:1409.4647}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vitagliano}\ \emph {et~al.}(2018)\citenamefont {Vitagliano}, \citenamefont {Kl{\"o}ckl}, \citenamefont {Huber},\ and\ \citenamefont {Friis}}]{VitaglianoKloecklHuberFriis2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Giuseppe}\ \bibnamefont {Vitagliano}}, \bibinfo {author} {\bibfnamefont {Claude}\ \bibnamefont {Kl{\"o}ckl}}, \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}}, \ and\ \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}},\ }\emph {\enquote {\bibinfo {title} {Trade-off between work and correlations in quantum thermodynamics},}\ }in\ \href {https://doi.org/10.1007/978-3-319-99046-0_30} {\emph {\bibinfo {booktitle} {Thermodynamics in the Quantum Regime}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {Felix}\ \bibnamefont {Binder}}, \bibinfo {editor} {\bibfnamefont {Luis~A.}\ \bibnamefont {Correa}}, \bibinfo {editor} {\bibfnamefont {Christian}\ \bibnamefont {Gogolin}}, \bibinfo {editor} {\bibfnamefont {Janet}\ \bibnamefont {Anders}}, \ and\ \bibinfo {editor} {\bibfnamefont {Gerardo}\ \bibnamefont {Adesso}}}\ (\bibinfo {publisher} {Springer International Publishing},\ \bibinfo {address} {Cham, Switzerland},\ \bibinfo {year} {2018})\ Chap.~\bibinfo {chapter} {30}, pp.\ \bibinfo {pages} {731--750},\ \Eprint {http://arxiv.org/abs/1803.06884} {arXiv:1803.06884}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bakhshinezhad}\ \emph {et~al.}(2019)\citenamefont {Bakhshinezhad}, \citenamefont {Clivaz}, \citenamefont {Vitagliano}, \citenamefont {Erker}, \citenamefont {Rezakhani}, \citenamefont {Huber},\ and\ \citenamefont {Friis}}]{BakhshinezhadEtAl2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Faraj}\ \bibnamefont {Bakhshinezhad}}, \bibinfo {author} {\bibfnamefont {Fabien}\ \bibnamefont {Clivaz}}, \bibinfo {author} {\bibfnamefont {Giuseppe}\ \bibnamefont {Vitagliano}}, \bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Erker}}, \bibinfo {author} {\bibfnamefont {Ali~T.}\ \bibnamefont {Rezakhani}}, \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}}, \ and\ \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {Friis}},\ }\emph {\enquote {\bibinfo {title} {Thermodynamically optimal creation of correlations},}\ }\href {https://doi.org/10.1088/1751-8121/ab3932} {\bibfield {journal} {\bibinfo {journal} {J. Phys. A: Math. Theor.}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {465303} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1904.07942} {arXiv:1904.07942}\BibitemShut {NoStop} \bibitem [{\citenamefont {Anders}\ and\ \citenamefont {Giovannetti}(2013)}]{Anders_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Janet}\ \bibnamefont {Anders}}\ and\ \bibinfo {author} {\bibfnamefont {Vittorio}\ \bibnamefont {Giovannetti}},\ }\emph {\enquote {\bibinfo {title} {Thermodynamics of discrete quantum processes},}\ }\href {https://doi.org/10.1088/1367-2630/15/3/033022} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {033022} (\bibinfo {year} {2013})},\ \Eprint {http://arxiv.org/abs/1211.0183} {arXiv:1211.0183}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ \emph {et~al.}(2016)\citenamefont {Brown}, \citenamefont {Friis},\ and\ \citenamefont {Huber}}]{BrownFriisHuber2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Friis}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Passivity and practical work extraction using Gaussian operations}},}\ }\href {https://doi.org/10.1088/1367-2630/18/11/113028} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {113028} (\bibinfo {year} {2016})},\ \Eprint {http://arxiv.org/abs/1608.04977} {arXiv:1608.04977}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clivaz}\ \emph {et~al.}(2019{\natexlab{a}})\citenamefont {Clivaz}, \citenamefont {Silva}, \citenamefont {Haack}, \citenamefont {Bohr~Brask}, \citenamefont {Brunner},\ and\ \citenamefont {Huber}}]{Clivaz_2019E} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Fabien}\ \bibnamefont {Clivaz}}, \bibinfo {author} {\bibfnamefont {Ralph}\ \bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {G{\'e}raldine}\ \bibnamefont {Haack}}, \bibinfo {author} {\bibfnamefont {Jonatan}\ \bibnamefont {Bohr~Brask}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Unifying paradigms of quantum refrigeration: Fundamental limits of cooling and associated work costs}},}\ }\href {https://doi.org/10.1103/PhysRevE.100.042130} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {042130} (\bibinfo {year} {2019}{\natexlab{a}})},\ \Eprint {http://arxiv.org/abs/1710.11624} {arXiv:1710.11624}\BibitemShut {NoStop} \bibitem [{\citenamefont {Taranto}\ \emph {et~al.}(2020)\citenamefont {Taranto}, \citenamefont {Bakhshinezhad}, \citenamefont {Sch\"uttelkopf}, \citenamefont {Clivaz},\ and\ \citenamefont {Huber}}]{Taranto_2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Philip}\ \bibnamefont {Taranto}}, \bibinfo {author} {\bibfnamefont {Faraj}\ \bibnamefont {Bakhshinezhad}}, \bibinfo {author} {\bibfnamefont {Philipp}\ \bibnamefont {Sch\"uttelkopf}}, \bibinfo {author} {\bibfnamefont {Fabien}\ \bibnamefont {Clivaz}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Exponential Improvement for Quantum Cooling through Finite-Memory Effects}},}\ }\href {https://doi.org/10.1103/PhysRevApplied.14.054005} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {054005} (\bibinfo {year} {2020})},\ \Eprint {http://arxiv.org/abs/2004.00323} {arXiv:2004.00323}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhen}\ \emph {et~al.}(2021)\citenamefont {Zhen}, \citenamefont {Egloff}, \citenamefont {Modi},\ and\ \citenamefont {Dahlsten}}]{zhen2021} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Yi-Zheng}\ \bibnamefont {Zhen}}, \bibinfo {author} {\bibfnamefont {Dario}\ \bibnamefont {Egloff}}, \bibinfo {author} {\bibfnamefont {Kavan}\ \bibnamefont {Modi}}, \ and\ \bibinfo {author} {\bibfnamefont {Oscar}\ \bibnamefont {Dahlsten}},\ }\emph {\enquote {\bibinfo {title} {{Universal Bound on Energy Cost of Bit Reset in Finite Time}},}\ }\href {https://doi.org/10.1103/PhysRevLett.127.190602} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {190602} (\bibinfo {year} {2021})},\ \Eprint {http://arxiv.org/abs/2106.00580} {arXiv:2106.00580}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brillouin}(1951)}]{Brillouin_1951} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Brillouin}},\ }\emph {\enquote {\bibinfo {title} {{Maxwell's Demon Cannot Operate: Information and Entropy. I}},}\ }\href {\doibase 10.1063/1.1699951} {\bibfield {journal} {\bibinfo {journal} {J. Appl. Phys.}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {334} (\bibinfo {year} {1951})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clivaz}\ \emph {et~al.}(2019{\natexlab{b}})\citenamefont {Clivaz}, \citenamefont {Silva}, \citenamefont {Haack}, \citenamefont {Bohr~Brask}, \citenamefont {Brunner},\ and\ \citenamefont {Huber}}]{Clivaz_2019L} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Fabien}\ \bibnamefont {Clivaz}}, \bibinfo {author} {\bibfnamefont {Ralph}\ \bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {G{\'e}raldine}\ \bibnamefont {Haack}}, \bibinfo {author} {\bibfnamefont {Jonatan}\ \bibnamefont {Bohr~Brask}}, \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Brunner}}, \ and\ \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {Huber}},\ }\emph {\enquote {\bibinfo {title} {{Unifying Paradigms of Quantum Refrigeration: A Universal and Attainable Bound on Cooling}},}\ }\href {https://doi.org/10.1103/PhysRevLett.123.170605} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages} {170605} (\bibinfo {year} {2019}{\natexlab{b}})},\ \Eprint {http://arxiv.org/abs/1903.04970} {arXiv:1903.04970}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ng}\ and\ \citenamefont {Woods}(2018)}]{Ng_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Nelly H.~Y.}\ \bibnamefont {Ng}}\ and\ \bibinfo {author} {\bibfnamefont {Mischa~P.}\ \bibnamefont {Woods}},\ }\emph {\enquote {\bibinfo {title} {{Resource Theory of Quantum Thermodynamics: Thermal Operations and Second Laws}},}\ }in\ \href {https://doi.org/10.1007/978-3-319-99046-0_26} {\emph {\bibinfo {booktitle} {{Thermodynamics in the Quantum Regime}}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {Felix}\ \bibnamefont {Binder}}, \bibinfo {editor} {\bibfnamefont {Luis~A}\ \bibnamefont {Correa}}, \bibinfo {editor} {\bibfnamefont {Christian}\ \bibnamefont {Gogolin}}, \bibinfo {editor} {\bibfnamefont {Janet}\ \bibnamefont {Anders}}, \ and\ \bibinfo {editor} {\bibfnamefont {Gerardo}\ \bibnamefont {Adesso}}}\ (\bibinfo {publisher} {Springer International Publishing},\ \bibinfo {address} {Cham, Switzerland},\ \bibinfo {year} {2018})\ Chap.~\bibinfo {chapter} {26}, pp.\ \bibinfo {pages} {625--650},\ \Eprint {http://arxiv.org/abs/1805.09564} {arXiv:1805.09564}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lostaglio}(2019)}]{Lostaglio_2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Matteo}\ \bibnamefont {Lostaglio}},\ }\emph {\enquote {\bibinfo {title} {{An introductory review of the resource theory approach to thermodynamics}},}\ }\href {https://doi.org/10.1088/1361-6633/ab46e5} {\bibfield {journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {114001} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1807.11549} {arXiv:1807.11549}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}(1982)}]{Bennett_1982} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Charles~H.}\ \bibnamefont {Bennett}},\ }\emph {\enquote {\bibinfo {title} {The thermodynamics of computation -- a review},}\ }\href {https://doi.org/10.1007/BF02084158} {\bibfield {journal} {\bibinfo {journal} {Int. J. Theor. Phys.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {905} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Debarba}}\ \emph {et~al.}(2019)\citenamefont {{Debarba}}, \citenamefont {{Manzano}}, \citenamefont {{Guryanova}}, \citenamefont {{Huber}},\ and\ \citenamefont {{Friis}}}]{DebarbaEtAl2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Tiago}\ \bibnamefont {{Debarba}}}, \bibinfo {author} {\bibfnamefont {Gonzalo}\ \bibnamefont {{Manzano}}}, \bibinfo {author} {\bibfnamefont {Yelena}\ \bibnamefont {{Guryanova}}}, \bibinfo {author} {\bibfnamefont {Marcus}\ \bibnamefont {{Huber}}}, \ and\ \bibinfo {author} {\bibfnamefont {Nicolai}\ \bibnamefont {{Friis}}},\ }\emph {\enquote {\bibinfo {title} {{Work estimation and work fluctuations in the presence of non-ideal measurements}},}\ }\href {https://doi.org/10.1088/1367-2630/ab4d9d} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {113002} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1902.08568} {arXiv:1902.08568}\BibitemShut {NoStop} \bibitem [{\citenamefont {Manzano}\ \emph {et~al.}(2019)\citenamefont {Manzano}, \citenamefont {Giorgi}, \citenamefont {Fazio},\ and\ \citenamefont {Zambrini}}]{Manzano_2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Gonzalo}\ \bibnamefont {Manzano}}, \bibinfo {author} {\bibfnamefont {Gian-Luca}\ \bibnamefont {Giorgi}}, \bibinfo {author} {\bibfnamefont {Rosario}\ \bibnamefont {Fazio}}, \ and\ \bibinfo {author} {\bibfnamefont {Roberta}\ \bibnamefont {Zambrini}},\ }\emph {\enquote {\bibinfo {title} {Boosting the performance of small autonomous refrigerators via common environmental effects},}\ }\href {https://doi.org/10.1088/1367-2630/ab5c58} {\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {123026} (\bibinfo {year} {2019})},\ \Eprint {http://arxiv.org/abs/1908.10259} {arXiv:1908.10259}\BibitemShut {NoStop} \bibitem [{\citenamefont {van Dam}\ and\ \citenamefont {Hayden}(2002)}]{vanDam2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Wim}\ \bibnamefont {van Dam}}\ and\ \bibinfo {author} {\bibfnamefont {Patrick}\ \bibnamefont {Hayden}},\ }\href@noop {} {\emph {\enquote {\bibinfo {title} {{Renyi-entropic bounds on quantum communication}},}\ }}\Eprint {http://arxiv.org/abs/quant-ph/0204093} {arXiv:quant-ph/0204093} (\bibinfo {year} {2002})\BibitemShut {NoStop} \bibitem [{\citenamefont {Ohya}\ and\ \citenamefont {Petz}(1993)}]{Ohya1993} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Masanori}\ \bibnamefont {Ohya}}\ and\ \bibinfo {author} {\bibfnamefont {D\'enes}\ \bibnamefont {Petz}},\ }\href@noop {} {\emph {\bibinfo {title} {{Quantum Entropy and Its Use}}}},\ \bibinfo {edition} {2nd}\ ed.\ (\bibinfo {publisher} {Springer Berlin, Heidelberg},\ \bibinfo {year} {1993})\ Chap.~\bibinfo {chapter} {2}, pp.\ \bibinfo {pages} {37--46}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ohya}\ and\ \citenamefont {Watanabe}(2010)}]{Ohya2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Masanori}\ \bibnamefont {Ohya}}\ and\ \bibinfo {author} {\bibfnamefont {Noboru}\ \bibnamefont {Watanabe}},\ }\emph {\enquote {\bibinfo {title} {{Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics}},}\ }\href {\doibase 10.3390/e12051194} {\bibfield {journal} {\bibinfo {journal} {Entropy}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {1194} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Allahverdyan}\ \emph {et~al.}(2011)\citenamefont {Allahverdyan}, \citenamefont {Hovhannisyan}, \citenamefont {Janzing},\ and\ \citenamefont {Mahler}}]{Allahverdyan_2011} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Armen~E.}\ \bibnamefont {Allahverdyan}}, \bibinfo {author} {\bibfnamefont {Karen~V.}\ \bibnamefont {Hovhannisyan}}, \bibinfo {author} {\bibfnamefont {Dominik}\ \bibnamefont {Janzing}}, \ and\ \bibinfo {author} {\bibfnamefont {Guenter}\ \bibnamefont {Mahler}},\ }\emph {\enquote {\bibinfo {title} {{Thermodynamic limits of dynamic cooling}},}\ }\href {https://doi.org/10.1103/PhysRevE.84.041109} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
E}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {041109} (\bibinfo {year} {2011})},\ \Eprint {http://arxiv.org/abs/1107.1044} {arXiv:1107.1044}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ladyman}\ \emph {et~al.}(2013)\citenamefont {Ladyman}, \citenamefont {Lambert},\ and\ \citenamefont {Wiesner}}]{Ladyman2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {James}\ \bibnamefont {Ladyman}}, \bibinfo {author} {\bibfnamefont {James}\ \bibnamefont {Lambert}}, \ and\ \bibinfo {author} {\bibfnamefont {Karoline}\ \bibnamefont {Wiesner}},\ }\emph {\enquote {\bibinfo {title} {{What is a complex system?}}}\ }\href {https://doi.org/10.1007/s13194-012-0056-8} {\bibfield {journal} {\bibinfo {journal} {Eur. J. Philos.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {33} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Holovatch}\ \emph {et~al.}(2017)\citenamefont {Holovatch}, \citenamefont {Kenna},\ and\ \citenamefont {Thurner}}]{Holovatch_2017} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Yurij}\ \bibnamefont {Holovatch}}, \bibinfo {author} {\bibfnamefont {Ralph}\ \bibnamefont {Kenna}}, \ and\ \bibinfo {author} {\bibfnamefont {Stefan}\ \bibnamefont {Thurner}},\ }\emph {\enquote {\bibinfo {title} {Complex systems: physics beyond physics},}\ }\href {https://doi.org/10.1088/1361-6404/aa5a87} {\bibfield {journal} {\bibinfo {journal} {Eur. J. Phys.}\ }\textbf {\bibinfo {volume} {38}},\ \bibinfo {pages} {023002} (\bibinfo {year} {2017})},\ \Eprint {http://arxiv.org/abs/1610.01002} {arXiv:1610.01002}\BibitemShut {NoStop} \bibitem [{\citenamefont {Marshall}\ \emph {et~al.}(2011)\citenamefont {Marshall}, \citenamefont {Olkin},\ and\ \citenamefont {Arnold}}]{2011Marshall} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Albert~W.}\ \bibnamefont {Marshall}}, \bibinfo {author} {\bibfnamefont {Ingram}\ \bibnamefont {Olkin}}, \ and\ \bibinfo {author} {\bibfnamefont {Barry~C.}\ \bibnamefont {Arnold}},\ }\href {https://dx.doi.org/10.1007/978-0-387-68276-1} {\emph {\bibinfo {title} {{Inequalities: Theory of Majorization and its Applications}}}},\ \bibinfo {edition} {2nd}\ ed.,\ Vol.\ \bibinfo {volume} {143}\ (\bibinfo {publisher} {Springer New York, NY},\ \bibinfo {year} {2011})\ Chap.~\bibinfo {chapter} {9}, pp.\ \bibinfo {pages} {297--365}\BibitemShut {NoStop} \bibitem [{\citenamefont {Thirring}(2002)}]{Thirring_2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Walther}\ \bibnamefont {Thirring}},\ }\href {https://doi.org/10.1007/978-3-662-05008-8} {\emph {\bibinfo {title} {{Quantum Mathematical Physics: Atoms, Molecules and Large Systems}}}}\ (\bibinfo {publisher} {Springer Berlin, Heidelberg},\ \bibinfo {year} {2002})\ Chap.~\bibinfo {chapter} {7}, pp.\ \bibinfo {pages} {423--499}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gamow}(1947)}]{Gamow1947} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {George}\ \bibnamefont {Gamow}},\ }\href@noop {} {\emph {\bibinfo {title} {{One Two Three... 
Infinity: Facts and Speculations of Science}}}}\ (\bibinfo {publisher} {Viking Press},\ \bibinfo {address} {New York, NY},\ \bibinfo {year} {1947})\BibitemShut {NoStop} \bibitem [{\citenamefont {Adesso}\ \emph {et~al.}(2014)\citenamefont {Adesso}, \citenamefont {Ragy},\ and\ \citenamefont {Lee}}]{AdessoRagyLee2014} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Gerardo}\ \bibnamefont {Adesso}}, \bibinfo {author} {\bibfnamefont {Sammy}\ \bibnamefont {Ragy}}, \ and\ \bibinfo {author} {\bibfnamefont {Antony~R.}\ \bibnamefont {Lee}},\ }\emph {\enquote {\bibinfo {title} {{Continuous Variable Quantum Information: Gaussian States and Beyond}},}\ }\href {https://doi.org/10.1142/S1230161214400010} {\bibfield {journal} {\bibinfo {journal} {Open Syst. Inf. Dyn.}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {1440001} (\bibinfo {year} {2014})},\ \Eprint {http://arxiv.org/abs/1401.4679} {arXiv:1401.4679}\BibitemShut {NoStop} \bibitem [{\citenamefont {Silva}\ \emph {et~al.}(2016)\citenamefont {Silva}, \citenamefont {Manzano}, \citenamefont {Skrzypczyk},\ and\ \citenamefont {Brunner}}]{Silva_2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Ralph}\ \bibnamefont {Silva}}, \bibinfo {author} {\bibfnamefont {Gonzalo}\ \bibnamefont {Manzano}}, \bibinfo {author} {\bibfnamefont {Paul}\ \bibnamefont {Skrzypczyk}}, \ and\ \bibinfo {author} {\bibfnamefont {Nicolas}\ \bibnamefont {Brunner}},\ }\emph {\enquote {\bibinfo {title} {Performance of autonomous quantum thermal machines: Hilbert space dimension as a thermodynamical resource},}\ }\href {https://doi.org/10.1103/PhysRevE.94.032120} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {032120} (\bibinfo {year} {2016})},\ \Eprint {http://arxiv.org/abs/1604.04098} {arXiv:1604.04098}\BibitemShut {NoStop} \end{thebibliography} \onecolumngrid \appendix \let\addcontentsline\oldaddcontentsline \pdfbookmark[0]{Supplemental Material}{Supplemental Material} \vspace*{5mm} \label{supplementalmaterial} \phantomsection \begin{center} \begin{LARGE} Supplemental Material \end{LARGE} \end{center} \setcounter{tocdepth}{2} \vspace*{-5mm} \tableofcontents \renewcommand{\Alph{section}}{\Alph{section}} \renewcommand{\Alph{section}\arabic{subsection}}{\Alph{section}\arabic{subsection}} \renewcommand{\Alph{section}\arabic{subsection}\alph{subsubsection}}{\Alph{section}\arabic{subsection}\alph{subsubsection}} \makeatletter \renewcommand{\p@subsection}{} \renewcommand{\p@subsubsection}{} \makeatother \section{Equality Forms of the (Carnot-)Landauer Limit} \label{app:equalityformsofthecarnot-landauerlimit} In this section, we present lower bounds on the energy change of the machine (or heat dissipated into its environment) in terms of the entropy change of the target system, both in the coherent and incoherent-control settings outlined in the main text. In the coherent setting, this amounts to the well-known Landauer principle~\cite{Landauer_1961}, whereas the incoherent setting requires an extension of this derivation. These lower bounds are important, because they put limits on the optimal energetic performance of the machines for cooling. Note, finally, that the initial state of the machine is diagonal in its energy eigenbasis and must remain so for any process saturating the (Carnot-)Landauer limit; moreover, the target begins similarly and ends up in the pure state $\ket{0}\!\bra{0}$ when perfect cooling is achieved. 
As a result, all quantities relevant to perfect cooling at the (Carnot-)Landauer limit can be computed in terms of their ``classical'' counterparts, i.e., $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} \to p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} := (p_0, \hdots, p_d)$ with $p_n = e^{-\beta E_n}$, $\tr{H \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}} \to \langle E \rangle_{p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}} := \sum_n p_n E_n$, $S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}) \to S(p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}) := -\sum_n p_n \log{(p_n)}$, $\mathcal{Z}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}) = \sum_n e^{-\beta E_n}$, and so on. Nonetheless, all of the results presented hold for the more general ``quantum'' properties. \subsection{Coherent-Control Paradigm: The Landauer Limit} \label{app:coherentcontrolparadigmlandauerlimit} The coherent setting was already studied in detail in Ref.~\cite{Reeb_2014}, where the authors derived an equality version of Landauer's principle. We restate the results here for convenience, since we will also use them in the incoherent paradigm. Recall that the setting we consider consists of two parts, the target system $\mathcal{S}$ and the machine $\mathcal{M}$. In the beginning, the joint state is $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} = \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ for some arbitrary (but fixed) Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $\beta \in \mathbb R$. Note that any full-rank state $\varrho$ can be associated to some chosen temperature $\beta$, which sets the energy scale, and a Hamiltonian $H = -\frac{1}{\beta} \log{(\varrho)}$; as we consider arbitrary Hamiltonians, we only write the state dependence on these parameters when necessary. If the state is not full rank, the rank can be used to redefine the dimension. We assume that both systems are finite dimensional. Let $U$ be a global unitary on $\mathcal{S} \mathcal{M}$. We write $\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} := U [\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})]U^\dagger$ and denote by $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime$ and $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$ the respective reduced states. The quantity $I(\mathcal{S}: \mathcal{M})_{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}} = S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) + S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime) - S(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}})$ is the final mutual information between $\mathcal{S}$ and $\mathcal{M}$ and $D(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \tr{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \log(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})}-\tr{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \log(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})}$ is the relative entropy of the final machine state with respect to its initial state. \begin{lem}[{\cite[Lemma 2]{Reeb_2014}}] \label{lem:second-law-lemma} Let the setting be as above. 
Then \begin{equation} [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] + [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})] = I(\mathcal{S}: \mathcal{M})_{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}} \geq 0. \end{equation} \end{lem} \begin{proof} We note that \begin{equation} [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] + [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})] = S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) + S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime), \end{equation} since the von Neumann entropy is additive for product states and invariant under unitary evolution. The assertion follows from the definition of the mutual information and the fact that it is non-negative. \end{proof} \begin{thm}[Equality form of Landauer's principle, {\cite[Theorem 3]{Reeb_2014}}] Let the setting be as above. Then \begin{equation} \beta \, \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime-\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})] - [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime)] = I(\mathcal{S} : \mathcal{M})_{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) \geq 0. \end{equation} \end{thm} \begin{proof} From Lemma \ref{lem:second-law-lemma}, it follows that \begin{equation} [S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime)] + I(\mathcal{S} : \mathcal{M})_{\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}} = S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}). \label{eq:th8} \end{equation} Using the fact that $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$, we infer that $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})=-S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)+\beta \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime] + \log{[\mathrm{tr}(e^{-\beta H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}})]}$ and $S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \beta \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}] + \log{[\mathrm{tr}(e^{-\beta H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}})]}$. Re-expressing the first of these for $S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)$ and inserting both into Eq.~\eqref{eq:th8} yields the claimed equality. The inequality results from non-negativity of relative entropy and mutual information. This completes the proof. 
\end{proof} \subsection{Incoherent-Control Paradigm: The Carnot-Landauer Limit} \label{app:incoherentcontrolparadigmcarnot-landauerlimit} Landauer's principle relates the heat that must necessarily be dissipated into the thermal background environment to the change in entropy of a given quantum system. Until now, we have assumed that the system of interest can interact arbitrarily with its environment (i.e., the machine); in other words, we have considered general joint unitary interactions between system and machine, without restriction. In doing so, we have tacitly assumed the ability to draw energy from some external resource (i.e., a work source) in order to implement said unitaries, which are in general not energy preserving. The particularities of such a resource are left as an abstraction. However, from a thermodynamicist's perspective, this setting may seem somewhat unsatisfactory, as the joint target-machine system is not energetically closed. In order to provide a more self-contained picture of the cooling procedure, one can explicitly include the energy resource, modelled as a quantum system itself, in the setting. To this end, note first that said resource must be out of thermal equilibrium with respect to the target and machine in order to perform any meaningful thermodynamic transformation. Furthermore, it is sensible to assume that the energy resource system is in thermal equilibrium with its own environment to begin with. The joint target-machine-resource system is then considered to be energetically closed; as such, global unitaries in this setting are restricted to be energy conserving. In order to act as a resource for cooling the target in this picture, the energy source here must begin in equilibrium with a heat bath that is hotter than the initial temperature of the machine (assuming that the machine and resource both begin in thermal states), such that a natural heat flow is induced that leads the environment of the machine to act as a final heat sink. This setting is what we call the incoherent-control scenario. In this context, Landauer's principle translates to studying the relationship between the heat that is necessarily dissipated into the machine's environment and the change in entropy of the target system. Finally, note that the relationship between the coherent and the incoherent-control paradigms is interesting in itself: while on the one hand the incoherent setting includes an additional system and therefore increases the dimensionality of the overall joint system, on the other hand by restricting the transformations on this larger space to be energy conserving, one limits the orbit of attainable states. Now let us consider the incoherent-control setting. Here, we have the target system $\mathcal{S}$, and the machine comprises one part $\mathcal{C}$ coupled to the cold bath and another $\mathcal{H}$ coupled to the hot bath. We assume that all systems are finite-dimensional. Every subsystem $\mathcal{A}$ is associated with a Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}$, and $\mathcal{C}$ and $\mathcal{H}$ are initially in thermal states; the cold bath has inverse temperature $\beta$ and the hot bath has inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} < \beta$. We assume that both $\beta$ and $\beta_{\raisebox{-1pt}{\tiny{$H$}}}$ are finite and strictly positive.
Thus, the initial joint state is $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}} = \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})$. The global evolution on $\mathcal{S}\mathcal{C}\mathcal{H}$ is implemented via a unitary $U$, leading to $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime = U (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}) U^\dagger $. We further assume that the unitary evolution on the joint system is energy conserving, i.e.,\ $[U, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}] = 0$. We write $\Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} := S(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}})$ for the entropy change on subsystem $\mathcal{A}$ and $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} := \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}})]$ for the average energy change. Moreover, the free energy of a state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}$ with respect to the inverse temperature $\beta$ is $F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}) = \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}] - \beta^{-1} S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}})$. In the incoherent setting, it makes sense to look at the energy decrease in the hot bath $\mathcal{H}$, since the hot bath can be seen as the energetic resource one must to expend in order to cool the system $\mathcal{S}$ (alternatively, as we present after the following theorem, one can consider the energy dissipated into the cold bath $\mathcal{C}$, which serves as the heat sink). \begin{thm} \label{thm:landauer-incoherent} In the above setting, it holds that \begin{equation} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} + \eta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = - \frac{1}{\beta}[\Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})] \leq 0, \end{equation} where $(0,1) \ni \eta := 1 - \beta_{\raisebox{-1pt}{\tiny{$H$}}}/\beta $ is the Carnot efficiency and $\Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} = F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$. 
\end{thm} \begin{proof} Let us consider \begin{equation} I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} := S(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) + S(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + S(\varrho^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}) - S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime) \geq 0. \end{equation} Note that the quantity $I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime}$, which quantifies the tripartite mutual information of the state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime$, is non-negative via subadditivity $S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}) + S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{B}$}}}) \geq S(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}\mathcal{B}$}}})$ for any state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{A}\mathcal{B}$}}}$. Furthermore, since the von Neumann entropy is invariant under unitary transformations and additive for tensor product states, we have \begin{equation} I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} = \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}. \end{equation} We also have that \begin{equation} \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} = \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} - D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) \end{equation} and \begin{equation} \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = \beta_{\raisebox{-1pt}{\tiny{$H$}}} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}). \end{equation} Thus, \begin{equation}\label{eq:incoherentlandauerequalityallterms} I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} = \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} - D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + \beta_{\raisebox{-1pt}{\tiny{$H$}}} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}). \end{equation} Since the unitary is energy conserving, we infer that $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = 0$. Hence, we have \begin{equation} \Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + (\beta_{\raisebox{-1pt}{\tiny{$H$}}} - \beta) \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}). 
\end{equation} Using the free energy, we can rewrite this as \begin{equation} - \beta[F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] - (\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}). \end{equation} Dividing by $-\beta$, we obtain the assertion, since, in particular, $ I(\mathcal{S}:\mathcal{C}:\mathcal{H})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}) + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime||\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}) \geq 0$ by the non-negativity of each term. \end{proof} In particular, we have shown that the energy extracted from the hot bath is lower-bounded by the increase in free energy, weighted by the inverse Carnot efficiency: \begin{equation} \mathrm{tr}[H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^\prime)] \geq \frac{1}{\eta}[F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]. \end{equation} Note that if $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}=\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, the r.h.s. is non-negative for any nontrivial thermodynamic process, i.e., any for which the target system is heated or\textemdash of particular relevance for us\textemdash cooled. This follows by the Gibbs variational principle, which states that the free energy of $\varrho$ is minimal iff $\varrho$ is the corresponding Gibbs state. Finally, in order to make a more concrete connection to the spirit of Landauer's original derivation, note that one can consider bounding the heat dissipated into the cold bath, rather than that drawn from the hot bath. Substituting $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = - (\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}})$ into Eq.~\eqref{eq:incoherentlandauerequalityallterms} leads to \begin{align}\label{eq:carnotlandauercold} -\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - \beta_{\raisebox{-1pt}{\tiny{$H$}}} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + (\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \geq 0, \end{align} which recovers the standard Landauer bound for the dissipated heat in the limit of an infinitely hot heat bath, i.e., $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \to 0$. \section{Diverging Energy} \label{app:divergingenergy} \subsection{Sufficiency: Diverging Energy Cooling Protocol} \label{app:sufficiencydivergingenergycoolingprotocol} This cooling protocol is arguably the simplest of those presented. 
The thermal populations of any target system can be exchanged with those of a machine system of the same dimension, prepared in the thermal state of $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}=\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} \sum_{n=0}^{d-1} n |n\rangle\!\langle n|$. As $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}\rightarrow\infty$, the machine state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ approaches $|0\rangle\!\langle0|_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ independently of $\beta$ (as long as $\beta \neq 0$). Such a population-exchange operation is a single interaction (i.e., the protocol occurs in unit time), which is of finite complexity (in a sense that we discuss below). However, the energy drawn from the resource $\mathcal{W}$ upon performing said \texttt{SWAP} operation is at least $E = (p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(1)}-p_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(1)}) (\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(1)})$, where $p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^{(1)}$ is the initial population of the first excited level of system $\mathcal{X}$ and $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(1)}$ is the first energy eigenvalue of the target system. Denoting by $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(k)}$ the energy eigenvalue of the $k^{\text{th}}$ excited level of the target system, we have above assumed that $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(0)}=0$ (which we do for all Hamiltonians without loss of generality) and $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} > \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(d-1)}$. As such, perfect cooling will incur a diverging energy cost. \subsection{Necessity of Diverging Energy for Protocols with Finite Time and Control Complexity}\label{app:necessitydivergingenergyprotocolsfinitetimecontrolcomplexity} Consider the following Hamiltonians for the target system and machine with finite but otherwise arbitrary energy levels, $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}= \sum_{n=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1} \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{(n)} |n\rangle\!\langle n|_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}= \sum_{n=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{(n)} |n\rangle\!\langle n|_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, respectively. For any finite inverse temperature $\beta$, the initial thermal states $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ and $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ are of full rank. Suppose now that one can implement a single unitary transformation (i.e., a unit time protocol) of finite control complexity on the joint target and machine, yielding the joint output state $\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} = U (\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})) U^\dagger$, and wishes to attain perfect cooling of the target in doing so.
By invariance of the rank under unitary transformations and the fact that the system and machine begin uncorrelated, we have \begin{align} \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] \, \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})] = \mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}] \leq \mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}] \, \mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}], \end{align} where the inequality follows from the subadditivity of the R{\' e}nyi-zero entropy~\cite{vanDam2002}, which is the logarithm of the rank. To achieve perfect cooling of the target, one must (at least asymptotically) attain $\mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}] < \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]$, which implies that $\mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}] > \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})]$. However, if this condition is achieved, then $D[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})]$ diverges, implying a diverging energy cost by Eq.~\eqref{eq:landauerequality}. The above argument already appears in Ref.~\cite{Reeb_2014}. The other situation that one must consider is the case where one attains a $\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ such that $\mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}] = \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]$ but nonetheless $\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ is arbitrarily close to a pure state, as is the case, for instance, in the protocols that we present. Consider a sequence of machines $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(i)}$ and unitaries $U^{(i)}$ such that $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(i)} \to \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $U^{(i)} \to U$. Note that since we fixed the dimensions of $\mathcal S$ and $\mathcal M$, any sequence of machines has a converging subsequence by the Bolzano-Weierstrass theorem and the fact that the set of quantum states is compact. Here, $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $U$ achieve perfect cooling. If we fix $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, we obtain a corresponding sequence $(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)^{(i)}$ such that $(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)^{(i)} \to \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$. Crucially, here, since we restrict the unitary transformation to be of finite control complexity, the states $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$ are effectively finite dimensional, in the sense that whatever their true dimension, they can be replaced by finite-dimensional versions without changing any of the relevant quantities (see Appendix~\ref{app:conditionsstructuralcontrolcomplexity}). 
Since the relative entropy $(\varrho, \sigma) \mapsto D(\varrho||\sigma)$ is lower semicontinuous~\cite{Ohya1993, Ohya2010} and since $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime || \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) \to\infty$ by the arguments above, we infer that $D[(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)^{(i)} || \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(i)}] \to \infty$ as $i \to \infty$. This argument holds independently of $\mathrm{rank}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime]$; in particular, for the special case $\mathrm{rank}[\varrho'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}] = \mathrm{rank}[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]$ that we are considering here. Thus, to approach perfect cooling in finite time and with finite control complexity, one would need a diverging energy cost. Thus, we see that within the resource trinity of energy, time, and control complexity, if the latter two are finite, then energy must diverge to asymptotically achieve a pure state. Whether or not there exist other (unaccounted for) resources that allow one to achieve this with all three of the aforementioned resources being finite remains an open question. Importantly, the above argument no longer holds if the time or control complexity is allowed to diverge. In such cases, both $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$ can be infinite dimensional, and because of this the rank argument no longer applies and the relative entropy does not necessarily diverge in the limit of perfect cooling. In contrast, as we show, it is even possible to saturate the Landauer bound. \section{Diverging Time Cooling Protocol for Finite-Dimensional Systems} \label{app:divergingtimecoolingprotocolfinitedimensionalsystems} \subsection{Proof of Theorem~\ref{thm:inftimeFinTepFinDim}} \label{app:prooftheoreminftime} \begin{proof} Consider a target system $\mathcal{S}$ of dimension $d$ with associated Hamiltonian \begin{equation}\label{eq:sysHam} H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}=\sum_{k=0}^{d-1} \, \omega_{k} \ket{k}\!\bra{k}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \end{equation} where we also set $\omega_0 = 0$ without loss of generality. Consider also the machine $\mathcal{M}$ to be composed of $N$ subsystems, $\{\mathcal{M}_n\}_{n=1,\hdots,N}$, each of the same dimension $d$ as the target, whose local Hamiltonians are \begin{equation}\label{eq:machHam} H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}=(1+n\epsilon)H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} , \end{equation} where $\epsilon=(\beta_{\textup{max}}-\beta)/(N\beta)$. We first cool the system initially at nonzero $\beta$ to some fixed, finite $\beta_{\textup{max}}$, which we eventually take $\beta_{\textup{max}} \to \infty$ in order to asymptotically achieve perfect cooling. We treat the case $\beta = 0$ as a limiting case of $\beta \to 0$: here, as $\beta \to 0$, we let $N \rightarrow \infty$ such that $N \beta \to \infty$, e.g., we specify a suitable function $N(\beta)$ such that $N(\beta) \to \infty$ ``faster'' than $\beta \to 0$. We now show that, given the ability to perform a diverging number of operations on such a configuration, one can reach the target state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_{\textup{max}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$. 
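As a brief consistency check of this parametrization (a remark added purely for intuition, using only the elementary identity $\tau(\beta, x H) = \tau(x\beta, H)$ for $x>0$), note that after swapping with the $n^{\textup{th}}$ machine subsystem the target is left thermal with respect to its own Hamiltonian at the effective inverse temperature $\beta_n := \beta(1+n\epsilon)$, with $\beta_n$ a shorthand used only in this remark: \begin{equation} \tau\big(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}\big) = \tau\big(\beta(1+n\epsilon), H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\big) = \tau\big(\beta_n, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\big), \qquad \beta_N = \beta + N\beta\epsilon = \beta_{\textup{max}}. \end{equation} The effective inverse temperature thus increases in small increments from $\beta$ (at $n=0$) to $\beta_{\textup{max}}$ (at $n=N$), consistent with the final target state above.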
In particular, we show that the protocol presented uses the minimal amount of energy to do so, and explicitly calculate this to be $\beta^{-1} \widetilde{\Delta} S$ units of energy, where $\widetilde{\Delta} S:=S[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]-S[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_{\textup{max}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]$. In other words, as the number of operations in the protocol diverges, we approach perfect cooling at the Landauer limit, thereby saturating the ultimate bound. The diverging time cooling protocol is as follows. At each step, the target system interacts with a single machine labelled by $n$ via the \texttt{SWAP} operator \begin{align} \mathbbm{S}^d_{\mathcal{S}\mathcal{M}_n} := \sum_{i,j=0}^{d-1}\ket{i,j}\!\bra{j,i}_{\mathcal{S}\mathcal{M}_n}. \end{align} As the target and machine subsystems considered here are of the same dimension, we drop the subscript on the states associated to each subsystem, for ease of notation. Such a transformation is, in general, not energy conserving, but one can calculate the energy change for both the target system and the machine due to the $n^{\textup{th}}$ interaction as \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} = \tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\,\tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})} -\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\, \tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)})}, \end{align} and so the total energy change of the system over the entire $N$-step protocol is given by \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}=\sum_{n=1}^N\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}=\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\,\tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(N)})} -\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\, \tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(0)})}. \end{align} The energy change of the machine subsystem that is swapped with the target system at each step is given by \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}=&\tr{H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}\tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)})} -\tr{H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}\tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})} = \sum_{k=0}^{d-1} \,(1+n\epsilon) \omega_{k} \left[p_k (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)})-p_k(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})\right], \label{eq:energy exch machine ar} \end{align} where $p_k(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}) = e^{-\beta (1+n\epsilon)\omega_k}/\mathcal{Z}_{\mathcal{M}_n}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})$ is the population in the $k^{\textup{th}}$ energy level of the thermal state of the $n^{\textup{th}}$ machine subsystem at inverse temperature $\beta$, with $\mathcal{Z}_{\mathcal{M}_n}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})=\tr{e^{-\beta H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)} }}$ being the partition function. 
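To make these expressions concrete, consider the simplest case of a qubit target and qubit machines ($d=2$), with $\omega_0=0$ and $\omega_1=\omega$; this special case is included purely as an illustration. Only the excited level then contributes, and Eq.~\eqref{eq:energy exch machine ar} reduces to \begin{equation} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)} = (1+n\epsilon)\,\omega \left[ p_1 (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)}) - p_1(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}) \right], \qquad p_1(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}) = \frac{1}{e^{\beta(1+n\epsilon)\omega}+1}, \end{equation} which is strictly positive for $\epsilon>0$, since the excited-state population decreases as the gap $(1+n\epsilon)\omega$ increases: each step dissipates a small amount of heat into the corresponding machine subsystem.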
By summing the contributions of the energy changes in each step, one can obtain the total energy change for the overall machine throughout the entire process: \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(N)}=& \sum_{n=1}^N\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)} = \sum_{n=1}^N \sum_{k=0}^{d-1} \, (1+n\epsilon)\omega_{k} \left[p_k (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)})-p_k(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)})\right]. \label{eq:total energy exch machine ar} \end{align} In general, it is complicated to calculate the energy cost for the protocol up until a finite time step $N$, since this depends on the full energy structure of the target system and machine subsystems involved (we return to resolve this problem for the special case of equally spaced system and machine Hamiltonians in the coming section). Here, we focus on the limit in which $N \to \infty$, i.e., there is a diverging number of machine subsystems that the target system interacts with throughout the protocol. This limit physically corresponds to that of requiring a diverging amount of time (in terms of the number of steps). Furthermore, we take the limit $\epsilon \to 0$ for any fixed $\beta, \beta_{\textup{max}}$. Consider the differentials \begin{equation} \Delta p_k^{(n)} := p_k(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)}) - p_k (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n-1)}) , \end{equation} and \begin{equation} \Delta x_n := x_{n} - x_{n-1} \qquad \text{with} \quad x_n := 1+n \epsilon . \end{equation} In the limit $\epsilon \to 0$, the increment $\Delta x_n = \epsilon$ becomes infinitesimal; noting the explicit form of the machine subsystem Hamiltonians $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(n)} = (1+n\epsilon)H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, we can then make the replacement \begin{align} -\frac{\Delta p_k^{(n)}}{\Delta x_n} \, \Delta x_n \to -\frac{\partial{p_k (\beta, xH_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})}}{\partial x} \,\textup{d}x , \end{align} where $x:=1+n \epsilon$ has become a continuous parameter. This way we can express the limit $N\to\infty$ of Eq.~\eqref{eq:total energy exch machine ar} as a Riemann integral in the following form: \begin{align} \lim_{N\to\infty}\Delta E^{(N)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} =& -\int_{1}^{x_{\textup{max}}} \sum_{k=0}^{d-1} \, x \omega_{k} \,\frac{\partial p_k (\beta, xH_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})}{\partial x}\, \textup{d}x, \end{align} where $x_{\textup{max}}:=\beta_{\textup{max}}/\beta$. Both the summation and the integral converge, so one can swap the order of their evaluation.
Integrating by parts then gives \begin{align} \lim_{N\to\infty}\Delta E^{(N)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} =& \sum_{k=0}^{d-1} \left[ - x \omega_{k} \, p_k (\beta, xH_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\big|_{1}^{x_{\textup{max}}}+\int_{1}^{x_\textup{{max}}} \, \omega_{k} \,p_k (\beta, xH_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\, \textup{d}x\right] \nonumber\\ =& \sum_{k=0}^{d-1} \left[- x \omega_{k} \, p_k (\beta, xH_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\big|_{1}^{x_{\textup{max}}} \right] -\int_{1}^{x_\textup{{max}}} \frac{1}{\beta}\frac{\partial }{\partial x}\big[\log \mathcal{Z}(\beta, x H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\big]\, \textup{d}x \nonumber\\ =& {E}[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]-{E}[\tau(\beta, x_{\textup{max}}H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]-\frac{1}{\beta}\log \mathcal{Z}(\beta, x_{\textup{max}}\, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})+\frac{1}{\beta}\log \mathcal{Z}(\beta, \, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}), \label{eq:total energy exch integral} \end{align} where in the second line we again swap the order of the integral and the sum to write $\sum_{k=0}^{d-1} \omega_k p_k(\beta, x H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) = -\tfrac{1}{\beta} \tfrac{\partial}{\partial x} [\log {\mathcal{Z}(\beta, x H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})} ]$ and in the last line we invoke ${E}[\tau(\beta,x H)]=\tr{x H\, \tau(\beta,x H)}$. Finally, writing the partition function in terms of the average energy and entropy, i.e., $ \log[ \mathcal{Z}(\beta,x H)]=-\beta \, E [\tau(\beta,x H)]+S[\tau(\beta,x H)]$, the total energy change of the machine is given by \begin{align} \lim_{N\to\infty}\Delta E^{(N)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} =& {E}[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]-{E}[\tau(\beta, x_{\textup{max}}H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]+ \, E [\tau(\beta,x_{\textup{max}} H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]-\frac{1}{\beta} S[\tau(\beta,x_{\textup{max}} H)]- \, E [\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]+\frac{1}{\beta}S[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] \nonumber\\ =&\frac{1}{\beta}\big\{S[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] - S[\tau(\beta_{\textup{max}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]\big\}=\frac{1}{\beta}\,\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \label{eq:total energy exch integral final} \end{align} where we make use of the property $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, x_{\textup{max}} H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})=\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_{\textup{max}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ and the entropy decrease of the target system corresponds to that associated with the transformation $\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\to \tau(\beta_{\textup{max}}, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$. Thus, as the number of timesteps diverges, this cooling process saturates the Landauer limit for the heat dissipated by the machine. In order to achieve perfect cooling at the Landauer limit, i.e., the final target state to approach $\ketbra{0}{0}$ and thus prove Theorem~\ref{thm:inftimeFinTepFinDim}, we can now take the limit $\beta_{\textup{max}} \to \infty$. \end{proof} The above proof holds for systems and machines of arbitrary (but equal) dimension, either finite or infinite, with arbitrary Hamiltonians. 
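As an explicit illustration of this result, consider a qubit target ($d=2$) with gap $\omega$; this special case is included only for intuition, with $h(p):=-p\log p-(1-p)\log(1-p)$ and $p(\beta):=(e^{\beta\omega}+1)^{-1}$ serving as shorthands used only in this remark. The thermal entropy of the qubit is $S[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})] = h[p(\beta)]$, so the minimal heat dissipated by the machines when cooling from $\beta$ to $\beta_{\textup{max}}$ reads \begin{equation} \lim_{N\to\infty}\Delta E^{(N)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \frac{1}{\beta}\,\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \frac{1}{\beta}\big\{ h[p(\beta)] - h[p(\beta_{\textup{max}})]\big\}, \end{equation} which tends to $\beta^{-1} S[\tau(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})]$ as $\beta_{\textup{max}} \to \infty$: perfect cooling of a qubit at the Landauer limit dissipates exactly $\beta^{-1}$ times its initial entropy.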
We now present some more detailed analysis regarding the special case where the Hamiltonians of the target system and all machine subsystems are equally spaced; this provides an opportunity both to derive a more detailed formula for the energy costs involved and to build intuition regarding some of the important differences between the finite- and infinite-dimensional settings. \subsection{Special Case: Equally Spaced Hamiltonians} \label{app:specialcaseequallyspacedhamiltonians} Consider a finite $d$-dimensional target system beginning at inverse temperature $\beta$ with an equally spaced Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) = \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \sum_{n=0}^{d-1} n \ket{n}\!\bra{n}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. In this case, we can derive a more precise dimension-dependent function for the energy dissipated by the machines throughout the optimal cooling protocol presented above. Consider an initial target system $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ and a diverging number $N$ of machines $\{ \mathcal{M}_{\alpha} \}_{\alpha = 0, \hdots, N}$ of the same dimension $d$ as the target, which all begin in a thermal state at inverse temperature $\beta$ with respect to an equally spaced Hamiltonian whose gaps between neighbouring energy levels $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}_\alpha$}}}$ are ordered non-decreasingly. Each machine is used once and then discarded; the particular interaction is the aforementioned \texttt{SWAP} between the target system and the $\alpha^\text{th}$ qudit machine, i.e., that represented by the unitary $\mathbbm{S}^d_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}_\alpha$}}} := \sum_{i,j=0}^{d-1}\ket{i,j}\!\bra{j,i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}_\alpha$}}}.$ After applying such an operation, the state of the target system is given by \begin{align}\label{eq: opt asy state max} \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\alpha}) := \frac{e^{-\beta {H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_\alpha) }}{{\mathcal{Z}}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_\alpha)}, \end{align} where ${H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_\alpha) := \omega_{\alpha} \sum_{n=0}^{d-1} n \ket{n}\!\bra{n}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and ${\mathcal{Z}}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_\alpha) :=\tr{e^{-\beta {H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_\alpha) }}$. We now calculate the energy cost explicitly for the diverging time cooling protocol, which saturates the Landauer bound in the asymptotic limit. In order to minimise the energy cost of cooling, the target system must, at each stage, be cooled as much as possible by the machine qudit with the smallest gap between neighbouring energy levels that permits cooling. In order to optimally use the given machine structure at hand, we thus order the set of energy gaps $\omega_\alpha $ in non-decreasing order. In addition, the protocol to reach the Landauer erasure bound, i.e., minimal energy cost, dictates that one must infinitesimally increase $\omega_\alpha$ of the machines in order to dissipate as little heat as possible throughout the interactions.
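To make the basic step concrete, the following short Python sketch (illustrative only; the dimension $d=3$ and all numerical values are arbitrary choices) verifies that a full \texttt{SWAP} with a thermal qudit machine leaves the target exactly in the thermal state of Eq.~\eqref{eq: opt asy state max}.

\begin{verbatim}
# Minimal sketch: apply the qudit SWAP to tau_S(beta, omega_S) x tau_M(beta, omega_alpha)
# and check that the reduced target state is tau_S(beta, omega_alpha).
# d, beta, omega_S and omega_alpha are illustrative values only.
import numpy as np

d, beta, omega_S, omega_alpha = 3, 1.0, 1.0, 1.4

def thermal(beta, omega, d):
    p = np.exp(-beta * omega * np.arange(d))
    return np.diag(p / p.sum())

# SWAP = sum_{i,j} |i,j><j,i| on the target-machine space
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[i * d + j, j * d + i] = 1.0

joint = swap @ np.kron(thermal(beta, omega_S, d), thermal(beta, omega_alpha, d)) @ swap.T
rho_S = joint.reshape(d, d, d, d).trace(axis1=1, axis2=3)   # trace out the machine
assert np.allclose(rho_S, thermal(beta, omega_alpha, d))
print("post-SWAP target state is thermal at (beta, omega_alpha)")
\end{verbatim}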
Since we are here considering a diverging time limit, we have a diverging number of qudit machines with distinct energy gaps $\omega_{\alpha}$ at our disposal; the task is then to use these in an energy-optimal manner. It is straightforward to see that to minimise the total energy cost, one must apply the sequence of unitaries $\mathbbm{S}^d_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}_\alpha$}}}$ such that $\mathbbm{S}^d_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}_0$}}}$ is first applied to reach the optimally cool $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_0)$, then $\mathbbm{S}^d_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}_1$}}}$ to reach $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_1)$, and so on. The heat dissipated by the reset machines in each stage of such a cooling protocol (i.e., for each value of $\alpha$) can thus be calculated as \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}}(\omega_{\alpha})&=-\left\{\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}}(\omega_{\alpha})\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}} (\beta, \omega_{\alpha})} -\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}}(\omega_{\alpha})\, \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}} (\beta, \omega_{\alpha-1})}\right\}\nonumber\\ &=-\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})\left[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha}) -\, \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha-1})\right]}. \label{eq:energy exch R} \end{align} In the second line, we have made use of the fact that the Hamiltonians of both the target system and each machine are $d$-dimensional and equally spaced. So far, we have obtained the energy dissipated by the reset machines. To investigate the total energy cost of cooling in such a process, we must also consider the contribution of energy transferred to the target system $\mathcal{S}$, which is characterised via its local Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and calculated via \begin{equation} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})= \tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha})} -\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\, \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha-1})}, \label{eq:energy exch SL} \end{equation} in which we set $\omega_0=\omega_\mathcal{S}$.
Using Eqs.~(\ref{eq:energy exch R},~\ref{eq:energy exch SL}), the total energy cost for each stage of cooling is given by \begin{equation} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}(\omega_\alpha)=\Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})+\Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\omega_{\alpha})= \mathrm{tr}\left\{\big[{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\mathcal{S}})-{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})\big]\big[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha})-\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\alpha-1})\big]\right\}, \label{eq:tot energy memory} \end{equation} which leads to the overall energy cost after $N$ stages, where $N$ is the number of non-zero distinct energy gaps of the reset machines, as \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{(N)}&= \sum_{\alpha = 1}^N \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}(\omega_\alpha) =\sum_{\alpha=1}^N \mathrm{tr}\left\{\big[{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\mathcal{S}})-{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})\big]\big[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha})-\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\alpha-1})\big]\right\} \label{eq:tot energy n}. \end{align} Now, we can obtain the total energy cost for each stage of the protocol (i.e., each value of $\alpha$ considered) in terms of the transformation of the target system alone. Note that in this protocol, each stage corresponding to each of the $N$ distinct energy gaps $\{ \omega_\alpha\}$ in itself requires only one operation to perfectly reach $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_\alpha)$. The end result of this protocol is that the target system is cooled from the initial thermal state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})$, where $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ is the energy gap between each pair of adjacent energy levels in the system, to $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\textup{max}})$ in the energy-optimal manner. 
Starting from Eq.~\eqref{eq:tot energy n}, we have \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{(N)}&=\sum_{\alpha=1}^{N} \mathrm{tr}\left\{\big[{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})-{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\alpha})\big]\big[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_{\alpha})-\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\alpha-1})\big]\right\}\nonumber\\ &=\sum_{\alpha=1}^{N} (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}-\omega_\alpha)\left[\left(\frac{e^{-\beta \omega_{\alpha}}}{1-e^{-\beta \omega_{\alpha}}}-\frac{e^{-\beta \omega_{\alpha-1}}}{1-e^{-\beta \omega_{\alpha-1}}}\right)-\left(\frac{d\,e^{-\beta d \omega_{\alpha}}}{1-e^{-\beta d \omega_{\alpha}}}-\frac{d e^{-\beta d \omega_{\alpha-1}}}{1-e^{-\beta d \omega_{\alpha-1}}}\right)\right]\nonumber\\ &=\lim_{K\to\infty} \sum_{\alpha=1}^{N} (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}-\omega_\alpha)\sum_{k=0}^{K}\big[\big(e^{-\beta (k+1) \omega_{\alpha}}-e^{-\beta (k+1) \omega_{\alpha-1}}\big)-d\, \big(e^{-\beta (k+1)d \omega_{\alpha}}-e^{-\beta (k+1)d \omega_{\alpha-1}}\big)\big]\nonumber\\ &=\lim_{K\to\infty}\sum_{\alpha=1}^{N} (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}-\omega_\alpha)\sum_{k=0}^{K}\big[e^{-\beta (k+1) \omega_{\alpha}}\big(1-e^{-\beta (k+1) (\omega_{\alpha-1}-\omega_{\alpha})}\big)-d\, e^{-\beta d(k+1) \omega_{\alpha}}\big(1-e^{-\beta d(k+1) (\omega_{\alpha-1}-\omega_{\alpha})}\big)\big]. \label{eq:tot energy d} \end{align} Here, since both ${H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}_\alpha$}}}$ and $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ are equally spaced Hamiltonians, the average energy can be written as \begin{align} E(\omega_x, \omega_y) = \tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_x)\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} (\beta, \omega_y)}= \frac{\sum_{n=0}^{d-1}\, n \omega_{x}e^{-n\beta \omega_y}}{\sum_{n=0}^{d-1}\, e^{-n\beta \omega_y}}=\omega_{x}\left(\frac{e^{-\beta \omega_{y}}}{1-e^{-\beta \omega_{y}}}-\frac{d\,e^{-\beta d\, \omega_{y}}}{1-e^{-\beta d\,\omega_{y}}}\right) \end{align} by evaluating the geometric series \begin{align} \mathcal{Z}(\beta, \omega_y) = \sum_{n=0}^{d-1} e^{- \beta n \omega_y} = \tfrac{1-e^{-\beta d \omega_y}}{1-e^{-\beta \omega_y}} \end{align} and writing \begin{align} E(\omega_x, \omega_y) = \sum_{n=0}^{d-1} n \omega_x \tfrac{e^{- \beta n \omega_y}}{\mathcal{Z}(\beta,\omega_y)} = \tfrac{\omega_x}{\omega_y} \left\{ - \tfrac{\partial}{\partial \beta} \log{\left[ \mathcal{Z}(\beta, \omega_y)\right]} \right\} = - \tfrac{\omega_x}{\omega_y} \tfrac{\partial}{\partial \beta} \left[ \log{\left( 1-e^{-\beta d \omega_y}\right) - \log{\left( 1-e^{-\beta \omega_y}\right)}} \right] \end{align} as we do in the second line of Eq.~\eqref{eq:tot energy d} and then using the infinite series expression $(1-x)^{-1}=\lim_{K\to\infty}\sum_{k=0}^K x^k$ for any $|x|<1$ as per the third line. As we will see in Appendix~\ref{app:hodivergingtimegaussian}, the energy cost for cooling an infinite-dimensional system when both target and machines have equally spaced Hamiltonians (i.e., harmonic oscillators) is similar to the form of Eq.~\eqref{eq:tot energy d}. Importantly, the second term in square parenthesis vanishes as $d\to\infty$, simplifying the expression even further. 
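As a quick consistency check of these closed-form expressions (purely illustrative; the test values of $\beta$, $d$ and the gaps below are arbitrary), the following Python sketch compares $\mathcal{Z}(\beta,\omega_y)$ and $E(\omega_x,\omega_y)$ with direct summation over the $d$ equally spaced levels.

\begin{verbatim}
# Minimal sketch: closed-form partition function and average energy for an
# equally spaced d-level Hamiltonian versus direct summation.
import numpy as np

def Z_direct(beta, omega, d):
    return sum(np.exp(-beta * n * omega) for n in range(d))

def E_direct(beta, omega_x, omega_y, d):
    p = np.exp(-beta * omega_y * np.arange(d))
    p /= p.sum()
    return float(np.dot(omega_x * np.arange(d), p))

def Z_closed(beta, omega, d):
    return (1 - np.exp(-beta * d * omega)) / (1 - np.exp(-beta * omega))

def E_closed(beta, omega_x, omega_y, d):
    return omega_x * (np.exp(-beta * omega_y) / (1 - np.exp(-beta * omega_y))
                      - d * np.exp(-beta * d * omega_y) / (1 - np.exp(-beta * d * omega_y)))

beta, d = 0.7, 5                          # arbitrary test values
for wx, wy in [(1.0, 1.0), (1.3, 0.4), (2.0, 3.1)]:
    assert np.isclose(Z_direct(beta, wy, d), Z_closed(beta, wy, d))
    assert np.isclose(E_direct(beta, wx, wy, d), E_closed(beta, wx, wy, d))
print("closed-form Z and E agree with direct summation")
\end{verbatim}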
We now assume that the energy gaps of the machine are given by $\omega_\alpha =\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\epsilon \alpha$ and so the total energy cost can be written as follows: \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{(N)} &=- \lim_{K\to\infty} \sum_{\alpha=1}^{N} \alpha\epsilon \sum_{k=0}^{K} e^{-\beta k (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\alpha \epsilon)}\big(1-e^{\beta k \epsilon}\big)+\lim_{K\to\infty} \sum_{\alpha=1}^{N} \alpha d \epsilon \sum_{k=0}^{K}\, e^{-\beta k d (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\alpha \epsilon)}\big(1-e^{\beta k d \epsilon}\big)\nonumber\\ &= \lim_{K\to\infty} \sum_{k=0}^{K} \big[e^{-\beta k \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}\big(e^{\beta k \epsilon}-1\big)\big(\sum_{\alpha=1}^{N} \alpha\epsilon e^{-\beta k \alpha \epsilon}\big)\big]-\lim_{K\to\infty} \sum_{k=0}^{K}e^{-\beta k d \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}\big[\big(e^{\beta k d \epsilon}-1\big)\big(\sum_{\alpha=1}^{N} d \alpha\epsilon\, e^{-\beta kd \alpha \epsilon}\big)\big], \label{eq:tot energy alphaepsilon} \end{align} where we can swap the order of summation since both sums converge and the summands are non-positive. This can be seen from the first line above, using the fact that $e^{-\alpha x}(1-e^x) \in [-1,0]$ for all $\alpha\geq 1 $ and $ x \geq 0$. We now calculate the sum over $\alpha$: \begin{align} \sum_{\alpha=1}^{N} \, \alpha\epsilon\, e^{-\beta \alpha \epsilon}&= -\frac{\partial }{\partial \beta}\sum_{\alpha=0}^{N} e^{-\beta \alpha \epsilon}= -\frac{\partial }{\partial \beta}\left(\frac{1-e^{-\beta (N+1)\epsilon}}{1-e^{-\beta \epsilon}}\right)\nonumber\\ &= -\left(\frac{(N+1)\epsilon e^{-\beta (N+1)\epsilon} -(N+1)\epsilon e^{-\beta (N+2)\epsilon}-\epsilon e^{-\beta \epsilon}+\epsilon e^{-\beta (N+2)\epsilon}}{(1-e^{-\beta \epsilon})^2}\right)\nonumber\\ & = \frac{\epsilon e^{-\beta \epsilon }}{(1-e^{-\beta \epsilon})^2}\big(1-(N+1) e^{-\beta N\epsilon} +N e^{-\beta (N+1)\epsilon}\big)\nonumber\\ &=\frac{\epsilon e^{-\beta \epsilon }}{(1-e^{-\beta \epsilon})^2}\left(1- e^{-\beta N\epsilon} -N e^{-\beta N\epsilon}(1-e^{-\beta\epsilon})\right). \label{eq:sum 1} \end{align} Combining Eqs.~(\ref{eq:tot energy alphaepsilon}) and~(\ref{eq:sum 1}), we arrive at \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{(N)} &= \lim_{K\to\infty}\sum_{k=0}^{K} \left[\frac{e^{-\beta k \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{k}\frac{k\epsilon (1- e^{-\beta Nk\epsilon})}{(1-e^{-\beta k \epsilon})}- N \epsilon \, e^{-\beta k ( \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+N \epsilon)} \right]\nonumber\\ & - \lim_{K\to\infty} \sum_{k=0}^{K} \left[\frac{e^{-\beta k d \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{k}\frac{kd\epsilon (1- e^{-\beta Nkd\epsilon})}{(1-e^{-\beta kd \epsilon})}- N d \epsilon \, e^{-\beta kd ( \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+N \epsilon)} \right]. \label{eq:tot energy alphaepsilon 2} \end{align} In order to optimise the energy cost, we now assume that the energy gaps of the machines can be chosen to be smoothly increasing in such a way that $\epsilon=\Delta \omega/N :=(\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})/N$.
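The finite geometric-series identity of Eq.~\eqref{eq:sum 1} can also be checked numerically; the following minimal sketch (with arbitrary test values chosen here) compares the closed form against direct summation.

\begin{verbatim}
# Minimal sketch: direct evaluation of sum_{alpha=1}^{N} alpha*eps*exp(-beta*alpha*eps)
# versus the closed form of Eq. (sum 1). The test values are arbitrary.
import numpy as np

beta, eps, N = 0.8, 0.05, 37
direct = sum(a * eps * np.exp(-beta * a * eps) for a in range(1, N + 1))
q = np.exp(-beta * eps)
closed = (eps * q / (1 - q) ** 2) * (1 - np.exp(-beta * N * eps)
                                     - N * np.exp(-beta * N * eps) * (1 - q))
assert np.isclose(direct, closed)
print(direct, closed)
\end{verbatim}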
Substituting this choice of $\epsilon$ into Eq.~\eqref{eq:tot energy alphaepsilon 2} yields \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{(N)} &= \lim_{K\to\infty}\sum_{k=0}^{K} \left[\frac{e^{-\beta k \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{k}\frac{k\Delta \omega (1- e^{-\beta k\Delta \omega})}{N(1-e^{-\beta k \frac{\Delta \omega}{N}})}- \Delta \omega \, e^{-\beta k ( \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\Delta \omega)} \right]\nonumber\\ &- \lim_{K\to\infty} \sum_{k=0}^{K} \left[\frac{e^{-\beta k d \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{k}\frac{kd\Delta \omega (1- e^{-\beta kd\Delta \omega})}{N(1-e^{-\beta kd \frac{\Delta \omega}{N}})}- d \Delta \omega \, e^{-\beta kd ( \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+ \Delta \omega)} \right]. \end{align} We now wish to take the limit $N \gg K \to \infty$. This assumption means that the energy change of the system is approximately equal to its free energy change; in other words, the process occurs quasi-adiabatically. The ability to switch the order of taking the limits of $K$ and $N$ going to $\infty$ follows from the monotonic convergence of the sum over $k$. In particular, note that the term inside square parentheses in each summand converges and the first term in each summation (which is the only part that depends on $N$) is positive and bounded. Under this assumption, we can use the approximation $\lim_{\beta x\to 0}\, \frac{x}{1-e^{-\beta x}}=\frac{1}{\beta}$; since $0 < e^{-\beta x} < 1$ for any positive $x$, the sum over $k$ converges to a finite value. In general, this approximation introduces a correction term for the energy change; however, under said assumption the error incurred becomes negligible. Then, the total energy change $\Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\textup{tot}}$ for the transformation $ \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\to \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\textup{max}})$ throughout the overall process is \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\textup{tot}} &= \lim_{K\to\infty} \sum_{k=0}^{K} \left[\frac{e^{-\beta k \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{\beta k}-\frac{e^{-\beta k \omega_{\textup{max}}}}{\beta k} - (\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \, e^{-\beta k \omega_{\textup{max}}} \right]\nonumber\\ & -\lim_{K\to\infty}\sum_{k=0}^{K} \left[\frac{e^{-\beta kd \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{\beta k}-\frac{e^{-\beta k d\omega_{\textup{max}}}}{\beta k} - d (\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \, e^{-\beta k d \omega_{\textup{max}}} \right]. \label{eq:tot energy alphaepsilon N infty} \end{align} As a side remark, note that here one can see that in the special case of equally spaced Hamiltonians, one indeed requires a diverging number of machine subsystems to attain perfect cooling at the Landauer limit, as this is the only way to fulfil the condition of Theorem~\ref{thm:variety}. This follows from the fact that the approximation $\frac{x}{1-e^{-\beta x}}\approx\frac{1}{\beta}$ only holds for small $\beta x$ and in general one would need to include higher-order terms that lead to an increase in energy cost.
We then have, using the expression for $E(\omega_x, \omega_y)$ derived earlier: \begin{align} \Delta {E}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\textup{tot}} &= -\frac{1}{\beta}\log (1-e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}})+\frac{1}{\beta}\log (1-e^{-\beta \omega_{\textup{max}}})-\frac{(\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \, e^{-\beta \omega_{\textup{max}}}}{1- \, e^{-\beta \omega_{\textup{max}}}}\nonumber\\ &+\frac{1}{\beta}\log (1-e^{-\beta d \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}})-\frac{1}{\beta}\log (1-e^{-\beta d \omega_{\textup{max}}})+\frac{d (\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \, e^{-\beta d \omega_{\textup{max}}}}{1- \, e^{-\beta d \omega_{\textup{max}}}}\nonumber\\ & = \frac{1}{\beta}\log \left(\frac{1-e^{-\beta d \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{1-e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}\right)- \frac{1}{\beta}\log \left(\frac{1-e^{-\beta d \omega_{\textup{max}}}}{1-e^{-\beta \omega_{\textup{max}}}}\right) -(\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\left(\frac{ \, e^{-\beta \omega_{\textup{max}}}}{1- \, e^{-\beta \omega_{\textup{max}}}}-\frac{d \, e^{-\beta d \omega_{\textup{max}}}}{1- \, e^{-\beta d \omega_{\textup{max}}}}\right)\nonumber\\ &=\frac{1}{\beta} \log [\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})]-\frac{1}{\beta} \log [\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega_{\textup{max}})]- \tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\textup{max}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\textup{max}})}+\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\textup{max}})}\notag\\ &=\frac{1}{\beta} \log [\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta , \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})]-\frac{1}{\beta} \log [\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta , \omega_{\textup{max}})]\nonumber\\ &- \tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\textup{max}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\textup{max}})}+\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})}-\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})}+\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\textup{max}})}\nonumber\\ & =\frac{1}{\beta}{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} +\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \label{eq:tot energy alphaepsilon N infty 1} \end{align} where we have explicitly written the von Neumann entropy $S(\varrho) = -\tr{\varrho \log (\varrho)}$ of a thermal state at inverse temperature $\beta$ as $S[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,\omega)]= \log [\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta ,\omega)] +\beta \, E [\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta ,\omega)]$. 
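As an independent numerical illustration of Eq.~\eqref{eq:tot energy alphaepsilon N infty 1} (a sketch only; the dimension and all parameter values below are arbitrary choices), one can evaluate the staged cost of Eq.~\eqref{eq:tot energy n} for $\omega_\alpha = \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + \alpha\,\Delta\omega/N$ and watch it approach $\beta^{-1}\Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ as $N$ grows:

\begin{verbatim}
# Minimal sketch: staged energy cost of the equally spaced protocol versus
# (1/beta) * Delta S_S + Delta E_S. All parameter values are illustrative.
import numpy as np

def thermal(beta, omega, d):
    p = np.exp(-beta * omega * np.arange(d))
    return p / p.sum()

def avg_energy(omega, p):
    return float(np.dot(omega * np.arange(len(p)), p))

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

beta, d, omega_S, omega_max = 1.0, 3, 1.0, 15.0
p_i, p_f = thermal(beta, omega_S, d), thermal(beta, omega_max, d)
target = (entropy(p_i) - entropy(p_f)) / beta \
         + avg_energy(omega_S, p_f) - avg_energy(omega_S, p_i)

for N in [10, 100, 1000, 10000]:
    omegas = omega_S + (omega_max - omega_S) * np.arange(N + 1) / N
    cost = 0.0
    for a in range(1, N + 1):
        tau_a, tau_prev = thermal(beta, omegas[a], d), thermal(beta, omegas[a - 1], d)
        # tr{[H(omega_S) - H(omega_a)][tau(beta, omega_a) - tau(beta, omega_{a-1})]}
        cost += float(np.dot((omega_S - omegas[a]) * np.arange(d), tau_a - tau_prev))
    print(N, cost, target)               # cost approaches the target value
\end{verbatim}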
Since the energy change of the target system only concerns its local Hamiltonian, we immediately see that the heat dissipated by the resetting of machines in such a cooling process, i.e., $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, saturates the Landauer bound as it is equal to $\beta^{-1} {\Delta}S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. The process described is thus energy-optimal. \section{Conditions for Structural and Control Complexity} \label{app:conditionsstructuralcontrolcomplexity} Here we begin by considering the protocol-independent structural conditions that must be fulfilled by the machine Hamiltonian to enable \emph{(1) perfect cooling} and \emph{(2) cooling at Landauer cost}; combined, these independent conditions provide a necessary requirement, namely that the machine must be infinite-dimensional with a spectrum that is unbounded (from above) for the \emph{possibility} of \emph{(3) perfect cooling at the Landauer limit}. We then turn to analyse the control complexity, which concerns the properties of the interaction that implements a given protocol. The properties of the machine Hamiltonian define the \emph{structural complexity}, which set the potential for how cool the target system can be made and at what energy cost; the extent to which a machine's potential is utilised in a particular protocol then depends on the properties of the joint unitary, i.e., the \emph{control complexity}. Here, we show that it is necessary that any protocol achieving perfect cooling at the Landauer limit involves interactions between the target and infinitely-many levels of the machine to realise the full cooling potential. We then analyse some sufficient conditions that arise as observations from our diverging control complexity protocols. This then leads us to demonstrate that individual degrees of freedom of the machine must be addressed in a fine-tuned manner to permute populations, highlighting that an operationally meaningful notion of control complexity must take into account factors beyond the effective dimensionality. \subsection{Necessary Complexity Conditions} \label{app:necessaryconditions} \subsubsection{Necessary Structural Conditions} \label{app:necessarystructuralconditions} \emph{1. Perfect Cooling.\textemdash }Let us consider the task of perfect cooling, independently from protocol-specific constraints, in the envisaged setting. One can lower bound the smallest eigenvalue $\lambda_{\textup{min}}$ of the final state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}'$ (and hence how cold the system can become) after \emph{any} unitary interaction with a thermal machine by~\cite{Reeb_2014} \begin{align}\label{eq:generalpuritybound} \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) \geq e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}} \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}), \end{align} where $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}:=\max_{i,j}|\omega_{j}-\omega_{i}|$ denotes the largest energy gap of the machine Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ with eigenvalues $\omega_{i}$. Without loss of generality, throughout this paper we set the ground-state energy of any system to be zero, i.e., $\omega_0 = 0$, such that the largest energy gap coincides with the largest energy eigenvalue. 
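The bound of Eq.~\eqref{eq:generalpuritybound} can also be probed numerically; the sketch below (illustrative only, with arbitrary parameter values and Haar-random joint unitaries generated here) checks it for a qubit target and a qubit machine.

\begin{verbatim}
# Minimal sketch: brute-force test of
# lambda_min(rho_S') >= exp(-beta*omega_max) * lambda_min(rho_S)
# for a qubit target and a qubit machine under Haar-random joint unitaries.
import numpy as np

rng = np.random.default_rng(0)
beta, omega_S, omega_M = 1.0, 1.0, 2.0    # omega_M: largest machine energy gap

def thermal_qubit(beta, omega):
    p = np.array([1.0, np.exp(-beta * omega)])
    return np.diag(p / p.sum())

def haar_unitary(n):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rho_S, tau_M = thermal_qubit(beta, omega_S), thermal_qubit(beta, omega_M)
bound = np.exp(-beta * omega_M) * np.linalg.eigvalsh(rho_S).min()

worst = 1.0
for _ in range(2000):
    U = haar_unitary(4)
    joint = U @ np.kron(rho_S, tau_M) @ U.conj().T
    rho_S_out = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out machine
    worst = min(worst, np.linalg.eigvalsh(rho_S_out).min())
print(worst, ">=", bound)                 # the bound holds for every sample
\end{verbatim}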
As we make no restrictions on the size or structure of the target or machine, the above inequality pertains to cooling protocols that could, for instance, be realised via sequences of unitaries on the target and parts of the machine. It follows that perfect cooling is only possible under two conditions: either the machine begins in a pure state ($\beta\to\infty$), or $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ is unbounded, i.e., $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}\to\infty$. Requiring $\beta<\infty$, a diverging energy gap in the machine Hamiltonian is thus a necessary structural condition for perfect cooling. Indeed, the largest energy gap of the machine plays a crucial role in limiting how cool the target system can be made (see also, e.g., Refs.~\cite{Allahverdyan_2011,Clivaz_2019L}). We now detail an independent property that is required for cooling with minimal energetic cost. \emph{2. Cooling at the Landauer Limit.\textemdash }Suppose now that one wishes to cool an initial target state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ to any thermal state $\tau^\prime_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta^*, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ with $\beta^*>\beta$ (not necessarily close to a pure state), at an energy cost saturating the Landauer limit. In Ref.~\cite{Reeb_2014}, it was shown that for any finite-dimensional machine, there are correction terms to the Landauer bound, which imply that it cannot be saturated; these terms vanish only in the limit where the machine dimension diverges. Thus, a necessary condition for achieving cooling with energy cost at the Landauer limit is provided by the following: \begin{thm}\label{thm:landauerstructural} To cool a target system $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ to $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta^*,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$, with $\beta^* > \beta$, using a machine in the initial state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ with energy cost at the Landauer limit, the machine must be infinite dimensional. \end{thm} As we will discuss below, this minimal requirement for the notion of complexity is far from sufficient to achieve cooling at Landauer cost. \emph{3. Perfect Cooling at the Landauer Limit.\textemdash }We have two independent necessary conditions on the structure of the machine that must be asymptotically achieved to enable relevant goals for cooling: the former is required to achieve perfect cooling; the latter for cooling at the Landauer limit. Together, these conditions imply that in order to achieve perfect cooling at the Landauer limit, one must have an infinite-dimensional machine with a spectrum that is unbounded (from above), as stated in Corollary~\ref{cor:structuralcondition}. Henceforth, we assume that these conditions are satisfied by the machine. The question then becomes: \emph{how does one engineer an interaction between the target system and machine to achieve perfect cooling at Landauer cost?} \subsubsection{Necessary Control Complexity Conditions} \label{app:necessarycontrolcomplexityconditions} The unbounded structural properties of the machine support the \emph{possibility} for perfect cooling at the Landauer limit; however, we now focus on the control properties of the interaction that \emph{realise} said potential (see Fig.~\ref{fig:complexity}). 
This leads to the distinct notion of \emph{control complexity}, which aims to differentiate between protocols that access the machine in a more or less complex manner. The structural complexity properties are protocol independent and related to the energy spectrum and dimensionality of the machine, whereas the control complexity concerns properties of the unitary that represents a particular protocol. For instance, the diverging-time protocol previously outlined comprises a sequence of interactions, each of which is individually not very complex; at the same time, the unconstrained control complexity protocol accesses the total (overall infinite-dimensional) machine ``at once'', and thus the number of (nontrivial) terms in the interaction Hamiltonian, or the effective dimensionality of the machine accessed by the unitary, becomes unbounded. Nonetheless, the net energy cost of this protocol with unconstrained control complexity remains in accordance with the Landauer limit, as the initial and final states of both the system and machine are identical to those in the diverging-time protocol. \emph{Effective Dimensionality.---}We begin by considering the effective dimensionality accessed (nontrivially) by a unitary, whose divergence is necessary but insufficient for achieving perfect cooling at the Landauer limit, as we show in the next section. This in turn motivates the desire for a more detailed notion of control complexity that takes into account the energy-level structure of the machine. We define the effective dimension as the dimension of the subspace of the global Hilbert space upon which the unitary acts nontrivially, which can be quantified via the minimum dimension of a subspace $\mathcal{A}$ of the joint Hilbert space $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}$ in terms of which the unitary can be decomposed as $U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{A}^\perp$}}}$, i.e., \begin{align}\label{eq:appeffectivedimension} d^{\,\textup{eff}} := \min \mathrm{dim}(\mathcal{A}) : U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{A}^\perp$}}}. \end{align} One can relate this quantity to properties of the Hamiltonian that generates the evolution in a finite unit of time $T$ (which we can set equal to unity without loss of generality) by considering the interaction picture. In general, any global unitary $U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = e^{-i H_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} T}$ is generated by a Hamiltonian of the form $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}}$. However, all protocols considered in this work have vanishing local terms, i.e., $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}= H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}=0$. More generally, one can argue that the local terms play no role in how the machine is used to cool the target. As such, one can consider unitaries generated by only the nontrivial term $H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}}$ to be those representing a particular protocol of interest. 
That is, we can restrict our attention to $U_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} = e^{-i H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}} T}$, where $H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}}$ is a Hermitian operator on $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}$ of the form $\sum_i A^i_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes B^i_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ such that none of the $A^i_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, B^i_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ are proportional to the identity operator. In doing so, it follows that the effective dimension corresponds to $\mathrm{rank}(H_{\raisebox{-1pt}{\scriptsize{\textup{int}}}})$. Lastly, note that the above definition in terms of a direct sum decomposition provides an upper bound on any similar quantification of effective dimensionality based on other tensor factorisations of the joint Hilbert space considered and makes no assumption about the underlying structure. On the other hand, knowledge of said structure would permit a more meaningful notion of complexity to be defined. For instance, the effective dimensionality of a unitary acting on a many-qubit system is better captured by considering its decomposition into a tensor product factorisation rather than the direct sum. We leave the exploration of such considerations to future work. The effective dimensionality provides a minimal quantifier for a notion of control complexity, insofar as its divergence is necessary for saturating the Landauer bound, as we prove in the next section. In fact, we prove a slightly stronger statement, namely that the dimension of the machine Hilbert space to which the unitary (nontrivially) couples the target system must diverge. However, as we discuss below, $d^{\,\textup{eff}} \to \infty$ is generally insufficient to achieve said goal, and fine-tuned control is required. Nonetheless, the manifestation of such control seems to be system dependent, precluding our ability (so far) to present a universal quantifier of control complexity. Thus, even though further conditions need to be met to achieve perfect cooling at minimal energy cost in unit time (see Theorem~\ref{thm:maineigenvaluecondition}), whenever we talk of an operation with finite control complexity, we mean one represented by a unitary that acts (nontrivially) only on a finite-dimensional subspace of the target system and machine. In contrast, by diverging control complexity, we mean a unitary that couples the target (nontrivially) to a full basis of the machine's Hilbert space, whose dimension diverges. With this notion at hand, we have Theorem~\ref{thm:variety}, which is proven below. Intuitively, we show that if a protocol accesses only a finite-dimensional subspace of the machine, then the machine is effectively finite dimensional inasmuch as a suitable replacement can be made while keeping all quantities relevant for cooling invariant. Invoking then the main result of Ref.~\cite{Reeb_2014}, there are finite-dimensional correction terms that then imply that the Landauer limit cannot be saturated. Note finally that in Theorem~\ref{thm:variety} no particular structure of the systems is presupposed and the effective dimensionality relates to various notions of complexity put forth throughout the literature (see, e.g., Refs.~\cite{Ladyman2013,Holovatch_2017}).
For instance, for a finite-dimensional target system with equally spaced energy levels $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$, suppose that the machine structure is decomposed as $N$ qubits with energy gaps $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}_n$}}} \in \{ \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n \epsilon\}_{n=1, \dots, N}$, with arbitrarily small $\epsilon > 0$ and $N\to\infty$. Then the overall unitary that approaches perfect cooling at the Landauer limit has circuit complexity equal to the diverging $N$. \subsection{Proof of Theorem~\ref{thm:variety}, Corollary~\ref{cor:structuralcondition}, and Theorem~\ref{thm:landauerstructural}} \label{app:proofcomplexitytheorems} Here we prove Theorem~\ref{thm:variety}, which implies Theorem~\ref{thm:landauerstructural} and leads to Corollary~\ref{cor:structuralcondition}. \begin{proof} Let $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$ be a separable Hilbert space associated with the system $\mathcal{X}$. Consider \begin{align} H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \sum_{n=0}^{\infty} \omega_n \ket{n}\!\bra{n} \quad \quad \textup{and} \quad \quad \mathscr H_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} = \textup{span}_{n \leq m}\{\ket{n}\}, \end{align} for some finite $m$. In other words, $\mathscr H_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}$ is a finite-dimensional restriction of $\mathscr H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$. We show that any unitary that (nontrivially) interacts the target system with only a subspace spanned by finitely many eigenstates of $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ cannot attain Landauer's bound. Consider a general unitary $U$. Suppose that $U$ couples only $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ with $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}$; whenever we talk of an operation with finite effective dimension in this paper, we mean specifically such a $U$, and by diverging effective dimension we mean a unitary that couples the target to any subspace of $\mathscr H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ whose dimension diverges. Since \begin{align} \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes (\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} \oplus \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp) \simeq (\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} )\oplus(\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp), \end{align} we can associate the subspace $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}$ with the label $\mathcal{A}$ and $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp$ with $\mathcal{B}$ and write $U = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{B}$}}}$. 
Then the initial configuration can be expressed as \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \left[ \begin{array}{cc} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} & 0 \\ 0 & \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp \end{array} \right], \end{align} where \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} := \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \sum_{n \leq m} e^{-\beta \omega_n } \ket{n}\!\bra{n} \quad \quad \textup{and} \quad \quad \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp := \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \sum_{n > m} e^{-\beta \omega_n } \ket{n}\!\bra{n} \end{align} add up to a (normalised) thermal state. Now consider the state \begin{align} \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \left[ \begin{array}{cc} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} & 0 \\ 0 & \tr{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp} \end{array} \right]. \end{align} It is straightforward to check that this is indeed a quantum state; moreover, it is the Gibbs state (at inverse temperature $\beta$) associated with the Hamiltonian \begin{align} \widetilde{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \sum_{n \leq m} \omega_n \ket{n}\!\bra{n} - \frac{1}{\beta} \log\left( \sum_{n>m} e^{-\beta \omega_n}\right) \ket{m+1}\!\bra{m+1}. \end{align} To see this, note that $\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, \widetilde{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ and that \begin{align} \textup{exp}\left\{-\beta \left[-\frac{1}{\beta} \log\left( \sum_{n>m} e^{-\beta \omega_n}\right)\right]\right\} = \sum_{n>m} e^{-\beta \omega_n}. \end{align} Thus $\widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, \widetilde{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$. To ease notation in what follows, we write $\widetilde{\omega}_{m+1} := - \frac{1}{\beta} \log\left( \sum_{n>m} e^{-\beta \omega_n}\right)$. In the rest of the proof, we show that the unitary $U$ and the Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ can be replaced by finite-dimensional versions without changing the quantities relevant for Landauer's principle. Let $\widetilde{U} = U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \oplus (\mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes \ket{m+1}\!\bra{m+1})$.
We then have \begin{align} \widetilde{U} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) \widetilde{U}^\dagger = \left[ \begin{array}{cc} U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger & 0 \\ 0 & \frac{e^{-\beta \widetilde{\omega}_{m+1}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \end{array} \right] \end{align} and \begin{align} \ptr{\mathcal{M}}{\widetilde{U} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) \widetilde{U}^\dagger} = \ptr{\mathcal{M}^\prime}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} \right) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger} + \frac{e^{-\beta \widetilde{\omega}_{m+1}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \end{align} Compare this to the expression \begin{align} \ptr{\mathcal{M}}{U (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) U^\dagger} &= \mathrm{tr}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}\left[ \begin{array}{cc} U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U^\dagger_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} & 0 \\ 0 & \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp \end{array}\right] \notag \\ &= \ptr{\mathcal{M}^\prime}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U^\dagger_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}} + \tr{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \notag \\ &= \ptr{\mathcal{M}^\prime}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U^\dagger_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}} + \frac{e^{-\beta \widetilde{\omega}_{m+1}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \end{align} since $\tr{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp} = \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \sum_{n>m} e^{-\beta \omega_n}$. Thus, the final system state is the same as it would be if we replaced the full initial machine state with $\widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$; in particular, the entropy decrease of the system for any unitary that cools it is also unchanged. The last thing we need to check is that the energy change of the machine similarly remains invariant. 
To that end, we have that \begin{align} \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime &= \ptr{\mathcal{S}}{\widetilde{U} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) \widetilde{U}^\dagger} = \ptr{\mathcal{S}}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger} + \frac{e^{-\beta \widetilde{\omega}_{m+1}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \ket{m+1}\!\bra{m+1} \notag \\ \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} + \frac{e^{-\beta \widetilde{\omega}_{m+1}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} \ket{m+1}\!\bra{m+1}. \end{align} Thus, we have \begin{align} \tr{\widetilde{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime - \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} = \mathrm{tr}\left\{H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \left[\ptr{\mathcal{S}}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} \right] \right\}, \end{align} since $U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}$ only acts on $\mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \mathscr{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}$ and $\widetilde{H}_{\raisebox{-1pt}{\tiny{$\mathcal{M}|\mathcal{M}^\prime$}}} = H_{\raisebox{-1pt}{\tiny{$\mathcal{M}|\mathcal{M}^\prime$}}}$. In the same way, we have \begin{align} \ptr{\mathcal{S}}{U (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) U^\dagger} &= \ptr{\mathcal{S}}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger} + \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp \notag \\ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} + \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}^\perp. \end{align} Thus, the energy difference is also \begin{align} \mathrm{tr}\left\{ H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \left[ \ptr{\mathcal{S}}{U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}}) U_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}^\dagger} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}^\prime$}}} \right] \right\}. \end{align} Hence, we have shown that one can replace the (potentially infinite-dimensional) machine $\mathcal{M}$ by a finite-dimensional machine $\widetilde{\mathcal{M}}$ whenever the joint unitary $U$ acts nontrivially on only finitely many levels of $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$. By Theorem 6 of Ref.~\cite{Reeb_2014}, there are finite-dimensional corrections to the Landauer bound, which then imply that it cannot be reached for finite $m$.
Thus, the effective machine dimension, i.e., that which is actually (nontrivially) accessed throughout the interaction, must diverge in order for cooling to be possible at the Landauer limit. This proves Theorem~\ref{thm:variety}, which implies Theorem~\ref{thm:landauerstructural}. \end{proof} \subsection{Sufficient Complexity Conditions} \label{app:sufficientconditionscomplexity} Having shown the necessary requirements for cooling at Landauer cost, namely a control interaction that acts nontrivially on an infinite-dimensional (sub)space of the machine's Hilbert space, let us now return to emphasise the properties of the machine and cooling protocol that are sufficient to achieve perfect cooling at Landauer cost. For simplicity, we consider the case of a qubit, which exemplifies the discussion of finite-dimensional systems. The case of infinite-dimensional systems is treated independently in the next Appendix. We first consider the structural properties of the machine. The diverging-time protocol discussed in Appendix~\ref{app:divergingtimecoolingprotocolfinitedimensionalsystems} makes use of a diverging number $N$ of machines. Thus, the machine begins in the thermal state $\tau (\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\rm tot})$ of a $(2^N)$-dimensional system (with $N$ eventually diverging), with energy-level structure given by the sum of the Hamiltonians in Eq.~\eqref{eq:machHam}, i.e., \begin{equation} H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\rm tot} = \sum_{n=1}^N H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}}^{(n)} =\sum_n (1+n\epsilon)H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} , \end{equation} that acts on the full Hilbert space (we use the usual convention that it acts as identity on unlabelled subspaces, e.g., $H^{(1)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}\equiv H^{(1)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \otimes \openone^{(2)}\otimes \dots \otimes \openone^{(N)}$). Let us analyse in detail the properties of this Hamiltonian. The ground state is $\ket{0}^{\otimes N}$, which is set at zero energy. More generally, the energy eigenvalue corresponding to an eigenstate $\ket{i_1,\dots,i_{\raisebox{-1pt}{\tiny{$N$}}}}$ is given by $\omega_1$ multiplied by the number of indices $i_k$ that are equal to $1$, plus a sum of terms $k \epsilon\, \omega_1$, where $k$ is the label of each index equal to $1$. Thus, the energy eigenvalue of the eigenstate $\ket{1,\dots,1}$ diverges as the number of subsystems diverges. At the same time, letting the factor $\epsilon$ go to zero makes all eigenstates with the same (constant) number of indices such that $i_k=1$ approach the same energy. Thus, in the limit $\epsilon \rightarrow 0$, one obtains subspaces of energy $E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{(k)}=k \omega_1$ with degeneracy given by $D_k = \binom{N}{k}$, which also diverges for each constant $k$ and diverging $N$. Therefore, in addition to satisfying the structural conditions that are necessary for perfect cooling, as stated in Theorem~\ref{thm:landauerstructural}, the machine used here features additional properties, which are crucially important for this particular protocol, in particular because they are sufficient for perfect cooling at Landauer cost. As a remark, we also emphasise that for fixed (large) $N$ and (small) $\epsilon$, the machine is finite dimensional and has a nondegenerate Hamiltonian without any energy levels formally at infinity.
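To visualise this energy-level structure, the following minimal sketch (with arbitrary small values of $N$ and $\epsilon$ chosen here) enumerates the spectrum of $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\rm tot}$ for a qubit target with gap $\omega_1$ and groups the levels by the number $k$ of excited machine qubits:

\begin{verbatim}
# Minimal sketch: spectrum of H_M^tot = sum_n (1 + n*eps) * omega_1 |1><1|_n for
# N machine qubits, grouped by the number k of excited machines. As eps -> 0
# each group collapses towards energy k*omega_1 with degeneracy binom(N, k).
# N, omega_1 and eps below are illustrative values only.
from itertools import product
from math import comb

N, omega_1, eps = 4, 1.0, 1e-3
groups = {}
for bits in product((0, 1), repeat=N):               # (i_1, ..., i_N)
    k = sum(bits)
    energy = sum(b * (1 + (n + 1) * eps) * omega_1 for n, b in enumerate(bits))
    groups.setdefault(k, []).append(energy)

for k in sorted(groups):
    spread = max(groups[k]) - min(groups[k])
    print(k, len(groups[k]), comb(N, k), spread)     # count matches binom(N, k)
\end{verbatim}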
Concerning the control complexity properties of the unitary that achieves perfect cooling in unit time, note that it is a cyclic shift operator, which can be written as \begin{align} U_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} &= \Pi_{n=1}^N \mathbbm{S}^2_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}$_n$}} = \Pi_n \left( \sum_{i,j_n=0}^{1} \ket{i,j_1,\dots,j_n,\dots,j_{\raisebox{-1pt}{\tiny{$N$}}}}\!\bra{j_n,j_1,\dots ,i,\dots,j_{\raisebox{-1pt}{\tiny{$N$}}}}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}\right) \notag \\ &= \sum_{i,j_1\dots j_{\raisebox{-1pt}{\tiny{$N$}}}=0}^{1}\ket{i,j_1,\dots ,j_{\raisebox{-1pt}{\tiny{$N$}}}}\!\bra{j_{\raisebox{-1pt}{\tiny{$N$}}},i,j_1,\dots,j_{N-1}}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} . \end{align} As is evident from its form, this unitary acts nontrivially on all of the (divergingly many) energy levels of the machine. The only basis vectors of the system-plus-machine Hilbert space that are left invariant are $\ket{i=0,j_1=0,\dots,j_{\raisebox{-1pt}{\tiny{$N$}}}=0}$ and $\ket{i=1,j_1=1,\dots,j_{\raisebox{-1pt}{\tiny{$N$}}}=1}$. \subsection{Fine-Tuned Control Conditions} \label{app:finetunedcontrolconditions} Theorem~\ref{thm:variety} captures a notion of control complexity as a resource in a thermodynamically consistent manner, i.e., in line with Nernst's unattainability principle. However, following the discussion around Theorem~\ref{thm:landauerstructural} and that above, the protocols that we present that achieve perfect cooling at Landauer cost make use of machines and interactions with a far more complicated structure than suggested by the necessary condition of diverging effective dimensionality. In particular, we note that the interactions couple the target system to a diverging number of subspaces of the machine corresponding to distinct energy gaps in a fine-tuned manner. Moreover, there are a diverging number of energy levels of the machine both above and below the first excited level of the target. In this section, we begin by outlining the general conditions that perfect cooling at the Landauer limit entails, before presenting a more nuanced notion of control complexity in terms of the variety of distinct energy gaps in the machine in Appendix~\ref{app:energygapvariety}. This suggests that an operationally meaningful quantifier of control complexity must take into account the energy-level structure of the machine that is accessed throughout any given protocol; additionally, that of the target system also plays a role. Indeed, both the final temperature of the target and the energy cost required to achieve it depend upon how the global eigenvalues are permuted via the cooling process. First, how cool the target becomes depends on the sum of the eigenvalues that are placed into the subspace spanned by the ground state. Second, for any fixed cooling amount, the energy cost depends on the constrained distribution of eigenvalues within the machine. Thus, in general, the optimal permutation of eigenvalues depends upon properties of both the target and machine. For instance, consider an arbitrary initially thermal target qubit, whose state is given by $\mathrm{diag}(p, 1-p)$, and a thermal machine of dimension $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ with spectrum $\{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{i}\}_{i = 0, \hdots , d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}$.
Now consider the decomposition of the joint Hilbert space into two orthogonal subspaces, $\mathcal{B}_0$ and $\mathcal{B}_1$, corresponding to the ground and excited eigenspaces of the target. The initial joint state is $p \, \mathrm{diag}(\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_0$}}}^{i}) \oplus (1-p) \, \mathrm{diag}(\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_1$}}}^{i})$, where we write $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_j$}}}^{i}$ to denote the $i^\textup{th}$ machine eigenvalue in the subspace $\mathcal{B}_j$. The total populations in the subspaces $\mathcal{B}_0$ and $\mathcal{B}_1$ are $p$ and $(1-p)$, respectively. To achieve perfect cooling, one must permute the eigenvalues such that a net population of approximately $(1-p)$ is transferred from $\mathcal{B}_1$ to $\mathcal{B}_0$. To do this, one can take any subset $K$ of $\{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_1$}}}^{i} \}$ such that as $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \to \infty$, $\sum_{i\in K} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_1$}}}^{i} \to (1-p)$ and a subset $K^\prime$ (with $|K| = |K^\prime|$) from $\{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_0$}}}^{i} \}$ such that $\sum_{i \in K^\prime} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_0$}}}^{i} \to 0$ and exchange them. Although the choice of permuted eigenvalues is not unique, this requirement must be fulfilled for some such pair of sets in order to perfectly cool the target. For any pair of eigenvalues exchanged between the subspaces, demanding that the exchange costs minimal energy amounts to a fine-tuning condition of the form $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{i} \to p \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_0$}}}^{i} + (1-p) \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{B}_1$}}}^{i}$ that must be satisfied. In general, the fine-tuned eigenvalue conditions that must be asymptotically attained depend upon target and machine eigenvalues, making it difficult to derive a closed-form expression. However, in the restricted scenario in which the target qubit begins maximally mixed (i.e., at infinite temperature), the machine begins thermal at some $\beta > 0$ and of dimension $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, and the unitary implemented is such that the target is cooled as much as possible, one can derive precise conditions in terms of the machine structure alone, as we demonstrate below. The case for higher-dimensional target systems is similar. This discussion highlights the importance of capturing properties beyond the effective dimensionality, e.g., those regarding the distribution of machine (and, more generally, target system) eigenvalues, in order to meaningfully quantify control complexity in thermodynamics. Asymptotically, our protocols display behaviour similar to that discussed above. Moreover, the machines exhibit an energy-level structure such that every possible energy gap is present, i.e., the set of machine energy gaps $\{ \omega_{ij} = \omega_i - \omega_j \}$ densely covers the interval $[\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}, \infty)$, where $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ is the energy of the first excited level of the target. In Appendix~\ref{app:energygapvariety}, we demonstrate that indeed this condition is necessary for minimal-energy cost cooling. Before doing so, we here first derive the fine-tuned control conditions that are asymptotically required for cooling at the Landauer limit.
We begin with some general considerations before focusing on a special case for which an analytic expression can be derived. Furthermore, we demand that the unitary implemented is such that the target is cooled as much as possible: this does not preclude the possibility for cooling the target system less (albeit still close to a pure state) at a cost closer to the Landauer bound without satisfying all of the fine-tuning conditions. Nonetheless, in general there are a number of such conditions to be satisfied, and the special case serves as a pertinent example that demonstrates how the particular set of fine-tuning conditions for any considered scenario can be similarly derived. Consider an arbitrary thermal target system and machine of finite dimensions, with respective spectra $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} := \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{0}, \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1}\}$ and $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} := \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{0}, \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}\}$. The states begin uncorrelated, so the global spectrum of the initial joint state is $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} := \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^0, \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - 1}\} = \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{0}\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{0}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{0}\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{1}, \hdots , \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1}\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}\}$. Consider now a global unitary transformation; such a transformation cannot change the values of the spectrum, but merely permute them. In other words, the spectrum of the final global state after any such unitary is invariant and we have equivalence of the (unordered) sets $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime$ and $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}$. The transformation that cools the target system as much as possible\footnote{We take majorisation among passive states to be the measure of cooling; this implies the highest possible ground state population and purity, and lowest possible entropy and average energy via Schur convexity.} is the one that places the $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ largest of the global eigenvalues into the subspace spanned by the ground state of the target, the second $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ largest into that spanned by the first excited state of the target, and so forth, with the smallest $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ global eigenvalues placed in the subspace corresponding to the highest energy eigenstate of the target system (we prove this statement shortly). More precisely, we denote by $\boldsymbol{\lambda}^{\downarrow}$ the nonincreasing ordering of the set $\boldsymbol{\lambda}$. 
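The blockwise placement just described is simple to sketch numerically. The following illustration (in Python; the inverse temperature and the energy levels of the target and machine are arbitrary values chosen only for the example) sorts the joint spectrum, fills the eigenspaces of the target blockwise, and checks that the resulting machine populations are nonincreasing in energy, i.e., energetically passive:
\begin{verbatim}
import numpy as np

def thermal(beta, energies):
    """Thermal (Gibbs) populations for the given energy levels."""
    p = np.exp(-beta * np.asarray(energies, dtype=float))
    return p / p.sum()

beta = 1.0
lam_S = thermal(beta, [0.0, 1.0])              # qubit target (illustrative energies)
lam_M = thermal(beta, [0.0, 0.3, 0.7, 2.0])    # four-level machine (illustrative energies)

# Joint spectrum of the initial product state, sorted in nonincreasing order.
joint = np.sort(np.outer(lam_S, lam_M).ravel())[::-1]

# Max-cooling at minimal energy cost: fill the target's eigenspaces blockwise
# with the sorted eigenvalues, largest block into the ground-state subspace.
blocks = joint.reshape(len(lam_S), len(lam_M))
pop_S = blocks.sum(axis=1)    # final target populations
pop_M = blocks.sum(axis=0)    # final machine populations, ordered by machine energy

print("final target populations:", pop_S)
print("final machine populations:", pop_M)
print("machine passive?", bool(np.all(np.diff(pop_M) <= 1e-12)))
\end{verbatim}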
Since the target and machine begin thermal, the local spectra $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ are already ordered in this way with respect to their energy eigenbases, which we consider to be labelled in nondecreasing order. Cooling the target system as much as possible amounts to achieving the final reduced state of the target \begin{align}\label{eq:systemreducedoptimallycool} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime = \sum_{i=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1} \left(\sum_{j=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\downarrow id_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + j}\right) \ket{i}\!\bra{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}. \end{align} As a side remark, note that since each of the global eigenvalues are a product of the initial local eigenvalues (due to the initial tensor product structure), which are in turn related to the energy-level structure of the target system and machine (as they begin as thermal states), one can already see here that in order to approach perfect cooling, the machine must have some diverging energy gaps, such that the (finite) sum of the global eigenvalues contributing to the ground-state population of the target approaches 1. Of course, there is an equivalence class of unitaries that can achieve the same amount of cooling; in particular, any permutation of the set of the $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ global eigenvalues \emph{within} each energy eigenspace of the target system achieves the same amount of cooling, since it is the sum of these values that contribute to the total population in each subspace. Importantly, although such unitaries cool the target system to the same extent, their effect on the machine differs, and therefore so too does the energy cost of the protocol. However, demanding that such cooling is achieved at minimal energy cost amounts to a unique constraint on the global post-transformation state, namely that it must render the machine energetically passive, leading to the form: \begin{align}\label{eq:globaloptimallycool} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime = \sum_{i=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1} \sum_{j=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\downarrow id_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}+j} \ket{ij}\!\bra{ij}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}. \end{align} We can derive the above form of the final joint state as follows. Consider the following ordering for the energy eigenbasis of $\mathcal{S}\mathcal{M}$ chosen to match the above form \begin{align}\label{eq:basisordering} \{ \ket{00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, \ket{01}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, ..., \ket{0,d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}-1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, \ket{10}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, ..., \ket{1,d_{\raisebox{-1pt}{\tiny{$\mathcal{M} $}}}-1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, ..., \ket{d_{\raisebox{-1pt}{\tiny{$\mathcal{S} $}}}-1,0}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, ..., \ket{d_{\raisebox{-1pt}{\tiny{$\mathcal{S} $}}}-1,d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}-1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}\}. 
\end{align}
This ordering is monotonically nondecreasing primarily with respect to the energy of $\mathcal{S}$, and secondarily w.r.t. $\mathcal{M}$. We take the final state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime$ to be expressed in this basis. To maximise the cooling in a single unitary operation, we maximise the sum of the first $k \cdot d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ diagonal elements, for each $k\in \{1,2,..., d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\}$, as each sum corresponds to the total population in the $k^\textup{th}$ lowest energy eigenstate of $\mathcal{S}$. The initial state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}$ is diagonal in this basis, so the vector of initial diagonal elements, which we label $\boldsymbol{\theta} := \mathrm{diag}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}})$, is also the vector of eigenvalues, $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}$, i.e., $\boldsymbol{\theta} = \boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}$. Furthermore, since the unitary operation leaves the set of eigenvalues invariant, we have via the Schur-Horn lemma~\cite{2011Marshall} that the vector of final diagonal elements, which we label $\boldsymbol{\theta}^\prime := \mathrm{diag}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime)$, is majorised by the vector of initial ones, i.e., $\boldsymbol{\theta}^\prime \prec \boldsymbol{\theta}$. It follows that the partial sums we wish to maximise are upper bounded by the corresponding partial sums of the $k \cdot d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}$ largest diagonal elements of the initial state. We claim that the unitary that achieves this maximal cooling at minimum energy cost is the one that permutes the diagonal elements to be ordered w.r.t. the basis ordering in Eq.~\eqref{eq:basisordering}. More precisely, via the Schur-Horn lemma, one can always write $\boldsymbol{\theta}^\prime = D \boldsymbol{\theta}$, with $D$ a doubly stochastic matrix. The partial sums of the first $k \cdot d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}$ elements are linear functions of the elements of $\boldsymbol{\theta}$. Thus the maximum values are obtained at the extremal points of the convex set of doubly stochastic matrices, which are the permutation matrices, via the Birkhoff-von Neumann theorem~\cite{2011Marshall}. One can see by inspection that the optimal permutation matrices are the ones that place the largest $d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}$ diagonal elements in the first block (i.e., the ground-state eigenspace of $\mathcal{S}$), the next largest $d_{\raisebox{-1pt}{\tiny{$ \mathcal{M}$}}}$ elements in the second block (i.e., the first excited-state eigenspace of $\mathcal{S}$), and so on. Within each block, the ordering does not affect the cooling of the target, so there is an equivalence class of permutations that satisfy the maximal cooling criterion. However, adding the optimisation over the energy cost eliminates this freedom. We may consider the reduced set of stochastic matrices that satisfy maximal cooling, generated by the permutations described above. Since the average energy of the final state is again a linear function of the diagonal elements, here too the minimum corresponds to a permutation matrix. Clearly the permutation that minimises the average energy is the one that orders the elements within each block to be decreasing w.r.t. the energies of $\mathcal{M}$.
Thus, the unique\footnote{Note that degeneracies in energy eigenvalues would lead to sets of equal diagonal elements, and prevent one from choosing a unique permutation. However, as the state in such degenerate subspaces is proportional to the identity matrix, we may take any unitary that is block diagonal w.r.t. the degeneracies without affecting the state, and hence the final cooling or average energy change.} stochastic matrix $D$ that leads to maximal cooling at the least energy cost possible is the one that permutes the energy eigenvalues to be ordered decreasing primarily w.r.t. the system energies, and secondarily w.r.t. the machine energies. The action of the stochastic matrix on diagonal elements of the state is related to the unitary operation on the entire quantum state by $|U_{ij}|^2 = D_{ij}$, so that the unitary operation is also a permutation (up to an energy-dependent phase, which is irrelevant since the initial and final states are diagonal). We may understand this optimal operation through the notion of passivity, by noting that it cools at minimal energy cost by rendering the machine into the most energetically passive reduced state in the joint unitary orbit with respect to the cooling constraint on the target. Intuitively, one has cooled the target system maximally at the expense of heating the machine as little as possible. The final reduced state of the machine corresponding to this energetically optimal cooling transformation is \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime = \sum_{j=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \left(\sum_{i=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\downarrow i d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + j}\right) \ket{j}\!\bra{j}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}. \end{align} In general, any unitary that achieves these desired conditions simultaneously depends upon the energy-level structure of both the target system and machine, precluding a closed-form set of conditions that can be expressed only in terms of the machine. However, for the special case of a maximally mixed initial target state (i.e., cooling a thermal state at infinite temperature or erasing quantum information from its most entropic state), one can deduce this ordering precisely and moreover relate it directly to properties of the machine Hamiltonian, as we now demonstrate. In the following, we assume that $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ is even; the case for odd $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ can be derived similarly. \begin{thm}\label{thm:maineigenvaluecondition} Consider the target system to begin in the maximally mixed state and a thermal machine at temperature $\beta > 0$, whose eigenvalues are labelled in nonincreasing order, $\{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}\}_{i = 0, \hdots , d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}$. 
In order to cool the target perfectly, with the restriction that the target must be cooled as much as possible, at an energy cost that saturates the Landauer limit, the machine eigenvalues must satisfy \begin{align}\label{eq:maineigenvaluesum} \sum_{i=0}^{\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 1, \quad \sum_{i=\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 0, \end{align} and \begin{align}\label{eq:maingenerictermeigenvalue} \frac{\frac{1}{2} \left(\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + \lfloor \frac{i}{2} \rfloor}\right)}{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}} \to 1 \end{align} for all $i \in \{ 0, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1\}$, where $\lfloor \cdot \rfloor$ denotes the floor function and $\rightarrow$ denotes that the condition is satisfied asymptotically, \textup{i.e.}, as $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \to \infty$\footnote{Strictly speaking, in the limit $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \to \infty$ the conditions in Eq.~\eqref{eq:maingenerictermeigenvalue} must only be satisfied for almost all $i$, i.e., for all but a small subset that contributes negligibly to the relative entropy, as we discuss below.}. \end{thm} \begin{proof} We consider a qubit for simplicity, but the generalisation to cooling an arbitrary-dimensional maximally mixed state is straightforward. The initial joint spectrum of the system and machine is \begin{align}\label{eq:maximallymixedglobalspectrum} \boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} = \tfrac{1}{2} \{ \boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow}, \boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow}\} = \tfrac{1}{2} \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} , \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} , \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}\}. 
\end{align} As each $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} = \tfrac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})} e^{-\beta \omega_i}$ for any thermal state with Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \sum_{i} \omega_i \ket{i}\!\bra{i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ written with respect to nondecreasing energy eigenvalues, it follows that the globally ordered spectrum is \begin{align}\label{eq:appdgloballyorderedfinalspectrum} \boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\downarrow} = \tfrac{1}{2} \{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} , \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} , \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1}\}. \end{align} Expressing the global states with respect to the product of local energy eigenbases, we have that the initial joint state is $\tfrac{\mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}}{2} \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \mathrm{diag}(\boldsymbol{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}})$ [see Eq.~\eqref{eq:maximallymixedglobalspectrum}] and the unitary that cools the target as much as possible at minimum energy cost is the one achieving the globally passive final joint state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime = \mathrm{diag}(\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^{\downarrow})$. 
This leads to the following reduced states \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime &= \left(\sum_{i=0}^{\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} - 1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}\right) \ket{0}\!\bra{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \left(\sum_{i=\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}\right) \ket{1}\!\bra{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \\ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime &= \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}\right) \ket{0}\!\bra{0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}\right) \ket{1}\!\bra{1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \notag \\ &+ \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + 1}\right) \ket{2}\!\bra{2}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 1} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + 1}\right) \ket{3}\!\bra{3}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \hdots \label{eq:finalmachinestateoptimalenergy} \end{align} Intuitively, the reduced target state has the larger half of the initial machine eigenvalues in the ground state and the smaller half in the excited state; the reduced machine state has the sum of the largest elements from each of these halves in its ground state, the next largest element from each half (which, in this case, is equal to the first) in its first excited state, and so forth. Let us denote the spectrum of the final state of the machine by $\boldsymbol{\lambda}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\prime \downarrow} := \{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\prime \downarrow 0} , \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\prime \downarrow 1}, \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\prime \downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \} = \tfrac{1}{2}\{ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} }, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow 0} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} }, \hdots, \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + 1} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \}$. Importantly, by construction, the reduced state of the final machine has its local eigenvalues in nonincreasing order, i.e., it is energetically passive. 
We therefore have the final reduced states of the protocol that cools the initially maximally mixed target as much as possible at minimal energy cost, in particular with minimal heat dissipation by the machine, given the structural resources at hand. We can now analyse the properties that are required to saturate the Landauer limit by considering the terms on the r.h.s. of Eq.~\eqref{eq:landauerequality} for any fixed initial inverse temperature of the machine $\beta \geq 0$. First note that the change in entropy of the target system is fixed by the demanded amount of cooling, so the first term plays no further role in the optimisation. The second term concerns the mutual information built up between the target system and machine. In general, this is nonvanishing, although one can achieve any desired amount of cooling without generating such correlations (as per our constructions). Furthermore, in the case where one wants to consider attaining a perfectly cool final state, as we do here, the final reduced state of the target is approximately pure and so $I(S:M)_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime} \to 0$. In terms of the reduced states above, this means that $\sum_{i=0}^{\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 1$ and $\sum_{i=\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 0$, which can occur only if the largest half of the energy eigenvalues of the machine, i.e., $\omega_{i}$ for all $i \geq \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}$, diverge (since the summation contains only non-negative summands). The final term that must be minimised to saturate the Landauer limit is the relative entropy of the final with respect to the initial machine state, $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$. Here one can already see that an infinite-dimensional machine is required to saturate the Landauer bound: from Ref.~\cite{Reeb_2014}, $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) \geq f(\Delta S_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$, where $f$ is a dimension-dependent function of the entropy difference of the machine that exhibits non-negative correction terms that vanish only in the limit $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \to \infty$. The relative entropy vanishes iff $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$; moreover, by Pinsker's inequality one has $\tfrac{1}{2}\| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime\|_1^2 \leq D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}\| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime)$, so one can bound the trace distance between the initial and final state of the machine for any desired value of the relative entropy.
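As a brief numerical aside, the interplay of the three terms just discussed can be made explicit for a small example. The following sketch (in Python; the four machine energy levels are arbitrary illustrative values) implements the max-cooling operation on a maximally mixed qubit and verifies that the dissipated energy splits exactly into the entropy change of the target, the final mutual information, and the relative-entropy term of the equality-form Landauer bound:
\begin{verbatim}
import numpy as np

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

beta = 1.0
omega = np.array([0.0, 0.4, 1.1, 2.5])           # machine energies (illustrative)
lam = np.exp(-beta * omega); lam /= lam.sum()    # thermal machine populations
dM = lam.size

# Max-cooling of a maximally mixed qubit at minimal energy cost.
joint = np.sort(np.concatenate([lam, lam]) / 2.0)[::-1]
ground, excited = joint[:dM], joint[dM:]
pop_S = np.array([ground.sum(), excited.sum()])  # final target populations
pop_M = ground + excited                         # final (passive) machine populations

dE      = np.dot(pop_M - lam, omega)                     # energy change of the machine
dS_S    = np.log(2) - shannon(pop_S)                     # entropy decrease of the target
S_joint = np.log(2) + shannon(lam)                       # joint entropy (unitarily invariant)
I_SM    = shannon(pop_S) + shannon(pop_M) - S_joint      # final mutual information
D_M     = np.sum(pop_M * np.log(pop_M / lam))            # relative entropy of the machine

print(beta * dE, dS_S + I_SM + D_M)              # the two sides of the equality agree
\end{verbatim}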
Although $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime$ implies a trivial process that cannot cool the (initially thermal) target system, as our protocols that saturate the Landauer limit demonstrate, there are processes that asymptotically display the behaviour $ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \to \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ \emph{and} cool the target system. For the asymptotic machine states to converge, in particular, their eigenvalues must become approximately equal asymptotically. Demanding this on the spectrum in Eq.~\eqref{eq:finalmachinestateoptimalenergy} leads to a generic term that must be asymptotically satisfied of the form: \begin{align}\label{eq:lemmagenerictermeigenvalue} \frac{\frac{1}{2} \left(\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + \lfloor \frac{i}{2} \rfloor}\right)}{\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}} \to 1 \qquad \forall \; i \in \{ 0, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1\}. \end{align} \end{proof} \noindent In order to achieve perfect cooling at the Landauer limit, one thus must simultaneously satisfy the conditions outlined in Theorem~\ref{thm:maineigenvaluecondition}. In other words, to minimise the relative-entropy term with the additional constraints $\sum_{i=0}^{\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 1$ and $\sum_{i=\frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-1} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} \to 0$. The first thing to note is that since the eigenvalues $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i}$ contribute to different sums depending on whether $i$ is in the larger half $\{ 0, \hdots, \tfrac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}-1\}$ or smaller half $\{ \tfrac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}\}$, one cannot have $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + \lfloor \frac{i}{2} \rfloor } = \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} \; \forall \;i$ (i.e., a completely degenerate machine), since then both summations would be over identical values and there is no way for them to converge to distinct values. This precludes the trivial solution that satisfies the constraints of Eq.~\eqref{eq:maingenerictermeigenvalue} alone, namely the maximally mixed machine state, which cannot be used to perform any cooling [as, in particular, it does not satisfy the constraints of Eq.~\eqref{eq:maineigenvaluesum}]. 
For the conditions to be simultaneously satisfied, we intuitively require that, although they must be distinct, for each $i$ both $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor}$ and $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2} + \lfloor \frac{i}{2} \rfloor }$ become ``close'' to each other, but with a difference that decays rapidly as $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \to \infty$, such that in the infinite-dimensional limit the larger ``half'' of the eigenvalues sum to one and the smaller ``half'' sum to zero. A subtle point to note is that, because the relative entropy involves the ratio of final to original eigenvalues, it is not enough that the absolute difference $|\lambda^{\prime \downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - \lambda^{\downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}|$ goes to zero: in the infinite $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ limit, where all of the eigenvalues approach zero, this can happen without the ratios of final to initial eigenvalues approaching unity (and hence without the relative entropy vanishing). One manner of satisfying such a constraint, as evidenced by the construction we proceed with next, is for the ratios of final to initial eigenvalues to go to unity for all but a small number of energy levels, with the population in this exceptional subspace going to zero in the infinite $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ limit (along with the ratios not diverging within said subspace). The natural question that arises here is whether or not it is possible to satisfy these constraints concurrently. (Note that none of the cooling protocols provided throughout this paper use the max-cooling operation, so they do not necessarily serve as examples.) To this end, we now construct a family of machine Hamiltonians $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ of increasing dimension that in the limit $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \rightarrow \infty$ manages to attain both perfect cooling of a maximally mixed qubit and the Landauer limit for the energy cost using the maximal cooling operation discussed above. The form of the Hamiltonian is instructive regarding the complexity requirements for perfect cooling at the Landauer limit. The construction is inspired by the infinite-dimensional Hamiltonian found in Ref.~\cite{Reeb_2014} (Appendix D), therein used to perfectly cool a qubit with energy cost arbitrarily close to the Landauer limit. Their construction already begins with infinitely many machine eigenvalues, as well as infinitely many of them corresponding to diverging energy levels. In the following, we demonstrate that one can arbitrarily closely attain perfect cooling and the Landauer limit with finite-dimensional Hamiltonians, and by taking the limit $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \rightarrow \infty$, recover the result of Ref.~\cite{Reeb_2014}. The Hamiltonian of the machine is $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} := 2^{N+1}$-dimensional,
\begin{align}
H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \sum_{n=0}^N \sum_{j=1}^{2^n} \bigg( n \Delta \ket{n;j}\!\bra{n;j}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \bigg) + N\Delta \ket{N;2^N\!\!+\!\!1}\!\bra{N;2^N\!\!+\!\!1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} .
\end{align}
Here, each energy eigenvalue labelled by $n$ is $2^n$-fold degenerate.
Thus the ground state is unique, the first excited state is twofold degenerate, the second excited state fourfold degenerate, and so on, with the degeneracy doubling every energy level. In order to make the Hamiltonian of even dimensionality for convenience, we add an extra degenerate state to the final level [which makes this level $(2^N+1)$-fold degenerate]. Also note that the Hamiltonian is equally spaced with energy gap $\Delta$. In the following, we use the index $n$ to denote any one of the degenerate states in the $n^{\textup{th}}$ energy level from $n=0$ to $n=N$, and the index $i$ to denote individual energy eigenstates from $i=1$ to $i=2^{N+1}$ (note that in contrast to the previous section, we are here beginning with $i=1$ in order to simplify some future notation). With these indices, the eigenvalues are related by \begin{alignat}{2} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} &= e^{-\beta \Delta} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} \qquad &&\forall \; i \in \{ 2, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - 1\}, \label{eq:rwcooling1}\\ \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow n} &= e^{-\beta \Delta} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow n-1} \qquad &&\forall \; n \in \{ 1, \hdots, N\}. \label{eq:rwcooling2} \end{alignat} \noindent We introduce a parameter $\epsilon$ to express the Gibbs ratio as \begin{align} e^{-\beta \Delta} &= \frac{1 - \epsilon}{2}, \end{align} where $0<\epsilon<1$, and we eventually take the limit $\epsilon \rightarrow 0$ appropriately as the dimension diverges. Note that this constrains the Gibbs ratio to be smaller than $\tfrac{1}{2}$, which in turn ensures that the total population over all of the degenerate eigenstates in the $n^{\textup{th}}$ level is smaller than that in the $(n-1)^{\textup{th}}$ level (as it has twice the number of eigenstates, but less than half the population in each). If this constraint failed to hold, then in the asymptotic limit, all of the population would lie in energy levels that diverge. We now consider using this machine to cool a maximally mixed qubit target. The final ground-state population of the qubit under the maximal cooling operation is the sum over the larger half of the eigenvalues of the machine, corresponding to the eigenvalues from $i=1$ to $i=2^N$ (equivalently, from $n=0$ to $n=N-1$ plus a single eigenvalue from the $n=N$ energy level), and is thus given by \begin{align} p_0^\prime &= \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}} \left( \sum_{n=0}^{N-1} 2^n \left( \frac{1-\epsilon}{2} \right)^n + \left( \frac{1-\epsilon}{2} \right)^N \right), \\ \text{where} \quad \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \sum_{n=0}^{N} 2^n \left( \frac{1-\epsilon}{2} \right)^n + \left( \frac{1-\epsilon}{2} \right)^N \noindent \end{align} is the partition function of the machine. The geometric series above evaluates to \begin{align} p_0^\prime &= \left( 1 + \frac{\epsilon (1 - \epsilon)^N}{1 - (1-\epsilon)^N + \epsilon (1-\epsilon)^N 2^{-N}} \right)^{-1}. \end{align} As an ansatz, supposing that $\epsilon$ scales inversely with $N$ as $\epsilon := \tfrac{\theta}{N}$ leads to the simplification $(1-\epsilon)^N \rightarrow e^{-\theta}$ as $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ (and hence $N$) diverges. 
The asymptotic behaviour of the ground-state population is thus \begin{align} p_0^\prime &= 1 - \frac{1}{N} \left( \frac{\theta}{e^{\theta} - 1} \right) + O \left( \frac{1}{N^2} \right),\label{eq:asymptoticgroundpop} \end{align} and so $p_0^\prime \rightarrow 1$ in the $N \rightarrow \infty$ limit. We now move to calculate the energy cost. Rather than considering the optimal max-cooling operation described above, we consider a slight modification in order to make the connection to the construction in Ref.~\cite{Reeb_2014} clear as well as to simplify notation. Nonetheless, the energy cost of this modified protocol upper bounds that of the max-cooling operation (for the same achieved ground-state population), and so showing that the Landauer limit is attained for the modified protocol implies that it would be too for the max-cooling protocol. The modification is simply to relabel the smallest eigenvalue of the machine $\lambda^{ 2^{N+1}}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ as $\lambda^{0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, and treat it as the ground-state eigenvalue in the max-cooling operation. For general machine states, such a switch would lead to less cooling (if the same unitary were applied), but in this case it does not because the sum of the first half of the machine eigenvalues, from $i=0$ to $i=2^N-1$, is the same as the original sum from $i=1$ to $i=2^N$, due to the relabelling $\lambda_0 = \lambda_{2^N}$, since they are both eigenvalues of states corresponding the maximum excited energy level of the machine spectrum. The spectrum of the final state of the machine is then given by \begin{align} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\prime\downarrow i} &= \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor + \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}} \right) \qquad \forall \; i \in \{ 0, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - 1\}, \end{align} which leads to \begin{align} \lambda^{\prime \downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \frac{1}{2} \left( \lambda^{\downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \lambda^{\downarrow 2^N}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) = \lambda^{\downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}, \qquad \lambda^{\prime \downarrow 1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \frac{1}{2} \left( \lambda^{\downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \lambda^{\downarrow 2^N}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) = \lambda^{\downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}, \notag \\ \lambda^{\prime \downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \frac{1}{2} \left( \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor} + \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow \lfloor \frac{i}{2} \rfloor + \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}} \right) \qquad \forall \; i \in \{ 2, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - 1\} \notag \\ &= \frac{1}{2} \left( \frac{2}{1-\epsilon} \lambda^{\downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \lambda^{n=N}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) = \frac{1}{2} \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}} \left[ \left( \frac{1-\epsilon}{2} \right)^{n-1} + \left( \frac{1-\epsilon}{2} \right)^N \right], \end{align} where we observe that the index $\lfloor \tfrac{i}{2} 
\rfloor + \tfrac{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{2}$ corresponds to the largest energy level of the machine for all $i$, and we use Eq.~\eqref{eq:rwcooling1} for the spectrum of initial eigenvalues. Using the index $n$ instead to denote a generic eigenvalue of the $n^{\textup{th}}$ energy level, we have the simpler expression \begin{align} \lambda^{\prime \downarrow n}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \frac{1}{2} \left( \lambda^{\downarrow n-1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \lambda^{\downarrow N}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right), \qquad \forall \; n \in \{1,2, \hdots, N\}. \end{align} The energy cost can now be simply calculated from the difference in the average energy of the machine state, \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \sum_{i=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - 1} \left( \lambda^{\prime \downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - \lambda^{\downarrow i}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) \omega_i, \end{align} where we denote the $i^\textup{th}$ energy eigenvalue by $\omega_i$. $\lambda^{\downarrow 0}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ is unchanged, and although $\lambda^{\downarrow 1}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ does change, $\omega_1=0$ corresponds to the ground state and thus this eigenvalue change does not affect the energy cost. We can thus express the energy cost in terms of the index $n$ instead, starting from $n=1$ (corresponding to $i=2$ onward), as \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \sum_{n=1}^{N} \left( \lambda^{\prime \downarrow n}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} - \lambda^{ \downarrow n}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right) \omega_n = \frac{1}{\beta} \left[ 1 - \frac{2 (1-\epsilon)^N}{1 - (1-\epsilon)^N + (1 - 2^{-N}) (1-\epsilon)^N \epsilon} \right] \log \left( \frac{2}{1-\epsilon} \right). \end{align} As we did above, we parameterise $\epsilon = \tfrac{\theta}{N}$. The asymptotic behaviour of the energy cost is then \begin{align} \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \log (2) + \frac{1}{N} \left( 1 - \frac{2 \log (2)}{e^{\theta} - 1} \right) \theta + O \left( \frac{1}{N^2} \right),\label{eq:asymptoticworkcost} \end{align} or in terms of the decrease in entropy of the system, \begin{align} \beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= \tilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} + \frac{\log N}{N} \left( \frac{\theta}{e^{\theta} - 1} \right) + O \left( \frac{1}{N} \right). \end{align} Combining \eqref{eq:asymptoticgroundpop} and \eqref{eq:asymptoticworkcost}, we have that in the limit $N \rightarrow \infty$, which is also $d_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \rightarrow \infty$, the ground-state population approaches $1$---corresponding to perfect cooling---and the energy cost approaches $\beta^{-1} \log (2)$, which is the Landauer limit for the perfect erasure of a maximally mixed qubit. 
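The convergence of this construction can also be checked directly. The following numerical sketch (in Python; the values of $N$ and $\theta$ are arbitrary illustrative choices, and for simplicity it implements the optimal max-cooling operation directly rather than the relabelled variant used above) builds the $2^{N+1}$-dimensional machine, cools a maximally mixed qubit, and compares the resulting ground-state population and energy cost with the Landauer value $\log(2)$:
\begin{verbatim}
import numpy as np

def cool_maximally_mixed_qubit(N, theta, beta=1.0):
    """Max-cooling of a maximally mixed qubit with the 2^(N+1)-dimensional machine
    whose n-th level has energy n*Delta and degeneracy 2^n (2^N + 1 for the top
    level), with the Gibbs ratio e^{-beta*Delta} = (1 - eps)/2 and eps = theta/N."""
    eps = theta / N
    Delta = np.log(2.0 / (1.0 - eps)) / beta
    degs = [2**n for n in range(N)] + [2**N + 1]          # level degeneracies
    omega = np.repeat(np.arange(N + 1) * Delta, degs)     # eigenstate energies
    lam = np.exp(-beta * omega)
    lam /= lam.sum()                                       # thermal populations
    dM = lam.size                                          # = 2**(N+1)

    joint = np.sort(np.concatenate([lam, lam]) / 2.0)[::-1]   # maximally mixed target
    ground, excited = joint[:dM], joint[dM:]                  # globally passive blocks
    p0 = ground.sum()                                         # final ground population
    lam_final = ground + excited                              # final machine populations
    dE = np.dot(lam_final - lam, omega)                       # energy cost
    return p0, beta * dE

for N in (8, 12, 16):
    p0, cost = cool_maximally_mixed_qubit(N, theta=1.0)
    print(f"N={N:2d}  p0'={p0:.6f}  beta*dE={cost:.6f}  (Landauer: {np.log(2):.6f})")
\end{verbatim}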
To connect this construction to the constraints of Eq.~\eqref{eq:maingenerictermeigenvalue}, note that in the limit $N\rightarrow \infty$ (recalling that $\epsilon = \frac{\theta}{N}$),
\begin{align}
\lim_{N\rightarrow \infty} \frac{\lambda^{\prime \downarrow n}}{\lambda^{\downarrow n}} &= \lim_{N\rightarrow \infty} \frac{ \frac{1}{2} \left[ \left( \frac{1-\epsilon}{2} \right)^{n-1} + \left( \frac{1-\epsilon}{2} \right)^N \right]}{\left( \frac{1-\epsilon}{2} \right)^n} = \lim_{N \rightarrow \infty} \left[ \frac{1}{1-\epsilon} + \frac{1}{2^{N-n+1}} \left( \frac{e^{-\theta}}{(1-\epsilon)^n} \right) \right] = 1,
\end{align}
for all $n \geq 1$, leaving only the ground-state eigenvalue (corresponding to $n=0$ and $i=1$) not satisfying the condition. However, this term is actually a negative contribution to the relative entropy as this eigenvalue decreases, and in any case can be verified independently to approach zero. To see this, note that a necessary condition that ensures the contribution of any set of eigenvalues that do not satisfy Eq.~\eqref{eq:maingenerictermeigenvalue} to the relative entropy to be negligible is that the total population of the relevant subspace is vanishingly small. Writing the relative entropy between two states in terms of their eigenvalues, we have $D(\varrho^\prime\|\varrho) = \sum_{n} \lambda^\prime_n \log\left( \tfrac{\lambda^\prime_n}{\lambda_n}\right)$, which we split up into two sets: $S_{0}$ containing all $n$ for which Eq.~\eqref{eq:maingenerictermeigenvalue} is satisfied and $S_{\pm}$ containing all $n$ for which Eq.~\eqref{eq:maingenerictermeigenvalue} is not satisfied. The contribution of the first set to the relative entropy is asymptotically zero, so we are left with $D(\varrho^\prime\|\varrho) = \sum_{n\in S_{\pm}} \lambda^\prime_n \log\left( \tfrac{\lambda^\prime_n}{\lambda_n}\right)$. For each term in the sum here, one can write $\lambda_n = \lambda^\prime_n(1+\Delta_n)$ with the condition $|\Delta_n| \geq \theta > 0$ for some $\theta$, i.e., the ratio of eigenvalues is bounded away from unity (on either side) by at least $\theta$. This leads to the expression
\begin{align}\label{eq:appdrelativeentropydeviation-start}
D(\varrho^\prime\|\varrho) = - \sum_{n\in S_{\pm}} \lambda^\prime_n \log(1+\Delta_n) = - N_{\pm} \sum_{n\in S_{\pm}} p_n \log(1+\Delta_n),
\end{align}
where we renormalise the eigenvalues (which here correspond to a subnormalised probability distribution) by writing $\lambda_n^\prime = N_{\pm} p_n$, with $N_{\pm} := \sum_{n\in S_{\pm}} \lambda^\prime_n$ being the total population of the subspace $S_{\pm}$ and $\{p_n\}$ here forming a probability distribution. Note that the ratio of eigenvalues going to unity in the $S_0$ subspace implies that the total populations of initial and final eigenvalues in this subspace are equal, i.e., $\sum_{n \in S_0} \lambda_n = \sum_{n \in S_0} \lambda_n^\prime$, which in turn implies that the same is true for the $S_{\pm}$ subspace, leading to $\sum_{n \in S_{\pm}} p_n \Delta_n = 0$. We argue from the concavity of the logarithm function that
\begin{align}
\tfrac{1}{2} \log(1+\theta) + \tfrac{1}{2} \log(1-\theta) \geq \sum_{n\in S_{\pm}} p_n \log(1+\Delta_n).
\end{align}
Visualising the graph of the function $y=\log(1+x)$, the latter expression above must evaluate to a point that lies within the intersection of the convex hull of $(\Delta_n, \log(1+\Delta_n) )$ and the linear equality $\sum_{n \in S_{\pm}} p_n \Delta_n = 0$, the latter of which is the line $x=0$.
By the concavity of the logarithm, the aforementioned convex hull lies entirely below the line segment connecting $(1-\theta, \log(1-\theta))$ to $(1+\theta, \log(1+\theta))$, and thus the expression is upper bounded by the intersection of this line segment with $x=0$, which is precisely the l.h.s. of the inequality above. Thus we have the inequality \begin{align}\label{eq:appdrelativeentropydeviation-end} D(\varrho^\prime\|\varrho) \geq -N_{\pm} \left[ \tfrac{1}{2} \log(1+\theta) + \tfrac{1}{2} \log(1-\theta) \right] = -\frac{N_{\pm}}{2}\log(1-\theta^2) \geq \frac{N_{\pm}}{2} \theta^2, \end{align} where we use $\log(1-\theta^2) \leq - \theta^2$ for all $\theta \in [-1,1]$. As $\theta > 0$, the only way that this contribution to the relative entropy by the eigenvalues that do not satisfy Eq.~\eqref{eq:maingenerictermeigenvalue} can be asymptotically negligible is if the total population of their associated subspace $N_{\pm}$ goes to zero. Finally note that, as mentioned in the main text, the above result pertains to the restricted setting where the target system is cooled as much as possible. However, this is not the only way to approach perfect cooling at the Landauer cost: instead of the largest half of global eigenvalues being placed into the ground-state subspace of the target system, any amount of them such that their sum is sufficiently close to one would suffice. Although it is complicated to derive an exact set of conditions that would need to be satisfied in such cases (since it depends upon exactly which eigenvalues are permuted to which subspaces), the fact that fine-tuned control over particular degrees of freedom is required remains. Lastly, note that even in the restricted setting of cooling the target as much as possible, the situation becomes even more complicated when considering target systems that begin at a finite temperature. Here, the choice of which global eigenvalues should be permuted to which subspaces to cool the system as much as possible at minimal energy cost depends on the microscopic structure of both the system and machine. This means that one can no longer determine the final eigenvalue distributions of the reduced states in terms of the initial machine eigenvalues alone, as we were able to do for the maximally mixed state. In turn, one can no longer derive a condition on properties of the machine itself, independently of the target system. Nonetheless, again, the key message that cooling at minimal energy cost requires fine-tuned control to access precisely distributed populations still holds true. We leave the further exploration of such scenarios, for instance constructing optimal machines for particular initial target systems, to future work. \subsection{Energy-Gap Variety as a Notion of Control Complexity} \label{app:energygapvariety} The insights drawn above regarding sufficient conditions for cooling a system at the Landauer limit lead us to propose a more nuanced notion of control complexity than the preliminary effective dimension that satisfies the natural desiderata outlined in the main text. In particular, here we demonstrate that the \emph{energy-gap variety} (see Definition~\ref{def:energygapvariety}) provides a good measure of control complexity, both from a theoretical, thermodynamic standpoint as well as a practical one. 
Firstly, it is quite clear that coupling a system to a diverging number of distinct machine energy gaps is a difficult task to achieve in almost any conceivable physical platform, especially when the energy gaps are closely spaced; thus, this definition indeed corresponds to our intuitive understanding of ``complex'' as an operation that is inherently difficult to perform in practice. Secondly, from all of the optimal cooling protocols that we outline in this paper, we see that, in contrast to the effective dimension, having a diverging energy-gap variety that densely covers an appropriate interval is sufficient for saturating the Landauer limit, thereby making it a better quantifier of control complexity. The remaining point is to show that its divergence is necessary to cool a system to the ground state using a single control operation with energy cost saturating the Landauer bound, so that it is fully consistent also with Nernst's unattainability principle. We argue that this is indeed the case below by proving Theorem~\ref{thm:energygapvariety-main}. \textbf{Proof:} First of all, note that how cold the final system state can be made is bounded by the inequality:
\begin{align}\label{eq:appgeneralpuritybound-2}
\lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) \geq e^{-\beta\, \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}}} \lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}),
\end{align}
where $\lambda_{\textup{min}}$ denotes the minimal eigenvalue. For a pure final system state, the l.h.s. of the above equation goes to 0; thus, for any nontrivial initial system state [i.e., such that $\lambda_{\textup{min}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})>0$] and finite temperature $\beta < \infty$, we must have $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}^{\textup{max}} \to \infty$. This determines the upper limit of the required interval of energy gaps. The lower limit of the required interval comes from the fact that the only subspaces of the machine that are relevant for cooling the target system are those associated to energy gaps that are at least as large as the smallest energy gap of the target, $\omega_0$~\cite{Clivaz_2019E}. Next, recall the equality form of the Landauer limit, which holds true for \emph{any} global unitary transformation with a thermal machine:
\begin{align}\label{eq:landauerequality-appd}
\beta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} = \widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + I(\mathcal{S}: \mathcal{M})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime} + D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) .
\end{align}
Cooling the target system to a pure state necessitates that the final system and machine are uncorrelated and we therefore have $I(\mathcal{S}: \mathcal{M})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime} = 0$ for the optimal process. We thus need to focus on minimising the relative-entropy term, which we do in the following steps. Consider for simplicity the target system to be a qubit initially in the maximally mixed state. A generic cooling machine should be able to cool \emph{any} system state, including the maximally mixed one; therefore the following insights pertaining to this special case apply generically. In this case, the initial joint spectrum of the system and machine is given by Eq.~\eqref{eq:maximallymixedglobalspectrum}.
Cooling the target system to the ground state necessitates taking a subset $\mathcal{A}$ of these global eigenvalues such that $\sum_{i \in \mathcal{A}} \lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} = 1 - \epsilon$ for arbitrarily small $\epsilon$ and placing these into the ground-state subspace of the target, with the remaining (small) $\epsilon$ amount of population contributing only to any higher-energy eigenstates [this is essentially a generalisation of the conditions put forth in Eqs. \eqref{eq:maineigenvaluesum}, accounting for an arbitrarily small cooling error]. As discussed previously, there are many possible ways to achieve such a configuration, but there is a \emph{unique} one that minimises the total energy cost of doing so: namely, that in which the reduced final state of the machine is rendered passive. This is because if one compares two protocols achieving the same cooling for the target system, one in which the final machine is passive and any other in which it is not, then the former protocol has the smaller energy cost since a positive amount of energy can be (unitarily) extracted from the latter machine in order to render it passive. Thus, for any protocol saturating the Landauer limit, the final machine state must be arbitrarily close to a passive state, which implies that it must be approximately diagonal in the local machine energy eigenbasis with the globally ordered spectrum as per Eq.~\eqref{eq:appdgloballyorderedfinalspectrum}. Moreover, in order to minimise the relative-entropy term and therefore saturate the Landauer limit, the final machine state must be arbitrarily close to the initial (i.e., thermal) machine state; following the argumentation put forth in the previous Appendix, this leads to the set of conditions outlined in Eq.~\eqref{eq:lemmagenerictermeigenvalue}, which must be satisfied up to arbitrary precision. Since we have the exact relationship between the initial and final machine eigenvalues, the contribution to the energy cost from the relative-entropy term can be calculated explicitly, i.e., $D(\varrho^\prime\|\varrho) = \sum_{n} \lambda^\prime_n \log\left( \tfrac{\lambda^\prime_n}{\lambda_n}\right)$. Following the argumentation from Eq.~\eqref{eq:appdrelativeentropydeviation-start} until Eq.~\eqref{eq:appdrelativeentropydeviation-end} in the previous appendix, we see that by assuming a finite deviation from \emph{any} of the conditions of Eq.~\eqref{eq:lemmagenerictermeigenvalue}, i.e., writing $\lambda_n^\prime = \lambda_n(1+\Delta_n)$ with $|\Delta_n| \geq \theta > 0$ for some $\theta$, one can derive a lower bound on the relative entropy: \begin{align}\label{eq:appdlowerboundrelativeentropysimple} D(\varrho^\prime\|\varrho) \geq \frac{N_{\pm}}{2} \theta^2, \end{align} where $N_{\pm}$ is the total population of the subspaces corresponding to the terms in the sum such that $\tfrac{\lambda^\prime_n}{\lambda_n}$ differs from unity by at least $\theta$. In other words, these are the relevant additional contributions to the energy cost; whenever $N_{\pm}$ is nonzero, the Landauer bound \emph{cannot} be approached arbitrarily closely. The final piece is to relate the machine eigenvalues to its energy-gap spectrum, which can be done straightforwardly due to the initial thermality of the machine, i.e., $\lambda_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\downarrow i} = e^{-\beta \omega_i}/\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$. 
We now argue that if there is ever a finite ``jump'' in the energy-gap structure of the machine, then one cannot achieve both a ground-state population of the target that is arbitrarily close to unity \emph{and} have $N_{\pm}$ be arbitrarily close to zero concurrently. Suppose now that one has a machine with a dense energy-gap structure from $\omega_0$ up until some (finite) $\omega_a$, followed by a finite jump until the energy level $\omega_a + \Omega$ (for some strictly finite $\Omega>0$), and then again a dense set of energy gaps throughout the interval $[\omega_a + \Omega, \infty)$. Then, one can utilise the energy-gap structure in the ``lower band'' $[\omega_0,\omega_a)$ in an optimal fashion in order to cool the target system to a minimum temperature (set by $\omega_a$) at Landauer energy cost~\cite{Clivaz_2019L,Clivaz_2019E}. However, assuming that the jump in the energy-gap structure begins at some finite $\omega_a$, then there is always a finite amount of population in the machine that is supported on the energy levels corresponding to the ``upper band'' $[\omega_a + \Omega, \infty)$. To cool the target to arbitrarily close to the ground state, one must therefore access this population and transfer it to the ground-state subspace of the target; this precisely corresponds to the $N_{\pm}$ that contributes to the excess energy cost in a non-negligible manner for finite population exchanges. In particular, we have the bound $N_\pm \geq \textup{min}({\frac{e^{-\beta \omega_a}}{1+e^{-\beta \omega_a}},\frac{1}{1+e^{-\beta \omega_a}}})$. Thus, whenever $\omega_a$ takes a finite value, $N_\pm$ is a strictly positive number. The only way that the relative-entropy term can vanish then is if $\theta$ vanishes; however, this can occur only if $\Omega \to 0$, because for any finite $\Omega$, the ratio $\frac{\lambda^\prime_n}{\lambda_n}$ for at least one value of $n$ differs from 1 by a finite amount as argued above, which finally leads to a nonzero lower bound in Eq.~\eqref{eq:appdlowerboundrelativeentropysimple} and implies that the Landauer limit cannot be saturated. In other words, the endpoints of the lower and upper energy gap intervals considered above must coincide (up to arbitrary precision) in order to saturate the Landauer bound. This implies that the energy-gap variety must diverge and moreover, since the above logic holds for arbitrary $\omega_\alpha$, which can be smoothly varied as a parameter, it follows that the diverging number of energy gaps must additionally approximately densely cover the interval in question. \section{Diverging Time and Diverging Control Complexity Cooling Protocols for Harmonic Oscillators} \label{app:harmonicoscillators} We now analyse the case of cooling infinite-dimensional quantum systems in detail. More specifically, we consider ensembles of harmonic oscillators. For the sake of completeness, we first briefly present some key concepts that will become relevant throughout this analysis. Following this, in Appendix~\ref{app:hodivergingtimegaussian}, we construct a protocol that achieves perfect cooling at the Landauer limit using a diverging number of Gaussian operations. 
Although such operations are typically considered to be relatively ``simple'' both when it comes to experimental implementation and theoretical description, according to the effective dimension notion of control complexity that we have shown must necessarily diverge to cool at the Landauer limit [see Eq.~\eqref{eq:maineffectivedimension}], such Gaussian operations have infinite control complexity. Subsequently, in Appendix~\ref{app:hodivergingtimenongaussian}, we consider the task of perfect cooling with diverging time but restricting the individual operations to be of finite control complexity. In particular, note that such operations are non-Gaussian in general. Here, we present a protocol that approaches perfect cooling of the target system as the number of operations diverges, with finite energy cost---albeit not at the Landauer limit. Whether or not a similar protocol exists that also saturates the Landauer bound remains an open question. Finally, in Appendix~\ref{app:hodivergingcontrolcomplexitygaussian}, we reconsider the protocol from Appendix~\ref{app:hodivergingtimenongaussian} in terms of a single transformation, i.e., unit time. By explicitly constructing the joint unitary transformation that is applied throughout the entire protocol, we show this to be a multimode Gaussian operation acting on a diverging number of harmonic oscillators. The key message to be taken away from these protocols is that, while the distinction between Gaussian and non-Gaussian operations is a significant one in terms of experimental feasibility, and it certainly plays a role regarding the task of cooling---in particular, the energy cost incurred---these concepts alone cannot be used to characterise a notion of control complexity that must diverge to approach perfect cooling at the Landauer limit. On the other hand, the effective dimension of the machine used does precisely that; however, in a manner that is far from sufficient (for the case of harmonic oscillators), as even a single two-mode swap, which cannot cool perfectly at Landauer cost, would have infinite control complexity. Indeed, a more nuanced characterisation of control complexity in the infinite-dimensional setting, which takes more structure regarding the operations and energy levels into account, remains an open problem to be addressed. \subsection{Preliminaries} \label{app:hopreliminaries} We consider ensembles of $N$ harmonic oscillators (i.e., infinite-dimensional systems consisting of $N$ bosonic modes), which are associated to a tensor product Hilbert space $\mathcal{H}_{\textup{tot}}=\bigotimes_{j=1}^N \mathcal{H}_{j}$ and (respectively: lowering, raising) mode operators $\{a_k$ , \,$a_k^{\dagger}\}$ satisfying the bosonic commutation relations: \begin{equation} [a_k,a_{k'}^{\dagger}]=\delta_{kk'} , \quad \quad [a_k,a_{k'}]=0, \quad \quad \quad \forall \; k,k'=1,2,\hdots,N. \label{eq:bosonicommrelation} \end{equation} The free Hamiltonian of any such system can be written as $H_{\textup{tot}}=\sum_{k=1}^N \omega_k a_{k}^{\dagger}a_{k}$, where $\omega_{k}$ represents the energy gap of the $k$-th mode (in units where $\hbar=1$). Position- and momentum-like operators for each mode can be defined as follows (for simplicity, we use the rescaled version below where the $\omega_k$ are omitted from the prefactors) \begin{equation} q_k:=\frac{1}{\sqrt{2}}(a_{k}+a_{k}^{\dagger}), \quad \quad p_{k}:=\frac{1}{i \sqrt{2}}(a_{k}-a_{k}^{\dagger}). 
\end{equation} As a consequence of the commutation relations in Eq.~\eqref{eq:bosonicommrelation}, the generalised position and momentum operators satisfy the canonical commutation relations \begin{equation} \left[ q_{k},\,p_l\right]=\, i\delta_{kl}. \end{equation} To simplify notation, one may further introduce the vector of quadrature operators $\mathds{X}:=( q_{1},\, p_{1},\hdots,\, q_{N},\, p_{N})$; then, the commutation relations can be expressed succinctly as \begin{equation} \left[ \mathds{X}_{k},\,\mathds{X}_{l}\right]=\, i \Omega_{kl}, \end{equation} where the $\Omega_{kl} $ are the components of the symplectic form \begin{equation} \Omega =\bigoplus_{j=1}^N \Omega_{j}, \quad \quad \Omega_{j}=\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}. \end{equation} The density operator associated to $N$ harmonic oscillators can be written in the so-called \emph{phase-space representation} as \begin{align} \varrho = \frac{1}{(2 \pi)^N} \int \chi(\Omega \xi) \mathcal{W}(- \Omega \xi) \, d^{2N} \xi, \end{align} where $\mathcal{W}(\xi) := e^{i \xi^{T} \mathds{X}}$ is the Weyl operator and $\chi(\xi) := \tr{\varrho \mathcal{W}(\xi)}$ is called the characteristic function. Throughout our analysis, we see that a particular class of states and operations, namely those that are known as \emph{Gaussian}, are of particular importance. A Gaussian state is one for which the characteristic function is Gaussian \begin{align} \chi(\xi) = e^{-\frac{1}{4} \xi^T \Gamma \xi + i \overline{\mathds{X}}^{T} \xi}. \end{align} Here, $\overline{\mathds{X}} := \left<\mathds{X}\right>_\varrho$ is the \emph{displacement vector} or \emph{vector of first moments}, and $\Gamma$ is a real symmetric matrix that collects the \emph{second statistical moments} of the quadratures, which is known as the \emph{covariance matrix}. Its entries are given by \begin{equation} \Gamma_{mn}:=\left<\mathds{X}_{m}\mathds{X}_{n}+\mathds{X}_{n}\mathds{X}_{m} \right>_\varrho-2\left<\mathds{X}_{n}\right>_\varrho\left<\mathds{X}_{m} \right>_\varrho. \end{equation} We see that any Gaussian state is thus uniquely determined by its first and second moments. As an example of specific interest here, we recall that any thermal state $\tau$ of a harmonic oscillator with frequency $\omega$ is a Gaussian state and has vanishing first moments, $\overline{\mathds{X}}=0$. Here and throughout this article, we are assuming that the infinite-dimensional thermal state is well defined (see, e.g., Ref.~\cite{Thirring_2002} for discussion). The covariance matrix of a thermal state is proportional to the $2\times2$ identity, and given by $ \Gamma[\tau(\beta, H)]= \coth{\left(\frac{\beta \omega}{2}\right)}\,\openone_{2}$. Gaussian operations are transformations that map the set of Gaussian states onto itself. Such operations, which include, e.g., beam-splitting and phase-space displacement, are generally considered to be relatively easily implementable in the laboratory. Although nonunitary Gaussian operations exist as well, all of the examples mentioned above are Gaussian unitaries. Such Gaussian unitaries are generated by Hamiltonians that are at most quadratic in the raising and lowering operators. Conversely, any Hamiltonian that can be expressed as a polynomial of at most second order in the mode operators generates a Gaussian unitary. 
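As a brief numerical illustration of the preliminaries above (not part of the original derivation), the following short Python sketch checks, for an arbitrarily chosen value of $\beta\omega$, that the thermal covariance matrix $\coth(\beta\omega/2)\,\openone_{2}$ coincides with the familiar expression $(2\langle n\rangle+1)\,\openone_{2}$ in terms of the mean occupation number $\langle n\rangle=(e^{\beta\omega}-1)^{-1}$:
\begin{verbatim}
# Minimal sketch: thermal covariance matrix of a single mode.
# The value of beta*omega below is an arbitrary illustrative choice.
import numpy as np

beta_w = 1.3                                        # beta * omega (example value)
n_bar = 1.0 / np.expm1(beta_w)                      # mean occupation 1/(e^{beta w} - 1)
Gamma = (1.0 / np.tanh(beta_w / 2.0)) * np.eye(2)   # coth(beta*omega/2) * identity
assert np.allclose(Gamma, (2.0 * n_bar + 1.0) * np.eye(2))
print(Gamma[0, 0])                                  # = 2<n> + 1
\end{verbatim}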
Any unitary Gaussian transformation can be represented by an affine map $(M, \kappa)$, \begin{equation} \mathds{X} \mapsto M \mathds{X}+\kappa , \end{equation} where $\kappa \in \mathds{R}^{2N}$ is a displacement vector in the phase-space representation and $M$ is a symplectic $2N \times 2N$ matrix that leaves the symplectic form $\Omega$ invariant, i.e., \begin{equation} M \, \Omega \, M^T= \Omega. \end{equation} Under such a mapping, the first and second moments transform according to \begin{align}\label{eq:gaussianaffinemoments} \overline{\mathds{X}} \mapsto M \overline{\mathds{X}} + \kappa, \quad \quad \Gamma \mapsto M \Gamma M^T. \end{align} Lastly, note that the energy of a Gaussian state $\varrho_G$ with respect to its free Hamiltonian $H = \sum_k \omega_k a_k^\dagger a_k$ can be calculated in terms of the first and second moments as follows~\cite{Friis2018} \begin{equation} E(\varrho_{G})=\sum_{k} \omega_{k}\left(\frac{1}{4}\tr{\Gamma^{(k)}-\,2}\, +\frac{1}{2}\,||\overline{\mathds{X}}^{(k)}||^2\right), \label{eq:gaussianenergy} \end{equation} where $\| \cdot \|$ denotes the Euclidean norm. Here, $\Gamma^{(k)}$ indicates the $(2 \times 2)$ submatrix of the full covariance matrix $\Gamma$ corresponding to the reduced state of the $k^{\textup{th}}$ mode. Similarly $\overline{\mathds{X}}^{(k)}$ denotes the two-component subvector of first moments for the $k^{\textup{th}}$ mode of the displacement vector $\overline{\mathds{X}}$.\\ \subsection{Diverging-Time Cooling Protocol for Harmonic Oscillators} \label{app:hodivergingtime} \subsubsection{Diverging-Time Protocol using Gaussian Operations (with Diverging Control Complexity)} \label{app:hodivergingtimegaussian} We now consider a simple protocol for lowering the temperature of a single-mode system within the coherent-control paradigm using a single harmonic oscillator machine. This protocol will form the basic step of a protocol for achieving perfect cooling at the Landauer limit using diverging time, which we subsequently present. In the situation we consider here, the target system $\mathcal{S}$ to be cooled is a harmonic oscillator with frequency $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ interacting with a harmonic oscillator machine $\mathcal{M}$ at frequency $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} \geq \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ via a (non-energy-conserving) unitary acting on the joint system $\mathcal{S}\mathcal{M}$ initialised as a tensor product of thermal states $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) $ at inverse temperature $\beta$ with respect to their local Hamiltonians $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and $H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, respectively. 
The joint covariance matrix of the system and machine modes is block diagonal since the initial state is of product form, i.e., \begin{equation} \Gamma[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})]= \Gamma[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}) ]\oplus \Gamma[ \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})], \end{equation} and the $2\times2$ blocks of the individual modes are also diagonal, with the explicit expression $ \Gamma[\tau_{X}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}})]= \coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}}{2}\right)}\,\openone_{2}$. In this setting, it has been shown that the minimum reachable temperature of the target system is given by $T_{\textup{min}}=\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}\,T$ (for the case $ \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} \geq \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$)~\cite{Clivaz_2019E}. The non-energy-conserving unitary transformation that achieves this is of the form \begin{equation} \label{eq:swapwithi} U=e^{-i\frac{\pi}{2}( a^{\dagger} b+\, a b^{\dagger})}, \end{equation} where the operators $a~(a^{\dagger})$ and $b~(b^{\dagger})$ denote the annihilation (creation) operators of the target system and machine, respectively. This beam-splitter-like unitary acts as a \texttt{SWAP} with a relative phase factor imparted on the resultant state; nonetheless, this phase is irrelevant at the level of the covariance matrix, which fully characterises the (Gaussian) thermal states considered, and transforms it according to a standard swapping of the systems. After acting with such a \texttt{SWAP} operator, which is a Gaussian operation, the first moment remains vanishing and the covariance matrix transforms as [see Eq.~\eqref{eq:gaussianaffinemoments}] \begin{align} \begin{bmatrix} \coth{\left( \frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2} \right)\,\openone_2}& 0\\ 0& \coth{\left( \frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}{2} \right)} \,\openone_2 \end{bmatrix}\, \overset{\textup{SWAP}}{\longmapsto }\, \begin{bmatrix} \coth{\left( \frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}{2} \right)} \,\openone_2& 0\\ 0& \coth{\left( \frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2} \right)\,\openone_2} \label{cov. transformation} \end{bmatrix}. \end{align} This means that both the output target system and machine are thermal states at different temperatures $T'_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}=\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}\,T $ and $T'_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}=\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}\,T$. 
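To make the covariance-matrix bookkeeping above concrete, the following minimal numerical sketch (in Python, with illustrative frequencies and inverse temperature that are not fixed by the text) verifies, via the transformation rule of Eq.~\eqref{eq:gaussianaffinemoments}, that the block-swap matrix is symplectic and that the target mode is indeed left at the rescaled temperature $T^\prime_{\mathcal{S}}=\tfrac{\omega_{\mathcal{S}}}{\omega_{\mathcal{M}}}\,T$:
\begin{verbatim}
# Minimal sketch: symplectic SWAP of two thermal modes (illustrative parameters).
import numpy as np

w_S, w_M, beta = 1.0, 3.0, 0.7                     # example frequencies and inverse temperature
cov = lambda w: (1.0 / np.tanh(beta * w / 2.0)) * np.eye(2)   # coth(beta*w/2) * I_2
Z = np.zeros((2, 2))
Gamma = np.block([[cov(w_S), Z], [Z, cov(w_M)]])   # initial product-state covariance

M = np.block([[Z, np.eye(2)], [np.eye(2), Z]])     # exchanges the two quadrature blocks
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.block([[J, Z], [Z, J]])
assert np.allclose(M @ Omega @ M.T, Omega)         # M is symplectic

Gamma_out = M @ Gamma @ M.T                        # second moments after the swap
arccoth = lambda y: 0.5 * np.log((y + 1.0) / (y - 1.0))
T_S_out = w_S / (2.0 * arccoth(Gamma_out[0, 0]))   # effective temperature of the target
print(T_S_out, (w_S / w_M) / beta)                 # both ~0.476 for these parameters
\end{verbatim}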
Making use of Eq.~\eqref{eq:gaussianenergy}, we can calculate the energy change for the system and machine as \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} &= E\left[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\left(\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\right) \right]-\, E\left[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})\right]= \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\,\left[\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} }{2}\right)}-\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} }{2}\right)}\right],\nonumber\\ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &= E\left[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}\left(\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \right)\right]-\, E\left[\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})\right]= \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}}{2}\,\left[\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} }{2}\right)}-\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} }{2}\right)}\right]. \label{EnergyExch.2moeds} \end{align} The total energy cost associated to this \texttt{SWAP} operation is thus \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}} \,=\, \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \,=\, \frac{(\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})}{2}\,\left[\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\right)}-\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}} }{2}\right)}\right] \,=\, (\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\,\frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \,(1-e^{-\beta( \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})})}{(1-e^{-\beta\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}})(1-e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}})}. \label{freeEn bosonic modes} \end{align} Note that this form is similar to that for finite-dimensional systems with equally spaced Hamiltonian [cf., Eq.~\eqref{eq:tot energy d}]; the dimension-dependent term vanishes as $d \to \infty$, simplifying the expression in the infinite-dimensional case. With this simple protocol for lowering the temperature of a harmonic oscillator target using a single harmonic oscillator machine at hand, we are now in a position to describe an energy-optimal (in the sense of saturating the Landauer bound) cooling protocol when a diverging number of operations, i.e., diverging time, is permitted. In other words, we now show how to achieve perfect cooling with minimal energy at the expense of requiring diverging time, i.e., infinitely many steps of finite duration. As mentioned above, in this specific protocol, the control complexity as per Eq.~\eqref{eq:maineffectivedimension} is infinite in each of these infinitely many steps. As we argue after having presented the protocol, this is an artefact of the simple structure of the Gaussian operations used. 
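The two equivalent forms of the single-swap energy cost in Eq.~\eqref{freeEn bosonic modes} can also be checked against one another numerically; the short sketch below (with arbitrary example parameters) does precisely that:
\begin{verbatim}
# Minimal sketch: the coth form and the exponential form of the swap cost agree.
import numpy as np

w_S, w_M, beta = 1.0, 2.5, 0.8                      # illustrative parameters
coth = lambda x: 1.0 / np.tanh(x)
lhs = 0.5 * (w_M - w_S) * (coth(beta * w_S / 2) - coth(beta * w_M / 2))
a, b = np.exp(-beta * w_S), np.exp(-beta * w_M)
rhs = (w_M - w_S) * a * (1 - np.exp(-beta * (w_M - w_S))) / ((1 - a) * (1 - b))
assert np.isclose(lhs, rhs)
print(lhs)                                          # energy cost of one swap
\end{verbatim}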
Indeed, we later present a non-Gaussian diverging-time protocol for cooling a single harmonic oscillator to the ground state using finite control complexity in each of the infinitely many steps, and at an overall finite (albeit not minimal, i.e., not at the Landauer limit) energy cost. Before presenting this non-Gaussian protocol, let us now discuss the details of the Gaussian diverging-time protocol for cooling at the Landauer limit. We consider a harmonic oscillator with the frequency $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ as the target system and the machine to comprise $N$ harmonic oscillators, where the $n^{\textup{th}}$ oscillator has frequency $\omega_{M_n}= \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} +\, n\,\epsilon$. In addition, we assume that all modes are initially uncorrelated and in thermal states at the same inverse temperature $\beta$ with respect to their free Hamiltonians, i.e., the target system is $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta,H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ and the multimode thermal machine is $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}) = \bigotimes_{n=1}^N\,\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}})$. In this case, the cooling process is divided into $N$ time steps. During each step, there is an interaction between the target system and one of the harmonic oscillators in the machine. Here, we assume that at the $n^{\textup{th}}$ time step, the target system interacts only with the $n^{\textup{th}}$ harmonic oscillator, which has frequency $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} +\, n\,\epsilon$. To obtain the minimum temperature for the target system, we perform the previously outlined cooling process at each step, which is given by swapping the corresponding two modes. Using Eq.~\eqref{cov. transformation}, the covariance matrix transformation of the two-mode process at the first time step takes the form \begin{align} \Gamma^{(1)}(\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_1$}}(\beta))&= \begin{bmatrix} \coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\right)}\openone_2& 0\\ 0& \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\epsilon)}{2}\right)}\,\openone_2 \end{bmatrix}\, \overset{\textup{SWAP}}{\longmapsto }\, \Gamma^{(1)}_{\textup{opt}}&=\, \begin{bmatrix} \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\epsilon)}{2}\right)}\,\openone_2& 0\\ 0& \coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\right)}\,\openone_2 \label{cov. transformation sm} \end{bmatrix}. \end{align} By repeating this process on each of the harmonic oscillators in the machine, after the $(n-1)^{\textup{th}}$ step, the $2\times 2$ block corresponding to the target system $\mathcal{S}$ in the covariance matrix is given by $\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}\,\openone_2$. 
Therefore, one can show inductively that the covariance matrix transformation associated to the $n^{\textup{th}}$ interaction is given by \begin{align} \Gamma^{(n)}(\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}}(\beta))=& \begin{bmatrix} \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}\openone_2& 0\\ 0& \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}\,\openone_2 \end{bmatrix}\, \notag \\ \overset{\textup{SWAP}}{\longmapsto }\, \Gamma^{(n)}_{\textup{opt}}=&\, \begin{bmatrix} \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}\,\openone_2& 0\\ 0& \coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}\,\openone_2 \label{cov. transformation sm 2} \end{bmatrix}. \end{align} Based on this process, after $N$ steps (i.e., after the system has interacted with all $N$ harmonic oscillators), the minimal achievable temperature of the target system is $T^{(N)}_{\textup{min}} = \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + N \epsilon} T$. Moreover, by using Eq.~(\ref{EnergyExch.2moeds}), one can calculate the energy changes of the target system and the machine at each time step as \begin{align} \Delta E^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}&= \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\,\left[\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}-\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}\right],\notag\\ \Delta E^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}} &= \frac{(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\,\left[\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}-\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}\right]. \label{EnergyExch.2moeds step m} \end{align} The total energy change for the target system during the overall process (i.e., throughout the $N$ steps) is thus given by \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} &=\sum_{n=1}^{N}\Delta E^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \sum_{n=1}^{N}\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\,\left[\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}-\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}\right]\nonumber\\ &=\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\,\left[\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+N\epsilon)}{2}\right)}-\coth{\left(\frac{\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{2}\right)}\right]= \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}\,\left[\frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+N\epsilon)}}{1-e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+N\epsilon)}}-\frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}{1-e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}}\right]. \label{TotEnergyExch S} \end{align} Here, we write $\coth{(x)} = 1+(2e^{-2x})/(1-e^{-2x})$. 
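The telescoping of the stepwise contributions in Eq.~\eqref{TotEnergyExch S} is easily confirmed numerically; the sketch below (with example parameters chosen purely for illustration) sums the per-step energy changes of the target and compares the result to the closed-form expression:
\begin{verbatim}
# Minimal sketch: per-step energy changes of the target telescope to the closed form.
import numpy as np

w_S, beta, eps, N = 1.0, 1.2, 0.05, 200             # illustrative parameters
coth = lambda x: 1.0 / np.tanh(x)
steps = [0.5 * w_S * (coth(beta * (w_S + n * eps) / 2)
                      - coth(beta * (w_S + (n - 1) * eps) / 2)) for n in range(1, N + 1)]
occ = lambda w: np.exp(-beta * w) / (1 - np.exp(-beta * w))   # mean occupation
closed = w_S * (occ(w_S + N * eps) - occ(w_S))
assert np.isclose(sum(steps), closed)
print(closed)                                       # negative: the target is cooled
\end{verbatim}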
Similarly, one can obtain the total energy change of the overall machine \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} &=\sum_{n=1}^{N}\Delta E^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}$_n$}}= \sum_{n=1}^{N}\frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+ n\epsilon}{2}\,\left[\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}{2}\right)}-\coth{\left(\frac{\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}{2}\right)}\right]\nonumber\\ &= \sum_{n=1}^{N}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+ n\epsilon)\,\left[\frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}}{1-e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+(n-1)\epsilon)}}-\frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}}{1-e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon)}}\right]. \label{eqTotEnergyExchMach.} \end{align} It is straightforward to check that the total energy change, i.e., the sum of Eqs.~\eqref{TotEnergyExch S} and \eqref{eqTotEnergyExchMach.}, is equal to the energy cost obtained in Eq.~\eqref{eq:tot energy d} with $d \to \infty$. In particular, this can be seen by considering the second line of Eq.~\eqref{eq:tot energy d}, where the second term in round parenthesis vanishes as $d \to \infty$ for any value of $N$. Thus, when the number of operations diverges $N\to \infty$ and $\epsilon =(\omega_{\textup{max}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})/N \to 0$, where $\omega_{\textup{max}} := \frac{\beta_{\textup{max}}}{\beta}\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ is the maximum frequency of the machines, the heat dissipated by the machines throughout the process saturates the Landauer bound and is therefore energetically optimal. Moreover, by taking $\omega_{\textup{max}} \to \infty$ one approaches perfect cooling. At this point, a comment on the notion of control complexity is in order. According to Eq.~\eqref{eq:maineffectivedimension}, the effective dimension of the machine in the protocol we consider here diverges in addition to time. Indeed, the notion of control complexity thusly defined diverges for \emph{any} Gaussian operation acting on the machine, in particular, it diverges for any single one of the infinitely many steps of the protocol, as each operation is a two-mode Gaussian operation. At first glance, this appears to be in contrast to the common conception that Gaussian operations are typically easily implementable (cf. Refs.~\cite{BrownFriisHuber2016,Friis2018}). However, an alternative way of interpreting this protocol is that, exactly because of the simple structure of Gaussian operations, reaching the ground state at finite energy cost requires a diverging number of two-mode Gaussian unitaries, and thus divergingly many modes on which to act (see also Appendix~\ref{app:hodivergingcontrolcomplexitygaussian}). In fact, if non-Gaussian unitaries are employed, then the ground state can be approached at finite energy cost using just a single harmonic oscillator machine, as we now show. 
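As an illustration of the convergence to the Landauer bound discussed above, the following numerical sketch (with arbitrarily chosen $\omega_{\mathcal{S}}$, $\beta$, and $\omega_{\textup{max}}$) evaluates the heat dissipated into the machines via Eq.~\eqref{eqTotEnergyExchMach.} and shows that the excess of $\beta \Delta E_{\mathcal{M}}$ over the entropy decrease of the target shrinks as the number of steps $N$ grows, for fixed $\omega_{\textup{max}}$:
\begin{verbatim}
# Minimal sketch: excess of the dissipated heat over the Landauer bound vanishes
# as the number of swaps N grows (illustrative parameters).
import numpy as np

w_S, beta, w_max = 1.0, 1.0, 5.0
occ = lambda y: np.exp(-y) / (1.0 - np.exp(-y))         # mean occupation at beta*omega = y
ent = lambda y: y * occ(y) - np.log(1.0 - np.exp(-y))   # entropy of a thermal oscillator
dS = ent(beta * w_S) - ent(beta * w_max)                # entropy decrease of the target

for N in (10, 100, 1000, 10000):
    eps = (w_max - w_S) / N
    w = w_S + eps * np.arange(1, N + 1)                 # machine frequencies w_S + n*eps
    dE_M = np.sum(w * (occ(beta * (w - eps)) - occ(beta * w)))   # dissipated heat
    print(N, beta * dE_M - dS)                          # excess over the Landauer bound -> 0
\end{verbatim}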
\subsubsection{Diverging-Time Protocol using Non-Gaussian Operations (with Finite Control Complexity)} \label{app:hodivergingtimenongaussian} We now consider a protocol for cooling a single harmonic oscillator at frequency $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ to the ground state using a diverging amount of time, but requiring only a finite overall energy input as well as finite control complexity in each of the diverging number of steps of the protocol. In this protocol, the machine $\mathcal{M}$ is also represented by a single harmonic oscillator whose frequency matches that of the target oscillator that is to be cooled, $\omega_{\raisebox{0pt}{\tiny{$\mathcal{M}$}}}=\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}=:\omega$. The initial states of both the target system $\mathcal{S}$ and machine $\mathcal{M}$ are assumed to be thermal at the same inverse temperature $\beta$, and are hence both described by thermal states of the form \begin{align} \tau(\beta) &=\,\frac{ e^{-\beta H}}{\tr{e^{-\beta H}}} \,=\, \sum\limits_{n=0}^{\infty} e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n} (1-e^{-\beta\omega}) \,\ket{n}\!\bra{n} \,=\, \sum\limits_{n=0}^{\infty} p_{n} \,\ket{n}\!\bra{n}{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}, \end{align} where the Hamiltonian $H$ is given by $H=\sum_{n=0}^{\infty}n\omega\,\ket{n}\!\bra{n}$ and the $p_{n}=e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n}(1-e^{-\beta\omega})$ are the eigenvalues of $\tau$. The joint initial state is a product state that we can then write as \begin{align} \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta)\otimes\tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}(\beta) &=\,\sum\limits_{m,n=0}^{\infty} p_{m}p_{n} \,\ket{m}\!\bra{m}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\otimes\ket{n}\!\bra{n}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}} \,=\,\sum\limits_{m,n=0}^{\infty} \tilde{p}_{m+n} \,\ket{m,n}\!\bra{m,n}, \end{align} where we define $\tilde{p}_{k}:=e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} k}(1-e^{-\beta\omega})^{2}$. We then note that the eigenvalues $\tilde{p}_{k}$ of the joint initial state have degeneracy $k+1$. For instance, the largest value $\tilde{p}_{0}=p_{0}p_{0}$, corresponding to both the system and machine being in the ground state, is the single largest eigenvalue, but there are two eigenstates, $\ket{0,1}$ and $\ket{1,0}$, corresponding to the second largest eigenvalue $\tilde{p}_{1}$, three states, $\ket{0,2}$, $\ket{1,1}$, and $\ket{2,0}$ for the third largest eigenvalue $\tilde{p}_{2}$, and so forth. Obviously, not all of these eigenvalues correspond to eigenstates for which the target system is in the ground state. In order to increase the ground-state population of the target system oscillator, we can now apply a sequence of `two-level' unitaries, i.e., unitaries that act only on a subspace spanned by two particular eigenstates and exchange their respective populations. The two-dimensional subspaces are chosen such that one of the two eigenstates corresponds to the system $\mathcal{S}$ being in the ground state, $\ket{0,k}$, while the other eigenstate corresponds to $\mathcal{S}$ being in an excited state, $\ket{i\neq0,j}$. In addition, these pairs of levels are selected such that, at the time the unitary operation is to be performed, the population of $\ket{0,k}$ is smaller than that of $\ket{i\neq0,j}$, such that the two-level exchange increases the ground-state population of $\mathcal{S}$ at each step. 
More specifically, at the $k^\text{th}$ step of this sequence, the joint system $\mathcal{S}\mathcal{M}$ is in the state $\varrho\suptiny{0}{0}{(k)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}$ and one determines the set $\Omega_{k}$ of index pairs $(i\neq 0,j)$ such that $\tilde{p}_{k}<\bra{i,j}\varrho\suptiny{0}{0}{(k)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\ket{i,j}$, i.e., the set of eigenstates for which $\mathcal{S}$ is not in the ground state and which have a larger associated population (at the beginning of the $k^\text{th}$ step) than $\ket{0,k}$. One then determines an index pair $(m_{k},n_{k})$ for which this population is maximal, i.e., $\bra{m_{k},n_{k}}\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\suptiny{0}{0}{(k)}\ket{m_{k},n_{k}}=\max\{\bra{i,j}\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\suptiny{0}{0}{(k)}\ket{i,j}|(i,j)\in \Omega_{k}\}$, and performs the unitary \begin{align} U^{(k)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}} &=\,\openone_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}-\ket{0,k}\!\bra{0,k}-\ket{m_{k},n_{k}}\!\bra{m_{k},n_{k}} +\Bigl( \ket{0,k}\!\bra{m_{k},n_{k}}+ \ket{m_{k},n_{k}}\!\bra{0,k} \Bigr). \end{align} If there is no larger population that is not already in the subspace of the ground state of the target system, i.e., when $\Omega_{k}=\emptyset$, which is only the case for the first step ($k=1$), then no unitary is performed. After the $k^\text{th}$ step, the joint state $\varrho\suptiny{0}{0}{(k+1)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}$ is still diagonal in the energy eigenbasis, and the subspace of the joint Hilbert spaces for which $\mathcal{S}$ is in the ground state is populated with the $k+1$ largest eigenvalues $\tilde{p}_{i}$ in nonincreasing order with respect to nondecreasing energy eigenvalues of the subspace's basis vectors $\ket{0,i}$. That is, for all $i\in\{0,1,2,\ldots,k\}$ and for all $j\in\mathbbm{N}$ with $j>i$, we have $\bra{0,i}\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\suptiny{0}{0}{(k+1)}\ket{0,i}\geq \bra{0,j}\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\suptiny{0}{0}{(k+1)}\ket{0,j}$. Since the Hilbert spaces of both $\mathcal{S}$ and $\mathcal{M}$ are infinite dimensional, we can continue with such a sequence of two-level exchanges indefinitely, starting with $k=1$ and continuing step by step as $k\rightarrow\infty$. Here we note that the choice of $(m_{k},n_{k})$ is generally not unique at the $k$-th step, but as $k\rightarrow\infty$, the resulting final state is independent of the particular choices of $(m_{k},n_{k})$ made along the way. In particular, in a fashion that is reminiscent of the famed Hilbert hotel paradox (see, e.g., Ref.~\cite[p.~17]{Gamow1947}), this sequence places \emph{all} of the infinitely many eigenvalues $\tilde{p}_{k}$ of the joint state of $\mathcal{S}\mathcal{M}$ (which must hence sum to one) into the subspace where $\mathcal{S}$ is in the ground state. In other words, in the limit of infinitely many steps, the population of the ground-state subspace evaluates to \begin{align} \sum\limits_{k=0}^{\infty}(k+1)\tilde{p}_{k}\,=\, \sum\limits_{k=0}^{\infty}(k+1)\,e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} k}\,(1-e^{-\beta\omega})^{2}\,=\,1, \end{align} where we take into account the $(k+1)$-fold degeneracy of the $k^\text{th}$ eigenvalue $\tilde{p}_{k}$. 
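The normalisation used in the last step can also be confirmed by truncating the sum at a large cutoff; a minimal numerical sketch (for an arbitrary value of $\beta\omega$) reads:
\begin{verbatim}
# Minimal sketch: the degeneracy-weighted eigenvalues sum to one.
import numpy as np

beta_w = 0.5                                        # illustrative value of beta*omega
k = np.arange(0, 2000)
p = np.exp(-beta_w * k) * (1.0 - np.exp(-beta_w)) ** 2
print(np.sum((k + 1) * p))                          # -> 1.0 up to truncation error
\end{verbatim}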
We thus have $\lim_{k\rightarrow\infty}\ptr{\mathcal{M}}{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}\mathcal{M}$}}}\suptiny{0}{0}{(k)}}=\ket{0}\!\bra{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, the reduced state of the system is asymptotically the pure state $\ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. As per our requirement on the structural complexity (see Appendix~\ref{app:conditionsstructuralcontrolcomplexity}), the Hilbert space of the machine required to achieve this is infinite-dimensional, and since each step of the protocol is assumed to take a finite amount of time, the overall time for reaching the ground state diverges. At the same time, the control complexity for each individual step is finite, since each $U_{k}$ acts nontrivially only on a two-dimensional subspace. To see that also the energy cost for this protocol is finite, we first note that the protocol results in a final state of the machine that is diagonal in the energy eigenbasis $\ket{n}_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$, with probability weights $\tilde{p}_{k}$ decreasing (but not strictly) with increasing energy. Due to the degeneracy of the eigenvalues $\tilde{p}_{k}$, each one appears $(k+1)$ times on the diagonal (w.r.t. the energy eigenbasis) of the resulting machine state, populating adjacent energy levels. The label $n(k)$ of the lowest energy level that is populated by a particular value $\tilde{p}_{k}$ can be calculated as \begin{align} \tilde{n}(k):=\sum\limits_{n=0}^{k-1}(n+1)\,=\,\tfrac{1}{2}k(k+1), \end{align} while the largest energy populated by $\tilde{p}_{k}$ is given by $\tilde{n}(k+1)-1$. With this, we calculate the energy of the machine after the protocol, which evaluates to \begin{align} \frac{E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\raisebox{0pt}{\tiny{final}}}}{\omega} &=\,\sum\limits_{k=1}^{\infty}e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} k}\,(1-e^{-\beta\omega})^{2}\,\sum\limits_{n=\tilde{n}(k)}^{\tilde{n}(k+1)-1}n =\,\sum\limits_{k=1}^{\infty}e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} k}\,(1-e^{-\beta\omega})^{2}\,\tfrac{1}{2}k(k+1)(k+2)\,=\, \tfrac{3}{4}\operatorname{cosech}^{2}\bigl(\tfrac{\beta\omega}{2}\bigr). \end{align} Since the energy of the initial thermal state is given by \begin{align} \frac{E\left[\tau(\beta)\right]}{\omega} &=\,\sum\limits_{n=0}^{\infty} n\,e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n}\,(1-e^{-\beta\omega})\,=\,\frac{e^{-\beta\omega}}{1-e^{-\beta\omega}}, \end{align} we thus arrive at the energy cost \begin{align} \frac{\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}}{\omega} &=\,\frac{E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^{\raisebox{0pt}{\tiny{final}}}-E\left[\tau(\beta)\right]}{\omega} \,=\, \frac{e^{-\beta\omega}(2+e^{-\beta\omega})}{(1-e^{-\beta\omega})^{2}}. \label{eq:DeltaE harmonic osci nonGaussian infinite control protocol} \end{align} We thus see that this energy cost is finite for all finite initial temperatures (although note that the energy cost diverges when $\beta\rightarrow0$). However, as we show next, the energy cost for attaining the ground state is not minimal, i.e., the protocol achieves perfect cooling (with finite energy and control complexity, but infinite time) but not at the Landauer limit. To see this, we first observe that the entropy of the final pure state of the system $\mathcal{S}$ vanishes, such that $\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}=S\left[\tau(\beta)\right]$. 
Evaluating this entropy, one obtains \begin{align} S\left[\tau(\beta)\right] &=-\tr{\tau\log(\tau)} = -\sum\limits_{n=0}^{\infty}e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n} (1\ensuremath{\hspace*{-0.5pt}}-\ensuremath{\hspace*{-0.5pt}} e^{-\beta\omega})\log\left[e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n}(1\ensuremath{\hspace*{-0.5pt}} -\ensuremath{\hspace*{-0.5pt}} e^{-\beta\omega})\right] = -\sum\limits_{n=0}^{\infty}e^{-\beta\omega\ensuremath{\hspace*{0.5pt}} n}(1\ensuremath{\hspace*{-0.5pt}}-\ensuremath{\hspace*{-0.5pt}} e^{-\beta\omega})\left[-\beta\omega\ensuremath{\hspace*{0.5pt}} n+\log(1\ensuremath{\hspace*{-0.5pt}}-\ensuremath{\hspace*{-0.5pt}} e^{-\beta\omega})\right]\nonumber\\ &=\,\frac{\beta\omega e^{-\beta\omega}}{1-e^{-\beta\omega}}\,+\, \beta\omega\,+\,\log\Bigl(\frac{ e^{-\beta\omega}}{1-e^{-\beta\omega}}\Bigr) \,=\, \frac{\beta\omega}{1-e^{-\beta\omega}}\,+\,\log\Bigl(\frac{ e^{-\beta\omega}}{1-e^{-\beta\omega}}\Bigr). \label{eq:thermal state entropy harmonic osci} \end{align} Using the results from Eqs.~\eqref{eq:DeltaE harmonic osci nonGaussian infinite control protocol} and~\eqref{eq:thermal state entropy harmonic osci}, we can thus compare the expressions for $\beta\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}$ and $\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, and we find that $\beta\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}-\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}>0$ for all nonzero initial temperatures. The origin of this difference is easily identified: although the protocol results in an uncorrelated final state because the system is left in a pure state, that is, $I(\mathcal{S}: \mathcal{M})_{\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{M}$}}}^\prime}=0$, the last term $D(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}}^\prime \| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{M}$}}})$ in Eq.~\eqref{eq:landauerequality} is nonvanishing for nonzero temperatures because the protocol does not result in a thermal state of the machine. With this, we thus show that perfect cooling is indeed possible using a finite energy cost and a finite control complexity in every one of infinitely many steps (thus using diverging time). As we have seen, the structural requirement of an infinite-dimensional effective machine Hilbert space can be met by realising $\mathcal{M}$ as a single harmonic oscillator. Although the presented protocol does not minimise the energy cost to saturate the Landauer bound, we cannot at this point conclusively say that it is not possible to do so in this setting. However, we suspect that a more complicated energy-level structure of the machine is necessary. Finally, let us comment again on the notion of control complexity in terms of effective machine dimension as opposed to the notion of complexity that is often (loosely) associated with the distinction between Gaussian and non-Gaussian operations. As we see from the protocols presented here, the concept of control complexity based on the nontrivially accessed Hilbert-space dimension of the machine indeed captures the resource that must diverge in order to reach the ground state, while the intuition of complexity associated with (non)-Gaussian operations, albeit valid as a characterisation of a certain practical difficulty in realising such operations, seems to be irrelevant for determining if the ground state can be reached. 
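The closed-form expressions in Eqs.~\eqref{eq:DeltaE harmonic osci nonGaussian infinite control protocol} and~\eqref{eq:thermal state entropy harmonic osci}, as well as the strict positivity of $\beta\Delta E_{\mathcal{M}}-\widetilde{\Delta} S_{\mathcal{S}}$ noted above, can be verified numerically; the sketch below (for an arbitrary example value of $\beta\omega$) truncates the relevant sums at a large cutoff:
\begin{verbatim}
# Minimal sketch: energy cost and entropy change of the non-Gaussian protocol.
import numpy as np

bw = 0.8                                            # beta * omega (example value)
x = np.exp(-bw)
k = np.arange(1, 4000)
E_final = np.sum(x**k * (1 - x)**2 * 0.5 * k * (k + 1) * (k + 2))   # in units of omega
assert np.isclose(E_final, 0.75 / np.sinh(bw / 2.0) ** 2)           # (3/4) cosech^2(bw/2)
dE_M = E_final - x / (1 - x)                        # energy cost, in units of omega
assert np.isclose(dE_M, x * (2 + x) / (1 - x) ** 2)
dS = bw * x / (1 - x) - np.log(1 - x)               # entropy of the initial thermal state
print(bw * dE_M - dS)                               # strictly positive: above Landauer
\end{verbatim}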
In the protocol presented in this section, non-Gaussian operations with finite control complexity are used in each step to reach the ground state. Infinitely many steps (i.e., diverging time) could then be traded for a single (also non-Gaussian) operation of infinite control complexity, performed in unit time. In the previous protocol based on Gaussian operations (Appendix~\ref{app:hodivergingtimegaussian}), the control complexity diverges in every single step of the cooling protocol, but the ground state is reached only when there are infinitely many such steps (diverging time) or a single operation in unit time acting on infinitely many modes (see below). However, in the latter case, the operation, although acting on a diverging number of harmonic oscillators, remains Gaussian, as we now show explicitly. \subsection{Diverging Control Complexity Cooling Protocol for Harmonic Oscillators} \label{app:hodivergingcontrolcomplexitygaussian} Here we give a protocol for perfectly cooling a harmonic oscillator in unit time and with the minimum energy cost, but with diverging control complexity. In accordance with Theorem~\ref{thm:variety}, the machines used to cool the target system will likewise be harmonic oscillators. Let the operators $a~(a^{\dagger})$ and $b_{k}~(b_{k}^{\dagger})$, respectively, denote the annihilation (creation) operators of the target system and a machine subsystem labelled $k$. We then consider the unitary transformation in Eq.~\eqref{eq:swapwithi}, namely \begin{equation} U_{k}:= e^{i\frac{\pi}{2}(a^{\dag}b_k +ab_{k}^\dag)} . \end{equation} One can then apply the diverging-time cooling protocol from Appendix~\ref{app:hodivergingtimegaussian} to cool the system to the ground state at the Landauer limit via the total unitary transformation \begin{align} U_{\textup{tot}} := \lim_{N \to \infty} U_{(N)}, \qquad \text{ with } \qquad U_{(N)}:= \prod_{k=1}^N U_k . \label{eq:total unitary} \end{align} We now seek the Hamiltonian that generates $U_{\textup{tot}}$. First note that $U_{(N)}aU_{(N)}^\dag =i b_{1}$ and \begin{equation} U_{(N)}b_{k}U_{(N)}^\dag = \begin{cases} -b_{k+1}, & \text{for } k < N \\ i a, & \text{for } k = N \\ b_{k}, & \text{for } k > N \end{cases}, \end{equation} which can be proven by induction. In contrast with Appendix~\ref{app:hodivergingtimegaussian}, here we use the complex representation of the symplectic group to describe the transformation, i.e., the set of matrices $S$ satisfying $SKS^\dag =K$, where $K:=\openone_{N+1} \oplus (-\openone_{N+1})$. Gathering the raising and lowering operators of the target system and the first $N$ machines into the vector $\vec{\xi}:=\bigl(\begin{matrix} a & b_{1} & b_{2} & \ldots & b_{N} & a^{\dag} & b_{1}^{\dag} & b_{2}^{\dag} & \ldots & b_{N}^{\dag}\end{matrix}\bigr)^\mathrm{T}$, we can write the transformation above as $U_{(N)}\vec{\xi} \, U_{(N)}^\dag = S^\mathrm{T} \vec{\xi}$~\cite{AdessoRagyLee2014}, where \begin{equation} S=\begin{pmatrix} \alpha_{(N)} & 0 \\ 0 & \alpha_{(N)} \end{pmatrix}, \qquad \text{ with } \qquad \alpha_{(N)} := \begin{pmatrix} 0 & 0 & 0 & \ldots & 0 & i \\ i & 0 & 0 & \ldots & 0 & 0 \\ 0 & -1 & 0 & \ldots & 0 & 0 \\ 0 & 0 & -1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ldots & -1 & 0 \end{pmatrix} .
\label{eq:alphamatrix} \end{equation} Now, defining the matrix of Hamiltonian coefficients $h_{(N)}$ implicitly by $U_{(N)} =: \exp(-i \vec{\xi}^{\,\dagger} \cdot h_{(N)} \cdot \vec{\xi})$, we have that $S=\exp(-i K h_{(N)})$~\cite{AdessoRagyLee2014}, i.e., $h_{(N)}=i K \log(S^\mathrm{T})=i K \log(S)^\mathrm{T}$, where we take the principal logarithm. To calculate this, we must diagonalise the matrix $\alpha_{(N)}$ in Eq.~\eqref{eq:alphamatrix}. The eigenvalues of $\alpha_{(N)}$ are \begin{equation} \lambda_{k}:=-e^{-i\pi \frac{2k-1}{N+1}}, \qquad \text{ with } \qquad k\in\lbrace 1,2,\ldots,N+1\rbrace , \end{equation} i.e., the negative of the $(N+1)^\mathrm{th}$ roots of $-1$, and it is diagonalised by the unitary matrix $V$ constructed from the eigenvectors $\vec{v}_{k}$: \begin{equation} V:=\begin{pmatrix} \vec{v}_{1} & \vec{v}_{2} & \vec{v}_{3} & \ldots & \vec{v}_{N+1} \end{pmatrix} \qquad \text{ with } \qquad \vec{v}_{k}:= \frac{-1}{\sqrt{N+1}} \begin{pmatrix} i (-\lambda_{k})^{-1} \\ (-\lambda_{k})^{-2} \\ (-\lambda_{k})^{-3} \\ \vdots \\ (-\lambda_{k})^{-(N+1)} \end{pmatrix} . \end{equation} Specifically, $\alpha_{(N)}=VDV^\dag$, where $D:=\diag ( \lambda_{1},\lambda_{2},\ldots,\lambda_{N+1} )$, and thus \begin{equation} h_{(N)}^\mathrm{T} =i K \log \begin{pmatrix} V D V^\dag & 0 \\ 0 & V D V^\dag \end{pmatrix} = i K \begin{pmatrix} V & 0 \\ 0 & W \end{pmatrix} \begin{pmatrix} \log (D) & 0 \\ 0 & \log (D) \end{pmatrix} \begin{pmatrix} V^\dag & 0 \\ 0 & V^\dag \end{pmatrix} =: \begin{pmatrix} A & 0 \\ 0 & -A \end{pmatrix} \end{equation} for some matrix $A$. By direct calculation, one finds that \begin{equation} \label{eq:AmatrixInfComplex} A_{jk} = i^{\delta_{j1}}i^{\delta_{k1}}\frac{\pi}{(N+1)^2} \sum_{p=1}^{N+1} \, (2p-2-N)e^{-i\pi\frac{2p-1}{N+1}(j-k)}. \end{equation} Now, considering the identity \begin{equation} \sum_{p=1}^{N+1} \, e^{i \theta p} = \frac{e^{i \theta (N+1)}-1}{1-e^{i \theta}} \end{equation} for $\theta \in \mathbb{R}$, as well as its derivative with respect to $\theta$, one can calculate the sum in Eq.~\eqref{eq:AmatrixInfComplex}. We then have \begin{equation} \lim_{N\to\infty} A_{jk} = \begin{cases} 0, & \text{for } j = k \\ i i^{\delta_{j1}}i^{\delta_{k1}} \frac{1}{j-k}, & \text{for } j \neq k \end{cases}. \end{equation} Then, finally, we have that $U_{\textup{tot}} = e^{-i H_{\textup{tot}}}$, where $H_{\textup{tot}}= \lim_{N\to\infty} \left( \vec{v}^{\,\dagger} \cdot h_{(N)} \cdot \vec{v} \right)$, i.e., \begin{equation} H_{\textup{tot}} = - \sum_{j=2}^{\infty} \left( \frac{1}{j-1} b_{j}^\dag a + \mathrm{H.c.} \right) + \sum_{j,k=1; \, j\neq k}^{\infty} \frac{i}{j-k} b_{j}^\dag b_{k} . \label{eq:complexHamiltonian} \end{equation} Thus, the system is cooled to the ground state at an energy cost saturating the Landauer bound, and in unit time, but via a procedure that implements a multimode Gaussian unitary on a diverging number of modes. \section{Cooling Protocols in the Incoherent-Control Paradigm} \label{app:coolingprotocolsincoherentcontrolparadigm} In this section, we investigate the required resources to cool the target system within the incoherent-control paradigm. For simplicity, we consider only the finite-dimensional setting. 
Here, we have a qudit target system $\mathcal{S}$ interacting resonantly (i.e., in an energy-conserving manner) with a qudit machine $\mathcal{M}$, which is partitioned into one part, $\mathcal{C}$, in thermal contact with the ambient environment at inverse temperature $\beta$ and another part, $\mathcal{H}$, in contact with a hot bath at inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} < \beta$. The Hamiltonians for each subsystem are $H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}=\sum_{n=0}^{d_X-1} n\, \omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}} \ket{n}\!\bra{n}_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$; the energy resonance condition enforces that $\omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}=\omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$. For the most part in this section, we focus on equally spaced Hamiltonians for simplicity; we comment specifically whenever we consider otherwise. In order to cool the target system, we aim to compress as much population as possible into its lowest energy eigenstates via interactions that are restricted to the energy-degenerate subspaces of the joint $\mathcal{S}\mathcal{C}\mathcal{H}$ system. Thus we are restricted to global energy-conserving unitaries $U_{\raisebox{-1pt}{\tiny{EC}}}$ that satisfy \begin{equation} [{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}, U_{\raisebox{-1pt}{\tiny{EC}}}]=0. \label{eq:tripartite commutator UH0} \end{equation} In Ref.~\cite{Clivaz_2019E}, it was shown that for the case where all three subsystems are qubits, the optimal global unitary in this setting (inasmuch as it renders the target system in the coldest state possible given the restrictions) is \begin{align} U_{\raisebox{-1pt}{\tiny{EC}}}=\ket{0,1,0}\! \bra{1,0,1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}+\ket{1,0,1}\! \bra{0,1,0}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}+\bar{\openone}, \label{eq:tripartite unitary op qubit} \end{align} where $\bar{\openone}$ denotes the identity matrix on all subspaces that are not energy degenerate. Considering the generalisation to qudit subsystems, it is straightforward to see that, for equally spaced Hamiltonians, the optimal global unitaries must be of the form \begin{equation} U_{\raisebox{-1pt}{\tiny{EC}}}=\Bigg[\sum_{m,n,l=0}^{d-2}\ket{m,n+1,l}\! \bra{m+1,n,l+1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}+\ket{m+1,n,l+1}\! \bra{m,n+1,l}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}\Bigg]+\bar{\openone}. \label{eq:tripartite unitary op} \end{equation} For the most general case where the Hamiltonians of each subsystem are arbitrary, it is not possible to write down a generic form of the optimal unitary, since the energy-resonant transitions that lead to cooling the target now depend on the microscopic details of the energetic structure. Nonetheless, in Appendix~\ref{app:incoherentcoolingfinitetemperature}, we provide a protocol (i.e., not the unitary \emph{per se}, but a sequence of steps) in this setting that attains perfect cooling and saturates the Carnot-Landauer limit. Intuitively, the above types of unitaries simply reshuffle populations that are accessible through resonant transitions.
For the purpose of cooling, one wishes to do this in such a way that the largest population is placed in the lowest energy eigenstate of the target system, the second largest in the second lowest energy eigenstate, and so on (in line with the optimal unitaries in the coherent-control setting); indeed, on the energy-degenerate subspaces accessible, such unitaries act precisely in this way. It is straightforward to show that interactions of this form satisfy Eq.~\eqref{eq:tripartite commutator UH0}. For the sake of simplicity, we now focus on the case where all systems are qubits, although the results generalise to the qudit setting. Consider the initial joint state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}=\sum_{m,n,l=0}^1\, p_{mnl}\ket{m,n,l}\!\bra{m,n,l}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$. By applying a unitary $U_{\raisebox{-1pt}{\tiny{EC}}}$ of the form given in Eq.~\eqref{eq:tripartite unitary op}, the post-transformation joint state is \begin{equation} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^\prime=U_{\raisebox{-1pt}{\tiny{EC}}}\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}} U_{\raisebox{-1pt}{\tiny{EC}}}^{\dagger}=\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}+ \Delta p \, \ket{0,1,0}\!\bra{0,1,0}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}} -\Delta p\, \ket{1,0,1}\!\bra{1,0,1}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}, \end{equation} where $\Delta p := p_{101}-p_{010}$ indicates the amount of population that has been transferred from the excited state of the target system to the ground state throughout the interaction. Naturally, in order to cool the target system, $\Delta p \geq 0$, i.e., the initial population $p_{101}$ must be at least as large as $p_{010}$. Due to the energy-conserving nature of the global interaction, the energy exchanged between the subsystems throughout a single such interaction, $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}= \tr{H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} (\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^{\prime}-\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}})}$, can be calculated via \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}= -\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \Delta p, ~~~~~~ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}= \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}} \Delta p, ~~~~~\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}= -\omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}} \Delta p. \label{eq:energy exchange INc} \end{align} Thus, for a fixed energy-level structure of all subsystems (i.e., given the local Hamiltonians), one requires only knowledge of the pre- and post-transformation state of any one of the subsystems to calculate the energy change for all of them. \subsection{Diverging Energy: Proof of Theorem~\ref{thm:infenergyauto}} \label{app:divergingenergyincoherentnogotheorem} The first thing to note is that in the incoherent-control paradigm, even when one allows for the energy cost, i.e., the heat drawn from the hot bath, to be diverging, it is not possible to perfectly cool the target system, as presented in Theorem~\ref{thm:infenergyauto}. The intuition behind this result is that the target system can interact only with energy-degenerate \emph{subspaces} of the hot and cold machine subsystems. 
The optimal transformation that one can do here to achieve cooling is to transfer the highest populations of any such subspace to the lowest energy eigenstate of the target system; however, any such subspace has population strictly less than one for any $0 \leq \beta_{\raisebox{-1pt}{\tiny{$H$}}} \leq \beta < \infty$ independently of the energy structure. Moreover, the difference from one can be bounded by a finite amount that does not vanish independent of the energy-level structure of any machine of finite dimension. This makes it impossible to attain a subspace population of one even as the energy cost diverges for any fixed and finite control complexity. It follows that the ground-state population of the target system can never reach unity in a single operation of finite control complexity and hence perfect cooling cannot be achieved. Precisely, we show the following. Let $\mathcal{S}$ be a finite-dimensional system of dimension $d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ with associated Hamiltonian with finite but otherwise arbitrary energy gaps $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \sum_{i=0}^{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - 1} \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^i \ket{i}\!\bra{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$, and let $d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}$ and $d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$ be integers denoting the dimensions of the cold and hot parts of the machine respectively. Then it is impossible to cool the system $\mathcal{S}$ in the incoherent-control paradigm, i.e., using energy-conserving unitaries involving $\mathcal{C}$ and $\mathcal{H}$ at some initial inverse temperatures $\beta, \beta_{\raisebox{-1pt}{\tiny{$H$}}}$ respectively, arbitrarily close to the ground state. Note that, in particular, this result holds irrespective of the energy-level structure of $\mathcal{C}$ and $\mathcal{H}$ and no matter how much energy is drawn from the hot bath as a resource. In order to set notation for the following, we assume $\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^i \geq \omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^j$ for $i \geq j$ and $\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^0=0$, where $\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^i$ denotes the $i^{\text{th}}$ energy eigenvalue of system $\mathcal{X}$ with $\mathcal{X} \in \{ \mathcal{S}, \mathcal{C}, \mathcal{H}\}$. We also assume the initial states on $\mathcal{S}$ and $\mathcal{C}$ to be thermal at inverse temperature $\beta$, and $\mathcal{H}$ is assumed to be initially in a thermal state at inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \leq \beta$. We denote by $p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^i$ the $i^{\text{th}}$ population of system $\mathcal{X}$ in a given state, i.e., $p_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}^i= \bra{i} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} \ket{i}$, where $\ket{i}$ denotes the $i^{\text{th}}$ energy eigenstate of $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$. We also write $p_{ijk} := p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^i p_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^j p_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^k$. The intuition behind the proof is as follows. 
The global ground-state level of the joint hot-and-cold machine has some nonzero initial population for any finite-dimensional machine; in particular it can always be lower bounded by $\tfrac{1}{d_\mathcal{C} d_\mathcal{H}}$ for any Hamiltonians and initial temperatures, which is strictly greater than zero as long as the dimensions remain finite. Fixing the control complexity of any protocol considered here to be finite in value thus implies a lower bound on the initial ground-state population of the total machine that is larger than zero by a finite amount. Depending on the energy-level structure of the hot and cold parts of the machine, there may be other nonzero initial populations, but in order to cool the target system $\mathcal{S}$ perfectly, at least all of the previously mentioned populations must be transferred into spaces spanned by energy eigenstates of the form $\ket{0 j k}_{{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}}$. This intuition is formalised via Lemma~\ref{lemma:necdeg}, where we show that independent of the energy structure of $\mathcal{C}$ and $\mathcal{H}$, one must be able to make such transfers of population in order to perfectly cool $\mathcal{S}$. However, in order to make such transfers in an energy-conserving manner, all energy eigenstates of the form $\ket{i00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ must be degenerate with some of the form $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$. This degeneracy condition, in turn, also implies that every energy eigenstate of the form $\ket{0 j k}_{{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}}$ has an associated initial population $p_{0jk}$ that is nonvanishing for all machines of finite dimension (i.e., for all protocols with finite control complexity). Thus, upon transferring some population $p_{i00}$ \emph{into} the subspace spanned by $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, i.e., one of a relevant form for the population to contribute to the final ground-state population of the target, one inevitably transfers some finite amount of population \emph{away} from the relevant space and into $\ket{i00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, which does not contribute to the final ground-state population of the target. We formalise this intuition in the discussion following Lemma~\ref{lemma:necdeg}. In this way, no matter what one does, there is always a finite amount of population, which is lower bounded by some strictly positive number due to the constraint on control complexity, that does not contribute to the final ground-state population of the target, implying that perfect cooling is not possible. The formal proof occurs in two steps. We first show that some specific degeneracies in the joint $\mathcal{S} \mathcal{C} \mathcal{H}$ system must be present in order to be able to even potentially cool $\mathcal{S}$ arbitrarily close to the ground state. We then prove that, given the above degeneracies, one cannot cool the system $\mathcal{S}$ beyond a fixed ground-state population that is independent of the energy structure of $\mathcal{C}$ and $\mathcal{H}$; in particular, one can draw as much energy from the hot bath as they like and still do no better. We begin with the following lemma. 
\begin{lem}\label{lemma:necdeg} Given $\mathcal{S}$, $d_\mathcal{C}$, and $d_\mathcal{H}$ as above, one can reach a final ground-state population of the system $\mathcal{S}$ arbitrarily close to one in the incoherent-control setting only if each energy eigenstate $\ket{i00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, where $i \in \{1, \dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1\}$, is degenerate with at least one energy eigenstate $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, where $j \in \{0, \dots d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}-1\}, k \in \{0, \dots d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}-1\}$. \end{lem} \begin{proof} Suppose that there exists an $i^* \in \{1, \dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1\}$ such that $\ket{i^*00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ is not degenerate with any $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, where $j \in \{0, \dots d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}-1\}, k \in \{0, \dots d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}-1\}$. We show that, in this case, one cannot cool $\mathcal{S}$ arbitrarily close to the ground state. Let $B_{i}$ denote the degenerate subspace of the total Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$, where $H_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$ denotes the Hamiltonian of system $\mathcal{X} \in \{ \mathcal{S}, \mathcal{C}, \mathcal{H}\}$, that contains the eigenvector $\ket{i00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$. Then, any energy-conserving unitary $U_{\raisebox{-1pt}{\tiny{EC}}}$ used to cool the system in the incoherent-control paradigm must act within such $B_i$ subspaces, i.e., $U_{\raisebox{-1pt}{\tiny{EC}}} = \bigoplus_{i} U_{B_i}$ (this is a direct consequence of $[U_{\raisebox{-1pt}{\tiny{EC}}},H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}+H_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}]=0$; see, e.g., Lemma 5 of Ref.~\cite{Clivaz2020Thesis}). This means, in particular, that the initial population of $\ket{i^* 00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ can only be distributed within $B_{i^*}$, and as no eigenvector of the form $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ is contained in $B_{i^*}$ by assumption, it can never contribute to the final ground-state population of $\mathcal{S}$, which we denote $\widetilde{p}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{0}$. So we have \begin{equation} \widetilde{p}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^0 \leq 1- p_{i^*00}. 
\end{equation} Now, for $\mathcal{X} \in \{ \mathcal{C}, \mathcal{H} \}$, with any $\{\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^i\}$ such that each $\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^i \geq 0$ with $\omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^0=0$ and any inverse temperature $\beta \geq 0$, we have for the partition function $\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}}$ that \begin{equation} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{X}$}}} = 1 + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^1} + \dots + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{X}$}}}^{d_X-1}} \leq d_\mathcal{X}, \end{equation} and so we have the following bound on the initial populations associated to each eigenvector $\ket{i00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ \begin{equation} p_{i00} = \frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^i}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} \geq \frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^i}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} > 0 \quad \quad \forall \, i \in \{ 1, \hdots , d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - 1 \}. \end{equation} Combining the above, we have that \begin{equation} \widetilde{p}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^0 \leq 1-\frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}<1. \end{equation} So, as desired, we have shown that one cannot cool beyond $1-\frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}$, a bound strictly smaller than 1 for any finite-dimensional machine (i.e., for any protocol using only finite control complexity) and independent of the energies of $\mathcal{C}$ and $\mathcal{H}$. \end{proof} \noindent We can now proceed to the second step of the proof of Theorem~\ref{thm:infenergyauto}. \begin{proof} To this end, consider any $i^* \in \{ 1, \dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1\}$. If $\ket{i^*00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ is not degenerate with any $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, our assertion is proven by Lemma~\ref{lemma:necdeg}. On the other hand, if there is a $j^* \in \{0,\dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}-1\}$ and a $k^* \in \{0, \dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}-1\}$ for which $\ket{i^*00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ and $\ket{0j^*k^*}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ are degenerate, then $B_{i^*}$, the degenerate subspace containing $\ket{i^*00}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, also contains $\ket{0j^* k^*}$. Now $B_{i^*}$ may also contain other eigenvectors of the form $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, i.e., some other $\ket{0j'k'}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ with $j' \in \{0,\dots,d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}-1\}, k' \in \{0, \dots, d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}-1\}$. 
Crucially, each such eigenvector in $B_{i^*}$ must have an associated minimal amount of initial population as long as the machine is finite dimensional. Indeed, for any such $\ket{0 j^* k^*}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ in $B_{i^*}$, we have the condition $\omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}^{j^*} + \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}^{k^*}=\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}$ and so $\omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}^{j^*} \leq \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}$, $\omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}^{k^*} \leq \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}$, implying that $ \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}^{j^*} \leq \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}$ and $\beta_H \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}^{k^*} \leq \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}$. Thus we have the bound \begin{equation}\label{eq:minimalpopbound} p_{0j^*k^*} = \frac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}^{j^*}} e^{-\beta_H \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}^{k^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} \geq \frac{e^{-2 \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} \geq \frac{e^{-2 \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}=: q_{i^*}. \end{equation} Now, take any particular $i^* \in \{ 1, \hdots, d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} -1\}$ and let $\pi_{i^*}$ be the dimension of $B_{i^*}$, $\mu$ the number of energy eigenstates of the form $\ket{0jk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ that $B_{i^*}$ contains and $\nu=\pi_{i^*}-\mu$ the number of energy eigenstates of the form $\ket{ijk}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$, where $i \neq 0$, that $B_{i^*}$ contains. So \begin{equation} B_{i^*} = \text{span} \{ \ket{0jk}, \ket{0j_2k_2}, \dots, \ket{0j_\mu k_\mu}, \ket{i^*00}, \ket{i_2 \ell_2 m_2}, \dots, \ket{i_\nu \ell_\nu m_\nu}\}. \end{equation} Let $\boldsymbol{v}=\{p_{0jk},p_{0j_2k_2}, \dots, p_{0j_\mu k_\mu},p_{i^*00},p_{i_2 \ell_2 m_2}, \dots, p_{i_\nu \ell_\nu m_\nu}\}$ be the vector of initial populations associated to the eigenvectors of $B_{i^*}$, and $\boldsymbol{v}^{\uparrow}$ be the vector whose components are those of $\boldsymbol{v}$ arranged in nondecreasing order. Using Schur's theorem \cite{2011Marshall}, we know that after applying any unitary transformation $U_{B_{i^*}}$ on the relevant energy-degenerate subspace, the vector of transformed populations, $\boldsymbol{\widetilde{v}}$, is majorised by $\boldsymbol{v}$. In particular, labelling the vector elements by $\boldsymbol{v}_\alpha$, we have \begin{equation}\label{eqs:nocontributing} \widetilde{p}_{i^* 00} + \sum_{\alpha=2}^\nu \widetilde{p}_{i_\alpha \ell_\alpha m_\alpha} \geq \sum_{\alpha=1}^\nu \boldsymbol{v}_{\alpha}^{\uparrow}. \end{equation} We now claim that $\sum_{\alpha=1}^\nu \boldsymbol{v}_{\alpha}^{\uparrow} \geq q_{i^*}$ from Eq.~\eqref{eq:minimalpopbound}. 
Indeed, as $\boldsymbol{v}$ has at most $\nu-1$ elements that do not belong to the set $A:=\{p_{0jk},p_{0j_2k_2}, \dots, p_{0j_\mu k_\mu},p_{i^*00}\}$, at least one element of $A$ must contribute to the sum $\sum_{\alpha=1}^\nu\boldsymbol{v}^{\uparrow}_\alpha$. Let $x$ be that element. As $\boldsymbol{v}^{\uparrow}_\alpha \geq 0$ for all $\alpha=1,\dots, \pi_{i^*} = \mu + \nu$, we have \begin{equation} \sum_{\alpha=1}^{\nu}\boldsymbol{v}^{\uparrow}_\alpha \geq x. \end{equation} Now, as $p_{0j_\gamma k_\gamma} \geq q_{i^*}$ for all $\gamma=2, \dots, \mu$, we have \begin{equation} x \geq \min (q_{i^*}, p_{i^*00})= q_{i^*}, \end{equation} where $p_{i^*00} \geq q_{i^*}$ can be seen from Eq.~\eqref{eq:minimalpopbound}, as claimed. As the l.h.s. of Eq.~\eqref{eqs:nocontributing} represents the amount of population in the subspace $B_{i^*}$ that does \emph{not} contribute to the final ground-state population of the target system, we have \begin{align} \widetilde{p}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^0 &\leq 1- \left( \widetilde{p}_{i^* 00} + \sum_{\alpha=2}^\nu \widetilde{p}_{i_\alpha \ell_\alpha m_\alpha} \right) \leq 1- q_{i^*} = 1- \frac{e^{-2 \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}. \end{align} So, for any finite-dimensional machine, one cannot cool the system $\mathcal{S}$ beyond $1-\frac{e^{-2 \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{i^*}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} d_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}$, a bound strictly smaller than 1 and independent of the energy structure of $\mathcal{C}$ and $\mathcal{H}$, as desired. \end{proof} As a concrete example, consider the case where all systems are qubits. The initial joint state is \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^{(0)} = \frac{(\ketbra{0}{0} + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \ketbra{1}{1})_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \otimes (\ketbra{0}{0} + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}} \ketbra{1}{1})_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \otimes (\ketbra{0}{0} + e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}} \ketbra{1}{1})_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}}, \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}})}. 
\end{align} The only energy-conserving unitary interaction that is relevant for cooling is the one that exchanges the populations in the levels spanned by $\ket{010}$ and $\ket{101}$, which have initial populations $\tfrac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}}, \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}})}$ and $\tfrac{e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}}, \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}})}$ respectively, which are both strictly less than one. The necessary condition for any cooling to be possible implies that $e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}} \geq e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}}$; now, performing the optimal cooling unitary leads to the final ground-state population of the target system \begin{align} p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\prime}(0) = \bra{0} \,\ptr{\mathcal{C} \mathcal{H}}{U \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^{(0)} U^\dagger}\!\ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \frac{1 + e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}}(1 + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} + e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}})}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}}, \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}})} < 1. \end{align} Indeed, using $e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}} \geq e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}}$, \begin{equation} p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\prime}(0) \leq \frac{1 + e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}}e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} }{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) } \leq \frac{1}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}})} \leq 1. \end{equation} The second inequality is strict unless $\beta_{\raisebox{-1pt}{\tiny{$H$}}} = 0$ or $\omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}} = 0$. In both cases, for equality in the first inequality, we need $\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} = \beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}$. 
If $\beta = 0$, then $\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}) = 2$ and the last inequality is strict. If $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} = \omega_{\raisebox{0pt}{\tiny{$\mathcal{C}$}}}$, no cooling is possible; hence $p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\prime}(0) = p_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(0) < 1$. \subsection{Diverging Time and Diverging Control Complexity} \label{app:incoherentdivergingtimecontrolcomplexity} We now move to analyse the case where diverging time is allowed, where we wish to minimise the energy cost and control complexity throughout the protocol over a diverging number of energy-conserving interactions between the target system and the hot and cold subsystems of the machine. We again consider all three systems to be qubits, but the results generalise to arbitrary (finite) dimensions. Here, the machines and ancillas begin as thermal states with initial inverse temperatures $\beta$ and $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \leq \beta$ respectively. Just as in the diverging time cooling protocol in the coherent-control setting presented in Appendix~\ref{app:divergingtimecoolingprotocolfinitedimensionalsystems}, we consider a diverging number of machines, with slightly increasing energy gaps, in a configuration such that the target system interacts with the $n^{\textup{th}}$ machine at time step $n$. Suppose that after $n$ steps of the protocol, the target qubit has been cooled to some inverse temperature $\beta_n > \beta$; equivalently, this can be expressed as a thermal state with corresponding energy gap $\omega_n=\frac{\beta_n}{\beta}\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$. We now wish to interact the target system $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_n, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})$ with a machine $\mathcal{M}_{n+1}$ with slightly increased energy gaps with respect to the most recent one $\mathcal{M}_{n}$, i.e., we increase the energy gaps of the cold subsystem $\mathcal{C}$ from $\omega_n$ to $\omega_{n+1} = \omega_n + \epsilon_n$; the resonance condition enforces the energy gap of the hot subsystem $\mathcal{H}$ to be similarly increased to $\omega_n + \epsilon_n - \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$. Thus, the next step of the protocol is a unitary acting on the global state \begin{align} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}^{(n)}=\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_n, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})\otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_n+\epsilon _n) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\beta_{\raisebox{-1pt}{\tiny{$H$}}},\omega_n+\epsilon _n-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} ). \label{eq: initial 3qubits} \end{align} In order to cool the target system via said unitary, we must have that $p_{101} \geq p_{010}$ for the state in Eq.~\eqref{eq: initial 3qubits}, which implies that $\epsilon_n$ must satisfy the following condition: \begin{align} e^{-\beta \omega_n-\beta_{\raisebox{-1pt}{\tiny{$H$}}}( \omega_n+\epsilon _n- \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})}\,\geq\,e^{-\beta(\omega_n+ \epsilon_n)} \Rightarrow \epsilon_n\geq \gamma (\omega_n-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}) \quad \quad \mathrm{where} \quad \gamma:=\frac{\beta_{\raisebox{-1pt}{\tiny{$H$}}}}{\beta-\beta_{\raisebox{-1pt}{\tiny{$H$}}}}.\label{eq: epsilon bound} \end{align} This condition is crucial. 
It means that if the hot subsystem $\mathcal{H}$ is coupled to a heat bath at any finite temperature, i.e., $\beta_{\raisebox{-1pt}{\tiny{$H$}}} > 0$, $\epsilon_n$ depends linearly on the inverse temperature of the target system at the previous step $\beta_n$, and can thus not be taken to be arbitrarily small. As we now show, this condition prohibits the ability to perfectly cool the target system at the Landauer limit for the energy cost whenever the heat bath is at finite temperature. On the other hand, for infinite-temperature heat baths, perfect cooling at the Landauer limit is seemingly achievable; here, $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \to 0$ and so $\gamma \to 0$, leading to the trivial constraint $\epsilon_n \geq 0$ which allows it to be arbitrarily small, as is required. Nonetheless, the explicit construction of any protocol doing so in the incoherent-control setting is \emph{a priori} unclear, as the restriction of energy conservation makes for a fundamentally different setting from the coherent-control paradigm. We now explicitly derive the optimal diverging-time protocol to perfectly cool at the Landauer limit for an infinite-temperature heat bath, thereby proving Theorem~\ref{thm:autoinftempinftime}. \subsection{Saturating the Landauer Limit with an Infinite-Temperature Heat Bath} \label{app:incoherentinfiniteheatbath} Before calculating the energy cost, we briefly discuss the attainability of the optimally cool target state. We begin with all subsystems as qubits, for the sake of simplicity, but the logic generalises to higher dimensions. In the incoherent paradigm, the target system $\mathcal{S}$ interacts with a virtual qubit of the total machine $\mathcal{M} = \mathcal{C}\mathcal{H}$ that consists of the energy eigenstates $\ket{0,1}_{\mathcal{C}\mathcal{H}}$ and $ \ket{1,0}_{\mathcal{C}\mathcal{H}}$, with populations $p_{0_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} 1_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}$ and $p_{1_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} 0_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}$ respectively. Suppose that at step $n+1$ the cold subsystem involved in the interaction has energy gap $\omega_n + \epsilon_n$. In Ref.~\cite{Clivaz_2019E}, it is shown that by repeating the incoherent cooling process (i.e., implementing the unitary in Eq.~\eqref{eq:tripartite unitary op}) and taking the limit of infinite cycles, this scenario equivalently corresponds to the general (coherent) setting where arbitrary unitaries are permitted and the target system interacts with a virtual qubit machine with effective energy gap $\omega^{\textup{eff}}_n$ given by \begin{align} e^{-\beta \omega^{\textup{eff}}_n }:=\frac{p_{1_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} 0_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}}{p_{0_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} 1_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}}= e^{-\beta(\omega_n+ \epsilon_n)}\, e^{\beta_{\raisebox{-1pt}{\tiny{$H$}}}( \omega_n+\epsilon_n- \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})} \quad \quad \Rightarrow \omega^{\textup{eff}}_n=\omega_n +\epsilon_n-\frac{\beta_{\raisebox{-1pt}{\tiny{$H$}}}}{\beta}(\omega_n+\epsilon _n- \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}). 
\label{eq: eff gap} \end{align} It is clear that for finite-temperature heat baths, i.e., $\beta_{\raisebox{-1pt}{\tiny{$H$}}}> 0$, the effective energy gap $\omega^{\textup{eff}}_n$ is always smaller than the energy gap of the machine at any given step, i.e., $\omega^{\textup{eff}}_n\leq \omega_n+\epsilon _n$; on the other hand, equality holds iff the heat bath is at infinite temperature, i.e., $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \to 0$. Thus, in the infinite-temperature case, given a target system beginning at some step of the protocol in the state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta, \omega_n)$, it is possible to get close to the asymptotic state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta, \omega_n+\epsilon_n)$; if the temperature is finite, however, this state is not attainable (even asymptotically). Following the arguments in Appendix~\ref{app:divergingtimecoolingprotocolfinitedimensionalsystems}, i.e., considering a diverging number of machines, each of which has the part connected to the cold bath at energy gap $\omega_{\mathcal{C}_n} = \omega_n + \epsilon_n$, and taking the limit of $\epsilon_n \to 0$ (which one can \emph{only} do if the hot-bath temperature is infinite) allows one to cool perfectly in diverging time in the incoherent paradigm at the Landauer limit. We now calculate the energy cost explicitly for the infinite-temperature heat bath case, precisely demonstrating attainability of the Landauer limit. We use a similar approach to that described in Appendix~\ref{app:divergingtimecoolingprotocolfinitedimensionalsystems}: we have a diverging number of cold machines, one for each energy gap $\omega_n$, with which the target system at the $(n-1)^{\textup{th}}$ time step interacts; for an infinite-temperature heat bath, i.e., $\mathcal{H}$ is in the maximally mixed state independent of its energy structure, the state of the target system at each step $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta, \omega_{n-1})$ is achievable. From Eq.~\eqref{eq:energy exchange INc}, the energy change between all subsystems for a given step of the protocol, i.e., taking $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta, \omega_{n-1}) \to \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta, \omega_{n})$, can be calculated as \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}&=\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n})- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n-1}))}\nonumber\\ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^{(n)}&=-\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega_{n})(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n})- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n-1}))}\nonumber\\ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(n)}&=\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}(\omega_{n}-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n})- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n-1}))}. \label{eq:energy exch incoherent ind} \end{align} In general, i.e., for finite-temperature heat baths, we would have $\omega_n = \omega_{n-1} + \epsilon_{n-1}$, with a lower bound on $\epsilon_{n-1}$ for cooling to be possible [in accordance with Eq.~\eqref{eq: epsilon bound}]. 
However, for infinite-temperature heat baths, this lower bound trivialises since the energy structure of the hot-machine subsystem plays no role in its state; thus we can choose the energy gap structure for the machines as $\{\omega_n=\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n \epsilon\}_{n=1}^N$ with $\epsilon$ arbitrarily small. Taking the limit $\epsilon \to 0$, the diverging time limit $N \to \infty$, and writing $\omega_{\raisebox{-1pt}{\tiny{$N$}}} = \omega_{\textup{max}}$ for the maximum energy gap of the cold-machine subsystems, the energy exchanged throughout the entire cooling protocol here is given by \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}&=\lim_{N\to \infty}\sum_{n=1}^N\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}=\tr{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{\textup{max}})- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}))}\nonumber\\ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}&=\lim_{N\to \infty}\sum_{n=1}^N\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^{(n)}=\frac{1}{\beta} \left\{ S[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta,\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})] - S[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^*(\beta,\omega_{\textup{max}})] \right\} = \frac{1}{\beta}\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \nonumber\\ \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}&=\lim_{N\to \infty}\sum_{n=1}^N\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(n)}=-\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}. \label{eq:energy exch incoherent} \end{align} Here, the expression for $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}$ can be derived using the same arguments as presented in Appendix~\ref{app:prooftheoreminftime}. In particular, the heat dissipated by the cold part of the machine, which naturally plays the role of the heat sink in the incoherent setting (an infinite-temperature heat bath can be considered a work source, since any energy drawn from it comes with no entropy change), is in accordance with the Landauer limit. It is straightforward to obtain the same result for qudit systems. Lastly, in a similar way to the other protocols we have presented, one could compress all of the diverging number of operations into a single one whose control complexity diverges, thereby trading off between time and control complexity. \subsection{Analysis of Finite-Temperature Heat Baths} \label{app:analysisfinitetempheatbaths} We now return to the more general consideration of finite-temperature heat baths, i.e., $0 < \beta_{\raisebox{-1pt}{\tiny{$H$}}}\leq\beta$. In the case where $\beta_{\raisebox{-1pt}{\tiny{$H$}}}=\beta$, from Eq.~\eqref{eq: eff gap}, it is straightforward to see that for any machine energy gap $\omega_n$, the effective gap $\omega^{\textup{eff}}_n$ is equal to the gap of the target system, which means that no cooling can be achieved in the incoherent paradigm. Nonetheless, for any $\mathcal{H}$ subsystem coupled to a heat bath of inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$H$}}} < \beta$, cooling is possible. 
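The role of the hot-bath temperature in Eq.~\eqref{eq: eff gap} can also be illustrated with a short numerical sketch. The following snippet is purely illustrative (all parameter values are arbitrary choices and not taken from the analysis above): it evaluates the effective gap $\omega^{\textup{eff}}_n$ for several values of $\beta_{\raisebox{-1pt}{\tiny{$H$}}}$, confirming that it interpolates between the full machine gap (for $\beta_{\raisebox{-1pt}{\tiny{$H$}}} \to 0$) and the bare target gap (for $\beta_{\raisebox{-1pt}{\tiny{$H$}}} = \beta$, where no cooling is possible).
\begin{verbatim}
# Illustrative check of the effective virtual-qubit gap in the incoherent setting,
#   omega_eff = omega_C - (beta_H/beta)*(omega_C - omega_S),  omega_C = omega_n + eps_n,
# cf. Eq. (eq: eff gap).  Parameter values are arbitrary choices for illustration only.
import numpy as np

omega_S = 1.0          # target gap
omega_C = 1.5          # cold-machine gap omega_n + eps_n
beta = 2.0             # cold (environment) inverse temperature

for beta_H in [0.0, 0.5, 1.0, 2.0]:   # hot-bath inverse temperatures, beta_H <= beta
    omega_eff = omega_C - (beta_H / beta) * (omega_C - omega_S)
    # ratio of virtual-qubit populations p_{1_C 0_H} / p_{0_C 1_H}; cooling of a target
    # thermal at beta is only possible while this ratio is below exp(-beta*omega_S)
    ratio = np.exp(-beta * omega_C) * np.exp(beta_H * (omega_C - omega_S))
    print(f"beta_H = {beta_H:3.1f}:  omega_eff = {omega_eff:.3f},  "
          f"exp(-beta*omega_eff) = {np.exp(-beta*omega_eff):.4f},  ratio = {ratio:.4f}")

# beta_H -> 0 gives omega_eff = omega_C (best case), while beta_H = beta gives
# omega_eff = omega_S, i.e., the virtual qubit is no colder than the target and
# no cooling is possible, in line with the discussion above.
\end{verbatim}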
We first provide more detail regarding why cooling at the Landauer limit is not possible in this setting, before deriving the minimal energy cost in accordance with the Carnot-Landauer limit presented in Theorem~\ref{thm:main-landauer-incoherent}; in Appendix~\ref{app:incoherentcoolingfinitetemperature}, we provide explicit protocols that saturate this bound for any finite-temperature heat bath and arbitrary finite-dimensional systems and machines. Suppose that at some step $n$ one has the initial joint state of Eq.~\eqref{eq: initial 3qubits}, where $\epsilon_n= \gamma (\omega_n-\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})+\epsilon$ and $\omega_n=\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+n\epsilon$. Here, $\gamma$ is as in Eq.~\eqref{eq: epsilon bound}. We now wish to cool the target system to $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)$. For cooling to be possible in the incoherent setting here, we need the cold-machine subsystem to have an energy gap of at least $\omega_n + \epsilon_n$; moreover, with a finite-temperature heat bath, this energy gap is insufficient to achieve the desired transformation [see Eq.~\eqref{eq: epsilon bound}]. Based on Eq.~\eqref{eq:energy exchange INc}, we can see that nonetheless, if we calculate the \emph{hypothetical} energy change in this scenario if it were possible, we can derive a lower bound for the actual energy cost incurred. Employing Eq.~\eqref{eq:energy exch incoherent ind}, we have \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^{(n+1)}&\geq -\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega_{n}+\epsilon_n)[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}\nonumber\\ &=-\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}[(\gamma+1)\omega_{n}-\gamma \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} +\epsilon][\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}\nonumber\\ &=-\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}[(\gamma+1)\omega_{n}-\gamma \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} +\epsilon+\gamma \epsilon-\gamma \epsilon][\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}\nonumber\\ &=-(\gamma+1)\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega_{n} +\epsilon)[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}+\gamma\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}+\epsilon)[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}\nonumber\\ &= (\gamma+1)\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^{* (n+1)}+\gamma \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{* (n+1)}+\gamma\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\epsilon)[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}, \end{align} where we make use of the fact that for equally spaced Hamiltonians, the structure of the Hamiltonians on each subsystem take the same form [i.e., we can write, with slight abuse of notation, 
$H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega+\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})=H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\omega)+H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})$]. We use the star in $ \Delta E^*_{\raisebox{-1pt}{\tiny{$\mathcal{A}$}}}$ to denote the idealised energy cost [i.e., that corresponding to what would be achievable in the infinite-temperature setting; see Eq.~\eqref{eq:energy exch incoherent ind}] and the energy costs without the star to represent those for when the temperature of the heat bath is finite. The additional term $\mathrm{tr}\{{H}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\gamma\epsilon)[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_{n}+\epsilon)- \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^* (\beta, \omega_n)]\}$ vanishes for $\epsilon \to 0$. Summing up these contributions over a diverging number of steps, the heat dissipated throughout the entire protocol for cooling an initial state $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})$ to some final $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta_{\textup{max}}, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}})$ is lower bounded by \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} &= \lim_{N\to\infty} \sum_{n=1}^{N} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}^{(n+1)} \nonumber\\ &\geq (\gamma+1) \frac{1}{\beta}\,\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} +\,\gamma\,\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \notag \\ &= \frac{1}{\beta}\,\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \gamma \left( \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \frac{1}{\beta}\,\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\right). \end{align} Note that for infinite-temperature heat baths, $\gamma \to 0$ and the usual Landauer limit is recovered; in contrast, for finite-temperature heat baths, $\gamma > 0$ and there is an additional energy contribution, implying that the Landauer limit cannot be achieved. Moreover, note that the expression inside the parentheses in the second term above is always non-negative, as it is the free energy difference of the system during the cooling process. Lastly, it is straightforward to show that this lower bound is equivalent to the Carnot-Landauer limit in Eq.~\eqref{eq:carnotlandauercold}, which was derived in a protocol-independent manner as the ultimate limit in the incoherent-control setting. We now present explicit protocols that saturate this bound. 
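Before turning to those protocols, the lower bound above can be evaluated numerically for a qubit target as a simple consistency check. The following sketch is illustrative only (all parameter values are arbitrary choices): it computes the bound for several hot-bath temperatures and confirms that it reduces to the Landauer term $\frac{1}{\beta}\widetilde{\Delta} S_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ as $\gamma \to 0$ and exceeds it whenever $\gamma > 0$.
\begin{verbatim}
# Illustrative evaluation of the lower bound
#   Delta E_C >= (1/beta)*dS + gamma*(Delta E_S + (1/beta)*dS),  gamma = beta_H/(beta - beta_H),
# for a qubit target cooled from tau(beta, omega_S) to tau(beta_max, omega_S).
# All numbers below are arbitrary illustrative choices, not values from the text.
import numpy as np

def excited_pop(beta, omega):
    return np.exp(-beta * omega) / (1.0 + np.exp(-beta * omega))

def entropy(p):  # entropy of a qubit with excited-state population p (natural log)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

omega_S, beta, beta_max = 1.0, 1.0, 5.0
p0, pf = excited_pop(beta, omega_S), excited_pop(beta_max, omega_S)

dE_S = omega_S * (pf - p0)        # energy change of the target (negative for cooling)
dS   = entropy(p0) - entropy(pf)  # entropy decrease of the target, i.e. tilde-Delta S_S > 0
landauer = dS / beta              # Landauer term (1/beta) * tilde-Delta S_S

for beta_H in [0.0, 0.2, 0.5, 0.9]:
    gamma = beta_H / (beta - beta_H)
    bound = landauer + gamma * (dE_S + landauer)   # the lower bound derived above
    print(f"beta_H = {beta_H:3.1f}:  bound on Delta E_C = {bound:.4f}"
          f"   (Landauer term = {landauer:.4f})")

# The bracket dE_S + landauer is the free-energy increase of the target w.r.t. beta,
# so it is non-negative and the bound is never below the Landauer term.
\end{verbatim}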
\section{Perfect Cooling at the Carnot-Landauer Limit in the Incoherent-Control Paradigm} \label{app:incoherentcoolingfinitetemperature} The precise statement that we wish to prove regarding saturation of the Carnot-Landauer limit is the following: \begin{lem}\label{lem:qubitincoherentfinitetemp} For any $\beta^*\geq\beta>\beta_{\raisebox{-1pt}{\tiny{$H$}}}$ and $\epsilon_{1,2} > 0$, there exists a cooling protocol in the incoherent-control setting comprising a number of unitaries of finite control complexity, which, when the number of operations diverges, cools to some final inverse temperature $\beta^\prime$ that is arbitrarily close to the ideal value $\beta^*$, i.e., \begin{align} \left| \beta^{\prime} - \beta^* \right| < \epsilon_1, \end{align} with an energy cost, measured as heat drawn from the hot bath, that is arbitrarily close to the ideal Carnot-Landauer limit, i.e., \begin{align} \left| \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \eta^{-1} \widetilde{\Delta} F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} \right| < \epsilon_2, \end{align} where $\eta = 1 - \beta_{\raisebox{-1pt}{\tiny{$H$}}} / \beta$ and $\widetilde{\Delta} F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} = F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime) - F_{\raisebox{-1pt}{\tiny{$\beta$}}}(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ is the free energy difference between the initial $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ and final $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime = \tau_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(\beta^*, H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}})$ system states (w.r.t. inverse temperature $\beta$). \end{lem} We begin by presenting the diverging-time protocol that saturates the Carnot-Landauer limit when all three subsystems $\mathcal{S}, \mathcal{C}, \mathcal{H}$ are qubits. The simplicity of this special case allows us to calculate precise bounds on the number of operations required to reach any chosen error threshold. Building on this intuition, we then present the generalisation to the case where all systems are qudits. The protocols with diverging control complexity follow directly via the same line of reasoning presented in the main text. \subsection{Qubit Case} \label{app:incoherentqubit} We begin by setting some notation and intuition for the proof, before expanding on the mathematical details. \textbf{Sketch of Protocol.---}The protocol consists of the following. There are $N$ stages, each labelled by $n \in \{1,2, ... , N\}$. Each stage proceeds as follows: \begin{itemize} \item A qubit with energy gap $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta$ is taken from the cold part of the machine, and a qubit with energy gap $n \theta$ is taken from the hot part (see below). The initial state of the machine at the beginning of the $n^\textup{th}$ stage is thus $\tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} (\beta_{\raisebox{-1pt}{\tiny{$H$}}}, n\theta)$. \item The energy-preserving three-qubit unitary cycle in the $\{010,101\}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ subspace is performed [see Eq.~\eqref{eq:tripartite unitary op}], after which the cold and hot qubits are rethermalised to their respective initial temperatures. 
\item The above steps are repeated $m_n$ times. \end{itemize} \noindent The energy increment $\theta$ is defined as \begin{align}\label{eq:incrementsize} \theta &:= \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{N} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right), \end{align} while the number of repetitions within each stage is given by \begin{align}\label{eq:noofreps} m_n = \Bigg\lceil \frac{ \log (\delta) }{ \log (1 - N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} ) } \Bigg\rceil. \end{align} $\lceil \cdot \rceil$ is the ceiling function, and $N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)}$ is the sum of the initial thermal populations in the $\{01,10\}_{\mathcal{C}\mathcal{H}}$ subspace of the machine, i.e., \begin{align} N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} &:= \braket{01| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} (\beta_{\raisebox{-1pt}{\tiny{$H$}}}, n\theta) | 01} + \braket{10| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} (\beta_{\raisebox{-1pt}{\tiny{$H$}}}, n\theta) | 10}. \end{align} The parameter $\delta$ is chosen appropriately to complete the proof ($\delta = 1/N^2$ works). The intuition for the proof is as follows. We first consider how the populations of the target system changes in the \emph{idealised} protocol where $m_n \rightarrow \infty$, so that in each stage, the system reaches the virtual temperature determined by the $\mathcal{C}\mathcal{H}$ qubits. We can use this ideal setting to find expressions for the final temperature and energy cost, which serves as a baseline that we wish to attain to within arbitrary precision. We then consider the protocol as constructed above with a finite number of repetitions $m_n$ in each stage, and show that its expressions for temperature and work cost are close (w.r.t. $1/N$) to the original expressions, and by taking $N$ to be sufficiently large but still finite (i.e., in the diverging time limit), we prove that the protocol can be arbitrarily close in temperature and energy cost to the ideal values. \begin{proof} We label the population in the \emph{excited} state of the target system at the end of stage $n$ as $p_n$. Thus $p_0$ is the initial population and $p_{\raisebox{-1pt}{\tiny{$N$}}}$ is the final population in the excited level of the target system qubit, i.e., that spanned by $\ket{1}\!\bra{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. We also label by $q_n$ what the corresponding population $p_n$ would hypothetically be in the limit $m_n \rightarrow \infty$. This value can be calculated by matching the temperature of the target system qubit to the temperature of the $\{01,10\}_{\mathcal{C} \mathcal{H}}$ virtual qubit within the machine (see Appendix G in Ref.~\cite{Clivaz_2019E}). Thus $q_n$ is defined via the Gibbs ratio \begin{align} \frac{q_n}{1-q_n} &= e^{-\beta(\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n\theta)} e^{+\beta_{\raisebox{-1pt}{\tiny{$H$}}} n\theta} = e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-(\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) n \theta}. \label{eq:abstractpop} \end{align} Note that \begin{enumerate} \item $\{ p_n\},\{q_n \}$ are both monotonically decreasing sequences, as each stage cools the target qubit further. 
\item $p_n > q_n$ for all $n$, as more repetitions within each stage keep cooling the target qubit further. \end{enumerate} To keep track of the energetic resource cost, which we take here to be the total heat drawn from the hot bath, we must sum the energetic contribution from each time the hot qubit is rethermalised to $\beta_{\raisebox{-1pt}{\tiny{$H$}}}$ after the application of the three-party cycle unitary. Due to the fact that the only manner in which the population of the hot qubit changes is due to the $\{010,101\}_{\raisebox{-1pt}{\tiny{$\mathcal{S} \mathcal{C} \mathcal{H}$}}}$ exchange, it follows that any population change in the hot qubit is identical to the population change in the target system qubit. Focusing on a single stage, where the machine qubits are fixed in energy gap, the total population change in the hot qubit that must be restored by the hot bath is therefore equal to the population change in the target system throughout that stage. The heat drawn from the hot bath throughout the entire stage is therefore \begin{align} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(n)} &= \omega_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}}^{(n)} (p_{n-1} - p_n) = n \theta (p_{n-1} - p_n). \end{align} With these expressions derived, we can study the properties of the abstract protocol where the number of repetitions within each stage goes to infinity: $m_n\to\infty$. First, the final temperature asymptotically achieved here is given by finding the temperature $\widetilde{\beta}$ associated with the qubit with excited-state population $q_{\raisebox{-1pt}{\tiny{$N$}}}$ \begin{align} \frac{q_{\raisebox{-1pt}{\tiny{$N$}}}}{1-q_{\raisebox{-1pt}{\tiny{$N$}}}} = e^{-\widetilde{\beta} \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \Rightarrow \quad e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-(\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) N \theta} = e^{-\widetilde{\beta} \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \Rightarrow \quad \widetilde{\beta} = \beta^*, \end{align} where we make use of the definition of $\theta$ in Eq.~\eqref{eq:incrementsize}. We can thus identify $q_{\raisebox{-1pt}{\tiny{$N$}}} = q^*$, since it is the population associated with the ideal final temperature $\beta^*$. We also have the following expression for the total energetic cost of the ideal protocol after $N$ stages \begin{align}\label{eq:abstractcost} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^* &= \sum_{n=1}^N n \theta (q_{n-1} - q_n), \end{align} which can alternatively be expressed as \begin{align}\label{eq:leftsum} \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \sum_{n=1}^N \left[(n-1) \theta (q_{n-1} - q_n)\right] + \theta (q_0 - q_{\raisebox{-1pt}{\tiny{$N$}}}) \end{align} The sums appearing in the two alternative expressions are the left and right Riemann sums of the integral of the variable $y = n\theta$ integrated with respect to the variable $q$, i.e., \begin{align}\label{eq:freeintegral} I :=& - \int_{q_0}^{q^*} y \; \textup{d}q, \notag\\ \text{where} \quad \frac{q(y)}{1-q(y)} =& e^{-\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} e^{-(\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) y}, \end{align} from Eq.~\eqref{eq:abstractpop}. For $y>0$, $q(y)$ is monotonically decreasing and so the converse is also true, i.e., $y$ is monotonically decreasing w.r.t. $q(y)$. 
This implies that the integral is bounded by the left and right Riemann sums, so we have \begin{align} \sum_{n=1}^N (n-1) \theta (q_{n-1} - q_n) \leq I \leq \sum_{n=1}^N n \theta (q_{n-1} - q_n), \end{align} from which we can deduce that the value of $\widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$ is itself bounded both ways from Eqs.~\eqref{eq:abstractcost} and~\eqref{eq:leftsum}: \begin{align} I \leq \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leq I + \theta (q_0 - q^*). \end{align} The integral itself can be expressed in terms of the free energy of the qubit target system with respect to the environment inverse temperature $\beta$. Expressing the free energy as a function of the excited-state population $q$ and differentiating w.r.t. $q$ gives \begin{align} F(q) &= \braket{E}(q) - \frac{S(q)}{\beta}= q \; \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + \frac{1}{\beta} \left[ q \log (q) + (1-q) \log (1-q) \right]. \\ \frac{\partial F}{\partial q} &= \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + \frac{1}{\beta} \log \left( \frac{q}{1-q} \right) = \left( \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + \frac{1}{\beta} \left( -\beta \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} - (\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}) y \right) \right) = -\frac{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}}{\beta} y. \end{align} Using the above expression, the definite integral in Eq.~\eqref{eq:freeintegral} amounts to \begin{align} I &= \frac{1}{\eta} \left[ F(q^*) - F(q_0) \right] =: \frac{1}{\eta} \left( F^* - F_0 \right), \end{align} where we identify the Carnot efficiency $\eta =1 - \beta_{\raisebox{-1pt}{\tiny{$H$}}}/\beta$ and, for ease of notation, write $F^* := F(q^*)$ and $F_0 := F(q_0)$. Thus we can bound $\widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$ on both sides \begin{align} \frac{1}{\eta} \left( F^* - F_0 \right) \leq \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &\leq \frac{1}{\eta} \left( F^* - F_0 \right) + \theta (q_0 - q^*) \leq \frac{1}{\eta} \left( F^* - F_0 \right) + \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{N} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right), \label{eq:abstractcostdiff} \end{align} where the inequality in the second line follows from the fact that $\{q_n\}$ forms a decreasing sequence. We now proceed to consider the cooling protocol with a finite number of repetitions $m_n$ within each stage. We first bound the difference between $p_n$ and $q_n$. Using the properties of the exchange unitary under repetitions~\cite{Silva_2016,Clivaz_2019E} (in particular, see Appendix G in Ref.~\cite{Clivaz_2019E}), we have that in each stage \begin{align}\label{eq: swapreps} \frac{p_n - q_n}{p_{n-1}-q_n} &= \left(1 - N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} \right)^{m_n}. \end{align} Thus, the population difference to the asymptotically achievable population given by the virtual temperature shrinks geometrically with the number of repetitions. Since $0 < N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} < 1$ (all strict inequalities), three points follow: first, the population $q_n$ can never be attained with a finite number of steps within the stage $n$; second, every repetition cools the system further by some finite amount; third, one can get arbitrarily close to $q_n$ by taking $m_n$ sufficiently large. 
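These observations are straightforward to check numerically. The following sketch is illustrative only: it uses the single-repetition update $p \mapsto p\,(1-N_{\raisebox{-1pt}{\tiny{$V$}}}) + q\, N_{\raisebox{-1pt}{\tiny{$V$}}}$, which one obtains by exchanging the populations of $\ket{010}$ and $\ket{101}$ of the product state and rethermalising the machine qubits, and compares the iterated populations with the closed form implied by Eq.~\eqref{eq: swapreps}; all parameter values are arbitrary choices.
\begin{verbatim}
# Minimal numerical check of Eq. (eq: swapreps): within one stage (fixed machine gaps),
# each repetition of the {010,101} exchange followed by rethermalisation updates the
# excited-state population of the target as  p -> p*(1 - N_V) + q*N_V,
# so that (p_m - q) = (p_0 - q)*(1 - N_V)**m.  Parameter values are illustrative only.
import numpy as np

def therm(beta, omega):                      # excited-state population of a thermal qubit
    return np.exp(-beta * omega) / (1.0 + np.exp(-beta * omega))

beta, beta_H, omega_S, n, theta = 1.0, 0.3, 1.0, 3, 0.2
pC1 = therm(beta, omega_S + n * theta)       # cold qubit, gap omega_S + n*theta
pH1 = therm(beta_H, n * theta)               # hot qubit, gap n*theta

p01 = (1 - pC1) * pH1                        # population of |0_C 1_H>
p10 = pC1 * (1 - pH1)                        # population of |1_C 0_H>
N_V = p01 + p10                              # norm of the virtual qubit
q = p10 / N_V                                # asymptotic excited population of the target

p = therm(beta, omega_S)                     # target starts thermal at beta
p0 = p
for m in range(1, 6):
    p = p * (1 - N_V) + q * N_V              # one exchange + rethermalisation
    closed_form = q + (p0 - q) * (1 - N_V) ** m
    print(f"m = {m}:  p = {p:.6f}   closed form = {closed_form:.6f}")
\end{verbatim}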
In fact, by our definition of $m_n$, we have \begin{align} \label{eq: swapreps-delta} \frac{p_n - q_n}{p_{n-1}-q_n} \leq \delta. \end{align} From this, we can prove that \begin{align}\label{eq:popdiff} p_n - q_n \leq \delta^n q_0 - \delta q_n + (1-\delta)\delta \sum_{j=1}^{n-1} \delta^{n-j-1} q_j. \end{align} The proof is by induction. For $n=0$, $p_0 = q_0$ (initial state), and for $n=1$, using Eq.~\eqref{eq: swapreps-delta} \begin{align} p_1 - q_1 &\leq \delta ( p_0 - q_1 )\notag \\ &= \delta (q_0 - q_1). \end{align} Suppose that the above statement holds true for $p_k$. Then from Eq.~\eqref{eq: swapreps-delta} \begin{align} p_{k+1} - q_{k+1} &\leq \delta (p_k - q_{k+1}) \notag \\ &= \delta (p_k - q_k + q_k - q_{k+1}) \notag\\ &\;\,\vdots \notag\\ &\leq \delta^{k+1} q_0 - \delta q_{k+1} + (1-\delta)\delta \sum_{j=1}^{(k+1)-1} \delta^{(k+1)-j-1} q_j. \end{align} With this result, we can now bound the difference between the energy cost of this finite-repetition protocol and that of the idealised one. We now proceed to prove that \begin{align}\label{eq:generaldiffhypothesis} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \sum_{n=1}^N n\theta (p_{n-1} - p_n) - \sum_{n=1}^N n\theta (q_{n-1} - q_n) \leq \theta \left( q_0 \sum_{j=1}^{N-1} \delta^{N-j} - \sum_{j=1}^{N-1} \delta^{N-j} q_j \right). \end{align} We again use proof by induction. First note that we can rewrite \begin{align} \sum_{n=1}^N n\theta (f_{n-1} - f_n) &= \theta \left( \sum_{n=1}^N f_{n-1} \right) - N \theta f_{\raisebox{-1pt}{\tiny{$N$}}}, \end{align} for $f_n \in \{ p_n, q_n\}$. Therefore, we can rewrite the difference \begin{align} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \theta \sum_{n=1}^N \left(p_{n-1} - q_{n-1} \right) - N \theta (p_{\raisebox{-1pt}{\tiny{$N$}}} - q_{\raisebox{-1pt}{\tiny{$N$}}}) \leq \theta \left( \sum_{n=1}^N (p_{n-1} - q_{n-1}) \right), \end{align} since the last subtracted term is always strictly positive. Consider now the partial sum \begin{align} \mathcal{E}_k &= \sum_{n=1}^k \left(p_{n-1} - q_{n-1} \right). \end{align} For $k=1$, $\mathcal{E}_1 = 0$, since $p_0 = q_0$. For $k=2$, we have \begin{align} \mathcal{E}_2 &= (p_1 - q_1) \leq \delta (q_0 - q_1) = \left( q_0 \sum_{j=1}^{1} \delta^{2-j} - \sum_{j=1}^{1} \delta^{2-j} q_j \right), \end{align} which matches the hypothesis of Eq.~\eqref{eq:generaldiffhypothesis}. Assuming that the same holds true for $\mathcal{E}_k$, then for $\mathcal{E}_{k+1}$, we have \begin{align} \mathcal{E}_{k+1} &= \mathcal{E}_k + (p_k - q_k)\notag \\ &\leq \left( q_0 \sum_{j=1}^{k-1} \delta^{k-j} - \sum_{j=1}^{k-1} \delta^{k-j} q_j \right) + \left( \delta^k q_0 + (1-\delta)\delta \sum_{j=1}^{k-1} \delta^{k-j-1} q_j - \delta q_k \right) \notag \\ &\;\vdots \notag \\ &= q_0\sum_{j=1}^k \delta^{k+1-j} - \sum_{j=1}^{k} \delta^{k+1-j} q_j. 
\end{align} Then, by dropping the second sum, which is a strictly positive quantity, the difference in Eq.~\eqref{eq:generaldiffhypothesis} can be further simplified to \begin{align} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &\leq \theta q_0 \sum_{j=1}^{N-1} \delta^{N-j} = \theta q_0 \, \delta \sum_{k=0}^{N-2} \delta^k < \theta q_0 \, \delta (N-1) < \theta q_0 \, \delta N < \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right) \delta, \label{eq:costdiff} \end{align} where we use that $\delta < 1$. Finally, to upper bound the number of operations required in the protocol, we bound the number of repetitions within each stage by bounding the total population of the virtual qubit spanned by the levels $\{01,10\}_{\mathcal{C}\mathcal{H}}$ as follows: \begin{align} N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} &= \braket{01| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} (\beta_{\raisebox{-1pt}{\tiny{$H$}}}, n\theta) | 01} + \braket{10| \tau_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}(\beta, \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta) \otimes \tau_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} (\beta_{\raisebox{-1pt}{\tiny{$H$}}}, n\theta) | 10} \notag \\ &= \frac{e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} n \theta} + e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)}}{(1+e^{-\beta_{\raisebox{-1pt}{\tiny{$H$}}} n \theta})(1+e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)})} \notag \\ &> \frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)}}{4}. \notag \\ \Rightarrow \quad \log \left[ 1 - N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} \right] &< \log \left[ 1 - \frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)}}{4} \right] \notag \\ &< - \frac{e^{-\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)}}{4} \quad\quad\quad \text{if $x \in (0,1) \;\Rightarrow\; \log(1-x) < -x$.} \notag \\ \Rightarrow \quad - \frac{1}{\log \left[ 1 - N_{\raisebox{-1pt}{\tiny{$V$}}}^{(n)} \right]} &< 4 e^{+\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)} \end{align} Thus we can bound the number of repetitions in each stage from Eq.~\eqref{eq:noofreps}. Noting that $\log (\delta) < 0$, we have \begin{align} m_n < 4\log \left( 1/\delta \right) e^{+\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + n \theta)} + 1. \end{align} For a crude bound, we can replace $n$ by its maximum value $N$, and sum over all the stages to find an upper bound on the total number of three-qubit exchange unitaries implemented throughout the entire protocol, which gives \begin{align} M = \sum_{n=1}^N m_n &< N \left[ 4\log \left( 1/\delta \right) e^{+\beta (\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + N \theta)} + 1 \right] = N \left[ 4\log \left( 1/\delta \right) e^{ \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} (\beta^* - \beta_{\raisebox{-1pt}{\tiny{$H$}}})/\eta} + 1 \right]. \label{eq:totalreps} \end{align} Also, note that $\lim_{\delta \rightarrow 0} p_{\raisebox{-1pt}{\tiny{$N$}}} = q_{\raisebox{-1pt}{\tiny{$N$}}} = q^*$. 
More precisely, using Eq.~\eqref{eq:popdiff}, we have \begin{align} p_{\raisebox{-1pt}{\tiny{$N$}}} - q^* &< \delta \left( \delta^{N-1} q_0 + (1-\delta) \sum_{j=1}^{N-1} \delta^{n-j-1} q_j - q_{\raisebox{-1pt}{\tiny{$N$}}} \right)\notag \\ &< \delta \left( 1 + (1-\delta)(N-1) \right) < \delta N. \end{align} In summary, we have the following bounds on the protocol in which each stage consists of a finite number of steps \begin{align} p_{\raisebox{-1pt}{\tiny{$N$}}} - q^* &< \delta N \notag\\ \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &< \frac{1}{\eta} \left( F^* - F_0 \right) + \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right) \left( \frac{1}{N} + \delta \right), \end{align} where we combine Eqs.~\eqref{eq:abstractcostdiff} and \eqref{eq:costdiff} for the second expression. For simplicity, we choose $\delta = 1/N^2$, so that \begin{align} p_{\raisebox{-1pt}{\tiny{$N$}}} - q^* &< \frac{1}{N} \notag\\ \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &< \frac{1}{\eta} \left( F^* - F_0 \right) + \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right) \left( \frac{2}{N} \right). \end{align} Thus, given any final temperature (encoded by the population $q^*$), and allowed errors $\epsilon_1$ and $\epsilon_2$ for the final population and energy cost respectively, one can always choose $N$ large enough so that both quantities are within the error threshold. Specifically, choosing $N$ as \begin{align} N &= \Bigg\lceil \textup{max} \left\{ \epsilon_1^{-1}, 2 \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right) \epsilon_2^{-1} \right\} \Bigg\rceil, \end{align} we automatically have that $p_{\raisebox{-1pt}{\tiny{$N$}}} - q^* <\epsilon_1$ and $\Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} < (F^* - F_0)/\eta + \epsilon_2$. The total number of unitary operations (each of which is followed by rethermalisation of the machine) is then bounded by Eq.~\eqref{eq:totalreps} \begin{align} M < N \left(8 \log [N] e^{ \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} (\beta^* - \beta_{\raisebox{-1pt}{\tiny{$H$}}})/\eta} + 1 \right). \end{align} \noindent We can see from Theorem \ref{thm:landauer-incoherent} that the protocol is asymptotically optimal with respect to the energy extracted from the hot bath. \end{proof} \subsection{Qudit Case} \label{app:incoherentqudit} The extension of the proof above to the case of qudits is nontrivial. This is because, while for qubits there is only one energy-resonant subspace that leads to cooling and hence a unique protocol [see Eq.~\eqref{eq:tripartite unitary op}] that asymptotically attains perfect cooling at the Carnot-Landauer bound, this is no longer the case for higher-dimensional systems; here, there can be a number of energy-resonant subspaces that cool the target and the question of optimality hinges crucially on the complex energy-level structure of all systems involved. Hence, it is not possible to provide a unique unitary that generates the optimal protocol independently of the subsystem Hamiltonians. 
Despite this non-uniqueness, we can slightly modify the protocol for the qubit case above so that it is implemented on a number of particular three-qubit subspaces of the three-qudit global state such that, at the end of each stage, the state of the target system is arbitrarily close to the (known) state that would be achieved in an abstract protocol in the diverging-time limit. This asymptotically attainable state is precisely that which would be achieved in the coherent-control paradigm with a machine of the same dimension as the joint hot-cold qudits. Thus, we begin by presenting the necessary steps for the proof in the coherent-control setting, which we then adapt as appropriate for the incoherent-control setting. Finally, summing the energy cost of said protocol over all stages shows that it saturates the Carnot-Landauer bound, as required. \\ \begin{proof} \textbf{An idealised sequence of temperatures and system states.} We construct the incoherent protocol in the following manner. We seek to take the system through a sequence of thermal states starting at inverse temperature $\beta$ and ending at inverse temperature $\beta^*$ with $N$ equally spaced intermediary steps, i.e., \begin{align} \beta_n &= \beta + n \theta \left( \beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \right), \label{eq:idealtemperatureincoherentstagen} \\ \theta &= \frac{1}{N} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} \right), \end{align} so that $\beta_{\raisebox{-1pt}{\tiny{$N$}}} = \beta^*$ by construction. This corresponds to taking the system through the following sequence of thermal states \begin{align} \varrho^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} &= \frac{e^{-\beta_n H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}}}{\mathcal{Z}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}, \beta_n)}. \end{align} Note that, in contrast to the coherent protocol where such a sequence can be traversed by simply swapping the target system with a sequence of appropriate machines, in the incoherent setting such a protocol is generally not possible as such swaps are not energy conserving. Nonetheless, we develop a modified protocol that is energy conserving and mimics this idealised one. Corresponding to each step in the sequence, we define the following quantity, which we eventually show to be related to the heat drawn from the hot bath: \begin{align} G^{(n)} &= - n \theta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} = - n \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho^{(n)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - \varrho^{(n-1)}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \right) \right]. \end{align} We proceed to show that the total $\sum_n G^{(n)}$, which we label \textit{the idealised heat cost $\widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$}, is close to the free energy difference over the entire sequence.
We have \begin{align} \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \sum_{n=1}^N G^{(n)} \notag \\ &= \sum_{n=1}^N n \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} \right) \right] \label{eq:rightRiemannsum} \\ &= \left\{ \sum_{n=1}^N (n-1) \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} \right) \right] \right\} + \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)} \right) \right].\label{eq:leftRiemannsum} \end{align} The sums on the second and third lines above, Eqs.~\eqref{eq:rightRiemannsum} and \eqref{eq:leftRiemannsum} respectively, are the \textit{right} and \textit{left} Riemann sums corresponding to the following integral: \begin{align} I &= \int_{q_i}^{q_f} q \left( - \textup{d}x \right) = \int_{q_f}^{q_i} q \, \textup{d}x, \notag\\ \text{where} \quad n\theta &\rightarrow q, \notag\\ \quad x &= \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right], \notag \\ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) &= \frac{ e^{-[\beta + q(\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})] H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}} }{ \mathrm{tr}\left[ e^{-[\beta + q(\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})] H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}} \right]}.\label{eq:incoherentworkint} \end{align} We observe that $x$ is the average energy of the thermal state of temperature $\beta + q(\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})$, and thus $x$ and $q$ are strictly monotonically decreasing w.r.t. each other (which explains why the left and right sums are switched). It follows that the Riemann sums bound the integral \begin{align} \sum_{n=1}^N (n-1) \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} \right) \right] \leq \int_{q_f}^{q_i} q \, \textup{d}x \leq \sum_{n=1}^N n \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} \right) \right]. \end{align} We can thus bound the idealised heat cost in both directions via \begin{align} I \leq \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leq I + \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)} \right) \right]. \end{align} The integral in Eq.~\eqref{eq:incoherentworkint} can be shown to be equal to the change in free energy of the target system (w.r.t. 
inverse temperature $\beta$) \begin{align} F_{\raisebox{-1pt}{\tiny{$\beta$}}}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)] &= \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right] + \frac{1}{\beta} \mathrm{tr} \left[ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \log \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right], \notag \\ \frac{\textup{d}}{\textup{d}q} F_{\raisebox{-1pt}{\tiny{$\beta$}}}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)] &= \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \frac{\mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \log \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)}{\beta} \right) \frac{\textup{d} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)}{\textup{d}q} \right]. \end{align} Note that $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)$ and $\textup{d} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)$ are both always diagonal in $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and full rank for all $q \in \mathbb{R}$, so we have no problems with $\log \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)$, and all of the operators in the expression are well defined and commute. Proceeding, we repeatedly use $\mathrm{tr} \left[ \textup{d} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right] = \textup{d} \, \mathrm{tr} \left[ \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right] = 0$ and label the partition function $\mathcal{Z}(q) := \mathrm{tr} \left[ e^{-[\beta + q(\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})] H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}} \right]$ to obtain \begin{align} \frac{\textup{d}}{\textup{d}q} F_{\raisebox{-1pt}{\tiny{$\beta$}}}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)] &= \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} + \frac{\log \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)}{\beta} \right) \frac{\textup{d} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)}{\textup{d}q} \right] \notag \\ &= \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - \frac{ \beta + q (\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})}{\beta} H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \frac{\log \mathcal{Z}(q)}{\beta} \right) \frac{\textup{d} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q)}{\textup{d}q} \right] \notag \\ &= - q \left( 1 - \frac{\beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}}{\beta} \right) \frac{\textup{d}}{\textup{d}q} \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q) \right] = - q \eta \frac{\textup{d}x}{\textup{d}q}, \end{align} where we identify the Carnot efficiency $\eta$ for an engine operating between $\beta$ and $\beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$. The integral thus simplifies to \begin{align} I &= \eta^{-1} \left( F_{\raisebox{-1pt}{\tiny{$\beta$}}}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q_f)] - F_{\raisebox{-1pt}{\tiny{$\beta$}}}[\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(q_i)] \right) =: \eta^{-1} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)}. 
\end{align} The idealised heat cost is thus bounded by \begin{align} \eta^{-1} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} \leq \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leq \eta^{-1} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} + \theta \, \mathrm{tr} \left[H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)}\right) \right]. \end{align} The left inequality is Landauer's bound applied to cooling a target system with Hamiltonian $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ (see Theorem~\ref{thm:main-landauer-incoherent}), and the error term on the right can be bounded quite easily; for instance, for $\beta>0$, we have \begin{align} \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)}\right) \right] &= \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{min}} \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \right) \left(\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)}\right) \right] \notag \\ &\leq \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{min}} \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \right) \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)} \right] & &\text{since $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{min}} \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ is a positive operator,} \notag \\ &\leq \mathrm{tr} \left[ \left( H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{min}} \mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \right) \frac{\mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}}{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}} \right] \leq \left( \frac{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} - 1}{d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}} \right) \omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}^{\textup{max}}, \end{align} where $\omega^{\textup{max}}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} := E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{max}} - E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{\textup{min}}$ is the largest energy gap in the target system Hamiltonian and $d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ is the system dimension. We use the fact that since $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(0)}$ is a thermal state of positive temperature, its average energy is less than that of the infinite temperature thermal state, $\mathbbm{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}/d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. Since $\theta \propto 1/N$, it follows that one can always find an $N$ large enough such that the error is smaller than a given value, thereby saturating the Landauer bound. \hrulefill \textbf{A sequence of machine Hamiltonians to mimic the idealised sequence.} Next, we construct a protocol that mimics the above sequence and obeys the global energy conservation condition imposed in the incoherent-control setting. The protocol is split into $N$ stages (like above). In each stage, the Hamiltonian of the machine is fixed. The machine here comprises two parts: the ``cold'' part and the ``hot'' part.
The cold part is chosen to begin in a thermal state at inverse temperature $\beta$ of the Hamiltonian \begin{align} H_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} &= \left( 1 + n \theta \right) H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}. \end{align} At this point we note that this sequence of cold-machine states is exactly the same as in the coherent protocol, which would proceed by simply swapping the full state of target system and machine in each stage. However, that is not possible here since this is not an energy-preserving operation. To allow for energy-preserving operations, the hot part of the machine consists of $d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1)/2$ qubits, each corresponding to a pair of levels $(i,j)$ of the target system (henceforth we take $i < j$ to avoid double counting), whose energy gap is equal to the difference in energies of the target and cold qubit subspaces (hence rendering the desired exchange energy resonant) \begin{align} H^{(ij)}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \left[\omega_i + (1+n\theta) \omega_j - \left( \omega_j + (1+n\theta) \omega_i \right) \right] \ket{1}\!\bra{1}^{(ij)}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} = n\theta \left( \omega_j - \omega_i \right) \ket{1}\!\bra{1}^{(ij)}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}, \end{align} where we label the energy eigenvalues of $H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ by $\{\omega_i\}$. Each of these hot qubits begins at inverse temperature $\beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$. After every unitary operation, the cold and hot parts of the machine are rethermalised to their respective initial temperatures. To understand the choice of machine Hamiltonians, consider the following two energy eigenstates of the machine: $\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \otimes \ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$ and $\ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \otimes \ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$. The energy difference is \begin{align} \Delta^{(ij)} &= \omega_j (1 + n\theta) - \omega_i (1+n\theta) - n\theta (\omega_j - \omega_i) = \omega_j - \omega_i, \end{align} matching the energy difference between the corresponding pair of energy eigenstates of the target system. Furthermore, calculating the ratio of populations of the two levels we find \begin{align} g^{(ij)} &= \frac{ e^{-\beta \omega_j (1+n\theta)} }{ e^{-\beta \omega_i (1+n\theta)} e^{-\beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} n \theta (\omega_j - \omega_i)} } = e^{-(\omega_j - \omega_i) (\beta + n\theta (\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}))}. \end{align} This corresponds to the Gibbs ratio of a qubit at the inverse temperature $\beta + n\theta (\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})$, which is the inverse temperature that defines stage $n$ [see Eq.~\eqref{eq:idealtemperatureincoherentstagen}]. In summary, we construct a machine featuring $d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}(d_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}-1)/2$ qubit subspaces (or virtual qubits), each of the same energy gap as one pair of energy eigenstates of the system, and all of which have a Gibbs ratio (or virtual temperature) corresponding to the $n^{\text{th}}$ temperature of our desired sequence.
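As a quick consistency check of this construction, the following minimal sketch (Python; the target spectrum, inverse temperatures, and stage parameters are hypothetical, illustrative values) verifies that every $(i,j)$ pair of machine levels defined above forms a virtual qubit whose Gibbs ratio is that of the stage-$n$ inverse temperature $\beta_n = \beta + n\theta(\beta - \beta_{\mathcal{H}})$:
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) values
omega = np.array([0.0, 0.8, 1.5])      # energy eigenvalues of H_S
beta, beta_H, n, theta = 1.0, 0.2, 3, 0.05

beta_n = beta + n * theta * (beta - beta_H)
for i in range(len(omega)):
    for j in range(i + 1, len(omega)):
        # unnormalised populations of |i>_C |1>_H^(ij) and |j>_C |0>_H^(ij);
        # cold levels have energies (1 + n*theta)*omega, the hot qubit gap is n*theta*(w_j - w_i)
        gap_hot = n * theta * (omega[j] - omega[i])
        pop_i1 = np.exp(-beta * (1 + n * theta) * omega[i] - beta_H * gap_hot)
        pop_j0 = np.exp(-beta * (1 + n * theta) * omega[j])
        gibbs_ratio = pop_j0 / pop_i1
        assert np.isclose(gibbs_ratio, np.exp(-(omega[j] - omega[i]) * beta_n))
print("every (i,j) virtual qubit is at the stage-n inverse temperature", beta_n)
\end{verbatim}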
\hrulefill \textbf{A single step of the protocol: The max exchange.} Within each stage of the protocol, a single step consists of a unitary operation on $\mathcal{S}\mathcal{C}\mathcal{H}$, followed by the rethermalisation of the machine parts to their respective initial temperatures. We construct the unitary operation as follows: for every pair $(i,j)$ of system energy levels, one can calculate the absolute value of the difference in populations of the following two degenerate eigenstates $\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}\ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$ and $\ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}}\ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$. This value corresponds to the amount of population that would move under an exchange $\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)} \leftrightarrow \ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$. We then choose the pair with the largest absolute value of this difference and perform that exchange, with an identity operation applied to all other subspaces. We call this unitary operation the \emph{max exchange}. We proceed to prove two statements about the max-exchange operation. First, that the heat extracted from the hot bath is proportional to the change in average energy of the system; and second, that the system state under repetition of said operation converges to the thermal state at the inverse temperature that defines stage $n$. Consider the change in average energy of the target system under the exchange unitary. The only two populations that change are those of $\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ and $\ket{j}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$. We label the increase in the population of $\ket{i}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}$ as $\delta p$. Then, we have \begin{align} \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} &= \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^\prime - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \right) \right] = -\delta p \left( \omega_j - \omega_i \right). \end{align} On the other hand, the populations of the corresponding hot qubit (i.e., tracing out the target system and cold machine) change by the same amount, i.e., there is a move of $\delta p$ from $\ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$ to $\ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^{(ij)}$. In order to rethermalise the hot qubit, the heat drawn from the hot bath is thus \begin{align} \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} &= \delta p \; n \theta (\omega_j - \omega_i) = - n \theta \Delta E_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}. \end{align} This expression is conveniently independent of the pair $(i,j)$ and applies after an arbitrary number of repetitions of the max-exchange operation (which will use different pairs in general).
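To make this step explicit, the following minimal sketch (Python; the target spectrum and all temperatures are hypothetical, illustrative values) performs one max-exchange step on the diagonal populations, returning the updated target marginal and the heat drawn from the hot bath upon rethermalising the machine:
\begin{verbatim}
import numpy as np

def thermal(E, b):
    w = np.exp(-b * np.asarray(E, dtype=float))
    return w / w.sum()

# Illustrative (hypothetical) values
omega = np.array([0.0, 0.8, 1.5])                   # H_S eigenvalues
beta, beta_H, n, theta = 1.0, 0.2, 3, 0.05
p_S = thermal(omega, beta)                          # current target populations
cold = thermal((1 + n * theta) * omega, beta)       # cold machine part, thermal at beta

def hot(i, j):                                      # hot qubit (i,j), thermal at beta_H
    return thermal([0.0, n * theta * (omega[j] - omega[i])], beta_H)

def max_exchange(p_S):
    """Pick the pair (i,j) with the largest population imbalance between
    |i,j,0> and |j,i,1> and swap those two joint populations."""
    best, delta = None, 0.0
    for i in range(len(omega)):
        for j in range(i + 1, len(omega)):
            h = hot(i, j)
            a = p_S[i] * cold[j] * h[0]             # population of |i>_S |j>_C |0>_H
            b = p_S[j] * cold[i] * h[1]             # population of |j>_S |i>_C |1>_H
            if abs(b - a) > abs(delta):
                best, delta = (i, j), b - a
    if best is None:                                # already at the fixed point
        return p_S, 0.0
    i, j = best
    q = p_S.copy()
    q[i] += delta                                   # target marginal after the exchange
    q[j] -= delta
    heat = delta * n * theta * (omega[j] - omega[i])  # = -n*theta*Delta E_S
    return q, heat

p_S, heat = max_exchange(p_S)
print(p_S, heat)
\end{verbatim}
Iterating this step, with the machine rethermalised in between, drives the target populations towards the stage-$n$ thermal state, as argued next.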
\hrulefill \textbf{Convergence of the max-exchange protocol to the virtual temperature.} To show that the max-exchange protocol indeed converges to the desired system state in each stage of the protocol, we first prove a rather general statement: given a state $\varrho$ diagonal in the energy eigenbasis, if we exchange any qubit subspace within this system with a virtual qubit of a particular virtual temperature, then the relative entropy of the target system w.r.t. the thermal state of that (virtual) temperature decreases. To this end, consider the relative entropy of a state $\varrho$ that is diagonal in the energy eigenbasis to a thermal state $\tau$. Labelling the populations of $\varrho$ as $p_i$ and those of $\tau$ as $q_i$, this can be expressed as \begin{align} D(\varrho||\tau) &= \sum_k p_k \log \left(\frac{p_k}{q_k}\right). \end{align} We now focus on a single-qubit subspace labelled by $\{i,j\}$, which leads to \begin{align} D(\varrho||\tau) &= p_i \log \left(\frac{p_i}{q_i}\right) + p_j \log \left(\frac{p_j}{q_j}\right) + \sum_{k \notin \{i,j\}} p_k \log \left(\frac{p_k}{q_k}\right) \notag \\ &= (p_i + p_j) \left[ \frac{p_i}{p_i + p_j} \log \left( \frac{\frac{p_i}{p_i+p_j}}{\frac{q_i}{q_i+q_j}} \frac{p_i+p_j}{q_i+q_j} \right) + \frac{p_j}{p_i + p_j} \log \left( \frac{\frac{p_j}{p_i+p_j}}{\frac{q_j}{q_i+q_j}} \frac{p_i+p_j}{q_i+q_j} \right) \right] + \sum_{k \notin \{i,j\}} p_k \log \left(\frac{p_k}{q_k}\right) \notag \\ &= N \left( \bar{p}_i \log \frac{\bar{p}_i}{\bar{q}_i} + \bar{p}_j \log \frac{\bar{p}_j}{\bar{q}_j} + \log \frac{N}{N_{\raisebox{-1pt}{\tiny{$V$}}}} \right) + \sum_{k \notin \{i,j\}} p_k \log \frac{p_k}{q_k}. \end{align} In the last line, we renormalise the populations within the qubit subspace and label the total populations of the system and thermal-state qubit subspaces of interest by $N$ and $N_{\raisebox{-1pt}{\tiny{$V$}}}$, respectively. Labelling the normalised states within these subspaces as $\varrho_{\raisebox{-1pt}{\tiny{$V$}}}$ and $\tau_{\raisebox{-1pt}{\tiny{$V$}}}$ respectively, we have \begin{align} D(\varrho||\tau) &= N \left[ D(\varrho_{\raisebox{-1pt}{\tiny{$V$}}}||\tau_{\raisebox{-1pt}{\tiny{$V$}}}) + \log \left(\frac{N}{N_{\raisebox{-1pt}{\tiny{$V$}}}} \right)\right] + \sum_{k \notin \{i,j\}} p_k \log \left(\frac{p_k}{q_k}\right). \end{align} Suppose now that this qubit subspace of the target system is exchanged with a qubit subspace of any machine that has the same temperature as the thermal state above. The only object that changes in the above expression is $\varrho_{\raisebox{-1pt}{\tiny{$V$}}}$, since the norm $N$ remains the same. In addition, $\varrho_{\raisebox{-1pt}{\tiny{$V$}}}$ always gets closer to $\tau_{\raisebox{-1pt}{\tiny{$V$}}}$ under such an exchange~\cite{Silva_2016,Clivaz_2019E}, implying that the relative entropy always strictly decreases under such an operation. Returning to the max-exchange protocol, note that by construction, every virtual qubit in the machine that is exchanged with the qubit subspace $\{i,j\}$ of the target system in a given stage $n$ has the same virtual temperature, $\beta_n = \beta + n\theta (\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}})$. Thus the relative entropy of the system to the thermal state at this temperature always decreases under this operation, unless the operation does not shift any population, which happens only at the unique fixed point where every qubit subspace of the system is already at the virtual temperature $\beta_n$.
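The subspace decomposition of the relative entropy used in this argument is straightforward to verify numerically; the following minimal check (Python, with arbitrary illustrative populations) confirms the identity for a chosen pair $\{i,j\}$:
\begin{verbatim}
import numpy as np

def rel_ent(p, q):
    return float(np.sum(p * np.log(p / q)))

# Arbitrary illustrative distributions for rho (diagonal) and tau (thermal)
p = np.array([0.40, 0.35, 0.15, 0.10])
q = np.array([0.50, 0.25, 0.15, 0.10])
i, j = 0, 1

Np, Nq = p[i] + p[j], q[i] + q[j]                  # subspace populations N and N_V
pV = np.array([p[i], p[j]]) / Np                   # normalised state within the subspace
qV = np.array([q[i], q[j]]) / Nq
rest = sum(p[k] * np.log(p[k] / q[k]) for k in range(len(p)) if k not in (i, j))

lhs = rel_ent(p, q)
rhs = Np * (rel_ent(pV, qV) + np.log(Np / Nq)) + rest
assert np.isclose(lhs, rhs)
print(lhs, rhs)
\end{verbatim}
Since only the subspace term $D(\varrho_{V}||\tau_{V})$ changes under an exchange with a virtual qubit at the temperature of $\tau$, and that term cannot increase, the monotonic decrease of $D(\varrho||\tau)$ follows.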
Since the relative entropy is bounded below and strictly decreases away from this fixed point, by monotone convergence it must converge, and moreover converge to the value that it takes at the fixed point of the operation, which is the thermal state at inverse temperature $\beta_n$. Note that rather than choosing the qubit subspace with maximum population difference to exchange, we could also have picked at random from among the pairs $\{i,j\}$ and convergence would still hold; the max-exchange protocol simply ensures the fastest rate of convergence among these choices. \hrulefill \textbf{Choosing a large enough number of repetitions in each stage so that the overall heat cost is close to the idealised heat cost.} The max-exchange protocol in stage $n$ thus converges to the thermal state that we label $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}$; hence, given any error $\delta_{\raisebox{-1pt}{\tiny{$E$}}}$, we can choose a number of repetitions $m_n$ large enough that the difference between the average energy of the actual final state of this stage, which we label $\widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}$, and that of the ideal state $\varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)}$ is less than $\delta_{\raisebox{-1pt}{\tiny{$E$}}}$. In this case, the total heat cost over all stages is close to the idealised heat cost \begin{align} \left| \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^* \right| &= \left| \sum_{n=1}^N \left\{- n \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} - \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} \right) \right]\right\} - \sum_{n=1}^N \left\{- n \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n-1)} \right) \right] \right\} \right| \notag \\ &= \left| \sum_{n=0}^{N-1} \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(n)} \right) \right] - N \theta \, \mathrm{tr} \left[ H_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \left( \widetilde{\varrho}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)} - \varrho_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(N)} \right) \right] \right| \notag \\ &\leq 2 N \theta \delta_E = 2 \left(\frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}} \right) \delta_E. \end{align} The number of repetitions in each stage $m_n$ required depends only upon the initial choice of $\beta^*$ and $N$. \hrulefill \textbf{Completing the proof.} Finally, suppose that one is given any target temperature $\beta^*$ and two arbitrarily small errors, $\epsilon_{\raisebox{-1pt}{\tiny{$\beta$}}}$ for the cooling and $\epsilon_{\raisebox{-1pt}{\tiny{$E$}}}$ for the heat cost, and asked to cool incoherently in such a way that achieves \begin{align} \left| \beta^\prime - \beta^* \right| \leq \epsilon_{\raisebox{-1pt}{\tiny{$\beta$}}}, \\ \left| \widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} - \eta^{-1} \Delta F_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}}^{(\beta)} \right| \leq \epsilon_{\raisebox{-1pt}{\tiny{$E$}}}. \end{align} We proceed by first choosing a number of stages $N$ so that the idealised heat cost $\widetilde{\Delta} E_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}^*$ is within $\tfrac{\epsilon_{\raisebox{-1pt}{\tiny{$E$}}}}{2}$ of the Carnot-Landauer bound above.
The idealised sequence of temperatures satisfies $\beta_{\raisebox{-1pt}{\tiny{$N$}}} = \beta^*$ by construction. Once $N$ is fixed, for each stage from $n=1$ to $N-1$ we choose a number of repetitions $m_n$ such that the actual heat cost is within $\tfrac{\epsilon_E}{2}$ of the idealised heat cost, as discussed above. This ensures that the total heat cost is within $\epsilon_{\raisebox{-1pt}{\tiny{$E$}}}$ of the bound. Finally, we check that the number of repetitions of the last stage $m_{\raisebox{-1pt}{\tiny{$N$}}}$ is large enough for us to be within $\epsilon_{\raisebox{-1pt}{\tiny{$\beta$}}}$ of $\beta^*$. If not, we increase the number of repetitions (this can only decrease the error in the heat cost anyway) until we are close enough, as required. \end{proof} \section{Comparison of Cooling Paradigms and Resources for Imperfect Cooling} \label{app:imperfectcooling} Although we have looked at a number of cooling protocols throughout to demonstrate that perfect cooling is attainable in the asymptotic limit, here we focus on imperfect cooling behaviour, i.e., when all resources are restricted to be finite and thus a perfectly pure state cannot be attained. We have three main goals in doing so. \begin{enumerate} \item To illustrate the finite trade-offs between the trinity of resources (energy, time, control complexity). \item To compare the behaviour of different constructions of the cooling unitary for machines of the same size (i.e., analysing the energy-time trade-off for fixed control complexity). \item To demonstrate the increase in resources required for cooling in the thermodynamically self-contained paradigm of energy-preserving unitaries (i.e., incoherent control), as compared to coherently driven unitaries. \end{enumerate} \subsection{Rates of Resource Divergence for Linear Qubit Machine Sequence} \label{app:linearqubitscaling} Consider cooling a qubit target system with energy gap $\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}$ by swapping it sequentially with a sequence of $N$ machine qubits of linearly increasing energy gaps. In Appendix~\ref{app:incoherentqubit}, we derived the deviation from the idealised heat dissipation in the incoherent control setting for a sequence of $N$ machines [see Eq.~\eqref{eq:abstractcostdiff}], which we repeat below: \begin{align} \frac{1}{\eta} \left( F^* - F_0 \right) \leq \widetilde{\Delta} E^*_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leq \frac{1}{\eta} \left( F^* - F_0 \right) + \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{N} \left( \frac{\beta^* - \beta}{\beta - \beta_{\raisebox{-1pt}{\tiny{$H$}}}} \right). \end{align} We can immediately adapt this result to the paradigm of coherent control by taking $\beta_{\raisebox{-1pt}{\tiny{$H$}}} = 0$ and replacing the heat by work, which yields \begin{align} \Delta F_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} \leq W \leq \Delta F_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}} + \frac{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}}{N} \left( \frac{\beta^*}{\beta} - 1 \right). \end{align} Since the above inequalities are derived from the left and right Riemann sums of an integral, as $N$ becomes large, one can expect that $W$ lies roughly halfway between both extremes; we can thus cast the scaling in the approximate form \begin{align}\label{eq:finitescaling} \left[ \frac{W - \Delta F}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \right] N \sim \frac{1}{2} \left( \frac{\beta^*}{\beta} - 1 \right).
\end{align} Thus, we see that the relevant quantifier of the energy resource here is the extra work cost above the Landauer limit relative to the system energy. Additionally, the quantifier of how much said resource is required (per machine qubit) is $\beta^*/\beta - 1$, which, for cold enough final temperatures, is approximately the ratio $\beta^*/\beta$. Returning to the incoherent control paradigm, analysing the scaling behaviour between energy and time is more complicated. On the one hand, the expression above is only slightly modified, with the work being replaced by the heat dissipated multiplied by the Carnot factor: \begin{align}\label{eq:finitescalingincoherent} \left[ \frac{\eta \; \Delta E_{\raisebox{0pt}{\tiny{$\mathcal{H}$}}} - \Delta F}{\omega_{\raisebox{0pt}{\tiny{$\mathcal{S}$}}}} \right] N \sim \frac{1}{2} \left( \frac{\beta^*}{\beta} - 1 \right), \end{align} which is consistent with the work-to-heat efficiency of a Carnot engine. However, in the case of incoherent control, since the population swap only takes place within a subspace of the two-qubit machine, the total population is not completely exchanged in a single operation (in contrast to that in the coherent control setting). Thus the number of operations here required to transfer a desired amount of population to the ground state of the target is greater than the number of machine qubits $N$. To make a fair comparison, one could either compare the same number of machine qubits but swap repeatedly (with rethermalisation of the machine in between operations)---thereby fixing the control complexity at the expense of longer time---or one could increase the number of machine qubits and count time by the number of two-level swaps---thereby fixing time to be equal at the expense of increased control complexity overall. We investigate both methods in the coming section. \subsection{Comparison of Coherent and Incoherent Control} \label{app:comparisoncoherentincoherent} Intuitively, the incoherent control paradigm requires the utilisation of a greater amount of resources (albeit less overall control in general) than the coherent control counterpart because of two distinct disadvantages. First, the temperature of the baths plays a substantial role in cooling performance. Consider the example of a \texttt{SWAP} gate applied between a system and machine qubit: in the coherent control case, this operation transforms the target system to the state of the thermal machine qubit, characterised by the Gibbs ratio of ground-state to excited-state population. In the incoherent control case, one requires the addition of a thermal qubit from the hot bath to render said operation energy preserving; as a result, the Gibbs ratio of the virtual qubit that the target system swaps with is, in general, worse than that of the coherent control setting, and only becomes equal in the limit of an infinite temperature hot bath. This is the first disadvantage. The second disadvantage is that in the incoherent control setting, the target system swaps with only a subspace of the machine rather than the entire one, i.e., it is swapped with a virtual qubit. Thus, the exchange of population is only partial as compared to the coherent control case: in the limiting case of an infinite temperature hot bath, said factor goes to $\tfrac{1}{2}$ for all relevant two-level subspaces. 
This implies that a greater number of operations, and thus time, is required in the incoherent control paradigm in order to achieve a similar result as its coherent control counterpart. We illustrate this behaviour via the following example. The system is a degenerate qubit (beginning in the maximally mixed state), and we fix the final target ground-state population ($p=0.99$, corresponding to $\epsilon = 1-p=0.01$). Even in this simple case, the optimal finite-resource protocols with coherent and incoherent control are not known; we therefore compare protocols from each setting that make use of machines of a similar structure, namely swapping with machine qubits (virtual ones, in the incoherent control setting) of linearly increasing energy gaps. More specifically, the coherent control cooling protocol employed is that of a sequence of swaps with machine qubits of linearly increasing energy gaps, and for the fixed target population, we can calculate the surplus work cost over the Landauer limit as a function of the number $N$ of operations (which corresponds in this case to the number of machine qubits). In the incoherent control case, we take the hot bath to be at infinite temperature, allowing for the potential saturation of the Landauer limit as in the coherent case. In this way we isolate the disadvantage that arises due to working in degenerate subspaces in our analysis. Here too we take a linear sequence of energy gaps for the cold (and hot) baths, with a single operation step corresponding to a three-level energy-conserving exchange involving the qubit taken from each of the hot and cold parts of the machine, i.e., $\ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}} \leftrightarrow \ket{0}_{\raisebox{-1pt}{\tiny{$\mathcal{S}$}}} \ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{C}$}}} \ket{1}_{\raisebox{-1pt}{\tiny{$\mathcal{H}$}}}$. As mentioned previously, for an incoherent control protocol of fixed overall machine size, there are essentially two extremal methods of implementation. The first is to identify $N$ two-level subspaces of the total machine with distinct energy gaps and perform the sequence of virtual swaps between them and the target; in the language of Appendix~\ref{app:incoherentcoolingfinitetemperature}, we therefore have $N$ different stages with a single step within each stage (no repetitions) before moving on to the next stage. The second is to take $N/m$ two-level subspaces and swap the target with each virtual qubit $m$ times before moving on to the next; in other words, we here have $N/m$ different stages with $m$ steps (repetitions) within each stage. For the same fixed ground-state population, we plot the surplus work cost (energy drawn from the hot bath in the case of incoherent control) against the total machine size and number of two-level unitary swaps, as characterised by $N$, for both of these incoherent control adaptations, comparing them to the coherent control paradigm in Fig.~\ref{fig:cohvincoh}. In both control paradigms, we see that the deviation of the energy cost above the Landauer limit scales inversely with the number of operations [as expected from Eqs.~\eqref{eq:finitescaling} and~\eqref{eq:finitescalingincoherent}], but the proportionality constant is worse in the case of incoherent control. 
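As a simple numerical illustration of this inverse scaling in the coherent-control case, the following minimal sketch (Python; the gap and inverse temperatures are hypothetical, illustrative values) simulates the linear-gap swap sequence of Appendix~\ref{app:linearqubitscaling}, verifies the two-sided bound $\Delta F_{\mathcal{S}} \leq W \leq \Delta F_{\mathcal{S}} + (\omega_{\mathcal{S}}/N)(\beta^*/\beta - 1)$, and shows that $(W - \Delta F_{\mathcal{S}})N/\omega_{\mathcal{S}}$ is essentially independent of $N$:
\begin{verbatim}
import numpy as np

def f(beta, w):                          # excited-state population of a thermal qubit with gap w
    return 1.0 / (1.0 + np.exp(beta * w))

def free_energy(w, p1, beta):            # F_beta of a qubit with gap w and excited population p1
    p = np.array([1.0 - p1, p1])
    return w * p1 + np.dot(p, np.log(p)) / beta

def work_cost(omega_S, beta, beta_star, N):
    """Work cost of swapping the target sequentially with N thermal machine qubits
    whose gaps increase linearly from omega_S towards omega_S * beta_star / beta."""
    p1, W = f(beta, omega_S), 0.0
    for k in range(1, N + 1):
        w_k = omega_S * (1.0 + k * (beta_star / beta - 1.0) / N)
        q1 = f(beta, w_k)
        W += (w_k - omega_S) * (p1 - q1)   # energy change of target plus machine under the swap
        p1 = q1
    dF = free_energy(omega_S, p1, beta) - free_energy(omega_S, f(beta, omega_S), beta)
    return W, dF

omega_S, beta, beta_star = 1.0, 1.0, 3.0   # illustrative (hypothetical) values
for N in (50, 100, 200, 400):
    W, dF = work_cost(omega_S, beta, beta_star, N)
    assert dF <= W <= dF + (omega_S / N) * (beta_star / beta - 1.0)
    print(N, (W - dF) * N / omega_S)       # roughly N-independent: the 1/N scaling of the surplus
\end{verbatim}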
Moreover, the incoherent control paradigm with no repetitions within stages outperforms that with multiple repetitions, as intuitively expected since the former protocol corresponds to one for which the spacing between distinct energy gaps that are utilised is smaller, allowing us to stay closer to the reversible limit in each step. In our example, the no-repetition incoherent control protocol is around 3 times worse than the coherent control protocol, and the incoherent control protocol with $m=5$ repetitions is around 5.3 times worse, implying that one would require that many times the number of operations (i.e., that much more time) to achieve the same performance with the incoherent control paradigm as with coherent control. \begin{figure} \caption{Surplus energy cost above the (Carnot-)Landauer limit as a function of the number of two-level swap operations $N$, for the coherent control protocol and for the two incoherent control implementations (no repetitions within stages, and $m=5$ repetitions per stage).} \label{fig:cohvincoh} \end{figure} \end{document}
A box of 25 chocolate candies costs $\$6$. How many dollars does it cost to buy 600 chocolate candies?

600 chocolate candies is $\frac{600}{25} = 24$ times as many candies as 25 candies. Multiplying the number of candies by 24 multiplies the cost by 24, so 600 candies costs $24\cdot 6 = \boxed{144}$ dollars.
Conditions for Multi-functionality in a Rhythm Generating Network Inspired by Turtle Scratching

Abigail C. Snyder & Jonathan E. Rubin

The Journal of Mathematical Neuroscience (JMN), volume 5, Article number: 15 (2015)

Abstract

Rhythmic behaviors such as breathing, walking, and scratching are vital to many species. Such behaviors can emerge from groups of neurons, called central pattern generators, in the absence of rhythmic inputs. In vertebrates, the identification of the cells that constitute the central pattern generator for particular rhythmic behaviors is difficult, and often, its existence has only been inferred. For example, under experimental conditions, intact turtles generate several rhythmic scratch motor patterns corresponding to non-rhythmic stimulation of different body regions. These patterns feature alternating phases of motoneuron activation that occur repeatedly, with different patterns distinguished by the relative timing and duration of activity of hip extensor, hip flexor, and knee extensor motoneurons. While the central pattern generator network responsible for these outputs has not been located, there is hope to use motoneuron recordings to deduce its properties. To this end, this work presents a model of a previously proposed central pattern generator network and analyzes its capability to produce two distinct scratch rhythms from a single neuron pool, selected by different combinations of tonic drive parameters but with fixed strengths of connections within the network. We show through simulation that the proposed network can achieve the desired multi-functionality, even though it relies on hip unit generators to recruit appropriately timed knee extensor motoneuron activity, including a delay relative to hip activation in rostral scratch. Furthermore, we develop a phase space representation, focusing on the inputs to and the intrinsic slow variable of the knee extensor motoneuron, which we use to derive sufficient conditions for the network to realize each rhythm and which illustrates the role of a saddle-node bifurcation in achieving the knee extensor delay. This framework is harnessed to consider bistability and to make predictions about the responses of the scratch rhythms to input changes for future experimental testing.

Introduction

Under experimental conditions, intact turtles are observed to generate a variety of rhythmic motor patterns corresponding to stimulation of different body regions (including caudal scratch, rostral scratch, pocket scratch, and forward swim; see Fig. 1) [1]. All of these patterns feature alternating phases of motoneuron activation that occur repeatedly, while different patterns are distinguished by the relative timing and duration of activity of hip extensor motoneurons, hip flexor motoneurons, and knee extensor motoneurons. Notably, these stable, rhythmic behaviors arise in the absence of rhythmic stimulation, suggesting that a central pattern generator (CPG) may be responsible. Spinalized turtles, in which motor pathways from higher brain areas have been cut, display corresponding fictive behaviors in response to the same forms of stimulation, which suggests that necessary components for rhythm generation are present in the brain stem and spinal cord [1–4]. However, even with restriction to these areas, the complexity of the neuronal networks in turtle has made it impractical to locate the relevant CPG neurons experimentally.

Schematic illustration of stimulation of different turtle body sites.
Illustration of how stimulation of different sites, via an electrode for swim or body surface contact for scratch, elicits different patterns of activity in motoneuron recordings from turtle. Figure source: [1] As an alternative, researchers have, on theoretical grounds, proposed structures that may represent important components or principles involved in the function of the relevant CPGs [5–9]. Computational methods offer a natural means to investigate these structures' properties and generate predictions about them that may guide future experimental investigations. In this work, we use computational methods to study a model CPG network that was previously suggested as a kernel for turtle pocket scratch (pocket) and rostral scratch (rostral) motor pattern generation [4]. Specifically, we demonstrate that a simulated version of this model can generate both of these rhythms, selected only by the relative levels of certain constant inputs, for fixed parameter values, and we derive conditions on model parameters that ensure that this dual functionality will exist. Previous theoretical work on motor pattern generation in turtles [5, 10] focused on the generation of two other turtle motor rhythms, caudal scratch and forward swim, from a variety of network architectures, testing their compatibility with several observed experimental characteristics. A common theme between those works and this one is the notion of eliciting multiple rhythms from a fixed network. Indeed, both approaches depart from the traditional unit pattern generator framework (in which there exist specific excitatory and inhibitory populations dedicated to controlling the activity of motoneurons associated with each joint, [11]). The models in the earlier paper included distinct interneurons projecting to each motoneuron (MN) involved, but these could interact directly in the rhythm generation process; furthermore, inhibition was restricted to interactions shaping the interneuron outputs, rather than impinging on MNs directly [5]. Here, we do not maintain a complete segregation of projection targets and instead show that by considering only hip-related pools of excitatory and inhibitory interneurons, each projecting to both hip and knee MNs, appropriate knee-hip timing relations can be produced. This result may seem surprising in light of past theory; however, a variety of experimental works [2–4, 12] have shown that knee extensor MNs receive temporally overlapping excitation and inhibition and that the time courses of the inputs to knee extensor MNs are similar to those of inputs to hip flexor MNs in rostral and to hip extensor MNs in pocket. Berkowitz and Stein argued that an architecture featuring excitatory and inhibitory pools of interneurons for each of hip extensor and hip flexor (with each MN population active in synchrony with its respective excitatory pool), which also project to knee extensor MNs, could be more consistent with experimental findings than other architectures [4]. The idea that different rhythm generators can control knee extensor MN timing in different rhythms also fits in with recent observations from experiments in the mouse hindlimb locomotor network, which suggest that intrinsically rhythmic interneuron modules can be flexibly recruited to drive MN pools [13]. Certainly, knee flexor motoneurons are also involved in the generation of these rhythms [9, 14, 15]. 
Hip extension, hip flexion, and knee extension are sufficient to typify the rhythms, however, and previous studies have focused on these three MN populations [1–4], so we do not consider knee flexor activity in this work. While the specific network architecture that we consider is motivated by findings from experiments in turtles, our model has a variety of features that are interesting from a mathematical point of view and that may be of use in other modeling work. Wherever possible, we use a general framework and mathematical approach to gain insight into the mechanisms underlying our key results: a single network can (in a nontrivial way) produce two distinct rhythms selected by constant input levels, the timing of activation of a neuron receiving concurrent excitation and inhibition at all times can be controlled by different inputs under different conditions, and a delay in the onset of activity of one neuron relative to another can arise robustly in a model network lacking any explicit inclusion of delay. Our general mathematical approach will allow our findings, while made in a model for turtle motor rhythm generation, to extend to other networks with fairly general features. The remainder of this paper is organized as follows. In Sect. 2, we present the details of the implemented architecture and the specific mathematical choices made to model it. Section 3 has three main parts. First, we show results of simulations that illustrate the multi-functionality of the model network (Sect. 3.1). Next, we derive a reduced slow phase space based on knee extensor motoneuron dynamics in which analysis becomes tractable and apply this framework to elucidate the fundamental mechanisms that generate the network dynamics we observe (Sect. 3.2). Finally, we harness the phase space to consider additional experimental findings and new predictions relating to bistability and to responses to changes in inputs (Sect. 3.3). The paper concludes with a discussion (Sect. 4).

A possible motor CPG architecture, differing from the traditional unit pattern generator (UPG) framework with a separate interneuron pool driving each muscle's motoneurons [11, 16], was proposed based on experimental results on turtle scratching rhythms [4] (Fig. 2, left). As has been well established, however, drawing a plausible wiring diagram for a rhythmic circuit does not allow the immediate inference of actual circuit activity patterns [17]. To explore network dynamics, we implement a simplified version of the proposed architecture, featuring a layer of interneuron pools indexed by labels \(i \in\{\mathit {IP}, \mathit {EP}, \mathit {ER}, \mathit {IR}\} \) interacting with each other and feeding forward to a layer of MNs indexed by labels \(i \in\{\mathit {HE}, \mathit {KE}, \mathit {HF}\}\) that do not interact. In lieu of an excitatory pool exciting an inhibitory sub-population that in turn inhibits or disinhibits inhibitory pools as originally proposed (e.g. EP excites a sub-population that inhibits IP and disinhibits IR, Fig. 2, left), in our model E and I pools are linked, for simplicity, via direct synaptic connections (Fig. 2, right). A variety of notation associated with this model and its dynamics will be introduced throughout the paper, which we summarize in Table 1.

Proposed (left) and implemented (right) network architectures. Solid circles correspond to inhibitory synaptic connections, open triangles (left) and dashed arrows (right) to excitatory ones.
Figure source for proposed architecture: [4]

Table 1 Variables

Based on the experimental recordings shown in Fig. 1 and the architecture in Fig. 2, the parsimonious assumptions are that HE activates in synchrony with its excitatory interneuron population EP, which activates in antiphase with the inhibitory interneuron population IP, while HF activates in synchrony with its excitatory interneuron population ER, which activates in antiphase with the inhibitory interneuron population IR. The nature of the rhythms (Fig. 1) indicates additionally that HE and HF must activate in antiphase for both rhythms, with HF activated longer in rostral and HE activated longer in pocket. It was hypothesized that KE receives inputs that are similar to those received by HF in rostral and similar to those received by HE in pocket [3]. The subsequently proposed architecture in Fig. 2, however, suggests that the inputs to KE are proportional to those to both HE and HF, which makes it less clear why KE synchronizes with HF, after some delay, in rostral and with HE in pocket (Fig. 1), which is what we seek to explain. Since we seek to assess the basic rhythm generating capabilities of the proposed architecture, we model each neuronal population in the network as a single cell, leaving issues of heterogeneity for future investigation; we nonetheless refer to each as a "population" in the remainder of the paper (cf. [6]). Inasmuch as the relevant rhythm generating neurons in turtle have not been identified, the specific currents that are central to their rhythmicity are not known. Given this situation, it makes sense to avoid overly specific assumptions about the dynamics of model components. The dynamically simple Wilson–Cowan equations were used in related previous work [5] to model forward swim and caudal scratch rhythms. However, there is a delay in the onset of knee extensor activity relative to hip extensor in caudal scratch that was not modeled in the earlier study. Since the delay of knee extensor onset in rostral scratch is one of the key features that we seek to model, and phase plane considerations suggest that the monotone nullclines of a Wilson–Cowan system cannot give a significant delay, the Wilson–Cowan framework does not appear to be appropriate for our study. As an alternative, we use a minimal Hodgkin–Huxley type model for each population. We choose an inward, slowly deinactivating persistent sodium current (\(I_{\mathit {Na}P}\)) as the primary current controlling oscillations in our model. This current has been used in previous CPG modeling studies [6, 7, 18, 19], has been observed experimentally in neurons in other CPGs [20], and is well suited to supply the voltage plateaus underlying bursts of spikes. Since past computational and mathematical work has established that certain classes of currents endow models with similar properties, this specific current choice is not critical for qualitative aspects of our model's behavior, and our results will apply immediately to networks featuring other inward, slowly deinactivating currents [18, 21]. We omit the details of actual spikes in our model, since the relative durations of active periods, not specific spiking dynamics, are the primary results that we seek to reproduce and since plateau potentials are observed in turtle motoneurons [22, 23]. As a result, we obtain an analytically tractable framework, which would not be possible from incorporation of detailed models for turtle motoneuron dynamics [23, 24].
Given these considerations, our model for each interneuron population takes the form $$ \begin{aligned} C_{m} \dot{V_{i}} &= -I_{\mathit {Na}P}(V_{i},h_{i})-I_{L}(V_{i})- \sum_{j \ne i}I_{\mathrm {syn}}(V_{i},s_{j})-I_{\mathrm {ext}}(V_{i}) \equiv F_{i}(V_{i},h_{i},\mathbf{s}), \\ \dot{h_{i}} &= \bigl(h_{\infty}(V_{i})-h_{i} \bigr)\tau_{h}(V_{i}) \equiv g_{i}(V_{i}, h_{i}), \\ \dot{s_{i}} &= \alpha(1-s_{i})s_{\infty}(V_{i})- \beta s_{i}, \end{aligned} $$ where \(V_{i}\) denotes voltage, \(h_{i}\) the inactivation of the persistent sodium current \(I_{\mathit {Na}P}\), \(s_{i}\) the fraction of the maximal synaptic conductance that is induced by the population's activity, and s the vector of s variables of all populations in the network (although the evolution of \(V_{i}\) does not depend directly on \(s_{i}\)). In the voltage equation for population i, \(I_{\mathit {Na}P}(V_{i},h_{i}) = g_{\mathit {Na}P}m_{\infty}h(V_{i}-e_{\mathit {Na}})\), \(I_{L}(V_{i}) = g_{L}(V_{i}-e_{L})\) is a leak current, \(I_{\mathrm {syn}}(V_{i},s_{j}) = g_{\mathrm {syn}}^{ij}s_{j}(V_{i}-e_{\mathrm {syn}})\) for \(e_{\mathrm {syn}} \in\{e_{\mathrm {syn}}^{\mathrm {exc}}, e_{\mathrm {syn}}^{\mathrm {inh}} \}\) denotes synaptic current induced by population j, \(I_{\mathrm {ext}}(V_{i}) = (i^{\mathrm {ext}}_{i})(V_{i}-e_{\mathrm {syn}}^{\mathrm {exc}})\) denotes excitatory synaptic current with conductance \(i^{\mathrm {ext}}_{i}\) from a source outside the network, \(m_{\infty}\), \(h_{\infty}\), and \(s_{\infty}\) are monotone sigmoidal functions given by \(x_{\infty}(v) = (1+\exp((v-x_{\mathrm {half}})/\theta _{x}))^{-1}\), \(x\in\{m,h,s\}\) with \(m_{\infty}\) and \(s_{\infty}\) increasing and \(h_{\infty}\) decreasing, and \(\tau_{h}(v) = \epsilon \cosh((v-h_{\mathrm {half}})/2\theta_{h})\) for \(0 < \epsilon\ll1\). All synaptic inputs are defined with \(g_{\mathrm {syn}}^{ij}>0\); whether a synaptic input is excitatory or inhibitory is determined by its reversal potential \(e_{\mathrm {syn}}\). Default parameter values used in simulations are listed in Table 2; values of \(i^{\mathrm {ext}}_{i}\) are varied and are discussed as they arise in our analysis. Simulations of the above system give physiologically realistic voltage ranges with the parameters used in Table 2. However, because we are interested in relative durations of activity, it is more useful to consider rescaled voltage as a representation of population activity. That is, the population activity, PA, is related to voltage, V, as follows: \(\operatorname {PA}(V) = 1 / (1+e^{ -(V+30)/{2} } ) \). This can be seen in Figs. 6, 15, and 16.

Table 2 Model parameters

With these parameter values, our model equations satisfy several structural hypotheses. We base our analytical arguments on these hypotheses, so that our results extend beyond our specific choices of model functions and parameter values.

(H1) For each population i, for all relevant synaptic inputs s, the \(V_{i}\) nullcline, \(\{ (V_{i}, h_{i}) : F_{i}(V_{i},h_{i},\mathbf{s})=0\}\), is cubic in the \((V_{i},h_{i})\) phase plane. This nullcline includes left, middle, and right branches, denoted, respectively, by \(V=V_{i,\mathrm{L}}(h,\mathbf{s})\), \(V=V_{i,\mathrm{M}}(h,\mathbf{s})\), and \(V=V_{i,\mathrm{R}}(h,\mathbf{s})\) with \(V_{i,\mathrm{L}} < V_{i,\mathrm{M}}< V_{i,\mathrm{R}}\) for each \((h,\mathbf{s})\) for which all three exist.
For our choice of model, for fixed s, \(V_{i,\mathrm{L}}\) and \(V_{i,\mathrm{R}}\) increase as a function of h and \(V_{i,\mathrm{M}}\) decreases as a function of h, so this will henceforth be assumed as well, although it is not required for our results to hold. Figure 3 illustrates these structures and those introduced in subsequent hypotheses.

Nullcline configurations for varying values of \(\theta_{h}\) (shifting the h nullcline, red) to illustrate key structures in phase space

(H2) For each population i, the \(h_{i}\) nullcline, \(\{ (V_{i},h_{i}) : g_{i}(V_{i}, h_{i}) =0 \}\), is monotone decreasing.

(H3) In the absence of synaptic coupling (\(g_{\mathrm {syn}}=0\)), each population has a unique fixed point, \(p_{i,\mathrm{R}}^{\mathrm {FP}}(0)=(V_{i,\mathrm{R}}^{\mathrm {FP}}(0),h_{i,\mathrm{R}}^{\mathrm {FP}}(0))\), on the right branch of the \(V_{i}\) nullcline for a range of input conductances, \(i^{\mathrm {ext}}_{i}\).

(H4) In the presence of coupling (\(g_{\mathrm {syn}}>0\)) and with input strength \(i^{\mathrm {ext}}_{i}\) fixed within the range we consider, the right fixed point is retained and left \(p_{i,\mathrm{L}}^{\mathrm {FP}}( \mathbf{s})=(V_{i,\mathrm{L}}^{\mathrm {FP}}(\mathbf{s}),h_{i,\mathrm{L}}^{\mathrm {FP}}(\mathbf{s}))\) and middle \(p_{i,\mathrm{M}}^{\mathrm {FP}}(\mathbf{s})=(V_{i,\mathrm{M}}^{\mathrm {FP}}(\mathbf{s}),h_{i,\mathrm{M}}^{\mathrm {FP}}( \mathbf{s}))\) fixed points are gained and lost via saddle-node bifurcations that occur for some nonzero choices of the synaptic input s (for example, see Fig. 4).

Saddle-node bifurcation for KE. The red curve is the \(h_{\mathit {KE}}\) nullcline, while the black curves are \(V_{\mathit {KE}}\) nullclines for differing combinations of synaptic input. The change between these two combinations induces a saddle-node bifurcation. We illustrate this bifurcation in the \((V_{\mathit {KE}},h_{\mathit {KE}})\) phase plane since it is critical for delaying KE activation in the rostral rhythm
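The nullcline structures in hypotheses (H1)–(H4) can be examined numerically directly from the model functions. The sketch below (Python; all parameter values shown are illustrative placeholders, not the Table 2 values) solves \(F_{i}(V,h,\mathbf{s})=0\) for \(h\) along the \(V\)-nullcline of a single population and counts its intersections with the \(h\)-nullcline \(h = h_{\infty}(V)\) as the inhibitory synaptic input is varied; tracking where the count of fixed points changes is one way to locate the saddle-node bifurcations illustrated in Fig. 4.
\begin{verbatim}
import numpy as np

# Illustrative placeholder parameters (NOT the Table 2 values).
g_nap, e_na = 5.0, 50.0
g_l, e_l = 2.0, -60.0
g_syn, e_inh = 4.0, -80.0
i_ext, e_exc = 0.2, 0.0
m_half, theta_m = -37.0, -6.0     # theta_m < 0 so that m_inf is increasing
h_half, theta_h = -40.0, 6.0      # theta_h > 0 so that h_inf is decreasing

def x_inf(v, x_half, theta_x):
    return 1.0 / (1.0 + np.exp((v - x_half) / theta_x))

def h_on_V_nullcline(v, s):
    """Solve F_i(V, h, s) = 0 for h at fixed V and inhibitory synaptic input s."""
    other = g_l * (v - e_l) + g_syn * s * (v - e_inh) + i_ext * (v - e_exc)
    return -other / (g_nap * x_inf(v, m_half, theta_m) * (v - e_na))

def n_fixed_points(s, v_grid=np.linspace(-80.0, 20.0, 4000)):
    """Count intersections of the V-nullcline with the h-nullcline h = h_inf(V)."""
    diff = h_on_V_nullcline(v_grid, s) - x_inf(v_grid, h_half, theta_h)
    return int(np.sum(np.sign(diff[:-1]) != np.sign(diff[1:])))

for s in np.linspace(0.0, 1.0, 11):
    print(f"s = {s:.1f}: {n_fixed_points(s)} fixed point(s)")
\end{verbatim}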
A key point is that hypotheses (H3) and (H4) together imply that transitions from the silent to the active phase must occur by escape. Given a mutually inhibitory pair of populations where one is active and the other is silent, the silent population may become active by reaching the jump up (left) knee of its V nullcline (i.e., left fold of its family of V nullclines, parameterized by the synaptic strength s controlled by the other population). Doing so allows it to jump to the active phase, inhibiting the other population and, for sufficiently large \(g_{\mathrm {syn}}\), relegating the other population to the silent phase. When these conditions are met, the two populations form a half-center oscillator in which switches between phases are controlled by the silent population [25, 26]. Thus, in addition to the surfaces of fixed points for each population, \(p_{i,X}^{\mathrm {FP}}(\mathbf{s})=(V_{i,X}^{\mathrm {FP}}(\mathbf{s}), h_{i,X}^{\mathrm {FP}}( \mathbf{s}))\), \(X \in\{\mathrm{L},\mathrm{M},\mathrm{R}\}\), of mathematical importance are also the surfaces of jump up and jump down V nullcline folds, or knees, for each population: \((V_{i}^{\mathrm {JU}}(\mathbf{s}),h_{i}^{\mathrm {JU}}(\mathbf{s}))\) and \((V_{i}^{\mathrm {JD}}(\mathbf{s}),h_{i}^{\mathrm {JD}}(\mathbf{s}))\). For fixed levels of external and synaptic inputs, the jump up (down) knee corresponds to a local maximum (minimum) of the \(V_{i}\) nullcline. A surface of knees is then the surface of these local extrema, parameterized by the values of the synaptic input variables, for a fixed external input strength. Based on our parameter choices (Table 2), for each i, we consider that jumps between branches of a V nullcline occur instantaneously relative to the rate of \(I_{\mathit {Na}P}\) (de)inactivation and relative to the slow decay of \(s_{i}\) (set by the small value of β) in the silent phase. Furthermore, we have performed simulations with a very steep synaptic activation function \(s_{\infty}(v)\), since \(\theta_{s}\) is quite small. Thus, for purposes of analysis, we write \(\beta= \epsilon\tilde {\beta}\), define \(\tau=\epsilon t\), and let a prime denote differentiation with respect to τ. We then extract from system (1) in the \(\epsilon\to0\) limit a fast subsystem governing jumps between phases: $$ \begin{aligned} C_{m}\dot{V_{i}} &= F_{i}(V_{i},h_{i},\mathbf{s}),\quad j \ne i, \\ \dot{h_{i}} &= 0, \\ \dot{s_{i}} &= \alpha(1-s_{i})s_{\infty}(V_{i}), \end{aligned} $$ a slow subsystem governing evolution within the silent phase: $$ \begin{aligned} h_{i}' &= g_{i}\bigl(V_{i,\mathrm{L}}(h_{i},\mathbf{s}), h_{i} \bigr), \\ s_{i}' &= -\tilde{\beta} s_{i}, \end{aligned} $$ and a slow subsystem governing evolution within the active phase $$ \begin{aligned} h_{i}' &= g_{i}\bigl(V_{i,\mathrm{R}}(h_{i},\mathbf{s}), h_{i} \bigr), \\ s_{i} & = 1. \end{aligned} $$ At any time when there is no population making a fast jump, the collection of populations evolves in a high-dimensional slow phase space with governing equations given by making an appropriate choice of either Eq. (3) or Eq. (4) for each population. Suppose we consider a collection of N interacting populations. Since \(s_{i}\) does not affect \(V_{i}\), \(h_{i}\) directly, it is useful to project the trajectory to an N-dimensional slow phase space for each population, with dimensions corresponding to that population's h variable along with the s variables for the other \(N-1\) populations. 
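As a rough illustration of the escape mechanism and of integrating the silent-phase slow subsystem, the sketch below tracks a single population that receives one slowly decaying inhibitory input: at each step it solves \(F=0\) for the left-branch voltage and stops when that branch disappears, i.e., when the jump-up knee is reached. It reuses the placeholder parameters from the first code block; the voltage window assumed to bracket the left branch, the input strength, and the step size are all assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def F_single(V, h, s_inh, g_inh, i_ext_i):
    """F for one population receiving a single inhibitory input of strength s_inh
    (the placeholder parameters from the first sketch are assumed to be in scope)."""
    return (-g_NaP * x_inf(V, m_half, theta_m) * h * (V - e_Na)
            - g_L * (V - e_L)
            - g_inh * s_inh * (V - e_syn_inh)
            - i_ext_i * (V - e_syn_exc))

def V_left_branch(h, s_inh, g_inh, i_ext_i, V_lo=-80.0, V_hi=-40.0, n=400):
    """Return the left-branch voltage V_L(h, s), or None if no root exists in the
    assumed bracketing window (i.e., the jump-up knee has been passed)."""
    Vg = np.linspace(V_lo, V_hi, n)
    Fg = F_single(Vg, h, s_inh, g_inh, i_ext_i)
    crossings = np.where(np.sign(Fg[:-1]) != np.sign(Fg[1:]))[0]
    if len(crossings) == 0:
        return None
    k = crossings[0]  # most hyperpolarized root = left branch
    return brentq(F_single, Vg[k], Vg[k + 1], args=(h, s_inh, g_inh, i_ext_i))

# Forward-Euler integration of the silent-phase dynamics restricted to the left
# branch (the slow subsystem, written in the original time variable):
#   dh/dt = (h_inf(V_L) - h) * tau_h(V_L),   ds/dt = -beta * s.
h, s, dt = 0.3, 1.0, 1.0
g_inh, i_ext_i = 4.0, 0.18
for step in range(20000):
    V_L = V_left_branch(h, s, g_inh, i_ext_i)
    if V_L is None:
        print(f"escape (jump-up knee reached) at step {step}: h = {h:.3f}, s = {s:.3f}")
        break
    h += dt * (x_inf(V_L, h_half, theta_h) - h) * tau_h(V_L)
    s += dt * (-beta * s)
else:
    print("no escape within the simulated window for these placeholder parameters")
```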
The population's jump up and jump down knees, \((V_{i}^{\mathrm {JU}}( \mathbf{s}),h_{i}^{\mathrm {JU}}(\mathbf{s}))\) and \((V_{i}^{\mathrm {JD}}(\mathbf{s}),h_{i}^{\mathrm {JD}}(\mathbf{s}))\), are then given by surfaces in its slow phase space (e.g. [27, 28]). In the singular limit, each \(s_{i}\) jumps to 1 at the instant (with respect to the slow time scale) of the jump in \(V_{i}\), hence the equation \(s_{i}=1\) in (4). In our simulations, we will be away from the singular limit and hence the maximal value of s is \(\alpha/(\alpha+\beta)\), which we will denote by \(s_{\max}\) in the analysis below. Baseline Simulation Results We simulated system (1) using XPPAUT [29] to find parameter values for which the network (Fig. 2, right) would generate a rostral scratch rhythm under one set of constant external input strengths, \(\{ i^{\mathrm {ext}}_{i} \}_{\mathcal{R}}\), and a pocket scratch rhythm under a different set of constant external input strengths, \(\{ i^{\mathrm {ext}}_{i} \}_{\mathcal{P}}\) (see Fig. 1). We required that synaptic weights, \(\{ g_{\mathrm {syn}}^{i j}\}\), were fixed at the same values for both rhythms, such that our results would represent activation of a fixed network by two different forms of stimulation, presumably representing effects of body surface stimulation in two different regions (Fig. 1). Two distinct classes of synaptic weights were implemented in the network, standard (S) and strong cross-excitation (SCE) (Fig. 5). The S class is based on the idea that a rostral-inducing stimulus should strongly recruit the excitatory ER pool responsible for driving HF and less strongly recruit the inhibitory IR pool that blocks this action, and similarly for pocket. These input levels can also be interpreted as all four interneuron populations receiving a baseline level of input, with ER, IP receiving additional input in rostral and EP, IR receiving additional input in pocket. Synaptic weights and input strengths. Two different sets of synaptic weights \(g_{\mathrm {syn}}^{i j}\) and external input strengths \(i^{\mathrm {ext}}_{i}\) used in our simulations of system (1), with units (mS) omitted. Top: "standard" weights; bottom: "strong cross-excitation" weights. Solid lines ending in circles denote inhibitory connections; dashed lines ending in arrows represent excitatory ones. Both sets of weights include certain symmetries but the activity they support is robust to asymmetric perturbations The SCE class is based on the reasoning that the entire rostral pool, including both ER and IR, should be most strongly stimulated by rostral-inducing stimuli, and similarly for pocket. We call this weight class SCE because a stronger cross-excitation from ER to IP and from EP to IR (0.8 nS versus 0.5 nS) was used to promote synchrony between these pairs of populations in this case. Here, all four interneuron populations can be viewed as receiving a baseline level of input, but with an additional input boost to the "active side". In both cases, the synaptic weights at the interneuron level (not to the MNs) are just a minimal combination that allows oscillations to occur; that is, decreasing any of the weights appreciably without changing the others to compensate leads to a loss of all oscillations. The baseline input strengths (0.17 nS in S and 0.16 nS in SCE) were chosen such that no oscillations are elicited when no interneuron populations receive an additional drive. 
The S and SCE weights are similar in the sense that they result in qualitatively similar interneuron dynamics and output from the interneurons to the MNs. This output is largely constrained by the required behavior of HF and HE: HF and HE activate in antiphase and do not receive temporally overlapping excitation and inhibition [2–4] meaning that IP must be in antiphase with EP and IR in antiphase with ER (Fig. 2, right panel). In light of these antiphase relations, it is natural for EP, IR to activate in synchrony and ER, IP to activate in synchrony. HF is activated longer than HE in rostral (Fig. 1, right panel of Fig. 2, Fig. 5), hence ER must receive more input than EP in rostral (reversed in pocket). Any synaptic weights selected must satisfy these constraints. Furthermore, as will be seen in the next section, a certain general relationship among the synaptic weights to KE must be satisfied to allow both rhythms to be elicited from the network. With the S and SCE weights, the network can generate both rostral and pocket rhythms, selected by the external input strengths \(\{ i^{\mathrm {ext}}_{i} \}\) as shown in Fig. 5; see Fig. 6 for an example simulation with the S class. Thus, we have confirmed the conjecture that the architecture illustrated in Fig. 2 is capable of such multi-functionality, suggesting its viability as a building block of circuits generating multiple output rhythms from a single set of MNs and muscles. Naturally, for both the S and the SCE weights, there is a range of each input parameter \(\{ i^{\mathrm {ext}}_{i} \}\) over which each rhythm persists. As mentioned previously, the reason that both architectures work is because they produce qualitatively similar interneuron activity patterns and corresponding outputs from the interneurons to the MNs; note that the connections from the interneurons to the MNs are weighted the same across both weight classes. The mathematical analysis done in the next section shows that sufficient changes in these interneuron-to-MN weights would cause the network to lose the desired behavior. Basic simulation results. Example relative population activity for MN populations resulting from simulation of system (1) with the S weights. MN population identified in the legend. The y-axis represents population activity as rescaled voltage, 0 indicates silent, 1 indicates active. Note that the relative timing and durations of activity in the simulation match the recordings (see Fig. 1). The SCE weights produce the desired relative timing and durations as well (not shown) Necessary Conditions for Rhythms Because hip extensor and hip flexor each only receive antiphase excitation and inhibition and maintain the same antiphase relationship with each other across both rhythms, choosing synaptic weights from the interneuron populations to HE and HF is easy. We henceforth assume that these weights and the weights within the interneuron network are fixed such that this antiphase behavior, with appropriate relative phase durations, occurs. Because KE receives temporally overlapping excitation and inhibition, synchronizes with a different hip component in each rhythm, and exhibits a delay in onset relative to its hip partner in rostral and not pocket, the synaptic weights to KE are much more constrained. We will consider dynamics in certain slow phase spaces to derive conditions on these weights that yield multi-functionality of the networks shown in Fig. 5, which generalize to any model with a qualitatively similar structure. 
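Synchrony and antiphase constraints like these can be checked directly on simulated traces. The helper below is a hypothetical utility (not part of the model) that scores how close two populations are to antiphase by measuring the fraction of time they are active simultaneously; the activity threshold is an arbitrary illustrative choice.

```python
import numpy as np

def coactivity_fraction(V_a, V_b, V_threshold=-30.0):
    """Fraction of samples in which both populations are above the activity
    threshold, relative to the samples in which at least one is active.
    Values near 0 indicate antiphase activity; values near 1 indicate synchrony."""
    active_a = np.asarray(V_a) > V_threshold
    active_b = np.asarray(V_b) > V_threshold
    both = np.logical_and(active_a, active_b).sum()
    either = np.logical_or(active_a, active_b).sum()
    return both / either if either > 0 else float("nan")

# Example: two perfectly antiphase square waves give a score of 0.
V_a = np.where(np.arange(1000) % 200 < 100, -10.0, -60.0)
V_b = np.where(np.arange(1000) % 200 < 100, -60.0, -10.0)
print(coactivity_fraction(V_a, V_b))  # 0.0
```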
Reduction of Slow Phase Space Dimension To focus on KE, we need consider only a subset of the slow variables in the model. KE receives four synaptic inputs with conductance variables \(\{ s_{\mathit {EP}}, s_{\mathit {ER}}, s_{\mathit {IP}}, s_{\mathit {IR}} \}\), which activate on the fast time scale (Eq. (2)) and decay on the slow time scale (Eq. (3)). Additionally, the inactivation of persistent sodium for KE, \(h_{\mathit {KE}}\), evolves on the same slow time scale. Therefore, there is a five-dimensional slow phase space for KE. Analyzing dynamics in this full, five-dimensional space is impractical. To reduce dimension further, we identify the interneuron pairs that activate together, \((\mathit {EP},\mathit {IR})\) and \((\mathit {ER},\mathit {IP})\), to form a single half-center oscillator and we consider a reduced model to describe KE activity, illustrated in Fig. 7. With this reduction, using \(e_{\mathrm {syn}}^{\mathrm {exc}}=0\), \(s_{\mathit {ER}}=s_{\mathit {IP}}\), and \(s_{\mathit {EP}}=s_{\mathit {IR}}\), the synaptic input for knee extensor becomes $$I_{\mathrm {syn}}^{\mathit {KE}} = s_{\mathit {ER}}\bigl[ (g_{\mathit {IP}}+g_{\mathit {ER}})V_{\mathit {KE}}-g_{\mathit {IP}}e_{\mathrm {syn}}^{\mathrm {inh}} \bigr]+s_{\mathit {EP}}\bigl[ (g_{\mathit {IR}}+g_{\mathit {EP}})V_{\mathit {KE}}-g_{\mathit {IR}}e_{\mathrm {syn}}^{\mathrm {inh}} \bigr]. $$ This step reduces our phase space from five dimensions to three, with variables \((h_{\mathit {KE}}, s_{\mathit {EP}}, s_{\mathit {ER}})\). The projection of the periodic pocket trajectory of the reduced model to \((h_{\mathit {KE}},s_{\mathit {EP}},s_{\mathit {ER}})\) space is shown in the top left of Fig. 8, along with several curves that are important for understanding KE dynamics. These plots are critical to our analysis. When ER is active, \(s_{\mathit {ER}} \approx s_{\max}\), so the corresponding part of the trajectory, color coded red, lies approximately on the \(\{ s_{\mathit {ER}} = s_{\max} \}\) plane within phase space, which is the back right face of the cube shown. Similarly, the epoch with EP active has \(s_{\mathit {EP}} \approx s_{\max}\) and yields a trajectory, color coded black, near the back left face of the cube. As an alternative to considering a three-dimensional phase space, however, it is convenient to switch between a pair of two-dimensional slow phase planes, corresponding to the back two faces in the top left of Fig. 8, as EP and ER alternate between periods of silence and activity. These are shown in the top right of Fig. 8. For example, while EP is active, \(s_{\mathit {ER}}\) evolves and the projection of the trajectory to the \((h_{\mathit {KE}},s_{\mathit {ER}})\) plane is shown as the thick black curve. Of course, even after EP switches from active to silent, the projection of the trajectory to the \((h_{\mathit {KE}},s_{\mathit {ER}})\) plane still exists; the projected trajectory segment after the switch is shown as the thin black curve. Using similar considerations for the projection to \((h_{\mathit {KE}},s_{\mathit {EP}})\), we in fact plot two copies of the full trajectory, each in its own two-dimensional phase plane, one with the trajectory shown thick while EP is active and thin while ER is active, and the other the opposite. 
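The reduced synaptic input to KE is simple enough to restate directly in code; the helper below implements the displayed expression, and the conductance values in the example call are placeholders rather than the weights used in our simulations.

```python
def I_syn_KE(V_KE, s_ER, s_EP, g_ER, g_IP, g_EP, g_IR, e_syn_inh=-80.0):
    """Reduced synaptic current onto KE with s_IP identified with s_ER and
    s_IR identified with s_EP (and e_syn_exc = 0):
    I = s_ER*[(g_IP + g_ER)*V - g_IP*e_inh] + s_EP*[(g_IR + g_EP)*V - g_IR*e_inh]."""
    return (s_ER * ((g_IP + g_ER) * V_KE - g_IP * e_syn_inh)
            + s_EP * ((g_IR + g_EP) * V_KE - g_IR * e_syn_inh))

# Example call with placeholder weights (not the values used in the paper):
print(I_syn_KE(V_KE=-50.0, s_ER=0.8, s_EP=0.1, g_ER=0.6, g_IP=0.4, g_EP=0.3, g_IR=0.5))
```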
The switch from EP active to ER active occurs abruptly when \(s_{\mathit {EP}}\) begins its slow decay from \(s_{\max}\) and \(s_{\mathit {ER}}\) increases very rapidly (instantly in the singular limit) to \(s_{\max}\), and we switch each curve from thick to thin when \(s_{\mathit {EP}}=s_{\mathit {ER}}\) occurs. Reduced module controlling knee extensor activity. Two interneuron units form a half-center oscillator, linked by mutual inhibition (thick solid lines). Each unit recruits a corresponding hip MN (thin solid lines) and supplies a hybrid excitatory and inhibitory input to KE (dot dashed lines with squares), with a single corresponding synaptic conductance variable Phase space views for the KE dynamics in the reduced module shown in Fig. 7 during the pocket rhythm. Top left: full three-dimensional slow phase space. Top right: projections onto the two two-dimensional planes where the trajectory lies. Bottom: single, combined two-dimensional representation. In all plots, black and red curves are projections of parts or all of the trajectory of a periodic pocket scratch solution, with bold black and thin red denoting times when EP is active and bold red and thin black times when ER is active. Green curves denote the fixed point curves for KE \(p_{\mathit {KE},\mathrm{R}}^{\mathrm {FP}}(\mathbf{s})\) (stable, solid), \(p_{\mathit {KE},\mathrm{M}}^{\mathrm {FP}}(\mathbf{s})\) (unstable, dashed), and \(p_{\mathit {KE},\mathrm{L}}^{\mathrm {FP}}(\mathbf{s})\) (stable, solid) (in order of increasing \(h_{\mathit {KE}}\)) while EP is active. Magenta curves denote the analogous curves of fixed points for KE while ER is active. The dark blue curve is the curve of jump down knees for KE while EP is active; cyan curves are jump down knees and jump up knees (larger \(h_{\mathit {KE}}\) values) for KE while ER is active. Finally, dashed black curves in the top right indicate points on the two projections that correspond to the same times, when the switches between the EP active phase and the ER active phase occur. Additional labeling on the top right indicates relevant structures defined above. Additional labeling on the bottom indicates key changes in activity of various populations throughout the rhythms. Gray tick marks indicate transitions from activity to silence. This labeling holds for all panels and future figures Finally, since the values over which \(s_{\mathit {ER}}\) and \(s_{\mathit {EP}}\) vary over each period are similar, both slow phase planes can be compressed to a single plot. Again, when this plot is displayed in the bottom part of Fig. 8, we show two copies of the trajectory. For the black (red) copy, \(s_{\mathrm {dynamic}}\) should be interpreted as \(s_{\mathit {ER}}\) (\(s_{\mathit {EP}}\)), with thick and thin parts as in the separate two-dimensional plots (thick black when EP is active such that \(s_{\mathit {ER}}\) decays gradually, thick red when ER is active such that \(s_{\mathit {EP}}\) decays gradually). For fixed input levels \((s_{\mathit {EP}},s_{\mathit {ER}})\), the \(V_{\mathit {KE}}\) nullcline has one or more fixed points, a jump up knee, and a jump down knee. These become two-dimensional surfaces under variation of both inputs, while fixing one input at \(s_{\max}\) selects a one-dimensional curve. In Fig. 
8, the curves of fixed points for \(s_{\mathit {EP}}=s_{\max}\) are shown in green and for \(s_{\mathit {ER}}=s_{\max}\) in magenta; both show up in the bottom plot, but it is important to keep in mind that each is only meaningful when \(s_{\mathrm {dynamic}}\) has the correct interpretation. Similarly, the curves of knees are shown in dark blue and cyan. There are two cyan curves, with smaller \(h_{\mathit {KE}}\) values for jump down knees than for jump up. There is only one dark blue curve because the curve of jump up knees is outside of the relevant range of \((h_{\mathit {KE}},s)\) values when EP is active. Scratch Trajectories and Weights of Synapses onto KE To generate pocket and rostral scratch rhythms in our model, we had to select values for synaptic connections in the model network, which remain the same for both rhythms, and strengths of external inputs to the network, which differ between the rhythms. As mentioned previously, fixing the weights of synapses to the HE and HF MNs is not particularly interesting, since the desired antiphase activation patterns for each rhythm are set at the interneuron level in the full or reduced model. For convenience, we simply choose \(g_{\mathrm {syn}}^{\mathit {HE}, \mathit {EP}}=g_{\mathrm {syn}}^{\mathit {HF}, \mathit {ER}}\) and \(g_{\mathrm {syn}}^{\mathit {HE}, \mathit {IP}}=g_{\mathrm {syn}}^{\mathit {HF}, \mathit {IR}}\). The weights of synapses onto KE are more interesting. To understand how these are constrained, we can focus on the reduced model, which maintains four distinct synaptic weights from the interneurons onto KE. With the convenient viewpoint that we have established, it is now helpful to consider the details of the trajectories for pocket scratch (Fig. 8) and rostral scratch (shown in Fig. 9 in a two-dimensional view analogous to the bottom panel of Fig. 8) for our baseline parameter choices. Recall that in the pocket rhythm, KE activates with HE, here represented by the activation of EP. When EP becomes active and the thick black part of the trajectory starts, \(h_{\mathit {KE}}\) decreases, corresponding to the trajectory being in the active phase for KE, near a right branch of the \(V_{\mathit {KE}}\) nullcline. The trajectory cannot cross the curve of jump down knees (dark blue) with \(s_{\mathrm {dynamic}}\) decreasing, because it is blocked by the green fixed point curve (which almost coincides with the dark blue one in Figs. 8 and 9). The switch of \(s_{\mathrm {dynamic}}\) from decreasing to increasing corresponds to the activation of ER (and hence HF). The rise in \(s_{\mathrm {dynamic}}\) pulls the trajectory across the curve of jump down knees of the \(V_{\mathit {KE}}\) nullcline (dark blue), terminating the active phase of KE. We then switch our view to the thick red trajectory, along which \(h_{\mathit {KE}}\) increases (and \(s_{\mathrm {dynamic}}=s_{\mathit {EP}}\) decreases), corresponding to the trajectory being in the silent phase for KE, near a left branch of the \(V_{\mathit {KE}}\) nullcline. The trajectory actually reaches the curve of jump up knees (cyan), and hence KE activates before the activation of EP and HE cause \(s_{\mathrm {dynamic}}=s_{\mathit {EP}}\) to increase. But shortly after this switch, EP itself activates, yielding a rise in \(s_{\mathrm {dynamic}}\), and we switch back to the thick black trajectory, where we started. In fact, experiments reveal a natural variability in pocket scratch patterns. 
There are many experimental examples of pocket rhythms in which knee extensor becomes active just before hip extensor, at the final moments of hip flexor activity, and indeed a mean pocket rhythm computed from experimentation has this property [30]. Hence, this result provides validation that the solution that we have obtained provides a reasonable reduced representation of a pocket rhythm. (Fig. 9 caption: Rostral slow phase plane. Trajectory for KE for rostral scratch projected to a single slow phase plane. Coloring of curves is identical to Fig. 8. Bottom: zoomed view near the saddle-node bifurcation where the fold in the magenta fixed point curve intersects the cyan jump up knee curve for \(\mathit {ER}/\mathit {HF}\) active.) In the rostral rhythm, KE activation follows that of HF, here represented by the activation of ER, with a delay. When ER becomes active, and the thick red part of the trajectory starts, KE is still in the silent phase, with a fixed point on the left branch of the \(V_{\mathit {KE}}\) nullcline (solid magenta line at the far right of Fig. 9; see especially the bottom panel of Fig. 9). As \(s_{\mathrm {dynamic}}\) decreases, the trajectory approaches the corresponding branch of fixed points, and KE cannot activate until this branch undergoes a saddle-node bifurcation (meeting the dashed fixed point branch in the figure) at the curve of jump up knees of the \(V_{\mathit {KE}}\) nullcline (lower right cyan curve; also see Fig. 4). At the bifurcation, KE activates and \(h_{\mathit {KE}}\) starts to decay, with the trajectory heading toward the magenta curve of fixed points in the left part of Fig. 9. When the activity of ER terminates, \(s_{\mathrm {dynamic}}\) increases, which pulls the trajectory through the curve of jump down knees (cyan) and hence switches KE to the silent phase. With EP now activated (thick black part of the trajectory) and KE silent, \(h_{\mathit {KE}}\) increases, but there is no curve of jump up knees available to reach over the relevant range of \((h_{\mathit {KE}},s_{\mathrm {dynamic}})\) (note the absence of a dark blue curve in the lower right of Fig. 9, analogous to its absence in Fig. 8). Thus KE remains silent until the active phase of EP ends, \(s_{\mathrm {dynamic}}\) rises, and ER activates at the transition from the thick black to the thick red part of the trajectory, where we started. From our investigations, it appears that obtaining both pocket and rostral scratch rhythms with the same set of synaptic weights through the dynamic mechanisms we have described requires certain phase plane features and timing relations, which arise in the trajectory descriptions we have provided. Classifying these in terms of particular phases of rhythms, the requirements on the trajectory projected to KE space are as follows: (i) pocket, EP active: the trajectory must not reach the curve of jump down knees as \(s_{\mathrm {dynamic}}\) decreases yet must cross it as \(s_{\mathrm {dynamic}}\) rises (Fig. 8, the red part of the trajectory does not increase through the cyan curve but the black part of the trajectory increases through the blue curve); (ii) pocket, ER active: the trajectory must reach the curve of jump up knees as \(s_{\mathrm {dynamic}}\) decreases, but only sufficiently late in the active phase of ER (Fig. 8, the red part of the trajectory reaches the right cyan curve near where it switches to black); (iii) rostral, ER active: the trajectory must follow a curve of fixed points to a saddle-node bifurcation at the curve of jump up knees, must subsequently not reach the curve of jump down knees as \(s_{\mathrm {dynamic}}\) decreases, and must cross the jump down knees as \(s_{\mathrm {dynamic}}\) rises (Fig. 9, red parts of the trajectory); (iv) rostral, EP active: the trajectory must not reach the curve of jump up knees as \(s_{\mathrm {dynamic}}\) decreases (Fig. 9, note that there is no curve of jump up knees visible while EP is active, corresponding to the black part of the trajectory). The first part of requirement (iii) is critical for imposing a delay between ER activation and KE activation. Requirement (iv) goes together with (iii); certainly no delay would be possible if the trajectory reached a curve for the activation of KE even before ER activated at all! To achieve requirements (iii) and (iv), we find that it is necessary but not sufficient for \(g_{\mathrm {syn}}^{\mathit {KE}, \mathit {EP}}\), \(g_{\mathrm {syn}}^{\mathit {KE}, \mathit {IP}}\), \(g_{\mathrm {syn}}^{\mathit {KE}, \mathit {ER}}\), \(g_{\mathrm {syn}}^{\mathit {KE}, \mathit {IR}}\) to be such that the \(\mathit {ER}/\mathit {IP}\) active pair has an overall more excitatory effect on KE than the \(\mathit {EP}/\mathit {IR}\) active pair. This means that the synaptic weights coming from ER and IR to KE must be stronger than those coming from EP and IP. Once these requirements are imposed, we find that KE also activates while ER is still active in the pocket rhythm; requirement (ii) constrains weights so that this happens as late as possible, providing a realistic pocket rhythm. This is not, however, contrary to many experimental observations. For example, Earhart et al. [30] appear to find this slight overlap. Finally, both requirements (i) and (iii) are partially trivial, since the trajectory is blocked from reaching jump down knees by the location of fixed point curves. Nonetheless, they do constrain weights to ensure that \(h_{\mathit {KE}}\) decays sufficiently during each active phase such that subsequent rises in \(s_{\mathrm {dynamic}}\) can pull the trajectory across the curves of jump down knees, transitioning KE to the silent phase along with its interneuron partner, as desired. Conditions for Rhythm Selection and Slow Phase Plane Analysis/Contraction Arguments With our synaptic weights onto KE and slow phase plane structure fixed to satisfy the requirements described in the previous subsections, for each rhythm, we now derive certain conditions on the set of inputs \(I = \{ i_{\mathit {IP}}^{\mathrm {ext}}, i_{\mathit {EP}}^{\mathrm {ext}}, i_{\mathit {ER}}^{\mathrm {ext}}, i_{\mathit {IR}}^{\mathrm {ext}} \}\), which ensure that the corresponding rhythm will be selected. Some of these conditions are necessary, while together the collection is sufficient, although we cannot rule out that there may be different necessary and sufficient conditions elsewhere in parameter space. At a minimum, it is always necessary that the inputs actually elicit oscillations, both at the interneuron and the motoneuron levels. For convenience in what follows, define \(T_{\mathrm {active}}^{j}(I)\) as the length of time for which population j is active for a given set of input parameters I as above. 
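For concreteness, \(T_{\mathrm {active}}^{j}(I)\) can be estimated from a simulated voltage trace by thresholding; the small helper below does this, with the threshold value an arbitrary illustrative choice.

```python
import numpy as np

def active_phase_durations(t, V, V_threshold=-30.0):
    """Estimate T_active for one population from a simulated trace (t, V):
    return the durations of the maximal contiguous intervals with V > V_threshold."""
    t = np.asarray(t)
    active = np.asarray(V) > V_threshold
    durations, start = [], None
    for k in range(len(t)):
        if active[k] and start is None:
            start = t[k]                      # activity onset
        elif not active[k] and start is not None:
            durations.append(t[k] - start)    # activity offset
            start = None
    return durations

# Example with a synthetic trace: a single square-wave "burst" of duration ~40 time units.
t = np.linspace(0.0, 100.0, 1001)
V = np.where((t > 30.0) & (t < 70.0), -10.0, -60.0)
print(active_phase_durations(t, V))  # approximately [40.0]
```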
Recall that we have defined a slow phase plane structure in which activation occurs by gaining access to the curve of jump up knees with ER active (as discussed in the previous subsection). For simplicity, we henceforth refer to \(s_{\mathrm {dynamic}}\) as s. We define the interval \(I_{s} = [s_{\mathit {ER}}^{\min}(I), s_{\mathrm {SN}}]\). \(s_{\mathrm {SN}}\) is defined as the value of s at which the saddle-node bifurcation of fast subsystem critical points occurs with ER active (Figs. 8 and 9), and \(s_{\mathit {ER}}^{\min}(I)\) is simply the minimum value to which s decays while EP is still active. The dependence of \(s_{\mathit {ER}}^{\min}\) on input arises because the set I determines how long EP and ER are active and hence how far s decays from \(s_{\max}\). The interval \(I_{s}\) is illustrated for a particular input set I in Fig. 8. When there is a switch between EP active and ER active, s jumps to \(s_{\max}\). (This occurs instantaneously in the singular limit, but in our simulations, such as Figs. 8 and 9, the switch occurs at some \(s^{*}< s_{\max}\). The value of \(s^{*}\) can easily be approximated as \(s^{*} \approx s_{\max}e^{-\beta t}\) where, using the differential equation for s in (1), t satisfies \(s_{\max}e^{-\beta t} = (s^{\min}(I)-s_{\max})e^{-(\alpha +\beta)t} + s_{\max}\) given the minimal value of \(s_{\mathrm {dynamic}}\) is \(s^{\min}(I)\). This equality illustrates how \(t \to0\) and hence \(s^{*} \to s_{\max}\) as \(\alpha\to\infty\), corresponding to a complete separation of time scales.) We assume that \(h_{\mathit {EP}}^{\mathrm {JD}}(s_{\max})=h_{\mathit {ER}}^{\mathrm {JD}}(s_{\max})\) and denote this h-value by \(h_{\max}\). This assumption is based on the numerical observation that the curves of right knees corresponding to EP active or ER active are quite close, which relates to the reversal of synaptic excitation at large voltages, and appear to converge at s near \(s_{\max}\). We define a second interval \(I_{h} = [ h_{\min}(I), h_{\max}]\), where \(h_{\min}(I)\) is the value of \(h_{\mathit {KE}}\) along the ER curve of jump down knees at \(s=s_{\mathit {ER}}^{\min}(I)\). This interval specifies the full set of \(h_{\mathit {KE}}\) values from which a jump down will yield a crossing of the curve of knees. The interval \(I_{h}\) is illustrated for a particular input set I in Fig. 9. Pocket Rhythm Recall the form of the pocket rhythm, illustrated in Figs. 8 and 10. Since HE is active longer than HF in this rhythm, we take \(i_{\mathit {ER}}^{\mathrm {ext}}< i_{\mathit {EP}}^{\mathrm {ext}}\), which leads to \(T_{\mathrm {active}}^{\mathit {ER}}(I)< T_{\mathrm {active}}^{\mathit {EP}}(I)\). In a successful pocket rhythm, KE activation can occur at any value of \(s_{\mathrm {dynamic}}=s_{\mathit {ER}} \in I_{s}\). The closer to \(s_{\mathit {ER}}^{\min}(I)\) that activation occurs, the less the overlap of KE and HF activations. With the above constraints and definitions, the pocket rhythm will exist for any set of inputs for which \(I_{s}\) is mapped to \(\operatorname {int}(I_{s})\) under the slow flow pieced together by appropriate selection of (3) and (4). This mapping to the \(\operatorname {int}(I_{s})\) helps ensure that requirement (ii) in the previous section is met, as we will show below. Pocket rhythm: duration and timing of MN activations in simulations (left) and experimental recordings from MNs (right). 
Recall that HF activates with ER and HE with EP By continuity, it is sufficient for the existence of a stable pocket rhythm to find conditions on I under which the endpoints \(s_{\mathrm {SN}}\) and \(s_{\mathit {ER}}^{\min}(I)\) are mapped into the interior of \(I_{s}\). We use slow phase plane arguments to do so. Fix input set I. Note that there is an ordering of the trajectories starting from the relevant part of the cyan curve of jump up knees corresponding to ER active (Fig. 8), given by \(\mathit {LK}_{I_{s}} := \{ (h_{\mathit {KE}},s) : s \in I_{s}, h_{\mathit {KE}}=h^{\mathrm {JU}}_{\mathit {ER}}(s) \}\). That is, suppose \((h_{1},s_{1}), (h_{2},s_{2}) \in \mathit {LK}_{I_{s}}\) with \(h_{1} > h_{2}\) and hence \(s_{1}>s_{2}\). Flow \((h_{1},s_{1})\) forward under (3), obtaining a trajectory \((h_{1}(t),s_{1}(t))\), until \(s_{1}(t)=s_{2}\). Similarly, denote the forward flow from \((h_{2},s_{2})\) as \((h_{2}(t),s_{2}(t))\). If \(h_{1}(t)>h_{2}\) (\(h_{1}(t)< h_{2}\)), then \(h_{1}(t+\tau)>h_{2}(\tau)\) (\(h_{1}(t+\tau)< h_{2}(\tau)\)) for all τ until \(s_{1}(t+\tau)=s_{2}(\tau )=s_{\mathit {ER}}^{\min}(I)\) and the ER active phase ends. Moreover, by continuity, all points on \(\mathit {LK}_{I_{s}}\) are ordered in this sense. Thus, the trajectory from \(\mathit {LK}_{I_{s}}\) that attains the minimal \(h(s)\) value when \(s=s_{\mathit {ER}}^{\min}(I)\) when evolved forward in time is either the one starting from \(s=s_{\mathrm {SN}}\) (corresponding to < in the statements above) or that from \(s=s_{\mathit {ER}}^{\min}(I)\) (corresponding to >). It turns out that the more interesting case, for which our argument yields one additional sufficient condition, occurs when the minimal h corresponds to the initial condition \(s=s_{\mathrm {SN}}\), with the initial value of h given by \(h_{\mathrm {SN}}:=h^{\mathrm {JU}}_{\mathit {ER}}(s_{\mathrm {SN}})\), so without loss of generality we henceforth assume that this orientation holds (Fig. 11). Useful trajectories for deriving sufficient conditions for a stable pocket rhythm. Solid black lines are flows forward from a known point. Dotted black lines represent backward flows. Left: the conditions that arise when a flow is initiated from \(s_{\mathrm {SN}}\). Right: the conditions that arise when a flow is initiated from \(s_{\mathit {ER}}^{\min}(I)\) Now, let \(T(I)=(1/\beta)\ln(s_{\mathrm {SN}}/s_{\mathit {ER}}^{\min}(I))\) denote the time for s to decay from \(s_{\mathrm {SN}}\) to \(s_{\mathit {ER}}^{\min}(I)\). Suppose we choose an initial condition such that KE activation occurs at \(s=s_{\mathrm {SN}}\) during the ER active phase. We introduce the notation \(h(a;b,c)\) to denote the \(h_{\mathit {KE}}\) value at time a for a trajectory that started at time 0 with initial condition \((h_{\mathit {KE}},s)=(b,c)\). The first sufficient condition that we include is that the resulting KE trajectory does not cross a curve of jump down knees when EP takes over from ER: $$\mbox{(P1)}\quad h_{\mathrm {SN}}^{+} := h\bigl(T(I);h_{\mathrm {SN}},s_{\mathrm {SN}} \bigr) > h_{\max}. $$ Condition (P1) ensures that the KE active phase overlaps with the active phase of EP and hence HE, as desired; in other words, \(T_{\mathrm {active}}^{\mathit {KE}}(I)>T(I)\) (labeled in Fig. 11, left). Next, we impose a condition to ensure that KE activation ends when EP activation does. This condition forces the KE trajectory with largest h value to lie in \(I_{h}\) at the end of the EP active phase. 
This trajectory has initial condition \((h_{\mathit {ER}}^{\min}(I),s_{\max})\) at the start of the EP active phase, where \(h_{\mathit {ER}}^{\min}(I):= h^{\mathrm {JU}}_{\mathit {ER}}(s_{\mathit {ER}}^{\min}(I))\), and evolves under (3) with EP active for time \(T_{\mathrm {active}}^{\mathit {EP}}(I)\) (to \(\{ s=s_{\mathit {EP}}^{\min}(I) \}\)). The condition (Fig. 11, right) is $$\mbox{(P2)}\quad h\bigl(T_{\mathrm {active}}^{\mathit {EP}}(I);h_{\mathit {ER}}^{\min}(I),s_{\max} \bigr) < h_{\max}. $$ Next, we obtain two conditions that are sufficient to ensure that the flow of \(\mathit {LK}_{I_{s}}\) yields trajectories that return to \(\operatorname {int}(\mathit {LK}_{I_{s}})\) and that do so while ER is active, but not newly active (to ensure requirement (ii) in the previous section). To state these conditions, we need to make use of the backwards flow of the endpoints \((h_{\mathrm {SN}},s_{\mathrm {SN}})\) and \((h^{\mathrm {JU}}_{\mathit {ER}}(s_{\mathit {ER}}^{\min}(I)),s_{\mathit {ER}}^{\min}(I))\) back to the line \(\{ s = s_{\max} \}\) under system (3) with ER active. Denote the h-coordinates of these intersections by \(h_{\mathrm {SN}}^{-}\) and \(h_{s_{\min}}^{-}\), respectively, with \(h_{s_{\min}}^{-} < h_{\mathrm {SN}}^{-}\) by continuity. Recall that the forward trajectory from the endpoint \((h_{\mathrm {SN}},s_{\mathrm {SN}})\) has \(h = h_{\mathrm {SN}}^{+} := h(T(I);h_{\mathrm {SN}},s_{\mathrm {SN}})\) when EP becomes active (see Condition (P1) and Fig. 11, left). With these definitions, the final sufficient conditions, which guarantee that the next KE activation occurs from \(\operatorname {int}(\mathit {LK}_{I_{s}})\), read $$\begin{aligned} &\mbox{(P3)}\quad h\bigl(T_{\mathrm {active}}^{\mathit {EP}}(I);h_{\mathit {ER}}^{\min}(I),s_{\max} \bigr) < h_{\mathrm {SN}}^{-}, \\ &\mbox{(P4)}\quad h\bigl(T_{\mathrm {active}}^{\mathit {EP}}(I);h_{\mathrm {SN}}^{+},s_{\max} \bigr) > h_{s_{\min}}^{-}. \end{aligned} $$ (P1)–(P4) are conditions on relative orderings of points in the slow phase space that may result under certain choices of I. To appreciate that when I is chosen to satisfy Conditions (P1)–(P4), together with the earlier condition that \(T_{\mathrm {active}}^{\mathit {ER}}(I) < T_{\mathrm {active}}^{\mathit {EP}}(I)\), it follows that \(\mathit {LK}_{I_{s}}\) is mapped into its own interior under the flow and there exists a stable periodic pocket rhythm, note that the time of evolution from \(s=s_{\max}\) down to \(s=s_{\mathit {ER}}^{\min}(I)\) under (3) with EP active is exactly time \(T_{\mathrm {active}}^{\mathit {EP}}(I)\). Conditions (P3)–(P4) ensure that all trajectories emanating from \(\mathit {LK}_{I_{s}}\) end up with \(h \in(h_{\min}^{-},h_{\mathrm {SN}}^{-})\) when ER first activates. From the time of ER activation, these trajectories all evolve under (3) from \(s=s_{\max}\), and Conditions (P3)–(P4) imply that they reach \(\operatorname {int}(\mathit {LK}_{I_{s}})\). In particular, they arrive with \(s>s_{\mathit {ER}}^{\min}(I)\) and hence they do so after times that are less than \(T_{\mathrm {active}}^{\mathit {ER}}(I)\), before the end of the ER active phase, as desired. Furthermore, Condition (P3) allows us to clarify what we mean by "sufficiently late" in requirement (ii) from the previous section. That is, the time KE spends in the silent phase is minimized when it activates from \((h_{\mathrm {SN}}, s_{\mathrm {SN}})\), or equivalently when it enters the silent phase at \(h=h_{\mathrm {SN}}^{-}\). We can use Eq. 
(3) to calculate the minimal time spent in the silent phase: \(t^{*} = \frac{-1}{\beta} \ln( \frac{s_{\mathrm {SN}}}{s_{\max}})\). (P3) guarantees that ER is active for at least time \(t^{*}\) before KE activates. In summary, we conclude that for a choice of synaptic weights such that our earlier assumptions on the structure of phase space are satisfied, for any choice of I such that Conditions (P1)–(P4) hold, there exists an open set of initial conditions supporting a stable, periodic pocket rhythm. Choices of weights that shrink \(s_{\mathrm {SN}}\) toward \(s_{\mathit {ER}}^{\min}(I)\), narrowing \(I_{s}\), yield less overlap between the phases when KE and HF are active at the end of the ER active phase, and hence more experimentally realistic solutions. This change can be achieved, for example, by weakening the excitation from ER to KE relative to the inhibition from IP to KE; however, making this excitation too weak will prevent KE activation entirely and destroy the rhythm. Rostral Rhythm Next, recall the form of the rostral rhythm, illustrated in Fig. 9. Since HF is active longer than HE in this rhythm, we take \(i_{\mathit {EP}}^{\mathrm {ext}}< i_{\mathit {ER}}^{\mathrm {ext}}\), which leads to \(T_{\mathrm {active}}^{\mathit {EP}}(I)< T_{\mathrm {active}}^{\mathit {ER}}(I)\). In the rostral rhythms that we seek, we assume that KE activation occurs with \(s_{\mathrm {dynamic}}=s_{\mathrm {SN}}\) with ER (and thus HF) active, in order to achieve the delay with respect to HF activation in a robust way, keeping the same synaptic weights as in the pocket case. We also require that KE activation ends at the same time as ER activation. We now use slow phase plane arguments to derive sufficient conditions for the existence of a stable rostral rhythm that meets these constraints. The trajectory for the desired rhythm should reach the curve of jump up knees with \(s=s_{\mathrm {SN}}\) and ER active and flow from there to the interval \(I_{h}\). Using our previous definitions of \(T(I)\) and \(h_{\mathrm {SN}}\), a sufficient condition to achieve this requirement is simply (Fig. 12): $$\mbox{(R1)}\quad h\bigl(T(I);h_{\mathrm {SN}},s_{\mathrm {SN}}\bigr) \in I_{h}. $$ Useful trajectories for deriving sufficient conditions for a stable rostral rhythm. The solid black line denotes the flow forward from \((h_{\mathrm {SN}},s_{\mathrm {SN}})\). Dashed black lines indicate flows forward from two points \((h_{\max}, s_{\mathrm {SN}})\) and \((h_{\min}(I),s_{\mathrm {SN}})\). The dotted black line represents a backward flow Next, it suffices to impose conditions under which the flow maps the interval \(I_{h}\) back to the curve of jump up knees where it intersects \(\{ s=s_{\mathrm {SN}} \}\) at some time after ER has already activated but while ER is still active. To derive these, it suffices to consider the trajectories generated by the forward flow from the endpoints of \(I_{h}\), namely \((h_{\min}(I),s_{\mathit {ER}}^{\min}(I))\) and \((h_{\max},s_{\mathit {ER}}^{\min}(I))\). There are two aspects to this mapping requirement. One is that all trajectories have time to reach \(\{ s=s_{\mathrm {SN}} \}\) from \(\{ s = s_{\max} \} \) (Fig. 12), a condition for which can be written in two equivalent forms using the notation we have introduced: $$\mbox{(R2)}\quad s_{\mathrm {SN}} > s_{\mathit {ER}}^{\min}(I) \quad \Leftrightarrow\quad T_{\mathrm {active}}^{\mathit {ER}}(I) > (1/\beta) \ln(s_{\max}/s_{\mathrm {SN}}). 
$$ The other aspect is that even the trajectory with minimal h value, which originates from \((h_{\min}(I),s_{\mathit {ER}}^{\min}(I))\) just before EP activates, must be able to reach \((h_{\mathrm {SN}},s_{\mathrm {SN}})\) while ER is active. This trajectory flows forward from \((h_{\min}(I),s_{\max})\) under (3) with EP active, say to \((h_{\mathit {EP}},s_{\mathit {EP}})\), and then continues forward under (3) with ER active from \((h_{\mathit {EP}},s_{\max})\) (Fig. 12). Our additional sufficient condition is therefore $$\mbox{(R3)} \quad h_{\mathit {EP}} > h_{\mathrm {SN}}^{-}, $$ where \(h_{\mathrm {SN}}^{-}\) is derived from the backwards flow of (3) with ER active as in the previous subsection. Conditions (R1)–(R3), together with the earlier condition that \(T_{\mathrm {active}}^{\mathit {EP}}(I) < T_{\mathrm {active}}^{\mathit {ER}}(I)\), are sufficient for all initial conditions within \(I_{h}\) to pass through \((h_{\mathrm {SN}},s_{\mathrm {SN}})\), in the singular limit, albeit at different times, and reach the interior of \(I_{h}\) with ER active, which guarantees a stable rostral rhythm. We observe that our strong structural requirement that KE activation occurs at a saddle-node bifurcation of fast subsystem equilibria, which ensures a robust delay of KE activation relative to ER (and hence HF) activation as seen in the rostral rhythm, makes our remaining sufficient conditions for the existence of a stable rostral rhythm milder than those we invoked to ensure the existence of a stable pocket rhythm. Key Differentiator Between Rhythms The work in this section supplies a variety of conditions on the relative positions of various trajectories such that when a set of inputs allows an appropriate collection of conditions to be satisfied, a pocket or rostral rhythm results. From this analysis and our numerical simulations, we can extract a key factor that distinguishes whether a rhythm generated by an input set is likely to be a pocket rhythm or a rostral rhythm. Given an initial condition on \(\mathit {LK}_{I_{s}}\) with ER active, inputs that lead to \(h_{\mathit {KE}}>h_{\max}\) at the termination of ER activity push the solution toward pocket; inputs that lead to \(h_{\mathit {KE}}< h_{\max}\) at the termination of ER activity push the solution toward rostral. In other words, roughly speaking, the rhythm is selected based on whether or not the KE trajectory has access to a curve of jump down knees from which to enter the silent phase at the switch from ER activity to EP activity (Fig. 13). Of course, this access depends on the time remaining with ER active after KE activates, which in turn depends on all relationships presented in the previous two subsections. Nonetheless, a numerical exploration of this timing issue can give a quick, rough idea of which solutions will be favored for a given input set, an option that would not have been obvious without our analysis. Further, this analysis provides a framework in which features can be examined thoroughly, which we harness in the next section. Key differentiator. The location of a trajectory at the end of the ER active phase, relative to \(h_{\max}\), ends up being the key separator in the slow phase plane between inputs that elicit rostral and those that elicit pocket Modeling Additional Experimental Results Experiments and Simulations with Input Switching We can test the experimental relevance of our model by trying to simulate some additional experiments that have been performed involving the rostral and pocket rhythms. 
Furthermore, now that we understand the dynamic mechanisms underlying each rhythm and the rhythm selection process, we can understand the outcomes of simulations in these scenarios. In their 1988 work seeking to further typify scratch and swim behavior, Currie and Stein [31] explored the presentation of rhythm-specific stimulation during ongoing scratch activity. For example, while the turtle was exhibiting the rostral scratch pattern (following stimulation in the rostral body region), stimulation was provided in the pocket body region, which could eventually lead to a period of blended rhythm, followed by the pocket scratch (Figs. 1 and 14). Currie and Stein 1988 experiments. Converting a rostral rhythm to a pocket rhythm. Bottom three traces show MN activity corresponding to KE, HF, and HE, respectively. Initial bouts of activity represent a rostral rhythm with large delay of KE activation relative to HF. Transient pulse stimulation of the VPP nerve (inverted triangles) eventually switches the network into a pocket rhythm. Figure source: [31] To qualitatively reproduce this experiment, we consider the result of an instantaneous switch of inputs. That is, a rostral input set, \(I_{\mathrm {rostral}}\), is given to the system. After several periods, at the end of a phase of HE activity (as in the experiment), the inputs are switched to a pocket input set, \(I_{\mathrm {pocket}}\). With both the Standard and the Strong Cross-Excitation synaptic weights, this change in inputs leads to a similar transition to pocket as seen in the experiment (Fig. 15). Our phase plane analysis makes it easy to understand the switch in dynamics. Once pocket inputs are applied, KE still reaches the SN bifurcation and activates while EP and HF are active, as in rostral. But the pocket inputs shorten \(T_{\mathrm {active}}^{\mathit {ER}}(I)\), allowing EP and hence HE to take over before \(h_{\mathit {KE}}\) decays down to \(h_{\max}\). Thus KE remains active when \(\mathit {EP}/\mathit {HE}\) activates, yielding a cycle that blends features of rostral and pocket followed by rapid convergence to a pocket rhythm. Simulation of Currie and Stein 1988 experiments. A switch from rostral inputs to pocket inputs, at the time indicated by the arrow, causes the model behavior to transition from rostral to blended output to pocket. Standard weights were used, with similar results obtained for SCE weights (not shown). Inputs: \(I_{\mathrm {rostral}}= \{i_{\mathit {IP}}=0.19, i_{\mathit {EP}}=0.17, i_{\mathit {ER}}=0.19, i_{\mathit {IR}}=0.17\}\), \(I_{\mathrm {pocket}}= \{i_{\mathit {IP}}=0.17, i_{\mathit {EP}}=0.19, i_{\mathit {ER}}=0.17, i_{\mathit {IR}}=0.19\}\) We also consider the reverse scenario of applying rostral inputs during an ongoing pocket rhythm. Interestingly, simulations of this manipulation yield different results depending on whether we use our Standard or SCE synaptic weights. In the Standard set up, interrupting pocket at the end of an HE cycle with two different input sets, each of which yields a rostral rhythm when applied to the model in a rest state, induces two qualitatively different behaviors. In one case, even with the rostral inputs, a rhythm that can be classified as pocket persists, although HF is active slightly longer than HE, unlike the prototypical pocket rhythm (Fig. 16, top). In the other case, the rostral inputs cause a switch to the rostral rhythm (Fig. 16, bottom). Pocket to rostral simulations. Applying rostral inputs during a pocket rhythm may or may not induce a switch to rostral. 
A pocket rhythm was induced using \(I_{\mathrm {pocket}}=\{i_{\mathit {IP}}=0.17, i_{\mathit {EP}}=0.19, i_{\mathit {ER}}=0.17, i_{\mathit {IR}}=0.19\}\). Inputs were switched at the time indicated by the arrows to one of two different input sets, each of which evoked rostral from rest. Top: \(I_{\mathrm {rostral}}^{1} = \{i_{\mathit {IP}}=0.19, i_{\mathit {EP}}=0.18, i_{\mathit {ER}}=0.19, i_{\mathit {IR}}=0.18\}\) maintains the pocket rhythm, and hence uncovers bistability in the system. Bottom: \(I_{\mathrm {rostral}}^{2}=\{i_{\mathit {IP}}=0.19, i_{\mathit {EP}}=0.17, i_{\mathit {ER}}=0.19, i_{\mathit {IR}}=0.17\}\) leads to switching behavior as seen in experiments [31] In the case where pocket persists, we conclude that the rostral inputs that are applied render the system bistable. These inputs are closer to \(I_{\mathrm {pocket}}\) than are other rostral inputs that do not reveal bistability. In particular, the stronger inputs to IR and EP in the former case cause an earlier switch from HF to HE, allowing pocket dynamics to be maintained. In the SCE set up, we do not observe bistability numerically across a wide range of inputs and synaptic weights that we have explored. Explanation of Bistability (and Lack Thereof) The selection between the two cases illustrated in Fig. 16 essentially comes down to a race between EP (corresponding to HE) and KE: from the activation of \(\mathit {ER}/\mathit {HF}\), does EP reach the jump up knee before \(h_{\mathit {KE}}\) is able to decay to reach \(h_{\max}\)? If EP does activate first, then the rhythm remains in pocket. If KE reaches \(h_{\max}\) first, then a switch to rostral can occur. The data used to generate Fig. 16 indicates that a decrease in \(i_{\mathit {EP}}\) promotes this switch. This idea can be investigated more closely through a series of numerical calculations of these quantities, with a few approximations motivated by the framework that the slow phase plane analysis provides. In the SCE regime, we have not observed bistability: introduction of rostral inputs during ongoing pocket switches the rhythm to rostral. Heuristically, we can see why SCE would tend to suppress bistability, based on the SCE synaptic weights (Fig. 5). For a pocket rhythm to persist despite rostral inputs, the ER active phase must remain sufficiently short that EP can activate before \(h_{\mathit {KE}}\) drops to \(h_{\max}\) (Fig. 13). Because transitions in our networks occur by escape, this requirement means that EP or IR must be able to activate before the ER stays active too long. In SCE, however, the weights of synaptic inhibition from ER to IR and excitation from ER to IP are strong, relative to the S case. These synaptic connections are exactly the ones that would suppress the activation of EP and IR and thus prolong the ER active phase, causing KE to jump down with ER and inducing a switch from pocket to rostral. The observation that some weight and input parameter sets yield bistability and others do not may be useful for making predictions. That is, if bistability is observed experimentally, then we can conservatively state that it should rule out certain parameter combinations within the underlying rhythm generating circuit, if indeed that circuit is qualitatively represented by our model. 
For example, although our simulations were not exhaustive, together with the heuristic arguments we have provided they suggest that an observation of bistability of pocket and rostral rhythms in response rostral inputs would represent evidence against SCE weights, in which both the excitatory and the inhibitory interneurons projecting to HF are more strongly recruited by rostral stimulation than are the corresponding HE-projecting interneurons. More generally, we can also observe that if a single circuit generates both pocket and rostral rhythms, then one rhythm may be more resistant to input-induced switching than the other, as we have seen by introducing rostral input during an ongoing pocket rhythm. This is an important observation: Suppose that two separate modules generated pocket and rostral rhythms. In that case, introducing a rostral input during ongoing pocket would necessarily recruit the rostral module, likely perturbing the pocket rhythm in some way that is more significant than seen in our simulations. Hence, bistability may be used to help distinguish between these possible rhythm generation frameworks (see also [5]). Additionally, we can consider the effect of scaling inputs to the interneurons. We consider what happens in the SCE regime when all four inputs are scaled by the same factor, only the E inputs (to EP, ER) are scaled by the same factor, or only the I inputs (to IP, IR) are scaled by the same factor (Fig. 17). In the first case (Fig. 17, left), we see that increasing inputs (black to gray) leads to a decrease in active phase length for both KE and the dominant IN population (namely HE in pocket and HF in rostral) with almost no change in phase duration for the other population. This result, which is consistent with the stipulation that phase transitions occur by escape and also with past work exploring asymmetries in persistent sodium half-center oscillator models [18, 26], represents a testable prediction. Next, we find that scaling only the inputs to the excitatory INs leads to almost the same changes in active phase durations as occur when all inputs are scaled (Fig. 17, left versus middle), while there is virtually no change in active phase length across different scalings of the inputs to the inhibitory INs (Fig. 17, right). These results indicate that the escape of the excitatory INs from the silent phase largely controls rhythm frequencies. In fact, we find that the external input to the inhibitory interneurons can be removed and the synchrony patterns of the rhythms (but not the delay in rostral) can be maintained (data not shown), because the excitatory INs still recruit the inhibitory populations to become active. These predictions are more difficult to test, given that these populations of interneurons have not yet been identified, but remain for future experimental consideration. Effect of input scaling on phase durations in SCE regime. The black bars represent the durations of the active phases of HE, KE, and HF when the indicated inputs are uniformly decreased by multiplication by a scaling factor less than one, just large enough to maintain each rhythm. The gray bars represent the durations of the active phases of HE, KE, and HF when the scaling factor is greater than one, near the upper bound for maintaining each rhythm We repeat this experiment with the S regime (Fig. 18) and find generally very similar results. 
However, it is worth noting that, in the S regime, the changes in active phase durations across similar scaling is much less than in the SCE regime. Additionally, there is a much greater change in active phase durations in rostral than in pocket. These differences, in addition to the bistability observed, may serve to differentiate the S regime from the SCE regime in practice. Effect of input scaling on phase durations in S regime. The black bars represent the durations of the active phases of HE, KE, and HF when the indicated inputs are uniformly decreased by multiplication by a scaling factor less than one, just large enough to maintain each rhythm. The gray bars represent the durations of the active phases of HE, KE, and HF when the scaling factor is greater than one, near the upper bound for maintaining each rhythm It has been postulated that turtle scratching and swimming arise when "behavioral modules" interact and combine to control "muscle synergies" producing appropriately coordinated motor outputs [32], but there is a large gap between such an abstract statement and concrete hypotheses about the neuronal networks involved. While a specific wiring diagram for a single circuit that could parsimoniously drive both pocket and rostral scratching has been proposed [4], it is well known that connectivity diagrams alone do not uniquely map to output patterns [17]. We have performed a computational and mathematical study to investigate whether the proposed unified CPG network, which features only hip-related populations of interneurons, could indeed be responsible for the generation of two different turtle scratch rhythms with distinct knee-hip synchrony patterns. Importantly, these patterns are selected by changing external inputs to the interneurons, with the same synaptic weights between interneurons, and from interneurons to motoneurons, preserved for both. Through the use of slow phase plane arguments, we were able to explain how particular phase space and bifurcation structures underlie the generation of the rhythms and to derive sufficient conditions on these structures that guarantee the existence of stable rhythms. This analysis was possible due to time scale decomposition and certain model reductions, despite the relative high-dimensionality of the model system; because our conditions are stated in terms of dynamic structures, they apply beyond the particular model features, such as a slowly inactivating persistent sodium current, used in our simulations. Even with model reductions, the synaptic variables evolving during each stage of each rhythm were hybrid variables, representing combined effects of excitatory and inhibitory inputs, which was one unusual aspect of our analysis. Past research has focused on several different aspects that arise in multi-functionality, including the general organizing principles governing CPGs [16, 33], and the notion that an organism exhibits different motor patterns by selecting different CPGs [34], which may be collections of burst-capable unit CPGs that each control a set of synergistic muscles [11]. Similarly, recent experimental work in mice [13] found that the hindlimb locomotor network is composed of intrinsically rhythmic modules that each drive a pool of motoneurons. 
Consistent with the unit CPG framework, the model that we consider includes separate hip extensor and hip flexor interneuron pairs (EP and IP, ER and IR); although each individual population is tonically active in the absence of inputs, each pair can generate bursts through a mechanism of escape from reciprocal inhibition, consistent with previous related work [5]. Our interneuron network includes fixed interconnections and projections both to antagonist hip interneurons and to hip motoneurons and is able to generate multiple rhythms under changes in inputs that alter the relative durations of the unit CPGs, without changing network connections. In contrast to the unit CPG idea, however, the hip interneurons also control knee extensor motoneurons in the model. Despite the multi-tasking demanded of the unit CPGs, we find that the network can generate multiple motor patterns, selected by tuning the relative strengths of their tonic inputs. That networks of unit CPGs can be influenced to demonstrate different activity patterns is not surprising, given the wide variety of activity patterns that can be elicited from a single neuron [35, 36], but the idea that CPGs for one unit can also be harnessed to control the timing of another joint is relatively novel. Although this idea makes sense in terms of efficient use of neuronal resources, evolutionary principles, and the observation that individual interneurons are active during multiple forms of activity [1], it remains to be determined whether this framework offers enough robustness for functional rhythm generation. A distinctive feature of one of the rhythms considered, the rostral scratch, is a delay in the onset of KE motoneuron activity relative to HF onset. While synchronization ([27, 37]) and near-synchronization [38] in networks of planar neuron models with strong synaptic coupling has been well studied, the delay we consider appears to be novel. This delay significantly restricts the choices of synaptic weights to KE for which both rhythms can be elicited. The resulting phase plane structure leads us to observe that, given that the sufficient conditions on synaptic weights hold, the rhythm selected by a particular input set is largely determined by the position of the slow variable coordinate of a particular trajectory segment relative to a key value \(h_{\max}\) at the termination of ER activity (Fig. 13). Unfortunately, from an experimental point of view, the specific rhythm generation conditions in our model are not accessible for many reasons, starting with the fact that the interneuron populations in the CPG have not been identified. However, our analysis yields the observation that in the framework we have considered, the KE motoneuron must activate slightly before the HE during the pocket rhythm, and this is exactly what is observed experimentally [30], which offers some validation for our approach. Furthermore, simulation of the model can help guide future experiments. In particular, the model network can exhibit bistability to rostral scratch inputs for some of the parameter values considered, which seems unlikely to arise with separate pocket and rostral generation modules (see also [5]). Thus, future experiments to explore this form of bistability could be useful. The slow phase plane approach that we have presented provides a framework that can be used to make predictions about specific experiments and to explain the mechanisms underlying observed outcomes. 
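To make the escape mechanism discussed above concrete, the following minimal sketch (in Python) simulates a generic two-unit half-center in which each unit is silenced by cross-inhibition and turns back on only when its own slow recovery variable grows enough for it to escape, thereby shutting the other unit off. This is not the model analyzed here: the equations, the function names (half_center, active_durations), and all parameter values (drive, w_inh, g_h, the time constants) are hypothetical choices made only so that the toy network oscillates. Comparing runs with different drive values gives a crude analogue of the input-scaling experiments summarized in Figs. 17 and 18.

import numpy as np

def sigmoid(x, k=20.0):
    # steep activation function, so each unit is effectively on or off
    return 1.0 / (1.0 + np.exp(-k * x))

def half_center(drive=(1.0, 1.0), w_inh=2.0, g_h=1.5,
                tau_a=0.05, tau_h=2.0, dt=1e-3, t_end=30.0):
    # Two units with fast activity a and slow recovery h.  The silent unit
    # escapes once drive + g_h*h exceeds the inhibition w_inh it receives,
    # which in turn suppresses the previously active unit.
    drive = np.asarray(drive, dtype=float)
    a = np.array([1.0, 0.0])      # fast activity of units 0 and 1
    h = np.array([0.0, 1.0])      # slow recovery variables
    steps = int(t_end / dt)
    trace = np.empty((steps, 2))
    for i in range(steps):
        net = drive + g_h * h - w_inh * a[::-1]   # net input to each unit
        a += dt * (sigmoid(net) - a) / tau_a      # fast activation dynamics
        h += dt * ((1.0 - a) - h) / tau_h         # h recovers when silent, decays when active
        trace[i] = a
    return trace

def active_durations(trace, unit, dt=1e-3):
    # durations of the bouts during which a unit's activity exceeds 0.5
    on = trace[:, unit] > 0.5
    durations, count = [], 0
    for flag in on:
        if flag:
            count += 1
        elif count:
            durations.append(count * dt)
            count = 0
    return durations

# Example: uniform drive versus increased drive to unit 0 (cf. input scaling).
for d in [(1.0, 1.0), (1.3, 1.0)]:
    tr = half_center(drive=d)
    print(d, [round(np.mean(active_durations(tr, u)[1:]), 2) for u in (0, 1)])

Because transitions in this sketch occur only by escape of the silent unit, changing the drive to one unit alters how long it waits to escape, and hence the active-phase durations, in a way that can be compared qualitatively with the predictions described below; the sketch is illustrative only and carries none of the structure (four interneuron populations, motoneuron pools, delayed KE onset) of the actual model.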
Our simulations also predict that changes in inputs to the CPG that are not strong enough to destroy an ongoing rhythm will alter the active phase durations of the hip MN that is dominant in that rhythm and of the knee extensor MN while leaving the other hip MN activity period almost entirely unchanged, and that these changes are controlled by the excitatory INs in the CPG. These outcomes likely result from the underlying assumption that activity transitions in our model occur through escape [25, 26], based on past experimentally constrained work modeling turtle motor CPGs [5], and alternative transition mechanisms should be considered if these predictions are falsified in future experiments. During rostral scratching, hip extensor deletions can occur [9, 39]. In these deletions, hip extensor is silent while knee extensor behavior is entirely preserved (synchrony with hip flexor after a delay, periods of full activity and full quiescence); hip flexor fails to shut down fully during its quiescent period, as during normal rostral. This lack of quiescence presumably results from the absence of inhibition from hip extensor related motor pools. These deletions occur unpredictably in some preparations, although the frequency can be increased through particular experimental techniques [9]. Due to a combination of the proposed architecture and the use of deterministic differential equations to describe population behavior, it is not possible to reproduce this behavior fully in our model. The only way to shut down hip extensor behavior in both the standard and the strong cross-excitation architectures, by only changing inputs and without changing synaptic weights from interneuron motor pools to the hip extensor (which would be a trivial but non-biological solution), is by decreasing input to IR and EP until oscillations are lost (Fig. 2). While this does lead to tonic activity in hip flexor as desired, it also leads to tonic activity in knee extensor. One possible way to resolve this issue is to suppose that an additional source of inputs, not included in our model, provides enough inhibition to shut down knee extensor motoneurons while the ER input is low. A need to invoke additional inputs to explain deletions suggests that hip-related motor pools may account for synchrony and relative timing of scratch rhythms but may not be sufficient to fully capture all motor behaviors observed. Although experiments suggest that inputs to knee extensor motoneurons are hip related, it may be that knee motor pools (as in the standard UPG approach to rhythm generation) are present in a secondary role and that interneurons related to knee flexor activity provide inhibition that contributes to the termination of knee extensor activity; past experiments on pocket and rostral scratch have not focused on knee flexor motoneuron recording [1–4], and hence we omit knee flexion in our model, but it could be included in future work. Alternatively, stochasticity may need to be taken into account to capture the full range of scratch rhythm phenomenology [40, 41]. Certainly, our model could be expanded to include additional neuron pools or stochastic mechanisms. Additional experimental work to constrain the mechanisms underlying deletions would be beneficial to help guide efforts in this direction. 
It has been suggested that oscillations underlying turtle motor rhythms may be driven by concurrent excitation and inhibition, based on analysis of data showing that the estimated synaptic conductances for excitation and inhibition to turtle motoneurons oscillate in phase [12]. It is worth noting, however, that for the most part, neither the type (hip extensor, hip flexor, and so on) of motoneurons from which recordings were obtained nor the source of synaptic inputs was identified, so it is hard to know how to interpret these results. Past reviews [16] hypothesize that this may be an artifact of the experimental setup or a feature unique to motor pattern generation in turtles (as opposed to, say, mammals). These findings contrast with the traditional reciprocal model in which motoneurons receive synaptic excitation and inhibition in antiphase [9, 26], as imposed by the mutual inhibition between EP and IP and between ER and IR in our model network. Note that we chose this mutually inhibitory structure on experimental grounds: It has long been established that HE is active together with its excitatory motor pool of interneurons, EP; additionally, HE and HF activate in antiphase (Fig. 1) [2]. The simplest way to meet these benchmarks is for EP to be active with IR and ER with IP, as imposed by mutual inhibition. Nonetheless, it would be interesting to explore how stochastic effects might allow multi-functionality of a rhythm generating circuit despite less segregated excitatory and inhibitory inputs to its motoneurons, especially since rhythm generation in several other CPG circuits involves some mixture of reciprocal and concurrent excitation and inhibition (see references in [9]). Another important future challenge will be to bring this work together with other previous modeling efforts [5] to develop a system capable of generating all four observed motor patterns, forward swim and rostral, pocket, and caudal scratch. While it is possible to produce rhythms like forward swim and caudal scratch with a hip-dominated architecture as explored in this work, different synaptic weights (and, therefore, a different network) are necessary. It is an open problem to ascertain whether a single network could produce all four patterns. One possible approach to this problem would be the use of genetic algorithms to derive optimal CPG network structures [42, 43] or to determine the parameter values necessary to coordinate multiple CPGs to generate multiple rhythms [34]. It is not clear what would constitute a practically useful objective function for a genetic algorithm approach, however. Including more of the known details about the ionic currents in turtle motoneurons [23] would be another way to tie our modeling more closely to the biology of turtle motor rhythms in future work. Finally, it is worth considering the effect of within-leg proprioceptive sensory feedback, as is often considered with cats [8, 16, 44]. However, at present such data appears to be unavailable in the literature regarding turtles. Future work could include testing hypotheses about the effects of feedback in the present model, to yield predictions for future experimental testing.

References
1. Berkowitz A. Physiology and morphology of shared and specialized spinal interneurons for locomotion and scratching. J Neurophysiol. 2008;99(6):2887–901.
2. Robertson GA, Mortin LI, Keifer J, Stein PS. Three forms of the scratch reflex in the spinal turtle: central generation of motor patterns. J Neurophysiol. 1985;53(6):1517–34.
3. Robertson GA, Stein PS. Synaptic control of hindlimb motoneurones during three forms of the fictive scratch reflex in the turtle. J Physiol. 1988;404(1):101–28.
4. Berkowitz A, Stein PS. Activity of descending propriospinal axons in the turtle hindlimb enlargement during two forms of fictive scratching: phase analyses. J Neurosci. 1994;14(8):5105–19.
5. Hao ZZ, Spardy LE, Nguyen EB, Rubin JE, Berkowitz A. Strong interactions between spinal cord networks for locomotion and scratching. J Neurophysiol. 2011;106(4):1766–81.
6. Rubin JE, Shevtsova NA, Ermentrout GB, Smith JC, Rybak IA. Multiple rhythmic states in a model of the respiratory central pattern generator. J Neurophysiol. 2009;101(4):2146–65.
7. Rybak IA, Shevtsova NA, Lafreniere-Roula M, McCrea DA. Modelling spinal circuitry involved in locomotor pattern generation: insights from deletions during fictive locomotion. J Physiol. 2006;577(2):617–39.
8. Spardy LE, Markin SN, Shevtsova NA, Prilutsky BI, Rybak IA, Rubin JE. A dynamical systems analysis of afferent control in a neuromechanical model of locomotion: II. Phase asymmetry. J Neural Eng. 2011;8(6):065004.
9. Stein PS. Alternation of agonists and antagonists during turtle hindlimb motor rhythms. Ann NY Acad Sci. 2010;1198(1):105–18.
10. Spardy L, Rubin J. Multifunctional central pattern generators controlling turtle scratching and swimming. In preparation.
11. Grillner S. Biological pattern generation: the cellular and computational logic of networks in motion. Neuron. 2006;52(5):751–66.
12. Berg RW, Alaburda A, Hounsgaard J. Balanced inhibition and excitation drive spike activity in spinal half-centers. Science. 2007;315(5810):390–3.
13. Hägglund M, Dougherty KJ, Borgius L, Itohara S, Iwasato T, Kiehn O. Optogenetic dissection reveals multiple rhythmogenic modules underlying locomotion. Proc Natl Acad Sci USA. 2013;110(28):11589–94.
14. Stein PS. Motor pattern deletions and modular organization of turtle spinal cord. Brain Res Rev. 2008;57(1):118–24.
15. Stein PS, Daniels-McQueen S. Variations in motor patterns during fictive rostral scratching in the turtle: knee-related deletions. J Neurophysiol. 2004;91(5):2380–4.
16. Grillner S, Jessell TM. Measured motion: searching for simplicity in spinal locomotor networks. Curr Opin Neurobiol. 2009;19(6):572–86.
17. Bargmann CI, Marder E. From the connectome to brain function. Nat Methods. 2013;10(6):483–90.
18. Daun S, Rubin JE, Rybak IA. Control of oscillation periods and phase durations in half-center central pattern generators: a comparative mechanistic analysis. J Comput Neurosci. 2009;27(1):3–36.
19. Butera RJ Jr, Rinzel J, Smith JC. Models of respiratory rhythm generation in the pre-Bötzinger complex. II. Populations of coupled pacemaker neurons. J Neurophysiol. 1999;82(1):398–415.
20. Tazerart S, Vinay L, Brocard F. The persistent sodium current generates pacemaker activities in the central pattern generator for locomotion and regulates the locomotor rhythm. J Neurosci. 2008;28(34):8577–89.
21. Izhikevich EM. Dynamical systems in neuroscience. Cambridge: MIT Press; 2007.
22. Hounsgaard J, Kiehn O, Mintz I. Response properties of motoneurones in a slice preparation of the turtle spinal cord. J Physiol. 1988;398(1):575–89.
23. Booth V, Rinzel J, Kiehn O. Compartmental model of vertebrate motoneurons for Ca2+-dependent spiking and plateau potentials under pharmacological treatment. J Neurophysiol. 1997;78(6):3371–85.
24. Booth V, Rinzel J. A minimal, compartmental model for a dendritic origin of bistability of motoneuron firing patterns. J Comput Neurosci. 1995;2(4):299–312.
25. Wang X-J, Rinzel J. Alternating and synchronous rhythms in reciprocally inhibitory model neurons. Neural Comput. 1992;4(1):84–97.
26. Skinner FK, Kopell N, Marder E. Mechanisms for oscillation and frequency control in reciprocally inhibitory model neural networks. J Comput Neurosci. 1994;1(1–2):69–87.
27. Ermentrout GB, Terman DH. Mathematical foundations of neuroscience. Berlin: Springer; 2010.
28. Rubin JE. Bursting induced by excitatory synaptic coupling in nonidentical conditional relaxation oscillators or square-wave bursters. Phys Rev E. 2006;74(2):021917.
29. Ermentrout B. Simulating, analyzing, and animating dynamical systems: a guide to XPPAUT for researchers and students. Philadelphia: SIAM; 2002.
30. Earhart GM, Stein PS. Step, swim, and scratch motor patterns in the turtle. J Neurophysiol. 2000;84(5):2181–90.
31. Currie SN, Stein PS. Electrical activation of the pocket scratch central pattern generator in the turtle. J Neurophysiol. 1988;60:2122–37.
32. Briggman KL, Kristan WB Jr. Multifunctional pattern-generating circuits. Annu Rev Neurosci. 2008;31:271–94.
33. Kopell N, Ermentrout GB. Coupled oscillators and the design of central pattern generators. Math Biosci. 1988;90(1):87–109.
34. Ijspeert AJ. A connectionist central pattern generator for the aquatic and terrestrial gaits of a simulated salamander. Biol Cybern. 2001;84(5):331–48.
35. Cymbalyuk G, Shilnikov A. Coexistence of tonic spiking oscillations in a leech neuron model. J Comput Neurosci. 2005;18(3):255–63.
36. Cymbalyuk GS, Calabrese RL, Shilnikov AL. How a neuron model can demonstrate co-existence of tonic spiking and bursting. Neurocomputing. 2005;65:869–75.
37. Somers D, Kopell N. Waves and synchrony in networks of oscillators of relaxation and non-relaxation type. Phys D, Nonlinear Phenom. 1995;89(1):169–83.
38. Bose A, Kopell N, Terman D. Almost-synchronous solutions for mutually coupled excitatory neurons. Phys D, Nonlinear Phenom. 2000;140(1):69–94.
39. Stein PS, McCullough ML, Currie SN. Spinal motor patterns in the turtles. Ann NY Acad Sci. 1998;860(1):142–54.
40. Kolind J, Hounsgaard J, Berg RW. Opposing effects of intrinsic conductance and correlated synaptic input on \(V_{m}\)-fluctuations during network activity. Front Comput Neurosci. 2012;6:40.
41. Jahn P, Berg RW, Hounsgaard J, Ditlevsen S. Motoneuron membrane potentials follow a time inhomogeneous jump diffusion process. J Comput Neurosci. 2011;31(3):563–79.
42. Chiel HJ, Beer RD, Gallagher JC. Evolution and analysis of model CPGs for walking: I. Dynamical modules. J Comput Neurosci. 1999;7(2):99–118.
43. Beer RD, Chiel HJ, Gallagher JC. Evolution and analysis of model CPGs for walking: II. General principles and individual variability. J Comput Neurosci. 1999;7(2):119–47.
44. Pearson KG. Role of sensory feedback in the control of stance duration in walking cats. Brain Res Rev. 2008;57(1):222–7.

Acknowledgements: The authors thank Ari Berkowitz (U. Oklahoma) for useful discussions throughout the completion of this work. This project was partially supported by NSF Awards DMS 1021701 and 1312508 and EMSW21-RTG 0739261.
Author information: Abigail C. Snyder & Jonathan E. Rubin, Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, PA 15260, USA. Correspondence to Abigail C. Snyder.
Authors' contributions: JER conceived of the project. ACS performed the simulations and generated the figures. ACS and JER completed the analysis, designed the figures, and wrote the paper.
Cite this article: Snyder, A.C., Rubin, J.E. Conditions for Multi-functionality in a Rhythm Generating Network Inspired by Turtle Scratching. J. Math. Neurosci. 5, 15 (2015). https://doi.org/10.1186/s13408-015-0026-5. Accepted: 02 June 2015.
Keywords: Central pattern generator; Turtle motor rhythms; Phase plane analysis; Slow dynamics.
\begin{document} \title[On weaker notions of nonlinear embeddings.]{On weaker notions of nonlinear embeddings between Banach spaces.} \subjclass[2010]{Primary: 46B80 } \keywords{} \author{B. M. Braga } \address{Department of Mathematics, Statistics, and Computer Science (M/C 249)\\ University of Illinois at Chicago\\ 851 S. Morgan St.\\ Chicago, IL 60607-7045\\ USA}\email{[email protected]} \date{} \maketitle \begin{abstract} In this paper, we study nonlinear embeddings between Banach spaces. More specifically, the goal of this paper is to study weaker versions of coarse and uniform embeddability, and to provide suggestive evidences that those weaker embeddings may be stronger than one would think. We do such by proving that many known results regarding coarse and uniform embeddability remain valid for those weaker notions of embeddability. \end{abstract} \section{Introduction.}\label{sectionintro} The study of Banach spaces as metric spaces has recently increased significantly, and much has been done regarding the uniform and coarse theory of Banach spaces in the past two decades. In particular, the study of coarse and uniform embeddings has been receiving a considerable amount of attention (e.g., \cite{B}, \cite{K}, \cite{Ka}, \cite{MN}, \cite{No}). These notes are dedicated to the study of several different notions of nonlinear embeddings between Banach spaces, and our main goal is to provide the reader with evidences that those kinds of embeddings may not be as different as one would think. Let $(M,d)$ and $(N,\partial)$ be metric spaces, and consider a map $f:(M,d)\to (N,\partial)$. For each $t\geq 0$, we define the \emph{expansion modulus of $f$} as $$\omega_f(t)=\sup\{\partial(f(x),f(y))\mid d(x,y)\leq t\},$$ \noindent and the \emph{compression modulus of $f$} as $$\rho_f(t)=\inf\{\partial(f(x),f(y))\mid d(x,y)\geq t\}.$$ \noindent Hence, $\rho_f(d(x,y))\leq \partial(f(x),f(y))\leq \omega_f(d(x,y))$, for all $x,y\in M$. The map $f$ is uniformly continuous if and only if $\lim_{t\to 0_+}\omega_f(t)=0$, and $f^{-1}$ exists and it is uniformly continuous if and only if $\rho_f(t)>0$, for all $t>0$. If both $f$ and its inverse $f^{-1}$ are uniformly continuous, $f$ is called a \emph{uniform embedding}. The map $f$ is called \emph{coarse} if $\omega_f(t)<\infty$, for all $t\geq 0$, and \emph{expanding} if $\lim_{t\to\infty}\rho_f(t)=\infty$. If $f$ is both expanding and coarse, $f$ is called a \emph{coarse embedding}. A map which is both a uniform and a coarse embedding is called a \emph{strong embedding}. Those notions of embeddings are fundamentally very different, as coarse embeddings deal with the large scale geometry of the metric spaces concerned, and uniform embeddings only deal with their local (uniform) structure. Although those notions are fundamentally different, it is still not known whether the existence of those embeddings are equivalent in the Banach space setting. Precisely, the following problem remains open. \begin{problem}\label{mainproblem} Let $X$ and $Y$ be Banach spaces. Are the following equivalent? \begin{enumerate}[(i)] \item $X$ coarsely embeds into $Y$. \item $X$ uniformly embeds into $Y$. \item $X$ strongly embeds into $Y$. \end{enumerate} \end{problem} It is known that Problem \ref{mainproblem} has a positive answer if $Y$ is either $\ell_\infty$ (see \cite{Ka4}, Theorem 5.3) or $\ell_p$, for $p\in [1,2]$ (see \cite{No}, Theorem 5, and \cite{Ra}, page 1315). C. 
Rosendal made progress on this problem by showing that if $X$ uniformly embeds into $\ell_p$, then $X$ strongly embeds into $\ell_p$, for all $p\in [1,\infty)$. This result can be generalized by replacing $\ell_p$ with any minimal Banach space (see \cite{B}, Theorem 1.2(i)). Natural weakenings of the concepts of coarse and uniform embeddings were introduced in \cite{Ro}, and, as it turns out, those weaker notions are rich enough for many applications. Given a map $f:(M,d)\to(N,\partial)$ between metric spaces, we say that $f$ is \emph{uncollapsed} if there exists some $t>0$ such that $\rho_f(t)>0$. The map $f$ is called \emph{solvent} if, for each $n\in\mathbb{N}$, there exists $R>0$, such that $$d(x,y)\in [R,R+n] \ \ \text{ implies }\ \ \partial(f(x),f(y))>n,$$ \noindent for all $x,y\in M$. For each $t\geq 0$, we define the \emph{exact compression modulus of $f$} as $$\overline{\rho}_f(t)=\inf\{\partial(f(x),f(y))\mid d(x,y)=t\}.$$ \noindent The map $f$ is called \emph{almost uncollapsed} if there exists some $t>0$ such that $\overline{\rho}_f(t)>0$. It is clear from the definitions that expanding maps are both solvent and uncollapsed. Also, as $\rho_f(t)\leq \overline{\rho}_f(t)$, for all $t\in[0,\infty)$, uncollapsed maps are also almost uncollapsed. Therefore, we have Diagram \ref{DiagEx}. \begin{equation}\label{DiagEx} \xymatrix{ & \text{Expanding} \ar[dl]\ar[dr] & \\ \text{Solvent} \ar[dr]& & \text{Uncollapsed}\ar[dl]\\ & \text{ Almost uncollapsed } & } \end{equation} As a map $f:(M,d)\to (N,\partial)$ has uniformly continuous inverse if and only if $\rho_f(t)>0$, for all $t>0$, Diagram \ref{DiagUCI} holds. \begin{equation}\label{DiagUCI} \xymatrix{ \text{Uniformly continuous inverse} \ar[r]& \text{Uncollapsed}\ar[r] &\text{ Almost uncollapsed } }\end{equation} None of the arrows in Diagram \ref{DiagEx} or in Diagram \ref{DiagUCI} reverses. Indeed, any bounded uniform embedding is uncollapsed (resp. almost uncollapsed), but it is not expanding (resp. solvent). Examples of uncollapsed maps which do not have uniformly continuous inverse are easy to construct, as one only needs to make sure the map is not injective. Finally, Proposition \ref{solventcollapsedmaps} below provides an example of a map which is solvent but collapsed (i.e., not uncollapsed), which covers the remaining arrows. In \cite{Ro}, Theorem 2, Rosendal showed that if there exists a uniformly continuous uncollapsed map $X\to Y$ between Banach spaces $X$ and $Y$, then $X$ strongly embeds into $\ell_p(Y)$, for any $p\in [1,\infty)$. Rosendal also showed that there exists no map $c_0\to E$ which is both coarse and solvent (resp. uniformly continuous and almost uncollapsed), where $E$ is any reflexive Banach space (see \cite{Ro}, Proposition 63 and Theorem 64). This result is a strengthening of a result of Kalton that says that $c_0$ does not coarsely embed (resp. uniformly embed) into any reflexive space (see \cite{K}, Theorem 3.6). Those results naturally raise the following question. \begin{problem}\label{mainproblemPartII} Let $X$ and $Y$ be Banach spaces. Are the statements in Problem \ref{mainproblem} equivalent to the following weaker statements? \begin{enumerate}[(i)]\setcounter{enumi}{3} \item $X$ maps into $Y$ by a map which is coarse and solvent. \item $X$ maps into $Y$ by a map which is uniformly continuous and almost uncollapsed. 
\end{enumerate} \end{problem} In these notes, we will not directly deal with Problem \ref{mainproblem} and Problem \ref{mainproblemPartII} for an arbitrary $Y$, but instead, we intend to provide the reader with many suggestive evidences that those problems either have a positive answer or that any possible differences between the aforementioned embeddings are often negligible. We now describe the organization and the main results of this paper. In Section \ref{sectionbackground}, we give all the remaining notation and background necessary for these notes. Also, in Subsection \ref{subsectionEx} we give an example of a map $\mathbb{R}\to \ell_2(\C)$ which is Lipschitz, solvent and collapsed (Proposition \ref{solventcollapsedmaps}). For a Banach space $X$, let $q_X=\inf\{q\in [2,\infty)\mid X\text{ has cotype }q\}$ (see Subsection \ref{deftype} for definitions regarding type and cotype). In \cite{MN}, M. Mendel and A. Naor proved that if a Banach space $X$ either coarsely or uniformly embeds into a Banach space $Y$ with nontrivial type, then $q_X\leq q_Y$ (see Theorem 1.9 and Theorem 1.11 of \cite{MN}). In Section \ref{sectioncotype}, we prove the following strengthening of this result. \begin{thm}\label{solventemb} Let $X$ and $Y$ be Banach spaces, and assume that $Y$ has nontrivial type. If either \begin{enumerate}[(i)] \item there exists a coarse solvent map $X\to Y$, or \item there exists a uniformly continuous almost uncollapsed map $X\to Y$, \end{enumerate} \noindent then, $q_X\leq q_Y$. \end{thm} Theorem \ref{solventemb} gives us the following corollary. \begin{cor}\label{funnyOS} Let $p,q\in [1,\infty)$ be such that $q>\max\{2,p\}$. Any uniformly continuous map $f:\ell_q\to \ell_p$ (resp. $f:L_q\to L_p$) must satisfy $$\sup_t\inf_{\|x-y\|=t}\| f(x)-f(y)\|=0.$$ \end{cor} While the unit balls of the $\ell_p$'s are all uniformly homeomorphic to each other (see \cite{OS}, Theorem 2.1), Corollary \ref{funnyOS} says that those uniform homeomorphisms cannot be extended in any reasonable way. In Section \ref{subsectionPropQ}, we look at Kalton's Property $\mathcal{Q}$, which was introduced in \cite{K}, Section 4. We prove that Property $\mathcal{Q}$ is stable under those weaker kinds of embeddings (see Theorem \ref{PropertyQ}). Although the stability of Property $\mathcal{Q}$ under coarse and uniform embeddings is implicit in \cite{K}, to the best of our knowledge, this is not explicitly written in the literature. Theorem \ref{PropertyQ} allows us to obtain the following result (see Theorem \ref{IntoSuperreflexive2} below for a stronger result). \begin{theorem}\label{IntoSuperreflexive} Let $X$ and $Y$ be Banach spaces, and assume that $Y$ is reflexive (resp. super-reflexive). If either \begin{enumerate}[(i)] \item there exists a coarse solvent map $X\to Y$, or \item there exists a uniformly continuous almost uncollapsed map $X\to Y$, \end{enumerate} \noindent then, $X$ is either reflexive (resp. super-reflexive) or $X$ has a spreading model equivalent to the $\ell_1$-basis (resp. trivial type). \end{theorem} Theorem \ref{IntoSuperreflexive} was proven in \cite{K}, Theorem 5.1, for uniform and coarse embeddings into super-reflexive spaces. Although the result above for uniform and coarse embeddings into reflexive spaces is implicit in \cite{K}, we could not find this result explicitly written anywhere in the literature. 
It is worth noticing that Theorem \ref{IntoSuperreflexive} cannot be improved for embeddings of $X$ into super-reflexive spaces in order to guarantee that $X$ either is super-reflexive or has a spreading model equivalent to the $\ell_1$-basis (see Remark \ref{Gideon}). As mentioned above, Problem \ref{mainproblem} has a positive answer for $Y=\ell_p$, for all $p\in [1,2]$ (see \cite{No}, Theorem 5, and \cite{Ra}, page 1315). In Section \ref{SectionHilbert}, we show that Problem \ref{mainproblemPartII} also has a positive answer in the same settings. Precisely, we show the following. \begin{thm}\label{ThmHilbert} Let $X$ be a Banach space, and $Y=\ell_p$, for any $p\in [1,2]$. Then Problem \ref{mainproblemPartII} has a positive answer. \end{thm} See Theorem \ref{ThmHilbertCru} below for another equivalent condition to the items in Problem \ref{mainproblemPartII} in the case $Y=\ell_p$, for $p\in [1,2]$. At last, in Section \ref{Sectionlinfty}, we give a positive answer to Problem \ref{mainproblemPartII} for $Y=\ell_\infty$. This is a strengthening of Theorem 5.3 of \cite{Ka4}, where Kalton shows that Problem \ref{mainproblem} has a positive answer for $Y=\ell_\infty$. Moreover, Kalton showed that uniform (resp. coarse) embeddability into $\ell_\infty$ is equivalent to Lipschitz embeddability. \begin{thm}\label{Thmlinfty} Let $X$ be a Banach space, and $Y=\ell_\infty$. Then Problem \ref{mainproblemPartII} has a positive answer. Moreover, for $Y=\ell_\infty$, items (iv) and (v) of Problem \ref{mainproblemPartII} are also equivalent to Lipschitz embeddability into $\ell_\infty$. \end{thm} Even though we do not give a positive answer to Problem \ref{mainproblem} and Problem \ref{mainproblemPartII}, we believe that the aforementioned results provide considerable suggestive evidences that all the five different kinds of embeddings $X\hookrightarrow Y$ above preserve the geometric properties of $X$ in a similar manner. \section{Preliminaries.}\label{sectionbackground} In these notes, all the Banach spaces are assumed to be over the reals, unless otherwise stated. If $X$ is a Banach space, we denote by $B_X$ its closed unit ball. Let $(X_n,\|\cdot\|_n)_{n=1}^\infty$ be a sequence of Banach spaces. Let $\mathcal{E}=(e_n)_{n=1}^\infty$ be a $1$-unconditional basic sequence in a Banach space $E$ with norm $\|\cdot\|_E$. We define the sum $(\oplus _n X_n)_{\mathcal{E}}$ to be the space of sequences $(x_n)_{n=1}^\infty$, where $x_n\in X_n$, for all $n\in\mathbb{N}$, such that $$\|(x_n)_{n=1}^\infty\|\vcentcolon =\Big\|\sum_{n\in\mathbb{N}}\|x_n\|_ne_n\Big\|_E<\infty.$$ \noindent The space $(\oplus _n X_n)_\mathcal{E}$ endowed with the norm $\|\cdot\|$ defined above is a Banach space. If the $X_n$'s are all the same, say $X_n=X$, for all $n\in\mathbb{N}$, we write $(\oplus X)_\mathcal{E}$. \subsection{Nonlinear embeddings.} \label{SubsectionEmb} Let $X$ be a Banach space and $M$ be a metric space. Then, a uniformly continuous map $f:X\to M$ is automatically coarse. Moreover, if $f$ is coarse, then there exists $L>0$ such that $\omega_f(t)\leq Lt+L$, for all $t\geq 0$ (see \cite{Ka3}, Lemma 1.4). In particular, $f$ is Lipschitz for large distances. Indeed, if $\|x-y\|\geq L$, we have that $\|f(x)-f(y)\|\leq (L+1)\|x-y\|$. By replacing $f$ by $f(L\ \cdot)/(L+1)$, we can assume that $$\|x-y\|\geq 1\ \ \text{ implies }\ \ \|f(x)-f(y)\|\leq \|x-y\|.$$ The following proposition, proved in \cite{Ro}, Lemma 60, gives us a useful equivalent definition of solvent maps. 
\begin{prop}\label{propsolvent} Let $X$ be a Banach space and $M$ be a metric space. Then a coarse map $f:X\to M$ is solvent if and only if $\sup_{t>0}\overline{\rho}_f(t)=\infty$. \end{prop} Although the statement of the next proposition is different from Proposition 63 of \cite{Ro}, its proof is the same. However, as its proof is very simple and as this result will play an important role in our notes, for the convenience of the reader, we include its proof here. \begin{prop}\label{Rosendal} Let $X$ and $Y$ be a Banach space, and let $\mathcal{E}$ be an $1$-unconditional basic sequence. Assume that there exists a uniformly continuous almost uncollapsed map $\varphi:X\to Y$. Then, there exists a uniformly continuous solvent map $\Phi:X\to (\oplus Y)_\mathcal{E}$. \end{prop} \begin{proof} Let $\varphi :X\to Y$ be a uniformly continuous almost uncollapsed map. As $\varphi$ is almost uncollapsed, pick $t>0$ such that $\overline{\rho}_\varphi(t)>0$. As $\varphi$ is uniformly continuous, pick a sequence of positive reals $(\varepsilon_n)_n$ such that $$\|x-y\|<\varepsilon_n\ \ \Rightarrow \ \ \|\varphi(x)-\varphi(y)\|<\frac{1}{n2^{n}},$$ \noindent for all $x,y\in X$. For each $n\in\mathbb{N}$, let $\Phi_n(x)=n\cdot \varphi\big(\frac{\varepsilon_n}{n}x\big)$, for all $x\in X$. Then, for $n_0\in\mathbb{N}$, and $x,y\in X$, with $\|x-y\|\leq n_0$, we have that \begin{align*} \|\Phi_n(x)-\Phi_n(y)\|&= n\cdot \Big\| \varphi\Big(\frac{\varepsilon_n}{n}x\Big)-\varphi\Big(\frac{\varepsilon_n}{n}y\Big)\Big\|\leq \frac{1}{2^n}, \end{align*} \noindent for all $n\geq n_0$. Define $\Phi:X\to (\oplus Y)_\mathcal{E}$ by letting $\Phi(x)=(\Phi_n(x))_n$, for all $x\in X$. By the above, $\Phi$ is well-define and it is uniformly continuous. Now notice that, if $\|x-y\|=tn/\varepsilon_n$, then $\|\frac{\varepsilon_n}{n}x-\frac{\varepsilon_n}{n}y\|=t$. Hence, if $\|x-y\|=tn/\varepsilon_n$, we have that \begin{align*} \|\Phi(x)-\Phi(y)\|&\geq \|\Phi_n(x)-\Phi_n(y)\| = n\cdot \Big\|\varphi\Big(\frac{\varepsilon_n}{n}x\Big)-\varphi\Big(\frac{\varepsilon_n}{n}y\Big)\Big\|\geq n\cdot\overline{\rho}_\varphi(t). \end{align*} \noindent So, as $\overline{\rho}_\varphi(t)>0$, we have that $\lim_n\overline{\rho}_\Phi(tn/\varepsilon_n)= \infty$. By Proposition \ref{propsolvent}, $\Phi$ is solvent. \end{proof} \subsection{Type and cotype.}\label{deftype} Let $X$ be a Banach space and $p\in (1,2]$ (resp. $q\in [2,\infty)$). We say that $X$ has \emph{type $p$} (resp. \emph{cotype $q$}) if there exists $T>0$ (resp. $C>0$) such that, for all $x_1,\ldots, x_n\in X$, $$\mathbb{E}_\varepsilon\Big\|\sum_{j=1}^n\varepsilon_jx_j\Big\|^p\leq T^p\sum_{j=1}^n\|x_j\|^p \ \ \Big(\text{resp. }\mathbb{E}_\varepsilon\Big\|\sum_{j=1}^n\varepsilon_jx_j\Big\|^q\geq \frac{1}{C^q}\sum_{j=1}^n\|x_j\|^q\Big),$$ \noindent where the expectation above is taken with respect to a uniform choice of signs $\varepsilon=(\varepsilon_j)_j\in\{-1,1\}^n$. The smallest $T$ (resp. $C$) for which this holds is denoted $T_p(X)$ (resp. $C_q(X)$). We say that $X$ has \emph{nontrivial type} (resp. \emph{nontrivial cotype}) if $X$ has type $p$, for some $p\in (1,2]$ (resp. if $X$ has cotype $q$, for some $q\in[2,\infty)$). \subsection{Cocycles.}\label{subsectionEx} By the Mazur-Ulam Theorem (see \cite{MU}), any surjective isometry $A:Y\to Y$ of a Banach space $Y$ is affine, i.e., there exists a surjective linear isometry $T:Y\to Y$, and some $y_0\in Y$, such that $A(y)=T(y)+y_0$, for all $y\in Y$. 
Therefore, if $G$ is a group, every isometric action $\alpha:G\curvearrowright Y$ of $G$ on the Banach space $Y$ is an affine isometric action, i.e., there exists an isometric linear action $\pi:G\curvearrowright Y$, and a map $b:G\to Y$ such that \begin{align*} \alpha_g(y)=\pi_g(y)+b(g), \end{align*} \noindent for all $g\in G$, and all $y\in Y$. The map $b:G\to Y$ is called the \emph{cocycle of $\alpha$}, and it is given by $b(g)=\alpha_g(0)$, for all $g\in G$. As $\alpha$ is an action by isometries, we have that \begin{align*} \|b(g)-b(h)\|&=\|\alpha_g(0)-\alpha_{h}(0)\|=\|\alpha_{h^{-1}g}(0)\|=\|b(h^{-1}g)\| \end{align*} \noindent for all $g,h\in G$. Hence, if $G$ is a metric group, a continuous cocycle $b:G\to Y$ is automatically uniformly continuous. \iffalse An arbitrary map $b: G\to Y$ is called an \emph{cocycle} if there exists an isometric linear action $\pi:G\curvearrowright Y$ such that Equation \ref{cocycle} defines an affine isometric action on $Y$. In particular, if $b:G\to Y$ is a cocycle, then $\|b(g)-b(h)\|=\|b(h^{-1}g)\|$, for all $g,h\in G$. Hence, if $G$ is a metric group, a continuous cocycle $b:G\to Y$ is automatically uniformly continuous. If $(G,d)$ is a topological group with a compatible metric $d$, we call an affine action $\alpha:G\curvearrowright Y$ \emph{coarse} (resp. \emph{Lipschitz}, \emph{expanding}, \emph{solvent}, \emph{collapsed}, \emph{uncollapsed}, or \emph{ almost uncollapsed}) if its cocycle $b:G\to Y$ is coarse (resp. Lipschitz, expanding, solvent, collapsed, uncollapsed, or almost uncollapsed). \fi \begin{remark} If $(X,\|\cdot\|)$ is a Banach space, we look at $(X,+)$ as an additive group with a metric given by the norm $\|\cdot\|$. So, we can work with affine isometric actions $\alpha:X\curvearrowright Y$ of the additive group $(X,+)$ on a Banach space $Y$. \end{remark} Let $\alpha:G\curvearrowright Y$ be an action by affine isometries. Its cocycle $b$ is called a \emph{coboundary} if there exists $\xi\in Y$ such that $b(g)=\xi-\pi_g(\xi)$, for all $g\in G$. Clearly, $b$ is a coboundary if and only if $\alpha$ has a fixed point. Also, if $Y$ is reflexive, then $\text{Im}(b)$ is bounded if and only if $b$ is a coboundary. Indeed, if $b$ is a coboundary, it is clear that $\text{Im}(b)$ is bounded. Say $\text{Im}(b)$ is bounded and let $\mathcal{O}$ be an orbit of the action $\alpha$. Then the closed convex hull $\overline{\text{conv}}(\mathcal{O})$ must be bounded, hence weakly compact (as $Y$ is reflexive). Therefore, by Ryll-Nardzewski fixed-point theorem (see \cite{R-N}), there exists $\xi\in Y$ such that $\alpha_g(\xi)=\xi$, for all $g\in G$. So, $b(g)=\xi-\pi_g(\xi)$, for all $g\in G$. The discussion above is well-known, and we isolate it in the proposition below. \begin{prop}\label{coboundary} Let $G$ be a group and $Y$ be a Banach space. Let $\alpha:G\curvearrowright Y$ be an action by affine isometries with cocycle $b$. Then $b$ is a coboundary if and only if $\alpha$ has a fixed point. Moreover, if $Y$ is reflexive, then $b$ is a coboundary if and only if $b$ is bounded. \end{prop} As we are interested in studying the relations between maps which are expanding, solvent, uncollapsed, and almost uncollapsed, it is important to know that those are actually different classes of maps. The next proposition shows that there are maps which are both solvent and collapsed (see \cite{E}, Theorem 2.1, for a similar example). In particular, such maps are not expanding. 
\begin{prop}\label{solventcollapsedmaps} There exists an affine isometric action $\mathbb{R}\curvearrowright \ell_2(\mathbb{C})$ whose cocycle is Lipschitz, solvent, and collapsed. \end{prop} \begin{proof} Define an action $U:\mathbb{R}\curvearrowright \C^\mathbb{N}$ by letting $$U_t(x)=\Big(\exp\Big(\frac{2\pi i t}{2^{2^n}}\Big) x_n\Big)_n,$$ \noindent for all $t\in \mathbb{R}$, and all $x=(x_n)_n\in\C^\mathbb{N}$. Let $w=(1,1,\ldots)\in \C^\mathbb{N}$ and define an action $\alpha:\mathbb{R}\curvearrowright \C^\mathbb{N}$ as $\alpha_t(x)=w+U_t(x-w)$, for all $t\in \mathbb{R}$, and all $x\in\C^\mathbb{N}$. So, \begin{align}\label{eqA} (\alpha_t(x))_m= \exp\Big(\frac{2\pi i t}{2^{2^m}}\Big)x_m + 1-\exp\Big(\frac{2\pi i t}{2^{2^m}}\Big), \end{align} \noindent for all $t\in \mathbb{R}$, all $x=(x_n)_n\in\C^\mathbb{N}$, and all $m\in\mathbb{N}$. As $|1-\exp(\theta i)|\leq |\theta|$, for all $\theta\in\mathbb{R} $, it follows that $( 1-\exp(2\pi i t/2^{2^n}))_n\in \ell_2(\C)$, for all $t\in \mathbb{R}$. Hence, $\alpha_t(x)\in\ell_2(\C)$, for all $ t\in \mathbb{R}$, and all $x\in\ell_2(\C)$. So, $\alpha$ restricts to an action $\alpha:\mathbb{R}\curvearrowright \ell_2(\C)$. By Equation \ref{eqA}, it follows that $\alpha:\mathbb{R}\curvearrowright \ell_2(\C)$ is an affine isometric action. Let $b:\mathbb{R}\to \ell_2(\C)$ be the cocycle of $\alpha:\mathbb{R}\curvearrowright \ell_2(\C)$, i.e., $b(t)=\alpha_t(0)$, for all $t\in\mathbb{R}$. Then, an easy induction gives us that $b(t)=w-U_t(w)$, for all $t\in\mathbb{R}$. Let $C=\sum_{n\in\mathbb{N}}\Big(\frac{2\pi}{2^{2^n}}\Big)^2$, then \begin{align*} \|b(t)\|^2&=\sum_{n\in\mathbb{N}}\Big|1-\exp\Big(\frac{2\pi i t}{2^{2^n}}\Big)\Big|^2\leq \sum_{n\in\mathbb{N}}\Big(\frac{2\pi t}{2^{2^n}}\Big)^2=C|t|^2, \end{align*} \noindent for all $t\in\mathbb{R}$. So, $b$ is Lipschitz. For $t\neq 0$, $0\in \C^\mathbb{R}$ is the only fixed point of $U_t$. Hence, $w$ is the only fixed point of $\alpha_t$. So, as $w\not\in \ell_2(\C)$, $\alpha:\mathbb{R}\curvearrowright \ell_2(\C) $ has no fixed points. Therefore, $b$ is unbounded (see Proposition \ref{coboundary}). By Proposition \ref{propsolvent}, $b$ is solvent. Pick $L>0$ such that $Ls\leq 2^s-1$, for all $s\in\mathbb{N}$. If $k\in\mathbb{N}$ is large enough, say ${2\pi }/{ 2^{2^kL}}<1$, we have that \begin{align*} \|b(2^{2^k})\|^2&=\sum_{n>k}\Big| 1-\exp\Big(\frac{2\pi i 2^{2^k}}{2^{2^n}}\Big)\Big|^2\leq \sum_{n>k}\Big(\frac{2\pi 2^{2^k}}{2^{2^n}}\Big)^2\\ &= \sum_{s\in\mathbb{N}}\Big(\frac{2\pi }{ 2^{2^k(2^{s}-1)}}\Big)^2\leq \sum_{s\in\mathbb{N}}\Big(\frac{2\pi }{ 2^{2^kLs}}\Big)^2\\ &\leq \frac{2\pi}{2^{2^kL}-1}. \end{align*} \noindent Hence, $\|b(2^{2^k})\|\to 0$, as $k\to \infty$. So, $b$ is collapsed. \end{proof} \begin{problem} Is there a map $X\to Y$ which is collapsed, almost uncollapsed and bounded (in particular not solvent), for some Banach spaces $X$ and $Y$? \end{problem} \section{Preservation of cotype.}\label{sectioncotype} Mendel and Naor solved in \cite{MN} the long standing problem in Banach space theory of giving a completely metric definition for the cotype of a Banach space. As a subproduct of this, they have shown that if a Banach space $X$ coarsely (resp. uniformly) embeds into a Banach space $Y$ with nontrivial type, then $q_X\leq q_Y$ (see \cite{MN}, Theorem 1.9 and Theorem 1.11). In this section we prove Theorem \ref{solventemb}, which shows that the hypothesis on the embedding $X\hookrightarrow Y$ can be weakened. 
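Before introducing the metric-cotype machinery, we record a small numerical illustration of the cocycle $b$ constructed in Proposition \ref{solventcollapsedmaps}. The Python sketch below is only an illustration of that construction (the function name \texttt{b\_norm} and the truncation level are ad hoc choices): it truncates the sequence after finitely many coordinates, so the unboundedness of $b$, and hence its solvency, is of course not visible at any finite truncation, but the sketch does display the Lipschitz-type bound $\|b(t)\|\leq \sqrt{C}\,|t|$ and the collapse $\|b(2^{2^k})\|\to 0$ along the sequence used at the end of the proof.

\begin{verbatim}
import numpy as np

def b_norm(t, n_max=8):
    # norm of the truncated cocycle b(t) = (1 - exp(2*pi*i*t / 2^(2^n)))_n
    terms = [abs(1.0 - np.exp(2j * np.pi * t / 2.0 ** (2 ** n))) ** 2
             for n in range(1, n_max + 1)]
    return np.sqrt(sum(terms))

# truncated version of the Lipschitz constant C from the proof
C = sum((2 * np.pi / 2.0 ** (2 ** n)) ** 2 for n in range(1, 9))

for t in [0.1, 0.5, 1.0, 2.0]:
    print(t, b_norm(t), np.sqrt(C) * t)   # b_norm(t) stays below sqrt(C)*t
for k in [2, 3, 4, 5]:
    print(k, b_norm(2.0 ** (2 ** k)))     # tends to 0 along t = 2^(2^k)
\end{verbatim}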
For every $m\in\mathbb{N}$, we denote by $\mathbb{Z}_m$ the set of integers modulo $m$. For every $n,m\in\mathbb{N}$, we denote the normalized counting measure on $\mathbb{Z}^n_m$ by $\mu=\mu_{n,m}$, and the normalized counting measure on $\{-1,0,1\}^n$ by $\sigma=\sigma_n$. \begin{defi}\textbf{(Metric cotype)} Let $(M,d)$ be a metric space and $q,\Gamma>0$. We say that $(M,d)$ has \emph{metric cotype $q$ with constant $\Gamma$} if, for all $n\in\mathbb{N}$, there exists an even integer $m$, such that, for all $f:\mathbb{Z}^n_m\to M$, \begin{align}\label{metriccotypedef} \sum_{j=1}^n\int_{\mathbb{Z}^n_m}d\Big(f\big(x&+\frac{m}{2}e_j\big),f(x)\Big)^qd\mu(x)\\ & \leq\Gamma^qm^q\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}^n_m}d\big(f(x+\varepsilon),f(x)\big)^qd\mu(x)d\sigma(\varepsilon).\nonumber \end{align} \noindent The infimum of the constants $\Gamma$ for which $(M,d)$ has metric cotype $q$ with constant $\Gamma$ is denoted by $\Gamma_q(M)$. Given $n\in\mathbb{N}$ and $\Gamma>0$, we define $m_q(M,n,\Gamma)$ as the smallest even integer $m$ such that Inequality \ref{metriccotypedef} holds, for all $f:\mathbb{Z}^n_m\to M$. If no such $m$ exists we set $m_q(M,n,\Gamma)=\infty$. \end{defi} The following is the main theorem of \cite{MN}. Although we will not use this result in these notes, we believe it is worth mentioning. \begin{thm}\textbf{(Mendel and Naor, 2008)} Let $X$ be a Banach space and $q\in[2,\infty)$. Then $X$ has metric cotype $q$ if and only if $X$ has cotype $q$. Moreover, $$\frac{1}{2\pi}C_q(X)\leq \Gamma_q(X)\leq 90 C_q(X),$$ \noindent where $C_q(X)$ is the $q$-cotype constant of $X$. \end{thm} We start by proving a simple property of solvent maps. \begin{prop}\label{ltrivial} Let $(M,d)$ and $(N,\partial)$ be metric spaces, $\varphi:M\to N$ be a solvent map, and $S>0$. If $[a_n,b_n]_n$ is a sequence of intervals of the real line such that $\lim_n a_n=\infty$, $b_n-a_n<S$ and $a_{n+1}-a_n<S$, for all $n\in\mathbb{N}$, then, we must have $$\sup_n\inf\{\overline{\rho}_\varphi(t)\mid t\in [a_n,b_n]\}=\infty.$$ \end{prop} \begin{proof} Let $k>0$. Pick $N\in\mathbb{N}$ so that $N\geq \{a_1+S, k, 2S\}$, and let $R\geq 0$ be such that $d(x,y)\in [R,R+N]$ implies $\partial(f(x),f(y))>N$, for all $x,y\in M$. Then there exists $n\in\mathbb{N}$ such that $[a_n,b_n]\subset [R,R+N]$. Indeed, if $a_1< R$ let $n=\max \{j\in\mathbb{N}\mid a_j<R\}+1$, and if $R\leq a_1$ let $n=1$. Hence, $$\inf\{\overline{\rho}_\varphi(t)\mid t\in [a_n,b_n]\}\geq \inf\{\overline{\rho}_\varphi(t)\mid t\in [R,R+N]\}\geq N\geq k.$$ \noindent As $k$ was chosen arbitrarily, we are done. \end{proof} The following lemma is a version of Lemma 7.1 of \cite{MN} in the context of the modulus $\overline{\rho}$ instead of $\rho$. It's proof is analogous to the proof of Lemma 7.1 of \cite{MN} but we include it here for completeness. Let $n\in\mathbb{N}$ and $r\in [1,\infty]$. In what follows, $\ell_r^n(\C)$ denotes the complex Banach space $(\C^n,\|\cdot\|_r)$, where $\|\cdot\|_r$ denotes the $\ell_r$-norm in $\C^n$. \begin{lemma}\label{lemmasolvent} Let $(M,d)$ be a metric space, $n\in\mathbb{N}$, $q,\Gamma>0$, and $r\in [1,\infty]$. Then, for every map $f:\ell^n_r(\C)\to M$, and every $s>0$, we have $$n^{1/q}\overline{\rho}_f(2s)\leq \Gamma \cdot m_q(M,n,\Gamma)\cdot\omega_f\left(\frac{2\pi sn^{1/r}}{m_q(M,n,\Gamma)}\right)$$ \noindent (if $r=\infty$, we use the notation $1/r=0$). 
\end{lemma} \begin{proof} In order to simplify notation, let $m=m_q(M,n,\Gamma)$ and assume $r<\infty$ (if $r=\infty$, the same proof holds with the $\ell_r$-norm substituted by the max-norm below). Let $e_1,\ldots,e_n$ be the standard basis of $\ell_r^n(\C)$. Let $h:\mathbb{Z}^n_m\to \ell^n_r(\C)$ be given by $$h(x)=s\cdot\sum_{j=1}^ne^{\frac{2\pi i x_j}{m}}e_j,$$ \noindent for all $x=(x_j)_{j}\in \mathbb{Z}^n_m$, and define $g:\mathbb{Z}^n_m\to M$ by letting $g(x)=f(h(x))$, for all $x=(x_j)_{j}\in \mathbb{Z}^n_m$. Then, as \begin{align*} d(g(x+\varepsilon),g(x))\leq \omega_f\Big(s\big(\sum_{j=1}^n|e^{\frac{2\pi i\varepsilon_j}{m}}-1|^r\big)^{1/r}\Big)\leq \omega_f\Big(\frac{2\pi s n^{1/r}}{m}\Big), \end{align*} \noindent for all $\varepsilon=(\varepsilon_j)_j\in \{-1,0,1\}^n$ and all $x=(x_j)_j\in \mathbb{Z}^n_m$, we must have \begin{align*} \int_{\{-1,0,1\}^n}\int_{\mathbb{Z}^n_m}d\big(g(x+\varepsilon),g(x)\big)^qd\mu(x)d\sigma(\varepsilon)\leq \omega_f\Big(\frac{2\pi s n^{1/r}}{m}\Big)^q. \end{align*} Also, as $\|h(x+\frac{m}{2}e_j)-h(x)\|=2s$, for all $x\in \mathbb{Z}^n_m$, and all $j\in\{1,\ldots,n\}$, we have that $d(g(x+\frac{m}{2}e_j),g(x))\geq \overline{\rho}_f(2s)$, for all $x\in \mathbb{Z}^n_m$, and all $j\in\{1,\ldots,n\}$. Hence, $$\sum_{j=1}^n\int_{\mathbb{Z}^n_m}d\Big(g(x+\frac{m}{2}e_j),g(x)\Big)^qd\mu(x)\geq n \overline{\rho}_f(2s)^q.$$ \noindent Therefore, by the definition of $m_q(M,n,\Gamma)$, we conclude that $$n\overline{\rho}_f(2s)^q\leq \Gamma^q m^q\omega_f\Big(\frac{2\pi s n^{1/r}}{m}\Big)^q.$$ \noindent Raising both sides to the $(1/q)$-th power, we are done. \end{proof} We can now prove the main result of this section. \begin{proof}[Proof of Theorem \ref{solventemb}] First, let us notice that we only need to prove the case in which $\varphi$ is coarse and solvent. Indeed, let $\varphi:X\to Y$ be a uniformly continuous almost uncollapsed map; then $X$ maps into $\ell_2(Y)$ by a uniformly continuous solvent map (see Proposition \ref{Rosendal}). As $p_{\ell_2(Y)}=p_Y$ and $q_{\ell_2(Y)}=q_Y$, there is no loss of generality if we assume that $\varphi$ is solvent. If $q_Y=\infty$ we are done, so assume $q_Y<\infty$. Suppose $q_X>q_Y$. Pick $q\in(q_Y, q_X)$ such that $1/q-1/q_X<1$, and let $\alpha=1/q-1/q_X$ (if $q_X=\infty$, we use the notation $1/q_X=0$). Let $(\varepsilon_n)_n$ be a sequence in $(0,1)$ such that $(1+\varepsilon_n)n^\alpha\leq n^\alpha+1$, for all $n\in\mathbb{N}$. By the Maurey-Pisier Theorem (see \cite{MP}), $\ell_{q_X}$ is finitely representable in $X$. Considering $\ell_{q_X}(\C)$ as a real Banach space, we have that $\ell_{q_X}(\C)$ is finitely representable in $\ell_{q_X}$, so $\ell_{q_X}(\C)$ is finitely representable in $X$. Therefore, looking at $\ell_{q_X}^n(\C)$ as real Banach spaces, we can pick a (real) isomorphic embedding $f_n:\ell_{q_X}^n(\C)\to X$ such that $\|x\|\leq \|f_n(x)\|\leq (1+\varepsilon_n)\|x\|$, for all $x\in\ell_{q_X}^n(\C)$. For each $n\in\mathbb{N}$, let $g_n=\varphi\circ f_n$. Hence, \begin{align*} \overline{\rho}_{g_n}(t)&=\inf\{\|\varphi(f_n(x))-\varphi(f_n(y))\|\mid \|x-y\|=t\}\\ &\geq \inf\{\overline{\rho}_\varphi(\|f_n(x)-f_n(y)\|)\mid \|x-y\|=t\}\\ &\geq \inf\{\overline{\rho}_\varphi (a)\mid a\in[t,(1+\varepsilon_n)t]\}, \end{align*} \noindent for all $n\in\mathbb{N}$, and all $t\in[0,\infty)$. Also, as $\varepsilon_n\in (0,1)$, we have that $\omega_{g_n}(t)\leq \omega_\varphi(2t)$, for all $n\in\mathbb{N}$, and all $t\in[0,\infty)$. 
As $Y$ has nontrivial type and as $q>q_Y$, Theorem 4.1 of \cite{MN} gives us that, for some $\Gamma>0$, $m_q(Y,n,\Gamma)=O(n^{1/q})$. Therefore, there exists $A>0$ and $n_0\in\mathbb{N}$ such that $m_q(Y,n,\Gamma)\leq An^{1/q}$, for all $n>n_0$. On the other hand, by Lemma 2.3 of \cite{MN}, $m_q(Y,n,\Gamma)\geq n^{1/q}/\Gamma$, for all $n\in\mathbb{N}$. Hence, applying Lemma \ref{lemmasolvent} with $s=n^{\alpha}$ and $r=q_X$, we get that, for all $n>n_0$, $$\inf\{\overline{\rho}_{\varphi}(2a)\mid a\in[2n^{\alpha},2n^{\alpha}+2]\}\leq \Gamma A \omega_{\varphi}\left(4\pi \Gamma\right).$$ \noindent As $\alpha<1$, we have that $\sup_n|(n+1)^\alpha-n^\alpha|<\infty$. Therefore, by Proposition \ref{ltrivial}, the supremum over $n$ of the left hand side above is infinite. As $\varphi$ is coarse, this gives us a contradiction. \end{proof} \begin{proof}[Proof of Corollary \ref{funnyOS}] If $p>1$, this follows straight from Theorem \ref{solventemb}, the fact that $q_{\ell_p}=\max\{2,p\}$ and that $\ell_p$ has nontrivial type. If $p=1$, let $g:\ell_1\to \ell_2$ be a uniform embedding (see \cite{No}, Theorem 5). Then the conclusion of the corollary must hold for the map $g\circ f:\ell_q\to \ell_2$, which implies that it holds for $f$ as well. \end{proof} \section{Property $\mathcal{Q}$.}\label{subsectionPropQ} For each $k\in\mathbb{N}$, let $\mathcal{P}_k(\mathbb{N})$ denote the set of all subset of $\mathbb{N}$ with exactly $k$ elements. If $\bar{n}\in \mathcal{P}_k(\mathbb{N})$, we always write $\bar{n}=\{n_1,\ldots,n_k\}$ in increasing order, i.e., $n_1<\ldots<n_k$. We make $\mathcal{P}_k(\mathbb{N})$ into a graph by saying that two distinct elements $\bar{n}=\{n_1,\ldots,n_k\},\bar{m}=\{m_1,\ldots,m_k\}\in \mathcal{P}_k(\mathbb{N})$ are connected if they interlace, i.e., if either $$n_1\leq m_1\leq n_2\leq \ldots\leq n_k\leq m_k\ \ \text{ or }\ \ m_1\leq n_1\leq m_2\leq \ldots m_k\leq n_k.$$ \noindent We write $\bar{n}<\bar{m}$ if $n_k<m_1$. We endow $\mathcal{P}_k(\mathbb{N})$ with the shortest path metric. So, the diameter of $\mathcal{P}_k(\mathbb{N})$ equals $k$. Kalton introduced the following property for metric spaces in \cite{K}, Section 4. For $\varepsilon,\delta>0$, a metric space $(M,d)$ is said to have \emph{Property $\mathcal{Q}(\varepsilon,\delta)$} if for all $k\in\mathbb{N}$, and all $f:\mathcal{P}_k(\mathbb{N})\to M$ with $\omega_f(1)\leq \delta$, there exists an infinite subset $\M\subset\mathbb{N}$ such that $$d(f(\bar{n}),f(\bar{m}))\leq \varepsilon,\ \ \text{ for all }\ \ \bar{n}<\bar{m}\in \mathcal{P}_k(\M).$$ \noindent For each $\varepsilon>0$, we define $\Delta_M(\varepsilon)$ as the supremum of all $\delta>0$ so that $(M,d)$ has Property $\mathcal{Q}(\varepsilon,\delta)$. For a Banach space $X$, it is clear that there exists $Q_X\geq 0$ such that $\Delta_X(\varepsilon)=Q_X\varepsilon$, for all $\varepsilon>0$. The Banach space $X$ is said to have \emph{Property $\mathcal{Q}$} if $Q_X>0$. \begin{theorem}\label{PropertyQ} Let $X$ and $Y$ be Banach spaces, and assume that $Y$ has Property $\mathcal{Q}$. If either \begin{enumerate}[(i)] \item there exists a coarse solvent map $X\to Y$, or \item there exists a uniformly continuous map $\varphi: B_X\to Y$ such that $\overline{\rho}_\varphi(t)>0$, for some $t\in (0,1)$, \end{enumerate} \noindent then, $X$ has Property $\mathcal{Q}$. In particular, if there exists a uniformly continuous almost uncollapsed map $X\to Y$, then, $X$ has Property $\mathcal{Q}$. 
\end{theorem} \begin{proof} (i) Assume $\varphi: X\to Y$ is a coarse solvent map. In particular, $\omega_\varphi(1)>0$. Fix $j\in\mathbb{N}$, and pick $R>0$ such that \begin{align}\label{EqPropQ1}\|x-y\|\in [R,R+j]\ \ \text{ implies }\ \ \|\varphi(x)-\varphi(y)\|>j, \end{align} \noindent for all $x,y\in X$. Assume that $X$ does not have Property $\mathcal{Q}$. So, $\Delta_X(R)=0$, and there exists $k\in\mathbb{N}$, and $f:\mathcal{P}_k(\mathbb{N})\to X$ with $\omega_{f}(1)\leq 1$, such that, for all infinite $\M\subset \mathbb{N}$, there exists $\bar{n}<\bar{m}\in \mathcal{P}_k(\M)$ such that $ \|f(\bar{n})-f(\bar{m})\|> R$. By standard Ramsey theory (see \cite{T}, Theorem 1.3), we can assume that $\|f(\bar{n})-f(\bar{m})\|>R$, for all $\bar{n}<\bar{m}\in \mathcal{P}_k(\M)$. Pick a positive $\theta <j$. As $\omega_{f}(1)\leq 1$, we have that $\|f(\bar{n})-f(\bar{m})\|\in [R,k]$, for all $\bar{n}<\bar{m}\in \mathcal{P}_k(\M)$. Therefore, applying Ramsey theory again, we can get an infinite subset $\M'\subset \M$, and $a\in [R,k]$ such that $\|f(\bar{n})-f(\bar{m})\|\in [a,a+\theta]$, for all $\bar{n}<\bar{m}\in\mathcal{P}_k(\M')$. By our choice of $\theta$, it follows that \begin{align}\label{EqProp2} \Big\|\frac{R}{a}f(\bar{n})-\frac{R}{a}f(\bar{m})\Big\|\in [R,R+j], \ \ \text{ for all }\ \ \bar{n}<\bar{m}\in\mathcal{P}_k(\M'). \end{align} Let $Q_Y>0$ be the constant given by the fact that $Y$ has Property $\mathcal{Q}$. Let $g=(R/a)f$. As $R/a\leq 1$, we have that $\omega_{\varphi\circ {g}}(1)\leq \omega_\varphi(1)$. As $\Delta_Y(2\omega_\varphi(1)Q_Y^{-1})= 2\omega_\varphi(1) $, we get that there exists $\M''\subset \M'$ such that \begin{align}\label{EqPropQ3} \|\varphi(g(\bar{n}))-\varphi(g(\bar{m}))\|\leq 2 \omega_\varphi(1) Q_Y^{-1}, \end{align} \noindent for all $\bar{n}<\bar{m}\in \mathcal{P}_k(\M'')$. As $j$ was chosen arbitrarily, (\ref{EqPropQ1}), (\ref{EqProp2}) and (\ref{EqPropQ3}) above gives us that $j<2 \omega_\varphi(1) Q_Y^{-1}$, for all $j\in\mathbb{N}$. As $\varphi$ is coarse, this gives us a contradiction. (ii) Assume $\varphi: B_X\to Y$ is a uniformly continuous map, and let $t\in (0,1)$ be such that $\overline{\rho}_\varphi(t)>0$. As $\varphi$ is uniformly continuous, we can pick $\rho\in (t,1)$, $s,r\in (0,\rho)$ with $s<t<r$, and $\gamma>0$, such that \begin{align}\label{EqPropQ4} \|x-y\|\in [s,r]\ \ \text{ implies }\ \ \|\varphi(x)-\varphi(y)\|>\gamma, \end{align} \noindent for all $x,y\in \rho B_X$. Assume that $X$ does not have Property $\mathcal{Q}$. So, $\Delta_{X}(s)=0$. Fix $j\in\mathbb{N}$. Then, there exists $k\in\mathbb{N}$, and $f:\mathcal{P}_k(\mathbb{N})\to X$ with $\omega_{f}(1)\leq j^{-1}$, such that, for all infinite $\M\subset \mathbb{N}$, there exists $\bar{n}<\bar{m}\in \mathcal{P}_k(\M)$ such that $ \|f(\bar{n})-f(\bar{m})\|> s$. Without loss of generality, we can assume that $ \|f(\bar{n})-f(\bar{m})\|> s$, for all $\bar{n}<\bar{m}\in\mathcal{P}_k(\M)$. Pick a positive $\theta<(r-s)$. As $ \|f(\bar{n})-f(\bar{m})\|\in [s,k]$, we can use Ramsey theory once again to pick an infinite $\M'\subset\M$, and $a\in [s,k]$ such that $ \|f(\bar{n})-f(\bar{m})\|\in [a,a+\theta]$, for all $\bar{n}<\bar{m}\in \mathcal{P}_k(\M')$. By our choice of $\theta$, it follows that \begin{align}\label{EqPropQ5} \Big\|\frac{s}{a}f(\bar{n})-\frac{s}{a}f(\bar{m})\Big\|\in [s,r], \ \ \text{ for all }\ \ \bar{n}<\bar{m}\in\mathcal{P}_k(\M'). \end{align} Let $\bar{m}_0$ be the first $k$ elements of $\M'$, and $\M''=\M'\setminus \bar{m}_0$. 
For each $\bar{n}\in\mathcal{P}_k(\M'')$, let $h(\bar{n})=(s/a)(f(\bar{n})-f(\bar{m}_0))$. Then, $h(\bar{n})\in \rho B_X$, and $\|h(\bar{n})-h(\bar{m})\|\in [s,r]$, for all $\bar{n}<\bar{m}\in\mathcal{P}_k(\M'')$. As $s/a\leq 1$, we have $\omega_h(1)\leq \omega_f(1)$. Hence, $\omega_{\varphi\circ {h}}(1)\leq \omega_\varphi(j^{-1})$. Let $Q_Y>0$ be the constant given by the fact that $Y$ has Property $\mathcal{Q}$. As $\Delta_Y(2 \omega_\varphi(j^{-1}) Q_Y^{-1})= 2 \omega_\varphi(j^{-1}) $, there exists $\M'''\subset \M''$ such that \begin{align}\label{EqPropQ6} \|\varphi(h(\bar{n}))-\varphi(h(\bar{m}))\|\leq 2 \omega_\varphi(j^{-1}) Q_Y^{-1}, \end{align} \noindent for all $\bar{n}<\bar{m}\in \mathcal{P}_k(\M''')$. As $j$ was chosen arbitrarily, (\ref{EqPropQ4}), (\ref{EqPropQ5}) and (\ref{EqPropQ6}) gives us that $\gamma<2 \omega_\varphi(j^{-1}) Q_Y^{-1}$, for all $j\in\mathbb{N}$. As $\varphi$ is uniformly continuous, this gives us a contradiction. \end{proof} We can now prove the following generalization of Theorem 5.1 of \cite{K}. \begin{theorem}\label{IntoSuperreflexive2} Let $X$ and $Y$ be Banach spaces, and assume that $Y$ is reflexive (resp. super-reflexive). If either \begin{enumerate}[(i)] \item there exists a coarse solvent map $X\to Y$, or \item there exists a uniformly continuous map $\varphi:B_X\to Y$ such that $\overline{\rho}_\varphi(t)>0$, for some $t\in (0,1)$, \end{enumerate} \noindent then, $X$ is either reflexive (resp. super-reflexive) or $X$ has a spreading model equivalent to the $\ell_1$-basis (resp. trivial type). \end{theorem} \begin{proof} By Corollary 4.3 of \cite{K}, any reflexive Banach space has Property $\mathcal{Q}$. By Theorem 4.5 of \cite{K}, a Banach space with Property $\mathcal{Q}$ must be either reflexive or have a spreading model equivalent to the $\ell_1$-basis (in particular, have nontrivial type). Therefore, if $Y$ is reflexive, the result now follows from Theorem \ref{PropertyQ}. For an index set $I$ and an ultrafilter $\mathcal{U}$ on $I$, denote by $X^I/\mathcal{U}$ the ultrapower of $X$ with respect to $\mathcal{U}$. Say $Y$ is super-reflexive. In particular, by Corollary 4.3 of \cite{K}, every ultrapower of $Y$ has Property $\mathcal{Q}$. If $X$ maps into $Y$ by a coarse and solvent map, then $X^I/\mathcal{U}$ maps into $Y^I/\mathcal{U}$ by a coarse and solvent map. Therefore, it follows from Theorem \ref{PropertyQ} that every ultrapower of $X$ has Property $Q$. Suppose $X$ has nontrivial type. Then, all ultrapowers of $X$ have nontrivial type. Therefore, by Theorem 4.5 of \cite{K}, we conclude that all ultrapowers of $X$ are reflexive. Hence, item (i) follows. Similarly, if there exists $\varphi:B_X\to Y$ as in item (ii), then the unit balls of ultrapowers of $X$ are mapped into ultrapowers of $Y$ by maps with the same properties as $\varphi$, and item (ii) follows. \end{proof} \begin{proof}[Proof of Theorem \ref{IntoSuperreflexive}] Item (ii) of Theorem \ref{IntoSuperreflexive} follows directly from item (ii) of Theorem \ref{IntoSuperreflexive2}. \end{proof} \begin{remark}\label{Gideon} The statement in Theorem \ref{IntoSuperreflexive} cannot be improved so that if $X$ embeds into a super-reflexive space, then $X$ is either super-reflexive or it has an $\ell_1$-spreading model. Indeed, it was proven in Proposition 3.1 of \cite{NaorSchechtman} that $\ell_2(\ell_1)$ strongly embeds into $L_p$, for all $p\geq 4$. As $(\oplus_n\ell_1^n)_{\ell_2}\subset \ell_2(\ell_1)$, it follows that $(\oplus_n\ell_1^n)_{\ell_2}$ strongly embeds into $L_4$. 
However $(\oplus_n\ell_1^n)_{\ell_2}$ is neither super-reflexive nor contains an $\ell_1$-spreading model. \end{remark} \iffalse To finish this section, let us show that the statement in Theorem \ref{IntoSuperreflexive} cannot be improved so that if $X$ embeds into a super-reflexive space, then $X$ is either super-reflexive or it has an $\ell_1$-spreading model. \begin{prop}\label{Gideon} $\ell_2(\ell_1)$ strongly embeds into $L_p$, for all $p\geq 4$. In particular, there exists a Banach space $X$ which strongly embeds into a super-reflexive space with the property that $X$ is neither super-reflexive nor contains an $\ell_1$-spreading model. \end{prop} \begin{proof} Let $S:\ell_2\to L_4$ be a linear isometry. By Proposition 5.3(iii) of \cite{AB}, there exists a map $\varphi:\ell_1\to \ell_2$ such that $$\|\varphi(x)-\varphi(y)\|_{\ell_2}^2=\|x-y\|_{\ell_1},$$ \noindent for all $x,y\in \ell_1$. Define a map $\Phi:\ell_2(\ell_1)\to \ell_4(L_4)$ by letting $\Phi(x)=(S(\varphi(x_n)))_n$, for all $x=(x_n)_n\in\ell_2(\ell_1)$. Hence, we have that \begin{align*} \|\Phi(x)-\Phi(y)\|_{\ell_4(L_4)}^4&=\sum_{n\in\mathbb{N}}\|S(\varphi(x_n))-S(\varphi(y_n))\|_{L_4}^4\\ &=\sum_{n\in\mathbb{N}}\|\varphi(x_n)-\varphi(y_n)\|_{\ell_2}^4\\ &=\sum_{n\in\mathbb{N}}\|x_n-y_n\|_{\ell_1}^2=\|x-y\|_{\ell_2(\ell_1)}^2, \end{align*} \noindent for all $x=(x_n)_n,y=(y_n)_n\in \ell_2(\ell_1)$. Hence, $\ell_2(\ell_1)$ strongly embeds into $L_4$. As $L_4$ strongly embeds into $L_p$, for all $p\geq 4$ (see \cite{MN2004}, Remark 5.10), we are done. For the last statement, notice that $(\oplus_n\ell_1^n)_{\ell_2}\subset \ell_2(\ell_1)$. Hence, $(\oplus_n\ell_1^n)_{\ell_2}$ strongly embeds into $L_4$. As $(\oplus_n\ell_1^n)_{\ell_2}$ is neither super-reflexive nor contains an $\ell_1$-spreading model, we are done. \end{proof} \begin{remark} The argument presented in Proposition \ref{Gideon} to show that $\ell_2(\ell_1)$ strongly embeds into $L_4$ was presented by G. Schechtman in \cite{Sc}. \end{remark} \fi \section{Embeddings into Hilbert spaces.}\label{SectionHilbert} In \cite{Ra}, Randrianarivony showed that a Banach space $X$ coarsely embeds into a Hilbert space if and only if it uniformly embeds into a Hilbert space. This result together with Theorem 5 of \cite{No}, gives a positive answer to Problem \ref{mainproblem} for $Y=\ell_p$, for $p\in[1,2]$. In this section, we show that Problem \ref{mainproblemPartII} also has a positive answer if $Y$ is $\ell_p$, for any $p\in [1,2]$. First, let us prove a simple lemma. For $\delta>0$, a subset $S$ of a metric space $(M,d)$ is called \emph{$\delta$-dense} if $d(x,S)< \delta$, for all $x\in M$. \begin{lemma}\label{SolventMapNet} Let $(M,d)$ and $(N,\partial)$ be Banach spaces and $S\subset M$ be a $\delta$-dense set, for some $\delta>0$. Let $f:M\to N$ be a coarse map such that $f_{|S}$ is solvent. Then $f$ is solvent. \end{lemma} \begin{proof} Let $n\in\mathbb{N}$. As $f_{|S}$ is solvent and $\omega_f(\delta)<\infty$, we can pick $R>0$ such that $$d(x,y)\in [R-2\delta,R+n+2\delta]\ \ \text{ implies }\ \ \partial(f(x),f(y))>n+2\omega_f(\delta),$$ \noindent for all $x,y\in S$. Pick $x,y\in X$, with $d(x,y)\in [R,R+n]$. As $S$ is $\delta$-dense, we can pick $x',y'\in S$ such that $d(x,x')\leq \delta$ and $d(y,y')\leq\delta$. Hence, $d(x',y')\in [R-2\delta,R+n+2\delta]$, which gives us that $\partial(f(x'),f(y'))>n+2\omega_f(\delta)$. Therefore, we conclude that $\partial(f(x),f(y))> n$. 
\end{proof} The next lemma is an adaptation of Proposition 2 of \cite{Ra}, and its proof is analogous to the proof of Theorem 1 of \cite{JohnsonRandrianarivony}. Before stating the lemma, we need the following definition: a map $K:X\times X\to \mathbb{R}$ is called a \emph{negative definite kernel} (resp. \emph{positive definite kernel}) if \begin{enumerate}[(i)] \item $K(x,y)=K(y,x)$, for all $x,y\in X$, and \item $\sum_{i,j} K(x_i,x_j)c_ic_j\leq 0$ (resp. $\sum_{i,j} K(x_i,x_j)c_ic_j\geq 0$), for all $n\in\mathbb{N}$, all $x_1,\ldots,x_n\in X$, and all $c_1,\ldots, c_n\in \mathbb{R}$, with $\sum_i c_i=0$. \end{enumerate} \noindent A function $f: X\to R$ is called \emph{negative definite} (resp. \emph{positive definite}) if $K(x,y)=f(x-y)$ is a negative definite kernel (resp. positive definite kernel). \begin{lemma}\label{LemmaJR} Let $X$ be a Banach space and assume that $X$ maps into a Hilbert space by a map which is coarse and solvent. Then there exist $\alpha>0$, a map $\overline{\rho}:[0,\infty)\to [0,\infty)$, with $\limsup_{t\to\infty}\overline{\rho} (t)=\infty$, and a continuous negative definite function $g: X\to \mathbb{R}$ such that \begin{enumerate}[(i)] \item $g(0)=0$, and \item $\overline{\rho}(\|x\|)\leq g(x)\leq \|x\|^{2\alpha}$, for all $x\in X$. \end{enumerate} \end{lemma} \begin{proof}[Sketch of the Proof of Lemma \ref{LemmaJR}.] Let $H$ be a Hilbert space and consider a coarse solvent map $f: X\to H$. Without loss of generality, we may assume that $\|f(x)-f(y)\|\leq \|x-y\|$, for all $x,y\in X$, with $\|x-y\|\geq 1$ (see Subsection \ref{SubsectionEmb}). \\ \textbf{Claim 1:} Let $\alpha\in (0,1/2)$. Then $X$ maps into a Hilbert space by a map which is $\alpha$-H\"{o}lder and solvent.\\ As $H$ is Hilbert, the assignment $(x, y)\mapsto\|f(x) - f(y) \|^2$ is a negative definite kernel on X (this is a simple computation and it is contained in the proof of Proposition 3.1 of \cite{Nowak2005}). Hence, for all $\alpha\in (0,1)$, the kernel $N(x, y) = \|f(x) - f(y)\|^{2\alpha}$ is also negative definite (see \cite{Nowak2005}, Lemma 4.2). So, there exists a Hilbert space $H_\alpha$ and a map $f_\alpha: X\to H_\alpha$ such that $N(x,y)=\|f_\alpha(x)-f_\alpha(y)\|^2$, for all $x,y\in X$ (see \cite{Nowak2005}, Theorem 2.3(2)). This gives us that $$\big(\overline{\rho}_f(\|x-y\|)\big)^\alpha\leq \|f_\alpha(x)-f_\alpha(y)\|\leq \|x-y\|^\alpha,$$ \noindent for all $x,y\in X$, with $\|x-y\|\geq 1$. In particular, $f_{\alpha}$ is solvent. Hence, if $N\subset X$ is a $1$-net (i.e., a maximal $1$-separated set), the restriction $f_{\alpha|N}:N\to H_\alpha$ is $\alpha$-H\"{o}lder and solvent. Using that $\alpha\in (0,1/2)$, Theorem 19.1 of \cite{WW} gives us that there exists an $\alpha$-H\"{o}lder map $F_\alpha:X\to H_\alpha$ extending $f_{\alpha|N}$. By Lemma \ref{SolventMapNet}, $F_\alpha$ is also solvent. This finishes the proof of Claim 1. By Claim 1 above, we can assume that $f:X\to H$ is an $\alpha$-H\"{o}lder solvent map, with $\alpha\in (0,1/2)$. Set $N(x,y)=\|f(x)-f(y)\|^2$, for all $x,y\in X$. So, $N$ satisfies \begin{align}\label{eqN} \big(\overline{\rho}_f(\|x-y\|)\big)^2\leq N(x,y)\leq \|x-y\|^{2\alpha}, \end{align} \noindent for all $x,y\in X$. 
Let $\mu$ be an invariant mean on the bounded functions $X\to \mathbb{R}$ (see \cite{BL}, Appendix C, for the definition of an invariant mean, and \cite{BL}, Theorem C.1, for the existence of such invariant mean), and define $$g(x)=\int_XN(y+x,y)d\mu(y),\ \ \text{ for all }\ \ x\in X.$$ \noindent Let $\overline{\rho}(t)=(\overline{\rho}_f(t))^2$, for all $t\geq 0$. As $\int_X 1d\mu=1$, Inequality \ref{eqN} gives us that items (i) and (ii) are satisfied. As $f$ is solvent, we also have that $ \limsup_{t\to \infty}\overline{\rho}(t)=\infty$. The proof that $g$ is a negative definite kernel is contained in Step 2 of \cite{JohnsonRandrianarivony} and the proof that $g$ is continuous is contained in Step 3 of \cite{JohnsonRandrianarivony}. As both proofs are simple computations, we omit them here. \end{proof} We can now prove the main theorem of this section. For that, given a probability space $(\Omega,\mathcal{A},\mu)$, we denote by $L_0(\mu)$ the space of all measurable functions $\Omega\to \C$ with metric determined by convergence in probability. \begin{thm}\label{ThmHilbertCru} Let $X$ be a Banach space. Then the following are equivalent. \begin{enumerate}[(i)] \item $X$ coarsely embeds into a Hilbert space. \item $X$ uniformly embeds into a Hilbert space. \item $X$ strongly embeds into a Hilbert space. \item $X$ maps into a Hilbert space by a map which is coarse and solvent. \item $X$ maps into a Hilbert space by a map which is uniformly continuous and almost uncollapsed. \item There is a probability space $(\Omega, \mathcal{A}, \mu)$ such that $X$ is linearly isomorphic to a subspace of $L_0(\mu)$. \end{enumerate} \end{thm} \begin{proof} We only need to show that (iv) implies (vi). Indeed, the equivalence between (i), (ii), and (vi) were established in \cite{Ra}, Theorem 1 (see the paragraph preceeding Theorem 1 of \cite{Ra} as well). By \cite{Ro}, Theorem 2, if $X$ uniformly embeds into a Hilbert space $H$ then $X$ strongly embeds into $\ell_2(H)$. Hence, (ii) and (iii) are also equivalent. Using Proposition \ref{Rosendal} with $\mathcal{E}$ being the standard basis of $\ell_2$, we get that (v) implies (iv). Hence, once we show that (iv) implies (vi), all the equivalences will be established. Let $H$ be a Hilbert space and $f:X\to H$ be a coarse solvent map. Let $\alpha>0$, $\overline{\rho}$ and $g:X\to \mathbb{R}$ be given by Lemma \ref{LemmaJR}. Define $F(x)=e^{-g(x)}$, for all $x\in X$. So, $F$ is a positive definite function (see \cite{Nowak2005}, Theorem 2.2). As $F$ is also continuous, by Lemma 4.2 of \cite{AMM} applied to $F$, there exist a probability space $(\Omega,\mathcal{A},\mu) $ and a continuous linear operator $U:X\to L_0(\mu)$ such that $$F(tx)=\int_\Omega e^{itU(x)(w)}d\mu(w), \ \ \text{for all} \ \ t\in \mathbb{R}, \ \ \text{and all}\ \ x\in X.$$ As $U$ is continuous, we only need to show that $U$ is injective and its inverse is continuous. Suppose false. Then there exists a sequence $(x_n)_n$ in the unit sphere of $X$ such that $\lim_nU(x_n)=0$. By the definition of convergence in $L_0(\mu)$, this gives us that $\lim_nF(tx_n)=1$, for all $t\in \mathbb{R}$. As $\limsup_{t\to \infty}\overline{\rho}(t)=\infty$, we can pick $t_0>0$ such that $e^{-\overline{\rho}(t_0)}<1/2$. Hence, we have that $$ F(t_0x_n)=e^{-g(t_0x_n)}\leq e^{ -\overline{\rho}(\|t_0x_n\|)}= e^{ -\overline{\rho}(t_0)}<\frac{1}{2}, \ \ \text{ for all } \ \ n\in\mathbb{N}.$$ \noindent As $\lim_nF(t_0x_n)=1$, this gives us a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{ThmHilbert}.] 
This is a trivial consequence of Theorem \ref{ThmHilbertCru} and the equivalence between coarse and uniform embeddability into $\ell_p$, for $p\in[1,2]$ (see \cite{No}, Theorem 5). \end{proof} \section{Embeddings into $\ell_\infty$.}\label{Sectionlinfty} Kalton proved in \cite{Ka4}, Theorem 5.3, that uniform embeddability into $\ell_\infty$, coarse embeddability into $\ell_\infty$ and Lipschitz embeddability into $\ell_\infty$ are all equivalent. In this section, we show that Problem \ref{mainproblemPartII} also has a positive answer if $Y=\ell_\infty$. The following lemma is Lemma 5.2 of \cite{Ka4}. Although in \cite{Ka4} the hypothesis on the map are stronger, this is not used in their proof. \begin{lemma}\label{SonventLipschitz} Let $X$ be a Banach space and assume that there exists a Lipschitz map $X\to \ell_\infty$ that is also almost uncollapsed. Then $X$ Lipschitz embeds into $\ell_\infty$. \end{lemma} \begin{proof} Let $f:X\to \ell_\infty$ be a Lipschitz almost uncollapsed map. Pick $t>0$ such that $\overline{\rho}_f(t)>0$. Define a map $F:X\to \ell_\infty(\Q_+\times \mathbb{N})$ by setting $F(x)(q,n)=q^{-1}f(qx)_n$, for all $x\in X$, and all $(q,n)\in \Q_+\times \mathbb{N}$. Then $$\|F(x)-F(y)\|=\sup_{(q,n)\in \Q_+\times \mathbb{N}}q^{-1}\big|f(qx)_n-f(qy)_n\big|\leq \text{Lip} (f)\cdot\|x-y\|.$$ \noindent So, $F$ is also Lipschitz. Now notice that, as $f$ is continuous, we have that $$\|F(x)-F(y)\|=\sup_{q>0}q^{-1}\|f(qx)-f(qy)\|.$$ \noindent Hence, if $x\neq y$, by letting $q=t\|x-y\|^{-1}$, we obtain that $$\|F(x)-F(y)\|\geq \frac{\|x-y\|}{t}\cdot\Big\|f\Big(\frac{tx}{\|x-y\|}\Big)-f\Big(\frac{ty}{\|x-y\|}\Big)\Big\|\geq \frac{\overline{\rho}_f(t)}{t}\cdot\|x-y\|.$$ \noindent So, $F$ is a Lipschitz embedding. \end{proof} \begin{proof}[Proof of Theorem \ref{Thmlinfty}.] By Theorem 5.3 of \cite{Ka4}, items (i), (ii) and (iii) of Problem \ref{mainproblem} are all equivalent. Using Proposition \ref{Rosendal} with $\mathcal{E}$ being the standard basis of $c_0$, we have that item (v) of Problem \ref{mainproblemPartII} implies item (iv) of Problem \ref{mainproblemPartII}. Hence, we only need to show that item (iv) of Problem \ref{mainproblemPartII} implies that $X$ Lipschitz embeds into $\ell_\infty$. For that, let $f: X\to \ell_\infty$ be a coarse solvent map. Without loss of generality, we may assume that $\|f(x)-f(y)\|\leq \|x-y\|$, for all $x,y\in X$, with $\|x-y\|\geq 1$. Let $N\subset X$ be a $1$-net. Then $f_{|N}$ is $1$-Lipschitz and solvent. Recall that $\ell_\infty$ is a \emph{$1$-absolute Lipschitz retract}, i.e., every Lipschitz map $g:A\to \ell_\infty$, where $M$ is a metric space and $A\subset M$, has a $\text{Lip}(g)$-Lipschitz extension (see \cite{Ka3}, Subsection 3.3). Let $F$ be a Lipschitz extension of $f_{|N}$. By Lemma \ref{SolventMapNet}, $F$ is solvent. Hence, by Lemma \ref{SonventLipschitz}, it follows that $X$ Lipschitz embeds into $\ell_\infty$. \end{proof} \section{Open questions.} Besides Problem \ref{mainproblem} and Problem \ref{mainproblemPartII}, there are many other interesting questions regarding those weaker kinds of embeddings. We mention a couple of them in this section. Raynaud proved in \cite{Raynaud1983} (see the corollary in page 34 of \cite{Raynaud1983}) that if a Banach space $X$ uniformly embeds into a superstable space (see \cite{Raynaud1983} for definitions), then $X$ must contain an $\ell_p$, for some $p\in[1,\infty)$. Hence, in the context of those weaker embeddings, it is natural to ask the following. 
\begin{problem} Say an infinite dimensional Banach space $X$ maps into a superstable space by a map which is both uniformly continuous and almost uncollapsed. Does it follow that $X$ must contain $\ell_p$, for some $p\in [1,\infty)$? \end{problem} Similarly, it was proved in \cite{BragaSwift} that if a Banach space $X$ coarsely embeds into a superstable space, then $X$ must contain an $\ell_p$-spreading model, for some $p\in[1,\infty)$. We ask the following. \begin{problem} Say an infinite dimensional Banach space $X$ maps into a superstable space by a map which is both coarse and solvent. Does it follow that $X$ must contain an $\ell_p$-spreading model, for some $p\in [1,\infty)$? \end{problem} The properties of a map being solvent (resp. almost uncollapsed) are not necessarily stable under Lipschitz isomorphisms. Hence, the following question seems to be really important for the theory of solvent (resp. almost uncollapsed) maps between Banach spaces. \begin{problem} Assume that there is no coarse solvent (resp. uniformly continuous almost uncollapsed) map $X\to Y$. Is this also true for any renorming of $X$? \end{problem} Lastly, we would like to note that we have no results for maps $X \to Y$ which are coarse and almost uncollapsed. Hence, we ask the following. \begin{problem} What can we say if $X$ maps into $Y$ by a map which is both coarse and almost uncollapsed? Is this enough to obtain any restriction on the geometries of $X$ and $Y$? \end{problem} \noindent \textbf{Acknowledgments:} The author would like to thank his adviser C. Rosendal for all the help and attention he gave to this paper. The author would also like to thank G. Lancien for suggesting to look at Kalton's Property $\mathcal{Q}$. \end{document}
arXiv
Problem Set prepared by B. J. Venkatachala for Olympiad Orientation Programme-2014, North-East Regions
Posted 18 Aug, 17:01h in Olympiad, Problems by Gonit Sora
1) A triangle has sides 13, 20, 21. Is there an altitude having integral length?
2) What is the minimum number of years needed so that the total number of months in them is a number containing only the digits 0 and 1?
3) Suppose a, b are integers such that 9 divides $a^2+ab+b^2$. Prove that 3 divides both a and b.
4) Suppose x and y are real numbers such that $(x+\sqrt{x^2+1})(y+\sqrt{y^2+1})=1$. Find x+y.
5) Solve the system for positive real x, y: $x^2+y=7$, $x+y^2=11$.
6) Suppose p and $p^2+2$ are primes. Prove that $p^3+2$ is also a prime.
7) Prove that if 2n+1 and 3n+1 are square numbers for some positive integer n, then 5n+3 can't be a prime number.
8) Show that $65^{64}+64$ is a composite number.
9) Four different digits are chosen, and all possible positive four-digit numbers of distinct digits are constructed out of them. The sum of these four-digit numbers is found to be 186648. What may be the four digits used?
10) Solve the simultaneous equations $x-xy+y=1$, $x^2+y^2=17$.
11) Given eight 3-digit numbers, form all possible 6-digit numbers by writing two 3-digit numbers side-by-side. Prove that among these 6-digit numbers, there is always a number divisible by 7.
12) Find all pairs of positive integers (m, n) such that $|3^m-2^n|=1$.
13) For any set of n integers, show that it contains a nonempty subset the sum of whose elements is divisible by n.
14) Find all triples of natural numbers (a, b, c) such that the remainder after dividing the product of any two by the other is 1.
15) If a, b, c are real numbers such that a+b+c=0, prove that $\frac{a^5+b^5+c^5}{5}= \frac{a^3+b^3+c^3}{3}\cdot\frac{a^2+b^2+c^2}{2}$.
16) Solve the equation: $16[x]^2+16\{x\}^2-24x=11$, where $[x]$ denotes the integer part and $\{x\}$ the fractional part of $x$.
17) Find the least positive integer having 30 positive divisors.
18) Let a and b be real numbers such that $a^3-3a^2+5a-17=0$ and $b^3-3b^2+5b+11=0$. Find a+b.
19) Is there a square number the sum of whose digits is 2015?
20) Find all numbers a, b such that $(x-1)^2$ divides $ax^4+bx^3+1$.
21) Suppose P(x) is a polynomial with integer coefficients such that P(0) and P(1) are both odd numbers. Prove that P(x)=0 has no integer root.
22) Let $p(x)=x^2+ax+b$, where a, b are integers, and let m be a given integer. Prove that there exists an integer n such that $p(m)p(m+1)=p(n)$.
23) Let P(x) be a cubic polynomial such that P(1)=1, P(2)=2, P(3)=3 and P(4)=5. Find P(6).
24) For any four positive real numbers $a_1,a_2,a_3,a_4$ (indices taken cyclically, so $a_5=a_1$), prove the inequality: $\frac{a_1}{a_1+a_2}+\frac{a_2}{a_2+a_3}+\frac{a_3}{a_3+a_4}+\frac{a_4}{a_4+a_5}\le\frac{a_1}{a_2+a_3}+\frac{a_2}{a_3+a_4}+\frac{a_3}{a_4+a_5}+\frac{a_4}{a_1+a_2}$.
25) If a, b, c are positive real numbers, prove that $3(a+\sqrt{ab}+\sqrt[3]{abc})\le 4(a+b+c)$.
26) How many zeros are there at the end of 1000!?
27) Suppose x, y, z are integers such that $x^2+y^2=z^2$. Prove that 60 divides xyz.
28) Find all 5-term geometric progressions of positive integers whose sum is 211.
29) Find all arithmetic progressions of natural numbers such that for each n, the sum of the first n terms of the progression is a perfect square.
30) Consider the two squares lying inside a triangle ABC with $\angle A=90^{\circ}$ with their vertices on the sides of ABC: one square having its sides parallel to AB and AC, the other having two sides parallel to the hypotenuse. Determine which of these two squares has greater area.
31) How many 5-digit numbers contain at least one 5?
32) Let a, b, c, d be four integers. Prove that (a-b)(a-c)(a-d)(b-c)(b-d)(c-d) is always divisible by 12.
33) Let N be a 16-digit positive integer. Show that we can find some consecutive digits of N such that the product of these digits is a square.
34) Let ABCD be a unit square and P be an interior point such that $\angle PAB=\angle PBA=15^{\circ}$. Show that DPC is an equilateral triangle.
35) Let ABC be an isosceles triangle in which $\angle A=20^{\circ}$. Let D be a point on AC such that AD=BC. Find $\angle ABD$.
36) Let ABC be an isosceles triangle in which $\angle A=100^{\circ}$. Extend AB to D such that AD=BC. Find $\angle ADC$.
37) Let ABC be an isosceles triangle with AB=AC and $\angle A=20^{\circ}$. Let D, E be points on AB and AC respectively such that $\angle CBE=50^{\circ}$ and $\angle BCD=60^{\circ}$. Determine $\angle EDC$.
38) In a triangle ABC, the altitude, the angle bisector and the median from A divide $\angle A$ into four equal parts. Find the angles of ABC.
39) In an equilateral triangle ABC, there is a point P which is at distances 3, 4, 5 from the three vertices respectively. What is the area of the triangle?
40) In a square ABCD, there is a point P such that PA=3, PB=7 and PD=5. What is the area of ABCD?
41) Let $x_1,x_2$ be the roots of $x^2+ax+bc=0$ and $x_2,x_3$ be those of $x^2+bx+ac=0$. Suppose $ac\ne bc$. Prove that $x_1,x_3$ are the roots of $x^2+cx+ab=0$.
42) The polynomial $p(x)=ax^3+bx^2+cx+d$ has integer coefficients a, b, c, d with ad odd and bc even. Prove that the equation p(x)=0 has at least one irrational root.
43) If a, b, c are the sides of a triangle, prove that $\frac{3}{2}\le\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}<2$.
44) Let a, b, c be the sides of a triangle such that $\frac{bc}{b+c}+\frac{ca}{c+a}+\frac{ab}{a+b}=s$, where s is the semi-perimeter of the triangle. Prove that the triangle is equilateral.
45) Let a, b, c, d be positive real numbers. Prove that $\frac{a}{b+2c+3d}+\frac{b}{c+2d+3a}+\frac{c}{d+2a+3b}+\frac{d}{a+2b+3c}\ge\frac{2}{3}$.
46) Suppose n is a natural number such that 2n+1 and 3n+1 are both perfect squares. Prove that 40 divides n.
47) Let ABC be a triangle in which AB<AC. Let D be the mid-point of the arc BC of the circumcircle of ABC containing A. Draw DE perpendicular to AC (with E on AC). Prove that AB+AE=BC.
48) Construct an equilateral triangle, only with ruler and compass, which has area equal to that of a given triangle.
49) Show that for each natural number n, the number of integer solutions (x,y) of the equation $x^2+xy+y^2=n$ is a multiple of 6.
50) For any $n\in \mathbb{N}$, let $a_n$ denote the number of positive integers whose digits are from the set {1,3,4} and the sum of the digits is n. Prove that $a_{2n}$ is a perfect square for every $n\in \mathbb{N}$.
51) Solve $2^t=3^x5^y+7^z$ in positive integers.
52) Let n be a positive integer such that 2n+1 and 3n+1 are perfect squares. Prove that 5n+3 is a composite integer.
53) Let S denote the set of all integers which can be expressed in the form $a^3+b^3+c^3-3abc$, where a, b, c are integers. Prove that S is closed under multiplication.
54) Let a, b, c be positive integers such that both $\frac{a}{b}+\frac{b}{c}+\frac{c}{a}$ and $\frac{a}{c}+\frac{b}{a}+\frac{c}{b}$ are integers. Prove that a=b=c.
55) Positive integers a, b, c are such that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}<1$. Prove that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\le\frac{41}{42}$.
56) Find all integers x, y, z such that $x^3+2y^3=4z^3$.
57) Find the largest power of 3 that divides $10^k-1$, where k is any positive integer.
58) Find the sum $\sum^{100}_{k=1}\frac{k}{k^4+k^2+1}$.
59) Find all ordered pairs (p,q) of prime numbers such that pq divides $5^p+5^q$.
60) Let a, b, c be positive real numbers such that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=1$. Prove that $(a-1)(b-1)(c-1)\ge 8$.
61) Find the sum $\sum^{2014}_{k=1}\sqrt{1+\frac{1}{k^2}+\frac{1}{(k+1)^2}}$.
62) Around a circle are written all positive integers from 1 to N, $N\ge 2$, in such a way that any two adjacent numbers have at least one common digit; for example, 12 and 26 can occur as adjacent numbers, but not 16 and 24. Find the least N for which this is possible.
63) The lengths of the sides of a quadrilateral are positive integers. It is known that the sum of any three of them is divisible by the fourth one. Prove that two sides of the quadrilateral are equal.
64) Prove that $n^{12}+64$ has at least 4 distinct factors (other than 1 and itself), for any n>1.
65) Suppose a, b, c, d are integers such that a+b+c+d=0. Prove that $2a^4+2b^4+2c^4+2d^4+8abcd$ is a perfect square.
66) Solve the simultaneous equations: $a\sqrt{a}+b\sqrt{b}=183$, $a\sqrt{b}+b\sqrt{a}=182$.
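Two of the problems above (17 and 26) ask for explicit numbers and lend themselves to a quick computational sanity check. The short Python sketch below is not part of the original problem set; it is only an illustrative brute-force check, and the expected outputs noted in the comments follow from a direct divisor count and from Legendre's formula for the exponent of 5 in 1000!.

```python
# Brute-force sanity checks for two of the computational problems above.
# Problem 17: least positive integer with exactly 30 positive divisors.
# Problem 26: number of trailing zeros of 1000!.

def num_divisors(n: int) -> int:
    """Count the positive divisors of n by trial division up to sqrt(n)."""
    count, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 2 if d * d != n else 1
        d += 1
    return count

def least_with_divisors(target: int) -> int:
    """Smallest positive integer having exactly `target` positive divisors."""
    n = 1
    while num_divisors(n) != target:
        n += 1
    return n

def trailing_zeros_factorial(n: int) -> int:
    """Trailing zeros of n!, i.e. the number of factors of 5 in 1..n."""
    count, p = 0, 5
    while p <= n:
        count += n // p
        p *= 5
    return count

if __name__ == "__main__":
    print(least_with_divisors(30))        # Problem 17 -> 720
    print(trailing_zeros_factorial(1000)) # Problem 26 -> 249
```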
CommonCrawl
Numerical stability analysis of a vortex ring with swirl
Yuji Hattori, Francisco J. Blanco-Rodríguez, Stéphane Le Dizès
Journal: Journal of Fluid Mechanics / Volume 878 / 10 November 2019
Published online by Cambridge University Press: 04 September 2019, pp. 5-36
Print publication: 10 November 2019
The linear instability of a vortex ring with swirl with Gaussian distributions of azimuthal vorticity and velocity in its core is studied by direct numerical simulation. The numerical study is carried out in two steps: first, an axisymmetric simulation of the Navier–Stokes equations is performed to obtain the quasi-steady state that forms a base flow; then, the equations are linearized around this base flow and integrated for a sufficiently long time to obtain the characteristics of the most unstable mode. It is shown that the vortex rings are subjected to curvature instability as predicted analytically by Blanco-Rodríguez & Le Dizès (J. Fluid Mech., vol. 814, 2017, pp. 397–415). Both the structure and the growth rate of the unstable modes obtained numerically are in good agreement with the analytical results. However, a small overestimation (e.g. 22 % for a curvature instability mode) by the theory of the numerical growth rate is found for some instability modes. This is most likely due to evaluation of the critical layer damping which is performed for the waves on axisymmetric line vortices in the analysis. The actual position of the critical layer is affected by deformation of the core due to the curvature effect; as a result, the damping rate changes since it is sensitive to the position of the critical layer. Competition between the curvature and elliptic instabilities is also investigated. Without swirl, only the elliptic instability is observed in agreement with previous numerical and experimental results. In the presence of swirl, sharp bands of both curvature and elliptic instabilities are obtained for $\varepsilon=a/R=0.1$, where $a$ is the vortex core radius and $R$ the ring radius, while the elliptic instability dominates for $\varepsilon=0.18$. New types of instability mode are also obtained: a special curvature mode composed of three waves is observed and spiral modes that do not seem to be related to any wave resonance. The curvature instability is also confirmed by direct numerical simulation of the full Navier–Stokes equations. Weakly nonlinear saturation and subsequent decay of the curvature instability are also observed.
Internal shear layers from librating objects
Stéphane Le Dizès, Michael Le Bars
Journal: Journal of Fluid Mechanics / Volume 826 / 10 September 2017
Print publication: 10 September 2017
In this work, we analyse the internal shear layer structures generated by the libration of an axisymmetric object in an unbounded fluid rotating at a rotation rate $\Omega^{\ast}$ using direct numerical simulation and small Ekman number asymptotic analysis. We consider weak libration amplitude and libration frequency $\omega^{\ast}$ within the inertial wave interval $(0,2\Omega^{\ast})$ such that the fluid dynamics is mainly described by a linear axisymmetric harmonic solution. 
The internal shear layer structures appear along the characteristic cones of angle $\unicode[STIX]{x1D703}_{c}=\text{acos}(\unicode[STIX]{x1D714}^{\ast }/(2\unicode[STIX]{x1D6FA}^{\ast }))$ which are tangent to the librating object at so-called critical latitudes. These layers correspond to thin viscous regions where the singularities of the inviscid solution are smoothed. We assume that the velocity field in these layers is described by the class of similarity solutions introduced by Moore & Saffman (Phil. Trans. R. Soc. Lond. A, vol. 264, 1969, pp. 597–634). These solutions are characterized by two parameters only: a real parameter $m$ , which measures the strength of the underlying singularity, and a complex amplitude coefficient $C_{0}$ . We first analyse the case of a disk for which a general asymptotic solution for small Ekman numbers is known when the disk is in a solid plane. We demonstrate that the numerical solutions obtained for a free disk and for a disk in a solid plane are both well described by the asymptotic solution and by its similarity form within the internal shear layers. For the disk, we obtain a parameter $m=1$ corresponding to a Dirac source at the edge of the disk and a coefficient $C_{0}\propto E^{1/6}$ where $E$ is the Ekman number. The case of a smoothed librating object such as a spheroid is found to be different. By asymptotically matching the boundary layer solution to similarity solutions close to a critical latitude on the surface, we show that the adequate parameter $m$ for the similarity solution is $m=5/4$ , leading to a coefficient $C_{0}\propto E^{1/12}$ , that is larger than for the case of a disk for small Ekman numbers. A simple general expression for $C_{0}$ valid for any axisymmetric object is obtained as a function of the local curvature radius at the critical latitude in agreement with this change of scaling. This result is tested and validated against direct numerical simulations. Curvature instability of a curved Batchelor vortex Francisco J. Blanco-Rodríguez, Stéphane Le Dizès Journal: Journal of Fluid Mechanics / Volume 814 / 10 March 2017 In this paper, we analyse the curvature instability of a curved Batchelor vortex. We consider this short-wavelength instability when the radius of curvature of the vortex centreline is large compared with the vortex core size. In this limit, the curvature instability can be interpreted as a resonant phenomenon. It results from the resonant coupling of two Kelvin modes of the underlying Batchelor vortex with the dipolar correction induced by curvature. The condition of resonance of the two modes is analysed in detail as a function of the axial jet strength of the Batchelor vortex. In contrast to the Rankine vortex, only a few configurations involving $m=0$ and $m=1$ modes are found to become the most unstable. The growth rate of the resonant configurations is systematically computed and used to determine the characteristics of the most unstable mode as a function of the curvature ratio, the Reynolds number and the axial flow parameter. The competition of the curvature instability with another short-wavelength instability, which was considered in a companion paper (Blanco-Rodríguez & Le Dizès, J. Fluid Mech., vol. 804, 2016, pp. 224–247), is analysed for a vortex ring. A numerical error found in this paper, which affects the relative strength of the elliptic instability, is also corrected. 
We show that the curvature instability becomes the dominant instability in large rings as soon as axial flow is present (vortex ring with swirl). Elliptic instability of a curved Batchelor vortex Journal: Journal of Fluid Mechanics / Volume 804 / 10 October 2016 Print publication: 10 October 2016 The occurrence of the elliptic instability in rings and helical vortices is analysed theoretically. The framework developed by Moore & Saffman (Proc. R. Soc. Lond. A, vol. 346, 1975, pp. 413–425), where the elliptic instability is interpreted as a resonance of two Kelvin modes with a strained induced correction, is used to obtain the general stability properties of a curved and strained Batchelor vortex. Explicit expressions for the characteristics of the three main unstable modes are obtained as a function of the axial flow parameter of the Batchelor vortex. We show that vortex curvature adds a contribution to the elliptic instability growth rate. The results are applied to a single vortex ring, an array of alternate vortex rings and a double helical vortex. Instability of a boundary layer flow on a vertical wall in a stably stratified fluid Jun Chen, Yang Bai, Stéphane Le Dizès Journal: Journal of Fluid Mechanics / Volume 795 / 25 May 2016 The stability of a horizontal boundary layer flow on a vertical wall in a viscous stably stratified fluid is considered in this work. A temporal stability analysis is performed for a tanh velocity profile as a function of the Reynolds number $Re=UL/{\it\nu}$ and the Froude number $F=U/(LN)$ where $U$ is the main stream velocity, $L$ the boundary layer thickness, $N$ the buoyancy frequency and ${\it\nu}$ the kinematic viscosity. The diffusion of density is neglected. The boundary layer flow is found to be unstable with respect to two instabilities. The first one is the classical viscous instability which gives rise to Tollmien–Schlichting (TS) waves. We demonstrate that, even in the presence of stratification, the most unstable TS wave remains two-dimensional and therefore independent of the Froude number. The other instability is three-dimensional, inviscid in nature and associated with the stratification. It corresponds to the so-called radiative instability. We show that this instability appears first for $Re\geqslant Re_{c}^{(r)}\approx 1995$ for a Froude number close to 1.5 whereas the viscous instability develops for $Re\geqslant Re_{c}^{(v)}\approx 3980$ . For large Reynolds numbers, the radiative instability is also shown to exhibit a much larger growth rate than the viscous instability in a large Froude number interval. We argue that this instability could develop in experimental facilities as well as in geophysical situations encountered in ocean and atmosphere. Internal structure of vortex rings and helical vortices Francisco J. Blanco-Rodríguez, Stéphane Le Dizès, Can Selçuk, Ivan Delbende, Maurice Rossi Journal: Journal of Fluid Mechanics / Volume 785 / 25 December 2015 Print publication: 25 December 2015 The internal structure of vortex rings and helical vortices is studied using asymptotic analysis and numerical simulations in cases where the core size of the vortex is small compared to its radius of curvature, or to the distance to other vortices. Several configurations are considered: a single vortex ring, an array of equally-spaced rings, a single helix and a regular array of helices. For such cases, the internal structure is assumed to be at leading order an axisymmetric concentrated vortex with an internal jet. 
A dipolar correction arises at first order and is shown to be the same for all cases, depending only on the local vortex curvature. A quadrupolar correction arises at second order. It is composed of two contributions, one associated with local curvature and another one arising from a non-local external 2-D strain field. This strain field itself is obtained by performing an asymptotic matching of the local internal solution with the external solution obtained from the Biot–Savart law. Only the amplitude of this strain field varies from one case to another. These asymptotic results are thereafter confronted with flow solutions obtained by direct numerical simulation (DNS) of the Navier–Stokes equations. Two different codes are used: for vortex rings, the simulations are performed in the axisymmetric framework; for helices, simulations are run using a dedicated code with built-in helical symmetry. Quantitative agreement is obtained. How these results can be used to theoretically predict the occurrence of both the elliptic instability and the curvature instability is finally addressed. Wave field and zonal flow of a librating disk Stéphane Le Dizès In this work, we provide a viscous solution of the wave field generated by librating a disk (harmonic oscillation of the rotation rate) in a stably stratified rotating fluid. The zonal flow (mean flow correction) generated by the nonlinear interaction of the wave field is also calculated in the weakly nonlinear framework. We focus on the low dissipative limit relevant for geophysical applications and for which the wave field and the zonal flow exhibit generic features (Ekman scaling, universal structures, etc.). General expressions are obtained which depend on the disk radius $a^{\ast }$ , the libration frequency ${\it\omega}^{\ast }$ , the rotation rate ${\it\Omega}^{\ast }$ of the frame, the buoyancy frequency $N^{\ast }$ of the fluid, its kinematic diffusion ${\it\nu}^{\ast }$ and its thermal diffusivity ${\it\kappa}^{\ast }$ . When the libration frequency is in the inertia-gravity frequency interval ( $\min ({\it\Omega}^{\ast },N^{\ast })<{\it\omega}^{\ast }<\max ({\it\Omega}^{\ast },N^{\ast })$ ), the presence of conical internal shear layers is observed in which the spatial structures of the harmonic response and of the mean flow correction are provided. At the point of focus of these internal shear layers on the rotation axis, the largest amplitudes are obtained: the angular velocity of the harmonic response and the mean flow correction are found to be $O({\it\varepsilon}E^{-1/3})$ and $({\it\varepsilon}^{2}E^{-2/3})$ respectively, where ${\it\varepsilon}$ is the libration amplitude and $E={\it\nu}^{\ast }/({\it\Omega}^{\ast }a^{\ast 2})$ is the Ekman number. We show that the solution in the internal shear layers and in the focus region is at leading order the same as that generated by an oscillating source of axial flow localized at the edge of the disk (oscillating Dirac ring source). Response of a stratified boundary layer on a tilted wall to surface undulations Pierre-Yves Passaggia, Patrice Meunier, Stéphane Le Dizès Journal: Journal of Fluid Mechanics / Volume 751 / 25 July 2014 Print publication: 25 July 2014 The structure of a stratified boundary layer over a tilted bottom with a small streamwise undulation is studied theoretically and numerically. 
We show that the tilt of the boundary can induce strong density variations and wall-transverse velocities in the critical layer when the frequency of the forcing by the topography $\def \xmlpi #1{}\def \mathsfbi #1{\boldsymbol {\mathsf {#1}}}\let \le =\leqslant \let \leq =\leqslant \let \ge =\geqslant \let \geq =\geqslant \def \Pr {\mathit {Pr}}\def \Fr {\mathit {Fr}}\def \Rey {\mathit {Re}}kU(z_c)$ is equal to the transverse Brunt–Väisälä frequency $N \sin \alpha $ ( $N$ being the vertical Brunt–Väisälä frequency). The viscous solution in the critical layer, obtained and compared with direct numerical simulation results, is in good agreement for both the scaling and the spatial structure. The amplitude of the transverse velocity response is also shown to exhibit quasi-resonance peaks when the stratification strength is varied. Libration-induced mean flow in a spherical shell Alban Sauret, Stéphane Le Dizès We investigate the flow in a spherical shell subject to a time harmonic oscillation of its rotation rate, also called longitudinal libration, when the oscillation frequency is larger than twice the mean rotation rate. In this frequency regime, no inertial waves are directly excited by harmonic forcing. We show, however, that, through nonlinear interactions in the Ekman layers, it can generate a strong mean zonal flow in the interior. An analytical theory is developed using a perturbative approach in the limit of small libration amplitude $\varepsilon $ and small Ekman number $E$ . The mean flow is found to be at leading order an azimuthal flow that scales as the square of the libration amplitude and depends only on the cylindrical radius coordinate. The mean flow also exhibits a discontinuity across the cylinder tangent to the inner sphere. We show that this discontinuity can be smoothed through multi-scale Stewartson layers. The mean flow is also found to possess a weak axial flow that scales as $O({\varepsilon }^{2} {E}^{5/ 42} )$ in the Stewartson layers. The analytical solution is compared to axisymmetric numerical simulations, and a good agreement is demonstrated. Inviscid instability of a stably stratified compressible boundary layer on an inclined surface Julien Candelier, Stéphane Le Dizès, Christophe Millet The three-dimensional stability of an inflection-free boundary layer flow of length scale and maximum velocity in a stably stratified and compressible fluid of constant Brunt–Väisälä frequency , sound speed and stratification length is examined in an inviscid framework. The shear plane of the boundary layer is assumed to be inclined at an angle with respect to the vertical direction of stratification. The stability analysis is performed using both numerical and theoretical methods for all the values of and Froude number . When non-Boussinesq and compressible effects are negligible ( and ), the boundary layer flow is found to be unstable for any as soon as . Compressible and non-Boussinesq effects are considered in the strongly stratified limit: they are shown to have no influence on the stability properties of an inclined boundary layer (when ). In this limit, the instability is associated with the emission of internal-acoustic waves. Shear instability in a stratified fluid when shear and stratification are not aligned The effect of an inclination angle of the shear with respect to the stratification on the linear properties of the shear instability is examined in the work. 
For this purpose, we consider a two-dimensional plane Bickley jet of width and maximum velocity in a stably stratified fluid of constant Brunt–Väisälä frequency in an inviscid and Boussinesq framework. The plane of the jet is assumed to be inclined with an angle with respect to the vertical direction of stratification. The stability analysis is performed using both numerical and theoretical methods for all the values of and Froude number . We first obtain that the most unstable mode is always a two-dimensional Kelvin–Helmholtz (KH) sinuous mode. The condition of stability based on the Richardson number , which reads here , is recovered for . But when , that is, when the directions of shear and stratification are not perfectly aligned, the Bickley jet is found to be unstable for all Froude numbers. We show that two modes are involved in the stability properties. We demonstrate that when is decreased below , there is a 'jump' from one two-dimensional sinuous mode to another. For small Froude numbers, we show that the shear instability of the inclined jet is similar to that of a horizontal jet but with a 'horizontal' length scale . In this regime, the characteristics (oscillation frequency, growth rate, wavenumber) of the most unstable mode are found to be proportional to . For large Froude numbers, the shear instability of the inclined jet is similar to that of a vertical jet with the same scales but with a different Froude number, . It is argued that these results could be valid for any type of shear flow. Radiative instability of the flow around a rotating cylinder in a stratified fluid XAVIER RIEDINGER, STÉPHANE LE DIZÈS, PATRICE MEUNIER Journal: Journal of Fluid Mechanics / Volume 672 / 10 April 2011 Print publication: 10 April 2011 The stability of the flow around a rotating cylinder in a fluid linearly stratified along the cylinder axis is studied numerically and experimentally for moderate Reynolds numbers. The flow is assumed potential and axisymmetric with an angular velocity profile Ω = 1/r2, where r is the radial coordinate. Neglecting density diffusion and non-Boussinesq effects, the properties of the linear normal modes are first provided. A comprehensive stability diagram is then obtained for Froude numbers between 0 and 3 and Reynolds numbers below 1000. The main result is that the potential flow, which is stable for a homogeneous fluid, becomes unstable for Froude number close to one and for Reynolds numbers larger than 360. The numerical results are then compared with experimental results obtained using shadowgraph and synthetic Schlieren techniques. Two symmetrical helical modes are found to be simultaneously unstable. We show that these modes exhibit an internal gravity wave structure extending far from the cylinder in agreement with the theory. Their wavelength and frequency are shown to be in good agreement with the numerical predictions for a large range of Froude and Reynolds numbers. These experimental results are the first indisputable evidence of the radiative instability. Temporal instability modes of supersonic round jets LUIS PARRAS, STÉPHANE LE DIZÈS In this study, a comprehensive inviscid temporal stability analysis of a compressible round jet is performed for Mach numbers ranging from 1 to 10. 
We show that in addition to the Kelvin–Helmholtz instability modes, there exist for each azimuthal wavenumber three other types of modes (counterflow subsonic waves, subsonic waves and supersonic waves) whose characteristics are analysed in detail using a WKBJ theory in the limit of large axial wavenumber. The theory is constructed for any velocity and temperature profile. It provides the phase velocity and the spatial structure of the modes and describes qualitatively the effects of base-flow modifications on the mode characteristics. The theoretical predictions are compared with numerical results obtained for an hyperbolic tangent model and a good agreement is demonstrated. The results are also discussed in the context of jet noise. We show how the theory can be used to determine a priori the impact of jet modifications on the noise induced by instability. Nonlinear dynamics of the elliptic instability NATHANAËL SCHAEFFER, STÉPHANE LE DIZÈS Published online by Cambridge University Press: 08 March 2010, pp. 471-480 In this paper, we analyse by numerical simulations the nonlinear dynamics of the elliptic instability in the configurations of a single strained vortex and a system of two counter-rotating vortices. We show that although a weakly nonlinear regime associated with a limit cycle is possible, the nonlinear evolution far from the instability threshold is, in general, much more catastrophic for the vortex. In both configurations, we put forward some evidence of a universal nonlinear transition involving shear layer formation and vortex loop ejection, leading to a strong alteration and attenuation of the vortex, and a rapid growth of the vortex core size. Viscous stability properties of a Lamb–Oseen vortex in a stratified fluid Journal: Journal of Fluid Mechanics / Volume 645 / 25 February 2010 Print publication: 25 February 2010 In this work, we analyse the linear stability of a frozen Lamb–Oseen vortex in a fluid linearly stratified along the vortex axis. The temporal stability properties of three-dimensional normal modes are obtained under the Boussinesq approximation with a Chebychev collocation spectral code for large ranges of Froude numbers and Reynolds numbers (the Schmidt number being fixed to 700). A specific integration technique in the complex plane is used in order to apply the condition of radiation at infinity. For large Reynolds numbers and small Froude numbers, we show that the vortex is unstable with respect to all non-axisymmetrical waves. The most unstable mode is however always a helical radiative mode (m = 1) which resembles either a displacement mode or a ring mode. The displacement mode is found to be unstable for all Reynolds numbers and for moderate Froude numbers (F ~ 1). The radiative ring mode is by contrast unstable only for large Reynolds numbers above 104 and is the most unstable mode for large Froude numbers (F > 2). The destabilization of this mode for large Froude numbers is shown to be associated with a resonance mechanism which is analysed in detail. Analyses of the scaling and of the spatial structure of the different unstable modes are also provided. Viscous and inviscid centre modes in the linear stability of vortices: the vicinity of the neutral curves DAVID FABRE, STÉPHANE LE DIZÈS Published online by Cambridge University Press: 30 April 2008, pp. 
1-38 In a previous paper, We have recently that if the Reynolds number is sufficiently large, all trailing vortices with non-zero rotation rate and non-constant axial velocity become linearly unstable with respect to a class of viscous centre modes. We provided an asymptotic description of these modes which applies away from the neutral curves in the (q, k)-plane, where q is the swirl number which compares the azimuthal and axial velocities, and k is the axial wavenumber. In this paper, we complete the asymptotic description of these modes for general vortex flows by considering the vicinity of the neutral curves. Five different regions of the neutral curves are successively considered. In each region, the stability equations are reduced to a generic form which is solved numerically. The study permits us to predict the location of all branches of the neutral curve (except for a portion of the upper neutral curve where it is shown that near-neutral modes are not centre modes). We also show that four other families of centre modes exist in the vicinity of the neutral curves. Two of them are viscous damped modes and were also previously described. The third family corresponds to stable modes of an inviscid nature which exist outside of the unstable region. The modes of the fourth family are also of an inviscid nature, but their structure is singular owing to the presence of a critical point. These modes are unstable, but much less amplified than unstable viscous centre modes. It is observed that in all the regions of the neutral curve, the five families of centre modes exchange their identity in a very intricate way. For the q vortex model, the asymptotic results are compared to numerical results, and a good agreement is demonstrated for all the regions of the neutral curve. Finally, the case of 'pure vortices' without axial flow is also considered in a similar way. In this case, centre modes exist only in the long-wave limit, and are always stable. A comparison with numerical results is performed for the Lamb–Oseen vortex. Inviscid waves on a Lamb–Oseen vortex in a rotating stratified fluid: consequences for the elliptic instability The inviscid waves propagating on a Lamb–Oseen vortex in a rotating medium for an unstratified fluid and for a strongly stratified fluid are analysed using numerical and asymptotic approaches. By a local Lagrangian description, we first provide the characteristics of the local plane waves (inertia–gravity waves) as well as the local growth rate associated with the centrifugal instability when the vortex is unstable. A global WKBJ approach is then used to determine the frequencies of neutral core modes and neutral ring modes. We show that these global Kelvin modes only exist in restricted domains of the parameters. The consequences of these domain limitations for the occurrence of the elliptic instability are discussed. We argue that in an unstratified fluid the elliptic instability should be active in a small range of the Coriolis parameter which could not have been predicted from a local approach. The wavenumbers of the sinuous modes of the elliptic instability are provided as a function of the Coriolis parameter for both an unstratified fluid and a strongly stratified fluid. Tilt-induced instability of a stratified vortex NICOLAS BOULANGER, PATRICE MEUNIER, STÉPHANE LE DIZÈS Journal: Journal of Fluid Mechanics / Volume 596 / 25 January 2008 Published online by Cambridge University Press: 17 January 2008, pp. 
1-20 Print publication: 25 January 2008 This experimental and theoretical study considers the dynamics and the instability of a Lamb–Oseen vortex in a stably stratified fluid. In a companion paper, it was shown that tilting the vortex axis with respect to the direction of stratification induces the formation of a rim of strong axial flow near a critical radius when the Froude number of the vortex is larger than one. Here, we demonstrate that this tilt-induced flow is responsible for a three-dimensional instability. We show that the instability results from a shear instability of the basic axial flow in the critical-layer region. The theoretical predictions for the wavelength and the growth rate obtained by a local stability analysis of the theoretical critical-layer profile are compared to experimental measurements and a good agreement is observed. The late stages of the instability are also analysed experimentally. In particular, we show that the tilt-induced instability does not lead to the destruction of the vortex, but to a sudden decrease of its Froude number, through the turbulent diffusion of its core size, when the initial Froude number is close to 1. A movie is available with the online version of the paper. Large-Reynolds-number asymptotic analysis of viscous centre modes in vortices STÉPHANE LE DIZÈS, DAVID FABRE Journal: Journal of Fluid Mechanics / Volume 585 / 25 August 2007 Print publication: 25 August 2007 This paper presents a large-Reynolds-number asymptotic analysis of viscous centre modes on an arbitrary axisymmetrical vortex with an axial jet. For any azimuthal wavenumber m and axial wavenumber k, the frequency of these modes is given at leading order by ω0 = mΩ0 + kW0 where Ω0 and W0 are the angular and axial velocities of the vortex at its centre. These modes possess a multi-layer structure localized in an O(Re−1/6) neighbourhood of the vortex. By a multiple-scale matching analysis, we demonstrate the existence of three different families of viscous centre modes whose frequency expands as ω(n) ∼ ω0 + Re−1/3ω1 + Re−1/2ω(n)2. One of these families is shown to have unstable eigenmodes when H0 = 2Ω0k(2kΩ0 − mW2) < 0 where W2 is the second radial derivative of the axial flow in the centre. The growth rate of these modes is given at leading order by σ ∼ (3/2)(H0/4)1/3Re−1/3. Our results prove that any vortex with a jet (or jet with swirl) such that Ω0W2 ≠ 0 is unstable if the Reynolds number is sufficiently large. The spatial structure of the viscous centre modes is obtained and simple approximations which capture the main feature of the eigenmodes are also provided. The theoretical predictions are compared with numerical results for the q-vortex model (or Batchelor vortex) for Re ≥ 105. For all modes, a good agreement is demonstrated for both the frequency and the spatial structure. Structure of a stratified tilted vortex The structure of a columnar vortex in a stably stratified fluid is studied experimentally and theoretically when the vortex axis is slightly tilted with respect to the direction of stratification. When the Froude number of the vortex is larger than 1, we show that tilting induces strong density variations and an intense axial flow in a rim around the vortex. We demonstrate that these characteristics can be associated with a critical-point singularity of the correction of azimuthal wavenumber m = 1 generated by tilting where the angular velocity of the vortex equals the Brunt–Väisälä frequency of the stratified fluid. 
The theoretical structure obtained by smoothing this singularity using viscous effects (in a viscous critical-layer analysis) is compared to particle image velocimetry measurements of the axial velocity field and visualizations of the density field and a good agreement is demonstrated.
CommonCrawl
\begin{definition}[Definition:Method of Least Squares (Approximation Theory)] Let there be a set of points $\set {\tuple {x_k, y_k}: k \in \set {1, 2, \ldots, n} }$ plotted on a Cartesian $x y$ plane which correspond to measurements of a physical system. Let it be required that a straight line is to be fitted to the points. The '''method of least squares''' is a technique of producing a straight line of the form $y = m x + c$ such that: :the points $\set {\tuple {x_k', y_k'}: k \in \set {1, 2, \ldots, n} }$ are on the line $y = m x + c$ :$\forall k \in \set {1, 2, \ldots, n}: x_k' = x_k$ :$\ds \sum_{k \mathop = 1}^n \paren {y_k' - y_k}^2$ is minimised. \end{definition}
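The minimising line can be written in closed form (a standard computation, included here only for illustration): setting the partial derivatives of $\ds \sum_{k \mathop = 1}^n \paren {m x_k + c - y_k}^2$ with respect to $m$ and $c$ to zero yields:
:$\ds m = \frac {n \sum_{k \mathop = 1}^n x_k y_k - \paren {\sum_{k \mathop = 1}^n x_k} \paren {\sum_{k \mathop = 1}^n y_k} } {n \sum_{k \mathop = 1}^n x_k^2 - \paren {\sum_{k \mathop = 1}^n x_k}^2}$
:$\ds c = \bar y - m \bar x$
where $\bar x$ and $\bar y$ denote the arithmetic means of the $x_k$ and the $y_k$ respectively.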
ProofWiki
Primal ideal In mathematics, an element a of a commutative ring A is called (relatively) prime to an ideal Q if whenever ab is an element of Q then b is also an element of Q. A proper ideal Q of a commutative ring A is said to be primal if the elements that are not prime to it form an ideal. References • Fuchs, Ladislas (1950), "On primal ideals", Proceedings of the American Mathematical Society, 1: 1–6, doi:10.2307/2032421, MR 0032584.
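A simple illustration in the ring of integers: for Q = (4) in Z, the elements that are not prime to Q are exactly the even integers (for instance 2·2 ∈ (4) while 2 ∉ (4), whereas for odd a, ab ∈ (4) forces b ∈ (4)); since the even integers form the ideal (2), the ideal (4) is primal. By contrast, for Q = (6) both 2 and 3 fail to be prime to Q, yet their sum 5 is prime to Q (6 divides 5b only if 6 divides b), so the elements not prime to (6) do not form an ideal and (6) is not primal.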
Wikipedia
The Michigan Mathematical Journal
Michigan Math. J. Volume 67, Issue 3 (2018), 485-509.
A Geometric Reverse to the Plus Construction and Some Examples of Pseudocollars on High-Dimensional Manifolds
Jeffrey J. Rolland
In this paper, we develop a geometric procedure for producing a "reverse" to Quillen's plus construction, a construction called a 1-sided h-cobordism or semi-h-cobordism. We then use this reverse to the plus construction to produce uncountably many distinct ends of manifolds called pseudocollars, which are stackings of 1-sided h-cobordisms. Each of our pseudocollars has the same boundary and prohomology systems at infinity and similar group-theoretic properties for their profundamental group systems at infinity. In particular, the kernel group of each group extension for each 1-sided h-cobordism in the pseudocollars is the same group. Nevertheless, the profundamental group systems at infinity are all distinct. A good deal of combinatorial group theory is needed to verify this fact, including an application of Thompson's group V. The notion of pseudocollars originated in Hilbert cube manifold theory, where it was part of a necessary and sufficient condition for placing a Z-set as the boundary of an open Hilbert cube manifold.
Revised: 4 October 2017. First available in Project Euclid: 6 April 2018.
Permanent link: https://projecteuclid.org/euclid.mmj/1522980163. Digital Object Identifier: doi:10.1307/mmj/1522980163.
Primary: 57R65 (Surgery and handlebodies), 57R19 (Algebraic topology on manifolds). Secondary: 57S30 (Discontinuous groups of transformations), 57M07 (Topological methods in group theory).
Rolland, Jeffrey J. A Geometric Reverse to the Plus Construction and Some Examples of Pseudocollars on High-Dimensional Manifolds. Michigan Math. J. 67 (2018), no. 3, 485--509. doi:10.1307/mmj/1522980163.
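For orientation, since the abstract assumes the notion: Quillen's plus construction takes a space X together with a perfect normal subgroup P of its fundamental group and produces a map X → X⁺ which kills P on fundamental groups (π₁(X⁺) ≅ π₁(X)/P) while inducing an isomorphism on homology. A "reverse" in the sense above is, roughly speaking, a compact cobordism W between closed manifolds in which the inclusion of one boundary component is a homotopy equivalence while the inclusion of the other is only a homology equivalence, so that crossing W in the opposite direction undoes a plus construction on the fundamental group; stacking such 1-sided h-cobordisms end to end produces the pseudocollars studied in the article.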
CommonCrawl
\begin{document} \keywords{} \subjclass[2010]{Primary. 32S05, 32S10, 32S25, 57K18; Secondary. 14Bxx, 57K10, 57K14} \begin{abstract} Let $(C,o)$ be a complex analytic isolated curve singularity of arbitrary large embedded dimension. Its lattice cohomology $\mathbb H^*=\oplus_{q\geq 0}\mathbb H^q$ was introduced in \cite{AgostonNemethi}, each $\mathbb H^q$ is a graded $\mathbb Z[U]$--module. Here we study its homological version $\mathbb H_*(C,o)=\oplus_{q\geq 0}\mathbb H_q$. The construction uses the multivariable Hilbert function associated with the valuations provided by the normalization of the curve. A key intermediate product is a tower of spaces $\{S_n\}_n$ such that $\mathbb H_q=\oplus_n H_q(S_n,\mathbb Z)$. In this article for every $n$ we consider a natural filtration of the space $S_n$, which provides a homological spectral sequence converging to the homogeneous summand $H_q(S_n,\mathbb Z)$ of the lattice homology. All the entries of all the pages of the spectral sequences are new invariants of $(C,o)$. We show how the collection of the first pages is equivalent with the motivic Poincar\'e series of $(C,o)$. We provide several concrete computations of the corresponding multivariable Poincar\'e series associated with the entries of the spectral sequences. In the case of plane curve singularities, the first page can also be identified with the Heegaard Floer Link homology of the link of the singularity. In this way, the new invariants provide for an arbitrary (non necessarily plane) singularity a homological theory which is the analogue of the Heegaard Floer Link theory for links of plane curve singularities. \end{abstract} \title{Filtered lattice homology of curve singularities} \linespread{1.2} \pagestyle{myheadings} \markboth{{\normalsize A. N\'emethi}} {{\normalsize Filtered lattice homology of curve singularities}} \section{Introduction} \subsection{} Let $(C,o)$ be a complex analytic isolated singular germ of dimension one with $r$ irreducible components. In \cite{AgostonNemethi} the authors associated with such a curve singularity the {\it analytic lattice cohomology} $\mathbb H^*(C,o)$. In this note we will consider the homological version, the analytic lattice homology $\mathbb H_*(C,o)$, with a very similar construction. The definition is based on the construction of a tower of spaces $\emptyset=S_{m_w-1}\subset S_{m_w}\subset S_{m_w+1}\subset \cdots \subset S_n \subset \cdots$. Then the lattice homology has the form $\mathbb H_*(C,o)=\oplus_{q\geq 0}\mathbb H_q(C,o)$, where each $\mathbb H_q(C,o):=\oplus_n H_q(S_n,\mathbb Z)$ is a $\mathbb Z$--graded $\mathbb Z$--module. Its homogeneous elements $(\mathbb H_q)_{-2n}$ consists of the elements of the summand $H_q(S_n,\mathbb Z)$. Moreover, $\mathbb H_q(C,o)$ admits a homogeneous $U$--action of degree $-2$, $H_q(S_{n-1},\mathbb Z)\to H_q(S_n,\mathbb Z)$, the homological morphism induced by the inclusion $S_{n-1}\hookrightarrow S_n$. In this way, each $\mathbb H_q(C,o)$ becomes a graded $\mathbb Z[U]$--module. The Euler characteristic of $\mathbb H_*(C,o)$ is the {\it delta invariant} $\delta(C,o)$ of $(C,o)$. The construction uses the cubical decomposition of $\frX=(\mathbb R_{\geq 0})^r$ determined by the lattice points $(\mathbb Z_{\geq 0})^r\subset (\mathbb R_{\geq 0})^r$ and the canonical bases $\{E_i\}_{i=1}^r$ of $\mathbb Z^r$. Moreover, one also requires a weight function $w:(\mathbb Z_{\geq 0})^r\to\mathbb Z$. 
In the present case it is given by $w(l)=2\mathfrak{h}(l)-|l|$, where $l\mapsto \mathfrak{h}(l)$ is the Hilbert function of $(C,o)$ associated with valuations given by the normalization map, and $|l|=|(l_1,\ldots , l_r)|=\sum_il_i$ (cf. section 3). In this note, for any $n$ we consider an increasing filtration $\{S_n \cap \frX_{-d}\}_{d\geq 0}$ of the space $S_n$. It is canonically associated with the normalization map of $(C,o)$, therefore, any output of the filtration is a well--defined invariant of the singularity $(C,o)$. Note that in the definition of the lattice homology it is enough to know the homotopy type of the tower of spaces $\{S_n\}_n$. However, in the filtration we use a more subtle information: the embedding of the finite cubical complex $S_n$ into $(\mathbb R_{\geq 0})^r$. The filtration $\{S_n\cap \frX_{-d}\}_d$ is induced by the filtration $\{\frX_{-d}\}_d$ of $(\mathbb R_{\geq 0})^r$, where $\frX_{-d}$ is the union of those cubes whose left-lower vertex $l$ satisfies $|l|\geq d$ (see section \ref{s:levfiltr}). By this construction we wish to emphasize once more the importance of the tower $\{S_n\}_n$ and to point out the structural subtleties and riches of these (embedded) cubical complexes. The filtration induces a homological spectral sequence $(E^k_{-d,q})_n\Rightarrow (E^\infty_{-d,q})_n$ for every $n$, which stabilizes after finitely many steps. Its terms are the following: \begin{equation*}\begin{split} (E^1_{-d,q})_n=& H_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1},\mathbb Z),\\ (E^\infty_{-d,q})_n=& \frac{(F_{-d}\, \mathbb H_{-d+q}(\frX))_{-2n}} { (F_{-d-1}\, \mathbb H_{-d+q}(\frX))_{-2n}}=({\rm Gr}^F_{-d}\, \mathbb H_{-d+q}(\frX)\,)_{-2n}. \end{split}\end{equation*} In particular, all the information coded by the pages of these spectral sequences are invariants of $(C,o)$. E.g., we can consider those minimal values $k$ for which $(E^k_{*,*})_n=(E^\infty_{*,*})_n$, see \ref{ss:ss}, (this is the analogue of the $\tau$--invariant of Ozsv\'ath and Szab\'o \cite{OSztau}, or of the $s$--invariant of Rasmussen \cite{ras_s} in the context of Heegaard Floer Link and Khovanov theories). We show via examples that this minimal $k$ for which $(E^k_{*,*})_n=(E^\infty_{*,*})_n$ for every $n$ can be large. In particular, all the pages of the spectral sequences might contain deep information. The spectral sequence converges to the lattice homology $\mathbb H_*$: the $\infty$--pages (collected for all $S_n$) provide ${\rm Gr}^F_*\mathbb H_*$, the `graded lattice homology'. Already the lattice homology has an interesting rich structure, but its graded version ${\rm Gr}^F_*\mathbb H_*$ contains considerable additional information as well. \subsection{} From the spectral sequences we can also extract the multivariable Poincar\'e series coding the corresponding ranks of the entries: $$PE_k(T,Q,h):=\sum_{d,q,n} \ \rank (E_{-d,q}^k)_{n}\cdot T^dQ^nh^{-d+q}\in\mathbb Z[[T,Q]][Q^{-1},h].$$ Regarding the $PE_\infty$ series we have the following structure theorem (see Proposition \ref{prop:infty}). \begin{theorem}\label{prop:infty_intro} \ \noindent (a) $PE_\infty(1, Q,h)=\sum_{n\geq m_w}\, \big(\, \sum_b \, {\rm rank}\, H_b(S_n,\mathbb Z)\,h^b\, \big)\cdot Q^n$ is the Poincar\'e series of \ $\mathbb H_*(C,o)$. \noindent (b) $PE_\infty(1, Q,-1)=\sum_{n\geq m_w}\, \chi_{top}(S_n)\cdot Q^n$ \ (where $\chi_{top}$ denoted the topological Euler characteristic) \noindent (c) Let $R$ be any rectangle $\{x\, :\, 0\leq x\leq c'\}$ with $c'\geq c$, where $c$ is the conductor of $(C,o)$. 
Then $$PE_\infty(1, Q,-1)=\frac{1}{1-Q}\cdot \sum_{\square_q\subset R}\, (-1)^q \, Q^{w(\square_q)}.$$ (This shows that this Euler characteristic type invariant can be computed in two different ways: either homologically --- \`a la Betti --- or by cell-decomposition --- \`a la Euler.)\\ (d) $$\lim_{Q\to 1}\Big( PE_\infty (1,Q, -1)-\frac{1}{1-Q}\Big)=eu(\mathbb H_*(C,o))=\delta(C,o).$$ (e) For two series $P$ and $P'$ we write $P\geq P'$ if $P-P'$ has all of its coefficients nonnegative. Then $$PE_\infty(1, Q,h)\geq PE_\infty(1, Q,h=0)\geq \frac{1}{1-Q}$$ and \ $PE_\infty(1, Q,h=0)-1/(1-Q)$ is finitely supported. \end{theorem} \subsection{} The series $PE_1(T,Q,h)$ can be enhanced by a richer multivariable series. Indeed, we consider for any lattice point $l\in (\mathbb Z_{\geq 0})^r$ the shifted first quadrant $\frX_{-l}:= \{x\in (\mathbb R_{\geq 0})^s\,:\, x\geq l\}$ and the set of cubes in it (i.e. those cubes with left-lower vertex $\geq l$). Then we have the natural direct sum decomposition \begin{equation*} (E^1_{-d,q})_{n}= \bigoplus_{l\in\mathbb Z^r_{\geq 0},\, |l|=d}\ (E^1_{-l,q})_{n}, \ \mbox{where} \ (E^1_{-l,q})_{n}:= H_{-|l|+q}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap \frX_{-|l|-1},\mathbb Z)\end{equation*} and the corresponding Poincar\'e series $${{\bf PE}}_1({\bf T}, Q,h)={{\bf PE}}_1(T_1, \ldots , T_r, Q,h):=\sum_{l\in\mathbb Z^r_{\geq 0},\, n,q}\ \rank\big((E^1_{-l,q})_{n}\big)\cdot T_1^{l_1}\cdots T_r^{l_r}\, Q^n\, h^{-|l|+q}. $$ It satisfies ${{\bf PE}}_1(T_1=T,\ldots, T_r=T, Q,h)=PE_1(T,Q,h)$. Rather surprisingly, the series ${\bf PE}_1({\bf T}, Q,h)$ is related with another analytic invariant, defined in a very different way and context in \cite{cdg3}, the {\it motivic Poincar\'e series} $P^m({\bf t}, q)$ of the singularity. In Theorem 5.1.3 we show that ${\bf PE}_1({\bf T}, Q,h)$ and $P^m({\bf t}, q)$ determine each other by a very explicit procedure (see Theorem \ref{th:PP} for a more precise statement and connections with other invariants). \begin{theorem}\label{th:PP_intro} (a) $${\bf PE}_1({\bf T}, Q,h)|_{T_i\to t_i\sqrt{q},\ Q\to \sqrt{q},\ h\to -\sqrt{q}}= P^m({\bf t}, q).$$ (b) Write $P^m({\bf t},q)$ as $\sum_l\pp^m_l(q)\cdot{\bf t}^l$. Then each $\pp^m_l(q)$ can be written in a unique way in the form $$\pp^m_l(q)=\sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}q^{k+\mathfrak{h}(l)}, \ \ (\pp^m_{l,k}\in\mathbb Z).$$ (c) Write $P^m({\bf t},q)=\sum_l\ \sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}q^{k+\mathfrak{h}(l)}\cdot {\bf t}^l$. 
Then $${\bf PE}_1({\bf T}, Q,h)= \sum_l\ \sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}\ {\bf T}^l Q^{w(l)}\cdot (-Qh)^k.$$ (d)\ ${\bf PE}_1$ is a rational function of type $$\overline{{\bf PE}_1}({\bf T}, Q, h)/\prod_i(1-T_iQ), \ \ \mbox{where} \ \ \overline{{\bf PE}_1}({\bf T}, Q, h)\in\mathbb Z[{\bf T}, Q, Q^{-1}, h].$$ (e) In the Gorenstein case, after substitution $h=-Q$ one has a symmetry of type $$\overline{{\bf PE}_1}({\bf T}, Q, h=-Q)|_{T_i\to T_i^{-1}}=\prod_i T_i^{-c_i}\cdot \overline{{\bf PE}_1}({\bf T}, Q, h=-Q).$$ \end{theorem} The identification of ${\bf PE}_1({\bf T}, Q,h)$ with $P^m({\bf t}, q)$ is based on the following (non)vanishing result: $$\mbox{if $H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})\not=0$ then necessarily $n=w(l)+b$.}$$ This (non)vanishing follows from another identification of the modules $H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})$: in \cite{GorNem2015} they appear under the name `local lattice cohomology', and they were studied via certain hyperplane arrangements and their Orlik--Solomon algebras (see here subsection 3.3). In fact, the present work can be considered as a completion of \cite{GorNem2015} by the construction in the algebraic case of the object whose `localization' is the (co)homology studied in \cite{GorNem2015}. It is worth to mention that the article \cite{GorNem2015} introduced in the topological context the analogue of the Hilbert function of curve singularities, called the $H$--function, which generated a rather intense activity in topology, see e.g. \cite{GH,GLM,GLLM,BLZ} and the references therein. We definitely expect that all the results of the present note can be extended to the purely topological context, if we replace the Hilbert function of algebraic singularities with the $H$--function of links in $S^3$ (or, in $L$--spaces), hence the (filtered) tower of spaces $\{S_n\}_n$ considered in this note with the corresponding tower associated with the topological $H$--function and the weight function $w(l)=2H(l)-|l|$. \subsection{} Along the spectral sequence the $U$--action (induced by the inclusions $S_{n-1}\hookrightarrow S_n$) is trivial. However, we introduce new homological operators $Y_1,\ldots, Y_r$ induced by the lattice shifts $x\mapsto x+E_i$. Each $Y_i$ is a spectral sequence morphism connecting the spectral sequences of $S_n$ and $S_{n+1}$: $$Y_i: (E^k_{-d, q})_{n} \to (E^k_{-d-1, q+1})_{n+1}.$$ This new action usually is non--trivial; in fact, even on the $\infty$--pages (i.e. on ${\rm Gr}^F_*\mathbb H_*$) the information carried by them is more subtle than the information coded by $U$--action acting on $\mathbb H_*$. \subsection{} In the body of the article we discuss several key examples and several families of germs. E.g., we analyse with several details the case of irreducible germs and decomposable singularities (as two extremal cases). We also stress the additional properties valid for plane curve singularities. \subsection{} The reader familiar with the Heegaard Floer Link theory will realize strong similarities with the structure of that theory and the results of the present note. However, we wish to emphasise that the two setups are different. In the HFL theory one constructs topological invariants associated with the embedded topological type of a link $L$ of the three sphere $S^3$, while in our case we construct analytic invariants using the rigid analytic structure of a {\it not-necessarily plane curve singularity}. 
{\bf However, in the case of plane curve singularities the two theories meet!} Indeed, for such germs, it turns out that the Hilbert function (which is used to define the analytic lattice cohomology) can be recovered from the embedded topological type of the link. In particular, in these cases, all the invariants that we construct (lattice homology, the spaces $S_n$, their filtrations, the spectral sequences) are invariants of the embedded topological type. E.g., Theorem \ref{th:PP_intro} gets an additional new colour: $P^m({\bf t},q)=P({\bf t})$ can be identified with the multivariable Alexander polynomial of the link, cf. Theorem \ref{Poincare vs Alexander}. More surprisingly, the first page of the collection of spectral sequences can be identified with the Heegaard Floer Link homologies HFL$_*^-(L)=\sum_{l\in (\mathbb Z^r_{\geq 0})} {\rm HFL}_*^-(L,l)$ of the corresponding link $L\subset S^3$. Nevertheless, by the present theory we create an extra `partition' of HFL$_*^-(L)$ given by $n$. The connecting bridge is the following. \begin{equation}\label{eq:lb2_intro} \HFL^-_{-n-|l|}(L,l)\simeq H_b(S_n\cap\frX_{-l}, S_n\cap\frX_{-l}\cap \frX_{-|l|-1}), \end{equation} where $b=n-w(l)$. For any fixed $d$, summation over $\{l\,:\, |l|=d\}$ gives for any $n,\, b$ and $d$: \begin{equation}\label{eq:lb3_intro} \bigoplus_{|l|=d,\ w(l)=n-b} \HFL^-_{-n-|l|}(L,l)\simeq \bigoplus_{|l|=d}\, H_b(S_n\cap\frX_{-l}, S_n\cap\frX_{-l}\cap \frX_{-|l|-1})= (E^1_{-d, b+d})_n. \end{equation} In particular, for any fixed $n$, the spectral sequence associated with $S_n$ uses and captures only the specially chosen summand $\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)$ of $\oplus _l{\rm HFL^-}_*(L,l)$. This is a partition of $\oplus _l{\rm HFL^-}_*(L,l)$ indexed by $n$. Each $\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)$, interpreted as an $E^1$--term, converges to $({\rm Gr}^F_*\mathbb H_*)_{-2n}$. (This is not the spectral sequence of the Heegaard Floer Link theory, which converges to $HF^-(S^3)$, though the {\it entries} --- but not the differentials --- of the first pages can be identified). In this identification we use again \cite{GorNem2015}. In this way, our theory in fact provides for any analytic (not necessarily plane) curve singularity the analogue of HFL$^-_*$ (where this latter one is constructed only for plane germs, or for links in $S^3$). In the case of higher embedded dimension, the information coded in the embedded topological type needed for the definition of HFL$^-_*$ is replaced by information read from the analytic rigidity of the analytic germ. (Note, however, that the two constructions are very different.) \subsection{} All the constructions have an additional subtlety. Recall that the core of all the constructions is the tower of spaces $\{S_n\}_{n\geq m_w}$. Now, each $S_n$ can be considered as the disjoint union of its connected components $\sqcup_v S_n^v$. They are indexed by the vertices ${\mathcal V}(\mathfrak{R})$ of the graded root $\mathfrak{R}(C,o)$, see \ref{ss:grroot}. On the other hand, for each $n$, the filtration $\{S_n\cap \frX_{-d}\}_d$ also decomposes into a disjoint union $\sqcup_{v\in{\mathcal V}(\mathfrak{R}),\, w_0(v)=n}\{S_n^v\cap \frX_{-d}\}_d$. In particular, all the spectral sequences (and all their outputs) split according to this decomposition indexed by ${\mathcal V}(\mathfrak{R})$. That is, all the invariants which originally were graded by $\{n\in\mathbb Z\,:\, n\geq m_w\}$ will have a refined grading given by the vertices of the graded root.
In particular, the graded root $\mathfrak{R}$ supports the following invariants partitioned as decorations of the vertices ${\mathcal V}(\mathfrak{R})$: $\mathbb H_*(C,o)$, $PE_k(T,Q,h)$, ${\bf PE}_1({\bf T},Q,h)$, the actions $Y_i$ (guided by the edges of $\mathfrak{R}$), and $\oplus_l{\rm HFL}^-_*(L, l)$. For details see subsections \ref{ss:grroot}, \ref{ss:decroot}, or equations (\ref{eq:yi2}) and (\ref{eq:v}). For another case in the literature when some important invariants (series) appear as decorations of a graded root see \cite{AJK}. \subsection{} The structure of the article is the following. Section 2 reviews the general definition of the lattice homology. Section 3 provides the definition of the lattice homology associated with isolated curve singularities. In section 4 we also provide several invariants of the singular germ (Hilbert function, semigroup, motivic Poincar\'e series, local lattice homology of \cite{GorNem2015}) and we discuss several relationships connecting them. The case of plane curves is also highlighted. In section 5 we introduce and discuss the (level) filtration and the corresponding spectral sequences associated with any fixed space $S_n$. Section 6 contains an improvement of this level filtration: the page $E^1_{*,*}$ carries a lattice filtration which makes the invariants more subtle, and the corresponding multivariable Poincar\'e series. In section 7 we introduce the operators $Y_1,\ldots, Y_r$. Section 8 treats plane curve singularities and we compare our theory with the Heegaard Floer Link theory. \section{The definition of the lattice homology}\label{ss:latweight} \subsection{The lattice homology associated with a system of weights} \cite{Nlattice} \bekezdes We consider a free $\mathbb Z$--module, with a fixed basis $\{E_i\}_{i\in\mathcal{V}}$, denoted by $\mathbb Z^s$. It is also convenient to fix a total ordering of the index set $\mathcal{V}$, which in the sequel will be denoted by $\{1,\ldots,s\}$. The lattice homology construction associates a graded $\mathbb Z[U]$--module with the pair $(\mathbb Z^s, \{E_i\}_i)$ and a set of weights. The construction follows closely the construction of the lattice cohomology developed in \cite{Nlattice} (for more see also \cite{NOSz,NGr,Nkonyv}). In particular, we will not prove all the statements, the corresponding modifications are rather natural. \bekezdes\label{9zu1} {\bf $\mathbb Z[U]$--modules.} We will modify the usual grading of the polynomial ring $\mathbb Z[U]$ in such a way that the new degree of $U$ is $-2$. Besides ${\mathcal T}^-_0:=\mathbb Z[U]$, considered as a graded $\mathbb Z[U]$--module, we will consider the modules ${\mathcal T}_0(n):=\mathbb Z[U]/(U^n)$ too with the induced grading. Hence, ${\mathcal T}_0(n)$, as a $\mathbb Z$--module, is freely generated by $1,U^1,\ldots,U^{n-1}$, and has finite $\mathbb Z$--rank $n$. More generally, for any graded $\mathbb Z[U]$--module $P$ with $d$--homogeneous elements $P_d$, and for any $k\in\mathbb Z$, we denote by $P[k]$ the same module graded in such a way that $P[k]_{d+k}=P_{d}$. Then set ${\mathcal T}^-_k:={\mathcal T}^-_0[k]$ and ${\mathcal T}_k(n):={\mathcal T}_0(n)[k]$. Hence, for $m\in \mathbb Z$, ${\mathcal T}_{-2m}^-=\mathbb Z\langle U^{m}, U^{m+1},\ldots\rangle$ as a $\mathbb Z$-module. \bekezdes\label{9complex} {\bf The chain complex.} $\mathbb Z^s\otimes \mathbb R$ has a natural cellular decomposition into cubes. The set of zero-dimensional cubes is provided by the lattice points $\mathbb Z^s$. 
Any $l\in \mathbb Z^s$ and subset $I\subset \mathcal{V}$ of cardinality $q$ defines a $q$-dimensional cube $(l, I)$, which has its vertices in the lattice points $(l+\sum_{i\in I'}E_i)_{I'}$, where $I'$ runs over all subsets of $I$. On each such cube we fix an orientation. This can be determined, e.g., by the order $(E_{i_1},\ldots, E_{i_q})$, where $i_1<\cdots < i_q$, of the involved base elements $\{E_i\}_{i\in I}$. The set of oriented $q$-dimensional cubes defined in this way is denoted by ${\mathcal Q}_q$ ($0\leq q\leq s$). Let $\calC_q$ be the free $\mathbb Z$-module generated by oriented cubes $\square_q\in{\mathcal Q}_q$. Clearly, for each $\square_q\in {\mathcal Q}_q$, the oriented boundary $\partial \square_q$ (of `classical' cubical homology) has the form $\sum_k\varepsilon_k \, \square_{q-1}^k$ for some $\varepsilon_k\in \{-1,+1\}$. These $(q-1)$-cubes $\square_{q-1}^k$ are the {\em faces} of $\square_q$. Clearly, $H_*(\calC_*, \partial_*)=H_*(\mathbb R^s,\mathbb Z)$. In order to define a `non-trivial' homology theory of these cubes, we consider a set of compatible {\em weight functions} $\{w_q\}_q$. \begin{definition}\label{9weight} A set of functions $w_q:{\mathcal Q}_q\to \mathbb Z$ ($0\leq q\leq s$) is called a {\em set of compatible weight functions} if the following hold: (a) For any integer $k\in\mathbb Z$, the set $w_0^{-1}(\,(-\infty,k]\,)$ is finite; (b) for any $\square_q\in {\mathcal Q}_q$ and for any of its faces $\square_{q-1}\in {\mathcal Q}_{q-1}$ one has $w_q(\square_q)\geq w_{q-1}(\square_{q-1})$. \end{definition} \bekezdes\label{bek:grading} In the presence of any fixed set of compatible weight functions $\{w_q\}_q$ we set ${\pazocal{L}}_q:=\calC_q\otimes_{\mathbb Z}\mathbb Z[U]$. Note that $\pazocal{L}_q$ is a $\mathbb Z[U]$-module by $U*(U^m\square_q):= U^{m+1}\square$. Moreover, $\pazocal{L}_q$ has a $\mathbb Z$--grading: by definition the degree of $U^m\square$ is ${\rm deg}_{\pazocal{L}}(U^m\square):=-2m-2w(\square)$. In fact, the grading is $2\mathbb Z$--valued; we prefer this convention in order to keep the compatibility with (Link) Heegaard Floer theory. We define $\partial_{w,q}:\pazocal{L}_{q}\to \pazocal{L}_{q-1}$ as follows. First write $\partial\square_{q}=\sum_k\varepsilon_k \square ^k_{q-1}$, then set $$\partial_{w,q}(U^m\square_{q}):=U^m\sum_k\,\varepsilon_k\, U^{w(\square_{q})-w(\square^k_{q-1})}\, \square^k_{q-1}.$$ Then one shows that $\partial_w\circ\partial_w=0$, i.e. $(\pazocal{L}_*,\partial_{w,*})$ is a chain complex. \bekezdes Next we define an augmentation of $(\pazocal{L}_*,\delta_{w,*})$. Set $m_w:=\min_{l\in \mathbb Z^s}w_0(l)$ and define the $\mathbb Z[U]$-linear map $\epsilon_w:\pazocal{L}_0\to {\mathcal T}^-_{-2m_w}$ by $\epsilon_w(U^ml)=U^{m+w_0(l)}$, for any $l\in \mathbb Z^s$ and $m\geq 0$. Then $\epsilon_w$ is surjective, $\epsilon_w\circ \partial_w=0$, and $\epsilon_w$ and $\partial_w$ are homogeneous morphisms of $\mathbb Z[U]$--modules of degree zero. \begin{definition}\label{9def12} The homology of the chain complex $(\pazocal{L}_*,\partial_{w,*})$ is called the {\em lattice homology} of the pair $(\mathbb R^s,w)$, and it is denoted by $\mathbb H_*(\mathbb R^s,w)$. 
The homology of the augmented chain complex $$\ldots\stackrel{\partial_w}{\longrightarrow} \pazocal{L}_1 \stackrel{\partial_w}{\longrightarrow} \pazocal{L}_0\stackrel{\epsilon_w}{\longrightarrow}{\mathcal T}^-_{-2m_w}\longrightarrow 0 $$ is called the {\em reduced lattice homology} of the pair $(\mathbb R^s,w)$, and it is denoted by $\mathbb H_{*,red}(\mathbb R^s,w)$. \end{definition} If the pair $(\mathbb R^s,w)$ is clear from the context, we omit it from the notation. For any $q\geq 0$ fixed, the $\mathbb Z$--grading of $\pazocal{L}_q$ induces a $\mathbb Z$--grading on $\mathbb H_q$ and $\mathbb H_{q,red}$; the homogeneous part of degree $d$ is denoted by $(\mathbb H_{q})_{d}$, or $(\mathbb H_{q,red})_{d}$. Moreover, both $\mathbb H_q$ and $\mathbb H_{red,q}$ admit an induced graded $\mathbb Z[U]$--module structure and $\mathbb H_q=\mathbb H_{q,red}$ for $q>0$. \begin{lemma}\label{9lemma3} One has a graded $\mathbb Z[U]$--module isomorphism $\mathbb H_0={\mathcal T}^-_{-2m_w}\oplus\mathbb H_{0,red}$.\end{lemma} \bekezdes\label{9rem} Next, we present another realization of the modules $\mathbb H_*$. For each $n\in \mathbb Z$ we define $S_n=S_n(w)\subset \mathbb R^s$ as the union of all the cubes $\square_q$ (of any dimension) with $w(\square_q)\leq n$. Clearly, $S_n=\emptyset$, whenever $n<m_w$. For any $q\geq 0$, set $${\mathbb S}_q(\mathbb R^s,w):=\oplus_{n\geq m_w}\, H_q(S_n,\mathbb Z).$$ Then ${\mathbb S}_q$ is $\mathbb Z$ (in fact, $2\mathbb Z$)--graded: the $(-2n)$--homogeneous elements $({\mathbb S}_q)_{-2n}$ consist of $H_q(S_n,\mathbb Z)$. Also, ${\mathbb S}_q$ is a $\mathbb Z[U]$--module; the $U$--action is the homological morphism $H_q(S_{n},\mathbb Z)\to H_q(S_{n+1},\mathbb Z)$ induced by the inclusion $S_n\hookrightarrow S_{n+1}$. Moreover, for $q=0$, a fixed base-point $l_w\in S_{m_w}$ provides an augmentation (splitting) $H_0(S_n,\mathbb Z)= \mathbb Z\oplus \widetilde{H}_0(S_n,\mathbb Z)$, hence a splitting of the graded $\mathbb Z[U]$-module $${\mathbb S}_0={\mathcal T}^-_{-2m_w}\oplus {\mathbb S}_{0,red}=(\oplus_{n\geq m_w}\mathbb Z)\oplus ( \oplus_{n\geq m_w}\widetilde{H}_0(S_n,\mathbb Z)).$$ \begin{theorem}\label{9STR1} \ (For the cohomological version see \cite{Nlattice},\cite[Theorem 11.1.12]{Nkonyv}) There exists a graded $\mathbb Z[U]$--module isomorphism, compatible with the augmentations: $\mathbb H_*(\mathbb R^s,w)={\mathbb S}_*(\mathbb R^s,w)$. \end{theorem} \noindent Let us sketch the proof of this isomorphism, since several versions of it will be used later. Let $(\pazocal{L}_*)_{-2n}$ denote the $\mathbb Z$-module of homogeneous elements of degree $(-2n)$ of $\pazocal{L}_*$, cf. \ref{bek:grading}. Since $\partial_{w,*}$ preserves the homogeneity, the chain complex $(\pazocal{L}_*,\partial_{w,*})$ has the following direct sum decomposition $\oplus_n ((\pazocal{L}_*)_{-2n},\partial_{w,*})$. We claim that $((\pazocal{L}_*)_{-2n},\partial_{w,*})$ can be identified with the chain complex $(\calC_*(S_n),\partial_*(S_n))$ of the space $S_n$. Indeed, if $U^m\square_q\in (\pazocal{L}_q)_{-2n}$ is a homogeneous element, then $-2m-2w_q(\square_q)=-2n$. Since $m\geq 0$, necessarily $w(\square_q)\leq n$, hence $\square_q\in S_n$. Then the correspondence $U^m\square_q\mapsto \square_q$, $(\pazocal{L}_*)_{-2n}\to \calC_*(S_n)$, is a morphism of chain complexes which realizes the wished $\mathbb Z$--module isomorphism. Choose $\square\in S_n$. It corresponds to $U^m\square$ in $(\pazocal{L}_*)_{-2n}$, where $m =n-w(\square)$. 
Let $u\square $ be $\square$ considered in $S_{n+1}$. Then, as a cube of $S_{n+1}$ it corresponds to $U^{m+1}\square \in (\pazocal{L}_*)_{-2n-2}$ (since $m+1=n+1-w(\square)$). Hence the morphism $H_q(S_{n},\mathbb Z)\to H_q(S_{n+1},\mathbb Z)$ corresponds to multiplication by $U$ in $\pazocal{L}_*$. \subsection{Restrictions and the Euler characteristic}\label{ss:REu} \bekezdes\label{9SSP} {\bf Restrictions.} Assume that $\mathfrak{R}\subset \mathbb R^s$ is a subspace of $\mathbb R^s$ consisting of a union of some cubes (from ${\mathcal Q}_*$). Let $\calC_q(\mathfrak{R})$ be the free $\mathbb Z$-module generated by $q$-cubes of $\mathfrak{R}$ and $\pazocal{L}_q(\mathfrak{R})=\calC_*(\mathfrak{R})\otimes_{\mathbb Z}\mathbb Z[U]$. Then $(\pazocal{L}_*(\mathfrak{R}),\partial_{w,*})$ is a chain complex, whose homology will be denoted by $\mathbb H_*(\mathfrak{R},w)$. It has an augmentation and a natural graded $\mathbb Z[U]$--module structure. In some cases it can happen that the weight functions are defined only for cubes belonging to $\mathfrak{R}$. Some of the possibilities (besides $\mathfrak{R}=\mathbb R^s$) used in the present note are the following: (1) $\mathfrak{R}=(\mathbb R_{\geq 0})^s$ is the first quadrant and weight functions are defined for its cubes; (2) $\mathfrak{R}=R(0,c)$ is the rectangle $\{x\in\mathbb R^s \,:\, 0\leq x\leq c\}$, where $c\geq 0$ is a lattice point; (3) $\mathfrak{R}=\{x\in\mathbb R^s \,:\, x\geq l\}$ for some fixed $l\in (\mathbb Z_{\geq0})^s$. \bekezdes \label{bek:eu}{\bf The Euler characteristic of $\mathbb H_*$ \cite{JEMS}.} Though $\mathbb H_{*,red}(\mathbb R^s,w)$ has a finite $\mathbb Z$-rank in any fixed homogeneous degree, in general, it is not finitely generated over $\mathbb Z$, in fact, not even over $\mathbb Z[U]$. Let $\mathfrak{R}$ be as in \ref{9SSP} and assume that each $\mathbb H_{q,red}(\mathfrak{R},w)$ has finite $\mathbb Z$--rank. (This happens automatically when $\mathfrak{R}$ is a finite rectangle.) We define the Euler characteristic of $\mathbb H_*(\mathfrak{R},w)$ as $$eu(\mathbb H_*(\mathfrak{R},w)):=-\min\{w(l)\,:\, l\in \mathfrak{R}\cap \mathbb Z^s\} + \sum_q(-1)^q\rank_\mathbb Z(\mathbb H_{q,red}(\mathfrak{R},w)).$$ If $\mathfrak{R}=R(0, c)$ (for a lattice point $c\geq 0$), then by \cite{JEMS}, \begin{equation}\label{eq:eu} \sum_{\square_q\subset \mathfrak{R}} (-1)^{q+1}w(\square_q)=eu(\mathbb H_*(\mathfrak{R},w)).\end{equation} \section{The analytic lattice homology of curves}\label{ss:curveslattice} \subsection{The definition and first properties of the lattice homology} \bekezdes \label{bek:ANcurves} {\bf Some classical invariants of curves.} Let $(C,o)$ be an isolated curve singularity with local algebra $\calO=\calO_{C,o}$. Let $\cup_{i=1}^r(C_i,o)$ be the irreducible decomposition of $(C,o)$, denote the local algebra of $(C_i,o)$ by $\calO_i$. We denote the integral closure of $\calO_i$ by $\overline{\calO_i}= \mathbb C\{t_i\} $, and we consider $\calO_i$ as a subring of $\overline{\calO_i}$. Similarly, we denote the integral closure of $\calO$ by $\overline{\calO}= \oplus_i \mathbb C\{t_i\}$. Let $\delta=\delta(C,o)$ be the delta invariant $\dim _\mathbb C\, \overline{\calO}/\calO$ of $(C,o)$. We denote by $\mathfrak{v}_i: \overline{\calO_i}\to \overline{\mathbb Z_{\geq 0}}=\mathbb Z_{\geq 0}\cup\{\infty\}$ the discrete valuation of $\overline{\calO_i}$, where $\mathfrak{v}_i(0)=\infty$. This restricted to $\calO_i$ reads as $\mathfrak{v}_i(f)= {\rm ord}_{t_i}(f)$ for $f\in \calO_i$. 
Let $\calS_i=\mathfrak{v}_i(\calO_i)\cap \mathbb Z_{\geq 0}\subset \mathbb Z_{\geq 0}$ and $\calS=(\mathfrak{v}_1,\ldots, \mathfrak{v}_r)(\calO)\cap (\mathbb Z_{\geq 0})^r$, or $$\calS=\{\mathfrak{v}(f):=(\mathfrak{v}_1(f),\ldots, \mathfrak{v}_r(f))\,:\, f \ \mbox{is a nonzero divisor}\}\subset (\mathbb Z_{\geq 0})^r.$$ It is called the {\it semigroup of $(C,o)$}. Let ${\mathfrak c}=(\calO:\overline{\calO})$ be the conductor ideal of $\overline{\calO}$, it is the largest ideal of $\calO$, which is an ideal of $\overline{\calO}$ too. It has the form $(t_1^{c_1}, \ldots, t_r^{c_r})\overline{\calO} $. The element $c=(c_1,\ldots , c_r)$ is called the conductor of $\calS$. From definitions $c+\mathbb Z^r_{\geq 0}\subset \calS$ and $c$ is the smallest lattice point with this property (whenever $(C,o)$ is not smooth). If $r=1$ then $\delta(C,o)=|\{\mathbb Z_{\geq0}\setminus \calS\}|$ (otherwise $|\{\mathbb Z_{\geq0}\setminus \calS\}|=\infty$). Assume that $(C,o)$ is the union of two (not necessarily irreducible) germs $(C',o)$ and $(C'',o)$ without common components, and fix some embedding $(C,o)\subset (\mathbb C^n,0)$. One defines the {\it Hironaka's intersection multiplicity } of $(C',o)$ and $(C'',o)$ by $(C',C'')_{Hir}:=\dim ( \calO_{\mathbb C^n,o}/ I'+I'')$, where $I'$ and $I''$ are the ideals which define $(C',o)$ and $(C'',o)$. Then one has the following formula \cite{Hironaka,BG80,Stevens85}: \begin{equation}\label{eq:Hir} \delta(C,o)=\delta(C',o)+\delta(C'',o)+ (C',C'')_{Hir}.\end{equation} From this it follows inductively that $\delta(C,o)\geq r-1$. In fact, $\delta(C,o)= r-1$ if and only if $(C,o)$ is analytically equivalent with the union of the coordinate axes of $\mathbb C^r,0)$, called {\it ordinary $r$-tuple} \cite{BG80}. For {\bf plane curve germs} $(C',C'')_{Hir}$ agrees with the usual intersection multiplicity at $(\mathbb C^2,0)$. In this case, the conductor entry is $c_i=2\delta(C_i,o)+ (C_i,\cup_{j\not=i} C_j)$. For a formula of $c_i$ in this general case see \cite{D'Anna}. (For some additional inequalities see also the end of \ref{bek:AnnFiltr}.) \bekezdes \label{bek:AnnFiltr} {\bf The valuative filtrations.} We will focus on two vector spaces; the first is the infinite dimensional local algebra $\calO$, the second one is the finite dimensional $\overline{\calO}/\calO$. Consider the lattice $\mathbb Z^r$ with its natural basis $\{E_i\}_i$ and partial ordering. If $l=(l_1, \ldots, l_r)\in \mathbb Z^r$ we set $|l|:=\sum_il_i$. Then $\overline{\calO}$ has a natural filtration indexed by $l\in \mathbb Z^r_{\geq 0}$ given by $\overline{{\mathcal F}}(l):=\{g\,:\, \mathfrak{v}(g)\geq l\}$. This induces an ideal filtration of $\calO$ by ${\mathcal F}(l):=\overline{{\mathcal F}}(l)\cap \calO\subset \calO$, and also a filtration $${\mathcal F}^\circ (l)=\frac{\overline{{\mathcal F}}(l)+\calO}{\calO}=\frac{\overline{{\mathcal F}}(l)}{\overline{{\mathcal F}}(l)\cap\calO}\subset \overline {\calO}/\calO.$$ The first filtration of $\calO$ is `classical', it was considered e.g. in \cite{cdg,cdg2,Moyano,NPo,AgostonNemethi}, see also \cite{GLS,GLS2b}. The second filtration provides a `multivariable sum-decomposition' of $\delta$. Set $\mathfrak{h}(l)=\dim \calO/{\mathcal F}(l)=\dim (\overline{{\mathcal F}}(l)+\calO)/\overline{{\mathcal F}}(l)$. Then $\mathfrak{h}$ is increasing and $\mathfrak{h}(0)=0$. Moreover, we also set $\overline{\mathfrak{h}} (l):= {\rm codim}( {\mathcal F}^\circ (l)\subset \overline{\calO}/\calO)=\dim(\overline{\calO}/(\overline{{\mathcal F}}(l)+\calO))$. 
$\overline{\mathfrak{h}}$ is also increasing, $\overline{\mathfrak{h}}(0)=0$, and $\overline{\mathfrak{h}}(l)=\delta$ for any $l\geq c$ (since $\overline {{\mathcal F}}(l)\subset {\mathfrak c}\subset \calO$ for such $l$). Finally set $\mathfrak{h}^\circ (l):=\delta-\overline {\mathfrak{h}}(l)$. One has \begin{equation}\label{eq:DUAL1} \mathfrak{h}(l)+\overline{\mathfrak{h}}(l)=\dim\, \overline{\calO}/\overline{{\mathcal F}}(l)=|l|. \end{equation} Assume that $(C,o)$ is not smooth. Since $\dim(\overline{\calO}/\mathfrak{c})=|c|$ and $\dim(\calO/\mathfrak{c})=\mathfrak{h}(c)$ we have $\delta =|c|-\mathfrak{h}(c)$. Since $\mathfrak{h}(c)\geq 1$, $\delta \leq |c|-1$, with equality if and only if $\mathfrak{c}$ is the maximal ideal of $\calO$. On the other hand, $\delta\geq |c|/2$ too, with equality if and only if $(C,o)$ is Gorenstein (cf. \cite[page 72]{Serre}, see also \ref{bek:GORdualoty}). \bekezdes \label{bek:ANweightsCurves} {\bf The weights and the analytic lattice homology } (For the cohomology version see \cite{AgostonNemethi}.) We consider the lattice $\mathbb Z^r$ with its fixed basis $\{E_i\}_{i=1}^r$, and the functions $h$ and $h^\circ$ defined in \ref{bek:AnnFiltr}. We also set ${\mathcal V}:=\{1, \ldots, r\}$. For the construction of the analytic lattice (co)homology of $(C,o)$ we consider only the first quadrant, namely the lattice points $(\mathbb Z_{\geq 0})^r$ in $\frX:=(\mathbb R_{\geq 0})^r$ and the cubes from $\frX$. The weight function on the lattice points $(\mathbb Z_{\geq 0})^r$ is defined by $$w_0(l)=\mathfrak{h}(l)+\mathfrak{h}^\circ (l)-\mathfrak{h}^\circ(0)=\mathfrak{h}(l)-\overline{\mathfrak{h}}(l)= 2\cdot \mathfrak{h}(l)-|l|, $$ and $w_q(\square):=\max\{w_0(l)\,:\, \mbox{$l$ is a vertex of $\square$}\}$. Then the cubical decomposition of $\frX$ and $l\mapsto w_0(l)=2\mathfrak{h}(l)-|l|$ define a lattice homology $\mathbb H_*(\frX,w)$. It is denoted by $\mathbb H_*(C,o)$. \begin{theorem}\label{cor:EUcurves} \cite{AgostonNemethi} (a) For any $c'\geq c$ the inclusion $S_n\cap R(0,c')\hookrightarrow S_n$ is a homotopy equivalence. In particular, $S_n$ is contractible for $n\gg 0$. (b) One has a graded $\mathbb Z[U]$--module isomorphism $\mathbb H_*(C,o)=\mathbb H_*(R(0,c'),w)$ for any $c'\geq c$ induced by the natural inclusion map. Therefore, $\mathbb H_*(C,o)$ is determined by the weighted cubes of the rectangle $R(0,c)$ and $\mathbb H_{*,red}(C,o)$ has finite $\mathbb Z$--rank. (c) $eu(\mathbb H_*(C,o))=\delta(C,o)$, that is, $\mathbb H_*(C,o)$ is a `categorification' of \,$\delta(C,o)$. \end{theorem} The ranks can be coded in the following Poincar\'e series $$PH(Q,h):= \sum_{n,b}\, {\rm rank} \ H_b(S_n,\mathbb Z)\,Q^nh^b=\sum_{n,b}\, {\rm rank} \ (\mathbb H_b)_{-2n}\,Q^nh^b.$$ \subsection{The homological graded root}\label{ss:grroot} There is an important enhancement of $\mathbb H_0(C,o)$, the homological graded root ${\mathfrak R}(C,o)$ associated with $(C,o)$. It is the natural homological version of the {\it cohomological graded root} already present in the literature. For the general construction and its relation with $\mathbb H^0$ of the cohomological root see e.g. \cite{NGr,Nlattice,Nkonyv}. The homological root is a connected tree whose vertices are graded by $\mathbb Z$. The vertices graded by $n\in\mathbb Z$ correspond to the connected components $\{S_n^v\}_v$ of $S_n$. 
If a component $S_n^v$ of $S_n$ and a component $S_{n+1}^u$ of $S_{n+1}$ satisfy $S_n^v\subset S_{n+1}^u$ then we introduce an edge $[v,u]$ in ${\mathfrak{R}}(C,o)$ connecting $v$ and $u$. There is a natural way to read from ${\mathfrak R}(C,o)$ the lattice homology $\mathbb H_0(C,o)$ together with the $U$--action (see e.g. Example \ref{ex:34}). The vertices of ${\mathfrak {R}}(C,o)$ will be denoted by ${\mathcal V}({\mathfrak R})$. The lattice homology can be regarded as a `decoration' of ${\mathfrak {R}}(C,o)$. Indeed, $\mathbb H_*(C,o)$ has a direct sum decomposition $\oplus_n \mathbb H_*(C,o)_{-2n}$ according to the grading of $\mathbb H_*$ (weights of the cubes). The homogeneous part $(\mathbb H_*)_{-2n}$ equals $H_*(S_n,\mathbb Z)$. The point is that for each $S_n$ we can consider its connected components $\{S_n^v\}_v$; they are indexed by the set of vertices of $\mathfrak{R}$ with $w_0(v)=n$. In particular, we can decorate each vertex $v\in{\mathcal V}(\mathfrak{R})$ with $H_*(S^v_{w_0(v)},\mathbb Z)$. This shows that $$\mathbb H_*(C,o)=\oplus_{v\in {\mathcal V}(\mathfrak{R})} \, H_*(S^v_{w_0(v)},\mathbb Z).$$ This introduces an additional (enhanced) grading of $\mathbb H_*$. Namely, the $\mathbb Z$ grading of $\mathbb H_*$ (supported on $\{m_w,m_w+1,\ldots\}$) is replaced by the index set ${\mathcal V}( {\mathfrak{R}})$, i.e. $\mathbb H_*= \oplus_{v\in {\mathcal V}(\mathfrak{R})} \, (\mathbb H_*)^v_{-2w_0(v)}$. The $U$--action of $\mathbb H_*$ also extends to a (graded) $U$--action of $ \oplus_{v\in {\mathcal V}(\mathfrak{R})} \, (\mathbb H_*)^v_{-2w_0(v)}$. Indeed, an inclusion $S_n^v\subset S_{n+1}^u$ induces $U: (\mathbb H_*)^v_{-2w_0(v)}\to (\mathbb H_*)^u_{-2w_0(u)}$. This `homological decoration' can be simplified if we replace $(\mathbb H_*)^v_{-2w_0(v)}$ by its Poincar\'e polynomial $PH^v_{-2w_0(v)}(h)=\sum_b \, {\rm rank}\, (\mathbb H_b)^v_{-2w_0(v)}\, h^b$. Then for each $n\in\mathbb Z$ the sum $\sum_{v\,:\, w_0(v)=n}PH^v_{-2n}(h)$ is the coefficient of $Q^n$ in $PH(Q,h)$. \subsection{Rosenlicht's forms}\label{rem:Rosenlicht} We have the following reinterpretation of $\mathfrak{h}^\circ$ in terms of forms. Write $n:\overline{(C,o)}\to (C,o)$ for the normalization. Let $\Omega^1(*)$ be the germs of meromorphic differential forms on the normalization $\overline{(C,o)}$ with a pole (of any order) at most in $\overline {o}=n^{-1}(o)$. Let $\Omega^1_{\overline{(C,o)}}$ be the germs of regular differential forms on $\overline{(C,o)}$. The {\it Rosenlicht's regular differential forms} are defined as $$\omega^R_{C,o}:=\{ \alpha\in \Omega^1(*)\,:\, \sum _{p\in \overline{o}} {\rm res}_p(f\alpha)=0 \ \ \mbox{for all $f\in \calO$}\}.$$ (In fact one shows that it is canonically isomorphic with the dualizing module of Grothendieck associated with $(C,o)$.) Then, by \cite{Serre,BG80}, one has a perfect duality between $\omega^R_{C,o}/ n_*\Omega^1_{\overline{(C,o)}}$ and $\overline{\calO}/\calO=n_*\calO_{\overline{(C,o)}}/ \calO_{C,o}$: \begin{equation}\label{eq:ROS} n_*\calO_{\overline{(C,o)}}/ \calO_{C,o}\ \times \ \omega^R_{C,o}/ n_*\Omega^1_{\overline{(C,o)}}\to \mathbb C,\ \ \ [f]\times [\alpha]\mapsto \sum_{p\in\overline{o}} \, {\rm res}_p(f\alpha).\end{equation} Moreover, one can define a $\mathbb Z^r$--filtration in $\omega^R_{C,o}/ n_*\Omega^1_{\overline{(C,o)}}$ such that the duality is compatible with the filtrations in $n_*\calO_{\overline{(C,o)}}/ \calO_{C,o}= \overline{\calO}/{\mathcal O}$ and $\omega^R_{C,o}/ n_*\Omega^1_{\overline{(C,o)}}$. 
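\bekezdes {\bf A worked example.} As a quick illustration of the constructions of \ref{bek:ANweightsCurves} and \ref{ss:grroot} (a standard computation, included here only for illustration), consider the cusp $(C,o)=\{x^2=y^3\}\subset (\mathbb C^2,0)$, parametrized by $t\mapsto (t^3,t^2)$. Here $r=1$, $\calS=\langle 2,3\rangle=\{0,2,3,4,\ldots\}$, $c=2$ and $\delta(C,o)=1$. The Hilbert function is $\mathfrak{h}(0)=0$, $\mathfrak{h}(1)=\mathfrak{h}(2)=1$ and $\mathfrak{h}(l)=l-1$ for $l\geq 2$, hence the weight function satisfies $w_0(0)=w_0(2)=0$, $w_0(1)=1$ and $w_0(l)=l-2$ for $l\geq 2$. Therefore $S_0=\{0\}\sqcup\{2\}$ consists of two points, while $S_n=[0,n+2]$ is contractible for every $n\geq 1$. Consequently $\mathbb H_0(C,o)={\mathcal T}^-_0\oplus \mathbb Z$, the reduced part being one copy of $\mathbb Z$ in degree $0$, $\mathbb H_q(C,o)=0$ for $q>0$, and $eu(\mathbb H_*(C,o))=1=\delta(C,o)$, in agreement with Theorem \ref{cor:EUcurves}. The graded root $\mathfrak{R}(C,o)$ has two vertices at level $0$ which are joined at level $1$, above which it is a single infinite chain, and $PH(Q,h)=2+\sum_{n\geq 1}Q^n$.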
\bekezdes \label{bek:GORdualoty} {\bf The Gorenstein case.} By Serre \cite{Serre} or Bass \cite{Bass} (see also \cite{Huneke}) $(C,o)$ is Gorenstein if and only if $\dim (\overline{\calO}/\calO)=\dim(\calO/\mathfrak{c})$. On the other hand, Delgado in \cite{delaMata} proved that the condition $\dim (\overline{\calO}/\calO)=\dim(\calO/\mathfrak{c})$ is equivalent to the symmetry of the semigroup of values $\calS$. If $r=1$ then the symmetry can be formulated easily: $l\in\calS\ \Leftrightarrow \ c-1-l\not\in\calS$. If $r\geq 2$ then the definition is the following \cite{delaMata}. For any $l \in \mathbb Z^r$ and $i\in\{1,\cdots, r\}$ set $$\Delta_i(l)=\{s\in\calS\, :\, s_i=l_i \ \mbox{and} \ s_j>l_j \ \mbox{for all} \ j\not= i\}, \ \ \mbox{and} \ \ \Delta(l):=\cup_i\Delta_i(l).$$ Then $\calS$ is called symmetric if $l\in\calS\ \Leftrightarrow \ \Delta(c-{\bf 1}-l)=\emptyset$. (Here ${\bf 1}=(1,\ldots, 1)$.) If $(C,o)$ is Gorenstein then in \cite{cdk} (see also \cite{Moyano}) it is proved that $$\mathfrak{h}(l)-\mathfrak{h}(c-l)=|l|-\delta.$$ This combined with (\ref{eq:DUAL1}) gives $\mathfrak{h}(c-l)=\mathfrak{h}^\circ(l)$. In particular, $\mathfrak{h}^\circ$ is recovered from $\mathfrak{h}$ as its symmetrization with respect to $c$, namely from $h^{sym}(l):=\mathfrak{h}(c-l)$ for any $l\in R(0, c)$. In particular, the weight function and $\mathbb H_*(C,o)$ admit a $\mathbb Z_2$--symmetry induced by $l\mapsto c-l$. \subsection{The `hat'--version $\hat{\mathbb H}$.}\label{ss:hat} Once the tower of spaces $\{S_n\}_n$ is defined, it is natural to consider the relative homologies as well: $$\hat{\mathbb H}_b(C,o)=\hat{\mathbb H}_b(\frX,w):= \oplus_{n}\, H_b(S_n,S_{n-1},\mathbb Z).$$ It is a graded $\mathbb Z$--module, with $(-2n)$--homogeneous summand $H_b(S_n,S_{n-1},\mathbb Z)$. The $U$--action induced by the inclusion of pairs $(S_{n}, S_{n-1})\hookrightarrow (S_{n+1},S_{n})$ is trivial. By Theorem \ref{cor:EUcurves}, $ H_b(S_n,S_{n-1},\mathbb Z)\not=0$ only for finitely many pairs $(n,b)$. By a homological argument (based on the contractibility of $S_n$ for $n\gg 0$, cf. Theorem \ref{cor:EUcurves}, see also Remark \ref{rem:hat}) \begin{equation}\label{eq:hateu} \sum_{n,b}\ (-1)^b \,{\rm rank}\, H_b(S_n, S_{n-1},\mathbb Z)=1. \end{equation} One also has the exact sequence $\cdots \to \mathbb H_b\stackrel{U}{\longrightarrow} \mathbb H_b\longrightarrow \hat{\mathbb H}_b\longrightarrow \mathbb H_{b-1} \stackrel{U}{\longrightarrow} \cdots$. \section{More invariants read from $\mathfrak{h}$}\label{s:more} \subsection{The Hilbert series} \begin{definition}\label{def:Hil} The {\it Hilbert series of the multi-index filtration} is \begin{equation} H(t_1,\ldots,t_r)=\sum_{l\in \mathbb Z^r}\ \mathfrak{h}(l)\cdot t_1^{l_1}\cdots\, t_r^{l_r}=\sum_{l\in \mathbb Z^r} \mathfrak{h}(l)\cdot {\bf t}^l \in {\mathbb Z}[[t_1,t_1^{-1},\ldots, t_r,t_r^{-1}]]. \end{equation} \end{definition} \noindent Here $l=\sum_{i=1}^r l_iE_i$. Note that \begin{equation}\label{eq:MAXh} \mathfrak{h}(l)=\mathfrak{h}(\max\{l,0\}). \end{equation} Hence $H$ is determined completely by $H({\bf t})|_{l\geq 0}:=\sum_{l\geq 0} \mathfrak{h}(l)\cdot {\bf t}^l$. \subsection{The semigroup of $C$.} The semigroup $\calS$ and the Hilbert function $H$ determine each other: \begin{lemma} (See e.g. \cite{GorNem2015}) \label{eq:semi} The semigroup can be deduced from the Hilbert function as follows: $$ \calS=\{l\in \mathbb Z_{\geq 0}^r\ |\ \mathfrak{h}(l+E_i)>\mathfrak{h}(l) \ \ \mbox{for every \ $i=1,\ldots, r$}\}.
$$ On the other hand, $\mathfrak{h}(l+E_i)-\mathfrak{h}(l)\in\{0,1\}$ for any $l\geq 0$ and $i\in {\mathcal V}$, Moreover, $\mathfrak{h}(l+E_i)=\mathfrak{h}(l)+1$ if there is an element $s\in \calS$ such that $s_i=l_i$ and $s_j\ge l_j$ for $j\neq i$. Otherwise $\mathfrak{h}(l+E_i)=\mathfrak{h}(l).$ \end{lemma} \subsection{The Poincar\'e series.} If $r=1$, then the Poincar\'e series of the graded ring $\oplus_{l} {\mathcal F}(l)/{\mathcal F}(l+E_1)$ is $P(t)=-H(t)(1-t^{-1})$. For general $r$, one defines the Poincar\'e series by \begin{equation}\label{eq:HPC} P(t_1,\ldots,t_r):=-H(t_1,\ldots,t_r)\cdot \prod_i(1-t_i^{-1}). \end{equation} This means that the coefficient $\pp_l$ of $ P({\bf t})=\sum_{l}\pp_l\cdot t_1^{l_1}\ldots t_r^{l_r}$ satisfies \begin{equation} \label{htopi} \pp_{l}=\sum_{I\subset {\mathcal V}}(-1)^{|I|-1}\mathfrak{h}(l+E_{I}), \ \ \ \ (E_I=\sum_{i\in I}E_i). \end{equation} The space ${\mathbb Z}[[t_1,t_1^{-1},\ldots, t_r,t_r^{-1}]]$ is a module over the ring of Laurent power series, hence the multiplication by $\prod_i(1-t_i^{-1})$ in \eqref{eq:HPC} is well-defined. One can check (using e.g. \eqref{eq:MAXh}) that the right hand side of \eqref{eq:HPC} is a power series involving only nonnegative powers of $t_i$. In fact, cf. Lemma \ref{prop:motProp}, the support of $P({\bf t})$ is included in $\calS$. If $r=1$, then Lemma \ref{eq:semi} implies that $P(t)=\sum_{s\in \calS}t^s=-\sum _{s\not\in \calS}t^s+ 1/(1-t)$, where $P^+(t):=-\sum _{s\not\in \calS}t^s$ is a polynomial. Furthermore, by \cite{cdg}, $P({\bf t})$ is a polynomial for $r>1$. Multiplication by $\prod_i(1-t_i^{-1})$ of series with $\mathfrak{h}(0)=0$ is injective if $r=1$, however it is not if $r>1$. In particular, in such cases it can happen that for two different series $H({\bf t})$ we obtain the very same $P({\bf t})$. For a concrete pair see e.g. \cite{cdg3}. Nevertheless, even in such cases $r>1$ one can recover $H({\bf t})$ as follows. For any subset $J=\{i_1,\ldots,i_{|J|}\}\subset {\mathcal V}:=\{1,\ldots, r\}$, $J\not=\emptyset$, consider the curve germ $(C_{J},o)=\cup_{i\in J}(C_{i},o)$. As above, this germ defines the Hilbert series $H_{C_J}$ of $(C_J,o)$ in variables $\{t_i\}_{i\in J}$: $$H_{C_J}(t_{i_1},\ldots,t_{i_{|J|}})=\sum_{l} \mathfrak{h}_{J}(l)\cdot t_{i_1}^{l_{i_1}}\ldots t_{i_{|J|}}^{l_{i_{|J|}}}.$$ By the very definition, \begin{equation}\label{eq:redHilb} H_{C_J}(t_{i_1},\ldots,t_{i_{|J|}})=H(t_1,\ldots,t_r)|_{t_i=0\ i\not\in J}.\end{equation} Analogously, we also consider the Poincar\'e series of $(C_J,o)$: $$P_{C_J}(t_{i_1},\ldots,t_{i_{|J|}})=\sum_{l} \pp_{J,l}\cdot t_{i_1}^{l_{i_1}}\ldots t_{i_{|J|}}^{l_{i_{|J|}}}$$ computed from $H_{C_J}$ by a similar identity as (\ref{eq:HPC}). By definition, for $J=\emptyset$ we take $P_{\emptyset}\equiv 0.$ The next theorem inverts (\ref{htopi}) in the sense that we recover $H$ from the collection $\{P_{C_J}\}_J$. \begin{theorem}\label{reconst} (\cite[Theorem 3.4.3]{GorNem2015}, see also \cite[Corollary 4.3]{julioproj}) With the above notations \begin{equation} \label{hilbert} H(t_1,\ldots,t_r)|_{l\geq 0} =\frac{1}{ \prod_{i=1}^{r}(1-t_i)}\sum_{J\subset {\mathcal V}}(-1)^{|J|-1} \Big(\prod_{i\in J}t_i\Big)\cdot P_{C_J}(t_{i_1},\ldots,t_{i_{|J|}}). \end{equation} In particular, the restricted Hilbert series $H(t)|_{l\geq 0}$ of a multi-component curve is a rational function with denominator $\prod_{i=1}^{r}(1-t_i)^2$. \end{theorem} Assume that $(C,o)$ is Gorenstein. 
If $r=1$ and one writes $P(t)=\Delta(t)/(1-t)$ (for a certain polynomial $\Delta(t)$), then from the Gorenstein symmetry of the semigroup one gets the symmetry of $\Delta(t)$, namely $\Delta(t^{-1})=t^{-\mu(C,o)}\Delta(t)$. More generally, for any $r>1$, the polynomial $P({\bf t})$ satisfies the Gorenstein symmetry $P(t_1^{-1},\ldots, t_r^{-1})=(-1)^r\prod_i t_i^{1-c_i}\cdot P(t_1,\ldots , t_r)$. \subsection{The local hyperplane arrangements.}\label{ss:ARR} For any fixed $l$ let us consider the set \begin{equation*} {\mathcal H}(l):=\{f\in \calO\, :\, \mathfrak{v}(f)=l\}={\mathcal F}(l)\setminus \bigcup_i\, {\mathcal F}(l+E_i). \end{equation*} Since ${\mathcal F}(l+E_i)$ is either ${\mathcal F}(l)$ or one of its hyperplanes (cf. \ref{eq:semi}), ${\mathcal H}(l)$ is either empty or it is a hyperplane arrangement in ${\mathcal F}(l)$. This can be reduced to a finite dimensional central hyperplane arrangement $${\mathcal H}'(l):=\frac{{\mathcal F}(l)}{{\mathcal F}(l+E_{{\mathcal V}})}\setminus \bigcup_i \frac{{\mathcal F}(l+E_i)}{{\mathcal F}(l+E_{{\mathcal V}})},$$ since ${\mathcal H}(l)\simeq {\mathcal F}(l+E_{{\mathcal V}})\times {\mathcal H}'(l)$ as vector spaces. Note that both ${\mathcal H}'(l)$ and ${\mathcal H}(l)$ admit a free $\mathbb C^*$--action (multiplication by nonzero scalar), hence one automatically has the two projective arrangements $\mathbb{P}{\mathcal H}'(l)={\mathcal H}'(l)/\mathbb C^*$ and $\mathbb{P}{\mathcal H}(l)={\mathcal H}(l)/\mathbb C^*$. In fact ${\mathcal H}'(l)=\mathbb{P}{\mathcal H}'(l)\times \mathbb C^*$. The first part of the following proposition follows from \cite[Theorem 5.2]{or1}, the second part can be deduced from \eqref{htopi} and the inclusion-exclusion formula (see e.g. \cite{cdg2,cdg} and Lemma \ref{lem:L} below). \begin{proposition} $H_*({\mathcal H}'(l),\mathbb Z)$ and $H_*(\mathbb{P}{\mathcal H}'(l),\mathbb Z)$ have trivial $\mathbb Z$--torsion. The Euler characteristic of\ \ $\mathbb{P}{\mathcal H}(l)$ (and of\ \ $\mathbb{P}{\mathcal H}'(l)$) equals $\pp_{l}$, the coefficient of the Poincar\'e series $P({\bf t})$ at \ ${\bf t}^{l}$. \end{proposition} \subsection{Motivic Poincar\'e series.}\label{ss:MP} The series $P^m(t_1,\ldots, t_r;q)\in \mathbb Z[[t_1,\ldots, t_r]][q]$ is defined in \cite{cdg3} as a refinement of $P({\bf t})$. By definition, the coefficient of $t_1^{l_1}\ldots t_r^{l_r}$ is the (normalized) class of $\mathbb{P}{\mathcal H}'(l)$ in the Grothendieck ring of algebraic varieties. It turns out that the class of the complement of a central hyperplane arrangement can always be expressed in terms of the class ${\mathbb L}$ of the affine line. Indeed, one has: \begin{lemma}\label{lem:L} Let $V$ be a vector space and let ${\mathcal H}=\{{\mathcal H}_i\}_{i\in{\mathcal V}}$ be a collection of linear hyperplanes in $V$. For a subset $J\subset {\mathcal V}$ we define the rank function by $\rho(J)={\rm codim} \{\cap_{i\in J}{\mathcal H}_i\subset V\}$. Then in the Grothendieck ring of varieties (by the inclusion-exclusion formula) one has $$[V\setminus \cup_{i\in{\mathcal V}}{\mathcal H}_i]=\sum_{J\subset {\mathcal V}}(-1)^{|J|}\ [\cap_{i\in J}{\mathcal H}_i] =\sum_{J\subset {\mathcal V}}(-1)^{|J|}\ {\mathbb L}^{\dim V-\rho(J)}.$$ Since $[\mathbb C^*]={\mathbb L}-1$, one also has $[(V\setminus \cup_{i\in{\mathcal V}}{\mathcal H}_i)/\mathbb C^*]=[V\setminus \cup_{i\in{\mathcal V}}{\mathcal H}_i] /({\mathbb L}-1)$.
\end{lemma} \begin{corollary} The class of the (finite) local hyperplane arrangement ${\mathcal H}'(l)$ equals $$[{\mathcal H}'(l)]=({\mathbb L}-1)[\mathbb{P}{\mathcal H}'(l)]=\sum_{J\subset {\mathcal V}}(-1)^{|J|}\ {\mathbb L}^{\mathfrak{h}(l+E_{{\mathcal V}})-\mathfrak{h}(l+E_J)}.$$ \end{corollary} Replacing ${\mathbb L}^{-1}$ by a new variable $q$, one can define (following \cite{cdg3}) the {\it motivic Poincar\'e series} $P^{m}({\bf t};q):=\sum_{l}\pp^m_{l}(q)\cdot {\bf t}^{l}$ by \begin{equation}\label{eq:pmot} \pp^m_{l}(q):={\mathbb L}^{1-\mathfrak{h}(l+E_{{\mathcal V}})}[\mathbb{P}{\mathcal H}'(l)]\Big|_{{\mathbb L}^{-1}=q} =\sum_{J\subset {\mathcal V}}(-1)^{|J|}\frac{q^{\mathfrak{h}(l+E_J)}}{1-q}= \sum_{J\subset {\mathcal V}}(-1)^{|J|}\cdot\frac{q^{\mathfrak{h}(l+E_J)}-q^{\mathfrak{h}(l)}}{1-q}. \end{equation} For another definition in terms of motivic integrals, see \cite{cdg3}. \begin{proposition}\label{prop:motProp} \cite{cdg3,MoyanoTh,Moyano,Gorsky,GorNem2015} (a) $\lim_{q\to 1}P^m({\bf t};q)=P({\bf t})$; (b) $P^m({\bf t};q)$ is a rational function of type $\overline{{P}^m}({\bf t};q)/(\prod_{i\in{\mathcal V}}(1-t_{i}q))$, where $\overline{{P}^m}({\bf t};q)\in \mathbb Z[{\bf t},q]$. (c) The support of $P^m({\bf t};q)$ is exactly $\calS$. That is, $\pp^m_l(q)\not=0$ if and only if $l\in \calS$. (d) In the Gorenstein case, $$\overline{P^m}((qt_1)^{-1},\ldots , (qt_r)^{-1};q) =q^{-\delta(C,o)} \prod_{i\in {\mathcal V}} t_i^{-c_i}\cdot \overline{P^m}(t_1,\ldots, t_r;q).$$ \end{proposition} For a combinatorial formula, valid for plane curve singularities, in terms of the embedded resolution graph, see \cite{cdg3}. The formula from \cite{cdg3} was simplified in \cite{Gorsky}. \subsection{The case of plane curve singularities; Poincar\'e series versus Alexander polynomial.}\label{ss:PA}\ \\ In the case of plane curve singularities the above invariants ($H$, $P$, $\calS$) can be compared with the embedded topological type of the link of $(C,o)$ embedded into $S^3$. One has the following statement: \begin{theorem} [\cite{cdg2,cdg}] \label{Poincare vs Alexander} Let $\Delta(t_1,\ldots,t_r)$ be the multi-variable Alexander polynomial of the link of $(C,o)$. If \,$r=1$ then $P(t)(1-t)=\Delta(t)$, while $P({\bf t})=\Delta({\bf t})$ if \,$r>1$. \end{theorem} Since plane curve germs are Gorenstein, the above symmetry of $P$ is compatible with the well--known symmetry property of the Alexander polynomials. By \cite{Yamamoto} the multi-variable Alexander polynomial (and hence by Theorem \ref{Poincare vs Alexander}, the Poincar\'e series $P({\bf t})$) determines the embedded topological type of $(C,o)$, in particular it determines all the series $\{P_{C_J}\}_{J\subset {\mathcal V}}$. Consequently, it also determines via (\ref{hilbert}) the series $H({\bf t})$. However, the reduction procedure from $P$ to $P_{C_J}$ is more complicated than the analog of (\ref{eq:redHilb}) valid for the Hilbert series (the `naive substitution'). Indeed, these reductions are of type (see \cite{torres}): \begin{equation}\label{eq:redP} P_{C_{{\mathcal V}\setminus \{1\}}}(t_2,\ldots,t_r)= P(t_1,\ldots,t_r)|_{t_1=1}\cdot \frac{1}{(1-t_2^{(C_1,C_2)}\cdots t_r^{(C_1,C_r)})}. \end{equation} Let us prove the identity $\mathfrak{h}(l)=|l|-\delta$, valid for $l\geq c$, using (\ref{hilbert}). For $J=\{i\}$ we have $\sum_{0\leq u_i\leq l_i-1}\pp^J_{u_i}=l_i-\delta(C_i,o)$. For $J=\{i,j\}$ (since $P_{C_J}$ is a polynomial) $\sum_l \pp^{\{i,j\}}_l=P_{C_J}(1,1)$. This equals $(C_i,C_j)$ by \eqref{eq:redP}.
By a similar argument, for $|J|>2$ the contribution is zero. Hence $\mathfrak{h}(l)=\sum_i (l_i-\delta(C_i,o))-\sum_{i< j}(C_i, C_j)=|l|-\delta(C,o).$ Note also that $\Delta({\bf t})$ can be deduced from the splice diagram, or embedded resolution graph of the link (or of the embedded topological type of the pair $(C,o)\subset (\mathbb C^2,0)$) \cite{EN}. (In fact, this is the most convenient way to determine $\Delta(t)$, at least for high $r$.) The above discussions show that in the case of plane curve singularities the invariants $H$, $P$, $\calS$, $P^m$ are all equivalent and are complete topological invariants of the embedded link into $S^3$. Note also that the embedded link is fibred; the first Betti number of the (Milnor) fiber $F$ is $\mu(F)$, the Milnor number of $(C,o)$. It satisfies $\mu(C,o)=2\delta(C,o)-r+1$, cf. \cite{MBook}, i.e., $\delta(C,o)$ is the genus of the link. Furthermore, $\mu(C,o)$ equals ${\rm deg}\,\Delta(t)$ if $r=1$ and $1+{\rm deg}\,\Delta(t,\cdots, t)$ if $r>1$. \subsection{The local lattice cohomology \cite{GorNem2015}.} \label{bek:2.22} Consider again the space (CW complex) $\frX=(\mathbb R_{\geq 0})^r$ with its cubical decomposition. Recall that the $q$-cubes have the form $\square=(l, I)$, with $l\in (\mathbb Z_{\geq 0})^r$, and $I\subset {\mathcal V}$, $q=|I|=\dim(\square)$, where the vertices of $(l,I)$ are $\{l+\sum_{i\in I'}E_i\}_{I'\subset I}$. Let us consider the weight function $l\mapsto \mathfrak{h}(l)$, {\it provided by the Hilbert function}, and the lattice homology associated with it. (This should not be confused with the object defined in \ref{bek:ANweightsCurves}: note that the weight functions are different!) More precisely, in this case the weights of the cubes are defined as $\mathfrak{h}((l,I))=\max\{\mathfrak{h}(l'), \ \mbox{$l'$ vertex of $(l,I)$}\}=\mathfrak{h}(l+E_I)$. The homological complex $\mathcal{L}^-_*$ is $\calC_*\otimes_\mathbb Z\mathbb Z[U]$, the boundary operator is $\partial_U$ given by $\partial _U(U^m\square)=U^m\sum_k \epsilon _k U^{\mathfrak{h}(\square)-\mathfrak{h}(\square^k)}\square^k$; it satisfies $\partial _U^2=0$. Motivated by Link Heegaard Floer theory, in \cite{GorNem2015} the {\it homological degree} of the generators $U^m\square$ was introduced as well: \begin{equation}\label{eq:szamdeg}\deg(U^m\square):= -2m -2\mathfrak{h}(\square)+\dim(\square).\end{equation} Then $\partial _U$ decreases the homological degree by one. The homology of $({\mathcal L}^-_*,\partial _{U,*})$ is not very rich (it is $\mathbb Z[U]$, cf. \cite{GorNem2015}), however deep information is coded in the `{\it local lattice homology groups}' read from the graded version of $({\mathcal L}^-_*,\partial_{U,*})$, which is defined as follows. First note that ${\mathcal L}^-$, as a $\mathbb Z$--module, has a sum decomposition $\oplus _l{\mathcal L}^-(l)$, where ${\mathcal L}^-(l)$ is generated by cubes of the form $(l,I)$.
Then one sets the multi-graded direct sum complex ${\rm gr}\,\, {\mathcal L}^-=\oplus_{l\in (\mathbb Z_{\geq 0})^r}{\rm gr}_l\,{\mathcal L}^-$, where $${\rm gr}_l\,{\mathcal L}^-={\mathcal L}^-(l)/\sum_i {\mathcal L}^-(l+E_i)=\mathbb Z\langle (l,I); \ I\subset {\mathcal V}\rangle\otimes _{\mathbb Z}\mathbb Z[U]$$ together with the boundary operator ${\rm gr}_l\,\partial _U$ defined by $${\rm gr}_l\,\partial _U((l, I))=\sum_k (-1)^kU^{\mathfrak{h}(l+E_I)-\mathfrak{h}(l+E_{I\setminus \{k\}})}\ (l, I\setminus \{k\}).$$ Then $H_*({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U)$, graded by the induced homological degree $\deg$, is called the {\it local lattice homology} associated with the weight function $\mathfrak{h}$ and the lattice point $l$. It is denoted by ${\rm HL}^-(l)$, $l\in (\mathbb Z_{\geq 0})^r$. Its Poincar\'e series is denoted by $$P_l^{{\mathcal L}^-}(s):=\sum_p \ {\rm rank}\, H_p({\rm gr}_l \, {\mathcal L}^-, {\rm gr}_l\, \partial _U)\cdot s^p.$$ In fact, cf. \cite{GorNem2015}, the complex is bigraded by \begin{equation}\label{eq:bideg} {\rm bdeg} (U^m \square):= (-2m -2\mathfrak{h}(\square), \, \dim(\square)\,)\in\mathbb Z^2. \end{equation} The total degree of bdeg is the homological degree $-2m-2\mathfrak{h}(\square)+\dim(\square)$, cf \ref{eq:szamdeg}. The operator ${\rm gr}_l\partial_U$ has bidegree $(0,-1)$. In particular, the local lattice homology is also bigraded. Let ${\rm HL}^-_{a,b}(l)$ denote the $(a,b)$-homogeneous components of ${\rm HL}^-(l)$. In \cite{GorNem2015} the following facts are proved. \begin{theorem}\cite[Theorems 4.2.1 and 5.3.1]{GorNem2015} \label{zlat} (1) Consider the motivic Poincar\'e series of $(C,o)$, $P^m({\bf t};q)=\sum_{l}\pp^m_{l}(q)\,{\bf t}^{l}$. Then the Poincar\'e polynomial of\, ${\rm HL}^-(l)$, satisfies \begin{equation} \label{hfminus} P^{{\mathcal L}^-}_{l}(-q^{-1})=q^{\mathfrak{h}(l)}\cdot \pp^m_{l}(q). \end{equation} In particular, $(-1)^{\mathfrak{h}(l)}\cdot \pp^m_l(-q)$ is a polynomial in $q$ with nonnegative coefficients, and the Euler characteristic $P^{{\mathcal L}^-}_l(-1)=\sum_p(-1)^p \,{\rm rank} \, H_p({\rm gr}_l{\mathcal L}^-, {\rm gr}_l\partial_U)$ equals $\pp_l^m(1)=\pp_l$, the $l$-coefficient of $P({\bf t})$. (2) $H_{-2\mathfrak{h}(l)-p}({\rm gr}_l\,{\mathcal L}^-, {\rm gr}_l\, \partial _U)\simeq H_p({\mathbb P}{\mathcal H}'(l),\mathbb Z)$. Hence $H_{*}({\rm gr}_l\,{\mathcal L}^-, {\rm gr}_l\, \partial _U)$ has no $\mathbb Z$-torsion. (3) If $H_{a,b}({\rm gr}_l\,{\mathcal L}^-, {\rm gr}_l\, \partial _U)\not=0$ then necessarily $(a,b)$ has the form $(-2\mathfrak{h}(l)-2|I|, |I|)$. (In other words, $a+2b=-2\mathfrak{h}(l)$ and $r\geq b\geq 0$.) (4) The $U$--action on $H_*({\rm gr}_l\,{\mathcal L}^-, {\rm gr}_l\, \partial _U)$ is trivial. \end{theorem} \begin{remark}\label{rem:fontos} Theorem \ref{zlat} was stated in \cite{GorNem2015} for plane curves, but proofs are valid for any algebraic curve based on the general properties of the Hilbert function. The key statement {\it (3)} is based on properties of the Orlik--Solomon algebras associated with the local hyperplane arrangements. \end{remark} \section{The level filtration of the analytic lattice homology}\label{s:levfiltr} \subsection{The submodules ${\rm F}_{-d}\mathbb H_*(\frX,w)$.}\ Let us fix an isolated curve singularity $(C,o)$. 
We consider again the space $\frX=(\mathbb R_{\geq 0})^r$, its cubical decomposition and the lattice homology $\mathbb H_*(C,o)=\mathbb H_*(\frX,w)=H_*(\pazocal{L}_*,\partial_{w,*})$ associated with the weight function $l\mapsto w_0(l):= 2\mathfrak{h}(l)-|l|$. For different notations see the previous sections. In the present article we define several filtrations at the level of complexes and also at the level of the lattice homology $\mathbb H_*(C,o)$, and we analyse the corresponding spectral sequences. We start with the most natural $\mathbb Z$--filtration. It is induced by the grading of the lattice points $(\mathbb Z_{\geq 0})^r$ indexed by $\mathbb Z_{\geq 0}$, $l\mapsto |l|$. Since in the spectral sequences associated with a filtration of subspaces the literature prefers {\it increasing} filtrations, we will follow this setup. For any $d\in \mathbb Z_{\geq 0}$ we define the subspaces $\frX_{-d}:= \{\cup(l, I) \,:\, |l|\geq d\}$ of $\frX$. Then we have the infinite sequence of subspaces $$\frX=\frX_0\supset \frX_{-1}\supset \frX_{-2}\supset \ \cdots. $$ Note also that for any $\square_q =(l,I)\in \frX_{-d}$ its faces belong to $\frX_{-d}$ too. Accordingly we have the chain complexes $\calC_*(\frX_{-d})=\mathbb Z\langle (l,I)\,:\, |l|\geq d\rangle$ endowed with the natural boundary operators $\partial \square_q=\sum_k\varepsilon_k \, \square_{q-1}^k$, together with the chain inclusions $\calC_*=\calC_*(\frX)\supset \calC_*(\frX_{-1})\supset \calC_*(\frX_{-2})\supset\ \cdots.$ Then we define the graded $\mathbb Z[U]$--module chain complexes $\pazocal{L}_*(\frX_{-d})=\calC_*(\frX_{-d})\otimes _{\mathbb Z}\mathbb Z[U]$ endowed with its natural boundary operator $\partial _{*,w}$. In this way we obtain the sequence of chain complexes $\pazocal{L}_*=\pazocal{L}_*(\frX) \supset \pazocal{L}_*(\frX_{-1})\supset \pazocal{L}_*(\frX_{-2})\supset \cdots$ and a sequence of graded $\mathbb Z[U]$--module morphisms $\mathbb H_*(\frX,w)\leftarrow \mathbb H_*(\frX_{-1},w)\leftarrow \mathbb H_*(\frX_{-2},w)\leftarrow \cdots$ For each $b$, the map $\mathbb H_b(\frX,w)\leftarrow \mathbb H_b(\frX_{-d}, w)$ induced at lattice homology level is homogeneous of degree zero. These morphisms provide the following filtration of $\mathbb Z[U]$--modules in $\mathbb H_*(\frX,w)$ $${\rm F}_{-d}\mathbb H_*(\frX,w):={\rm im}\big( \mathbb H_*(\frX,w)\leftarrow \mathbb H_*(\frX_{-d},w)\, \big).$$ \begin{example}\label{ex:34} Consider the irreducible plane curve singularity $\{x^3+y^4=0\}$ with semigroup $\calS=\langle 3,4\rangle$. The conductor is $c=\mu=6$ and the Hilbert function and $w_0$ are shown in the next picture. 
\begin{picture}(320,60)(0,0) \put(20,50){\makebox(0,0){\small{$\calS$}}} \put(20,35){\makebox(0,0){\small{$\mathfrak{h}$}}} \put(20,20){\makebox(0,0){\small{$w_0$}}} \put(40,50){\line(1,0){180}} \put(40,50){\circle*{4}} \put(100,50){\circle*{4}} \put(120,50){\circle*{4}} \put(160,50){\circle*{4}} \put(180,50){\circle*{4}} \put(200,50){\circle*{4}} \put(240,50){\makebox(0,0){$\ldots$}} \put(240,35){\makebox(0,0){$\ldots$}} \put(240,20){\makebox(0,0){$\ldots$}} \put(60,50){\makebox(0,0){\small{$\circ$}}} \put(80,50){\makebox(0,0){\small{$\circ$}}} \put(140,50){\makebox(0,0){\small{$\circ$}}} \put(40,35){\makebox(0,0){\small{$0$}}} \put(60,35){\makebox(0,0){\small{$1$}}} \put(80,35){\makebox(0,0){\small{$1$}}} \put(100,35){\makebox(0,0){\small{$1$}}} \put(120,35){\makebox(0,0){\small{$2$}}} \put(140,35){\makebox(0,0){\small{$3$}}} \put(160,35){\makebox(0,0){\small{$3$}}} \put(180,35){\makebox(0,0){\small{$4$}}} \put(200,35){\makebox(0,0){\small{$5$}}} \put(40,20){\makebox(0,0){\small{$0$}}} \put(60,20){\makebox(0,0){\small{$1$}}} \put(80,20){\makebox(0,0){\small{$0$}}} \put(100,20){\makebox(0,0){\small{$-1$}}} \put(120,20){\makebox(0,0){\small{$0$}}} \put(140,20){\makebox(0,0){\small{$1$}}} \put(160,20){\makebox(0,0){\small{$0$}}} \put(180,20){\makebox(0,0){\small{$1$}}} \put(200,20){\makebox(0,0){\small{$2$}}} \end{picture} Then $\mathbb H_{>0}(\frX)=0$ and $\mathbb H_0(\frX)={\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)^2$. One has $eu(\mathbb H_*)=3=\delta(C,o)$. $\mathbb H_0=\oplus_n H_0(S_n,\mathbb Z)$ can be illustrated by its `homological graded root' ${\mathfrak R}(C,o)$: \begin{picture}(300,82)(80,330) \put(180,380){\makebox(0,0){\footnotesize{$0$}}} \put(177,370){\makebox(0,0){\footnotesize{$-1$}}} \put(177,360){\makebox(0,0){\footnotesize{$-2$}}}\put(177,390){\makebox(0,0){\footnotesize{$+1$}}} \put(177,340){\makebox(0,0){\small{$-w_0$}}}\put(177,400){\makebox(0,0){\footnotesize{$+2$}}} \dashline{1}(200,370)(240,370) \dashline{1}(200,380)(240,380) \dashline{1}(200,400)(240,400) \dashline{1}(200,360)(240,360) \dashline{1}(200,390)(240,390) \put(220,345){\makebox(0,0){$\vdots$}} \put(220,360){\circle*{3}} \put(220,390){\circle*{3}} \put(210,380){\circle*{3}} \put(230,380){\circle*{3}} \put(220,380){\circle*{3}} \put(220,370){\circle*{3}} \put(220,390){\line(0,-1){40}} \put(220,370){\line(1,1){10}} \put(210,380){\line(1,-1){10}} \put(280,381){\vector(0,-1){10}}\put(269,381){\vector(1,-1){10}}\put(291,381){\vector(-1,-1){10}} \put(300,375){\makebox(0,0){\footnotesize{$U$}}} \put(280,391){\vector(0,-1){8}} \put(280,369){\vector(0,-1){8}} \put(300,385){\makebox(0,0){\footnotesize{$U$}}} \put(300,365){\makebox(0,0){\footnotesize{$U$}}} \put(280,355){\makebox(0,0){$\vdots$}} \end{picture} This means that each vertex $v$ weighted by $-w_0(v)=-n$ in the root denotes a free summand $\mathbb Z=\mathbb Z\langle 1_v\rangle\in (\mathbb H_0)_{-2w_0(v)}$ (i.e. a connected component of $S_{n}$), and if $[v,u]$ is an edge connecting the vertices $v$ and $u$ with $w_0(v)=w_0(u)+1$, then $U(1_u)=1_v\in (\mathbb H_0)_{-2w_0(v)}$ (i.e. the edges codify the corresponding inclusions of the connected components of $S_{n-1}$ into the connected components of $S_n$). The weighs $-w_0$ are marked on the left of the graph. (See also subsection \ref{ss:grroot}.) The graded $\mathbb Z[U]$--modules ${\rm F}_{-d}\mathbb H_0(\frX)$ for $d>0$ are illustrated below. 
\begin{picture}(500,100)(20,310) \put(20,345){\makebox(0,0){$\vdots$}} \put(20,360){\circle*{3}} \put(20,390){\circle*{3}} \put(10,380){\circle*{3}} \put(30,380){\circle*{3}} \put(20,380){\circle*{3}} \put(20,370){\circle*{3}} \put(20,390){\line(0,-1){40}} \put(20,370){\line(1,1){10}} \put(10,380){\line(1,-1){10}} \put(120,345){\makebox(0,0){$\vdots$}} \put(120,360){\circle*{3}} \put(120,390){\circle*{3}} \put(110,380){\circle*{3}} \put(130,380){\circle*{3}} \put(120,380){\circle*{3}} \put(120,370){\circle*{3}} \put(120,390){\line(0,-1){40}} \put(120,370){\line(1,1){10}} \put(110,380){\line(1,-1){10}} \put(220,345){\makebox(0,0){$\vdots$}} \put(220,360){\circle*{3}} \put(220,390){\circle*{3}} \put(210,380){\circle*{3}} \put(230,380){\circle*{3}} \put(220,380){\circle*{3}} \put(220,370){\circle*{3}} \put(220,390){\line(0,-1){40}} \put(220,370){\line(1,1){10}} \put(210,380){\line(1,-1){10}} \put(320,345){\makebox(0,0){$\vdots$}} \put(320,360){\circle*{3}} \put(320,390){\circle*{3}} \put(310,380){\circle*{3}} \put(330,380){\circle*{3}} \put(320,380){\circle*{3}} \put(320,370){\circle*{3}} \put(320,390){\line(0,-1){40}} \put(320,370){\line(1,1){10}} \put(310,380){\line(1,-1){10}} \put(420,345){\makebox(0,0){$\vdots$}} \put(420,360){\circle*{3}} \put(420,390){\circle*{3}} \put(410,380){\circle*{3}} \put(430,380){\circle*{3}} \put(420,380){\circle*{3}} \put(420,370){\circle*{3}} \put(420,390){\line(0,-1){40}} \put(420,370){\line(1,1){10}} \put(410,380){\line(1,-1){10}} \put(20,320){\makebox(0,0){\footnotesize{$d=1,2,3$}}} \put(120,320){\makebox(0,0){\footnotesize{$d=4$}}} \put(220,320){\makebox(0,0){\footnotesize{$d=5,6$}}} \put(320,320){\makebox(0,0){\footnotesize{$d=7$}}} \put(420,320){\makebox(0,0){\footnotesize{$d=8$}}} \put(10,374){\line(2,5){12}} \put(110,374){\line(1,1){20}} \put(210,373){\line(2,1){20}} \put(310,374){\line(1,0){20}} \put(410,364){\line(1,0){20}} \end{picture} From the root of $\mathbb H_0$ one has to delete those edges which intersect the `cutting line'. The $\mathbb Z[U]$--module ${\rm F}_{-d}\mathbb H_0(\frX)$ sits below the cutting line, where the $U$--action is determined from the remaining edges by the principle described above. For $d\geq 8$ the cutting line moves down one by one. \end{example} \subsection{The homological spectral sequence associated with the subspaces $\{S_n\cap \frX_{-d}\}_d$.}\label{ss:ss} \ For any fixed $n$, by an argument as in the proof of Theorem \ref{9STR1}, the morphism $(\mathbb H_*(\frX_{-d}, w))_{-2n}\to (\mathbb H_*(\frX,w))_{-2n}$ is identical with the morphisms $ H_*(S_n\cap \frX_{-d}, \mathbb Z)\to H_*(S_n,\mathbb Z)$ induced by the inclusion $S_n\cap \frX_{-d}\hookrightarrow S_n$. In particular, for any fixed $n$ one can analyse the spectral sequence associated with the filtration $\{S_n\cap \frX_{-d}\}_{d\geq 0}$ of subspaces of $S_n$. Since $S_n$ is compact, this filtration is finite. The spectral sequence will be denoted by $(E^k_{-d,q})_n\Rightarrow (E^\infty_{-d,q})_n$. Its terms are the following: \begin{equation}\begin{split} (E^1_{-d,q})_n=& H_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1},\mathbb Z),\\ (E^\infty_{-d,q})_n=& \frac{(F_{-d}\, \mathbb H_{-d+q}(\frX))_{-2n}} { (F_{-d-1}\, \mathbb H_{-d+q}(\frX))_{-2n}}=({\rm Gr}^F_{-d}\, \mathbb H_{-d+q}(\frX)\,)_{-2n}. 
\end{split}\end{equation} We wish to emphasize that each $\mathbb Z$--module $(E^k_{-d,q})_{n}$ is well-defined (in the sense that its definition does not depend on any choice or additional construction which might produce a certain ambiguity); it is an invariant of the curve $(C,o)$. In fact, this is true for all the spaces $\{S_n\}_n$ as well. In this way, for every $1\leq k\leq \infty$, we have the well-defined Poincar\'e series associated with $(C,o)$: $$PE_k(T,Q,h):=\sum_{d,q,n} \ \rank (E_{-d,q}^k)_{n}\cdot T^dQ^nh^{-d+q}\in\mathbb Z[[T,Q]][Q^{-1},h].$$ Note also that all the coefficients of $PE_k(T,Q,h)$ are nonnegative. The differential $d_{-d,q}^k$ acts as $(E_{-d,q}^k)_{n}\to (E_{-d-k,q+k-1}^k)_{n}$, hence $PE_{k+1}$ is obtained from $PE_k$ by deleting terms of type $Q^n(T^dh^{-d+q}+T^{d+k}h^{-d+q-1})=Q^nT^dh^{-d+q-1}(h+T^k)$ ($k\geq 1$). We say that two series $P$ and $P'$ satisfy $P\geq P'$ if and only if $P-P'$ has all of its coefficients nonnegative. Then the above discussion shows that (cf. \cite[page 15]{McCleary}) $$PE_1(T,Q,h)\geq PE_2(T,Q,h)\geq \cdots \geq PE_\infty(T,Q,h).$$ If $E^2_{*,*}=E^{\infty}_{*,*}$ then $PE_1-PE_2=(h+T)R^+$, where all the coefficients of $R^+$ are nonnegative. In general, $(PE_1-PE_{\infty})|_{T=1}=(h+1)\bar{R}^+$, where all the coefficients of $\bar{R}^+$ are nonnegative. Thus, \begin{equation}\label{eq:spseq} PE_1(T,Q,h)|_{T=1,h=-1}= PE_2(T,Q,h)|_{T=1,h=-1}= \cdots = PE_\infty(T,Q,h)|_{T=1,h=-1}.\end{equation} Since the whole spectral sequence is an invariant of $(C,o)$, $$k_{(C,o)}(n):=\min\{k\,:\, (E_{*,*}^k)_n=(E_{*,*}^\infty)_n\,\} \ \mbox{and } \ \ \ k_{(C,o)}=\max_n \,k_{(C,o)}(n)$$ are invariants of $(C,o)$ as well. \subsection{The term $E^\infty_{*,*}$.} First note that for any fixed $b$ and $n$ one has \begin{equation}\label{eq:pqr} \sum_{-d+q=b}{\rm rank}\big( E_{-d, q}^\infty\big)_{n}={\rm rank }\, H_b(S_n, \mathbb Z)={\rm rank} \,(\mathbb H_b(C,o))_{-2n}. \end{equation} \begin{proposition}\label{prop:infty} \ \noindent (a) $PE_\infty(1, Q,h)=PH(Q,h)=\sum_{n\geq m_w}\, \big(\, \sum_b \, {\rm rank}\, H_b(S_n,\mathbb Z)\,h^b\, \big)\cdot Q^n.$ \noindent (b) $PE_\infty(1, Q,-1)=\sum_{n\geq m_w}\, \chi_{top}(S_n)\cdot Q^n$ \ (where $\chi_{top}$ denotes the topological Euler characteristic) \noindent (c) Let $R$ be any rectangle of type $R(0,c')$ with $c'\geq c$ (where $c$ is the conductor). Then $$PE_\infty(1, Q,-1)=\frac{1}{1-Q}\cdot \sum_{\square_q\subset R}\, (-1)^q \, Q^{w(\square_q)}.$$ (d) $$\lim_{Q\to 1}\Big( PE_\infty (1,Q, -1)-\frac{1}{1-Q}\Big)=eu(\mathbb H_*(C,o))=\delta(C,o).$$ (e) $$PE_\infty(1, Q,h)\geq PE_\infty(1, Q,h=0)\geq \frac{1}{1-Q}$$ and \ $PE_\infty(1, Q,h=0)-1/(1-Q)$ is finitely supported. \end{proposition} \begin{proof} {\it (a)} follows from (\ref{eq:pqr}), while {\it (b)} from {\it (a)}. Next we prove {\it (c)}. Set $Eu(Q):=\sum_{\square_q\subset R}\, (-1)^q \, Q^{w(\square_q)}\in \mathbb Z[Q,Q^{-1}]$ and write $Eu(Q)/(1-Q)$ as $\sum _{n\geq m_w} a_nQ^n$. Then $$a_n=\sum_{\square_q\subset R,\, w(\square_q)\leq n}\,(-1)^q=\chi_{top}(S_n\cap R).$$ But $S_n\cap R\hookrightarrow S_n$ is a homotopy equivalence, cf. \ref{cor:EUcurves}. Then use part {\it (b)}. For {\it (d)} use {\it (c)}, $\sum_{\square_q\subset R}(-1)^{q}=1$ and $eu(\mathbb H_*(C,o))=\sum_{\square_q\subset R}(-1)^{q+1}w(\square_q)$, cf. (\ref{eq:eu}). For {\it (e)} use Th. \ref{cor:EUcurves}.
\end{proof} \begin{remark}\label{rem:hat} From part {\it (b)-- (c)} of the above proposition $$\sum_{\square_q\subset R}\, (-1)^q \, Q^{w(\square_q)}=\sum_n\chi_{top}(S_n)(Q^n-Q^{n+1})=\sum_n \chi_{top}(S_n,S_{n-1}) Q^n=\sum_n\sum_b (-1)^b {\rm rank}(\hat{\mathbb H}_b)_{-2n}Q^n. $$ This, in fact, says that the bigraded $\hat{\mathbb H}$ is the categorification of $\sum_{\square_q\subset R}\, (-1)^q \, Q^{w(\square_q)}$. The above identity for $Q=1$ gives (\ref{eq:hateu}). \end{remark} \subsection{The case of irreducible curves.}\label{ss:irredu} If $r=1$ then $\frX=[0,\infty)$, and any non-empty $S_n $ (for $n\geq m_w$) is a union of intervals $\cup_{\lambda \in\Lambda } [a_\lambda, b_\lambda]$. Since $w(b_\lambda+1)>n$ (and, in general, $w(l+1)-w(l)\in\{1,-1\}$), we obtain that $w(b_\lambda)=n$ and $b_\lambda\in\calS$. In particular, $(E^1_{-d,q})_{n}=H_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1},\mathbb Z)$ is nonzero if and only if $\Lambda\not=\emptyset$, $d=q$, and $d=b_\lambda$ for some $\lambda$. Moreover, the differential of the spectral sequence $d_{*,*}^k=0$ for $k\geq 1$. Hence, $$PE_1(T,Q,h)=PE_{\infty}(T,Q,h)= \sum_{s\in \calS} T^s\,Q^{w(s)}\,h^0= \sum_{s\in \calS} T^s\,Q^{w(s)}.$$ This can be compared with the motivic Poincar\'e series associated with $(C,o)$ (cf. \cite[2.3]{Gorsky}): $$P^m(t,q)=\sum_{s\in\calS}\, t^s\, q^{\mathfrak{h}(s)}.$$ Since $w(s)=2\mathfrak{h}(s)-s$, we obtain for any $1\leq k\leq \infty$: \begin{equation}\label{eq:pme} PE_k(T,Q)|_{T=t\sqrt{q}, \ Q=\sqrt{q}}= P^m(t,q). \end{equation} Since $P^m(t, q=1)=P(t)$, we also obtain $PE_k(T=t,Q=1,h)= P(t)$. Since $P^m(t,q)$ is a rational function of type $\overline{P^m}(t,q)/(1-tq)$, cf. \ref{prop:motProp}, a similar property should hold for $PE_k$ too. Indeed, since for $s\geq c$ one has $s\in\calS$ and $w(s)=w(c)+s-c=(c-2\delta)+s-c=s-2\delta$, we obtain $$PE_k(T,Q,h)=\sum_{s\in\calS, \, s<c}T^sQ^{w(s)}+\sum_{s\geq c}T^sQ^{s-2\delta}=\sum_{s\in\calS, \, s<c}T^sQ^{w(s)} +\frac{T^cQ^{c-2\delta}}{1-TQ}.$$ Let us write $PE_k(T,Q,h)$ as $\overline{PE_k}(T,Q,h)/(1-TQ)$ with $\overline{PE_k}\in\mathbb Z[T,Q,Q^{-1}]$. If $(C,o)$ is Gorenstein then $c=2\delta$ above, and the symmetry of Proposition \ref{prop:motProp}{\it (d)} reads as $$\overline{PE_k}(T^{-1}, Q)=T^{-c}\cdot \overline{PE_k}(T,Q).$$ That is, $\overline{PE_k}$ is a polynomial of degree $c$ in $T$. Here are some examples for some torus knots: $\calS=\langle 2,3\rangle$: \ \ \ \ \ \ \ \ \ \ \ $\overline{PE_k}(T,Q,h)=1-TQ+T^2$; $\calS=\langle 3,4\rangle$: \ \ \ \ \ \ \ \ \ \ \ $\overline{PE_k}(T,Q,h)=1-TQ+T^3Q^{-1}-T^5Q+T^6$; $\calS=\langle 2,2m+1\rangle$: \ \ $\overline{PE_k}(T,Q,h)=(1+T^2+\cdots +T^{2m-2})(1-TQ)+T^{2m}$; $\calS=\langle 3,3m+1\rangle$: \ \ $\overline{PE_k}(T,Q,h)=\Big[\frac{1-T^{3m}Q^{-m}}{1-T^3Q^{-1}}+T^{3m}Q^{-m} (1+TQ)\frac{ 1-T^{3m}Q^{m}}{1-T^3Q}\Big](1-TQ)+T^{6m}.$ \subsection{The case of the plane curve singularity $x^2+y^2=0$.}\label{ss:22} In this case $r=2$, $\delta=1$, $c=(1,1)$, $\calS=\{(0,0)\}\cup \{(1,1)+(\mathbb Z_{\geq 0})^2\}$.
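These data (and the $\mathfrak{h}$-table displayed below) can be verified by a short machine computation directly from the definition $\mathfrak{h}(l)=\dim\calO/{\mathcal F}(l)$. The following sketch is only illustrative (it is not used in any of the arguments); it assumes the SymPy library, parametrizes the two branches as $(t,\pm it)$, and uses the fact that $l\in\calS$ exactly when every ${\mathcal F}(l+E_i)$ is a proper subspace of ${\mathcal F}(l)$, i.e. when $\mathfrak{h}(l+E_i)=\mathfrak{h}(l)+1$ for all $i$ (cf. \ref{ss:ARR}).
{\footnotesize
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
branches = [(t, sp.I*t), (t, -sp.I*t)]        # the two branches y = ix and y = -ix

def h(l1, l2, deg_bound=8):
    # dim O/F(l): rank of the truncated jet map evaluated on the monomials x^a y^b
    if l1 + l2 == 0:
        return 0
    rows = []
    for a in range(deg_bound + 1):
        for b in range(deg_bound + 1 - a):
            row = []
            for (x, y), trunc in zip(branches, (l1, l2)):
                f = sp.expand(x**a * y**b)
                row += [f.coeff(t, k) for k in range(trunc)]
            rows.append(row)
    return sp.Matrix(rows).rank()

# l is in S  iff  h(l+E_i) = h(l)+1 for every i
S = sorted((l1, l2) for l1 in range(4) for l2 in range(4)
           if h(l1+1, l2) == h(l1, l2) + 1 and h(l1, l2+1) == h(l1, l2) + 1)
print(S)   # {(0,0)} together with all l >= (1,1) in this range
print([[h(l1, l2) for l1 in range(5)] for l2 in range(5)])   # rows l2 = 0..4 of the h-table below
\end{verbatim}}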
The Hilbert finction $\mathfrak{h}$, the weight function $w$ in the rectangle $R((0,0), (4,4))$, and the homological graded root are the following \begin{picture}(320,80)(-50,-20) \put(-40,30){\makebox(0,0){$\mathfrak{h}:$}} \put(140,30){\makebox(0,0){$w:$}} \put(-15,0){\line(1,0){90}} \put(-5,-10){\line(0,1){60}} \put(5,-5){\makebox(0,0){\small{$0$}}} \put(20,-5){\makebox(0,0){\small{$1$}}} \put(35,-5){\makebox(0,0){\small{$2$}}} \put(50,-5){\makebox(0,0){\small{$3$}}} \put(65,-5){\makebox(0,0){\small{$4$}}} \put(-10,5){\makebox(0,0){\small{$0$}}} \put(-10,15){\makebox(0,0){\small{$1$}}} \put(-10,25){\makebox(0,0){\small{$2$}}} \put(-10,35){\makebox(0,0){\small{$3$}}} \put(-10,45){\makebox(0,0){\small{$4$}}} \put(5,5){\makebox(0,0){\small{$0$}}} \put(5,15){\makebox(0,0){\small{$1$}}} \put(5,25){\makebox(0,0){\small{$2$}}} \put(5,35){\makebox(0,0){\small{$3$}}} \put(5,45){\makebox(0,0){\small{$4$}}} \put(20,5){\makebox(0,0){\small{$1$}}} \put(20,15){\makebox(0,0){\small{$1$}}} \put(20,25){\makebox(0,0){\small{$2$}}} \put(20,35){\makebox(0,0){\small{$3$}}} \put(20,45){\makebox(0,0){\small{$4$}}} \put(35,5){\makebox(0,0){\small{$2$}}} \put(35,15){\makebox(0,0){\small{$2$}}} \put(35,25){\makebox(0,0){\small{$3$}}} \put(35,35){\makebox(0,0){\small{$4$}}} \put(35,45){\makebox(0,0){\small{$5$}}} \put(50,5){\makebox(0,0){\small{$3$}}} \put(50,15){\makebox(0,0){\small{$3$}}} \put(50,25){\makebox(0,0){\small{$4$}}} \put(50,35){\makebox(0,0){\small{$5$}}} \put(50,45){\makebox(0,0){\small{$6$}}} \put(65,5){\makebox(0,0){\small{$4$}}} \put(65,15){\makebox(0,0){\small{$4$}}} \put(65,25){\makebox(0,0){\small{$5$}}} \put(65,35){\makebox(0,0){\small{$6$}}} \put(65,45){\makebox(0,0){\small{$7$}}} \put(160,0){\line(1,0){90}} \put(170,-10){\line(0,1){60}} \put(180,5){\makebox(0,0){\small{$0$}}} \put(180,15){\makebox(0,0){\small{$1$}}} \put(180,25){\makebox(0,0){\small{$2$}}} \put(180,35){\makebox(0,0){\small{$3$}}} \put(180,45){\makebox(0,0){\small{$4$}}} \put(195,5){\makebox(0,0){\small{$1$}}} \put(195,15){\makebox(0,0){\small{$0$}}} \put(195,25){\makebox(0,0){\small{$1$}}} \put(195,35){\makebox(0,0){\small{$2$}}} \put(195,45){\makebox(0,0){\small{$3$}}} \put(210,5){\makebox(0,0){\small{$2$}}} \put(210,15){\makebox(0,0){\small{$1$}}} \put(210,25){\makebox(0,0){\small{$2$}}} \put(210,35){\makebox(0,0){\small{$3$}}} \put(210,45){\makebox(0,0){\small{$4$}}} \put(225,5){\makebox(0,0){\small{$3$}}} \put(225,15){\makebox(0,0){\small{$2$}}} \put(225,25){\makebox(0,0){\small{$3$}}} \put(225,35){\makebox(0,0){\small{$4$}}} \put(225,45){\makebox(0,0){\small{$5$}}} \put(240,5){\makebox(0,0){\small{$4$}}} \put(240,15){\makebox(0,0){\small{$3$}}} \put(240,25){\makebox(0,0){\small{$4$}}} \put(240,35){\makebox(0,0){\small{$5$}}} \put(240,45){\makebox(0,0){\small{$6$}}} \put(320,40){\makebox(0,0){\small{$0$}}} \dashline[60]{1}(270,40)(310,40) \put(290,5){\makebox(0,0){$\vdots$}} \put(290,20){\circle*{3}} \put(290,30){\circle*{3}} \put(300,40){\circle*{3}} \put(280,40){\circle*{3}} \put(290,30){\line(0,-1){20}} \put(290,30){\line(1,1){10}} \put(290,30){\line(-1,1){10}} \end{picture} Therefore, $\mathbb H_{\geq 1}=0$ and $\mathbb H_0={\mathcal T}^-_0\oplus {\mathcal T}_0(1)$. 
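As a quick cross-check (again only illustrative), the Euler characteristics $\chi_{top}(S_n)$ can be recomputed from the $w$-table above by the cube-counting argument used in the proof of Proposition \ref{prop:infty}: one sums $(-1)^{\dim\square}$ over the cubes of $R(0,(4,4))$ of weight $\leq n$. The resulting values $2,1,1,1,\ldots$ agree with ${\rm rank}\,(\mathbb H_0)_{-2n}$, since $\mathbb H_{>0}=0$. A minimal sketch, with the $\mathfrak{h}$-values copied from the table above:
{\footnotesize
\begin{verbatim}
h = [[0, 1, 2, 3, 4],
     [1, 1, 2, 3, 4],
     [2, 2, 3, 4, 5],
     [3, 3, 4, 5, 6],
     [4, 4, 5, 6, 7]]            # h[l2][l1], read off the table above

def w(l1, l2):                   # w_0(l) = 2*h(l) - |l|
    return 2 * h[l2][l1] - l1 - l2

def cube_weight(l1, l2, I):      # weight of the cube (l, I): max over its vertices
    return max(w(l1 + d1, l2 + d2)
               for d1 in ((0, 1) if 1 in I else (0,))
               for d2 in ((0, 1) if 2 in I else (0,)))

def chi(n, size=4):              # Euler characteristic of S_n inside R(0,(size,size))
    total = 0
    for I in ((), (1,), (2,), (1, 2)):
        for l1 in range(size + 1 - (1 if 1 in I else 0)):
            for l2 in range(size + 1 - (1 if 2 in I else 0)):
                if cube_weight(l1, l2, I) <= n:
                    total += (-1) ** len(I)
    return total

print([chi(n) for n in range(4)])    # -> [2, 1, 1, 1]
\end{verbatim}}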
The spaces $S_n$ for $n=0,1,2,3$ are the following (the sequence of spaces can be continued easily): \begin{picture}(300,80)(-50,-10) \put(10,10){\circle*{2}}\put(20,20){\circle*{2}} \put(15,0){\makebox(0,0){$S_0$}} \put(5,30){\makebox(0,0){\tiny{$2$}}} \dashline[60]{1}(10,30)(30,10) \put(-5,20){\makebox(0,0){\tiny{$0$}}} \dashline[60]{1}(0,20)(10,10) \put(60,10){\line(1,0){10}}\put(60,10){\line(0,1){10}} \put(70,10){\line(0,1){20}}\put(60,20){\line(1,0){20}} \put(60,12){\line(1,0){10}}\put(60,14){\line(1,0){10}}\put(60,16){\line(1,0){10}}\put(60,18){\line(1,0){10}} \put(65,0){\makebox(0,0){$S_1$}} \put(55,30){\makebox(0,0){\tiny{$2$}}}\put(55,40){\makebox(0,0){\tiny{$3$}}} \dashline[60]{1}(60,30)(80,10)\dashline[60]{1}(60,40)(90,10) \put(110,10){\line(1,0){20}}\put(110,10){\line(0,1){20}} \put(130,10){\line(0,1){20}}\put(110,30){\line(1,0){20}} \put(120,30){\line(0,1){10}}\put(130,20){\line(1,0){10}} \put(110,12){\line(1,0){20}}\put(110,14){\line(1,0){20}}\put(110,16){\line(1,0){20}} \put(110,18){\line(1,0){20}}\put(110,20){\line(1,0){20}}\put(110,22){\line(1,0){20}}\put(110,24){\line(1,0){20}} \put(110,26){\line(1,0){20}}\put(110,28){\line(1,0){20}} \put(120,0){\makebox(0,0){$S_2$}} \put(105,40){\makebox(0,0){\tiny{$3$}}}\put(105,50){\makebox(0,0){\tiny{$4$}}} \dashline[60]{1}(110,40)(140,10)\dashline[60]{1}(110,50)(145,15) \put(160,10){\line(1,0){30}}\put(160,10){\line(0,1){30}} \put(190,10){\line(0,1){20}}\put(160,40){\line(1,0){20}} \put(180,30){\line(0,1){10}}\put(180,30){\line(1,0){10}} \put(190,20){\line(1,0){10}}\put(170,40){\line(0,1){10}} \put(160,12){\line(1,0){30}}\put(160,14){\line(1,0){30}}\put(160,16){\line(1,0){30}} \put(160,18){\line(1,0){30}}\put(160,20){\line(1,0){30}}\put(160,22){\line(1,0){30}}\put(160,24){\line(1,0){30}} \put(160,26){\line(1,0){30}}\put(160,28){\line(1,0){30}} \put(160,30){\line(1,0){20}}\put(160,32){\line(1,0){20}}\put(160,34){\line(1,0){20}} \put(160,36){\line(1,0){20}}\put(160,38){\line(1,0){20}} \put(180,0){\makebox(0,0){$S_3$}} \put(155,50){\makebox(0,0){\tiny{$4$}}}\put(155,60){\makebox(0,0){\tiny{$5$}}} \dashline[60]{1}(160,50)(200,10)\dashline[60]{1}(160,60)(205,15) \end{picture} In the case of $S_0$ the two points are the lattice points $(0,0)$ and $(1,1)$, hence in their case the values $|l|$ are $d=0$ and $d=2$, hence $(E^1_{*,*})_0$ contributes in $PE_1$ with $Q^0(1+T^2)$. In the case of $S_1$, for $d=2$ we have a relative 1-cycle, and for $d=3$ two relative 0-cycles. Hence $(E^1_{*,*})_1$ (i.e. $S_1$) contributes with $Q(T^2h^1+2T^3h^0)$. Similarly, the contribution of $S_2$ is $Q^2(2T^3h^1+3T^4h^0)$, of $S_3$ is $Q^3(3T^4h^1+4T^5h^0)$, and so on. 
Therefore, $$PE_1(T,Q,h)=1+T^2(1+2TQ+3T^2Q^2+\cdots )+T^2Qh^1(1+2TQ+3T^2Q^2+\cdots )=1+\frac{T^2(1+Qh)}{(1-TQ)^2}.$$ The entries of the page $(E^1_{*,*})_{n}$ for $n=0,1,2$ are the following: \begin{picture}(300,100)(-40,-20) \put(10,10){\vector(1,0){70}}\put(60,0){\vector(0,1){60}} \dashline[200]{1}(50,0)(50,60)\dashline[200]{1}(40,0)(40,60)\dashline[200]{1}(30,0)(30,60) \dashline[200]{1}(20,0)(20,60)\dashline[200]{1}(10,0)(10,60) \dashline[200]{1}(10,20)(70,20)\dashline[200]{1}(10,30)(70,30)\dashline[200]{1}(10,40)(70,40) \dashline[200]{1}(10,50)(70,50)\dashline[200]{1}(10,60)(70,60) \put(60,70){\makebox(0,0){\footnotesize{$q$}}} \put(80,15){\makebox(0,0){\footnotesize$p$}} \put(30,-10){\makebox(0,0){\footnotesize{$n=0$}}} \put(60,10){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(40,30){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(110,10){\vector(1,0){70}}\put(160,0){\vector(0,1){60}} \dashline[200]{1}(150,0)(150,60)\dashline[200]{1}(140,0)(140,60)\dashline[200]{1}(130,0)(130,60) \dashline[200]{1}(120,0)(120,60)\dashline[200]{1}(110,0)(110,60) \dashline[200]{1}(110,20)(170,20)\dashline[200]{1}(110,30)(170,30)\dashline[200]{1}(110,40)(170,40) \dashline[200]{1}(110,50)(170,50)\dashline[200]{1}(110,60)(170,60) \put(160,70){\makebox(0,0){\footnotesize{$q$}}} \put(180,15){\makebox(0,0){\footnotesize$p$}} \put(130,-10){\makebox(0,0){\footnotesize{$n=1$}}} \put(140,40){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(131,41){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(210,10){\vector(1,0){70}}\put(260,0){\vector(0,1){60}} \dashline[200]{1}(250,0)(250,60)\dashline[200]{1}(240,0)(240,60)\dashline[200]{1}(230,0)(230,60) \dashline[200]{1}(220,0)(220,60)\dashline[200]{1}(210,0)(210,60) \dashline[200]{1}(210,20)(270,20)\dashline[200]{1}(210,30)(270,30)\dashline[200]{1}(210,40)(270,40) \dashline[200]{1}(210,50)(270,50)\dashline[200]{1}(210,60)(270,60) \put(260,70){\makebox(0,0){\footnotesize{$q$}}} \put(280,15){\makebox(0,0){\footnotesize$p$}} \put(230,-10){\makebox(0,0){\footnotesize{$n=2$}}} \put(231,51){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(221,51){\makebox(0,0){\footnotesize{$\mathbb Z^3$}}} \end{picture} Note that for different $n$'s the corresponding spectral sequences do not interfere, they run independently. Since $\mathbb H_{\geq 1}=0$, $E_{-d,q}^\infty=0$ for $-d+q>0$, hence $(E^1_{-1-n, 2+n})_{n}=\mathbb Z^n$ should be injected by the differential $(d^1_{-1-n, 2+n})_{n}$. Then $(E_{*,*}^2)_n=(E_{*,*}^\infty)_n$, and $$PE_\infty(T,Q,h)=1+T^2 +T^3Q+T^4Q^2+\cdots = 1+\frac{T^2}{1-TQ},$$ compatibly with the filtration $\{{\rm Gr}_{-d}^F \,\mathbb H_0(C,o)\}_d$ of $\mathbb H_0(C,o)$ (which can be determined similarly as the filtration in Example \ref{ex:34}). This example shows that in general it can happen that $(E_{*,*}^1)_n\not = (E_{*,*}^\infty)_n$, i.e. $k_{(C,o)}(n)\not=1$. Also, even if $\mathbb H_{\geq m}(C,o)=0$ for some $m$, the first page might have nonzero terms with $-d+q\geq m$. Let us return back to $PE_1(T,Q,h)$ and consider $$\overline{PE_1}(T,Q,h)=(1-TQ)^2+T^2(1+Qh).$$ This can be compared with the motivic multivariable Poinvar\'e polynomial of $(C,o)$, $\overline{P^m}(t_1,t_2,q)=1-qt_1-qt_2+qt_1t_2 $, cf. \cite[5.1]{Gorsky}. Indeed $$\overline{PE_1}(T,Q,h)|_{T\to t\sqrt{q}, \ Q\to \sqrt{q}, \ h\to -\sqrt{q}}=\overline{P^m}(t,t,q).$$ This will be generalized and proved for any $(C,o)$ in Theorem \ref{th:PP}. Above all the differentials have `maximal ranks', but this is not the case in general, see Example \ref{ex:decsing2}. 
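The comparison with $\overline{P^m}$ made above can also be checked mechanically. The following minimal SymPy sketch (illustrative only) verifies the diagonal specialization $t_1=t_2=t$ of the substitution identity of Theorem \ref{th:PP}{\it (d)} for this germ:
{\footnotesize
\begin{verbatim}
import sympy as sp
T, Q, h, t, q = sp.symbols('T Q h t q')

PE1 = 1 + T**2*(1 + Q*h)/(1 - T*Q)**2      # PE_1 for x^2+y^2, as computed above
Pm  = (1 - 2*q*t + q*t**2)/(1 - t*q)**2    # P^m(t,t;q) = overline{P^m}(t,t,q)/(1-tq)^2

subs = {T: t*sp.sqrt(q), Q: sp.sqrt(q), h: -sp.sqrt(q)}
print(sp.simplify(PE1.subs(subs) - Pm))    # -> 0
\end{verbatim}}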
\begin{remark}\label{rem:3.5.1} The graded lattice homology, or $PE_\infty(T,Q,h)$, contains additional information compared with the lattice homology $\mathbb H_*(C,o)$. E.g., in the irreducible case $PE_\infty(T,Q,h)=\sum_{s\in\cS}T^sQ^{w(s)}$, hence from $PE_\infty$ one can recover the semigroup $\cS$. However, this is not possible from $\mathbb H_*$. Take for example the semigroups $\langle 4,5,7\rangle$ and $\langle 3,7,8\rangle$. They have different $PE_k$ (for all $1\leq k\leq \infty$) but they have identical $\mathbb Z[U]$--module lattice homology $\mathbb H_0=\EuScript{T}^-_{4}\oplus \EuScript{T}_{2}(1)\oplus \EuScript{T}_{0}(1)$. In particular, for the two cases the series $PE_k(T=1,Q)$ are the same as well. Indeed, the values of the level filtration are coded neither in $\mathbb H_*$ nor in $PE_1(T=1,Q)$. (For more see also Example \ref{ex:irredY1}.) \end{remark} \subsection{The spectral sequence as decoration of $\mathfrak{R}(C,o)$.}\label{ss:decroot} Above, for any fixed $n\geq m_w$, we considered the space $S_n$, its filtration $\{S_n\cap\frX_{-d}\}_d$ and the homological spectral sequence $(E^k_{*,*})_n$. We can define the generating function of the corresponding ranks $$PE_k(T,h)_n =\sum_{d,q} \,{\rm rank}(E^k_{-d,q})_nT^dh^{-d+q}\in \mathbb Z[[T]][h],$$ which satisfies $\sum_n PE_k(T,h)_n Q^n=PE_k(T,Q,h)$. Now, we can proceed as in subsection \ref{ss:grroot}: we can replace the $\mathbb Z$--grading given by $n$ (or, the generating functions indexed by $n$) by an index set given by the vertices of the graded root. Indeed, consider the connected components $\{S^v_n\}_v$ of $S_n$, where $v$ runs over the vertices of ${\mathfrak{R}}$ with $w_0(v)=n$. Then one can consider its filtration $\{S_n^v\cap\frX_{-d}\}_d$ and the corresponding homological spectral sequence $(E^k_{*,*})^v_n$. In this way the spectral sequence $\{(E^k_{*,*})^v_n\}_{k\geq 1}$, indexed by $v\in{\mathcal V}(\mathfrak{R})$, appears as the decoration of the (vertices of the) graded root $\mathfrak{R}$. Clearly, $\oplus_{v:w_0(v)=n}\, (E^k_{*,*})^v_n=(E^k_{*,*})_n$. At Poincar\'e series level we have generating functions $$PE_k(T,h)^v_{w_0(v)}= \sum _{d,q}{\rm rank}(E^k_{-d,q})^v_{w_0(v)}T^dh^{-d+q}\in \mathbb Z[[T]][h],$$ with $\sum_{v\,:\, w_0(v)=n}PE_k(T,h)^v_{w_0(v)}=PE_k(T,h)_n$. The decoration of ${\mathcal V}(\mathfrak{R})$ by the $E_1$ pages can be improved even more. In section \ref{s:LFilt} we will consider a lattice filtration and the corresponding series ${\bf PE}_1({\bf T},Q,h)=\sum_n {\bf PE}_1({\bf T},h)_nQ^n$ and ${\bf PE}_1(T_1=T, \ldots, T_r=T,h)_n=PE_1(T,h)_n$. Similarly to the above, each ${\bf PE}_1({\bf T},h)_n$ has a decomposition ${\bf PE}_1({\bf T},h)_n=\sum _{v\,:\, w_0(v)=n}{\bf PE}_1({\bf T},h)^v_{w_0(v)}$, where $ {\bf PE}_1({\bf T},h)^v_{w_0(v)}$ is indexed by the vertices of $\mathfrak {R}$. \section{The lattice filtration and the multigraded $E_{*,*}^1$.}\label{s:LFilt} \subsection{The improved first page of the spectral sequence.} \ Recall that for any fixed $n\geq m_w$ the spectral sequence $E_{*,*}^k$ ($k\geq 0$) associated with the {\it level filtration} $\{S_n\cap \frX_{-d}\}_{d\geq 0}$ of $S_n$ has its first terms $$(E^0_{-d,q})_{n}=\calC_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1}),\ \ \ (E^1_{-d,q})_{n}=H_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1},\mathbb Z).$$ The point is that $\calC_{-d+q}(S_n\cap \frX_{-d}, S_n\cap \frX_{-d-1})$ is generated by cubes of the form $\square =(l, I)$ with $|l|=d$ and $w((l,I))\leq n$.
This automatically provides a direct sum decomposition as follows. For any $l=\sum_il_iE_i\geq 0$ define $\frX_{-l}=\prod_i [l_i,\infty)$ with its cubical decomposition $\cup\{(l',I)\,:\, l'\geq l,\ I\subset{\mathcal V}\}$. Then \begin{equation}\label{eq:sum} (E^1_{-d,q})_{n}= \bigoplus_{l\in\mathbb Z^r_{\geq 0},\, |l|=d}\ (E^1_{-l,q})_{n}, \ \mbox{where} \ (E^1_{-l,q})_{n}:= H_{-d+q}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap \frX_{-d-1},\mathbb Z).\end{equation} According to this direct sum decomposition we also define $${{\bf PE}}_1({\bf T}, Q,h)={{\bf PE}}_1(T_1, \ldots , T_r, Q,h):=\sum_{l\in\mathbb Z^r_{\geq 0},\, n,q}\ \rank\big((E^1_{-l,q})_{n}\big)\cdot T_1^{l_1}\cdots T_r^{l_r}\, Q^n\, h^{-|l|+q}.$$ Clearly, ${{\bf PE}}_1(T_1=T,\ldots, T_r=T, Q,h)=PE_1(T,Q,h)$. \begin{example}\label{ex:22} Assume that $(C,o)$ is the plane curve singularity $x^2+y^2=0$, cf. \ref{ss:22}. Then using the shape of the spaces $\{S_n\}_{n\geq 0}$ from \ref{ss:22}, we deduce $${\bf PE}_1(T_1,T_2, Q,h)= 1+\frac{T_1T_2(1+Qh)}{(1-T_1Q)(1-T_2Q)}= \frac{\overline{{\bf PE}_1}(T_1,T_2, Q,h)}{(1-T_1Q)(1-T_2Q)}.$$ Then $$\overline{{\bf PE}_1}(T_1,T_2, Q,h)|_{T_1\to t_1\sqrt{q},\ T_2\to t_2\sqrt{q},\ Q\to \sqrt{q},\ h\to -\sqrt{q}}= 1-qt_1-qt_2+qt_1t_2 = \overline{P^m}(t_1,t_2,q).$$ \end{example} Our goal is to extend this relation to any $(C,o)$. More precisely, we will prove that ${\bf PE}_1({\bf T}, Q,h)$ and the multivariable motivic Poincar\'e series $P^m({\bf t}, q)$ determine each other. \begin{theorem}\label{th:PP} (a) For any fixed $l$ one has the isomorphism \begin{equation}\label{eq:PP} H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})= H_{-2n+2\mathfrak{h}(l)-2|l|, b}({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U).\end{equation} In particular, $H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})$ has no $\mathbb Z$--torsion (cf. Theorem \ref{zlat}{\it (2)}). (b) If $H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})\not=0$ then necessarily $n=w(l)+b$. (c) The next morphism (induced by the inclusion $S_n\hookrightarrow S_{n+1}$) is trivial: $$ H_{b}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1}) \to H_{b}(S_{n+1}\cap \frX_{-l}, S_{n+1}\cap \frX_{-l}\cap\frX_{-|l|-1}).$$ (d) $${\bf PE}_1({\bf T}, Q,h)|_{T_i\to t_i\sqrt{q},\ Q\to \sqrt{q},\ h\to -\sqrt{q}}= P^m({\bf t}, q).$$ In particular, $${\bf PE}_1({\bf T}, Q,h)|_{T_i\to t_i,\ Q\to 1,\ h\to -1}= P({\bf t}).$$ (e) Write $P^m({\bf t},q)$ as $\sum_l\pp^m_l(q)\cdot{\bf t}^l$. Then each $\pp^m_l(q)$ can be written in a unique way in the form $$\pp^m_l(q)=\sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}q^{k+\mathfrak{h}(l)}, \ \ (\pp^m_{l,k}\in\mathbb Z).$$ In fact, \begin{equation}\label{eq:HLFUJ} (-1)^k\pp^m_{l,k}={\rm rank}\, H_{-2\mathfrak{h}(l)-k}({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U).\end{equation} (f) Write $P^m({\bf t},q)=\sum_l\ \sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}q^{k+\mathfrak{h}(l)}\cdot {\bf t}^l$. Then $${\bf PE}_1({\bf T}, Q,h)= \sum_l\ \sum_{k\in\mathbb Z_{\geq 0} }\pp^m _{l,k}\ {\bf T}^l Q^{w(l)}\cdot (-Qh)^k.$$ (g) In particular, ${\bf PE}_1$ is a rational function of type $$\overline{{\bf PE}_1}({\bf T}, Q, h)/\prod_i(1-T_iQ), \ \ \mbox{where} \ \ \overline{{\bf PE}_1}({\bf T}, Q, h)\in\mathbb Z[{\bf T}, Q, Q^{-1}, h].$$ (h) The ${\bf T}$-support of ${{\bf PE}_1}$ is exactly $\calS$: if we write ${{\bf PE}_1}$ as $\sum_l {\mathfrak{ pe}}_l(Q,h){\bf T}^l$, then $\mathfrak{pe}_l\not\equiv 0$ if and only if \ $l\in\calS$.
(i) In the Gorenstein case, after substitution $h=-Q$ one has the symmetry $$\overline{{\bf PE}_1}({\bf T}, Q, h=-Q)|_{T_i\to T_i^{-1}}=\prod_i T_i^{-c_i}\cdot \overline{{\bf PE}_1}({\bf T}, Q, h=-Q).$$ \end{theorem} \begin{proof} The proof of part {\it (a)} follows a strategy similar to that of the proof of Theorem \ref{9STR1}. Consider the complex $({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U)$ generated over $\mathbb Z$ by elements $U^m\square$, $\square=(l,I)$, $m\geq 0$, cf. \ref{bek:2.22}. Each generator $U^m\square$\, has a bidegree $(a,b)=(-2m-2\mathfrak{h}(\square), \dim(\square))= (-2m-2\mathfrak{h}(l+E_I), |I|)$ and ${\rm gr}_l\partial_U$ has a bidegree $(0,-1)$. So, if we consider the subcomplex $({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U)_{a,*}$ generated by elements $U^m\square$ with $a$ fixed and graded by $b$, then $({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U)$ decomposes as a direct sum of subcomplexes $\oplus_a({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U)_{a,*}$. On the other hand, for any fixed $n$, let $\calC_{n,*} :=\calC_*( S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap\frX_{-|l|-1})$ be the homological (cubical) complex associated with the corresponding pair. It is generated over $\mathbb Z$ by cubes of type $\square=(l,I)\subset S_n$, $I\subset {\mathcal V}$. The boundary operator is $\partial ((l,I))=\sum_k (-1)^k (l, I\setminus \{k\})$. In the next discussion we identify the two complexes using a correspondence between the integers $a$ and $n$. Recall that in all these discussions $l$ is fixed and $a$ is even. We claim that under the bijective correspondence $n:=-a/2+\mathfrak{h}(l)-|l|$ the complexes $({\rm gr}_l{\mathcal L}^-,{\rm gr}_l\partial_U)_{a,*}$ and $(\calC_{n,*},\partial)$ are identical. First note that in the two complexes we use two different weight functions: in $\calC_{n,*}$ (that is, in the definition of $S_n$) we use the weights $w(\square)$, while in the definition of ${\rm gr}_l\partial_U$ we use $\mathfrak{h}(\square)$. The next identity appeared as identity (7) in the proof of Theorem 3.1.8 in \cite{AgostonNemethi}. It shows that for restricted cubes of type $\{(l,I)\}_{I\subset {\mathcal V}}$ the `relative weights with respect to $l$' are the same. \begin{equation}\label{eq:relweights} w((l,I))-w(l)=\mathfrak{h}((l,I))-\mathfrak{h}(l). \end{equation} Next, for any generator $U^m(l, I)$ of ${\rm gr}_l{\mathcal L}^-$, the identity $n=-a/2+\mathfrak{h}(l)-|l|$ together with (\ref{eq:relweights}) transforms into $n=m+w((l,I))$. Since $m\geq 0$ we get $w((l,I))\leq n$, hence $(l, I)\subset \frX_{-l}\cap S_n$. Conversely, any $(l, I)\subset \frX_{-l}\cap S_n$ defines the generator $U^m(l,I)$ of ${\rm gr}_l{\mathcal L}^-$ with $m:= n-w((l,I))\geq 0$. Finally we verify that this correspondence commutes with the boundary operators. Indeed, for any $U^m(l, I)$ with $m+w((l, I))=n$, ${\rm gr}_l\partial_U(U^m(l,I))$ equals $$\sum_k (-1)^k U^{m+\mathfrak{h}((l, I))-\mathfrak{h}((l, I\setminus \{k\}))}(l, I\setminus \{k\})\stackrel{(\ref{eq:relweights})}{=} \sum_k (-1)^k U^{m+w((l, I))-w((l, I\setminus \{k\}))}(l, I\setminus \{k\})$$ which by the above correspondence coincides with $\partial ((l, I))=\sum_k(-1)^k(l, I\setminus \{k\})$ considered in $S_n$ since $m+w((l, I))-w((l, I\setminus \{k\}))+w((l, I\setminus \{k\}))=n$. {\it (b)} Use part {\it (a)} and Theorem \ref{zlat}{\it (3)}. {\it (c)} By part {\it (b)} at least one of the modules vanishes. This part {\it (c)} is compatible (via the proof of part {\it (a)}) with Theorem \ref{zlat}{\it (4)}.
{\it (d)} Using parts {\it (a)} and {\it (b)} we obtain that $${\bf PE}_1({\bf T}, Q,h)=\sum_{l,n,q} {\rm rank}\, H_{-n-q, -|l|+q} ({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U)\,{\bf T}^l Q^nh^{-|l|+q}. $$ Again, by part {\it (b)}, $l, n, q$ are related by the identity $n=w(l)-|l|+q=2\mathfrak{h}(l)-2|l|+q$. Therefore, $${\bf PE}_1({\bf T}, Q,h)=\sum_{l,n} {\rm rank}\, H_{-n-|l|} ({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U)\, {\bf T}^l Q^nh^{n+|l|-2\mathfrak{h}(l)}. $$ After the corresponding substitutions, the right hand side transforms into $$\sum_{l}{\bf t}^lq^{-\mathfrak{h}(l)}\ \sum_n (-q)^{n+|l|} {\rm rank}\, H_{-n-|l|} ({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U),$$ which equals $\sum_l{\bf t}^l\pp^m_l(q)$ by (\ref{hfminus}). {\it (e)} By (\ref{hfminus}) $\pp^m_l(q)=q^{-\mathfrak{h}(l)}\ \sum_n (-q)^{n+|l|} {\rm rank}\, H_{-n-|l|} ({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U)$ with $-|l|+q\geq 0$ (cf. part {\it (a)} or Theorem \ref{zlat}{\it (3)}). Then use $n+|l|-\mathfrak{h}(l)=\mathfrak{h}(l)-|l|+q\geq \mathfrak{h}(l)$. For {\it (f)} use the identities of the proof of {\it (d)} and {\it (e)}; for {\it (g)} the above correspondence and Proposition \ref{prop:motProp}{\it (b)}; for {\it (h)} Proposition \ref{prop:motProp}{\it (c)}, and for {\it (i)} Proposition \ref{prop:motProp}{\it (d)} and $2\delta=|c|$. \end{proof} \begin{remark}\label{rem:hathat} For any cube $\square=(l,I)$ set $d(\square)=d(l):=|l|$. Note also that in Proposition \ref{prop:infty}{\it (c)} $PE_\infty(1,Q, -1)$ can be replaced by $PE_1(1,Q, -1)$, cf. (\ref{eq:spseq}). Then we have the following extensions of Proposition \ref{prop:infty}{\it (c)}. $$PE_1(T, Q,h=-1)=\frac{1}{1-Q}\cdot \sum_{\square_q\subset \frX}\, (-1)^q \, Q^{w(\square_q)}T^{d(\square_q)},$$ $${\bf PE}_1({\bf T}, Q,h=-1)=\frac{1}{1-Q}\cdot \sum_{\square_q\subset \frX}\, (-1)^q \, Q^{w(\square_q)}{\bf T}^{l}.$$ The proof is similar: write $(\sum_{\square_q}\, (-1)^q \, Q^{w(\square_q)}{\bf T}^{l})/(1-Q)$ as $\sum _{n,l} a_{n,l}Q^n{\bf T}^l$. Then $$a_{n,l}=\sum_{\square_q=(l,I):\ w(\square_q)\leq n}\,(-1)^{|I|}= \chi_{top}(S_n\cap \frX_{-l}, S_n\cap\frX_{-l}\cap \frX_{-|l|-1},\mathbb Z).$$ \end{remark} \subsection{Example. The plane curve singularity $(C,o)=\{x^3+y^3=0\}$.}\label{ss:33} The embedded link in $S^3$ consists of three Hopf link components, $r=3$. The conductor is $c=(2,2,2)$ and $\delta=3$. Each irreducible component is smooth; the partial Poincar\'e series/polynomials are $1/(1-t_i)$ for $C_i$, 1 for $C_{i,j}$ and $1-t_1t_2t_3$ for $C$. In particular, via (\ref{hilbert}), the Hilbert series is \begin{equation}\label{eq:hilb33} H({\bf t})|_{\geq 0}= \frac{1}{\prod_{i=1}^3(1-t_i)}\cdot \Big( \frac{t_1}{1-t_1}+ \frac{t_2}{1-t_2}+ \frac{t_3}{1-t_3}-t_1t_2-t_2t_3-t_3t_1+t_1t_2t_3(1-t_1t_2t_3)\Big). \end{equation} The semigroup $\calS$ can be computed directly or via Lemma \ref{eq:semi}; it is $$\calS=\{(0,0,0)\}\cup \{(l,1,1)\}_{l\geq 1}\cup \{(1,l,1)\}_{l\geq 1}\cup \{(1,1,l)\}_{l\geq 1}\cup \{(l_1,l_2,l_3)\}_{l_1,l_2,l_3\geq 2}. $$ The lattice homology $\mathbb H_*(C,o)$ can be read from the $w$-weights of the rectangle $R(0,c)$, cf. Theorem \ref{cor:EUcurves}.
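The $w$-weights on $R(0,c+{\bf 1})$ displayed below can be reproduced mechanically from (\ref{eq:hilb33}). The following SymPy sketch (illustrative only) expands the Hilbert series up to the needed order and prints $w(l)=2\mathfrak{h}(l)-|l|$ on that rectangle:
{\footnotesize
\begin{verbatim}
import sympy as sp
t1, t2, t3 = sp.symbols('t1 t2 t3')

def geom(t, N):                  # truncation of 1/(1-t) up to degree N
    return sum(t**k for k in range(N + 1))

N = 5
bracket = (t1*geom(t1, N) + t2*geom(t2, N) + t3*geom(t3, N)
           - t1*t2 - t2*t3 - t3*t1 + t1*t2*t3*(1 - t1*t2*t3))
H = sp.expand(geom(t1, N) * geom(t2, N) * geom(t3, N) * bracket)

def h(l1, l2, l3):               # Hilbert function = coefficient of t1^l1 t2^l2 t3^l3
    return H.coeff(t1, l1).coeff(t2, l2).coeff(t3, l3)

for l3 in range(4):              # rows l2 = 0..3 of the w-tables below
    print("l3 =", l3, [[2*h(l1, l2, l3) - (l1 + l2 + l3) for l1 in range(4)]
                       for l2 in range(4)])
\end{verbatim}}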
Since later we will need the $w$-weights of $R(0, c+{\bf 1})$ too, here we provide that one: \begin{picture}(380,85)(-30,-25) \footnotesize{ \put(65,0){\makebox(0,0){$l_1$}} \put(-5,50){\makebox(0,0){$l_2$}} \put(-15,0){\vector(1,0){70}} \put(-5,-10){\vector(0,1){50}} \put(5,-5){\makebox(0,0){\small{$0$}}} \put(20,-5){\makebox(0,0){\small{$1$}}} \put(35,-5){\makebox(0,0){\small{$2$}}} \put(50,-5){\makebox(0,0){\small{$3$}}} \put(25,-20){\makebox(0,0){\small{$l_3=0$}}} \put(-10,5){\makebox(0,0){\small{$0$}}} \put(-10,15){\makebox(0,0){\small{$1$}}} \put(-10,25){\makebox(0,0){\small{$2$}}} \put(-10,35){\makebox(0,0){\small{$3$}}} \put(5,5){\makebox(0,0){\small{$0$}}} \put(5,15){\makebox(0,0){\small{$1$}}} \put(5,25){\makebox(0,0){\small{$2$}}} \put(5,35){\makebox(0,0){\small{$3$}}} \put(20,5){\makebox(0,0){\small{$1$}}} \put(20,15){\makebox(0,0){\small{$0$}}} \put(20,25){\makebox(0,0){\small{$1$}}} \put(20,35){\makebox(0,0){\small{$2$}}} \put(35,5){\makebox(0,0){\small{$2$}}} \put(35,15){\makebox(0,0){\small{$1$}}} \put(35,25){\makebox(0,0){\small{$2$}}} \put(35,35){\makebox(0,0){\small{$3$}}} \put(50,5){\makebox(0,0){\small{$3$}}} \put(50,15){\makebox(0,0){\small{$2$}}} \put(50,25){\makebox(0,0){\small{$3$}}} \put(50,35){\makebox(0,0){\small{$4$}}} \put(85,0){\vector(1,0){70}} \put(95,-10){\vector(0,1){50}} \put(125,-20){\makebox(0,0){\small{$l_3=1$}}} \put(105,5){\makebox(0,0){\small{$1$}}} \put(105,15){\makebox(0,0){\small{$0$}}} \put(105,25){\makebox(0,0){\small{$1$}}} \put(105,35){\makebox(0,0){\small{$2$}}} \put(120,5){\makebox(0,0){\small{$0$}}} \put(120,15){\makebox(0,0){\small{$-1$}}} \put(120,25){\makebox(0,0){\small{$0$}}} \put(120,35){\makebox(0,0){\small{$1$}}} \put(135,5){\makebox(0,0){\small{$1$}}} \put(135,15){\makebox(0,0){\small{$0$}}} \put(135,25){\makebox(0,0){\small{$1$}}} \put(135,35){\makebox(0,0){\small{$2$}}} \put(150,5){\makebox(0,0){\small{$2$}}} \put(150,15){\makebox(0,0){\small{$1$}}} \put(150,25){\makebox(0,0){\small{$2$}}} \put(150,35){\makebox(0,0){\small{$3$}}} \put(185,0){\vector(1,0){70}} \put(195,-10){\vector(0,1){50}} \put(225,-20){\makebox(0,0){\small{$l_3=2$}}} \put(205,5){\makebox(0,0){\small{$2$}}} \put(205,15){\makebox(0,0){\small{$1$}}} \put(205,25){\makebox(0,0){\small{$2$}}} \put(205,35){\makebox(0,0){\small{$3$}}} \put(220,5){\makebox(0,0){\small{$1$}}} \put(220,15){\makebox(0,0){\small{$0$}}} \put(220,25){\makebox(0,0){\small{$1$}}} \put(220,35){\makebox(0,0){\small{$2$}}} \put(235,5){\makebox(0,0){\small{$2$}}} \put(235,15){\makebox(0,0){\small{$1$}}} \put(235,25){\makebox(0,0){\small{$0$}}} \put(235,35){\makebox(0,0){\small{$1$}}} \put(250,5){\makebox(0,0){\small{$3$}}} \put(250,15){\makebox(0,0){\small{$2$}}} \put(250,25){\makebox(0,0){\small{$1$}}} \put(250,35){\makebox(0,0){\small{$2$}}} \put(285,0){\vector(1,0){70}} \put(295,-10){\vector(0,1){50}} \put(325,-20){\makebox(0,0){\small{$l_3=3$}}} \put(305,5){\makebox(0,0){\small{$3$}}} \put(305,15){\makebox(0,0){\small{$2$}}} \put(305,25){\makebox(0,0){\small{$3$}}} \put(305,35){\makebox(0,0){\small{$4$}}} \put(320,5){\makebox(0,0){\small{$2$}}} \put(320,15){\makebox(0,0){\small{$1$}}} \put(320,25){\makebox(0,0){\small{$2$}}} \put(320,35){\makebox(0,0){\small{$3$}}} \put(335,5){\makebox(0,0){\small{$3$}}} \put(335,15){\makebox(0,0){\small{$2$}}} \put(335,25){\makebox(0,0){\small{$1$}}} \put(335,35){\makebox(0,0){\small{$2$}}} \put(350,5){\makebox(0,0){\small{$4$}}} \put(350,15){\makebox(0,0){\small{$3$}}} \put(350,25){\makebox(0,0){\small{$2$}}} \put(350,35){\makebox(0,0){\small{$3$}}} } 
\end{picture} The Gorenstein symmetry of $R(0, c)$ with respect to $c/2=(1,1,1)$ is transparent. From $(R(0,c), w)$ one reads $\mathbb H_{>0}=0$, and $\mathbb H_0={\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)^2$ is associated with the graded root identical with the root of the irreducible plane curve singularity $x^3+y^4=0$, given in \ref{ex:34}. Using the weighted lattice points (or the spaces $S_n$, see below), we can also see the induced level grading $ \{{\rm F}_{-d}\mathbb H_0\}_d$. It turns out that it coincides with the grading $ \{{\rm F}_{-d}\mathbb H_0\}_d$ of the singularity $x^3+y^4=0$ shown in \ref{ex:34}. At $d=0$ the lattice point $(0,0,0)$ is `cut out', at $d=3$ the point $(1,1,1)$ with $w=-1$, at $d=4$ the central cross of $S_0$ is left out, at $d=6$ the lattice point $(2,2,2)$ is cut, and so on. Hence, in this way we obtain $$PE_\infty (T,Q,h)=1+T^3Q^{-1}+T^4+ \frac{T^6}{1-TQ}.$$ Regarding the coincidence $PE_\infty (T,Q,h)(x^3+y^3)=PE_\infty (T,Q,h)(x^3+y^4)$ (for the second one see \ref{ss:irredu}) note that there exists a $\delta$-constant deformation $x^3+ty^3+y^4=0$ connecting the two germs, and we might expect some kind of stability of certain invariants along such deformations (see e.g. \cite{AgostonNemethi}). However, the stability of ${\rm Gr}^F_*\mathbb H_0$ is still surprising (not just because of the jump of the number of irreducible components, but also because at the level of semigroups we do not recognise the trace of an immediate stability). On the other hand, all the other invariants of the two germs $x^3+y^3$ and $x^3+y^4$ are very different. Let us start with the motivic Poincar\'e series of $x^3+y^3$. It can be determined either via (\ref{eq:pmot}) using the $\mathfrak{h}$-function given above in (\ref{eq:hilb33}), or from \cite[5.1]{Gorsky}: $$P^m({\bf t};q)=1+\frac{q(1-q)^2\cdot t_1t_2t_3\,-\,q^3t_1t_2t_3(1-t_1)(1-t_2)(1-t_3)}{(1-t_1q)(1-t_2q)(1-t_3q)}.$$ Then ${\bf PE}_1 ({\bf T}, Q,h)$ can be deduced using Theorem \ref{th:PP}. Indeed, by part {\it (h)} of that theorem we have to focus on the coefficients of the ${\bf T}^l$ for $l\in \calS$ only. For $l=(0,0,0)$ we get the contribution $1$. In the case of $l=(1,1,1)$, the coefficient of ${\bf t}^{(1,1,1)}$ in $P^m$ is $q(1-q)^2-q^3=q(1-2q)$. Since $\mathfrak{h}(1,1,1)=1$ and $w(1,1,1)=-1$, from Theorem \ref{th:PP} we get the term ${\bf T}^{(1,1,1)}Q^{-1}(1+2Qh)$ in ${\bf PE}_1$. For the semigroup element $s=(l+1,1,1)$ ($l\geq 1$) the coefficient in $P^m$ is $q(1-q)^2q^l-q^{3+l}+q^{2+l}=q^{l+1}(1-q)$. Since $\mathfrak{h}(s)=l+1$ and $w(s)=l-1$, we get the contribution $T_1^{l+1}T_2T_3Q^{l-1}(1+Qh)$ in ${\bf PE}_1$. The contribution for $(l_1,l_2, l_3)_{l_1,l_2,l_3\geq 2}$ is ${\bf T}^{(l_1,l_2,l_3)}Q^{l_1+l_2+l_3-6}(1+Qh)^2$. Therefore, ${\bf PE}_1({\bf T}, Q,h)$ is \begin{equation}\label{eq:TTT} 1+T_1T_2T_3Q^{-1}(1+2Qh)+T_1T_2T_3\cdot (\sum_{i=1}^3\frac{T_i}{1-T_iQ}\,)\cdot (1+Qh)+ \frac{T_1^2T_2^2T_3^2(1+Qh)^2}{\prod_{i=1}^3 (1-T_iQ)}. \end{equation} This can be deduced from the filtrations of the spaces $S_n$ as well. Let us do the first step together with the study of the corresponding spectral sequences. Clearly $S_{<-1}=\emptyset$.
\begin{picture}(300,100)(-20,-20) \dashline[200]{1}(0,0)(20,0)\dashline[200]{1}(0,0)(0,20)\dashline[200]{1}(20,0)(20,20) \dashline[200]{1}(0,20)(20,20)\dashline[200]{1}(20,0)(30,10)\dashline[200]{1}(20,20)(30,30) \dashline[200]{1}(0,20)(10,30)\dashline[200]{1}(10,30)(30,30) \dashline[200]{1}(30,10)(30,30) \dashline[200]{1}(30,30)(50,30)\dashline[200]{1}(30,30)(30,50)\dashline[200]{1}(50,30)(50,50) \dashline[200]{1}(30,50)(50,50)\dashline[200]{1}(50,30)(60,40)\dashline[200]{1}(50,50)(60,60) \dashline[200]{1}(30,50)(40,60)\dashline[200]{1}(40,60)(60,60) \dashline[200]{1}(60,40)(60,60) \put(30,-10){\makebox(0,0){$S_{-1}$}} \put(30,30){\circle*{4}} \dashline[200]{1}(100,0)(120,0)\dashline[200]{1}(100,0)(100,20)\dashline[200]{1}(120,0)(120,20) \dashline[200]{1}(100,20)(120,20)\dashline[200]{1}(120,0)(130,10)\dashline[200]{1}(120,20)(130,30) \dashline[200]{1}(100,20)(110,30)\dashline[200]{1}(110,30)(130,30) \dashline[200]{1}(130,10)(130,30) \dashline[200]{1}(130,30)(150,30)\dashline[200]{1}(130,30)(130,50)\dashline[200]{1}(150,30)(150,50) \dashline[200]{1}(130,50)(150,50)\dashline[200]{1}(150,30)(160,40)\dashline[200]{1}(150,50)(160,60) \dashline[200]{1}(130,50)(140,60)\dashline[200]{1}(140,60)(160,60) \dashline[200]{1}(160,40)(160,60) \put(130,-10){\makebox(0,0){$S_{0}$}} \put(100,0){\circle*{4}}\put(160,60){\circle*{4}} \thicklines \put(110,30){\line(1,0){40}} \put(120,20){\line(1,1){20}} \put(130,10){\line(0,1){40}} \put(230,0){\line(1,0){20}}\put(230,0){\line(0,1){20}} \put(230,20){\line(1,0){40}}\put(250,0){\line(0,1){40}} \put(230,20){\line(1,1){10}} \put(240,30){\line(1,0){10}} \put(250,20){\line(1,1){10}}\put(250,0){\line(1,1){10}}\put(260,10){\line(0,1){10}} \put(240,30){\line(0,1){20}} \put(240,50){\line(1,0){40}} \put(250,40){\line(1,1){20}} \put(260,30){\line(1,0){40}} \put(240,30){\line(1,0){20}} \put(260,30){\line(0,1){40}}\put(270,20){\line(1,1){20}}\put(280,10){\line(0,1){40}}\put(280,10){\line(-1,0){20}} \put(270,60){\line(1,0){40}}\put(280,50){\line(1,1){20}}\put(290,40){\line(0,1){40}} \put(230,20){\line(1,1){10}} \put(240,30){\line(1,0){10}} \put(250,20){\line(1,1){10}}\put(250,0){\line(1,1){10}}\put(260,10){\line(0,1){10}} \thinlines \dashline[200]{1}(230,0)(240,10)\dashline[200]{1}(240,10)(240,30)\dashline[200]{1}(240,10)(260,10) \dashline[200]{1}(260,30)(280,50)\dashline[200]{1}(270,40)(270,60)\dashline[200]{1}(270,40)(290,40) \put(260,-10){\makebox(0,0){$S_{1}$}} \end{picture} $S_{-1}$ is the lattice point $(1,1,1)$, hence via (\ref{eq:sum}) the contribution in ${\bf PE}_1$ is ${\bf T}^{(1,1,1)}Q^{-1}$. The space $S_0$ is $\{(0,0,0)\}\cup \{(2,2,2)\}\cup \{(1,1,[0,2])\}\cup \{(1,[0,2], 1)\}\cup \{([0,2],1,1)\}$. The contribution is $$Q^0\big(\ 1+2T_1T_2T_3 h+T_1T_2T_3(T_1+T_2+T_3)+T_1^2T_2^2T_3^2\,\big).$$ 1 respectively $T_1^2T_2^2T_3^2$ are produced by the isolated lattice points $(0,0,0)$ and $(2,2,2)$ of $S_0$. If we glue the three endpoints with $d=4$ of the central cross of $S_0$ into a single point then we create two 1--loops, this gives the term $2T_1T_2T_3 h$; while $T_1T_2T_3(T_1+T_2+T_3)$ is given by the three ends considered before. Next we consider the spectral sequence associated with $S_0$ and the level filtration (i.e. in the above formula in order to get $PE_1(T,Q,h)$ we substitute $T_1=T_2=T_3=T$. 
I.e., $$PE_1(T,Q,h)=1+T^3Q^{-1}(1+2Qh)+3T^4\frac{1+Qh}{1-TQ}+T^6\frac{(1+Qh)^2}{(1-TQ)^3}.$$ The left diagram $E^1_{*,*}$ is given by the above expression (by the term of $Q^0$), while the right $E^\infty_{*,*}$ is given by the $Q^0$ term $1+T^4+T^6$ of $PE_\infty$ computed above (or verifying the filtration on $S_0$). The differential has degree $(-1,0)$, so necessarily $\mathbb Z^3\leftarrow \mathbb Z^2$ should be injective and $E^2_{*,*}=E^\infty_{*,*}$. \begin{picture}(300,110)(-40,-20) \put(0,10){\vector(1,0){80}}\put(60,0){\vector(0,1){70}} \dashline[200]{1}(50,0)(50,70)\dashline[200]{1}(40,0)(40,70)\dashline[200]{1}(30,0)(30,70) \dashline[200]{1}(20,0)(20,70)\dashline[200]{1}(10,0)(10,70)\dashline[200]{1}(0,0)(0,70) \dashline[200]{1}(0,20)(70,20)\dashline[200]{1}(0,30)(70,30)\dashline[200]{1}(0,40)(70,40) \dashline[200]{1}(0,50)(70,50)\dashline[200]{1}(0,60)(70,60)\dashline[200]{1}(0,70)(70,70) \put(60,80){\makebox(0,0){\footnotesize{$q$}}} \put(80,15){\makebox(0,0){\footnotesize$p$}} \put(30,-10){\makebox(0,0){\footnotesize{$E^1_{*,*}(S_0)$}}} \put(60,10){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(0,70){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(22,51){\makebox(0,0){\footnotesize{$\mathbb Z^3$}}} \put(32,51){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(200,10){\vector(1,0){80}}\put(260,0){\vector(0,1){70}} \dashline[200]{1}(250,0)(250,70)\dashline[200]{1}(240,0)(240,70)\dashline[200]{1}(230,0)(230,70) \dashline[200]{1}(220,0)(220,70)\dashline[200]{1}(210,0)(210,70)\dashline[200]{1}(200,0)(200,70) \dashline[200]{1}(200,20)(270,20)\dashline[200]{1}(200,30)(270,30)\dashline[200]{1}(200,40)(270,40) \dashline[200]{1}(200,50)(270,50)\dashline[200]{1}(200,60)(270,60)\dashline[200]{1}(200,70)(270,70) \put(260,80){\makebox(0,0){\footnotesize{$q$}}} \put(280,15){\makebox(0,0){\footnotesize$p$}} \put(230,-10){\makebox(0,0){\footnotesize{$E^\infty_{*,*}(S_0)$}}} \put(260,10){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(200,70){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(222,51){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \end{picture} In the case of $S_1$, $Q^1$ in $PE_1(T,Q,h)$ appears with $3(T^4h+T^5)+2T^6h+3T^7$ while in $PE_\infty(T,Q,h)$ with $T^7$. 
Hence the pages of the spectral sequence are \begin{picture}(300,120)(-40,-20) \put(-10,10){\vector(1,0){90}}\put(60,0){\vector(0,1){80}} \dashline[200]{1}(50,0)(50,80)\dashline[200]{1}(40,0)(40,80)\dashline[200]{1}(30,0)(30,80) \dashline[200]{1}(20,0)(20,80)\dashline[200]{1}(10,0)(10,80)\dashline[200]{1}(0,0)(0,80)\dashline[200]{1}(-10,0)(-10,80) \dashline[200]{1}(-10,20)(70,20)\dashline[200]{1}(-10,30)(70,30)\dashline[200]{1}(-10,40)(70,40) \dashline[200]{1}(-10,50)(70,50)\dashline[200]{1}(-10,60)(70,60)\dashline[200]{1}(-10,70)(70,70)\dashline[200]{1}(-10,80)(70,80) \put(60,90){\makebox(0,0){\footnotesize{$q$}}} \put(80,15){\makebox(0,0){\footnotesize$p$}} \put(30,-10){\makebox(0,0){\footnotesize{$E^1_{*,*}(S_1)$}}} \put(12,61){\makebox(0,0){\footnotesize{$\mathbb Z^3$}}} \put(22,61){\makebox(0,0){\footnotesize{$\mathbb Z^3$}}} \put(-8,81){\makebox(0,0){\footnotesize{$\mathbb Z^3$}}} \put(2,81){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(190,10){\vector(1,0){90}}\put(260,0){\vector(0,1){80}} \dashline[200]{1}(250,0)(250,80)\dashline[200]{1}(240,0)(240,80)\dashline[200]{1}(230,0)(230,80) \dashline[200]{1}(220,0)(220,80)\dashline[200]{1}(210,0)(210,80)\dashline[200]{1}(200,0)(200,80)\dashline[200]{1}(190,0)(190,80) \dashline[200]{1}(190,20)(270,20)\dashline[200]{1}(190,30)(270,30)\dashline[200]{1}(190,40)(270,40) \dashline[200]{1}(190,50)(270,50)\dashline[200]{1}(190,60)(270,60)\dashline[200]{1}(190,70)(270,70)\dashline[200]{1}(190,80)(270,80) \put(260,90){\makebox(0,0){\footnotesize{$q$}}} \put(280,15){\makebox(0,0){\footnotesize$p$}} \put(230,-10){\makebox(0,0){\footnotesize{$E^\infty_{*,*}(S_1)$}}} \put(190,80){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \end{picture} It turns out that $E^2_{*,*}=E^\infty_{*,*}$ for all $n$, in particular, $PE_\infty$ is obtained from $PE_1$ by substitution $h\mapsto -T$, cf. \ref{ss:ss}. \begin{remark} The above examples from \ref{ss:22} and \ref{ss:33} might suggest that in general, for any $n$, the spectral sequence degenerates at most at $E^2$-level, that is, $E^2_{*,*}=E^\infty_{*,*}$. However, this is not the case as the next family shows. In particular, the degeneration invariant $k_{(C,o)}$ can be strict larger than two. \end{remark} \subsection{Decomposable singularities.}\label{ss:decsing} A curve singularity $(C,o)$ is called `decomposable' (into $(C',o)$ and $(C'',o)$), if it is isomorphic to the one-point union in $(\bC^n\times \bC^m,o)$ of $(C',o)\times \{o\}\subset (\bC^n,o)\times \{o\}$ and $\{o\}\times (C'',o)\subset \{o\}\times (\bC^m,o)$. We denote this by $(C,o)=(C',o)\vee (C'',o)$. If $(C',o)\subset (\bC^n,o)$ is given by the ideal $I'$, then its ideal in $(\bc^n\times \bC^m,o)$ is $I'+\frm_{(\bc^m,o)}$ (here $\frm$ denotes the maximal ideal). Using this observation, one can deduce that if $(C,o)=(C',o)\cup(C'',o)$ then $(C,o)$ is decomposable into $(C',o)$ and $(C'',o)$ if and only if $(C',C'')_{Hir}=1$ (or, if and only if $\delta(C,o)=\delta(C',o)+\delta(C'',o)+1$, cf. (\ref{eq:Hir})), see \cite{Steiner83,Stevens85}. Similarly, a computation shows that the semigroup of values also behave `additively'. Assume that the number of irreducible components of $(C',o)$ and $(C'',o)$ is $r'$ and $r''$, then \begin{equation}\label{eq:dec1} \calS(C,o)=\{(0,0)\}\cup\, \big(\ (\calS(C',o)\setminus \{0\}\times \calS(C'',o)\setminus \{0\})\ \big) \subset \mathbb Z_{\geq 0}^{r'}\times \mathbb Z_{\geq 0}^{r''}= \mathbb Z_{\geq 0}^{r'+r''}. 
\end{equation}
Furthermore, by Lemma \ref{eq:semi} and from $w(l)=2\mathfrak{h}(l)-|l|$ we also have
\begin{equation}\label{eq:dec2}
\mathfrak{h}_{(C,o)}(l',l'')=\left\{ \begin{array}{ll} \mathfrak{h}_{(C',o)}(l') & \mbox{if $l''=0$},\\ \mathfrak{h}_{(C'',o)}(l'') & \mbox{if $l'=0$},\\ \mathfrak{h}_{(C',o)}(l') +\mathfrak{h}_{(C'',o)}(l'')-1 & \mbox{if $l'>0$ and $l''>0$}; \end{array}\right.
\end{equation}
\begin{equation}\label{eq:dec3}
w_{(C,o)}(l',l'')=\left\{ \begin{array}{ll} w_{(C',o)}(l') & \mbox{if $l''=0$},\\ w_{(C'',o)}(l'') & \mbox{if $l'=0$},\\ w_{(C',o)}(l') +w_{(C'',o)}(l'')-2 & \mbox{if $l'>0$ and $l''>0$}; \end{array}\right.
\end{equation}
Furthermore, (\ref{eq:pmot}) gives
\begin{equation}\label{eq:dec4}
P^m_{(C,o)}({\bf t}',{\bf t}'';q)-1= q^{-1}(1-q)\big(\,P^m_{(C',o)}({\bf t}';q)-1\big)\big(P^m_{(C'',o)}({\bf t}'';q)-1\big).
\end{equation}
In particular, via Theorem \ref{th:PP},
\begin{equation}\label{eq:dec5}
{\bf PE}_{(C,o),1}({\bf T}',{\bf T}'', Q,h)-1= Q^{-2}(1+Qh)\big( {\bf PE}_{(C',o),1}({\bf T}', Q,h)-1\big)\big( {\bf PE}_{(C'',o),1}({\bf T}'', Q,h)-1\big).
\end{equation}
\begin{remark}
(a) The formulae for ${\bf PE}_{(C,o),k}$ for $k>1$ are, in general, more complicated. The complete discussion will be given in a forthcoming manuscript. They can be deduced from the structure of the level spaces $\{S_n\}_n$, which are determined by the weight function according to (\ref{eq:dec3}). E.g., if we denote by $\bar{S}_n$ the intersection of $S_n$ with $(\mathbb R_{\geq 1})^r$, then at the level of these spaces we have
$$\bar{S}_{(C,o),n}=\cup_{n'+n''=n} \ \bar{S}_{(C',o),n'}\times \bar{S}_{(C'',o),n''}.$$
By analysing the first lines/columns, we get that for $n\not=0$ the spaces $S_{(C,o),n}$ and $\bar{S}_{(C,o),n}$ have the same (graded) homotopy type, while for $n=0$ the space $S_{(C,o),0}$ is the disjoint union of $\{0\}$ and a space $S'_{(C,o),0}$, where $S'_{(C,o),0}$ and $\bar{S}_{(C,o),0}$ have the same (graded) homotopy type.

(b) Based on the discussion from {\it (a)}, in order to compute $\mathbb H_*(C_1\vee C_2)$ we have to focus on the lattice homology on the space $(\mathbb R_{\geq 1})^r$. But by (\ref{eq:dec3}) the weights here behave additively. This creates a situation similar to the computation of the lattice homology, or of the Heegaard Floer homology associated with the connected sum of plumbed 3--manifolds, where one compares the homologies associated with graphs $\Gamma_1$, $\Gamma_2$ and the disjoint union $\Gamma_1\sqcup\Gamma_2$. For such a connected sum formula in the $HF^-$ theory see \cite{os22}, section 6. Here a very similar (K\"unneth) formula holds (with a similar homological algebra proof). This is formulated as follows.
\end{remark}
We separate from $\mathbb H_*$ the contribution given by the lattice point $0$: we write $\mathbb H_0$ as $\bar{\mathbb H}_0\oplus {\mathcal T}_0(1)$, and set $\bar{\mathbb H}_b=\mathbb H_b$ for $b\geq 1$.
\begin{theorem}\label{th:kunneth} For any $b\geq 0$ we have the following isomorphism of $\mathbb Z[U]$--modules: $$\bar{\mathbb H}_b(C_1\vee C_2)=\oplus _{i+j=b}\, \bar{\mathbb H}_i(C_1)\otimes_{\mathbb Z[U]} \bar{\mathbb H}_j(C_2)[4]\ \oplus \ \oplus _{i+j=b-1}\, {\rm Tor}_{\mathbb Z[U]}(\bar{\mathbb H}_i(C_1), \bar{\mathbb H}_j(C_2))[2].$$ \end{theorem} Note also the following identities: ${\mathcal T}^-_n\otimes _{\mathbb Z[U]} {\mathcal T}^-_m={\mathcal T}^-_{n+m}$, ${\mathcal T}^-_n\otimes _{\mathbb Z[U]} {\mathcal T}_m(k)={\mathcal T}_{n+m}(k)$, ${\mathcal T}_n(k')\otimes _{\mathbb Z[U]} {\mathcal T}_m(k)={\mathcal T}_{n+m}(\min\{k',k\})$, ${\rm Tor}_{\mathbb Z[U]}({\mathcal T}^-_n, M)=0$ and ${\rm Tor} _{\mathbb Z[U]}({\mathcal T}_n(k'),{\mathcal T}_m(k))={\mathcal T}_{n+m}(\min\{k',k\})$. Next we focus on some concrete key examples. \begin{example}\label{ex:gen} Assume that $(C^{(j)},o)$ ($1\leq j\leq r$) is an irreducible curve singularity with $\calS_{(C^{(j)},o)}=\{0\}\cup {\mathbb Z}_{\geq c^{(j)}}$, for some $c^{(j)}\geq 1$. This situation can be realized. In fact, any numerical semigroup $\calS\subset \mathbb Z_{\geq 0}$ with $\mathbb Z_{\geq 0}\setminus \calS$ finite is the semigroup of some singular germ. Indeed, let $\bar{\beta}_0, \ldots, \bar{\beta}_s $ be a minimal set of generators of $\calS$ (for its existence see e.g. \cite{Assi}). Let $C^\calS$ be the affine curve defined via the parametrization $t\mapsto (t^{\bar{\beta}_0}, \ldots, t^{\bar{\beta}_s})$. Then the affine coordinate ring $\mathbb C[C^\calS]$ of $C^\calS$ is the image in $\mathbb C[t]$ of the morphism $\varphi:\mathbb C[u_0,\ldots, u_s]\to \mathbb C[t]$, $\varphi(u_i)=t^{\bar{\beta}_i}$, which correspond to the normalization of $C^{\calS}$. The analytic germ $(C^\calS,0)$ is irreducible, and its semigroup is $\calS$. Having the germs $(C^{(j)},o)$ ($1\leq j\leq r$), let us consider $(C,o):=\vee_{j=1}^r (C^{(j)},o)$. It has $r$ irreducible components, they are isomorphic to $\{(C^{(j)},o)\}_j$. Then $\calS_{(C,o)}=\{0\}\cup \prod_j \mathbb Z_{\geq c^{(j)}}$, hence the conductor of $\calS_{(C,o)}$ is $c=(c^{(1)}, \dots, c^{(r)})$. By the above general formula (\ref{eq:dec4}) we have $$P^m({\bf t};q)=1+\sum_{l\geq c}{\bf t}^l q^{|l|-|c|+1}(1-q)^{r-1}=1+{\bf t}^cq\cdot \frac{(1-q)^{r-1}} {\prod_i (1-t_iq)}.$$ Since for any $l\geq c$ we have $\mathfrak{h}(l)=|l|-|c|+1$, we also have $${\bf PE}_1({\bf T}, Q,h)=1+\sum_{l\geq c}{\bf T}^l Q^{|l|-2|c|+2}(1+Qh)^{r-1} =1+{\bf T}^cQ^{2-|c|}\cdot \frac{(1+Qh)^{r-1}} {\prod_i (1-T_iQ)},$$ $$PE_1(T, Q,h) =1+T^{|c|}Q^{2-|c|}\cdot \frac{(1+Qh)^{r-1}} {(1-TQ)^r}.$$ In this case, from the explicit structure of the $w$-table in $R(0,c)$ we also get that each $S_n$ for $n\not=0$ is contractible, and $S_0$ consists of two components, each contractible. Therefore, $\mathbb H_{>0}(C,o)$ is trivial, and $\mathbb H_0(C,o)= \mathcal{T}^-_{-2(2-|c|)}\oplus {\mathcal T}_0(1)$. (This identity can be deduced via Theorem \ref{th:kunneth} as well.) 
The homological graded root is \begin{picture}(300,92)(80,330) \put(180,380){\makebox(0,0){\footnotesize{$0$}}} \put(177,390){\makebox(0,0){\footnotesize{$+1$}}} \put(177,410){\makebox(0,0){\footnotesize{$-(2-|c|)$}}}\put(177,370){\makebox(0,0){\footnotesize{$-1$}}} \put(177,340){\makebox(0,0){\small{$-w_0$}}}\put(177,360){\makebox(0,0){\footnotesize{$-2$}}} \dashline{1}(200,350)(240,350) \dashline{1}(200,380)(240,380) \dashline{1}(200,360)(240,360) \dashline{1}(200,390)(240,390) \dashline{1}(200,370)(240,370) \put(220,345){\makebox(0,0){$\vdots$}} \put(220,360){\circle*{3}} \put(220,370){\circle*{3}} \put(210,380){\circle*{3}} \put(220,380){\circle*{3}} \put(220,390){\circle*{3}} \put(220,410){\circle*{3}} \put(220,410){\line(0,-1){5}} \put(220,350){\line(0,1){45}} \put(210,380){\line(1,-1){10}} \put(223,403){\makebox(0,0){$\vdots$}} \end{picture} \noindent Thus, one reads directly that $$PE_\infty(T,Q,h) =1+T^{|c|}Q^{2-|c|}\cdot \frac{1} {1-TQ}.$$ In particular, $PE_\infty$ is obtained from $PE_1$ by cancellation of terms of type $T^aQ^bH^c(T+h)$ (cf. \ref{ss:ss}) (that is, by substitution $h\mapsto -T$), hence the spectral sequence degenerates at $E^2$ level. Note that in this case $\delta=|c|-1$. This shows that the inequality $\delta\leq |c|-1$ proved in \ref{bek:AnnFiltr} is sharp, and this extremal case corresponds exactly to $\mathfrak{h}(c)=1$, or $\mathfrak{c}=\frm_{(C,o)}$. The curve is extremal from the point of view of the lattice homology $\mathbb H_*(C,o)$ as well. In \cite[Example 4.6.1]{AgostonNemethi} is proved that $\mathbb H_{*,red}\not=0$ if and only if $(C,o)$ is non-smooth. Note that in our case ${\rm rank}_{\mathbb Z}\mathbb H_{*,red}(C,o)=1$, the smallest possible among the non-smooth germs. If $c^{(i)}=1$ for all $i$ then $(C,o)$ is an ordinary $r$-tuple, hence the above discussion clarifies their invariants as well. \end{example} \begin{example}\label{ex:decsing2} Assume that $(C',o)=(C'',o)=\{x^3+y^4=0\}$. Then $\calS_{(C',o)}=\calS_{(C'',o)}=\langle3,4\rangle$. Here we provide some concrete computation for $(C,o)=(C',o)\vee (C'',o)$. 
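Since $(C',o)=(C'',o)=\{x^3+y^4=0\}$ has semigroup $\langle 3,4\rangle$, the $w$--values of $(C,o)$ on $R(0,(8,8))$ can be generated mechanically from (\ref{eq:dec2})--(\ref{eq:dec3}). The following short Python sketch (an illustration only, with ad hoc helper names; it is not part of the formal arguments) prints exactly the $w$--table displayed below.
\begin{verbatim}
# Sketch: the w-table of <3,4> v <3,4> on R(0,(8,8)) via (eq:dec2)-(eq:dec3).
S = {3*a + 4*b for a in range(10) for b in range(10)}   # enough elements of <3,4>

def h1(l):                # h(l) = #{s in S : s < l} for one branch
    return sum(1 for s in S if s < l)

def w1(l):                # w(l) = 2*h(l) - l for one branch
    return 2*h1(l) - l

def w(l1, l2):            # the wedge: w(l',l'') = w'(l') + w''(l'') - 2 if l',l'' > 0
    if l1 == 0 or l2 == 0:
        return w1(l1 + l2)
    return w1(l1) + w1(l2) - 2

for l2 in range(8, -1, -1):          # rows from l2 = 8 down to l2 = 0, as in the table
    print(" ".join(f"{w(l1, l2):3d}" for l1 in range(9)))
\end{verbatim}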
The $w$-table in $R(0,(8,8))$ and the homological graded root is the following: \begin{picture}(320,120)(-100,-20) \put(177,50){\makebox(0,0){\footnotesize{$0$}}} \put(177,60){\makebox(0,0){\footnotesize{$1$}}} \put(177,70){\makebox(0,0){\footnotesize{$2$}}}\put(177,80){\makebox(0,0){\footnotesize{$3$}}} \put(177,90){\makebox(0,0){\footnotesize{$4$}}}\put(177,10){\makebox(0,0){\footnotesize{$-w$}}} \dashline{1}(200,60)(240,60) \dashline{1}(200,80)(240,80) \dashline{1}(200,50)(240,50) \dashline{1}(200,90)(240,90) \dashline{1}(200,70)(240,70) \put(220,10){\makebox(0,0){$\vdots$}} \put(220,30){\circle*{3}} \put(220,40){\circle*{3}} \put(210,50){\circle*{3}} \put(220,50){\circle*{3}} \put(220,60){\circle*{3}} \put(220,20){\line(0,1){70}} \put(210,50){\line(1,-1){10}} \put(220,80){\circle*{3}} \put(220,90){\circle*{3}} \put(210,80){\circle*{3}} \put(220,70){\circle*{3}} \put(230,70){\circle*{3}} \put(230,80){\circle*{3}} \put(220,70){\line(1,1){10}} \put(220,70){\line(-1,1){10}} \put(220,60){\line(1,1){10}} \put(-15,0){\line(1,0){145}} \put(-5,-10){\line(0,1){100}} \put(5,-5){\makebox(0,0){\small{$0$}}} \put(20,-5){\makebox(0,0){\small{$1$}}} \put(35,-5){\makebox(0,0){\small{$2$}}} \put(50,-5){\makebox(0,0){\small{$3$}}} \put(65,-5){\makebox(0,0){\small{$4$}}} \put(80,-5){\makebox(0,0){\small{$5$}}} \put(95,-5){\makebox(0,0){\small{$6$}}} \put(110,-5){\makebox(0,0){\small{$7$}}} \put(125,-5){\makebox(0,0){\small{$8$}}} \put(-10,5){\makebox(0,0){\small{$0$}}} \put(-10,15){\makebox(0,0){\small{$1$}}} \put(-10,25){\makebox(0,0){\small{$2$}}} \put(-10,35){\makebox(0,0){\small{$3$}}} \put(-10,45){\makebox(0,0){\small{$4$}}} \put(-10,55){\makebox(0,0){\small{$5$}}} \put(-10,65){\makebox(0,0){\small{$6$}}} \put(-10,75){\makebox(0,0){\small{$7$}}} \put(-10,85){\makebox(0,0){\small{$8$}}} \put(5,5){\makebox(0,0){\small{$0$}}} \put(5,15){\makebox(0,0){\small{$1$}}} \put(5,25){\makebox(0,0){\small{$0$}}} \put(5,35){\makebox(0,0){\small{$-1$}}} \put(5,45){\makebox(0,0){\small{$0$}}} \put(5,55){\makebox(0,0){\small{$1$}}} \put(5,65){\makebox(0,0){\small{$0$}}} \put(5,75){\makebox(0,0){\small{$1$}}} \put(5,85){\makebox(0,0){\small{$2$}}} \put(20,5){\makebox(0,0){\small{$1$}}} \put(20,15){\makebox(0,0){\small{$0$}}} \put(20,25){\makebox(0,0){\small{$-1$}}} \put(20,35){\makebox(0,0){\small{$-2$}}} \put(20,45){\makebox(0,0){\small{$-1$}}} \put(20,55){\makebox(0,0){\small{$0$}}} \put(20,65){\makebox(0,0){\small{$-1$}}} \put(20,75){\makebox(0,0){\small{$0$}}} \put(20,85){\makebox(0,0){\small{$1$}}} \put(35,5){\makebox(0,0){\small{$0$}}} \put(35,15){\makebox(0,0){\small{$-1$}}} \put(35,25){\makebox(0,0){\small{$-2$}}} \put(35,35){\makebox(0,0){\small{$-3$}}} \put(35,45){\makebox(0,0){\small{$-2$}}} \put(35,55){\makebox(0,0){\small{$-1$}}} \put(35,65){\makebox(0,0){\small{$-2$}}} \put(35,75){\makebox(0,0){\small{$-1$}}} \put(35,85){\makebox(0,0){\small{$0$}}} \put(50,5){\makebox(0,0){\small{$-1$}}} \put(50,15){\makebox(0,0){\small{$-2$}}} \put(50,25){\makebox(0,0){\small{$-3$}}} \put(50,35){\makebox(0,0){\small{$-4$}}} \put(50,45){\makebox(0,0){\small{$-3$}}} \put(50,55){\makebox(0,0){\small{$-2$}}} \put(50,65){\makebox(0,0){\small{$-3$}}} \put(50,75){\makebox(0,0){\small{$-2$}}} \put(50,85){\makebox(0,0){\small{$-1$}}} \put(65,5){\makebox(0,0){\small{$0$}}} \put(65,15){\makebox(0,0){\small{$-1$}}} \put(65,25){\makebox(0,0){\small{$-2$}}} \put(65,35){\makebox(0,0){\small{$-3$}}} \put(65,45){\makebox(0,0){\small{$-2$}}} \put(65,55){\makebox(0,0){\small{$-1$}}} \put(65,65){\makebox(0,0){\small{$-2$}}} 
\put(65,75){\makebox(0,0){\small{$-1$}}} \put(65,85){\makebox(0,0){\small{$0$}}} \put(80,5){\makebox(0,0){\small{$1$}}} \put(80,15){\makebox(0,0){\small{$0$}}} \put(80,25){\makebox(0,0){\small{$-1$}}} \put(80,35){\makebox(0,0){\small{$-2$}}} \put(80,45){\makebox(0,0){\small{$-1$}}} \put(80,55){\makebox(0,0){\small{$0$}}} \put(80,65){\makebox(0,0){\small{$-1$}}} \put(80,75){\makebox(0,0){\small{$0$}}} \put(80,85){\makebox(0,0){\small{$1$}}} \put(95,5){\makebox(0,0){\small{$0$}}} \put(95,15){\makebox(0,0){\small{$-1$}}} \put(95,25){\makebox(0,0){\small{$-2$}}} \put(95,35){\makebox(0,0){\small{$-3$}}} \put(95,45){\makebox(0,0){\small{$-2$}}} \put(95,55){\makebox(0,0){\small{$-1$}}} \put(95,65){\makebox(0,0){\small{$-2$}}} \put(95,75){\makebox(0,0){\small{$-1$}}} \put(95,85){\makebox(0,0){\small{$0$}}} \put(110,5){\makebox(0,0){\small{$1$}}} \put(110,15){\makebox(0,0){\small{$0$}}} \put(110,25){\makebox(0,0){\small{$-1$}}} \put(110,35){\makebox(0,0){\small{$-2$}}} \put(110,45){\makebox(0,0){\small{$-1$}}} \put(110,55){\makebox(0,0){\small{$0$}}} \put(110,65){\makebox(0,0){\small{$-1$}}} \put(110,75){\makebox(0,0){\small{$0$}}} \put(110,85){\makebox(0,0){\small{$1$}}} \put(125,5){\makebox(0,0){\small{$2$}}} \put(125,15){\makebox(0,0){\small{$1$}}} \put(125,25){\makebox(0,0){\small{$0$}}} \put(125,35){\makebox(0,0){\small{$-1$}}} \put(125,45){\makebox(0,0){\small{$0$}}} \put(125,55){\makebox(0,0){\small{$1$}}} \put(125,65){\makebox(0,0){\small{$0$}}} \put(125,75){\makebox(0,0){\small{$1$}}} \put(125,85){\makebox(0,0){\small{$2$}}} \end{picture} Then we deduce that $\mathbb H_{>2}=0$, $\mathbb H_1={\mathcal T}_{2}(1)$ is generated by the loop around the lattice point $(5,5)$ in $S_{-1}$, and $\mathbb H_0={\mathcal T}_{8}^-\oplus {\mathcal T}_{6}(1)^{\oplus 2}\oplus {\mathcal T}_{4}(1)\oplus{\mathcal T}_0(1)$. (These can be read from Theorem \ref{th:kunneth} as well.) The delta invariant is $\delta=7$ (compatibly with $7=3+3+1$). We wish to emphasize that in general, by Proposition \ref{cor:EUcurves}, the {\it homotopy type} of $S_n$ is given by $S_n\cap R(0,c)$. However, if we wish to determine the graded $S_n$, or ${\bf PE}_1$, then we need a rectangle which contains $S_n$. (This motivates that we provided above $R(0,(8,8))$, though $c=(6,6)$, since $S_n\subset R(0,(8,8))$ for $-4\leq n\leq -1$, and we wish to picture these spaces.) From the rectangle $R(0,c)$ one also sees that $w$ is not symmetric with respect to $l\leftrightarrow c-l$ (cf. \ref{bek:GORdualoty})), hence $(C,o)$ is not Gorenstein, though both components are plane curve singularities. 
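Let us also sketch how the above modules follow from Theorem \ref{th:kunneth} (reading the shift $[k]$ as raising the degree by $k$). Since $\mathbb H_0(C',o)={\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)^2$ and $\mathbb H_{\geq 1}(C',o)=0$ (cf. \ref{ex:34}), we have $\bar{\mathbb H}_0(C',o)={\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)$. Using the identities listed after Theorem \ref{th:kunneth},
$$\bar{\mathbb H}_0(C,o)=\big({\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)\big)\otimes_{\mathbb Z[U]}\big({\mathcal T}^-_{2}\oplus {\mathcal T}_0(1)\big)[4]=
\big({\mathcal T}^-_{4}\oplus {\mathcal T}_2(1)^{\oplus 2}\oplus {\mathcal T}_0(1)\big)[4]={\mathcal T}^-_{8}\oplus {\mathcal T}_6(1)^{\oplus 2}\oplus {\mathcal T}_4(1),$$
$$\bar{\mathbb H}_1(C,o)={\rm Tor}_{\mathbb Z[U]}\big(\bar{\mathbb H}_0(C',o),\bar{\mathbb H}_0(C'',o)\big)[2]={\rm Tor}_{\mathbb Z[U]}\big({\mathcal T}_0(1),{\mathcal T}_0(1)\big)[2]={\mathcal T}_2(1),$$
in agreement with the modules read off from the $w$--table above (after adding back the summand ${\mathcal T}_0(1)$ of $\mathbb H_0$ corresponding to the lattice point $0$).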
The spaces $S_n$ for $n=-4 -3,-2,-1 $ are the following: \begin{picture}(350,120)(0,-20) \thicklines \put(30,-10){\makebox(0,0){$S_{-4}$}}\put(42,35){\makebox(0,0){\footnotesize$(3,3)$}} \put(30,30){\circle*{4}} \put(130,-10){\makebox(0,0){$S_{-3}$}}\put(142,65){\makebox(0,0){\footnotesize$(3,6)$}} \put(172,35){\makebox(0,0){\footnotesize$(6,3)$}} \put(130,60){\circle*{4}} \put(160,30){\circle*{4}} \put(120,30){\line(1,0){20}}\put(130,20){\line(0,1){20}} \put(230,-10){\makebox(0,0){$S_{-2}$}} \put(272,65){\makebox(0,0){\footnotesize$(6,6)$}} \put(260,60){\circle*{4}} \put(210,30){\line(1,0){10}} \put(240,30){\line(1,0){30}} \put(220,40){\line(1,0){20}}\put(220,20){\line(1,0){20}}\put(220,60){\line(1,0){20}} \put(230,10){\line(0,1){10}} \put(230,40){\line(0,1){30}} \put(220,20){\line(0,1){20}}\put(240,20){\line(0,1){20}}\put(260,20){\line(0,1){20}} \put(220,22){\line(1,0){20}}\put(220,24){\line(1,0){20}}\put(220,26){\line(1,0){20}} \put(220,28){\line(1,0){20}}\put(220,32){\line(1,0){20}}\put(220,34){\line(1,0){20}} \put(220,36){\line(1,0){20}}\put(220,38){\line(1,0){20}} \put(220,30){\line(1,0){20}} \put(330,-10){\makebox(0,0){$S_{-1}$}} \put(300,30){\line(1,0){80}} \put(310,20){\line(1,0){60}}\put(310,40){\line(1,0){60}}\put(310,60){\line(1,0){60}} \put(310,22){\line(1,0){60}}\put(310,24){\line(1,0){60}}\put(310,26){\line(1,0){60}} \put(310,28){\line(1,0){60}} \put(310,32){\line(1,0){60}}\put(310,34){\line(1,0){60}}\put(310,36){\line(1,0){60}} \put(310,38){\line(1,0){60}} \put(320,18){\line(1,0){20}}\put(320,16){\line(1,0){20}}\put(320,14){\line(1,0){20}}\put(320,12){\line(1,0){20}} \put(320,42){\line(1,0){20}}\put(320,46){\line(1,0){20}}\put(320,44){\line(1,0){20}}\put(320,48){\line(1,0){20}}\put(320,50){\line(1,0){20}} \put(320,52){\line(1,0){20}}\put(320,56){\line(1,0){20}}\put(320,54){\line(1,0){20}}\put(320,58){\line(1,0){20}} \put(320,10){\line(1,0){20}} \put(320,62){\line(1,0){20}}\put(320,66){\line(1,0){20}}\put(320,64){\line(1,0){20}}\put(320,68){\line(1,0){20}}\put(320,70){\line(1,0){20}} \put(310,20){\line(0,1){20}} \put(320,10){\line(0,1){10}} \put(320,40){\line(0,1){30}} \put(340,10){\line(0,1){10}} \put(340,40){\line(0,1){30}} \put(370,20){\line(0,1){20}} \put(330,0){\line(0,1){10}} \put(330,70){\line(0,1){10}}\put(360,10){\line(0,1){10}}\put(360,40){\line(0,1){30}} \thinlines \end{picture} The space $S_{-2}$ has two connected component: one of them is the isolated lattice point $(6,6)$, let $S_{-2}'$ be the other one. The non-zero terms in the first page of the spectral sequence $E^1_{*,*}$ {\it associated with} $S_{-2}'$ can be read from its cubical complex: $E^1_{-10,10}=\mathbb Z^4$, $E^1_{-9,10}=\mathbb Z^2$, $E^1_{-8,8}=\mathbb Z$, $E^1_{-7,8}=\mathbb Z^2$. This must converge to $E^\infty_{*,*}$ where the only non-zero term is $E^\infty_{-10,10}=\mathbb Z$. This can happen only if the differential $d^3 $ is non-trivial. I.e., $E^1\not=E^2=E^3\not= E^4=E^\infty$, hence $k_{(C,o)}\geq 4$. 
\begin{picture}(300,80)(-40,10) \dashline[200]{1}(30,50)(30,80) \dashline[200]{1}(20,50)(20,80)\dashline[200]{1}(10,50)(10,80) \dashline[200]{1}(0,50)(0,80) \dashline[200]{1}(-10,50)(-10,80) \dashline[200]{1}(-10,50)(30,50)\dashline[200]{1}(-10,60)(30,60) \dashline[200]{1}(-10,70)(30,70)\dashline[200]{1}(-10,80)(30,80) \put(10,30){\makebox(0,0){\footnotesize{$E^1_{*,*}(S_{-2}')$}}} \put(10,60){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(21,61){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(-8,81){\makebox(0,0){\footnotesize{$\mathbb Z^4$}}} \put(2,81){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \dashline[200]{1}(130,50)(130,80) \dashline[200]{1}(120,50)(120,80)\dashline[200]{1}(110,50)(110,80) \dashline[200]{1}(100,50)(100,80) \dashline[200]{1}(90,50)(90,80) \dashline[200]{1}(90,50)(130,50)\dashline[200]{1}(90,60)(130,60) \dashline[200]{1}(90,70)(130,70)\dashline[200]{1}(90,80)(130,80) \put(110,30){\makebox(0,0){\footnotesize{$E^2_{*,*}(S_{-2}')$}}} \put(110,15){\makebox(0,0){\footnotesize{$E^3_{*,*}(S_{-2}')$}}} \put(121,60){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(92,81){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(300,60){\makebox(0,0){\mbox{(the arrow is $d^3\not=0$)}}} \put(117,64){\vector(-2,1){22}} \dashline[200]{1}(230,50)(230,80) \dashline[200]{1}(220,50)(220,80)\dashline[200]{1}(210,50)(210,80) \dashline[200]{1}(200,50)(200,80) \dashline[200]{1}(190,50)(190,80) \dashline[200]{1}(190,50)(230,50)\dashline[200]{1}(190,60)(230,60) \dashline[200]{1}(190,70)(230,70)\dashline[200]{1}(190,80)(230,80) \put(210,30){\makebox(0,0){\footnotesize{$E^4_{*,*}(S_{-2}')$}}} \put(210,15){\makebox(0,0){\footnotesize{$E^\infty_{*,*}(S_{-2}')$}}} \put(190,80){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \end{picture} (Above in each diagram the upper-left corner is the lattice point $(-10,10)$.) Looking at the pages of this spectral sequence (and the examples considered above) we might believe that each differential $d^k_{*,*}$ has `maximal rank' (that is, its rank is the maximal of the ranks of its source and target). However, this is not the case in general. E.g., $S_{-1}$ has the homotopy type of a circle, hence $E^\infty_{p,q}=\mathbb Z$ for a pair $(p,q)$ with $p+q=0$ and for a pair with $p+q=1$. 
It turns out that the spectral sequence has the following pages (the upper-left corner is $(-13,13)$): \begin{picture}(300,80)(-40,10) \dashline[200]{1}(30,40)(30,80) \dashline[200]{1}(20,40)(20,80)\dashline[200]{1}(10,40)(10,80) \dashline[200]{1}(0,40)(0,80) \dashline[200]{1}(-10,40)(-10,80) \dashline[200]{1}(40,40)(40,80) \dashline[200]{1}(-10,50)(40,50)\dashline[200]{1}(-10,60)(40,60) \dashline[200]{1}(-10,70)(40,70)\dashline[200]{1}(-10,80)(40,80)\dashline[200]{1}(-10,40)(40,40) \put(10,30){\makebox(0,0){\footnotesize{$E^1_{*,*}(S_{-1})$}}} \put(12,61){\makebox(0,0){\footnotesize{$\mathbb Z^4$}}} \put(22,61){\makebox(0,0){\footnotesize{$\mathbb Z^4$}}} \put(-8,81){\makebox(0,0){\footnotesize{$\mathbb Z^2$}}} \put(0,80){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(40,40){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \dashline[200]{1}(130,40)(130,80) \dashline[200]{1}(120,40)(120,80)\dashline[200]{1}(110,40)(110,80) \dashline[200]{1}(100,40)(100,80) \dashline[200]{1}(90,40)(90,80) \dashline[200]{1}(140,40)(140,80) \dashline[200]{1}(90,50)(140,50)\dashline[200]{1}(90,60)(140,60) \dashline[200]{1}(90,70)(140,70)\dashline[200]{1}(90,80)(140,80)\dashline[200]{1}(90,40)(140,40) \put(110,30){\makebox(0,0){\footnotesize{$E^2_{*,*}(S_{-1})$}}} \put(110,15){\makebox(0,0){\footnotesize{$E^\infty_{*,*}(S_{-1})$}}} \put(90,80){\makebox(0,0){\footnotesize{$\mathbb Z$}}}\put(140,40){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \dashline[200]{1}(230,40)(230,80) \dashline[200]{1}(220,40)(220,80)\dashline[200]{1}(210,40)(210,80) \dashline[200]{1}(200,40)(200,80) \dashline[200]{1}(190,40)(190,80) \dashline[200]{1}(240,40)(240,80) \dashline[200]{1}(190,50)(240,50)\dashline[200]{1}(190,60)(240,60) \dashline[200]{1}(190,70)(240,70)\dashline[200]{1}(190,80)(240,80)\dashline[200]{1}(190,40)(240,40) \put(235,45){\vector(-4,3){40}} \put(190,80){\makebox(0,0){\footnotesize{$\mathbb Z$}}}\put(240,40){\makebox(0,0){\footnotesize{$\mathbb Z$}}} \put(210,30){\makebox(0,0){\footnotesize{$E^5_{*,*}(S_{-1})$}}} \put(210,15){\makebox(0,0){\footnotesize{$d^5=0$}}} \end{picture} I.e., $E^\infty_{-13,13}=E^{\infty}_{-8,9}=\mathbb Z$ and $d^5_{-8,9}:E^5_{-8,9}=\mathbb Z\to E^5_{-13,13}=\mathbb Z$ is the zero morphism. Hence, the spectral sequence --- starting from $E^1_{*,*}$ --- does not follow automatically the `maximal rank' principle. \end{example} \begin{example}\label{ex::decsing2} Consider now $r$ copies of $(C',o)=\{x^3+y^4=0\}$, and define $(C,o):=\vee_{i=1}^r (C',o)$. Then $l=5$ for $w(C',o)$ is a local maximum point. Therefore, the lattice point $M:=(5, 5, \ldots, 5)$ will be a local maximum point for the $w$-values of $(C,o)$ with $w(M)=2-r$. Hence, $\mathbb H_{r-1}(C,o)\not=0$. This shows that for well-chosen germs $\mathbb H_b(C,o)$ can be non-zero for arbitrarily large $b$. \end{example} \begin{question} (a) Is there any plane curve singularity with $k_{(C,o)}>2$? (b) What is the algebraic interpretation of $k_{(C,o)}(n)$ and $k_{(C,o)}$ ? \end{question} \section{Actions along the spectral sequences. The operators $Y_1, \ldots , Y_r$}\label{s:Uoperators} \subsection{} Recall that we have the natural inclusion $S_n\subset S_{n+1}$ for any $n$. This induces a $U$--action on $\oplus_n H_*(S_n,\mathbb Z)$. Besides this, in this section we will define certain additional maps too, and we will analyse the actions what they induce on the terms of the spectral sequence $E^*_{*,*}$. In this way we endow the terms of the spectral sequence $E^*_{*,*}$ with certain additional structure. 
We will denote by $u=u_n$ the inclusion $S_n\subset S_{n+1}$, and we will use similar notation for inclusions of type $S_n\cap \frX_{-l}\subset S_{n+1}\cap \frX_{-l}$ as well. Analogously, for any $i$, the inclusions of type $\frX_{-l-E_i}\hookrightarrow \frX_{-l}$ will be denoted by $x_i$. Furthermore we define another map $y_i:\frX\to \frX$ by $y_i(x)=x+E_i$. This map clearly sends $\frX_{-d}$ to $\frX_{-d-1}$, $\frX_{-l}$ to $\frX_{-l-E_i}$ and each cube $\square=(l,I)$ to $(l+E_i, I)$. Moreover, since $w(l+E_i)\leq w(l)+1$, we have \begin{lemma}\label{lem:ui} (i) \ $w(y_i(l))\leq w(l)+1$, \ (ii) \ $w(y_i(\square))\leq w(\square)+1$, \ (iii) \ $y_i(S_n)\subset S_{n+1}$. \end{lemma} The map $y_i:S_n\to S_{n+1}$ is not very interesting at $H_b(S_n,\mathbb Z)\to H_b(S_{n+1},\mathbb Z)$ level. Indeed, the homotopy $S_n\times [0,1]\to S_{n+1}$ by $(x,t)\mapsto x+tE_i $ gives the following. \begin{lemma}\label{lem:ui2} For each $i$, the map $y_i:S_n\to S_{n+1}$ is homotopic to the inclusion map $u:S_n\subset S_{n+1}$, hence the morphisms $(\mathbb H_b)_{-2n}\to (\mathbb H_b)_{-2n-2}$ induced by $y_i$ and $u$ are equal. For $b=0$, the action induced by $y_i$ can be read from the edges of graded root as well (as described in Example \ref{ex:34}). \end{lemma} However, along the spectral sequence associated with the filtration $\{S_n\cap \frX_{-d}\}_d$ of $S_n$ the maps $y_i$ induce more interesting morphisms. First, consider the following diagram, which is commutative up to homotopy (by a similar argument as in Lemma \ref{lem:ui2}): \begin{picture}(300,60)(0,10) \put(50,50){\makebox(0,0){{$S_n\cap \frX_{-l-E_i}$}}} \put(50,10){\makebox(0,0){{$S_{n}\cap \frX_{-l}$}}} \put(150,50){\makebox(0,0){{$S_{n+1}\cap \frX_{-l-E_i}$}}} \put(150,10){\makebox(0,0){{$S_{n+1}\cap \frX_{-l}$}}} \put(85,50){\vector(1,0){30}} \put(85,10){\vector(1,0){30}} \put(50,40){\vector(0,-1){20}} \put(150,40){\vector(0,-1){20}} \put(80,20){\vector(3,2){30}} \put(100,55){\makebox(0,0){{$u$}}} \put(100,15){\makebox(0,0){{$u$}}} \put(43,30){\makebox(0,0){{$x_{i}$}}} \put(160,30){\makebox(0,0){{$x_{i}$}}} \put(87,30){\makebox(0,0){{$y_i$}}} \end{picture} \noindent It induced two other commutative diagrams: \begin{picture}(300,60)(-50,10) \put(50,50){\makebox(0,0){{$H_b(S_n\cap \frX_{-l-E_i},S_n\cap \frX_{-l-E_i}\cap \frX_{-|l|-2}) $}}} \put(50,10){\makebox(0,0){{$H_b(S_n\cap \frX_{-l},S_n\cap \frX_{-l}\cap \frX_{-|l|-1} )$}}} \put(250,50){\makebox(0,0){{$H_b(S_{n+1}\cap \frX_{-l-E_i},S_{n+1}\cap \frX_{-l-E_i}\cap \frX_{-|l|-2} )$}}} \put(250,10){\makebox(0,0){{$H_b(S_{n+1}\cap \frX_{-l},S_{n+1}\cap \frX_{-l}\cap \frX_{-|l|-1} ) $}}} \put(140,50){\vector(1,0){10}} \put(130,10){\vector(1,0){30}} \put(50,40){\vector(0,-1){20}} \put(250,40){\vector(0,-1){20}} \put(130,20){\vector(3,2){30}} \put(145,55){\makebox(0,0){{$U$}}} \put(145,15){\makebox(0,0){{$U$}}} \put(40,30){\makebox(0,0){{$X_{i}$}}} \put(265,30){\makebox(0,0){{$X_{i}$}}} \put(130,30){\makebox(0,0){{$Y_i$}}} \end{picture} \noindent and \begin{picture}(300,60)(0,10) \put(50,50){\makebox(0,0){{$F_{-d-1}H_b(S_n)$}}} \put(50,10){\makebox(0,0){{$F_{-d}H_b(S_n)$}}} \put(150,50){\makebox(0,0){{\ \ $F_{-d-1}H_b(S_{n+1})$}}} \put(150,10){\makebox(0,0){{$F_{-d}H_b(S_{n+1})$}}} \put(85,50){\vector(1,0){30}} \put(85,10){\vector(1,0){30}} \put(50,40){\vector(0,-1){20}} \put(150,40){\vector(0,-1){20}} \put(80,20){\vector(3,2){30}} \put(100,55){\makebox(0,0){{$U$}}} \put(100,15){\makebox(0,0){{$U$}}} \put(40,30){\makebox(0,0){{$X_{i}$}}} \put(165,30){\makebox(0,0){{$X_{i}$}}} 
\put(85,30){\makebox(0,0){{$Y_i$}}}
\end{picture}

In the first diagram the morphisms $X_{i}$ are automatically trivial, hence $U$ in that diagram acts trivially too. This is compatible with the statement from Theorem \ref{th:PP}{\it (c)}. That statement was based on the structure of the Orlik--Solomon algebra exploited in \cite{GorNem2015}, while the above proof is based merely on the very existence of the morphisms $Y_i$ induced by $y_i$.

The second commutative diagram from above proves that $U:{\rm Gr}_{-d}^F\,H_b(S_n)\to {\rm Gr}_{-d}^F\,H_b(S_{n+1})$ is trivial as well. This fact can be deduced by the following argument too. Consider again the map $u_n:S_n\hookrightarrow S_{n+1}$, which is compatible with the corresponding filtration $\{\frX_{-d}\}_d$. Therefore, $u_n$ induces morphisms at the level of the spectral sequences associated with $S_n$ and $S_{n+1}$. Since at the $E^1_{*,*}$ page the induced map $U$ is trivial, it is trivial at the level of all the pages $E^k_{*,*}$, including $E^\infty_{*,*}$. For further reference we state:

\begin{lemma} The morphisms $U$ induced by the inclusion $S_n\subset S_{n+1}$ on $(E^*_{*,*})_{n}\to (E^*_{*,*})_{n+1}$ (in particular, on ${\rm Gr}_{-d}^F\,H_b(S_n)\to {\rm Gr}_{-d}^F\,H_b(S_{n+1})$\,) are trivial. \end{lemma}

\subsection{The actions $Y_i$}
Similarly to the above, the map $y_i:S_n\to S_{n+1}$ is compatible with the filtrations (i.e. $y_i:S_n\cap \frX_{-d}\to S_{n+1}\cap \frX_{-d-1}$), hence it induces spectral sequence morphisms connecting the spectral sequences of $S_n$ and $S_{n+1}$. Thus, for any $k\geq 1$ one has a morphism
\begin{equation}\label{eq:yi} Y_i: (E^k_{-d, b+d})_{n} \to (E^k_{-d-1, b+d+1})_{n+1},\end{equation}
which commutes with the differentials of the spectral sequences. This is compatible with the disjoint decomposition $S_n=\sqcup_v S_n^v$ into connected components. Via the notation of \ref{ss:grroot}, if $[v,u]$ is an edge of ${\mathfrak{R}}$ then $y_i(S_n^v)\subset S_{n+1}^u$, hence we have a well-defined restriction
\begin{equation}\label{eq:yi2} Y_i|_{(E^k_{-d,b+d})^v_n}: (E^k_{-d, b+d})^v_{n} \to (E^k_{-d-1, b+d+1})^u_{n+1},\end{equation}
which commutes with the differentials of the spectral sequences. In particular, all the discussions below regarding the actions $\{Y_i\}_i$ from (\ref{eq:yi}) can be extended to the level of their restrictions (\ref{eq:yi2}) indexed by the edges of $\mathfrak{R}$. The corresponding details regarding these extensions are left to the reader.

The morphisms $Y_i$, for $k=1$ and for any fixed $b$, make
$$\oplus_{n,d}\, \oplus_{l\,:\, |l|=d}\, H_b(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap \frX_{-|l|-1}, \mathbb Z) =\oplus_{n,d}\, (E^1_{-d, b+d})_{n}$$
a $\mathbb Z[Y_1,\ldots, Y_r]$--module. The operator $Y_i$ increases the $w$--weight (or $n$) by $+1$, increases the lattice filtration by $E_i$ (i.e., $l\rightsquigarrow l+E_i$), and preserves the homological degree $b$. This new structure considerably refines the rank discussions from the previous sections, encoded in ${\bf PE}_1({\bf T}, Q,h)$ and $PE_k(T,Q,h)$. One might think that it is a weaker replacement for the missing $U$--action (killed in the relative setup), but, in fact, as we will see, the $Y$--action on ${\rm Gr}^F_*\mathbb H_*$ has an even more subtle structure than the $U$--action on $\mathbb H_*$.

Recall also that above, if the summand $ H_b(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap \frX_{-|l|-1}, \mathbb Z)$ is non-zero, then necessarily $n=w(l)+b$.
Let us exemplify the operators $Y_i$ in some concrete situations.

\begin{example}\label{ex:irredY1} Let $(C,o)$ be an {\bf irreducible curve singularity} with semigroup $\cS$. We assume that $1\not\in\cS$. We organize the elements of $\cS$ (and hence the gaps $\mathbb Z_{\geq 0} \setminus \cS$) as follows:
$$\cS=\{0=b_0, a_1,a_1+1,\ldots, b_1, a_2, a_2+1, \ldots, b_2, \ldots, a_k, a_k+1, \ldots, b_k, a_{k+1}, a_{k+2}, \ldots\},$$
where the gaps are exactly $\cup_{i=0}^k\{\mbox{integers strictly between $b_i$ and $a_{i+1}$}\}$ and $a_{k+1}$ is the conductor. Fix some $s\in\cS$. Note that $w(s+1)=w(s)+1$, hence $s$ determines a generator $[s]$ in $H_0(S_{w(s)}\cap \frX_{-s}, S_{w(s)}\cap \frX_{-s-1},\mathbb Z)=\mathbb Z=\mathbb Z_{(s)}$. This element corresponds to $T^sQ^{w(s)}$ in $PE_1(T,Q)$. It has level degree $d=s$ and weight $n=w(s)$. Then $Y([s]):=Y_1([s])=[s+1]$ if $s+1\in\cS$, and $=0$ otherwise. In particular, the $Y$--module $\sum _{s\in \cS}H_0(S_{w(s)}\cap \frX_{-s}, S_{w(s)}\cap \frX_{-s-1},\mathbb Z)=\oplus _{s\in\cS}\mathbb Z_{(s)}$ has the following irreducible summands: the $Y$--modules of finite $\mathbb Z$--rank
$$\mathbb Z_{(0)}, \ \ \mathbb Z_{(a_1)}\stackrel{Y}{\longrightarrow} \cdots \stackrel{Y}{\longrightarrow} \mathbb Z_{(b_1)}, \ \cdots, \mathbb Z_{(a_k)}\stackrel{Y}{\longrightarrow} \cdots \stackrel{Y}{\longrightarrow} \mathbb Z_{(b_k)}$$
and one $Y$--module of infinite $\mathbb Z$--rank $ \mathbb Z_{(a_{k+1})}\stackrel{Y}{\longrightarrow} \mathbb Z_{(a_{k+2})}\stackrel{Y}{\longrightarrow}\cdots$.

Take for example the semigroups $\cS_1=\langle 4,5,7\rangle$ and $\cS_{2}=\langle 3,7,8\rangle$, cf. Remark \ref{rem:3.5.1}. They have the very same $\mathbb Z[U]$--module $\mathbb H_0=\EuScript{T}^-_{4}\oplus \EuScript{T}_{2}(1)\oplus \EuScript{T}_{0}(1)$, and $PE_k(T=1,Q)=Q^{-2}+2Q^{-1}+2Q^0+Q^1+\cdots$. The common $(-w)$--graded root is shown by the left graph below. This, for both cases, can be compared with $\oplus_d\,{\rm Gr}^F_{-d} \,\mathbb H_0$ with Poincar\'e series $PE^\infty(T,Q)=\sum_{s\in\cS}T^sQ^{w(s)}$. Of course, this object still keeps all the information about the semigroup. From this, if we delete the level degrees (or in $PE^\infty$ we put $T=1$), we obtain that $\oplus_d\,{\rm Gr}^F_{-d} \,\mathbb H_0$ endowed (only) with the $w$--weights coincides with the $w$--weighted $\mathbb H_0$ as graded $\mathbb Z$--modules (which in both cases of $\cS_1$ and $\cS_2$ are the same). Then we can compare $\mathbb H_0$ and $\oplus_d\,{\rm Gr}^F_{-d} \,\mathbb H_0$, where both are endowed with their $w$--weights. The point is that for the two cases of $\cS_1$ and $\cS_2$, the $Y_1$--module structures on $\oplus_d\,{\rm Gr}^F_{-d} \,\mathbb H_0$ (weighted by $n=w(d)$) distinguish the two cases (while the $U$--multiplication on $\mathbb H_0$ does not); see the second and the third root below (where again the edges encode the action).
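The decomposition into runs, and hence the distinguishing $Y$--module structure, can also be generated mechanically from the semigroups. The following short Python sketch (an illustration only, with ad hoc helper names) lists the maximal runs of consecutive elements for the two semigroups above.
\begin{verbatim}
# Sketch: for an irreducible germ the irreducible Z[Y]-summands of the first
# page are the maximal runs of consecutive semigroup elements (see above).
def semigroup(gens, bound=30):
    S = {0}
    for _ in range(bound):
        S |= {s + g for s in S for g in gens if s + g <= bound}
    return sorted(S)

def runs(S):
    """Maximal intervals [a_i, b_i] of consecutive elements of the sorted list S."""
    out, start = [], S[0]
    for prev, cur in zip(S, S[1:]):
        if cur != prev + 1:
            out.append((start, prev))
            start = cur
    out.append((start, S[-1]))   # last run is truncated at the bound; it is infinite
    return out

for gens in ([4, 5, 7], [3, 7, 8]):
    print(gens, "->", runs(semigroup(gens)))
# output: [4, 5, 7] -> [(0, 0), (4, 5), (7, 30)]
#         [3, 7, 8] -> [(0, 0), (3, 3), (6, 30)]
\end{verbatim}
In particular, the finite runs ($\{4,5\}$ versus $\{3\}$) already distinguish $\cS_1$ from $\cS_2$, in agreement with the roots drawn below.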
\begin{picture}(320,120)(170,-20) \put(177,50){\makebox(0,0){\footnotesize{$0$}}} \put(177,60){\makebox(0,0){\footnotesize{$1$}}} \put(177,70){\makebox(0,0){\footnotesize{$2$}}} \put(177,10){\makebox(0,0){\footnotesize{$-w$}}} \dashline{1}(200,60)(240,60) \dashline{1}(200,50)(240,50) \dashline{1}(200,70)(240,70) \put(220,10){\makebox(0,0){$\vdots$}} \put(220,30){\circle*{3}} \put(220,40){\circle*{3}} \put(210,50){\circle*{3}} \put(220,50){\circle*{3}} \put(220,60){\circle*{3}} \put(220,20){\line(0,1){50}} \put(210,50){\line(1,-1){10}} \put(220,70){\circle*{3}} \put(230,60){\circle*{3}} \put(220,50){\line(1,1){10}} \put(220,-10){\makebox(0,0){\footnotesize{$U$--action}}} \put(320,-10){\makebox(0,0){\footnotesize{$Y$--action}}} \put(420,-10){\makebox(0,0){\footnotesize{$Y$--action}}} \put(320,70){\circle*{3}} \put(330,60){\circle*{3}} \put(320,10){\makebox(0,0){$\vdots$}} \put(320,30){\circle*{3}} \put(320,40){\circle*{3}} \put(310,50){\circle*{3}} \put(320,50){\circle*{3}} \put(320,60){\circle*{3}} \put(320,70){\line(0,-1){8}} \put(330,60){\line(-1,-1){10}} \put(320,40){\line(0,-1){8}} \put(320,30){\line(0,-1){8}} \put(320,50){\line(0,-1){8}} \put(420,70){\circle*{3}} \put(430,60){\circle*{3}} \put(420,10){\makebox(0,0){$\vdots$}} \put(420,30){\circle*{3}} \put(420,40){\circle*{3}} \put(410,50){\circle*{3}} \put(420,50){\circle*{3}} \put(420,60){\circle*{3}} \put(420,70){\line(0,-1){8}} \put(420,60){\line(0,-1){8}} \put(420,40){\line(0,-1){8}} \put(420,30){\line(0,-1){8}} \put(420,50){\line(0,-1){8}} \put(320,85){\makebox(0,0){$\langle 4,5,7\rangle$}} \put(420,85){\makebox(0,0){$\langle 3,7,8\rangle$}} \put(500,85){\makebox(0,0){$\langle 4,5,7\rangle$}} \put(560,85){\makebox(0,0){$\langle 3,7,8\rangle$}} \put(500,10){\makebox(0,0){$\vdots$}} \put(500,30){\makebox(0,0){\footnotesize{10}}} \put(500,40){\makebox(0,0){\footnotesize{9}}} \put(490,50){\makebox(0,0){\footnotesize{0}}} \put(500,50){\makebox(0,0){\footnotesize{8}}} \put(500,60){\makebox(0,0){\footnotesize{5}}} \put(500,70){\makebox(0,0){\footnotesize{4}}} \put(510,60){\makebox(0,0){\footnotesize{7}}} \put(560,10){\makebox(0,0){$\vdots$}} \put(560,30){\makebox(0,0){\footnotesize{10}}} \put(560,40){\makebox(0,0){\footnotesize{9}}} \put(550,50){\makebox(0,0){\footnotesize{0}}} \put(560,50){\makebox(0,0){\footnotesize{8}}} \put(560,60){\makebox(0,0){\footnotesize{7}}} \put(560,70){\makebox(0,0){\footnotesize{6}}} \put(570,60){\makebox(0,0){\footnotesize{3}}} \end{picture} The diagrams on the right show that different level--degrees corresponding to the $w$--weights of the generators of the root. They are the semigroup elements. Note that $Y$ acts via the following pattern: $Y(1_u)=1_v$ if and only if $u,v\in\calS$ satisfy $v=u+1$. Hence the $Y$--action still keeps considerably information about $\cS$. If $(C,o)$ is an irreducible {\it plane} curve singularity, then by the formula $\sum_{s\in\cS}t^s=\Delta(t)/(1-t)$ (cf. Theorem \ref{Poincare vs Alexander}), the Alexander polynomial transforms into $$\Delta(t)=1-t+t^{a_1}-t^{b_1+1}+t^{a_2}-t^{b_2+1}+\cdots t^{a_k}-t^{b_k+1}+t^{a_{k+1}}.$$ Then the above $Y$ action can be read from this shape of $\Delta(t)$ too (compatibly to the staircase diagrams of $(C,o)$ in the language of ${\rm HFL}^-$, see e.g. \cite{Kr}). \end{example} \begin{example}\label{ex:u1} {\bf Assume that $(C,o)=\{x^2+y^2=0\}$}. Then $r=2$ and ${\bf PE}_1(T_1,T_2, Q,h)= 1+\frac{T_1T_2(1+Qh)}{(1-T_1Q)(1-T_2Q)}$, cf. \ref{ex:22}. 
The $Y_1$--action on $\oplus_{n,d}\, \oplus_{l\,:\, |l|=d}\, (E^1_{-d, b+d})_{n}$ is the following. Corresponding to $(0,0)\in\cS$, or to the monomial $1$ of ${\bf PE}_1$, the actions of $Y_1$ and $Y_2$ are trivial. Hence, it forms an irreducible $\mathbb Z[Y_1,Y_2]$--module with $\mathbb Z$--rank one. Let us denote it by $\mathbb Z_{w=0}$. There are two other irreducible $\mathbb Z[Y_1,Y_2]$ modules, both of them generated at the semigroup entry $(1,1)$. One of them corresponds to $b=0$, or to the monomials $\{T_1^{a_1}T_2^{a_2}Q^{a_1+a_2-2}h^0\}_{(a_1,a_2)\geq (1,1)}$, with actions $Y_1(T_1^{a_1}T_2^{a_2}Q^{a_1+a_2-2})=T_1^{a_1+1}T_2^{a_2}Q^{a_1+a_2-1}$, $Y_2(T_1^{a_1}T_2^{a_2}Q^{a_1+a_2-2})=T_1^{a_1}T_2^{a_2+1}Q^{a_1+a_2-1}$. It is generated over $\mathbb Z[Y_1,Y_2]$ by $T_1T_2Q^0h^0$, let us denote it by $\mathbb Z[Y_1,Y_2]_{w=0}$. The other, corresponding to $b=1$ (or to $h^1$) is given by $Y_1(T_1^{a_1}T_2^{a_2}Q^{a_1+a_2-1}h)=T_1^{a_1+1}T_2^{a_2}Q^{a_1+a_2}h$, $Y_2(T_1^{a_1}T_2^{a_2}Q^{a_1+a_2-1}h)=T_1^{a_1}T_2^{a_2+1}Q^{a_1+a_2}h$. It is generated by $T_1T_2QH$, let us denote it by $\mathbb Z[Y_1,Y_2]_{w=1}$. These two modules are isomorphic with $\mathbb Z[Y_1,Y_2]$ with the corresponding degree shifts. Pictorially, denoted by `big bullets' ($b=0$) and `circles' $(b=1)$, they are: \begin{picture}(300,70)(-50,-10) \put(10,10){\circle*{2}}\put(20,20){\circle*{4}} \put(15,0){\makebox(0,0){$S_0$}} \qbezier(22,22)(40,30)(75,22) \put(70,23){\vector(4,-1){6}} \qbezier(22,22)(40,40)(65,32) \put(60,33){\vector(4,-1){6}} \qbezier(82,22)(100,30)(135,22) \put(130,23){\vector(4,-1){6}} \qbezier(82,22)(100,40)(125,32) \put(120,33){\vector(4,-1){6}} \qbezier(72,32)(90,40)(125,32) \put(120,33){\vector(4,-1){6}} \qbezier(72,32)(90,50)(115,42) \put(110,43){\vector(4,-1){6}} \qbezier(280,27)(300,40)(330,27) \put(328,27){\vector(3,-1){6}} \qbezier(280,30)(300,50)(320,37) \put(318,37){\vector(3,-1){6}} \put(60,10){\line(1,0){10}}\put(60,10){\line(0,1){10}} \put(70,10){\line(0,1){20}}\put(60,20){\line(1,0){20}} \put(60,12){\line(1,0){10}}\put(60,14){\line(1,0){10}} \put(60,16){\line(1,0){10}}\put(60,18){\line(1,0){10}} \put(65,0){\makebox(0,0){$S_1$}} \put(70,30){\circle*{4}}\put(80,20){\circle*{4}} \put(110,10){\line(1,0){20}}\put(110,10){\line(0,1){20}} \put(130,10){\line(0,1){20}}\put(110,30){\line(1,0){20}} \put(120,30){\line(0,1){10}}\put(130,20){\line(1,0){10}} \put(110,12){\line(1,0){20}}\put(110,14){\line(1,0){20}}\put(110,16){\line(1,0){20}} \put(110,18){\line(1,0){20}}\put(110,20){\line(1,0){20}}\put(110,22){\line(1,0){20}}\put(110,24){\line(1,0){20}} \put(110,26){\line(1,0){20}}\put(110,28){\line(1,0){20}} \put(120,0){\makebox(0,0){$S_2$}} \put(120,40){\circle*{4}}\put(130,30){\circle*{4}}\put(140,20){\circle*{4}} \put(210,10){\circle*{2}}\put(220,20){\circle*{2}} \put(215,0){\makebox(0,0){$S_0$}} \put(260,10){\line(1,0){10}}\put(260,10){\line(0,1){10}} \put(270,10){\line(0,1){20}}\put(260,20){\line(1,0){20}} \put(260,12){\line(1,0){10}}\put(260,14){\line(1,0){10}}\put(260,16){\line(1,0){10}} \put(260,18){\line(1,0){10}} \put(265,0){\makebox(0,0){$S_1$}} \put(220,20){\circle*{2}} \put(275,25){\circle{4}} \put(310,10){\line(1,0){20}}\put(310,10){\line(0,1){20}} \put(330,10){\line(0,1){20}}\put(310,30){\line(1,0){20}} \put(320,30){\line(0,1){10}}\put(330,20){\line(1,0){10}} \put(310,12){\line(1,0){20}}\put(310,14){\line(1,0){20}}\put(310,16){\line(1,0){20}} \put(310,18){\line(1,0){20}}\put(310,20){\line(1,0){20}}\put(310,22){\line(1,0){20}} \put(310,24){\line(1,0){20}} 
\put(310,26){\line(1,0){20}}\put(310,28){\line(1,0){20}}
\put(320,0){\makebox(0,0){$S_2$}}
\put(325,35){\circle{4}}\put(335,25){\circle{4}}
\end{picture}

Next, we can analyse the page $E^2$ too. The differential $d^1$ on $\mathbb Z_{w=0}$ is trivial, hence this term survives in $E^2$ too. The other two modules are connected by a nontrivial graded differential $\oplus_n d^1_{*,*}(S_n)$, which is a $w$--homogeneous morphism of modules $\mathbb Z[Y_1,Y_2]_{w=1}\to \mathbb Z[Y_1,Y_2]_{w=0}$, provided by multiplication by $Y_1-Y_2$. Hence the $E^2$ term, as a $\mathbb Z[Y_1,Y_2]$--module, has two irreducible submodules: one of them is $\mathbb Z_{w=0}$ of $\mathbb Z$--rank one with trivial $Y_1,Y_2$--action, while the other one is $\mathbb Z[Y]$, where both $Y_1$ and $Y_2$ act as multiplication by $Y$, and its generator sits at the semigroup element $(1,1)$, with weight $w=0$ and $h=0$. Its Poincar\'e series is $T^2+T^3Q+T^4Q^2+\cdots= PE^2(T,Q)-1$. The $\mathbb Z[Y_1,Y_2]$--module $E^2$ (and any $E^k$ for $k\geq 2$), when we keep only the $w$--weight and the $Y_1$ action, is pictorially the following
\begin{picture}(320,60)(150,0)
\put(320,5){\makebox(0,0){$\vdots$}}
\put(320,30){\circle*{3}} \put(320,40){\circle*{3}} \put(320,20){\circle*{3}}
\put(330,50){\circle*{3}} \put(310,50){\circle*{3}}
\put(330,50){\line(-1,-1){10}}
\put(320,40){\line(0,-1){8}} \put(320,30){\line(0,-1){8}} \put(320,20){\line(0,-1){8}}
\put(400,30){\makebox(0,0){$(\mbox{$Y_1$ and $Y_2$ act identically})$}}
\end{picture}
\end{example}

\begin{example}\label{ex:u1u4} {\bf Assume that $(C,o)$ is an ordinary $r$-tuple} (cf. \ref{bek:ANcurves}). For the invariants ${\bf PE}_1({\bf T}, Q,h)$, $PE_k(T,Q,h)$, see \ref{ex:gen} (with $c=(1,1,\ldots, 1)$). Then based on the discussion from \ref{ex:gen} (or even by construction of the spaces $S_n$) one deduces that the $\mathbb Z[Y_1,\ldots, Y_r]$ structure on the first page $E^1$ is
$$\mathbb Z_{w=0}\oplus \, \oplus _{b=0}^{r-1} \ \mathbb Z[Y_1, \ldots, Y_r]_{(b)}^{\oplus \binom{r-1}{b}},$$
where $\mathbb Z_{w=0}$ has $\mathbb Z$--rank one, the generator corresponds to the lattice point $l=0$, $w(0)=0$ and $b=0$, while $\mathbb Z[Y_1, \ldots, Y_r]_{(b)}$ is generated at the lattice point $l=(1,\ldots, 1)$, $n=w(l)+b=2-r+b$ (i.e. it corresponds to the monomial ${\bf T}^{(1, \ldots ,1 )}Q^{2-r+b }h^b$ and each $Y_i$ is multiplication by $T_iQ$). \end{example}

The collapse of the $\mathbb Z[Y_1,\ldots, Y_r]$--action on $E^{\geq 2}$ in Example \ref{ex:u1} is not an accident: it is a general fact valid for any $(C,o)$.

\begin{proposition}\label{prop:collapse} Fix an isolated singularity $(C,o)$. For any $k\geq 2$ the action of $Y_i$ on $\oplus_{n,d} (E^k_{*,*})_n$ is independent of\, $i$. In particular, $\oplus_{n,d} (E^k_{*,*})_n$ admits a natural $\mathbb Z[Y]$--module structure. ${\rm Gr}\,\mathbb H_*:=\oplus_{n,d} (E^\infty_{*,*})_n$ and $\mathbb H_*$, both considered as graded $\mathbb Z$--modules graded by the $w$--weights (equivalently, by the summands associated with each $S_n$), are isomorphic as graded $\mathbb Z$--modules. However, ${\rm Gr}\,\mathbb H_*$ considered as a $Y$--module and $\mathbb H_*$ considered as a $U$--module usually do not agree. \end{proposition}

\begin{proof} Let $\alpha$ be a chain in $S_n\cap \frX_{-d}$. For any $t\in[0,1]$ let $y_{i,t}:\frX\to \frX$ be given by $x\mapsto x+tE_i$. Denote $\cup_{t\in[0,1]} y_{i,t}(\alpha)$ by $\beta_i$. Then $\beta_i-\beta_j\in S_{n+1}\cap \frX_{-d}$ and $\partial (\beta_i-\beta_j)=y_i(\alpha)-y_j(\alpha)$.
Then, using the general construction of spectral sequences, $y_i(\alpha)-y_j(\alpha)\in (B^2_{*,*})_{n+1}$, hence its class $[y_i(\alpha)-y_j(\alpha)]\in (E^2_{*,*})_{n+1} =(Z^2_{*,*})_{n+1}/(B^2_{*,*})_{n+1}$ is zero. (Cf. \cite[page 131]{SpecSeq}.) \end{proof}

\begin{remark} There is a further difference between ${\rm Gr}\,\mathbb H_*$ considered as a $Y$--module and $\mathbb H_*$ considered as a $U$--module. Assume that $(C,o)$ is Gorenstein. Then $\mathbb H_*(C,o)$ and the $U$--action respect the Gorenstein $\mathbb Z_2$--symmetry $l\mapsto c-l$; however, the $Y$--action on ${\rm Gr}\,\mathbb H_*$ does not. (See e.g. the case of an irreducible plane curve singularity, or Example \ref{ex:u1}.) \end{remark}

\begin{example}\label{ex:u1u3} {\bf Assume that $(C,o)$ is the plane curve singularity $\{x^3+y^3=0\}$}. (This is a continuation of Example \ref{ss:33}.) For each $b\geq 0$ the $\mathbb Z[Y_1,Y_2, Y_3]$--module $\oplus_{n,d}(E^1_{-d, b+d})_n$ decomposes into irreducible summands. They are the following.

\underline{Case $b=0$:} \ $M_{b=0}^1=\mathbb Z$ of $\mathbb Z$--rank one generated by ${\bf T}^0Q^0h^0$; $M_{b=0}^2\simeq \mathbb Z[Y_1,Y_2,Y_3]/ (Y_1Y_2,Y_2Y_3,Y_3Y_1)$ generated by $T_1T_2T_3Q^{-1}h^0$; $M_{b=0}^3\simeq \mathbb Z[Y_1,Y_2,Y_3]$ generated by $T_1^2T_2^2T_3^2Q^0h^0$.

\underline{Case $b=1$:} \ $M_{b=1}^2$ is supported by the semigroup elements $(1,1,1)\cup\cup_{k\geq 2}(k,1,1)\cup \cup_{k\geq 2}(1,k,1)\cup \cup_{k\geq 2}(1,1,k)$, similarly to $M_{b=0}^2$, but in this case the $(1,1,1)$--homogeneous part has $\mathbb Z$--rank 2. All other homogeneous $\mathbb Z$--summands have rank one. So, it has two generators, both coded by $T_1T_2T_3Q^0h$. $M_{b=1}^{3,3'}$, two copies of $\mathbb Z[Y_1,Y_2,Y_3]$, both generated at $T_1^2T_2^2T_3^2Qh$.

\underline{Case $b=2$:} \ $M_{b=2}^{3}\simeq \mathbb Z[Y_1,Y_2,Y_3]$ generated at $T_1^2T_2^2T_3^2Q^2h^2$.

This shows that we might have irreducible $\mathbb Z[Y_1,Y_2,Y_3]$--modules with a homogeneous summand associated with certain $l\in\cS$ of $\mathbb Z$--rank $\geq 2$. (This never happens if $r=1$.)

For $k=\infty$, the $\mathbb Z[Y]$--modules $\oplus_{n,d}(E^\infty_{-d, b+d})_n$ for $x^3+y^3$ and $x^3+y^4$ agree (for the latter see Example \ref{ex:irredY1}).

The above picture coincides with the description of HFL$^-$ of the torus link $T_{3,3}$ in \cite{GH}. \end{example}

\begin{example}\label{ex:u1u2} The reader is invited to describe the $\mathbb Z[Y_1,Y_2]$--modules in the case of Example \ref{ex:decsing2} (case $\langle 3,4\rangle \vee \langle 3,4\rangle $). Here modules of type $\mathbb Z[Y_1,Y_2]/(Y_1^2,Y_2^2)$ and $\mathbb Z[Y_1,Y_2]/(Y_1^2)$ appear as well. \end{example}

\begin{remark} There is a $\mathbb Z[Y_1,\ldots, Y_r]$--action on the relative homologies as well. Indeed, the morphism
$$Y_i:H_*(S_n\cap \frX_{-l}, S_{n-1}\cap \frX_{-l})\to H_*(S_{n+1}\cap \frX_{-l-E_i}, S_{n}\cap \frX_{-l-E_i})$$
induced by $x\mapsto x+E_i$ is well defined and usually nontrivial (and distinct for different $i$'s). \end{remark}

\section{Plane curves. Relation with Heegaard Floer Link homology}\label{s:HFL}

\subsection{Review of Heegaard Floer Link homology}\label{ss:HFL}
Assume that $(C,o)$ is a plane curve singularity, $(C,o)\subset (\bC^2,0)$. It defines the link $L(C,o)=L\subset S^3$ with link-components $L_1,\ldots, L_r$. In this section we relate the filtered lattice homology considered above with the Heegaard Floer link homology of $L$. For more on Heegaard Floer Link theory see e.g. \cite{MO,os,os2,OSzHol,os4,ras}.
In the next paragraphs we briefly recall some notation and facts. To every 3-manifold $M$ with fixed Heegaard splitting (and extra structures) one can associate a {\em Heegaard Floer complex} $CF^{-}(M)$ of free $\mathbb Z[U]$-modules. The operator $U$ has homological degree $(-2)$, and the differential $d$ has degree $(-1)$. This complex is not unique, but different choices (e.g. of the splitting) lead to quasi-isomorphic complexes. Therefore the homology of $CF^{-}(M)$ is an invariant of $M$, called {\em Heegaard Floer homology} and denoted by $HF^{-}(M)$. In this note we will have $M=S^3$; in this case $HF^{-}(S^3)=\mathbb Z[U]$.

To a link $L=L_1\cup\ldots\cup L_{r}\subset S^3$ one can associate a $\mathbb Z^r$--filtered complex of $\mathbb Z[U_1,\ldots,U_r]$-modules, denoted by $CFL^{-}(L)$. The $\mathbb Z^r$--filtration is called the Alexander filtration. The operators $U_i$ have homological degree $(-2)$ and shift the filtration level by $E_i$. If one ignores the filtration, then the complex is quasi-isomorphic to the Heegaard Floer complex $CF^{-}(S^3)$, where all the operators $U_i$ are homotopic to each other, cf. \cite{OSzHol}. One can consider the complex also as a $\mathbb Z[U]$-module, where $U=U_1$. However, the filtration captures nontrivial information about the link.

For $l\in \mathbb Z^r$, we will denote the Alexander filtration by $\{A^-(l)\}_l$. Each $\Cc(l)=(\oplus_\nu A^{-,\nu}(l),d)$ is a subcomplex of $CFL^{-}(L)$ at filtration level $l$ (in \cite{MO} these complexes are denoted by $\mathfrak{A}^{-}(v)$). It is spanned by the elements of $CFL^{-}(L)$ with Alexander filtration greater than or equal to $l$. (For a cleaner match with the algebraic picture, we reverse the sign of $l$, thus reversing the direction of the filtration as well.) The upper index $\nu$ denotes the homological (Maslov) grading. They satisfy
\begin{equation}\label{eq:INCL}\begin{array}{l} \Cc(l_1)\supset \Cc(l_2) \mbox{ \ for $l_2\geq l_1$, \ and }\\ \Cc(l_1)\cap \Cc(l_2)= \Cc(\max\{l_1,l_2\}). \end{array} \end{equation}
The subcomplexes $\Cc(l)$ are $\mathbb Z[U_1,\ldots,U_r]$-submodules; the induced operators $U_i$ have homological degree $-2$ and are homotopic to each other again. Moreover, $U_i(A^-(l))\subset A^-(l+E_i)$.

The Heegaard Floer link homology is defined as the homology of the associated graded pieces of $\Cc(l)$:
$$ \HFL^{-}(L,l):=H_{*}(\, (\gr\Cc)(l)\,), \ \ \mbox{where} \ (\gr\Cc)(l):= \Cc(l)/\sum_{l'\geq l}\Cc(l'). $$

\begin{remark} At present, Heegaard Floer link homology is defined only for $\mathbb{F}_2$ coefficients, hence, strictly speaking, all results of this section are valid only over $\mathbb{F}_2$. Nevertheless, we believe that all the statements are true over $\mathbb Z$ as well, but the cautious reader might take everywhere $\mathbb{F}_2$ instead of $\mathbb Z$. \end{remark}

\subsection{The connection with the local lattice cohomology}\label{ss:HFLloc}
In \cite[Theorem 6.1.3]{GorNem2015} the following isomorphism was proved.

\begin{theorem}\label{th:isoGor} For any $l\in \mathbb Z^r$ let ${\rm HL}^-_*(l)$ denote the local lattice cohomology $H_*({\rm gr}_l{\mathcal L}^-, {\rm gr}_l \partial _U)$, graded by the homological degree, cf. \ref{bek:2.22}. Let ${\rm HFL}^-_*(L,l)$ be the Heegaard Floer link homology, graded by its homological (Maslov) grading. Then $\HFL^-_*(L,l)$ and $ {\rm HL}^-_*(l)$ are isomorphic as graded\, $\mathbb Z$-modules. In particular, $\HFL^-_*(L,l)$ has no $\mathbb Z$--torsion and $\HFL^-_*(L, l)=0$ whenever $l\not\in \calS$.
\end{theorem}

Theorem \ref{th:PP} has the following consequences. Let us fix $l\in\calS$ and assume that $\HFL^-_k(L,l)\not=0$. This means ${\rm HL}^-_k(l)\not=0$. For the convenience of the reader let us write down again the possible bidegrees $(k-b,b)$ which might appear. Using parts {\it (a)-(b)} of Theorem \ref{th:PP}, in the isomorphism (\ref{eq:PP}) we have
\begin{equation} k =-2n+2\mathfrak{h}(l)-2|l|+b\ \ \ \ \mbox{and} \ \ \ w(l) =n-b. \end{equation}
These identities identify both $n$ and $b$ in terms of $l$ and $k$:
\begin{equation}\label{eq:nb} n =-|l|-k\ \ \ \ \mbox{and} \ \ \ b = -|l|-k-w(l)=-k-2\mathfrak{h}(l). \end{equation}
Hence, with fixed $k$, there is only one bidegree $(k-b,b)$ for which ${\rm HL}^-_{k-b,b}(l)\not=0$, given by the second identity of (\ref{eq:nb}), namely $b=-k-2\mathfrak{h}(l)\geq 0$. Moreover, (\ref{eq:PP}) and (\ref{eq:HLFUJ}) transform into the following isomorphism.

\begin{corollary}
\begin{equation}\label{eq:lb1} \HFL^-_{-2\mathfrak{h}(l)-b}(L,l)\simeq H_b(S_{w(l)+b}\cap\frX_{-l}, S_{w(l)+b}\cap\frX_{-l}\cap \frX_{-|l|-1}). \end{equation}
In particular, the spaces $\{S_n\}_n$ (as cube--subcomplexes of $\mathbb R_{\geq 0}^r$) provide, by their relative homologies, all the Heegaard Floer link homologies.
\end{corollary}

E.g., in the case of Example \ref{ss:33}, let us fix the lattice point $l=(2,2,2)$. Then $|l|=6$, $w(l)=0$ and $-2\mathfrak{h}(l)=-6$. The coefficient of ${\bf T}^l=T_1^2T_2^2T_3^2$ in ${\bf PE}_1({\bf T}, Q,h)$ is $(1+Qh)^2 =Q^0h^0+2Q^1h^1+Q^2h^2$, cf. (\ref{eq:TTT}). This means that at the lattice point $l=(2,2,2)$, via the relative homology `at $l$', from $S_0$ (i.e. from the exponents and coefficient of $Q^0h^0$) we read that ${\rm rank}\, \HFL^-_{-6}(L,l)=1$. Next, from $S_1$ (i.e. from $2Qh$) we read that ${\rm rank}\, \HFL^-_{-6-1}(L,l)=2$, and finally from $S_2$ that ${\rm rank}\, \HFL^-_{-6-2}(L,l)=1$. All other homologies $\HFL^-_{-6-b}(L,l)$, $b\not=0,1,2$, are zero. (It is instructive to compare these statements with the pictures of the spaces $\{S_n\}_n$ as well.)

Next, using the Poincar\'e polynomial identity (\ref{hfminus}) together with Theorem \ref{th:PP} we obtain

\begin{corollary}
\begin{equation*} \begin{split} P^m({\bf t};q)&=\sum_l\sum _k (-1)^k \cdot {\rm rank} \HFL^-_{-2\mathfrak{h}(l)-k}(L,l)\cdot{\bf t}^l\, q^{\mathfrak{h}(l)+k},\\ {\bf PE}_1({\bf T}, Q, h)&=\sum_l \ \sum_k\ {\rm rank} \HFL^-_{-2\mathfrak{h}(l)-k}(L,l)\cdot {\bf T}^lQ^{w(l)+k}h^{k}\\ &= \sum_l \ \sum_n\ {\rm rank} \HFL^-_{-n-|l|}(L,l)\cdot {\bf T}^lQ^nh^{n-w(l)},\\ {\bf PE}_1({\bf T}, Q=1, h=1)&=\sum_l \sum_k (-1)^k\pp^m _{l,k}\cdot {\bf T}^l=\sum_l \ \big(\sum_k\ {\rm rank} \HFL^-_k(L,l)\,\big)\cdot {\bf T}^l,\\ {\bf PE}_1({\bf T}, Q=1, h=-1)&=\sum_l \sum_k \pp^m _{l,k}\cdot {\bf T}^l=\sum_l \ \chi\big(\HFL^-_*(L,l)\,\big)\cdot {\bf T}^l= P({\bf T}). \end{split}\end{equation*}
\end{corollary}

The last identity is equivalent, via Theorem \ref{Poincare vs Alexander}, to a result of Ozsv\'ath and Szab\'o from \cite[Proposition 9.2]{OSzHol}, which determines the generating function $\sum_{l}\ \chi(\HFL^-(L,l))\cdot {\bf t}^l$ of the Euler characteristic of the Heegaard Floer link homology as $\Delta({\bf t})$ if $r>1$, and as $\Delta(t)/(1-t)$ if $r=1$.

Next, we compare $\{\HFL^-_*(L,l)\}_l$ with the spectral sequences. First of all, we wish to emphasize that the spectral sequence of this note is {\it not} the spectral sequence constructed in the $HFL^-$--theory.
In that theory, the $\infty$--page is the graded $HF^-(S^3)=\mathbb Z[U]$, while in our case the $\infty$--page is the graded $\mathbb H_*(C,o)$. Let us write (\ref{eq:lb1}) in the form \begin{equation}\label{eq:lb2} \HFL^-_{-n-|l|}(L,l)\simeq H_b(S_n\cap\frX_{-l}, S_n\cap\frX_{-l}\cap \frX_{-|l|-1}), \end{equation} where $b=n-w(l)$. For any fixed $d$, summation over $\{l\,:\, |l|=d\}$ gives for any $n,\, b$ and $d$: \begin{equation}\label{eq:lb3} \bigoplus_{|l|=d,\ w(l)=n-b} \HFL^-_{-n-|l|}(L,l)\simeq \bigoplus_{|l|=d}\, H_b(S_n\cap\frX_{-l}, S_n\cap\frX_{-l}\cap \frX_{-|l|-1})= (E^1_{-d, b+d})_n. \end{equation} In particular, for any fixed $n$, the spectral sequence associated with $S_n$ uses and capture only the specially chosen summand $\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)$ of $\oplus _l{\rm HFL^-}_*(L,l)$. This is a partition of $\oplus _l{\rm HFL^-}_*(L,l)$ indexed by $n$. Each $\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)$, interpreted as an $E^1$--term, converges to $({\rm Gr}^F_*\mathbb H_*)_{-2n}$. (This is not the spectral sequence of the Link Heegaard Floer theory, which converges to $HF^-(S^3)$, though the entries --- but not the differentials --- of the first page can be identified.) This partition of $\{\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)\}_{n\geq m_w}$ of $\oplus _l{\rm HFL^-}_*(L,l)$ can be refined by the vertices of the graded root. Indeed, if we replace in (\ref{eq:lb2}) $S_n$ by $\sqcup_v S_n^v$, then each $\oplus_{l}{\rm HFL}^-_{-n-|l|}(L,l)$ for fixed $n\geq m_w$ decomposes into a direct sum \begin{equation}\label{eq:v} \oplus _{v\in{\mathcal V}(\mathfrak{R}): w_0(v)=n}\ {\rm HFL}^-_{-n-|l|}(L,l)^v,\end{equation} providing a direct sum decomposition of the link Heegaard Floer homology $\oplus _l{\rm HFL^-}_*(L,l)$ indexed by ${\mathcal V}(\mathfrak{R})$. (The author does not know whether this direct sum decomposition can be realized via the HFL theory.) Let us make in (\ref{eq:lb3}) a summation over $d$. For fixed $n$ and $b$ we obtain \begin{equation}\label{eq:lb4} \bigoplus_{l\,:\, \, w(l)=n-b} \HFL^-_{-n-|l|}(L,l)\simeq \bigoplus_{d}\, (E^1_{-d, b+d})_n. \end{equation} Since $\rank (E^1_{-d, b+d})_n\geq \rank (E^\infty_{-d, b+d})_n$, we get the following lower bound for the Heegaard Floer link homology in terms of the lattice homology of $(C,o)$: \begin{corollary} (a) For any fixed $d$, $n$ and $b$: \begin{equation}\label{eq:lb3b} \sum_{|l|=d,\ w(l)=n-b} \rank\ \HFL^-_{-n-|l|}(L,l)\geq {\rm rank}\ {\rm Gr}^F_{-d}\,(\mathbb H_b(C,o))_{-2n}. \end{equation} (b) For any fixed $n$ and $b$: \begin{equation}\label{eq:lb4b} \sum_{l\, :\, \, w(l)=n-b} \rank\ \HFL^-_{-n-|l|}(L,l)= \sum_{l\, :\, \, w(l)=n-b} \rank\ \HFL^-_{-2\mathfrak{h}(l)-b}(L,l) \geq \rank\, (\mathbb H_b(C,o))_{-2n}. \end{equation} (In (\ref{eq:lb4b}) the left hand side also equals the coefficient of $Q^nh^b$ in $PE_1(T=1, Q,h)$.) \end{corollary} \begin{example} (\ref{eq:lb4b}) for $b=0$ reads as \begin{equation}\label{eq:lb5} \sum_{l\, :\, \, w(l)=n} \rank\ \HFL^-_{-2\mathfrak{h}(l)}(L,l) \geq \rank\, (\mathbb H_0(C,o))_{-2n}. \end{equation} Note that for $n\geq 0$ we have $\rank\, (\mathbb H_0(C,o))_{-2n}\geq 1$ (see e.g. Proposition \ref{prop:infty}). Assume that $\mathbb H^{\geq 1}(C,o)=0$. 
Then (\ref{eq:lb5}) together with Proposition \ref{prop:infty} {\it (d)} gives $$\sum_{n<0}\, \sum_{l\,:\, w(l)=n} \rank\ \HFL^-_{-2\mathfrak{h}(l)}(L,l)\, +\, \sum_{n\geq 0}\, \sum_{l\,:\, w(l)=n} \big(\rank\ \HFL^-_{-2\mathfrak{h}(l)}(L,l)-1\big)\, \geq \delta(C,o).$$ \end{example} \section{Other level filtrations} In section \ref{s:levfiltr} the filtration of $S_n$ was induced by the filtration $\{\frX_{-d}\}_d$ of $\frX$, where $\frX_{-d}:=\{\cup(l,I)\,:\, |l|\geq d\}$. However, there are infinitely many similar level filtrations which might serve equally well and can be considered and studied. Indeed, fix e.g. an integral nonzero vector $a=(a_1, \ldots, a_r)\in\mathbb Z^r$, and set $\frX^{(a)}_{-d}:= \{\cup(l,I)\,:\, \sum_ia_il_i\geq d\}$. Then for each fixed $n$, $\{S_n\cap \frX^{(a)}_{-d}\}_d$ is an increasing finite filtration of $S_n$. In particular (by taking the very same weight function $w$) it induces a homological spectral sequence converging to the lattice homology. That is, the $\infty$--page is the graded ${\rm Gr}^F_*\mathbb H_*$, where both $F_{*}\mathbb H_*(\frX,w)$ and ${\rm Gr}^F_*\mathbb H_*$ depend on the choice of $a$. The first pages can also be identified with certain local lattice homologies. For example, assume that $a\in (\mathbb Z_{>0})^r$. Then, for each fixed $n$, the first pages $ (E^1_{-d,q})_n^{(a)}= H_{-d+q}(S_n\cap \frX^{(a)}_{-d}, S_n\cap \frX^{(a)}_{-d-1},\mathbb Z)$ has a direct sum decomposition \begin{equation*} (E^1_{-d,q})_{n}^{(a)}= \bigoplus_{l\in\mathbb Z^r_{\geq 0},\, \sum_ia_il_i=d}\ (E^1_{-l,q})_{n}^{(a)}, \ \mbox{where} \ (E^1_{-l,q})_{n}^{(a)}:= H_{-d+q}(S_n\cap \frX_{-l}, S_n\cap \frX_{-l}\cap \frX^{(a)}_{-d-1},\mathbb Z)\end{equation*} Note that for any fixed $l\in(\mathbb Z_{\geq 0})^r$ the modules $(E^1_{-l,q})_{n}^{(a)}$ are $a$--independent (i.e. they are the terms of the local lattice homologies considered in the previous sections), however in $(E^1_{-d,q})_{n}^{(a)}$ we specially choose the summation set according to the hyperplane equation $ \sum_ia_il_i=d$. In particular, for a plane curve singularity, for every $a\in (\mathbb Z_{>0})^r$ we get a spectral sequence whose first pages consists of the `reorganized packages' of HFL$^-_*$, and it converges to $\mathbb H_*(C,o)$. The construction can be compared with the definition of the homologies tHFK (and upsilon invariant) from \cite{OSZStu}. \end{document}
Heat Transfer Coefficient at Cast-Mold Interface During Centrifugal Casting: Calculation of Air Gap
Jan Bohacek, Abdellah Kharicha, Andreas Ludwig, Menghuai Wu & Ebrahim Karimi-Sibaki
Metallurgical and Materials Transactions B, volume 49, pages 1421–1433 (2018)

During centrifugal casting, the thermal resistance at the cast-mold interface represents a main blockage mechanism for heat transfer. In addition to the refractory coating, an air gap begins to form due to the shrinkage of the casting and the mold expansion, under the continuous influence of strong centrifugal forces. Here, the heat transfer coefficient at the cast-mold interface h has been determined from calculations of the air gap thickness da based on a plane stress model taking into account thermoelastic stresses, centrifugal forces, plastic deformations, and a temperature-dependent Young's modulus. The numerical approach proposed here is rather novel and tries to offer an alternative to the empirical formulas usually used in numerical simulations for a description of a time-dependent heat transfer coefficient h. Several numerical tests were performed for different coating thicknesses dC, rotation rates Ω, and solidus temperatures Tsol. Results demonstrated that the scenario at the interface is unique for each set of parameters, hindering the possibility of employing empirical formulas without a preceding experiment being performed. Initial values of h are simply equivalent to the ratio of the coating thermal conductivity and its thickness (~ 1000 W m−2 K−1). Later, when the air gap is formed, h drops exponentially to values at least one order of magnitude smaller (~ 100 W m−2 K−1).

Horizontal centrifugal casting is an important industrial process used especially for the production of high-quality seamless tubes and outer shells of work rolls. In this process, the effect of centrifuging is twofold. First, the fictitious centrifugal force makes the production of axisymmetric hollow castings possible in the first place by pushing the molten metal against the inner wall of the cylindrical mold. Second, the interaction between inertial forces and the vector of the gravitational acceleration induces the so-called pumping effect, responsible for thorough mixing,[1] the growth of fine equiaxed grains, and superior mechanical properties of the cast.[2,3] As with many other industrial processes, horizontal centrifugal casting has been studied with increased attention, with the help of various numerical techniques, in order to gain a better understanding of the process and underlying physical phenomena. While some of the numerical studies concentrate more on simulating flow dynamics, such as the mold filling, waves propagating over the free surface, and complex buoyant flow patterns inside the molten metal,[4,5,6,7,8,9,10,11] others focus more on heat transfer and solidification, often assuming coupling with simple segregation models.[12,13,14] The latter is naturally more frequent within the centrifugal casting community. Solidification is usually modeled by means of applying the enthalpy method with appropriate rules for a liquid fraction evolution in the mushy zone. In order to construct useful and realistic heat transfer models, precise and accurate material properties and boundary conditions are necessary. Heat transfer coefficients are usually imposed at boundaries, generally being determined from empirical formulas for the Nusselt number.
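As a small illustration of this last point, the sketch below assembles a boundary heat transfer coefficient from a Nusselt-number correlation via h = Nu kair/D. The power-law constants, the cylinder diameter, and the air properties used here are hypothetical placeholders rather than the correlations actually used in this work; the sketch only shows where such a formula would enter a simulation.

```python
# Minimal sketch (hypothetical constants): boundary heat transfer coefficient
# obtained from a Nusselt-number correlation, h = Nu * k_air / D.

def rotational_reynolds(omega, diameter, nu_air=1.8e-5):
    """One common definition of a rotational Reynolds number, Re = omega * D**2 / nu."""
    return omega * diameter**2 / nu_air

def nusselt_placeholder(re, c=0.1, m=0.7):
    """Hypothetical power-law correlation Nu = c * Re**m (illustrative only)."""
    return c * re**m

def boundary_htc(omega, diameter=0.4, k_air=0.026):
    """Heat transfer coefficient that could be imposed at the outer mold surface."""
    nu = nusselt_placeholder(rotational_reynolds(omega, diameter))
    return nu * k_air / diameter

for omega in (50.0, 71.0, 90.0):   # rotation rates (rad/s) also used later in the parameter study
    print(f"omega = {omega:5.1f} rad/s  ->  h = {boundary_htc(omega):6.1f} W m-2 K-1")
```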
Materials properties are generally temperature dependent and must be specified for all zones, i.e., the casting, the mold, and the coating. The thickness of the coating, a kind of a refractory material, such as ZrO2, is usually small (~ 1 mm); therefore, in numerical models, it is often simplified by an assumption of the thin-wall (zero-capacity) model. The coating is applied on the inner surface of the mold in order to insulate the mold from high temperatures and also to control to a certain extent the solidification rate. The general consensus is that it tends to stick firmly to the mold surface. A time-dependent scenario at the contact between the casting and the coating attached to the mold surface is perhaps one of the weakest points of all currently available heat transfer models. Only during the first seconds of the casting, the molten metal is in perfect contact with the coating, as pointed out in Reference 15. Immediately after that, a so-called microscopic air gap is formed, whose properties such as thickness and temporal growth are strongly influenced by a surface roughness of the mold and the coating eventually. Earlier studies[16] show that the importance of the surface roughness has been for many years underestimated. Furthermore, in the literature (e.g., Reference 17), there are interesting numerical works available, taking into account the surface roughness and trying to evaluate the effective thickness of the microscopic air gap by using simple geometrical operations. As time proceeds, the first layer of the solid has enough strength to withstand the metallostatic pressure, whereby the air gap thickness gradually grows and the microscopically thin contact is permanently lost. Such an air gap is often referred to as the macroscopic air gap. Please note that while the cast-mold contact has been extensively studied in static castings and a large body of experimental evidence of microscopic and macroscopic air gap behavior has been presented, it is not yet clear whether at least qualitatively the same observations would apply to the centrifugal casting. Unlike static casting, extreme centrifugal forces are exerted on the liquid metal being cast, which strive to delay the subsequent air gap formation. We deduce that the high centrifugal pressure may be able to significantly reduce the microscopic air gap. Furthermore, we also believe—and this has been proven in this article—that once the macroscopic air gap is formed, the centrifugal force has a negligible impact on its growth. In order to cope with time-dependent thermal resistance induced by the formation of an air gap, various approaches have been adopted in earlier numerical studies of centrifugal casting. Naturally, a simplest approach would be an assumption of a perfect contact formed throughout the entire casting, which was used, e.g., by Xu et al.,[18] Gao and Wang,[19] and Cook et al.[20] Bohacek et al.[21] pointed out the importance of an air gap in their findings and conclusions, yet in the numerical model, it was not taken into account. Humphreys et al.[22] calculated the heat transfer at the interface by employing a "virtual wall" technique with cumulative thermal resistances; however, they did not provide parameters to calculate them. Chang et al.[23] used arbitrary values of heat transfer coefficients at the interface, constant during the casting and increasing for higher rotation rates. 
Other researchers, such as Kang et al.[24] and Kang and Rohatgi,[25] have also used time-independent heat transfer coefficients. Ebisu[26] and Kamlesh[27] assumed an exponential decay of the radiative heat flux through the interface as follows: $$ q = q_{0} e^{ - \beta s\left( t \right)} , $$ where q0, β, and s(t) are the initial heat flux through the interface, a damping coefficient, and the current solidified thickness, respectively. A similar approach was adopted by Lajoye and Suery[28] and was later widely used by other authors such as Raju and Mehrotra,[29] Drenchev et al.,[30] Panda et al.[31] Instead of the heat flux, a time-dependent heat transfer coefficient h was considered at the interface and defined by the following formula: $$ h = h_{0} \left( {\frac{{h_{\text{f}} }}{{h_{0} }}} \right)^{s(t)/d} , $$ where h0, hf, and d are the initial and the final heat transfer coefficient and the casting thickness, respectively. Naturally, Eqs. [1] and [2] are not equivalent. However, it is worth noting that when the heat flux q was replaced with the heat transfer coefficient h, Eqs. [1] and [2] would become identical provided that the damping coefficient was defined as $$ \beta = - \frac{1}{d}{ \log }\left( {\frac{{h_{\text{f}} }}{{h_{0} }}} \right). $$ Recently, Nastac[32] applied a different approach based on calculating an equivalent convective heat transfer coefficient h to simulate the effect of the coating and the air gap, which can be written as $$ h = \frac{{h_{\text{a}} k_{\text{C}} }}{{k_{\text{C}} + d_{\text{C}} h_{\text{a}} }}, $$ where kC, dC, and ha represent thermal conductivity and thickness of the coating and the heat transfer coefficient between the casting and the coating, which is defined as follows: $$ h_{\text{a}} = h_{0} + \left( {h_{\text{f}} - h_{0} } \right)\left\{ {1 - \left[ {\hbox{min} \left( {1,\frac{{t_{0} }}{t}} \right)} \right]^{\gamma } } \right\}, $$ where t0, t, and γ stand for the time to initiation of solidification, the current time, and a constant exponent. Table I summarizes the values of \( h \) adopted by the aforementioned authors. Obviously, all of the aforementioned approaches contain at least one unknown parameter, which needs to be adjusted, e.g., by means of an experiment. Although especially the choice of the function given by Eq. [2] appears to be a reasonable solution, a careful fine-tuning of hf is required in order to reflect, or at least approximate, real-life conditions. According to Vacca et al.,[33] who performed a valuable experimental study of the heat transfer coefficient at the interface involving the inverse task, values of the heat transfer coefficient adopted in centrifugal casting simulations are unreliable and usually arbitrary. A similar approach combining a simulation and experiment was employed by Susac et al.[34] and Sahin et al.[35] The inverse task is, however, in general, computationally very intensive. Moreover, time-dependent experimental data are necessary at least at one point, located close to the cast-mold interface. While the inverse task cannot be practically applied in a typical centrifugal casting simulation, it is an excellent tool for validating other numerical models or determining constants in empirical models. In Reference 36, the inverse task, the inverse heat conduction problem (IHCP), was solved by a popular nonlinear estimation technique, originally developed by Beck.[37] The casting material, A356 Al alloy, was cast into a carbon steel mold. 
The IHCP proved that the heat transfer to the mold can be significantly improved by applying the pressure load during solidification in terms of restoring a contact between the mold and the casting. Table I Values of h Adopted by Different Authors In Reference 38, the research concerns the simulation of trip continuous casting. An engineering approach was employed to approximate the thermal resistance at the strip-mold interface from the heat transferred to the cooling water. The water flow rate and temperature were recorded at different positions along the length of the strip for this purpose. At selected points in the liquid pool, the calculated cooling curves agreed strongly with those obtained from Inconel (American Special Metals, Corp., Miami, FL) sheathed thermocouples. In addition, a direct measurement of the macroscopic air gap can be performed by using linear variable differential transformers (LVDTs). However, this technique during the centrifugal casting is limited to static castings due to high rotations of the mold during centrifugal casting. For example, in Reference 39, the heat transfer coefficient at the cast-mold interface of a static casting was determined from the inverse task. The air gap thickness was measured with the help of the LVDTs. Finally, a correlation was found, defining the heat transfer coefficient as a function of the air gap thickness. The effect of the surface roughness of the mold was analyzed. As expected, during formation of the microscopic air gap, findings showed that the smaller degree of roughness provides stronger contact with the mold and that the heat transfer coefficient is, therefore, higher. Consequently, the smaller the surface roughness of the mold, the earlier the macroscopic air gap occurs. On the other hand, the ultimate heat transfer coefficient, when solidification is nearly complete, is insignificantly influenced by the surface roughness. As a numerical alternative of estimating the air gap thickness, one could suggest ignoring stresses built up in the casting and the mold and using the thermal expansion coefficient to calculate the shrinkage simply by assuming displacements, independent of direction, i.e., uniform rate of deformation of control volume. This technique was applied, e.g., by Taha et al.,[40] for a static casting. Accuracy and reliability are, however, doubtful due to the missing thermal stresses, which may significantly alter total displacements. In addition, unlike static castings, extreme forces in the centrifugal casting process act on the casting, leading to yielding of the material especially at early stages. In addition, for this reason, the aforementioned approach should be avoided. A strategy that incorporates a more complete and holistic model of physics was outlined by Kron,[41] who suggested taking into account vacancies formed due to the thermal expansion of the mold and the material being cast, as well as elastic stresses acting as a consequence of thermally induced strains. Kron et al.[42] developed the thermomechanical model, based on the plane stress model, assuming elastic materials. They showed that the model predicts accurately the casting scenario only for grain-refined alloys such as Al-4.5 pct Mg. When the solid grains are surrounded by liquid, the material becomes more ductile and consequently the microscopic air gap is suppressed. On the other hand, in the non-grain-refined case, the elastic thermomechanical model does not perform particularly effectively. 
However, one may wish to interpret these findings, the important message of Reference 42 could be formulated as follows: The peak value of the heat transfer coefficient is larger for the grain-refined alloy due to ductile suppression of the microscopic air gap, but since the formation of the macroscopic air gap starts earlier in this case, total solidification times are almost identical. Lagerstedt, a colleague of Kron, pointed out in the future work chapter of his doctoral thesis[43] that including plasticity in the stress model probably should be the next step in developing an accurate shrinkage model. In Reference 44, Schwerdtfeger et al. underlined the importance of the displacement reference. Classically, in stress theory, the displacement is the distance of a specified atom from the position it had assumed when the entire solid body was stress free. Such a situation, however, does not occur during solidification; therefore, they recommended defining the displacement as the distance of a specified atom from the position where it was at the moment of solidification. Consequently, one should work with stress rates and strain rates rather than with stresses and strains. In their study, plastic deformations were also considered and added to the elastic ones by assuming an empirical strain-hardening equation. Nowadays, most of the commercial software available on the market, including MAGMASOFT (MAGMA in Aachen, North Rhine-Westphalia, Germany), PROCAST (ESI Group, Paris, France), and THERCAST (TRANSVALOR S.A., Mougins, France), offers modules for thermomechanical calculations, and often the user can choose from several elastic-plastic models. In Reference 45, in conclusion, Kron et al. stated that an accurate modeling of the air gap formation can only be realized through fully coupled thermomechanical models. They highlighted that the prediction of the air gap, done with the commercial codes, is not satisfactory, suggesting that the solidification shrinkage in the air gap vicinity should be relaxed by the liquid and, therefore, contribute more to a cavity formed in the top of the casting or to the porosity. In conclusion, the entire strain model needs to be defined more precisely. Difficulties associated with air gap modeling were summarized in Reference 46 as the following. High-temperature elastic constants are generally hard to obtain. Defining material properties of the mushy zone remains a challenging topic. Currently, a transition model is available described by the Percolation theory. Next, obtaining proper values of rheological parameters in the power law equation is difficult. Finally, other difficulties or discrepancies, common to all numerical models, are related to oversimplifying assumptions and numerical errors. In the article by Nayak and Sundarraj,[47] it was shown that while it is accurate enough to assume a constant value of the interface heat transfer coefficient during the entire casting into the sand mold, it is not the case with the metal mold. Furthermore, the rate of gap formation significantly affects the solidification process. The cast-mold interface, namely, the coating and an air gap, represents a significant blockage for the heat transfer and solidification. The thermal resistance of the coating is often negligible compared to that of the air gap. Therefore, a thermal resistance of such an interface has to be carefully determined in order to allow reliable and trustworthy numerical simulations. 
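To make the empirical interface formulas of Eqs. [1] through [5] concrete, the short sketch below evaluates the exponential law of Eq. [2], the damping coefficient of Eq. [3] that makes Eq. [1] (written for h rather than q) coincide with it, and Nastac's form, Eqs. [4] and [5]. All parameter values are hypothetical and serve only to illustrate the qualitative decay of h.

```python
import math

# Hypothetical illustrative parameters (not taken from Table I or from any cited work).
h0, hf = 1000.0, 100.0       # initial / final interface heat transfer coefficient, W m-2 K-1
d_cast = 0.05                # casting thickness, m
k_C, d_C = 1.0, 1.0e-3       # coating conductivity (W m-1 K-1) and thickness (m)
t0, gamma = 5.0, 0.5         # time to initiation of solidification (s) and exponent in Eq. [5]

def h_exponential(s):
    """Eq. [2]: h = h0 * (hf / h0)**(s / d), with s the solidified thickness."""
    return h0 * (hf / h0) ** (s / d_cast)

beta = -math.log(hf / h0) / d_cast      # Eq. [3]: damping coefficient of the Eq. [1] form

def h_nastac(t):
    """Eqs. [4] and [5]: equivalent coefficient combining the coating and a contact term."""
    h_a = h0 + (hf - h0) * (1.0 - min(1.0, t0 / t) ** gamma)
    return h_a * k_C / (k_C + d_C * h_a)

for s in (0.0, 0.01, 0.025, 0.05):
    # Eq. [1] with beta from Eq. [3] reproduces Eq. [2]:
    assert abs(h_exponential(s) - h0 * math.exp(-beta * s)) < 1e-6
    print(f"s = {s:5.3f} m   ->  h = {h_exponential(s):7.1f} W m-2 K-1")
for t in (1.0, 10.0, 60.0, 300.0):
    print(f"t = {t:5.0f} s   ->  h (Nastac) = {h_nastac(t):7.1f} W m-2 K-1")
```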
Existing empirical formulas describing the heat transfer scenario at the interface should only be applied when validated against experimental data. The formation of an air gap depends on many factors such as material and mechanical properties of the casting and the mold, the coating properties, and the process parameters (initial temperatures, the pouring temperature, the casting geometry, and the rotation rate). Obviously, setting up a generalized and unique formula for the heat transfer coefficient at the interface a priori would be very hard, if not impossible. In the present article, we target developing a simple, computationally cheap, and robust algorithm for calculating the air gap at the cast-mold interface during the centrifugal casting, which could be used as an alternative to often doubtful empirical formulas. A schematic of the configuration at the interface is shown in Figure 1. Schematic of the configuration at the cast-mold interface. In the case of perfect contact, the air gap disappears Numerical Model During the centrifugal casting of cylindrical parts, it is reasonable to assume that fields of variables and other properties are uniform in the tangential direction. In fact, also, axial variations will be often small and, therefore, could be neglected, too. This finding directly suggests using a plane stress model. Although different variations of plane stress models have been used in diverse industrial applications, such as autofrettage of gun barrels, strain-hardened pressure vessels, and multilayer seamless pipes, in the past,[48,49,50,51] they have rarely been employed in air gap thickness calculations. Especially, concerning the centrifugal casting, to the best of our knowledge, not a single match was found in the literature survey. Radial and tangential stresses, σr and σt, are coupled through the equilibrium equation: $$ \frac{{{\text{d}}\sigma_{\text{r}} }}{{{\text{d}}r}} + \frac{{\sigma_{\text{r}} - \sigma_{\text{t}} }}{r} + \rho {{\varOmega }}^{2} r = 0, $$ where ρ, Ω, and r are the density, the rotation rate, and the radial coordinate, respectively. When only elastic deformations are considered, stresses are coupled with strains via the Hooke's law with the thermoelastic term. Such a relationship takes the following form: $$ \begin{array}{*{20}c} {\varepsilon_{\text{t}} = \frac{1}{E}\left[ {\sigma_{\text{t}} - \nu \sigma_{\text{r}} } \right] + \alpha T,} \\ {\varepsilon_{\text{r}} = \frac{1}{E}\left[ {\sigma_{\text{r}} - \nu \sigma_{\text{t}} } \right] + \alpha T,} \\ \end{array} $$ where ν, E, α, and T are Poisson's ratio, Young's modulus, the thermal expansion coefficient, and the temperature, respectively. Strains and total radial displacements are related through the following laws: $$ \varepsilon_{\text{t}} = \frac{u}{r},\varepsilon_{\text{r}} = \frac{{{\text{d}}u}}{{{\text{d}}r}}. $$ It would, however, be incorrect to only consider elastic strains. Since the casting in a semisolid state can easily yield under strong centrifugal forces, plastic deformations also must be taken into account. 
Then, total strains can be conveniently expressed as a sum of elastic and plastic strains as follows: $$ \begin{array}{*{20}c} {\varepsilon_{\text{t}} = \frac{1}{E}\left[ {\sigma_{\text{t}} - \nu \sigma_{\text{r}} } \right] + \alpha T + \varepsilon_{\text{t}}^{\text{p}} ,} \\ {\varepsilon_{\text{r}} = \frac{1}{E}\left[ {\sigma_{\text{r}} - \nu \sigma_{\text{t}} } \right] + \alpha T + \varepsilon_{\text{r}}^{\text{p}} ,} \\ \end{array} $$ where \( \varepsilon_{\text{t}}^{\text{p}} \)and \( \varepsilon_{\text{r}}^{\text{p}} \) represent plastic strains in corresponding directions. In addition, it is also physically meaningful to consider a temperature-dependent Young's modulus. Substituting total strains in Eq. [9] with displacements from Eq. [8], and by combining the resulting equations with Eq. [6], we can arrive at the following ordinary differential equation for the total displacement u in the casting: $$ \begin{gathered} {\frac{{{\text{d}}^{2} u}}{{{\text{d}}r^{2} }} + \frac{1}{r}\frac{{{\text{d}}u}}{{{\text{d}}r}}\left( {1 + r\frac{1}{E}\frac{{{\text{d}}E}}{{{\text{d}}r}}} \right) - \frac{u}{{r^{2} }}\left( {1 - r\nu \frac{1}{E}\frac{{{\text{d}}E}}{{{\text{d}}r}}} \right) = \alpha \left( {1 + \nu } \right)\left( {\frac{{{\text{d}}T}}{{{\text{d}}r}} + T\frac{1}{E}\frac{{{\text{d}}E}}{{{\text{d}}r}}} \right)} - \rho r\Omega ^{2} \left( {1 - \nu ^{2} } \right)\frac{1}{E} + F\left( {\varepsilon _{{\text{t}}}^{{\text{p}}} ,\varepsilon _{{\text{r}}}^{{\text{p}}} } \right) {\text{with}} F = \frac{1}{E}\frac{{{\text{d}}E}}{{{\text{d}}r}}\left( {\varepsilon _{{\text{r}}}^{{\text{p}}} + \nu \varepsilon _{{\text{t}}}^{{\text{p}}} } \right) + \frac{{{\text{d}}\varepsilon _{{\text{r}}}^{{\text{p}}} }}{{{\text{d}}r}} + \nu \frac{{{\text{d}}\varepsilon _{{\text{t}}}^{{\text{p}}} }}{{{\text{d}}r}} + \frac{{1 - \nu }}{r}\varepsilon _{{\text{r}}}^{{\text{p}}} - \frac{{1 - \nu }}{r}\varepsilon _{{\text{t}}}^{{\text{p}}} . \end{gathered} $$ In the mold, Eq. [10] is considerably simplified because one assumes a constant Young's modulus and pure elastic deformations. The differential equation for the total displacement u in the mold becomes $$ \frac{{{\text{d}}^{2} u}}{{{\text{d}}r^{2} }} + \frac{1}{r}\frac{{{\text{d}}u}}{{{\text{d}}r}} - \frac{u}{{r^{2} }} = \alpha \left( {1 + \nu } \right)\frac{{{\text{d}}T}}{{{\text{d}}r}} - \rho r{{\varOmega }}^{2} \left( {1 - \nu^{2} } \right)\frac{1}{E}. $$ Note that in Eqs. [10] and [11], subscripts […] S and […] M denoting the mold and the casting are omitted for the sake of brevity. Equations [10] and [11] can be solved provided that plastic strains \( \varepsilon_{\text{r}}^{\text{p}} \;\;{\text{and}} \;\;\varepsilon_{\text{t}}^{\text{p}} \) are known. In order to determine them, a universal stress-strain curve, usually assumed to be equivalent to the stress-strain curve obtained from the uniaxial loading test, must be known in advance. The universal stress-strain curve relates two scalar quantities: the effective stress \( \overline{\sigma } \) and the effective plastic strain \( \overline{{\varepsilon^{\text{p}} }} \). Several models of the effective stress \( \overline{\sigma } \) exist. Here, the von Mises stress, calculated by assuming a principal stress loading, was used in the following form: $$ \overline{\sigma } = \frac{1}{\sqrt 2 }\sqrt {\left( {\sigma_{\text{r}} - \sigma_{\text{t}} } \right)^{2} + \sigma_{\text{r}}^{2} + \sigma_{\text{t}}^{2} } . 
$$ The von Mises stress \( \overline{\sigma } \) is coupled with the increment of the effective plastic strain \( \overline{{d\varepsilon^{\text{p}} }} \) by the Prandtl–Reuss (Levy–Mises) flow rule as follows: $$ d\varepsilon_{\text{r}}^{\text{p}} = \frac{{\overline{{d\varepsilon^{\text{p}} }} }}{{\overline{\sigma } }}\left( {\sigma_{\text{r}} - 0.5\sigma_{\text{t}} } \right) $$ $$ d\varepsilon_{\text{t}}^{\text{p}} = \frac{{\overline{{d\varepsilon^{\text{p}} }} }}{{\overline{\sigma } }}\left( {\sigma_{\text{t}} - 0.5\sigma_{\text{r}} } \right). $$ Knowing or assuming \( \overline{{d\varepsilon^{p} }} \), the plastic strains \( d\varepsilon_{r}^{p} \) and \( d\varepsilon_{t}^{p} \) can be easily calculated and then used in Eq. [10] to extrapolate the total displacement u. In Eqs. [13] and [14], the increment of the effective plastic strain \( \overline{{d\varepsilon^{p} }} \) is used and not the effective plastic strain \( \overline{{\varepsilon^{p} }} \), which means that the plastic strain history (or the loading path) is very important. Correspondingly, a progressive load must be also applied in the simulation. Here, such loading is automatically realized by a time-dependent temperature field and a gradual progress of solidification. In the present study, a temperature-dependent, perfectly elastic-plastic material is used, as shown in Figure 2. At a given temperature, the material deforms elastically until a certain threshold of the effective stress, known as the yield strength, is reached, at which point the material starts yielding with no further increase of the effective stress. Instead of the elastic-perfectly plastic material, any kind of other material could be used such as a strain-hardening or a strain-softening material. Uniaxial stress-strain curves of elastic-perfectly plastic material used in the present study In the following text, we summarize all the facts, assumptions, and solution strategy necessary to run a successful numerical simulation and obtain a reasonable air gap. The mold constantly expands during the entire casting process. The casting also expands but only during the early stage of casting. Later, the strength of the solidified part of the casting is sufficient to withstand centrifugal forces; therefore, the casting contracts. Consequently, the air gap forms. The mold undergoes purely elastic deformations. (In reality, it may not be true, especially in the vicinity of the cast-mold interface. At the initial stage of casting, extreme stresses may occur, causing a type of damage known as "fire cracks," to the inner part of the mold.) Mechanical properties of the mold material are constant. Thermophysical properties may vary with the temperature. The casting may deform both elastically and plastically. A temperature-dependent, perfectly elastic-plastic material is assumed (Figure 2). Mechanical properties (Young's modulus E and yield strength Y) of the casting material are temperature dependent. Thermophysical properties may also vary with the temperature. Only the radiative and the conductive heat transfer mechanisms are expected within the air gap. The convective mechanism is neglected due to the small size of the air gap. In addition to Eqs. [10] and [11], we also need to solve the heat conduction for the temperature T. 
In the cylindrical coordinate system, it takes this form in the mold: $$ \rho_{\text{M}} c_{\text{pM}} \frac{\partial T}{\partial t} = \frac{1}{r}\frac{\partial }{\partial r}\left( {k_{\text{M}} \frac{\partial T}{\partial r}} \right), $$ where ρM, cpM, and kM are the density, specific heat, and thermal conductivity of the mold material, respectively. Similarly, in the casting, it can be written as $$ \rho_{\text{S}} c_{\text{pS}} \frac{\partial T}{\partial t} = \frac{1}{r}\frac{\partial }{\partial r}\left( {k_{\text{S}} \frac{\partial T}{\partial r}} \right) + \rho_{\text{S}} L_{\text{f}} \frac{{\partial g_{\text{s}} }}{\partial t}, $$ where ρ S , cpS, and k S are density, specific heat, and thermal conductivity of the casting material, respectively. The last term is a latent heat source term due to the phase change, in which Lf and gs represent the latent heat and the solid fraction, respectively. In the present study, a simple linear relationship is considered between the solid fraction gs and the temperature T. Other relationships, however, could also be considered (e.g., the lever rule or the Gulliver–Scheil equation). The heat conduction equations, Eqs. [15] and [16], are coupled via the heat flux at the cast-mold interface. A thin-wall model, also known as a zero-capacity model, was used to numerically simplify the situation at the interface by considering only a thermal resistance, exerted by the coating and possibly the air gap. Then, the heat flux at the interface reads as $$ q = k_{\text{ifc}} \frac{{T_{\text{S}} - T_{\text{M}} }}{{dr_{\text{M}} + dr_{\text{S}} }}, $$ $$ k_{\text{ifc}} = \frac{{k_{\text{S}} k_{\text{M}} k_{\text{C}} k_{\text{a}} \left( {dr_{\text{M}} + dr_{\text{S}} } \right)}}{{\left( {k_{\text{C}} k_{\text{a}} \left( {dr_{\text{M}} k_{\text{S}} + dr_{\text{S}} k_{\text{M}} } \right) + k_{\text{S}} k_{\text{M}} \left( {d_{\text{C}} k_{\text{a}} + d_{\text{a}} k_{\text{C}} } \right)} \right)}}, $$ where kifc, kC, and ka denote the effective thermal conductivity, the thermal conductivity of the coating, and the air gap, respectively. Other quantities are explained in Figure 1. The air gap thermal conductivity ka is, in fact, the sum of the thermal conductivity of air ka,phys and a thermal conductivity, which is equivalent to the radiative heat transfer through the air gap, given by $$ k_{{\text{a}}} = k_{{{\text{a,phys}}}} + \sigma d_{{\text{a}}} \left( {T^{\prime}_{{\text{S}}} + T^{*} } \right)\left( {T^{\prime 2} _{{\text{S}}} + T^{{*2}} } \right), $$ where \( \sigma \) is the Stefan–Boltzman constant (5.67 × 10−8 W m−2 K−4). The temperatures T'S and T* must be given in Kelvin. Note that without the air gap (da = 0), Eq. [18] is still valid and represents the effective thermal conductivity only in the presence of the coating. The reader should be reminded that in this study, the black body radiation model has been taken for its simplicity and convenience. However, when targeting more accurate results, considering gray bodies is better justified and Eq. [19] would then become $$ k_{\text{a}} = k_{\text{a,phys}} + \sigma d_{\text{a}} \left( {T'_{\text{S}} + T^{*} } \right)\left( {T^{\prime 2}_{\text{S}} + T^{*2} } \right)\left( {1/\varepsilon_{\text{S}} + 1/\varepsilon_{\text{C}} - 1} \right)^{ - 1} , $$ where εS and εC are emissivity coefficients of both surfaces enclosing the air gap, which belong to the casting and the coating, respectively. Using Eq. [20]. instead of Eq. [19] will naturally reduce the radiative heat transfer. 
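For completeness, the following sketch shows how Eqs. [18] and [19] might be evaluated to obtain the effective interface conductivity and the corresponding heat transfer coefficient h = kifc/(drM + drS) (cf. Eq. [28] in the results section). The material data, temperatures, and gap widths are hypothetical example values, not the values of Tables II and III.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m-2 K-4

def k_air_gap(d_a, T_S, T_star, k_air_phys=0.03):
    """Eq. [19]: air-gap conductivity = molecular conduction + black-body radiation term."""
    return k_air_phys + SIGMA * d_a * (T_S + T_star) * (T_S**2 + T_star**2)

def k_interface(d_a, dr_S, dr_M, k_S, k_M, k_C, d_C, T_S, T_star):
    """Eq. [18]: effective conductivity of the casting / coating / air gap / mold control volume."""
    k_a = k_air_gap(d_a, T_S, T_star)
    num = k_S * k_M * k_C * k_a * (dr_M + dr_S)
    den = k_C * k_a * (dr_M * k_S + dr_S * k_M) + k_S * k_M * (d_C * k_a + d_a * k_C)
    return num / den

def h_interface(d_a, dr_S=1.0e-3, dr_M=1.0e-3, k_S=30.0, k_M=30.0,
                k_C=1.0, d_C=2.0e-3, T_S=1700.0, T_star=600.0):
    """Interface heat transfer coefficient h = k_ifc / (dr_M + dr_S), cf. Eq. [28]."""
    return k_interface(d_a, dr_S, dr_M, k_S, k_M, k_C, d_C, T_S, T_star) / (dr_M + dr_S)

print("no air gap :", round(h_interface(0.0), 1), "W m-2 K-1")     # roughly k_C / d_C
print("0.5 mm gap :", round(h_interface(0.5e-3), 1), "W m-2 K-1")  # reduced once the gap opens
```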
In reality, the air between the casting and the shell may, as a participating gas, further reduce the radiative heat transfer. A description of the corresponding mathematical model can be found, e.g., in Reference 52. Concerning thermal boundary conditions, both the free surface of the casting and the outer surface of the mold are considered to be adiabatic: $$ \begin{array}{*{20}c} {\frac{{{\text{d}}T}}{{{\text{d}}r}}\left( {r_{i} } \right) = 0}, \\ {\frac{{{\text{d}}T}}{{{\text{d}}r}}\left( {r_{o} } \right) = 0.} \\ \end{array} $$ In simulations focused on a comparison with experimental data, thermal boundary conditions, however, should be specified more precisely, e.g., with the help of existing empirical formulas for the Nusselt number[53,54] in rotating geometries. In addition to Eq. [21], boundary conditions have to be specified also for Eqs. [10] and [11]. Here, we have to distinguish between two cases: a perfect contact or an air gap. In the case of the contact, appropriate boundary conditions take the following form: $$ \begin{array}{*{20}l} {\sigma_{\text{rS}} \left( {r_{\text{i}} } \right) = 0} \hfill \\ {u_{\text{S}} \left( R \right) = u_{\text{M}} \left( R \right)} \hfill \\ {\sigma_{\text{rM}} \left( {r_{\text{o}} } \right) = 0.} \hfill \\ \end{array} $$ Otherwise (the air gap), $$ \begin{array}{*{20}l} {\begin{array}{*{20}l} {\begin{array}{*{20}l} {\sigma_{\text{rS}} \left( {r_{\text{i}} } \right) = 0} \hfill \\ {\sigma_{\text{rS}} \left( R \right) = 0} \hfill \\ \end{array} } \hfill \\ {\sigma_{\text{rM}} \left( R \right) = 0} \hfill \\ \end{array} } \hfill \\ {\sigma_{\text{rM}} \left( {r_{\text{o}} } \right) = 0.} \hfill \\ \end{array} $$ Differential equations for total displacements, Eqs. [10] and [11], and temperature, Eqs. [15] and [16], were all solved using the finite difference method with second-order accurate central difference schemes for derivatives (including points at the boundaries). In the radial direction, the casting and the mold were divided into NS and NM uniformly spaced grid points with the dimensions drS and drM of 1 mm (Figure 1). The same grids with uniform spacing were used for both quantities u and T. For Eqs. [15] and [16], the implicit backward Euler method was used for time-stepping. In addition, an iterative approach was necessary for heat conduction equations due to temperature-dependent thermophysical properties, the nonlinear heat flux \( q \) at the cast-mold interface (Eq. [17]), and especially the stiff, latent heat source term. Treatment of the latent heat source term was realized by using a semi-implicit method proposed by Voller and Swaminathan.[55] A detailed description of the discretization can be found in Reference 56. Although systems of equations are unconditionally stable, the time-step size should be small enough so that the loading rate still allows finding correct and physically meaningful increments of plastic strains. Here, the time-step of 0.1 seconds was found to be reasonable. Solution strategy: Initialize fields of temperature, solid fraction, plastic strains, stresses, and air gap thickness. For time tn+1, solve heat conduction equations, Eqs. [15] and [16], coupled by the heat flux at the interface (Eq. [17]), with the air gap thickness da from the previous time t n and obtain new temperature T and solid fraction gs fields. 
In our numerical tests, usually between two and five iterations were necessary to drop scaled residuals below 10 × 10−8 for the latent heat source term and the nonlinear heat flux q, respectively. The residuals were calculated according to the following formula: $$ {\text{res}} = \frac{{\sqrt {N\mathop \sum \nolimits_{i} \left( {T_{i} - T_{i}^{\text{old}} } \right)^{2} } }}{{\mathop \sum \nolimits_{i} T_{i}^{\text{old}} }} $$ where N is the total number of cells (NS + NM). T i and \( T_{i}^{\text{old}} \) are the current temperature and the temperature from the previous iteration both taken at the grid point with the index \( i \) Use Figures 3 and 4 to find new values of yield strength Y and Young's modulus E of the casting material. Assume an air gap. For time tn+1, solve Eq. [11] for total displacements u of mold with the boundary conditions for the radial stress σrM in Eq. [23]. Similarly, for time tn+1, solve Eq. [10] for total displacements u of the casting with the boundary conditions for the radial stress σrS in Eq. [23] and the plastic strains \( \varepsilon_{\text{r}}^{\text{p}} \) and \( \varepsilon_{\text{t}}^{\text{p}} \)from the previous time t n . If the total displacement of the casting at the interface is greater than that of the mold, the casting and the mold are in perfect contact, i.e., no air gap is formed. Otherwise, the cast-mold contact is lost and the air gap is formed. If the earlier is true (perfect contact), recalculate Eqs. [10] and [11] with the boundary conditions given in Eq. [22]. Otherwise, evaluate the air gap thickness as $$ d_{\text{a}} = u_{\text{M}} \left( R \right) - u_{\text{S}} \left( R \right) $$ In the casting, evaluate radial and tangential stresses by using the following explicit formulas, which can be obtained by straightforward manipulations of Eqs. [8] and [9]: $$ \begin{array}{*{20}c} {\sigma_{\text{rS}} = \frac{E}{{1 - \nu^{2} }}\left[ {\left( {\frac{{{\text{d}}u}}{{{\text{d}}r}} - \varepsilon_{r}^{p} } \right) + \nu \left( {\frac{u}{r} - \varepsilon_{t}^{p} } \right)} \right] - \alpha E\frac{1}{1 - v}T,} \\ {\sigma_{\text{tS}} = \frac{E}{{1 - \nu^{2} }}\left[ {\left( {\frac{u}{r} - \varepsilon_{t}^{p} } \right) + \nu \left( {\frac{{{\text{d}}u}}{{{\text{d}}r}} - \varepsilon_{r}^{p} } \right)} \right] - \alpha E\frac{1}{1 - v}T.} \\ \end{array} $$ Using Eq. [26] in Eq. [12], calculate von Mises stresses \( \overline{\sigma } \) in the casting and compare them with yield stresses Y obtained from the stress-strain curve (Figure 3). Then, identify only the points P that yield \( \left( {P = \overline{\sigma } \ge Y} \right) \). For the points P, new increments \( \overline{{d\varepsilon^{p} }} \) of the effective plastic strain \( \overline{{\varepsilon^{p} }} \) must be calculated so that $$ \overline{\sigma } - Y = 0 $$ This is realized through an optimization loop, in which Eq. [27] is the objective function and \( \overline{{d\varepsilon^{\text{p}} }} \) is constrained to values greater than zero. Then, one iteration sequence could have the following form: Estimate increments \( \overline{{d\varepsilon^{p} }} \); using Eqs. [13] and [14], calculate increments \( d\varepsilon_{r}^{p} \)and \( d\varepsilon_{t}^{p} \) and update the plastic strains \( \varepsilon_{r}^{p} \)and\( \varepsilon_{t}^{p} \left( {\varepsilon_{r}^{p} = \varepsilon_{r}^{p} + d\varepsilon_{r}^{p} ,\varepsilon_{t}^{p} = \varepsilon_{t}^{p} + d\varepsilon_{t}^{p} } \right) \); solve Eq. [10] for new displacements u in the casting and get new stresses (Eq. 
[26]); and recalculate von Mises stresses \( \overline{\sigma } \) and repeat until the convergence of Eq. [27] is attained. In the present article, a nonlinear least-squares optimization algorithm, known as the trust-region-reflective algorithm,[57,58] was applied. In order to reduce dispersive errors of \( \overline{{d\varepsilon^{p} }} \) appearing due to complex loading and, consequently, yielding on a discrete grid, the Savitzky–Golay filter[59] was applied on a temporally overlaid \( \overline{{d\varepsilon^{p} }} \) signal. Proceed to the next time-step. Temperature-dependent yield strength of the casting material.[60] The dashed line represents data reconstructed using the extrapolation until the temperature of solidus Tsol Temperature-dependent Young's modulus of the casting material.[60] The dashed line represents data reconstructed by means of extrapolation until the solidus temperature Tsol As the main objective of the present study is to propose and test a novel approach of calculating an air gap rather than numerically analyzing a particular process, material and mechanical properties used in the simulations only roughly correspond to those of real materials (Table II). The geometry, initial conditions, and other casting parameters are given in Table III. Table II Properties of Materials Used in the Simulations Table III Geometry, Initial Conditions, and Other Casting Parameters First, numerical tests were performed for different coating thicknesses (0.5, 1, 2, and 4 mm) and all other parameters were fixed. Total displacements u of the mold and the casting at the interface are shown as a function of time t in Figure 5. As one would expect, initially, the mold and the casting are in contact. At a certain moment, they detach and their displacements follow different paths. This is clearly shown by bifurcating curves in Figure 5. A difference between the casting and the mold displacement corresponds to the air gap thickness da. Obviously, the air gap appears earlier in the case of a thin coating than that of a thick one. Consequently, the heat transfer coefficient h at the interface will drop faster in the case of a thin coating than that of a thick one, which can be seen in Figure 6. The heat transfer coefficient h was simply calculated as $$ h = \frac{q}{{T_{\text{S}} - T_{\text{M}} }} = \frac{{k_{\text{ifc}} }}{{dr_{\text{M}} + dr_{\text{S}} }}. $$ Total displacements u at the cast-mold interface for different coating thicknesses of 0.5, 1, 2, and 4 mm. Bifurcation corresponds to the first appearance of the air gap Heat transfer coefficients h at the cast-mold interface for different coating thicknesses of 0.5, 1, 2, and 4 mm. Before the air gap is formed, h is constant defined as kCdC For each configuration (dC = 0.5, 1, 2, and 4 mm), the time evolution of the heat transfer coefficient h is, therefore, unique. Ultimately, it seems that after 300 seconds, the heat transfer coefficients are almost identical, close to 200 W m−2 K−1. Similar tests with similar outputs were performed for different values of the rotation rate Ω (50, 71, and 90 rad s−1). Naturally, the higher the centrifugal force, the better the contact between the casting and the mold. Concerning the elastic deformations of the mold, displacements u are larger at higher rotation rates Ω (Figure 7). Higher centrifugal forces are also responsible for stronger and longer yielding of the partly solidified casting, which delays a formation of the air gap. 
Consequently, at a given instance, the heat transfer coefficient h is higher in the case of a higher rotation rate Ω (Figure 8). Total displacements u at the cast-mold interface for different values of rotation rate Ω such that Ω = 50, 71, 90 rad s−1 Heat transfer coefficients h at the cast-mold interface for different values of rotation rate Ω such that Ω = 50, 71, and 90 rad s−1 A similar study was carried out for different values of solidus temperature Tsol such that Tliq − Tsol = 10 K, 50 K, 100 K, and 200 K (10 °C, 50 °C, 100 °C, and 200 °C). Although it is a somewhat intuitive, one could confidently state that the smaller the difference is, the earlier the air gap occurs. Again, we provide total displacements and heat transfer coefficients at the interface in Figures 9 and 10, respectively. Since coating parameters are fixed this time, initial heat transfer coefficients are all identical, equal to 1000 W m−2 K−1. Later, they significantly deviate. While the curves referring to mold displacements at the interface have a similar trend, indicating a continuous thermoelastic expansion of the mold, those corresponding to casting displacements exhibit more complex scenarios due to the combination of plastic and elastic deformations. In Figure 9, e.g., the dash-dot line representing Tliq − Tsol = 200 K (200 °C) indicates the yielding of the casting material within the entire time span simulated. On the contrary, the dash line Tliq − Tsol = 50 K (50 °C) displays only slight yielding in the beginning, immediately followed by thermoelastic contraction. Total displacements u at the cast-mold interface for different values of solidus temperature Tsol such that Tliq − Tsol = 10 K, 50 K, 100 K, and 200 K (10 °C, 50 °C, 100 °C, and 200 °C) Heat transfer coefficients h at the cast-mold interface for different values of solidus temperature Tsol such that Tliq − Tsol = 10 K, 50 K, 100 K, and 200 K (10 °C, 50 °C, 100 °C, and 200 °C) Although the quantities calculated at the interface, such as the air gap thickness da and the heat transfer coefficient h, are of primary interest here, the numerical model also provides other quantities such as stresses and elastic/plastic strains. In Figure 11, a typical example of temperature and strains appears at 50 seconds. The gray zone on the right represents the mold. The rest on the left belongs to the casting. A temperature drop can be seen at the interface due to a large thermal resistance. As the time proceeds, total strains grow quite uniformly throughout the entire thickness of the mold. The same applies also to the casting but only at the early stage. Later, when the casting is partly solidified and the yield strength Y increased, total strains start dropping and the casting contracts consequently. Distribution of strains and temperature in the radial direction for the case with the coating thickness dC of 2.0 mm at 50 s. The zones in white and gray stand for the casting and the mold, respectively In addition to strains (Figure 11), stresses are shown in Figure 12. A typical distribution of stresses can be seen in the mold. While the radial stresses are exclusively compressive, the tangential stresses are compressive close to the inner surface of the mold and become tensile as they approach the outer surface of the mold. The greatest stresses the mold must withstand are naturally located at the inner surface due to a sudden temperature loading. In this particular case (Figure 12), they do, in fact, reach the yield strength of the material. 
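Stresses of this kind are checked against the yield strength through the von Mises criterion; a minimal sketch of that check and of the Prandtl–Reuss increments used for the casting (Eqs. [12] through [14]) is given below. The stress state, the yield strength, and the trial increment of effective plastic strain are hypothetical, since in the full model the increment is found by the optimization loop enforcing Eq. [27].

```python
import numpy as np

def von_mises(sig_r, sig_t):
    """Eq. [12]: effective stress for the radial/tangential principal stress state."""
    return np.sqrt(((sig_r - sig_t) ** 2 + sig_r**2 + sig_t**2) / 2.0)

def plastic_increments(sig_r, sig_t, d_eps_eff):
    """Eqs. [13] and [14]: Prandtl-Reuss (Levy-Mises) flow rule for the two directions."""
    sig_eff = von_mises(sig_r, sig_t)
    d_eps_r = d_eps_eff / sig_eff * (sig_r - 0.5 * sig_t)
    d_eps_t = d_eps_eff / sig_eff * (sig_t - 0.5 * sig_r)
    return d_eps_r, d_eps_t

# Hypothetical stress state (Pa) and yield strength (Pa) at a few casting grid points.
sig_r = np.array([-4.0e6, -2.0e6, -0.5e6])
sig_t = np.array([-9.0e6, -3.0e6, -0.2e6])
Y     = np.array([ 5.0e6,  5.0e6,  5.0e6])

yielding = von_mises(sig_r, sig_t) >= Y          # points P that require a plastic correction
d_eps_eff = 1.0e-4                               # assumed trial increment (illustrative only)
d_eps_r, d_eps_t = plastic_increments(sig_r, sig_t, d_eps_eff)
print(yielding, d_eps_r[yielding], d_eps_t[yielding])
```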
After several casting cycles, a thermal loading would most likely lead to the formation of fire cracks.[61] Concerning stresses in the casting, at the early stage of solidification, they are within the temperature-dependent envelope of the yield strength. The (semi-)solid part of the casting close to the mold can already hold some stresses, whereas the liquid part remains stress free. At later stages of the casting, the stresses in the casting are well below the yield strength and only the elastic loading is present. When the casting procedure is finished and the temperature field becomes uniform, residual stresses remain in the casting. An analysis of stresses and strains is, however, beyond the scope of the present article. Distribution of radial and tangential stresses for the case with the coating thickness dC of 2.0 mm at 50 s In centrifugal casting simulations, exponential functions are generally used to describe the heat transfer coefficient at the cast-mold interface, varying due to the air gap formation. Such functions contain empirical constants, which must be carefully specified. Unfortunately, this is not an easy task. An experiment alone is not sufficient to determine such constants, and computationally expensive inverse methods should be employed, which is, however, rarely the case. A literature survey performed here reveals an expansive scatter of data used in current and previous research. In the present study, we offer an alternative of calculating an air gap thickness and the corresponding heat transfer coefficient at the interface. The heat transfer model is coupled with a plane stress model, taking into account thermoelastic stresses, centrifugal forces, plastic deformations, and a temperature-dependent Young's modulus. Several numerical tests were performed for different coating thicknesses dC, rotation rates Ω, and solidus temperatures Tsol. Results were analyzed in the sense of comparing heat transfer coefficients at the interface and air gap thicknesses as a function of time. The numerical model developed here helps demonstrate that the scenario at the interface is unique for each set of parameters. Therefore, deploying any of the exponential functions that explicitly describe the thermal resistance at the cast-mold interface will always give rise to the question about the actual value of empirical constants used in that particular function. Although the material properties taken for this study do not strictly correspond to any particular material, they are obviously not far from material properties of common steels and coatings, and the results obtained here appear to be entirely reasonable and meaningful. In the near future, we plan to verify the current numerical approach against the results obtained from the inverse task run with the experimental data. Finally, possible room for improvement of the presented model remains. For example, some kind of implicit coupling between the heat transfer model and the plane stress model would be beneficial and might even be necessary in order to maintain numerical stability (or suppress unphysical oscillations of calculated displacements), especially at higher cooling rates, e.g., when an air gap is being just formed. In addition, the optimization loop involved in the loading step, i.e., the process of calculating the plastic strains could also be further improved. 
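Finally, as a recap of the solution strategy listed above, one possible way to organize the explicit coupling between the thermal solve and the plane stress solve is sketched below. The three solver functions are trivial stand-ins so that the script runs; a real implementation would contain the discretized forms of Eqs. [10], [11], and [15] through [17] together with the plastic-correction loop, and is not reproduced here.

```python
def solve_heat(T, gs, d_a, dt):
    """Stand-in for the implicit heat conduction step, Eqs. [15]-[17], using the old gap d_a."""
    return T - 0.5 * dt, min(1.0, gs + 0.01 * dt)          # dummy cooling and solidification

def solve_displacement_mold(T):
    """Stand-in for Eq. [11] with the free-surface conditions of Eq. [23]."""
    return 1.0e-4 + 1.0e-7 * (1800.0 - T)                  # dummy thermoelastic expansion

def solve_displacement_casting(T, gs):
    """Stand-in for Eq. [10] with Eq. [23], including the plastic correction."""
    return 1.2e-4 - 5.0e-3 * gs                            # dummy contraction with solid fraction

T, gs, d_a, dt = 1800.0, 0.0, 0.0, 0.1                     # initial state; time-step of 0.1 s
for step in range(10):
    T, gs = solve_heat(T, gs, d_a, dt)                     # steps 1-2 of the strategy
    u_M = solve_displacement_mold(T)                       # mold displacement at the interface
    u_S = solve_displacement_casting(T, gs)                # casting displacement at the interface
    if u_S >= u_M:
        d_a = 0.0        # perfect contact: in the full model, redo with the BCs of Eq. [22]
    else:
        d_a = u_M - u_S  # Eq. [25]: air gap thickness
    print(f"t = {(step + 1) * dt:4.1f} s   air gap = {d_a * 1e3:6.4f} mm")
```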
Nomenclature

C : Subscript referring to the coating
M : Subscript referring to the mold
S : Subscript referring to the casting
c p : Specific heat (J kg−1 K−1)
d : Casting thickness (mm)
d a : Air gap thickness (mm)
d C : Coating thickness (mm)
E : Young's modulus (Pa)
g s : Solid fraction
h : Heat transfer coefficient at the interface (W m−2 K−1)
h 0 : Initial heat transfer coefficient at the interface (W m−2 K−1)
h f : Final heat transfer coefficient at the interface (W m−2 K−1)
h a : Heat transfer coefficient between the casting and the coating (W m−2 K−1)
k : Thermal conductivity (W m−1 K−1)
k a : Effective thermal conductivity of the air gap (W m−1 K−1)
k a,phys : Thermal conductivity of air (W m−1 K−1)
k C : Thermal conductivity of the coating (W m−1 K−1)
k ifc : Effective thermal conductivity of the control volume built of the mold, coating, air gap, and casting (W m−1 K−1)
L : Thickness of the mold (m)
L f : Latent heat of solidification (J kg−1)
N : Number of grid points
Set of yielding points
q : Heat flux through the interface (W m−2)
q 0 : Initial heat flux through the interface (only coating present) (W m−2)
r : Radial distance (mm)
r i : Inner radius of the casting (mm)
r o : Outer radius of the mold (mm)
Inner radius of the mold (radius of the interface) (mm)
s : Solidified thickness of the casting (mm)
t : Time (s)
t 0 : Time of solidification initiation (s)
T : Temperature (K (°C))
T fill : Initial temperature of the liquid metal (filling temperature) (K (°C))
T liq : Liquidus temperature of the casting material (K (°C))
T mold : Initial temperature of the mold (K (°C))
T sol : Solidus temperature of the casting material (K (°C))
T * : Temperature between the coating and the air gap (Fig. 1) (K (°C))
T' M : Temperature between the mold and the coating (K (°C))
T' S : Temperature between the casting and the air gap (K (°C))
u : Radial displacement (mm)
Y : Yield strength (Pa)
α : Thermal expansion coefficient (K−1)
β : Damping coefficient
γ : Constant exponent
ε r : Total strains in radial direction (mm mm−1)
ε t : Total strains in tangential direction (mm mm−1)
\( \varepsilon_{\text{r}}^{\text{p}} \) : Plastic strains in radial direction (mm mm−1)
\( \varepsilon_{\text{t}}^{\text{p}} \) : Plastic strains in tangential direction (mm mm−1)
\( \overline{{\varepsilon^{\text{p}} }} \) : Effective plastic strain (mm mm−1)
є : Radiative emissivity
ν : Poisson's ratio
ρ : Density (kg m−3)
σ r : Radial stress (Pa)
σ t : Tangential stress (Pa)
\( \overline{\sigma } \) : von Mises (effective) stress (Pa)
σ : Stefan–Boltzmann constant (W m−2 K−4)
Ω : Rate of rotation (rad s−1)

References

H. Esaka, K. Kawai, H. Kaneko, and K. Shinozuka: IOP Conference Series: Materials Science and Engineering, 2012, vol. 33, p. 012041.
S. Wei: ASM Int., 2008, vol. 15, pp. 667–73.
P. Kapranos, C. Carney, A. Pola, and M. Jolly: Compr. Mater. Processing, 2014, vol. 5, pp. 39–67.
M. Yuan, L. Cao, Y. Xu, and X. Song: Model. Num. Simul. Mater. Sci., 2014, vol. 4, pp. 20–24.
E. Kaschnitz: IOP Conference Series: Materials Science and Engineering, 2012, vol. 33, p. 012031.
T.C. Lebeau: Master's Thesis, University of Alabama, Birmingham, AL, 2008, p. 35.
K.S. Keerthi Prasad, M.S. Murali, and P.G. Mukunda: Front. Mater. Sci. China, 2010, vol. 4, pp. 103–11.
S.-L. Lu, F.-R. Xiao, S.-J. Zhang, Y.-W. Mao, and B. Liao: Appl. Therm. Eng., 2014, vol. 73, pp. 512–21.
D. McBride, N.J. Humphreys, T.N. Croft, N.R. Green, M. Cross, and P. Withey: Comput. Fluids, 2013, vol. 82, pp. 63–72.
J. Bohacek, A. Kharicha, A. Ludwig, and M. Wu: ISIJ Int., 2014, vol. 54, pp. 266–74.
J. Bohacek, A. Kharicha, A. Ludwig, and M. Wu: Appl. Math. Comp., 2015, vol. 267, pp. 179–94.
N. Song, Y. Luan, Y. Bai, Z.A. Xu, X. Kang, and D. Li: J. Mater. Sci. Technol., 2012, vol. 28, pp. 147–54.
H. Fu, Q. Xiao, and J. Xing: Mater. Sci. Eng. A, 2008, vol. 479, pp. 253–60.
L. Drenchev, J. Sobczak, S. Malinov, and W. Sha: Model. Simul. Mater. Sci. Eng., 2003, vol. 11, pp. 651–74.
J.A. Hines: Metall. Mater. Trans. B, 2004, vol. 35B, pp. 299–311.
W.D. Griffiths and R. Kayikci: J. Mater. Sci., 2007, vol. 42, pp. 4036–43.
C.P. Hallam and W.D. Griffiths: Metall. Mater. Trans. B, 2004, vol. 35B, pp. 721–33.
Z. Xu, N. Song, R.V. Tol, Y. Luan, and D. Li: IOP Conference Series: Materials Science and Engineering, 2012, vol. 33, p. 012030.
J.W. Gao and C.Y. Wang: Mater. Sci. Eng. A, 2000, vol. 292, pp. 207–15.
K. Cook, B. Wu, and R.G. Reddy: Int. J. Manuf. Sci. Prod., 2006, vol. 17, pp. 48–59.
J. Bohacek, A. Kharicha, A. Ludwig, and M. Wu: IOP Conference Series: Materials Science and Engineering, 2012, vol. 33, p. 012032.
N.J. Humphreys, D. McBride, D.M. Shevchenko, T.N. Croft, P. Withey, N.R. Green, and M. Cross: Appl. Math. Model., 2013, vol. 37, pp. 7633–43.
S.R. Chang, J.M. Kim, and C.P. Hong: ISIJ Int., 2001, vol. 41, pp. 738–47.
C.G. Kang, P.K. Rohatgi, C.S. Narendranath, and G.S. Cole: ISIJ Int., 1994, vol. 34, pp. 247–54.
C.G. Kang and P.K. Rohatgi: Metall. Mater. Trans. B, 1996, vol. 27B, pp. 277–85.
Y. Ebisu: AFS Trans., 1977, pp. 643–54.
Kamlesh: Ph.D. Thesis, BVM Engineering College, Gujarat, India, 2001, p. 35.
L. Lajoye and M. Suery: Proc. Int. Symp. on Advances in Cast Reinforced Metal Composites, ASM International, Chicago, IL, 1988, pp. 15–20.
P.S.S. Raju and S.P. Mehrotra: Mater. Trans. JIM, 2000, vol. 41, pp. 1626–35.
E. Panda, D. Mazumdar, and S.P. Mehrotra: Metall. Mater. Trans. A, 2006, vol. 37A, pp. 1675–87.
L. Nastac: ISIJ Int., 2014, vol. 54 (6), pp. 1294–1303.
S. Vacca, M.A. Martorano, R. Heringer, and M. Boccalini, Jr.: Metall. Mater. Trans. A, 2015, vol. 46A, pp. 2238–48.
F. Susac, K. Ohura, M. Banu, and A. Makinouchi: VCAD System Res., 2009, pp. 69–70.
H.M. Sahin, K. Kocatepe, R. Kayikci, and N. Akar: Energy Convers. Manag., 2006, vol. 47, pp. 19–34.
A.F. Ilkhchy, N. Varahraam, and P. Davami: Iran. J. Mater. Sci. Eng., 2012, vol. 9, pp. 11–20.
J.V. Beck, B. Blackwell, and C. Clair: Inverse Heat Conduction: Ill-Posed Problems, Wiley-Interscience, New York, NY, 1985.
J. Mahmoudi: Int. J. Cast. Met. Res., 2006, vol. 19, pp. 223–36.
B. Coates and S.A. Argyropoulos: Metall. Mater. Trans. B, 2007, vol. 38B, pp. 243–55.
M.A. Taha, N.A. El-Mahallawy, M.T. El-Mestekawi, and A.A. Hassan: Mater. Sci. Technol., 2001, vol. 17, pp. 1093–1101.
J. Kron: Ph.D. Thesis, Royal Institute of Technology, Stockholm, 2004, p. 15.
J. Kron, A. Lagerstedt, and H. Frederiksson: Int. J. Cast. Met. Res., 2005, vol. 18, pp. 29–40.
A. Lagerstedt: Ph.D. Thesis, Royal Institute of Technology, Stockholm, 2004, p. 41.
K. Schwerdtfeger, M. Sato, and K.-H. Tacke: Metall. Mater. Trans. B, 1998, vol. 29B, pp. 1057–68.
J. Kron, M. Bellet, A. Ludwig, B. Pustal, J. Wendt, and H. Fredriksson: Int. J. Cast. Met. Res., 2004, vol. 17, pp. 295–310.
M. Trovant and S.A. Argyropoulos: Can. Metall. Q., 1998, vol. 37, pp. 185–96.
R.K. Nayak and S. Sundarraj: Metall. Mater. Trans. B, 2010, vol. 41B, pp. 151–60.
A. Kandil, A.A. El-Kady, and A. El-Kafrawy: Int. J. Mech. Sci., 1995, vol. 37, pp. 721–32.
X.-L. Gao: Int. J. Solid Struct., 2003, vol. 40, pp. 6445–55.
A. Loghman and M.A. Wahab: J. Press. Vess.-Trans. ASME, 1994, vol. 116, pp. 105–09.
J. Perry and J. Aboudi: J. Press. Vess.-Trans. ASME, 2003, vol. 125, pp. 248–52.
G.H. Geiger and D.R. Poirier: Transport Phenomena in Metallurgy, Addison-Wesley Publishing Company, Reading, MA, 1973, p. 396.
S. Seghir-Ouali, D. Saury, S. Harmand, O. Phillipart, and D. Laloy: Int. J. Therm. Sci., 2006, vol. 45, pp. 1166–78.
S. Harmand, J. Pelle, S. Poncet, and I.V. Shevchuk: Int. J. Therm. Sci., 2013, vol. 67, pp. 1–30.
V.R. Voller and C.R. Swaminathan: Num. Heat Transfer B, 1991, vol. 19, pp. 175–89.
J. Bohacek, A. Kharicha, A. Ludwig, M. Wu, E. Karimi-Sibaki, A. Paar, M. Brandner, L. Elizondo, and T. Trickl: Appl. Math. Comp., 2017, vol. 319(C), pp. 301–17.
T.F. Coleman and Y. Li: SIAM J. Optimiz., 1996, vol. 6, pp. 418–45.
T.F. Coleman and Y. Li: Math. Program., 1994, vol. 67, pp. 189–224.
S.J. Orfanidis: Introduction to Signal Processing, Prentice-Hall, Englewood Cliffs, NJ, 1996.
R. Brockenbrough and F. Merrit: Structural Steel Designer's Handbook, 5th ed., McGraw-Hill Education, New York, NY, 2011.
A. Weronski and T. Hejwowski: Thermal Fatigue of Metals, Marcel Dekker, Inc., New York, NY, 1991, p. 223.

Acknowledgments

Open access funding provided by Montanuniversity Leoben. Financial support from the Austrian Federal Government (in particular, from the Bundesministerium fuer Verkehr, Innovation und Technologie and the Bundesministerium fuer Wirtschaft, Familie und Jugend) and the Styrian Provincial Government, represented by Oesterreichische Forschungsfoerderungsgesellschaft mbH and by Steirische Wirtschaftsfoerderungsgesellschaft mbH, within the research activities of the K2 Competence Centre on "Integrated Research in Materials, Processing and Product Engineering," operated by the Materials Center Leoben Forschung GmbH in the framework of the Austrian COMET Competence Centre Programme, is gratefully acknowledged. This work is also financially supported by the Eisenwerk Sulzau-Werfen R. & E. Weinberger AG.

Author information

Chair of Simulation and Modeling Metallurgical Processes, Metallurgy Department, Montanuniversitaet Leoben, Franz-Josef-Str. 18/III, 8700, Leoben, Austria: Jan Bohacek, Abdellah Kharicha & Andreas Ludwig
Christian Doppler Laboratory for "Advanced Process Simulation of Solidification and Melting", Franz-Josef-Str. 18/III, 8700, Leoben, Austria: Menghuai Wu & Ebrahim Karimi-Sibaki
Correspondence to Jan Bohacek.
Manuscript submitted October 26, 2016.

Bohacek, J., Kharicha, A., Ludwig, A. et al. Heat Transfer Coefficient at Cast-Mold Interface During Centrifugal Casting: Calculation of Air Gap. Metall Mater Trans B 49, 1421–1433 (2018). https://doi.org/10.1007/s11663-018-1220-0
Issue Date: June 2018
Keywords: Casting-mold Interface; Centrifugal Casting; Time-dependent Heat Transfer Coefficient; Plane Stress Model; Casting Contraction
CommonCrawl
\begin{definition}[Definition:Secant Function/Definition from Triangle] In the above right triangle, we are concerned about the angle $\theta$. The '''secant''' of $\angle \theta$ is defined as being $\dfrac{\text{Hypotenuse}} {\text{Adjacent}}$. \end{definition}
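As a quick numerical illustration of this definition: in a right triangle with legs $3$ and $4$ and hypotenuse $5$, taking $\theta$ to be the acute angle whose adjacent side has length $4$, the secant of $\theta$ is $\dfrac 5 4$.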
ProofWiki
Fair Patrolling

Given a set of areas, the robot(s) should repeatedly visit all areas in a fair way. This pattern requires a robot to keep visiting a set of locations fairly, i.e., the robot patrols the locations while keeping the number of times each area is patrolled equal.

$\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{G}( \mathcal{F} (l_i))$

$\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{G} (l_{i} \rightarrow \mathcal{X} ((\neg l_i)\ \mathcal{W}\ l_{(i+1)\%n}))$

where $l_1, l_2, \ldots$ are location propositions; "l1" and "l2" are expressions indicating that a robot r is in location 1 and in location 2, respectively. The specification, given locations 1 and 2 to be visited fairly, requires that after 1 is visited, 1 is not visited again before 2 is visited. This is a necessary approximation, since LTL does not allow counting; it ensures that the difference in the number of times locations 2 and 1 are visited is at most one (an explicit two-location instantiation is given at the end of this entry). Note that the pattern is general and considers the case in which a robot can be in two locations at the same time. For example, a robot can be in an area of a building indicated as l1 (e.g., area 01) and at the same time in a room of that area indicated as l2 (e.g., room 002). If the topological intersection of the considered locations is empty, then the robot cannot be in two locations at the same time, and transitions labeled with both l1 and l2 cannot be fired.

Example: locations $l_1$, $l_2$, and $l_3$ must be fairly patrolled. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_1 \rightarrow l_3)^\omega$ violates the mission requirement, since the robot patrols $l_1$ more often than $l_2$ and $l_3$. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_4 \rightarrow l_2 \rightarrow l_4 \rightarrow ( l_1 \rightarrow l_2 \rightarrow l_3)^\omega$ satisfies the mission requirement, since locations $l_1$, $l_2$, and $l_3$ are patrolled fairly.

The Fair Patrolling pattern is a specialization of the Patrolling pattern, in which the robot should keep visiting a set of locations in a fair way.

Occurrences: Smith et al. proposed an instance of a mission specification that requires an equal number of visits to each data-gathering location.

(Figure: Büchi automaton representing the accepting sequences of events, where circled states are accepting states and states with an incoming arrow with no source are initial states. The automaton is deterministic.)

The corresponding formulation with universal path quantifiers (CTL):

$\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{G}( \forall \mathcal{F} (l_i))$ \\ $\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{G} (l_{i} \rightarrow \forall \mathcal{X} (\forall (\neg l_i) \mathcal{W} l_{(i+1)\%n}))$

Tagged: surveillance
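As announced above, instantiating the two templates for two locations ($n = 2$, reading the index arithmetic cyclically so that the successor of $l_2$ is $l_1$) gives:

$\mathcal{G}(\mathcal{F}(l_1)) \wedge \mathcal{G}(\mathcal{F}(l_2)) \wedge \mathcal{G}(l_1 \rightarrow \mathcal{X}((\neg l_1)\ \mathcal{W}\ l_2)) \wedge \mathcal{G}(l_2 \rightarrow \mathcal{X}((\neg l_2)\ \mathcal{W}\ l_1))$

which is exactly the reading given in the text: each location is visited infinitely often, and after visiting one of the two locations the robot does not visit it again before visiting the other.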
CommonCrawl
\begin{document} \title[Cover time of binary tree]{Limit law for the cover time\\ of a random walk on a binary tree} \author[Amir Dembo\;\; Jay Rosen\;\; Ofer Zeitouni] {Amir Dembo\;\; Jay Rosen\;\; Ofer Zeitouni} \date{June 16, 2019} \thanks{Amir Dembo was partially supported by NSF grant DMS-1613091.} \thanks{Jay Rosen was partially supported by the Simons Foundation.} \thanks{Ofer Zeitouni was supported by the ERC advanced grant LogCorFields.} \subjclass[2010]{60J80; 60J85; 60G50} \keywords{Cover time. Binary tree. Barrier estimates.} \maketitle \begin{abstract} Let $\mathcal{T}_n$ denote the binary tree of depth $n$ augmented by an extra edge connected to its root. Let $\mathcal{C}_n$ denote the cover time of $\mathcal{T}_n$ by simple random walk. We prove that $\sqrt{ \mathcal{C}_{n} 2^{-(n+1) } } - m_n$ converges in distribution as $n\to \infty$, where $m_n$ is an explicit constant, and identify the limit. \end{abstract} \section{Introduction} We introduce in this section notation and our main results, provide background, and give a road map for the rest of the paper. \subsection{Notation and main result} Let $\mathcal{T}_n$ denote the binary tree of depth $n$, whose only vertex of degree $2$ is attached to an extra vertex $\rho$, called the \abbr{root}. The \abbr{cover time} $\mathcal{C}_n$ of $\mathcal{T}_n$ is the number of steps of a (discrete time) simple random walk started at $\rho$, till visiting all vertices of $\mathcal{T}_n$. We write \abbr{srw} for such a random walk. The main result of this paper is the following theorem, which gives convergence in law of a (normalized) version of the cover time $\mathcal{C}_n$. \begin{theorem} \label{theo-covertime} Let $\mathcal{C}'_n := 2^{-(n+1)} \mathcal{C}_n$, and set \begin{equation} m_{n} : = \rho_{n} n\,, \quad \rho_{n}: = c_\ast-\frac{\log n}{c_\ast n}\,, \quad \quad c_\ast=\sqrt{2\log 2}\,. \label{28.00} \end{equation} There exist a random variable $X'_\infty>0$ and $\alpha_{\ast}>0$ finite, so that, for any fixed $y\in \mathbb{R}$, \begin{align} \lim_{n\to\ff} \mathbb{P} \big(\sqrt{\mathcal{C}'_n} - m_n \le y \big) = \mathbb{E} \(\exp\{-\alpha_{\ast} X'_\infty \mathrm{e}^{-c_{\ast}y}\} \) := \mathbb{P}(Y'_\infty\leq y) \,. \label{ct2.25b} \end{align} \end{theorem} \noindent That is, the normalized cover time $\sqrt{\mathcal{C}'_n} - m_n$ converges in distribution to $Y'_\infty$, a standard Gumbel random variable shifted by $\log (\alpha_{\ast} X'_\infty)$ then scaled by $1/c_\ast$. See \Cref{lem-dermartconv} for a description of the random variable $X'_\infty$ in terms of the limit of a derivative martingale. As is often the case, most of the work in proving a statement such as Theorem \ref{theo-covertime} involves the control of certain excursion counts. We now introduce notation in order to describe these. Let $V_j$ denote the set of vertices of $\mathcal{T}_n$ at level $j$, with $V_{-1}=\{\rho\}$. For each $v\in V_{n}$ let $v(j)$ denote the ancestor of $v$ at level $j \le n$, i.e. the unique vertex in $V_j$ on the geodesic connecting $v$ and \corDB{$\rho$}. In particular, $v(-1)$ is the root \corDB{$\rho$}. Next, for each $v\in V_{n}$ let \begin{align} T_{v,j}^{s} = \#\{ &\mbox{\rm excursions from $v(j-1)$ to $v(j)$ made by the \abbr{srw} on $\mathcal{T}_n$}\nonumber \\ &\; \mbox{\rm prior to completing its first $s$ excursions from the root $\rho$}\} . 
\label{Tjsdef} \end{align} Setting \begin{equation} t^{\ast}_{v,n}=\inf \{s \in \mathbb{Z}_+\,:\, T_{v,n}^{s} \neq 0\}\,, \label{deftast} \end{equation} for the number of excursions to the root by the \abbr{srw} till reaching $v$, we consider the corresponding excursion cover time of $V_{n}$, \begin{equation}\label{eq-tsar} t^{\ast}_{n}:=\sup_{v\in V_{n}} \{ t^{\ast}_{v,n} \} \,. \end{equation} Our main tool in proving \Cref{theo-covertime} is a generalization (see \Cref{prop-eta-tau} below), of the following theorem concerning $t_n^*$. \begin{theorem} \label{thm1.2} With notation as in Theorem \ref{theo-covertime}, \begin{equation} \sqrt{2 t^{\ast}_n} - m_n \overset{dist}{\Longrightarrow} Y_\infty \qquad \mbox{ as } \quad n\to\ff \,,\label{new-CT.0} \end{equation} where $Y_\ff := Y'_\ff - \bar g_\infty$ for a standard Gaussian random variable $\bar g_\infty$, independent of $Y'_\infty$. Alternatively, for some random variable $X_\infty>0$, \begin{equation} \mathbb{P}(Y_\infty\leq y) := \mathbb{E} \(\exp\{-\alpha_{\ast} X_\infty \mathrm{e}^{-c_{\ast}y}\} \) \,. \label{ct2.25} \end{equation} \end{theorem} \noindent As we will see, most of the technical work in the proof of \Cref{thm1.2} (or \Cref{prop-eta-tau}), is in obtaining the sharp tail estimates of \Cref{prop-limiting-tail-gff} below. To describe $X_\infty$ and $X'_\infty$, let $\{g_{u}, u \in \mathcal{T}_\infty \}$ be the standard Gaussian branching random walk (\abbr{brw}), on the infinite binary tree $\mathcal{T}_\infty$. That is, placing i.i.d. standard normal weights on the edges of $\mathcal{T}_\infty$, we write $g_{u}$ for the sum of the weights along the geodesic connecting $0$ to $u$. We further consider the empirically centered $g_u':=g_u - \bar g_{|u|}$, where \begin{equation}\label{barg-k} \bar g_k = 2^{-k} \sum_{u' \in V_k} g_{u'} \,, \quad k \in \mathbb{Z}_+ \,, \end{equation} denotes the average of the \abbr{brw} at level $k$, and set \begin{equation} X_k = \sum_{u \in V_k}\( c_\ast k+ g_u \) \,\,\mathrm{e}^{-c_\ast\( c_\ast k+ g_u \)} \,, \; X'_k = \sum_{u \in V_k} \( c_\ast k + g'_u \) \,\, \mathrm{e}^{-c_\ast\( c_\ast k + g'_u \)}\,. \label{dmart.1} \end{equation} It is not hard to verify that $\{X_k\}$ is a martingale, referred to as the \abbr{derivative martingale}. We then have that \begin{lemma} \label{lem-dermartconv} $X_k$, $X'_k$ and $\bar g_k$ converge a.s. to positive, finite limits $X_\infty$, $X'_\infty$ and a standard Gaussian variable $\bar g_\infty$, independent of $X'_\infty$, such that \begin{equation}\label{tilted-dermg} X_\infty = X'_\infty e^{-c_\ast \bar g_\infty} \,. \end{equation} \end{lemma} The convergence of $X_k$ to $X_\infty$ is well known, see e.g. \cite{Aidekon} (and, for its first occurrence in terms of limits in branching processes, \cite{LalleySelke}), and building on it, we easily deduce the corresponding convergence for $X'_k$. \subsection{Background and related results} Theorem \ref{theo-covertime} is closely related to the recent paper \cite{CLS}, which deals with continuous time \abbr{SRW}, and we wish to acknowledge priority to their work. The proofs however are different - while \cite{CLS} builds heavily on the isomorphism theorem of \cite{EKMRS} to relate directly the occupation time on the tree to the Gaussian free field on the tree which is nothing but the \abbr{BRW} described above Lemma \ref{lem-dermartconv}, our proof is a refinement of \cite[Theorem 1.3]{BRZ}, where the tightness of the \abbr{lhs} of \eqref{new-CT.0} is proved. 
Our proof, which is based on the strategy for proving convergence in law of the maximum of branching random walk described in \cite{BDZconvlaw}, was obtained independently of \cite{CLS}, except that in proving that Theorem \ref{thm1.2} implies Theorem \ref{theo-covertime}, we do borrow some ideas from \cite{CLS}. As motivation to our work, we note that estimates from \cite{BRZ} were instrumental in obtaining the tightness of the (centered) cover time of the two dimensional sphere by $\epsilon$-blowup of Brownian motion, see \cite{BRZ17}. We expect that the ideas in the current work will play an important role in improving the tightness result of \cite{BRZ17} to convergence in law. We defer this to forthcoming work. We next put our work in context. The study of the cover time of graphs by \abbr{SRW} has a long history. Early bounds appear in \cite{Matthews}, and a general result showing that the cover time is concentrated as soon as it is much longer than the maximal hitting time appears in \cite{Aldousgeneral}. A modern general perspective linking the cover time of graphs to Gaussian processes appears in \cite{DLP}, and was refined to sharp concentration in \cite{ding} (for many graphs including trees) and \cite{zhai} (for general graphs). See also \cite{lehec} for a different perspective on \cite{DLP}. For the cover time of trees, an exact first order asymptotic appears in \cite{Aldous}. The tightness of $\sqrt{ \mathcal{C}'_{n} }$ around an implicit constant was derived by analytic methods in \cite{BZ}, and, following the identification of the logarithmic correction in $m_n$ \cite{DingZeitouni-ASharpEstimateForCoverTimesOnBinaryTrees}, its $O(1)$ identification appears in \cite{BRZ}. We note that the evaluation of the cover time is but one of many natural questions concerning the process of points with a-typical (local) occupation time, and quite a bit of work has been devoted to this topic. We do not elaborate here and refer the reader to \cite{DPRZ1,Sznitman,AbeBiskup}. Particularly relevant to this paper is the recent \cite{Abe}. It has been recognized for quite some time that the study of the cover time of two dimensional manifolds by Brownian motion (and of the cover time of two dimensional lattices by \abbr{SRW}) is related to a hierarchical structure similar to that appearing in the study of the cover time for trees, see e.g. \cite{DemboPeresEtAl-CoverTimesforBMandRWin2D} and, for a recent perspective, \cite{Schmidt}. A similar hierarchical structure also appears in the study of extremes of the critical Gaussian free field, and in other logarithmically correlated fields appearing e.g. in the study of random matrices. We do not discuss that literature and refer instead to recent surveys offering different perspectives \cite{Arguin,Biskup,Bovier,Kistler,Zeitouni}. \subsection{Structure of the paper} In contrast with \cite{CLS} the key to our proof of \Cref{thm1.2} is the following sharp right tail for the excursion cover times. \begin{theorem}\label{prop-limiting-tail-gff} There exists a finite $\alpha_\ast>0$ such that \begin{equation} \label{righttail.1} \lim_{z\to \infty}\limsup_{n\to \infty}|z^{-1} \mathrm{e}^{c_\ast z} \mathbb{P} (\sqrt{2t^{\ast}_n} -m_n >z) - \alpha_\ast| =0\,. \end{equation} \end{theorem} After quickly dispensing of \Cref{lem-dermartconv}, in Section \ref{sec-cl} we obtain \Cref{thm1.2} out of \Cref{prop-limiting-tail-gff}, by adapting the approach of \cite{BDZconvlaw} for the convergence in law of the maximum of \abbr{brw}. 
The main difference is that here we have a more general Markov chain (and not merely a sum of i.i.d.-s). In the short \Cref{sec-ctime}, which is the only part of this work that parallels the derivation of \cite{CLS}, we deduce \Cref{theo-covertime} out of \Cref{thm1.2} The bulk of this paper is devoted to the proof of \Cref{prop-limiting-tail-gff}, which we establish in \Cref{sec-rt} by a refinement of the approach used in deriving \cite[Theorem 1.3]{BRZ}. In doing so, we defer the a-priori bounds we need on certain barrier events, which might be of some independent interest, to \Cref{sec-re}, where we derive these bounds by refining estimates from \cite{BRZ}. The proof of the main contribution to the tail estimate of \Cref{prop-limiting-tail-gff}, as stated in \Cref{prop-asymptotic-first-moment}, is further deferred to \Cref{sec-sharpbar}. There, utilizing the close relation between our Markov chain and the $0$-dimensional Bessel process, we get sharper barrier estimates, now up to $(1+o(1))$ factor of the relevant probabilities. \section{From tail to limit: \Cref{lem-dermartconv} and \Cref{thm1.2}}\label{sec-cl} \label{sec-maintheorem} We start by proving the elementary \Cref{lem-dermartconv}, denoting throughout the last common ancestor of $u,u' \in \mathcal{T}_\infty$ by $w = u \wedge u'$. Namely, $w=u(|w|)$ for $|w|=\max \{ j \ge 0,u(j)=u'(j)\}$. \begin{proof}[Proof of \Cref{lem-dermartconv}] The \abbr{brw} $\{g_{u};\,\, u \in \mathcal{T}_k \}$ of \Cref{lem-dermartconv} is the centered Gaussian random vector having \begin{equation}\label{cov-brw} \Cov(g_u,g_{u'}) = |u \wedge u'| \,. \end{equation} Further, the average of the \abbr{brw} weights on the edges of $\mathcal{T}_\infty$ between levels $(k-1)$ and $k$, is precisely $\Delta g_k := \bar g_k - \bar g_{k-1}$ for $\bar g_k$ of \eqref{barg-k}. With $(\Delta g_k, k \ge 1)$ independent centered Gaussian random variables with $\Var(\Delta g_k) = 2^{-k}$, we have that $\bar g_k$ converges a.s. to the standard Gaussian $\bar g_\infty := \sum_k \Delta g_k$. Next, recall the existence of $w_k \to \infty$ such that, a.s., \begin{equation}\label{Ak-def} A_k := \{ c_\ast k + g_u \in w_k + (0,2 c_\ast k) \,, \;\; \forall u \in V_k \}\; \mbox{\rm occurs for all $k$ large,} \end{equation} see e.g. \cite[(1.8)]{AS09}. For $X_k$ of \eqref{dmart.1}, it follows from \cite{Aidekon} that $X_k \stackrel{a.s.}{\to} X_\infty \in (0,\infty)$, while \begin{equation}\label{tildeXk} X_k > w_k \widetilde{X}_k \quad \mbox{on} \;\; A_k, \qquad \mbox{for} \qquad \widetilde X_k := \sum_{u \in V_k} e^{-c_\ast (c_\ast k + g_u)} \,. \end{equation} Thus, $\widetilde X_k \stackrel{a.s.}{\to} 0$ as $k \to \infty$. From the two expressions in \eqref{dmart.1} we have that \[ X'_k = (X_k - \bar g_k \widetilde X_k) e^{c_\ast \bar g_k} \] which thereby converges a.s. to $X'_\infty = X_\infty e^{c_\ast \bar g_\infty}$ as claimed in \eqref{tilted-dermg}. Finally, from \eqref{cov-brw} we deduce that for any $u \in V_k$, $k \ge 0$, \begin{align} \Cov(g_u, \bar g_k)& =2^{-k} \sum_{u' \in V_k} |u \wedge u'| = \sum_{j=1}^{k} (j-1) 2^{-j} + k 2^{-k} = 1-2^{-k}. \label{ct10j} \end{align} This covariance is constant over $u \in V_k$, hence $\Cov(g'_u,\bar g_{|u|}) = 0$ for $g'_u := g_u - \bar g_{|u|}$, implying the independence of $\bar g_k$ and $\{ g'_u, u \in V_k \}$. The latter variables are further independent of the \abbr{brw} edge weights outside $\mathcal{T}_k$, hence of $\bar g_\infty$. 
Thus, the random variable $X'_\infty$, which is measurable on $\sigma(g'_u, u \in \mathcal{T}_\infty)$, must also be independent of $\bar g_\infty$. \end{proof} \noindent We next normalize the counts $T^s_{u,j}$ of \eqref{Tjsdef} and define \begin{equation}\label{std-occ} \widehat T_{u}(s) := \frac{T_{u,|u|}^{s} - s}{\sqrt{2s}} \,, \qquad \widehat{S}_k(s) := 2^{-k} \sum_{u \in V_k} \widehat{T}_u(s)\,, \end{equation} and get from the \abbr{clt} for sums of i.i.d. the following relation with the \abbr{brw}. \bl \label{lem-wish} For fixed $k$ and the \abbr{brw} $\{g_{u};\,\, u \in \mathcal{T}_k \}$ of \Cref{lem-dermartconv}, we have \begin{align}\label{clt-brw} \lc \widehat{T}_{u}(s) ,\, u\in \mathcal{T}_{k}\rc & \overset{dist}{\underset{s\to \ff}{\Longrightarrow}} \lc g_u \,,\, u \in \mathcal{T}_k \rc, \\ \lc \widehat{T}_{u}(s) - \widehat{S}_k (s) ,\, u\in V_{k}\rc & \overset{dist}{\underset{s\to \ff}{\Longrightarrow}} \lc g'_u \,,\, u \in V_k \rc. \label{clt-mbrw} \end{align} \el \begin{proof} The consecutive excursions to $\rho$ by the \abbr{srw} on $\mathcal{T}_n$ are i.i.d. Hence, $s \mapsto \{T^s_{u,|u|},\; u \in \mathcal{T}_k\}$ is an $\mathbb{R}^d$-valued random walk (with $d $ the finite size of $\mathcal{T}_k$). Further, projecting the \abbr{srw} on $\mathcal{T}_n$ to the geodesic from $u$ to $\rho$, yields a symmetric \abbr{srw} on $\{-1,0,\ldots,|u|\}$. Thus, denoting by $T_j$ the number of excursions from $u(j-1)$ to $u \in V_j$ during a single excursion to $\rho$, we have that $\mathbb{P}(T_j \ge 1) = {\sf p}_j := 1/(j+1)$ (for reaching $u$ before returning to $\rho$), and $T_j$ conditional on $T_j \ge 1$, follows a geometric law of success probability $\mathbb{P}(T_j=1 |T_j \ge 1)={\sf p}_j$. Consequently, for any $j \in [0,k]$, \begin{equation} \begin{aligned} \mathbb{E} (T_j) = 1 \,, \qquad \label{ct2.6} \Var(T_{j}) &= \mathbb{E} [ T_j (T_j-1) ] = \frac{2 (1-{\sf p}_j)}{{\sf p}_j} = 2 j \,. \end{aligned} \end{equation} Note that $T^1_{u,|u|}$ and $T^1_{u',|u'|}$ are independent, conditionally on $T^1_{w,|w|}$, for $w=u \wedge u'$, each having the conditional mean $T^1_{w,|w|}$. We thus see that for any $u,u' \in \mathcal{T}_k$, in view of \eqref{ct2.6}, \begin{equation} \Cov(T^1_{u,|u|},T^1_{u',|u'|}) = \Var(T_{|u \wedge u'|}) =2 |u\wedge u'| \,. \label{ct2.8} \end{equation} Comparing with \eqref{cov-brw}, the i.i.d. increments of our $\mathbb{R}^d$-valued random walk have the mean vector ${\bf 1}$ and covariance matrix which is twice that of the \abbr{brw}, with \eqref{clt-brw} and \eqref{clt-mbrw} as immediate consequences of the multivariate \abbr{clt}. \end{proof} \noindent Using throughout the notation \begin{equation} \label{eq:defs} s_{n,y} := (m_n+y)^2/2 \end{equation} for $m_n$ of \eqref{28.00}, we have that \[ \{ \sqrt{2 t^\ast_n} - m_n \le y\} = \{ t^\ast_n \le s_{n,y} \}. \] In view of \Cref{lem-wish}, we thus see that \Cref{thm1.2} is an immediate consequence (for non-random $\tau_k(s)=s$), of \eqref{ct2.17} in the following lemma. (The additional statement employing \eqref{ct2.12} is utilized in the proof of \Cref{theo-covertime}.) \begin{proposition}\label{prop-eta-tau} Let $\mathcal{F}_k$ denote the $\sigma$-algebra of the $\mathcal{T}_k$-projection of the \abbr{srw} on $\mathcal{T}_n$. 
If $\mathcal{F}_k$-measurable $\{ \tau_k(s), s \ge 0\}$ are such that \begin{equation} \widehat{\tau}_k(s) := \Big(\frac{\tau_k(s) -s}{\sqrt{2s}}\Big) \overset{p}{\underset{s\to \ff}{\longrightarrow}} 0, \label{ct2.12b} \end{equation} then for any fixed $y \in \mathbb{R}$, \begin{equation} \label{ct2.17} \lim_{k\rightarrow\infty} \limsup_{n \to \infty} |\mathbb{P}( t^{\ast}_{ n} \le \tau_{k}(s_{n,y}) ) - \mathbb{P}(Y_\infty \le y)| = 0 \,. \end{equation} Further, replacing \eqref{ct2.12b} by \begin{equation} \widehat{\tau}_k(s) + \widehat{S}_k(s) \overset{p}{\underset{s\to \ff}{\longrightarrow}} 0 \,, \label{ct2.12} \end{equation} leads to \eqref{ct2.17} holding with $Y'_\infty$ of \eqref{ct2.25b} instead of $Y_\infty$. \end{proposition} \begin{proof} For a possibly random, $\mathcal{F}_k$-measurable $\tau$, we set \begin{equation}\label{def:T-hat} \widehat{T}_u(\tau;s) := \frac{T_{u,|u|}^{\tau} - s}{\sqrt{2s}} \,, \end{equation} in analogy to $\widehat{T}_u(s;s)=\widehat{T}(s)$ of \eqref{std-occ}. In case $\frac{\tau_k(s)}{s} \stackrel{p}{\to} 1$ as $s \to \ff$, we get from Donsker's invariance principle that \begin{equation}\label{inv-brw} \widehat{T}_u(\tau_k(s);s) - \widehat{T}_u(s;s) - \widehat{\tau}_k(s) \overset{p}{\underset{s\to \ff}{\longrightarrow}} 0\,, \qquad \forall u \in \mathcal{T}_k \,. \end{equation} Further, \corOF{ \begin{equation}\label{eq:f_s} f_s(x) := \sqrt{ 2(s+\sqrt{2s} x)} - \sqrt{2s} \overset{}{\underset{s\to \ff}{\longrightarrow}} x \,, \end{equation}} uniformly over bounded $x$. Hence, setting \begin{equation}\label{def:T-tilde} \widetilde{T}_u(\tau;s) := \sqrt{2 T_{u,|u|}^{\tau}} - \sqrt{2s} = f_s( \widehat T_u (\tau;s) ) \,, \end{equation} upon combining \Cref{lem-wish} and \eqref{inv-brw}, we deduce from \eqref{ct2.12b} that \begin{equation}\label{cnv-sqrt-tau-brw} \Big\{ \widetilde{T}_u(\tau_k(s);s) ,\, u\in V_k \Big\} \overset{dist}{\underset{s\to \ff}{\Longrightarrow}} \lc g_u \,,\, u \in V_k \rc, \end{equation} whereas under \eqref{ct2.12} we merely replace $g_u$ by $g'_u$ on the \abbr{rhs}. Proceeding under the assumption \eqref{ct2.12b}, fix $y \in \mathbb{R}$ and an integer $k \ge 1$, setting \begin{equation}\label{eq:zv-def} z_u^{(n)} := \sqrt{2 T_{u,k}^{\tau_k(s_{n,y})}} - m_{n-k} \,, \qquad z_u^{(\infty)} := c_\ast k + g_u + y,\, \quad \forall u \in V_k \,, \end{equation} with $c_\ast$ as in \eqref{28.00}. For fixed $y$ and $k$, we have, using \eqref{eq:defs}, that \[ \sqrt{2 s_{n,y}} - m_{n-k} - (c_\ast k + y) = m_n - m_{n-k} - c_\ast k = \frac{1}{c_\ast} \log (1-\frac{k}{n})_{ \stackrel{\longrightarrow}{\small{n\to\infty}}} 0\,. \] Hence from \eqref{cnv-sqrt-tau-brw} at $s=s_{n,y}$ it follows that \begin{equation}\label{zvn-conv} \{z_u^{(n)}, \; u \in V_k \} \overset{dist}{\underset{n \to \ff}{\Longrightarrow}} \{z_u^{(\infty)}, \; u \in V_k \} . \end{equation} In particular, for $X_k$ of \eqref{dmart.1} and $\widetilde{X}_k$ of \eqref{tildeXk} we have that \begin{equation}\label{Xkn-conv} X_k^{(n,y)} := \sum_{u \in V_k} z_u^{(n)} e^{-c_\ast z_u^{(n)}} \overset{dist}{\underset{n \to \ff}{\Longrightarrow}} ( X_k + y \widetilde{X}_k ) e^{-c_\ast y} \,. \end{equation} For any fixed $y \in \mathbb{R}$, we have by \eqref{Ak-def} and \eqref{zvn-conv} that \begin{equation}\label{Ank-def} \lim_{k \to \infty} \varliminf_{n \to \ff} \mathbb{P}(A^{(n)}_k) = 1 \,, \;\; A^{(n)}_k = \{ z_u^{(n)} \in w_k + y + (0,2 c_\ast k) \,, \; \forall u \in V_k \} \,. 
\end{equation} Recalling that $\widetilde{X}_k \stackrel{a.s.} \to 0$ (see the line following \eqref{tildeXk}), and the definition of $Y_\infty$ from \eqref{ct2.25}, we have in view of \eqref{Xkn-conv} that for any $\alpha_k \to \alpha_\ast$ \begin{align}\label{Yinf-conv} \mathbb{P}(Y_\infty\leq y) &= \lim_{k \to \ff} \mathbb{E} \Big[ {\bf 1}_{A_k} \exp\{-\alpha_k (X_k + y \widetilde{X}_k) \mathrm{e}^{-c_{\ast}y}\} \Big] \nn \\ & = \lim_{k \to \ff} {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} \mathbb{E} \Big[ {\bf 1}_{A^{(n)}_k} \exp\{-\alpha_k X^{(n,y)}_k \} \Big] \nn \\ & = \lim_{k \to \ff} {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} \mathbb{E} \Big[ {\bf 1}_{A^{(n)}_k} \prod_{u \in V_k} \big(1-\alpha_k z_u^{(n)} e^{-c_\ast z_u^{(n)}} \big) \Big] \,, \end{align} where ${\mathop{\underline{\overline{\lim }}}_{n\to \ff}}f(n)$ stands for bounds given by both $\limsup_{n\rightarrow\infty} f(n)$ and $\liminf_{n\rightarrow\infty} f(n)$, and in the last equality of \eqref{Yinf-conv} we relied on having \[ \delta_k:= \sup_{n} {\bf 1}_{A_k^{(n)}} \, \sup_{u \in V_k} \{ \alpha_k z_u^{(n)} e^{-c_\ast z_u^{(n)}} \}_{ \stackrel{\longrightarrow}{\small{k\to\infty}}} 0 \,, \] as well as $e^{-a}\geq 1-a \geq e^{-a(1+\delta)}$ for $a\in [0,\delta \wedge 1/2]$. For $u \in V_k$ let $V^u_n = \{ v \in V_n : v(k) = u \}$ denote the leaves of the binary sub-tree of $\mathcal{T}_n$ of depth $n-k$, emanating from $u$, with $u(k-1)$ acting as its (extra) root. The event $\{t_n^\ast \le \tau\}$ of the \abbr{srw} reaching all of $V_n$ within its first $\tau$ excursions to $\rho$ is the intersection over $u \in V_k$ of the events of reaching all of $V^u_n$ within the first $T^{\tau}_{u,k}$ excursions of the \abbr{srw} between $u(k-1)$ and $u$. By the Markov property, for $\mathcal{F}_k$-measurable $\tau$, conditionally on $\mathcal{F}_k$ the latter events are mutually independent, of conditional probabilities $\bar \ga_{n-k}(T_{u,k}^{\tau})$ for $u \in V_k$ and $\bar \ga_n(s):=\mathbb{P}(t^\ast_n \le s)$. Consequently, for $\tau=\tau_k(s_{n,y})$ we get that \begin{align} \mathbb{P}(t^{\ast}_n \le \tau |\,\mathcal{F}_k) &= \prod_{u \in V_k} \bar \ga_{n-k} (T_{u,k}^{\tau}) = \prod_{u \in V_k} \(1-\ga_{n-k}(z^{(n)}_u) \) \,, \quad \label{dmart.10} \end{align} for $\ga_n(z) := \mathbb{P} (\sqrt{2t^{\ast}_{n}} -m_n>z)$ and $z^{(n)}_u$ of \eqref{eq:zv-def}. \Cref{prop-limiting-tail-gff} and the monotonicity of $z \mapsto \ga_n(z)$ yield that for some $n_k<\infty$ and $\alpha^{(\pm)}_k \to \alpha_\ast$, \begin{equation}\label{eq:z-tail-bd} \alpha^{(-)}_k z e^{-c_\ast z} \le \ga_n(z) \le \alpha_k^{(+)} z e^{-c_\ast z} \quad \forall n \ge n_k, \; \forall z \in w_k + y + [0,2 c_\ast k] . \end{equation} Under the event $A^{(n)}_k$, which is measurable on $\mathcal{F}_k$, the latter bounds apply for all $z=z^{(n)}_u$. Hence, we get from \eqref{dmart.10} that \begin{equation*} \begin{aligned} & {\mathop{\underline{\lim }}_{n\to\ff}} \mathbb{E} \Big[ {\bf 1}_{A^{(n)}_k} \prod_{u \in V_k} \big(1- \alpha^{(+)}_k z_u^{(n)} e^{-c_\ast z_u^{(n)}} \big) \Big] \le {\mathop{\underline{\lim }}_{n\to\ff}} \mathbb{P}(t^{\ast}_n \le \tau_k(s_{n,y})\,;\, A^{(n)}_k) \\ & \le {\mathop{\overline{\lim }}_{n\to\ff}} \mathbb{P}(t^{\ast}_n \le \tau_k(s_{n,y}); A^{(n)}_k) \le {\mathop{\overline{\lim }}_{n\to\ff}} \mathbb{E} \big[ {\bf 1}_{A^{(n)}_k} \prod_{u \in V_k} \big(1-\alpha^{(-)}_k z_u^{(n)} e^{-c_\ast z_u^{(n)}}\big) \big]. 
\end{aligned} \end{equation*} We now establish \eqref{ct2.17}, by taking $k \to \infty$ while utilizing \eqref{Ank-def} and \eqref{Yinf-conv}. The same argument applies under \eqref{ct2.12}, now replacing $g_u$ by $g'_u$ in \eqref{eq:zv-def}, thereby changing $X_k$, $\widetilde X_k$ and $Y_\infty$ in \eqref{Xkn-conv} and \eqref{Yinf-conv}, to $X'_k$, $\widetilde X'_k$ and $Y'_\ff$. \end{proof} \section{Excursion counts to real time:\\ From \Cref{prop-eta-tau} to \Cref{theo-covertime}} \label{sec-ctime} \Cref{theo-covertime} amounts to showing that for any fixed $y \in \mathbb{R}$ and $\ep>0$, \begin{eqnarray} \varlimsup_{n\to\ff} \mathbb{P} (\mathcal{C}'_{n} \le 2 s_{n,y-2\ep}) \le \mathbb{P}(Y'_\infty \le y) \le \varliminf_{n\to\ff} \mathbb{P}(\mathcal{C}'_{n} \le 2 s_{n,y+2\ep} ) \,, \label{ct2.24} \end{eqnarray} where throughout $s_{n,y}:=(m_n+y)^2/2$, as in \eqref{eq:defs}. To this end, let \begin{equation} R_n^s := 2^{-n} \sum_{u \in \mathcal{T}_n} T_{u,|u|}^s \,. \label{ct2.9} \end{equation} The \abbr{srw} on $\mathcal{T}_n$ makes $2^{(n+1)} R^s_n$ steps during its first $s$ excursions from the root to itself. Thus, $\{t^{\ast}_n \le \tau\} =\{\mathcal{C}'_n \le R_n^{\tau} \}$, so for any random $t,\tau$, \begin{equation} \mathbb{P}( t^{\ast}_ n \le \tau) - \mathbb{P}( R_n^{\tau} > 2 t) \le \mathbb{P}(\mathcal{C}'_n \le 2 t )\leq \mathbb{P}( t^{\ast}_{n} \le \tau ) + \mathbb{P}( R_{n}^{\tau} < 2 t) \,. \label{ct2.20} \end{equation} Considering \eqref{ct2.20} at $t=s_{n,y \pm 2\ep}$, the insufficient concentration of $R_n^\tau$ at the non-random $\tau=s_{n,y}$ rules out establishing \eqref{ct2.24} directly from \Cref{thm1.2}. We thus follow the approach of \cite[Section 9]{CLS}, in employing instead \eqref{ct2.20} for $\tau=\tau_k(s_{n,y})$ and the $\mathcal{F}_k$-measurable \begin{equation} \tau_k(s) := \inf \{\ell \in \mathbb{Z}_+ \,|\, S_k^{\ell} \ge s\}\,, \qquad S_k^{\ell} := 2^{-k} \sum_{u \in V_k} T^\ell_{u,k} \,. \label{ct2.11} \end{equation} Recall that $\mathbb{E} [S_k^{1}]=1$ (see \eqref{ct2.6}), while setting $\bar \sigma_k^2 := \Var (\bar g_k) = 1 - 2^{-k}$ (see \eqref{ct10j}), and comparing \eqref{cov-brw} to \eqref{ct2.8}, we arrive at $\Var (S_k^{1}) = 2 \bar \sigma_k^2$. Hence, Donsker's invariance principle yields a coupling between the piece-wise linear interpolation $t \mapsto \widehat S_{s,k} (t)$ of $\{ (S_k^t - t)/\sqrt{2s};\; t \in \mathbb{Z}_+ \}$, and a standard Brownian motion $\{ W_\theta \}$, such that \begin{equation}\label{ct2.11b} \sup_{\theta \in [0,2]} \big| \widehat S_{s,k} (\theta s) - \bar \sigma_k W_\theta \big| \overset{p}{\underset{s \to \ff}{\longrightarrow}} 0 \,. \end{equation} From \eqref{ct2.11} we see that $S_k^{\tau_k(s)} - s \ge 0$ is at most the total number of excursions from $V_{k-1}$ to $V_k$ made by the \abbr{srw} started at some $v\in V_k$, before hitting the root, plus $1$. The latter has exactly the law of $2^k S_k^1$ given $S_k^1>0$. Thus, \begin{equation}\label{eq:ck-def} c_k := \sup_{s \ge 0} \{ \mathbb{E} [ S_k^{\tau_k(s)} - s ] \} \le \frac{\mathbb{E} [ S_k^{1} ]}{\mathbb{P}( S_k^1>0)} < \infty \,, \end{equation} and for $\widehat \tau_k(s)$ defined as in \eqref{ct2.12b}, one has when $s \to \infty$, that \begin{equation}\label{tauk-wlln} \widehat S_{s,k} (\tau_k(s)) + \widehat \tau_k(s) = \frac{S_k^{\tau_k(s)}-s}{\sqrt{2s}} \overset{p}{\longrightarrow} 0 \,, \qquad \theta_s : = \frac{\tau_k(s)}{s} \overset{p}{\longrightarrow} 1 \,. 
\end{equation} In particular, considering \eqref{ct2.11b} at $\theta_s$, by the continuity of $\theta \mapsto W_\theta$ \[ \widehat S_{s,k} (\theta_s s) = \bar \sigma_k W_{\theta_s} +o_p(1) = \bar \sigma_k W_1 + o_p(1) = \widehat S_{s,k} (s) + o_p(1)\,. \] Since $\widehat S_{s,k}(s)= \widehat S_k(s)$ of \eqref{std-occ}, we conclude that $\{ \tau_k(s), s \ge 0 \}$ of \eqref{ct2.11} satisfy \eqref{ct2.12}, and with $|2 s_{n,y \pm 2 \ep} - 2 s_{n,y}| \ge 4 \ep \sqrt{s_{n,y}}$ for $n$ large enough, we finish the proof of \Cref{theo-covertime} upon showing that for $s=s_{n,y}$ and any fixed $\ep >0$, \begin{equation} \lim_{k\to\ff} \varlimsup_{n \to\ff} \mathbb{P}\big( |R_n^{\tau_k(s)} - 2 s| \ge 4 \ep \sqrt{s} \big) = 0 \,. \label{ct2.22} \end{equation} To this end, recall that in view of \eqref{ct2.6} and \eqref{ct2.9} \begin{equation} r_j := \mathbb{E}(R_j^1) = 2^{-j} |\mathcal{T}_j| = 2 - 2^{-j} \,, \label{ct2.10} \end{equation} and similarly, by \eqref{ct2.8} and \eqref{ct2.9} we get that \begin{align} \sigma_n^2 := \Var(R_n^1) &= 2 \sum_{u, u' \in \mathcal{T}_n} 2^{-2n} |u\wedge u'| \le 4 \,. \label{ct2.10c} \end{align} Next, writing in short $\tau=\tau_k(s)$, we have for any $n \ge k$, the representation \[ R_n^{\tau} = 2^{k-n} R_k^{\tau} + r_{n-k} S_k^{\tau} + 2^{-k} \Delta_{k,n} (2^k S_k^{\tau}) \,, \] where the random variable $\Delta_{k,n}(\ell)$ is the centered and scaled total time spent by the \abbr{srw} on $\mathcal{T}_n$ below level $k$ during the first $\ell$ excursions from $V_{k-1}$ to $V_k$. For fixed $\ell$, $\Delta_{k,n}(\ell) \stackrel{(d)}{=} R_{n-k}^\ell -\mathbb{E}(R_{n-k}^\ell)$ and has variance $\sigma_{n-k}^2 \ell \le 4 \ell$. Further, $\Delta_{k,n} (2^k S_k^{\tau})$ conditioned on $\mathcal{F}_k$ is distributed as $R_{n-k}^\ell-\mathbb{E}(R_{n-k}^\ell)$ with $\ell=2^k S_k^{\tau}$. Recalling that $R_{k-1}^\ell = 2(R_k^\ell-S_k^\ell)$ and utilizing \eqref{eq:ck-def}, we thus get by Markov's inequality (conditional on $\mathcal{F}_k$), that \[ \delta_{k,n} := \mathbb{P}(|R_n^{\tau} - 2^{k-n-1} R_{k-1}^{\tau} - 2 S_k^{\tau}| \ge \ep \sqrt{s} ) \le \frac{4 2^{-k}}{\ep^2} \frac{ \mathbb{E} (S_k^\tau)}{s} = \frac{4 2^{-k}}{\ep^2} \big( 1 + \frac{c_k}{s} \big) \,, \] goes to zero for $s = s_{n,y} \to \infty$ followed by $k \to \infty$. Also, by the union bound, \begin{align*} \mathbb{P}\big( |R_n^{\tau} - 2 s| \ge 4 \ep \sqrt{s} \big) \le \delta_{k,n} + \mathbb{P}(S_k^\tau - s \ge \ep \sqrt{s} ) &+ \mathbb{P}(\tau \ge 2s) \\& + \mathbb{P}(2^{k-n-1} R_{k-1}^{2s} \ge \ep \sqrt{s}) \,. \end{align*} Next, employing Markov's inequality, we deduce that the last term goes to zero, since $2^{k-n} \, s \, r_{k-1}/(\ep \sqrt{s}) \to 0$ for $s=s_{n,y}$ and $n \to \infty$. By \eqref{eq:ck-def} and having \abbr{whp} $\tau = \tau_k(s) < 2s$ (see \eqref{tauk-wlln}), we thus arrive at \eqref{ct2.22} and thereby conclude the proof of \Cref{theo-covertime}. \section{Sharp right tail: auxiliary lemmas and proof of \Cref{prop-limiting-tail-gff}}\label{sec-rt} \label{sec-limittail} Hereafter we denote by $\mathbb{P}_{s}$ probabilities of events occurring up to the completion of the first $s$ excursions at the root and let $\eta_v (j):=\sqrt{2T^s_{v,j}}$ for $v \in V_k$, $j \le k$ and $T^s_{v,j}$ of \eqref{Tjsdef}, with the value of $s$ implicit. 
For $u \in V_{n'}$ where $n':=n-\ell$, and $V^u_n := \{ v \in V_n : v(n') = u \}$, let \begin{equation}\label{def:eta-sharp-u} \eta^{\sharp}_\ell (u) := \min_{v\in V^u_n} \{ \eta_{v}(n) \} \,, \end{equation} denote the minimal (normalized) occupation time of edges entering leaves of the sub-tree of depth $\ell$ rooted at $u$ (during the first $s$ excursions from the root), abbreviating $\eta^\sharp_n$ for $\eta^\sharp_n(0)$. Since $$\{t^{\ast}_{ n}> s\}= \{\min_{v \in V_n} \{T_{v,n}^{s}\}=0\},$$ \Cref{prop-limiting-tail-gff} amounts to the claim \begin{equation} \label{eqforP3.1} \alpha_\ast = \lim_{z\to \infty} z^{-1} \mathrm{e}^{c_\ast z} {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} \mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0), \end{equation} for $s_{n,z}$ and $c_\ast$ of \eqref{eq:defs} and \eqref{28.00}, respectively. Our proof of \eqref{eqforP3.1} is based on a refinement of the probability estimates of \cite[Section 5]{BRZ}, intersecting here the event $\{\eta^\sharp_n = 0\}$ with barrier events involving the (normalized) edge occupation times $\{ j \mapsto \eta_v(j), v \in \mathcal{T}_n \}$. More precisely, we adapt the strategy of \cite[Section 3]{BDZconvlaw}, by essentially bounding $\mathbb{P}_{s_{n,z}} (\eta^\sharp_n=0)$ between the expectations of counts $\Lambda_{n,\ell} \le \Gamma_{n,\ell}$ for two barrier type events, which are equivalent at the claimed scale of asymptotic growth in $z$ (see \Cref{lem-Gamma-Lambda}). Our curved barrier event for $\Gamma_{n,\ell}$ is relaxed enough to deduce that the event $\{\Gamma_{n,\ell} \ge 1\}$ is for large $n$, $\ell$, about the same as having $\{\eta_n^\sharp=0\}$ (see \Cref{lem-G-neglig}). The straight barrier event for $\Lambda_{n,\ell}$ is strict enough to yield a negligible variance (see \Cref{lem-second-moment}), so its expectation serves to lower bound $\mathbb{P}_{s_{n,z}} (\eta^\sharp_n=0)$. Our claim \eqref{eqforP3.1} then follows from such a limit for $\mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell} ]$ (which is a consequence of \Cref{prop-asymptotic-first-moment}). Specifically, for $s=s_{n,z}$ consider the excess edge occupation times, over the barrier \begin{equation}\label{eq:bar-st} \bar \varphi_n(j) := \rho_n (n-j) \,, \quad \quad j \in [0,n'] \,, \quad\quad n'=n-\ell\,. \end{equation} In the sequel we show that the main contribution to $\{\eta_n^\sharp=0\}$ is due to not covering a sub-tree rooted at some $u \in V_{n'}$ while the edge occupation times along the geodesic to $u$ exceed the barrier $\bar \varphi_n(\cdot)$ of \eqref{eq:bar-st}, with the excess at the edge into $u$ further restricted to \begin{equation}\label{dfn:I_ell} I_\ell:=\sqrt{\ell} \, [r^{-1}_\ell,r_\ell], \qquad r_\ell:=\sqrt{\log \ell}\,. \end{equation} To this end, let \begin{equation}\label{def:eta-hat} \hat \eta_v(j) := \eta_v(j) - \bar \varphi_n(j) \,, \end{equation} considering for $u \in V_{n'}$ the events \begin{align}\label{eq-big-definition-sharp} E_{n,\ell}(u) &: = \bigcap_{0 \le j \le n'} \{\hat \eta_u(j) > 0 \} \bigcap \{\hat \eta_u (n') \in I_\ell,\, \eta^\sharp_\ell (u) = 0 \} \,, \end{align} and the corresponding counts \begin{equation}\label{def:Lambda-n} \Lambda_{n,\ell} := \sum_{u \in V_{n'}}{\bf 1}_{E_{n,\ell}(u)}\,. \end{equation} \begin{figure} \caption{Depiction of the events $E_{n,\ell}(u)$ (dashed line) and $F_{n,\ell}(u)$ (dotted line) for some $u\in V_{n'}$. In either case, the red paths emanating from level $n'=n-\ell$ denote excursion counts corresponding to different children of $u$. Note the curved vs. 
straight barrier and the excursion count that reaches $0$.} \label{fig:EnlandFnl} \end{figure} See Figure \ref{fig:EnlandFnl} for a pictorial illustration of the event $E_{n,\ell}(u)$. As explained before, aiming first to upper bound $\mathbb{P}_{s_{n,z}}(\eta^\sharp_n =0)$, we fix $\delta \in (0,\frac12)$ and for $k \in [1,n]$, $h \in [0,n-k]$, consider the curved, relaxed barriers \begin{equation}\label{eq:barrier} \varphi_{n,k,h} (j):=\bar \varphi_n (j) - \psi_{k,h}(j) \,, \quad j \in [0,k] \,, \end{equation} using hereafter the notations \begin{equation}\label{psi-def} \psi_{k,h}(j):= h + j_k^\delta\,, \quad j_k:= j \wedge (k-j), \qquad j \in [0,k] \,. \end{equation} We further use the abbreviated notation \begin{equation}\label{dfn:psi-ell} \psi_{\ell}(\cdot):=\psi_{n',h_\ell}(\cdot), \quad \mbox{where} \quad h_\ell := \frac{1}{2} \log \ell \,, \end{equation} with $n'=n-\ell \ge 1$. Replacing the barriers of \eqref{eq:bar-st} by those of \eqref{eq:barrier}, we then form the larger counts \begin{equation} \Gamma_{n,\ell} = \sum_{u \in V_{n'}} {\bf 1}_{F_{n,\ell}(u)}\,, \end{equation} where in terms of \eqref{def:eta-sharp-u}, \eqref{def:eta-hat} and \eqref{eq:barrier}, we define for each $u \in V_{n'}$ \begin{align}\label{eq-big-definition} F_{n,\ell}(u) &: = \bigcap_{0 \le j \le n'} \{\hat \eta_{u}(j) + \psi_{\ell}(j) > 0 \} \bigcap \{ \eta^\sharp_\ell (u) = 0 \} \,. \end{align} See Figure \ref{fig:EnlandFnl} for a pictorial illustration of the event $F_{n,\ell}(u)$. If $\eta^\sharp_n=0$, then necessarily $\eta^\sharp_\ell(u)=0$ for some $u \in V_{n'}$ and either $F_{n,\ell}(u)$ occurs (so $\Gamma_{n,\ell} \ge 1$), or else the event $G_{n,\ell} := G_{n,n'}(h_\ell)$ must occur, where, see Figure \ref{fig:Gnl}, \begin{align}\label{eq-def-G-N-prelim} G_{n,k'} (h) &:= \bigcup_{u\in V_{k'}} \bigcup_{0 \leq j \leq k'}\{ \, \hat \eta_{u}(j) \le -\psi_{k',h} (j) \} \,. \end{align} Hence, for any $\ell$, \begin{equation}\label{eq:ubd-by-Gamma} \mathbb{E}_{s_{n,z}} [ \Gamma_{n,\ell} ] \ge \mathbb{P}_{s_{n,z}} (\Gamma_{n,\ell} \geq 1) \ge \mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0) - \mathbb{P}_{s_{n,z}}(G_{n,\ell}) \,. \end{equation} \begin{figure} \caption{ Depiction of an event from $G_{n,\ell}$ (dashed line) corresponding to some $u\in V_{n'}$, $n'=n-\ell$. Note the curved barrier. } \label{fig:Gnl} \end{figure} Recall that by \cite[proof of Corollary 5.4]{BRZ}, for some $c'>0$ and all $z \ge 1$, \begin{equation}\label{eq2.7} \varliminf_{n \to \infty} \mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0) \ge c' z \mathrm{e}^{-c_\ast z} \,, \end{equation} so our next lemma, which is an immediate consequence of \Cref{lem-a-priori2} below, shows that the right-most term in \eqref{eq:ubd-by-Gamma} is negligible. \begin{lemma}\label{lem-G-neglig} We have that \begin{equation}\label{gee-final} \lim_{\ell \to \infty} \sup_{z \ge 1} \{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} \mathbb{P}_{s_{n,z}} (G_{n,\ell}) \} = 0 \,. \end{equation} \end{lemma} Combining \eqref{eq:ubd-by-Gamma}-\eqref{gee-final}, we arrive at \begin{equation} \label{eq3.60-jay-nn} \varlimsup_{\ell \to \ff} \varlimsup_{z\to \infty} \varlimsup_{n\to \infty} \frac{\mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0)} {\mathbb{E}_{s_{n,z}}[\Gamma_{n,\ell}]}\le 1 \,. \end{equation} Restricting hereafter to $\delta \in (0,\frac{1}{6})$ allows us to further show in \Cref{zerosubsection} the following equivalence of first moments (c.f. \eqref{delta-under-16} for why we take $\delta$ small). 
\begin{lemma}\label{lem-Gamma-Lambda} For any $\delta \in (0,\frac{1}{6})$ we have that \begin{equation} \label{eq-clear240113} \lim_{\ell \to \ff} \varlimsup_{z\to \infty} \{ z^{-1} e^{c_\ast z} \varlimsup_{n\to \infty} \mathbb{E}_{s_{n,z}} [\Gamma_{n,\ell} - \Lambda_{n,\ell}] \} = 0 \,. \end{equation} \end{lemma} \noindent Now, from \eqref{eq3.60-jay-nn} and \eqref{eq-clear240113}, we have the upper bound \begin{equation} \label{eq3.60nn} \varlimsup_{\ell \to \ff} \varlimsup_{z\to \infty} \varlimsup_{n\to \infty} \frac{\mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0)} {\mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell}]}\le 1 \,. \end{equation} \noindent For such expected counts with straight barriers, we establish in \Cref{sec-3.5}, using the connection to the $0$-Bessel process, the following large $n$ and $z$ asymptotic. \begin{proposition}\label{prop-asymptotic-first-moment} There exists $\alpha_\ell >0$ such that \begin{align}\label{eqmainresultinsecondsection} \lim_{\ell \to \infty} \alpha_{\ell}^{-1} \big\{ {\mathop{\underline{\overline{\lim }}}_{z \to\ff}} z^{-1} \mathrm{e}^{c_\ast z} {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} \mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell}] \big\} = 1 \,, \end{align} where by \eqref{eq2.7} and \eqref{eq3.60nn}, $\liminf \{\alpha_{\ell}\}$ is strictly positive. \end{proposition} \noindent As shown in \Cref{firstsubsection}, the barrier event we have added in the definition \eqref{eq-big-definition-sharp} of $E_{n,\ell}(u)$ yields the following tight control on the second moment of $\Lambda_{n,\ell}$. \begin{lemma}\label{lem-second-moment} We have that \begin{equation} \label{newequation72} \lim_{\ell \to \ff} \varlimsup_{z\to \infty} \big\{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell} (\Lambda_{n,\ell}-1) ] \big\} = 0\,. \end{equation} \end{lemma} \noindent Note that $\Lambda_{n,\ell} \ge 1$ implies having $\eta_v(n)=0$ for some $v \in V_n$, that is, having $\eta^\sharp_n=0$. Hence, with $\Lambda_{n,\ell}$ integer valued, for any choice of $\ell$, \begin{align}\label{eq:2mom-lbd} \mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0) \geq \mathbb{P}_{s_{n,z}}(\Lambda_{n,\ell} \ge 1) \geq \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell}] - \mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell} (\Lambda_{n,\ell}-1)] \,. \end{align} Having a positive $\liminf \{\alpha_\ell \}$, the latter bound, together with \eqref{eqmainresultinsecondsection} and \eqref{newequation72}, imply that \begin{equation} \label{eq3.61nn} \varliminf_{\ell \to \infty} \varliminf_{z\to \infty} \varliminf_{n\to\ff} \frac{\mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0)} {\mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell}]}\ge 1 \,. \end{equation} \begin{proof}[Proof of \Cref{prop-limiting-tail-gff}] Comparing first \eqref{eq3.60nn} to \eqref{eq3.61nn} and then with \eqref{eqmainresultinsecondsection}, we conclude that \begin{equation}\label{eq:punch-line} \lim_{\ell \to \infty} \alpha_{\ell}^{-1} \big\{ {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} z^{-1} \mathrm{e}^{c_\ast z} {\mathop{\underline{\overline{\lim }}}_{n\to\ff}} \mathbb{P}_{s_{n,z}} (\eta^\sharp_n = 0) \big\} = 1 \,. \end{equation} Necessarily $\alpha_\ell \to \alpha_\ast$ for which \eqref{eqforP3.1} holds (with $\alpha_\ast>0$ in view of \eqref{eq2.7} and $\alpha_\ast<\infty$ by \cite[Proposition 5.2]{BRZ}). \end{proof} \section{Barrier bounds for excursion counts}\label{sec-re} We keep the barrier sequences of \eqref{eq:barrier} and all other related notation from \Cref{sec-rt}. 
Further, with $\rho_n \to c_\ast > 1.1$, see \eqref{28.00}, \abbr{wlog} we restrict to $n \ge n_\ast \ge 64$ with $\rho_n \ge \rho_\ast =: 1.1$, starting at the following a-priori bound on the events $G_{n,k'}(h)$ from \eqref{eq-def-G-N-prelim}. Recall the notation $s_{n,z}$ of \eqref{eq:defs}. \begin{lemma}\label{lem-a-priori2} For some $c<\infty$, any $n \ge n_\ast$, $z,k' \ge 1$ and $h \in [0,n-k']$, \begin{equation} \mathbb{P}_{s_{n,z}} (G_{n,k'}(h)) \le c (z+h) e^{ -c_\ast (z+h)} e^{-(z+h)^2/(8n)} \,. \label{gee.2} \end{equation} \end{lemma} \begin{proof}[Proof of \Cref{lem-G-neglig}] Setting $h=h_\ell= \frac{1}{2} \log \ell$ in \eqref{gee.2}, results with \begin{align} \mathbb{P}_{s_{n,z}}\big(G_{n,\ell}\big) &\le c(z+\log \ell) \ell^{-c_\ast/2} e^{ -c_\ast z}e^{-z^2/(8n)}\,, \label{gee.3} \end{align} so taking $\ell \to \infty$ establishes \eqref{gee-final}. \end{proof} Before embarking on the proof of \Cref{lem-a-priori2}, we deduce from it certain useful a-priori tail bounds on the non-covering events $\{\eta^\sharp_\ell = 0 \}$. \begin{corollary} \label{lem-prelim-taillower}\label{lem2.7}\label{cor: Upper Bound cover} For some $c<\infty$ and all $n \ge n_\star$, $z\geq 1$, \begin{equation}\label{eq-right-tail-ex} \mathbb{P}_{s_{n,z}}(\eta^{\sharp}_n =0) \le c z e^{ -c_\ast z} e^{-z^2/(8n)}\,. \end{equation} Further, for some $\hat \ell$ finite, any $\hat \ell \le \ell \leq n/\log n$ and $r \ge - h_\ell$, \begin{equation} \ga_{n,\ell}(r) := \mathbb{P}_{[(\rho_{n}\ell+r)^{2}/2]}(\eta^{\sharp}_{\ell} =0) \le c \ell^{-1} (r+\log \ell) e^{-c_\ast r} e^{-r^{2}/(8 \ell)} \,. \label{35.1} \end{equation} \end{corollary} \begin{proof} The event $\eta^\sharp_n =0$ amounts to $\eta_v(n)=0$ for some $v \in V_n$. With $\varphi_{n,n,0}(n)=0$ (see \eqref{eq:barrier}-\eqref{psi-def}), this implies that $\eta_v(n) \le \varphi_{n,n,0}(n)$ and consequently that $G_{n,n}(0)$ also occurs (take $j=k'=n$ in \eqref{eq-def-G-N-prelim}). That is $\{\eta^\sharp_n = 0\} \subseteq G_{n,n}(0)$, so the bound \eqref{eq-right-tail-ex} follows from \eqref{gee.2}. Proceeding to prove \eqref{35.1}, setting $\hat \ell := n_\star \vee \exp(2(c_\ast + 1)/(2-c_\ast))$, one easily checks that $z = r + (\log \ell - 1)/c_\ast \ge 1$ whenever $r \ge - \frac{1}{2} \log \ell$ and $\ell \ge \hat \ell$. If further $1 \le \ell \le n/\log n$, then \[ \rho_{n} \ell + r = c_\ast \ell - \frac{\ell \log n}{c_\ast n} + r \ge m_\ell + z \,, \] so \eqref{eq-right-tail-ex} at $n=\ell$ and such $z$ yields the bound \eqref{35.1}, possibly with $c \mapsto e c$. \end{proof} \noindent We recall \eqref{def:eta-hat}, \eqref{psi-def} and take throughout \begin{equation}\label{hx-def} H_x:=[x,x+1] \,. \end{equation} The key to this section are the following a-priori barrier estimates adapted from \cite[Section 5]{BRZ} (though it is advised to skip the proofs at first reading). \begin{lemma}\label{lem:barrier-est} Let $g_m(i):=i \exp(c_\ast i + i^2/m)$, $\beta_{n,k} := \frac{k}{n} \log n - \log k$ and $z_h:=z+h$. 
For some $c_1 \ge 1$, all $z \ge 1$, $k \ge 0$, $h \in [0,n-k)$ and $i \in \mathbb{Z}_+$ \begin{align}\label{eq-oferlau1-def} q_{n,k,z} & (i;h) := \mathbb{P}_{s_{n,z}} (\min_{j \le k} \{ \hat \eta_v(j) + \psi_{k,h}(j) \} \ge 0, \; \hat \eta_v(k) \in H_{i-h}) \\ \le & c_1 2^{-k} \big( \frac{z_h}{\sqrt{k_n}} \wedge k \big) \, e^{\beta_{n,k}} \, e^{- c_\ast z_h} e^{-z_h^2/(4m)} \, g_m (i+1) \,, \;\; \forall m \in [2k,n^2] \label{eq-oferlau1} \end{align} (replacing for $k=0$ the ill-defined factor $\big(\frac{z_h}{\sqrt{k_n}} \wedge k\big) e^{\beta_{n,k}}$ by $1$). Likewise, for $i, k' \in \mathbb{Z}_+$, $n' = k'+k \in (k',n)$, $m \in [2k,(n-k')^2]$ and $z \ge 0$, \begin{align}\label{eq-amirlau1} p_{n,k,z} (i) &:= \mathbb{P}(\min_{j \in (k',n']} \{ \hat \eta_{v}(j) \} \ge 0, \hat \eta_v (n') \in H_i \,|\, \hat \eta_v(k') = z) \nn \\ & \le c_1 \frac{2^{-k} e^{\beta_{n,k}}}{\sqrt{k_{n-k'}}} (z \vee 1) e^{- c_\ast z} e^{-(z \vee 1)^2/(4m)} \, g_m (i+1) \,. \end{align} The bound \eqref{eq-amirlau1} applies also to $z \in [- \rho_n k,0]$, now with $m=-4k$. \end{lemma} \begin{proof} In case $k \ge 1$, setting $a = \rho_n n - h $ and $b = \rho_n (n-k) - h $, the event considered in \eqref{eq-oferlau1-def} corresponds to \cite[(1.1)]{BRZ} for $L=k$, $C=1$, $\varepsilon=\frac{1}{2}-\delta$ and the line $f_{a,b}(j;k)$ between $(0,a)$ and $(k,b)$, taking there $y=b+i$ and $x= m_n+z \ge a$. Having $z \ge 1$ and $m_n \ge 1$ yields that $x \ge 2$. Further, with $h \in [0,n-k)$ and $\rho_n \ge \rho_\ast$, \begin{equation}\label{eq:y-lbd} y \ge b = \rho_n (n-k) - h \ge 1 + \frac{1}{10} (n-k) \end{equation} (the restriction to $y \ge \sqrt{2}$ in \cite[(1.1)]{BRZ} clearly can be replaced with $y> 1$ there, since for $y\in [1,\sqrt{2})$ one has $H_y^2/2 \cap \mathbb{Z}\subset (H_{\sqrt{2}})^2/2 \cap \mathbb{Z}$). Here $x/L$ and $y/L$ are not uniformly bounded above but following \cite[proof of (4.2)]{BRZ} and utilizing \cite[Remark 2.6]{BRZ} to suitably modify \cite[(4.16)]{BRZ}, we nevertheless arrive at the bound \begin{align}\label{eq-oferlau} q_{n,k,z}(i;h) \le c \, \frac{(1+z_h)(1+i)}{k} \sqrt{\frac{x}{ky}} \sup_{w \in H_y} \{ e^{-(x-w)^2/(2k)} \} \,. \end{align} In addition, whenever $x \ge \sqrt{2}$, $y,k \ge 1$, we have that \begin{align}\label{eq-oferlau-s} q_{n,k,z}(i;h) \le \mathbb{P}_{s_{n,z}} ( \hat \eta_v(k) \in H_{i-h}) \le c \sup_{w \in H_y} \{ e^{-(x-w)^2/(2k)} \} \end{align} (see \cite[Lemma 3.6]{BRZ17}). Next, since $x \le c_\ast n + z$, we deduce from \eqref{eq:y-lbd} that \begin{equation}\label{eq:bd-xky} \frac{x}{ky} \le \frac{20}{k_n} \Big(c_\ast + \frac{z}{n} \Big) \le \frac{c_o}{k_n} \exp\big(\frac{z^2}{12 n^2}\big) \,, \end{equation} for some constant $c_o<\infty$. Further, as $x-a=z_h$ we have from \eqref{28.00} that \begin{equation} \label{eq-latenight} x - b = c_\ast k + z_h - \varepsilon_{n,k} \,, \qquad \varepsilon_{n,k} := \frac{k \log n}{c_\ast n} \,, \end{equation} and since $\frac{c^2_\ast}{2} = \log 2$, we get, for any real $\widetilde w$, \begin{equation}\label{eq:jay-ident1} \frac{1}{2k} (x-b-\widetilde w)^2 = k \log 2 + c_\ast (z_h-\varepsilon_{n,k}-\widetilde w) + \frac{1}{2k} (z_h-\varepsilon_{n,k}-\widetilde w)^2 \,. 
\end{equation} By an elementary inequality, for any $m \ge 2k$, \begin{equation}\label{eq:bd-el-xw} (z_h - \widetilde w-\varepsilon_{n,k})^2 \ge \frac{2k}{3m} (z_h^2 - 3 \widetilde w^2 - 3 \varepsilon_{n,k}^2) \,, \end{equation} so that by \eqref{eq:jay-ident1} \begin{equation}\label{eq:bd-xbw2} \frac{1}{2k} (x-b-\widetilde w)^2 \ge k \log 2 + c_\ast \big(z_h - \varepsilon_{n,k} - \widetilde w \big) + \frac{z_h^2}{3m} - \frac{\widetilde w^2}{m} - \frac{\varepsilon_{n,k}^2}{m} \,. \end{equation} With $c_\ast \varepsilon_{n,k} - \log k = \beta_{n,k} \le 1$ for $k \in [1,n]$ (and $\varepsilon_{n,k}^2/k$ uniformly bounded), we plug into the smaller among \eqref{eq-oferlau} and \eqref{eq-oferlau-s} the bounds \eqref{eq:bd-xky} and \eqref{eq:bd-xbw2} (for $w=b+\widetilde w$ and $\widetilde w \in H_i$), to arrive at \eqref{eq-oferlau1}. By definition $q_{n,0,z}(i;h) = {\bf 1}_{z_h}(i)$, and since $g_m(z_h+1) \ge \exp(c_\ast z_h + z_h^2/(4m))$, clearly \eqref{eq-oferlau1} holds also for $k=0$ (under our convention). Turning to the proof of \eqref{eq-amirlau1}, we consider first $z > 0$, proceeding as in the proof of \eqref{eq-oferlau1} with the line $f_{a,b}(j;k)$ of length $k$ and same slope as in the preceding, now connecting $(k',a)$ to $(n',b)$, where $a = \rho_n (n-k')$ and $b = \rho_n (n-n')$. For $x=a+z$ and $y=b+i$ we have $x \ge a > b$ and $y \ge b>0$, thanks to our assumption that $n' \in (k',n)$. By the Markov property of $j \mapsto \eta_v(j)$, for such values of $(a,b,x,y)$ the \abbr{rhs} of \eqref{eq-oferlau} with $h=0$ necessarily bounds the probability $p_{n,k,z}(i)$, and thereafter one merely follows the derivation of \eqref{eq-oferlau1}, now with $h=0$ and $n-k'$ replacing $n$ when bounding $x/y$. The latter modification results in having $\sqrt{k_{n-k'}}$ in \eqref{eq-amirlau1}, instead of $\sqrt{k_n}$. Next, for $z \le 0$ we simply lower the barrier line $f_{a,b}(j;k)$ to start at $a=x=\bar \varphi_n(k') + z$, where our assumption that $z \ge -\rho_n k$ guarantees having $x \ge \bar \varphi_n(n') >0$ (thereby yielding \eqref{eq-oferlau}). Here $x/(ky) \le c_o/k_{n-k'}$, and $z_h =z \le 0$ allows us to replace the \abbr{rhs} of \eqref{eq:bd-el-xw} by $\frac{\widetilde w^2}{2} - \varepsilon_{n,k}^2$, yielding the stated form of \eqref{eq-amirlau1}. \end{proof} We conclude this sub-section by adapting the bounds of \Cref{lem:barrier-est} to the form needed when proving \Cref{lem-second-moment} and \Cref{lem-Gamma-Lambda}. \begin{lemma}\label{lem-apriori-c} There exists a constant $c_2<\infty$ satisfying the following. Fix $\hat \ell$ as in \Cref{lem2.7} and $I_\ell$ as in \eqref{dfn:I_ell}. For any $z \ge 0$, $\ell \in [\hat \ell, \frac{n}{\log n}]$ and $k\geq 1$, setting $k'=n'-k$ with $n'$ as in \eqref{eq:bar-st}, \begin{align}\label{eq:bd-theta} \theta_{n,k,\ell} (z) &:= \mathbb{P} (\min_{j \in (k',n']} \{ \hat \eta_{v}(j) \} \ge 0, \, \hat \eta_v (n') \in I_\ell,\ \eta^\sharp_\ell(v) = 0 \,|\, \hat \eta_v(k') = z) \nn \\ & \le c_2 \, 2^{-k} e^{\beta_{n,k}} (1 \vee \sqrt{\ell/k}) (1+z) e^{- c_\ast z} e^{-z^2/(8(k \vee 8\ell))} \end{align} If in addition $z \ge 4 h_\ell$, $n \ge 3 \ell$, then for any $r \ge 0$, \begin{align}\label{eq:bd-phi} {\bf p}_{n,k,z}(r) &:= \mathbb{P}_{s_{n,z}}(\min_{j \le n'} \{\hat \eta_v (j) + \psi_{\ell}(j) {\bf 1}_{j \le k} \} > 0 \,, \; \hat \eta_{v} (k) \le 0, \, \; \hat \eta_{v} (n') \in H_r ) \nn \\ &\le c_2 \, 2^{-n'} \frac{\psi_\ell(k)^3}{k_{n'}^{3/2} \ell^{1/2}} e^{-z^2/(16k)} \, z (r+1) e^{-c_\ast (z- r)} e^{-\frac{r^2}{4(n'-k)}}\,. 
\end{align} \end{lemma} \begin{proof} Starting with \eqref{eq:bd-theta}, by the Markov property of $\eta_v(j)$ at $j=n'$ and monotonicity of $y \mapsto \ga_{n,\ell}(y)$ of \eqref{35.1}, we have in terms of $p_{n,k,z}(\cdot)$ of \eqref{eq-amirlau1} \[ \theta_{n,k,\ell}(z) \le \sum_{r \in I_\ell} p_{n,k,z}(r) \ga_{n,\ell}(r) \,. \] Plugging the bounds of \eqref{35.1} and \eqref{eq-amirlau1} (at $m=2 (k \vee 8 \ell)$), yields for $c'_1$ finite \[ \theta_{n,k,\ell}(z) \le c'_1 \frac{2^{-k} e^{\beta_{n,k}}\sqrt{\ell}}{\sqrt{k_{n-k'}}} (1+z) e^{- c_\ast z} e^{-z^2/(8(k \vee 8 \ell))} \, \sum_{r \in I_\ell} \frac{r^2}{\ell^{3/2}} e^{-r^{2}/(16 \ell)} \,. \] With the latter sum uniformly bounded and $k_{n-k'}=k \wedge \ell$, we arrive at \eqref{eq:bd-theta}. Next, turning to establish \eqref{eq:bd-phi}, note first that \begin{equation}\label{eq:psi-pre-dom} j_{k'}^\bb\leq k_{k'}^\bb+j_k^\bb \,, \qquad\qquad \forall \; 0 \le j \le k < k' \end{equation} when $\bb=1$ and consequently also for all $\bb \in [0,1]$. For $\bb=\delta$, it results with \[ \psi_{k',h}(j) \le \psi_{k,h'}(j) \,, \quad \mbox{ for } \quad h'=\psi_{k',h}(k), \quad j \in [0,k] , \] with equality at $j=k$. In particular, recalling \eqref{dfn:psi-ell} and considering $k'=n'$, we have that \begin{equation}\label{eq:psi-dom} \psi_\ell(j) \le \psi_{k,h}(j) \quad \mbox{ for } \quad h = \psi_\ell(k), \;\; j \in [0,k] \,. \end{equation} Employing \eqref{eq:psi-dom} to enlarge the event whose probability is ${\bf p}_{n,k,z}(i)$, we get by the Markov property of $\eta_v(j)$ at $j=k$, in terms of $q_{n,k,z}(\cdot;\cdot)$, $p_{n,k,z}(\cdot)$ and $h=\psi_\ell(k)$, that \begin{align}\label{eq:varphi-def} {\bf p}_{n,k,z}(r) \le & \sum_{i=1}^{h} q_{n,k,z} (h-i;h) \sup_{z' \in H_{-i}} \{ p_{n,k',z'}(r) \} \,. \end{align} Substituting first our bound \eqref{eq-amirlau1} at $m=-4k'$, and then \eqref{eq-oferlau1} at $m=2k$, yields that for some $c'_1 $ finite, \begin{align}\label{eq:phi-bd2} & {\bf p}_{n,k,z}(r) \le c_1 \sum_{i=1}^{h} q_{n,k,z} (h-i;h) 2^{-k'} e^{\beta_{n,k'}} (k'_{k'+\ell})^{-1/2} e^{c_\ast (r+i)} (r+1) e^{-\frac{r^2}{4k'}} \nn \\ &\le c'_1 \, h \, 2^{-n'} \frac{e^{\beta_{n,k} + \beta_{n,k'}}} {\sqrt{k'_{k'+\ell} k_n}} z_h e^{-c_\ast z_h} e^{-z^2/(8k)} g_{2k}(h) (r+1) e^{c_\ast r} e^{-r^2/(4k')}\,. \end{align} With $\delta \le \frac{1}{2}$ and $z \ge 4 h_\ell$, it follows that \[ \frac{h^2}{2k} \le \frac{h_\ell^2 + k_{n'}^{2\delta}}{k} \le \frac{z^2}{16k} + 1 \,. \] Further, recall that $k+k'=n'$ and $\beta_{n,n'} \le 1$, hence \begin{equation}\label{eq:sum-beta} e^{\beta_{n,k}+\beta_{n,k'}} = e^{\beta_{n,n'}} \frac{n'}{k k'} \le \frac{2 e}{k_{n'}} \,. \end{equation} Our assumption $n \ge 3 \ell$ results with $k \vee k' \ge \ell$ and thereby $k'_{k'+\ell} \, k_n \ge \ell \, k_{n'}$. Applying the preceding within \eqref{eq:phi-bd2}, we arrive at \eqref{eq:bd-phi} . \end{proof} \subsection{Negligible crossings: Proof of \Cref{lem-a-priori2}} Fixing $n,k',h$ as in \Cref{lem-a-priori2}, consider for $u \in V_{k'}$, the first time \[ \tau_u := \min\{ j \ge 0 : \eta_u (j) \le \varphi_{n,k',h} (j) \} \] that the process $j \mapsto \eta_u (j)$ reaches the relevant barrier of \eqref{eq:barrier}, see Figure \ref{fig:Gnl}. For $z \ge 1$ we have that $\tau_u \ge 1$, since $\eta_{u}(0)=\sqrt{2s_{n,z}} = m_n + z > \varphi_{n,k',h}(0)$ under $\mathbb{P}_{s_{n,z}}$. 
Decomposing $G_{n,k'}(h)$ according to the possible values of $\{\tau_u\}$ yields \begin{align} \mathbb{P}_{s_{n,z}}(G_{n,k'}(h)) &\le \sum_{k=1}^{k'} \underset{({\rm I}_k)}{\underbrace{ \mathbb{P}_{s_{n,z}} \Big(\exists u\in V_{k'} \; \text{ such that } \;\tau_{u} = k \Big) }} \,. \label{eq: upper bound split} \end{align} The event $\{\tau_u=k\}$ depends only on the value of $u(k) \in V_k$. Hence, by the union bound we have that for any fixed $u \in V_{k'}$ \begin{equation} ({\rm I}_k) \le 2^{k}\, \mathbb{P}_{s_{n,z}} (\tau_u=k) . \label{eq: sum over levels2} \end{equation} With $\rho_n > 1 > \delta$, it is easy to verify that $j \mapsto \varphi_{n,k',h}(j)$ is strictly decreasing with $\varphi_{n,k',h}(k')=\rho_n (n-k') - h \ge 0$. Further, for $b:=\varphi_{n,k',h}(k)$, $\widetilde{b}:=\varphi_{n,k',h}(k+1)$, we get upon conditioning on $\eta_u(k)=y$, that \begin{align} & \mathbb{P}_{s_{n,z}} (\tau_u=k+1) \le \nn \\ & \qquad \sum_{i=0}^\infty \mathbb{P}_{s_{n,z}} ( \tau_u > k, \eta_{u} (k) \in H_{b+i} ) \sup_{y \in H_{b+i}} \mathbb{P}_{\frac{y^2}{2}} (\eta_u (1) \le \widetilde{b}) \,. \label{eq: cond on height2} \end{align} Applying \cite[Lemma 4.6]{BK} at $p=q=1/2$ and $\theta=\widetilde{b}^2/2 \le y^2/2$ yields \begin{equation} \label{eq-I5.6} \sup_{y \in H_{b+i}} \mathbb{P}_{\frac{y^2}{2}} (\eta_u (1) \le \widetilde{b} ) \leq e^{-i^2/4}. \end{equation} Setting $h'=\psi_{k',h}(k)$, we proceed to bound the first probability on the \abbr{rhs} of \eqref{eq: cond on height2}. To this end, recall \eqref{eq:psi-pre-dom}, yielding that $\varphi_{n,k,h'}(j) \le \varphi_{n,k',h}(j)$ for $j \in [0,k]$, with equality at $j=k$ (see \eqref{eq:barrier}--\eqref{psi-def}). Consequently, \begin{equation} \mathbb{P}_{s_{n,z}} (\tau_{u} > k, \eta_{u} (k) \in H_{b+i}) \leq q_{n,k,z} (i;h'), \end{equation} for $q_{n,k,z}(\cdot;\cdot)$ of \Cref{lem:barrier-est}. Since $n-k' \ge h$ and $\delta<1$, for any $k < k'$, \[ n-k - h' \ge k'-k - (k'-k)^\delta > 0 \,, \] in which case, by \eqref{eq-oferlau1}, we have that for any $i \in \mathbb{Z}_+$ \begin{equation}\label{eq:oferlau2} q_{n,k,z}(i;h') \le c_1 2^{-k} z_{h'} e^{- c_\ast z_{h'}} e^{ -z_{h'}^2/(8n)} \, g_{2n} (i+1) \,. \end{equation} Noting that $\sup_{n \ge 3} \{ g_{2n}(i+1) \} e^{-i^2/4}$ is summable (and $z_{h'} \ge z_h$), we find upon combining \eqref{eq: sum over levels2}--\eqref{eq:oferlau2}, that for some $c_3$ finite and any $1 \le k < k'$, \begin{align}\label{eq:final-ofla} ({\rm I}_{k+1}) &\le 2^{k+1} \sum_{i=0}^\infty q_{n,k,z}(i;h') e^{-i^2/4} \nn \\ &\le c_3 (z_h+k_{k'}^\delta) e^{- c_\ast (z_h + k_{k'}^{\delta})} e^{-z_h^2/(8n)} \,. \end{align} Further, with $\varphi_{n,k',h}(1) \le m_n-h$ we have similarly to \eqref{eq: sum over levels2}--\eqref{eq-I5.6} that \[ ({\rm I}_{1}) \le 2 \mathbb{P}_{s_{n,z}} (\tau_u=1) \le 2 \mathbb{P}_{\frac{x^2}{2}} (\eta_{u} (1) \le m_n-h) \le 2 e^{-z_h^2/4} \,, \] which is further bounded for $z_h \ge z \ge 1$ by the \abbr{rhs} of \eqref{eq:final-ofla} at $k=0$ (possibly increasing the universal constant $c_3$). Summing over $k \le k'$, it follows from \eqref{eq: upper bound split} and \eqref{eq:final-ofla} that for some universal $c_4<\infty$, \begin{align*} \mathbb{P}_{s_{n,z}} (G_{n,k'}(h)) &\le c_3 e^{-c_\ast z_h} e^{-z_h^2/(8n)} \sum_{k=0}^{k'} (z_h+k_{k'}^{\delta}) e^{-c_\ast k_{k'}^{\delta}} \\ &\le c_4 z_h e^{ -c_{\ast} z_h} e^{-z_h^2/(8n)} \,, \end{align*} as claimed in \eqref{gee.2}.
\subsection{Comparing barriers: Proof of \Cref{lem-Gamma-Lambda}} \label{zerosubsection} Hereafter, let $\nu_{n,k,z}(\cdot)$ denote the finite measure on $[0,\infty)$ such that \begin{equation}\label{dfn:nu-z} \nu_{n,k,z}(A) := 2^k \mathbb{P}_{s_{n,z}}(\min_{j \le k} \{\hat \eta_v (j)\} > 0 \,, \; \hat \eta_{v} (k) \in A) \,, \end{equation} using the abbreviation $\nu_{n,z}=\nu_{n,n',z}$ (where $n'=n-\ell$). In view of \eqref{eq-big-definition-sharp} and the Markov property of $\{\eta_v(j)\}$ at $j=n'$ we have that \begin{align} \mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell}] = 2^{n'} \mathbb{P}_{s_{n,z}}(E_{n,\ell}(v)) = \int_{I_\ell} \ga_{n,\ell} ( y) \nu_{n,z}(dy) \,. \label{equpperbd} \end{align} for $\ga_{n,\ell}(\cdot)$ of \eqref{35.1}. Similarly, setting the finite measure on $[-h_\ell,\infty)$ \begin{equation}\label{dfn:mu-z} \mu_{n,z}(A) := 2^{n'} \mathbb{P}_{s_{n,z}}(\min_{j \le n'} \{\hat \eta_v (j) + \psi_{\ell}(j) \} > 0 \,, \; \hat \eta_{v} (n') \in A) \,, \end{equation} we have by the Markov property and \eqref{eq-big-definition} that \begin{align} \mathbb{E}_{s_{n,z}}[\Gamma_{n,\ell}] = 2^{n'} \mathbb{P}_{s_{n,z}}(F_{n,\ell}(v)) = \int_{-h_\ell}^\infty \ga_{n,\ell} ( y) \mu_{n,z}(dy) \,. \label{Markov-Gamma} \end{align} For $r \ge 0$, recalling $H_r=[r,r+1]$, we decompose $\mu_{n,z}(H_r)-\nu_{n,z}(H_r)$ according to the possible values of $\tau := \max \{ j < n' : \hat \eta_v(j) \le 0 \}$, to arrive at \begin{align}\label{eq:def-mu-nu} \mu_{n,z}(H_r) - \nu_{n,z}(H_r) \le 2^{n'} \sum_{k=1}^{n'-1} {\bf p}_{n,k,z}(r) \,, \end{align} for ${\bf p}_{n,k,z}(r)$ of \Cref{lem-apriori-c}. By \eqref{equpperbd}, \eqref{Markov-Gamma}, \eqref{eq:def-mu-nu} and the monotonicity of $y \mapsto \ga_{n,\ell}(y)$ we have that \begin{align}\label{amir-decmp} \mathbb{E}_{s_{n,z}}[\Gamma_{n,\ell} - \Lambda_{n,\ell}] \le& \sum_{r \notin I_\ell} \ga_{n,\ell}(r) \mu_{n,z}(H_r) + \sum_{r \in I_\ell} \ga_{n,\ell}(r) \sum_{k=1}^{n'-1} 2^{n'} {\bf p}_{n,k,z} (r) \nn \\ & := \; {\sf I}_n (z,\ell) \; + \; {\sf II}_n (z,\ell) \,. \end{align} Dealing first with ${\sf I}_n(z,\ell)$ of \eqref{amir-decmp}, note that $\mu_{n,z}(H_r)=2^{n'} q_{n,n'}(r+h;h)$ for $q_{n,k}(i;h)$ of \Cref{lem:barrier-est} and $h=h_\ell$. Combining \eqref{35.1} with \eqref{eq-oferlau1} at $k=n'$ (where $k_n=\ell$), and having $z+h_\ell \le 2z$ (as $z \to \infty$ before $\ell \to \infty$), yields for some $c_5$ finite, any $\ell \ge \hat \ell$, large $n$ and all $r \ge -h_\ell$ \begin{equation}\label{mun-bd} \ga_{n,\ell}(r) \mu_{n,z} (H_r) \leq c_5 \, z e^{-c_\ast z} \frac{(r+2h_\ell)^2}{\ell^{3/2}} e^{-r^2/(8 \ell)} e^{(r+h_\ell)^2/n} \,. \end{equation} Substituting \eqref{mun-bd} in \eqref{amir-decmp} and taking $n \to \infty$ results with \begin{equation}\label{eq:neglig-In} \varlimsup_{z \to \infty} \big\{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} {\sf I}_n(z,\ell) \big\} \le \varepsilon_{\sf I}(\ell) \,, \end{equation} where by our choice \eqref{dfn:I_ell} of $I_\ell$, for any $\ell \to \infty$ \[ \varepsilon_{\sf I}(\ell) := c_5 \sum_{r \not\in I_\ell} \frac{(r+2 h_\ell)^2}{\ell^{3/2}} e^{-r^2/(8 \ell)} \longrightarrow 0 \,. 
\] In view of \eqref{35.1}, it suffices to show that for some $\varepsilon_{\sf II}(\ell) \to 0$ and all $r \in I_\ell$, \begin{equation}\label{eq:neglig-IIn} \varlimsup_{z \to \infty} \big\{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} \sum_{k=1}^{n'-1} 2^{n'} {\bf p}_{n,k,z} (r) \big\} \le \frac{\varepsilon_{\sf II}(\ell)}{\sqrt{\ell}} \, (r+1) e^{c_\ast r} \,, \end{equation} in order to get the analog of \eqref{eq:neglig-In} for ${\sf II}_n(z,\ell)$ and thereby complete the proof of the lemma. In view of \eqref{eq:bd-phi} we get \eqref{eq:neglig-IIn} upon showing that \begin{equation}\label{eq:neglig-IIn-final} \lim_{z \to \infty} \sup_{r \in I_\ell} \, \varlimsup_{n \to \infty} \sum_{k=1}^{n'-1} \psi_\ell(k)^3 k_{n'}^{-3/2} \exp\big(-\frac{z^2}{16k}-\frac{r^2}{4(n'-k)}\big) = 0 \,. \end{equation} Even without the exponential factor, since $\delta<\frac{1}{6}$ the sum in \eqref{eq:neglig-IIn-final} over $\{ k : k_{n'}^\delta \ge h_\ell \}$, where $\psi_{\ell}(k) \le 2 k_{n'}^\delta$, is bounded above by \begin{equation}\label{delta-under-16} 4 \sum_{k \ge h_\ell^{1/\delta}} k^{3\delta - 3/2} \le c\, h_\ell^{3-\frac{1}{2 \delta}} \underset{\ell \to \infty}{\longrightarrow} 0 \,. \end{equation} Further, the sum in \eqref{eq:neglig-IIn-final} over $\{ k : k_{n'}^\delta < h_\ell\}$ has $2 h_\ell^{1/\delta}$ terms, which are uniformly bounded by $(2 h_\ell)^3\exp\big(-b_\ell/(4 h_\ell^{1/\delta})\big)$, where having \begin{equation}\label{dfn:b-ell} b_\ell := \frac{z^2}{4} \wedge \inf_{r \in I_\ell} \{ r^2 \} = \frac{\ell}{\log \ell} \,, \end{equation} makes that sum also negligible, as claimed in \eqref{eq:neglig-IIn-final}. { $\square$ } \subsection{Second moment: proof of \Cref{lem-second-moment}} \label{firstsubsection} In view of \eqref{def:Lambda-n} we have that \[ \mathbb{E}_s [\Lambda_{n,\ell} (\Lambda_{n,\ell}-1)] = \sum_{\stackrel{ u,v \in V_{n'}}{u \ne v}} \mathbb{P}_s (E_{n,\ell}(u) \cap E_{n,\ell}(v) ) \,. \] We recall the definition \eqref{eq-big-definition-sharp} of $E_{n,\ell}(\cdot)$ and split the preceding sum according to the values of $k'=|u \wedge v| < n' $ and $\hat \eta_v(k') > 0$. Specifically, having $2^{n'+k-1}$ such ordered pairs (for $k=n'-k'$) yields the bound \begin{align} \label{newequation73} \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell} (\Lambda_{n,\ell}-1)] &\le \sum_{k=1}^{n'} 2^{2k} \int_0^\infty \theta^2_{n,k,\ell} (y) \nu_{n,k',z}(dy) =: \sum_{k=1}^{n'} {\sf J_{k}} \end{align} in terms of $\theta_{n,k,\ell}(\cdot)$ and $\nu_{n,k',z}(\cdot)$ of \eqref{eq:bd-theta} and \eqref{dfn:nu-z}, respectively. Further, the Markov property of $\eta_v(j)$ at $j=k'$ yields in terms of $q_{n,k',z}(\cdot;0)$ of \eqref{eq-oferlau1-def} \[ {\sf J_k} \le \sum_{i=0}^\infty 2^{k'} q_{n,k',z}(i;0) \big[ 2^k \sup_{y \in H_i} \theta_{n,k,\ell}(y) \big]^2 \,.
\] Plugging into the preceding the bounds \eqref{eq:bd-theta} and \eqref{eq-oferlau1} (at $m=64 n \le n^2$), we find that for some $c_6<\infty$, any $k \in (0,n')$, $z \ge 1$ and $n \ge n_0(\ell)$ as in \Cref{lem-apriori-c}, \begin{equation}\label{eq:bd-Jk} {\sf J_k} \le c_6 e^{\beta_{n,k'}+2\beta_{n,k}} (1 \vee \ell/k) \frac{z}{\sqrt{k'_n} \wedge k'} e^{- c_\ast z} \, \sum_{i=0}^\infty (i+1)^3 e^{-c_\ast i} \,. \end{equation} Using \eqref{eq:sum-beta}, $\beta_{n,k} \le 1$ and $k'_n \ge k_{n'}$, we get from \eqref{eq:bd-Jk} that for some $c_7<\infty$, \begin{equation}\label{eq:bd-Jk-main} {\sf J_k} \le \left\{ \begin{array}{l} c_7 \, z \, e^{-c_\ast z} \, k_{n'}^{-3/2}, \qquad \quad k_{n'} \ge \ell \,, \\ c_7 \, e^{-c_\ast z} \,, \qquad \qquad \qquad k' < \ell \,, \\ c_7\, z \, e^{-c_\ast z}\, \sqrt{\ell} k^{-3}, \quad \quad \; \; k < \ell \,, \end{array} \right. \end{equation} where for $k < \ell$ we used the alternative bounds $\beta_{n,k} \le 1 - \log k$ and $k'_n \ge \ell$. Now, \eqref{eq:bd-Jk-main} implies that for all $n$, \[ z^{-1} e^{c_\ast z} \sum_{k= \ell^{1/3}}^{n'} {\sf J}_k \le c_7 \, [ \, \sqrt{\ell} \sum_{k = \ell^{1/3}}^{\ell-1} k^{-3} + \sum_{k=\ell}^{n'-\ell} k_{n'}^{-3/2} + \sum_{k=n'-\ell}^{n'} z^{-1} \, ] \le \delta_{\ell,z} \] for some $\delta_{\ell,z} \to 0$ when $z \to \infty$ followed by $\ell \to \infty$. Turning to control the remaining sum of ${\sf J}_k$ over $k < \ell^{1/3}$, note that by \eqref{dfn:b-ell}, upon comparing \eqref{35.1} and \eqref{eq:bd-theta} we find that for some $\varepsilon(\ell) \to 0$ as $\ell \to \infty$ and any $n \ge n_0(\ell)$, \[ \sum_{k=1}^{\ell^{1/3}} 2^k \sup_{y \ge 0} \{ \theta_{n,k,\ell}(y) \} \le 4^{\ell^{1/3}} \sup_{r \in I_\ell} \{ \ga_{n,\ell}(r) \} \le \varepsilon(\ell) \,. \] To complete the proof of \Cref{lem-second-moment}, recall from \eqref{eq-big-definition-sharp} and \eqref{dfn:nu-z} that for $k'=n'-k$ \begin{equation}\label{theta-first-mom} 2^k \int_0^\infty \theta_{n,k,\ell} (y) \nu_{n,k',z}(dy) = \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell}] \,. \end{equation} Hence, we have on the \abbr{rhs} of \eqref{newequation73} that \[ \varlimsup_{z \to \infty} \big\{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} \sum_{k=1}^{\ell^{1/3}} {\sf J}_k \big\} \le \varepsilon(\ell) \varlimsup_{z \to \infty} \big\{ z^{-1} e^{c_\ast z} \varlimsup_{n \to \infty} \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell}] \big\} \,, \] which, together with \eqref{eq:2mom-lbd} and \eqref{eq-right-tail-ex}, implies that for all $\ell$ large enough the \abbr{rhs} is at most $2 c \varepsilon(\ell)$ (i.e. negligible, as claimed). { $\square$ } \section{The Bessel process: proof of \Cref{prop-asymptotic-first-moment}}\label{sec-sharpbar} Hereafter, set $\la_\ell (y) := \frac{1}{2} (c_\ast \ell +y)^2$, $y\geq -c_\ast \ell$, with \begin{equation} \begin{aligned} \widehat \ga_{\ell} (y) & := \mathbb{P}_{[\la_\ell(y)]} (\eta^{\sharp}_{\ell}=0, \; \eta(0) \in c_\ast \ell + I_\ell) \,, \\ \widetilde \ga_{\ell}(y) & := \mathbb{E}^\xi \big\{ \mathbb{P}_{\xi(\la_\ell(y))} (\eta^{\sharp}_{\ell}=0, \; \eta(0) \in c_\ast \ell + I_\ell) \big\} \,, \end{aligned} \label{3a.49} \end{equation} which are, respectively, $\ga_{\infty,\ell}(\cdot)$ from \eqref{35.1}, restricted to $I_\ell$ of \eqref{dfn:I_ell}, and its regularization by an expectation, denoted $\mathbb{E}^\xi$, over the independent Poisson($\la$) variable $\xi(\la)$ at $\la=\la_\ell(y)$. We emphasize that the law of $\xi$ depends on $\ell$ but we suppress this from the notation.
We follow this convention of suppressing dependence in $\ell,n$ in many places throughout this section, e.g. in the definitions \eqref{dfn:B-kappa}, \eqref{eq-Ak}, \eqref{dfn-D-psi-T}, \eqref{dfn:Fpm_T} and \eqref{eq-evening2-T} below. Our goal here is to prove \Cref{prop-asymptotic-first-moment}, with \begin{equation} \alpha_\ell :=\frac{1}{\sqrt{\pi \ell}} \int_0^\infty y e^{c_{\ast} y} \, \widetilde \ga_{\ell}(y) \,d y\,. \label{3a.54m} \end{equation} In particular $\al_\ell <\infty$, since by standard Poisson tail estimates for some $c<\infty$, \begin{equation}\label{bd-Pois-for-ofer} \widetilde \ga_\ell(y) \le \mathbb{P}(\sqrt{2\xi(\la_\ell(y))} - c_\ast \ell \in I_\ell) \le \exp\big\{-c \, {\rm dist}(y,I_\ell)^2 \big\} \end{equation} (see \cite[(3.8)]{BRZ}). Omitting hereafter from the notation the (irrelevant) specific choice $v \in V_{n'}$, we recall from \eqref{def:eta-hat}, \eqref{dfn:nu-z} and \eqref{equpperbd} that \begin{align}\label{3a.48} 2^{-n'} \mathbb{E}_{s_{n,z}}[\Lambda_{n,\ell}] & = \mathbb{E}_{s_{n,z}} \Big( \widehat \ga_{\ell}\big(\eta(n') -c_{\ast}\ell \big); \min_{j < n'} \{\hat \eta (j) \} > 0 \Big). \end{align} The first step towards \Cref{prop-asymptotic-first-moment} is our next lemma, utilizing the Markov structure from \cite[Lemma 3.1]{BRZ} to estimate the barrier probabilities on the \abbr{rhs} of \eqref{3a.48} via the law $\mathbb{P}_x^Y$ of a $0$-dimensional Bessel process $\{Y_t\}$, starting at $Y_1=x$. To this end, define for $\kappa \in \mathbb{R}$ the events \begin{equation}\label{dfn:B-kappa} B_\kappa :=\bigcap_{j=1}^{n'} \left\{ Y_j > \bar\varphi_n (j) + \kappa \psi_\ell(j) \right\}, \end{equation} in terms of the barrier notations \eqref{eq:bar-st}, \eqref{psi-def}, \eqref{dfn:psi-ell}, and associate to each $[0,1]$-valued $g(\cdot)$, the function \begin{equation}\label{dfn:wt-g} \widetilde g(w) := \mathbb{E}^\xi \big[ g\big(\sqrt{2\xi(w^2/2)}\big) \big] \,, \end{equation} so in particular $g(w)=\widehat \ga_\ell(w-c_\ast \ell)$ yields $\widetilde g(w)=\widetilde \gamma_\ell(w - c_\ast \ell)$. \begin{lemma}\label{theo-sbarrierd} There exist $U_s \overset{dist}{\Longrightarrow} U_\infty$, \corOF{a centered Gaussian of variance $1/2$}, and $\varepsilon_\ell \to 0$ as $\ell \to \infty$, such that for $s=s_{n,z}$, any $[0,1]$-valued $g(\cdot)$ supported on $[\bar \varphi_n(n'),\infty)$ and $z \ge \ell$, \begin{align}\label{18.01d} (1- \varepsilon_\ell)^{-1} \mathbb{E}^{Y,s} \big( \; \widetilde g(Y_{n'}) ; B_{-1} \big) & \ge \mathbb{E}_{s} \Big( g(\eta(n')) ; \min_{j < n'} \{\widehat \eta(j)\} > 0 \Big) \\ & \ge (1-\varepsilon_\ell) \mathbb{E}^{Y,s} \big( \widetilde g (Y_{n'}); B_2 \big) , \label{18.02d} \end{align} where $\mathbb{E}^{Y,s}$ denotes expectation with respect to a $0$-dimensional Bessel process starting at $Y_1= U_s + \sqrt{2s}$. Further, for some $\delta>0$, \begin{equation}\label{eq-U-sub-gaussian} \sup_s \, \{\, \mathbb{E} [ e^{\delta U^2_s} ]\, \} \, < \infty \,. \end{equation} \end{lemma} \begin{proof} Recall from \cite[Lemma 3.1]{BRZ} the time in-homogeneous Markov chain \[ (\eta(0)=\corOF{\sqrt{2s}},Y_1,\eta(1),\cdots,Y_{n'},\eta(n'),\cdots) , \] of law $\mathbb{Q}_1^s$. From \cite[Lemma 3.1(a)]{BRZ} we have that $\mathbb{Q}[g(\eta(j))|Y_j]=\widetilde g(Y_j)$ of \eqref{dfn:wt-g}, and that $Y_1=\sqrt{2 \mathcal{L}_1(s)}$ for a $\Gamma(s,1)$-random variable $\mathcal{L}_1(s)$. 
Set $U_s := \sqrt{2 \mathcal{L}_1(s)} - \sqrt{2s}$ and note that by \cite[Lemma 3.1(d,e)]{BRZ}, the random variables $\{\eta(j), j \ge 0\}$ and $\{Y_j, j \ge 1\}$ have, respectively, the marginal laws $\mathbb{P}_{s}$ and $\mathbb{P}^{Y,s}$. Standard large deviations for Gamma variables yield \eqref{eq-U-sub-gaussian} with $\delta<1/2$ (cf. \cite[(3.13)]{BRZ}). \corOF{Recall that $(\mathcal{L}_1(s)-s)/\sqrt{2s} \overset{dist}{\Longrightarrow} U_\infty$ when $s \to \infty$ (by the \abbr{clt}), hence the same convergence applies for $U_s = f_s((\mathcal{L}_1(s)-s)/\sqrt{2s})$ and $f_s(\cdot)$ of \eqref{eq:f_s}.} In addition, setting for $k \in \mathbb{N}$ the events \begin{equation} \label{eq-Ak} A_k := \bigcap_{j=0}^{k} \left\{ \eta(j) > \bar \varphi_n (j) \right\}, \end{equation} we have by the preceding and \eqref{dfn:B-kappa} that the bound \eqref{18.01d} follows from \begin{equation}\label{18.01-eff} \mathbb{Q}_{1}^{s} (\, g(\eta(n')); A_{n'} \cap B_{-1} ) \ge (1-\varepsilon_\ell) \mathbb{E}_s (\, g(\eta(n')); A_{n'} ) \end{equation} (taking $k=n'$ due to the assumed support of $g(\cdot)$ and including $j=0$ at no loss of generality since $z>0$). Now, recall from \cite[Lemma 3.1(b)]{BRZ} that \begin{align}\label{eq:prod} \mathbb{Q}_{1}^{s} (\, g(\eta(n'));\,A_{n'} \cap B_{-1})= \mathbb{E}_s \big(\, g(\eta(n')) \prod_{j=1}^{n'} F^\eta_j ; \, A_{n'} \big) \,, \end{align} where if \begin{equation}\label{cond-A} \sqrt{\eta^2(j-1)/2+\eta^2(j)/2} > \bar \varphi_n(j) \,, \end{equation} then by \cite[(3.14)]{BRZ}, \[ F^\eta_j := \mathbb{Q}_1^s \big(Y_j > \bar \varphi_n(j) - \psi_\ell(j) \,\big|\,\eta(j-1),\eta(j)\, \big) \ge 1 -c e^{-c \psi^2_\ell(j)} \,. \] Since \eqref{cond-A} holds on the event $A_{n'}$ for $j=1,\ldots,n'$, \textcolor{purple}{recalling from \eqref{dfn:psi-ell} that $\psi_\ell(j)=\psi_\ell(n'-j)$ and splitting the product on the \abbr{rhs} of \eqref{eq:prod} into $j > n/2$ and $j \le n/2$} yields the inequality \eqref{18.01-eff}, and thereby \eqref{18.01d}, with \begin{equation}\label{dfn:vep-ell} \varepsilon_\ell = 1 - \prod_{j=0}^\infty \big[1-c e^{- c (j^\delta+h_\ell)^2} \big]^{2} \,, \end{equation} which converges to zero as $\ell \to \infty$. Recall from \cite[(3.4)]{BRZ} the notation $\mathbb{Q}_2^{x^2/2}$ for the law of the Markov chain $(Y_1,\eta(1),\cdots)$ started at $Y_1=x$. To see \eqref{18.02d}, we will show \begin{equation} \mathbb{Q}_{2}^{x^2/2} ( \widetilde g(Y_{n'}) ;\,A_{n'-1} \cap B_2) \geq (1-\varepsilon_\ell) \mathbb{P}^Y_x (\widetilde g (Y_{n'}) ;\,B_2 ) \,. \label{120.5d} \end{equation} Taking the expectation over $x$ with respect to the law of $Y_1$ under $\mathbb{Q}_1^s$ and arguing as in the proof of \eqref{18.01d} will then give \eqref{18.02d}. Turning to establish \eqref{120.5d}, recall from \cite[Lemma 3.1(c)]{BRZ} that \[ \mathbb{Q}_{2}^{x^2/2} ( \widetilde g(Y_{n'}) ;\,A_{n'-1} \cap B_2) = \mathbb{P}_x^Y ( \widetilde g(Y_{n'}) \prod_{j=1}^{n'-1} F^Y_j ; B_2) \,, \] where by \cite[(3.16)]{BRZ}, \[ F^Y_j := \mathbb{Q}_2^{x^2/2}(\eta(j) > \bar \varphi_n(j) \,|\, Y_j, Y_{j+1}) \ge 1 - c e^{-c \psi^2_\ell(j)} \,, \] provided that \begin{equation}\label{cond-B'} \sqrt{Y_j Y_{j+1}} \ge \bar \varphi_n(j) + \psi_\ell(j) \,. \end{equation} On $B_2$, we have that for any $j < n'$, and all $\ell$ larger than some fixed universal constant, \[ \sqrt{Y_j Y_{j+1}} > \bar \varphi_n(j+1) + 2 \psi_\ell(j+1) \ge \bar \varphi_n(j) + \psi_\ell(j) \,.
\] In particular, with \eqref{cond-B'} holding on $B_2$ for any $j \in \{1,\ldots,n'-1\}$, by the same reasoning as before, this yields the inequality \eqref{120.5d}, and hence also \eqref{18.02d}. \end{proof} We next estimate the barrier probabilities for $\{Y_j\}$ in terms of the law $\mathbb{P}^W_x$ of a Brownian motion $\{W_t\}$, starting at $W_1=Y_1=x$. For \corOF{$0 \le T < T' \le n'$, introduce the events \begin{equation}\label{dfn-D-psi-T} D_{\pm 2 \psi,T,T'} := \{ W_t > \bar \varphi_n(t) \pm 2 \psi_\ell(t), \;\forall t\in [T+1,T'] \}, \end{equation} using hereafter $D_{ \kappa h_\ell,T,T'}$ if $\pm 2\psi_\ell(t)$ in \eqref{dfn-D-psi-T} is replaced by the constant function $\kappa h_\ell$, with abbreviated notation $D_{\pm 2 \psi,T}$ when $T'=n'-T$ and $D_{\pm 2 \psi}$ for $D_{\pm 2 \psi,0}$.} See Figure \ref{fig:DPsi} for a pictorial description. Recall the sets $B_\kappa$, see \eqref{dfn:B-kappa}. \begin{figure} \caption{ The curves in the events $D_{\pm 2h_\ell}$ (dashed lines) and $D_{\pm 2\psi}$ (curved, solid lines). The event $D_{2\psi,T}$ involves the curves between $T+1$ and $n'-T$. } \label{fig:DPsi} \end{figure} \begin{lemma}\label{theo-sbarrierbmd} For $\widehat g(w):=\widetilde g(w)/\sqrt{w}$, some $\varepsilon_\ell \to 0$, any $\widetilde g(\cdot)$, $n,\ell$ as in \Cref{theo-sbarrierd} and all $x>0$, \begin{align} \label{48.02d} \mathbb{E}^Y_{x} \big( \widetilde g (Y_{n'}); B_2 \big) &\ge (1-\varepsilon_{\ell} ) \sqrt{x} \, \mathbb{E}^W_{x} \big( \widehat g(W_{n'}) ; D_{2\psi} \big), \\ \mathbb{E}^Y_{x} \big( \widetilde g(Y_{n'}) ; B_{-1} \big) \label{48.01d} & \leq (1-\varepsilon_\ell)^{-1} \sqrt{x} \, \mathbb{E}^W_{x} \big( \widehat g(W_{n'}) ; D_{-2\psi} \big). \end{align} \end{lemma} \begin{proof} Recall that up to the absorption time $\tau_\ast := \inf \{ t >1 : Y_t =0 \}$, the $0$-dimensional Bessel process satisfies the \abbr{sde} \[ Y_{t}=W_{t}-\int_1^t \frac{1}{2Y_{s}} ds \,, \qquad Y_1 = W_1 = x \,, \] with $\{W_{t}\}$ having the Brownian law $\mathbb{P}^W_x$. Further, the event $\{Y_{n'}>0\}$ implies that $\{\tau_\ast > n'\}$, in which case by Girsanov's theorem and monotone convergence, we have that for any bounded $\mathcal{F}_{n'}$-measurable $Z$, \be \mathbb{E}_{x}^{Y} ( Z; Y_{n'}>0 )=\mathbb{E}_{ x}^{W} \Big(Z \sqrt{\frac{x}{W_{n'}}} \,\,e^{-\frac{3}{8}\int_{1}^{n'} (W_{s})^{-2} ds}\, ; \; \inf_{t \in [1,n']} \{W_{t} \} >0\Big).\label{Girs.1} \ee \textcolor{purple}{With $B_2$ containing the event corresponding to $D_{2 \psi}$ for the process $Y_t$, we} get \eqref{48.02d} by considering \eqref{Girs.1} for $Z=\widetilde g(W_{n'}) {\bf 1}_{D_{2\psi}}$. Indeed, the event $D_{2\psi}$ implies that $\inf_{t \le n'} \{ W_t - \bar \varphi_n(t) \} \ge 0$, hence \begin{equation*} e^{-\frac{3}{8} \int_{1}^{n'} (W_{s})^{-2} ds} \ge e^{-\frac{3}{8} \int_1^{n'} (n-s)^{-2} ds} \ge e^{-\frac{3}{8 \ell}} := 1 -\varepsilon_\ell \end{equation*} with $\varepsilon_\ell \to 0$ as $\ell \to \infty$. Next, for all $\ell$ larger than some universal constant, \[ \inf_{t \in [1,n']} \{ \bar \varphi_n(t) - 2 \psi_\ell(t) \} = \bar \varphi_n(n') - 2 h_\ell > 0 \,, \] and with $B' := \{ W_j > \bar \varphi_n(j) - \psi_{\ell} (j), j = 1,2,\ldots,n'\}$, it suffices for \eqref{48.01d} to show that \begin{align} \mathbb{E}^W_x (\widehat g(W_{n'}); D_{-2\psi} \cap B' ) \ge (1 - \varepsilon_{\ell} ) \mathbb{E}^W_{x} ( \widehat g(W_{n'}); B' ) \,. 
\label{48.01n} \end{align} To this end, since $\phi_t := \bar \varphi_n(t) - 2\psi_\ell(t)$ is a convex function, we get upon conditioning on $\{W_1,W_2,\ldots,W_{n'}\}$, that \[ \mathbb{E}^W_x (\widehat g(W_{n'}) ; D_{-2\psi} \cap B') \ge \mathbb{E}^W_x (\widehat g(W_{n'}) \prod_{j=1}^{n'-1} F_j^W; B') \,, \] where by the reflection principle (see \cite[(2.1)]{BRZ} or \cite[Lemma 2.2]{Bramson1}), \begin{align}\label{eq:FW-bd} F_j^W &:= \mathbb{P}^W( \min_{u \in [0,1]} \{W_{j+u} - f_{\phi_j,\phi_{j+1}} (u;1)\} > 0 \,|\, W_j,W_{j+1}) \nn \\ & = 1 - \exp\big(-2 (W_j-\phi_j) (W_{j+1}-\phi_{j+1})\big) \,, \end{align} with $f_{a,b}(\cdot;1)$ denoting the line segment between $(0,a)$ and $(1,b)$. On the event $B'$ we thus have that $F_j^W \ge 1-\exp(-2 \psi_\ell(j) \psi_\ell(j+1))$ for all $j \in \{1,2,\ldots,n'-1\}$, thereby in analogy with \eqref{dfn:vep-ell}, establishing \eqref{48.01n} for \[ \varepsilon_\ell = 1 - \prod_{j=0}^\infty \big[1-e^{- 2 (j^\delta+h_\ell)((j+1)^\delta + h_\ell)} \big]^{2} \,, \] which converges to zero as $\ell \to \infty$. \end{proof} \subsection{ Proof of \Cref{prop-asymptotic-first-moment}}\label{sec-3.5} Taking $s=s_{n,z}$ yields that $W_1=m_n + z + U_s$. For such $W_1$ let \begin{equation}\label{dfn:alpha-n-pm} \alpha^{(\pm)}_{n,\ell,z} := z^{-1} e^{c_\ast z} 2^{n'} \mathbb{E}\big[ \sqrt{W_1/W_{n'}} \, \widetilde \ga_\ell(W_{n'} - c_\ast \ell) {\sf q}^{(\pm)}_{\widetilde n} (W_1,W_{n'}) \, \big] \,, \end{equation} with $\widetilde n := n'-1$ denoting our barrier length and ${\sf q}^{(\pm)}_{\widetilde n}(x,w) := {\sf q}^{(\pm)}_{\widetilde n,0}(x,w)$ for the corresponding non-crossing probabilities \begin{equation}\label{dfn:Fpm_T} {\sf q}^{(\pm)}_{\widetilde n,T}(x,w) := \mathbb{P}^W_x(D_{\pm 2 \psi,T} \, | \, W_{n'}=w). \end{equation} Combining \eqref{3a.48} with Lemmas \ref{theo-sbarrierd} and \ref{theo-sbarrierbmd} for $g(\cdot)=\widehat \ga_\ell(\cdot-c_\ast \ell)$ and $\widetilde g(\cdot) = \widetilde \ga_\ell (\cdot - c_\ast \ell)$, respectively, we have that \begin{align*} (1-\varepsilon_\ell)^{-2} \alpha^{(-)}_{n,\ell,z} \ge z^{-1} e^{c_\ast z} \mathbb{E}_{s_{n,z}} [\Lambda_{n,\ell}] \ge (1 \corOF{-} \varepsilon_\ell)^{2} \alpha^{(+)}_{n,\ell,z} \,. \end{align*} The proof of \Cref{prop-asymptotic-first-moment} thus amounts to showing that for any $\epsilon > 0$ and all large enough $\ell$, \begin{equation}\label{3a.52} (1+\epsilon)^3 \alpha_\ell \ge \varlimsup_{z \to \infty}\varlimsup_{n \to \infty} \{ \alpha^{(-)}_{n,\ell,z} \} \ge \varliminf_{z \to \infty} \varliminf_{n \to \infty} \{ \alpha^{(+)}_{n,\ell,z} \} \ge (1- \epsilon)^3 \alpha_\ell \,. \end{equation} To this end, setting $z'=z+U_s$ and $W_{n'}=c_\ast \ell + y$, we write \eqref{dfn:alpha-n-pm} explicitly as \[ \alpha^{(\pm)}_{n,\ell,z} = \frac{e^{c_\ast z} 2^{n'}}{z \sqrt{2 \pi \widetilde n}} \mathbb{E} \Big[ \int \widetilde \ga_\ell(y) dy \frac{\sqrt{m_n+z'}}{\sqrt{c_\ast \ell+y}} {\sf q}_{\widetilde n}^{(\pm)} (m_n+z',c_\ast \ell+y) e^{-(m_n-c_\ast \ell+z'-y)^2/2 \widetilde n} \Big] \,, \] with the expectation over $z'$. Hereafter $\ell \le n/\log n$ so $|c_\ast - \rho_n| \ell \le 1$ and $D_{\pm 2 \psi}$ imposes heights $a^{(\pm)}=\rho_n \widetilde n + b^{(\pm)}$, $b^{(\pm)} = c_\ast \ell \pm 2 h_\ell$ at barrier end points. Thus, in the preceding formula one needs only consider $z'$, $y \ge \pm 2 h_\ell$. Recall \eqref{eq-latenight}. 
With $m_n-c_\ast \ell = c_\ast n'- \varepsilon_{n,n}$, upon setting $\Delta_n := \frac{1}{2 \widetilde n}(z'-y+c_\ast-\varepsilon_{n,n})^2$, we then get similarly to \eqref{eq:jay-ident1} that \begin{equation*}\label{eq:new-ident1} \frac{1}{2\widetilde n} (c_\ast n' + z'-y - \varepsilon_{n,n})^2 = (n'+1) \log 2 + c_\ast (z'-y - \varepsilon_{n,n}) + \Delta_n \,. \end{equation*} Since $c_\ast \varepsilon_{n,n} = \log n$, this simplifies our formula for $\alpha^{(\pm)}_{n,\ell,z}$ to \begin{align}\label{eq:alpha-pm-ident} \alpha^{(\pm)}_{n,\ell,z} &= \frac{1}{\sqrt{\pi \ell}} \int_{\pm 2 h_\ell}^\infty e^{c_\ast y} \widetilde \ga_\ell(y) \frac{{\widehat f}^{(\pm)}_{n,\ell,z}(y)}{\sqrt{1+y/(c_\ast \ell)}} dy \,,\\ {\widehat f}^{(\pm)}_{n,\ell,z}(y) & := \frac{n}{2 \sqrt{2} z} \mathbb{E} \Big[ \frac{\sqrt{m_n+z'}}{\sqrt{c_\ast \widetilde n}} {\sf q}_{\widetilde n}^{(\pm)} (m_n+z',c_\ast \ell+y) e^{-c_\ast U_s} e^{-\Delta_n} \Big] \,. \nn \end{align} By our uniform tail estimate \eqref{eq-U-sub-gaussian} for $U_s$ and the tail bound \eqref{bd-Pois-for-ofer} on $\widetilde \ga_\ell(y)$, up to an error $\varepsilon_n \to 0$ as $n \to \infty$, we can restrict the evaluation of ${\widehat f}^{(\pm)}_{n,\ell,z}(y)$ to $|z'|+y \le C \sqrt{\log n}$. This forces $m_n+z' = c_\ast \widetilde n (1 + \varepsilon_{n})$ and eliminates $\Delta_n$, thereby allowing us to replace ${\widehat f}^{(\pm)}_{n,\ell,z}(y)$ in \eqref{eq:alpha-pm-ident} by \begin{equation}\label{eq:f-alt} f^{(\pm)}_{n,\ell,z}(y) = \mathbb{E} \Big[ \frac{e^{-c_\ast U_s}}{\sqrt{2} z} \frac{\widetilde n}{2} {\sf q}_{\widetilde n}^{(\pm)} (m_n+z',c_\ast \ell+y) \Big] \,. \end{equation} Recalling the events $D_{\kappa h_\ell,T}$, see below \eqref{dfn-D-psi-T}, we further consider the barrier probabilities \begin{equation} \label{eq-evening2-T} \widetilde {\sf q}_{\widetilde n,T}^{(\pm)} (x,w) := \mathbb{P}^W_x (D_{\pm 2 h_\ell,T} \,| \, W_{n'} = w) \,, \end{equation} using the abbreviated notation $\widetilde {\sf q}_{\widetilde n}^{(\pm)} (x,w) =\widetilde {\sf q}_{\widetilde n,0}^{(\pm)} (x,w)$. \textcolor{purple}{Let $ \mathbb{P}_{x \to w}^{[t_1,t_2]}\(A_{m(t)}\)$ denote the probability that the Brownian bridge, taking the value $x$ at $t_1$ and $w$ at $t_2$ remains above the barrier $m(t)$ on the interval $[t_1,t_2]$. Recall from \cite[Lemma 2.2]{Bramson1} that for a linear barrier $m(t)$, \begin{align}\label{eq:lin-barrier-prob-idenBB} \mathbb{P}_{x\to w}^{[t_1,t_2]}\(A_{m(t)}\) = 1 - e^{-2 (x-m(t_1))_+(w-m(t_2))_+/(t_2-t_1)} \,. \end{align} It follows that \begin{align}\label{eq:lin-barrier-prob-iden} \widetilde {\sf q}_{\widetilde n}^{(\pm)} (x,w) =\mathbb{P}_{x\to w}^{[1,n']}\(A_{\bar \varphi_n(t)\pm 2 h_\ell}\) = 1 - e^{-2 (x-a^{(\pm)})_+(w-b^{(\pm)})_+/\widetilde n} \,, \end{align} yielding for $x-a^{(\pm)}=z' \mp 2 h_\ell$ and $w-b^{(\pm)}= y \mp 2 h_\ell$ which are both $O(\sqrt{\log n})$, \begin{align} \widetilde {\sf q}_{\widetilde n}^{(\pm)} (m_n+z',c_\ast \ell + y) = \frac{2 + \varepsilon_n}{\widetilde n} (z' \mp 2 h_\ell) (y \mp 2 h_\ell)\,. \label{eq:lin-barrier-prob} \end{align} Note further that \begin{equation}\label{eq:Markov} \widetilde{\sf q}_{\widetilde n,T}^{(\pm)} (x,w) = \mathbb{E}_x^W \big[\mathbb{P}_{W_{T+1}\to W_{n'-T}}^{[T+1,n'-T]}\(A_{\bar \varphi_n(t)\pm 2 h_\ell}\) | W_{n'} = w \big] \,. \end{equation}} The next lemma paraphrases \cite[Proposition 6.1]{Bramson1} (with the proof given there also yielding the claimed uniformity). 
\begin{lemma} \label{lem-evening1} For each $\epsilon>0$ there exist $T_\epsilon$, $n_\epsilon$ finite so that, for any $\ell \ge 0$, $T \in [T_\epsilon, \frac{1}{2} \widetilde n]$, $x-a^{(\pm)},w-b^{(\pm)} \in [0,\log \widetilde n]$ and all $\widetilde n>n_\epsilon$ \begin{align} (1-\epsilon) \,\widetilde {\sf q}^{(+)}_{\widetilde n, T}(x,w) &\le {\sf q}^{(+)}_{\widetilde n,T}(x,w) \le {\sf q}^{(-)}_{\widetilde n,T}(x,w) \leq (1+\epsilon) \,\widetilde {\sf q}^{(-)}_{\widetilde n,T}(x,w) \,. \label{eq-evening3} \end{align} \end{lemma} Fixing $\epsilon>0$, we bound separately $f^{(\pm)}_{n,\ell,z}(y)$. Starting with $f^{(-)}_{n,\ell,z}(y)$, we have from \eqref{eq:f-alt}, using the fact that $ {\sf q}_{\widetilde n}^{(-)}\leq {\sf q}_{\widetilde n,T_\epsilon}^{(-)}$ and the \abbr{rhs} of \eqref{eq-evening3}, that \begin{eqnarray}\label{eq:f-alt-ub} f^{(-)}_{n,\ell,z}(y) &\leq & \mathbb{E} \Big[ \frac{e^{-c_\ast U_s}}{\sqrt{2} z} \frac{\widetilde n}{2} {\sf q}_{\widetilde n,T_\epsilon}^{(-)} (m_n+z',c_\ast \ell+y) \Big] \nonumber\\ &\leq & (1+\epsilon) \mathbb{E} \Big[ \frac{e^{-c_\ast U_s}}{\sqrt{2}z} \frac{\widetilde n}{2} \widetilde {\sf q}_{\widetilde n,T_\epsilon}^{(-)} (m_n+z',c_\ast \ell+y) \Big] \,. \end{eqnarray} Turning to evaluate $\widetilde{\sf q}_{\widetilde n,T}^{(\pm)} (m_n+z',c_\ast \ell+y)$, we get from \textcolor{purple}{ \eqref{eq:Markov} and \eqref{eq:lin-barrier-prob-idenBB} that \begin{equation}\label{eq:comp-core} \widetilde{\sf q}_{\widetilde n,T}^{(\pm)} (m_n+z',c_\ast \ell+y) \le \frac{2} { \widetilde n-2T} \mathbb{E} \Big[ (\bar Z \mp 2 h_\ell)_+ (\bar Y \mp 2 h_\ell)_+ \Big] \,, \end{equation}} where $(\bar Z,\bar Y)$ follow the joint Gaussian distribution of \[ (W_{T+1} - \bar \varphi_n(T+1),W_{n'-T} - \bar \varphi_n(n'-T)) \] given $W_1=m_n+z'$ and $W_{n'}=c_\ast \ell+y$. It is further easy to verify that \begin{equation} \Cov (\bar Z, \bar Y) = T \, \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right] + \frac{T^2}{\widetilde n} \left[ \begin{matrix} -1 & 1 \\ 1 & -1 \end{matrix}\right] \,, \end{equation} and that \[ \mathbb{E} [ (\bar Z, \bar Y) ] - [ z'+c_\ast + \frac{T}{\widetilde n} (y-z') , y + \frac{T}{\widetilde n} (z'-y) ] = o_{\widetilde n}(1) \] \corOF{independently} of $(z',y)$, decaying to zero when $\widetilde n \to \infty$ with $\ell,T$ kept fixed. From this we get, in view of \eqref{eq:f-alt-ub} and \eqref{eq:comp-core}, that \[ \varlimsup_{z \to \infty}\varlimsup_{n \to \infty} \{ f^{(-)}_{n,\ell,z}(y) \} \leq (1+\epsilon) (y + 3 h_\ell) \lim_{z,s \to \infty} \mathbb{E} \big[ \frac{(z'+3 h_\ell)_+}{\sqrt{2}z} e^{-c_\ast U_s} \big] \] provided $h_\ell \ge c_\ast + \sqrt{T_\epsilon/(2\pi)}$. In addition, with ${\mathcal B}=\{|U_s|<z/2\}$, or without such restriction, we get thanks to \eqref{eq-U-sub-gaussian}, via dominated convergence that \begin{equation}\label{eq:Us-limit} \lim_{z,s \to \infty} \mathbb{E} \big[ {\bf 1}_{\mathcal B} \frac{(z' \pm 3 h_\ell)_+}{\sqrt{2} z} e^{-c_{\ast} U_s} \big] =\frac{1}{\sqrt{2}} \mathbb{E} \(e^{-c_{\ast}U_\infty}\) \corOF{= \frac{1}{\sqrt{2}} e^{c_\ast^2/4} } = 1 \,. \end{equation} \corOF{(Recall \Cref{theo-sbarrierd} that $U_\infty \sim N(0,1/2)$.)} Combined with the previous display, we obtain \begin{equation}\label{eq:bound-up} \varlimsup_{z \to \infty}\varlimsup_{n \to \infty} \{ f^{(-)}_{n,\ell,z}(y) \} \leq (1+\epsilon) (y + 3 h_\ell) \,. 
\end{equation} Note that for $\ell \to \infty$ \begin{equation} \label{eq-morning1} \frac{1}{\sqrt{ \pi \ell}} \int_{- 2 h_\ell}^\infty e^{c_\ast y} \widetilde \ga_\ell(y) \frac{y +3 h_\ell}{\sqrt{1+y/(c_\ast \ell)}} dy \le \al_\ell (1+ \epsilon)\,. \end{equation} (Due to \eqref{bd-Pois-for-ofer} the contribution to $\al_\ell$ outside $[\sqrt{\ell}/(2 r_\ell),2 r_\ell \sqrt{\ell}]$ is negligible, whereas within that interval $y/\ell \to 0$ and $h_\ell/y \to 0$.) Combining \eqref{eq:alpha-pm-ident}, \eqref{eq:bound-up} and \eqref{eq-morning1} yields the \abbr{lhs} of \eqref{3a.52}, thereby completing the proof of the upper bound in \Cref{prop-asymptotic-first-moment}. Turning next to the lower bound on $f^{(+)}_{n,\ell,z}(y)$, we first truncate to $y \in [\sqrt{\ell}/(2r_\ell),\sqrt{\ell} \, 2 r_\ell]$ and restrict to $z' \in [\frac{z}{2},\frac{3z}{2}]$ via the event ${\mathcal B}$. Then, taking $h_\ell \ge 2 (T+1)^\delta$ for $T=T_\epsilon$ of \Cref{lem-evening1}, guarantees that \[ 3 h_\ell \ge \sup_{t \in [1,T+1] \cap [n'-T,n'] } \{ 2 \psi_\ell(t) \}, \] for $\psi_\ell(\cdot) \ge h_\ell$ of \eqref{dfn:psi-ell}. This in turn implies that \corOF{ \begin{align}\label{eq:event-bd} D_{2\psi,T} &\subset D_{2\psi} \bigcup (D_{3h_\ell,0,T+1}^c \cap D_{2 h_\ell,T}) \bigcup( D_{2 h_\ell,T} \cap D_{3h_\ell, \widetilde n - T,n'}^c ) \,, \end{align} where $D_{3h_\ell,T,T'}^c$ denotes the event of the Brownian motion crossing below the linear barrier in the definition of \textcolor{purple}{$D_{3h_\ell,T,T'}$,} see below \eqref{dfn-D-psi-T}.} See figure \ref{fig:bessel} for an illustration of these events. \begin{figure}\label{fig:bessel} \end{figure} From \eqref{eq:event-bd} and the \abbr{lhs} of \eqref{eq-evening3} we deduce by the union bound, that at $x=m_n+z'$, $w=c_\ast \ell + y$, for all $\widetilde n$ large enough \begin{equation}\label{eq:basic-decomp} {\sf q}^{(+)}_{\widetilde n} (x,w) \geq (1-\epsilon) \widetilde {\sf q}^{(+)}_{\widetilde n,T} (x,w) - \widetilde {\sf q}^{(\downarrow 3h_\ell,+)}_{\widetilde n,T} (x,w) - \widetilde {\sf q}^{(+,\downarrow 3h_\ell)}_{\widetilde n,T} (x,w) \,, \end{equation} where $\widetilde {\sf q}^{(\downarrow 3h_\ell,+)}_{\widetilde n,T} (x,w)$ and $\widetilde {\sf q}^{(+,\downarrow 3h_\ell)}_{\widetilde n,T} (x,w)$ are the probabilities of the events $D_{3h_\ell,0,T+1}^c \cap D_{2 h_\ell,T}$ and $D_{2 h_\ell,T} \cap D_{3h_\ell, \widetilde n - T,n'}^c$ under $\mathbb{P}_x^W(\cdot|W_{n'}=w)$. Proceeding to evaluating the latter terms, note that conditional on $(\bar Z,\bar Y)$ and the given values of $W_1=x$, $W_{n'}=w$, the events $D_{3h_\ell,0,T+1}$, $D_{2 h_\ell,T}$ and $D_{3h_\ell,\widetilde n-T,n'}$ are mutually independent. Thus, setting $y'=w - \rho_n \ell \in [y,y+1]$ and assuming \abbr{wlog} that $(\sqrt{\ell}/r_\ell) \wedge z \ge 8 h_\ell$, we have from \textcolor{purple}{\eqref{eq:lin-barrier-prob-idenBB}, that \begin{align*} \mathbb{P}(D_{3h_\ell,0, T+1}^c | \, \bar Z, \bar Y) &=1 - \mathbb{P}_{x\to W_{ T+1}}^{[ 1, T+1]}\(A_{\bar \varphi_n(t)+3 h_\ell}\)= e^{-2(z'+\rho_n-3h_\ell)(\bar Z-3h_\ell)_+/T} \,, \\ \mathbb{P}(D_{3h_\ell, \widetilde n - T,n'}^c | \, \bar Z, \bar Y) &=1 - \mathbb{P}_{W_{n'-T }\to w}^{[n'-T ,n']}\(A_{\bar \varphi_n(t)+3 h_\ell}\)= e^{-2(\bar Y - 3h_\ell)_+(y'-3h_\ell)/T} \,. 
\end{align*}} Combining these identities with \eqref{eq:Markov} and the inequality \eqref{eq:basic-decomp}, we arrive at \begin{align*} {\sf q}^{(+)}_{\widetilde n} (x,w) \geq \mathbb{E} \Big[ \big( 1-\epsilon -& e^{-2(z'+\rho_n-3h_\ell)(\bar Z-3h_\ell)_+/T} - e^{-2(y'-3h_\ell) (\bar Y - 3h_\ell)_+/T} \, \big) \nn \\ & \qquad \qquad \textcolor{purple}{\mathbb{P}_{\bar{Z}+\bar \varphi_n(T+1)\to \bar{Y}+\bar \varphi_n(n'-T) }^{[T+1,n'-T]}\(A_{\bar \varphi_n(t)+ 2 h_\ell}\)} \Big] \,. \end{align*} The first \corOF{factor} on the \abbr{rhs} is at least $-2$ and for all $\ell$ larger than some universal $\ell_0(\epsilon)$ it exceeds $(1-\epsilon)^2$ on the event $A := \{ \bar Z \wedge \bar Y \ge 4 h_\ell \}$. Setting $V:= (\bar Z - 2 h_\ell)_+ (\bar Y - 2 h_\ell)_+$, we combine for the second term on the \abbr{rhs} the analog of identity \eqref{eq:lin-barrier-prob-iden} with the bound $1-e^{-a} \in [a-a^2/2,a]$ on $\mathbb{R}_+$ to arrive at \[ (\widetilde n-2T) {\sf q}^{(+)}_{\widetilde n} (x,w) \ge 2 \mathbb{E} \Big[ \big\{ (1-\epsilon)^2 - 2 {\bf 1}_{A^c}-\frac{2V}{\widetilde n - 2 T}\big\} V \Big] \,. \] Utilizing \eqref{eq:f-alt}, the uniform tail bounds one has on $(\bar Z-z',\bar Y-y)$ when $\widetilde n \to \infty$, for our truncated range of $z'$ and $y$, followed by \eqref{eq:Us-limit}, we conclude that \begin{align*} & \varliminf_{z \to \infty} \varliminf_{n \to \infty} \{ f^{(+)}_{n,\ell,z}(y) \} \geq \varliminf_{z \to \infty} \varliminf_{n \to \infty} \mathbb{E} \Big[ {\bf 1}_{\mathcal B} \frac{e^{-c_\ast U_s}}{\sqrt{2}z} \frac{\widetilde n}{2} {\sf q}_{\widetilde n}^{(+)} (m_n+z',c_\ast \ell+y) \Big] \nn \\ & \geq (1-\epsilon)^2 (y-3 h_\ell) \lim_{z,s \to \infty} \mathbb{E} \Big[ {\bf 1}_{\mathcal B} \frac{(z'-3 h_\ell)_+}{\sqrt{2}z} e^{-c_\ast U_s} \Big] \ge (1-\epsilon)^2 (y - 3 h_\ell) \,. \end{align*} Plugging this into \eqref{eq:alpha-pm-ident} and noting that for $\ell \to \infty$ \begin{equation*} \frac{1}{\sqrt{\pi \ell}} \int_{\sqrt{\ell}/(2 r_\ell)}^{2 r_\ell \sqrt{\ell}} e^{c_\ast y} \widetilde \ga_\ell(y) \frac{y - 3 h_\ell}{\sqrt{1+y/(c_\ast \ell)}} dy \ge \al_\ell (1 - \epsilon) \end{equation*} we arrive at the \abbr{rhs} of \eqref{3a.52}, thereby completing the proof of \Cref{prop-asymptotic-first-moment}. \noindent \begin{tabular}{lll} & Amir Dembo\\ & Department of Mathematics \& Department of Statistics\\ & Stanford University, Stanford, CA 94305\\ & [email protected]\\ & &\\ & &\\ & Jay Rosen\\ & Department of Mathematics\\ & College of Staten Island, CUNY\\ & Staten Island, NY 10314 \\ & [email protected]\\ & &\\ & & \\ & Ofer Zeitouni\\ & Faculty of Mathematics, Weizmann Institute and\\ &Courant Institute, NYU\\ & Rehovot 76100, Israel and NYC, NY 10012 \\ & [email protected] \end{tabular} \end{document}
\begin{document} \setlength{\leftmargini}{18pt} \title[Existence of orbits with non-zero torsion]{Existence of orbits with non-zero torsion for certain types of surface diffeomorphisms} \author{ F.\,B\'eguin and Z.\,Rezig Boubaker} \address{F.\,B\'eguin, Universit\'{e} Paris-Sud 11, D\'{e}partement de Math\'{e}matiques, 91405 Orsay, France.} \address{Z.\,Rezig Boubaker, Universit\'{e} du 7 Novembre \`a Carthage, Facult\'{e} des Sciences de Bizerte, D\'{e}partement de Math\'{e}matiques, 7021 Zarzouna, Tunisie.} \date{\today{}} \subjclass[2010]{37E30, 37E45} \keywords{Torsion, Ruelle number, surface diffeomorphisms, rotation sets} \begin{abstract} The present paper concerns the dynamics of surface diffeomorphisms. Given a diffeomorphism $f$ of a surface $S$, the \emph{torsion} of the orbit of a point $z\in S$ is, roughly speaking, the average speed of rotation of the tangent vectors under the action of the derivative of $f$, along the orbit of $z$ under $f$. The purpose of the paper is to identify some situations where there exist measures and orbits with non-zero torsion. We prove that every area preserving diffeomorphism of the disc which coincides with the identity near the boundary has an orbit with non-zero torsion. We also prove that a diffeomorphism of the torus $\mathbb T^2$, isotopic to the identity, whose rotation set has non-empty interior, has an orbit with non-zero torsion. \end{abstract} \maketitle \section{Introduction} Numerical conjugacy invariants are a key tool to analyze the behavior of dynamical systems. The paradigmatic example is of course Poincar\'e's rotation number for circle homeomorphisms: a single numerical conjugacy invariant, the rotation number, allows one to completely describe the dynamics of a circle diffeomorphism, at least when this diffeomorphism is smooth enough and the rotation number is irrational. Unfortunately, this situation is quite specific to the circle. Consider for example a homeomorphism $f$ of the torus $\mathbb T^2$, which is isotopic to the identity. Poincar\'e's construction of rotation numbers may be generalized, yielding rotation vectors for $f$. Nevertheless, unlike what happens for circle homeomorphisms, different points of $\mathbb T^2$ may have different rotation vectors. Moreover, even the collection of all the rotation vectors of the points in $\mathbb T^2$ is far from describing completely the dynamics of $f$. This is the reason why many other invariants have been defined to study the dynamics of torus (or, more generally, surface) homeomorphisms. One may for example consider the average speed at which two given orbits turn around each other. Or, for a diffeomorphism $f$ of $\mathbb S^{2} \simeq \bar{\mathbb C}$, one may construct a conjugacy invariant by considering the evolution of the cross ratio of four points under the action of $f$ (see \emph{e.g.} \cite{GG:95} for more details and many more examples). In the present paper, we consider a numerical invariant which measures the average rotation speed of tangent vectors under the action of the derivative of a surface diffeomorphism. Consider a (not necessarily compact) surface $S$ with trivializable tangent bundle and a diffeomorphism $f$ of $S$ isotopic to the identity. Choose an isotopy $I=(f_t)_{t\in [0,1]}$ joining the identity to $f$. For $t \in \mathbb R$, we define $f_t=f_{t-n} \circ f^{n}$ where $n=\lfloor t\rfloor$. Choose a trivialization of the tangent bundle of $S$.
Then, for every point $x \in S$, every vector $\xi \in T_xS \setminus\{0\}$ and every $t$, we can see $df_t(x).\xi$ as a non-zero vector in $\mathbb R^2 \simeq \mathbb C$. We denote by $\mathrm{Torsion}_n(I,x,\xi)$ the variation of the argument of $df_t(x).\xi$ when $t$ runs from $0$ to $n$, divided by $n$ (as a unit for angles, we use the full turn instead of the radian). If the limit $\lim_{n\rightarrow +\infty} \mathrm{Torsion}_n (I,x,\xi) $ exists, then it does not depend on $\xi$; we call it the \emph{torsion of the orbit of $x$}, and denote it by $\mathrm{Torsion}(I,x)$. If $\mu$ is an $f$-invariant probability measure, then $\mathrm{Torsion}(I,x)$ exists for $\mu$-almost every $x$ and the function $x \mapsto \mathrm{Torsion}(I,x)$ is $\mu$-integrable; we call the integral $\int_{S} \mathrm{Torsion}(I,x) d\mu (x)$ the \emph{torsion of the measure $\mu$}, and denote it by $\mathrm{Torsion}(I,\mu)$. If $f$ has compact support in $S$ and if we only consider isotopies with compact support, then the quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ do not depend on the choice of the isotopy $I$. Similarly, if $S$ is the torus $\mathbb T^{2}=\mathbb R^2 / \mathbb Z^2$ or the annulus $\mathbb A= \mathbb R/\mathbb Z \times \mathbb R$ and if we use the canonical trivialization of the tangent bundle of $S$, then the quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ do not depend on the choice of the isotopy~$I$. In those two cases, we may speak of the torsion of an orbit of $f$ without having chosen an isotopy (see section~\ref{s.definition-torsion} for more details). Similar notions have been considered by several authors. What we call the \emph{torsion of the area probability measure} was first defined by D.\;Ruelle for conservative diffeomorphisms using the polar decomposition of $\mathrm{GL}(2,\mathbb R)$ \cite{Ruelle}. In the context of twist maps, the notion of torsion of an orbit has been considered by J.\;Mather \cite{Mather1,Mather2}, S.\;Angenent \cite{Angenent} and S.\;Crovisier \cite{Crovisier1,Crovisier2}; see below for more details. J.\;Mather calls it ``the amount of rotation of an orbit''. The torsion of the area probability measure is one of the quasi-morphisms considered by J.M.\;Gambaudo and \'E.\;Ghys to study the algebraic structure of the groups of area preserving diffeomorphisms of a surface; they call it ``the Ruelle number of a diffeomorphism'' \cite{GG:95}. A few years ago, T.\;Inaba and H.\;Nakayama have interpreted the torsion of an invariant probability measure as the difference of the volume of two subsets of a line bundle over the surface \cite{InabaNakayama}. Existence results for orbits and/or invariant probability measures with zero torsion have been proved in several contexts. For area preserving twist maps of the compact annulus, well-ordered periodic orbits with a given rotation number can be found as critical points of a certain energy functional \cite{Mather1,AubryLeDaeron}. J.\;Mather and S.\;B.\;Angenent have related the Morse index of these critical points with the torsion of the corresponding orbit \cite{Mather2,Angenent}. The existence of periodic orbits with zero torsion for area preserving twist maps of the annulus follows (see e.g.\ \cite[Theorem 1]{Angenent}). Actually, S.\;Crovisier has constructed orbits with arbitrary rotation number and zero torsion for any twist map of the annulus \cite{Crovisier2}. He subsequently used these orbits to exhibit generalized Arnold's tongues for twist maps \cite{Crovisier1}.
In another direction, S.\;Matsumoto and H.\;Nakayama have proved that every diffeomorphism of the torus $\mathbb T^2$ isotopic to the identity has an invariant probability measure with zero torsion \cite{MatsumotoNakayama}. The purpose of the present paper is to prove results which go in the opposite direction: we want to identify general situations where there exist orbits with non-zero torsion (note that the existence of an orbit with non-zero torsion is equivalent to the existence of an (ergodic) invariant probability measure with non-zero torsion, see section~\ref{s.definition-torsion}). In other words, we want to find general situations where the non-triviality of the dynamics of a surface diffeomorphism $f$ can be read off the action of the derivative of $f$ along a single orbit. \begin{rem} Such results are typically useful to prove rigidity theorems for some types of group actions. For example, J.\;Franks and M.\;Handel have proved that every action of $\mathrm{SL}(3,\mathbb Z)$ on a surface by area preserving $C^1$-diffeomorphisms factors through a finite group (\cite{FranksHandel}). Their proof uses the fact that, for every area preserving diffeomorphism $f$ on a closed surface of negative Euler characteristic, either $f$ has trivial dynamics (there exists $n$ such that $f^n=\mathrm{Id}$), or there is a simple dynamical invariant which detects the non-triviality of the dynamics of $f$: there is an orbit with non-zero rotation vector, or there is a closed curve $\alpha$ such that the length of $f^n(\alpha)$ grows exponentially, or... Similarly, in order to prove that every action of $\mathbb Z^n$ on $\mathbb S^2$ by area preserving $C^1$-diffeomorphisms has two global fixed points (provided that it is generated by diffeomorphisms that are $C^0$-close to the identity), the authors of \cite{BFLM} have used the fact that every area preserving diffeomorphism of $\mathbb S^2$ has two fixed points $a, b$ and a recurrent point $c$ such that $c$ has a non-zero rotation number in the annulus $\mathbb S^2 \setminus \{a,b\}$. \end{rem} Our first result concerns conservative diffeomorphisms of the disc: \begin{thm}\label{thm1} Every area-preserving diffeomorphism of the disk $\mathbb D^{2}$ with compact support, which is not the identity, has an orbit with non-zero torsion. \end{thm} In the above statement, the disk $\mathbb D^2$ is tacitly equipped with a trivialization of its tangent bundle. Observe that, since $\mathbb D^2$ is simply connected and since we consider diffeomorphisms with compact support, the result does not depend on the choice of this trivialization. Also recall that, for diffeomorphisms of the disc with compact support, the torsion of an orbit does not depend on the choice of an isotopy. \begin{rem} Let $S$ be an orientable surface with zero genus, and $f$ be an area-preserving diffeomorphism of the surface $S$ with compact support. Then we can embed $S$ in $\mathbb D^2$ and extend $f$ to a diffeomorphism $\bar f$ of $\mathbb D^2$ which is the identity on $\mathbb D^2 \setminus S$. Then, applying theorem \ref{thm1} to $\bar f$, we get an orbit with non-zero torsion. This orbit is automatically in $S$, since $\bar f=\mathrm{Id}$ on $\mathbb D^2 \setminus S$. Observe nevertheless that, if $S$ is not simply connected, the torsion of an orbit of $f$ does depend on the choice of a trivialization of the tangent bundle of $S$.
Moreover, there exist trivializations of the tangent bundle of $S$ which do not extend to trivializations of the tangent bundle of $\mathbb D^2$ for any embedding of $S$ in $\mathbb D^2$. Note, for example, that all the orbits of a rigid rotation of the annulus $\mathbb A= \mathbb R/\mathbb Z \times \mathbb R$ have zero torsion with respect to the natural trivialization of the tangent bundle of $\mathbb A$ (\emph{i.e.} the trivialization induced by the canonical trivialization of the tangent bundle of $\mathbb R \times \mathbb R$). \end{rem} The existence of orbits with non-zero torsion will be obtained as a consequence of the existence of a recurrent orbit rotating around a fixed point (at a non-zero average speed). We will obtain the existence of such a recurrent orbit and such a fixed point as a consequence of a symplectic geometry result of C. Viterbo (\cite{Viterbo}). It could also be obtained as a consequence of P. Le Calvez's foliated equivariant version of Brouwer's plane translation theorem (\cite{LeCalvez}), together with a recent result of O. Jaulent which allows one to apply Le Calvez's result to a diffeomorphism which has infinitely many fixed points (\cite{Jaulent}). Theorem~\ref{thm1} fails to be true if one replaces the disc $\mathbb D^2$ by another surface. Indeed, for a surface which is not the disk, the recurrent orbits can rotate around the holes or the handles of the surface instead of rotating around the fixed points. A rigid rotation of $\mathbb T^{2}=\mathbb R^2 / \mathbb Z^2$ is an example of an area-preserving diffeomorphism having only orbits with zero torsion (for the trivialization of the tangent bundle of $\mathbb T^2$ induced by the canonical trivialization of the tangent bundle of $\mathbb R^2$). We will nevertheless prove an existence result for orbits with non-zero torsion for some diffeomorphisms of $\mathbb T^{2}$; unlike the rigid rotations, these diffeomorphisms will have orbits ``rotating around $\mathbb T^{2}$'' in different directions. Let us recall the definition of the rotation set of a torus homeomorphism. Let $f$ be a homeomorphism of the torus $\mathbb T^2$ isotopic to the identity, and $\widetilde{f}:\mathbb R^2\to\mathbb R^2$ be a lift of $f$. Given a point $z \in\mathbb T^2$ and a lift $\widetilde z\in\mathbb R^2$ of $z$, let $\rho_n(\widetilde{f},z):=\frac{1}{n}(\widetilde{f}^n(\widetilde z)-\widetilde z)$ (this quantity does not depend on the choice of $\widetilde z$). If $\rho _n(\widetilde{f},z)$ converges towards $\rho(\widetilde{f},z)$ as $n$ goes to $+\infty$, we say that $\rho(\widetilde{f},z)$ is the \emph{rotation vector} of $z$ for $\widetilde{f}$. Note that, if $z$ is a periodic point, then the rotation vector $\rho(\widetilde{f},z)$ is well-defined, and has rational coordinates. Now, the \emph{rotation set} of $\widetilde{f}$ is $$\rho(\widetilde{f}):=\bigcap _{n\geq 0} \overline{\{\rho _n(\widetilde{f},z),\;\; z \in \mathbb R^2\}}.$$ This is a convex compact subset of $\mathbb R^2$ (\cite{Misu-Zie}). If $\widetilde{f}'$ is another lift of $f$, then there exists $(p,p') \in \mathbb Z^2$ such that $\widetilde{f}'=\widetilde{f}+(p,p')$, and thus $\rho(\widetilde{f}')=\rho(\widetilde{f})+(p,p')$. We will speak of the rotation set of $f$: this is a subset of $\mathbb R^2$, defined up to a translation (in particular, the fact that the rotation set has non-empty interior does not depend on the choice of a lift of $f$).
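As an elementary example (which the reader may check directly), consider a rigid rotation of $\mathbb T^2$, lifted to a translation of $\mathbb R^2$: if $\widetilde{f}(\widetilde z)=\widetilde z+v$ for some fixed $v\in\mathbb R^2$, then
$$\rho_n(\widetilde{f},z)=\frac{1}{n}\big(\widetilde{f}^n(\widetilde z)-\widetilde z\big)=v \quad\text{for every } n\geq 1 \text{ and every } z, \qquad\text{so that}\quad \rho(\widetilde{f})=\{v\}.$$
Thus the rotation set of a rigid rotation reduces to a single point; in particular, such diffeomorphisms do not satisfy the hypothesis of theorem~\ref{thm2} below.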
We will prove the following result: \begin{thm}\label{thm2} Every diffeomorphism of $\mathbb T^{2}$, isotopic to the identity, and whose rotation set has non-empty interior, has an orbit with non zero torsion. \end{thm} In this statement, the torsion is measured with respect to the ``canonical'' trivialization of the tangent bundle of $\mathbb T^2=\mathbb R^2 / \mathbb Z^2$, \emph{i.e.} the trivialization induced by the affine structure of $\mathbb R^2$. We have already pointed out that, for this trivialization, the torsion of an orbit does not depend on the choice of an isotopy joining the identity to $f$. Using a result of Franks (\cite{Franks}), the hypothesis of theorem \ref{thm2} may be replaced by the following one: $f$ has three periodic orbits whose rotation vectors are affinely independent. \subsection*{Comments and questions.} \begin{enumerate} \item Matsumoto and Nakayama have proved that every diffeomorphism of the torus $\mathbb T^2$ admits an invariant probability measure with zero torsion (\cite{MatsumotoNakayama}). It seems to be unknown if one can find such a probability measure which is moreover ergodic. And thus, it seems to be unknown if one can find an orbit with zero torsion. On the contrary, observe that the existence of a measure with non-zero torsion automatically implies (using the ergodic decomposition theorem) the existence of an ergodic measure with non-zero torsion, and thus (using Birkhoff's ergodic theorem) the existence of an orbit with non-zero torsion. \item We do not know if, under the hypotheses of theorems~\ref{thm1} and~\ref{thm2}, it is possible to find \emph{periodic} orbits with non-zero torsion. \item As we already mentionned, our proof of theorem \ref{thm1} relies on a symplectic geometry result due to Viterbo. Using a recent result by Jaulent and the equivariant foliated Brouwer's translation theorem of Le Calvez instead of Viterbo's result, it is possible to prove that theorem \ref{thm1} is still valid for a diffeomorphism $f$ of $\mathbb D^2$ which preserves a probability measure whose support is not contained in the fixed points set of $f$. \item Theorem~\ref{thm1} is of course false, if one simply drops the area-preservation hypothesis. A counter-example is obtained by considering a diffeomorphism which preserves each horizontal line. Nevertheless, we do not know if theorem~\ref{thm1} is true or false, if one assumes for example the existence of a non-wandering orbit (which is not a fixed point) instead of the preservation of some probability measure. \item It seems reasonable to think that under the hypothesis of theorem \ref{thm1}, the set of points whose orbits have non-zero torsion should have positive area, but we were unfortunately not able to prove such a result. \item We do not know if it is possible to prove a quantitative version of theorem \ref{thm2}. \emph{Is there a constant $C$ such that, every diffeomorphism $f$ of $\mathbb T^2$ isotopic to the identity has an orbit of torsion at least $C.r$, where $r$ is the maximal radius of an euclidean ball contained in the rotation set of $f$?} \item The arguments of the proof of theorem \ref{thm2} do not seem to allow to control the rotation vectors of the orbits with non-zero torsion that we construct. We do not know if, under the hypothesis of theorem \ref{thm2}, it is possible to construct an orbit with rotation vector $v$ and non-zero torsion, for every vector $v$ in the rotation set of a lift $\widetilde f$ of $f$. \item It has been proved by J. Llibre and R. 
MacKay that every $f$ of $\mathbb T^2$, isotopic to the identity, whose rotation set has non-empty interior, has positive topological entropy. H. Einrich and N. Guelman and A. Larcanch\'e and I. Liousse have obtained a partial converse to this result: for every $C^{1+\alpha}$-diffeomorphism $f$ of $\mathbb T^2$, isotopic to the identity, satisfying a transitivity assumption, if $f$ has positive topological entropy, then the rotation set of $f$ has non-empty interior. Combining Einrich-Guelman-Larcanch\'e-Liousse's result with our theorem~\ref{thm2}, we obtain that a $C^{1+\alpha}$-diffeomorphism of $\mathbb T^2$, isotopic to the identity, satisfying a transitivity assumption, if $f$ has positive topological entropy, then $f$ has an orbit with non-zero torsion. We do not know if this result admits a converse: \emph{does a $C^{1+\alpha}$ diffeomorphism of $\mathbb T^2$, satisfying an appropriate transitivity assumption, and having an orbit with non-zero torsion, necessarly have positive topological entropy?} \end{enumerate} \subsection*{Organization of the paper.} In section~\ref{s.definition-torsion}, we define precisely what we mean by the \emph{torsion} of an orbit or an invariant probability measure, and we give some basic properties of these notions. In section~\ref{s.linking-and-torsion}, we define the linking number of two orbits for a diffeomorphism of $\mathbb R^2$ isotopic to the identity, and we prove that the existence of two orbits with non-zero linking number implies the existence of an orbit with non-zero torsion. Section~\ref{s.thm1} is devoted to the proof of theorem~\ref{thm1} and section~\ref{s.thm2} is devoted to the proof of theorem~\ref{thm2}. \subsection*{Conventions and notations.} All along the paper, the vector space $\mathbb R^2$ is endowed with its canonical euclidean norm, which we denote by $\|\cdot\|$. We denote by $\mathbb{S}^1$ the unit circle in $\mathbb R^2$. We identify the universal cover of $\mathbb S^1$ with the real line $\mathbb R$ using the map $\pi:\mathbb R\to\mathbb S^1$ given by $\pi(\theta)=e^{i2\pi\theta}$. As a unit for angles, we use the full turn. \subsection*{Acknowledgments} The authors would like to thank S. Crovisier, P. Le Calvez, F. Le Roux and P. Py for helpful conversations. The authors are grateful to the \emph{Unit\'e de Recherche Math\'ematiques et Applications} of the \emph{Facult\'e des Sciences de Bizerte} and the \emph{Institut Pr\'eparatoire aux \'Etudes d'Ing\'enieur de Bizerte} for their support of the visits of the second author to Orsay, where this work was carried out. \section{ Torsion of an orbit or an invariant measure } \label{s.definition-torsion} Let $S$ be a (not necessary compact) surface with trivializable tangent bundle; in other words, $S$ is an orientable surface with genus $0$ or $1$. We choose a trivialization of the tangent bundle of $S$, which identifies the tangent bundle $TS$ with $S\times\mathbb R^2$. Thanks to this identification, the canonical euclidean norm of $\mathbb R^2$ defines a riemannian metric on $S$. This allows us to speak of the unit tangent bundle of $S$, which we denote by $T^1S$. By construction, $T^1S$ is identified to~$S\times\mathbb S^1$. Now, let $f$ be a ${\mathcal C}^{1}$-diffeomorphism of $S$ isotopic to the identity, and $I=(f_t)_{t\in [0,1]}$ be an isotopy joining the identity to $f$ in $\mathrm{Diff}^1(S)$. As usual, we define $f_t$ for every $t\in\mathbb R$ by setting $f_t=f_{t-\lfloor t\rfloor} \circ f^{\lfloor t\rfloor}$. 
We denote by~$f_*$ the action of $f$ on the unit tangent bundle $T^1S\simeq S\times\mathbb S^1$: $$f_*\big(x,\xi\big) = \left(f(x),\frac{df_t(x).\xi}{\|df_t (x).\xi\|}\right).$$ Now we define the torsion of an orbit of $f$. Let $x$ be a point in $S$, and $\xi$ be a unit tangent vector at $x$. For every $t$, we see $\frac{df_t(x).\xi}{\|df_t (x).\xi\|}$ as an element of $\mathbb S^1$. We consider the map $$\begin{array}{rrcl} v(I,x,\xi) : & \mathbb R & \longrightarrow &\mathbb S^1\\ & t & \longmapsto & \displaystyle\frac{df_t(x).\xi}{\|df_t (x).\xi\|} \end{array}.$$ We choose a continuous lift $\widetilde v(I,x,\xi):\mathbb R\longrightarrow \mathbb R $ of this map. For every $t \in \mathbb R,$ the quantity $\widetilde{v}(I,x,\xi)(t)-\widetilde{v}(I,x,\xi)(0)$ does not depend on the choice of the lift $ \widetilde{v}(I,x,\xi)$. We set $$\mathrm{Torsion}_1(I,x,\xi)= \widetilde{v}(I,x,\xi)(1)-\widetilde{v}(I,x,\xi)(0).$$ More generally, for every $n\in \mathbb N\setminus\{0\}$, we set $$ \mathrm{Torsion}_n(I,x,\xi) = \frac{1}{n} \Big(\widetilde{v}(I,x,\xi)(n)-\widetilde{v}(I,x,\xi)(0) \Big)= \displaystyle\frac{1}{n}\sum^{n-1}_{k=0} \mathrm{Torsion}_1\left(I,f^k_*(x,\xi)\right).$$ Let $\xi'$ be another unit vector in $T^1_x S$; since $df_n(x) :\xi \longmapsto v(I,x,\xi)(n)$ preserves the cyclic order on $\mathbb S^1$, we have $$\Big|\mathrm{Torsion}_n(I,x,\xi)- \mathrm{Torsion}_n(I,x,\xi')\Big|\leq \frac{2}{n}.$$ This shows that the quantity $\mathrm{Torsion}_n(I,x,\xi)$ has a limit for some $\xi\in T^1_x S$ when $n$ goes to $\infty$, if and only if it has a limit for every $\xi\in T^1_x S$, and that the numerical value of the limit does not depend on $\xi$. \begin{defi} Consider a point $x\in S$, and assume that the quantity $\mathrm{Torsion}_n (I,x,\xi) $ converges for some (or equivalently, for every) $\xi\in T^1 S$ when $n$ goes to $\infty$. Then we call \emph{the torsion of the orbit of $x$} (for the isotopy $I$) the quantity $$\mathrm{Torsion}(I,x)=\lim_{n\to\infty} \mathrm{Torsion}_n(I,x,\xi).$$ \end{defi} Now, let $\mu$ be an $f$-invariant probability measure on $S$. We can lift $\mu$ to $\widetilde{\mu}$ an $f_*$-invariant probability measure on $T^1S \cong S \times \mathbb S^1$. Assume that either $\mu$ or $f$ has compact support. Then, the function $(x,\xi) \mapsto \mathrm{Torsion}_1 (I,x,\xi)$ is in $L^{\infty}(T^1S, \widetilde{\mu})$, since it is bounded on every compact subset of $T^1S$ and vanishes on the complement of the support of $f$. So, Birkhoff's ergodic theorem implies that the limit $$\mathrm{Torsion}(I,x)=\displaystyle\lim_{n\rightarrow +\infty} \displaystyle\frac{1}{n} \sum^{n-1}_{k=0} \mathrm{Torsion}_1(I,f_*^k(x,\xi))$$ exists for $\widetilde{\mu}$ almost every $(x,\xi)$, and hence, for $\mu$ almost every $x$ (since the convergence does not depend on $\xi$). Furthermore, and always by the Birkhoff ergodic theorem, the function $x \mapsto \mathrm{Torsion}(I,x)$ is $\mu-$integrable and $$\int_{S} \mathrm{Torsion}(I,x) d\mu (x)= \int_{T^1S} \mathrm{Torsion}_1(I,x,\xi) d\widetilde{\mu} (x,\xi).$$ \begin{defi} Let $\mu$ be a $f$-invariant probability measure on $S$. Assume that either $\mu$ or $f$ has compact support. We call the \emph{torsion of the measure $\mu$} the quantity $$\mathrm{Torsion}(I,\mu)=\int_{S} \mathrm{Torsion}(I,x) d\mu (x).$$ \end{defi} Now, we give some relations between these different notions of torsion. \begin{lem}\label{lem1} Let $(x_n,\xi_n)_{n\geq 0}$ be a sequence in some compact subset of~$T^1 S$. 
For every $n$, let $\alpha_n:=\mathrm{Torsion}_n(I,x_n,\xi_n)$. Then, every limit point of the sequence $(\alpha_n)_{n\geq0}$ is the torsion of an $f$-invariant probability measure whose support is contained in $\overline{\{x_n \mid n \in \mathbb N\}}.$ \end{lem} \begin{proof} Let $\alpha = \lim_{i\rightarrow + \infty} \alpha_{n_i}$ be a limit point of the sequence $(\alpha_n)_{n\geq0}$. For every $i$, consider the probability measure $\widetilde{\mu}_i$ on $T^1 S$ defined by $$\widetilde{\mu}_i=\displaystyle\frac{1}{n_i}\displaystyle\sum^{n_i-1}_{k=0}\delta_{f_*^{k}(x_{n_i},\xi_{n_i})}.$$ Let $\widetilde{\mu}$ be a limit point of the sequence $(\widetilde{\mu}_i)_{i \in \mathbb N}$. This is a $f_*$-invariant probability measure on~$T^1 S$. So the projection $\mu$ of $\widetilde\mu$ is a $f$-invariant probability measure on $S$, and one has $$\alpha = \mathop{\lim}_{i\to\infty}\alpha_{n_i}=\displaystyle\lim_{i\rightarrow\infty} \int_{T^1S} \mathrm{Torsion}_1(I,x,\xi) d\widetilde{\mu}_i (x,\xi)$$ $$ =\int_{T^1S} \mathrm{Torsion}_1(I,x,\xi) d\widetilde{\mu} (x,\xi)=\int_{S} \mathrm{Torsion}_1(I,x) d\mu (x)= \mathrm{Torsion}(I,\mu).$$ Moreover, the support of $\mu$ is obviously contained in $\overline{\{x_n \mid n \in \mathbb N\}}$. \end{proof} \begin{lem} \label{lem2} For every $f$-invariant probability measure $\mu$, the torsion of the measure $\mu$ is a convex combination of torsions of orbits of points of $S$. \end{lem} \begin{proof} On the one hand, the map $\mu \mapsto \mathrm{Torsion}(I,\mu)$ is affine, so the ergodic decomposition theorem shows that the torsion of $\mu$ is a convex combination of torsions of ergodic probability measure. On the other hand, it follows from Birkhoff's ergodic theorem that the torsion of an ergodic measure is the torsion of an orbit. The lemma follows. \end{proof} Combining these two lemma, we obtain the following corollary: \begin{corol} \label{corol1} If there exists a sequence of integers $n_i \rightarrow + \infty$ and a sequence of points $(x_{n_i})$ in a compact of $S$ such that $\mathrm{Torsion}_{n_i}(I,x_{n_i}) \geq \epsilon$ for every $i$, then there exists a point $x \in S$ such that $\mathrm{Torsion}(I,x) \geq \epsilon.$ \end{corol} Now, we examine the dependance of the torsion of an orbit or an invariant probability measure with respect to the isotopy $I$. Let $I'$ be another isotopy joining the identity to $f$. Then, for every $(x,\xi)\in T^1S$, the quantities $\mathrm{Torsion}_1(I,x,\xi)$ and $\mathrm{Torsion}_1(I',x,\xi)$ differ by an integer. For continuity reasons, this integer does not depend on $x$ and $\xi$ (provided that $S$ is connnected). It follows that there is an integer $k\in\mathbb Z$, such that $\mathrm{Torsion}(I',x)=\mathrm{Torsion}(I',x)+k$ for every point $x\in S$, and $\mathrm{Torsion}(I',\mu)=\mathrm{Torsion}(I',\mu)+k$ for every $f$-invariant probability measure $\mu$. Now, the integer $k$ obviously depends continuously on the isotopy $I'$. It follows that, for every point $x\in S$ and every $f$-invariant probability measure $\mu$, the quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ depend only on the homotopy class of the isotopy $I$. There exist several interesting situations where these quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ depend only on $f$, and not on the choice of the isotopy $I$ joining the identity to $f$. 
Let us first consider the case where $S$ is the torus $\mathbb T^2= \mathbb R^2 / \mathbb Z^2$ endowed with the canonical trivialization of its tangent bundle (\emph{i.e.} the trivialization induced by the affine structure of $\mathbb R^2$). We know that $\mathrm{Diff}^1_0(\mathbb T^2)$ retracts on the subgroup (isomorphic to $\mathbb T^2$) made of the rigid rotations. Thus, if $I$ and $I'$ are two isotopies joining the identity to a diffeomorphism $f$ of $\mathbb T^2$, then $I'^{-1}I$ is homotopic to a loop in the rigid rotations group. But, it is clear, that if $I''$ is a loop made of rigid rotations then, for every $(x, \xi) \in T^1 \mathbb T^2$, we have $\mathrm{Torsion}_1(I'', x, \xi)=0$ (the rigid rotations are parallel with respect to the canonical trivialization of the tangent bundle). It follows that the quantities $;\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ do not depend on the choice of the isotopy $I$. Now assume that $S$ is not a closed surface. Let $\mathrm{Diff}^1_c(S)$ be the set of the $C^1$-diffeomorphims of $S$ with compact supports, and $\mathrm{Diff}^1_{c,0}(S)$ the subset of $\mathrm{Diff}^1_c(S)$ made of the diffeomorphisms that are isotopic to the identity (\emph{via} an isotopy in $\mathrm{Diff}^1_c(S)$). We know that $\mathrm{Diff}^1_{c,0}(S)$ is contractible ( as a corollary of Kneser's theorem, see \cite{Kneser} or e.g.~\cite[th\'eor\`eme 2.9]{LeRoux}). Hence, if $f \in \mathrm{Diff}^1_{c,0}(S)$ and if we consider only isotopies $I$ joining the identity to $f$ in $\mathrm{Diff}^1_c(S)$, then the quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ depend only on $f$ and not on the choice of $I$. When the quantities $\mathrm{Torsion}(I,x)$ and $\mathrm{Torsion}(I,\mu)$ do not depend on the isotopy $I$, we will denote them by $\mathrm{Torsion}(f,x)$ and $\mathrm{Torsion}(f,\mu).$ In general, the torsion of an orbit of $f$ or a $f$-invariant probability measure also depends on the trivialization of the tangent bundle of $S$ which is used to identify $T^1 S$ with $S\times \mathbb S^1$. Nevertheless, it is quite easy to check that, if $f$ has compact support in $S$, then two trivializations that are homotopic yield the same value for the torsion of an orbit of $f$ or a $f$-invariant probability measure. In particular, if $S$ is the disc $\mathbb D^2$ and $f$ has compact support, then the torsion of an orbit of $f$ and the torsion of a $f$-invariant probability measure do not depend on the trivialization of the tangent bundle of $S$. Although this fact is worth to be pointed out, we shall not use it strictly speaking. We conclude this section by a remark, which will be useful for the proof of our theorem~\ref{thm2}: \begin{rem} \label{rem1} Assume $\mathbb R^2$ and $\mathbb T^2=\mathbb R^2/\mathbb Z^2$ to be equipped with the canonical trivializations of their tangent bundles. Let $I$ be an isotopy joining the identity to a diffeomorphism $f$ in $\mathrm{Diff}^1(\mathbb T^2)$, and $\widetilde I$ be a lift of the isotopy $I$ in $\mathrm{Diff}^1(\mathbb R^2)$. Let $x$ be a point in $\mathbb T^2$. If there is a lift $\widetilde x\in\mathbb R^2$ of $x$ such that the quantity $\mathrm{Torsion}(\widetilde I,\widetilde x)$ is well-defined, then the quantity $\mathrm{Torsion}(I,x)$ is also well-defined, and $\mathrm{Torsion}(\widetilde I,\widetilde x)=\mathrm{Torsion}(I,x)$. This follows immediatly from the definitions. 
\end{rem} \section{Linking and torsion} \label{s.linking-and-torsion} In this section, we consider a diffeomorphism $f$ of $\mathbb R^2$ which we assume to be isotopic to the identity, and an isotopy $I=(f_t)_{t\in [0,1]}$ joining the identity to $f$. The plane $\mathbb R^2$ on which the diffeomorphism $f$ acts should be considered as an affine plane rather than a vector space. We will denote by $d(\cdot,\cdot)$ the euclidean distance on this affine plane, which is induced by the canonical euclidean norm $\|\cdot\|$ on the underlying vector space. \emph{All along this section, $\mathbb R^2$ is endowed with the canonical trivialization of its tangent bundle.} Let us first define the \emph{linking number} of a pair of orbits of $f$. Let $x,y$ be two distinct points in $\mathbb R^2$. We consider the map $$\begin{array}{rrcl} v(I,x,y) : & \mathbb R & \longrightarrow &\mathbb S^1\\ & t & \longmapsto & \displaystyle\frac{df_t(x)-df_t(y)}{\|df_t (x)-df_t(y)\|} \end{array}.$$ We choose a continuous lift $\widetilde v(I,x,y):\mathbb R\longrightarrow \mathbb R $ of $v(I,x,y)$ (we recall that $\mathbb R$ is seen as the universal cover of $\mathbb S^1$, thanks to the map $\pi:\mathbb R\to\mathbb S^1$ given by $\pi(\theta)=e^{i2\pi\theta}$). For every $t \in \mathbb R,$ the quantity $\widetilde{v}(I,x,y)(t)-\widetilde{v}(I,x,y)(0)$ does not depend on the choice of this lift. For $n\in \mathbb N\setminus\{0\}$, we set $$ \mathrm{Linking}_n(I,x,y) := \frac{1}{n} \Big(\widetilde{v}(I,x,y)(n)-\widetilde{v}(I,x,y)(0) \Big)= \displaystyle\frac{1}{n}\sum^{n-1}_{k=0} \mathrm{Linking}_1\left(I,f^k(x),f^k(y)\right).$$ \begin{defi} If $\mathrm{Linking}_n (I,x,y) $ converges when $n$ goes to $\infty$, then we call \emph{linking number of the orbits of $x$ and $y$} (for the isotopy $I$) the quantity $$\mathrm{Linking}(I,x,y)=\lim_{n\to\infty} \mathrm{Linking}_n(I,x,y).$$ \end{defi} The purpose of the section is to prove the following proposition: \begin{pro} \label{prop.linking-implies-torsion} If $f$ has two orbits with non-zero linking number, then $f$ has an orbit with non-zero torsion. \end{pro} \begin{rem} Since we are dealing with diffeomorphisms with non-compact support, the linking number of a pair of orbits, as well as the torsion of an orbit, may depend on the choice of a trivialization of the tangent bundle of $\mathbb R^2$. We recall that, all along this section, the plane $\mathbb R^2$ is endowed with the canonical trivialization of its tangent bundle. \end{rem} The proof of proposition~\ref{prop.linking-implies-torsion} was sketched during a discussion between by the first author, F.~Le Roux and P.~Py several years ago. Observe proposition is intuitively obvious. Indeed, saying that the orbits of $x$ and $y$ have a non-zero linking number means that the segment line $[f_t(x),f_t(y)]$ turns with a non-zero average speed, when $t$ increases. And this should clearly imply that, at least for \emph{some} point $z$ of the segment line $[x,y]$, the image under $df_t(z)$ of the vector which is tangent to the segment $[x,y]$ must turn at a non-zero average speed, when $t$ increases. Nevertheless, turning this intuition into a formal proof appears to be quite delicate. We start by giving a quantitative version of the proposition~\ref{prop.linking-implies-torsion}~: \begin{lem} \label{lemma.linking-implies-torsion} Suppose that there exist two points $x,y\in\mathbb R^2$ and an integer $n>0$ such that $\mathrm{Linking}_n(I,x,y)\neq 0$. 
Then, there exist a point $z\in\mathbb R^2$ et a vector $\xi\in T_z \mathbb R^2$ such that $$\left |\mathrm{Torsion}_n(I,z,\xi)\right |\geq \frac{1}{3} \left |\mathrm{Linking}_n(I,x,y)\right |-\frac{1}{n}.$$ \end{lem} Proposition~\ref{prop.linking-implies-torsion} follows from lemma~\ref{lemma.linking-implies-torsion} and corollary~\ref{corol1}; so we are left to prove lemma~\ref{lemma.linking-implies-torsion}. \begin{proof}[Proof of lemma~\ref{lemma.linking-implies-torsion}] Let $x,y$ be two distinct points in $\mathbb R^2$, and $n$ be a positive integer such that $\mathrm{Linking}_n(I,x,y)\neq 0$. Let $\epsilon:=\mathrm{Linking}_n(I,x,y)$ and assume that $\epsilon$ is positive (the proof is similar in the case where it is negative). For $s \in [0,1]$, let $z(s)=(1-s)x+sy$. Let $\xi:=\frac{y-x}{\|y-x\|}$. Using affine structure of~$\mathbb R^2$, we will see the vector $\xi$ as an element of $T_{z(s)}\mathbb R^2$ for every $s\in [0,1]$. In order to prove the lemma, we will find $s_0\in [0,1]$ such that $\mathrm{Torsion}_n(f,z(s_0),\xi)\geq \frac{\epsilon}{3}-\frac{1}{n}$. If $\mathrm{Torsion}_n(I,x,\xi)\geq \frac{\epsilon}{3}$, then we take $s_0=0$ and we are done. Hence, in the remainder of the proof, we will assume that \begin{equation} \label{e.hypothese-torsion-x} \mathrm{Torsion}_n(I,x,\xi) \leq \frac{\epsilon}{3}. \end{equation} Using the affine structure of $\mathbb R^2$, we can define a map \begin{eqnarray*} u \; : \; [0,1]\times\mathbb R & \longrightarrow &\mathbb S^1 \\ (s,t) & \longmapsto & \frac{f_t(z(s))-f_t(x)}{\|f_t(z(s))-f_t(x)\|} \quad \mbox{ if } s>0, \\ (0,t) & \longmapsto & \frac{df_t(z(s)).\xi}{\|df_t(z(s))\xi\|}. \end{eqnarray*} This map is continuous since $f_t$ is assumed to be $C^1$ and to depend continuously on $t$. So we may consider a continuous lift $\widetilde u : [0,1]\times\mathbb R\rightarrow\mathbb R$ of the map $u$. This lift $\widetilde u $ is well defined up to the addition of an integer. \begin{claim} \label{c.1} One has $\widetilde u(1,n)-\widetilde u(0,n) \geq \frac{2n\epsilon}{3}$. \end{claim} \begin{proof} Observe that $\widetilde u(1,n)-\widetilde u(1,0)= n.\mathrm{Linking}_n(I,x,y)$ and $\widetilde u(0,n)-\widetilde u(0,0)=n.\mathrm{Torsion}_n(I,x,\xi)$. Also observe that $\widetilde u(1,0)-\widetilde u(0,0)=0$ since $u(s,0)=\xi$ for any $s\in [0,1]$. Now write \begin{eqnarray*} \widetilde u(1,n)-\widetilde u(0,n) & = & \Big(\widetilde u(1,n)-\widetilde u(1,0)\Big) + \Big(\widetilde u(1,0)-\widetilde u(0,0)\Big) + \Big(\widetilde u(0,0)-\widetilde u(0,n)\Big).\\ & = & n.\mathrm{Linking}_n(I,x,y) - n.\mathrm{Torsion}_n(I,x,\xi) \end{eqnarray*} Recall that $\epsilon$ is by definition equal to $\mathrm{Linking}_n(I,x,y)$. Also recall that we are assuming that $\mathrm{Torsion}_n(I,x,\xi)$ is smaller than $\epsilon/3$ (hypothesis~\eqref{e.hypothese-torsion-x}). The claim follows. \end{proof} Now, we consider $$s_0:=\inf\left\{s\in [0,1] \mbox{ such that } \widetilde u(s,n)-\widetilde u(0,n) \geq \frac{2n\epsilon}{3}\right\}.$$ Claim~1 shows that $s_0$ is well-defined, and hypothesis~\eqref{e.hypothese-torsion-x} shows that $s_0$ is positive.\ Now, we introduce the continuous map $$\begin{array}{ccrcll} v & : & [0,1]\times\mathbb R & \rightarrow & \mathbb S^1 & \\ && (s,t) & \mapsto & \frac{df_t(z(s)).\xi}{\|df_t(z(s).\xi\|}. \end{array}$$ It is lift to a continuous map $\widetilde v : [0,1]\times\mathbb R\rightarrow\mathbb R$ satisfying $\pi\circ\widetilde v= v$, well defined by addition of an integer. 
\begin{claim} \label{c.2} One has $\widetilde v(s_0,n)-\widetilde v(0,n)\geq \frac{2n\epsilon}{3} - \frac{3}{4}$. \end{claim} \begin{proof} We consider the curve $\alpha:[0,s_0]\to \mathbb R^2$ defined by $\alpha(s):=f^n(z(s))-f^n(x)$. We observe that, for every $s\in [0,s_0]$, $$v(s,n)=\frac{\frac{d\alpha}{ds}(s)}{\left\|\frac{d\alpha}{ds}(s)\right\|}\quad\mbox{and}\quad u(s,n)=\frac{\alpha(s)}{\|\alpha(s)\|}.$$ So, the quantity $\widetilde v(s_0,n)-\widetilde v(0,n)$ is the number of turns made by the tangent vector of the curve $\alpha$, and the quantity $\widetilde u(s_0,n)-\widetilde u(0,n)$ is the number of turns made by the curve $\alpha$ around the origin of $\mathbb R^2$. To compare these two quantities, we will construct a simple closed curve (by concatenating a lift of $\alpha$ with a curve made of a controlled number of segments and circle arcs), and use the fact that the tangent vector of a simple closed curve makes $+1$ turn or $-1$ turn, when one goes around this curve once. We introduce the annulus $\mathbb A:=[0,+\infty)\times \mathbb S^1$ obtained by blowing up the origin of $\mathbb R^2$. We will denote by $p:\mathbb A\to\mathbb R^2$ the natural projection given by $p(r,e^{2i\pi\theta})=re^{2i\pi\theta}$. The strip $\widetilde\mathbb A:=[0,\infty)\times\mathbb R$ will be seen as the universal covering of $\mathbb A$, the covering map $\Pi:\widetilde\mathbb A\to\mathbb A$ being given by $\Pi(r,\theta)=(r,\pi(\theta))=(r,e^{2i\pi\theta})$. We will denote by $p_\theta:\widetilde\mathbb A\to\mathbb R$ the projection on the second coordinate. The inclusion of $\widetilde\mathbb A=[0,+\infty)\times\mathbb R$ in the affine space $\mathbb R\times\mathbb R$ provides a natural trivialization of the tangent bundle of $\widetilde\mathbb A$. \begin{rem} \label{r.difference-trivializations} Caution~! This trivialization of the tangent bundle of $\widetilde\mathbb A$ is not the same that the one obtained, by pulling back by $p\circ \Pi:\widetilde\mathbb A\to\mathbb R^2$ the natural trivialization of the tangent bundle of the affine plane $\mathbb R^2$, where the image of the curve $\alpha$ lies. More precisely, for $(r,\theta)\in\widetilde\mathbb A$, using the trivializations of the tangent bundles of $\widetilde\mathbb A$ and $\mathbb R^2$, we can consider the differential map $d(p\circ \Pi)(r,\theta)$ as a linear map from $\mathbb R^2$ to $\mathbb R^2$; this linear map is \emph{not} the identity; it is a rotation of angle $\theta$ (recall that we use the full turn as a unit for angles). \end{rem} \begin{figure}\label{f.difference-trivializations} \end{figure} The curve $\alpha:[0,s_0]\to\mathbb R^2$ can be lifted to a (uniquely defined) continuous curve $\widehat\alpha:[0,s_0]\to\mathbb A$, satisfying $p\circ \widehat\alpha=\alpha$. This curve $\widehat\alpha$ is given by the formula $\widehat\alpha(s)=(\|f_n(z(s)-f_n(x)\|,u(s,n))$. Then the curve $\widehat\alpha:[0,s_0]\to\mathbb A$ can be lifted to a continuous curve $\widetilde\alpha:[0,s_0]\to \widetilde\mathbb A$, satisfying $\Pi\circ \widetilde\alpha=\alpha$, given by the formula $\widetilde\alpha(s)=(\|f_n(z(s)-f_n(x)\|,\widetilde u(s,n))$. Hence, one has $p_\theta(\widetilde\alpha(s))=\widetilde u(s,n)$ for any $s\in [0,1]$. Remark that $\alpha$ is a simple $C^1$ curve; indeed, up to a translation, it is the image of the segment line $s\mapsto z(s)$ under the diffeomorphism $f_n$. So, $\widetilde \alpha$ is a simple $C^1$ curve as well. 
On the other hand, the definition of $s_0$ and the relation $p_\theta\circ \widetilde\alpha(s)=\widetilde u(s,n)$ imply that we have $$p_\theta(\widetilde\alpha(s_0))=p_\theta(\widetilde\alpha(0))+\frac{2n\epsilon}{3}\quad\quad\mbox{et}\quad\quad p_\theta(\widetilde\alpha(s))<p_\theta(\widetilde\alpha(0))+\frac{2n\epsilon}{3}\mbox{ for }s\in [0,s_0).$$ Hence, the curve $\widetilde \alpha$ is contained in the quarter of the plane $$Q=\{(r,\theta) \mid r \geq 0 \mbox{ et } \theta \leq p_\theta(\widetilde\alpha(0))+ \frac{2n\epsilon}{3}\},$$ and joins the point $\widetilde\alpha(0)$, which is situated on the horizontal part of the boundary of $Q$, to the point $\widetilde\alpha(s_0)$ which is situated on the vertical part of the boundary of $Q$. It is so easy to construct a simple arc $\widetilde\beta$ in $\mathbb R^2\setminus \mathrm{int}(Q)$ joining the extremities of $\widetilde\alpha$, such that $\widetilde\alpha\cup\widetilde\beta$ is a $C^1$ negatively orientated simple closed curve, and such that the tangent vector of $\widetilde\beta$ makes between $-\frac{5}{4}$ turns and $-\frac{1}{4}$ turns, when we cover $\widetilde\beta$. We can, for example, construct $\widetilde\beta$ as union of three circle arcs and two segment lines (see figure~\ref{f.getting-a-simple-closed-curve}). As $\widetilde\alpha\cup\widetilde\beta$ is a $C^1$ negatively orientated simple closed curve, the tangent vector of this curve makes exactly $-1$ turn when one runs around the curve. Thus, the tangent vector of $\widetilde\alpha$ makes between $-\frac{3}{4}$ turn and $+\frac{1}{4}$ turn, when the parameter $s$ runs from $0$ to $s_0$. According to the remark~\ref{r.difference-trivializations}, and since $p_\theta\circ\widetilde\alpha(s_0)-p_\theta\circ\widetilde\alpha(0)= \frac{2n\epsilon}{3}$, this means that, for the canonical trivialization of the tangent bundle of $\mathbb R^2$, the tangent vector of $\alpha=p\circ\Pi(\widetilde\alpha)$ makes between $\frac{2n\epsilon}{3}-\frac{3}{4}$ turns and $\frac{2n\epsilon}{3}+\frac{1}{4}$ turns, when $s$ runs from $0$ to $s_0$. . Recall, now, that this number of turns is equal to $\widetilde v(s_0,n)-\widetilde v(0,n)$. This completes the proof of claim~2. \end{proof} \begin{figure} \caption{By concatenating $\widetilde\alpha$ with three circle arcs and two segment lines, we obtain a negatively orientated simple closed curve.} \label{f.getting-a-simple-closed-curve} \end{figure} \begin{rem} Note that the choice of $s_0$ is important: our proof of claim~\ref{c.2} uses the fact that $s_0$ is \emph{the smallest} value of the parameter $s$ which satisfies the inequality $\widetilde u(s,n)-\widetilde u(0,n) \geq \frac{2n\epsilon}{3}$. This ensures that the curve $\widetilde\alpha$ is contained in the quarter of the plane $Q$, and hence, prevents $\widetilde\alpha$ from ``spiraling" arround its extremity $\alpha(s_0)$. \end{rem} Now, using claim~2, it is not difficult to finish the proof of lemma~\ref{lemma.linking-implies-torsion}. Indeed, write \begin{eqnarray*} \mathrm{Torsion}_n(I,z(s_0),\xi) & = & \frac{1}{n}(\widetilde v(s_0,n)-\widetilde v(s_0,0)) \\ & = & \frac{1}{n}\Big(\mathop{\underbrace{(\widetilde v(s_0,n)-\widetilde v(0,n))}}_{A} + \mathop{\underbrace{(\widetilde v(0,n)-\widetilde v(0,0))}}_{B} + \mathop{\underbrace{(\widetilde v(0,0)-\widetilde v(s_0,0))}}_{C}\Big). \end{eqnarray*} Claim~2 states that quantity~A is bigger than $\frac{2n\epsilon}{3}-\frac{3}{4}$. 
Quantity $B$ is equal to $-n\mathrm{Torsion}_n(I,x,\xi)$, and we assumed that this quantity is bigger than $-\frac{n\epsilon}{3}$ (hypothesis~\eqref{e.hypothese-torsion-x}). Finally, quantity~C is equal to $0$ since $v(s,0)=\xi$, for every~$s$. So we get $$\mathrm{Torsion}_n(I,z(s_0),\xi)\geq \frac{\epsilon}{3}-\frac{3}{4n}\geq \frac{\epsilon}{3}-\frac{1}{n},$$ and lemma \ref {lemma.linking-implies-torsion} is proved. \end{proof} \section{Proof of theorem \ref{thm1}} \label{s.thm1} In this section, we consider a $C^1$-diffeomorphism $f$ of the open unit disc $\mathbb D^2$. We assume that $f$ has compact support (\emph{i.e.} coincides with the identity outside a compact subset of $\mathbb D^2$), and preserves the standard area two-form, which we denote by $\omega$. The goal of the section is to prove theorem~\ref{thm1}, \emph{i.e.} to prove that $f$ has an orbit with non-zero torsion. We will compute linking numbers and torsion using the natural trivialization of the tangent bundle of $\mathbb D^2$, and an isotopy $I=(f_t)_{t\in [0,1]}$ joining the identity to $f$ in the space of $C^1$-diffeomorphisms of $\mathbb D^2$ with compact supports. We have already noticed at the end of section~\ref{s.definition-torsion} that the torsion of an orbit of $f$ does not depend on the choice of an isotopy joining the identity to $f$. This will allow us to assume that $I$ is an isotopy in the set of area preserving diffeomorphisms of $\mathbb D^2$ (the existence of such an isotopy follows from Moser's lemma). As usual, we extend the isotopy $I$ by setting $f_t:=f_{t-n} \circ f^{n}$ where $n=\lfloor t\rfloor$, for every $t\in \mathbb R$. Let us first observe that the average linking number, with respect to $\omega$, of the orbits with a given point $x_0$ is well-defined: \begin{pro} \label{Birkh} For $x_0 \in \mathbb D^2$, the quantity $\mathrm{Linking}(I,x_0,x)$ is defined for almost every $x$ in $\mathbb D^2$, the function $x\mapsto \mathrm{Linking}(I,x_0,x)$ is integrable, and $$\displaystyle\int_{\mathbb D^{2}} \mathrm{Linking}(I,x_0,x) d\omega(x)= \displaystyle\int_{\mathbb D^{2}} \mathrm{Linking}_1(I,x_0,x) d\omega(x).$$ \end{pro} \begin{proof}[Proof of proposition~\ref{Birkh}] We denote by $\mathbb A$ the annulus obtained from $\mathbb D^2$ by blowing up $x_0$. The function $x \mapsto \mathrm{Linking}_1(I,x_0,x)$, which is defined on $\mathbb D^2\setminus\{x_0\}$, extends continuously on $\mathbb A$ (using the derivative of $f$ at $x_0$). This shows that $x \mapsto \mathrm{Linking}_1(I,x_0,x)$ is bounded on $\mathbb D^2\setminus\{x_0\}$. Proposition~\ref{Birkh} then follows from Birkhoff's ergodic theorem. \end{proof} Denote by $\mathrm{Fix}(f)$ be the set of the fixed points of $f$. Theorem~\ref{thm1} is an immediate consequence of the following proposition, together with proposition~\ref{prop.linking-implies-torsion}. \begin{pro}\label{pro1} If $f$ is not the identity, then there exists a point $x_0 \in Fix(f)$ such that $$\displaystyle\int_{\mathbb D^{2}} \mathrm{Linking}(I,x_0,x) d\omega(x) \not =0.$$ \end{pro} Proposition \ref{pro1} is actually a reformulation, in the context surfaces dynamics, of a symplectic geometry result due to C.~Viterbo (proposition~\ref{viterbo} below). For every $t\in [0,1]$, let $X_t$ be the vector field on $\mathbb D^2$ defined by $X_t(x)=\frac{d}{ds}_{|s=t} f_s(x)$. Then, for every $t\in [0,1]$, there is a unique normalised function $H_t:\mathbb D^2\to\mathbb R$ which ``generates the vector field $X_t$". 
By such, we mean that $H_t$ is null outside a compact subset of $\mathbb D^2$ and satisfies $\omega(X_t,\cdot)=dH_t$. This allows to define the symplectic action (for the isotopy $I$) of a point $x\in\mathrm{Fix}(f)$: \begin{defi} Let $x$ be a point in $\mathrm{Fix}(f)$. The \emph{symplectic action} (for the isotopy $I$) of the fixed point $x$ is the quantity $$A_I(x)=\displaystyle\int_{\gamma_x} \; \lambda - \displaystyle\int_{0}^1 H_t(f_t(x)) \; dt,$$ where $\gamma_x:[0,1]\to\mathbb D^2$ is the loop defined by $\gamma_x(t)=f_t(x)$ and $\lambda$ is a primitive of the area form $\omega$. \end{defi} Viterbo's result reads as follows: \begin{pro}[Viterbo; see proposition 4.2. of \cite{Viterbo}] \label{viterbo} If $f$ is not the identity, then $$\mathop{\sup}_{x\in\mathrm{Fix}(f)}\left\{A_I(x)\right\} \not= \mathop{\inf}_{x\in\mathrm{Fix}(f)}\left\{A_I(x)\right\}.$$ \end{pro} Now, we will see that the hamiltonian action $A(x_0)$ of a point $x_0 \in\mathrm{Fix}(f)$ is nothing else that the average linking around $x_0$, that is $\mathrm{Linking}(I,x_0,\mu)$. The following lemma is well-known by experts in hamiltonian dynamics; the first author learned it from P. Le Calvez. \begin{lem} \label{lem4} We have $$A_I(x_0)=\mathrm{Linking}(I,x_0,\omega).$$ \end{lem} \begin{proof} The symplectic action $A_I(x_0)$ does not depend on the class of homotopy of the isotopy $I$ (see e.g. \cite{Viterbo}). If $x,y\in \mathbb D^2$, it is easy to find an area preserving diffeomorphism with compact support $\tau_{x,y}$ such that $\tau_{x,y}(x)=y$, such that $\tau_{x,y}=\mathrm{Id}$ if $x=y$, and such that $\tau_{x,y}$ depends continuously on the couple $(x,y)$. Up to replacing $f_t$ by $\tau_{x_0,f_t(x_0)}\circ f_t$, we may therefore assume that the isotopy $I=(f_t)_{t \in [0,1]}$ fixes the point $x_0$, \emph{i.e.} satisfies $f_t(x_0)=x_0$ for all $t$. Under this assumption, the formula for the symplectic action of the point $x_0$ reads $$A_I(x_0)= - \int_{0}^1 H_t(x_0) \; dt.$$ We are left to prove that $\mathrm{Linking}(I,x_0,\omega)$ is equal to the integral on the right-hand side above. For $x\in\mathbb D^2\setminus\{x_0\}$, let $\theta(x)\in\mathbb R/\mathbb Z$ be the argument of $\frac{x-x_0}{\|x-x_0\|}\in\mathbb S^1$. Note that $d\theta$ is a closed $1$-form on $\mathbb D^2\setminus\{x_0\}$. By definition, for every $x\in\mathbb D^2\setminus\{x_0\}$, the quantity $\mathrm{Linking}_1(I,x,x_0)$ is the variation of $\theta(f_t(x))$ when $t$ runs from $0$ to $1$. Since $X_t(x)=\frac{d}{ds}_{|s=t} f_s(x)$, this yields $$\mathrm{Linking}_1(I,x,x_0)=\int_0^1 d\theta(X_t(f_t(x)) \;dt$$ So, if we denote by $D_\epsilon$ the disc of radius $\epsilon$ and centered at $x_0$, we have \begin{eqnarray*} \mathrm{Linking}(I,x_0,\omega) & = & \int_{\mathbb D^{2}\setminus \{x_0\}} \mathrm{Linking}_1(I,x,x_0) \; \omega(x)\\ & = & \int_{\mathbb D^{2}\setminus \{x_0\}} \left( \int_0^1 d\theta\big(X_t\big(f_t(x)\big)\big) dt\right) \omega(x) \\ & = & \int_0^1 \left(\int_{\mathbb D^2\setminus \{x_0\}} d\theta\big(X_t\big(f_t(x)\big)\big) \omega(x) \right) dt \\ & = & \int_0^1 \left(\int_{\mathbb D^2\setminus\{x_0\}} d\theta\big(X_t(x)\big) \; \omega(x) \right) dt\\ & = & \int_0^1 \left(\lim_{\epsilon\to 0}\int_{\mathbb D^2\setminus D_\epsilon} d\theta(X_t) \; \omega \right) dt \end{eqnarray*} (the third equality uses the fact that $f_t$ preserves $\omega$ and fixes $x_0$). 
Now observe that $$d\theta(X_t)\;\omega = d\theta \wedge \omega(X_t,\cdot) = -d(H_t \;d\theta)$$ (the first equality follows from the nullity of the 3-form $d\theta\wedge \omega$, whereas the second equality is a consequence of the nullity of the form $dd\theta$). Using this and Stokes' theorem, we obtain \begin{eqnarray*} \mathrm{Linking}(I,x_0,\omega) & = & - \int_0^1\left( \lim_{\epsilon \rightarrow 0} \displaystyle\int_{\mathbb D^2\setminus D_\epsilon} d(H_t \;d\theta)\right) dt \\ & = & - \int_0^1\left( \lim_{\epsilon \rightarrow 0} \displaystyle\int_{\partial D_\epsilon} H_t\; d\theta\right) dt \\ & = & -\int_0^1 H_t(x_0) dt. \end{eqnarray*} This completes the proof of lemma~\ref{lem4}. \end{proof} \begin{proof}[Proof of proposition~\ref{pro1}] If $f$ is not the identity, Viterbo's proposition~\ref{viterbo} shows that there is at least one point $x_0\in\mathrm{Fix}(f)$ such that the symplectic action $A_I(x_0)$ is different from zero, and lemma~\ref{lem4} shows that $A_I(x_0)$ is equal to the integral $\int_{\mathbb D^2}\mathrm{Linking}(I,x_0,x)\;\omega(x)$. \end{proof} Proposition~\ref{pro1} can also be deduced from of Le Calvez's equivariant foliated version of the Brouwer plane translations theorem (\cite{LeCalvez}), together with a recent result of O. Jaulent, which allows to apply Le Calvez's result in a situation where there are infinitely many fixed points (\cite{Jaulent}). See~\cite[proposition~2.1., item (5)]{BFLM} for more details. Observe that this alternative proof also works in the case where the area form $\omega$ is replaced by any $f$-invariant probability measure, whose support in not contained in $\mathrm{Fix}(f)$. \begin{proof}[Proof of theorem~\ref{thm1}] Assume that $f$ is not the identity. Proposition~\ref{pro1} shows the existence of a point $x_0\in\mathrm{Fix}(f)$ and a point $x\in\mathbb D^2\setminus\{x_0\}$, such that the linking number $\mathrm{Linking}(I,x_0,x)$ is not zero. According to proposition \ref{prop.linking-implies-torsion}, this implies the existence of a point $z\in \mathbb D^2$ such that the torsion $\mathrm{Torsion}(I,z)$ is not null. \end{proof} \section{Proof of theorem \ref{thm2}} \label{s.thm2} The purpose of this section is to prove theorem~\ref{thm2}. Before starting the proof strictly speaking, we need to define a notion of \emph{linking number} for a pair of curves. \begin{defi}\label{defi2} Let $\alpha,\beta:\mathbb R \mapsto \mathbb R^2$ be two curves such that $\alpha(t)\neq\beta(t)$ for all $t\in\mathbb R$. Consider the function $v:\mathbb R\to\mathbb S^1$ given by $v(t)= \frac{\beta(t)-\alpha(t)}{\|\beta(t)-\alpha(t)\|}.$ Choose a continuous lift $\widetilde{v}:\mathbb R \to \mathbb R$ of $v$. The \emph{linking number} of the curves $\alpha$ and $\beta$ is the quantity $$\mathrm{Linking}(\alpha,\beta):=\mathop{\lim}_{t\rightarrow+\infty}\frac{1}{t}\Big(\widetilde{v}(t)-\widetilde{v}(0)\Big),$$ provided that the limit exists. \end{defi} \begin{rem} \label{r.linking-curve-orbits} If the curves $\alpha,\beta$ are defined by $\alpha(t)=f_t(x)$ and $\beta(t)=f_t(y)$ for some isotopy $I=(f_t)_{t\in [0,1]}$ on a surface $S$ and for some points $x,y\in S$, then $\mathrm{Linking}(\alpha,\beta)=\mathrm{Linking}(I,x,y)$. 
\end{rem} The following technical lemma will be used twice in the proof of theorem~\ref{thm2}: \begin{lem} \label{lem3} Consider two curves $\alpha:\mathbb R \to \mathbb R^2$ and $\beta:\mathbb R\to\mathbb R^2$, and assume that there is a positive constant $d$ such that $\|\beta(t)-\alpha(t)\|\geq d$ for all $t \in \mathbb R$. Consider two curves $\alpha':\mathbb R \to \mathbb R^2$ and $\beta':\mathbb R\to\mathbb R^2$ such that $\|\alpha(t)-\alpha'(t)\|\leq \frac{d}{2}$ and $\|\beta(t)-\beta'(t)\|\leq \frac{d}{2}$ for all $t\in\mathbb R$. Then, $$\mathrm{Linking}(\alpha,\beta)=\mathrm{Linking}(\alpha',\beta').$$ \end{lem} \begin{proof} Since $\left\|\big(\alpha(t)-\alpha'(t)\big)+\big(\beta(t)-\beta'(t)\big)\right\|\leq \|\beta(t)-\alpha(t)\|$, the angle between the vectors $\beta(t)-\alpha(t)$ and $\beta'(t)-\alpha'(t)$ is less than $\frac{1}{4}$ turn. So, $\frac{1}{t}\Big|\mathrm{Linking}_t(\alpha,\beta)-\mathrm{Linking}_t(\alpha',\beta')\Big| \leq \frac{1}{4t}.$ Letting $t\rightarrow+\infty$, we obtain $\mathrm{Linking}(\alpha,\beta)=\mathrm{Linking}(\alpha',\beta').$ \end{proof} We are now ready to begin the proof of theorem~\ref{thm2}. We consider a diffeomorphism $f$ of the torus $\mathbb T^2= \mathbb R^2/\mathbb Z^2$ isotopic to the identity and an isotopy $I=(f_t)_{t\in [0,1]}$ joining the $\mathrm{Id}_{\mathbb T^2}$ to $f$ in $\mathrm{Diff}^1(\mathbb T^2)$. This isotopy $I$ can be lifted to an isotopy $\widetilde I=(\widetilde f_t)_{t\in [0,1]}$ joining $\mathrm{Id}_{\mathbb R^2}$ to a lift $\widetilde f$ of $f$ in $\mathrm{Diff}^1(\mathbb R^2)$. As usual, for $t\in\mathbb R$, we set $f_t=f_{t-n} \circ f^{n}$ and $\widetilde f_t=\widetilde f_{t-n} \circ f^{n}$ where $n=\lfloor t\rfloor$. We assume that the rotation set of $\widetilde f$ has non-empty interior. Recall that this hypothesis depends only on $f$, and not on the choice of the lift $\widetilde f$ (two different lifts have the same rotation set up to a translation). We trivialize the tangent bundle of $\mathbb T^2$, using the trivialization induced by the affine structure of $\mathbb R^2$. To prove theorem~\ref{thm2}, we have to find a point $\bar z \in \mathbb T^2$ such that the torsion $\mathrm{Torsion}(I,\bar z)$ is non-zero. We recall that this is equivalent to finding a point $z \in \mathbb R^2$ such that the torsion $\mathrm{Torsion}(\widetilde I, z)$ is non-zero (see remark \ref{rem1}). Let $(p/q,p'/q)$ be a vector with rational coordinates in the interior of the rotation set of $\widetilde f$. Let $g:=f^q$ and $\widetilde g := \widetilde f^q-(p,p')$. It is easy to check that $\rho (\widetilde g) = q \rho (\widetilde f)-(p,p')$; in particular, $(0,0)$ is in the interior of $\rho (\widetilde g)$. Moreover, it is easy to check that $\mathrm{Torsion}(g,z)=q \mathrm{Torsion}(f,z)$, for every $z\in\mathbb R^2$; in particular, $f$ has an orbit with non-zero torsion if and only if it is the case for $g$. Therefore, up to replacing $f$ by $g$, we may ---~and we will~--- assume that $(0,0)$ is in the interior of the rotation set of $\widetilde f$. We will use the following lemma, which is due to J.~Franks: \begin{lem}[Franks, \cite{Franks}] \label{lemFranks} Let $g$ be a homeomorphism of the torus $\mathbb T^2$ that is isotopic to the identity, and $\widetilde g$ be a lift of $g$ to $\mathbb R^2$. Let $(p/q,p'/q)$ be a vector with rational coordinates in the interior of the rotation set of $\widetilde g$. Then, there exits $z \in \mathbb R^2$ such that $\widetilde g^q (z)=z+(p,p')$. 
In other words, each vector with rational coordinates $(p/q,p'/q)$ in the interior of the rotation set of $\widetilde g$ is realized as a rotation vector of a periodic point of $g$ of period $q$. \end{lem} Since the rotation set of $\widetilde f$ has non-empty interior, we can find three affinely independent vectors $u_1,u_2,u_3$ with rational coordinates in the interior of the rotation set of $\widetilde f$. According to lemma \ref{lemFranks}, there exist three periodic orbits of $f$ with respective rotation vectors $u_1,u_2,u_3$. Let $E$ be the finite subset of $\mathbb T^2$ made of the points of these three orbits. Llibre and MacKay \cite{Llibre-MacKay:91} have proved that $f$ is isotopic, relatively to $E$, to a pseudo-Anosov homeomorphism with marked points $\phi$. According to a well-known result of M. Handel (\cite{Han:85}), this implies that $\phi$ is a topological factor of $f$. More precisely, there exists a continuous surjection $h:\mathbb T^2\to \mathbb T^2$, which fixes each point of $E$, which is homotopic to the identity (through a homotopy which fixes each point of $E$), such that $h\circ f=\phi\circ h$. As $h$ is homotopic to the identity, it has a lift $\widetilde h:\mathbb R^2\to\mathbb R^2$ commuting to the action of $\mathbb Z^2$. Then the homeomorphism $\phi$ has a unique lift $\widetilde \phi:\mathbb R^2\to \mathbb R^2$ such that $\widetilde h\circ \widetilde f=\widetilde \phi\circ \widetilde h$. Now, note that, considered as homeomorphism of $\mathbb T^2$, the homeomorphism $\phi$ is isotopic to the identity (since it is isotopic to $f$). So we may consider an isotopy $J=(\phi_t)_{t \in [0,1]}$ joining $Id_{\mathbb T^2}$ to $\phi$ in $\mathrm{Homeo}(\mathbb T^2)$, which lifts to an isotopy $\widetilde J=(\widetilde\phi_t)_{t \in [0,1]}$ joining $Id_{\mathbb R^2}$ to $\widetilde\phi$ in $\mathrm{Homeo}(\mathbb R^2)$. As usual, for $t \in \mathbb R$, we set $\phi_t=\phi_{t-n} \circ \phi^{n}$ and $\widetilde\phi_t=\widetilde\phi_{t-n} \circ \widetilde\phi^{n}$ where $n=\lfloor t\rfloor$. For each $t \in \mathbb R$, the homeomorphism $\widetilde\phi_t$ commutes to the action of $\mathbb Z^2$. \begin{rem} Since $h$ is not one-to-one, it is not possible in general to find an isotopy $(\phi_t)_{t \in [0,1]}$ joining $Id_{\mathbb T^2}$ to $\phi$ such that $h \circ f_t = \phi_t \circ h$, for every $t$. \end{rem} Since $\widetilde h$ commutes to the action of $\mathbb Z^2$, and since $\mathbb R^2/\mathbb Z^2$ is compact, $\widetilde h$ is at a finite uniform distance from the identity; we will denote this distance by $d_1$: $$d_1:=\left\|\widetilde h-Id_{\mathbb R^2}\right\|_{\infty}=\sup_{z \in \mathbb R^2} \mathrm{dist}\left(\widetilde h(z),z\right)<+\infty$$ (we recall that $\mathrm{dist}$ denote the canonical euclidean distance on the affine space $\mathbb R^2$). \begin{lem} \label{lem5} The homeomorphisms $\widetilde f$ and $\widetilde \phi$ have the same rotation set. In particular, $(0,0)$ is in the interior of the rotation set of $\widetilde\phi$. \end{lem} \begin{proof} The proof uses from the conjugacy relation $\widetilde\phi\circ \widetilde h=\widetilde h\circ \widetilde f$, and the fact that $\widetilde h$ is at a finite uniform distance from the identity. 
For every point $x\in\mathbb R^2$ and every positive integer $n$, \begin{eqnarray*} \left|\rho_n\left(\widetilde\phi,\widetilde h(x)\right)-\rho_n\left(\widetilde f,x\right)\right| & = & \left |\frac{1}{n}\left(\widetilde \phi^n\left(\widetilde h(x)\right)-\widetilde h (x)\right)-\frac{1}{n}\left(\widetilde f^n(x)-x\right) \right|\\ & = & \frac{1}{n}\left|\left(\widetilde h\left(\widetilde f^n (x)\right)-\widetilde f^n (x)\right)-\left(\widetilde h(x)-x\right) \right|~\leq~\frac{2d_1}{n}. \end{eqnarray*} Letting $x$ range over $\mathbb R^2$, we obtain that the Hausdorff distance between the sets $\rho_n(\widetilde\phi)$ and $\rho_n(\widetilde f)$ is less than $\frac{2d_1}{n}$. The lemma follows. \end{proof} Theorem~\ref{thm2} will appear as a consequence of the following lemma, together with proposition~\ref{prop.linking-implies-torsion} and lemma~\ref{lem3}: \begin{lem} \label{lem6} Let $d$ be a positive constant. There exist two points $x,y \in \mathbb R^2$ such that \begin{enumerate} \item $x$ and $y$ are periodic points for $\widetilde \phi$, \item for every $t \in \mathbb R$, the distance $\mathrm{dist}(\widetilde\phi_t(x),\widetilde\phi_t(y))$ is bigger than $d$, \item the linking number $\mathrm{Linking}(\widetilde J,x,y)$ is not null (observe that this linking number is well-defined since the orbits of $x$ and $y$ are periodic for $\widetilde\phi$). \end{enumerate} \end{lem} Let us postpone the proof of lemma~\ref{lem6}, and complete the proof of theorem~\ref{thm2} assuming that this lemma is true. \begin{proof}[Proof of theorem~\ref{thm2}, assuming lemma~\ref{lem6}] Since $\widetilde \phi_t$ commutes to the action of $\mathbb Z^2$ and depends continuously on $t$, we can find a constant $d_2$ with the following property: $$\mbox{if } \mathrm{dist}\left(x,y\right)\leq d_1, \mbox{ then } \sup_{t\in [0,1]}(\mathrm{dist}\left(\widetilde\phi_t(x),\widetilde \phi_t(y)\right)\leq d_2 .$$ Set $d:=2d_2$, and consider the points $x,y$ provided by lemma \ref{lem6}. Choose a point $x' \in \widetilde h^{-1}(\{x\})$ and a point $y' \in \widetilde h^{-1}(\{y\})$. Let $t \in \mathbb R$ and $\lfloor t\rfloor=n$. Since $\widetilde h\circ \widetilde f^n=\widetilde \phi^n \circ \widetilde h$ and $\widetilde h(x') = x$, one has $$\mathrm{dist}\left(\widetilde\phi^n(x),\widetilde f^n(x')\right) = \mathrm{dist}\left(\widetilde h \left(\widetilde f^n(x')\right),\widetilde f^n(x')\right) \leq d_1.$$ Hence, $$\mathrm{dist}\left(\widetilde\phi_t(x),\widetilde f_t(x')\right) = \mathrm{dist}\left(\widetilde \phi_{t-n} \left(\widetilde \phi^n(x)\right), \widetilde f_{t-n}\left(\widetilde f^n(x')\right)\right) \leq d_2 = \frac{d}{2}.$$ Similar arguments yield $$\mathrm{dist}\left(\widetilde\phi_t(y),\widetilde f_t(y')\right) \leq \frac{d}{2}.$$ Let $\alpha,\beta,\alpha',\beta':\mathbb R\to\mathbb R^2$ be the curves defined by $\alpha(t)=\widetilde\phi_t(x)$, $\beta(t)=\widetilde\phi_t(y)$, $\alpha'(t)=\widetilde f^t(x')$ and $\beta'(t)=\widetilde f^t(y')$. The inequalities above yield $$\mathrm{dist}\left(\alpha(t),\alpha'(t)\right)\leq \frac{d}{2}\;\; \mathrm{and}\;\;\mbox{dist}\left(\beta(t),\beta'(t)\right)\leq \frac{d}{2}\;\; \mbox{for all}\;\;t,$$ and property~(2) of lemma \ref{lem6} yields $$\mathrm{dist}(\alpha(t),\beta(t))>d,\;\;\mbox{for all}\;\;t\in\mathbb R.$$ Moreover, according to the item~(3) of lemma \ref{lem6}, the quantity $\mathrm{Linking}(\alpha, \beta)$ is non-zero. So lemma \ref{lem3} implies that $\mathrm{Linking}(\alpha', \beta')$ is also non-zero. 
Now observe that $\mathrm{Linking}(\alpha', \beta')=\mathrm{Linking}(\widetilde I, x, y)$. Hence, we have found two points $x,y\in\mathbb R^2$ such that $\mathrm{Linking}(\widetilde I, x, y)$ is non-zero. According to proposition~\ref{prop.linking-implies-torsion}, this implies the existence of a point $z\in\mathbb R^2$ such that $\mathrm{Torsion}(\widetilde I,z)$ is non-zero. If $\bar z$ is the projection of $z$ in $\mathbb T^2$, then we have $\mathrm{Torsion}(I,\bar z)=\mathrm{Torsion}(\widetilde I,z)\neq 0$. This completes the proof of theorem~\ref{thm2}. \end{proof} Now, we are left to prove lemma~\ref{lem6}. The main tool of the proof will be a Markov partition for the diffeomorphism $\widetilde\phi$. Let us recall some basic facts about Markov partitions. Let $g$ be homeomorphism of a (non-necessarly compact) surface $S$. A \emph{rectangle} in $S$ is a topological embedding of $[0,1]^2$ in $S$. A \emph{Markov partition} for $g$ is a covering of $S$ by a locally finite collection of rectangles with pairwise disjoint interiors, such that, for every $i,j\in I$ $g(R_i)$ intersects $R_j$ in a certain way (we will not need the precise definition, but only some properties that we will be stated later; see e.g.~\cite{FLP} for a precise definition). Given a Markov partition $\mathcal M=\{R_i\}_{i\in I}$, we call a \emph{$g$-chain of rectangles of $\mathcal M$} a finite sequence $c=(R_{i_1},\dots,R_{i_n})$ of rectangles in $\mathcal M$ such that $g(R_{i_k})$ intersects $R_{i_{k+1}}$ for $i=1,\dots,n-1$. The \emph{length} of a $g$-chain $c=(R_{i_1},\dots,R_{i_n})$ is $n-1$. A close $g$-chain of rectangles of $\mathcal M$ is of course a $g$-chain $c=(R_{i_1},\dots,R_{i_n})$ such that $R_{i_n}=R_{i_1}$. If $x\in S$ is a periodic point of period $p$ for $g$, the \emph{$\mathcal M$-itinerary} of $g$ is the closed $g$-chain of rectangles $c=(R_{i_0},\dots,R_{i_p}=R_{i_0})$ such that $g^k(x)\in R_{i_k}$ for $k=0\dots p$. We will use the following properties of Markov partitions: \begin{facts}[see for example~\cite{FLP}] \label{f.Markov}~ \begin{enumerate} \item If $\mathcal M=\{R_i\}_{i\in I}$ is a Markov partition for a surface homeomorphism $g$, then every closed $g$-chain of rectangles of~$\mathcal M$ of length $p$ is the itinerary of a periodic point of period $p$. \item Every pseudo-Anosov homeomorphism admits a (finite) Markov partition. \item Let $g$ be a homeomorphism of a surface $S$, and $\widetilde g$ be a lift of $g$ to the universal covering~$\widetilde S$ of~$S$. Let $\mathcal M$ be a Markov partition for $g$, and $\widetilde\mathcal M$ be the lift of $\mathcal M$ in $\widetilde S$ (\emph{i.e.} the collection of all the lifts of the rectangles of $\mathcal M$). Then $\widetilde\mathcal M$ is a Markov partition for $\widetilde g$. \end{enumerate} \end{facts} We are now ready to begin the proof of lemma~\ref{lem6}. \begin{proof}[Proof of lemma~\ref{lem6}] According to the second item of facts~\ref{f.Markov}, the pseudo-Anosov homeomorphism $\phi$ admits a finite Markov partition $\mathcal M$. We denote by $\widetilde \mathcal M$ the lift of this Markov partition in $\mathbb R^2$ (\emph{i.e.} the collection of all the lifts of the rectangles of $\mathcal M$). According to the facts stated above, $\widetilde\mathcal M$ is a Markov partition for $\widetilde\phi$. Note that, if $\widetilde R$ is a rectangle in $\widetilde \mathcal M$, and $(p,p')$ is a vector in $\mathbb Z^2$, then $\widetilde R+(p,p')$ is also a rectangle of $\widetilde\mathcal M$. 
To prove the lemma \ref{lem6}, we will construct a closed $\widetilde\phi$-chain $\Gamma$ of rectangles of $\widetilde\mathcal M$ with the following properties: one can find a fundamental domain $\Delta$ for the action of $\mathbb Z^2$ on $\mathbb R^2$, such that ``$\Gamma$ makes one turn around $\Delta$" and ``$\Gamma$ stays far away from $\Delta$". The closed chain $\Gamma$ will be the itinerary of a periodic point $y$ of $\widetilde\phi$. Franks'lemma \ref{lemFranks} will provide us with a fixed point $x$ of $\widetilde\phi$ in $\Delta$. The property `$\Gamma$ makes one turn around $\Delta$" will imply that the orbits of $x$ and $y$ will have non-zero linking number (item~(3) of lemma~\ref{lem6}). The property ``$\Gamma$ stays far away from $\Delta$" will imply that the orbits of $x$ and $y$ will stay far from each other ((item~(2) of lemma~\ref{lem6}). Let us start the construction of the closed $\widetilde\phi$-chain $\Gamma$: \setcounter{claim}{0} \begin{claim} \label{Affirm3} There exists a constant $K\in \mathbb N$ such that for every couple of rectangles $(\widetilde S,\widetilde T)$ in $\widetilde \mathcal M$, there exists a vector $(p,p')\in\mathbb Z^2$ and a finite $\widetilde \phi$-chain of rectangles of $\widetilde\mathcal M$ of length smaller than $K$ going from $\widetilde S$ to $\widetilde T+(p,p')$. \end{claim} \begin{proof} We recall that a pseudo-Anosov homeomorphism is always transitive (see e.g.~\cite{FLP}). The topological transitivity of $\phi$ implies that, for every couple of rectangles $S,T$ of $\mathcal M$, there exists an integer $k_{ST}$ such that $\phi^{k_{ST}}(S)$ intersects $T$. Hence, for any lift $\widetilde S$ of $S$ and any lift $\widetilde T$ of $T$, there is a vector $(p,p')\in\mathbb Z^2$ such that $\widetilde\phi^{k_{ST}}(\widetilde S)$ intersects $\widetilde T+(p,p')$. In particular, there is a finite $\widetilde \phi$-chain of rectangles of $\widetilde\mathcal M$ of length smaller than $k_{ST}$ joining the rectangle $\widetilde S$ to the rectangle $\widetilde T+(p,p')$. Since $\mathcal M$ has a finite number of rectangles , the integers $k_{ST}$ are uniformly bounded. This completes the proof of the claim. \end{proof} We pick a rectangle $\widetilde R$ of $\widetilde\mathcal M$. \begin{claim} \label{Affirm4} There exist three vectors $(p_1,p_1'),\; (p_2,p_2'),\;(p_3,p_3')$ in $\mathbb Z^2$ such that: \begin{enumerate} \item $(0,0)$ is in the interior of the convex hull of the vectors $(p_1,p_1'),\; (p_2,p_2'),\;(p_3,p_3')$, \item for $i \in \{1,2,3\}$, there exists a finite $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$, denoted $c_i$, joining $\widetilde R$ to a translate $\widetilde R+(p_i,p_i')$ of $\widetilde R$. We denote $k_i$ the length of the chain $c_i$. \end{enumerate} \end{claim} \begin{proof} Let $K$ be the constant given by claim~\ref{Affirm3}. Since $\widetilde\phi$ is at finite uniform distance from the identity, there exists a constant $L$ with the following property: if $\widetilde S$ and $\widetilde T$ are two rectangles of $\widetilde\mathcal M$ such that there is a $\widetilde \phi$-chain of rectangles of length smaller than $K$ going from $\widetilde S$ to $\widetilde T$, then $\mathrm{dist}(x,y)\leq L$ for every $x\in\widetilde S$ and every $y\in\widetilde T$. 
Since $(0,0)$ is in the interior of the rotation set of $\widetilde\phi$, we can find three vectors $v_1,v_2,v_3$ in the rotation set of $\widetilde\phi$ such that $(0,0)$ is in the interior of the convex hull of $v_1,v_2,v_3.$ Then, we can find a constant $\eta > 0$ such that if $w_1,w_2,w_3$ are vectors in $\mathbb R^2$ such that $\|w_i-v_i\| < \eta$ for $i=1,2,3$, then $(0,0)$ is in the interior of the convex hull of $w_1,w_2,w_3$. Let $N$ be an integer such that $\frac{L}{N}<\frac{\eta}{3}$. For $i=1,2,3,$ since $v_i$ is in the rotation set of $\widetilde\phi$, we can find a point $z_i \in \mathbb R^2$ and an integer $n_i\geq N$ such that \begin{equation} \label{e.presque-v_i} \left\|\frac{1}{n_i}\left(\widetilde{\phi}^{n_i}(z_i)-z_i\right)-v_i\right\| < \frac{\eta}{3}. \end{equation} Let $\widetilde S_i$ and $\widetilde T_i$ be the rectangles of $\widetilde\mathcal M$ containing respectively the points $z_i$ and $\widetilde\phi^{n_i}(z_i)$. Now we construct the $\widetilde\phi$-chain of rectangles $c_i$. \begin{itemize} \item[--] Claim~1 provides us with a vector $(r_i,r_i')$ in $\mathbb Z^2$ and a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ of length less than $K$ going from $\widetilde R$ to $\widetilde S_i+(r_i,r_i')$. \item[--] By definition of the rectangles $\widetilde S_i$ and $\widetilde T_i$, and since $\widetilde\phi$ commutes with the action of $\mathbb Z^2$, the image under $\widetilde\phi^{n_i}$ of the rectangle $\widetilde S_i+(r_i,r_i')$ intersects the rectangle $\widetilde T_i+(r_i,r_i')$. In particular, there is a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ going from $\widetilde S_i+(r_i,r_i')$ to $\widetilde T_i+(r_i,r_i')$. \item[--] Claim~1 provides us with a vector $(p_i,p_i')$ in $\mathbb Z^2$ and a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ of length less than $K$ going from $\widetilde T_i+(r_i,r_i')$ to $\widetilde R+(p_i,p_i')$. \end{itemize} Concatenating these three $\widetilde\phi$-chains of rectangles of $\widetilde\mathcal M$, we obtain a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ going from $\widetilde R$ to $\widetilde R+(p_i,p_i')$, which we denote by $c_i$. We will now prove that $\|\frac{1}{n_i}(p_i,p_i')-v_i\| < \eta$. Fix a point $z$ in $\widetilde R$. Since there exists a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ of length less than $K$ going from $\widetilde R$ to $\widetilde S_i+(r_i,r_i')$, we have $\|z-(z_i+(r_i,r_i'))\|\leq L.$ The same arguments yield $\|\widetilde\phi^{n_i}(z_i+(r_i,r_i'))-(z+(p_i,p_i'))\|\leq L.$ So, we get \begin{eqnarray*} && \Big\|\frac{1}{n_i}(p_i,p_i')-v_i\Big\| = \Big\|\frac{1}{n_i}\left(\left(z+(p_i,p_i')\right)-z\right)-v_i\Big\|\\ & \leq &\frac{1}{n_i} \left\| (z+(p_i,p_i'))-\widetilde\phi^{n_i}(z_i+(r_i,r_i')) \right\| + \Big\| \frac{1}{n_i} \left(\widetilde{\phi}^{n_i}(z_i+(r_i,r_i'))-(z_i+(r_i,r_i'))\right)-v_i\Big\| \\ &&+ \frac{1}{n_i}\Big\|\left(z_i+(r_i,r_i')\right)-z\Big\|\\ & \leq & \frac{L}{n_i}+\frac{\eta}{3}+\frac{L}{n_i} \;\leq\;\eta. \end{eqnarray*} By definition of $\eta$, these inequalities imply that $(0,0)$ is in the interior of the convex hull of the vectors $\frac{1}{n_1}(p_1,p_1'),\; \frac{1}{n_2}(p_2,p_2'),\; \frac{1}{n_3}(p_3,p_3')$. Therefore $(0,0)$ is also in the interior of the convex hull of the vectors $(p_1,p_1'),\; (p_2,p_2'),\;(p_3,p_3')$. \end{proof} We will now construct closed $\widetilde\phi$-chains of rectangles of $\widetilde\mathcal M$. 
If $c=(\widetilde R_1,\dots,\widetilde R_n)$ is a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$, and $(p,p')$ is a vector in $\mathbb Z^2$, we denote by $c+(p,p')$ the sequence of rectangles $(\widetilde R_1+(p,p'),\dots,\widetilde R_n+(p,p'))$. Observe that $c+(p,p')$ is a $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$, since $\widetilde\phi$ commutes with the action of $\mathbb Z^2$. If $c$ and $c'$ are two $\widetilde\phi$-chains of rectangles of $\widetilde\mathcal M$ such that the last rectangle of $c$ is equal to the first rectangle of $c'$, we denote by $c\vee c'$ the $\widetilde\phi$-chain of rectangles obtained by concatenating $c$ and $c'$. Since $(p_1,p_1')$, $(p_2,p_2')$, $(p_3,p_3')$ are vectors with integral coordinates, and since $(0,0)$ is in the interior of the convex hull of these vectors, there exist three positive integers $\ell_1,\ell_2,\ell_3$ such that $$\ell_1(p_1,p_1')+\ell_2(p_2,p_2')+\ell_3(p_3,p_3')=(0,0).$$ (Indeed, $(0,0)$ is a combination of these three integral vectors with positive real coefficients; since such coefficients solve a linear system with integral coefficients, they can be chosen to be positive rational numbers, and it remains to clear denominators.) For $i=1,2,3$, and $n\in \mathbb N\setminus\{0\}$, we consider the $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ defined as follows: $$\Gamma_{i,n}=c_i \vee \Big(c_i+(p_i,p_i')\Big) \vee \Big(c_i+2(p_i,p_i')\Big) \vee\dots\vee \Big(c_i+(n\ell_i-1)(p_i,p_i')\Big).$$ This is a chain of length $n\ell_ik_i$, joining the rectangle $\widetilde R$ to the rectangle $\widetilde R+n\ell_i(p_i,p_i')$. Now, for $n\in\mathbb N\setminus\{0\}$, we consider the $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ defined as follows: $$\Gamma_n:=\Gamma_{1,n}\vee \Big(\Gamma_{2,n}+n\ell_1(p_1,p_1')\Big)\vee\Big(\Gamma_{3,n}+n\ell_1(p_1,p_1')+n\ell_2(p_2,p_2')\Big).$$ This is a closed $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$, starting and ending at $\widetilde R$. We denote by $p_n=n(\ell_1k_1+\ell_2k_2+\ell_3k_3)$ the length of this $\widetilde\phi$-chain $\Gamma_n$. For every $n\in\mathbb N\setminus\{0\}$, since $\Gamma_n$ is a closed $\widetilde\phi$-chain of rectangles of $\widetilde\mathcal M$ of length $p_n$, and since $\widetilde\mathcal M$ is a Markov partition for $\widetilde\phi$, there exists a point $y_n\in\widetilde R$ which is periodic of period $p_n$ for $\widetilde\phi$, and such that the itinerary of the orbit of $y_n$ is precisely $\Gamma_n$. We will prove that the orbit of $y_n$ is very close to an affine triangle. We pick a point $y$ in the rectangle $\widetilde R$. For every $n\in\mathbb N\setminus\{0\}$, we consider the triangle $$T_n:=\mathrm{Conv}\left(y \,,\, y+n\ell_1(p_1,p_1') \,,\, y+n\left(\ell_1(p_1,p_1')+\ell_2(p_2,p_2')\right)\right).$$ Recall that $p_n=n(\ell_1k_1+\ell_2k_2+\ell_3k_3)$. We consider the parametrization $\tau_n:[0,p_n]\to\partial T_n$ of the boundary of $T_n$ defined by the following properties: \begin{itemize} \item[--] $\tau_n$ is affine on each of the intervals $[0,n\ell_1k_1]$, $[n\ell_1k_1,n(\ell_1k_1+\ell_2k_2)]$ and $[n(\ell_1k_1+\ell_2k_2),p_n]$, \item[--] $\tau_n(0)=y$, $\tau_n(n\ell_1k_1)=y+n\ell_1(p_1,p_1')$, $\tau_n\left(n(\ell_1k_1+\ell_2k_2)\right)=y+n\left(\ell_1(p_1,p_1')+\ell_2(p_2,p_2')\right)$ and $\tau_n(p_n)=y$. \end{itemize} The trajectory of $t\mapsto \widetilde\phi_t(y_n)$ and the triangle $T_n$ are depicted in figure~\ref{f.triangle}. The following claim states that the distance from the trajectory of $t\mapsto \widetilde\phi_t(y_n)$ to the boundary of the triangle $T_n$ is bounded independently of $n$.
\begin{claim} \label{claim-3} There exists $D$ such that $\mathrm{dist}\left(\widetilde\phi_t(y_n) ,\tau_n(t) \right)\leq D$ for every $t\in [0,p_n]$ and every $n\in\mathbb N$. \end{claim} \begin{proof} Let us first observe that, for $t\in [0,n\ell_1k_1]$, one has $\tau_n(t)=y+\frac{t}{k_1}(p_1,p_1')$. Now consider the constant $$D_1:=\sup_{z\in\widetilde R\,,\,t\in [0,k_1]} \mathrm{dist}\left(\widetilde\phi_t(z) \;,\; \tau_n(t)\right)=\sup_{z\in\widetilde R\,,\,t\in [0,k_1]} \mathrm{dist}\left(\widetilde\phi_t(z) \;,\; y+\frac{t}{k_1}(p_1,p_1')\right).$$ For every $n\in\mathbb N\setminus\{0\}$ and every $j\in \{0,\dots,n\ell_1-1\}$, the point $\widetilde\phi_{jk_1}(y_n)-j(p_1,p_1')$ is in the rectangle $\widetilde R$. So, for every $n\in\mathbb N\setminus\{0\}$, every $j\in \{0,\dots,n\ell_1-1\}$ and every $t\in [0,k_1]$, one has $$\mathrm{dist}\left(\widetilde\phi_t\left(\widetilde\phi_{jk_1}(y_n)-j(p_1,p_1')\right) \;,\; y+\frac{t}{k_1}(p_1,p_1')\right) \leq D_1,$$ or equivalently $$\mathrm{dist}\left(\widetilde\phi_{jk_1+t}(y_n) \;,\; \tau_n(jk_1+t)\right) = \mathrm{dist}\left(\widetilde\phi_{jk_1+t}(y_n) \;,\; y+\left(j+\frac{t}{k_1}\right)(p_1,p_1')\right) \leq D_1.$$ This shows that the distance between the points $\widetilde\phi_t(y_n)$ and $\tau_n(t)$ is bounded from above by $D_1$, for every $n\in\mathbb N\setminus\{0\}$ and for every $t\in [0,n\ell_1k_1]$. Similar arguments show the existence of some constants $D_2$ and $D_3$, such that the distance between the points $\widetilde\phi_t(y_n)$ and $\tau_n(t)$ is bounded from above by $D_2$ and $D_3$, respectively for $t\in [n\ell_1k_1,n\ell_1k_1+n\ell_2k_2]$ and for $t\in [n\ell_1k_1+n\ell_2k_2,p_n]$. This completes the proof of claim~\ref{claim-3}. \end{proof} \begin{claim} \label{claim-4} For $n$ large enough, there exists a fixed point $x_n$ of $\widetilde\phi$ such that, for every $t\in\mathbb R$, $$\widetilde\phi_t(x_n)\in T_n\quad\mbox{and}\quad\mathrm{dist}\left(\widetilde\phi_t(x_n)\;,\;\partial T_n\right) > D+d.$$ \end{claim} \begin{proof} Consider the real constant $d_3$ defined as follows: $$d_3:=\sup_{t\in [0,1]} \left\|\widetilde\phi_t-\mathrm{Id} \right\|_\infty = \sup_{z\in\mathbb R^2}\sup_{t\in [0,1]} \mathrm{dist}_{\mathbb R^2}\left(\widetilde\phi_t(z),z\right).$$ The triangle $T_1$ has non-empty interior, and the triangle $T_n$ is the image of $T_1$ under a homothety of ratio $n$. It follows that there exists $n_0\in\mathbb N$ such that, for $n\geq n_0$, one can find a fundamental domain $\Delta_n$ for the action of $\mathbb Z^2$ on $\mathbb R^2$ such that $\Delta_n\subset T_n$ and $\mathrm{dist}(\partial\Delta_n,\partial T_n)\geq D+d+d_3$. Now, since $(0,0)$ is in the interior of the rotation set of $\widetilde\phi$, lemma~\ref{lemFranks} implies that $\widetilde\phi$ has a fixed point $x$. For $n\geq n_0$, let $x_n$ be an element of $x+\mathbb Z^2$ which is in the fundamental domain $\Delta_n$. Note that $x_n$ is also a fixed point of $\widetilde\phi$, since $\widetilde\phi$ commutes with the action of $\mathbb Z^2$. The properties of the fundamental domain $\Delta_n$ and the definition of the constant $d_3$ imply that $\widetilde\phi_t(x_n)\in T_n$ and $\mathrm{dist}\left(\widetilde\phi_t(x_n),\partial T_n\right) > D+d$. This completes the proof of claim~\ref{claim-4}. See figure~\ref{f.triangle}.
\end{proof} Claims~\ref{claim-3} and~\ref{claim-4} imply that, for $n$ large enough, for every $t\in [0,p_n]$, one has $$\left\|\widetilde\phi_t(x_n) - \widetilde\phi_t(y_n)\right\| \geq \left\|\widetilde\phi_t(x_n) - \tau_n(t)\right\| - \left\|\tau_n(t) - \widetilde\phi_t(y_n)\right\| \geq d.$$ Now consider the curves $\alpha_n:\mathbb R\to\mathbb R^2$ and $\beta_n:\mathbb R\to\mathbb R^2$, defined by $\alpha_n(t)=\widetilde\phi_t(x_n)$ and $\beta_n(t)=\widetilde\phi_t(y_n)$. The curve $\tau_n|_{[0,p_n]}$ is a parametrization of the boundary of the triangle $T_n$; the image of the curve $\alpha_n$ is contained in $T_n$. Therefore we have $$\mathrm{Linking}(\alpha_n,\tau_n)=\pm\frac{1}{p_n}$$ (the sign depends on the orientation induced by the parametrization $\tau_n$ on $\partial T_n$). Claim~\ref{claim-3} implies that $\|\tau_n(t)-\beta_n(t)\|\leq D$ for every $t\in\mathbb R$ and every $n\in\mathbb N\setminus\{0\}$. Claim~\ref{claim-4} implies that there exists an integer $n_0$ such that $\|\alpha_n(t)-\tau_n(t)\|>D+d$ for every $t\in\mathbb R$ and every $n\geq n_0$. According to lemma~\ref{lem3}, this implies that $\mathrm{Linking}(\alpha_n,\beta_n)=\mathrm{Linking}(\alpha_n,\tau_n)$ for every $n\geq n_0$. Lastly, observe that $\mathrm{Linking}(\alpha_n,\beta_n)=\mathrm{Linking}\left(\widetilde J, x_n , y_n\right)$ (by definition of the curves $\alpha_n$ and $\beta_n$). So we finally get $$\mathrm{Linking}\left(\widetilde J, x_n , y_n\right)=\pm \frac{1}{p_n}$$ for every $n\geq n_0$. This completes the proof of lemma~\ref{lem6}. See figure~\ref{f.triangle}. \end{proof} \begin{figure} \caption{This figure depicts the situation for $n$ bigger than $n_0$. The blue curve represents the trajectory $t\mapsto \widetilde\phi_t(x_n)$. The red curve represents the trajectory $t\mapsto \widetilde\phi_t(y_n)$. The triangle $T_n$ is drawn in black. The red curve stays at distance less than $D$ from the black triangle. The distance between the blue curve and the black triangle is bigger than $D+d$. Hence, the distance between the red curve and the blue curve is bigger than $d$.} \label{f.triangle} \end{figure} \end{document}
\begin{document} \title[Solutions to Polynomial Congruences in Small Boxes]{On Solutions to Some Polynomial Congruences in Small Boxes} \author{Igor E.~Shparlinski} \address{Department of Computing, Macquarie University, Sydney, NSW 2109, Australia} \email{[email protected]} \begin{abstract} We use bounds of mixed character sums to study the distribution of solutions to certain polynomial systems of congruences modulo a prime $p$. In particular, we obtain nontrivial results about the number of solutions in boxes with side length below $p^{1/2}$, which seems to be the limit of more general methods based on the bounds of exponential sums along varieties. \end{abstract} \subjclass[2010]{11D79, 11K38} \keywords{Multivariate congruences, distribution of points} \maketitle \section{Introduction} There is an extensive literature investigating the distribution of solutions to the system of congruences \begin{equation} \label{eq:syst} F_j(x_1, \ldots, x_n) \equiv 0 \pmod p, \qquad j =1, \ldots, m, \end{equation} where $F_j(X_1,\ldots,X_n) \in \mathbb{Z}[X_1,\ldots,X_n]$, $j=1, \ldots, m$, are polynomials in $n$ variables with integer coefficients, modulo a prime $p$, see~\cite{Fouv,FoKa,Luo,ShpSk,Skor}. In particular, subject to some additional condition (related to the so-called $A$-number), Fouvry and Katz~\cite[Corollary~1.5]{FoKa} have given an asymptotic formula for the number of solutions to~\eqref{eq:syst} in a box $$(x_1, \ldots, x_n) \in [0, h-1]^n $$ for a rather small $h$. In fact the limit of the method of~\cite{FoKa} is $h = p^{1/2 + o(1)}$. Here we consider a very special class of systems of $s+1$ polynomial congruences \begin{equation} \label{eq:prod} x_1 \ldots x_n \equiv a \pmod p, \end{equation} and \begin{equation} \label{eq:diag} c_{1,j}x_1^{k_{1,j}} + \ldots + c_{n,j} x_n^{k_{n,j}} \equiv b_j \pmod p, \qquad j =1, \ldots, s, \end{equation} where $a,b_j, c_{i,j}, k_{i,j} \in \mathbb{Z}$, with $\gcd(a c_{i,j},p) = 1$, $i=1, \ldots, n$, $j =1, \ldots, s$, and $3 \le k_{i,1} < \ldots < k_{i,s}$. The interest in the systems of congruences~\eqref{eq:prod} and~\eqref{eq:diag} stems from the work of Fouvry and Katz~\cite{FoKa}, where a particular case of the congruence~\eqref{eq:prod} and just one congruence of the type~\eqref{eq:diag} (that is, for $s=1$) with the same odd exponents $k_{1,1} = \ldots = k_{n,1} = k$ and $b_1 = 0$ is given as an example of a variety to which one of their main general results applies. In particular, in this case and for $k \ge 3$, $b_1=0$ (and fixed non-zero coefficients) we see that~\cite[Theorem~1.5]{FoKa} gives an asymptotic formula for the number of solutions with $1 \le x_i \le h$, $i=1, \ldots, n$, starting from the values of $h$ of size about $\max\{p^{1/2 + 1/n}, p^{3/4}\} \log p$. Here we show that a different and more specialised treatment allows us to significantly lower this threshold, which now in some cases reaches $p^{1/4+\kappa}$ for any $\kappa > 0$. Furthermore, this applies to the systems~\eqref{eq:prod} and~\eqref{eq:diag} in full generality and is uniform with respect to the coefficients. More precisely, we use a combination of \begin{itemize} \item the bound of mixed character sums due to Chang~\cite{Chang}; \item the result of Ayyad, Cochrane, and Zheng~\cite{ACZ} on the fourth moment of short character sums; \item the bound of Wooley~\cite{Wool3} on exponential sums with polynomials.
\end{itemize} We note that the classical P{\'o}lya-Vinogradov and Burgess bounds of multiplicative character sums (see~\cite[Theorems~12.5 and 12.6]{IwKow}), in combination with a result of Ayyad, Cochrane, and Zheng~\cite{ACZ}, have been used in~\cite{Shp1,Shp2} to study the distribution of the single congruence~\eqref{eq:prod} in very small boxes, and thus to go below the $p^{1/2}$-threshold. Here we show that the recent result of Chang~\cite{Chang} now enables us to study the much more general case of the simultaneous congruences~\eqref{eq:prod} and~\eqref{eq:diag}. Throughout the paper, the implied constants in the symbols ``$O$'' and ``$\ll$'' can depend on the degrees $k_{i,j}$ in~\eqref{eq:prod} and~\eqref{eq:diag} as well as, occasionally, on some other polynomials involved. We recall that the expressions $A \ll B$ and $A=O(B)$ are each equivalent to the statement that $|A|\le cB$ for some constant $c$. \section{Character and Exponential Sums} Let ${\mathcal X}_p$ be the set of multiplicative characters modulo $p$ and let ${\mathcal X}_p^* = {\mathcal X}_p \setminus \{\chi_0\}$ be the set of non-principal characters. We also denote $$ {\mathbf{\,e}}_p(z) = \exp(2 \pi i z/p). $$ We appeal to~\cite{IwKow} for a background on the basic properties of multiplicative characters and exponential functions, such as orthogonality. The following bound on exponential sums twisted with a multiplicative character has been given by Chang~\cite{Chang} for sums in arbitrary finite fields, but only for intervals starting at the origin. However, a simple examination of the argument of~\cite{Chang} reveals that this is not important for the proof: \begin{lem} \label{lem:Chang} For any character $\chi \in {\mathcal X}_p^*$, a polynomial $F(X) \in \mathbb{Z}[X]$ of degree $k$ and any integers $u$ and $h\ge p^{1/4+\kappa}$, we have $$\sum_{x=u+1}^{u+h} \chi(x) {\mathbf{\,e}}_p(F(x)) \ll h p^{-\eta}, $$ where $$ \eta = \frac{\kappa^2}{4(1+2 \kappa)(k^2+2k + 3)}. $$ \end{lem} We note that we do not impose any conditions on the polynomial $F$ in Lemma~\ref{lem:Chang}. On the other hand, when $\chi = \chi_0$, we use the following very special case of the much more general bound of Wooley~\cite{Wool3} that applies to polynomials with arbitrary real coefficients. \begin{lem} \label{lem:Wooley} For any polynomial $F(X) \in \mathbb{Z}[X]$ of degree $k > 2$ with the leading coefficient $a_k \not \equiv 0 \pmod p$, and any integers $u$ and $h$ with $ h < p$, we have $$\sum_{x=u+1}^{u+h} {\mathbf{\,e}}_p(F(x)) \ll h^{1-1/ 2k(k-2)} + h^{1-1/2(k-2)}p^{1/ 2k(k-2)}. $$ \end{lem} Clearly Lemma~\ref{lem:Wooley} is nontrivial only for $h \ge p^{1/k}$, which is actually the best possible range. Furthermore, in a slightly shorter range we have: \begin{cor} \label{cor:Wooley short} For any polynomial $F(X) \in \mathbb{Z}[X]$ of degree $k > 2$ with the leading coefficient $a_k \not \equiv 0 \pmod p$, and any integers $u$ and $h$ with $p^{1/(k-1)}\le h < p$, we have $$\sum_{x=u+1}^{u+h} {\mathbf{\,e}}_p(F(x)) \ll h^{1-1/ 2k(k-2)}. $$ \end{cor} We make use of the following estimate of Ayyad, Cochrane and Zheng~\cite[Theorem~1]{ACZ}. \begin{lem} \label{lem:ACZ} Uniformly over integers $u$ and $h\le p$, the congruence $$ x_1 x_2 \equiv x_3 x_4 \pmod p, \qquad u+1 \le x_1,x_2,x_3,x_4 \le u+h, $$ has $h^{4}/p + O(h^{2}p^{o(1)})$ solutions as $h\to\infty$. \end{lem} We note that Lemma~\ref{lem:ACZ} is essentially a statement about the fourth moment of short character sums, see~\cite[Equation~(4)]{ACZ}.
In fact, the next result makes it clearer: \begin{cor} \label{cor:ACZ-weights} Let $\rho(x)$ be an arbitrary complex-valued function with $$ |\rho(x)| \le 1, \qquad x\in \mathbb{R}. $$ Uniformly over integers $u$ and $h\le p$, we have $$ \sum_{\chi\in {\mathcal X}_p} \left| \sum_{x=u+1}^{u+h} \rho(x) \chi(x)\right|^4 \le h^{4} + O\(h^{2}p^{1+o(1)}\), $$ as $h\to\infty$. \end{cor} \begin{proof} Expanding the fourth power, and changing the order of summation, we obtain \begin{equation*} \begin{split} \sum_{\chi\in {\mathcal X}_p} &\left| \sum_{x=u+1}^{u+h} \rho(x) \chi(x)\right|^4 \\ &= \sum_{\chi\in {\mathcal X}_p} \sum_{x_1, \ldots, x_4=u+1}^{u+h} \rho(x_1) \rho(x_2) \rho(x_3) \rho(x_4) \chi(x_1x_2 x_3^{-1} x_4^{-1})\\ &= \sum_{x_1, \ldots, x_4=u+1}^{u+h} \rho(x_1) \rho(x_2) \rho(x_3) \rho(x_4) \sum_{\chi\in {\mathcal X}_p} \chi(x_1x_2 x_3^{-1} x_4^{-1}). \end{split} \end{equation*} Using the orthogonality of characters, we write \begin{equation*} \begin{split} \sum_{\chi\in {\mathcal X}_p} &\left| \sum_{x=u+1}^{u+h} \rho(x) \chi(x)\right|^4 \\ &= \sum_{\chi\in {\mathcal X}_p} \sum_{x_1, \ldots, x_4=u+1}^{u+h} \rho(x_1) \rho(x_2) \rho(x_3) \rho(x_4) \chi(x_1x_2 x_3^{-1} x_4^{-1})\\ &= (p-1) \sum_{\substack{x_1, \ldots, x_4=u+1\\ x_1x_2 \equiv x_3 x_4 \pmod p}}^{u+h} \rho(x_1) \rho(x_2) \rho(x_3) \rho(x_4) \\ & \le (p-1) \sum_{\substack{x_1, \ldots, x_4=u+1\\ x_1x_2 \equiv x_3 x_4 \pmod p}}^{u+h} 1. \end{split} \end{equation*} Using Lemma~\ref{lem:ACZ} we derive the desired bound. \end{proof} \section{Main Result} We are now able to present our main result. Let ${\mathfrak B}$ be a cube of the form $$ {\mathfrak B} = [u_1+1,u_1+h]\times \ldots \times [u_n+1,u_n+h] $$ with some integers $h,u_i$ with $1 \le u_i +1 < u_i+h < p$, $i=1, \ldots, n$. We denote by $N_p({\mathfrak B})$ the number of integer vectors $$ (x_1, \ldots, x_n) \in {\mathfrak B} $$ satisfying~\eqref{eq:prod} and~\eqref{eq:diag} simultaneously. As we have mentioned, the case of just one congruence~\eqref{eq:prod} has been considered in~\cite{Shp1,Shp2}, so we always assume that $s \ge 1$ (and thus $n \ge 3$). Let \begin{equation*} \begin{split} k & = \min\{k_{i,j}~:~i =1, \ldots, n, \ j =1, \ldots, s\},\\ K & = \max\{k_{i,j}~:~i =1, \ldots, n, \ j =1, \ldots, s\}. \end{split} \end{equation*} \begin{thm} \label{thm:N Asymp} For any fixed $\kappa > 0$ and $$ p > h \ge \max\{ p^{1/4+\kappa}, p^{1/(k-1)}\} $$ we have $$ N_p({\mathfrak B}) = \frac{h^n}{p^{s+1}} + O\( h^{n} p^{-1-\eta(n-4)} + h^{n-2} p^{-\eta(n-4)}\), $$ where $$ \eta = \frac{\kappa^2}{4(1+2 \kappa)(K^2+2K + 3)}. $$ \end{thm} \begin{proof} Using the orthogonality of characters, we write \begin{equation*} \begin{split} N_p({\mathfrak B}) = \sum_{(x_1, \ldots, x_n) \in {\mathfrak B}} \frac{1}{p^s} \sum_{\lambda_1, \ldots, \lambda_s =0}^{p-1} & {\mathbf{\,e}}_p\( \sum_{j=1}^s \lambda_j \(\sum_{i=1}^n c_{i,j}x_i^{k_{i,j}}- b_j\)\) \\ & \qquad \qquad \quad \frac{1}{p-1} \sum_{\chi\in {\mathcal X}_p} \chi(x_1 \ldots x_na^{-1}) . \end{split} \end{equation*} Hence, changing the order of summation, we obtain \begin{equation*} \begin{split} N_p({\mathfrak B}) = \frac{1}{(p-1) p^s} & \sum_{\lambda_1, \ldots, \lambda_s =0}^{p-1} {\mathbf{\,e}}_p\( - \sum_{j=1}^s \lambda_j b_j \)\\ &\sum_{\chi\in {\mathcal X}_p} \chi(a^{-1}) \prod_{i=1}^n S_i(\chi; \lambda_1, \ldots, \lambda_s), \end{split} \end{equation*} where $$ S_i(\chi; \lambda_1, \ldots, \lambda_s) = \sum_{x=u_i+1}^{u_i+h} \chi(x)\, {\mathbf{\,e}}_p\(\sum_{j=1}^s \lambda_j c_{i,j} x^{k_{i,j}}\), \quad i =1, \ldots, n.
$$ Separating the term $h^n/((p-1) p^s)$, corresponding to $\chi=\chi_0$ and $\lambda_1 =\ldots= \lambda_s =0$, we derive \begin{equation} \label{eq:R1R2} N_p({\mathfrak B}) - \frac{h^n}{(p-1) p^s} \ll \frac{1}{p^{s+1}}\(R_1 + R_2\), \end{equation} where \begin{equation*} \begin{split} R_1 &= \sum_{\lambda_1, \ldots, \lambda_s =0}^{p-1} \sum_{\chi\in {\mathcal X}_p^*} \prod_{i=1}^n |S_i(\chi; \lambda_1, \ldots, \lambda_s)|,\\ R_2 &= \sum_{\substack{\lambda_1, \ldots, \lambda_s =0\\ (\lambda_1, \ldots, \lambda_s) \ne (0, \ldots, 0)}}^{p-1} \prod_{i=1}^n |S_i(\chi_0; \lambda_1, \ldots, \lambda_s)|. \end{split} \end{equation*} To estimate $R_1$ we use Lemma~\ref{lem:Chang} and write $$ R_1 \le h^{n-4} p^{-\eta(n-4)} \sum_{\lambda_1, \ldots, \lambda_s =0}^{p-1} \sum_{\chi\in {\mathcal X}_p^*} \prod_{i=1}^4 |S_i(\chi; \lambda_1, \ldots, \lambda_s)|. $$ Using the H{\"o}lder inequality and Corollary~\ref{cor:ACZ-weights}, we obtain \begin{equation*} \begin{split} \sum_{\chi\in {\mathcal X}_p^*} \prod_{i=1}^4 |S_i(\chi; \lambda_1, \ldots, \lambda_s)| \le \(\prod_{i=1}^4 \sum_{\chi\in {\mathcal X}_p^*} |S_i(\chi; \lambda_1, \ldots, \lambda_s)|^4\)^{1/4}&\\ \ll h^{4} + h^{2}&p^{1+o(1)} . \end{split} \end{equation*} Therefore, \begin{equation} \label{eq:R1 bound} R_1 \ll h^{n} p^{s-\eta(n-4)} + h^{n-2} p^{s+1-\eta(n-4)}. \end{equation} Furthermore, for $R_2$ we use Corollary~\ref{cor:Wooley short} to derive $$ R_2 \le h^{(n-2)(1-1/2K(K-2))} \sum_{\substack{\lambda_1, \ldots, \lambda_s =0\\ (\lambda_1, \ldots, \lambda_s) \ne (0, \ldots, 0)}}^{p-1} \prod_{i=1}^2 |S_i(\chi_0; \lambda_1, \ldots, \lambda_s)|. $$ Using the H{\"o}lder inequality and the orthogonality of exponential functions (similarly to the proof of Corollary~\ref{cor:ACZ-weights}), we obtain \begin{equation*} \begin{split} \sum_{\substack{\lambda_1, \ldots, \lambda_s =0\\ (\lambda_1, \ldots, \lambda_s) \ne (0, \ldots, 0)}}^{p-1} & \prod_{i=1}^2 |S_i(\chi_0; \lambda_1, \ldots, \lambda_s)|\\ &\le \(\prod_{i=1}^2 \sum_{\lambda_1, \ldots, \lambda_s =0}^{p-1} |S_i(\chi_0; \lambda_1, \ldots, \lambda_s)|^2\)^{1/2} \ll p^sh. \end{split} \end{equation*} Thus \begin{equation} \label{eq:R2 bound} R_2 \ll h^{n -1 - (n-2)/ 2K(K-2)} p^{s}. \end{equation} Substituting the bounds~\eqref{eq:R1 bound} and~\eqref{eq:R2 bound} in~\eqref{eq:R1R2} we obtain \begin{equation*} \begin{split} N_p({\mathfrak B}) - & \frac{h^n}{p^{s+1}} \\ &\ll h^{n} p^{-1-\eta(n-4)} + h^{n-2} p^{-\eta(n-4)}+ h^{n -1 - (n-2)/ 2K(K-2)} p^{-1} .\end{split} \end{equation*} Clearly, $$ \eta < \frac{1}{2K(K-2)}. $$ Thus we see that the second term always dominates the third term and the result follows. \end{proof} \section{Comments} Clearly, for any $\kappa > 0$, $k\ge 5$ and $p > h \ge p^{1/4+\kappa}$, Theorem~\ref{thm:N Asymp} implies that $$ N_p({\mathfrak B})= (1 + o(1)) \frac{h^n}{p^{s+1}}, $$ as $p\to \infty$, provided that $$ n \ge (s+1/2) \eta^{-1}+4. $$ For $k=3$ and $4$ the range of Theorem~\ref{thm:N Asymp} becomes $h \ge p^{1/2}$ and $h \ge p^{1/3}$. However it is easy to see that using the full power of Lemma~\ref{lem:Wooley} instead of Corollary~\ref{cor:Wooley short} one can derive nontrivial results in a wider range. Namely, for any $\kappa > 0$ there exists some $\gamma > 0$ (independent of $n$ and other parameters in~\eqref{eq:prod} and~\eqref{eq:diag}) such that, for $h \ge p^{1/3+\kappa}$ if $k = 3$ and for $h \ge p^{1/4+\kappa}$ if $k = 4$, we have $$ N_p({\mathfrak B})= \frac{h^n}{(p-1)p^{s}} + O\(h^{(1 - \gamma)n}\).
$$ We also recall that for polynomials of small degrees stronger versions of Lemma~\ref{lem:Wooley} are available, see~\cite{BoWo} and references therein. Note that the same method can be applied (with essentially the same results) to the systems of congruences where instead of~\eqref{eq:prod} we have a more general congruence $$ x_1^{m_1} \ldots x_n^{m_n} \equiv a \pmod p $$ for some integers $m_i$ with $\gcd(m_i,p-1)=1$, $i=1, \ldots, n$. Moreover, we recall that the Weil bound~\cite[Appendix~5, Example~12]{Weil} (see also~\cite[Chapter~6, Theorem~3]{Li}) and the standard reduction between complete and incomplete sums (see~\cite[Section~12.2]{IwKow}) imply that $$\sum_{x=u+1}^{u+h} \chi(G(x)) {\mathbf{\,e}}_p(F(x)) \ll p^{1/2} \log p, $$ where $G(x)$ is a polynomial that is not a perfect power of any other polynomial in the algebraic closure $\overline \mathbb{F}_p$ of the finite field of $p$ elements. Thus, for $h \ge p^{1/2+\kappa}$, using this bound instead of Lemma~\ref{lem:Chang} allows us to replace~\eqref{eq:prod} with the congruence $$ G_1(x_1) \ldots G_n(x_n) \equiv a \pmod p $$ for arbitrary polynomials $G_1(X), \ldots, G_n(X)\in \mathbb{Z}[X]$ such that their reductions modulo $p$ are not perfect powers in $\overline \mathbb{F}_p$. In fact, even for $G_1(X) = \ldots= G_n(X)= X$ (that is, for the congruence~\eqref{eq:prod}) this leads to a result which is sometimes stronger than those of~\cite{FoKa} and Theorem~\ref{thm:N Asymp}. \section{Acknowledgment} The author is very grateful to Mei-Chu Chang for the confirmation that the main result of~\cite{Chang} applies to intervals in an arbitrary position. This work was supported in part by the ARC Grant DP1092835. \end{document}
\begin{document} \title[Cheeger constant and conformal infinity]{The Cheeger constant of an asymptotically locally hyperbolic manifold and the Yamabe type of its conformal infinity} \author{Oussama Hijazi} \address[Oussama Hijazi]{Institut {\'E}lie Cartan, Universit{\'e} de Lorraine, Nancy, B.P. 70239, 54506 Vand\oe uvre-L{\`e}s-Nancy Cedex, France.} \email{[email protected]} \author{Sebasti{\'a}n Montiel} \address[Sebasti{\'a}n Montiel]{Departamento de Geometr{\'\i}a y Topolog{\'\i}a, Universidad de Granada, 18071 Granada, Spain.} \email{[email protected]} \author{Simon Raulot} \address[Simon Raulot]{Laboratoire de Math\'ematiques R. Salem UMR $6085$ CNRS-Universit\'e de Rouen Avenue de l'Universit\'e, BP.$12$ Technop\^ole du Madrillet $76801$ Saint-\'Etienne-du-Rouvray, France.} \email{[email protected]} \begin{abstract} Let $(M,g)$ be an $(n+1)$-dimensional asymptotically locally hyperbolic (ALH) manifold with a conformal compactification whose conformal infinity is $(\partial M,[\gamma])$. We will first observe that ${\mathcal Ch}(M,g)\le n$, where ${\mathcal Ch}(M,g)$ is the Cheeger constant of $M$. We then prove that, if the Ricci curvature of $M$ is bounded from below by $-n$ and its scalar curvature approaches $-n(n+1)$ fast enough at infinity, then ${\mathcal Ch}(M,g)= n$ if and only if ${\mathcal Y}(\partial M,[\gamma])\ge 0$, where ${\mathcal Y}(\partial M,[\gamma])$ denotes the Yamabe invariant of the conformal infinity. This gives an answer to a question raised by J. Lee \cite{L}. \end{abstract} \keywords{Conformally compact manifold, Asymptotically hyperbolic manifold, Cheeger constant, Isoperimetric inequalities, Yamabe type} \subjclass{Differential Geometry, Global Analysis, 53C27, 53C40, 53C80, 58G25} \date {\today} \maketitle \pagenumbering{arabic} \section{Introduction} We will study asymptotically locally hyperbolic (ALH) manifolds in the more general setting of conformally compact manifolds. During the last decades, due to the important role that they play in the so-called anti-de Sitter/conformal field theory (AdS/CFT) correspondence (see \cite{Bi}, for instance), this class of Riemannian manifolds has attracted a great deal of interest in both the physical and mathematical realms. The mathematical aspects relative to the existence and the behavior near infinity of conformally compact manifolds satisfying the Einstein condition were first studied in the seminal paper by C. Fefferman and C. Graham \cite{FG}. This particular class of conformally compact manifolds are usually called Poincar{\'e}-Einstein (PE) spaces (they necessarily have negative constant scalar curvature). As in many papers about asymptotically hyperbolic manifolds, here we will drop the Einstein condition and call an ALH manifold any conformally compact Riemannian manifold whose scalar curvature is asymptotically constant (also necessarily negative). This implies that the same must occur for its sectional curvatures. In some sense, ALH manifolds are just those looking like PE spaces at infinity. On the other hand, a Riemannian manifold $(M,g)$ is called conformally compact if it is a connected complete manifold whose metric extends conformally to a compact manifold with (non-necessarily connected) boundary whose interior is the original manifold. So, by means of this extended conformal metric, the corresponding original metric determines a conformal structure on the boundary $(\partial M,[\gamma])$, which is usually called the conformal infinity or the boundary at infinity.
In this setting, a natural question is to look for relations between Riemannian invariants of an $(n+1)$-dimensional ALH manifold $M$, $n\ge 2$, and conformal invariants of the $n$-dimensional conformal boundary $(\partial M,[\gamma])$. A beautiful result in this direction was obtained by J. Lee in \cite{L}. In fact, he proved that if ${\mathcal Y}(\partial M,[\gamma])\ge 0$, then $\lambda_{1,2}(M)=\frac{n^2}{4}$, where ${\mathcal Y}(\partial M,[\gamma])$ denotes the Yamabe invariant of the compact conformal manifold $(\partial M,[\gamma])$, which is the infimum of the total scalar curvature functional over unit-volume metrics in the conformal class $[\gamma]$, and $\lambda_{1,2}(M)$ is the infimum of the $L^2$ spectrum of the Laplacian of $M$. In this way, he thoroughly extended a result which was known to occur when $M={\mathbb H}^{n+1}/\Gamma$ is a geometrically finite and cusp-free quotient of the hyperbolic space by a Kleinian group, a consequence of previous works by D. Sullivan and by R. Schoen and S.-T. Yau (see \cite{Su,SY}). J. Lee pointed out that his theorem is not sharp, in the sense that there are ALH manifolds $M$ with ${\mathcal Y}(\partial M,[\gamma])<0$ but still $\lambda_{1,2}(M)=\frac{n^2}{4}$, and raised the question of finding a necessary and sufficient condition on the geometry of $M$ for ${\mathcal Y}(\partial M,[\gamma])\ge 0$. In this direction, C. Guillarmou and J. Qing proved in \cite{GQ} that, when $M$ is a PE space with $n>2$, ${\mathcal Y}(\partial M,[\gamma])> 0$ if and only if the largest real scattering pole of $M$ is less than $\frac{n}{2}-1$. In this work, we answer the aforementioned question (see Theorem \ref{maintheorem}) by relating the Yamabe type of the boundary at infinity to the value of the Cheeger constant ${\mathcal Ch}(M,g)$ of the ALH manifold $(M,g)$ (see Section \ref{SectionUpper} for a precise definition), namely \begin{theoremNN} Let $(M,g)$ be an $(n+1)$-dimensional conformally compact Riemannian manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$, whose Ricci tensor and scalar curvature satisfy $$ \hbox{\rm Ric}_g+ng\ge 0,\qquad R_g+n(n+1)=o\big(r^2\big), $$ where $r$ is any defining function on $M$. Then $$ {\mathcal Ch}(M,g)= n\quad\Longleftrightarrow\quad{\mathcal Y}(\partial M,[\gamma])\ge 0. $$ \end{theoremNN} Since it is not difficult to observe that ${\mathcal Ch}(M,g)\le n$ for each $(n+1)$-dimensional ALH manifold (see Corollary \ref{upper-Cheeger}), an equivalent statement of our characterization is to say that ${\mathcal Y}(\partial M,[\gamma])\ge 0$ if and only if the following linear isoperimetric inequality \begin{equation}\label{LII} A(\partial\Omega)\ge nV(\Omega) \end{equation} holds for all compact domains $\Omega\subset M$ (see Corollary \ref{yamabe-yau}). This isoperimetric inequality was well-known to be valid for hyperbolic spaces, and S.-T. Yau proved that it is also true on complete simply connected Riemannian manifolds with sectional curvatures bounded from above by $-1$ (see \cite{Y} and \cite[Theorem 34.2.6]{BZ}). Another direct consequence of our main result is the generalization of the result of Lee on the bottom of the $L^2$ spectrum of the Laplacian of $M$ to the principal eigenvalue of its $p$-Laplacian (see Theorem \ref{lambda}). It is worth mentioning (and maybe useful to the reader) that, in a different context of hyperbolicity, J. Cao \cite{C} also explored the relation between the geometric properties of a Gromov-hyperbolic space and some diverse features of its boundary at infinity.
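To fix ideas, it is perhaps worth recalling how the two sides of the linear isoperimetric inequality (\ref{LII}) compare on geodesic balls of the model space ${\mathbb H}^{n+1}$; the following elementary computation is only meant as an illustration and is not used in the sequel. In geodesic polar coordinates the hyperbolic metric reads $ds^2+(\sinh^2 s)\,\gamma_{{\mathbb S}^n}$, so that, for the geodesic ball $B_s$ of radius $s$ centered at the origin,
$$
A(\partial B_s)=\omega_n\sinh^n s,\qquad V(B_s)=\omega_n\int_0^s\sinh^n t\,dt,
$$
where $\omega_n$ stands for the volume of the round unit sphere $({\mathbb S}^n,\gamma_{{\mathbb S}^n})$. Since $\frac{d}{dt}\sinh^n t=n\sinh^{n-1}t\,\cosh t\ge n\sinh^n t$, we get $A(\partial B_s)\ge n\,V(B_s)$ for every $s>0$, in accordance with (\ref{LII}), while $\sinh^n t\sim 2^{-n}e^{nt}$ implies
$$
\frac{A(\partial B_s)}{V(B_s)}\longrightarrow n\qquad\mbox{as }s\rightarrow+\infty.
$$
Hence the isoperimetric quotients of geodesic balls are always at least $n$ and tend to $n$, so that the Cheeger constant of ${\mathbb H}^{n+1}$ equals $n$, which illustrates both Yau's inequality and the upper bound ${\mathcal Ch}(M,g)\le n$ mentioned above.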
\section{Conformally compact Riemannian manifolds} Let ${\overline M}$ be a (connected) compact $(n+1)$-manifold with (non-ne\-ce\-ssa\-ri\-ly connected) boundary and $n\ge 2$. The interior of ${\overline M}$ will be denoted by $M$ and its boundary by $\partial M$. If $g$ is a smooth Riemannian metric on $M$, the open Riemannian manifold $(M,g)$ is said to be {\em conformally compact} of order $C^{m,\alpha}$ if, for some (and hence for all) smooth defining function $\rho$, the smooth conformal metric $\rho^2 g$ on $M$ extends to a $C^{m,\alpha}$ Riemannian metric $\overline{g}$ on ${\overline M}$. Here $C^{m,\alpha}$ denotes the classical H\"older space for $m\in\mathbb{N}$ and $\alpha\in[0,1]$. Recall that a $C^1$ map $\rho:{\overline M}\rightarrow\mathbb{R}$ is a {\em defining function} of the boundary if it is a non-negative function such that $\rho^{-1}(\{0\})=\partial M$ and $d\rho\ne 0$ everywhere on $\partial M$. It is obvious that there are many different defining functions for $\partial M$, but all the corresponding extended metrics $\overline{g}=\rho^2 g$ will have conformally equivalent restrictions to the boundary $\partial M$. Then, if $\gamma=\overline{g}_{|\partial M}$, the conformal manifold $(\partial M,[\gamma])$ is well defined and depends only on $(M,g)$. This pair is called the {\em conformal infinity} of $(M,g)$. The simplest example of a conformally compact Riemannian manifold is the hyperbolic space ${\mathbb H}^{n+1}$ which can be realized as the Riemannian manifold $\big(B^{n+1},\frac{4|dx|^2}{(1-|x|^2)^2}\big)$. Here $B^{n+1}=\{x\in {\mathbb R}^{n+1}\,/\,|x|<1\}$ is the $(n+1)$-dimensional Euclidean unit open ball and $|dx|^2$ is the flat Euclidean metric. In this situation, the map $x\in B^{n+1}\mapsto(1-|x|^2)/2$ is a defining function for the boundary $\partial B^{n+1}={\mathbb S}^n$ and the corresponding conformal infinity is then easily seen to be $({\mathbb S}^n,[g_0])$, where $[g_0]$ denotes the conformal class of the round metric $g_0$ of constant sectional curvature $1$ on ${\mathbb S}^n$. Assume now that the conformally compact Riemannian manifold $(M,g)$ is of order at least $C^{2}$. Then using the relation of the Riemannian curvature tensors under conformal changes of metrics (see, for instance, \cite[p.\! 59]{Be}), it can be easily seen that all its sectional curvatures $K_g$ uniformly approach $-|{\overline \nabla}\rho|_{\overline{g}}^2$ as $\rho\rightarrow 0$. Here ${\overline\nabla}$ is the gradient operator corresponding to the metric ${\overline g}$ and the norm is taken with respect to the same metric as that of the gradient. So, it is clear that the quantity $|{\overline \nabla}\rho|_{\overline{g}}$ restricted to $\partial M$ depends only on the original metric $g$. Thus, {\em conformally compact manifolds of order at least $C^{2}$ are asymptotically negatively curved}. From this observation, we will say that a conformally compact Riemannian manifold $(M,g)$ is {\em asymptotically locally hyperbolic (ALH)} when $|{\overline \nabla}\rho|_{\overline{g}|\partial M}$ is constant, that we normalize to be equal to $1$. It immediately follows that we have $K_g\rightarrow -1$ near infinity and so the Ricci tensor satisfies $\hbox{Ric}_g\rightarrow-ng$, that is, the manifold seems to be Einstein with Ricci curvature $-n$ when one moves towards infinity. Obviously, the scalar curvature $R_g$ tends to the constant value $-n(n+1)$. 
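As a simple illustration (not needed in what follows), consider again the Poincar{\'e} ball model $\big(B^{n+1},\frac{4|dx|^2}{(1-|x|^2)^2}\big)$ with the defining function $\rho(x)=(1-|x|^2)/2$. Then $\overline{g}=\rho^2 g=|dx|^2$ is the flat metric, ${\overline\nabla}\rho=-x$ and hence
$$
|{\overline \nabla}\rho|_{\overline{g}}=|x|,
$$
which is identically equal to $1$ on $\partial B^{n+1}={\mathbb S}^n$. Thus the hyperbolic space is ALH in the above sense and, in this model case, the asymptotic relations $K_g\rightarrow -1$, $\hbox{Ric}_g\rightarrow -ng$ and $R_g\rightarrow -n(n+1)$ are in fact exact identities.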
Conversely, from the transformation rules of the Ricci tensor and the scalar curvature of conformal metrics, it can be seen that any of these asymptotical behaviors for $K_g$, $\hbox{Ric}_g$ or $R_g$ implies that $|{\overline \nabla}\rho|_{\overline{g}|\partial M}=1$ for any defining function $\rho$. This means that {\em a $C^{2}$ conformally compact Riemannian manifold is ALH if and only if it is asymptotically Einstein, that is, $\hbox{\rm Ric}_g+ng\rightarrow 0$ uniformly}. As we just noticed, it is also equivalent to the fact that {\em the scalar curvature is asymptotically constant, that is, $R_g+n(n+1)\rightarrow 0$ uniformly as $\rho\rightarrow 0$}. In particular, this occurs when the manifold $(M,g)$ is supposed to be Einstein. In this case, $(M,g)$ is often called a Poincar{\'e}-Einstein manifold (in short, a PE manifold) and we have $\hbox{Ric}_g+ng=0$. The weaker condition on the constant scalar curvature $R_g +n(n+1)=0$ implies that $(M,g)$ is an ALH manifold as well. In the general non-necessarily Einstein ALH case, if we assume $(M,g)$ to be conformally compact of order at least $C^{3,\alpha}$, we can modify any given smooth defining function $\rho$ to get another one $r\in C^{2,\alpha}({\overline M})$ such that the corresponding extended conformal metric ${\overline g}= r^2g$ is of class $C^{2,\alpha}$ and $|{\overline\nabla r}|_{\overline{g}}\equiv 1$ in a collar neighborhood of the boundary at infinity $\partial M$. More precisely, we have \begin{lemma}{\rm ({\cite[Lemma 5.2]{GL}, \cite[Lemma 5.1]{L}}\label{defining-geodesic})} Let $(M,g)$ be an ALH manifold of class $C^{m,\alpha}$ with $m\ge 3$ and $0<\alpha<1$. For each choice of a metric $\gamma$ on its conformal infinity $(\partial M,[\gamma])$, there exists a defining function $r\in C^{m-1,\alpha}({\overline M})$ uniquely determined in a neighborhood of $\partial M$ such that the extended conformal metric ${\overline g}=r^2g$ is of class $C^{m-1,\alpha}$, $|{\overline\nabla} r|_{\overline{g}}\equiv 1$ in this neighborhood and with \begin{equation}\label{Taylor} {\overline g}=dr^2+g_r=dr^2+\gamma+rg^{(1)}+r^2g^{(2)}+O(r^{2+\alpha}), \end{equation} where $O(r^{2+\alpha})$ is a symmetric two-tensor on $\partial M$. Moreover $g^{(i)}$ is of class $C^{2-i,\alpha}$ for $i=0,1,2$ and is computable from the iterated Lie derivatives of the extended metric: \begin{equation}\label{Lie} g^{(i)}=\frac1{i!}\left.{\mathcal L}_{\overline\nabla r}^{(i)}{\overline g}\right|_{r=0}. \end{equation} We will say that such a function $r$ is {\em the geodesic defining function} associated with the metric $\gamma$. \end{lemma} For these reasons, we will always assume in this paper that the ALH hyperbolic manifolds considered are of class $C^{m,\alpha}$ with $m\geq 3$ in such a way that for any choice of a metric in the conformal infinity we have a compactification of class $C^{2,\alpha}$ for which the compactified metric has an expansion given by (\ref{Taylor}). A particularly interesting class of such manifolds are the PE spaces or, as discussed in the next section, the weakly Poincar\'e-Einstein (WPE) spaces. \section{Upper bounds for the Cheeger constant of ALH manifolds}\label{SectionUpper} In this section, we will see that the Yamabe type of the conformal infinity $(\partial M,[\gamma])$ has a direct influence on the isoperimetric behavior of the large regions of $(M,g)$. In fact, we will observe that these properties can be expressed using the Cheeger constant of $M$. 
In a first step, we collect some curvature properties for level hypersurfaces (near infinity) of any geodesic defining function of the boundary. More precisely, suppose that $(M,g)$ is an ALH manifold and fix $\gamma\in[\gamma]$ and $r$ the corresponding geodesic defining function. For $r>0$ sufficiently small, the level sets $\Sigma_r=\{r={\rm const}.\}$ are smooth compact embedded hypersurfaces. If $H_r$ denotes the (inner) mean curvature of $\Sigma_r$, it is straightforward to observe from the first equality in (\ref{Taylor}) that \begin{eqnarray}\label{MeanCurvatureF1} H_r = \frac{1}{2n}r^{2}{\rm Tr}_{g_r}\big(-r\partial_r(r^{-2}g_r)\big) = 1-\frac{r}{2n}{\rm Tr}_{g_r}(\partial_rg_r). \end{eqnarray} Then using the second equality in (\ref{Taylor}), we compute that \begin{eqnarray}\label{MeanCurvatureF2} H_r = 1-\frac{r}{2n}{\rm Tr}_\gamma(g^{(1)})+\frac{1}{n}\Big(\frac{1}{2}{\rm Tr}_\gamma(A_{\overline{g}}^2)-{\rm Tr}_\gamma(g^{(2)})\Big)r^2+O(r^{2+\alpha}) \end{eqnarray} where $A_{\overline{g}}$ is the symmetric endomorphism of the tangent bundle of $\partial M$ with respect to $\gamma$ defined by $A_{\overline{g}}:=\gamma^{-1}g^{(1)}$. We immediately deduce from (\ref{MeanCurvatureF1}) that $H_r$ is $C^{2,\alpha}$ in $r$ and $H_0=1$. Assume now for a moment that the manifold is weakly Poincar\'e-Einstein (WPE) in the sense that the coefficients in the asymptotic expansion (\ref{Taylor}) of $g$ are given by $g^{(1)}=0$ and \begin{eqnarray*} \quad{g^{(2)}}=-P_\gamma=-\frac{1}{n-2}\Big(\hbox{\rm Ric}_\gamma-\frac{R_\gamma}{2(n-1)}\gamma\Big) \end{eqnarray*} where $\hbox{\rm Ric}_\gamma$, $R_\gamma$ and $P_\gamma$ denote respectively the Ricci tensor, the scalar curvature and the Schouten tensor of $\gamma$. It can be shown that these conditions are equivalent to a second order decay assumption of the Ricci tensor of the metric $g$ namely $|\hbox{\rm Ric}_g+ng|_g=o(r^2)$. Then in this situation we observe that formula (\ref{MeanCurvatureF2}) directly implies that $H'_0=0$ and $H''_0=R_\gamma/(n(n-1))$ so that the value on the boundary of the second derivative of the mean curvature of the level sets of the geodesic defining function with respect to $\gamma$ is precisely encoded by the scalar curvature of this metric. The purpose of the next proposition is to show that a similar result holds under weaker curvature assumptions. By keeping the notations introduced in the above discussion, we have \begin{proposition}\label{H-limit} Let $(M,g)$ be an $(n+1)$-dimensional ALH manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$. Then $H_r$ extends to a $C^{2,\alpha}$ function at $r=0$ with $H_0=1$. If, in addition, we have $R_g+n(n+1)=o\big(r^2\big)$, then \begin{equation}\label{meancur-1} H'_0=0, \qquad H''_0\le \frac{R_\gamma} {n(n-1)}. \end{equation} If, moreover, we have $\hbox{\em Ric}_g\ge -ng$, then for $\varepsilon>0$ sufficiently small, \begin{equation}\label{meancur-2} H_r\ge 1+\frac{R_\gamma}{2n(n-1)}r^2,\qquad 0\le r\le \varepsilon. \end{equation} \end{proposition} \begin{remark}{\rm It is clear that the decay conditions on the scalar and the Ricci curvatures are slightly hardening the ALH condition. Note however that they are obviously satisfied when $(M,g)$ is a PE space or a WPE space. 
}\end{remark} \noindent{\textit {Proof.} } As before, we will work in a collar neighborhood of $\partial M$ where $|{\overline \nabla}r|_{\overline{g}}^2= 1$ for $r$ the unique geodesic defining function associated with $\gamma$ whose existence is ensured by Lemma \ref{defining-geodesic}. Note that we have already proved that $H_r$ is a $C^{2,\alpha}$ function in $r$ with $H_0=1$. First observe that since $g$ and $\overline{g}=r^2g$ are two conformally related metrics, their scalar curvatures satisfy \begin{equation}\label{step2-r} \frac1{r}\big(R_g+n(n+1)\big)=rR_{\overline g}+2n{\overline\Delta}r. \end{equation} So if we assume that $R_g+n(n+1)=o(r^2)$, the previous identity implies in particular that ${\overline\Delta}r_{|\partial M}=0$. On the other hand, since $\overline{g}=dr^2+g_r$ we compute that \begin{eqnarray}\label{LaplaceForm-1} {\overline\Delta}r_{|\partial M}=\frac{1}{2}{\rm Tr}_\gamma(g^{(1)}) \end{eqnarray} and then $H'_0=-{\rm Tr}_\gamma(g^{(1)})/(2n)=0$ where the first equality follows from (\ref{MeanCurvatureF2}). Moreover, since \begin{eqnarray}\label{VolumeForm} \frac{\partial_r\sqrt{{\rm det}(g_r)}}{\sqrt{{\rm det}(g_r)}}=\frac{1}{2}{\rm Tr}_{g_r}(\partial_rg_r)=r\Big({\rm Tr}_\gamma(g^{(2)})-\frac{1}{2}{\rm Tr}_\gamma(A_{\overline{g}}^2)\Big)+O(r^{1+\alpha}) \end{eqnarray} we also deduce from (\ref{step2-r}) that \begin{eqnarray} \frac{R_g+n(n+1)}{r^2}=R_{\overline{g}}+2n\Big({\rm Tr}_\gamma(g^{(2)})-\frac{1}{2}{\rm Tr}_\gamma(A_{\overline{g}}^2)\Big)+O(r^{\alpha}). \end{eqnarray} Our assumption on the asymptotic behavior of the scalar curvature implies that \begin{eqnarray}\label{DerSecMean} H''_0=-\frac{2}{n}\Big({\rm Tr}_\gamma(g^{(2)})-\frac{1}{2}{\rm Tr}_\gamma(A_{\overline{g}}^2)\Big)=\frac{1}{n^2}R_{\overline{g}|\partial M} \end{eqnarray} where the first equality follows from (\ref{MeanCurvatureF2}). Now we note that the mean curvature $H_r$ is easily computable using the well-known relation between the two mean curvatures of a hypersurface corresponding to two conformal metrics $g=\frac1{r^2}{\overline g}$ on the ambient space (see \cite{E}, for instance): \begin{eqnarray}\label{step4-r} H_r=r\big({\overline H}_r-{\overline g}( {\overline\nabla}\log \frac1{r},{\overline N}_r)\big)=1+r{\overline H}_r. \end{eqnarray} Here ${\overline N}_r=\overline{\nabla}r$ (resp. ${\overline H}_r$) denotes the inner unit normal (resp. the mean curvature) of $\Sigma_r$ with respect to the metric ${\overline g}$. This identity immediately implies that ${\overline H}_0=0$. Moreover, since $r$ is in fact the $\overline{g}$-distance from the boundary, the mean curvature function $\overline{H}_r$ satisfies the well-known Riccati equation (which can be deduced from \cite[p.\! 44]{P}) \begin{eqnarray}\label{bochner} n{\overline H}'_r=|{\overline\sigma}_r|_{\overline{g}}^2 +\hbox{\rm Ric}_{\overline g}(\overline{N}_r,\overline{N}_r) \end{eqnarray} where $\overline{\sigma}_r$ is the second fundamental form of $\Sigma_r$ with respect to the metric ${\overline g}$. On the other hand, since ${\overline H}_0=0$, the Gau{\ss} formula implies that $$ \hbox{\rm Ric}_{\overline g}({\overline N}_0,{\overline N}_0) = \frac{1}{2}\Big(R_{{\overline g}|\partial M}-R_\gamma-|{\overline\sigma}_0|_{\overline{g}}^2\Big) $$ and so (\ref{bochner}) for $r=0$ writes \begin{eqnarray*} \overline{H}'_0=\frac{1}{2n}\left({R_{\overline g}}_{|\partial M}-R_\gamma+|{\overline\sigma}_0|_{\overline{g}}^2\right). 
\end{eqnarray*} This with formula (\ref{step4-r}) yields to \begin{eqnarray*} H''_0=\frac{1}{n}\left({R_{\overline g}}_{|\partial M}-R_\gamma+|{\overline\sigma}_0|_{\overline{g}}^2\right) \end{eqnarray*} which, when combined with (\ref{DerSecMean}), gives \begin{eqnarray*} H''_0=\frac{1}{n(n-1)}\left(R_\gamma-|{\overline\sigma}_0|_{\overline{g}}^2\right). \end{eqnarray*} The inequality in (\ref{meancur-1}) follows directly. We assume now in addition that $\hbox{\rm Ric}_g\ge -ng$. Since $g$ and $\overline{g}$ are conformally related we compute (see \cite[p.\! 59]{Be}) that \begin{equation}\label{step1-r} r(\hbox{\rm Ric}_g+ng)=r\hbox{\rm Ric}_{\overline g}+(n-1) {\overline\nabla}^2r+({\overline\Delta}r) {\overline g} \end{equation} where ${\overline\nabla}^2$ denotes the Hessian of a function with respect to $\overline{g}$. Applying this formula to the vector field $\frac{{\overline\nabla}r}{r}$ and using the fact that $|\overline{\nabla} r|_{\overline{g}}=1$ yield to \begin{eqnarray}\label{ricn-1} \hbox{\rm Ric}_{\overline g}({\overline N}_r,{\overline N}_r)-\frac{n\overline{H}_r}{r}\geq 0 \end{eqnarray} since ${\overline H}_r=-\frac1{n}{\overline\Delta}r$. Taking the limit as $r\rightarrow 0$ implies that \begin{equation}\label{ricn-2} \hbox{\rm Ric}_{\overline g}({\overline N}_0,{\overline N}_0)\geq n\overline{H}'_0. \end{equation} However from (\ref{bochner}) with $r=0$, we observe that this inequality is in fact an equality so that \begin{eqnarray}\label{curv-hyp} \overline{\sigma}_0=0\quad\text{and}\quad \overline{H}'_0=\frac{R_\gamma}{2n(n-1)}. \end{eqnarray} On the other hand, combining (\ref{bochner}) and (\ref{ricn-1}) we get \begin{eqnarray*} \overline{H}'_r-\frac{\overline{H}_r}{r}\geq 0 \end{eqnarray*} and then the map $r\mapsto \overline{H}_r/r$ is non decreasing. This property with (\ref{curv-hyp}) gives (\ref{meancur-2}). {q.e.d.} Recall that a compact connected Riemannian manifold is said to be of positive (respectively, negative or zero) Yamabe type when its metric can be conformally deformed into a metric with positive (respectively, negative or zero) constant scalar curvature. This Yamabe type is precisely encoded by the sign of its Yamabe invariant ${\mathcal Y}(\partial M,[\gamma])$. In fact, this number (and so its sign) only depends on the conformal structure of the manifold so that any compact connected Riemannian manifold must belong just to one of these three conformal types. The next proposition gives a first relation between the Yamabe type of a connected component of the conformal boundary $(\partial M,[\gamma])$ and the asymptotic behavior of the isoperimetric profile of certain compact domains in $M$. More precisely, we have \begin{proposition}\label{large-regions} Let $(M,g)$ be an $(n+1)$-dimensional ALH manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$. Let $r$ be the geodesic defining function associated with a metric $\gamma$ in the conformal infinity $(\partial M,[\gamma])$. For each $r>0$ small enough, there exists a compact domain $\Omega_r\subset M$ with smooth boundary $\partial\Omega_r$ such that \begin{equation}\label{H-zero} \lim_{r\rightarrow 0}\frac{A_r}{V_r}=n, \end{equation} where $A_r$ and $V_r$ denote respectively the $n$-dimensional Riemannian area of $\partial\Omega_r$ and the Riemannian volume of the domain $\Omega_r$. 
Moreover, if we assume that a connected component of $\partial M$ has negative Yamabe invariant and that the scalar curvature $R_g$ of $(M,g)$ satisfies $$ R_g+n(n+1)=o(r^2) $$ near this component for some defining function $r$, then $A_r<n\,V_r$ for all $r>0$ small enough. \end{proposition} \noindent{\textit {Proof.} } Let $r$ be the geodesic defining function associated with a metric $\gamma$ in the conformal infinity and denote by $\Sigma_j$, $j\in\{1,\cdots,k\}$, the connected components of $\partial M$. Let $U$ be a neighborhood of $\partial M$ and for $t>0$ sufficiently small define $M_t=M\setminus\big(r^{-1}(]0,t[)\cap U\big)$. Then there exists $t_0>0$ such that for all $0<t<t_0$, $(M_t,g)$ is a compact Riemannian manifold contained in $M$ whose boundary \begin{eqnarray*} \partial M_{t}=\Sigma_{1,t}\sqcup\cdots\sqcup\Sigma_{k,t} \end{eqnarray*} has exactly $k$ connected components $\Sigma_{j,t}$, each of them being diffeomorphic to $\Sigma_j$ for $j\in\{1,\cdots,k\}$. Finally we fix $0<r_0<t_0$ and we define for $0<r<r_0$ the smooth compact domain $\Omega_r$ by fixing all but one of the components of $\partial M_t$ at a $\overline{g}$-distance $r_0$ from $\partial M$ and, in particular, we can assume that the boundary of $\Omega_r$ is \begin{eqnarray*} \partial\Omega_r=\Sigma_{1,r}\sqcup\Sigma_{2,r_0}\sqcup\cdots\sqcup\Sigma_{k,r_0}. \end{eqnarray*} Now by a direct application of the Taylor formula we have from (\ref{Taylor}) and (\ref{MeanCurvatureF2}) that \begin{eqnarray}\label{VolumeElement} \sqrt{\frac{{\rm det\,} g_r}{{\rm det\,} \gamma}} = 1+\alpha_1r+\alpha_2 r^2+O(r^{2+\alpha}) \end{eqnarray} where \begin{eqnarray}\label{alphas} \alpha_1=-nH'_0\quad\text{and}\quad\alpha_2=\frac{n}{2}\left(nH'^2_0-\frac{H''_0}{2}\right). \end{eqnarray} Recall that $H_r$ is the mean curvature of $\Sigma_{1,r}$ in $(M,g)$ for $0<r<r_0$ whose $C^{2,\alpha}$ extension to $r=0$ has been proved in the previous proposition. Then the area $A_r$ of the compact hypersurface $\partial\Omega_r$ with respect to the metric $g$ is \begin{eqnarray*} A_r=A(\Sigma_{1,r})+C_0= r^{-n}\int_{\Sigma_1}\sqrt{\frac{{\rm det\,} g_r}{{\rm det\,} \gamma}}\,dv_{\gamma}+C_0 \end{eqnarray*} where $dv_\gamma$ is the Riemannian volume element of $\gamma$ and $C_0$ is the area of the other connected components of $\partial\Omega_r$ (which does not depend on $r$). A straightforward computation using (\ref{VolumeElement}) gives \begin{eqnarray}\label{arean3} A_r=r^{-n}{\rm Vol}_\gamma(\Sigma_1)\left(1+\beta_1 r+\beta_2 r^2+O(r^{2+\alpha})\right) \end{eqnarray} for $n\geq 3$ and \begin{eqnarray}\label{arean2} A_r=r^{-2}{\rm Vol}_\gamma(\Sigma_1)\left(1+\beta_1 r+O(r^2)\right) \end{eqnarray} for $n=2$, where $\beta_j$ is the constant defined by \begin{eqnarray}\label{betas} \beta_j=\frac{1}{{\rm Vol}_\gamma(\Sigma_1)}\int_{\Sigma_1}\alpha_j\,dv_\gamma \end{eqnarray} for $j=1,2$. Here ${\rm Vol}_\gamma(\Sigma_1)$ denotes the Riemannian volume of $\Sigma_1$ with respect to $\gamma$.
Similarly, there exists a constant $C_1>0$ independent of $r$ such that the volume $V_r$ of $\Omega_r$ with respect to $g$ is given by \begin{eqnarray*} V_r= C_1+\int_{\Sigma_1}\int_r^{r_0}s^{-(n+1)}\sqrt{\frac{{{\rm det\,} g_s}}{{\rm det\,} \gamma}}\,ds\,dv_{\gamma} \end{eqnarray*} so that \begin{eqnarray}\label{volumen3} V_r= \frac{r^{-n}{\rm Vol}_\gamma(\Sigma_1)}{n}\left(1+\frac{n\beta_1}{n-1}r+\frac{n\beta_2}{n-2}r^2+O(r^{2+\alpha})\right) \end{eqnarray} for $n\geq 3$ and \begin{eqnarray}\label{volumen2} V_r=\frac{r^{-2}{\rm Vol}_\gamma(\Sigma_1)}{2}\left(1+2\beta_1r+2\beta_2r^2\log\frac{1}{r}+O(r^2)\right) \end{eqnarray} for $n=2$. Combining (\ref{arean3}) with (\ref{volumen3}) and (\ref{arean2}) with (\ref{volumen2}) immediately proves that (\ref{H-zero}) holds for all $n\geq 2$. Now, if $\Sigma_1$ has negative Yamabe invariant, we can assume without loss of generality that the scalar curvature of $\gamma$ is negative on $\Sigma_1$. Moreover, if $R_g+n(n+1)=o(r^2)$, we have from Proposition \ref{H-limit} that $H'_0=0$ and thus \begin{eqnarray*} \beta_1=0\quad\text{and}\quad\beta_2=-\frac{n}{4{\rm Vol}_\gamma(\Sigma_1)}\int_{\Sigma_1}H''_0\,dv_\gamma \end{eqnarray*} because of (\ref{alphas}) and (\ref{betas}). Using these facts in (\ref{arean3}) and (\ref{volumen3}) finally leads for $n\geq 3$ to \begin{eqnarray*} \frac{A_r}{V_r}= n\left(1+\frac{n}{2(n-2){\rm Vol}_\gamma(\Sigma_1)} \left(\int_{\Sigma_1}H''_0\,dv_\gamma\right)r^2+O(r^{2+\alpha})\right). \end{eqnarray*} Now since we assume that $R_\gamma$ is negative on $\Sigma_1$, the inequality in (\ref{meancur-1}) of Proposition \ref{H-limit} implies that \begin{eqnarray}\label{MeanIntegral} \int_{\Sigma_1}H''_0\,dv_\gamma\leq\frac{1}{n(n-1)}\int_{\Sigma_1} R_\gamma\,dv_\gamma<0 \end{eqnarray} so that $A_r<n V_r$ for $r$ sufficiently small. In the same way, for $n=2$ we derive from (\ref{arean2}) and (\ref{volumen2}) that \begin{eqnarray*} \frac{A_r}{V_r}=2\left(1+\frac{1}{{\rm Vol}_\gamma(\Sigma_1)}\left(\int_{\Sigma_1}H''_0\,dv_\gamma\right)r^2\log\frac{1}{r}+O(r^2)\right) \end{eqnarray*} which also gives that $A_r<2V_r$ for $r>0$ small enough because of (\ref{MeanIntegral}). {q.e.d.} From Proposition \ref{large-regions}, we immediately deduce an upper bound for the Cheeger constant ${\mathcal Ch}(M,g)$ of any ALH manifold. Recall that this isoperimetric constant is defined for any Riemannian manifold by \begin{eqnarray}\label{def-Ch} {\mathcal Ch}(M,g)=\inf_{\Omega}\frac{A(\partial\Omega)}{V(\Omega)}, \end{eqnarray} where the infimum is taken over all the compact (smooth) domains in $M$, $A(\partial\Omega)$ being the area of the compact hypersurface $\partial \Omega$ and $V(\Omega)$ the volume of the domain $\Omega$. Then we prove \begin{corollary}\label{upper-Cheeger} The Cheeger isoperimetric constant ${\mathcal Ch}(M,g)$ of an $(n+1)$-dimensional ALH manifold $(M,g)$ of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$ satisfies $$ {\mathcal Ch}(M,g)\le n. $$ If some connected component of the conformal infinity $(\partial M,[\gamma])$ has negative Yamabe invariant and if, near this component, for some defining function $r$, the scalar curvature $R_g$ of $M$ satisfies $$ R_g+n(n+1)=o(r^2), $$ then $$ {\mathcal Ch}(M,g)<n.
$$ \end{corollary} \begin{remark} {\rm Note that the second part of the previous corollary applies to WPE manifolds.} \end{remark} \noindent{\textit {Proof.} } Choose any geodesic defining function $r$ for the ALH manifold $M$ and let $\Omega_r\subset M$ be the compact domain associated with $r>0$ small enough as in Proposition \ref{large-regions}. If we had ${\mathcal Ch}(M,g) > n$, then, since Proposition \ref{large-regions} assures that $\lim_{r\rightarrow 0}(A(\partial\Omega_r)/V(\Omega_r))=n$, ${\mathcal Ch}(M,g)$ could not be a lower bound for the set of isoperimetric quotients $A(\partial\Omega)/V(\Omega)$, with $\Omega \subset M$ compact. Thus, we have ${\mathcal Ch}(M,g)\le n$ for any ALH manifold $M$. Now, assume that the conformal infinity $(\partial M,[\gamma])$ of the given ALH manifold $M$ has at least one connected component with negative Yamabe type which, near this component, satisfies the decay condition $R_g+n(n+1)=o(r^2)$. In this situation, the second part of Proposition \ref{large-regions} provides compact domains $\Omega_r$ in $M$ such that $A(\partial\Omega_r)<n\,V(\Omega_r)$, hence we directly have ${\mathcal Ch}(M,g)<n$. {q.e.d.} \section{Some examples} Let $\big(B^{n+1},\frac{4|dx|^2}{(1-|x|^2)^2}\big)$ be the Poincar{\'e} hyperbolic ball. Using the change of variables given by $s=\ln\frac{1+|x|}{1-|x|}\in{\mathbb R}_+$, we obtain the metric $$ g=ds^2+(\sinh^2 s)\gamma_{{\mathbb S}^n}. $$ This expression for the Poincar{\'e} metric is valid only on the punctured ball $B^{n+1}-\{0\}\cong {\mathbb R}_+^*\times {\mathbb S}^n$, although it is smoothly extendible to the origin. Written in this way, we can see that the hyperbolic metric is an example of the so-called {\em warped Riemannian products} (see, for instance, \cite{Be,O'N,K}). In general, if $I\subset {\mathbb R}$ is an open interval, $(P,\gamma)$ a Riemannian $n$-manifold and $f\in C^\infty(I)$ is a positive function, we will say that the $(n+1)$-dimensional Riemannian manifold $(I\times P, g=ds^2+f(s)^2 \gamma)$ is the product of $I$ and $P$ warped by the function $f$. We will restrict ourselves to warping functions $f$ satisfying $f''-f=0$. With this choice, we ensure that $\hbox{Ric}_g(\frac{\partial}{\partial s},\frac{\partial}{\partial s})=-n$ at each point of $I\times P$ (see \cite[Lemma 4]{K}). Moreover, taking also into account the values of $\hbox{Ric}_g$ on the directions orthogonal to the vector field $\frac{\partial}{\partial s}$, that is, directions tangent to $P$, we conclude that there are essentially three types of warped products which may produce PE spaces with scalar curvature $-n(n+1)$, according to whether the warping function is chosen to be $\sinh s$, $e^s$ or $\cosh s$ (conical singularities and cusps being provisionally permitted). \begin{example}\label{rk3} {\rm The first class of manifolds we consider here is the so-called {\em hyperbolic cones} on given compact Riemannian manifolds $(P,\gamma)$, which are defined by $$ \big(M={\mathbb R}_+\times P, g = ds^2 + (\sinh^2 s) \gamma\big). $$ Defining a new variable $t\in ]0,1]$ by $t=\tanh\frac{s}{2}$, we obtain that the conformal metric $$ {\overline g}=\left(\frac1{1+\cosh s}\right)^2 g = dt^2 + t^2 \gamma $$ extends to $[0,1]\times P$, that is, to a compact manifold with boundary $\{1\}\times P\cong P$ and a conical singularity at $t=0$.
This singularity is removable if and only if $(P, \gamma)$ is the round unit $n$-sphere and, in this case, the corresponding hyperbolic cone is nothing but the $(n+1)$-dimensional hyperbolic space (see \cite[p. 269, Lemma 9.114]{Be}). It is clear from the above considerations, that if $({\mathbb S}^n,\gamma)$ is the round unit sphere and $f:{\mathbb R}_+\rightarrow{\mathbb R}^*_+$ coincides with $s\mapsto\sinh s$, both near $0$ and $+\infty$, the warped product metric given by $$ \big(M={\mathbb R}_+\times {\mathbb S}^n, g =ds^2 + f(s)^2 \gamma\big), $$ also avoids the conical singularity at $s=0$ and that the resultant $(n+1)$-dimensional Riemannian manifold $(M,g)$ is a rotationally invariant deformation of the hyperbolic space. These manifolds are not in general PE spaces but are WPE spaces whose conformal infinity is obviously $({\mathbb S}^n, [\gamma])$, and so they trivially have positive Yamabe type. For a fixed $\varepsilon>0$, if we choose $f$ in such a way that $$ f \left(\frac1{\varepsilon}\right) = f \left(\frac{3}{\varepsilon}\right)=\varepsilon^{\frac1{n}}, \qquad f(s)\ge \varepsilon^{\frac1{n}},\quad\forall s\in\left[\frac1{\varepsilon},\frac{3}{\varepsilon}\right], $$ and represent by $\Omega_\varepsilon$ the domain $\left]\frac1{\varepsilon},\frac{3}{\varepsilon}\right[ \times P\subset M$, we have $$ \frac{A(\partial\Omega_\varepsilon)}{V(\Omega_\varepsilon)}=\frac{2\varepsilon A(P)}{\int_{1/\varepsilon} ^{3/\varepsilon}\int_Pf(s)^n\,ds\,dP}\le \varepsilon. $$ Hence, from Definition (\ref{def-Ch}), we obtain $$ {\mathcal Ch}(M,g)\le \varepsilon. $$ This means that we have $(n+1)$-dimensional ALH manifolds (in fact, WPE spaces) with conformal infinity of positive Yamabe type and Cheeger isoperimetric constants arbitrarily small belonging to the interval $[0,n]$.} \end{example} \begin{example}{\rm The second type we also consider here is of the form $$ \big(M={\mathbb R}\times P, g = ds^2 + (\cosh^2 s) \gamma\big). $$ Such warped product metrics satisfy (see \cite{K}, for instance) $$\hbox{Ric}_g(\frac{\partial}{\partial s},\frac{\partial}{\partial s})=-n \quad {\rm and} \quad R_g+n(n+1)=R_\gamma+n(n-1). $$ Hence, as before, to get a WPE space, we will impose on $(P,\gamma)$ to have scalar curvature $-n(n-1)$. In order to compactify, we define a new variable $t\in]0,\pi[$ by the relation $t=2\arctan e^s$. Then $$ {\overline g}=\left(\frac1{\cosh s}\right)^2g=dt^2+\gamma, $$ is smoothly extendable to the compact manifold $[0,\pi]\times P$. Hence, its conformal infinity $(\partial M, [\gamma])$ consists of two copies of $(P,[\gamma])$. Thus we obtain an example of WPE space whose conformal compactification has two connected components at the conformal boundary (a {\em wormhole} in the physical jargon), both with negative Yamabe type. Now, if $f:{\mathbb R}\rightarrow{\mathbb R}^*_+$ coincides with $s\mapsto\cosh s$ both near $-\infty$ and $+\infty$, the manifold $(P, \gamma)$ is a Riemannian manifold with constant scalar curvature $-n(n-1)$, and we consider a warped product $$ \big(M={\mathbb R}\times P, g = ds^2 + f(s)^2 \gamma\big), $$ then the corresponding $(n+1)$-dimensional Riemannian manifold $(M, g)$ is a deformation of the above one which is still a WPE space with conformal infinity consisting of two copies of $(P, [\gamma])$ as well. 
We may require $f$ to have exactly the same behavior as in the above example on the interval $[\frac1{\varepsilon},\frac{3}{\varepsilon}]$ and we conclude that there are $(n+1)$-dimensional ALH manifolds (in fact, WPE spaces) with conformal infinity of negative Yamabe type and Cheeger isoperimetric constants arbitrarily small inside the interval $[0,n[$.} \end{example} \section{The Cheeger constant of $(M, g)$ and the Yamabe type of $(\partial M, [\gamma])$} In this section, we state and prove the main result of this paper which relates the value of the Cheeger constant of a conformally compact Riemannian manifold to the Yamabe type of its conformal infinity. As we are admitting the possibility that $\partial M$ is not connected, we should be more precise in the definition of the Yamabe type of the conformal infinity in this setting. However, using a famous result on the connectedness of the boundary at infinity by E. Witten and S.-T. Yau (for boundaries with a component of positive Yamabe type) and by M. Cai and G. Galloway (for boundaries with a component of null Yamabe type), we will avoid this discussion. In order to make this paper self-contained, we include a new proof of this result which simplifies and unifies the two aforementioned proofs and slightly weakens their hypotheses (and fits into ours). In fact, we will show that these two theorems can be seen as direct consequences of an earlier result of A. Kasue \cite{Ka} generalizing the Bonnet-Myers theorem to complete manifolds with non-empty boundary. \begin{theorem}{\rm (\cite{WY,CG})}\label{connectedness} Let $(M,g)$ be an $(n+1)$-dimensional conformally compact Riemannian manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$ whose Ricci tensor and scalar curvature satisfy \begin{equation}\label{Ricci-Scalar} \hbox{\rm Ric\,}_g+ng\ge 0,\qquad R_g+n(n+1)=o(r^2), \end{equation} where $r$ is a geodesic defining function. Suppose that the conformal infinity $(\partial M,[\gamma])$ has a connected component with non-negative Yamabe invariant. Then $\partial M$ is connected. \end{theorem} \noindent{\textit {Proof.} } Let $\partial M_0$ be the component of the conformal infinity $(\partial M,[\gamma])$ with non-negative Yamabe invariant. By the solution of the Yamabe problem \cite{Sc}, we can choose a metric $\gamma\in [\gamma]$ with constant scalar curvature, say $R_{\gamma}=n(n-1)\varepsilon^2$, where $\varepsilon=1$ or $\varepsilon=0$ depending on whether the Yamabe invariant is positive or zero. Denote by $r$ the unique geodesic defining function associated with $\gamma$ whose existence is given by Lemma \ref{defining-geodesic}. Let $U$ be a neighborhood of $\partial M_0$ which does not meet any other component of $\partial M$ and $M_t=M-\big(r^{-1}(]0,t[)\cap U\big)$. For $t>0$ sufficiently small, $(M_t,g)$ is a Riemannian manifold with boundary, one of whose components $\Sigma_t$ is diffeomorphic to $\partial M_0$ and has mean curvature satisfying \begin{eqnarray}\label{mean-cur3} H_t\ge 1+\frac{\varepsilon^2}{2}t^2, \end{eqnarray} (see (\ref{meancur-2}) in Proposition \ref{H-limit}). Suppose now that the Yamabe invariant of $\partial M_0$ is positive, that is, $\varepsilon=1$. In this situation, $(M_t,g)$ is a complete Riemannian manifold such that $\hbox{\rm Ric\,}_g\geq -ng$ and whose compact connected boundary $\Sigma_t$ satisfies $H_t>1$ from (\ref{mean-cur3}).
Hence we can apply \cite[Theorem A]{Ka} (see also \cite[Proposition 2]{CG}, which reproves a part of Kasue's result) and conclude that $M_t$ must be compact. So $\Sigma_t$ is the unique component of its boundary. This means that $\partial M_0$ is the unique component in $\partial M$ and $\partial M$ is connected, as claimed. When the Yamabe invariant of $\partial M_0$ is zero, i.e. $\varepsilon=0$, the same reasoning provides $(M_t,g)$ as above with $H_t\ge 1$. If $M_t$ is non-compact, a direct application of \cite[Theorem C]{Ka} implies that $\min H_t= 1$ and $M_t$ is isometric to the warped product $\big([0,+\infty[\times \Sigma_t,ds^2+e^{-2s}g_{|\Sigma_t}\big)$ (for definitions and properties about warped products, see \cite{Be,O'N,K} or Example \ref{rk3} above). This means that $M_t$ has a connected compact boundary $\Sigma_t$ and one end which is a {\em hyperbolic cusp}. This contradicts the fact that every end of an ALH manifold has infinite volume (since the volume of a hyperbolic cusp is finite). We conclude that $M_t$ is compact and then $\partial M$ has to be connected. {q.e.d.} Assume now that $(M,g)$ is a conformally compact manifold satisfying the curvature assumptions (\ref{Ricci-Scalar}) and that a connected component of its conformal boundary $(\partial M,[\gamma])$ has negative Yamabe invariant. From the solution of the Yamabe problem, we can assume that this component has constant negative scalar curvature so that Corollary \ref{upper-Cheeger} implies that ${\mathcal Ch}(M,g)<n$. Since ${\mathcal Ch}(M,g)\leq n$ for all ALH manifolds, we immediately deduce that if ${\mathcal Ch}(M,g)= n$, then a connected component of the boundary at infinity $(\partial M,[\gamma])$ must have a non-negative Yamabe invariant, and this implies, using Theorem \ref{connectedness}, that $\partial M$ is in fact connected. We sum up these properties and show that the converse is also true in our main result: \begin{theorem}\label{maintheorem} The Cheeger constant ${\mathcal Ch}(M,g)$ of an $(n+1)$-dimensional ALH manifold $(M,g)$ of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$ whose conformal infinity is $(\partial M,[\gamma])$ satisfies $$ {\mathcal Ch}(M,g)\le n. $$ Moreover, if the Ricci and the scalar curvatures of $M$ satisfy $$ \hbox{\rm Ric}_g+ng\ge 0,\qquad R_g+n(n+1)=o\big(r^2\big), $$ where $r$ is any defining function on $M$, then we have $$ {\mathcal Ch}(M,g)= n\quad\Longleftrightarrow\quad{\mathcal Y}(\partial M,[\gamma])\ge 0, $$ where ${\mathcal Y}(\partial M,[\gamma])$ denotes the Yamabe invariant of the conformal boundary. In particular, $(\partial M,[\gamma])$ is connected. \end{theorem} \noindent{\textit {Proof.} } It only remains to prove that if ${\mathcal Y}(\partial M,[\gamma])\ge 0$ then ${\mathcal Ch}(M,g)= n$. Since we know from Corollary \ref{upper-Cheeger} that ${\mathcal Ch}(M,g)\leq n$, it is sufficient to prove that ${\mathcal Ch}(M,g)\geq n$. We originally proved this fact using an approach relying on geometric measure theory (as in \cite{W}). However, we found in \cite{FR} an elementary proof due to Gilles Carron for PE manifolds. We shall see that these arguments still work under our (weaker) assumptions. Indeed, recall from \cite{L} (see \cite[Proposition 3]{HM} for a precise statement) that if $r$ is a geodesic defining function satisfying (\ref{Ricci-Scalar}), then there exists a unique positive function $u\in C^\infty (M)$ such that $\Delta u=(n+1)u$ and $u-\frac1{r}$ is a bounded function.
If in addition ${\mathcal Y}(\partial M, [\gamma])\geq 0$, it can be shown (see \cite[Proposition 4.2]{L}) that the function $u^2-|\nabla u|_g^2$ is superharmonic on $M$, extends continuously to $\overline{M}$ and vanishes on $\partial M$ so that the strong minimum principle implies $u^2-|\nabla u|_g^2\geq 0$. Then if we let $f=\ln u$ on $M$, it is straightforward to check that this function satisfies \begin{eqnarray}\label{SuperMeanExit} |\nabla f|_g^2\leq 1\quad\text{and}\quad\Delta f\geq n. \end{eqnarray} Now consider a smooth compact domain $\Omega$ in $M$ and integrate (using the Stokes formula) the second inequality in (\ref{SuperMeanExit}) over $\Omega$, to get \begin{eqnarray}\label{In1} -\int_{\partial\Omega}\frac{\partial f}{\partial N}\geq n V(\Omega) \end{eqnarray} where $N$ denotes the inner unit normal to $\partial\Omega$ in $\Omega$. On the other hand, the first inequality in (\ref{SuperMeanExit}) implies \begin{eqnarray}\label{In2} -\int_{\partial\Omega}\frac{\partial f}{\partial N}\leq \int_{\partial\Omega}\Big|\frac{\partial f}{\partial N}\Big|\leq A(\partial\Omega). \end{eqnarray} Combining (\ref{In1}) and (\ref{In2}) gives $A(\partial\Omega)\geq n V(\Omega)$ for all smooth compact domains of $M$, which is exactly ${\mathcal Ch}(M,g)\geq n$, as claimed. {q.e.d.} \begin{remark}\label{r1}{\rm Theorem \ref{maintheorem} implies that the rotationally invariant deformations $(M,g)$ of the hyperbolic space ${\mathbb H}^{n+1}$, built in Example \ref{rk3} to provide examples of WPE spaces with conformal infinity of positive Yamabe type and arbitrarily small Cheeger constant, cannot satisfy the hypothesis on the Ricci curvature, that is, $\hbox{\rm Ric}_g$ cannot admit $-n$ as a lower bound. Indeed, if they did, these examples would provide conformally compact manifolds whose conformal infinity is the round conformal sphere and such that (\ref{Ricci-Scalar}) holds. However, the rigidity result for such conformally compact manifolds implies that $(M,g)$ has to be the hyperbolic space, that is, $f(s)=\sinh s$ for all $s\in\mathbb{R}_+$ (see Corollary $1.5$ in \cite{LQS}). This contradicts the fact that the Cheeger constant of the hyperbolic space is $n$. It is worth mentioning that this result can be directly (and easily) observed by looking more closely at such examples. Indeed, if $(M={\mathbb R}_+\times{\mathbb S}^n,g=ds^2+f(s)^2\gamma)$ is as in Example \ref{rk3} with $\hbox{\rm Ric}_g+ng\ge 0$, we would necessarily have $f''\le f$. Letting $y=f'/f$, we immediately observe that $y$ satisfies $y'+y^2\leq 1$ on $\mathbb{R}_+$. Since $y$ coincides, in a neighborhood of $0$ and of $+\infty$, with $z : s\mapsto{\rm cotanh\,}s$, which satisfies $z'+z^2=1$, we can apply Lemma $4.1$ in \cite{ballmann} to conclude first that $y(s)={\rm cotanh\,} s$ for all $s\in\mathbb{R}_+$ and then that $f(s)=\sinh s$ on $\mathbb{R}_+$: the manifold $(M,g)$ is isometric to the hyperbolic space so that we get the desired contradiction. } \end{remark} \section{Some direct consequences} \subsection{Minimizer of the Cheeger constant} A direct consequence of Theorem \ref{maintheorem} is that the Cheeger constant of a conformally compact manifold satisfying (\ref{Ricci-Scalar}) does not possess a smooth minimizer. Indeed, if we denote by $\Omega_0$ such a minimizer, then we obviously have \begin{eqnarray}\label{minimizers} {\mathcal Ch}(M,g)=n=\frac{A(\partial\Omega_0)}{V(\Omega_0)}.
\end{eqnarray} On the other hand, given a smooth function $f$ on $\partial\Omega_0$, we consider the normal variation of $\partial\Omega_0$ defined by \begin{eqnarray*} \psi_t:p\in\partial\Omega_0\mapsto \exp_p\big(-tf(p)N_0(p)\big)\in M \end{eqnarray*} where $\exp$ is the exponential map of $M$ and $N_0$ is the inner unit vector normal to $\partial\Omega_0$ in $\Omega_0$. Denote by $A(t)$ the area of the hypersurface $\psi_t(\partial\Omega_0)$ and by $V(t)$ the volume of the domain enclosed by $\psi_t(\partial\Omega_0)$ for $|t|$ sufficiently small. Since $\Omega_0$ is a minimizer of ${\mathcal Ch}(M,g)$ we must have \begin{eqnarray*} \frac{d}{dt}_{|t=0}\frac{A(t)}{V(t)}=0 \end{eqnarray*} which, from the first variational formulae of the area and of the volume, is equivalent to \begin{eqnarray*} \int_{\partial\Omega_0}f\Big(nHV(\Omega_0)-A(\partial\Omega_0)\Big)=0 \end{eqnarray*} for any $f\in C^\infty(\partial\Omega_0)$. We conclude that $nH=A(\partial\Omega_0)/V(\Omega_0)$ and then $H=1$ because of (\ref{minimizers}). Now since $\hbox{\rm Ric}_g+ng\ge 0$, the Heintze-Karcher inequality \cite{HK} implies \begin{eqnarray*} V(\Omega_0)\leq A(\partial\Omega_0)\int_0^{R_0}\big(\cosh t-H_0\sinh t\big)^ndt \end{eqnarray*} where $H_0$ is the minimum of $H$ on $\partial\Omega_0$ and $R_0$ is the inradius of $\Omega_0$. As $H_0=H=1$ we immediately deduce \begin{eqnarray*} nV(\Omega_0)\leq(1-e^{-nR_0})A(\partial\Omega_0)<A(\partial\Omega_0) \end{eqnarray*} and this precisely contradicts (\ref{minimizers}). \subsection{Isoperimetric inequality} Note that, by the very definition of the Cheeger constant, if ${\mathcal Ch}(M,g)=n$, we have that the isoperimetric inequality (\ref{LII}) is satisfied on $M$. Conversely, when this inequality holds for each compact domain in $M$, we can only conclude the inequality ${\mathcal Ch}(M,g)\ge n$. But, if $M$ is an ALH manifold, from the first inequality in Corollary \ref{upper-Cheeger}, we have that the corresponding equality is achieved. Thus, we obtain another characterization of the non-negativity of the Yamabe invariant of the conformal infinity for the class of ALH manifolds that we are studying. \begin{corollary}\label{yamabe-yau} Let $M$ be an $(n+1)$-dimensional conformally compact manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$ whose conformal infinity is $(\partial M,[\gamma])$. If its Ricci and scalar curvatures satisfy (\ref{Ricci-Scalar}) then we have ${\mathcal Y}(\partial M,[\gamma])\ge 0$ if and only if the isoperimetric inequality $$ A(\partial\Omega)\ge n V(\Omega), $$ holds for any compact domain $\Omega\subset M$. \end{corollary} In the particular case where the ALH manifold is a hyperbolic manifold, taking into account the fundamental work \cite[Theorem 4.7]{SY} by R. Schoen and S.-T. Yau about conformally flat manifolds, Theorem \ref{maintheorem} allows one to decide when the linear hyperbolic isoperimetric inequality remains valid after quotienting by a Kleinian group. \begin{corollary} Let ${\mathbb H}^{n+1}/\Gamma$ be a geometrically finite and cusp-free quotient of the $(n+1)$-dimensional hyperbolic space by a Kleinian group. Denote by $\Lambda (\Gamma)\subset {\mathbb S}^n$ the limit set of $\Gamma$ and by ${\mathcal H}\big(\Lambda(\Gamma) \big)$ its Hausdorff dimension.
Then we have$$ A(\partial\Omega)\ge n V(\Omega),\quad\forall\Omega\subset{\mathbb H}^{n+1}/\Gamma \quad \Longleftrightarrow \quad {\mathcal H}\big(\Lambda(\Gamma)\big)\le \frac{n-2}{2}.$$ \end{corollary} \subsection{Principal eigenvalue of the $p$-Laplacian} Another immediate consequence of Theorem \ref{maintheorem} is an extension to the $p$-Laplacian of the result obtained by J. Lee \cite{L} on the infimum of the $L^2$ spectrum of the Laplacian. We first briefly recall some well-known facts on this operator and its principal eigenvalue (for more details we refer to \cite{Mat,SW} and references therein). On a Riemannian manifold, for $1<p<\infty$ and $u\in C^\infty(M)$, the $p$-Laplacian $\Delta_p$ is defined by \begin{eqnarray*} \Delta_p u={\rm div\,}(|\nabla u|_g^{p-2}\nabla u). \end{eqnarray*} The principal eigenvalue $\lambda_{1,p}(M)$ of the $p$-Laplacian is the maximum constant $\lambda$ such that the equation \begin{eqnarray*} \Delta_p u =-\lambda\, u^{p-1} \end{eqnarray*} admits a positive solution. Alternatively, it may be characterized variationally as the best constant in the inequality \begin{eqnarray*} \lambda_{1,p}(M)\int_M |v|^p\leq\int_M|\nabla v|_g^p \end{eqnarray*} for any smooth and compactly supported function $v$ on $M$. From \cite{SW} we know that if $(M,g)$ is a complete $(n+1)$-dimensional Riemannian manifold with $\hbox{\rm Ric}_g+ng\ge 0$, this principal eigenvalue satisfies an inequality analogous to the famous Cheng inequality for the bottom of the $L^2$ spectrum of the standard Laplacian, namely \begin{eqnarray}\label{Cheng} \lambda_{1,p}(M)\leq\Big(\frac{n}{p}\Big)^p. \end{eqnarray} On the other hand, we claim that the Cheeger-type inequality \begin{eqnarray}\label{p-Cheeger} \lambda_{1,p}(M)\geq\Big(\frac{{\mathcal Ch}(M,g)}{p}\Big)^p \end{eqnarray} is also satisfied. Indeed, first note that if $(\Omega_i)$ is an exhaustion of $M$ by compact domains, it is straightforward to show that \begin{eqnarray}\label{LimSp} \lambda_{1,p}(M)=\lim_{i\rightarrow\infty}\lambda_{1,p}^D(\Omega_i) \end{eqnarray} where $\lambda_{1,p}^D(\Omega_i)$ is the first eigenvalue of the $p$-Laplacian on $\Omega_i$ with Dirichlet boundary condition, that is, $$ \left\lbrace \begin{array}{l} \Delta_p v_i=-\lambda_{1,p}^D(\Omega_i) v_i^{p-1} \\ v_{i|\partial\Omega_i}=0. \end{array} \right. $$ Moreover, it is proved in \cite{T} that \begin{eqnarray}\label{p-CheegIn} \lambda_{1,p}^D(\Omega_i)\geq\Big(\frac{{\mathcal Ch}(\Omega_i)}{p}\Big)^p \end{eqnarray} where ${\mathcal Ch}(\Omega_i)$ is the Cheeger constant of $\Omega_i$ defined by \begin{eqnarray*} {\mathcal Ch}(\Omega_i)=\inf_{\Omega}\frac{A(\partial\Omega)}{V(\Omega)} \end{eqnarray*} where $\Omega$ ranges over all smooth compact domains in $\Omega_i$ with smooth boundary $\partial\Omega$. From the definition (\ref{def-Ch}) of the Cheeger constant of $M$ it is obvious that ${\mathcal Ch}(\Omega_i)\geq{\mathcal Ch}(M,g)$ for all $i$ so that (\ref{p-Cheeger}) follows directly from (\ref{LimSp}) and (\ref{p-CheegIn}). Finally applying Theorem \ref{maintheorem} to (\ref{p-Cheeger}) and combining with (\ref{Cheng}) indeed leads to the aforementioned generalization of Lee's spectral estimate: \begin{theorem}\label{lambda} Let $(M,g)$ be an $(n+1)$-dimensional conformally compact manifold of order $C^{m,\alpha}$ with $m\geq 3$ and $0<\alpha<1$ whose conformal infinity is $(\partial M,[\gamma])$. Suppose that its Ricci and scalar curvatures satisfy (\ref{Ricci-Scalar}).
For $1<p<\infty$, if ${\mathcal Y}(\partial M,[\gamma])\ge 0$, then $$ \lambda_{1,p}(M)= \Big(\frac{n}{p}\Big)^p, $$ where $\lambda_{1,p}(M)$ denotes the principal eigenvalue of the $p$-Laplacian of $M$. \end{theorem} \begin{remark}{\rm According to Theorem \ref{lambda}, all the conformally compact manifolds $(M,g)$ satisfying $\hbox{Ric}_g+ng\ge 0$, a second order scalar curvature decay and ${\mathcal Y}(\partial M,[\gamma])\ge 0$, are examples of complete Riemannian manifolds with optimal Cheeger inequality (\ref{p-Cheeger}). Instead, the results obtained by D. Sullivan in \cite{Su} and by R. Schoen and S.-T. Yau in \cite{SY} imply that the geometrically finite and cusp free quotients ${\mathbb H}^{n+1}/\Gamma$ with $\frac{n-2}{2}<{\mathcal H}\big(\Lambda(\Gamma)\big)\le\frac{n}{2}$ have $\lambda_{1,2}({\mathbb H}^{n+1}/\Gamma)=\frac{n^2}{4}$ and ${\mathcal Y}({\mathbb H}^{n+1}/\Gamma)<0$. Then, we deduce from our Theorem \ref{maintheorem} that ${\mathcal Ch}({\mathbb H}^ {n+1}/\Gamma)<n$. Thus these hyperbolic manifolds give examples of PE spaces where the Cheeger inequality is not sharp. } \end{remark} \end{document}
\begin{document} \title[Focusing NLS]{Focusing NLS with inverse square potential} \author[J. Zheng]{Jiqiang Zheng} \address{Institute of Applied Physics and Computational Mathematics, Beijing 100088, China} \email{[email protected]} \begin{abstract} In this paper, we utilize the method in \cite{BM} to establish the radial scattering result for the focusing nonlinear Schr\"odinger equation with inverse square potential $i\partial_tu-\mathcal{L}_a u=-|u|^{p-1}u$ in the energy space $H^1_a(\mathbb{R}^d)$ in dimensions $d\geq3$, which extends the result of \cite{KMVZ,LMM} to the higher-dimensional case, but with radial initial data. The new ingredient is to establish the dispersive estimate for radial functions and to overcome the weaker dispersive estimate when $a<0$. \end{abstract} \maketitle \begin{center} \begin{minipage}{100mm} { \small {{\bf Key Words:} nonlinear Schr\"odinger equation; scattering; inverse square potential; Morawetz estimate.} {} }\\ { \small {\bf AMS Classification:} {35P25, 35Q55, 47J35.} } \end{minipage} \end{center} \section{Introduction} \noindent We study the initial-value problem for focusing nonlinear Schr\"odinger equations of the form \begin{align} \label{equ1.1} \begin{cases} (i\partial_t-\mathcal{L}_a)u= -|u|^{p-1}u,\quad (t,x)\in\mathbb{R}\times\mathbb{R}^d, \\ u(0,x)=u_0(x)\in H^1(\mathbb{R}^d), \end{cases} \end{align} where $u:\mathbb{R}_t\times\mathbb{R}_x^d\to \mathbb{C}$ and $\mathcal{L}_a=-\Delta+\frac{a}{|x|^2}$. The class of solutions to \eqref{equ1.1} is left invariant by the scaling \begin{equation}\label{scale} u(t,x)\mapsto \lambda^{\frac2{p-1}}u(\lambda^2t, \lambda x),\quad\lambda>0. \end{equation} Moreover, one can also check that the only homogeneous $L_x^2$-based Sobolev space that is left invariant under \eqref{scale} is $\dot{H}_x^{s_c}(\mathbb{R}^d)$ with $s_c:=\tfrac{d}2-\tfrac2{p-1}$. Solutions to \eqref{equ1.1} conserve their \emph{mass} and \emph{energy}, defined by \begin{align*} & M(u(t)) := \int_{\mathbb{R}^d} |u(t,x)|^2 \,dx, \\ & E_a(u(t)) := \int_{\mathbb{R}^d} \tfrac12|\nabla u(t,x)|^2 + \tfrac{a}{2|x|^2} |u(t,x)|^2 - \tfrac1{p+1} |u(t,x)|^{p+1} \,dx. \end{align*} Initial data belonging to $H_x^1(\mathbb{R}^d)$ have finite mass and energy. This follows from the equivalence of Sobolev norms and the following variant of the Gagliardo-Nirenberg inequality: \begin{equation}\label{E:GN} \|f\|_{L_x^{p+1}(\mathbb{R}^d)}^{p+1} \leq C_a \|f\|_{L_x^2(\mathbb{R}^d)}^\frac{d+2-(d-2)p}2 \|\sqrt{\mathcal{L}_a}f\|_{L_x^2(\mathbb{R}^d)}^\frac{d(p-1)}2, \end{equation} where $C_a$ denotes the sharp constant in the inequality above for radial functions. We will show in Theorem \ref{T:GN} that the sharp constant $C_{a}$ is attained by a radial solution $Q_{a}$ to the elliptic equation $-\mathcal{L}_a Q_a-Q_a+Q_a^p=0$. The functions $Q_{a}$ provide examples of non-scattering solutions at the radial threshold via $u(t,x) = e^{it}Q_{a}(x)$. We consider the problem of global existence and scattering for \eqref{equ1.1} below this threshold. We begin with the following definitions. \begin{definition}[Solution, scattering]\label{D:solution} Let $t_0\in\mathbb{R}$ and $u_0\in H_x^1(\mathbb{R}^d)$. Let $I$ be an interval containing $t_0$.
A function $u:I\times\mathbb{R}^d\to\mathbb{C}$ is a \emph{solution} to \eqref{equ1.1} if it belongs to $C_t H_a^1\cap L_t^5 H_a^{1,\frac{10d}{5d-4}}(K\times\mathbb{R}^d)$ for any compact $K\subset I$ and obeys the Duhamel formula \[ u(t) = e^{-i(t-t_0)\mathcal{L}_a}u_0 + i\int_{t_0}^t e^{-i(t-s)\mathcal{L}_a}\bigl(|u(s)|^{p-1} u(s)\bigr)\,ds\qtq{for all}t\in I, \] where we rely on the self-adjointness of $\mathcal{L}_a$ to make sense of $e^{-it\mathcal{L}_a}$ via the Hilbert space functional calculus. We call $I$ the \emph{lifespan} of $u$. We call $u$ a \emph{maximal-lifespan solution} if it cannot be extended to any strictly larger interval. If $I=\mathbb{R}$, we call $u$ \emph{global}. Moreover, a global solution $u$ to \eqref{equ1.1} \emph{scatters} if there exist $u_\pm\in H_x^1(\mathbb{R}^d)$ such that \[ \lim_{t\to\pm\infty} \| u(t) - e^{-it\mathcal{L}_a}u_{\pm} \|_{H_x^1(\mathbb{R}^d)} = 0. \] \end{definition} In this paper, we utilize the method in \cite{BM} to obtain the following threshold result for the class of radial solutions: \begin{theorem}[Radial scattering/blowup dichotomy]\label{T:radial} Let $(a,d,p)$ satisfy \begin{equation}\label{equ:acond} a>\begin{cases}-\big(\frac{d-2}2\big)^2\quad \text{if}\quad d=3\quad \text{and}\quad \frac43<p-1\leq2\\ -\big(\frac{d-2}2\big)^2+\big(\frac{d-2}2-\frac1{p-1}\big)^2\quad \text{if}\quad d\geq3\quad \text{and}\quad \frac2{d-2}\vee\frac4d<p-1<\frac4{d-2}, \end{cases} \end{equation} where $a\vee b:=\max\{a,b\}.$ Let $u_0\in H_x^1(\mathbb{R}^d)$ be radial and satisfy $M(u_0)^{1-s_c}E_a(u_0)^{s_c}<M(Q_a)^{1-s_c}E_a(Q_a)^{s_c}$. If, in addition, $$\|u_0\|_{L_x^2}^{1-s_c} \|u_0\|_{\dot H_a^1}^{s_c} <\|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c},$$ then the solution to \eqref{equ1.1} with initial data $u_0$ is global and scatters. \end{theorem} \begin{remark} $(i)$ In the case $a=0$, such a result was first obtained by Holmer and Roudenko \cite{HR} for the 3D cubic equation with radial data and by Duyckaerts-Holmer-Roudenko \cite{DHR} for nonradial data. Later, Killip, Murphy, Visan and the third author \cite{KMVZ} and Lu, Miao and Murphy \cite{LMM} generalized their result to the focusing Schr\"odinger equation with inverse square potential, i.e. \eqref{equ1.1}. In this paper, we extend the result of \cite{KMVZ,LMM} to general power-type nonlinearities in dimensions $d\geq3$, but with radial initial data. We also refer the reader to \cite{KMVZZ2,ZZ} for the defocusing nonlinear Schr\"odinger equation with inverse square potential. The main new ingredient of this paper is to establish the dispersive estimate for radial functions and to overcome the weaker dispersive estimate when $a<0$. $(ii)$ The restriction on $(a,d,p)$ stems from the local well-posedness theory in $H^1(\mathbb{R}^d)$ for \eqref{equ1.1}. In the proof of local well-posedness, we need to estimate powers of $\mathcal{L}_a$ applied to the nonlinearity. To obtain the requisite fractional calculus estimates for $\mathcal{L}_a$, we rely on the equivalence of Sobolev spaces to exchange powers of $\mathcal{L}_a$ and powers of $-\Delta$ (for which fractional calculus estimates are known). This argument leads to a restriction on the range of $(a,d,p)$ as in \eqref{equ:acond}. \end{remark} We sketch the idea of the proof here. First, by variational analysis and a blowup criterion, we deduce that the solution $u$ is global. Then, by the radial Sobolev embedding and the dispersive estimate, we establish a scattering criterion as in the case $a=0$ \cite{Tao}.
Here we should be careful in the case $a<0$, since we only have the weaker dispersive estimate; see Theorem \ref{thm:disp}. Finally, using a virial argument, the radial Sobolev embedding and variational analysis, we prove the above scattering criterion. We conclude the introduction by giving some notation which will be used throughout this paper. To simplify the expression of our inequalities, we introduce some symbols $\lesssim, \thicksim, \ll$. If $X, Y$ are nonnegative quantities, we use $X\lesssim Y $ or $X=O(Y)$ to denote the estimate $X\leq CY$ for some constant $C$, and $X \thicksim Y$ to denote the estimate $X\lesssim Y\lesssim X$. We use $X\ll Y$ to mean $X \leq c Y$ for some small constant $c$. We use $C\gg1$ to denote various large finite constants, and $0< c \ll 1$ to denote various small constants. For any $r, 1\leq r \leq \infty$, we denote by $\|\cdot \|_{r}$ the norm in $L^{r}=L^{r}(\mathbb{R}^d)$ and by $r'$ the conjugate exponent defined by $\frac{1}{r} + \frac{1}{r'}=1$. \section{Preliminaries} \subsection{Harmonic analysis for $\mathcal{L}_a$} In this section, we collect some harmonic analysis tools adapted to the operator $\mathcal{L}_a$. The primary reference for this section is \cite{KMVZZ1}. For $1< r < \infty$, we write $\dot H^{1,r}_a(\mathbb{R}^d)$ and $ H^{1,r}_a(\mathbb{R}^d)$ for the homogeneous and inhomogeneous Sobolev spaces associated with $\mathcal{L}_a$, respectively, which have norms $$ \|f\|_{\dot H^{1,r}_a(\mathbb{R}^d)}= \|\sqrt{\mathcal{L}_a} f\|_{L^r(\mathbb{R}^d)} \qtq{and} \|f\|_{H^{1,r}_a(\mathbb{R}^d)}= \|\sqrt{1+ \mathcal{L}_a} f\|_{L^r(\mathbb{R}^d)}. $$ When $r=2$, we simply write $\dot H^{1}_a(\mathbb{R}^d)=\dot H^{1,2}_a(\mathbb{R}^d)$ and $H^{1}_a(\mathbb{R}^d)=H^{1,2}_a(\mathbb{R}^d)$. By the sharp Hardy inequality, the operator $\mathcal{L}_a$ is positive precisely for $a\geq -(\frac{d-2}2)^2$. Denote \begin{equation}\label{equ:sigma} \sigma:=\tfrac{d-2}2-\bigl[\bigl(\tfrac{d-2}2\bigr)^2+a\bigr]^{\frac12}. \end{equation} Estimates on the heat kernel associated to the operator $\mathcal{L}_a$ were found by Liskevich--Sobol \cite{LS} and Milman--Semenov \cite{MS}. \begin{lemma}[Heat kernel bounds, \cite{LS, MS}] \label{L:kernel} Let $d\geq 3$ and $a\geq -(\tfrac{d-2}{2})^2$. There exist positive constants $C_1,C_2$ and $c_1,c_2$ such that for any $t>0$ and any $x,y\in\mathbb{R}^d\backslash\{0\}$, \[ C_1(1\vee\tfrac{\sqrt{t}}{|x|})^\sigma(1\vee\tfrac{\sqrt{t}}{|y|})^\sigma t^{-\frac{d}{2}} e^{-\frac{|x-y|^2}{c_1t}} \leq e^{-t\mathcal{L}_a}(x,y) \leq C_2(1\vee\tfrac{\sqrt{t}}{|x|})^\sigma(1\vee\tfrac{\sqrt{t}}{|y|})^\sigma t^{-\frac{d}{2}} e^{-\frac{|x-y|^2}{c_2t}}. \] \end{lemma} As a consequence, we can obtain the following equivalence of Sobolev spaces. \begin{lemma}[Equivalence of Sobolev spaces, \cite{KMVZZ1}]\label{pro:equivsobolev} Let $d\geq 3$, $a\geq -(\frac{d-2}{2})^2$, and $0<s<2$. If $1<p<\infty$ satisfies $\frac{s+\sigma}{d}<\frac{1}{p}< \min\{1,\frac{d-\sigma}{d}\}$, then \[ \||\nabla|^s f \|_{L_x^p}\lesssim_{d,p,s} \|(\mathcal{L}_a)^{\frac{s}{2}} f\|_{L_x^p}\qtq{for all}f\in C_c^\infty(\mathbb{R}^d\backslash\{0\}). \] If $\max\{\frac{s}{d},\frac{\sigma}{d}\}<\frac{1}{p}<\min\{1,\frac{d-\sigma}{d}\}$, then \[ \|(\mathcal{L}_a)^{\frac{s}{2}} f\|_{L_x^p}\lesssim_{d,p,s} \||\nabla|^s f\|_{L_x^p} \qtq{for all} f\in C_c^\infty(\mathbb{R}^d\backslash\{0\}). \] \end{lemma} We will make use of the following fractional calculus estimates due to Christ and Weinstein \cite{CW}.
Combining these estimates with Lemma~\ref{pro:equivsobolev}, we can deduce analogous statements for the operator $\mathcal{L}_a$ (for restricted sets of exponents). \begin{lemma}[Fractional calculus]\text{ } \begin{itemize} \item[(i)] Let $s\geq 0$ and $1<r,r_j,q_j<\infty$ satisfy $\tfrac{1}{r}=\tfrac{1}{r_j}+\tfrac{1}{q_j}$ for $j=1,2$. Then \[ \| |\nabla|^s(fg) \|_{L_x^r} \lesssim \|f\|_{L_x^{r_1}} \||\nabla|^s g\|_{L_x^{q_1}} + \| |\nabla|^s f\|_{L_x^{r_2}} \| g\|_{L_x^{q_2}}. \] \item[(ii)] Let $G\in C^1(\mathbb{C})$ and $s\in (0,1]$, and let $1<r_1\leq \infty$ and $1<r,r_2<\infty$ satisfy $\tfrac{1}{r}=\tfrac{1}{r_1}+\tfrac{1}{r_2}$. Then \[ \| |\nabla|^s G(u)\|_{L_x^r} \lesssim \|G'(u)\|_{L_x^{r_1}} \|u\|_{L_x^{r_2}}. \] \end{itemize} \end{lemma} We will need the following radial Sobolev embedding from \cite{Tao}. \begin{lemma}[Radial Sobolev embedding]\label{lem:sobemb} Let $d\geq3.$ For radial $f\in H^1(\mathbb{R}^d)$, there holds \begin{equation}\label{equ:radsobemb} \big\||x|^sf\big\|_{L_x^\infty(\mathbb{R}^d)}\lesssim\|f\|_{H^1(\mathbb{R}^d)}, \end{equation} for $\tfrac{d}2-1\leq s\leq\tfrac{d-1}2.$ \end{lemma} Let $f$ be a Schwartz function defined on $\mathbb{R}^d$. We define the Hankel transform of order $\nu$: \begin{equation}\label{2.14} (\mathcal{H}_{\nu}f)(\xi)=\int_0^\infty(r\rho)^{-\frac{d-2}2}J_{\nu}(r\rho)f(r\omega)r^{d-1}\mathrm{d}r, \end{equation} where $\rho=|\xi|$, $\omega=\xi/|\xi|$ and $J_{\nu}$ is the Bessel function of order $\nu$ defined by the integral \begin{equation*} J_\nu(r)=\frac{(r/2)^\nu}{\Gamma(\nu+\frac12)\Gamma(1/2)}\int_{-1}^{1}e^{isr}(1-s^2)^{(2\nu-1)/2}\mathrm{d}s\quad\text{with}~ \nu>-\frac12~\text{and}~ r>0. \end{equation*} In particular, if the function $f$ is radial, then \begin{equation}\label{2.15} (\mathcal{H}_{\nu}f)(\rho)=\int_0^\infty(r\rho)^{-\frac{d-2}2}J_{\nu}(r\rho)f(r)r^{d-1}\mathrm{d}r. \end{equation} The following properties of the Hankel transform are obtained in \cite{BPSTZ}: \begin{lemma}\label{Hankel} Let $\mathcal{H}_{\nu}$ be defined above and $ A_{\nu}:=-\partial_r^2-\frac{d-1}r\partial_r+\big[\nu^2-\big(\frac{d-2}2\big)^2\big]{r^{-2}}. $ Then $(\rm{i})$ $\mathcal{H}_{\nu}=\mathcal{H}_{\nu}^{-1}$, $(\rm{ii})$ $\mathcal{H}_{\nu}$ is self-adjoint, i.e. $\mathcal{H}_{\nu}=\mathcal{H}_{\nu}^*$, $(\rm{iii})$ $\mathcal{H}_{\nu}$ is an $L^2$ isometry, i.e. $\|\mathcal{H}_{\nu}\phi\|_{L^2_\xi}=\|\phi\|_{L^2_x}$, $(\rm{iv})$ $\mathcal{H}_{\nu}( A_{\nu}\phi)(\xi)=|\xi|^2(\mathcal{H}_{\nu} \phi)(\xi)$, for $\phi\in L^2$. \end{lemma} \subsection{Strichartz estimates and dispersive estimate}\label{sze} Strichartz estimates for the propagator $e^{-it\mathcal{L}_a}$ were proved by Burq, Planchon, Stalker, and Tahvildar-Zadeh in \cite{BPSTZ}. Combining these with the Christ--Kiselev Lemma \cite{CK}, we obtain the following Strichartz estimates: \begin{proposition}[Strichartz estimate, \cite{BPSTZ,ZZ1}] Let $d\geq3$, and fix $a>-\big(\tfrac{d-2}2\big)^2$. The solution $u$ to $(i\partial_t-\mathcal{L}_a)u = F$ on an interval $I\ni t_0$ obeys \[ \|u\|_{L_t^q L_x^r(I\times\mathbb{R}^d)} \lesssim \|u(t_0)\|_{L_x^2(\mathbb{R}^d)} + \|F\|_{L_t^{\tilde q'} L_x^{\tilde r'}(I\times\mathbb{R}^d)} \] for any $2\leq q,\tilde q\leq\infty$ with $\frac{2}{q}+\frac{d}{r}=\frac{2}{\tilde q}+\frac{d}{\tilde r}= \frac{d}2$ including $(q,\tilde q)= (2,2)$. \end{proposition} As a consequence of the Strichartz estimates, we obtain the local well-posedness theory in $H^1(\mathbb{R}^d)$.
\begin{theorem}[Local well-posedness, \cite{KMVZ,LMM}]\label{T:LWP} Let $(a,d,p)$ satisfy the condition \eqref{equ:acond}. Assume $u_0\in H_x^1(\mathbb{R}^d)$, and $t_0\in\mathbb{R}$. Then the following hold: \begin{itemize} \item[(i)] There exist $T=T(\|u_0\|_{H_a^1})>0$ and a unique solution $u:(t_0-T,t_0+T)\times\mathbb{R}^d\to\mathbb{C}$ to \eqref{equ1.1} with $u(t_0)=u_0$. In particular, if $u$ remains uniformly bounded in $H_a^1$ throughout its lifespan, then $u$ extends to a global solution. \item[(ii)] There exists $\eta_0>0$ such that if \[ \|e^{-i(t-t_0)\mathcal{L}_a}u_0\|_{L_{t,x}^{\frac{d+2}2(p-1)}((t_0,\infty)\times\mathbb{R}^d)} < \eta\qtq{for some} 0 <\eta<\eta_0, \] then the solution $u$ to \eqref{equ1.1} with data $u(t_0)=u_0$ is forward-global and satisfies \[ \|u\|_{L_{t,x}^{\frac{d+2}2(p-1)}((t_0,\infty)\times\mathbb{R}^d)} \lesssim \eta. \] The analogous statement holds backward in time (as well as on all of $\mathbb{R}$). \item[(iii)] For any $\psi\in H_a^1$, there exist $T>0$ and a solution $u:(T,\infty)\times\mathbb{R}^d\to\mathbb{C}$ to \eqref{equ1.1} such that \[ \lim_{t\to\infty} \|u(t) - e^{-it\mathcal{L}_a}\psi\|_{H_a^1} = 0. \] The analogous statement holds backward in time. \end{itemize} \end{theorem} Next, we prove the key estimate (dispersive estimate) which will be useful in the proof of scattering criterion (Lemma \ref{lem:scatcri} below). \begin{theorem}[Dispersive estimate]\label{thm:disp} Let $f$ be radial function. $(i)$ If $a\geq0$, then we have \begin{equation}\label{equ:disp} \|e^{it\mathcal{L}_a}f\|_{L^\infty(\mathbb{R}^d)}\leq C|t|^{-\frac{d}2}\|f\|_{L_x^1(\mathbb{R}^d)}. \end{equation} $(ii)$ If $-\tfrac{(d-2)^2}4<a<0$, then there holds \begin{equation}\label{equ:dispaleq0} \big\|(1+|x|^{-\sigma})^{-1}e^{it\mathcal{L}_a}f\big\|_{L^\infty(\mathbb{R}^d)}\leq C\frac{1+|t|^{\sigma}}{|t|^{\frac{d}2}}\big\|(1+|x|^{-\sigma})f\big\|_{L_x^1(\mathbb{R}^d)}, \end{equation} with $\sigma$ being as in \eqref{equ:sigma}. \end{theorem} \begin{proof} Since $f(x)$ is radial, $u(t,x):=e^{it\mathcal{L}_a}f$ solve \begin{equation}\label{equ:utfxrd} \begin{cases} i\partial_tu-A_{\nu}u=0,\\ u(0,r)=f(r), \end{cases} \end{equation} where the operator $A_\nu$ is defined as in Lemma \ref{Hankel} with $\nu=\frac{d-2}2-\sigma.$ Applying the Hankel transform to the equation \eqref{equ:utfxrd}, by $(\rm{iv})$ in Lemma \ref{Hankel}, we have \begin{equation}\label{3.6} \begin{cases} i\partial_{t} \tilde{ u}-\rho^2\tilde{u}=0 \\ \tilde{u}(0,\rho)=(\mathcal{H}_{\nu}f)(\rho), \end{cases} \end{equation} where $\tilde{u}(t,\rho)=(\mathcal{H}_{\nu} u)(t,\rho)$. Solving this ODE and inverting the Hankel transform, we obtain \begin{align*} u(t,r)=&\int_0^\infty (r\rho)^{-\frac{d-2}2}J_\nu(r\rho)e^{-it\rho^2}(\mathcal{H}_\nu f)(\rho)\rho^{d-1}\;d\rho\\ =&\int_0^\infty (r\rho)^{-\frac{d-2}2}J_\nu(r\rho)e^{-it\rho^2}\rho^{d-1}\int_0^\infty (s\rho)^{-\frac{d-2}2}J_\nu(s\rho)f(s)s^{d-1}\;ds\;d\rho\\ =&\int_0^\infty f(s)s^{d-1}K(t,r,s)\;ds, \end{align*} with the kernel \begin{align*} K(t,r,s)=&(rs)^{-\frac{d-2}2}\int_0^\infty J_\nu(r\rho)J_\nu(s\rho)e^{-it\rho^2}\rho\;d\rho\\ =&(rs)^{-\frac{d-2}2}\frac{e^{-\frac12\nu\pi i}}{2it}e^{-\frac{r^2+s^2}{4it}}J_\nu\Big(\frac{rs}{2t}\Big), \end{align*} where we used the analytic continuation as in \cite{Ford} in the second equality. 
Thus, \begin{align*} u(t,r)=&\frac{e^{-\frac12\nu\pi i}}{2it}\int_0^\infty f(s)s^{d-1}(rs)^{-\frac{d-2}2}e^{-\frac{r^2+s^2}{4it}}J_\nu\Big(\frac{rs}{2t}\Big)\;ds\\ =&\frac{e^{-\frac12\nu\pi i}}{2it}\Big(\int_0^\frac{2t}{r}+\int_\frac{2t}r^\infty\Big)\Big(f(s)s^{d-1}(rs)^{-\frac{d-2}2}e^{-\frac{r^2+s^2}{4it}}J_\nu\Big(\frac{rs}{2t}\Big) \Big)\;ds\\ \triangleq&I+II. \end{align*} Using $|J_\nu(r)|\lesssim r^{-\frac12}$ with $r\geq1$, we obtain \begin{align*} |II|\leq&Ct^{-\frac{d}2}\int_\frac{2t}r^\infty|f(s)|s^{d-1}\big(\tfrac{rs}{2t}\big)^{-\frac{d-1}2}\;ds\leq Ct^{-\frac{d}2}\|f\|_{L_x^1(\mathbb{R}^d)}. \end{align*} On the other hand, by $|J_\nu(r)|\lesssim r^{\nu}$ with $r\leq1$, we get for $a\geq0$ \begin{align*} |I|\leq&Ct^{-1}\int_0^\frac{2t}r|f(s)|s^{d-1}(rs)^{-\frac{d-2}2}\Big(\frac{rs}{2t}\Big)^\nu\;ds\\ \leq&Ct^{-\frac{d}2}\int_0^\frac{2t}r|f(s)|s^{d-1}\Big(\frac{rs}{2t}\Big)^{-\sigma}\;ds\\ \leq& Ct^{-\frac{d}2}\|f\|_{L_x^1(\mathbb{R}^d)}, \end{align*} while for $a<0$ \begin{align*} (1+r^{-\sigma})^{-1}|I|\leq&Ct^{-\frac{d}2}\int_0^\frac{2t}r(1+s^{-\sigma})|f(s)|s^{d-1}\frac{t^\sigma}{(1+r^\sigma)(1+s^\sigma)}\;ds\\ \leq&Ct^{-\frac{d}2}t^\sigma\big\|(1+|x|^{-\sigma})f\big\|_{L_x^1(\mathbb{R}^d)}. \end{align*} Therefore by collecting all of them, we conclude the proof of Theorem \ref{thm:disp}. \end{proof} \section{Variational analysis}\label{S:var} In this section, we carry out the variational analysis for the sharp Gagliardo--Nirenberg inequality, which leads naturally to the thresholds appearing in Theorem~\ref{T:radial}. \begin{theorem}[Sharp Gagliardo--Nirenberg inequality]\label{T:GN} Fix $a>-\frac{(d-2)^2}4$ and define \[ C_a := \sup\bigl\{ \|f\|_{L_x^{p+1}}^{p+1} \div \bigl[\|f\|_{L_x^2}^{\frac{d+2-(d-2)p}2} \|f\|_{\dot H_a^1}^{\frac{d(p-1)}2}\bigr]:f\in H_a^1\backslash\{0\},~f~{\rm radial}\bigl\}. \] Then $C_a\in(0, \infty)$ and the Gagliardo--Nirenberg inequality for radial functions \begin{equation}\label{E:GN} \|f\|_{L_x^{p+1}}^{p+1}\leq C_a\|f\|_{L_x^2}^{\frac{d+2-(d-2)p}2} \|f\|_{\dot H_a^1}^{\frac{d(p-1)}2} \end{equation} is attained by a function $Q_a\in H_a^1$, which is a non-zero, non-negative, radial solution to the elliptic problem \begin{equation} \label{elliptic} -\mathcal{L}_a Q_a - Q_a + Q_a^p = 0. \end{equation} \end{theorem} \begin{proof} Define the functional \[ J_a(f) := \frac{\|f\|_{L_x^{p+1}}^{p+1}}{\|f\|_{L_x^2}^{\frac{d+2-(d-2)p}2} \|f\|_{\dot H_a^1}^{\frac{d(p-1)}2}}, \qtq{so that} C_a = \sup\{J_a(f):f\in H_a^1\backslash\{0\},~f~{\rm radial}\}. \] Note that the standard Gagliardo--Nirenberg inequality and the equivalence of Sobolev spaces imply $0<C_a<\infty$. We prove by mimicking the well-known proof for $a=0$ and Theorem 3.1 in \cite{KMVZ}. Take the sequence of radial functions $\{f_n\}\subset H_a^1\backslash\{0\}$ such that $J_a(f_n)\nearrow C_a$. Choose $\mu_n\in\mathbb{R}$ and $\lambda_n\in\mathbb{R}$ so that $g_n(x):=\mu_n f_n(\lambda_nx)$ satisfy $\|g_n\|_{L_x^2}=\|g_n\|_{\dot H_a^1}=1$. Note that $J_a(f_n)=J_a(g_n)$. As $H^1_{\rm rad}\hookrightarrow L^{p+1}_x$ compactly, passing to a subsequence we may assume that $g_n$ converges to some $g\in H_a^1$ strongly in $L_x^{p+1}$ as well as weakly in $H_a^1$. As $g_n$ is an optimizing sequence, we deduce that $C_a=\|g\|_{L_x^{p+1}}^{p+1}$. We also have that $\|g\|_{L_x^2} = \|g\|_{\dot H_a^1} = 1$, or else $g$ would be a super-optimizer. Thus $g$ is an optimizer. The Euler--Lagrange equation for $g$ is given by \[ -\frac{d(p-1)}2C_a\mathcal{L}_a g -\frac{d+2-(d-2)p}2 C_a g + (p+1)g^p = 0. 
\] Thus, if we define $Q_a$ via \[ g(x) = \alpha Q_a(\lambda x), \qtq{with} \alpha =\Big( \tfrac{d+2-(d-2)p}{2(p+1)}C_a\Big)^\frac1{p-1} \qtq{and} \lambda = \sqrt{\tfrac{d+2-(d-2)p}{d(p-1)}}, \] then $Q_a$ is an optimizer of \eqref{E:GN} that solves \eqref{elliptic}. \end{proof} By integration by parts, we easily get \begin{lemma}\label{Pohozaev} Assume $\phi\in\mathcal{S}(\mathbb{R}^d)$. Then \begin{align*} \int_{\mathbb{R}^d} \Delta\phi x\cdot\nabla\phi dx&=\frac{d-2}{2}\int_{\mathbb{R}^d}|\nabla\phi|^2dx,\\ \int_{\mathbb{R}^d}\phi x\cdot\nabla\phi dx&=-\frac{d}{2}\int_{\mathbb{R}^d}|\phi|^2dx,\\ \int_{\mathbb{R}^d}x\cdot\nabla\phi \phi^p dx&=-\frac{d}{p+1}\int_{\mathbb{R}^d}|\phi|^{p+1}dx. \end{align*} \end{lemma} By a simple computation, we have \begin{lemma}\label{lem:qaident} Let $Q_a$ be the solution to $-\mathcal{L}_a Q_a-Q_a+Q_a^p=0$. Then \begin{equation}\label{equ:identme} \|Q_a\|_{L_x^2}^2=\frac{d+2-(d-2)p}{2(p+1)}\|Q_a\|_{L_x^{p+1}}^{p+1},~\|Q_a\|_{\dot{H}^1_a}^2=\frac{d(p-1)}{2(p+1)}\|Q_a\|_{L_x^{p+1}}^{p+1}, \end{equation} and \begin{equation}\label{equ:idenenery} E_a(Q_a)=\frac{dp-(d+4)}{2d(p-1)}\|Q_a\|_{\dot{H}^1_a}^2=\frac{dp-(d+4)}{4(p+1)}\|Q_a\|_{L_x^{p+1}}^{p+1}. \end{equation} Moreover, \begin{equation}\label{equ:caid} C_a=\frac{\|Q_a\|_{L_x^{p+1}}^{p+1}}{\|Q_a\|_{L_x^2}^{\frac{d+2-(d-2)p}2} \|Q_a\|_{\dot H_a^1}^{\frac{d(p-1)}2}}=\Big(\frac{d+2-(d-2)p}{2(p+1)}\Big)^{-\frac{d+2-(d-2)p}4} \Big(\frac{d(p-1)}{2(p+1)}\Big)^{-\frac{d(p-1)}4}\|Q_a\|_{L_x^{p+1}}^{\frac{(d-1)(p^2-1)}2}. \end{equation} and \begin{equation}\label{equ:qah1} C_a\|Q_a\|_{L^2}^{(1-s_c)(p-1)}\|Q_a\|_{\dot{H}^1_a}^{s_c(p-1)}=\frac{2(p+1)}{d(p-1)}. \end{equation} \end{lemma} \begin{proposition}[Coercivity]\label{P:coercive} Fix $a>-\frac{(d-2)^2}4$. Let $u:I\times\mathbb{R}^d\to\mathbb{C}$ be the maximal-lifespan solution to \eqref{equ1.1} with $u(t_0)=u_0\in H_a^1\backslash\{0\}$ for some $t_0\in I$. Assume that \begin{equation}\label{quant-below} M(u_0)^{1-s_c}E_a(u_0)^{s_c} \leq (1-\delta)M(Q_a)^{1-s_c}E_a(Q_a)^{s_c}\qtq{for some}\delta>0. \end{equation} Then there exist $\delta'=\delta'(\delta)>0$, $c=c(\delta,a,\|u_0\|_{L_x^2})>0$, and $\varepsilon=\varepsilon(\delta)>0$ such that: If $\|u_0\|_{L_x^2}^{1-s_c} \| u_0\|_{\dot H_a^1}^{s_c} \leq \|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c} $, then for all $t\in I$, \begin{itemize} \item[(i)] $\|u(t)\|_{L_x^2}^{1-s_c} \|u(t)\|_{\dot H_a^1}^{s_c} \leq (1-\delta')\|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c}$, \item[(ii)] $\|u(t)\|_{\dot H_a^1}^2 - \tfrac{d(p-1)}{2(p+1)} \|u(t)\|_{L_x^{p+1}}^{p+1} \geq c\|u(t)\|_{\dot H_a^1}^2$, \item[(iii)] $(\tfrac{dp-(d+4)}{2d(p-1)}+\tfrac{2\delta'}{3(p-1)}) \|u(t)\|_{\dot H_a^1}^2 \leq E_a(u) \leq \tfrac12 \| u(t)\|_{\dot H_a^1}^2,$ \end{itemize} \end{proposition} \begin{proof} By the sharp Gagliardo--Nirenberg inequality, conservation of mass and energy, and \eqref{quant-below}, we may write \begin{align*} (1-\delta)M(Q_a)^{1-s_c}E_a(Q_a)^{s_c} \geq& M(u)^{1-s_c} E_a(u)^{s_c}\\ \geq& \|u(t)\|_{L_x^2}^{2(1-s_c)}\Big(\frac12\|u(t)\|_{\dot H_a^1}^2-\frac1{p+1}C_a\|u(t)\|_{L_x^2}^{\frac{d+2-(d-2)p}2} \|u(t)\|_{\dot H_a^1}^{\frac{d(p-1)}2}\Big)^{s_c} \end{align*} for any $t\in I$.
Using \eqref{equ:idenenery} and \eqref{equ:qah1}, this inequality becomes \[ (1-\delta)^\frac1{s_c}\geq \frac{d(p-1)}{dp-(d+4)}\biggl(\frac{\|u(t)\|_{L_x^2}^{1-s_c}\|u(t)\|_{\dot H_a^1}^{s_c}}{\|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c}}\biggr)^\frac2{s_c} - \frac2{dp-(d+4)}\biggl(\frac{\|u(t)\|_{L_x^2}^{1-s_c} \|u(t)\|_{\dot H_a^1}^{s_c}}{\|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c}}\biggr)^{\frac2{s_c}(p-1)}. \] Claims (i) now follow from a continuity argument, together with the observation that \[ (1-\delta)^\frac1{s_c} \geq \frac{d(p-1)}{dp-(d+4)}y^\frac2{s_c} - \frac2{dp-(d+4)}y^{\frac2{s_c}(p-1)} \implies |y-1|\geq \delta' \qtq{for some} \delta'=\delta'(\delta)>0. \] For claim (iii), the upper bound follows immediately, since the nonlinearity is focusing. For the lower bound, we again rely on the sharp Gagliardo--Nirenberg. Using (i) and \eqref{equ:qah1} as well, we find \begin{align*} E_a(u) & \geq \tfrac12\|u(t)\|_{\dot H_a^1}^2 [ 1 - \tfrac2{p+1} C_a \|u(t)\|_{L_x^2}^{(1-s_c)(p-1)} \|u(t) \|_{\dot H_a^1}^{s_c(p-1)} ] \\ & \geq \tfrac12\|u(t)\|_{\dot H_a^1}^2 [ 1 - \tfrac4{d(p-1)} (1-\delta')^{p-1}] \geq (\tfrac{dp-(d+4)}{2d(p-1)}+\tfrac{2\delta'}{3(p-1)})\|u(t)\|_{\dot H_a^1}^2 \end{align*} for all $t\in I$. Thus (iii) holds. We turn to (ii). We begin by writing \begin{align*} \|u(t)\|_{\dot H_a^1}^2 - \tfrac{d(p-1)}{2(p+1)} \|u(t)\|_{L_x^{p+1}}^{p+1}& = \tfrac{d(p-1)}2E_a(u) - \tfrac{dp-(d+4)}4 \|u(t)\|_{\dot H_a^1}^2, \\ (1+\varepsilon)\|u(t)\|_{\dot H_a^1}^2 - \tfrac{d(p-1)}{2(p+1)} \|u(t)\|_{L_x^{p+1}}^{p+1}&= \tfrac{d(p-1)}2E_a(u) - (\tfrac{dp-(d+4)}4-\varepsilon)\|u(t)\|_{\dot H_a^1}^2, \end{align*} for $t\in I$. Thus (ii) follows from (iii) by choosing any $0<c\leq \delta'$. \end{proof} \begin{remark}\label{R:coercive} Suppose $u_0\in H_x^1\backslash\{0\}$ satisfies $M(u_0)^{1-s_c}E_a(u_0)^{s_c}<M(Q_a)^{1-s_c}E_a(Q_a)^{s_c}$ and $\|u_0\|_{L_x^2}^{1-s_c} \|u_0\|_{\dot H_a^1}^{s_c} \leq \|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c}.$ Then by continuity, the maximal-lifespan solution $u$ to \eqref{equ1.1} with initial data $u_0$ obeys $\|u(t)\|_{L_x^2}^{1-s_c} \|u(t)\|_{\dot H_a^1}^{s_c} < \|Q_a\|_{L_x^2}^{1-s_c} \|Q_a\|_{\dot H_a^1}^{s_c}$ for all $t$ in the lifespan of $u$. In particular, $u$ remains bounded in $H_x^1$ and hence is global. \end{remark} \section{Proof of Theorem \ref{T:radial}} In this section, we turn to prove Theorem \ref{T:radial}. Assume that $u$ is a solution to \eqref{equ1.1} satisfying the hypotheses of Theorem \ref{T:radial}. It follows from Remark \ref{R:coercive} that $u$ is global and satisfies the uniform bound \begin{equation}\label{equ:uniformboun} \|u_0\|_{L_x^2}^{1-s_c}\|u(t)\|_{\dot{H}^1_a}^{s_c}<(1-\delta')\|Q_a\|_{L_x^2}^{1-s_c}\|Q_a\|_{\dot{H}^1_a}^{s_c}. \end{equation} To show Theorem \ref{T:radial}, we first establish a scattering criterion by following the argument as in \cite{BM,Tao}. \subsection{scattering criterion} \begin{lemma}[Scattering criterion]\label{lem:scatcri} Suppose $u:~\mathbb{R}\times\mathbb{R}^d\to\mathbb{C}$ is a radial solution to \eqref{equ1.1} satisfying \begin{equation}\label{equ:uhasuni} \|u\|_{L_t^\infty(\mathbb{R},H^1(\mathbb{R}^d))}\leq E. \end{equation} There exist $\epsilon=\epsilon(E)>0$ and $R=R(E)>0$ such that if \begin{equation}\label{equ:masslim} \liminf\limits_{t\to\infty}\int_{|x|<R}|u(t,x)|^2\;dx\leq\epsilon^2, \end{equation} then, $u$ scatters forward in time. 
\end{lemma} \begin{proof} First, by interpolation with $\|u\|_{L_t^\infty(\mathbb{R},H^1(\mathbb{R}^d))}$, we only need to show that \begin{equation}\label{equ:reduce} \|u\|_{L_t^{4}([0,\infty),L_x^{r}(\mathbb{R}^d))}<+\infty, \end{equation} with $r=\tfrac{2d}{d-2}$. By H\"older's inequality, Sobolev embedding and \eqref{equ:uhasuni}, we have for any finite interval $I$, \begin{equation*} \|u\|_{L_t^{4}(I,L_x^{r})}\leq C|I|^\frac14\|u\|_{L_t^\infty \dot{H}^1}\leq C|I|^\frac14. \end{equation*} Thus, we are reduced to show for some $T>0$ \begin{equation}\label{equ:redfur} \|u\|_{L_t^{4}([T,\infty),L_x^{r}(\mathbb{R}^d))}<+\infty. \end{equation} By continuity argument, Strichartz estimate and Sobolev embedding, we are further reduced to show \begin{equation}\label{equ:redfur} \|e^{i(t-T)\mathcal{L}_a}u(T)\|_{L_t^{4}([T,\infty),L_x^{r}(\mathbb{R}^d))}\ll1. \end{equation} Now, let $0<\epsilon<1$ and $R\geq1$ to be determined later. Using Duhamel formula, we can write \begin{equation}\label{equ:duhamp} e^{i(t-T)\mathcal{L}_a}u(T)=e^{it\mathcal{L}_a}u_0+F_1(t)+F_2(t), \end{equation} where $$F_j(t)=i\int_{I_j}e^{i(t-s)\mathcal{L}_a}(|u|^{p-1}u)(s)\;ds,~I_1=[0,T-\epsilon^{-\theta}],~I_2=[T-\epsilon^{-\theta},T],$$ where $0<\theta<1$ to be determined later. Using Sobolev embedding, Strichartz estimate, we can pick $T_0$ sufficiently large such that \begin{equation}\label{equ:smallline} \big\|e^{it\mathcal{L}_a}u_0\big\|_{L_t^{4}([T_0,\infty),L_x^{r})}<\epsilon. \end{equation} {\bf Estimate the term $F_1(t)$:} We can rewrite $F_1(t)$ as \begin{equation}\label{equ:f2rw} F_1(t)=e^{i(t-T+\epsilon^{-\theta})\mathcal{L}_a}\big[u(T-\epsilon^{-\theta})\big]-e^{it\mathcal{L}_a}u_0. \end{equation} Using Strichartz estimate, we have \begin{equation}\label{equ:l4l6est} \|F_1\|_{L_t^2([T,\infty),L_x^r)}\lesssim1. \end{equation} On the other hand, by Lemma \ref{thm:disp}, we get for $p\geq2$ \begin{align}\nonumber \|F_1(t)\|_{L_x^r(|x|\leq R_1)}\lesssim&\int_{I_1}\big\|e^{i(t-s)\mathcal{L}_a}(|u|^{p-1}u)(s)\big\|_{L_x^r(|x|\leq R_1)}\;ds\\\nonumber \lesssim&\int_{I_1}\big\|(1+|x|^{-\alpha_1})^{-1}e^{i(t-s)\mathcal{L}_a}(|u|^{p-1}u)(s)\big\|_{L_x^\infty}\;ds \cdot\|(1+|x|^{-\alpha_1})\|_{L_x^\frac{2d}{d-2}(|x|\leq R_1)}\\\nonumber \lesssim&R_1^\frac{d-2}2\int_{I_1}|t-s|^{-\frac{d}2+\alpha_1}\big\|(1+|x|^{-\alpha_1})(|u|^{p-1}u)\big\|_{L_x^1}\;ds\\\nonumber \lesssim&R_1^\frac{d-2}2|t-T+\epsilon^{-\theta}|^{-\frac{d-2}2+\alpha_1}\\\label{equ:pgeq2} \lesssim&R_1^\frac{d-2}2\epsilon^{(\frac{d-2}2-\alpha_1)\theta},\quad t>T, \end{align} where \begin{equation}\label{equ:alpha1} \alpha_1=\begin{cases} 0\quad \text{if}\quad a\geq0\\ \sigma\quad\text{if}\quad a<0, \end{cases} \end{equation} and we have used the estimate for $a\geq0$ $$\big\||u|^{p-1}u\big\|_{L^1_x}\leq\|u\|_{L_x^p}^p\leq C\|u\|_{L_t^\infty H^1_x}<+\infty,$$ while for $a<0$ \begin{align*} \big\||x|^{-\sigma}|u|^{p-1}u\big\|_{L^1_x}\lesssim\big\||x|^{-\sigma}u\big\|_{L_x^2}\|u\|_{L_x^{2(p-1)}}^{p-1} \lesssim\big\||\nabla|^\sigma u\big\|_{L_x^2}\|u\|_{H^1_x}^{p-1}\lesssim\|u\|_{H^1_x}^p<+\infty, \end{align*} since $\sigma<1$ by the assumption \eqref{equ:acond}. 
When $p<2$, we have \begin{align}\nonumber \|F_1(t)\|_{L_x^r(|x|\leq R_1)}\lesssim&\int_{I_2}\big\|e^{i(t-s)\mathcal{L}_a}(|u|^{p-1}u)(s)\big\|_{L_x^r(|x|\leq R_1)}\;ds\\\nonumber \lesssim&\int_{I_2}\big\|(1+|x|^{-\alpha_1})^{-(p-1)}e^{i(t-s)\mathcal{L}_a}(|u|^{p-1}u)(s)\big\|_{L_x^\frac2{2-p}}\;ds\\\nonumber &\times\|(1+|x|^{-\alpha_1})^{p-1}\|_{L_x^\frac{2d}{pd-(d+2)}(|x|\leq R_1)}\\\nonumber \lesssim&R_1^\frac{pd-(d+2)}2\int_{I_2}|t-s|^{-(\frac{d}2+\alpha_1)(p-1)}\big\|(1+|x|^{-\alpha_1})^{p-1}(|u|^{p-1}u)\big\|_{L_x^\frac2p}\;ds\\\nonumber \lesssim&R_1^\frac{pd-(d+2)}2|t-T+\epsilon^{-\theta}|^{-(\frac{d}2+\alpha_1)(p-1)+1}\\\label{equ:pleq2} \lesssim&R_1^\frac{pd-(d+2)}2\epsilon^{(\frac{d}2+\alpha_1)(p-1)\theta-\theta},\quad t>T, \end{align} where $\alpha_1$ is as in \eqref{equ:alpha1} and we have used the estimate for $a\geq0$ $$\big\||u|^{p-1}u\big\|_{L_x^\frac2p}\leq\|u\|_{L_x^2}^p<+\infty$$ and for $a<0$ \begin{align*} \big\||x|^{-(p-1)\sigma}|u|^{p-1}u\big\|_{L_x^\frac2p}\lesssim\big\||x|^{-\sigma}u\big\|_{L_x^2}^{p-1}\|u\|_{L_x^2}\lesssim\|u\|_{L_t^\infty H^1_x}^p<+\infty. \end{align*} Using \eqref{equ:f2rw}, Lemma \ref{lem:sobemb}, we obtain \begin{align*} \|F_1(t)\|_{L_x^r(|x|\geq R_1)}\lesssim\|F_1\|_{L_x^2}^\frac{d-2}d\|F_1\|_{L_x^\infty(|x|\geq R_1)}^\frac2d \lesssim R_1^{-\frac{d-1}d}. \end{align*} Therefore, by taking $R_1^{\frac{d-2}2+\frac{d-1}d}=\epsilon^{-(\frac{d-2}2-\alpha_1)\theta}$ for $p\geq2$, and $R_1^{\frac{pd-(d+2)}2+\frac{d-1}d}=\epsilon^{-(\frac{d}2+\alpha_1)(p-1)\theta+\theta}$ for $p<2$, we get by H\"older's inequality and \eqref{equ:l4l6est} \begin{align}\label{equ:f1est} \|F_1\|_{L_t^4([T,\infty),L_x^r)}\lesssim\|F_1\|_{L_t^\infty([T,\infty),L_x^r)}^\frac12 \|F_1\|_{L_t^2([T,\infty),L_x^r)}^\frac12 \lesssim\epsilon^\beta, \end{align} where \begin{equation*} \beta=\begin{cases} \frac{2(d-1)}{d^2-2}(\frac{d-2}2-\alpha_1)\theta\quad\text{if}\quad p\geq2\\ \frac{2(d-1)}{pd+d-4}(1-(\frac{d}2+\alpha_1)(p-1))\theta\quad\text{if}\quad p<2. \end{cases} \end{equation*} {\bf Estimate the term $F_2(t)$.} First, by \eqref{equ:masslim}, we may choose $T>T_0$ \begin{equation}\label{equ:assumest} \int \chi_R(x)|u(T,x)|^2\;dx\leq \epsilon^2, \end{equation} where $\chi_R(x)\in C_c^\infty(\mathbb{R}^d)$ and \begin{equation*} \chi_R(x)=\begin{cases} 1\quad \text{if}\quad |x|\leq R,\\ 0\quad \text{if}\quad |x|\geq 2R. \end{cases} \end{equation*} On the other hand, combining the identity $\partial_t|u|^2=-2\nabla\cdot{\rm Im}(\bar{u}\nabla u)$ and integration by parts, H\"older's inequality, we obtain \begin{equation*} \Big|\partial_t\int \chi_R(x)|u(t,x)|^2\;dx\Big|\lesssim\frac1R. \end{equation*} Hence, choosing $R\gg\epsilon^{-2-\theta}$, we get by \eqref{equ:assumest} \begin{equation}\label{equ:smallcut} \|\chi_Ru\|_{L_t^\infty L_x^2(I_2\times\mathbb{R}^d)}\lesssim \epsilon. \end{equation} And so, by H\"older's inequality, Sobolev embedding and Lemma \ref{lem:sobemb}, we have for $q=\frac{2(d+2)}{d}$ \begin{align*} \|u\|_{L_{t,x}^{q}(I_2\times\mathbb{R}^d)}\lesssim&\epsilon^{-\frac{\theta}{q}}\|u\|_{L_t^\infty(I_2,L_x^{q})}\\ \lesssim&\epsilon^{-\frac{\theta}{q}}\Big(\|\chi_Ru\|_{L_t^\infty(I_2,L_x^2)}^\frac2{d+2}\|u\|_{L_t^\infty L_x^\frac{2d}{d-2}}^\frac{d}{d+2} +\big\|(1-\chi_R)u\big\|_{L_{t,x}^\infty}^\frac2{d+2}\|u\|_{L_t^\infty L_x^2}^\frac{d}{d+2}\Big)\\ \lesssim&\epsilon^{-\frac{\theta}q}\big(\epsilon^\frac2{d+2}+R^{-\frac{d-2}{d+2}}\big)\lesssim \epsilon^{\frac2{d+2}-\frac{\theta}q}. 
\end{align*} On the other hand, using Strichartz estimate and continuous argument, we have $$\|\mathcal{L}_a^\frac14u\|_{L_t^\frac{2(d+2)}{d-2}(I_2,L^\frac{2d(d+2)}{d^2+4})}^\frac{2(d+2)}{d-2}+\|u\|_{L_{t,x}^\frac{2(d+2)}{d-2}(I_2\times\mathbb{R}^d)}^\frac{2(d+2)} {d-2}\lesssim 1+|I_2|.$$ Thus, we use Sobolev embedding, Strichartz estimate, equivalence of Sobolev spaces (Lemma \ref{pro:equivsobolev}) to get \begin{align}\nonumber \|F_2\|_{L_t^{4}([T,\infty),L_x^{r})}\lesssim&\big\||\nabla|^\frac12F_2\big\|_{L_t^4((T,\infty),L_x^\frac{2d}{d-1})}\\\nonumber \lesssim&\big\|\mathcal{L}_a^{\frac14}(|u|^{p-1}u)\big\|_{L_t^2(I_2,L_x^\frac{2d}{d+2})}\\\nonumber \lesssim&\|u\|_{L_{t,x}^{\frac{d+2}2(p-1)}(I_2\times\mathbb{R}^d)}^{p-1}\|\mathcal{L}_a^\frac14u\|_{L_t^\frac{2(d+2)}{d-2}(I_2,L^\frac{2d(d+2)}{d^2+4})}\\\nonumber \lesssim&|I_2|^\frac{d-2}{2(d+2)}\Big(\|u\|_{L_{t,x}^q(I_2\times\mathbb{R}^d)}^{1-s_c} \|u\|_{L_{t,x}^\frac{2(d+2)}{d-2}(I_2\times\mathbb{R}^d)}^{s_c}\Big)^{p-1}\\\nonumber \lesssim&|I_2|^{\frac{d-2}{2(d+2)}+\frac{d-2}{2(d+2)}s_c(p-1)}\epsilon^{\frac{4-\theta d}{2(d+2)}(1-s_c)(p-1)}\\\nonumber \lesssim& \epsilon^{\frac{4-\theta d}{2(d+2)}(1-s_c)(p-1)-\frac{d-2}{2(d+2)}(1+s_c(p-1))\theta}\\\label{equ:plar} \lesssim&\epsilon^{\frac{4-\theta d}{4(d+2)}(1-s_c)(p-1)} \end{align} by taking $\frac{4-\theta d}{4(d+2)}(1-s_c)(p-1)=\frac{d-2}{2(d+2)}(1+s_c(p-1))\theta$. This together with \eqref{equ:duhamp}, \eqref{equ:smallline}, and \eqref{equ:f1est} yields that \begin{equation*} \|e^{i(t-T)\mathcal{L}_a}u(T)\|_{L_t^4([T,\infty),L_x^\frac{2d}{d-2}(\mathbb{R}^d))}\lesssim \epsilon+\epsilon^\beta+\epsilon^{\frac{4-\theta d}{4(d+2)}(1-s_c)(p-1)}. \end{equation*} And so \eqref{equ:redfur} follows. Therefore, we conclude the proof of Lemma \ref{lem:scatcri}. \end{proof} \subsection{Virial identities}\label{S:virial} In this section, we recall some standard virial-type identities. Given a weight $w:\mathbb{R}^d\to\mathbb{R}$ and a solution $u$ to \eqref{equ1.1}, we define \[ V(t;w) := \int |u(t,x)|^2 w(x)\,dx. \] Using \eqref{equ1.1}, one finds \begin{align}\label{virial} & \partial_t V(t;w) = \int 2\Im \bar u \nabla u \cdot \nabla w \,dx, \\\nonumber & \partial_{tt}V(t;w) = \int (-\Delta\Delta w)|u|^2 + 4\Re \bar u_j u_k w_{jk} + 4|u|^2 \tfrac{ax}{|x|^4}\cdot \nabla w - \tfrac{2(p-1)}{p+1}|u|^{p+1} \Delta w\,dx. \end{align} The standard virial identity makes use of $w(x)=|x|^2$. \begin{lemma}[Standard virial identity]\label{L:virial0} Let $u$ be a solution to \eqref{equ1.1}. Then \[ \partial_{tt} V(t;|x|^2) = 8\Bigl[\|u(t)\|_{\dot H_a^1}^2 - \tfrac{d(p-1)}{2(p+1)}\|u(t)\|_{L_x^{p+1}}^{p+1}\Bigr]. \] \end{lemma} In general, we do not work with solutions for which $V(t;|x|^2)$ is finite. Thus, we need a truncated version of the virial identity (cf. \cite{OT}, for example). For $R>1$, we define $w_R(x)$ to be a smooth, non-negative radial function satisfying \begin{equation}\label{phi-virial} w_R(x)=\begin{cases} |x|^2 & |x|\leq \frac{R}2 \\ R|x| & |x|>R,\end{cases} \end{equation} with \begin{equation}\label{equ:wr} \partial_rw_R\geq0,~\partial_r^2w_R\geq0,~|\partial^\alpha w_R(x)|\lesssim_\alpha R|x|^{-|\alpha|+1},~|\alpha|\geq1. \end{equation} In this case, we use \eqref{virial} to deduce the following: \begin{lemma}[Truncated virial identity]\label{L:virial} Let $u$ be a radial solution to \eqref{equ1.1} and let $R>1$. 
Then \begin{align*} \nonumber \partial_{tt}& V(t;w_R)\\ &= 8\int_{|x|\leq\frac{R}2}\Bigl[|\nabla u(t)|^2+a\tfrac{|u|^2}{|x|^2} - \tfrac{d(p-1)}{2(p+1)} |u(t)|^{p+1}\Bigr]\;dx \\ & \quad + \int_{|x|>R} \Bigl[4aR\tfrac{|u|^2}{|x|^3}-\tfrac{2(d-1)(p-1)}{p+1}\tfrac{R}{|x|}|u|^{p+1}+\frac{4R}{|x|}(|\nabla u|^2-|\partial_ru|^2)\Bigr]\;dx \\ &\quad +\int_{\frac{R}2\leq|x|\leq R}\Bigl[4{\rm Re}\partial_{jk}w_R\bar{u}_j\partial_ku+O\bigl( \tfrac{R}{|x|}|u|^{p+1} + \tfrac{R}{|x|^3}|u|^2\bigr)\Bigr]\;dx. \end{align*} Furthermore, by \eqref{equ:wr}, we have that $$\int_{\frac{R}2\leq|x|\leq R}\Bigl[4{\rm Re}\partial_{jk}w_R\bar{u}_j\partial_ku\Bigr]\;dx\geq 0.$$ \end{lemma} \subsection{Proof of Theorem \ref{T:radial}} By the scattering criterion (Lemma \ref{lem:scatcri}) and H\"older's inequality, Theorem \ref{T:radial} follows from the following lemma. \begin{lemma}\label{lem:poste} There exists a sequence of times $t_n\to\infty$ and a sequence of radii $R_n\to\infty$ such that \begin{equation}\label{equ:pot} \lim_{n\to\infty}\int_{|x|\leq R_n}|u(t_n,x)|^{p+1}\;dx=0. \end{equation} \end{lemma} It is easy to see that the above lemma can be derived by the following proposition (choosing $T$ sufficiently large and $R=\max\{T^{1/3}, T^{1/p}\}$). \begin{proposition}[Morawetz estimate]\label{prop:mores} Let $T>0.$ For $R=R(\delta,M(u),Q_a)$ sufficiently large, we have \begin{equation}\label{equ:potensmal} \frac1{T}\int_0^T\int_{|x|\leq R}|u(t,x)|^{p+1}\;dx\;dt\lesssim\frac{R}{T}+\frac1{R^2}+\frac1{R^{p-1}}. \end{equation} \end{proposition} \begin{proof} First, by Lemma \ref{L:virial}, we have \begin{align} \nonumber \partial_{tt}& V(t;w_R)\\ \label{equ:mainter1} &= 8\int_{|x|\leq\frac{R}2}\Bigl[|\nabla u(t)|^2+a\tfrac{|u|^2}{|x|^2} - \tfrac{d(p-1)}{2(p+1)} |u(t)|^{p+1}\Bigr]\;dx \\ \label{equ:erro11} & \quad + \int_{|x|>R} \Bigl[4aR\tfrac{|u|^2}{|x|^3}-\tfrac{2(d-1)(p-1)}{p+1}\tfrac{R}{|x|}|u|^{p+1}+\frac{4R}{|x|}(|\nabla u|^2-|\partial_ru|^2)\Bigr]\;dx \\ \label{equ:erro21} &\quad +\int_{\frac{R}2\leq|x|\leq R}\Bigl[4{\rm Re}\partial_{jk}w_R\bar{u}_j\partial_ku+O\bigl( \tfrac{R}{|x|}|u|^{p+1} + \tfrac{R}{|x|^3}|u|^2\bigr)\Bigr]\;dx. \end{align} We define $\chi$ to be a smooth cutoff to the set $\{|x|\leq1\}$ and set $\chi_R(x)=\chi(x/R)$. Note that \begin{equation}\label{equ:iden} \int\chi_R^2|\nabla u|^2\;dx=\int\Big[|\nabla(\chi_Ru)|^2+\chi_R\Delta(\chi_R)|u|^2\Big]\;dx, \end{equation} we get \begin{align*} \eqref{equ:mainter1}=&8\int\chi_R^2\Bigl[|\nabla u(t)|^2+a\tfrac{|u|^2}{|x|^2} - \tfrac{d(p-1)}{2(p+1)} |u(t)|^{p+1}\Bigr]\;dx \\ &+8\int(1-\chi_R^2)\Bigl[|\nabla u(t)|^2+a\tfrac{|u|^2}{|x|^2} - \tfrac{d(p-1)}{2(p+1)} |u(t)|^{p+1}\Bigr]\;dx \\ =&8\Big[\big\|\chi_Ru\big\|_{\dot{H}^1_a}^2-\tfrac{d(p-1)}{2(p+1)}\|\chi_Ru\|_{L_x^{p+1}}^{p+1}\Big]+8\int(1-\chi_R^2)|\nabla u(t)|^2\;dx\\ &+\int O\big(\tfrac{|u|^2}{R^2}\big)\;dx+C\int_{|x|\geq R}|u|^{p+1}\;dx. \end{align*} Next, we claim that there exists $c>0$ such that \begin{equation}\label{equ:mainlarg} \big\|\chi_Ru\big\|_{\dot{H}^1_a}^2-\tfrac{d(p-1)}{2(p+1)}\|\chi_Ru\|_{L_x^{p+1}}^{p+1}\geq c\|\chi_Ru\|_{L_x^{p+1}}^{p+1}. \end{equation} Indeed, by \eqref{equ:iden}, we have $$\big\|\chi_Ru\big\|_{\dot{H}^1_a}^2\leq \|u\|_{\dot{H}^1_x}^2+O\big(\tfrac{M(u)}{R^2}\big),$$ and $\|\chi_Ru\|_{L_x^2}\leq \|u\|_{L_x^2}$. Then, \eqref{equ:mainlarg} follows by the same argument as Proposition \ref{P:coercive} (ii). 
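For completeness, let us also record how the identity \eqref{equ:iden} used above is obtained: expanding $|\nabla(\chi_Ru)|^2=\chi_R^2|\nabla u|^2+|\nabla\chi_R|^2|u|^2+\tfrac12\nabla(\chi_R^2)\cdot\nabla|u|^2$ and integrating the last term by parts,
\begin{equation*}
\int\tfrac12\nabla(\chi_R^2)\cdot\nabla|u|^2\;dx=-\tfrac12\int\Delta(\chi_R^2)|u|^2\;dx=-\int\Big[|\nabla\chi_R|^2+\chi_R\Delta(\chi_R)\Big]|u|^2\;dx,
\end{equation*}
so the terms involving $|\nabla\chi_R|^2$ cancel and \eqref{equ:iden} follows.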
Now, applying the fundamental theorem of calculus on the interval $[0,T]$, discarding the positive terms, and using \eqref{equ:mainter1}-\eqref{equ:mainlarg}, we obtain $$\int_0^T\int_{\mathbb{R}^d}|\chi_Ru|^{p+1}\;dx\;dt\lesssim\sup_{t\in[0,T]}|\partial_tV(t;w_R)|+\int_0^T\int_{|x|\geq R}|u(t,x)|^{p+1}\;dx\;dt+\frac{T}{R^2}M(u).$$ From H\"older's inequality, \eqref{virial} and \eqref{equ:uniformboun}, we get $$\sup_{t\in\mathbb{R}}|\partial_tV(t;w_R)|\lesssim R.$$ On the other hand, by radial Sobolev embedding, one has \begin{equation} \int_{|x|\geq R}|u(t,x)|^{p+1}\;dx\lesssim \frac1{R^{p-1}}\|u\|_{L_t^\infty \dot{H}^1}^{p-1}M(u). \end{equation} Combining the above estimates, we obtain $$\frac1{T}\int_0^T\int_{|x|\leq R}|u(t,x)|^{p+1}\;dx\;dt\lesssim\frac{R}{T}+\frac1{R^2}+\frac1{R^{p-1}},$$ which is precisely \eqref{equ:potensmal}. Hence we conclude the proof of Proposition \ref{prop:mores}. Therefore, we complete the proof of Theorem \ref{T:radial}. \end{proof} \end{document}
arXiv
Linda Gojak

Linda M. Gojak is an American mathematics educator who was president of the National Council of Supervisors of Mathematics and, in 2012–2014, of the National Council of Teachers of Mathematics.[1]

Education and career

Gojak is a graduate of Miami University. She earned a master's degree in education, specializing in elementary and middle school mathematics, from Kent State University. She was a mathematics teacher for 28 years, and then in 1999 took a position in the Department of Education and Allied Studies at John Carroll University as director of the Center for Mathematics and Science Education, Teaching, and Technology.[1]

Books

Gojak is the author of books including:
• What's Your Math Problem!?!: Getting to the Heart of Teaching Problem (Shell Education, 2011)
• The Common Core Mathematics Companion: The Standards Decoded (with Ruth Harbin Miles, Corwin, 2016)
• Mathematize It! Going Beyond Key Words to Make Sense of Word Problems (with Kimberly Morrow-Leong and Sara Delano Moore, Corwin, 2020)

Recognition

The Ohio Council of Teachers of Mathematics has named its annual state-level award for middle school teaching the Linda M. Gojak Award.[2]

References

1. Linda M. Gojak, President 2012–2014, National Council of Teachers of Mathematics, retrieved 2021-01-01
2. Educator awards, Ohio Council of Teachers of Mathematics, retrieved 2021-01-01
Wikipedia
Arabic numerals The ten Arabic numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 are the most commonly used symbols for writing numbers. The term often also implies a positional notation using the numerals, as well as the use of a decimal base, in particular when contrasted with other systems such as Roman numerals. However, the symbols are also used to write numbers in other bases such as octal, as well as for writing non-numerical information such as trademarks or license plate identifiers. They are also called Western Arabic numerals, Ghubār numerals, Hindu-Arabic numerals,[1] Western digits, Latin digits, or European digits.[2] The Oxford English Dictionary differentiates them with the fully capitalized Arabic Numerals to refer to the Eastern digits.[3] The term numbers or numerals or digits often implies only these symbols; however, this can only be inferred from context. Europeans learned of Arabic numerals about the 10th century, though their spread was a gradual process. Two centuries later, in the Algerian city of Béjaïa, the Italian scholar Fibonacci first encountered the numerals; his work was crucial in making them known throughout Europe. European trade, books, and colonialism helped popularize the adoption of Arabic numerals around the world. The numerals have found worldwide use significantly beyond the contemporary spread of the Latin alphabet, and have become common in the writing systems where other numeral systems existed previously, such as Chinese and Japanese numerals. History Origin Positional decimal notation including a zero symbol was developed in India, using symbols visually distinct from those that would eventually enter into international use. As the concept spread, the sets of symbols used in different regions diverged over time. The immediate ancestors of the digits now commonly called "Arabic numerals" were introduced to Europe in the 10th century by Arabic speakers of Spain and North Africa, with digits at the time in wide use from Libya to Morocco.
In the eastern part of the Arabian Peninsula, the Arabs were using the Eastern Arabic numerals or "Mashriki" numerals: ٠, ١, ٢, ٣, ٤, ٥, ٦, ٧, ٨, ٩.[4] Al-Nasawi wrote in the early 11th century that mathematicians had not agreed on the form of the numerals, but most of them had agreed to train themselves with the forms now known as Eastern Arabic numerals.[5] The oldest specimens of the written numerals available are from Egypt and date to 873–874 AD. They show three forms of the numeral "2" and two forms of the numeral "3", and these variations indicate the divergence between what later became known as the Eastern Arabic numerals and the Western Arabic numerals.[6] The Western Arabic numerals came to be used in the Maghreb and Al-Andalus from the 10th century onward.[7] Some amount of consistency in the Western Arabic numeral forms endured from the 10th century, found in a Latin manuscript of Isidore of Seville's Etymologiae from 976 and the Gerbertian abacus, into the 12th and 13th centuries, in early manuscripts of translations from the city of Toledo.[4] Calculations were originally performed using a dust board (takht, Latin: tabula), which involved writing symbols with a stylus and erasing them. The use of the dust board appears to have introduced a divergence in terminology as well: whereas the Hindu reckoning was called ḥisāb al-hindī in the east, it was called ḥisāb al-ghubār in the west (literally, "calculation with dust").[8] The numerals themselves were referred to in the west as ashkāl al‐ghubār ("dust figures") or qalam al-ghubår ("dust letters").[9] Al-Uqlidisi later invented a system of calculations with ink and paper "without board and erasing" (bi-ghayr takht wa-lā maḥw bal bi-dawāt wa-qirṭās).[10] A popular myth claims that the symbols were designed to indicate their numeric value through the number of angles they contained, but there is no contemporary evidence of this, and the myth is difficult to reconcile with any digits past 4.[11] Adoption and spread The first mentions of the numerals from 1 to 9 in the West are found in the 976 Codex Vigilanus, an illuminated collection of various historical documents covering a period from antiquity to the 10th century in Hispania.[12] Other texts show that numbers from 1 to 9 were occasionally supplemented by a placeholder known as sipos, represented as a circle or wheel, reminiscent of the eventual symbol for zero. The Arabic term for zero is sifr (صفر), transliterated into Latin as cifra, and the origin of the English word cipher. From the 980s, Gerbert of Aurillac (later Pope Sylvester II) used his position to spread knowledge of the numerals in Europe. Gerbert studied in Barcelona in his youth. He was known to have requested mathematical treatises concerning the astrolabe from Lupitus of Barcelona after he had returned to France.[12] The reception of Arabic numerals in the West was gradual and lukewarm, as other numeral systems circulated in addition to the older Roman numbers. As a discipline, the first to adopt Arabic numerals as part of their own writings were astronomers and astrologists, evidenced from manuscripts surviving from mid-12th-century Bavaria. 
Reinher of Paderborn (1140–1190) used the numerals in his calendrical tables to calculate the dates of Easter more easily in his text Compotus emendatus.[13] Italy Leonardo Fibonacci was a Pisan mathematician who had studied in Bugia, Algeria, and he endeavored to promote the numeral system in Europe with his 1202 book Liber Abaci: When my father, who had been appointed by his country as public notary in the customs at Bugia acting for the Pisan merchants going there, was in charge, he summoned me to him while I was still a child, and having an eye to usefulness and future convenience, desired me to stay there and receive instruction in the school of accounting. There, when I had been introduced to the art of the Indians' nine symbols through remarkable teaching, knowledge of the art very soon pleased me above all else and I came to understand it. The Liber Abaci introduced the huge advantages of a positional numeric system, and was widely influential. As Fibonacci used the symbols from Béjaïa for the digits, these symbols were also introduced in the same instruction, ultimately leading to their widespread adoption.[14] Fibonacci's introduction coincided with Europe's commercial revolution of the 12th and 13th centuries, centered in Italy. Positional notation could be used for quicker and more complex mathematical operations (such as currency conversion) than Roman and other numeric systems could. They could also handle larger numbers, did not require a separate reckoning tool, and allowed the user to check a calculation without repeating the entire procedure.[14] Although positional notation opened possibilities that were hampered by previous systems, late medieval Italian merchants did not stop using Roman numerals (or other reckoning tools). Rather, Arabic numerals became an additional tool that could be used alongside others.[14] Europe By the late 14th century, only a few texts using Arabic numerals appeared outside of Italy. This suggests that the use of Arabic numerals in commercial practice, and the significant advantage they conferred, remained a virtual Italian monopoly until the late 15th century.[14] This may in part have been due to language barriers: although Fibonacci's Liber Abaci was written in Latin, the Italian abacus traditions was predominantly written in Italian vernaculars that circulated in the private collections of abacus schools or individuals. It was likely difficult for non-Italian merchant bankers to access comprehensive information. The European acceptance of the numerals was accelerated by the invention of the printing press, and they became widely known during the 15th century. Their use grew steadily in other centers of finance and trade such as Lyon.[15] Early evidence of their use in Britain includes: an equal hour horary quadrant from 1396,[16] in England, a 1445 inscription on the tower of Heathfield Church, Sussex; a 1448 inscription on a wooden lych-gate of Bray Church, Berkshire; and a 1487 inscription on the belfry door at Piddletrenthide church, Dorset; and in Scotland a 1470 inscription on the tomb of the first Earl of Huntly in Elgin Cathedral.[17] In central Europe, the King of Hungary Ladislaus the Posthumous, started the use of Arabic numerals, which appear for the first time in a royal document of 1456.[18] By the mid-16th century, they were in common use in most of Europe. Roman numerals remained in use mostly for the notation of Anno Domini (“A.D.”) years, and for numbers on clock faces. 
Other digits (such as Eastern Arabic) were virtually unknown. Russia Prior to the introduction of Arabic numerals, Cyrillic numerals, derived from the Cyrillic alphabet, were used by South and East Slavs. The system was used in Russia as late as the early 18th century, although it was formally replaced in official use by Peter the Great in 1699.[19] Reasons for Peter's switch from the alphanumerical system are believed to go beyond a surface-level desire to imitate the West. Historian Peter Brown makes arguments for sociological, militaristic, and pedagogical reasons for the change. At a broad, societal level, Russian merchants, soldiers, and officials increasingly came into contact with counterparts from the West and became familiar with the communal use of Arabic numerals. Peter also covertly travelled throughout Northern Europe from 1697 to 1698 during his Grand Embassy and was likely informally exposed to Western mathematics during this time.[20] The Cyrillic system was found to be inferior for calculating practical kinematic values, such as the trajectories and parabolic flight patterns of artillery. With its use, it was difficult to keep pace with Arabic numerals in the growing field of ballistics, whereas Western mathematicians such as John Napier had been publishing on the topic since 1614.[21] China While positional Chinese numeral systems such as the counting rod system and Suzhou numerals had been in use prior to the introduction of Arabic numerals,[22][23] the externally-developed system was eventually introduced to medieval China by the Hui people. In the early 17th century, European-style Arabic numerals were introduced by Spanish and Portuguese Jesuits.[24][25][26] Encoding The ten Arabic numerals are encoded in virtually every character set designed for electric, radio, and digital communication, such as Morse code. They are encoded in ASCII (and therefore in Unicode encodings[27]) at positions 0x30 to 0x39. Masking all but the four least-significant binary digits gives the value of the decimal digit, a design decision facilitating the digitization of text onto early computers. EBCDIC used a different offset, but also possessed the aforementioned masking property. 
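This masking behaviour is easy to verify directly; the following short Python snippet (an illustrative check written for this article, not part of any encoding standard) confirms it for all ten digits:

    # ASCII places '0'..'9' at code points 0x30..0x39, so the low four bits
    # of each code point equal the digit's numeric value.
    for ch in "0123456789":
        assert ord(ch) & 0x0F == int(ch)  # e.g. ord('7') == 0x37 and 0x37 & 0x0F == 7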
ASCII Binary ASCII Octal ASCII Decimal ASCII Hex Unicode EBCDIC Hex 0 0011 0000 060 48 30 U+0030 DIGIT ZERO F0 1 0011 0001 061 49 31 U+0031 DIGIT ONE F1 2 0011 0010 062 50 32 U+0032 DIGIT TWO F2 3 0011 0011 063 51 33 U+0033 DIGIT THREE F3 4 0011 0100 064 52 34 U+0034 DIGIT FOUR F4 5 0011 0101 065 53 35 U+0035 DIGIT FIVE F5 6 0011 0110 066 54 36 U+0036 DIGIT SIX F6 7 0011 0111 067 55 37 U+0037 DIGIT SEVEN F7 8 0011 1000 070 56 38 U+0038 DIGIT EIGHT F8 9 0011 1001 071 57 39 U+0039 DIGIT NINE F9 Comparison with other digits Overview of numeral systems Symbol Used with scripts Numerals 0123456789many Arabic numerals 𑁦𑁧𑁨𑁩𑁪𑁫𑁬𑁭𑁮𑁯Brahmi Brahmi numerals ०१२३४५६७८९Devanagari Devanagari numerals ০১২৩৪৫৬৭৮৯Bengali–Assamese Bengali numerals ੦੧੨੩੪੫੬੭੮੯Gurmukhi Gurmukhi numerals ૦૧૨૩૪૫૬૭૮૯Gujarati Gujarati numerals ୦୧୨୩୪୫୬୭୮୯Odia Odia numerals ᱐᱑᱒᱓᱔᱕᱖᱗᱘᱙Santali Santali numerals 𑇐𑇑𑇒𑇓𑇔𑇕𑇖𑇗𑇘𑇙Sharada Sharada numerals ௦௧௨௩௪௫௬௭௮௯Tamil Tamil numerals ౦౧౨౩౪౫౬౭౮౯Telugu Telugu script § Numerals ೦೧೨೩೪೫೬೭೮೯Kannada Kannada script § Numerals ൦൧൨൩൪൫൬൭൮൯Malayalam Malayalam numerals ෦෧෨෩෪෫෬෭෮෯Sinhala Sinhala numerals ၀၁၂၃၄၅၆၇၈၉Burmese Burmese numerals ༠༡༢༣༤༥༦༧༨༩Tibetan Tibetan numerals ᠐᠑᠒᠓᠔᠕᠖᠗᠘᠙Mongolian Mongolian numerals ០១២៣៤៥៦៧៨៩Khmer Khmer numerals ๐๑๒๓๔๕๖๗๘๙Thai Thai numerals ໐໑໒໓໔໕໖໗໘໙Lao Lao script § Numerals ᮰᮱᮲᮳᮴᮵᮶᮷᮸᮹Sundanese Sundanese numerals ꧐꧑꧒꧓꧔꧕꧖꧗꧘꧙Javanese Javanese numerals ᭐᭑᭒᭓᭔᭕᭖᭗᭘᭙Balinese Balinese numerals ٠١٢٣٤٥٦٧٨٩Arabic Eastern Arabic numerals ۰۱۲۳۴۵۶۷۸۹Persian / Dari / Pashto ۰۱۲۳۴۵۶۷۸۹Urdu / Shahmukhi -፩፪፫፬፭፮፯፰፱Ethio-Semitic Ge'ez numerals 〇一二三四五六七八九East Asia Chinese numerals See also • Arabic numeral variations • Regional variations in modern handwritten Arabic numerals • Seven-segment display • Text figures Explanatory notes Citations 1. "Arabic numeral". American Heritage Dictionary. Houghton Mifflin Harcourt Publishing Company. 2020. Archived from the original on 21 November 2021. Retrieved 21 November 2021. 2. Terminology for Digits Archived 26 October 2021 at the Wayback Machine. Unicode Consortium. 3. "Arabic", Oxford English Dictionary, 2nd edition 4. Burnett, Charles (2002). Dold-Samplonius, Yvonne; Van Dalen, Benno; Dauben, Joseph; Folkerts, Menso (eds.). From China to Paris: 2000 Years Transmission of Mathematical Ideas. Franz Steiner Verlag. pp. 237–288. ISBN 978-3-515-08223-5. Archived from the original on 30 July 2022. Retrieved 29 July 2022. 5. Kunitzsch 2003, p. 7: "Les personnes qui se sont occupées de la science du calcul n'ont pas été d'accord sur une partie des formes de ces neuf signes; mais la plupart d'entre elles sont convenues de les former comme il suit." 6. Kunitzsch 2003, p. 5. 7. Kunitzsch 2003, pp. 12–13: "While specimens of Western Arabic numerals from the early period—the tenth to thirteenth centuries—are still not available, we know at least that Hindu reckoning (called ḥisāb al-ghubār) was known in the West from the 10th century onward..." 8. Kunitzsch 2003, p. 8. 9. Kunitzsch 2003, p. 10. 10. Kunitzsch 2003, pp. 7–8. 11. Ifrah, Georges (1998). The universal history of numbers: from prehistory to the invention of the computer. Translated by David Bellos (from the French). London: Harvill Press. pp. 356–357. ISBN 9781860463242. 12. Nothaft, C. Philipp E. (3 May 2020). "Medieval Europe's satanic ciphers: on the genesis of a modern myth". British Journal for the History of Mathematics. 35 (2): 107–136. doi:10.1080/26375451.2020.1726050. ISSN 2637-5451. S2CID 213113566. 13. Herold, Werner (2005). "Der "computus emendatus" des Reinher von Paderborn". 
ixtheo.de (in German). Archived from the original on 30 July 2022. Retrieved 29 July 2022. 14. Danna, Raffaele (12 July 2021). The Spread of Hindu-Arabic Numerals in the European Tradition of Practical Arithmetic: a Socio-Economic Perspective (13th–16th centuries) (Doctoral thesis). University of Cambridge. doi:10.17863/cam.72497. Archived from the original on 27 July 2021. Retrieved 29 July 2022. 15. Danna, Raffaele; Iori, Martina; Mina, Andrea (22 June 2022). "A Numerical Revolution: The Diffusion of Practical Mathematics and the Growth of Pre-modern European Economies". SSRN 4143442. 16. "14th century timepiece unearthed in Qld farm shed". ABC News. Archived from the original on 29 February 2012. Retrieved 10 November 2011. 17. See G. F. Hill, The Development of Arabic Numerals in Europe, for more examples. 18. Erdélyi: Magyar művelődéstörténet 1-2. kötet. Kolozsvár, 1913, 1918. 19. Conatser Segura, Sylvia (26 May 2020). Orthographic Reform and Language Planning in Russian History (Honors thesis). Archived from the original on 30 July 2022. Retrieved 29 July 2022. 20. Brown, Peter B. (2012). "Muscovite Arithmetic in Seventeenth-Century Russian Civilization: Is It Not Time to Discard the "Backwardness" Label?". Russian History. 39 (4): 393–459. doi:10.1163/48763316-03904001. ISSN 0094-288X. Archived from the original on 30 July 2022. Retrieved 29 July 2022. 21. Lockwood, E. H. (October 1978). "Mathematical discoveries 1600-1750, by P. L. Griffiths. Pp 121. £2·75. 1977. ISBN 0 7223 1006 4 (Stockwell)". The Mathematical Gazette. 62 (421): 219. doi:10.2307/3616704. ISSN 0025-5572. JSTOR 3616704. Archived from the original on 30 July 2022. Retrieved 29 July 2022. 22. Shell-Gellasch, Amy (2015). Algebra in context : introductory algebra from origins to applications. J. B. Thoo. Baltimore. ISBN 978-1-4214-1728-8. OCLC 907657424.{{cite book}}: CS1 maint: location missing publisher (link) 23. Uy, Frederick L. (January 2003). "The Chinese Numeration System and Place Value". Teaching Children Mathematics. 9 (5): 243–247. doi:10.5951/tcm.9.5.0243. ISSN 1073-5836. Archived from the original on 30 July 2022. Retrieved 29 July 2022. 24. Helaine Selin, ed. (1997). Encyclopaedia of the history of science, technology, and medicine in non-western cultures. Springer. p. 198. ISBN 978-0-7923-4066-9. Archived from the original on 27 October 2015. Retrieved 18 October 2015. 25. Meuleman, Johan H. (2002). Islam in the era of globalization: Muslim attitudes towards modernity and identity. Psychology Press. p. 272. ISBN 978-0-7007-1691-3. Archived from the original on 27 October 2015. Retrieved 18 October 2015. 26. Peng Yoke Ho (2000). Li, Qi and Shu: An Introduction to Science and Civilization in China. Mineola, New York: Courier Dover Publications. p. 106. ISBN 978-0-486-41445-4. Archived from the original on 27 October 2015. Retrieved 18 October 2015. 27. "The Unicode Standard, Version 13.0" (PDF). unicode.org. Archived (PDF) from the original on 2 June 2001. Retrieved 1 September 2021. General and cited sources • Kunitzsch, Paul (2003). "The Transmission of Hindu-Arabic Numerals Reconsidered". In J. P. Hogendijk; A. I. Sabra (eds.). The Enterprise of Science in Islam: New Perspectives. MIT Press. pp. 3–22. ISBN 978-0-262-19482-2. Further reading • Burnett, Charles (2006). "The Semantics of Indian Numerals in Arabic, Greek and Latin". Journal of Indian Philosophy. Springer-Netherlands. 34 (1–2): 15–30. doi:10.1007/s10781-005-8153-z. S2CID 170783929. • Hayashi, Takao (1995). 
The Bakhshālī Manuscript: An Ancient Indian Mathematical Treatise. Groningen, Netherlands: Egbert Forsten. ISBN 906980087X. • Ifrah, Georges (2000). A Universal History of Numbers: From Prehistory to Computers. New York: Wiley. ISBN 0471393401. • Katz, Victor J., ed. (20 July 2007). The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton, New Jersey: Princeton University Press. ISBN 978-0691114859. • "Mathematics in South Asia". Nature. 189 (4761): 273. 1961. Bibcode:1961Natur.189S.273.. doi:10.1038/189273c0. S2CID 4288165. • Ore, Oystein (1988). "Hindu-Arabic numerals". Number Theory and Its History. Dover. pp. 19–24. ISBN 0486656209. External links • Lam Lay Yong, "Development of Hindu Arabic and Traditional Chinese Arithmetic", Chinese Science 13 (1996): 35–54. • "Counting Systems and Numerals", Historyworld. Retrieved 11 December 2005. • The Evolution of Numbers. 16 April 2005. • O'Connor, J. J., and E. F. Robertson, Indian numerals Archived 6 July 2015 at the Wayback Machine. November 2000. • History of the numerals • Arabic numerals • Hindu-Arabic numerals • Numeral & Numbers' history and curiosities • Gerbert d'Aurillac's early use of Hindu-Arabic numerals at Convergence
Wikipedia
\begin{document} \begin{abstract} We present the proof of several inequalities using the technique introduced by Alexandroff, Bakelman, and Pucci to establish their ABP estimate. First, we give a new and simple proof of a lower bound of Berestycki, Nirenberg, and Varadhan concerning the principal eigenvalue of an elliptic operator with bounded measurable coefficients. The rest of the paper is a survey on the proofs of several isoperimetric and Sobolev inequalities using the ABP technique. This includes new proofs of the classical isoperimetric inequality, the Wulff isoperimetric inequality, and the Lions-Pacella isoperimetric inequality in convex cones. For this last inequality, the new proof was recently found by the author, Xavier Ros-Oton, and Joaquim Serra in a work where we also prove new Sobolev inequalities with weights which came up studying an open question raised by Haim Brezis. \end{abstract} \date{} \maketitle \vspace {-.2in} \begin{center} {\em Dedicated to Haim Brezis, with great admiration} \end{center} \tableofcontents \section{Introduction} In this article we present the proof of several inequalities using the technique introduced by Alexandroff, Bakelman, and Pucci to establish their ABP estimate. The Alexandroff-Bakelman-Pucci (or ABP) estimate is an $L^\infty$ bound for solutions of the Dirichlet problem associated to second order uniformly elliptic operators written in nondivergence form, $$ Lu= a_{ij}(x) \partial_{ij} u +b_i (x) \partial_{i} u + c(x) u, $$ with bounded measurable coefficients in a domain $\Omega$ of $\mathbb{R}^n$. It asserts that if $\Omega$ is bounded and $c\leq 0$ in $\Omega$ then, for every function $u\in C^2(\Omega)\cap C(\overline\Omega)$, \begin{equation}\label{ABP1} \sup_\Omega u\le \sup_{\partial\Omega} u + C \, \text{diam} (\Omega)\, \Vert Lu \Vert_{L^n(\Omega)} , \end{equation} where $\text{diam} (\Omega)$ denotes the diameter of $\Omega$, and $C$ is a constant depending only on the ellipticity constants of $L$ and on the $L^n$-norms of the coefficients $b_i$ ---see Remark~\ref{proofABP} below for its proof and Chapter~9 of \cite{GT} for more details. The estimate was proven by the previous authors in the sixties using a technique that in this paper we call ABP method. Both the estimate and the method have applications in several areas. First, the ABP estimate is a basic tool in the regularity theory for fully nonlinear elliptic equations $F(D^2u)=0$. The ABP method is also a key ingredient in Jensen's uniqueness result for viscosity solutions. For these questions, see for instance \cite{CC}. Other applications were developed around 1994 by Berestycki, Nirenberg, and Varadhan \cite{BNV}, who established lower bounds on the principal eigenvalue of the operator $L-c(x)$ and, as a consequence, maximum principles in ``small'' domains. These maximum principles are very useful ---when combined with the moving planes method--- to establish symmetry of positive solutions of nonlinear problems (see \cite{BN,C2}). In this paper we give a new and simple proof (unpublished before) of the lower bound of Berestycki, Nirenberg, and Varadhan~\cite{BNV} concerning the principal eigenvalue $\lambda_1=\lambda_1(L_0,\Omega)$ of the operator $L_0:=L-c(x)$, i.e., \begin{equation*}\label{L0} L_0u= a_{ij}(x) \partial_{ij} u +b_i (x) \partial_{i} u. 
\end{equation*} The bound asserts that \begin{equation}\label{lower} \lambda_1 (L_0,\Omega) \ge \mu \vert\Omega\vert^{-2/n} \end{equation} for some positive constant $\mu$ depending only on the ellipticity constants of $L_0$, the $L^\infty$-norms of the coefficients $b_i$, and an upper bound for $|\Omega|^{1/n}$. In particular, if one has such an upper bound for $|\Omega|$, then the constant $\mu$ is independent of $|\Omega|$. As a consequence, if $|\Omega|$ tends to zero then $\lambda_1(L_0,\Omega)$ tends to infinity, by \eqref{lower}. In contrast with theirs, our proof uses only the ABP method and does not require the Krylov-Safonov Harnack inequality. Our proof gives a slight improvement of this result by showing that $\mu$ depends in fact on the $L^n$-norms of the coefficients $b_i$ instead of the $L^\infty$-norms. To prove this lower bound on $\lambda_1$, we apply the ABP method to the problem satisfied by the logarithm of the principal eigenfunction of~$L_0$. Note that the constant $\mu$ in the lower bound does not depend on any modulus of regularity for the coefficients of $L_0$. This is why we say that it is a bound for operators with bounded measurable coefficients. This generality is crucial for the applications to fully nonlinear elliptic equations. When $L_0$ is in divergence form with bounded measurable coefficients, \eqref{lower} was proved by Brezis and Lions~\cite{BL}. They established an estimate of the type \eqref{ABP1} with $L^n$ replaced by $L^\infty$. An improvement of the ABP estimate \eqref{ABP1} in which $\text{diam} (\Omega)$ is replaced by $|\Omega|^{1/n}$ was proved by the author in \cite{C1}; see also \cite{C2}. When $L_0=\Delta$ is the Laplacian, \eqref{lower} with its best constant $\mu$ is the Faber-Krahn inequality, and becomes an equality when $\Omega$ is a ball; see \cite{Ga}. Thus, among sets with the same given volume, the ball has the smallest first Dirichlet eigenvalue. In this respect we would like to raise the following: \begin{definition} When $L_0=\Delta$ is the Laplacian, can one prove the Faber-Krahn inequality (that is, inequality \eqref{lower} with best constant, achieved by balls) using an ABP method as described in the following sections? \end{definition} The rest of this paper is a survey of several isoperimetric inequalities proved using the ABP method. We first present the proof of the classical isoperimetric inequality in $\mathbb{R}^n$ found by the author around 1996; see~\cite{CSCM,CDCDS}. It uses the ABP technique applied to a linear Neumann problem for the Laplacian ---instead of applying the method to a Dirichlet problem as in the ABP estimate. It then yields the isoperimetric inequality with best constant. In addition, the proof does not require the domain to be convex, and it shows easily that balls are the only smooth domains for which equality holds. The proof using the ABP method can also be adapted to anisotropic perimeters. This gives a new proof of the Wulff isoperimetric inequality, presented in Section~4. The proof has also been recently extended by J. Serra and M. Teixid\'o \cite{ST}, in a very clever way, to domains in simply connected Cartan-Hadamard Riemannian manifolds of dimension two. These are manifolds with nonpositive sectional curvature.
In this way, they give a new proof that the Euclidean isoperimetric inequality (i.e., inequality \eqref{isop} below with the Euclidean constant $P(B_1)/|B_1|^{\frac{n-1}{n}}$) is also valid in such two-dimensional manifolds (with the same Euclidean constant on it). In higher dimensions (except for 3 and 4) this is an important conjecture which has been open for long time; see \cite{Dr}. Finally, Section~5 concerns the recent paper \cite{CRS2}, by the author, X. Ros-Oton, and J. Serra, where we established new isoperimetric and Sobolev inequalities with weights in convex cones of $\mathbb{R}^n$. In particular we give a new poof of the Lions-Pacella isoperimetric inequality \cite{LP} in convex cones. Let us recall that the classical proofs of the Wulff and the Lions-Pacella isoperimetric inequalities used the Brunn-Minkowski inequality \eqref{brunn}. The result in \cite{CRS2} states that Euclidean balls centered at the origin solve the weighted isoperimetric problem in any open convex cone $\Sigma$ of $\mathbb{R}^n$ (with vertex at the origin) for the following class of weights. Here, both perimeter and measure are computed with respect to the weight. The weight $w$ must be nonnegative, continuous, positively homogeneous of degree $\alpha\geq 0$, and such that $w^{1/\alpha}$ is concave in the cone $\Sigma$ if $\alpha>0$. This concavity condition is equivalent to a natural curvature-dimension bound ---in fact, to the nonnegativeness of a Bakry-\'Emery Ricci tensor in dimension $D=n+\alpha$. Except for the constant ones, all these weights are not radially symmetric but still balls centered at the origin are the isoperimetric sets. Our proof uses the ABP method applied to a Neumann problem for the operator $$ w^{-1}\text{div}(w\nabla u)=\Delta u+\frac{\nabla w}{w}\cdot \nabla u. $$ This result yields as a consequence the following Sobolev inequality. If $D=n+\alpha$, $1\leq p<D$, and $p_*=\frac{pD}{D-p}$, then \begin{equation}\label{Sob} \left(\int_{\Sigma}|u|^{p_*}w(x)dx\right)^{1/p_*}\leq C_{w,p,n} \left(\int_{\Sigma}|\nabla u|^pw(x)dx\right)^{1/p} \end{equation} for all smooth functions $u$ with compact support in $\mathbb{R}^n$ ---in particular, not necessarily vanishing on $\partial\Sigma$. We can give the value of the best constant $C_{w,p,n}$ since it is attained by certain radial functions; see \cite{CR2}. Monomial weights, \begin{equation}\label{mon} w(x)=x_1^{A_1}\cdots x_n^{A_n}\qquad\text{in}\quad\Sigma=\{x\in\mathbb R^n\,:\, x_i>0 \textrm{ whenever }A_i>0\} \end{equation} (here $A_i\geq 0$), are an example of weights satisfying the above assumptions. The Sobolev inequality \eqref{Sob} with the above monomial weights $w$ appeared naturally in the paper \cite{CR}, by the author and X. Ros-Oton, while studying the following open question raised by Haim Brezis. \begin{definition} \textbf{(Haim Brezis, 1996 \cite{B,BV})} Is the extremal solution of the problem $-\Delta u=\lambda f(u)$ in a bounded smooth domain $\Omega\subset\mathbb{R}^n$, with zero Dirichlet boundary conditions, always bounded if the dimension $n\leq 9$, and this for every positive, increasing, and convex nonlinearity~$f$? (see \cite{B,BV,CR} for more details). A stronger statement is if the same conclusion holds for every stable solution of the Dirichlet problem for $-\Delta u= f(u)$ in $\Omega$. It has been proved to be true in dimensions 2 and 3 by G. Nedev, in dimension 4 by the author, and in the radial case up to dimension 9 by the author and A. Capella; see the references in \cite{CSS}. 
In \cite{C3}, we showed that these regularity results hold essentially for any nonnegative nonlinearity $f$. \end{definition} In \cite{CR} we studied this problem in convex domains with symmetry of double revolution, and we establish its validity up to dimension $n\leq 7$. If $\mathbb{R}^n=\mathbb{R}^m\times \mathbb{R}^k$, we say that a domain is of double revolution if it is invariant under rotations of the first $m$ variables and also under rotations of the last $k$ variables. Stable solutions will depend only on the ``radial'' variables $s=\sqrt{x_1^2+\cdots +x_m^2}$ and $t=\sqrt{x_{m+1}^2+\cdots +x_n^2}$. In these coordinates, the Lebesgue measure in $\mathbb{R}^n$ becomes $s^{m-1}t^{k-1}\, ds\, dt$. This is a monomial weight as in \eqref{mon}. In \cite{CR}, to prove regularity results we needed the above Sobolev inequalities with monomial weights, even with nonintegers $A_i$ in \eqref{mon}. \section{The principal eigenvalue for elliptic operators with bounded measurable coefficients} The ABP estimate is the basic bound for subsolutions $u$ of the Dirichlet problem \begin{equation} \left\{ \alignedat2 Lu &\ge f &\quad &\text{in } \Omega \\ u &\le 0 &\quad &\text{on } \partial\Omega , \endalignedat \right. \label{dirich} \end{equation} where $L$ is an elliptic operator written in nondivergence form $$ Lu= a_{ij}(x) \partial_{ij} u +b_i (x) \partial_{i} u + c(x) u, $$ in a domain $\Omega \subset \mathbb{R}^n$. We assume that $L$ is uniformly elliptic with bounded measurable coefficients, i.e., $b:=(b_1,\ldots ,b_n)\in L^\infty(\Omega)$, $c\in L^\infty(\Omega)$ and $$ c_0 \vert \xi\vert^2 \leq a_{ij} (x) \,\xi_i\,\xi_j \leq C_0\vert \xi\vert ^2\qquad \forall \xi\in {\mathbb{R}}^n \; \;\forall x\in \Omega $$ for some constants $0<c_0\le C_0$. The ABP estimate states that, if $\Omega$ is bounded, $c\leq 0$ in $\Omega$, $u\in C^2(\Omega)\cap C(\overline\Omega)$ and \eqref{dirich} holds, then \begin{equation} \sup_\Omega u\leq C \, \text{diam} (\Omega)\, \Vert f\Vert_{L^n(\Omega)} , \label{abp} \end{equation} where $\text{diam} (\Omega)$ denotes the diameter of $\Omega$ and $C$ is a constant depending only on $n$, $c_0$, and $\Vert b\Vert_{L^n(\Omega)}$. The proof of the ABP estimate is explained below in Remark~\ref{proofABP}, after having presented in detail the ABP proof of the isoperimetric inequality. In 1979, Krylov and Safonov used the ABP estimate and the Calder\'on-Zygmund cube decomposition to establish a deep result: the Harnack inequality for second order uniformly elliptic equations in nondivergence form with bounded measurable coefficients. This result allowed for the development of a regularity theory for fully nonlinear equations (see \cite{CC}). Consider now the operator $$ L_0u=(L-c(x))u=a_{ij}(x) \partial_{ij} u +b_i (x)\partial_i u , $$ and assume that $\Omega$ is a bounded smooth domain and that the coefficients $a_{ij}$ are smooth in $\overline\Omega$. In \cite{BNV} it is proved the existence of a unique eigenvalue $\lambda_1=\lambda_1 (L_0,\Omega)$ of $-L_0$ in $\Omega$ (the principal eigenvalue) having a positive (smooth) eigenfunction $\varphi_1$ (the principal eigenfunction): \begin{equation*} \left\{ \alignedat3 L_0\varphi_1 &= -\lambda_1 \varphi_1 &\quad &\text{in } \Omega\\ \varphi_1 &=0 &\quad &\text{on }\partial \Omega \\ \varphi_1 &>0 &\quad &\text{in }\Omega . \endalignedat \right. \label{preig} \end{equation*} In addition, $\lambda_1$ is a simple eigenvalue and satisfies $\lambda_1 >0$. 
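Before stating the lower bound, let us record a standard scaling observation (added here only for orientation; it is not needed in the proof). For the Dirichlet Laplacian, if $-\Delta\varphi_1=\lambda_1\varphi_1$ in $\Omega$ with $\varphi_1=0$ on $\partial\Omega$, then $\varphi_1(\cdot/t)$ solves the same problem in the dilated domain $t\Omega$ with eigenvalue $t^{-2}\lambda_1$. Hence
\begin{equation*}
\lambda_1(\Delta,t\Omega)=t^{-2}\lambda_1(\Delta,\Omega)\qquad\text{and}\qquad \vert t\Omega\vert^{-2/n}=t^{-2}\vert\Omega\vert^{-2/n},
\end{equation*}
so both sides of \eqref{lower} scale in the same way under dilations, which explains the exponent $-2/n$.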
In Theorem~2.5 of \cite{BNV}, Berestycki, Nirenberg, and Varadhan used the Krylov-Safonov theory to establish the lower bound $\lambda_1 \ge \mu\vert\Omega\vert^{-2/n}$ for some positive constant $\mu$ depending only on $n$, $c_0$, $C_0$, and an upper bound on $\vert\Omega\vert^{1/n} \Vert b\Vert_{L^\infty(\Omega)}$. We now give a simpler proof (unpublished before) of this lower bound using the ABP method. We do not need to use the Krylov-Safonov theory. Our proof improves slightly the bound by showing that $\mu$ can be taken to depend on $\Vert b\Vert_{L^n(\Omega)}$ ---instead of $|\Omega|^{1/n}\Vert b\Vert_{L^\infty(\Omega)}$. More precisely, we have the following: \begin{theorem} If $\Omega$ is bounded, the principal eigenvalue $\lambda_1 (L_0,\Omega)$ of $L_0$ in $\Omega$ satisfies \begin{equation*} \lambda_1 (L_0,\Omega) \ge \mu \vert\Omega\vert^{-2/n}, \label{fklow} \end{equation*} where $\mu$ is a positive constant depending only on $n$, $c_0$, $C_0$, and $\Vert b\Vert_{L^n(\Omega)}$. \end{theorem} \begin{proof} Since $\varphi_1 >0$ in $\Omega$ we can consider the function $$ u=-\log\varphi_1 . $$ Using that $\nabla u=-\varphi_1^{-1} \nabla\varphi_1$, we have that \begin{equation} \left\{ \alignedat2 a_{ij}\,\partial_{ij}u &= \lambda_1 - b_i\,\partial_i u +a_{ij}\,\partial_i u\,\partial_j u &\quad &\text{in } \Omega\\ u &=+\infty &\quad &\text{on }\partial \Omega . \endalignedat \right. \label{prlog} \end{equation} We consider the lower contact set of $u$, defined by \begin{equation*} \Gamma_u =\{ x \in \Omega \ : \ u(y) \ge u(x) + \nabla u (x) \cdot (y-x)\ \text{ for all } y \in \overline \Omega \} . \label{lcset0} \end{equation*} It is the set of points where the tangent hyperplane to the graph of $u$ lies below $u$ in all $\overline \Omega$. For every $p\in\mathbb{R}^n$, the minimum $\min_{\overline\Omega}\,\{ u(y)-p\cdot y\}$ is achieved at an interior point of $\Omega$, since $u=+\infty$ on $\partial\Omega$ and $\Omega$ is bounded. At such a point $x$ in $\Omega$ of minimum of the function $y \mapsto u(y)-p\cdot y$, we have $x\in\Gamma_u$ and $p=\nabla u(x)$. It follows that \begin{equation} \mathbb{R}^n = \nabla u(\Gamma_u). \label{gradmap0} \end{equation} It is interesting to visualize geometrically this proof by considering the graphs of the functions $p\cdot y + c$ for $c\in \mathbb{R}$. These are parallel hyperplanes which lie, for $c$ close to $-\infty$, below the graph of $u$. We let~$c$ increase and consider the first~$c$ for which there is contact or ``touching'' at a point~$x$. It is clear that $x\not\in\partial\Omega$, since $u=+\infty$ on $\partial\Omega$. Using \eqref{gradmap0}, we can apply the area formula to the map $p=\nabla u (x)$ for $x\in\Gamma_u$ and, integrating in $\mathbb{R}^n$ a positive function $g=g(\vert p\vert)$ to be chosen later, we obtain \begin{equation} \int_{\mathbb{R}^n} g(\vert p\vert) \, dp \le \int_{\Gamma_u} g(\vert\nabla u(x)\vert)\det D^2u(x) \,dx . \label{arealog} \end{equation} Note that $D^2 u(x)$ is nonnegative definite at any point $x\in\Gamma_u$. Next, we use the matrix inequality $\det (AB)\le\{\text{trace}(AB)/n\}^n$, which holds for every pair $A$ and $B$ of nonnegative symmetric matrices. This is a simple extension of the arithmetic-geometric means inequality. We apply it with $A=[a_{ij}(x)]$ and $B=D^2u(x)$ for $x\in\Gamma_u$. We also use that $$ (a_{ij}\partial_{ij}u)^n\le C (\lambda_1^n+\vert b\vert^{n}\vert\nabla u\vert^n +\vert\nabla u\vert^{2n}) \quad \text{in } \Gamma_u , $$ which follows from \eqref{prlog}. 
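To spell out the last inequality (a routine computation included for completeness): on $\Gamma_u$ one has $D^2u\geq0$, so $a_{ij}\partial_{ij}u\geq0$, and \eqref{prlog} together with the ellipticity bound $a_{ij}\partial_iu\partial_ju\leq C_0\vert\nabla u\vert^2$ gives
\begin{equation*}
0\leq a_{ij}\partial_{ij}u=\lambda_1-b_i\partial_iu+a_{ij}\partial_iu\partial_ju\leq\lambda_1+\vert b\vert\,\vert\nabla u\vert+C_0\vert\nabla u\vert^2\quad\text{in }\Gamma_u.
\end{equation*}
Raising this to the $n$-th power and using $(x+y+z)^n\leq3^{n-1}(x^n+y^n+z^n)$ for nonnegative $x,y,z$ (a consequence of the convexity of $t\mapsto t^n$), one may take $C=3^{n-1}\max\{1,C_0^n\}$ in the bound above.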
Here, and throughout the proof, $C$ will denote a positive constant depending only on $n$, $c_0$, $C_0$, and $\Vert b\Vert_{L^n(\Omega)}$. We deduce that $$ \alignedat2 \det\, & D^2u \le c_0^{-n} \det ([a_{ij}] D^2u) \le c_0^{-n} \left( \frac{\text{trace} ([a_{ij}] D^2u)}{n}\right)^n = (n c_0)^{-n} (a_{ij}\partial_{ij}u)^n \\ & \le C (\lambda_1^n+\vert b\vert^{n}\vert\nabla u\vert^n +\vert\nabla u\vert^{2n})\quad \text{in } \Gamma_u . \endalignedat $$ Therefore, choosing $g(\vert p\vert)=(\lambda_1^n+\vert \Omega\vert^{-1}\vert p\vert^n +\vert p\vert^{2n})^{-1}$ in \eqref{arealog}, we have \begin{equation} \alignedat2 \int_{\mathbb{R}^n} \frac{dp}{\lambda_1^n+\vert \Omega\vert^{-1} \vert p\vert^n +\vert p\vert^{2n}} & \le \int_{\Gamma_u} \frac{C (\lambda_1^n+\vert b\vert^{n}\vert\nabla u\vert^n +\vert\nabla u\vert^{2n})}{\lambda_1^n+\vert \Omega\vert^{-1} \vert \nabla u\vert^n +\vert \nabla u\vert^{2n}}\,dx \\ & \le C \int_{\Gamma_u} (1+\vert\Omega\vert \vert b\vert^n) \,dx \\ &\le C \left(1+\Vert b\Vert^n_{L^n(\Omega)}\right)\vert\Omega\vert \le C\vert\Omega\vert . \endalignedat \label{upint} \end{equation} On the other hand, using that $\lambda_1^n+\vert \Omega\vert^{-1}\vert p\vert^n +\vert p\vert^{2n}\le\lambda_1^n+2\vert \Omega\vert^{-1}\vert p\vert^n$ for $\vert p\vert \le \vert\Omega\vert^{-1/n}$, we see that \begin{equation} \alignedat2 \int_{\mathbb{R}^n} \frac{dp}{\lambda_1^n+\vert \Omega\vert^{-1}\vert p\vert^n +\vert p\vert^{2n}} &\ge \int_{B_{\vert\Omega\vert^{-1/n}}} \frac{dp}{\lambda_1^n+2\vert \Omega\vert^{-1}\vert p\vert^n}\\ &=c(n)\vert\Omega\vert\log\left( 1+\frac{2\vert \Omega\vert^{-2}}{\lambda_1^n} \right). \endalignedat \label{loint} \end{equation} Combining \eqref{upint} and \eqref{loint}, we conclude $2\vert \Omega\vert^{-2}\lambda_1^{-n}\le C$, which is the desired inequality. \end{proof} \section{The classical isoperimetric inequality} In this section we present a proof of the classical isoperimetric problem for smooth domains of $\mathbb{R}^n$ which uses the ABP technique. It was found by the author in 1996 and published in \cite{CSCM,CDCDS}. The proof establishes the following: \begin{theorem}{\rm \textbf{(Isoperimetric inequality)}} Let $\Omega$ be a bounded smooth domain of $\mathbb{R}^n$. Then \begin{equation} \frac{P(\Omega)}{\vert \Omega \vert^{\frac{n-1}{n}}} \ge \frac{P(B_1)}{\vert B_1 \vert^{\frac{n-1}{n}}} \ , \label{isop} \end{equation} where $B_1$ is the unit ball of $\mathbb{R}^n$, $\vert \Omega \vert$ denotes the measure of $\Omega$, and $P(\Omega)$ the perimeter of $\Omega$. Moreover, equality occurs in \eqref{isop} if and only if $\Omega$ is a ball of~$\mathbb{R}^n$. \end{theorem} \begin{proof} Let $u$ be a solution of the Neumann problem \begin{equation} \left\{ \alignedat2 \Delta u &= \frac{P(\Omega)}{\vert \Omega \vert} &\quad &\text{in } \Omega\\ \frac{\partial u}{\partial\nu} &=1 &\quad &\text{on }\partial \Omega , \endalignedat \right. \label{eqsem} \end{equation} where $\Delta$ denotes the Laplace operator and $\partial u /\partial\nu$ the exterior normal derivative of $u$ on $\partial \Omega$. The constant $P(\Omega) /\vert \Omega \vert$ has been chosen so that the problem has a unique solution up to an additive constant. For these classical facts, see Example 2 in Section 10.5 of \cite{H}, or the end of Section 6.7 of \cite{GT}. In addition, we have that $u$ is smooth in~$\overline \Omega$. 
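Let us point out, for the reader's convenience, why $P(\Omega)/\vert\Omega\vert$ is the natural constant in \eqref{eqsem}: integrating the equation over $\Omega$ and using the divergence theorem,
\begin{equation*}
\frac{P(\Omega)}{\vert\Omega\vert}\,\vert\Omega\vert=\int_\Omega\Delta u\;dx=\int_{\partial\Omega}\frac{\partial u}{\partial\nu}\;dS=P(\Omega),
\end{equation*}
which is exactly the compatibility condition for the Neumann problem.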
We consider the lower contact set of $u$, defined by \begin{equation} \Gamma_u =\{ x \in \Omega \ : \ u(y) \ge u(x) + \nabla u (x) \cdot (y-x)\ \text{ for all } y \in \overline \Omega \} . \label{lcset} \end{equation} It is the set of points where the tangent hyperplane to the graph of $u$ lies below $u$ in all $\overline \Omega$. We claim that \begin{equation} B_1 (0) \subset \nabla u (\Gamma_u) , \label{gradmap} \end{equation} where $B_1 (0)=B_1$ denotes the unit ball of $\mathbb{R}^n$ with center $0$. To show \eqref{gradmap}, take any $p\in \mathbb{R}^n$ satisfying $\vert p \vert <1$. Let $x\in \overline \Omega$ be a point such that $$ \min_{y\in \overline \Omega} \,\{ u(y) -p\cdot y \} = u(x)-p\cdot x $$ (this is, up to a sign, the Legendre transform of $u$). If $x\in \partial \Omega$ then the exterior normal derivative of $u(y)-p\cdot y$ at $x$ would be nonpositive and hence $(\partial u /\partial\nu) (x) \le \vert p \vert <1$, a contradiction with \eqref{eqsem}. It follows that $x\in \Omega$ and, therefore, that $x$ is an interior minimum of the function $u(y)-p\cdot y$. In particular, $p=\nabla u (x)$ and $x\in \Gamma_u$. Claim \eqref{gradmap} is now proved. It is interesting to visualize geometrically the proof of the claim, by considering the graphs of the functions $p\cdot y + c$ for $c\in \mathbb{R}$. These are parallel hyperplanes which lie, for $c$ close to $-\infty$, below the graph of $u$. We let~$c$ increase and consider the first~$c$ for which there is contact or ``touching'' at a point~$x$. It is clear geometrically that $x\not\in\partial\Omega$, since $\vert p\vert <1$ and $\partial u /\partial\nu =1$ on $\partial\Omega$. Next, from \eqref{gradmap} we deduce \begin{equation} \vert B_1\vert \le \vert \nabla u (\Gamma_u) \vert = \int_{\nabla u (\Gamma_u)} dp \le \int_{\Gamma_u} \det D^2u (x) \ dx . \label{ineq} \end{equation} We have applied the area formula to the map $\nabla u : \Gamma_u \rightarrow \mathbb{R}^n$, and we have used that its Jacobian, $\det D^2u$, is nonnegative in $\Gamma_u$ by definition of this set. Finally, we use the arithmetic-geometric means inequality applied to the eigenvalues of $D^2u(x)$ (which are nonnegative numbers for $x\in \Gamma_u$). We obtain \begin{equation} \det D^2u \le \left( \frac{\Delta u}{n} \right)^n \quad \text{in } \Gamma_u . \label{means} \end{equation} This, combined with \eqref{ineq} and $\Delta u \equiv P(\Omega) / \vert \Omega \vert$, gives \begin{equation} \vert B_1 \vert \le \left( \frac{P(\Omega)} {n \vert \Omega \vert} \right)^n \vert \Gamma_u \vert \le \left( \frac{P(\Omega)} {n \vert \Omega \vert} \right)^n \vert \Omega \vert . \label{contact} \end{equation} Since $P( B_1) = n \vert B_1\vert$, we conclude the isoperimetric inequality \begin{equation} \frac{P( B_1)}{\vert B_1 \vert^{\frac{n-1}{n}}}= n\vert B_1 \vert^{\frac{1}{n}} \le \frac{P(\Omega)}{\vert \Omega \vert^{\frac{n-1}{n}}}. \label{isopfin} \end{equation} Note that when $\Omega=B_1$ then $u(x)=\vert x\vert^2/2$ and, in particular, all the eigenvalues of $D^2u(x)$ are equal. Therefore, it is clear that \eqref{gradmap} and \eqref{means} are equalities when $\Omega=B_1$. This explains why the proof gives the isoperimetric inequality with best constant. The previous proof can also be used to show that balls are the only smooth domains for which equality occurs in the isoperimetric inequality. Indeed, if \eqref{isopfin} is an equality then all the inequalities in \eqref{ineq}, \eqref{means} and \eqref{contact} are also equalities. 
In particular, we have $\vert \Gamma_u \vert = \vert \Omega\vert$. Since $\Gamma_u \subset \Omega$, $\Omega$ is an open set, and $\Gamma_u$ is closed relatively to $\Omega$, we deduce that $\Gamma_u = \Omega$. Recall that the geometric and arithmetic means of $n$ nonnegative numbers are equal if and only if these $n$ numbers are all equal. Hence, the equality in \eqref{means} and the fact that $\Delta u$ is constant in $\Omega$ give that $D^2 u = a\text{I}$ in all $\Gamma_u = \Omega$, where $\text{I}$ is the identity matrix and $a=P(\Omega) /(n\vert\Omega\vert)$ is a positive constant. Let $x_0 \in \Omega$ be any given point. Integrating $D^2u=a\text{I}$ on segments from $x_0$, we deduce that $$ u(x)=u(x_0)+\nabla u(x_0) \cdot (x-x_0) + \frac{a}{2}\, \vert x-x_0 \vert^2 $$ for $x$ in a neighborhood of $x_0$. In particular, $\nabla u (x) = \nabla u (x_0) + a(x-x_0)$ in such a neighborhood, and hence the map $\nabla u - a\text{I}$ is locally constant. Since $\Omega$ is connected we deduce that this map is indeed a constant, say $\nabla u - a\text{I}\equiv y_0$. It follows that $\nabla u (\Gamma_u) = \nabla u (\Omega) = y_0 + a\Omega$. By \eqref{gradmap} we know that $B_1(0)\subset \nabla u (\Gamma_u)= y_0 + a\Omega$. In addition, these two open smooth sets, $B_1(0)$ and $y_0+a\Omega$, have the same measure since equality occurs in the first inequality of \eqref{ineq}. We conclude that $B_1(0) = \nabla u (\Gamma_u)= y_0 + a\Omega$ and hence that $\Omega$ is a ball. \end{proof} The previous proof is also suited for a quantitative version as we will show in \cite{CCPRS} with Cinti, Pratelli, Ros-Oton, and Serra. \begin{rem}\label{proofABP} The ABP estimate \eqref{abp} is proved proceeding as in the previous proof for the isoperimetric inequality, but now considering the Dirichlet problem \eqref{dirich} instead of \eqref{eqsem}. The main claim \eqref{gradmap} is now replaced by $B_{M/d}(0)\subset\nabla u(\Gamma^{u})$, where $M=\sup_\Omega u$, $d=\text{diam}(\Omega)$ and $\Gamma^{u}$ is now the upper contact set of $u$. See Chapter~9 of \cite{GT} for details. \end{rem} In 1994 (before our proof), Trudinger \cite{T} had given a proof of the classical isoperimetric inequality using the Monge-Amp\`ere operator and the ABP estimate. His proof consists of applying the ABP estimate to the problem $$ \left\{ \alignedat2 \det D^2u &= \chi_\Omega &\quad &\text{in } B_R \\ u &= 0 &\quad &\text{on }\partial B_R , \endalignedat \right. $$ where $\chi_\Omega$ is the characteristic function of $\Omega$ and $B_R=B_R(0)$, and then letting $R\to\infty$. Before the proofs in \cite{T} and \cite{CSCM} using ABP, there was already Gromov's proof \cite{G} of the isoperimetric inequality, which used the Knothe map (see also \cite{Cha} for a presentation). A more classical proof of the isoperimetric problem is based on Steiner symmetrization; see \cite{F,O,Be}. A fifth proof consists of deducing easily the isoperimetric inequality from the Brunn-Minkowski inequality \eqref{brunn}; see \cite{Ga}. Finally, in 2004 Cordero-Erausquin, Nazaret, and Villani \cite{CNV} used the Brenier map from optimal transportation to give another proof of the isoperimetric inequality. This optimal transport proof, as well as the Knothe-Gromov one, both lead also to the Wulff isoperimetric inequality for anisotropic perimeters ---which is discussed in the following section. 
\section{The Wulff isoperimetric inequality} In a personal communication, Robert McCann pointed out that the previous proof also establishes the following inequality concerning Wulff shapes and surface energies of crystals. Given any positive and smooth function $H$ on $\mathbb{S}^{n-1}=\partial B_1$ (the surface tension), consider the convex set $W\subset\mathbb{R}^n$ (called the Wulff shape) defined by \begin{equation}\label{wulffshape} W=\{ p\in\mathbb{R}^n \ :\ p\cdot\nu < H(\nu)\ \text{ for all } \nu\in \mathbb{S}^{n-1}\}. \end{equation} Note that $W$ is an open set with $0\in W$. To visualize $W$, it is useful to note that it is the intersection of the half-spaces $\{p\cdot\nu<H(\nu)\}$ among all $\nu\in \mathbb{S}^{n-1}$. In particular, $W$ is a convex set. For every smooth domain $\Omega\subset\mathbb{R}^n$ (not necessarily convex), define $$ P_H(\Omega):=\int_{\partial\Omega} H(\nu(x))\, dS (x) $$ to be its surface energy ---here $dS (x)$ denotes the area element on $\partial\Omega$ and $\nu(x)$ is the unit exterior normal to $\partial\Omega$ at $x$. Then, among sets $\Omega$ with measure $\vert W\vert$, the surface energy $P_H(\Omega)$ is minimized by (and only by) the Wulff shape $W$ and its translates. Equivalently, for every $\Omega$ (without restriction on its measure) we have: \begin{theorem}[\cite{W,T1,T2}] Let $\Omega$ be a bounded smooth domain of $\mathbb{R}^n$. Then $$ \frac{P_H(\Omega)}{\vert \Omega \vert^{\frac{n-1}{n}}} \ge \frac{P_H(W)}{\vert W \vert^{\frac{n-1}{n}}} , $$ with equality if and only if $\Omega=aW+b$ for some $a>0$ and $b\in\mathbb{R}^n$. \end{theorem} This theorem was first stated, without proof, by Wulff \cite{W} in 1901. His work was followed by Dinghas \cite{D}, who studied the problem within the class of convex polyhedra. He used the Brunn-Minkowski inequality \begin{equation}\label{brunn} |A+B|^{\frac1n}\geq |A|^{\frac1n}+|B|^{\frac1n}, \end{equation} valid for all nonempty measurable sets $A$ and $B$ of $\mathbb R^n$ for which $A+B$ is also measurable; see \cite{Ga} for more information on this inequality. Some years later, Taylor \cite{T1,T2} finally proved the theorem among sets of finite perimeter ---see \cite{CRS2} for more references on this subject. As mentioned in the previous section, this anisotropic isoperimetric inequality also follows easily using the Knothe-Gromov map or the Brenier map from optimal transport. In addition, a proof of the Wulff theorem using an anisotropic rearrangement was given by Van Schaftingen (with a method coming from Klimov~\cite{Kl}). This anisotropic isoperimetric problem can be solved with the same method that we have used above for the isoperimetric problem. One considers now the solution of \begin{equation*} \left\{ \alignedat2 \Delta u &= \frac{P_H(\Omega)}{\vert \Omega \vert} &\quad &\text{in } \Omega\\ \frac{\partial u}{\partial\nu} &=H(\nu) &\quad &\text{on }\partial \Omega . \endalignedat \right. \label{eqsemW} \end{equation*} Claim \eqref{gradmap} is now replaced by $W\subset\nabla u(\Gamma_u)$, which is proved again using the Legendre transform of $u$. Then, the area formula gives $\vert W\vert\le\{P_H(\Omega)/(n\vert\Omega\vert)\}^n \vert\Omega\vert$. To conclude, one uses that $P_H(W)=n\vert W\vert$.
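To illustrate these objects with a concrete example (stated here only for orientation), take the positive and smooth surface tension $H(\nu)=\sqrt{a_1^2\nu_1^2+\cdots +a_n^2\nu_n^2}$ on $\mathbb{S}^{n-1}$, where $a_1,\dots ,a_n$ are given positive numbers. By the Cauchy-Schwarz inequality, the associated Wulff shape \eqref{wulffshape} is the open ellipsoid $W=\{p\in\mathbb{R}^n \ :\ p_1^2/a_1^2+\cdots +p_n^2/a_n^2<1\}$, of measure $\vert W\vert = a_1\cdots a_n \vert B_1\vert$. On $\partial W$ one checks that $H(\nu(p))=p\cdot\nu(p)$, and hence, by the divergence theorem,
$$
P_H(W)=\int_{\partial W} p\cdot\nu(p)\, dS(p)=\int_W \text{div}(p)\, dp = n\vert W\vert .
$$
When $a_1=\dots =a_n=1$ we recover $W=B_1$ and the classical isoperimetric inequality of the previous section.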
In general, the equality $P_H(W)=n\vert W\vert$ follows from the fact that $H(\nu(p))=p\cdot\nu(p)$ for almost every $p\in\partial W$ (here $\nu(p)$ denotes the unit exterior normal to $\partial W$ at $p$), and thus \begin{equation*}\label{per/vol} P_{H}(W)=\int_{\partial W} H(\nu(x))dS=\int_{\partial W}x\cdot\nu(x)dS=\int_W \text{div}(x)dx = n|W|. \end{equation*} An argument similar to the one in the previous section shows that equality is achieved only by the sets $\Omega=aW+b$; see \cite{CRS2} for details. \section{Weighted isoperimetric and Sobolev inequalities in convex cones} The isoperimetric inequality in convex cones of Lions and Pacella reads as follows. \begin{theorem}[\cite{LP}] \label{isopcon} Let $\Sigma$ be an open convex cone in $\mathbb R^n$ with vertex at $0$, and $B_1:=B_1(0)$. Then, \begin{equation*}\label{pp} \frac{P(\Omega;\Sigma)}{|\Omega\cap\Sigma|^{\frac{n-1}{n}} }\ge \frac{P(B_1;\Sigma)}{|B_1\cap\Sigma|^{\frac{n-1}{n}}} \end{equation*} for every measurable set $\Omega\subset \mathbb{R}^n$ with $|\Omega\cap\Sigma|<\infty$. Here $P(\Omega;\Sigma)$ is the perimeter of $\Omega$ relative to $\Sigma$. It agrees with the $(n-1)$-dimensional Hausdorff measure of $\partial \Omega \cap \Sigma$ for smooth sets $\Omega$. \end{theorem} Note that $\Sigma$ is an open set. Hence, if there is a part of $\partial \Omega$ contained in $\partial\Sigma$, then it is not counted in this perimeter. The assumption of convexity of the cone cannot be removed, as shown in \cite{LP}. The proof of Theorem \ref{isopcon} given in \cite{LP} is based on the Brunn-Minkowski inequality \eqref{brunn}. Alternatively, Theorem \ref{isopcon} can also be deduced from a degenerate case of the classical Wulff inequality of Section~4. For this, one must allow the surface energy $H$ to vanish in part of $\mathbb{S}^{n-1}$. More precisely, we say that a function $H$ defined in $\mathbb{R}^n$ is a \emph{gauge} when \begin{equation}\label{gauge} H\textrm{ is nonnegative, positively homogeneous of degree one, and convex}. \end{equation} The Wulff inequality can be proved for such surface energies $H$. With this in hand, one can establish the Lions-Pacella inequality as follows. It is easy to prove that the convex set $B_1\cap\Sigma$ is equal to the Wulff shape $W$, defined by \eqref{wulffshape}, for a unique gauge $H$ (which depends on the cone $\Sigma$). This function $H$ vanishes on the exterior normal vectors to $\partial\Sigma$ and agrees with 1 on unit vectors inside $\Sigma$. This is why one can recover the Lions-Pacella inequality from the Wulff one associated to this $H$. In particular, the Lions-Pacella inequality can be proved using the ABP method; see \cite{CRS2} for more details. Let us now turn to the extension of the Lions-Pacella theorem in \cite{CRS2} to the case of some homogeneous weights, as explained in the Introduction. Given a gauge $H$ and a nonnegative function $w$ defined in $\overline \Sigma$, consider the weighted anisotropic perimeter \begin{equation*} \label{defperiint} P_{w,H}(\Omega;\Sigma) := \int_{\partial \Omega \cap \Sigma}H\bigl(\nu(x)\bigr)w(x)dS, \end{equation*} (defined in this way when $\partial\Omega$ is regular enough) and the weighted measure $$ w(\Omega\cap\Sigma):=\int_{\Omega\cap\Sigma} w(x)\, dx. $$ \begin{theorem}[\cite{CRS2}]\label{th1} Let $H$ be a gauge in $\mathbb{R}^n$, i.e., a function satisfying \eqref{gauge}, and $W$ its associated Wulff shape defined by \eqref{wulffshape}. Let $\Sigma$ be an open convex cone in $\mathbb R^n$ with vertex at the origin, and such that $W\cap \Sigma\neq \varnothing$.
Let $w$ be a continuous function in $\overline\Sigma$, positive in $\Sigma$, and positively homogeneous of degree $\alpha\geq0$. Assume in addition that $w^{1/\alpha}$ is concave in $\Sigma$ in case $\alpha>0$. Then, for each measurable set $\Omega\subset\mathbb{R}^n$ with $w(\Omega\cap \Sigma)<\infty$, \begin{equation}\label{mainresult} \frac{P_{w,H}(\Omega;\Sigma) }{w(\Omega\cap \Sigma)^{\frac{D-1}{D}} }\geq \frac{P_{w,H}(W;\Sigma) } {w(W\cap \Sigma)^{\frac{D-1}{D}}}, \end{equation} where $D=n+\alpha$. \end{theorem} After announcing our result in \cite{CRS1} and posting the preprint \cite{CRS2}, E. Milman and L. Rotem \cite{MR} found an alternative proof of our isoperimetric inequality, Theorem \ref{th1} (\cite{MR} mentions that the same proof was found independently by Nguyen). Their proof uses the Borell-Brascamp-Lieb extension of the Brunn-Minkowski inequality. Our key hypothesis that $w^{1/\alpha}$ is a concave function is equivalent to a natural curvature-dimension bound, in fact to the nonnegativity of a Bakry-\'Emery Ricci tensor in dimension $D=n+\alpha$. This was pointed out by C. Villani. Note that the shape of the minimizer is $W\cap\Sigma$, and that $W$ depends only on $H$ and not on the weight $w$ nor on the cone $\Sigma$. In particular, in the isotropic case $H=\|\cdot\|_{2}$ we find the following noteworthy fact: even though the weights that we consider are not radial (unless $w\equiv \textrm{constant}$), Euclidean balls centered at the origin (intersected with the cone) still minimize this isoperimetric quotient. Equality in \eqref{mainresult} holds whenever $\Omega\cap \Sigma=rW\cap \Sigma$, where $r$ is any positive number. That $rW\cap\Sigma$ is the unique minimizer of \eqref{mainresult} will be shown in the upcoming paper \cite{CCPRS}, where in addition we show a quantitative version of \eqref{mainresult}. Note also that we allow $w$ to vanish somewhere (or everywhere) on $\partial\Sigma$. This happens in the case of the monomial weights \eqref{mon}, for which the previous theorem holds (see also the example recorded after the proof below). From \eqref{mainresult}, it is simple to deduce the sharp Sobolev inequality with monomial weights \eqref{Sob} stated in the introduction. Next, to show the key ideas in a simpler situation, we prove Theorem \ref{th1} in the isotropic case $H=\|\cdot\|_2$ when the weight $w\equiv0$ on $\partial\Sigma$. This is the case of the monomial weights. To simplify, we also assume that $\Omega= U\cap \Sigma$, where $U$ is some bounded smooth domain in $\mathbb{R}^n$. Let $w$ be a positive function in an open convex cone $\Sigma\subset\mathbb{R}^n$, positively homogeneous of degree $\alpha>0$. In the proof we will need an easy lemma stating that $w^{1/\alpha}$ is concave in $\Sigma$ if and only if \begin{equation}\label{lemmaw} \alpha\left(\frac{w(z)}{w(x)}\right)^{1/\alpha}\leq \frac{\nabla w(x)\cdot z}{w(x)} \end{equation} holds for each $x,z\in\Sigma$; see \cite{CRS2}. The implication needed below (concavity implies \eqref{lemmaw}) follows by combining the first order concavity inequality $\sigma(z)\le\sigma(x)+\nabla \sigma(x)\cdot (z-x)$ for $\sigma:=w^{1/\alpha}$ with Euler's identity $\nabla\sigma(x)\cdot x=\sigma(x)$, and then dividing by $\sigma(x)/\alpha$. To prove the result we will also need the following equality. Here we denote $P_{w,H}$ by $P_w$ since $H$ is the Euclidean norm.
Using that $w\equiv 0$ on $\partial\Sigma$, we deduce \begin{equation}\label{formula-per=Dvol-for-Wulff} \begin{split} P_w(W;\Sigma)&= \int_{\partial W\cap \Sigma} H(\nu(x))w(x)dS= \int_{\partial W\cap \Sigma} x\cdot\nu(x)\,w(x)dS\\ &=\int_{\partial(W\cap \Sigma)}x\cdot\nu(x)w(x)dS= \int_{W\cap\Sigma} \text{div}(x w(x))dx\\ &=\int_{W\cap\Sigma} \left\{nw(x)+x\cdot\nabla w(x)\right\}dx =\int_{W\cap\Sigma} (n+\alpha) w(x) dx\\ &=D\,w(W\cap \Sigma), \end{split} \end{equation} where we have used that $x\cdot \nabla w(x)=\alpha w(x)$ since $w$ is homogeneous of degree $\alpha$. A key point in the following proof is that, when $\Omega=B_1\cap\Sigma$, the function $u(x)=|x|^2/2$ solves $w^{-1}\text{div}(w\nabla u)=b$ for some constant $b$, the normal derivative of $u$ on $\partial B_1 \cap\Sigma$ is identically one, and the normal derivative of $u$ on $\partial\Sigma\cap B_1$ is identically zero. \begin{proof}[Proof of Theorem \ref{th1} in the case $w\equiv 0$ on $\partial\Sigma$ and $H=\|\cdot\|_2$] For the sake of simplicity we assume here that $\Omega= U\cap \Sigma$, where $U$ is some bounded smooth domain in $\mathbb{R}^n$. Observe that, since $\Omega= U\cap \Sigma$ has piecewise Lipschitz boundary and $w\equiv0$ on $\partial\Sigma$, we have \begin{equation} \label{perimeteronlyw} P_w(\Omega ; \Sigma) = \int_{\partial \Omega} w(x)\,dS. \end{equation} Hence, using that $w\in C(\overline \Sigma)$ and \eqref{perimeteronlyw}, it is immediate to prove that for any $y\in \Sigma$ we have \[ \lim_{\delta \downarrow 0} P_w(\Omega + \delta y;\Sigma) = P_w(\Omega;\Sigma) \quad \mbox{and} \quad \lim_{\delta \downarrow 0} w(\Omega + \delta y) = w(\Omega).\] We have denoted $\Omega + \delta y = \{x + \delta y \, , \ x\in \Omega\}$. Note that $P_w(\Omega + \delta y;\Sigma)$ would in general not converge to $P_w(\Omega;\Sigma)$ as $\delta \downarrow 0$ if $w$ did not vanish on the boundary of the cone $\Sigma$. By this approximation property and a subsequent regularization of $\Omega + \delta y$ (a detailed argument can be found in \cite{CRS2}), we see that it suffices to prove \eqref{mainresult} for smooth domains whose closure is contained in $\Sigma$. Thus, from now on in the proof, $\Omega$ is a smooth domain satisfying $\overline\Omega\subset \Sigma$. At this stage, it is clear that by approximating $w|_{\overline{\Omega}}$ we can assume $w\in C^\infty(\overline\Omega)$ and $w>0$ in $\overline\Omega$. Let $u$ be a solution of the linear Neumann problem \begin{equation} \left\{ \alignedat2 w^{-1}\textrm{div}(w\nabla u) &= b_\Omega &\quad &\text{in } \Omega\\ \frac{\partial u}{\partial\nu} &=1 &\quad &\text{on }\partial \Omega . \endalignedat \right. \label{eqsem2} \end{equation} The Fredholm alternative ensures that there exists a solution of \eqref{eqsem2} (which is unique up to an additive constant) if and only if the constant $b_\Omega$ is given by \begin{equation}\label{cttb} b_\Omega=\frac{P_w(\Omega;\Sigma)}{w(\Omega)}.\end{equation} Note also that since $w$ is positive and smooth in $\overline\Omega$, \eqref{eqsem2} is a uniformly elliptic problem with smooth coefficients. Thus, $u\in C^{\infty}(\overline\Omega)$. For these classical facts, see Example 2 in Section 10.5 of \cite{H}, or the end of Section 6.7 of \cite{GT}. Consider now the lower contact set of $u$, $\Gamma_u$, defined by \eqref{lcset} as the set of points in $\Omega$ at which the tangent hyperplane to the graph of $u$ lies below $u$ in all $\overline \Omega$.
Then, as in Section~3, we touch the graph of $u$ from below with hyperplanes of fixed slope $p\in B_1$, and using the boundary condition in \eqref{eqsem2} we deduce that $B_1 \subset \nabla u (\Gamma_u)$. From this, we obtain \begin{equation*}\label{fivepointstar} B_1\cap \Sigma\subset \nabla u(\Gamma_u)\cap \Sigma \end{equation*} and thus \begin{equation}\label{ineqsec3} \begin{split} w(B_1\cap \Sigma) &\leq \int_{\nabla u (\Gamma_u)\cap\Sigma}w(p)dp \\ &\leq \int_{\Gamma_u\cap (\nabla u)^{-1}(\Sigma)} w(\nabla u(x))\det D^2u(x)\,dx\\ &\leq \int_{\Gamma_u\cap (\nabla u)^{-1}(\Sigma)} w(\nabla u)\left(\frac{\Delta u}{n}\right)^ndx. \end{split} \end{equation} We have applied the area formula to the smooth map $\nabla u : \Gamma_u \rightarrow \mathbb R^n$ and also the classical arithmetic-geometric means inequality ---all eigenvalues of $D^2u$ are nonnegative in $\Gamma_u$ by definition of this set. Next we use that, when $\alpha>0$, \[s^{\alpha}t^n\leq \left(\frac{\alpha s+nt}{\alpha+n}\right)^{\alpha+n}\ \ \textrm{for all }\ s>0\ \textrm{and}\ t>0,\] which follows from the concavity of the logarithm function. Using also \eqref{lemmaw}, we find \[\frac{w(\nabla u)}{w(x)}\left(\frac{\Delta u}{n}\right)^n\leq \left(\frac{\alpha\left(\frac{w(\nabla u)}{w(x)}\right)^{1/\alpha}+\Delta u}{\alpha+n}\right)^{\alpha+n}\leq \left(\frac{\frac{\nabla w(x)\cdot \nabla u}{w(x)}+\Delta u}{D}\right)^{D}.\] Recall that $D=n+\alpha$. Thus, using the equation in \eqref{eqsem2}, we obtain \begin{equation}\label{36} \frac{w(\nabla u)}{w(x)}\left(\frac{\Delta u}{n}\right)^n\leq \left(\frac{b_\Omega}{D}\right)^{D}\ \ {\rm in}\ \Gamma_u\cap (\nabla u)^{-1}(\Sigma). \end{equation} If $\alpha=0$ then $w$ is constant, and \eqref{36} is trivial. Therefore, since $\Gamma_u\subset \Omega$, combining \eqref{ineqsec3} and \eqref{36} we obtain \begin{equation*}\label{7}\begin{split} w(B_1\cap\Sigma) &\leq \int_{\Gamma_u\cap (\nabla u)^{-1}(\Sigma)} \left(\frac{b_\Omega}{D}\right)^{D}w(x)dx= \left(\frac{b_\Omega}{D}\right)^{D}w(\Gamma_u\cap (\nabla u)^{-1}(\Sigma))\\ &\leq \left(\frac{b_\Omega}{D}\right)^{D}w(\Omega) = D^{-D}\frac{P_w(\Omega;\Sigma)^{D}}{w(\Omega)^{D-1}}.\end{split} \end{equation*} In the last equality we have used the value of the constant $b_\Omega$, given by \eqref{cttb}. Finally, using that, by \eqref{formula-per=Dvol-for-Wulff}, we have $P_w(B_1\cap\Sigma;\Sigma) = D\,w(B_1\cap \Sigma)$, we obtain the desired inequality \eqref{mainresult}. \end{proof}
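\begin{rem} As an illustration of the scope of Theorem \ref{th1}, consider the monomial weights mentioned above. Assuming, as the name suggests, that \eqref{mon} is of the form $$ w(x)=x_1^{A_1}\cdots x_n^{A_n} \quad\text{in } \Sigma=\{x\in\mathbb{R}^n \ :\ x_i>0 \text{ whenever } A_i>0\}, $$ with $A_1,\dots ,A_n\geq 0$ not all zero, the hypotheses of the theorem are satisfied with $\alpha=A_1+\cdots +A_n$. Indeed, $w$ is continuous in $\overline\Sigma$, positive in $\Sigma$, homogeneous of degree $\alpha$, and it vanishes on $\partial\Sigma$; moreover, $w^{1/\alpha}=x_1^{A_1/\alpha}\cdots x_n^{A_n/\alpha}$ is a weighted geometric mean with exponents adding up to one, and hence concave in $\Sigma$. In this case, \eqref{mainresult} with $H=\|\cdot\|_2$ states that balls centered at the origin, intersected with $\Sigma$, minimize the weighted perimeter among sets of given weighted measure; this is the isoperimetric inequality behind the sharp Sobolev inequality \eqref{Sob} mentioned earlier. \end{rem} \end{document}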