Step-by-step Solution
Solve the differential equation $\left(2x-1\right)dx+\left(3y+7\right)dy=0$
Step-by-step explanation
Problem to solve:
$\left(2x-1\right)\cdot dx+\left(3y+7\right)\cdot dy=0$
The differential equation $\left(2x-1\right)dx+\left(3y+7\right)dy=0$ is exact, since it is written in the standard form $M(x,y)dx+N(x,y)dy=0$, where $M(x,y)$ and $N(x,y)$ are the partial derivatives of a two-variable function $f(x,y)$ and they satisfy the test for exactness: $\displaystyle\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}$. In other words, the mixed second partial derivatives of $f(x,y)$ are equal. The general solution of the equation is $f(x,y)=C$. Using the test for exactness, we check that the differential equation is exact. Integrate $M(x,y)$ with respect to $x$ to get $x^2-x$. Now take the partial derivative of $x^2-x$ with respect to $y$ to get $0$.
$x^2-x+\frac{3}{2}y^2+7y=C_0$
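For completeness, the intermediate steps follow the standard method for exact equations. Since $M(x,y)=2x-1$ and $N(x,y)=3y+7$ satisfy $\frac{\partial M}{\partial y}=0=\frac{\partial N}{\partial x}$, the equation is exact. Integrating $M$ with respect to $x$ gives $f(x,y)=x^2-x+g(y)$ for some function $g(y)$. Differentiating with respect to $y$ and equating to $N$ gives $g'(y)=3y+7$, so $g(y)=\frac{3}{2}y^2+7y$, and the implicit general solution is $x^2-x+\frac{3}{2}y^2+7y=C_0$, as stated above.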
Similar Problems
$\left(1+x^4\right)\cdot dy+x\cdot\left(1+4y^2\right)\cdot dx=0$
$\frac{dy}{dx}=\frac{2x}{3y^2}$
$\frac{dx}{dy}=y+xy$
$dx+e^{3x}dy=0$
$\frac{dy}{dx}=\frac{3x^2+4x+2}{2\left(y+1\right)}$
$\frac{dy}{dx}=e^{3x+2y}$
$y'=x+8$
| CommonCrawl |
\begin{definition}[Definition:Multiplicative Order of Integer]
Let $a$ and $n$ be integers.
Let there exist a positive integer $c$ such that:
:$a^c \equiv 1 \pmod n$
Then the least such positive integer $c$ is called the '''multiplicative order of $a$ modulo $n$'''.
\end{definition} | ProofWiki |
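A short Python sketch illustrating the definition (assuming $\gcd(a, n) = 1$ and $n \ge 1$, the case in which such a $c$ exists):

from math import gcd

def multiplicative_order(a, n):
    # Least positive c with a^c = 1 (mod n); requires gcd(a, n) = 1.
    if n < 1 or gcd(a, n) != 1:
        raise ValueError("order is defined only for n >= 1 with gcd(a, n) = 1")
    if n == 1:
        return 1
    c, power = 1, a % n
    while power != 1:
        power = (power * a) % n
        c += 1
    return c

# Example: 2 has order 4 modulo 5, since 2^4 = 16 = 1 (mod 5).
print(multiplicative_order(2, 5))  # 4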
\begin{document}
\title{On Shifted Eisenstein Polynomials}
\author{ {\sc Randell Heyman}\\ {Department of Computing, Macquarie University} \\ {Sydney, NSW 2109, Australia}\\ {\tt [email protected]} \and {\sc Igor E. Shparlinski} \\ Department of Computing, Macquarie University \\ Sydney, NSW 2109, Australia\\ {\tt [email protected]} }
\date{ } \maketitle
\begin{abstract} We study polynomials with integer coefficients which become Eisenstein polynomials after the additive shift of a variable. We call such polynomials {\it shifted Eisenstein polynomials\/}. We determine an upper bound on the maximum shift that is needed given a shifted Eisenstein polynomial and also provide a lower bound on the density of shifted Eisenstein polynomials, which is strictly greater than the density of classical Eisenstein polynomials. We also show that the number of irreducible degree $n$ polynomials that are not shifted Eisenstein polynomials is infinite. We conclude with some numerical results on the densities of shifted Eisenstein polynomials. \end{abstract}
\section{Introduction}
It is well known that almost all polynomials in rather general families of $\mathbb{Z}[x]$ are irreducible, see~\cite{Diet,Zyw} and references therein. There are also known polynomial time irreducibility tests and polynomial time factoring algorithms, see for example~\cite{LLL}. However, it is always interesting to study large classes of polynomials that are known to be irreducible.
Thus, we recall that \begin{equation} \label{eq:poly} f(x) = a_nx^n +a_{n-1}x^{n-1}+ \ldots +a_1x+a_0 \in \mathbb{Z}[x] \end{equation} is called an {\it Eisenstein polynomial\/}, or is said to be
\emph{irreducible by Eisenstein} if for some prime $p$ we have \begin{enumerate} \item[(i)] $p \mid a_i$ for $i=0, \ldots, n-1$, \item[(ii)] $p^2\nmid a_0$, \item[(iii)] $p \nmid a_n$. \end{enumerate}
We sometimes say that $f$ is \emph{irreducible by Eisenstein with respect to prime $p$} if $p$ is one such prime that satisfies the conditions~(i), (ii) and~(iii) above (see~\cite{Cox} regarding the early history of the irreducibility criterion).
Recently, motivated by a question of Dobbs and Johnson~\cite{DoJo} several statistical results about the distribution of Eisenstein polynomials have been obtained. Dubickas~\cite{Dub} has found the asymptotic density for {\it monic\/} polynomials $f$ of a given degree $\deg f = n$ and growing height \begin{equation} \label{eq:def H}
H(f) = \max_{i=0, \ldots, n} |a_i|. \end{equation} The authors~\cite{Hey} have improved the error term in the asymptotic formula of~\cite{Dub} and also calculated the density of general Eisenstein polynomials.
Clearly the irreducibility of polynomials is preserved under shifting of the argument by a constant. Thus it makes sense to investigate polynomials which become Eisenstein polynomials after shifting the argument. More precisely, here we study polynomials $f(x) \in \mathbb{Z}[x]$ for which there exists an integer $s$ such that $f(x+s)$ is an Eisenstein polynomial. We call such $f(x) \in \mathbb{Z}[x]$ a {\it shifted Eisenstein polynomial\/}. We call the corresponding $s$ an {\it Eisenstein shift of $f$ with respect to $p$\/}.
For example, for $f(x) = x^2+4x+5$, it is easy to see that $s=-1$ is an Eisenstein shift with respect to $p=2$.
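The definitions are easy to experiment with. A minimal Python sketch (purely illustrative; the helper names are ad hoc) tests conditions~(i)--(iii) for a given prime and searches shifts in order of increasing $|s|$; applied to $f(x)=x^2+4x+5$ it recovers $s=-1$ and $p=2$.

\begin{verbatim}
from math import comb

def is_eisenstein(coeffs, p):
    # coeffs = [a_n, ..., a_1, a_0]; conditions (i)-(iii) of the criterion.
    a_n, a_0 = coeffs[0], coeffs[-1]
    return (all(c % p == 0 for c in coeffs[1:])
            and a_0 % (p * p) != 0
            and a_n % p != 0)

def shift(coeffs, s):
    # Coefficients of f(x + s), listed from a_n down to a_0.
    n = len(coeffs) - 1
    out = [0] * (n + 1)
    for i, a in enumerate(coeffs):          # a multiplies x^(n - i)
        d = n - i
        for j in range(d + 1):              # a*(x+s)^d contributes to x^j
            out[n - j] += a * comb(d, j) * s ** (d - j)
    return out

def find_eisenstein_shift(coeffs, s_bound, primes):
    for s in sorted(range(-s_bound, s_bound + 1), key=abs):
        g = shift(coeffs, s)
        for p in primes:
            if is_eisenstein(g, p):
                return s, p
    return None

# f(x) = x^2 + 4x + 5 is not Eisenstein, but f(x - 1) = x^2 + 2x + 2 is
# Eisenstein with respect to p = 2.
print(find_eisenstein_shift([1, 4, 5], 5, [2, 3, 5, 7]))   # (-1, 2)
\end{verbatim}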
Here we estimate the smallest possible $s$ which transfers a shifted Eisenstein polynomial $f(x)$ into an Eisenstein polynomial $f(x+s)$. We also estimate the density of shifted Eisenstein polynomials and show that it is strictly greater than the density of Eisenstein polynomials. On the other hand, we show that there are irreducible polynomials that are not shifted Eisenstein polynomials.
More precisely, let $\mathcal I_n$, $\mathcal E_n$ and $\csE_n$ denote the sets of irreducible, Eisenstein and shifted Eisenstein polynomials, respectively, of degree $n$ over the integers.
Trivially, $$ \mathcal E_n \subseteq \csE_n \subseteq \mathcal I_n. $$ We show that all inclusions are proper and that $\csE_n \setminus \mathcal E_n$ is quite ``massive''.
\section{Notation}
We define $\mathcal I_n(H)$, $\mathcal E_n(H)$ and $\csE_n(H)$ as the subsets of $\mathcal I_n$, $\mathcal E_n$ and $\csE_n$, respectively, consisting of polynomials of height at most $H$ (where the height of a polynomial~\eqref{eq:poly} is given by~\eqref{eq:def H}).
For any integer $n\ge 1$, let $\omega(n)$ be the number of distinct prime factors of $n$ and let $\varphi(n)$ be the Euler function of $n$ (we also set $\omega(1) =0$).
We also use $\mu$ to denote the M{\" o}bius function, that is, $$ \mu(n)= \begin{cases} (-1)^{\omega(n)} & \text{if } n \ \text{is square free}, \\ 0 & \text{otherwise}. \end{cases} $$
Finally, we denote the discriminant of the polynomial $f$ by $D(f)$.
The letters $p$ and $q$, with or without subscripts, always denote prime numbers.
\section{A bound on Eisenstein shifts via the discriminant}
It is natural to seek a bound on the largest shift that must be examined in order to find an Eisenstein shift, if one exists. In fact, for any polynomial, there is a link between the maximum shift that needs to be considered and the discriminant.
The following result is well known and, in fact in wider generality, can be proven by the theory of
Newton polygons. Here we give a concise elementary proof.
\begin{lem}\label{first} Suppose $f \in \mathbb{Z}[x]$ is of degree $n$. If $f(x)$ is a shifted Eisenstein polynomial then there exists a prime $p$ with $p^{n-1} \mid D(f)$ and $f(x+s)$ is irreducible by Eisenstein for some $0 \leq s < q$, where $q$ is the largest such prime. \end{lem}
\begin{proof} Since $f(x)$ is a shifted Eisenstein polynomial there exists an integer $t$ and a prime $p$ such that $f(x+t)$ is irreducible by Eisenstein with respect to $p$.
Recall that the discriminant of a degree $n$ polynomial can be expressed as the determinant of the $2n-1$ by $2n-1$ Sylvester matrix. Using the Leibniz formula to express the determinant, and examining each summand, it immediately follows that $p^{n-1} \mid D(f(x+t))$. Also, the difference of any two roots of a polynomial is unchanged when both roots are shifted by the same integer $u$. So, using the definition of the discriminant, we get $D(f(x))=D(f(x+u))$ for any integer $u$. So it follows that $p^{n-1} \mid D(f(x))$.
Furthermore, by expanding $f(x+t+kp)$ for an arbitrary integer $k$ and examining the divisibility of coefficients, it follows that if $f(x+t)$ is irreducible by Eisenstein with respect to the prime $p$ then so too is $f(x+t+kp)$.
By appropriate choice of $k$ we can therefore find an integer $s$ with $$0 \leq s< p \le \max\{q~\text{prime}~:~ q^{n-1} \mid D(f)\} $$ such that the polynomial $f(x+s)$ is irreducible by Eisenstein. \end{proof}
We also recall a classical bound of Mahler~\cite{Mahler} on the discriminant of polynomials over $\mathbb{Z}$.
For $f(x)$ of the form~\eqref{eq:poly} we define the {\it length\/} $L(f)=|a_0|+|a_1|+\ldots +|a_n|$.
\begin{lemma} \label{Mahler} Suppose $f \in\mathbb{Z}[x]$ is of degree $n$. Then
$$|D(f)|\leq n^nL(f)^{2n-2}.$$ \end{lemma}
Combining Lemmas~\ref{first} and~\ref{Mahler} we derive:
\begin{thm}\label{main} Suppose $f(x) \in \mathbb{Z}[x]$ is of degree $n$. If $f(x+s)$ is not irreducible by Eisenstein for all $s$ with $$0 \leq s \leq n^{n/(n-1)}L(f)^2,$$ then $f$ is not a shifted Eisenstein polynomial. \end{thm}
We also remark that the shift $s$ which makes $f(x+s)$ irreducible by Eisenstein with respect to prime $p$ satisfies $f(s) \equiv 0 \pmod p$, since the constant term of $f(x+s)$ equals $f(s)$; this can further reduce the number of trials (however, direct irreducibility testing via the classical algorithm of Lenstra, Lenstra and Lov{\'a}sz~\cite{LLL} is still much more efficient).
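A minimal Python sketch of the resulting finite search (purely illustrative, using SymPy; not an optimized algorithm): it computes $D(f)$, keeps only the primes $p$ with $p^{n-1}\mid D(f)$, and, following the remark above, tries only the shifts $0\le s<p$ with $f(s)\equiv 0\pmod p$.

\begin{verbatim}
from sympy import Poly, discriminant, factorint, symbols

x = symbols('x')

def eisenstein_shift_via_discriminant(f):
    # f is a sympy Poly in x with integer coefficients.
    n = f.degree()
    D = int(discriminant(f.as_expr(), x))
    if D == 0:
        return None              # f is not squarefree, hence not irreducible
    candidates = [p for p, e in factorint(abs(D)).items() if e >= n - 1]
    for p in candidates:
        for s in range(p):
            if f.eval(s) % p != 0:        # the constant term of f(x+s) is f(s)
                continue
            g = [int(c) for c in Poly(f.as_expr().subs(x, x + s), x).all_coeffs()]
            if (all(c % p == 0 for c in g[1:]) and g[-1] % (p * p) != 0
                    and g[0] % p != 0):
                return s, p
    return None

# Example: D(x^2 + 4x + 5) = -4, so p = 2 is the only candidate prime, and
# the shift s = 1 (which is -1 modulo 2) is found immediately.
print(eisenstein_shift_via_discriminant(Poly(x**2 + 4*x + 5, x)))   # (1, 2)
\end{verbatim}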
\section{Density of shifted Eisenstein polynomials}
In this section we show that as polynomial height grows, the density of polynomials that are irreducible by Eisenstein shifting is strictly larger than the density of polynomials that are irreducible by Eisenstein. We start by calculating a maximum height for $f(x)$ such that $f(x+1)$ is of height at most $H$.
\begin{lem}\label{lessheight} For $f\in \mathbb{Z}[x]$ of degree $n$, we denote $f_{+ 1} (x) = f(x + 1)$. Then $H(f_{+ 1}) \le 2^n H(f)$. \end{lem}
\begin{proof} Let $f(x)$ be of the form~\eqref{eq:poly}. For $i =0, \ldots, n$, the absolute value of the coefficient of $x^{n-i}$ in $f_{+ 1}$ can be estimated as \begin{equation*} \begin{split}
\sum_{0 \leq j \leq i}\binom{n-j}{i-j} \left|a_{n-j}\right|\leq 2^n H(f), \end{split} \end{equation*} as required. \end{proof}
We also need the number of polynomials, of given degree and maximum height,
that are irreducible by Eisenstein. Let \begin{equation} \label{eq:rhon} \rho_n = 1-\prod_{p} \(1- \frac{(p-1)^2}{p^{n+2}}\). \end{equation} In~\cite{Hey} we prove the following result.
\begin{lemma} \label{lem:rho} We have, $$ \#\mathcal E_n(H)=\rho_n 2^{n+1} H^{n+1}+\left\{\begin{array}{ll} O\(H^{n}\),&\quad \text{if $n>2$}, \\ O(H^2(\log H)^2),& \quad \text{if $n=2$}. \end{array}\right. $$ \end{lemma}
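A minimal Python sketch (purely illustrative; the height $H=10$ and the truncation of the Euler product are arbitrary choices) compares a brute-force count of $\#\mathcal E_n(H)$ with the main term $\rho_n 2^{n+1}H^{n+1}$; for such small $H$ the error term is still significant, so only rough agreement should be expected.

\begin{verbatim}
from itertools import product
from sympy import primerange

def is_eisenstein(coeffs):
    # coeffs = (a_n, ..., a_0); test the criterion for every prime p | a_0.
    a0 = coeffs[-1]
    if a0 == 0:
        return False
    for p in primerange(2, abs(a0) + 1):
        if (a0 % p == 0 and a0 % (p * p) != 0 and coeffs[0] % p != 0
                and all(c % p == 0 for c in coeffs[1:-1])):
            return True
    return False

def count_eisenstein(n, H):
    rng = range(-H, H + 1)
    return sum(1 for c in product(rng, repeat=n + 1)
               if c[0] != 0 and is_eisenstein(c))

def rho(n, prime_bound=10**5):
    prod = 1.0
    for p in primerange(2, prime_bound):
        prod *= 1 - (p - 1) ** 2 / p ** (n + 2)
    return 1 - prod

n, H = 2, 10
print(count_eisenstein(n, H), rho(n) * 2 ** (n + 1) * H ** (n + 1))
\end{verbatim}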
We also require the following two simple statements.
\begin{lem} \label{lem: pp} Suppose that $f(x)$ is irreducible by Eisenstein with respect to prime $p$. Then $f(x+1)$ is not irreducible by Eisenstein with respect to $p$. \end{lem}
\begin{proof} Let $$f(x)=\sum_{i=0}^n a_ix^i \in \mathcal E_n $$ be irreducible by Eisenstein with respect to prime $p$. The coefficient of $x^{0}$ in $f(x+1)$ is $a_n+a_{n-1}+\ldots+a_1+a_0$, which is not divisible by $p$, since $p$ divides $a_0, \ldots, a_{n-1}$ but not $a_n$. So $f(x+1)$ is not irreducible by Eisenstein with respect to $p$. \end{proof}
Let \begin{equation} \label{eq:taun} \tau_n = \(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2 - \sum_{p}\frac{(p-1)^4}{p^{2n+4}} \end{equation}
\begin{lem} \label{lem:tau} Let $$\mathcal F_n(H)=\{f(x) \in \mathcal E_n(H)~:~f(x+1) \in \mathcal E_n\}.$$ Then for $n \geq 2$,
$$\#\mathcal F_n(H)\leq \(\tau_n +o(1)\) (2H)^{n+1}.
$$ \end{lem}
\begin{proof} Fix some sufficiently large $H$ and let $$f(x)=\sum_{i=0}^na_ix^i\in \mathcal E_n(H).$$ Consequently, $$f(x+1)=\sum_{i=0}^nA_ix^i,$$ with $A_i=a_i+L_i(a_n,a_{n-1},\ldots,a_{i+1})$ where $L_i(a_n,a_{n-1},\ldots,a_{i+1})$ is a linear form in $a_n,a_{n-1},\ldots,a_{i+1}$ for $i=0, \ldots, n$. In particular, $$ A_n = a_n, \quad A_{n-1} = na_n + a_{n-1}, \quad A_{n-2} = \frac{n(n-1)}{2}a_n + (n-1) a_{n-1} + a_{n-2}. $$
Clearly there are at most $O(H^n)$ polynomials $f\in \mathcal I_n(H)$ for which the condition
\begin{equation} \label{eq:top A} 2 A_{n-2} - (n-1) A_{n-1} =(n-1) a_{n-1} + 2a_{n-2} \ne 0 \end{equation} is violated. Thus \begin{equation} \label{eq:B B} \#\mathcal F_n(H) = \#\mathcal F_n^*(H) + O(H^n), \end{equation} where $\mathcal F_n^*(H)$ is the set of polynomials $f\in \mathcal F_n(H)$ for which~\eqref{eq:top A} holds.
Now, given two primes $p$ and $q$, we calculate an upper bound on the number $N_n(H,p,q)$ of $f \in \mathcal F_n^*(H)$ such that \begin{itemize} \item $f(x)$ is irreducible by Eisenstein with respect to prime $p$; \item $f(x+1)$ is irreducible by Eisenstein with respect to prime $q$. \end{itemize}
We see from Lemma~\ref{lem: pp} that $N_n(H,p,q) = 0$ if $p=q$. So we now always assume that $p\ne q$.
To do so we estimate (inductively over $i=n, n-1, \ldots, 0$) the number of possibilities for the coefficient $a_i$ of $f$, provided that higher coefficients $a_n, \ldots, a_{i+1}$ are already fixed.
\begin{itemize}
\item Possible values of $a_n$: We know that $a_n \not \equiv 0 \pmod p$ and $a_n \not \equiv 0 \pmod q$. Therefore we conclude that the number of possible values of $a_n$ is
$2H(p-1)(q-1)/pq + O(1)$.
\item Possible values of $a_i$, $1 \le i < n$: Fix arbitrary
$a_n,a_{n-1},\ldots,a_{i+1}$.
The relations
$$
a_i\equiv 0 \pmod p \quad \text{and} \quad
A_i = a_i+L_i(a_n,a_{n-1},\ldots,a_{i+1}) \equiv 0 \pmod q
$$ put $a_i$ in a unique residue class modulo $pq$. It follows that the number of possible values of $a_i$ for $ i=n-1,n-2,\ldots,1$ cannot exceed $2H/pq +O(1)$.
\item Possible values of $a_0$: We argue as before but also note that for $a_0$ we have the additional constraints that
$a_0 \not \equiv 0 \pmod {p^2}$, $A_0 \not \equiv 0 \pmod {q^2}$
and so $a_0$ can take at most $2H(q-1)(p-1)/p^2q^2 +O(1)$ values. \end{itemize}
So, for primes $p$ and $q$ we have \begin{equation*} \begin{split} N_n(H,p,q) &\le \(\frac{2H(p-1)(q-1)}{pq}+O(1)\)\(\frac{2H}{pq} +O(1)\)^{n-1}\\ &\qquad \qquad \qquad \qquad \qquad \qquad \(\frac{2H(p-1)(q-1)}{p^2q^2} +O(1)\)\\ &= \frac{2^{n+1}H^{n+1}(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}} + O(H^n) . \end{split} \end{equation*} We also see from~\eqref{eq:top A} that if $pq > (n+1) H$ then $N_n(H,p,q)=0$. Hence \begin{equation*} \begin{split} \#\mathcal F_n^*(H) &\le \sum_{\substack{p \neq q\\ pq \le (n+1) H}}
\(\frac{2^{n+1}H^{n+1}(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}} + O(H^n) \)\\ &\le (2H)^{n+1}\sum_{\substack{p \neq q\\ pq \le (n+1) H}}\(\frac{(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}}\) + O\(\frac{H^{n+1}\log \log H}{\log H}\) ,
\end{split} \end{equation*} as there are $O(Q (\log Q)^{-1} \log \log Q)$ products of two distinct primes $pq \le Q$, see~\cite[Chapter~II.6, Theorem~4]{Ten}. Therefore,
$$ \#\mathcal F_n^*(H) \le (2H)^{n+1} \sum_{\substack{p \neq q\\ pq \le (n+1) H}} \frac{(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}} +o(H^{n+1}), $$ Since the above series converges, we derive \begin{equation*} \begin{split} \#\mathcal F_n^*(H) &\le (2H)^{n+1}\sum_{p \neq q}
\frac{(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}}+o(H^{n+1}) \\ &=(2H)^{n+1}\left(\sum_{p,q}\frac{(p-1)^2(q-1)^2}{p^{n+2}q^{n+2}}-\sum_{p}\frac{(p-1)^4}{p^{2n+4}}\right)+o(H^{n+1}), \end{split} \end{equation*} which concludes the proof. \end{proof}
We can now prove the main result of this section. We recall that $\rho_n$ and $\tau_n$ are defined by~\eqref{eq:rhon} and~\eqref{eq:taun}, respectively.
\begin{thm} \label{thm:E/E} For $n \geq 2$ we have $$\liminf_{H \to \infty} \frac{\#\overline{\mathcal E}_n(H)}{\#\mathcal E_n(H)}\ge 1+\gamma_n,$$ where $$ \gamma_n = \frac{1}{2^{n^2+n}} \(1 -\frac{\tau_n}{\rho_n}\) > 0. $$ \end{thm}
\begin{proof} We see from Lemma~\ref{lessheight} that for $h = H/2^n$ we have $$ \mathcal E_n(H) \bigcup \{ f_{+1}~:~f \in \mathcal E_n(h) \setminus\mathcal F_n(h)\}\subseteq \overline{\mathcal E}_n(H), $$ where $\mathcal F_n(h)$ is defined as in Lemma~\ref{lem:tau}, and the two sets on the left hand side are disjoint, since $f_{+1} \notin \mathcal E_n$ for $f \in \mathcal E_n(h) \setminus \mathcal F_n(h)$. Therefore, since the map $f \mapsto f_{+1}$ is injective and $\mathcal F_n(h) \subseteq \mathcal E_n(h)$, we have $$ \#\overline{\mathcal E}_n(H) \ge \# \mathcal E_n(H) + \# \mathcal E_n(h) - \# \mathcal F_n(h). $$ Recalling Lemmas~\ref{lem:rho} and~\ref{lem:tau} we derive the desired inequality.
It now remains to show that $\gamma_n >0$. So it suffices to show that $$\rho_n-\tau_n >0.$$ From~\eqref{eq:rhon} and ~\eqref{eq:taun} we have \begin{equation*} \begin{split} \rho_n-\tau_n & = 1-\prod_{p}\(1- \frac{(p-1)^2}{p^{n+2}}\) - \(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2 + \sum_{p}\frac{(p-1)^4}{p^{2n+4}}\\ &\ge 1-\prod_{p}\(1- \frac{(p-1)^2}{p^{n+2}}\) - \(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2\\ & =\sum_{k=1}^\infty (-1)^{k+1}\sum_{p_1< \ldots < p_k} \prod_{j=1}^k\frac{(p_{j}-1)^2} {p_{j}^{n+2}}-\(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2.
\end{split} \end{equation*}
Discarding from the first sum all positive terms (corresponding to odd $k$) except for the first one, we obtain \begin{equation*} \begin{split} \rho_n-\tau_n & \ge \sum_p \frac{(p-1)^2}{p^{n+2}} - \sum_{k=1}^\infty \ \sum_{p_1< \ldots < p_{2k}} \prod_{j=1}^{2k}\frac{(p_{j}-1)^2} {p_{j}^{n+2}}-\(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2\\
& \ge \sum_p \frac{(p-1)^2}{p^{n+2}} - \sum_{k=1}^\infty \frac{1}{(2k)!} \(\sum_{p} \frac{(p-1)^2} {p^{n+2}}\)^{2k}-\(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2 \\ & \ge \sum_p \frac{(p-1)^2}{p^{n+2}} - \sum_{k=1}^\infty \(\sum_{p} \frac{(p-1)^2} {p^{n+2}}\)^{2k}-\(\sum_p \frac{(p-1)^2}{p^{n+2}}\)^2 .
\end{split} \end{equation*}
Hence, denoting $$ P_n = \sum_p \frac{(p-1)^2}{p^{n+2}}, $$ we derive $$\rho_n-\tau_n \ge P_n- \frac{P_n^2}{1-P_n^2} -P_n^2.$$ Since $$ P_n \le P_2 \le 0.18, $$ the result now follows. \end{proof}
It is certainly easy to get an explicit lower bound on $\gamma_n$ in Theorem~\ref{thm:E/E}. Various values of $\gamma_n$ using the first 10,000 primes are given in Table~\ref{tab:gamma}.
\begin{table}[ht]
\caption{Approximations to $\gamma_n$ for some $n$}
\label{tab:gamma} \begin{center}
\begin{tabular}{ | l | l |}
\hline
\textrm{$n$} &$\gamma_n$\\ \hline
$2$ & $1.33 \times 10^{-2}$\\ \hline
$3$ & $2.36\times10^{-4}$ \\ \hline
$4$&$9.44\times10^{-7}$ \\ \hline
$5$&$9.28\times10^{-10}$ \\ \hline
$10$&$7.70\times10^{-34}$ \\ \hline
\end{tabular}
\end{center}
\end{table}
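The entries of Table~\ref{tab:gamma} can be recomputed with a few lines of code. The following minimal Python sketch truncates the product in~\eqref{eq:rhon} and the sums in~\eqref{eq:taun} over the first 10,000 primes, as in the table (104729 is the 10,000th prime), and then evaluates $\gamma_n$.

\begin{verbatim}
from sympy import primerange

def gamma_n(n, prime_bound=104730):       # 104729 is the 10,000th prime
    terms = [(p - 1) ** 2 / p ** (n + 2) for p in primerange(2, prime_bound)]
    prod = 1.0
    for t in terms:
        prod *= 1 - t
    rho = 1 - prod                                       # truncated rho_n
    tau = sum(terms) ** 2 - sum(t * t for t in terms)    # truncated tau_n
    return (1 - tau / rho) / 2 ** (n * n + n)

for n in (2, 3, 4, 5, 10):
    print(n, gamma_n(n))
\end{verbatim}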
\begin{question} \label{quest:E-shift} Obtain tight bounds or the exact values of $$\liminf_{H \to \infty} \frac{\#\overline{\mathcal E}_n(H)}{(2H)^{n+1}} \qquad \mbox{and} \qquad \limsup_{H \to \infty} \frac{\#\overline{\mathcal E}_n(H)}{(2H)^{n+1}}$$ (they most likely coincide). \end{question}
\section{Infinitude of $\mathcal I_n \setminus \csE_n$} \label{sec:sf discr}
We note that a consequence of Lemma~\ref{first} is that an irreducible polynomial of degree $n$ belongs to $\mathcal I_n \setminus \csE_n$ whenever its discriminant is not divisible by the $(n-1)$-th power of any prime. Hence we would expect the size of $\mathcal I_n \setminus \csE_n$ to be ``massive''. In fact, for a fixed degree greater than or equal to 2, we can prove that the number of irreducible polynomials that are not shifted Eisenstein polynomials is infinite.
\begin{thm} The set $\mathcal I_n \setminus \csE_n$ is infinite for all $n \geq 2$. \end{thm}
\begin{proof} Let $f(x)=x^n+x+p$ for some $n \ge 2$ and odd prime $p$. Then $f$ is irreducible (see~\cite[Lemma~9]{Osa}). Since no prime can divide the coefficient of $x$ it follows that $f$ is not an Eisenstein polynomial.
We show that $f$ cannot be a shifted Eisenstein polynomial. Suppose this is not the case. Then for some integer $s$ the polynomial $f(x+s)$ is an Eisenstein polynomial with respect to some prime $q$. We have $$f(x+s)=x^n+nsx^{n-1}+ \ldots+ (ns^{n-1}+1)x+s^n+s+p,$$ and so $ns \equiv 0 \pmod q$. If $s \equiv 0 \pmod q$, then as previously explained in the proof of Lemma~\ref{first}, $f(x+s+kq)$ is an Eisenstein polynomial for any integer $k$. Since $f$ is not an Eisenstein polynomial it follows that $s \not \equiv 0 \pmod q$. So $n \equiv 0 \pmod q$. But the Eisenstein conditions also require $ns^{n-1}+1 \equiv 0 \pmod q$, which is impossible since $q \mid n$ forces $ns^{n-1}+1 \equiv 1 \pmod q$; a contradiction.
So we conclude that for any $n \ge 2$ the infinite set $$\{f(x)=x^n+x+p~:~p~ \textrm{an odd prime}\}$$ consists of irreducible polynomials that are not shifted Eisenstein polynomials. \end{proof}
We also expect that $$\lim_{H \to \infty}\frac{\#\(\mathcal I_n \setminus\csE_n\) }{\#\mathcal I_n}>0.$$ For example, it is natural to expect that there is a positive proportion of polynomials $\mathcal I_n$ with a square-free discriminant, which by Lemma~\ref{first} puts them in the set $\mathcal I_n \setminus\csE_n$. However, even the conditional (under the $ABC$-conjecture) results of Poonen~\cite{Poon} about square-free values of multivariate polynomials are not sufficient to make this claim.
We can, however, prove a weaker result, for degrees greater than 2, involving height-constrained polynomials that can be shifted to a height-constrained Eisenstein polynomial.
\begin{thm}
\label{thm:CnH} Let $$\csC_n(H)=\{f(x) \in \csE_n(H)~:~f(x+s) \in \mathcal E_n(H)~\text{for some}~s\in \mathbb{Z}\}.$$ Then for $n>2$, $$\lim_{H \to \infty} \frac{\#\csC_n(H)}{2H(2H+1)^n}<1.$$ \end{thm}
\begin{proof} Let $\csC_n(d,H)$ be the set of all polynomials $$f(x+s) = a_n(x+s)^n +a_{n-1}(x+s)^{n-1}+ \ldots +a_1(x+s)+a_0 \in \mathbb{Z}[x]$$ such that: \begin{enumerate} \item[(i)] $s \in \mathbb{Z}$, \item[(ii)] $H(f(x+s)) \le H$, \item[(iii)] $f(x)$ is Eisenstein with respect to all the prime divisors of $d$, \item[(iv)] $H(f(x)) \le H$,
\item [(v)] $|s| < d $. \end{enumerate} Note that each element of $\csC_n(d,H)$ may come from several pairs $(f, s)$.
We also observe that the set of all $f(x)$ described in~(iii) and~(iv) is precisely $\mathcal H_n(d,H)$, where
$\mathcal H_n(d,H)$ is the set of polynomials~\eqref{eq:poly} of height at most $H$ and such that \begin{enumerate} \item[(a)] $d \mid a_i$ for $i=0, \ldots, n-1$, \item[(b)] $\gcd\(a_0/d,d\)=1$, \item[(c)] $\gcd(a_n,d)=1$. \end{enumerate}
It then follows from the condition~(v) in the definition of $\csC_n(d,H)$ that $$\#\csC_n(d,H)\le 2d\#\mathcal H_n(d,H).$$
The inclusion-exclusion principle implies that $$ \#\csC_n(H)\leq\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}}\#\csC_n(d,H), $$ and so \begin{equation}\label{eq:muH} \#\csC_n(H)\le\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}} 2d\,\#\mathcal H_n(d,H). \end{equation}
From~\cite{Hey}, we have \begin{equation}\label{eq:Hvarphi} \#\mathcal H_n(d,H)=\frac{2^{n+1}H^{n+1}\varphi^2(d)}{d^{n+2}}+ O\( \frac{H^{n }}{d^{n -1}} 2^{\omega(d)}\). \end{equation} Combining~\eqref{eq:muH} and~\eqref{eq:Hvarphi} we have
\begin{equation*} \begin{split} \#\csC_n(H)&\leq\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}}2d\(\frac{2^{n+1}H^{n+1}\varphi^2(d)}{d^{n+2}}+O\(\frac{H^n2^{\omega(d)}}{d^{n-1}}\)\)\\ &= 2\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}} \(\frac{2^{n+1}H^{n+1}\varphi^2(d)}{d^{n+1}}+O\(\frac{H^n2^{\omega(d)}}{d^{n-2}}\)\).\\
\end{split} \end{equation*}
Hence
\begin{equation*} \begin{split} \frac{\#\csC_n(H)}{2H(2H+1)^{n}}&\leq
2\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}}
\(\frac{\varphi^2(d)}{d^{n+1}}+O\(\frac{2^{\omega(d)}}{Hd^{n-2}}\)\)\\ &= 2\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}} \frac{\varphi^2(d)}{d^{n+1}}+
O\(\frac{1}{H}\sum_{2 \le d \le H}\frac{2^{\omega(d)}}{d^{n-2}}\)\\ \end{split} \end{equation*} for all $n>2$.
It is easy to see that $$
\sum_{d=2}^H \frac{2^{\omega(d)}}{d^{n-2}}= o(H) $$ for all $n>2$. Hence
\begin{equation*} \frac{\#\csC_n(H)}{2H(2H+1)^{n}}\leq 2\sum_{\substack{2 \le d \le H\\ \mu(d)=-1}} \frac{\varphi^2(d)}{d^{n+1}} + o(1).
\end{equation*} So \begin{equation*} \begin{split} \lim_{H \to \infty}\frac{\#\csC_n(H)}{2H(2H+1)^{n}}&\leq 2\sum_{\mu(d)=-1}\frac{\varphi^2(d)}{d^{n+1}} \leq 2\sum_{\mu(d)=-1}\frac{1}{d^{n-1}}=2\sum_{k=0}^\infty \sum_{\omega(d)=2k+1}\frac{1}{d^{n-1}}\\ &\le 2\sum_{k=0}^\infty \left(\frac{1}{(2k+1)!}\(\sum_{p}\frac{1}{p^{n-1}}\)^{2k+1}\)\\ &\le 2\sinh\(\sum_{p}\frac{1}{p^{n-1}}\) \le 2\sinh\(\sum_{p}\frac{1}{p^{2}}\). \end{split} \end{equation*} As direct calculations show that $$\sum_{p}\frac{1}{p^{2}}<0.46, $$ the result follows. \end{proof}
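The final numerical bound can be confirmed by a direct computation, for instance with the following minimal sketch (the truncation point is arbitrary; the omitted tail of the prime sum is far smaller than the slack in the estimate).

\begin{verbatim}
from math import sinh
from sympy import primerange

P2 = sum(1 / p ** 2 for p in primerange(2, 10 ** 6))   # truncated prime sum
print(P2, 2 * sinh(0.46))     # P2 stays below 0.46 and 2*sinh(0.46) < 1
\end{verbatim}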
We infer from~\cite[Theorem~1]{Coh} that $$\lim_{H \to \infty} \frac{\#\mathcal I_n(H)}{2H(2H+1)^n}=1,$$ which when combined with Theorem~\ref{thm:CnH} yields $$\lim_{H \to \infty} \frac{\#(\mathcal I_n(H) \setminus \csC_n(H))}{\#\mathcal I_n(H)}>0, $$ for $n>2$.
\section{Some numerical results}\label{results}
As we have mentioned, we believe that the upper and lower limits in Question~\ref{quest:E-shift} coincide and so the density of shifted Eisenstein polynomials can be correctly defined.
By using Monte Carlo simulation we have calculated approximations to the values of $\#\mathcal E_3(H)$ and $\#\overline{\mathcal E}_3(H)$ which suggests that $\#\overline{\mathcal E}_3(H)/\#\mathcal E_3(H)$ is about $3$, see Table~\ref{tab:E3}.
\begin{table}[ht]\centering \caption{Monte Carlo Experiments for Cubic Polynomials} \begin{tabular}{lr} \toprule Maximum height of polynomials: & $1,000,000$ \\ Number of simulations: & $20,000$ \\ Shifted Eisenstein polynomials: & $3,365$ \\ Eisenstein polynomials: & $1,119$\\ Ratio: & $3.0$ \\ \bottomrule \end{tabular} \label{tab:E3} \end{table}
For quartic polynomials the ratio $\#\overline{\mathcal E}_4(H)/\#\mathcal E_4(H)$ is approximately $3.6$, as shown in Table~\ref{tab:E4}.
\begin{table}[ht]\centering \caption{Monte Carlo Experiments for Quartic Polynomials} \begin{tabular}{lr} \toprule Maximum height of polynomials: & $1,000,000$ \\ Number of simulations: & $20,000$ \\ Shifted Eisenstein polynomials: & $1515$ \\ Eisenstein polynomials: & $419$\\ Ratio: & $3.6$ \\ \bottomrule \end{tabular} \label{tab:E4} \end{table}
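A minimal sketch of such a Monte Carlo experiment (our reconstruction, not the code used for the tables; only 2,000 samples are drawn here to keep the running time, which is largely spent factoring the discriminants, modest): each sampled polynomial is tested for the Eisenstein property directly, and for the shifted Eisenstein property via Lemma~\ref{first} together with the remark after Theorem~\ref{main}, i.e., only primes $p$ with $p^{n-1}\mid D(f)$ and shifts $0\le s<p$ with $f(s)\equiv 0\pmod p$ are examined.

\begin{verbatim}
import random
from sympy import Poly, discriminant, factorint, symbols

x = symbols('x')
random.seed(0)

def eisenstein_primes(coeffs):
    # Primes p for which coeffs = [a_n, ..., a_0] satisfies the criterion.
    a0 = coeffs[-1]
    if a0 == 0:
        return []
    return [p for p in factorint(abs(a0))
            if a0 % (p * p) != 0 and coeffs[0] % p != 0
            and all(c % p == 0 for c in coeffs[1:-1])]

def is_shifted_eisenstein(f, n):
    D = discriminant(f.as_expr(), x)
    if D == 0:
        return False
    for p, e in factorint(abs(int(D))).items():
        if e < n - 1:
            continue
        for s in range(p):
            if f.eval(s) % p == 0:        # constant term of f(x+s) is f(s)
                g = Poly(f.as_expr().subs(x, x + s), x)
                if eisenstein_primes([int(c) for c in g.all_coeffs()]):
                    return True
    return False

n, H, trials = 3, 10 ** 6, 2000
eis = shifted = 0
for _ in range(trials):
    coeffs = [random.randint(-H, H) for _ in range(n + 1)]
    if coeffs[0] == 0:
        continue                          # skip the (rare) degenerate draws
    f = Poly(coeffs, x)
    if eisenstein_primes(coeffs):
        eis += 1
    if is_shifted_eisenstein(f, n):
        shifted += 1
print("Eisenstein:", eis, " shifted Eisenstein:", shifted,
      " ratio:", shifted / max(eis, 1))
\end{verbatim}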
\section{Comments}
It is easy to see that the results of the work can easily be extended to monic polynomials.
We note that testing whether $f \in \mathcal E_n$ can be done in an obvious way via several greatest common divisor computations. We however do not know any efficient algorithm to test whether $f \in \csE_n$. The immediate approach, based on Lemma~\ref{first} involves integer factorisation and thus does not seem to lead to a polynomial time algorithm. It is possible though, that one can get such an algorithm via computing greatest common divisor of pairwise resultants of the coefficients of $f(x+s)$ (considered as polynomials in $s$).
We also note that it is interesting and natural to study the {\it affine Eisenstein polynomials\/}, which are polynomials $f$ such that $$ (cx+d)^{n} f\(\frac{ax+b}{cx + d}\) \in \mathcal E_n $$ for some $a,b,c,d \in \mathbb{Z}$. Studying the distribution of such polynomials is an interesting open question.
\section{Acknowledgment} The authors would like to acknowledge the assistance of Hilary Albert with the programming for Section~\ref{results}.
This work was supported in part by the ARC Grant DP130100237.
\end{document} | arXiv |
\begin{document}
\title{Positive Lyapunov exponent for random perturbations of predominantly expanding multimodal circle maps}
\begin{abstract}
We study the effects of IID random perturbations of amplitude $\epsilon > 0$ on the asymptotic dynamics of one-parameter families
$\{f_a : S^1 \to S^1, a \in [0,1]\}$ of smooth multimodal maps which are ``predominantly expanding'', i.e., $|f'_a| \gg 1$ away from small neighborhoods of the critical set $\{ f'_a = 0 \}$. We obtain, for any $\epsilon > 0$, a \emph{checkable, finite-time} criterion on the parameter $a$ for random perturbations of the map $f_a$ to exhibit (i) a unique stationary measure, and (ii)
a positive Lyapunov exponent comparable to $\int_{S^1} \log |f_a'| \, dx$. This stands in contrast with the situation for the deterministic dynamics of $f_a$, the chaotic regimes of which are determined by typically uncheckable, infinite-time conditions. Moreover, our finite-time criterion depends on only $k \sim \log (\epsilon^{-1})$ iterates of the deterministic dynamics of $f_a$, which grows quite slowly as $\epsilon \to 0$.
\end{abstract}
\section{Introduction and statement of results}
A fundamental goal in dynamical systems is to determine the asymptotic behavior of various dynamical systems. Away from the uniformly expanding, Anosov and Axiom A settings, maps can have ``mixed'' dynamical behavior, e.g., hyperbolicity on some parts of phase space and contractive behavior on others. On the collection of maps with this `mixed' behavior, various dynamical regimes (e.g., asymptotically stable orbits with large basins of attraction versus more `chaotic' asymptotic behavior)
can be intermingled, in the space of maps, in an extremely convoluted way.
These issues are already present in the deceptively simple example of the one-parameter family of quadratic maps $f_a : [0, 1] \to [0, 1], f_a(x) := a x (1 - x)$ for $a \in [0,4]$. Let us agree to say that for a parameter $a \in [0,4]$, the map $f_a$ is \emph{regular} if phase space $[0,1]$ is covered Lebesgue almost-surely by the basins of periodic sinks, while $f_a$ is \emph{chaotic} if it possesses a unique a.c.i.m. with a positive Lyapunov exponent. For the family $\{ f_a\}$, it is known (e.g., \cite{L02} and many others) that the parameter space [0,4] is Lebesgue-almost surely partitioned into two sets, $\mathcal A \cup \mathcal B$, with the following properties: \begin{itemize}
\item For all $a \in \mathcal A$, the map $f_a$ is regular, and for all $a \in \mathcal B$, the map $f_a$ is chaotic.
\item The set $\mathcal A$ is open and dense in $[0,4]$, while $\mathcal B$ has positive Lebesgue measure. \end{itemize} In particular, the chaotic property is extremely \emph{structurally unstable} with respect to the parameter $a$: any $a \in \mathcal{B}$ is the limit point of a sequence $\{ a_n \} \subset \mathcal{A}$.
Aside from `exceptional' cases (e.g., $a = 4$), it is typically impossible to rigorously determine, even with the help of a computer, the dynamical regime corresponding to a \emph{given} parameter $a \in [0,4]$, as this determination would require infinite-precision knowledge of infinite-length trajectories. For the quadratic family and other families of 1D maps with mixed expansion and contraction, the core issue is the difficulty in ruling out the formation of \emph{sinks of high period}: even if, for a given $a$, sinks of period $\leq N$ are ruled out for some extremely large $N$, one cannot rule out the existence of a sink of period $N + 1$ or greater. Indeed, the trajectory of a sink of large period may `look' chaotic before the full period has elapsed.
Although fewer results are known for higher-dimensional models, one anticipates a similar degree
of convoluted intermingling of dynamical regimes: see, e.g.,
the class of examples now known as Newhouse phenomena \cite{N79}.
A somewhat more complete account of coexistence phenomena is
available for the famous Chirikov standard map family \cite{C79},
a one-parameter family $\{ F_L, L > 0 \}$ of volume-preserving maps on the torus $\mathbb T^2$ exhibiting
simultaneously both strong hyperbolicity and elliptic-type behavior on phase space.
As the parameter $L$ increases, so too does the proportion of phase space on
which $F_L$ is hyperbolic, as well as the ``strength'' of this hyperbolicity. However, even for large $L$, a small amount of
elliptic-type behavior is intermingled with hyperbolic behavior in the parameter space.
Indeed, for a residual set of large $L$, it is known
that elliptic islands for $F_L$ are approximately $L^{-1}$-dense in $\mathbb T^2$ (Duarte 1994 \cite{D2}; see also \cite{D1}) ,
while the set of points with a positive Lyapunov exponent has Hausdorff
dimension 2 and is approximately $L^{- 1/3}$-dense in $\mathbb T^2$ (Gorodetski 2012 \cite{G12}).
To the authors' knowledge, it is still not known whether $F_L$ has
positive metric entropy (equivalently, a positive Lyapunov exponent on a positive-volume
set) for any fixed value of $L$.
A similar situation exists for the H\'enon family of diffeomorphisms $f_{a,b}(x,y) :=(1 - a x^2 + y, b x)$ for real parameters $a, b$, introduced by H\'enon \cite{henon1976two} as a toy model capturing the dynamics of Poincar\'e sections of the Lorenz model \cite{lorenz1967nature} in certain parameter ranges. Note that the singular limit $b \to 0$ corresponds with the quadratic map family. Of particular interest are the ``classical parameters'' $a = 1.4, b = .3$ at which a wealth of numerical evidence suggests $f_{a,b}$ admits a chaotic strange attractor (see, e.g., the original work \cite{henon1976two}). This remains a major open problem and is likely to be quite difficult: see, e.g., \cite{galias2014structure, galias2015henon} which establish the existence of parameters close to $(a,b) = (1.4, .3)$ at which the attractor degenerates into periodic sinks. Another known difficulty is the mechanism of unfurling of homoclinic tangencies \cite{newhouse1974diffeomorphisms}; for the H\'enon map specifically, see for example \cite{benedicks2018coexistence} and the references therein. At present, the existence of strange attractors for $f_{a,b}$ is only known for perturbatively small values of $b$ \cite{BC2, mora1993abundance}. This work has since been substantially generalized to a framework for establishing existence of \emph{rank one} strange attractors in the work of Wang, Young and others in a variety of contexts, e.g., near limit cycles subjected to time-periodic forcing with long period \cite{wang2008toward, WO,OS, LWY}. We emphasize that these constructions are quite challenging, and do not explicitly identify parameters at which the strange attractors exist; instead, a parameterized family of maps is considered, and a nonempty set of parameters (a positive-volume Cantor set) is identified at which a strange attractor exists.
\subsubsection*{Random perturbations}
The real world is inherently noisy, and so it is natural to consider IID random perturbations of otherwise deterministic dynamics and seek to understand the corresponding asymptotic behavior. For concreteness, let us consider a smooth, deterministic map $f : S^1 \to S^1$ and assume that $|f'| > 2$ on all but a small neighborhood of the critical set $\{ f' = 0\}$ for $f$.
Parametrizing $S^1 \cong [0,1)$ and doing arithmetic ``modulo 1'', at time $n$ we perturb $f$ to the map $f_{\omega_{n-1}}(x) = f(x + \omega_{n-1})$, where $\omega_0, \omega_1, \cdots$ are IID random variables uniformly distributed in $[- \epsilon, \epsilon]$. Here, the \emph{noise amplitude} $\epsilon > 0$ is a fixed parameter. We will consider the asymptotic dynamics of compositions of the form \[f^n_{{\underline \omega}} = f_{\omega_{n-1}} \circ \cdots \circ f_{\omega_0}\] given a sample ${\underline \omega} = (\omega_0, \omega_1, \cdots)$.
When $\epsilon \geq 1$, random trajectories $X_n = f^n_{\underline \omega}(X_0), n \geq 1$ are essentially IID themselves;
in this situation it is a straightforward exercise to check (i) uniqueness of the
stationary measure for the process $(X_n)$ on $S^1$ and (ii) that the Lyapunov exponent $\lambda = \lim_{n \to \infty}
\frac1n \log |(f^n_{\underline \omega})'(x)|$ exists and is constant for every $x \in S^1$ and a.e. sample ${\underline \omega}$. What is more subtle is the situation when $\epsilon \ll 1$, in which case the composition $f^n_{\underline \omega}$ may
develop one or more \emph{random sinks}; here, for our purposes, a random sink is a
stationary measure for $(X_n)$ with a negative Lyapunov exponent.
Random sinks can develop if, for instance, the map $f$ itself has a periodic sink $z \in S^1$.
Indeed, it is not hard to check that the sink $z$ persists in the form of a random sink
for all $\epsilon > 0$ sufficiently small (see, e.g.,
Section 3.1 of this paper for a worked example).
On the other hand, one anticipates that sinks of $f$ of high period $N$ can be ``destroyed''
in the presence of a small but sufficient amount of noise, i.e.,
when $\epsilon \geq \epsilon_N$, where $\epsilon_N \to 0$ as $N \to \infty$.
As described previously, these high-period sinks are precisely those
responsible for the convoluted intermingling of dynamical regimes in one-parameter families of unimodal
or multimodal maps.
In an alternative perspective: given a fixed noise amplitude $\epsilon > 0$, the only
sinks of $f$ which could possibly persist as random sinks for $(f^n_{\underline \omega})$ are
those of period $\leq k_\epsilon := \max\{ N : \epsilon < \epsilon_N\}$.
A crucial point here is that, for a given map $f$,
it is virtually always possible to check
for sinks of period less than some given value.
For these reasons, one anticipates that for a reasonably large class of $f$ as above and a given noise
amplitude $\epsilon > 0$, it should be possible to determine the asymptotic chaotic regime of the corresponding
random composition $f^n_{\underline \omega}$ based on \emph{checkable criteria} involving
only \emph{finitely many} iterates of the map $f$.
The present paper is a step in this direction for a model of
one-parameter families of multimodal circle maps $f = f_a$ exhibiting strong expansion ($|f_a'| \gg 1$) away from
a small neighborhood of the critical set $\{ f_a' = 0 \}$.
We obtain a checkable sufficient criterion on the parameter $a$, involving only finitely
many iterates of the map $f_a$ (in particular, precluding sinks of low period, as above),
for deducing asymptotic chaotic behavior for the random composition $f^n_{\underline \omega}$
when the noise parameter $\epsilon$ is not too small. An appealing feature of these
results is that, given $\epsilon > 0$, the criterion involves only approximately $\log(\epsilon^{-1})$ iterates,
which grows quite slowly as $\epsilon \to 0$.
\subsection{Statement of results}
\subsubsection*{The model} Let $S^1=\mathbb{R}/ \mathbb{Z}$ be the unit circle, parametrized by the interval $[0,1)$. We assume throughout that $\psi:S^1\rightarrow \mathbb{R}$ is a $C^2$ function for which the following conditions hold: \begin{enumerate} \item[(H1)] the \emph{critical set} $C'_{\psi}=\{\hat{x}\in S^1:\psi'(\hat{x})=0\}$ has finite cardinality, and \item[(H2)] we have $\{ \psi'' = 0 \} \cap C_\psi' = \emptyset$. \end{enumerate}
We consider maps of the form \[ f = f_{L, a} := L \psi + a \,\, (\text{mod } 1) \, , \] for $L > 0, a \in [0,1)$, where $\,\, (\text{mod } 1) : \mathbb{R} \to S^1 \cong \mathbb{R} / \mathbb{Z}$ is the natural projection. Observe that for $L \gg 1$, the map $f$ is strongly expanding away from $C_\psi'$.
When $\epsilon > 0$ is specified, we write $\Omega = \Omega^\epsilon = \big( [- \epsilon, \epsilon]\big) ^{\mathbb{Z}_{\geq 0}}$ for the sample space for our perturbations. Elements ${\underline \omega} \in \Omega$ are written ${\underline \omega} = (\omega_0, \omega_1, \omega_2, \cdots)$ where $\omega_i \in [- \epsilon, \epsilon], i \geq 0$. With $\nu^\epsilon$ denoting the uniform distribution on $[-\epsilon, \epsilon]$, we define $\mathbb{P} = \mathbb{P}^\epsilon = (\nu^\epsilon)^{\otimes \mathbb{Z}_{\geq 0}}$ on $\Omega$. We write $\mathcal{F}$ for the product $\sigma$-algebra on $\Omega$ and for $n \geq 0$ we write $\mathcal{F}_n = \sigma(\omega_0, \omega_1, \cdots, \omega_n) \subset \mathcal{F}$.
When $f = f_{L, a}$ is specified, we consider random maps of the form $f_\omega : S^1 \to S^1, f_\omega(x) := f(x + \omega)$, where it is understood implicitly that the argument for $f$ is taken $\,\, (\text{mod } 1)$. Given a sample ${\underline \omega} \in \Omega$, we have a corresponding random composition \[ f^n_{\underline \omega} := f_{\omega_{n-1}} \circ \cdots \circ f_{\omega_1} \circ f_{\omega_0} \] for $n \geq 1$.
Alternatively, we can view the random maps $f^n_{\underline \omega}$ as giving rise to a {\it Markov chain} $(X_n)_n$ on $S^1$ defined, for fixed initial $X_0 \in S^1$, by $X_{n +1} := f_{\omega_n}(X_n)$. The corresponding Markov transition kernel $P(\cdot, \cdot)$ is defined for $x \in S^1$ and Borel $B \subset S^1$ by \[
P(x, B) := \mathbb{P}(X_1 \in B | X_0 = x) = \nu^\epsilon \{ \omega \in [- \epsilon, \epsilon] : f_\omega(x) \in B \} \, . \] We say that a Borel measure $\mu$ on $S^1$ is \emph{stationary} if \[ \mu(B) = \int_{S^1} P(x, B) \, d \mu(x) \] for all Borel $B \subset S^1$.
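All of the objects just introduced are straightforward to simulate. The following minimal Python sketch (purely illustrative: the choice $\psi(x)=\sin(2\pi x)/400$, which satisfies (H1) and (H2), and the values of $L$, $a$, $\epsilon$ and the orbit length are arbitrary) estimates the Lyapunov exponent of the random composition by averaging $\log|f_{\omega}'|$ along one random orbit.

\begin{verbatim}
import math
import random

random.seed(1)

L, a, eps = 1.0e4, 0.3, 1.0e-2             # illustrative parameter values

def psi(x):
    return math.sin(2 * math.pi * x) / 400.0

def dpsi(x):
    return 2 * math.pi * math.cos(2 * math.pi * x) / 400.0

def f(x):                                  # f_{L,a}(x) = L psi(x) + a  (mod 1)
    return (L * psi(x) + a) % 1.0

n, x = 10 ** 6, random.random()
log_sum = 0.0
for _ in range(n):
    omega = random.uniform(-eps, eps)      # IID noise, uniform in [-eps, eps]
    y = (x + omega) % 1.0
    log_sum += math.log(abs(L * dpsi(y)))  # |f_omega'(x)| = L |psi'(x + omega)|
    x = f(y)                               # X_{n+1} = f_omega(X_n)

print("Lyapunov exponent estimate:", log_sum / n, " log L =", math.log(L))
\end{verbatim}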
\subsubsection*{Results}
Our results concern the following \emph{checkable, finite-time} criterion $(H3)_{c, k}$ on the dynamics of $f$. For now, $c > 0$ and $k \in \mathbb{N}$ are arbitrary.
\begin{align} (H3)_{c, k} \quad \quad \text{ For every } \hat x \in C_\psi' \, , \text{ we have } \quad d(f^l(\hat x) , C_\psi') \geq c \quad \text{ for all } 1 \leq l \leq k \, . \end{align}
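The criterion can indeed be checked by a finite computation: one iterates each critical point $k$ times under the unperturbed map $f$ and measures the circle distance to $C_\psi'$ at each step. A minimal floating-point sketch follows (purely illustrative, with the same $\psi(x)=\sin(2\pi x)/400$ as above, whose critical set is $\{1/4,3/4\}$; a rigorous verification would use interval arithmetic instead).

\begin{verbatim}
import math

L, a = 1.0e4, 0.3                          # illustrative parameter values
crit = [0.25, 0.75]                        # critical set of psi(x) = sin(2 pi x)/400

def f(x):
    return (L * math.sin(2 * math.pi * x) / 400.0 + a) % 1.0

def dist(x, y):                            # distance on S^1 = R/Z
    t = abs(x - y) % 1.0
    return min(t, 1.0 - t)

def satisfies_H3(c, k):
    for xhat in crit:
        x = xhat
        for _ in range(k):                 # check d(f^l(xhat), C'_psi) >= c, l = 1..k
            x = f(x)
            if min(dist(x, y) for y in crit) < c:
                return False
    return True

print([(k, satisfies_H3(0.05, k)) for k in range(1, 9)])
\end{verbatim}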
We now state our results.
\begin{thmA}\label{thm:ergod} Let $\beta , c \in (0,1)$. Let $L > 0$ be sufficiently large, depending on these constants, and assume $f = f_{L, a}$ satisfies $(H3)_{c, k}$ for some arbitrary $k \in \mathbb{N}$. Finally, assume $\epsilon \geq L^{- (2 k + 1)(1-\beta)}$. Then, the random composition $f^n_{\underline \omega}$ admits a unique (hence ergodic) stationary measure $\mu$ supported on all of $S^1$. \end{thmA}
\begin{thmA}\label{thm:lyapEst} Let $\beta, c \in (0,1)$. Let $L > 0$ be sufficiently large, depending on these constants, and assume $f = f_{L, a}$ satisfies $(H3)_{c, k}$ for some arbitrary $k \in \mathbb{N}$. Finally, assume $\epsilon \geq L^{-(2k + 1)(1-\beta)+\alpha}$ where $\alpha \geq 0$ is arbitrary. Then, the Lyapunov exponent \[
\lambda = \lim_n \frac1n \log |(f^n_{\underline \omega})' (x)| \] exists and is constant over $x \in S^1$ and $\mathbb{P}$-almost every ${\underline \omega} \in \Omega$, and satisfies the estimate \[ \lambda \geq \lambda_0 \log L \, , \] where $\lambda_0 = \lambda_0(\alpha, k) := \min\{\frac{\alpha}{k+1}, \frac{1}{10}\}$. \end{thmA}
Theorems \ref{thm:ergod}, \ref{thm:lyapEst} are approximately sharp, in the sense that $(H3)_{c, k}$ is compatible with the formation of sinks of period $k + 1$, while such sinks persist under random perturbations $\epsilon \leq C L^{- (2 k + 1)}$ where $C > 0$ is a constant. See Proposition \ref{prop:optimal} in Section 3.1 for more information.
A satisfying feature of our results is that, for fixed sufficiently large $L$ and any given $\epsilon > 0$, to deduce a large positive exponent for $f = f_{L, a}$ requires validating condition $(H3)_{c, k}$ with $k = k_\epsilon \approx \log (\epsilon^{-1})$. The value of $k_\epsilon$ grows only logarithmically with $\epsilon^{-1}$, which means that even for quite small $\epsilon > 0$, Theorems \ref{thm:ergod}, \ref{thm:lyapEst} are already valid when $(H3)_{c, k}$ is verified for a relatively small value of $k$.
\subsubsection*{Prior work}
There is a substantial and growing literature on random dynamical systems in low dimensions: we recall below some of the literature on random dynamical systems
closest to the present paper, i.e., dealing with random
maps having strong expansion mixed with some contraction in phase space.
Lian and Stenlund \cite{LS} consider random perturbations of {\it predominantly expanding} (expanding on most of phase space with a small exceptional set) multimodal maps, more-or-less equivalent to the model in the present paper. They prove that for large enough noise amplitudes, the random system has a unique ergodic stationary measure and a positive Lyapunov exponent. They develop a similar result with smaller noise amplitude assuming a `one time-step' condition on the dynamics, essentially equivalent to $(H3)_{c, 1}$ in our paper. Because we deal with higher-iterate dynamical assumptions, the perturbations we may consider are substantially smaller than those in \cite{LS}.
Stenlund and Sulku \cite{SS} obtain exponential loss of memory for IID compositions $T^n = T_n \circ \cdots \circ T_1$ of random circle maps which are ``expanding on average'': contractive behavior ($\inf |T'| \approx 0$) can appear with positive probability, but the random variable $\inf |T'|$ satisfies a moment condition. The random maps we consider in the present paper \emph{always} have critical points, and so do not satisfy the conditions of \cite{SS}.
In a joint work between the first author, Xue and Young \cite{BXY1, BXY2, blumenthal2020lyapunov}, random perturbations of a model of ``predominantly hyperbolic'' two-dimensional maps are considered. The paper \cite{BXY1} considers a volume-preserving model encompassing the Chirikov standard map, and \cite{BXY2} considers a dissipative (volume-compressing) model of maps having qualitative similarities to the H\'enon maps, while the more recent \cite{blumenthal2020lyapunov} considers systems consisting of arbitrarily many coupled volume-preserving maps. Chaotic properties of the deterministic dynamics in each case are anticipated to hold on large subsets of parameter space, but rigorous verification is largely beyond the scope of current studies. What \cite{BXY1, BXY2, blumenthal2020lyapunov} show is that sufficiently large random perturbations have the effect of ``unlocking'' the hyperbolicity of these systems (positive Lyapunov exponent proportional to the Lebesgue average $\int \log \| dF_x\| \, d x$, estimate of decay of correlations). A different but related analysis is carried out in the paper of Ledrappier, Sim\'o, Shub and Wilkinson \cite{LSSW}, which considers IID perturbations applied to a twist map on the sphere.
Additionally, \cite{BXY1, BXY2} allow smaller random perturbations on assuming a checkable condition involving the first several iterates of the deterministic map, consistent with the finite-time checkable criterion given in the present paper.
To reiterate, the papers \cite{LS, SS, LSSW, BXY1, BXY2, blumenthal2020lyapunov} are emphasized because they deal with random perturbations of maps for which very little is assumed: in these studies, the randomness itself is \emph{leveraged} in a crucial way to `shake loose' hyperbolicity. Other works examine random compositions of maps with known `good' asymptotic behavior: by way of example, we mention works on smooth \cite{WSY, BY} and piecewise \cite{Buzzi} expanding maps, maps with a neutral fixed point \cite{AHNV}. This also includes works on the problem of stochastic stability: under what conditions do properties of a given deterministic system persist under small random perturbations? There are many works in this important direction, for example, work on small random perturbations of Axiom A systems \cite{Ki88, Y86}, unimodal maps under a (noncheckable) infinite-time condition \cite{BBV, KK, BY1}, and stochastic stability for H\'enon attractors \cite{BV}. We also acknowledge the related problem of \emph{statistical stability}, e.g., how long-time statistics of a dynamical system change within a parametrized family: see, e.g., the review \cite{R09}.
The study of deterministic one-dimensional maps with critical points (unimodal or multimodal) has a long history, a small part of which we recall here. Naturally we inherit and use some of the ideas developed in this literature. Indeed, our criterion $(H3)_{c, k}$
is a checkable, finite-time version of various criteria on postcritical orbits of unimodal and multimodal maps as used by, e.g., Misiurewicz \cite{M}, Jakobson \cite{J}, Collet-Eckmann \cite{CE} and Benedicks and Carleson \cite{BC1}. We note as well the more expository account by Wang and Young \cite{WY}, which we found remarkably helpful in preparing this work. There are also by now several works attempting to quantify the set of parameters for the quadratic map family at which various dynamical regimes are observed \cite{tucker2009rigorous, luzzatto2006computable, galias2017systematic, golmakani2020rigorous}. Also related to our finite-time checkable criteria are frameworks attempting to understand dynamical properties at ``finite resolution'' \cite{luzzatto2011finite, elshaarawy2013efficient} or along finite, bounded timescales \cite{blumenthal2020diffusion}.
\subsubsection*{Potential future directions}
A natural future direction is to study small random perturbations of the H\'enon map and related models, in the hope that one can derive checkable finite-time conditions for a positive Lyapunov exponent for the corresponding stationary measure.\footnote{For random systems with absolutely continuous stationary measures, a positivity of Lyapunov exponent implies existence of ``random strange attractors'' analogous to those for deterministic systems \cite{ledrappier1988entropy, blumenthal2019equivalence}.} This has been carried out for the standard map and a family of dissipative mappings with ``H\'enon flavor'' using finite-time conditions amounting to three steps of the deterministic dynamics in the previous work \cite{BXY1, BXY2}; the goal of future work would be to go beyond this and derive a succession of stronger finite-time conditions allowing for smaller noise amplitudes.
On the other hand, studying Lyapunov exponents for models of this kind is likely to entail several fundamental challenges not addressed in the present manuscript, e.g., coping with the fact that one must now track tangent directions as well as the location in phase space.
\subsubsection*{Organization of the paper.}
In Section 2, we derive elementary properties of our model used throughout the paper, especially the notion of \emph{bound period} defined in Section 2.2. In Section 3.1, we discuss the possible formation of sinks of period $k + 1$
under the condition $(H3)_{c, k}$, verifying the relative sharpness of Theorems \ref{thm:ergod}, \ref{thm:lyapEst};
ergodicity as Theorem \ref{thm:ergod} is then proved in Section 3.2. The material in Section 3 depends on Section 2 but is otherwise logically
isolated from the rest of the manuscript.
The proof of Theorem \ref{thm:lyapEst} occupies the remainder
of the paper, Sections 4--6.
\subsubsection*{Notation}
\begin{itemize}
\item Throughout, we parametrize $S^1$ by the half-open interval $[0,1) \cong \mathbb{R} / \mathbb{Z}$. For $s \in \mathbb{R}$, we write $s \,\, (\text{mod } 1)$ for the projection of $s$ to $[0,1) \cong \mathbb{R} / \mathbb{Z}$ modulo 1.
\item We define the \emph{lift} $\tilde f : S^1 \to \mathbb{R}$ of $f$ by $\tilde f(x) = L \psi(x) + a$ (i.e., without projecting $\,\, (\text{mod } 1)$ to $S^1$). We regard $\tilde f$ as a map $\mathbb{R} \to \mathbb{R}$ by extending the domain periodically to all of $\mathbb{R}$. We write $\tilde f_\omega(x) = \tilde f(x + \omega)$. We define the corresponding Markov process $(\tilde X_n)_n$ on $\mathbb{R}$ by setting $\tilde X_{n + 1} = \tilde f_{\omega_n}(\tilde X_n)$.
\item We write $d( \cdot, \cdot)$ for the metric induced on $S^1$ via the identification with $\mathbb{R}/ \mathbb{Z} \cong [0,1)$. Note that in our parameterization,
we have the identity $d(x,y) = \min \{ |x - y|, |x - y \pm 1|\}$. For a set $A \subset S^1$, we write $\mathcal{N}_\epsilon(A)$ for the $\epsilon$-neighborhood of $A$ in the metric $d$.
\item For a point $x \in S^1$ and a set $A \subset S^1$, we define the minimal distance $d(x,A) = \inf_{a \in A} d(x, a)$. For sets $A, B \subset S^1$, we define $d(A, B) = \inf_{a \in A} d(a, B) = \inf_{a \in A, b \in B} d(a, b)$.
\item Given a set $A \subset S^1$ or $\mathbb{R}$ and $z \in S^1$ or $\mathbb{R}$, we write $A - z = \{ a - z : a \in A \}$ for the set $A$ shifted by $z$.
\item Given a partition $\zeta$ of $S^1$ (resp. $\mathbb{R}$) and a set $A \subset S^1$ (resp. $A \subset \mathbb{R}$), we write $\zeta|_A$ for the partition on $A$ consisting of atoms of the form $C \cap A, C \in \zeta, C \cap A \neq \emptyset$.
\item When it is clear from context, we write $\mathbb{E}$ for the expectation with respect to $\mathbb{P}$. \end{itemize}
\section{Preliminaries: predominant expansion and bound periods}
\subsubsection*{Bound periods: a heuristic} Consider the dynamics of a smooth unimodal or multimodal map $f : S^1 \to S^1$. In the pursuit of finding maps $f$ accumulating a positive Lyapunov exponent, the main obstruction is the formation of sinks, and so
a natural assumption to make is that the postcritical orbits $f^n \hat x, \hat x \in \{ f' = 0\}, n \geq 1$ remain far enough away from $\{ |f'| \leq 1\}$
so that $|(f^n)'(f \hat x)| \gtrsim e^{n \alpha}$ for some $\alpha > 0$.
If, for some $x \in S^1$, the orbit $(f^n x)_n$ reaches a small neighborhood of some $\hat x\in \{ f' = 0\}$ at time $t$, then the subsequent iterates $f^{t + i} x$ will closely shadow $f^i \hat x$ for $i \leq p = p(d(f^t x, \hat x))$. The time interval $[t + 1, t + p]$ is referred to as the \emph{bound period} for $x$ at time $t$. As we assumed expansion along the postcritical orbit $(f^i \hat x)_{i \geq 1}$, one anticipates that the derivative growth $(f^p)'(f^{t + 1} x)$ accumulated along the bound period will balance out the derivative `damage' due to $f'(f^t x)$ (possibly $\ll 1$ when $f^t x, \hat x$ are quite close), so that, for instance,
$(f^{p + 1})'(f^t x) \sim e^{ (p + 1)\alpha'}$ holds for some $\alpha' < \alpha$.
This is a rough summary of a mechanism by which 1D maps with critical points (unimodal and multimodal) can accumulate a positive Lyapunov exponent for typical trajectories. For an exposition of this method, see \cite{WY}.
Our aim in Section 2 is to apply a variation of this idea to our model: the condition $(H3)_{c, k}$
involves the first $k$ iterates of postcritical trajectories, and so bound periods of length up to $k$ are available to recover derivative growth. In Section 2.1 we carry out some essential preliminaries used in the rest of the paper, and in Section 2.2 we will discuss bound periods for our random compositions.
\subsection{Preliminaries}
\subsubsection{The basic setup}\label{subsubsec:basicSetup} We fix, below and throughout the paper, a function $\psi: S^1 \to \mathbb{R}$ satisfying (H1) and (H2), as well as parameters $c \in (0,1), \beta \in (0,\frac{1}{100})$ (restricting to $\beta$ in this range incurs no loss of generality). Moreover, we implicitly fix the parameter $L > 0$, and are allowed to take it sufficiently large depending on $c, \beta$ and the function $\psi$.
On rescaling the function $\psi$ in relation to the parameter $L$, we will assume going forward that the following condition holds in addition to (H1) -- (H2). \begin{itemize}
\item[(H4)] We have $\| \psi' \|_{C^0}, \| \psi'' \|_{C^0} \leq 1/10$. \end{itemize}
Separately (i.e., independently of $L$), $k \in \mathbb{N}$ is fixed, and a parameter $a \in [0,1)$ is fixed for which $(H3)_{c, k}$ holds for the mapping $f = f_{L, a} := L \psi + a \,\, (\text{mod } 1)$. Finally, we fix a parameter $\epsilon > 0$, on which constraints (depending on all the previous parameters) will be made as we go along.
\subsubsection{Partition of phase space}
The conditions (H1) -- (H2) imply that there is a constant $K_1 = K_1(\psi) > 0$ with the property that for any $x \in S^1$, \begin{align}\label{eq:lowerBoundDer}
|\psi'(x)| \geq K_1 d(x, C_\psi') \, . \end{align} We use \eqref{eq:lowerBoundDer} repeatedly, often without mention. For $\eta<0$, we define \begin{align} B(\eta) = \{ x \in S^1 : d(x, C_\psi') \leq K_1^{-1}L^{\eta} \} \, . \end{align} It is clear that for $x \notin B(\eta)$, we have $
|f'(x) | \geq L^{\eta + 1} \, , $ while $B(\eta)$ is the union of $\# C_\psi'$-intervals of length $\sim L^{\eta}$ each.
Define the partition $S^1 = \mathcal{G} \cup \mathcal I \cup \mathcal{B}$, where \[ \mathcal{G} = S^1 \setminus B(- \beta) \, , \quad \mathcal I = B(- \beta) \setminus B(- \frac12 - \beta) \, , \quad \mathcal{B} = B(-\frac12 - \beta) \, . \] We have, then, that \[
|f'|_{\mathcal{G}}| \geq L^{1 -\beta } \, , \quad \text{ and } \quad |f'|_{\mathcal I}| \geq L^{\frac12-\beta} \, . \] Similar estimates apply to $f'_\omega$ on the shifted sets $\mathcal{G}_\omega := \mathcal{G} - \omega, \mathcal I_\omega := \mathcal I - \omega$ for $\omega \in [- \epsilon, \epsilon]$.
Observe that $|f'|_{\mathcal{B}}|$ can be arbitrarily small. To address this, we subdivide $\mathcal{B} = \cup_{l = 1}^k \mathcal{B}^l$ in the following way: set \[ \mathcal{B}^k = B(- \frac{k}{2} - \beta) \, , \] and for $1 \leq l < k$, \[ \mathcal{B}^l = B(- \frac{l}{2} - \beta) \setminus B(- \frac{l + 1}{2} - \beta) \, . \] Notice that the definition above is consistent with the identification $\mathcal I = \mathcal{B}^0$. We also use the notation $\mathcal{B}^l_\omega := \mathcal{B}^l - \omega$ for $\omega \in [- \epsilon, \epsilon]$. Using \eqref{eq:lowerBoundDer}, one checks that \[
|f_\omega'|_{\mathcal{B}_\omega^l}| \geq L^{- \frac{l - 1}{2} - \beta} \quad \text{ for } \quad 1 \leq l < k \, , \]
while on $\mathcal{B}_\omega^k$ we have no lower bound on $|f_\omega'|$.
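For instance, the first of these bounds is immediate from \eqref{eq:lowerBoundDer}: for $x \in \mathcal{B}^l_\omega$ with $1 \leq l < k$ we have $d(x + \omega, C_\psi') \geq K_1^{-1} L^{- \frac{l + 1}{2} - \beta}$, and hence
\[
|f_\omega'(x)| = L |\psi'(x + \omega)| \geq L K_1 \, d(x + \omega, C_\psi') \geq L^{1 - \frac{l + 1}{2} - \beta} = L^{- \frac{l - 1}{2} - \beta} \, .
\]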
The partitions $S^1 = \mathcal{G} \cup \mathcal I \cup \mathcal{B} = \mathcal{G} \cup \mathcal I \cup \mathcal{B}^1 \cup \cdots \cup \mathcal{B}^k$ are used repeatedly throughout the paper. We will abuse notation and regard these as partitions of $\mathbb{R}$ as well, extended by periodicity via the parametrization $S^1 \cong [0,1) \cong \mathbb{R}/\mathbb{Z}$.
\subsection{Bound periods}
The following lemma confirms that a random orbit $(f^i_{\underline \omega} x)$, initiated at $x \in \mathcal{B}^l, 1 \leq l \leq k$, will closely shadow a postcritical orbit $(f^i \hat x)$ for $l$ steps, i.e., will have a bound period of length $l$.
In Lemma \ref{lem:boundPeriodCoherence} below we do not assume $(H3)_{c, k}$.
\begin{lem}\label{lem:boundPeriodCoherence}
Let $L$ be sufficiently large, and let $k \in \mathbb{N}$ be arbitrary. Assume that \begin{align}\label{boundEpsilon} \epsilon < L^{- \max \{ k-1, \frac12\} - \beta} \, . \end{align}
Then, we have the following. Let $1 \leq l \leq k$ and fix an arbitrary sample ${\underline \omega} \in \Omega$. Let $J_0$ be any connected component of $B(- \frac{l + \beta}{2})$ and let $\hat x \in C_\psi' \cap J_0$ be the (unique) critical point contained in $J_0$.
Then, for all $1 \leq i \leq l$ we have that \[ f_{\underline \omega}^i \big(
J_0
\big) \subset \mathcal{N}_{L^{- \beta / 2}} (f^i \hat x) \, . \]
\end{lem} The reason for the upper bound \eqref{boundEpsilon} is that
if the perturbation amplitude $\epsilon$ is too large, then
$f^i_{\underline \omega}|_{\mathcal{B}^l_{\omega_0}}$ may diverge from $f^i \hat x$
for some $i < k$,
thereby spoiling the corresponding bound periods.
From Lemma \ref{lem:boundPeriodCoherence} and noting $\mathcal{B}^l \subset B(- \frac{l + \beta}{2})$, it is straightforward to check that if $L$ is sufficiently large and
$f = f_a$ satisfies $(H3)_{c, k}$, then
$f^i \hat x$ is well inside $\mathcal{G}$ for $1 \leq i \leq k$. It follows that for any $1 \leq l \leq k$ and $x \in \mathcal{B}^l_{\omega_0}$, we have $f^i_{\underline \omega}(x) \in \mathcal{G}$ for all $1 \leq i \leq l$, and the derivative estimate \[
|(f^{l}_{\theta {\underline \omega}})'(f_{\omega_0}x)| \geq L^{l( 1 - \beta)} \, . \]
Moreover, if $1 \leq l < k$ then we have $|(f_{\omega_0})'(x)| \geq L^{1 - \frac{l+1}{2} - \beta}$, hence \[
|(f^{l + 1}_{\underline \omega})'(x)| \geq L^{(l + 1)(\frac12 - \beta)} \, . \]
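Concretely, the second estimate follows from the first by the chain rule:
\[
|(f^{l + 1}_{\underline \omega})'(x)| = |(f^{l}_{\theta {\underline \omega}})'(f_{\omega_0} x)| \cdot |(f_{\omega_0})'(x)| \geq L^{l(1 - \beta)} \cdot L^{1 - \frac{l + 1}{2} - \beta} = L^{(l + 1)(\frac12 - \beta)} \, .
\]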
For the purposes of the preceding paragraph, it suffices to take $L$ large enough so that $L^{\beta} \gg 2 / (c K_1)$; note in particular that $L$ does not depend on $k$.
\begin{proof}[Proof of Lemma \ref{lem:boundPeriodCoherence}]
In the following proof, the lift $\tilde f : S^1 \to \mathbb{R}$ of $f$ is defined by $\tilde f(x) = L \psi(x) + a$, i.e., leaving out the ``$\,\, (\text{mod } 1)$'' in the definition of $f$. We extend the domain of $\tilde f$ to all of $\mathbb{R}$ by periodicity.
Without loss, we regard $J_0$ as an interval in $\mathbb{R}$. Let $\hat x \in C_\psi' \cap J_0$ be the (unique) critical point in $J_0$. Define $I_0 = \mathcal{N}_\epsilon(J_0)$ and inductively set $J_{i+1} = \tilde f(I_i)$, $I_{i + 1} = \mathcal{N}_\epsilon(J_{i + 1})$. Since $\tilde f_\omega(x) = \tilde f(x + \omega)$ with $|\omega| \leq \epsilon$, an induction gives $\tilde f^i_{\underline \omega}(J_0) \subset J_i$ for all $i \geq 1$, while $\tilde f^i \hat x \in J_i$ as well. It therefore suffices to show $\operatorname{Len}(J_i) \leq L^{- \beta/2}$ for all $1 \leq i \leq l$.
To start, decompose $I_0 = I_0^- \cup I_0^+$ where $I_0^- = [\hat x - \epsilon - K_1^{-1} L^{- \frac{l + \beta}{2} }, \hat x), I_0^+ = [\hat x, \hat x + \epsilon + K_1^{-1} L^{- \frac{l + \beta}{2} }]$. Noting that the images $\tilde f(I_0^-), \tilde f(I_0^+)$ share the left (resp. right) endpoint $\tilde f(\hat x)$ if $f''(\hat x) > 0$ (resp. $f''(\hat x) < 0$),
we have the estimate \begin{align*}
\operatorname{Len}(J_1) & \leq \max \{ \operatorname{Len}(\tilde f(I_0^+)), \operatorname{Len}(\tilde f(I_0^-)) \} \leq \frac12 L \| \psi'' \|_{C^0} \cdot ( \epsilon + K_1^{-1} L^{- \frac{l + \beta}{2} } )^2 \\ & \leq L \max\{ \epsilon, \operatorname{Len}(J_0)\}^2 \end{align*} using $(H4)$ in the last step. For each $i > 1$, we estimate \[
\operatorname{Len}(J_i) = \operatorname{Len}(\tilde f(I_{i-1})) \leq L \| \psi' \|_{C^0} \operatorname{Len}(I_{i-1}) \leq L \max \{ \epsilon, \operatorname{Len}(J_{i-1}) \} \] by estimating $\operatorname{Len}(I_{i-1}) \leq 2 \epsilon + \operatorname{Len}(J_{i-1}) \leq 3 \max \{ \epsilon, \operatorname{Len}(J_{i-1})\}$ and using $(H4)$. Bootstrapping, we conclude \[ \operatorname{Len}(J_i) \leq L^{i-1} \max \{ \epsilon, \operatorname{Len}(J_1)\} \leq \max \{ L^{i-1} \epsilon, L^i \epsilon^2, L^i \operatorname{Len}(J_0)^2 \} \, . \] The first two terms are $< L^{- \beta}$ by \eqref{boundEpsilon} for all $i \leq k$. For $i \leq l$, the third term is $\leq L^i \cdot 4 K_1^{-2} L^{- l - \beta} \leq L^{- \beta/2}$. This completes the proof. \end{proof}
\section{Ergodicity}
In Section 3.1, we prove Proposition \ref{prop:optimal}, which confirms the sharpness of Theorems \ref{thm:ergod}, \ref{thm:lyapEst} in the following sense. To start, condition $(H3)_{c, k}$ for the map $f = f_a$ is compatible with the formation of a sink of period $k + 1$. For all $\epsilon > 0$ sufficiently small, such sinks persist as random sinks for the random compositions $(f^n_{\underline \omega})$, i.e., stationary measures for the Markov chain $(X_n)_n$ admitting a negative Lyapunov exponent. In Proposition \ref{prop:optimal} we make this quantitative by exhibiting a scenario in which $f = f_a$ (i) satisfies $(H3)_{c, k}$; (ii) admits a sink of period $k +1$; and (iii) the random composition $(f^n_{\underline \omega})$ admits a random sink for all $\epsilon \lesssim L^{- (2 k + 1)}$. This upper bound for $\epsilon$ approximately matches the lower bound required in Theorems \ref{thm:ergod}, \ref{thm:lyapEst}, confirming the view that these results are sharp.
Having established this, in Section 3.2 we proceed with the proof of Theorem \ref{thm:ergod}. We note that in terms of logical dependence, Section 3 depends on Section 2 and is otherwise independent of the remainder of the paper, Sections 4 -- 6.
\subsection{Sinks}
Let us take on the assumptions made for the map $f = f_{L, a}$ as in Section \ref{subsubsec:basicSetup}, except that for Proposition \ref{prop:optimal} we need not assume $(H3)_{c, k}$ holds. Observe, however, that the hypothesis of Proposition \ref{prop:optimal}, i.e., the existence of a sink of period $k + 1$ for $f = f_{L,a}$, is entirely compatible with $(H3)_{c, k}$.
\begin{prop}\label{prop:optimal} For all $L$ sufficiently large, depending only on $\psi$, we have the following. Let $k \in \mathbb{N}$ be arbitrary, and assume $f = f_{L, a}$ has the property that $f^{k + 1} \hat x = \hat x$ for some $\hat x \in C_\psi'$. Then, for any $\epsilon \leq \frac{1}{49} L^{- (2 k + 1)}$, we have that the random composition $f^n_{\underline \omega}$ admits a stationary measure $\mu$ for which \begin{itemize} \item[(a)] the support $\operatorname{Supp}(\mu)$ of $\mu$ is contained in a $\frac17 L^{-(k + 1)}$-neighborhood of the orbit $\hat x, f \hat x, \cdots, f^k \hat x$ (in particular, $\operatorname{Supp} \mu \subsetneq S^1$); and \item[(b)] $\lambda_1(\mu) < 0$. \end{itemize} \end{prop}
\begin{proof}
We will show that there is a neighborhood $U$ of $\hat x$ such that for a.e. sample ${\underline \omega} \in \Omega$, \begin{itemize} \item[(i)] $f^{k + 1}_{\underline \omega}(U) \subset U$\, ; and
\item[(ii)] $|(f^{k + 1}_{\underline \omega} )'(x)| < \frac12$ for all $x \in U$. \end{itemize} By standard arguments, (i) -- (ii) imply the existence of a stationary measure $\mu$ with Lyapunov exponent $\lambda(\mu) \leq - \frac{\log 2}{k + 1} < 0$ supported in $\{ f^i_{\underline \omega} x : x \in U, {\underline \omega} \in \Omega, 0 \leq i \leq k\}$. At the end, we will estimate the size of this support.
Let $\gamma \in (0,1)$ be a constant, to be taken sufficiently small below, and throughout assume that $\epsilon \leq \gamma L^{- (2 k + 1)}$. Set $U$ to be the closed neighborhood of $\hat x$ of radius $r_U = \sqrt \gamma L^{-(k +1)}$. We estimate \[
\sup_{z \in U} |(f^{i}_{\underline \omega})'(z)| \leq \| f' \|_{C^0}^{i-1} \cdot (\epsilon + \sqrt \gamma L^{-(k + 1)}) \cdot \| f'' \|_{C^0} \leq L^{i} \cdot 2 \sqrt \gamma L^{- (k + 1)} \leq 2 \sqrt \gamma L^{i - (k + 1)}\, , \]
having used the elementary bound $|f_\omega'(z)| \leq |z + \omega - \hat x| \cdot \| f'' \|_{C^0} \leq L |z + \omega - \hat x|$ for $z$ near $\hat x$. In particular, at $i = k + 1$ we have that \begin{align}\label{eq:derivativeBound11}
|(f^{k + 1}_{\underline \omega})'|_U| \leq 2 \sqrt{\gamma} \, , \end{align}
hence $U$ maps to an interval $f^{k + 1}_{\underline \omega}(U)$ of length $|f^{k + 1}_{\underline \omega}(U)| \leq 2 \sqrt \gamma \cdot |U| = 4 \sqrt{\gamma} \cdot r_U$.
Let us now estimate $d(\hat x, f^{k + 1}_{\underline \omega}(\hat x))$. For simplicity, we pass to the lifts $\tilde f, \tilde f_\omega$: write $\hat x^i = \tilde f^i \hat x, \hat x^i_{\underline \omega} = \tilde f^i_{\underline \omega} \hat x$ for $0 \leq i \leq k + 1$. To start, \[
|\hat x^1 - \hat x^1_{\underline \omega}| = |\tilde f(\hat x) - \tilde f(\hat x + \omega_0)| \leq \epsilon \cdot \sup_{d(z, \hat x) \leq \epsilon} |f'(z)| \leq \epsilon^2 L \, . \] Next, for $i > 0$, \[
|\hat x^{i + 1} - \hat x^{i + 1}_{\underline \omega}| = |\tilde f(\hat x^{i}) - \tilde f(\hat x^{i}_{\underline \omega} + \omega_{i })| \leq L (\epsilon + |\hat x^i - \hat x^i_{\underline \omega}|) \, . \] Collecting, we obtain \begin{align*}
d(\hat x, f^{k + 1}_{\underline \omega}(\hat x)) \leq |\hat x - \hat x^{k + 1}_{\underline \omega}| & \leq ( L + L^2 + \cdots + L^k) \epsilon + L^{k + 1} \epsilon^2 \\ & \leq 2 L^k \epsilon + L^{k + 1} \epsilon^2 \leq 3 \gamma L^{- (k + 1)} \, , \end{align*} here having assumed $L > 2$. We deduce \[ d(\hat x, f^{k + 1}_{\underline \omega}( \hat x)) \leq 3 \sqrt \gamma \cdot r_U \, . \] It is easy to check that the same bound $d(\hat x^i, f^i_{\underline \omega}(\hat x)) \leq 3 \sqrt \gamma \cdot r_U$ holds for any $0 \leq i \leq k$ as well.
To conclude: for (ii) it suffices (see \eqref{eq:derivativeBound11}) to take $\gamma \leq 1/16$. For (i) we estimate as follows for $z \in U$: \begin{align}\label{eq:estimateImageU}
d(f^{k + 1}_{\underline \omega} (z) , \hat x) \leq d(\hat x, f^{k + 1}_{\underline \omega} (\hat x)) + |f^{k + 1}_{\underline \omega} (U)| \leq 7 \sqrt{\gamma} \cdot r_U \, . \end{align} We conclude that $f^{k + 1}_{\underline \omega}(U) \subset U$ almost surely as long as $\gamma \leq 1/49$.
Finally, to estimate the support of $\mu$ it suffices to repeat the estimate \eqref{eq:estimateImageU} with $f^i_{\underline \omega} (z), z \in U$ replacing $f^{k + 1}_{\underline \omega}(z)$. We conclude that $\mu$ is supported in the $7 \sqrt{\gamma} \cdot r_U$-neighborhood of the periodic sink $\{ f^i \hat x\}_{0 \leq i \leq k}$. \end{proof}
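In rough terms, the threshold $\epsilon \lesssim L^{- (2k + 1)}$ in Proposition \ref{prop:optimal} is dictated by the bookkeeping in the proof: the first $k + 1$ iterates amplify the perturbations $\omega_i$ by a factor of at most $\sim L^{k}$, while the contracting neighborhood $U$ has radius $r_U = \sqrt{\gamma}\, L^{- (k + 1)}$, so random invariance of $U$ requires, schematically,
\[
L^{k} \epsilon \lesssim r_U \sim L^{- (k + 1)} \, , \qquad \text{i.e.,} \qquad \epsilon \lesssim L^{- (2 k + 1)} \, .
\]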
\subsection{Ergodicity}\label{subsec:stationaryDistro}
As already seen in the proofs of Lemma \ref{lem:boundPeriodCoherence} and
Proposition \ref{prop:optimal}, the noise amplitude $\epsilon$ is amplified by
the strong expansion $L \gg 1$ exhibited by $f = f_{L, a}$. Each of these results
depended on the noise being \emph{small enough} to control this amplification. Quite to the contrary, in Section 3.2 we will \emph{take advantage of} this amplification to show that our process $(X_n)$ explores all of phase space $S^1$ with some positive probability. The amplification of noise by expansion
is a core motif in this paper, one which we will return to in Sections 5 -- 6.
Before proceeding to the proof of Theorem \ref{thm:ergod}, let us establish the setting and a brief reduction. Throughout, we assume the setup for $f = f_{L, a}$ in Section \ref{subsubsec:basicSetup}, including $(H3)_{c, k}$.
{\it Reductions. } We first argue that without loss of generality, in the hypotheses of Theorem \ref{thm:ergod} we may assume that $\epsilon, k$ are such that the upper bound in \eqref{boundEpsilon} is satisfied, so that Lemma \ref{lem:boundPeriodCoherence} applies. To justify this, consider the following alternative cases: (a) $L^{- (k - 1)} \leq \epsilon < L^{-1}$; (b) $L^{-1} \leq \epsilon < L^{-1/2}$; and (c) $\epsilon \geq L^{-1/2}$. For (a), let $k' \in \mathbb{N}$ be such that $L^{- k'} \leq \epsilon < L^{-(k' - 1)}$. Clearly $k' < k$, hence $(H3)_{c, k}$ implies $(H3)_{c, k'}$, while $\epsilon \geq L^{- k'} \geq L^{- (2 k' + 1)(1 - \beta) + \beta}$. So, it makes no difference to replace $k$ with $k'$ and proceed as before. In case (b), we can replace $k$ with $1$ and proceed as before. Finally, Theorem \ref{thm:ergod} in case (c) is a simple exercise left to the reader-- see also Theorem 1 in \cite{LS}, where ergodicity as in Theorem \ref{thm:ergod} is proved for $\epsilon \gtrsim L^{-1}$ for a very similar model of multimodal circle maps.
In addition, on shrinking the parameter $\beta$ we will assume the slightly stronger hypothesis \[ \epsilon \geq L^{- (2 k + 1)(1 - \beta) + \beta} \] on the noise parameter $\epsilon$. In relation to Theorem \ref{thm:ergod}, this incurs no loss of generality.
{\it Notation. } Given an initial $X_0 \in S^1$, we write $X_n = f^n_{\underline \omega}(X_0)$ for the Markov chain evaluated at the sample ${\underline \omega} \in \Omega$ (notation as in Section 1.1). We write $\mathbb{P}_{X_0}$ for the law of $X_n$ conditioned on the value of $X_0 \in S^1$. Moreover, for $n, m \geq 0$, random variables $Z_1, Z_2, \cdots, Z_m : \Omega \to \mathbb{R}$, and $X_0 \in S^1$, we write \[
{P^n(X_0, { \cdot } | \{ Z_j, 1 \leq j \leq m \}) = \mathbb{P}_{X_0} (X_n \in \cdot | \sigma(Z_1, \cdots, Z_m)) }\] for the law of $X_n$ conditioned on $\sigma(Z_1, Z_2, \cdots, Z_m)$.
With the setup and reduction established, we now turn to the proof of Theorem \ref{thm:ergod}. We break this up into two parts, Propositions \ref{prop:bkToSpread} and \ref{prop:allToBk} below. \begin{prop}\label{prop:bkToSpread}
There exist $N \in \mathbb{N}, c > 0$ with the property that for any sample ${\underline \omega}$ and any $X_0 \in \mathcal{B}_{\omega_0}^k$, we have that $P^N(X_0, { \cdot } | \{\omega_i, 0 \leq i \leq N, i \neq 1\})\geq c \operatorname{Leb}(\cdot)$. \end{prop} What this means is that random trajectories initiated in $\mathcal{B}^k$ reach all of $S^1$ with some positive probability. Note that in Proposition \ref{prop:bkToSpread}, we randomize only in $\omega_1$. One reason is that since $X_0 \in \mathcal{B}^k_{\omega_0}$, we have that $X_1, X_2, \cdots, X_k$ experience a bound period of length $k$, and so $\omega_1$ is the only perturbation which experiences the full $k$ steps of expansion guaranteed by Lemma \ref{lem:boundPeriodCoherence}. Meanwhile, it is technically more convenient to work with one perturbation $\omega_i$ at a time.
By Proposition \ref{prop:bkToSpread}, it suffices to check that almost every trajectory enters $\mathcal{B}^k$ after a finite time. Define the stopping time \[ T := \min \{ i \geq 0 : X_i \in \mathcal{B}^k_{\omega_i} \} \, . \]
\begin{prop}\label{prop:allToBk} Assume the hypotheses of Theorem \ref{thm:ergod}. Then, there exists $\hat N \in \mathbb{N}$ such that for any $X_0 \in S^1$, we have $\mathbb{P}_{X_0}(T \leq \hat N) > 0$. \end{prop}
\begin{proof}[Proof of Theorem \ref{thm:ergod} assuming Propositions \ref{prop:bkToSpread}, \ref{prop:allToBk}] Observe that ergodic measures $\mu$ (1) exist by a standard tightness argument, and (2) automatically inherit absolute continuity w.r.t. Lebesgue on $S^1$ from the same property for our random perturbations $\omega_i, i \geq 0$. So, to conclude uniqueness it suffices to check that for all $X_0 \in S^1$, $P^M(X_0, \cdot)$ is supported on all of $S^1$ (i.e., assigns positive mass to all open intervals) for some $M = M(X_0) \in \mathbb{N}$. For more details, see, e.g., the characterization of ergodicity for stationary measures of random dynamical systems in Lemma 2.4 on pg. 19 of \cite{kifer2012ergodic}.
To complete the proof, fix $X_0 \in S^1$ and let $n \leq \hat N$ be such that $\mathbb{P}_{X_0}(T = n) > 0$. Then, for any interval $J \subset S^1$ with nonempty interior, \begin{align*} P^{n + N }(X_0, J)
& = \mathbb{E}_{X_0} \bigg( P^N\big(X_{n}, J \big| \{\omega_i\}_{0 \leq i \leq n+N, i \neq n+1} \big) \bigg) \\
& \geq \mathbb{E}_{X_0} \bigg( \chi_{T = n} \cdot P^N\big(X_{n}, J \big| \{\omega_i\}_{0 \leq i \leq n+N, i \neq n+1} \big) \bigg)\\
& \geq \mathbb{E}_{X_0} \bigg( \chi_{T = n} \cdot c \operatorname{Leb}(J) \bigg) = c \cdot \mathbb{P}_{X_0}(T = n ) \cdot \operatorname{Leb}(J) > 0 \, . \end{align*} Here, $\mathbb{E}_{X_0}$ refers to the expectation conditioned on the value of $X_0$.
This completes the proof. It remains to check Propositions \ref{prop:bkToSpread}, \ref{prop:allToBk}. \end{proof}
{\it In the remainder of Section 3, we prove Propositions \ref{prop:allToBk}, \ref{prop:bkToSpread}, in that order. With the above setup assumed, we hereafter fix $\epsilon \in [L^{- (2 k + 1)(1 - \beta) + \beta}, L^{- \max\{ k -1, \frac12\} - \beta}]$.}
\subsubsection{Constructions and a preliminary Lemma}
Define $\mathcal{R}$ to be the partition of $S^1$ into the connected components of the sets $\mathcal{G}, \mathcal I = \mathcal{B}^0, \mathcal{B}^1, \cdots, \mathcal{B}^k$. For $\omega \in [- \epsilon, \epsilon]$, let $\mathcal{R}_\omega$ denote the partition into atoms of the form $\alpha - \omega, \alpha \in \mathcal{R}$. Extending by periodicity, we regard $\mathcal{R}, \mathcal{R}_\omega$ as partitions on $\mathbb{R}$ as well. Given an interval $J \subset \mathbb{R}$, let us write $\mathcal{R}|_J = \{ \alpha \cap J : \alpha \in \mathcal{R}\}$. For $\omega \in [- \epsilon, \epsilon]$,
the partition $\mathcal{R}_\omega|_J$ of $J$ is defined analogously.
\begin{lem}\label{lem:atomCut}
Assume $\bar J \subset \mathbb{R}$ is an interval with $|\bar J| < L^{- \beta}$. Let $J$ be the longest atom of $\mathcal{R}|_{\bar J}$. Then, $|J| \geq \kappa |\bar J|$, where $\kappa = \min \{ \frac15, K_1^{-1}\}$. \end{lem} \begin{proof}[Proof of Lemma \ref{lem:atomCut}] Some notation for this proof: given $\hat x \in \{ \tilde f' = 0 \} \subset \mathbb{R}$ and $0 \leq l \leq k$, define $\mathcal{B}^{l, +}(\hat x)$ to be the connected component of $\mathcal{B}^l$ to the immediate right of $\hat x$, and $\mathcal{B}^{l, -}(\hat x)$ to be the connected component to the immediate left. Let us write $\mathcal{B}(\hat x)$ for the component of $\mathcal{B}$ containing $\hat x$.
If $\mathcal{R}|_{\bar J}$ has only one or two atoms of positive length, then $|J| \geq \frac15 |\bar J|$ holds trivially. Hereafter we assume $\mathcal{R}|_{\bar J}$ consists of three or more atoms of positive length. In particular,
$\bar J$ contains a connected component of $\mathcal{B}^l$ for some $0 \leq l \leq k$, since $|\bar J| < L^{- \beta}$ was assumed. Let $\hat x \in \{ \tilde f' =0 \}$ be the nearest critical point to $\bar J$.
Define \[ l_1 = \min \{ 0 \leq l \leq k : \bar J \text{ contains a component of } \mathcal{B}^l\} \, . \] There are two cases: (i) $J \subset \mathcal{B}^{l_1}$, in which case $J = \mathcal{B}^{l_1, \pm}(\hat x)$ for some choice of $\pm$, or (ii) $J \cap \mathcal{B}^{l_1} = \emptyset$.
For case (i), assume first that $l_1 = 0$. WLOG we assume $J = \mathcal{B}^{0, +}(\hat x)$. Note that $\bar J \cap \mathcal{G}$ consists of at most two components, hence $|\bar J \cap \mathcal{G}| \leq 2 |J|$, while $\bar J \cap \mathcal{B}$ has one component, hence
$|\bar J \cap \mathcal{B}| \leq 2 K_1^{-1} L^{- \frac12 - \beta} \leq 2 L^{- \frac12} |J|$. Finally, $\bar J \cap \mathcal I$ has at most two components,
and so $|\bar J \cap \mathcal I| \leq 2 |J|$. In total, \[
|\bar J| \leq |\bar J \cap \mathcal{G}| + |\bar J \cap \mathcal I| + |\bar J \cap \mathcal{B}| \leq (4 + 2L^{- \frac12}) |J| \leq 5 |J| \, . \]
Assuming now that $l_1 > 0$, WLOG we have $J = \mathcal{B}^{l_1, +}(\hat x)$. Moreover,
$\bar J \subset \cup_{l = l_1 - 1}^k \mathcal{B}^l$; otherwise, $\bar J$ would contain an intact component of $\mathcal{B}^{l_1-1}$, a contradiction. As before,
$\bar J \cap \mathcal{B}^{l_1 - 1}$ has at most two components, each of length $\leq |J|$, while $\bar J \cap \cup_{l = l_1 + 1}^k \mathcal{B}^{l}$ has at most one component of length \[ \leq 2 K_1^{-1} L^{- \frac{l_1 + 1}{2} - \beta}
\leq 2 L^{- \frac12} |J| \ll |J| \, ,
\]
unless $l_1 = k$, in which case this contribution is absent. Counting also the at most two components of $\bar J \cap \mathcal{B}^{l_1}$, each of length $\leq |J|$, we conclude as before that $|\bar J| \leq 5 |J|$.
For case (ii), if $l_1 = 0$, then $J \subset \mathcal{G}$. Note
Note that $\bar J$ contains some atom $\mathcal{B}^{0, \pm}(\hat x)$, hence $|J| \geq K_1^{-1} L^{- \beta} > K_1^{-1} |\bar J|$, having assumed in Lemma \ref{lem:atomCut} that $|\bar J| < L^{- \beta}$.
If $l_1 > 0$, then likewise it is not hard to show that $J \subset \mathcal{B}^{l_1 - 1}$. As before, $\bar J$ contains some $\mathcal{B}^{l_1, \pm}(\hat x)$
and so $|J| \geq K_1^{-1} L^{- \frac{l_1}{2} - \beta}$ holds. One now repeats the same arguments as for case (i), $l_1 > 0$. \end{proof}
\subsubsection{Proof of Proposition \ref{prop:allToBk}}
To prove Proposition \ref{prop:allToBk}, we introduce the \emph{random interval process} $(J_i)_{i \geq 0}$ of subintervals of $\mathbb{R}$, defined as follows. Fix $X_0 \in S^1$. To start, $J_0 := X_0 + [- \epsilon, \epsilon]$, regarded as an interval in $\mathbb{R}$. We set $\bar J_1 := \tilde f(J_0)$ and define
$J_1$ to be the longest atom of $\mathcal{R}_{\omega_1}|_{\bar J_1}$; if more than one atom has maximal length, then select $J_1$ to be the rightmost one. Inductively, given $J_0, \cdots, J_i$, define $\bar J_{i + 1} := \tilde f_{\omega_i}(J_i)$
and $J_{i + 1}$ to be the longest atom of $\mathcal{R}_{\omega_{i + 1}}|_{\bar J_{i +1 }}$, with the same rule if there is a tie for longest atom.
We terminate the process $(J_i)_i$ at the stopping time $\sigma := \min \{ \sigma_1, \sigma_2\}$, where \begin{gather*}
\sigma_1 := \min \{ i : |\bar J_i| > L^{- \beta}\} \, , \quad \sigma_2 := \min \{ i : J_i \subset \mathcal{B}^k_{\omega_i}\} \, . \end{gather*}
\begin{lem}\label{lem:boundSigma} There exists $\hat N = \hat N(k, \beta) \in \mathbb{N}$ for which $\mathbb{P}_{X_0} (\sigma \leq \hat N-1) > 0$ holds. \end{lem} \begin{proof}[Proof of Proposition \ref{prop:allToBk} assuming Lemma \ref{lem:boundSigma}] Observe that for each $i \geq 1$, \[ \bar J_i \subset \tilde f^{i-1}_{\theta {\underline \omega}} \circ \tilde f \big( \mathcal{N}_\epsilon(X_0) \big) \, , \] hence the projection $ \bar J_i \,\, (\text{mod } 1) $ of $\bar J_i$ to $S^1$ is a subset of the support of
the measure $\mathbb{P}_{X_0}(X_i \in \cdot | \{ \omega_j \}_{j \neq 0})$.
On the event $\sigma = \sigma_1 = m$ for some $m \geq 0$, it is not hard to see that
$|\tilde f_{\omega_m}(\bar J_m)| \gg 1$ (see Section 2), hence on the event $\{ \sigma = \sigma_1\}$ we have $T \leq \sigma_1 + 1$. Meanwhile, $T \leq \sigma_2$ holds unconditionally (note $\tilde X_m \in \mathcal{B}^k_{\omega_m}$ iff $X_m \in \mathcal{B}^k_{\omega_m}$), hence \[ T \leq \sigma + 1 \] holds almost surely.
To complete the proof of Proposition \ref{prop:allToBk}, it remains to prove Lemma \ref{lem:boundSigma}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:boundSigma}] We will show that conditioned on $\{ \sigma_2 > \hat N\}$, we have $\sigma_1 \leq \hat N$.
Define $t_1 = \min \{ t : J_t \subset \mathcal{B}_{\omega_t}\}$ and let $p_1\in \{ 1 , \cdots, k-1\}$ be such that $J_{t_1} \subset \mathcal{B}^{p_1}_{\omega_{t_1}}$. Inductively, for $j > 1$ set \[ t_j = \min\{ t > t_{j-1} : J_t \subset \mathcal{B}_{\omega_t}\} \] and let $p_j$ be such that $J_{t_j} \subset \mathcal{B}^{p_j}_{\omega_{t_j}}$. We let $q \geq 0$ be such that $t_q \leq \hat N < t_{q + 1}$ (note $q = 0$ is allowed).
At time $t_j$, the interval process $J_{t_j}$ is said to initiate a \emph{bound period} of length $p_j$; that is, $J_{t_j + 1}, \cdots, J_{t_j + p_j}$ shadow some postcritical orbit in the sense of Lemma \ref{lem:boundPeriodCoherence}. In particular, $t_j + p_j + 1 \leq t_{j + 1}$ for all $j$. For $t_j + p_j + 1 \leq t \leq t_{j + 1}$, we say that the interval $J_t$ is \emph{free}.
When $t$ is free, expansion on $\mathcal{G} \cup \mathcal I$ (see Section 2) and Lemma \ref{lem:atomCut} imply \begin{align}\label{eq:growFreeIntervals}
|J_{t + 1}| \geq \kappa |\bar J_{t + 1}| \geq \kappa L^{\frac12 - \beta} |J_t| \, , \end{align} while along bound periods (having conditioned on $\{ \sigma_2 > \hat N\}$, it follows that $p_j < k$ for all $j \leq q$) we have \begin{align}\label{eq:growBoundPeriodInterval}
|J_{t_j + p_j + 1}| \geq \kappa |\bar J_{t_j + p_j + 1}| \geq \kappa L^{(\frac12 - \beta)(p_j + 1)} |J_{t_j}| \end{align} since, by Lemma \ref{lem:boundPeriodCoherence}, we have $\bar J_{t_j + p_j + 1} = \tilde f^{p_j + 1}_{\theta^{t_j} {\underline \omega}} J_{t_j}$ (i.e., no cutting can occur during a bound period). We obtain that when $J_t$ is free, we have \[
|\bar J_{t}| \geq \bigg( \kappa L^{\frac12 - \beta} \bigg)^t \cdot 2 \epsilon \geq L^{t (\frac12 - 2 \beta)} \cdot 2 \epsilon \] when $L$ is sufficiently large (so that $\kappa \geq L^{- \beta}$). Since, for any $t$, the interval $J_{t'}$ is free for at least one $t' \in \{ t, \cdots, t + k\}$, and $\epsilon \geq L^{- (2 k + 1)(1 - \beta) + \beta}$ was assumed, it follows that $\sigma_1 \leq \hat N$, where $\hat N = \hat N(k, \beta)$ depends on $k, \beta$ alone. \end{proof}
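For instance, unwinding the exponents in the proof above: at a free time $t$,
\[
|\bar J_t| \geq 2 L^{t (\frac12 - 2 \beta)} \epsilon \geq 2 L^{t(\frac12 - 2\beta) - (2k + 1)(1 - \beta) + \beta} > L^{- \beta} \quad \text{ as soon as } \quad t \geq \frac{(2k + 1)(1 - \beta) - 2 \beta}{\frac12 - 2 \beta} \, ,
\]
and since a free time occurs in every window of $k + 1$ consecutive times, one admissible (far from optimal) choice is $\hat N = \big\lceil \frac{(2k + 1)(1 - \beta)}{\frac12 - 2\beta} \big\rceil + k + 1$.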
\subsubsection{Proof of Proposition \ref{prop:bkToSpread}}\label{provePropa}
Assume $X_0 \in \mathcal{B}^k_{\omega_0}$. We form what is essentially the same interval process as before, starting now with the interval \[ J_1 := X_1 + [- \epsilon, \epsilon] \, , \]
again regarded as a subset of $\mathbb{R}$, and taking $\bar J_2 := \tilde f(J_1)$, and $J_2 \in \mathcal{R}_{\omega_2}|_{\bar J_2}$ the longest atom. The intervals $J_3, J_4, \cdots$ are defined the same as before.
As in the proof of Lemma \ref{lem:boundSigma}, no cutting occurs during the initial bound period of length $k$, hence $\bar J_{k + 1} = \tilde f_{\theta^2 {\underline \omega}}^{k-1} \circ \tilde f (\mathcal{N}_\epsilon(X_1))$. By Lemma \ref{lem:boundPeriodCoherence} and Lemma \ref{lem:atomCut}, this implies \[
|J_{k + 1}| \geq \kappa |\bar J_{k + 1}| \geq L^{- (k + 1)(1 - \beta) + \beta/2} \, , \] perhaps taking $L$ sufficiently large (independently of $k$).
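To spell this out: the initial bound period of length $k$ ensures that $\mathcal{N}_\epsilon(X_1)$ and each of its subsequent images entering the composition remain in $\mathcal{G}$, where $|\tilde f'| \geq L^{1 - \beta}$; hence, using the standing lower bound $\epsilon \geq L^{- (2k + 1)(1 - \beta) + \beta}$,
\[
|\bar J_{k + 1}| \geq L^{k (1 - \beta)} \cdot 2 \epsilon \geq 2 L^{k(1 - \beta) - (2k + 1)(1 - \beta) + \beta} = 2 L^{- (k + 1)(1 - \beta) + \beta} \, ,
\]
and $2 \kappa \geq L^{- \beta / 2}$ once $L$ is large.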
With $t_1 = 0, p_1 = k$ and $t_j, p_j, j \geq 2$ defined as in the proof of Lemma \ref{lem:boundSigma}, note that if $p_j < k$ then \eqref{eq:growBoundPeriodInterval} holds, while if $t$ is free we have that \eqref{eq:growFreeIntervals} holds. It remains to check that some interval growth occurs when $p_j = k$; we do so below.
\begin{lem}
Assume $L$ is sufficiently large, depending on $\beta$. Let $J \subset \mathcal{B}^k_{\omega_0}$ be an interval for which $|J| \geq L^{- (k + 1)(1 - \beta) + \gamma}$ for some constant $\gamma > \beta / 2$. Then,
$|\tilde f^{k + 1}_{\underline \omega}(J)| \geq L^{- (k+1)(1 - \beta) + \frac32 \gamma}$. \end{lem} \begin{proof}
It suffices to estimate the length of $\tilde f_{\omega_0}(J)$. For this, let us subdivide $J = J^+ \cup J^-$,
where $J^+$ is to the right of the critical point and $J^-$ to the left. WLOG let $J^+$ be the longer of the two intervals,
so $|J^+| \geq \frac12 |J|$ holds.
Writing $J^+ = [\hat x - \omega_0, \hat x - \omega_0 + b^+], b^+ > 0$ (noting $b^+ \geq \frac12 |J|$), we have
\[
(*) = |\tilde f_{\omega_0}(J^+)| = \int_{\hat x}^{\hat x + b^+} | \tilde f'(x)| dx \geq L K_1 \int_{0}^{b^+} x \, dx
= \frac12 L K_1 (b^+)^2 \geq \frac18 L K_1 |J|^2
\]
Plugging in the lower bound for $|J|$ gives $(*) \geq \frac18 K_1 L^{1 - 2 (k + 1)(1 - \beta) + 2 \gamma}\geq L^{1 - 2 (k + 1)(1 - \beta) + \frac32 \gamma}$, the last step using $\gamma > \beta/2$ and taking $L$ large depending on $\beta$. From here, using Lemma \ref{lem:boundPeriodCoherence} and the expansion $\geq L^{k(1 - \beta)}$ over the ensuing bound period of length $k$, we estimate
\[
| \tilde f^{k + 1}_{\underline \omega}(J)| \geq | \tilde f^{k + 1}_{\underline \omega} (J^+)| \geq L^{k(1 - \beta)} \cdot L^{1 - 2(k + 1)(1 - \beta) + \frac32 \gamma} = L^{- (k + 1) (1 - \beta) + \beta + \frac32 \gamma} \geq L^{- (k + 1) (1 - \beta) + \frac32 \gamma} \, . \qedhere
\] \end{proof}
Proposition \ref{prop:bkToSpread} now follows from a similar argument to that for Lemma \ref{lem:boundSigma}, where $N = N(k, \beta) \in \mathbb{N}$ and the constant $c > 0$ depends on $N$ as well as $L$. Details are left to the reader.
\section{Itineraries and distortion}
For the remainder of the paper we turn our attention to the proof of Theorem \ref{thm:lyapEst}. In essence, this proof will be an elaboration on the idea, used heavily in Section 3.2,
that the predominant expansion of $f = f_{L, a}$ has the effect of amplifying the noise $\epsilon$.
On the other hand, in Section 3.2 and the proof of ergodicity as in Theorem \ref{thm:ergod},
we were able to avoid exerting any precise control on the densities of the conditional laws
$P^n(X_0, \cdot | \{ \omega_i, i \neq 0\})$. For our purposes in Section 6, however,
we will need some control on these densities, which amounts to controlling distortion of
the random compositions $f^n_{\underline \omega}$.
Our objective in Section 4, then, is to establish some control on the distortion of $f^n_{\underline \omega}$.
As is typical of systems exhibiting nonuniform expansion, distortion of $f^n_{\underline \omega}$ for some $n \geq 1$
can only be controlled along sufficiently small intervals $J \subset S^1$ (see, e.g., \cite{WY}).
Establishing just how small these intervals need to be is a crucial component of our argument.
In Section 4.1, we formulate \emph{itineraries} for the random dynamics of $f^n_{\underline \omega}$, a form
of symbolic dynamics for the trajectories of $f^n_{\underline \omega}$ with the property (checked in Section 4.2)
that the distortion of $f^n_{\underline \omega}$ can be controlled along subintervals with the same itinerary (symbolic sequence)
out to time $n-1$.
The preceding paragraphs apply equally to deterministic and random compositions of interval maps; indeed, the assignment of itineraries to control distortion is an old idea (see the references in \cite{WY} for more information). Something to keep in mind, however, is that since the condition $(H3)_{c, k}$ only guarantees bound periods up to length $k$, we lose control of the dynamics of $f^n_{\underline \omega}$ upon the first visit to the `worst possible' neighborhood $\mathcal{B}^k$ of $\{ f' = 0 \}$. Thus the itinerary subdivision procedure and the resulting distortion estimates we obtain
below are only valid up until this first visit to $\mathcal{B}^k$. This issue will be addressed in Section 5.
\subsection{Itineraries}
\emph{Throughout, in addition to the preparations in Section \ref{subsubsec:basicSetup}, we assume the parameter $\epsilon$ satisfies the upper bound \eqref{boundEpsilon}, so that Lemma \ref{lem:boundPeriodCoherence} holds. No lower bound on $\epsilon$ is assumed. }
\subsubsection*{(A) Partition construction.}
\noindent To start, we define the partition $\mathcal P$ of $S^1$ as follows. Recall the notation $\mathcal{B}^0 = \mathcal I$. \begin{itemize}
\item $\mathcal P|_{\mathcal{G}}$ is the partition of $\mathcal{G}$ into connected components.
\item To define $\mathcal P|_{\mathcal{B}^l}, 0 \leq l < k$, start by cutting $\mathcal{B}^l$ into connected components. For each such component $J$, $\mathcal P|_{J}$ is defined as any
partition of $J$ into intervals of length
\[
\in [ (l + 1)^{-2} L^{- \frac{l + 3}{2} - \beta}, 2 (l + 1)^{-2} L^{- \frac{l + 3}{2} - \beta} ] \, .
\] \item $\mathcal P|_{\mathcal{B}^k}$ is the partition of $\mathcal{B}^k$ into connected components. \end{itemize} \noindent We write $\mathcal P_\omega$ for the partition of $S^1$ with atoms of the form $C - \omega, C \in \mathcal P$. Abusing notation somewhat, we regard $\mathcal P, \mathcal P_\omega$ as partitions of $\mathbb{R}$, extended by periodicity.
\begin{defn}
For a bounded, connected interval $I \subset S^1$ (or $\subset \mathbb{R}$) which is not a singleton, we define the partition $\mathcal P_\omega(I)$ of $I$ as follows. To start, form $ \mathcal P_\omega|_{I} = \{ J \cap I : J \in \mathcal P_\omega, J \cap I \neq \emptyset\}$, and write $J_1, J_2, \cdots, J_{N}$ for the non-singleton atoms of this partition in increasing order from left to right (note that $N = 1$ is possible). \begin{itemize}
\item If $N = 1,2$ or $3$, then set $\mathcal P_\omega(I) := \{ I \}$.
\item If $N \geq 4$, then set $\mathcal P_\omega(I) = \{ J_1 \cup J_2, J_3, J_4, \cdots, J_{N - 2}, J_{N-1} \cup J_N\}$. \end{itemize} \end{defn}
We define the \emph{bound period} $p(I)$ of an interval $I$ as follows. First, $p : S^1 \to \{ 0,\cdots, k\}$ (or $\mathbb{R} \to \{ 0, \cdots, k\}$) is defined by setting $p \equiv l$ on $\mathcal{B}^l$ for each $1 \leq l \leq k$, and $p \equiv 0$ on $\mathcal I \cup \mathcal{G}$. Next, for an interval $I \subset S^1$ or $\mathbb{R}$, we define \[ p(I) = \max_{x \in I} p(x) \, . \] For $\omega \in [- \epsilon, \epsilon]$, we define $p_\omega(\cdot) = p(\cdot + \omega)$.
\begin{rmk}\label{rmk:smallAtoms} For an atom $C \in \mathcal P$ or $\mathcal P_\omega$, write $C^+$ for the union of $C$ with its two adjacent atoms. Observe that for any interval $I$, we have that each atom $J \in \mathcal P_\omega(I)$ is contained in $C^+$ for some $C \in \mathcal P_\omega$. By this line of reasoning, for any $J \in \mathcal P_\omega(I)$ with $p = p_\omega(J) \in \{ 1,\cdots, k-1\}$, we have the estimate \[
|J| \leq 6 p^{-2} L^{- \frac{p + 2}{2} - \beta} \, . \]
Similarly, if $J \in \mathcal P_\omega(I)$, $J \cap \mathcal{B}^k_\omega \neq \emptyset$ (i.e., $p_\omega(J) = k$) then $|J| \leq 3 \max\{ 1, K_1^{-1}\} L^{- \frac{k}{2} - \beta}$.
For a lower bound: if in the above setting we have that there are at least two distinct atoms in $\mathcal P_\omega(I)$, then any atom $J \in \mathcal P_\omega(I)$ with $p = p_\omega(J) > 0$ must contain an atom $C \in \mathcal P_\omega|_{\mathcal{B}^p}$. Thus \[
|J| \geq (p + 1)^{-2} L^{- \frac{p + 3}{2} - \beta} \, . \] \end{rmk}
\begin{rmk}\label{rmk:extendBoundPeriods} Fix a sample ${\underline \omega} \in \Omega$ and let $J$ be a connected interval contained in $C^+$ for some $C \in \mathcal P_{\omega_0}$. If $p := p_{\omega_0}(J) > 0$, then \[ \tilde f^i_{\underline \omega}(J) \subset \mathcal{G}
\quad \text{ for all } 1 \leq i \leq p \, , \]
even though $J$ is not necessarily a subset of $\mathcal{B}^p_{\omega_0}$. This is because the $\mathcal P|_{\mathcal{B}^{p-1}}$-atoms are small enough so that $J \subset B(- \frac{p + \beta}{2})$ must hold, and Lemma \ref{lem:boundPeriodCoherence} implies that $f^i_{\underline \omega}(B(- \frac{p + \beta}{2})) \subset \mathcal{G}$ for all $1 \leq i \leq p$ and all samples ${\underline \omega}$. Note, in particular, that $\tilde f^i_{\underline \omega}(J)$ meets at most one component of $\mathcal{G}$ for each $1 \leq i \leq p$, hence $\mathcal P_{\omega_{i }} ( \tilde f^i_{\underline \omega}(J)) = \{ \tilde f^i_{\underline \omega}(J) \}$.
\end{rmk}
\subsubsection*{(B) Time-$n$ itineraries for an interval $I \subset S^1$.}
Let $I \subset S^1$ be an interval (which we regard as a subset of $\mathbb{R}$) and fix a sample ${\underline \omega} \in \Omega$. For each time $i \geq 0$, we define a partition $\mathcal Q_i = \mathcal Q_i(I; (\omega_0, \cdots, \omega_i))$ of $I$, the atoms of which correspond to points in $I$ with the same itinerary for the map $\tilde f^{i + 1}_{\underline \omega}$.
The definition is inductive. To start, we define $\mathcal Q_0 = \mathcal P_{\omega_0}( I)$. Assuming $\mathcal Q_0, \mathcal Q_1, \cdots, \mathcal Q_i$ have been constructed, for each $C_i \in \mathcal Q_i$ we define $\mathcal Q_{i +1} \geq \mathcal Q_i$ as follows\footnote{{ Here, for two partitions $\zeta, \xi$, we write $\zeta \leq \xi$ if each atom of $\zeta$ is a union of $\xi$-atoms.}}: \[
\mathcal Q_{i + 1}|_{C_i} = (\tilde f^{i + 1}_{\underline \omega})^{-1} \big(\mathcal P_{\omega_{i + 1}} (\tilde f^{i+1}_{\underline \omega}(C_i)) \big) \, . \]
In what follows, we will only attempt to keep track of itineraries until a first ``near visit'' to the set $\mathcal{B}^k$. Precisely, we define
a `terminating' stopping time $\tau = \tau[I] : I \times \Omega \to \mathbb{Z}_{\geq 0} \cup \{ \infty\}$ as follows: \[ \tau (x, {\underline \omega}) = \min\{ i \geq 0 : f^i_{\underline \omega} (C_i(x)) \cap \mathcal{B}^k_{\omega_{i }} \neq \emptyset \} \, . \] Here, $C_i(x)$ denotes the $\mathcal Q_i$-atom containing $x$. Notice that $\tau$ is adapted to $(\mathcal Q_i)_i$, i.e., $\{ \tau > i\}$ is a union of $\mathcal Q_i$-atoms for each $i \geq 0$. In particular, $\{ \tau > i\}$ depends only on $\omega_0, \cdots, \omega_i$.
\subsubsection*{(C) Bound and free periods of an itinerary}
Fix $n \geq 1$ and $C_n \in \mathcal Q_n$ such that $\tau|_{C_n} \geq n$. For each $i < n$, let $C_i \in \mathcal Q_i$ denote the atom containing $C_n$. For $1 \leq i \leq n$, we write $I_i = \tilde f^i_{\underline \omega}(C_i)$.
Define \begin{gather}\label{defineTJ}\begin{gathered} t_1 = \min \big( \{ n \} \cup \{ i \geq 0 : I_i \cap \mathcal{B}_{\omega_{i }} \neq \emptyset \} \big) \, , \quad \text{and} \\ t_j = \min \big( \{ n \} \cup \{ i > t_{j-1} : I_i \cap \mathcal{B}_{\omega_{i }} \neq \emptyset \} \big) \, \quad \text{ for } j \geq 2 \, , \end{gathered}\end{gather} and let $q \geq 0$ be the index for which $t_{q + 1} = n$. For $1 \leq j \leq q$, define \begin{align}\label{definePJ} p_j = p_{\omega_{t_j }}( I_{t_j}) \, . \end{align} At time $t_j, 1 \leq j \leq q$, the itinerary $C_n$ initiates a bound period of length $p_j$ (Remark \ref{rmk:extendBoundPeriods}); in particular,
$t_j + p_j < t_{j +1}$ for all $1 \leq j < q$. We say that $C_n$ is \emph{bound at time $t$} if $t \in [t_j + 1, t_j + p_j]$ for some $1 \leq j \leq q$ and
that $C_n$ is \emph{free at time $t$} if it is not bound at time $t$.
By Remark \ref{rmk:extendBoundPeriods} and the fact that $\tau|_{C_n} \geq n$, we have the following. \begin{lem}\label{lem:fullBoundPer}
Let $1 \leq i \leq n$ and assume $C_n \in \mathcal Q_n$ is such that $\tau|_{C_n} \geq n$. \begin{itemize} \item[(a)] If $C_n$ is free at time $i$, then \[
|(f^i_{\underline \omega})'|_{C_n}| \geq L^{i (\frac12 - \beta)} \, . \] \item[(b)] If $C_n$ is bound at time $i$, i.e., $i \in [t_j + 1, t_j + p_j]$ for some $1 \leq j \leq q$, then \[
|(f^i_{\underline \omega})'|_{C_n}| \geq L^{t_j (\frac12 - \beta) + (1 - \beta)(i - (t_j + 1)) - \frac{p_j - 1}{2} - \beta} \, . \] In this case, $C_{t_j} = C_{t_j + 1} = \cdots = C_{t_j + p_j} = C_i$ and $C_n$ is free at time $t_j + p_j + 1$. Note that $C_{t_j + p_j + 1} \subsetneq C_i$ is possible. \end{itemize} \end{lem}
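For instance, the exponent in (b) simply records the three regimes traversed by the orbit of $C_n$ up to time $i$: the $t_j$ free steps, the single passage through $\mathcal{B}^{p_j}_{\omega_{t_j}}$, and the $i - (t_j + 1)$ bound steps spent in $\mathcal{G}$:
\[
|(f^i_{\underline \omega})'|_{C_n}| \geq L^{t_j (\frac12 - \beta)} \cdot L^{- \frac{p_j - 1}{2} - \beta} \cdot L^{(1 - \beta)(i - (t_j + 1))} \, ,
\]
using part (a) for the first factor and Remark \ref{rmk:extendBoundPeriods} for the last.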
\subsection{Distortion estimates}
Let $I \subset S^1$ be a connected interval, ${\underline \omega} \in \Omega$ a sample. Assume that the partitions $(\mathcal Q_i)_{i \geq 0}, \mathcal Q_i = \mathcal Q_i(I; (\omega_0, \cdots, \omega_i))$ and the stopping time $\tau = \tau[I]$ have been constructed as in Section 4.1. Here we prove a time-$n$ distortion estimate for trajectories with the same time-$n$ itineraries, i.e., belonging to the same $\mathcal Q_n$-atom.
Our approach to distortion estimates is inspired from the treatment in \cite{WY}, which in turn is a version of estimates first appearing in \cite{BC1,BC2}.
\begin{prop}\label{prop:distortion} For all $L$ sufficiently large, the following holds.
Let $n \geq 1$. Assume $C_n \in \mathcal Q_n$ is free at time $n$ and $\tau|_{C_n} \geq n$. Let $x, x' \in C_n$. Then, \[
\frac{(\tilde f^n_{\underline \omega})'(x)}{(\tilde f^n_{\underline \omega})'(x')} \leq e^{K_2 L^{-\frac12} + 4 \| \psi'' \|_{C^0} L^{2 \beta} |\tilde f^n_{\underline \omega} x - \tilde f^n_{\underline \omega} x'|} \, . \] \end{prop}
We start with a preliminary Lemma.
\begin{lem}\label{lem:shortDist} Let $L$ be sufficiently large, and let $\eta \in [-\frac34,0]$. Let $y, y' \in S^1, i \geq 1$, and define $J$ to be the interval between $y, y'$. If $f^j_{\underline \omega}(J) \subset B(\eta)^c$ for all $0 \leq j < i$, then \[
\bigg| \log \frac{ (f^i_{\underline \omega})'(y)}{(f^i_{\underline \omega})'(y')} \bigg| \leq 2 \| \psi''\|_{C^0} L^{-1 - 2 \eta} |\tilde f^i_{\underline \omega} (y) - \tilde f^i_{\underline \omega} (y')| \, . \] \end{lem} \begin{proof} Define $y_j = \tilde f^j_{\underline \omega} y, y_j' = \tilde f^j_{\underline \omega} y'$. We estimate \[
(*) := \bigg| \log \frac{ (f^i_{\underline \omega})'(y)}{(f^i_{\underline \omega})'(y')} \bigg| \leq \sum_{j = 0}^{i-1} \bigg| \log \frac{ (f_{\omega_{j}})'(y_j)}{(f_{\omega_{j}})'(y_j')} \bigg|
\leq \sum_{j = 0}^{i-1} \frac{L \| \psi'' \|_{C^0}}{ L^{1 + \eta}} |y_j - y_j'| = \| \psi''\|_{C^0} L^{- \eta} \sum_{j = 0}^{i-1} |y_j - y_j'| \, . \]
We bound $|y_j - y_j'| \leq L^{-(1 + \eta)(i - j)} |y_i - y_i'|$, hence \[
(*) \leq \| \psi''\|_{C^0} L^{- \eta} \bigg( \sum_{j = 0}^{i-1} L^{- (1 + \eta)(i - j)} \bigg) |y_i - y_i'| \leq 2 \| \psi''\|_{C^0} L^{-1 - 2 \eta} |y_i - y_i'| \, . \] In view of \eqref{eq:lowerBoundDer}, observe that the above estimates can be written in the following alternative form: writing $J_j$ for the interval between $y_j, y_j'$, we have that \[
\sum_{j = 0}^{i-1} \frac{|J_j|}{d(J_j, C_\psi' - \omega_j)} \leq 2 \| \psi'' \|_{C^0} L^{-1 - 2 \eta} |J_i| \, . \]
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:distortion}] Below, we write $C$ to refer to a generic positive constant; the value of $C$ may change from line to line, but always depends only on the function $\psi$.
With $n \geq 1$ and $C_n \in \mathcal Q_n$ fixed and free at time $n$, we adopt the notation of Section 4.1 (C). Write $x_i = \tilde f^i_{\underline \omega} (x), x_i' = \tilde f^i_{\underline \omega}(x')$. By hypothesis, $x, x'$ belong to the same $\mathcal Q_i$ atom $C_i$ for all $0 \leq i \leq n$.
We decompose \[
\bigg| \log \frac{(\tilde f^n_{\underline \omega})'(x)}{(\tilde f^n_{\underline \omega})'(x')}\bigg| \leq \sum_{i = 0}^{n-1} \bigg| \log \frac{\tilde f_{\omega_{i}}'(x_i)}{\tilde f_{\omega_{i}}'(x_i')} \bigg| \, . \] Using \eqref{eq:lowerBoundDer}, each summand is bounded by \[
\bigg| \log \frac{\tilde f_{\omega_{i}}'(x_i)}{\tilde f_{\omega_{i }}'(x_i')} \bigg| \leq C \frac{|J_i|}{d(J_i, C_\psi' - \omega_i)} \, , \] where $J_i$ is the interval from $x_i + \omega_i$ to $x_i' + \omega_i$.
With $t_j, p_j$ as in \eqref{defineTJ},\eqref{definePJ}, we decompose the time interval from $0$ to $n$ into the succession of free and bound periods experienced by the atom $C_n \in \mathcal Q_n$ containing $x, x'$: \[ 0 \leq t_1 < t_1 + p_1 < t_2 < t_2 + p_2 < \cdots < t_q < t_q + p_q < t_{q + 1} := n \, . \] We assume going forward that $q \geq 1$, i.e., $C_n$ experiences at least one bound period. If $q = 0$, then Proposition \ref{prop:distortion} follows easily from Lemma \ref{lem:shortDist} applied to $\eta = - \frac12 - \beta$; details are left to the reader.
We now decompose $\sum_{i = 0}^{n-1}$ as follows: \[
\sum_{i = 0}^{n-1} \frac{|J_i|}{d(J_i, C_\psi' - \omega_i)} = \sum_{i = 0}^{t_1 -1} + \sum_{j =1 }^q \bigg( \sum_{i = t_j}^{t_j + p_j} + \sum_{i = t_j + p_j + 1}^{t_{j + 1} - 1} \bigg) = : D_0' + \sum_{j = 1}^q (D_j + D_j') \]
Above, a summand of the form $\sum_{m}^{m-1}, m \in \mathbb{N}$ is regarded as empty and the corresponding summation is defined to be $0$ (as may happen for some of the $D_j'$ terms). The $D_j, D_j'$ are estimated separately below.
Before proceeding, observe that $|J_{t_j + p_j + 1}| \geq L^{(p_j + 1)(\frac12 - \beta)} |J_{t_j}|$ and $|J_{t + 1}| \geq L^{\frac12 - \beta} |J_t|$ for all $t$ such that $C_t, C_{t + 1}$ are free. In particular, \begin{align}\label{eq:boundJ}
|J_{t_{j}}| \geq L^{(t_{j} - t_{i}) (\frac12 - \beta)} |J_{t_{i}}| \end{align} for all $1 \leq i \leq j \leq q + 1$ (recall $t_{q + 1} := n$).
\noindent {\it Bounding $\sum_{j = 1}^q D_j$: } Let $1 \leq j \leq q$. \begin{cla} \[
\sum_{i = t_j + 1}^{t_j + p_j} \frac{|J_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} \leq C L^{2 \beta} \frac{|J_{t_j}|}{d(J_{t_j}, C_\psi' - \omega_{t_j})} \] \end{cla}
\noindent Assuming the Claim, we now bound $\sum_{j = 1}^q D_j$. For $1 \leq p < k$, let $\mathcal K_p = \{ 1 \leq j \leq q : p_j = p\}$. Let $j^*_p = \max \mathcal K_p$, and observe that $|J_{t_j}| \leq |J_{t_{j^*_p}}| \cdot L^{- (t_{j^*_p} - t_j) (\frac12 - \beta)}$ for all $j \in \mathcal K_p$ by \eqref{eq:boundJ}. Thus \[
\sum_{j \in \mathcal K_p} D_j \leq C L^{2 \beta} \sum_{j \in \mathcal K_p} \frac{|J_{t_j}|}{\operatorname{dist}(J_{t_j}, C_\psi' - \omega_{t_j})}
\leq \frac{C L^{2 \beta}}{1 - L^{- (\frac12 - \beta)}} \cdot \frac{|J_{t_{j_p^*}}|}{ \frac12 K_1^{-1} L^{- \frac{p + 1}{2} - \beta}} \leq C L^{2 \beta} \frac{|J_{t_{j_p^*}}|}{ L^{- \frac{p + 1}{2} - \beta}} \, . \] Here we are using that $\operatorname{dist}(J_{t_j}, C_\psi' - \omega_{t_j}) \geq \frac12 K_1^{-1} L^{- \frac{p + 1}{2} - \beta}$ for all $j \in \mathcal K_p$.
By Remark \ref{rmk:smallAtoms}, we have $|J_{t_{j^*_p}}| \leq 6 p^{-2} L^{- \frac{p + 2}{2} - \beta}$. So, \[ \sum_{j \in \mathcal K_p} D_j \leq C L^{2 \beta}
\frac{ p^{-2} L^{- \frac{p + 2}{2} - \beta}}{ L^{- \frac{p + 1}{2} - \beta}} \leq C p^{-2} L^{- \frac12 + 2 \beta} \] hence \[ \sum_{j = 1}^q D_j = \sum_{p = 1}^{k-1} \sum_{j \in \mathcal K_p} D_j \leq \sum_{p = 1}^{k-1} C p^{-2} L^{- \frac12 + 2 \beta} \leq C L^{- \frac12 + 2 \beta} \, . \]
\begin{proof}[Proof of Claim] Assume $I_{t_j}$ meets the component of $\mathcal{B}_{\omega_{t_j }}$ near $\hat x_{t_j} \in C_\psi' - \omega_{t_j}$; write $\hat x_i = \tilde f^{i - t_j}_{\theta^{t_j} {\underline \omega}} (\hat x_{t_j})$ for $i > t_j$. Assume, without loss, that \begin{align}\label{endpointAnsatz}
|x_{t_j}' - \hat x_{t_j}| \leq |x_{t_j} - \hat x_{t_j}| \, ; \end{align} in the alternative case, exchange the roles of $x_i, x_i'$ in what follows.
For $t_j < i \leq t_j + p_j$, we have \[
\frac{|J_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} = \frac{|x_i - x_i'|}{|x_i - \hat x_i|} \cdot \frac{|x_i - \hat x_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} \] By Lemmas \ref{lem:boundPeriodCoherence} and \ref{lem:shortDist}, we have that the first right-hand factor is \begin{align}\label{eq:firstFactor}
\leq 2 \frac{|x_{t_j+ 1} - x_{t_j+ 1}'|}{|x_{t_j+ 1} - \hat x_{t_j+ 1}|} \end{align}
The numerator of \eqref{eq:firstFactor} coincides with $|f_{\omega_{t_j}}'(\zeta)| \cdot |x_{t_j} - x_{t_j}'|$ for some $\zeta \in J_{t_j}$. Moreover, $|f_{\omega_{t_j}}'(\zeta)| = |f_{\omega_{t_j}}''(\zeta')| \cdot |\zeta - \hat x_{t_j}| \leq L \| \psi'' \|_{C^0} |\zeta - \hat x_{t_j}|$ for some $\zeta'$ between $\zeta$ and $\hat x_{t_j}$. By \eqref{endpointAnsatz}
we have $|\zeta - \hat x_{t_j}| \leq |x_{t_j} - \hat x_{t_j}|$, and so conclude that the numerator of \eqref{eq:firstFactor} is $\leq L \| \psi'' \|_{C^0}\cdot |x_{t_j} - \hat x_{t_j}| \cdot |J_{t_j}|$.
For the denominator of \eqref{eq:firstFactor}, we have $
|x_{t_j+ 1} - \hat x_{t_j+ 1}| = \frac12 |f_{\omega_{t_j}}''(\zeta'')| |x_{t_j} - \hat x_{t_j}|^2 $
for some $\zeta''$ between $x_{t_j}$ and $\hat x_{t_j}$. For $L$ sufficiently large and all $\epsilon$ satisfying \eqref{boundEpsilon}, we have that $\min_{z \in \mathcal{N}_\epsilon (\mathcal{B})} |\psi''(z)| \geq \frac12 \min \{ | \psi''(\hat z) |: \hat z \in C_\psi'\} =: c_1$ from (H1), (H2). We have therefore that the denominator of \eqref{eq:firstFactor} is $\geq \frac12 c_1 L |x_{t_j} - \hat x_{t_j}|^2$.
Collecting, \[
\frac{|J_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} \leq C \frac{|J_{t_j}|}{\operatorname{dist}(J_{t_j}, C_\psi' - \omega_{t_j})} \cdot \frac{|x_i - \hat x_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} \, , \]
since $|x_{t_j} - \hat x_{t_j}|^{-1} \leq d(J_{t_j}, C_\psi' - \omega_{t_j})^{-1}$ by assumption, and so \[
\sum_{i = t_j + 1}^{t_j + p_j} \frac{|J_i|}{d(J_i, C_\psi' - \omega_i)} \leq C \frac{|J_{t_j}|}{\operatorname{dist}(J_{t_j}, C_\psi' - \omega_{t_j})} \bigg( \sum_{i = t_j + 1}^{t_j + p_j} \frac{|x_i - \hat x_i|}{\operatorname{dist}(J_i, C_\psi' - \omega_i)} \bigg)\, . \]
By Lemma \ref{lem:shortDist} applied to $\eta = - \beta$, the parenthetical sum is bounded by $C L^{-1 + 2 \beta} |x_{t_j + p_j + 1} - \hat x_{t_j + p_j + 1}|$. Since $|x_{t_j + p_j} - \hat x_{t_j + p_j}| \leq L^{- \beta/2} \ll 1$ (see the proof of Lemma \ref{lem:boundPeriodCoherence}), we bound $|x_{t_j + p_j + 1} - \hat x_{t_j + p_j + 1}| \leq C L$, hence the parenthetical sum is $\leq C L^{2 \beta}$. This completes the proof. \end{proof}
\noindent {\it Bounding $\sum_{j = 0}^q D_j'$: } For each $1 \leq j < q$, we have from Lemma \ref{lem:shortDist} applied to $\eta = - \frac12 - \beta$ that \[
D_j' \leq C L^{-1 - 2 (-\frac12 - \beta)} |J_{t_{j + 1}}| = C L^{2 \beta} |J_{t_{j + 1}}| \, . \]
Similarly, we estimate $D_0' \leq C L^{2 \beta} |J_{t_1}|$. Since $|J_{t_j}| \leq L^{- (n - t_j)(\frac12 - \beta)} |J_n|$ for all $1 \leq j \leq q$ by \eqref{eq:boundJ}, we conclude
$\sum_{j = 0}^{q} D_j' \leq C L^{2 \beta} |J_n|$. The proof of Proposition \ref{prop:distortion} is now complete. \end{proof}
\section{Selective averaging process}
We aim to get more refined control on the conditional laws $P^n(X_0, \cdot | \{ \omega_i, i \neq 0\}), n \geq 0$. Towards this end, the itinerary subdivision procedure in Section 4 applied to $I = X_0 + [- \epsilon, \epsilon]$
can be used to control the density of $P^n(X_0, \cdot | \{ \omega_i, i \neq 0\}, X_0 + \omega_0 \in C_n)$ for some $C_n \in \mathcal Q_n$, i.e., conditioning on $X_0 + \omega_0$ belonging to a single atom $C_n$ of the partition $\mathcal Q_n$.
This is only valid, however, up until the first `near visit' to $\mathcal{B}^k$, the innermost of the neighborhoods of the critical set $\{ f' = 0 \}$. Afterwards, the material in Section 4 is no longer valid and we lose control over distortion, hence over the conditioned law $P^n(X_0, \cdot | \{ \omega_i, i \neq 0\})$.
A rough idea of how to proceed is as follows: visits to $\mathcal{B}^k$ `spoil' the random parameter $\omega_0$, and so if $X_m$ comes too close to $\mathcal{B}^k$ for some $m \geq 0$, we will `freeze' $\omega_0$ (essentially, treat as deterministic) and `smear' (average) in the perturbation $\omega_{m + 1}$, i.e.,
for $n \geq m$, work with the conditional law $P^n(X_0, \cdot | \{ \omega_i, i \neq m + 1\})$.
Let us make all this more precise. Fix $X_0 \in S^1$ and define the Markov chain $(\tilde X_n)$ on $\mathbb{R}$ by $\tilde X_n = \tilde f^n_{\underline \omega}(X_0) = \tilde f_{\omega_{n-1}}(\tilde X_{n-1})$. We will obtain in this section an increasing filtration $(\mathcal{H}_n)_{n \geq 0}$, $\mathcal{H}_n \subset \mathcal{F}_{n} := \sigma(\omega_0, \omega_1, \cdots, \omega_n)$ (depending also on $X_0$), designed so that the conditional measures \[
\nu_n(\cdot) := \mathbb{P}(\tilde X_n \in \cdot | \mathcal{H}_n) \] have the following desirable properties: \begin{itemize} \item[(i)] the measures $\nu_n$ are absolutely continuous; \item[(ii)] $\rho_n := \frac{d \nu_n}{d \operatorname{Leb}}$ is more-or-less constant on the interval of support $I_n := \operatorname{supp} \nu_n$; and \item[(iii)] the intervals $I_n = \operatorname{supp} (\nu_n)$ are, for large $n$, rather long with high probability. \end{itemize}
In this section, we focus on the construction of $\mathcal{H}_n, I_n, \nu_n$ as above; property (ii) will fall out as a natural consequence of our construction and the distortion estimate in Proposition \ref{prop:distortion}.
The plan is as follows: first, in Section 5.1 we will describe an algorithm constructing the supporting intervals $I_n$ as above, in a way completely parallel to the itinerary construction given in Section 4.1. From this construction, it will be clear when `smearing' in a new $\omega_i$ is necessary: this decision is made according to a sequence $\tau_1 < \tau_2 < \cdots$ of stopping times roughly related to the first arrival to the neighborhood $\mathcal{B}^k$ (closely related to the stopping time $\tau$ as in Section 4.1). In Section 5.2 we will construct the filtration $(\mathcal{H}_n)$ and then describe the resulting conditional measures $\nu_n$ in Section 5.3.
\emph{ In addition to the preparations in Section \ref{subsubsec:basicSetup}, we assume the parameter $\epsilon$ satisfies \eqref{boundEpsilon}, so that Lemma \ref{lem:boundPeriodCoherence} holds. No lower bound on $\epsilon$ is assumed. }
\subsection{The supporting intervals $I_n$}
We define here an interval\footnote{For our purposes, an \emph{interval} is a bounded, connected subset of $\mathbb{R}$, with either open or closed endpoints. Since we care only about $\mathbb{P}$-typical trajectories, we need not specify what to do with endpoints.}-valued stochastic process $(I_n)_{n \geq 1}$ for which $I_n \subset \mathbb{R}$ is $\mathcal{F}_{n}$-measurable for all $n$.
\renewcommand{{\tilde X}}{{\tilde X}}
Embed $X_0 =: \tilde X_0 \in \mathbb{R}$ via the identification $S^1 \cong [0,1)$. Throughout, the dependence of the $I_n$ on the sample ${\underline \omega} = (\omega_i)_{i \geq 0} \in \Omega$ is implicit (keeping in mind that $I_n$ depends on $\omega_i, 0 \leq i \leq n$).
\noindent {\it Base cases:} We set $I_0 = {\tilde X}_0 + [- \epsilon, \epsilon]$. To determine $I_1$, there are two cases: \begin{itemize}
\item If $I_0 \cap \mathcal{B}^k_{\omega_0} \neq \emptyset$, then define $I_1 = \tilde X_1 + [- \epsilon, \epsilon]$.
\item Otherwise, form $\mathcal P_{\omega_1}( \tilde f(I_0) )$ and let $I_1$ be the atom containing $\tilde X_1$. \end{itemize} Note that since $\epsilon > 0$ is assumed to satisfy \eqref{boundEpsilon}, we have automatically that $\mathcal P(I_0)$ consists of a single atom.
\noindent {\it Inductive step: } Assume the intervals $I_0, I_1, \cdots, I_n$ have been constructed, with $n \geq 1$.
\begin{itemize}
\item[(a)] If $I_n \cap \mathcal{B}^k_{\omega_n} = \emptyset, I_{n-1} \cap \mathcal{B}^k_{\omega_{n-1}} = \emptyset$, then form $\mathcal P_{\omega_{n + 1}}( \tilde f_{\omega_n}(I_n))$ and define $I_{n + 1}$ to be the atom containing $\tilde X_{n + 1}$.
\item[(b)] If $I_n \cap \mathcal{B}^k_{\omega_n} \neq \emptyset$, then define $I_{n + 1} = \tilde X_{n + 1} + [- \epsilon, \epsilon]$.
Form $\mathcal P_{\omega_{n + 2}} \big( \tilde f(I_{n + 1})\big) $ and let $I_{n + 2}$ be the atom containing $\tilde X_{n + 2}$.
\end{itemize} From Lemma \ref{lem:boundPeriodCoherence} and Remark \ref{rmk:extendBoundPeriods}, it is simple to check that cases (a) -- (b) are exhaustive and mutually exclusive. Note in case (b) that $I_{n +1} \subset \mathcal{G}_{\omega_{n + 1}}$ holds (Lemma \ref{lem:boundPeriodCoherence} and \eqref{boundEpsilon}).
\begin{defn} We define a sequence of $(\mathcal{F}_n)$-adapted stopping times $0 =: \tau_0 < \tau_1 < \tau_2 < \cdots$ as follows: for $i > 0$, set \begin{align*} \tau_i = \min \{ m > \tau_{i - 1} : I_{m} \cap \mathcal{B}^k_{\omega_{m}} \neq \emptyset \} \, . \end{align*} \end{defn} Note that case (b) above occurs precisely when $n = \tau_i$ for some $i$.
As formulated below, between `near visits' to $\mathcal{B}^k$ (i.e., the times $\tau_1, \tau_2, \cdots$), the procedure defining the $(I_n)$ process is completely parallel to the itinerary construction in Section 4.1. The proof is straightforward and left to the reader.
\begin{lem}\label{lem:correspond} Fix $i \geq 0$ and $0 \leq m < n$. \begin{itemize} \item[(a)] On the event $S_{i, m, n} = \{ \tau_i = m , \tau_{i + 1} \geq n\}$, we have that the random interval $I_n$ is given as \[ I_n = \tilde f^{n-m-2}_{\theta^{m + 2} {\underline \omega}} \circ \tilde f(\hat C) \, , \] where $\hat C$ is the atom of $\mathcal Q_{n-m-1} (I_{m + 1}; (0, \omega_{m+2}, \cdots, \omega_n))$ containing $\tilde X_{m+1} + \omega_{m+1}$ (recall $I_{m + 1} = \tilde X_{m+1} + [- \epsilon, \epsilon]$). \item[(b)] On the event $\{ \tau_i = m\}$, we have $I_{m + 1} = \tilde X_{m + 1} + [- \epsilon, \epsilon]$ and \[
\tau_{i + 1} = m + 1 + \tau[I_{m + 1}](\tilde X_{m + 1} + \omega_{m + 1}, \hat {\underline \omega}) \, , \] where $\tau[I_{m + 1}]$ is the stopping time as defined in Section 4.1 with
$\hat {\underline \omega} = (0, \omega_{m + 2}, \omega_{m + 3}, \cdots)$. \end{itemize}
\end{lem}
\subsection{Filtration $(\mathcal{H}_n)$}
We now construct $\mathcal{H}_n = \sigma(\mathcal{A}_n)$, where the measurable partition $\mathcal{A}_n$ on $\Omega$ is defined below. Each $\mathcal A_n$ will consist of $\mathcal{F}_n$-measurable atoms, and so will be treated here as a partition on the first $n+1$ coordinates $(\omega_0, \cdots, \omega_n) \in [- \epsilon, \epsilon]^{n+1}$.
To start, we set $\mathcal{A}_0 = \{ [- \epsilon, \epsilon]\}$ to be the trivial partition, and hereafter assume $n \geq 1$.
Continuing: for each $i \geq 0$ and $0 \leq m < n$, the event $S_{i, m, n}$ (notation as in Lemma \ref{lem:correspond})
can be treated as a subset of $[- \epsilon , \epsilon]^{n+1}$ since each $\tau_i$ is a stopping time w.r.t. $\mathcal{F}_n = \sigma(\omega_0, \cdots, \omega_n)$ (i.e., we have $\{ \tau_i > n \} \in \mathcal{F}_n$ for all $i, n$). Define as well the events $S_{i, n} = \{ \tau_i = n -1 \}$, and observe that the collection \[ \mathfrak P_n = \{ S_{i, n} : i \geq 1\} \cup \{ S_{i, m, n} : i \geq 0, 0 \leq m < n \} \] is a partition of $[- \epsilon, \epsilon]^{n+1}$. We define $\mathcal{A}_n \geq \mathfrak P_n$ on each $\mathfrak P_n$-atom separately.
\begin{itemize}
\item For each set of the form $S_{i, m, n} \in \mathfrak P_n, i \geq 0, 0 \leq m < n$, we define $\mathcal{A}_n|_{S_{i, m, n}}$ to consist of atoms of the form
\[
\{ \omega_0 \} \times \{ \omega_1\} \times \cdots \times \{ \omega_{m} \} \times J \times \{ \omega_{m + 2}\} \times \cdots \times \{ \omega_n \} \, ,
\]
as $J$ ranges over the atoms of $\mathcal Q_{n - m-1}(I_{m + 1}; (0, \omega_{m+2}, \cdots, \omega_n))$. Here we identify $[- \epsilon, \epsilon]$ with $I_{m + 1} = \tilde X_{m + 1} + [- \epsilon, \epsilon]$ in the obvious way.
\item On each set $S_{i, n} \in \mathfrak P_n, i \geq 1$, we define $\mathcal{A}|_{S_{i, n}}$ to consist of atoms of the form
\[
\{ \omega_0\} \times \{ \omega_1\} \times \cdots \times \{ \omega_{n-1}\} \times [- \epsilon, \epsilon] \, .
\] \end{itemize}
With $\mathcal{A}_n$ completely described, the construction of $\mathcal{H}_n := \sigma(\mathcal{A}_n)$ is complete. It is not hard to check that $\mathcal{H}_n$ is a filtration, i.e., $\mathcal{H}_n \supset \mathcal{H}_{n-1}$: to do this, one verifies that the partition sequence $\mathcal{A}_n$ is increasing by inspecting each $\mathfrak P_n$-atom separately.
The following is a straightforward consequence of Lemma \ref{lem:correspond}. \begin{lem} \label{lem:supportWorks}
For each $n \geq 1$, the random interval $I_n$ is $\mathcal{H}_n$-measurable. Moreover, the measure $\nu_n (\cdot) = \mathbb{P}(\tilde X_n \in \cdot | \mathcal{H}_n)$
satisfies $\operatorname{supp}(\nu_n) = I_n$. \end{lem}
\subsection{The conditional measures $\nu_n$}
Let us first describe more transparently what the conditional measures $\nu_n(\cdot) = \mathbb{P}(\tilde X_n \in \cdot | \mathcal{H}_n)$ actually are. To start, for ${\underline \omega} \in S_{i, n}, i \geq 0, n \geq 1$, we have that $\nu_n = \delta_{\tilde X_n} * \nu^\epsilon$ is the uniform distribution on $I_n = \tilde X_n + [- \epsilon, \epsilon]$. The following characterizes $\nu_n$ on the event $S_{i, m, n}, i \geq 0, 0 \leq m < n$:
\begin{lem} Let $i \geq 0, 0 \leq m < n$ and condition on the event $S_{i, m, n} = \{ \tau_i = m, \tau_{i + 1} \geq n\}$. Define $\hat F_{m, n} : [-\epsilon, \epsilon] \to \mathbb{R}$ to be the map sending $\omega \mapsto \tilde X_n = \tilde f_{\omega_{n-1}} \circ \cdots \circ \tilde f_{\omega_{m + 2}} \circ \tilde f_\omega({\tilde X}_{m+1})$.
Let $J \in \mathcal Q_{n - m-1}(\tilde X_{m+1}; (0, \omega_{m+2}, \cdots, \omega_n))$ (regarded as a partition of $[- \epsilon, \epsilon]$) be the atom containing $\omega_{m + 1}$. Then, $\hat F_{m, n} : J \to I_n$ is a diffeomorphism, and \begin{align}\label{eq:altForm11}
\nu_n = \frac{1}{\nu^\epsilon(J)} (\hat F_{m, n})_* (\nu^\epsilon|_J) \, . \end{align}
\end{lem}
\noindent The proof is a case-by-case verification of the above formula and is left to the reader.
Recall that $J \subset [- \epsilon, \epsilon]$ appearing in \eqref{eq:altForm11} has the property that points in $\tilde X_{m+1} + J$ have the same itinerary under $\tilde f^{n-m-1}_{\theta^{m + 2} {\underline \omega}} \circ \tilde f$. In that notation, we have that the density $\rho_n = \frac{d \nu_n}{d \operatorname{Leb}}$ at a point $x \in I_n$ is, up to a constant scalar, given by \[ (\hat F_{m, n})' (\omega) = (f^{n-m-1}_{\theta^{m+2} {\underline \omega}} \circ f)'(\tilde X_{m+1} + \omega) \] where $\omega \in [- \epsilon, \epsilon]$ is such that $x = \hat F_{m, n} (\omega)$. In view of Proposition \ref{prop:distortion} and Lemma \ref{lem:correspond}, then, we obtain a distortion estimate for the density $\rho_n = \frac{d \nu_n}{d \operatorname{Leb}}$:
\begin{cor}\label{cor:distortion} Let $n \geq 1$ be such that $I_n$ is free. Then, for all $x, x' \in I_n$, we have the estimate \begin{align}
\frac{\rho_n(x)}{\rho_n(x')} \leq \exp\big( K_2 L^{- 1/2} + 4 \| \psi'' \|_{C^0} L^{2 \beta} |x - x'|\big) \, . \end{align} \end{cor}
\section{Lyapunov exponents}
Finally, we come to the estimation of Lyapunov exponents in Theorem \ref{thm:lyapEst}.
Throughout, we assume the setup of Section \ref{subsubsec:basicSetup}
and that $\epsilon \geq L^{- (2 k + 1)(1 - \beta) + \alpha}$ for some $\alpha \geq 0$. By Theorem \ref{thm:ergod},
it follows that there is a unique ergodic stationary measure $\mu$ supported on $S^1$.
By (a version of) the Birkhoff ergodic theorem (see Corollary 2.2 on pg. 24 of \cite{kifer2012ergodic}), we
have that
\[
\lambda = \lim_{n \to \infty} \frac1n \log | (f^n_{\underline \omega})'(x) |
\] exists and is constant over $\mathbb{P}$-a.e. ${\underline \omega} \in \Omega$ and $\mu$-a.e. $x \in S^1$. Since, however, $\mu$ is absolutely continuous and supported on all of $S^1$, we can promote this limit to \emph{every} $x \in S^1$ and $\mathbb{P}$-a.e. ${\underline \omega} \in \Omega$; details are left to the reader.
It remains to estimate $\lambda$ from below, for which we use the following. \begin{lem} In the above setting, we have that \[
\lambda \geq \inf_{x \in S^1} \liminf_{n \to \infty} \frac{1}{n} \mathbb{E} \big( \log |(f^n_{\underline \omega})'(x)| \big) \] for all $x \in S^1$. \end{lem} \begin{proof} The limit \[
\lambda = \lim_{n \to \infty} \frac1n \int_{S^1} \mathbb{E} \big( \log |(f^n_{\underline \omega})'(x)| \big) \, d \mu(x) \] follows from the $L^1$-Mean Ergodic Theorem applied to the skew product $\tau : S^1 \times \Omega \to S^1 \times \Omega$ defined by setting $\tau(x, {\underline \omega}) = (f_{\omega_0} x, \theta {\underline \omega})$, on noting that $\mu$ is a stationary ergodic measure iff $\mu \otimes \mathbb{P}$ is an ergodic invariant measure for $\tau$ (Theorem 2.1 on pg. 20 in \cite{kifer2012ergodic}).
As is not hard to check, for all $x \in S^1$ we have $- d(L, \epsilon) \leq \mathbb{E} \big( \log |f_\omega'(x) | \big) \leq \log L$
where $d(L, \epsilon) > 0$ is a constant depending only on $\epsilon, L$. These bounds pass to the averages $g_n := \frac1n \mathbb{E} \big( \log |(f^n_{\underline \omega})'(x)| \big)$. Applying Fatou's Lemma to the nonnegative sequence $g_n + d(L, \epsilon)$, we conclude \[
\inf_{x \in S^1} \liminf_{n \to \infty} \frac{1}{n} \mathbb{E} \big( \log |(f^n_{\underline \omega})'(x)| \big) \leq \int_{S^1} \liminf_n g_n \, d \mu \leq \lim_{n \to \infty} \int_{S^1} g_n \, d \mu = \lambda \, . \qedhere \]
\end{proof}
The remaining work is to estimate $\liminf_n \frac1n \mathbb{E}(\log |(f^n_{\underline \omega})'(x)|)$ for arbitrary $x \in S^1$. \begin{prop}\label{prop:LyapSuff} For all $x \in S^1$, we have \[
\liminf_{n \to \infty} \frac{1}{n} \mathbb{E} \big( \log |(f^n_{\underline \omega})'(x)| \big) \geq \lambda_0 \log L \, , \] where $\lambda_0 = \min \{ \frac{\alpha}{k + 1}, \frac{1}{10}\}$.
\end{prop}
The proof of Proposition \ref{prop:LyapSuff} occupies the remainder of Section 6.
{\it Reductions.} We make here some slight modifications to the upper and lower bounds on $\epsilon$ and the parameter $\beta$. To start, on shrinking the parameter $\beta$, we assume \[ \epsilon \geq L^{- \frac{1 - \beta}{1 + \beta} k - (1 - \beta) (k+1) + \alpha } \, . \] Second, we can assume without loss that $\epsilon < L^{- \min \{ k-1, \frac12\}}$ as in the hypothesis
\eqref{boundEpsilon} for Lemma \ref{lem:boundPeriodCoherence}. If not, then we can reduce to this case by
a line of reasoning similar to the reductions in Section 3.2 in the proof of Theorem \ref{thm:ergod},
to which we refer for details.
Finally, a minor technical point: we will assume that $k, \beta$ satisfy the relation
\begin{align}\label{eq:technicalKBound}
\bigg( \frac{3}{10} - \frac52 \beta - \beta^2 \bigg) k \geq 2 \beta (1 + \beta) \, .
\end{align} For $k \geq 6$, \eqref{eq:technicalKBound} is automatic for all $\beta \in (0,1/10)$, and \eqref{eq:technicalKBound} holds for all $k \in \mathbb{N}$ when $\beta \in (0,1/100)$. This entails no loss of generality.
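To see the arithmetic behind these two claims concretely: at the endpoint $\beta = 1/10$, the coefficient of $k$ on the left-hand side of \eqref{eq:technicalKBound} equals $\frac{3}{10} - \frac52 \cdot \frac{1}{10} - \frac{1}{100} = \frac{4}{100}$, while the right-hand side equals $2 \cdot \frac{1}{10} \cdot \frac{11}{10} = \frac{22}{100}$, so \eqref{eq:technicalKBound} amounts to $k \geq 5.5$, i.e., $k \geq 6$; as $\beta$ decreases, the left-hand coefficient only increases and the right-hand side only decreases, so $k \geq 6$ suffices throughout $\beta \in (0, 1/10)$. For the second claim, the worst case is $k = 1$, for which \eqref{eq:technicalKBound} rearranges to $\frac{3}{10} \geq \frac92 \beta + 3 \beta^2$, and this clearly holds for all $\beta \in (0, 1/100)$.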
{\it With $\beta$ fixed once and for all, we let $L$ be sufficiently large, in terms of $\beta$, and
take on the assumptions of Section \ref{subsubsec:basicSetup}. The parameter $\epsilon$ is as above, and for our choice of $k \in \mathbb{N}$ we assume \eqref{eq:technicalKBound} holds. Finally, the
constructions of Section 5 (namely, the filtration $\mathcal{H}_n$) are applied to the arbitrary initial condition $x = X_0 \in S^1$.}
\subsection{Decomposing the sum}
Fix $n \geq 1$. Define $T_i = \log |\tilde f_{\omega_{i }}' (X_i)|$, $X_i := f^i_{\underline \omega}(x)$. With $\tau_0 \equiv 0 < \tau_1 < \tau_2 < \cdots$ as in Section 5, define the random index $J \in \mathbb{Z}_{\geq 0}$ to satisfy \[ \tau_J < n \leq \tau_{J + 1} \, ; \] note that $\tau_1 \geq n$ implies $J = 0$ since $\tau_0 := 0$.
We decompose \[
(*): = \log|(f^n_{\underline \omega})'(x)| = \sum_{i = 0}^{n-1} T_i = \sum_{i =0}^{\min\{ \tau_1, n \} - 1} T_i +
\sum_{j = 1}^\infty \chi_{J \geq j} \bigg( T_{\tau_j} + \sum_{i = \tau_j + 1}^{\min\{ \tau_{j+1}, n \} - 1} T_i \bigg) \] and will bound $ \mathbb{E} (*)$ from below; here, for an event $A$ we write $\chi_A$ for the indicator function of $A$. The main obstacles are the terms $T_{\tau_j}, 1 \leq j \leq J$, which we bound from below using conditional expectations w.r.t. the filtration $(\mathcal{H}_n)_n$. \begin{prop}\label{prop:main2} Let $j \geq 2$ and condition on the event $ \tau_j = m$. Then, \begin{align}
\mathbb{E} \big( T_{ m} | \mathcal{H}_{m } \big) \geq - \gamma \log L \, ,
\end{align} where $\gamma := \max\{ (1 + \beta) \big( (\frac12 + \beta) k+ 2 \beta \big) , k (1 - \beta) - \alpha \}$. \end{prop} \noindent Proposition \ref{prop:main2} is proved in Section 6.2.
We apply Proposition \ref{prop:main2} by replacing the terms $\chi_{J \geq j} T_{ \tau_j}, j \geq 2$ under $\mathbb{E}$ with the conditional expectations\footnote{For a filtration $(\mathcal{G}_n)$ and an adapted stopping time $\eta$, we write $\mathcal{G}_\eta$ for the \emph{stopped $\sigma$-algebra} consisting of the set of measurable sets $A$ for which $A \cap \{ \eta \leq m \} \in \mathcal{G}_m$ for all $m$.} \[
(*)_j := \mathbb{E} \big( \chi_{J \geq j} T_{\tau_j} | \mathcal{H}_{\tau_j} \big) = \sum_{m = 1}^{n-1} \mathbb{E} \big( \chi_{\tau_j = m} T_m | \mathcal{H}_{m } \big) = \sum_{m = 1}^{n-1} \chi_{ \tau_j = m} \cdot \mathbb{E} \big( T_m | \mathcal{H}_{m } \big) \, . \] Here, we use that $\{ J \geq j \} = \cup_{m = 1}^{n-1} \{ \tau_j = m \}$ for all $j \geq 1$. By Proposition \ref{prop:main2}, for $j \geq 2$ we have \[ (*)_j \geq - \gamma \log L \cdot \chi_{J \geq j} \, . \]
For the $j = 1$ term, we use the following crude estimate: \begin{lem}\label{lem:crudeBound} We have \[
(*)_1 := \mathbb{E} \big( \chi_{J \geq 1} T_{ \tau_1} | \mathcal{H}_{\tau_1} \big) \geq - 2 (2 k + 1) \log L =: - \gamma_1 \log L \, . \]
\end{lem} We prove Lemma \ref{lem:crudeBound} in Section 6.2.
Applying these estimates, we have \begin{align*} \mathbb{E}(*) & \geq \mathbb{E} \bigg[ \sum_{i =0}^{\min\{ \tau_1, n \} - 1} T_i + (*)_1 + \chi_{J \geq 1} \sum_{i = \tau_1 + 1}^{\min\{ \tau_2 , n \} - 1} T_i + \sum_{j = 2}^\infty \bigg( (*)_j + \chi_{J \geq j} \sum_{i = \tau_j + 1}^{\min\{ \tau_{j+1}, n \} - 1} T_i \bigg) \bigg] \\ & \geq \mathbb{E} \bigg[ \underbrace{ \sum_{i =0}^{\min\{ \tau_1, n \} - 1} T_i }_{I} + \underbrace{\chi_{J \geq 1} \bigg( - \gamma_1 \log L + \sum_{i = \tau_1 + 1}^{\min\{ \tau_2 , n \} - 1} T_i \bigg) }_{II}
+ \underbrace{ \sum_{j = 2}^\infty \chi_{J \geq j} \cdot \bigg( - \gamma \log L + \sum_{i = \tau_j + 1}^{\min\{ \tau_{j+1}, n \} - 1} T_i \bigg) }_{III} \bigg] \\ & =: \mathbb{E}[I + II + III] \, . \end{align*} To complete the estimate, we decompose according to the events $\{J = K\}, K = 0,1,2,\cdots$.
\noindent {\bf (A) Estimate of $\mathbb{E} \big( \chi_{J = 0} (I + II + III) \big)$. }
We have $II = III = 0$ and \[ \mathbb{E} [\chi_{J = 0} \cdot I ] = \mathbb{E} \bigg[ \chi_{J = 0} \sum_{i = 0}^{n-1} T_i \bigg] \] Conditioned on $J = 0$, we have $\tau_1 \geq n$ and so Lemma \ref{lem:fullBoundPer} may be applied (see also Lemma \ref{lem:correspond}). We obtain a lower bound using the worst possible case that $p_{\omega_{n-1}}(I_{n-1}) = k-1$, i.e., $I_{n-1}$ initiates a bound period of length $k-1$ at time $n-1$ (corresponding to $t_j = n-1, p_j = k-1$ in the notation of Lemma \ref{lem:fullBoundPer}(b)). So, \[
\sum_{i = 0}^{n-1} T_i = \log |(f^n_{\underline \omega})'(x_0)| \geq \Big( (n-1) \big( \frac12 - \beta \big) - \frac{k-1}{2} - \beta \Big) \log L \, . \] We conclude \begin{align*} \mathbb{E}[ \chi_{J = 0} \cdot I ] \geq \bigg( (n-1) \big( \frac12 - \beta \big) \log L -( \frac{k-1}{2} + \beta) \log L \bigg) \cdot \mathbb{P} (J = 0) \, . \end{align*}
\noindent {\bf (B) Estimate of $\mathbb{E} \big( \chi_{J = 1} (I + II + III) \big)$. }
Here we have $III = 0$ and \[ \mathbb{E} [ \chi_{J = 1} \cdot (I + II) ] = \mathbb{E} \bigg[ \chi_{J = 1} \bigg( \sum_{i = 0}^{ \tau_1 - 1} T_i - \gamma_1 \log L + \sum_{i = \tau_1 + 1}^{n-1} T_i \bigg) \bigg] \] By Lemma \ref{lem:fullBoundPer}(a) we have $\sum_{i = 0}^{\tau_1 - 1} T_i \geq \tau_1 \cdot \big( \frac12 - \beta \big) \log L$. The second summation $\sum_{i = \tau_1 + 1}^{n-1} T_i$ is estimated as in paragraph (A): we have \[ \mathbb{E} \bigg[ \chi_{J = 1} \sum_{i = \tau_1 + 1}^{n-1} T_i \bigg] \geq \bigg( (n - 2 - \tau_1) \big( \frac12 - \beta \big) \log L - (\frac{k-1}{2} + \beta) \log L \bigg) \cdot \mathbb{P}(J = 1) \, , \] and so collecting, we get \[ \mathbb{E} [ \chi_{J = 1} \cdot (I + II) ] \geq \bigg( (n - 2) \big( \frac12 - \beta \big) \log L - \gamma_1 \log L + \big( 1 - \frac{k}{2} - \beta \big) \log L \bigg) \cdot \mathbb{P}(J = 1) \, . \]
\noindent {\bf (C) Estimate of $\mathbb{E} \big( \chi_{J = K} (I + II + III) \big)$ for $K > 1$.}
We bound $\mathbb{E}[\chi_{J = K} \cdot (I + II)]$ as in paragraph (A), obtaining \[ \mathbb{E} [\chi_{J = K} \cdot (I + II) ] \geq \mathbb{E} \bigg[ \chi_{J = K} \bigg( (\tau_2 - 1) \big( \frac12 - \beta \big) \log L - \gamma_1 \log L \bigg) \bigg] \, . \]
Conditioned on $\{ J = K\}$ for $K > 1$, the $III$ term has the form \[ III = \sum_{j = 2}^{K-1} \underbrace{\bigg( - \gamma \log L + \sum_{i = \tau_j + 1}^{ \tau_{j + 1} - 1} T_i \bigg) }_{IV_j} + \bigg( \underbrace{- \gamma \log L + \sum_{i = \tau_K + 1}^{n-1} T_i}_{IV_K} \bigg) \] For each summand $IV_j, j \geq 2$, observe that $\tilde X_i \in \mathcal{G}$ for each $i = \tau_j + 1, \cdots, \tau_j + k$, hence $\sum_{i = \tau_j + 1}^{\tau_j + k} T_i \geq k (1 - \beta) \log L$. If $\tau_j +k + 1 \leq \tau_{j+1} - 1$, then the summands $\tau_j +k + 1 \leq i \leq \tau_{j+1} - 1$ are estimated as in Lemma \ref{lem:fullBoundPer}(a). In total, \[ \mathbb{E} [ \chi_{J = K} \cdot IV_j ] \geq \mathbb{E} \bigg[ \chi_{J = K} \bigg( \big( k (1 - \beta) - \gamma \big) \cdot \log L + ( \tau_{j + 1} - 1 - \tau_j - k) \cdot \big( \frac12 - \beta \big) \log L \bigg) \bigg] \] Observe that \begin{align*} k (1 - \beta) - \gamma &= \min \{ \alpha , \big( \frac12 - \frac52 \beta - \beta^2 \big) k - 2\beta (1 + \beta)\} \geq \min \{ \alpha, \frac15 k \} \end{align*} holds from \eqref{eq:technicalKBound}. Dividing the latter by $k + 1$
yields an estimate for the average growth rate $\lambda_0$ as follows: \begin{align}\label{eq:defineGamma0} \frac{k (1 - \beta) - \gamma}{k + 1} \geq \min \{ \frac{\alpha}{k + 1}, \frac{1}{10}\} =: \lambda_0 = \lambda_0(\alpha, k) \, ,
\end{align} hence \[ \mathbb{E} [ \chi_{J = K} \cdot IV_j ] \geq \mathbb{E} \big[ \chi_{J = K} ( \tau_{j + 1} - \tau_j) \cdot \lambda_0 \log L \big] \, . \] This telescopes, and so \[ \mathbb{E} [\chi_{J = K} \sum_{j = 2}^{K-1} IV_j ] \geq \mathbb{E} \big[ \chi_{J = K} ( \tau_K - \tau_2) \cdot \lambda_0 \log L \big] \] Using Lemma \ref{lem:fullBoundPer}(b) we bound $IV_K$ from below by \[ IV_K = - \gamma \log L + \sum_{j = \tau_K + 1}^{n-1} T_i \geq - \gamma \log L + (n - \tau_K - 2) (\frac12 - \beta) \log L -( \frac{k-1}{2} + \beta) \log L \] hence \[ \mathbb{E} [ \chi_{J = K} \cdot III] \geq \mathbb{E} \bigg[ \chi_{J = K} \bigg( (n-2 - \tau_2 ) \cdot \lambda_0 \log L - \gamma \log L - (\frac{k-1}{2} + \beta) \log L
\bigg) \bigg] \] and in total, \[ \mathbb{E}[ \chi_{J = K} (I + II + III)] \geq \bigg( (n-3) \lambda_0 \log L - (\gamma + \gamma_1) \log L - (\frac{k-1}{2} + \beta) \log L \bigg) \cdot \mathbb{P}(J = K) \, . \]
\noindent {\bf Putting it together.}
The lower bounds obtained for $K > 1$ as in paragraph (C) are the worst of the three cases examined already, hence \[ \mathbb{E} (*) = \mathbb{E}[I + II + III] = \sum_{K = 0}^\infty \mathbb{E}[ \chi_{J = K} (I + II + III)] \geq (n-3) \lambda_0 \log L - (\gamma + \gamma_1) \log L - (\frac{k-1}{2} + \beta) \log L \, . \] On dividing by $n$ and taking $n \to \infty$, we conclude that $
\lim_{n \to \infty} \frac1n \mathbb{E} \big( \log|(f^n_{\underline \omega})'(x)| \big) \geq \lambda_0 \log L \, , $ as desired.
\subsection{Proofs of Proposition \ref{prop:main2} and Lemma \ref{lem:crudeBound}}
Below, $C > 0$ refers to a constant depending only on $\psi$, and may change in value from line to line.
We start with the following preliminary estimate.
\begin{lem}\label{lem:boundIntegralBelow11} Let $I \subset \mathcal{B}$ be any connected interval. Then, \[
\int_I \log | f'(z) | \, d z \geq |I| \cdot \log (L^{1 - \beta} |I|) \, . \] \end{lem} This is a simple consequence of \eqref{eq:lowerBoundDer} and follows on taking $L$ sufficiently large, depending only on $\beta$ and $\psi$; details are left to the reader.
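(A sketch of the argument, under the assumption—made here only for illustration—that \eqref{eq:lowerBoundDer} provides a bound of the form $|f'(z)| \geq c_\psi L \, |z - \hat x|$ for $z \in \mathcal{B}$ near the critical point $\hat x \in C_\psi'$ of the corresponding component, with $c_\psi > 0$ depending only on $\psi$: for fixed length $|I|$, the integral $\int_I \log |z - \hat x| \, dz$ is smallest when $I$ is centered at $\hat x$, in which case \[ \int_I \log |f'(z)| \, dz \geq 2 \int_0^{|I|/2} \log (c_\psi L u) \, du = |I| \big( \log (c_\psi L |I| / 2) - 1 \big) \geq |I| \log (L^{1 - \beta} |I|) \, , \] the last inequality holding as soon as $L^\beta \geq 2 e / c_\psi$, i.e., for $L$ sufficiently large depending only on $\beta$ and $\psi$.)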
\begin{proof}[Proof of Proposition \ref{prop:main2}]
Unconditionally, for any $m \geq 0$ the conditional expectation $\mathbb{E}(T_m | \mathcal{H}_m)$ is given by \[
(**) = \int_{I_m} \log |f'_{\omega_m}(z) | \, d \nu_m(z) \, . \] by Lemma \ref{lem:supportWorks}.
Conditioning on $\{ \tau_j = m\}$, recall (Remark \ref{rmk:smallAtoms}) that $|I_m| \leq C L^{- \frac{k}{2} - \beta}$ since $I_m$ is an atom of $\mathcal P_{\omega_m}(\tilde f_{\omega_{m-1}}(I_{m-1}))$. Our distortion control on $\rho_m = \frac{d \nu_m}{d \operatorname{Leb}}$ as in Corollary \ref{cor:distortion} along $I_m$ implies $| \log \frac{\rho_m(z)}{\rho_m(z')} | \leq K_2 L^{- 1/2} + 2 K_1^{-1} L^{- \frac{k}{2} + \beta} \leq C L^{- 1/2 + \beta}$ for $z, z' \in I_m$, hence \[
(**) \geq (1 + C L^{-1/2 + \beta} ) \frac{1}{|I_m|} \bigg( \int_{I_m} \log | f'_{\omega_{m+1}} (z) | \, d z \bigg) \, . \]
From Lemma \ref{lem:boundIntegralBelow11} applied to $I = I_m$, we conclude \begin{align}\label{eq:boundImBelow11}
(**) \geq (1 + C L^{-1/2 + \beta}) \log (L^{1 - \beta} |I_m|) \geq (1 + \beta) \log (L^{1 - \beta} |I_m|) \, . \end{align}
We now bound $|I_m|$ from below.
\begin{lem}\label{lem:boundIm}
On the event $\{ \tau_j = m\}, j \geq 2, m \geq 1$, we have the estimate \[|I_m| \geq \min\{ L^{-1 - (\frac12 + \beta) k - \beta}, L^{k (1 - \beta)} \epsilon\} \, .\] \end{lem}
Assuming this and plugging in $\epsilon \geq L^{- \frac{1 - \beta}{1 + \beta} k - (1 - \beta) (k+1) + \alpha }$, we conclude \begin{align*} (**) & \geq (1 + \beta) \log \min\{ L^{ - (\frac12 + \beta) k -2 \beta} , L^{(k+1) (1 - \beta)} \epsilon \} \\ & \geq \min \{ (1 + \beta) \big( - 2 \beta - (\frac12 + \beta) k \big) , (1 + \beta) \big( \alpha - k \frac{1 - \beta}{1 + \beta} \big) \} \log L \\ & \geq - \max\{ (1 + \beta) \big( (\frac12 + \beta) k+ 2 \beta \big) , k (1 - \beta) - \alpha \} \log L =: - \gamma \log L \, . \end{align*}
To finish the proof of Proposition \ref{prop:main2}, it remains to prove Lemma \ref{lem:boundIm}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:boundIm}] We distinguish two cases: \begin{itemize}
\item[(a)] $I_{i} = f_{\omega_{i-1}}(I_{i-1})$ for each $\tau_{j-1} + 2 \leq i \leq m = \tau_j$
\item[(b)] $I_i \subsetneq f_{\omega_{i-1}}(I_{i-1})$ for some $\tau_{j-1}+2 \leq i \leq m = \tau_j$. \end{itemize}
In case (a), we easily have $|I_{\tau_{j-1} + k + 1}| \geq L^{k (1 - \beta)} \epsilon$, and since no additional cuts are made, we estimate \begin{align*}
| I_{m}|
& = | \tilde f_{\omega_{m-1}} \circ \cdots \circ \tilde f_{\omega_{\tau_{j-1} + k + 1}} (I_{\tau_{j-1} + k + 1} ) | \\
& \geq L^{( m - ( \tau_{j-1} + k + 1))(\frac12 - \beta)} |I_{\tau_{j-1} + k + 1}| \geq L^{k (1 - \beta)} \epsilon \, . \end{align*}
In case (b), set $i^* = \max \{ i \leq \tau_j : I_i \subsetneq \tilde f_{\omega_{i-1}}(I_{i-1}) \}$ (note $i^* = m$ is possible), and note that if $i^* < m$ then \[ I_m = \tilde f_{\omega_{m-1}} \circ \cdots \circ \tilde f_{\omega_{i^*}}(I_{i^*}) \, . \]
To bound $|I_{i^*}|$ we split further to the cases (i) $p_{\omega_{i^*}}(I_{i^*}) = 0$, (ii) $p_{\omega_{i^*}}(I_{i^*}) \in \{ 1,\cdots, k-1\}$ and (iii) $p_{\omega_{i^*}}(I_{i^*}) = k$. Note that in all cases, $\mathcal P_{\omega_{i^*}}(\tilde f_{\omega_{{i^*}-1}}(I_{{i^*}-1}))$ contains at least two elements, hence $I_{i^*}$ contains at least one atom of $\mathcal P_{\omega_{i^*}}$ (Remark \ref{rmk:smallAtoms}).
In case (b)(i), $I_{i^*} \subset \mathcal I_{\omega_{i^*}} \cup \mathcal{G}_{\omega_{i^*}}$. Either $I_{i^*}$ contains an atom of $\mathcal{G}_{\omega_{i^*}}$, in which case $|I_{i^*}|$ is bounded from below by $\frac12 \min\{ d(\hat x, \hat x') : \hat x, \hat x' \in C_\psi' , \hat x \neq \hat x'\}$, or $I_{i^*}$ contains an atom of $\mathcal P_{\omega_{i^*}}|_{\mathcal I_{\omega_{i^*}}}$, hence $|I_{i^*}| \geq L^{- \frac{3}{2} - \beta}$ (the latter bound being the worse of the two). Since $I_m = I_{\tau_j}$ is free, we conclude $|I_m| \geq |I_{i^*}| \geq L^{- \frac32 - \beta}$ from Lemma \ref{lem:fullBoundPer}(a).
In case (b)(ii), we have automatically that $I_{i^*}$ is free and initiates a bound period of length $p^* = p_{\omega_{i^*}}(I_{i^*})$. Since $0 < p^* < k-1$ by assumption, we cannot have $i^* = \tau_j = m$ (since then $p^* = k$) and so conclude $i^* < \tau_j$ in this case-- indeed, we have $i^* + p^* + 1 \leq m = \tau_j$, since $I_{\tau_j}$ is free. From Remark \ref{rmk:smallAtoms} we have \[
| I_{i^*} | \geq (p^* + 1)^{-2} L^{- \frac{p^* + 3}{2} - \beta} \geq L^{- \frac{p^* + 3}{2} - \beta (p^* + 1) } \, , \] on taking $L$ large enough so $\beta > 2/\log L$. Moreover, since $I_{m} = I_{\tau_j}$ is free, we have \[
|I_m| \geq |I_{i^* + p^* + 1}| = |\tilde f^{p^* +1}_{\theta^{i^*} {\underline \omega}} (I_{i^*})| \geq L^{(p^* + 1)(\frac12 - \beta)} |I_{i^*}| \geq L^{(p^* + 1) (\frac12 - \beta) } \cdot L^{- \frac{p^* + 3}{2} - \beta( p^* + 1) } = L^{-1 - 2 \beta (p^* + 1)} \]
The worst possible case is $p^* = k-1$, and so we conclude $|I_m| \geq L^{-1 - 2 \beta k}$ in case (ii).
In case (b)(iii), we have necessarily that $i^* = m = \tau_j$. In the worst case, $I_m$ contains an atom of $\mathcal P_{\omega_m}|_{\mathcal{B}^{k-1}_{\omega_m}}$, and so $|I_m| \geq k^{-2} L^{- \frac{k + 2}{2} - \beta} \geq L^{- 1 - (\frac12 + \beta) k - \beta}$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:crudeBound}]
Arguing in parallel to the proof of Proposition \ref{prop:main2} (see \eqref{eq:boundImBelow11}) we have, on the event $\{ \tau_1 = m\}$, the estimate \[
\mathbb{E} ( T_{m} | \mathcal{H}_{m}) \geq (1 + \beta) \log (L^{1 - \beta} |I_m|) \]
As before, we estimate $|I_m|$ from below.
\begin{lem}\label{lem:boundImTake2}
On the event $\{ \tau_1 = m \}$, we have the estimate $|I_m| \geq \min \{ L^{-1 - ( \frac12 + \beta) k - \beta}, \epsilon\}$. \end{lem} Assuming this, we easily obtain \[
\mathbb{E}(T_m | \mathcal{H}_m) \geq (1 + \beta) \log (L^{1 - \beta} \min \{ L^{-1 - ( \frac12 + \beta) k - \beta}, \epsilon\} ) \geq - 2 (2 k + 1) \log L \, , \] as claimed. It remains to prove Lemma \ref{lem:boundImTake2}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:boundImTake2}] Condition on $\tau_1 = m$. The proof is very much parallel to that of Lemma \ref{lem:boundIm}. Case (b)
can be repeated verbatim, and yields the identical estimate $|I_m| \geq L^{-1 - (\frac12 + \beta) k - \beta}$.
The only difference is in case (a). Here, we observe that $I_m$ must be free, and so (Lemma \ref{lem:fullBoundPer}(a)) we have \[
|I_m| \geq L^{m (\frac12 - \beta)} \cdot 2 \epsilon \geq \epsilon \, . \] This completes the proof of Lemma \ref{lem:boundImTake2}. \end{proof}
\end{document} | arXiv |
\begin{definition}[Definition:Steady-State/First Order ODE]
Consider the Decay Equation:
:$\dfrac {\d y} {\d x} = k \paren {y_a - y}$
where:
:$k \in \R: k > 0$
:$y = y_0$ at $x = 0$
which has the particular solution:
:$(1): \quad y = y_a + \paren {y_0 - y_a} e^{-k x}$
The term $y_a$ is known as the '''steady-state''' component of $(1)$.
\end{definition} | ProofWiki |
Establishment of an indicator framework for the transmission risk of the mountain-type zoonotic visceral leishmaniasis based on the Delphi-entropy weight method
Zhuowei Luo, Zhengbin Zhou, Yuwan Hao, Jiaxin Feng, Yanfeng Gong, Yuanyuan Li, Yun Huang, Yi Zhang & Shizhu Li
Infectious Diseases of Poverty volume 11, Article number: 122 (2022)
Visceral leishmaniasis (VL) is one of the most important neglected tropical diseases. Although VL was controlled in several regions of China during the last century, the mountain-type zoonotic visceral leishmaniasis (MT-ZVL) has reemerged in the hilly areas of China in recent decades. The purpose of this study was to construct an indicator framework for assessing the risk of the MT-ZVL in China, and to provide guidance for preventing disease.
Based on a literature review and expert interview, a 3-level indicator framework was initially established in November 2021, and 28 experts were selected to perform two rounds of consultation using the Delphi method. The comprehensive weight of the tertiary indicators was determined by the Delphi and the entropy weight methods.
Two rounds of Delphi consultation were conducted. Four primary indicators, 11 secondary indicators, and 25 tertiary indicators were identified. The Delphi-entropy weight method was performed to calculate the comprehensive weight of the tertiary indicators. The normalized weights of the primary indicators were 0.268, 0.261, 0.242, and 0.229, respectively, for biological factors, interventions, environmental factors, and social factors. The normalized weights of the top four secondary indicators were 0.122, 0.120, 0.098, and 0.096, respectively, for climatic features, geographical features, sandflies, and dogs. Among the tertiary indicators, the top four normalized comprehensive weights were the population density of sandflies (0.076), topography (0.057), the population density of dogs, including tethering (0.056), and use of bed nets or other protective measures (0.056).
An indicator framework of transmission risk assessment for MT-ZVL was established using the Delphi-entropy weight method. The framework provides a practical tool to evaluate transmission risk in endemic areas.
Visceral leishmaniasis (VL), also known as kala-azar, is a serious disease caused by trypanosomatid protozoans of the genus Leishmania, which are transmitted by biting of sandflies from the genera Phlebotomus and Lutzomyia [1]. If left untreated, VL is fatal in over 95% of cases. VL is one of the most important neglected tropical diseases [2], and is a major global public health problem. L. donovani and L. infantum are the main Leishmania species in China. There are three epidemiological types of VL in China, namely anthroponotic visceral leishmaniasis (AVL), mountain-type zoonotic VL (MT-ZVL), and desert-type zoonotic VL (DT-ZVL), and the main transmitting sandflies for each type of VL are different [3]. Phlebotomus chinensis (endophilic species) is the main vector in MT-ZVL endemic areas, including the central and eastern plains, and mountainous and Loess Plateau areas of China [4].
VL was once rampant in rural areas north of the Yangtze River, afflicting more than 600 counties and cities in 16 provincial-level administrative divisions (PLADs), with an estimated number of 530,000 patients in 1951 [5]. Through large-scale prevention and control campaigns, the number of patients decreased yearly [6]. However, with the development of society and management of the environment, more suitable ecological habitats were created for the vector, Ph. chinensis, and reservoirs, leading to the re-emergence of MT-ZVL in the hilly areas of China [7]. Since the twenty-first century, the number of MT-ZVL cases reported in central and western China has increased rapidly, and the epidemic region has expanded to more than 60 counties and districts in seven PLADs, including the northern suburbs of Beijing, northern Hebei, western Henan, Shanxi, southern Shaanxi, southern Gansu, and northwestern Sichuan [8, 9]. The proportion of dogs infected with Leishmania in endemic areas reached 51.9%, as detected by PCR [10]. A total of 479 MT-ZVL cases were reported in China from 2019 to 2021, and the incidence increased from 0.0010/10,000 in 2019 to 0.0015/10,000 in 2021 [11].
Several studies have investigated the risk factors associated with the transmission of the disease based on patterned methods, and found that some meteorological, environmental, and socioeconomic factors could increase the transmission risk of VL [12,13,14,15,16]. The biological activity and size of the sandfly population, as well as the presence of latently infected dogs, contribute substantially to the dissemination of the disease [17]. In addition to biological factors, individual factors such as the use of bed nets and repellents were considered influencing factors in several studies [18,19,20]. However, most studies failed to apply a theoretical and comprehensive framework to identify the specific factors that have the greatest impact on the transmission cycle, and it is imperative to monitor and control such risk factors. Thus, identifying and assessing high-risk factors for the transmission of MT-ZVL is the most important consideration for disease control, including the establishment of public policies, environmental management, treatment of patients, and effective protection of public health.
Therefore, it is necessary to develop a comprehensive risk factor analysis tool for MT-ZVL transmission. The Delphi method is an anonymous questionnaire-based method that provides objectivity and neutrality and makes use of each expert's knowledge and experience. Although the method retains a certain degree of subjectivity, it integrates a set of expert views with practical and scientific support [21]. The entropy method is an objective method that mainly uses the entropy value to judge the degree of dispersion of each indicator in the framework [22]. Thus, the combination of subjective and objective methods has been used in studies to render results more accurate, reasonable, and effective [23,24,25]. In this study, the Delphi and entropy methods were applied to establish the multilevel risk factors and a comprehensive assessment framework to provide a new basis for MT-ZVL control in endemic areas.
Establishing a framework for transmission risk
Search strategy
The questionnaire was designed on the basis of a systematic search of the literature on risk factors for MT-ZVL. English and Chinese databases, including PubMed, Science Direct, Scopus, Google Scholar, Web of Science, the Chinese Biomedical Literature Database (http://www.sinomed.ac.cn/), China National Knowledge Infrastructure (CNKI, https://www.cnki.net/), the China Science and Technology Journal Database (VIP, http://www.cqvip.com/) and Wanfang (https://www.wanfangdata.com.cn/), were comprehensively searched for published articles on the transmission risk of VL from 2010 to 2022. The search was carried out using the following keywords and terms: "visceral leishmaniasis", "Kala-azar", "Leishmania donovani", "Leishmania infantum", "canine visceral leishmaniasis", "zoonotic visceral leishmaniasis", "MT-ZVL", "zoonoses", "Phlebotomus", "risk factors", "transmission", "epidemiology" and "control measures", alone or in combination with the "OR" and/or "AND" operators.
Data were extracted from studies that met the inclusion criterion of addressing the determination of risk factors for zoonotic visceral leishmaniasis transmission and control strategies. Summaries of articles presented as proceedings at conferences, studies that contained no qualified data, experimental studies, review articles, duplicates, and case reports were excluded (Fig. 1).
Flowchart of the study selection
Criteria for the selection of experts
To ensure the representativeness and authority of the experts, those selected were engaged in the prevention and control of VL from national, provincial, and municipal centers for disease control. The inclusion criteria for experts were ≥ 10 years of work experience in VL research and field prevention and control; familiarity with the pathogenesis and transmission of VL; a bachelor's degree or above; and intermediate or higher professional title. Experts also provided informed consent and volunteered to participate in the study.
Design of expert consultation questionnaire
The questionnaire was divided into two parts.
Part I: The core part of the expert inquiry was the importance (Cij) of each indicator for transmission risk, scored with a 5-point Likert scale method (5 points: very important, 4 points: important, 3 points: generally important, 2 points: weakly important, and 1 point: not important) based on scientific information, necessity, and operability, with experts also providing qualitative opinions and suggestions.
Part II: Basic information, including general information of experts (age, gender, post, educational background, etc.); familiarity (Cs): whether the expert was familiar with the listed indicators and understood the meaning. The highest score was 1, and the higher the score, the greater the familiarity; judgment basis (Ca): Based on the expert's judgment, the degree of influence is divided into large, medium, and small. As shown in Table 1, the judgment basis was based on the degree of influence.
Table 1 Judgment based on the degree of influence
Calculation of the indicator framework
The Delphi method
Two rounds of expert consultations were carried out, and the experts scored the importance (Cij), familiarity (Cs) and judgment basis (Ca) of each indicator of the framework. Experts also provided suggestions for modification and supplementation of the indicators.
After the first round of consultation was complete, the indicator framework was adjusted according to the experts' scores and suggestions. The second round of expert consultation was then conducted to establish the transmission risk assessment framework. Details on the calculation process are described below:
Indicator evaluation score: With the collected data, assessment criteria were used to evaluate the consultation results from the selected experts. The assessment criteria consisted of four parts: (a) positive coefficient: the questionnaire response rate; (b) authority coefficient (Cr), determined by the judgment basis coefficient (Ca) and familiarity coefficient (Cs) of the expert. The formula is Cr = (Ca + Cs)/2, and a larger Cr value indicates a higher degree of authority of the expert on the content of the consultation; (c) coordination coefficient (Kendall's W): the W value and its significance test (χ2 test) reflect the degree of dispersion of the expert consultation. W is in the range of 0–1, and the larger the W value (with a significant χ2 test), the better the coordination; (d) coefficient of variation (CV), where CV = standard deviation/mean value. The smaller the coefficient of variation, the more unanimous the opinion of the experts.
Then, the weighted importance score (\({C}_{ij}^{\prime}\)) of each indicator was calculated as follows: \({C}_{ij}^{\prime}= {C}_{ij}\times {C}_{r(ij)}\), which is the product of the importance score and the authority coefficient.
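As an illustration of this step, a minimal Python sketch is given below; it is not part of the original analysis (which used Excel, R and SPSS), and the function and variable names are purely illustrative. It computes, for a single indicator, the experts' authority coefficients, the weighted importance score, and the coefficient of variation; the use of the sample standard deviation is an additional assumption.

import numpy as np

def indicator_statistics(c, ca, cs):
    # c:  Likert importance scores C_ij given by the experts for one indicator
    # ca: judgment-basis coefficients Ca of the same experts
    # cs: familiarity coefficients Cs of the same experts
    c, ca, cs = (np.asarray(v, dtype=float) for v in (c, ca, cs))
    cr = (ca + cs) / 2.0                    # authority coefficient Cr = (Ca + Cs)/2
    weighted = c * cr                       # weighted importance scores C'_ij
    mean_score = weighted.mean()            # mean weighted importance score of the indicator
    cv = weighted.std(ddof=1) / mean_score  # coefficient of variation = SD / mean
    return mean_score, cv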
Indicator screening: In combination with the suggestions of the experts, an indicator was retained if its weighted importance score, as well as its scientific information, necessity, and operability scores, was ≥ 3.00 and its CV was ≤ 40%. Conversely, an indicator with a weighted importance score < 3.00 or a CV > 40% was deleted in the first round. The exclusion criteria for the second round were a weighted importance score < 3.00 or a CV > 35%. A new indicator was added only if at least one-third of the experts suggested including it.
Delphi weight calculation: After the indicators were optimized, the weighted importance scores of the primary indicators were first normalized as \({W}_{1,j}\). Subsequently, the scores of the secondary and tertiary indicators were normalized as \({W}_{2,j}\) and \({W}_{3,j}\), respectively. The Delphi normalized weight of a secondary indicator was calculated as \({W}_{d,j}={W}_{1,j}\times {W}_{2,j}\). Finally, the weight of each tertiary indicator was calculated by continued multiplication, \({W}_{d}={W}_{1,j}\times {W}_{2,j}\times {W}_{3,j}\), which was the final Delphi weight of that indicator.
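The hierarchical multiplication can be sketched as follows (again illustrative Python, not the original analysis). It assumes that the scores at each level are normalized within the group of indicators sharing the same parent, which is consistent with the reported weights.

def normalize_within_groups(scores, parents):
    # scores:  mean weighted importance scores C'_ij of all indicators at one level
    # parents: identifier of each indicator's parent (a single shared value at the primary level)
    totals = {}
    for s, p in zip(scores, parents):
        totals[p] = totals.get(p, 0.0) + s
    return [s / totals[p] for s, p in zip(scores, parents)]

# The final Delphi weight of a tertiary indicator is the product W_d = W_1 * W_2 * W_3
# of the normalized weights along its branch of the three-level hierarchy.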
The entropy method
The entropy weight method is an objective method to determine each indicator's weight based on the uncertainty contained within each indicator for the whole framework. The concept of entropy is well suited to measuring the relative strength of the comparison criteria, representing the average intrinsic information involved in the decision. This method largely avoids the defects of subjective assignment in the weight calculation for each indicator, and a greater weight indicates a greater influence of the assessed indicator on the overall evaluation [26].
Dimensionless processing: Under the assumption that the indicator framework for transmission risk was assessed through m indicators and n samples, the original data matrix \(X = {(x_{ij})}_{m\times n}\) was standardized according to the following equation:
$$x_{ij}^{\prime +}= \frac{x_{ij}-\min_{i} x_{ij}}{\max_{i} x_{ij}-\min_{i} x_{ij}}, \qquad x_{ij}^{\prime -}= \frac{\max_{i} x_{ij}-x_{ij}}{\max_{i} x_{ij}-\min_{i} x_{ij}}$$
where \(x_{ij}^{\prime +}\) and \(x_{ij}^{\prime -}\) are the standardized values for positive and negative indicators, respectively; and \(x_{ij}^{\prime}\) is the standardized value of the jth indicator for the ith sample, for i = 1, 2, …, n and j = 1, 2, …, m.
Indicator proportion: The proportion \({P}_{ij}\) of the ith sample under the jth indicator was calculated as follows:
$${P}_{ij}= \frac{{{x}_{ij}}^{\prime}}{{\sum }_{i=1}^{n}{{x}_{ij}}^{\prime}}$$
Information entropy: According to the definition of information entropy [27], the entropy \({E}_{j}\) of the jth indicator was calculated according to (3).
$${E}_{j}=-k\sum_{i=1}^{n}{p}_{ij}\mathrm{ln}({p}_{ij})$$
where \(k=1/\mathit{ln}(n)\) and \({E}_{j}\) ≥ 0. The difference coefficient, \({G}_{j}\), is calculated as:
$${G}_{j}=\frac{1-{E}_{j}}{m-{E}_{e}}$$
where \({E}_{e}=\sum_{j=1}^{m}{E}_{j}\); 0 ≤ \({G}_{j}\)≤ 1; \(\sum_{j=1}^{m}{G}_{j}=1\); a greater value means higher determinacy of the overall evaluation and a smaller entropy.
Entropy weight: The entropy weight of each indicator, \({W}_{j}\), was then calculated as:
$${W}_{j}=\frac{{G}_{j}}{{\sum }_{j=1}^{m}{G}_{j}}$$
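A minimal Python sketch of Eqs. (1)-(5) is given below; it is illustrative only and not part of the original analysis. Indicators whose values are identical across all samples (the G_j = 0 case) are not handled here and are deferred to Eq. (6).

import numpy as np

def entropy_weights(x, positive):
    # x:        n x m matrix of indicator values (n samples, m indicators)
    # positive: length-m boolean array, True where a larger value means higher risk
    x = np.asarray(x, dtype=float)
    n, m = x.shape
    lo, hi = x.min(axis=0), x.max(axis=0)               # assumes hi > lo for every indicator
    x_std = np.where(positive, (x - lo) / (hi - lo), (hi - x) / (hi - lo))   # Eq. (1)
    p = x_std / x_std.sum(axis=0)                       # Eq. (2)
    plogp = p * np.log(np.where(p > 0, p, 1.0))         # convention: 0 * ln(0) = 0
    e = -plogp.sum(axis=0) / np.log(n)                  # Eq. (3), with k = 1/ln(n)
    g = (1.0 - e) / (m - e.sum())                       # Eq. (4), difference coefficient G_j
    return g / g.sum()                                  # Eq. (5), entropy weights W_j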
Comprehensive weight
The Delphi, analytical hierarchy process (AHP), least squares, and binomial coefficient methods are subjective weighting methods, whereas objective weighting methods mainly include the entropy weight method, the principal component analysis method, the variance and mean square deviation method, and the multi-objective planning method. The Delphi method is highly subjective. Therefore, combining subjective and objective methods to establish the weights jointly allows the two approaches to complement each other's strengths and weaknesses, respects expert opinions, reflects the objectivity of the data, and reduces the "polarization" effect of different evaluation methods, all of which improve the scientific soundness of the indicators.
Considering the importance of the indicators and the degree of difference, the weight coefficients obtained from the above two methods were combined by multiplication to obtain the comprehensive weight, \({W}_{c}\) [25]. If there were some indicators with equal scores, the entropy redundancy degree, \({G}_{j}\), was 0 and the comprehensive weight was the Delphi weight (\({W}_{a}\)= \(\sum {W}_{d},{G}_{j}=0\)).
$${W}_{c}=\left\{\begin{array}{l}{W}_{d},{G}_{j}=0\\ \frac{\left(1-{W}_{a}\right)\times \left({G}_{j}*{W}_{d}\right)}{{\sum }_{j=1}^{m}\left({G}_{j}*{W}_{d}\right)},{G}_{j}\ne 0\end{array}\right.$$
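A sketch of the combination in Eq. (6) follows (illustrative Python only). Here W_a is read as the total Delphi weight of the indicators with G_j = 0, which is our reading of the notation above; under that reading the comprehensive weights again sum to one.

import numpy as np

def comprehensive_weights(w_delphi, g):
    # w_delphi: Delphi weights W_d of the m tertiary indicators (summing to 1)
    # g:        entropy difference coefficients G_j from Eq. (4)
    w_delphi, g = np.asarray(w_delphi, dtype=float), np.asarray(g, dtype=float)
    zero = np.isclose(g, 0.0)
    w_a = w_delphi[zero].sum()               # weight retained by the G_j = 0 indicators
    w_c = np.where(zero, w_delphi, 0.0)      # first branch of Eq. (6): keep the Delphi weight
    gw = g * w_delphi
    if (~zero).any():
        w_c[~zero] = (1.0 - w_a) * gw[~zero] / gw[~zero].sum()   # second branch of Eq. (6)
    return w_c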
Excel 2020 software (Microsoft Corporation, Redmond, USA) was used to input the expert consultation results. R1.2 software (R Foundation for Statistical Computing, Vienna, Austria) was used to calculate the authority coefficient, weighted importance score, and CV of each indicator. SPSS 26.0 software (IBM Corporation, Armonk, USA) was used to perform Kendall's W test on the weighted importance score at all levels of indicators. The test level was α = 0.05.
Basic information of experts
To construct the indicator framework for the transmission risk of MT-ZVL, a structured questionnaire was administered to 28 experts in August 2021. The questionnaire was distributed by a specially assigned person, and the experts were required to reply within one week. All experts were from the Centers for Disease Prevention and Control, and their basic details are provided in Table 2.
Table 2 Descriptive statistics of the experts
Expert positive coefficient and authority coefficient
During the first round of this study, two experts refused to participate in the consultation; thus, 30 questionnaires were delivered with 28 valid questionnaires returned. The questionnaire recovery rate was 93.3% and half of the experts made recommendations. In the second round, 28 questionnaires were delivered with 28 valid questionnaires returned. Thus, the positive coefficient of experts was 100.0%, which highlighted that the experts were highly motivated. The authority coefficients of the experts in the two rounds were 0.82 and 0.83, respectively.
The degree of coordination of expert opinions
The degree of dispersion of the expert consultation was expressed by the coordination coefficient (Kendall's W), χ2 test, and CV. The Kendall's W value of the tertiary indicator in the first round was 0.277 (χ2 = 294.582, P < 0.05), and CV ranged from 8% to 45%. In the second round, the W value was 0.187 (χ2 = 125.659, P < 0.05) and CV ranged from 14% to 34%. A high degree of recognition was demonstrated by the experts, and the outcome was satisfactory. The coordination coefficients for all levels are shown in Table 3.
Table 3 Results of two rounds of expert consultation on coordination degree
Deletion and modification of indicators
Through a literature review and expert consultations, an initial transmission risk assessment framework was established that included four primary indicators, 12 secondary indicators and 39 tertiary indicators (Additional file 1). Considering that the contents of some indicators were duplicated, after the first round, one secondary indicator and nine tertiary indicators were deleted, 12 indicators were merged into five, two indicators were modified, and two items were added. Finally, a framework containing 25 tertiary indicators in four dimensions was established (Additional file 1).
Comprehensive weights based on the Delphi and entropy weight methods
After two rounds of expert consultation, it was finally determined that the framework for the risk assessment of MT-ZVL included four primary indicators, 11 secondary indicators, and 25 tertiary indicators (Additional file 2). The degree of support for each indicator among the experts was expressed with the weighted importance score and the normalized weight; a larger score and weight indicated a higher importance of the indicator. The results of the expert consultations showed that the weighted importance score of each indicator averaged 3.115–4.322. The normalized weights of the primary indicators based on the Delphi method, ranked from largest to smallest, were biological factors (0.268), interventions (0.261), environmental factors (0.242), and social factors (0.229). The top four Delphi normalized weights of the secondary indicators were climatic features (0.122), geographical features (0.120), sandflies (0.098), and dogs (0.096). The top four tertiary indicators based on the Delphi and entropy weight methods were the density of sandflies (0.076), topography (0.057), the population density of dogs, including tethering (0.056), and the use of bed nets and other protective measures (0.056). The specific contents are shown in Tables 4, 5 and 6, and a comparison of the tertiary indicator weights obtained by the three weighting methods is shown in Fig. 2.
Table 4 Results of the primary indicators of the risk assessment of mountain-type zoonotic visceral leishmaniasis
Table 5 Results of secondary indicators of the risk assessment of mountain-type zoonotic visceral leishmaniasis
Table 6 Results of tertiary indicators of the risk assessment of MT-ZVL
A comparison of the weight coefficient based on the Delphi and entropy weight method
MT-ZVL is considered a canid zoonosis in which sandflies become infected primarily by feeding on the skin of canids, and humans are the final host of the parasites. The control of Leishmania infections in the domestic dog population is fundamental in preventing the spread of MT-ZVL between dogs and humans. MT-ZVL has been widely rampant in 10 PLADs in China, including Gansu, Qinghai, Ningxia, Sichuan, Shaanxi, Shanxi, Henan, Hebei, Liaoning, and Beijing [28]. Since the 1960s, MT-ZVL has been controlled through intensive intervention measures to eliminate infectious sources and control sandflies [5]. Nevertheless, natural foci still existed in the mountainous regions in central and western China. As the development of society and the improvement of the ecological environment progresses, MT-ZVL has re-emerged and the endemic areas have been extended over the past few decades [29]. Although some studies have been conducted on the risk factors of MT-ZVL transmission nationally and internationally [30, 31], there is still a lack of scientific and systematic transmission risk assessment indicators.
In this study, a three-level indicator framework for assessing the transmission risk of MT-ZVL was established, which consisted of four primary indicators, 11 secondary indicators, and 25 tertiary indicators. Among the tertiary indicators, the population density of sandflies provided the largest weight, followed by topography, the population density of dogs (including tethering), and the use of bed nets and other protective measures, suggesting that the population density of sandflies was the most critical indicator for the risk assessment of MT-ZVL transmission. The rapid resurgence of the MT-ZVL epidemic was closely related to the increase in the population density of sandflies [32], which was consistent with the surveillance results of MT-ZVL in China in recent years. For example, the density of sandflies in Yangquan City in 2021 was as high as 103 sandflies per human per hour, as determined by the human baiting method. This density was much higher than that in other MT-ZVL endemic counties. In the same year, a total of 108 MT-ZVL cases were reported in Yangquan City, accounting for 48% (108/224) of MT-ZVL cases reported in China, with an incidence of 0.77/10,000. Yangquan City was also the highest risk area for MT-ZVL in the country [9]. In 2016, MT-ZVL re-emerged in Linzhou in Henan province, where VL had been eliminated for more than thirty years [33, 34]. A recent study indicated that environmental (i.e., changes in grasslands/forests), meteorological (i.e., temperature and relative humidity), and socioeconomic (i.e., population density) factors contributed to the recurrence of VL in central China [15], and vector monitoring results showed that the local sandfly density was at a historically high level. In addition, two VL outbreaks occurred in Jiashi County, in Xinjiang Uygur Autonomous Region in 2008 and 2014, respectively, when the population density of sandflies was recorded at a historically high level [35]. Thus, the above surveillance results indicate that the population density of sandflies is an important indicator in risk assessments of MT-ZVL.
Additionally, topography was also considered an important indicator of MT-ZVL transmission risk. Historically, MT-ZVL was mainly distributed in hilly settings of Gansu, Sichuan, Shanxi, Shaanxi, western Henan, and northern Hebei, Qinghai, Ningxia, Liaoning, and the suburbs of Beijing [36]. Such areas were extensions of the Loess Plateau, which provides a suitable habitat for wild host reservoirs and sandflies to maintain the MT-ZVL transmission cycle [16, 37]. Thus, MT-ZVL was closely related to topography and the persistence of the natural habitat of MT-ZVL makes it difficult to prevent the transmission of MT-ZVL in such areas [38]. In recent years, with increases in global warming, the gradual improvement of the natural ecological environment, coupled with the implementation of ecological protection policies such as returning farmland to forests in China, the population density and distribution of wild host reservoirs and sandflies have gradually been restored, the infection rate of dogs has increased, and MT-ZVL has reemerged in previously-endemic counties [39, 40]. Surveillance studies showed that MT-ZVL re-emergence occurred in areas with hilly topographies. Such findings are consistent with the results of our study [8, 41].
The population density of dogs, including tethering, is an important indicator for the risk assessment of MT-ZVL transmission. Dogs are the main host reservoirs of MT-ZVL in China, and increases in dog population densities create more favorable reservoirs and higher transmission risks. A previous study also showed that the elimination of dogs in endemic areas dramatically reduced the number of human VL cases, confirming that infected dogs were the major source of human infection [10]. In the 2000s, as the number of dogs increased significantly, the incidence of MT-ZVL increased rapidly in Jiuzhaigou County, Sichuan province. However, when intervention measures such as dog culling and management were implemented, the incidence declined quickly [42, 43]. In addition, the vector, Ph. chinensis, has a small activity radius of usually no more than 300 m [28]. However, the free-range keeping of dogs increased their activity ranges and, with it, the risk of disease transmission. Furthermore, the use of bed nets and other protective measures was also a crucial indicator for risk assessment. Studies have shown that the use of bed nets and indoor insecticides effectively reduced the risk of human exposure to sandflies and significantly reduced the risk of infection [44]. However, the wild habitat of sandflies in MT-ZVL endemic areas reduced the effectiveness of protective measures such as bed nets [27]. Theoretically, the infection rates of dogs and sandflies are important indicators of Leishmania transmission and MT-ZVL risk, but the weights of these indicators in this study were low, which may be due to the difficulty or low operability of detecting the infection rate in sandflies and asymptomatic dogs [45].
The selection of experts was a crucial factor affecting the quality of the Delphi-entropy weight method [46]. To improve the quality of the consultation, all the experts selected had been engaged in VL prevention and control work with over 10 years of field experience. A total of 64.3% (18/28) of the experts held titles of deputy senior or above, and 39.3% (11/28) had a master's degree or above. The valid response rate of the two rounds of expert consultation was above 90%, indicating that the enthusiasm of the experts was high [47]. Additionally, a total of 39 opinions were put forward in the two rounds of consultation, indicating that the experts had a high degree of attention to and support for this study. High authority coefficients of 0.82 and 0.83, respectively, in the two rounds ensured the authority and reliability of the results. After two rounds of consultation, the importance scores of all indicators were 3.115–4.322 points, the CV was 14–34%, and the Kendall's W was 0.187. Compared with the first round, the CV was smaller, suggesting that the degree of fluctuation of expert opinions was small, the degree of coordination was improved, and the experts' opinions tended to be consistent. This study not only provides a reasonable, scientifically supported indicator framework for the evaluation of MT-ZVL risk but also identified several key indicators by calculating the comprehensive weights. The conclusions of this research may help policymakers to develop guidelines for an effective evaluation method of MT-ZVL risk that can be further validated in different endemic areas. This study may also assist official organizations in identifying potential risk factors to prevent the spread of the disease, as well as in integrating and rationalizing resources to ultimately improve the monitoring system in China. Limited by the research conditions, this indicator framework may have some deficiencies. It may be limited by the number of experts consulted, resulting in too few indicators or unbalanced weight coefficients. Because the experts had different backgrounds and experience, and the meaning of the second-round questionnaire was not fully explained, it was difficult to obtain a higher Kendall's W score. Additionally, all experts were from China, and further research covering a broader range of settings will enrich the results presented in this study.
The re-emergence of MT-ZVL has become a serious public health concern. In this study, a risk assessment indicator framework for MT-ZVL was constructed using the Delphi-entropy weight method for the first time in China, and it consisted of four primary indicators, 11 secondary indicators, and 25 tertiary indicators. Among these indicators, the density of sandflies, the topography, the population density of dogs and the use of bed nets were the most critical. The results of this study indicate that the framework can be used to formulate strategies and develop targeted interventions for "vectors-reservoirs-humans" aimed at reducing risk for MT-ZVL control.
Data generated or analyzed during this study are included in this published article and its additional information files.
Burza S, Croft SL, Boelaert M. Leishmaniasis. Lancet. 2018;392(10151):951–70.
Alvar J, Vélez ID, Bern C, Herrero M, Desjeux P, Cano J, et al. Leishmaniasis worldwide and global estimates of its incidence. PLoS ONE. 2012;7(5): e35671.
Guan LR. Current status of kala-azar and vector control in China. Bull World Health Organ. 1991;69(5):595–601.
Zhou ZB, Wang JY, Gao CH, Han S, Li YY, Zhang Y, et al. Contributions of the National Institute of Parasitic Diseases to the control of visceral leishmaniasis in China. Adv Parasitol. 2020;110:185–216.
Guan LR, Wu ZX. Historical experience in the elimination of visceral leishmaniasis in the plain region of Eastern and Central China. Infect Dis Poverty. 2014;3(1):10.
Guan LR, Shen WX. Recent advances in visceral leishmaniasis in China. Southeast Asian J Trop Med Public Health. 1991;22(3):291–8.
Guan LR, Qu JQ, Chai JJ. Leishmaniasis in China - present status of prevalence and some suggestions on its control. Endem Dis Bull Chin. 2000;15(3):49–52 (in Chinese).
Han S, Wu WP, Xue CZ, Ding W, Hou YY, Feng Y, et al. Endemic status analysis of visceral leishmaniasis in China from 2004 to 2016. Chin J Parasitol Parasit Dis. 2019;37(2):189–95 (In Chinese).
Zhou ZB, Li YY, Zhang Y, Li SZ. Prevalence of visceral leishmaniasis in China in 2019. Chin J Parasitol Parasit Dis. 2020;38(5):602–6 (In Chinese).
Wang JY, Ha Y, Gao CH, Wang Y, Yang YT, Chen HT. The prevalence of canine Leishmania infantum infection in western China detected by PCR and serological tests. Parasit Vectors. 2011;4:69.
Luo ZW, Zhou ZB, Gong YF, Feng JX, Li YY, Zhang Y, et al. Current status and challenges of visceral leishmaniasis in China. Chin J Parasitol Parasit Dis. 2022;40(2):146–52 (In Chinese).
Abdullah AYM, Dewan A, Shogib MRI, Rahman MM, Hossain MF. Environmental factors associated with the distribution of visceral leishmaniasis in endemic areas of Bangladesh: modeling the ecological niche. Trop Med Health. 2017;45:13.
Daoudi M, Boussaa S, Hafidi M, Boumezzough A. Potential distributions of phlebotomine sandfly vectors of human visceral leishmaniasis caused by Leishmania infantum in Morocco. Med Vet Entomol. 2020;34(4):385–93.
Wang XY, Xue JB, Xia S, Han S, Hu XK, Zhou ZB, et al. Distribution of suitable environments for Phlebotomus chinensis as the vector for mountain-type zoonotic visceral leishmaniasis - Six Provinces. China China CDC Wkly. 2020;2(42):815–9.
Zhao YZ, Jiang D, Ding FY, Hao MM, Wang Q, Chen S, et al. Recurrence and driving factors of visceral leishmaniasis in central China. Int J Environ Res Public Health. 2021;18(18):9535.
Gong YF, Hu XK, Zhou ZB, Zhu HH, Hao YW, Wang Q, et al. Ecological niche modeling-based prediction on transmission risk of visceral leishmaniasis in the extension region of Loess Plateau, China. Chin J Parasitol Parasit Dis. 2021;39(2):218–25 (In Chinese).
Shimozako HJ, Wu J, Massad E. Mathematical modelling for zoonotic visceral leishmaniasis dynamics: A new analysis considering updated parameters and notified human Brazilian data. Infect Dis Model. 2017;2(2):143–60.
Kolaczinski JH, Reithinger R, Worku DT, Ocheng A, Kasimiro J, Kabatereine N, et al. Risk factors of visceral leishmaniasis in East Africa: a case-control study in Pokot territory of Kenya and Uganda. Int J Epidemiol. 2008;37(2):344–52.
Yared S, Deribe K, Gebreselassie A, Lemma W, Akililu E, Kirstein OD, et al. Risk factors of visceral leishmaniasis: a case control study in north-western Ethiopia. Parasit Vectors. 2014;7:470.
Baxarias M, Homedes J, Mateu C, Attipa C, Solano-Gallego L. Use of preventive measures and serological screening tools for Leishmania infantum infection in dogs from Europe. Parasit Vectors. 2022;15(1):134.
Hernández-Leal MJ, Codern-Bové N, Pérez-Lacasta MJ, Cardona A, Vidal-Lancis C, Carles-Lavila M, et al. Development of support material for health professionals who are implementing shared decision-making in breast cancer screening: validation using the Delphi technique. BMJ Open. 2022;12(2): e052566.
Ma YM, Wu YM, Wu BY. Comprehensive evaluation of urbanization sustainable development in the Yangtze river delta—based on entropy method and quadrant diagram method. Econ Geogr. 2015;35(6):47–53 (In Chinese).
Shen YY, Liao K. An application of analytic hierarchy process and entropy weight method in food cold chain risk evaluation model. Front Psychol. 2022;13: 825696.
Chang B, Yang Y, Buitrago Leon GA, Lu Y. Effect of collaborative governance on medical and nursing service combination: an evaluation based on Delphi and entropy method. Healthcare (Basel). 2021;9(11):1456.
Mo XT, Xia S, Ai L, Yin SQ, Li XS, Zheng B. Study on a framework for risk assessment of imported malaria in China during malaria elimination. Chin Trop Med. 2021;21(6):505–11 (In Chinese).
Zou ZH, Sun JN, Ren GP. Study and application on the entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. Huanjing Kexue Xuebao. 2005;25(4):552–6 (In Chinese).
Contreras-Reyes JE. Lerch distribution based on maximum nonsymmetric entropy principle: Application to Conway's game of life cellular automaton. Chaos Solitons Fractals. 2021;151: 111272.
Guan LR, Zhou ZB, Jin CF, Fu Q, Chai JJ. Phlebotomine sand flies (Diptera: Psychodidae) transmitting visceral leishmaniasis and their geographical distribution in China: a review. Infect Dis Poverty. 2016;5:15.
Wang XY, Xia S, Xue JB, Zhou ZB, Li YY, Zhu ZL, et al. Transmission risks of mountain-type zoonotic visceral leishmaniasis - six endemic provincial-level administrative divisions, China, 2015–2020. China CDC Wkly. 2022;4(8):148–52.
Nackers F, Mueller YK, Salih N, Elhag MS, Elbadawi ME, Hammam O, et al. Determinants of visceral leishmaniasis: a case-control study in Gedaref state, Sudan. PLoS Negl Trop Dis. 2015;9(11):e0004187.
Sriwongpan P, Nedsuwan S, Manomat J, Charoensakulchai S, Lacharojana K, Sankwan J, et al. Prevalence and associated risk factors of Leishmania infection among immunocompetent hosts, a community-based study in Chiang Rai, Thailand. PLoS Negl Trop Dis. 2021;15(7):e0009545.
de Souza Fernandes W, de Oliveira Moura Infran J, de Oliveira E, Etelvina Casaril A, Petilim Gomes Barrios S, de Oliveira SL, et al. Phlebotomine sandfly (Diptera: Psychodidae) fauna and the association between climatic variables and the abundance of Lutzomyia longipalpis sensu lato in an intense transmission area for visceral leishmaniasis in central western Brazil. J Med Entomol. 2022;59(3):997–1007.
Wang HL, Yan QY, Shi DY. Investigation of the remnants of the sandflies in Henan Province. Henan J Prevent Med. 1978;1:13–6 (In Chinese).
Tang ZQ, Gao P, Zhou KH, Liu JQ, Guo XS. A survey on sandfly in Linzhou city of Henan province. Chin J Hyg Insect Equip. 2017;23(04):397–8 (In Chinese).
Yisilayin O, Wang DZ, Hou YY, Kaisuer K, Zuo XP, Ma ZC, et al. Surveillance of sand flies at desert-like area in Jiashi County of Xinjiang: Potential vectors of visceral leishmaniasis. Chin J Parasitol Parasit Dis. 2017;35(2):194–6 (In Chinese).
Hao YW, Hu XK, Gong YF, Xue JB, Zhou ZB, Li YY, et al. Spatio-temporal clustering of mountain-type zoonotic visceral leishmaniasis in China between 2015 and 2019. PLoS Negl Trop Dis. 2021;15(3): e0009152.
Lun ZR, Wu MS, Chen YF, Wang JY, Zhou XN, Liao LF, et al. Visceral leishmaniasis in China: an endemic disease under control. Clin Microbiol Rev. 2015;28(4):987–1004.
Chen H, Li K, Shi H, Zhang Y, Ha Y, Wang Y, et al. Ecological niches and blood sources of sand fly in an endemic focus of visceral leishmaniasis in Jiuzhaigou, Sichuan, China. Infect Dis Poverty. 2016;5:33.
Hao YW, Luo ZW, Zhao J, Gong YF, Li YY, Zhu ZL, et al. Transmission risk prediction and evaluation of mountain-type zoonotic visceral leishmaniasis in China based on climatic and environmental variables. Atmosphere. 2022;13(6):964.
Li YY, Luo ZW, Hao YW, Zhang Y, Yang LM, Li ZQ, et al. Epidemiological features and spatial-temporal clustering of visceral leishmaniasis in mainland China from 2019 to 2021. Front Microbiol. 2022;13: 959901.
Wang JY, Cui G, Chen HT, Zhou XN, Gao CH, Yang YT. Current epidemiological profile and features of visceral leishmaniasis in people's republic of China. Parasit Vectors. 2012;5:31.
Shang LM, Peng WP, Jin HT, Xu D, Zhong NN, Wang WL, et al. The prevalence of canine Leishmania infantum infection in Sichuan Province, southwestern China detected by real time PCR. Parasit Vectors. 2011;4(1):173.
Zhang JG, Zhang FN, Chen JP. Prevalence and control situation of canine borne Kala-azar in Sichuan province. J Prev Med Inf. 2011;27(11):869–74.
Saha S, Ramachandran R, Hutin YJ, Gupte MD. Visceral leishmaniasis is preventable in a highly endemic village in West Bengal, India. Trans R Soc Trop Med Hyg. 2009;103(7):737–42.
Santarém N, Sousa S, Amorim CG, de Carvalho NL, de Carvalho HL, Felgueiras Ó, et al. Challenges in the serological evaluation of dogs clinically suspect for canine leishmaniasis. Sci Rep. 2020;10(1):3099.
Liu M, Geng JZ, Gao J, Mei ZH, Wang XY, Wang SC, et al. Construction of a training content system for new nurses in cancer hospital based on competency. Front Surg. 2022;8: 833879.
Zhang M, Liu MX, Wang DW, Wang Y, Zhang WH, Yang HX, et al. Development of a risk assessment scale for perinatal venous thromboembolism in Chinese women using a Delphi-AHP approach. BMC Pregnancy Childbirth. 2022;22(1):426.
We thank all the experts for answering the specific questionnaire survey and providing valuable suggestions. We thank the staff of National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research). The authors especially thank Prof. Norbert Brattig, who reviewed the manuscript and provided substantial comments.
This research was funded by the National Key Research and Development Program of China (Nos. 2021YFC2300800, 2021YFC2300804); National Natural Science Foundation of China (No. 32161143036).
National Institute of Parasitic Diseases, Chinese Center for Disease Control and Prevention (Chinese Center for Tropical Diseases Research); NHC Key Laboratory of Parasite and Vector Biology; WHO Collaborating Centre for Tropical Diseases, National Center for International Research On Tropical Diseases, Shanghai, 200025, China
Zhuowei Luo, Zhengbin Zhou, Yuwan Hao, Jiaxin Feng, Yanfeng Gong, Yuanyuan Li, Yun Huang, Yi Zhang & Shizhu Li
ZWL, SZL and ZBZ designed the study. ZWL, YWH and YH collected data. ZWL, JXF, YFG, YYL and ZBZ analyzed data. ZWL interpreted data and wrote the manuscript. ZBZ, YZ and SZL revised the manuscript from preliminary draft to submission. All authors contributed to the article. All authors read and approved the final manuscript.
Correspondence to Shizhu Li.
Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the patients/participants or their legal guardians/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
The indicators for the two rounds of the Delphi consultation.
First round of primary indicators of the risk assessment of MT-ZVL.
Luo, Z., Zhou, Z., Hao, Y. et al. Establishment of an indicator framework for the transmission risk of the mountain-type zoonotic visceral leishmaniasis based on the Delphi-entropy weight method. Infect Dis Poverty 11, 122 (2022). https://doi.org/10.1186/s40249-022-01045-0
Mountain-type zoonotic visceral leishmaniasis
Transmission risk
Indicator framework
Entropy weight
\begin{document}
\title{Representations admitting two pairs of supplementary invariant spaces}
\begin{abstract} We examine the lattice generated by two pairs of supplementary vector subspaces of a finite-dimensional vector space by intersection and sum, with the aim of applying the results to the study of representations admitting two pairs of supplementary invariant spaces, or one pair and a reflexive form. We show that such a representation is a direct sum of three canonical sub-representations which we characterize. We then focus on holonomy representations with the same property. \end{abstract}
\section{Introduction}
A famous paper of Gelfand and Ponomarev~\cite{gp} classifies the systems of four vector subspaces of a finite-dimensional vector space. We focus on systems consisting of two pairs of supplementary subspaces and explore the lattice they generate under sum and intersection. The aim is to apply the results to lattices of invariant subspaces of finite-dimensional representations, and in particular of holonomy representations of torsion-free connections preserving a reflexive form.
\section{Lattice generated by two pairs of supplementary spaces} We suppose throughout the paper that $\mathbb{K}$ is a commutative field of characteristic different from 2. \subsection{Definitions}\index{pair of supplementary subspaces} We call a {\em decomposition of a finite-dimensional $\mathbb{K}$-vector space $E$ into $2$ direct sums} a quintuple ${\mathcal V}= (E, V_1, V_2, W_1, W_2)$ where $V_1, V_2, W_1$ and $W_2$ are four vector subspaces of the finite-dimensional vector space $E$ satisfying $V_1 \oplus V_2 = W_1 \oplus W_2=E$.
\begin{ex} In particular, if $E$ carries a non-degenerate reflexive structure (i.e. for us a non-degenerate symmetric or antisymmetric bilinear form) and if $E=V_1\oplus V_2$, then $(E,V_1,V_2,V_1^\perp,V_2^\perp)$ is a decomposition of $E$ into $2$ direct sums. \end{ex}
Associated to a decomposition of a finite-dimensional $\mathbb{K}$-vector space $E$ into $2$ direct sums ${\mathcal V}=(E, V_1, V_2, W_1, W_2)$ is a dual decomposition into two direct sums: ${\mathcal V}^*=(E^*, W'_1, W'_2, V'_1, V'_2)$, with $X':=\{u\in E^* \; \vrule \; u(X)=0 \;\}$.
If $E=E_1\oplus E_2$ is a direct sum, let $p_{E_1}^{E_2}$ denote the projection onto $E_1$ parallel to $E_2$. To simplify notation, we write $p_i$ for the projection onto $V_i$ parallel to $V_{\tau(i)}$ and $q_i$ for the projection onto $W_i$ parallel to $W_{\tau(i)}$, where $\tau$ denotes the transposition of $\{1,2\}$ (so $\tau(1)=2$ and $\tau(2)=1$). We define the map $\theta_{\mathcal V}: E\to E$ by $\theta_{\mathcal V} = p_{W_1}^{W_2} \circ p_{V_1}^{V_2} - p_{V_1}^{V_2} \circ p_{W_1}^{W_2}$. To simplify notation, we write $\theta$ for $\theta_{\mathcal V}$ when it is clear which $\mathcal V$ is meant.
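To fix ideas, here is a small worked example of $\theta$; the basis $(e_1,e_2)$ and the particular subspaces below are chosen ad hoc for illustration and are not used elsewhere.

\begin{ex} Let $E=\mathbb{K}^2$ with basis $(e_1,e_2)$ and take $V_1=\langle e_1\rangle$, $V_2=\langle e_2\rangle$, $W_1=\langle e_1+e_2\rangle$, $W_2=\langle e_2\rangle$. Then $p_{V_1}^{V_2}(x_1e_1+x_2e_2)=x_1e_1$ and $p_{W_1}^{W_2}(x_1e_1+x_2e_2)=x_1(e_1+e_2)$, so $\theta(x_1e_1+x_2e_2)=x_1e_2$, i.e. $\theta(e_1)=e_2$ and $\theta(e_2)=0$. One checks directly that $\theta(V_1)\subset V_2$ and $\theta(W_1)\subset W_2$, in accordance with Lemma~\ref{swap} below, and that $\theta^2=0$ with $\ker\theta=\langle e_2\rangle=V_2\cap W_2$. \end{ex}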
It is easy to verify: \begin{lem} $\theta = p_{W_1}^{W_2} \circ p_{V_1}^{V_2} - p_{V_1}^{V_2} \circ p_{W_1}^{W_2} = p_{W_2}^{W_1} \circ p_{V_2}^{V_1} - p_{V_2}^{V_1} \circ p_{W_2}^{W_1} = p_{V_2}^{V_1} \circ p_{W_1}^{W_2} - p_{W_1}^{W_2} \circ p_{V_2}^{V_1} = p_{V_1}^{V_2} \circ p_{W_2}^{W_1} - p_{W_2}^{W_1} \circ p_{V_1}^{V_2}$ \end{lem}
We have also: \begin{lem}\label{swap} $\theta(V_i) \subset V_{\tau(i)}$ and $\theta(W_i) \subset W_{\tau(i)}$ \end{lem}
\begin{lem} If ${\mathcal V}^*$ is the dual system of $\mathcal V$ then $\theta_{{\mathcal V}^*}=(\theta_{\mathcal V})^*$ \end{lem} \begin{proof} We have: \begin{eqnarray*} (\theta_{\mathcal V})^* & = & (p_{W_1}^{W_2}\circ p_{V_1}^{V_2} -p_{V_1}^{V_2} \circ p_{W_1}^{W_2})^*\\ & = & (p_{W_1}^{W_2}\circ p_{V_1}^{V_2})^* - (p_{V_1}^{V_2} \circ p_{W_1}^{W_2})^*\\ & = & (p_{V_1}^{V_2})^*\circ (p_{W_1}^{W_2})^* -(p_{W_1}^{W_2})^* \circ (p_{V_1}^{V_2})^*\\ & = & p_{V'_2}^{V'_1}\circ p_{W'_2}^{W'_1} -p_{W'_2}^{W'_1} \circ p_{V'_2}^{V'_1}\\ & = & \theta_{{\mathcal V}^*} \end{eqnarray*} \end{proof}
\subsection{Canonical decomposition of $E$}
\begin{defi} Let us define a sequence of vector subspaces of $E$: $F(0):=\{0\}$, $F(n+1):=\sum_{i,j} ((F(n)+V_i) \cap (F(n)+W_j))$ for $n\ge 0$. \end{defi}
$(F(n))_n$ is an increasing sequence of vector subspaces of the finite-dimensional vector space $E$, and is therefore eventually stationary. Let us write $F$ or $F(\infty)$ for the space $\sum_n F(n)$. $F$ is the smallest fixed point of the increasing mapping $X \mapsto \sum_{i,j} ((X+V_i) \cap (X+W_j))$, and $F$ is the smallest common fixed point of the four increasing mappings $X \mapsto (X+V_i) \cap (X+W_j)$ for $i,j\in \{1,2\}$.
\begin{lem}\label{F1sd} $F(1)=\bigoplus_{i,j} V_i \cap W_j$ \end{lem} \begin{proof} By definition we have $F(1)=\sum_{i,j} V_i \cap W_j$, and it is easy to see that the sum is necessarily direct. \end{proof}
\begin{defi} Let us define a sequence of vector subspaces of $E$: $\tilde F(0):=E$ and $\tilde F(n+1):=\bigcap_{i,j} ((\tilde F(n) \cap V_i) + (\tilde F(n) \cap W_j))$ for $n\ge 0$. \end{defi}
$(\tilde F(n))_n$ is a decreasing sequence of vector subspaces of the finite-dimensional vector space $E$ and so stationary. Let $\tilde F(\infty)$, or simply $\tilde F$, be the space $\bigcap_n \tilde F(n)$. $\tilde F$ is the largest fixed point of the decreasing mapping $X \mapsto \bigcap_{i,j} ((X \cap V_i) + (X \cap W_j))$, and $\tilde F$ is the largest common fixed point of the four decreasing mappings $X \mapsto (X \cap V_i) + (X \cap W_j)$, for $i,j\in\{1,2\}$.
\begin{prop}\label{kerimtheta} For every non-negative integer $n$ \begin{enumerate} \item $\ker \theta^n=F(n)$ \item $\mbox{\textnormal{im}} \theta^n=\tilde F(n)$ \end{enumerate} \end{prop}
\begin{proof} \begin{enumerate} \item Let us show first that $\ker \theta=F(1)=V_1 \cap W_1 + V_1 \cap W_2 + V_2 \cap W_1 + V_2 \cap W_2$. If $x\in V_i \cap W_j$, $\theta(x)=(-1)^{i+j}((p_i \circ q_j)(x) - (q_j \circ p_i)(x))=(-1)^{i+j}(x - x) =0$. As $\theta$ is linear, $\theta(F(1))=0$.
Conversely, if $\theta(x)=0$, we have $(q_1 \circ p_1 - p_1 \circ q_1)(x)=0$ and so $(q_1 \circ p_1)(x)=(p_1 \circ q_1)(x)$. We have $(q_1 \circ p_1)(x)\in V_1 \cap W_1$. Similarly $(q_j \circ p_i)(x)=(p_i \circ q_j)(x)$ and so $(q_j \circ p_i)(x)\in V_i \cap W_j$. We deduce $x=q_1(x)+q_2(x)=\sum_{i,j} (q_j \circ p_i)(x)\in \sum_{i,j} V_i \cap W_j=F(1)$.
Let us show $F(n) \subset \ker \theta^n$. For $n=0$ it is clear. If $n=k+1$, suppose $F(k)\subset\ker \theta^k$. Let $x\in F(n)=\sum_{i,j} ((F(k)+V_i) \cap (F(k)+W_j))$. $x$ can be written $x_{11}+x_{22}+x_{12}+x_{21}$ with $x_{ij}\in ((F(k)+V_i) \cap (F(k)+W_j))$, and $x_{ij}= y_{ij} + z_{ij} = t_{ij} + u_{ij}$ with $y_{ij},t_{ij}\in F(k)$, $z_{ij}\in V_i$ and $u_{ij}\in W_j$. We have by the induction hypothesis $\theta^k(y_{ij})=0$ and $\theta^k(t_{ij})=0$. By iterated application of Lemma~\ref{swap} we have $\theta^k(z_{ij})\in V_{\tau^k(i)}$ and $\theta^k(u_{ij})\in W_{\tau^k(j)}$. As a consequence $\theta^k(x_{ij})\in V_{\tau^k(i)}\cap W_{\tau^k(j)}\subset F(1)$ and so $\theta^k(x)\in F(1)=\ker \theta$, giving $\theta^{k+1}(x)=0$.
Let us show $\ker \theta^n \subset F(n)$. For $n=0$, $\ker \theta^0= \{0\}=F(0)$. For $n=k+1$, suppose $\ker \theta^k \subset F(k)$. Let $x$ be such that $\theta^n(x)=0$. We have then $\theta^k(\theta(x))=0$. By induction hypothesis $\theta(x)\in F(k)$. So $(q_j \circ p_i)(x)-(p_i \circ q_j)(x)\in F(k)$ and as a consequence: $(q_j \circ p_i)(x)\in (F(k)+V_i)$. As $(q_j \circ p_i)(x)\in W_j$, $(q_j \circ p_i)(x)\in (F(k)+V_i)\cap W_j \subset (F(k)+V_i)\cap (F(k)+W_j)$. Finally $x=\sum_{i,j} (q_j \circ p_i)(x) \in \sum_{i,j} ((F(k)+V_i)\cap (F(k)+W_j))=F(n)$.
\item
To show that $\mbox{\textnormal{im}} \theta^n=\tilde F(n)$, we will use duality\footnote{We use the following lemma which is easy to show: For $\Psi\in {\mathcal L}(E,F)$, $\ker \Psi^*=(\mbox{\textnormal{im}} \Psi)'$ and $\mbox{\textnormal{im}} \Psi^*=(\ker \Psi)'.$ }:
In finite dimension it is easy to show by induction that for every $n$, $(F_{\mathcal V}(n))'=\tilde F_{{\mathcal V}^*}(n)$ and $(\tilde F_{\mathcal V}(n))'=F_{{\mathcal V}^*}(n)$.
So we have: $(\tilde F_{\mathcal V}(n))''=(F_{{\mathcal V}^*}(n))'=(\ker \theta^n_{{\mathcal V}^*})'=(\ker (\theta^*_{\mathcal V})^n)'=(\ker (\theta_{\mathcal V}^n)^*)'=(\mbox{\textnormal{im}} \theta^n_{\mathcal V})''$. By injectivity in finite dimension of $''$ we have $\mbox{\textnormal{im}} \theta^n_{\mathcal V}=\tilde F_{\mathcal V}(n)$.
\end{enumerate} \end{proof}
\begin{prop} \begin{enumerate}
\item $\forall n, F(n+1)=\theta^{-1}(F(n)),$
\item $\forall n, \tilde F(n+1)=\theta(\tilde F(n)).$ \end{enumerate} \end{prop} \begin{proof} We have $F(n+1)=\ker(\theta^{n+1})=\theta^{-1}(\ker(\theta^n))=\theta^{-1}(F(n))$ and $\theta(\tilde F(n))=\theta(\mbox{\textnormal{im}}(\theta^n))=\mbox{\textnormal{im}}(\theta^{n+1})=\tilde F(n+1)$.\end{proof}
From the first point one can deduce: $\forall n, \theta(F(n+1))\subset F(n)$.
We recall without proof the following well known result: \begin{prop} If $E$ is a finite-dimensional vector space and $\Psi$ an endomorphism of $E$ then the two subspaces of $E$: $E_N=\sum_n \ker(\Psi^n)$ and $E_I=\bigcap_n \mbox{\textnormal{im}}(\Psi^n)$ are stable by $\Psi$ and we have $E=E_N \oplus E_I$. Moreover $\Psi\vrule_{E_N}$ is nilpotent and $\Psi\vrule_{E_I}$ is invertible. \end{prop}
The result applied to $E$ and the endomorphism $\theta$ gives us, for $F:=\sum_n F(n)$ and $\tilde F:=\bigcap_n \tilde F(n)$: $E=F\oplus \tilde F$. Moreover $F$ and $\tilde F$ are stable under $\theta$, $\theta\vrule_F$ is nilpotent and $\theta\vrule_{\tilde F}$ is invertible.
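As an illustration of this decomposition (the subspaces below are again chosen only for this purpose), consider two extreme cases.

\begin{ex} In the $2$-dimensional example given after the definition of $\theta$ (where $V_1=\langle e_1\rangle$, $V_2=W_2=\langle e_2\rangle$ and $W_1=\langle e_1+e_2\rangle$), we have $F(1)=\ker\theta=\langle e_2\rangle=V_2\cap W_2$ and $F(2)=\ker\theta^2=E$, so $F=E$ and $\tilde F=\{0\}$. If instead we take $V_1=\langle e_1\rangle$, $V_2=\langle e_2\rangle$, $W_1=\langle e_1+e_2\rangle$ and $W_2=\langle e_1-e_2\rangle$ (recall that the characteristic of $\mathbb{K}$ is different from $2$), then all the intersections $V_i\cap W_j$ are zero, so $F(1)=\{0\}$, $\theta$ is invertible, $F=\{0\}$ and $\tilde F=E$; one checks directly that $\tilde F=(\tilde F\cap V_1)\oplus(\tilde F\cap V_2)=(\tilde F\cap V_i)\oplus(\tilde F\cap W_j)$ for all $i,j$, in accordance with Proposition~\ref{tildefv1v2} below. \end{ex}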
We say that the subspace $V$ of $E$ is {\em homogeneous} with respect to the sum $E_1+ E_2$, where $E_1$ and $E_2$ are vector subspaces of $E$, if $V\cap(E_1+E_2)=(V\cap E_1)+(V\cap E_2)$. Similarly, we say that $V$ is {\em co-homogeneous} with respect to the intersection $E_1\cap E_2$ if $V+(E_1\cap E_2)=(V+E_1)\cap (V+E_2)$.
\begin{prop}\label{tildefv1v2} \begin{enumerate} \item $(\tilde F\cap V_1) \oplus (\tilde F\cap V_2)=\tilde F$ \item $(\tilde F\cap W_1) \oplus (\tilde F\cap W_2)=\tilde F$ \item $\forall i,j, (\tilde F\cap V_i) \oplus (\tilde F\cap W_j)=\tilde F$ \end{enumerate} \end{prop} \begin{proof} Let us start by the proof of point 3. We have: $V_i \cap W_j \subset F(1)$, which gives us $(\tilde F \cap V_i)\cap(\tilde F \cap W_j)\subset \tilde F \cap F(1)=\{0\}$. From $\tilde F=(\tilde F \cap V_i)+(\tilde F \cap W_j)$ we deduce then $\tilde F=(\tilde F \cap V_i)\oplus (\tilde F \cap W_j)$.
Let us note $n_i=\dim(\tilde F\cap V_i)$ and $m_j:=\dim(\tilde F\cap W_j)$. Point $3$ implies then that $n_i+m_j=\dim \tilde F$ (*). This gives us $n_1=n_2$ and $m_1=m_2$.
As $V_1\cap V_2=\{0\}$, $(\tilde F \cap V_1)\cap(\tilde F \cap V_2)=\{0\}$. As $(\tilde F \cap V_1)\oplus (\tilde F \cap V_2) \subset \tilde F$, we have: $2 n_1=n_1+n_2\le \dim \tilde F$. (**) Similarly $(\tilde F \cap W_1)\oplus (\tilde F \cap W_2) \subset \tilde F$ and $2 m_1=m_1+m_2\le \dim \tilde F$. (***)
From (*),(**) and (***) follows that $2 n_i=2 m_j=\dim \tilde F$ and that $(\tilde F \cap V_1)\oplus (\tilde F \cap V_2)= \tilde F$ and $(\tilde F \cap W_1)\oplus (\tilde F \cap W_2)= \tilde F$. \end{proof}
We can refine the two first points of the proposition as follows:
\begin{prop}\label{tildefnv1v2} For every non negative integer $n$ we have: \begin{enumerate} \item $(\tilde F(n)\cap V_1) \oplus (\tilde F(n)\cap V_2)=\tilde F(n)$ \item $(\tilde F(n)\cap W_1) \oplus (\tilde F(n)\cap W_2)=\tilde F(n)$ \end{enumerate} \end{prop} \begin{proof} We will just prove the first point, the proof of the second point being similar.
By induction on $n$: For $n=0$ we have indeed $\tilde F(0)=E=V_1 \oplus V_2$. Suppose the result true for $n$. Evidently we have the inclusion $(\tilde F(n+1)\cap V_1) \oplus (\tilde F(n+1)\cap V_2) \subset \tilde F(n+1)$. Let $a\in \tilde F(n+1)$. We can write $a=x+y$ with $x\in V_1$ and $y\in V_2$. Let us show that $x,y\in\tilde F(n+1)$.
As $a\in \tilde F(n+1) \subset \tilde F(n)$ and $\tilde F(n)$ is homogeneous with respect to $V_1 \oplus V_2$ we have: $x,y\in \tilde F(n)$.
By definition of $\tilde F(n+1)$, we can write $a=x_{ij}+y_{ij}$ with $x_{ij}\in \tilde F(n) \cap V_i$ and $y_{ij}\in \tilde F(n) \cap W_j$. We deduce that $x$ is an element of $\tilde F(n+1)=\bigcap_{i,j} ((\tilde F(n) \cap V_i)+(\tilde F(n) \cap W_j))$ by writing: $x=x+0=x+0=(x_{21}-y)+y_{21}=(x_{22}-y)+y_{22}$. A similar argument shows that $y\in \tilde F(n+1)$.\end{proof}
We will see in the following that one can decompose canonically $F(n)$.
Let us write $e=id_{\{1,2\}}$ and $\tau=(1\, 2)$ for the elements of the group ${\mathcal S}_2$ of permutations of the set $\{1,2\}$. For $i=1,2$ we will write $\bar i:=\tau(i)$. For $\sigma\in {\mathcal S}_2$, we write $\bar \sigma$ for the element of ${\mathcal S}_2$ such that $\{\sigma, \bar \sigma\}={\mathcal S}_2$.
\begin{defi} Let $F_\sigma(0)=0$ and $F_\sigma(n+1)=\sum_i ((F_\sigma(n)+V_i)\cap (F_\sigma(n)+W_{\sigma(i)}))$.
One can see that $(F_\sigma(n))_n$ is an increasing sequence of vector subspaces of $E$, and hence eventually stationary (as $E$ is finite-dimensional). Let us write $F_\sigma(\infty)$, or simply $F_\sigma$, for the space $\sum_n F_\sigma(n)$, {\em i.e.} the maximal element of this sequence. \end{defi}
Let us remark on the other hand that Lemma~\ref{F1sd} implies that $F_e(1)=(V_1\cap W_1) \oplus (V_2 \cap W_2)$, $F_\tau(1)=(V_1\cap W_2) \oplus (V_2 \cap W_1)$ and $F(1)=F_e(1)\oplus F_\tau(1)$.
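The two extreme situations behind this splitting are easy to picture; the following elementary example (introduced here only as an illustration, with ad hoc choices of subspaces) may help.

\begin{ex} Let $E=\mathbb{K}^2$ with basis $(e_1,e_2)$ and $V_1=\langle e_1\rangle$, $V_2=\langle e_2\rangle$. If $W_1=V_1$ and $W_2=V_2$, then $F_e(1)=(V_1\cap W_1)\oplus(V_2\cap W_2)=E$, so $F_e=E$ and $F_\tau=\tilde F=\{0\}$. If instead $W_1=V_2$ and $W_2=V_1$, then $F_\tau(1)=(V_1\cap W_2)\oplus(V_2\cap W_1)=E$, so $F_\tau=E$ and $F_e=\tilde F=\{0\}$. In both cases $\theta=0$ and $F(1)=E$. \end{ex}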
\begin{prop}\label{thetafs} $\forall n, \theta(F_\sigma(n+1))\subset F_\sigma(n)$. \end{prop} \begin{proof} By induction: It is true for $n=0$, since $F_\sigma(1)\subset F(1)=\ker\theta$. Suppose it is true up to order $n$; we show that $\theta(F_\sigma(n+2))\subset F_\sigma(n+1)$. Let $x\in F_\sigma(n+1), y\in V_1, z\in F_\sigma(n+1), t\in V_2, x'\in F_\sigma(n+1), y'\in W_{\sigma(1)}, z'\in F_\sigma(n+1), t'\in W_{\sigma(2)}$ be such that $x+y=x'+y'$ and $z+t=z'+t'$; every element of $F_\sigma(n+2)$ is of the form $(x+y)+(z+t)$.
Let us show that $\theta(x+y+z+t)\in F_\sigma(n+1)$. Let us recall first that $\theta(V_i)\subset V_{\tau(i)}$ and $\theta(W_j)\subset W_{\tau(j)}$. We have consequently: $\theta(x)+\theta(y)=\theta(x')+\theta(y')\in (F_\sigma(n)+V_2)\cap (F_\sigma(n)+W_{\sigma(2)})$ and $\theta(z)+\theta(t)=\theta(z')+\theta(t')\in (F_\sigma(n)+V_1)\cap (F_\sigma(n)+W_{\sigma(1)})$. This gives us $\theta(x+y+z+t)=\theta(x)+\theta(y)+\theta(z)+\theta(t)\in F_\sigma(n+1)$. \end{proof}
We will need the following lemma: \begin{lem}\label{AB} Let $A_0, A, B_0, B$ be four vector subspaces of $E$ such that $A_0\subset A$ and $B_0 \subset B$. We then have $$(A+B_0)\cap(A_0+B)=A_0+B_0+(A\cap B).$$ \end{lem} \begin{proof} The inclusion "$\supset$" is clear, as each of $A+B_0$ and $A_0+B$ contains each of $A_0$, $B_0$ and $A\cap B$.
For the inclusion "$\subset$" let $x\in A$, $y_0\in B_0$, $x_0\in A_0$, $y\in B$ such that $x+y_0=x_0+y$. One deduces $x-x_0=y-y_0 \in A\cap B$. So $x+y_0=x_0+y_0+(x-x_0)\in A_0+B_0+(A\cap B).$ \end{proof}
\begin{prop}\label{Fscohomogene} \begin{enumerate} \item $F_\sigma(n)$ is co-homogeneous with respect to the direct sum $V_1\oplus V_2$ or equivalently $(F_\sigma(n)+V_1)\cap (F_\sigma(n)+V_2)=F_\sigma(n)$.
\item $F_\sigma(n)$ is co-homogeneous with respect to the direct sum $W_1\oplus W_2$ or equivalently $(F_\sigma(n)+W_1)\cap (F_\sigma(n)+W_2)=F_\sigma(n)$. \end{enumerate} \end{prop} \begin{proof} We will prove the first point, the proof for the second being similar.
By induction: For $n=0$ it is clear. Suppose the result true at order $n$. It is evident that $F_\sigma(n+1)\subset(F_\sigma(n+1)+V_1)\cap(F_\sigma(n+1)+V_2)$.
Let's prove the other inclusion: We have:
\begin{eqnarray*}F_\sigma(n+1)+V_1 & = & \sum_i ((F_\sigma(n)+V_i)\cap(F_\sigma(n)+W_{\sigma(i)}))+V_1\\ & \subset & \underbrace{F_\sigma(n)+V_1}_A+\underbrace{(F_\sigma(n)+V_2)\cap(F_\sigma(n)+W_{\sigma(2)})}_{B_0}. \end{eqnarray*} Similarly \begin{eqnarray*}F_\sigma(n+1)+V_2 & \subset & \underbrace{(F_\sigma(n)+V_1)\cap(F_\sigma(n)+W_{\sigma(1)})}_{A_0}+\underbrace{F_\sigma(n)+V_2}_B. \end{eqnarray*} By application of lemma~\ref{AB} and the induction hypothesis we obtain: \begin{eqnarray*}(F_\sigma(n+1)+V_1)\cap (F_\sigma(n+1)+V_2) & \subset & F_\sigma(n+1) +(F_\sigma(n)+V_1) \cap (F_\sigma(n)+V_2)\\
& \stackrel{\mbox{\scriptsize ind. hyp.}}{\subset} & F_\sigma(n+1) + F_\sigma(n)\\
& \subset & F_\sigma(n+1). \end{eqnarray*}\end{proof}
\begin{prop}\label{Fsfbs1} $\forall n, F_\sigma(n)\cap F_{\bar \sigma}(1)=\{0\}$. \end{prop} \begin{proof} Let's make the proof for $\sigma=e$, the case $\sigma=\tau$ being analogous.
By induction: It is true up to order $n=0$. Suppose it is true up to order $n$: Let $x\in F_e(n), y\in V_1, z\in F_e(n), t\in V_2, x'\in F_e(n), y'\in W_1, z'\in F_e(n), t'\in W_2, \gamma\in V_1\cap W_2, \delta\in V_2\cap W_1$ such that $x+y=x'+y'$, $z+t=z'+t'$ and $(x+y)+(z+t)=\gamma + \delta\in F_e(n+1)\cap F_\tau(1)$.
One has then $x+(y-\gamma)\in F_e(n)+V_1$, $-(z+(t-\delta))\in F_e(n)+V_2$, and $x+(y-\gamma)=-(z+(t-\delta))$. By application of Proposition~\ref{Fscohomogene} one obtains $y-\gamma\in F_e(n)$ and $t-\delta\in F_e(n)$. One deduces $x+y=(x+(y-\gamma))+\gamma\in F_e(n)+(V_1\cap W_2)$ and also $z+t=(z+(t-\delta))+\delta\in F_e(n)+(V_2\cap W_1)$. Analogously one proves that $x'+y'=(x'+(y'-\delta))+\delta\in F_e(n)+(V_2\cap W_1)$ and $z'+t'=(z'+(t'-\gamma))+\gamma\in F_e(n)+(V_1\cap W_2)$. By a new application of Proposition~\ref{Fscohomogene} (possible because $V_1\cap W_2\subset V_1$ and $V_2\cap W_1\subset V_2$) one obtains that $x+y=x'+y'\in F_e(n)$ and similarly $z+t=z'+t'\in F_e(n)$. So $(x+y)+(z+t)\in F_e(n)\cap F_\tau(1)$. By the induction hypothesis one has then $(x+y)+(z+t)=0$. \end{proof}
\begin{corr}\label{interfeft} If $n\ge 1$ then $F_\sigma(n)\cap F(1)=F_\sigma(1)$ \end{corr} \begin{proof} It is clear that $F_\sigma(1)\subset F_\sigma(n)\cap F(1)$. For the other inclusion, let us remark first that $F_\sigma(n)\cap F(1)=F_\sigma(n)\cap (F_\sigma(1)\oplus F_{\bar \sigma}(1))$. Let $x\in F_\sigma(n)\cap F(1)$ and write $x=a+b$ with $a\in F_\sigma(1)$ and $b\in F_{\bar \sigma}(1)$. Then $x-a=b\in F_\sigma(n)\cap F_{\bar \sigma}(1)=\{0\}$ by Proposition~\ref{Fsfbs1}. So we have $x=a\in F_\sigma(1)$. \end{proof}
\begin{prop} $F_e(n)\cap F_\tau(n)=\{0\}$ \end{prop} \begin{proof} By induction: It is true for $n=0$. Suppose it is true up to order $n$. Let $x\in F_e(n+1)\cap F_\tau(n+1)$; one deduces then $\theta(x)\in \theta(F_e(n+1)) \cap \theta(F_\tau(n+1))\stackrel{\mbox{\scriptsize prop.~\ref{thetafs}}}{\subset} F_e(n)\cap F_\tau(n)=\{0\}$. From this we obtain $x\in F(1)\cap F_e(n+1) \cap F_\tau(n+1)=(F(1)\cap F_e(n+1)) \cap (F(1)\cap F_\tau(n+1))\stackrel{\mbox{\scriptsize corr.~\ref{interfeft}}}{=}F_e(1)\cap F_\tau(1)=\{0\}$. \end{proof}
\begin{prop}\label{feftfn} $\forall n, F_e(n)\oplus F_\tau(n)=F(n)$ \end{prop}
Let us start by proving two lemmas:
\begin{lem} $\forall i,n, F_\sigma(n)\subset V_i+W_{\bar \sigma(i)}$ \end{lem} \begin{proof} Let's give the proof for $\sigma=\tau$. The proof is essentially the same in the case $\sigma=e$.
By induction on $n$: For $n=0$ we have $F_\tau(0)=\{0\} \subset V_i+W_i$. Suppose the result true up to order $n$. $(F_\tau(n)+V_i)\cap(F_\tau(n)+W_{\bar i})\subset F_\tau(n)+V_i$ and $(F_\tau(n)+V_{\bar i})\cap(F_\tau(n)+W_i)\subset F_\tau(n)+W_i$. By summation of the two inclusions one obtains $F_\tau(n+1)\subset F_\tau(n)+V_i+F_\tau(n)+W_i$. The latter is included in $V_i+W_i$ by induction hypothesis. \end{proof}
\begin{lem}\label{viwjhomogenefeft} $\forall n,i,j, V_i+W_j$ is homogeneous with respect to the (direct) sum $F_e(n)+F_\tau(n)$. \end{lem} \begin{proof} Let's make the proof for $i=j=1$, the proof being similar in the other cases.
The inclusion $(V_1+W_1)\cap F_e(n)+(V_1+W_1)\cap F_\tau(n) \subset (V_1+W_1) \cap (F_e(n)+F_\tau(n))$ being trivial, let us show the other inclusion: Let $\alpha\in F_e(n)$, $\beta\in F_\tau(n)$ with $\alpha+\beta\in V_1+W_1$. By the inclusion $F_\tau(n)\subset V_1+W_1$ obtained by the preceding lemma one has: $\beta\in F_\tau(n)\cap (V_1+W_1)$. As $\alpha+\beta\in V_1+W_1$ and $\beta\in V_1+W_1$ one has $\alpha=(\alpha+\beta)-\beta\in F_e(n)\cap (V_1+W_1)$.\end{proof}
\begin{proof}{\em proposition~\ref{feftfn}: } By induction on $n$. For $n=0$ it is evident. Suppose the result proved up to order $n$.
Let us recall that $F(n+1)=\ker \theta^{n+1}$. Let $x\in F(n+1)$. By the induction hypothesis there exist $\alpha \in F_e(n)$, $\beta\in F_\tau(n)$ such that $\theta(x)=\alpha+\beta$. Set $v_{ij}=(p_i\circ q_j)(x)$ and $w_{ij}=(q_j\circ p_i)(x)$. Let us remark that $v_{ij}\in V_i$ and $w_{ij}\in W_j$, and that $w_{ij}=v_{ij}+\varepsilon_{ij}\theta(x)=v_{ij}+\varepsilon_{ij}(\alpha+\beta)$ for a sign $\varepsilon_{ij}\in\{\pm 1\}$ (by the lemma following the definition of $\theta$); since $F_e(n)$ and $F_\tau(n)$ are subspaces, we may, after replacing $\alpha$ and $\beta$ by $\varepsilon_{ij}\alpha$ and $\varepsilon_{ij}\beta$ in the treatment of the pair $(i,j)$, assume that $w_{ij}=v_{ij}+\alpha+\beta$. So one has more precisely $w_{ij}\in (F_e(n)+F_\tau(n)+V_i)\cap W_j$. As in the proof of Proposition~\ref{kerimtheta}, let us remark that $x=\sum_{i,j} w_{ij}$. If one proves that $w_{ij}\in F_e(n+1) + F_\tau(n+1)$, the proposition is proved.
As $\alpha+\beta=-v_{ij}+w_{ij}\in (F_e(n)\oplus F_\tau(n))\cap (V_i+W_j)$ one can apply lemma~\ref{viwjhomogenefeft} in order to obtain that $\alpha_{ij}\in V_i, \alpha'_{ij}\in W_j$ such that $\alpha=\alpha_{ij}+\alpha'_{ij}$ and $\beta_{ij}\in V_i, \beta'_{ij}\in W_j$ such that $\beta=\beta_{ij}+\beta'_{ij}$. One has: $\alpha'_{ij}=\alpha-\alpha_{ij}\in (F_e(n)+V_i)\cap W_j\subset F_e(n+1)$ and $\beta'_{ij}=\beta-\beta_{ij}\in (F_\tau(n)+V_i)\cap W_j\subset F_\tau(n+1)$. On the other hand $W_j\ni w_{ij}-\alpha'_{ij}-\beta'_{ij}=\alpha_{ij}+(v_{ij}+\beta_{ij})\in F_e(n)+V_i$ and so $w_{ij}-\alpha'_{ij}-\beta'_{ij}\in (F_e(n)+V_i)\cap W_j\subset F_e(n+1)$. Finally $w_{ij}=(w_{ij}-\alpha'_{ij}-\beta'_{ij})+\alpha'_{ij}+\beta'_{ij}\in F_e(n+1)+F_\tau(n+1)$, and so $x=\sum_{i,j}w_{ij}\in F_e(n+1)+F_\tau(n+1)$. \end{proof}
\begin{prop}\label{fsv1v2} \begin{enumerate} \item $F_\sigma(n)$ is homogeneous with respect to the sum $V_1\oplus V_2$, equivalently $F_\sigma(n)=(F_\sigma(n)\cap V_1)\oplus (F_\sigma(n)\cap V_2)$. \item $F_\sigma(n)$ is homogeneous with respect to the sum $W_1\oplus W_2$, equivalently $F_\sigma(n)=(F_\sigma(n)\cap W_1)\oplus (F_\sigma(n)\cap W_2)$ \end{enumerate} \end{prop} \begin{proof} Let us prove the first point. The proof of the second is similar. Let $x\in F_\sigma(n)$, $x=y+z$ with $y\in V_1$ and $z\in V_2$. We have then $y=x-z\in V_1 \cap (F_\sigma(n) + V_2)\subset (F_\sigma(n) + V_1) \cap (F_\sigma(n) + V_2)\stackrel{Prop.~\ref{Fscohomogene}}{=}F_\sigma(n)$. From this $y\in F_\sigma(n)\cap V_1$. In the same way $z\in F_\sigma(n)\cap V_2$. As a conclusion $F_\sigma(n)=(F_\sigma(n)\cap V_1)\oplus (F_\sigma(n)\cap V_2)$. \end{proof}
\begin{prop}\label{fsvw} $\forall n, (F_\sigma(n)\cap V_i)\oplus(F_\sigma(n)\cap W_{\bar \sigma(i)})=F_\sigma(n)$ \end{prop} \begin{proof} We have from proposition~\ref{Fsfbs1} $(F_\sigma(n)\cap V_i)\cap(F_\sigma(n)\cap W_{\bar \sigma(i)})=\{0\}$. Let us write $n_i:=\dim F_\sigma(n)\cap V_i$ and $m_j :=\dim F_\sigma(n)\cap W_j$. We have from the preceding remark that $n_i+m_{\bar \sigma(i)}\le \dim F_\sigma(n)$ (*). From proposition~\ref{fsv1v2} one has $n_1+n_2=\dim F_\sigma(n)$ and $m_1+m_2=\dim F_\sigma(n)$. By summing the two equalities it is necessary that (*) is an equality and so $(F_\sigma(n)\cap V_i)\oplus(F_\sigma(n)\cap W_{\bar \sigma(i)})=F_\sigma(n)$. \end{proof}
\begin{prop}\label{interplus} If $A$, $B$ vector subspaces of $E$ are homogeneous with respect to the sum $\oplus_{i\in I} F_i=E$ then $A+B$ and $A\cap B$ are homogeneous with respect to the sum $\oplus_{i\in I} F_i$. \end{prop} \begin{proof} "$A+B$": Evidently one has $\bigoplus_i (F_i\cap (A+B)) \subset A+B$. Let us show the other inclusion. Let $x\in A+B=(\bigoplus_i (F_i\cap A))+(\bigoplus_i (F_i\cap B))$. So one has $x=\sum_i x_i+\sum_i x'_i$ with $x_i\in F_i\cap A$ and $x'_i\in F_i\cap B$. By writing $x=\sum_i (x_i+x'_i)$ one sees that $x\in \bigoplus_i (F_i\cap (A+B))$.
"$A\cap B$": Evidently one has: $\bigoplus_i (F_i\cap (A\cap B)) \subset A\cap B$. For the other inclusion let $x\in A\cap B=(\bigoplus_i (F_i\cap A))\cap (\bigoplus_i (F_i\cap B))$, $x=\sum_i x_i=\sum_i x'_i$ with $x_i\in F_i\cap A$ and $x'_i\in F_i\cap B$. By unicity of the decomposition of $x$ with respect to the direct sum $\bigoplus_i F_i$ it is clear that $\forall i, x_i=x'_i$ and so that $x\in \bigoplus_i (F_i\cap (A\cap B))$. \end{proof}
\begin{prop}\label{treillishomogene} For every element $V$ of the lattice generated by $V_1$, $V_2$, $W_1$ and $W_2$ one has: $V=(V\cap F_e)\oplus (V\cap F_\tau) \oplus (V\cap \tilde F)$ \end{prop} \begin{proof} Due to proposition~\ref{interplus} it is enough to prove that $V_1$, $V_2$, $W_1$ and $W_2$ are homogeneous with respect to the sum: $E=F_e\oplus F_\tau\oplus \tilde F$.
Let us prove for this purpose the lemma:
\begin{lem} Let $E_1$, $E_2$ and the $F_i$ be vector subspaces of a vector space $E$ such that the sum of the $F_i$ is direct. If $E=E_1\oplus E_2$ and $\forall i, F_i=(F_i\cap E_1)\oplus(F_i\cap E_2)$, then $E_j\cap \oplus_i F_i=\oplus_i (E_j\cap F_i)$ for $j=1,2$. \end{lem} \begin{proof} $\oplus_i F_i =\oplus_i(F_i \cap (E_1 \oplus E_2)) =\oplus_i((F_i \cap E_1) \oplus (F_i \cap E_2))=\oplus_i (F_i \cap E_1)\oplus \oplus_i (F_i \cap E_2)$. But $\oplus_i (F_i \cap E_j) \subset(\oplus_i F_i) \cap E_j$. As
$((\oplus_i F_i) \cap E_1) \oplus ((\oplus_i F_i) \cap E_2) \subset \oplus F_i$,
the inclusions in this proof are necessarily equalities. So $\oplus_i (F_i\cap E_j)=(\oplus_i F_i)\cap E_j$. \end{proof}
{\em end of proof of proposition~\ref{treillishomogene}:} By applying the lemma for $\forall i, E_i=V_i$ (respectively $\forall i, E_i=W_i$) proposition~\ref{fsv1v2} and proposition~\ref{tildefv1v2} show that $V_1$, $V_2$, $W_1$ and $W_2$ are homogeneous with respect to the sum decomposition: $E=F_e\oplus F_\tau\oplus \tilde F$. \end{proof}
\subsection{Reflexive case}\label{orth}\index{form!reflexive}
Suppose that $E$, $V_1$, $V_2$ are finite-dimensional vector spaces such that $E=V_1 \oplus V_2$ and suppose that $E$ carries a non degenerate reflexive form $a$. We have seen that $(E,V_1, V_2, V_1^\perp, V_2^\perp)$ is a decomposition of $E$ into two direct sums. Suppose $F(n)$, $F$, $F_\sigma(n)$, $F_\sigma$, $\tilde F(n)$ and $\tilde F$ are defined as before.
Let us prove the following proposition: \begin{prop}\label{ffforth} $$E=F_e \oplus^\perp F_\tau \oplus^\perp \tilde F$$ \end{prop} \begin{proof} For $\sigma\in {\mathcal S}_2$ let $\tilde F_\sigma(0):= E$ and $\tilde F_\sigma(n+1):= \bigcap_i ((\tilde F_\sigma(n)\cap V_i)+(\tilde F_\sigma(n)\cap W_{\sigma(i)}))$. The sequence $\tilde F_\sigma(n)$ is decreasing and so stationary in finite dimensions. Write $\tilde F_\sigma:=\bigcap_n \tilde F_\sigma(n)$.
By induction it is easy to see that\footnote{By using again the fact that $(A+B)^\perp=A^\perp\cap B^\perp$ and $(A\cap B)^\perp=A^\perp+ B^\perp$ for $A$ and $B$ vector subspaces of $E$} $\forall n, \forall \sigma\in{\mathcal S}_2, F_\sigma(n)^\perp = \tilde F_\sigma(n)$.
By writing the definition of $\tilde F(n)$ and $\tilde F_\sigma(n)$ it is easy to see by induction that $\forall n, \forall \sigma, \tilde F(n) \subset \tilde F_\sigma(n)$, from which we obtain $\forall \sigma, \tilde F \subset \tilde F_\sigma$.
In order to finish the proof lets show the following lemma:
\begin{lem} For $\sigma\in {\mathcal S}_2$ we have: $\forall n, F_{\bar \sigma} \subset \tilde F_{\sigma}(n)$ \end{lem} \begin{proof} By induction on $n$: It is clear for $n=0$. For $n+1$ we have: $\tilde F_{\sigma}(n+1)=\bigcap_i ((\tilde F_{\sigma}(n)\cap V_i)+(\tilde F_{\sigma}(n)\cap W_{\sigma(i)})) \subset \bigcap_i ((F_{\bar \sigma}\cap V_i)+(F_{\bar \sigma}\cap W_{\sigma(i)}))$ by induction hypothesis. The latter expression is equal to $F_{\bar \sigma}$ by proposition~\ref{fsvw}. \end{proof}
{\em end of the proof of proposition~\ref{ffforth}:} As $\dim(F_\sigma)+\dim(\tilde F_\sigma)=\dim(E)=\dim(F_\sigma) + \dim(F_{\bar \sigma}) + \dim(\tilde F)$ (the first equality because $\tilde F_\sigma=F_\sigma^\perp$ and the form is non degenerate, the second by the decomposition $E=F_e\oplus F_\tau\oplus\tilde F$ given by Proposition~\ref{treillishomogene}), we have $\dim(\tilde F_\sigma)= \dim(F_{\bar \sigma}) + \dim(\tilde F)$. By the inclusion $F_{\bar \sigma} \oplus \tilde F \subset \tilde F_{\sigma}$ we must have $F_{\bar \sigma} \oplus \tilde F = \tilde F_{\sigma}$. \end{proof}
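The orthogonal decomposition of Proposition~\ref{ffforth} can be seen on the following elementary example; the form and the subspaces below are chosen purely for illustration.

\begin{ex} Let $E=\mathbb{K}^4$ with basis $(e_1,e_2,e_3,e_4)$ and let $\langle\cdot,\cdot\rangle$ be the symmetric bilinear form determined by $\langle e_1,e_2\rangle=1$, $\langle e_3,e_3\rangle=\langle e_4,e_4\rangle=1$, all other products of basis vectors being zero. Take $V_1=\langle e_1,e_3\rangle$ and $V_2=\langle e_2,e_4\rangle$, so that $W_1=V_1^\perp=\langle e_1,e_4\rangle$ and $W_2=V_2^\perp=\langle e_2,e_3\rangle$. Then $V_1\cap W_1=\langle e_1\rangle$, $V_2\cap W_2=\langle e_2\rangle$, $V_1\cap W_2=\langle e_3\rangle$, $V_2\cap W_1=\langle e_4\rangle$, hence $F_e=\langle e_1,e_2\rangle$, $F_\tau=\langle e_3,e_4\rangle$ and $\tilde F=\{0\}$. Indeed $F_e\perp F_\tau$; the restriction of the form to $F_e$ is a hyperbolic plane (a sum of two totally isotropic lines) and its restriction to $F_\tau$ is non degenerate. \end{ex}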
\subsection{Sublattice ``with 5 direct sums''} It is known that the lattice generated by three vector subspaces $U,V,W$ of $E$ such that $E =U \oplus W = V \oplus W$ has the following structure:
\begin{center} \includegraphics[origin=c]{cinq_sommes_directes.eps} \end{center}
The construction applies to the lattice $T$ generated by the $4$ subspaces of $E$, $V_1$, $V_2$, $W_1$, $W_2$ such that $E=V_1\oplus V_2=W_1 \oplus W_2 = V_1\oplus W_2 =W_1 \oplus V_2$, in the following way:
We can choose for $(U,V,W)$ the triple $(V_1,W_1,V_2)$ or $(V_1,W_1,W_2)$. We then set, in the first case, $T_1:=(V_1 \cap W_1) + (V_2 \cap (V_1 + W_1))$ and, in the second, $U_1:=(V_1 \cap W_1) + (W_2 \cap (V_1 + W_1))$.
The interval $[V_1 \cap W_1, V_1 + W_1]$ is a sub-lattice $T'$ of $T$ which contains in particular the elements $V'_1:=V_1/(V_1 \cap W_1)$, $W'_1:=W_1/(V_1 \cap W_1)$, $T'_1:=T_1/(V_1 \cap W_1)$ and $U'_1:=U_1/(V_1 \cap W_1)$ satisfying:
$$V'_1 \oplus W'_1 = V'_1 \oplus T'_1 = V'_1 \oplus U'_1 = W'_1 \oplus T'_1 = W'_1 \oplus U'_1$$
On the other hand it is possible that $T'_1 \cap U'_1 \neq \{0\}$ (as well as $T'_1 + U'_1 \neq (V_1 + W_1)/(V_1 \cap W_1)$).
Note in particular that $T'$ contains two sublattices of type $M_3$: the one constructed on the elements $\{\{0\}, E ,V'_1, W'_1, T'_1\}$ and the one given by the elements $\{\{0\}, E ,V'_1, W'_1, U'_1\}$.
The data of $T'_1$ is equivalent to the data of an isomorphism $i$ of $V'_1$ onto $W'_1$, and the data of $U'_1$ to that of a second isomorphism $j$ of $V'_1$ onto $W'_1$. The conjugacy class in $Gl(V'_1)$ of $j^{-1}\circ i$ is then an invariant of the lattice. We can compare this invariant to the operators that Gelfand and Ponomarev used in their paper~\cite{gp}.
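To make this invariant concrete, here is the smallest possible example; the particular lines below are arbitrary and serve only as an illustration.

\begin{ex} Take $E=\mathbb{K}^2$, $V_1=\langle e_1\rangle$, $W_1=\langle e_2\rangle$, $V_2=\langle e_1+e_2\rangle$ and $W_2=\langle e_1+\lambda e_2\rangle$ for some $\lambda\in\mathbb{K}^*$. Here $V_1\cap W_1=\{0\}$ and $V_1+W_1=E$, so $T'_1=T_1=V_2$ and $U'_1=U_1=W_2$. Viewing $T'_1$ and $U'_1$ as the graphs of the isomorphisms $i\colon e_1\mapsto e_2$ and $j\colon e_1\mapsto \lambda e_2$ from $V'_1$ to $W'_1$, we get $j^{-1}\circ i=\lambda^{-1}\, id_{V'_1}$, so the invariant is the scalar $\lambda^{-1}$, a cross-ratio type invariant of the four lines. \end{ex}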
\subsection{Example}\label{exemple}
In this paragraph we are going to study the structure of the lattice generated by four subspaces $V_1, V_2, W_1, W_2$ of a finite-dimensional vector space $E$ such that $E=V_1\oplus V_2=W_1 \oplus W_2 = V_1\oplus W_2 =W_1 \oplus V_2$, supposing that $\theta_{\mathcal V}^2 =0$ for ${\mathcal V}=(E, V_1, V_2, W_1, W_2)$.
\begin{lem} We have $(V_1 + W_1)\cap V_2 = (V_1 + W_1)\cap W_2 \subset V_2 \cap W_2$ and $(V_2 + W_2)\cap V_1 = (V_2 + W_2)\cap W_1 \subset V_1 \cap W_1$ \end{lem} \begin{proof} It is clear that $(V_1 + W_1)\cap V_2 \subset (V_1 + W_1)\cap (V_2+W_2) =\mbox{\textnormal{im}} \theta \subset \ker \theta=(V_1 \cap W_1)\oplus (V_2 \cap W_2)$. From which one can see that $(V_1 + W_1)\cap V_2 \subset \ker \theta \cap V_2 = V_2 \cap W_2$. So $(V_1 + W_1)\cap V_2 \subset (V_1 + W_1)\cap W_2$ and similarly $(V_1 + W_1)\cap W_2 \subset (V_1 + W_1)\cap V_2$, which proves the first assertion. The proof of the second one is similar.\end{proof}
Write $X_0=\{0\}, X_1=(V_2 + W_2)\cap V_1, X_2=V_1 \cap W_1, X_3=V_1$ and $Y_0=\{0\}, Y_1=(V_1 + W_1)\cap V_2, Y_2=V_2 \cap W_2, Y_3=V_2$.
As $X_0 \subset X_1 \subset X_2 \subset X_3=V_1$ and $Y_0 \subset Y_1 \subset Y_2 \subset Y_3=V_2$ and $V_1 \cap V_2 =\{0\}$, it is easy to see that the lattice ${\mathcal T}_0$ generated by the $X_i$ and the $Y_j$ for $i,j=0, 1, 2, 3$ is precisely the set $\{X_i \oplus Y_j \; \vrule \; i,j=0, 1, 2, 3\}$, ordered by inclusion.
It is easy to verify that $X_i \oplus Y_j= (X_i\oplus V_2) \cap (V_1 \oplus Y_j)$ and so the lattice ${\mathcal T}_0$ can be written as well:
Write $X'_i=X_i\oplus V_2$ and $Y'_j=V_1\oplus Y_j$. We have then: $X'_0=V_2$, $X'_1=((V_2 + W_2)\cap V_1) + V_2$, $X'_2=(V_1 \cap W_1)+ V_2$, and $X'_3=V_1 + V_2$.
Let us verify that $X'_1=V_2+W_2$. It is clear that $X'_1 \subset V_2+W_2$. Conversely, if $x\in V_2$ and $y\in W_2$, then $x+y$ can be written uniquely as $a+b$ with $a\in V_1$ and $b\in V_2$, so $a=(x-b)+y \in (V_2 + W_2)\cap V_1$ and hence $x+y=a+b \in ((V_2 + W_2)\cap V_1)+V_2$. So $X'_1=V_2+W_2$.
We have as well: $Y'_0=V_1$, $Y'_1=V_1 + W_1$, $Y'_2=(V_2 \cap W_2)+ V_1$, and $Y'_3=V_1 + V_2$.
The underlying set of the lattice ${\mathcal T}_0$ is so: $\{X'_i \cap Y'_j \; \vrule \; i,j=0, 1, 2, 3\}$.
We are going to prove that ${\mathcal T} = {\mathcal T}_0 \cup \{W_1, W_2\}$ is a lattice. Let us verify that $\mathcal T$ is stable under intersection and sum.

Let us first verify that $(X_i\oplus Y_j)+W_1 \in {\mathcal T}$. If $j=0$ and $i=0, 1,2$ it is clear that $(X_i\oplus Y_j)+W_1= W_1 \in {\mathcal T}$. If $j=0$ and $i=3$, $(X_i\oplus Y_j)+W_1=V_1 + W_1 \in {\mathcal T}$. If $j\ge 1$, $(X_i\oplus Y_j)+W_1 = Y_1 + W_1 + X_i + Y_j$. By an argument similar to the one used above to show that $((V_2 + W_2)\cap V_1) + V_2=V_2 + W_2$, one can prove $Y_1 +W_1=((V_1 + W_1)\cap V_2) + W_1=V_1 + W_1 \in {\mathcal T}_0$, and so $Y_1 + W_1 + X_i + Y_j\in{\mathcal T}_0 \subset {\mathcal T}$.
By using the second representation of ${\mathcal T}_0$ we can show, for every $i$ and $j$, that $(X'_i\cap Y'_j) \cap W_1 \in {\mathcal T}$. The only delicate point is to verify that $((V_1 \cap W_1)+ V_2) \cap W_1=V_1 \cap W_1$. It is clear that $V_1 \cap W_1 \subset ((V_1 \cap W_1)+ V_2) \cap W_1$; conversely, let $x\in V_1 \cap W_1$, $y\in V_2$ and $z\in W_1$ be such that $x+y=z$. We have then $y=z-x \in V_2 \cap W_1 =\{0\}$, and so $z=x \in V_1 \cap W_1$.
In conclusion we can state:
\begin{theo}
The structure of the lattice generated by the four finite-dimensional vector spaces $V_1, V_2, W_1, W_2$ such that $E=V_1\oplus V_2=W_1 \oplus W_2 = V_1\oplus W_2 =W_1 \oplus V_2$ and supposing that $\theta_{\mathcal V}^2 =0$ for ${\mathcal V}=(E, V_1, V_2, W_1, W_2)$ is given by the following diagram:
\begin{center} \includegraphics[origin=c]{thetacarrenul.eps} \end{center}
\end{theo}
\section{Application to representation theory} \subsection{Preliminaries} \subsubsection{General case} We will denote by $\g{g}\g{l}(V_1,V_2,W_1,W_2)$ the set of $a\in \g{g}\g{l}(E)$ such that $a V_i \subset V_i$ and $a W_j \subset W_j$. It is easy to see that $\g{g}\g{l}(V_1,V_2,W_1,W_2)$ is a Lie subalgebra of $\g{g}\g{l}(E)$. Let $\g{g}$ be a Lie subalgebra of $\g{g}\g{l}(V_1,V_2,W_1,W_2)$. For all vector subspaces $A, B$ of $E$ such that $\g{g} A \subset A$ and $\g{g} B \subset B$, we have $\g{g} (A+B) \subset (A+B)$ and $\g{g} (A\cap B) \subset (A\cap B)$. So, as $\g{g}$ leaves invariant $V_1,V_2,W_1$ and $W_2$, $\g{g}$ leaves invariant every element of the lattice generated from $V_1,V_2,W_1$ and $W_2$ by intersection and sum.
It is easy to see that the projections $p_{V_i}^{V_j}$ and $p_{W_i}^{W_j}$ commute with the action of $\g{g}$: $\forall a\in \g{g}, a p_{V_i}^{V_j}= p_{V_i}^{V_j} a$ and $a p_{W_i}^{W_j}= p_{W_i}^{W_j} a$. So every element of the unital associative algebra $A$ generated by the $p_{V_i}^{V_j}$ and the $p_{W_i}^{W_j}$ commutes with every $a\in \g{g}$. For example, $\theta=[p_{W_1}^{W_2},p_{V_1}^{V_2}]$ commutes with every $a\in \g{g}$.
\begin{lem} The data of two supplementary vector subspaces $V_1$ and $V_2$ stable under the action of a linear Lie algebra $\g{g}$ is equivalent to the data of an endomorphism $L$ commuting with the action of $\g{g}$ and satisfying $L^2=I$. $V_1$ and $V_2$ are then the eigenspaces of $L$ associated to the eigenvalues $1$ and $-1$. \end{lem} \begin{proof} Indeed, it is easy to see that the endomorphism $L=p_{V_1}^{V_2}-p_{V_2}^{V_1}$ squares to the identity and commutes with the action of $\g{g}$. Conversely, if an endomorphism $L$ is such that $L^2=I$ and commutes with the action of $\g{g}$, it admits the eigenvalues $1$ and/or $-1$. The corresponding eigenspaces are supplementary and stable under the action of $\g{g}$.\end{proof}
\subsubsection{Reflexive case}
We recall that in the reflexive case we suppose that there exists a non degenerate reflexive form $\langle \cdot,\cdot\rangle$ such that $\forall a\in \g{g}$, $\forall x,y\in E$, we have: $\langle ax,y\rangle +\langle x,ay\rangle =0$.
Recall as well that if $V$ is a subspace of $E$ which is $\g{g}$-invariant then $V^\perp$ is invariant as well. We suppose here that $W_1=V_1^\perp$, $W_2=V_2^\perp$. These two spaces are supplementary and invariant.
\begin{lem} Let $L^*$ be the adjoint with respect to a reflexive form of the endomorphism $L=p_{V_1}^{V_2}-p_{V_2}^{V_1}$, which commutes with the action of $\g{g}$ and is such that $L^2=I$. Then $L^*$ squares to the identity, commutes with the action of $\g{g}$ and one has: $$L^*=p_{V_2^\perp}^{V_1^\perp}-p_{V_1^\perp}^{V_2^\perp}$$ \end{lem} \begin{proof} Let us write $L':=p_{V_2^\perp}^{V_1^\perp}-p_{V_1^\perp}^{V_2^\perp}$ and let us show that $\forall v,w\in E, \langle Lv,w\rangle =\langle v,L'w\rangle$.
We write for $x\in V_1,x'\in V_2,y\in V_1^\perp,y'\in V_2^\perp$, \begin{eqnarray*} \langle L(x+x'),y+y'\rangle & = & \langle x-x',y+y'\rangle\\ & = & \langle x,y'\rangle-\langle x',y\rangle\\ & = & \langle x+x',-y+y'\rangle\\ & = & \langle x+x',L'(y+y')\rangle \end{eqnarray*} As a consequence $L^*=L'$.\end{proof}
Let us remark that $L=-L^*$ for $L=p_{V_1}^{V_2}-p_{V_2}^{V_1}$ is equivalent to having $V_1=V_1^\perp$ and $V_2=V_2^\perp$. It is also the same as imposing $\langle Lx, Ly \rangle=-\langle x, y \rangle$ for $x,y\in E$, {\em i.e.} that $L$ is anti-hermitian with respect to the reflexive form.
The data of $L\in End(E)$ such that $L^2=Id$ and of a reflexive form for which $L$ is anti-hermitian is also called a para-Kähler structure.
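A minimal example of this situation (with form and subspaces chosen purely for illustration) is the hyperbolic plane.

\begin{ex} Let $E=\mathbb{K}^2$ with the symmetric form $\langle (x_1,x_2),(y_1,y_2)\rangle=x_1y_2+x_2y_1$ and let $V_1=\langle e_1\rangle$, $V_2=\langle e_2\rangle$. Both lines are totally isotropic and $V_1=V_1^\perp$, $V_2=V_2^\perp$. The involution $L=p_{V_1}^{V_2}-p_{V_2}^{V_1}$ acts by $L(e_1)=e_1$, $L(e_2)=-e_2$, and one checks that $\langle Lx,Ly\rangle=-\langle x,y\rangle$, i.e. $L=-L^*$: we obtain a para-Kähler structure on $E$. Here $W_i=V_i^\perp=V_i$, so $F_e=E$ and $F_\tau=\tilde F=\{0\}$. \end{ex}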
We recall that the reflexive representation $\g g \subset \g{gl}(E)$ is called {\em weakly irreducible} if any invariant subspace $V\subset E$ is either $\{0\}$, $E$, or is degenerate {\em i.e.} $V\cap V^\perp\neq \{0\}$.
As we saw in paragraph~\ref{orth}, if $W_1=V_1^\perp$ and $W_2=V_2^\perp$, then in the weakly irreducible case, and if $V_1$ and $V_2$ are different from $\{0\}$, we necessarily have $E=F_e$. Indeed, if two of the three spaces $F_e$, $F_\tau$, $\tilde F$ are non trivial then $E$ is not weakly irreducible. Moreover, in the case $E=F_\tau$ or $E=\tilde F$, we would have $E=V_1 \oplus V_1^\perp$, which would imply that, if $E$ is non trivial, $E$ is not weakly irreducible.
\begin{prop}\label{identification_au_dual} In the case where the representation $E=V_1\oplus V_2$ is weakly irreducible and $V_1$ and $V_2$ are different from $\{0\}$, $V_2$ identifies (as a representation) with the dual $V_1^*$ of $V_1$. \end{prop} \begin{proof} The identification is given by the map $$\begin{array}{l} V_2 \to V_1^*\\ v' \mapsto (w \mapsto \langle v',w\rangle ) \end{array} $$ which is injective by the fact that $V_2 \cap V_1^\perp =\{0\}$ (this intersection is contained in $F_\tau(1)$, which vanishes since $E=F_e$) and surjective for dimension reasons. Indeed, $V_1 \oplus V_2^\perp = V_1 \oplus V_2$ implies that $dim(V_2)=dim(V_2^\perp)$. From this we obtain $dim(V_2)=\frac{1}{2}dim(E)$ and similarly $dim(V_1)=dim(E)-\frac{1}{2}dim(E)=\frac{1}{2}dim(E)$. As $dim(V_1^*)=dim(V_1)$, we have $dim(V_1^*)=dim(V_2)$. \end{proof}
\subsection{Main result} The following result could be formulated thanks to a suggestion of Martin Olbrich. He communicated to us a direct proof of the result~\ref{Olbrich}, which we had established for pseudo-riemannian holonomy algebras only.
\begin{theo}\label{deux_isotropes} If $E$ is a representation admitting two decompositions into supplementary sub-representations $E=V_1\oplus V_2=W_1 \oplus W_2$, then, denoting by $E_{(L,\lambda)}$ the generalized eigenspace associated to the eigenvalue $\lambda$ of an operator $L$, we have: \begin{enumerate}[(i)] \item $F_e=E_{(L,-1)} \oplus E_{(L,1)}$ as a representation, for the invariant operator $L=p_{V_1}^{V_2}-p_{W_2}^{W_1}$. Moreover we have $V_1\cap W_1 \subset E_{(L,1)}$ and $V_2\cap W_2 \subset E_{(L,-1)}$. \item $F_\tau=E_{(L',-1)} \oplus E_{(L',1)}$ as a representation, for the invariant operator $L'=p_{V_1}^{V_2}-p_{W_1}^{W_2}$. Moreover we have $V_1\cap W_2 \subset E_{(L',1)}$ and $V_2\cap W_1 \subset E_{(L',-1)}$. \end{enumerate}
When $E$ is in addition reflexive and $W_j=V_j^\perp$, then \begin{enumerate}[(i)] \item $L$ is anti-self-adjoint with respect to the reflexive form, $E_{(L,-1)}$ and $E_{(L,1)}$ are totally isotropic and their direct sum is non degenerate. \item $L'$ is self-adjoint with respect to the reflexive form, $E_{(L',-1)}$ and $E_{(L',1)}$ are non degenerate and orthogonal. \end{enumerate} \end{theo} \begin{proof} It follows from the fact that the spaces $F_e$, $F_\tau$ and $\tilde F$ are homogeneous, that $L=p_{V_1}^{V_2}-p_{W_2}^{W_1}$ (and similarly $L'=p_{V_1}^{V_2}-p_{W_1}^{W_2}$) is an endomorphism of each of these spaces.
For $\sigma=e$ or $\tau$, write $P_\sigma(X)=\Pi_{\lambda\in\Lambda_\sigma} P_{\sigma,\lambda}^{n_\lambda}(X)$ for the minimal polynomial of $L$ restricted to $F_\sigma$, and similarly $\tilde P(X)=\Pi_{\lambda\in\tilde \Lambda} \tilde P_{\lambda}^{n_\lambda}(X)$ for the minimal polynomial of $L$ restricted to $\tilde F$.
$F_\sigma$ decomposes into the generalized eigenspaces ${F_\sigma}_{(L,\lambda)}:=\ker(P_{\sigma,\lambda}^{n_\lambda}(L\vrule_{F_\sigma}))$, and $\tilde F$ decomposes into the generalized eigenspaces ${\tilde F}_{(L,\lambda)}:=\ker(\tilde P_{\lambda}^{n_\lambda}(L\vrule_{\tilde F}))$.
Let us make the convention that $P_{\sigma,\lambda}(X)=X-\lambda$ and $\tilde P_{\lambda}(X)=X-\lambda$ for $\lambda=0,-1,1$ (so that the corresponding generalized eigenspaces are $\{0\}$ whenever $\lambda$ is not a root of the minimal polynomial).
It is immediate that: $V_1\cap W_1\subset F_{e,(L,1)}$ and $V_2\cap W_2\subset F_{e,(L,-1)}$.
It is easy to verify from the definitions that $\theta L =- L \theta$. One deduces that $\theta$ maps ${F_\sigma}_{(L,\lambda)}$ into ${F_\sigma}_{(L,\lambda')}$ with $P_{\sigma,\lambda'}(X)=\pm P_{\sigma,\lambda}(-X)$.
Similarly $\theta$ maps ${\tilde F}_{(L,\lambda)}$ into ${\tilde F}_{(L,\lambda')}$ with $\tilde P_{\lambda'}(X)=\pm \tilde P_{\lambda}(-X)$.
Let $x\in {F_e}_{(L,\lambda)}$ and let $n$ be the smallest integer such that $\theta^{n+1}(x)=0$, which exists from the fact that $\theta$ is nilpotent on $F_e$. $\theta^n(x)\in \ker(\theta)\subset V_1\cap W_1 \oplus V_2\cap W_2\subset {F_e}_{(L,1)}\oplus {F_e}_{(L,-1)}$. As a consequence $\lambda=\pm 1$ and $F_e={F_e}_{(L,1)}\oplus {F_e}_{(L,-1)}$
An analogous argument gives $F_\tau={F_\tau}_{(L,0)}$.
Finally let us show that $0, 1, -1\not\in \tilde \Lambda$. Suppose the contrary. There exists then an eigenvector $x$ in $\tilde F$ associated to an eigenvalue $\lambda\in\{0,1,-1\}$. But $L(x)=p_{V_1}^{V_2}(x)-p_{W_2}^{W_1}(x)=\lambda x$ leads in each of the three cases to a contradiction with Proposition~\ref{tildefv1v2}.
It follows that $F_e=E_{(L,-1)}\oplus E_{(L,1)}$, as $E_{(L,\lambda)}={F_e}_{(L,\lambda)}\oplus {F_\tau}_{(L,\lambda)} \oplus {\tilde F}_{(L,\lambda)}$.
The same arguments show {\em mutatis mutandis} that $F_\tau= {F_\tau}_{(L',1)}\oplus {F_\tau}_{(L',-1)}$, $V_1 \cap W_2 \subset {F_\tau}_{(L',1)}$, $V_2 \cap W_1 \subset {F_\tau}_{(L',-1)}$, and $F_e={F_e}_{(L',0)}$.
It follows similarly $F_\tau=E_{(L',-1)}\oplus E_{(L',1)}$.
The generalized eigenspaces appearing in the proof are invariant because, for any polynomial $Q$, $Q(L)$ commutes with the action of the representation and so $\ker Q(L)$ (and also $\mbox{\textnormal{im}} Q(L)$) is invariant.
In the reflexive case we have: $L=-L^*$. As a consequence $E_{(L,-1)}$ is orthogonal to any $E_{(L,\lambda)}$ for $\lambda\neq 1$ and $E_{(L,1)}$ is orthogonal to any $E_{(L,\lambda)}$ for $\lambda\neq -1$. This follows from the relation $$\langle P_\lambda(L)^{n_\lambda}\cdot, \cdot\rangle=\langle \cdot, P_\lambda(L^*)^{n_\lambda}\cdot\rangle =\langle \cdot, P_\lambda(-L)^{n_\lambda}\cdot\rangle,$$ and from the fact that $P_\lambda(L)^{n_\lambda}$ is an isomorphism of $E_{(L,\mu)}$ for $\mu\neq\lambda$ (kernel lemma).
So $E_{(L,-1)}$ and $E_{(L,1)}$ are totally isotropic, and $E_{(L,-1)}\oplus E_{(L,1)}$ is orthogonal to all other generalized eigenspaces and non degenerate.
One obtains similarly that $L'={L'}^*$. $E_{(L',\lambda)}$ is orthogonal to any $E_{(L',\mu)}$ for $\mu\neq \lambda$. In particular $E_{(L',\lambda)}$ is non degenerate and $E_{(L',-1)}$ is orthogonal to $E_{(L',1)}$. \end{proof}
Let us remark that, in the weakly irreducible case, the existence of a decomposition of $E$ into a direct sum of two degenerate sub-representations implies that $E=F_e$.
\begin{theo}\label{Olbrich} If $E$ is a weakly irreducible representation preserving the non degenerate reflexive form $\langle \cdot, \cdot\rangle$ and admitting a decomposition into a direct sum of degenerate sub-representations $E=V_1 \oplus V_2$, then $E=E_{(L,1)}\oplus E_{(L,-1)}$ with $L:=p-p^*$, where $p:=p_{V_1}^{V_2}$. We have: $V_1\cap V_1^\perp \subset E_{(L,1)}$ and $V_2\cap V_2^\perp \subset E_{(L,-1)}$. In addition $E_{(L,1)}$ and $E_{(L,-1)}$ are totally isotropic and their sum is non degenerate. \end{theo}
\begin{prop}\label{identification_au_dual_2} If $E=E_1 \oplus E_2$ is a representation preserving the non degenerate reflexive form $\langle \cdot, \cdot\rangle$, and $E_1$ and $E_2$ are totally isotropic, then $E_2$ identifies with $E_1^*$. \end{prop} \begin{proof} As in Proposition~\ref{identification_au_dual}, the map $$\begin{array}{l} E_2 \to E_1^*\\ v' \mapsto (w \mapsto \langle v',w\rangle ) \end{array} $$ is injective because $E_2 \cap E_1^\perp=\{0\}$ and surjective for dimension reasons. \end{proof}
\begin{lem} If the representation $E$ admits three sub-representations $F_1$, $F_2$ and $F_3$ such that $E=F_1\oplus F_2=F_2\oplus F_3=F_1\oplus F_3$, then $E=F_1 \otimes \mathbb{K}^2$ where $\mathbb{K}^2$ is the trivial representation. \end{lem} \begin{proof} Let $p$ denote the restriction to $F_3$ of the projection onto $F_1$ parallel to $F_2$. Then $p$ is an isomorphism of $F_3$ onto $F_1$ and commutes with the action of the representation. As a consequence $E=F_1\oplus F_3\simeq F_1\oplus F_1=F_1 \otimes \mathbb{K}^2$. \end{proof}
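A toy example (ours, for illustration only) shows the rigidity expressed by this lemma.

\begin{ex} Let $E=\mathbb{K}^2$ and let $F_1=\langle e_1\rangle$, $F_2=\langle e_2\rangle$, $F_3=\langle e_1+e_2\rangle$, which are pairwise supplementary. If a linear Lie algebra $\g g$ preserves all three lines, then any $a\in\g g$ satisfies $a e_1=\lambda_1 e_1$, $a e_2=\lambda_2 e_2$ and $a(e_1+e_2)\in\langle e_1+e_2\rangle$, which forces $\lambda_1=\lambda_2$. Thus $E\simeq F_1\oplus F_1=F_1\otimes\mathbb{K}^2$ with $\mathbb{K}^2$ the trivial representation, as predicted by the lemma. \end{ex}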
\begin{prop} If $E$ is a representation admitting two decompositions into supplementary sub-representations $E=V_1\oplus V_2=W_1 \oplus W_2$, then $\tilde F$ identifies with $V \otimes \mathbb{K}^2$, where $V=\tilde F \cap V_1$ and $\mathbb{K}^2$ is the trivial representation. \end{prop} \begin{proof} Indeed, we have $\tilde F=(\tilde F \cap V_1) \oplus (\tilde F \cap V_2)=(\tilde F \cap V_1) \oplus (\tilde F \cap W_1)=(\tilde F \cap V_2) \oplus (\tilde F \cap W_1)$. We are in the situation described by the preceding lemma.
\end{proof}
To summarize we have:
\begin{theo}\label{ts} If $E$ is a representation preserving the non degenerate reflexive form $\langle \cdot, \cdot\rangle$ and the direct sum decomposition $E=V_1 \oplus V_2$, then \begin{enumerate}[(i)] \item $E=F_e \oplus^\perp F_\tau \oplus^\perp \tilde F$, \item $F_e=F_e^+\oplus (F_e^+)^*$ for a totally isotropic representation $F_e^+$, \item $F_\tau=F_\tau^+\oplus^\perp F_\tau^-$ for non degenerate representations $F_\tau^+$ and $F_\tau^-$, \item $\tilde F=\tilde F_0\otimes \mathbb{K}^2$ for a non degenerate representation $\tilde F_0$ and $\mathbb{K}^2$ being the trivial representation. \end{enumerate} \end{theo}
\section{Application to holonomy}
A particular case of the preceding discussion is when $\g{g}$ is a holonomy algebra. We call a {\em formal curvature tensor} an element $R$ of $(E^* \wedge E^*)\otimes E^* \otimes E$ such that for all $x,y,z \in E$ we have: $R(x,y)z + R(y,z)x +R(z,x)y =0$ (first Bianchi identity). We will suppose that there is a finite set of formal curvature tensors $\{R_1, R_2, \ldots , R_m\}$ such that $\g{g}$ is the linear Lie algebra generated by the $R_i(x,y) \in End(E)$ for $i=1 \ldots m$ and $x,y\in E$. We will call such an algebra a {\em Berger algebra}. For a holonomy algebra this situation is given by the Ambrose-Singer theorem, which relates the curvature tensor of a connected manifold equipped with a torsion-free connection to its holonomy algebra at a point of the manifold. In the following we will write $R$ for one of the formal curvature tensors $R_1, R_2, \ldots, R_m$.
\begin{defi} If $R$ is a formal curvature tensor and $\g g\subset \g{gl}(E)$ a Berger algebra, we say that $R$ matches $\g g$, if $\forall x,y\in E, R(x,y)\in \g g$. \end{defi}
\subsection{General case}
\begin{lem} If $\g g\subset \g{gl}(E)$ is a Berger algebra admitting the invariant spaces $F_1, F_2, \ldots, F_r$ with $E=F_1\oplus F_2 \oplus \cdots \oplus F_r$, and if $R$ is a formal curvature tensor which matches $\g g$, then $\forall i,j,k, k\not\in\{i,j\} \Rightarrow \forall x\in F_i, y\in F_j, z\in F_k, R(x,y)z=0$. \end{lem} \begin{proof} Suppose $x,y,z$ as in the statement. Then by the identity $$R(x,y)z+R(y,z)x+R(z,x)y=0$$ and by the fact that $R(y,z)x\in F_i$, $R(z,x)y\in F_j$ and $R(x,y)z\in F_k$ it is clear from $(F_i+F_j)\cap F_k=\{0\}$ that $R(x,y)z=0$. \end{proof}
\begin{defi} We will say that the representation $\g g\subset \g{gl}(E)$ admitting the invariant spaces $F_i$ with $E=F_1\oplus F_2 \oplus \cdots \oplus F_r$ {\em decomposes into an exterior product along the decomposition $E=F_1\oplus F_2 \oplus \cdots \oplus F_r$} if for any $a\in \g g$, $\forall i, a\vrule_{F_i}\in \g g$. \end{defi}
\begin{prop} If $\g g\subset \g{gl}(E)$ is a Berger algebra and preserves $V_1$, $V_2$, $W_1$ and $W_2$ such that $E=V_1\oplus V_2=W_1\oplus W_2$, then $E$ decomposes into an exterior product along the decomposition $F \oplus \tilde F$. If in addition $\g g$ preserves the reflexive form $\langle\cdot,\cdot\rangle$ and if $W_1=V_1^\perp$ and $W_2=V_2^\perp$, then $E$ decomposes into an exterior product along the decomposition $F_e\oplus F_\tau\oplus \tilde F$. \end{prop} \begin{proof} The first assertion results from the preceding lemma and from the fact that $\tilde F$ is of type $\tilde F_0\otimes\mathbb{K}^2 \simeq \tilde F_0\oplus \tilde F_0$. In the reflexive case $F_e$ is of type $F_e^+\oplus(F_e^+)^*$, from which a similar argument yields the second assertion. \end{proof}
\subsection{Metric case} In the metric case the invariant non degenerate reflexive form $\langle \cdot,\cdot\rangle$ is supposed to be bilinear symmetric and $\mathbb{K}=\mathbb{R}$.
It is well known that from the invariance of $\langle \cdot,\cdot\rangle$, the first Bianchi identity and the antisymmetry in the first two arguments of $R$, one can deduce $$\forall x,y,z,t\in E, \quad \langle R(x,y)z,t\rangle =\langle R(z,t)x,y\rangle, \qquad (*)$$ for any formal curvature tensor $R$ matching the algebra.
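For the reader's convenience, here is the standard computation behind $(*)$; we write $S(x,y,z,t):=\langle R(x,y)z,t\rangle$ as a temporary shorthand. $S$ is antisymmetric in its first two arguments (antisymmetry of $R$) and in its last two (invariance of $\langle\cdot,\cdot\rangle$ under $R(x,y)\in\g g$), and the first Bianchi identity gives
$$\begin{array}{l}
S(x,y,z,t)+S(y,z,x,t)+S(z,x,y,t)=0,\\
S(y,z,t,x)+S(z,t,y,x)+S(t,y,z,x)=0,\\
S(z,t,x,y)+S(t,x,z,y)+S(x,z,t,y)=0,\\
S(t,x,y,z)+S(x,y,t,z)+S(y,t,x,z)=0.
\end{array}$$
Summing the four lines, eight of the twelve terms cancel in pairs by the antisymmetry in the last two arguments, and the remaining four combine, using both antisymmetries, into $2S(z,x,y,t)+2S(t,y,z,x)=0$. Hence $S(z,x,y,t)=S(y,t,z,x)$, which is exactly $(*)$ since $x,y,z,t$ are arbitrary.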
\begin{lem} If the algebra $\g g$ is Berger, preserves two supplementary spaces $V_1$ and $V_2$ and a non degenerate symmetric bilinear form $\langle \cdot, \cdot \rangle$, and if, for ${\mathcal V}=(E, V_1, V_2, V_1^\perp, V_2^\perp)$, one has $E=F_e$, then for any formal curvature tensor $R$ matching $\g g$ and $x,y\in V_1$, $R(x,y)=0$, and for $x',y'\in V_2$, $R(x',y')=0$. \end{lem} \begin{proof} From the first Bianchi identity one has $\forall z'\in V_2, R(x,y)z'+ R(y,z')x+ R(z',x)y=0$. We have $R(x,y)z'\in V_2$, $R(y,z')x \in V_1$ and $R(z',x)y\in V_1$ by invariance of $V_1$ and $V_2$ under the action of $R(x,y)\in\g{g}$ (respectively $R(y,z')\in\g{g}$, $R(z',x)\in\g{g}$). As $V_1$ and $V_2$ form a direct sum, one has $R(x,y)z'=0$.
Let us now show that $\forall z\in V_1, R(x,y)z=0$. Let $t'\in V_2$. $\langle R(x,y)z,t'\rangle =-\langle z,R(x,y)t'\rangle =0$, by the preceding argument. So from $R(x,y)z\in V_1$, it is clear that $R(x,y)z\in V_1\cap V_2^\perp=\{0\}$ (in $F_e$).
As a conclusion for $x,y\in V_1, R(x,y)=0$. Similarly for $x',y'\in V_2, R(x',y')=0$.\end{proof}
\begin{theo} If the algebra $\g g\subset\g{gl}(E)$ is Berger, preserves the two supplementary spaces $V_1$ and $V_2$ and a non degenerate symmetric bilinear form $\langle \cdot, \cdot \rangle$, then for ${\mathcal V}=(E, V_1, V_2, V_1^\perp, V_2^\perp)$, one has: $\g{g}E \subset \ker \theta_{\mathcal V}$ and $\g{g}\mbox{\textnormal{im}} \theta_{\mathcal V}= \{0\}$. \end{theo} \begin{proof} By theorem~\ref{ts} one has the decomposition into sub-representations $E=(F_e^+\oplus (F_e^+)^*)\oplus^\perp F_\tau^+ \oplus^\perp F_\tau^- \oplus^\perp (\tilde F_0 \otimes\mathbb{R}^2)$ with $F_e^+$ (and $(F_e^+)^*$) totally isotropic, and $F_\tau^+$, $F_\tau^-$ and $\tilde F_0$ non degenerate.
For $R$ a formal curvature tensor matching $\g g$, as $R(x,y)=0$ for $x\perp y$ (by (*)), $\g g$ is generated by the $R(x,y)$ for $(x,y)\in F_e^+\times (F_e^+)^*$, (respectively $(x,y)\in F_\tau^+\times F_\tau^+$, resp. $(x,y)\in F_\tau^-\times F_\tau^-$). $R(x,y)$ acts only on $F_e^+\oplus (F_e^+)^*$ (respectively $F_\tau^+$, resp. $F_\tau^-$).
For $(x,y)\in F_e^+\times (F_e^+)^*$, $z\in V_1\cap F_e$, $t\in V_1\cap F_e$, one has $\langle R(x,y)z,t\rangle =\langle R(z,t)x,y\rangle =0$, and similarly for $(x,y)\in F_e^+\times (F_e^+)^*$, $z\in V_2\cap F_e$, $t\in V_2\cap F_e$, one has $\langle R(x,y)z,t\rangle=0$. So we obtain: $\g g F_e \subset V_1 \cap V_1^\perp \oplus V_2 \cap V_2^\perp \subset \ker(\theta_{\mathcal V})$.
Recall that $\theta$ maps $W_1$ into $W_2$ and $W_2$ into $W_1$.
For $(x,y)\in F_\tau^+\times F_\tau^+$, $z\in F_\tau^+$, $t\in F_\tau^-$, one has: $\langle \theta (R(x,y)z),t\rangle =\langle R(x,y)z,\theta(t)\rangle=\langle R(z,\theta(t))x,y\rangle=0$ because $z \perp \theta(t)$. So $\theta (R(x,y)F_\tau^+)\subset F_\tau^- \cap (F_\tau^-)^\perp=\{0\}$. Similarly for $(x,y)\in F_\tau^-\times F_\tau^-$, $\theta (R(x,y)F_\tau^-)=\{0\}$, so $\g g F_\tau \subset\ker(\theta_{\mathcal V})$.
$\g g E \subset \ker(\theta_{\mathcal V})$ follows from the preceding observations. As $\theta$ commutes with every element of $\g{g}$, we also have $\g{g} \mbox{\textnormal{im}} \theta =\g{g} \theta(E) \subset \theta \g{g} E =\{0\}$. \end{proof}
\begin{corr} Let $E$ be a metric indecomposable representation of the Berger algebra $\g g$ preserving the decomposition $E=V_1\oplus V_2$ with $V_1$ or $V_2$ degenerate. For ${\mathcal V}=(E, V_1, V_2, V_1^\perp, V_2^\perp)$, one has: $\theta_{\mathcal V}^2=0$. \end{corr} \begin{proof} Recall that in the metric indecomposable case with $E=V_1 \oplus V_2$ where $V_1$ or $V_2$ is degenerate, one has $E=F_e$. Suppose $\theta_{\mathcal V}^2$ is non zero. In this case one can choose a non trivial supplementary space $A$ of $\ker \theta \cap \mbox{\textnormal{im}} \theta$ in $\mbox{\textnormal{im}} \theta$. $A$ is also a supplementary space of $\ker \theta$ in $\ker \theta + \mbox{\textnormal{im}} \theta$. Let us choose a supplementary space $B$ of $\ker \theta + \mbox{\textnormal{im}} \theta$ in $E$. Because $A \subset \mbox{\textnormal{im}} \theta$, there exists a subset $A'$ of $E$ such that $A=\theta A'$. For $a\in\g{g}$, $aA=a\theta A'=\theta a A'=\{0\}$ by the preceding theorem, because $a A'\subset \g{g} E$. So $A$ is invariant for the action of $\g{g}$. $\ker \theta + B$ is a supplementary space of $A$, which is also invariant by $\g{g}$, because $\g{g}(\ker \theta + B)\subset \ker \theta \subset \ker \theta + B$. So we obtain a new decomposition of $E$ into two $\g{g}$-invariant spaces $A$ and $\ker \theta + B$, and the action of $\g{g}$ on $A$ is trivial. So the action of $\g g$ decomposes into an exterior product along the decomposition $A \oplus (\ker \theta + B)$, in contradiction to what we supposed. \end{proof}
\end{document} | arXiv |
doi: 10.3934/dcdss.2019145
Solving the Babylonian problem of quasiperiodic rotation rates
Suddhasattwa Das 1, Yoshitaka Saiki 2,3,4, Evelyn Sander 5 and James A. Yorke 6
Courant Institute of Mathematical Sciences, New York University, 251 Mercer street, New York 10012, USA
Graduate School of Business Administration, Hitotsubashi University, 2-1 Naka, Kunitachi, Tokyo 186-8601, Japan
JST PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
University of Maryland, College Park, MD 20742, USA
Department of Mathematical Sciences, George Mason University, USA
University of Maryland, College Park, USA
* Corresponding author: S. Das
Received December 2016 Revised July 2017 Published January 2019
Fund Project: The second author is supported by JSPS KAKENHI grant 17K05360, JST PRESTO grant JPMJPR16E5. The third author was supported by NSF grant DMS-1407087. The fourth author is supported by USDA grants 2009-35205-05209 and 2008-04049
A trajectory $ \theta_n : = F^n(\theta_0), n = 0,1,2, \dots $ is quasiperiodic if the trajectory lies on and is dense in some $ d $-dimensional torus $ {\mathbb{T}^d} $, and there is a choice of coordinates on $ {\mathbb{T}^d} $ for which $ F $ has the form $ F(\theta) = \theta + \rho\bmod1 $ for all $ \theta\in {\mathbb{T}^d} $ and for some $ \rho\in {\mathbb{T}^d} $. (For $ d>1 $ we always interpret $ \bmod1 $ as being applied to each coordinate.) There is an ancient literature on computing the three rotation rates for the Moon. However, for $ d>1 $, the choice of coordinates that yields the form $ F(\theta) = \theta + \rho\bmod1 $ is far from unique and the different choices yield a huge choice of coordinatizations $ (\rho_1,\cdots,\rho_d) $ of $ \rho $, and these coordinatizations are dense in $ {\mathbb{T}^d} $. Therefore instead one defines the rotation rate $ \rho_\phi $ from the perspective of a map $ \phi: {\mathbb{T}^d} \to S^1 $. This is in effect the approach taken by the Babylonians and we refer to this approach as the "Babylonian Problem": determining the rotation rate $ \rho_\phi $ of the image of a torus trajectory when the torus trajectory is projected onto a circle, i.e., determining $ \rho_\phi $ from knowledge of $ \phi(F^n(\theta)) $. Of course $ \rho_\phi $ depends on $ \phi $ but does not depend on a choice of coordinates for $ {\mathbb{T}^d} $. However, even in the case $ d = 1 $ there has been no general method for computing $ \rho_\phi $ given only the sequence $ \phi(\theta_n) $, though there is a literature dealing with special cases. Here we present our Embedding continuation method for computing $ \rho_\phi $ from the image $ \phi(\theta_n) $ of a trajectory, for general $ d $, and show examples for $ d = 1 $ and $ 2 $. The method is based on the Takens Embedding Theorem and the Birkhoff Ergodic Theorem.
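As a minimal illustration of the $ d = 1 $ case discussed in the abstract, the sketch below estimates a rotation rate from an observed sequence of angles $ \phi_n\in[0,1) $ by averaging lifted angle increments with a smooth bump weight. It is only a toy version: the naive lift used here assumes the increments stay away from the wrap-around ambiguity, and the particular weight function is one common choice of bump function, not necessarily the exact weighting used for the figures below.

```python
import numpy as np

def weighted_rotation_rate(phi):
    """Estimate a rotation rate from angles phi_n in [0,1) (d = 1 case).

    Toy sketch only: the naive lift below assumes the angle increments are
    bounded away from the 0/1 wrap-around, so no embedding step is needed.
    """
    delta = np.diff(phi) % 1.0
    delta[delta > 0.5] -= 1.0                 # naive choice of the lift
    N = len(delta)
    t = (np.arange(N) + 0.5) / N
    w = np.exp(-1.0 / (t * (1.0 - t)))        # smooth bump weights on (0, 1)
    return np.sum(w * delta) / np.sum(w)

# Example: a rigid rotation observed directly
rho = 0.377597
phi = (rho * np.arange(20000)) % 1.0
print(weighted_rotation_rate(phi))            # ~ 0.377597
```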
Keywords: Quasiperiodic, Birkhoff Ergodic Theorem, rotation number, rotation rate, Takens Embedding Theorem, circular planar restricted 3-body problem, CR3BP.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Suddhasattwa Das, Yoshitaka Saiki, Evelyn Sander, James A. Yorke. Solving the Babylonian problem of quasiperiodic rotation rates. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2019145
Figure 1. The fish map (left) and flower map (right). The function $\gamma:S^1\to \mathbb{R}^2$ for each panel is given by Eq. 21 and Eq. 22, respectively, and the image plotted is $\gamma(S^1)$ in the complex plane. These are images of quasiperiodic curves with self-intersections, and we want to compute the rotation rate only from knowledge of a trajectory $\gamma_n\in {\mathbb{R}}^2$. The curve winds $j$ times around the points $P_j$, so $P_1$ is a correct choice of reference point from which angles can be measured to compute a rotation rate. If instead we choose $j\ne 1$, then the measured rotation rate will be $j$ times as big as for $j = 1$. In both cases, $P_1$ is the reference point: $P_1 = (8.25,4.4)$ and $(0.5,1.5)$ for the fish map and flower map, respectively. The angle marked $\Delta_n\in [0,1)$ measured from point $P_1$ is the angle between trajectory points $\gamma_n$ and $\gamma_{n+1}$. For each point $\gamma_n$ we can define $\phi_n$ to be a unit vector $(\gamma_n - P_1)/\|\gamma_n - P_1\|$. Still using $P_1$, we can define a map $\phi: {\mathbb{T}^d} \to S^1$; but since this is a one-dimensional torus, ${\mathbb{T}^d} = S^1$.
Figure 2. The flower map revisited. Suppose instead of having the function $ \gamma:S^1\to \mathbb{R}^2 $ for the flower Eq. 22 in Fig. 1, we had only one coordinate of $ \gamma $, for example, the real component, $ Re~\gamma. $ Knowing only one coordinate would seem to be a huge handicap to measuring a rotation rate. But it is not. In the spirit of Takens's idea of delay coordinate embeddings explained in detail later, we plot $ (Re~\gamma_n,Re~\gamma_{n-1}) $ and choose a point $ P_1 $ as before, and the map is now two dimensional. The rotation rate can be computed as before. The rotation rate $ \rho_{\phi} $ here using $ P_1 $ is the same as for Fig. 1 right
Figure 3. The angle difference for the fish and the flower maps. Here we plot $ (\phi_n, \Delta_n+ k) $ for every $ n\in\mathbb{N} $ and all integers $ k $, where $ \Delta_n = \phi_{n+1}-\phi_n \bmod1 $. In the left panel (the fish map, the easy case) the closure of the figure resolves into disjoint sets (which are curves $ \subset \mathbb{R}\times S^1 $), while on the right (the flower map, the hard case) they do not. Hence if we choose a point plotted on the left panel, it lies on a unique connected curve that we can designate as $ C\subset S^1\times \mathbb{R} $. We can choose any such curve to define $ \hat\Delta_n $, namely we define $ \hat\Delta_n = \Delta_n + k $ where $ k $ is the unique integer for which $ (\phi_n,\Delta_n + k)\in C $. A better method is needed to separate the set in the right panel into disjoint curves - and that is our embedding method
Figure 4. A lift of the angle difference for the fish and for the flower maps. This is similar to Fig. 3 except that the horizontal axis is $\theta$ instead of $\phi$. That is, we take $\theta_n$ to be $n\rho$ and $\Delta(\theta) = \phi(\theta+\rho)-\phi(\theta) \bmod1 \in [0,1)$ and we plot $(\theta_n,\Delta_n + k)$ for all integers $k$ (where again $\Delta_n = \Delta(n\rho)$). These are points on the set $G = \{(\theta, \Delta(\theta)+ k):\theta\in S^1, k\in\mathbb Z \}$. This set $G$ consists of a countable set of disjoint compact connected sets, "connected components", each of which is a vertical translate by an integer of every other component. For each $\theta\in S^1$ and $k\in \mathbb Z$ there is exactly one point $y\in [k,k+1)$ for which $(\theta,y)\in G$. Each connected component of $G$ is an acceptable candidate for $\hat\Delta$. Unlike the plots in Fig. 3, $G$ always splits into disjoint curves. Unfortunately the available data, the sequence $(\phi_n)$, only lets us make plots like Fig. 3. But the Takens Embedding method allows us to plot something like $G$ and determine the lift in the next figure.
Figure 5. Lifts over an embedded torus. Let $ \Theta : = \Theta_K^\phi $ be as in Eq. 15 and let $ \theta_n = n\rho $ be a trajectory on $ {\mathbb{T}^d} $. Assume $ K\ge 3 $. By Theorem 1.2 for almost any map $ \phi $, the set $ \Theta( {\mathbb{T}^d} ) $ is an embedding of $ {\mathbb{T}^d} $ into $ {\mathbb{T}} ^{K} $; i.e., $ \Theta $ is a homeomorphism of $ {\mathbb{T}^d} $ (the circle $ S^1 $ when $ d = 1 $) onto $ \Theta( {\mathbb{T}^d} ) $. In particular the map is one-to-one. The smooth (oval) curve is the set $ (\Theta( {\mathbb{T}^d} ),0) $. As in our previous graphs, the vertical axis shows the angle difference $ \Delta ( \theta ) \in [0,1)+k $ for all integers $ k $. Write $ \mathbb{U} : = \{(\Theta(\theta),\Delta(\theta)+k):\theta\in {\mathbb{T}^d} \mbox{ and } k\in\mathbb{Z}\} $. Unlike Fig. 3 but like Fig. 4, $ \mathbb{U} $ always splits into bounded, connected component manifolds that are disjoint from each other. Hence $ \mathbb{U} $, which is also the closure of the set $ \{(\Theta(\theta_n),\Delta_n + k):k\in \mathbb Z,n = 0,\cdots,\infty\} $, separates into disjoint components each of which is a lift of $ \Delta $ and each of which is homeomorphic to $ {\mathbb{T}^d} $. For each integer $ k $ the set $ \{(\Theta(\theta),\Delta(\theta)+k):\theta\in {\mathbb{T}^d} \} $ is a component as shown in this figure. See Theorem 2.1
Figure 6. Illustrating a chain of points on a rigid rotation on the torus. $x_n = n\sqrt{3} (\bmod 1), y_n = n\sqrt{5} (\bmod 1)$ for $n = 0, \cdots, N-1$ are plotted with the origin indicated by $0$ at the center of the panel. Each point $\theta_n = (x_n,y_n)$ is labeled with its subscript $n$. Here $N = 100$ (left) and $N = 20,000$ (right). Only the neighborhood of the origin is shown for the right panel. In the left panel, $\theta_4$ and $\theta_{93}$ (ⅰ) are near the origin, (ⅱ) their subscripts are relatively prime and (ⅲ) the total of the subscripts is less than $N$. On the right, points with subscripts $4109$ and $11,700$ play the corresponding role. In each case it follows that there is a chain of points starting from $0$ and ending at any desired $\theta_m$ where $0 < m < N$. This chain is a series of steps, each achieved by either adding one of the two subscripts or subtracting the other. See Prop. 2 and the algorithm sketched in its proof. In the left panel such a chain, adding $93$ or subtracting $4$ at each step, is shown that ends at $\theta_{90}$.
Figure 8. Projections of the fish torus and the flower torus. The coordinates used to find angle 1 (left) and angle 2 (right) for the fish torus (top) and the flower torus (bottom). The red circle shows the initial condition. The $\times$ shows the point from which the angle is measured. Note that for the fish torus, the point from which the angle is measured is very close to the edge of the torus image. For angle 2, points are projected onto a tilted plane that makes angle $0.05 \pi$ with the horizontal. See Section 4.1 for a full description of these projections.
Figure 9. Angle differences for the fish torus and flower torus. Each panel shows three possible angle differences, each differing by an integer, for the same projections as were depicted in Fig. 8. The angle versus angle difference for angle 1 (left) and angle 2 (right) for the fish torus (top) and flower torus (bottom). In the final panel, the picture cannot be separated into separate components
Figure 10. Lifts of the angle difference for the fish torus and flower torus. Here one of the possible lifts has been selected from each panel in Fig. 9. Each panel shows the angle versus angle difference lift for fish torus angle 1 (top left) and angle 2 (top right) and the flower torus angle 1 (bottom left) and angle 2 (bottom right), using the projections depicted in Fig. 8
Figure 7. The fish and flower torus. The top figures show two views of the fish torus, and the bottom two views of the flower torus. These figures can be thought of as projections of tori onto the plane represented by the page. The three coordinate axes are presented here to clarify which two-dimensional projection is being used. The projections of the tori on the left are simply connected so there is no way to choose a point P that would yield a non-zero rotation rate. The projections on the right yield images of the tori that are annuli with a hole in which P can be chosen to yield nonzero results. Each is a plot of N = 50090 iterates. The red circle is the initial point
Figure 11. Two views of a two-dimensional quasiperiodic trajectory for the restricted three-body problem described in Section 4.2
Figure 13. Convergence to the rotation rates for the CR3BP. For these two figures, we used differential equation time step $ dt = 0.00002 $ and we compute the change in angle after 50 such steps, that is, in time "output time" $ Dt = 0.001 $. We show the convergence rates to the estimated rate of $ 0.001\times \rho^{*}_{\theta} $ (left) and of $ 0.001 \times \rho^{*}_{\phi} $ (right). For both cases rotation rates are calculated using the Weighted Birkhoff averaging method $ \mbox{WB} ^{[2]}_{N} $ in Eq.13 and show fast convergence
Figure 12. Plots of the circular planar restricted three-body problem in $ r-r' $ coordinates. As described in the text, we define $ r = \sqrt{(q_1+0.1)^2+q^2_2} $ and $ r' = dr/dt $. This figure shows $ r $ versus $ r' $ for a single trajectory. The right figure is the enlargement of the left. One of the two rotation rates $ \rho^{*}_\phi $ is calculated by measuring from $ (r,r^{\prime}) = (0.15,0) $ in these coordinates
Nakagami distribution
The Nakagami distribution or the Nakagami-m distribution is a probability distribution related to the gamma distribution. The family of Nakagami distributions has two parameters: a shape parameter $m\geq 1/2$ and a second parameter controlling spread $\Omega >0$.
[Infobox plots: probability density function and cumulative distribution function]
Parameters $m{\text{ or }}\mu \geq 0.5$ shape (real)
$\Omega {\text{ or }}\omega >0$ spread (real)
Support $x>0\!$
PDF ${\frac {2m^{m}}{\Gamma (m)\Omega ^{m}}}x^{2m-1}\exp \left(-{\frac {m}{\Omega }}x^{2}\right)$
CDF ${\frac {\gamma \left(m,{\frac {m}{\Omega }}x^{2}\right)}{\Gamma (m)}}$
Mean ${\frac {\Gamma (m+{\frac {1}{2}})}{\Gamma (m)}}\left({\frac {\Omega }{m}}\right)^{1/2}$
Median No simple closed form
Mode ${\frac {\sqrt {2}}{2}}\left({\frac {(2m-1)\Omega }{m}}\right)^{1/2}$
Variance $\Omega \left(1-{\frac {1}{m}}\left({\frac {\Gamma (m+{\frac {1}{2}})}{\Gamma (m)}}\right)^{2}\right)$
Characterization
Its probability density function (pdf) is[1]
$f(x;\,m,\Omega )={\frac {2m^{m}}{\Gamma (m)\Omega ^{m}}}x^{2m-1}\exp \left(-{\frac {m}{\Omega }}x^{2}\right),\forall x\geq 0.$
where $m\geq 1/2$ and $\Omega >0$.
Its cumulative distribution function is[1]
$F(x;\,m,\Omega )=P\left(m,{\frac {m}{\Omega }}x^{2}\right)$
where P is the regularized (lower) incomplete gamma function.
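A short numerical sketch of the two formulas above (Python with NumPy/SciPy; the parameter values are arbitrary). SciPy's gammainc is the regularized lower incomplete gamma function P.

```python
import numpy as np
from scipy.special import gamma, gammainc

def nakagami_pdf(x, m, omega):
    # f(x; m, Omega) = 2 m^m / (Gamma(m) Omega^m) * x^(2m-1) * exp(-m x^2 / Omega)
    x = np.asarray(x, dtype=float)
    return 2.0 * m**m / (gamma(m) * omega**m) * x**(2*m - 1) * np.exp(-m * x**2 / omega)

def nakagami_cdf(x, m, omega):
    # F(x; m, Omega) = P(m, m x^2 / Omega)
    x = np.asarray(x, dtype=float)
    return gammainc(m, m * x**2 / omega)

x = np.linspace(0.1, 3.0, 5)
print(nakagami_pdf(x, m=2.0, omega=1.0))
print(nakagami_cdf(x, m=2.0, omega=1.0))
```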
Parametrization
The parameters $m$ and $\Omega $ are[2]
$m={\frac {\left(\operatorname {E} \left[X^{2}\right]\right)^{2}}{\operatorname {Var} \left[X^{2}\right]}},$
and
$\Omega =\operatorname {E} \left[X^{2}\right].$
Parameter estimation
An alternative way of fitting the distribution is to re-parametrize $\Omega $ and m as σ = Ω/m and m.[3]
Given independent observations $ X_{1}=x_{1},\ldots ,X_{n}=x_{n}$ from the Nakagami distribution, the likelihood function is
$L(\sigma ,m)=\left({\frac {2}{\Gamma (m)\sigma ^{m}}}\right)^{n}\left(\prod _{i=1}^{n}x_{i}\right)^{2m-1}\exp \left(-{\frac {\sum _{i=1}^{n}x_{i}^{2}}{\sigma }}\right).$
Its logarithm is
$\ell (\sigma ,m)=\log L(\sigma ,m)=-n\log \Gamma (m)-nm\log \sigma +(2m-1)\sum _{i=1}^{n}\log x_{i}-{\frac {\sum _{i=1}^{n}x_{i}^{2}}{\sigma }}.$
Therefore
${\begin{aligned}{\frac {\partial \ell }{\partial \sigma }}={\frac {-nm\sigma +\sum _{i=1}^{n}x_{i}^{2}}{\sigma ^{2}}}\quad {\text{and}}\quad {\frac {\partial \ell }{\partial m}}=-n{\frac {\Gamma '(m)}{\Gamma (m)}}-n\log \sigma +2\sum _{i=1}^{n}\log x_{i}.\end{aligned}}$
These derivatives vanish only when
$\sigma ={\frac {\sum _{i=1}^{n}x_{i}^{2}}{nm}}$
and the value of m for which the derivative with respect to m vanishes is found by numerical methods including the Newton–Raphson method.
It can be shown that at the critical point a global maximum is attained, so the critical point is the maximum-likelihood estimate of (m,σ). Because of the equivariance of maximum-likelihood estimation, one then obtains the MLE for Ω as well.
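A minimal numerical sketch of this fitting procedure (Python with NumPy/SciPy). After substituting σ = Σx²/(nm), the remaining equation in m reduces to digamma(m) − log m = mean(log x²) − log(mean(x²)); the root bracket passed to the solver below is an ad-hoc choice, not part of the derivation.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def nakagami_mle(x):
    """Maximum-likelihood estimates (m, sigma), with sigma = Omega / m."""
    x = np.asarray(x, dtype=float)
    rhs = np.mean(np.log(x**2)) - np.log(np.mean(x**2))     # <= 0 by Jensen's inequality
    # digamma(m) - log(m) is increasing in m, so a bracketing root finder suffices;
    # the bracket below is an arbitrary choice and may need widening for extreme data.
    m_hat = brentq(lambda m: digamma(m) - np.log(m) - rhs, 1e-3, 1e3)
    sigma_hat = np.mean(x**2) / m_hat                       # = sum(x^2) / (n * m_hat)
    return m_hat, sigma_hat

# Synthetic check: X = sqrt(Y), Y ~ Gamma(shape=m, scale=Omega/m) with m=2, Omega=3
x = np.sqrt(np.random.default_rng(0).gamma(shape=2.0, scale=1.5, size=5000))
m_hat, sigma_hat = nakagami_mle(x)
print(m_hat, m_hat * sigma_hat)   # roughly 2 and 3 (Omega_hat = m_hat * sigma_hat)
```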
Generation
The Nakagami distribution is related to the gamma distribution. In particular, given a random variable $Y\,\sim {\textrm {Gamma}}(k,\theta )$, it is possible to obtain a random variable $X\,\sim {\textrm {Nakagami}}(m,\Omega )$, by setting $k=m$, $\theta =\Omega /m$, and taking the square root of $Y$:
$X={\sqrt {Y}}.\,$
Alternatively, the Nakagami distribution $f(y;\,m,\Omega )$ can be generated from the chi distribution with parameter $k$ set to $2m$ and then following it by a scaling transformation of random variables. That is, a Nakagami random variable $X$ is generated by a simple scaling transformation on a Chi-distributed random variable $Y\sim \chi (2m)$ as below.
$X={\sqrt {(\Omega /2m)Y}}.$
For a Chi-distribution, the degrees of freedom $2m$ must be an integer, but for Nakagami the $m$ can be any real number greater than 1/2. This is the critical difference and accordingly, Nakagami-m is viewed as a generalization of Chi-distribution, similar to a gamma distribution being considered as a generalization of Chi-squared distributions.
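A sketch of the gamma-based generation described above (Python with NumPy; the parameter values are arbitrary), together with an empirical check of the moment relations from the Parametrization section.

```python
import numpy as np

rng = np.random.default_rng(0)
m, omega = 2.5, 4.0

# X = sqrt(Y) with Y ~ Gamma(shape=m, scale=omega/m)
y = rng.gamma(shape=m, scale=omega / m, size=200_000)
x = np.sqrt(y)

# Moment relations: E[X^2] ~ omega and (E[X^2])^2 / Var[X^2] ~ m
x2 = x**2
print(np.mean(x2))                    # close to omega
print(np.mean(x2)**2 / np.var(x2))    # close to m
```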
History and applications
The Nakagami distribution is relatively new, being first proposed in 1960.[4] It has been used to model attenuation of wireless signals traversing multiple paths[5] and to study the impact of fading channels on wireless communications.[6]
Related distributions
• Restricting m to the unit interval (q = m; 0 < q < 1) defines the Nakagami-q distribution, also known as Hoyt distribution.[7][8][9]
"The radius around the true mean in a bivariate normal random variable, re-written in polar coordinates (radius and angle), follows a Hoyt distribution. Equivalently, the modulus of a complex normal random variable does."
• With 2m = k, the Nakagami distribution gives a scaled chi distribution.
• With $m={\tfrac {1}{2}}$, the Nakagami distribution gives a scaled half-normal distribution.
• A Nakagami distribution is a particular form of generalized gamma distribution, with p = 2 and d = 2m
See also
• Normal distribution
• Gamma distribution
• Modified half-normal distribution
• Normally distributed and uncorrelated does not imply independent
• Reciprocal normal distribution
• Ratio normal distribution
• Standard normal table
• Sub-Gaussian distribution
References
1. Laurenson, Dave (1994). "Nakagami Distribution". Indoor Radio Channel Propagation Modelling by Ray Tracing Techniques. Retrieved 2007-08-04.
2. R. Kolar, R. Jirik, J. Jan (2004) "Estimator Comparison of the Nakagami-m Parameter and Its Application in Echocardiography", Radioengineering, 13 (1), 8–12
3. Mitra, Rangeet; Mishra, Amit Kumar; Choubisa, Tarun (2012). "Maximum Likelihood Estimate of Parameters of Nakagami-m Distribution". International Conference on Communications, Devices and Intelligent Systems (CODIS), 2012: 9–12.
4. Nakagami, M. (1960) "The m-Distribution, a general formula of intensity of rapid fading". In William C. Hoffman, editor, Statistical Methods in Radio Wave Propagation: Proceedings of a Symposium held June 18–20, 1958, pp. 3–36. Pergamon Press., doi:10.1016/B978-0-08-009306-2.50005-4
5. Parsons, J. D. (1992) The Mobile Radio Propagation Channel. New York: Wiley.
6. Ramon Sanchez-Iborra; Maria-Dolores Cano; Joan Garcia-Haro (2013). "Performance evaluation of QoE in VoIP traffic under fading channels". 2013 World Congress on Computer and Information Technology (WCCIT). pp. 1–6. doi:10.1109/WCCIT.2013.6618721. ISBN 978-1-4799-0462-4. S2CID 16810288.
7. Paris, J.F. (2009). "Nakagami-q (Hoyt) distribution function with applications". Electronics Letters. 45 (4): 210. Bibcode:2009ElL....45..210P. doi:10.1049/el:20093427.
8. "HoytDistribution".
9. "NakagamiDistribution".
| Wikipedia |
A New Fuzzy Rule-based Model to Partition a Complex Urban System in Homogeneous Urban Contexts
Ferdinando Di Martino, Barbara Cardone
Subject: Earth Sciences, Geoinformatics Keywords: urban system; urban context; microzone, fuzzy rule set; Mamdani fuzzy system; spatial database, GIS
We present a new unsupervised method aimed at obtaining a partition of a complex urban system into homogeneous urban areas, called urban contexts. The area of study is initially partitioned into microzones, homogeneous portions of the urban system that are the atomic reference elements for the census data. With the contribution of domain experts, we identify the physical, morphological, environmental and socio-economic indicators needed to identify synthetic characteristics of urban contexts and create the fuzzy rule set necessary to determine the type of urban context. We implement the set of spatial analysis processes necessary to calculate the indicators for each microzone and apply a Mamdani fuzzy rule system to classify the microzones. Finally, the partition of the area of study into urban contexts is obtained by dissolving contiguous microzones belonging to the same type of urban context. Tests are performed on the Municipality of Pozzuoli (Naples, Italy); the reliability of our model is measured by comparing the results with the ones obtained by detailed analysis.
Dynamic Modeling and Adaptive Controlling in GPS-Intelligent Buoy (GIB) Systems Based on Neural-Fuzzy Networks
Dangquan Zhang, Muhammad Aqeel Ashraf, Zhenling Liu, Wan-Xi Peng, Mohammad Javad Golkar, Amir Mosavi
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: positioning system; neural-fuzzy network; adaptive control; buoys
Recently, various relations and criteria have been presented to establish a proper relationship between control systems and the control of Global Positioning System (GPS)-intelligent buoy systems. Given the importance of controlling the position of buoys and the construction of intelligent systems, in this paper dynamic system modeling is applied to position marine buoys through an improved neural network with a backstepping technique. This study aims at developing a novel controller based on an adaptive fuzzy neural network to optimally track a dynamically positioned vehicle on water with unavailable velocities and unidentified control parameters. In order to model the network with the proposed technique, uncertainties and unwanted disturbances are studied in the neural network. The study develops a neural controller which applies the vectorial backstepping technique to surface ships that are dynamically positioned under undetermined disturbances and uncertainties. Moreover, the objective function is to minimize the output error of the neural network (NN) based on the closed-loop system. The most important feature of the proposed model for positioning buoys is its independence from comparative knowledge or information on the dynamics and the unwanted disturbances of ships. The numerical results demonstrate that the controller can adjust the routes and the position of the buoys to the desired objective with relatively small position errors.
Fuzzy Algorithmic Modeling of Economics and Innovation Process Dynamics Based on Preliminary Component Allocation by SSA Method
Alexey F. Rogachev, Alexey B. Simonov, Natalia V. Ketko, Natalia N. Skiter
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: singular spectrum analysis; fuzzy logic; innovative activity; time series; economic development; models
In this article, the authors propose an algorithmic approach to building a model of the dynamics of economic and, in particular, innovation processes. The approach under consideration is based on a complex algorithm that includes (1) decomposition of the time series into components using singular spectrum analysis; (2) recognition of the optimal component model based on fuzzy rules; and (3) creation of statistical models of the individual components and their combination. It is shown that this approach suits the high uncertainty characteristic of problems in the dynamics of innovation processes. The proposed algorithm makes it possible to create effective models that can be used both for analysis and for predicting the future states of the processes under study. The advantage of this algorithm is the possibility of expanding the base of rules and components used for modeling, an important condition for improving the algorithm and its applicability to a wide range of problems.
Forecasting Based on High-Order Fuzzy-Fluctuation Trends and Particle Swarm Optimization Machine Learning
Jingyuan Jia, Aiwu Zhao, Shuang Guan
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Fuzzy forecasting, fuzzy-fluctuation trend, particle swarm optimization, fuzzy time series, fuzzy logical relationship
Online: 4 July 2017 (16:35:22 CEST)
Most existing fuzzy forecasting models partition historical training time series into fuzzy time series and build fuzzy-trend logical relationship groups to generate forecasting rules. The process of determining the intervals is complex and uncertain. In this paper, we present a novel fuzzy forecasting model based on high-order fuzzy-fluctuation trends and the fuzzy-fluctuation logical relationships of the training time series. Firstly, we compare each datum with the datum of its previous day in the historical training time series to generate a new fluctuation trend time series (FTTS). Then, we fuzzify the FTTS into a fuzzy-fluctuation time series (FFTS) according to the up, equal or down range and orientation of the fluctuations. Since the relationship between the historical FFTS and the future fluctuation trend is nonlinear, a Particle Swarm Optimization (PSO) algorithm is employed to estimate the required parameters. Finally, we use the acquired parameters to forecast future fluctuations. In order to compare the performance of the proposed model with that of other models, we apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) time series datasets. The experimental results and the comparison results show that the proposed method can be successfully applied in stock market forecasting or to similar kinds of time series. We also apply the proposed method to forecast the Shanghai Stock Exchange Composite Index (SHSECI) to verify its effectiveness and universality.
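As an illustration of the first two steps described in this abstract (differencing the series and labelling fluctuations as down/equal/up), here is a toy sketch; the "equal" band below is an assumed tuning constant, and the fuzzification is reduced to crisp labels, so the PSO estimation and forecasting steps are not shown.

```python
import numpy as np

def fluctuation_labels(series, equal_band=0.5):
    """Crisp stand-in for steps 1-2: build the FTTS and label each fluctuation."""
    series = np.asarray(series, dtype=float)
    diffs = np.diff(series)                               # fluctuation-trend series (FTTS)
    threshold = equal_band * np.mean(np.abs(diffs))       # assumed definition of "equal"
    labels = np.where(diffs > threshold, 'up',
             np.where(diffs < -threshold, 'down', 'equal'))
    return diffs, labels

prices = [100.0, 101.2, 101.1, 99.8, 99.9, 100.7]
print(fluctuation_labels(prices)[1])                      # ['up' 'equal' 'down' 'equal' 'up']
```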
Time Series Seasonal Analysis Based on Fuzzy Transforms
Ferdinando Di Martino, Salvatore Sessa
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: ARIMA; forecasting; fuzzy partition; fuzzy transform; time series
We define a new seasonal forecasting method based on fuzzy transforms. We use the best interpolating polynomial for extracting the trend of the time series and generate the inverse fuzzy transform on each seasonal subset of the universe of discourse for predicting the value of an assigned output. As a first example, we use the daily weather dataset of the municipality of Naples (Italy), with data collected from 2003 to 2015, making predictions on the following outputs: mean temperature, max temperature and min temperature, all considered daily. As a second example, we use the daily mean temperature measured at the weather station "Chiavari Caperana" in the Italian region of Liguria. We compare the results of our method with those of the average seasonal variation, ARIMA and the usual fuzzy transform methods, concluding that the best results are obtained with our approach in both examples.
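For readers unfamiliar with fuzzy transforms, the sketch below shows the textbook direct and inverse F-transform with a uniform triangular fuzzy partition on synthetic data; it is not the seasonal pipeline of the paper, and the node count and test signal are arbitrary choices.

```python
import numpy as np

def triangular_partition(nodes, t):
    """Membership values A_k(t) of a uniform triangular fuzzy partition (hat functions)."""
    h = nodes[1] - nodes[0]
    return np.clip(1.0 - np.abs(t[None, :] - nodes[:, None]) / h, 0.0, None)

def f_transform(t, y, nodes):
    A = triangular_partition(nodes, t)
    return (A @ y) / A.sum(axis=1)           # direct F-transform components F_k

def inverse_f_transform(F, nodes, t):
    A = triangular_partition(nodes, t)
    return F @ A                             # inverse F-transform (smooth reconstruction)

t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
nodes = np.linspace(0.0, 1.0, 9)             # data are dense with respect to this partition
y_hat = inverse_f_transform(f_transform(t, y, nodes), nodes, t)
print(float(np.max(np.abs(y_hat - y))))      # deviation between data and smooth reconstruction
```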
A New Validity Index Based on Fuzzy Energy and Fuzzy Entropy Measures in Fuzzy Clustering Problems
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: FCM; validity index; fuzzy energy; fuzzy entropy
Two well-known drawbacks in fuzzy clustering are the requirement to assign the number of clusters in advance and the random initialization of cluster centers. Since the quality of the final fuzzy clusters depends heavily on the initial choice of the number of clusters and on the initialization of the clusters, it is necessary to apply a validity index to measure the compactness and the separability of the final clusters and to run the clustering algorithm several times. We propose a new fuzzy C-means algorithm in which a validity index based on the concepts of maximum fuzzy energy and minimum fuzzy entropy is applied to initialize the cluster centers and to find the optimal number of clusters and initial cluster centers, in order to obtain a good clustering quality without increasing time consumption. We test our algorithm on UCI machine learning classification datasets, comparing the results with the ones obtained by using well-known validity indices and variations of FCM that use optimization algorithms in the initialization phase. The comparison shows that our algorithm represents an optimal trade-off between the quality of clustering and time consumption.
A Fuzzy Inference System for Seagrass Distribution Modeling in the Mediterranean Sea
Dimitra Papaki, Nikolaos Kokkos, Georgios Sylaios
Subject: Earth Sciences, Oceanography Keywords: seagrass; fuzzy inference system; modeling; species abundance; Mediterranean Sea
A Mamdani-type fuzzy-logic model has been developed to link Mediterranean seagrass abundance to the prevailing environmental conditions. Big databases, such as UNEP-WCMC (seagrass abundance), CMEMS and EMODnet (oceanographic/environmental), and human-impact parameters were utilized for this expert system. Model structure and input parameters were tested according to their capacity to accurately predict seagrass families at specific locations. The optimum FIS comprised four input variables: water depth, sea surface temperature, nitrates and bottom chlorophyll-a concentration, exhibiting fair accuracy (76%). Results illustrated that Posidoniaceae prefers cool waters (16-18°C) and low chlorophyll-a presence (< 0.2 mg/m3); Zosteraceae favors cool (16-18°C) and mesotrophic waters (Chl-a > 0.2 mg/m3), but also slightly warmer waters (18-19.5°C) with lower Chl-a levels (< 0.2 mg/m3); Cymodoceaceae lives from warm, oligotrophic (19.5-21.0°C and Chl-a < 0.3 mg/m3) to moderately warm mesotrophic sites (18-21.3°C and 0.3 – 0.4 mg/m3 Chl-a). Finally, Hydrocharitaceae thrives in warm Mediterranean waters (21-23°C) of low chlorophyll-a content (< 0.25 mg/m3). Climate change scenarios showed that Posidoniaceae and Zosteraceae tolerate bathymetric changes but are mostly affected by sea temperature rise, while Hydrocharitaceae exhibits tolerance to a higher sea temperature rise. This FIS could be used by national and regional policy-makers and public authorities.
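To make the Mamdani-type inference concrete, here is a hand-coded two-rule sketch (min for AND, max aggregation, centroid defuzzification). The membership functions, rules and variable ranges below are illustrative placeholders, not the fitted ones of this study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def mamdani_suitability(depth, temp):
    z = np.linspace(0.0, 1.0, 201)                        # output universe: habitat suitability
    # Rule 1: IF depth is shallow AND temperature is cool THEN suitability is high
    w1 = min(tri(depth, 0.0, 5.0, 20.0), tri(temp, 14.0, 17.0, 19.0))
    # Rule 2: IF depth is deep AND temperature is warm THEN suitability is low
    w2 = min(tri(depth, 15.0, 40.0, 60.0), tri(temp, 19.0, 22.0, 25.0))
    aggregated = np.maximum(np.minimum(w1, tri(z, 0.5, 1.0, 1.5)),    # clipped "high" set
                            np.minimum(w2, tri(z, -0.5, 0.0, 0.5)))   # clipped "low" set
    return float(np.sum(z * aggregated) / (np.sum(aggregated) + 1e-12))  # centroid

print(mamdani_suitability(depth=8.0, temp=16.5))          # nearer 1 means more suitable
```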
An Approximation Method of Fuzzy Numbers Based on Extended Fuzzy Transforms
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: extended fuzzy transform; fuzzy number; rule management system; spatial analysis
We propose a new Mamdani fuzzy rule-based system in which the fuzzy sets in the antecedents and consequents are assigned on a discrete set of points and approximated by using the extended inverse fuzzy transforms, whose components are calculated by verifying that the dataset is sufficiently dense with respect to the uniform fuzzy partition. We test our system on a problem of spatial analysis consisting of the evaluation of the liveability of residential housing in all the municipalities of the district of Naples (Italy). Comparisons are made with the results obtained by using trapezoidal fuzzy numbers in the fuzzy rules.
Modeling and Control of Power and Energy Produced by a Synchronous Generator Using Polynomial Fuzzy Systems and Sum-of-Squares Approach
Seyyed Mohammad Hosseini Rostami, Babak Sheikhi, Ahmad Jafari
Subject: Engineering, Automotive Engineering Keywords: Synchronous generator; Polynomial fuzzy controller; Polynomial fuzzy system; Polynomial Lyapunov function; Stability; Sum of squares (SOS)
The synchronous generator, as the main component of power systems, plays a key role in their stability. Therefore, utilizing the most effective control strategy for modeling and controlling the synchronous generator results in the best outcomes in power system performance. The advantage of using a powerful controller is to have the synchronous generator modeled and controlled while it fulfils its main task, i.e., stabilizing power systems. Since the synchronous generator is known as a complicated nonlinear system, its modeling and control is a difficult task. This paper presents a sum-of-squares (SOS) approach to modeling and controlling the synchronous generator using polynomial fuzzy systems. This method, as an efficacious control strategy, has numerous advantages over the well-known T–S fuzzy controller, because the control framework is a polynomial fuzzy model, which is more general and effective than the well-known T–S fuzzy model. In this case, a polynomial Lyapunov function is used for analyzing the stability of the polynomial fuzzy system. Moreover, the number of rules in a polynomial fuzzy model is less than in a T–S fuzzy model. Besides, the derived stability conditions are represented in terms of the SOS approach and can be numerically solved via the recently developed SOSTOOLS. This approach avoids the difficulty of solving LMIs (Linear Matrix Inequalities). The effectiveness of the proposed control strategy is verified by using the third-party MATLAB toolbox SOSTOOLS.
Modeling and Simulation of Fuzzy logic based Hybrid power for Electrification System in case of Ashuda Villages
Abaraham Hizkiel, Gantasala Lakshmi
Subject: Engineering, Electrical & Electronic Engineering Keywords: Key Words: PV-Wind-Hydro Hybrid Power System, Dynamic Modeling, Load Profile, Grid Extension, smart micro grid, fuzzy logic controller, and mat lab/Simulink
Ethiopia is a developing country, where the majority of the population lives in rural areas without access to electricity. 83% of the total population of the country uses traditional biomass energy as a basic source of energy. In contrast, the country is endowed with sufficient renewable energy resources which can be used as a standalone electric energy supply system for electrifying remote areas of the country. These resources are mainly micro hydropower and wind, which can be used individually or in the best combination with one another. The application of hybrid renewable energy systems has become an important alternative solution for rural electrification programs. The modeling and control of a hybrid PV-Wind-Hydro DG system is also addressed. Dynamic models for the major system components, namely the wind energy conversion system, the PV energy conversion system, the hydro unit, the inverter, and the overall fuzzy logic controller, are developed. Then, a simulation model for the proposed hybrid power system has been developed using the MATLAB/Simulink environment. This is done by creating subsystem sets of the major dynamic component models and then assembling them into a single aggregate model. The overall power management strategy for coordinating and/or controlling the different energy sources is also presented in the thesis work. In total there are 800 households with a total electric demand of 71.6 kW. To satisfy this demand, 52%, 35% and 13% is to be contributed from wind, hydro and solar respectively. To use the power economically, a fuzzy logic controller is used. The controller monitors the demand and the available sources and then switches to the appropriate power supply according to the written rules. Simulations have been carried out to verify the system dynamic performance using a practical load profile and weather data. The result shows that the overall power management strategy is effective and the load demand is balanced. To complete this work, a grid extension from the closest substation has been compared with the hybrid system. The cost of the grid extension is estimated based on the data obtained from the EEP office. This is done in order to compare the cost of the designed hybrid power system against the cost of grid extension. The result shows the breakeven grid extension distance to be 23.9 km, which indicates that grid extension is preferable.
Fermatean fuzzy linguistic weighted averaging/geometric operators based on modified operational laws and their application in multiple attribute group decision-making
Rajkumar Verma
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: Fermatean fuzzy set; Fermatean fuzzy linguistic set; Fermatean fuzzy linguistic number; MAGDM; supplier selection
Fermatean fuzzy linguistic (FFL) set theory provides an efficient tool for modeling a higher level of uncertain and imprecise information, which cannot be represented using intuitionistic fuzzy linguistic (IFL)/Pythagorean fuzzy linguistic (PFL) sets. On the other hand, the linguistic scale function is a better way to consider the semantics of the linguistic terms during the evaluation process. In the present paper, we first define some new modified operational laws for Fermatean fuzzy linguistic numbers (FFLNs) based on the linguistic scale function (LSF) to overcome the shortcomings of the existing operational laws, and we prove some of their important mathematical properties. Based on these laws, the work defines several new aggregation operators (AOs), namely, the FFL-weighted averaging (FFLWA) operator, the FFL-weighted geometric (FFLWG) operator, the FFL-ordered weighted averaging (FFLOWA) operator, the FFL-ordered weighted geometric (FFLOWG) operator, the FFL-hybrid averaging (FFLHA) operator and the FFL-hybrid geometric (FFLHG) operator under the FFL environment. Several properties of these AOs are investigated in detail. Further, based on these operators, a multiple attribute group decision-making (MAGDM) approach with FFL information is developed. Finally, to illustrate the effectiveness of the present approach, a real-life supplier selection problem is presented where the evaluation information of the alternatives is given in terms of FFLNs.
Fuzzy-Based Failure Modes, Effects and Criticality Analysis Applied to Cyber-Power Grids
Andrés A. Zúñiga, João Filipe Pereira Fernandes, Paulo J. C. Branco
Subject: Engineering, Electrical & Electronic Engineering Keywords: FMECA; Fuzzy Inference Systems; fuzzy-based FMECA, Risk assessment, cyber-power grids
In this paper, we introduce the application of Type-I fuzzy inference systems (FIS) as an alternative to improve the prioritization in the FMECA analysis applied to cyber-power grids. Classical FMECA assesses the risk level through the Risk Priority Number (RPN), which is computed as the product of three integer numbers, called risk factors, representing the severity, occurrence, and detectability of each failure mode and defined by a team of experts. The RPN does not consider any relative importance between the risk factors and may not necessarily represent the real risk perception of the FMECA team members, usually expressed in natural language; this is the main FMECA shortcoming criticized in the literature. Our approach considers fuzzy variables defined by FMECA experts to represent the uncertainty associated with human language, and a rule base consisting of 125 fuzzy rules to represent the risk perception of the experts. To test our approach, we select a cyber-power grid previously analyzed by the authors using the classical FMECA. The results reveal our proposed fuzzy approach as promising for representing the uncertainty associated with expert knowledge and for performing an accurate prioritization of failure modes in the context of electrical power systems.
Community Detection Problem Based on Polarization Measures. An application to Twitter: the COVID-19 case in Spain
Inmaculada Gutiérrez García-Pardo, Juan Antonio Guevara Gil, Daniel Gómez González, Javier Castro Cantalejo, Rosa Espínola Vílchez
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Networks; Community Detection; Extended Fuzzy Graphs; Polarization; Fuzzy Sets; Ordinal Variation
In this paper we address one of the most important topics in the field of Social Network Analysis: the community detection problem with additional information. That additional information is modeled by a fuzzy measure that represents the possibility of polarization. In particular, we are interested in taking the polarization of nodes into account in the community detection problem. Adding this type of information makes the problem more realistic, as a community is more likely to form if the corresponding elements are willing to maintain a peaceful dialogue. The polarization capacity is modeled by a fuzzy measure based on the JDJpol measure of polarization related to two poles. We also present an efficient algorithm for finding groups whose elements are not polarized. We then work on a real case: a network obtained from Twitter, concerning the political position against the Spanish government taken by several influential users. We analyze how the partitions obtained change when additional information about how polarized that society is gets added to the problem.
Energy and Entropy Measures of Fuzzy Relations for Data Analysis
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: fuzzy entropy; fuzzy energy; fuzzy rules; fuzzy sets; fuzzy relations
We present a new method for assessing the strength of fuzzy rules with respect to a dataset, based on the measures of the greatest energy and smallest entropy of a fuzzy relation. Considering a fuzzy automaton (relation) in which A is the input fuzzy set and B the output fuzzy set, the fuzzy relation R1 with the greatest energy provides information about the greatest strength of the input-output relationship, and the fuzzy relation R2 with the smallest entropy provides information about the uncertainty of the input-output relationship. We consider a new index of the fuzziness of the input-output relationship based on R1 and R2. In our method this index is calculated for each pair of input and output fuzzy sets in a fuzzy rule. A threshold value is set for choosing the most relevant fuzzy rules with respect to the data.
TOPSIS Based Algorithm for Solving Multi-objective Multi-level Programming Problem with Fuzzy Parameters
Surapati Pramanik, Partha P. Dey, Florentin Smarandache
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: multi-objective multi-level programming; fuzzy parameters; TOPSIS; fuzzy goal programming; multi-objective decision making
The paper proposes a TOPSIS method for solving the multi-objective multi-level programming problem (MO-MLPP) with fuzzy parameters via fuzzy goal programming (FGP). At first, the α-cut method is used to transform the fuzzily described MO-MLPP into a deterministic MO-MLPP. Then, for a specific α, we construct the membership functions of the distance functions from the positive ideal solution (PIS) and negative ideal solution (NIS) of all level decision makers (DMs). Thereafter, an FGP-based multi-objective decision model is established for each level DM to obtain an individual optimal solution. A possible relaxation on decisions for all DMs is taken into account for a satisfactory solution. Subsequently, two FGP models are developed and compromise optimal solutions are found by minimizing the sum of negative deviational variables. To recognize the better compromise optimal solution, the concept of distance functions is utilized. Finally, a novel algorithm for MO-MLPP involving fuzzy parameters is provided and an illustrative example is solved to verify the proposed procedure.
Generalized Topological Notions by Operators
Ismail Ibedou
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: fuzzy operators; fuzzy separation axioms; fuzzy compactness; fuzzy connectedness
In this paper, we introduce the notion of r-fuzzy β-Ti, i = 0, 1, 2, separation axioms related to a fuzzy operator β on the initial set X, which is a generalization of previous fuzzy separation axioms. An r-fuzzy α-connectedness related to a fuzzy operator α on the set X is introduced, which is a generalization of many types of r-fuzzy connectedness. An r-fuzzy α-compactness related to a fuzzy operator α on the set X is introduced, which is a generalization of many types of fuzzy compactness.
VH-FL: Intelligent Vertical Handoff based on Fuzzy Logic Decision
Rushdi Hamamreh
Subject: Keywords: Vertical Handoff; Fuzzy Logic; wireless systems; IVH-FL
The most important issue in integrated next generation wireless systems (NGWS) built on fourth generation networks, which consist of various wireless architectures extending from cellular networks to satellite networks, is to allow everyone around the world to connect seamlessly to applications anywhere at any time through the best network. Heterogeneous networks have created many challenges such as mobility management, handoff, and resource management. For always-best connectivity, the selection of parameters plays an important role in the vertical handoff decision; some parameters depend on the mobile terminal (MT) and some depend on the network conditions. In this paper we design an Intelligent Vertical Handoff model based on Fuzzy Logic Decision (IVH-FL), which uses five parameters for the vertical handoff decision: Received Signal Strength (RSS), available bandwidth (B), user preference (UP), mobile speed (SM) and power consumption (PC), with the help of the Fuzzy Logic toolbox and the concept of fuzzy linguistic variables. The results confirm improved performance and a reduced number of unnecessary handovers.
Eccentricity Estimation in Ultra-Precision Rotating Devices Based on a Neuro-Fuzzy Model
Raúl Mario del Toro Matamoros, Rodolfo Haber
Subject: Engineering, Other Keywords: neuro-fuzzy modelling; intelligent monitoring; manufacturing processes
Monitoring complex electro-mechanical processes is not straightforward despite the arsenal of techniques available nowadays. This paper presents a method based on an Adaptive-Network-based Fuzzy Inference System (ANFIS) to estimate the eccentricity of a spinning axis. The method is experimentally tested on an ultra-precision rotating device commonly used for micro-scale turning. The developed model has three inputs: two obtained from a frequency-domain analysis of a vibration signal and a third, which is the device rotation frequency. A comparative study demonstrates that an adaptive neuro-fuzzy inference system model provides better error-based performance indices for detecting imbalance than a non-linear regression model. This simple, fast, and non-intrusive imbalance detection strategy is proposed to counteract eventual deterioration in the performance of ultra-high precision rotating machines due to vibrations.
Some Remarks on Fuzzy Hilbert Spaces
Popa Lorena, Lavinia Sida
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: fuzzy Hilbert space; fuzzy inner product; fuzzy norm
The aim of this paper is to determine a suitable definition for the concept of fuzzy Hilbert space. In order to achieve this, we firstly focused on various approaches from the already-existent literature. Then we considered another approach to the notion of fuzzy inner product and analysed its properties.
Effort Estimation Model for Developing Web Applications Based Fuzzy and Practical Models
Dinesh Kumar Saini, Jabar H. Yousif
Subject: Keywords: effort; estimation; design; coding; unit testing; fuzzy model
Objective: This paper aims to build an effort estimation model for the design, coding and testing of web applications based on fuzzy and practical models, which will help in optimizing the effort in software development. Methods/Analysis: A soft computing approach is adopted and applied to effort estimation and then compared with the practical effort in the development process, interpreting the historical data available for the existing functionalities. Findings: The effort estimation model presented in this paper focuses on the first-level estimates published by project managers and the second-level estimates presented by project leaders or developers for any new requirement or enhancement of a web application built on a 3-tier architecture using Microsoft technologies. The model classifies each task as either low, medium or high complexity. These tasks pertain to the lowest-level parts in bottom-up estimation. Efforts are estimated for designing, coding and unit testing of these tasks, and the efforts are summed up to get the effort estimate for the higher level, which is a feature to be implemented. Novelty/Improvement: The paper also discusses the application of the effort estimation model by taking a new requirement as a case study. The first-level estimates calculated using the effort estimation model have a variance of about 25% when compared with the actual effort. This variance is very much acceptable considering that first-level estimates can be tolerated up to 35%. The proposed effort estimation tool would help project managers to efficiently control the project, manage resources effectively, and improve the software development process, and it also supports trade-off analyses among schedule, performance, quality and functionality. Fuzzy logic is used to verify the claims made in effort estimation. A new relation between the number of data points and the effort-value membership for actual data is proposed and converted into a crisp value in the range [0, 1], which helps to classify the complexity of the tasks and subtasks in the design, coding and testing phases.
A Neuro-Fuzzy Approach Based Tool Condition Monitoring in AISI H13 Milling
Md. Shafiul Alam, Maryam Aramesh, Stephen Veldhuis
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: TCM; cutting force; flank wear; ANFIS; fuzzy; LabVIEW
In the manufacturing industry, cutting tool failure is a probable fault which damages the cutting tools and the workpiece quality and causes unscheduled downtime. It is very important to develop a reliable and inexpensive intelligent tool wear monitoring system for use in cutting processes. A successful monitoring system can effectively maintain the machine tool, cutting tool and workpiece. In the present study, a tool condition monitoring system has been developed for the die steel (H13) milling process. An effective design of experiments and a robust data acquisition system captured the impact of machining forces in the milling operation. An ANFIS-based model has also been developed from the cutting force-tool wear relationship and implemented in the tool wear monitoring system. The prediction model shows that the developed system is accurate enough to perform online tool wear monitoring in the milling process.
Fuzzy Ensemble Ideal Solution Based Multi-Criteria Decision-Making Support for Heat Energy Transition in Danish Households
Qianyun Wen, Qiyao Yan, Junjie Qu, Yang Liu
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: MCDM; Individual Heating; Fuzzy; Energy Transition; Ensemble
More than 110 countries, including 500 cities worldwide, have set the goal of reaching carbon neutrality. Heating contributes most of the residential energy consumption and carbon emissions, so a green energy transition of fossil-based heating systems is needed to reach the emission goals. However, heating systems vary in energy source, heating technology and equipment location, and these complexities make it challenging for households to compare heating systems and make decisions. Hence, a decision support tool that provides a generalized ranking of individual heating alternatives is proposed for households, as decision-makers, to identify the optimal choice. This paper presents an analysis of 13 heating alternatives and 19 quantitative criteria covering technological, environmental, and financial aspects, combines ideal-solution-based Multi-Criteria Decision Making with 6 weighting methods and 4 normalization methods, and introduces ensemble learning with a fuzzy membership function derived from the Cauchy distribution to produce the final ranking. The robustness of the proposed method is verified by 3 sensitivity analyses from different aspects. Air-to-water heat pumps, solar heating and direct district heating are the top three alternatives in the final ranking under Danish national average data. A framework is designed to guide decision-makers in applying this ranking guideline to their practical situations.
Modeling Climate Change Impact on Wind Power Resources Using Adaptive Neuro-Fuzzy Inference System
Narjes Nabipour, Amir Mosavi, Eva Hajnal, Laszlo Nadai, Shahab Shamshirband, Kwok-Wing Chau
Subject: Keywords: wind turbine; adaptive neuro-fuzzy inference system (ANFIS); dynamical downscaling; regional climate change model; renewable energy; machine learning
Climate change impacts and adaptation are ongoing issues that attract the attention of many researchers. Insight into the wind power potential of an area and its probable variation due to climate change can provide useful information to energy policymakers and strategists for sustainable development and management of energy. In this study, the spatial variation of wind power density at turbine hub-height and its variability under future climatic scenarios are considered. An ANFIS-based post-processing technique was employed to match the power outputs of the regional climate model with those obtained from the reference data. The near-surface wind data obtained from a regional climate model are employed to investigate climate change impacts on the wind power resources of the Caspian Sea. After converting near-surface wind speed to turbine hub-height speed and computing wind power density, the results are investigated to reveal mean annual power and seasonal and monthly variability for a 20-year period in the present (1981-2000) and in the future (2081-2100). The results of this study reveal that climate change does not remarkably affect the wind climate over the study area; however, the future simulations project a slight decrease in mean annual wind power compared to the historical simulations. Moreover, the results demonstrate strong temporal and spatial variation in wind power, with winter and summer having the highest power values. The findings indicate that the middle and northern parts of the Caspian Sea have the highest wind power values. However, the results of the post-processing technique using the adaptive neuro-fuzzy inference system (ANFIS) model show that the real wind power potential of the area is lower than that projected by the regional climate model.
Modeling of Renewable Energy Systems by a Self-Evolving Nonlinear Consequent Part Recurrent Type-2 Fuzzy System (NCPRT2FS) for Power Prediction
Jafar Tavoosi, Amir Abolfazl Suratgar, Mohammad Bagher Menhaj, Amir Mosavi, Ardashir Mohammadzadeh, Ehsan Ranjbar
Subject: Engineering, Control & Systems Engineering Keywords: Self-Evolving, Recurrent Type-2 Fuzzy, Nonlinear Consequent Part, Convergence Analysis, Renewable Energy.
Not only does this paper present a novel type-2 fuzzy system for the identification and behavior prognostication of an experimental solar cell set and a wind turbine, but it also brings forward a technique to acquire an optimal number of membership functions and the corresponding rules. It proposes a seven-layered NCPRT2FS. For fuzzification in the first two layers, Gaussian type-2 fuzzy membership functions with uncertainty in the mean are exploited. The third layer comprises rule definition and the fourth one performs type reduction. The last three layers evaluate the resultant left-right firing points, the two end-points and the output, respectively. It should not be overlooked that recurrent feedback at the fifth layer exploits delayed outputs, improving the efficiency of the suggested NCPRT2FS. Later in the paper, a structural learning scheme based on type-2 fuzzy clustering is put forward. An adaptive-learning-rate back-propagation algorithm is extended to adjust the parameters while ensuring convergence. Finally, a photovoltaic solar cell set and a wind turbine are considered as case studies; the experimental data are exploited and the resulting performance is persuasive.
Modeling, Simulation and Optimization of agricultural greenhouse microclimate by the application of artificial intelligence and / or fuzzy logic
Didi Faouzi, Nacereddine Bibi-Triki
Subject: Mathematics & Computer Science, Other Keywords: Greenhouse; microclimate; Modelling; fuzzy controller; Optimization; Solar Energy; Energy saving; Climate Model; Greenhouse effect; Temperature
The agricultural greenhouse is widely used in the agricultural sphere, despite its shortcomings, including overheating during the day and cooling at night, which sometimes results in thermal inversion, mainly due to its low inertia. The chapel-type glasshouse is relatively more efficient than the conventional tunnel greenhouse, but its spread in the field is rather timid because of its relatively high cost [14-22]. The agricultural greenhouse aims to create a microclimate favorable to the growth and development requirements of the crop, whatever the surrounding weather conditions, and to produce, according to the cropping calendars, fruits, vegetables and flower species out of season and widely available throughout the year [12-13]. It is defined by its structural and functional architecture, the thermal, mechanical and optical quality of its wall, its level of sealing, and the accompanying technical and technological equipment. The greenhouse is a very confined environment in which multiple components are exchanged; the key factors are light, temperature and relative humidity [8]. Its thermal behavior depends on the sealing level of the cover and on its physical characteristics: transparent to solar radiation, and absorbent and reflective of the infrared radiation emitted by the enclosure, which produces the solar radiation trapping otherwise called the "greenhouse effect", together with the accompanying air-handling techniques and technologies. The socio-economic analysis of populations around the world reveals, especially over the last two decades, rapid and profound transformations; these changes are accompanied by changes in eating habits, mainly characterized by rising consumption spread throughout the year [14]. To effectively meet this demand, greenhouse systems have evolved, particularly towards greater control of production conditions (climate, irrigation, ventilation techniques, CO2 supply, etc.). Technological progress has allowed greenhouses to become increasingly sophisticated and industrial in nature (heating, air conditioning, control, computers, regulation, etc.). New climate control techniques have emerged, ranging from classical control devices to artificial intelligence [10-11] such as neural networks and/or fuzzy logic. As a result, greenhouse growers prefer these new technologies while optimizing investment in the field, in order to effectively meet the supply of and demand for these fresh products, cheaply and widely available throughout the year.
Two-stage Algorithm for Solving Arbitrary Trapezoidal Fully Fuzzy Sylvester Matrix Equations
Ahmed AbdelAziz Elsayed, Bassem Saassouh, Nazihah Ahmad, Ghassan Malkawi
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Fully fuzzy Sylvester matrix equations; Fuzzy matrix equation; Numerical fuzzy solution; Trapezoidal fuzzy multiplication
Many authors have proposed analytical methods for solving the fully fuzzy Sylvester matrix equation (FFSME) based on the Vec-operator and the Kronecker product. However, these methods are restricted to nonnegative fuzzy numbers and cannot be extended to FFSMEs with near-zero fuzzy numbers. The main intention of this paper is to develop a new numerical method for solving FFSMEs with near-zero trapezoidal fuzzy numbers, which gives the trapezoidal fully fuzzy Sylvester matrix equation (TrFFSME) a wider scope in scientific applications. This numerical method can solve the trapezoidal fully fuzzy Sylvester matrix equation with arbitrary coefficients and find all possible finite arbitrary solutions of the system. In order to obtain all possible fuzzy solutions, the TrFFSME is transformed into a system of non-linear equations based on a newly developed arithmetic fuzzy multiplication between trapezoidal fuzzy numbers. The fuzzy solutions of the TrFFSME are obtained by developing a new two-stage algorithm. To illustrate the proposed method, a numerical example is solved.
Stochastic Thermal Load Dispatch Employing Opposition-based Greedy Heuristic Search
Manmohan Singh, JS Dhillon
Subject: Engineering, Electrical & Electronic Engineering Keywords: fuzzy theory; heuristic search; stochastic economic load dispatch; risk analysis
A thermal load dispatch problem minimizes several objectives, viz. operating cost and emission of gaseous pollutants, together, while allocating the power demand among the committed generating units subject to physical and technological system constraints. A stochastic thermal load dispatch problem is undertaken while taking into consideration the uncertainties, errors in data measurements and the random nature of load demand. Owing to the uncertain load demand, the variance due to the mismatch of power demand, termed risk, is considered as another conflicting objective to be minimized. Generally, multiobjective problems generate a set of non-inferior solutions which are supplied to a decision maker to select the best solution from the set. This paper proposes an opposition-based greedy heuristic search (OGHS) method to generate a set of non-inferior solutions. Opposition-based learning is applied to generate the initial population so as to select good candidates. Migration, to maintain diversity in the set of feasible solutions, is also based on opposition-based learning. The mutation strategy is implemented by perturbing the genes heuristically in parallel, and a better solution is sought for each member. Feasible solutions are achieved heuristically by modifying the generation schedules in such a manner that violations of operating generation limits are avoided. The OGHS method is simple to implement and provides global solutions derived from the randomness of the generated population without tuning of parameters. The decision maker exploits fuzzy membership functions to make the final decision. The validity of the method has been demonstrated by analysing systems in different scenarios consisting of six and forty generators.
Evaluating Airline Service Quality Using Fuzzy DEMATEL and ANP
Navid Haghighat
Subject: Social Sciences, Business And Administrative Sciences Keywords: airline service quality, fuzzy theory, FMADM, fuzzy DEMATEL, fuzzy ANP
A hybrid fuzzy MADM method is proposed in this paper for evaluating airline service quality. Fuzzy set theory is used since it helps in measuring the ambiguity of concepts associated with human beings' subjective judgments. After reviewing service quality evaluation models, especially in the airline industry, the SSQAI model was adopted as a construct for evaluating airline service quality in Iran. Fuzzy DEMATEL was applied to determine the degree of influence and impact of the criteria on each other and to extract cause-and-effect relations between them, which helped in ranking the criteria based on their degree of relationship. Then, the ANP network map was constructed based on the relation map generated from the fuzzy DEMATEL analysis. The fuzzy ANP approach assisted in prioritizing the criteria based on the need for improvement and enabled more accurate measurement in the decision-making process, taking advantage of linguistic variables. The fuzzy DEMATEL results demonstrate that expertise, problem-solving and conduct have the most influence on other factors, whereas valence, waiting time and comfort receive the most impact from other factors; according to the fuzzy ANP analysis, valence, convenience, problem-solving and safety & security are the factors with the highest priority for improvement.
Measuring Software Quality Product Based on Fuzzy Inference System Techniques in ISO Standard
Atrin Barzegar
Subject: Keywords: software quality; fuzzy logic; ISO standard; quality model; usability
The success of a software product depends on several factors. Given that different organizations and institutions use software products, the need to have quality software suited to the goals and needs of the organization makes measuring the quality of software products an important issue for most organizations and institutions. To be sure of having the right software, it is necessary to use a standard quality model to examine the characteristics and sub-characteristics for a detailed and principled study of quality. In this study, the quality of Word software was measured. Considering the importance of software quality, experts skilled in this field were consulted, and the impact of each factor and quality characteristic was applied at different levels according to their opinion, to make the measurement of the quality of Word software more accurate and closer to reality. In this research, the quality of the software product is measured based on a fuzzy inference system according to the ISO standard. The results obtained in this study show that quality is a continuous and hierarchical concept and that the quality of each part of the software at any stage of production can lead to high-quality products.
Design and Simulation of Adaptive PID Controller Based on Fuzzy Q-Learning Algorithm for a BLDC Motor
Reza Rouhi Ardeshiri, Nabi Nabiyev, Shahab S. Band, Amir Mosavi
Subject: Engineering, Automotive Engineering Keywords: Q-learning; Fuzzy logic; Adaptive controller; BLDC motor
Reinforcement learning (RL) is an extensively applied control method for designing intelligent control systems that achieve high accuracy as well as better performance. In the present article, the PID controller is considered as the main control strategy for brushless DC (BLDC) motor speed control. For better performance, the fuzzy Q-learning (FQL) method, as a reinforcement learning approach, is proposed to adjust the PID coefficients. A comparison with an adaptive PID (APID) controller is also performed to show the superiority of the proposed method; the findings demonstrate the reduction of the error and the elimination of the overshoot when controlling the motor speed. MATLAB/SIMULINK has been used for the modeling, simulation, and control design of the BLDC motor.
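For readers unfamiliar with the idea of learning controller gains by reinforcement, the following minimal Python sketch uses plain tabular Q-learning to nudge the gains of a PI loop around a toy first-order plant. The derivative term, the fuzzy state representation and the BLDC motor model of the paper are omitted; the plant, state bins and reward are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = [(dkp, dki) for dkp in (-0.05, 0.0, 0.05) for dki in (-0.01, 0.0, 0.01)]
q_table = np.zeros((10, len(actions)))          # 10 coarse tracking-error bins
alpha, gamma, eps = 0.1, 0.9, 0.2

def run_episode(kp, ki, ref=1.0, dt=0.01, steps=200):
    """Simulate a first-order plant dy/dt = -y + u under PI control; return the IAE."""
    y, integ, iae = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = ref - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)
        iae += abs(e) * dt
    return iae

kp, ki = 1.0, 0.1
for episode in range(300):
    state = min(int(run_episode(kp, ki) * 10), 9)          # coarse IAE bin as state
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q_table[state]))
    kp = max(0.0, kp + actions[a][0])
    ki = max(0.0, ki + actions[a][1])
    new_iae = run_episode(kp, ki)
    next_state = min(int(new_iae * 10), 9)
    reward = -new_iae                                       # lower tracking error -> higher reward
    q_table[state, a] += alpha * (reward + gamma * q_table[next_state].max() - q_table[state, a])

print(round(kp, 3), round(ki, 3))                           # gains tuned by the Q-learning loop
```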
A Study on Domination in two Fuzzy Models
Mohammadesmail Nikfar
Subject: Mathematics & Computer Science, Other Keywords: fuzzy graphs; t−norm fuzzy graphs
The aim of this ninth expository article is to conclude a study on domination in two fuzzy models, namely t−norm fuzzy graphs and fuzzy graphs. All parts are twofold even if we do not mention it directly, i.e., all results depict properties of both fuzzy graphs and t−norm fuzzy graphs.
Vertex Domination in Fuzzy Graphs
Subject: Mathematics & Computer Science, Other Keywords: fuzzy graph; fuzzy bridge; fuzzy tree; $\alpha$-strong arc; vertex domination
We introduce a new variation on the domination theme which we call vertex domination, motivated by reducing wasted time in transportation planning and the optimization of transport routes. We determine the vertex domination number $\gamma_v$ for several classes of fuzzy graphs and obtain bounds for it. In fuzzy graphs, the monotone decreasing property and monotone increasing property are introduced. We prove that both Vizing's conjecture and the Grarier-Khelladi conjecture are monotone decreasing fuzzy graph properties for vertex domination. We obtain Nordhaus-Gaddum (NG) type results for these parameters. The relationship between several classes of operations on fuzzy graphs and their vertex domination numbers is studied. Finally, we discuss the vertex dominating set of a fuzzy tree by using the equivalence of bridges and $\alpha$-strong edges.
Intuitionistic Fuzzy Normed Prime Ideal and Some of Their Characteristics
Tekalign Regasa Ashale
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Intuitionistic Fuzzy Normed Ring; Intuitionistic Fuzzy Normed Ideal; Intuitionistic Fuzzy Normed Point; Fuzzy Normed Prime Ideal
This paper introduces the notion of prime ideals of intuitionistic fuzzy normed rings and establishes basic properties related to it. It investigates these notions and shows a new result using intuitionistic fuzzy points and the non-membership function, incorporating the t-norm and s-norm, to obtain some results on fuzzy normed prime ideals.
Fuzzy Analytic Hierarchy Process-Based Mobile Robot Path Planning
Changwon Kim, Yeesock Kim, Hak Yi
Subject: Engineering, Mechanical Engineering Keywords: fuzzy based AHP (FAHP); multi-objective decision making; path planning; mobile robot
This study presents a path planning method that allows a mobile robot to be effectively operated through a multi-objective decision-making problem. Specifically, the proposed fuzzy analytic hierarchy process (FAHP) determines an optimal position as a sub-goal within the multi-objective boundary. The key feature of the proposed FAHP is evaluating the candidates according to the fuzzified relative importance among objectives to select an optimal solution. In order to incorporate FAHP into path planning, an AHP framework is defined, which includes the highest level (goal), the middle level (objectives), and the lowest level (alternatives). The distance to the target, the robot's rotation, and safety against collision with obstacles are considered as objective functions. Comparative results obtained from the artificial potential field and AHP/FAHP simulations show that FAHP is preferable to the typical AHP for the mobile robot's path planning.
COLREGs Compliant Fuzzy-based Collision Avoidance System for Multiple Ship Encounters
Yaseen Adnan Ahmed, Mohammed Abdul Hannan, Mahmoud Yasser Oraby, Adi Maimun
Subject: Engineering, Marine Engineering Keywords: Collision Avoidance; COLREGs; Fuzzy logic; Decision Making; Multiple Ships; MATLAB Simulink
As the number of ships for marine transportation increases with the advancement of global trade, encountering multiple ships in marine traffic becomes common. This situation raises the risk of collision of the ships; hence this paper proposes a novel Fuzzy-logic based intelligent conflict detection and resolution algorithm, where the collision courses and possible avoiding actions are analyzed by considering ship motion dynamics and the input and output fuzzy membership functions are derived. As a conflict detection module, the Collision Risk (CR) is measured for each ship by using a scaled nondimensional Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) as inputs. Afterwards, the decisions for collision avoidance are made based on the calculated CR, encountering angle and relative angle of each ship measured from others. In this regard, the rules for the Fuzzy interface system are defined in accordance with the COLREGs, and the whole system is implemented on the MATLAB Simulink platform. In addition, to deal with the multiple ship encounters, the paper proposes a unique maximum-course and minimum-speed change approach for decision making, which has been found to be efficient to solve Imazu problems, and other complicated multiple-ship encounters.
Fuzzy Sumudu Transform for Solving System of Linear Fuzzy Differential Equations with Fuzzy Constant Coefficients
Norazrizal Aswad Abdul Rahman, Muhammad Zaini Ahmad
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: fuzzy Sumudu transform; fuzzy linear differential equations; system of fuzzy differential equations
In this paper, we employ the fuzzy Sumudu transform for solving systems of linear fuzzy differential equations with fuzzy constant coefficients. The system with fuzzy constant coefficients is interpreted under strongly generalized differentiability. For this purpose, new procedures for solving the system are proposed. A numerical example is carried out by solving a system adapted from a fuzzy radioactive decay model. Conclusions are drawn in the last section and some potential research directions are given.
Adaptive Neuro-Fuzzy Inference System Based Grading of Basmati Rice Grains Using Image Processing Technique
Dipankar Mandal
Subject: Engineering, Other Keywords: ANFIS; basmati rice; image processing; grading; quality assessment; fuzzy inference system
Grading of rice grains has gained attention due to the requirement of quality assessment during import and export. Rice grain quality depends on the milling operation, where the rice hull is removed with a huller system followed by a whitening operation. In this process, the adjustment of rollers, control, and operation are important for the quality of the milled rice. Basmati rice in particular needs more quality assurance, as it is not parboiled and is exported globally with a high product value. In the present work, the basic problem of quality assessment in the rice industry is addressed with a digital image processing technique. Machine vision and digital image processing provide an automated, nondestructive, cost-effective, and fast alternative to the traditional method, which is carried out manually by human inspectors. A model for quality grade testing and identification is built based on morphological features using digital image processing and a knowledge-based adaptive neuro-fuzzy inference system (ANFIS). The quality of rice kernels is determined with the help of shape descriptors and geometric features extracted from sample images of milled rice. The adopted technique has been tested on a sufficient number of training images of basmati rice grains. The proposed method gives promising results in the evaluation of rice quality, with 100% classification accuracy for broken and whole grains. The milling efficiency, assessed using the ratio between head rice and broken rice percentage, is 77.27% for the test sample.
Fuzzy Logic: An Application into Marketing Strategy
Albérico Travassos Rosário, Joana Carmo Dias
Subject: Social Sciences, Marketing Keywords: fuzzy; marketing; fuzzy marketing; marketing strategy; consumer behaviour
The term fuzzy refers to things that are unclear or vague. In the real world, we often encounter a situation where we cannot determine whether the state is true or false, and fuzzy logic allows for more flexible reasoning. This is a form of multivalued logic where the truth values of variables can be any real number between 0 and 1, as opposed to Boolean logic where the logical values can only be 0 or 1. Fuzzy logic is therefore a problem-solving technique used to evaluate all available information and thus make the best decisions. When applied to marketing, fuzzy logic allows treating customers in an individual and personalized way, instead of being fully identified within a particular market segment. Fuzzy marketing considers the degree to which a customer belongs to certain segments and subsequently allows them to be targeted with messages that engage them emotionally. To better understand the application and importance of fuzzy logic in marketing strategy, we developed a systematic review of the bibliometric literature (LRSB). It was possible to create a connection between these concepts, marketing and fuzzy logic, to increase the efforts of marketing professionals to achieve competitiveness in the unpredictable business environment.
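A tiny Python sketch of the "degree of belonging to a segment" idea mentioned above: a customer can partially belong to two segments at once instead of being forced into one. The spend thresholds and segment names are invented for illustration and are not taken from the review.

```python
def mu_budget(spend):
    """Membership in a 'budget' segment, decreasing from 1 to 0 between 200 and 800 spend units."""
    return max(0.0, min(1.0, (800 - spend) / 600))

def mu_premium(spend):
    """Membership in a 'premium' segment, increasing from 0 to 1 between 500 and 2000 spend units."""
    return max(0.0, min(1.0, (spend - 500) / 1500))

spend = 700
# The same customer is a partial member of both segments, not 0 or 1 in either.
print(round(mu_budget(spend), 2), round(mu_premium(spend), 2))   # 0.17 0.13
```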
On Fuzzy Locally Convex Spaces
Murphy Emeke Egwe
Subject: Mathematics & Computer Science, Analysis Keywords: t-norms; fuzzy seminorm; Minkowski functional; fuzzy topology
In this paper, the concept of fuzzy locally convex spaces generated by a family of fuzzy seminorms is introduced. We prove that a Minkowski functional of zero neighborhoods generates a seminorm and finally deduce that the topology generated by the family of seminorms coincides with that of the Minkowski functional.
Proving Fixed Point Theorems Employing Fuzzy $(\sigma,\mathcal{Z})$-Contractive Type Mappings
Hayel N. Saleh, Mohammad Imdad, Salvatore Sessa, Ferdinando Di Martino
Subject: Mathematics & Computer Science, Analysis Keywords: fuzzy $(\sigma,\mathcal{Z})$-contractive mappings; fuzzy metric spaces; fuzzy-$\mathcal{Z}$-contractive mappings}
In this article, the concept of fuzzy $(\sigma,\mathcal{Z})$-contractive mapping is introduced in fuzzy metric spaces, which is an improvement over the corresponding concept recently introduced by Shukla et al. [Fuzzy Sets and Systems. 350 (2018) 85--94]. Thereafter, we utilize our newly introduced concept to prove some existence and uniqueness theorems in $\mathcal{M}$-complete fuzzy metric spaces. Our results extend and generalize the corresponding results of Shukla et al. Moreover, an example is adopted to exhibit the utility of the newly obtained results.
Fuzzy Artificial Intelligence Based Control Strategy Applied to an Electromagnetic Frequency Regulator in Wind Generation Systems
Daniel C. C. Crisóstomo, Thiago F. Do Nascimento, Evandro A. F. Nunes, Elmer Villarreal, Ricardo Pinheiro, Andrés Salazar
Subject: Engineering, Other Keywords: Speed control; Fuzzy Controller; Electromagnetic Frequency Regulator (EFR); Wind Energy; Photovoltaics
This paper presents the implementation of a control strategy based on fuzzy logic artificial intelligence (AI) for speed regulation of an electromagnetic frequency regulator (EFR) prototype, aiming to eliminate the dependence on knowledge of physical parameters under the most diverse operating conditions. Speed multiplication is one of the most important steps in wind power generation. Gearboxes are generally used for this purpose; however, they have a reduced lifespan and a high failure rate, and they are also noise sources. The search for new ways to match the speed (and torque) between the turbine and the generator is an important research area for increasing the energy, financial and environmental efficiency of wind systems. The EFR device is an example of an alternative technology that this team of researchers has proposed; it benefits from the main advantages of an induction machine with a squirrel-cage rotor. In the first studies, the EFR control strategy consisted of conventional PID controllers, which have several limitations widely discussed in the literature. This strategy also limits the EFR's performance over its entire operating range. The simulation program was developed using the MATLAB/Simulink platform, while the experimental results were obtained in the laboratory emulating the EFR-based system. The EFR prototype used has 2 poles, a nominal power of 2.2 kW, and a nominal frequency of 60 Hz. Experimental results are presented to validate the efficiency of the proposed control strategy.
A Novel Approach for Early detection of Alzheimer's disease Based on Multi Level Fuzzy Neural Networks
Hamid Akramifard, MohammadAli Balafar, SeyedNaser Razavi, Abd Rahman Ramli
Subject: Behavioral Sciences, Applied Psychology Keywords: Alzheimer's disease; classification; early detection; Multi-Level Fuzzy Neural Networks; prognosis
Timely diagnosis of Alzheimer's disease (AD) is crucial to obtain more practical treatments. In this paper, a novel approach based on Multi-Level Fuzzy Neural Networks (MLFNN) for early detection of AD is proposed. The focus of the study was on diagnosing AD and MCI patients against healthy people using MLFNN and on selecting the best feature(s) and the most compatible classification algorithm. In this way, we achieve excellent performance using only a single feature, the MMSE score, by fitting the optimal algorithm with the optimal feature set for a real-life problem. The proposed method can thus help patients and healthy people avoid painful and time-consuming examinations. Experiments show the effectiveness of the proposed method for the diagnosis of AD, with one of the highest performances (accuracy rate of 96.6%) ever reported in the literature.
Separation Axioms Interval-Valued Fuzzy Soft Topology via Quasi-Neighbourhood Structure
Mabruka Ali, Adem Kılıçman, Azadeh Zahedi Khameneh
Subject: Mathematics & Computer Science, Geometry & Topology Keywords: interval-valued fuzzy soft set; interval-valued fuzzy soft topology; interval-valued fuzzy soft point; interval-valued fuzzy soft neighborhood; interval-valued fuzzy soft quasi-neighbourhood; interva
In this study, we present the concept of the interval-valued fuzzy soft point and then introduce the notions of its neighborhood and quasi-neighbourhood in interval-valued fuzzy soft topological spaces. Separation axioms in interval-valued fuzzy soft topology, the so-called $q$-$T_{i}$ for $i=0,1,2,3,4$, are introduced and some of their basic properties are also studied.
Newly Proposed Matrix Reduction technique Under Mean Ranking Method for Solving Trapezoidal Fuzzy Transportation problems Under Fuzzy Environment
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Mean; Mean of Trapezoidal Fuzzy Numbers; Trapezoidal Fuzzy Numbers; Transportation Problem; and Fuzzy Transportation Problem
In this paper, an improved matrix reduction method is proposed for the solution of the fuzzy transportation problem in which all inputs are taken as fuzzy numbers. Since ranking fuzzy numbers is an important tool in decision making, trapezoidal fuzzy numbers are converted into crisp values using the mean technique, and the fuzzy transportation problem is then solved by the proposed method. We give a suitable numerical example for the unbalanced case and compare the optimal value with other techniques. The results show that the optimum profit of the transportation problem obtained using the proposed technique under the ranking method is better than that of the other methods. Novelty: the numerical illustration demonstrates the new proposed method for managing transportation problems with fuzzy algorithms.
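As a hedged sketch of the ranking step described above: a common mean-type ranking of a trapezoidal fuzzy number (a, b, c, d) averages its four defining points, turning fuzzy transportation costs into crisp ones before any classical transportation method is applied. The exact ranking formula used in the paper may differ, so treat this as an assumption.

```python
def mean_rank(tfn):
    """Mean-type ranking of a trapezoidal fuzzy number given as (a, b, c, d)."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# Fuzzy unit costs for a tiny 2x2 transportation problem (illustrative values).
fuzzy_costs = [[(1, 2, 3, 4), (2, 4, 6, 8)],
               [(3, 5, 7, 9), (1, 1, 2, 4)]]

crisp_costs = [[mean_rank(c) for c in row] for row in fuzzy_costs]
print(crisp_costs)   # [[2.5, 5.0], [6.0, 2.0]]
```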
A Novel Hybrid Segmentation Method with Particle Swarm Optimization and Fuzzy C-Mean Based On Partitioning the Image for Detecting Lung Cancer
Kavitha P., Prabakaran S.
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: adaptive bilateral; marker watershed; PSO; fuzzy C-mean; GLCM; SVM
Recently, medical image processing has been extensively used in several areas; early detection and treatment of diseases depends on finding abnormalities in the image. A number of segmentation methods are available to detect lung nodules in computed tomography (CT) images. For early detection of lung nodules, pre-processing techniques based on the top-hat transform, the median filter and the adaptive bilateral filter were compared, and the adaptive bilateral filter proved to be the suitable method for CT images. The proposed segmentation technique uses a novel strip method in which the image is split into 3, 4, 5 or 6 strips, and a marker-watershed method based on PSO and fuzzy C-means clustering is proposed. Firstly, the input image undergoes noise reduction and smoothing; the filtered image is processed with the strip method and then segmented by the marker-watershed method. Secondly, the enhanced PSO technique is used to locate more accurate values of the cluster centers for fuzzy C-means clustering. In the final stage, with the accurate center values and the enhanced objective function, the small regions of the segmented object are clustered by fuzzy C-means. The segmentation algorithm presented in this paper gives a 95% accuracy rate in detecting lung nodules when the strip count is 5.
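The fuzzy C-means core of the pipeline can be sketched as follows in Python with NumPy; the PSO refinement of the initial centers, the strip splitting and the watershed step are not reproduced, and the synthetic one-dimensional intensity data are an assumption for illustration only.

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """One fuzzy C-means iteration: update memberships, then cluster centers.

    X: (n, f) data (e.g. pixel intensities of one strip); centers: (k, f).
    PSO-refined initial centers, as in the paper, would be passed in here.
    """
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps  # (n, k)
    inv = dist ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)        # fuzzy memberships
    Um = U ** m
    new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, new_centers

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.2, 0.05, (50, 1)), rng.normal(0.8, 0.05, (50, 1))])
centers = np.array([[0.3], [0.6]])
for _ in range(20):
    U, centers = fcm_step(X, centers)
print(centers.ravel())   # converges near the two intensity modes (about 0.2 and 0.8)
```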
Normal Approximations of the Arithmetic of Mixed Fuzzy Numbers
Yi-Fang Chen, Hui-Chin Tang
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: fuzzy number; normal fuzzy number; addition and subtraction; approximation.
In fuzzy group decision making problems and fuzzy shortest path problems, the addition and subtraction are the basic problems. For the mixed normal fuzzy numbers and trapezoidal fuzzy numbers, the addition and subtraction operations are approximated by the normal fuzzy numbers in this paper. The behaviors of approximated normal fuzzy numbers are the same as those of the normal distributions from the viewpoint of probability. An application of the addition and subtraction operations of mixed fuzzy numbers to the fuzzy sample mean is also proposed.
A Fuzzy Inference System for Unsupervised Deblurring of Motion Blur in Electron Beam Calibration
Salaheddin Hosseinzadeh
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: fuzzy inference system; fuzzy logics; linear motion blur; fuzzy debluring; electron beam calibration; signal & image processing
This paper presents a novel method for restoring electron beam (EB) measurements that are degraded by linear motion blur. It is based on a Fuzzy Inference System (FIS) and a Wiener inverse filter, together providing autonomy, reliability, flexibility and real-time execution. This system is capable of restoring highly degraded signals without requiring exact knowledge of the EB probe size. The FIS is formed of three inputs, eight fuzzy rules and one output. The FIS is responsible for monitoring the restoration results, grading their validity and choosing the one that yields the better grade. These grades are produced autonomously by analyzing the results of the Wiener inverse filter. To benchmark the performance of the system, ground truth signals obtained using an 18 um wire probe are compared with the restorations. The main aims are therefore to a) provide unsupervised deblurring for device-independent EB measurement; b) improve the reliability of the process; and c) apply deblurring without knowing the probe size. This further facilitates the deployment and manufacturing of EB probes and probe-independent, accurate EB characterization. It also makes the restoration of previously collected EB measurements easier, where the probe sizes are not known or recorded.
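A minimal Python/NumPy illustration of the Wiener inverse filtering step for a one-dimensional horizontal motion blur; the FIS grading layer and the EB-specific signal handling are not reproduced, and the blur length and noise-to-signal ratio below are assumed values.

```python
import numpy as np

def motion_psf(length, size):
    """1-D horizontal motion-blur point spread function, zero-padded to `size` samples."""
    psf = np.zeros(size)
    psf[:length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener inverse filter: F_hat = conj(H) / (|H|^2 + NSR) * G."""
    H = np.fft.fft(psf)
    G = np.fft.fft(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft(F_hat))

# Illustrative 1-D "beam profile": a narrow Gaussian blurred by a 9-sample motion.
x = np.arange(256)
signal = np.exp(-((x - 128) / 4.0) ** 2)
psf = motion_psf(9, 256)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = wiener_deblur(blurred, psf)
print(int(np.argmax(blurred)), int(np.argmax(restored)))   # restored peak returns close to 128
```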
An Approach to Determining Software Projects with Similar Functionality and Architecture Process Based on Artificial Intelligence Methods
Nadezhda Yarushkina, Gleb Guskov, Pavel Dudarin
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: ontology; conceptual model; natural language processing; engineering design; fuzzy hierarchical classifier; clustering
Software engineers all over the world independently solve many similar problems. In these conditions, the problem of reusing code, or better yet architecture, becomes an issue of the day. In this paper, a two-phase approach to determining the functional and structural likeness of software projects is proposed. This approach combines two methods of artificial intelligence: natural language processing techniques and a novel method for comparing software projects based on an ontological representation of their architecture, automatically obtained from the projects' source code. Additionally, several similarity metrics are proposed to estimate the similarity between projects.
Nikfar Domination Versus Others: Restriction, Extension Theorems and Monstrous Examples
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: fuzzy graph; fuzzy bridge; α-strong edge; nikfar domination; dynamic networks
The aim of this expository article is to present recent developments in the centuries-old discussion on the interrelations between several types of domination in graphs. However, the novelty is even more prominent in the newly discovered simplified presentations of several older results. Domination can be seen as arising from real-world applications, with classical results extracted as first described in this article. The main part of this article, concerning a new domination and an older one, is presented in a narrative that answers two classical questions: (i) to what extent must a closing set be dominating? (ii) how strong is the assumption of domination of a closing set? In addition, we give an overview of the results concerning domination. The problem asks how small a subset of vertices can be and contain no edges or, more generally, how small a subset of vertices can be and dominate the other ones. Our work was as elegant as it was unexpected, being a departure from the tried and true methods of this theory that had dominated the field for a fifth of a century. This expository article covers all previous definitions. The inability of previous definitions to solve even one case of a real-world problem, due to the lack of simultaneous attention to the worth of both vertices and edges, caused us to make the new one. The concept of domination in a variety of graph models, such as crisp, weighted and fuzzy, has been in the spotlight. We turn our attention to sets of vertices in a fuzzy graph that are, in a variety of ways, close to all vertices, and study minimum such sets and their cardinality. A natural way to introduce and motivate our subject is to view it as a real-world problem: in its most elementary form, we consider the problem of reducing wasted time in transport planning. Our goal here is to first describe the previous definitions and results, and then to provide an overview of the flow of ideas in those articles. The final outcome of this article is twofold: (i) solving the problem of reducing wasted time in transport planning in the static state; (ii) solving, and gently discussing, the problem of reducing wasted time in transport planning in the dynamic state. Finally, we discuss the results concerning domination that are independent of fuzzy graphs. We close with a list of currently open problems related to this subject. Most of our exposition assumes only familiarity with basic linear algebra, polynomials, fuzzy graph theory and graph theory.
Television Rating Control in the Multichannel Environment Using Trend Fuzzy Knowledge Bases and Monitoring Results
Olexiy Azarov, Leonid Krupelnitsky, Hanna Rakytyanska
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: TV channel rating; expert recommendation systems; fuzzy resources control; fuzzy classification knowledge bases; solving fuzzy relational equations
The purpose of the study is to control the ratio of programs of different genres when forming the broadcast grid, in order to increase and maintain the rating of the channel. In the multichannel environment, television rating control consists of selecting content whose ratings are completely restored after advertising. A hybrid approach combining the benefits of semantic training and fuzzy relational equations in simplifying the construction of expert recommendation systems is proposed. The problem of retaining the television rating can be attributed to the problems of fuzzy resources control. The increasing or decreasing trends of demand and supply are described by primary fuzzy relations. The rule-based solutions of the fuzzy relational equations connect the significance measures of the primary fuzzy terms. Rule refinement by solving fuzzy relational equations allows avoiding labor-intensive procedures for the generation and selection of expert rules. The generation of the solution set corresponds to the granulation of the television time, where each solution represents a time slot and the granulated rating of the content. In automated media planning, generating the weekly TV program in the form of granular solutions reduces the time needed to program the channel broadcast grid.
Fuzzy Based Analysis for Behavioral Factors (Altruism, Courtesy, Sportsmanship, Civic Virtue, and Conscientiousness) to Investigate Impact of Gender on Organizational Citizenship Behavior (OCB)
Wajdee Ajlouni, Gurvinder Kaur, Saleh Alomari
Subject: Behavioral Sciences, Other Keywords: behavioral factors; fuzzy analysis; gender; employees' demographics; organizational citizenship behavior (OCB)
This paper aims to investigate the impact of employees' gender on OCB, as perceived by employees in Jordanian governmental hospitals. A convenience sample of 126 employees working in the three main governmental hospitals in the north of Jordan was taken for the purpose of this study. The collected data include linguistic terms that suffer from uncertainty and which, in turn, cannot be dealt with using traditional numerical values. The results show that the impact of gender on OCB exhibits statistically significant differences at (α=0.05) as far as altruism, courtesy, and civic virtue are concerned, and this variable stands in favor of males with a total score of 0.011%. Similarly, for the effect of the age factor on OCB, there are statistically significant differences at (α=0.05) in relation to courtesy, sportsmanship, and civic virtue, with a total score of 0.27%. Finally, the results provide baseline data for further studies which may contribute more significantly to the field of OCB.
FQ-AGO: Fuzzy Logic Q-learning Based Asymmetric Link Aware and Geographic Opportunistic Routing Scheme for MANETs
Ali Alshehri, Abdel-Hameed Badawy, Hong Huang
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Fuzzy logic; Q-learning; routing protocol; mobile ad hoc network (MANETs); opportunistic network
The proliferation of mobile and IoT devices, coupled with the advances in the wireless communication capabilities of these devices, has urged the need for novel communication paradigms for such heterogeneous hybrid networks. Researchers have proposed opportunistic routing as a means to leverage the potential offered by such heterogeneous networks. While several proposals for opportunistic routing protocols exist, only a few have explored fuzzy logic to evaluate the status of wireless links in the network in order to construct stable and faster paths towards the destinations. We propose FQ-AGO, a novel Fuzzy Logic Q-learning Based Asymmetric Link Aware and Geographic Opportunistic Routing scheme that leverages the presence of long-range transmission links to assign forwarding candidates towards a given destination. The proposed routing scheme utilizes fuzzy logic to evaluate whether a wireless link is useful or not by capturing multiple network metrics: the available bandwidth, link quality, node transmission power, and distance progress. Based on the fuzzy logic evaluation, the proposed routing scheme employs a Q-learning algorithm to select the best candidate set toward the destination. We implement FQ-AGO in the NS-3 simulator and compare the performance of the proposed routing scheme with three other relevant protocols: AODV, DSDV, and GOR. For precise analysis, we consider various network metrics to compare the performance of the routing protocols. Our simulation results validate our analysis and demonstrate remarkable performance improvements in terms of total network throughput, packet delivery ratio, and end-to-end delay.
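The link-usefulness evaluation can be pictured as a weighted fuzzy score over normalized metrics, as in the Python sketch below; the membership shapes, value ranges and weights are illustrative assumptions, not the actual rule base of FQ-AGO.

```python
def grade_up(x, lo, hi):
    """Piecewise-linear 'high' membership: 0 below lo, 1 above hi, linear in between."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def link_usefulness(bandwidth_mbps, link_quality, tx_power_dbm, distance_progress_m):
    """Aggregate fuzzy usefulness of a wireless link in [0, 1] (illustrative only)."""
    scores = [
        grade_up(bandwidth_mbps, 1.0, 20.0),
        grade_up(link_quality, 0.3, 0.9),
        grade_up(tx_power_dbm, 0.0, 20.0),
        grade_up(distance_progress_m, 0.0, 100.0),
    ]
    weights = [0.3, 0.3, 0.1, 0.3]          # assumed relative importance of the metrics
    return sum(w * s for w, s in zip(weights, scores))

print(round(link_usefulness(12.0, 0.8, 15.0, 60.0), 2))   # about 0.68
```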
An Integrated Fuzzy Fault Tree Model With Bayesian Network-based Maintenance Optimization of Complex Equipment in Automotive Manufacturing
Hamzeh Soltanali, Mehdi Khojastehpour, José Torres Farinha
Subject: Engineering, Electrical & Electronic Engineering Keywords: Automotive industry; Bayesian network; Fault tree analysis; Fuzzy set theory; Maintenance optimization; Uncertainty
Knowledge-based approaches are useful alternatives for predicting the Failure Probability (FP) when coping with insufficient data, process integrity, and complexity issues in manufacturing systems. This study proposes a Fault Tree Analysis (FTA) approach as a proactive knowledge-based technique to estimate the FP for maintenance planning with subjective information from domain experts. However, classical FTA still suffers from uncertainty and static structure limitations, which pose a substantial dilemma in predicting the FP. To deal with the uncertainty issues, a Fuzzy-FTA (FFTA) model was developed by statistically analysing the effective attributes such as experts' trait impacts, scale variation, and assorted membership and defuzzification functions. In addition, Bayesian Network (BN) theory was used to overcome the static limitation of classical FTA. The results of the FFTA model revealed that the changes in decision attributes were not statistically significant for FP variation, while the BN model, which considers conditional rules to reflect the dynamic relationship between events, had more impact on predicting the FP. Finally, the integrated FFTA-BN model was used in the optimization model to find the optimal maintenance intervals according to the estimated FP and the total expected cost. As a practical example, the proposed model was implemented on a semi-automatic filling system in an automotive production line. The outcomes could be useful for upgrading the availability and safety of complex equipment in manufacturing systems.
Hybrid Moth-Flame Fuzzy Logic Controller Based Integrated Cuk Converter Fed Brushless DC Motor for Power Factor Correction
K. Kamalapathi, Neeraj Priyadarshi, Sanjeevikumar Padmanaban, Farooque Azam, C. Umayal, Vigna K. Ramachandaramurthy
Subject: Engineering, Electrical & Electronic Engineering Keywords: BLDC (brushless DC) motor; VSI, Fuzzy logic controller; Moth flame optimization; Torque ripples
This research work deals with a hybrid control system for an integrated Cuk converter-fed brushless DC motor (BLDCM) for power factor correction. In this work, moth-flame optimization (MFO) and a fuzzy logic controller (FLC) have been combined, and a moth-flame fuzzy logic controller (MFOFLC) has been proposed. Firstly, the BLDC motor model is combined with a power factor correction (PFC)-based integrated Cuk converter, and the BLDC speed is regulated using a variable DC-link inverter voltage, which enables low-frequency switching operation with lower switching losses. Here, with the use of a switched inductor, the operation and execution of the proposed converter are redesigned. The DBR (diode bridge rectifier) followed by the proposed PFC-based integrated Cuk converter operates in discontinuous inductor conduction mode (DICM) to achieve a better power factor. MFO is used to gather the dataset from the input voltage signal. The extracted datasets are then sent to the FLC to improve the updating function and minimize torque ripple. Since our main objective is to assess the adequacy of the proposed method, the power factor is analysed. The proposed control methodology is implemented in the MATLAB/Simulink platform and its performance is compared with existing techniques.
Applications of Double Framed T-Soft Fuzzy Sets in BCK/BCI-Algebras
Muhammad Bilal Khan, Tahir Mahmood
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: double framed T-soft fuzzy set; double framed T-soft fuzzy algebra; double framed B-soft fuzzy algebra
The aim of this article is to introduce the concept of the double framed T-soft fuzzy set (DFT-soft fuzzy set), which is a combination of the soft set and the fuzzy set. We also define the related notions and apply this concept to BCK/BCI-algebras. Using examples, we also discuss the concepts of the double framed T-soft fuzzy algebra (DFT-soft fuzzy algebra) and the double framed B-soft fuzzy algebra (DFB-soft fuzzy algebra) and investigate their properties. Every double framed T-soft fuzzy algebra is a double framed B-soft fuzzy algebra, but we show by example that the converse need not hold.
A New ANFIS-based Peak Power Curtailment in Microgrids Including PV Units and BESSs
Srete Nikolovski, Hamid Reza Baghaee, Dragan Mlakić
Subject: Engineering, Electrical & Electronic Engineering Keywords: adaptive neuro-fuzzy inference system; battery energy storage; photovoltaic unit; power demand; peak power curtailment
One of the most crucial and economically beneficial tasks for energy customers is peak load curtailment. On account of the fast response of renewable energy resources (RERs) such as photovoltaic (PV) units and battery energy storage systems (BESS), this task is closer to being efficiently implemented. Depending on the customer's peak load demand and energy characteristics, the feasibility of this strategy may vary. When an adaptive neuro-fuzzy inference system (ANFIS) is exploited for forecasting, it can provide many benefits to address the above-mentioned issues and facilitate easy implementation, with short calculation time and re-trainability. This paper introduces a data-driven forecasting method based on fuzzy logic for optimized peak load reduction. First, the amount of energy generated by the PV is forecasted using ANFIS, which captures the output trend, and then the BESS capacity is calculated according to the forecasted results. The trend of the load power is then decomposed in the Cartesian plane into two parts, left and right of the load peak, searching for equal BESS capacity on each side. The switching sequence of the network over the consumption period is provided by a fuzzy logic controller (FLC) with respect to the BESS capacity and the PV energy output. Finally, to prove the effectiveness of the proposed ANFIS-based peak shaving method, offline digital time-domain simulations have been performed on a real-life practical test microgrid system in the MATLAB/Simulink environment, and the results have been experimentally verified by testing on a practical microgrid system with real-life data obtained from smart meters, and also compared with several previously reported methods.
An Auto-Weighting Aggregative Fuzzy Collaborative Intelligence Approach for DRAM Yield Forecasting
Hsin-Chieh Wu, Tin-Chih Toly Chen
Subject: Keywords: Fuzzy collaborative intelligence; Dynamic random access memory; Fuzzy weighted intersection; Forecasting
In a collaborative forecasting task, experts may have unequal authority levels. However, this has rarely been considered reasonably in the existing fuzzy collaborative forecasting methods. In addition, experts may not be willing to discriminate their authority levels. To address these issues, an auto-weighting fuzzy weighted intersection (FWI) fuzzy collaborative intelligence approach is proposed in this study. In the proposed auto-weighting FWI fuzzy collaborative intelligence approach, experts' authority levels are automatically and reasonably assigned based on their past forecasting performances. Subsequently, the auto-weighting FWI mechanism is established to aggregate experts' fuzzy forecasts. The theoretical properties of the auto-weighting FWI mechanism have been discussed and compared with those of the existing fuzzy aggregation operators. After applying the auto-weighting FWI fuzzy collaborative intelligence approach to a case of forecasting the yield of a DRAM product from the literature, its advantages over several existing methods were clearly illustrated.
Cyber Forensic Review of Human Footprint and Gait-based System for Personal Identification in Crime Scene Investigation
Kapil Kumar Nagwanshi
Subject: Engineering, Electrical & Electronic Engineering Keywords: ANN; biometric; crime-scene; fuzzy logic; gait; human footprint; Hidden Markov Model; PCA; Recognition
The human footprint has a unique set of ridges unmatched by any other human being, and therefore it can be used in different identity documents, for example the birth certificate, the Indian biometric identification system AADHAR card, the driving license, the PAN card, and the passport. There are many instances of crime scenes where the accused has walked around and left footwear impressions as well as barefoot prints, and it is therefore very crucial to recover the footprints to identify the criminals. Footprint-based biometrics is a considerably newer technique for personal identification. Fingerprint, retina, iris and face recognition are the methods most commonly used for recording the attendance of a person. At present, the world is facing the problem of global terrorism. It is challenging to identify terrorists because they live like regular citizens. Their soft targets include industries of special interest such as defense, silicon and nanotechnology chip manufacturing units, and the pharmaceutical sector. They present themselves as religious persons, so temples and other holy places, and even markets, are among their targets. These are places where one can obtain their footprints easily. The gait itself is sufficient to predict the behaviour of suspects. The present research aims to identify the usefulness of the footprint and gait as an alternative means of personal identification.
Comparative Analysis of Single and Hybrid Neuro-Fuzzy-Based Models for an Industrial Heating Ventilation and Air Conditioning Control System
Sina Ardabili, Bertalan Beszedes, Laszlo Nadai, Karoly Szell, Amir Mosavi, Felde Imre
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: adaptive neuro-fuzzy inference system; ANFIS-PSO; ANFIS-GA; HVAC; hybrid machine learning
The hybridization of machine learning methods with soft computing techniques is an essential approach to improve the performance of prediction models. Hybrid machine learning models, in particular, have gained popularity in the advancement of high-performance control systems. Higher accuracy and better performance for prediction models of exergy destruction and energy consumption used in the control circuit of heating, ventilation, and air conditioning (HVAC) systems can be highly economical at the industrial scale to save energy. This research proposes two hybrid models, adaptive neuro-fuzzy inference system-particle swarm optimization (ANFIS-PSO) and adaptive neuro-fuzzy inference system-genetic algorithm (ANFIS-GA), for HVAC. The results are further compared with the single ANFIS model. The ANFIS-PSO model, with an RMSE of 0.0065, MAE of 0.0028, and R2 equal to 0.9999, with a minimum deviation of 0.0691 (KJ/s), outperforms the ANFIS-GA and single ANFIS models.
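Since the abstract compares the models by RMSE, MAE and R2, the following minimal numpy sketch shows how these three metrics are computed; the function and variable names are illustrative, not taken from the paper.

import numpy as np

def regression_metrics(y_true, y_pred):
    """Return RMSE, MAE and R^2 for two equal-length arrays of observations."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# e.g. regression_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]) -> (~0.141, ~0.133, ~0.97)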
A Pre-aggregation Fuzzy Collaborative Intelligence-based Fuzzy Analytic Hierarchy Process Approach for Selecting Alternative Suppliers amid the COVID-19 Pandemic
Toly Chen, Hsin-Chieh Wu
Subject: Keywords: group decision-making; fuzzy analytic hierarchy process; consensus; wafer foundry; COVID-19 pandemic
In the existing group decision-making fuzzy analytic hierarchy process (FAHP) methods, the consensus among experts has rarely been fully reached. To fill this gap, in this study, a pre-aggregation fuzzy collaborative intelligence (FCI)-based FAHP approach is proposed. In the proposed pre-aggregation FCI-based FAHP approach, fuzzy intersection is applied to aggregate experts' pairwise comparison results if these pairwise comparison results overlap. The aggregation result is a matrix of polygonal fuzzy numbers. Subsequently, alpha-cut operations are applied to derive the fuzzy priorities of criteria from the aggregation result. The pre-aggregation FCI-based FAHP approach has been applied to select suitable alternative suppliers for a wafer foundry in Taiwan amid the COVID-19 pandemic. The experimental results revealed that the pre-aggregation FCI-based FAHP approach significantly reduced the uncertainty inherent in the decision-making process by deriving fuzzy priorities with very narrow ranges.
Evaluating Olympic Pictograms Using Fuzzy TOPSIS – Focus on Judo, Taekwondo, Boxing, and Wrestling
Choi kyoungho, Kim Bongseok, Jinhee Choi
Subject: Arts & Humanities, Other Keywords: Olympic; pictogram; fuzzy; TOPSIS
This study evaluated and ranked the comprehensibility of the pictograms for judo, taekwondo, boxing, and wrestling used in the six Games from the 27th Sydney Olympics in 2000 to the 32nd Tokyo Olympics in 2021. The evaluation was done using the Fuzzy TOPSIS method, one of the multi-criteria decision-making methodologies commonly used in economics and other fields. The results are as follows. First, the pictograms from the 2008 Beijing Olympics ranked first in three sports: taekwondo, boxing, and wrestling, but no pictograms consistently ranked first or sixth in all sports. Second, the sensitivity analysis shows that the ranking may be reversed if the weights of the evaluation factors change; however, in the 1000-run repeated prediction, the better the evaluation ranking, the closer, on average, the priority value was to the ideal solution, even when the weights changed.
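For reference, the sketch below shows the core fuzzy TOPSIS steps with triangular fuzzy numbers in one common variant (criterion-wise fuzzy ideal solutions and the vertex distance). The ratings, weights and the two-alternative example are invented for illustration; the paper's questionnaire data and weighting scheme are not reproduced here.

import numpy as np

def tfn_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def fuzzy_topsis(ratings, weights):
    """ratings[i][j] = TFN rating of alternative i on (benefit) criterion j,
    weights[j] = TFN importance weight of criterion j.
    Returns the closeness coefficient of each alternative (higher = better)."""
    m, n = len(ratings), len(weights)
    # 1. Normalise benefit criteria by the largest upper bound per criterion.
    c_star = [max(ratings[i][j][2] for i in range(m)) for j in range(n)]
    norm = [[tuple(x / c_star[j] for x in ratings[i][j]) for j in range(n)] for i in range(m)]
    # 2. Apply fuzzy weights component-wise.
    v = [[tuple(norm[i][j][k] * weights[j][k] for k in range(3)) for j in range(n)] for i in range(m)]
    # 3. Fuzzy positive/negative ideal solutions per criterion.
    fpis = [max((v[i][j] for i in range(m)), key=lambda t: t[2]) for j in range(n)]
    fnis = [min((v[i][j] for i in range(m)), key=lambda t: t[0]) for j in range(n)]
    # 4. Distances to the ideals and closeness coefficients.
    cc = []
    for i in range(m):
        d_pos = sum(tfn_distance(v[i][j], fpis[j]) for j in range(n))
        d_neg = sum(tfn_distance(v[i][j], fnis[j]) for j in range(n))
        cc.append(d_neg / (d_pos + d_neg))
    return cc

# Two alternatives rated on two criteria with linguistic-scale TFNs, equal fuzzy weights.
ratings = [[(5, 7, 9), (3, 5, 7)],
           [(7, 9, 9), (1, 3, 5)]]
weights = [(0.4, 0.5, 0.6), (0.4, 0.5, 0.6)]
print(fuzzy_topsis(ratings, weights))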
An Exhaustive Review of Bio-Inspired Algorithms and its Applications for Optimization in Fuzzy Clustering
Fevrier Valdez, Oscar Castillo, Patricia Melin
Subject: Engineering, Control & Systems Engineering Keywords: Fuzzy Logic; Clustering; optimization
In recent years, new metaheuristic algorithms have been developed that take their inspiration from biological and natural phenomena. This nature-inspired approach to algorithm development has been widely used by many researchers in solving optimization problems. These algorithms have been compared with traditional algorithms and have been shown to be superior on complex problems. This paper describes the nature-based algorithms that are used in fuzzy clustering. We briefly describe the optimization methods, the most cited nature-inspired algorithms published in recent years, the authors, the networks and relationships between the works, etc. We believe the paper can serve as a basis for analysis of the new area of nature- and bio-inspired optimization of fuzzy clustering.
${\rm {\bf UL}}_\omega $ and ${\rm {\bf IUL}}_\omega $ Are Substructural Fuzzy Logics
SanMin Wang
Subject: Mathematics & Computer Science, Logic Keywords: Substructural fuzzy logics; Residuated lattices; Semilinear substructural logics; Standard completeness; Fuzzy logic
Two representable substructural logics ${\rm {\bf UL}}_\omega $ and ${\rm {\bf IUL}}_\omega $ are logics for finite UL and IUL-algebras, respectively. In this paper, the standard completeness of ${\rm {\bf UL}}_\omega $ and ${\rm {\bf IUL}}_\omega $ is proved by the method developed by Jenei, Montagna, Esteva, Gispert, Godo and Wang. This shows that ${\rm {\bf UL}}_\omega $ and ${\rm {\bf IUL}}_\omega $ are substructural fuzzy logics.
Generalized Triangular Intuitionistic Fuzzy Geometric Averaging Operator for Decision Making in Engineering
Daniel O. Aikhuele, Sarah Odofin
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: generalized triangular intuitionistic fuzzy geometric aggregation operator; triangular intuitionistic fuzzy number; intuitionistic fuzzy set; multi-criteria decision-making; attitudinal character; flexibility
Intuitionistic fuzzy set, which can be represented using the triangular intuitionistic fuzzy number (TIFN), is a more generalized platform for expressing imprecise, incomplete and inconsistent information when solving multi-criteria decision-making problems, as well as for reflecting the evaluation information exactly in different dimensions. In this paper, the TIFN has been applied for solving some multi-criteria decision-making problems by developing a new triangular intuitionistic fuzzy geometric aggregation operator, that is the generalized triangular intuitionistic fuzzy ordered weighted geometric averaging (GTIFOWGA) operator, and defining some triangular intuitionistic fuzzy geometric aggregation operators including the triangular intuitionistic fuzzy weighted geometric averaging (TIFWGA) operator, the ordered weighted geometric averaging (TIFOWGA) operator and the hybrid geometric averaging (TIFHWGA) operator. Based on these operators, a new approach for solving multicriteria decision-making problems when the weight information is fixed has been proposed. Finally, the proposed method has been compared with some similar existing computational approaches by virtue of a numerical example to verify its feasibility and rationality.
Calculation of Resonant Frequency for a Microstrip Antenna with Vertical Slots Using Applying Adaptive Network-Based Fuzzy Inference System
Mahmood Abbasi Layegh, Changiz Ghobadi, Javad Nourinia
Subject: Engineering, Electrical & Electronic Engineering Keywords: microstrip antenna, vertical slots, adaptive network-based fuzzy inference system, resonant frequency, artificial neural networks
This paper applies an adaptive network-based fuzzy inference system (ANFIS) to analyse the resonant frequency of a microstrip rectangular patch antenna with two equal-size slots placed vertically on the patch. The resonant frequency is calculated as the position of the slots is shifted to the right and left sides of the patch. As a result, the antenna resonates at more than one frequency. Commonly, machine learning algorithms based on artificial neural networks are employed to recognize all of the resonant frequencies. However, they fail to estimate the resonant frequencies correctly, as in some cases the variations are not very noticeable and the resonant frequencies overlap each other. It can be concluded that artificial neural networks could be replaced in such designs by the adaptive network-based fuzzy inference system due to its high approximation capability and much faster convergence rate.
Differential Evolution With Shadowed and General Type-2 Fuzzy Systems for Dynamic Parameter Adaptation in Optimal Design of Fuzzy Controllers
Patricia Ochoa, Oscar Castillo, Patricia Melin, José Soria
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Shadowed Type-2 Fuzzy Sets; Generalized Type-2 Fuzzy Systems; Differential Evolution algorithm
This work is mainly focused on improving the differential evolution algorithm by utilizing shadowed and general type-2 fuzzy systems to dynamically adapt one of the parameters of the evolutionary method. In this case, the mutation parameter is dynamically adjusted during the evolution process by using shadowed and general type-2 fuzzy systems. The main idea of this work is to make a performance comparison between using shadowed and general type-2 fuzzy systems as controllers of the mutation parameter in differential evolution. The performance is compared on the problem of optimizing fuzzy controllers for a DC motor. Simulation results show that general type-2 fuzzy systems are better when higher levels of noise are considered in the controller.
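To make the adaptation idea concrete, the sketch below runs a bare-bones DE/rand/1/bin loop in which the mutation factor F is set each generation by a simple fuzzy-style schedule on search progress. The schedule, the sphere test function and all parameter values are stand-ins; they are not the shadowed or general type-2 systems used in the paper.

import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def adaptive_f(progress):
    """Toy fuzzy-style schedule: explore (large F) early, exploit (small F) late."""
    mu_early = max(0.0, 1.0 - 2.0 * progress)      # membership of "early stage"
    mu_late = max(0.0, 2.0 * progress - 1.0)        # membership of "late stage"
    mu_mid = 1.0 - mu_early - mu_late
    return mu_early * 0.9 + mu_mid * 0.6 + mu_late * 0.3   # weighted-average defuzzification

def differential_evolution(fobj, dim=5, pop_size=20, gens=100, cr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([fobj(ind) for ind in pop])
    for g in range(gens):
        f = adaptive_f(g / (gens - 1))
        for i in range(pop_size):
            # DE/rand/1 mutation with three distinct random individuals.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = a + f * (b - c)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True       # guarantee at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fobj(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

print(differential_evolution(sphere))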
Development of a Fuzzy Variable Rate Irrigation Control System Based on Remote Sensing Data to Fully Automate Center Pivots
Willians Ribeiro Mendes, Salah Er-Raki, Derek M. Heeren, Ritaban Dutta, Fábio M. U. Araújo
Subject: Engineering, Automotive Engineering Keywords: Remote sensing data; variable rate irrigation; irrigation management; fuzzy systems; decision support tools; intelligent center pivot
Growing agricultural demands for the global population are unlocking the path to developing innovative solutions for efficient water management. Herein, an intelligent variable rate irrigation system (fuzzy-VRI) is proposed for rapid decision-making to achieve optimized irrigation in various delimited zones. The proposed system automatically creates irrigation maps for a center pivot irrigation system for a variable-rate application of water. Primary inputs are spatial imagery on remotely sensed soil moisture (SSM), soil adjusted vegetation index (SAVI), canopy temperature (CT), and nitrogen content (NI). To eliminate localized issues with soil characteristics, we used the crop nitrogen content map to provide a focused insight on issues related to water shortage. The system relates these inputs to set reference values for the rotation speed controllers and individual openings of each central pivot sprinkler valve. The results showed that the system can detect and characterize the spatial variability of the crop and further, the fuzzy logic solved the uncertainties of an irrigation system and defined a control model for high-precision irrigation. The proposed approach is validated through the comparison between the recommended irrigation and actual irrigation at two field sites, and the results showed that the developed approach gives an accurate estimation of irrigation with a reduction in the volume of irrigated water of up to 27% in some cases. Future research should implement the fuzzy-VRI real-time during field trials in order to quantify its effect on irrigation use, yield, and water use efficiency.
Metric Dimension in fuzzy(neutrosophic) Graphs-VII
Henry Garrett
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: Fuzzy Graphs; Neutrosophic Graphs; Dimension
New notions of dimension, as a set, as two optimal numbers (the metric number and the dimension number), and as an optimal set, are introduced both for individual graphs and for families of graphs. The behaviors of twin and antipodal vertices are explored in fuzzy (neutrosophic) graphs. Fuzzy (neutrosophic) graphs are studied under the conditions of fixed edges, fixed vertices and strongly fixed vertices. Some classes, such as path, cycle, complete, strong, t-partite, bipartite, star and wheel graphs, are studied in terms of dimension, both individually and when they form a family. Fuzzification (neutrosofication) of twin vertices, while using the crisp concept of antipodal vertices, is another approach of this study. Thus two notions concerning vertices are defined, one of which, titled twin, is fuzzy (neutrosophic) and the other, titled antipodal, is crisp, in order to study the behavior of cycles, which are partitioned into even and odd. Classes of cycles according to antipodal vertices are divided into two classes, even and odd. The parity of the number of edges in a cycle leads to two subsections under the section devoted to antipodal vertices. In this study, the term dimension is introduced on fuzzy (neutrosophic) graphs. The locations of objects are determined by a set of junctions that have distinct distances from any pair of objects outside the set. It is thus possible to obtain the locations of objects outside this set by assigning a partial number to each object. The classes of these specific graphs are chosen to obtain some results based on dimension. Crisp notions and fuzzy (neutrosophic) notions are both used to make sense of the material of this study, and the outline of this study uses some new notions, both crisp and fuzzy (neutrosophic). Some questions and problems are posed concerning ways to carry out further studies on this topic. Basic familiarity with fuzzy (neutrosophic) graph theory and graph theory is assumed for this article.
Decomposition and Intersection of Two Fuzzy Numbers for the Fuzzy Preference Relations
Hui-Chin Tang
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: fuzzy number; ranking; preference relation
In fuzzy decision problems, the ordering of fuzzy numbers is the basic problem. Among the available approaches, the fuzzy preference relation is a reasonable one, since it represents the preference relation by a fuzzy membership function. This paper studies Nakamura's and Kołodziejczyk's preference relations. Eight cases representing different levels of overlap between two triangular fuzzy numbers are considered. We analyze the ranking behaviors of all possible combinations of decomposition and intersection of two fuzzy numbers for Nakamura's and Kołodziejczyk's preference relations on these test cases. The results indicate that the decomposition and intersection can affect the fuzzy preference relations, and thereby the final total order relation of the fuzzy numbers.
An Effective FCM Approach of Similarity and Dissimilarity Measures with Alpha-Cut
Sayan Mukhopadhaya, Anil Kumar, Alfred Stein
Subject: Earth Sciences, Geoinformatics Keywords: Fuzzy c-Means (FCM) Classifier, Similarity and Dissimilarity measures, Distance, Fuzzy Error Matrix (FERM)
In this study, the fuzzy c-means classifier has been studied with nine other similarity and dissimilarity measures: Manhattan distance, chessboard distance, Bray-Curtis distance, Canberra distance, cosine distance, correlation distance, mean absolute difference, median absolute difference and normalised squared Euclidean distance. Both single and composite modes were used with a varying weight constant (m) and also at different α-cuts. The two best single norms obtained were combined to study the effect of composite norms on the datasets used. An image-to-image accuracy check was conducted to assess the accuracy of the classified images. The Fuzzy Error Matrix (FERM) was applied to measure the accuracy assessment outcomes for a Landsat-8 dataset with respect to the Formosat-2 dataset. In conclusion, the FCM classifier with the cosine norm performed better than with the conventional Euclidean norm. However, owing to the inability of the FCM classifier to handle noise properly, the classification accuracy was around 75%.
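As a generic illustration of swapping the norm inside fuzzy c-means, the sketch below takes the distance function as a parameter, so Euclidean can be replaced by cosine or any other dissimilarity. It is not the authors' implementation or their accuracy-assessment pipeline; note also that with non-Euclidean distances the weighted-mean prototype update is only a common heuristic.

import numpy as np

def euclidean(x, v):
    return np.linalg.norm(x - v)

def cosine_dist(x, v):
    return 1.0 - np.dot(x, v) / (np.linalg.norm(x) * np.linalg.norm(v) + 1e-12)

def fuzzy_c_means(data, c=2, m=2.0, dist=euclidean, n_iter=50, seed=0):
    """Generic FCM: `dist` is any point-to-prototype dissimilarity."""
    rng = np.random.default_rng(seed)
    n = len(data)
    u = rng.random((c, n))
    u /= u.sum(axis=0)                      # memberships of each point sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ data) / um.sum(axis=1, keepdims=True)
        d = np.array([[max(dist(x, v), 1e-12) for x in data] for v in centers])
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0)), axis=1)
    return centers, u

data = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 2)),
                  np.random.default_rng(2).normal(3, 0.3, (20, 2))])
centers, memberships = fuzzy_c_means(data, c=2)                       # Euclidean norm
# centers, memberships = fuzzy_c_means(data, c=2, dist=cosine_dist)   # swap in another norm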
Numerical Solution of First Order Linear Differential Equations in Fuzzy Environment by Modified Runge-Kutta Method and Runge-Kutta-Merson Method under generalized H-differentiability and its Application in Industry
Sankar Prasad Mondal, Susmita Roy, Biswajit Das, Animesh Mahata
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: fuzzy sets; fuzzy differential equation; modified runge kutta method and runge kutta mersion method
The paper presents an adaptation of the numerical solution of first-order linear differential equations in a fuzzy environment. The numerical methods are re-established and studied with fuzzy concepts to estimate their uncertain parameters whose values are not precisely known. Fuzzy solutions of the governing equations are demonstrated by two approaches, namely the modified Runge-Kutta method and the Runge-Kutta-Merson method. The results are compared with the exact solution, which is found using the generalized Hukuhara derivative (gH-derivative) concept. Additionally, several illustrative examples and an industrial application of the methods are presented, with tables and graphs to show the usefulness of the proposed approaches.
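As a rough illustration of the alpha-cut idea behind such methods, the sketch below integrates the endpoints of each alpha-cut of a fuzzy initial value problem with the classical fourth-order Runge-Kutta method. The example equation y' = y, the triangular initial value and the decoupled-endpoint treatment (appropriate for this particular increasing solution under Hukuhara differentiability) are illustrative assumptions; the paper's modified Runge-Kutta and Runge-Kutta-Merson formulas are not reproduced.

import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def fuzzy_rk4(f, y0_tfn, t_end, h=0.1, alphas=(0.0, 0.5, 1.0)):
    """Propagate a fuzzy IVP by integrating the endpoints of each alpha-cut.
    y0_tfn = (l, m, u) is a triangular fuzzy initial value."""
    l0, m0, u0 = y0_tfn
    results = {}
    for a in alphas:
        lo, hi = l0 + a * (m0 - l0), u0 - a * (u0 - m0)   # alpha-cut of y(0)
        t = 0.0
        while t < t_end - 1e-12:
            lo, hi = rk4_step(f, t, lo, h), rk4_step(f, t, hi, h)
            t += h
        results[a] = (lo, hi)
    return results

# y' = y with fuzzy initial condition (0.8, 1.0, 1.2); the exact endpoints are e^t times the cut.
sol = fuzzy_rk4(lambda t, y: y, (0.8, 1.0, 1.2), t_end=1.0)
for a, (lo, hi) in sol.items():
    print(f"alpha={a:.1f}: [{lo:.4f}, {hi:.4f}]  (exact [{(0.8 + 0.2 * a) * np.e:.4f}, {(1.2 - 0.2 * a) * np.e:.4f}])")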
Proposing a Framework for Airline Service Quality Evaluation Using Type-2 Fuzzy TOPSIS and Non-parametric Analysis
Subject: Social Sciences, Microeconomics And Decision Sciences Keywords: airline service quality; passenger satisfaction; non-parametric analysis; Type-2 Fuzzy Set; Fuzzy TOPSIS
This paper focuses on evaluating airline service quality from the passengers' perspective. A great deal of research on airline service quality evaluation has been performed around the world, but little research has been conducted in Iran so far. In this research a framework for measuring airline service quality in Iran is proposed. After reviewing airline service quality criteria, the SSQAI model was selected because of its comprehensiveness in covering airline service quality dimensions. The SSQAI questionnaire items were redesigned to adapt to Iranian airlines' requirements and the environmental circumstances of Iran's economic and cultural context. This study draws on fuzzy decision-making theory, considering the possibly fuzzy subjective judgment of the evaluators during airline service quality evaluation. Fuzzy TOPSIS was applied for ranking the airlines' service quality performance. The three major Iranian airlines with the largest passenger volumes on domestic and international flights were chosen for evaluation in this research. The results demonstrated that, among the three major Iranian airlines, the Mahan airline achieved the best service quality performance rank in gaining passengers' satisfaction through the delivery of high-quality services. IranAir and Aseman airlines placed second and third, respectively, according to the passengers' evaluation. Statistical analysis was used to analyse the passenger responses. Due to the non-normality of the data, non-parametric tests were applied. To examine the airlines' ranks on each criterion separately, the Friedman test was performed. Analysis of variance and the Tukey test were applied to study the influence of increases in passengers' age and educational level on their degree of satisfaction with airline service quality. The results showed that age has no significant relation with passenger satisfaction with airlines, whereas an increase in educational level had a negative impact on passengers' satisfaction with airline service quality.
Improved Rapid Visual Earthquake Hazard Safety Evaluation of Existing Buildings Using Type-2 Fuzzy Logic Model
Ehsan Harirchian, Tom Lahmer
Subject: Engineering, Civil Engineering Keywords: seismic vulnerability; fuzzy logic system; Interval Type-2 Fuzzy logic; retrofit prioritization; damage category classification
Rapid Visual Screening (RVS) is a procedure that estimates structural scores for buildings and prioritize their retrofit and upgrade requirements. Despite the speed and simplicity of RVS, many of the collected parameters are non-commensurable and include subjectivity due to visual observations. It might cause uncertainties in the evaluation, which emphasizes the use of a fuzzy-based method. This study aims to propose a novel RVS methodology based on the interval type-2 fuzzy logic system (IT2FLS) to set the priority of vulnerable building to undergo detailed assessment while covering uncertainties and minimizing their effects during evaluation. The proposed method estimates the vulnerability of a building, in terms of Visual Damage Index, considering the number of stories, age of building, plan irregularity, vertical irregularity, building quality, and peak ground velocity, as inputs with a single output variable. Applicability of the proposed method has been investigated using a post-earthquake damage database of 28 reinforced concrete buildings from the Bingöl earthquake in Turkey.
Technique of Gene Expression Profiles Extraction based on the Complex Use of Clustering and Classification Methods
Sergii Babichev, Jiří Škvor
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: gene expression profiles; lung cancer; clustering; classification; binary classifiers; SOTA clustering algorithm; clustering quality criteria; ROC analysis; fuzzy inference system
In this paper, we present the results of research on extracting informative gene expression profiles from a high-dimensional array of gene expressions, taking into account the state of the patients' health, using a clustering method, an ensemble of binary classifiers and a fuzzy inference system. Applying the proposed stepwise procedure allows us to extract the most informative genes, taking into account both the subtype of the disease and the state of the patient's health, for the further reconstruction of gene regulatory networks based on the selected genes and the subsequent simulation of the reconstructed models. We used publicly available gene expression data as the experimental data; these were obtained from DNA microarray experiments and contained two types of patients' gene expression profiles: patients with a lung cancer tumor and healthy patients. The stepwise data-processing procedure assumes the following steps: at the beginning, we reduce the number of genes by removing non-informative genes in terms of statistical criteria and Shannon entropy; then we perform stepwise hierarchical clustering of the gene expression profiles at hierarchical levels 1 to 10 using the SOTA clustering algorithm with the correlation distance metric. The quality of the obtained clustering was evaluated using a complex clustering quality criterion that considers both the distribution of the gene expression profiles relative to the centers of the clusters to which they are allocated and the distribution of the cluster centers. The result of this stage was the selection of the optimal cluster at each of the hierarchical levels, corresponding to the minimum value of the quality criterion. At the next step, we implemented a classification procedure for the examined objects using four well-known binary classifiers: logistic regression, support vector machine, decision trees and the random forest classifier. The effectiveness of each technique was evaluated using ROC analysis, with criteria that include as components the errors of both the first and the second kind. The final decision concerning the extraction of the most informative subset of gene expression profiles was taken using a fuzzy inference system, the inputs of which are the results of the individual classifiers and the output of which is the final decision concerning the state of the patient's health. In our view, the implementation of the proposed stepwise procedure for extracting informative gene expression profiles creates the conditions for increasing the effectiveness of the subsequent procedure of gene regulatory network reconstruction and the following simulation of the reconstructed models, considering the subtype of the disease and/or the state of the patient's health.
Representations of a Comparison Measure Between Two Fuzzy Sets
Juin-Han Chen, Hui-Chin Tang
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: fuzzy set; comparison measure; representation; disjoint
This paper analyzes the representation behaviors of a comparison measure between two compared fuzzy sets. Three types of restrictions on the two fuzzy sets are considered in this paper: two disjoint union fuzzy sets, two disjoint fuzzy sets, and two general fuzzy sets. Differences exist among the numbers of possible representations of a comparison measure for the three types of fuzzy set restrictions. The value of the comparison measure is constant for two disjoint union fuzzy sets. There are 42 candidate representations of a comparison measure for two disjoint fuzzy sets, of which 13 candidate representations with one or two terms can be used to easily calculate and compare a comparison measure.
Why Fuzzy Partition in F-Transform?
Vladik Kreinovich, Olga Kosheleva, Songsak Sriboonchitta
Subject: Mathematics & Computer Science, General & Theoretical Computer Science Keywords: F-transform; fuzzy partition; measurements; measurementuncertainty
In many application problems, F-transform algorithms are very efficient. In F-transform techniques, we replace the original signal or image with a finite number of weighted averages. The use of weighted average can be naturally explained, e.g., by the fact that this is what we get anyway when we measure the signal. However, most successful applications of F-transform have an additional not-so-easy-to-explain feature: the partition requirement, that the sum of all the related weighting functions is a constant. In this paper, we show that this seemingly difficult-to-explain requirement can also be naturally explained in signal-measuring terms: namely, this requirement can be derived from the natural desire to have all the signal values at different moments of time estimated with the same accuracy.
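To make the partition requirement concrete, the sketch below computes the direct F-transform with a uniform triangular fuzzy partition whose basis functions sum to 1 at every point; the signal, node count and grid are invented for illustration.

import numpy as np

def triangular_partition(t, nodes):
    """Return A[k, i] = A_k(t_i) for triangular basis functions centred at `nodes`;
    with uniformly spaced nodes covering the domain they sum to 1 everywhere (a Ruspini partition)."""
    A = np.zeros((len(nodes), len(t)))
    for k, c in enumerate(nodes):
        left = nodes[k - 1] if k > 0 else c
        right = nodes[k + 1] if k < len(nodes) - 1 else c
        rise = np.where((t >= left) & (t <= c), (t - left) / (c - left) if c > left else 1.0, 0.0)
        fall = np.where((t > c) & (t <= right), (right - t) / (right - c) if right > c else 0.0, 0.0)
        A[k] = rise + fall
    return A

def direct_f_transform(f_vals, A):
    """F_k = sum_i f(t_i) A_k(t_i) / sum_i A_k(t_i): weighted averages of the signal."""
    return (A @ f_vals) / A.sum(axis=1)

t = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
nodes = np.linspace(0, 2 * np.pi, 9)
A = triangular_partition(t, nodes)
print(np.allclose(A.sum(axis=0), 1.0))      # the partition sums to 1 at every sample point
print(direct_f_transform(signal, A))         # one smoothed component per node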
Health Assessment of Landing Gear Retraction and Extension Hydraulic System based on Improved Risk Coefficient and FCE Model
Shixuan Duan, Yanjun Li, Yuyuan Cao, Xingye Wang, Xudong Li, Zejian Zhao
Subject: Engineering, Other Keywords: health assessment; landing gear retraction and extension hydraulic system; improved risk coefficient; fuzzy comprehensive evaluation; fault simulation; maintenance manual
The health of the landing gear retraction and extension hydraulic system may be assessed using fuzzy comprehensive evaluation (FCE); however, the traditional FCE method depends solely on human assessment by specialists, which is excessively subjective. To address the issue of excessive human subjective factors in the assessment, an improved FCE model based on an enhanced risk coefficient is provided, which includes four consideration indexes: failure probability, failure severity, failure detection difficulty, and failure repair difficulty. To reduce subjective human judgment errors that are entirely due to expert experience, the improved FCE takes into account the likelihood of failure using a statistical method, the severity of failure using a fault simulation analysis based on the LMS Imagine.Lab AMESim simulation platform, and the difficulty of fault detection and repair using the aircraft manufacturer's professional maintenance information. As part of the evaluation model, the range of health assessment values and the accompanying treatment methods are included, making it easier to implement on a daily basis in aircraft maintenance. As a final step, the simulation is evaluated and the simulated faults are calculated.
Machine Learning-Based Node Selection for Cooperative Non-Orthogonal Multi-Access System Under Physical Layer Security
Mohammed Ahmed Salem, Azlan Bin Abd.Aziz, Hatem Fahd Al-Selwi, Mohamad Yusoff Bin Alias, Tan Kim Geok, Azwan Mahmud, Ahmed Salem Bin-Ghoot
Subject: Engineering, Electrical & Electronic Engineering Keywords: physical layer security (PLS); cooperative relay transmission; non-orthogonal multiple access (NOMA); fuzzy logic; feed forward neural networks (FFNN); secrecy capacity
Cooperative non-orthogonal multiple access communication is a promising paradigm for future wireless networks because of its advantages in terms of energy efficiency, wider coverage and interference mitigation. In this paper, we study the secrecy performance of a downlink cooperative non-orthogonal multiple access (NOMA) communication system in the presence of an eavesdropper node. Smart node selection based on feed-forward neural networks (FFNN) is proposed in order to improve the physical layer security (PLS) of a cooperative NOMA network. The selected cooperative relay node is employed to enhance the channel capacity of the legitimate users, while the selected cooperative jammer is employed to degrade the capacity of the wiretapped channel. Simulations of the secrecy performance metric, namely the secrecy capacity ($C_S$), are presented and compared with a conventional technique based on fuzzy logic node selection. Based on our simulations and discussion, the proposed technique outperforms the existing technique in terms of secrecy performance.
Developing ANFIS-PSO Model to Predict Mercury Emissions in Combustion Flue Gases
Shahab Shamshirband, Masoud Hadipoor, Alireza Baghban, Amir Mosavi, Jozsef Bukor, Annamária R. Várkonyi-Kóczy
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: air pollution prediction; flue gas; mercury emissions; adaptive neuro-fuzzy inference system (ANFIS); particle swarm optimization (PSO); ANFIS-PSO; hybrid machine learning model; data science; particulate matter; health hazards of air pollution; air quality
Accurate prediction of the mercury content emitted from fossil-fueled power stations is of utmost importance for environmental pollution assessment and hazard mitigation. In this paper, the mercury content in the output gas of power station boilers was predicted using an adaptive neuro-fuzzy inference system (ANFIS) method integrated with particle swarm optimization (PSO). The input parameters of the model include coal characteristics and the operational parameters of the boilers. The dataset was collected from 82 power plants and employed to train and test the proposed model. To evaluate the performance of the proposed hybrid ANFIS-PSO model, the statistical measure MARE% was used, which resulted in 0.003266 and 0.013272 for training and testing, respectively. Furthermore, the relative errors between the acquired data and predicted values were between -0.25% and 0.1%, which confirms the accuracy of the model in dealing with nonlinearity and representing the dependency of the flue gas mercury content on the coal specifications and the boiler type.
A Novel Data-Driven Fuzzy for Accurate Coagulant Dosage in Drinking Water Treatment
Adriano Bressane, Ana Paula Garcia Goulart, Isadora Gurjon Gomes, Anna Isabel Silva Loureiro, Rogério Galante Negri, Rodrigo Braga Moruzzi, Adriano Gonçalves dos Reis, Jorge K. S. Formiga, Carrie Peres Melo, Gustavo H. R. Silva
Subject: Engineering, Other Keywords: coagulant dosage; fuzzy; machine-learning; water treatment
Coagulation is the most sensitive step in drinking water treatment. Underdosing may not yield the required water quality, whereas overdosing may result in higher costs and excess sludge. Traditionally, the coagulant dosage is set based on batch experiments performed manually. Therefore, this test does not allow real-time dosing control, and its accuracy is subject to operator experience. Alternatively, solutions based on machine learning (ML) have been evaluated as a computer-aided alternative. Despite these advances, there is open debate on which ML method is most suitable for the coagulation process and capable of the most accurate prediction. This study addresses this gap through a comparative analysis of ML methods. As a research hypothesis, a novel data-driven fuzzy inference system (FIS) should provide the best performance due to its ability to deal with the uncertainties inherent to complex processes. Although ML methods have been widely investigated, only a few studies report hybrid neuro-fuzzy systems applied to coagulation. Thus, to the best of our knowledge, this is the first study to address the accuracy of this novel data-driven FIS for such an application. The novel FIS provided the smallest error (0.86), indicating a promising alternative tool for real-time and highly accurate coagulant dosing in drinking water treatment.
A Quantitative Assessment of Rubrics Using a Soft Computing Approach
Siddhartha Bhattacharyya, Sourav De, Leo Mrsic, Indrajit Pan, Khan Muhammad, Anirban Mukherjee, Debanjan Konar
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: OBE; OBTE; graduate attributes; rubrics; fuzzy sets.
This study aims to elucidate a soft computing approach for the quantitative assessment of scoring grades or rubrics for students in an outcome-based education system. The intended approach resorts to a fuzzy-membership-based assessment of the different parameters of the scoring system, thereby yielding a novel and human-like assessment technique. The selection of the membership functions is based on human behavior so as to make a realistic representation of the scoring strategy. The novelty of the proposed strategy lies in assigning fuzzy-membership-based weighted scores instead of simply assigning score bands to rubric categories, as is done in normal rubrics-based assessment. Comparative results demonstrated on a case study of the Indian education scenario reveal the effectiveness of the proposed strategy over other fuzzy membership and normal rubrics-based assessment procedures.
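The sketch below illustrates the general idea of replacing hard score bands with fuzzy-membership-weighted scores; the band definitions, weights and the triangular membership shapes are invented for illustration and are not the calibration used in the study.

import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative overlapping rubric bands (feet/peak) and band weights.
bands = {"poor": (-1, 0, 40), "fair": (20, 45, 70), "good": (50, 70, 90), "excellent": (70, 100, 101)}
band_weight = {"poor": 1, "fair": 2, "good": 3, "excellent": 4}

def fuzzy_rubric_score(raw_mark):
    """Weighted score: memberships in overlapping bands instead of one hard band."""
    mu = {name: tri_mf(raw_mark, *abc) for name, abc in bands.items()}
    total = sum(mu.values())
    return sum(band_weight[n] * m for n, m in mu.items()) / total if total else 0.0

print(fuzzy_rubric_score(65))   # partially "fair", partially "good" -> a value between 2 and 3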
Evaluation on Success of Graduating with Distinction in Higher Education using Fuzzy DEMATEL Method
Mustefa Jibril
Subject: Social Sciences, Education Studies Keywords: Evaluation, Distinction, Fuzzy DEMATEL, Higher education, Cluster
The aim of this paper is the evaluation of the success of graduating with distinction in higher education (SGDHE) using the fuzzy DEMATEL method. The analysis has been carried out using cause-and-effect criteria; 11 cause and 14 effect clusters have been used in this study. The results of this work show that all the effects are connected to the given causes, and a cause-effect graph has been generated for each connection. The proposed approach is demonstrated with the empirical case of Dire Dawa University students in Dire Dawa, Ethiopia.
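For orientation, the sketch below shows the crisp core of the DEMATEL computation that remains after any fuzzy judgements have been defuzzified: normalise the direct-relation matrix, form the total-relation matrix T = N(I - N)^-1, and read off prominence (D + R) and relation (D - R). The 4x4 matrix values are illustrative, and the fuzzy aggregation and defuzzification steps used in the paper are not reproduced.

import numpy as np

def dematel(direct):
    """Classical DEMATEL on a crisp direct-relation matrix.
    Returns prominence (D + R) and relation (D - R) per factor;
    a positive relation marks a cause factor, a negative one an effect factor."""
    direct = np.asarray(direct, float)
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    N = direct / s                                   # normalised direct-relation matrix
    T = N @ np.linalg.inv(np.eye(len(N)) - N)        # total-relation matrix T = N (I - N)^-1
    D, R = T.sum(axis=1), T.sum(axis=0)              # dispatched and received influence
    return D + R, D - R

# Four factors rated 0-4 for how strongly the row influences the column (illustrative values).
direct = [[0, 3, 2, 1],
          [1, 0, 3, 2],
          [0, 1, 0, 3],
          [1, 0, 1, 0]]
prominence, relation = dematel(direct)
print(prominence.round(3), relation.round(3))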
Fuzzy Analogues of Sets and Functions Can Be Uniquely Determined from the Corresponding Ordered Category: A Theorem
Christian Servin, Gerardo Muela, Vladik Kreinovich
Subject: Mathematics & Computer Science, Other Keywords: fuzzy set; ordered category; category of fuzzysets
In modern mathematics, many concepts and ideas are described in terms of category theory. From this viewpoint, it is desirable to analyze what can be determined if, instead of the basic category of sets, we consider a similar category of fuzzy sets. In this paper, we describe a natural fuzzy analog of the category of sets and functions, and we show that, in this category, fuzzy relations (a natural fuzzy analogue of functions) can be determined in category terms -- of course, modulo 1-1 mapping of the corresponding universe of discourse and 1-1 re-scaling of fuzzy degrees.
On the Independence of Effect Algebras' Axioms
Ibrahim Senturk, Tahsin Oner
Subject: Mathematics & Computer Science, Logic Keywords: quantum structures; effect algebras; fuzzy sets; independence
In this paper, we scrutinize the axiomatic system of effect algebras given by D. J. Foulis and M. K. Bennett in the paper Effect Algebras and Unsharp Quantum Logics. We prove that this axiomatic system consists of independent axioms. To do this, we construct models that demonstrate the independence of each axiom. Therefore none of these axioms can be dropped when constructing any effect algebra. As a result, an algebra is an effect algebra if and only if it satisfies axioms (E1)-(E4).
Explaining Cannabis Use by Adolescents in Tarragona (Spain): Correlational and Fuzzy Set Qualitative Comparative Analyses
Jorge de Andres-Sanchez, Angel Belzunegui-Eraso
Subject: Social Sciences, Sociology Keywords: adolescence; substance use; cannabis use; ordered logistic regression; fuzzy set theory; fuzzy set qualitative comparative analysis; Boolean functions.
The literature on substance use usually extracts conclusions from data with correlational methods. Our study shows the usefulness of complementing ordered logistic regression (OLR) with fuzzy set qualitative comparative analysis (fsQCA) to assess the factors inducing cannabis consumption in a sample of 1,935 teenagers. OLR showed a significant influence of gender (odds ratio (OR)=0.383, p<0.0001), parental monitoring (OR=0.587, p=0.0201), religiousness (OR=0.476, p=0.006), parental tolerance of substance use (OR=42.01, p<0.0001) and having close peers that consume substances (OR=5.60, p<0.0001). FsQCA allowed us to fit linkages between factors from a complementary perspective. (1) The coverage (cov) and consistency (cons) attained by solutions explaining use (cons=0.808, cov=0.357) are clearly lower than those of recipes for non-use (cons=0.952, cov=0.869). (2) The interaction of gender, a family tolerant of use and the attitude toward substances of peers is very consistent in explaining cannabis use. (3) The most important recipe explaining resistance to cannabis is simply parental disagreement with substance consumption (cons=0.956, cov=0.861). (4) Factors such as gender, religiosity, parental monitoring and age also show a relevant impact on the attitude toward cannabis use. However, whereas some of them impact symmetrically on use and non-use, this does not hold for factors such as parental monitoring or age.
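The consistency and coverage figures quoted above follow the standard fsQCA set-theoretic formulas; a small sketch is given below (the case membership scores are invented).

import numpy as np

def consistency(x, y):
    """Consistency of 'X is sufficient for Y': sum(min(x, y)) / sum(x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Coverage of Y by X: sum(min(x, y)) / sum(y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / y.sum()

# Fuzzy membership of each case in the recipe X and in the outcome Y (invented scores).
x = [0.9, 0.7, 0.2, 0.8, 0.1]
y = [1.0, 0.6, 0.3, 0.9, 0.4]
print(round(consistency(x, y), 3), round(coverage(x, y), 3))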
Adaptive Neuro-Fuzzy Inference System and a Multilayer Perceptron Model Trained with Grey Wolf Optimizer for Predicting Solar Diffuse Fraction
Randall Claywell, Nadai Laszlo, Felde Imre, Amir Mosavi
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: machine learning; prediction; adaptive neuro-fuzzy inference system; adaptive network-based fuzzy inference system; diffuse fraction; multilayer perceptron
The accurate prediction of the solar Diffuse Fraction (DF), sometimes called the Diffuse Ratio, is an important topic for solar energy research. In the present study, the current state of diffuse irradiance research is discussed and then three robust machine learning (ML) models are examined using a large dataset (almost 8 years) of hourly readings from Almeria, Spain. The ML models used herein are a hybrid Adaptive Network-based Fuzzy Inference System (ANFIS), a single Multi-Layer Perceptron (MLP) and a hybrid Multi-Layer Perceptron-Grey Wolf Optimizer (MLP-GWO). These models were evaluated for their predictive precision using various solar and Diffuse Fraction (DF) irradiance data from Spain. The results were then evaluated using two frequently used evaluation criteria, the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). The results showed that the MLP-GWO model, followed by the ANFIS model, provided higher performance in both the training and the testing procedures.
High Impedance Fault Detection in MV Distribution Network using Discrete Wavelet Transform and Adaptive Neuro-Fuzzy Inference System
Veerapandiyan Veerasamy, Noor Izzri Abdul Wahab, Rajeswari Ramachandran, Muhammad Mansoor, Mariammal Thirumeni
Subject: Engineering, Electrical & Electronic Engineering Keywords: Discrete Wavelet Transform (DWT); Adaptive Neuro-Fuzzy Inference System (ANFIS); Fuzzy Logic system (FLS); High Impedance Fault (HIF).
This paper presents a method to detect and classify the high impedance faults that occur in the medium voltage distribution network using the discrete wavelet transform (DWT) and an adaptive neuro-fuzzy inference system (ANFIS). The network is designed using MATLAB software, and various faults such as high impedance, symmetrical and unsymmetrical faults have been applied to study the effectiveness of the proposed ANFIS classifier method. This is achieved by training the ANFIS classifier using the features (standard deviation values) extracted by the DWT technique from the three-phase fault current signal for various fault cases with different values of fault resistance in the system. The success and discrimination rates obtained for identifying and classifying the high impedance fault with the proposed method are 100%, whereas the values are 66.7% and 85%, respectively, for the conventional fuzzy-based approach. The results indicate that the proposed method is more efficient at identifying the high impedance fault and discriminating it accurately from other power system faults.
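The feature-extraction idea can be sketched as follows: a simple Haar wavelet decomposition (numpy only, standing in for whatever wavelet family the paper actually uses) and the standard deviation of the detail coefficients per phase current, which would then be fed to a trained ANFIS classifier. The synthetic 50 Hz currents and the injected disturbance are purely illustrative.

import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar DWT; returns the detail coefficients of each level."""
    approx = np.asarray(signal, float)
    details = []
    for _ in range(levels):
        if len(approx) % 2:                       # pad to an even length if necessary
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return details

def std_features(three_phase_currents, levels=3):
    """One standard-deviation feature per phase and per decomposition level."""
    return np.array([[d.std() for d in haar_dwt(phase, levels)]
                     for phase in three_phase_currents]).ravel()

t = np.linspace(0, 0.1, 1000)
ia = np.sin(2 * np.pi * 50 * t)
ib = np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
ic = np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
ia[500:] += 0.2 * np.random.default_rng(0).normal(size=500)   # crude stand-in for an HIF disturbance
print(std_features([ia, ib, ic]))   # 9 features that a trained classifier would then label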
Vibration Control Design for a Plate Structure with Electrorheological ATVA Using Interval Type-2 Fuzzy System
Chih-Jer Lin, Chun-Ying Lee, Ying Liu
Subject: Engineering, Mechanical Engineering Keywords: Electro-Rheological fluid; Semi-active vibration control; tunable vibration absorber; type-1 fuzzy control; interval type-2 fuzzy control
This study presents vibration control using actively tunable vibration absorbers (ATVA) to suppress the vibration of a thin plate. The ATVA is made of a sandwich hollow structure embedded with an electrorheological fluid (ERF). ERF is considered to be one of the most important smart fluids, and it is suitable for embedding in a smart structure due to its controllable viscosity. The apparent viscosity of ERF can be controlled in response to the electric field, and the change is reversible within 10 microseconds. Therefore, the physical properties of the ERF-embedded smart structure, such as the stiffness and damping coefficients, can be changed in response to the applied electric field. A mathematical model describing the exact characteristics of the ERF-embedded ATVA is difficult to obtain because of the nonlinearity of the ERF's viscosity. Therefore, fuzzy modeling and experimental validations of the ERF-based ATVA under stationary random vibrations of thin plates are presented in this study. Because type-2 fuzzy sets generalize type-1 fuzzy sets so that more modelling uncertainties can be handled, a semi-active vibration controller is proposed based on type-2 fuzzy sets. To investigate the different performances obtained by using different types of fuzzy controllers, experimental measurements employing type-1 fuzzy and interval type-2 fuzzy controllers are implemented on the Compact RIO embedded system. The fuzzy modeling framework and solution methods presented in this work can be used for the design, performance analysis, and optimization of ATVA under stationary random vibration of thin plates.
Fuzzy-logic Approach to Estimating the Fleet Efficiency of a Road Transport Company: A Case Study of Agricultural Products' Deliveries in Kazakhstan
Taran Igor, Karsybayeva Asem, Naumov Vitalii, Murzabekova Kenzhegul, Chazhabayeva Marzhan
Subject: Engineering, Civil Engineering Keywords: fleet structure; road transport; fuzzy logic; transport efficiency
The estimation of the efficiency of road transport vehicles remains a significant problem for contemporary transport companies, as the technological process is influenced by numerous stochastic impacts, such as demand stochasticity, road conditions uncertainty, transport market fluctuations, etc. To consider the uncertainty related to the estimation of the vehicles' fleet efficiency, we propose a fuzzy-logic approach, where the efficiency of a given vehicle is described by a membership function. The efficiency of the whole fleet and its rational structure in that case can be evaluated as a fuzzy set. To demonstrate the developed approach, we depict a case study of using cargo vehicles for deliveries of agricultural products in the Republic of Kazakhstan. The numeric results are presented for the selected models of vehicles that a transport company uses to service a set of clients located in Northern Kazakhstan.
Quantum Holography from Fermion Fields
Paola Zizzi
Subject: Physical Sciences, General & Theoretical Physics Keywords: Holographic Principle; Fermionic QFT; Spin Networks; Fuzzy Sphere
We demonstrate, in the context of Loop Quantum Gravity, the Quantum Holographic Principle, according to which the area of the boundary surface enclosing a region of space encodes a qubit per Planck unit. To this aim, we introduce fermion fields in the bulk, whose boundary surface is the two-dimensional sphere. The doubling of the fermionic degrees of freedom and the use of the Bogoliubov transformations lead to pairs of spin network's edges piercing the boundary surface with double punctures, giving rise to pixels of area encoding a qubit. The proof is also valid in the case of a fuzzy sphere.
Assessment Urban Transport Service and Pythagorean Fuzzy Sets CODAS Method: A Case of Study of Ciudad Juárez
Luis Perez-Dominguez, Sara Nohemi Almeraz Duran, Roberto Romero, Iván Juan Carlos Pérez-Olguín, David Luviano-Cruz, Jesus Andres Hernandez Gomez
Subject: Social Sciences, Accounting Keywords: CODAS; Pythagorean Fuzzy Sets; Public Transportation; COVID-Criteria
The purpose of this research article is to provide a comprehensive method for evaluating public transportation across the different transport lines offered in Ciudad Juárez, Chihuahua. As part of the literature review, this study presents a description of the public transport system and an appropriate model based on the most outstanding publications on urban mobility and passenger public transportation, as well as published success cases, which serve as a starting point for examining the actual state of the public transportation system. The Pythagorean Fuzzy CODAS method is then used to analyze and evaluate the alternatives through criteria that define their overall performance. The integration of these methods provides an adequate methodology for decision-making concerning urban planning and mobility, to detect and improve the performance of criteria not considered within sustainable urban mobility plans.
Fuzzy Genetic Algorithm Approach for Verification of Reachability and Detection of Deadlock in Graph Transformation Systems
Nahid Salimi, Vahid Rafe, Hamed Tabrizchi, Amir Mosavi
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: fuzzy genetic algorithm; reachability property; deadlock; model checking
Model checking techniques are often used for the verification of software systems. Such techniques are accompanied by several advantages. However, state space explosion (SSE) is one of the drawbacks of model checking. During recent years, several methods have been proposed based on evolutionary and meta-heuristic algorithms to solve this problem. In this paper, a hybrid approach is presented to cope with the SSE problem in model checking of systems modeled by graph transformation systems (GTS) with an ample state space. Most of the existing methods that aim to verify systems modeled by graph transformations are applied to detect deadlocks. The proposed approach is based on a fuzzy genetic algorithm and is designed to refute the safety property by verifying the reachability property and detecting deadlocks. In this solution, the state space of the system is searched by a fuzzy genetic algorithm to find the state in which the specified property is refuted/verified. To implement and evaluate the suggested approach, GROOVE is used as a powerful design and model checking toolset for GTS. The experimental results indicate that the presented hybrid fuzzy method improves speed and performance compared with other techniques.
Assessment of the Design for Organization Production Processes Using Fuzzy Logic
Józef Matuszek, Tomasz Seneta, Aleksander Moczała
Subject: Engineering, Mechanical Engineering Keywords: production process design; design for manufacturability; fuzzy logic
The paper presents a design methodology for the production process of a new product from the point of view of the assembly operations technology criterion (Design for Assembly - DFA) under high-volume production conditions. DFA methods and techniques used in the implementation of a new product are discussed. The author presents a new method to assess design for manufacturability based on fuzzy variables. An example is given to illustrate the proposed course of action.
A Topological Perspective for Interval Type-2 Fuzzy Hedges
Hime Oliveira
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: type-2 fuzzy sets; fiber bundles; differential topology
Type-2 fuzzy sets were introduced by L. Zadeh with the aim of modelling settings in which fuzzy sets (usually called type-1 fuzzy sets) are not sufficient to reflect certain uncertainty degrees - loosely speaking, they are fuzzy sets whose membership degrees are ordinary fuzzy sets. On the other hand, fiber bundles are topological entities of extreme importance in mathematics itself and many other scientific areas, like physics (general relativity, field theory, etc.), finance modelling, and statistical inference. The present work introduces a conceptual link between the two ideas and conjectures about the potential mutual benefits that can be obtained from this viewpoint. As an objective and usable product of the presented ideas, a framework is described for defining type-2 fuzzy hedges suitable for operating on interval type-2 fuzzy sets.
Active Vibration Control of Launch Vehicle on Satellite Using Piezoelectric Stack Actuator
Mehran Makhtoumi
Subject: Engineering, Mechanical Engineering Keywords: Vibration Control, Piezoelectric, Fuzzy Logic Control, Launch Vehicle
Satellites are subject to various severe vibrations during different phases of flight. The concept of a satellite smart adapter is proposed in this study to achieve active vibration control of the launch vehicle on the satellite. The satellite smart adapter has 18 active struts, in which the middle section of each strut is made of a piezoelectric stack actuator. A comprehensive conceptual design of the satellite smart adapter is presented to indicate the design parameters, requirements, and design philosophy, which are based on reliability and durability criteria to ensure successful functionality of the proposed system. The coupled electromechanical virtual work equation for the piezoelectric stack actuator in each active strut is derived by applying D'Alembert's principle. Modal analysis is performed to characterize the inherent properties of the smart adapter and to extract a mathematical model of the system. Active vibration control analysis was conducted using fuzzy logic control with triangular membership functions and acceleration feedback. The control results show that the proposed satellite smart adapter configuration, which uses piezoelectric stack actuators as elements of its 18 active struts, has high strength and shows excellent robustness and effectiveness in vibration suppression of the launch vehicle on the satellite.
Preprint CONCEPT PAPER | doi:10.20944/preprints201710.0087.v1
A Logic Framework for Non-Conscious Reasoning
Felipe Lara-Rosano
Subject: Behavioral Sciences, General Psychology Keywords: non-conscious reasoning; fuzzy logic; linguistic truth values
Human non-conscious reasoning is one of the most successful procedures developed to solve everyday problems in an efficient way. This is why the field of artificial intelligence should analyze, formalize and emulate the multiple ways of non-conscious reasoning with the purpose of applying them in knowledge based systems, neurocomputers and similar devices for aiding people in the problem-solving process. In this paper, a framework for those non-conscious ways of reasoning is presented based on object-oriented representations, fuzzy sets and multivalued logic.
Provision of a New Method to Improve the Detection of Micro Seismic Events
Saeed Ghorbani, Morteza Barari, Mojtaba Hosseini
Subject: Engineering, Other Keywords: micro seismic events; fuzzy logic; seismic event detection
Natural events such as floods, fires, tsunamis, earthquakes, and others cause serious damage to human beings and nature. The precise detection of these natural events, and especially of earthquakes, has become the focus of many computer and geoscience researchers. Computer science and machine learning algorithms have revolutionized the early detection and prediction of these events. Hence, a fuzzy method is first used in this article to enhance the authenticity of the data through the application of effective variables, and then the neural network algorithms of the MLP perceptron and the RBF radial network are combined in a collective learning system in order to identify small-scale seismic events more accurately. Simulation of the proposed method shows that it improves significantly on basic methods in terms of actual error and root-mean-square error (RMSE).
A Novel Adaptive Neuro-Fuzzy Based Cascaded PIDF-PIDF Controller for Automatic Generation Control Analysis of Multi-Area Multi-Source Hydrothermal System
Abinands Ramshanker, Ravi K., Jacob I. Raglend, Belwin J. Edward
Subject: Engineering, Electrical & Electronic Engineering Keywords: Automatic generation controls (AGC); Adaptive Neuro-Fuzzy controller; cascaded controller; parallel High voltage direct current (HVDC) tie-lines; Skill Optimization Algorithm (SOA)
This article investigates the Automatic Generation Control (AGC) of multi-area multi-source interconnected systems with hydropower plants, thermal power plants, and wind energy. An Adaptive Neuro-Fuzzy controller integrated with a cascaded proportional-integral-derivative with filter (PIDF-PIDF) controller, denoted ANF-PIDF-PIDF, is presented as a secondary controller for the considered hybrid power systems. The recent Skill Optimization Algorithm (SOA) is employed to optimize the PIDF-PIDF controller parameter gains and the Adaptive Neuro-Fuzzy controller's input and output scaling factors. SOA updates the controller parameters with the integral square error (ISE) employed as the objective function. A 1% step load disturbance is considered simultaneously in all three areas. The controller's performance is evaluated and compared, with and without the effects of wind energy sources and non-linearity, for ANF-PIDF-PIDF, PIDF-PIDF, and PIDF, and the ANF-PIDF-PIDF is found to be the most efficient. The dynamic system performance is also compared with parallel high voltage direct current (HVDC) tie-lines. The investigation clearly shows that incorporating HVDC tie-lines in a multi-area, multi-source system provides better dynamic performance in terms of maximum amplitude, oscillation, and settling time. Additionally, a sensitivity analysis shows that the optimum controller gains do not need to be reset for uncertain changes in system loading conditions. All simulation results were evaluated using MATLAB 2016b.
Multi-Criteria Decision-Making Techniques for Solving the Airport Ground Handling Service Equipment Vendor Selection Problem
Peng Yen-Ting, Shen Chien-Wen, Tu Chang-Shu
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: equipment vendor selection; fuzzy TOPSIS; fuzzy weighted average left and right score; multi-choice goal programming; multi-aspiration goal programming
The airport ground handling service (AGHS) equipment vendor selection (AGHSEVS) problem is critical for ramp work safety management, because AGHS equipment malfunctions affect airport ramp work safety. Appropriate vendor selection can prevent aircraft damage and delays in airlines schedules, and ensure reliable and high-quality ground handling service. The AGHSEVS problem is a time-consuming and complex process that requires professional knowledge and experience to make judgments. Specifically, AGHSEVS is a multi-criteria decision-making (MCDM) problem. Previous research has seldom integrated MCDM methods with linear and goal programming to solve the AGHSEVS problem. The objective of this study was to develop a new system evaluation model for AGHSEVS by considering both qualitative and quantitative methods. We test the proposed approach on an AGHS company in Taiwan.
Research on Forecast and Suppression Splashing Method of AOD Furnace
Changjun Guan
Subject: Engineering, Control & Systems Engineering Keywords: Forecast splashing; suppression splashing; AOD; signal fusion; fuzzy control
During the smelting process of an AOD furnace, an unbalanced material reaction can lead to splashing, which not only damages the smelting equipment but can also seriously injure personnel. In this study, liquid level, audio, and vibration information are first detected by multiple sensors. The fused information is then used to forecast splashing. Finally, a multitasking fuzzy controller is used to suppress splashing. The results show that the method can forecast splashing accurately and achieve rapid suppression, thus improving smelting efficiency.
Ultrasound guided double injection of blood into cisterna magna: a rabbit model for treatment of cerebral vasospasm
Yongchao Chen, Youzhi Zhu, Yu Zhang, Zixuan Zhang, Juan Lian, Fucheng Luo, Xuefei Deng & Kelvin KL Wong
BioMedical Engineering OnLine volume 15, Article number: 19 (2016)
Double injection of blood into the cisterna magna using a rabbit model results in cerebral vasospasm. An unacceptably high mortality rate tends to limit the application of the model. Ultrasound guided puncture can provide real-time imaging guidance for the operation. The aim of this paper is to establish a safe and effective rabbit model of cerebral vasospasm after subarachnoid hemorrhage with the assistance of ultrasound imaging.
A total of 160 New Zealand white rabbits were randomly divided into four groups of 40 each: (1) manual control group, (2) manual model group, (3) ultrasound guided control group, and (4) ultrasound guided model group. The subarachnoid hemorrhage was intentionally caused by double injection of blood into their cisterna magna. Then, basilar artery diameters were measured using magnetic resonance angiography before modeling and 5 days after modeling.
The depth of the needle entering the cisterna magna was determined during ultrasound guided puncture. The mortality rates in the manual control and model groups were 15 and 23 %, respectively, whereas no rabbits died in the two ultrasound guided groups; the mortality rate in the ultrasound guided groups was therefore significantly lower than in the manual groups. Compared with the diameters before modeling, the basilar artery diameters after modeling were significantly smaller in both the manual and ultrasound guided model groups. Vasospasm was more pronounced, and the proportion of severe vasospasms greater, in the ultrasound guided model group than in the manual model group. In the manual model group, no vasospasm was found in 8 % of the rabbits.
The ultrasound guided double injection of blood into cisterna magna is a safe and effective rabbit model for treatment of cerebral vasospasm.
Despite the advances in diagnosis and treatment of subarachnoid hemorrhage, effective therapeutic interventions are still limited and clinical outcomes remain disappointing [1]. There is substantial evidence that delayed cerebral vasospasm contributes to the significant mortality and morbidity rates following subarachnoid hemorrhage [1–9]. Cerebral vasospasm can lead to cerebral hypoperfusion, culminating in delayed ischemic neurological deficit, which has been considered a major cause of high mortality and poor outcome. In the last several decades, many researchers have focused primarily on vasospasm and its sequelae [1–16]. However, the success rate with regard to improved outcome is also limited [1]. For a better understanding of the pathogenic mechanism of cerebral vasospasm and to develop efficacious therapeutic strategies, many animal models have been developed [17–20].
Choosing an appropriate animal model is a critical step in the productive cerebral vasospasm research [21]. Primates may be the most preferred species, as the time course of delayed cerebral vasospasm is similar to that observed in humans and the angiography is relatively easy to perform [20]. However, drawbacks with these species include high costs, limited availability, and difficulties in inducing subarachnoid hemorrhage. Canines are another suitable species, especially when used in the molecular biology research [22]. However, the canines are too small and the angiography is hard to perform. Therefore, rabbits are the alternative species of choice, as they offer many advantages [22–26]. First, the time course of cerebral vasospasm shows a biphasic pattern of early and delayed vasospasm as found in humans. Second, the morphological changes in arteries and ventricles observed in rabbit models are similar to those observed in humans. Third, rabbit models are available in larger numbers with relatively low costs, and intubation and respiratory support are not required in anesthesia. Finally, rabbits are relatively easy to restrain when using an appropriate restraining device, given their relatively docile nature.
In the 1980s, rabbits emerged as a new species in delayed cerebral vasospasm research. Since Liszczak et al. presented models of blood injection into the cisterna magna [27], this technique became the standard for subarachnoid hemorrhage induction in rabbits. It can generate a pathologic condition similar to that seen after the rupture of an intracranial aneurysm and is easy to perform with a high success rate. The frequency of blood injection ranges from 1 to 3 times. The double injection method induces a more severe and prolonged vasospasm than the single injection method and is well established in dog and rat models [17, 19, 20, 22]. In rabbits, the double injection method reportedly produces a vasospasm that is more severe and persistent; the vasospasm peaks approximately 5 days after the first injection and persists for up to the next 2 days [21]. However, due to the unacceptably high mortality rates reported by Baker et al. (41 %) [28] and Spallone and Pastore (20 %) [29], the double injection method is not popular in rabbit models.
Animal deaths occurring during the procedure are mainly caused by inadvertent needling of the brainstem or by intraparenchymal blood injection [26]. Various methods have been used to avoid puncture failure, such as maintaining an appropriate posture, not evacuating too much cerebrospinal fluid, and directing the needle slightly rostrally [25, 26]. However, animal deaths were inevitable because the cisterna magna is small and the location of the needle tip cannot be confirmed. Moreover, the double injection method tends to increase the risk of death [25]. Ultrasound guided puncture is a widely used interventional technique in the clinic, which provides real-time imaging guidance for all kinds of puncture [30–36]. The present study was designed to use ultrasound guided puncture in the establishment of a cerebral vasospasm model, so as to provide a safe and effective rabbit model of cerebral vasospasm after subarachnoid hemorrhage.
Experimental animal grouping and development of subarachnoid hemorrhage model
This study was approved by the Ethics Committee of Anhui Medical University (Hefei, China, 2012238). All animal use and care protocols including the operation procedures were carried out in strict accordance with the recommendation in Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.
A total of 160 adult male New Zealand white rabbits that weigh from 2.5 to 3.2 kg were supplied by the experimental animal center of Anhui Medical University. All rabbits were randomly divided into four groups of 40 each: (1) manual control group, (2) manual model group, (3) ultrasound guided control group, and (4) ultrasound guided model group. We established and detailed the groups as follows:
Manual model group. After rabbits were fixed on the operating table, they were anesthetized using 3 % sodium pentobarbital (1 ml/kg) after an intravenous injection through auricular vein. The occipital hair was shaved, and the skin was sterilized with 75 % alcohol. A 5 ml-syringe with a 22-gauge needle was inserted into cisterna magna through atlano-occipital fascia (Fig. 1). Once the dura mater was perforated, a small amount of cerebrospinal fluid (about 0.4 ml/kg) was removed. The autologous non-heparinized fresh auricular arterial blood (about 0.6 ml/kg) was injected into cisterna magna slowly. The rabbits were kept in head-down position for 30 min, whereby the blood would distribute into other subarachnoid spaces with cerebrospinal fluid circulation. The second injection was accomplished 48 h after the first injection, and 0.4 ml/kg blood was injected into cisterna magna using the same procedure as first injection.
Cisterna magna and atlano-occipital fascia in CT sagittal image. C cerebellum, EOP external occipital protuberance, AOF atlano-occipital fascia, CM cisterna magna, BS brain stem
Manual control group. All the procedures were the same as model group, except normal saline was used instead of autologous blood.
Ultrasound guided model group. Before puncture, the depth of cisterna magna was measured by ultrasound. The best puncture direction was designed according to the relationship between atlano-occipital fascia and the deepest part of cisterna magna (Fig. 2). If the cisterna magna was found to be too small (the depth was less than 0.23 cm, which is the length of inclined plane of needle tip), the amount of autologous blood injected into cisterna magna would be decreased. Guided by ultrasound, the needle was inserted into cisterna magna. Other procedures remained the same as manual model group.
Measurement and observation of cisterna magna that is guided by ultrasound. a Cisterna magna and adjacent structures were displayed by ultrasound. b Depth of cisterna magna was measured at its optimal part. The puncture direction was designed based on the line from atlano-occipital fascia to the deepest part of cisterna magna. CM cisterna magna, BS brain stem, EOP external occipital protuberance
Ultrasound guided control group. All the procedures were the same as model group except that normal saline was used instead of autologous blood.
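As a side note, the per-kilogram dosing used for the model groups above (about 0.4 ml/kg of cerebrospinal fluid removed, 0.6 ml/kg of blood for the first injection and 0.4 ml/kg for the second) translates into per-animal volumes as in the small helper below; this is our illustration only, and the function name and rounding are ours.

```python
# Hypothetical helper converting the per-kilogram protocol into per-animal volumes.
def injection_volumes(weight_kg):
    return {
        "csf_removed_ml": round(0.4 * weight_kg, 2),
        "first_blood_ml": round(0.6 * weight_kg, 2),
        "second_blood_ml": round(0.4 * weight_kg, 2),
    }

print(injection_volumes(2.8))  # a 2.8 kg rabbit: 1.12 ml CSF, then 1.68 ml and 1.12 ml of blood
```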
Observation of the rabbit subarachnoid hemorrhage model under CT
The rabbits underwent CT scanning 1 day after modeling, to determine whether there was blood in the subarachnoid space. All CT images were obtained using a 16-MDCT unit (Siemens Healthcare). The scanning parameters were as follows: 80 kV, 100 mA, 1.25-mm section thickness, 1-mm intersection gap, 0.7 cm/s table speed, 9.6-cm FOV, and 512 × 512 matrix.
Observation of cerebral vasospasm with magnetic resonance angiography
Next, the rabbits underwent MR scanning before modeling and 5 days after modeling, to determine whether cerebral vasospasm occurred. MR scanning was performed with a 3.0 T MR unit (Siemens Healthcare) using a knee joint coil. After a T2-weighted sequence with a TR/TE of 2000/96 ms was performed to display the loose connective tissue, the time of flight magnetic resonance angiography (TOF-MRA) was performed to display cerebral artery. The scanning parameters were as follows: 256 × 256 matrix, 0.5-mm section thickness, 11-cm FOV, TR 25 ms, and TE 5.72 ms. The data were translated to post-processing workstation after scanning. The basilar artery diameters were measured by two radiologists independently, and their mean values were calculated.
The vasospasm severity was calculated based on the basilar artery diameters before modeling and 5 days after modeling. The calculation formula was modified from Laslo et al. [37] in Eq. (1) as follows:
$$ \text{Vasospasm severity} = \frac{\text{BA}_{0} - \text{BA}_{5}}{\text{BA}_{0}} \times 100\% $$
In this formula, BA0 represents the basilar artery diameter before modeling, while BA5 represents the basilar artery diameter 5 days after modeling in the same rabbit.
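As a minimal worked example (our addition, not part of the original analysis), applying the formula to the mean basilar artery diameters reported in the Results below reproduces the quoted mean severities of about 46 % and 26 %.

```python
# Vasospasm severity = (BA0 - BA5) * 100% / BA0, with diameters in mm.
def vasospasm_severity(ba0, ba5):
    """Percent reduction of basilar artery diameter after modeling."""
    return (ba0 - ba5) * 100.0 / ba0

print(vasospasm_severity(0.67, 0.36))  # ultrasound guided model group: ~46.3 %
print(vasospasm_severity(0.66, 0.49))  # manual model group: ~25.8 %
```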
All statistical analyses were performed with SPSS software for Windows, and data are presented as the mean ± standard deviation (SD). Statistical comparisons of the basilar artery diameter before and after modeling were made using the paired t test. Statistical comparisons of the mortality rate and the vasospasm severity were made using the Chi square test. Here, P < 0.05 was considered to indicate a significant difference.
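For readers reproducing the comparisons outside SPSS, the same tests can be sketched with SciPy as below (our illustration). The per-animal diameter arrays are placeholders, while the 2 × 2 mortality table uses the counts reported in the Results; omitting the continuity correction reproduces the χ2 value quoted there.

```python
from scipy import stats

# Paired t-test: basilar artery diameter before vs. 5 days after modeling
ba_before = [0.66, 0.68, 0.65]        # placeholder per-animal values (mm)
ba_after = [0.50, 0.47, 0.51]
t_stat, p_paired = stats.ttest_rel(ba_before, ba_after)

# Chi-square test on mortality: manual model (9/40 deaths) vs. ultrasound model (0/40)
table = [[9, 31],
         [0, 40]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)
print(round(chi2, 3), round(p_chi2, 3))  # 10.141 and p ~ 0.001-0.002
```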
Operation based on ultrasound guided puncture
The cisterna magna was a dilated subarachnoid space between the cerebellum and the medulla oblongata (Fig. 3). Before the puncture procedure, the cisterna magna, brain stem, and external occipital protuberance were displayed by ultrasound. The depth of the cisterna magna was (0.42 ± 0.06) cm (Fig. 2). According to the line from the atlano-occipital fascia to the deepest part of the cisterna magna, the best puncture direction was designed. The angle between the puncture needle and the vertical line was (56 ± 11)°, while the depth of puncture was (2.38 ± 0.81) cm (Fig. 3).
Illustration of puncture direction in MR sagittal image. C cerebellum, CM cisterna magna, BS brain stem, Arrow 1 puncture direction in manual model group (vertical line), Arrow 2 puncture direction in ultrasound guided model group
During ultrasound guided puncture procedure, the whole process of the needle entering into cisterna magna was observed directly. The depth of needle tip entering into cisterna magna and the distance between tip and brain stem were controlled by operator (Fig. 4a). When blood was injected into cisterna magna, the signal of flow blood was monitored by ultrasound (Fig. 4b). At 1 day after modeling, the blood was observed in subarachnoid space in CT images (Fig. 4c), which illustrated that the injection was successful.
Ultrasound guided puncture procedure. a Needle was showed in the puncture procedure. b When blood was injected, the signal of flow blood was monitored by ultrasound, and boundary between cisterna magna and brain stem was disturbed by the high signal. c Blood in subarachnoid space at 1 day after modeling. CM cisterna magna, BS brain stem, Blue line dura mater, Red arrowhead puncture needle, Blue block blood signal, Red arrow blood in subarachnoid space
Safety considerations of puncture
For the manual control group (40 rabbits), six rabbits died within 30 min after puncture: two died during the first puncture, while the other four died during the second puncture. The mortality rate in the manual control group was 15 %. For the manual model group (40 rabbits), nine rabbits died within 30 min after puncture: three died during the first puncture, while the other six died during the second puncture. The mortality rate in the manual model group was 23 %. There was no statistical difference between the mortality rates of the manual control group and the manual model group (χ2 = 0.738, P = 0.390).
There were no deaths among all 80 rabbits in the ultrasound guided control and model groups. Compared with the manual model group, the mortality rate in the ultrasound guided model group was significantly lower (χ2 = 10.141, P = 0.002). The same difference was observed between the manual control group and the ultrasound guided control group (χ2 = 6.486, P = 0.026). A comparison of the mortality rates between all four groups is shown in Fig. 5.
Comparison of mortality rates among four groups
Effectiveness of rabbit cerebral vasospasm model
In both the manual and ultrasound guided control groups, there was no statistical difference in basilar artery diameter between before modeling and 5 days after modeling. Before modeling, the basilar artery diameter in the manual model group was (0.66 ± 0.05) mm, while the diameter was (0.67 ± 0.06) mm in the ultrasound guided model group. Five days after modeling, the diameters changed to (0.49 ± 0.13) and (0.36 ± 0.02) mm in the manual model group and the ultrasound guided model group, respectively. Compared with the diameters before modeling, the basilar artery was significantly narrower after modeling in both the manual and ultrasound guided model groups. However, the change in the ultrasound guided model group was larger than that in the manual model group (Figs. 6 and 7).
Comparison of basilar artery diameter among four groups. **Compare with the diameter before modeling, P < 0.05. ##Compare with the diameter in the manual group, P < 0.05. ††Compare with the diameter in the control group, P < 0.05
Observation of basilar artery in MRA before and after modeling. a, b Manual control group. c, d Ultrasound guided control group. e, f Manual model group. g, h ultrasound guided model group. a, c, e, g Before modeling. b, d, f, h 5 days after modeling. BA basilar artery, VA vertebral artery
After calculating the vasospasm severity, the mean severity in the manual model group and the ultrasound guided model group was 26 and 46 %, respectively. The proportions of severe (≥30 %), moderate (20–29 %), mild (10–19 %), and minimal (<10 %) vasospasm are shown in Table 1. The proportion of severe vasospasms was greater in the ultrasound guided model group than in the manual model group (χ2 = 12.917, P = 0.001). In the ultrasound guided group, the depth of the cisterna magna in two rabbits was only 0.22 and 0.20 cm, respectively; because the cisterna magna was too small, less blood was injected, and the vasospasm severity in these two animals was 14 and 16 %, respectively, while in the other rabbits the severity was higher than 30 %. In the manual group, no obvious vasospasm was found in three (8 %) rabbits, which indicates failure of model establishment (Table 1).
Table 1 Vasospasm severity of basilar artery before and after modeling (number/ %)
There are two basic requirements in the establishment of a cerebral vasospasm model: one is subarachnoid hemorrhage, and the other is sufficient stimulation of the vessels by blood. The methods used for different subarachnoid hemorrhage models vary; they are usually as follows [38]: (1) let the blood coagulate around the blood vessels by pricking intracranial arteries; (2) after surgical exposure of the experimental arteries, inject blood from other parts of the body around the vessels; (3) inject autologous blood percutaneously into a cerebral cistern, ventricle, or the subarachnoid space, so that the blood coagulates around the blood vessels as it is carried by the flow of cerebrospinal fluid. These three methods have their inherent advantages and disadvantages. The former two methods have many disadvantages, such as the associated major trauma and high mortality rates. Double injection of blood into the cisterna magna allows high-precision control of the injection volume, velocity, and time. Due to its relatively low mortality rate, this method has good repeatability, which makes it suitable for single-factor analysis under stable conditions. Because of the importance of the pathogenesis of cerebral vasospasm, double injection of blood into the cisterna magna has been used to establish cerebral vasospasm models successfully in rats [39, 40], rabbits [25, 26], and dogs [22].
In the process of establishing a cerebral vasospasm model, the death of experimental animals seems inevitable with the traditional double injection of blood into the cisterna magna. This study showed that even when the blood injected into the cisterna magna was replaced with normal saline (manual control group), 15 % of the New Zealand rabbits died during the puncture procedure. The mortality rate was slightly higher (23 %) in the manual model group. However, no significant difference was found between the manual control and model groups, which indicates that the deaths of the rabbits were caused by the puncture procedure itself, not by the blood. The cisterna magna is located behind the brain stem. Thus, it is easy to damage the vital centers inside the brain stem, as the entry depth of the needle tip is hard to control during the puncture. The rabbits may then die as a result of respiratory arrest. In the ultrasound guided puncture procedure, the cisterna magna was found to be too small in two rabbits. Because the depth of the cisterna magna was less than the length of the inclined plane of the needle tip, the brain stem would have been damaged in the traditional manual approach.
Considering the safety of the puncture process, a new technology that can monitor the entire puncture process is needed to reduce the mortality rate of the rabbits. This study modified the puncture technology by combining it with an interventional ultrasound technique. With ultrasound guidance, the cisterna magna, brain stem, and external occipital protuberance were displayed. The brain stem remained undamaged because the depth of the needle tip entering the cisterna magna was controlled by the operator. To ensure the safety of the puncture process, the best puncture direction was designed according to the relationship between the atlano-occipital fascia and the deepest part of the cisterna magna. Because the oblique course within the cisterna magna increased the margin for needle-tip movement, the tip could not touch the brain stem. Before the operation, the angle and the depth of puncture can be detected and calculated by ultrasound. The angle was (56 ± 11) degrees, while the depth was (2.38 ± 0.81) cm. These data vary between rabbits, and therefore a uniform puncture scheme is not sufficient for injection of blood into the cisterna magna. With ultrasound guidance, no deaths occurred among the 80 New Zealand rabbits, and the mortality rate was significantly lower than in the manual groups, which confirmed the safety of this new puncture technology.
The next problem to address is the validity of the model, that is, whether cerebral vasospasm occurs. Cerebral vasospasm occurs more readily because the vessels are stimulated twice in the double injection of blood into the cisterna magna. In this study, no obvious vasospasm was found in three rabbits in the manual group (8 %), which indicates failure of model establishment. In the ultrasound guided model group, cerebral vasospasm occurred in all experimental animals 5 days after modeling, and the proportion of severe vasospasm was as high as 95 %, which was greater than that of the manual group (65 %).
Two issues should be considered in ultrasound real-time observation. First, the cisterna magna was too small in 5 % of the rabbits. In these animals, less blood was injected in order to prevent intracranial hypertension, and only mild vasospasm was found in the MRA images 5 days after modeling. This study suggests that the operator should abandon modeling when a small cisterna magna is found. Second, in most of the second punctures, adhesion was found near the cisterna magna, making it hard to feel the perforation of the dura mater when the needle enters the cisterna magna. If traditional manual puncture were used, the operator might choose a shallower needle depth to prevent damage to the brain stem; no blood would then enter the cisterna magna, so the vessels would lack the second stimulation, reducing the incidence of cerebral vasospasm. This may be the most important reason for the lack of vasospasm in 8 % of the rabbits in the manual model group.
In summary, the ultrasound guided double injection of blood into cisterna magna not only has a low animal mortality rate, but it also ensures the occurrence of cerebral vasospasm. This method should be further popularized and applied as a safe and effective rabbit model of cerebral vasospasm.
Ciurea AV, Palade C, Voinescu D, Nica DA. Subarachnoid hemorrhage and cerebral vasospasm—literature review. J Med Life. 2013;6(2):120–5.
Dabus G, Nogueira RG. Current options for the management of aneurysmal subarachnoid hemorrhage-induced cerebral vasospasm: a comprehensive review of the literature. Interv Neurol. 2013;2(1):30–51. doi:10.1159/000354755.
Bar B, MacKenzie L, Hurst RW, Grant R, Weigele J, Bhalla PK, et al. Hyperacute vasospasm after aneurysmal subarachnoid hemorrhage. Neurocrit Care. 2015. doi:10.1007/s12028-015-0177-y.
Przybycien-Szymanska MM, Ashley WW Jr. Biomarker discovery in cerebral vasospasm after aneurysmal subarachnoid hemorrhage. J Stroke Cerebrovasc Dis Off J Natl Stroke Assoc. 2015;24(7):1453–64. doi:10.1016/j.jstrokecerebrovasdis.2015.03.047.
Dusick JR, Gonzalez NR. Management of arterial vasospasm following aneurysmal subarachnoid hemorrhage. Semin Neurol. 2013;33(5):488–97. doi:10.1055/s-0033-1364216.
Izzy S, Muehlschlegel S. Cerebral vasospasm after aneurysmal subarachnoid hemorrhage and traumatic brain injury. Curr Treat Options Neurol. 2014;16(1):278. doi:10.1007/s11940-013-0278-x.
Gross BA, Lin N, Frerichs KU, Du R. Vasospasm after spontaneous angiographically negative subarachnoid hemorrhage. Acta Neurochir (Wien). 2012;154(7):1127–33. doi:10.1007/s00701-012-1383-4.
Velat GJ, Kimball MM, Mocco JD, Hoh BL. Vasospasm after aneurysmal subarachnoid hemorrhage: review of randomized controlled trials and meta-analyses in the literature. World Neurosurg. 2011;76(5):446–54. doi:10.1016/j.wneu.2011.02.030.
Pluta RM, Hansen-Schwartz J, Dreier J, Vajkoczy P, Macdonald RL, Nishizawa S, et al. Cerebral vasospasm following subarachnoid hemorrhage: time for a new world of thought. Neurol Res. 2009;31(2):151–8. doi:10.1179/174313209x393564.
Brathwaite S, Macdonald RL. Current management of delayed cerebral ischemia: update from results of recent clinical trials. Transl Stroke Res. 2014;5(2):207–26. doi:10.1007/s12975-013-0316-8.
Budohoski KP, Guilfoyle M, Helmy A, Huuskonen T, Czosnyka M, Kirollos R, et al. The pathophysiology and treatment of delayed cerebral ischaemia following subarachnoid haemorrhage. J Neurol Neurosurg Psychiatry. 2014;85(12):1343–53. doi:10.1136/jnnp-2014-307711.
Rowland MJ, Hadjipavlou G, Kelly M, Westbrook J, Pattinson KT. Delayed cerebral ischaemia after subarachnoid haemorrhage: looking beyond vasospasm. Br J Anaesth. 2012;109(3):315–29. doi:10.1093/bja/aes264.
Mortimer AM, Steinfort B, Faulder K, Harrington T. Delayed infarction following aneurysmal subarachnoid hemorrhage: can the role of severe angiographic vasospasm really be dismissed? J Neurointerv Surg. 2015. doi:10.1136/neurintsurg-2015-011854.
Brown RJ, Kumar A, Dhar R, Sampson TR, Diringer MN. The relationship between delayed infarcts and angiographic vasospasm after aneurysmal subarachnoid hemorrhage. Neurosurgery. 2013;72(5):702–7. doi:10.1227/NEU.0b013e318285c3db (discussion 7-8).
Caner B, Hou J, Altay O, Fujii M, Zhang JH. Transition of research focus from vasospasm to early brain injury after subarachnoid hemorrhage. J Neurochem. 2012;123(Suppl 2):12–21. doi:10.1111/j.1471-4159.2012.07939.x.
Crowley RW, Medel R, Dumont AS, Ilodigwe D, Kassell NF, Mayer SA, et al. Angiographic vasospasm is strongly correlated with cerebral infarction after subarachnoid hemorrhage. Stroke J Cerebral Circ. 2011;42(4):919–23. doi:10.1161/strokeaha.110.597005.
Dudhani RV, Kyle M, Dedeo C, Riordan M, Deshaies EM. A low mortality rat model to assess delayed cerebral vasospasm after experimental subarachnoid hemorrhage. J Vis Exp JoVE. 2013;71:e4157. doi:10.3791/4157.
Marbacher S, Fandino J, Kitchen N. Characteristics of in vivo animal models of delayed cerebral vasospasm. Acta Neurochir Suppl. 2011;110(Pt 1):173–5. doi:10.1007/978-3-7091-0353-1_30.
Marbacher S, Fandino J, Kitchen ND. Standard intracranial in vivo animal models of delayed cerebral vasospasm. Br J Neurosurg. 2010;24(4):415–34. doi:10.3109/02688691003746274.
Megyesi JF, Vollrath B, Cook DA, Findlay JM. In vivo animal models of cerebral vasospasm: a review. Neurosurgery. 2000;46(2):448–60 (discussion 60-1).
Zhou ML, Shi JX, Zhu JQ, Hang CH, Mao L, Chen KF, et al. Comparison between one- and two-hemorrhage models of cerebral vasospasm in rabbits. J Neurosci Methods. 2007;159(2):318–24. doi:10.1016/j.jneumeth.2006.07.026.
Mori K. Double cisterna magna blood injection model of experimental subarachnoid hemorrhage in dogs. Transl Stroke Res. 2014;5(6):647–52. doi:10.1007/s12975-014-0356-8.
Raslan F, Albert-Weissenberger C, Westermaier T, Saker S, Kleinschnitz C, Lee JY. A modified double injection model of cisterna magna for the study of delayed cerebral vasospasm following subarachnoid hemorrhage in rats. Exp Transl Stroke Med. 2012;4(1):23. doi:10.1186/2040-7378-4-23.
Guresir E, Schuss P, Borger V, Vatter H. Experimental subarachnoid hemorrhage: double cisterna magna injection rat model–assessment of delayed pathological effects of cerebral vasospasm. Transl Stroke Res. 2015;6(3):242–51. doi:10.1007/s12975-015-0392-z.
Kikkawa Y. A rabbit cisterna magna double-injection subarachnoid hemorrhage model. Acta Neurochir Suppl. 2015;120:331–5. doi:10.1007/978-3-319-04981-6_57.
Kikkawa Y, Kurogi R, Sasaki T. The single and double blood injection rabbit subarachnoid hemorrhage model. Transl Stroke Res. 2015;6(1):88–97. doi:10.1007/s12975-014-0375-5.
Liszczak TM, Black PM, Tzouras A, Foley L, Zervas NT. Morphological changes of the basilar artery, ventricles, and choroid plexus after experimental SAH. J Neurosurg. 1984;61(3):486–93. doi:10.3171/jns.1984.61.3.0486.
Baker KF, Zervas NT, Pile-Spellman J, Vacanti FX, Miller D. Angiographic evidence of basilar artery constriction in the rabbit: a new model of vasospasm. Surg Neurol. 1987;27(2):107–12.
Spallone A, Pastore FS. Cerebral vasospasm in a double-injection model in rabbit. Surg Neurol. 1989;32(6):408–17.
Zhong X, Hamill M, Collier B, Bradburn E, Ferrara J. Dynamic multiplanar real time ultrasound guided infraclavicular subclavian vein catheterization. Am Surg. 2015;81(6):621–5.
Chacko J, Gagan B, Kumar U, Mundlapudi B. Real-time ultrasound guided percutaneous dilatational tracheostomy with and without bronchoscopic control: an observational study. Minerva Anestesiol. 2015;81(2):166–74.
Sadahiro H, Nomura S, Goto H, Sugimoto K, Inamura A, Fujiyama Y, et al. Real-time ultrasound-guided endoscopic surgery for putaminal hemorrhage. J Neurosurg. 2015. doi:10.3171/2014.11.jns141508.
Menace C, Choquet O, Abbal B, Morau D, Biboulet P, Bringuier S, et al. Real-time ultrasound-guided epidural anaesthesia technique can be improved by new echogenic Tuohy needles: a pilot study in cadavers. Br J Anaesth. 2014;113(2):299–301. doi:10.1093/bja/aeu247.
Sobolev M, Slovut DP, Lee Chang A, Shiloh AL, Eisen LA. Ultrasound-guided catheterization of the femoral artery: a systematic review and meta-analysis of randomized controlled trials. J Invasive Cardiol. 2015;27(7):318–23.
Stolz LA, Stolz U, Howe C, Farrell IJ, Adhikari S. Ultrasound-guided peripheral venous access: a meta-analysis and systematic review. J Vasc Access. 2015;16(4):321–6. doi:10.5301/jva.5000346.
Tan LA, Lopes DK, Fontes RB. Ultrasound-guided posterolateral approach for midline calcified thoracic disc herniation. J Korean Neurosurg Soc. 2014;55(6):383–6. doi:10.3340/jkns.2014.55.6.383.
Laslo AM, Eastwood JD, Pakkiri P, Chen F, Lee TY. CT perfusion-derived mean transit time predicts early mortality and delayed vasospasm after experimental subarachnoid hemorrhage. AJNR Am J Neuroradiol. 2008;29(1):79–85. doi:10.3174/ajnr.A0747.
Zhou Y, Martin RD, Zhang JH. Advances in experimental subarachnoid hemorrhage. Acta Neurochir Suppl. 2011;110(Pt 1):15–21. doi:10.1007/978-3-7091-0353-1_3.
Guresir E, Schuss P, Borger V, Vatter H. Experimental subarachnoid hemorrhage: double cisterna magna injection rat model-assessment of delayed pathological effects of cerebral vasospasm. Transl Stroke Res. 2015;6(3):242–51. doi:10.1007/s12975-015-0392-z.
Hu N, Wu Y, Chen BZ, Han JF, Zhou MT. Protective effect of stellate ganglion block on delayed cerebral vasospasm in an experimental rat model of subarachnoid hemorrhage. Brain Res. 2014;1585:63–71. doi:10.1016/j.brainres.2014.08.012.
YCC contributed to the experimental design, analysis and interpretation of data. YZZ and YZ participated in acquisition of the CT and MR data. ZXZ, JL and FCL carried out part of the experiments and of the statistical analysis. XFD was involved in all aspects of study conception, design, analysis and interpretation and provided final approval of the version of the submitted manuscript. KKLW participated in the design of the study and helped draft and revise the manuscript. All authors read and approved the final manuscript.
The project was funded by the National Natural Science Foundation of China (Reference No: 81200895) and the Medical and Health Foundation of Nanjing Military Area (Reference No: 12z12).
Ultrasound Center, The 105th Hospital of PLA, Hefei, China: Yongchao Chen, Juan Lian & Fucheng Luo
Department of Radiology, The 105th Hospital of PLA, Hefei, China: Youzhi Zhu & Yu Zhang
Department of Anatomy, Anhui Medical University, Hefei, China: Zixuan Zhang & Xuefei Deng
School of Medicine, Western Sydney University, Sydney, Australia: Kelvin KL Wong
Correspondence to Xuefei Deng.
Yongchao Chen and Youzhi Zhu contributed equally to this work
Chen, Y., Zhu, Y., Zhang, Y. et al. Ultrasound guided double injection of blood into cisterna magna: a rabbit model for treatment of cerebral vasospasm. BioMed Eng OnLine 15, 19 (2016) doi:10.1186/s12938-016-0123-z
Cerebral vasospasm
Double injection of blood into cisterna magna
Animal model
Ultrasound guided puncture
Global stability analysis for a generalized delayed SIR model with vaccination and treatment
A. Elazzouzi, A. Lamrani Alaoui, M. Tilioua & A. Tridane (ORCID: orcid.org/0000-0001-8471-7807)
In this work, we investigate the stability of an SIR epidemic model with a generalized nonlinear incidence rate and distributed delay. The model also includes a vaccination term and a general treatment function, which are the two principal control measures used to reduce the disease burden. Using Lyapunov functions, we show that the disease-free equilibrium state is globally asymptotically stable if \(\mathcal{R}_{0} \leq 1 \), where \(\mathcal{R}_{0} \) is the basic reproduction number. On the other hand, the disease-endemic equilibrium is globally asymptotically stable when \(\mathcal{R}_{0} > 1 \). For a specific type of treatment and incidence functions, our analysis shows that the success of the vaccination strategy, as well as of the treatment, depends on the initial size of the susceptible population. Moreover, we discuss, numerically, the behavior of the basic reproduction number with respect to the vaccination and treatment parameters.
Mathematical modeling has become a powerful and important tool to understand the dynamic behavior of infectious diseases and to improve disease control in a population. These models are often described in several forms, such as SI, SIS, SIR, or SIRS models, where S stands for the susceptible subpopulation, I for the infected subpopulation, and R for the recovered subpopulation. The progress of a disease in a population is dictated by the nature and the mode of transmission between infected and susceptible individuals. The mode of transmission is the method by which the infection is transferred or carried from one place to another to reach a new host (for example, airborne, saliva, vector-borne, and bodily fluids). Hence, it is natural to adapt these models to the disease of concern by choosing the right incidence function. It is known that the functional form of the incidence rate of the infection plays a crucial role in modeling the infection dynamics, and many forms of incidence function have been considered in mathematical epidemiology, for example, the bilinear incidence rate βSI, where β is the transmission rate of infection, the saturated incidence rate \(\frac{ \beta \mathit{SI}}{1+ \alpha I}\), with α defined as the inhibitory coefficient, and many other forms (see [1,2,3,4,5,6,7]). To make a model more realistic, it is of interest to introduce a time delay, and considerable attention has been paid by several authors to studying the dynamics of epidemic models with discrete or distributed time delay (see [3, 4, 8,9,10,11]).
Vaccination and treatment are the two main public health control strategies that help to minimize the burden of an infectious disease and to delay a possible outbreak. Vaccination has the role of preventing healthy people from getting infected by a disease, while treatment cures a disease and can also be used as a prophylactic. These control strategies are usually used together to contain the disease spread (see [12] in the context of influenza). Tulu et al., in [13], developed a mathematical model to study the effect of both vaccination and quarantine on the spread of the Ebola virus; they applied the vaccination strategy to the susceptible individuals. However, in [14], the authors studied the global dynamics of an SEIRS epidemic model with preventive vaccination applied to the newborns. Various vaccination policies were studied in different mathematical models (see [8, 15,16,17,18]). It is well known in classical epidemic models that the recovery rate due to treatment is proportional to the number of infected individuals. However, this proportionality is not satisfied in reality because of limited medical facilities (see [19]). In order to include the limited capacity of medical resources, Chauhan et al., in [20], considered the piecewise linear treatment function of the form
$$ T(I)= \textstyle\begin{cases} kI &\mbox{if } 0\leq I \leq I_{0}, \\ kI_{0} &\mbox{if } I>I_{0}, \end{cases} $$
where \(I_{0}\) is the capacity of treatment. Recently, Li introduced the following saturated treatment function [21]:
$$ T(I)=\frac{a I}{1+\epsilon I}, $$
where a represents the maximal medical resources supplied per unit time and ϵ is a half-saturation constant, which measures the effect of delayed access to treatment. Other works have investigated the effects of treatment on an epidemic (see [19,20,21,22,23,24,25]) and also its optimal control (see [26]).
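For concreteness, the two treatment forms (1) and (2) can be written down directly; the sketch below is a minimal Python illustration (ours, not taken from the cited works), with arbitrarily chosen parameter values.

```python
# Piecewise linear treatment (1) with capacity I0, and saturated treatment (2).
def treatment_piecewise(I, k, I0):
    return k * I if I <= I0 else k * I0

def treatment_saturated(I, a, eps):
    return a * I / (1.0 + eps * I)

# Both vanish at I = 0; the saturated form levels off at a/eps for large I.
print(treatment_piecewise(50.0, k=0.2, I0=30.0))   # 6.0 (capacity reached)
print(treatment_saturated(50.0, a=0.5, eps=0.01))  # ~16.7
```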
The motivation of this work comes from [10, 11], where the authors studied an SIR epidemic model with nonlinear incidence function, and from [19,20,21], where the authors considered a special type of treatment function. The present work would be a continuation and generalization of the above cited works. It is concerned with a generalized SIR epidemic model with distributed delay, vaccination, and treatment. This model incorporates distributed delay, general incidence function, vaccination, and general function treatment. In fact, we apply the vaccination to both susceptible and newborn individuals. On the newborn individuals, we apply the mechanism of "all-or-nothing" vaccine. Recall that an "all-or-nothing" vaccine offers complete protection to a subset of the vaccinated individuals, but the remainder of them stays susceptible to catching the disease. Second, we consider a class of treatment functions satisfying suitable conditions, and it is more general than the one given by (1) or (2). Moreover, it is necessary to point out that the delay in this model represents the incubation time taken to become infectious. This model can be applied to investigate the impact of the vaccination and the treatment in containing the spread of infections which have an incubation time to become infectious, for example, SARS-CoV(see [27, 28]). Our purpose in this work is to investigate the impact of the combined vaccination and treatment strategies on the dynamic behavior of the considered model. We prove that the basic reproduction number \(\mathcal{R}_{0}\) depends explicitly on the vaccination parameters and the general treatment function \(T(I)\). Moreover, we discuss the global stability of the model near equilibria (the disease-free equilibrium \(E_{0}\) and the disease-endemic equilibrium \(E^{*}\)) by means of \(\mathcal{R}_{0}\) and Lyapunov's method. Furthermore, to verify the theoretical results, numerical simulations are performed for special treatment and incidence functions. For illustration, we give some numerical results on the behavior of the basic reproduction number \(R_{0}\) with respect to vaccination and treatment parameters.
The paper is organized as follows. We give a mathematical model formulation in Sect. 2. In Sect. 3, we propose a mathematical analysis of the considered model. More precisely, we calculate the basic reproduction number \(\mathcal{R} _{0}\), and we determine the disease-free equilibrium \(E_{0}\) and the endemic equilibrium \(E^{*}\). Moreover, we prove the local stability of the disease-free equilibrium and the global stability of \(E_{0}\) and \(E^{*}\). In Sect. 4, we give some numerical examples with an incidence and treatment functions satisfying assumptions presented in the previous sections. We finish the paper, in the last section, by providing some concluding remarks.
Mathematical model and preliminaries
In this work, we are interested in a general SIR epidemic model with distributed delay, vaccination, and treatment. The dynamics are governed by the diagram in Fig. 1.
Flow diagram of the disease transmission
The time series of model (3) in the special case (12), with Figures (a), (b), and (c) representing (respectively) \(S(t)\), \(I(t)\), and \(R(t)\). The parameters of the model are \(b = 10\), \(\mu = 0.65 \), \(\beta = 0.2 \), \(c = 0.77 \), \(\gamma = 0.75 \), \(h=1.5\), \(d=0\), \(p=0\), \(\epsilon =0\), \(\xi = 10\), and \(a=0\). In this case \(\overline{\mathcal{R}_{0}}=1.4179> 1\)
From Fig. 1, we have the following SIR model:
$$ \textstyle\begin{cases} \frac{dS(t)}{dt}= (1-(1-\epsilon )p)b- (\mu +d) S(t)- \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau , \\ \frac{dI(t)}{dt}= \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau -(\mu +c+\gamma )I(t)-T(I), \\ \frac{dR(t)}{dt}=T(I)+(1-\epsilon )pb+\gamma I(t)+d S(t)- \mu R(t), \end{cases} $$
where \(S(t) \), \(I(t)\), and \(R(t) \) denote the numbers of susceptible, infective, and recovered individuals at time t, respectively. The susceptibles are augmented by the birth of newborns. Here, we assume that the birth rate b and the death rate μ are not the same. The parameter p is the fraction of vaccinated newborns. A fraction \(\epsilon \in [0,1)\) (the all-or-nothing parameter) of the vaccinated newborns exhibits an unsuccessful vaccination and passes directly to the susceptible class. Our vaccine has an efficacy of \(1 - \epsilon \) (see [27,28,29,30,31,32,33]). For simplicity, we assume that the recovered class stands also for the vaccinated state. Hence, susceptible individuals get vaccinated at rate d.
The nonlinear incidence rate and distributed delay are considered to represent wide class epidemic model similarly as in [10, 11]. More precisely, by taking β the disease transmission coefficient, individuals leave the susceptible class at a rate \(\int _{0}^{h}g(\tau )f(s(t),i(t-\tau ))\,d\tau \), where h represents the maximum time taken to become infectious. The function g that satisfies \(\int _{0}^{h}g(\tau )\,d\tau = 1\) is assumed to be nonnegative.
The function \(f:\mathbb{R}^{2}_{+} \rightarrow \mathbb{R}_{+} \) is assumed to be continuously differentiable in the interior of \(\mathbb{R}^{2}_{+} \) such that
$$ f(0, I) = f(S, 0) = 0\quad \mbox{for } S, I\geqslant 0, $$
and the following hypotheses hold:
\(( \mathbf{H}_{1}) \):
\(f(S,I) \) is a strictly monotone increasing function of \(S \geqslant 0 \) for any fixed \(I > 0\) and a monotone increasing function of \(I > 0 \) for any fixed \(S \geqslant 0\).
\((\mathbf{H}_{2}) \):
\(\phi (S, I) = \frac{f(S,I )}{I} \) is a bounded and monotone decreasing function of \(I > 0 \) for any fixed \(S \geqslant 0 \) and \(k(S) = \lim_{I \rightarrow 0^{+}} \phi (S, I) \) is a continuous and monotone increasing function on \(S \geqslant 0 \).
We also assume that the disease causes death with rate c and γ is the natural recovery rate of the infected individuals.
The function \(T: \mathbb{R}_{+} \rightarrow \mathbb{R}_{+}\) represents the treatment function which we assume to be continuously differentiable and concave down satisfying the following hypotheses:
\((\mathbf{T}_{1})\):
\(T(0) = 0\).
The treatment rate \(\frac{T(I)}{I}\) is monotone increasing.
The assumption of the concavity of the treatment function refers to the fact that the supply of the treatment drugs increases as the disease kicks off in the population until it reaches a maximum level, then the treatment drug stocks start going down due to the exhaustive consumption.
Hypothesis \((\mathbf{T}_{1})\) means that there is no treatment if there is no infection, while hypothesis \(( \mathbf{T}_{2})\) reflects the increasing effort needed from the public health authorities to provide treatment during the time of the infections.
The initial condition for the above system is given for \(\theta \in [-h,0] \) by
$$ S( \theta ) = \Phi _{1} ( \theta ),\qquad I( \theta ) = \Phi _{2} ( \theta ) \quad \mbox{and}\quad R( \theta ) = \Phi _{3} ( \theta ), $$
with \(\Phi = (\Phi _{1}, \Phi _{2}, \Phi _{3}) \in C^{+}\). The space of continuous functions from \([-h,0]\) to \(\mathbb{R}^{2}\) provided with the uniform topology is \(C:= C([-h,0],\mathbb{R}^{3})\), and \(C^{+} = C([-h,0],( \mathbb{R}^{3})^{+}) \) is the nonnegative cone of C. Let \(\Phi _{i}( \theta ) \geq 0\), \(i=1,2,3\), for \(\theta \in [-h,0] \).
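To fix ideas, the following minimal forward-Euler sketch (our addition, not the simulation code used later in the paper) integrates model (3) with illustrative choices: a saturated incidence \(f(S,I)=SI/(1+\xi I)\), a saturated treatment \(T(I)=aI/(1+\epsilon I)\), a uniform kernel \(g\equiv 1/h\), and a constant initial history; the parameter values follow the figure caption above, and the distributed-delay integral is approximated by a Riemann sum over a stored history of I. Any of these choices can be swapped for other functions satisfying \((\mathbf{H}_{1})\), \((\mathbf{H}_{2})\), \((\mathbf{T}_{1})\), and \((\mathbf{T}_{2})\).

```python
import numpy as np

b, mu, beta, c, gamma = 10.0, 0.65, 0.2, 0.77, 0.75
h, d, p, eps_vac = 1.5, 0.0, 0.0, 0.0        # delay bound and vaccination parameters
xi, a, eps_treat = 10.0, 0.0, 0.0            # assumed incidence/treatment parameters

f = lambda S, I: S * I / (1.0 + xi * I)      # assumed saturated incidence
T = lambda I: a * I / (1.0 + eps_treat * I)  # assumed saturated treatment

dt = 0.01
n_lag = int(h / dt)                          # history points covering [0, h]
g = np.full(n_lag, 1.0 / h)                  # uniform kernel, integral of g over [0, h] is 1

S, I, R = 15.0, 1.0, 0.0                     # arbitrary initial values
I_hist = np.full(n_lag, 1.0)                 # constant initial history for I
for _ in range(20000):
    incidence = beta * dt * np.sum(g * f(S, I_hist))   # Riemann sum of the delay integral
    dS = (1 - (1 - eps_vac) * p) * b - (mu + d) * S - incidence
    dI = incidence - (mu + c + gamma) * I - T(I)
    dR = T(I) + (1 - eps_vac) * p * b + gamma * I + d * S - mu * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    I_hist = np.roll(I_hist, 1)
    I_hist[0] = I                            # push the current I into the history
print(S, I, R)                               # settles near the endemic state since R0 > 1 here
```

Refining dt, or replacing the Euler step with a delay-aware solver, would improve accuracy; the sketch is only meant to show how the distributed delay enters the update.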
Following the standard approach (see [34, 35]), model (3) has a unique local solution, i.e., a solution defined on \([ 0, \delta ] \) for some \(\delta > 0\). Moreover, we have the following preliminary results.
Proposition 2.1
The solution of (3), with initial condition (4), is positive and bounded.
We prove, by contradiction, that the solution \((S(t),I(t),R(t)) \) is positive. Let \(t_{1} = \min \{t\geq 0: S(t)I(t) =0 \} \), and we assume that \(S(t_{1}) =0 \), which implies that, for all \(0 \leq t \leq t_{1} \), \(I(t) \geq 0 \). Let
$$ \zeta = \min_{ 0 \leq t \leq t_{1}} \biggl\{ \frac{(1-(1-\epsilon )p)b}{S(t)} - (\mu +d) - \beta \int _{0}^{h} g(\tau ) \frac{f(S(t),I(t-\tau ))}{S(t)}\,d \tau \biggr\} . $$
It follows that
$$ \frac{dS(t)}{dt} \geq \zeta S(t), $$
$$ S(t_{1}) \geq S(0) \exp (\zeta t_{1}) > 0 . $$
This contradicts \(S(t_{1}) = 0 \). Using a similar argument, we can prove that \(S(t) > 0 \) and \(I(t) > 0 \) for all \(t \geq 0 \). The positivity of R follows from the inequality
$$ \frac{dR(t)}{dt} \geq - \mu R(t), $$
which implies that
$$ R(t) \geq R(0) \exp (- \mu t) > 0 . $$
For the boundedness, let \(n(t)=S(t)+I(t)+R(t)\) denote the total population. Adding the three equations of (3) gives

$$ \frac{dn(t)}{dt}= b - \mu n(t) - c I(t) \leq \mu \biggl(\frac{b}{\mu }-n(t) \biggr). $$

It follows that \(\limsup_{t \rightarrow + \infty } n(t) \leq \frac{b}{ \mu } \), so the solution is bounded, which completes the proof. □
The local existence and boundedness of the solution of (3) imply the global existence of the solution.
As the variable R does not appear in the first two equations for system (3), we focus our analysis on the reduced system
$$ \textstyle\begin{cases} \frac{dS(t)}{dt}= (1-(1-\epsilon )p)b- (\mu +d) S(t)- \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau , \\ \frac{dI(t)}{dt}= \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau -(\mu +c+\gamma )I(t)-T(I). \end{cases} $$
Analysis of the model
Existence of equilibria points
System (5) has a disease-free equilibrium
$$ E_{0} = (S_{0}, 0),\quad \mbox{with } S_{0} = \frac{(1-(1-\epsilon )p)b}{\mu +d}. $$
On the other hand, using the next generation method [36], the basic reproduction number is obtained as follows.
The basic reproduction number is
$$ {\mathcal{R}}_{0}=\frac{\beta k (\frac{(1-(1-\epsilon )p)b}{( \mu +d)} )}{(\mu +\gamma +c)+T^{\prime }(0)} = \frac{ \beta k(S_{0})}{( \mu +\gamma +c)+T^{\prime }(0)}. $$
Note that \(\mathcal{R}_{0} \) depends on the vaccination parameters (through \(S_{0}\)) and on the treatment term \(T^{\prime }(0)\).
Let \(X=(I,S)^{T}\), then it follows from system (5) that
$$ \frac{dX}{dt}= \begin{pmatrix} \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau \\ 0 \end{pmatrix} - \begin{pmatrix} (\mu +c+\gamma )I(t)+T(I) \\ \beta \int _{0}^{h}g(\tau )f(S(t),I(t-\tau ))\,d\tau -(1-(1-\epsilon )p)b+(\mu +d) S(t) \end{pmatrix} = \mathcal{F}-\nu . $$
The Jacobian matrices of \(\mathcal{F}\) and ν at the disease-free equilibrium \(E_{0}\) are given by
$$ F= \begin{pmatrix} \beta f_{2}(E_{0}) & 0 \\ 0 & 0 \end{pmatrix} \quad \mbox{and} \quad V= \begin{pmatrix} (\mu +c+\gamma )+T^{\prime }(0) & 0 \\ \beta f_{2}(E_{0}) & \mu +d \end{pmatrix} , $$
where \(f_{2}(E_{0})\) is the derivative of f with respect to I at \(E_{0}\). The inverse of V is given by
$$ V^{-1}= \begin{pmatrix} \frac{1}{(\mu +c+\gamma )+T^{\prime }(0)} & 0 \\ \frac{-\beta f_{2}(E_{0})}{((\mu +c+\gamma )+T^{\prime }(0))(\mu +d)} & \frac{1}{\mu +d} \end{pmatrix} . $$
Thus, the next generation matrix for system (5) is
$$ FV^{-1}= \begin{pmatrix} \frac{\beta f_{2}(E_{0})}{(\mu +c+\gamma )+T^{\prime }(0)} & 0 \\ 0 & 0 \end{pmatrix} . $$
Since \(\mathcal{R}_{0}\) is the spectral radius of the matrix \(FV^{-1}\), it follows that the basic reproduction number is
$$ \mathcal{R}_{0}=\frac{\beta f_{2}(E_{0})}{(\mu +\gamma +c)+T^{\prime }(0)} = \frac{ \beta k(S_{0})}{(\mu +\gamma +c)+T^{\prime }(0)}. $$
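As a quick numerical illustration (our addition), the sketch below evaluates formula (6) under the assumed choices \(k(S_{0})=S_{0}\) (as for a saturated incidence of the form \(SI/(1+\xi I)\)) and \(T^{\prime }(0)=a\) (saturated treatment). With the parameter values quoted in the figure caption above, it reproduces the reported value \(\overline{\mathcal{R}_{0}}\approx 1.4179\).

```python
# Basic reproduction number (6) under the assumed forms k(S0) = S0 and T'(0) = a.
def R0(b, mu, d, p, eps_vac, beta, c, gamma, a):
    S0 = (1 - (1 - eps_vac) * p) * b / (mu + d)
    return beta * S0 / (mu + gamma + c + a)

print(R0(b=10, mu=0.65, d=0.0, p=0.0, eps_vac=0.0, beta=0.2, c=0.77, gamma=0.75, a=0.0))
# -> 1.4179..., matching the value quoted in the figure caption
```

Increasing the vaccination rate d, the vaccinated newborn fraction p, or the treatment level a all lower this value, which is the dependence on the vaccination and treatment parameters discussed numerically later in the paper.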
To prove the existence of an endemic equilibrium, we need the following lemma.
Assume that assumptions \((\mathbf{T}_{1})\) and \((\mathbf{T}_{2})\) are satisfied. Then the equation
$$ b-a u-T(u)=0, $$
for \(a>0\) and \(b>0\), has a unique positive solution.
Let \(\mathcal{K}\) be the function defined on \(\mathbb{R_{+}}\) by
$$ \mathcal{K}(u)=b-a u-T(u). $$
$$ \mathcal{K}(0)=b>0 \quad \mbox{and} \quad \mathcal{K}\biggl(\frac{b}{a} \biggr)=-T \biggl( \frac{b}{a}\biggr)< 0. $$
Since \(\mathcal{K}\) is continuous and strictly decreasing (indeed, \(\mathcal{K}^{\prime }(u)=-a-T^{\prime }(u)\leq -a<0\), because \((\mathbf{T}_{2})\) implies \(T^{\prime }\geq 0\)), the equation \(\mathcal{K}(u)=0\) has a unique positive solution in the interval \((0,\frac{b}{a})\). □
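A quick numerical check of the lemma (our addition, with arbitrarily assumed values and a saturated treatment function): bracketing the sign change on \((0, b/a)\) recovers the unique positive root.

```python
from scipy.optimize import brentq

a, b_const = 2.0, 5.0
T = lambda u: 0.5 * u / (1.0 + 0.1 * u)         # assumed treatment function
K = lambda u: b_const - a * u - T(u)
root = brentq(K, 1e-12, b_const / a)            # K(0+) = b > 0 and K(b/a) = -T(b/a) < 0
print(root, K(root))                            # root ~ 2.07, residual ~ 0
```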
Next result shows the existence of the endemic equilibrium.
Assume that assumptions \((\mathbf{H}_{1})\), \((\mathbf{H}_{2})\), \((\mathbf{T}_{1})\), and \((\mathbf{T}_{2})\) hold. If \(\mathcal{R}_{0} > 1 \), then system (5) admits a unique endemic equilibrium \(E^{*} = (S^{*}, I^{*}) \).
At the equilibrium point, we have
$$ \bigl(1-(1-\epsilon )p\bigr)b- (\mu +d) S^{*} -(\mu +c+\gamma )I^{*} -T\bigl(I^{*}\bigr)=0, $$
$$ S^{*}=\frac{(1-(1-\epsilon )p)b-(\mu +c+\gamma )I^{*}-T(I^{*})}{ \mu +d}. $$
Let \(\overline{\mathcal{K}} \) be the function defined for \(\mathbb{R}^{+}\setminus \{0\} \) to \(\mathbb{R}\) by
$$ \overline{\mathcal{K}}(I)=\beta \frac{f (S^{*},I )}{I}-(\mu +c+ \gamma )- \frac{T(I)}{I}. $$
By hypotheses \((\mathbf{H}_{2})\) and \((\mathbf{T}_{2})\), \(\overline{ \mathcal{K}}\) is strictly monotone decreasing on \(\mathbb{R}^{+} \setminus \{0\}\) satisfying
$$ \lim_{I \rightarrow 0^{+}} \overline{\mathcal{K}}(I)=\beta k \biggl( \frac{(1-(1-\epsilon )p)b}{\mu +d} \biggr)-(\mu +c+\gamma )-T^{\prime }(0)= \bigl(\mu +c+ \gamma +T^{\prime }(0)\bigr) (\mathcal{R}_{0}-1)>0. $$
Moreover, by Lemma 3.2, there exists a unique solution \(I^{0}\) of the following equation:
$$ \frac{(1-(1-\epsilon )p)b}{\mu +d}-\frac{1}{{\mu +d}}\bigl((\mu +c+ \gamma )I+T(I) \bigr)=0, $$
$$ \overline{\mathcal{K}}\bigl(I^{0}\bigr)=-\biggl((\mu +c+\gamma )+ \frac{T(I^{0})}{I ^{0}}\biggr)< 0. $$
Hence, there exists a unique positive real \(I^{*}\) such that
$$ 0 < I^{*} < I^{0}\quad \mbox{and} \quad \overline{\mathcal{K}} \bigl(I^{*}\bigr)=0, $$
which allows us to conclude that \(E^{*} = (S^{*}, I ^{*})\) is the unique endemic equilibrium of system (5). □
Local stability analysis
In this section, we discuss the local stability of the disease-free equilibrium of system (5). We have the following result.
If \(\mathcal{R}_{0} < 1 \), then the disease-free equilibrium \(E_{0} = (S_{0}, 0)\) is locally asymptotically stable.
We consider the following linearization equation of system (5) at \(E_{0}\):
$$ \textstyle\begin{cases} \frac{dS(t)}{dt}=- (\mu +d) S(t)- \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I(t-\tau )\,d\tau , \\ \frac{dI(t)}{dt}= \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I(t-\tau )\,d\tau -(\mu +c+\gamma )I(t)-T ^{\prime }(0)I(t). \end{cases} $$
Substituting \((S(t),I(t))=\exp (\lambda t) (S_{0},I_{0})\) into (7), we have
$$ \textstyle\begin{cases} \lambda S_{0} \exp (\lambda t) =- (\mu +d) S_{0} \exp (\lambda t)- \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I_{0} \exp \lambda ( t-\tau )\,d\tau , \\ \lambda I_{0} \exp (\lambda t)= \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I_{0} \exp \lambda ( t-\tau )\,d\tau -( \mu +c+\gamma +T^{\prime }(0))I_{0} \exp (\lambda t), \end{cases} $$
$$ \textstyle\begin{cases} - (\mu +d +\lambda ) S_{0} - \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I_{0} \exp ( -\lambda \tau ) \,d\tau =0, \\ \beta \int _{0}^{h}g(\tau )f_{2}(E_{0})I_{0} \exp ( -\lambda \tau ) \,d\tau -( \mu +c+\gamma +T^{\prime }(0)+ \lambda )I_{0}=0. \end{cases} $$
We can write (8) in the following abstract form:
$$ BX=0, $$
where \(X=(S_{0}, I_{0})^{T}\) and
$$ B= \begin{pmatrix} -(\mu +d+\lambda ) & - \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau ) \exp ( -\lambda \tau ) \,d\tau \\ 0 & \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau )\exp ( -\lambda \tau ) \,d\tau - (\mu +c+\gamma +T ^{\prime }(0) + \lambda ) \end{pmatrix} . $$
Then the characteristic equation of system (8) at \(E_{0} \) is of the form
$$ (\mu +d +\lambda ) \biggl(- \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau )\exp ( - \lambda \tau ) \,d \tau + \bigl(\lambda +\mu +c+\gamma +T^{\prime }(0)\bigr)\biggr)=0. $$
It is clear that \(\lambda = -(\mu +d)\) is a root of (9). All other roots λ of (9) are determined by the following equation:
$$ - \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau )\exp ( -\lambda \tau ) \,d \tau + \bigl(\lambda +\mu +c+T^{\prime }(0)+\gamma \bigr)=0. $$
Then by separating real \((\Re )\) and imaginary \((\Im )\) parts, we derive
$$ \textstyle\begin{cases} - \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau ) \exp ( -\Re (\lambda ) \tau )\cos (\Im (\lambda ) \tau ) \,d\tau + (\Re (\lambda ) +\mu +c+\gamma +T^{\prime }(0))=0, \\ \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau )\exp ( -\Re (\lambda ) \tau )\sin (\Im (\lambda ) \tau ) \,d\tau + \Im (\lambda ) =0. \end{cases} $$
Using the first equation of the above system, we obtain
$$ \Re (\lambda ) = \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau ) \exp \bigl( - \Re (\lambda ) \tau \bigr)\cos \bigl(\Im (\lambda ) \tau \bigr) \,d\tau - \bigl(\mu +c+ \gamma +T^{\prime }(0)\bigr). $$
We suppose, by contradiction, that there exists \(\lambda \in \mathbb{C}\) such that \(\Re (\lambda )\geq 0\), and it satisfies equality (10). Then
$$ \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau ) \exp \bigl( -\Re (\lambda ) \tau \bigr)\cos \bigl(\Im (\lambda ) \tau \bigr) \,d\tau \geq \mu +c+\gamma +T^{\prime }(0). $$
Since the function T is concave down, it follows that \(T^{\prime }(0) \geq 0\).
Moreover, we know that \(f_{2}(E_{0})>0\), which implies
$$ 0\leq \int _{0}^{h}g(\tau ) \exp \bigl( -\Re (\lambda ) \tau \bigr)\cos \bigl(\Im (\lambda ) \tau \bigr) \,d\tau \leq 1. $$
If \(\mathcal{R}_{0} <1 \), then \(\beta f_{2}(E_{0}) < \mu +c+\gamma +T ^{\prime }(0)\) and
$$ \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau ) \exp \bigl( -\Re (\lambda ) \tau \bigr) \cos \bigl(\Im (\lambda ) \tau \bigr) \,d\tau < \mu +c+\gamma +T^{\prime }(0), $$
which gives a contradiction with inequality (11). Then the real parts of all the eigenvalues of (9) are negative. Therefore, if \(\mathcal{R}_{0} < 1\), the disease-free equilibrium \(E_{0} \) of system (5) is locally asymptotically stable. Now, let
$$ P(\lambda )= - \beta f_{2}(E_{0}) \int _{0}^{h}g(\tau )\exp ( -\lambda \tau ) \,d \tau + \bigl( \lambda +\mu +c+\gamma +T^{\prime }(0)\bigr). $$
From the fact that \(P(0)= (\mu +c+\gamma +T^{\prime }(0))(1-\mathcal{R} _{0})<0 \) if \(\mathcal{R}_{0} > 1 \) and \(\lim_{\lambda \longrightarrow + \infty } P(\lambda )= + \infty \), we conclude that there is at least one positive root of (9). Hence, if \(\mathcal{R}_{0}> 1\), \(E_{0}\) is unstable. □
Global stability of the disease-free equilibrium
The next result gives the condition of the global asymptotic stability of the disease-free equilibrium \(E_{0}\) of system (5).
If hypotheses \((\mathbf{H}_{1})\), \((\mathbf{H}_{2})\), \((\mathbf{T}_{1})\), and \((\mathbf{T}_{2})\) hold and \(\mathcal{R}_{0} \leq 1 \), then the disease-free equilibrium \(E_{0} \) of system (5) is globally asymptotically stable.
To prove this result, we consider the following Lyapunov function:
$$ V(t)=V_{1}(t)+I(t)+V_{2}(t)+V_{3}(t), $$
$$\begin{aligned}& V_{1}(t)= \int _{\frac{(1-(1-\epsilon )p)b}{ \mu +d }}^{S(t)} \biggl(1-\frac{k( \frac{(1-(1-\epsilon )p)b}{ \mu +d})}{k( \sigma )} \biggr)\,d\sigma , \\& V_{2}(t)= \sigma \int _{0}^{h} g( \tau ) \int _{t-\tau }^{t} I(u)\,du\,d\tau , \end{aligned}$$
where \(\sigma =\mu +c+\gamma \), and
$$ V_{3}(t)= \int _{0}^{h} g( \tau ) \int _{t-\tau }^{t} T\bigl(I(u)\bigr)\,du\,d\tau . $$
$$ \begin{aligned} \frac{d}{dt}V(t)={}& \biggl(1- \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \biggr) \biggl(\bigl(1-(1- \epsilon )p\bigr)b- (\mu +d) S(t) \\ &{}- \beta \int _{0}^{h}g(\tau )f\bigl(S(t),I(t-\tau ) \bigr)\,d\tau \biggr) \\ &{} + \beta \int _{0}^{h} g(\tau )f\bigl(S(t),I(t-\tau ) \bigr)\,d\tau -(\mu +c+\gamma )I(t)-T(I) \\ &{}+ \sigma \int _{0}^{h} g( \tau ) \bigl( i(t) - i(t-\tau ) \bigr) \,d\tau + \int _{0}^{h} g( \tau ) \bigl( T\bigl(i(t)\bigr) - T\bigl(i(t-\tau )\bigr)\bigr) \,d\tau \\ = {}& -\mu \biggl(1- \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \biggr) \biggl(S(t)- \frac{(1-(1-\epsilon )p)b}{ \mu +d } \biggr) \\ &{} + \int _{0}^{h} g(\tau ) \biggl( \beta \frac{k( \frac{(1-(1-\epsilon )p)b}{ \mu +d })}{k(S(t))} \frac{f(S(t),I(t-\tau ))}{I(t -\tau )} -\sigma - \frac{T(I(t-\tau ))}{I(t-\tau )} \biggr)I(t- \tau )\,d\tau . \end{aligned} $$
From hypothesis \((\mathbf{T}_{2})\), it follows that
$$ T^{\prime }(0) \leq \frac{T(I(t-\tau ))}{I(t-\tau )}. $$
$$ \begin{aligned} \frac{d}{dt}V(t) \leq{} & -\mu \biggl(1- \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \biggr) \biggl(S(t)- \frac{(1-(1-\epsilon )p)b}{ \mu +d } \biggr) \\ &{}+ \int _{0}^{h} g(\tau ) \biggl( \beta \frac{\phi (S(t),I(t - \tau ))}{\sigma +T^{\prime }(0)} \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} -1 \biggr) \bigl(\sigma +T^{\prime }(0) \bigr)I(t - \tau )\,d\tau . \end{aligned} $$
Hypothesis \((\mathbf{H}_{1})\) implies that
$$ -\mu \biggl(1-\frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \biggr) \biggl(S(t)- \frac{(1-(1-\epsilon )p)b}{ \mu +d } \biggr) \leq 0, $$
and hypothesis \((\mathbf{H}_{2})\) gives that
$$ \beta \frac{\phi (S(t),I(t - \tau ))}{\sigma +T^{\prime }(0)} \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \leq \beta \frac{k(S(t))}{ \sigma +T^{\prime }(0)} \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} = \mathcal{R} _{0}. $$
Hence,
$$ \begin{aligned} \frac{d}{dt}V(t) \leq{} & -\mu \biggl(1- \frac{k ( \frac{(1-(1-\epsilon )p)b}{ \mu +d } )}{k(S(t))} \biggr) \biggl(S(t)- \frac{(1-(1-\epsilon )p)b}{ \mu +d } \biggr) \\ &{}+ ( \mathcal{R}_{0} -1 ) \bigl(\sigma +T^{\prime }(0)\bigr) \int _{0}^{h} g(\tau )I(t-\tau )\,d\tau . \end{aligned} $$
Then the condition \(\mathcal{R}_{0} \leq 1\) implies that
$$ \frac{d}{dt}V(t)\leq 0 \quad \mbox{for all } t\geq 0. $$
Moreover, we have
$$ \frac{d}{dt}V(t)=0 \quad \mbox{holds if}\quad (S, I) = (S_{0}, 0). $$
Hence, it follows from system (5) that the set \(\{E_{0}\} \) is the largest invariant set in \(\{(S, I): \frac{d}{dt}V(t)=0 \}\). By the Lyapunov–LaSalle principle, we conclude that the disease-free equilibrium \(E_{0} \) of system (5) is globally asymptotically stable. □
Global stability of the endemic equilibrium
In this section, we aim to show the global asymptotic stability of the endemic equilibrium \(E^{*} \) of system (5) via a Lyapunov stability approach.
If hypotheses \((\mathbf{H}_{1})\), \((\mathbf{H}_{2})\), \((\mathbf{T}_{1})\), and \((\mathbf{T}_{2})\) hold and \(\mathcal{R}_{0} > 1 \), then the unique endemic equilibrium \(E^{*}\) of system (5) is globally asymptotically stable.
Let G be the function defined from \(\mathbb{R}^{+} \) to \(\mathbb{R}\) by
$$ G(x)=x-1-\ln (x). $$
It is clear that \(G(x)\geq 0 \) if \(x > 0\) and \(G(x)=0 \) if \(x=1 \). Let us consider the following Lyapunov function:
$$ U(t)=U_{1}(t)+U_{2}(t), $$
$$ U_{1}(t) = S(t)-S^{*} - \int _{S^{*}} ^{S(t)} \frac{f(S^{*},I^{*})}{f( \sigma ,I^{*})}\,d\sigma + I(t)-I^{*} -I^{*} \ln \biggl(\frac{I(t)}{I^{*}}\biggr) $$
$$ U_{2}(t) = \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h} g(\tau ) \int _{t-\tau } ^{t} G\biggl( \frac{ I(u)}{I^{*}} \biggr)\,du\,d\tau . $$
$$ \begin{aligned} \frac{d}{dt}U_{1}(t) ={} & \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \biggl(\bigl(1-(1-\epsilon )p\bigr)b- (\mu +d) S(t) \\ &{}- \beta \int _{0}^{h} g(\tau )f\bigl(S(t),I(t- \tau ) \bigr)\,d\tau \biggr) \\ &{} + \biggl(1-\frac{I^{*}}{I(t)} \biggr) \biggl(\beta \int _{0}^{h}g(\tau )f\bigl(S(t),I(t-\tau ) \bigr)\,d\tau -(\mu +c+ \gamma )I(t)-T(I) \biggr). \end{aligned} $$
$$ \frac{d}{dt}U_{2}(t)= \beta f\bigl(S^{*},I^{*} \bigr) \int _{0}^{h}g(\tau ) \biggl(G\biggl( \frac{I(t)}{I^{*}}\biggr)-G\biggl( \frac{I(t-\tau )}{I^{*}}\biggr) \biggr) \,d\tau $$
$$ G\biggl( \frac{I(t)}{I^{*}}\biggr)-G\biggl( \frac{I(t-\tau )}{I^{*}}\biggr)= \frac{I}{I ^{*}}- \frac{I(t- \tau )}{I^{*}} + \ln \biggl( \frac{I(t- \tau )}{I^{*}} \biggr). $$
$$ \textstyle\begin{cases} (1-(1-\epsilon )p)b= (\mu +d) S^{*} + \beta f(S^{*},I^{*}), \\ \beta f(S^{*},I^{*})= ( \mu +c+ \gamma )I^{*}+T(I^{*}), \end{cases} $$
$$ \begin{aligned} \frac{d}{dt}U(t)={} & \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \biggl((\mu +d) S^{*} + \beta f \bigl(S^{*},I^{*}\bigr)- (\mu +d) S(t) \\ &{}- \beta \int _{0}^{h} g(\tau )f\bigl(S(t),I(t- \tau ) \bigr)\,d\tau \biggr) \\ & {}+ \biggl(1-\frac{I^{*}}{I(t)}\biggr) \biggl(\beta \int _{0}^{h}g(\tau )f\bigl(S(t),I(t-\tau ) \bigr)\,d\tau \\ &{}-\beta \frac{ f(S^{*},I ^{*})}{I^{*}}I(t)-T(I)+\frac{I(t)T(I^{*})}{I^{*}} \biggr) \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl( \frac{I}{I^{*}}- \frac{I(t- \tau )}{I^{*}} + \ln \biggl( \frac{I(t- \tau )}{I^{*}} \biggr) \biggr) \,d\tau \\ ={} & (\mu +d) \biggl(1-\frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \bigl(S^{*} - S(t) \bigr) \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \biggl(1-\frac{f(S(t),I(t- \tau ))}{f(S^{*},I^{*})} \biggr) \,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(1- \frac{I^{*}}{I(t)}\biggr) \biggl(\frac{f(S(t),I(t- \tau ))}{f(S^{*},I^{*})} - \frac{I(t)}{I^{*}} \biggr)\,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl( \frac{I}{I^{*}}- \frac{I(t- \tau )}{I^{*}} + \ln \biggl( \frac{I(t- \tau )}{I^{*}} \biggr) \biggr) \,d\tau \\ &{}+ \biggl(1-\frac{I^{*}}{I(t)} \biggr) \biggl( \frac{I(t)T(I^{*})}{I^{*}}-T(I) \biggr). \end{aligned} $$
$$\begin{aligned} \frac{d}{dt}U(t) ={}& (\mu +d) \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I ^{*})} \biggr) \bigl(S^{*} - S(t) \bigr) \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(2- \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} + \frac{f(S(t),I(t- \tau ))}{f(S(t),I^{*})} \\ &{}- \frac{I^{*}}{I(t)} \frac{f(S(t),I(t- \tau )}{f(S^{*},I^{*})} \biggr) \,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl( - \frac{I(t- \tau )}{I^{*}} + \ln \biggl( \frac{I(t- \tau )}{I^{*}}\biggr) \biggr)\,d\tau \\ &{}+ \biggl(1-\frac{I^{*}}{I(t)} \biggr) \biggl(\frac{I(t)T(I^{*})}{I^{*}}-T(I) \biggr). \end{aligned}$$
$$\begin{aligned} \ln \biggl( \frac{I(t- \tau )}{I^{*}}\biggr) =& \ln \frac{f(S^{*},I^{*})}{f(S(t),I ^{*})} + \ln \biggl(\frac{I^{*}}{I(t)} \frac{f(S(t),I(t- \tau ))}{f(S ^{*}, I^{*})} \biggr) \\ &{}+\ln \biggl( \frac{I(t- \tau )}{I^{*}} \frac{f(S(t),I ^{*})}{f(S(t),I(t- \tau ))} \biggr), \end{aligned}$$
$$ \begin{aligned} \frac{d}{dt}U(t)={} & (\mu +d) \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I ^{*})} \biggr) \bigl(S^{*} - S(t) \bigr) \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(1- \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} + \ln \frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(1- \frac{I^{*}}{I(t)} \frac{f(S(t),I(t- \tau ))}{f(S^{*},I^{*})} \\ &{}+ \ln \biggl( \frac{I^{*}}{I(t)} \frac{f(S(t),I(t- \tau ))}{f(S^{*},I^{*})} \biggr) \biggr)\,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl(1- \frac{i(t- \tau )}{I^{*}} \frac{f(S(t),I ^{*})}{f(S(t),I(t- \tau ))} \\ &{}+ \ln \biggl( \frac{I(t- \tau )}{I^{*}} \frac{f(S(t),I ^{*})}{f(S(t),I(t- \tau ))} \biggr) \biggr)\,d\tau \\ &{} + \beta f\bigl(S^{*},I^{*}\bigr) \int _{0}^{h}g(\tau ) \biggl( \frac{I(t- \tau )}{I^{*}} \frac{f(S(t),I ^{*})}{f(S(t),I(t- \tau ))} - 1 \\ &{}- \frac{I(t- \tau )}{I^{*}} + \frac{f(S(t),I(t- \tau ))}{f(S(t),I^{*})} \biggr) \,d\tau \\ &{} + \bigl(I(t)-I^{*} \bigr) \biggl(\frac{T(I^{*})}{I^{*}}- \frac{T(I)}{I(t)} \biggr). \end{aligned} $$
By hypotheses \((\mathbf{H}_{1})\) and \((\mathbf{H}_{2})\) we have
$$ \begin{aligned} &\frac{I(t- \tau )}{I^{*}} \frac{f(S(t),I^{*})}{f(S(t),I(t- \tau ))} - 1- \frac{I(t- \tau )}{I^{*}} + \frac{f(S(t),I(t- \tau ))}{f(S(t),I ^{*})} \\ &\quad = \biggl( \frac{I(t- \tau )}{I^{*}} - \frac{f(S(t),I(t- \tau ))}{f(S(t),I ^{*})} \biggr) \biggl( \frac{f(S(t),I^{*})}{f(S(t),I(t- \tau ))}-1 \biggr) \\ &\quad = \frac{I(t- \tau )}{I^{*} \phi (S(t),I^{*}) f(S(t),I(t-\tau ))} \bigl(\phi \bigl(S(t),I^{*}\bigr)-\phi \bigl(S(t),I(t-\tau )\bigr) \bigr) \\ &\qquad {}\times\bigl(f\bigl(S(t),I ^{*}\bigr)-f \bigl(S(t),I(t-\tau )\bigr) \bigr), \end{aligned} $$
$$ \frac{I(t- \tau )}{I^{*}} \frac{f(S(t),I^{*})}{f(S(t),I(t- \tau ))} - 1- \frac{I(t- \tau )}{I^{*}} + \frac{f(S(t),I(t- \tau ))}{f(S(t),I ^{*})}\leq 0. $$
Moreover, hypothesis \((\mathbf{H}_{1})\) implies that
$$ (\mu +d) \biggl(1-\frac{f(S^{*},I^{*})}{f(S(t),I^{*})} \biggr) \bigl(S^{*} - S(t) \bigr) \leq 0, $$
and hypothesis \((\mathbf{T}_{2})\) gives
$$ \bigl(I(t)-I^{*} \bigr) \biggl(\frac{T(I^{*})}{I^{*}}- \frac{T(I)}{I(t)} \biggr)\leq 0. $$
Hence, \(\frac{d}{dt}U(t) \leq 0\). We conclude that the endemic equilibrium of system (5) is globally asymptotically stable. □
Numerical results
In this section, we present the numerical simulation of the model by considering the following delayed SIR epidemic model with vaccination, treatment, and distributed time delay:
$$ \textstyle\begin{cases} \frac{dS(t)}{dt}=(1-(1-\epsilon )p)b- (\mu +d) S(t)- \beta \int _{0}^{h} \frac{ e^{- \tau }}{1- e ^{-h}}S(t)I(t-\tau ) \,d\tau , \\ \frac{dI(t)}{dt}= \beta \int _{0}^{h} \frac{ e^{- \tau }}{1- e^{-h}}S(t)I(t- \tau ) \,d\tau -( \mu +c+ \gamma )I(t)-\frac{a I(t)}{1+ \xi I(t)}, \\ \frac{dR(t)}{dt}=(1-\epsilon )pb+\gamma I(t)+ \frac{a I(t)}{1+\xi I(t)} +d S(t)- \mu R(t). \end{cases} $$
The function g is chosen, as in [37], in the following form:
$$ g( \tau ) = \frac{ e^{- \tau }}{1- e ^{-h}}. $$
On the other hand, the treatment function T, similarly to [23], is defined by
$$ T(I)= \frac{a I}{1+\xi I}. $$
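For illustration, with \(a=0.5\) and \(\xi =10\) (the values reported in one of the figure captions below), the treatment term saturates at \(a/\xi =0.05\) as the number of infected individuals grows, reflecting limited medical resources. A quick way to visualize this in R (a sketch only; the numerical values are taken from the simulation settings reported later):

# Saturated treatment T(I) = a*I/(1 + xi*I) with a = 0.5 and xi = 10
curve(0.5 * x / (1 + 10 * x), from = 0, to = 10,
      xlab = "I", ylab = "T(I)")
abline(h = 0.5 / 10, lty = 2)   # horizontal asymptote a/xi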
The reproduction number \(\mathcal{R}_{0}\) is given by
$$ {\mathcal{R}}_{0} = \frac{ \beta (1-(1-\epsilon )p)b}{ (\mu +d)( \mu + c + \gamma +a)}. $$
For our system (12) without vaccination and treatment, the reproduction number is given by
$$ \overline{\mathcal{R}_{0}} = \frac{ \beta b}{ \mu (\mu +c +\gamma )}. $$
Hence \(\mathcal{R}_{0} \) can be rewritten as
$$ \mathcal{R}_{0} = \frac{\mu (\mu +c +\gamma )(1-(1-\epsilon )p)}{( \mu +d)(\mu +c +\gamma +a)}\overline{ \mathcal{R}_{0}}. $$
If \(\overline{\mathcal{R}_{0}} \leq 1 \), then the disease will die out (the disease-free equilibrium \(E_{0}\) is globally asymptotically stable) without any control measures.
However, if \(\overline{\mathcal{R}_{0}} > 1\), then
$$ \mathcal{R}_{0}\leq 1 \quad \mbox{is equivalent to}\quad S_{0} \leq \bar{S}= \frac{\mu +c +\gamma +a}{\beta }, $$
where \(S_{0}\) is given in (6). Similarly,
$$ \mathcal{R}_{0}\geq 1\quad \mbox{is equivalent to}\quad S_{0} \geq \bar{S} . $$
This shows that, during an epidemic (\(\overline{\mathcal{R}_{0}} > 1 \)), if the size of the susceptible population is below the threshold S̄, then the disease can be controlled by vaccination and treatment. However, if the size of the susceptible population is above the threshold S̄, then the disease will persist in the population.
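As a quick numerical check of these threshold quantities (a sketch in R; the parameter values are those reported in the captions of Fig. 3 and Fig. 4 below, and the helper name R0.fun is ours):

R0.fun <- function(b, mu, beta, c, gamma, d, p, eps, a)
  beta * (1 - (1 - eps) * p) * b / ((mu + d) * (mu + c + gamma + a))

# Fig. 3 setting (no treatment, a = 0): R0 comes out at about 0.597 < 1
R0.fun(b = 10, mu = 0.65, beta = 0.2, c = 0.77, gamma = 0.75,
       d = 0.4, p = 0.4, eps = 0.2, a = 0)
# Fig. 4 setting (with treatment, a = 0.5): about 0.599 < 1
R0.fun(b = 10, mu = 0.65, beta = 0.2, c = 0.77, gamma = 0.75,
       d = 0.3, p = 0.3, eps = 0.2, a = 0.5)

# Without vaccination and treatment, R0.bar exceeds one for these parameters
0.2 * 10 / (0.65 * (0.65 + 0.77 + 0.75))

# Threshold S.bar and S0 for the Fig. 3 setting: S0 < S.bar, consistent with R0 < 1
(0.65 + 0.77 + 0.75 + 0) / 0.2                 # S.bar
(1 - (1 - 0.2) * 0.4) * 10 / (0.65 + 0.4)      # S0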
To make sense of our simulation, we will focus on the case of \(\overline{\mathcal{R}_{0}} > 1\), and we choose the parameters p and a to guarantee the clearance of the disease from the population by these two public health control measures.
We consider the following initial conditions:
$$\begin{aligned}& \Phi _{1}(\theta )=\sin (0.5 \theta ) + 100,\qquad \Phi _{2}( \theta ) = \sin (10 \theta ) + 20, \\& \Phi _{3}(\theta ) = 0\quad \mbox{for } -h\leq \theta \leq 0, \\& \Phi _{1}(\theta )=\cos (5 \theta ) + 200,\qquad \Phi _{2}( \theta ) = 10 \cos ( \theta ) + 30, \\& \Phi _{3}(\theta ) = 0\quad \mbox{for } -h\leq \theta \leq 0, \\& \Phi _{1}(\theta )=\cos (5 \theta ) + 260,\qquad \Phi _{2}( \theta ) = 30+20 \sin (10 \theta ) , \\& \Phi _{3}(\theta ) = 80\quad \mbox{for } -h\leq \theta \leq 0, \\& \Phi _{1}(\theta )=\cos (5 \theta ) + 280,\qquad \Phi _{2}( \theta ) = 30+40 \sin (10 \theta ) , \\& \Phi _{3}(\theta ) = 30\quad \mbox{for } -h\leq \theta \leq 0, \\& \Phi _{1}(\theta )=\cos (5 \theta ) + 300,\qquad \Phi _{2}( \theta ) = 30+70 \sin (10 \theta ) , \\& \Phi _{3}(\theta ) = 50\quad \mbox{for } -h\leq \theta \leq 0. \end{aligned}$$
All the numerical simulations are performed using the explicit Runge–Kutta-like method (dde45) [38].
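For readers who prefer R, a roughly comparable run can be obtained with the dede solver of the deSolve package, approximating the distributed-delay integral by a midpoint quadrature. The sketch below is an illustration only, not a reproduction of the published figures: the parameter values are those of the Fig. 3 caption and the first initial history above, while the choice of deSolve, the quadrature grid, and the object names are our own assumptions rather than the dde45 routine of [38].

library(deSolve)

# Parameters of the special case (12), Fig. 3 setting (a = 0: no treatment)
pars <- c(b = 10, mu = 0.65, beta = 0.2, cc = 0.77, gam = 0.75,
          d = 0.4, pv = 0.4, eps = 0.2, xi = 10, a = 0)

# Midpoint quadrature of the kernel g(tau) = exp(-tau)/(1 - exp(-h)) on [0, h], h = 1.5
n.q   <- 30
dtau  <- 1.5 / n.q
tau.g <- seq(dtau / 2, 1.5 - dtau / 2, by = dtau)
w.q   <- exp(-tau.g) / (1 - exp(-1.5)) * dtau

hist.I <- function(t) sin(10 * t) + 20          # history Phi_2 for I

rhs <- function(t, y, p) {
  with(as.list(c(y, p)), {
    # I(t - tau) on the quadrature grid; use the history for t - tau <= 0
    I.lag <- sapply(tau.g, function(tau)
      if (t - tau <= 0) hist.I(t - tau) else lagvalue(t - tau, 2))
    inc   <- beta * S * sum(w.q * I.lag)        # distributed-delay incidence
    treat <- a * I / (1 + xi * I)
    dS <- (1 - (1 - eps) * pv) * b - (mu + d) * S - inc
    dI <- inc - (mu + cc + gam) * I - treat
    dR <- (1 - eps) * pv * b + gam * I + treat + d * S - mu * R
    list(c(dS, dI, dR))
  })
}

yini <- c(S = 100, I = hist.I(0), R = 0)        # Phi_1(0), Phi_2(0), Phi_3(0)
out  <- dede(y = yini, times = seq(0, 60, by = 0.1), func = rhs,
             parms = pars, hmax = 0.02)         # step kept below the smallest lag
head(out)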
First, we start with the case of no vaccination and no treatment (\(p=0\) and \(a=0\)). In this situation our model is similar to that of Enatsu et al. [10], in which the authors claim that when the basic reproduction number, denoted by \(\overline{\mathcal{R}_{0}}\), is greater than one (\(\overline{ \mathcal{R}_{0}} > 1\)), the disease persists. However, our numerical simulation (Fig. 3 and Fig. 4) shows that the disease will die out even if \(\overline{\mathcal{R}_{0}}>1\).
The time series of model (3) in the special case (12), with Figures (d), (e), and (f) representing (respectively) \(S(t)\), \(I(t)\), and \(R(t)\). The parameters of the model are \(b = 10\), \(\mu = 0.65 \), \(\beta = 0.2 \), \(c = 0.77 \), \(\gamma = 0.75 \), \(h=1.5\), \(d=0.4\), \(p=0.4\), \(\epsilon =0.2\), \(\xi = 10\), and \(a=0\). In this case \(\mathcal{R}_{0}=0.5969< 1\)
The time series of model (3) in the special case (12), with Figures (g), (h), and (i) representing (respectively) \(S(t)\), \(I(t)\), and \(R(t)\). With the same parameters as in Fig. 3 except \(p=0.3\), \(\epsilon =0.2\) and \(d=0.3\) and \(a=0.5\). In this case, \(\mathcal{R}_{0}=0.5993< 1\)
Next, we consider the case with vaccination and no treatment, with \(\overline{\mathcal{R}_{0}} > 1\) and \(\mathcal{R}_{0} < 1\). As shown in Fig. 3, the disease dies out, which corresponds to our theoretical result.
Finally, we give the simulation of the case with vaccination and treatment, with \(\overline{\mathcal{R}_{0}} > 1\) and \(\mathcal{R}_{0} < 1\). As shown in Fig. 4, the treatment with vaccination helps the eradication of the infection from the population.
For further illustration, it is instructive to discuss the behavior of the basic reproduction number \(R_{0}\) with respect to the vaccination and treatment parameters, namely p, d, and a. From the expression of \(R_{0}\), formula (13), it is clear that \(R_{0}\) is a decreasing function of p, d, and a on \([0,1]\), \([0,1]\), and \([0, +\infty [\), respectively. Moreover, \(R_{0}\) is an affine (straight-line) function of p, and
$$ \lim_{a \longrightarrow + \infty }R_{0}(a )= 0. $$
In Fig. 5 we show the effect of the vaccination and treatment parameters on the behavior of \(R_{0}\). We notice that the critical values \(\bar{p}=0.875\), \(\bar{d}=1.02\), and \(\bar{a}=1.157\) are the threshold values separating the endemic state from the disease-free state for (j), (k), and (l), respectively (that is, the cases \(R_{0}>1\) and \(R_{0}<1\)).
The behavior of \(R_{0}\) in the special case (12) with the parameters: \(b = 10\), \(\mu = 0.04 \), \(\beta = 0.15 \), \(c = 0.5 \), \(\gamma = 0.003 \), \(d=0.5\), \(p=0.8\), \(\epsilon =0.2\), and \(a=0.3\) for (j) and the same parameters except \(b=20\) for (k) and (l)
In this work, we analyzed a delayed SIR model with a generalized incidence function and a distributed delay, since contact between infected and healthy individuals does not result in an immediate infection. The delay considered here reflects the time it takes for an infection to develop after contact. The model also includes the two main types of disease control measures: vaccination and treatment. The question that arises in using these two measures is how the vaccination strategy should depend on the treatment. In fact, as treatment is the first control measure to be taken, either as a prophylactic or as an antiviral, the implementation of vaccination should take into consideration the effect of the treatment on the disease infectiousness. Moreover, the treatment function was chosen to reflect the limited supply of drug stocks during an outbreak. Our analysis showed that when \(\mathcal{R}_{0} \leq 1\), the disease-free equilibrium is globally asymptotically stable, and when \(\mathcal{R}_{0} > 1\), there is a unique endemic equilibrium, which is globally asymptotically stable. To put this result in context, we chose the treatment function \(T(I)= \frac{a I(t)}{1+ \xi I(t)}\) (see [23]).
Our analysis also showed that when the disease is endemic in the absence of vaccination and treatment (\(\overline{\mathcal{R}_{0}}>1\)), there are two possible scenarios: (a) if the size of the susceptible population is below the threshold S̄, then the disease can be controlled by vaccination and treatment; (b) if the susceptible population is above the threshold S̄, then the disease will persist in the population. This finding reflects the limited capability of these control measures to eradicate the disease if the population is too large.
Kuniya, T.: Stability analysis of an age-structured SIR epidemic model with a reduction method to ODEs. Mathematics 6(9), 147 (2018)
Capasso, V., Serio, G.: A generalization of the Kermack–McKendrick deterministic epidemic model. Math. Biosci. 42(1), 43–61 (1978)
Kaddar, A.: Stability analysis in a delayed SIR epidemic model with a saturated incidence rate. Nonlinear Anal., Model. Control 15(3), 299–306 (2010)
Korobeinikov, A., Maini, P.K.: A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence. Math. Biosci. Eng. 1(1), 57–60 (2004)
Nakata, Y., Enatsu, Y., Muroya, Y.: On the global stability of an SIRS epidemic model with distributed delays. Discrete Contin. Dyn. Syst., Ser. A 2011 1119–1128 (2011)
Xu, R., Ma, Z.: Global stability of a delayed SEIRS epidemic model with saturation incidence rate. Nonlinear Dyn. 61, 229–239 (2010)
Zhang, J.-Z., Jin, Z., Liu, Q.-X., Zhang, Z.-Y.: Analysis of a delayed SIR model with nonlinear incidence rate. Discrete Dyn. Nat. Soc. 2008, Article ID 636153 (2008)
Enatsu, Y.: Lyapunov functional techniques on the global stability of equilibria of SIS epidemic models with delays. Kyoto Univ. Res. Inf. Repos. 1792, 118–130 (2012)
Li, C.-H., Tsai, C.-C., Yang, S.-Y.: Analysis of the permanence of an SIR epidemic model with logistic process and distributed time delay. Commun. Nonlinear Sci. Numer. Simul. 17(9), 3696–3707 (2012)
Enatsu, Y., Nakata, Y., Muroya, Y.: Global stability of SIR epidemic models with a wide class of nonlinear incidence rates and distributed delays. Discrete Contin. Dyn. Syst., Ser. B 15(1), 61–74 (2011)
Elazzouzi, A., Lamrani Alaoui, A., Tilioua, M., Torres, D.F.M.: Analysis of a SIRI epidemic model with distributed delay and relapse. Stat. Optim. Inf. Comput. 7, 545–557 (2019)
Feng, Z., Towers, S., Yang, Y.: Modeling the effects of vaccination and treatment on pandemic influenza. AAPS J. 13, 427–437 (2011)
Tulu, T., Tian, B., Wu, Z.: Modeling the effect of quarantine and vaccination on Ebola disease. Adv. Differ. Equ. 2017, 178 (2017)
Khan, M., Badshah, Q., Islam, S., Khan, I., Shafie, S., Khan, S.A.: Global dynamics of SEIRS epidemic model with non-linear generalized incidences and preventive vaccination. Adv. Differ. Equ. 2015, 88 (2015)
Lu, Z., Chi, X., Chen, L.: The effect of constant and pulse vaccination on SIR epidemic model with horizontal and vertical transmission. Math. Comput. Model. 36, 1039–1057 (2002)
Makinde, O.D.: Adomian decomposition approach to a SIR epidemic model with constant vaccination strategy. Appl. Math. Comput. 184(2), 842–848 (2007)
Shulgin, B., Stone, L., Agur, Z.: Pulse vaccination strategy in the SIR epidemic model. Bull. Math. Biol. 60(6), 1123–1148 (1998)
Ma, Y., Liu, J.-B., Li, H.: Global dynamics of an SIQR model with vaccination and elimination hybrid strategies. Mathematics 6(12), 1–12 (2018)
Wang, W., Ruan, S.: Bifurcations in an epidemic model with constant removal rate of the infectives. J. Math. Anal. Appl. 291(2), 775–793 (2004)
Chauhan, S., Bhatia, S.K., Gupta, S.: Effect of pollution on dynamics of SIR model with treatment. Int. J. Biomath. 8(6), 1550083 (2015)
Li, J., Teng, Z., Wang, G., Zhang, L., Hu, C.: Stability and bifurcation analysis of an SIR epidemic model with logistic growth and saturated treatment. Chaos Solitons Fractals 99, 63–71 (2017)
Dubey, B., Dubey, P., Dubey, B.: Dynamics of an SIR model with nonlinear incidence and treatment rate. Appl. Appl. Math. 10, 718–737 (2016)
Kumar, A., Nilam: Stability of a time delayed SIR epidemic model along with nonlinear incidence rate and Holling type-II treatment rate. Int. J. Comput. Methods 15(6), 1850055 (2018)
Eckalbar, J.C., Eckalbar, W.L.: Dynamics of an epidemic model with quadratic treatment. Nonlinear Anal., Real World Appl. 12(1), 320–332 (2011)
Majeed, S.N.: Dynamical study of an SIR epidemic model with nonlinear incidence rate and regress of treatment. Ibn AL. Haitham J. Pure Appl. Sci., 384–396 (2018)
Tridane, A., Hajji, M.A., Mojica-Nava, E.: Optimal drug treatment in a simple pandemic switched system using polynomial approach. In: International Conference on Mathematics and Statistics, pp. 227–240. Springer, Berlin (2015)
Anderson, R.M., May, R.M.: Infectious Diseases of Humans: Dynamics and Control. Oxford University Press, London (1992)
White, M.T., Griffin, J.T., Drakeley, C.J., Ghani, A.C.: Heterogeneity in malaria exposure and vaccine response: implications for the interpretation of vaccine efficacy trials. Malar. J. 9(1), 82 (2010)
Ball, F.G., Lyne, O.D.: Optimal vaccination policies for stochastic epidemics among a population of households. Math. Biosci. 177, 333–354 (2002)
Keeling, M.J., Shattock, A.: Optimal but unequitable prophylactic distribution of vaccine. Epidemics 4(2), 78–85 (2012)
Lugnér, A.K., van Boven, M., de Vries, R., Postma, M.J., Wallinga, J.: Cost effectiveness of vaccination against pandemic influenza in European countries: mathematical modelling analysis. Br. Med. J. 345, e4445 (2012)
Longini, I.M. Jr, Halloran, M.E., Haber, M.: Estimation of vaccine efficacy from epidemics of acute infectious agents under vaccine-related heterogeneity. Math. Biosci. 117, 271–281 (1993)
Shim, E., Galvani, A.P.: Distinguishing vaccine efficacy and effectiveness. Vaccine 30(47), 6700–6705 (2012)
Hale, J.K.: Theory of Functional Differential Equations. Applied Mathematical Sciences, vol. 3. Springer, New York (1977)
Hale, J.K.: Ordinary Differential Equations. Krieger, Malabar (1980)
Van den Driessche, P., Watmough, J.: Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math. Biosci. 180, 29–48 (2002)
Ma, W., Takeuchi, Y., Hara, T., Beretta, E.: Permanence of an SIR epidemic model with distributed time delays. Tohoku Math. J. 54(4), 581–591 (2002)
Kim, A.V., Ivanov, A.V.: Systems with Delays: Analysis, Control, and Computations. Wiley, New York (2015)
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions which helped us to improve the quality of our work.
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
No funding.
LSI Laboratory, FP Taza, Department of MPI, Sidi Mohamed Ben Abdellah University, Taza, Morocco
A. Elazzouzi
MAMCS Group, M2I Laboratory, FST Errachidia, Moulay Ismaïl University of Meknès, Errachidia, Morocco
A. Lamrani Alaoui
& M. Tilioua
Department of Mathematical Sciences, United Arab Emirates University, Al Ain, United Arab Emirates
A. Tridane
All authors contributed equally to this work. All authors read and approved the final manuscript.
Correspondence to A. Tridane.
Elazzouzi, A., Lamrani Alaoui, A., Tilioua, M. et al. Global stability analysis for a generalized delayed SIR model with vaccination and treatment. Adv Differ Equ 2019, 532 (2019). https://doi.org/10.1186/s13662-019-2447-z
MSC: 34D03
Keywords: SIR epidemic model; Distributed delay; Generalized nonlinear incidence; Lyapunov function
Doing Meta-Analysis in R: A Hands-on Guide
5 Between-Study Heterogeneity
By now, we have already learned how to pool effect sizes in a meta-analysis. As we have seen, the aim of both the fixed- and random-effects model is to synthesize the effects of many different studies into one single number. This, however, only makes sense if we are not comparing apples and oranges. For example, it could be that while the overall effect we calculate in the meta-analysis is small, there are still a few outliers with very high effect sizes. Such information is lost in the aggregate effect, and we do not know if all studies yielded small effect sizes, or if there were exceptions.
The extent to which true effect sizes vary within a meta-analysis is called between-study heterogeneity. We already mentioned this concept briefly in the last chapter in connection with the random-effects model. The random-effects model assumes that between-study heterogeneity causes the true effect sizes of studies to differ. It therefore includes an estimate of \(\tau^2\), which quantifies this variance in true effects. This allows to calculate the pooled effect, defined as the mean of the true effect size distribution.
The random-effects model always allows us to calculate a pooled effect size, even if the studies are very heterogeneous. Yet, it does not tell us if this pooled effect can be interpreted in a meaningful way. There are many scenarios in which the pooled effect alone is not a good representation of the data in our meta-analysis.
Imagine a case where the heterogeneity is very high, meaning that the true effect sizes (e.g. of some treatment) range from highly positive to negative. If the pooled effect of such a meta-analysis is positive, this does not tell us that there were some studies with a true negative effect. The fact that the treatment had an adverse effect in some studies is lost.
High heterogeneity can also be caused by the fact that there are two or more subgroups of studies in our data that have a different true effect. Such information can be very valuable for researchers, because it might allow us to find certain contexts in which effects are lower or higher. Yet, if we look at the pooled effect in isolation, this detail will likely be missed. In extreme cases, very high heterogeneity can mean that the studies have nothing in common, and that it makes no sense to interpret the pooled effect at all.
Therefore, meta-analysts must always take into account the variation in the analyzed studies. Every good meta-analysis should not only report an overall effect but also state how trustworthy this estimate is. An essential part of this is to quantify and analyze the between-study heterogeneity.
In this chapter, we will have a closer look at different ways to measure heterogeneity, and how they can be interpreted. We will also cover a few tools which allow us to detect studies that contribute to the heterogeneity in our data. Lastly, we discuss ways to address large amounts of heterogeneity in "real-world" meta-analyses.
5.1 Measures of Heterogeneity
Before we start discussing heterogeneity measures, we should first clarify that heterogeneity can mean different things. Rücker and colleagues (2008), for example, differentiate between baseline or design-related heterogeneity, and statistical heterogeneity.
Baseline or design-related heterogeneity arises when the population or research design of studies differs across studies. We have discussed this type of heterogeneity when we talked about the "Apples and Oranges" problem (Chapter 1.3), and ways to define the research questions (Chapter 1.4.1). Design-related heterogeneity can be reduced a priori by setting up a suitable PICO that determines which types of populations and designs are eligible for the meta-analysis.
Statistical heterogeneity, on the other hand, is a quantifiable property, influenced by the spread and precision of the effect size estimates included in a meta-analysis. Baseline heterogeneity can lead to statistical heterogeneity (for example if effects differ between included populations) but does not have to. It is also possible for a meta-analysis to display high statistical heterogeneity, even if the included studies themselves are virtually identical. In this guide (and most other meta-analysis texts) the term "between-study heterogeneity" only refers to statistical heterogeneity.
5.1.1 Cochran's \(Q\)
Based on the random-effects model, we know that there are two sources of variation causing observed effects to differ from study to study. There is the sampling error \(\epsilon_k\), and the error caused by between-study heterogeneity, \(\zeta_k\) (Chapter 4.1.2). When we want to quantify between-study heterogeneity, the difficulty is to identify how much of the variation can be attributed to the sampling error, and how much to true effect size differences.
Traditionally, meta-analysts have used Cochran's \(Q\) (Cochran 1954) to distinguish studies' sampling error from actual between-study heterogeneity. Cochran's \(Q\) is defined as a weighted sum of squares (WSS). It uses the deviation of each study's observed effect \(\hat\theta_k\) from the summary effect \(\hat\theta\), weighted by the inverse of the study's variance, \(w_k\):
\[\begin{equation} Q = \sum^K_{k=1}w_k(\hat\theta_k-\hat\theta)^2 \tag{5.1} \end{equation}\]
Let us take a closer look at the formula. First of all, we see that it uses the same type of inverse-variance weighting that is also applied to pool effect sizes. The mean \(\hat\theta\) in the formula is the pooled effect according to the fixed-effect model. The amount to which individual effects deviate from the summary effect, the residuals, is squared (so that the value is always positive), weighted and then summed. The resulting value is Cochran's \(Q\).
Because of the weighting by \(w_k\), the value of \(Q\) does not only depend on how much \(\hat\theta_k\)'s deviate from \(\hat\theta\), but also on the precision of studies. If the standard error of an effect size is very low (and thus the precision very high), even small deviations from the summary effect will be given a higher weight, leading to higher values of \(Q\).
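To see that formula (5.1) is nothing magical, we can compute \(Q\) by hand for the m.gen meta-analysis we created in Chapter 4.2.1 (and which we will revisit later in this chapter), using the study effects and standard errors stored in the object. This is only a small sketch with our own helper names; the result should match, up to rounding, the \(Q\) value that metagen itself reports for this data set.

# Fixed-effect (inverse-variance) weights and pooled effect
w <- 1 / m.gen$seTE^2
theta.fixed <- sum(w * m.gen$TE) / sum(w)
# Weighted sum of squares (Cochran's Q)
sum(w * (m.gen$TE - theta.fixed)^2)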
The value of \(Q\) can be used to check if there is excess variation in our data, meaning more variation than can be expected from sampling error alone. If this is the case, we can assume that the rest of the variation is due to between-study heterogeneity. We will illustrate this with a little simulation.
In our simulation, we want to inspect how \(Q\) behaves under two different scenarios: when there is no between-study heterogeneity, and when heterogeneity exists. Let us begin with the no-heterogeneity case. This implies that \(\zeta_k=0\), and that the residuals \(\hat\theta_k-\hat\theta\) are only product of the sampling error \(\epsilon_k\). We can use the rnorm function to simulate deviates from some mean effect size \(\hat\theta\) (assuming that they follow a normal distribution). Because they are centered around \(\hat\theta\), we can expect the mean of these "residuals" to be zero (\(\mu\) = 0). For this example, let us assume that the population standard deviation is \(\sigma=\) 1, which leads to a standard normal distribution.
Normal distributions are usually denoted with \(\mathcal{N}\), and we can symbolize that the residuals are draws from a normal distribution with \(\mu=\) 0 and \(\sigma=\) 1 like this:
\[\begin{equation} \hat\theta_k-\hat\theta \sim \mathcal{N}(0,1) \tag{5.2} \end{equation}\]
Let us try this out in R, and draw \(K\)=40 effect size residuals \(\hat\theta_k-\hat\theta\) using rnorm.
set.seed(123) # needed to reproduce results
rnorm(n = 40, mean = 0, sd = 1)
## [1] -0.56048 -0.23018 1.55871 0.07051 0.12929
## [6] 1.71506 0.46092 -1.26506 -0.68685 -0.44566
## [...]
Because the standard normal distribution is the default for rnorm, we could have also used the simpler code rnorm(40).
Now, let us simulate that we repeat this process of drawing \(n=\) 40 samples many, many times. We can achieve this using the replicate function, which we tell to repeat the rnorm call ten thousand times. We save the resulting values in an object called error_fixed.
set.seed(123)
error_fixed <- replicate(n = 10000, rnorm(40))
We continue with a second scenario, in which we assume that between-study heterogeneity (\(\zeta_k\) errors) exists in addition to the sampling error \(\epsilon_k\). We can simulate this by adding a second call to rnorm, representing the variance in true effect sizes. In this example, we also assume that the true effect sizes follow a standard normal distribution.
We can simulate the residuals of ten thousand meta-analyses with \(K\)=40 studies and substantial between-study heterogeneity using this code:
error_random <- replicate(n = 10000, rnorm(40) + rnorm(40))
Now that we simulated \(\hat\theta_k-\hat\theta\) residuals for meta-analyses with and without heterogeneity, let us do the same for values of \(Q\). For this simulation, we can simplify the formula of \(Q\) a little by assuming that the variance, and thus the weight \(w_k\) of every study, is one, resulting in \(w_k\) to drop out of the equation. This means that we only have to use our calls to rnorm from before, square and sum the result, and replicate this process ten thousand times.
Here is the code for that:
Q_fixed <- replicate(10000, sum(rnorm(40)^2))
Q_random <- replicate(10000, sum((rnorm(40) + rnorm(40))^2))
An important property of \(Q\) is that it is assumed to (approximately) follow a \(\chi^2\) distribution. A \(\chi^2\) distribution, like the weighted squared sum, can only take positive values. It is defined by its degrees of freedom, or d.f.; \(\chi^2\) distributions are right-skewed for small d.f., but get closer and closer to a normal distribution when the degrees of freedom become larger. At the same time, the degrees of freedom are also the expected value, or mean of the respective \(\chi^2\) distribution.
It is assumed that \(Q\) will approximately follow a \(\chi^2\) distribution with \(K-1\) degrees of freedom (with \(K\) being the number of studies in our meta-analysis)–if effect size differences are only caused by sampling error. This means that the mean of a \(\chi^2\) distribution with \(K-1\) degrees of freedom tells us the value of \(Q\) we can expect through sampling error alone.
This explanation was very abstract, so let us have a look at the distribution of our simulated values to make this more concrete. In the following code, we use the hist function to plot a histogram of the effect size "residuals" and \(Q\) values. We also add a line to each plot, showing the idealized distribution.
Such distributions can be generated by the dnorm function for normal distributions, and using dchisq for \(\chi^2\) distributions, with df specifying the degrees of freedom.
# Histogram of the residuals (theta_k - theta)
# - We produce a histogram for both the simulated values in
# error_fixed and error_random
# - `lines` is used to add a normal distribution in blue.
hist(error_fixed,
xlab = expression(hat(theta[k])~-~hat(theta)), prob = TRUE,
breaks = 100, ylim = c(0, .45), xlim = c(-4,4),
main = "No Heterogeneity")
lines(seq(-4, 4, 0.01), dnorm(seq(-4, 4, 0.01)),
col = "blue", lwd = 2)
hist(error_random,
     xlab = expression(hat(theta[k])~-~hat(theta)), prob = TRUE,
     breaks = 100, ylim = c(0, .45), xlim = c(-4,4),
     main = "Heterogeneity")
lines(seq(-4, 4, 0.01), dnorm(seq(-4, 4, 0.01)),
      col = "blue", lwd = 2)
# Histogram of simulated Q-values
# - We produce a histogram for both the simulated values in
#   Q_fixed and Q_random
# - `lines` is used to add a chi-squared distribution in blue.
# First, we calculate the degrees of freedom (k-1)
# remember: k=40 studies were used for each simulation
df <- 40-1
hist(Q_fixed, xlab = expression(italic("Q")), prob = TRUE,
     breaks = 100, ylim = c(0, .06), xlim = c(0,160),
     main = "No Heterogeneity")
lines(seq(0, 100, 0.01), dchisq(seq(0, 100, 0.01), df = df),
      col = "blue", lwd = 2)
hist(Q_random, xlab = expression(italic("Q")), prob = TRUE,
     breaks = 100, ylim = c(0, .06), xlim = c(0,160),
     main = "Heterogeneity")
lines(seq(0, 100, 0.01), dchisq(seq(0, 100, 0.01), df = df),
      col = "blue", lwd = 2)
These are the plots that R draws for us:
If you find the code we used to generate the plots difficult to understand, do not worry. We only used it for this simulation, and these are not plots one would produce as part of an actual meta-analysis.
Let us go through what we see in the four histograms. In the first row, we see the distribution of effect size "residuals," with and without heterogeneity. The no-heterogeneity data, as we can see, closely follows the line of the standard normal distribution we included in the plot. This is quite logical since the data was generated by rnorm assuming this exact distribution. The data in which we added extra heterogeneity does not follow the standard normal distribution. The dispersion of data is larger, resulting in a distribution with heavier tails.
Now, let us explore how this relates to the distribution of \(Q\) values in the second row. When there is no heterogeneity, the values of \(Q\) follow a characteristic, right-skewed \(\chi^2\) distribution. In the plot, the solid line shows the shape of a \(\chi^2\) distribution with 39 degrees of freedom (since d.f. = \(K-1\), and \(K\) = 40 was used in each simulation). We see that the simulated data follows this curve pretty well. This is no great surprise. We have learned that \(Q\) follows a \(\chi^2\) distribution with \(K-1\) degrees of freedom when there is no heterogeneity. Exactly this is the case in our simulated data: variation exists only due to the sampling error.
The distribution looks entirely different for our example with heterogeneity. The simulated data do not seem to follow the expected distribution at all. Values are shifted visibly to the right; the mean of the distribution is approximately twice as high. We can conclude that, when there is substantial between-study heterogeneity, the values of \(Q\) are considerably higher than the value of \(K-1\) we expect under the assumption of no heterogeneity. This comes as no surprise, since we added extra variation to our data to simulate the presence of between-study heterogeneity.
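We can back up this visual impression with a quick numerical check of the simulated values (this is not part of the original plotting code, and the exact numbers will vary slightly with the random draws):

mean(Q_fixed)    # roughly 40, close to what we expect under no heterogeneity
mean(Q_random)   # roughly twice as high, reflecting the added heterogeneity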
This was a somewhat lengthy explanation, yet it may have helped us to better understand how we can exploit the statistical properties of \(Q\). Cochran's \(Q\) can be used to test if the variation in a meta-analysis significantly exceeds the amount we would expect under the null hypothesis of no heterogeneity.
This test of heterogeneity is commonly used in meta-analyses, and if you go back to Chapter 4, you will see that {meta} also provides us with it by default. It is often referred to as Cochran's \(Q\) test, but this is actually a misnomer. Cochran himself never intended \(Q\) to be used in this way (Hoaglin 2016).
Cochran's \(Q\) is a very important statistic, mostly because other common ways to quantify heterogeneity, such as Higgins and Thompson's \(I^2\) statistic and \(H^2\), are based on it. We will get to these measures in the next sections. Cochran's \(Q\) is also used by some heterogeneity variance estimators to calculate \(\tau^2\), most famously by the DerSimonian-Laird estimator.
Problems With \(Q\) & the \(Q\)-Test
Although \(Q\) is commonly used and reported in meta-analyses, it has several flaws. Hoaglin (2016), for example, argues that the assumption of \(Q\) following a \(\chi^2\) distribution with \(K-1\) degrees of freedom does not reflect \(Q\)'s actual behavior in meta-analysis, and that related procedures such as the DerSimonian-Laird method may therefore be biased.
A more practical concern is that \(Q\) increases both when the number of studies \(K\) increases, and when the precision (i.e. the sample size of a study) increases. Therefore, \(Q\), and whether it is significant, depends heavily on the size of your meta-analysis, and thus its statistical power.

From this it follows that we should not rely only on the significance of a \(Q\)-test when assessing heterogeneity. Sometimes, meta-analysts decide whether to apply a fixed-effect or random-effects model based on the significance of the \(Q\)-test. For the reasons stated here, this approach is highly discouraged.
5.1.2 Higgins & Thompson's \(I^2\) Statistic
The \(I^2\) statistic (J. P. Higgins and Thompson 2002) is another way to quantify between-study heterogeneity, and directly based on Cochran's \(Q\). It is defined as the percentage of variability in the effect sizes that is not caused by sampling error. \(I^2\) draws on the assumption that \(Q\) follows a \(\chi^2\) distribution with \(K-1\) degrees of freedom under the null hypothesis of no heterogeneity. It quantifies, in percent, how much the observed value of \(Q\) exceeds the expected \(Q\) value when there is no heterogeneity (i.e. \(K-1\)).
The formula of \(I^2\) looks like this:
\[\begin{equation} I^2 = \frac{Q-(K-1)}{Q} \tag{5.3} \end{equation}\]
Where \(K\) is the total number of studies. The value of \(I^2\) can not be lower than 0%, so if \(Q\) happens to be smaller than \(K-1\), we simply use \(0\) instead of a negative value.
We can use our simulated values of \(Q\) from before to illustrate how \(I^2\) is calculated. First, let us randomly pick the tenth simulated value in Q_fixed, where we assumed no heterogeneity. Then, we use the formula above to calculate \(I^2\).
# Display the value of the 10th simulation of Q
Q_fixed[10]
## [1] 35.85787
# Define k
k <- 40
# Calculate I^2
(Q_fixed[10] - (k-1))/Q_fixed[10]
## [1] -0.08762746
Since the result is negative, we round up to zero, resulting in \(I^2\) = 0%. This value tells us that zero percent of the variation in effect sizes is due to between-study heterogeneity. This is in line with the settings used for our simulation.
Now, we do the same with the tenth simulated value in Q_random.
(Q_random[10] - (k-1))/Q_random[10]
## [1] 0.5692061
We see that the \(I^2\) value of this simulation is approximately 50%, meaning that about half of the variation is due to between-study heterogeneity. This is also in line with our expectations since the variation in this example is based, in equal parts, on the simulated sampling error and between-study heterogeneity.
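Instead of computing \(I^2\) for one simulation at a time, we can also wrap formula (5.3) into a small helper function and apply it to all ten thousand simulated meta-analyses at once. This is a quick sketch re-using the objects created above; the function name I2 is ours, and the exact summary values will vary with the random draws.

I2 <- function(Q, k) pmax(0, (Q - (k - 1)) / Q)   # truncate negative values at 0
summary(I2(Q_fixed, 40))    # mostly values at or close to 0%
summary(I2(Q_random, 40))   # centers around roughly 50%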
It is common to use the \(I^2\) statistic to report the between-study heterogeneity in meta-analyses, and \(I^2\) is included by default in the output we get from {meta}. The popularity of this statistic may be associated with the fact that there is a "rule of thumb" on how we can interpret it (J. P. Higgins and Thompson 2002):
\(I^2\) = 25%: low heterogeneity
\(I^2\) = 50%: moderate heterogeneity
\(I^2\) = 75%: substantial heterogeneity.
5.1.3 The \(H^2\) Statistic
The \(H^2\) statistic (J. P. Higgins and Thompson 2002) is also derived from Cochran's \(Q\), and similar to \(I^2\). It describes the ratio of the observed variation, measured by \(Q\), and the expected variance due to sampling error:
\[\begin{equation} H^2 = \frac{Q}{K-1} \tag{5.4} \end{equation}\]
The computation of \(H^2\) is a little more elegant than the one of \(I^2\) because we do not have to artificially correct its value when \(Q\) is smaller than \(K-1\). When there is no between-study heterogeneity, \(H^2\) equals one (or smaller). Values greater than one indicate the presence of between-study heterogeneity.
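We can again plug in the two simulated \(Q\) values from before to get a feeling for this statistic, using the objects Q_fixed, Q_random and k created earlier in this chapter:

Q_fixed[10] / (k - 1)    # slightly below 1: no excess variation
Q_random[10] / (k - 1)   # clearly above 1: excess variation due to heterogeneity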
Compared to \(I^2\), it is far less common to find this statistic reported in published meta-analyses. However, \(H^2\) is also included by default in the output of {meta}'s meta-analysis functions.
5.1.4 Heterogeneity Variance \(\tau^2\) & Standard Deviation \(\tau\)
We already discussed the heterogeneity variance \(\tau^2\) in detail in Chapter 4.1.2. As we mentioned there, \(\tau^2\) quantifies the variance of the true effect sizes underlying our data. When we take the square root of \(\tau^2\), we obtain \(\tau\), which is the standard deviation of the true effect sizes.
A great asset of \(\tau\) is that it is expressed on the same scale as the effect size metric. This means that we can interpret it in the same as one would interpret, for example, the mean and standard deviation of the sample's age in a primary study. The value of \(\tau\) tells us something about the range of the true effect sizes.
We can, for example, calculate the 95% confidence interval of the true effect sizes by multiplying \(\tau\) with 1.96, and then adding and subtracting this value from the pooled effect size. We can try this out using the m.gen meta-analysis we calculated in Chapter 4.2.1.
Let us have a look again what the pooled effect and \(\tau\) estimate in this meta-analysis were:
# Pooled effect
m.gen$TE.random
# Estimate of tau
m.gen$tau
We see that \(g=\) 0.58 and \(\tau=\) 0.29. Based on this data, we can calculate the lower and upper bound of the 95% true effect size confidence interval: 0.58 \(-\) 1.96 \(\times\) 0.29 = 0.01 and 0.58 \(+\) 1.96 \(\times\) 0.29 = 1.15.
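The same calculation can be done directly in R with the unrounded values stored in m.gen; the result (roughly 0.02 to 1.14) differs only marginally from the rounded hand calculation above:

m.gen$TE.random + c(-1, 1) * qnorm(0.975) * m.gen$tau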
"What's the Uncertainty of Our Uncertainty?": Calculation of Confidence Intervals Around \(\tau^2\)
Methods to quantify the uncertainty of our between-study heterogeneity variance estimate (i.e. the confidence intervals around \(\tau^2\)) remain a field of ongoing investigation. Several approaches are possible, and their adequateness depends on the type of \(\tau^2\) estimator (Chapter 4.1.2.1).
The {meta} package follows the recommendations of Veroniki (2016) and uses the \(Q\)-Profile method (Viechtbauer 2007b) for most estimators.

The \(Q\)-Profile method is based on an altered version of \(Q\), the generalized \(Q\)-statistic \(Q_{\text{gen}}\). While the standard version of \(Q\) uses the pooled effect based on the fixed-effect model, \(Q_{\text{gen}}\) is based on the random-effects model. It uses the overall effect according to the random-effects model, \(\hat\mu\), to calculate the deviates, as well as weights based on the random-effects model:
\[\begin{equation} Q_{\text{gen}} = \sum_{k=1}^{K} w^*_k (\hat\theta_k-\hat\mu)^2 \tag{5.5} \end{equation}\]
Where \(w^*_k\) is the random-effects weight (see Chapter 4.1.2.1):
\[\begin{equation} w^*_k = \frac{1}{s^2_k+\tau^2} \tag{5.6} \end{equation}\]
\(Q_{\text{gen}}\) has also been shown to follow a \(\chi^2\) distribution with \(K-1\) degrees of freedom. We can think of the generalized \(Q\) statistic as a function \(Q_{\text{gen}}(\tau^2)\) which returns different values of \(Q_{\text{gen}}\) for higher or lower values of \(\tau^2\). The results of this function have a \(\chi^2\) distribution.
Since the \(\chi^2\) distribution follows a clearly predictable pattern, it is easy to determine confidence intervals with, for example, 95% coverage. We only have to get the value of \(\chi^2\) for the 2.5 and 97.5 percentile, based on its \(K-1\) degrees of freedom. In R, this can be easily done using the quantile function qchisq, for example: qchisq(0.975, df=5).
The \(Q\)-Profile method exploits this relationship to calculate confidence intervals around \(\tau^2\) using an iterative process (so-called "profiling"). In this approach, \(Q_{\text{gen}}(\widetilde{\tau}^2)\) is calculated repeatedly while increasing the value of \(\tau^2\), until the expected value of the lower and upper bound of the confidence interval based on the \(\chi^2\) distribution is reached.
The \(Q\)-Profile method can be specified in {meta} functions through the argument method.tau.ci = "QP". This is the default setting, meaning that we do not have to add this argument manually. The only exception is when we use the DerSimonian-Laird estimator (method.tau = "DL"). In this case, a different method, the one by Jackson (2013), is used automatically (we can do this manually by specifying method.tau.ci = "J").
Usually, there is no necessity to deviate from {meta}'s default behavior, but it may be helpful for others to report which method has been used to calculate the confidence intervals around \(\tau^2\) in your meta-analysis.
5.2 Which Measure Should I Use?
When we assess and report heterogeneity in a meta-analysis, we need a measure which is robust, and not too heavily influenced by statistical power. Cochran's \(Q\) increases both when the number of studies increases, and when the precision (i.e. the sample size of a study) increases.
Therefore, \(Q\), and whether it is significant, depends heavily on the size of your meta-analysis, and thus its statistical power. We should thus not rely only on \(Q\), and particularly the \(Q\)-test, when assessing between-study heterogeneity.
\(I^2\), on the other hand, is not sensitive to changes in the number of studies in the analysis. It is relatively easy to interpret, and many researchers understand what it means. Generally, it is not a bad idea to include \(I^2\) as a heterogeneity measure in our meta-analysis report, especially if we also provide a confidence interval for this statistic so that others can assess how precise the estimate is.
However, despite its common use in the literature, \(I^2\) is not a perfect measure for heterogeneity either. It is not an absolute measure of heterogeneity, and its value still heavily depends on the precision of the included studies (Borenstein et al. 2017; Rücker et al. 2008). As said before, \(I^2\) is simply the percentage of variability not caused by sampling error \(\epsilon\). If our studies become increasingly large, the sampling error tends to zero, while at the same time, \(I^2\) tends to 100%–simply because the studies have a greater sample size.
Only relying on \(I^2\) is therefore not a good option either. Since \(H^2\) behaves similarly to \(I^2\), the same caveats also apply to this statistic.
The values of \(\tau^2\) and \(\tau\), on the other hand, are insensitive to the number of studies and their precision. They do not systematically increase as the number of studies and their size increases. Yet, it is often hard to interpret how relevant \(\tau^2\) is from a practical standpoint. Imagine, for example, that we found that the variance of true effect sizes in our study was \(\tau^2=\) 0.08. It is often difficult for ourselves, and others, to determine if this amount of variance is meaningful or not.
Prediction intervals (PIs) are a good way to overcome this limitation (IntHout et al. 2016). Prediction intervals give us a range into which we can expect the effects of future studies to fall based on present evidence.
Say that our prediction interval lies completely on the "positive" side favoring the intervention. This means that, despite varying effects, the intervention is expected to be beneficial in the future across the contexts we studied. If the prediction interval includes zero, we can be less sure about this, although it should be noted that broad prediction intervals are quite common.
To calculate prediction intervals around the overall effect \(\hat\mu\), we use both the estimated between-study heterogeneity variance \(\hat\tau^2\), as well as the standard error of the pooled effect, \(SE_{\hat\mu}\). We sum the squared standard error and \(\hat\tau^2\) value, and then take the square root of the result. This leaves us with the standard deviation of the prediction interval, \(SD_{\text{PI}}\). A \(t\) distribution with \(K-1\) degrees of freedom is assumed for the prediction range, which is why we multiply \(SD_{\text{PI}}\) with the 97.5 percentile value of \(t_{K-1}\), and then add and subtract the result from \(\hat\mu\). This gives us the 95% prediction interval of our pooled effect.
The formula for 95% prediction intervals looks like this:
\[\begin{align} \hat\mu &\pm t_{K-1, 0.975}\sqrt{SE_{\hat\mu}^2+\hat\tau^2} \notag \\ \hat\mu &\pm t_{K-1, 0.975}SD_{\text{PI}} \tag{5.7} \end{align}\]
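Translated into R, the calculation could look like this. The snippet uses the slots stored in our m.gen object (see Chapter 4.2.1) and only serves to illustrate formula (5.7), since {meta} calculates prediction intervals for us anyway. Note that the exact degrees of freedom used internally may differ slightly, so the result can deviate a little from the printed output.
mu   <- m.gen$TE.random    # pooled effect (random-effects model)
se   <- m.gen$seTE.random  # standard error of the pooled effect
tau2 <- m.gen$tau^2        # estimated between-study heterogeneity variance
k    <- m.gen$k            # number of studies
sd.pi <- sqrt(se^2 + tau2)     # standard deviation of the prediction interval
t.val <- qt(0.975, df = k-1)   # 97.5th percentile of the t distribution
c(lower = mu - t.val*sd.pi,
  upper = mu + t.val*sd.pi)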
All of {meta}'s functions can provide us with a prediction interval around the pooled effect, but they do not do so by default. When running a meta-analysis, we have to add the argument prediction = TRUE so that prediction intervals appear in the output.
In sum, it is advisable to not resort to one measure only when characterizing the heterogeneity of a meta-analysis. It is recommended to at least always report \(I^2\) (with confidence intervals), as well as prediction intervals, and interpret the results accordingly.
5.3 Assessing Heterogeneity in R
Let us see how we can use the things we learned about heterogeneity measures in practice. As an illustration, let us examine the heterogeneity of our m.gen meta-analysis object a little closer (we generated this object in Chapter 4.2.1).
Because the default output of metagen objects does not include prediction intervals, we have to update it first. We simply use the update.meta function, and tell it that we want prediction intervals to be printed out additionally.
m.gen <- update.meta(m.gen, prediction = TRUE)
Now we can reinspect the results:
summary(m.gen)
## Review: Third Wave Psychotherapies
## [...]
## Number of studies combined: k = 18
## SMD 95%-CI t p-value
## Random effects model 0.5771 [ 0.3782; 0.7760] 6.12 < 0.0001
## Prediction interval [-0.0619; 1.2162]
## Quantifying heterogeneity:
## tau^2 = 0.0820 [0.0295; 0.3533]; tau = 0.2863 [0.1717; 0.5944];
## I^2 = 62.6% [37.9%; 77.5%]; H = 1.64 [1.27; 2.11]
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model
In the output, we see results for all heterogeneity measures we defined before. Let us begin with the Quantifying heterogeneity section. Here, we see that \(\tau^2=\) 0.08. The confidence interval around \(\tau^2\) (0.03 - 0.35) does not contain zero, indicating that some between-study heterogeneity exists in our data. The value of \(\tau\) is 0.29, meaning that the true effect sizes have an estimated standard deviation of \(SD=\) 0.29, expressed on the scale of the effect size metric (here, Hedges' \(g\)).
A look at the second line reveals that \(I^2=\) 63% and that \(H\) (the square root of \(H^2\)) is 1.64. This means that more than half of the variation in our data is estimated to stem from true effect size differences. Using Higgins and Thompson's "rule of thumb," we can characterize this amount of heterogeneity as moderate to large.
Directly under the pooled effect, we see the prediction interval. It ranges from \(g=\) -0.06 to 1.21. This means that it is possible that some future studies will find a negative treatment effect based on present evidence. However, the interval is quite broad, meaning that very high effects are possible as well.
Lastly, we are also presented with \(Q\) and the Test of heterogeneity. We see that \(Q\)=45.5. This is a lot more than what we would expect based on the \(K-1=\) 17 degrees of freedom in this analysis. Consequentially, the heterogeneity test is significant (\(p<\) 0.001). However, as we mentioned before, we should not base our assessment on the \(Q\) test alone, given its known deficiencies.
Reporting the Amount of Heterogeneity In Your Meta-Analysis
Here is how we could report the amount of heterogeneity we found in our example:
"The between-study heterogeneity variance was estimated at \(\hat\tau^2\) = 0.08 (95%CI: 0.03-0.35), with an \(I^2\) value of 63% (95%CI: 38-78%). The prediction interval ranged from \(g\) = -0.06 to 1.21, indicating that negative intervention effects cannot be ruled out for future studies."
So, what do we make out of these results? Overall, our indicators tell us that moderate to substantial heterogeneity is present in our data. The effects in our meta-analysis are not completely heterogeneous, but there are clearly some differences in the true effect sizes between studies.
It may therefore be a good idea to explore what causes this heterogeneity. It is possible that there are one or two studies that do not really "fit in," because they have a much higher effect size. This could have inflated the heterogeneity in our analysis, and even worse: it may have led to an overestimation of the true effect.
On the other hand, it is also possible that our pooled effect is influenced heavily by one study with a very large sample size reporting an unexpectedly small effect size. This could mean that the pooled effect underestimates the true benefits of the treatment.
To address these concerns, we will now turn to procedures which allow us to assess the robustness of our pooled results: outlier and influence analyses.
The \(I^2\) > 50% "Guideline"
There are no iron-clad rules determining when exactly further analyses of the between-study heterogeneity are warranted. An approach that is sometimes used in practice is to check for outliers and influential cases when \(I^2\) is greater than 50%. When this threshold is reached, we can assume at least moderate heterogeneity, and that (more than) half of the variation is due to true effect size differences.
This "rule of thumb" is somewhat arbitrary, and, knowing the problems of \(I^2\) we discussed, in no way perfect. However, it can still be helpful from a practical perspective, because we can specify a priori, and in a consistent way, when we will try to get a more robust version of the pooled effect in our meta-analysis.
What should be avoided at any cost is to remove outlying and/or influential cases without any stringent rationale, just because we like the results. Such outcomes will be heavily biased by our "researcher agenda" (see Chapter 1.3), even if we did not consciously try to bend the results into a "favorable" direction.
5.4 Outliers & Influential Cases
As mentioned before, between-study heterogeneity can be caused by one or more studies with extreme effect sizes that do not quite "fit in." This may distort our pooled effect estimate, and it is a good idea to reinspect the pooled effect after such outliers have been removed from the analysis.
On the other hand, we also want to know if the pooled effect estimate we found is robust, meaning that it does not depend heavily on one single study. Therefore, we also want to know whether there are studies which heavily push the effect of our analysis into one direction. Such studies are called influential cases, and we will devote some time to this topic later in this chapter.
5.4.1 Basic Outlier Removal
There are several ways to define the effect of a study as "outlying" (Viechtbauer and Cheung 2010). An easy, and somewhat "brute force" approach, is to view a study as an outlier if its confidence interval does not overlap with the confidence interval of the pooled effect. The effect size of an outlier is so extreme that it differs significantly from the overall effect. To detect such outliers, we can search for all studies:
for which the upper bound of the 95% confidence interval is lower than the lower bound of the pooled effect confidence interval (i.e. extremely small effects)
for which the lower bound of the 95% confidence interval is higher than the upper bound of the pooled effect confidence interval (i.e. extremely large effects).
The idea behind this method is quite straightforward. Studies with a high sampling error are expected to deviate substantially from the pooled effect. However, because the confidence interval of such studies will also be large, this increases the likelihood that the confidence intervals will overlap with the one of the pooled effect.
Yet, if a study has a low standard error and still (unexpectedly) deviates substantially from the pooled effect, there is a good chance that the confidence intervals will not overlap, and that the study is classified as an outlier.
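This rule is easy enough to check by hand. The little sketch below flags all studies whose confidence interval does not overlap with that of the pooled effect, using the slots of our m.gen object (study-level bounds in lower and upper, pooled bounds in lower.random and upper.random). It is essentially what the find.outliers function introduced below automates for us.
# Flag studies whose CI does not overlap with the pooled effect's CI
out <- which(m.gen$upper < m.gen$lower.random |    # extremely small effects
             m.gen$lower > m.gen$upper.random)     # extremely large effects
m.gen$studlab[out]
# Rerun the meta-analysis without the flagged studies
update.meta(m.gen, exclude = out) %>% summary()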
The {dmetar} package contains a function called find.outliers, which implements this simple outlier removal algorithm. It searches for outlying studies in a {meta} object, removes them, and then recalculates the results.
The "find.outliers" Function
The find.outliers function is included in the {dmetar} package. Once {dmetar} is installed and loaded on your computer, the function is ready to be used. If you did not install {dmetar}, follow these instructions:
Access the source code of the function online.
Let R "learn" the function by copying and pasting the source code in its entirety into the console (bottom left pane of R Studio), and then hit "Enter."
Make sure that the {meta} and {metafor} packages are installed and loaded.
The find.outliers function only needs an object created by a {meta} meta-analysis function as input. Let us see the what results we get for our m.gen object.
find.outliers(m.gen)
## Identified outliers (random-effects model)
## ------------------------------------------
## "DanitzOrsillo", "Shapiro et al."
## Results with outliers removed
## -----------------------------
## SMD 95%-CI t p-value
## Random effects model 0.4528 [0.3257; 0.5800] 7.59 < 0.0001
## Prediction interval [0.1693; 0.7363]
## I^2 = 24.8% [0.0%; 58.7%]; H = 1.15 [1.00; 1.56]
We see that the find.outliers function has detected two outliers, "DanitzOrsillo" and "Shapiro et al." The function has also automatically rerun our analysis while excluding the identified studies. In the column displaying the random-effects weight of each study, %W(random), we see that the weight of the outlying studies has been set to zero, thus removing them from the analysis.
Based on the output, we see that the \(I^2\) heterogeneity shrinks considerably when the two studies are excluded, from \(I^2=\) 63% to 25%. The confidence interval around \(\tau^2\) now also includes zero, and the \(Q\)-test of heterogeneity is not significant anymore. Consequentially, the prediction interval of our estimate has also narrowed. Now, it only contains positive values, providing much more certainty of the robustness of the pooled effect across future studies.
5.4.2 Influence Analysis
We have now learned a basic way to detect and remove outliers in meta-analyses. However, it is not only extreme effect sizes which can cause concerns regarding the robustness of the pooled effect. Some studies, even if their effect size is not particularly high or low, can still exert a very high influence on our overall results.
For example, it could be that we find an overall effect in our meta-analysis, but that its significance depends on a single large study. This would mean that the pooled effect is not statistically significant anymore once the influential study is removed. Such information is very important if we want to communicate to the public how robust our results are.
Outlying and influential studies have an overlapping but slightly different meaning. Outliers are defined through the magnitude of their effect but do not necessarily need to have a substantial impact on the results of our meta-analysis. It is perfectly possible that removal of an outlier as defined before neither changes the average effect size, nor the heterogeneity in our data substantially.
Influential cases, on the other hand, are those studies which–by definition–have a large impact on the pooled effect or heterogeneity, regardless of how high or low the effect is. This does not mean, of course, that a study with an extreme effect size cannot be an influential case. In fact, outliers are often also influential, as our example in the last chapter illustrated. But they do not have to be.
There are several techniques to identify influential studies, and they are a little more sophisticated than the basic outlier removal we discussed previously. They are based on the leave-one-out method. In this approach, we recalculate the results of our meta-analysis \(K\) times, each time leaving out one study.
Based on this data, we can calculate different influence diagnostics. Influence diagnostics allow us to detect the studies which influence the overall estimate of our meta-analysis the most, and let us assess if this large influence distorts our pooled effect (Viechtbauer and Cheung 2010).
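As a quick first look, a plain leave-one-out analysis can also be produced directly with {meta}'s metainf function, which reruns the pooling \(K\) times with one study omitted each time. This can be a useful sanity check before turning to the more detailed diagnostics covered below.
# Leave-one-out meta-analysis; "random" reruns the random-effects model
metainf(m.gen, pooled = "random")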
The {dmetar} package contains a function called InfluenceAnalysis, which allows us to calculate these various influence diagnostics using one function. The function can be used for any type of meta-analysis object created by {meta} functions.
The "InfluenceAnalysis" function
The InfluenceAnalysis function is included in the {dmetar} package. Once {dmetar} is installed and loaded on your computer, the function is ready to be used. If you did not install {dmetar}, follow these instructions:
Make sure that the {meta}, {metafor}, {ggplot2} and {gridExtra} packages are installed and loaded.
Using the InfluenceAnalysis function is relatively straightforward. We only have to specify the name of the meta-analysis object for which we want to conduct the influence analysis. Here, we again use the m.gen object.
Because InfluenceAnalysis uses the fixed-effect model by default, we also have to set random = TRUE, so that the random-effects model will be used. The function can also take other arguments, which primarily control the type of plots generated by the function. Those arguments are detailed in the function documentation.
We save the results of the function in an object called m.gen.inf.
m.gen.inf <- InfluenceAnalysis(m.gen, random = TRUE)
The InfluenceAnalysis function creates four influence diagnostic plots: a Baujat plot, influence diagnostics according to Viechtbauer and Cheung (2010), and the leave-one-out meta-analysis results, sorted by effect size and \(I^2\) value. We can open each of these plots individually using the plot function. Let us go through them one after another.
5.4.2.1 Baujat Plot
A Baujat plot can be printed using the plot function and by specifying "baujat" in the second argument:
plot(m.gen.inf, "baujat")
Baujat plots (Baujat et al. 2002) are diagnostic plots to detect studies which overly contribute to the heterogeneity in a meta-analysis. The plot shows the contribution of each study to the overall heterogeneity (as measured by Cochran's \(Q\)) on the horizontal axis, and its influence on the pooled effect size on the vertical axis.
This "influence" value is determined through the leave-one-out method, and expresses the standardized difference of the overall effect when the study is included in the meta-analysis, versus when it is not included.
Studies on the right side of the plot can be regarded as potentially relevant cases since they contribute heavily to the overall heterogeneity in our meta-analysis. Studies in the upper right corner of the plot may be particularly influential since they have a large impact on both the estimated heterogeneity, and the pooled effect.
As you may have recognized, the two studies we find on the right side of the plot are the ones we already detected before ("DanitzOrsillo" and "Shapiro et al."). These studies do not have a large impact on the overall results (presumably because they have a small sample size), but they do add substantially to the heterogeneity we find in the meta-analysis.
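As a side note, if you prefer to work with a fitted {metafor} model directly, a Baujat plot can also be produced with {metafor}'s own baujat function. This assumes that an equivalent rma object is available; we will create such an object, m.rma, from m.gen in Chapter 5.4.3.
# Baujat plot based on a fitted {metafor} model (see Chapter 5.4.3 for m.rma)
baujat(m.rma)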
5.4.2.2 Influence Diagnostics
The next plot contains several influence diagnostics for each of our studies. These can be plotted using this code:
plot(m.gen.inf, "influence")
We see that the plot displays, for each study, the value of different influence measures. These measures are used to characterize which studies fit well into our meta-analysis model, and which do not. To understand what the diagnostics mean, let us briefly go through them from left to right, top to bottom.
5.4.2.2.1 Externally Standardized Residuals
The first plot displays the externally standardized residual of each study. As is says in the name, these residuals are the deviation of each observed effect size \(\hat\theta_k\) from the pooled effect size. The residuals are standardized, and we use an "external" estimate of the pooled effect without the study to calculate the deviations.
The "external" pooled effect \(\hat\mu_{\setminus k}\) is obtained by calculating the overall effect without study \(k\), along with the principles of the leave-one-out method. The resulting residual is then standardized by (1) the variance of the external effect (i.e. the squared standard error of \(\hat\mu_{\setminus k}\)), (2) the \(\tau^2\) estimate of the external pooled effect, and (3) the variance of \(k\).
\[\begin{equation} t_{k} = \frac{\hat\theta_{k}-\hat\mu_{\setminus k}}{\sqrt{\mathrm{Var}(\hat\mu_{\setminus k})+\hat\tau^2_{\setminus k}+s^2_k}} \tag{5.8} \end{equation}\]
Assuming that a study \(k\) fits well into the meta-analysis, the three terms in the denominator capture the sources of variability which determine how much an effect size differs from the average effect. These sources of variability are the sampling error of \(k\), the variance of true effect sizes, and the imprecision in our pooled effect size estimate.
If a study does not fit into the overall population, we can assume that the residual will be larger than expected from the three variance terms alone. This leads to higher values of \(t_k\), which indicate that the study is an influential case that does not "fit in."
5.4.2.2.2 \(\mathrm{DFFITS}\) Value
The computation of the \(\mathrm{DFFITS}\) metric is similar to the one of the externally standardized residuals. The pattern of DFFITS and \(t_k\) values is therefore often comparable across studies. This is the formula:
\[\begin{equation} \mathrm{DFFITS}_k = \dfrac{\hat\mu-\hat\mu_{\setminus k}}{\sqrt{\dfrac{w_k^{(*)}}{\sum^{K}_{k=1}w_k^{(*)}}(s^2_k+\hat\tau^2_{\setminus k})}} \end{equation}\]
For the computation, we also need \(w_k^{(*)}\), the (random-effects) weight of study \(k\) (Chapter 4.1.1), which is divided by the sum of weights to express the study weight in percent.
In general, the \(\mathrm{DFFITS}\) value indicates how much the pooled effect changes when a study \(k\) is removed, expressed in standard deviations. Again, higher values indicate that a study may be an influential case because its impact on the average effect is larger.
5.4.2.2.3 Cook's Distance
The Cook's distance value \(D_k\) of a study can be calculated by a formula very similar to the one of the \(\mathrm{DFFITS}\) value, with the largest difference being that for \(D_k\), the difference of the pooled effect with and without \(k\) is squared.
This results in \(D_k\) only taking positive values. The pattern across studies, however, is often similar to the \(\mathrm{DFFITS}\) value. Here is the formula:
\[\begin{equation} D_k = \frac{(\hat\mu-\hat\mu_{\setminus k})^2}{\sqrt{s^2_k+\hat\tau^2}}. \tag{5.9} \end{equation}\]
5.4.2.2.4 Covariance Ratio
The covariance ratio of a study \(k\) can be calculated by dividing the variance of the pooled effect (i.e. its squared standard error) without \(k\) by the variance of the initial average effect.
\[\begin{equation} \mathrm{CovRatio}_k = \frac{\mathrm{Var}(\hat\mu_{\setminus k})}{\mathrm{Var}(\hat\mu)} \tag{5.10} \end{equation}\]
A \(\mathrm{CovRatio}_k\) value below 1 indicates that removing study \(k\) results in a more precise estimate of the pooled effect size \(\hat\mu\).
5.4.2.2.5 Leave-One-Out \(\tau^2\) and \(Q\) Values
The values in this row are quite easy to interpret: they simply display the estimated heterogeneity as measured by \(\tau^2\) and Cochran's \(Q\), if study \(k\) is removed. Lower values of \(Q\), but particularly of \(\tau^2\) are desirable, since this indicates lower heterogeneity.
5.4.2.2.6 Hat Value and Study Weight
In the last row, we see the study weight and hat value of each study. We already covered the calculation and meaning of study weights extensively in Chapter 4.1.1, so this measure does not need much more explanation. The hat value, on the other hand, is simply another metric that is equivalent to the study weight. The pattern of the hat values and weights will therefore be identical in our influence analyses.
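For completeness: these diagnostics can also be obtained outside of the InfluenceAnalysis function. The influence method in {metafor} returns them for a fitted rma model, such as the m.rma object we will create in Chapter 5.4.3. The exact values may differ somewhat from the ones shown here, depending on the model settings.
# Influence diagnostics for an equivalent {metafor} model
inf <- influence(m.rma)
inf$inf    # rstudent, dffits, cook.d, cov.r, tau2.del, QE.del, hat, weight
plot(inf)  # plots the diagnostics, marking potentially influential studies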
All of these metrics provide us with a value which, if extreme, indicates that the study is an influential case, and may negatively affect the robustness of our pooled result. However, it is less clear when this point is reached. There is no strict rule which \(\mathrm{DFFITS}\), Cook's distance or standardized residual value is too high. It is always necessary to evaluate the results of the influence analysis in the context of the research question to determine if it is indicated to remove a study.
Yet, there are a few helpful "rules of thumb" which can guide our decision. The InfluenceAnalysis function regards a study as an influential case if one of these conditions is fulfilled:
\[\begin{equation} \mathrm{DFFITS}_k > 3\sqrt{\frac{1}{k-1}} \tag{5.11} \end{equation}\]
\[\begin{equation} D_k > 0.45 \tag{5.12} \end{equation}\]
\[\begin{equation} \mathrm{hat_k} > 3\frac{1}{k}. \tag{5.13} \end{equation}\]
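Expressed in R, these cut-offs could be checked like so, assuming that the diagnostics have been stored in the inf object from the previous code snippet:
k <- m.gen$k
with(inf$inf,
     which(dffits > 3*sqrt(1/(k-1)) |
           cook.d > 0.45 |
           hat    > 3*(1/k)))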
Studies determined to be influential are displayed in red in the plot generated by the InfluenceAnalysis function.
In our example, this is only the case for "Dan," the "DanitzOrsillo" study. Yet, while only this study was defined as influential, there are actually two spikes in most plots. We could also decide to define "Sha" (Shapiro et al.) as an influential case because the values of this study are very extreme too.
So, we found that the studies "DanitzOrsillo" and "Shapiro et al." might be influential. This is an interesting finding, as we selected the same studies based on the Baujat plot, and when only looking at statistical outliers.
This further corroborates that the two studies could have distorted our pooled effect estimate, and cause parts of the between-study heterogeneity we found in our initial meta-analysis.
5.4.2.3 Leave-One-Out Meta-Analysis Results
Lastly, we can also plot the overall effect and \(I^2\) heterogeneity of all meta-analyses that were conducted using the leave-one-out method. We can print two forest plots (a type of plot we will get to know better in Chapter 6.2), one sorted by the pooled effect size, and the other by the \(I^2\) value of the leave-one-out meta-analyses. The code to produce the plots looks like this:
plot(m.gen.inf, "es")
plot(m.gen.inf, "i2")
In these two forest plots, we see the recalculated pooled effects, with one study omitted each time. In both plots, there is a shaded area with a dashed line in its center. This represents the 95% confidence interval of the original pooled effect size, and the estimated pooled effect itself.
The first plot is ordered by effect size (low to high). Here, we see how the overall effect estimate changes when different studies are removed. Since the two outlying and influential studies "DanitzOrsillo" and "Shapiro et al." have very high effect sizes, we find that the overall effect is smallest when they are removed.
The second plot is ordered by heterogeneity (low to high), as measured by \(I^2\). This plot illustrates that the lowest \(I^2\) heterogeneity is reached by omitting the studies "DanitzOrsillo" and "Shapiro et al." This corroborates our finding that these two studies were the main "culprits" for the between-study heterogeneity we found in the meta-analysis.
All in all, the results of our outlier and influence analysis in this example point into the same direction. There are two studies which are likely influential outliers. These two studies may distort the effect size estimate, as well as its precision. We should therefore also conduct and report the results of a sensitivity analysis in which both studies are excluded.
5.4.3 GOSH Plot Analysis
In the previous chapter, we explored the robustness of our meta-analysis using influence analyses based on the leave-one-out method. Another way to explore patterns of heterogeneity in our data is so-called Graphic Display of Heterogeneity (GOSH) plots (Olkin, Dahabreh, and Trikalinos 2012). For those plots, we fit the same meta-analysis model to all possible subsets of our included studies. In contrast to the leave-one-out method, we therefore not only fit \(K\) models, but a model for all \(2^K-1\) possible study combinations.
This means that creating GOSH plots can become quite computationally expensive when the total number of studies is large. The R implementation we cover here therefore only fits a maximum of 1 million randomly selected models.
Once the models are calculated, we can plot them, displaying the pooled effect size on the x-axis and the between-study heterogeneity on the y-axis. This allows us to look for specific patterns, for example clusters with different effect sizes and amounts of heterogeneity.
A GOSH plot with several distinct clusters indicates that there might be more than one effect size "population" in our data, warranting a subgroup analysis. If the effect sizes in our sample are homogeneous, on the other hand, the GOSH plot displays a roughly symmetric, homogeneous distribution.
To generate GOSH plots, we can use the gosh function in the {metafor} package. If you have not installed the package yet, do so now and then load it from the library.
Let us generate a GOSH plot for our m.gen meta-analysis object. To do that, we have to "transform" this object created by the {meta} package into a {metafor} meta-analysis object first, because only those can be used by the gosh function.
The function used to perform a meta-analysis in {metafor} is called rma. It is not very complicated to translate a {meta} object to a rma meta-analysis. We only have to provide the function with the effect size (TE), Standard Error (seTE), and between-study heterogeneity estimator (method.tau) stored in m.gen. We can specify that the Knapp-Hartung adjustment should be used by specifying the argument test = "knha".
We save the newly generated {metafor}-based meta-analysis under the name m.rma.
m.rma <- rma(yi = m.gen$TE,
sei = m.gen$seTE,
method = m.gen$method.tau,
test = "knha")
Please note that if you used the fixed-effect model in {meta}, it is not possible to simply copy method.tau to your rma call. Instead, this requires one to set the method argument to "FE" in rma.
We can then use the m.rma object to generate the GOSH plot. Depending on the number of studies in your analysis, this can take some time, even up to a few hours. We save the results as res.gosh.
res.gosh <- gosh(m.rma)
We can then display the plot by plugging the res.gosh object into the plot function. The additional alpha argument controls how transparent the dots in the plot are, with 1 indicating that they are completely opaque. Because there are many, many data points in the graph, it makes sense to use a small alpha value to make it clearer where the values "pile up."
plot(res.gosh, alpha = 0.01)
We see an interesting pattern in our data: while most values are concentrated in a cluster with relatively high effects and high heterogeneity, the distribution of \(I^2\) values is heavily right-skewed and bi-modal. There seem to be some study combinations for which the estimated heterogeneity is much lower, but where the pooled effect size is also smaller, resulting in a shape with a "comet-like" tail.
Having seen the effect size\(-\)heterogeneity pattern in our data, the really important question is: which studies cause this shape? To answer this question, we can use the gosh.diagnostics function.
This function uses three clustering or unsupervised machine learning algorithms to detect clusters in the GOSH plot data. Based on the identified clusters, the function automatically determines which studies contribute most to each cluster. If we find, for example, that one or several studies are over-represented in a cluster with high heterogeneity, this indicates that these studies, alone or in combination, may cause the high heterogeneity.
The "gosh.diagnostics" function
The gosh.diagnostics function is included in the {dmetar} package. Once {dmetar} is installed and loaded on your computer, the function is ready to be used. If you did not install {dmetar}, follow these instructions:
Make sure that the {gridExtra}, {ggplot2}, {fpc} and {mclust} packages are installed and loaded.
The gosh.diagnostics function uses three cluster algorithms to detect patterns in our data: the \(k\)-means algorithm (Hartigan and Wong 1979), density reachability and connectivity clustering, or DBSCAN (Schubert et al. 2017) and gaussian mixture models (Fraley and Raftery 2002).
It is possible to tune some of the parameters of these algorithms. In the arguments km.params, db.params and gmm.params, we can add a list element which contains specifications controlling the behavior of each algorithm. In our example, we will tweak a few details of the \(k\)-means and DBSCAN algorithm. We specify that the \(k\)-means algorithm should search for two clusters ("centers") in our data. In db.params, we change the eps, or \(\epsilon\) value used by DBSCAN. We also specify the MinPts value, which determines the minimum number of points needed for each cluster.
You can learn more about the parameters of the algorithms in the gosh.diagnostics documentation. There is no clear rule when which parameter specification works best, so it can be helpful to tweak details about each algorithm several times and see how this affects the results.
The code for our gosh.diagnostics call looks like this:
res.gosh.diag <- gosh.diagnostics(res.gosh,
km.params = list(centers = 2),
db.params = list(eps = 0.08,
MinPts = 50))
res.gosh.diag
## GOSH Diagnostics
## ================================
## - Number of K-means clusters detected: 2
## - Number of DBSCAN clusters detected: 4
## - Number of GMM clusters detected: 7
## Identification of potential outliers
## ---------------------------------
## - K-means: Study 3, Study 16
## - DBSCAN: Study 3, Study 4, Study 16
## - Gaussian Mixture Model: Study 3, Study 4, Study 16
In the output, we see the number of clusters that each algorithm has detected. Because each approach uses a different mathematical strategy to segment the data, it is normal that the number of clusters is not identical.
In the Identification of potential outliers section, we see that the procedure was able to identify three studies with a large impact on the cluster make-up: study 3, study 4 and study 16.
We can also plot the gosh.diagnostics object to inspect the results a little closer.
plot(res.gosh.diag)
This produces several plots. The first three plots display the clustering solution found by each algorithm and the amount of cluster imbalance pertaining to each study in each cluster. Based on this information, a Cook's distance value is calculated for each study, which is used to determine if a study might have a large impact on the detected cluster (and may therefore be an influential case).
The other plots show a GOSH plot again, but there are now shaded points which represent the analyses in which a selected study was included. For example, we see that nearly all results in which study 3 was included are part of a cluster with high heterogeneity values and higher effect sizes. Results in which study 4 was included vary in their heterogeneity, but generally show a somewhat smaller average effect. Results in which study 16 was included are similar to the ones found for study 3, but a little more dispersed.
Let us see what happens if we rerun the meta-analysis while removing the three studies that the gosh.diagnostics function has identified.
update.meta(m.gen, exclude = c(3, 4, 16)) %>%
summary()
## SMD 95%-CI %W(random) exclude
## Call et al. 0.7091 [ 0.1979; 1.2203] 4.6
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 8.1
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 0.0 *
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 0.0 *
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 14.8
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 8.1
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 6.2
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 7.0
## Hintz et al. 0.2840 [-0.0453; 0.6133] 11.0
## Kang et al. 1.2751 [ 0.6142; 1.9360] 2.7
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 8.2
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 5.8
## Phang et al. 0.5407 [ 0.0619; 1.0196] 5.2
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 4.7
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 2.5
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 0.0 *
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 6.1
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 5.0
## tau^2 < 0.0001 [0.0000; 0.0955]; tau = 0.0012 [0.0000; 0.3091];
## I^2 = 4.6% [0.0%; 55.7%]; H = 1.02 [1.00; 1.50]
We see that studies number 3 and 16 are "DanitzOrsillo" and "Shapiro et al." These two studies were also found to be influential in previous analyses. Study 4 is the one by "de Vibe." This study does not have a particularly extreme effect size, but the narrow confidence intervals indicate that it has a high weight, despite its observed effect size being smaller than the average. This could explain why this study is also influential.
We see that removing the three studies has a large impact on the estimated heterogeneity. The value of \(\tau^2\) nearly drops to zero, and the \(I^2\) value is also very low, indicating that only 4.6% of the variability in effect sizes is due to true effect size differences. The pooled effect of \(g\) = 0.48 is somewhat smaller than our initial estimate \(g=\) 0.58, but still within the same orders of magnitude.
Overall, this indicates that the average effect we initially calculated is not too heavily biased by outliers and influential studies.
Reporting the Results of Influence Analyses
Let us assume we determined that "DanitzOrsillo," "de Vibe et al." and "Shapiro et al." are influential studies in our meta-analysis. In this case, it makes sense to also report the results of a sensitivity analysis in which these studies are excluded.
To make it easy for readers to see the changes associated with removing the influential studies, we can create a table in which both the original results, as well as the results of the sensitivity analysis are displayed. This table should at least include the pooled effect, its confidence interval and \(p\)-value, as well as a few measures of heterogeneity, such as prediction intervals and the \(I^2\) statistic (as well as the confidence interval thereof).
It is also important to specify which studies were removed as influential cases, so that others understand on which data the new results are based. Below is an example of how such a table looks like for our m.gen meta-analysis from before:
                         \(g\)    95%CI       \(p\)     95%PI         \(I^2\)   95%CI
Main Analysis            0.58     0.38-0.78   <0.001    -0.06-1.22    63%       39-78
Infl. Cases Removed¹     0.48     0.36-0.60   <0.001    0.36-0.61     5%        0-56
¹ Removed as outliers: DanitzOrsillo, de Vibe, Shapiro.
This type of table is very convenient because we can also add further rows with results of other sensitivity analyses. For example, if we conduct an analysis in which only studies with a low risk of bias (Chapter 1.4.5) were considered, we could report the results in a third row.
\[\tag*{$\blacksquare$}\]
5.5 Questions & Answers
Why is it important to examine the between-study heterogeneity of a meta-analysis?
Can you name the two types of heterogeneity? Which one is relevant in the context of calculating a meta-analysis?
Why is the significance of Cochran's \(Q\) test not a sufficient measure of between-study heterogeneity?
What are the advantages of using prediction intervals to express the amount of heterogeneity in a meta-analysis?
What is the difference between statistical outliers and influential studies?
For what can GOSH plots be used?
Answers to these questions are listed in Appendix A at the end of this book.
In meta-analyses, we do not only have to pay attention to the pooled effect size, but also to the heterogeneity of the data on which this average effect is based. The overall effect does not capture that the true effects in some studies may differ substantially from our point estimate.
Cochran's \(Q\) is commonly used to quantify the variability in our data. Because we know that \(Q\) follows a \(\chi^2\) distribution, this measure allows us to detect if more variation is present than what can be expected based on sampling error alone. This excess variability represents true differences in the effect sizes of studies.
A statistical test of \(Q\), however, heavily depends on the type of data at hand. We should not only rely on \(Q\) to assess the amount of heterogeneity. There are other measures, such as \(I^2\), \(\tau\) or prediction intervals, which may be used additionally.
The average effect in a meta-analysis can be biased when there are outliers in our data. Outliers do not always have a large impact on the results of a meta-analysis. But when they do, we speak of influential cases.
There are various methods to identify outlying and influential cases. If such studies are detected, it is advisable to recalculate our meta-analysis without them to see if this changes the interpretation of our results.
"Doing Meta-Analysis in R: A Hands-on Guide" was written by Mathias Harrer, Pim Cuijpers, Toshi A. Furukawa, David D. Ebert.
This book was built by the bookdown R package. | CommonCrawl |
Physics of a hologram and its Fundamental Limitations
I understand the basic principle behind holography as the interference of a scattered wave with a reference beam which is captured using a photographic plate. The photographic plate is further used to recreate the original image by using it as a diffraction grating. The reference light is used to illuminate the diffraction grating and what one observes is the original wavefront of the scattered light.
I am interested in 2 questions.
How does using the photographic plate as a diffraction grating reproduce the original object? I am looking for a mathematical explanation discussing precisely how the phases add up to give the scattered wavefront.
I am also interested in the fundamental limits of what is resolvable through holographic techniques. Is the highest possible spatial resolution achievable $h/p$, as imposed by quantum mechanics? How does one see this from the addition of phases?
This post imported from StackExchange Physics at 2015-03-23 11:08 (UTC), posted by SE-user Prathyush
asked Jul 3, 2013 in Theoretical Physics by Prathyush (700 points) [ no revision ]
A non-mathematical idea: the recorded plate plane, when illuminated with the reference beam, is in the same interference state as when recording the scene (the plate fringes force it). Then the light beyond the plate has to be in the same state as when recording the scene. See also this and this.
This post imported from StackExchange Physics at 2015-03-23 11:08 (UTC), posted by SE-user Andrestand
commented Sep 14, 2014 by Andrestand (0 points) [ no revision ]
I'll answer the first question: In-axis holography refers to objects placed on or near an axis normal to the holographic plate. This is usually referred to as Gabor's architecture. Off-axis holography removes the problem of ghost images by placing the object outside the normal. It was first introduced by Leith and Upatnieks. Let's first consider the holographic record of a point source:
Record of the hologram:
Let $\psi_0$ represent the field distribution of the object wave in the same plane as the holographic plate. Similarly, let $\psi_r$ represent the reference wave. The holographic plate is sensitive to the incoming intensity, therefore its amplitude transmittance is:
$$ t(x,y) \propto |\psi_0 +\psi_r|^2 $$
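To see explicitly how the phases add up, we can expand the squared modulus (the asterisk denotes complex conjugation):
$$ |\psi_0 +\psi_r|^2 = |\psi_0|^2 + |\psi_r|^2 + \psi_0\psi_r^{\ast} + \psi_0^{\ast}\psi_r $$
The first two terms carry no phase information, while the two cross terms record the relative phase between the object and reference waves; it is these terms that later reproduce the object wavefront and its conjugate when the plate is illuminated again.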
Let's consider an off-axis point source at distance $z_0$ from the holographic plate. According to Fresnel's diffraction, the object wave emerges from the point source and reaches the plate as a diverging spherical wave:
$$ \psi_0=\delta(x-x_0,y-y_0) \ast h(x,y;z_0)= \exp(-i k_0 z_0) \frac{ik_0}{2\pi z_0} \exp\!\left(-i k_0 \left[(x-x_0)^2+(y-y_0)^2\right] / 2 z_0\right) $$
where h(x,y;z) is a simplified impulse response function solution to the Green problem for Fresnel diffraction. The simplification consist on taking the paraxial approximation and observing the field at a distance $z>>\lambda_0$.
Let $\psi_r$ be a plane wave whose phase is the same as the object wave at $z_0$. Therefore, the reference wave field distribution is $\psi_r= a \exp(-i k_0 z_0)$. Thus, the intensity distribution recorded at the plate (the hologram's transmittance) is:
\begin{equation} t(x,y) \propto |\psi_0 +\psi_r|^2 \end{equation}
$$ =\left|a + \frac{i k_0}{2 \pi z_0} \exp\!\left(-i k_0 \left[(x-x_0)^2+(y-y_0)^2\right] / 2 z_0\right)\right|^2 $$ $$ = a^2+\left(\frac{k_0}{2\pi z_0}\right)^2 + \frac{k_0}{2\pi z_0} \sin\!\left(\frac{k_0}{2 z_0} \left[(x-x_0)^2+(y-y_0)^2\right] \right) $$ $$ =FZP(x-x_0, y-y_0;z_0) $$
These functions are called sinusoidal Fresnel Zone Plates and they look like this:
The center of the FZP specifies the localization of the point source: $x_0$ and $y_0$. The spatial variation of the FZP is governed by a sinusoidal function with a quadratic spatial dependence. Now let's place the point source on-axis, $x_0=y_0=0$, at some distance $z_0$ from the plate: $$ t(x,y) \propto |\psi_0 +\psi_r|^2 $$ $$ =\left|a + \frac{i k_0}{2 \pi z_0} \exp\!\left(-i k_0 (x^2+y^2) / 2 z_0\right)\right|^2 $$ $$ = a^2+\left(\frac{k_0}{2\pi z_0}\right)^2 + \frac{k_0}{2\pi z_0} \sin\!\left(\frac{k_0}{2 z_0} (x^2+y^2) \right) $$ $$ =FZP(x, y;z_0) $$
We can see how the holographic record presents a lens structure. The focal distance is parametrized by the spatial frequency of the incoming FZP. This system records, on a two-dimensional surface, the differences between the FZPs that make up our object (since any object can be decomposed as a collection of point sources).
Please note here is the answer to your question:
Different FZPs have different spatial frequencies according to their distance to the plate. This information is encoded in the hologram and revealed upon illumination. The coding consists of a parametrization of different lenses (focal distances) according to the distance of the individual point sources which make up the object.
Visualization of the hologram
In order to recover the hologram, we simply need to illuminate the plate with the same reference wave. Let's call it the reconstruction wave, whose value at the plate is $\psi_{rc}=a$ ($z_0=0$). The transmitted field distribution after the hologram will be:
$$\psi_{rc} t(x,y)=a t(x,y)$$
Finally, the field at distance $z$, according to Fresnel's diffraction:
$$ a t(x,y) \ast h(x,y;z) $$
$$ = a\left[ a^2+\left(\frac{k_0}{2\pi z_0}\right)^2 + \frac{k_0}{4 i\pi z_0} \left( e^{\frac{i k_0}{2 z_0} (x^2+y^2)} - e^{-\frac{i k_0}{2 z_0} (x^2+y^2)} \right) \right] \ast h(x,y;z) $$
Now we can take advantage of the fact that convolution is a linear operator and the field finally decomposes in the following terms:
zero order (This gives some noise)
\begin{equation} a \left( a^2 + \left( \frac{k_0}{2\pi z_0}\right)^2\right) \ast h(x,y;z) \end{equation}
real image (pseudoscope) (reversed-phase object information)
\begin{equation} \sim e^{\frac{i k_0}{2 z_0} (x^2+y^2)} \ast h(x,y;z=z_0)= \end{equation} $$ e^{\frac{i k_0}{2 z_0} (x^2+y^2)} \ast e^{- i k_0 z_0} \frac{i k_0}{2 \pi z_0} e^{-\frac{ik_0(x^2+y^2)}{2z_0}} \sim \delta(x,y) $$
virtual image (orthoscope) ("in-phase" object information)
\begin{equation} \sim e^{-\frac{i k_0}{2 z_0} (x^2+y^2)} \ast h(x,y;z=-z_0)= \end{equation} $$ e^{-\frac{i k_0}{2 z_0} (x^2+y^2)} \ast e^{i k_0 z_0} \frac{-i k_0}{2 \pi z_0} e^{\frac{ik_0(x^2+y^2)}{2z_0}} \sim \delta(x,y) $$
In the last term, we have propagated the field immediately behind the plate in the reverse direction to demonstrate that a virtual image emerges behind the hologram. This is the wave that preserves the orientation of the object wave's phase. The real image carries this information with reversed phase, which is why it is called the pseudoscope term. Off-axis architecture separates these terms by placing the object away from the plate normal.
Regarding your second question:
The hologram is recorded by a photosensitive material, and I think the resolution is actually limited by the minimum distance between the optical centers that contribute to the optical density of the material. Working in the linear regime of this optical density is crucial to avoid over-exposure, which can bring up ghost images in the hologram. These are caused by distortions of the FZPs, which no longer behave like sinusoids and create higher harmonics (rather than +1, 0, -1 exclusively).
This post imported from StackExchange Physics at 2015-03-23 11:08 (UTC), posted by SE-user cacosomoza
answered Jul 5, 2013 by cacosomoza (60 points) [ no revision ]
Growth, yield and fiber quality characteristics of Bt and non-Bt cotton cultivars in response to boron nutrition
MEHRAN Muhammad1,
ASHRAF Muhammad ORCID: orcid.org/0000-0003-1599-83121,
SHAHZAD Sher Muhammad2,
SHAKIR Muhammad Siddique3,
AZHAR Muhammad Tehseen4,7,
AHMAD Fiaz5 &
ALVI Alamgir6
Journal of Cotton Research volume 6, Article number: 1 (2023)
Boron (B) deficiency is an important factor for poor seed cotton yield and fiber quality. However, it is often missing in the plant nutrition program, particularly in developing countries. The current study investigated B's effect on growth, yield, and fiber quality of Bt (CIM-663) and non-Bt (Cyto-124) cotton cultivars. The experimental plan consisted of twelve treatments: Control (CK); B at 1 mg·kg−1 soil application (SB1); 2 mg·kg−1 B (SB2); 3 mg·kg−1 B (SB3); 0.2% B foliar spray (FB1); 0.4% B foliar spray (FB2); 1 mg·kg−1 B + 0.2% B foliar spray (SB1 + FB1); 1 mg·kg−1 B + 0.4% B foliar spray (SB1 + FB2); 2 mg·kg−1 B + 0.2% B foliar spray (SB2 + FB1); 2 mg·kg−1 B + 0.4% B foliar spray (SB2 + FB2); 3 mg·kg−1 B + 0.2% B foliar spray (SB3 + FB1); 3 mg·kg−1 B + 0.4% B foliar spray (SB3 + FB2). Each treatment has three replications, one pot having two plants per replication.
B nutrition at all levels and methods of application significantly (P ≤ 0.05) affected the growth, physiological, yield, and fiber quality characteristics of both cotton cultivars. However, SB2, either alone or in combination with foliar spray, showed superiority over the others, particularly in the non-Bt cultivar, which responded better to B nutrition. Maximum improvement in monopodial branches (345%), sympodial branches (143%), chlorophyll-a (177%), chlorophyll-b (194%), photosynthesis (169%), and ginning out turn (579%) in the non-Bt cultivar was found with SB2 compared with CK. In the Bt cultivar, although no consistent trend was found, the integrated use of SB3 with foliar spray performed relatively better for improving cotton growth compared with other treatments. Fiber quality characteristics in both cultivars were improved markedly but variably with different B treatments.
B nutrition with SB2 either alone or in combination with foliar spray was found optimum for improving cotton's growth and yield characteristics.
Cotton (Gossypium hirsutum L.) is an important commercial crop grown in various environments for its high-quality fiber and oil. It is globally playing a leading role in the agricultural and industrial economy by providing raw materials, particularly for the textile industry, and employment (Rana et al. 2020). Globally, it was grown on 33.1 million hectares, yielded 136 million bales, and produced about 35% of the total fiber during the year 2020 (FAO 2021). Pakistan is ranked in the 3rd position in the world for cotton exports, the 4th in terms of area under cotton cultivation and the 39th in average productivity. Around 26% of farmers in Pakistan are growing cotton on 1937 thousand hectares, and producing 8.3 million bales. It provides raw materials for the textile industry which is the largest agro-industrial sector of Pakistan, employs 17% of people, earns 60% of foreign exchange, and contributes 0.6% to GDP and 2.4% of the value added in agriculture (Economic Survey of Pakistan 2022).
An adequate plant nutrition program for cotton should be comprised of macronutrients including nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), magnesium (Mg), and sulfur (S) as well as micronutrients such as copper (Cu), iron (Fe), zinc (Zn), manganese (Mn), boron (B), chlorine (Cl), nickel (Ni) and molybdenum (Mo) (White and Brown 2010). However, the current nutrient management program for cotton in Pakistan is based mainly on the use of N, P, and K, while neglecting micronutrients (Khan et al. 2016). An inadequate and imbalanced supply of plant nutrients might be the major cause of low seed cotton yield and fiber quality of cotton in Pakistan (Ashraf et al. 2017). The excessive vegetative growth, poor flower, and fruit setting, as well as retention, and increased susceptibility to insects and pests, might be the result of poor and imbalanced crop nutrition (Rodrigues et al. 2022). Intensive cultivation, high yielding targets, soil alkalinity, and inadequate use of chemical fertilizers result in the deficiencies of multiple nutrients (Yaseen et al. 2013). The deficiency of macro and micronutrients is declining cotton productivity and fiber quality in current years which will become worse in the future if not addressed appropriately (Kumar et al. 2018).
B is considered the most important micronutrient required in all stages of cotton growth, particularly during flowering and boll formation (Rashid et al. 2002; Yeates et al. 2010). In various areas of the world where cotton is being grown, B deficiency is widespread (Zhao and Oosterhuis 2003; Ahmed et al. 2011). Boron deficiency affects 50% of Pakistan's cotton-growing regions (Ahmed et al. 2013). It has been found that tropical soils with their low levels of organic matter and clay are frequently deficient in B (Communar and Keren 2008; Arif et al. 2012). The soil B concentration of 0.60 mg·kg−1 extracted with hot water has been considered as the threshold for general crops (Aitken and McCallum 1988) while 0.4∼0.55 mg·kg−1 for cotton (Oosterhuis 2001). Ahmad et al. (2019) reported that sandy texture, high pH, and low organic matter content could be the main reasons for low B availability for cotton. Furthermore, high calcium carbonate not only increases the soil pH to reduce B availability but also serves as the binding site for the adsorption of soluble B (Shaaban et al. 2004; Shaaban and Helmy 2006).
B is essential for several metabolic processes in cotton including carbohydrate metabolism, sugar transfer, respiration, flower, and fruit development, cell division and elongation, as well as membrane stability (Blevins and Lukaszewski 1998; Zhao and Oosterhius 2002; Ali et al. 2011; Mengel et al. 2012). B contents above 16 mg·kg−1 in recently matured leaves of cotton are considered sufficient for growth and yield (Rosolem et al. 1999). B deficiency may produce small and deformed bolls, poor flower and fruit setting as well as retention, and consequently reduced seed cotton yield and fiber quality (Roberts et al. 2000; Brown et al. 2002; Fontes et al. 2008). According to Sankaranarayanan et al. (2010), B deficiency at the maturity stage of cotton increases the shedding of flowers and bolls, which eventually lowers the seed cotton yield. Cotton is found to be more sensitive to B deficiency at the reproductive phase which might the major factor of low seed cotton yield on B-deficient soils (Rosolem and Costa 2000; de Oliveira et al. 2006). The boll formation and retention in cotton is greatly affected by carbohydrate contents in plants which depends on the B-driven movement of photo-assimilates from leaves to fruits (Bogiani and Rosolem 2012). The reduction in carbohydrate translocation due to B deficiency may cause increased boll shedding, and less seed cotton yield (Zhao and Oosterhuis 2003). Furthermore, water transport, Ca absorption, hormone biosynthesis, and root growth in plants are severely affected by B deficiency, reducing cotton growth and development (Abdulnour et al. 2000; Lou et al. 2001).
Plant response to B nutrition may vary greatly depending upon crop species, varieties within species, level and method of B application, nature of the soil, and climatic conditions (Ahmad et al. 2009). Rosolem et al. (1999) reported that cotton varieties may behave differentially to B nutrition due to the variations in these varieties' potential for carbohydrate transport, B storage and utilization, and associated mechanisms. B application methods including seed dressing, soil, and foliar application may perform differently depending upon many soil, plant, and climatic factors (Kumar et al. 2018).
An adequate supply of B is required for optimum crop yield and quality. However, the differential response of crop species and varieties within species, and a narrow range between deficiency and toxicity levels of B in soil necessitate the choosing of the optimum B dose for achieving quality crop production. It is considered that Bt and non-Bt cotton cultivars are greatly different in their growth and yield behavior as well as nutritional requirements. The present research was planned to evaluate the effect of different levels and methods of B application on growth, yield, physiological, and fiber quality characteristics of Bt and non-Bt cotton cultivars. The research was based on the hypothesis that a combination of soil and foliar application of B might be more effective to achieve optimum seed cotton yield and fiber quality characteristics.
Experimental site description
The experiment was conducted under natural conditions in an open wirehouse located at 30.10° N, 71.25° E, and 128.3 m elevation at the Faculty of Agricultural Sciences & Technology, Bahauddin Zakariya University, Multan, Pakistan. During the experimental period, the minimum monthly temperature ranged from 19.8 to 28.9 °C while the maximum temperature was in the range of 36.2∼42.2 °C. Relative humidity changed from 35% to 58%, precipitation was 12∼24 mm, evapotranspiration was 3.1∼9.6 mm, the wind speed was 0.83∼2.77 m·s−1, and sunshine was 8.5∼10.5 h per day during this period. The soil was collected from a cultivated field under a cotton-wheat cropping system. The soil was air-dried, pulverized, and passed through a 2-mm sieve prior to filling the pots. Earthen pots with dimensions of 25 cm × 20 cm × 20 cm were used in the experiment. Each pot was lined with a polythene sheet and filled with 20 kg of prepared soil. Selected physicochemical characteristics of soil analyzed prior to experimentation are presented in Table 1.
Table 1 Pre-sowing analysis of experimental soil
The experimental plan comprised of twelve treatments: Control (CK); B at 1 mg·kg−1 soil application (SB1); 2 mg·kg−1 B (SB2); 3 mg·kg−1 B (SB3); 0.2% B foliar spray (FB1); 0.4% B foliar spray (FB2); 1 mg·kg−1 B + 0.2% B foliar spray (SB1 + FB1); 1 mg·kg−1 B + 0.4% B foliar spray (SB1 + FB2); 2 mg·kg−1 B + 0.2% B foliar spray (SB2 + FB1); 2 mg·kg−1 B + 0.4% B foliar spray (SB2 + FB2); 3 mg·kg−1 B + 0.2% B foliar spray (SB3 + FB1); 3 mg·kg−1 B + 0.4% B foliar spray (SB3 + FB2). Each treatment was replicated thrice, and each replication has one pot having two plants. Measurements were made separately for each plant and then averaged to get the mean value for each replication. Boric acid (H3BO3) from Sigma Aldrich Chemicals was used as a B source. Soil application of B was done prior to sowing by incorporating the required amount of H3BO3 into respective pots. While the foliar spray was made at 35 and 70 days after germination using 30 mL solution for each plant per spray. Two cotton cultivars CIM-663 and Cyto-124 were used in the experimentation. CIM-663 was a Bt cultivar having high yield potential, heat tolerance, big boll, and tolerance to pest incidence. It was developed by Central Cotton Research Institute, Multan, Pakistan in the year 2020. It has a fiber length of 28.8 mm, a ginning out turn (GOT) of 38.8%, and a micronaire value of 4.4 µg·inch−1. CYTO-124 was a non-Bt high-yield cultivar that possessed resistance against the leaf curl virus. It was also developed by Central Cotton Research Institute, Multan, Pakistan in the year 2016. It has a fiber length of 30.3 mm, a GOT of 43%, and micronaire value of 4.4 µg·inch−1.
The sowing was done on April 22, 2021. Ten dehulled cottonseeds of each cultivar were sown in each pot and thinned to two seedlings per pot 15 days after germination. The uprooted plants were incorporated into the same pot. Recommended amounts of N 80 mg·kg−1 soil as urea, P2O5 50 mg·kg−1 soil as single superphosphate, and K2O 50 mg·kg−1 soil as potassium sulfate were added. The whole of P, K, and 1/3 N were added at the time of sowing while the remaining N was added in two splits, 40 and 75 days after germination. For plant protection against different insects and pests, Bifenthrin, Pyriproxyfen, Acephate, and Novastar were sprayed when required. Weeding was done manually throughout the experimentation.
Physiological characteristics
Physiological characteristics in terms of chlorophyll contents, membrane stability index (MSI), and relative water content (RWC) were determined during active boll development (80 days after germination). Chlorophyll-a and chlorophyll-b were measured by the methods of Arnon (1949) and Davies (1976) using the 4th topmost leaf from each plant. For this purpose, 0.5 g of leaf sample was treated overnight in 80% acetone. Absorbance readings of the supernatant were recorded at 645 nm (A) and 663 nm (B) with a spectrophotometer (Beckman Coulter DU 730 UV–Vis Spectrophotometer, USA), and the following formulas were used to determine the chlorophyll content:
$${\text{Chlorophyll-a}}\left( {{\text{mg}} \cdot {\text{g}}^{ - 1} } \right) = \left\{ {\left[ {\left( {0.0127 \times {\text{B}}} \right) - \left( {0.00269 \times {\text{A}}} \right)} \right] \times {\text{V}}} \right\}/{\text{W}};$$

$${\text{Chlorophyll-b}}\left( {{\text{mg}} \cdot {\text{g}}^{ - 1} } \right) = \left\{ {\left[ {\left( {0.0229 \times {\text{A}}} \right) - \left( {0.00468 \times {\text{B}}} \right)} \right] \times {\text{V}}} \right\}/{\text{W}};$$
where A and B are absorbance, V is the volume of sample extract, and W is the weight of the sample.
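For concreteness, the two equations above can be implemented directly; the following Python sketch is illustrative only (function and variable names are hypothetical, not part of the original protocol), with A and B the absorbances at 645 and 663 nm, V the extract volume in mL, and W the sample weight in g, as defined above.

    def chlorophyll_content(a_645, a_663, extract_volume_ml, sample_weight_g):
        # Chlorophyll a and b (mg per g fresh weight) from the equations above
        chl_a = ((0.0127 * a_663) - (0.00269 * a_645)) * extract_volume_ml / sample_weight_g
        chl_b = ((0.0229 * a_645) - (0.00468 * a_663)) * extract_volume_ml / sample_weight_g
        return chl_a, chl_b

    # Example with hypothetical absorbance readings for 0.5 g of leaf in 10 mL of 80% acetone
    print(chlorophyll_content(a_645=0.35, a_663=0.62, extract_volume_ml=10.0, sample_weight_g=0.5))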
For MSI estimation, 100 mg of leaf material (the 5th topmost leaf) was divided into two sets and each placed in test tubes containing 10 mL of double distilled water. One set of leaf samples was heated in a water bath at 40 °C for 30 min, and the conductivity of the solution (C1) was measured with a conductivity meter (Elico, CM 183 EC-TDS analyzer, India). The second set of leaf samples in test tubes was heated in a water bath for 20 min at 100 °C, and its conductivity was measured (C2). The MSI was calculated in accordance with method of Blum and Ebercon (1981).
$${\text{MSI}} = \left[ {1{-}\left( {{\text{C}}1/{\text{C}}2} \right)} \right] \times 100.$$
For determining RWC, the 5th topmost leaf from each plant (after measuring MSI) was weighed to record the fresh weight. After that, leaf segments were soaked in distilled water for four hours and reweighed for turgid weight. The leaf segments were then dried at 70 °C for constant weight in an oven (SLN 32, POL-EKO-APARATURA). RWC was calculated according to Barr and Weatherley (1962).
$${\text{RWC}} (\% ) = \frac{{{\text{Fresh}}\;{\text{weight}} - {\text{dry}}\;{\text{weight}}}}{{{\text{Turgid}}\;{\text{weight}} - {\text{dry}}\;{\text{weight}}}} \times 100$$
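Both indices above are simple ratios of the measured quantities; a minimal Python sketch (variable names hypothetical) is given below.

    def membrane_stability_index(c1, c2):
        # C1: conductivity after 30 min at 40 °C; C2: conductivity after 20 min at 100 °C
        return (1.0 - c1 / c2) * 100.0

    def relative_water_content(fresh_weight, turgid_weight, dry_weight):
        # Weights of the same leaf segments: fresh, after 4 h of soaking (turgid), and oven-dried
        return (fresh_weight - dry_weight) / (turgid_weight - dry_weight) * 100.0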
Gas exchange characteristics
Measurements of net photosynthetic rate, transpiration rate, and stomatal conductance were made on the fully expanded 3rd topmost leaf of each plant using an open-system portable infrared gas analyzer (LCA-4 ADC, Analytical Development Company, Hoddesdon, England). Measurement was made at 9.0 am with the following specifications/adjustments; maximum leaf surface PAR was 1 711 µmol·m−2·s−1, the air molar flow per unit leaf area was 403.3 µmol·m−2·s−1, the atmospheric pressure was 99.9 kPa, the water vapor pressure into the chamber was 6.0∼8.9 mbar, the leaf temperature was 30.7∼42.0 °C, the ambient temperature was 28.6∼38.5 °C, and the ambient CO2 concentration was 352 µmol·mol−1.
Leaf boron content
Leaf samples (the 6th and 7th leaves from the top) were collected at 80 days after germination. The leaves were washed with distilled water, air-dried and then oven dried at 72 °C till constant weight in an oven (SLN 32, POL-EKO-APARATURA). Dry ashing was used to determine leaf B content in accordance with Chapman and Pratt (1961). Spectrophotometer (Beckman Coulter DU 730) was used to obtain absorbance measurements of samples, blanks, and standard solutions at 420 nm. B content was calculated using the calibration curve (Bingham 1982; Ho et al. 1986; Malekani and Cresser 1998). The following formula was used to compute the B content;
$${\text{B}}\left( {{\text{mg}} \cdot {\text{kg}}^{ - 1} } \right) = {\text{B}}\left( {{\text{mg}} \cdot {\text{kg}}^{ - 1} ,{\text{from}}\;{\text{calibration}}\;{\text{curve}}} \right) \times {\text{V}}/{\text{W}}$$
where V is the total volume of the plant digest (mL) and W is the weight of dry plant (g).
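Since the B concentration is first read from a linear calibration curve and then scaled by V/W, the whole calculation can be sketched as follows; the standard concentrations and absorbances shown are hypothetical placeholders, not measured values.

    import numpy as np

    # Hypothetical calibration standards: absorbance at 420 nm versus B concentration of the standards
    standard_abs = np.array([0.00, 0.11, 0.21, 0.41, 0.80])
    standard_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
    slope, intercept = np.polyfit(standard_abs, standard_conc, 1)  # linear calibration curve

    def leaf_boron(sample_abs, blank_abs, digest_volume_ml, dry_weight_g):
        conc = slope * (sample_abs - blank_abs) + intercept  # B read from the calibration curve
        return conc * digest_volume_ml / dry_weight_g        # B (mg·kg−1) = B(curve) × V / W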
Plant growth and yield characteristics
Data regarding plant height, monopodial and sympodial branches per plant, and leaf area per plant were recorded at 120 days after germination. Plant height was measured with a meter rod, and leaf area with a leaf area meter (LI-3100, LI-COR, Lincoln, NE), while other characteristics were recorded manually. At maturity, yield characteristics including the number of bolls per plant, boll size, and boll weight were measured. Boll size was measured with a Vernier caliper. Seed cotton yield and lint yield were measured after picking and ginning. For the measurement of ginning out turn (GOT %), lint weight was divided by seed cotton weight and multiplied by 100.
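The ginning out turn described above is a simple percentage; as a minimal sketch (names hypothetical):

    def ginning_out_turn_percent(lint_weight, seed_cotton_weight):
        # GOT (%) = lint weight / seed cotton weight × 100
        return lint_weight / seed_cotton_weight * 100.0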
Fiber quality characteristics
Fiber length was measured using a Fibrograph (ASTM 1994a), fiber strength with a Pressley Fiber Bundle Tester (ASTM 1994b), and fiber fineness with a Micronaire Tester (ASTM 1994c).
The statistical analysis was done in accordance with a completely randomized design with two factors, one factor being the cultivar and the other factor being the B application (Steel et al. 1997). The least significant difference (LSD) test was performed to compare the mean values of different treatments.
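A factorial analysis of this kind can be run, for example, with the statsmodels package in Python; the sketch below is illustrative only (the file and column names are hypothetical), and the LSD comparisons of treatment means would be carried out separately.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # One row per replication, with columns 'cultivar', 'boron_treatment' and 'seed_cotton_yield'
    df = pd.read_csv("cotton_boron_trial.csv")
    model = ols("seed_cotton_yield ~ C(cultivar) * C(boron_treatment)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # two-way ANOVA table for the two-factor CRD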
Plant growth characteristics
Plant growth characteristics of Bt and non-Bt cotton cultivars in terms of plant height, monopodial branches, sympodial branches, and leaf area were significantly (P ≤ 0.05) affected by different B levels and methods of application. When comparing the B levels, a mixed trend was observed. Overall, soil + foliar application performed best followed by soil and foliar application in descending order (Table 2). Among the soil application levels, the tallest plants were found with SB3 for Bt and SB2 for the non-Bt cultivar. In the case of foliar application, FB1 showed superiority over FB2. When combined use of soil and foliar application was done, SB1 + FB1 caused maximum improvement in plant height of both Bt and non-Bt cultivars. The highest increase in monopodial branches plant−1 of Bt cultivar was 377% with SB3 + FB2 compared with CK. However, non-Bt cultivar performed optimally with SB2. Sympodial branches plant−1 were highest with SB1 + FB1 for Bt and SB2 for the non-Bt cultivar. Leaf area was improved with B nutrition, highest improvement of 82.9% in the Bt cultivar with SB3 + FB2 compared with CK. The non-Bt cultivar showed the highest leaf area with SB2 + FB2, indicating that it required a relatively lower level of B than the Bt cultivar.
Table 2 Growth characteristics of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
B levels and methods of application had a significant (P ≤ 0.05) effect on the physiological characteristics of both Bt and non-Bt cotton cultivars. The highest improvement was found with SB2 either alone or in combination with foliar spray (Table 3). Chlorophyll-a contents were found highest with SB3 + FB2 in Bt while with SB2 in the non-Bt cultivar. The highest increase of 140% in chlorophyll-b was observed with SB2 + FB1 in Bt and 194% with SB2 in the non-Bt cultivar compared with CK. B nutrition with FB1 showed superiority over others for improving MSI in both Bt and non-Bt cultivars. Relatively, a slight increase in RWC was found with B nutrition, highest improvement with SB3 + FB2 in Bt while with SB2 in the non-Bt cultivar compared with CK.
Table 3 Physiological characteristics of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
Gas exchange characteristics such as photosynthetic rate, stomatal conductance, and transpiration rate were significantly (P ≤ 0.05) affected by different levels and methods of B application in both Bt and non-Bt cotton cultivars (Table 4). Overall, integrated use of soil and foliar application caused higher improvement in gas exchange characteristics of both cultivars compared with the sole application. The highest photosynthetic rate was found with SB2 + FB2 for Bt and with SB2 for the non-Bt cultivar. Stomatal conductance was improved with B nutrition, with higher improvement in the Bt cultivar compared with the non-Bt one. Among different treatments, SB3 + FB1 performed best for Bt while SB2 + FB1 for the non-Bt cultivar in improving stomatal conductance. When comparing the B application methods for stomatal conductance, the Bt cultivar responded better to foliar application, while the non-Bt cultivar to soil application. In the case of transpiration, SB2 and SB3 performed better than SB1 either alone or in combination with foliar application. Among B application methods, integrated use of soil and foliar application performed best followed by foliar and soil application in descending order.
Table 4 Gas exchange characteristics of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
Leaf B was significantly (P ≤ 0.05) increased with the increasing level of B application. Soil application of B caused a higher increase in leaf B content compared with foliar application. Overall, SB3 + FB2 caused the highest increase of 295% in leaf B content of Bt and 269% in the non-Bt cultivar compared with CK (Fig. 1).
Fig. 1 Leaf B content of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
Yield characteristics
Seed cotton yield and yield characteristics including the number of bolls per plant, boll size, boll weight, and GOT were significantly (P ≤ 0.05) affected by different levels and methods of B application in both Bt and non-Bt cotton cultivars (Fig. 2). The minimum number of bolls per plant was found in CK which improved with B nutrition, the highest improvement with SB3 + FB2 in Bt and SB2 + FB2 in the non-Bt cultivar compared with CK (Fig. 2a). Boll size was maximally improved by 71.6% in Bt cultivar with SB2 + FB1 while 36.5% in a non-Bt cultivar with SB2 compared with CK. Soil application caused a higher increase in boll size compared with foliar (Fig. 2b). The highest boll weight was found with SB3 in Bt and SB2 in the non-Bt cultivar (Fig. 2c). Seed cotton yield was improved with all levels and methods of B application, the highest improvement with SB2 in the Bt and SB1 + FB1 in the non-Bt cultivar (Fig. 2d). Minimum GOT was found in CK which improved maximally with SB1 in the Bt and with SB2 in the non-Bt cultivar (Fig. 2e).
Fig. 2 Seed cotton yield and yield characteristics of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
Fiber quality characteristics in terms of fiber length, fiber strength, and fiber fineness were relatively less affected by different levels and methods of B application in both Bt and non-Bt cultivars compared with growth and yield characteristics (Fig. 3). The highest improvement in fiber length was found with SB1 + FB2 in the Bt and with FB2 in the non-Bt cultivar (Fig. 3a). Fiber strength was least affected among the fiber quality characteristics by B nutrition. It was maximally improved by 13.2% in the Bt cultivar with SB2 + FB2, while 11.5% in the non-Bt cultivar with SB1 compared with CK (Fig. 3b). The highest improvement of 78.6% in fiber fineness of Bt cultivar was found with FB2 while 79.4% of non-Bt cultivar with SB3 compared with CK (Fig. 3c).
Fig. 3 Fiber quality characteristics of Bt and non-Bt cotton cultivars grown with different levels and methods of B application
B-mediated improvement in cotton growth of both cultivars was attributed to its involvement in the synthesis of photosynthetic pigments and photosynthesis (Liu et al. 2000; Karaman et al. 2012; More et al. 2018). According to Dordas (2006), a higher photosynthetic rate at adequate B supply could be the main mechanism for improving cotton growth and development. B-induced improvement in plant height was associated with its role in cell division, cell elongation, and the distance increase between main stem nodes and internodes (Ahmed et al. 2013). B deficiency might inhibit the development of petiole and peduncle cells, resulting in lower cotton growth and productivity (de Oliveira et al. 2006). The mixed trend to change the growth characteristics of both cotton cultivars by different levels and methods of B application was due to the reason that cotton required relatively lower B at the vegetative growth stage (Sagheer et al. 2019). The higher efficiency of soil + foliar application was associated with quick B supply by foliar spray and its persistent availability by soil application (Kumar et al. 2018; Atique-ur-Rehman et al. 2020).
B-mediated improvement in chlorophyll synthesis, MSI, and RWC could be due to its role in the structural stability of chloroplast and cell membrane (Nadim et al. 2012). B involvement in membrane integrity was associated with the synthesis of pectin which is a structural protein improved membrane structure stability (Hu et al. 1996; Wu et al. 2017). Furthermore, B deficiency enhanced the production of reactive oxygen species (ROS) which damaged the structure of chloroplast and cell membrane, resulting in lower chlorophyll content, photosynthesis, and MSI (Hajiboland and Farhanghi 2010; de Souza Júnior et al. 2022).
Improvement in photosynthesis, stomatal conductance, and transpiration with B nutrition was associated with increased leaf area (Li et al. 2012), higher chlorophyll synthesis (Dordas 2006), increased assimilation rate (Nadim et al. 2012), and translocation of photosynthates from source to sink (More et al. 2018). B deficiency could cause leaf yellowing, dieback, brittleness, leaf thickening, vein swelling, and leaf rupturing, all of which led to reduced chlorophyll contents and photosynthetic rate (Liu et al. 2014). B deficiency could decrease the stomatal density and chloroplast contents, which led to lower chlorophyll and photosynthesis (Wei et al. 2022). Boron deficiency might damage the vascular bundles, restricting the transport of water, carbohydrates, and nutrients and leading to lower photosynthesis, transpiration, and stomatal conductance (Li et al. 2017).
The marked increase in seed cotton yield and yield characteristics of Bt and non-Bt cultivars in response to B nutrition could be associated with its role in pollen production and pollen viability, germination and development of pollen tubes, flowering, fruit setting, and retention (Silva et al. 2003; Wang et al. 2003; de Oliveira et al. 2006; Qamar et al. 2020). Increased membrane integrity, photosynthetic rate, and stomatal conductance with B nutrition resulted in an increase in the number of bolls per plant, boll size, and weight (Ahmad et al. 2009). Higher yield characteristics at a high level of B application indicated that the B requirement was more critical at the reproductive phase in cotton (Wei et al. 2022). Furthermore, adequate B application increased the B content in the leaf, which could also contribute to the improved cotton yield by improving chlorophyll synthesis, photosynthesis, enzyme activities, flowering, and boll development (Rashid and Rafique 2002). The higher B requirement of the Bt cultivar was related to its genetic makeup and higher yield potential (Shah et al. 2015).
The improvement in fiber quality with B nutrition might be associated with its role in cell division and differentiation, cell enlargement, photosynthesis, and photosynthates translocation from leaves to bolls (Liu et al. 2000; Zhao and Oosterhuis 2003; de Oliveira et al. 2006; Karaman et al. 2012; Bogiani et al. 2013). The role of B in improving fiber quality was also related to its involvement in enzymatic activities, hormonal balance, protein synthesis, and metabolism (Camacho-Cristobal et al. 2004; Martín-Rejano et al. 2011; Ahmed et al. 2013; Wei et al. 2022). Seilsepour et al. (2013) reported that B could improve the fiber quality of cotton by producing strong and well-developed fibers. B was found to speed up fiber maturity, thus improving the fiber quality characteristics (Rashidi and Seilsepour 2011).
Cotton growth, physiological, yield, and fiber quality characteristics in Bt and non-Bt cultivars were improved by different levels and methods of B application. Among different treatments, SB2 either alone or in combination with foliar spray showed superiority over other treatments. B-mediated improvement in leaf area, chlorophyll synthesis, photosynthesis, stomatal conductance, and MSI could be the principal mechanisms for increased cotton productivity.
Mean data are provided in table and figure files. Replication data are available and can be provided on request.
Abdulnour JE, Donnelly DJ, Barthakur NN. The effect of boron on calcium uptake and growth in micropropagated potato plantlets. Potato Res. 2000;43(3):287–95.
Ahmad S, Akhtar LH, Iqbal N, et al. Short communication cotton (Gossypium hirsutum L.) varieties responded differently to foliar applied boron in terms of quality and yield. Soil Environ. 2009;28(1):88–92.
Ahmad S, Hussain N, Ahmed N, et al. Influence of boron nutrition on physiological parameters and productivity of cotton (Gossypium hirsutum L.) crop. Pak J Bot. 2019;51(2):401–8.
Ahmed N, Abid M, Ahmad F, et al. Impact of boron fertilization on dry matter production and mineral constitution of irrigated cotton. Pak J Bot. 2011;43(6):2903–10.
Ahmed N, Abid M, Rashid A, et al. Boron requirement of irrigated cotton in a typic haplocambid for optimum productivity and seed composition. Commun Soil Sci Plant Anal. 2013;44(8):1293–309.
Aitken RL, McCallum LE. Boron toxicity in soil solution. Soil Res. 1988;26(4):605–10.
Ali L, Ali M, Mohyuddin Q. Effect of foliar application of zinc and boron on seed cotton yield and economics in cotton-wheat cropping pattern. J Agric Res. 2011;49(2):173–80.
Arif M, Shehzad MA, Bashir F, et al. Boron, zinc and microtone effects on growth, chlorophyll contents and yield attributes in rice (Oryza sativa L.) cultivar. African J Biotech. 2012;11(48):10851–8.
Arnon DI. Copper enzymes in isolated chloroplasts Polyphenoloxidase in Beta vulgaris. Plant Physiol. 1949;24(1):1–15.
Ashraf M, Shahzad SM, Imtiaz M, et al. Ameliorative effects of potassium nutrition on yield and fiber quality characteristics of cotton (Gossypium hirsutum L.) under NaCl stress. Soil Environ. 2017;36:51–8.
ASTM. Standard test method for breaking strength and elongation of fibers (flat bundle method). Annual Book of ASTM Standards. Philadelphia: ASTM. 1994c. p. 392–397.
ASTM. Standard test method for fiber length and length distribution of cotton fibers. Annual Book of ASTM Standards. Philadelphia: ASTM. 1994a. p. 753–756.
ASTM. Standard test methods for measurement of cotton fibers by high volume instruments. (HVI). Annual Book of ASTM Standards. Philadelphia: ASTM. 1994b. p. 486–494.
Atique Ur R, Qamar R, Hussain A, et al. Soil applied boron (B) improves growth, yield and fiber quality traits of cotton grown on calcareous saline soil. PLoS ONE. 2020;15(8):e0231805. https://doi.org/10.1371/journal.pone.0231805.
Barr HD, Weatherley PE. A re-examination of the relative turgidity technique for estimating water deficit in leaves. Aust J Biol Sci. 1962;15:413–28.
Bingham FT. Boron. In: Page AL, editor. Methods of soil analysis: part 2 chemical and mineralogical properties. Madison, WI: American Society of Agronomy; 1982. p. 431–48.
Blevins DG, Lukaszewski KM. Boron in plant structure and function. Annu Rev Plant Biol. 1998;49(1):481–500.
Blum A, Ebercon A. Cell membrane stability as a measure of drought and heat tolerance in wheat. Crop Sci. 1981;21:43–7.
Bogiani JC, Rosolem CA. Compared boron uptake and translocation in cotton cultivars. Rev Bras Ciênc Solo. 2012;36:1499–506.
Bogiani JC, Amaro ACE, Rosolem CA. Carbohydrate production and transport in cotton cultivars grown under boron deficiency. Sci Agríc. 2013;70:442–8.
Brown PH, Bellaloui N, Wimmer MA, et al. Boron in plant biology. Plant Boil. 2002;4(02):205–23.
Camacho-Cristóbal JJ, Lunar L, Lafont F, et al. Boron deficiency causes accumulation of chlorogenic acid and caffeoyl polyamine conjugates in tobacco leaves. J Plant Physiol. 2004;161(7):879–81.
Chapman HD, Pratt PF. Methods of analysis for soils, plants and water. 1st ed. Berkeley, CA: University of California; 1961.
Communar G, Keren R. Boron adsorption by soils as affected by dissolved organic matter from treated sewage effluent. Soil Sci Soc Am J. 2008;72(2):492–9.
Davies B. Carotenoids. In: Goodwin TW, Editor. Chemistry and biochemistry of plant pigments. London: Academic Press; 1976. p. 38–165.
de Oliveira RH, Dias Milanez CR, Moraes-Dallaqua MA, et al. Boron deficiency inhibits petiole and peduncle cell development and reduces growth of cotton. J Plant Nutr. 2006;29(11):2035–48.
Dordas C. Foliar boron application affects lint and seed yield and improves seed quality of cotton grown on calcareous soils. Nutr Cycl Agroecosys. 2006;76(1):19–28.
Economic Survey of Pakistan. Agriculture. Islamabad: Ministry of Finance, Government of Pakistan; 2022. p. 17–40.
FAO. The state of food security and nutrition in the world 2021: transforming food systems for food security, improved nutrition and affordable healthy diets for all. Rome: FAO; 2021. https://doi.org/10.4060/cb4474en.
Fontes RLF, Medeiros JF, Neves JCL, et al. Growth of Brazilian cotton cultivars in response to soil applied boron. J Plant Nutr. 2008;31:902–18.
Hajiboland R, Farhanghi F. Remobilization of boron, photosynthesis, phenolic metabolism and anti-oxidant defense capacity in boron-deficient turnip (Brassica rapa L.) plants. Soil Sci Plant Nutr. 2010;56(3):427–37.
Ho SB, Chou FR, Houng KH. Studies on the colorimetric determination of boron by azomethine-H method. Chemistry Chin Chem Soc Taiwan, China. 1986;44(3):80–9.
Hu H, Brown PH, Labavitch JM. Species variability in boron requirement is correlated with cell wall pectin. J Exp Bot. 1996;47(2):227–32.
Karaman MR, Turan M, Yıldırım E, et al. Determination of effects calcium and boron humate on tomato (Lycopersicon esculentum L.) yield parameters, chlorophyll and stomatal conductivity. SAÜ Fen Edebiyat Dergisi. 2012;1:177–85.
Khan HR, Ashraf M, Shahzad SM, et al. Adequate regulation of plant nutrients for improving cotton adaptability to salinity stress. J Appl Agric Biotechnol. 2016;1:47–56.
Kumar S, Kumar D, Sekhon KS, et al. Influence of levels and methods of boron application on the yield and uptake of boron by cotton in a calcareous soil of Punjab. Commun Soil Sci Plant Anal. 2018;49(4):499–514.
Li S, Peng S, Liu Y, et al. Observations on morphological abnormalities of the vessel elements of veins and fruit of citrus under boron deficiency. Plant Sci J. 2012;30(6):624–30.
Li Y, Hou L, Song B, et al. Effects of increased nitrogen and phosphorus deposition on offspring performance of two dominant species in a temperate steppe ecosystem. Sci Rep. 2017;7(1):1–11.
Liu G, Dong X, Liu L, et al. Boron deficiency is correlated with changes in cell wall structure that lead to growth defects in the leaves of navel orange plants. Sci Hortic. 2014;176:54–62.
Liu DH, Jiang WS, Zhang LX, Li LF. Effects of boron ions on root growth and cell division of broad bean (Vicia faba L.). Isr J Plant Sci. 2000;48(1):47–51. https://doi.org/10.1560/C74E-VYKD-FKYK-TQWK.
Lou Y, Yang Y, Xu J. Effect of boron fertilization on B uptake and utilization by oilseed rape (Brassica napus L.) under different soil moisture regimes. J Appl Ecol. 2001;12(3):478–80.
Malekani K, Cresser MS. Comparison of three methods for determining boron in soils, plants, and water samples. Commun Soil Sci Plant Anal. 1998;29(3–4):285–304.
Martín-Rejano EM, Camacho-Cristóbal JJ, Herrera-Rodríguez MB, et al. Auxin and ethylene are involved in the responses of root system architecture to low boron supply in Arabidopsis seedlings. Physiol Plant. 2011;142(2):170–8.
Mengel K, Kirkby EA, Kosegarten H, et al. Boron: principles of plant nutrition. Dordrecht: Springer; 2012. p. 621–33.
More VR, Khargkharate VK, Yelvikar NV, et al. Effect of boron and zinc on growth and yield of Bt. cotton under rainfed condition. Int J Pure App Biosci. 2018;6(4):566–70.
Nadim MA, Awan IU, Baloch MS, et al. Response of wheat (Triticum aestivum L.) to different micronutrients and their application methods. J Anim Plant Sci. 2012;22(1):113–9.
Oosterhuis D. Physiology and nutrition of high yielding cotton in the USA. Informações Agronômicas. 2001;95:18–24.
Qamar R, Hussain A, Sardar H, et al. Soil applied boron (B) improves growth, yield and fiber quality traits of cotton grown on calcareous saline soil. PLoS ONE. 2020;15(8): e0231805.
Rana AW, Ejaz A, Shikoh SH. Cotton crop: a situational analysis of Pakistan PACE policy working paper April 2020. Washington: International Food Policy Research Institute; 2020.
Rashid A, Rafique E. Boron deficiency in cotton grown in calcareous soils of Pakistan, II: correction and criteria for foliar diagnosis. In: Goldbach HE, Brown PH, Rerkasem B, et al. editors. Boron in plant and animal nutrition. Boston, MA: Springer; 2002. p. 357–62. https://doi.org/10.1007/978-1-4615-0607-2_36.
Rashidi M, Seilsepour M. Effect of different application rates of boron on yield and quality of cotton (Gossypium hirsutum). Middle East J Sci Res. 2011;7(5):758–62.
Roberts RK, Gersman JM, Howard DD. Soil-and foliar-applied boron in cotton production: an economic analysis. J Cotton Sci. 2000;4(3):171–7.
Rodrigues DR, Cordeiro CFDS, Echer F. Low soil fertility impairs cotton yield in the early years of no-tillage over degraded pasture. J Plant Nutr. 2022. https://doi.org/10.1080/01904167.2022.2067048.
Rosolem CA, Costa A. Cotton growth and boron distribution in the plant as affected by a temporary deficiency of boron. J Plant Nutr. 2000;23(6):815–25.
Rosolem CA, Esteves JA, Ferelli L. Response of cotton cultivars to boron in nutrient solution. Sci Agric. 1999;56:705–11.
Sagheer A, Nazim H, Niaz A, et al. Influence of boron nutrition on physiological parameters and productivity of cotton (Gossypium hirsutum L.) crop. Pak J Bot. 2019;51(2):401–8.
Sankaranarayanan K, Praharaj CS, Nalayini P, et al. Effect of magnesium, zinc, iron and boron application on yield and quality of cotton (Gossypium hirsutum). Indian J Agric Sci. 2010;80(8):699.
Seilsepour M, Rashidi M, Yarmohammadi-Samani P. Influence of different application rates of boron on biological growth and fiber quality of cotton. Am-Eurasian J Agric Environ Sci. 2013;13(4):548–52.
Shaaban KA, Helmy AM. Response of wheat to mineral and bio N-fertilization under saline conditions. Zag J Agric Res. 2006;33:1189–205.
Shaaban MM, El-Fouly MM, Abdel-Maguid AA. Zinc-boron relationship in wheat plants grown under low or high levels of calcium carbonate in the soil. Pak J Biol Sci. 2004;7(4):633–9.
Shah JA, Shah Z, Rajpar I, et al. Response of cotton genotypes to boron under B-deficient and B-adequate conditions. Pak J Bot. 2015;47(5):1657–63.
Silva AP, Rosa EAS, Haneklaus S. Influence of foliar boron application on fruit set and yield of Hazelnut. J Plant Nutr. 2003;26(3):561–9.
De Souza JP, De Prado RM, Campos CN, et al. Addition of silicon to boron foliar spray in cotton plants modulates the antioxidative system attenuating boron deficiency and toxicity. BMC Plant Biol. 2022;22(1):1–13.
Steel RGD, Torrie JH, Dickey D. Principles and procedures of statistics: a biometrical approach. 3rd ed. New York: McGraw Hill Book Co. Inc.; 1997.
Wang Q, Lu L, Wu X, et al. Boron influences pollen germination and pollen tube growth in Picea meyeri. Tree Physiol. 2003;23(5):345–51.
Wei R, Huang M, Huang D, et al. Growth, gas exchange, and boron distribution characteristics in two grape species plants under boron deficiency condition. Horticulturae. 2022;8(5):374.
White PJ, Brown P. Plant nutrition for sustainable development and global health. Ann Bot. 2010;105(7):1073–80.
Wu X, Riaz M, Yan L, et al. Boron deficiency in trifoliate orange induces changes in pectin composition and architecture of components in root cell walls. Front Plant Sci. 2017;8:1882.
Yaseen M, Ahmed W, Shahbaz M. Role of foliar feeding of micronutrients in yield maximization of cotton in Punjab. Turk J Agric for. 2013;37(4):420–6.
Yeates SJ, Constable GA, McCumstie T. Irrigated cotton in the tropical dry season. III: Impact of temperature, cultivar and sowing date on fibre quality. Field Crops Res. 2010;116(3):300–7.
Zhao D, Oosterhuis DM. Cotton carbon exchange, nonstructural carbohydrates, and boron distribution in tissues during development of boron deficiency. Field Crops Res. 2002;78(1):75–87.
Zhao D, Oosterhuis DM. Cotton growth and physiological responses to boron deficiency. J Plant Nutr. 2003;26(4):855–67.
Authors are very grateful to Central Cotton Research Institute, Multan, Pakistan for providing seeds of two cotton cultivars.
Department of Soil Science, Bahauddin Zakariya University, Multan, Pakistan
MEHRAN Muhammad & ASHRAF Muhammad
Department of Soil and Environmental Sciences, College of Agriculture, University of Sargodha, Sargodha, Pakistan
SHAHZAD Sher Muhammad
Pesticide Residue Laboratory, Kala Sha Kaku, Pakistan
SHAKIR Muhammad Siddique
Department of Plant Breeding and Genetics, University of Agriculture, Faisalabad, Pakistan
AZHAR Muhammad Tehseen
Physiological/Chemistry Section, Central Cotton Research Institute, Multan, Pakistan
AHMAD Fiaz
Soil and Water Testing Laboratory, Jhelum, Pakistan
ALVI Alamgir
School of Agricultural Science, Zhengzhou University, Zhengzhou, 450000, China
MEHRAN Muhammad
ASHRAF Muhammad
Mehran M executed the experiment. Ashraf M and Shahzad SM helped in planning and writing the manuscript. Shakir MS, Ahmad F, and Alvi A helped in research planning and analytical work. Azhar MT helped with data analysis. All authors read and approved the final manuscript.
Correspondence to ASHRAF Muhammad.
None of the authors have any conflict of interest.
MEHRAN, M., ASHRAF, M., SHAHZAD, S.M. et al. Growth, yield and fiber quality characteristics of Bt and non-Bt cotton cultivars in response to boron nutrition. J Cotton Res 6, 1 (2023). https://doi.org/10.1186/s42397-023-00138-x
Accepted: 10 January 2023
Keywords: Fiber length, Fiber strength, Micronaire value, Seed cotton yield
Title: Homological algebra of modules over posets
Authors: Ezra Miller
(Submitted on 31 Jul 2020 (v1), last revised 11 Aug 2020 (this version, v2))
Abstract: Homological algebra of modules over posets is developed, as closely parallel as possible to that of finitely generated modules over noetherian commutative rings, in the direction of finite presentations and resolutions. Centrally at issue is how to define finiteness to replace the noetherian hypothesis which fails. The tameness condition introduced for this purpose captures finiteness for variation in families of vector spaces indexed by posets in a way that is characterized equivalently by distinct topological, algebraic, combinatorial, and homological manifestations. Tameness serves both theoretical and computational purposes: it guarantees finite presentations and resolutions of various sorts, all related by a syzygy theorem, amenable to algorithmic manipulation. Tameness and its homological theory are new even in the finitely generated discrete setting of $\mathbb{N}^n$-gradings, where tame is materially weaker than noetherian. In the context of persistent homology of filtered topological spaces, especially with multiple real parameters, the algebraic theory of tameness yields topologically interpretable data structures in terms of birth and death of homology classes.
Comments: 43 pages, 16 figures. Supersedes the homological portion of arXiv:1908.09750 (which, in turn, superseded the first few sections of arXiv:1709.08155). The material concerning primary decomposition in partially ordered groups and proofs of conjectures due to Kashiwara and Schapira are now in separate manuscripts; they involve different background and running hypotheses. v2: updated references
Subjects: Algebraic Topology (math.AT); Commutative Algebra (math.AC); Algebraic Geometry (math.AG); Combinatorics (math.CO); Representation Theory (math.RT)
MSC classes: 05E40, 13E99, 06B15, 13D02, 55N31, 06A07, 32B20, 14P10, 52B99, 13A02, 13P20, 68W30, 13P25, 62R40, 06A11, 06F20, 06F05, 68T09
Cite as: arXiv:2008.00063 [math.AT]
(or arXiv:2008.00063v2 [math.AT] for this version)
From: Ezra Miller
[v1] Fri, 31 Jul 2020 20:07:59 GMT (355kb)
[v2] Tue, 11 Aug 2020 03:34:43 GMT (355kb) | CommonCrawl |
Computing connecting orbits to infinity associated with a homoclinic flip bifurcation
Andrus Giraldo , Bernd Krauskopf , and Hinke M. Osinga
Department of Mathematics, University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
* Corresponding author: [email protected]
Received December 2019 Published July 2020
Fund Project: AG is supported by the Dodd-Walls Centre for Photonic and Quantum Technologies; BK and HMO are supported by Royal Society of New Zealand Marsden Fund grant 16-UOA-286
We consider the bifurcation diagram in a suitable parameter plane of a quadratic vector field in $ \mathbb{R}^3 $ that features a homoclinic flip bifurcation of the most complicated type. This codimension-two bifurcation is characterized by a change of orientability of associated two-dimensional manifolds and generates infinite families of secondary bifurcations. We show that curves of secondary $ n $-homoclinic bifurcations accumulate on a curve of a heteroclinic bifurcation involving infinity.
We present an adaptation of the technique known as Lin's method that enables us to compute such connecting orbits to infinity. We first perform a weighted directional compactification of $ \mathbb{R}^3 $ with a subsequent blow-up of a non-hyperbolic saddle at infinity. We then set up boundary-value problems for two orbit segments from and to a common two-dimensional section: the first is to a finite saddle in the regular coordinates, and the second is from the vicinity of the saddle at infinity in the blown-up chart. The so-called Lin gap along a fixed one-dimensional direction in the section is then brought to zero by continuation. Once a connecting orbit has been found in this way, its locus can be traced out as a curve in a parameter plane.
Keywords: Homoclinic flip bifurcation, compactification, blow-up, Lin's method, connecting orbit, global invariant manifold, boundary-value problem.
Mathematics Subject Classification: 37C29, 37G25, 37M21.
Citation: Andrus Giraldo, Bernd Krauskopf, Hinke M. Osinga. Computing connecting orbits to infinity associated with a homoclinic flip bifurcation. Journal of Computational Dynamics, 2020, 7 (2) : 489-510. doi: 10.3934/jcd.2020020
Figure 1. Bifurcation diagram of system (2) showing: the curve of primary homoclinic bifurcation (brown), along which the homoclinic orbit changes at $ \mathbf{C}_{\rm in} $ from being orientable along $ \mathbf{H_o} $ to being non-orientable along $ \mathbf{H_t} $; curves $ \mathbf{SNP} $ and $ \mathbf{SNP^3} $ (green) of saddle-node bifurcation of periodic orbits; the first two curves $ \mathbf{PD} $ and $ \mathbf{PD^2} $ (red) of a cascade of period-doubling bifurcations; and the curves $ \mathbf{H^n} $ (increasingly darker shades of cyan) of $ n $-homoclinic bifurcations for $ n = 2, 3, 4, 5 $, and $ 6 $. On $ \mathbf{H^n} $ there are points $ \mathbf{C^n_{\rm O}} $ of orbit flip bifurcations (blue dots) and on $ \mathbf{H^2} $ there is a point $ \mathbf{C^2_{\rm I}} $ of inclination flip bifurcation (open dot). Panel (a) shows the $ (\alpha, \beta) $-plane, while panel (b) shows the $ (\alpha, \hat{\beta}) $-plane, where $ \hat{\beta} $ is the distance in the $ \beta $-coordinate from the curve $ \mathbf{H_{o/t}} $ of primary homoclinic bifurcation, which is now at $ \hat{\beta} = 0 $ (brown horizontal line). Panel (c) is an enlargement of the $ (\alpha, \hat{\beta}) $-plane near $ \mathbf{C}_{\rm in} $
Figure 2. Phase portraits of system (2) along $ \mathbf{H_{t}} $, at $ \mathbf{C}_{\rm in} $ and along $ \mathbf{H_{o}} $ with enlargements near the saddle $ \mathbf{0} $ (top row). Shown are the saddle $ \mathbf{0} $, the homoclinic orbit $ \mathbf{\Gamma_{\rm HOM}} $ (brown curve) formed by one branch of $ W^s(\mathbf{0}) $, the other branch of $ W^s(\mathbf{0}) $ (cyan curve), a first part of $ W^{u}(\mathbf{0}) $ (red surface), and $ W^{uu}(\mathbf{0}) $ (magenta curve). Here $ (\alpha, \beta) = (5.8, 1.7010) $ in panel $ \mathbf{H_{o}} $, $ (\alpha, \beta) = (5.3573, 2.1917) $ in panel $ \mathbf{C}_{\rm in} $ and $ (\alpha, \beta) = (5.1, 2.717) $ in panel $ \mathbf{H_{t}} $
Figure 3. The primary homoclinic orbit on $ \mathbf{H_t} $ and the $ n $-homoclinic orbits $ \mathbf{H^2} $ to $ \mathbf{H^6} $ of system (2) for $ \alpha = 5.3 $, shown in $ \mathbb{R}^3 $ in brown and increasingly darker shades of cyan to match the colors of the corresponding bifurcation curves in Fig. 1
Figure 4. Dynamics at infinity for system (5), or system (4) with $ \bar{w} = 0 $, shown in the $ (\bar{x}, \bar{z}) $-plane in panel (a). Panel (b) shows the projection of panel (a) onto the corresponding Poincaré half-sphere with $ y_{\rm s} > 0 $ in the compactified $ (x_{\rm s}, y_{\rm s}, z_{\rm s}) $-cordinates
Figure 5. Dynamics near the equilibrium $ (\bar{x}, \bar{z}, \bar{w}) = (0, 0, 0) $ of system (4). The behavior in the $ (x_{\rm B}, z_{\rm B}) $-plane, that is, the blow-up chart (6) with $ w_{\rm B} = 0 $, is shown in panel (a). It corresponds to the dynamics on a half-sphere around the origin in the $ (\bar{x}, \bar{z}, \bar{w}) $-space, as is illustrated in panel (b); compare also with Fig. 4(a)
Figure 6. Numerical simulations suggest the existence of a cylinder-shaped separatrix $ S_{\rm c} $ of system (6) between trajectories that converge to the equilibrium $ (x_{\rm B}, z_{\rm B}, w_{\rm B}) = (0, -\alpha, 0) $, such as the orange trajectory, and those that do not, such as the blue trajectory. Panel (a) shows the $ (x_{\rm B}, z_{\rm B}, w_{\rm B}) $-space near $ (0, -\alpha, 0) $ and panel (b) the associated intersection sets with the plane defined by $ z_{\rm B} = -\alpha $
Figure 7. The separatrix $ S_{\rm c} $ (purple surface) as represented locally by the cylinder $ C_{r^*} $, shown in the $ (\bar{x}, \bar{z}, \bar{w}) $-space of system (4). Panel (a) shows $ S_{\rm c} $ emerging from the blown-up half-sphere, while in panel (b), $ S_{\rm c} $ is a cone that emerges from the origin
Figure 8. Set-up with Lin's method to compute a connecting orbit from $ \mathbf{q}_\infty $ to $ \mathbf{0} $ with two orbit segments that meet in the common Lin section $ \Sigma $ (green plane), illustrated in compactified Poincaré coordinates. Panel (a) shows the initially chosen orbit segments $ \mathbf{u} $ (cyan) to $ \mathbf{0} $ and $ \mathbf{u}_{\rm B} $ (magenta) from $ \mathbf{q}_\infty $ for $ \beta = 1.8 $ that define the Lin space $ Z $ (which appears curved in this representation); note that the Lin gap $ \eta $ is initially nonzero. Panel (b) shows the situation for $ \beta = 2.08874 $ where $ \eta = 0 $ and $ \mathbf{u} $ and $ \mathbf{u}_{\rm B} $ connect in $ \Sigma $ to form the heteroclinic connection; here, $ \alpha = 5.3 $
Figure 9. Bifurcation diagram of system (2) with the additional curve $ \mathbf{Het^\infty} $ (magenta) of heteroclinic bifurcation involving the point $ \mathbf{q}_\infty $ at infinity. Panel (a) shows how $ W^s(\mathbf{0}) $ spirals towards infinity in the $ (x, y, z) $-space to form the heteroclinic connection on $ \mathbf{Het^\infty} $ for $ \alpha = 5.3 $ and $ \beta = 2.08874 $; see Fig. 3 for comparison. Panel (b) shows the overall bifurcation diagram in the $ (\alpha, \hat{\beta}) $-plane and panel (c) is an enlargement near the point $ \mathbf{C}_{\rm in} $; see Fig. 1 for details on the other bifurcation curves
Figure 10. To the left of the curve $ \mathbf{Het^\infty} $ in the $ (\alpha, \hat{\beta}) $-plane, the stable manifold of $ W^s(\mathbf{0}) $ approaches, but does not connect to $ \mathbf{q}_\infty $, because it lies outside $ S_{\rm c} $ (a). To the right of $ \mathbf{Het^\infty} $, it lies inside $ S_{\rm c} $ and so connects to $ \mathbf{q}_\infty $. The illustration in compactified Poincaré coordinates is for $ \alpha = 5.3 $ with $ \beta = 2.9 $ in panel (a) and $ \beta = 2.8 $ in panel (b)
Figure 11. Set-up with Lin's method to compute a connecting orbit from $ \mathbf{q}_\infty $ to a saddle periodic orbit $ \Gamma_o $ (green curve) with two orbit segments that meet in the common Lin section $ \Sigma $ (green plane), illustrated in compactified Poincaré coordinates for $ \alpha = 6.2 $ and $ \beta = 1.6 $. Panel (a) shows the initially chosen orbit segments $ \mathbf{u} $ (cyan) to $ \Gamma_o $ and $ \mathbf{u}_{\rm B} $ (magenta) from $ \mathbf{q}_\infty $ that define the Lin space $ Z $ (which appears curved in this representation); note that the Lin gap $ \eta $ is initially nonzero. Panel (b) shows the situation where $ \eta = 0 $ and $ \mathbf{u} $ and $ \mathbf{u}_{\rm B} $ connect in $ \Sigma $ to form the heteroclinic connection
\begin{document}
\title[Moduli of stable sheaves supported on curves of genus $3$ in $\mathbb P^1 \times \mathbb P^1$] {Moduli of stable sheaves supported on curves of genus three contained in a quadric surface}
\author{Mario Maican} \address{Institute of Mathematics of the Romanian Academy, Calea Grivitei 21, Bucharest 010702, Romania}
\email{[email protected]}
\begin{abstract} We study the moduli space of stable sheaves of Euler characteristic $1$ supported on curves of arithmetic genus $3$ contained in a smooth quadric surface. We show that this moduli space is rational. We compute its Betti numbers by studying the variation of the moduli spaces of $\alpha$-semi-stable pairs. We classify the stable sheaves using locally free resolutions or extensions. We give a global description: the moduli space is obtained from a certain flag Hilbert scheme by performing two flips followed by a blow-down. \end{abstract}
\subjclass[2010]{Primary 14D20, 14D22} \keywords{Moduli spaces, Semi-stable sheaves, Wall crossing}
\maketitle
\section{Introduction} \label{introduction}
Let $\mathbb P^1$ be the projective line over $\mathbb C$ and consider the surface $\mathbb P^1 \times \mathbb P^1$ with fixed polarization $\mathcal O(1, 1) = \mathcal O_{\mathbb P^1}(1) \otimes \mathcal O_{\mathbb P^1}(1)$. For a coherent algebraic sheaf ${\mathcal F}$ on $\mathbb P^1 \times \mathbb P^1$, with support of dimension $1$, the Euler characteristic $\chi({\mathcal F}(m, n))$ is a polynomial expression in $m$, $n$, of the form \[ P_{{\mathcal F}}(m, n) = rm + sn + t, \] where $r$, $s$, $t$ are integers depending only on ${\mathcal F}$. This is the \emph{Hilbert polynomial} of ${\mathcal F}$. The \emph{slope} of ${\mathcal F}$ is \[ \operatorname{p}({\mathcal F}) = \frac{t}{r+s}. \] Let $\operatorname{M}(P)$ be the coarse moduli space of S-equivalence classes of sheaves on $\mathbb P^1 \times \mathbb P^1$ that are semi-stable with respect to the fixed polarization and that have Hilbert polynomial $P$. We recall that ${\mathcal F}$ is semi-stable, respectively, stable, if it is pure and for any proper subsheaf ${\mathcal F}' \subset {\mathcal F}$ we have $\operatorname{p}({\mathcal F}') \le \operatorname{p}({\mathcal F})$, respectively, $\operatorname{p}({\mathcal F}') < \operatorname{p}({\mathcal F})$. According to \cite{lepotier}, $\operatorname{M}(P)$ is projective, irreducible, and smooth at points given by stable sheaves. Its dimension is $2rs + 1$ if $r > 0$ and $s > 0$. The spaces $\operatorname{M}(rm + n + 1)$, $\operatorname{M}(2m + 2n + 1)$ and $\operatorname{M}(2m + 2n + 2)$ were studied in \cite{ballico_huh}. In fact, it is not difficult to see that $\operatorname{M}(rm + n + 1)$ consists of the structure sheaves of curves of degree $(1, r)$, so it is isomorphic to $\mathbb P^{2r + 1}$. The space $\operatorname{M}(3m + 2n + 1)$ was studied in \cite{choi_katz_klemm} and \cite{genus_two}. We refer to the introductory section of \cite{genus_two} for more background information.
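For instance, for the Hilbert polynomial $P(m, n) = 4m + 2n + 1$ considered below we have $r = 4$, $s = 2$, $t = 1$, hence $\operatorname{p}({\mathcal F}) = 1/6$ and $\dim \operatorname{M}(P) = 2 \cdot 4 \cdot 2 + 1 = 17$.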
This paper is concerned with the study of $\mathbf M = \operatorname{M}(4m + 2n + 1)$. The closed points of $\mathbf M$ are in a bijective correspondence with the isomorphism classes $[{\mathcal F}]$ of stable sheaves ${\mathcal F}$ supported on curves of degree $(2, 4)$ and satisfying the condition $\chi({\mathcal F}) = 1$. As already mentioned, $\mathbf M$ is a smooth irreducible projective variety of dimension $17$. For any $t \in \mathbb Z$, twisting by $\mathcal O(t,t)$ gives an isomorphism $\mathbf M \simeq \operatorname{M}(4m + 2n + 6t + 1)$. According to \cite[Corollary 1]{genus_two}, $\mathbf M \simeq \operatorname{M}(4m + 2n -1)$. In the following theorem we classify the sheaves in $\mathbf M$.
\begin{theorem} \label{main_theorem} The variety $\mathbf M$ can be decomposed into an open subset $\mathbf M_0$, two closed irreducible subsets $\mathbf M_2^{}$, $\mathbf M_2'$, each of codimension $2$, a locally closed irreducible subset $\mathbf M_3$ of codimension $3$, and a locally closed irreducible subset $\mathbf M_4$ of codimension $4$. These subsets are defined as follows: $\mathbf M_0$ is the set of sheaves ${\mathcal F}$ having a resolution of the form \[ 0 \longrightarrow \mathcal O(-1, -3) \oplus \mathcal O(0, -3) \oplus \mathcal O(-1, -2) \stackrel{\varphi}{\longrightarrow} \mathcal O(0, -2) \oplus \mathcal O(0, -2) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0, \] where the entries $\varphi_{12}$ and $\varphi_{22}$ are linearly independent and the maximal minors of the matrix $(\varphi_{ij})_{i = 1, 2, j = 1, 2, 3}$, describing the corestriction of $\varphi$ to the first two summands, have no common factor; $\mathbf M_2$ is the set of sheaves ${\mathcal F}$ having a resolution of the form \[ 0 \longrightarrow \mathcal O(-2, -2) \oplus \mathcal O(-1, -3) \overset{\varphi}{\longrightarrow} \mathcal O(-1, -2) \oplus \mathcal O(0, 1) \longrightarrow {\mathcal F} \longrightarrow 0, \] with $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$; $\mathbf M_2'$ is the set of sheaves ${\mathcal F}$ having a resolution of the form \[ 0 \longrightarrow \mathcal O(-2, -1) \oplus \mathcal O(-1, -4) \overset{\varphi}{\longrightarrow} \mathcal O(-1, -1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0, \] with $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$; $\mathbf M_4$ is the set of extensions of the form \[ 0 \longrightarrow \mathcal O_Q \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L(1, 0) \longrightarrow 0 \] satisfying the condition $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$, where $Q \subset \mathbb P^1 \times \mathbb P^1$ is a quintic curve of degree $(2, 3)$ and $L \subset \mathbb P^1 \times \mathbb P^1$ is a line of degree $(0, 1)$; $\mathbf M_3$ is the set of extensions of the form \[ 0 \longrightarrow \mathcal O_Q(p) \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L \longrightarrow 0, \] where $\mathcal O_Q(p)$ is a non-split extension of $\mathbb C_p$ by $\mathcal O_Q$ for a point $p \in Q$, and satisfying the condition $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$.
Moreover, $\mathbf M_2$ is the Brill-Noether locus of sheaves for which $\operatorname{H}^1({\mathcal F}) \neq \{ 0 \}$. \end{theorem}
\noindent The proof of Theorem \ref{main_theorem}, given in Section \ref{classification}, relies on the Beilinson spectral sequence, which we recall in Section \ref{preliminaries}. The varieties $X$ that appear in this paper have no odd homology, so we can define the Poincar\'e polynomial \[ \operatorname{P}(X)(\xi) = \sum_{i \ge 0} \dim_{\mathbb Q}^{} \operatorname{H}^i(X, {\mathbb Q}) \xi^{i/2}. \]
\begin{theorem} \label{poincare_polynomial} The Euler characteristic of $\mathbf M$ is $288$. The Poincar\'e polynomial of $\mathbf M$ is \begin{multline*} \xi^{17} + 3\xi^{16} + 8\xi^{15} + 16\xi^{14} + 21\xi^{13} + 23\xi^{12} + 24\xi^{11} + 24\xi^{10} + 24\xi^9 \\ + 24\xi^8 + 24\xi^7 + 24\xi^6 + 23\xi^5 + 21\xi^4 + 16\xi^3 + 8\xi^2 + 3\xi + 1. \end{multline*} \end{theorem}
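\noindent As a plausibility check, which is not used anywhere in the arguments below, note that the coefficients listed in Theorem \ref{poincare_polynomial} sum to $288$ and form a palindromic sequence, in agreement with Poincar\'e duality for the smooth projective variety $\mathbf M$. The following short Python script records this bookkeeping; it only re-uses the coefficients stated above and does not compute them independently.

\begin{verbatim}
# Coefficients of the Poincare polynomial of M, in degrees 0, 1, ..., 17.
coefficients = [1, 3, 8, 16, 21, 23, 24, 24, 24, 24, 24, 24, 23, 21, 16, 8, 3, 1]

# The Euler characteristic is the value of the Poincare polynomial at xi = 1.
assert sum(coefficients) == 288

# Poincare duality for a smooth projective variety with no odd homology
# forces the sequence of Betti numbers to be palindromic.
assert coefficients == coefficients[::-1]
\end{verbatim}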
\noindent The proof of this theorem rests on the wall-crossing method of Choi and Chung \cite{choi_chung}. In Section \ref{variation} we investigate how the moduli spaces $\operatorname{M}^{\alpha}(4m + 2n + 1)$ of $\alpha$-semi-stable pairs with Hilbert polynomial $4m + 2n + 1$ change as the parameter $\alpha$ varies. In Theorem \ref{wall_crossing} we find that these moduli spaces are related by two explicitly described flipping diagrams. Combining this with Proposition \ref{blow_up} yields a global description: $\mathbf M$ is obtained from the flag Hilbert scheme of three points on curves of degree $(2, 4)$ in $\mathbb P^1 \times \mathbb P^1$ by performing two flips followed by a blow-down with center the Brill-Noether locus $\mathbf M_2$.
The total space $X$ of $\omega_{\mathbb P^1 \times \mathbb P^1}$ is a Calabi-Yau threefold. For a homology class $\beta = (r, s) \in \operatorname{H}_2(\mathbb P^1 \times \mathbb P^1) \subset \operatorname{H}_2(X)$ let $N_{\beta}(X)$ be the genus zero Gromov-Witten invariant of $X$ and let $n_{\beta}(X)$ be the genus zero Gopakumar-Vafa invariant of $X$, as introduced in \cite{katz}. It was noticed in \cite{choi_katz_klemm} that, up to sign, the latter is the Euler characteristic of a moduli space: \[ n_{\beta}(X) = (-1)^{\dim \operatorname{M}(rm + sn + 1)} e(\operatorname{M}(rm + sn + 1)). \] In \cite{katz} Katz conjectured the relation \[
N_{\beta}(X) = \sum_{k | \beta} \frac{n_{\beta/k}(X)}{k^3}. \] For $\beta = (4, 2)$, this conjecture reads \begin{align*} N_{(4, 2)}(X) = & (-1)^{\dim \operatorname{M}(4m + 2n + 1)} e(\operatorname{M}(4m + 2n + 1)) + \frac{1}{8}(-1)^{\dim \operatorname{M}(2m + n + 1)} e(\operatorname{M}(2m + n + 1)) \\ = & (-1)^{\dim \mathbf M} e(\mathbf M) + \frac{1}{8} (-1)^{\dim \mathbb P^5} e(\mathbb P^5) = (-1)^{17} 288 + \frac{1}{8} (-1)^5 6 = -288.75 \end{align*}
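\noindent The numerical value above can be reproduced by the elementary computation below, written in Python and included only as a sanity check of the arithmetic; the Euler characteristics and dimensions are the ones already quoted ($e(\mathbf M) = 288$, $\dim \mathbf M = 17$, $e(\mathbb P^5) = 6$, $\dim \mathbb P^5 = 5$), so the script does not provide an independent computation of $N_{(4, 2)}(X)$.

\begin{verbatim}
from fractions import Fraction

# Genus zero Gopakumar-Vafa invariants n_beta(X) = (-1)^dim e(M) for
# beta = (4, 2) and for beta/2 = (2, 1).
n_42 = (-1) ** 17 * 288     # M(4m + 2n + 1) has dimension 17 and e = 288
n_21 = (-1) ** 5 * 6        # M(2m + n + 1) = P^5 has dimension 5 and e = 6

# Katz's formula for beta = (4, 2); the divisors k of beta are k = 1, 2.
N_42 = Fraction(n_42) + Fraction(n_21, 2 ** 3)

assert N_42 == Fraction(-1155, 4)    # that is, -288.75
\end{verbatim}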
\section{Preliminaries} \label{preliminaries}
Our main technical tool in Section \ref{classification} will be the Beilinson spectral sequence. Let ${\mathcal F}$ be a coherent sheaf on $\mathbb P^1 \times \mathbb P^1$. According to \cite[Lemma 1]{buchdahl}, we have a spectral sequence converging to ${\mathcal F}$, whose first level $E_1$ has display diagram \begin{equation} \label{E_1} \xymatrix { \operatorname{H}^2({\mathcal F}(-1, -1)) \otimes \mathcal O(-1, -1) = E_1^{-2, 2} \ar[r] & E_1^{-1, 2} \ar[r] & E_1^{0, 2} = \operatorname{H}^2({\mathcal F}) \otimes \mathcal O \\ \operatorname{H}^1({\mathcal F}(-1, -1)) \otimes \mathcal O(-1, -1) = E_1^{-2, 1} \ar[r]^-{\theta_1} & E_1^{-1, 1} \ar[r]^-{\theta_2} & E_1^{0, 1} = \operatorname{H}^1({\mathcal F}) \otimes \mathcal O \\ \operatorname{H}^0({\mathcal F}(-1, -1)) \otimes \mathcal O(-1, -1) = E_1^{-2, 0} \ar[r]^-{\theta_3} & E_1^{-1, 0} \ar[r]^-{\theta_4} & E_1^{0, 0} = \operatorname{H}^0({\mathcal F}) \otimes \mathcal O } \end{equation} where $E_1^{ij} = \{ 0 \}$ if $i \notin \{ -2, -1, 0 \}$ or if $j \notin \{ 0, 1, 2 \}$ and \begin{equation} \label{E_1^{-1,j}} E_1^{-1, j} = \operatorname{H}^j({\mathcal F}(0, -1)) \otimes \mathcal O(0, -1) \oplus \operatorname{H}^j({\mathcal F}(-1, 0)) \otimes \mathcal O(-1, 0). \end{equation} If ${\mathcal F}$ has support of dimension $1$, then the first row of (\ref{E_1}) vanishes and the convergence of the spectral sequence forces $\theta_2$ to be surjective and yields the exact sequence \begin{equation} \label{convergence} 0 \longrightarrow {\mathcal Ker}(\theta_1) \stackrel{\theta_5}{\longrightarrow} {\mathcal Coker}(\theta_4) \longrightarrow {\mathcal F} \longrightarrow {\mathcal Ker}(\theta_2)/{\mathcal Im}(\theta_1) \longrightarrow 0. \end{equation} An application of the Beilinson spectral sequence is the following lemma that will be used in Section \ref{classification}.
\begin{lemma} \label{length_3_scheme} Let $Z \subset \mathbb P^1 \times \mathbb P^1$ be a zero-dimensional subscheme of length $3$ that is not contained in a line of degree $(1, 0)$ or $(0, 1)$. Then the ideal of $Z$ has resolution \[ 0 \longrightarrow 2\mathcal O(-2, -2) \stackrel{\zeta^{\scriptscriptstyle \operatorname{T}}}{\longrightarrow} \mathcal O(-1, -2) \oplus \mathcal O(-2, -1) \oplus \mathcal O(-1, -1) \longrightarrow {\mathcal I}_Z \longrightarrow 0, \] where the maximal minors of $\zeta$ have no common factor. The dual of the structure sheaf of $Z$ has resolution \begin{equation} \label{Z_resolution} 0 \longrightarrow \mathcal O(-2, -4) \longrightarrow \mathcal O(-1, -3) \oplus \mathcal O(0, -3) \oplus \mathcal O(-1, -2) \stackrel{\zeta}{\longrightarrow} 2\mathcal O(0, -2) \longrightarrow {\mathcal Ext}^2(\mathcal O_Z, \mathcal O) \longrightarrow 0. \end{equation} \end{lemma}
\begin{proof} We apply the spectral sequence (\ref{E_1}) to the sheaf ${\mathcal F} = {\mathcal I}_Z(1, 1)$. By hypothesis, $\operatorname{H}^0({\mathcal I}_Z(1, 0)) = \{ 0 \}$ and $\operatorname{H}^0({\mathcal I}_Z(0, 1)) = \{ 0 \}$ hence, from (\ref{E_1^{-1,j}}), we obtain the vanishing of $E_1^{-1, 0}$. Since $\operatorname{H}^0({\mathcal I}_Z) = \{ 0 \}$, also $E_1^{-2, 0}$ vanishes. From the short exact sequence \[ 0 \longrightarrow {\mathcal I}_Z \longrightarrow \mathcal O \longrightarrow \mathcal O_Z \longrightarrow 0 \] we obtain the vanishing of $\operatorname{H}^2({\mathcal I}_Z)$. Analogously, $\operatorname{H}^2({\mathcal I}_Z(1, 0))$, $\operatorname{H}^2({\mathcal I}_Z(0, 1))$ and $\operatorname{H}^2({\mathcal I}_Z(1, 1))$ vanish. The first row of (\ref{E_1}) vanishes. Denote $d = \dim_{\mathbb C}^{} \operatorname{H}^1({\mathcal I}_Z(1, 1))$. Display diagram (\ref{E_1}) now takes the simplified form \[ \xymatrix { 0 \ar[r] & 0 \ar[r] & 0 \\ 2\mathcal O(-1, -1) \ar[r]^-{\theta_1} & \mathcal O(0, -1) \oplus \mathcal O(-1, 0) \ar[r]^-{\theta_2} & d\mathcal O \\ 0 \ar[r] & 0 \ar[r] & (d + 1)\mathcal O } \] From the convergence of the spectral sequence we see that $\theta_2$ is surjective. There is no surjective morphism $\theta_2 \colon \mathcal O(0, -1) \oplus \mathcal O(-1, 0) \to d\mathcal O$ for $d \ge 1$, hence $d = 0$. Thus, ${\mathcal Ker}(\theta_1)$ is a subsheaf of $\mathcal O$. We claim that ${\mathcal Ker}(\theta_1) = \{ 0 \}$. Indeed, if ${\mathcal Ker}(\theta_1)$ were non-zero, then $\mathcal O/{\mathcal Ker}(\theta_1)$ would be a torsion subsheaf of ${\mathcal I}_Z(1, 1)$. Combining the exact sequences \[ 0 \longrightarrow \mathcal O \longrightarrow {\mathcal I}_Z(1, 1) \longrightarrow {\mathcal Coker}(\theta_1) \longrightarrow 0, \] \[ 0 \longrightarrow 2\mathcal O(-1, -1) \longrightarrow \mathcal O(0, -1) \oplus \mathcal O(-1, 0) \longrightarrow {\mathcal Coker}(\theta_1) \longrightarrow 0 \] yields the resolution \[ 0 \longrightarrow 2\mathcal O(-1, -1) \longrightarrow \mathcal O(0, -1) \oplus \mathcal O(-1, 0) \oplus \mathcal O \longrightarrow {\mathcal I}_Z(1, 1) \longrightarrow 0. \] Applying ${\mathcal Hom}(-, \mathcal O(-1, -3))$, we obtain resolution (\ref{Z_resolution}). If the maximal minors of the matrix representing $\zeta$ had a common factor $f$, then the reduced support of ${\mathcal Coker}(\zeta)$ would contain the curve $\{ f = 0 \}$. But this is impossible because ${\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$ has support of dimension zero. \end{proof}
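\noindent The Hilbert polynomials occurring in Lemma \ref{length_3_scheme} can be confirmed by an elementary alternating-sum computation based on the formula $\chi(\mathcal O(a, b)) = (a + 1)(b + 1)$. The following snippet, included only as a verification aside and assuming the availability of the Python library SymPy, checks that the resolution of ${\mathcal I}_Z$ yields the Hilbert polynomial $(m + 1)(n + 1) - 3$ and that resolution (\ref{Z_resolution}) yields the constant polynomial $3$, as expected for a scheme of length $3$.

\begin{verbatim}
from sympy import symbols, expand

m, n = symbols('m n')

def chi(a, b):
    # Euler characteristic of O(m + a, n + b) on P^1 x P^1.
    return (m + a + 1) * (n + b + 1)

# 0 -> 2O(-2,-2) -> O(-1,-2) + O(-2,-1) + O(-1,-1) -> I_Z -> 0
P_ideal = chi(-1, -2) + chi(-2, -1) + chi(-1, -1) - 2 * chi(-2, -2)
assert expand(P_ideal - ((m + 1) * (n + 1) - 3)) == 0

# 0 -> O(-2,-4) -> O(-1,-3) + O(0,-3) + O(-1,-2) -> 2O(0,-2) -> Ext^2(O_Z, O) -> 0
P_ext = 2 * chi(0, -2) - (chi(-1, -3) + chi(0, -3) + chi(-1, -2)) + chi(-2, -4)
assert expand(P_ext - 3) == 0
\end{verbatim}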
\begin{lemma} \label{unique_extension} Let $S$ be a smooth projective surface and let $C \subset S$ be a locally Cohen-Macaulay curve. Let ${\mathcal Z}$ be a coherent sheaf on $S$ with support of dimension zero. Let ${\mathcal F}$ be an extension of ${\mathcal Z}$ by $\mathcal O_C$ without zero-dimensional torsion. Then ${\mathcal F}$ is uniquely determined up to isomorphism, meaning that if ${\mathcal F}'$ is another extension of ${\mathcal Z}$ by $\mathcal O_C$ without zero-dimensional torsion, then ${\mathcal F}' \simeq {\mathcal F}$. Moreover, ${\mathcal Z} \simeq {\mathcal Ext}^2_{\mathcal O_S}(\mathcal O_Z, \mathcal O_S)$ for a subscheme $Z \subset C$ of dimension zero, so we have the exact sequence \begin{equation} \label{C_F_Z_dual} 0 \longrightarrow \mathcal O_C \longrightarrow {\mathcal F} \longrightarrow {\mathcal Ext}^2_{\mathcal O_S}(\mathcal O_Z, \mathcal O_S) \longrightarrow 0. \end{equation} \end{lemma}
\begin{proof} This lemma is a direct consequence of \cite[Proposition B.5]{pandharipande_thomas}. Indeed, given an exact sequence \begin{equation} \label{C_F_C} 0 \longrightarrow \mathcal O_C \longrightarrow {\mathcal F} \longrightarrow {\mathcal Z} \longrightarrow 0 \end{equation} in which ${\mathcal F}$ has no zero-dimensional torsion, then the pair $(\mathcal O_C, {\mathcal F})$ is a stable pair supported on $C$, in the sense of \cite{pandharipande_thomas}. By \cite[Lemma B.2]{pandharipande_thomas}, we have ${\mathcal Ext}_{\mathcal O_C}^1({\mathcal F}, \mathcal O_C) = \{ 0 \}$. Applying ${\mathcal Hom}_{\mathcal O_C}^{}(-, \mathcal O_C)$ to (\ref{C_F_C}), yields the exact sequence \begin{equation} \label{C_F_C_dual} 0 \longrightarrow {\mathcal Hom}^{}_{\mathcal O_C} ({\mathcal F}, \mathcal O_C) \longrightarrow \mathcal O_C \longrightarrow {\mathcal Ext}^1_{\mathcal O_C}({\mathcal Z}, \mathcal O_C) \longrightarrow 0. \end{equation} Thus, ${\mathcal Ext}^1_{\mathcal O_C}({\mathcal Z}, \mathcal O_C)$ is the structure sheaf $\mathcal O_Z$ of a zero-dimensional subscheme $Z \subset C$. Under the bijection of \cite[Proposition B.5]{pandharipande_thomas} between stable pairs supported on $C$ and zero-dimensional subschemes of $C$, the pair $(\mathcal O_C, {\mathcal F})$ corresponds to $Z$, so it is uniquely determined, up to isomorphism. Tensoring (\ref{C_F_C_dual}) with the dualising line bundle $\omega_C$ on $C$, yields the exact sequence \begin{equation} \label{F_dual_C_Z} 0 \longrightarrow {\mathcal Hom}({\mathcal F}, \omega_C) \longrightarrow \omega_C \longrightarrow \mathcal O_Z \longrightarrow 0. \end{equation} We claim that ${\mathcal Hom}({\mathcal F}, \omega_C) \simeq {\mathcal Ext}^1({\mathcal F}, \omega_S)$. This follows by applying ${\mathcal Hom}({\mathcal F}, -)$ to the exact sequence \[
0 \longrightarrow \omega_S \longrightarrow \omega_S \otimes \mathcal O(C) \longrightarrow \omega_S \otimes \mathcal O(C)|_{C} \simeq \omega_C \longrightarrow 0. \] We obtain the exact sequence \[ 0 \longrightarrow {\mathcal Hom}({\mathcal F}, \omega_C) \longrightarrow {\mathcal Ext}^1({\mathcal F}, \omega_S) \longrightarrow {\mathcal Ext}^1({\mathcal F}, \omega_S \otimes \mathcal O(C)). \] The last morphism is locally multiplication with an equation $f$ defining $C$. But $C = \operatorname{supp}({\mathcal F})$, hence $f$ annihilates ${\mathcal F}$, and hence $f$ annihilates ${\mathcal Ext}^1({\mathcal F}, \omega_S)$. This proves the claim. According to \cite[Remark 4]{rendiconti}, ${\mathcal Ext}^1({\mathcal Ext^1}({\mathcal F}, \omega_S), \omega_S) \simeq {\mathcal F}$. Clearly, \[ {\mathcal Ext}^1(\mathcal O_Z, \omega_S) = \{ 0 \}, \qquad {\mathcal Ext}^1(\omega_C, \omega_S) \simeq \mathcal O_C, \qquad {\mathcal Ext}^2(\omega_C, \omega_S) = \{ 0 \}. \] Applying ${\mathcal Hom}(-, \omega_S)$ to (\ref{F_dual_C_Z}) yields extension (\ref{C_F_Z_dual}). Comparing with (\ref{C_F_C}), we see that ${\mathcal Z} \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O_S)$. \end{proof}
\noindent Crucial for our classification of semi-stable sheaves is the following vanishing result that should be compared with \cite[Proposition 4]{genus_two}. We fix vector spaces $V_1$ and $V_2$ over $\mathbb C$ of dimension $2$ and we identify $\mathbb P^1 \times \mathbb P^1$ with $\mathbb P(V_1) \times \mathbb P(V_2)$. Let $\{ x, y \}$ be a basis of $V_1^*$ and let $\{z, w \}$ be a basis of $V_2^*$. A morphism $\mathcal O(i, j) \to \mathcal O(k, l)$ will be represented by a form in $\operatorname{S}^{k - i} V_1^* \otimes \operatorname{S}^{l - j} V_2^*$.
\begin{proposition} \label{vanishing} Assume that the sheaf ${\mathcal F}$ gives a point in $\mathbf M$. \begin{enumerate} \item[(i)] We have $\operatorname{H}^0({\mathcal F}(-1, -1)) = \{ 0 \}$ and $\operatorname{H}^0({\mathcal F}(-1, 0)) = \{ 0 \}$. \item[(ii)] If ${\mathcal F}$ satisfies the vanishing condition $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$, then $\operatorname{H}^1({\mathcal F}) = \{ 0 \}$. \end{enumerate} \end{proposition}
\begin{proof} (i) The vanishing of $\operatorname{H}^0({\mathcal F}(-1, -1))$ follows from \cite[Proposition 2(i)]{genus_two}. To prove the vanishing of $\operatorname{H}^0({\mathcal F}(-1, 0))$ we can argue as in the proof of \cite[Proposition 3]{genus_two}.
\noindent (ii) Assume now that $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$. Since ${\mathcal F}$ has support of dimension $1$, $\operatorname{H}^2$ of any twist of ${\mathcal F}$ vanishes, hence, by part (i) of the proposition and by the above hypothesis, $\operatorname{H}^1({\mathcal F}(-1, -1))$, $\operatorname{H}^1({\mathcal F}(0, -1))$ and $\operatorname{H}^1({\mathcal F}(-1, 0))$ have dimensions $-\chi({\mathcal F}(-1, -1)) = 5$, $-\chi({\mathcal F}(0, -1)) = 1$, respectively $-\chi({\mathcal F}(-1, 0)) = 3$. From (\ref{E_1}) and (\ref{E_1^{-1,j}}) we deduce that $E_1^{-2, 1} \simeq 5\mathcal O(-1, -1)$ and $E_1^{-1, 1} \simeq \mathcal O(0, -1) \oplus 3\mathcal O(-1, 0)$. Denote $d = \dim_{\mathbb C}^{} \operatorname{H}^1({\mathcal F})$. There is no surjective morphism \[ \theta_2 \colon \mathcal O(0, -1) \oplus 3\mathcal O(-1, 0) \longrightarrow d\mathcal O \] for $d \ge 4$, hence $d \le 3$. Assume that $d = 3$. The maximal minors of a matrix representation of $\theta_2$ have no common factor, otherwise $\theta_2$ would not be surjective. Thus, ${\mathcal Ker}(\theta_2) \simeq \mathcal O(-3, -1)$, hence $\theta_1 = 0$, and hence, from the exact sequence (\ref{convergence}), we obtain a surjective morphism ${\mathcal F} \to \mathcal O(-3, -1)$. This is absurd. Thus, the case when $d = 3$ is unfeasible.
Consider now the case when $d = 2$. If $\theta_2$ is represented by a matrix of the form \[ A = \left[ \begin{array}{cccc} 0 & \star & \star & \star \\ 0 & \star & \star & \star \end{array} \right], \] then ${\mathcal Ker}(\theta_2) \simeq \mathcal O(0, -1) \oplus \mathcal O(-3, 0)$, hence $\mathcal O(-3, 0)$ is a direct summand of ${\mathcal Ker}(\theta_2)/{\mathcal Im}(\theta_1)$, and hence, from the exact sequence (\ref{convergence}), we obtain a surjective morphism ${\mathcal F} \to \mathcal O(-3, 0)$. This is absurd. If $\theta_2$ is represented by a matrix of the form \[ B = \left[ \begin{array}{cccc} \star & \star & \star & 0 \\ \star & \star & \star & 0 \end{array} \right], \] then ${\mathcal Ker}(\theta_2) \simeq \mathcal O(-2, -1) \oplus \mathcal O(-1, 0)$, hence $\mathcal O(-2, -1)$ is a direct summand of ${\mathcal Ker}(\theta_2)/{\mathcal Im}(\theta_1)$, and hence we obtain a surjective morphism ${\mathcal F} \to \mathcal O(-2, -1)$. This is absurd. If $\theta_2$ is represented by a matrix of the form \[ C = \left[ \begin{array}{cccc} 1 \otimes u & v \otimes 1 & 0 & 0 \\ 0 & 0 & x \otimes 1 & y \otimes 1 \end{array} \right], \] then ${\mathcal Ker}(\theta_2) \simeq \mathcal O(-1, -1) \oplus \mathcal O(-2, 0)$ and we obtain a surjective morphism ${\mathcal F} \to \mathcal O(-2, 0)$. This is absurd. We claim that, if $\theta_2$ is not of the form $A$, $B$ or $C$, then $\theta_2$ is represented by a matrix of the form \[ D = \left[ \begin{array}{cccc} - 1 \otimes z & x \otimes 1 & y \otimes 1 & 0 \\ \star & \star & \star & v \otimes 1 \end{array} \right], \] with $v \neq 0$. Indeed, since $\theta_2 \nsim A$ and $\theta_2 \nsim B$, we may write \[ \theta_2 = \left[ \begin{array}{cccc} 1 \otimes u & v_1 \otimes 1 & v_2 \otimes 1 & 0 \\ \star & \star & \star & v \otimes 1 \end{array} \right] \] with $u \neq 0$, $v \neq 0$. Since $\theta_2 \nsim B$, $v_1$ and $v_2$ cannot be both zero. If $v_1$ and $v_2$ are linearly independent, then $\theta_2 \sim D$. If $v_1$ and $v_2$ span a one-dimensional vector space, then, since $\theta_2 \nsim B$, we may write \[ \theta_2 = \left[ \begin{array}{cccc} 1 \otimes u\phantom{_1} & v_1 \otimes 1 & 0 & 0 \\ 1 \otimes u_1 & 0 & x \otimes 1 & y \otimes 1 \end{array} \right]. \] Since $\theta_2 \nsim C$, we have $u_1 \neq 0$, forcing $\theta_2 \sim D$. In the case when $\theta_2 = D$, it is easy to see that the morphism \[ \theta_1 \colon 5\mathcal O(-1, -1) \longrightarrow \mathcal O(0, -1) \oplus 3\mathcal O(-1, 0) \] is represented by a matrix of the form \[ \left[ \begin{array}{ccccc} x \otimes 1 & y \otimes 1 & 0 & 0 & 0 \\ 1 \otimes z & 0 & 0 & 0 & 0 \\ 0 & 1 \otimes z & 0 & 0 & 0 \\ \star & \star & 0 & 0 & 0 \end{array} \right], \] hence ${\mathcal Ker}(\theta_1) \simeq 3\mathcal O(-1, -1)$, and hence ${\mathcal Coker}(\theta_5)$ has Hilbert polynomial $3m + 3n + 3$. But then, in view of the exact sequence (\ref{convergence}), ${\mathcal Coker}(\theta_5)$ is a destabilizing subsheaf of ${\mathcal F}$. Thus, the case when $d = 2$ is also unfeasible.
It remains to examine the case when $d = 1$. Recall that $\theta_2$ is surjective, hence it can have two possible forms. Firstly, if \[ \theta_2 = \left[ \begin{array}{cccc} 0 & x \otimes 1 & y \otimes 1 & 0 \end{array} \right], \] then ${\mathcal Ker}(\theta_2) \simeq \mathcal O(0, -1) \oplus \mathcal O(-2, 0) \oplus \mathcal O(-1, 0)$ and we obtain a surjective morphism ${\mathcal F} \to \mathcal O(-2, 0)$, which is absurd. The second form is \[ \theta_2 = \left[ \begin{array}{cccc} - 1 \otimes z & x \otimes 1 & y \otimes 1 & 0 \end{array} \right]. \] If $\theta_1$ is represented by a matrix having two zero columns, then ${\mathcal Ker}(\theta_1) \simeq 2\mathcal O(-1, -1)$, hence ${\mathcal Coker}(\theta_5)$ has Hilbert polynomial $2m + 2n + 2$, and hence ${\mathcal Coker}(\theta_5)$ is a destabilizing subsheaf of ${\mathcal F}$. Thus, we may write \[ \theta_1 = \left[ \begin{array}{ccccc} x \otimes 1 & y \otimes 1 & 0 & 0 & 0 \\ 1 \otimes z & 0 & 0 & 0 & 0 \\ 0 & 1 \otimes z & 0 & 0 & 0 \\ 0 & 0 & 1 \otimes z & 1 \otimes w & 0 \end{array} \right], \] hence ${\mathcal Ker}(\theta_1) \simeq \mathcal O(-1, -2) \oplus \mathcal O(-1, -1)$, and hence ${\mathcal Coker}(\theta_5)$ has Hilbert polynomial $3m + 2n + 2$. But then ${\mathcal Coker}(\theta_5)$ is a destabilizing subsheaf of ${\mathcal F}$. We deduce that the case when $d = 1$ is also unfeasible. \end{proof}
\section{Classification of sheaves} \label{classification}
We begin our classification of semi-stable sheaves by examining the Brill-Noether locus of sheaves that do not satisfy the vanishing condition $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$ from Proposition \ref{vanishing}(ii).
\begin{proposition} \label{M_2} The sheaves ${\mathcal F}$ in $\mathbf M$ satisfying the condition $\operatorname{H}^0({\mathcal F}(0, -1)) \neq \{ 0 \}$ are precisely the non-split extension sheaves of the form \begin{equation} \label{C_F_p} 0 \longrightarrow \mathcal O_C(0, 1) \longrightarrow {\mathcal F} \longrightarrow \mathbb C_p \longrightarrow 0, \end{equation} where $C \subset \mathbb P^1 \times \mathbb P^1$ is a curve of degree $(2, 4)$ and $p$ is a point on $C$. Moreover, the sheaves from (\ref{C_F_p}) are precisely the sheaves ${\mathcal F}$ having a resolution of the form \begin{equation} \label{M_2_resolution} 0 \longrightarrow \mathcal O(-2, -2) \oplus \mathcal O(-1, -3) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1, -2) \oplus \mathcal O(0, 1) \longrightarrow {\mathcal F} \longrightarrow 0, \end{equation} with $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$. Let $\mathbf M_2 \subset \mathbf M$ be the subset of sheaves ${\mathcal F}$ from (\ref{C_F_p}). Then $\mathbf M_2$ is closed, irreducible, of codimension $2$, and is isomorphic to the universal curve of degree $(2, 4)$ in $\mathbb P^1 \times \mathbb P^1$. Thus, $\mathbf M_2$ is a fiber bundle with fiber $\mathbb P^{13}$ and base $\mathbb P^1 \times \mathbb P^1$. \end{proposition}
\begin{proof} Let ${\mathcal F}$ give a point in $\mathbf M$ and satisfy $\operatorname{H}^0({\mathcal F}(0, -1)) \neq \{ 0 \}$. As in the proof of \cite[Proposition 2]{genus_two}, there is an injective morphism $\mathcal O_C \to {\mathcal F}(0, -1)$ for a curve $C$ of degree $(s, r)$, $0 \le s \le 2$, $0 \le r \le 4$, $1 \le r + s \le 6$. From the stability of ${\mathcal F}$ we have the inequality \[ \operatorname{p}(\mathcal O_C(0, 1)) = \frac{r + 2s - rs}{r + s} \le \frac{1}{6} = \operatorname{p}({\mathcal F}), \] which has the unique solution $(s, r) = (2, 4)$. We obtain extension (\ref{C_F_p}). Conversely, let ${\mathcal F}$ be given by the non-split extension (\ref{C_F_p}). As in the proof of \cite[Proposition 3]{genus_two}, we can show that $\mathcal O_C(0, 1)$ is stable, from which it immediately follows that ${\mathcal F}$ gives a point in $\mathbf M$ and that $\operatorname{H}^0({\mathcal F}(0, -1)) \neq \{ 0 \}$. Choose $\varphi_{11} \in V_1^* \otimes \mathbb C$ and $\varphi_{12} \in \mathbb C \otimes V_2^*$ defining $p$. Since $p \in C$, we can find $\varphi_{21} \in \operatorname{S}^2 V_1^* \otimes \operatorname{S}^3 V_2^*$ and $\varphi_{22} \in V_1^* \otimes \operatorname{S}^4 V_2^*$ such that the polynomial $\varphi_{11} \varphi_{22} - \varphi_{12} \varphi_{21}$ defines $C$. Consider the morphism \[ \varphi \colon \mathcal O(-2, -2) \oplus \mathcal O(-1, -3) \longrightarrow \mathcal O(-1, -2) \oplus \mathcal O(0, 1), \] \[ \varphi = \left[ \begin{array}{cc} \varphi_{11} & \varphi_{12} \\ \varphi_{21} & \varphi_{22} \end{array} \right]. \] From the snake lemma we see that ${\mathcal Coker}(\varphi)$ is an extension of $\mathbb C_p$ by $\mathcal O_C(0, 1)$. Since ${\mathcal Coker}(\varphi)$ has no zero-dimensional torsion, we can apply Lemma \ref{unique_extension} to deduce that ${\mathcal F} \simeq {\mathcal Coker}(\varphi)$. Thus, $[{\mathcal F}] \in \mathbf M_2$ if and only if ${\mathcal F}$ has resolution (\ref{M_2_resolution}). \end{proof}
In the remaining part of this section we will assume that ${\mathcal F}$ satisfies the hypothesis of Proposition \ref{vanishing}(ii), and hence also its conclusion, that is, $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$ and $\operatorname{H}^1({\mathcal F}) = \{ 0 \}$, so that $\operatorname{H}^0({\mathcal F})$ has dimension $\chi({\mathcal F}) = 1$. The exact sequence (\ref{convergence}) takes the form \begin{equation} \label{generic_convergence} 0 \longrightarrow {\mathcal Ker}(\theta_1) \stackrel{\theta_5}{\longrightarrow} \mathcal O \longrightarrow {\mathcal F} \longrightarrow {\mathcal Coker}(\theta_1) \longrightarrow 0, \end{equation} where \[ \theta_1 \colon 5\mathcal O(-1, -1) \longrightarrow \mathcal O(0, -1) \oplus 3\mathcal O(-1, 0). \]
\begin{proposition} \label{M_3_4} Assume that $[{\mathcal F}] \in \mathbf M$ and that $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$. Assume that the maximal minors of $\theta_1$ have a common factor. Then ${\mathcal F}$ is an extension of the form \begin{equation} \label{Q_F_L} 0 \longrightarrow \mathcal O_Q \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L(1, 0) \longrightarrow 0 \end{equation} for a quintic curve $Q \subset \mathbb P^1 \times \mathbb P^1$ of degree $(2, 3)$ and a line $L \subset \mathbb P^1 \times \mathbb P^1$ of degree $(0, 1)$, or is an extension of the form \begin{equation} \label{Q_p_F_L} 0 \longrightarrow \mathcal O_Q(p) \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L \longrightarrow 0, \end{equation} where $\mathcal O_Q(p)$ is a non-split extension of $\mathbb C_p$ by $\mathcal O_Q$ for a point $p \in Q$.
Conversely, any extension ${\mathcal F}$ as in (\ref{Q_F_L}) or (\ref{Q_p_F_L}) satisfying the condition $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$ is semi-stable. Let $\mathbf M_4 \subset \mathbf M$ be the subset of sheaves ${\mathcal F}$ as in (\ref{Q_F_L}) satisfying the condition $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$. Let $\mathbf M_3 \subset \mathbf M$ be the subset of sheaves ${\mathcal F}$ as in (\ref{Q_p_F_L}) satisfying the condition $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$. Then $\mathbf M_3$ and $\mathbf M_4$ are locally closed, irreducible subsets, of codimension $3$, respectively, $4$. \end{proposition}
\begin{proof} Let $\eta_i$ be the maximal minor of a matrix representing $\theta_1$ obtained by deleting column $i$. Denote $g = \gcd(\eta_1, \ldots, \eta_5)$. Let $(s, r) = (2, 4) - \deg(g)$. It is easy to check that the sequence \[ 0 \longrightarrow \mathcal O(-s, -r) \stackrel{\eta}{\longrightarrow} 5\mathcal O(-1, -1) \stackrel{\theta_1}{\longrightarrow} \mathcal O(0, -1) \oplus 3\mathcal O(-1, 0), \] \[ \eta = \left[ \begin{array}{rrrrr} \frac{\eta_1}{g} & - \frac{\eta_2}{g} & \frac{\eta_3}{g} & - \frac{\eta_4}{g} & \frac{\eta_5}{g} \end{array} \right]^{\scriptscriptstyle \operatorname{T}} \] is exact. From (\ref{generic_convergence}) we see that ${\mathcal Coker}(\theta_5)$ is a subsheaf of ${\mathcal F}$, hence we have the inequality \[ 1 - \frac{rs}{r+s} = \operatorname{p}({\mathcal Coker}(\theta_5)) \le \operatorname{p}({\mathcal F}) = \frac{1}{6}, \] forcing $(s, r) = (2, 3)$ or $(s, r) = (2, 2)$. If $(s, r) = (2, 2)$, then $P_{{\mathcal Coker}(\theta_1)} = 2m + 1$ and ${\mathcal Coker}(\theta_1)$ is semi-stable, which follows from the semi-stability of ${\mathcal F}$. But, according to \cite[Proposition 10]{ballico_huh}, $\operatorname{M}(2m + 1) = \emptyset$. This contradiction shows that $(s, r) \neq (2, 2)$, hence $(s, r) = (2, 3)$. From (\ref{generic_convergence}) we obtain the extension \[ 0 \longrightarrow \mathcal O_Q \longrightarrow {\mathcal F} \longrightarrow {\mathcal Coker}(\theta_1) \longrightarrow 0. \] If ${\mathcal Coker}(\theta_1)$ has no zero-dimensional torsion, we obtain extension (\ref{Q_F_L}). Otherwise, the zero-dimensional torsion has length $1$, its pull-back in ${\mathcal F}$ is a semi-stable sheaf $\mathcal O_Q(p)$, and we obtain extension (\ref{Q_p_F_L}).
Conversely, let ${\mathcal F}$ be an extension as in (\ref{Q_F_L}) satisfying $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$. Assume that ${\mathcal F}$ had a destabilizing subsheaf ${\mathcal F}'$. Let ${\mathcal G}$ be the image of ${\mathcal F}'$ in $\mathcal O_L(1, 0)$. According to \cite[Proposition 1]{genus_two}, $\mathcal O_Q$ is stable, hence $\chi({\mathcal F}' \cap \mathcal O_Q) \le -1$. Since $\chi({\mathcal F}') \ge 1$, we see that $\chi({\mathcal G}) \ge 2$, hence ${\mathcal G} = \mathcal O_L(1, 0)$ and $\mathcal O_Q \nsubseteq {\mathcal F}'$. Thus $\operatorname{H}^0({\mathcal F}' \cap \mathcal O_Q) = \{ 0 \}$, hence the map $\operatorname{H}^0({\mathcal F}') \to \operatorname{H}^0(\mathcal O_L(1, 0))$ is injective. But this map factors through $\operatorname{H}^0({\mathcal F}) \to \operatorname{H}^0(\mathcal O_L(1, 0))$, which, by hypothesis, is the zero map. We deduce that $\operatorname{H}^0({\mathcal F}') = \{ 0 \}$, which yields a contradiction. Thus, there is no destabilizing subsheaf. The same argument applies for extensions (\ref{Q_p_F_L}) satisfying $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$.
By Serre duality \[ \operatorname{Ext}^1(\mathcal O_L(1, 0), \mathcal O_Q) \simeq \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(-1, -2))^*. \] From the short exact sequence \[ 0 \longrightarrow \mathcal O(-2, -3) \longrightarrow \mathcal O \longrightarrow \mathcal O_Q \longrightarrow 0 \] we obtain the long exact sequence \[ \{ 0 \} = \operatorname{H}^0(\mathcal O_L(-1, -2)) \longrightarrow \operatorname{H}^0(\mathcal O_L(1, 1)) \simeq \mathbb C^2 \longrightarrow \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(-1, -2)) \longrightarrow \operatorname{H}^1(\mathcal O_L(-1, -2)) = \{ 0 \}. \] Thus $\operatorname{Ext}^1(\mathcal O_L(1, 0), \mathcal O_Q) \simeq \mathbb C^2$, hence $\mathbf M_4$ is isomorphic to an open subset of a $\mathbb P^1$-bundle over $\mathbb P^{11} \times \mathbb P^1$. By Serre duality we have \[ \operatorname{Ext}^1(\mathcal O_L, \mathcal O_Q(p)) \simeq \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L(-2, -2))^*. \] Using Lemma \ref{unique_extension}, it is easy to see that the sheaves $\mathcal O_Q(p)$ are precisely the sheaves having a resolution of the form \begin{equation} \label{Q_p_resolution} 0 \longrightarrow \mathcal O(-2, -2) \oplus \mathcal O(-1, -3) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1, -2) \oplus \mathcal O \longrightarrow \mathcal O_Q(p) \longrightarrow 0, \end{equation} where $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$ (cf. Proposition \ref{M_2}). From resolution (\ref{Q_p_resolution}) we obtain the long exact sequence \begin{align*} \{ 0 \} = & \operatorname{H}^0(\mathcal O_L(-1, 0) \oplus \mathcal O_L(-2, -2)) \longrightarrow \operatorname{H}^0(\mathcal O_L \oplus \mathcal O_L(-1, 1)) \simeq \mathbb C \longrightarrow \\ & \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L(-2, -2)) \longrightarrow \\ & \operatorname{H}^1(\mathcal O_L(-1, 0) \oplus \mathcal O_L(-2, -2)) \simeq \mathbb C \longrightarrow \operatorname{H}^1(\mathcal O_L \oplus \mathcal O_L(-1, 1)) = \{ 0 \}. \end{align*} Thus, $\operatorname{Ext}^1(\mathcal O_L, \mathcal O_Q(p)) \simeq \mathbb C^2$, hence $\mathbf M_3$ has dimension $14$. The other claims about $\mathbf M_3$ are obvious. \end{proof}
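\noindent The dimension counts in the proof above amount to the following bookkeeping, recorded here in Python only as a plausibility check; all the numbers are the ones occurring in the proof (the space of quintics of degree $(2, 3)$, the space of lines of degree $(0, 1)$, the family of sheaves $\mathcal O_Q(p)$, and the two-dimensional extension spaces), not the result of an independent computation.

\begin{verbatim}
def h0(a, b):
    # Global sections of O(a, b) on P^1 x P^1 for a, b >= 0.
    return (a + 1) * (b + 1)

dim_M = 2 * 4 * 2 + 1        # dim M(4m + 2n + 1) = 2rs + 1 = 17

# M_4: a P^1-bundle (projectivized two-dimensional extension space) over
# {quintics of degree (2, 3)} x {lines of degree (0, 1)} = P^11 x P^1.
dim_M4 = (h0(2, 3) - 1) + (h0(0, 1) - 1) + 1
assert dim_M4 == 13 and dim_M - dim_M4 == 4

# M_3: a P^1-bundle over {sheaves O_Q(p)} x {lines of degree (0, 1)},
# where the sheaves O_Q(p) correspond to pairs (Q, p) with p on Q, that is,
# to a P^10-bundle over P^1 x P^1.
dim_M3 = ((h0(2, 3) - 2) + 2) + (h0(0, 1) - 1) + 1
assert dim_M3 == 14 and dim_M - dim_M3 == 3
\end{verbatim}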
\begin{lemma} \label{generic_sheaves} Assume that $[{\mathcal F}] \in \mathbf M$ and $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$. Assume that the maximal minors of $\theta_1$ have no common factor. Then ${\mathcal Ker}(\theta_1) \simeq \mathcal O(-2, -4)$ and ${\mathcal Coker}(\theta_1) \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$ with $Z$ described below. We have an extension \begin{equation} \label{generic_extension} 0 \longrightarrow \mathcal O_C \longrightarrow {\mathcal F} \longrightarrow {\mathcal Ext}^2(\mathcal O_Z, \mathcal O) \longrightarrow 0, \end{equation} where $C$ is a curve of degree $(2, 4)$ and $Z \subset C$ is a subscheme of dimension zero and length $3$. Moreover, $Z$ is not contained in a line of degree $(0, 1)$. \end{lemma}
\begin{proof} The fact that ${\mathcal Ker}(\theta_1) \simeq \mathcal O(-2, -4)$ is well-known. The Hilbert polynomial of ${\mathcal Coker}(\theta_1)$ is $3$, hence ${\mathcal Coker}(\theta_1)$ has dimension zero and length $3$. From (\ref{generic_convergence}), we obtain the exact sequence \[ 0 \longrightarrow \mathcal O_C \longrightarrow {\mathcal F} \longrightarrow {\mathcal Coker}(\theta_1) \longrightarrow 0. \] We can now apply Lemma \ref{unique_extension} to obtain the extension (\ref{generic_extension}) and the isomorphism ${\mathcal Coker}(\theta_1) \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$.
Assume that $Z$ is contained in a line $L$ of degree $(0, 1)$. Then $\mathcal O_Z \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$. Choose $\varphi_{11} \in \mathbb C \otimes V_2^*$ defining $L$. Choose $\varphi_{12} \in \operatorname{S}^3 V_1^* \otimes \mathbb C$ such that $\varphi_{11}$ and $\varphi_{12}$ define $Z$. If $L \nsubseteq C$, then $L.C = 2$, which contradicts the fact that $Z \subset L \cap C$. Thus $L \subset C$, so there is $\varphi_{22} \in \operatorname{S}^2 V_1^* \otimes \operatorname{S}^3 V_2^*$ such that $\varphi_{11} \varphi_{22}$ is a defining polynomial of $C$. Consider the exact sequence \[ 0 \longrightarrow \mathcal O(1, -4) \oplus \mathcal O(-2, -3) \stackrel{\varphi}{\longrightarrow} \mathcal O(1, -3) \oplus \mathcal O \longrightarrow {\mathcal F}' \longrightarrow 0, \] \[ \varphi = \left[ \begin{array}{cc} \varphi_{11} & \varphi_{12} \\ 0 & \varphi_{22} \end{array} \right]. \] Then ${\mathcal F}'$ is an extension of $\mathcal O_Z$ by $\mathcal O_C$ without zero-dimensional torsion. Since, from the exact sequence (\ref{generic_extension}), ${\mathcal F}$ is also an extension of $\mathcal O_Z$ by $\mathcal O_C$ without zero-dimensional torsion, we can apply Lemma \ref{unique_extension} to deduce that ${\mathcal F} \simeq {\mathcal F}'$. We obtain a contradiction from the isomorphisms $\mathbb C\simeq \operatorname{H}^0({\mathcal F}) \simeq \operatorname{H}^0({\mathcal F}') \simeq \mathbb C^3$. \end{proof}
\begin{proposition} \label{M_0} Let $\mathbf M_0 \subset \mathbf M$ be the subset of sheaves ${\mathcal F}$ for which $\operatorname{H}^0({\mathcal F}(0, -1))$ $= \{ 0 \}$, ${\mathcal Ker}(\theta_1) \simeq \mathcal O(-2, -4)$ and $\operatorname{supp}({\mathcal Coker}(\theta_1))$ is not contained in a line of degree $(1, 0)$ or $(0, 1)$. Then $\mathbf M_0$ is open and can be described as the subset of sheaves ${\mathcal F}$ having a resolution of the form \begin{equation} \label{M_0_resolution} 0 \longrightarrow \mathcal O(-1, -3) \oplus \mathcal O(0, -3) \oplus \mathcal O(-1, -2) \stackrel{\varphi}{\longrightarrow} \mathcal O(0, -2) \oplus \mathcal O(0, -2) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0, \end{equation} where $\varphi_{12}$ and $\varphi_{22}$ are linearly independent and the maximal minors of the matrix $(\varphi_{ij})_{i = 1, 2, j = 1, 2, 3}$ have no common factor. \end{proposition}
\begin{proof} Let ${\mathcal F}$ give a point in $\mathbf M_0$. Let $Z$ and $C$ be as in Lemma \ref{generic_sheaves}. By hypothesis $Z$ is not contained in a line of degree $(1, 0)$ or $(0, 1)$, hence ${\mathcal Ext}^2(\mathcal O_Z, \mathcal O) \simeq {\mathcal Coker}(\zeta)$ as in (\ref{Z_resolution}). Let $\zeta_1$, $\zeta_2$, $\zeta_3$ be the maximal minors of $\zeta$. They are the defining polynomials of $Z$, hence we can find $\varphi_{31} \in V_1^* \otimes \operatorname{S}^3 V_2^*$, $\varphi_{32} \in \mathbb C \otimes \operatorname{S}^3 V_2^*$, $\varphi_{33} \in V_1^* \otimes \operatorname{S}^2 V_2^*$ such that $\zeta_1 \varphi_{31} - \zeta_2 \varphi_{32} + \zeta_3 \varphi_{33}$ is the polynomial defining $C$. Let \[ \varphi = \left[ \begin{array}{ccc} & \zeta \\ \varphi_{31} & \varphi_{32} & \varphi_{33} \end{array} \right]. \] Then ${\mathcal Coker}(\varphi)$ is an extension of ${\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$ by $\mathcal O_C$ without zero-dimensional torsion and, by Lemma \ref{generic_sheaves}, the same is true of ${\mathcal F}$. From Lemma \ref{unique_extension} we deduce that ${\mathcal F} \simeq {\mathcal Coker}(\varphi)$. By Proposition \ref{vanishing}, $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$, hence the map $\operatorname{H}^1(\mathcal O(0, -3)) \to \operatorname{H}^1(2\mathcal O(0, -2))$ is injective, which is equivalent to saying that $\varphi_{12}$ and $\varphi_{22}$ are linearly independent. We have shown that ${\mathcal F}$ has resolution (\ref{M_0_resolution}).
Conversely, assume that ${\mathcal F}$ has resolution (\ref{M_0_resolution}). Then $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$ because $\varphi_{12}$ and $\varphi_{22}$ are linearly independent. From the snake lemma we see that ${\mathcal F}$ is an extension of ${\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$ by $\mathcal O_C$, where $Z$ is the zero-dimensional scheme of length $3$ given by the maximal minors of the matrix obtained by deleting the third row of $\varphi$, and $C$ is the curve of degree $(2, 4)$ defined by $\det(\varphi)$. Thus, $\operatorname{H}^0({\mathcal F})$ generates $\mathcal O_C$. We will show that ${\mathcal F}$ is semi-stable. Assume that ${\mathcal F}$ had a destabilizing subsheaf ${\mathcal F}'$. Then $\chi({\mathcal F}') > 0$ and $\chi({\mathcal F}') \le \dim_{\mathbb C}^{} \operatorname{H}^0({\mathcal F}) = 1$, hence $\chi({\mathcal F}') = 1$, forcing $\operatorname{H}^0({\mathcal F}') \simeq \mathbb C$. Thus $\operatorname{H}^0({\mathcal F}') = \operatorname{H}^0({\mathcal F})$, hence $\mathcal O_C \subset {\mathcal F}'$, and hence ${\mathcal F}'$ has multiplicity $6$. But then $\operatorname{p}({\mathcal F}') = \frac{1}{6} = \operatorname{p}({\mathcal F})$, so ${\mathcal F}'$ is not destabilizing, which is a contradiction. Thus, ${\mathcal F}$ gives a point in $\mathbf M$. Since $\varphi_{12}$ and $\varphi_{22}$ are linearly independent, we have $\operatorname{H}^0({\mathcal F}(0, -1)) = \{ 0 \}$. Since $\operatorname{H}^0({\mathcal F})$ generates $\mathcal O_C$, ${\mathcal Ker}(\theta_1) \simeq \mathcal O(-2, -4)$ and ${\mathcal Coker}(\theta_1) \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$. Note that $Z$ is not contained in a line of degree $(1, 0)$ or $(0, 1)$. In conclusion, ${\mathcal F}$ gives a point in $\mathbf M_0$. \end{proof}
\begin{proposition} \label{rationality} The variety $\mathbf M$ is rational. \end{proposition}
\begin{proof} By Lemma \ref{length_3_scheme}, Lemma \ref{unique_extension}, Lemma \ref{generic_sheaves} and Proposition \ref{M_0}, the open subset of $\mathbf M_0$, given by the condition that $Z$ consist of three distinct points, is a $\mathbb P^{11}$-bundle over an open subset of $\operatorname{Hilb}_{\mathbb P^1 \times \mathbb P^1}(3)$, so it is rational. \end{proof}
\begin{proposition} \label{M_2'} Let ${\mathcal F}$ be an extension as in (\ref{generic_extension}) without zero-dimensional torsion, for a curve $C$ of degree $(2, 4)$ and a subscheme $Z \subset C$ that is the intersection of two curves of degree $(1, 0)$, respectively, $(0, 3)$. Then ${\mathcal F}$ gives a point in $\mathbf M$. Let $\mathbf M_2' \subset \mathbf M$ be the subset of such sheaves ${\mathcal F}$. Then $\mathbf M_2'$ is closed, irreducible, of codimension $2$, and can be described as the set of sheaves ${\mathcal F}$ having a resolution of the form \begin{equation} \label{M_2'_resolution} 0 \longrightarrow \mathcal O(-2, -1) \oplus \mathcal O(-1, -4) \stackrel{\varphi}{\longrightarrow} \mathcal O(-1, -1) \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0, \end{equation} with $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$. \end{proposition}
\begin{proof} Note that $\mathcal O_Z \simeq {\mathcal Ext}^2(\mathcal O_Z, \mathcal O)$. Let ${\mathcal F}$ be an extension of $\mathcal O_Z$ by $\mathcal O_C$ without zero-dimensional torsion. Let $\varphi_{11} \in V_1^* \otimes \mathbb C$ and $\varphi_{12} \in \mathbb C \otimes \operatorname{S}^3 V_2^*$ be the defining polynomials of $Z$. We can find $\varphi_{21} \in \operatorname{S}^2 V_1^* \otimes V_2^*$ and $\varphi_{22} \in V_1^* \otimes \operatorname{S}^4 V_2^*$ such that $\varphi_{11} \varphi_{22} - \varphi_{12} \varphi_{21}$ is the defining polynomial of $C$. Then the cokernel of $\varphi = (\varphi_{ij})_{1 \le i, j \le 2}$ is an extension of $\mathcal O_Z$ by $\mathcal O_C$ without zero-dimensional torsion, hence, by Lemma \ref{unique_extension}, ${\mathcal F} \simeq {\mathcal Coker}(\varphi)$. Conversely, arguing as in Proposition \ref{M_0}, we can show that any sheaf of the form ${\mathcal Coker}(\varphi)$, with $\varphi$ as in (\ref{M_2'_resolution}), is semi-stable. \end{proof}
\noindent \emph{Proof of Theorem \ref{main_theorem}}. By Propositions \ref{M_2}, \ref{M_3_4}, \ref{M_0} and \ref{M_2'}, $\mathbf M$ is the union of the subvarieties $\mathbf M_0$, $\mathbf M_2$, $\mathbf M_2'$, $\mathbf M_3$, $\mathbf M_4$. For $[{\mathcal F}] \in \mathbf M_2$, we have $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C^2$, whereas, for $[{\mathcal F}]$ in any of the other subvarieties, we have $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$. Thus, $\mathbf M_2$ is disjoint from the other subvarieties. For $[{\mathcal F}] \in \mathbf M_0 \cup \mathbf M_2'$, $\operatorname{H}^0({\mathcal F})$ generates the structure sheaf of a curve $C$ of degree $(2, 4)$, whereas, for $[{\mathcal F}] \in \mathbf M_3 \cup \mathbf M_4$, $\operatorname{H}^0({\mathcal F})$ generates the structure sheaf of a curve $Q$ of degree $(2, 3)$. Thus, $\mathbf M_0 \cup \mathbf M_2'$ is disjoint from $\mathbf M_3 \cup \mathbf M_4$. For $[{\mathcal F}] \in \mathbf M_0$, the support of ${\mathcal F}/\mathcal O_C$ is not contained in a line of degree $(1, 0)$, whereas, for $[{\mathcal F}] \in \mathbf M_2'$, the support of ${\mathcal F}/\mathcal O_C$ is contained in a line of degree $(1, 0)$. Thus, $\mathbf M_0$ is disjoint from $\mathbf M_2'$. For $[{\mathcal F}] \in \mathbf M_3$, ${\mathcal F}/\mathcal O_Q$ has zero-dimensional torsion, whereas, for $[{\mathcal F}] \in \mathbf M_4$, ${\mathcal F}/\mathcal O_Q$ is pure. Thus, $\mathbf M_3$ is disjoint from $\mathbf M_4$. In conclusion, the subvarieties in question form a decomposition of $\mathbf M$. \qed
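\noindent As a consistency check on Theorem \ref{main_theorem}, included only as an illustrative aside and assuming the availability of the Python library SymPy, one can verify that the resolutions (\ref{M_0_resolution}), (\ref{M_2_resolution}) and (\ref{M_2'_resolution}) all produce sheaves with Hilbert polynomial $4m + 2n + 1$, again using $\chi(\mathcal O(a, b)) = (a + 1)(b + 1)$.

\begin{verbatim}
from sympy import symbols, expand

m, n = symbols('m n')
chi = lambda a, b: (m + a + 1) * (n + b + 1)

def hilbert(targets, sources):
    # Alternating sum of Euler characteristics for a two-term resolution.
    return sum(chi(a, b) for (a, b) in targets) - sum(chi(a, b) for (a, b) in sources)

# (M_0_resolution): 0 -> O(-1,-3) + O(0,-3) + O(-1,-2) -> 2O(0,-2) + O -> F -> 0
P0 = hilbert([(0, -2), (0, -2), (0, 0)], [(-1, -3), (0, -3), (-1, -2)])

# (M_2_resolution): 0 -> O(-2,-2) + O(-1,-3) -> O(-1,-2) + O(0,1) -> F -> 0
P2 = hilbert([(-1, -2), (0, 1)], [(-2, -2), (-1, -3)])

# (M_2'_resolution): 0 -> O(-2,-1) + O(-1,-4) -> O(-1,-1) + O -> F -> 0
P2p = hilbert([(-1, -1), (0, 0)], [(-2, -1), (-1, -4)])

for P in (P0, P2, P2p):
    assert expand(P - (4 * m + 2 * n + 1)) == 0
\end{verbatim}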
\section{Variation of the moduli spaces of $\alpha$-semi-stable pairs} \label{variation}
A \emph{coherent system} $\Lambda = (\Gamma, {\mathcal F})$ on $\mathbb P^1 \times \mathbb P^1$ consists of a coherent algebraic sheaf ${\mathcal F}$ on $\mathbb P^1 \times \mathbb P^1$ and a vector subspace $\Gamma \subset \operatorname{H}^0({\mathcal F})$. Let $\alpha$ be a positive real number and let $P_{{\mathcal F}}(m, n) = rm + sn + t$ be the Hilbert polynomial of ${\mathcal F}$. We define the $\alpha$-\emph{slope} of $\Lambda$ as the ratio \[ \operatorname{p}_{\alpha}(\Lambda) = \frac{\alpha \dim \Gamma + t}{r+ s}. \] We say that $\Lambda$ is $\alpha$-\emph{semi-stable}, respectively, $\alpha$-\emph{stable}, if ${\mathcal F}$ is pure and for any proper coherent subsystem $\Lambda' \subset \Lambda$ we have $\operatorname{p}_{\alpha}(\Lambda') \le \operatorname{p}_{\alpha}(\Lambda)$, respectively, $\operatorname{p}_{\alpha}(\Lambda') < \operatorname{p}_{\alpha}(\Lambda)$. According to \cite{lepotier_asterisque} and \cite{he}, for fixed positive real number $\alpha$, non-negative integer $k$ and linear polynomial $P(m, n)$, there is a coarse moduli space, denoted $\operatorname{Syst}(\mathbb P^1 \times \mathbb P^1, \alpha, k, P)$, which is a projective scheme whose closed points are in a bijective correspondence with the set of S-equivalence classes of $\alpha$-semi-stable coherent systems $(\Gamma, {\mathcal F})$ on $\mathbb P^1 \times \mathbb P^1$ for which $\dim \Gamma = k$ and $P_{{\mathcal F}} = P$. When $k = 0$ this space is $\operatorname{M}(P)$. A coherent system for which $\dim \Gamma = 1$ will be called a \emph{pair}. Our main concern is with the moduli space of $\alpha$-semi-stable pairs $\operatorname{M}^{\alpha}(P) = \operatorname{Syst}(\mathbb P^1 \times \mathbb P^1, \alpha, 1, P)$. It is known that there are finitely many positive rational numbers $\alpha_1 < \ldots < \alpha_n$, called \emph{walls}, such that the set of $\alpha$-semi-stable pairs with Hilbert polynomial $P$ remains unchanged as $\alpha$ varies in one of the intervals $(0, \alpha_1)$, or $(\alpha_i, \alpha_{i+1})$, or $(\alpha_n, \infty)$. In fact, from the definition of $\alpha$-semi-stability, we can see that, if $\alpha$ is a wall, then there is a strictly $\alpha$-semi-stable pair, i.e. a pair $\Lambda$ for which there exists a subpair or quotient pair $\Lambda'$, such that $\operatorname{p}_{\alpha}(\Lambda) = \operatorname{p}_{\alpha}(\Lambda')$. This equation has only rational solutions in $\alpha$. For $\alpha \in (\alpha_n, \infty)$ we write $\operatorname{M}^{\infty}(P) = \operatorname{M}^{\alpha}(P)$. For $\alpha \in (0, \alpha_1)$ we write $\operatorname{M}^{0+}(P) = \operatorname{M}^{\alpha}(P)$. If $\gcd(r+s, t) = 1$, then, from the definition of $\alpha$-semi-stability, we see that $(\Gamma, {\mathcal F}) \in \operatorname{M}^{0+}(P)$ if and only if ${\mathcal F}$ is semi-stable. At the other extreme we have the following proposition due to Pandharipande and Thomas.
\begin{proposition} \label{M_infinity} For $\alpha \gg 0$, a pair $\Lambda = (\Gamma, {\mathcal F})$ is $\alpha$-semi-stable if and only if ${\mathcal F}$ is pure and ${\mathcal F}/\mathcal O_C$ has dimension zero or is zero, where $\mathcal O_C$ is the subsheaf of ${\mathcal F}$ generated by $\Gamma$. In particular, $t \ge r + s - rs$. The scheme $\operatorname{M}^{\infty}(rm + sn + t)$ is isomorphic to the relative Hilbert scheme of zero-dimensional schemes of length $ t - r - s + rs$ contained in curves of degree $(s, r)$. \end{proposition}
\begin{proof} Assume that $(\Gamma, {\mathcal F})$ is $\alpha$-semi-stable for $\alpha \gg 0$. If $P_{\mathcal O_C}(m, n) = r'm + s'n + t'$ with $r' + s' < r + s$, then \[ \operatorname{p}_{\alpha}(\Gamma, \mathcal O_C) = \frac{\alpha + t'}{r' + s'} > \frac{\alpha + t}{r + s} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \gg 0$}, \] which contradicts our hypothesis. Thus, $P_{\mathcal O_C}(m, n) = rm + sn + r + s - rs$. Conversely, assume that $\mathcal O_C$ has this Hilbert polynomial and that ${\mathcal F}$ is pure. Let $\Lambda' = (\Gamma', {\mathcal F}') \subset \Lambda$ be a proper coherent subsystem with $P_{{\mathcal F}'}(m, n) = r'm + s'n + t'$. If $\Gamma' = \{ 0 \}$, then \[ \operatorname{p}_{\alpha}(\Lambda') = \frac{t'}{r' + s'} < \frac{\alpha + t}{r + s} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \gg 0$}. \] If $\Gamma' = \Gamma$, then $\mathcal O_C \subset {\mathcal F}'$, hence $r' = r$, $s' = s$, $t' < t$, and we have \[ \operatorname{p}_{\alpha}(\Lambda') = \frac{\alpha + t'}{r + s} < \frac{\alpha + t}{r + s} = \operatorname{p}_{\alpha}(\Lambda). \] The isomorphism between $\operatorname{M}^{\infty}(P)$ and the relative Hilbert scheme is a particular case of \cite[Proposition B.8]{pandharipande_thomas}. As a map, it is given by $(\Gamma, {\mathcal F}) \mapsto (Z, C)$, where $Z \subset C$ is the subscheme introduced at Lemma \ref{unique_extension}. \end{proof}
\begin{corollary} \label{M_infinity_smooth} The scheme $\operatorname{M}^{\infty}(4m + 2n +1)$ is isomorphic to a fiber bundle with fiber $\mathbb P^{11}$ and base the Hilbert scheme of three points in $\mathbb P^1 \times \mathbb P^1$, so it is smooth. \end{corollary}
\begin{proof} The relative Hilbert scheme of pairs $(Z, C)$, where $C \subset \mathbb P^1 \times \mathbb P^1$ is a curve of degree $(2, 4)$ and $Z \subset C$ is a subscheme of dimension zero and length $3$, has fiber $\mathbb P(\operatorname{H}^0({\mathcal I}_Z(2, 4)))$ over $Z$. If $Z$ is not contained in a line of degree $(0, 1)$ or $(1, 0)$, then, from Lemma \ref{length_3_scheme}, we deduce that $\operatorname{H}^0({\mathcal I}_Z(2, 4)) \simeq \mathbb C^{12}$. If $Z$ is contained in such a line, then it is straightforward to check that $\operatorname{H}^0({\mathcal I}_Z(2, 4)) \simeq \mathbb C^{12}$. \end{proof}
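\noindent For the Hilbert polynomial $4m + 2n + 1$ the numerology of Proposition \ref{M_infinity} and of Corollary \ref{M_infinity_smooth} can be summarized by the elementary check below (a Python aside, using only quantities already introduced): the subschemes $Z$ have length $3$, the fibers of the relative Hilbert scheme are $11$-dimensional, and the total dimension agrees with $\dim \mathbf M = 17$.

\begin{verbatim}
r, s, t = 4, 2, 1                      # Hilbert polynomial rm + sn + t

length_Z = t - r - s + r * s           # length of the subscheme Z contained in C
assert length_Z == 3

h0_O_2_4 = (2 + 1) * (4 + 1)           # sections of O(2, 4) on P^1 x P^1
fiber_dim = h0_O_2_4 - length_Z - 1    # dimension of P(H^0(I_Z(2, 4)))
base_dim = 2 * length_Z                # dimension of Hilb^3(P^1 x P^1)

assert fiber_dim == 11
assert fiber_dim + base_dim == 2 * r * s + 1    # = 17 = dim M
\end{verbatim}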
\begin{lemma} \label{alpha_nonempty} Assume that $\operatorname{M}^{\alpha}(rm + sn + t) \neq \emptyset$. Then $t \ge r + s - rs$. For $r$, $s$ non-negative integers, not both zero, and $\alpha \in (0, \infty)$, we have \[ \operatorname{M}^{\alpha}(rm + sn + r + s - rs) \simeq \operatorname{M}^{\infty}(rm + sn + r + s - rs). \] \end{lemma}
\begin{proof} We use induction on $r + s$. If $r + s = 1$, or if there is no wall in $[\alpha, \infty)$, then $\operatorname{M}^{\alpha}(rm + sn + t) = \operatorname{M}^{\infty}(rm + sn + t)$ and the conclusion follows from Proposition \ref{M_infinity}. Assume that $r + s > 1$ and that there is a wall $\alpha' \in [\alpha, \infty)$. There is a pair $\Lambda \in \operatorname{M}^{\alpha'}(rm + sn + t)$ and a subpair or quotient pair $\Lambda' \in \operatorname{M}^{\alpha'}(r'm + s'n + t')$, such that $\operatorname{p}_{\alpha'}(\Lambda) = \operatorname{p}_{\alpha'}(\Lambda')$. We have $0 \le r' \le r$, $0 \le s' \le s$, $1 \le r' + s' < r +s$, \[ \frac{\alpha' + t}{r + s} = \frac{\alpha' + t'}{r' + s'}, \] hence \begin{align*} t & = \frac{(r + s - r' - s') \alpha' + (r + s) t'}{r' + s'} > \frac{r + s}{r' + s'} t' \\ & \ge \frac{r + s}{r' + s'} (r' + s' - r's') \qquad \text{(by the induction hypothesis)} \\ & = r + s - \frac{r + s}{r' + s'} r's' \ge r + s - rs. \end{align*} If $t = r + s - rs$, then there is no wall in $[\alpha, \infty)$, hence we have an isomorphism as in the lemma. \end{proof}
\begin{proposition} \label{walls} With respect to $P(m, n) = 4m + 2n + 1$ there are only two walls at $\alpha_1 = 5$ and $\alpha_2 = 11$. \end{proposition}
\begin{proof} Assume that $\alpha$ is a wall. Then there are pairs $\Lambda \in \operatorname{M}^{\alpha}(4m + 2n + 1)$ and $\Lambda' \in \operatorname{M}^{\alpha}(rm + sn + t)$ such that $\Lambda'$ is a subpair or a quotient pair of $\Lambda$ and \begin{equation} \label{alpha} \frac{\alpha + t}{r + s} = \frac{\alpha + 1}{6}. \end{equation} Here $0 \le r \le 4$, $0 \le s \le 2$, $1 \le r + s \le 5$. By Lemma \ref{alpha_nonempty}, we also have $t \ge r + s - rs$. Assume that $r = 3$, $s = 2$, $t \ge -1$. Equation (\ref{alpha}) has solutions $\alpha_1 = 5$ for $t = 0$ and $\alpha_2 = 11$ for $t = -1$. Assume that $r = 2$, $s = 2$, $t \ge 0$. Equation (\ref{alpha}) has solution $\alpha = 2$ for $t = 0$. In this case either $\Lambda \in \operatorname{Ext}^1(\Lambda', \Lambda'')$ or $\Lambda \in \operatorname{Ext}^1(\Lambda'', \Lambda')$ for some $\Lambda'' \in \operatorname{M}(2m + 1)$. However, according to \cite[Proposition 10]{ballico_huh}, $\operatorname{M}(2m + 1) = \emptyset$. Thus, there is no wall at $\alpha = 2$. For all other choices of $r$ and $s$ equation (\ref{alpha}) has no positive solution in $\alpha$. \end{proof}
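\noindent The case analysis in the proof can be repeated mechanically: for each admissible triple $(r, s, t)$ one solves equation (\ref{alpha}) for $\alpha$ and keeps the positive solutions. The following Python sketch, included only as an illustration, reproduces the candidate values $2$, $5$ and $11$; it does not replace the moduli-theoretic argument that excludes $\alpha = 2$.

\begin{verbatim}
from fractions import Fraction

candidates = set()
for r in range(0, 5):                        # 0 <= r <= 4
    for s in range(0, 3):                    # 0 <= s <= 2
        if not 1 <= r + s <= 5:
            continue
        # t >= r + s - rs is necessary for non-emptiness; t >= 2 gives alpha <= 0.
        for t in range(r + s - r * s, 2):
            # Solve (alpha + t)/(r + s) = (alpha + 1)/6 for alpha.
            alpha = Fraction(r + s - 6 * t, 6 - r - s)
            if alpha > 0:
                candidates.add(alpha)

assert candidates == {2, 5, 11}
# alpha = 2 is then discarded because M(2m + 1) is empty, leaving the walls 5 and 11.
\end{verbatim}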
\noindent Denote $\mathbf M^{\alpha} = \operatorname{M}^{\alpha}(4m + 2n + 1)$. For $\alpha \in (11, \infty)$, write $\mathbf M^{\alpha} = \mathbf M^{\infty}$. For $\alpha \in (5, 11)$, write $\mathbf M^{\alpha} = \mathbf M^{5+} = \mathbf M^{11-}$. For $\alpha \in (0, 5)$, write $\mathbf M^{\alpha} = \mathbf M^{0+}$. The inclusions of sets of $\alpha$-semi-stable pairs induce the birational morphisms \[ \xymatrix { \mathbf M^{\infty} \ar[dr]_-{\rho_{\infty}} & & \mathbf M^{11-} \ar[dl]^-{\rho_{11}} \ar@{=}[r] & \mathbf M^{5+} \ar[dr]_-{\rho_5} & & \mathbf M^{0+} \ar[dl]^-{\rho_0} \\ & \mathbf M^{11} & & & \mathbf M^5 } \] In view of Theorem \ref{wall_crossing}, the above are flipping diagrams (consult \cite[Remark 5]{genus_two} for details).
\begin{remark} \label{flipping_base} From the proof of Proposition \ref{walls}, we see that an S-equivalence class of strictly $\alpha$-semi-stable elements in $\mathbf M^{11}$ consists of (split or non-split) extensions of $(\Gamma_1, {\mathcal E}_1)$ by $(0, \mathcal O_L(1, 0))$, together with the extensions of $(0, \mathcal O_L(1, 0))$ by $(\Gamma_1, {\mathcal E}_1)$. Here $(\Gamma_1, {\mathcal E}_1)$ lies in $\operatorname{M}^{11}(3m + 2n -1)$ and $L \subset \mathbb P^1 \times \mathbb P^1$ is a line of degree $(0, 1)$. We say, for short, that the strictly $\alpha$-semi-stable elements of $\mathbf M^{11}$ are of the form $(\Gamma_1, {\mathcal E}_1) \oplus(0, \mathcal O_L(1, 0))$. According to Lemma \ref{alpha_nonempty} and Proposition \ref{M_infinity}, ${\mathcal E}_1 \simeq \mathcal O_Q$ for a quintic curve $Q \subset \mathbb P^1 \times \mathbb P^1$ of degree $(2, 3)$. Thus, $\operatorname{M}^{11}(3m + 2n -1) \simeq \mathbb P^{11}$.
Again from the proof of Proposition \ref{walls}, we see that the strictly $\alpha$-semi-stable elements in $\mathbf M^5$ are of the form $(\Gamma, {\mathcal E}) \oplus (0, \mathcal O_L)$, where $(\Gamma, {\mathcal E}) \in \operatorname{M}^5(3m + 2n)$. We claim that $\operatorname{M}^5(3m + 2n) \simeq \operatorname{M}^{\infty}(3m + 2n)$. To see this, we will show that there are no walls relative to the polynomial $P(m, n) = 3m + 2n$. As in the proof of Proposition \ref{walls}, we attempt to solve the equation \[ \frac{\alpha + t}{r + s} = \frac{\alpha}{5} \] with $0 \le r \le 3$, $0 \le s \le 2$, $1 \le r + s \le 4$, $t \ge r + s - rs$. For all choices of $r$ and $s$ we have $t \ge 0$, hence the above equation has no positive solutions in $\alpha$. From Proposition \ref{M_infinity} we see that $\operatorname{M}^5(3m + 2n)$ is isomorphic to the universal quintic of degree $(2, 3)$, so it is a $\mathbb P^{10}$-bundle over $\mathbb P^1 \times \mathbb P^1$. More precisely, the elements in $\operatorname{M}^5(3m + 2n)$ are of the form $(\operatorname{H}^0(\mathcal O_Q(p)), \mathcal O_Q(p))$, where $\mathcal O_Q(p)$ is a non-split extension of $\mathbb C_p$ by $\mathcal O_Q$. \end{remark}
\begin{proposition} \label{M_3_closure} Let $Q \subset \mathbb P^1 \times \mathbb P^1$ be a quintic curve of degree $(2, 3)$, let $p \in Q$ be a point, let $\mathcal O_Q(p)$ be a non-split extension of $\mathbb C_p$ by $\mathcal O_Q$, and let $L \subset \mathbb P^1 \times \mathbb P^1$ be a line of degree $(0, 1)$. Then any non-split extension sheaf ${\mathcal F}$ as in (\ref{Q_p_F_L}) is semi-stable. The set of such sheaves is the closure of $\mathbf M_3$ in $\mathbf M$. The boundary $\overline{\mathbf M}_3 \setminus \mathbf M_3$ is contained in $\mathbf M_2$, more precisely, it consists of extensions as in (\ref{C_F_p}) in which $C = Q \cup L$ and $p \in Q$. \end{proposition}
\begin{proof} The case when $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$ was examined at Proposition \ref{M_3_4}, so we need only consider the case when $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C^2$. In this case the canonical morphism $\mathcal O \to \mathcal O_L$ lifts to a morphism $\mathcal O \to {\mathcal F}$, hence we can combine resolution (\ref{Q_p_resolution}) with the standard resolution of $\mathcal O_L$ to obtain the resolution \[ 0 \longrightarrow \mathcal O(-2, -2) \oplus \mathcal O(-1, -3) \oplus \mathcal O(0, -1) \overset{\varphi}{\longrightarrow} \mathcal O(-1, -2) \oplus \mathcal O \oplus \mathcal O \longrightarrow {\mathcal F} \longrightarrow 0, \] \[ \varphi = \left[ \begin{array}{ccc} \varphi_{11} & \varphi_{12} & 0 \\ \varphi_{21} & \varphi_{22} & \varphi_{23} \\ 0 & 0 & \varphi_{33} \end{array} \right], \] where $\varphi_{11} \neq 0$, $\varphi_{12} \neq 0$, and $\varphi_{23}$ and $\varphi_{33}$ are linearly independent. Note that $p$ is given by the equations $\varphi_{11} = 0$, $\varphi_{12} = 0$. From the snake lemma, we obtain an extension \[ 0 \longrightarrow {\mathcal F}' \longrightarrow {\mathcal F} \longrightarrow \mathbb C_p \longrightarrow 0, \] where ${\mathcal F}'$ is given by the resolution \[ 0 \longrightarrow \mathcal O(-2, -3) \oplus \mathcal O(0, -1) \overset{\varphi'}{\longrightarrow} 2\mathcal O \longrightarrow {\mathcal F}' \longrightarrow 0, \] \[ \varphi' = \left[ \begin{array}{cc} \varphi'_{11} & \varphi_{23} \\ 0 & \varphi_{33} \end{array} \right], \qquad \varphi'_{11} = \varphi_{11} \varphi_{22} - \varphi_{12} \varphi_{21}. \] We claim that ${\mathcal F}' \simeq \mathcal O_C(0, 1)$, where $C = Q \cup L$. In view of Proposition \ref{M_2}, the claim implies that ${\mathcal F}$ is semi-stable, in fact $[{\mathcal F}] \in \mathbf M_2$. It remains to prove the claim. Let ${\mathcal K}$ be the kernel of the canonical morphism $\mathcal O_C \to \mathcal O_Q$. Since ${\mathcal K}$ has no zero-dimensional torsion and $P_{\mathcal K} = m - 1$, ${\mathcal K} \simeq \mathcal O_L( -2, 0)$. Applying ${\mathcal Hom}(-, \omega)$ to the exact sequence \[ 0 \longrightarrow \mathcal O_L(-2, 0) \longrightarrow \mathcal O_C(0, 1) \longrightarrow \mathcal O_Q(0, 1) \longrightarrow 0, \] yields the exact sequence \[ 0 \longrightarrow {\mathcal Ext}^1(\mathcal O_Q(0, 1), \omega) \longrightarrow {\mathcal Ext}^1(\mathcal O_C(0, 1), \omega) \longrightarrow {\mathcal Ext}^1(\mathcal O_L(-2, 0), \omega) \longrightarrow 0, \] which is the same as the exact sequence \[ 0 \longrightarrow \mathcal O_Q \longrightarrow \mathcal O_C(0, 1) \longrightarrow \mathcal O_L \longrightarrow 0. \] Since $\operatorname{H}^0(\mathcal O_C(0, 1)) \simeq \mathbb C^2$, the canonical morphism $\mathcal O \to \mathcal O_L$ lifts to a morphism $\mathcal O \to \mathcal O_C(0, 1)$, hence the canonical resolutions of $\mathcal O_Q$ and $\mathcal O_L$ can be combined into a resolution of the form \[ 0 \longrightarrow \mathcal O(-2, -3) \oplus \mathcal O(0, -1) \overset{\psi}{\longrightarrow} 2\mathcal O \longrightarrow \mathcal O_C(0, 1) \longrightarrow 0, \] \[ \psi = \left[ \begin{array}{cc} \varphi'_{11} & \psi_{12} \\ 0 & \varphi_{33} \end{array} \right]. \] Since $\mathcal O_C(0, 1)$ is a non-split extension of $\mathcal O_L$ by $\mathcal O_Q$, $\psi_{12}$ and $\varphi_{33}$ are linearly independent. It is clear now that the matrices representing $\varphi'$ and $\psi$ are equivalent under elementary row and column operations. 
We conclude that ${\mathcal F}' \simeq \mathcal O_C(0, 1)$. \end{proof}
\noindent The preimages of the sets of strictly semi-stable elements are the flipping loci: \begin{align*} F^{\infty} & = \rho_{\infty}^{-1}(\operatorname{M}^{11}(3m + 2n - 1) \times \operatorname{M}(m+ 2)) \subset \mathbf M^{\infty}, \\ F^{11} & = \rho_{11}^{-1}(\operatorname{M}^{11}(3m + 2n - 1) \times \operatorname{M}(m+ 2)) \subset \mathbf M^{11-}, \\ F^5 & = \rho_5^{-1}(\operatorname{M}^5(3m + 2n) \times \operatorname{M}(m + 1)) \subset \mathbf M^{5+}, \\ F^0 & = \rho_0^{-1}(\operatorname{M}^5(3m + 2n) \times \operatorname{M}(m + 1)) \subset \mathbf M^{0+}. \end{align*}
\begin{proposition} \label{flipping_loci} Consider $\Lambda_1 \in \operatorname{M}^{11}(3m + 2n - 1)$, $\Lambda_2 \in \operatorname{M}(m + 2)$, $\Lambda_3 \in \operatorname{M}^5(3m + 2n)$, and $\Lambda_4 \in \operatorname{M}(m + 1)$. \begin{enumerate} \item[(i)] Over a point $(\Lambda_1, \Lambda_2)$, $F^{\infty}$ has fiber $\mathbb P(\operatorname{Ext}^1(\Lambda_1, \Lambda_2))$. \item[(ii)] Over a point $(\Lambda_1, \Lambda_2)$, $F^{11}$ has fiber $\mathbb P(\operatorname{Ext}^1(\Lambda_2, \Lambda_1))$. \item[(iii)] Over a point $(\Lambda_3, \Lambda_4)$, $F^5$ has fiber $\mathbb P(\operatorname{Ext}^1(\Lambda_3, \Lambda_4))$. \item[(iv)] Over a point $(\Lambda_3, \Lambda_4)$, $F^0$ has fiber $\mathbb P(\operatorname{Ext}^1(\Lambda_4, \Lambda_3))$. \end{enumerate} \end{proposition}
\begin{proof} (i) We refer to the argument at \cite[Remark 2]{genus_two}.
\noindent (ii) Assume that $\Lambda = (\Gamma, {\mathcal F}) \in F^{11}$ lies over $(\Lambda_1, \Lambda_2)$. Then $\Lambda$ is a non-split extension of $\Lambda_1$ by $\Lambda_2$, or, vice versa, of $\Lambda_2$ by $\Lambda_1$. If $\Lambda_2 \subset \Lambda$, then \[ \operatorname{p}_{\alpha}(\Lambda_2) = 2 > \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \in (5, 11)$}, \] which violates the semi-stability of $\Lambda$. Thus $\Lambda \in \mathbb P(\operatorname{Ext}^1(\Lambda_2, \Lambda_1))$. Conversely, given such $\Lambda$, we need to show that $\Lambda \in \mathbf M^{\alpha}$ for $\alpha \in (5, 11)$. Write $\Lambda_1 = (\Gamma_1, \mathcal O_Q)$, $\Lambda_2 = (0, \mathcal O_L(1, 0))$. We have a non-split extension of sheaves \[ 0 \longrightarrow \mathcal O_Q \longrightarrow {\mathcal F} \longrightarrow \mathcal O_L(1, 0) \longrightarrow 0. \] Let $\Lambda' = (\Gamma', {\mathcal F}')$ be a proper coherent subsystem of $\Lambda$. Let ${\mathcal G}$ be the image of ${\mathcal F}'$ in $\mathcal O_L(1, 0)$. If ${\mathcal F}' \cap \mathcal O_Q = \{ 0 \}$, then ${\mathcal G} \neq \mathcal O_L(1, 0)$, forcing $\chi({\mathcal F}') = \chi({\mathcal G}) \le 1$. If ${\mathcal F}' \cap \mathcal O_Q \neq \{ 0 \}$, then $\chi({\mathcal F}' \cap \mathcal O_Q) \le -1$ because, by virtue of \cite[Lemma 9]{ballico_huh}, $\mathcal O_Q$ is semi-stable. We have in this case $\chi({\mathcal F}') = \chi({\mathcal F}' \cap \mathcal O_Q) + \chi({\mathcal G}) \le -1 + 2 = 1$. If $\Gamma' = \{ 0 \}$, then \[ \operatorname{p}_{\alpha}(\Lambda') = \operatorname{p}({\mathcal F}') \le 1 < \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \in (5, 11)$}. \] Assume now that $\Gamma' \neq \{ 0 \}$. Then $\Gamma' = \Gamma = \operatorname{H}^0(\mathcal O_Q)$, hence $\mathcal O_Q \subset {\mathcal F}'$. If $\mathcal O_Q = {\mathcal F}'$, then \[ \operatorname{p}_{\alpha}(\Lambda') = \frac{\alpha - 1}{5} < \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \in (5, 11)$}. \] If $\mathcal O_Q \subsetneqq {\mathcal F}'$, then $r({\mathcal F}') + s({\mathcal F}') = 6$, hence $\chi({\mathcal F}') \le 0$, and hence \[ \operatorname{p}_{\alpha}(\Lambda') = \frac{\alpha + \chi({\mathcal F}')}{6} \le \frac{\alpha}{6} < \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda). \] In all cases we have the inequality $\operatorname{p}_{\alpha}(\Lambda') < \operatorname{p}_{\alpha}(\Lambda)$, hence $\Lambda \in \mathbf M^{\alpha}$, for $\alpha \in (5, 11)$.
\noindent (iii) We will show that every $\Lambda = (\Gamma, {\mathcal F}) \in \mathbb P(\operatorname{Ext}^1(\Lambda_3, \Lambda_4))$ gives a point in $\mathbf M^{\alpha}$ for $\alpha \in (5, 11)$. Write $\Lambda_3 = (\Gamma_3, \mathcal O_Q(p))$, $\Lambda_4 = (0, \mathcal O_L)$. We have a, possibly split, extension of sheaves \[ 0 \longrightarrow \mathcal O_L \longrightarrow {\mathcal F} \longrightarrow \mathcal O_Q(p) \longrightarrow 0. \] Let $\Lambda' = (\Gamma', {\mathcal F}')$ be a proper coherent subsystem of $\Lambda$. Let ${\mathcal G}$ be the image of ${\mathcal F}'$ in $\mathcal O_Q(p)$. Using the fact that $\mathcal O_Q$ is semi-stable, it is easy to see that $\mathcal O_Q(p)$ is semi-stable, as well. Thus, $\chi({\mathcal G}) \le 0$, hence $\chi({\mathcal F}') = \chi({\mathcal F}' \cap \mathcal O_L) + \chi({\mathcal G}) \le 1 + 0 = 1$. If $\Gamma' = \{ 0 \}$, then \[ \operatorname{p}_{\alpha}(\Lambda') = \operatorname{p}({\mathcal F}') \le 1 < \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \in (5, 11)$}. \] Assume now that $\Gamma' \neq \{ 0 \}$, i.e. $\Gamma' = \Gamma$. Then $\mathcal O_Q \subset {\mathcal G}$. If ${\mathcal F}' \cap \mathcal O_L = \{ 0 \}$, then ${\mathcal F}' \ncong \mathcal O_Q(p)$, otherwise $\Lambda \simeq \Lambda_3 \oplus \Lambda_4$. In this case ${\mathcal F}' \simeq \mathcal O_Q$, hence \[ \operatorname{p}_{\alpha}(\Lambda') = \frac{\alpha - 1}{5} < \frac{\alpha + 1}{6} = \operatorname{p}_{\alpha}(\Lambda) \quad \text{for $\alpha \in (5, 11)$}. \] Assume now that ${\mathcal F}' \cap \mathcal O_L \neq \{ 0 \}$. Then $r({\mathcal F}') + s({\mathcal F}') = 6$, hence $\chi({\mathcal F}') \le 0$, and hence $\operatorname{p}_{\alpha}(\Lambda') < \operatorname{p}_{\alpha}(\Lambda)$.
\noindent (iv) If $(\Gamma, {\mathcal F}) \in \mathbb P(\operatorname{Ext}^1(\Lambda_4, \Lambda_3))$, then we have the non-split extension (\ref{Q_p_F_L}), hence, by Proposition \ref{M_3_closure}, ${\mathcal F}$ is semi-stable. Thus $(\Gamma, {\mathcal F}) \in \mathbf M^{0+}$, i.e. $(\Gamma, {\mathcal F}) \in F^0$. \end{proof}
\begin{proposition} \label{ext_sequence} \emph{(\cite[Corollaire 1.6]{he})} Let $\Lambda = (\Gamma, {\mathcal F})$ and $\Lambda' = (\Gamma', {\mathcal F}')$ be two coherent systems on a separated scheme of finite type over $\mathbb C$. Then there is a long exact sequence \begin{align*} 0 & \longrightarrow \operatorname{Hom}(\Lambda, \Lambda') \longrightarrow \operatorname{Hom}({\mathcal F}, {\mathcal F}') \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^0({\mathcal F}')/\Gamma') \\ & \longrightarrow \operatorname{Ext}^1(\Lambda, \Lambda') \longrightarrow \operatorname{Ext}^1({\mathcal F}, {\mathcal F}') \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^1({\mathcal F}')) \\ & \longrightarrow \operatorname{Ext}^2(\Lambda, \Lambda') \longrightarrow \operatorname{Ext}^2({\mathcal F}, {\mathcal F}') \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^2({\mathcal F}')). \end{align*} \end{proposition}
\begin{proposition} \label{flipping_bundles} The flipping loci $F^{\infty}$, $F^{11}$, $F^5$, $F^0$ are smooth bundles with fibers $\mathbb P^3$, $\mathbb P^1$, $\mathbb P^2$, respectively, $\mathbb P^1$. \end{proposition}
\begin{proof} We need to determine the extension spaces of pairs occurring at Proposition \ref{flipping_loci}.
\noindent (i) Choose $\Lambda_1 = (\Gamma_1, \mathcal O_Q)$ and $\Lambda_2 = (0, \mathcal O_L(1, 0))$. From Proposition \ref{ext_sequence} we have the long exact sequence \begin{align*} 0 & \longrightarrow \operatorname{Hom}(\Lambda_1, \Lambda_2) \longrightarrow \operatorname{Hom}(\mathcal O_Q, \mathcal O_L(1, 0)) \longrightarrow \operatorname{Hom}(\Gamma_1, \operatorname{H}^0(\mathcal O_L(1, 0))) \simeq \mathbb C^2 \\ & \longrightarrow \operatorname{Ext}^1(\Lambda_1, \Lambda_2) \longrightarrow \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(1, 0)) \longrightarrow \operatorname{Hom}(\Gamma_1, \operatorname{H}^1(\mathcal O_L(1, 0))) = \{ 0 \}. \end{align*} For $\alpha \gg 0$, $\Lambda_1$ and $\Lambda_2$ are $\alpha$-stable coherent systems of different slopes, hence $\operatorname{Hom}(\Lambda_1, \Lambda_2) = \{ 0 \}$. From the short exact sequence \begin{equation} \label{Q_resolution} 0 \longrightarrow \mathcal O(-2, -3) \longrightarrow \mathcal O \longrightarrow \mathcal O_Q \longrightarrow 0, \end{equation} we obtain the long exact sequence \begin{align*} 0 & \longrightarrow \operatorname{Hom}(\mathcal O_Q, \mathcal O_L(1, 0)) \longrightarrow \operatorname{H}^0(\mathcal O_L(1, 0)) \simeq \mathbb C^2 \longrightarrow \operatorname{H}^0(\mathcal O_L(3, 3)) \simeq \mathbb C^4 \\ & \longrightarrow \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(1, 0)) \longrightarrow \operatorname{H}^1(\mathcal O_L(1, 0)) = \{ 0 \}. \end{align*} Combining the last two long exact sequences, we obtain the isomorphism $\operatorname{Ext}^1(\Lambda_1, \Lambda_2) \simeq \mathbb C^4$.
\noindent (ii) From Proposition \ref{ext_sequence}, we have the exact sequence \begin{align*} \{ 0 \} = & \operatorname{Hom}(0, \operatorname{H}^0(\mathcal O_Q)/\Gamma_1) \longrightarrow \operatorname{Ext}^1(\Lambda_2, \Lambda_1) \longrightarrow \\ & \operatorname{Ext}^1(\mathcal O_L(1, 0), \mathcal O_Q) \simeq \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(-1, -2))^* \longrightarrow \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_Q)) = \{ 0 \}. \end{align*} From resolution (\ref{Q_resolution}), we obtain the exact sequence \[ \{ 0 \} = \operatorname{H}^0(\mathcal O_L(-1, -2)) \longrightarrow \operatorname{H}^0(\mathcal O_L(1, 1)) \simeq \mathbb C^2 \longrightarrow \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_L(-1, -2)) \longrightarrow \operatorname{H}^1(\mathcal O_L(-1, -2)) = \{ 0 \}. \] Combining the last two exact sequences, we obtain the isomorphism $\operatorname{Ext}^1(\Lambda_2, \Lambda_1) \simeq \mathbb C^2$.
\noindent (iii) Choose $\Lambda_3 = (\Gamma, \mathcal O_Q(p))$ and $\Lambda_4 = (0, \mathcal O_L)$. From Proposition \ref{ext_sequence}, we have the long exact sequence \begin{align*} \{ 0 \} = & \operatorname{Hom}(\Lambda_3, \Lambda_4) \longrightarrow \operatorname{Hom}(\mathcal O_Q(p), \mathcal O_L) \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^0(\mathcal O_L)) \simeq \mathbb C \longrightarrow \\ & \operatorname{Ext}^1(\Lambda_3, \Lambda_4) \longrightarrow \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L) \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^1(\mathcal O_L)) = \{ 0 \}. \end{align*} From resolution (\ref{Q_p_resolution}) we obtain the exact sequence \begin{align*} 0 & \longrightarrow \operatorname{Hom}(\mathcal O_Q(p), \mathcal O_L) \longrightarrow \operatorname{H}^0(\mathcal O_L(1, 2) \oplus \mathcal O_L) \simeq \mathbb C^3 \longrightarrow \operatorname{H}^0(\mathcal O_L(2, 2) \oplus \mathcal O_L(1, 3)) \simeq \mathbb C^5 \\ & \longrightarrow \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L) \longrightarrow \operatorname{H}^1(\mathcal O_L(1, 2) \oplus \mathcal O_L) = \{ 0 \}. \end{align*} Combining the last two exact sequences, it follows that $\operatorname{Ext}^1(\Lambda_3, \Lambda_4) \simeq \mathbb C^3$.
\noindent (iv) From Proposition \ref{ext_sequence}, we obtain the exact sequence \begin{align*} \{ 0 \} = & \operatorname{Hom}(0, \operatorname{H}^0(\mathcal O_Q(p))/\Gamma) \longrightarrow \operatorname{Ext}^1(\Lambda_4, \Lambda_3) \longrightarrow \\ & \operatorname{Ext}^1(\mathcal O_L, \mathcal O_Q(p)) \simeq \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L(-2, -2))^* \longrightarrow \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_Q(p))) = \{ 0 \}. \end{align*} From resolution (\ref{Q_p_resolution}) we obtain the exact sequence \begin{align*} \{ 0 \} = & \operatorname{H}^0(\mathcal O_L(-1, 0) \oplus \mathcal O_L(-2, -2)) \longrightarrow \operatorname{H}^0(\mathcal O_L \oplus \mathcal O_L(-1, 1)) \simeq \mathbb C \longrightarrow \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_L(-2, -2)) \\ \longrightarrow & \operatorname{H}^1(\mathcal O_L(-1, 0) \oplus \mathcal O_L(-2, -2)) \simeq \mathbb C \longrightarrow \operatorname{H}^1(\mathcal O_L \oplus \mathcal O_L(-1, 1)) = \{ 0 \}. \end{align*} Combining the last two exact sequences, it follows that $\operatorname{Ext}^1(\Lambda_4, \Lambda_3) \simeq \mathbb C^2$. \end{proof}
\begin{lemma} \label{ext^2} \emph{(i)} For $\Lambda \in F^{11}$ we have $\operatorname{Ext}^2(\Lambda, \Lambda) = \{ 0 \}$. \\ \emph{(ii)} For $\Lambda \in F^0$ we have $\operatorname{Ext}^2(\Lambda, \Lambda) = \{ 0 \}$. \end{lemma}
\begin{proof} (i) In view of the exact sequence \[ 0 \longrightarrow \Lambda_1 \longrightarrow \Lambda \longrightarrow \Lambda_2 \longrightarrow 0 \] it is enough to show that $\operatorname{Ext}^2(\Lambda_i, \Lambda_j) = \{ 0 \}$ for $i, j = 1, 2$. From Proposition \ref{ext_sequence}, we have the exact sequence \[ \{ 0 \} = \operatorname{Hom}(\Gamma_1, \operatorname{H}^1(\mathcal O_L(1, 0))) \longrightarrow \operatorname{Ext}^2(\Lambda_1, \Lambda_2) \longrightarrow \operatorname{Ext}^2(\mathcal O_Q, \mathcal O_L(1, 0)) \simeq \operatorname{Hom}(\mathcal O_L(1, 0), \mathcal O_Q(-2, -2))^*. \] The group on the right vanishes because $\operatorname{H}^0(\mathcal O_Q(-3, -2)) = \{ 0 \}$. Thus, $\operatorname{Ext}^2(\Lambda_1, \Lambda_2)$ $= \{ 0 \}$. From the exact sequence \[ \{ 0 \} = \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_Q)) \longrightarrow \operatorname{Ext}^2(\Lambda_2, \Lambda_1) \longrightarrow \operatorname{Ext}^2(\mathcal O_L(1, 0), \mathcal O_Q) \simeq \operatorname{Hom}(\mathcal O_Q, \mathcal O_L(-1, -2))^* = \{ 0 \} \] we obtain the vanishing of $\operatorname{Ext}^2(\Lambda_2, \Lambda_1)$. From the exact sequence \begin{multline*} \{ 0 \} = \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_L(1, 0))) \longrightarrow \operatorname{Ext}^2(\Lambda_2, \Lambda_2) \\ \longrightarrow \operatorname{Ext}^2(\mathcal O_L(1, 0), \mathcal O_L(1, 0)) \simeq \operatorname{Hom}(\mathcal O_L(1, 0), \mathcal O_L(-1, -2))^* = \{ 0 \} \end{multline*} we obtain the vanishing of $\operatorname{Ext}^2(\Lambda_2, \Lambda_2)$. From Proposition \ref{ext_sequence}, we have the exact sequence \begin{align*} \{ 0 \} = \operatorname{Hom}(\Gamma_1, \operatorname{H}^0(\mathcal O_Q)/\Gamma_1) & \longrightarrow \operatorname{Ext}^1(\Lambda_1, \Lambda_1) \longrightarrow \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_Q) \longrightarrow \operatorname{Hom}(\Gamma_1, \operatorname{H}^1(\mathcal O_Q)) \simeq \mathbb C^2 \\ & \longrightarrow \operatorname{Ext}^2(\Lambda_1, \Lambda_1) \longrightarrow \operatorname{Ext}^2(\mathcal O_Q, \mathcal O_Q) \simeq \operatorname{Hom}(\mathcal O_Q, \mathcal O_Q(-2, -2))^* = \{ 0 \}. \end{align*} According to \cite[Th\'eor\`eme 3.12]{he}, $\operatorname{Ext}^1(\Lambda_1, \Lambda_1)$ is isomorphic to the tangent space of $\operatorname{M}^{11}(3m + 2n -1) \simeq \mathbb P^{11}$ (see Remark \ref{flipping_base}) at $\Lambda_1$, so it is isomorphic to $\mathbb C^{11}$. From resolution (\ref{Q_resolution}), we obtain the exact sequence \begin{align*} 0 \longrightarrow & \operatorname{Hom}(\mathcal O_Q, \mathcal O_Q) \stackrel{\simeq}{\longrightarrow} \operatorname{H}^0(\mathcal O_Q) \longrightarrow \operatorname{H}^0(\mathcal O_Q(2, 3)) \simeq \mathbb C^{11} \\ \longrightarrow & \operatorname{Ext}^1(\mathcal O_Q, \mathcal O_Q) \longrightarrow \operatorname{H}^1(\mathcal O_Q) \simeq \mathbb C^2 \longrightarrow \operatorname{H}^1(\mathcal O_Q(2, 3)) = \{ 0 \}. \end{align*} Combining the last two exact sequences we obtain the vanishing of $\operatorname{Ext}^2(\Lambda_1, \Lambda_1)$.
\noindent (ii) As above, we need to prove that $\operatorname{Ext}^2(\Lambda_i, \Lambda_j) = \{ 0 \}$ for $i, j = 3, 4$. From Proposition \ref{ext_sequence}, we have the exact sequence \[ \{ 0 \} = \operatorname{Hom}(\Gamma, \operatorname{H}^1(\mathcal O_L)) \longrightarrow \operatorname{Ext}^2(\Lambda_3, \Lambda_4) \longrightarrow \operatorname{Ext}^2(\mathcal O_Q(p), \mathcal O_L) \simeq \operatorname{Hom}(\mathcal O_L, \mathcal O_Q(p)(-2, -2))^* = \{ 0 \}. \] Thus, $\operatorname{Ext}^2(\Lambda_3, \Lambda_4) = \{ 0 \}$. From the exact sequence \[ \{ 0 \} = \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_Q(p))) \longrightarrow \operatorname{Ext}^2(\Lambda_4, \Lambda_3) \longrightarrow \operatorname{Ext}^2(\mathcal O_L, \mathcal O_Q(p)) \simeq \operatorname{Hom}(\mathcal O_Q(p), \mathcal O_L(-2, -2))^* = \{ 0 \} \] we obtain the vanishing of $\operatorname{Ext}^2(\Lambda_4, \Lambda_3)$. From the exact sequence \[ \{ 0 \} = \operatorname{Hom}(0, \operatorname{H}^1(\mathcal O_L)) \longrightarrow \operatorname{Ext}^2(\Lambda_4, \Lambda_4) \longrightarrow \operatorname{Ext}^2(\mathcal O_L, \mathcal O_L) \simeq \operatorname{Hom}(\mathcal O_L, \mathcal O_L(-2, -2))^* = \{ 0 \} \] we obtain the vanishing of $\operatorname{Ext}^2(\Lambda_4, \Lambda_4)$. From Proposition \ref{ext_sequence}, we have the exact sequence \begin{align*} \{ 0 \} = & \operatorname{Hom}(\Gamma, \operatorname{H}^0(\mathcal O_Q(p))/\Gamma) \\ \longrightarrow & \operatorname{Ext}^1(\Lambda_3, \Lambda_3) \longrightarrow \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_Q(p)) \longrightarrow \operatorname{Hom}(\Gamma, \operatorname{H}^1(\mathcal O_Q(p))) \simeq \mathbb C \\ \longrightarrow & \operatorname{Ext}^2(\Lambda_3, \Lambda_3) \longrightarrow \operatorname{Ext}^2(\mathcal O_Q(p), \mathcal O_Q(p)) \simeq \operatorname{Hom}(\mathcal O_Q(p), \mathcal O_Q(p)(-2, -2))^* = \{ 0 \}. \end{align*} From resolution (\ref{Q_p_resolution}), we obtain the exact sequence \begin{align*} 0 \longrightarrow & \operatorname{Hom}(\mathcal O_Q(p), \mathcal O_Q(p)) \longrightarrow \operatorname{H}^0(\mathcal O_Q(p)(1, 2)) \oplus \operatorname{H}^0(\mathcal O_Q(p)) \longrightarrow \operatorname{H}^0(\mathcal O_Q(p)(2, 2)) \oplus \operatorname{H}^0(\mathcal O_Q(p)(1, 3)) \\ \longrightarrow & \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_Q(p)) \longrightarrow \operatorname{H}^1(\mathcal O_Q(p)(1, 2)) \oplus \operatorname{H}^1(\mathcal O_Q(p)) \longrightarrow \operatorname{H}^1(\mathcal O_Q(p)(2, 2)) \oplus \operatorname{H}^1(\mathcal O_Q(p)(1, 3)) \longrightarrow 0. \end{align*} Since $\operatorname{Hom}(\mathcal O_Q(p), \mathcal O_Q(p)) \simeq \mathbb C$, it follows that \[ \dim^{}_{\mathbb C} \operatorname{Ext}^1(\mathcal O_Q(p), \mathcal O_Q(p)) = 1 - \chi(\mathcal O_Q(p)(1, 2)) - \chi(\mathcal O_Q(p)) + \chi(\mathcal O_Q(p)(2, 2)) + \chi(\mathcal O_Q(p)(1, 3)) = 13. \] According to \cite[Th\'eor\`eme 3.12]{he}, $\operatorname{Ext}^1(\Lambda_3, \Lambda_3)$ is isomorphic to the tangent space at $\Lambda_3$ of $\operatorname{M}^5(3m+2n)$, which, according to Remark \ref{flipping_base}, is smooth of dimension $12$. We obtain the vanishing of $\operatorname{Ext}^2(\Lambda_3, \Lambda_3)$. \end{proof}
\begin{theorem} \label{wall_crossing} Let $\mathbf M^{\alpha}$ be the moduli space of $\alpha$-semi-stable pairs on $\mathbb P^1 \times \mathbb P^1$ with Hilbert polynomial $P(m, n) = 4m + 2n + 1$. We have the following blowing up diagrams \[ \xymatrix { & \ \ \, \widetilde{\mathbf M}^{\infty} \ar[dl]_-{\beta_{\infty}} \ar[dr]^-{\beta_{11}} & & & \ \ \ \widetilde{\mathbf M}^{5+} \ar[dl]_-{\beta_5} \ar[dr]^-{\beta_0} \\ \mathbf M^{\infty} \ar[dr]_-{\rho_{\infty}} & & \mathbf M^{11-} \ar[dl]^-{\rho_{11}} \ar@{=}[r] & \mathbf M^{5+} \ar[dr]_-{\rho_5} & & \mathbf M^{0+} \ar[dl]^-{\rho_0} \\ & \mathbf M^{11} & & & \mathbf M^5 } \] Here $\beta_{\infty}$ is the blow-up along $F^{\infty}$ and $\beta_{11}$ is the contraction of the exceptional divisor $\widetilde{F}^{\infty}$ in the direction of $\mathbb P^3$, where we view $\widetilde{F}^{\infty}$ as a $\mathbb P^3 \times \mathbb P^1$-bundle with base $\operatorname{M}^{11}(3m + 2n - 1) \times \operatorname{M}(m + 2)$. Likewise, $\beta_5$ is the blow-up along $F^5$ and $\beta_0$ is the contraction of the exceptional divisor $\widetilde{F}^5$ in the direction of $\mathbb P^2$, where we view $\widetilde{F}^5$ as a $\mathbb P^2 \times \mathbb P^1$-bundle over $\operatorname{M}^5(3m + 2n) \times \operatorname{M}(m + 1)$. \end{theorem}
\begin{proof} A birational morphism $\beta_{11} \colon \widetilde{\mathbf M}^{\infty} \to \mathbf M^{11-}$ can be constructed as at \cite[Theorem 3.3]{choi_chung} such that $\beta_{11}$ contracts $\widetilde{F}^{\infty}$ in the direction of $\mathbb P^3$, $\beta_{11}$ is an isomorphism outside $F^{11}$, and $\beta_{11}^{-1}(x) \simeq \mathbb P^3$ for any $x \in F^{11}$. We now apply the Universal Property of the blow-up \cite[p. 604]{griffiths_harris} to deduce that $\beta_{11}$ is a blow-up with center $F^{11}$. For this we need to know that $\mathbf M^{11-}$ and $F^{11}$ are smooth. By Corollary \ref{M_infinity_smooth}, $\mathbf M^{\infty}$ is smooth, by Proposition \ref{flipping_bundles}, the blowing up center $F^{\infty}$ is smooth, hence $\widetilde{\mathbf M}^{\infty}$ is smooth, too. Since $\beta_{11}$ is an isomorphism outside $F^{11}$, $\mathbf M^{11-} \setminus F^{11}$ is smooth. Since all points of $\mathbf M^{11-}$ are $\alpha$-stable, we can apply the Smoothness Criterion \cite[Th\'eor\`eme 3.12]{he}, which states that $\Lambda \in \mathbf M^{11-}$ is a smooth point if $\operatorname{Ext}^2(\Lambda, \Lambda) = \{ 0 \}$. Thus, in view of Lemma \ref{ext^2}(i), $\mathbf M^{11-}$ is smooth at every point of $F^{11}$. The smoothness of $F^{11}$ was proved at Proposition \ref{flipping_bundles}.
For the second blow-up diagram we reason analogously, using the facts that $F^5$ and $F^0$ are smooth, and using Lemma \ref{ext^2}(ii). \end{proof}
\noindent According to \cite[Th\'eor\`eme 4.3]{he}, there is a universal family $(\widetilde{\Gamma}, \widetilde{{\mathcal F}})$ of coherent systems on $\mathbf M^{0+} \times \mathbb P^1 \times \mathbb P^1$. In particular, $\widetilde{{\mathcal F}}$ is a family of semi-stable sheaves on $\mathbb P^1 \times \mathbb P^1$ with Hilbert polynomial $4m + 2n + 1$, which is flat over $\mathbf M^{0+}$. It induces the so-called \emph{forgetful morphism} $\phi \colon \mathbf M^{0+} \to \mathbf M$. We have $\phi(\Gamma, {\mathcal F}) = [{\mathcal F}]$.
\begin{proposition} \label{blow_up} The forgetful morphism $\phi \colon \mathbf M^{0+} \to \mathbf M$ is a blow-up with center the Brill-Noether locus $\mathbf M_2$. \end{proposition}
\begin{proof} According to Proposition \ref{vanishing}(ii), for $[{\mathcal F}] \in \mathbf M \setminus \mathbf M_2$ we have $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C$, hence $\phi^{-1}([{\mathcal F}]) = (\operatorname{H}^0({\mathcal F}), {\mathcal F})$ is a single point. Thus, $\phi$ is an isomorphism away from $\mathbf M_2$. According to Proposition \ref{M_2}, for $[{\mathcal F}] \in \mathbf M_2$ we have $\operatorname{H}^0({\mathcal F}) \simeq \mathbb C^2$, hence $\phi^{-1}([{\mathcal F}]) \simeq \mathbb P^1$. Taking into account that $\mathbf M$ and $\mathbf M_2$ are smooth, we can apply the Universal Property of the blow-up \cite[p. 604]{griffiths_harris} to conclude that $\phi$ is a blow-up with center $\mathbf M_2$. \end{proof}
\noindent \\ \emph{Proof of Theorem \ref{poincare_polynomial}}. By virtue of Proposition \ref{blow_up}, we have the relation \[ \operatorname{P}(\mathbf M) = \operatorname{P}(\mathbf M^{0+}) - \xi \operatorname{P}(\mathbf M_2). \] According to Proposition \ref{M_2}, we have the relation \[ \operatorname{P}(\mathbf M_2) = \operatorname{P}(\mathbb P^{13}) \operatorname{P}(\mathbb P^1 \times \mathbb P^1). \] By virtue of Theorem \ref{wall_crossing}, we have the relation \begin{align*} \operatorname{P}(\mathbf M^{0+}) = \operatorname{P}(\mathbf M^{\infty}) + & \big(\operatorname{P}(\mathbb P^1) - \operatorname{P}(\mathbb P^3)\big) \operatorname{P}\!\big(\operatorname{M}^{11}(3m + 2n - 1) \times \operatorname{M}(m + 2)\big) \\ + & \big(\operatorname{P}(\mathbb P^1) - \operatorname{P}(\mathbb P^2)\big) \operatorname{P}\!\big(\operatorname{M}^5(3m + 2n) \times \operatorname{M}(m + 1)\big). \end{align*} In view of Corollary \ref{M_infinity_smooth} and Remark \ref{flipping_base}, we have the relation \[ \operatorname{P}(\mathbf M^{0+}) = \operatorname{P}(\mathbb P^{11}) \operatorname{P}(\operatorname{Hilb}_{\mathbb P^1 \times \mathbb P^1}(3)) + (\operatorname{P}(\mathbb P^1) - \operatorname{P}(\mathbb P^3)) \operatorname{P}(\mathbb P^{11}) \operatorname{P} (\mathbb P^1) + (\operatorname{P}(\mathbb P^1) - \operatorname{P}(\mathbb P^2)) \operatorname{P}(\mathbb P^{10}) \operatorname{P}(\mathbb P^1 \times \mathbb P^1) \operatorname{P}(\mathbb P^1). \] According to \cite[Theorem 0.1]{goettsche}, we have the equation \[ \operatorname{P}(\operatorname{Hilb}_{\mathbb P^1 \times \mathbb P^1}(3)) = \xi^6 + 3 \xi^5 + 9 \xi^4 + 14 \xi^3 + 9 \xi^2 + 3 \xi + 1. \] The final result reads \begin{multline*} \operatorname{P}(\mathbf M) = \frac{\xi^{12} - 1}{\xi - 1} (\xi^6 + 3 \xi^5 + 9 \xi^4 + 14 \xi^3 + 9 \xi^2 + 3 \xi + 1) - (\xi^3 + \xi^2) \frac{\xi^{12} - 1}{\xi - 1} (\xi + 1) \\ - \xi^2 \frac{\xi^{11} - 1}{\xi - 1} (\xi + 1)^3 - \xi \frac{\xi^{14} - 1}{\xi - 1} (\xi + 1)^2. \qed \end{multline*}
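\noindent \\ {\bf Remark.} The closed formula for $\operatorname{P}(\mathbf M)$ above can be expanded mechanically into an explicit polynomial in $\xi$. The following computer-algebra sketch (not part of the argument; it merely re-evaluates the displayed expression, and the helper name \texttt{geom} is introduced only for this illustration) prints the coefficients of $\operatorname{P}(\mathbf M)$ in decreasing powers of $\xi$.
\begin{verbatim}
# A minimal sympy sketch: expand the closed formula for P(M) above.
from sympy import symbols, cancel, expand, Poly

xi = symbols('xi')

def geom(n):
    # (xi^n - 1)/(xi - 1), written out as a polynomial
    return cancel((xi**n - 1) / (xi - 1))

P_M = (geom(12) * (xi**6 + 3*xi**5 + 9*xi**4 + 14*xi**3
                   + 9*xi**2 + 3*xi + 1)
       - (xi**3 + xi**2) * geom(12) * (xi + 1)
       - xi**2 * geom(11) * (xi + 1)**3
       - xi * geom(14) * (xi + 1)**2)

print(Poly(expand(P_M), xi).all_coeffs())
\end{verbatim}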
\noindent \\ {\bf Acknowledgement.} The author would like to thank Jean-Marc Dr\'ezet for several helpful discussions.
\end{document}
The function $f(x)$ satisfies
\[f(2^x) + xf(2^{-x}) = 1\]for all real numbers $x.$ Find $f(2).$
Setting $x = 1,$ we get
\[f(2) + f \left( \frac{1}{2} \right) = 1.\]Setting $x = -1,$ we get
\[f \left( \frac{1}{2} \right) - f(2) = 1.\]Subtracting these equations, we get $2f(2) = 0,$ so $f(2) = \boxed{0}.$
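As a remark beyond the original solution, the same two substitutions determine $f$ on all of $(0, \infty).$ Replacing $x$ by $-x$ in the functional equation gives
\[f(2^{-x}) - xf(2^x) = 1,\]and eliminating $f \left( 2^{-x} \right)$ from this equation and the original one yields $\left( 1 + x^2 \right) f(2^x) = 1 - x,$ that is,
\[f(2^x) = \frac{1 - x}{1 + x^2},\]which at $x = 1$ again gives $f(2) = 0.$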
4.6.2 Fully Faithful and Essentially Surjective Functors
Let $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ be categories. Recall that a functor $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an equivalence of categories if and only if it satisfies the following pair of conditions:
The functor $F$ is fully faithful: that is, for every pair of objects $X,Y \in \operatorname{\mathcal{C}}$, the induced map $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(Y) )$ is bijective.
The functor $F$ is essentially surjective: that is, for every object $X \in \operatorname{\mathcal{D}}$, there exists an object $Y \in \operatorname{\mathcal{C}}$ and an isomorphism $X \simeq F(Y)$ in the category $\operatorname{\mathcal{D}}$.
Our goal in this section is to give an analogous characterization of equivalences in the setting of $\infty $-categories (Theorem 4.6.2.17). We begin by formulating $\infty $-categorical analogues of conditions $(1)$ and $(2)$.
Definition 4.6.2.1. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories. We say that $F$ is fully faithful if, for every pair of objects $X,Y \in \operatorname{\mathcal{C}}$, the induced map of morphism spaces $\operatorname{Hom}_{\operatorname{\mathcal{C}}}( X, Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(Y) )$ is a homotopy equivalence of Kan complexes.
Example 4.6.2.2. Let $\operatorname{\mathcal{C}}$ be an $\infty $-category and let $\operatorname{\mathcal{C}}' \subseteq \operatorname{\mathcal{C}}$ be a full subcategory (Definition 4.1.2.15). Then the inclusion map $\iota : \operatorname{\mathcal{C}}' \hookrightarrow \operatorname{\mathcal{C}}$ is fully faithful. In fact, for every pair of objects $X,Y \in \operatorname{\mathcal{C}}'$, the inclusion $\iota $ induces an isomorphism of simplicial sets $\operatorname{Hom}_{\operatorname{\mathcal{C}}'}(X,Y) \simeq \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)$.
Example 4.6.2.3. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between ordinary categories. Then $F$ is fully faithful if and only if the induced map $\operatorname{N}_{\bullet }(F): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is fully faithful (in the sense of Definition 4.6.2.1). Consequently, we can regard Definition 4.6.2.1 as a generalization of the classical notion of fully faithful functor.
Remark 4.6.2.4. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between $\infty $-categories, so that $F$ induces a functor of homotopy categories $f: \mathrm{h} \mathit{\operatorname{\mathcal{C}}} \rightarrow \mathrm{h} \mathit{\operatorname{\mathcal{D}}}$. If $F$ is fully faithful, then $f$ is also fully faithful (see Remark 4.6.1.11). Beware that the converse is generally false.
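For instance (an illustration which is not part of the original text): let $X$ be a Kan complex which is connected and simply connected but not contractible, such as a simplicial model for the $2$-sphere, and let $F: X \rightarrow \Delta ^0$ be the projection. The homotopy category $\mathrm{h} \mathit{X}$ is the fundamental groupoid of $X$, which is equivalent to the trivial category, so the induced functor $\mathrm{h} \mathit{X} \rightarrow \mathrm{h} \mathit{\Delta ^{0}}$ is fully faithful. However, for a vertex $x \in X$, the morphism space $\operatorname{Hom}_{X}(x,x)$ is a loop space of $X$ and is therefore not contractible, so $F$ is not fully faithful in the sense of Definition 4.6.2.1.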
Remark 4.6.2.5 (Transitivity). Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ and $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{E}}$ be functors of $\infty $-categories, where $G$ is fully faithful. Then $F$ is fully faithful if and only if $G \circ F$ is fully faithful. In particular, the collection of fully faithful functors is closed under composition.
Proposition 4.6.2.6. Suppose we are given a commutative diagram of $\infty $-categories
\[ \xymatrix@R =50pt@C=50pt{ \operatorname{\mathcal{C}}\ar [r]^-{F} \ar [d]^{q} & \operatorname{\mathcal{C}}' \ar [d]^{q'} \\ \operatorname{\mathcal{D}}\ar [r]^-{\overline{F}} & \operatorname{\mathcal{D}}'. } \]
Assume that the functors $q$ and $q'$ are inner fibrations and that the functors $F$ and $\overline{F}$ are fully faithful. Then, for every object $D \in \operatorname{\mathcal{D}}$, the induced functor $F_{D}: \operatorname{\mathcal{C}}_{D} \rightarrow \operatorname{\mathcal{C}}'_{ \overline{F}(D) }$ is fully faithful.
Proof. Let $X$ and $Y$ be objects of the $\infty $-category $\operatorname{\mathcal{C}}_{D}$. We then have a cubical diagram of Kan complexes
\[ \xymatrix@R =50pt@C=10pt{ \operatorname{Hom}_{\operatorname{\mathcal{C}}_ D}(X,Y) \ar [rr] \ar [dd] \ar [dr] & & \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y) \ar [dr] \ar [dd] & \\ & \operatorname{Hom}_{\operatorname{\mathcal{C}}'_{\overline{F}}(D)}( F(X), F(Y) ) \ar [rr] \ar [dd] & & \operatorname{Hom}_{ \operatorname{\mathcal{C}}' }( F(X), F(Y) ) \ar [dd] \\ \{ \operatorname{id}_{D} \} \ar [rr] \ar [dr] & & \operatorname{Hom}_{\operatorname{\mathcal{D}}}(D,D) \ar [dr] & \\ & \{ \operatorname{id}_{ \overline{F}(D) } \} \ar [rr] & & \operatorname{Hom}_{\operatorname{\mathcal{D}}'}( \overline{F}(D), \overline{F}(D) ). } \]
The front and back faces of this diagram are homotopy pullback squares (Remark 4.6.1.21), the comparison maps
\[ \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}(F(X), F(Y) ) \quad \quad \operatorname{Hom}_{\operatorname{\mathcal{D}}}(D,D) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( \overline{F}(D), \overline{F}(D) ) \]
are homotopy equivalences by virtue of our assumptions that $F$ and $\overline{F}$ are fully faithful, and the map of singletons $\{ \operatorname{id}_{D} \} \rightarrow \{ \operatorname{id}_{ \overline{F}(D) } \} $ is an isomorphism. Applying Corollary 3.4.1.12, we conclude that the comparison map $ \operatorname{Hom}_{\operatorname{\mathcal{C}}_ D}(X,Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{C}}'_{\overline{F}(D)} }( F(X), F(Y) )$ is also a homotopy equivalence. $\square$
Proposition 4.6.2.7. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a fully faithful functor of $\infty $-categories. Then $F$ is conservative (Definition 4.4.2.7). That is, if $u: X \rightarrow Y$ is a morphism in $\operatorname{\mathcal{C}}$ for which $F(u)$ is an isomorphism in the $\infty $-category $\operatorname{\mathcal{D}}$, then $u$ is an isomorphism in the $\infty $-category $\operatorname{\mathcal{C}}$.
Proof. Let $\overline{v}: F(Y) \rightarrow F(X)$ be a homotopy inverse to $F(u)$. Since $F$ is fully faithful, the natural map $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(Y, X) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(Y), F(X))$ is a homotopy equivalence. We may therefore assume without loss of generality that $\overline{v} = F(v)$, for some morphism $v: Y \rightarrow X$ in the $\infty $-category $\operatorname{\mathcal{C}}$. Let $v \circ u$ be a composition of $u$ and $v$ in the $\infty $-category $\operatorname{\mathcal{C}}$. Since $F(u)$ is homotopy inverse to $F(v)$, the morphism $F( v \circ u)$ is homotopic to $\operatorname{id}_{ F(C) } = F( \operatorname{id}_{C} )$. Since the map $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X, X) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(X))$ is a homotopy equivalence, it follows that $v \circ u$ is homotopic to $\operatorname{id}_{C}$: that is, $v$ is a left homotopy inverse to $u$. A similar argument (with the roles of $u$ and $v$ reversed) shows that $v$ is also a right homotopy inverse to $u$. It follows that $u$ is an isomorphism. $\square$
Corollary 4.6.2.8. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a fully faithful functor of $\infty $-categories. Then the induced map of cores $\operatorname{\mathcal{C}}^{\simeq } \rightarrow \operatorname{\mathcal{D}}^{\simeq }$ is also fully faithful.
Proof. Fix objects $X,Y \in \operatorname{\mathcal{C}}^{\simeq }$. Our assumption that $F$ is fully faithful guarantees that the induced map $\theta : \operatorname{Hom}_{\operatorname{\mathcal{C}}}( X, Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(Y) )$ is a homotopy equivalence of Kan complexes. By virtue of Proposition 4.6.2.7, $\theta $ restricts to a homotopy equivalence from the summand of $\operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y)$ spanned by the isomorphisms from $X$ to $Y$ to the summand of $\operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(Y) )$ spanned by the isomorphisms from $F(X)$ to $F(Y)$. Unwinding the definitions, we conclude that $F^{\simeq }$ induces a homotopy equivalence $\operatorname{Hom}_{\operatorname{\mathcal{C}}^{\simeq }}( X, Y) \rightarrow \operatorname{Hom}_{\operatorname{\mathcal{D}}^{\simeq }}( F(X), F(Y) )$. $\square$
Definition 4.6.2.9. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories. The essential image of $F$ is the full subcategory of $\operatorname{\mathcal{D}}$ spanned by those objects $D \in \operatorname{\mathcal{D}}$ for which there exists an object $C \in \operatorname{\mathcal{C}}$ and an isomorphism $F(C) \simeq D$. We say that $F$ is essentially surjective if its essential image is the entire $\infty $-category $\operatorname{\mathcal{D}}$: that is, if the map of sets $\pi _0( \operatorname{\mathcal{C}}^{\simeq } ) \rightarrow \pi _0( \operatorname{\mathcal{D}}^{\simeq } )$ is surjective.
Remark 4.6.2.10. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories, and let $\operatorname{\mathcal{D}}' \subseteq \operatorname{\mathcal{D}}$ be the essential image of $F$. Then $\operatorname{\mathcal{D}}'$ is a replete full subcategory of $\operatorname{\mathcal{D}}$, and $F$ can be regarded as an essentially surjective functor from $\operatorname{\mathcal{C}}$ to $\operatorname{\mathcal{D}}'$. Moreover, the essential image $\operatorname{\mathcal{D}}'$ is uniquely determined by these properties.
Remark 4.6.2.11. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between $\infty $-categories. Then $F$ is essentially surjective if and only if the induced functor of homotopy categories $f: \mathrm{h} \mathit{\operatorname{\mathcal{C}}} \rightarrow \mathrm{h} \mathit{\operatorname{\mathcal{D}}}$ is essentially surjective (in the sense of classical category theory).
Remark 4.6.2.12. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between $\infty $-categories. Then $F$ is essentially surjective if and only if the induced map of Kan complexes $F^{\simeq }: \operatorname{\mathcal{C}}^{\simeq } \rightarrow \operatorname{\mathcal{D}}^{\simeq }$ is essentially surjective.
Example 4.6.2.13. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor between ordinary categories. Then $F$ is essentially surjective if and only if the induced map $\operatorname{N}_{\bullet }(F): \operatorname{N}_{\bullet }(\operatorname{\mathcal{C}}) \rightarrow \operatorname{N}_{\bullet }(\operatorname{\mathcal{D}})$ is an essentially surjective functor of $\infty $-categories (in the sense of Definition 4.6.2.9).
Example 4.6.2.14. Let $f: X \rightarrow Y$ be a morphism of Kan complexes. Then $f$ is essentially surjective (in the sense of Definition 4.6.2.9) if and only if the induced map $\pi _0(f): \pi _0(X) \rightarrow \pi _0(Y)$ is a surjection.
Remark 4.6.2.15 (Transitivity). Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ and $G: \operatorname{\mathcal{D}}\rightarrow \operatorname{\mathcal{E}}$ be functors of $\infty $-categories, where $F$ is essentially surjective. Then $G$ is essentially surjective if and only if $G \circ F$ is essentially surjective. In particular, the collection of essentially surjective functors is closed under composition.
Remark 4.6.2.16. Suppose we are given a commutative diagram of $\infty $-categories
\[ \xymatrix@R =50pt@C=50pt{ \operatorname{\mathcal{C}}\ar [r]^-{F} \ar [d]^{q} & \operatorname{\mathcal{C}}' \ar [d]^{q'} \\ \operatorname{\mathcal{D}}\ar [r]^-{\overline{F}} & \operatorname{\mathcal{D}}' } \]
satisfying the following conditions:
$(a)$
The functor $q$ is an inner fibration and $q'$ is an isofibration.
$(b)$
The functor $\overline{F}$ is essentially surjective.
$(c)$
For each object $D \in \operatorname{\mathcal{D}}$, the induced functor $F_{D}: \operatorname{\mathcal{C}}_{D} \rightarrow \operatorname{\mathcal{C}}'_{ \overline{F}(D) }$ is essentially surjective.
Then the functor $F$ is essentially surjective. To prove this, consider an arbitrary object $Z \in \operatorname{\mathcal{C}}'$. Assumption $(b)$ guarantees that there exists an object $D \in \operatorname{\mathcal{D}}$ and an isomorphism $\overline{u}: \overline{F}(D) \rightarrow q'(Z)$ in the $\infty $-category $\operatorname{\mathcal{D}}'$. Assumption $(a)$ guarantees that we can lift $\overline{u}$ to an isomorphism $u: Y \rightarrow Z$ in the $\infty $-category $\operatorname{\mathcal{C}}'$, where $Y$ belongs to the fiber $\operatorname{\mathcal{C}}'_{ \overline{F}(D)}$. Applying $(c)$, we can choose an object $X \in \operatorname{\mathcal{C}}_{D}$ and an isomorphism $v: F(X) \rightarrow Y$ in the $\infty $-category $\operatorname{\mathcal{C}}'_{ \overline{F}(D) }$. It follows that $Z$ is isomorphic to $F(X)$ in the $\infty $-category $\operatorname{\mathcal{C}}'$.
Theorem 4.6.2.17. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories. Then $F$ is an equivalence of $\infty $-categories if and only if it is fully faithful and essentially surjective.
We begin by considering the special case of Theorem 4.6.2.17 where $\operatorname{\mathcal{C}}$ and $\operatorname{\mathcal{D}}$ are Kan complexes.
Lemma 4.6.2.18. Let $f: X \rightarrow Y$ be a morphism of Kan complexes which is fully faithful and essentially surjective. Then $f$ is a homotopy equivalence.
Proof. Since $f$ is essentially surjective, the underlying map of connected components $\pi _0(f): \pi _0(X) \rightarrow \pi _0(Y)$ is surjective. We claim that it is also injective. To prove this, suppose that $x$ and $x'$ are vertices of $X$ such that $f(x)$ and $f(x')$ belong to the same connected component of $Y$. Then the morphism space $\operatorname{Hom}_{Y}( f(x), f(x') )$ is nonempty. Since $f$ is fully faithful, it induces a homotopy equivalence $\operatorname{Hom}_{X}(x,x') \rightarrow \operatorname{Hom}_{Y}(f(x), f(x') )$. It follows that $\operatorname{Hom}_{X}(x,x')$ is nonempty, so that $x$ and $x'$ belong to the same connected component of $X$. This completes the proof that $\pi _0(f)$ is a bijection.
By virtue of Whitehead's theorem (Theorem 3.2.7.1), it will suffice to show that for every vertex $x \in X$ having image $y =f(x) \in Y$ and every integer $n \geq 0$, the induced map $\theta : \pi _{n+1}( X, x) \rightarrow \pi _{n+1}( Y, y)$ is an isomorphism. Using Example 4.6.1.12, we can identify $\theta $ with the natural map $\pi _{n}( \operatorname{Hom}_{X}(x,x), \operatorname{id}_{x} ) \rightarrow \pi _{n}( \operatorname{Hom}_{Y}(y,y), \operatorname{id}_{y} )$, which is bijective by virtue of our assumption that $f$ induces a homotopy equivalence $\operatorname{Hom}_{X}(x,x) \rightarrow \operatorname{Hom}_{Y}(y,y)$. $\square$
Proof of Theorem 4.6.2.17. Assume first that $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is an equivalence of $\infty $-categories. Then $F$ induces a homotopy equivalence of Kan complexes $F^{\simeq }: \operatorname{\mathcal{C}}^{\simeq } \rightarrow \operatorname{\mathcal{D}}^{\simeq }$ (Remark 4.5.1.19). Passing to connected components, we conclude that the induced map $\pi _0( \operatorname{\mathcal{C}}^{\simeq } ) \rightarrow \pi _0( \operatorname{\mathcal{D}}^{\simeq } )$ is bijective. In particular, $F$ is essentially surjective. We have a commutative diagram of Kan complexes
\begin{equation} \begin{gathered}\label{equation:restriction-to-endpoint-diagram} \xymatrix@R =50pt@C=50pt{ \operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{C}})^{\simeq } \ar [r]^-{\theta } \ar [d] & \operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{D}})^{\simeq } \ar [d] \\ \operatorname{Fun}( \operatorname{\partial \Delta }^1, \operatorname{\mathcal{C}})^{\simeq } \ar [r]^-{\theta _0} & \operatorname{Fun}( \operatorname{\partial \Delta }^1, \operatorname{\mathcal{D}})^{\simeq },} \end{gathered} \end{equation}
where the horizontal maps are homotopy equivalences (Theorem 4.5.7.1) and the vertical maps are Kan fibrations (Corollary 4.4.5.4). Applying Proposition 3.2.8.1, we conclude that for every vertex $(X,Y) \in \operatorname{Fun}( \operatorname{\partial \Delta }^1, \operatorname{\mathcal{C}})^{\simeq }$, the induced map of fibers
\begin{eqnarray*} \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X,Y) & = & \{ (X,Y) \} \times _{ \operatorname{Fun}( \operatorname{\partial \Delta }^1, \operatorname{\mathcal{C}})^{\simeq } } \operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{C}})^{\simeq } \\ & \rightarrow & \{ (X,Y) \} \times _{ \operatorname{Fun}( \operatorname{\partial \Delta }^1, \operatorname{\mathcal{D}})^{\simeq } } \operatorname{Fun}( \Delta ^1, \operatorname{\mathcal{D}})^{\simeq } \\ & = & \operatorname{Hom}_{\operatorname{\mathcal{D}}}( F(X), F(Y) ) \end{eqnarray*}
is a homotopy equivalence. It follows that $F$ is fully faithful.
Now suppose that $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ is a functor of $\infty $-categories which is fully faithful and essentially surjective. Using Corollary 4.6.2.8 and Remark 4.6.2.12, we see that the induced map $F^{\simeq }: \operatorname{\mathcal{C}}^{\simeq } \rightarrow \operatorname{\mathcal{D}}^{\simeq }$ is also fully faithful and essentially surjective, and is therefore a homotopy equivalence of Kan complexes (Lemma 4.6.2.18). It follows that the morphism $\theta _0$ in (4.54) is a homotopy equivalence of Kan complexes. Combining our assumption that $F$ is fully faithful with Proposition 3.2.8.1, we conclude that $\theta $ is also a homotopy equivalence. Applying Theorem 4.5.7.1, we conclude that $F$ is an equivalence of $\infty $-categories. $\square$
Corollary 4.6.2.19. Let $F: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be a functor of $\infty $-categories, and let $\operatorname{\mathcal{D}}' \subseteq \operatorname{\mathcal{D}}$ be the essential image of $F$. Then $F$ is fully faithful if and only if it induces an equivalence of $\infty $-categories $\operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}'$.
Corollary 4.6.2.20. Let $f: X \rightarrow Y$ be a morphism of Kan complexes. Then $f$ is fully faithful (when regarded as a functor of $\infty $-categories) if and only if it induces a homotopy equivalence from $X$ to a summand of $Y$.
Proof. Combine Corollary 4.6.2.19 with Exercise 4.4.1.12. $\square$
One-parameter group
In mathematics, a one-parameter group or one-parameter subgroup usually means a continuous group homomorphism
$\varphi :\mathbb {R} \rightarrow G$
from the real line $\mathbb {R} $ (as an additive group) to some other topological group $G$. If $\varphi $ is injective then $\varphi (\mathbb {R} )$, the image, will be a subgroup of $G$ that is isomorphic to $\mathbb {R} $ as an additive group.
One-parameter groups were introduced by Sophus Lie in 1893 to define infinitesimal transformations. According to Lie, an infinitesimal transformation is an infinitely small transformation of the one-parameter group that it generates.[1] It is these infinitesimal transformations that generate a Lie algebra that is used to describe a Lie group of any dimension.
The action of a one-parameter group on a set is known as a flow. A smooth vector field on a manifold, at a point, induces a local flow - a one parameter group of local diffeomorphisms, sending points along integral curves of the vector field. The local flow of a vector field is used to define the Lie derivative of tensor fields along the vector field.
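As a concrete sketch (an illustration added here, not taken from the cited references; it assumes the NumPy and SciPy libraries), the flow of the planar vector field $V(x,y)=(-y,x)$ can be computed numerically, and the one-parameter group law $\varphi _{s+t}=\varphi _{s}\circ \varphi _{t}$ checked directly:

    import numpy as np
    from scipy.integrate import solve_ivp

    def V(t, p):
        # The vector field V(x, y) = (-y, x); its flow rotates the plane.
        x, y = p
        return [-y, x]

    def flow(t, p0):
        # phi_t(p0): integrate the vector field for time t starting at p0.
        sol = solve_ivp(V, (0.0, t), p0, rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    p0 = [1.0, 0.5]
    s, t = 0.4, 0.9
    # One-parameter group law of the flow: phi_{s+t} = phi_s composed with phi_t
    assert np.allclose(flow(s + t, p0), flow(s, flow(t, p0)), atol=1e-8)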
Examples
Such one-parameter groups are of basic importance in the theory of Lie groups, for which every element of the associated Lie algebra defines such a homomorphism, the exponential map. In the case of matrix groups it is given by the matrix exponential.
Another important case is seen in functional analysis, with $G$ being the group of unitary operators on a Hilbert space. See Stone's theorem on one-parameter unitary groups.
In his 1957 monograph Lie Groups, P. M. Cohn gives the following theorem on page 58:
Any connected 1-dimensional Lie group is analytically isomorphic either to the additive group of real numbers ${\mathfrak {R}}$, or to ${\mathfrak {T}}$, the additive group of real numbers $\mod 1$. In particular, every 1-dimensional Lie group is locally isomorphic to $\mathbb {R} $.
Physics
In physics, one-parameter groups describe dynamical systems.[2] Furthermore, whenever a system of physical laws admits a one-parameter group of differentiable symmetries, then there is a conserved quantity, by Noether's theorem.
In the study of spacetime the use of the unit hyperbola to calibrate spatio-temporal measurements has become common since Hermann Minkowski discussed it in 1908. The principle of relativity was reduced to arbitrariness of which diameter of the unit hyperbola was used to determine a world-line. Using the parametrization of the hyperbola with hyperbolic angle, the theory of special relativity provided a calculus of relative motion with the one-parameter group indexed by rapidity. The rapidity replaces the velocity in kinematics and dynamics of relativity theory. Since rapidity is unbounded, the one-parameter group it stands upon is non-compact. The rapidity concept was introduced by E.T. Whittaker in 1910, and named by Alfred Robb the next year. The rapidity parameter amounts to the length of a hyperbolic versor, a concept of the nineteenth century. Mathematical physicists James Cockle, William Kingdon Clifford, and Alexander Macfarlane had all employed in their writings an equivalent mapping of the Cartesian plane by operator $(\cosh {a}+r\sinh {a})$, where $a$ is the hyperbolic angle and $r^{2}=+1$.
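Spelled out (a routine verification added here, not a statement taken from the sources above): since $r^{2}=+1$, the hyperbolic addition formulas give

$(\cosh {a}+r\sinh {a})(\cosh {b}+r\sinh {b})=\cosh(a+b)+r\sinh(a+b)$,

so composing two such operators adds their hyperbolic angles. This is precisely the one-parameter group structure, and it is the reason rapidities, unlike velocities, are additive under composition of boosts.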
In GL(n,ℂ)
See also: Stone's theorem on one-parameter unitary groups
An important example in the theory of Lie groups arises when $G$ is taken to be $\mathrm {GL} (n;\mathbb {C} )$, the group of invertible $n\times n$ matrices with complex entries. In that case, a basic result is the following:[3]
Theorem: Suppose $\varphi :\mathbb {R} \rightarrow \mathrm {GL} (n;\mathbb {C} )$ is a one-parameter group. Then there exists a unique $n\times n$ matrix $X$ such that
$\varphi (t)=e^{tX}$
for all $t\in \mathbb {R} $.
It follows from this result that $\varphi $ is differentiable, even though this was not an assumption of the theorem. The matrix $X$ can then be recovered from $\varphi $ as
$\left.{\frac {d\varphi (t)}{dt}}\right|_{t=0}=\left.{\frac {d}{dt}}\right|_{t=0}e^{tX}=\left.(Xe^{tX})\right|_{t=0}=Xe^{0}=X$.
This result can be used, for example, to show that any continuous homomorphism between matrix Lie groups is smooth.[4]
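A minimal numerical sketch of this theorem (added here as an illustration; it assumes NumPy and SciPy, and the particular generator $X$ below is an arbitrary choice):

    import numpy as np
    from scipy.linalg import expm

    # An arbitrary generator X; the theorem applies to any n x n complex matrix.
    X = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])

    def phi(t):
        # The one-parameter group generated by X.
        return expm(t * X)

    s, t = 0.3, 1.1
    # Homomorphism property: phi(s + t) = phi(s) phi(t)
    assert np.allclose(phi(s + t), phi(s) @ phi(t))
    # Recovering the generator: the derivative of phi at t = 0 is X
    h = 1e-6
    assert np.allclose((phi(h) - phi(-h)) / (2 * h), X, atol=1e-6)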
Topology
A technical complication is that $\varphi (\mathbb {R} )$ as a subspace of $G$ may carry a topology that is coarser than that on $\mathbb {R} $; this may happen in cases where $\varphi $ is injective. Think for example of the case where $G$ is a torus $T$, and $\varphi $ is constructed by winding a straight line round $T$ at an irrational slope.
In that case the induced topology may not be the standard one of the real line.
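Concretely (an explicit formula for the example just described): view the torus $T$ as the product of two unit circles and take an irrational number $\alpha $. The one-parameter group $\varphi (t)=(e^{2\pi it},e^{2\pi i\alpha t})$ is injective, and its image is a dense subgroup of $T$; the subspace topology on the image is then strictly coarser than the standard topology of $\mathbb {R} $.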
See also
• Integral curve
• One-parameter semigroup
• Noether's theorem
References
• Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666.
1. Sophus Lie (1893) Vorlesungen über Continuierliche Gruppen, English translation by D.H. Delphenich, §8, link from Neo-classical Physics
2. Zeidler, E. (1995) Applied Functional Analysis: Main Principles and Their Applications Springer-Verlag
3. Hall 2015 Theorem 2.14
4. Hall 2015 Corollary 3.50
2016, 10: 339-352. doi: 10.3934/jmd.2016.10.339
On small gaps in the length spectrum
Dmitry Dolgopyat 1, and Dmitry Jakobson 2,
Department of Mathematics, University of Maryland, Mathematics Building, College Park, MD 20742-4015, United States
Department of Mathematics and Statistics, McGill University, 805 Sherbrooke Str. West, Montréal QC H3A 2K6
Received: February 2016. Revised: June 2016. Published: August 2016.
We discuss upper and lower bounds for the size of gaps in the length spectrum of negatively curved manifolds. For manifolds with algebraic generators for the fundamental group, we establish the existence of exponential lower bounds for the gaps. On the other hand, we show that the existence of arbitrarily small gaps is topologically generic: this is established both for surfaces of constant negative curvature (Theorem 3.1) and for the space of negatively curved metrics (Theorem 4.1). While arbitrarily small gaps are topologically generic, it is plausible that the gaps are not too small for almost every metric. One result in this direction is presented in Section 5.
Keywords: negatively curved manifolds, length spectrum, prevalence, hyperbolicity, Diophantine approximations.
Mathematics Subject Classification: Primary: 37C25, 53C22; Secondary: 20H10, 37C20, 37D20, 53D2.
Citation: Dmitry Dolgopyat, Dmitry Jakobson. On small gaps in the length spectrum. Journal of Modern Dynamics, 2016, 10: 339-352. doi: 10.3934/jmd.2016.10.339
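A toy numerical illustration of the arithmetic situation mentioned in the abstract (this example is ours, not taken from the paper, and it assumes the NumPy library): on the modular surface, a non-compact but arithmetic example, closed geodesics correspond to conjugacy classes of hyperbolic elements of SL(2,Z), and a class of trace k with |k| > 2 has length 2*arccosh(|k|/2). Since traces are integers, the distinct lengths and the gaps between consecutive ones can be listed directly; the gaps decay exponentially in the length but no faster, consistent with the exponential lower bounds for algebraic generators described above.

    import numpy as np

    # Distinct geodesic lengths on the modular surface: one for each integer
    # trace k >= 3, namely ell(k) = 2 * arccosh(k / 2).
    traces = np.arange(3, 20)
    lengths = 2.0 * np.arccosh(traces / 2.0)
    gaps = np.diff(lengths)
    for k, ell, g in zip(traces[:-1], lengths[:-1], gaps):
        print(f"trace {k}: length {ell:.4f}, gap to next {g:.6f}")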
Yannick Sire, Christopher D. Sogge, Chengbo Wang. The Strauss conjecture on negatively curved backgrounds. Discrete & Continuous Dynamical Systems, 2019, 39 (12) : 7081-7099. doi: 10.3934/dcds.2019296
Gabriel P. Paternain. On two noteworthy deformations of negatively curved Riemannian metrics. Discrete & Continuous Dynamical Systems, 1999, 5 (3) : 639-650. doi: 10.3934/dcds.1999.5.639
Gabriel P. Paternain. Transparent connections over negatively curved surfaces. Journal of Modern Dynamics, 2009, 3 (2) : 311-333. doi: 10.3934/jmd.2009.3.311
Emmanuel Schenck. Exponential gaps in the length spectrum. Journal of Modern Dynamics, 2020, 16: 207-223. doi: 10.3934/jmd.2020007
Andy Hammerlindl, Jana Rodriguez Hertz, Raúl Ures. Ergodicity and partial hyperbolicity on Seifert manifolds. Journal of Modern Dynamics, 2020, 0: 331-348. doi: 10.3934/jmd.2020012
Michael Khanevsky. Hofer's length spectrum of symplectic surfaces. Journal of Modern Dynamics, 2015, 9: 219-235. doi: 10.3934/jmd.2015.9.219
Luis Barreira, Claudia Valls. Regularity of center manifolds under nonuniform hyperbolicity. Discrete & Continuous Dynamical Systems, 2011, 30 (1) : 55-76. doi: 10.3934/dcds.2011.30.55
Mikhail Karpukhin. Bounds between Laplace and Steklov eigenvalues on nonnegatively curved manifolds. Electronic Research Announcements, 2017, 24: 100-109. doi: 10.3934/era.2017.24.011
Dmitry Jakobson, Alexander Strohmaier, Steve Zelditch. On the spectrum of geometric operators on Kähler manifolds. Journal of Modern Dynamics, 2008, 2 (4) : 701-718. doi: 10.3934/jmd.2008.2.701
Marcin Mazur, Jacek Tabor, Piotr Kościelniak. Semi-hyperbolicity and hyperbolicity. Discrete & Continuous Dynamical Systems, 2008, 20 (4) : 1029-1038. doi: 10.3934/dcds.2008.20.1029
Marcin Mazur, Jacek Tabor. Computational hyperbolicity. Discrete & Continuous Dynamical Systems, 2011, 29 (3) : 1175-1189. doi: 10.3934/dcds.2011.29.1175
Boris Hasselblatt, Yakov Pesin, Jörg Schmeling. Pointwise hyperbolicity implies uniform hyperbolicity. Discrete & Continuous Dynamical Systems, 2014, 34 (7) : 2819-2827. doi: 10.3934/dcds.2014.34.2819
John Boscoh H. Njagarah, Farai Nyabadza. Modelling the role of drug barons on the prevalence of drug epidemics. Mathematical Biosciences & Engineering, 2013, 10 (3) : 843-860. doi: 10.3934/mbe.2013.10.843
Boris Hasselblatt and Amie Wilkinson. Prevalence of non-Lipschitz Anosov foliations. Electronic Research Announcements, 1997, 3: 93-98.
David DeLatte. Diophantine conditions for the linearization of commuting holomorphic functions. Discrete & Continuous Dynamical Systems, 1997, 3 (3) : 317-332. doi: 10.3934/dcds.1997.3.317
Shrikrishna G. Dani. Simultaneous diophantine approximation with quadratic and linear forms. Journal of Modern Dynamics, 2008, 2 (1) : 129-138. doi: 10.3934/jmd.2008.2.129
Dmitry Kleinbock, Barak Weiss. Dirichlet's theorem on diophantine approximation and homogeneous flows. Journal of Modern Dynamics, 2008, 2 (1) : 43-62. doi: 10.3934/jmd.2008.2.43
Chao Ma, Baowei Wang, Jun Wu. Diophantine approximation of the orbits in topological dynamical systems. Discrete & Continuous Dynamical Systems, 2019, 39 (5) : 2455-2471. doi: 10.3934/dcds.2019104
Hans Koch, João Lopes Dias. Renormalization of diophantine skew flows, with applications to the reducibility problem. Discrete & Continuous Dynamical Systems, 2008, 21 (2) : 477-500. doi: 10.3934/dcds.2008.21.477
E. Muñoz Garcia, R. Pérez-Marco. Diophantine conditions in small divisors and transcendental number theory. Discrete & Continuous Dynamical Systems, 2003, 9 (6) : 1401-1409. doi: 10.3934/dcds.2003.9.1401
Dmitry Dolgopyat Dmitry Jakobson | CommonCrawl |
Calculate the sum $1 + 3 + 5 + \cdots + 15 + 17$.
The arithmetic sequence 1, 3, 5, $\dots$, 17, has common difference 2, so the $n^{\text{th}}$ term is $1 + 2(n - 1) = 2n - 1$. If $2n - 1 = 17$, then $n = 9$, so this arithmetic sequence contains 9 terms.
The sum of an arithmetic series is equal to the average of the first and last term, multiplied by the number of terms, so the sum is $(1 + 17)/2 \cdot 9 = \boxed{81}$. | Math Dataset |
\begin{document}
\setcounter{page}{1} \thispagestyle{empty}
\begin{abstract} We study the existence of families of periodic solutions in a neighbourhood of a symmetric equilibrium point in two classes of Hamiltonian systems with involutory symmetries. In both classes, involutions reverse the sign of the Hamiltonian function. In the first class we study a Hamiltonian system with a reversing involution R acting symplectically. We first recover a result of Buzzi and Lamb showing that the equilibrium point is contained in a three dimensional conical subspace which consists of a two parameter family of periodic solutions with symmetry R; we then show that, depending on the coefficients of the Hamiltonian, there may or may not exist two families of non-symmetric periodic solutions. In the second problem we study an equivariant Hamiltonian system with a symmetry S that acts anti-symplectically. Generically, there is no S-symmetric solution in a neighbourhood of the equilibrium point. Moreover, we prove the existence of at least 2 and at most 12 families of non-symmetric periodic solutions. We conclude with a brief study of systems with both forms of symmetry, showing they have very similar structure to the system with symmetry R. \end{abstract}
\maketitle
\section{Introduction}
A classical approach in the analysis of Hamiltonian systems is to study the existence of periodic orbits near equilibria. A basic theorem on the existence of periodic solutions in Hamiltonian systems is the Liapunov centre theorem, which states that if the linearized flow at an equilibrium point has a simple purely imaginary eigenvalue satisfying a non-resonance condition then there exists a smooth 2-dimensional manifold which passes through the equilibrium point and consists of a one parameter family of periodic solutions, or nonlinear normal mode. In this work we extend this theorem to two classes of Hamiltonian systems with involutory symmetries where in both cases the involution reverses the sign of the Hamiltonian. In the first case, already studied by Buzzi and Lamb \cite{b1}, the involution is symplectic while in the second case it is anti-symplectic.
In the literature, there are versions of the Liapunov centre theorem for reversible systems, but they mostly deal with the classical case where the reversing symmetry acts anti-symplectically. For example see Devaney \cite{b13}. In this paper we consider two types of symmetry. The first concerns the existence of periodic solutions in a time-reversing Hamiltonian system equipped with an involution $R$ that acts symplectically. The problem was introduced and analysed by Buzzi and Lamb \cite{b1}. If the linear system has two pairs of purely imaginary eigenvalues, they prove in a neighbourhood of a symmetric equilibrium point the existence of a three dimensional subspace consisting of a two parameter family of $R$-periodic solutions with period close to $2\pi$. In addition, they claim to find two families of non-symmetric periodic solutions whose period tends to $2\pi$ as they approach the equilibrium point, for an open dense set of coefficients (however there is a sign error in one of their calculations). Motivated by this work, we looked at the problem using different coordinates, and hence a different set of invariants. We recover their result on the existence of symmetric periodic solutions but obtain a different conclusion for the non-symmetric solutions. We determine an expression in the fourth order normal form and show that if this expression is positive there are two families of non-symmetric solutions, while if it is negative there are none.
The second problem we discuss is the dynamics near an equilibrium point in an equivariant Hamiltonian system with an involutory (time preserving) symmetry $S$ acting anti-symplectically. Bifurcations of equilibria in Hamiltonian systems with such symmetry have been considered recently by M.~Bosschaert and H.~Han{\ss}mann \cite{BH}. Existence theorems for periodic solutions in symmetric Hamiltonian systems can be found in Montaldi et al \cite{b8}, \cite{b9}, but this and related work assumes the symmetry transformation acts symplectically. We prove that for systems with this anti-symplectic symmetry, generically, there are no symmetric periodic orbits in a neighbourhood of an equilibrium point. Moreover, we prove the existence of at least 2 and at most 12 non-symmetric families of periodic solutions (nonlinear normal modes) in a neighbourhood of the equilibrium point under the same generic conditions.
In both cases, since the involution reverses the sign of the Hamiltonian and we assume the linear system is periodic, the equilibrium will be in 1:-1 resonance.
The paper is organised as follows. In Section \ref{s1} we introduce basic facts and definitions of Hamiltonian systems with symmetry. Section 3 lists normal forms of the Hamiltonian linear system $L$, the structure map $J$ and the symmetry elements $R$ and $S$ in $\mathbb{C}^2$. Section 4 reviews the standard tool used to find periodic orbits in Hamiltonian systems: Liapunov-Schmidt reduction. In Section 5 we state and prove our theorem on the existence of families of periodic orbits in the $R$-reversible Hamiltonian system with $R$ acting symplectically. In Section 6 we give our main result on the existence of periodic solutions in the $S$-equivariant Hamiltonian system with $S$ acting anti-symplectically. Finally, in Section 7 we study the existence of periodic solutions in reversible equivariant Hamiltonian systems with the combined symmetry group $\mathbb{Z}_2^R \times\mathbb{Z}_2^S$.
\section{Hamiltonian systems with symmetry} \label{s1} In this section we recall some basic facts and definitions on Hamiltonian systems with symmetry.
Let $(\mathbb{R}^{2n},\omega)$ be a symplectic space, i.e an even dimensional vector space equipped with a symplectic form $\omega$. Recall that a symplectic form is a non-degenerate, skew symmetric, bilinear form. Then there exists a structure map $J$ satisfying $J^*=-J$ ($J^*$ denotes the transpose of $J$) and $J^2=-I$ such that $\omega(x,y)=\langle x,Jy \rangle$ for $x,y \in \mathbb{R}^{2n}$, where $\langle .,. \rangle$ is the standard inner product in $\mathbb{R}^{2n}$. Let $H:\mathbb{R}^{2n}\rightarrow\mathbb{R}$ be a Hamiltonian function. The Hamiltonian vector field $f$ generated by $H$ is symplectic, i.e.\ its flow preserves the symplectic form $\omega$, and is defined by
\begin{equation} \label{p1} \dot{x}=f(x)=J\nabla H. \end{equation} By using canonical coordinates for the symplectic form $\omega$ given in Darboux theorem \cite{b6} one can write \[J=\left( \begin{array}{cc} 0&-I_n\\ I_n&0 \end{array}\right).\]
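As a purely illustrative aside (not part of the original argument), the defining properties of the structure map in these canonical coordinates are easy to check numerically; the short Python/NumPy sketch below, with the choice of language ours, verifies that $J^*=-J$ and $J^2=-I$ for $n=2$ and evaluates the associated symplectic form.

\begin{verbatim}
import numpy as np

n = 2                                    # phase space R^{2n}
I_n, Z = np.eye(n), np.zeros((n, n))

# structure map in canonical (Darboux) coordinates
J = np.block([[Z, -I_n],
              [I_n, Z]])

assert np.allclose(J.T, -J)              # J^* = -J
assert np.allclose(J @ J, -np.eye(2*n))  # J^2 = -I

def omega(x, y):
    """Symplectic form omega(x, y) = <x, J y>."""
    return x @ (J @ y)
\end{verbatim}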
In this work we will deal with two types of symmetry, equivariant symmetries and time-reversing symmetries.
\begin{dfn}
Let $S,R$ be two linear transformations of $\mathbb{R}^{2n}$, then
\begin{enumerate}
\item The vector field $f$ is called $S$-equivariant if
\[f(Sx)=Sf(x)\, , \forall x\in \mathbb{R}^{2n}.\] If $x(t)$ is a solution of \eqref{p1}, then $Sx(t)$ is also a solution and $S$ is referred to as a symmetry.
\item The vector field $f$ is called $R$-reversible if
\[f(Rx)=-Rf(x)\, , \forall x\in \mathbb{R}^{2n}.\] If $x(t)$ is a solution of \eqref{p1}, then $Rx(-t)$ is also a solution. Such a transformation is called a time reversing symmetry. \end{enumerate} \end{dfn} The symmetry of a periodic solution is given by the following definition.
\begin{dfn} Let $x(t)$ be a periodic solution of the dynamical system $\dot{x}=f(x)$. \begin{enumerate} \item If $S$ is a symmetry of the system then $x(t)$ is said to be $S$-symmetric if \[Sx(t+\theta)=x(t),\] for some $\theta \in S^1$. \item If $R$ is a reversing symmetry of the system then $x(t)$ is said to be $R$-symmetric if \[Rx(\theta-t)=x(t),\] for some $\theta \in S^1$. \end{enumerate} Here we identify $S^1$ with $\mathbb{R}/T\mathbb{Z}$, where $T$ is the period of $x(t)$. In both cases, a periodic orbit is symmetric if and only if it is set-wise invariant. \end{dfn}
In the Hamiltonian context, (reversing) symmetries can arise in two ways: they can either be symplectic or antisymplectic. A (reversing) symmetry $T$ is symplectic if $\omega(Tx,Ty)=\omega(x,y),\forall x,y\in \mathbb{R}^{2n}$ and anti-symplectic if $\omega(Tx,Ty)=-\omega(x,y),\forall x,y\in \mathbb{R}^{2n}$. In matrix form we can choose a basis so that $T$ is orthogonal, and then $T$ is symplectic if $TJ= JT$ and anti-symplectic if $TJ=- JT$.
Note for example that by \eqref{p1}, if a reversing symmetry $R$ is symplectic then it must reverse $\nabla H$, and if we assume (as we may, and do) that $H(0)=0$ then this is equivalent to $H(Rx) = -H(x)$, so that $H$ is `anti-invariant'. There are in all 4 possibilities of symmetry, labelled as follows
\begin{table}[h]
\begin{tabular}{c|ccc}
type & $\omega$ & $f$ & $H$ \\ \hline SE & +1 & +1 & +1 \\ AR & -1 & -1 & +1 \\ SR & +1 & -1 & -1 \\ AE & -1 & +1 & -1 \\ \end{tabular} \caption{The `type' refers to a transformation being Symplectic-Equivariant, or Antisymplectic-Reversing etc.} \end{table}
Note that if $T$ is an involution which reverses the sign of $H$, then any symmetric periodic orbit must lie in the set where $H=0$. There may on the other hand be periodic orbits on which $H$ is non-zero, and then $T$ will exchange two such orbits, one with $H>0$ and the other with $H<0$. We will see this in more detail in later sections.
\section{Linear Hamiltonian systems with involutory symmetries}
In this section we give the normal forms of linear Hamiltonian systems with involutory symmetries. Recall that an involution is a transformation of order $2$. An important assumption that is required for studying the existence of periodic orbits is the presence of purely imaginary eigenvalues of the linear Hamiltonian vector field.
Let $L\in sp_J(2n,\mathbb{R})$ be a linear Hamiltonian vector field. Thus, \[LJ=-JL^*,\] where $J$ is the structure map defined in the previous section. By Bochner's theorem \cite{b5}, a (reversing) symmetry $T$ can be chosen to be linear and orthogonal. Therefore, the (reversing) equivariant condition can be written as \[LT=\pm TL,\] and the (anti-)symplectic property of $T$ is given by \[TJ=\pm JT.\]
In \cite{b3}, Hoveijn et al. gave normal forms of linear systems in eigenspaces of (anti-) automorphisms of order two, which can be adapted to our problem. These normal forms are based on a decomposition of the phase space into $\langle J,T \rangle$-invariant subspaces. Since we are interested in generic systems with a given symmetry, by \cite{b3} we need only focus on the case when $L$ is semi-simple. Also, we assume that $L$ has at least one pair of purely imaginary eigenvalues $\pm i$. Normal forms of $T,J$ and $L$ are given in the following lemma. We use the notation, $$
I_2=\begin{pmatrix}1&0\cr 0&1\end{pmatrix},\quad
J_2=\begin{pmatrix}0&-1\cr 1&0\end{pmatrix},\quad\hbox{and}\quad
S_2=\begin{pmatrix}1&0\cr 0&-1\end{pmatrix}. $$
\begin{lem} \label{l1} Let $L$ be a linear Hamiltonian vector field on $\mathbb{R}^{2n}$. \begin{enumerate} \item[i)] Suppose $L$ is $R$-reversible, with $R$ acting symplectically (symmetry type SR).\\
Let $V$ be a minimal $(L,J,R)$-invariant subspace on which $L$ has eigenvalues $\pm i$. Then $\dim V=4$ and $R|_V,J|_V$ and $L|_V$ can take the following normal forms \[
R|_V=\left( \begin{array}{cc} 0&I_2\\ I_2&0\\ \end{array}
\right), \quad J|_V=\left( \begin{array}{cc} J_2&0\\ 0&J_2\\ \end{array}
\right), \quad\hbox{and}\quad L|_V=\left( \begin{array}{cc} J_2&0\\ 0&-J_2\\ \end{array} \right). \]
\item [ii)]Suppose now $L$ is $S$ equivariant, with $S$ acting anti-symplectically (symmetry type AE).\\
Let $V$ be a minimal $(L,J,S)$-invariant subspace on which $L$ has eigenvalues $\pm i$. Then $\dim V=4$ and $S|_V,J|_V$ and $L|_V$ can take the following normal forms \[
S|_V=\left( \begin{array}{cc} 0&S_2\\ S_2&0\\ \end{array} \right),
\quad J|_V=\left( \begin{array}{cc} J_2&0\\ 0&J_2\\ \end{array}
\right), \quad\mbox{and}\quad L|_V=\left( \begin{array}{cc} J_2&0\\ 0&-J_2\\ \end{array} \right). \] \end{enumerate} \end{lem} \begin{proof}
Normal forms (i) are given in \cite{b1}. For (ii), Let $W$ be a 2-dimensional symplectic subspace on which $L$ has the pair of eigenvalues $\pm i$ and $S(W)=W$. It is known in Hamiltonian context that $L$ and $J$ can take the same normal form on $W $ taking into account multiplication of time by a scalar. Equivariance property yields $SL=LS$. On $W$, $L$ and $J$ take the same form which gives $SJ=JS$ which contradicts the fact that $S$ is acting anti-symplectically. Thus, the minimal invariant subspace is four dimensional and is given by $V=W\oplus W', \, W'=S(W)$. The anti-symplectic property implies $J|_{W'}=-J|_{W}$ while equivariance gives $L|_{W'}=L|_{W}=J|_{W}$. Therefore, normal forms given in \cite{b3} show \[
S|_V=\left( \begin{array}{cc} 0&I_2\\ I_2&0\\ \end{array}
\right), \quad J|_V=\left( \begin{array}{cc} J_2&0\\ 0&-J_2\\ \end{array}
\right), \quad\hbox{and}\quad L|_V=\left( \begin{array}{cc} J_2&0\\ 0&J_2\\ \end{array} \right). \]
To get the same formulas for $J$ and $L$ given in (i) apply the change of coordinates on $\mathbb{C}^2$ given by
\[z_1=w_1, z_2=\bar{w_2}.\] In these new coordinates $S,J$ and $L$ take the forms given in (ii). \end{proof}
Note that with these conventions, $L$ and $J$ take the same form in both cases, and the quadratic part $H_2$ of the Hamiltonian in both is given by
$$H_2(z_1,\,z_2) = |z_1|^2 - |z_2|^2;$$ that is, $H$ has a 1:-1 resonance. The higher order terms will differ for the two cases, as we see below.
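The relations encoded in Lemma \ref{l1}, together with the Hamiltonian condition $LJ=-JL^*$, can be confirmed directly from the $4\times4$ normal forms. The following NumPy sketch is only an informal check; it is not part of the original text, and Python is used purely for illustration.

\begin{verbatim}
import numpy as np

I2 = np.eye(2)
J2 = np.array([[0., -1.], [1., 0.]])
S2 = np.diag([1., -1.])
Z  = np.zeros((2, 2))

J = np.block([[J2, Z], [Z, J2]])     # structure map
L = np.block([[J2, Z], [Z, -J2]])    # linear Hamiltonian vector field
R = np.block([[Z, I2], [I2, Z]])     # case (i): symplectic reversing involution
S = np.block([[Z, S2], [S2, Z]])     # case (ii): anti-symplectic symmetry

assert np.allclose(R @ R, np.eye(4)) and np.allclose(S @ S, np.eye(4))
assert np.allclose(R @ J,  J @ R)    # R is symplectic
assert np.allclose(R @ L, -L @ R)    # R reverses L
assert np.allclose(S @ J, -J @ S)    # S is anti-symplectic
assert np.allclose(S @ L,  L @ S)    # S commutes with L
assert np.allclose(L @ J, -J @ L.T)  # L is Hamiltonian: LJ = -JL^*
\end{verbatim}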
\section{Liapunov-Schmidt reduction} The classical approach to finding periodic orbits in Hamiltonian systems is to solve a variational equation on the loop space. This equation is of infinite dimension and can be reduced by Liapunov-Schmidt reduction. In this section we give an overview of that method and of how to use it to find periodic orbits near an equilibrium point in a reversible equivariant Hamiltonian system. We choose the reversible equivariant case to cover all symmetry cases discussed in this paper. We will follow the settings given in \cite{b1} and \cite{b4}.
Consider the vector field $f:\mathbb{R}^n\rightarrow\mathbb{R}^n,$ which has an equivariant reversing symmetry group $G$. This implies the existence of a representation $\rho:G\rightarrow O(n)$ and a reversing sign $\sigma:G\rightarrow \{\pm 1\}$ such that \[f\rho(g)=\sigma(g)\rho(g)f,\forall g \in G.\]
In the following we give briefly the main steps of the Liapunov-Schmidt reduction and details can be found in \cite{b1}.
\subsection{Defining the operator $\Phi$} Let $\Phi:\mathcal{C}_{2\pi}^1\times \mathbb{R}\rightarrow\mathcal{C}_{2\pi}$ be given by \begin{equation}
\Phi(u,\tau)=(1+\tau)\displaystyle\frac{du}{dt}-f(u)
\end{equation}
where $\mathcal{C}_{2\pi}$ is the Banach space of $\mathbb{R}^n$-valued continuous $2\pi$-periodic functions and $\mathcal{C}_{2\pi}^1$ is the space of $\mathcal{C}_{2\pi}$ functions that are continuously differentiable. It is readily seen that zeros of $\Phi$ are periodic solutions of the dynamical system generated by $f$ with period $\frac{2\pi}{1+\tau}$. Now we can define the group action on the loop space $\mathcal{C}_{2\pi}$ as follows
\[T:\widetilde{G}\times \mathcal{C}_{2\pi}\rightarrow \mathcal{C}_{2\pi}\]
\[(T_gu)(t)=\rho(\gamma)(u(\sigma(\gamma)t+\theta)),\] where $g=(\gamma,\theta)$ is an element of $\tilde{G}=G\ltimes S^1$. Straightforward calculations imply that the operator $\Phi$ is $\tilde{G}$- reversible equivariant, that is \[\Phi(T_gu,\tau)=\sigma(\gamma)T_g\Phi(u,\tau),\quad \forall g=(\gamma,\theta)\in\widetilde{G}.\]
The linear part of $\Phi$ is defined by
\[\mathcal{L}=(d\Phi)_{(0,0)}.\]
It is readily verified that $\mathcal{L}$ is $\widetilde{G}$-reversible equivariant.
\subsection{The splittings}
Consider the splittings
\begin{equation}
\label{p2}
\mathcal{C}_{2\pi}^1=\ker{\mathcal{L}}\oplus{(\ker{\mathcal{L}})}^\bot\,\text{and}\,\mathcal{C}_{2\pi}={(\mathrm{range} { \mathcal{L}})}^\bot\oplus \mathrm{range}{\mathcal{L}},
\end{equation}
where the complements are taken with respect to the inner product
\[ [u,v]=\int_{\tilde{G}} \langle T_gu,T_gv \rangle d\mu,\]
where $\mu$ is a normalized Haar measure for $\widetilde{G}$ and $\langle u,v \rangle=\int_0^{2\pi}[u(t)]^t v(t)\,dt$. The splittings \eqref{p2} are $T_g$-invariant.
Now we define the projections
\[E:\mathcal{C}_{2\pi}\rightarrow \mathrm{range}{\mathcal{L}}\]
\[I-E:\mathcal{C}_{2\pi}\rightarrow {(\mathrm{range}{ \mathcal{L}})}^\bot.\]
Invariance of \eqref{p2} under $T_g$ implies that the projections $E$ and $I-E$ commute with $T_g$.
We start this step by solving the equation
\[E\Phi(v+w,\tau)=0,\]
for $w$ by the implicit function theorem, where $u=v+w,v\in\ker{\mathcal{L}}\, , w\in{(\ker{\mathcal{L}})}^\bot$. The solution $W=W(v,\tau)$ commutes with $T_g$. Thus, the Liapunov-Schmidt method reduces the original problem to the problem of finding the zeros of the bifurcation map which is defined by
\[\varphi:\ker{\mathcal{L}}\times \mathbb{R}\rightarrow {(\mathrm{range}{\mathcal{L}})}^\bot\,\]
\[\varphi(u,\tau)=(I-E)\Phi(v+W(v,\tau),\tau).\]
An important property of the bifurcation map $\varphi$ is $\widetilde{G}$ reversing-equivariance property, i.e
\[\varphi(T_gu,\tau)=\sigma(\gamma)T_g\varphi(u,\tau),\forall g\in \widetilde{G}.\]
The last feature to be considered is the Hamiltonian structure of the bifurcation map. Using the implicit Hamiltonian constrain given in \cite{b4} and \cite{b1} one can show that $\Phi$ is a parameter dependent Hamiltonian vector field.
According to the actions of $G$ being (anti-)symplectic we define the symplectic sign $\chi$ by the homomorphism $\chi:G\rightarrow\{\pm 1\}$ such that \[\omega(\gamma x,\gamma y)=\chi(\gamma)\omega(x,y),\gamma\in G.\] Therefore, the weak symplectic form $\Omega$ will satisfy \[\Omega(g u,g v)=\chi(\gamma)\Omega( u,v),g=(\gamma,\theta)\in\widetilde{G},\] and the Hamiltonian sign is given by \begin{equation} \label{hamil} \mathcal{H}(g u,g v)=\sigma(\gamma)\chi(\gamma)\mathcal{H}(u,v). \end{equation}
In all cases we discuss $\ker{\mathcal{L}}$ is finite dimensional and thus $\ker {\mathcal{L}}=\ker {\mathcal{L}^*}$ and so by \cite[Theorem 6.2]{b4} the bifurcation equation is a Hamiltonian vector field. Its corresponding Hamiltonian $h$ satisfies the (semi-)invariance properties given in \eqref{hamil} restricted to $\ker{\mathcal{L}}$ i.e. \begin{equation} \label{hamil2} h(gu)=\sigma(\gamma)\chi(\gamma)h(u), u\in \ker{\mathcal{L}}, \end{equation} where as before $g=(\gamma,\theta)$ for some $\theta\in S^1$. In practice, the function $h$ can be computed to any finite degree by using normal form transformations, as described for example in \cite{b9} (the discussion there is for symplectic symmetries, but is equally valid for all four cases listed in Table 1).
\section{Symplectic time-reversing involution} In this section we prove the existence of symmetric and non-symmetric periodic solutions in a Hamiltonian system with a reversing involutory symmetry acting symplectically (type SR in Table 1). The problem was first studied by C. Buzzi and J. Lamb \cite{b1}, but there is a minor sign error in the calculations in Lemma 6.4 which affects the statement in their Theorem 6.1. They (correctly) prove the existence of a three dimensional conical subspace of symmetric periodic solutions in a neighbourhood of the origin. Also, they find that the origin is contained in two 2-dimensional manifolds each containing a non-symmetric family of periodic solutions with period close to $2\pi$. Using our expressions for the (semi-)invariants, we first recover their result on the symmetric solutions, and then we correct their Theorem 6.1 to show that generically there may or may not be two families of non-symmetric periodic orbits in a neighbourhood of the equilibrium point 0 depending on the coefficients of the Hamiltonian. Buzzi and Lamb also distinguish between two cases, called elliptic and hyperbolic, according to whether the period function on the 3-dimensional family is monotonic or not. It turns out that this distinction coincides with the two cases of existence or non-existence of non-symmetric periodic orbits.
By the normal forms given in Lemma \ref{l1} (i), we have $\dim \ker \mathcal{L}=4$, so we can write $\ker \mathcal{L}\cong \mathbb{C}^2$. Therefore, the bifurcation map is given by \begin{align*} \varphi:\mathbb{C}^2 \times \mathbb{R}\rightarrow \mathbb{C}^2\\ \varphi=2J \nabla_{z}h \end{align*}
with Hamiltonian function \begin{equation*} h:\mathbb{C}^2 \times \mathbb{R}\rightarrow \mathbb{R}, \end{equation*} which satisfies \eqref{hamil2}. Denote by $\mathbb{Z}_2^R$ the cyclic group generated by $R$, which together with $S^1$ gives $S^1\rtimes \mathbb{Z}_2^R$. The reversing symmetry $R$ acts on $\mathbb{C}^2$ by \begin{equation*} R(z_1 ,z_2)=(z_2 ,z_1) \end{equation*} while the $S^1$ action is defined by \begin{equation*} \theta(z_1 ,z_2)=(e^{i\theta} z_1 ,e^{-i\theta}z_2). \end{equation*} Let $\mathcal{E}$ be the ring of $S^1$ invariants, then one can write \[\mathcal{E}=\mathcal{E}_+\oplus\mathcal{E}_-,\] where $\mathcal{E}_+$ consists of $\mathbb{Z}_2^R$ invariants and $\mathcal{E}_-$ consists of $\mathbb{Z}_2^R$ anti-invariants. \begin{lem} \label{l2} Let $S^1\rtimes\mathbb{Z}_2^R$ act on $\mathbb{C}^2$ as above, then \begin{enumerate} \item $\mathcal{E}$ is the ring generated by $A,B,C,D$ where
$A=|z_1|^2, B=|z_2|^2, C+iD= 2z_1 z_2$. \item $\mathcal{E}_+$ is the subring of $\mathcal{E}$ generated by $N,C,D$ where
$ N=|z_1|^2+ |z_2|^2 $, and $\mathcal{E}_-$ is the module over $\mathcal{E}_+$ generated by the function $\delta =|z_1|^2-|z_2|^2$. \item The orbit map $O: \mathbb{C}^2\rightarrow\mathbb{R}^3$ defined by $(z_1,z_2)\rightarrow(N,C,D)$ has image $$\left\{(N,C,D) \mid N^2 \geq C^2 +D^2\right\}.$$ \end{enumerate}
Note that the functions $N,C,D$ and $\delta$ satisfy the identity $\delta^2=N^2 -C^2 -D^2$. \end{lem}
The proof of this lemma is by standard algebraic computations, similar to those found for example in \cite{b11}.
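As a further informal check (again not part of the original argument), the behaviour of these generators under the involution $R$ and the identity $\delta^2=N^2-C^2-D^2$ can be verified symbolically; $S^1$-invariance is immediate, since the generators involve only $|z_1|^2$, $|z_2|^2$ and $z_1z_2$. A SymPy sketch:

\begin{verbatim}
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
c1, c2 = sp.conjugate(z1), sp.conjugate(z2)

N     = z1*c1 + z2*c2            # |z1|^2 + |z2|^2
delta = z1*c1 - z2*c2            # |z1|^2 - |z2|^2
C     = z1*z2 + c1*c2            # real part of 2 z1 z2
D     = -sp.I*(z1*z2 - c1*c2)    # imaginary part of 2 z1 z2

swap = {z1: z2, z2: z1}          # the reversing involution R
for f in (N, C, D):
    assert sp.expand(f.xreplace(swap) - f) == 0       # R-invariant
assert sp.expand(delta.xreplace(swap) + delta) == 0   # R-anti-invariant

# the identity delta^2 = N^2 - C^2 - D^2
assert sp.expand(delta**2 - (N**2 - C**2 - D**2)) == 0
\end{verbatim}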
Now we can apply Lemma \ref{l2} to our Hamiltonian. The function $h$ is $S^1$-invariant, $R$ anti-invariant and real valued. This implies there is a smooth function $g$ satisfying \begin{equation} \label{no45} h(z_1,z_2,\tau) =\delta \ g(N,C,D,\tau). \end{equation}
In order to find the periodic solutions we need to solve the bifurcation equation first. The bifurcation equation is given by \begin{equation*} \nabla _z h=0 \end{equation*} This can be written as \begin{equation} \label{SR bif eqn} \left\{\begin{array}{rcccl} \displaystyle\frac{\partial h}{\partial z_1} &=& \displaystyle\bar{z_1} g + \delta \frac{\partial g}{\partial z_1} &=& 0,\\[10pt] \displaystyle\frac{\partial h}{\partial z_2} &=& \displaystyle - \bar{z_2}g + \delta \frac{\partial g}{\partial z_2} &=& 0. \end{array}\right. \end{equation}
We now consider, in turn, the symmetric and non-symmetric periodic orbits.
\subsection{Symmetric Periodic Orbits}
In finding symmetric periodic orbits we recover the result in \cite{b1}. \begin{thm}[Buzzi \& Lamb \cite{b1}]
Consider a symmetric equilibrium $0$ of a reversible Hamiltonian vector field $f$ with the reversing involution acting symplectically. Suppose that $Df(0)$ has two purely imaginary pairs of eigenvalues $\pm i$ with no other eigenvalues of the form $\pm ki,k \in \mathbb{Z}$. Then, the equilibrium is contained in a three-dimensional flow invariant conical subspace, given by the equation $\delta=0$, and generically this consists of a two-parameter family of symmetric periodic solutions whose period tends to $2\pi$ as they approach the equilibrium. \end{thm}
\begin{proof} Since the Hamiltonian is $R$ anti-invariant then all symmetric solutions are zeros of the bifurcation equations that lie in the level set $h=0$. For symmetric solutions we have $\delta =0$. Therefore the bifurcation equation calculated in $\mathrm{Fix} R=\{(z,z)\mid z\in \mathbb{C}\}$ will take the form \begin{equation*} \bar z g(z,\tau) =0 \end{equation*} Non-zero solutions yield $g(z,\tau)=0$. By the formula of the reduced Hamiltonian \eqref{no45}, the lowest order term of the variable $\tau$ is given by \begin{equation*}
h=(|z_1|^2-|z_2|^2)\frac{\tau}{2}+h.o.t. \end{equation*} This implies that $\displaystyle{\frac{\partial g}{\partial \tau}}(0,0)=\frac{1}{2}\neq 0$. By the implicit function theorem, for each small non-zero $z$ there exists a $\tau$ such that $(z,z)$ lies in a periodic orbit with period $\frac{2\pi}{\tau +1}$. By the reversing property, each $R$-symmetric solution intersects $\mathrm{Fix}R$ in two points. Since the conical subspace $\delta=0$ is 3 dimensional and all points in $\mathrm{Fix} R$ are solutions of the bifurcation equation, we conclude that the conical subspace consists entirely of these periodic solutions, with period close to $2\pi$ as they approach the origin. \end{proof}
\subsection{Non-Symmetric Periodic Orbits } We prove the existence of two families of non-symmetric periodic solutions under suitable conditions on the coefficients of the Hamiltonian. This result is fairly different to the one in \cite{b1}. To prove the existence of non-symmetric solutions one needs to solve the bifurcation equation without any symmetry conditions. By calculating the partial derivatives of $g$ the bifurcation equation will be \begin{align*} \frac{\partial h}{\partial z_1} &= \bar{z_1}(g+ \delta g_N)+ z_2 \delta (g_C -ig_D)=0\\ \frac{\partial h}{\partial z_2} &= \bar{z_2}(-g+ \delta g_N)+ z_1\delta (g_C -ig_D)=0 \end{align*} where $g_N=\displaystyle\frac{\partial g}{\partial N}$, $g_C=\displaystyle\frac{\partial g}{\partial C}$ and $g_D=\displaystyle\frac{\partial g}{\partial D}$.
Multiplying the first equation by $z_1$ and the second one by $z_2$ we get \begin{align} \label{no1}
|z_1|^2(g+ \delta g_N)+ z_1z_2 \delta (g_C -ig_D)&=0\\ \label{no2}
|z_2|^2(-g+ \delta g_N)+ z_1z_2\delta (g_C -ig_D)&=0 \end{align} By adding \eqref{no1} and \eqref{no2} we have \begin{equation} \label{no3} \delta (g + N g_ N+(C+iD)(g_C -ig_D))=0 \end{equation} Taking the imaginary part of the above equation gives \begin{equation*} D g_C - C g_D=0 \end{equation*} and when $C,D\neq 0$ we can write that equation as \begin{equation} \label{no4} \frac{g_C}{C}=\frac{g_D}{D} \end{equation} Therefore equation \eqref{no3} will be \begin{equation} \label{no6} \delta (g + N g_ N+ C g_C + D g_D )=0 \end{equation}
By subtracting \eqref{no2} from \eqref{no1} we have \begin{equation} N g+\delta^2 g_N=0 \end{equation} this can also be written by the formula \begin{equation} \label{no5} \frac{g}{\delta^2}=-\frac{g_N}{N} \end{equation} Substituting \eqref{no4} and \eqref{no5} in \eqref{no6} yields \begin{equation} \label{no7} \frac{g_N}{N}=-\frac{g_C}{C} \end{equation} Thus \begin{equation} \frac{g}{\delta^2} = -\frac{g_N}{N}=\frac{g_C}{C}=\frac{g_D}{D} \end{equation} which is equivalent to \begin{equation} \label{no14}
\frac {N}{g_N}=-\frac {C}{g_C}=-\frac{D}{g_D}, \end{equation}
In order to prove the existence of non-symmetric periodic solutions to the original Hamiltonian system we need to prove the following lemma. Let \[g_N(0)=n, g_C(0)=c, g_D(0)=d.\] \begin{lem} \label{l3} If $n,c$ and $d$ are not all zero then there exists a unique solution in $\mathbb{R}^4\cong(\tau,N,C,D)$-space for the system of equations \begin{align} \label{no10} g + N g_ N+ C g_C + D g_D&=0\\ \label{no11} N g_C+C g_N&=0\\ \label{no12} D g_C-C g_D&=0\\ \label{no13} N g_D+D g_N&=0. \end{align} \end{lem} \begin{proof}
It is clear that the last three equations are not independent but we will use them all to make up for the special cases when one of the numbers $n,c$ or $d$ is equal to zero. Suppose that $n\neq 0$. Then we only need to solve \eqref{no10},\eqref{no11} and \eqref{no13}. In order to apply the implicit function theorem we need to study the following Jacobian matrix with respect to $\tau,C,D$ and $N$
\[ \left(
\begin{array}{ccc|c}
\frac{1}{2}&c&d&n\\
0&n&0&c\\
0&0&n&d\\
\end{array}
\right) =
\left(
\begin{array}{c|c}
X&Y
\end{array}
\right)
\]
Since $n\neq 0$ then the matrix $X$ is non-singular. Therefore by the implicit function theorem there exists a unique curve $S=S(N)$, with $dS(0)=-X^{-1} Y$, that solves the system. If $n=0$ but $c\neq0$ we can choose equations \eqref{no10},\eqref{no11} and \eqref{no12}. Solving by the implicit function theorem gives a unique solution $S=S(C)$. A similar argument can be used for the remaining cases.
\end{proof} Now we state and prove the main theorem about the existence of non-symmetric periodic solutions for the given reversible Hamiltonian system.
\begin{thm} \label{th1} Suppose that $n^2 \neq c^2+ d^2$, then there exist the symmetric Liapunov centre families of periodic solutions filling the set $\delta=0$ described before. Moreover, \begin{enumerate} \item [i)] If $n^2>c^2+d^2 $ then there exists two families of non-symmetric periodic orbits for the Hamiltonian system distinguished by the sign of $\delta$. The period of the periodic solutions converges to $2\pi$ as the solutions tend to the origin. \item [ii)] If $n^2<c^2+d^2 $ then the only periodic orbits with period close to $2\pi$ in a neighbourhood of the origin are the symmetric ones.
\end{enumerate} \end{thm} \begin{proof} To prove the existence of non-symmetric periodic orbits we have to solve the equations \eqref{no10},\eqref{no11},\eqref{no12} and \eqref{no13}. By the condition $n^2 \neq c^2+ d^2$ we have that $n,c$ and $d$ cannot all be zero. Applying Lemma \ref{l3} we have a unique solution for those equations. Therefore, we can write \begin{equation}
\frac {N}{g_N}=-\frac {C}{g_C}=-\frac{D}{g_D}=t, \end{equation} which is equivalent to $N= g_N t, C=-g_C t$ and $D=-g_D t$. To get non-symmetric solutions we should have $\delta^2=N^2-C^2-D^2 >0$. This implies \begin{equation*} ({g_N}^2-{g_C}^2-{g_D}^2 )t^2>0, \ \text{for}\ t\neq0, \end{equation*} and therefore, ${g_N}^2-{g_C}^2-{g_D}^2>0$. Taking the limit at the origin gives $n^2\geq c^2+d^2 $. We conclude that non-symmetric solutions exist when $n^2>c^2+d^2 $ and split into two families according to $\delta$ being positive or negative. On the other hand, when $n^2< c^2+d^2 $ the only periodic orbits with period close to $2\pi$ in a neighbourhood of the origin are the symmetric ones. \end{proof}
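To illustrate the criterion in Theorem \ref{th1}, one can truncate $g$ to its linear part, so that $g_N=n$, $g_C=c$ and $g_D=d$ exactly; equations \eqref{no11}--\eqref{no13} then hold identically along the branch $N=g_Nt$, $C=-g_Ct$, $D=-g_Dt$, and \eqref{no10} determines $\tau$. The SymPy sketch below is only an informal leading-order check, not a replacement for the proof.

\begin{verbatim}
import sympy as sp

n, c, d, t, tau = sp.symbols('n c d t tau', real=True)

# branch from the proof: N = g_N t, C = -g_C t, D = -g_D t, with g linear
N, C, D = n*t, -c*t, -d*t
g = tau/2 + n*N + c*C + d*D

# equation (no10): g + N g_N + C g_C + D g_D = 0 determines tau
tau_branch = sp.solve(sp.Eq(g + N*n + C*c + D*d, 0), tau)[0]
print(sp.factor(tau_branch))           # = -4 t (n^2 - c^2 - d^2)

# delta^2 along the branch: positive precisely when n^2 > c^2 + d^2
print(sp.factor(N**2 - C**2 - D**2))   # = t^2 (n^2 - c^2 - d^2)
\end{verbatim}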
\subsection{Period Distribution within the Family of Symmetric Periodic Solutions} Following the argument given in Buzzi and Lamb \cite{b1}, we describe the structure of the period distribution for symmetric periodic solutions. Since $\mathrm{Fix} R$ is two dimensional, the level sets of the period are given by $\tau=\tau(x,y)$. We may change coordinates in a neighbourhood of the origin so that $\tau=\varepsilon_1 \tilde{x}^2 + \varepsilon_2 \tilde{y}^2$ with $\varepsilon_j =\pm 1$, where the signs depend on the details of $h$ and $H$. One can give the following definition: \begin{dfn} The level sets of the period $\tau$ can be of two types: \begin{enumerate} \item \emph{elliptic} when $\varepsilon_1 \varepsilon_2=1$. In that case the level sets of the period form circles and $\tau$ increases or decreases monotonically with increasing radius. \item \emph{hyperbolic} when $\varepsilon_1 \varepsilon_2=-1$. Here the level sets of the period form two families of hyperbolae, one family with positive increasing $\tau$ and one with negative decreasing $\tau$. \end{enumerate} \end{dfn}
Now we can prove the following proposition:
\begin{prop} Depending on the quartic terms of the Hamiltonian function (equivalently, the quadratic terms of the function $g$), the level sets of $\tau$ on the three dimensional surface of symmetric periodic solutions near the equilibrium point are elliptic when $n^2>c^2+d^2$ and hyperbolic when $n^2<c^2+d^2$. \end{prop}
\begin{proof} As discussed in the proof of the existence of symmetric periodic solutions, $\tau(x,y)$ can be calculated using the equation $g(z,\tau)=0$, with $z=x+iy$. Using our variables $N,C$ and $D$ and depending on the quadratic terms of that equation we have \begin{align*} g(N,C,D,\tau)&=0\\ nN+cC+dD+....&=-\frac{\tau}{2}\\ 2n(x^2+y^2)+2c(x^2-y^2)-4d(xy)&=-\frac{\tau}{2} \end{align*}
By the Morse Lemma the shape of $\tau(x,y)$ near the origin is given by the determinant \begin{equation*}
D=4^2(n^2-c^2-d^2).
\end{equation*} Therefore the family of periodic orbits is elliptic when $(n^2-c^2-d^2)>0$ or hyperbolic when $(n^2-c^2-d^2)<0$. \end{proof} Accordingly, one can easily deduce the following corollary. \begin{cor} The two dimensional families of non-symmetric periodic orbits given in Theorem $\ref{th1}$ exist if and only if the three dimensional family of symmetric periodic orbits is of elliptic type. \end{cor}
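The determinant in the proof above is easily reproduced symbolically. The following sketch is an informal check, not part of the original computation: it takes the quadratic form $2n(x^2+y^2)+2c(x^2-y^2)-4dxy$ appearing there and confirms that the sign of the Hessian determinant is the sign of $n^2-c^2-d^2$.

\begin{verbatim}
import sympy as sp

x, y, n, c, d = sp.symbols('x y n c d', real=True)

# quadratic form on Fix R from the proof above (equal to -tau/2 to leading order)
Q = 2*n*(x**2 + y**2) + 2*c*(x**2 - y**2) - 4*d*x*y

detH = sp.hessian(Q, (x, y)).det()
print(sp.expand(detH))   # 16 n^2 - 16 c^2 - 16 d^2, the sign of n^2 - c^2 - d^2
\end{verbatim}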
\section{Anti-symplectic involution} In this section we analyse the problem of existence of periodic orbits in a Hamiltonian system which is equivariant under the action of an anti-symplectic involution $S$ (type AE in Table 1). This was studied by J.~Li and Y.~Shi in \cite{b2}, but that paper contains a number of errors. Firstly, the form of the Hamiltonian is not sufficiently general, for example the polynomial function $h= DN$ satisfies the symmetry of the problem but is not in the form assumed in \cite{b2}. This affects the results significantly and the general form of the Hamiltonian makes the calculations more difficult. There is also a serious error in the proof of their Lemma 5.3. As a result we consider the problem anew. We use a different basis from \cite{b2}, so the invariants and anti-invariants are different, and we determine a general formula for the reduced Hamiltonian. Firstly, we find that no symmetric periodic orbits can occur generically (opposite to the result claimed in \cite{b2}). Secondly, we prove the existence of at least two and at most 12 non-symmetric families of periodic solutions (nonlinear normal modes) in a neighbourhood of the equilibrium point under the same generic conditions.
An immediate consequence of our assumptions is the Hamiltonian being $S$ anti-invariant (as pointed out in Table 1). By the normal forms given in Lemma \ref{l1}(ii) we have $\dim \ker \mathcal{L}=4$ i.e. $\ker \mathcal{L}\cong \mathbb{C}^2$. The bifurcation equation is given by the formula \begin{align*} \varphi:\mathbb{C}^2 \times \mathbb{R}\rightarrow \mathbb{C}^2\\ \varphi=2J \nabla_{z}h \end{align*} with the Hamiltonian \begin{equation*} h:\mathbb{C}^2 \times \mathbb{R}\rightarrow \mathbb{R}, \end{equation*} where $J$ is the structure map. Now let's define the actions of $\mathbb{Z}_2^S \times S^1$ on $\mathbb{C}^2$ by \begin{align*} S(z_1,z_2)&=(\bar{z_2},\bar{z_1})\\ \theta(z_1,z_2)&=(e^{i\theta}z_1, e^{-i\theta}z_2) \end{align*}
Now we study the set of (anti-)invariants and find the appropriate formula for $h$. \begin{lem} \label{lem2} For $S^1\rtimes\mathbb{Z}_2^S$ acting on $\mathbb{C}^2$ as above: \begin{enumerate} \item The ring of $S^1\rtimes\mathbb{Z}_2^S$ invariant functions is generated by $N,C,D^2$ and $\delta D$, where
\[ N=|z_1|^2+ |z_2|^2 ,\quad C+iD= 2z_1 z_2 ,\quad \delta=|z_1|^2- |z_2|^2 .\]
\item The $S^1$ invariant but $\mathbb{Z}_2^S$ anti-invariant functions form a module generated by $\delta$ and $D$ over the subring generated by $N,C$ and $D^2$. \end{enumerate}
\end{lem} According to that the Hamiltonian $h$ will take the form \begin{equation*} h=\delta g^1(N,C,D^2,\tau)+ D g^2(N,C,D^2,\tau). \end{equation*} The bifurcation equation will be given by \begin{align} \label{no15} \frac{\partial h}{\partial z_1} &= \bar{z_1} g^1+ \delta \frac{\partial g^1}{\partial z_1}-iz_2g^2+D \frac{\partial g^2}{\partial z_1}=0\\ \label{no16} \frac{\partial h}{\partial z_2} &= -\bar{z_2} g^1+ \delta \frac{\partial g^1}{\partial z_2}-iz_1g^2+D\frac{\partial g^2}{\partial z_2}=0 \end{align}
\subsection{Symmetric Periodic Orbits} Symmetric periodic solutions of that equivariant Hamiltonian system lie in the set $\mathrm{Fix} S=\{(z,\bar{z}),z\in \mathbb{C}\}$. Moreover, by anti-invariance, that is $h\circ S=-h$, all symmetric solutions will be in the level set $h=0$. In order to get the symmetric periodic solutions we need to solve the bifurcation equation calculated in $\mathrm{Fix S}$. Consequently, one needs to solve \eqref{no15} and \eqref{no16} with conditions: $\delta=D =0$ and $N=C$. Thus, \begin{align} \label{no17} \bar{z_1}g^1-iz_2 g^2&=0\\ \label{no18} -\bar{z_2}g^1-iz_1 g^2&=0. \end{align} By multiplying \eqref{no17} by $z_1$ and \eqref{no18} by $z_2$ we get \begin{align}
|z_1|^2 g^1 -iz_1 z_2 g^2&=0\\
-|z_2|^2 g^1-iz_1 z_2 g^2&=0 \end{align} Adding and subtracting these two equations yields \begin{align*} \delta g^1 -i(C+iD)g^2&=0\\ N g^1&=0. \end{align*} With the conditions $\delta=D=0$ we have \begin{align*}
C g^2&=0\\
N g^1&=0. \end{align*} Since we are looking for nonzero solutions then $N=C\neq0$ and therefore, solutions are common zeros of $g^1$ and $g^2$ in a neighbourhood of the origin. But $g^1$ and $g^2$ are independent functions and generically the only common zero in a neighbourhood of the origin is $0$ itself. As a result there are no symmetric periodic orbits for the given Hamiltonian system. \begin{rem} Another way to see the non-existence of symmetric solutions in that system is by using a Liapunov function. Consider the Hamiltonian given by the formula $H=\delta(a_1+b_1N+c_1C+\cdots)+D(a_2+b_2N+c_2C+\cdots)$. Restricting the Hamiltonian system on the two dimensional invariant space $\mathrm{Fix} S$ gives \begin{align*} \dot{x}&=2y\left(a_1+2(b_1+c_1)(x^2+y^2)+\cdots\right)+2x\left(a_2+2(b_2+c_2)(x^2+y^2)+\cdots\right) \\ \dot{y}&=-2x\left(a_1+2(b_1+c_1)(x^2+y^2)+\cdots\right)+2y\left(a_2+2(b_2+c_2)(x^2+y^2)+\cdots\right) \end{align*} Easy computations show that the eigenvalues of the linear system are $\lambda=2(a_2\pm a_1 i)$. In order to get periodic orbits we should have $a_2=0$ and the system would be written as \begin{align*} \dot{x}&=2y\left(a_1+2(b_1+c_1)(x^2+y^2)+\cdots\right)+2x\left(2(b_2+c_2)(x^2+y^2)+\cdots\right) \\ \dot{y}&=-2x\left(a_1+2(b_1+c_1)(x^2+y^2)+\cdots\right)+2y\left(2(b_2+c_2)(x^2+y^2)+\cdots\right). \end{align*} Consider as Liapunov function $V=x^2+y^2$. Differentiating $V$ in the direction of the Hamiltonian vector field yields \begin{align*} \dot{V}&=2x\dot{x}+2y\dot{y}\\ &=8(x^2+y^2)^2(b_2+c_2). \end{align*} The number $b_2+c_2$ is generically non-zero and therefore $\dot{V}$ is non-zero. This means the sign of $\dot{V}$ (either positive or negative) is constant along any trajectory, so that the trajectory cannot be closed. Thus, the system does not have any symmetric periodic orbits. \end{rem} \subsection{Non-symmetric Periodic Orbits} For this case we only need to solve the pair \eqref{no15} and \eqref{no16} without any extra conditions. Multiplying \eqref{no15} by $z_1$ and \eqref{no16} by $z_2$ gives \begin{multline} \label{no19}
|z_1|^2g^1+\delta\left(g^1_N |z_1|^2+g^1_C z_1z_2 + g^1_{D^2} 2D(-iz_1z_2)\right)-iz_1z_2g^2\\[12pt]
+D\left(g^2_N |z_1|^2+g^2_C z_1z_2 + g^2_{D^2} 2D(-iz_1z_2)\right)=0 \end{multline} \begin{multline} \label{no20}
-|z_2|^2g^1+\delta\left(g^1_N |z_2|^2+g^1_C z_1z_2 + g^1_{D^2} 2D(-iz_1z_2)\right)-iz_1z_2g^2\\[12pt]
+D\left(g^2_N |z_2|^2+g^2_C z_1z_2 + g^2_{D^2} 2D(-iz_1z_2)\right)=0 \end{multline} By adding theses two equations we have \begin{multline} \label{no21} \delta \left(g^1+Ng^1_N+(C+iD)g^1_C+2g^1_{D^2}(-iD)(C+iD)\right)-i(C+iD)g^2\\[12pt] +D\left(Ng^2_N+(C+iD)g^2_C+2g^2_{D^2}(-iD)(C+iD)\right)=0 \end{multline} The real and imaginary parts of equation \eqref{no21} are \begin{equation} \label{no22} \delta \left(g^1+Ng^1_N+Cg^1_C+2g^1_{D^2}D^2\right)+Dg^2+D\left(Ng^2_N+Cg^2_C+2g^2_{D^2}D^2\right)=0 \end{equation} \begin{equation} \label{no23} \delta \left(Dg^1_C-2g^1_{D^2}CD\right)-Cg^2+D\left(Dg^2_C-2g^2_{D^2}CD\right)=0 \end{equation} The last equation to be considered comes from subtracting \eqref{no20} from \eqref{no19} and it will take the form \begin{equation} \label{no24} Ng_1+\delta^2g^1_N+D\delta g^2_N=0 \end{equation} This means finding non-symmetric solutions of the Hamiltonian system will be by solving the triple \eqref{no22},\eqref{no23} and \eqref{no24}. Clearly the system is singular at the origin and can be studied using a blow-up method. For that purpose define the new coordinates $(u,v,w,t,x)$ by $$N=rv,\quad C=ru,\quad D=rw,$$ $$\tau=rt,\quad \delta=rx, $$ combined together by the relation $v^2=u^2+w^2+x^2$ according to the relation $N^2=\delta^2+C^2+D^2$. Substituting these new coordinates in \eqref{no22},\eqref{no23} and\eqref{no24} gives \[r\left(vg^1+rx^2g^1_N+rxwg^2_N\right)=0\] \[r\left(x(g^1+rvg^1_N+rug^1_C+2r^2w^2g^1_{D^2})+w(g^2+rvg^2_N+rug^2_C+2r^2w^2g^2_{D^2})\right)=0\] \begin{equation} \label{no25} r\left(x(g^1_Crw-2r^2wug^1_{D^2})-ug^2+w(rwg^2_C-2r^2uwg^2_{D^2})\right)=0 \end{equation} We are interested in the non-zero solutions, i.e.\ $r\neq0$. The first step is to divide by the common power of $r$ in these equations and the second step is to apply the implicit function theorem. For simplicity we can write the Taylor series for the functions $g^1$ and $g^2$ as \begin{align*} g^1&=\displaystyle\frac{\tau}{2}+a_1N+c_1C+d_1D^2+\cdots\\ g^2&=b_2\tau+a_2N+c_2C+d_2D^2+\cdots \end{align*} which with the new coordinates take the form \begin{align*} g^1=r\displaystyle\bar{g}^1&=r(\displaystyle\frac{t}{2}+a_1v+c_1u+d_1rw^2+\cdots)\\ g^2=r\displaystyle\bar{g}^2&=r(b_2t+a_2v+c_2u+d_2rw^2+\cdots) \end{align*} Accordingly, the system \eqref{no25} will be written as \[r^2\left(v\bar{g}^1+x^2\bar{g}^1_v+xw\bar{g}^2_v\right)=0\] \[r^2\left(x(\bar{g}^1+v\bar{g}^1_v+u\bar{g}^1_u+2rw^2\bar{g}^1_{rw^2})+w(\displaystyle\bar{g}^2+v\bar{g}^2_v+u\bar{g}^2_u+2rw^2\bar{g}^2_{rw^2})\right)=0\] \begin{equation} \label{no26} r^2\left(x(w\bar{g}^1_u-2rwu\bar{g}^1_{rw^2})-u\displaystyle\bar{g}^2+w(w\bar{g}^2_u-2ruw\bar{g}^2_{rw^2})\right)=0 \end{equation} Note here that $g^1_N=\bar{g}^1_v$ etc. Dividing by $r^2$ and substituting $r=0$ yields \begin{equation*} v(\frac{t}{2}+a_1v+c_1u)+a_1x^2+a_2xw=0 \end{equation*} \begin{equation*} x(\frac{t}{2}+2a_1v+2c_1u)+w(b_2t+2a_2v+2c_2u)=0 \end{equation*} \begin{equation} \label{no27} c_1xw-u(b_2t+a_2v+c_2u)+c_2w^2=0. \end{equation} Clearly, the system can not be solved by the implicit function theorem at this point in the argument. As a result we will use a different technique as illustrated in the next section. We will show that \eqref{no27} has non-degenerate solutions, then apply a continuation argument to show \eqref{no26} has solutions when $r>0$. Adding the relation between the variables $N,C,D$ and $\delta$ gives us the system
\begin{equation*} v(\frac{t}{2}+a_1v+c_1u)+a_1x^2+a_2xw=0 \end{equation*} \begin{equation*} x(\frac{t}{2}+2a_1v+2c_1u)+w(b_2t+2a_2v+2c_2u)=0 \end{equation*} \begin{equation*} c_1xw-u(b_2t+a_2v+c_2u)+c_2w^2=0 \end{equation*} \begin{equation} \label{no40} u^2+w^2+x^2-v^2=0. \end{equation}
First of all we want to count the number of solutions of the system \eqref{no40} over $\mathbb{C}$. For that purpose we need the following theorem.
\begin{thm}[Bezout's theorem] \label{no41} Let $n$ homogeneous polynomials over $\mathbb{C}$ in $n+1$ variables, of degrees $d_1,d_2,\dots,d_n$, define $n$ hypersurfaces in the projective space of dimension $n$. If the number of intersection points of these hypersurfaces is finite, then this number is $d_1d_2\cdots d_n$, provided the points are counted with their multiplicity. \end{thm}
The system \eqref{no40} consists of four homogeneous equations, each of degree two, in five variables. According to Bezout's Theorem the system has $16$ solutions in complex projective space, counted with multiplicity, and we can divide them into two main types: solutions with $v=0$ and solutions with $v\neq0$. \subsubsection{\textbf{Solutions when $\boldsymbol{v=0}$}} In that case, algebraic calculations give a total of three different solutions: \begin{enumerate}
\item $\{t \in \mathbb{R}, u = 0, w = 0, x = 0\}$,
\item $ \{t = \mp 2i\frac{c_2}{b_2}w,\, u = \pm iw, \,w \in\mathbb{R}, \,x = 0\}.$
\end{enumerate}
Now we want to study the multiplicity of each solution. Consider the Jacobian matrix for the system \eqref{no40} with respect to $v,t,u,w,x$
\[J=
\left(\begin{smallmatrix} \frac{1}{2}t+2a_1v+c_1u & \frac{1}{2}v & c_1v & a_2x & 2a_1x+a_2w\\ 2a_1x+2a_2w & \frac{1}{2}x+b_2w & 2c_1x+2c_2w & 2a_2v+b_2t+2c_2u & \frac{1}{2}t+2a_1v+2c_1u \\ -a_2u & -b_2u & -a_2v-b_2t-2c_2u & c_1x+2c_2w & c_1w\\ -2v & 0& 2u & 2w & 2x\\ \end{smallmatrix}\right) \]
Substituting the values of the first solution and the condition $v=0$ in the Jacobian matrix yields \[J\mid_{v=0,sol.1}= \left( \begin{matrix} \frac{1}{2}t & 0 & 0 &0 & 0\\ 0 & 0 & 0 & b_2t& \frac{1}{2}t\\ 0& 0 & -b_2t& 0 & 0\\ 0 & 0& 0 & 0 & 0\\ \end{matrix} \right) \]
To get the appropriate square submatrix we eliminate the second column because $t$ is non-zero and get
\[ J_1=
\left( \begin{matrix} \frac{1}{2}t & 0 &0 & 0\\ 0 & 0 & b_2t& \frac{1}{2}t\\ 0& -b_2t& 0 & 0\\ 0 &0 & 0 & 0\\ \end{matrix} \right)
\]
This matrix is of rank three and therefore this first solution is not simple. To study its multiplicity we need to study the behaviour of system \eqref{no40} near a solution point for example say $(v,t,u,w,x)=(0,2,0,0,0)$. Consider the system
\begin{equation*} v(1+a_1v+c_1u)+a_1x^2+a_2xw=\varepsilon_1 \end{equation*} \begin{equation*} x(1+2a_1v+2c_1u)+w(2b_2+2a_2v+2c_2u)=\varepsilon_2 \end{equation*} \begin{equation*} c_1xw-u(2b_2+a_2v+c_2u)+c_2w^2=\varepsilon_3 \end{equation*} \begin{equation} \label{no42} u^2+w^2+x^2-v^2=\varepsilon_4. \end{equation} Near the point $(v,t,u,w,x)=(0,2,0,0,0)$ the first equation can be solved by the implicit function theorem for $v$, the second for $x$ and the third equation for $u$. As a result we end up with solving the equation \begin{equation*} w^2+f(w)=\varepsilon_4, \end{equation*} where $f(w)$ is a function constructed by substituting the solutions from the implicit function theorem in equation \eqref{no42}. Clearly $f$ is of order greater than one. So, the least order coefficient is $w^2$ and so the studied solution is of multiplicity two.\\
Regarding the multiplicity of the second and third solution we should assume that $w\neq0$ for a non-zero solution; for simplicity let $w=1$. The Jacobian matrix will take the form
\[J\mid_{v=0,w=1,sol.2}= \left( \begin{matrix} \mp c_2i/b_2\pm c_1 i & 0 & 0 &0 & a_2\\ 2a_2& b_2 & 2c_2 & 0& \mp c_2 i/b_2\pm 2 c_1 i\\ \mp a_2 i& \mp b_2 i & 0&2c_2 & c_1\\ 0 & 0& \pm 2i & 2 & 0\\ \end{matrix} \right) \] Since $w=1$, we can omit the $w$- column and get
\[J_2= \left( \begin{matrix} \mp c_2i/b_2\pm c_1i & 0 & 0& a_2\\ 2a_2& b_2 & 2c_2 &\mp c_2i/b_2\pm 2 c_1i\\ \mp a_2i& \mp b_2i & 0&c_1\\ 0 & 0& \pm 2i & 0\\ \end{matrix} \right) \] \[ \det{J_2}= -2(a_2^2 b_2^2+b_2^2 c_1^2-2 b_2 c_1 c_2+c_2^2)/b2.\]
We can assume that this expression is non-zero, and therefore the second and the third solutions are simple. We conclude that the case $v=0$ corresponds to four solutions, where the first solution is doubled but the others are of multiplicity one. Note that $v=0$ implies $N= |z_1|^2+ |z_2|^2=0$. Thus, these four solutions won't be counted as periodic solutions of the given system, but will help us find out how many non-zero periodic solutions there are.
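The value of $\det J_2$ quoted above can be reproduced directly from the polynomial system. The SymPy sketch below is an informal reproduction (the original computations were not carried out this way): it builds the Jacobian, substitutes the second solution with $w=1$ and the upper choice of signs, deletes the $w$-column and evaluates the determinant, which simplifies to the expression displayed above.

\begin{verbatim}
import sympy as sp

a1, a2, b2, c1, c2 = sp.symbols('a1 a2 b2 c1 c2')
v, t, u, w, x = sp.symbols('v t u w x')

E = [v*(t/2 + a1*v + c1*u) + a1*x**2 + a2*x*w,
     x*(t/2 + 2*a1*v + 2*c1*u) + w*(b2*t + 2*a2*v + 2*c2*u),
     c1*x*w - u*(b2*t + a2*v + c2*u) + c2*w**2,
     u**2 + w**2 + x**2 - v**2]

Jac = sp.Matrix(E).jacobian([v, t, u, w, x])

# the second solution with v = 0, w = 1 (upper signs)
sol = {v: 0, w: 1, x: 0, u: sp.I, t: -2*sp.I*c2/b2}
assert all(sp.simplify(e.subs(sol)) == 0 for e in E)

J2 = Jac.subs(sol).extract([0, 1, 2, 3], [0, 1, 2, 4])  # drop the w-column
print(sp.simplify(J2.det()))
# = -2*(a2**2*b2**2 + b2**2*c1**2 - 2*b2*c1*c2 + c2**2)/b2
\end{verbatim}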
\subsubsection{\textbf{Solutions when $\boldsymbol{v\neq0}$}}
There remain $12$ solutions for the case $v\neq0$ according to Bezout's theorem . The following proposition guarantees a minimum of two real solutions for the system \eqref{no40}. \begin{prop} \label{prop1} For any choice of coefficients $\{a_1,a_2,b_2,c_1,c_2\}$ the two points
\[\{v\in {\mathbb{R}}^*, t=-4 a_1 v, u=w=0, x=\pm v\},\] satisfy equation \eqref{no40} , when $v\neq0$. \end{prop} \begin{proof} Straightforward calculations yield the result. \end{proof}
In order to find out more about the maximum number of real solutions, we use a numerical approach. We choose various values for the constants in the system \eqref{no40} and then solve the equations using Maple. Since we are interested in solutions with $v\neq0$, we put $v=1$ for simplicity. These numerical calculations suggest that the system can have a maximum of eight real solutions, including the two analytic solutions given by Proposition \ref{prop1}. In addition, there are examples of systems with four or six real solutions. Our aim is to prove that for each of these cases, the solutions are non-degenerate. Then, under any perturbation of the set of coefficients there still exist (nearby) real solutions (i.e.\ periodic solutions). In the following we study an example of each set of coefficients that has two, four, six or eight real solutions for the studied system \eqref{no40}. Then, we check their non-degeneracy conditions. Note that all numbers are rounded to four decimal digits.
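The counts reported in the examples below were obtained with Maple. Purely as an illustration, and with no claim of reproducing the original worksheets, they can also be checked with a short SymPy script such as the sketch below, which fixes $v=1$, solves the polynomial system exactly and counts the real solutions numerically; it may take a few seconds to run.

\begin{verbatim}
import sympy as sp

t, u, w, x = sp.symbols('t u w x')

def count_real_solutions(a1, a2, b2, c1, c2, v=1):
    """Count real solutions of the system (no40) with v fixed."""
    E = [v*(t/2 + a1*v + c1*u) + a1*x**2 + a2*x*w,
         x*(t/2 + 2*a1*v + 2*c1*u) + w*(b2*t + 2*a2*v + 2*c2*u),
         c1*x*w - u*(b2*t + a2*v + c2*u) + c2*w**2,
         u**2 + w**2 + x**2 - v**2]
    sols = sp.solve(E, [t, u, w, x], dict=True)
    real = [s for s in sols
            if all(abs(complex(sp.N(s[var])).imag) < 1e-8
                   for var in (t, u, w, x))]
    return len(real)

# coefficients of the first example below: two real solutions are expected
print(count_real_solutions(a1=1, a2=5, b2=1, c1=2, c2=2))
\end{verbatim}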
\begin{exm}[A system with two real solutions] Consider the set \[R=\{a_1 = 1, a_2 = 5, b_2 = 1, c_1 = 2, c_2 = 2, v = 1\}.\] The corresponding system has only two real solutions \[\{t = -4, u = 0, w = 0, x =\pm v= \pm 1\},\] which are those given in Proposition \ref{prop1}. The remaining $10$ solutions are non-real. In order to check the non-degeneracy condition, we need to study the proper submatrix of $J$ for each solution and ensure that its determinant is non-zero. Substituting the values given in $R$ and the two solutions in $J$ yields \[J_1= \left( \begin{matrix} 0&0.5&2&\pm 5&\pm 2\\ \pm 2&\pm 0.5&\pm 4&6&0\\ 0&0&-1&\pm 2&0\\ -2&0&0&0&\pm 2\\ \end{matrix} \right). \] Since $t\neq0$, we omit the $t-$column and we have the submatrix \[J_{11}= \left( \begin{matrix} 0&2&\pm 5&\pm 2\\ \pm 2&\pm 4&6&0\\ 0&-1&\pm 2&0\\ -2&0&0&\pm 2\\ \end{matrix} \right), \] \[\det{J_{11}}=\pm20.\] Therefore, these two solutions are non-degenerate.
\end{exm} A similar argument is used in the remaining examples to prove the non-degeneracy of solutions in each case.
\begin{exm}[A system with four real solutions] Let the set of coefficients in the system \eqref{no40} be \[R=\{a_1 = 1, a_2 = 5, b_2 = -2, c_1 = 2, c_2 = 2, v = 1\}.\] the associated system has $8$ non-real solutions and only four real solutions and the real ones are : \begin{enumerate} \item $\{t = -4, u = 0, w = 0, x =\pm v= \pm 1\}$ \item $\{t = 1.5602, u = -0.9681, w = \pm 0.0855, x =\pm 0.2354\}$ \end{enumerate}
Substituting $R$ and the first two solutions in the matrix $J$ we get
\[J_1= \left( \begin{matrix} 0 &0.5&2&\pm5&\pm2\\ \pm2&\pm0.5&\pm4&18&0\\ 0&0&-13&\pm2&0\\ -2&0&0&0&\pm2\\ \end{matrix} \right) \] Now we can choose the submatrix $J_{11}$ by omitting the second column because $t$ is non-zero and we find its determinant to be $\det J_{11} = \pm692\neq0$. In the same way we can study the third and fourth solutions to get \[J_{22}= \left(
\begin{matrix} 0.8439&2&\pm 1.1772&\pm0.8985\\ \pm1.3262&\pm1.2839&3.0071&-1.0924\\ 4.8406&1.9929&\pm0.8130&\pm0.1711\\ -2&-1.9362&\pm0.1711&\pm0.4709\\ \end{matrix} \right)\\ \] We have $\det{J_{22}}=\pm 35.6351\neq0$.
Since the determinants are non-zero, all four solutions are non-degenerate and we can find an open set of coefficients that give four real solutions. \end{exm} \begin{exm}[A system with six real solutions] Let \[R=\{a_1 = -2, a_2 = -11, b_2 = -5, c_1 = 1, c_2 = 2, v = 1\}.\] The system \eqref{no40} with those coefficients has only six real solutions: \begin{enumerate} \item $\{t = 8, u = 0, w = 0, x = \pm v=\pm 1\}$ \item $\{t = -2.5592, u = 0.0346, w = \pm 0.4980,x = \mp 0.8665\}$ \item $\{t = -3.7663, u = 0.1529, w =\pm 0.8984,x = \mp 0.4118\}$ \end{enumerate} The non-degeneracy of the above solutions can be studied in pairs. Firstly, we study the determinant of the appropriate matrix $J_{11}$ associated to the first and second solutions.
\[J_{11}= \left( \begin{matrix} 0&1&\mp11&\mp4\\ \mp4&\pm2&-62&0\\ 0&51&\pm1&0\\ -2&0&0&\pm2\\ \end{matrix} \right)\\ \] \[ \det{J_{11}}=\mp20816.\]
Similarly, for the rest of solutions we have \[J_{22}= \left( \begin{matrix} -5.2450&1&\pm9.5314&\mp2.0120\\ \mp7.4900&\pm0.2590&-9.0658&-5.2104\\ 0.3804&-1.9342&\pm1.1255&\pm0.4980\\ -2&0.0692&\pm0.9960&\mp1.7330\\ \end{matrix} \right)\\ \] \[ \det{J_{22}}=\mp164.8123\]
\[J_{33}= \left( \begin{matrix} -5.7303&1&\pm4.5299&\mp8.2346\\ \mp18.1165&\pm2.7698&-2.5567&-5.5774\\ 1.6818&-8.4433&\pm3.1816&\pm0.8984\\ -2&0.3058&\pm1.7967&\mp0.8236\\ \end{matrix} \right)\\ \] \[ \det{J_{33}}=\pm1827.2294.\] As a result all real solutions of this case are non-degenerate \end{exm}
We end with an example of a system with eight real solutions, which is the largest number of real solutions we found using numerical calculations. \begin{exm}[A system with eight real solutions] Let \[R=\{a_1 = 1, a_2 = -4, b_2 = -1, c_1 = 1, c_2 = 2, v = 1\}\] The corresponding real solutions are only eight and they are \begin{enumerate} \item$\{t = -4, u = 0, w = 0, x = \pm v=\pm 1\}$ \item$\{t = -4.9432, u = -0.2615, w =\pm 0.2274, x = \mp 0.9380\}$ \item$\{t = -2.8537, u = 0.8527, w = \pm 0.4155, x =\pm 0 .3165\}$ \item$\{t = -6.4260, u = 0.2940, w = \pm 0.8063, x = \mp 0.5133\}$. \end{enumerate} The non-degeneracy conditions are \[J_{11}= \left( \begin{matrix} 0&1&\mp4&\pm2\\ \pm2&\pm2&-4&0\\ 0&0&\pm1&0\\ -2&0&0&\pm2\\ \end{matrix} \right)\\ \] \[ \det{J_{11}}=\pm4\] \[J_{22}= \left( \begin{matrix} -0.7331&1&\pm3.7521&\mp2.7857\\ \mp3.6953&\mp0.9664&-4.1029&-0.9947\\ -1.0461&0.1029&\mp0.0284&\pm0.2274\\ -2&-0.5231&\pm0.4548&\mp1.8760\\ \end{matrix} \right)\\ \] \[ \det{J_{22}}=\mp13.8083\] \[J_{33}= \left( \begin{matrix} 1.4259&1&\mp1.2659&\mp1.0293\\ \mp2.6915&\pm2.2951&-1.7353&2.2786\\ 3.4110&-2.2647&\pm1.9787&\pm0.4155\\ -2&1.7055&\pm0.8311&\pm0.6329\\ \end{matrix} \right)\\ \] \[ \det{J_{33}}=\mp43.7450\]
\[J_{44}= \left( \begin{matrix} -0.9190&1&\pm2.0533&\mp4.2517\\ \mp7.4767&\pm2.1984&-0.3979&-0.6249\\ 1.1761&-3.6021&\pm2.7117&\pm0.8063\\ -2&0.5881&\pm1.6125&\mp1.0266\\ \end{matrix} \right)\\ \] \[ \det{J_{44}}=\pm111.6657\] Therefore, all eight solutions are non-degenerate. \end{exm} \subsection{Conclusion} Bezout's theorem guaranteed a total of $12$ solutions for the case $v\neq0$, but numerical calculations found at most eight of them to be real (and at least two). The last thing to consider is the effect of the addition of higher order terms to the system \eqref{no40} when solving by the implicit function theorem. We will choose one of the previous examples and prove the existence of periodic orbits in that system and the rest can be done in the same way.
We consider the solution point $(t,u,w,v,x,r)=(-4,0,0, 1,1,0)$ as a candidate. We want to apply the implicit function theorem on the system in a neighbourhood of that point. Note that the functions $g^1,g^2$ are given by \begin{align} \label{no43} g^1(N,C,D^2,\tau)&=\displaystyle\frac{\tau}{2}+a_1N+c_1C+d_1D^2+e_1N^2+f_1NC+g_1N\tau+\cdots,\\ \label{no44} g^2(N,C,D^2,\tau)&=b_2\tau+a_2N+c_2C+d_2D^2+e_2N^2+f_2NC+g_2N\tau+\cdots \end{align} In our new coordinates \eqref{no43} and \eqref{no44} will take the form \begin{align} g^1(N,C,D^2,\tau)&=r[\displaystyle\frac{t}{2}+a_1v+c_1u+d_1rw^2+e_1rv^2+f_1rvu+g_1rvt+\cdots],\\ g^2(N,C,D^2,\tau)&=r[b_2t+a_2v+c_2u+d_2rw^2+e_2rv^2+f_2rvu+g_2rvt+\cdots], \end{align} therefore, the matrix formula associated to the implicit function theorem calculated at the point $(-4,0,0, 1,1,0)$ will be \[ \left(
\begin{array}{cccc|c}
0&2&5&2&3e_1-8g_1\\
2&4&6&0&3e_1-8g_1\\
0&-1&2&0&0\\
-2&0&0&2&0\\
\end{array}
\right) =
\left(
\begin{array}{c|c}
X&Y
\end{array}
\right).
\] The matrix $X$ is invertible and by the implicit function theorem we can solve $v,u,w,x$ as functions of $r$. The linear part of the Taylor series of those solutions is determined by the matrix \[ X^{-1}Y=\left( \begin{array}{cccc}
7/5&-9/10&-4/5&-7/5\\
-2/5&2/5&-1/5&2/5\\ -1/5&1/5&2/5&1/5\\ 7/5&-9/10&-4/5&-9/10\\
\end{array}
\right)
\left(
\begin{array}{c}
3e_1-8g_1\\
3e_1-8g_1\\
0\\
0\\
\end{array}
\right).
\] Those solutions can be written as functions of $r$ as follows: \begin{align*} v(r)&=1-\frac{1}{2}(3e_1-8g_1)r+h.o.t.\\ x(r)&=1-\frac{1}{2}(3e_1-8g_1)r+h.o.t.\\ u&=w=0. \end{align*} Converting back to our basic coordinates $N,C,D,\delta$ gives \begin{align*} N&=rv=r-\frac{1}{2}(3e_1-8g_1)r^2+h.o.t.\\ \delta&=rx=r-\frac{1}{2}(3e_1-8g_1)r^2+h.o.t.\\ C&=D=0. \end{align*} This curve of solutions gives a one-parameter family of periodic orbits for the equivariant Hamiltonian system. Similarly, one can prove the existence of a one-parameter family of periodic solutions for each case studied before because of their non-degeneracy conditions.
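The linear part computed above can also be checked symbolically: recalling that the implicit function theorem yields the linear term $-X^{-1}Y\,r$, it suffices to verify that $X^{-1}Y=\big(\tfrac q2,0,0,\tfrac q2\big)^T$ with $q=3e_1-8g_1$. The following sympy sketch (an illustrative aid only) does precisely this.
\begin{verbatim}
import sympy as sp

e1, g1 = sp.symbols('e1 g1')
q = 3*e1 - 8*g1

X = sp.Matrix([[ 0,  2, 5, 2],
               [ 2,  4, 6, 0],
               [ 0, -1, 2, 0],
               [-2,  0, 0, 2]])
Y = sp.Matrix([q, q, 0, 0])

# Entries equal q/2, 0, 0, q/2, matching the coefficients -q/2 of r
# in v(r) and x(r) once the sign of the implicit function theorem is used.
print(sp.simplify(X.inv() * Y))
\end{verbatim}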
Accordingly, we state the following result.
\begin{thm} Consider an equilibrium point $0$ of a $C^\infty $ equivariant Hamiltonian vector field $f$, with the symmetry $S$ acting anti-symplectically and $S^2 =I$. Assume that the linear Hamiltonian vector field $L$ has two pairs of purely imaginary eigenvalues $\pm i $ and no other eigenvalues of the form $\pm ki, k \in \mathbb{Z}$. The reduced Hamiltonian is in the form $h=\delta g^1(N,C,D^2,\tau)+ D g^2(N,C,D^2,\tau).$ Then \begin{enumerate} \item For an open dense set of coefficients $(a_1,a_2,b_2,c_1,c_2)$ there exists a neighbourhood of $0$ with no symmetric periodic orbits and at least two and at most $12$ non-symmetric periodic solutions of the equivariant Hamiltonian system.
\item There exist open sets of coefficients $U_i$ $(i=1,2,3,4)$, such that for coefficients in $U_i$ there are precisely $2i$ non-symmetric periodic orbits of period close to $2\pi$ as they tend to zero. \end{enumerate} \end{thm}
\section{ The combined case $\mathbb{Z}_2^R \times\mathbb{Z}_2^S$} It is natural at this point to ask about periodic orbits of a system possessing both the symmetries studied above. Consider now a reversible equivariant Hamiltonian system under the action of the group $G=\mathbb{Z}_2^R \times\mathbb{Z}_2^S$, where $R$ and $S$ are the involutions defined in Section 5 and Section 6 respectively. In this section we prove the existence of families of periodic solutions in a neighbourhood of the origin in that system.
On $\mathbb{C}^2$ the reduced Hamiltonian is a special case of the Hamiltonian in Section 5 and it takes the form \[h(z_1,z_2,\tau)=\delta g(N,C,D^2,\tau).\] Accordingly, the bifurcation equation will be \begin{equation*} \delta[g+N g_N+Cg_C+2D^2g_{D^2}]=0, \end{equation*}
\begin{equation*} Ng+[N^2-C^2-D^2]g_N=0, \end{equation*}
\begin{equation} \label{no46} \delta D[g_C-2Cg_{D^2}]=0. \end{equation}
Now we classify the solutions according to their symmetry type.
\subsection{Periodic orbits in the conical subspace \texorpdfstring{$\boldsymbol{\delta=0}$}{delta=0}} Substituting $\delta =0$ in the system of equations \eqref{no46} yields \begin{equation} \label{no47} Ng=0. \end{equation} For $R$ symmetric solutions one needs to solve \eqref{no47} in $\mathrm{Fix}R$. This implies \[g(z,\tau)=0,\] which can be solved for $\tau=\tau(z), z\in \mathrm{Fix}R$ by the implicit function theorem. This means that any periodic orbit in the subspace $\delta=0$ has symmetry $R$. Solving equation \eqref{no47} for $z\in \mathrm{Fix}S$ gives one periodic orbit of symmetry $S$, which is therefore also $SR$ symmetric. Moreover, solving equation \eqref{no47} for $z\in \mathrm{Fix}(S,\pi)$ gives another orbit with symmetry $S$.
\subsection{Periodic orbits in \texorpdfstring{$\boldsymbol{\delta\neq0}$}{delta<>0}} It remains to study the existence of $SR$ solutions which lie in the subset $\delta\neq0$. Clearly $\mathrm{Fix}RS=\{(z_1,z_2)\mid z_1,z_2\in \mathbb{R}\}$ which implies $D=0$ and therefore the system \eqref{no46} takes the form \begin{equation*} g+N g_N+Cg_C=0, \end{equation*} \begin{equation} \label{no49} N g+[N^2-C^2]g_N=0. \end{equation} Eliminating $g$ from both equations gives \begin{equation} \label{no48} C(Ng_C+Cg_N)=0. \end{equation} If $C=0$, then by the fact that $\frac{\partial g}{\partial\tau}(0)=\frac{1}{2}$ we can solve using the implicit function theorem. Now if $C\neq 0$ and $g_N(0)=n$, $g_C(0)=c$ are not both zero, then the system \eqref{no49} can be solved by the implicit function theorem. By the argument used in Theorem 5.2 we conclude that $SR$ periodic solutions exist when $n^2-c^2>0$. The following theorem describes the families of periodic solutions that exist in this system.
\begin{thm} Consider a symmetric equilibrium $0$ of a $\mathbb{Z}_2^R\times \mathbb{Z}_2^S$ reversible equivariant Hamiltonian vector field $f$ where $R$ is a reversing involution acting symplectically and $S$ is an involution acting anti-symplectically. Suppose that $Df(0)$ has two purely imaginary pairs of eigenvalues $\pm i$ with no other eigenvalues of the form $\pm ki,k \in \mathbb{Z}$. Also, denote $g_N(0)=n$ and $g_C(0)=c$. Then, \begin{enumerate}
\item there exists a two-parameter family of $R$ symmetric periodic solutions in the conical subspace $\delta =0$ with two of them having extra symmetry $S$. The period of all orbits tends to $2\pi$ as they approach the equilibrium.
\item there exist two Liapunov centre families of $SR$ symmetric periodic solutions in the open subset $\delta\neq0$ provided that $n^2-c^2>0$---one with $\delta>0$ and one with $\delta<0$. These two families are exchanged by both involutions $R$ and $S$. \end{enumerate} \end{thm}
\begin{figure}
\caption{Fixed point spaces on the torus $(\theta_1,\theta_2)$.}
\label{fig:no34}
\end{figure}
We conclude by illustrating geometrically the relation between the fixed point spaces of the involutions $R,S,SR$ and $(S,\pi)$. Buzzi and Lamb \cite{b1} show that the intersection between the cone $\delta=0$ and the unit sphere in $\mathbb{C}^2$ is a torus $T$ parametrized by two angles $(\theta_1,\theta_2)$ and draw $\mathrm{Fix}R$ on $T$. In addition, we show that the intersection between $\mathrm{Fix}S$ and the torus $T$ is given by the line $\theta_2=-\theta_1$. Also, we plot $\mathrm{Fix}(S,\pi)=\{(\theta_1,\theta_2)=(\theta_1,\pi-\theta_1)\}$ on the torus. Finally, intersecting $\mathrm{Fix}SR$ with $T$ gives a total of two points $(0,0)$ and $(\pi,\pi)$ (shown as large dots in the figure).
\end{document} | arXiv |
\begin{definition}[Definition:Ostensive Definition]
An '''ostensive definition''' is a definition which ''shows'' what a symbol is, rather than using words to ''explain'' what it is or what it does.
As an example of an '''ostensive definition''', we offer up:
:The symbol used for a stipulative definition is $:=$, as in:
::$\text {(the symbol being defined)} := \text {(the meaning of that symbol)}$
\end{definition} | ProofWiki |
\begin{document}
\title{A note on an orthotropic plate model describing the deck of a bridge}
\author{Alberto Ferrero}
\address{\hbox{\parbox{5.7in}{
\noindent{Alberto Ferrero, \\ Universit\`a del Piemonte Orientale, \\
Dipartimento di Scienze e Innovazione Tecnologica, \\
Viale Teresa Michel 11, 15121 Alessandria, Italy. \\[5pt]
\em{E-mail address: }{\tt [email protected]}}}}}
\date{\today}
\maketitle
\begin{abstract} The purpose of this work is to develop a model for a rectangular plate made of an orthotropic material. Compared with the classical model of the isotropic plate, the relaxed condition of orthotropy increases the degrees of freedom as a consequence of the larger number of elastic parameters, thus allowing a better description of rectangular plates having different behaviors in the two directions parallel to the edges of the rectangle.
We have in mind structures like decks of bridges, where the rigidity in the direction of the length does not necessarily coincide with the one in the direction of the width.
We introduce some basic notions from the theory of linear elasticity, paying special attention to the theory of orthotropic materials. In particular, we recall Hooke's law in its general setting and we explain how it can be simplified under the orthotropy assumption.
Following the approach of the Kirchhoff-Love model, we obtain the bending energy of an orthotropic plate and from it the corresponding equilibrium equation when the plate is subject to the action of a vertical load. Accordingly, we write the kinetic energy of the plate which, combined with the bending energy, gives the complete Lagrangian; classical variational methods then produce the equation of motion.
\end{abstract}
\noindent {\bf Keywords:} Linear elasticity, orthotropic materials, elastic plates.
\noindent {\bf 2020 Mathematics Subject Classification:} 35A15, 35G15, 35L25, 74B05, 74K10, 74K20.
\section{Introduction} \label{s:introduction}
Let us consider a rectangular plate of length $L$, width $2\ell$ and thickness $d$. Denoting by $E$ and $\nu$ the Young modulus and Poisson ratio respectively, and by $u=u(x,y)$ the vertical displacement of the middle surface of the plate, the resulting elastic bending energy reads \begin{equation}\label{eq:plate-isotropic}
\mathbb E_B(u)=\frac{Ed^3}{24(1-\nu^2)} \int_{(0,L)\times (-\ell,\ell)}\left(\nu |\Delta u|^2+(1-\nu)|D^2 u|^2\right)dxdy \end{equation}
where $D^2 u$ denotes the Hessian matrix of $u$ and $|D^2 u|=\left(u_{xx}^2+2u_{xy}^2+u_{yy}^2\right)^{1/2}$ is its Euclidean norm, obtained by interpreting the matrix as a vector of four components. For more details on the plate model see \cite{BeBuoGa, BeFeGa, FeGa, LaLi}. This is known as the Kirchhoff-Love model for a plate of an isotropic material, see \cite{Kirchhoff, Love}.
Denoting by $f$ a vertical load per unit of surface, the corresponding Euler-Lagrange equation becomes \begin{equation} \label{eq:plate-iso-EuLa} \frac{Ed^3}{12(1-\nu^2)} \, \Delta^2 u=f \qquad \text{in } \Omega=(0,L)\times (-\ell,\ell) \, . \end{equation} The constant $\frac{Ed^3}{12(1-\nu^2)}$ in front of the biharmonic operator $\Delta^2$ represents the rigidity of the plate.
For more details on recent literature about rectangular plates and applications in models for decks of bridges, we quote \cite{AnGa, BeBuGaZu, BeFaFeGa, BoGaMo, ChGaGa, Suspension, Gazzola-Libro} and the references therein.
As explained in more detail in \cite{Suspension}, an isotropic plate may not be optimal for describing the deck of a bridge, since the rigidity constant $\frac{Ed^3}{12(1-\nu^2)}$ simultaneously determines the rigidity in both the length and the width directions.
For this reason, it appears more reasonable to describe the deck of a bridge by combining the equation of beams for vertical displacements with the equation of rods for torsion, see \cite{ArLaVaMa, AG15, AG17}.
However, a possible alternative approach consists in interpreting the deck as an orthotropic plate. This is the approach we follow in the present article, and it was already considered in \cite{Suspension}.
We point out that the present paper can be seen as a sort of auxiliary article to \cite{Suspension}, whose purpose is to clarify in more detail some aspects that were treated there and to collect some basic notions well known in the theory of elasticity.
As anticipated at the beginning of this introduction, a thin plate can be interpreted as a three-dimensional solid body which in a suitable coordinate system can be represented by the open set $(0,L)\times (-\ell,\ell)\times \left(-\frac d2, \frac d2\right)$.
Denoting by $E_1>E_2=E_3$ the \textit{Young moduli} in the $x$, $y$, $z$ directions respectively, and $\nu_{12}$ and $\nu_{21}$ the \textit{Poisson ratios} relative to the $x$ and $y$ directions, see Section \ref{ss:Ortho-Mat} for the correct definitions, we will show that the equilibrium equation for the orthotropic plate subject to a vertical load $f$ is given by \begin{equation} \label{eq:familiar}
\frac{E_1 d^3}{12(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial x^4}
+\frac{E_2 d^3}{6(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial x^2\partial y^2}
+\frac{E_2 d^3}{12(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial y^4}=f \, , \end{equation} see also Remark \ref{r:1}.
Let us denote by $\mathcal A$ the linear fourth order operator appearing in the left hand side of \eqref{eq:familiar}.
Section \ref{s:orthotropic} is devoted to a survey of some well-known notions about elastic anisotropic materials, including the anisotropic Hooke's law and the related {\it stiffness matrix}. We explain what the stiffness matrix looks like in the case of an orthotropic material, we show how the elastic energy per unit of volume can be expressed in terms of the components of the strain tensor, and we describe the meaning of the elastic coefficients involved in the orthotropic Hooke's law, still named Young moduli (one for each coordinate axis) and Poisson ratios (one for each combination of two coordinate axes); other coefficients, completely independent of the previous ones, are the so-called {\it shear moduli}. We complete that section with a description of an orthotropic material with a two-dimensional symmetry (with respect to planes parallel to the $yz$ plane) and a one-dimensional reinforcement in the orthogonal direction (the $x$ axis).
Having in mind the explanations given in Section \ref{s:orthotropic} about orthotropic materials, in Section \ref{s:orthotropic-plate} we construct a model of orthotropic plate with a one-dimensional reinforcement giving rise to the equation of the orthotropic plate. Since our purpose is to describe the deck of a bridge, the rectangular plate is supposed to be hinged at the two shorter edges which means that the equation has to be coupled with homogeneous Navier boundary conditions on these two edges and free boundary conditions on the other two edges. We refer to these boundary conditions as $(BC)$.
In Theorem \ref{t:Lax-Milgram} we show existence and uniqueness of weak solutions for the equation $\mathcal A u=f$ coupled with $(BC)$; in the same statement we also prove a regularity result based on elliptic regularity estimates by \cite{adn}.
The last part of the paper is devoted to the evolution problem for the plate which can be obtained with classical variational methods starting from the Lagrangian $\mathcal L$ given by a combination of the kinetic energy and the bending energy. Denoting by $\mathbb E_B(u)$ the bending energy corresponding to the configuration determined by the displacement function $u$ and by $M$ the linear mass density of the plate ($M$ is given by the total mass of the plate divided by its length $L$), we may write \begin{equation*}
\mathcal L(u):=\frac 12 \int_\Omega \frac{M}{2\ell} \left(\frac{\partial u}{\partial t}\right)^2 \, dxdy-\mathbb E_B(u) \, . \end{equation*}
Then from $\mathcal L$ we obtain the following equation of motion \begin{equation*}
\frac{M}{2\ell} \frac{\partial^2 u}{\partial t^2}+\mathcal Au=0 \end{equation*} coupled with the boundary conditions $(BC)$.
Special solutions of the equation of motion are the so-called stationary wave solutions. We recall how these solutions can be constructed starting from the eigenfunctions of $\mathcal A$ coupled with $(BC)$. Classical spectral theory implies that this eigenvalue problem admits a discrete spectrum whose eigenvalues may be ordered in an increasing sequence diverging to $+\infty$. If $\lambda$ is an eigenvalue of $\mathcal A$ with corresponding eigenfunction $U_\lambda=U_\lambda(x,y)$, a stationary wave solution admits the following representation $$
u(x,y,t)=\sin(\omega_\lambda t) U_\lambda(x,y) $$ where $\omega_\lambda$ is the ``angular velocity'' whose value is uniquely determined by $\lambda$ as we can see from Section \ref{s:vibrations}; then the frequencies of free vibration $\nu_\lambda$ are immediately obtained by writing $\nu_\lambda=\omega_\lambda/(2\pi)$.
The paper is organized as follows: Section \ref{s:orthotropic} is devoted to a general discussion on the basic notions about orthotropic materials, Section \ref{s:orthotropic-plate} is devoted to the construction of the model of orthotropic plate and to existence, uniqueness and regularity of solutions of the equilibrium equation under the action of a vertical load, and Section \ref{s:vibrations} is devoted to the evolution problem, the eigenvalue problem, the construction of the stationary wave solutions and the corresponding frequencies of free vibration. In the final part of Section \ref{s:vibrations} we also suggest a possible model for a complete suspension bridge consisting of a system of three coupled equations, one for the deck and one for each of the two cables.
\section{Orthotropic materials} \label{s:orthotropic}
\subsection{Basic notions on the theory of anisotropic materials} We recall some notations from the theory of linear elasticity. We denote by ${\bm \sigma}=(\sigma_{ij})$ the stress tensor and by ${\bf e}=(e_{ij})$ the strain tensor where $i,j\in \{1,2,3\}$ for both tensors. We clarify that by strain tensor ${\bf e}$ we actually mean here the ``linearized strain tensor'', i.e. \begin{equation} \label{eq:strain-tensor} e_{ij}=\frac 12 \left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right) \, , \qquad i,j\in \{1,\dots,3\} \, . \end{equation} Here ${\bf u}=(u_1,u_2,u_3)$ is the vector field which defines the displacement at every point of the elastic body.
We recall that both ${\bm \sigma}$ and ${\bf e}$ are symmetric tensors, i.e. $\sigma_{ij}=\sigma_{ji}$ and $e_{ij}=e_{ji}$ for any $i,j\in \{1,2,3\}$. See the book by Landau \& Lifshitz \cite{LaLi} for more details on these basic notions.
In the theory of linear elasticity, the tension of a material as a consequence of a deformation is proportional to the deformation itself and, conversely, the deformation of a material is proportional to the forces acting on it. This principle is summarized by the generalized Hooke's law, which states the existence of $81$ coefficients $(C_{ijkl})$ with $i,j,k,l\in \{1,2,3\}$ such that \begin{equation} \label{eq:Hook-0}
\sigma_{ij}=\sum_{k,l=1}^{3} C_{ijkl}\, e_{kl} \qquad \text{for any } i,j\in \{1,2,3\} \, . \end{equation} The elastic energy per unit of volume, as a function of the components of the strain tensor, can be implicitly characterized by \begin{equation} \label{eq:implcit} \frac{\partial \mathcal E}{\partial e_{ij}}=\sigma_{ij} \, , \ \ \text{for any } i,j\in \{1,2,,3\} \, , \quad \mathcal E=0
\ \ \ \text{when the material is undeformed} \, , \end{equation} see \cite[Chapter 1, Paragraph 2]{LaLi} for more explanations on this question.
Then, by \eqref{eq:Hook-0}, we have that \begin{equation} \label{eq:second-der}
\frac{\partial^2 \mathcal E}{\partial e_{ij}\partial e_{kl}}=C_{ijkl} \qquad \text{for any } i,j,k,l\in \{1,2,3\} \end{equation} thus showing the following symmetry property \begin{equation} \label{eq:symmetry-1}
C_{ijkl}=C_{klij} \qquad \text{for any } i,j,k,l\in \{1,2,3\} \, . \end{equation} Combining \eqref{eq:implcit}, \eqref{eq:second-der} and \eqref{eq:symmetry-1} we obtain the explicit representation of the elastic energy per unit of volume: \begin{equation} \label{eq:explicit}
\mathcal E=\frac 12 \sum_{i,j,k,l=1}^{3} C_{ijkl} \, e_{ij} e_{kl} \, . \end{equation}
Since the tensors ${\bm \sigma}$ and ${\bf e}$ are symmetric, each of them is actually completely determined by only six components, and the related Hooke's law admits the following matrix representation {\small \begin{equation*}
\begin{pmatrix}
\sigma_{11} \\
\sigma_{22} \\
\sigma_{33} \\
\sigma_{12} \\
\sigma_{13} \\
\sigma_{23}
\end{pmatrix}
=
\begin{pmatrix}
C_{1111} & C_{1122} & C_{1133} & C_{1112} & C_{1113} & C_{1123} \\
C_{2211} & C_{2222} & C_{2233} & C_{2212} & C_{2213} & C_{2223} \\
C_{3311} & C_{3322} & C_{3333} & C_{3312} & C_{3313} & C_{3323} \\
C_{1211} & C_{1222} & C_{1233} & C_{1212} & C_{1213} & C_{1223} \\
C_{1311} & C_{1322} & C_{1333} & C_{1312} & C_{1313} & C_{1323} \\
C_{2311} & C_{2322} & C_{2333} & C_{2312} & C_{2313} & C_{2323}
\end{pmatrix}
\begin{pmatrix}
e_{11} \\
e_{22} \\
e_{33} \\
e_{12} \\
e_{13} \\
e_{23}
\end{pmatrix} \, . \end{equation*} } We denote by $C$ the $6\times 6$ matrix appearing above and we refer to it as the stiffness matrix.
\subsection{Orthotropic materials and their stiffness matrix} \label{a:1} A material is said to be orthotropic if the corresponding stiffness matrix remains invariant under reflections with respect to three mutually orthogonal planes. In other words, there exist three mutually orthogonal symmetry planes such that the stiffness matrix remains invariant under the corresponding reflections.
We collect in this section a number of well-known facts coming from the theory of orthotropic materials; such facts are presented here in detail for the reader's convenience.
Let us consider two orthonormal coordinate systems $x_1x_2x_3$ and $x_1'x_2'x_3'$, one related to the other by transformations of the form ${\bf x'}=A {\bf x}$ including rotations and reflections where $A=(A_{ij})$ is a $3\times 3$ orthogonal matrix and where we put ${\bf x}=(x_1,x_2,x_3)$ and ${\bf x'}=(x_1',x_2',x_3')$.
If we have a symmetric tensor which is represented by a matrix $X$ in the coordinate system $x_1x_2x_3$ and we want to determine the corresponding matrix $X'$ in the new coordinate system $x_1'x_2'x_3'$, then one finds that $X'=AXA^T$.
Let us introduce the linear map $\mathcal L_A:{\rm Sym_3}\to {\rm Sym_3}$ defined by $\mathcal L_A(X)=AXA^T$ for any $X\in {\rm Sym_3}$, where ${\rm Sym_3}$ is the vector space of all $3\times 3$ symmetric matrices. As a basis of ${\rm Sym_3}$ we choose the set of six matrices $\{X_1,\dots,X_6\}$ defined by {\small \begin{align} \label{eq:base} \left\{ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \, , \ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \, , \ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \, , \ \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \, , \ \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \, , \ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \right\} \, . \end{align} } In this way, any matrix $X\in {\rm Sym_3}$ may be represented as $X=\sum_{i=1}^{6} \alpha_i X_i$. In the basis $\{X_1,\dots,X_6\}$ the linear operator $\mathcal L_A$ may be represented by means of a $6\times 6$ matrix in the following way {\small \begin{equation} \label{eq:big-matrix} \begin{pmatrix}
\alpha_1' \\
\alpha_2' \\
\alpha_3' \\
\alpha_4' \\
\alpha_5' \\
\alpha_6' \end{pmatrix} = \begin{pmatrix}
A_{11}^2 & A_{12}^2 & A_{13}^2 & 2A_{11}A_{12} & 2A_{11}A_{13} & 2A_{12}A_{13} \\
A_{21}^2 & A_{22}^2 & A_{23}^2 & 2A_{21}A_{22} & 2A_{21}A_{23} & 2A_{22}A_{23} \\
A_{31}^2 & A_{32}^2 & A_{33}^2 & 2A_{31}A_{32} & 2A_{31}A_{33} & 2A_{32}A_{33} \\
A_{11}A_{21} & A_{12}A_{22} & A_{13}A_{23} & A_{11}A_{22}+A_{12}A_{21} & A_{11}A_{23}+A_{13}A_{21} & A_{12}A_{23}+A_{13}A_{22} \\
A_{11}A_{31} & A_{12}A_{32} & A_{13}A_{33} & A_{11}A_{32}+A_{12}A_{31} & A_{11}A_{33}+A_{13}A_{31} & A_{12}A_{33}+A_{13}A_{32} \\
A_{21}A_{31} & A_{22}A_{32} & A_{23}A_{33} & A_{21}A_{32}+A_{22}A_{31} & A_{21}A_{33}+A_{23}A_{31} & A_{22}A_{33}+A_{23}A_{32} \end{pmatrix} \begin{pmatrix}
\alpha_1 \\
\alpha_2 \\
\alpha_3 \\
\alpha_4 \\
\alpha_5 \\
\alpha_6 \end{pmatrix} \end{equation} } where we put $\mathcal L_A(X)=\sum_{i=1}^{6} \alpha_i' X_i$.
We denote by ${\mathbb A}$ the $6\times 6$ matrix defined in \eqref{eq:big-matrix} corresponding to the matrix transformation $A$.
Suppose now that ${\bf e}$ and ${\bf \sigma}$ are the strain and stress tensors of some linear elastic material. Once we fix an orthonormal coordinate system $x_1x_2x_3$, they can both be represented by $3\times 3$ matrices. We recall that ${\bf e}$ and ${\bf \sigma}$ are symmetric tensors, so that they can both be represented as linear combinations of the matrices $X_1,\dots,X_6$.
If we perform a coordinate transformation from a system $x_1x_2x_3$ to a new orthonormal system $x_1'x_2'x_3'$ with corresponding transformation $A$, then denoting by \begin{align*} & \widetilde{\bm\sigma}= \begin{pmatrix} \sigma_{11} & \sigma_{22} & \sigma_{33} & \sigma_{12} & \sigma_{13} & \sigma_{23} \end{pmatrix}^T \, , \qquad \widetilde{\bm e}= \begin{pmatrix} e_{11} & e_{22} & e_{33} & e_{12} & e_{13} & e_{23} \end{pmatrix}^T \, , \end{align*} the vector columns of components of the stress and strain tensors in the $x_1x_2x_3$ coordinate system and by \begin{align*} & \widetilde{\bm\sigma}'= \begin{pmatrix} \sigma_{11}' & \sigma_{22}' & \sigma_{33}' & \sigma_{12}' & \sigma_{13}' & \sigma_{23}' \end{pmatrix}^T \, , \qquad \widetilde{\bm e}'= \begin{pmatrix} e_{11}' & e_{22}' & e_{33}' & e_{12}' & e_{13}' & e_{23}' \end{pmatrix}^T \, , \end{align*} the vector columns of components of the stress and strain tensors in the $x_1'x_2'x_3'$ coordinate system, then by \eqref{eq:big-matrix} we obtain \begin{equation*}
\widetilde{\bm\sigma}'=\mathbb A \widetilde{\bm\sigma} \qquad \text{and} \qquad
\widetilde{\bm e}'=\mathbb A \widetilde{\bm e} \, . \end{equation*} Therefore, denoting now by $C$ and $C'$ the stiffness matrices of the material in the two coordinate systems, we infer \begin{equation*}
C'=\mathbb A C \mathbb A^{-1} \, . \end{equation*}
Recalling the definition of orthotropy given at the beginning of this section, we introduce the reflections with respect to the three coordinate planes whose matrices are given by {\small \begin{equation*}
A_1=
\begin{pmatrix}
-1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix} \, ,
\qquad
A_2=
\begin{pmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 1
\end{pmatrix} \, ,
\qquad
A_3=
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 &
-1
\end{pmatrix} \, . \end{equation*} } We denote by $\mathbb A_1, \mathbb A_2, \mathbb A_3$ the $6\times 6$ matrices corresponding to $A_1, A_2, A_3$ respectively, according with \eqref{eq:big-matrix}. We observe that $\mathbb A_1, \mathbb A_2, \mathbb A_3$ are three diagonal matrices with a diagonal containing four times the number $1$ and twice the number $-1$. We omit for simplicity the explicit representation of $\mathbb A_1, \mathbb A_2, \mathbb A_3$ and we proceed directly to the computation of the following matrices
{\small \begin{equation*} \mathbb A_1 C \mathbb A_1^{-1} = \begin{pmatrix}
C_{1111} & C_{1122} & C_{1133} & -C_{1112} & -C_{1113} & C_{1123} \\
C_{2211} & C_{2222} & C_{2233} & -C_{2212} & -C_{2213} & C_{2223} \\
C_{3311} & C_{3322} & C_{3333} & -C_{3312} & -C_{3313} & C_{3323} \\
-C_{1211} & -C_{1222} & -C_{1233} & C_{1212} & C_{1213} & -C_{1223} \\
-C_{1311} & -C_{1322} & -C_{1333} & C_{1312} & C_{1313} & -C_{1323} \\
C_{2311} & C_{2322} & C_{2333} & -C_{2312} & -C_{2313} & C_{2323}
\end{pmatrix} \, , \end{equation*} } {\small \begin{equation*} \mathbb A_2 C \mathbb A_2^{-1} = \begin{pmatrix}
C_{1111} & C_{1122} & C_{1133} & -C_{1112} & C_{1113} & -C_{1123} \\
C_{2211} & C_{2222} & C_{2233} & -C_{2212} & C_{2213} & -C_{2223} \\
C_{3311} & C_{3322} & C_{3333} & -C_{3312} & C_{3313} & -C_{3323} \\
-C_{1211} & -C_{1222} & -C_{1233} & C_{1212} & -C_{1213} & C_{1223} \\
C_{1311} & C_{1322} & C_{1333} & -C_{1312} & C_{1313} & -C_{1323} \\
-C_{2311} & -C_{2322} & -C_{2333} & C_{2312} & -C_{2313} & C_{2323}
\end{pmatrix} \, , \end{equation*} } {\small \begin{equation*} \mathbb A_3 C \mathbb A_3^{-1} = \begin{pmatrix}
C_{1111} & C_{1122} & C_{1133} & C_{1112} & -C_{1113} & -C_{1123} \\
C_{2211} & C_{2222} & C_{2233} & C_{2212} & -C_{2213} & -C_{2223} \\
C_{3311} & C_{3322} & C_{3333} & C_{3312} & -C_{3313} & -C_{3323} \\
C_{1211} & C_{1222} & C_{1233} & C_{1212} & -C_{1213} & -C_{1223} \\
-C_{1311} & -C_{1322} & -C_{1333} & -C_{1312} & C_{1313} & C_{1323} \\
-C_{2311} & -C_{2322} & -C_{2333} & -C_{2312} & C_{2313} & C_{2323}
\end{pmatrix} \, . \end{equation*} }
The orthotropy condition implies that the matrix $C$ coincides simultaneously with all the three matrices $\mathbb A_1 C \mathbb A_1^{-1}$, $\mathbb A_2 C \mathbb A_2^{-1}$ and $\mathbb A_3 C \mathbb A_3^{-1}$ thus showing that the stiffness matrix $C$ is in the form {\small \begin{equation} \label{eq:stiff-ortho}
\begin{pmatrix}
C_{1111} & C_{1122} & C_{1133} & 0 & 0 & 0 \\
C_{2211} & C_{2222} & C_{2233} & 0 & 0 & 0 \\
C_{3311} & C_{3322} & C_{3333} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{1212} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{1313} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{2323}
\end{pmatrix} \, . \end{equation} }
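The matrices $\mathbb A_1,\mathbb A_2,\mathbb A_3$ can also be generated directly from \eqref{eq:big-matrix}; as a purely illustrative aid (the function name below is ours and is not part of the original text), the following sympy sketch reproduces $\mathbb A_1$ and confirms that it is the diagonal matrix ${\rm diag}(1,1,1,-1,-1,1)$, in agreement with the observation made above.
\begin{verbatim}
import sympy as sp

def induced_matrix(A):
    # 6x6 matrix of eq. (eq:big-matrix) associated with a 3x3 transformation A.
    pairs = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
    rows = []
    for (i, j) in pairs:
        rows.append([A[i, 0]*A[j, 0], A[i, 1]*A[j, 1], A[i, 2]*A[j, 2],
                     A[i, 0]*A[j, 1] + A[i, 1]*A[j, 0],
                     A[i, 0]*A[j, 2] + A[i, 2]*A[j, 0],
                     A[i, 1]*A[j, 2] + A[i, 2]*A[j, 1]])
    return sp.Matrix(rows)

A1 = sp.diag(-1, 1, 1)
print(induced_matrix(A1))   # equals diag(1, 1, 1, -1, -1, 1)
\end{verbatim}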
A further property of the stiffness matrix $C$, which holds under the additional assumption that the material is invariant under rotations about the $x_1$-axis (this is the case for the materials with a one-dimensional reinforcement considered in Section \ref{ss:p-iso}), is the following:
\begin{equation} \label{eq:C2323}
C_{2323}=\frac{C_{2222}-C_{2233}-C_{3322}+C_{3333}}{2} \, . \end{equation}
In order to prove \eqref{eq:C2323}, let us consider a transformation matrix $A$ in the form {\small \begin{equation*} A=R_\theta:= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \, , \qquad \theta\in (-2\pi,2\pi) \, . \end{equation*} } The corresponding matrix $\mathbb A$ introduced in \eqref{eq:big-matrix} becomes \begin{equation} \label{eq:op-3} \mathbb A=T_\theta:=\begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & \cos^2 \theta & \sin^2\theta & 0 & 0 & -\sin(2\theta) \\ 0 & \sin^2\theta & \cos^2\theta & 0 & 0 & \sin(2\theta) \\ 0 & 0 & 0 & \cos\theta & -\sin\theta & 0 \\ 0 & 0 & 0 & \sin\theta & \cos\theta & 0 \\ 0 & \sin\theta \cos \theta & -\sin\theta\cos\theta & 0 & 0 & \cos(2\theta) \end{pmatrix} \, . \end{equation} As explained above, the invariance with respect to the transformation $R_\theta$ implies that \begin{equation} \label{eq:invariance-2323} C=\mathbb AC\mathbb A^{-1}=T_\theta C T_\theta^{-1} \end{equation} where $C$ is the stiffening matrix. It can be easily deduced that the inverse matrix of $T_\theta$ is the matrix $T_{-\theta}$.
We are now interested in exploiting the identity \eqref{eq:invariance-2323} for the component placed at the sixth row and sixth column, since this yields the proof of \eqref{eq:C2323}. Indeed, combining \eqref{eq:stiff-ortho}, \eqref{eq:op-3} and \eqref{eq:invariance-2323} we obtain \begin{equation*} C_{2323}=(T_\theta C T_\theta^{-1})_{66}=\frac{C_{2222}-C_{2233}-C_{3322}+C_{3333}}{2}\, \sin^2(2\theta)+C_{2323} \cos^2(2\theta) \end{equation*} and hence \begin{equation*}
C_{2323} [1-\cos^2(2\theta)]=\frac{C_{2222}-C_{2233}-C_{3322}+C_{3333}}{2}\, \sin^2(2\theta) \, . \end{equation*} Choosing $\theta\in (0,2\pi)$, $\theta \neq \frac{\pi}{2}, \pi, \frac{3\pi}{2}$ and simplifying, this gives \eqref{eq:C2323}.
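The computation of the $(6,6)$ entry performed above can be reproduced symbolically; the following sympy sketch (illustrative only) uses the fact that $T_\theta^{-1}=T_{-\theta}$ and returns zero, thus confirming the identity employed in the proof. Note that, with a symmetric $C$, the combination $C_{2222}-C_{2233}-C_{3322}+C_{3333}$ reduces to $C_{2222}-2C_{2233}+C_{3333}$.
\begin{verbatim}
import sympy as sp

th = sp.symbols('theta')
C1111, C1122, C1133, C2222, C2233, C3333, C1212, C1313, C2323 = sp.symbols(
    'C1111 C1122 C1133 C2222 C2233 C3333 C1212 C1313 C2323')

# Orthotropic stiffness matrix in the form (eq:stiff-ortho), with C symmetric.
C = sp.Matrix([[C1111, C1122, C1133, 0, 0, 0],
               [C1122, C2222, C2233, 0, 0, 0],
               [C1133, C2233, C3333, 0, 0, 0],
               [0, 0, 0, C1212, 0, 0],
               [0, 0, 0, 0, C1313, 0],
               [0, 0, 0, 0, 0, C2323]])

c, s = sp.cos(th), sp.sin(th)
T = sp.Matrix([[1, 0, 0, 0, 0, 0],
               [0, c**2, s**2, 0, 0, -sp.sin(2*th)],
               [0, s**2, c**2, 0, 0,  sp.sin(2*th)],
               [0, 0, 0, c, -s, 0],
               [0, 0, 0, s,  c, 0],
               [0, s*c, -s*c, 0, 0, sp.cos(2*th)]])

entry = (T * C * T.subs(th, -th))[5, 5]
expected = (C2222 - 2*C2233 + C3333)/2*sp.sin(2*th)**2 + C2323*sp.cos(2*th)**2
print(sp.simplify(sp.expand_trig(entry - expected)))   # expected output: 0
\end{verbatim}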
We observe that, differently from $C_{2323}$, the coefficient $C_{1212}$ is not related to the other entries of $C$: proceeding again with the transformation $R_\theta$, the entries of $T_\theta C T_\theta^{-1}$ placed at the fourth and fifth diagonal positions are $C_{1212}\cos^2\theta+C_{1313}\sin^2\theta$ and $C_{1212}\sin^2\theta+C_{1313}\cos^2\theta$ respectively, so that the invariance only forces $C_{1313}=C_{1212}$ and produces no restriction linking $C_{1212}$ and $C_{1313}$ to the remaining coefficients.
\subsection{The elastic energy and the inverse Hooke's law} \label{ss:Ortho-Mat}
As shown in Section \ref{a:1}, the Hooke's law for an orthotropic material reads {\small \begin{equation} \label{eq:Hook-1} \begin{pmatrix} \sigma_{11} \\ \sigma_{22} \\ \sigma_{33} \\ \sigma_{12} \\ \sigma_{13} \\ \sigma_{23} \\ \end{pmatrix} = \begin{pmatrix} C_{1111} & C_{1122} & C_{1133} & 0 & 0 & 0 \\ C_{1122} & C_{2222} & C_{2233} & 0 & 0 & 0 \\ C_{1133} & C_{2233} & C_{3333} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{1212} & 0 & 0 \\ 0 & 0 & 0 & 0 & C_{1313} & 0 \\ 0 & 0 & 0 & 0 & 0 & C_{2323} \end{pmatrix} \begin{pmatrix} e_{11} \\ e_{22} \\ e_{33} \\ e_{12} \\ e_{13} \\ e_{23} \\ \end{pmatrix} \, . \end{equation} }
In order to apply \eqref{eq:explicit}, we collect the coefficients $C_{ijkl}$ introduced in \eqref{eq:Hook-0} in a $9\times 9$ matrix that we denote by ${\bf C}_{9\times 9}$. Invoking \eqref{eq:Hook-1}, we see that in the special case of an orthotropic material, the matrix ${\bf C}_{9\times 9}$ admits the following representation in terms of $3\times 3$ blocks \begin{equation} \label{eq:blocks}
\left(
\begin{tabular}{c|c|c}
${\bf B_1}$ & $\bf 0$ & $\bf 0$ \\
\hline
$\bf 0$ & ${\bf B_2}$ & $\bf 0$ \\
\hline
$\bf 0$ & $\bf 0$ & ${\bf B_3}$ \\
\end{tabular}
\right) \end{equation} where {\small \begin{equation*}
{\bf B_1}=\begin{pmatrix} C_{1111} & C_{1122} & C_{1133} \\ C_{1122} & C_{2222} & C_{2233} \\ C_{1133} & C_{2233} & C_{3333} \end{pmatrix} \, , \quad
{\bf B_2}=\begin{pmatrix} C_{1212} & 0 & 0 \\ 0 & C_{1313} & 0 \\ 0 & 0 & C_{2323} \end{pmatrix} \, , \quad
{\bf B_3}=\begin{pmatrix} C_{2121} & 0 & 0 \\ 0 & C_{3131} & 0 \\ 0 & 0 & C_{3232} \end{pmatrix} \end{equation*} and $\bf 0$ denotes the $3\times 3$ null matrix. From the symmetry of ${\bm \sigma}$ and ${\bf e}$ we deduce that \begin{equation} \label{eq:symmetry-2}
C_{1212}=C_{2121} \, , \qquad C_{1313}=C_{3131} \, , \qquad C_{2323}=C_{3232} \, , \end{equation} } and in particular ${\bf B_2}={\bf B_3}$.
If we replace the usual representations of ${\bm \sigma}$ and ${\bf e}$ as $3\times 3$ matrices with the following ones as vector columns of $9$ components \begin{align*} & {\bm \sigma}= \begin{pmatrix} \sigma_{11} & \sigma_{22} & \sigma_{33} & \sigma_{12} & \sigma_{13} & \sigma_{23} & \sigma_{21} & \sigma_{31} & \sigma_{32} \end{pmatrix}^T \, , \\ & {\bf e} = \begin{pmatrix} e_{11} & e_{22} & e_{33} & e_{12} & e_{13} & e_{23} & e_{21} & e_{31} & e_{32} \end{pmatrix} ^T \, , \end{align*} by \eqref{eq:explicit}, \eqref{eq:Hook-1} and \eqref{eq:blocks} we deduce that \begin{equation*} {\bm \sigma}={\bf C}_{9\times 9} \, {\bf e} \qquad \text{and} \qquad \mathcal E=\frac 12 \, {\bf e}^T {\bf C}_{9\times 9} \, {\bf e} \, . \end{equation*} In particular by \eqref{eq:blocks}, \eqref{eq:symmetry-2} and the symmetry of ${\bf e}$, we have \begin{align}\label{eq:elastic-energy}
\mathcal E & =\frac 12 \,
{\bf e}_{{\rm diag}}^T \,
{\bf B_1} \, {\bf e}_{{\rm diag}}
+\frac{C_{1212}\, e_{12}^2
+C_{1313}\, e_{13}^2
+C_{2323}\, e_{23}^2}2
+\frac{C_{2121}\, e_{21}^2
+C_{3131}\, e_{31}^2
+C_{3232}\, e_{32}^2}2 \\[10pt]
& \notag = \frac 12 \,
{\bf e}_{{\rm diag}}^T \,
{\bf B_1} \, {\bf e}_{{\rm diag}}
+C_{1212}\, e_{12}^2
+C_{1313}\, e_{13}^2
+C_{2323}\, e_{23}^2 \end{align} with ${\bf e}_{{\rm diag}}=\begin{pmatrix} e_{11} & e_{22} & e_{33} \end{pmatrix}^T$.
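The reduction leading to \eqref{eq:elastic-energy} can be double-checked symbolically; the following sympy sketch (illustrative only) expands $\frac 12\,{\bf e}^T{\bf C}_{9\times 9}\,{\bf e}$ for the block structure \eqref{eq:blocks} and verifies that it coincides with the right-hand side of \eqref{eq:elastic-energy}.
\begin{verbatim}
import sympy as sp

e11, e22, e33, e12, e13, e23 = sp.symbols('e11 e22 e33 e12 e13 e23')
C1111, C1122, C1133, C2222, C2233, C3333, C1212, C1313, C2323 = sp.symbols(
    'C1111 C1122 C1133 C2222 C2233 C3333 C1212 C1313 C2323')

B1 = sp.Matrix([[C1111, C1122, C1133],
                [C1122, C2222, C2233],
                [C1133, C2233, C3333]])
B2 = sp.diag(C1212, C1313, C2323)
C9 = sp.diag(B1, B2, B2)               # block structure (eq:blocks), B3 = B2

# Strain written as a 9-vector, using the symmetry e21=e12, e31=e13, e32=e23.
e = sp.Matrix([e11, e22, e33, e12, e13, e23, e12, e13, e23])
energy = sp.expand((e.T * C9 * e)[0, 0] / 2)

ediag = sp.Matrix([e11, e22, e33])
target = sp.expand((ediag.T * B1 * ediag)[0, 0] / 2
                   + C1212*e12**2 + C1313*e13**2 + C2323*e23**2)
print(sp.simplify(energy - target))    # expected output: 0
\end{verbatim}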
Consider now the inverse of identity \eqref{eq:Hook-1} in such a way that the strain tensor is expressed in terms of the stress tensor, \begin{equation} \label{eq:Hook-2} \begin{pmatrix} e_{11} \\ e_{22} \\ e_{33} \\ e_{12} \\ e_{13} \\ e_{23} \\ \end{pmatrix} = \begin{pmatrix} \frac 1{E_{1}} & -\frac{\nu_{21}}{E_{2}} & -\frac{\nu_{31}}{E_{3}} & 0 & 0 & 0 \\ -\frac{\nu_{12}}{E_{1}} & \frac 1{E_{2}} & -\frac{\nu_{32}}{E_{3}} & 0 & 0 & 0 \\ -\frac{\nu_{13}}{E_{1}} & -\frac{\nu_{23}}{E_{2}} & \frac 1{E_{3}} & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac 1{2\mu_{12}} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac 1{2\mu_{13}} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac 1{2\mu_{23}} \end{pmatrix} \begin{pmatrix} \sigma_{11} \\ \sigma_{22} \\ \sigma_{33} \\ \sigma_{12} \\ \sigma_{13} \\ \sigma_{23} \\ \end{pmatrix} \end{equation} where the constants $E_{1}, E_{2}, E_{3}$ are known as Young moduli, one for each direction and the constants $\nu_{ij}$, $i,j\in \{1,2,3\}$, $i\neq j$ are known as Poisson ratios. Finally $\mu_{12}, \mu_{13}, \mu_{23}$ are known as shear moduli or moduli of rigidity and their indices coincide with the corresponding components of the stress and strain tensors obviously involved by these coefficients: \begin{equation*}
e_{12}=(2\mu_{12})^{-1} \, \sigma_{12} \, , \qquad e_{13}=(2\mu_{13})^{-1} \, \sigma_{13} \, ,\qquad e_{23}=(2\mu_{23})^{-1} \, \sigma_{23} \, . \end{equation*}
Let us denote by $S$ the $6\times 6$ matrix appearing in \eqref{eq:Hook-2}. Comparing \eqref{eq:Hook-1} with \eqref{eq:Hook-2}, it is clear that $S=C^{-1}$. The symmetry properties of the stiffness matrix $C$, see \eqref{eq:Hook-1}, imply the symmetry of $S$, so that we may write \begin{equation} \label{eq:id0} \frac{\nu_{21}}{E_{2}}=\frac{\nu_{12}}{E_{1}} \, , \qquad \frac{\nu_{31}}{E_{3}}=\frac{\nu_{13}}{E_{1}} \, , \qquad \frac{\nu_{32}}{E_{3}}=\frac{\nu_{23}}{E_{2}} \, . \end{equation} We observe that by \eqref{eq:Hook-2} we easily deduce that in the case of a one-dimensional tension state parallel to the $x_1$-axis, i.e. when the only component of ${\bm \sigma}$ different from zero is $\sigma_{11}$, we have that \begin{equation*} \nu_{12}=-\tfrac{e_{22}}{e_{11}} \quad \text{and} \quad \nu_{13}=-\tfrac{e_{33}}{e_{11}} \, . \end{equation*} Similarly, choosing first a one-dimensional tension state parallel to the $x_2$-axis and then a one-dimensional tension state parallel to the $x_3$-axis, we infer \begin{equation*} \nu_{21}=-\tfrac{e_{11}}{e_{22}} \, , \qquad \nu_{23}=-\tfrac{e_{33}}{e_{22}} \, , \qquad \nu_{31}=-\tfrac{e_{11}}{e_{33}} \, , \qquad \nu_{32}=-\tfrac{e_{22}}{e_{33}} \, . \end{equation*} In other words, if we consider the Poisson ratio $\nu_{ij}$, the index $i$ (the first one) represents the direction of the one-dimensional stress and the index $j$ (the second one) represents the direction of the transversal deformation. This explanation clarifies the meaning of the Poisson ratios and the notation used in \eqref{eq:Hook-2}.
We now want to represent the components of the matrix $C$ in terms of the coefficients $E_i$, $\nu_{ij}$, $\mu_{ij}$ introduced in \eqref{eq:Hook-2}, by inverting $S$ and then comparing the components of its inverse with the coefficients $C_{ijkl}$.
Let us put \begin{equation} \label{eq:def-delta} \delta:={\rm det} \begin{pmatrix}
\frac{1}{E_1} & -\frac{\nu_{21}}{E_2} & -\frac{\nu_{31}}{E_3} \\
-\frac{\nu_{12}}{E_1} & \frac{1}{E_2} & -\frac{\nu_{32}}{E_3} \\
-\frac{\nu_{13}}{E_1} & -\frac{\nu_{23}}{E_2} & \frac{1}{E_3} \end{pmatrix} =\frac{1-\nu_{12}\nu_{21}-\nu_{13}\nu_{31}-\nu_{23}\nu_{32}-2\nu_{12}\nu_{23}\nu_{31}}{E_1 E_2 E_3} \end{equation} where we exploited \eqref{eq:id0} to show that $\nu_{13}\nu_{21}\nu_{32}=\nu_{12}\nu_{23}\nu_{31}$. In this way we may write $\det(S)=\frac{\delta}{8\mu_{12}\mu_{13}\mu_{23}}$.
Using this notation we obtain \begin{equation} \label{eq:S-1} C=S^{-1}= \begin{pmatrix} \frac{1-\nu_{23}\nu_{32}}{\delta \, E_2E_3} & \frac{\nu_{12}+\nu_{13}\nu_{32}}{\delta \, E_1 E_3} & \frac{\nu_{13}+\nu_{12}\nu_{23}}{\delta \, E_1 E_2} & 0 & 0 & 0 \\ \frac{\nu_{21}+\nu_{31}\nu_{23}}{\delta \, E_2 E_3} & \frac{1-\nu_{13}\nu_{31}}{\delta \, E_1 E_3} & \frac{\nu_{23}+\nu_{13}\nu_{21}}{\delta \, E_1 E_2} & 0 & 0 & 0 \\ \frac{\nu_{31}+\nu_{21}\nu_{32}}{\delta \, E_2 E_3} & \frac{\nu_{32}+\nu_{31}\nu_{12}}{\delta \, E_1 E_3} & \frac{1-\nu_{12}\nu_{21}}{\delta \, E_1 E_2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 2\mu_{12} & 0 & 0 \\ 0 & 0 & 0 & 0 & 2\mu_{13} & 0 \\ 0 & 0 & 0 & 0 & 0 & 2\mu_{23} \end{pmatrix} \, . \end{equation}
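Formula \eqref{eq:S-1} can also be tested numerically: choosing arbitrary admissible values of the constants (the numbers below are purely illustrative and are not taken from the paper) and imposing \eqref{eq:id0}, the $(1,1)$ entry of $S^{-1}$ must coincide with $\frac{1-\nu_{23}\nu_{32}}{\delta\,E_2E_3}$. A minimal Python sketch follows.
\begin{verbatim}
import numpy as np

# Illustrative values only.
E1, E2, E3 = 200.0, 80.0, 60.0
nu12, nu13, nu23 = 0.30, 0.25, 0.20
nu21 = nu12 * E2 / E1          # symmetry relations (eq:id0)
nu31 = nu13 * E3 / E1
nu32 = nu23 * E3 / E2
mu12 = mu13 = mu23 = 30.0

S = np.array([[ 1/E1,     -nu21/E2, -nu31/E3, 0, 0, 0],
              [-nu12/E1,   1/E2,    -nu32/E3, 0, 0, 0],
              [-nu13/E1,  -nu23/E2,  1/E3,    0, 0, 0],
              [0, 0, 0, 1/(2*mu12), 0, 0],
              [0, 0, 0, 0, 1/(2*mu13), 0],
              [0, 0, 0, 0, 0, 1/(2*mu23)]])

delta = (1 - nu12*nu21 - nu13*nu31 - nu23*nu32
         - 2*nu12*nu23*nu31) / (E1*E2*E3)
C = np.linalg.inv(S)
print(C[0, 0], (1 - nu23*nu32) / (delta*E2*E3))   # the two values coincide
\end{verbatim}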
\subsection{Orthotropic materials with a one-dimensional reinforcement} \label{ss:p-iso} Let us consider an orthotropic material with a one-dimensional reinforcement in the $x_1$ direction and an isotropic behavior in the $x_2$, $x_3$ variables. If we look at \eqref{eq:Hook-2}, this means that \begin{equation} \label{eq:id1} E_2=E_3 \, , \quad \nu_{21}=\nu_{31} \, , \quad \nu_{12}=\nu_{13} \, , \quad \mu_{12}=\mu_{13} \, , \quad \nu_{23}=\nu_{32} \, . \end{equation}
Recalling \eqref{eq:C2323} and exploiting \eqref{eq:id1}, we see that the elastic properties of the material are uniquely determined by only five constants: \begin{equation} \label{eq:five-const} E_1, \ E_2, \ \nu_{12}, \ \nu_{23}, \ \mu_{12} \, . \end{equation} Indeed, \eqref{eq:C2323} and \eqref{eq:id1} imply $C_{2222}=C_{3333}$, $C_{2233}=C_{3322}$ so that \begin{equation} \label{eq:conjecture} \mu_{23}=\tfrac{C_{2323}}2=\tfrac 12(C_{2222}-C_{2233}) \, . \end{equation} Now, exploiting the identities in \eqref{eq:id0}, \eqref{eq:id1}, \eqref{eq:conjecture}, we can write the matrix $C$ in the form {\small \begin{equation} \label{eq:rigidity-matrix} C= \begin{pmatrix} \frac{1-\nu_{23}^2}{\delta \, E_2^2} & \frac{\nu_{12}(1+\nu_{23})}{\delta \, E_1 E_2} & \frac{\nu_{12}(1+\nu_{23})}{\delta \, E_1 E_2} & 0 & 0 & 0 \\[7pt] \frac{\nu_{12}(1+\nu_{23})}{\delta \, E_1 E_2} & \frac{E_1-E_2 \, \nu_{12}^2}{\delta \, E_1^2 E_2} & \frac{E_1 \nu_{23}+E_2 \, \nu_{12}^2}{\delta \, E_1^2 E_2} & 0 & 0 & 0 \\[7pt] \frac{\nu_{12}(1+\nu_{23})}{\delta \, E_1 E_2} & \frac{E_1 \, \nu_{23}+E_2 \, \nu_{12}^2}{\delta \, E_1^2 E_2} & \frac{E_1-E_2 \, \nu_{12}^2}{\delta \, E_1^2 E_2} & 0 & 0 & 0 \\[7pt] 0 & 0 & 0 & 2\mu_{12} & 0 & 0 \\[7pt] 0 & 0 & 0 & 0 & 2\mu_{12} & 0 \\[7pt] 0 & 0 & 0 & 0 & 0 & \frac{E_1(1-\nu_{23})-2E_2 \, \nu_{12}^2}{\delta \, E_1^2 E_2} \end{pmatrix} \, . \end{equation} }
\section{The model of a plate with a one-dimensional reinforcement} \label{s:orthotropic-plate} We present in this section the model of a plate of length $L$, width $2\ell$ and thickness $d$. We choose a coordinate system in such a way that the plate is described by the set $(0,L)\times (-\ell,\ell)\times \left(-\frac d2,\frac d2 \right)$. We use for these coordinates the usual $x,y,z$ notation in place of the $x_1, x_2, x_3$ notation used in Section \ref{s:orthotropic}.
We assume that the plate is made of an orthotropic material with a one-dimensional reinforcement in the $x$ direction. We assume the validity of the classical constitutive assumptions for the displacement of a plate, see \cite[Paragraph 11]{LaLi}:
\begin{itemize} \item the displacement of the midway surface is only vertical and it is described by a function $u=u(x,y)$ with $(x,y)\in (0,L)\times (-\ell,\ell)$;
\item the third component of the displacement vector ${\bf u}=(u_1,u_2,u_3)$ only depends on $x$ and $y$ and with sufficient accuracy we may assume that $u_3(x,y)=u(x,y)$ for any $(x,y)\in (0,L)\times (-\ell,\ell)$;
\item the components $\sigma_{13}, \sigma_{23}, \sigma_{33}$ of the stress tensor vanish everywhere in the plate. \end{itemize}
We now compute the elastic energy per unit of volume in a configuration corresponding to a generic vertical displacement $u$ of the midway surface. By \eqref{eq:strain-tensor} and the above constitutive conditions we obtain \begin{equation*} u_{1}=-z \frac{\partial u}{\partial x} \, , \qquad u_{2}=-z \frac{\partial u}{\partial y} \, , \qquad u_3=u \, , \end{equation*} and, in turn, \begin{equation} \label{eq:e11-e22} e_{11}=-z \frac{\partial^2 u}{\partial x^2} \, , \qquad e_{22}=-z \frac{\partial^2 u}{\partial y^2} \, , \qquad e_{12}=-z\frac{\partial^2 u}{\partial x\partial y} \, , \qquad e_{13}=0 \, , \qquad e_{23}=0 \, , \end{equation} see \cite[Paragraph 11]{LaLi} for more details. Finally, condition $\sigma_{33}=0$ combined with \eqref{eq:Hook-1}, \eqref{eq:id0} and \eqref{eq:rigidity-matrix}, yields \begin{align} \label{eq:e33} e_{33}&=-\frac{\nu_{12}E_1 (1+\nu_{23})e_{11}+(E_1 \nu_{23}+E_2 \nu_{12}^2)e_{22}}{E_1-E_2 \nu_{12}^2} \\[7pt] \notag & =\frac{\nu_{12}E_1 (1+\nu_{23})}{E_1-E_2 \nu_{12}^2} \, z \frac{\partial^2 u}{\partial x^2}+\frac{E_1 \nu_{23}+E_2 \nu_{12}^2}{E_1-E_2 \nu_{12}^2} \, z \frac{\partial^2 u}{\partial y^2} \, . \end{align} Replacing \eqref{eq:e11-e22} and \eqref{eq:e33} into \eqref{eq:elastic-energy} we obtain \begin{equation*} \mathcal E=\frac 12 \left(K_{11} e_{11}^2+K_{22} e_{22}^2+2K_{1122} \, e_{11}e_{22}+2K_{1212} e_{12}^2\right) \end{equation*} where \begin{align*} & K_{11}=\frac{(1+\nu_{23})[E_1(1-\nu_{23})-2E_2 \nu_{12}^2]}{\delta E_2^2 (E_1-E_2 \nu_{12}^2)} \, , \qquad K_{22}=\frac{(1+\nu_{23})[E_1(1-\nu_{23})-2E_2 \nu_{12}^2]}{\delta E_1 E_2(E_1-E_2 \nu_{12}^2)} \, , \\[10pt] & K_{1122}=\frac{\nu_{12}(1+\nu_{23})[E_1(1-\nu_{23})-2E_2 \nu_{12}^2]}{\delta E_1 E_2 (E_1-E_2\nu_{12}^2)} \, , \qquad K_{1212}=2\mu_{12} \, . \end{align*} Putting $\mathcal K=K_{22}$ we have that \begin{align} \label{eq:mathcal-E} \mathcal E & =\frac 12 \left(\frac{E_1 \mathcal K}{E_2} \, e_{11}^2+\mathcal K e_{22}^2+2\nu_{12} \mathcal K \, e_{11}e_{22}+4\mu_{12} e_{12}^2\right) \\[7pt] \notag & =\frac{z^2}2 \left[\frac{E_1 \mathcal K}{E_2} \left(\frac{\partial^2 u}{\partial x^2}\right)^2+\mathcal K\left(\frac{\partial^2 u}{\partial y^2}\right)^2 +2\nu_{12} \mathcal K \frac{\partial^2 u}{\partial x^2}\frac{\partial^2 u}{\partial y^2}+4\mu_{12} \left(\frac{\partial^2 u}{\partial x \partial y}\right)^2 \right] \, . \end{align} We observe that by \eqref{eq:id0}, \eqref{eq:def-delta} and some computations we may write $\mathcal K$ in a more elegant way: \begin{equation} \label{eq:write-K}
\mathcal K=\frac{E_2}{1-\nu_{12}\nu_{21}} \, . \end{equation}
Looking at \eqref{eq:mathcal-E} and \eqref{eq:write-K}, we see that the elastic coefficients that completely determine the bending energy of the plate corresponding to a generic displacement $u$ are $E_1$, $E_2$, $\nu_{12}$ and $\mu_{12}$, while no dependence on the Poisson ratio $\nu_{23}$ occurs.
The total bending energy of the deformed plate in term of the vertical displacement assumes the form \begin{equation} \label{eq:total-energy} \mathbb E_B(u)=\frac{d^3}{24} \int_\Omega \left[\frac{E_1 \mathcal K}{E_2} \left(\frac{\partial^2 u}{\partial x^2}\right)^2+\mathcal K\left(\frac{\partial^2 u}{\partial y^2}\right)^2 +2\nu_{12} \mathcal K \frac{\partial^2 u}{\partial x^2}\frac{\partial^2 u}{\partial y^2}+4\mu_{12} \left(\frac{\partial^2 u}{\partial x \partial y}\right)^2 \right] dxdy \end{equation} where we put $\Omega=(0,L)\times (-\ell,\ell)$.
As we pointed out at the end of Section \ref{a:1}, the value of the coefficient $C_{1212}$, and hence of $\mu_{12}$, is completely independent of the other coefficients, so that the choice of $\mu_{12}$ is free. It appears reasonable for our purposes to assume that \begin{equation} \label{eq:ass-mu12} \mu_{12}=\frac{\mathcal K (1-\nu_{12})}2 \, , \end{equation} in complete accordance with the classical theory of plates in the isotropic setting, in which $\mathcal K=\frac{E}{1-\nu^2}$ and $\mu=\frac{\mathcal K(1-\nu)}{2}=\frac{E}{2(1+\nu)}$, where $E$ and $\nu$ are the Young modulus and Poisson ratio of the isotropic material respectively and $\mu$ is one of the two Lam\'e coefficients, usually known as the modulus of rigidity of the material, see \cite[Chapter 1, Section 5, (5.9)]{LaLi}.
In this way, by \eqref{eq:total-energy}, we may write the bending energy of the plate in the form \begin{equation} \label{eq:total-energy-2} \mathbb E_B(u)=\frac{d^3 \mathcal K}{24} \int_\Omega \left[\frac{E_1}{E_2} \left(\frac{\partial^2 u}{\partial x^2}\right)^2+\left(\frac{\partial^2 u}{\partial y^2}\right)^2 +2\nu_{12} \frac{\partial^2 u}{\partial x^2}\frac{\partial^2 u}{\partial y^2}+2(1-\nu_{12}) \left(\frac{\partial^2 u}{\partial x \partial y}\right)^2 \right] dxdy \, . \end{equation} Denoting by $D^2 u$ the Hessian matrix of $u$, introducing the notation \begin{equation*}
D^2 u:D^2 v=u_{xx} v_{xx}+2u_{xy}v_{xy}+u_{yy}v_{yy} \quad
\text{and} \quad |D^2 u|^2=u_{xx}^2+2u_{xy}^2+v_{yy}^2 \, \end{equation*} and defining \begin{equation} \label{eq:write-k}
\kappa=\frac{E_1-E_2}{E_2} \, , \end{equation}
we may write \begin{equation} \label{eq:E-B}
\mathbb E_B(u)=\frac{d^3 \mathcal K}{24} \int_\Omega \left[\nu_{12} |\Delta u|^2+(1-\nu_{12})|D^2 u|^2+\kappa\, u_{xx}^2 \right] dxdy \, . \end{equation} We observe that since the plate is reinforced in the $x$ direction, we assume that \begin{equation} \label{eq:ipotesi-Young} E_1>E_2 \end{equation} which in turn implies $\kappa>0$.
In the remaining part of the paper, for the Poisson ratio $\nu_{12}$ we use the simpler notation \begin{equation} \label{eq:write-nu}
\nu=\nu_{12} \, . \end{equation} According with the theory of isotropic materials, we assume that \begin{equation} \label{eq:ipotesi-Poisson} 0<\nu<\frac 12 \, . \end{equation}
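For later use we also record that the integrand in \eqref{eq:E-B} coincides with the one in \eqref{eq:total-energy-2}: indeed, by \eqref{eq:write-k} and \eqref{eq:write-nu},
\begin{align*}
\nu|\Delta u|^2+(1-\nu)|D^2u|^2+\kappa\,u_{xx}^2
&=\nu\left(u_{xx}+u_{yy}\right)^2+(1-\nu)\left(u_{xx}^2+2u_{xy}^2+u_{yy}^2\right)+\kappa\,u_{xx}^2\\
&=(1+\kappa)\,u_{xx}^2+u_{yy}^2+2\nu\,u_{xx}u_{yy}+2(1-\nu)\,u_{xy}^2\\
&=\frac{E_1}{E_2}\,u_{xx}^2+u_{yy}^2+2\nu\,u_{xx}u_{yy}+2(1-\nu)\,u_{xy}^2\,.
\end{align*}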
We now introduce a suitable functional space for the energy $\mathbb E_B$. As explained in the introduction, our main purpose is to describe the static and dynamic behavior of the deck of a bridge by mean of a plate model. For this reason, we may assume that the deck is hinged at the two vertical edges of the rectangle $\Omega$ and free on the two horizontal edges of the same rectangle. Hence, a reasonable choice for the functional subspace of the Sobolev space $H^2(\Omega)$ is \begin{equation} \label{eq:def-H2*} H^2_*(\Omega):=\{w\in H^2(\Omega): w=0 \ \text{on} \ \{0,L\}\times (-\ell,\ell)\} \, , \end{equation} see \cite[Section 3]{FeGa}.
Thanks to the Intermediate Derivatives Theorem, see \cite[Theorem 4.15]{Adams}, the space $H^2(\Omega)$ is a Hilbert space if endowed with the scalar product $$ (u,v)_{H^2}:=\int_\Omega\left(D^2u:D^2v+uv\right)\, dxdy\qquad \text{for all } u,v\in H^2(\Omega) \, . $$
On the closed subspace $H^2_*(\Omega)$ it is possible to define an alternative scalar product naturally related to the functional $\mathbb E_B$ as explained in the next proposition.
\begin{proposition} \label{l:equivalence} Assume \eqref{eq:ass-mu12}, \eqref{eq:ipotesi-Young} and \eqref{eq:ipotesi-Poisson}. On the space $H^2_*(\Omega)$ the two norms $$
u\mapsto\|u\|_{H^2}\, ,\quad u\mapsto\|u\|_{H^2_*}:=\left\{\int_\Omega \left[\nu |\Delta u|^2+(1-\nu)|D^2 u|^2+\kappa\, u_{xx}^2 \right] dxdy\right\}^{1/2} $$ are equivalent. Therefore, $H^2_*(\Omega)$ is a Hilbert space when endowed with the scalar product \begin{equation*} (u,v)_{H^2_*}:=\int_\Omega \left[ \nu \Delta u \Delta v+(1-\nu)D^2 u:D^2 v+\kappa \, u_{xx}v_{xx} \right] dxdy \, . \end{equation*} \end{proposition}
The proof can be obtained by proceeding as in the proof of \cite[Lemma 4.1]{FeGa} or proceeding directly by combining the Poincar\'e inequality and the classical $H^2$-regularity estimate for the Laplacian.
Next, if we denote by $f$ an external vertical load per unit of surface and if $u$ is the deflection of the plate in the vertical direction, by \eqref{eq:total-energy-2} we have that the total energy $\mathbb E_T$ of the plate becomes \begin{equation} \label{eq:energy-total} \mathbb E_T(u)=\frac{d^3 \mathcal K}{24} \int_\Omega
\left(\nu|\Delta u|^2+(1-\nu)|D^2 u|^2+\kappa\, u_{xx}^2\right) dxdy-\int_\Omega fu \, dxdy \, . \end{equation} If $u\in C^4(\overline \Omega)\cap H^2_*(\Omega)$ and $v\in H^2_*(\Omega)$, by \cite[Proposition 5]{Chasman} and some calculation, we infer \begin{align*} & \nu \int_\Omega \Delta u \Delta v \, dxdy+(1-\nu)\int_\Omega D^2 u:D^2 v \, dxdy \\[6pt] & = \int_0^L [\nu u_{xx}(x,\ell)+u_{yy}(x,\ell)]v_y(x,\ell)\, dx-\int_0^L [\nu u_{xx}(x,-\ell)+u_{yy}(x,-\ell)]v_y(x,-\ell)\, dx \\[6pt] & -\!\!\int_0^L [u_{yyy}(x,\ell)+(2-\nu)u_{xxy}(x,\ell)]v(x,\ell) \, dx\!+\!\!\int_0^L [u_{yyy}(x,-\ell)+(2-\nu)u_{xxy}(x,-\ell)]v(x,-\ell) \, dx \\[6pt] & \quad +\int_{-\ell}^\ell [u_{xx}(L,y)v_x(L,y)-u_{xx}(0,y)v_x(0,y)]dy+\int_\Omega \Delta^2 u \, v \, dxdy \end{align*} and \begin{align*}
\int_\Omega u_{xx}\, v_{xx} \, dxdy & = \int_{-\ell}^{\ell} u_{xx}(L,y)v_x(L,y)\, dy
-\int_{-\ell}^{\ell} u_{xx}(0,y)v_x(0,y)\, dy+\int_\Omega u_{xxxx} \, v\, dxdy \, . \end{align*} Therefore, if $u\in C^4(\Omega)\cap H^2_*(\Omega)$ is a critical point of the functional $\mathbb E_T$ then it is a classical solution of the problem \begin{equation} \label{eq:model-plate} \begin{cases} \frac{d^3\mathcal K}{12} \left(\Delta^2 u+\kappa\frac{\partial^4 u}{\partial x^4}\right)=f & \qquad \text{in } \Omega \, , \\[6pt] u(0,y)=u_{xx}(0,y)=u(L,y)=u_{xx}(L,y)=0 & \qquad \text{for } y\in(-\ell,\ell) \, , \\[6pt] u_{yy}(x,\pm \ell)+\nu u_{xx}(x,\pm \ell)=0 & \qquad \text{for } x\in (0,L) \, , \\[6pt] u_{yyy}(x,\pm \ell)+(2-\nu)u_{xxy}(x,\pm \ell)=0 & \qquad \text{for } x\in (0,L) \, . \end{cases} \end{equation} Problem \eqref{eq:model-plate} represents the model for a plate made of an orthotropic material with a one-dimensional reinforcement in the $x$ direction subject to vertical load $f$ per unit of surface.
\begin{remark} \label{r:1} We observe that recalling \eqref{eq:id0} and \eqref{eq:write-K}, the fourth order equation in \eqref{eq:model-plate} may be written in a different way, more familiar in the theory of orthotropic plates: {\small \begin{equation*}
\frac{E_1 d^3}{12(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial x^4}
+\frac{E_2 d^3}{6(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial x^2\partial y^2}
+\frac{E_2 d^3}{12(1-\nu_{12} \nu_{21})} \, \frac{\partial^4 u}{\partial y^4}=f \, , \end{equation*} } see for example \cite[Chapter 2]{DesignManual}. \end{remark}
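We also note that the expression in Remark \ref{r:1} is obtained from \eqref{eq:model-plate} by expanding the biharmonic operator: since $\Delta^2 u=\frac{\partial^4 u}{\partial x^4}+2\frac{\partial^4 u}{\partial x^2\partial y^2}+\frac{\partial^4 u}{\partial y^4}$, \eqref{eq:write-K} and \eqref{eq:write-k} give
\[
\frac{d^3\mathcal K}{12}\left(\Delta^2 u+\kappa\,\frac{\partial^4 u}{\partial x^4}\right)
=\frac{E_1 d^3}{12(1-\nu_{12}\nu_{21})}\,\frac{\partial^4 u}{\partial x^4}
+\frac{E_2 d^3}{6(1-\nu_{12}\nu_{21})}\,\frac{\partial^4 u}{\partial x^2\partial y^2}
+\frac{E_2 d^3}{12(1-\nu_{12}\nu_{21})}\,\frac{\partial^4 u}{\partial y^4}\,,
\]
because $\mathcal K(1+\kappa)=\frac{E_1}{1-\nu_{12}\nu_{21}}$.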
Let us denote by $\mathcal H(\Omega)$ the dual space of $H^2_*(\Omega)$. We state here the following result taken from \cite{Suspension}:
\begin{theorem} \label{t:Lax-Milgram} Assume \eqref{eq:ass-mu12}, \eqref{eq:ipotesi-Young} and \eqref{eq:ipotesi-Poisson} and let $f\in \mathcal H(\Omega)$. Then the following conclusions hold true:
\begin{itemize}
\item[$(i)$] there exists a unique $u\in H^2_*(\Omega)$ such that \begin{equation*} \frac{d^3\mathcal K}{12} (u,v)_{H^2_*}=\ _{\mathcal H(\Omega)}\langle f,v\rangle_{H^2_*(\Omega)} \qquad \text{for any } v\in H^2_*(\Omega) \, ; \end{equation*}
\item[$(ii)$] $u$ is the unique minimum point of the convex functional $$
\mathbb E_T(u)=\frac 12 (u,u)_{H^2_*}- \ _{\mathcal H(\Omega)}\langle f,u\rangle_{H^2_*(\Omega)}\, ; $$
\item[$(iii)$] if $f\in W^{k,p}(\Omega)$ for some $1<p<\infty$ and $k\in \mathbb{N}\cup\{0\}$, where we put $W^{0,p}(\Omega):=L^p(\Omega)$, then $u\in W^{k+4,p}(\Omega)$.
\end{itemize} \end{theorem}
The proof of Theorem \ref{t:Lax-Milgram} is based on the Lax-Milgram Theorem and on classical elliptic regularity estimates; for the details we refer to \cite{Suspension}.
\section{Eigenvalues and frequencies of free vibration} \label{s:vibrations}
This section is devoted to the evolution equation for the orthotropic plate with a particular attention for stationary wave solutions and for the related frequencies of vibration.
As anticipated in the introduction, the frequencies of vibration are closely related to the following eigenvalue problem \begin{equation} \label{eq:eigenvalue-original} \begin{cases} \frac{d^3 \mathcal K}{12}\left(\Delta^2 u+\kappa \frac{\partial^4 u}{\partial x^4}\right)=\lambda u & \qquad \text{in } \Omega=(0,L)\times (-\ell,\ell) \, , \\[6pt] u(0,y)=u_{xx}(0,y)=u(L,y)=u_{xx}(L,y)=0 & \qquad \text{for } y\in(-\ell,\ell) \, , \\[6pt] u_{yy}(x,\pm \ell)+\nu u_{xx}(x,\pm \ell)=0 & \qquad \text{for } x\in (0,L) \, , \\[6pt] u_{yyy}(x,\pm \ell)+(2-\nu)u_{xxy}(x,\pm \ell)=0 & \qquad \text{for } x\in (0,L) \, . \end{cases} \end{equation}
Problem \eqref{eq:eigenvalue-original} admits a sequence of eigenvalues \begin{equation} \label{eq:eig-plate}
0<\lambda_1\le \lambda_2 \le \dots \le \lambda_m \le \dots \end{equation} diverging to $+\infty$. Indeed \eqref{eq:eigenvalue-original} admits the following weak formulation \begin{equation*}
\frac{ d^3 \mathcal K}{12} (u,v)_{H^2_*}=\lambda (u,v)_{L^2} \qquad \text{for any } v\in H^2_*(\Omega) \end{equation*} so that the compact embedding $H^2_*(\Omega)\subset L^2(\Omega)$ and the spectral theory of self-adjoint operators yield the desired result.
We point out that the eigenvalues of \eqref{eq:eigenvalue-original} can be characterized through explicit algebraic equations and that the eigenfunctions admit an explicit representation in terms of their respective eigenvalues. For more details see the statement and the proof of \cite[Theorem 3.3]{Suspension}, where it is also shown that the eigenvalues can be classified into four different kinds.
In \cite[Section 6]{Suspension}, we selected two families of eigenvalues denoted there by $\{\lambda_m^{{\rm vert}}\}_{m\ge 1}$ and $\{\lambda_m^{{\rm tors}}\}_{m\ge 1}$ respectively. Numerical evidence has shown that for the specific values assigned to the parameters of the plate, see \eqref{eq:elastic-par} below, the first eighteen eigenvalues all lie in one of these two families. We clarify that for any $m\ge 1$, the eigenfunctions corresponding to $\lambda_m^{{\rm vert}}$ are even with respect to the $y$ variable and the ones corresponding to $\lambda_m^{{\rm tors}}$ are odd with respect to the $y$ variable, which justifies the notation (see Figure \ref{f:1} and Figure \ref{f:2}).
\begin{figure}\label{f:1}
\end{figure}
\begin{figure}\label{f:2}
\end{figure}
Let us consider the equation of motion for a free orthotropic plate:
\begin{equation} \label{eq:evolution}
\frac{M}{2\ell} \, \frac{\partial^2 u}{\partial t^2}+\frac{d^3 \mathcal K}{12}
\left(\Delta^2 u+\kappa \, \frac{\partial^4 u}{\partial x^4}\right)=0 \end{equation} where $M$ is the mass linear density of the deck as explained in the introduction.
Given an eigenfunction $U_\lambda$ of \eqref{eq:eigenvalue-original} corresponding to some eigenvalue $\lambda$ we can construct a stationary wave solution in the form \begin{equation} \label{eq:stat-wave}
u_\lambda(x,y,t)=\sin(\omega_\lambda t) \, U_\lambda(x,y) \end{equation} where $\omega_\lambda$ represents an angular velocity. The frequency $\nu_\lambda$ is then obtained from $\omega_\lambda$ by dividing it by $2\pi$.
Inserting \eqref{eq:stat-wave} into \eqref{eq:evolution} and exploiting the fact that $U_\lambda$ is an eigenfunction with eigenvalue $\lambda$, we obtain \begin{equation*}
\left(-\frac{M}{2\ell} \, \omega_\lambda^2 +\lambda\right) \sin(\omega_\lambda t) U_\lambda(x,y)=0 \end{equation*} and, in turn, \begin{equation} \label{eq:nu-m}
\nu_\lambda=\frac{\omega_\lambda}{2\pi}=\frac{1}{\pi} \sqrt{\frac{\ell \lambda}{2M}} \, . \end{equation}
Looking at \eqref{eq:nu-m} and the definition of $\lambda_m^{{\rm vert}}$ and $\lambda_m^{{\rm tors}}$, it appears natural to define the following frequencies of vertical and torsional vibration respectively by \begin{equation*}
\nu_m^{{\rm vert}}=\frac{1}{\pi} \sqrt{\frac{\ell \lambda_m^{{\rm vert}}}{2M}} \, ,
\qquad \nu_m^{{\rm tors}}=\frac{1}{\pi} \sqrt{\frac{\ell \lambda_m^{{\rm tors}}}{2M}} \, . \end{equation*}
Having in mind the structure of the Tacoma Narrows Bridge, built and collapsed in 1940, and taking inspiration from \cite{ammann, AG15, AG17}, in \cite{Suspension} we assigned the following values to the parameters of the plate: \begin{align} \label{eq:elastic-par}
& L=853.44 \ m \, , \qquad \ell=6 \ m \, , \qquad M=7198 \ kg/m \, , \qquad \nu=0.2 \, , \\[7pt]
\notag & E_1=2.1 \cdot 10^{11} \ Pa \, , \qquad E_2=1.687\cdot 10^9 \ Pa \, , \qquad \mathcal R=2.109 \cdot 10^7 \ N\cdot m \, , \qquad \kappa=123.48 \, , \end{align} where $\mathcal R=\frac{d^3 \mathcal K}{12}$ denotes the rigidity of the plate.
Recalling equation \eqref{eq:familiar} and looking at \eqref{eq:elastic-par}, the reader can see that the plate model obtained above exhibits a strongly anisotropic behavior, since the Young modulus $E_1$ is two orders of magnitude larger than the Young modulus $E_2$. This suggests that the more popular isotropic plate model is not completely suitable to describe torsional oscillations of the deck of a bridge.
Assuming \eqref{eq:elastic-par} and exploiting the implicit representation of the eigenvalues provided in \cite{Suspension}, we numerically computed the values of the first ten frequencies of vertical oscillation and the first eight frequencies of torsional oscillation. Such values are collected in Table \ref{t:1}.
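To fix ideas, we also include a short Python listing implementing the conversion \eqref{eq:nu-m} with the values of $\ell$ and $M$ fixed in \eqref{eq:elastic-par}. It is a mere numerical illustration and is not part of the model: in particular, the eigenvalues are not reproduced here, since they have to be computed from the implicit algebraic equations of \cite{Suspension}. The inverse map is included as well, so that the eigenvalue corresponding to any entry of Table \ref{t:1} can be recovered.
\begin{verbatim}
# Conversion between eigenvalues and frequencies of vibration,
#     nu = (1/pi) * sqrt(ell * lambda / (2 M)),
# with the structural parameters ell = 6 m and M = 7198 kg/m.
# The eigenvalues must be supplied externally: they solve the implicit
# algebraic equations recalled above and are not listed here.
from math import pi, sqrt

ell = 6.0      # half-width of the deck [m]
M   = 7198.0   # mass linear density of the deck [kg/m]

def frequency(lam):
    """Frequency (in Hz) associated with the eigenvalue lam."""
    return sqrt(ell * lam / (2.0 * M)) / pi

def eigenvalue(nu):
    """Inverse map: eigenvalue associated with the frequency nu (in Hz)."""
    return (pi * nu) ** 2 * 2.0 * M / ell

# Round-trip check on the first entry of the table of frequencies:
print(frequency(eigenvalue(0.0045)))   # ~0.0045
\end{verbatim}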
We observe that these frequencies are considerably smaller than the ones expected for the oscillation of the deck of a suspension bridge. This is not surprising, since the oscillations observed in a real suspension bridge are strongly affected by the dynamics of the cables and the hangers.
This suggests that the present article and the related paper \cite{Suspension} should be considered only as preliminary works towards the formulation of a complete model of a suspension bridge in which the deck is described by an orthotropic plate. A general idea in this direction was given in \cite{Suspension}, where the following system was proposed: \begin{equation} \label{eq:sistemone}
\begin{cases}
m \xi(x) \frac{\partial^2 p_1}{\partial t^2}-\frac{H_0}{(\xi(x))^2} \, \frac{\partial^2 p_1}{\partial x^2}
=f_1\left(x,p_1,\frac{\partial p_1}{\partial x}\right)+F(u(\cdot,\ell)-p_1) \\[7pt]
m \xi(x) \frac{\partial^2 p_2}{\partial t^2}-\frac{H_0}{(\xi(x))^2} \, \frac{\partial^2 p_2}{\partial x^2}
=f_2\left(x,p_2,\frac{\partial p_2}{\partial x}\right)+F(u(\cdot,-\ell)-p_2) \\[7pt]
\frac{M}{2\ell} \frac{\partial^2 u}{\partial t^2}+\frac{d^3 \mathcal K}{12}\left(\Delta^2 u+\kappa
\frac{\partial^4 u}{\partial x^4}\right)=-F(u(\cdot,\ell)-p_1)-F(u(\cdot,-\ell)-p_2) \, .
\end{cases} \end{equation} In \eqref{eq:sistemone}, $p_1=p_1(x,t)$ and $p_2=p_2(x,t)$ describe the displacements of the two cables, $m$ is the mass linear density of the cables, $s=s(x)$ is the configuration of the cables at rest, $\xi(x)=\sqrt{1+(s'(x))^2}$ is the local length of the cables at rest, $H_0$ is the horizontal component of the tension of the cables and $f_1, f_2, F$ are suitable nonlinearities to be determined according to the accuracy one aims to achieve in the model.
The formulation of \eqref{eq:sistemone} was inspired by the models obtained in \cite{AG15,AG17}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$m$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\hline
$\nu_m^{{\rm vert}}$ & $0.0045$ & $0.0180$ & $0.0406$ & $0.0722$ & $0.1128$ & $0.1624$ & $0.2211$ &
$0.2887$ & $0.3654$ & $0.4512$ \\
\hline
$\nu_m^{{\rm tors}}$ & $0.0404$ & $0.0822$ & $0.1270$ & $0.1760$ & $0.2301$ & $0.2904$
& $0.3574$ & $0.4317$ & $-$ & $-$ \\
\hline
\end{tabular}
\caption{First frequencies of vertical and torsional oscillation, measured in $Hz$} \label{t:1}
\end{center} \end{table}
{\bf Acknowledgments} The author is member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`{a} e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). The author acknowledges partial financial support from the PRIN project 2017 ``Direct and inverse problems for partial differential equations: theoretical aspects and applications'' and from the INDAM - GNAMPA project 2019 ``Analisi spettrale per operatori ellittici del secondo e quarto ordine con condizioni al contorno di tipo Steklov o di tipo parzialmente incernierato''.
This research was partially supported by the research project ``Metodi e modelli per la matematica e le sue applicazioni alle scienze, alla tecnologia e alla formazione'' Progetto di Ateneo 2019 of the University of Piemonte Orientale ``Amedeo Avogadro''.
The author is grateful to Elvise Berchio, Alessio Falocchi and Pier Domenico Lamberti for the useful discussions and suggestions that supported the beginning of this work.
\end{document} | arXiv |
\begin{document}
\title{On smooth surfaces in $\Pq$ containing a plane curve}
\section{Introduction} We work over an algebraically closed field of characteristic zero.\\ In this paper we are dealing with smooth surfaces $S$ in $\mathbb{P}^4$ which contain a plane curve, $P$.\\
The first part contains some generalities about the linear system $|H-P|$; in particular, we prove that its base locus has dimension zero and describe it. \\ In the second section we look at surfaces lying on a hypersurface of degree $s$ with a ($s$-2)-uple plane (we suppose $s \geq 4$); indeed, if the surface does not lie on a hyperquadric, this implies that it contains a plane curve (lemma \ref{lem-plane}). The main results are the following. \begin{theorem} \label{th1} Let $\Sigma \subset \mathbb{P}^4$ be an integral hypersurface of degree $s$ with a ($s$-2)-uple plane. Then the degree of smooth surfaces $S \subset \Sigma$ with $q(S)=0$ is bounded by a function of $s$. \end{theorem} Then we restrict to the case of regular surfaces lying on a hyperquartic with singular locus of dimension two. It turns out that, if $deg(S) \geq 5$, the hyperquartic must have a double plane (lemma \ref{lem-planeQ}). In this situation we can compute an effective bound. \begin{theorem} \label{th2} Let $S \subset \mathbb{P}^4$ be a smooth surface with $q(S)=0$, lying on a quartic hypersurface $\Sigma$ such that $Sing(\Sigma)$ has dimension two. Then $d=deg(S) \leq 40$. \end{theorem} The assumption $q(S)=0$ is made for technical reasons; in fact, we believe that it is not strictly necessary (see Remark \ref{general}). \\ Theorem \ref{th2} is of some interest for the classification of surfaces not of general type, since in this case one has to look only at surfaces lying on low-degree hypersurfaces. For similar results concerning smooth surfaces on hyperquartics with isolated singularities see \cite{EF}.
\section{Smooth surfaces containing a plane curve}
Let $S \subset \mathbb{P}^4$ be a smooth, non-degenerate surface of degree $d$, containing a plane curve, $P$, of degree $p$. If $p \geq 2$, there is a unique plane, $\Pi$, containing $P$; otherwise if $P$ is a line, there are $\infty^2$ such planes; we just choose one of them and call it $\Pi$. We assume that $P$ is the one-dimensional part of $\Pi \cap S$. We denote by $\delta$ the linear system cut out on $S$, residually to $P$, by the hyperplanes containing $\Pi$. Since, by Severi's theorem, $H^0(\ensuremath{\mathcal{O}} _S(1)) \simeq H^0(\ensuremath{\mathcal{O}} _{\mathbb{P}^4}(1))$ (we assume $S$ is not a Veronese surface), $\delta = |H-P|$ if $p \geq 2$; if $P$ is a line, $\delta$ is a pencil in the $\infty ^2$ linear system $|H-P|$. Finally we will denote by $Y_H$ the element of $\delta$ cut out by the hyperplane $H$, and by $C_H = P \cup Y_H$ the corresponding hyperplane section of $S$.
\begin{lemma} \label{lem1} (i) The curve $P$ is reduced and the base locus of $\delta$ is empty or zero-dimensional and contained in $\Pi$. The general element $Y_H \in \delta$ is smooth out of $\Pi$ and doesn't have any component in $\Pi$.\\
(ii) If $p=1$, the linear system $|H-P|$ is base point free. \end{lemma} \textit{Proof:} (i) Clearly the base locus of $\delta$ is contained in $\Pi$. Assume an irreducible component of $P$, $P_1$, is in the base locus of $\delta$. Then, for every $H$ through $\Pi$, $C_H =H\cap S$ is singular along $P_1$. It follows that $T_xS \subset H$, for every $x \in P_1$. Since this holds for every $H$ through $\Pi$, we get $T_xS = \Pi$, $\forall x \in P_1$, but this contradicts Zak's theorem (\cite{Z}) which states that the Gauss map is finite. The same argument shows that $P$ is reduced. We conclude by Bertini's theorem.\\
(ii) Assume $P$ is a line. Clearly the base locus of $|H-P|$ is contained in $P$. Take $x \in P$. Now let $H$ be a hyperplane containing $P$ but not containing $T_xS$; then $C_H=P\cup Y_H$ is smooth at $x$, so $x \notin Y_H$.\ensuremath{\diamondsuit}
\begin{remark} (i) If $p=1$, $|H-P|$ is base point free and yields a morphism $f: S \to \mathbb{P}^2$, which is nothing else than the projection from the line $P$. If there is no plane curve on $S$ in a plane through $P$, $f$ is a finite morphism of degree $d-2+P^2$.\\
(ii) Let $S \subset \mathbb{P}^4$ be an elliptic scroll, then $S$ contains a one dimensional family of cubic plane curves which are unisecants. If $P$ is such a cubic, and if $H$ is a general hyperplane through $P$, then $H \cap S=P \cup f \cup f'$, where $f,f'$ are two rulings. This shows that the general curve $Y_H \in |H-P|$ need not be irreducible. \end{remark} Since $\delta$ is a pencil and since the base locus, $\ensuremath{\mathcal{B}}$, is zero-dimensional, the degree of $\ensuremath{\mathcal{B}}$ is $(H-P)^2$. Now we give a geometric description of $\ensuremath{\mathcal{B}}$. Let $Z:=\Pi \cap S$, $Z$ is a 1-dimensional subscheme of $\Pi$ (and also of $S$) and is composed by $P$ and possibly by some 0-dimensional component, which may be isolated or embedded in $P$.
\begin{definition} We define $\ensuremath{\mathcal{R}}$ as the residual scheme of $Z$ with respect to $P$, hence $\ensuremath{\mathcal{I}} _{\ensuremath{\mathcal{R}}} = (\ensuremath{\mathcal{I}} _Z:\ensuremath{\mathcal{I}} _P)$. \end{definition}
Since $\ensuremath{\mathcal{R}} \subset Z$, we can view $\ensuremath{\mathcal{R}}$ as a subscheme of $\Pi$ or of $S$.
\begin{lemma} \label{lem-RB} We have $\ensuremath{\mathcal{B}} = \ensuremath{\mathcal{R}}$. \end{lemma} \textit{Proof:} We observe that $\ensuremath{\mathcal{R}} \subset \ensuremath{\mathcal{B}}$ and that $deg(\ensuremath{\mathcal{B}})=d-2p+P^2$, then we only have to compute $deg(\ensuremath{\mathcal{R}})$.\\
Considering a section of $\omega _S(2)$ (which is always globally generated), we can associate to $S$ a reflexive sheaf $\ensuremath{\mathcal{F}}$ of rank two and an exact sequence: $0 \to \ensuremath{\mathcal{O}}_{\mathbb{P}^4} \stackrel{t}{\to} \ensuremath{\mathcal{F}} \to \ensuremath{\mathcal{I}}_S(3) \to 0$ such that $(t)_0=S$. The singular locus of $\ensuremath{\mathcal{F}}$ is a divisor in $|2H+K|$ and the Chern classes of $\ensuremath{\mathcal{F}}$ are $c_1=3$, $c_2=d$.\\
We can restrict the sequence above to $\Pi$ and get a section $0 \to \ensuremath{\mathcal{O}}_{\Pi} \stackrel{t_{\Pi}}{\to} \ensuremath{\mathcal{F}}_{\Pi}$. Clearly $P \subset (t_{\Pi})_0$; then, dividing by an equation of $P$, we get a non-zero section $\bar{t}_{\Pi}$ of $\ensuremath{\mathcal{F}}_{\Pi}(-p)$. We compute $deg((\bar{t}_{\Pi})_0)= c_2(\ensuremath{\mathcal{F}}_{\Pi}(-p))=c_2(\ensuremath{\mathcal{F}}(-p))=-3p+d+p^2$. The section $\bar{t}_{\Pi}$ will vanish on $\ensuremath{\mathcal{R}}$ and on the intersection with $\Pi$ of the singular locus of $\ensuremath{\mathcal{F}}$, which is a curve $X \in |2H+K|$. Thus $(\bar{t}_{\Pi})_0 = \ensuremath{\mathcal{R}} \cup (X \cap \Pi)$. When we restrict to $\Pi$ we have $X \cap \Pi= X \cap P$ and we get $\sharp(X \cap \Pi)= (2H+K)P=2p+PK$.\\ It follows that $deg(\ensuremath{\mathcal{R}})= -5p+d+p^2-PK$. Now we use adjunction to get $PK=p^2-3p-P^2$ and, combining with the previous equation, we obtain the result.\ensuremath{\diamondsuit}
\begin{remark} There is a cheaper proof of this result. Following the lines of \cite{F}, page 155, one infers that $deg(\ensuremath{\mathcal{R}})=d-2p+P^2$.\\ Indeed we can see $S \cap \Pi$ as the intersection of two hyperplane divisors on $S$, $H_1$ and $H_2$, such that $H_1 \cap H_2= \Pi$. Moreover $P$ is a Weil divisor on the smooth surface $S$, hence a Cartier divisor. Then we compute the equivalence of $P$ in the intersection $H_1 \cap H_2$, namely $(H_1 \cdot H_2)^P=(H_1 +H_2 -P) \cdot P=2p-P^2$. This means that the ``exceeding'' curve $P$ counts for $2p-P^2$ points in $H_1 \cap H_2$, thus the degree of its zero-dimensional component, $\ensuremath{\mathcal{R}}$, drops by $2p-P^2$. It follows that $\deg(\ensuremath{\mathcal{R}})=d-2p+P^2$, hence the result.\\ \end{remark}
\section{Degree $s$ hypersurfaces with a ($s$-2)-uple plane} \begin{lemma} \label{lem-plane} If $S \subset \mathbb{P}^4$ is a smooth surface, lying on a degree $s$ integral hypersurface $\Sigma$ with a ($s$-2)-uple plane, then $S$ contains a plane curve or $h^0(\ensuremath{\mathcal{I}}_S(2)) \neq 0$. \end{lemma} \textit{Proof:} Let $\Pi$ be the ($s$-2)-uple plane in $\Sigma$ and let $H$ be a hyperplane containing $\Pi$; then $H \cap \Sigma = (s-2)\Pi \cup Q$, where $Q$ is a quadric surface and $C_H=S \cap H \subset (s-2)\Pi \cup Q$. If $dim(C_H \cap \Pi)=0$, then $C_H \subset Q$, i.e. $h^0(\ensuremath{\mathcal{I}}_{C_H}(2)) \neq 0$ and the same holds for $S$. Then we can assume $dim(C_H \cap \Pi)=1$ and this is equivalent to saying that $S$ contains a plane curve.\ensuremath{\diamondsuit}
\begin{notations} Let $\Sigma \subset \mathbb{P}^4$ be an integral hypersurface of degree $s$ containing a plane, $\Pi$, in its singular locus, with multiplicity $(s-2)$. Let $S \subset \Sigma$ be a smooth surface. If $h^0(\ensuremath{\mathcal{I}} _S(2)) \neq 0$, then $d:=deg(S) \leq 2s$. From now on we assume $h^0(\ensuremath{\mathcal{I}} _S(2))=0$. By Lemma \ref{lem-plane}, $dim(S \cap \Pi )=1$ and we denote by $P$ the 1-dimensional component of $\Pi \cap S$, and we let $p:=deg(P)$.\\ We assume $q(S)=0$; this assumption implies that every hyperplane section $C=H \cap S$ is linearly normal in $H \simeq \mathbb{P}^3$.\\ If $H$ is a hyperplane through $\Pi$, we denote by $C =Y_H \cup P$ the hyperplane section $H \cap S$. We have $C \subset \Sigma \cap H = (s-2)\Pi \cup Q_H$, where $Q_H$ is a quadric surface. By Lemma \ref{lem1}, if $H$ is general, $Y_H \subset Q_H$. If we restrict to $\Pi$, the $q_H=Q_H \cap \Pi$ form, as $H$ varies, a family of conics in $\Pi$. Let us set $\ensuremath{\mathcal{B}}_q=\displaystyle{ \bigcap_{H \supset \Pi} q_H}$; thus $\ensuremath{\mathcal{B}}_q$ is the base locus of the conics $q_H$. Since $Y_H \cap \Pi \subset Q_H \cap \Pi =q_H$, we have $\ensuremath{\mathcal{R}} \subset \ensuremath{\mathcal{B}}_q$.\\ Recall that if $\mu =c_2(\ensuremath{\mathcal{N}}_S(-s))= d(d+s(s-4))-s(2\pi-2)$ ($\pi$ is the sectional genus of $S$), then by Lemma 1 of \cite{EP}: $0 \leq \mu \leq (s-1)^2d-D(3H+K)$ where $D$ is the one-dimensional part of the intersection of $S$ with $Sing(\Sigma)$. In our situation $P \subset D$, so $\mu \leq (s-1)^2d-P(3H+K)=(s-1)^2d-3p-PK$. By adjunction we compute $P^2+PK=p^2-3p$ and then $\mu \leq (s-1)^2d-p^2+P^2= s(s-2)d-p^2+2p+r$ (since $r=d-2p+P^2$). \end{notations}
\begin{lemma} \label{lem-multiple} With the notations above, the base locus $\ensuremath{\mathcal{B}}_q$ of the conics $q_H$ is ($s$-1)-uple for $\Sigma$. \end{lemma} \textit{Proof:} We assume the plane $\Pi$ is given by $x_0=x_1=0$, thus if $\phi=0$ is an equation of $\Sigma$ we have $\phi \in (x_0,x_1)^{s-2}$. We can write for example $\phi=\displaystyle{\sum_{i=0}^{s-2} Q_i(x_0,x_1,x_2,x_3,x_4) x_0^i x_1^{s-2-i}}$ where the $Q_i$ are quadratic forms.\\
The general hyperplane $H_{\alpha}$ containing $\Pi$ has an equation of the form $x_0=\alpha x_1$, $\alpha \in k$, we consider $\phi _{|H_{\alpha}}$, namely the equation of the surface $\Sigma \cap H_{\alpha}$:\\
$\phi_{|H_{\alpha}}=\displaystyle{\sum_{i=0}^{s-2} Q_i(\alpha x_1,x_1,x_2,x_3,x_4) \alpha^i x_1^{s-2}}=x_1^{s-2}\displaystyle{\sum_{i=0}^{s-2} Q_i(\alpha x_1,x_1,x_2,x_3,x_4) \alpha^i}$.\\ Clearly $\displaystyle{\sum_{i=0}^{s-2} Q_i(\alpha x_1,x_1,x_2,x_3,x_4) \alpha^i}=0$ is an equation defining $Q_H$ for the hyperplane $H_{\alpha}$. Let $x=(0:0:x_2:x_3:x_4)$ be a point in $\ensuremath{\mathcal{B}}_q$, hence $\displaystyle{\sum_{i=0}^{s-2} Q_i(x) \alpha^i}=0$ for all $\alpha \in k$ and this implies that $Q_i(x)=0$.\\ Now if we look at the ($s$-2)-th derivatives of $\phi$, we see that they all vanish in a point $x \in \ensuremath{\mathcal{B}}_q$, equivalently $x$ is a ($s$-1)-uple point for $\Sigma$.\ensuremath{\diamondsuit}
\begin{lemma} \label{lem-planeQ} If $S \subset \mathbb{P}^4$ is a smooth surface with $q(S)=0$, lying on a quartic hypersurface $\Sigma$ having singular locus of dimension two, then, if $deg(S) \geq 5$, the component of dimension two in $Sing(\Sigma)$ is a plane (or a union of planes) and $S$ contains a plane curve. \end{lemma} \textit{Proof:} Let us suppose that $Sing(\Sigma)$ contains an irreducible surface of degree $>1$, then the general hyperplane section $S \cap H = C$ lies on $F=\Sigma \cap H$, which is a quartic surface of $\mathbb{P}^3$ having an irreducible curve of degree $>1$ in its singular locus. From the classification of quartic surfaces in $\mathbb{P}^3$ it follows that such a surface is a projection of a quartic surface $F' \subset \mathbb{P}^4$, then $F$ is not linearly normal. Since $C$ is linearly normal and smooth, the curve $C' \subset F'$ projecting down to $C$ must be degenerate and thus $d=deg(C') \leq 4$. So we may assume that the singular locus of $\Sigma$ does not contain irreducible surfaces of degree $>1$. Thus $Sing(\Sigma)$ contains a plane, say $\Pi$, which is double in $\Sigma$. Indeed $\Sigma$ cannot have a triple plane, otherwise $F=\Sigma \cap H$ would be a quartic surface in $\mathbb{P}^3$ with a triple line, and we argue as before because such a surface is not linearly normal in $\mathbb{P}^3$. By lemma \ref{lem-plane}, $S$ contains a plane curve.\ensuremath{\diamondsuit}\\ \\ \textit{Proof of theorems \ref{th1} and \ref{th2}}\\ We must distinguish between different cases, according to the behaviour of the curves $q_H$. Note that it is not possible that $q_H=0$ for every $H$; indeed if it were so, $\Pi$ would be ($s$-1)-uple for $\Sigma$. Then for all hyperplanes $H \supset \Pi$, $\Sigma \cap H=(s-1)\Pi \cup \Pi_H$, where $\Pi_H$ is a plane. With notations as above we could say that $Q_H=\Pi \cup \Pi_H$, but we know by lemma \ref{lem1} that, if $H$ is general, $Y_H$ does not have any component in $\Pi$, then $Y_H \subset \Pi_H$ is a plane curve and $h^0(\ensuremath{\mathcal{I}}_C(2)) \neq 0$: absurd.\\ So we are left with the following possibilities. The conics may move, i.e. vary as $H$ varies, so that at least two of them intersect properly, then $dim(\ensuremath{\mathcal{B}}_q)=0$; conversely they may all be equal to a fixed conic $q$ or they can be all reducible and contain a fixed line $D$, while the remaining line is moving. Observe that there are always two possibilities: the one-dimensional part of $\ensuremath{\mathcal{B}}_q$ could be contained in $S$ or not. The starting point of the proof is trying to show that $h^1(\ensuremath{\mathcal{I}}_C(2))=0$ where $C=Y_H\cup P$. Indeed if it is so, then by $0 \to \ensuremath{\mathcal{I}}_S(1) \to \ensuremath{\mathcal{I}}_S(2) \to \ensuremath{\mathcal{I}}_C(2) \to 0$ we obtain $h^1(\ensuremath{\mathcal{I}}_S(2))=0$. Then using $0 \to \ensuremath{\mathcal{I}}_S(2) \to \ensuremath{\mathcal{I}}_S(3) \to \ensuremath{\mathcal{I}}_C(3) \to 0$ and the fact that $h^0(\ensuremath{\mathcal{I}}_C(3)) \neq 0$ we get that $h^0(\ensuremath{\mathcal{I}}_S(3)) \neq 0$ and this implies $d \leq 3s$.\\ The proof will follow from the lemmas below.
\begin{lemma} \label{small-genus} If $p_a(Y_H) \leq 2(d-p-4)$ and if $r \leq 4$, then $d$ is bounded by a function of $s$. More precisely if $s=4$, $d \leq 40$. \end{lemma} \textit{Proof:} We have $\pi=p_a(Y_H)+ \frac{(p-1)(p-2)}{2}+d-p-r-1$, so $\pi-1 \leq 3(d-p)+\frac{p^2-3p}{2}-9-r$. Since $\mu \leq s(s-2)d-p^2+2p+r$ and on the other hand $\mu= d(d+s^2-4s)-2s(\pi-1)$, this yields: $\pi-1 \geq \frac{d^2-2sd+p^2-2p-r}{2s}$.\\ Now comparing the lower and the upper bound on $\pi-1$ we obtain: $ d^2-8sd+p^2(1-s)+p(9s-2)+18s+r(2s-1) \leq 0$ and since $r \geq 0$ it becomes: $ d^2-8sd+p^2(1-s)+p(9s-2)+18s \leq 0$. This implies $d \leq 4s + \sqrt{\Delta} \:\: (*)$, where $\Delta=16s^2 + p^2(s-1)-p(9s-2)-18s$. A short calculation shows that $\sqrt{\Delta} \leq p\sqrt{s-1} + 4s$ for all $s \geq 0$. In conclusion: $d \leq 8s + p \sqrt{s-1}$.\\ We take into account again the relation: $0 \leq \mu \leq s(s-2)d-p^2+2p+4$ and using the bound on $d$ stated above it becomes: $s(s-2)(8s+p\sqrt{s-1}) \geq p^2-2p-4$. This implies that $p$ is bounded by a function of $s$. We conclude since $d \leq 8s+p \sqrt{s-1}$. \par
If $s=4$ we give a better bound for $\sqrt{\Delta}$: indeed $\sqrt{\Delta} \leq p\sqrt{3}-8$ if $p \geq 20$, thus $d \leq 8 + p\sqrt{3}$ in that range. The same relation used above now gives: $8d \geq p^2-2p-4$, hence $p^2-2p-8\sqrt{3}p-68 \leq 0$, which is impossible for $p \geq 20$. Therefore $p \leq 19$; since $\Delta=3p^2-34p+184 \leq 621$ for $1 \leq p \leq 19$, $(*)$ yields $d \leq 16+\sqrt{621}<41$, that is $d \leq 40$.\ensuremath{\diamondsuit}
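For $s=4$ the bound can also be verified by a brute-force enumeration of the two inequalities displayed above (the inequality $d^2-8sd+p^2(1-s)+p(9s-2)+18s \leq 0$, obtained from $r\geq 0$, and the inequality $s(s-2)d-p^2+2p+4 \geq 0$, coming from $0 \leq \mu$ and $r \leq 4$): only finitely many pairs $(p,d)$ are admissible, and the largest admissible degree is indeed $d=40$, attained for $p=19$. The following short Python script, whose variable names mirror the notation of the proof, performs this check; it is only a numerical illustration and plays no role in the argument.
\begin{verbatim}
# Brute-force check for s = 4: enumerate the pairs (p,d) satisfying
#   d^2 - 8sd + p^2(1-s) + p(9s-2) + 18s <= 0    (from (*), with r >= 0)
#   s(s-2)d - p^2 + 2p + 4 >= 0                  (from 0 <= mu, with r <= 4)
# and record the largest admissible degree d.
s, best = 4, 0
for p in range(1, 300):
    for d in range(1, 300):
        ineq1 = d*d - 8*s*d + p*p*(1 - s) + p*(9*s - 2) + 18*s <= 0
        ineq2 = s*(s - 2)*d - p*p + 2*p + 4 >= 0
        if ineq1 and ineq2:
            best = max(best, d)
print(best)   # prints 40, attained only for p = 19
\end{verbatim}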
\begin{lemma} \label{rleq4} If $r \leq 4$ and if $\ensuremath{\mathcal{R}}$ does not contain three collinear points, then $d$ is bounded by a function of $s$. In particular if $s=4$, $d \leq 40$. \end{lemma} \textit{Proof:} Assume first $Q_H$ is a smooth quadric surface. We have $Y_H \cap \Pi=Y_H \cap P + \ensuremath{\mathcal{R}}$, so $0 \to \ensuremath{\mathcal{I}}_C(2) \to \ensuremath{\mathcal{I}}_P(2) \to \ensuremath{\mathcal{O}}_{Y_H}(\ensuremath{\mathcal{R}}+1) \to 0$. The curve $Y_H$ has bidegree $(a,b)$, $a \leq b$. We may assume $a \geq 4$, otherwise $p_a(Y_H) \leq 2(d-p-4)$ and we conclude by lemma \ref{small-genus}.\\ Thus $Y_H$ is linearly normal. We have $h^0(\ensuremath{\mathcal{O}}_{Y_H}(1+\ensuremath{\mathcal{R}}))=4$ if and only if $\ensuremath{\mathcal{R}}$ gives independent conditions to $\omega_{Y_H}(-1)$. This is equivalent to say that $\ensuremath{\mathcal{R}}$ gives independent conditions to the curves of bidegree $(a-3,b-3)$. If $a=b=4$, then $deg(Y_H)=d-p=8$ and using $s(s-2)d-p^2+2p+4 \geq 0$ we get $0 \leq -d^2+d(18+s(s-2))-76$. This shows that $d$ is bounded by a function of $s$, in particular if $s=4$, $d \leq 22$. So we may assume $a \geq 4$, $b \geq 5$ and since $r \leq 4$ and no three points of $\ensuremath{\mathcal{R}}$ are collinear, the curves of bidegree $(a-3,b-3)$ separate the points of $\ensuremath{\mathcal{R}}$. It follows that the map $H^0(\ensuremath{\mathcal{I}}_P(2)) \to H^0(\ensuremath{\mathcal{O}}_{Y_H}(1+\ensuremath{\mathcal{R}}))$ is surjective, hence $h^1(\ensuremath{\mathcal{I}}_C(2))=0$. As said before, this implies $d \leq 3s$.\\ Now we suppose $Q_H$ is an irreducible quadric cone (recall that every reduced curve on a quadric cone is a.C.M.). If $d-p$ is even, then $Y_H$ is a complete intersection $(\frac{d-p}{2},2)$ and $\omega_{Y_H} \cong \ensuremath{\mathcal{O}}_{Y_H}(\frac{d-p}{2}-2)$. So if $\frac{d-p}{2}-3 \geq 3$, arguing as above, we get $h^0(\ensuremath{\mathcal{O}}_{Y_H}(1+\ensuremath{\mathcal{R}}))=4$. On the other hand if this condition is not satisfied then $d-p \leq 11$, i.e. $p \geq d-11$. Recall that $0 \leq \mu \leq s(s-2)d-p^2+2p+4$; it follows that $(d-11)(d-13) \leq s(s-2)d+4$ and, for fixed $s$, this implies that $d$ is bounded. If $s=4$ we have: $d^2-32d+139 \leq 0$ which yields $d \leq 26$.\\ If $d-p$ is odd, $Y_H$ is linked to a line $L$ by a complete intersection $T$ of type $(\frac{d-p+1}{2},2)$. Since $L$ can be any ruling of $Q_H$, we may assume $L \cap \ensuremath{\mathcal{R}} = \emptyset$. The exact sequence of liaison: $0 \to \ensuremath{\mathcal{I}}_{T}(\frac{d-p-5}{2}) \to \ensuremath{\mathcal{I}}_L(\frac{d-p-5}{2}) \to \omega_{Y_H}(-1) \to 0$ shows that the divisors of $\omega_{Y_H}(-1)$ are cut on $Y_H$ by surfaces of degree $\delta=\frac{d-p-5}{2}$, containing $L$ but not $T$, residually to $L \cap Y_H$. We may consider surfaces of the form: $H_1 \cup \ldots \cup H_{\delta}$, where $H_1$ contains $L$ and where $H_2,\ldots,H_{\delta}$ are general planes. It follows that our condition is satisfied if $\delta-1 \geq 3$. If $\delta \leq 3$, then $p \geq d-11$ and we conclude as above.\\ If $Q_H$ is the union of two distinct planes, then $Y_H$ is the union of two distinct plane curves. We have: $p_a(Y_H) \geq (\frac{d-p}{2}-1)(\frac{d-p}{2}-2)-1$, because the minimal value for the arithmetical genus of a union of two plane curves of global degree $\delta$ is achieved when each curve has degree $\frac{\delta}{2}$ and if the two components do not intersect. 
Consequently: $\pi-1 \geq \frac{d^2+p^2-2pd-6d+6p+4}{4}+ \frac{p^2-3p+2}{2}+d-p-r-2$.\\ We may assume that the general hyperplane section of $S$ does not lie on a cubic surface (otherwise $h^0(\ensuremath{\mathcal{I}}_S(3)) \neq 0$ and $d \leq 3s$), so $\pi-1 \leq \frac{d^2}{8}$. Comparing these two inequalities (and using $r \leq 4$) we obtain: $6p^2-8p-4dp+d^2-4d-32 \leq 0$. If $d \geq 25$ no value of $p$ can satisfy this inequality, so $d \leq 24$ (for all $s$).\ensuremath{\diamondsuit}
\begin{corollary} \label{Bq=0} If $dim(\ensuremath{\mathcal{B}}_q)=0$, then $r \leq 4$ and $d$ is bounded by a function of $s$. If $s=4$, $d \leq 40$. \end{corollary} \textit{Proof:} Since $\ensuremath{\mathcal{B}}_q$ is the intersection of the conics $q_H$, $\ensuremath{\mathcal{I}}_{\ensuremath{\mathcal{B}}_q}(2)$ is globally generated, hence $\ensuremath{\mathcal{B}}_q$ is contained in a complete intersection of two conics. Recalling that $\ensuremath{\mathcal{R}} \subset \ensuremath{\mathcal{B}}_q$, it follows that $r \leq 4$ and that $\ensuremath{\mathcal{R}}$ does not contain three collinear points. We conclude by lemma \ref{rleq4}.\ensuremath{\diamondsuit}
\begin{lemma} \label{Bq=1,DnotinS} Assume $dim(\ensuremath{\mathcal{B}}_q)=1$, that $\ensuremath{\mathcal{B}}_q$ contains a line $D$ and that $D \not \subset S$. In this case $d \leq s$. \end{lemma}
\textit{Proof:} Under these assumptions, we claim that the general curve $C$ is smooth. Indeed, let $|L|$ be the linear system cut on $S$ by the hyperplanes containing $D$ and let $B=D \cap S=\{p_1,\ldots,p_r\}$. Clearly $B$ is the base locus of $|L|$ and the general element of $|L|$ is smooth out of $B$. If all curves in $|L|$ were singular at a point $p_i \in B$, we would have $T_{p_i}S \subset H$, $\forall H \supset D$. However, the intersection of all $H \supset D$ is nothing but $D$, so this is absurd. The same holds for all $p \in B$. It follows that the singular curves in $|L|$ form a proper closed subset of $|L|$.\\ Since $D$ is contained in $\ensuremath{\mathcal{B}}_q$, $D$ is $(s-1)$-uple for $\Sigma$ (see Lemma \ref{lem-multiple}). Let $H$ be a general hyperplane through $D$. Then $F=\Sigma \cap H$ is a degree $s$ surface of $\mathbb{P}^3$ with a line, $D$, of multiplicity $(s-1)$. Such a surface is a projection of a degree $s$ surface $F' \subset \mathbb{P}^4$. We have $S \cap H = C \subset F$ and we may assume $C$ smooth and irreducible. Moreover, since $q(S)=0$, $C$ is linearly normal in $\mathbb{P}^3$. Now $C$ is the isomorphic projection of a degree $d$ curve $C' \subset F'$ (in particular $\ensuremath{\mathcal{O}}_{C'}(1) \cong \ensuremath{\mathcal{O}}_C(1)$). Hence $C'$ is degenerate in $\mathbb{P}^4$ and this implies $d \leq s$.\ensuremath{\diamondsuit}
\begin{lemma} \label{Bq1=D,DinS} Assume that the one-dimensional part of $\ensuremath{\mathcal{B}}_q$ is a line $D$ and that $D \subset S$. Then $r \leq 1$ and lemma \ref{rleq4} applies. \end{lemma} \textit{Proof:} In this case $q_H=D \cup D_H$ and the ${D_H}'s$ are moving. The base locus of the ${D_H}'s$, $\ensuremath{\mathcal{D}}$, is either empty or a point, $b$. If $\ensuremath{\mathcal{D}}=\emptyset$, then $Y_H \cap \Pi \subset P$ and it follows that $r=0$. Hence we assume from now on that $\ensuremath{\mathcal{D}}=\{b\}$.\\ If $b \in D$ we have $\ensuremath{\mathcal{B}}_q=D \cup \eta_b$, where $\eta_b$ is the first infinitesimal neighbourhood of $b$ in $\Pi$. Let $x \in Y_H \cap \Pi$ for a general $H$ and let $\xi_x$ be the zero-dimensional subscheme of $Y_H \cap \Pi$ supported at $x$. We will prove the following:\\ \textit{Claim:} Let $x \in Y_H \cap \Pi$; if $\xi_x \not \subset P$ then $x=b$ and, moreover, $\xi_x \subset \eta_b$ if $b \in D$.\\ \textit{Proof of the Claim:} We have $\xi_x \subset S \cap \Pi$. If $\xi_x \not \subset P$ then its residual scheme with respect to $P$ is nonempty and so is, a fortiori, the residual scheme of $Z=S \cap \Pi$ with respect to $P$, namely $\ensuremath{\mathcal{R}}$. So $\ensuremath{\mathcal{R}}$ has a component, $\ensuremath{\mathcal{R}}_x$, supported at $x$. Since $\ensuremath{\mathcal{R}} \subset \ensuremath{\mathcal{B}}_q$, we conclude that $x=b$ or $x \in D$.\\ If $x=b$ and $b \not \in D$, we are done. So we assume $x \in D$. Since $\xi_x \subset q_H$, if $x \neq D \cap D_H$, then $\xi_x \subset D \subset P$: absurd. Thus $x=D \cap D_H$. If $b \in D$ this implies $x=b$ and $\xi_x \subset \eta_b$ (because $\xi_x \subset q_H$). So we may assume $b \not \in D$. In this case the ${D_H}'s$ have no base point on $D$, thus if $H$ is general: $\ensuremath{\mathcal{R}} \cap D \cap D_H = \emptyset$: contradiction ($x \in \ensuremath{\mathcal{R}} \cap D \cap D_H$).\\ \\ We come back to the proof of the lemma. If $\ensuremath{\mathcal{D}}=\{b\}$ and $b \not \in D$ then $Y_H \cap \Pi \subset P$ except for at most one point ($b$), so $Y_HP \geq d-p-1$ and $r \leq 1$.\\ If $\ensuremath{\mathcal{D}}=\{b\}$ and $b \in D$, then $\forall x \in Y_H \cap \Pi$, $\xi_x \subset \eta_b$, and the residual scheme of $\xi_x$ with respect to $D$ is contained in the residual scheme of $\eta_b$ with respect to $D$, which is $b$. This shows that $Y_HP \geq d-p-1$, hence $r \leq 1$.\ensuremath{\diamondsuit}
\begin{lemma} \label{Bq=q,qinS} Assume that $\ensuremath{\mathcal{B}}_q$ is a conic $q$ ($q_H=q$ for all $H$). If $q \subset S$, then $r=0$ and lemma \ref{rleq4} applies. \end{lemma} \textit{Proof:} In this case $q \subset P$. Since $Y_H \cap \Pi \subset q_H$, we have $Y_H \cap \Pi \subset P$, hence $Y_HP=d-p$, i.e. $r=0$.\ensuremath{\diamondsuit}
\begin{lemma} \label{Bq=q,qnotinS} Assume that $\ensuremath{\mathcal{B}}_q$ is a conic $q$ and $q \not \subset S$. Then $d \leq max\{s,20\}$. \end{lemma} \textit{Proof:} If no component of $q$ is contained in $S$ (i.e. in $P$), then $Y_H \cap \Pi = Y_H \cap q$ is fixed (otherwise, as $H$ varies, the points of $Y_H \cap \Pi$ will cover a component of $q$). So $Y_H \cap q=\ensuremath{\mathcal{R}}$, i.e. $d-p=r$. Since $r=d-2p+P^2$ we get $P^2=p$ and $Y_HP=(H-P)P=0$, this means that $C_H=Y_H \cup P$ is disconnected: absurd.\\ It follows that $q=D \cup L$ with $D \subset S$ and $q \not \subset S$. If $L \neq D$ we have $L \subset \ensuremath{\mathcal{B}}_q$, $L \not \subset S$ and we conclude that $d \leq s$ thanks to lemma \ref{Bq=1,DnotinS}.\\ So we may assume $q=2D$, $D \subset P \subset S$ but $2D \not \subset S$ ($2D$ means $D$ doubled in $\Pi$). In this case, for all $H$, $q_H=2D$, so $Q_H$ is tangent to $\Pi$ along $D$. This implies that, for a general $H$, $Q_H$ is either a cone or the union of two distinct planes through $D$. In this latter case $Y_H=P_1 \cup P_2$ and $Y_HD=P_1D+P_2D=d-p$. Since $Y_HD \subset Y_HP$ it follows that $r=0$ and we conclude with lemma \ref{rleq4}.\\ From now on we assume that for a general $H$, $Q_H$ is a cone and $D$ a ruling of $Q_H$. If $d-p$ is even, $Y_H$ is a complete intersection $(\frac{d-p}{2},2)$, then $p_a(Y_H)=\frac{d^2-2pd-4d+p^2+4p+4}{4}$ and so $\pi-1=\frac{d^2-2pd-4d+p^2+4p+4}{4} + \frac{p^2-3p+2}{2}+d-p-r-2$. Now $Y_H \cap D \subset Y_H \cap P$, then $Y_HD=\frac{d-p}{2} \leq d-p-r=Y_HP$, i.e. $r \leq \frac{d-p}{2}$ and it follows that $\pi-1 \geq \frac{d^2-2pd-4d+p^2+4p+4}{4} + \frac{p^2-3p+2}{2}+ \frac{d-p}{2}-2$. Now comparing this expression with $\pi-1 \leq \frac{d^2}{8}$ (we can suppose as usual $h^0(\ensuremath{\mathcal{I}}_C(3))=0$) we get: $6p^2-8p-4dp+d^2-4d \leq 0$. If $d \geq 21$ there are no values of $p$ satisfying the inequality, then $d \leq 20$.\\ If $d-p$ is odd, $Y_H$ is linked to a line by a complete intersection $(\frac{d-p+1}{2},2)$ and it turns out $p_a(Y_H)=\frac{d^2-2dp+p^2-4d+4p+3}{4}$. Since $Y_HD=\frac{d-p+1}{2} \leq Y_HP=d-p-r$ we have $r \leq \frac{d-p-1}{2}$. Hence we can write $\pi-1 \geq \frac{d^2-2dp+p^2-4d+4p+3}{4}+ \frac{p^2-3p+2}{2}+\frac{d-p+1}{2}-2$. If we compare this with $\pi-1 \leq \frac{d^2}{8}$ and arguing as before we obtain $d \leq 20$.\ensuremath{\diamondsuit}\\ \\
The proofs of Theorems \ref{th1} and \ref{th2} follow from \ref{small-genus}, \ref{rleq4}, \ref{Bq=0}, \ref{Bq=1,DnotinS}, \ref{Bq1=D,DinS}, \ref{Bq=q,qinS} and \ref{Bq=q,qnotinS}.
\begin{remark} \label{general} Actually we believe that there are very few smooth surfaces on such hypersurfaces. For example consider the following situation:\\ Assume that the blowing-up of $\Pi$, $\tilde{\Sigma}\to \Sigma$, yields a desingularization of $\Sigma$, so we have a double covering $T \to \Pi$ and $\tilde{S}$ mapping to $S$. Since $T$ and $\tilde{S}$ are two divisors on the smooth threefold $\tilde{\Sigma}$, if they intersect, they intersect along a curve. We conclude that $S \cap \Pi = P$ and all the points of $Y_H \cap \Pi$ lie on $P$.\\ Now assume that for general $H$, $Q_H$ is a smooth quadric. Observe that the $Q_H$ are parametrized by a smooth rational curve ($\simeq \mathbb{P}^1$). Let $\ensuremath{\mathcal{P}}$ denote the curve parametrizing the rulings of the quadrics $Q_H$. We get a degree two covering $f:\ensuremath{\mathcal{P}} \to \mathbb{P}^1$ which is ramified at the points corresponding to singular $Q_H$. Assume $\ensuremath{\mathcal{P}}$ is irreducible. With this assumption the curve $Y_H \subset Q_H$ has bidegree $(a,a)$ (otherwise following the $a$ ruling would yield a section of the covering, which is impossible since $g(\ensuremath{\mathcal{P}} )>0$ because $f$ is ramified in more than two points).\\ Now consider the exact sequence of residuation with respect to $\Pi$:\\ $$0 \to \ensuremath{\mathcal{I}} _{Y_H}(-1) \to \ensuremath{\mathcal{I}} _C \to \ensuremath{\mathcal{I}} _{P,\Pi} \to 0$$ Since $Y_H$ is a.C.M., it follows that $C = Y_H \cup P$ is a.C.M. too. Hence $S$ is a.C.M. and $h^0(\ensuremath{\mathcal{I}} _S(3)) \geq h^0(\ensuremath{\mathcal{I}} _C(3))\neq 0$. This implies $d(S) \leq 3s$. (Notice that we didn't assume $q(S)=0$.) Observe that the assumption that $S$ is smooth is necessary in order to apply Lemma \ref{lem1} and to conclude that $C = Y_H\cup P$ with $Y_H \subset Q_H$. \end{remark}
\begin{remark} There exist integral hypersurfaces in $\mathbb{P}^4$ such that the degree of the smooth surfaces contained in them is bounded. Indeed it is enough to take a non linearly normal hypersurface in $\mathbb{P}^4$, recalling that the only non linearly normal smooth surface in $\mathbb{P}^4$ is the Veronese. The simplest example is the Segre cubic hypersurface. The previous results seem to indicate that this behaviour can happen also on some linearly normal hypersurfaces. From a "codimension two" point of view this is in contrast with the following proposition. \end{remark} \begin{proposition} Let $S \subset \mathbb{P}^3$ be an integral surface, then $S$ contains smooth curves of arbitrarily high degree. \end{proposition}
\textit{Proof:} If $S$ has singular locus of dimension $\leq 0$, this follows from Bertini. If $Sing(S)$ has dimension 1, we consider the normalization $p:\tilde{S} \to S$ of $S$, then $dim(Sing(\tilde{S}))\leq 0$. Let $C$ be the non-normal locus in $S$, $D=p^{-1}(C)$. Let $\delta$ be a very ample linear system on $\tilde{S}$. The general $X \in \delta$ is smooth and doesn't pass through any singular point of $\tilde{S}$. We want to show that for $X \in \delta$ general, $p_|:X \to S$ is an embedding. Since $p$ is an isomorphism outside $D$, we only have to consider the points in $X \cap D$. Let $x \in C$, the curves of $\delta$ passing through two points of $p^{-1}(x)$ form a subspace of codimension $2$. Letting $x$ vary in $C$, we see that the curves of $\delta$ intersecting a fibre $p^{-1}(x)$ in more than one point constitute a subspace of codimension $\geq 1$, hence for general $X \in \delta$, $p_|:X \to S$ is injective.\\
Since there are only finitely many points where $dp$ has rank zero, we may assume that for $y\in D$, $dp_y:T_y\tilde{S} \to T_{p(y)}S$ has rank one. The curves of $\delta$ passing through $y$ and having tangent direction $Ker(dp_y)$ at $y$ form a subspace of codimension $2$ of $\delta$. Letting $y$ vary in $D$ we get a subspace of codimension $1$. So for general $X \in \delta$, $dp_|$ is everywhere injective.\ensuremath{\diamondsuit}
\noindent Address of the authors:\\ Dipartimento di Matematica\\ via Machiavelli, 35\\ 44100 Ferrara (Italy)\\ Email: [email protected] (Ph.E.), [email protected] (C.F.)
\end{document} | arXiv |
Quidditas
The Von Mangoldt Dirichlet Series
Will Hoffer
In what follows, let $p$ denote a prime number. The von Mangoldt function $\Lambda$ is defined by the following:
\[\Lambda(n) = \begin{cases} \log p & \text{ if }n=p^m\text{ is a prime power} \\ 0 & \text{ otherwise} \end{cases}\]
Our goal is to elucidate the connection between $\Lambda$, the Riemann zeta function $\zeta$, and the prime numbers. Today, we will consider the classical Dirichlet series whose coefficients are given by the von Mangoldt function:
\[f(s) = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}\]
We will show that this series is exactly given by (the negative of) the logarithmic derivative of the Riemann zeta function. This logarithmic derivative is given by:
\[\frac{d}{ds}\log \zeta(s) = \frac{\zeta'(s)}{\zeta(s)}\]
Main Theorem
The Dirichlet series whose coefficients are prescribed by the von Mangoldt function $\Lambda(n)$ is precisely the negative of the logarithmic derivative of the Riemann zeta function $\zeta$. To wit:
\[-\frac{\zeta'(s)}{\zeta(s)} = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}\]
By the Euler product formula for the Riemann zeta function, we have:
\[\zeta(s) = \prod_{p\in \mathscr{P}} \frac{1}{1-p^{-s}}\]
where the product is taken over all primes $p$, and absolutely converges when $\sigma=\Re(s)>1$.
Because the product is absolutely convergent, the following equivalent identity holds for the logarithm of the product:
\[\log \zeta(s) = \log \prod_{p\in \mathscr{P}} \frac{1}{1-p^{-s}} = -\sum_{p\in \mathscr{P}} \log(1-p^{-s})\]
Further, this latter series is absolutely convergent when $\sigma >1$. In particular, it follows that this sum is holomorphic in this domain, since it is an absolutely convergent sum of functions holomorphic in said half-plane. (Absolute convergence of the series is enough to ensure that the partial sums converge locally uniformly to a function which is holomorphic, by a theorem of Weierstrass in complex analysis.)
Therefore, term-by-term differentiation is valid for such an absolutely convergent holomorphic series. Thus we compute:
\[\begin{align*} -\frac{\zeta'(s)}{\zeta(s)} &= -\frac{d}{ds}\log \zeta(s) \\ &= \frac{d}{ds} \sum_{p\in \mathscr{P}} \log(1-p^{-s}) \\ &= \sum_{p\in \mathscr{P}} \frac{d}{ds} \log(1-p^{-s}) \\ &= \sum_{p\in \mathscr{P}} (1-p^{-s})^{-1}(- p^{-s})(-\log p) \\ &= \sum_{p\in \mathscr{P}} \log p \frac{p^{-s}}{1-p^{-s}} \end{align*}\]
Next, we note that $\frac{1}{p^\sigma}<1$ for any prime $p$ and real number $\sigma >1$. In particular, this ensures that the following geometric series is absolutely convergent:
\[\frac{1}{1-p^{-\sigma}} = \sum_{m=0}^\infty (p^{-\sigma})^m = \sum_{m=0}^\infty (p^m)^{-\sigma}\]
Therefore, the complex valued series identity also holds when $\sigma = \Re(s) >1 $:
\[\frac{1}{1-p^{-s}} = \sum_{m=0}^\infty (p^{-s})^m = \sum_{m=0}^\infty (p^m)^{-s}\]
Indeed, the sum of the absolute values of its terms is exactly the previous (real) series, which converges. We now tweak the identity slightly by multiplying through by the extra factor of $p^{-s}$:
\[\frac{p^{-s}}{1-p^{-s}} = \sum_{m=1}^\infty (p^m)^{-s}\]
Using this fact, we now return to our previous calculation. We find:
\[\begin{align*} -\frac{\zeta'(s)}{\zeta(s)} &= \sum_{p\in \mathscr{P}} \log p \frac{p^{-s}}{1-p^{-s}} \\ &= \sum_{p\in \mathscr{P}} \log p \sum_{m=1}^\infty (p^m)^{-s} \\ &= \sum_{p\in \mathscr{P}} \sum_{m=1}^\infty \frac{\log p}{(p^m)^s} \end{align*}\]
Now, this double sum has the precise effect of summing over every prime power of the form $p^m$ for some prime $p$ and positive integer $m$. This double sum can be interpreted as a particular ordering of a more general unordered sum over the countable set of all prime powers. As is proved in another post, absolute convergence of one ordering of a sum ensures that any rearrangement is also absolutely convergent and converges to the same value. Using this fact, we may write:
\[\sum_{p\in \mathscr{P}} \sum_{m=1}^\infty \frac{\log p}{(p^m)^s} = \sum_{n=p^m} \frac{\log p}{n^s}\]
where the latter sum is over natural numbers which are prime powers of the form $p^m$, with positive integer $m$.
By using the von Mangoldt function, we can convert this sum to a Dirichlet series which sums over the natural numbers. As $\Lambda(n)=0$ for any term which is not a prime power, the two series will have exactly the same terms with the same weights. Thus, we may write:
\[\sum_{n=p^m} \frac{\log p}{n^s} = \sum_{n=p^m} \frac{\Lambda(n)}{n^s} = \sum_{n=1}^\infty \frac{\Lambda(n)}{n^s}\]
Therefore, we may complete the proof of the identity by combining our previous calculations with these two identities.
$ \blacksquare $
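As a quick numerical sanity check (not needed for the proof), we can compare a truncation of the Dirichlet series against $-\zeta'(s)/\zeta(s)$. The snippet below uses the mpmath library; the cutoff N is arbitrary, so the two printed values agree only up to the size of the tail of the series.

```python
# Numerical sanity check of  -zeta'(s)/zeta(s) = sum_{n>=1} Lambda(n)/n^s.
# Requires mpmath; the cutoff N is arbitrary, so the agreement is only
# approximate (up to the tail of the series).
from mpmath import mp, mpf, zeta, diff, log

mp.dps = 25  # working precision in decimal digits

def mangoldt(n):
    """von Mangoldt function: log p if n = p^m for some prime p, else 0."""
    if n < 2:
        return mpf(0)
    p, m, d = None, n, 2
    while d * d <= m:          # find the smallest prime factor of n
        if m % d == 0:
            p = d
            break
        d += 1
    if p is None:              # n itself is prime
        p = m
    while m % p == 0:          # n is a prime power iff only p divides it
        m //= p
    return log(p) if m == 1 else mpf(0)

def truncated_series(s, N=10000):
    return sum(mangoldt(n) / mpf(n) ** s for n in range(2, N + 1))

s = mpf(2)
print(-diff(zeta, s) / zeta(s))   # -zeta'(2)/zeta(2) = 0.5699...
print(truncated_series(s))        # partial sum: agrees to a few decimal places
```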
\begin{document}
\begin{frontmatter}
\title{Imperative Programs as Proofs via Game Semantics \footnote{NOTICE: this is the author’s version of a work that was accepted for publication in Annals of Pure and Applied Logic. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Annals of Pure and Applied Logic, volume 164, issue 11 at \url{http://dx.doi.org/10.1016/j.apal.2013.05.005}} }
\author[label2]{Martin Churchill} \author[label1]{Jim Laird} \author[label1]{Guy McCusker}
\address[label1]{University of Bath} \address[label2]{Swansea University}
\begin{abstract}
Game semantics extends the Curry-Howard isomorphism to a three-way
correspondence: proofs, programs, strategies. But the universe of
strategies goes beyond intuitionistic logics and lambda calculus, to
capture stateful programs. In this paper we describe a logical
counterpart to this extension, in which proofs denote such
strategies. The system is expressive: it contains all of the
connectives of Intuitionistic Linear Logic, and first-order
quantification. Use of Laird's \emph{sequoid} operator allows proofs
with imperative behaviour to be expressed. Thus, we can embed
first-order Intuitionistic Linear Logic into this system, Polarized
Linear Logic, and an imperative total programming language.
The proof system has a tight connection with a simple game model,
where games are forests of plays. Formulas are modelled as games,
and proofs as history-sensitive winning strategies. We provide a
strong \emph{full completeness} result with respect to
this model: each finitary strategy is the denotation of a unique
analytic (cut-free) proof. Infinite strategies correspond to
analytic proofs that are infinitely deep. Thus, we can normalise
proofs, via the semantics.
\end{abstract}
\begin{keyword} game semantics \sep full completeness \sep history-sensitive strategies \sep sequentiality
\MSC[2010] 68Q55 \sep 03B70 \sep 03F52 \sep 18C50
\end{keyword}
\end{frontmatter}
\section{Introduction}
The Curry-Howard isomorphism between proofs in intuitionistic logics and functional programs is a powerful theoretical and practical principle for specifying and reasoning about programs. Game semantics provides a third axis to this correspondence: each proof/program at a given type denotes a strategy for the associated game, and typically a \emph{full completeness} result establishes that this correspondence is also an isomorphism \cite{AJ_MLL}. However, in languages with side-effects such as mutable state it is evident that there are many programs which do not correspond to intuitionistic proofs. Game semantics has achieved notable success in providing models of such programs \cite{AMc_LSS,AHM_GR,Lai_FLC}, in which they typically denote ``history-sensitive'' strategies --- strategies which may break the constraints of innocence \cite{HO_PCF} or history-freeness \cite{AJ_MLL} imposed in fully complete models of intuitionistic or linear logic. The full completeness of these models means there is a precise correspondence between programs and history-sensitive strategies, which raises the question: is there a logic to flesh out the proofs/imperative programs/history-sensitive strategies correspondence?
In this paper we present a first-order logic, \textsf{WS1}, and a games model for it in which proofs denote history-sensitive strategies. Thus total imperative programs correspond, via the game semantics, to proofs in \textsf{WS1}. Moreover, because \textsf{WS1} is more expressive than the typing system for a typical programming language, it can express finer behavioural properties of strategies. In particular, we can embed first-order intuitionistic logic with equality, Polarized Linear Logic, and a finitary imperative language with ground store, coroutines and some infinite data structures. We also take first steps towards answering some of the questions posed by the logic and its semantics: Are there any formulas which only have `imperative proofs', but no proofs in a traditional `functional' proof system? Can we use the expressivity of \textsf{WS1} to specify imperative programs?
\subsection{Related Work}
The games interpretation of linear logic upon which \textsf{WS1} is
based was introduced by Blass in a seminal paper \cite{Bla_LL}. Blass also gives instances of history sensitive strategies which are not denotations of linear logic proofs; these do, however, correspond to proofs in \textsf{WS1}. The particular symmetric monoidal closed category of games underlying our semantics has been studied extensively from both logical and programming perspectives \cite{Cur_SS,Lam_PLL,Hy_GS}. Longley's project to develop a programming language based on it \cite{Long_PLGM} may be seen as complementary to our aim of understanding it from a logical perspective.
Several logical systems have taken games or interaction as a semantic basis yielding a richer notion of meaning than classical or intuitionistic truth, including Ludics \cite{Gir_LS} and Computability Logic \cite{JCL}. The latter also provides an analysis of Blass's examples, suggesting further connections with our logic, although there is a difference of emphasis: the research described here is focused on investigating the structural properties of the games model on which it is based.
Perhaps closest in spirit to our work is tensorial logic, introduced in~\cite{MT_RM}. Like \textsf{WS1}, tensorial logic is directly inspired by the structure of strategies in game semantics, and in~\cite{MelliesPA:gamssd}, Melli\`es demonstrates a tight correspondence between the logic and categories of innocent strategies on dialogue games. Our focus in this paper is somewhat different, because we are primarily concerned with the history-sensitive behaviour characteristic of (game semantics of) imperative programs, rather than the purely functional programs that denote innocent strategies.
In \cite{CC_CGC} a proof theory for Conway games is presented, where formulas are the game trees themselves. In \cite{G_LLB}, the $\lambda \overline{\lambda}$-calculus is presented, where individual moves of game semantics are represented by variables and binders. Both settings deal with history-sensitive strategies, and have dynamics corresponding to composition of strategies.
A quite different formalisation of game semantics for first order logic is given in \cite{Lau_FOL}, also with a full completeness result.
\subsection{Contribution}
The main contribution of this paper is to present an expressive logical system and its semantics, in which proofs correspond to history sensitive strategies. Illustrating the expressive power of this system, we show how proofs of intuitionistic first-order logic, Polarized Linear Logic and imperative programming constructs may be embedded in it. We also demonstrate how formulas in the logic can be used to represent some properties of imperative programs: for example, we describe a formula for which any proof corresponds to a well-behaved (single write) Boolean storage cell.
The interpretation of \textsf{WS1} includes some interesting developments of game semantics. In particular, the exponentials are treated in a novel way: we use the fact that the semantic exponential introduced in \cite{Hy_GS} is a final coalgebra, and reflect this explicitly in the logic in the style of \cite{Cla_FIX}. This formulation allows us to express the usual exponential introduction rules (promotion and dereliction) but also proofs that correspond to strategies on $!A$ that act differently on each interrogation, such as the reusable Boolean reference cell. Another development is the interpretation of first-order logic with equality. A proof corresponds to a family of winning strategies --- one for each possible interpretation of the atoms determined by a standard notion of ${\mathcal{L}}$-structure --- which must be \emph{uniform} across ${\mathcal{L}}$-structures. This notion of uniformity is precisely captured by the requirement that strategies are \emph{lax natural transformations} between the relevant functors.
The main technical results of this paper concern the sharp correspondence between proofs and strategies: \emph{full completeness} results. We show that any bounded uniform winning strategy is the denotation of a unique (cut-free) \emph{analytic proof}. In the exponential-free fragment, where all strategies are bounded, it follows that many rules such as cut are admissible; and it allows us to normalise proofs to analytic proofs via the semantics.
For the full logic, since the exponentials correspond to final coalgebras, proofs can be unfolded to infinitary form. Extending semantics-based normalisation to the full \textsf{WS1}, the resulting normal forms are \emph{infinitary} analytic proofs.
\section{Games and Strategies}
Our notion of game is essentially that introduced by \cite{Bla_LL}, and similar to that of \cite{AJ_MLL,Lam_SGL}, augmented with winning conditions introduced as in \cite{Hy_GS}. We make use of the categorical structure on games and strategies first introduced in \cite{JoyalA:remslt}.
Informally, a game is a tree where Player and Opponent own alternate nodes, together with a polarity specifying which protagonist owns the starting node. A play proceeds down a particular branch, with Opponent/Player choosing the subtree for nodes they control. A strategy for Player specifies which choice Player should make in response to Opponent's moves so far. The winner of a finite play is the last protagonist to play a move. The winner of an infinite play is specified by a winning condition for each game.
If $A$ is a set, let $A^\ast$ denote the free monoid (set of sequences) over $A$, $A^\omega$ the set of infinite sequences over $A$, and $\epsilon$ the empty sequence. We write $s \sqsubseteq t$ if $s$ is a prefix of $t$, and $s \sqsubset t$ if $s$ is a strict (finite) prefix of (possibly infinite)~$t$. If $X \subseteq A^\ast$, write $\overline{X} = \{ s \in A^\omega : \forall t \sqsubset s, t \in X \}$.
\begin{definition} A \emph{game} is a tuple $(M_A, \lambda_A, b_A, P_A, W_A)$ where \begin{itemize}
\item $M_A$ is a set of moves
\item $\lambda_A : M_A \rightarrow \{ O, P \}$
\begin{itemize} \item We call $m$ an
\emph{\textsf{O}-move} if $\lambda_A(m) = O$ and a
\emph{\textsf{P}-move} if $\lambda_A(m) = P$. \end{itemize}
\item $b_A \in \{ O, P \}$ specifies a starting player
\begin{itemize}
\item We call $s \in M_A^\ast$ \emph{alternating} if $s$ starts with
a $b_A$-move and alternates between \textsf{O}-moves and
\textsf{P}-moves. Write $M_A^\varoast$ for the set of such
sequences.
\end{itemize}
\item $P_A \subseteq M_A^\varoast$ is a nonempty prefix-closed set of
valid plays.
\item $W_A \subseteq \overline{P_A}$ represents the set of infinite
plays that are P-winning; we say an infinite play is O-winning if
it is not P-winning. \end{itemize} \end{definition} \label{natgame}
For finite plays, the last player to play a move wins: let $W_A^\ast = W_A \cup E_A$ where $E_A$ is the set of plays that end in a P-move. We will call a game $A$ \emph{negative} if $b_A = O$ and \emph{positive} if $b_A = P$. We write $A, B, C, \ldots$ for arbitrary games; $L, M, N, \ldots$ for arbitrary negative games and $P, Q, R, \ldots$ for arbitrary positive games.
\label{seqnot}
\begin{definition} If $A$ is a game, we define its \emph{negation} by changing its polarity, and swapping its Player/Opponent labelling. Define $\neg : \{O , P\} \rightarrow \{O , P\}$ by $\neg(O) = P$ and $\neg(P) = O$.
$$A^\perp = (M_A, \neg \circ \lambda_A, \neg b_A , P_A, \overline{P_A} - W_A).$$ Negation is evidently an involutive bijection between negative and positive games.
\end{definition}
\begin{definition} A \emph{strategy} $\sigma$ for a game $(M_A, \lambda_A, b_A, P_A, W_A)$ is a subset of $P_A$ (a set of traces) satisfying: \begin{itemize}
\item If $sa \in \sigma$, then $\lambda_A(a) = P$
\item If $sab \in \sigma$, then $s \in \sigma$
\item If $sa,sb \in \sigma$, then $a = b$
\item If $\sigma = \varnothing$ then $b_A = P$, and if $\epsilon \in
\sigma$ then $b_A = O$. \end{itemize} \end{definition}
\noindent We say a strategy $\sigma$ is \emph{bounded} if $\exists k \in \mathbb{N}
. \forall s \in \sigma . |s| \leq k$; in which case we write $\mathsf{depth}(\sigma)$ for the smallest such $k$ (the length of the longest play in $\sigma$).
\begin{definition}
A strategy on a game $A$ is \emph{total} if it is nonempty and
whenever $s \in \sigma$ and $sa \in P_A$, there is some $b \in M_A$
such that $sab \in \sigma$. A total strategy $\sigma$ is
\emph{winning} if whenever $s \in \overline{P_A}$ and all prefixes
of $s$ ending in a \textsf{P}-move are in $\sigma$, then $s \in
W_A$. \end{definition}
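As an informal aid (not part of the formal development), the data involved in these definitions can be sketched in Haskell, with games as move-labelled trees and deterministic strategies as partial next-move functions; winning conditions on infinite plays and the alternation constraint are left implicit, and all names here are ours.
\begin{verbatim}
-- Illustrative sketch only: finite game trees and deterministic strategies.
import qualified Data.Map as M

data Polarity = O | P deriving (Eq, Show)

flipPol :: Polarity -> Polarity
flipPol O = P
flipPol P = O

-- A game: who is to move, and for each available move the residual game.
-- (Alternation and winning conditions are not enforced in this sketch.)
data Game m = Game { toMove :: Polarity, next :: M.Map m (Game m) }

-- A deterministic Player strategy: given the play so far, optionally reply.
type Strategy m = [m] -> Maybe m

-- Example: a negative game with one Opponent move 'q' and one Player reply 'a'.
exampleG :: Game Char
exampleG = Game O (M.fromList
  [('q', Game P (M.fromList [('a', Game O M.empty)]))])

-- A total strategy on exampleG: always answer q with a.
exampleSigma :: Strategy Char
exampleSigma s = if not (null s) && last s == 'q' then Just 'a' else Nothing
\end{verbatim}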
\subsection{Connectives}
\label{gameconnectives}
We next describe operations on games, which will correspond to connectives in our logic. These come in dual pairs, determined by involutive negation.
First, some notation. If $X$ and $Y$ are sets, let $X + Y = \{ \mathsf{in}_1(x) : x \in X \} \cup \{ \mathsf{in}_2(y) : y \in Y \}$. We use standard notation $[f,g]$ for copairing. If $s \in (X +
Y)^\ast$ or $s \in (X + Y)^\omega$ then $s|_i$ is the subsequence of $s$ consisting of elements of the form $\mathsf{in}_i(z)$. If $X_1
\subseteq X^\ast$ and $Y_1 \subseteq Y^\ast$ let $X_1 \| Y_1 = \{ s
\in (X+Y)^\ast : s|_1 \in X_1 \wedge s|_2 \in Y_1 \}$. If $X_1
\subseteq X^\omega$ and $Y_1 \subseteq Y^\omega$ let $X_1 \| Y_1 = \{
s \in (X+Y)^\omega : s|_1 \in X_1 \wedge s|_2 \in Y_1 \}$.
\paragraph{Empty Game} We define a negative game with no moves: $$\mathbf{1} = (\emptyset, \emptyset, O, \{\epsilon\}, \emptyset).$$ There is one strategy on $\mathbf{1}$ given by $\{ \epsilon \}$, and this strategy is total (and winning, as $\overline{P_\mathbf{1}}$ is empty).
There is one strategy, $\emptyset$, on the empty \emph{positive} game $\mathbf{0} = \mathbf{1}^\perp$. This strategy is not total (intuitively, it is Player's turn to play first but he has no moves to play).
\paragraph{One-move Game} We write $\bot$ for the negative game with a
single move $q$ and maximal play consisting of $q$: $$\bot = (\{q\}, \{ q \mapsto O\} , O, \{ \epsilon, q \} , \emptyset).$$ There is a single strategy $\{ \epsilon \}$ on $\bot$; this is not total.
We write $\top$ for the positive game with a single move, $\bot^\perp$.
There are two strategies on $\top$: $\varnothing$ (which is evidently not total) and $\{q\}$ which is total (and thus, trivially winning).
\paragraph{Disjoint Union} The negative game $L \& N$ is played over the disjoint union of the moves of $L$ and $N$: a play in this game is either a (tagged) play in $L$ or a (tagged) play in $N$. A play is $P$-winning if it is a $P$-winning play from $L$ or a $P$-winning play from $N$. Thus, on Opponent's first move he chooses to play either in $L$ or $N$, and thereafter play remains in that component. Formally, define
$$L \& N = (M_L + M_N,[\lambda_L , \lambda_N], O, P_L +^\ast P_N, \{ \mathsf{in}_1^\omega(s) : s \in W_L \} \cup \{ \mathsf{in}_2^\omega(s) : s \in W_N \})$$
where $X_1 +^\ast Y_1 = \{ s \in X_1 \| Y_1 : s|_1 = \epsilon \vee s|_2 = \epsilon \}$ if $X_1 \subseteq X^\ast$ and $Y_1 \subseteq Y^\ast$, and if $s \in X_i^\ast$ (resp. $X_i^\omega$) we write $\mathsf{in}_i^\ast(s)$ (resp. $\mathsf{in}_i^\omega(s)$) for the corresponding sequence in $(X_1 + X_2)^\ast$ (resp. $(X_1 + X_2)^\omega$).
A (winning) strategy on $L \& N$ corresponds to a pairing of a (winning) strategy on $L$ with a (winning) strategy on $N$ --- hence the identification of this connective with the ``with'' of linear logic.
Similarly, the positive game $Q \oplus R = (Q^\perp \& R^\perp)^\perp$ corresponds to a disjoint union of plays from $Q$ and $R$ where Player's first move constitutes a choice to play either in $Q$ or $R$. An infinite play in $Q \oplus R$ is P-winning if it is P-winning in the relevant component. Thus a
winning strategy on $Q \oplus R$ corresponds to either a winning strategy on $Q$ or a winning strategy on $R$.
We may form any set-indexed conjunctions and disjunctions in this way. Let $X$ be a set and $\{ N_x : x \in X \}$ a family of negative games indexed by $X$. We define the game $\prod_{x \in X} N_x$ by \begin{small}$$(\sum_{x \in X} M_{N_x}, \mathsf{in}_x(m) \mapsto \lambda_{N_x}(m),O,\{ \mathsf{in}_x^\ast(s) : x \in X , s \in P_{N_x} \},\{ \mathsf{in}_x^\omega(s) : x \in X, s \in W_{N_x} \}).$$\end{small} If $\{ Q_x : x \in X \}$ is a family of positive games then $\bigoplus_{x \in X} Q_x
= (\prod_{x \in X} N_x^\perp)^\perp$.
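Continuing the illustrative Haskell sketch above (and still ignoring winning conditions), negation and the disjoint-union connectives act directly on the tree representation; the component tags play the role of the injections $\mathsf{in}_1$, $\mathsf{in}_2$.
\begin{verbatim}
-- Sketch, extending the Game type above; winning conditions are omitted.
dual :: Game m -> Game m
dual (Game b ts) = Game (flipPol b) (M.map dual ts)

-- Retag every move in a game.
relabel :: Ord m' => (m -> m') -> Game m -> Game m'
relabel f (Game b ts) = Game b (M.mapKeys f (M.map (relabel f) ts))

-- 'With': Opponent's first move selects one of two negative components.
withG :: (Ord m1, Ord m2) => Game m1 -> Game m2 -> Game (Either m1 m2)
withG (Game O ts1) (Game O ts2) =
  Game O (M.union (M.mapKeys Left  (M.map (relabel Left)  ts1))
                  (M.mapKeys Right (M.map (relabel Right) ts2)))
withG _ _ = error "withG expects two negative games"

-- 'Plus' is its De Morgan dual: Q (+) R = (Q^perp & R^perp)^perp.
plusG :: (Ord m1, Ord m2) => Game m1 -> Game m2 -> Game (Either m1 m2)
plusG q r = dual (withG (dual q) (dual r))
\end{verbatim}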
\paragraph{Symmetric Merge} If $L$ and $N$ are negative games, a play in the negative game $L \otimes N$ is an \emph{interleaving} of a play in $L$ with a play in $N$.
Define \begin{small}$$L \otimes N = (M_L + M_N, [\lambda_L , \lambda_N], O,
(P_L \| P_N) \cap M_{L \otimes N}^\varoast, \{ s \in \overline{P_{L \otimes
N}} : s|_1 \in W_L^\ast \wedge s|_2 \in W_N^\ast \}).$$\end{small} The fact that the play restricted to each component must be alternating, and that the play overall must be alternating, ensures that only Opponent may switch between components. This operation may be used to interpret the ``times'' of linear logic \cite{Bla_LL}.
An infinite play in $L \otimes N$ is P-winning if its restriction to $L$ is P-winning and its restriction to $N$ is P-winning.
Similarly, if $Q$ and $R$ are positive games, plays in the positive game $Q \bindnasrepma R = (Q^\perp \otimes R^\perp)^\perp$ consist of interleavings of plays in $Q$ and $R$ in which Player may switch between the two components. An infinite play in $Q \bindnasrepma R$ is P-winning if its restriction to $Q$ is P-winning or its restriction to $R$ is P-winning.
\paragraph{Left Merge} Let $A$ be a game of polarity $a$ (positive or negative), and $N$ a negative game. The game $A \oslash N$ has polarity $a$: a play in this game is an interleaving of a play in $A$ with a play in $N$ \emph{such that the first move, if any, is in $A$}. An infinite play in $A \oslash N$ is P-winning if both of its restrictions are P-winning. Formally, define \begin{small}$$A \oslash N = (M_A + M_N, [ \lambda_A , \lambda_N ] , b_A, (P_A \|_L P_N) \cap M_{A \oslash N}^\varoast, \{ s \in \overline{P_{A \oslash N}} : s|_1 \in W_A^\ast \wedge s|_2 \in W_N^\ast \}).$$\end{small} where
$X_1 \|_L Y_1
= \{ s \in X_1 \| Y_1 : s|_1 = \epsilon \Rightarrow s|_2 = \epsilon \}$. Observe that it is Opponent who switches between components: if $A$ is negative then $A \oslash N$ consists of the plays in $A \otimes N$ which start in $A$ (or are empty). This connective on games, the \emph{sequoid}, was introduced in \cite{Lai_HOS} and its properties can be used to model stateful effects \cite{Lai_HOS,Lai_FPC}.
If $Q$ is a positive game, the game $A \lhd Q = (A^\perp \oslash Q^\perp)^\perp$ has the same polarity as $A$, and consists of interleavings of a play in $A$ and a play in $Q$, starting in $A$ and with Player switching between components and winning an infinite play if he wins in either $A$ or $Q$.
\paragraph{Exponentials} Let $N$ be a negative game. The negative game $\mathop{!}N$ consists of countably many copies of $N$, tagged with natural numbers. A play over $\mathop{!}N$ is an interleaving of plays in each copy, such that any move in $N_{i+1}$ is preceded by a move in $N_i$.
An infinite play is winning just if it is winning in each component. Define $$\mathop{!}N = (M_N \times \mathbb{N} , \lambda_N \circ \pi_1 , O, \{ s : \forall i . s|_i \in P_N \wedge (s|_i = \epsilon \Rightarrow s|_{i+1} = \epsilon) \} , \{ s : \forall i . s|_i \in W_N^\ast \}).$$ As with the tensor, there is an implicit switching condition: only Opponent can open new copies and switch between copies. This operation may be used to interpret the ``of course'' of linear logic \cite{Hy_GS}.
Dually, if $Q$ is a positive game, $\mathop{?}Q = (!Q^\perp)^\perp$ is the game consisting of an infinite number of copies of $Q$, where Player can spawn new copies and switch between them. An infinite play in $\mathop{?}Q$ is winning if it is winning in at least one component.
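The switching and opening discipline of the exponential can also be stated operationally. The following self-contained Haskell fragment (again only an illustration, with plays represented as lists of copy-indexed moves) checks one prefix-closed reading of the condition that copy $i+1$ of $\mathop{!}N$ is opened only after copy $i$.
\begin{verbatim}
import Data.List (nub)

-- A play over !N: each move carries the index of the copy it belongs to.
type TaggedPlay m = [(Int, m)]

-- The indices of the copies, in order of first use, must be 0,1,2,...:
-- equivalently, the first move of copy i+1 occurs after a move of copy i.
copiesOpenedInOrder :: TaggedPlay m -> Bool
copiesOpenedInOrder s = opened == take (length opened) [0 ..]
  where opened = nub (map fst s)
\end{verbatim}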
\subsubsection{Derived Connectives} We shall also make use of the following derived operations:
\paragraph{Lifts} We can use left merge to add a single move at the beginning of a game. If $N$ is a negative game, a play in the positive game $$\downarrow N = \top \oslash N$$ consists of a play in $N$ prefixed by an extra P-move. A strategy on $\downarrow N$ is either $\emptyset$ or corresponds to a strategy on $N$. A winning strategy on $\downarrow N$ corresponds to a winning strategy on $N$. If $P$ is a positive game, a play in the negative game $$\uparrow P = \bot \lhd P$$ consists of a play in $P$ prefixed by an extra O-move. A (winning) strategy on $\uparrow P$ corresponds to a (winning) strategy on $P$.
\paragraph{Affine Implication} If $M$ and $N$ are negative games, we may define
$$M \multimap N = N \lhd M^\perp.$$ A play in $M \multimap N$ consists of a play in $N$ interleaved with a play in $M^\perp$ (an `input version' of $M$), starting in $N$. It is winning if its restriction to $N$ is P-winning or its restriction to $M^\perp$ is P-winning (i.e. its restriction to $M$ is O-winning), agreeing with \cite{Hy_GS}.
\subsubsection{Isomorphisms of Games}
\label{isotrees}
Given two games $A$ and $B$, we say that $A$ and $B$ are \emph{forest
isomorphic} if $b_A = b_B$ and there is a bijection from $P_A$ to $P_B$ which is monotone with respect to the prefix order, and restricts to a bijection on the $P$-winning plays.
Some forest isomorphisms between games are given in Figure \ref{game-isos}. Each isomorphism $M \cong N$ gives rise to winning strategies $M \multimap N$ and $N \multimap M$, which are mutually inverse. Thus, winning strategies on $M$ are in bijective correspondence with winning strategies on $N$.
\begin{figure*}
\caption{Some Characteristic Isomorphisms of Games}
\label{game-isos}
\end{figure*}
\subsection{Imperative Objects as Strategies}
\label{bangsigma} \label{impobjstrat} We may model higher-order programming languages with imperative features by interpreting \emph{types} as games and \emph{programs} as strategies. (Such a semantics of a full object-oriented language, using essentially the notion of game described here, is described in \cite{Wol_OO}.) Here, we illustrate the capacity of our games and strategies to represent imperative objects by describing a strategy with the behaviour of a Boolean reference cell, on a game corresponding to the type of imperative Boolean variables --- essentially the \emph{cell} strategy first described, for a different notion of game, in \cite{AMc_LSS}. (We will later see how this strategy can be represented as a proof in our logic.)
Let $\mathbf{B} = \bot \lhd (\top \oplus \top)$ be the (negative) game of ``Boolean output'' --- this has one initial Opponent-move \texttt{q} and two possible Player responses, representing \texttt{True} or \texttt{False}. Let $\mathbf{Bi} = (\bot \& \bot) \lhd \top$ be the (negative) game of ``Boolean input'' which has two starting Opponent-moves \texttt{in(tt)} and \texttt{in(ff)} and one possible response to this, \texttt{ok}. The game $!(\mathbf{Bi} \& \mathbf{B})$ represents the type of a Boolean variable --- it is a product of a \texttt{write} method which accepts a Boolean input and a \texttt{read} method which on interrogation produces a Boolean output, under an exponential which allows these methods to be used arbitrarily many times.
The strategy \emph{cell} on this game represents a reference cell which accepts Boolean input on the left, and returns the last value written to it as output on the right (we assume it is initialised with $\mathtt{ff}$). For readability, we will omit the tags on the product and the exponential (since they can be inferred).
\[
\begin{array}{cccccl}
!(\mathbf{Bi} & \& & \mathbf{B}) \\
& & {\mathtt{q}} & \mathsf{O} \\
& & {\mathtt{ff}} & \mathsf{P} \\
{ \mathtt{in(tt)}} & & & \mathsf{O} \\
{\mathtt{ok}} & & & \mathsf{P} \\
& & {\mathtt{q}} & \mathsf{O} \\
& & {\mathtt{tt}} & \mathsf{P} \\
\end{array} \]
In contrast with the \emph{history-free} strategies which denote proofs of linear logic in the model of \cite{AJ_MLL}, this strategy is \emph{history-sensitive} --- the move prescribed by the strategy depends on the entire play so far. It is this property which allows the state of the object to be described implicitly, as in \cite{AMc_LSS}.
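To make this history-sensitivity concrete, the \emph{cell} strategy can be sketched as a next-move function in Haskell (ours, purely illustrative; copy indices and product tags are elided, as in the table above): the reply to a read request is computed by scanning the play so far for the last value written.
\begin{verbatim}
-- Sketch of the Boolean cell strategy as a function of the play so far.
data CellMove = Q | TT | FF | InTT | InFF | Ok
  deriving (Eq, Show)

cell :: [CellMove] -> Maybe CellMove
cell s = case lastMove s of
    Just Q    -> Just (lastWritten s)   -- read request: return current value
    Just InTT -> Just Ok                -- write request: acknowledge it
    Just InFF -> Just Ok
    _         -> Nothing                -- no Opponent move to respond to
  where
    lastMove [] = Nothing
    lastMove xs = Just (last xs)
    -- The current value is the last value written; the cell starts at ff.
    lastWritten xs = case filter (\v -> v == InTT || v == InFF) (reverse xs) of
      (InTT : _) -> TT
      _          -> FF
\end{verbatim}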
\section{The Logic \textsf{WS1}}
\subsection{Formulas of \textsf{WS1}}
The formulas of \textsf{WS1} are based on first-order linear logic, with some additional connectives, and subject to a notion of polarity. A \emph{first-order language} consists of: \begin{itemize} \item A collection of complementary pairs of predicate symbols $\phi$
(negative) and $\overline{\phi}$ (positive), each with an arity in
$\mathbb{N}$ such that $\mathsf{ar}(\phi) = \mathsf{ar}(\overline{\phi})$. This must
include the binary symbol $=$ (negative), and we write $\neq$ for
its complement \item A collection of function symbols, each with an arity. \end{itemize}
The negative and positive formulas of \textsf{WS1} over ${\mathcal{L}}$ are defined by the following grammar. $M,N$ range over negative formulas and $P, Q$ over positive formulas; variables range over some global set $\mathcal{V}$. \begin{center} \begin{tabular}{rlllllllllllllllll}
$M$, $N$ := & $\mathbf{1}$ & $|$ & $\bot$ &$|$ & $\phi(\overrightarrow{s})$ & $|$\\
& $M \otimes N$ & $|$ & $M \varoslash N$ & $|$ & $N \lhd P$ & $|$ \\
& $\forall x . N$ & $|$ & $M \& N$ & $|$ & $!N$ \\
$P$, $Q$ := & $\mathbf{0}$ & $|$ & $\top$ & $|$ & $\overline{\phi}(\overrightarrow{s})$ & $|$ \\
& $P \bindnasrepma Q$ & $|$ & $P \lhd Q$ & $|$ & $P \varoslash N$ & $|$\\
& $\exists x . P$ & $|$ & $P \oplus Q$ & $|$ & $?P$ \end{tabular} \end{center}
\noindent Here, $s$ ranges over $\mathcal{L}$-terms, $x$ over variables, and $\phi(\overrightarrow{s})$ over $n$-ary predicates $\phi$ applied to a tuple of terms $\overrightarrow{s} = (s_1 , \ldots , s_n)$.
The involutive negation operation $(\_)^\perp$ sends negative formulas to positive ones and \emph{vice versa} by exchanging each atom, unit or connective for its dual --- i.e. ${\mathbf{1}}$ for $\mathbf{0}$, $\bot$ for $\top$, $\phi(\overrightarrow{x})$ for $\overline{\phi}(\overrightarrow{x})$, $\otimes$ for $\bindnasrepma$, $\varoslash$ for $\lhd$, $\forall$ for $\exists$, $\&$ for $\oplus$ and $!$ for $?$.
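The polarised grammar and the involutive negation admit a direct functional reading. The following Haskell sketch (the datatype and constructor names are ours) enforces the polarity discipline with two mutually recursive datatypes, and implements $(\_)^\perp$ by exchanging each connective for its dual.
\begin{verbatim}
-- Illustrative sketch of WS1 formulas; terms and atoms are plain strings.
type Var  = String
type Term = String

data Neg = One | Bot | NAtom String [Term]
         | Tens Neg Neg | Seqo Neg Neg | TriN Neg Pos   -- M (x) N, M (/) N, N <| P
         | All Var Neg  | With Neg Neg | OfCourse Neg   -- forall, &, !
  deriving (Eq, Show)

data Pos = Zero | Top | PAtom String [Term]
         | Par Pos Pos | TriP Pos Pos | SeqP Pos Neg    -- P par Q, P <| Q, P (/) N
         | Ex Var Pos  | Plus Pos Pos | WhyNot Pos      -- exists, (+), ?
  deriving (Eq, Show)

dualN :: Neg -> Pos
dualN One          = Zero
dualN Bot          = Top
dualN (NAtom a ts) = PAtom a ts
dualN (Tens m n)   = Par  (dualN m) (dualN n)
dualN (Seqo m n)   = TriP (dualN m) (dualN n)
dualN (TriN n p)   = SeqP (dualN n) (dualP p)
dualN (All x n)    = Ex x (dualN n)
dualN (With m n)   = Plus (dualN m) (dualN n)
dualN (OfCourse n) = WhyNot (dualN n)

dualP :: Pos -> Neg
dualP Zero         = One
dualP Top          = Bot
dualP (PAtom a ts) = NAtom a ts
dualP (Par p q)    = Tens (dualP p) (dualP q)
dualP (TriP p q)   = Seqo (dualP p) (dualP q)
dualP (SeqP p n)   = TriN (dualP p) (dualN n)
dualP (Ex x p)     = All x (dualP p)
dualP (Plus p q)   = With (dualP p) (dualP q)
dualP (WhyNot p)   = OfCourse (dualP p)
\end{verbatim}
Involutivity amounts to \texttt{dualP . dualN = id} and \texttt{dualN . dualP = id}, which follows by a routine induction.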
\subsubsection{Interpreting Formulas as Games} We may interpret each positive formula as a positive game, and each negative formula as a negative game, by fixing a truth assignment for the atomic formulas via a standard notion of first-order structure. \begin{definition}
An $\mathcal{L}$-structure $L$ is a set $|L|$ together with an
interpretation function $I_L$ sending: \begin{itemize} \item each predicate symbol (with
arity $n$) to a function $|L|^n \rightarrow \{\mathtt{tt},\mathtt{ff}\}$ such that
$I_L(\phi)(\overrightarrow{a}) \not =
I_L\left(\overline{\phi}\right)(\overrightarrow{a})$ for all
$\vec{a}$ and $I_L(=)(a,b) = \mathtt{tt}$ iff $a = b$; \item each function
symbol $f$ (with arity $n$) to a function $I_L(f) : |L|^n
\rightarrow |L|$. \end{itemize} For any $X \subseteq \mathcal{V}$, an
\emph{$\mathcal{L}$-{model} over $X$} is a pair $(L,v)$ where $L$ is
an $\mathcal{L}$-structure and $v : X \rightarrow |L|$ a valuation
function, yielding an assignment of truth values to all atomic
formulas with variables in $X$. \end{definition}
Given an $\mathcal{L}$-model $(L,v)$ over $X$, we may interpret each formula $A$ with free variables in $X$ as a game $\llbracket A\rrbracket(L,v)$ as follows (a small functional sketch of the atomic and unit cases is given after this interpretation): \begin{itemize} \item Each of the units and connectives
$\otimes$,$\bindnasrepma$,$\oslash$,$\lhd$,$\mathbf{1}$,$\mathbf{0}$,$\top$,$\bot$,$!$,$?$,$\&$,$\oplus$
is interpreted as the corresponding operation on games from Section
\ref{gameconnectives}, lifted to an action on families of games. \item Positive atoms which are assigned \emph{true} in $(L,v)$ are interpreted as the game with a single (Player) move ($\top$); positive atoms which are assigned \emph{false} are interpreted as the game with no moves ($\mathbf{0}$). Conversely, negative atoms which are assigned \emph{true} in $(L,v)$ are interpreted as the empty game ($\mathbf{1}$), whilst negative atoms which are assigned \emph{false} are interpreted as the game with a single Opponent move ($\bot$). \item Quantifiers are interpreted as additive conjunctions and
disjunctions over the domain of $L$ --- i.e. $\llbracket\forall x
. N\rrbracket(L,v) = \prod_{l \in |L|} \llbracket N \rrbracket(L,v[x
\mapsto l])$ and $\llbracket \exists x . P \rrbracket(L,v) =
\bigoplus_{l \in |L|} \llbracket P \rrbracket(L,v[x \mapsto l])$. In
the case of $\forall x . N$, this is equivalent to Opponent choosing
an $x \in |L|$ and play proceeding in $N(x)$. In the case of
$\exists x. P$, this is equivalent to Player choosing an $x \in |L|$
and play proceeding in $P(x)$. \end{itemize}
\noindent Note that $\llbracket A^\perp \rrbracket = \llbracket A \rrbracket^\perp$.
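For the atomic and unit cases, the interpretation only consults the truth value supplied by the model; the following small Haskell fragment (an illustration, independent of the formal semantics) records the four-way case split.
\begin{verbatim}
-- Sketch: the unit games arising from atoms under a given truth assignment.
data UnitGame = GOne | GBot | GZero | GTop deriving (Eq, Show)

-- Negative atoms: the empty game if true, the one-O-move game if false.
interpNegAtom :: Bool -> UnitGame
interpNegAtom True  = GOne
interpNegAtom False = GBot

-- Positive atoms: the one-P-move game if true, the empty positive game if false.
interpPosAtom :: Bool -> UnitGame
interpPosAtom True  = GTop
interpPosAtom False = GZero
\end{verbatim}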
\subsection{Proofs}
\label{proofinterp}
A proof of a formula $\vdash A$ will be interpreted as a \emph{uniform
family} of \emph{winning strategies} on $\llbracket A \rrbracket(L,v)$ for each $(L,v)$. We will formalise this interpretation (and, importantly, the meaning of ``uniformity'') in Section 6, but with this in mind, we can define proof rules for \textsf{WS1}. A \emph{sequent} of \textsf{WS1} is of the form $X ; \Theta \vdash \Gamma$ where $X \subseteq \mathcal{V}$, $\Theta$ is a set of positive atomic formulas and $\Gamma$ is a nonempty list of formulas such that $FV(\Theta,\Gamma) \subseteq X$. The explicit free variable set $X$ is required for the tight correspondence between the syntax and semantics. For brevity, let $\Phi$ range over $X ; \Theta$ contexts.
We shall interpret such a sequent as a (family of) dialogue games by interpreting the comma operator in $\Gamma$ as left-associative left-merge (i.e. either $\oslash$ or $\lhd$ depending on the polarity of the right-hand operand), so that the first move must occur in the first element (or head formula) of $\Gamma$. For example, if $M,N$ are negative formulas and $P,Q$ positive formulas, the sequent $$\vdash M, P, Q, N$$ is semantically equivalent to $$\vdash ((M \lhd P) \lhd Q) \oslash N.$$ Thus, in the game interpretation of a sequent $\Gamma$ the first move must occur in the first (or \emph{head}) formula of $\Gamma$.
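Using the formula datatypes sketched earlier, this reading of the comma can be expressed as a fold (illustrative only): each tail formula is attached to the accumulated head with $\lhd$ or $\oslash$ according to its polarity, associating to the left.
\begin{verbatim}
-- Sketch: collapsing a sequent with a negative head into a single formula.
data AnyFormula = FN Neg | FP Pos

collapseNeg :: Neg -> [AnyFormula] -> Neg
collapseNeg = foldl attach
  where attach acc (FP p) = TriN acc p   -- acc <| P   (positive tail formula)
        attach acc (FN n) = Seqo acc n   -- acc (/) N  (negative tail formula)

-- Example: collapseNeg m [FP p, FP q, FN n]  =  ((m <| p) <| q) (/) n
\end{verbatim}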
The derivation rules for proofs are partitioned into \emph{core rules} and \emph{other rules}. Here $M,N$ range over negative formulas, $P,Q$ over positive formulas, $\Gamma,\Delta$ over lists of formulas, $\Gamma^\ast$ over non-empty lists of formulas and $\Gamma^+,\Delta^+$ over lists of positive formulas.
\subsubsection{Core Rules}
Each $n$-ary connective $\varodot$ of \textsf{WS1} is associated with \emph{core introduction rules} which introduce that connective in the head position of a sequent: they conclude $\Phi \vdash \varodot(A_1, \ldots, A_n), \Gamma$ from some premises. These rules are given in Figure \ref{coreintros}. These core introduction rules are all \emph{additive} (by contrast to linear logic: note in particular the difference with respect to the $\otimes$ introduction rule).
\begin{figure*}
\caption{Core Introduction Rules for \textsf{WS1}}
\label{coreintros}
\end{figure*}
We may interpret each of the core introduction rules with respect to $(L,v)$ as follows:
\begin{itemize} \item The interpretation of $\mathsf{P}_\mathbf{1}$ is the unique total
strategy on the game $\mathbf{1},\Gamma$ (where it is Opponent's
turn to start, but there are no moves for him to play since the
first move must take place in the empty game $\mathbf{1}$). \item The interpretation of $\mathsf{P}_\top$ is the unique total strategy on
the game $\top$, where Player plays a move and the game is over. \item The interpretation of the unary rule $\mathsf{P}_\oslash$ is the
identity function, as the game denoted by the conclusion is the same
game as that denoted by the premise. The interpretation of $\mathsf{P}_\lhd$
is similar. \item For $\mathsf{P}_\&$ we note that given strategies $\sigma : M , \Gamma$
and $\tau : N , \Gamma$ we can construct a strategy on $M \& N ,
\Gamma$ which plays as $\sigma$ if Opponent's first move is in $M$,
and as $\tau$ if Opponent's first move is in $N$. \item Similarly, for $\mathsf{P}_\otimes$ we note that given strategies
$\sigma : M , N , \Gamma$ and $\tau : N , M , \Gamma$ we can
construct a strategy on $M \otimes N$ which plays as $\sigma$ if
Opponent's first move is in $M$, and as $\tau$ if Opponent's first
move is in $N$. Here we are making use of the isomorphism $M \otimes
N \cong (M \oslash N) \& (N \oslash M)$ --- each play in $M \otimes
N$ must either start in $M$ (and thus be a play in $M \oslash N$) or
in $N$ (and thus be a play in $N \oslash M$). Thus, \textsf{WS1}
commits to a particular interpretation of $\otimes$, rather than an
arbitrary monoidal structure. \item For ${\mathsf{P}_\oplus}_1$ we note that given a strategy $\sigma : P ,
\Gamma$ we can construct a strategy on $P \oplus Q , \Gamma$ with
Player choosing to play his first move in $P$ and thereafter playing
as $\sigma$. For ${\mathsf{P}_\oplus}_2$ Player can play his first move in $Q$
and then play as the given strategy. \item Similarly, for the $\mathsf{P}_\bindnasrepma$ rules, we note that in a strategy
on $P \bindnasrepma Q , \Gamma$ Player may choose to either play his first
move in $P$ (requiring a strategy on $P , Q, \Gamma$) or in $Q$
(requiring a strategy on $Q,P,\Gamma$). \item The interpretation of $\mathsf{P}_\bot^+$ uses the observation that
total strategies on $\bot , P = \mathop{\uparrow} P$ are in
correspondence with total strategies on $P$. Similarly, the
interpretation of $\mathsf{P}_\top^-$ uses the observation that total
strategies on $\top, N = \mathop{\downarrow} N$ are in
correspondence with total strategies on $N$. \item For ${\mathsf{P}_\mathsf{at}}_-$, we know that
$\phi(\overrightarrow{s}), \Gamma$
is interpreted by $\mathbf{1},\Gamma$ if $(L,v) \models
\phi(\overrightarrow{s})$ and by $\bot,\Gamma$ if $(L,v) \not \models
\phi(\overrightarrow{s})$. In the former case, there are no moves to
respond to, so we only need to consider the case when $(L,v) \models
\overline{\phi}(\overrightarrow{s})$. \item For ${\mathsf{P}_\mathsf{at}}_+$, we can only provide a family of
strategies on a game whose first move is in
$\overline{\phi}(\overrightarrow{s})$ if we know that $(L,v) \models
\overline{\phi}(\overrightarrow{s})$ since otherwise our family has
to contain a winning strategy on the empty positive game
$\mathbf{0}$, of which there are none. \item For $\mathsf{P}_\forall$, to give a family of strategies on $\forall x
. N , \Gamma$ we must give a strategy on $N , \Gamma$ for each
choice of $x$ --- that is, a family of strategies on the set of
$\Theta$-satisfying $\mathcal{L}$-models over $X \uplus \{ x \}$. \item For $\mathsf{P}_\exists$, to give a family of strategies on $\exists x
. P , \Gamma$ we must choose a value $s$ for $x$ and give a family
of strategies on $P[s/x] , \Gamma$.
\end{itemize}
As well as the core introduction rules, there is a small set of \emph{core elimination rules}, found in Figure \ref{coreelims}. These permit decomposition of the second and third formula in a sequent, if the first formula is $\bot$ or $\top$. They correspond to isomorphisms between the premise and conclusion in the semantics, which induces a bijection between the winning strategies on each.
For example, $\mathsf{P}_\bot^-$ uses the isomorphism $\bot \oslash N \cong \bot$, and $\mathsf{P}_\bot^\bindnasrepma$ the isomorphism $\bot \lhd (P \bindnasrepma Q) \cong (\bot \lhd P) \lhd Q$ and $\mathsf{P}_\bot^\oslash$ the isomorphism $\bot \lhd (P \oslash N) \cong (\bot \lhd P) \oslash N$.
\begin{figure*}
\caption{Core Elimination Rules for \textsf{WS1}}
\label{coreelims}
\end{figure*}
\begin{figure*}
\caption{Core Equality Rules for \textsf{WS1}}
\label{coreequals}
\end{figure*}
Finally, there are \emph{core equality rules} which deal with equality, given in Figure \ref{coreequals}. We can interpret the core equality rules at a model $(L,v)$ as follows:
\begin{itemize} \item To interpret $\mathsf{P}_{\neq}$ (reflexivity of identity), we take the empty family of
strategies, since there are no $\Theta$-satisfying
$\mathcal{L}$-models if $\Theta$ contains $x \neq x$. \item To interpret the matching rule $\mathsf{P}_\mathsf{ma}^{x,y,z}$, we note that the collection of
$\Theta$-satisfying $\mathcal{L}$-models can be decomposed into
those where $x$ and $y$ are identified (the left-hand premise) and
those where they are distinct (the right-hand premise). \end{itemize}
Once a discipline regarding where the matching rule is applied has been introduced, proof search in this core subsystem is particularly simple, as the form of the sequent to be proved determines the choice of final rule. We will later show that the core rules are sufficient to denote any finitary family of uniform winning strategies.
\label{focpol}
We make a brief note on polarities and reversibility, and a
comparison with focused proof systems. In such systems,
polarisation is used to differentiate between connectives whose
corresponding rules are \emph{reversible} or \emph{irreversible}
\cite{And_Foc}. Irreversible rules act on positive formulas. An
irreversible rule is one where (reading upwards) in applying the
rule one must make some definite choice, a choice which could
determine whether the proof search succeeds or not. Thus, additive
disjunction introduction is always an irreversible rule, and in
linear logic so is the tensor introduction rule, since a choice must
be made regarding how the context is split.
In \textsf{WS1}, the core introduction rule for tensor (as for all
such rules) is additive, not multiplicative. Thus, this rule is
reversible, and $\otimes$ is resultantly a negative connective. In
contrast, $\bindnasrepma$ is a positive connective as there are two
different core introduction rules, which are not reversible. Thus,
as well as the semantic motivation, we can view our distinction
between positive and negative formulas in the same light as the
polarities of focused systems.
However, there is an important distinction. In focused systems, the
proof search alternates between negative phases, in
which reversible rules are applied, and positive phases, in which irreversible rules are applied. Analytic proof search in \textsf{WS} follows a
different two-phase discipline, in which we first \emph{decompose}
the first formula of a sequent into a unit using the core
introduction rules, and then \emph{collate} the tail formulas
together using the core elimination rules. We will give an embedding
of \textsf{LLP} inside \textsf{WS} in Section \ref{LLP-WS}.
\subsubsection{Other Rules}
\begin{figure*}
\caption{Non-core rules of \textsf{WS1}}
\label{otherrules}
\end{figure*}
The non-core rules of \textsf{WS1} are given in Figure \ref{otherrules}, with $\Delta^+$ ranging over lists of positive formulas, $\Gamma^*$ over non-empty lists of formulas. These rules reflect some of the categorical structure enjoyed by our games model, and allow straightforward interpretation of other logics and programming languages inside \textsf{WS1}. They include a cut rule, a multiplicative $\otimes$ rule, a restricted form of the exchange rule, weakening, and so on. We will later see that these rules are admissible with respect to the rules in Figures \ref{coreintros}, \ref{coreelims} and \ref{coreequals}, when restricted to the exponential-free subsystem of \textsf{WS1}. Informally, we can interpret each of these rules as follows:
\begin{itemize} \item In the cases of $\mathsf{P}^\mathsf{T}_\otimes$,
$\mathsf{P}_\mathbf{1}^\mathsf{T}$, $\mathsf{P}_\bindnasrepma^\mathsf{T}$,
$\mathsf{P}_\mathbf{0}^\mathsf{T}$, $\mathsf{P}_\mathsf{sym}^+$ and $\mathsf{P}_\mathsf{sym}^-$, the premise
and conclusion are the same game, up to retagging, and can be
interpreted using game isomorphisms. \item In the cases of $\mathsf{P}_\&^\mathsf{T}{}_1$, $\mathsf{P}_\&^\mathsf{T}{}_2$,
$\mathsf{P}_\mathsf{wk}^-$, $\mathsf{P}_\mathsf{der}^!$ a strategy on the conclusion can be obtained
by using only part of the strategy on the premise. For example, for
$\mathsf{P}_\mathsf{wk}^-$ we remove all moves in $M$. \item In the cases of $\mathsf{P}_\oplus^\mathsf{T}{}_1$,
$\mathsf{P}_\oplus^\mathsf{T}{}_2$, $\mathsf{P}_\mathsf{wk}^+$, $\mathsf{P}_\mathsf{der}^?$, a strategy on
the conclusion can be obtained by using the strategy on the premise
and ignoring the extra moves available to Player. \item The $\mathsf{P}_\mathsf{id}$ rule requires a strategy on $N \multimap N$: we can
use a \emph{copycat} strategy in which Player always switches
component, playing the move that Opponent previously played. The
${\mathsf{P}_\mathsf{id}}_\oslash$ rule can be interpreted by playing copycat in the
$M$ component. \item The $\mathsf{P}_\mathsf{cut}$ and $\mathsf{P}_\mathsf{cut}^0$ rules can be interpreted by
playing the two strategies given by the premises against each other
in the $N$ component: ``parallel composition plus hiding''. \item The $\mathsf{P}_\mathsf{mul}$ rule can be interpreted by combining the
strategies given by the premises in a multiplicative manner:
Opponent's moves in $M,\Gamma$ are responded to in accordance with
the first premise, and moves in $N$ in accordance with the
second. The $\mathsf{P}_\multimap$ rule can be interpreted similarly. \item To interpret $\mathsf{P}_\mathsf{con}^?$, we can construct a strategy on the
conclusion by identifying the two copies of $?P$ in the premise. To
interpret $\mathsf{P}_\mathsf{con}^!$, we can construct a strategy on the conclusion
by identifying the two copies of $!M$ in the conclusion. \item We can interpret $\mathsf{P}_\mathsf{ana}$ using the following construction:
given a map $N \multimap M \oslash N$, we may ``unwrap'' it an
infinite number of times to yield a strategy on $N \multimap
{!}M$. The $N$ component represents a parameter that can be used to pass
information between the separate threads, to admit history-sensitive
behaviour; a functional sketch of this unfolding is given below. \end{itemize}
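The construction interpreting $\mathsf{P}_\mathsf{ana}$ is the anamorphism of a final coalgebra. By way of analogy (functions in place of strategies; this is not the game-semantic definition itself), the following Haskell sketch unfolds a one-step map $n \rightarrow (m,n)$, playing the role of $N \multimap M \oslash N$, into an infinite stream of outputs, the analogue of $N \multimap {!}M$; the second component threads the ``state'' between successive copies.
\begin{verbatim}
-- Sketch: the coalgebraic unfolding behind P_ana, by analogy with streams.
data Stream m = Cons m (Stream m)

ana :: (n -> (m, n)) -> n -> Stream m
ana step seed = let (out, next) = step seed in Cons out (ana step next)

-- Example: threading a counter through successive invocations.
naturalsFrom :: Int -> Stream Int
naturalsFrom = ana (\k -> (k, k + 1))
\end{verbatim}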
\subsubsection{Embedding of Intuitionistic Linear Logic} For any negative formulas $M,N$, define $M \multimap N$ to be $N \lhd M^\perp$. Thus any formula of first-order Intuitionistic Linear Logic is a negative formula of \textsf{WS1}. We sketch an embedding into \textsf{WS1} of proofs of \textsf{ILL} (over the connectives $\otimes$,$\multimap$,$\forall$,$\&$,$\mathbf{1}$,$\bot$,$!$ and (negative) atoms, formulated with left- and right- introduction rules as in \cite{Scha_CatMod}).
\begin{proposition}For any proof $p$ of $M_1, \ldots, M_n \vdash N$ in
\textsf{ILL} with free variables in $X$, there is a proof
$\kappa(p)$ in \textsf{WS1} of $X ; \emptyset \vdash N , M_1^\perp,
\ldots, M_n^\perp$. \end{proposition} \begin{proof}
We show that for each rule of \textsf{ILL} there is a derivation in
\textsf{WS1} of the conclusion from the premises.
The left $\otimes$ rule corresponds to $\mathsf{P}_\bindnasrepma^\mathsf{T}$. For the right $\otimes$ rule, with $\Gamma = G_1, \ldots, G_n$ and $\Delta = D_1, \ldots, D_m$, we duplicate the proof and use $\mathsf{P}_\mathsf{mul}$ as follows:
\begin{footnotesize} \begin{prooftree} \AxiomC{$\vdash M , G_1, \ldots, G_n$} \AxiomC{$\vdash N , D_1, \ldots, D_m$} \LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash M , N , G_1, \ldots, G_n , D_1, \ldots, D_m$}
\AxiomC{$\vdash N , D_1, \ldots, D_m$} \AxiomC{$\vdash M , G_1, \ldots, G_n$} \LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash N , M , D_1, \ldots, D_m , G_1, \ldots, G_n$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdots$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash N , M , G_1, \ldots, G_n , D_1, \ldots, D_m$}
\LeftLabel{$\mathsf{P}_\mathsf{\otimes}$} \BinaryInfC{$\vdash M \otimes N , G_1, \ldots, G_n , D_1, \ldots, D_m$} \end{prooftree} \end{footnotesize}
\noindent The left $\mathbf{1}$ rule corresponds to $\mathsf{P}_\mathbf{0}^\mathsf{T}$. The right $\mathbf{1}$ rule corresponds to $\mathsf{P}_\mathbf{1}$. The left $\multimap$ rule can be derived as follows:
\begin{footnotesize} \begin{prooftree} \AxiomC{$\vdash L , D_1, \ldots, D_m, N^\perp$} \AxiomC{$\vdash M , G_1, \ldots, G_n$} \LeftLabel{$\mathsf{P}_\multimap$} \BinaryInfC{$\vdash L , D_1, \ldots, D_m, N^\perp \oslash M, G_1, \ldots, G_n$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdots$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash L , G_1, \ldots, G_n, N^\perp \oslash M , D_1, \ldots, D_m$} \end{prooftree} \end{footnotesize}
\noindent The right $\multimap$ rule corresponds to $\mathsf{P}_\lhd$. The left $\&$ rules correspond to the $\mathsf{P}_\oplus^\mathsf{T}$ rules. The right $\&$ rule corresponds to $\mathsf{P}_\&$. The right-$\forall$ rule corresponds to $\mathsf{P}_\forall$ and the left-$\forall$ rule corresponds to $\mathsf{P}_\exists^\mathsf{T}$.
The dereliction, contraction and weakening rules for the exponential correspond to $\mathsf{P}_\mathsf{der}^?$, $\mathsf{P}_\mathsf{con}^?$ and $\mathsf{P}_\mathsf{wk}^+$ respectively. We next give the translation of the right $!$ rule (promotion). We first assume $\Gamma$ consists of a single formula $L$.
\begin{footnotesize} \begin{prooftree} \AxiomC{$\vdash N , ?L^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !L , ?L^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{mul}$}
\BinaryInfC{$\vdash N , !L, ?L^\perp, ?L^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{con}^?$} \UnaryInfC{$\vdash N , !L, ?L^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{ana}$} \UnaryInfC{$\vdash !N , ?L^\perp$} \end{prooftree} \end{footnotesize}
\noindent We will later refer to this derived rule as $\mathsf{P}_\mathsf{prom}$. If $\Gamma$ contains more than one formula, we use the equivalence of $!M \otimes !N$ and $!(M \& N)$ in \textsf{WS1}.
The first direction $p_1 \vdash !M \otimes !N \multimap !(M \& N)$ is defined as follows:
\begin{footnotesize} \begin{prooftree}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !M , ?M^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !N , ?N^\perp$}
\LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{con}^!$} \UnaryInfC{$\vdash !M , !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{der}^!$} \UnaryInfC{$\vdash M , !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\bindnasrepma^\mathsf{T}$} \UnaryInfC{$\vdash M , !M , !N , ?M^\perp \bindnasrepma ?N^\perp$} \LeftLabel{$\mathsf{P}_\otimes^\mathsf{T}$} \UnaryInfC{$\vdash M , !M \otimes !N , ?M^\perp \bindnasrepma ?N^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !M , ?M^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !N , ?N^\perp$}
\LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{con}^!$} \UnaryInfC{$\vdash !M , !N , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{sym}$} \UnaryInfC{$\vdash !N , !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{der}^!$} \UnaryInfC{$\vdash N , !M , !N , ?M^\perp , ?N^\perp$} \LeftLabel{$\mathsf{P}_\bindnasrepma^\mathsf{T}$} \UnaryInfC{$\vdash N , !M , !N , ?M^\perp \bindnasrepma ?N^\perp$} \LeftLabel{$\mathsf{P}_\otimes^\mathsf{T}$} \UnaryInfC{$\vdash N , !M \otimes !N , ?M^\perp \bindnasrepma ?N^\perp$}
\LeftLabel{$\mathsf{P}_\&$} \BinaryInfC{$\vdash M \& N , !M \otimes !N , ?M^\perp \bindnasrepma ?N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{ana}$} \UnaryInfC{$\vdash !(M \& N) , ?M^\perp \bindnasrepma ?N^\perp$}
\end{prooftree} \end{footnotesize}
\noindent The second direction $p_2 \vdash !(M \& N) \multimap !M \otimes !N$ is given as follows:
\begin{footnotesize} \begin{prooftree} \AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash M , M^\perp$} \LeftLabel{$\mathsf{P}_\oplus^\mathsf{T}{}_1$} \UnaryInfC{$\vdash M , M^\perp \oplus N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{der}^?$} \UnaryInfC{$\vdash M , ?(M^\perp \oplus N^\perp)$} \LeftLabel{$\mathsf{P}_\mathsf{prom}$} \UnaryInfC{$\vdash !M , ?(M^\perp \oplus N^\perp)$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash N , N^\perp$} \LeftLabel{$\mathsf{P}_\oplus^\mathsf{T}{}_2$} \UnaryInfC{$\vdash N , M^\perp \oplus N^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{der}^?$} \UnaryInfC{$\vdash N , ?(M^\perp \oplus N^\perp)$} \LeftLabel{$\mathsf{P}_\mathsf{prom}$} \UnaryInfC{$\vdash !N , ?(M^\perp \oplus N^\perp)$}
\LeftLabel{${\mathsf{P}_\mathsf{mul}}_\otimes$} \BinaryInfC{$\vdash !M \otimes !N , ?(M^\perp \oplus N^\perp), ?(M^\perp \oplus N^\perp)$} \LeftLabel{$\mathsf{P}_\mathsf{con}^?$} \UnaryInfC{$\vdash !M \otimes !N , ?(M^\perp \oplus N^\perp)$} \end{prooftree} \end{footnotesize}
\noindent We can then generalise $\mathsf{P}_\mathsf{prom}$ to
\begin{footnotesize} \begin{prooftree}
\AxiomC{$\vdash M, ?P_1 , ?P_2, \ldots, ?P_{n-1}, ?P_n$} \UnaryInfC{$\vdots$} \UnaryInfC{$\vdash M, ?(P_1 \oplus P_2 \oplus \ldots \oplus P_{n-1}) , ?P_n$} \LeftLabel{$\mathsf{P}_\bindnasrepma^{\mathsf{T}}$} \UnaryInfC{$\vdash M, ?(P_1 \oplus P_2 \oplus \ldots \oplus P_{n-1}) \bindnasrepma ?P_n$}
\AxiomC{$p_2$} \LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash M, ?(P_1 \oplus P_2 \oplus \ldots \oplus P_{n-1} \oplus P_n)$} \LeftLabel{$\mathsf{P}_\mathsf{prom}$} \UnaryInfC{$\vdash !M, ?(P_1 \oplus P_2 \oplus \ldots \oplus P_{n-1} \oplus P_n)$} \UnaryInfC{$\vdots$} \UnaryInfC{$\vdash !M, ?(P_1 \oplus P_2), \ldots, ?P_{n-1}, ?P_n$}
\AxiomC{$p_1$} \LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash !M, ?P_1 \bindnasrepma ?P_2, \ldots, ?P_{n-1} , ?P_n$} \LeftLabel{$\mathsf{P}_\bindnasrepma^\mathsf{T}$} \UnaryInfC{$\vdash !M, ?P_1 , ?P_2, \ldots, ?P_{n-1}, ?P_n$} \end{prooftree} \end{footnotesize}
\noindent and interpret the right ! rule of \textsf{ILL}. \qed \end{proof}
A detailed proof-theoretic analysis of the properties of this translation is beyond the scope of this paper. However, we note that the translation is semantically natural, in the following sense. We shall see in Section~\ref{sec:catsemantics} that the categorical models of~\textsf{WS1} have (among other properties) the structure of a standard categorical model of~\textsf{ILL}: they are~\emph{Lafont
categories}~\cite{MelliesPA:catsll}. The semantics of the quantifier-free fragment of~\textsf{ILL} induced by translation into~\textsf{WS1} followed by interpretation in a categorical model coincides with the expected semantics of~\textsf{ILL} in a Lafont category.
\subsubsection{New Theorems}
We next sketch some examples of formulas that are not provable in \textsf{ILL} but are provable in \textsf{WS1} --- i.e. they denote games on which there are uniform winning history-sensitive strategies which are expressible in \textsf{WS1}.
The formulas \\ \\ $((A \otimes B \multimap \bot) \otimes (C \otimes D \multimap \bot) \multimap \bot) \multimap \\ ((A \multimap \bot) \otimes (C \multimap \bot) \multimap \bot) \otimes ((B \multimap \bot) \otimes (D \multimap \bot) \multimap \bot)$ \\ \\ \noindent are not provable, in general, in intuitionistic linear logic (in particular, when $A,B,C,D$ are instantiated as negative atoms). They are a counterpart in \textsf{ILL} of the \emph{medial} rule $[(A \otimes B) \bindnasrepma (C \otimes D)] \multimap [(A \bindnasrepma C) \otimes (B \bindnasrepma D)]$, using an interpretation of depolarised formulas in a polarised setting following \cite{MT_RM}.
As observed by Blass \cite{Bla_LL}, however, there are (uniform) history-sensitive winning strategies for medial. Informally, suppose: \begin{itemize} \item Opponent first chooses the left hand component in the output (choice 1) \item Opponent then chooses the right hand component in the input (choice 2) \end{itemize} \noindent Player can then play copycat in $C$. If Opponent then switches to the second output component $((B \multimap \bot) \otimes (D \multimap \bot) \multimap \bot)$, Player must enter copycat in $D$. But this decision relies on knowledge of Opponent's choice 2, which is not available to an innocent strategy: it requires access to the history of the play.
An outline \textsf{WS1} proof of this formula is given in Figure \ref{medproof}. The use of the $\mathsf{P}_\otimes$ demonstrates where the proof branches; there are four branches corresponding to the two uses of $\mathsf{P}_\otimes$. In each of these four branches different ${\mathsf{P}_\bindnasrepma}_i$ proof rules are chosen at the points labelled ${\mathsf{P}_\bindnasrepma}_1$ here.
\begin{figure*}
\caption{Outline Proof of Medial}
\label{medproof}
\end{figure*}
Similarly, the following theorems of \textsf{WS1} are not provable in \textsf{ILL} but are provable in \textsf{WS1}: \begin{itemize} \item $[A \otimes (C \& D)] \& [B \otimes (C \& D)] \& [(A \& B) \otimes C] \& [(A \& B) \otimes D] \multimap \\ (A \& B) \otimes (C \& D)$, also discussed in \cite{Bla_LL} \item $\phi_\mathsf{ex} \multimap \phi_\mathsf{ex} \otimes \phi_\mathsf{ex}$ where $\phi_\mathsf{ex} = (\phi \& (\phi \multimap \bot)) \multimap \bot$. \end{itemize}
\subsection{Embedding Polarized Linear Logic in \textsf{WS1}}
Polarized Linear Logic (\textsf{LLP}) \cite{Lau_PG} is a proof system for a polarisation of linear logic into negative and positive formulas. As we have noted, this is entirely different from the polarisation of \textsf{WS1} formulas employed here: each makes sense within the proof system within which it is defined. Here, we show how proofs of LLP may be represented inside \textsf{WS1} by translation, with two objectives: \begin{itemize} \item To clarify the relationship between the two logical systems, and their notions of polarisation. \item To capture both call-by-name and call-by-value $\lambda$-calculi via known (and elegant) translations into \textsf{LLP}, which may be composed with our embedding of \textsf{LLP} into \textsf{WS1}. In the call-by-name case, this corresponds with interpretation via intuitionistic linear logic, whereas for call-by-value it is new. \end{itemize} The formulas of \textsf{LLP} (over the units) are as follows:
\[ \begin{array}{ccccccccccccc}
P & ::= & \mathbf{1} & \mid & \mathbf{0} & \mid & P \otimes Q & \mid &
P \oplus Q & \mid & {\downarrow N} & \mid & {!N} \\ N & ::= & \bot & \mid & \top & \mid & M \bindnasrepma N & \mid & M \& N & \mid & {\uparrow P} & \mid & {?P} \end{array} \]
There is an operation $(-)^\perp$ exchanging polarity, swapping $\mathbf{1}$ for $\bot$, $\mathbf{0}$ for $\top$, $\otimes$ for $\bindnasrepma$, and so on.
The presentation of \textsf{LLP} given in \cite{Lau_PG} omits the
linear lifts $\uparrow$ and $\downarrow$ of \textsf{MALLP}. We will
include them in our presentation of \textsf{LLP} and its embedding.
A sequent of \textsf{LLP} is a list of \textsf{LLP} formulas. The proof rules for Polarized Linear Logic are given in Figure \ref{LLP-rules}. $\Gamma^-$ ranges over lists of negative formulas, and $\Gamma'$ over lists where at most one formula is positive. We say a negative \textsf{LLP} formula $N$ is \emph{reusable} (and write $\mathsf{reuse}(N)$) if every occurrence of $\uparrow$ occurs under a $?$. If we exclude the linear lifts $\uparrow$ and $\downarrow$, all negative formulas are reusable. $\mathsf{reuse}(\Gamma^-)$ holds if all formulas in $\Gamma^-$ are reusable.
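The reusability predicate is a simple syntactic check. The following Haskell sketch (the datatype follows the grammar above, restricted to the units; all names are ours) makes it explicit: a flag records whether the traversal is currently underneath a $?$.
\begin{verbatim}
-- Sketch: LLP formulas over the units, and the reusability check.
data LPos = LOne | LZero | LTens LPos LPos | LPlus LPos LPos
          | LDown LNeg | LBang LNeg
  deriving (Eq, Show)

data LNeg = LBot | LTop | LPar LNeg LNeg | LWith LNeg LNeg
          | LUp LPos | LWhy LPos
  deriving (Eq, Show)

-- reuse n: every occurrence of an up-lift in n lies under a ?.
reuse :: LNeg -> Bool
reuse = goN False
  where
    goN under n = case n of
      LBot      -> True
      LTop      -> True
      LPar a b  -> goN under a && goN under b
      LWith a b -> goN under a && goN under b
      LUp p     -> under && goP under p
      LWhy p    -> goP True p
    goP under p = case p of
      LOne      -> True
      LZero     -> True
      LTens a b -> goP under a && goP under b
      LPlus a b -> goP under a && goP under b
      LDown n   -> goN under n
      LBang n   -> goN under n
\end{verbatim}
In particular, if the lifts are excluded then \texttt{reuse} is vacuously true, matching the remark above.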
Each provable sequent has at most one positive formula, so we can restrict our attention to sequents of this form. It is possible to give semantics to \textsf{LLP} proofs as \emph{innocent} strategies \cite{Lau_PG}, which do not have access to the entire history of play.
\begin{figure*}
\caption{Proof rules for \textsf{LLP}}
\label{LLP-rules}
\end{figure*}
\label{LLP-WS} We next describe an embedding of \textsf{LLP} inside \textsf{WS1}. Apart from some renaming of units, connectives in \textsf{LLP} will be interpreted by the same connective in \textsf{WS1}.
Broadly speaking, positive formulas of \textsf{LLP} will be mapped to negative formulas of \textsf{WS1}, and negative formulas of \textsf{LLP} to positive formulas of \textsf{WS1}. However, under this scheme there is a mismatch for the additives: we will therefore need to map formulas of \textsf{LLP} to \emph{families} of \textsf{WS1} formulas. The formulas that have a lift as their outermost connective will be mapped to singleton families.
Let $\mathsf{WS1}^-$ denote the set of negative \textsf{WS1} formulas, and $\mathsf{WS1}^+$ the set of positive \textsf{WS1} formulas.
\begin{definition}
A \emph{finite family of negative (resp. positive) \textsf{WS1}
formulas} is a pair $(I,f)$ where $I$ is a finite set and $f : I
\rightarrow \mathsf{WS1}^-$ (resp. $I \rightarrow
\mathsf{WS1}^+$). \end{definition}
For brevity, given such a family $F = (I,f)$ we will write $|F|$ for $I$ and $F_x$ for $f(x)$.
We will interpret a negative formula of \textsf{LLP} as a finite family of positive $\mathsf{WS1}$ formulas, and a positive formula of \textsf{LLP} as a finite family of negative $\mathsf{WS1}$ formulas. We describe this mapping in Figure \ref{MALLP-to-WSN}.
Like \cite{MT_RM}, we decompose the polarity-reversing exponentials of \textsf{LLP} into polarity-preserving exponentials and polarity-switching linear lifts.
\begin{figure*}
\caption{\textsf{LLP} formulas as families of \textsf{WS1} formulas}
\label{MALLP-to-WSN}
\end{figure*}
Note that $|i(A^\perp)| = |i(A)|$ and $i(A^\perp)_y = i(A)_y^\perp$. We translate proofs of \textsf{LLP} to families of proofs of \textsf{WS1} in the following manner:
\begin{itemize} \item Given an \textsf{LLP} proof $p$ of $\vdash N_1 , \ldots , N_n$
and $x_i \in |i(N_i)|$ for each $i$, we construct a proof
$i(p,\overrightarrow{x_i})$ of $\vdash \bot , i(N_1)_{x_1} , \ldots
, i(N_n)_{x_n}$ \item Given an \textsf{LLP} proof $p$ of $\vdash N_1 , \ldots , N_i ,
Q , N_{i+1} , \ldots , N_n$ and $x_i \in |i(N_i)|$ for each $i$, we
construct a pair $i(p,\overrightarrow{x_i}) = (y,q)$ where $y \in
|i(Q)|$ and $q$ is a proof of $\vdash i(Q)_y , i(N_1)_{x_1} , \ldots
, i(N_n)_{x_n}$. \end{itemize}
\begin{proposition}
Suppose $N$ is reusable. Then for any $x$ in $|i(N)|$, there is a
formula $Q$ and proofs $p \vdash !Q^\perp , i(N)_x$ and $p' \vdash
i(N)_x^\perp , ?Q$ such that $\llbracket p \rrbracket$ and $\llbracket
p' \rrbracket$ are inverses. \label{llpexp} \end{proposition} \begin{proof} Simple induction, making use of isomorphisms $!(M \& N) \cong !M \otimes !N$. \qed \end{proof}
\begin{proposition}
For each \textsf{LLP} formula $P$, $y \in |i(P)|$ and sequence of
negative \textsf{WS1} formulas $\Delta^-$ there is a \textsf{WS1}
proof $\mathsf{P}^\top_{P,y} \vdash i(P)_y, \Delta^- , \top$. \label{toprule} \end{proposition} \begin{proof}
Simple induction on $P$. \qed
\end{proof}
\noindent We next show how each of the \textsf{LLP} proof rules is translated. The translation is simple; we demonstrate some representative cases.
\begin{itemize} \item The $cut$ rule, with $p = cut(q,r)$: Suppose $\Gamma = N_1 ,
\ldots , N_i , P , N_{i+1} , \ldots N_n$ and $\Delta = M_1 , \ldots
, M_m$. Let $x_i \in |i(N_i)|$ and $y_i \in |i(M_i)|$. Then
$i(r,\overrightarrow{y_i}) = (y,t)$ with $y \in |i(N^\perp)|$ and $t
\vdash i(N^\perp)_y , i(M_1)_{y_1} , \ldots , i(M_m)_{y_m}$. Then
$i(q,\overrightarrow{x_i},y) = (y',q')$ where $y' \in |i(P)|$ and
$$q' \vdash i(P)_{y'} , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n},
i(N)_y.$$ Applying $\mathsf{P}_\mathsf{cut}$ to this proof and $t$ results in a proof
$g$ of $$\vdash i(P)_{y'} , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n} ,
i(M_1)_{y_1} , \ldots , i(M_m)_{y_m}$$ and we set
$i(p,\overrightarrow{x_i},\overrightarrow{y_i}) = (y',g)$.
The case where $\Gamma = N_1 , \ldots , N_n$ and $\Delta = M_1 , \ldots ,
M_m$ is similar.
\item The $\uparrow$ rule, with $p = \uparrow(q)$: Let $\Gamma = N_1 , \ldots ,
N_n$ and $x_i \in |i(N_i)|$. Then $i(q,\overrightarrow{x_i}) =
(y,q')$ where $q' \vdash i(P)_y , i(N_1)_{x_1} , \ldots ,
i(N_n)_{x_n}$. We set $i(p,\overrightarrow{x_i})$ to be the
following proof:
\begin{prooftree}
\AxiomC{$q' \vdash i(P)_y , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n}$} \UnaryInfC{$\vdash \top , i(P)_y , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n}$} \UnaryInfC{$\vdash \top \oslash i(P)_y , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n}$} \LeftLabel{${\mathsf{P}_\oplus}_y$}
\UnaryInfC{$\vdash \bigoplus_{j \in |i(P)|} \top \oslash i(P)_j , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n}$}
\UnaryInfC{$\vdash \bot , i(N_1)_{x_1} \bindnasrepma \ldots \bindnasrepma i(N_n)_{x_n} \bindnasrepma (\bigoplus_{j \in |i(P)|} \top \oslash i(P)_j)$}
\UnaryInfC{$\vdash \bot , i(N_1)_{x_1} , \ldots , i(N_n)_{x_n} , \bigoplus_{j \in |i(P)|} \top \oslash i(P)_j$} \end{prooftree}
Note that in the semantics of this rule two moves are played: the opening lift overall (O-move) and the opening lift in the derelicted component (P-move), which corresponds to ``focusing'' on that component.
\item The $?c$ rule, with $p = ?c(q)$:
If $\Gamma = N_1 , \ldots , N_n$ and $x_i \in |i(N_i)|$ and $x \in
|i(N)|$ then $i(q,\overrightarrow{x_i},x,x)$ is a proof of $\vdash \bot ,
i(N_1)_{x_1} , \ldots , i(N_n)_{x_n} , i(N)_x , i(N)_x$. We can
apply Proposition \ref{llpexp} and use $?$-contraction in
\textsf{WS1} to yield a proof $q'$ of \\ $\vdash \bot , i(N_1)_{x_1} ,
\ldots , i(N_n)_{x_n} , i(N)_x$ and we set
$i(p,\overrightarrow{x_i},x) = q'$.
If $\Gamma = N_1 , \ldots , N_i , P , N_{i+1} , \ldots , N_n$ and
$x_i \in |i(N_i)|$ and $x \in |i(N)|$ then
$i(q,\overrightarrow{x_i},x,x) = (y,q')$ where $q' \vdash i(P)_y ,
i(N_1)_{x_1} , \ldots , i(N_n)_{x_n} , i(N)_x , i(N)_x$. We can
apply Proposition \ref{llpexp} and use $?$-contraction in
\textsf{WS1} to yield a proof $q''$ of $$\vdash i(P)_y , i(N_1)_{x_1}
, \ldots , i(N_n)_{x_n} , i(N)_x$$ and we set
$i(p,\overrightarrow{x_i},x) = (y,q'')$. \end{itemize}
\noindent We can hence interpret proofs in \textsf{LLP} as (families of) proofs in \textsf{WS1}.
\section{Representing Imperative Programs and their Properties}
\subsection{Imperative Cell}
As an example of a proof of \textsf{WS1} capturing imperative behaviour (and which does not correspond to a proof of intuitionistic or polarized linear logic), we give a proof which denotes the Boolean reference cell strategy described in Section \ref{impobjstrat}, the \emph{cell} strategy of \cite{AMc_LSS}.
Recall that this is a strategy for the game $!(\mathbf{B} \& \mathbf{Bi})$, where $\mathbf{B} = \bot \lhd (\top \oplus \top)$ and $\mathbf{Bi} = (\bot \& \bot) \lhd \top$. We can parametrise the cell by a starting value, yielding a strategy on $\mathbf{B} \multimap {!}(\mathbf{B} \& \mathbf{Bi})$. We may obtain this strategy using a finite strategy $p : \mathbf{B} \multimap (\mathbf{B} \& \mathbf{Bi}) \oslash \mathbf{B}$. The strategy $p$ is defined as follows, using the naming conventions from Section \ref{impobjstrat}:
\[ \begin{array}{ccccccccccccccccccccccccccccccccccl}
\mathbf{B} & \multimap & (\mathbf{B}& \& &\mathbf{Bi}) & \oslash & \mathbf{B} & & & & & & & & & & \\
& & \mathtt{q} & & & \\
\mathtt{q} & & & & & \\
b & & & & & \\
& & b & & & \\
& & & & & & \mathtt{q} & \\
& & & & & & b & \\
\hline
& & & & \mathtt{in}(b) & & & \\
& & & & \mathtt{ok} & & & \\
& & & & & & \mathtt{q} & \\
& & & & & & b & \\
\end{array} \]
\noindent To obtain the $\mathsf{cell}$ strategy, we consider an infinite unwrapping $\leftmoon p \rightmoon : \mathbf{B} \multimap {!}(\mathbf{B} \& \mathbf{Bi})$, as performed by the semantics of the $\mathsf{P}_\mathsf{ana}$ rule.
\[ \begin{array}{ccccccccccccccccccccccccccccccccccl}
\mathbf{B} & \multimap & (\mathbf{B}& \& &\mathbf{Bi}) & \oslash & \mathbf{B} & \multimap & (\mathbf{B}& \& &\mathbf{Bi}) & \oslash & ((\mathbf{B}& \& &\mathbf{Bi}) & \oslash & \ldots) \\
& & & & & & & & & & \mathtt{in}(b) \\
& & & & \mathtt{in}(b) & & & \\
& & & & \mathtt{ok} & & & \\
& & & & & & & & & & \mathtt{ok} \\
& & & & & & & & & & & & \mathtt{q} \\
& & & & & & \mathtt{q} & \\
& & & & & & b & \\
& & & & & & & & & & & & b \\
& & & & & & & & & & & & \vdots \\
\end{array} \]
We can represent this strategy in our system using the anamorphism rule $\mathsf{P}_\mathsf{ana}$: we may prove $!(\mathbf{B} \& \mathbf{Bi}), \mathbf{B}^\perp$ by applying this rule to a proof of $(\mathbf{B} \& \mathbf{Bi}), \mathbf{B}, \mathbf{B}^\perp$. To obtain this, we apply the product rule to a pair of proofs: \begin{itemize} \item $p_\mathtt{read}$, of $\mathbf{B},\mathbf{B}, \mathbf{B}^\perp$, corresponding to a function which reads its argument, returns it \emph{and} propagates it to the next call, and \item $p_\mathtt{write}$, of $\mathbf{Bi}, \mathbf{B}, \mathbf{B}^\perp$, corresponding to a function which ignores its argument, accepts a Boolean input value and propagates it to the next call. \end{itemize} In this proof, if a rule is not labelled it is the unique applicable core rule, and some steps are omitted for brevity.
\begin{scriptsize} \begin{prooftree} \AxiomC{$p_\mathtt{write} : \vdash (\bot \& \bot) \lhd \top , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \AxiomC{$p_\mathtt{read} : \vdash \bot \lhd (\top \oplus \top) , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \BinaryInfC{$\vdash ((\bot \& \bot) \lhd \top) \& (\bot \lhd (\top \oplus \top)), \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \LeftLabel{$\mathsf{P}_{\mathsf{ana}}$} \UnaryInfC{$\vdash ! (((\bot \& \bot ) \lhd \top) \& (\bot \lhd (\top \oplus \top))) , \top \oslash (\bot \& \bot)$} \end{prooftree} \end{scriptsize}
where $p_\mathtt{write}$ is
\begin{footnotesize} \begin{prooftree} \AxiomC{} \UnaryInfC{$\vdash \top$} \UnaryInfC{$\vdash \top , (\top \oslash (\bot \& \bot))$} \LeftLabel{${\mathsf{P}_\oplus}_1$} \UnaryInfC{$\vdash \top \oplus \top , (\top \oslash (\bot \& \bot))$} \LeftLabel{${\mathsf{P}_\bindnasrepma}_1$} \UnaryInfC{$\vdash (\top \oplus \top) \bindnasrepma (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \bot , (\top \oplus \top) \bindnasrepma (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash (\bot \lhd (\top \oplus \top)) \lhd (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \top , (\bot \lhd (\top \oplus \top)) \lhd (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash (\top \oslash (\bot \lhd (\top \oplus \top))) , \top \oslash (\bot \& \bot)$} \LeftLabel{${\mathsf{P}_\bindnasrepma}_1$} \UnaryInfC{$\vdash (\top \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \bot , (\top \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash \bot , \top , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$}
\AxiomC{} \UnaryInfC{$\vdash \top$} \UnaryInfC{$\vdash \top , (\top \oslash (\bot \& \bot))$} \LeftLabel{${\mathsf{P}_\oplus}_2$} \UnaryInfC{$\vdash \top \oplus \top , (\top \oslash (\bot \& \bot))$} \LeftLabel{${\mathsf{P}_\bindnasrepma}_1$} \UnaryInfC{$\vdash (\top \oplus \top) \bindnasrepma (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \bot , (\top \oplus \top) \bindnasrepma (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash (\bot \lhd (\top \oplus \top)) \lhd (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \top , (\bot \lhd (\top \oplus \top)) \lhd (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash (\top \oslash (\bot \lhd (\top \oplus \top))) , \top \oslash (\bot \& \bot)$} \LeftLabel{${\mathsf{P}_\bindnasrepma}_1$} \UnaryInfC{$\vdash (\top \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \bot , (\top \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash \bot , \top , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$}
\BinaryInfC{$\vdash \bot \& \bot , \top , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \UnaryInfC{$\vdash (\bot \& \bot) \lhd \top , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \end{prooftree} \end{footnotesize}
and $p_\mathtt{read}$ is
\begin{footnotesize} \begin{prooftree} \AxiomC{} \UnaryInfC{$\vdash \top$} \LeftLabel{${\mathsf{P}_\oplus}_1$} \UnaryInfC{$\vdash \top \oplus \top$} \UnaryInfC{$\vdash \bot , (\top \oplus \top)$}
\UnaryInfC{$\vdash \top , \bot \lhd (\top \oplus \top)$} \LeftLabel{${\mathsf{P}_\oplus}_1$} \UnaryInfC{$\vdash \top \oplus \top , \bot \lhd (\top \oplus \top)$}
\UnaryInfC{$\vdash \bot , (\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))$}
\AxiomC{} \UnaryInfC{$\vdash \top$} \LeftLabel{${\mathsf{P}_\oplus}_2$} \UnaryInfC{$\vdash \top \oplus \top$} \UnaryInfC{$\vdash \bot , (\top \oplus \top)$}
\UnaryInfC{$\vdash \top , \bot \lhd (\top \oplus \top)$} \LeftLabel{${\mathsf{P}_\oplus}_2$} \UnaryInfC{$\vdash \top \oplus \top , \bot \lhd (\top \oplus \top)$}
\UnaryInfC{$\vdash \bot , (\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))$}
\BinaryInfC{$\vdash \bot \& \bot , (\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))$}
\UnaryInfC{$\vdash \top , (\bot \& \bot) \lhd ((\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top)))$}
\UnaryInfC{$\vdash \top \oslash (\bot \& \bot) , (\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))$} \LeftLabel{${\mathsf{P}_\bindnasrepma}_2$} \UnaryInfC{$\vdash ((\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$} \UnaryInfC{$\vdash \bot , ((\top \oplus \top) \oslash (\bot \lhd (\top \oplus \top))) \bindnasrepma (\top \oslash (\bot \& \bot))$}
\UnaryInfC{$\vdash \bot \lhd (\top \oplus \top) , \bot \lhd (\top \oplus \top) , \top \oslash (\bot \& \bot)$} \end{prooftree} \end{footnotesize}
We will later give categorical semantics to \textsf{WS1}, and so the above proof provides a categorical account of this Boolean reference cell, using a final coalgebraic property of the exponential.
We may use this proof to interpret declaration of a Boolean reference in either call-by-name or call-by-value settings, by composition (cut) with (the translation of) a term-in-context of the form $\Gamma,x:\mathbf{var} \vdash M:T$. Thus we may translate the recursion-free fragments of \emph{Idealized Algol} \cite{Rey_ALGOL} and \emph{Reduced ML} over finite datatypes into \textsf{WS1}, for example.
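The behaviour of this reference cell can also be pictured in more familiar functional-programming terms. The following Haskell fragment is only an informal sketch of the strategy's input/output behaviour (the names \texttt{Cell} and \texttt{mkCell} are ours, not part of \textsf{WS1} or of any existing library); the argument threaded through the corecursive calls plays the role of the Boolean propagated from one method call to the next by the anamorphism.
\begin{verbatim}
-- A cell offers two methods: "readB" answers with the current Boolean,
-- "writeB" accepts a new one; both hand back a cell for further calls.
data Cell = Cell { readB  :: (Bool, Cell)
                 , writeB :: Bool -> Cell }

-- mkCell b unfolds the infinite behaviour from the current contents b,
-- mirroring the application of the anamorphism rule to p_read and p_write.
mkCell :: Bool -> Cell
mkCell b = Cell { readB  = (b, mkCell b)        -- read: return b, propagate b
                , writeB = \b' -> mkCell b' }   -- write: ignore b, propagate b'
\end{verbatim}
\noindent For instance, \texttt{fst (readB (writeB (mkCell False) True))} evaluates to \texttt{True}.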
\subsection{State Encapsulation}
\textsf{WS1} is more expressive than total, finitary Idealized Algol: for instance, we may use the anamorphism rule to capture structures such as stacks, capable of storing an arbitrarily large amount of data. A generalised programming construct which corresponds to this capability is the \emph{encapsulation} operation which appears as the $\mathsf{thread}$ operator in \cite{Wol_OO}, and as the $\mathsf{encaps}$ strategy in \cite{Long_PLGM} where it is used for constructing imperative objects in a model based on the same underlying notion of game as used here. The operator has type $$(s \rightarrow (o \times s)) \rightarrow s \rightarrow (1 \rightarrow o).$$ Here $s$ is the type of the object's internal state. The first argument represents an object which takes an explicit state of type $s$, and returns a value of type $o$, together with an updated state. The second argument represents an initial state. Encapsulation returns an object of type $1 \rightarrow o$ (a ``thunk'' of type $o$) in which the state $s$ is encapsulated --- i.e. hidden from the environment, but shared between separate invocations of the object. On first invocation (unthunking) the initial state is used as the input state, and thereafter, each fresh call receives the output state from the previous invocation as its input.
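Operationally, the behaviour of $\mathsf{encaps}$ can be sketched in Haskell using a mutable reference. This is only an illustration of the intended behaviour at the stated type (the use of \texttt{IO} and \texttt{IORef} is our choice; in the games model the state is hidden inside the strategy rather than in a store).
\begin{verbatim}
import Data.IORef

-- encaps step s0 returns a thunk; each forcing of the thunk applies step
-- to the stored state, returns the output and stores the updated state.
encaps :: (s -> (o, s)) -> s -> IO (IO o)
encaps step s0 = do
  ref <- newIORef s0
  pure $ do
    s <- readIORef ref        -- the first call sees s0; later calls see
    let (o, s') = step s      -- the state left behind by the previous call
    writeIORef ref s'
    pure o
\end{verbatim}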
We can represent this operation in \textsf{WS1} using the $\mathsf{P}_\mathsf{ana}$ rule. To do this, we consider a call-by-value interpretation of types. We may translate call-by-value types as positive formulas of \textsf{LLP}: $\phi^+(1) = \mathbf{1}$, $\phi^+(A \times B) = \phi^+(A) \otimes \phi^+(B)$ and $\phi^+(A \rightarrow B) = {!}(\phi^+(A)^\perp \bindnasrepma \uparrow \phi^+(B))$\footnote{This is
slightly different to the original embedding presented in
\cite{Lau_PG}, which uses $?$ rather than $\uparrow$ in the
translation of $\rightarrow$, allowing first-class continuations to
be interpreted (the $\lambda\mu$-calculus). The translation adopted
here is a form of \emph{linear CPS interpretation}.}. Thus by composition with the embedding of \textsf{LLP} in \textsf{WS1}, we may translate the types $s$ and $o$ as the families of WS-formulas $i \circ \phi^+(s)$ and $i \circ \phi^+(o)$. Let us assume for simplicity that these are singleton families $\{S\}$ and $\{O\}$ respectively (i.e. $s$ and $o$ represent products of function types). Then $\mathsf{encaps}$ may be translated as a proof of $\vdash \bot, \top \oslash i \circ \phi^+(1 \rightarrow o), S^\perp, i \circ \phi^+(s \rightarrow (o \times s))^\perp$ --- i.e. $\vdash \bot, \top \oslash ! \uparrow (\mathbf{0} \bindnasrepma \downarrow O) , S^\perp , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp))$ --- as follows:
\begin{scriptsize} \begin{prooftree}
\AxiomC{$a$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !\uparrow(S^\perp \bindnasrepma \downarrow(O \otimes S)) , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp))$}
\AxiomC{$b$}
\LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash \uparrow(\downarrow O) , !\uparrow(S^\perp \bindnasrepma \downarrow(O \otimes S)) , S , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{con}^?$} \UnaryInfC{$\vdash \uparrow(\downarrow O) , !\uparrow(S^\perp \bindnasrepma \downarrow(O \otimes S)) , S , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , S^\perp$} \UnaryInfC{$\vdash \uparrow(\downarrow O) , !\uparrow(S^\perp \bindnasrepma \downarrow(O \otimes S)) \otimes S , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) \bindnasrepma S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{ana}$} \UnaryInfC{$\vdash !\uparrow(\downarrow O) , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) \bindnasrepma S^\perp$}
\LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash !\uparrow(\mathbf{0} \bindnasrepma \downarrow O) , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) \bindnasrepma S^\perp$}
\LeftLabel{$\mathsf{P}_\bindnasrepma^\mathsf{T}$} \UnaryInfC{$\vdash !\uparrow(\mathbf{0} \bindnasrepma
\downarrow O) , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma
S^\perp)) , S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash !\uparrow(\mathbf{0} \bindnasrepma
\downarrow O) , S^\perp , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma
S^\perp))$} \UnaryInfC{$\vdash \top , !\uparrow(\mathbf{0} \bindnasrepma
\downarrow O) , S^\perp , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma
S^\perp))$} \UnaryInfC{$\vdash \bot , \top \oslash !\uparrow(\mathbf{0} \bindnasrepma
\downarrow O) , S^\perp , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma
S^\perp))$}
\end{prooftree} \end{scriptsize}
\noindent where $a$ is the evident isomorphism $\vdash ! \uparrow (\mathbf{0} \bindnasrepma \downarrow O) , ? \downarrow \uparrow O^\perp$ and $b$ is:
\begin{footnotesize} \begin{prooftree} \AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash S , S^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash O , O^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash S , S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{mul}$} \BinaryInfC{$\vdash O , S , O^\perp , S^\perp$} \UnaryInfC{$\vdash O \oslash S , O^\perp , S^\perp$} \LeftLabel{$\mathsf{P}_\bindnasrepma^\mathsf{T}$} \UnaryInfC{$\vdash O \oslash S , O^\perp \bindnasrepma S^\perp$} \UnaryInfC{$\vdash \top , O \oslash S , O^\perp \bindnasrepma S^\perp$}
\UnaryInfC{$\vdash O^\perp \bindnasrepma S^\perp \bindnasrepma (\downarrow O \oslash S)$} \UnaryInfC{$\vdash \bot , O^\perp \bindnasrepma S^\perp , \downarrow O \oslash S$}
\LeftLabel{${\mathsf{P}_\mathsf{mul}}_\otimes$} \BinaryInfC{$\vdash S \otimes \uparrow(O^\perp \bindnasrepma S^\perp) , \downarrow O \oslash S, S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash S \otimes \uparrow(O^\perp \bindnasrepma S^\perp) , S^\perp , \downarrow O \oslash S$} \UnaryInfC{$\vdash \top , S \otimes \uparrow(O^\perp \bindnasrepma S^\perp) , \downarrow O \oslash S, S^\perp$} \UnaryInfC{$\vdash \downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , \downarrow O \oslash S, S^\perp$} \UnaryInfC{$\vdash \bot , (\downarrow O \oslash S) \bindnasrepma \downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) \bindnasrepma S^\perp$} \UnaryInfC{$\vdash \uparrow(\downarrow O) , S , \downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , S^\perp$} \LeftLabel{$\mathsf{P}_\mathsf{der}^?$} \UnaryInfC{$\vdash \uparrow(\downarrow O) , S , ?\downarrow(S \otimes \uparrow(O^\perp \bindnasrepma S^\perp)) , S^\perp$} \end{prooftree} \end{footnotesize}
\subsection{Coroutines} We may also give a proof denoting a \emph{coroutining} operation, permitting a form of deterministic multithreading, defined as a strategy in \cite{Lai_COC,Lai_FPC}.
In a call-by-name setting, this corresponds to an operation taking
two terms $s$, $t$ of type $\mathbf{com} \rightarrow \mathbf{com}$, and returning a
command which runs $s$: when (and if) $s$ calls its argument, control
passes to $t$. When $t$ calls its
argument, control is passed back to $s$, and so on, until either
$s$ or $t$ terminates.
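A minimal functional sketch of this control discipline (our own abstraction, not the strategy itself) represents a term of type $\mathbf{com} \rightarrow \mathbf{com}$ by the trace of its calls to its argument: it either stops, or calls its argument and then resumes. Coroutine composition then simply swaps the two traces at every such call.
\begin{verbatim}
-- A resumption: a command that either terminates, or calls its
-- argument (itself of type com) and then resumes.
data Co = Stop | Call Co

-- cocomp runs the first coroutine; each call to its argument hands
-- control to the other one, until either of them stops.
cocomp :: Co -> Co -> ()
cocomp Stop     _ = ()          -- the running coroutine terminated
cocomp (Call k) t = cocomp t k  -- control passes to t; k waits its turn
\end{verbatim}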
We can define a coroutining operator $\mathsf{cocomp} \vdash \Sigma , ?(\Sigma^\perp \oslash !\Sigma) , ?(\Sigma^\perp \oslash !\Sigma)$, where $\Sigma = \bot \lhd \top$.
We first give a proof $o$ of $(!\Sigma \multimap \bot) \multimap !\Sigma$.
\begin{footnotesize} \begin{prooftree} \AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash \bot \lhd ?(\top \oslash \bot) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash \top , \bot \lhd ?(\top \oslash \bot) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash \top \oslash (\bot \lhd ?(\top \oslash \bot)) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash \bot , \top \oslash !(\bot \lhd \top) , \top \oslash (\bot \lhd ?(\top \oslash \bot))$} \UnaryInfC{$\vdash \bot \lhd \top , !(\bot \lhd \top) , \top \oslash (\bot \lhd ?(\top \oslash \bot))$} \UnaryInfC{$\vdash !(\bot \lhd \top) , \top \oslash (\bot \lhd ?(\top \oslash \bot))$} \UnaryInfC{$\vdash \top , !(\bot \lhd \top) , \top \oslash (\bot \lhd ?(\top \oslash \bot))$} \UnaryInfC{$\vdash \top \oslash !(\bot \lhd \top) , \top \oslash (\bot \lhd ?(\top \oslash \bot))$} \UnaryInfC{$\vdash \bot , \top \oslash (\bot \lhd ?(\top \oslash \bot)) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash \bot , \top , \bot \lhd ?(\top \oslash \bot) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash \bot \lhd \top , \bot \lhd ?(\top \oslash \bot) , \top \oslash !(\bot \lhd \top)$} \LeftLabel{$\mathsf{P}_\mathsf{ana}$} \UnaryInfC{$\vdash ! (\bot \lhd \top) , \top \oslash !(\bot \lhd \top)$} \UnaryInfC{$\vdash ! (\bot \lhd \top) \lhd (\top \oslash !(\bot \lhd \top))$} \end{prooftree} \end{footnotesize}
\noindent We next define a proof $o' \vdash (!\Sigma \multimap \Sigma) \multimap \bot \multimap !\Sigma$, which connects the output move of the first argument to the Player-move in the second argument.
\begin{footnotesize} \begin{prooftree}
\AxiomC{$o \vdash !\Sigma , \top \oslash !\Sigma$}
\AxiomC{} \UnaryInfC{$\vdash \bot , \top$}
\AxiomC{} \UnaryInfC{$\vdash !\Sigma , ?\Sigma^\perp$}
\LeftLabel{${\mathsf{P}_\mathsf{mul}}_\otimes$} \BinaryInfC{$\vdash \bot \otimes !\Sigma , ?\Sigma^\perp , \top$} \UnaryInfC{$\vdash \top , \bot , !\Sigma , ?\Sigma^\perp , \top$} \UnaryInfC{$\vdash (\top \oslash \bot) \oslash !\Sigma , ?\Sigma^\perp , \top$}
\UnaryInfC{$\vdash \bot , ?\Sigma^\perp , \top , (\top \oslash \bot) \oslash !\Sigma$} \UnaryInfC{$\vdash \bot \lhd ?\Sigma^\perp , \top , (\top \oslash \bot) \oslash !\Sigma$}
\LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash !\Sigma , \top , (\top \oslash \bot) \oslash !\Sigma$} \UnaryInfC{$\vdash !\Sigma \lhd \top , \Sigma^\perp \oslash !\Sigma$} \end{prooftree} \end{footnotesize}
\noindent We can then define $\mathsf{cocomp}$.
\begin{footnotesize} \begin{prooftree}
\AxiomC{} \UnaryInfC{$\vdash \top$} \UnaryInfC{$\vdash \bot , \top$}
\AxiomC{} \UnaryInfC{$\vdash \top$} \UnaryInfC{$\vdash \bot , \top$}
\BinaryInfC{$\vdash \bot \otimes \bot , \top$} \UnaryInfC{$\vdash \top , \bot , \bot , \top$} \UnaryInfC{$\vdash (\top \oslash \bot) \oslash \bot , \top$} \UnaryInfC{$\vdash \bot , \top , (\top \oslash \bot) \oslash \bot$} \UnaryInfC{$\vdash \Sigma , \Sigma^\perp \oslash \bot$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash \Sigma , \Sigma^\perp$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !\Sigma , ?\Sigma^\perp$} \AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash \bot , \top$} \LeftLabel{$\mathsf{P}_\multimap$} \BinaryInfC{$\vdash !\Sigma , ?\Sigma^\perp \oslash \bot, \top$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash !\Sigma , \top , ?\Sigma^\perp \oslash \bot$}
\LeftLabel{$\mathsf{P}_\multimap$} \BinaryInfC{$\vdash \Sigma , \Sigma^\perp \oslash !\Sigma , \top , ?\Sigma^\perp \oslash \bot$} \LeftLabel{$\mathsf{P}_\mathsf{sym}^+$} \UnaryInfC{$\vdash \Sigma , \top , \Sigma^\perp \oslash !\Sigma , ?\Sigma^\perp \oslash \bot$} \UnaryInfC{$\vdash \Sigma \lhd \top , \Sigma^\perp \oslash !\Sigma , ?\Sigma^\perp \oslash \bot$}
\LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash \Sigma , \Sigma^\perp \oslash !\Sigma , ? \Sigma^\perp \oslash \bot$}
\AxiomC{$o'$}
\LeftLabel{$\mathsf{P}_\mathsf{cut}$} \BinaryInfC{$\vdash \Sigma , \Sigma^\perp \oslash !\Sigma , \Sigma^\perp \oslash !\Sigma$} \end{prooftree} \end{footnotesize}
\label{cocompdef}
\label{ialgemb} \subsection{Specifying Properties of Programs} The formulas of \textsf{WS1} are more expressive than the types of languages such as Idealized Algol, and hence they enable the behaviour of history sensitive strategies to be specified both more abstractly and more precisely. For example, formulas can specify the order in which arguments are interrogated, how many times they are interrogated, and relationships between inputs and outputs of ground type (using the first-order structure).
\subsubsection{Data-Independent Programming} \label{aqdi} We can use quantifiers to represent \emph{data-independent} structures such as cells and stacks, where the underlying ground type at a given
$\mathcal{L}$-structure $L$ is $|L|$. As a formula/game, this ground type is represented by $\mathbf{V} = \bot \lhd \exists x . \top$ --- a dialogue in this game consists of Opponent playing a question move $q$
and Player responding with an element of $|L|$. We can represent a stream of such values using the formula $!\mathbf{V}$.
Let $\mathbf{Vi} = \forall x . \bot \lhd \top$ represent an `input version' of $\mathbf{V}$, where Opponent plays an $|L|$ value and Player then accepts it, analogous to $\mathbf{Bi}$ above. The type of a stack object can then be given by the formula ${!}(\mathbf{V} \& \mathbf{Vi})$, with a ``pop'' and a ``push'' method. We give a proof denoting the behaviour of such a stack, parametrised by a starting stack, of type $!\mathbf{V} \multimap {!}(\mathbf{V} \& \mathbf{Vi})$.
\begin{footnotesize} \begin{prooftree} \AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\vdash !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \LeftLabel{$\mathsf{P}_\mathsf{con}^!$} \UnaryInfC{$\vdash !(\bot \lhd \exists x . \top), !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \LeftLabel{$\mathsf{P}_\mathsf{der}^!$} \UnaryInfC{$\vdash \bot \lhd \exists x . \top, !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\AxiomC{} \LeftLabel{$\mathsf{P}_\mathsf{id}$} \UnaryInfC{$\{ x \} ; \vdash !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\UnaryInfC{$\{ x \} ; \vdash \top , !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \LeftLabel{$\mathsf{P}_\exists^x$} \UnaryInfC{$\{ x \} ; \vdash \exists x . \top , !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\UnaryInfC{$\{ x \} ; \vdash \bot , \exists x . \top , !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \UnaryInfC{$\{ x \} ; \vdash \bot \lhd \exists x . \top , !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \UnaryInfC{$\{ x \} ; \vdash !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\UnaryInfC{$\{ x \} ; \vdash \top , !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\UnaryInfC{$\{ x \} ; \vdash \bot , \top, !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \UnaryInfC{$\vdash \forall x . \bot , \top, !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \UnaryInfC{$\vdash \forall x . \bot \lhd \top, !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$}
\BinaryInfC{$\vdash (\bot \lhd \exists x . \top) \& (\forall x . \bot \lhd \top), !(\bot \lhd \exists x . \top) , ?(\top \oslash \forall x . \bot)$} \LeftLabel{$\mathsf{P}_\mathsf{ana}$} \UnaryInfC{$\vdash !((\bot \lhd \exists x . \top) \& (\forall x . \bot \lhd \top)), ?(\top \oslash \forall x . \bot)$} \end{prooftree} \end{footnotesize}
Once again, we use $\mathsf{P}_\mathsf{ana}$ to obtain the infinite behaviour, applied to a proof $q$ of $!\mathbf{V} \multimap (\mathbf{V} \& \mathbf{Vi}) \oslash !\mathbf{V}$. The strategy denoted by $q$ performs as `copycat' in the $!\mathbf{V} \multimap \mathbf{V} \oslash !\mathbf{V}$ component, and in the $!\mathbf{V} \multimap \mathbf{Vi} \oslash !\mathbf{V}$ component behaves as follows:
\[ \begin{array}{ccccc}
!\mathbf{V} & \multimap & \mathbf{Vi} & \oslash & !\mathbf{V} \\
& & \mathtt{in}(v) & & \\
& & \mathtt{ok} & & \\
& & & & \mathtt{q} \\
& & & & v \\
\end{array} \]
\noindent and then enters copycat.
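The same behaviour can be pictured as the following Haskell unfold, a sketch with names of our own choosing, in which we assume the backing supply is an infinite stream so that a pop can always be answered.
\begin{verbatim}
-- A stack object with a "pop" and a "push" method, parametrised by a
-- backing stream of values, mirroring the type !V -o !(V & Vi).
data StackObj v = StackObj { pop  :: (v, StackObj v)
                           , push :: v -> StackObj v }

-- mkStack is an unfold from the current supply: pop answers with the
-- head and continues from the tail; push prepends the given value.
mkStack :: [v] -> StackObj v
mkStack (x:xs) = StackObj { pop  = (x, mkStack xs)
                          , push = \v -> mkStack (v : x : xs) }
mkStack []     = error "backing supply assumed infinite"
\end{verbatim}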
\subsubsection{Good Variables}
One respect in which the game semantics of Idealized Algol (and other imperative languages) fails to reflect its syntax fully is in the existence in the model of \emph{bad variables} which do not return the last value assigned to them \cite{AMc_LSS}. In \textsf{WS1} we may define formulas for which the only proof denotes a good variable.
The formula $\mathbf{worm} = \mathbf{Bi} \oslash {!}\mathbf{B}$ represents a Boolean variable which can be written once, then read many times. One proof/strategy of this formula will indeed be a valid Boolean cell: if Opponent plays \texttt{inputX} then Player responds with \texttt{ok}; if Opponent then tries to read the cell with \texttt{q}, Player responds with \texttt{X}. But there are also bad variables: for example, the read method may always return \texttt{True} regardless of what was written.
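In Haskell one can write down this space of behaviours directly (a sketch, with names of our own choosing): both the good variable and the bad variable just described inhabit the corresponding type, which is precisely why the refinement with atoms below is useful.
\begin{verbatim}
-- The formula Bi followed by !B, read as a type: first accept a Boolean,
-- then answer reads forever.
newtype Worm     = Worm     { writeOnce :: Bool -> ReadMany }
newtype ReadMany = ReadMany { readCell  :: (Bool, ReadMany) }

-- The good variable: every read returns the value that was written.
goodWorm :: Worm
goodWorm = Worm (\b -> let r = ReadMany (b, r) in r)

-- A bad variable: reads always answer True, ignoring the write.
badWorm :: Worm
badWorm = Worm (\_ -> let r = ReadMany (True, r) in r)
\end{verbatim}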
To exclude such behaviour, we can replace the input/output moves with atoms. Define $\mathbf{B}^{\phi,\psi} = \bot \lhd (\overline{\phi} \oplus \overline{\psi})$ and $\mathbf{Bi}^{\phi,\psi} = (\phi \& \psi) \lhd \top$, with $\mathbf{worm}^{\phi,\psi} = \mathbf{Bi}^{\phi,\psi} \oslash {!}\mathbf{B}^{\phi,\psi}$. If $\phi$ and $\psi$ are assigned $\mathtt{tt}$, then this denotes the same dialogue as $\mathbf{worm}$. However, the denotation of any proof of $\mathbf{worm}^{\phi,\psi}$ at such a model must be the good variable strategy. The rule for $\phi$ (and semantically, uniformity of strategies) ensures that $\phi$ must be played before $\overline{\phi}$, and $\psi$ before $\overline{\psi}$. Consequently, Player can only respond with a particular Boolean value in the \texttt{read} component if that same value has previously been given as an input in the \texttt{write} component, so good-variable behaviour is assured. The following proof of this formula uses only the core rules and the promotion rule.
\begin{footnotesize} \begin{prooftree} \AxiomC{} \UnaryInfC{$\overline{\phi} \vdash \top$} \UnaryInfC{$\overline{\phi} \vdash \overline{\phi}$} \LeftLabel{${\mathsf{P}_\oplus}_1$} \UnaryInfC{$\overline{\phi} \vdash \overline{\phi} \oplus \overline{\psi}$} \UnaryInfC{$\overline{\phi} \vdash \bot , \overline{\phi} \oplus \overline{\psi}$} \UnaryInfC{$\overline{\phi} \vdash \bot \lhd \overline{\phi} \oplus \overline{\psi}$} \LeftLabel{$\mathsf{prom}$} \UnaryInfC{$\overline{\phi} \vdash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\phi} \vdash \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\phi} \vdash \top \oslash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\phi} \vdash \bot , \top \oslash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\phi} \vdash \bot , \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\vdash \phi , \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$}
\AxiomC{} \UnaryInfC{$\overline{\psi} \vdash \top$} \UnaryInfC{$\overline{\psi} \vdash \overline{\psi}$} \LeftLabel{${\mathsf{P}_\oplus}_2$} \UnaryInfC{$\overline{\psi} \vdash \overline{\phi} \oplus \overline{\psi}$} \UnaryInfC{$\overline{\psi} \vdash \bot , \overline{\phi} \oplus \overline{\psi}$} \UnaryInfC{$\overline{\psi} \vdash \bot \lhd \overline{\phi} \oplus \overline{\psi}$} \LeftLabel{$\mathsf{prom}$} \UnaryInfC{$\overline{\psi} \vdash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\psi} \vdash \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\psi} \vdash \top \oslash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\psi} \vdash \bot , \top \oslash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\overline{\psi} \vdash \bot , \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\vdash \psi , \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$}
\BinaryInfC{$\vdash (\phi \& \psi) , \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\vdash (\phi \& \psi) \lhd \top , !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \UnaryInfC{$\vdash ((\phi \& \psi) \lhd \top) \oslash !(\bot \lhd \overline{\phi} \oplus \overline{\psi})$} \end{prooftree} \end{footnotesize}
\noindent We cannot use $!$ to obtain a formula which admits only an arbitrarily reusable `good variable', but we can obtain finite approximations. For example, the formula $$\mathbf{Bi}^{\alpha,\beta} \oslash (\mathbf{worm}^{\phi,\psi} ~ \& ~ (\mathbf{B}^{\alpha,\beta} \oslash \mathbf{worm}^{\phi,\psi}) ~ \& ~ (\mathbf{B}^{\alpha,\beta} \oslash (\mathbf{B}^{\alpha,\beta} \oslash \mathbf{worm}^{\phi,\psi})))$$ models a good variable that can be written to twice, and can be read at most twice before the second write. Strategies on such formulas then approximate our reusable cell strategy above on $!(\mathbf{B} \& \mathbf{Bi})$.
\section{Categorical Semantics for \textsf{WS1}} \label{sec:catsemantics}
To give a formal semantics for our logic, we first introduce a notion of categorical model which captures everything except the first-order structure (quantifiers and atoms). We shall use notation $\eta : F \Rightarrow G : \mathcal{C} \rightarrow \mathcal{D}$ to mean $\eta$ is a natural transformation from $F$ to $G$ with $F,G : \mathcal{C} \rightarrow \mathcal{D}$.
First, we define some categories of games that will form the intended instance of our categorical model. Objects in these categories will be negative games, and an arrow $A \rightarrow B$ will be a strategy on $A \multimap B$. We can compose strategies using ``parallel composition plus hiding''. Suppose $\sigma : A \multimap B$ and $\tau
: B \multimap C$, and define $$\sigma \| \tau = \{ s \in (M_A + M_B + M_C)^\ast : s|_{1,2} \in \sigma \wedge s|_{2,3} \in \tau \},$$ where $s|_{i,j}$ denotes the restriction of $s$ to the moves of the $i$th and $j$th components,
and set $$\tau \circ \sigma = \{ s|_{1,3} : s \in \sigma \| \tau \}.$$ It is well-known that $\tau \circ \sigma$ is a well-formed strategy on $A \multimap C$ (see e.g. \cite{AJ_MLL}).
\begin{proposition}
Composition is associative, and there is an identity $A \rightarrow
A$ given by the copycat strategy: $\{ s \in P_{A \multimap A} :
\gamma(s) \}$ where $\gamma(s)$ holds if and only if $t|_1 = t|_2$
for all even-length prefixes $t$ of $s$. \end{proposition}
\begin{definition}
The category $\mathcal{G}$ has negative games as objects, and a map
$\sigma : A \rightarrow B$ is a strategy on $A \multimap B$ with
composition and identity as above. \end{definition}
\noindent This category has been studied extensively in e.g. \cite{Lam_SGL,Cur_SS,Long_PLGM}, and has equivalent presentations using graph games \cite{HS_GG} and locally Boolean domains \cite{Lai_LBD}.
If $A$, $B$ and $C$ are bounded and $\sigma : A \multimap B$ and $\tau : B \multimap C$ are total, then $\tau \circ \sigma$ is also total. Total strategies do not compose for unbounded games, however. Winning strategies on unbounded games do compose \cite{Hy_GS}, and the identity strategy is winning.
\noindent \begin{definition}
The category $\mathcal{W}$ has negative games as objects and winning
strategies as maps. \end{definition}
\noindent A map $\sigma : A \rightarrow B$ is \emph{strict} if it responds to Opponent's first move with a move in $A$, if it responds at all. Strict strategies are closed under composition and the identity is strict.
\begin{definition}
The category $\mathcal{G}_s$ has negative games as objects and
strict strategies as maps.
The category $\mathcal{W}_s$ has negative games as objects and
strict winning strategies as maps. \end{definition}
\noindent Isomorphisms in $\mathcal{W}$ correspond to forest isomorphisms and all isomorphisms are total and strict \cite{Lau_CIT}.
Each of the above categories can be endowed with symmetric monoidal structure, given by $(I, \otimes)$ where $I$ is the empty game $\mathbf{1}$ and the action of $\otimes$ on objects is as defined in Section \ref{gameconnectives}.
\subsection{Sequoidal Closed Structure}
The notions of \emph{sequoidal category} and \emph{sequoidal closed category} were first introduced in \cite{Lai_HOS}.
\begin{definition} A \emph{sequoidal category} consists of: \begin{itemize} \item A symmetric monoidal category $(\mathcal{C}, I, \otimes)$ (we will call the relevant isomorphisms $\mathsf{assoc} : (A \otimes B) \otimes C \cong A \otimes (B \otimes C)$, $\mathsf{lunit}_\otimes : I \otimes A \cong A$, $\mathsf{runit}_\otimes : A \otimes I \cong A$ and $\mathsf{sym} : A \otimes B \cong B \otimes A$) \item A category $\mathcal{C}_s$ \item A right-action $\varoslash$ of $\mathcal{C}$ on $\mathcal{C}_s$. That is, a functor $\_\varoslash\_ : \mathcal{C}_s \times \mathcal{C} \rightarrow \mathcal{C}_s$ with natural isomorphisms $\mathsf{unit}_\oslash : A \varoslash I \cong A$ and $\mathsf{pasc} : A \varoslash (B \otimes C) \cong (A \varoslash B) \varoslash C$ satisfying the following coherence conditions \cite{JK_Act}:
\begin{diagram} A \oslash (B \otimes (C \otimes D)) & \rTo^{\mathsf{pasc}} & (A \oslash B) \oslash (C \otimes D) & \rTo^{\mathsf{pasc}} & ((A \oslash B) \oslash C) \oslash D \\ \dTo^{\mathsf{id} \oslash \mathsf{assoc}} & & & \ruTo^{\mathsf{pasc} \oslash \mathsf{id}} \\ A \oslash ((B \otimes C) \otimes D) & \rTo^{\mathsf{pasc}} & (A \oslash (B \otimes C)) \oslash D \\ \end{diagram} \begin{diagram} A \oslash (I \otimes B) & \rTo^{\mathsf{pasc}} & (A \oslash I) \oslash B & & & A \oslash (B \otimes I) & \rTo^\mathsf{pasc} & (A \oslash B) \oslash I \\ \dTo^{\mathsf{id} \oslash \mathsf{lunit}_\otimes} & \ldTo^{\mathsf{unit}_\oslash \oslash \mathsf{id}} & & \hspace{5pt} & & \dTo^{\mathsf{id} \oslash \mathsf{runit}_\otimes} & \ldTo^{\mathsf{unit}_\oslash} \\ A \oslash B & & & & & A \oslash B \\ \end{diagram} \item A functor $J : \mathcal{C}_s \rightarrow \mathcal{C}$ \item A natural transformation $\mathsf{wk} : J(\_) \otimes \_ \Rightarrow J(\_ \varoslash \_)$ satisfying further coherence conditions \cite{Lai_HOS}: \begin{diagram} JA \otimes I & \rTo^{\mathsf{runit}_\otimes} & JA & & (JA \otimes B) \otimes C & \rTo^{\mathsf{wk} \otimes \mathsf{id}} & J(A \oslash B) \otimes C & \rTo^{\mathsf{wk}} & J((A \oslash B) \oslash C) \\ \dTo^{\mathsf{wk}} & \ruTo^{J(\mathsf{unit}_\oslash)} & & \hspace{2pt} & \dTo^{\mathsf{assoc}} & & & \ruTo^{J(\mathsf{pasc})} \\ J(A \oslash I) & & & & JA \otimes (B \otimes C) & \rTo^{\mathsf{wk}} & J(A \oslash (B \otimes C)) \\ \end{diagram}
\end{itemize} \end{definition}
\begin{definition}
An \emph{inclusive sequoidal category} is a sequoidal category in
which $\mathcal{C}_s$ is a full-on-objects subcategory of
$\mathcal{C}$ containing $\mathsf{wk}$ and the monoidal isomorphisms; $J$ is
the inclusion functor; and $J$ reflects isomorphisms. \end{definition}
\noindent We can identify this structure in our categories of games: we can extend the left-merge operator $\oslash$ to an action $\mathcal{G}_s \times \mathcal{G} \rightarrow \mathcal{G}_s$. If $\sigma : A \rightarrow B$ and $\tau : C \rightarrow D$ then $\sigma \oslash \tau : A \oslash C \rightarrow B \oslash D$ plays as $\sigma$ between $A$ and $B$ and as $\tau$ between $C$ and $D$. The strictness of $\sigma$ guarantees that this yields a valid strategy on $(A \oslash C) \multimap (B \oslash D)$. The isomorphisms $\mathsf{pasc}$ and $\mathsf{unit}_\oslash$ exist, and there is a natural copycat strategy $\mathsf{wk} : M \otimes N \rightarrow M \oslash N$ in $\mathcal{G}_s$, all satisfying the required axioms \cite{Lai_FPC}. The functor $J$ reflects isomorphisms as the inverse of strict isomorphisms are strict. Thus $(\mathcal{G},\mathcal{G}_s)$ forms an inclusive sequoidal category; as does $(\mathcal{W},\mathcal{W}_s)$.
\begin{definition}
An inclusive sequoidal category is \emph{Cartesian} if
$\mathcal{C}_s$ has finite products preserved by $J$ (we will write
$t_A$ for the unique map $A \rightarrow 1$). It is
\emph{decomposable} if the natural transformations $\mathsf{dec} = \langle
\mathsf{wk} , \mathsf{wk} \circ \mathsf{sym} \rangle : A \otimes B \Rightarrow (A \varoslash
B) \times (B \varoslash A) : \mathcal{C}_s \times \mathcal{C}_s
\rightarrow \mathcal{C}_s$ and $\mathsf{dec}^0 = \mathsf{t}_I : I
\Rightarrow 1 : \mathcal{C}_s$ are isomorphisms (so, in particular,
$(\mathcal{C},\otimes,I)$ is an affine SMC).
A Cartesian sequoidal category is \emph{distributive} if the natural
transformations $\mathsf{dist} = \langle \pi_1 \varoslash \mathsf{id}_C , \pi_2
\varoslash \mathsf{id}_C \rangle : (A \times B) \varoslash C \Rightarrow (A
\varoslash C) \times (B \varoslash C) : \mathcal{C}_s \times
\mathcal{C}_s \times \mathcal{C} \rightarrow \mathcal{C}_s$ and
$\mathsf{dist}_0 = \mathsf{t}_{1 \varoslash C} : 1 \varoslash C \Rightarrow
1 : \mathcal{C} \rightarrow \mathcal{C}_s$ are isomorphisms. \end{definition}
\noindent We write $\mathsf{dist}^0 : I \oslash C \cong I$ for the isomorphism $(\mathsf{dec}^0)^{-1} \circ \mathsf{dist}_0 \circ (\mathsf{dec}^0 \oslash \mathsf{id})$.
In the game categories defined above, $M \& N$ is a product of $M$ and $N$, and the empty game $I$ is a terminal object as well as the monoidal unit. The decomposability and distributivity isomorphisms above exist as natural copycat morphisms \cite{Lai_FPC}. In fact, $\mathcal{W}$ and $\mathcal{G}$ have all small products, following the construction in Section \ref{gameconnectives}, with the corresponding distributivity isomorphism with respect to $\oslash$.
\begin{definition} A \emph{sequoidal closed category} is an inclusive sequoidal category where $\mathcal{C}$ is symmetric monoidal closed and the map $f \mapsto \Lambda(f \circ \mathsf{wk})$ defines a natural isomorphism $\Lambda_s : \mathcal{C}_s(B \varoslash A, C) \Rightarrow \mathcal{C}_s(B, A \multimap C)$. \end{definition}
\noindent We can show that $\mathcal{G}$ and $\mathcal{W}$ are sequoidal closed, with the internal hom given by $\multimap$ \cite{Lai_FPC}.
In any sequoidal closed category, define $\mathsf{app}_s : (A \multimap B) \varoslash A \rightarrow B$ as $\Lambda_s^{-1}(\mathsf{id})$, and $\mathsf{app} : (A \multimap B) \otimes A \rightarrow B = \Lambda^{-1}(\mathsf{id})$, noting that $\mathsf{app} = \mathsf{app}_s \circ \mathsf{wk}$. If $f : A \rightarrow B$ let $\Lambda_I(f) : I \rightarrow A \multimap B$ denote the name of $f$, i.e. $\Lambda(f \circ \mathsf{runit}_\otimes)$. Write $\Lambda_I^{-1}$ for the inverse operation.
\begin{proposition}
In any sequoidal closed category, $\multimap$ restricts to a functor $\mathcal{C}^\mathsf{op} \times \mathcal{C}_s \rightarrow \mathcal{C}_s$ with natural isomorphisms $\mathsf{unit}_\multimap : I \multimap A \cong A$ and $\mathsf{pasc}_\multimap : A \otimes B \multimap C \cong A \multimap (B \multimap C)$ in $\mathcal{C}_s$. \end{proposition} \begin{proof}
We need to show that if $g$ is in $\mathcal{C}_s$ then $f \multimap g$ is in $\mathcal{C}_s$. But $f \multimap g = \Lambda(g \circ \mathsf{app} \circ (\mathsf{id} \otimes f)) = \Lambda(g \circ \mathsf{app}_s \circ \mathsf{wk} \circ (\mathsf{id} \otimes f)) = \Lambda(g \circ \mathsf{app}_s \circ (\mathsf{id} \oslash f) \circ \mathsf{wk}) = \Lambda_s (g \circ \mathsf{app}_s \circ (\mathsf{id} \oslash f))$ which is in $\mathcal{C}_s$.
In any symmetric monoidal category the isomorphisms $\mathsf{unit}_\multimap$ and $\mathsf{pasc}_\multimap$ exist, but we must show that they are strict. \begin{itemize} \item $\mathsf{unit}_\multimap : I \multimap A \rightarrow A$ is given by $\mathsf{app} \circ \mathsf{runit}_\otimes^{-1}$. This $\mathsf{app}_s \circ \mathsf{wk} \circ \mathsf{runit}_\otimes^{-1} = \mathsf{app}_s \circ \mathsf{unit}_\oslash^{-1}$ which is a map in $\mathcal{C}_s$.
\item $\mathsf{pasc}_\multimap : A \otimes B \multimap C \cong A \multimap (B \multimap C)$ is given by $\Lambda(\Lambda(\mathsf{app} \circ \mathsf{assoc})) = \Lambda(\Lambda(\mathsf{app}_s \circ \mathsf{wk} \circ \mathsf{assoc})) = \Lambda(\Lambda(\mathsf{app}_s \circ \mathsf{pasc}^{-1} \circ \mathsf{wk} \circ (\mathsf{wk} \otimes \mathsf{id}))) = \Lambda(\Lambda(\mathsf{app}_s \circ \mathsf{pasc}^{-1} \circ \mathsf{wk}) \circ \mathsf{wk}) = \Lambda_s(\Lambda_s (\mathsf{app}_s \circ \mathsf{pasc}^{-1}))$ which is in $\mathcal{C}_s$.
\end{itemize} The inverses of the above maps are strict as $J$ reflects isomorphisms. \qed \end{proof}
\label{WScatdef}
\noindent In distributive, decomposable sequoidal closed categories we can also define the following natural transformations: \begin{itemize} \item The isomorphism $\mathsf{psym} : (A \oslash B) \oslash C \cong (A \oslash C) \oslash B$ given by $\mathsf{pasc} \circ (\mathsf{id} \oslash \mathsf{sym}) \circ \mathsf{pasc}^{-1}$. \item The isomorphism $\mathsf{psym}_\multimap : C \multimap (B \multimap A) \cong B \multimap (C \multimap A)$ given by $\mathsf{pasc}_\multimap \circ (\mathsf{sym} \multimap \mathsf{id}) \circ \mathsf{pasc}_\multimap^{-1}$. \item The isomorphism $\mathsf{dist}_\multimap : A \multimap (B \times C) \rightarrow (A \multimap B) \times (A \multimap C)$ given by $\langle \mathsf{id} \multimap \pi_1 , \mathsf{id} \multimap \pi_2 \rangle$, whose inverse is $\Lambda\langle \mathsf{app} \circ (\pi_1 \otimes \mathsf{id}) , \mathsf{app} \circ (\pi_2 \otimes \mathsf{id}) \rangle$. This isomorphism exists in any monoidal closed category with products. \item The map $\mathsf{af} : A \Rightarrow I$ given by $(\mathsf{dec}^0)^{-1} \circ \mathsf{t}_A$. \item The isomorphism $\mathsf{dist}_\multimap^0 : A \multimap I \rightarrow I$ given by $\mathsf{af}$, whose inverse is $\Lambda(\mathsf{runit}_\otimes \circ (\mathsf{id} \otimes \mathsf{af}))$. We must check that these are inverses: $\mathsf{af} \circ \Lambda(\mathsf{runit}_\otimes \circ (\mathsf{id} \otimes \mathsf{af})) = \mathsf{id}$ as both are maps into the terminal object, and $\Lambda(\mathsf{runit}_\otimes \circ (\mathsf{id} \otimes \mathsf{af})) \circ \mathsf{af} = \Lambda(\mathsf{runit}_\otimes \circ (\mathsf{id} \otimes \mathsf{af}) \circ (\mathsf{af} \otimes \mathsf{id})) = \Lambda(\mathsf{app}) = \mathsf{id}$ as required. We know that $\mathsf{runit}_\otimes \circ (\mathsf{id} \otimes \mathsf{af}) \circ (\mathsf{af} \otimes \mathsf{id}) = \mathsf{app}$ as both are maps into the terminal object. \end{itemize}
We can use the structure described above to model the negative connectives of \textsf{WS1}. We will represent positive connectives indirectly, inspired by the fact that strategies on the positive game $P$ correspond to strategies on the negative game $\uparrow P = P^\perp \multimap o$ where $o$ is the one-move game $\bot$. The object $o$ satisfies a special property: an internalised version of \emph{linear functional extensionality} \cite{A_ADFC}.
\begin{definition} An object $o$ in a sequoidal closed category satisfies \emph{linear functional extensionality} if the natural transformation $\mathsf{lfe} : (B \multimap o) \oslash A \Rightarrow (A \multimap B) \multimap o : \mathcal{C} \times \mathcal{C}^\mathsf{op} \rightarrow \mathcal{C}_s$ given by $\Lambda_s (\mathsf{app}_s \circ (\mathsf{id} \oslash \mathsf{app}) \circ (\mathsf{id} \oslash \mathsf{sym}) \circ \mathsf{pasc}^{-1})$ is an isomorphism. \end{definition}
\noindent The linear functional extensionality property is characteristic of our \emph{history sensitive, locally alternating} games model \cite{Lai_FPC}: it does not hold in other sequoidal closed categories (e.g. Conway games \cite{Lai_HOS}).
Using linear functional extensionality we can give a natural isomorphism $\mathsf{abs} : o \oslash A \cong o$ by noticing that $o \oslash A \cong (I \multimap o) \oslash A \cong (A \multimap I) \multimap o \cong I \multimap o \cong o$, and thus setting $\mathsf{abs} = \mathsf{unit}_\multimap \circ ((\mathsf{dist}_\multimap^0)^{-1} \multimap \mathsf{id}) \circ \mathsf{lfe} \circ (\mathsf{unit}_\multimap^{-1} \oslash \mathsf{id})$.
\subsection{Coalgebraic Exponential Comonoid}
We next consider the categorical status of the exponential operator $!$. We interpret the core introduction rules for the exponentials, and the key anamorphism rule, by requiring that ${!}N$ is the carrier of a \emph{final coalgebra} of the functor $X \mapsto N \oslash X$.
Recall that a coalgebra for a functor $F : \mathcal{C} \rightarrow \mathcal{C}$ is an object $A$ and a map $A \rightarrow F(A)$. A \emph{final coalgebra} is a terminal object in the category of coalgebras, that is a coalgebra $\alpha : Z \rightarrow F(Z)$ such that for any $f : A \rightarrow F(A)$ there is a unique $\leftmoon f \rightmoon : A \rightarrow Z$ such that $\alpha \circ \leftmoon f \rightmoon = F(\leftmoon f \rightmoon) \circ f$. \begin{diagram} A & \rTo^{f} & F(A) \\ \dTo^{\leftmoon f \rightmoon} & & \dTo_{F(\leftmoon f \rightmoon)} \\ Z & \rTo_{\alpha} & F(Z) \\ \end{diagram} We call $\leftmoon f \rightmoon$ the \emph{anamorphism} of $f$. Note in particular that if $(Z,\alpha)$ is a final coalgebra for $F$, then $\alpha$ is an isomorphism, with inverse $\alpha^{-1} = \leftmoon F(\alpha) \rightmoon$.
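The same scheme is familiar from functional programming, where the greatest fixpoint of a functor comes equipped with an unfold. The following Haskell sketch (standard folklore, with names of our own choosing) makes the analogy with $\mathop{!}N$ explicit by instantiating the functor to pairing with $N$, a rough analogue of $N \oslash \_$.
\begin{verbatim}
{-# LANGUAGE DeriveFunctor #-}

-- Greatest fixpoint of a functor, with its final-coalgebra structure "out".
newtype Nu f = Nu { out :: f (Nu f) }

-- The anamorphism: the unique coalgebra morphism into (Nu f, out).
ana :: Functor f => (a -> f a) -> a -> Nu f
ana f = Nu . fmap (ana f) . f

-- Pairing with n is a rough analogue of the functor sending X to N-then-X;
data Seq n x = Seq n x deriving Functor

-- Bang n, an infinite left-to-right sequence of n's, then approximates !N.
type Bang n = Nu (Seq n)
\end{verbatim}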
In $\mathcal{W}$ we define a coalgebra $(\mathop{!}N,\alpha)$ by taking $\alpha : \mathop{!}N \rightarrow N \oslash \mathop{!}N$ to be the evident copycat strategy which relabels $\mathsf{in}_1(a)$ on the right to $(a,1)$ on the left and $\mathsf{in}_2(a,n)$ on the right to $(a,n+1)$ on the left.
\begin{proposition}
$(\mathop{!}N, \alpha)$ is the final coalgebra of the functor $N \oslash \_$
in the category $\mathcal{G}$.
\label{bangcoalgG} \end{proposition} \begin{proof}
Let $\sigma : M \rightarrow N \oslash M$. Define $\leftmoon \sigma
\rightmoon_n : M \rightarrow (N \oslash \_)^n(M)$ by $\leftmoon
\sigma \rightmoon_0 = \mathsf{id}$ and $\leftmoon \sigma \rightmoon_{n+1} =
(\mathsf{id} \oslash \_)^n(\sigma) \circ \leftmoon \sigma \rightmoon_n$.
\begin{diagram}
M & \rTo^{\leftmoon \sigma \rightmoon_n} & (N \oslash \_)^n(M) & \rTo^{(\mathsf{id} \oslash \_)^n(\sigma)} & (N \oslash \_)^n(N \oslash M) = (N \oslash \_)^{n+1}(M) \\
\end{diagram}
The strategy $\leftmoon \sigma \rightmoon_n$ is a partial
approximant to $\leftmoon \sigma \rightmoon : M \rightarrow
\mathop{!} N$.
We can show by induction on $n$ that $\leftmoon \sigma
\rightmoon_{n+1} = (\mathsf{id} \oslash \leftmoon \sigma \rightmoon_n) \circ
\sigma$.
Similarly, we can define $\alpha_k : \mathop{!}N \cong (N \oslash
\_)^k(\mathop{!}N) : \alpha_k^{-1}$ by performing the above
construction on $\alpha$. Consider the sequence of maps $M
\rightarrow \mathop{!}N$ defined by $s_k= \alpha_k^{-1} \circ (\mathsf{id}
\oslash \_)^k(\epsilon) \circ \leftmoon \sigma \rightmoon_k$ for $k
\in \omega$. We can show that $s_{k + 1} \sqsupseteq s_k$ by
induction on $k$, and so $(s_k)$ is a chain. Set $\leftmoon \sigma
\rightmoon = \bigsqcup \alpha_k^{-1} \circ (\mathsf{id} \oslash
\_)^k(\epsilon) \circ \leftmoon \sigma \rightmoon_k$, where
$\epsilon$ is the empty strategy. It is well-known that
$\mathcal{G}$ is cpo-enriched with bottom element $\epsilon$
\cite{Lai_FPC}.
We wish to show that $\leftmoon \sigma \rightmoon$ is the unique
strategy such that $\alpha \circ \leftmoon \sigma \rightmoon = (\mathsf{id}
\oslash \leftmoon \sigma \rightmoon) \circ \sigma$. To show that the
equation holds, note that $\alpha \circ \leftmoon \sigma \rightmoon
= \alpha \circ \bigsqcup \alpha_k^{-1} \circ (\mathsf{id} \oslash
\_)^k(\epsilon) \circ \leftmoon \sigma \rightmoon_k = \alpha \circ
\bigsqcup \alpha_{k+1}^{-1} \circ (\mathsf{id} \oslash \_)^{k + 1}(\epsilon)
\circ \leftmoon \sigma \rightmoon_{k + 1} = \bigsqcup \alpha \circ
\alpha_{k + 1}^{-1} \circ (\mathsf{id} \oslash \_)^{k + 1}(\epsilon) \circ
\leftmoon \sigma \rightmoon_{k + 1} = \bigsqcup (\mathsf{id} \oslash
\alpha_k^{-1}) \circ (\mathsf{id} \oslash (\mathsf{id} \oslash \_)^k(\epsilon))
\circ (\mathsf{id} \oslash \leftmoon \sigma \rightmoon_k) \circ \sigma =
(\mathsf{id} \oslash \bigsqcup(\alpha_k^{-1} \circ (\mathsf{id} \oslash
\_)^k(\epsilon) \circ \leftmoon \sigma \rightmoon_k)) \circ \sigma =
(\mathsf{id} \oslash \leftmoon \sigma \rightmoon) \circ \sigma$.
For uniqueness, suppose that $\gamma : M \rightarrow \mathop{!} N$
is such that $\alpha \circ \gamma = (\mathsf{id} \oslash \gamma) \circ
\sigma$. We wish to show that $\gamma = \leftmoon \sigma \rightmoon
= \bigsqcup \alpha_k^{-1} \circ (\mathsf{id} \oslash \_)^k(\epsilon) \circ
\leftmoon \sigma \rightmoon_k$. To see that $\gamma \sqsupseteq
\leftmoon \sigma \rightmoon$, it suffices to show that $\gamma$ is
an upper bound of the chain, i.e. $\gamma \sqsupseteq \alpha_k^{-1}
\circ (\mathsf{id} \oslash \_)^k(\epsilon) \circ \leftmoon \sigma
\rightmoon_k$ for each $k$. This can be shown using a simple induction on $k$.
To see that $\gamma \sqsubseteq \leftmoon \sigma \rightmoon$, we
show that each play in $\gamma$ is also in $\leftmoon \sigma
\rightmoon$. Consider a play $s \in \gamma : M \rightarrow
\mathop{!}N$. Since $s$ is finite, it must visit only a finite
number of copies of $N$ --- say, $k$ copies. Then $s$ is also a play
in $\alpha_{k}^{-1} \circ (\mathsf{id} \oslash \_)^k(\epsilon) \circ
\alpha_k \circ \gamma$.
It is thus sufficient to show that $(\mathsf{id} \oslash \_)^k(\epsilon)
\circ \alpha_k \circ \gamma \sqsubseteq (\mathsf{id} \oslash \_)^k(\epsilon)
\circ \leftmoon \sigma \rightmoon_k$. This is achieved by a simple
induction on $k$. \qed
\end{proof}
\begin{proposition}
$(\mathop{!}N, \alpha)$ is the final coalgebra of $N \oslash \_$
in the category $\mathcal{W}$.
\label{bangcoalgW} \end{proposition} \begin{proof}
It suffices to show that if $\sigma : M \rightarrow N \oslash M$ is a winning strategy, then $\leftmoon \sigma \rightmoon$ is winning.
To see that $\leftmoon \sigma \rightmoon$ is total, let $s \in \leftmoon \sigma \rightmoon$ and $so \in P_{M \multimap \mathop{!}N}$. Then $so$ visits only finitely many copies of $N$, say $k$ of them, and so up to retagging it is a play in $M \rightarrow (N \oslash \_)^k(M)$, and $s$ a play in $\leftmoon \sigma \rightmoon_k$. By totality of $\leftmoon \sigma \rightmoon_k$, there is a move $p$ with $sop \in \leftmoon \sigma \rightmoon_k$. Then, up to retagging, $sop$ is also a play in $\leftmoon \sigma \rightmoon$.
We next need to check that each infinite play all of whose even-length prefixes are in $\leftmoon \sigma \rightmoon$ is winning. Let $s$ be such an
infinite play, with $s|_M$ winning. We must show that
$s|_{\mathop{!}N}$ is winning, i.e. $s|_{(N,i)}$ is winning for each
$i$. The infinite play $s$ corresponds to an infinite interaction
sequence:
\begin{diagram}
M & \rTo^\sigma & N \oslash M & \rTo^{\mathsf{id} \oslash \sigma} & N \oslash (N \oslash M) & \rTo^{\mathsf{id} \oslash (\mathsf{id} \oslash \sigma)} & \ldots \\
\vdots \\
\end{diagram}
Then $s|_{(N,i)}$ can also be found in the $i$th column of the above
interaction sequence. By hiding all columns other than the first and
the $i$th, we see a play in $M \rightarrow (N \oslash \_)^i(M)$
in $\leftmoon \sigma \rightmoon_i$. The first column is $s|_M$
(which is winning), and the $i$th component of the second is
$s|_{(N,i)}$. Since $\leftmoon \sigma \rightmoon_i$ is a
winning strategy, this play is winning, by the winning condition for
$\oslash$. \qed \end{proof}
Recall that the monoidal unit of a distributive sequoidal category is a terminal object. Thus we may define operations corresponding to dereliction and promotion: \begin{itemize} \item $\mathsf{der}_N:\mathop{!}N \rightarrow N = \mathsf{unit}_\oslash \circ (\mathsf{id} \oslash \mathsf{t}) \circ \alpha$. \item Given any symmetric comonoid $(B,\eta,\delta)$, and morphism $f : B \rightarrow N$, let $f^\dagger: B \rightarrow !N$ be the (comonoid morphism) $\leftmoon \mathsf{wk} \circ (f \otimes \mathsf{id}) \circ \delta \rightmoon$. \end{itemize}
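Continuing the stream picture from the Haskell sketch above (with the caveat that in Haskell every value can be copied, so the comonoid structure on $B$ is left implicit), dereliction and promotion can be approximated as follows.
\begin{verbatim}
-- Dereliction: expose the head of the sequence.
der :: Bang n -> n
der (Nu (Seq x _)) = x

-- Promotion: unfold b into the constant stream f b, f b, ..., copying b
-- at each step (the role played by the comultiplication delta of B above).
promote :: (b -> n) -> b -> Bang n
promote f = ana (\b -> Seq (f b) b)
\end{verbatim}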
To interpret the contraction rule, we require a further coalgebraic property. \begin{definition}A decomposable, distributive sequoidal category $\mathcal{C}$ has \emph{coalgebraic monoidal exponentials} if: \begin{itemize} \item For any object $A$, the endofunctor $A \oslash \_$ has a specified final coalgebra $(\mathop{!}A,\alpha_A)$. \item For any objects $A,B$, $(!A \otimes !B,\alpha_{A,B})$ is a final coalgebra for the endofunctor $(A \times B) \oslash \_$, where $\alpha_{A,B}: !A \otimes !B \rightarrow (A \times B) \oslash (!A \otimes !B)$ is the isomorphism:
\begin{eqnarray*} & & !A \otimes !B \cong (!A \oslash !B) \times (!B \oslash !A) \cong ((A \oslash !A) \oslash !B) \times ((B \oslash !B) \oslash !A) \\ & \cong & (A \oslash (!A \otimes !B)) \times (B \oslash (!A \otimes !B)) \cong (A \times B) \oslash (!A \otimes !B)
\end{eqnarray*} \end{itemize} \end{definition} The second requirement is equivalent to requiring that the morphism $\langle \mathsf{der}_A \otimes \mathsf{t},\mathsf{t} \otimes \mathsf{der}_B\rangle^\dagger$ from $!A \otimes !B$ to $!(A \times B)$ is an isomorphism. Thus we may define a comonoid $(!A,\delta:!A \rightarrow !A \otimes !A,\mathsf{t}: {!}A \rightarrow I)$, where $\delta$ is the anamorphism of the map $\mathsf{dist}_{A,A,!A}^{-1} \circ \langle \alpha_A, \alpha_A\rangle$. \begin{proposition}If ${\mathcal{C}}$ has coalgebraic monoidal exponentials then $(!A,\delta,\mathsf{t})$ is the cofree commutative comonoid on $A$.
\end{proposition} \begin{proof}In other words, the forgetful functor from the category of comonoids on ${\mathcal{C}}$ into the category ${\mathcal{C}}$ has a right adjoint which sends $A$ to $(!A,\delta,\mathsf{t})$. The counit of this adjunction is the dereliction $\mathsf{der}_A:{!}A \rightarrow A$: for any comonoid $B$ and map $f:B \rightarrow A$, $f^\dagger:B \rightarrow !A$ is the unique comonoid morphism such that $\mathsf{der} \circ f^\dagger = f$. (Uniqueness follows from finality of $!A$.) \qed \end{proof}
This cofree commutative comonoid can also be constructed using the technique described in \cite{MTT_EFE}. This approach builds the exponential as a limit of finitary \emph{symmetric tensor powers}, that is, finite tensor products subject to a quotient so that the order that the components are played in is irrelevant. Our use of the asymmetric $\oslash$ enforces a strict left-to-right order, providing a concrete (albeit less generally applicable) alternative to such quotienting.
\begin{proposition} The sequoidal closed categories $\mathcal{W}$ and $\mathcal{G}$ are both equipped with coalgebraic monoidal exponentials. \label{coexpcomWG} \end{proposition}
\begin{proof}
Follows from Propositions \ref{bangcoalgG}, \ref{bangcoalgW} and the
fact that $\mathop{!}$ is the cofree commutative comonoid in $\mathcal{G}$
and $\mathcal{W}$ \cite{Lai_FPC}. \qed
\end{proof}
\begin{definition}
A \emph{WS!-category} is a distributive, decomposable sequoidal
closed category with an object $o$ satisfying linear functional
extensionality and coalgebraic monoidal exponentials. \end{definition}
\begin{proposition}
The categories $(\mathcal{G},\mathcal{G}_s)$ and
$(\mathcal{W},\mathcal{W}_s)$ enjoy the structure of a WS!-category. \label{WScats} \end{proposition}
\subsection{Semantics of Rules} We may now describe the interpretation of the rules of our logic (other than those for atoms, quantifiers and equality) in a \textsf{WS!}-category ${\mathcal{C}}$. Suppose that, for a given context of variables and atoms $\Phi$, we have an interpretation of formulas and sequents over $\Phi$ as objects of ${\mathcal{C}}$, satisfying the following:
\[ \begin{array}{lcllcl}
\llbracket \Phi \vdash \mathbf{1} \rrbracket & = & I & \llbracket \Phi \vdash \mathbf{0} \rrbracket & = & I \\
\llbracket \Phi \vdash \bot \rrbracket & = & o & \llbracket \Phi \vdash \top \rrbracket & = & o \\
\llbracket \Phi \vdash M \otimes N
\rrbracket & = & \llbracket \Phi \vdash M \rrbracket \otimes \llbracket \Phi \vdash N \rrbracket & \llbracket \Phi \vdash P \bindnasrepma Q \rrbracket & = & \llbracket \Phi \vdash P \rrbracket
\otimes \llbracket \Phi \vdash Q \rrbracket \\
\llbracket \Phi \vdash M \& N \rrbracket & = & \llbracket \Phi \vdash M \rrbracket \times \llbracket \Phi \vdash N \rrbracket & \llbracket \Phi \vdash P \oplus Q \rrbracket & = & \llbracket \Phi \vdash P \rrbracket \times
\llbracket\Phi \vdash Q \rrbracket \\
\llbracket \Phi \vdash M \oslash N \rrbracket & = & \llbracket \Phi \vdash M \rrbracket \oslash
\llbracket \Phi \vdash N \rrbracket &
\llbracket \Phi \vdash P \lhd Q \rrbracket & = & \llbracket \Phi \vdash P \rrbracket \oslash
\llbracket \Phi \vdash Q \rrbracket \\
\llbracket \Phi \vdash M \lhd Q \rrbracket & = & \llbracket \Phi \vdash Q \rrbracket \multimap \llbracket \Phi \vdash M \rrbracket &
\llbracket \Phi \vdash P \oslash N \rrbracket & = & \llbracket \Phi \vdash N \rrbracket \multimap
\llbracket \Phi \vdash P \rrbracket \\
\llbracket \Phi \vdash !N \rrbracket & = & !\llbracket \Phi \vdash N \rrbracket & \llbracket \Phi \vdash ?P \rrbracket & = & ! \llbracket \Phi \vdash P \rrbracket \\ \end{array} \] \[ \begin{array}{lcl}
\llbracket \Phi \vdash M , \Gamma , N \rrbracket & = & \llbracket \Phi \vdash M , \Gamma \rrbracket \oslash \llbracket \Phi \vdash N \rrbracket \\ \llbracket \Phi \vdash M , \Gamma , P \rrbracket & = & \llbracket \Phi \vdash P \rrbracket \multimap \llbracket \Phi \vdash M , \Gamma \rrbracket \\
\llbracket \Phi \vdash P , \Gamma , N \rrbracket & = & \llbracket \Phi \vdash N \rrbracket \multimap \llbracket \Phi \vdash P , \Gamma \rrbracket \\ \llbracket \Phi \vdash P , \Gamma , Q \rrbracket & = & \llbracket \Phi \vdash P , \Gamma \rrbracket \oslash \llbracket \Phi \vdash Q \rrbracket \\
\end{array} \]
(For atom- and quantifier-free formulas, these equations \emph{define} an interpretation of formulas and sequents in ${\mathcal{C}}$.) Then we may give an interpretation of each proof rule except those for atoms, quantifiers, and equality as an operation on morphisms in ${\mathcal{C}}$. These typically involve an operation on the head formula of the sequent ``under'' a context consisting of its tail, and so we define distributivity maps to allow this:
We define endofunctors $\llbracket \Gamma \rrbracket^b$ on ${\mathcal{C}}_s$, for each context (possibly empty list of formulas) $\Gamma$ and $b \in \{ +, - \}$, as follows. \\
\[ \begin{array}{lclclcl}
\llbracket \epsilon \rrbracket^+ & = & \mathsf{id} & & \llbracket \epsilon \rrbracket^- & = & \mathsf{id} \\ \llbracket \Gamma, M \rrbracket^+ & = & \llbracket M \rrbracket \multimap \llbracket \Gamma \rrbracket^+ & & \llbracket \Gamma, P \rrbracket^- & = & \llbracket P \rrbracket \multimap \llbracket \Gamma \rrbracket^- \\ \llbracket \Gamma, P \rrbracket^+ & = & \llbracket \Gamma \rrbracket^+ \oslash \llbracket P \rrbracket & & \llbracket \Gamma, M \rrbracket^- & = & \llbracket \Gamma \rrbracket^- \oslash \llbracket M \rrbracket \\
\end{array} \] \begin{proposition}
For any sequent $A , \Gamma$ we have $\llbracket A, \Gamma
\rrbracket = \llbracket \Gamma \rrbracket^b(\llbracket A
\rrbracket)$ where $b$ is the polarity of $A$. \end{proposition} \begin{proof} A simple induction on $\Gamma$. \qed \end{proof}
\begin{proposition}
For any context $\Gamma$, $\llbracket \Gamma \rrbracket^b$ preserves
products. \label{contprod} \end{proposition} \begin{proof}
Using the distributivity of $\times$ over $\oslash$ and $\multimap$, we can construct isomorphisms $\mathsf{dist_{b,\Gamma}} : \llbracket
\Gamma \rrbracket^b(A \times B) \cong \llbracket \Gamma
\rrbracket^b(A) \times \llbracket \Gamma \rrbracket^b (B)$ and
$\mathsf{dist_{b,\Gamma}^{0}} : \llbracket \Gamma \rrbracket^b(I)
\cong I$ by induction on
$\Gamma$. \qed
\end{proof}
\subsection{Semantics of Proof Rules} We say that $\sigma:\llbracket\vdash \Gamma\rrbracket$ if either: \begin{itemize} \item $\Gamma = N,\Gamma'$, and $\sigma:I \rightarrow \llbracket \Gamma \rrbracket$ in ${\mathcal{C}}$. \item $\Gamma = P,\Gamma'$, and $\sigma:\llbracket \Gamma \rrbracket \rightarrow o$ in ${\mathcal{C}}$. \end{itemize} The semantics of the core rules, as operations on morphisms, is given in Figure \ref{WS-sem}, and that of the other rules in Figures \ref{WS-sem2} and \ref{WS-sem3}. The rules involving the exponential are treated separately in Figure \ref{WS!-sem}. Note that in each case, the interpretation in the WS!-category ${\mathcal{W}}$ agrees with the informal exposition in Section \ref{proofinterp}.
In the semantics of $\mathsf{P}_\mathsf{cut}$ we use an additional construction. If $\tau : I \rightarrow \llbracket N, \Delta \rrbracket$ define (strict) $\tau^{\circ-}_{M,\Gamma} : \llbracket M , \Gamma , N^\perp \rrbracket
\rightarrow \llbracket M , \Gamma , \Delta \rrbracket$ to be $\mathsf{unit}_\multimap \circ (\tau \multimap \mathsf{id}_{\llbracket M,\Gamma \rrbracket})$ if $|\Delta| = 0$ and $\mathsf{pasc}_\multimap^{n} \circ (\Lambda^{-n}\Lambda_I^{-1} \tau \multimap \mathsf{id}_{\llbracket M,\Gamma \rrbracket})$ if $|\Delta| = n + 1$. Define (strict) $\tau^{\circ+}_{P,\Gamma} : \llbracket P , \Gamma , \Delta \rrbracket \rightarrow \llbracket P , \Gamma , N^\perp \rrbracket$ to be $(\mathsf{id}_{\llbracket P,\Gamma \rrbracket} \oslash \tau) \circ \mathsf{unit}_\oslash^{-1}$ if $|\Delta| = 0$ and $(\mathsf{id} \oslash \Lambda^{-n}\Lambda_I^{-1} \tau) \circ ((\mathsf{id}_{\llbracket P,\Gamma \rrbracket} \oslash \mathsf{sym}) \circ \mathsf{pasc}^{-1})^{n}$ if $|\Delta| = n + 1$. In some of the rules in Figure \ref{WS-sem3} we omit some $\mathsf{pasc}$ isomorphisms for clarity.
\begin{figure*}
\caption{Categorical Semantics for \textsf{WS1} (core rules)}
\label{WS-sem}
\end{figure*}
\begin{figure*}
\caption{Categorical Semantics for \textsf{WS1} (other rules, part 1)}
\label{WS-sem2}
\end{figure*}
\begin{figure*}
\caption{Categorical Semantics for \textsf{WS1} (other rules, part 2)}
\label{WS-sem3}
\end{figure*}
\begin{figure*}
\caption{Semantics for \textsf{WS1} --- Exponential Rules}
\label{WS!-sem}
\end{figure*}
\section{Semantics of Atoms, Quantifiers and Equality} We shall now complete the semantics of \textsf{WS1} by interpreting atoms and quantifiers based on our categories of games and strategies. (The requisite structure could be axiomatised for any WS!-category, but we shall not do so here.) We have seen that a sequent $X;\Theta \vdash \Gamma$ of \textsf{WS1} can be interpreted as a family of games, indexed over $\Theta$-satisfying $\mathcal{L}$-models over $X$. We shall interpret a proof of $X ; \Theta \vdash \Gamma$ as a uniform family of strategies, one for each such game.
For example, the family denoted by $\top \oslash (\phi \lhd \top)$ has games of the following form:
\begin{center} \includegraphics[scale=0.6]{unione} \end{center}
\noindent Here we represent the forest of plays $P_A$ directly. The moves in dotted circles are only available if $(L,v) \models \overline{\phi}$. There is a unique total strategy on the (positive) game above in both cases, and this family is uniform in the sense that the strategy on models which satisfy $\phi$ is a \emph{substrategy} of the strategy on models satisfying $\overline{\phi}$ --- if $(L,v) \models \phi$ and $(L',v') \models \overline{\phi}$ then $\sigma_{\llbracket \top \oslash (\phi \lhd \top) \rrbracket(L,v)} \subseteq \sigma_{\llbracket \top \oslash (\phi \lhd \top) \rrbracket(L',v')}$.
In contrast, consider the formula $\bot \lhd (\overline{\phi} \oplus (\top \oslash \phi))$. The game forest is given as follows, using the same notation as above:
\begin{center} \includegraphics[scale=0.6]{unitwo} \end{center}
\noindent There is a family of strategies on this (negative) game: if $\phi$ is true, Player plays \texttt{f} and if $\overline{\phi}$ is true, Player plays \texttt{t}. However, this strategy is not uniform as the choice of second move depends on the truth value of $\phi$ in the appropriate $\mathcal{L}$-structure. Correspondingly, the formula is not provable in \textsf{WS1}.
We now formalise this notion of uniformity of strategies as a naturality property.
\subsection{Uniform Strategies}
\subsubsection{Game Embeddings}
We wish to formalise categorically the notion of a game $A$ being a subgame of $B$: we can then state that a family of strategies is uniform if whenever $A$ is a subgame of $B$, the restriction of $\sigma_B$ to $A$ is $\sigma_A$. If we consider games as trees, we require a tree embedding from $P_A$ into $P_B$. We use the following machinery:
\begin{definition} Let $\mathcal{C}$ be a poset-enriched category. The category $\mathcal{C}_e$ has the same objects as $\mathcal{C}$ and a map $A \rightarrow B$ in $\mathcal{C}_e$ consists of a pair $(i_f , p_f)$ where $i_f : A \rightarrow B$ and $p_f : B \rightarrow A$ in $\mathcal{C}$, such that $p_f \circ i_f = \mathsf{id}$ and $i_f \circ p_f \sqsubseteq \mathsf{id}$. \begin{itemize} \item The identity is given by $(\mathsf{id}, \mathsf{id})$. \item For composition, set $(i_f, p_f) \circ (i_g , p_g) = (i_f \circ i_g , p_g \circ p_f)$. We need to check this is a valid pairing: $p_{f \circ g} \circ i_{f \circ g} = p_g \circ p_f \circ i_f \circ i_g = p_g \circ \mathsf{id} \circ i_g = \mathsf{id}$ and $i_{f \circ g} \circ p_{f \circ g} = i_f \circ i_g \circ p_g \circ p_f \sqsubseteq i_f \circ \mathsf{id} \circ p_f = i_f \circ p_f \sqsubseteq \mathsf{id}$. \item It is clear that composition is associative and that $f = f \circ \mathsf{id} = \mathsf{id} \circ f$. \end{itemize} \end{definition}
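\noindent As an informal sanity check on this definition, the following Haskell fragment renders embedding-projection pairs and their composition in a familiar poset-enriched setting, the Kleisli category of \texttt{Maybe} (partial functions ordered by definedness). This is only an analogy for the category of games used below; the names \texttt{EP}, \texttt{emb}, \texttt{prj} and \texttt{double} are illustrative and not part of the formal development.
\begin{verbatim}
import Control.Monad ((<=<))

-- An embedding-projection pair: i_f is total, p_f is a partial left inverse.
data EP a b = EP { emb :: a -> Maybe b   -- i_f
                 , prj :: b -> Maybe a } -- p_f

idEP :: EP a a
idEP = EP Just Just

-- Composition as in the definition:
-- (i_f, p_f) . (i_g, p_g) = (i_f . i_g , p_g . p_f)
compEP :: EP b c -> EP a b -> EP a c
compEP f g = EP (emb f <=< emb g) (prj g <=< prj f)

-- Example: doubling embeds the integers into the integers, with halving as
-- the partial projection.
double :: EP Int Int
double = EP (Just . (* 2))
            (\m -> if even m then Just (m `div` 2) else Nothing)
\end{verbatim}
Composing the projection after the embedding (via Kleisli composition) gives \texttt{Just}, the Kleisli identity, while the opposite composite is defined only on even numbers, mirroring $p_f \circ i_f = \mathsf{id}$ and $i_f \circ p_f \sqsubseteq \mathsf{id}$.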
\noindent Let $\mathcal{G}$ denote the poset-enriched category of games and (not-necessarily winning) strategies, and $\mathcal{G}_s$ its subcategory of strict strategies, with $\sqsubseteq$ given by strategy inclusion. A tree embedding of $A$ into $B$ corresponds to a map $A \rightarrow B$ in $\mathcal{G}_e$.
\begin{proposition} If $f : A \rightarrow B$ in $\mathcal{G}_e$ then $i_f$ and $p_f$ are strict. \end{proposition} \begin{proof} If $i_f$ responds to an opening move in $B$ with a move in $B$ then so does $i_f \circ p_f$ and so $i_f \circ p_f \sqsubseteq \mathsf{id}$ fails. Similarly, if $p_f$ responds to an opening move in $A$ with a move in $A$ then so does $p_f \circ i_f$ and so $p_f \circ i_f = \mathsf{id}$ fails. \qed \end{proof}
\noindent We can thus define identity-on-objects functors $i : \mathcal{G}_e \rightarrow \mathcal{G}_s$ and $p : \mathcal{G}_e \rightarrow \mathcal{G}_s^\mathsf{op}$.
We can show that our operations on games lift to functors on $\mathcal{G}_e$.
\begin{proposition} All of the operations $\multimap$,$\oslash$,$\otimes$,$\&$,$!$ extend to covariant (bi)functors on $\mathcal{G}_e$. \end{proposition} \begin{proof}
Each case exploits functoriality and monotonicity of the relevant
operation. We just give an example: set $(i,p) \multimap (i',p') =
(p \multimap i' , i \multimap p')$. Then $(i \multimap p') \circ (p
\multimap i') = (p \circ i) \multimap (p' \circ i') = \mathsf{id} \multimap
\mathsf{id} = \mathsf{id}$ and $(p \multimap i') \circ (i \multimap p') = (i \circ
p) \multimap (i' \circ p') \sqsubseteq \mathsf{id} \multimap \mathsf{id} = \mathsf{id}$. \qed \end{proof}
\subsubsection{Lax Natural Transformations}
Given an embedding $e : A \rightarrow B$ and strategies $\sigma_A : A$, $\sigma_B : B$, $\sigma_B$ \emph{restricts to} $\sigma_A$ if $\sigma_A = p_e \circ \sigma_B$. We generalise this idea using the notion of \emph{lax natural transformations}.
\begin{definition} Let $\mathcal{C}$ be a category, $\mathcal{D}$ a poset-enriched category and $F, G : \mathcal{C} \rightarrow \mathcal{D}$. A \emph{lax natural transformation} $F \Rightarrow G$ is a family of arrows $\mu_A : F(A) \rightarrow G(A)$ such that $\mu_B \circ F(f) \sqsupseteq G(f) \circ \mu_A$ for every $f : A \rightarrow B$. \end{definition} \begin{diagram} F(A) & \rTo^{\mu_A} & G(A) \\ \dTo^{F(f)} & \sqsupseteq & \dTo_{G(f)} \\ F(B) & \rTo_{\mu_B} & G(B) \\ \end{diagram}
\noindent We can compose lax natural transformations using vertical composition. There is also a form of horizontal composition, provided that one of the two functors is the identity: Let $H,G : \mathcal{C} \rightarrow \mathcal{D}$ and $\mu : G \Rightarrow H$ a lax natural transformation. Then a) if $F : \mathcal{B} \rightarrow \mathcal{C}$ then there is a lax natural transformation $\mu F : G \circ F \Rightarrow H \circ F$ given by $(\mu F)_A = \mu_{F(A)}$ and b) if $J : \mathcal{D} \rightarrow \mathcal{E}$ is monotonic then there is a lax natural transformation $J \mu : J \circ G \rightarrow J \circ H$ given by $(J \mu)_A = J(\mu_A)$.
\subsubsection{Uniform Winning Strategies} \label{uniwinstrats}
\begin{definition} Let $F , G : \mathcal{C} \rightarrow \mathcal{G}_e$. A \emph{uniform strategy} from $F$ to $G$ is a lax natural transformation $\sigma : i \circ F \Rightarrow i \circ G$. A \emph{uniform total strategy} is a uniform strategy $\sigma$ where each $\sigma_A$ is total. A \emph{uniform winning strategy} is a uniform strategy where each $\sigma_A$ is winning. \end{definition}
\noindent If $f : A \rightarrow B$, the lax naturality condition is that $i_{G(f)} \circ \sigma_A \sqsubseteq \sigma_B \circ i_{F(f)}$. Thus $\sigma_A = p_{G(f)} \circ i_{G(f)} \circ \sigma_A \sqsubseteq p_{G(f)} \circ \sigma_B \circ i_{F(f)}$. But since $\sigma_A$ is total, it is maximal in the ordering $\sqsubseteq$ and we must have $\sigma_A = p_{G(f)} \circ \sigma_B \circ i_{F(f)}$. Similarly, we see that $\sigma_A = p_{G(f)} \circ \sigma_B \circ i_{F(f)}$ implies the lax naturality condition as $i_{G(f)} \circ \sigma_A = i_{G(f)} \circ p_{G(f)} \circ \sigma_B \circ i_{F(f)} \sqsubseteq \sigma_B \circ i_{F(f)}$. Thus, lax naturality captures the fact that $\sigma_A$ is determined by $\sigma_B$ via restriction. If $F$ is the constant functor $\kappa_I$, this reduces to $\sigma_A = p_{G(e)} \circ \sigma_B$.
We can construct a WS-category of uniform strategies over a base category $\mathcal{C}$. Let $\mathcal{G}^\mathcal{C}$ be the category where: \begin{itemize} \item Objects are functors $\mathcal{C} \rightarrow \mathcal{G}_e$ \item An arrow $F \rightarrow G$ is a uniform strategy $F
\Rightarrow G$ \item Composition is given by vertical composition of lax natural
transformations \item The identity on a functor $F$ is given by the lax natural
transformation $\eta : F \Rightarrow F$ where $\eta_A =
\mathsf{id}_{F(A)}$. It is clear that this is lax natural.
\end{itemize}
\noindent Similarly, we can construct a category $\mathcal{W}^\mathcal{C}$ of functors and uniform winning strategies.
\begin{proposition} $\mathcal{G}^\mathcal{C}$ is a WS!-category. \label{uniws} \end{proposition} \begin{proof}
We first exhibit the symmetric monoidal structure. $F \otimes G$ is
defined to be $\otimes \circ (F \times G) \circ \Delta$ where
$\Delta : \mathcal{C} \rightarrow \mathcal{C} \times \mathcal{C}$ is
the diagonal. So, $(F \otimes G)(A) = F(A) \otimes G(A)$. On arrows,
we set $(\eta \otimes \rho)_A = \eta_A \otimes \rho_A$.
For lax naturality, suppose $\eta : A \Rightarrow B$ and $\rho : C \Rightarrow D$, and let $f : L \rightarrow K$. We need to show that $(\eta_K \otimes \rho_K) \circ (i_{A(f)} \otimes i_{C(f)}) \sqsupseteq (i_{B(f)} \otimes i_{D(f)}) \circ (\eta_L \otimes \rho_L)$. That is, we need to show that $(\eta_K \circ i_{A(f)}) \otimes (\rho_K \circ i_{C(f)}) \sqsupseteq (i_{B(f)} \circ \eta_L) \otimes (i_{D(f)} \circ \rho_L)$. But this is clear by lax naturality of $\eta$ and $\rho$ and monotonicity of $\otimes$.
The tensor unit $I$ is the constant functor, sending all objects to
the game $I$ and arrows to $\mathsf{id}_I$.
The morphisms $\mathsf{assoc}$, $\mathsf{runit}_\otimes$, $\mathsf{lunit}_\otimes$ and $\mathsf{sym}$ are defined
pointwise: for example, $(\mathsf{assoc}_{F,G,H})_X =
\mathsf{assoc}_{F(X),G(X),H(X)}$. To check for lax naturality, we must use
horizontal composition. For example, consider the map $\mathsf{assoc} : (F
\otimes G) \otimes H \rightarrow F \otimes (G \otimes H)$ defined
pointwise as described. The domain is $(F \otimes G) \otimes H =
((\_ \otimes \_) \otimes \_) \circ (i \circ F \times i \circ G
\times i \circ H) \circ \Delta_3$ where $\Delta_3$ is the diagonal
functor $\mathcal{C} \rightarrow \mathcal{C} \times \mathcal{C}
\times \mathcal{C}$. Similarly, the codomain is $(\_ \otimes (\_
\otimes \_)) \circ (i \circ F \times i \circ G \times i \circ H)
\circ \Delta_3$. We can thus see that $\mathsf{assoc}$ is equal to the
horizontal composition $\mathsf{assoc} J$ where $J = (i \circ F \times i
\circ G \times i \circ H) \circ \Delta_3$ and $\mathsf{assoc}$ is the
natural transformation $\_ \otimes (\_ \otimes \_) \Rightarrow (\_
\otimes \_) \otimes \_$ in $\mathcal{G}_s$. \begin{diagram} \mathcal{C} & \rTo^{\Delta_3} & \mathcal{C} \times \mathcal{C} \times \mathcal{C} & \rTo^{i \circ F \times i \circ G \times i \circ H} & \mathcal{G}_s \times \mathcal{G}_s \times \mathcal{G}_s & \rTo^{\_
\otimes (\_ \otimes \_)} & \mathcal{G}_s \\ & \dImplies^{\mathsf{id}} & & \dImplies^{\mathsf{id}} & & \dImplies^{\mathsf{assoc}} \\ \mathcal{C} & \rTo^{\Delta_3} & \mathcal{C} \times \mathcal{C} \times \mathcal{C} & \rTo^{i \circ F \times i \circ G \times i \circ H} & \mathcal{G}_s \times \mathcal{G}_s \times \mathcal{G}_s & \rTo^{(\_
\otimes \_) \otimes \_} & \mathcal{G}_s \\ \end{diagram}
One can similarly express the other monoidal isomorphisms in this
way to see lax naturality. The coherence equations of symmetric
monoidal categories inherit pointwise from $\mathcal{G}$.
Symmetric monoidal closure, products, sequoidal closure and linear
functional extensionality lift pointwise from $\mathcal{G}$ using
horizontal composition. We can also show that the coalgebraic monoidal
exponential structure lifts from $\mathcal{G}$. \qed
\end{proof}
\begin{proposition} $\mathcal{W}^\mathcal{C}$ is a WS!-category.
\end{proposition} \begin{proof}
We proceed precisely as in Proposition \ref{uniws}, lifting the
structure of a WS!-category in $\mathcal{W}$ to that in
$\mathcal{W}^\mathcal{C}$. In particular, pointwise-winningness of
the relevant morphisms in $\mathcal{W}^\mathcal{C}$ inherits from
the winningness in $\mathcal{W}$. \qed \end{proof}
\subsection{Quantifiers}
\subsubsection{Category of $\mathcal{L}$-structures}
\begin{definition}
Given a set of variables $X$ and set of atomic formulas $\Theta$, we
let $\mathcal{M}_X^\Theta$ denote the category of
$\Theta$-satisfying $\mathcal{L}$-models over $X$. Objects are
$\mathcal{L}$-models over $X$ that satisfy each formula in
$\Theta$. A morphism $(L,v) \rightarrow (L',v')$ is a map $f : |L|
\rightarrow |L'|$ such that: \begin{itemize} \item For each $x \in X$, $v'(x) = f(v(x))$
\item If $(L,v) \models \overline{\phi}(\overrightarrow{a})$ for $\overrightarrow{a} \in |L|^{\mathsf{ar}(\phi)}$ then $(L',v') \models \overline{\phi}(\overrightarrow{f(a)})$ \item For each function symbol $g$ in $\mathcal{L}$, $f(I_L(g)(\overrightarrow{a})) = I_{L'}(g)(\overrightarrow{f(a)})$. \end{itemize} \end{definition}
\noindent Note that since the positive atoms include inequality, such morphisms must be injective. Also note that if $f : (L,v) \rightarrow (L',v')$ and $(L,v) \models \overline{\phi}(\overrightarrow{s})$ then $(L',v') \models \overline{\phi}(\overrightarrow{s})$.
If $v$ is a valuation on $X$, define $v[x \mapsto l]$ on $X \cup \{ x \}$ to be the valuation sending $y$ to $v(y)$ if $y \neq x$, and $x$ to $l$. Given $f : (L,v) \rightarrow (M,w)$ in $\mathcal{M}_X^\Theta$ and $s$ a term with $FV(s) \subseteq X$, $f$ is also a map $(L,v[x \mapsto v(s)]) \rightarrow (M,w[x \mapsto w(s)])$ in $\mathcal{M}_{X \cup \{ x \}}^\Theta$. We know that $f$ preserves all of the valuations other than $x$, and for $x$ we see that $f(v[x \mapsto v(s)](x)) = f(v(s)) = w(s) = w[x \mapsto w(s)](x)$. \\
\noindent We will give semantics of sequents $X ; \Theta \vdash \Gamma$ as functors $\mathcal{M}_X^\Theta \rightarrow \mathcal{G}_e$, and proofs as uniform winning strategies.
\subsubsection{Quantifiers as Adjoints}
In this section, we will describe an adjunction that will allow us to interpret the quantifiers.
\begin{itemize} \item If $FV(s) \subseteq X$ we can define a functor $\mathsf{set}^x_s : \mathcal{M}_X^\Theta \rightarrow \mathcal{M}_{X \uplus \{x\}}^\Theta$ by $\mathsf{set}^x_s(L,v) = (L,v[x \mapsto v(s)])$ and if $f : (L,v) \rightarrow (M,w)$ we set $\mathsf{set}^x_s(f) = f$. We need to check that $\mathsf{set}^x_s(f)$ is a valid morphism. We know that $\mathsf{set}^x_s(f)$ preserves all variables in $X$, and $\mathsf{set}^x_s(f)(v[x \mapsto v(s)](x)) = f(v(s)) = w(s) = w[x \mapsto w(s)](x)$ as required. It is clear that $\mathsf{set}^x_s$ is functorial.
From this we can extract a functor $\mathsf{set}'^x_s : \mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta} \rightarrow \mathcal{W}^{\mathcal{M}_X^\Theta}$, mapping $F$ to $F \circ \mathsf{set}^x_s$, with an action on arrows defined by horizontal composition.
\item Provided $x$ does not occur in $\Theta$, there is an evident forgetful functor $U_x : \mathcal{M}_{X \uplus \{x\}}^\Theta \rightarrow \mathcal{M}_X^\Theta$ mapping $(L,v)$ to $(L,v-x)$. From this we can extract a functor $U'_x : \mathcal{W}^{\mathcal{M}_X^\Theta} \rightarrow \mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta}$ mapping $F$ to $F \circ U_x$, with an action on arrows defined by horizontal composition. Note that $U_x \circ \mathsf{set}^x_s = \mathsf{id}$ and so $\mathsf{set}'^x_s \circ U'_x = \mathsf{id}$. \end{itemize}
We will show that $U'_x$ has a right adjoint $\forall x . \_ $. Assuming empty $\Gamma$, this allows us to interpret the rules $\mathsf{P}_\forall$ and $\mathsf{P}_\exists$.
\begin{definition} Let $\mathcal{C}$ be a category. We define the category $\mathsf{FamInj}(\mathcal{C})$. An object is a set $I$ and a family of $\mathcal{C}$-objects $\{ A_i : i \in I \}$. An arrow $\{ A_i : i \in I \} \rightarrow \{ B_j : j \in J \}$ is a pair $(f, \{ f_i : i \in I \})$ where $f$ is an injective function $I \rightarrow J$ and each $f_i : A_i \rightarrow B_{f(i)}$. We will often write such a map as $(f , \{ f_i \})$ when we wish to leave the indexing set implicit. \begin{itemize} \item Composition is defined by $(f,\{f_i\}) \circ (g,\{g_i\}) = (f \circ g , \{ f_{g(i)} \circ g_i \})$. \item The identity $\{ A_i : i \in I \} \rightarrow \{A_i : i \in I \}$ is given by $(\mathsf{id}, \{ \mathsf{id}_{A_i} \})$. \item Satisfaction of the categorical axioms is inherited from $\mathcal{C}$. \end{itemize} \end{definition}
\begin{definition} Let $F : \mathcal{C} \rightarrow \mathcal{D}$. We define $\mathsf{FamInj}(F) : \mathsf{FamInj}(\mathcal{C}) \rightarrow \mathsf{FamInj}(\mathcal{D})$. On objects, $\mathsf{FamInj}(F)(\{ A_i : i \in I \}) = \{ F(A_i) : i \in I \}$. On arrows, we set $\mathsf{FamInj}(F)(f,\{f_i\}) = (f , \{ F(f_i) \})$.
\end{definition}
\noindent We define a distributivity functor $\mathsf{dst} : \mathsf{FamInj}(\mathcal{C}) \times \mathcal{D} \rightarrow \mathsf{FamInj}(\mathcal{C} \times \mathcal{D})$ by $\mathsf{dst}(\{A_i : i \in I \}, B) = \{ (A_i,B) : i \in I \}$ and $\mathsf{dst}((f,\{f_i\}),g) = (f,\{(f_i,g)\})$.
Suppose $F$ is an object in $\mathcal{W}^{\mathcal{M}_{X \uplus \{ x
\}}^\Theta}$ (a functor $\mathcal{M}_{X \uplus \{ x \}}^\Theta \rightarrow \mathcal{G}_e$). We define $\forall x . F$ as an object in $\mathcal{W}^{\mathcal{M}_X^\Theta}$ (a functor $\mathcal{M}_X^\Theta \rightarrow \mathcal{G}_e$). We first define a product functor $\mathsf{prod} : \mathsf{FamInj}(\mathcal{G}_e) \rightarrow \mathcal{G}_e$. On objects, $\mathsf{prod}$ sends $\{ G_i : i \in I \}$ to $\prod_{i \in I} G_i$. On arrows, let $f : \{ G_j : j \in J \} \rightarrow \{ H_k : k \in K \}$. The embedding part of $\mathsf{prod}(f)$ is given by $\langle g_k \rangle_k$ where $g_k = i_{f_j} \circ \pi_j$ if $k = f(j)$ and $\epsilon$ otherwise. The projection part is given by $\langle p_{f_j} \circ \pi_{f(j)} \rangle_j$. We can check that $\mathsf{prod}$ defines a functor into $\mathcal{G}_e$. Finally, given $F : \mathcal{M}_{X \uplus \{ x \}}^\Theta \rightarrow \mathcal{G}_e$ we define $\forall x . F : \mathcal{M}_X^\Theta \rightarrow \mathcal{G}_e$ to be $\mathsf{prod} \circ \mathsf{FamInj}(F) \circ \mathsf{add}_x$, where $\mathsf{add}_x : \mathcal{M}_X^\Theta \rightarrow \mathsf{FamInj}(\mathcal{M}_{X \uplus \{ x \}}^\Theta)$ is the functor sending $(L,v)$ to the family $\{ (L,v[x \mapsto l]) : l \in |L| \}$ (with the evident action on morphisms).
\begin{proposition} The functor $U_x' : \mathcal{W}^{\mathcal{M}_X^\Theta} \rightarrow \mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta}$ has a right adjoint given by $\forall x . \_ = \mathsf{prod} \circ \mathsf{FamInj}(\_) \circ \mathsf{add}_x : \mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta} \rightarrow \mathcal{W}^{\mathcal{M}_X^\Theta}$ \end{proposition}
\begin{proof}
We must first give the counit of this adjunction. For each $F$, we
must give a uniform winning strategy $\eta : U'_x(\forall x . F)
\Rightarrow F$. Such an $\eta$ is a winning uniform strategy
$\mathsf{prod} \circ \mathsf{FamInj}(F) \circ \mathsf{add}_x \circ U_x
\Rightarrow F$. Note that $(\mathsf{prod} \circ \mathsf{FamInj}(F) \circ
\mathsf{add}_x \circ U_x)(L,v) = \mathsf{prod}(\{ F(L,v-x[x \mapsto
l]) : l \in L \} ) = \prod_{l \in L} F(L,v[x \mapsto l])$. Thus
$\eta_{(L,v)}$ must be a winning strategy $\prod_{l \in L} F(L,v[x
\mapsto l]) \rightarrow F(L,v)$ and we take $\eta_{(L,v)} =
\pi_{v(x)}$. One can check that this transformation is lax natural.
Given $f : U'_x(F) \rightarrow G$ we must show that there is a unique $\hat{f} : F \rightarrow \forall x . G$ such that $f = \eta_G \circ U_x'(\hat{f})$. Let $f$ be such a uniform winning strategy. Then we must give winning strategies $\hat{f}_{(L,v)} : F(L,v) \rightarrow \prod_{l \in L} G(L,v[x \mapsto l])$. Set $\hat{f}_{(L,v)} = \langle h_l \rangle_l$ where $h_l : F(L,v) \rightarrow G(L,v[x \mapsto l])$ is defined by $f_{(L,v[x \mapsto l])}$. We can check that $\hat{f}$ satisfies lax naturality.
We next need to show that $\hat{f}$ satisfies the universal property. Firstly, we must show that $f = \eta_G \circ U_x'(\hat{f})$. It suffices to show that for each $(L,v)$, $f_{(L,v)} = ((\eta_G) \circ U_x'(\hat{f}))_{L,v}$. Composition in $\mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta}$ is given by vertical composition. Thus, the RHS is given by $\pi_{v(x)} \circ \langle f_{(L,v[x \mapsto l])} \rangle_l = f_{(L,v[x \mapsto v(x)])} = f_{(L,v)}$ as required.
We need to show that $\hat{f} : F \rightarrow \forall x . G$ is the unique uniform strategy satisfying $f = \eta_G \circ U_x'(\hat{f})$. Suppose $h : F \rightarrow \forall x . G$ in $\mathcal{W}^{\mathcal{M}_X^\Theta}$ satisfies this property. Then given $(L,v)$ in $\mathcal{M}_{X \uplus \{ x \}}^\Theta$, we know that $f_{(L,v)} = \eta_{G(L,v)} \circ h_{(L,v - x)} = \pi_{v(x)} \circ h_{(L,v - x)}$. Let $(L,v) \in \mathcal{M}_X^\Theta$. We must show that $h_{(L,v)} = \hat{f}_{(L,v)} = \langle f_{(L,v[x \mapsto l])} \rangle_l$. Thus we need to show that for each $l$, $\pi_l \circ h_{(L,v)} = f_{(L,v[x \mapsto l])}$. But consider the model $(L,v[x \mapsto l])$. This is $f_{(L,v[x \mapsto l])} = \pi_{v[x \mapsto l](x)} \circ h_{(L, v[x \mapsto l] - x)} = \pi_l \circ h_{(L,v)}$, as required. \qed \end{proof}
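\noindent The content of this transposition is a simple re-indexing, which the following Haskell sketch renders with naive representations of models and with strategies left abstract; all types and names here are illustrative and not part of the formal development. The counit projects at $v(x)$, and the transpose evaluates the given family at the updated valuation.
\begin{verbatim}
-- Models are a carrier plus a valuation; strategies are abstract (type s).
type Var = String

data Model l = Model { carrier :: [l], val :: Var -> l }

-- v[x -> l]
update :: Var -> l -> Model l -> Model l
update x l (Model c v) = Model c (\y -> if y == x then l else v y)

-- (forall x . G) at a model is one G-strategy per carrier element; the
-- counit at a model over X u {x} projects out the component at v(x).
counit :: Var -> Model l -> (l -> s) -> s
counit x m fam = fam (val m x)

-- Transpose of f : U'_x F -> G, as in the proof:
-- fhat_(L,v) = < f_(L, v[x -> l]) >_l
transpose :: Var -> (Model l -> s) -> Model l -> (l -> s)
transpose x f m = \l -> f (update x l m)
\end{verbatim}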
If $N : \mathcal{M}^\Theta_{X \uplus \{ x \}} \rightarrow \mathcal{G}_e$ then on objects $\llbracket \forall x . N
\rrbracket(L,v) = \prod_{l \in |L|} \llbracket N \rrbracket(L,v[x \mapsto l])$. For the action of $\forall x . N$ on arrows, suppose $f : (L,v) \rightarrow (L',w)$. Then $\llbracket \forall x . N
\rrbracket(f) : \prod_{l \in |L|} \llbracket N \rrbracket(L,v[x
\mapsto l]) \rightarrow \prod_{l \in |L'|} \llbracket N \rrbracket(L',w[x \mapsto l])$ is given as follows: The embedding part (left to right) is given by $\langle g_{m} \rangle_{m}$ where $g_{m} = \epsilon$ if $m$ is not in the image of $f$, and $g_{m} = i \llbracket N \rrbracket(f) \circ \pi_l$ if $m = f(l)$ (note in this case $l$ is unique by injectivity of $f$). The projection part is given by $\langle p \llbracket N \rrbracket (f) \circ \pi_{f(l)} \rangle_l$.
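\noindent The only delicate point in this action on arrows is the index bookkeeping: components at indices outside the image of $f$ are sent to the empty strategy. The following Haskell sketch makes this bookkeeping explicit for the embedding part, storing the composites $i_{f_j} \circ \pi_j$ as opaque values; all names here are illustrative.
\begin{verbatim}
import qualified Data.Map.Strict as Map
import           Data.Map.Strict (Map)

-- Embedding part of prod on a FamInj-morphism (f, {f_j}) : {G_j} -> {H_k}.
-- comps stores, for each source index j, the composite i_{f_j} . pi_j as an
-- opaque value of type s; emptyStrat is used at indices outside the image of f.
prodEmbedding
  :: (Ord j, Ord k)
  => Map j k     -- the injection f on indices
  -> Map j s     -- the components, one per source index
  -> s           -- the empty strategy
  -> [k]         -- the target index set K
  -> Map k s     -- the tuple <g_k>_k
prodEmbedding f comps emptyStrat ks = Map.fromList [ (k, g k) | k <- ks ]
  where
    preimage k = [ j | (j, k') <- Map.toList f, k' == k ]  -- at most one, f injective
    g k = case preimage k of
            [j] -> Map.findWithDefault emptyStrat j comps
            _   -> emptyStrat
\end{verbatim}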
Consider the map $\mathsf{set}'^x_s(\eta) : \forall x . F = \mathsf{set}'^x_s(U'_x(\forall x . F)) \rightarrow \mathsf{set}'^x_s(F)$ in $\mathcal{W}^{\mathcal{M}_X^\Theta}$. Pointwise, $\mathsf{set}'^x_s(\eta)_{(L,v)} : \prod_{l \in L} F(L,v[x \mapsto l]) \rightarrow F(L,v[x \mapsto v(s)])$ is given by $\pi_{v(s)}$, and so we will write $\pi_s$ for this map.
\subsection{Semantics of Sequents}
We define the semantics of sequents $X ; \Theta \vdash \Gamma$ as functors $\mathcal{M}_X^\Theta \rightarrow \mathcal{G}_e$ inductively, via the equations given in the previous section, extended with the following interpretations of atoms and quantifiers:
\begin{small} \[ \begin{array}{lcllcl}
\llbracket \Phi \vdash \phi(\overrightarrow{s}) \rrbracket(L,v) & = & I \mbox{ if } (L,v) \models \phi(\overrightarrow{s}) & \llbracket \Phi \vdash \overline{\phi}(\overrightarrow{s}) \rrbracket(L,v) & = & I \mbox{ if } (L,v) \models \phi(\overrightarrow{s}) \\
\llbracket \Phi \vdash \phi(\overrightarrow{s}) \rrbracket(L,v) & = & o \mbox{ if } (L,v) \models \overline{\phi}(\overrightarrow{s}) & \llbracket \Phi \vdash \overline{\phi}(\overrightarrow{s}) \rrbracket(L,v) & = & o \mbox{ if } (L,v) \models \overline{\phi}(\overrightarrow{s}) \\ \end{array} \] \[ \begin{array}{lcl} \llbracket X ; \Theta \vdash \forall x . N \rrbracket & = & \forall x . \llbracket X \uplus \{ x \} ; \Theta \vdash N \rrbracket \\ \llbracket X ; \Theta \vdash \exists x . P \rrbracket & = & \forall x . \llbracket X \uplus \{ x \} ; \Theta \vdash P \rrbracket \\ \end{array} \] \end{small}
In the case of atoms, the functors are specified pointwise on objects, and we must also define the (functorial) action on arrows. Let $f : (L,v) \rightarrow (L',v')$. If the truth value of $\phi(\overrightarrow{s})$ is the same in $(L,v)$ and $(L',v')$, we use the identity embedding $(\mathsf{id},\mathsf{id})$. If the truth value of $\phi(\overrightarrow{s})$ is different, we must have $(L,v) \models \phi(\overrightarrow{s})$ and $(L',v') \models \overline{\phi}(\overrightarrow{s})$ since morphisms in $\mathcal{M}_X^\Theta$ preserve truth of positive atoms. Thus we need an embedding $I \rightarrow o$. We can take $(\epsilon_{I \multimap o}, \epsilon_{o \multimap I})$ where $\epsilon_A$ is the strategy containing just the empty sequence. Note that $\epsilon_{o \multimap
I} \circ \epsilon_{I \multimap o} = \epsilon_I = \mathsf{id}_I$ and $\epsilon_{I \multimap o} \circ \epsilon_{o \multimap I} = \epsilon \sqsubseteq \mathsf{id}_o$ ($\epsilon$ is the bottom element with respect to $\sqsubseteq$).
We must check functoriality. We have already noted that if the truth value of $\phi(\overrightarrow{s})$ is the same in $(L,v)$ and $(L',v')$ then $\llbracket \phi(\overrightarrow{s}) \rrbracket(f) = \mathsf{id}$, so in particular $\llbracket \phi(\overrightarrow{s}) \rrbracket(\mathsf{id}) = \mathsf{id}$. For composition, suppose $f : (L,v) \rightarrow (L',v')$ and $g : (L',v') \rightarrow (L'',v'')$. We can consider the truth value of $\phi(\overrightarrow{s})$ in each of these models (only some cases are possible, as morphisms preserve truth of positive atoms). \\
\noindent \begin{tabular}{|c|c|c|l|} \hline $(L,v) \models$ & $(L',v') \models$ & $(L'',v'') \models$ & $\llbracket \phi(\overrightarrow{s}) \rrbracket(g) \circ \llbracket \phi(\overrightarrow{s}) \rrbracket(f) = \llbracket \phi(\overrightarrow{s}) \rrbracket(g \circ f)$ \\ \hline $\phi(\overrightarrow{s})$ & $\phi(\overrightarrow{s})$ & $\phi(\overrightarrow{s})$ & $(\mathsf{id} , \mathsf{id}) \circ (\mathsf{id} , \mathsf{id}) = (\mathsf{id} , \mathsf{id})$ \\ $\phi(\overrightarrow{s})$ & $\phi(\overrightarrow{s})$ & $\overline{\phi}(\overrightarrow{s})$ & $(\epsilon , \epsilon) \circ (\mathsf{id} , \mathsf{id}) = (\epsilon, \epsilon)$\\ $\phi(\overrightarrow{s})$ & $\overline{\phi}(\overrightarrow{s})$ & $\overline{\phi}(\overrightarrow{s})$ & $(\mathsf{id} , \mathsf{id}) \circ (\epsilon , \epsilon) = (\epsilon , \epsilon)$ \\ $\overline{\phi}(\overrightarrow{s})$ & $\overline{\phi}(\overrightarrow{s})$ & $\overline{\phi}(\overrightarrow{s})$ & $(\mathsf{id} , \mathsf{id}) \circ (\mathsf{id} , \mathsf{id}) = (\mathsf{id} , \mathsf{id})$ \\ \hline \end{tabular}
\subsection{Semantics of Proofs} We now extend the semantics of proof rules given in the previous section with interpretations for the rules for quantifiers, atoms and equality, completing the semantics of \textsf{WS1}.
\noindent We first show that if $x \not \in FV(\Gamma)$ there is an isomorphism $\mathsf{dist}_\Gamma : \llbracket \forall x . A , \Gamma \rrbracket \cong \forall x . \llbracket A , \Gamma \rrbracket$ in $\mathcal{W}^{\mathcal{M}_X^\Theta}$. Observe that there is a natural isomorphism $$\mathsf{dist}_\oslash : \_ \oslash \_ \circ (\mathsf{prod} \times \mathsf{id}) \Rightarrow \mathsf{prod} \circ \mathsf{FamInj}(\_ \oslash \_) \circ \mathsf{dst} : \mathsf{FamInj}(\mathcal{G}_s) \times \mathcal{G}_s \rightarrow \mathcal{G}_s$$ which is concretely a family of winning strategies $$\mathsf{prod}(\{ G_i : i \in I \}) \oslash M \rightarrow \mathsf{prod}(\{ G_i \oslash M : i \in I \})$$ given by $\mathsf{dist}_\oslash = \langle \pi_i \oslash \mathsf{id} \rangle_i$. Each $\mathsf{dist}_\oslash$ is a natural isomorphism in $\mathcal{W}_s$.
Similarly, we can define a natural isomorphism $$\mathsf{dist}_\multimap : \mathsf{prod}(\{ M \multimap G_i : i \in I \}) \cong M \multimap \mathsf{prod}(\{ G_i : i \in I \})$$ between functors $$\_ \multimap \_ \circ (\mathsf{prod} \times \mathsf{id}) \Rightarrow \mathsf{prod} \circ \mathsf{FamInj}(\_ \multimap \_) \circ \mathsf{dst} : \mathsf{FamInj}(\mathcal{G}_s) \times \mathcal{G}_s \rightarrow \mathcal{G}_s.$$
For each $\Gamma$, we can then construct a map $$\mathsf{dist}_{b,\Gamma} : \llbracket \Gamma \rrbracket^b_1 \circ (\mathsf{prod} \times \mathsf{id}) \cong \mathsf{prod} \circ \mathsf{FamInj}(\llbracket \Gamma \rrbracket^b_1) \circ \mathsf{dst} : \mathsf{FamInj}(\mathcal{G}_s) \times \mathcal{M}_X^\Theta \rightarrow \mathcal{G}_s$$ proceeding by induction on $\Gamma$.
Finally, given a sequent $A,\Gamma$ we define $\mathsf{dist}_{\Gamma}$ as the following horizontal composition, where $b$ is the polarity of $A$. It is easy to see by checking pointwise that the functor $\forall x . \llbracket A , \Gamma \rrbracket$ is equal to the given decomposition. \begin{scriptsize} \begin{diagram} \forall x . \llbracket A , \Gamma \rrbracket : & \mathcal{M}_X^\Theta & \rTo^{\langle \mathsf{add}_x , \mathsf{id} \rangle} & \mathsf{FamInj}(\mathcal{M}_{X \uplus \{ x \}}^\Theta) \times \mathcal{M}_X^\Theta & \rTo^{\mathsf{FamInj}(A) \times \mathsf{id}} & \mathsf{FamInj}(\mathcal{G}_s) \times \mathcal{M}_X^\Theta & \rTo^{\mathsf{prod} \circ \mathsf{FamInj}(\llbracket \Gamma \rrbracket^b_1) \circ \mathsf{dst}} & \mathcal{G}_s \\ & & \dImplies^\mathsf{id} & & \dImplies^{\mathsf{id}} & & \dImplies^{\mathsf{dist}_{b,\Gamma}^{-1}} \\ \llbracket \forall x . A , \Gamma \rrbracket : & \mathcal{M}_X^\Theta & \rTo^{\langle \mathsf{add}_x , \mathsf{id} \rangle} & \mathsf{FamInj}(\mathcal{M}_{X \uplus \{ x \}}^\Theta) \times \mathcal{M}_X^\Theta & \rTo^{\mathsf{FamInj}(A) \times \mathsf{id}} & \mathsf{FamInj}(\mathcal{G}_s) \times \mathcal{M}_X^\Theta & \rTo^{\llbracket \Gamma \rrbracket^b_1 \circ (\mathsf{prod} \times \mathsf{id})} & \mathcal{G}_s \\ \end{diagram} \end{scriptsize}
\noindent Since $\mathsf{dist}_\Gamma$ is a natural isomorphism, and pointwise winning, it is an isomorphism in $\mathcal{W}^{\mathcal{M}_X^\Theta}$.
\begin{proposition}
$\pi_{v(x)} \circ {\mathsf{dist}_{\Gamma}}_{(L,v)} = \llbracket \Gamma
\rrbracket^b(\pi_{v(x)})$ \label{contprodpoint} \end{proposition} \begin{proof}
We can check this by induction on $\Gamma$, as in Proposition
\ref{contprod}. \qed \end{proof}
We next give semantics to the rules involving atoms and quantifiers. We first introduce some notation. Suppose $\mathcal{C}$ is the coproduct of two categories $\mathcal{D}$ and $\mathcal{E}$ (the disjoint union of the two categories, where there are no maps between them). If $F : \mathcal{C} \rightarrow \mathcal{G}_e$ we write
$F|_\mathcal{D}$ and $F|_\mathcal{E}$ for the restriction of $F$ to $\mathcal{D}$ and $\mathcal{E}$ respectively. If $\eta : F \Rightarrow G$ then we can restrict $\eta$ to a natural transformation
$F|_\mathcal{D} \Rightarrow G|_\mathcal{D}$, and we write
$\eta|_\mathcal{D}$ for this restriction. If $\eta : F|_\mathcal{D}
\Rightarrow G|_\mathcal{D}$ and $\sigma : F|_\mathcal{E} \Rightarrow G|_{\mathcal{E}}$ then we write $[ \eta , \sigma ]_{\mathcal{D},\mathcal{E}}$ for the lax natural transformation defined by $[ \eta , \sigma ]_A = \eta_A$ if $A \in \mathcal{D}$ and $[\eta , \sigma]_A = \sigma_A$ if $A \in \mathcal{E}$. Lax naturality of $[\eta , \sigma]$ inherits from lax naturality of $\eta$ and $\sigma$, since there are no maps between $\mathcal{D}$ and $\mathcal{E}$ when viewed as subcategories of $\mathcal{C}$. If $\mathcal{C} = \mathcal{M}_X^\Theta$ then we will write $[\eta , \sigma]_{\alpha , \beta}$ for $[\eta , \sigma]_{\mathcal{M}_X^{\Theta
, \alpha} , \mathcal{M}_X^{\Theta , \beta}}$.
We construct an isomorphism $$H_{x,y,z} : \mathcal{M}_X^{\Theta,x=y} \cong \mathcal{M}_{X / \{x,y\} \uplus \{ z \}}^{\Theta[ \frac{z}{x} ,
\frac{z}{y} ]} : H^{-1}_{x,y,z}$$ with $H_{x,y,z}(M,v) = (M,v [z \mapsto v(x)] - x - y)$ and $H^{-1}(M,v) = (M,v[x \mapsto v(z),y \mapsto v(z)] - z)$. We can show that $\llbracket (X ; \Theta \vdash \Gamma) [ \frac{z}{x} , \frac{z}{y} ] \rrbracket = \llbracket X ; \Theta , x = y \vdash \Gamma \rrbracket H_{x,y,z}^{-1}$ by induction on $\Gamma$.
Semantics of the rules involving atoms and quantifiers are given in Figure \ref{WS1-sem}. We must justify lax naturality of $\mathsf{P}_\mathsf{at-}$: the following diagram must lax commute: \begin{diagram} I & \rTo^{\llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(M,w)} & \llbracket \phi(\overrightarrow{s}) , \Gamma \rrbracket(M,w) \\ \dTo^\mathsf{id} & \sqsupseteq & \dTo_{i\llbracket \phi(\overrightarrow{s}) , \Gamma \rrbracket(f)} \\ I & \rTo_{\llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(L,v)} & \llbracket \phi(\overrightarrow{s}) , \Gamma \rrbracket(L,v) \\ \end{diagram}
\noindent To see this, note that if $(L,v)$ and $(M,w)$ agree on $\phi(\overrightarrow{x})$ then the diagram lax commutes by lax naturality of $\epsilon$ or $\llbracket p \rrbracket$. If they disagree, then we must have $(L,v) \models \overline{\phi}(\overrightarrow{x})$ and $(M,w) \models \phi(\overrightarrow{x})$. We need to show that $\llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(L,v) \sqsupseteq i \llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket(f) \circ \llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(M,w)$.
But $\llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(M,w) = p\llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket(f) \circ \llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(L,v)$ as both sides map into the terminal object, so $\llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(L,v) \sqsupseteq i\llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket(f) \circ p\llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket(f) \circ \llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(L,v) = i \llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket(f) \circ \llbracket \mathsf{P}_\mathsf{at-}(p) \rrbracket(M,w)$.
\begin{figure*}
\caption{Semantics of Rules involving Atoms and Quantifiers}
\label{WS1-sem}
\end{figure*}
\section{Full Completeness}
We next show a full completeness result for the \emph{function-free} fragment of \textsf{WS1}: in this section we assume that $\mathcal{L}$ contains no function symbols. Thus, the only uses of the $\mathsf{P}_\exists$ rule are of the form $\mathsf{P}_\exists^y$ where $y$ is some variable in scope.
We show that the core rules suffice to represent any uniform winning strategy $\sigma$ on a type object provided $\sigma$ is \emph{bounded} --- i.e. there is a bound on the size of plays occurring in $\sigma$. In particular, such a strategy is the semantics of a unique \emph{analytic} proof --- a proof using only the core rules, with some further restrictions on the use of the matching rule. Given a sequent $X; \Theta \vdash \Gamma$, we say $\Theta$ is \emph{lean} if it contains $x \neq y$ for all distinct $x$ and $y$ in $X$ and does not contain $x \neq x$. We assume an arbitrary ordering on variables.
\begin{definition} A proof in \textsf{WS1} is \emph{analytic} if it uses only core rules and has the following additional restrictions: \begin{itemize} \item Rules other than $\mathsf{P}_{\neq}$ and $\mathsf{P}_\mathsf{ma}^{x,y,z}$ can only
conclude sequents with a lean $\Theta$ \item If $\mathsf{P}_\mathsf{ma}^{x,y,z}$ is used to conclude $X ; \Theta \vdash
\Gamma$ then $\Theta$ does not contain $w \neq w$ for any $w$; $(x,y)$ is
the least pair with $x,y \in X$, $x \not \equiv y$ and $x \neq y
\not \in \Theta$; and $z$ is the least variable in $\mathsf{Fr}(X ;
\Theta \vdash \Gamma)$ (the least fresh variable). \end{itemize} \end{definition}
\begin{theorem}
Let $X ; \Theta \vdash \Gamma$ be a sequent of \textsf{WS1} and
$\sigma$ a bounded uniform winning strategy on $\llbracket X; \Theta
\vdash \Gamma \rrbracket$. Then there is a unique analytic proof $p$
of $X ; \Theta \vdash \Gamma$ with $\llbracket p \rrbracket =
\sigma$.
\label{fullcomp} \end{theorem}
All strategies on the denotations of exponential-free sequents are bounded. Consequently, in the affine fragment we can perform reduction-free normalisation from proofs to (cut-free) core proofs, by reification of their semantics. We thus see that all of the non-core rules are admissible (when restricted to this fragment).
The rest of this section sketches the proof of this full completeness result, and describes an extension to reify unbounded strategies as \emph{infinitary} analytic proofs. We perform a semantics-guided proof search procedure, following \cite{HO_PCF,AJM_PCF,Lau_PG,m_ag4}.
\subsection{Uniform Choice}
When constructing a proof of a given sequent out of core rules there is a choice of which rule to use when the outermost head connective is $\oplus$ (either $\mathsf{P}_\oplus^1$ or $\mathsf{P}_\oplus^2$) or $\exists$ (which $s$ to use in $\mathsf{P}_\exists^s$). Our choice of rule is determined by the given strategy, according to which component Player plays in first. However, the input to our procedure is a family of strategies, and we need to ensure that the same component choice is made in each strategy. We will next show that our uniformity condition ensures this.
\begin{proposition} If $\Theta$ is lean and $(L,v),(M,w) \in \mathcal{M}_X^\Theta$ there exists an $\mathcal{L}$-model $(L,v) \sqcup (M,w)$ with maps $f_{(L,v,M,w)} : (L,v) \rightarrow (L,v) \sqcup (M,w)$ and $g_{(L,v,M,w)} : (M,w) \rightarrow (L,v) \sqcup (M,w)$. \label{firstkeyobs} \end{proposition} \begin{proof}
If $(L,v)$ is an $\mathcal{L}$-model, define $U_{(L,v)}$ to be the elements of $|L|$ not in the image of $v$. Then the carrier of $(L,v) \sqcup (M,w)$ is defined to be $X \uplus U_{(L,v)} \uplus U_{(M,w)}$. The $\mathcal{L}$-structure validates all positive atoms, and the valuation is just $\mathsf{inj}_1$. Then the map $f_{(L,v,M,w)}$ sends $v(x)$ to $\mathsf{inj}_1(x)$ and $u \in U_{(L,v)}$ to $\mathsf{inj}_2(u)$. This is an injection because $\Theta$ is lean. $g_{(L,v,M,w)}$ is defined similarly. \qed \end{proof}
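\noindent The following Haskell fragment renders this amalgamation concretely, with models represented naively as a carrier together with a finite valuation; the types \texttt{Model} and \texttt{Amalg} and the function names are illustrative only.
\begin{verbatim}
import Data.List ((\\))

type Var = String

-- A naive model: a carrier together with a finite valuation on X.
data Model a = Model { carrier :: [a], val :: [(Var, a)] }

-- Elements of the amalgamated carrier X + U_(L,v) + U_(M,w).
data Amalg a b = V Var | UL a | UR b deriving (Eq, Show)

unused :: Eq a => Model a -> [a]
unused m = carrier m \\ map snd (val m)

-- The amalgamation (L,v) |_| (M,w): all positive atoms hold (not modelled
-- here), and the valuation is the left injection V.
amalgamate :: (Eq a, Eq b) => [Var] -> Model a -> Model b -> Model (Amalg a b)
amalgamate xs l m =
  Model (map V xs ++ map UL (unused l) ++ map UR (unused m))
        [ (x, V x) | x <- xs ]

-- The map f_(L,v,M,w): v(x) goes to V x, unused elements keep their own copy.
-- It is injective precisely because leanness of Theta forces v to be injective.
embedLeft :: Eq a => Model a -> a -> Amalg a b
embedLeft l a = case [ x | (x, a') <- val l, a' == a ] of
                  (x:_) -> V x
                  []    -> UL a
\end{verbatim}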
We also recall that if $f : (L,v) \rightarrow (M,w)$ then $\sigma_{(L,v)}$ is determined entirely by $f$ and $\sigma_{(M,w)}$. In particular, uniformity for positive strategies $\sigma : N \Rightarrow o$ requires that $\sigma_{(L,v)} \sqsubseteq \sigma_{(M,w)} \circ N(f)$ but since $\sigma_{(L,v)}$ is total, it is maximal in the ordering and so we must have $\sigma_{(L,v)} = \sigma_{(M,w)} \circ N(f)$.
\begin{proposition} Let $X ; \Theta \vdash \Gamma$ be a sequent and suppose $\Theta$ is lean. Then there exists an object in $\mathcal{M}_X^\Theta$. \label{leannonempty} \end{proposition} \begin{proof} Note that $\Theta$ just contains positive atoms. We can take $(X,\mathsf{id})$, with $(X,\mathsf{id}) \models \overline{\phi}(\overrightarrow{x})$ just if $\overline{\phi}(\overrightarrow{x}) \in \Theta$. Then each formula in $\Theta$ is satisfied: each such formula is either $\overline{\phi}(\overrightarrow{x})$, or $x \neq y$ for distinct $x,y$. \qed \end{proof}
We now use the above lemmas to show that in any uniform winning strategy on a sequent whose head formula is $P \oplus Q$, either all strategies play their first move in $P$, or all strategies play their first move in $Q$.
\begin{proposition} Let $M_1 , M_2 : \mathcal{M}_X^\Theta \rightarrow \mathcal{G}_e$. Suppose $\Theta$ is lean, and let $\sigma : M_1 \times M_2 \Rightarrow o$ be a uniform total (resp. winning) strategy. Then $\sigma = \tau \circ \pi_1$ for some uniform total (resp. winning) strategy $\tau : M_1 \Rightarrow o$, or $\sigma = \tau \circ \pi_2$ for some uniform total (resp. winning) strategy $\tau : M_2 \Rightarrow o$. \label{choice1} \end{proposition} \begin{proof} We know that each $\sigma_{(L,v)}$ is of the form $\tau_{(L,v)} \circ \pi_i$ for some $i \in \{ 1 , 2 \}$ since in the game $M_1(L,v) \times M_2(L,v) \multimap o$ we must respond to the initial Opponent-move either with a move in $M_1$ or a move in $M_2$ (the $\pi$-atomicity condition). But we need to check that $i$ is uniform across components. Suppose that $i$ is not uniform --- then we have $(L,v)$ and $(T,w)$ with $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_1$ and $\sigma_{(T,w)} = \tau_{(T,w)} \circ \pi_2$. Now consider $(L,v) \sqcup (T,w)$ and let $k$ be such that $\sigma_{(L,v) \sqcup (T,w)} = \tau_{(L,v) \sqcup (T,w)} \circ \pi_k$. By uniformity and totality, $\sigma_{(L,v)} = \sigma_{(L,v) \sqcup (T,w)} \circ (M_1 \times M_2)(f_{(L,v,T,w)}) = \tau_{(L,v) \sqcup (T,w)} \circ \pi_k \circ (M_1 \times M_2)(f_{(L,v,T,w)}) = \tau_{(L,v) \sqcup (T,w)} \circ M_k(f_{(L,v,T,w)}) \circ \pi_k$. But since $\sigma_{(L,v)}$ is of the form $\tau_{(L,v)} \circ \pi_1$, we must have $k = 1$. But we can reason similarly using $\sigma_{(T,w)}$ and $g_{(L,v,T,w)}$ and discover that $k = 2$. This is a contradiction.
Thus there is some $i$ such that each $\sigma_{(L,v)}$ can be decomposed into $\tau_{(L,v)} \circ \pi_i$. In particular, we can take $i$ such that $\sigma_{(X,\mathsf{id})} = \tau_{(X,\mathsf{id})} \circ \pi_i$ where $(X,\mathsf{id})$ is as defined in Proposition \ref{leannonempty}. We only need to show that $\tau$ is lax natural. We can construct natural transformations $\iota_1 = \langle \mathsf{id} , \epsilon \rangle : M_1 \Rightarrow M_1 \times M_2$ and $\iota_2 = \langle \epsilon , \mathsf{id} \rangle : M_2 \Rightarrow M_1 \times M_2$. Then $\tau = \sigma \circ \iota_i$, and so is lax natural. \qed \end{proof}
We next show that in any uniform family of winning strategies on a sequent with head $\exists x. P$, Player chooses the same $x$ in each strategy component. Moreover, the chosen $x$ is the value of some variable in scope.
\begin{proposition} Let $M : \mathcal{M}_{X \uplus \{ x \}}^\Theta \rightarrow \mathcal{G}_e$. Suppose $\Theta$ is lean, and let $\sigma : \forall x . M \Rightarrow o$ be a uniform total (resp. winning) strategy. Then there exists a unique variable $y \in X$ and uniform total (resp. winning) strategy $\tau : M \mathsf{set}^{x}_y \Rightarrow o$ such that $\sigma = \tau \circ \pi_{y}$. \label{choice2} \end{proposition}
\begin{proof} We firstly show that given any $\mathcal{L}$-model $(L,v)$ there is some $x$ with $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(x)}$. Suppose for contradiction that $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_u$ for some $u \in U_{(L,v)}$. Build the $\mathcal{L}$-model $L' = X \uplus \{ a , b \} \uplus U_{(L,v)}$ with valuation $\mathsf{inj}_1$ and validating all positive atoms. Let $\sigma_{(L',\mathsf{inj}_1)} = \tau_{(L',\mathsf{inj}_1)} \circ \pi_r$. Define $m_1 : (L,v) \rightarrow (L',\mathsf{inj}_1)$ sending $v(x)$ to $\mathsf{inj}_1(x)$, $u$ to $\mathsf{inj}_2(a)$ and $v \in U_{(L,v)} - \{ u \}$ to $\mathsf{inj}_3(v)$. Then $\sigma_{(L,v)} = \sigma_{(L',\mathsf{inj}_1)} \circ \forall x . M(m_1) = \tau_{(L',\mathsf{inj}_1)} \circ \pi_r \circ \forall x . M(m_1)$. \begin{itemize} \item If $r = \mathsf{inj}_2(b)$ then this is $\tau_{(L',\mathsf{inj}_1)} \circ \epsilon$ which is $\epsilon$ as $\tau_{(L',\mathsf{inj}_1)}$ must be strict (as it is total and a map into $o$). This is impossible. \item If $r = \mathsf{inj}_1(x)$ then this is $\tau_{(L',\mathsf{inj}_1)} \circ M(m_1) \circ \pi_{v(x)}$, which is impossible by assumption. \item Hence we must have $r = \mathsf{inj}_2(a)$. \end{itemize} Define $m_2 : (L,v) \rightarrow (L',\mathsf{inj}_1)$ sending $v(x)$ to $\mathsf{inj}_1(x)$, $u$ to $\mathsf{inj}_2(b)$ and $v \in U_{(L,v)} - \{ u \}$ to $\mathsf{inj}_3(v)$. We can use similar reasoning to show that $r = \mathsf{inj}_2(b)$. This is a contradiction.
Hence, given any $(L,v)$ there is some variable $x$ such that $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(x)}$. Let $y \in X$ be the unique variable such that $\sigma_{(X,\mathsf{id})} = \tau_{(X,\mathsf{id})} \circ \pi_y$ where $(X,\mathsf{id})$ is constructed as in Proposition \ref{leannonempty}. We now show the stronger fact that $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(y)}$. Suppose that $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(x)}$ and $\sigma_{(L,v) \sqcup (X,\mathsf{id})} = \tau_{(L,v) \sqcup (X,\mathsf{id})} \circ \pi_{\mathsf{inj}_1(z)}$. By lax naturality, $\tau_{(L,v)} \circ \pi_{v(x)} = \sigma_{(L,v)} = \sigma_{(L,v) \sqcup (X,\mathsf{id})} \circ \forall x . M(f_{(L,v,X,\mathsf{id})}) = \tau_{(L,v) \sqcup (X,\mathsf{id})} \circ \pi_{\mathsf{inj}_1(z)} \circ \forall x . M(f_{(L,v,X,\mathsf{id})})$. Since $\mathsf{inj}_1(z) = f_{(L,v,X,\mathsf{id})}(v(z))$, we have $\sigma_{(L,v)} = \tau_{(L,v) \sqcup (X,\mathsf{id})} \circ M(f_{(L,v,X,\mathsf{id})}) \circ \pi_{v(z)}$ and so we must have $x = z$. By similar reasoning using $g_{(L,v,X,\mathsf{id})}$, we see that $y = z$, so $x = y$.
Hence there is a variable $y$ such that for all $(L,v)$, $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(y)}$ for some $\tau_{(L,v)} : M(L,v[x \mapsto v(y)]) \Rightarrow o$. Since $\Theta$ is lean, $y$ is the unique variable such that $\sigma_{(L,v)} = \tau_{(L,v)} \circ \pi_{v(y)}$. Note that $M(L,v[x \mapsto v(y)]) = M(\mathsf{set}^x_y(L,v))$. We can easily check that the resulting transformation $\tau : M \mathsf{set}^x_y \Rightarrow o$ is lax natural. \qed \end{proof}
\subsection{Reification of Strategies}
We define a procedure $\mathsf{reify}$ which transforms a bounded uniform winning strategy on a formula object into a proof of that formula. It may be seen as a semantics-guided proof search procedure: given such a strategy $\sigma$ on the interpretation of $\Gamma$, $\mathsf{reify}$ finds a proof which denotes it. Reading upwards, the procedure first decomposes the head formula into a unit (nullary connective) using the head introduction rules. If this unit is $\mathbf{1}$, we are done. It cannot be $\mathbf{0}$, as there are no (total) strategies on this game. If the unit is $\top$ or $\bot$, the procedure then consolidates the tail of $\Gamma$ into a single formula, using the core elimination rules. Once this is done, the head unit is removed using $\mathsf{P}_\bot^+$ or $\mathsf{P}_\top^-$, strictly decreasing the size of the sequent. These steps are then repeated until termination. We further have to deal with equality: whenever a free variable is introduced, we must consider if it is equal to each of the other free variables using the $\mathsf{P}_\mathsf{ma}$ rule.
Informally, if $\Theta$ is not lean: \begin{itemize} \item If $\Theta$ contains $x \neq x$ we use $\mathsf{P}_{\neq}$ and halt. \item Otherwise, we consider the least two variables $x,y \in X$ that are not declared distinct by $\Theta$ and split the family into those models that identify $x$ and $y$, and those that do not. In the former case, we can substitute fresh $z$ for both $x$ and $y$. We then apply the inductive hypothesis to both halves and apply $\mathsf{P}_\mathsf{ma}^{x,y,z}$ using $H^{-1}_{x,y,z}$. \end{itemize} If $\Theta$ is lean, then: \begin{itemize} \item The case $\Gamma = \mathbf{0} , \Gamma'$ is impossible: there are no total strategies on this game. \item If $\Gamma = \mathbf{1} , \Gamma'$ then $\sigma$ must be the empty strategy, since it is the unique total strategy on this game. This is the interpretation of the proof $\mathsf{P}_\mathbf{1}$. \item If $\Gamma = \top$ then $\sigma$ must similarly be the unique total strategy on this game, i.e. the interpretation of $\mathsf{P}_\top$. \item If $\Gamma = \top , P , \Gamma'$ then $\sigma$ can never play in $P$ since if it did the play restricted to $\top , P$ would not be alternating. Thus $\sigma$ is a strategy on $\top , \Gamma'$. We can call $\mathsf{reify}$ inductively yielding a proof of $\vdash \top , \Gamma'$, and apply $\mathsf{P}_\top^+$ to yield a proof of $\vdash \top , P , \Gamma'$. \item If $\Gamma = \top , N , P , \Gamma'$ then $\sigma$ is a total strategy on $\top , N \lhd P , \Gamma'$ up to retagging and we can proceed inductively using $\mathsf{P}_\top^\lhd$. If $\Gamma = \top , N , M , \Gamma'$ we can proceed similarly, using $\mathsf{P}_\top^\otimes$. \item If $\Gamma = \top , N$ then $\sigma$ is a total strategy on $\downarrow N$: we can strip off the first move yielding a total strategy on $N$, apply $\mathsf{reify}$ inductively yielding a proof of $\vdash N$, and finally apply $\mathsf{P}_\top^-$ yielding a proof of $\vdash \top , N$. \item The case $\Gamma = \bot$ is impossible: there are no total strategies on this game. Other cases where $\bot$ is the head formula proceed as with $\top$: if the tail is a single positive formula, we remove the first move and apply $\mathsf{P}_\bot^+$, otherwise we shorten the tail using $\mathsf{P}_\bot^-$, $\mathsf{P}_\bot^\oslash$ or $\mathsf{P}_\bot^\bindnasrepma$. \item If $\Gamma = A \oslash N , \Gamma'$ then $\sigma$ is also a strategy on $A , N , \Gamma'$. We can call $\mathsf{reify}$ inductively yielding a proof of $\vdash A , N , \Gamma'$ that denotes $\sigma$, and apply $\mathsf{P}_\oslash$. We can proceed similarly in the following case $\Gamma = A \lhd P , \Gamma'$. \item If $\Gamma = M \& N , \Gamma'$ then we can split $\sigma$ into those plays that start with $M$ and those that start with $N$. This yields total strategies on $M , \Gamma'$ and $N , \Gamma'$ respectively, which we can $\mathsf{reify}$ inductively and apply $\mathsf{P}_\&$. \item If $\Gamma = M \otimes N , \Gamma'$ then we can split $\sigma$ into those plays that start with $M$ and those that start with $N$. This yields total strategies on $M , N , \Gamma'$ and $N , M , \Gamma'$ respectively, which we can $\mathsf{reify}$ inductively and apply $\mathsf{P}_\otimes$. \item If $\Gamma = P \oplus Q , \Gamma'$ then $\sigma$ specifies a
first move that must either be in $P$ or in $Q$. In the former case,
we have a strategy on $P , \Gamma'$ and can $\mathsf{reify}$ inductively,
finally applying ${\mathsf{P}_\oplus}_1$. In the latter case, we have a
strategy on $Q , \Gamma'$ and can $\mathsf{reify}$ inductively and apply
${\mathsf{P}_\oplus}_2$. The case of $\Gamma = P \bindnasrepma Q , \Gamma'$ is
similar. \item If the head formula is a positive atom $\overline{\phi}(\overrightarrow{x})$ then we must have $\overline{\phi}(\overrightarrow{x})$ in $\Theta$, as otherwise there can be no uniform winning strategies on $\llbracket \Gamma \rrbracket$ (since some games in that family have no winning strategies). Thus we can proceed inductively and apply $\mathsf{P}_{\mathsf{at+}}$. \item If the head formula is a negative atom $\phi(\overrightarrow{x})$ then we can split the family $\sigma$ into those models that satisfy $\phi(\overrightarrow{x})$ and those that satisfy $\overline{\phi}(\overrightarrow{x})$. All strategies in the former group must be empty, as there are no moves to play. The strategies in the latter group form a uniform winning strategy on $\llbracket \Theta, \overline{\phi}(\overrightarrow{x}) \vdash \bot , \Gamma \rrbracket$ and we can proceed inductively using $\mathsf{P}_\mathsf{at-}$. \item If $\sigma : \llbracket X ; \Theta \vdash \Gamma = \forall x . N , \Gamma' \rrbracket$ then $\mathsf{dist}_{\Gamma'} \circ \sigma : I \Rightarrow \forall x . \llbracket N , \Gamma' \rrbracket$. Using our adjunction, this corresponds to a map $\eta \circ U'_x(\mathsf{dist}_{\Gamma'} \circ \sigma) : I \Rightarrow \llbracket N , \Gamma' \rrbracket$ in $\mathcal{W}^{\mathcal{M}_{X \uplus \{ x \}}^\Theta}$. We can then $\mathsf{reify}$ this inductively to yield a proof of $X \uplus \{ x \} ; \Theta \vdash N , \Gamma'$ and apply $\mathsf{P}_\forall$.
\item If $\Gamma = \exists x . P , \Gamma'$ then $\sigma \circ \mathsf{dist}_{+,\Gamma'} : \forall x . \llbracket P , \Gamma' \rrbracket \Rightarrow o$. By Proposition \ref{choice2}, there is a unique $y$ and natural transformation $\tau : \llbracket P , \Gamma' \rrbracket \mathsf{set}^x_y \Rightarrow o$ such that $\sigma \circ \mathsf{dist}_{+,\Gamma'} = \tau \circ \pi_{y}$. Since $x$ does not occur in $\Gamma'$, we have $\llbracket P , \Gamma' \rrbracket \mathsf{set}^x_y = \llbracket P[y/x] , \Gamma' \rrbracket$. This yields a lax natural transformation $\llbracket P[y/x] , \Gamma' \rrbracket \Rightarrow o$. We can then apply the inductive hypothesis and use the $\mathsf{P}_\exists^y$ rule. \end{itemize}
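\noindent The following Haskell skeleton records only the choice of core rule made by this dispatch on the head formula. The \texttt{Bool} argument stands for the $\oplus$/$\bindnasrepma$ component choice extracted from the strategy (Proposition \ref{choice1}) and the \texttt{String} argument for the witness variable of Proposition \ref{choice2}; the datatype constructors and rule names are illustrative, and the actual decomposition of the strategy is elided.
\begin{verbatim}
-- A skeleton of the rule dispatch performed by reify on the head formula.
data Formula
  = One | Zero | Top | Bot
  | Tensor Formula Formula | With Formula Formula
  | Oplus  Formula Formula | Par  Formula Formula
  | Oslash Formula Formula | Lhd  Formula Formula
  | PosAtom String [String] | NegAtom String [String]
  | Forall String Formula   | Exists String Formula
  deriving Show

headRule :: [Formula] -> Bool -> String -> String
headRule []         _    _ = error "empty sequent"
headRule (f : rest) left y = case f of
  One         -> "P_1"
  Zero        -> error "no total strategy"
  Top         -> case rest of
                   []      -> "P_top"
                   (_ : _) -> "P_top^+ / P_top^lhd / P_top^tensor / P_top^-"
  Bot         -> case rest of
                   []      -> error "no total strategy"
                   (_ : _) -> "P_bot^+ / P_bot^- / P_bot^oslash / P_bot^par"
  Oslash _ _  -> "P_oslash"
  Lhd    _ _  -> "P_lhd"
  With   _ _  -> "P_&"
  Tensor _ _  -> "P_tensor"
  Oplus  _ _  -> if left then "P_oplus_1" else "P_oplus_2"
  Par    _ _  -> if left then "P_par_1"   else "P_par_2"
  PosAtom _ _ -> "P_at+"
  NegAtom _ _ -> "P_at-"
  Forall  _ _ -> "P_forall"
  Exists  _ _ -> "P_exists^" ++ y
\end{verbatim}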
\noindent We will later show that $\mathsf{reify}$ is well founded by giving a measure on sequents that decreases on each call to the inductive hypothesis.
\subsection{Definition of Reify}
\noindent $\mathsf{reify}_\Gamma$ is defined inductively in Figure \ref{WS-reify}. Following the above remarks, the following properties hold: \label{compaxioms} \begin{description} \item [1a] The unique map $i: \varnothing \Rightarrow \mathcal{C}(I,o)$ is a bijection. \item [1b] The map $\mathsf{d} = [\lambda f . f \circ \pi_1, \lambda g . g \circ \pi_2] : \mathcal{C}(M,o) + \mathcal{C}(N,o) \Rightarrow \mathcal{C}(M \times N, o)$ is a bijection (\emph{$\pi$-atomicity} \cite{A_ADFC}). \item [2] The map $\_\multimap o : \mathcal{C}(I,M) \Rightarrow \mathcal{C}(M \multimap o, I \multimap o)$ is a bijection. \end{description}
\begin{figure*}
\caption{Reification of Strategies as Analytic Proofs}
\label{WS-reify}
\end{figure*}
\subsection{Termination of Reify}
We next argue for termination of our procedure. Intuitively, the full completeness procedure first breaks down the head formula until it is $\bot$ or $\top$. It then uses the core elimination rules to compose the tail into (at most) a single formula. These steps do not increase the size of the strategy. Finally, the head is removed using $\mathsf{P}_\bot^+$ or $\mathsf{P}_\top^-$, strictly reducing the size of the strategy. If $\Theta$ is not lean, the number of distinct variable pairs that are not declared distinct in $\Theta$ is reduced by using $\mathsf{P}_\mathsf{ma}$.
Formally, we can see this as a lexicographical ordering of four measures on $\sigma$,$X$,$\Theta$,$\Gamma$: \begin{itemize} \item The most dominant measure is the length of the longest play in $\sigma$. \item The second measure is the length of $\Gamma$ as a list if the head of $\Gamma$ is $\bot$ or $\top$, and $\infty$ otherwise. \item The third measure is the size of the head formula of $\Gamma$.
\item The fourth measure is $$|\{ (x , y) \in X \times X : x \not\equiv y \wedge (x \neq y) \notin \Theta \} |$$ \end{itemize}
If $\Theta$ is lean: \begin{itemize} \item If $\Gamma = \bot , P$ or $\top , N$ then the first measure decreases in the call to the inductive hypothesis. \item Otherwise, if $\Gamma = A , \Gamma'$ with $A \in \{ \bot , \top \}$ the first measure does not increase and the second measure decreases. \item If $\Gamma = A , \Gamma'$ with $A \notin \{ \bot , \top \}$, the first measure does not increase and either the second or third measure decreases.
\end{itemize}
If $\Theta$ is not lean and the $\mathsf{P}_\mathsf{ma}$ rule is applied, in the call to the inductive hypotheses the first three measures stay the same and the fourth measure decreases.
Thus, the inductive hypothesis is used with a smaller value in the compound measure on $\mathbb{N} \times (\mathbb{N} \cup \{ \infty \}) \times \mathbb{N} \times \mathbb{N}$ ordered lexicographically.
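\noindent Rendered as code, the measure is just a lexicographically ordered tuple. The following sketch (with illustrative names) makes the ordering explicit; a small wrapper is used for $\mathbb{N} \cup \{ \infty \}$, since encoding infinity as \texttt{Nothing} in \texttt{Maybe Int} would place it below every finite value rather than above.
\begin{verbatim}
-- N u {infinity}, with the obvious order.
data NatInf = Fin Int | Inf deriving (Eq, Show)

instance Ord NatInf where
  compare (Fin a) (Fin b) = compare a b
  compare (Fin _) Inf     = LT
  compare Inf     (Fin _) = GT
  compare Inf     Inf     = EQ

-- (longest play of sigma, tail length if the head is top/bot, size of the
--  head formula, number of variable pairs not yet separated by Theta)
type Measure = (Int, NatInf, Int, Int)

-- The derived Ord on tuples is lexicographic, which is exactly the order used.
decreases :: Measure -> Measure -> Bool
decreases new old = new < old
\end{verbatim}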
\subsection{Soundness and Uniqueness}
\begin{lemma}
For all $\sigma : \llbracket \vdash \Gamma \rrbracket$ we have
$\llbracket \mathsf{reify}_\Gamma (\sigma) \rrbracket = \sigma$.
\label{refsound} \end{lemma}
\begin{proof}
We proceed by induction on our reification measure $\langle |\Gamma|
, \mathsf{tl}(\Gamma) , \mathsf{hd}(\Gamma) \rangle$ using equations
that hold in the categorical model. We perform case analysis on
$\Gamma$. The calculation is routine; we demonstrate only a few
cases.
\begin{itemize} \item If $\Theta$ is not lean with $(x,y) \in X \times X$ least such that $x \not \equiv y$ and $(x \neq y) \not \in \Theta$ and $z$ is the least element in $\mathsf{Fr}(X;\Theta \vdash \Gamma)$, then $\llbracket \mathsf{reify}(\sigma) \rrbracket $\\$ =
\llbracket \mathsf{P}_\mathsf{ma}^{x,y,z}(\mathsf{reify}(\sigma|_{\mathcal{M}_X^{\Theta,x = y}} \circ H^{-1}_{x,y,z}) , \mathsf{reify}(\sigma|_{\mathcal{M}_X^{\Theta,x \neq y}})) \rrbracket $\\$ =
[\llbracket \mathsf{reify}(\sigma|_{\mathcal{M}_X^{\Theta,x = y}} \circ H^{-1}_{x,y,z}) \rrbracket H_{x,y,z}, \llbracket \mathsf{reify}(\sigma|_{\mathcal{M}_X^{\Theta,x \neq y}}) \rrbracket ]_{x = y , x \neq y} $\\$ =
[\sigma|_{\mathcal{M}_X^{\Theta,x = y}} \circ H^{-1}_{x,y,z} \circ H_{x,y,z} , \sigma|_{\mathcal{M}_X^{\Theta,x \neq y}}]_{x = y , x \neq y} $\\$ =
[\sigma|_{\mathcal{M}_X^{\Theta,x = y}} , \sigma|_{\mathcal{M}_X^{\Theta,x \neq y}}]_{x = y , x \neq y} = \sigma$.
\item If $\Theta$ is lean and $\Gamma = \phi(\overrightarrow{x}) , \Gamma'$ then $$\llbracket \mathsf{reify}(\sigma) \rrbracket = \llbracket \mathsf{P}_\mathsf{at-}(\sigma|_{\mathcal{M}_X^{\Theta,\overline{\phi}(\overrightarrow{x})}}) \rrbracket = [ \sigma|_{\mathcal{M}_X^{\Theta,\overline{\phi}(\overrightarrow{x})}} , \epsilon]_{\overline{\phi}(\overrightarrow{x}),\phi(\overrightarrow{x})} = \sigma$$ as we must have $\sigma|_{\mathcal{M}_X^{\Theta,\phi(\overrightarrow{x})}} = \epsilon$ since $\llbracket \phi(\overrightarrow{x}) , \Gamma \rrbracket_A$ is the terminal object for each $A$ in $\mathcal{M}_X^{\Theta , \phi(\overrightarrow{x})}$. \item If $\Gamma = \forall x . N , \Gamma'$ then $\llbracket \mathsf{reify}(\sigma) \rrbracket = \llbracket \mathsf{P}_\forall(\mathsf{reify}(\eta \circ U'_x(\mathsf{dist}_{\Gamma'} \circ \sigma))) \rrbracket = $\\$ \mathsf{dist}_{\Gamma'}^{-1} \circ \widehat{\llbracket \mathsf{reify}( \eta \circ U'_x(\mathsf{dist}_{\Gamma'} \circ \sigma)) \rrbracket} = \mathsf{dist}_{\Gamma'}^{-1} \circ \widehat{ (\eta \circ U'_x(\mathsf{dist}_{\Gamma'} \circ \sigma))} = \mathsf{dist}_{\Gamma'}^{-1} \circ \mathsf{dist}_{\Gamma'} \circ \sigma = \sigma$ as required. \item If $\Gamma = P_1 \bindnasrepma P_2, \Delta$ then $\llbracket \mathsf{reify}_{P_1
\bindnasrepma P_2,\Delta}(\sigma) \rrbracket = \llbracket [ {\mathsf{P}_\bindnasrepma}_1
\circ \mathsf{reify}_{P_1,P_2,\Delta} , {\mathsf{P}_\bindnasrepma}_2 \circ
\mathsf{reify}_{P_2,P_1,\Delta} ] \circ \mathsf{d}^{-1}(\sigma \circ
\llbracket \Delta \rrbracket^+(\mathsf{dec}^{-1}) \circ \mathsf{dist}_{+,\Delta}^{-1})
\rrbracket$. Suppose $\mathsf{d}^{-1} (\sigma \circ \llbracket
\Delta \rrbracket^+(\mathsf{dec}^{-1}) \circ \mathsf{dist}_{+,\Delta}^{-1}) =
\mathsf{in}_i(\tau)$, so $\tau \circ \pi_i = \sigma \circ \llbracket \Delta
\rrbracket^+(\mathsf{dec}^{-1}) \circ \mathsf{dist}_{+,\Delta}^{-1}$.
If $i = 1$ then $\llbracket [ {\mathsf{P}_\bindnasrepma}_1 \circ
\mathsf{reify}_{P_1,P_2,\Delta} , {\mathsf{P}_\bindnasrepma}_2 \circ \mathsf{reify}_{P_2,P_1,\Delta} ]
\circ \mathsf{d}^{-1}(\sigma \circ \llbracket \Delta
\rrbracket^+(\mathsf{dec}) \circ \mathsf{dist}_{+,\Delta}^{-1}) \rrbracket =
\llbracket {\mathsf{P}_\bindnasrepma}_1 ( \mathsf{reify}_{P_1,P_2,\Delta}(\tau)) \rrbracket =
\llbracket \mathsf{reify}_{P_1,P_2,\Delta}(\tau) \rrbracket \circ \llbracket
\Delta \rrbracket^+(\mathsf{wk}) =$ \\$\llbracket \mathsf{reify}_{P_1,P_2,\Delta}(\tau)
\rrbracket \circ \llbracket \Delta \rrbracket^+(\pi_1 \circ \mathsf{dec}) =
\tau \circ \llbracket \Delta \rrbracket^+(\pi_1 \circ \mathsf{dec}) = \tau
\circ \pi_1 \circ \mathsf{dist}_{+,\Delta} \circ \llbracket \Delta
\rrbracket^+(\mathsf{dec}) = \sigma \circ \llbracket \Delta
\rrbracket^+(\mathsf{dec}^{-1}) \circ \mathsf{dist}_{+,\Delta}^{-1} \circ
\mathsf{dist}_{+,\Delta} \circ \llbracket \Delta \rrbracket^+(\mathsf{dec}) =
\sigma$.
The case for $i = 2$ is similar. \qed \end{itemize} \end{proof}
\begin{lemma}
For any analytic proof $p$ of $\vdash \Gamma$ we have $\mathsf{reify}_\Gamma
(\llbracket p \rrbracket) = p$.
\label{refunique} \end{lemma} \begin{proof}
We proceed by induction on $p$. The calculation is routine; we
demonstrate only a few cases.
\begin{itemize} \item If $p = \mathsf{P}_\bot^+(p')$ with $\Gamma = \bot , P$ then $\mathsf{reify}_\Gamma
(\llbracket p \rrbracket) = \mathsf{P}_\bot^+ (\mathsf{reify}_P (\Lambda_I^{-1}
(\llbracket p \rrbracket))) = $\\$ \mathsf{P}_\bot^+ (\mathsf{reify}_P
(\Lambda_I^{-1}\Lambda_I\llbracket p' \rrbracket)) = \mathsf{P}_\bot^+(
\mathsf{reify}_P(\llbracket p' \rrbracket)) = \mathsf{P}_\bot^+(p') = p$. \item If $p = \mathsf{P}_\& (p_1, p_2)$ with $\Gamma = M \& N , \Delta$ then
$\mathsf{reify}_\Gamma(\llbracket p \rrbracket) $\\$ = \mathsf{P}_\&
(\mathsf{reify}_{M,\Delta}(\pi_1 \circ \mathsf{dist}_{-,\Delta} \circ \llbracket p
\rrbracket), \mathsf{reify}_{N,\Delta}(\pi_2 \circ \mathsf{dist}_{-,\Delta} \circ
\llbracket p \rrbracket)) $\\$ = \mathsf{P}_\& (\mathsf{reify}_{M,\Delta}(\pi_1 \circ
\mathsf{dist}_{-,\Delta} \circ \mathsf{dist}_{-,\Delta}^{-1} \circ \langle
\llbracket p_1 \rrbracket, \llbracket p_2 \rrbracket \rangle),
\mathsf{reify}_{N,\Delta}(\pi_2 \circ \mathsf{dist}_{-,\Delta} \circ
\mathsf{dist}_{-,\Delta}^{-1} \circ \langle \llbracket p_1 \rrbracket,
\llbracket p_2 \rrbracket \rangle)) $\\$ = \mathsf{P}_\&
(\mathsf{reify}_{M,\Delta}(\llbracket p_1 \rrbracket),
\mathsf{reify}_{N,\Delta}(\llbracket p_2 \rrbracket)) $\\$ = \mathsf{P}_\& (p_1, p_2) = p$. \item If $p = {\mathsf{P}_\bindnasrepma}_1 (p')$ with $\Gamma = P_1 \bindnasrepma P_2 , \Delta$
then $\mathsf{reify}_\Gamma(\llbracket p \rrbracket) =
\mathsf{reify}_\Gamma(\llbracket p' \rrbracket \circ \llbracket \Delta
\rrbracket^+(\mathsf{wk})) = [{\mathsf{P}_\bindnasrepma}_1 \circ \mathsf{reify}_{P_1,P_2,\Delta} ,
{\mathsf{P}_\bindnasrepma}_2 \circ \mathsf{reify}_{P_2,P_1,\Delta}] \circ
\mathsf{d}^{-1}(\llbracket p' \rrbracket \circ \llbracket \Delta
\rrbracket^+(\mathsf{wk}) \circ \llbracket \Delta \rrbracket^+(\mathsf{dec}^{-1})
\circ \mathsf{dist}_{+,\Delta}^{-1}) = [{\mathsf{P}_\bindnasrepma}_1 \circ
\mathsf{reify}_{P_1,P_2,\Delta} , {\mathsf{P}_\bindnasrepma}_2 \circ \mathsf{reify}_{P_2,P_1,\Delta}]
\circ \mathsf{d}^{-1}(\llbracket p' \rrbracket \circ \llbracket
\Delta \rrbracket^+(\pi_1) \circ \mathsf{dist}_{+,\Delta}^{-1}) =
[{\mathsf{P}_\bindnasrepma}_1 \circ \mathsf{reify}_{P_1,P_2,\Delta} , {\mathsf{P}_\bindnasrepma}_2 \circ
\mathsf{reify}_{P_2,P_1,\Delta}] \circ \mathsf{d}^{-1}(\llbracket p'
\rrbracket \circ \pi_1) = [{\mathsf{P}_\bindnasrepma}_1 \circ \mathsf{reify}_{P_1,P_2,\Delta}
, {\mathsf{P}_\bindnasrepma}_2 \circ \mathsf{reify}_{P_2,P_1,\Delta}] \circ
\mathsf{d}^{-1}(\mathsf{d}(\mathsf{in}_1(\llbracket p' \rrbracket))) =
{\mathsf{P}_\bindnasrepma}_1(\mathsf{reify}_{P_1,P_2,\Delta}(\llbracket p' \rrbracket)) =
{\mathsf{P}_\bindnasrepma}_1(p') = p$ as required. \qed \end{itemize} \end{proof}
\noindent This completes our proof of Theorem \ref{fullcomp}.
\subsection{Infinitary Analytic Proofs}
\label{ws!-infcore}
We have seen that any bounded winning strategy is the denotation of a unique analytic proof of \textsf{WS1}. We cannot use this to normalise proofs to their analytic form
because proofs do not necessarily denote bounded strategies. We will next show that our reification procedure can be extended to winning strategies that may be unbounded, provided the resulting analytic proofs are allowed to be \emph{infinitary} --- that is, proofs using the core rules that may be infinitely deep. More precisely, we will show that \emph{total} strategies on a type object correspond precisely to the infinitary analytic proofs. Thus we can normalise any proof of \textsf{WS1} to an infinitary normal form, by taking its semantics and then constructing the corresponding infinitary analytic proof. Two proofs of \textsf{WS1} are semantically equivalent if and only if they have the same normal form as an infinitary analytic proof.
\subsubsection{Infinitary Proofs as a Final Coalgebra}
Let $L$ be a set. Let $\mathcal{T}_L$ denote the final coalgebra of the functor $X \mapsto L \times X^\ast$ in \textbf{Set}. The inhabitants of $\mathcal{T}_L$ are $L$-labelled trees of potentially infinite depth. We let $\alpha : \mathcal{T}_L \rightarrow L \times \mathcal{T}_L^\ast$ describe the arrow part of this final coalgebra: this maps a tree to its label and sequence of subtrees. Given a natural number $n$, we define a function $N_n : \mathcal{T}_L \rightarrow \mathcal{P}(L \times \mathcal{T}_L^\ast)$, by induction: $N_0(T) = \emptyset$ and $N_{n+1}(T) = \{ \alpha(T) \} \uplus \bigcup \{ N_n(T') : T' \in \pi_2(\alpha(T)) \}$. We define the set of nodes $N(T)$ to be $\bigcup \{ N_n(T) : n \in \mathbb{N} \}$. Let $\mathsf{Prf}$ be the set of (names of) proof rules of \textsf{WS1} and $\mathsf{Seq}$ the set of sequents of \textsf{WS1}.
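As a concrete, if informal, illustration (not part of the formal development), such potentially infinite labelled trees and the node sets $N_n$ can be sketched in Python by delaying the computation of subtrees; the names \texttt{Tree}, \texttt{unfold} and \texttt{nodes\_up\_to} below are our own.
\begin{verbatim}
# A node of T_L: a label together with a thunk for its list of subtrees.
# Delaying the subtrees lets a tree be infinitely deep, mirroring the
# final coalgebra of X |-> L x X*.
class Tree:
    def __init__(self, label, subtrees=lambda: []):
        self.label = label
        self._subtrees = subtrees

    def alpha(self):
        # The coalgebra map alpha : T_L -> L x T_L*.
        return self.label, self._subtrees()

def unfold(step, seed):
    # Anamorphism: the map induced by step : A -> L x A*,
    # where step returns a label and a list of new seeds.
    label, seeds = step(seed)
    return Tree(label, lambda: [unfold(step, s) for s in seeds])

def nodes_up_to(tree, n):
    # N_n(T): each node reachable within n unfoldings,
    # recorded here as its label together with its number of children.
    if n == 0:
        return []
    label, subtrees = tree.alpha()
    return [(label, len(subtrees))] + \
           [x for t in subtrees for x in nodes_up_to(t, n - 1)]

# Example: the infinite unary tree labelled by the natural numbers.
nats = unfold(lambda k: (k, [k + 1]), 0)
print([lbl for lbl, _ in nodes_up_to(nats, 3)])   # [0, 1, 2]
\end{verbatim}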
\begin{definition}
An \emph{infinitary analytic proof} of \textsf{WS1} is an infinitary proof using only the core rules of \textsf{WS1}. Formally, this is an element $T$ of $\mathcal{I} = \mathcal{T}_{\mathsf{Prf} \times \mathsf{Seq}}$ such that for each node $((\mathsf{P}_x,X;\Theta\vdash \Gamma),c) \in N(T)$ we have $|c| = \mathsf{ar}(\mathsf{P}_x)$ and if $(\pi_2 \circ \pi_1 \circ \alpha)(c_i) = X_i;\Theta_i \vdash \Gamma_i$ then the following is a valid core rule of \textsf{WS1}:
\begin{prooftree} \AxiomC{$X_1;\Theta_1 \vdash \Gamma_1$} \AxiomC{$\ldots$}
\AxiomC{$X_{|c|};\Theta_{|c|} \vdash \Gamma_{|c|}$} \LeftLabel{$\mathsf{P}_x$} \TrinaryInfC{$X;\Theta \vdash \Gamma$} \end{prooftree}
\noindent We let $\mathcal{I}_\Gamma$ denote the set of infinitary analytic proofs of $\vdash \Gamma$. \end{definition}
Let $\{ A_{X;\Theta \vdash \Gamma} : X;\Theta \vdash \Gamma \in \mathsf{Seq} \}$ be a family of sets indexed by sequents. We can construct a family of maps $A_{X;\Theta \vdash \Gamma} \rightarrow \mathcal{I}_{X;\Theta
\vdash \Gamma}$ by giving, for each $X;\Theta \vdash \Gamma$ and $a \in A_{X;\Theta \vdash \Gamma}$, a proof rule that concludes $X;\Theta \vdash \Gamma$ from $X_1;\Theta_1 \vdash \Gamma_1$, \ldots , $X_n;\Theta_n \vdash \Gamma_n$ and for each $i$ an element $a_i \in A_{X_i;\Theta_i \vdash \Gamma_i}$.
\subsubsection{Infinitary Proofs as a Limit of Paraproofs}
We can consider an alternative approach for presenting our infinitary analytic proofs. We consider partial proofs, that may ``give up'' in the style of \cite{Gir_LS}.
\begin{definition} An \emph{analytic paraproof} of \textsf{WS1} is a proof made up of the core proof rules of \textsf{WS1}, together with a \emph{d\ae mon} rule that can prove any sequent: \begin{center}{\Large $\over \Phi \vdash \Gamma$}\mbox{ $\mathsf{P}_\epsilon$} \end{center} \end{definition}
Note that each analytic proof is also an analytic paraproof. Let $\mathcal{C}_\Gamma$ represent the set of analytic paraproofs of $\vdash \Gamma$. We can introduce an ordering $\sqsubseteq$ on this set, generated from the least congruence with $\mathsf{P}_\epsilon$ as a bottom element. We can take the completion of $\mathcal{C}_\Gamma$ with respect to $\omega$-chains generating an algebraic cpo $\mathcal{D}_\Gamma$. The maximal elements in this domain are precisely the infinitary analytic proofs $\mathcal{I}_\Gamma$, and the compact elements are the analytic paraproofs $\mathcal{C}_\Gamma$.
\subsubsection{Semantics of Infinitary Analytic Proofs}
We next describe semantics of infinitary analytic proofs via the semantics of analytic paraproofs.
We can interpret analytic paraproofs as partial strategies. We interpret paraproofs of $X;\Theta \vdash \Gamma$ in $\mathcal{G}^{\mathcal{M}_X^\Theta}$. For the rules other than $\mathsf{P}_\epsilon$, we use the fact that $\mathcal{G}^{\mathcal{M}_X^\Theta}$ is a WS!-category. We interpret $\mathsf{P}_\epsilon$ as the strategy $\{ \epsilon \}$ where $\epsilon$ denotes the empty play on any game. We can hence interpret an analytic paraproof of $\vdash \Gamma$ as a strategy on $\llbracket \vdash \Gamma \rrbracket$.
The category $\mathcal{G}^{\mathcal{M}_X^\Theta}$ is cpo-enriched, with $\sigma \sqsubseteq \tau$ if for each $A$, $\sigma_A \subseteq \tau_A$ as a set of plays. The bottom element is the uniform strategy that is $\{ \epsilon \}$ at each component. Composition, pairing and currying are continuous maps of hom sets; as are the operations used in the first-order structure.
\begin{proposition} If $p$ and $q$ are analytic paraproofs of $\vdash \Gamma$ and $p \sqsubseteq q$ then $\llbracket p \rrbracket \sqsubseteq \llbracket q \rrbracket$. \end{proposition} \begin{proof} A simple induction on the proof rules for \textsf{WS1}, using the fact that composition, pairing and currying are monotonic operations. Note that $\llbracket - \rrbracket$ is also strict, as $\llbracket \mathsf{P}_\epsilon \rrbracket = \{ \epsilon \}$. \qed \end{proof}
Hom sets of $\mathcal{G}^{\mathcal{M}_X^\Theta}$ are algebraic domains: each strategy is the limit of its compact (finite) approximants. Our monotonic map $\mathcal{C}_\Gamma \rightarrow \llbracket X;\Theta \vdash \Gamma \rrbracket$ thus extends uniquely to a continuous map $\mathcal{D}_\Gamma \rightarrow \llbracket X;\Theta \vdash \Gamma \rrbracket$. By construction this agrees with the semantics given above for analytic paraproofs in $\mathcal{D}_\Gamma$. Given any infinitary analytic proof $p$ if $p\downarrow$ is the set of analytic paraproofs less than $p$ then $\llbracket p \rrbracket = \bigsqcup \llbracket p\downarrow \rrbracket$ using the cpo structure in $\mathcal{G}^{\mathcal{M}_X^\Theta}$.
We can show that this really does capture the intended semantics of infinitary analytic proofs.
\begin{proposition}
The equations for the semantics of analytic proofs given in Figures
\ref{WS-sem}, \ref{WS!-sem} and \ref{WS1-sem} hold for infinitary
analytic proofs. \label{infsemantics} \end{proposition} \begin{proof} We use the fact that the constructs used in the semantics of the core proof rules are continuous. We proceed by case analysis on the proof rule.
We just give an example. In the case of $\mathsf{P}_\otimes$, note that $\llbracket \mathsf{P}_\otimes(p,q) \rrbracket = \bigsqcup \{ \llbracket r \rrbracket : r \sqsubseteq \mathsf{P}_\otimes(p,q) \} = \bigsqcup \{ \llbracket \mathsf{P}_\otimes(p', q') \rrbracket : p' \sqsubseteq p \wedge q' \sqsubseteq q \} = \bigsqcup \{ \llbracket \Gamma \rrbracket^-(\mathsf{dec}^{-1}) \circ \mathsf{dist}^{-1}_{-,\Gamma} \circ \langle \llbracket p' \rrbracket , \llbracket q' \rrbracket \rangle : p' \sqsubseteq p \wedge q' \sqsubseteq q \} = \llbracket \Gamma \rrbracket^-(\mathsf{dec}^{-1}) \circ \mathsf{dist}^{-1}_{-,\Gamma} \circ \langle \llbracket \bigsqcup \{ p' : p' \sqsubseteq p \} \rrbracket , \llbracket \bigsqcup \{ q' : q' \sqsubseteq q \} \rrbracket \rangle = \llbracket \Gamma \rrbracket^-(\mathsf{dec}^{-1}) \circ \mathsf{dist}^{-1}_{-,\Gamma} \circ \langle \llbracket p \rrbracket , \llbracket q \rrbracket \rangle$ as required. All other cases are similar. \qed \end{proof}
\subsubsection{Totality}
We need to show that given $p \in \mathcal{I}_\Gamma$, $\llbracket p \rrbracket$ is a total uniform strategy. Note that this is not true of arbitrary paraproofs in $\mathcal{D}_\Gamma$, nor is it true for infinite derivations in full \textsf{WS1} (for example, one could repeatedly apply the $\mathsf{P}_\mathsf{sym}$ rules forever).
To show this fact, we first introduce some auxiliary notions.
\begin{definition}
Let $\sigma : N$ be a strategy on a negative game. We say that
$\sigma$ is $n$-total if whenever $s \in \sigma \wedge |s| \leq n
\wedge so \in P_N \Rightarrow \exists p . sop \in \sigma$. A uniform
strategy is $n$-total if it is pointwise $n$-total. \end{definition}
\noindent It is clear that a strategy is total if and only if it is $n$-total for each $n$.
\begin{proposition} \label{ntotal} The following hold: \begin{enumerate} \item If $\sigma$ is $n$-total and $\tau$ is an isomorphism then $\tau
\circ \sigma$ is $n$-total. If $\sigma$ is $n$-total and $\tau$ is
an isomorphism then $\sigma \circ \tau$ is $n$-total. \item If $\sigma : A \rightarrow B$ and $\tau : A \rightarrow C$ are
$n$-total then $\langle \sigma, \tau \rangle$ is also $n$-total. If
$\sigma : A_i \multimap B$ is $n$-total then $\sigma \circ \pi_i :
A_1 \times A_2 \multimap B$ is $n$-total.
\item If $\sigma : A \otimes B \multimap C$ is $n$-total then
$\Lambda(\sigma)$ is $n$-total. If $\sigma: A \multimap B$ is
$n$-total then $\sigma \multimap \mathsf{id} : (B \multimap o) \multimap (A
\multimap o)$ is $(n+2)$-total. \item If $\sigma$ and $\tau$ are $n$-total, then so are $[ \sigma , \tau ]_{\mathcal{C},\mathcal{D}}$ and $\sigma \circ H$. If $\sigma$ is
$n$-total, then so is $\hat{\sigma}$. \end{enumerate} \end{proposition} \begin{proof} Simple verification. \qed \end{proof}
\begin{proposition} Given any infinitary analytic proof $p$ of $X ; \Theta \vdash \Gamma$, $\llbracket p \rrbracket$ is total. \label{inftotal} \end{proposition} \begin{proof} We show that $\llbracket p \rrbracket$ is $n$-total for each $n$. We proceed by induction on a compound measure. \begin{itemize} \item Define $\mathsf{tl}^+(A,\Gamma)$ to be the length of $\Gamma$ as a list if $A = \top$ or $\infty$ otherwise.
\item Define $\mathsf{hd}^+(A,\Gamma)$ to be $|A|$ if $A$ is positive or $\infty$ otherwise. \item Define $\mathsf{tl}^-(A,\Gamma)$ to be the length of $\Gamma$ as a list if $A = \bot$ or $\infty$ otherwise.
\item Define $\mathsf{hd}^-(A,\Gamma)$ to be $|A|$ if $A$ is negative or $\infty$ otherwise. \end{itemize} We proceed by induction on $$f(n,X,\Theta,\Gamma) = \langle n , \mathsf{tl}^+(\Gamma), \mathsf{hd}^+(\Gamma), \mathsf{tl}^-(\Gamma), \mathsf{hd}^-(\Gamma), \mathsf{L}(X,\Theta) \rangle.$$ We proceed by case analysis on $p$. If $p = \mathsf{P}_\otimes(p_1,p_2)$ then $\llbracket p \rrbracket = \llbracket \Gamma \rrbracket^-(\mathsf{dec}^{-1}) \circ \mathsf{dist}^{-1}_{-,\Gamma} \circ \langle \llbracket p_1 \rrbracket , \llbracket p_2 \rrbracket \rangle$. By the inductive hypothesis $\llbracket p_1 \rrbracket$ and $\llbracket p_2 \rrbracket$ are $n$-total, so by Proposition \ref{ntotal} $\llbracket p \rrbracket$ is $n$-total.
The remaining cases work in an entirely analogous way. For $\mathsf{P}_\bot^+$ we must use the fact that currying is continuous and preserves $n$-totality. For termination: \begin{itemize} \item If $\Theta$ is not lean, in the call to the inductive hypothesis
the first five measures do not increase, and the sixth measure
$\mathsf{L}$ decreases. \item In the case of $\mathsf{P}_\otimes$, $\mathsf{P}_\&$, $\mathsf{P}_!$ the first three measures ($n , \mathsf{tl}^+(\Gamma), \mathsf{hd}^+(\Gamma)$) stay the same and either the fourth measure $\mathsf{tl}^-(\Gamma)$ decreases, or the fourth measure stays the same and the fifth measure $\mathsf{hd}^-(\Gamma)$ decreases. \item In the case of $\mathsf{P}_\bot^\bindnasrepma$, $\mathsf{P}_\bot^\oslash$, $\mathsf{P}_\bot^-$ the first three measures stay the same and the fourth measure decreases. \item In the cases of $\mathsf{P}_\bot^+$, $\mathsf{P}_\bindnasrepma$, $\mathsf{P}_\oplus$, $\mathsf{P}_?$ the first measure $n$ stays the same and either the second measure $\mathsf{tl}^+(\Gamma)$ decreases, or the second measure stays the same and the third measure $\mathsf{hd}^+(\Gamma)$ decreases. \item In the case of $\mathsf{P}_\top^\otimes$, $\mathsf{P}_\top^\lhd$, $\mathsf{P}_\top^+$ the first measure stays the same and the second measure decreases. \item In the case of $\mathsf{P}_\top^-$, the first measure decreases. In particular, $\llbracket \mathsf{P}_\top^-(q) \rrbracket = \mathsf{unit}_\multimap \circ (\llbracket q \rrbracket \multimap \mathsf{id})$. By induction $\llbracket q \rrbracket$ is $(n-2)$-total, and so $\llbracket q \rrbracket \multimap \mathsf{id}$ is $n$-total, and so $\llbracket p \rrbracket$ is $n$-total by Proposition \ref{ntotal}. \qed \end{itemize} \end{proof}
\noindent Note that there are infinitary analytic proofs that denote strategies that are total, but not winning. For example, there is an infinitary analytic proof of $\vdash \bot , ?(\top \lhd \bot)$ given by $\mathsf{P}_\bot^+(h)$ where $h$ is the infinitary analytic proof of $\vdash ?(\top \lhd \bot)$ given by $h = \mathsf{P}_?(\mathsf{P}_\lhd(\mathsf{P}_\top^\lhd(\mathsf{P}_\top^-(\mathsf{P}_\lhd(\mathsf{P}_\bot^+(h))))))$. But there are no winning strategies on this game.
\subsubsection{Reification of Total Strategies as Infinitary Analytic Proofs}
We next show that any total strategy $\sigma$ on the denotation of a sequent is the interpretation of a unique infinitary analytic proof $\mathsf{reify}(\sigma)$.
We first define $\mathsf{reify}$ for winning strategies. We have seen that we can construct a family of maps $A_{X;\Theta \vdash \Gamma} \rightarrow \mathcal{I}_{X;\Theta \vdash \Gamma}$ by giving, for each $X;\Theta \vdash \Gamma$ and $a \in A_{X;\Theta \vdash \Gamma}$, a proof rule that concludes $X;\Theta \vdash \Gamma$ from $X_1;\Theta_1 \vdash \Gamma_1$, \ldots , $X_n, \Theta_n \vdash \Gamma_n$ and for each $i$ an element $a_i \in A_{X_i;\Theta_i \vdash \Gamma_i}$. \begin{diagram} \sum_{X;\Theta \vdash \Gamma \in \mathsf{Seq}} A_\Gamma & \rTo^f & (\mathsf{Prf} \times \mathsf{Seq}) \times (\sum_{X;\Theta \vdash \Gamma \in \mathsf{Seq}} A_{X;\Theta \vdash \Gamma})^\ast \\ \dTo^{\leftmoon f \rightmoon} & & \dTo_{\mathsf{id} \times \leftmoon f \rightmoon^\ast} \\ \mathcal{I} & \rTo_{\alpha} & (\mathsf{Prf} \times \mathsf{Seq}) \times \mathcal{I}^\ast \\ \end{diagram}
Note that our reification function $\mathsf{reify}$ defined in Figure \ref{WS-reify} is exactly of this shape. In this case $A_{X;\Theta
\vdash \Gamma}$ is the set of uniform winning strategies on $\llbracket X;\Theta \vdash \Gamma \rrbracket$. The function specifies, for each strategy, the root-level proof rule and the derived strategies that are given as input to $\mathsf{reify}$ coinductively.
In the case that $\sigma$ is bounded, we have seen that the process terminates and $\mathsf{reify}(\sigma)$ is a finite proof.
In fact, we note that this family of maps is still well defined if $A_{X;\Theta \vdash \Gamma}$ is the set $\mathsf{Tot}_{X;\Theta \vdash
\Gamma}$ of uniform \emph{total} strategies on $\llbracket X ; \Theta \vdash \Gamma \rrbracket$. In particular, the composition of a total strategy and an isomorphism is a total strategy; the composition of a total strategy and a projection is a total strategy; and the completeness axioms in Section \ref{compaxioms} hold with respect to total strategies. This procedure provides, for each total strategy on $X;\Theta \vdash \Gamma$, a proof rule $\mathsf{P}_x$ concluding $X;\Theta \vdash \Gamma$ from $X_1;\Theta_1 \vdash \Gamma_1, \ldots , X_n;\Theta_n \vdash \Gamma_n$ and total strategies on each $\llbracket X_i;\Theta_i \vdash \Gamma_i \rrbracket$. We write this map as $\mathsf{reif}_{X;\Theta \vdash
\Gamma}$. \begin{diagram} \sum_{X;\Theta \vdash \Gamma \in \mathsf{Seq}} \mathsf{Tot}_{X;\Theta \vdash \Gamma} & \rTo^{\mathsf{reif}} & (\mathsf{Prf} \times \mathsf{Seq}) \times (\sum_{X; \Theta \vdash \Gamma \in \mathsf{Seq}} \mathsf{Tot}_{X;\Theta \vdash \Gamma})^\ast \\ \dTo^{\mathsf{reify} = \leftmoon \mathsf{reif} \rightmoon} & & \dTo_{\mathsf{id} \times \mathsf{reify}^\ast} \\ \mathcal{I} & \rTo_{\alpha} & (\mathsf{Prf} \times \mathsf{Seq}) \times \mathcal{I}^\ast \\ \end{diagram}
Thus we can take the anamorphism of this map yielding a map from total strategies on $\llbracket X;\Theta \vdash \Gamma \rrbracket$ to $\mathcal{I}_{X;\Theta \vdash \Gamma}$, as required.
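In the informal Python sketch given earlier, $\mathsf{reify}$ is precisely an unfold of this one-step map: writing \texttt{reif} for a hypothetical, unimplemented function performing the case analysis of Figure \ref{WS-reify} on a sequent/strategy pair, we would have
\begin{verbatim}
def reify(sequent_and_strategy):
    # Anamorphism of the one-step reification map: builds the
    # possibly infinitely deep analytic proof tree lazily.
    return unfold(reif, sequent_and_strategy)
\end{verbatim}
where \texttt{reif} returns the label of the root node (the core rule applied and the sequent proved) together with the list of premise sequent/strategy pairs.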
\subsubsection{Soundness and Uniqueness}
We can show that given any total strategy $\sigma$, $\mathsf{reify}(\sigma)$ is the unique infinitary analytic proof $p$ such that $\llbracket p \rrbracket = \sigma$.
For soundness, we first introduce some auxiliary notions.
\begin{definition} Let $\sigma$ and $\tau$ be strategies on $A$. We say that $\sigma =_n \tau$ if each play in $\sigma$ of length at most $n$ is in $\tau$, and each play in $\tau$ of length at most $n$ is in $\sigma$. \end{definition}
\noindent It is clear that $=_n$ is an equivalence relation, and $\sigma = \tau$ if and only if $\sigma =_n \tau$ for each $n \in \mathbb{N}$. We can lift the relation $=_n$ to uniform total strategies pointwise.
\begin{proposition} \begin{enumerate} \item If $\sigma =_n \tau$ and $\rho$ is an isomorphism then $\sigma \circ \rho =_n \tau \circ \rho$. If $\sigma =_n \tau$ and $\rho$ is an isomorphism then $\rho \circ \sigma =_n \rho \circ \tau$. \item If $\sigma =_n \tau$ and $\rho =_n \delta$ then $\langle \sigma, \rho \rangle =_n \langle \tau , \delta \rangle$. If $\sigma =_n \tau$ then $\sigma \circ \pi_i =_n \tau \circ \pi_i$. \item If $\sigma =_n \tau$ then $\Lambda(\sigma) =_n \Lambda(\tau)$. If $\sigma =_n \tau$ then $\sigma \multimap \mathsf{id} =_{n+2} \tau \multimap \mathsf{id}$.
\item If $\sigma_1 =_n \sigma_2$ and $\tau_1 =_n \tau_2$ then $[ \sigma_1 , \tau_1 ]_{\mathcal{C},\mathcal{D}} =_n [ \sigma_2 , \tau_2 ]_{\mathcal{C},\mathcal{D}}$. If $\sigma_1 =_n \sigma_2$ then $\sigma_1 \circ H =_n \sigma_2 \circ H$. If $\sigma_1 =_n \sigma_2$ then $\hat{\sigma_1} =_n \hat{\sigma_2}$. \end{enumerate} \label{equaln} \end{proposition}
\begin{proof} Simple verification. \qed \end{proof}
\begin{proposition}
For every uniform total strategy $\sigma : \llbracket \vdash
\Gamma \rrbracket$, $\llbracket \mathsf{reify}(\sigma) \rrbracket =
\sigma$. \label{refsound-ws!-inf} \end{proposition} \begin{proof} We show that for each $n$, $\llbracket \mathsf{reify}(\sigma)
\rrbracket =_n \sigma$. The structure of the induction follows that
of Proposition \ref{inftotal}, lexicographically on $$\langle n ,
\mathsf{tl}^+(\Gamma), \mathsf{hd}^+(\Gamma), \mathsf{tl}^-(\Gamma),
\mathsf{hd}^-(\Gamma), \mathsf{L}(X,\Theta) \rangle.$$ In each
particular case, the reasoning follows the proof of Proposition
\ref{refsound} using $=_n$ in the inductive hypothesis rather than
$=$, and propagating this to the main equation using Proposition
\ref{equaln}. In the case of $\Gamma = \top , N$ we use the
inductive hypothesis with a smaller $n$, using the final clause in
Proposition \ref{equaln}. \qed \end{proof}
\begin{proposition} Given any infinitary analytic proof $p$, $\mathsf{reify}(\llbracket p \rrbracket) = p$. \label{refunique-ws!-inf} \end{proposition} \begin{proof} Since $\mathsf{id} = \leftmoon \alpha \rightmoon$, we know that $\mathsf{id}$ is the unique morphism $f$ such that: \begin{diagram} \mathcal{I}_\Gamma & \rTo^{\alpha} & (\mathsf{Prf} \times \mathsf{Seq}) \times \mathcal{I}^\ast \\ \dTo^{f} & & \dTo_{\mathsf{id} \times f^\ast} \\ \mathcal{I}_\Gamma & \rTo^{\alpha} & (\mathsf{Prf} \times \mathsf{Seq}) \times \mathcal{I}^\ast \\ \end{diagram}
Thus to show that $\mathsf{reify} \circ \llbracket - \rrbracket = \mathsf{id}$ it is sufficient to show that $\alpha \circ \mathsf{reify} \circ \llbracket - \rrbracket = \mathsf{id} \times (\mathsf{reify} \circ \llbracket - \rrbracket)^\ast \circ \alpha$, i.e. that for each infinitary analytic proof $p$ we have $\alpha(\mathsf{reify}(\llbracket p \rrbracket)) = (\mathsf{id} \times (\mathsf{reify} \circ \llbracket - \rrbracket)^\ast)(\alpha(p))$. \begin{itemize} \item For binary rules $\mathsf{P}_x$ we must show that $$\mathsf{reify}(\llbracket \mathsf{P}_x(p_1,p_2) \rrbracket) = \mathsf{P}_x(\mathsf{reify}(\llbracket p_1 \rrbracket), \mathsf{reify}(\llbracket p_2 \rrbracket)).$$ \item For unary rules $\mathsf{P}_x$ we must show that $\mathsf{reify}(\llbracket \mathsf{P}_x(p)) \rrbracket = \mathsf{P}_x(\mathsf{reify}(\llbracket p \rrbracket))$. \item For nullary rules $\mathsf{P}_x$ we must show that $\mathsf{reify}(\llbracket \mathsf{P}_x \rrbracket) = \mathsf{P}_x$. \end{itemize} For each proof rule, we have already shown this in the proof of Proposition \ref{refunique}. Proposition \ref{infsemantics} ensures that the proof applies in this setting. \qed \end{proof}
\subsubsection{Full Completeness and Normalisation}
We have thus shown:
\begin{theorem}
Each total strategy $\sigma$ on $\vdash \Gamma$ is the denotation of
a unique infinitary analytic proof $\mathsf{reify}(\sigma)$. \label{infrefok} \end{theorem}
\noindent We hence have a bijection between infinitary analytic proofs of a formula, and total strategies on the denotation of that formula, via the semantics. Since any proof in \textsf{WS1} can be given semantics as a winning strategy, and winning strategies are total, we may $\mathsf{reify}$ the semantics of a \textsf{WS1} proof to generate its infinitary normal form $\mathsf{reify}(\llbracket p \rrbracket)$.
\begin{theorem}
For each \textsf{WS1} proof $p$, there is a unique infinitary
analytic proof $q$ such that $\llbracket p \rrbracket = \llbracket q
\rrbracket$. \label{proofnorm} \end{theorem} \begin{proof}
Let $q = \mathsf{reify}(\llbracket p \rrbracket)$. Then $\llbracket q
\rrbracket = \llbracket \mathsf{reify}(\llbracket p \rrbracket) \rrbracket = \llbracket
p \rrbracket$ by Proposition \ref{refsound-ws!-inf}. If $q'$ is an
infinitary analytic proof with $\llbracket q' \rrbracket =
\llbracket p \rrbracket$ then $\llbracket q' \rrbracket = \llbracket
q \rrbracket$ and so
$\mathsf{reify}(\llbracket q' \rrbracket) = \mathsf{reify}(\llbracket q \rrbracket)$
and Proposition \ref{refunique-ws!-inf} ensures that $q' = q$. \qed \end{proof}
\noindent While infinitary analytic proofs may denote strategies that are not winning, any infinitary analytic proof generated as a result of the above normalisation denotes a winning strategy. The above result also ensures that proofs $p_1$ and $p_2$ in \textsf{WS1} denote the same strategy if and only if their normal forms (as infinitary analytic proofs) are identical.
\section{Further Directions}
In this paper, we have given some simple examples of ``stateful proofs''. We aim to investigate further examples in more expressive logics, and to specify additional properties of programs
in more powerful programming languages (such as the games-based language in e.g. \cite{Long_PLGM}). Further extensions to our work which may be required in order to do so include: \begin{itemize} \item \textsf{WS1} has been presented as a general first-order logic. By adding axioms, we may specify and study programs in particular domains. For example, can we derive a version of Peano Arithmetic in which proofs have constructive, stateful content (cf \cite{coq95})?
\item Extension with \emph{propositional variables} (and potentially, second-order quantification) would allow generic ``copycat strategies'' to be captured. On the programming side, this would allow us to model languages with polymorphism. \item We have interpreted the exponentials as greatest fixpoints. Adding general inductive and coinductive types, as in $\mu L J$ \cite{Cla_FIX} would extend \textsf{WS1} to a rich collection of datatypes (including finite and infinite lists, for example). \end{itemize}
\paragraph{Acknowledgements} The authors would like to thank Pierre-Louis Curien, Alessio Guglielmi, Pierre Clairambault and anonymous reviewers for earlier comments on this work. This work was supported by the (UK) EPSRC grant EP/HO23097.
\end{document}
Fraction (mathematics)
[Figure: A cake with one quarter (one fourth) removed. The remaining three fourths are shown. Dotted lines indicate where the cake may be cut in order to divide it into equal parts. Each fourth of the cake is denoted by the fraction ¼.]
A fraction (from Latin fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction (examples: 1/2 and 17/3) consists of an integer numerator, displayed above a line (or before a slash), and a non-zero integer denominator, displayed below (or after) that line. Numerators and denominators are also used in fractions that are not common, including compound fractions, complex fractions, and mixed numerals.
The numerator represents a number of equal parts, and the denominator, which cannot be zero, indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole. The picture above illustrates 3/4 of a cake.
Fractional numbers can also be written without using explicit numerators or denominators, by using decimals, percent signs, or negative exponents (as in 0.01, 1%, and 10⁻² respectively, all of which are equivalent to 1/100). An integer such as the number 7 can be thought of as having an implicit denominator of one: 7 equals 7/1.
Other uses for fractions are to represent ratios and to represent division.[1] Thus the fraction 3/4 is also used to represent the ratio 3:4 (the ratio of the part to the whole) and the division 3 ÷ 4 (three divided by four).
In mathematics the set of all numbers which can be expressed in the form a/b, where a and b are integers and b is not zero, is called the set of rational numbers and is represented by the symbol Q, which stands for quotient. The test for a number being a rational number is that it can be written in that form (i.e., as a common fraction). However, the word fraction is also used to describe mathematical expressions that are not rational numbers, for example algebraic fractions (quotients of algebraic expressions), and expressions that contain irrational numbers, such as √2/2 (see square root of 2) and π/4 (see proof that π is irrational).
Vocabulary

When reading fractions it is customary in English to pronounce the denominator using the corresponding ordinal number, in plural if the numerator is not one, as in "fifths" for fractions with a 5 in the denominator. Thus, 3/5 is rendered as three fifths and 5/32 as five thirty-seconds. This generally applies to whole number denominators greater than 2, though large denominators that are not powers of ten are often rendered using the cardinal number. Thus, 5/123 might be rendered as "five one-hundred-twenty-thirds", but is often "five over one hundred twenty-three". In contrast, because one million is a power of ten, 6/1,000,000 is usually expressed as "six millionths" or "six one-millionths", rather than as "six over one million".
The denominators 1, 2, and 4 are special cases. The fraction 3/1 may be spoken of as three wholes. The denominator 2 is expressed as half (plural halves); "−3/2" is minus three-halves or negative three-halves. The fraction 3/4 may be either "three fourths" or "three quarters". Furthermore, since most fractions in prose function as adjectives, the fractional modifier is hyphenated. This is evident in standard prose in which one might write about "every two-tenths of a mile", "the quarter-mile run", or the Three-Fifths Compromise. When the fraction's numerator is 1, then the word one may be omitted, such as "every tenth of a second" or "during the final quarter of the year".
In the examples 2/5 and 7/3, the slanting line is called a solidus or forward slash. When a fraction is instead written with the numerator directly above the denominator, the horizontal line between them is called a vinculum or, informally, a "fraction bar". When the solidus is encountered in a fraction, a speaker will sometimes parse it by pronouncing it "over", as in the examples above.
Forms of fractions
Simple, common, or vulgar fractions
A simple fraction (also known as a common fraction or vulgar fraction) is a rational number written as a/b (with a slash) or with the numerator a stacked over the denominator b, where a and b are both integers.[2] As with other fractions, the denominator (b) cannot be zero. Examples include 1/2, −8/5, (−8)/5, 8/(−5), and 3/17. Simple fractions can be positive or negative, proper, or improper (see below). Compound fractions, complex fractions, mixed numerals, and decimals (see below) are not simple fractions, though, unless irrational, they can be evaluated to a simple fraction.
Proper and improper fractions
Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise.[3][4] In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is greater than −1 and less than 1.[5][6] It is said to be an improper fraction, or sometimes top-heavy fraction,[7] if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, -3/4, and 4/9; examples of improper fractions are 9/4, -4/3, and 3/3.
Mixed numbers
A mixed numeral (often called a mixed number, also called a mixed fraction) is the sum of a non-zero integer and a proper fraction. This sum is implied without the use of any visible operator such as "+". For example, in referring to two entire cakes and three quarters of another cake, the whole and fractional parts of the number are written next to each other: 2 + 3/4 = 2 3/4.

This is not to be confused with the algebra rule of implied multiplication. When two algebraic expressions are written next to each other, the operation of multiplication is said to be "understood". In algebra, a b/c for example is not a mixed number. Instead, multiplication is understood where a b/c = a × b/c.

To avoid confusion, the multiplication is often explicitly expressed. So a b/c may be written as

a × b/c,

a · b/c, or

a(b/c).
An improper fraction is another way to write a whole plus a part. A mixed number can be converted to an improper fraction as follows:
Write the mixed number 2 3/4 as a sum 2 + 3/4.

Convert the whole number to an improper fraction with the same denominator as the fractional part, 2 = 8/4.

Add the fractions. The resulting sum is the improper fraction. In the example, 2 3/4 = 8/4 + 3/4 = 11/4.
Similarly, an improper fraction can be converted to a mixed number as follows:
Divide the numerator by the denominator. In the example, 11/4, divide 11 by 4. 11 ÷ 4 = 2 with remainder 3.
The quotient (without the remainder) becomes the whole number part of the mixed number. The remainder becomes the numerator of the fractional part. In the example, 2 is the whole number part and 3 is the numerator of the fractional part.
The new denominator is the same as the denominator of the improper fraction. In the example, they are both 4. Thus 11/4 = 2 3/4.

Mixed numbers can also be negative, as in −2 3/4, which equals −(2 + 3/4) = −2 − 3/4.
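These two conversions can be checked in a few lines of Python using the standard-library Fraction type (the helper names to_improper and to_mixed are ours, not a standard API; the sketch assumes a positive fraction):

    from fractions import Fraction

    def to_improper(whole, num, den):
        # 2 3/4  ->  11/4
        return Fraction(whole * den + num, den)

    def to_mixed(frac):
        # 11/4  ->  (2, 3/4): whole part plus proper remainder
        whole = frac.numerator // frac.denominator
        return whole, frac - whole

    print(to_improper(2, 3, 4))        # 11/4
    print(to_mixed(Fraction(11, 4)))   # (2, Fraction(3, 4))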
Ratios

A ratio is a relationship between two or more numbers that can sometimes be expressed as a fraction. Typically, a number of items are grouped and compared in a ratio, specifying numerically the relationship between each group. Ratios are expressed as "group 1 to group 2 ... to group n". For example, if a car lot had 12 vehicles, of which
2 are white,
6 are red, and
4 are yellow,
then the ratio of red to white to yellow cars is 6 to 2 to 4. The ratio of yellow cars to white cars is 4 to 2 and may be expressed as 4:2 or 2:1.
A ratio is often converted to a fraction when it is expressed as a ratio to the whole. In the above example, the ratio of yellow cars to all the cars on the lot is 4:12 or 1:3. We can convert these ratios to a fraction and say that 4/12 of the cars or 1/3 of the cars in the lot are yellow. Therefore, if a person randomly chose one car on the lot, then there is a one in three chance or probability that it would be yellow.
Reciprocals and the "invisible denominator"
The reciprocal of a fraction is another fraction with the numerator and denominator exchanged. The reciprocal of 3/7, for instance, is 7/3. The product of a fraction and its reciprocal is 1, hence the reciprocal is the multiplicative inverse of a fraction. Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as 17/1, where 1 is sometimes referred to as the invisible denominator. Therefore, every fraction or integer except for zero has a reciprocal. The reciprocal of 17 is 1/17.
Complex fractions
Not to be confused with fractions involving Complex numbers
In a complex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number,[8][9] corresponding to division of fractions. For example, (1/2)/(1/3) and (12 3/4)/26 are complex fractions. To reduce a complex fraction to a simple fraction, treat the longest fraction line as representing division. For example:

(1/2)/(1/3) = 1/2 × 3/1 = 3/2 = 1 1/2

(12 3/4)/26 = 12 3/4 · 1/26 = (12·4 + 3)/4 · 1/26 = 51/4 · 1/26 = 51/104

(3/2)/5 = 3/2 × 1/5 = 3/10

8/(1/3) = 8 × 3/1 = 24.
If, in a complex fraction, there is no clear way to tell which fraction lines takes precedence, then the expression is improperly formed, and ambiguous. Thus 5/10/20/40 is a poorly constructed mathematical expression, with multiple possible values.
Compound fractions
A compound fraction is a fraction of a fraction, or any number of fractions connected with the word of,[8][9] corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see the section on multiplication). For example, 3/4 of 5/7 is a compound fraction, corresponding to 3/4 × 5/7 = 15/28. The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other.
Decimal fractions and percentages
A decimal fraction is a fraction whose denominator is not given explicitly, but is understood to be an integer power of ten. Decimal fractions are commonly expressed using decimal notation in which the implied denominator is determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, a raised period (•), a comma) depends on the locale (for examples, see decimal separator). Thus for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, viz. 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the decimal (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, 3 75/100.
Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023×10⁻⁷, which represents 0.0000006023. The 10⁻⁷ represents a denominator of 10⁷. Dividing by 10⁷ moves the decimal point 7 places to the left.
Decimal fractions with infinitely many digits to the right of the decimal separator represent an infinite series. For example, 1/3 = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + ... .
Another kind of fraction is the percentage (Latin per centum meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51/100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% equals 311/100, and −27% equals −27/100.
The related concept of permille or parts per thousand has an implied denominator of 1000, while the more general parts-per notation, as in 75 parts per million, means that the proportion is 75/1,000,000.
Whether common fractions or decimal fractions are used is often a matter of taste and context. Common fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3/16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more accurate to multiply 15 by 1/3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example 3/6 (read "three and six") meaning 3 shillings and 6 pence, and having no relationship to the fraction 3/6.
Special cases

A unit fraction is a vulgar fraction with a numerator of 1, e.g. 1/7. Unit fractions can also be expressed using negative exponents, as in 2⁻¹ which represents 1/2, and 2⁻² which represents 1/(2²) or 1/4.
An Egyptian fraction is the sum of distinct positive unit fractions, for example 1/2 + 1/3. This definition derives from the fact that the ancient Egyptians expressed all fractions except 1/2, 2/3 and 3/4 in this manner. Every positive rational number can be expanded as an Egyptian fraction. For example, 5/7 can be written as 1/2 + 1/6 + 1/21. Any positive rational number can be written as a sum of unit fractions in infinitely many ways. Two ways to write 13/17 are 1/2 + 1/4 + 1/68 and 1/3 + 1/4 + 1/6 + 1/68.
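One way to produce such an expansion is the greedy (Fibonacci–Sylvester) method: repeatedly subtract the largest unit fraction that fits. A small Python sketch is below; note that the expansion it finds for 5/7, namely 1/2 + 1/5 + 1/70, differs from the one quoted above, since expansions are not unique.

    from fractions import Fraction
    from math import ceil

    def egyptian(frac):
        # Greedy expansion of a fraction with 0 < frac < 1.
        parts = []
        while frac > 0:
            unit = Fraction(1, ceil(1 / frac))  # largest 1/n not exceeding frac
            parts.append(unit)
            frac -= unit
        return parts

    print(egyptian(Fraction(5, 7)))  # [Fraction(1, 2), Fraction(1, 5), Fraction(1, 70)]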
A dyadic fraction is a vulgar fraction in which the denominator is a power of two, e.g. 1/8.
Arithmetic with fractions
Like whole numbers, fractions obey the commutative, associative, and distributive laws, and the rule against division by zero.
Equivalent fractions

Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero number n, the fraction n/n = 1. Therefore, multiplying by n/n is equivalent to multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction 1/2. When the numerator and denominator are both multiplied by 2, the result is 2/4, which has the same value (0.5) as 1/2. To picture this visually, imagine cutting a cake into four pieces; two of the pieces together (2/4) make up half the cake (1/2).

Dividing the numerator and denominator of a fraction by the same non-zero number will also yield an equivalent fraction. This is called reducing or simplifying the fraction. A simple fraction in which the numerator and denominator are coprime (that is, the only positive integer that goes into both the numerator and denominator evenly is 1) is said to be irreducible, in lowest terms, or in simplest terms. For example, 3/9 is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast, 3/8 is in lowest terms—the only positive integer that goes into both 3 and 8 evenly is 1.

Using these rules, we can show that 5/10 = 1/2 = 10/20 = 50/100.
A common fraction can be reduced to lowest terms by dividing both the numerator and denominator by their greatest common divisor. For example, as the greatest common divisor of 63 and 462 is 21, the fraction 63/462 can be reduced to lowest terms by dividing the numerator and denominator by 21:

63/462 = (63 ÷ 21)/(462 ÷ 21) = 3/22
The Euclidean algorithm gives a method for finding the greatest common divisor of any two positive integers.
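In Python the greatest common divisor is available as math.gcd, so the reduction above can be reproduced directly; the Fraction type performs the same reduction automatically:

    from math import gcd
    from fractions import Fraction

    g = gcd(63, 462)          # 21
    print(63 // g, 462 // g)  # 3 22
    print(Fraction(63, 462))  # 3/22, reduced automatically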
Comparing fractions

Comparing fractions with the same denominator only requires comparing the numerators.

3/4 > 2/4 because 3 > 2.
If two positive fractions have the same numerator, then the fraction with the smaller denominator is the larger number. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger. When two positive fractions have the same numerator, they represent the same number of parts, but in the fraction with the smaller denominator, the parts are larger.
One way to compare fractions with different numerators and denominators is to find a common denominator. To compare a/b and c/d, these are converted to ad/bd and bc/bd. Then bd is a common denominator and the numerators ad and bc can be compared.

2/3 ? 1/2 gives 4/6 > 3/6
It is not necessary to determine the value of the common denominator to compare fractions. This short cut is known as "cross multiplying" – you can just compare ad and bc, without computing the denominator.
5/18 ? 4/17
Multiply top and bottom of each fraction by the denominator of the other fraction, to get a common denominator:
(5×17)/(18×17) ? (4×18)/(17×18)

The denominators are now the same, but it is not necessary to calculate their value – only the numerators need to be compared. Since 5×17 (= 85) is greater than 4×18 (= 72), 5/18 > 4/17.
Also note that every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, so every negative fraction is less than any positive fraction.
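The cross-multiplication shortcut is easy to express in code; the helper below is ours and assumes positive denominators (Fraction comparisons give the same answer):

    from fractions import Fraction

    def less_than(a, b, c, d):
        # a/b < c/d, for positive denominators b and d
        return a * d < c * b

    print(less_than(4, 17, 5, 18))            # True: 4/17 < 5/18
    print(Fraction(5, 18) > Fraction(4, 17))  # True, the same comparison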
Addition

The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below: Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows:

2/4 + 3/4 = 5/4 = 1 1/4.

If 1/2 of a cake is to be added to 1/4 of a cake, the pieces need to be converted into comparable quantities, such as cake-eighths or cake-quarters.
Adding unlike quantities
To add fractions containing unlike quantities (e.g. quarters and thirds), it is necessary to convert all amounts to like quantities. It is easy to work out the chosen type of fraction to convert to; simply multiply together the two denominators (bottom number) of each fraction.
For adding quarters to thirds, both types of fraction are converted to twelfths, thus: 1/4 + 1/3 = (1×3)/(4×3) + (1×4)/(3×4) = 3/12 + 4/12 = 7/12.
Consider adding the following two quantities:
3/5 + 2/3

First, convert 3/5 into fifteenths by multiplying both the numerator and denominator by three: 3/5 × 3/3 = 9/15. Since 3/3 equals 1, multiplication by 3/3 does not change the value of the fraction.

Second, convert 2/3 into fifteenths by multiplying both the numerator and denominator by five: 2/3 × 5/5 = 10/15.
Now it can be seen that:
9/15 + 10/15 = 19/15 = 1 4/15
This method can be expressed algebraically:
a/b + c/d = (ad + cb)/bd
And for expressions consisting of the addition of three fractions:
a/b + c/d + e/f = (a(df) + c(bf) + e(bd))/bdf

This method always works, but sometimes there is a smaller denominator that can be used (a least common denominator). For example, to add 3/4 and 5/12 the denominator 48 can be used (the product of 4 and 12), but the smaller denominator 12 may also be used, being the least common multiple of 4 and 12.

3/4 + 5/12 = 9/12 + 5/12 = 14/12 = 7/6 = 1 1/6
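The same computation in Python, once with an explicit least common denominator and once with Fraction doing the work (the helper add is ours; the least common multiple is computed from gcd):

    from math import gcd
    from fractions import Fraction

    def add(a, b, c, d):
        # a/b + c/d over the least common denominator
        lcd = b * d // gcd(b, d)
        return a * (lcd // b) + c * (lcd // d), lcd

    print(add(3, 4, 5, 12))                  # (14, 12), i.e. 14/12
    print(Fraction(3, 4) + Fraction(5, 12))  # 7/6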
Subtraction

The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance,

2/3 − 1/2 = 4/6 − 3/6 = 1/6
Multiplying a fraction by another fraction
To multiply fractions, multiply the numerators and multiply the denominators. Thus:
2/3 × 3/4 = 6/12
Why does this work? First, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfth. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths.
A short cut for multiplying fractions is called "cancellation". In effect, we reduce the answer to lowest terms during multiplication. For example:
2/3 × 3/4 = 1/1 × 1/2 = 1/2
A two is a common factor in both the numerator of the left fraction and the denominator of the right and is divided out of both. Three is a common factor of the left denominator and right numerator and is divided out of both.
Multiplying a fraction by a whole number
Place the whole number over one and multiply.
6 × 3/4 = 6/1 × 3/4 = 18/4
This method works because the fraction 6/1 means six equal parts, each one of which is a whole.
When multiplying mixed numbers, it's best to convert the mixed number into an improper fraction. For example:
3 × 2 3/4 = 3 × (8/4 + 3/4) = 3 × 11/4 = 33/4 = 8 1/4

In other words, 2 3/4 is the same as 8/4 + 3/4, making 11 quarters in total (because 2 cakes, each split into quarters makes 8 quarters total) and 33 quarters is 8 1/4, since 8 cakes, each made of quarters, is 32 quarters in total.
Division

To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example, 10/3 ÷ 5 equals 2/3 and also equals 10/(3·5) = 10/15, which reduces to 2/3. To divide a number by a fraction, multiply that number by the reciprocal of that fraction. Thus, 1/2 ÷ 3/4 = 1/2 × 4/3 = (1·4)/(2·3) = 2/3.
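Both rules can be verified with Fraction, which multiplies numerators and denominators together and divides by multiplying with the reciprocal:

    from fractions import Fraction

    print(Fraction(2, 3) * Fraction(3, 4))  # 1/2
    print(Fraction(10, 3) / 5)              # 2/3
    print(Fraction(1, 2) / Fraction(3, 4))  # 2/3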
Converting between decimals and fractions
To change a common fraction to a decimal, divide the denominator into the numerator. Round the answer to the desired accuracy. For example, to change 1/4 to a decimal, divide 4 into 1.00, to obtain 0.25. To change 1/3 to a decimal, divide 3 into 1.0000..., and stop when the desired accuracy is obtained. Note that 1/4 can be written exactly with two decimal digits, while 1/3 cannot be written exactly with any finite number of decimal digits.
To change a decimal to a fraction, write in the denominator a 1 followed by as many zeroes as there are digits to the right of the decimal point, and write in the numerator all the digits in the original decimal, omitting the decimal point. Thus 12.3456 = 123456/10000.
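These conversions are built into Python's Fraction type, which accepts decimal strings directly; limit_denominator is useful when a float only approximates a fraction:

    from fractions import Fraction

    print(Fraction('0.25'))                    # 1/4
    print(Fraction('12.3456'))                 # 7716/625, i.e. 123456/10000 reduced
    print(Fraction(0.1))                       # exact value of the binary float 0.1
    print(Fraction(0.1).limit_denominator())   # 1/10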
Converting repeating decimals to fractions
Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have. Sometimes an infinite repeating decimal is required to reach the same precision. Thus, it is often useful to convert repeating decimals into fractions.
The preferred way to indicate a repeating decimal is to place a bar over the digits that repeat; for example, 0.789789789… is written as 0.789 with a bar over the digits 789. For repeating patterns where the repeating pattern begins immediately after the decimal point, a simple division of the pattern by the same number of nines as numbers it has will suffice. For example (writing the repeating decimals out with an ellipsis):

0.555… = 5/9

0.626262… = 62/99

0.264264264… = 264/999

0.62916291… = 6291/9999
In case leading zeros precede the pattern, the nines are suffixed by the same number of trailing zeros:
0.0555… = 5/90

0.000392392392… = 392/999000

0.00121212… = 12/9900

In case a non-repeating set of decimals precede the pattern (such as 0.1523987987987…), we can write it as the sum of the non-repeating and repeating parts, respectively:

0.1523 + 0.0000987987987…
Then, convert both parts to fractions, and add them using the methods described above:
1523/10000 + 987/9990000 = 1522464/9990000
Alternatively, algebra can be used, such as below:
Let x = the repeating decimal:
x = 0.1523Template:Overline
Multiply both sides by the power of 10 just great enough (in this case 104) to move the decimal point just before the repeating part of the decimal number:
10,000x = 1,523.987987987…
Multiply both sides by the power of 10 (in this case 103) that is the same as the number of places that repeat:
10,000,000x = 1,523,987.987987987…
Subtract the two equations from each other (if a = b and c = d, then a - c = b - d):
10,000,000x - 10,000x = 1,523,987.987987… - 1,523.987987…
Continue the subtraction operation to clear the repeating decimal:
9,990,000x = 1,523,987 - 1,523
9,990,000x = 1,522,464
Divide both sides to represent x as a fraction
x = 1522464/9990000
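The same multiply-and-subtract bookkeeping can be automated. The following short Python sketch is only an illustration (the function name and the prefix/repeat argument convention are ours, not part of any standard interface); it applies exactly the algebraic method above and relies on Python's built-in fractions.Fraction for the final reduction:

from fractions import Fraction

def repeating_decimal_to_fraction(prefix, repeat):
    # prefix: the non-repeating digits after the decimal point, e.g. "1523"
    # repeat: the repeating block, e.g. "987", so the input represents 0.1523987987987...
    p, r = len(prefix), len(repeat)
    # Multiplying x by 10**(p+r) and by 10**p and subtracting clears the repeating
    # tail, just as in the derivation above, leaving an integer numerator.
    numerator = int(prefix + repeat) - (int(prefix) if prefix else 0)
    denominator = 10 ** (p + r) - 10 ** p
    return Fraction(numerator, denominator)  # Fraction reduces to lowest terms

print(repeating_decimal_to_fraction("1523", "987"))  # 31718/208125, i.e. 1522464/9990000 in lowest terms
print(repeating_decimal_to_fraction("", "789"))      # 263/333, i.e. 789/999 in lowest terms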
Fractions in abstract mathematics
In addition to being of great practical importance, fractions are also studied by mathematicians, who check that the rules for fractions given above are consistent and reliable. Mathematicians define a fraction as an ordered pair (a, b) of integers a and b ≠ 0, for which the operations addition, subtraction, multiplication, and division are defined as follows:[10]
(a, b) + (c, d) = (ad + bc, bd)
(a, b) − (c, d) = (ad − bc, bd)
(a, b) · (c, d) = (ac, bd)
(a, b) ÷ (c, d) = (ad, bc) (when c ≠ 0)
In addition, an equivalence relation is specified as follows: (a, b) ~ (c, d) if and only if ad = bc.
These definitions agree in every case with the definitions given above; only the notation is different.
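As a quick check that these rules reproduce ordinary fraction arithmetic: (1, 2) + (1, 3) = (1·3 + 2·1, 2·3) = (5, 6), mirroring 1/2 + 1/3 = 5/6, and (2, 4) ~ (1, 2) because 2·2 = 4·1.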
More generally, a and b may be elements of any integral domain R, in which case a fraction is an element of the field of fractions of R. For example, when a and b are polynomials in one indeterminate, the field of fractions is the field of rational fractions (also known as the field of rational functions). When a and b are integers, the field of fractions is the field of rational numbers.
Algebraic fractions
An algebraic fraction is the indicated quotient of two algebraic expressions. Two examples of algebraic fractions are 3x/(x² + 2x − 3) and √(x + 2)/(x² − 3). Algebraic fractions are subject to the same laws as arithmetic fractions.
If the numerator and the denominator are polynomials, as in 3x/(x² + 2x − 3), the algebraic fraction is called a rational fraction (or rational expression). An irrational fraction is one that contains the variable under a fractional exponent or root, as in √(x + 2)/(x² − 3).
The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factors common to the numerator and the denominator are 1 and −1. An algebraic fraction whose numerator or denominator, or both, contain a fraction, such as (1 + 1/x)/(1 − 1/x), is called a complex fraction.
Rational numbers are the quotient field of integers. Rational expressions are the quotient field of the polynomials (over some integral domain). Since a coefficient is a polynomial of degree zero, a radical expression such as √2/2 is a rational fraction. Another example (over the reals) is π/2, the radian measure of a right angle.
The term partial fraction is used when decomposing rational expressions into sums. The goal is to write the rational expression as the sum of other rational expressions with denominators of lesser degree. For example, the rational expression 2x/(x² − 1) can be rewritten as the sum of two fractions: 1/(x + 1) + 1/(x − 1). This is useful in many areas such as integral calculus and differential equations.
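The decomposition can be verified by recombining the two pieces over a common denominator: 1/(x + 1) + 1/(x − 1) = ((x − 1) + (x + 1))/((x + 1)(x − 1)) = 2x/(x² − 1).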
Radical expressions
A fraction may also contain radicals in the numerator and/or the denominator. If the denominator contains radicals, it can be helpful to rationalize it (compare Simplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is a monomial square root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator:
3/√7 = (3/√7) · (√7/√7) = 3√7/7
The process of rationalization of binomial denominators involves multiplying the top and the bottom of a fraction by the conjugate of the denominator so that the denominator becomes a rational number. For example:
3/(3 − 2√5) = 3/(3 − 2√5) · (3 + 2√5)/(3 + 2√5) = 3(3 + 2√5)/(3² − (2√5)²) = 3(3 + 2√5)/(9 − 20) = −(9 + 6√5)/11
3/(3 + 2√5) = 3/(3 + 2√5) · (3 − 2√5)/(3 − 2√5) = 3(3 − 2√5)/(3² − (2√5)²) = 3(3 − 2√5)/(9 − 20) = −(9 − 6√5)/11
Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator.
Typographical variations
In computer displays and typography, simple fractions are sometimes printed as a single character, e.g. ½ (one half). See the article on Number Forms for information on doing this in Unicode.
Scientific publishing distinguishes four ways to set fractions, together with guidelines on use:[11]
special fractions: fractions that are presented as a single character with a slanted bar, with roughly the same height and width as other characters in the text. Generally used for simple fractions, such as: ½, ⅓, ⅔, ¼, and ¾. Since the numerals are smaller, legibility can be an issue, especially for small-sized fonts. These are not used in modern mathematical notation, but in other contexts.
case fractions: similar to special fractions, these are rendered as a single typographical character, but with a horizontal bar, thus making them upright. An example would be 1/2 written with a horizontal bar, but rendered with the same height as other characters. Some sources include all rendering of fractions as case fractions if they take only one typographical space, regardless of the direction of the bar.[12]
shilling fractions: 1/2, so called because this notation was used for pre-decimal British currency (£sd), as in 2/6 for a half crown, meaning two shillings and six pence. While the notation "two shillings and six pence" did not represent a fraction, the forward slash is now used in fractions, especially for fractions inline with prose (rather than displayed), to avoid uneven lines. It is also used for fractions within fractions (complex fractions) or within exponents to increase legibility. Fractions written this way, also known as piece fractions,[13] are written all on one typographical line, but take 3 or more typographical spaces.
built-up fractions: fractions such as 1/2 set with a full-size horizontal bar. This notation uses two or more lines of ordinary text, and results in a variation in spacing between lines when included within other text. While large and legible, these can be disruptive, particularly for simple fractions or within complex fractions.
The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on.[14] The Egyptians used Egyptian fractions ca. 1000 BC. About 4,000 years ago Egyptians divided with fractions using slightly different methods. They used least common multiples with unit fractions. Their methods gave the same answer as modern methods.[15] The Egyptians also had a different notation for dyadic fractions in the Akhmim Wooden Tablet and several Rhind Mathematical Papyrus problems.
The Greeks used unit fractions and later continued fractions and followers of the Greek philosopher Pythagoras, ca. 530 BC, discovered that the square root of two cannot be expressed as a fraction. In 150 BC Jain mathematicians in India wrote the "Sthananga Sutra", which contains work on the theory of numbers, arithmetical operations, operations with fractions.
The method of putting one number below the other and computing fractions first appeared in Aryabhatta's work around AD 499.[citation needed] In Sanskrit literature, fractions, or rational numbers were always expressed by an integer followed by a fraction. When the integer is written on a line, the fraction is placed below it and is itself written on two lines, the numerator called amsa part on the first line, the denominator called cheda "divisor" on the second below. If the fraction is written without any particular additional sign, one understands that it is added to the integer above it. If it is marked by a small circle or a cross (the shape of the "plus" sign in the West) placed on its right, one understands that it is subtracted from the integer. For example (to be read vertically), Bhaskara I writes[16]
6  1  2
1  1  1
4  5  9
That is, to denote 6+1/4, 1+1/5, and 2–1/9, with a small circle to the right of the last column marking that fraction as subtracted.
Al-Hassār, a Muslim mathematician from Fez, Morocco specializing in Islamic inheritance jurisprudence during the 12th century, first mentions the use of a fractional bar, where numerators and denominators are separated by a horizontal bar. In his discussion he writes, "... for example, if you are told to write three-fifths and a third of a fifth, write thus", placing the numerals 3 and 1 above a horizontal bar and 5 and 3 below it.[17] This same fractional notation appears soon after in the work of Leonardo Fibonacci in the 13th century.[18]
In discussing the origins of decimal fractions, Dirk Jan Struik states:[19]
"The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548–1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century)."[20]
While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.[21][22]
In formal education
Pedagogical tools
In primary schools, fractions have been demonstrated through Cuisenaire rods, Fraction Bars, fraction strips, fraction circles, paper (for folding or cutting), pattern blocks, pie-shaped pieces, plastic rectangles, grid paper, dot paper, geoboards, counters and computer software.
Documents for teachers
Several states in the United States have adopted learning trajectories from the Common Core State Standards Initiative's guidelines for mathematics education. Aside from sequencing the learning of fractions and operations with fractions, the document provides the following definition of a fraction: "A number expressible in the form a/b where a is a whole number and b is a positive whole number. (The word fraction in the standards always refers to a non-negative number.)"[23] The document itself also refers to negative fractions.
Basic Fraction Conversion
Fractions can be converted into other decimals, percents, and other fractions that are the same as itself.
Conversion to Decimals and Percents
1/1 = 1.0 = 100%
1/2 = 0.5 = 50%
1/3 = 0.3333... = 33.3333...%
1/4 = 0.25 = 25%
1/7 = 0.1428571... = 14.28571...%
1/8 = 0.125 = 12.5%
1/9 = 0.1111... = 11.111...%
1/10 = 0.1 = 10%
Conversion to Other Fractions
1/2 = 2/4 = 3/6 = 4/8 = 5/10 = 6/12
1/3 = 2/6 = 3/9 = 4/12 = 5/15 = 6/18
2/3 = 4/6 = 6/9 = 8/12 = 10/15 = 12/18
1/4 = 2/8 = 3/12 = 4/16 = 5/20
3/4 = 6/8 = 9/12 = 12/16 = 15/20
1/5 = 2/10 = 3/15 = 4/20
3/5 = 6/10 = 9/15 = 12/20
4/5 = 8/10 = 12/15 = 16/20
Continued fraction
↑ H. Wu, The Mis-Education of Mathematics Teachers, Notices of the American Mathematical Society, Volume 58, Issue 03 (March 2011), page 374
↑ Weisstein, Eric W., "Common Fraction", MathWorld.
↑ Weisstein, Eric W., "Improper Fraction", MathWorld.
↑ See for examples and an explanation.
↑ While there is some disagreement among history of mathematics scholars as to the primacy of al-Uqlidisi's contribution, there is no question as to his major contribution to the concept of decimal fractions. [1] "MacTutor's al-Uqlidisi biography". Retrieved 2011-11-22.
Weisstein, Eric W., "Fraction", MathWorld.
Online program for exact conversion between fractions and decimals
Online Fractions Calculator with detailed solution
Retrieved from "https://en.formulasearchengine.com/index.php?title=Fraction_(mathematics)&oldid=239561"
Commons category with local link different than on Wikidata
Fractions (mathematics)
Elementary arithmetic
Division (mathematics)
About formulasearchengine | CommonCrawl |
\begin{document}
\title{Non-linear Recurrences that Quite Unexpectedly Generate Rational Numbers}
\begin{abstract} Non-linear recurrences which generate integers in a surprising way have been studied by many people. Typically people study recurrences that are linear in the highest order term. In this paper I consider what happens when the recurrence is not linear in the highest order term. In this case we no longer produce a unique sequence, but we sometimes have surprising results. If the highest order term is raised to the $m^{th}$ power we expect the solutions to involve $m^{th}$ roots, but for some specific recurrences it happens that we generate rational numbers ad infinitum. I will give a general example in the case of a first order recurrence with $m=2$, and a more specific example of order 3 with $m=2$ which comes from a generalized Somos recurrence. \end{abstract}
\section{Introduction} Many people have studied non-linear recurrences that generate sequences of integers despite the fact that every iteration of the recurrence requires division by some previous term in the sequence. These types of non-linear recurrences generally have the following form \begin{align}\label{nonLinRecurDef} a(n) = L(a(n-1),\ldots, a(n-k)) \end{align} where $L$ is a Laurent polynomial with integer coefficients, i.e.- $L$ is in the set $\mathbb{Z}[x_1^{\pm 1}, \ldots, x_{k}^{\pm 1}]$. Well studied examples of this phenomenon are the Somos sequences, introduced by Michael Somos in 1989 \cite{DGale}, defined by the recurrence \begin{align*} s(n)s(n-k)=\sum_{i=1}^{\left\lfloor\frac{k}{2}\right\rfloor} s(n-i)s(n-k+i) \end{align*} with initial conditions $s(m)=1$ for $m\leq k$. For $k=2,3$ the recurrence generates the infinite sequence $\{1\}_{n=1}^\infty$. More interestingly, for $k=4,5,6,7$ it is known that these recurrences each generate an infinite sequence of integers (\cite{FZ}\cite{DGale}). There are, of course, other examples of this integrality phenomenon, and many are generalizations of the Somos recurrences (for some examples see \cite{DGale}).
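For instance, for $k=4$ the recurrence reads $s(n)s(n-4)=s(n-1)s(n-3)+s(n-2)^2$, and the first several terms are
\[1,1,1,1,2,3,7,23,59,314,1529,\ldots\]
so every division by $s(n-4)$ happens to come out exact; this is the integrality phenomenon referred to above.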
It seems to be the case that all recurrences of the form (\ref{nonLinRecurDef}) that have been studied have no exponent on $a(n)$. In this paper I will discuss recurrences of the form \begin{align}\label{MnonLinRecurDef} a(n)^m = L(a(n),a(n-1),\ldots, a(n-k)) \end{align} where $m>1$, and $L$ is still a Laurent polynomial. I will refer to $m$ many times in this paper, and (unless otherwise stated) this will refer to the exponent on $a(n)$ in the left-hand side of a recurrence of the form (\ref{MnonLinRecurDef}).
In general one would imagine that if $a(n)$ is raised to a power $m>1$ then what is generated is not a sequence at all, since solving an equation of degree $m$ yields up to $m$ answers. So I need to introduce the concept of a \emph{recurrence tree}. \begin{defn} A \emph{recurrence tree} is a way of storing the values generated by a recurrence of the form (\ref{MnonLinRecurDef}) with $m>1$. Solving for the $n+1^{st}$ term, given a specific $n^{th}$ term requires solving an equation of degree $m$. This yields up to $m$ possibilities for the $n+1^{st}$ term. We can store these numbers in a complete $m$-ary tree. \end{defn} \noindent For example when $m=2$ we get the following structure
\[
\xy
(0,0)*+{a(1)}="a1";
(-10,-10)*+{a(2)_1}="a21";
(10,-10)*+{a(2)_2}="a22";
(-20,-20)*+{a(3)_{1,1}}="a31";
(-7,-20)*+{a(3)_{1,2}}="a32";
(7,-20)*+{a(3)_{2,1}}="a33";
(20,-20)*+{a(3)_{2,2}}="a34";
(-20,-25)*+{\vdots};
(-7,-25)*+{\vdots};
(7,-25)*+{\vdots};
(20,-25)*+{\vdots};
{\ar@{-} "a1"; "a21"};
{\ar@{-} "a1"; "a22"};
{\ar@{-} "a21"; "a31"};
{\ar@{-} "a21"; "a32"};
{\ar@{-} "a22"; "a33"};
{\ar@{-} "a22"; "a34"};
\endxy
\]
Also, since solving an equation of degree $m$ yields answers which can be in $\mathbb{C}$, we may expect that the numbers generated are not rational. However, in some cases a recurrence of this form generates rational numbers. When the tree consists only of numbers in $\mathbb{Q}$ (resp. $\mathbb{Z}$) we will call it a \emph{rational (resp. integer) recurrence tree}.
One way to come up with recurrences that obviously generate rational recurrence trees is to take a recurrence that generates integers and find the ``ratios of ratios" sequence. \begin{defn} Given the sequence $\{b(n)\}_{n=1}^\infty$, we call $\left\{\frac{b(n+1)}{b(n)}\right\}_{n=1}^\infty$ the \emph{sequence of ratios of $\{b(n)\}$} and $\left\{\frac{b(n+2)/b(n+1)}{b(n+1)/b(n)}\right\}_{n=1}^\infty$ the \emph{sequence of ratios of ratios of $\{b(n)\}$}. \end{defn} \noindent Obviously, if a sequence $\{b(n)\}_{n=1}^\infty \subset \mathbb{Z}$ then the sequence of ratios of $\{b(n)\}$ and the sequence of ratios of ratios of $\{b(n)\}$ are in $\mathbb{Q}$.
Of course, it may not be the case that the recurrence that generates these ratio sequences has the $m>1$ property, but in the case of the generalized Somos-4 sequences we can find an alternate recurrence for the sequence of ratios of ratios that does have this property. We can then generalize and find new recurrences that do not obviously generate rational numbers.
\section{Generalized Somos-4 Ratios of Ratios Sequence}\label{GenS4RatRat}
Let $\{s_c(n)\}_{n=1}^\infty$ be a sequence defined by the following recurrence: \begin{align}\label{GenSom4} s_c(n) s_c(n-4)&=c_1s_c(n-1)s_c(n-3)+c_2s_c(n-2)^2 \end{align} with initial conditions $s_c(i)=1$ for $i\leq4$, where $c=(c_1,c_2) \in \mathbb{Z}^2$ . This is a special case of the three term Gale-Robinson recurrence (\cite{FZ}\cite{DGale}) that further specializes to the Somos-4 recurrence when $c_1=c_2=1$. The first few terms of the sequence are \[ 1,1,1,1,c_1+c_2,c_1^2+c_1c_2+c_2,c_1^3+2c_1^2c_2+c_1c_2+2c_1c_2^2+c_2^3,\ldots \]
Using cluster algebras and the Caterpillar Lemma, Fomin and Zelevinsky proved that the recurrence (\ref{GenSom4}) generates a sequence of integers \cite{FZ}.
Now, define sequences $\seq{t_c(n)}{n}{1}{\infty}$, and $\seq{a_c(n)}{n}{1}{\infty}$ by \begin{align*} t_c(n)&=s_c(n+1)/s_c(n)\\ a_c(n)&=t_c(n+1)/t_c(n) \end{align*}
then $\seq{t_c(n)}{n}{1}{\infty}$ is the sequence of ratios of $s_c(n)$ \[\{t_c(n)\}=\left\{1,1,1,c_1+c_2,\frac{c_1^2+c_1c_2+c_2}{c_1+c_2},\frac{c_1^3+2c_1^2c_2+c_1c_2+2c_1c_2^2+c_2^3}{c_1^2+c_1c_2+c_2},\ldots\right\}\] and $\{a_c(n)\}$ is the sequence of ratios of ratios of $s_c(n)$. \[\seq{a_c(n)}{n}{1}{\infty}=\left\{1,1,c_1+c_2,\frac{c_1^2+c_1c_2+c_2}{(c_1+c_2)^2},\ldots\right\}\] In this paper we will be interested in the sequence $\{a_c(n)\}$. By algebraic manipulation we can easily find a first order quadratic recurrence for $a_c(n)$.
\begin{claim}\label{acRecurC} The sequence $\{a_c(n)\}_{n=1}^\infty$ is defined by the recurrence \begin{align}\label{acRecur} a_c(n+2)a_c(n+1)^2a_c(n)=c_1a_c(n+1)+c_2 \end{align} with initial conditions $a_c(1)=a_c(2)=1$. \end{claim} \begin{proof} We will simply manipulate the recurrence equation for $s_c(n)$ to look like the recurrence equation (\ref{acRecur}). \begin{align*} s_c(n+4)s_c(n) &=
c_1s_c(n+3)s_c(n+1)+c_2s_c(n+2)^2\\ \frac{s_c(n+4)s_c(n)}{s_c(n+2)^{2}} &= c_1\frac{s_c(n+3)s_c(n+1)}{s_c(n+2)^{2}}+c_2 \end{align*} Notice that the $s_c$ term on the right side is $a_c(n+1)$. By multiplying and dividing by the correct terms on the left side we will get the left side of (\ref{acRecur}). \begin{align*} \frac{s_c(n+4)s_c(n)}{s_c(n+2)^{2}} &=\frac{s_c(n+4)s_c(n)}{s_c(n+2)^2}\frac{s_c(n+2)^2s_c(n+3)^2s_c(n+1)^2}{s_c(n+2)^2s_c(n+3)^2s_c(n+1)^2}\\ &=\frac{s_c(n+4)s_c(n+2)}{s_c(n+3)^2}\frac{s_c(n+3)^2s_c(n+1)^2}{s_c(n+2)^4} \frac{s_c(n+2)s_c(n)}{s_c(n+1)^2}\\ &=a_c(n+2)a_c(n+1)^2a_c(n) \end{align*} Finally we obtain \[a_c(n+2)a_c(n+1)^2a_c(n)=c_1a_c(n+1)+c_2\] which is (\ref{acRecur}). \end{proof}
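As a quick numerical check, take $c_1=c_2=1$ (the classical Somos-4 case): the sequence begins $a_c(1)=a_c(2)=1$, $a_c(3)=2$, $a_c(4)=\frac{3}{4}$, and indeed $a_c(4)a_c(3)^2a_c(2)=\frac{3}{4}\cdot 4\cdot 1=3=c_1a_c(3)+c_2$, in agreement with (\ref{acRecur}).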
Unfortunately, this recurrence for the ratios of ratios of $s_c(n)$ does not satisfy $m>1$. The proof of the next claim will help to create a recurrence with $m=2$.
\begin{claim}\label{acRecur2C} The sequence generated by the recurrence (\ref{acRecur}) also satisfies the recurrence \begin{align}\label{acRecur2} a_c(n+2)a_c(n+1)^2+a_c(n+1)^2a_c(n)=(2c_1+c_2+1)a_c(n+1)-c_1 \end{align} \end{claim} \begin{proof} Showing the converse, that the sequence defined by (\ref{acRecur2}) satisfies (\ref{acRecur}), will prove this claim because of uniqueness of the sequence. So assume the sequence $\{a_c(n)\}_{n=0}^\infty$ is defined by (\ref{acRecur2}). Now let us define a function $T(n)$ by $$T(n):=a_c(n+1)^2a_c(n)^2-(2c_1+c_2+1)a_c(n+1)a_c(n)+c_1a_c(n+1)+c_1a_c(n)+c_2$$ I claim that it is enough to show $T(n)=0$, for if this is true then rearranging terms and dividing both sides by $a_c(n+1)^2a_c(n)$ we get \begin{align*} c_1a_c(n+1)+c_2&=(2c_1+c_2+1)a_c(n+1)a_c(n)-c_1a_c(n)-a_c(n+1)^2a_c(n)^2\\ \frac{c_1a_c(n+1)+c_2}{a_c(n)}&=(2c_1+c_2+1)a_c(n+1)-c_1-a_c(n+1)^2a_c(n)\\ \frac{c_1a_c(n+1)+c_2}{a_c(n+1)^2a_c(n)}&=\frac{(2c_1+c_2+1)a_c(n+1)-c_1-a_c(n+1)^2a_c(n)}{a_c(n+1)^2} \end{align*} The RHS of the above equality equals $a_c(n+2)$ because we assumed the sequence $\{a_c(n)\}$ is defined by the recurrence (\ref{acRecur2}). Therefore we also have that $a_c(n+2)=LHS$, i.e.- \begin{align*} a_c(n+2)=\frac{c_1a_c(n+1)+c_2}{a_c(n+1)^2a_c(n)} \end{align*} which is recurrence (\ref{acRecur}). So the sequence $\seq{a_c(n)}{n}{1}{\infty}$, generated by (\ref{acRecur2}), also satisfies the recurrence (\ref{acRecur}).
All that is left is showing, by induction, that $T(n)=0$ for all $n$. For $n=1$ we do the following calculation \begin{align*} T(1)&=a_c(2)^2a_c(1)^2-(2c_1+c_2+1)a_c(2)a_c(1)+c_1a_c(2)+c_1a_c(1)+c_2\\
&= 1\cdot1-(2c_1+c_2+1)\cdot 1 \cdot 1 + c_1 \cdot 1 + c_1 \cdot
1+c_2\\
&= 1-(2c_1+c_2+1)+2c_1+c_2\\
&=0 \end{align*} Now assume that $T(n-1)=0$ for some $n$. We substitute for $a_c(n+1)$ in $T(n)$ from (\ref{acRecur2}) and simplify to obtain \begin{align*} T(n)=&a_c(n-1)^2a_c(n)^2-(2c_1+c_2+1)a_c(n-1)a_c(n)+c_1a_c(n-1)+\\
&+c_1a_c(n)+c_2\\
=& T(n-1)=0 \end{align*} Therefore by induction, $T(n)=0$ for all $n$ and the claim is proved. This proof used ideas from Guoce Xin's paper \cite{Xin}. \end{proof} Coming out of the proof of Claim \ref{acRecur2C} we get that $T(n)=0$ is a first order recurrence with $m=2$ for $\{a_c(n)\}$ as we had hoped. One would expect that, since $\{a_c(n)\}$ is by definition a single sequence, the recurrence tree for $T(n)=0$ would somehow consist only of this single sequence. Indeed this is the case as we prove now. \begin{claim} The recurrence tree for \begin{align}\label{SomO1D2} a_c(n+1)^2a_c(n)^2-&(2c_1+c_2+1)a_c(n+1)a_c(n)+\\
&+c_1a_c(n+1)+c_1a_c(n)+c_2=0 \end{align} with $a(1)=1$, produces a single sequence in the sense that at every level of the tree there is only one value that we haven't yet seen. \end{claim} \begin{proof} Let $X:=a_c(n+1)$ and $Y:=a_c(n)$, then the first order quadratic recurrence for the generalized Somos-4 sequence is rewritten as \begin{align}\label{XYRecur} Y^2 X^2 + (c_1-(2c_1+c_2+1)Y)X + (c_1Y+c_2) &= 0 \end{align} Given some value $y_o$ for $Y$ there are two possible values for $X$ which satisfy (\ref{XYRecur}). This corresponds to the fact that given some $a_c(n)$ there are two possible values for $a_c(n+1)$. Using the quadratic formula, these two values, in terms of $y_o$, are \begin{align*} y^{\pm}&:=\frac{-(c_1-(2c_1+c_2+1)y_o)\pm \sqrt{(c_1-(2c_1+c_2+1)y_o)^2-4y_o^2(c_1y_o+c_2)}}{2y_o^2} \end{align*} Now, since these are values for $a_c(n+1)$ we substitute them back in for $Y$ in (\ref{XYRecur}), solve for $X$, and get potentially 4 possible values for $a_c(n+2)$ that come from this specific $a_c(n)=y_o$. However, when we solve the quadratic equation $$(y^{+})^2 X^2 + (c_1-(2c_1+c_2+1)y^{+})X + (c_1y^{+}+c_2) = 0$$ for $X$ the two solutions we get are $y_o$ and a large expression in terms of $y_o, c_1, c_2$ (similarly for $y^-$). This means that in the $i^{th}$ level of the tree, representing all possible values for $a_c(i)$, from each of the terms in the $i-1^{st}$ level there is at most one term that we haven't yet seen. Now, lets look at the second level given the initial condition (the root) $a_c(1)=1$. We solve the quadratic equation \begin{align*} 1^2 X^2 + (c_1-(2c_1+c_2+1)\cdot1)X + (c_1\cdot 1+c_2) &= 0\\ X^2 + (-c_1-c_2-1)X + (c_1+c_2) &= 0\\ X = 1 ~\mathrm{or}~ c_1+c_2 \end{align*} So on the second level we only have one new term. Therefore, on the third level, and all subsequent levels, we also only have one new term. \end{proof} Since the recurrence tree for (\ref{SomO1D2}) consists only of numbers from the sequence of ratios of ratios of $\{s_c(n)\}$, it must be a rational recurrence tree. So we have found an example of a non-linear recurrence with $m>1$ that generates rational recurrence tree. However, this example was constructed in such a way that it had to generate a rational recurrence tree. In the next section I will generalize this example to get nontrivial sequences generating rational recurrence trees.
\section{Generalized First Order Quadratic Recurrence Tree}\label{Ord1QuadTree} The general form of a first order non-linear recurrence is \begin{align}\label{O1D2Gen} \sum_{i=0}^m P_i(a(n))a(n+1)^i =0 \end{align} where $P_i(Y)$ is a polynomial in $Y$ of some degree $d_i$. For example, the sequence of ratios of ratios of generalized Somos-4 has recurrence given by (\ref{O1D2Gen}) where $m=2$ and \begin{align*} P_2(Y)&= Y^2\\ P_1(Y)&= c_1-(2c_1+c_2+1)Y\\ P_0(Y)&= c_2+c_1Y \end{align*} For the remainder of this section we will assume that $m=2$, $d_0=d_1=1$, and $P_2(Y)=Y^2$. Let \begin{align}\label{specP} P_2(Y)&=Y^2\notag\\ P_1(Y)&=A_1+A_2Y\\ P_0(Y)&=B_1+B_2Y\notag \end{align} where $A_1,A_2,B_1,B_2 \in \mathbb{Z}$. Under certain minimal sufficient conditions a recurrence of this form will generate a rational recurrence tree. \begin{prop} Let $a(1)=1$ in the recurrence (\ref{O1D2Gen}) with coefficient polynomials given by (\ref{specP}). If \begin{enumerate} \item $A_1=B_2$ and \item solving for $a(2)$ yields rational numbers, \end{enumerate} then the recurrence generates a rational recurrence tree. \end{prop} \begin{proof} First we will show that $A_1=B_2$ implies that for every term $a_1 = a(n)$ coming from solving \[a(n)^2 a(n-1)^2 + (A_1+A_2 a(n))a(n-1) + (B_1+A_1a(n)) = 0\] with $a(n-1)=a_0$, we get only one new $a(n+1)$. In other words, solving \[ a(n+1)^2 a_1^2 + (A_1+A_2 a(n+1))a_1 + (B_1+A_1a(n+1)) = 0\] for $a(n+1)$ yields the solutions $\{a(n+1),a_0\}$. In recurrence tree form this looks like:
\[
\xy
(0,7)*+{\vdots};
(0,0)*+{a(n-1)=a_0}="a1";
(20,-10)*+{\ddots}="a22";
(-5,-10)*+{a(n)_1=a_1}="a21";
(15,-20)*+{a(n+1)_{1,1}}="a31";
(-15,-20)*+{a(n+1)_{1,2}=a_0}="a32";
(20,-25)*+{\vdots};
{\ar@{-} "a1"; "a21"};
{\ar@{-} "a21"; "a31"};
{\ar@{-} "a21"; "a32"};
{\ar@{-} "a1"; "a22"};
\endxy
\]
Let $a_o$ be a term in the $n-1^{st}$ level. Its children in the recurrence tree are the solutions of the quadratic equation \begin{align*} a_o^2X^2+(A_1+A_2 a_o)X+(B_1+A_1 a_o)=0 \end{align*} In other words, possibilities for $a(n)$ given that $a(n-1)=a_o$ are \begin{align*} a(n)_1&=\frac{-(A_1+A_2 a_o)+\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\\ a(n)_2&=\frac{-(A_1+A_2 a_o)-\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2} \end{align*} If $a(n)=a(n)_i$, for $i=1,2$, then to get $a(n+1)$ we solve \begin{align*} a(n)_i^2a(n+1)^2+(A_1+A_2 a(n)_i)a(n+1)+(B_1+A_1 a(n)_i)=0 \end{align*} for $a(n+1)$. The goal is to show that $a_o$ is a solution for $a(n+1)$, so it is enough to show that $a(n)_i^2a_o^2+(A_1+A_2 a(n)_i)a_o+(B_1+A_1 a(n)_i)=0$ for $i=1,2$. This is nothing but algebraic manipulation that can be easily done using Maple or any other computer algebra system. \begin{comment} \begin{align*} &\left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\right)^2a_o^2+\\ &+\left(A_1+A_2 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\right)\right)a_o+\\ &+\left(B_1+A_1 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\right)\right)=\\ =&\frac{\left(-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}\right)^2}{2a_o^2}+\\ &+A_1a_o+A_2 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o}\right)+\\ &+B_1+A_1 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\right)\\ =&\frac{2(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)\mp2(A_1+A_2a_o)\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}+\\ &+A_1a_o+A_2 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o}\right)+\\ &+B_1+A_1 \left(\frac{-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\right)\\ =&(\pm A_1+\pm A_2a_o \mp 2A_1 \mp 2A_2a_o)\frac{\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2} +\\ &+\frac{2A_1a_o^3+2B_1a_o^2-(A_1(A_1+A_2a_o))-(A_2a_o(A_1+A_2a_o))+2(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}{2a_o^2}\\ =&\mp(A_1+A_2a_o)\frac{\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2}\\ &+\frac{2a_o^3A_1+2a_o^2B_1+A_1^2+2A_1A_2a_o+A_2^2a_o^2-4a_o^2}{2a_o^2} \end{align*} \begin{align*} &\frac{\left(-(A_1+A_2 a_o)\pm\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}\right)^2}{2a_o^2}=\\ =&\frac{(A_1+A_2a_o)^2\mp2(A_1+A_2a_o)\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}+((A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o))}{2a_o^2}\\ =&\frac{2(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)\mp2(A_1+A_2a_o)\sqrt{(A_1+A_2a_o)^2-4a_o^2(B_1+A_1a_o)}}{2a_o^2} \end{align*} \end{comment}
The other assumption that we made is that both answers for $a(2)$ are rational. When we solve for level 3 we know that in each case one answer will be from level 1, so rational. Then the other answer must be rational because the product of the two answers is the constant term in the quadratic polynomial, $B_1+A_1a(2)_i$, which is rational. Likewise, if levels $n-1$ and $n$ are rational then level $n+1$ will be rational. So by induction we see that all levels are rational numbers. \end{proof}
Even with the stipulation that $A_1=B_2$ and $a(2)_1,a(2)_2$ are both rational, this more general first order recurrence encompasses more than just the Somos-4 ratios of ratios. For example, let $A_1=B_2=1,A_2=5,B_1=8$ with the initial condition $a(1)=1$, then the recurrence is \begin{align}\label{example} a(n-1)^2 a(n)^2 + (1+5a(n-1))a(n) + (8+a(n-1)) = 0 \end{align} The reader may check that $a(2)_1, a(2)_2$ are in fact rational. There is no $(c_1,c_2)$ such that (\ref{SomO1D2}), the generalized Somos-4 ratios of ratios recurrence, is the same as (\ref{example}).
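In this example the second level of the tree can be written down explicitly: setting $a(1)=1$ in (\ref{example}) gives $a(2)^2+6a(2)+9=(a(2)+3)^2=0$, so $a(2)_1=a(2)_2=-3$ is a repeated rational root and the second hypothesis is indeed satisfied.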
Another fact that is worth pointing out is that these recurrences produce at most two sequences. In Section \ref{GenS4RatRat} we showed that the first order quadratic recurrence for $a_c(n)$, the ratios of ratios of the generalized Somos-4 sequence, generates a unique sequence. This was partly due to the fact that at level 2 one of the solutions is $a_c(1)$, so we only have one new solution. However, it is possible to have two new solutions on level 2. In this case, on level $n>2$ we could have two new solutions, each coming from one of the $a(2)$. However, we can never get more than two sequences.
\section{Recurrence Tree for Generalized Somos-4 Sequence} So far we have looked at first order nonlinear recurrences that generate trees. What about higher order nonlinear recurrences that generate trees? As an example we will look at the generalized Somos-4 recurrence (\ref{GenSom4}). This recurrence is order 4 and quadratic, but with $m=1$. Since we have a first order quadratic recurrence for the ratios of ratios of the generalized Somos-4 sequence given by (\ref{SomO1D2}) we can ``unfold" this recurrence to get an order 3 recurrence that should be satisfied by the generalized Somos-4 sequence. Since we assumed that $a_c(n) := t_c(n+1)/t_c(n)$ and $t_c(n):= s_c(n+1)/s_c(n)$ we can substitute into (\ref{SomO1D2}). The recurrence we obtain is \begin{align}\label{Som4Ord3} s_c(n+3)^2&s_c(n)^2+c_1s_c(n+2)^3s_c(n)+c_2s_c(n+2)^2s_c(n+1)^2\\ -((2c_1&+c_2+1)s_c(n+2)s_c(n+1)s_c(n)-c_1s_c(n+1)^3)s_c(n+3) = 0 \notag \end{align}
In Section \ref{GenS4RatRat} we were able to show that the recurrence tree for the ratios of ratios of the Somos-4 recurrence is just a sequence in disguise because there is only one new term per level. One might think that the same happens here since (\ref{Som4Ord3}) is supposed to generate the generalized Somos-4 sequence, however this is not the case. Instead, we get many new terms per level, and therefore generate many sequences. Also surprisingly, we even get some non-integer rational numbers in the tree. As it turns out, at least one of the sequences in the tree has a simple closed form. \begin{prop}\label{Som4Ord3Subseq} One of the sequences generated by the recurrence (\ref{Som4Ord3}) is $$s_c(1)=s_c(2)=s_c(3)=1, \{(c_1+c_2)^{f(n)}\}_{n=4}^\infty$$ where $f(n)=\left\lfloor\frac{(n-3)^2}{4}\right\rfloor$. \end{prop} \begin{proof} First note that the sequence $\{f(n)\}_{n=4}^\infty$ is $$0,1,2,4,6,9,12,16,20$$ I claim that $f(2i+1)-f(2i)=i-1$, and $f(2(i+1))-f(2i+1)=i-1$. The value of $f(2i)$ is \begin{align*} \left\lfloor\frac{(2i-3)^2}{4}\right\rfloor&= i^2-3i+2 \end{align*} and the value of $f(2i+1)$ is \begin{align*} \left\lfloor\frac{(2i+1-3)^2}{4}\right\rfloor&= i^2-2i+1 \end{align*} Their difference is $i-1$ as claimed. Now, $f(2(i+1))$ is \begin{align*} \left\lfloor\frac{(2(i+1)-3)^2}{4}\right\rfloor&= i^2-i+0 \end{align*} The difference $f(2(i+1))-f(2i+1)$ is indeed $i-1$.
Now we will prove this proposition by induction. First, the base case: we need to show that $s_c(4)=(c_1+c_2)^0, s_c(5)=(c_1+c_2)^1, s_c(6)=(c_1+c_2)^2$. This is straightforward computation, substitute in $s_c(1)=s_c(2)=s_c(3)=1$ in (\ref{Som4Ord3}) and solve for $s_c(4)$. We get two possibilities, 1 and $c_1+c_2$. In this case we choose $s_c(4)=1$. Now we again substitute $s_c(2)=s_c(3)=s_c(4)=1$ in (\ref{Som4Ord3}) and solve for $s_c(5)$. This is the same quadratic equation so we get the same 2 solutions, this time we choose $s_c(5)=c_1+c_2$. To finish off the base case we substitute $s_c(3)=s_c(4)=1, s_c(5)=c_1+c_2$ in (\ref{Som4Ord3}) and solve for $s_c(6)$. Our two possible solutions are $$c_1^2+c_2c_1+c_2~\mathrm{and}~(c_1+c_2)^2$$ so we choose $s_c(6)=(c_1+c_2)^2$. So the base case is true. Now assume, as the inductive hypothesis, that the proposition is true up to $n+2$. We have two possibilities \begin{description} \item[Case 1:] $n=2i$ \begin{align*} s_c(2i)=&(c_1+c_2)^k\\ s_c(2i+1)=&(c_1+c_2)^{k+i-1}\\ s_c(2(i+1))=&(c_1+c_2)^{k+2i-2} \end{align*} If we substitute this into (\ref{Som4Ord3}) and simplify, we obtain the quadratic equation \begin{align*} s(2i+3)^2(c_1+c_2)^{2k}-((c_1+c_2+1)(c_1+c_2)^{3k+3i-3})s(2i+3)&+\\ +(c_1+c_2)(c_2+c_1)^{4k+6i-6}&=0 \end{align*} which we can easily solve using the quadratic formula. Our two possibilities when solving for $s(2i+3)$ are \[s(2i+3)= \left\{\begin{array}{l} (c_1+c_2)^{k+3i-2}\\ (c_1+c_2)^{k+3i-3} \end{array}\right.\]
\begin{comment} {\allowdisplaybreaks \begin{align*} &s(2i+3) = \frac{(c_1+c_2+1)(c_1+c_2)^{3k+3i-3}}{2(c_1+c_2)^{2k}}\pm\\ &\pm\frac{ \sqrt{((c_1+c_2+1)(c_1+c_2)^{3k+3i-3})^2-4(c_1+c_2)^{2k}(c_2+c_1)^{4k+6i-5}}}{2(c_1+c_2)^{2k}}\\ &\phantom{s(2i+3}= \frac{(c_1+c_2+1)(c_1+c_2)^{3k+3i-3}}{2(c_1+c_2)^{2k}} \pm\\ &\phantom{s(2i+3=}\pm \frac{\sqrt{(c_1+c_2+1)^2(c_1+c_2)^{6k+6i-6}-4(c_1+c_2)^{6k+6i-5}}}{2(c_1+c_2)^{2k}}\\ &\phantom{s(2i+3}= \frac{(c_1+c_2+1)(c_1+c_2)^{3k+3i-3} \pm \sqrt{(c_1+c_2)^{6k+6i-6}(c_1+c_2-1)^2}}{2(c_1+c_2)^{2k}}\\ &\phantom{s(2i+3}=\frac{(c_1+c_2)^{3k+3i-3}\left((c_1+c_2+1)\pm (c_1+c_2-1)\right)}{2(c_1+c_2)^{2k}}\\ &\phantom{s(2i+3}=\frac{1}{2}(c_1+c_2)^{k+3i-3}\left(c_1+c_2+1 \pm (c_1+c_2-1) \right)\\ &\phantom{s(2i+3)}= \left\{\begin{array}{l} \frac{1}{2}(c_1+c_2)^{k+3i-3}(2c_1+2c_2)\\ \frac{1}{2}(c_1+c_2)^{k+3i-3}(2) \end{array}\right.\\ &\phantom{s(2i+3)}= \left\{\begin{array}{l} (c_1+c_2)^{k+3i-2}\\ (c_1+c_2)^{k+3i-3} \end{array}\right. \end{align*}} \end{comment}
We expected $s_c(2(i+1)+1)=(c_1+c_2)^{k+2i-2+i}=(c_1+c_2)^{k+3i-2}$ and we have it if we choose the ``$+$" in the quadratic formula. \item[Case 2:] $n=2i+1$ \begin{align*} s_c(2i+1)=&(c_1+c_2)^k\\ s_c(2(i+1))=&(c_1+c_2)^{k+i-1}\\ s_c(2(i+1)+1)=&(c_1+c_2)^{k+2i-1} \end{align*} Again we substitute this in to (\ref{Som4Ord3}) to obtain the following quadratic equation \begin{align*} s(2i&+4)^2(c_1+c_2)^{2k}+\\ &-\left((2c_1+c_2+1)(c_1+c_2)^{3k+3i-2}-c_1(c_1+c_2)^{3k+3i-3}\right)s(2i+4)+\\ &+c_1(c_1+c_2)^{4k+6i-3}+c_2(c_2+c_1)^{4k+6i-4}=0 \end{align*} which we can again solve using the quadratic equation and obtain \[ s(2i+4) = \left\{\begin{array}{l} (c_1+c_2)^{k+3i-1}\\ (c_1+c_2)^{k+3i-3}(c_2+c_2c_1+c_1^2) \end{array}\right. \] \begin{comment} I'll skip the beginning steps here \begin{align*} s(2i+4) =& \frac{(c_1+c_2)^{3k+3i-3}(c_2+3c_2c_1+c_2^2+2c_1^2)}{2(c_1+c_2)^{2k}}\pm\\ &\pm\frac{\sqrt{(c_1+c_2)^{6k+6i-6}c_2^2(c_1+c_2-1)^2}}{2(c_1+c_2)^{2k}}\\ =& \frac{(c_1+c_2)^{3k+3i-3}(c_2+3c_2c_1+c_2^2+2c_1^2)}{2(c_1+c_2)^{2k}}\pm\\ &\pm\frac{(c_1+c_2)^{3k+3i-3}c_2(c_1+c_2-1)}{2(c_1+c_2)^{2k}}\\ =& \frac{1}{2}(c_1+c_2)^{k+3i-3}\left((c_2+3c_2c_1+c_2^2+2c_1^2)\pm c_2(c_1+c_2-1)\right)\\ =& \left\{\begin{array}{l} \frac{1}{2}(c_1+c_2)^{k+3i-3}(c_2+3c_2c_1+c_2^2+2c_1^2+c_2(c_1+c_2-1))\\ \frac{1}{2}(c_1+c_2)^{k+3i-3}(c_2+3c_2c_1+c_2^2+2c_1^2-c_2(c_1+c_2-1)) \end{array}\right.\\ =& \left\{\begin{array}{l} \frac{1}{2}(c_1+c_2)^{k+3i-3}(2(c_1+c_2)^2)\\ \frac{1}{2}(c_1+c_2)^{k+3i-3}(2c_2+2c_2c_1+2c_1^2) \end{array}\right.\\ =& \left\{\begin{array}{l} (c_1+c_2)^{k+3i-1}\\ (c_1+c_2)^{k+3i-3}(c_2+c_2c_1+c_1^2) \end{array}\right.\\ \end{align*} \end{comment} Again, if we choose the $``+"$ in the quadratic equation we get \[s_c(2(i+2))=(c_1+c_2)^{k+2i-1+i}=(c_1+c_2)^{k+3i-1}\] as expected. \end{description} \end{proof} The fact that we were able to find a nice closed form for one of the integer sequences in this recurrence tree is very surprising. The closed form for the generalized Somos-4 sequence is in terms of elliptic theta functions \cite{Hone}, but by finding a lower order recurrence with higher degree we were able to find a polynomial sequence.
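For example, when $c_1=c_2=1$ this branch of the recurrence tree is $1,1,1,1,2,4,16,64,512,4096,\ldots$, i.e.- powers of $2$ whose exponents are $0,1,2,4,6,9,12,\ldots$.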
In looking at the tree that the recurrence (\ref{Som4Ord3}) generates I have noticed that something even more general seems to be true. \begin{conj} Let $T$ be the recurrence tree for (\ref{Som4Ord3}). Every integer, and the numerator and denominator of every (reduced) non-integer rational number in $T$, are products of terms in the generalized Somos-4 sequence. \end{conj} \noindent Clearly, proposition \ref{Som4Ord3Subseq} is consistent with this conjecture since $s_c(5)=c_1+c_2$ in the general Somos-4 recurrence (\ref{GenSom4}).
\section{Maple code} This subject could not have been studied without the use of a computer algebra system, Maple in my case. The Maple code accompanying this paper can be found on my website \texttt{http://math.rutgers.edu/$\sim$eahogan/maple/}. I created programs that calculate the recurrence tree for a given recurrence of any order and any degree. These programs can be found in the file \texttt{RecurrenceTree.txt}. I also created programs that generate all recurrences that have rational recurrence trees to a certain depth (i.e.- if the tree for a specific recurrence is rational up to a test depth, the program outputs that recurrence). Those can be found in file \texttt{GenerateRationalRecurrenceTrees.txt}.
\section{Conclusion} My study of recurrence trees in this paper may be just the beginning. The first order recurrences I looked at were limited to the case where $m=2$. As $m$ grows it seems less likely that the recurrence trees generated will be rational. Though it could be the case that a subtree is rational, or perhaps just a single sequence. The only higher order recurrence tree I investigated was that of the generalized Somos-4 recurrence. That specific recurrence is not completely characterized, but I suspect a generalization, along the lines of section \ref{Ord1QuadTree}, can be made which may yield behavior like (\ref{Som4Ord3}).
\end{document} | arXiv |
\begin{document}
\title [] {Schwarz's Lemmas for mappings satisfying Poisson's equation }
\def\@arabic\c@footnote{} \footnotetext{ \texttt{\tiny File:~\jobname .tex,
printed: \number\day-\number\month-\number\year,
\thehours.\ifnum\theminutes<10{0}\fi\theminutes} } \makeatletter\def\@arabic\c@footnote{\@arabic\c@footnote}\makeatother
\author{Shaolin Chen} \address{Sh. Chen, College of Mathematics and Statistics, Hengyang Normal University, Hengyang, Hunan 421008, People's Republic of China} \email{[email protected]}
\author{Saminathan Ponnusamy } \address{S. Ponnusamy, Stat-Math Unit, Indian Statistical Institute (ISI), Chennai Centre, 110, Nelson Manickam Road, Aminjikarai, Chennai, 600 029, India. } \email{[email protected], [email protected]}
\subjclass[2000]{Primary: 31A05, 31B05} \keywords{Poisson's equation, Schwarz's lemma, Landau's theorem, Gaussian hypergeometric function. }
\begin{abstract} For $n\geq3$, $m\geq1$ and a given continuous function $g:~\Omega\rightarrow\mathbb{R}^{m}$, we establish some Schwarz type lemmas for mappings $f$ of $\Omega$ into $\mathbb{R}^{m}$ satisfying the {\rm PDE}: $\Delta f=g$, where $\Omega$ is a subset of $\mathbb{R}^{n}$. Then we apply these
results to obtain a Landau type theorem. \end{abstract}
\maketitle \pagestyle{myheadings} \markboth{Sh. Chen and S. Ponnusamy} {Schwarz's Lemmas for mappings satisfying Poisson's equation}
\section{Preliminaries and statements of main results}\label{csw-sec1} \subsection{Notations} For $n\geq2$, let $\mathbb{R}^{n}
$ denote the usual real vector space of dimension $n$. Sometimes it is convenient to identify each point $x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$ with an $n\times 1$ column matrix so that $$x=\left(\begin{array}{cccc} x_{1} \\ \vdots \\
x_{n} \end{array}\right). $$ For $w=(w_{1},\ldots,w_{n})$ and $x\in\mathbb{R}^{n}$, we define the Euclidean inner product $\langle \cdot ,\cdot \rangle$ by $\langle x,w\rangle=x_{1}w_{1}+\cdots+x_{n}w_{n} $ so that the Euclidean length of $x$ is defined by
$$|x|=\langle x,x\rangle^{1/2}=(|x_{1}|^{2}+\cdots+|x_{n}|^{2})^{1/2}. $$ Denote a ball in $\mathbb{R}^{n}$ with center $w\in\mathbb{R}^{n}$ and radius $r$ by
$$\mathbb{B}^{n}(w,r)=\{x\in\mathbb{R}^{n}:\, |x-w|<r\}. $$ In particular, $\mathbb{B}^{n}$ and $\mathbb{S}^{n-1}$ denote the unit ball $\mathbb{B}^{n}(0,1)$ and the unit sphere in $\mathbb{R}^{n}$, respectively. Set $\mathbb{D}=\mathbb{B}^2$, the open unit disk in the complex plane $\mathbb{C}\cong \mathbb{R}^{2}$. For $k\in\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ and $m\in\mathbb{N}_0$, we denote by $\mathcal{C}^{k}(\Omega_{1},\Omega_{2})$ the set of all $k$-times continuously differentiable functions from $\Omega_{1}$ into $\Omega_{2}$, where $\Omega_{1}$ and $\Omega_{2}$ are subsets of $\mathbb{R}^{n}$ and $\mathbb{R}^{m}$, respectively. In particular, let $\mathcal{C}(\Omega_{1},\Omega_{2}):=\mathcal{C}^{0}(\Omega_{1},\Omega_{2})$, the set of all continuous functions of $\Omega_{1}$ into $\Omega_{2}$. For $f=(f_{1},\ldots,f_{m})\in\mathcal{C}^{1}(\Omega_{1},\Omega_{2})$, we denote the derivative $D_{f}$ of $f$ by $$D_{f}=\left(\begin{array}{cccc} \displaystyle D_{1}f_{1}\; \cdots\;
D_{n}f_{1}\\[4mm] \vdots\;\; \;\;\cdots\;\;\;\;\vdots \\[2mm]
\displaystyle D_{1}f_{m}\; \cdots\;
D_{n}f_{m} \end{array}\right), \quad D_{j}f_{i}(x)=\frac{\partial f_{i}(x)}{\partial x_j}. $$ In particular, if $n=m$, the Jacobian of $f$ is defined by $J_{f}=\det D_{f}$ and the Laplacian of $f\in\mathcal{C}^{1}(\Omega_{1},\Omega_{2})$ is defined by $$\Delta f=\sum_{k=1}^{n}D_{kk}f. $$ For an $m\times n$ matrix $A$, the operator norm of $A$ is defined by
$$|A|=\sup_{x\neq 0}\frac{|Ax|}{|x|}=\max\{|A\theta|:\, \theta\in\mathbb{S}^{n-1}\} $$ where $m\geq1$ and $n\geq2$.
\subsection{Poisson equation and Schwarz lemma}
For $x,y\in\mathbb{R}^{n}\backslash\{0\}$, we define $x'=x/|x|$, $y'=y/|y|$ and let
$$[x,y]:=\left|y|x|-x'\right|=\left|x|y|-y'\right|. $$
Also, for $x,y\in\mathbb{B}^{n}$ with $x\neq y$ and $|x|+|y|\neq0$, we use $G(x,y)$ to denote the {\it Green function}: \begin{equation}\label{eq-ex0}
G(x,y)=c_{n}\left(\frac{1}{|x-y|^{n-2}}-\frac{1}{[x,y]^{n-2}}\right), \end{equation} where $c_{n}=1/[(n-2)\omega_{n-1}]$ and $\omega_{n-1}=2\pi^{\frac{n}{2}}/\Gamma\big(\frac{n}{2}\big)$ denotes the {\it Hausdorff measure} of $\mathbb{S}^{n-1}$. The {\it Poisson kernel} $P:\,\mathbb{B}^{n}\times \mathbb{S}^{n-1}\rightarrow {\mathbb R}$ is defined by
$$P(x,\zeta)=\frac{1-|x|^{2}}{|x-\zeta|^{n}}. $$ We write $$\nabla =\left ( \frac{\partial }{\partial x_1}, \ldots, \frac{\partial }{\partial x_n} \right ) $$ and for a vector valued function $f=(f_1, \ldots, f_m)$, we define the directional derivative $\frac{\partial f}{\partial \overrightarrow{n}}$ by a componentwise interpretation: $$ \frac{\partial f}{\partial \overrightarrow{n}}(x)=\left ( \langle \nabla f_1(x), x'\rangle, \ldots, \ldots, \langle \nabla f_m(x), x'\rangle \right ). $$ For a given bounded integrable function $\psi:~\mathbb{S}^{n-1}\rightarrow\mathbb{R}^{m}$ and $g\in\mathcal{C}(\mathbb{B}^{n},\mathbb{R}^{m})$, the solution of the {\it Poisson equation} \begin{equation}\label{eq-1} \begin{cases} \displaystyle \Delta f=g & \mbox{ in } \mathbb{B}^{n}\\ \displaystyle f=\psi &\mbox{ in }\, \mathbb{S}^{n-1} \end{cases} \end{equation} is given by \begin{equation}\label{eq-p} f(x)=\mathcal{P}_{\psi}(x)-\mathcal{G}_{g}(x), \end{equation} where \begin{equation}\label{eq-p1a} \mathcal{P}_{\psi}(x)=\int_{\mathbb{S}^{n-1}}P(x,\zeta)\psi(\zeta)d\sigma(\zeta) ~\mbox{ and }~\mathcal{G}_{g}(x)= \int_{\mathbb{B}^{n}}G(x,y)g(y)dV(y) \end{equation} for $x\in\mathbb{B}^{n}$. Here $d\sigma$ denotes the normalized Lebesgue surface measure on $\mathbb{S}^{n-1}$ and $dV$ is the Lebesgue volume measure on $\mathbb{B}^{n}$. It is well known that if $\psi$ and $g$ are continuous in $\mathbb{S}^{n-1}$ and in $\overline{\mathbb{B}^{n}}$, respectively, then $f=\mathcal{P}_{\psi}-\mathcal{G}_{g}$ has a continuous extension $\tilde{f}$ to the boundary, and $\tilde{f}=\psi$ in $\mathbb{S}^{n-1}$ (see \cite[p.~118--120]{Ho} or \cite{K4,K3,K1}).
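It is worth recording one elementary special case of this representation: if $g\equiv c$ is a constant vector, then, since $u(x)=(|x|^{2}-1)/(2n)$ satisfies $\Delta u=1$ in $\mathbb{B}^{n}$ and vanishes on $\mathbb{S}^{n-1}$, we have
$$\mathcal{G}_{g}(x)=\frac{1-|x|^{2}}{2n}\,c, \quad x\in\mathbb{B}^{n}, $$
which already indicates how the factor $(1-|x|^{2})/(2n)$ enters the estimates below.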
The classical Schwarz lemma states that a holomorphic function $f$ from $\mathbb{D}$ into itself with $f(0)=0$ satisfies
$|f(z)|\leq|z|$ for all $z\in\mathbb{D}$. It is well known that the Schwarz lemma has been a central theme in many branches of mathematical research for more than a hundred years. For $n\geq3$, the classical Schwarz lemma for harmonic mappings in $\mathbb{B}^{n}$ states that if $f$ is a harmonic mapping of $\mathbb{B}^{n}$ into itself satisfying $f(0)=0,$ then
$$|f(x)|\leq U(rN), $$
where $r=|x|$, $N=(0,\ldots,0,1)$ and $U$ is a harmonic function of $\mathbb{B}^{n}$ into $[-1,1]$ defined by $$U(x)=\mathcal{P}_{(\mathcal{X}_{S^{+}}-\mathcal{X}_{S^{-}})}(x). $$ Here $\mathcal{X}$ is the indicator function, $S^{+}=\{x=(x_{1},\ldots,x_{n})\in\mathbb{S}^{n-1}:~x_{n}\geq0\}$ and $S^{-}=\{x=(x_{1},\ldots,x_{n})\in\mathbb{S}^{n-1}:~x_{n}\leq0\}$ (see \cite{ABR}). For the case $n=2,$ we refer to \cite{CK,CV,He, K0}.
In \cite{K5}, Kalaj proved the following result for harmonic mappings $f$ of $\mathbb{B}^{n}$ into itself: \begin{equation}\label{eq-K}
\left|f(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}f(0)\right|\leq U(|x|N). \end{equation} \subsection{Main results and Remarks} The first aim of the paper is to extend the result \eqref{eq-K} to mappings satisfying the Poisson equation. More precisely, we shall prove the following.
\begin{thm}\label{thm-1} Let $n\geq3$, $m\geq1$ and $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$. If $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{m})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{m})$ satisfies $\Delta f=g$, then, for $x\in\overline{\mathbb{B}^{n}}$, \begin{equation}\label{eq-thm1}
\left|f(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}\mathcal{P}_{f}(0)\right|
\leq\|\mathcal{P}_{f}\|_{\infty}U(|x|N)
+\frac{\|g\|_{\infty}}{2n}(1-|x|^{2}),\end{equation} where
$$\mathcal{P}_{f}(x)=\int_{\mathbb{S}^{n-1}}P(x,\zeta)f(\zeta)d\sigma(\zeta),~\|f\|_{\infty}=
\sup_{x\in\mathbb{B}^{n}}|f(x)|~\mbox{and}~\|g\|_{\infty}=\sup_{x\in\mathbb{B}^{n}}|g(x)|. $$ If we choose $g(x)=(-2nM,0,\ldots,0)$ and
$f(x)=(M(1-|x|^{2}),0,\ldots,0)$ for $x\in\overline{\mathbb{B}^{n}}$, then
the inequality {\rm(\ref{eq-thm1})} is sharp in $\mathbb{S}^{n-1}\cup\{0\}$, where $M>0$ is a constant. \end{thm}
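For the extremal pair above, one can check directly that $\Delta\big(M(1-|x|^{2})\big)=-2nM$, so that $\Delta f=g$, while $f$ vanishes on $\mathbb{S}^{n-1}$ and hence $\mathcal{P}_{f}\equiv0$; both sides of {\rm(\ref{eq-thm1})} then equal $M$ at $x=0$ and $0$ on $\mathbb{S}^{n-1}$.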
By Theorem \ref{thm-1}, we give an explicit estimate as follows.
\begin{cor}\label{cor-1} Let $n\geq3$, $m\geq1$ and $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$. If $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{m})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{m})$ satisfies $\Delta f=g$, then, for $x\in\overline{\mathbb{B}^{n}}$, \begin{equation}\label{eq-thm1a}
\left|f(x)-\frac{1-|x|}{(1+|x|)^{n-1}}\mathcal{P}_{f}(0)\right|
\leq\|\mathcal{P}_{f}\|_{\infty}\left[1-\frac{1-|x|}{(1+|x|)^{n-1}}\right]
+\frac{\|g\|_{\infty}}{2n}(1-|x|^{2}).
\end{equation} If we choose $g(x)=(-2nM,0,\ldots,0)$ and $f(x)=(M(1-|x|^{2}),0,\ldots,0)$ for $x\in\overline{\mathbb{B}^{n}}$, then the inequality {\rm(\ref{eq-thm1a})} is sharp in $\mathbb{S}^{n-1}\cup\{0\}$, where $M>0$ is a constant. \end{cor}
There is a classical Schwarz lemma at the boundary, which reads as follows and may be found in standard texts. See for instance, \cite{G} or \cite[p.~249, Corollary~6.62]{pon}.
\begin{Thm} \label{Thm-B} Let $f$ be a holomorphic function from $\mathbb{D}$ into itself. If $f$ is holomorphic at $z=1$ with $f(0)=0$ and $f(1)=1$, then $f'(1)\geq1$. Moreover, the inequality is sharp. \end{Thm}
Theorem \Ref{Thm-B} has been generalized in various forms. For example, Krantz \cite{Kra} explored many versions of the Schwarz lemma at the boundary point of a domain in $\mathbb{C}$, and reviewed several results of many authors. A natural response to these results is to ask whether we can extend the Schwarz lemma at the boundary to higher dimensional cases. This form of the Schwarz lemma has attracted much attention, see \cite{BK,LWT,LT} for holomorphic functions, and see \cite{ABR,Bu, K5,K6,MM} for harmonic functions. In the following, by using Theorem \ref{thm-1}, we establish a Schwarz lemma at the boundary for mappings satisfying Poisson's equation, which is also a generalization of Theorem \Ref{Thm-B}.
\begin{thm}\label{thm-2} For $n\geq3$, $m\geq1$ and a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$, let $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{m})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{m})$ be a mapping of $\mathbb{B}^{n}$ into itself satisfying $\Delta f=g,$ where
$$\|g\|_{\infty}<\frac{nA_{n}}{\big(1+\frac{1}{2^{\frac{n}{2}}}\big)} ~\mbox{ and }~ A_{n}=\frac{n!\big[1+n-(n-2)F\big(\frac{1}{2},1;\frac{n+3}{2};-1\big)\big]}{2^{\frac{3n}{2}} \Gamma\big(\frac{n+1}{2}\big)\Gamma\big(\frac{n+3}{2}\big)}. $$
If $f(0)=0$ and $\lim_{r\rightarrow1^{-}}|f(r\zeta)|=1$ for some $\zeta\in\mathbb{S}^{n-1}$, then \begin{equation}\label{eq-Sch}
\liminf_{r\rightarrow1^{-}}\frac{|f(\zeta)-f(r\zeta)|}{1-r}\geq A_{n}-\frac{\|g\|_{\infty}}{n}\left(1+\frac{1}{2^{\frac{n}{2}}}\right).\end{equation} In particular, if $\|g\|_{\infty}=0$, then the estimate of {\rm(\ref{eq-Sch})} is sharp. \end{thm}
The definition of the classical hypergeometric function $F(a,b;c;z)$ is given in Section \ref{csw-sec2}.
Next, we state a Schwarz-Pick type lemma, which is a generalization of \cite[Theorem 1.3]{K6}, \cite[Theorem 2.12]{MM} and \cite[Corollary 2.2]{K5}.
\begin{thm}\label{thm-3} Let $n\geq3$ and $m\geq1$. If $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{m})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{m})$ satisfies $\Delta f=g$ for a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$, then
$$|D_{f}(x)|\leq\frac{\|\mathcal{P}_{f}\|_{\infty}}{1-r}\sup_{\gamma>0}C(\gamma,r)+
\frac{n}{n+1}\|g\|_{\infty} ~\mbox{ for $x\in\mathbb{B}^{n}$}, $$
where $r=|x|$ and $$C(\gamma,r)=\frac{4\omega_{n-3}}{\omega_{n-1}} \frac{2^{n-1}}{(1+r)^{n-1}}\frac{1}{\sqrt{1+\gamma^{2}}}\int_{0}^{1}\frac{\Psi_{r}(\gamma t)+\Psi_{r}(-\gamma t)}{\sqrt{(1-t^{2})^{4-n}}}dt<\infty. $$ Here $$\Psi_{r}(\gamma)=\int_{0}^{\frac{\gamma+\sqrt{\gamma^{2}+1-\alpha^{2}(r)}}{1-\alpha(r)}} \frac{n-\beta(r)+n\gamma \rho-\beta(r)\rho^{2}}{(1+\rho^{2})^{\frac{n}{2}+1}(1+\tau^{2}(r)\rho^{2})^{\frac{n}{2}-1}}\rho^{n-2}d\rho, $$ $$\tau(r)=\frac{1-r}{1+r},~\alpha(r)=\frac{r(n-2)}{n}~\mbox{and}~\beta(r)=\frac{[n-(n-2)r]}{2}. $$ \end{thm}
\begin{rem} We observe that if $n=4$ in Theorem \ref{thm-3}, then $\sup_{\gamma>0}C(\gamma,r)$ can be replaced by $$\frac{\left[r\sqrt{4-r^{2}}(2+r^{2})+4(1-r^{2})\arctan\big(r\frac{\sqrt{4-r^{2}}}{r^{2}-2}\big)\right]}{\pi(1+r)r^{3}}, $$
where $r=|x|$ (see \cite[Theorem 1.3]{K6}). In particular, using (\ref{eq-K}), we can obtain an explicit estimate for
$|D_{\mathcal{P}_{f}}(x)|$. But it is not going to yield a better bound than the bound stated in Theorem \ref{thm-3}. Indeed, for any fixed $x\in\mathbb{B}^{n}$, consider the function
$$\nu(y)=\mathcal{P}_{f}\big(x+(1-|x|)y\big). $$
Now, if we apply (\ref{eq-K}) to $\nu/\|\mathcal{P}_{f}\|_{\infty}$, then we deduce that
$$\left|\frac{\mathcal{P}_{f}\big(x+(1-|x|)y\big)-\mathcal{P}_{f}(x)}{|y|}-
\frac{\left[(1-|y|^{2})(1+|y|^{2})^{-n/2}-1\right]}{|y|}\mathcal{P}_{f}(x)\right|\leq\|\mathcal{P}_{f}\|_{\infty}\frac{U(|y|N)}{|y|}, $$ which, together with the fact that $$\lim_{t\rightarrow0^{+}}\frac{(1-t^{2})(1+t^{2})^{-n/2}-1}{t}=0, $$ implies \begin{equation}\label{eq18c}
(1-|x|)|D_{\mathcal{P}_{f}}(x)|\leq\|\mathcal{P}_{f}\|_{\infty}\frac{\partial U(rN)}{\partial r}\Big|_{r=0}=\frac{2\omega_{n-1}}{V(\mathbb{B}^{n})}\|\mathcal{P}_{f}\|_{\infty}=2n\|\mathcal{P}_{f}\|_{\infty},\end{equation} where $V(\mathbb{B}^{n})$ is the volume of $\mathbb{B}^{n}$. Finally, by (\ref{eq18c}) and (\ref{eq-15c}), we conclude that
\begin{equation}\label{eq-19cp}|D_{f}(x)|\leq|D_{\mathcal{P}_{f}}(x)|+|D_{\mathcal{G}_{g}}(x)|\leq\frac{2n\|\mathcal{P}_{f}\|_{\infty}}{1-|x|}+
\frac{n}{n+1}\|g\|_{\infty} ~\mbox{ for $x\in\mathbb{B}^{n}$} \end{equation} as desired.
$\Box$ \end{rem}
There are a number of articles dealing with Landau type theorems in geometric function theory and, for a general class of functions without some additional condition(s), there is no Landau type theorem. See for example \cite{CK,CMPW,CP,CPW-211,CPW-2011,CPW,CPW-2015,CV,W} and the related references therein. In our next result, we use Theorems \ref{thm-1} and \ref{thm-3} to establish a Landau type theorem for mappings satisfying the Poisson equation.
For $n\geq3$ and a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{n})$, we use $\mathcal{F}_{g}$ to denote the set of all mappings $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{n})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{n})$
satisfying $\Delta f=g$ and $|f(0)|=J_{f}(0)-1=0$. Let $\mathcal{F}_{g}^{M}$ be the set of all mappings
$f\in\mathcal{F}_{g}$ satisfying $\|f\|_{\infty}+\|g\|_{\infty}\leq M$, where $M$ is a positive constant.
\begin{thm}\label{thm-4} For $n\geq3$ and a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{n})$, let $f\in\mathcal{F}_{g}^{M}$. Then there is a positive constant $r_{0}$ depending only on $M$ and $g$ such that $\mathbb{B}^{n}(0,r_{0})\subset f(\mathbb{B}^{n}).$ \end{thm}
\begin{rem} We wish to point out that the Landau type theorem fails for the family $\mathcal{F}_{g}$ without appropriate additional condition(s). For example, consider $g(x)=(0,\ldots,0, 2n/3)$ and
$f_{k}(x)=(kx_{1},x_{2}/k, x_{3},\ldots,x_{n-1},|x|^{2}/3+x_{n})$ for $k\in\{1,2,\ldots\}$, where $n\geq3$ and
$x=(x_{1},\ldots,x_{n})\in\mathbb{B}^{n}$. It is easy to see that each $f_{k}$ is univalent and $|f_{k}(0)|=J_{f_{k}}(0)-1=0$. Furthermore, each $f_{k}(\mathbb{B}^{n})$ contains no ball with radius bigger than $1/k$. Hence, there does not exist an absolute constant $r_{0}>0$ such that $\mathbb{B}^{n}(0,r_{0})$ is contained in the range $f_{k}(\mathbb{B}^{n})$ for all $k\in\{1,2,\ldots\}$. Although Theorem \ref{thm-4} provides the existence of a Landau-Bloch constant for $f\in\mathcal{F}_{g}^{M}$, an explicit estimate on this constant is not established. \end{rem}
The proofs of Theorems \ref{thm-1}, \ref{thm-2}, \ref{thm-3} and Corollary \ref{cor-1} will be presented in Section \ref{csw-sec2}. Moreover, the proof of Theorem \ref{thm-4} will be given in Section \ref{csw-sec3}.
\section{The Schwarz Lemmas for mappings satisfying Poisson's equation}\label{csw-sec2}
\subsection{M\"obius Transformations of the Unit Ball}\label{sbcsw-sec2.1}
For $x\in\mathbb{B}^{n}$, the {\it M\"obius transformation} in $\mathbb{B}^{n}$ is defined by \begin{equation}\label{eq-ex1}
\phi_{x}(y)=\frac{|x-y|^{2}x-(1-|x|^{2})(y-x)}{[x,y]^{2}},~y\in\mathbb{B}^{n}. \end{equation} The set of isometries of the hyperbolic unit ball is a {\it Kleinian subgroup} of the group of all M\"obius transformations of the extended space $\mathbb{R}^{n}\cup\{\infty\}$ onto itself. In the following, we make use of the {\it automorphism group} ${\operatorname{Aut}}(\mathbb{B}^{n})$ consisting of all M\"obius transformations of the unit ball $\mathbb{B}^{n}$ onto itself. We recall the following facts from \cite{Bea}: For $x\in\mathbb{B}^{n}$ and $\phi_{x}\in{\operatorname{Aut}}(\mathbb{B}^{n})$, we have $\phi_{x}(0)=x$, $\phi_{x}(x)=0$, $\phi_{x}(\phi_{x}(y))=y$ for $y\in\mathbb{B}^{n}$,
\begin{equation}\label{II}
|\phi_{x}(y)|=\frac{|x-y|}{[x,y]}, ~1-|\phi_{x}(y)|^{2}=\frac{(1-|x|^{2})(1-|y|^{2})}{[x,y]^{2}} \end{equation} and \begin{equation}\label{III}
|J_{\phi_{x}}(y)|=\frac{(1-|x|^{2})^{n}}{[x,y]^{2n}}. \end{equation}
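
For readers who wish to experiment with these identities, the following short Python sketch (our illustration, not part of the original text) checks (\ref{eq-ex1}), (\ref{II}) and the involution property $\phi_{x}(\phi_{x}(y))=y$ at randomly chosen points of $\mathbb{B}^{n}$. It assumes the standard notation $[x,y]^{2}=1-2\langle x,y\rangle+|x|^{2}|y|^{2}$, which is consistent with (\ref{II}).
\begin{verbatim}
# Illustration only: numerical check of the Moebius transformation identities.
# Assumes [x,y]^2 = 1 - 2<x,y> + |x|^2 |y|^2, consistent with (II).
import numpy as np

def bracket(x, y):
    return np.sqrt(1.0 - 2.0*np.dot(x, y) + np.dot(x, x)*np.dot(y, y))

def phi(x, y):
    # Moebius transformation of the unit ball, as in (eq-ex1)
    return (np.dot(x - y, x - y)*x - (1.0 - np.dot(x, x))*(y - x)) / bracket(x, y)**2

rng = np.random.default_rng(0)
n = 3
x = rng.standard_normal(n); x *= 0.5/max(1.0, np.linalg.norm(x))   # |x| <= 1/2
y = rng.standard_normal(n); y *= 0.5/max(1.0, np.linalg.norm(y))   # |y| <= 1/2

z = phi(x, y)
print(np.linalg.norm(z) - np.linalg.norm(x - y)/bracket(x, y))     # ~ 0
print((1 - np.dot(z, z))
      - (1 - np.dot(x, x))*(1 - np.dot(y, y))/bracket(x, y)**2)    # ~ 0
print(np.linalg.norm(phi(x, z) - y))                               # ~ 0 (involution)
\end{verbatim}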
\subsection{Gauss Hypergeometric Functions}\label{sbcsw-sec2.2}
For $a, b, c\in\mathbb{R}$ with $c\neq0, -1, -2, \ldots,$ the {\it hypergeometric} function is defined by the power series in the variable $x$
$$F(a,b;c;x)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{x^{k}}{k!},~|x|<1. $$ Here $(a)_{0}=1$, $(a)_{k}=a(a+1)\cdots(a+k-1)$ for $k=1, 2, \ldots$, and generally $(a)_{k}=\Gamma(a+k)/\Gamma(a)$ is the {\it Pochhammer} symbol, where $\Gamma$ is the {\it Gamma function}. In particular, for $a, b, c>0$ and $a+b<c$, we have (cf. \cite{PBM}) $$F(a,b;c;1)=\lim_{x\rightarrow1} F(a,b;c;x)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)}<\infty. $$
The following result is useful in showing one of our main results of the paper.
\begin{prop}{\rm (\cite{K2}~$\mbox{or}$~\cite[2.5.16(43)]{PBM})}\label{pro-1} For $\lambda_{1}>1$ and $\lambda_{2}>0$, we have $$\int_{0}^{\pi}\frac{\sin^{\lambda_{1}-1}t}{(1+r^{2}-2r\cos t)^{\lambda_{2}}}dt= \mathbf{B}\left(\frac{\lambda_{1}}{2},\frac{1}{2}\right) F\big(\lambda_{2},\lambda_{2}+\frac{1-\lambda_{1}}{2};\frac{1+\lambda_{1}}{2};r^{2}\big), $$ where $\mathbf{B}(.,.)$ denotes the beta function and $r\in[0,1)$. \end{prop}
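
As a quick sanity check, the identity in Proposition \ref{pro-1} is easy to verify numerically. The Python sketch below (our illustration) compares the two sides using SciPy's \texttt{quad}, \texttt{beta} and \texttt{hyp2f1} routines; the parameter values are chosen arbitrarily, subject to $\lambda_{1}>1$, $\lambda_{2}>0$ and $r\in[0,1)$.
\begin{verbatim}
# Illustration only: numerical check of the integral identity stated above.
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

def lhs(l1, l2, r):
    f = lambda t: np.sin(t)**(l1 - 1) / (1 + r**2 - 2*r*np.cos(t))**l2
    return quad(f, 0.0, np.pi)[0]

def rhs(l1, l2, r):
    return beta(l1/2, 0.5) * hyp2f1(l2, l2 + (1 - l1)/2, (1 + l1)/2, r**2)

for (l1, l2, r) in [(3.0, 2.0, 0.3), (4.0, 3.5, 0.7)]:
    print(l1, l2, r, lhs(l1, l2, r), rhs(l1, l2, r))   # the two values agree
\end{verbatim}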
\subsection{Proofs} \subsection*{Proof of Theorem \ref{thm-1}} Let $n\geq3$.
For $x,y\in\mathbb{B}^{n}$ with $x\neq y$ and $|x|+|y|\neq0$, by \eqref{II}, we have
\begin{eqnarray}\label{eq-2}
\left|\frac{1}{|x-y|^{n-2}}-\frac{1}{[x,y]^{n-2}}\right|&=&\frac{1}{|x-y|^{n-2}}\left|1-\frac{|x-y|^{n-2}}{[x,y]^{n-2}}\right|\\
\nonumber&=&\frac{1}{|x-\phi_{x}(z)|^{n-2}}\left(1-|z|^{n-2}\right), \end{eqnarray}
where $\phi_{x}\in{\operatorname{Aut}}(\mathbb{B}^{n})$ and $z=\phi_{x}(y)$. By \eqref{eq-ex1}, direct calculation shows that
\begin{eqnarray*}
x-\phi_{x}(z)=\frac{x[x,z]^{2}-|x-z|^{2}x+(1-|x|^{2})(z-x)}{[x,z]^{2}}
=\frac{(z-x|z|^{2})(1-|x|^{2})}{[x,z]^{2}}, \end{eqnarray*} which gives \begin{equation}\label{eq-3}
|x-\phi_{x}(z)|=\frac{|z|(1-|x|^{2})}{[x,z]}. \end{equation} By (\ref{eq-2}) and (\ref{eq-3}), we obtain \begin{equation}\label{eq-4}
\left|\frac{1}{|x-y|^{n-2}}-\frac{1}{[x,y]^{n-2}}\right|=\frac{[x,z]^{n-2}(1-|z|^{n-2})}{|z|^{n-2}(1-|x|^{2})^{n-2}}. \end{equation} Now, let $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$ be given. Then, by (\ref{eq-p}) with $f$ in place of $\psi$, we have $f(x)=\mathcal{P}_{f}(x)-\mathcal{G}_{g}(x), $ where $\mathcal{P}_{f}$ and $\mathcal{G}_{g}$ are defined by \eqref{eq-p1a}.
It follows that \begin{eqnarray}\label{eq-5}
\nonumber|\mathcal{G}_{g}(x)|&\leq&\int_{\mathbb{B}^{n}}|G(x,y)g(y)|dV(y)\\ \nonumber
&\leq&c_{n}\|g\|_{\infty}\int_{\mathbb{B}^{n}}\left|\frac{1}{|x-y|^{n-2}}-\frac{1}{[x,y]^{n-2}}\right|dV(y),~\mbox{ by \eqref{eq-ex0}},\\
\nonumber &=&c_{n}\|g\|_{\infty}\int_{\mathbb{B}^{n}}
\frac{[x,z]^{n-2}(1-|z|^{n-2})}{|z|^{n-2}(1-|x|^{2})^{n-2}}\frac{(1-|x|^{2})^{n}}{[x,z]^{2n}}dV(z),~\mbox{ by \eqref{II} and (\ref{eq-4})},\\
\nonumber
&=&c_{n}\|g\|_{\infty}(1-|x|^{2})^{2}\int_{\mathbb{B}^{n}}\frac{(1-|z|^{n-2})|z|^{n+2}}{|z|^{n-2}\big|x|z|^{2}-z\big|^{2+n}}dV(z)\\
&=&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{n-2}\int_{0}^{1}dr\int_{\mathbb{S}^{n-1}}\frac{r(1-r^{n-2})}{|rx-\zeta|^{2+n}}d\sigma(\zeta). \end{eqnarray}
Using the polar coordinates and Proposition \ref{pro-1}, we obtain \begin{eqnarray}\label{eq-6}\nonumber
\int_{\mathbb{S}^{n-1}}\frac{d\sigma(\zeta)}{|rx-\zeta|^{2+n}}&=&\frac{1}{\int_{0}^{\pi}\sin^{n-2}tdt}\int_{0}^{\pi}
\frac{\sin^{n-2}t}{\left(1+r^{2}|x|^{2}-2r|x|\cos t\right)^{\frac{n+2}{2}}}dt\\ \nonumber &=&\frac{\Gamma\big(\frac{n}{2}\big)}{\sqrt{\pi}\Gamma\big(\frac{n-1}{2}\big)}\cdot
\frac{\sqrt{\pi}\Gamma\big(\frac{n-1}{2}\big)}{\Gamma\big(\frac{n}{2}\big)}F\Big(\frac{n+2}{2},2;\frac{n}{2};r^{2}|x|^{2}\Big)\\
&=& F\Big(\frac{n+2}{2},2;\frac{n}{2};r^{2}|x|^{2}\Big). \end{eqnarray} By (\ref{eq-5}) and (\ref{eq-6}), we get \begin{eqnarray}\label{eq-7}\nonumber
|\mathcal{G}_{g}(x)|&\leq&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{n-2}
\int_{0}^{1}F\Big(\frac{n+2}{2},2;\frac{n}{2};r^{2}|x|^{2}\Big) r(1-r^{n-2})dr\\ \nonumber
&=&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{n-2}\int_{0}^{1}\sum_{k=0}^{\infty} \frac{\Gamma\big(\frac{n}{2}\big)\Gamma\big(\frac{n}{2}+k+1\big) \Gamma(2+k)}{\Gamma\big(\frac{n}{2}+1\big)\Gamma\big(\frac{n}{2}+k\big)\Gamma(2)}
\frac{r^{2k+1}(1-r^{n-2})|x|^{2k}}{k!}\\ \nonumber
&=&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{n-2}\int_{0}^{1}
\sum_{k=0}^{\infty}\frac{2}{n}\big(k+\frac{n}{2}\big)(k+1)r^{2k+1}(1-r^{n-2})|x|^{2k}dr\\ \nonumber
&=&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{n-2}\sum_{k=0}^{\infty}
\frac{2}{n}\big(k+\frac{n}{2}\big)(k+1)\frac{(n-2)}{2(k+1)(2k+n)}|x|^{2k}\\
\nonumber
&=&\frac{\|g\|_{\infty}(1-|x|^{2})^{2}}{2n}\sum_{k=0}^{\infty}|x|^{2k}\\
&=&\frac{\|g\|_{\infty}(1-|x|^{2})}{2n}. \end{eqnarray}
For $\rho\in(0,1)$, let $F(x)=\mathcal{P}_{f}(\rho x)$, $x\in\mathbb{B}^{n}$. Then \begin{equation}\label{eq-ex2} F(x)=\int_{\mathbb{S}^{n-1}}P(x,\zeta)F(\zeta)d\sigma(\zeta), \end{equation} which, together with (\ref{eq-K}), yields that \begin{equation}\label{eq-10}
\left|F(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}F(0)\right|
\leq\|\mathcal{P}_{f}\|_{\infty}U(|x|N). \end{equation} Applying (\ref{eq-10}), we see that \begin{eqnarray}\label{eq-11}\nonumber
\left|\mathcal{P}_{f}(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}\mathcal{P}_{f}(0)
\right|&=&\lim_{\rho\rightarrow1^{-}}\left|F(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}F(0)\right|\\
&\leq& \|\mathcal{P}_{f}\|_{\infty}U(|x|N).\end{eqnarray} Hence, by (\ref{eq-7}) and (\ref{eq-11}), we conclude that \begin{eqnarray*}
\left|f(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}\mathcal{P}_{f}(0)\right|&=&\left|\mathcal{P}_{f}(x)-
\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}\mathcal{P}_{f}(0)-\mathcal{G}_{g}(x)\right|\\
&\leq&\left|\mathcal{P}_{f}(x)-\frac{1-|x|^{2}}{(1+|x|^{2})^{\frac{n}{2}}}\mathcal{P}_{f}(0)\right|+|\mathcal{G}_{g}(x)|\\
&\leq&\|\mathcal{P}_{f}\|_{\infty}U(|x|N)+\frac{\|g\|_{\infty}}{2n}(1-|x|^{2}). \end{eqnarray*}
Now we prove the sharpness part. For $x\in\overline{\mathbb{B}^{n}}$, let $g(x)=(-2nM,0,\ldots,0)$ and
$f(x)=(M(1-|x|^{2}),0,\ldots,0)$, where $M$ is a positive constant. If $x\in\mathbb{S}^{n-1}$, then the optimality of (\ref{eq-thm1}) is obvious. On the other hand, if $x=0$, then
$$\|\mathcal{P}_{f}\|_{\infty}=0~\mbox{and}~M=|f(0)-\mathcal{P}_{f}(0)|=|f(0)|
= \frac{\|g\|_{\infty}}{2n}=M. $$ The proof of the theorem is complete. \qed
\subsection*{Proof of Corollary \ref{cor-1}} For $\rho\in(0,1)$, let $F(x)=\mathcal{P}_{f}(\rho x)$, $x\in\mathbb{B}^{n}$. Then, following the proof of Theorem \ref{thm-1} and considering $F$ described by \eqref{eq-ex2},
we deduce that \begin{eqnarray*} \nonumber
\left|F(x)-\frac{1-|x|}{(1+|x|)^{n-1}}F(0)\right|&=&
\left|\int_{\mathbb{S}^{n-1}}\Big[P(x,\zeta)-\frac{1-|x|}{(1+|x|)^{n-1}}\Big]F(\zeta)d\sigma(\zeta)\right|\\ \nonumber
&\leq&\int_{\mathbb{S}^{n-1}}\Big[P(x,\zeta)-\frac{1-|x|}{(1+|x|)^{n-1}}\Big]|F(\zeta)|d\sigma(\zeta)\\ &\leq&
\|\mathcal{P}_{f}\|_{\infty}\left[1-\frac{1-|x|}{(1+|x|)^{n-1}}\right] \end{eqnarray*}
and therefore, \begin{eqnarray*} \nonumber
\left|\mathcal{P}_{f}(x)-\frac{1-|x|}{(1+|x|)^{n-1}}\mathcal{P}_{f}(0)
\right|&=&\lim_{\rho\rightarrow1^{-}}\left|F(x)-\frac{1-|x|}{(1+|x|)^{n-1}}F(0)\right|\\
&\leq& \|\mathcal{P}_{f}\|_{\infty}\left[1-\frac{1-|x|}{(1+|x|)^{n-1}}\right] \end{eqnarray*}
which by Theorem \ref{thm-1} leads to \begin{eqnarray*}
\left|f(x)-\frac{1-|x|}{(1+|x|)^{n-1}}\mathcal{P}_{f}(0)\right|&=&\left|\mathcal{P}_{f}(x)-\frac{1-|x|}{(1+|x|)^{n-1}}\mathcal{P}_{f}(0)-\mathcal{G}_{g}(x)\right|\\
&\leq&\left|\mathcal{P}_{f}(x)-\frac{1-|x|}{(1+|x|)^{n-1}}\mathcal{P}_{f}(0)\right|+|\mathcal{G}_{g}(x)|\\
&\leq&\|\mathcal{P}_{f}\|_{\infty}\left[1-\frac{1-|x|}{(1+|x|)^{n-1}}\right]
+\frac{\|g\|_{\infty}}{2n}(1-|x|^{2}). \end{eqnarray*} The proof of the corollary is complete.
\qed
\begin{Lem}{\rm (\cite[Lemma 2.3]{K5})}\label{Lem-A} For $r\in[0,1]$, let $\varphi(r)=\frac{\partial U(rN)}{\partial r}$. Then $\varphi(r)$ is decreasing on $r\in[0,1]$ and
$$\varphi(r)\geq \left . \frac{\partial U(rN)}{\partial r} \right |_{r=1}=\varphi(1)=A_{n}, $$ where $A_{n}$ is the same as in Theorem {\rm\ref{thm-2}}. \end{Lem}
\subsection*{Proof of Theorem \ref{thm-2}} For $x\in\mathbb{B}^{n}$, there is a $\rho\in(r,1)$ such that
\begin{equation}\label{eq-12c} \frac{1-U(rN)}{1-|x|}=\frac{\partial U(\rho N)}{\partial r}, \end{equation} where $r=|x|$. Now, for $n\geq3$ and a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$, by (\ref{eq-p}) with $f$ in place of $\psi$, we have $$f(x)=\mathcal{P}_{f}(x)-\mathcal{G}_{g}(x), $$ where
$\mathcal{P}_{f}$ and $\mathcal{G}_{g}$ are defined as in \eqref{eq-p1a}. Since $f(0)=\mathcal{P}_{f}(0)-\mathcal{G}_{g}(0)=0$, by (\ref{eq-7}), Theorem \ref{thm-1} and the assumptions, we see that \begin{eqnarray}\label{eq-13c}\nonumber
|f(\zeta)-f(r\zeta)|&=&\left|f(\zeta)+\mathcal{P}_{f}(0)\frac{1-r^{2}}{(1+r^{2})^{\frac{n}{2}}}-
\mathcal{G}_{g}(0)\frac{1-r^{2}}{(1+r^{2})^{\frac{n}{2}}}-f(r\zeta)\right|\\ \nonumber &\geq&1-
\left|f(r\zeta)-\mathcal{P}_{f}(0)\frac{1-r^{2}}{(1+r^{2})^{\frac{n}{2}}}\right|-|\mathcal{G}_{g}(0)|\frac{1-r^{2}}{(1+r^{2})^{\frac{n}{2}}}\\
&\geq&1-U(rN)-\frac{\|g\|_{\infty}}{2n}(1-r^{2})-\frac{\|g\|_{\infty}}{2n}\frac{1-r^{2}}{(1+r^{2})^{\frac{n}{2}}}, \end{eqnarray} where $r\in[0,1)$. Finally, by (\ref{eq-12c}), (\ref{eq-13c}) and Lemma \Ref{Lem-A}, there is a $\rho\in(r,1)$ such that \begin{eqnarray*}
\frac{|f(\zeta)-f(r\zeta)|}{1-r}
&\geq&\frac{1-U(rN)}{1-r}-\frac{\|g\|_{\infty}}{2n}(1+r)-\frac{\|g\|_{\infty}}{2n}\frac{1+r}{(1+r^{2})^{\frac{n}{2}}}\\
&\geq&\frac{\partial U(\rho N)}{\partial r}-\frac{\|g\|_{\infty}}{2n}(1+r)\left (1+\frac{1}{(1+r^{2})^{\frac{n}{2}}}\right )\\
&\geq&A_{n}-\frac{\|g\|_{\infty}}{2n}(1+r)\left (1+\frac{1}{(1+r^{2})^{\frac{n}{2}}}\right ), \end{eqnarray*} which gives \eqref{eq-Sch}.
The sharpness part easily follows from \cite[Theorem 2.5]{K5}. The proof of the theorem is complete. \qed
\begin{Thm}{\rm (\cite[Theorem 2.12]{MM})}\label{Lem-MM} Let $u$ be a bounded harmonic function from $\mathbb{B}^{n}$ into $\mathbb{R}$, where $n\geq3$. Then, for $x\in\mathbb{B}^{n}$,
$$|\nabla u(x)|\leq\frac{\|u\|_{\infty}}{1-|x|}\sup_{\gamma>0}C(\gamma,|x|),$$
where $C(\gamma,|x|)$ is defined in Theorem {\rm\ref{thm-3}}. \end{Thm}
\subsection*{Proof of Theorem \ref{thm-3}} Let $n\geq3$ and $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$ be given. As before, by (\ref{eq-p}), we have $$f(x)=\mathcal{P}_{f}(x)-\mathcal{G}_{g}(x), $$ where $\mathcal{P}_{f}$ and $\mathcal{G}_{g}$ are defined as in \eqref{eq-p1a}.
Set $\mathcal{G}_{g}=(\mathcal{G}_{g,1},\ldots,\mathcal{G}_{g,m})$ and $g=(g_{1},\ldots,g_{m})$. For $k\in\{1,2,\ldots,m\}$, we let
$$I_{k}=\int_{\mathbb{B}^{n}}\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right||g_{k}(y)|dV(y). $$ If we apply Cauchy-Schwarz's inequality and \cite[Theorem 2.1]{K2}, it follows that \begin{eqnarray}\label{eq16c} I_{k}^{2}&\leq&
\int_{\mathbb{B}^{n}}\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right|dV(y)\\ \nonumber &&\times\bigg[\int_{\mathbb{B}^{n}}
\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right||g_{k}(y)|^{2}dV(y)\bigg]\\ \nonumber &\leq& \frac{2n\pi^{\frac{n}{2}}}{(n+1)\Gamma\big(\frac{n}{2}\big)}\int_{\mathbb{B}^{n}}
\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right||g_{k}(y)|^{2}dV(y),\end{eqnarray} which yields that
\begin{eqnarray}\label{eq17c}|\nabla
\mathcal{G}_{g,k}(x)|^{2}&\leq&\frac{I_{k}^{2}}{\omega_{n-1}^{2}}\nonumber \\ &\leq&\frac{n}{(n+1)\omega_{n-1}}\int_{\mathbb{B}^{n}}
\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right||g_{k}(y)|^{2}dV(y). \end{eqnarray} Then, using (\ref{eq17c}) and \cite[Theorem 2.1]{K2}, we obtain \begin{eqnarray}\label{eq-15c}\nonumber
|D_{\mathcal{G}_{g}}(x)|&=&\sup_{\theta\in\mathbb{S}^{n-1}}\left(\sum_{k=1}^{m}|\langle\nabla
\mathcal{G}_{g,k}(x),\theta\rangle|^{2}\right)^{\frac{1}{2}}\\
\nonumber &\leq&\left(\sum_{k=1}^{m}|\nabla
\mathcal{G}_{g,k}(x)|^{2}\right)^{\frac{1}{2}}\\ \nonumber &\leq&\bigg[\frac{n}{(n+1)\omega_{n-1}}\int_{\mathbb{B}^{n}}
\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right|\sum_{k=1}^{m}|g_{k}(y)|^{2}dV(y)\bigg]^{\frac{1}{2}}\\
\nonumber&\leq&\left[\frac{n}{(n+1)\omega_{n-1}}\right]^{\frac{1}{2}}\|g\|_{\infty}\left(\int_{\mathbb{B}^{n}}
\left|\frac{x-y}{|x-y|^{n}}-\frac{|y|^{2}x-y}{[x,y]^{n}}\right|dV(y)\right)^{\frac{1}{2}}\\ \nonumber
&\leq&\left[\frac{n}{(n+1)\omega_{n-1}}\right]^{\frac{1}{2}}\left(\frac{n\omega_{n-1}}{n+1}\right)^{\frac{1}{2}}\|g\|_{\infty}\\
&=&\frac{n}{n+1}\|g\|_{\infty}. \end{eqnarray}
Now we estimate $|D_{\mathcal{P}_{f}}|$. For $x\in\mathbb{B}^{n}$, we may let $\mathcal{P}_{f}(x)=(\mathcal{P}_{f,1}(x),\ldots,\mathcal{P}_{f,m}(x)).$ Then, for any $\theta\in\mathbb{S}^{n-1}$ and $k\in\{1,2,\ldots,m\}$, by Cauchy-Schwarz's inequality, we have \begin{eqnarray*}
\left|\langle\nabla
\mathcal{P}_{f,k}(x),\theta\rangle\right|^{2}&=&\left|\int_{\mathbb{S}^{n-1}}\langle\nabla P(x,\zeta),\theta\rangle\mathcal{P}_{f,k}(\zeta)d\sigma(\zeta)\right|^{2}\\
&\leq&\left(\int_{\mathbb{S}^{n-1}}\big|\langle\nabla P(x,\zeta),\theta\rangle\big||\mathcal{P}_{f,k}(\zeta)|d\sigma(\zeta)\right)^{2}\\
&\leq&\delta(x,\theta)\int_{\mathbb{S}^{n-1}}\big|\langle\nabla P(x,\zeta),\theta\rangle\big||\mathcal{P}_{f,k}(\zeta)|^{2}d\sigma(\zeta), \end{eqnarray*} which gives that \begin{eqnarray}\label{eq-cp1}\nonumber
\left(\sum_{k=1}^{m}\big|\langle\nabla
\mathcal{P}_{f,k}(x),\theta\rangle\big|^{2}\right)^{\frac{1}{2}}&\leq&(\delta(x,\theta))^{\frac{1}{2}}
\left(\int_{\mathbb{S}^{n-1}}\big|\langle\nabla P(x,\zeta),\theta\rangle\big|\sum_{k=1}^{m}|\mathcal{P}_{f,k}(\zeta)|^{2}d\sigma(\zeta)\right)^{\frac{1}{2}}\\
&\leq&\delta(x,\theta)\|\mathcal{P}_{f}\|_{\infty}, \end{eqnarray} where
$$\delta(x,\theta)=\int_{\mathbb{S}^{n-1}}\big|\langle\nabla P(x,\zeta),\theta\rangle\big|d\sigma(\zeta). $$ Applying (\ref{eq-cp1}), \cite[Lemma 2.3]{MM} and Theorem \Ref{Lem-MM}, we see that, for $x\in\mathbb{B}^{n}$, \begin{eqnarray}\label{eq-cp2}
\nonumber|D_{\mathcal{P}_{f}}(x)|&=&\sup_{\theta\in\mathbb{S}^{n-1}}\left(\sum_{k=1}^{m}\big|\langle\nabla
\mathcal{P}_{f,k}(x),\theta\rangle\big|^{2}\right)^{\frac{1}{2}}\\
\nonumber &\leq&\|\mathcal{P}_{f}\|_{\infty} \sup_{\theta\in\mathbb{S}^{n-1}}\delta(x,\theta)\\ &\leq&
\frac{\|\mathcal{P}_{f}\|_{\infty}}{1-|x|}\sup_{\gamma>0}C(\gamma,|x|) \end{eqnarray} By (\ref{eq-15c}) and (\ref{eq-cp2}), we conclude that
$$|D_{f}(x)|\leq|D_{\mathcal{P}_{f}}(x)|+|D_{\mathcal{G}_{g}}(x)|
\leq\frac{\|\mathcal{P}_{f}\|_{\infty}}{1-|x|}\sup_{\gamma>0}C(\gamma,|x|)+ \frac{n}{n+1}\|g\|_{\infty}, ~~x\in\mathbb{B}^{n}. $$
The proof of the theorem is complete. \qed
\section{An application of the Schwarz Lemma}\label{csw-sec3}
Let $f:\,\overline{\Omega}\to \mathbb{R}^n$ be a differentiable mapping and $x$ be a regular value of $f$, where $x\notin f(\partial\Omega)$ and $\Omega\subset \mathbb{R}^n$ is a bounded domain. Then the degree $\deg(f,\Omega,x)$ is defined by the formula (cf. \cite{Ll,V}) $$\deg(f,\Omega,x):=\sum_{y\in f^{-1}(x)\cap\Omega}\mbox{ sign} \big(\det J_{f} (y)\big). $$
\begin{Lem}\label{Lem-X} The $\deg(f,\Omega,x)$ satisfies the following properties {\rm (cf. \cite[p.~125-129]{RR}):}
\begin{enumerate} \item[(a)] If $x\in\mathbb{R}^{n} \backslash f(\partial \Omega)$ and $\deg(f,\Omega,x)\neq 0$, then there exists a point $w\in\Omega$ such that $f(w)=x$. \item[(b)]\label{(II)} If $D$ is a domain with $\overline{D}\subset\Omega$ and $x\in \mathbb{R}^{n} \backslash f(\partial D)$, then $\deg(f,D,x)$ is a constant on each component of $\mathbb{R}^{n} \backslash f(\partial D)$. \end{enumerate} \end{Lem}
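
For instance, if $f$ is the identity mapping of $\overline{\mathbb{B}^{n}}$ and $x\in\mathbb{B}^{n}$, then $f^{-1}(x)=\{x\}$ and $J_{f}\equiv1$, so $\deg(f,\mathbb{B}^{n},x)=1$; in this simple case property (a) of Lemma \Ref{Lem-X} merely records the fact that $x$ is attained by $f$.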
\begin{lem}\label{lem-L} Let $n\geq3$ and $m\geq1$. For a given $g\in\mathcal{C}(\overline{\mathbb{B}^{n}},\mathbb{R}^{m})$, if $f\in\mathcal{C}^{2}(\mathbb{B}^{n},\mathbb{R}^{m})\cap\mathcal{C}(\mathbb{S}^{n-1},\mathbb{R}^{m})$
satisfies $\Delta f=g$ and $\|f\|_{\infty}+\|g\|_{\infty}\leq M$ for some constant $M>0$, then there is a constant $L>0$ such that
$$|f(x_{1})-f(x_{2})|\leq L|x_{1}-x_{2}| ~\mbox{ for all $x_{1},x_{2}\in\overline{\mathbb{B}^{n}(x_{0},\rho_{0})}$,} $$ where $x_{0}\in\mathbb{B}^{n}$ and
$\rho_{0}\in(0,1-|x_{0}|)$ are some constants. \end{lem} \begin{pf} By Theorem \ref{thm-3} or (\ref{eq-19cp}), for all $x\in\overline{\mathbb{B}^{n}(x_{0},\rho_{0})}$, we have
$$|D_{f}(x)|\leq\frac{\|\mathcal{P}_{f}\|_{\infty}}{1-|x|}\sup_{\gamma>0}C(\gamma,|x|)+
\frac{n}{n+1}\|g\|_{\infty}\leq\frac{2nM}{1-\rho_{0}-|x_{0}|}+\frac{n}{n+1}M:=L, $$ which implies, for all $x_{1},x_{2}\in\overline{\mathbb{B}^{n}(x_{0},\rho_{0})}$,
$$|f(x_{1})-f(x_{2})|\leq\int_{[x_{1},x_{2}]}|D_{f}(x)|\, |dx|\leq L\int_{[x_{1},x_{2}]}|dx|=L|x_{1}-x_{2}|,$$ where $[x_{1},x_{2}]$ is the segment from $x_{1}$ to $x_{2}$ (or $x_{2}$ to $x_{1}$) with the endpoints $x_{1}$ and $x_{2}$. \end{pf}
\subsection*{Proof of Theorem \ref{thm-4}}
We prove the theorem by the method of contradiction. Suppose that the result is not true. Then there is a sequence $\{a_k\}$ of points in $\mathbb{R}^{n}$, and a sequence of functions $\{f_k\}$ with $f_{k}\in\mathcal{F}_{g}^{M}$, such that $|a_k|$ tends to $0$ and $a_k \notin f_k(\mathbb{B}^{n})$ for $k\in\{1,2,\ldots\}$. By Theorem \ref{thm-1}, Lemma \ref{lem-L} and the Arzel${\rm \grave{a}}$-Ascoli theorem, we know that there is a subsequence $\{f_{k}^{\ast}\}$ of $\{f_{k}\}$ which converges uniformly on compact subsets of $\mathbb{B}^{n}$ to a function $f^{\ast}$. For each $k$, it is easy to see that the function $h_k=f_{k}^{\ast}-f_{1}^{\ast}$ is harmonic. As a consequence, the sequence $\{h_k\}$ converges uniformly on compact subsets of $\mathbb{B}^{n}$ to $f^{\ast}-f_{1}^{\ast}$ and therefore, the partial derivatives of $f_{k}^{\ast}$ converge uniformly on compact subsets of $\mathbb{B}^{n}$ to the partial derivatives of $f^{\ast}$. In particular, $f_{k}^{\ast} (0) \rightarrow f^{\ast}(0)$ and $J_{f_{k}^{\ast}} (0) \rightarrow J_{f^{\ast}} (0)$ as $k\rightarrow\infty$, which imply that $f^{\ast} \in
\mathcal{F}_{g}^{M}$. Since $J_{f^{\ast}} (0)-1=|f^{\ast}(0)|=0,$ there are $r_0\in(0,1)$ and $c_1 > 0$ such that $J_{f^{\ast}} >0$ on $ \overline{\mathbb{B}^{n}(0,r_{0})}$, $f^{\ast}(\mathbb{B}^{n}(0,r_{0})) \supset
\overline{\mathbb{B}^{n}(0,c_1)}$ and $|f^{\ast}(x)| \geq c_1$ for $x \in \partial\mathbb{B}^{n}(0,r_{0})$.
Now, we let $c_2=c_1/2$, $\mathbb{B}_{r_{0}}=\mathbb{B}^{n}(0,r_{0})$ and $\mathbb{B}_{c_{2}}= \mathbb{B}^{n}(0,c_2) $. Then there is a $k_0$
such that, for all $k \geq k_0$, $|f^{\ast}_{k}| \geq c_2$ on $\partial\mathbb{B}_{r_{0}}$ and $J_{f^{\ast}_k} >0$ on $\overline{\mathbb{B}_{r_{0}}}$. Since $\deg(f^{\ast}_k,\mathbb{B}_{r_{0}},0)\geq 1$, by Lemma \Ref{Lem-X}, we see that, for $y \in \mathbb{B}_{c_{2}}$ and $k \geq k_0$, $\deg(f^{\ast}_k,\mathbb{B}_{r_{0}},y)\geq 1$. Hence, for $k \geq k_0$, $f^{\ast}_k(\mathbb{B}_{r_{0}})\supset \mathbb{B}_{c_{2}}$, which contradicts our assumption. The proof of the theorem is complete. \qed
{\bf Acknowledgements:} This research was partly supported by the National Natural Science Foundation of China ( No. 11571216 and No. 11401184), the Science and Technology Plan Project of Hunan Province (No. 2016TP1020) and the Construct Program of the Key Discipline in Hunan Province. The second author is currently on leave from IIT Madras.
\subsection*{Conflict of Interests} The authors declare that there is no conflict of interests regarding the publication of this paper.
\end{document} | arXiv |
David Gregory (mathematician)
David Gregory (originally spelt Gregorie) FRS (3 June 1659[1] – 10 October 1708) was a Scottish mathematician and astronomer. He was professor of mathematics at the University of Edinburgh, and later Savilian Professor of Astronomy at the University of Oxford, and a proponent of Isaac Newton's Principia.
David Gregory
Born3 June 1659
Aberdeen, Scotland
Died10 October 1708(1708-10-10) (aged 49)
Maidenhead, Berkshire, England
NationalityScottish
Alma materMarischal College, University of Aberdeen
University of Leiden
Known forDevelopment of infinite series
Scientific career
FieldsMathematics
InstitutionsUniversity of Edinburgh
Balliol College, Oxford
Notable studentsJohn Keill
John Craig
InfluencesJames Gregory
Archibald Pitcairne
Isaac Newton
InfluencedColin Maclaurin
William Whiston
Notes
He was the nephew of James Gregory.
Biography
The fourth of the fifteen children of David Gregorie, a doctor from Kinnairdy, Banffshire, and Jean Walker of Orchiston, David was born in Upper Kirkgate, Aberdeen. The nephew of astronomer and mathematician James Gregory, David, like his influential uncle before him, studied at Aberdeen Grammar School and Marischal College (University of Aberdeen), from 1671 to 1675. The Gregorys were Jacobites and left Scotland to escape religious discrimination. Young David visited several countries on the continent, including the Netherlands (where he began studying medicine at Leiden University) and France, and did not return to Scotland until 1683.
In October 1683 Gregory became Chair of Mathematics at the University of Edinburgh, and on 28 November 1683 he graduated M.A. there. He was "the first to openly teach the doctrines of the Principia, in a public seminary...in those days this was a daring innovation."[2]
Gregory decided to leave for England where, in 1691, he was elected Savilian Professor of Astronomy at the University of Oxford, due in large part to the influence of Isaac Newton. The same year he was elected to be a Fellow of the Royal Society. In 1692, he was elected a Fellow of Balliol College, Oxford.
Gregory spent several days with Isaac Newton in 1694, discussing revisions for a second edition of Newton's Principia. Gregory made notes of these discussions, but the second edition of 1713 was not due to Gregory.[3]
In 1695 he published Catoptricae et dioptricae sphaericae elementa, which addressed chromatic aberration and the possibility of correcting it with an achromatic lens.
In 1705 Gregory became an Honorary Fellow of the Royal College of Physicians of Edinburgh. At the Union of 1707, he was given the responsibility of re-organising the Scottish Mint. He was an uncle of philosopher Thomas Reid.
Gregory and his wife, Elizabeth Oliphant, had nine children, seven of whom died in childhood.
He died in Maidenhead, Berkshire, and was buried in Maidenhead churchyard.
Works
• 1684: Exercitatio geometrica de dimensione figurarum, via Google Books
• 1695: Catoptricæ et dioptricæ sphæricæ elementa - digital facsimile from the Linda Hall Library
• 1703: (editor) Euclides quae supersunt omnia (collected works of Euclid)
• Gregory, David (1726). Astronomiae physicae et geometricae elementa (in Latin). Genève: Marc Michel Bousquet & C.
• 1745: (Colin Maclaurin editor) Treatise of Practical Geometry via Internet Archive
References
1. "David Gregory's inaugural lecture at Oxford". Notes and Records of the Royal Society of London. 25 (2): 143–178. 1970. doi:10.1098/rsnr.1970.0026. S2CID 143551983.
2. David Gregory from Significant Scots at electricscotland.com.
3. Westfall, Richard S. (1980). Never at Rest. Cambridge University Press. p. 506.
External links
Wikisource has the text of a 1911 Encyclopædia Britannica article about David Gregory.
• Gregory, David (1702). Astronomiae physicae et geometricae elementa (in Latin). Oxford.
• O'Connor, John J.; Robertson, Edmund F., "David Gregory (mathematician)", MacTutor History of Mathematics Archive, University of St Andrews
• Lectures on Algebra ascribed to David Gregory, 18th century from Archives Hub by Jisc
• Papers of David Gregory (1661–1708) from Archives Hub
Savilian Professors
Chairs established by Sir Henry Savile
Savilian Professors
of Astronomy
• John Bainbridge (1620)
• John Greaves (1642)
• Seth Ward (1649)
• Christopher Wren (1661)
• Edward Bernard (1673)
• David Gregory (1691)
• John Caswell (1709)
• John Keill (1712)
• James Bradley (1721)
• Thomas Hornsby (1763)
• Abraham Robertson (1810)
• Stephen Rigaud (1827)
• George Johnson (1839)
• William Donkin (1842)
• Charles Pritchard (1870)
• Herbert Turner (1893)
• Harry Plaskett (1932)
• Donald Blackwell (1960)
• George Efstathiou (1994)
• Joseph Silk (1999)
• Steven Balbus (2012)
Savilian Professors
of Geometry
• Henry Briggs (1619)
• Peter Turner (1631)
• John Wallis (1649)
• Edmond Halley (1704)
• Nathaniel Bliss (1742)
• Joseph Betts (1765)
• John Smith (1766)
• Abraham Robertson (1797)
• Stephen Rigaud (1810)
• Baden Powell (1827)
• Henry John Stephen Smith (1861)
• James Joseph Sylvester (1883)
• William Esson (1897)
• Godfrey Harold Hardy (1919)
• Edward Charles Titchmarsh (1931)
• Michael Atiyah (1963)
• Ioan James (1969)
• Richard Taylor (1995)
• Nigel Hitchin (1997)
• Frances Kirwan (2017)
| Wikipedia |
\begin{document}
\title{Shifted Power Method for Computing Tensor Eigenpairs
\thanks{This work was funded by the applied mathematics program at the U.S.
Department of Energy and by an Excellence Award from the Laboratory
Directed Research \& Development (LDRD) program at Sandia National
Laboratories. Sandia National Laboratories is a multiprogram
laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation,
for the United States Department of Energy's National Nuclear Security
Administration under contract DE-AC04-94AL85000.}} \author{Tamara G. Kolda\footnotemark[2] \and Jackson R. Mayo\footnotemark[2]} \maketitle
\opt{draft}{ \centerline {\sc \today} }
\renewcommand{\arabic{footnote}}{\fnsymbol{footnote}} \footnotetext[2]{Sandia National Laboratories, Livermore, CA. Email: \{tgkolda,jmayo\}@sandia.gov.} \renewcommand{\arabic{footnote}}{\arabic{footnote}}
\begin{abstract}
Recent work on eigenvalues and eigenvectors for tensors of order $m \ge 3$
has been motivated by
applications in blind source separation, magnetic resonance imaging,
molecular conformation, and more. In this paper, we consider methods
for computing real symmetric-tensor eigenpairs of the form $\T{A}\V{x}^{m-1} =
\lambda \V{x}$ subject to $\|\V{x}\|=1$, which is closely related to
optimal rank-1 approximation of a symmetric
tensor. Our contribution is a shifted symmetric higher-order
power method (SS-HOPM), which we show is guaranteed to converge to a
tensor eigenpair. SS-HOPM can be viewed as a generalization of the
power iteration method for matrices or of the symmetric
higher-order power method. Additionally, using fixed point
analysis, we can characterize exactly which eigenpairs can and
cannot be found by the method. Numerical examples are presented,
including examples from an extension of the method to finding
complex eigenpairs. \end{abstract}
\begin{keywords}
tensor eigenvalues, E-eigenpairs, Z-eigenpairs, $l^2$-eigenpairs, rank-1 approximation, symmetric higher-order power method (S-HOPM), shifted symmetric higher-order power method (SS-HOPM) \end{keywords} \opt{siam}{ \begin{AMS}
15A18,
15A69 \end{AMS} }
\pagestyle{myheadings} \thispagestyle{plain} \opt{draft}{ \markboth{\sc Draft --- \today}{\sc Draft --- \today} } \opt{arXiv,siam}{ \markboth{\sc T.~G.~Kolda and J.~R.~Mayo}{\sc Shifted Power Method for Computing Tensor Eigenpairs} }
\section{Introduction} \label{sec:introduction}
Tensor eigenvalues and eigenvectors have received much attention lately in the literature \cite{Li05,Qi05,QiSuWa07,Qi07,ChPeZh08,NgQiZh09,WaQiZh09}. The tensor eigenproblem is important because it has applications in blind source separation \cite{KoRe02}, magnetic resonance imaging \cite{ScSe08,QiWaWu08}, molecular conformation \cite{Di88}, etc. There is more than one possible definition for a tensor eigenpair \cite{Qi05}; in this paper, we specifically use the following definition. \begin{definition}
\label{def:EVP}
Assume that $\T{A}$ is a symmetric $m^\text{th}$-order $n$-dimensional
real-valued tensor. For any $n$-dimensional vector $\V{x}$, define
\begin{equation}
\label{eq:Axm1}
\left( \T{A}\V{x}^{m-1} \right)_{i_1} \equiv
\sum_{i_2=1}^n \cdots \sum_{i_m=1}^n
\TE{a}{i_1 i_2 \cdots i_m} \VE{x}{i_2} \cdots \VE{x}{i_m}
\qtext{for}
i_1 = 1,\dots,n.
\end{equation}
Then $\lambda \in \mathbb{R}$ is an
\emph{eigenvalue} of $\T{A}$ if there exists $\V{x} \in \mathbb{R}^n$ such that
\begin{equation}\label{eq:EVP}
\T{A}\V{x}^{m-1} = \lambda \V{x} \qtext{and} \V{x}^T\V{x}=1.
\end{equation}
The vector $\V{x}$ is a corresponding \emph{eigenvector}, and
$(\lambda,\V{x})$ is called an \emph{eigenpair}. \end{definition}
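
Note that for $m=2$ the product \Eqn{Axm1} is the ordinary matrix-vector product, and \Def{EVP} reduces to the standard eigenvalue problem for a real symmetric matrix; the focus here is on the genuinely nonlinear case $m \geq 3$.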
\Def{EVP} is equivalent to the Z-eigenpairs defined by Qi \cite{Qi05,Qi07} and the $l^2$-eigenpairs defined by Lim \cite{Li05}. In particular, Lim \cite{Li05} observes that any eigenpair $(\lambda,\V{x})$ is a Karush-Kuhn-Tucker (KKT) point (i.e., a constrained stationary point) of the nonlinear optimization problem \begin{equation}\label{eq:NLP}
\max_{\V{x} \in \Real^n}
\T{A}\V{x}^m
\qtext{subject to}
\V{x}^T\V{x} = 1,
\qtext{where}
\T{A}\V{x}^m \equiv \sum_{i_1=1}^n \cdots \sum_{i_m=1}^n
\TE{a}{i_1 \cdots i_m} \VE{x}{i_1} \cdots \VE{x}{i_m}. \end{equation} This is equivalent to the problem of finding the best \emph{symmetric} rank-1 approximation of a symmetric tensor \cite{DeDeVa00a}. We present the more general definition that incorporates complex-valued eigenpairs in \Sec{complex}.
In this paper, we build upon foundational work by Kofidis and Regalia \cite{KoRe02} for solving \Eqn{NLP}. Their paper is extremely important for computing tensor eigenvalues even though it predates the definition of the eigenvalue problem by three years. Kofidis and Regalia consider the higher-order power method (HOPM) \cite{DeDeVa00a}, a well-known technique for approximation of higher-order tensors, and show that its symmetric generalization (S-HOPM) is not guaranteed to converge. They go on, however, to use convexity theory to provide theoretical results (as well as practical examples) explaining conditions under which the method is convergent for even-order tensors (i.e., $m$ even). Further, these conditions are shown to hold for many problems of practical interest.
In the context of independent component analysis (ICA), both Regalia and Kofidis \cite{ReKo03} and Erdogen \cite{Er09} have developed shifted variants of the power method and shown that they are monotonically convergent. We present a similar method in the context of finding real-valued tensor eigenpairs, called the shifted symmetric higher-order power method (SS-HOPM), along with theory showing that it is guaranteed to converge to a constrained stationary point of \Eqn{NLP}. The proof is general and works for both odd- and even-order tensors (i.e., all $m \ge 3$). The effectiveness of SS-HOPM is demonstrated on several examples, including a problem noted previously \cite{KoRe02} for which S-HOPM does not converge. We also present a version of SS-HOPM for finding complex-valued tensor eigenpairs and provide examples of its effectiveness.
As mentioned, there is more than one definition of a tensor eigenpair. In the case of the \emph{$l^m$-eigenpair} (we use $m$ for the tensor order instead of $k$ as in some references) or \emph{H-eigenpair}, the eigenvalue equation becomes $\T{A}\V{x}^{m-1} = \lambda \V{x}^{[m-1]}$, where $\V{x}^{[m-1]}$ denotes the vector $\V{x}$ with each element raised to the $(m-1)^{\text{st}}$ power \cite{Li05,Qi05}. In this context, Qi, Wang, and Wang \cite{QiWaWa07} propose some methods specific to third-order tensors ($m = 3$). Unlike the ($l^2$-)eigenvalues we consider here, it is possible to guarantee convergence to the \emph{largest} $l^m$-eigenvalue for certain classes of nonnegative tensors. For example, see the power methods proposed by Ng, Qi, and Zhou \cite{NgQiZh09} and Liu, Zhou, and Ibrahim \cite{LiZhIb10}, the latter of which also uses a shift to guarantee convergence for any irreducible nonnegative tensor.
\section{Preliminaries}
Throughout, let $\Gamma$ and $\Sigma$ denote the unit ball and sphere on $\Real^n$, i.e., \begin{displaymath}
\Gamma = \{ \V{x} \in \Real^n : \| \V{x} \| \leq 1 \}
\qtext{and}
\Sigma = \{ \V{x} \in \Real^n : \| \V{x} \| = 1 \}. \end{displaymath} Additionally, define \begin{displaymath}
\Perm{m} \equiv \text{the set of all permutations of } (1,\dots,m). \end{displaymath} Let $\V{x} \bot \V{y}$ denote $\V{x}^T\V{y} = 0$, and define $\V{x}^{\bot} \equiv \{ \V{y} \in \Real^n : \V{x} \bot \V{y} \}$. Let $\rho(\M{A})$ denote the spectral radius of a square matrix $\M{A}$, i.e., the maximum of the magnitudes of its eigenvalues.
\subsection{Tensors} \label{sec:tensors}
A tensor is an $m$-way array. We let $\RT{m}{n}$ denote the space of $m^\text{th}$-order real-valued tensors with dimension $n$, e.g., $\RT{3}{2} = \mathbb{R}^{2 \times 2 \times 2}$. We adopt the convention that $\RT{0}{n} = \mathbb{R}$.
We formally introduce the notion of a symmetric tensor, sometimes also called supersymmetric, which is invariant under any permutation of its indices. Further, we define a generalization of the tensor-vector multiplication in equations \Eqn{Axm1} and \Eqn{NLP}.
\begin{definition}[Symmetric tensor \cite{CoGoLiMo08}]
A tensor $\T{A} \in \RT{m}{n}$ is \emph{symmetric} if
\begin{displaymath}
\TE{a}{i_{p(1)} \cdots i_{p(m)}} = \TE{a}{i_1 \cdots i_m}
\qtext{for all} i_1, \dots, i_m \in \{1,\dots,n\}
\qtext{and} p \in \Perm{m}.
\end{displaymath} \end{definition}
\begin{definition}[Symmetric tensor-vector multiply]\label{def:mult}
Let $\T{A} \in \RT{m}{n}$ be symmetric and $\V{x} \in \Real^n$.
Then for $0 \leq r \leq m - 1$, the \emph{$(m-r)$-times product} of the tensor
$\T{A}$ with the vector $\V{x}$ is denoted by $\T{A} \V{x}^{m-r} \in
\RT{r}{n}$ and defined by
\begin{displaymath}
(\T{A} \V{x}^{m-r})_{i_1 \cdots i_r} \equiv \sum_{i_{r+1}, \dots, i_m}
\TE{A}{i_1 \cdots i_m} \VE{x}{i_{r+1}} \cdots \VE{x}{i_m}
\qtext{for all} i_1, \dots, i_r \in \{1,\dots,n\}.
\end{displaymath} \end{definition}
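
In numerical experiments it is convenient to have a direct implementation of the $(m-r)$-times product. The following Python/NumPy sketch (our illustration; the helper name \texttt{ttsv} is ours) computes $\T{A}\V{x}^{m-r}$ by contracting the last $m-r$ modes of $\T{A}$ with $\V{x}$ one at a time.
\begin{verbatim}
# Illustration only: (m-r)-times product A x^{m-r} of a symmetric tensor A with x.
import numpy as np

def ttsv(A, x, r):
    """Return A x^{m-r}, an order-r tensor, by repeated contraction."""
    T = A
    for _ in range(A.ndim - r):
        T = np.tensordot(T, x, axes=1)   # contract the last mode of T with x
    return T

# small demonstration with a random symmetric order-3, dimension-3 tensor
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3))
A = sum(A.transpose(p) for p in
        [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]) / 6.0
x = rng.standard_normal(3); x /= np.linalg.norm(x)
print(ttsv(A, x, 0))   # scalar  A x^m
print(ttsv(A, x, 1))   # vector  A x^{m-1}
print(ttsv(A, x, 2))   # matrix  A x^{m-2}
\end{verbatim}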
\begin{example}
The identity matrix plays an important role in matrix analysis. This
notion can be extended in a sense to the domain of tensors. We may
define an identity tensor as a symmetric tensor $\T{E} \in \RT{m}{n}$
such that
\begin{displaymath}
\T{E}\V{x}^{m-1} = \V{x} \qtext{for all} \V{x} \in \Sigma.
\end{displaymath}
We restrict $\V{x} \in \Sigma$ since it is not possible to have a tensor with $m > 2$
such that the above equation
holds for all $\V{x} \in \Real^n$. For any $\V{x} \notin \Sigma$, the
above equation implies
\begin{displaymath}
\T{E}\V{x}^{m-1}
= \| \V{x} \|^{m-1} \T{E}(\V{x}/\|\V{x}\|)^{m-1}
= \| \V{x} \|^{m-1} (\V{x}/\|\V{x}\|)
= \| \V{x} \|^{m-2} \V{x}.
\end{displaymath}
Consider the case of $m=4$ and $n=2$. The system of equations that
must be satisfied for all $\V{x} \in \Sigma$ is
\begin{align*}
\TE{e}{1111} \VE{x}{1}^3 + 3\TE{e}{1112} \VE{x}{1}^2 \VE{x}{2}
+ 3\TE{e}{1122} \VE{x}{1} \VE{x}{2}^2 + \TE{e}{1222} \VE{x}{2}^3
&= \VE{x}{1}, \\
\TE{e}{1112} \VE{x}{1}^3 + 3\TE{e}{1122} \VE{x}{1}^2 \VE{x}{2}
+ 3\TE{e}{1222} \VE{x}{1} \VE{x}{2}^2 + \TE{e}{2222} \VE{x}{2}^3
&= \VE{x}{2}.
\end{align*}
Consider $\V{x} = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$. This
yields $\TE{e}{1111}=1$ and $\TE{e}{1112} = 0$. Similarly, $\V{x} =
\begin{bmatrix} 0 & 1 \end{bmatrix}^T$ yields $\TE{e}{2222}=1$ and
$\TE{e}{1222}=0$. The only remaining unknown is $\TE{e}{1122}$, and
choosing, e.g., $\V{x} = \begin{bmatrix} \sqrt{2}/2 & \sqrt{2}/2
\end{bmatrix}^T$ yields $\TE{e}{1122} = 1/3$. In summary, the
identity tensor for $m=4$ and $n=2$ is
\begin{displaymath}
\TE{e}{ijkl} =
\begin{cases}
1 & \text{if } i = j = k = l, \\
1/3 & \text{if } i = j \ne k = l, \\
1/3 & \text{if } i = k \ne j = l, \\
1/3 & \text{if } i = l \ne j = k, \\
0 & \text{otherwise}.
\end{cases}
\end{displaymath}
We generalize this idea in the next property. \end{example}
\begin{property}
For $m$ even, the identity tensor $\T{E} \in \RT{m}{n}$ satisfying
$\T{E}\V{x}^{m-1} = \V{x}$ for all $\V{x} \in \Sigma$ is given by
\begin{equation}\label{eq:identity}
\TE{e}{i_1 \cdots i_m} =
\frac{1}{m!}
\sum_{p \in \Perm{m}}
\delta_{i_{p(1)} i_{p(2)}}
\delta_{i_{p(3)} i_{p(4)}}
\cdots
\delta_{i_{p(m-1)} i_{p(m)}}
\end{equation}
for $i_1,\dots,i_m \in \{1,\dots,n\}$,
where $\delta$ is the standard Kronecker delta, i.e.,
\begin{displaymath}
\delta_{ij} \equiv
\begin{cases}
1 & \text{if } i = j, \\
0 & \text{if } i \neq j.
\end{cases}
\end{displaymath} \end{property}
This identity tensor appears in a previous work \cite{Qi05}, where it is denoted by $I_E$ and used to define a generalization of the characteristic polynomial for symmetric even-order tensors.
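
The formula \Eqn{identity} is straightforward to implement. The Python sketch below (our illustration) assembles $\T{E}$ for a given even $m$ and dimension $n$ by counting the matching pairings over all permutations, and then checks numerically that $\T{E}\V{x}^{m-1}=\V{x}$ for a random $\V{x}\in\Sigma$.
\begin{verbatim}
# Illustration only: build the identity tensor of (eq:identity) for even m
# and check that E x^{m-1} = x on the unit sphere.
import itertools
from math import factorial
import numpy as np

def identity_tensor(m, n):
    E = np.zeros((n,) * m)
    for idx in itertools.product(range(n), repeat=m):
        count = sum(1 for p in itertools.permutations(range(m))
                    if all(idx[p[2*j]] == idx[p[2*j+1]] for j in range(m // 2)))
        E[idx] = count / factorial(m)
    return E

m, n = 4, 2
E = identity_tensor(m, n)
print(E[0, 0, 1, 1])                  # 1/3, matching the worked example above
x = np.random.default_rng(2).standard_normal(n)
x /= np.linalg.norm(x)
v = E
for _ in range(m - 1):
    v = np.tensordot(v, x, axes=1)    # E x^{m-1}
print(np.linalg.norm(v - x))          # ~ 0
\end{verbatim}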
\begin{example}
There is no identity tensor for $m$ odd.
This is seen because if $\T{E}\V{x}^{m-1} = \V{x}$ for some odd $m$ and some $\V{x} \in \Sigma$, then
we would have $-\V{x} \in \Sigma$ but $\T{E}(-\V{x})^{m-1} = \V{x} \ne -\V{x}$. \end{example}
For any even-order tensor (i.e., $m$ even), observe that if $(\lambda,\V{x})$ is an eigenpair, then $(\lambda,-\V{x})$ is also an eigenpair since \begin{displaymath}
\T{A}(-\V{x})^{m-1} = -\T{A}\V{x}^{m-1} = \lambda(-\V{x}). \end{displaymath} Likewise, for any odd-order tensor (i.e., $m$ odd), $(-\lambda,-\V{x})$ is also an eigenpair since \begin{displaymath}
\T{A}(-\V{x})^{m-1} = \T{A}\V{x}^{m-1} = (-\lambda)(-\V{x}). \end{displaymath} These are \emph{not} considered to be distinct eigenpairs.
We later present, as \Thm{neigs}, a recently derived result \cite{CaSt10} that bounds the number of real eigenpairs by $((m-1)^n-1)/(m-2)$. We defer discussion of this result until \Sec{complex}, where we discuss complex eigenpairs.
Because the tensor eigenvalue equation for $m > 2$ amounts to a system of nonlinear equations in the components of $\V{x}$, a direct solution is challenging.
Numerical algorithms exist for finding all solutions of a system of polynomial equations, but become computationally expensive for systems with many variables (here, large $n$) and with high-order polynomials (here, large $m$). A polynomial system solver (\texttt{NSolve}) using a Gr\"obner basis method is available in Mathematica \cite{Wo08} and has been employed to generate a complete list of eigenpairs for some of the examples in this paper.
The solver is instructed to find all solutions $(\lambda, \V{x})$ of the system \Eqn{EVP}.
Redundant solutions with the opposite sign of $\V{x}$ (for even $m$) or the opposite signs of $\V{x}$ and $\lambda$ (for odd $m$) are then eliminated.
\subsection{Convex functions} Convexity theory plays an important role in our analysis. Here we recall two important properties of convex functions \cite{BoVa04}.
\begin{property}[Gradient of convex function]
\label{prop:convex_gradient}
A differentiable function $f : \Omega \subseteq \Real^n \rightarrow \mathbb{R}$ is
convex if and only if $\Omega$ is a convex set and
$f(\V{y}) \geq f(\V{x}) + \nabla f(\V{x})^T (\V{y}-\V{x})$
for all $\V{x},\V{y} \in \Omega$. \end{property}
\begin{property}[Hessian of convex function]
\label{prop:convex_hessian}
A twice differentiable function $f : \Omega \subseteq \Real^n \rightarrow \mathbb{R}$ is
convex if and only if $\Omega$ is a convex set and the Hessian\footnote
{By $\nabla^2$ we denote the Hessian matrix and not its trace, the Laplacian.}
of $f$ is positive semidefinite on $\Omega$, i.e.,
$\nabla^2 f(\V{x}) \succeq 0$
for all $\V{x} \in \Omega$. \end{property}
We prove an interesting fact about convex functions on vectors of unit norm that will prove useful in our later analysis. This fact is implicit in a proof given previously \cite[Theorem 4]{KoRe02} and explicit in \cite[Theorem 1]{ReKo03}. \begin{theorem}[Kofidis and Regalia \cite{KoRe02,ReKo03}]
\label{thm:cvx}
Let $f$ be a function that is convex and continuously differentiable
on $\Gamma$.
Let $\V{w} \in \Sigma$ with $\nabla f(\V{w}) \neq \V{0}$. If
$\V{v} = \nabla f(\V{w}) / \|\nabla f(\V{w}) \| $ $\ne \V{w}$, then
\begin{inlinemath}
f(\V{v}) - f(\V{w}) > 0.
\end{inlinemath} \end{theorem} \begin{proof}
For arbitrary nonzero $\V{z} \in \Real^n$, $\V{z}^T\V{x}$ is strictly maximized for
$\V{x} \in \Sigma$ by $\V{x} = \V{z} / \| \V{z}\|$.
Substituting $\V{z} = \nabla f(\V{w})$, it follows that
\begin{inlinemath}
\nabla f(\V{w})^T \V{v} > \nabla f(\V{w})^T \V{w},
\end{inlinemath}
since $\V{v} = \nabla f(\V{w}) / \|\nabla f(\V{w}) \| \ne \V{w}$ and $\V{w}
\in \Sigma$.
By the convexity of $f$ on $\Gamma$ and \Prop{convex_gradient}, we have
\begin{inlinemath}
f(\V{v}) \geq f(\V{w}) + \nabla f(\V{w})^T(\V{v} - \V{w})
\end{inlinemath}
for all $\V{v},\V{w} \in \Gamma$. Consequently,
\begin{inlinemath}
f(\V{v}) - f(\V{w}) \geq \nabla f(\V{w})^T(\V{v} - \V{w}) > 0.
\end{inlinemath} \end{proof}
\subsection{Constrained optimization}
Here we extract relevant theory from constrained optimization \cite{NoWr99}.
\begin{theorem}
\label{thm:nlp}
Let $f:\Real^n \rightarrow \mathbb{R}$ be continuously differentiable.
A point $\V{x}_* \in \Sigma$ is a (constrained) stationary point of
\begin{displaymath}
\max f(\V{x}) \qtext{subject to} \V{x} \in \Sigma
\end{displaymath}
if there exists $\mu_* \in \mathbb{R}$ such that
\begin{inlinemath}
\nabla f(\V{x}_*) + \mu_* \V{x}_* = \V{0}.
\end{inlinemath}
The point $\V{x}_*$ is a (constrained) isolated local maximum if,
additionally,
\begin{displaymath}
\V{w}^T (\nabla^2 f(\V{x}_*) + \mu_* \M{I})\V{w} < 0
\qtext{for all} \V{w} \in \Sigma \cap \V{x}_*^{\bot}.
\end{displaymath} \end{theorem} \begin{proof}
The constraint $\V{x} \in \Sigma$ can be expressed as $c(\V{x}) =
\frac{1}{2} (\V{x}^T\V{x} - 1) = 0$. The Lagrangian for the
constrained problem is then given by
\begin{displaymath}
\mathcal{L}(\V{x},\mu) = f(\V{x}) + \mu c(\V{x}).
\end{displaymath}
Its first and second derivatives with respect to $\V{x}$ are
\begin{displaymath}
\nabla \mathcal{L}(\V{x},\mu) = \nabla f(\V{x}) + \mu\V{x}
\qtext{and}
\nabla^2 \mathcal{L}(\V{x},\mu) = \nabla^2 f(\V{x}) + \mu \M{I}.
\end{displaymath}
By assumption, $\nabla \mathcal{L}(\V{x}_*,\mu_*) = \V{0}$ and
$c(\V{x}_*) = 0$. Therefore, the pair $(\V{x}_*,\mu_*)$
satisfies the Karush-Kuhn-Tucker (KKT) conditions \cite[Theorem
12.1]{NoWr99} and so is a constrained stationary point. It is
additionally a constrained isolated local maximum if it meets the second-order sufficient condition \cite[Theorem 12.6]{NoWr99}. \end{proof}
\subsection{Fixed point theory}
We consider the properties of iterations of the form \begin{displaymath}
\V{x}_{k+1} = \phi(\V{x}_k). \end{displaymath} Under certain conditions, the iterates are guaranteed to converge to a fixed point. In particular, we are interested in ``attracting'' fixed points.
\begin{definition}[Fixed point]
A point $\V{x}_* \in \Real^n$ is a \emph{fixed point} of $\phi: \Real^n
\rightarrow \Real^n$ if $\phi(\V{x}_*) = \V{x}_*$.
Further, $\V{x}_*$ is an \emph{attracting} fixed point if there exists
$\delta > 0$ such that the sequence $\{\V{x}_k\}$ defined by
$\V{x}_{k+1} = \phi(\V{x}_k)$ converges to $\V{x}_*$ for any $\V{x}_0$
such that $\|\V{x}_0 - \V{x}_*\| \leq \delta$. \end{definition}
\begin{theorem}[{\cite[Theorem 2.8]{Rh74}}]
\label{thm:fixed_point}
Let $\V{x}_* \in \Real^n$ be a fixed point of $\phi: \Real^n
\rightarrow \Real^n$, and let $J:\Real^n \rightarrow \mathbb{R}^{n \times
n}$ be the Jacobian of $\phi$. Then $\V{x}_*$ is an attracting
fixed point if $\sigma \equiv \rho(J(\V{x}_*)) < 1$; further, if
$\sigma > 0$, then the
convergence of $\V{x}_{k+1} = \phi(\V{x}_k)$ to $\V{x}_*$ is linear with rate $\sigma$. \end{theorem}
This condition on the Jacobian for an attracting fixed point is sufficient but not necessary. In particular, if $\sigma \equiv \rho(J(\V{x}_*)) = 1$, then $\V{x}_*$ may or may not be attracting, but there is no neighborhood of \emph{linear} convergence to it. For $\sigma < 1$, the rate of linear convergence depends on $\sigma$ and is slower for $\sigma$ values closer to 1. On the other hand, for $\sigma > 1$, an attractor is ruled out by the following.
\begin{theorem}[{\cite[Theorem 1.3.7]{StHu98}}]
\label{thm:unstable_fixed_point}
Let $\V{x}_* \in \Real^n$ be a fixed point of $\phi: \Real^n
\rightarrow \Real^n$, and let $J:\Real^n \rightarrow \mathbb{R}^{n \times
n}$ be the Jacobian of $\phi$. Then $\V{x}_*$ is an unstable fixed point
if $\sigma \equiv \rho(J(\V{x}_*)) > 1$. \end{theorem}
\section{Symmetric higher-order power method (S-HOPM)} \label{sec:shopm}
We review the symmetric higher-order power method (S-HOPM), introduced by De~Lathauwer et al.\@~\cite{DeDeVa00a} and analyzed further by Kofidis and Regalia \cite{KoRe02}. The purpose of S-HOPM is to solve the optimization problem \begin{equation}
\max_{\V{x} \in \Real^n} |\T{A}\V{x}^m| \qtext{subject to}
\V{x} \in \Sigma. \end{equation} The solution of this problem will be a solution of either the following maximization problem (lacking the absolute value) or its opposite minimization problem: \begin{equation}
\label{eq:f}
\max_{\V{x} \in \Real^n} f(\V{x}) \qtext{subject to}
\V{x} \in \Sigma, \qtext{where} f(\V{x}) = \T{A}\V{x}^m. \end{equation} Setting $\lambda = f(\V{x})$, these problems are equivalent to finding the best symmetric rank-1 approximation of a symmetric tensor $\T{A} \in \RT{m}{n}$, i.e., \begin{equation}
\label{eq:rank_one}
\min_{\lambda, \V{x}}
\| \T{A} - \T{B} \|
\qtext{subject to}
\TE{b}{i_1 \dots i_m} = \lambda \VE{x}{i_1} \cdots \VE{x}{i_m}
\qtext{and}
\V{x} \in \Sigma. \end{equation} Details of the connection between \Eqn{f} and \Eqn{rank_one} are available elsewhere \cite{DeDeVa00a}. The S-HOPM algorithm is shown in \Alg{shopm}. We discuss its connection to the eigenvalue problem in \Sec{f} and its convergence properties in \Sec{shopm_analysis}.
\begin{algorithm}
\caption{Symmetric higher-order power method (S-HOPM) \cite{DeDeVa00a,KoRe02}}
\label{alg:shopm}
Given a symmetric tensor $\T{A} \in \RT{m}{n}$.
\begin{algorithmic}[1]
\Require $\V{x}_0 \in \Real^n$ with $\| \V{x}_0 \| = 1$. Let
$\lambda_0 = \T{A} \V{x}_0^{m}$.
\For{$k=0,1,\dots$}
\State $\hat \V{x}_{k+1} \gets \T{A} \V{x}_k^{m-1}$
\State $\V{x}_{k+1} \gets \hat \V{x}_{k+1} / \| \hat \V{x}_{k+1} \|$
\State $\lambda_{k+1} \gets \T{A} \V{x}_{k+1}^{m}$
\EndFor
\end{algorithmic} \end{algorithm}
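
For completeness, we include a short Python/NumPy transcription of \Alg{shopm} (our sketch, intended only for small dense symmetric tensors such as those in the examples below; the helper \texttt{ttsv} implements the products of \Def{mult}). The stopping test ($|\lambda_{k+1}-\lambda_{k}|<10^{-16}$, with at most 1000 iterations) matches the setup used in the numerical experiments reported below.
\begin{verbatim}
# Sketch of S-HOPM as stated in the algorithm above; illustration only.
import numpy as np

def ttsv(A, x, r):
    """A x^{m-r}: contract the last m-r modes of the symmetric tensor A with x."""
    T = A
    for _ in range(A.ndim - r):
        T = np.tensordot(T, x, axes=1)
    return T

def s_hopm(A, x0, maxiter=1000, tol=1e-16):
    x = x0 / np.linalg.norm(x0)
    lam = ttsv(A, x, 0)                  # lambda_0 = A x_0^m
    lams = [lam]
    for _ in range(maxiter):
        xhat = ttsv(A, x, 1)             # A x_k^{m-1}
        x = xhat / np.linalg.norm(xhat)
        lam_new = ttsv(A, x, 0)          # A x_{k+1}^m
        lams.append(lam_new)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x, lams
\end{verbatim}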
\subsection{Properties of $f(\V{x}) = \T{A}\V{x}^m$} \label{sec:f}
The function $f(\V{x}) = \T{A}\V{x}^m$ plays an important role in the analysis of eigenpairs of $\T{A}$ because all eigenpairs are constrained stationary points of $f$, as we show below.
We first need to derive the gradient of $f$. This result is perhaps generally well known \cite[Equation 4]{Li05}, but here we provide a proof.
\begin{lemma}\label{lem:g}
Let $\T{A} \in \RT{m}{n}$ be symmetric. The gradient of
$f(\V{x}) = \T{A}\V{x}^m$ is
\begin{equation}
\label{eq:g}
g(\V{x}) \equiv \nabla f(\V{x}) = m \, \T{A} \V{x}^{m-1} \in \Real^n.
\end{equation} \end{lemma} \begin{proof} We use the basic relation \begin{inlinemath} \nabla_k \VE{x}{j} = \delta_{jk}. \end{inlinemath} Applying the product rule to \Eqn{f}, we find \begin{displaymath} \nabla_k f(\V{x}) = \sum_{i_1,\dots,i_m} \sum_{q=1}^m \TE{A}{i_1 i_2\cdots i_m} \VE{x}{i_1} \VE{x}{i_2}\cdots \VE{x}{i_{q-1}}\delta_{i_q k} \VE{x}{i_{q+1}}\cdots \VE{x}{i_m}. \end{displaymath} Upon bringing the sum over $q$ to the outside, we observe that for each $q$ the dummy indices $i_1$ and $i_q$ can be interchanged (without affecting the symmetric tensor $\T{A}$), and the result is independent of $q$: \begin{displaymath} \begin{split} \nabla_k f(\V{x}) &= \sum_{q=1}^m \sum_{i_1,\dots,i_m} \TE{A}{i_1 i_2\cdots i_m} \delta_{i_1 k} \VE{x}{i_2}\cdots \VE{x}{i_{q-1}} \VE{x}{i_q} \VE{x}{i_{q+1}}\cdots \VE{x}{i_m}\\ &= \sum_{q=1}^m \sum_{i_2,\dots,i_m} \TE{A}{k i_2\cdots i_m} \VE{x}{i_2}\cdots \VE{x}{i_m}\\ &= m (\T{A} \V{x}^{m-1})_k. \end{split} \end{displaymath} Hence, $\nabla f(\V{x}) = m \, \T{A} \V{x}^{m-1}$. \end{proof}
\begin{theorem}\label{thm:equiv}
Let $\T{A} \in \RT{m}{n}$ be symmetric. Then $(\lambda,\V{x})$ is an
eigenpair of $\T{A}$ if and only if $\V{x}$ is a constrained stationary
point of \Eqn{f}. \end{theorem} \begin{proof}
By \Thm{nlp},
any constrained stationary point $\V{x}_*$ of \Eqn{f} must satisfy
$m \, \T{A} \V{x}_*^{m-1} + \mu_* \V{x}_* = 0$
for some $\mu_* \in \mathbb{R}$. Thus, $\lambda_* = -\mu_*/m$ is the
eigenvalue corresponding to $\V{x}_*$. Conversely, any eigenpair meets
the condition for being a constrained stationary point with $\mu_* = -m\lambda_*$. \end{proof}
This is the connection between \Eqn{f} and the eigenvalue problem. It will also be useful to consider the Hessian of $f$, which we present here.
\begin{lemma}\label{lem:H}
Let $\T{A} \in \RT{m}{n}$ be symmetric. The Hessian of
$f(\V{x}) = \T{A}\V{x}^m$ is
\begin{equation}
\label{eq:H}
H(\V{x}) \equiv \nabla^2 f(\V{x}) = m(m-1) \T{A} \V{x}^{m-2} \in
\mathbb{R}^{n \times n}.
\end{equation} \end{lemma} \begin{proof}
The $(j,k)$ entry of $H(\V{x})$ is given by the $k^\text{th}$ entry of
$\nabla g_j(\V{x})$. The function $g_j(\V{x})$ can be rewritten as
\begin{displaymath}
g_j(\V{x}) = m \sum_{i_2,\dots,i_m}
\TE{a}{j i_2 \cdots i_m}
\VE{x}{i_2} \cdots \VE{x}{i_m}
= m \, \T{B}^{(j)} \V{x}^{m-1}
\end{displaymath}
where $\T{B}^{(j)}$ is the order-$(m-1)$ symmetric tensor that
is the $j^\text{th}$ subtensor of $\T{A}$, defined by $\TE{B}{i_1 \cdots
i_{m-1}}^{(j)} = \TE{A}{j i_1 \cdots i_{m-1}}$. From \Lem{g}, we
have
\begin{displaymath}
\nabla g_j(\V{x}) = m(m-1) \T{B}^{(j)} \V{x}^{m-2}.
\end{displaymath}
Consequently,
\begin{displaymath}
(H(\V{x}))_{jk} = m(m-1) \sum_{i_3,\dots,i_m} \TE{A}{j k i_3 \cdots
i_m} \VE{x}{i_3} \cdots \VE{x}{i_m},
\end{displaymath}
that is, $H(\V{x}) = m(m-1)\T{A}\V{x}^{m-2}$. \end{proof}
From \Thm{nlp}, we know that the projected Hessian of the Lagrangian plays a role in determining whether or not a fixed point is a local maximum or minimum. In our case, since $\mu_* = -m \lambda_*$, for any eigenpair $(\lambda_*,\V{x}_*)$ (which must correspond to a constrained stationary point by \Thm{equiv}) we have \begin{displaymath}
\nabla^2 \mathcal{L}(\V{x}_*,\lambda_*) =
m(m-1)\T{A}\V{x}_*^{m-2} - m \lambda_* \M{I}. \end{displaymath} Specifically, \Thm{nlp} is concerned with the behavior of the Hessian of the Lagrangian in the subspace orthogonal to $\V{x}_*$. Thus, we define the projected Hessian of the Lagrangian as \begin{equation} \label{eq:C}
C(\lambda_*,\V{x}_*) \equiv
\M{U}_*^T \left((m-1) \T{A}\V{x}_*^{m-2} - \lambda_* \M{I}\right) \M{U}_* \in \mathbb{R}^{(n-1) \times (n-1)}, \end{equation} where the columns of $\M{U}_*\in\mathbb{R}^{n \times (n-1)}$ form an orthonormal basis for $\V{x}_*^{\bot}$. Note that we have removed a factor of $m$ for convenience. We now classify eigenpairs according to the spectrum of $C(\lambda_*,\V{x}_*)$. The import of this classification will be made clear in \Sec{fp}.
\begin{definition}
Let $\T{A} \in \RT{m}{n}$ be a symmetric tensor. We say an eigenpair
$(\lambda,\V{x})$ of $\T{A} \in \RT{m}{n}$ is \emph{positive stable} if
$C(\lambda,\V{x})$ is positive definite, \emph{negative stable} if
$C(\lambda,\V{x})$ is negative definite, and \emph{unstable} if
$C(\lambda,\V{x})$ is indefinite. \end{definition}
These labels are not exhaustive because we do not name the cases where $C(\lambda,\V{x})$ is only semidefinite, with a zero eigenvalue. Such cases do not occur for generic tensors.
If $m$ is odd, then $(\lambda,\V{x})$ is positive stable if and only if $(-\lambda,-\V{x})$ is negative stable, even though these eigenpairs are in the same equivalence class. On the other hand, if $m$ is even, then $(\lambda,\V{x})$ is a positive (negative) stable eigenpair if and only if $(\lambda,-\V{x})$ is also positive (negative) stable.
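
In computations it is convenient to test these definitions directly. The Python sketch below (our illustration; the orthonormal basis for $\V{x}^{\bot}$ is obtained from a singular value decomposition) forms $C(\lambda,\V{x})$ as in \Eqn{C} and inspects its eigenvalues. As a familiar sanity check, for $m=2$ the tensor is an ordinary symmetric matrix, $C(\lambda,\V{x})=\M{U}^{T}(\M{A}-\lambda\M{I})\M{U}$, and the largest eigenvalue of a symmetric matrix is negative stable in this terminology.
\begin{verbatim}
# Illustration only: form C(lambda, x) of (eq:C) and classify an eigenpair.
import numpy as np

def ttsv(A, x, r):
    T = A
    for _ in range(A.ndim - r):
        T = np.tensordot(T, x, axes=1)
    return T

def classify(A, lam, x):
    n = x.size
    _, _, Vt = np.linalg.svd(x.reshape(1, -1))
    U = Vt[1:].T                                     # orthonormal basis for x-perp
    C = U.T @ ((A.ndim - 1) * ttsv(A, x, 2) - lam * np.eye(n)) @ U
    w = np.linalg.eigvalsh(C)
    if np.all(w > 0): return w, "positive stable"
    if np.all(w < 0): return w, "negative stable"
    return w, "unstable"

# sanity check with m = 2 (an ordinary symmetric matrix)
rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5)); B = (B + B.T) / 2
w, V = np.linalg.eigh(B)
print(classify(B, w[-1], V[:, -1])[1])               # "negative stable"
\end{verbatim}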
\subsection{S-HOPM convergence analysis} \label{sec:shopm_analysis}
S-HOPM has been deemed unreliable \cite{DeDeVa00a} because convergence is not guaranteed. Kofidis and Regalia \cite{KoRe02} provide an analysis explaining that S-HOPM will converge if certain conditions are met, as well as an example where the method does not converge, which we reproduce here.
\begin{example2}{{Kofidis and Regalia \cite[Example 1]{KoRe02}}}\label{ex:KoRe02_ex1}
Let $\T{A} \in \RT{4}{3}$ be the symmetric tensor defined by
\begin{align*} a_{1111} &= \phantom{-}0.2883, & a_{1112} &= -0.0031, & a_{1113} &= \phantom{-}0.1973, & a_{1122} &= -0.2485,\\ a_{1123} &= -0.2939, & a_{1133} &= \phantom{-}0.3847, & a_{1222} &= \phantom{-}0.2972, & a_{1223} &= \phantom{-}0.1862,\\ a_{1233} &= \phantom{-}0.0919, & a_{1333} &= -0.3619, & a_{2222} &= \phantom{-}0.1241, & a_{2223} &= -0.3420,\\ a_{2233} &= \phantom{-}0.2127, & a_{2333} &= \phantom{-}0.2727, & a_{3333} &= -0.3054.
\end{align*}
Kofidis and Regalia \cite{KoRe02} observed that \Alg{shopm} does not
converge for this tensor.
Because this problem is small, all eigenpairs can be calculated by
Mathematica as described in \Sec{tensors}.
From \Thm{neigs}, this problem has at most 13 eigenpairs; we list the 11 real eigenpairs in
\Tab{KoRe02_ex1}.
We ran 100 trials of S-HOPM using different random starting
points $\V{x}_0$ chosen from a uniform
distribution on $[-1,1]^n$. For these experiments,
we allow up to 1000 iterations and say that the algorithm has
converged if $|\lambda_{k+1} - \lambda_k | < 10^{-16}$. In every
single trial for this tensor, the algorithm failed to converge. In \Fig{KoRe02_ex1},
we show an example $\{\lambda_k\}$ sequence with $\V{x}_0 =
\begin{bmatrix} -0.2695 & 0.1972 & 0.3370 \end{bmatrix}^T$. This
coincides with the results reported previously \cite{KoRe02}. \end{example2}
\begin{table}[htbp]
\centering
\caption{Eigenpairs for $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1}.}
\label{tab:KoRe02_ex1}
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline $\lambda$ & $\V{x}^T$ & Eigenvalues of $C(\lambda,\V{x})$ & Type \\ \hline
$\phantom{-}0.8893$ & [ $\phantom{-}0.6672$ $\phantom{-}0.2471$ $-0.7027$ ] & $\{$ $-0.8857$, $-1.8459$ $\}$ & Neg.~stable \\ \hline
$\phantom{-}0.8169$ & [ $\phantom{-}0.8412$ $-0.2635$ $\phantom{-}0.4722$ ] & $\{$ $-0.9024$, $-2.2580$ $\}$ & Neg.~stable \\ \hline
$\phantom{-}0.5105$ & [ $\phantom{-}0.3598$ $-0.7780$ $\phantom{-}0.5150$ ] & $\{$ $\phantom{-}0.5940$, $-2.3398$ $\}$ & Unstable \\ \hline
$\phantom{-}0.3633$ & [ $\phantom{-}0.2676$ $\phantom{-}0.6447$ $\phantom{-}0.7160$ ] & $\{$ $-1.1765$, $-0.5713$ $\}$ & Neg.~stable \\ \hline
$\phantom{-}0.2682$ & [ $\phantom{-}0.6099$ $\phantom{-}0.4362$ $\phantom{-}0.6616$ ] & $\{$ $\phantom{-}0.7852$, $-1.1793$ $\}$ & Unstable\\ \hline
$\phantom{-}0.2628$ & [ $\phantom{-}0.1318$ $-0.4425$ $-0.8870$ ] & $\{$ $\phantom{-}0.6181$, $-2.1744$ $\}$ & Unstable \\ \hline
$\phantom{-}0.2433$ & [ $\phantom{-}0.9895$ $\phantom{-}0.0947$ $-0.1088$ ] & $\{$ $-1.1942$, $\phantom{-}1.4627$ $\}$ & Unstable \\ \hline
$\phantom{-}0.1735$ & [ $\phantom{-}0.3357$ $\phantom{-}0.9073$ $\phantom{-}0.2531$ ] & $\{$ $-1.0966$, $\phantom{-}0.8629$ $\}$ & Unstable \\ \hline
$-0.0451$ & [ $\phantom{-}0.7797$ $\phantom{-}0.6135$ $\phantom{-}0.1250$ ] & $\{$ $\phantom{-}0.8209$, $\phantom{-}1.2456$ $\}$ & Pos.~stable \\ \hline
$-0.5629$ & [ $\phantom{-}0.1762$ $-0.1796$ $\phantom{-}0.9678$ ] & $\{$ $\phantom{-}1.6287$, $\phantom{-}2.3822$ $\}$ & Pos.~stable \\ \hline
$-1.0954$ & [ $\phantom{-}0.5915$ $-0.7467$ $-0.3043$ ] & $\{$ $\phantom{-}1.8628$, $\phantom{-}2.7469$ $\}$ & Pos.~stable\\ \hline
\end{tabular} \end{table}
\begin{figure}
\caption{Example $\lambda_k$ values for S-HOPM on $\T{A} \in
\RT{4}{3}$ from \Ex{KoRe02_ex1}.}
\label{fig:KoRe02_ex1}
\end{figure}
\begin{example}
\label{ex:odd}
As a second illustrative example, we consider an odd-order tensor
$\T{A} \in \RT{3}{3}$ defined by
\begin{align*} a_{111} &= -0.1281, & a_{112} &= \phantom{-}0.0516, & a_{113} &= -0.0954, & a_{122} &= -0.1958,\\ a_{123} &= -0.1790, & a_{133} &= -0.2676, & a_{222} &= \phantom{-}0.3251, & a_{223} &= \phantom{-}0.2513,\\ a_{233} &= \phantom{-}0.1773, & a_{333} &= \phantom{-}0.0338.
\end{align*}
From \Thm{neigs}, $\T{A}$ has at most 7 eigenpairs; in this case we
achieve that bound and the eigenpairs are
listed in \Tab{odd}. We ran 100 trials of S-HOPM as
described for \Ex{KoRe02_ex1}. Every trial converged to either
$\lambda=0.8730$ or $\lambda=0.4306$, as summarized in
\Tab{odd-zero}. Therefore, S-HOPM finds 2 of the 7 possible
eigenvalues. \end{example}
\begin{table}[htbp]
\centering
\caption{Eigenpairs for $\T{A} \in \RT{3}{3}$ from \Ex{odd}.}
\label{tab:odd}
\footnotesize
\begin{tabular}{|c|c|c|c|}\hline $\lambda$ & $\V{x}^T$ & Eigenvalues of $C(\lambda,\V{x})$ & Type \\ \hline
$0.8730$ & [ $-0.3922$ $\phantom{-}0.7249$ $\phantom{-}0.5664$ ] & $\{$ $-1.1293$, $-0.8807$ $\}$ & Neg.~stable \\ \hline
$0.4306$ & [ $-0.7187$ $-0.1245$ $-0.6840$ ] & $\{$ $-0.4420$, $-0.8275$ $\}$ & Neg.~stable \\ \hline
$0.2294$ & [ $-0.8446$ $\phantom{-}0.4386$ $-0.3070$ ] & $\{$ $-0.2641$, $\phantom{-}0.7151$ $\}$ & Unstable \\ \hline
$0.0180$ & [ $\phantom{-}0.7132$ $\phantom{-}0.5093$ $-0.4817$ ] & $\{$ $-0.4021$, $-0.1320$ $\}$ & Neg.~stable \\ \hline
$0.0033$ & [ $\phantom{-}0.4477$ $\phantom{-}0.7740$ $-0.4478$ ] & $\{$ $-0.1011$, $\phantom{-}0.2461$ $\}$ & Unstable \\ \hline
$0.0018$ & [ $\phantom{-}0.3305$ $\phantom{-}0.6314$ $-0.7015$ ] & $\{$ $\phantom{-}0.1592$, $-0.1241$ $\}$ & Unstable \\ \hline
$0.0006$ & [ $\phantom{-}0.2907$ $\phantom{-}0.7359$ $-0.6115$ ] & $\{$ $\phantom{-}0.1405$, $\phantom{-}0.0968$ $\}$ & Pos.~stable\\ \hline
\end{tabular} \end{table}
\begin{table}[htbp]
\centering
\caption{Eigenpairs for $\T{A}\in\RT{3}{3}$ from \Ex{odd} computed by S-HOPM with 100
random starts.}
\label{tab:odd-zero}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
62 & $\phantom{-}0.8730$ & [ $-0.3922$ $\phantom{-}0.7249$ $\phantom{-}0.5664$ ] & 19 \\ \hline
38 & $\phantom{-}0.4306$ & [ $-0.7187$ $-0.1245$ $-0.6840$ ] & 184 \\ \hline
\end{tabular} \end{table}
In their analysis, Kofidis and Regalia \cite{KoRe02} proved that the sequence $\{\lambda_k\}$ in \Alg{shopm} converges if $\T{A} \in \RT{m}{n}$ is even-order and the function $f(\V{x})$ is convex or concave on $\Real^n$. Since $m=2\ell$ (because $m$ is even), $f$ can be expressed as \begin{displaymath}
f(\V{x}) =
(\, \underbrace{\V{x} \otimes \cdots \otimes \V{x}}_{\text{$\ell$ times}} \, )^T
\M{A}\,
(\, \underbrace{\V{x} \otimes \cdots \otimes \V{x}}_{\text{$\ell$ times}} \, ), \end{displaymath} where $\M{A} \in \mathbb{R}^{n^{\ell} \times n^{\ell}}$ is an unfolded version of the tensor $\T{A}$.\footnote{Specifically, $\M{A} \equiv \M{A}_{(\mathcal{R}
\times \mathcal{C})}$ with $\mathcal{R} = \{1,\dots,\ell\}$ and $\mathcal{C} = \{\ell+1,\dots,m\}$ in matricization notation \cite{Ko06}.} Since $\T{A}$ is symmetric, it follows that $\M{A}$ is symmetric. The condition that $f$ is convex (concave) is satisfied if the Hessian \begin{displaymath}
\nabla^2 f(\V{x}) =
(\, \M{I} \otimes \underbrace{\V{x} \otimes \cdots \otimes \V{x}}_{\text{$\ell-1$ times}} \, )^T
\M{A}\,
(\, \M{I} \otimes \underbrace{\V{x} \otimes \cdots \otimes \V{x}}_{\text{$\ell-1$ times}} \, ) \end{displaymath} is positive (negative) semidefinite for all $\V{x} \in \Real^n$.
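As an illustration of this test, the following sketch (ours; it assumes $\T{A}$ is stored as a dense numpy array of even order, takes numpy's row-major reshape as the unfolding, and drops constant factors, which do not affect definiteness) evaluates the quadratic form above so that its definiteness can be spot-checked numerically at sample points $\V{x}$.
\begin{verbatim}
import numpy as np
from functools import reduce

def hessian_form(A, x):
    """(I kron x ... kron x)^T matA (I kron x ... kron x); assumes even order."""
    m, n = A.ndim, A.shape[0]
    ell = m // 2
    matA = A.reshape(n**ell, n**ell)          # unfolding of the tensor
    W = reduce(np.kron, [np.eye(n)] + [x.reshape(-1, 1)] * (ell - 1))
    return W.T @ matA @ W

# f is convex if hessian_form(A, x) is positive semidefinite for every x;
# numerically this can only be sampled, e.g., by checking the minimum
# eigenvalue of hessian_form(A, x) at many random points x.
\end{verbatim}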
We make a few notes regarding these results. First, even when $f$ is convex on $\Real^n$, the maximization is posed over the nonconvex set $\Sigma$, so the problem itself is not a convex program.
Second, $\{\lambda_k\}$ is increasing if $f$ is convex and decreasing if $f$ is concave.
Third, only $\{\lambda_k\}$ is proved to converge for S-HOPM \cite[Theorem 4]{KoRe02}; the iterates $\{\V{x}_k\}$ may not. In particular, it is easy to observe that the sign of $\V{x}_k$ may flip back and forth if the concave case is not handled correctly.
\section{Shifted symmetric higher-order power method (SS-HOPM)} \label{sec:sshopm} In this section, we show that S-HOPM can be modified by adding a ``shift'' that guarantees that the method will always converge to an eigenpair. In the context of ICA, this idea has also been proposed by Regalia and Kofidis \cite{ReKo03} and Erdogan \cite{Er09}.
Based on the observation that S-HOPM is guaranteed to converge if the underlying function is convex or concave on $\Real^n$, our method works with a suitably modified function \begin{equation}
\label{eq:hatf}
\hat f(\V{x}) \equiv f(\V{x}) + \alpha (\V{x}^T\V{x})^{m/2}. \end{equation} Maximizing $\hat f$ on $\Sigma$ is the same as maximizing $f$ plus a constant, yet the properties of the modified function force convexity or concavity and consequently guarantee convergence to a KKT point (not necessary the \emph{global} maximum or minimum). Note that previous papers \cite{ReKo03,Er09} have proposed similar shifted functions that are essentially of the form \begin{inlinemath}
\hat f(\V{x}) \equiv f(\V{x}) + \alpha \V{x}^T\V{x}, \end{inlinemath} differing only in the exponent.
An advantage of our choice of $\hat f$ in \Eqn{hatf} is that, for even $m$, it can be interpreted as \begin{displaymath}
\hat f(\V{x}) = \hat\T{A} \V{x}^m \equiv (\T{A} + \alpha \T{E})\V{x}^m, \end{displaymath} where $\T{E}$ is the identity tensor as defined in \Eqn{identity}. Thus, for even $m$, our proposed method can be interpreted as S-HOPM applied to a modified tensor that directly satisfies the convexity properties to guarantee convergence \cite{KoRe02}. Because $\T{E} \V{x}^{m-1} = \V{x}$ for $\V{x} \in \Sigma$, the eigenvectors of $\hat\T{A}$ are the same as those of $\T{A}$ and the eigenvalues are shifted by $\alpha$. Our results, however, are for both odd- and even-order tensors.
\Alg{sshopm} presents the shifted symmetric higher-order power method (SS-HOPM). Without loss of generality, we assume that a positive shift ($\alpha \geq 0$) is used to make the modified function in \Eqn{hatf} convex and a negative shift ($\alpha < 0$) to make it concave. We have two key results. \Thm{main} shows that for any starting point $\V{x}_0 \in \Sigma$, the sequence $\{\lambda_k\}$ produced by \Alg{sshopm} is guaranteed to converge to an eigenvalue in the convex case if \begin{equation}
\label{eq:beta}
\alpha > \beta(\T{A})
\equiv (m-1) \cdot \max_{\V{x} \in \Sigma} \rho(\T{A}\V{x}^{m-2}). \end{equation} \Cor{main} handles the concave case where we require $\alpha < -\beta(\T{A})$. \Thm{fp} further shows that \Alg{sshopm} in the convex case will generically converge to an eigenpair $(\lambda, \V{x})$ that is negative stable. \Cor{fp} proves that \Alg{sshopm} in the concave case will generically converge to an eigenpair that is positive stable. Generally, neither version will converge to an eigenpair that is unstable.
\begin{algorithm}
\caption{Shifted Symmetric Higher-Order Power Method (SS-HOPM)}
\label{alg:sshopm}
Given a tensor $\T{A} \in \RT{m}{n}$.
\begin{algorithmic}[1]
\Require $\V{x}_0 \in \Real^n$ with $\| \V{x}_0 \| = 1$. Let
$\lambda_0 = \T{A} \V{x}_0^{m}$.
\Require $\alpha \in \mathbb{R}$
\For{$k=0,1,\dots$}
\If{$\alpha \geq 0$}
\State $\hat \V{x}_{k+1} \gets \T{A} \V{x}_k^{m-1} + \alpha \V{x}_k$
\Comment{Assumed Convex}
\Else
\State $\hat \V{x}_{k+1} \gets -(\T{A} \V{x}_k^{m-1} + \alpha \V{x}_k)$
\Comment{Assumed Concave}
\EndIf
\State $\V{x}_{k+1} \gets \hat \V{x}_{k+1} / \| \hat \V{x}_{k+1} \|$
\State $\lambda_{k+1} \gets \T{A} \V{x}_{k+1}^{m}$
\EndFor
\end{algorithmic} \end{algorithm}
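A direct numpy transcription of \Alg{sshopm} might look as follows. This is a sketch under assumptions of our own choosing: $\T{A}$ is stored as a dense numpy array, the helper \texttt{Axk} contracts the tensor with a vector, and the stopping tolerance and iteration cap mirror the experiments reported below rather than being prescribed by the algorithm itself.
\begin{verbatim}
import numpy as np

def Axk(A, x, k):
    """Contract the last k modes of the symmetric tensor A with the vector x."""
    for _ in range(k):
        A = np.tensordot(A, x, axes=1)
    return A

def ss_hopm(A, x0, alpha, tol=1e-16, maxit=1000):
    m = A.ndim
    x = x0 / np.linalg.norm(x0)
    lam = Axk(A, x, m)                        # lambda_0 = A x_0^m
    for _ in range(maxit):
        y = Axk(A, x, m - 1) + alpha * x
        if alpha < 0:                         # assumed concave case
            y = -y
        x = y / np.linalg.norm(y)
        lam_new = Axk(A, x, m)
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x                             # no convergence within maxit
\end{verbatim}
With the tensor of \Ex{KoRe02_ex1} and $\alpha = 2$, this sketch should reproduce the qualitative behavior reported in \Tab{sshopm-convex}.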
\subsection{SS-HOPM convergence analysis} We first establish a few key lemmas that guide the choice of the shift $\alpha > \beta(\T{A})$ in SS-HOPM\@.
\begin{lemma}\label{lem:beta_bound}
Let $\T{A} \in \RT{m}{n}$ be symmetric and let $\beta(\T{A})$ be as
defined in \Eqn{beta}. Then $\beta(\T{A}) \leq (m-1)
\sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}|$. \end{lemma} \begin{proof} For all $\V{x},\V{y} \in \Sigma$, we obtain
$|\V{y}^T(\T{A}\V{x}^{m-2})\V{y}| \le \sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}|$ by applying the triangle inequality to the sum of $n^m$ terms. Thus
$\rho(\T{A}\V{x}^{m-2}) \le \sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}|$ for all $\V{x} \in \Sigma$, and the result follows. \end{proof}
\begin{lemma}\label{lem:f_bound}
Let $\T{A} \in \RT{m}{n}$ be symmetric, let $f(\V{x})=\T{A}\V{x}^m$, and let $\beta(\T{A})$
be as defined in \Eqn{beta}.
Then $|f(\V{x})| \leq \beta(\T{A})/(m-1)$ for all $\V{x} \in \Sigma$. \end{lemma} \begin{proof}
We have $|\T{A}\V{x}^m| = |\V{x}^T(\T{A}\V{x}^{m-2})\V{x}| \le \rho(\T{A}\V{x}^{m-2}) \le \beta(\T{A})/(m-1)$. \end{proof}
The preceding lemma upper bounds the magnitude of any eigenvalue of $\T{A}$ by $\beta(\T{A})/(m-1)$ since any eigenpair $(\lambda,\V{x})$ satisfies $\lambda = f(\V{x})$. Thus, choosing $\alpha > \beta(\T{A})$ implies that $\alpha$ is greater than the magnitude of any eigenvalue of $\T{A}$.
\begin{lemma}\label{lem:H_bound}
Let $\T{A} \in \RT{m}{n}$ be symmetric and let $H(\V{x})$ and $\beta(\T{A})$
be as defined in \Eqn{H} and \Eqn{beta}.
Then $\rho(H(\V{x})) \leq m \beta(\T{A})$ for all $\V{x} \in \Sigma$. \end{lemma} \begin{proof} This follows directly from \Eqn{H} and \Eqn{beta}. \end{proof}
The following theorem proves that \Alg{sshopm} will always converge. Choosing $\alpha >
(m-1) \sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}|$ is a conservative choice that is guaranteed to work by \Lem{beta_bound}, but this may slow down convergence considerably, as we show in subsequent analysis and examples.
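For a tensor stored as a dense numpy array, this conservative shift is a one-line computation (a sketch, assuming \texttt{numpy} has been imported as \texttt{np}; the function name is ours):
\begin{verbatim}
def conservative_shift(A):
    """(m-1) * sum of |a_{i1...im}|: always safe, but often much too large."""
    return (A.ndim - 1) * np.abs(A).sum()
\end{verbatim}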
\begin{theorem}\label{thm:main}
Let $\T{A} \in \RT{m}{n}$ be symmetric. For $\alpha > \beta(\T{A})$,
where $\beta(\T{A})$ is defined in \Eqn{beta},
the iterates $\{\lambda_k,\V{x}_k\}$ produced by \Alg{sshopm} satisfy the
following properties.
\begin{inparaenum}[(a)]
\item The sequence $\{\lambda_k\}$ is nondecreasing, and there exists
$\lambda_*$ such that $\lambda_k \rightarrow \lambda_*$.
\item The sequence $\{\V{x}_k\}$ has an accumulation point.
\item For every such accumulation point $\V{x}_*$, the pair $(\lambda_*,\V{x}_*)$ is an eigenpair of $\T{A}$.
\item \label{finite} If $\T{A}$ has finitely many real eigenvectors, then there exists $\V{x}_*$ such that $\V{x}_k \rightarrow \V{x}_*$.
\end{inparaenum} \end{theorem}
\begin{proof}
Our analysis depends on the modified function $\hat f$ defined in
\Eqn{hatf}. Its gradient and Hessian for $\V{x} \ne \V{0}$ are
\begin{align}
\hat g(\V{x}) &\equiv \nabla \hat f(\V{x}) =
g(\V{x}) + m \alpha (\V{x}^T\V{x})^{m/2-1} \V{x},\\
\hat H(\V{x}) &\equiv \nabla^2 \hat f(\V{x}) =
H(\V{x}) + m \alpha (\V{x}^T\V{x})^{m/2-1} \M{I} +
m(m - 2) \alpha (\V{x}^T\V{x})^{m/2-2} \V{x}\V{x}^T,
\end{align}
where $g$ and $H$ are the gradient and Hessian of $f$ from
  \Lem{g} and \Lem{H}, respectively. Because $\hat f(\V{x}) = O(\|\V{x}\|^m)$
  as $\V{x} \to \V{0}$ and $m \geq 3$, the function $\hat f$ vanishes to third or
  higher order at the origin; thus $\hat g(\V{0}) = \V{0}$ and
  $\hat H(\V{0}) = \M{0}$.
Because it is important for the entire proof, we first show that $\hat f$
is convex on $\Real^n$ for
$\alpha > \beta(\T{A})$. As noted, if $\V{x} = \V{0}$, we have $\hat H(\V{x}) = \M{0}$ for $m \ge 3$.
Consider nonzero $\V{x} \in \Real^n$ and
define $\bar\V{x} = \V{x} / \| \V{x} \| \in \Sigma$; then $\hat H(\V{x})$ is positive
semidefinite (in fact, positive definite) by \Lem{H_bound} since
\begin{align*}
\V{y}^T \hat H(\V{x})\V{y}
& = \|\V{x}\|^{m-2} \left( \V{y} ^T H(\bar\V{x}) \V{y}
+ m \alpha + m(m-1) \alpha (\bar\V{x}^T\V{y})^2 \right) \\
& \geq \|\V{x}\|^{m-2} \left( -m\beta(\T{A}) + m \alpha + 0 \right) > 0
\qtext{for all} \V{y} \in \Sigma.
\end{align*}
By \Prop{convex_hessian}, $\hat f$ is convex on $\Real^n$ because its
Hessian is positive semidefinite.
We also note that $-\alpha$ must be an eigenvalue of $\T{A}$ if $\hat g(\V{x}) =
\V{0}$ for some $\V{x} \in \Sigma$, since
\begin{displaymath}
\hat g(\V{x}) = \V{0}
\qtext{implies}
\T{A} \V{x}^{m-1} + \alpha \V{x} = \V{0}.
\end{displaymath}
By \Lem{f_bound}, choosing $\alpha > \beta(\T{A})$ ensures that $\alpha$
is greater than the magnitude of any eigenvalue, and so $\hat g(\V{x})
\neq \V{0}$ for all $\V{x} \in \Sigma$. This ensures that the update in
\Alg{sshopm}, which reduces to
\begin{equation}
\label{eq:xkp1}
\V{x}_{k+1} = \frac{\hat g(\V{x}_k)}{\| \hat g(\V{x}_k) \|}
\end{equation}
in the convex case, is always well defined.
\begin{asparaenum}[(a)]
\item \label{main1}
Since $\hat f$ is convex on $\Gamma$ and $\V{x}_{k+1}, \V{x}_k
\in \Sigma$ and $\V{x}_{k+1} = \nabla \hat f(\V{x}_k) / \| \nabla \hat
f(\V{x}_k) \|$, \Thm{cvx} yields
\begin{displaymath}
\lambda_{k+1} - \lambda_k
= \hat f(\V{x}_{k+1}) - \hat f(\V{x}_k) \geq 0,
\end{displaymath}
where the nonstrict inequality covers the possibility that $\V{x}_{k+1} = \V{x}_k$.
Thus, $\{\lambda_k\}$ is a nondecreasing sequence.
By \Lem{f_bound}, $\lambda_k = f(\V{x}_k)$ is bounded, so the sequence
must converge to a limit point $\lambda_*$.\footnote{Note that
the similar approach proposed for ICA
\cite[Theorem 2]{ReKo03} allows the shift $\alpha$ to vary at
each iteration so long as the underlying function remains convex.}
\item
Since $\{\V{x}_k\}$ is an infinite sequence on a compact set $\Sigma$, it
must have an accumulation point $\V{x}_* \in \Sigma$ by the Bolzano-Weierstrass theorem.
Note also that continuity of $f$ implies that $\lambda_* = \T{A}\V{x}_*^m$.
\item
By part (\ref*{main1}) of the proof, convexity of $\hat f$,
and \Prop{convex_gradient}, we have
\begin{displaymath}
\hat f(\V{x}_{k+1}) - \hat f(\V{x}_k) \rightarrow 0
\end{displaymath}
and thus
\begin{displaymath}
\hat g(\V{x}_k)^T (\V{x}_{k+1} - \V{x}_k)
\rightarrow 0.
\end{displaymath}
Using \Eqn{xkp1}, we can rewrite the above formula as
\begin{equation}
\label{eq:gxk}
\|\hat g(\V{x}_k)\| -
\hat g(\V{x}_k)^T \V{x}_k
\rightarrow 0.
\end{equation}
By continuity of $\hat g$, an accumulation point $\V{x}_*$ must satisfy
\begin{equation}
\label{eq:key}
\|\hat g(\V{x}_*)\| - \hat g(\V{x}_*)^T \V{x}_* = 0,
\end{equation}
which implies
\begin{displaymath}
\|\hat g(\V{x}_*)\| = \hat g(\V{x}_*)^T \V{x}_* = (m\, \T{A}\V{x}_*^{m-1} + m\alpha\V{x}_*)^T \V{x}_* = m(\lambda_* + \alpha).
\end{displaymath}
Because $\V{x}_* \in \Sigma$, \Eqn{key} can hold only if
\begin{displaymath}
\V{x}_* = \frac{\hat g(\V{x}_*)}{\| \hat g(\V{x}_*) \|} = \frac{m\, \T{A}\V{x}_*^{m-1} + m\alpha\V{x}_*}{m(\lambda_* + \alpha)},
\end{displaymath}
that is,
\begin{displaymath}
\T{A}\V{x}_*^{m-1} = \lambda_* \V{x}_*.
\end{displaymath}
Hence $(\lambda_*, \V{x}_*)$ is an eigenpair of $\T{A}$.
\item
Equation \Eqn{gxk} gives
\begin{displaymath}
\|\hat g(\V{x}_k)\| (1 - \V{x}_{k+1}^T \V{x}_k) \to 0.
\end{displaymath}
Because $\|\hat g(\V{x}_k)\|$ is bounded away from 0 and because $\V{x}_k,\V{x}_{k+1} \in \Sigma$,
this requires that
\begin{equation}
\label{eq:xk0}
\|\V{x}_k - \V{x}_{k+1}\| \to 0.
\end{equation}
Recall that every accumulation point of $\{\V{x}_k\}$ must be a (real)
eigenvector of $\T{A}$. If these eigenvectors are
finite in number and thus isolated, consider removing an arbitrarily small open neighborhood of each
from $\Sigma$, leaving a closed and thus compact space $Y \subset
\Sigma$ containing no accumulation points of $\{\V{x}_k\}$. If $\{\V{x}_k\}$ had infinitely many iterates in $Y$,
it would have an accumulation point in $Y$ by the Bolzano-Weierstrass theorem, creating a contradiction.
Therefore at most finitely many iterates are in $Y$, and $\{\V{x}_k\}$ is ultimately confined to
arbitrarily small neighborhoods of the eigenvectors.
By \Eqn{xk0}, however, $\|\V{x}_k - \V{x}_{k+1}\|$ eventually remains smaller than the minimum distance between
any two of these neighborhoods.
Consequently, the iteration ultimately cannot jump from one neighborhood to another, and so
in the limit $\{\V{x}_k\}$ is confined to an arbitrarily small neighborhood of a \emph{single} eigenvector
$\V{x}_*$, to which it therefore converges.
\end{asparaenum}
Hence, the proof is complete. \end{proof}
Note that the condition of finitely many real eigenvectors in part (\ref*{finite}) holds for generic tensors. We conjecture that the convergence of $\{\V{x}_k\}$ is guaranteed even without this condition.
\begin{example}
Again consider $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1}.
We show
results using a shift of $\alpha = 2$.
We ran 100 trials of SS-HOPM using
the experimental conditions described in \Ex{KoRe02_ex1}.
We found 3 real eigenpairs; the results are
summarized in \Tab{sshopm-convex}. Three example runs (one for each
eigenvalue) are shown in \Fig{sshopm-convex}.
We also considered the ``conservative'' choice of $\alpha = (m-1)
\sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}| = 55.6620$. We ran 100
trials of SS-HOPM using the experimental conditions described in
\Ex{KoRe02_ex1}, except that we increased the maximum number of
iterations to 10,000. Every trial converged to one of the same 3
real eigenpairs, but the number of iterations was around
1000 (versus around 60 for $\alpha =2$); in \Sec{fp}, we
see that the rate of convergence asymptotically decreases as
$\alpha$ increases.
Analogous results are shown for $\T{A} \in \RT{3}{3}$ from \Ex{odd}
with a shift of $\alpha=1$ in \Tab{sshopm-odd-convex} and
\Fig{sshopm-odd-convex}. Here SS-HOPM finds 2 additional eigenpairs
compared to S-HOPM\@. In this case, we also considered $\alpha = (m-1)
\sum_{i_1,\dots,i_m} |\TE{a}{i_1\dots i_m}| = 9.3560$, but this
again increased the number of iterations up to a factor of ten.
For both tensors,
$\{\lambda_k\}$ is always a nondecreasing sequence.
Observe further that SS-HOPM converges only to eigenpairs
that are negative stable. \end{example}
\begin{table}[htbp]
\centering
\subfloat[$\T{A}\in\RT{4}{3}$ from \Ex{KoRe02_ex1} with $\alpha=2$.]{
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
46 & $\phantom{-}0.8893$ & [ $\phantom{-}0.6672$ $\phantom{-}0.2471$ $-0.7027$ ] & 63 \\ \hline
24 & $\phantom{-}0.8169$ & [ $\phantom{-}0.8412$ $-0.2635$ $\phantom{-}0.4722$ ] & 52 \\ \hline
30 & $\phantom{-}0.3633$ & [ $\phantom{-}0.2676$ $\phantom{-}0.6447$ $\phantom{-}0.7160$ ] & 65 \\ \hline
\end{tabular}
\label{tab:sshopm-convex}
}
\subfloat[$\T{A}\in\RT{3}{3}$ from \Ex{odd} with $\alpha=1$.]{
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
40 & $\phantom{-}0.8730$ & [ $-0.3922$ $\phantom{-}0.7249$ $\phantom{-}0.5664$ ] & 32 \\ \hline
29 & $\phantom{-}0.4306$ & [ $-0.7187$ $-0.1245$ $-0.6840$ ] & 48 \\ \hline
18 & $\phantom{-}0.0180$ & [ $\phantom{-}0.7132$ $\phantom{-}0.5093$ $-0.4817$ ] & 116 \\ \hline
13 & $-0.0006$ & [ $-0.2907$ $-0.7359$ $\phantom{-}0.6115$ ] & 145 \\ \hline
\end{tabular}
\label{tab:sshopm-odd-convex}
}
\caption{Eigenpairs computed by SS-HOPM (convex) with 100 random starts.} \end{table}
\begin{figure}
\caption{Example $\lambda_k$ values for SS-HOPM (convex).
One sequence is shown for each distinct eigenvalue.}
\label{fig:sshopm-convex}
\label{fig:sshopm-odd-convex}
\end{figure}
Using a large enough negative value of $\alpha$ makes $\hat f$ concave. It was observed \cite{KoRe02} that $f(\V{x}) = f(-\V{x})$ for even-order tensors and so the sequence $\{\lambda_k\}$ converges regardless of correctly handling the minus sign. The only minor problem in the concave case is that the sequence of iterates $\{\V{x}_k\}$ does not converge. This is easily fixed, however, by correctly handling the sign as we do in \Alg{sshopm}. The corresponding theory for the concave case is presented in \Cor{main}. In this case we choose $\alpha$ to be negative, i.e., the theory suggests $\alpha < -\beta(\T{A})$.
\begin{corollary}
\label{cor:main}
Let $\T{A} \in \RT{m}{n}$ be symmetric. For $\alpha <
-\beta(\T{A})$, where $\beta(\T{A})$ is defined in \Eqn{beta},
the iterates $\{\lambda_k,\V{x}_k\}$ produced by \Alg{sshopm} satisfy the
following properties.
\begin{inparaenum}[(a)]
\item The sequence $\{\lambda_k\}$ is nonincreasing, and there exists
$\lambda_*$ such that $\lambda_k \rightarrow \lambda_*$.
\item The sequence $\{\V{x}_k\}$ has an accumulation point.
\item For any such accumulation point $\V{x}_*$, the pair $(\lambda_*,\V{x}_*)$ is an eigenpair of $\T{A}$.
\item If $\T{A}$ has finitely many real eigenvectors, then there exists $\V{x}_*$ such that $\V{x}_k \to \V{x}_*$.
\end{inparaenum} \end{corollary} \begin{proof}
Apply the proof of \Thm{main} with $f(\V{x}) = - \T{A}\V{x}^m$. \end{proof}
\begin{example}
Revisiting $\T{A}\in\RT{4}{3}$ in \Ex{KoRe02_ex1} again, we run another 100 trials using
$\alpha=-2$. We find 3 (new) real eigenpairs; the results are
summarized in \Tab{sshopm-concave}. Three
example runs (one for each eigenvalue) are shown in
\Fig{sshopm-concave}.
We also revisit $\T{A} \in \RT{3}{3}$ from \Ex{odd} and use
$\alpha=-1$. In this case, we find the opposites, i.e.,
$(-\lambda,-\V{x})$, of the eigenpairs found with $\alpha = 1$, as
shown in \Tab{sshopm-odd-concave}. This is to be expected for
odd-order tensors since there is symmetry, i.e., $f(\V{x}) =
-f(-\V{x})$, $C(\lambda,\V{x}) = -C(-\lambda,-\V{x})$, etc. Observe that
the median number of iterations is nearly unchanged; this is
explained in the subsequent subsection where we discuss the rate of convergence. Four example
runs (one per eigenvalue) are shown in \Fig{sshopm-odd-concave}.
The sequence $\{\lambda_k\}$ is
nonincreasing in every case. Each of the eigenpairs found in the
concave case is positive stable. \end{example}
\begin{table}[htbp]
\centering
\subfloat[$\T{A}\in\RT{4}{3}$ from \Ex{KoRe02_ex1} with $\alpha=-2$.]{
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
15 & $-0.0451$ & [ $-0.7797$ $-0.6135$ $-0.1250$ ] & 35 \\ \hline
40 & $-0.5629$ & [ $-0.1762$ $\phantom{-}0.1796$ $-0.9678$ ] & 23 \\ \hline
45 & $-1.0954$ & [ $-0.5915$ $\phantom{-}0.7467$ $\phantom{-}0.3043$ ] & 23 \\ \hline
\end{tabular}
\label{tab:sshopm-concave}
}
\subfloat[$\T{A}\in\RT{3}{3}$ from \Ex{odd} with $\alpha=-1$.]{
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
19 & $\phantom{-}0.0006$ & [ $\phantom{-}0.2907$ $\phantom{-}0.7359$ $-0.6115$ ] & 146 \\ \hline
18 & $-0.0180$ & [ $-0.7132$ $-0.5093$ $\phantom{-}0.4817$ ] & 117 \\ \hline
29 & $-0.4306$ & [ $\phantom{-}0.7187$ $\phantom{-}0.1245$ $\phantom{-}0.6840$ ] & 49 \\ \hline
34 & $-0.8730$ & [ $\phantom{-}0.3922$ $-0.7249$ $-0.5664$ ] & 33 \\ \hline
\end{tabular}
\label{tab:sshopm-odd-concave}
}
\caption{Eigenpairs computed by SS-HOPM (concave) with 100 random starts.} \end{table}
\begin{figure}
\caption{Example $\lambda_k$ values for SS-HOPM (concave).
One sequence is shown for each distinct eigenvalue.}
\label{fig:sshopm-concave}
\label{fig:sshopm-odd-concave}
\end{figure}
\subsection{SS-HOPM fixed point analysis} \label{sec:fp} In this section, we show that fixed point analysis allows us to easily characterize convergence to eigenpairs according to whether they are positive stable, negative stable, or unstable. The convex version of SS-HOPM will generically converge to eigenpairs that are negative stable; the concave version will generically converge to eigenpairs that are positive stable.
To justify these conclusions, we consider \Alg{sshopm} in the convex case as a fixed point iteration $\V{x}_{k+1} = \phi(\V{x}_k;\alpha)$, where $\phi$ is defined as \begin{equation}\label{eq:phi}
\phi(\V{x};\alpha) = \phi_1(\phi_2(\V{x};\alpha))
\text{ with }
\phi_1(\V{x}) = \frac{\V{x}}{(\V{x}^T\V{x})^{\frac{1}{2}}}
\text{ and }
\phi_2(\V{x};\alpha) = \T{A} \V{x}^{m-1} + \alpha \V{x}. \end{equation} Note that an eigenpair $(\lambda, \V{x})$ is a fixed point if and only if $\lambda + \alpha > 0$, which is always true for $\alpha > \beta(\T{A})$.
From \cite{Fa05}, the Jacobian of the operator $\phi$ is \begin{displaymath}
J(\V{x};\alpha) = \phi_1'(\phi_2(\V{x};\alpha)) \phi_2'(\V{x};\alpha), \end{displaymath} where derivatives are taken with respect to $\V{x}$ and \begin{displaymath}
\phi_1'(\V{x}) = \frac{(\V{x}^T\V{x}) \M{I} - \V{x}
\V{x}^T}{(\V{x}^T\V{x})^{\frac{3}{2}}}
\qtext{and}
\phi_2'(\V{x};\alpha) = (m-1)\T{A}\V{x}^{m-2} + \alpha \M{I}. \end{displaymath} At any eigenpair $(\lambda, \V{x})$, we have \begin{gather*}
\phi_2(\V{x};\alpha) = (\lambda + \alpha)\V{x}
, \quad
\phi_1'(\phi_2(\V{x};\alpha)) = \frac{(\M{I} - \V{x}\V{x}^T)}{\lambda + \alpha}
, \\
\qtext{and}
\phi_2'(\V{x};\alpha) = (m-1) \T{A} \V{x}^{m-2} + \alpha \M{I}. \end{gather*} Thus, the Jacobian at $\V{x}$ is \begin{equation}
\label{eq:J}
J(\V{x};\alpha) = \frac{(m-1)(\T{A}\V{x}^{m-2} - \lambda \V{x}\V{x}^T) +
\alpha(\M{I} - \V{x}\V{x}^T)}{\lambda + \alpha}. \end{equation} Observe that the Jacobian is symmetric.
\begin{theorem}
\label{thm:fp}
Let $(\lambda,\V{x})$ be an eigenpair of a symmetric tensor $\T{A} \in \RT{m}{n}$.
Assume $\alpha \in \mathbb{R}$ such that $\alpha > \beta(\T{A})$, where
$\beta(\T{A})$ is as defined in \Eqn{beta}.
Let $\phi(\V{x})$ be given by \Eqn{phi}.
Then $(\lambda,\V{x})$ is negative stable if and only if
$\V{x}$ is a linearly attracting fixed point of $\phi$. \end{theorem} \begin{proof}
Assume that $(\lambda, \V{x})$ is negative stable.
The Jacobian $J(\V{x};\alpha)$ is given by \Eqn{J}.
By \Thm{fixed_point},
we need to show that $\rho(J(\V{x};\alpha)) < 1$ or, equivalently since
$J(\V{x};\alpha)$ is symmetric,
$| \V{y}^T J(\V{x};\alpha) \V{y} | < 1$ for all $\V{y} \in \Sigma$. We restrict
our attention to $\V{y} \bot \V{x}$ since
$J(\V{x};\alpha) \V{x} = \V{0}$.
Let $\V{y} \in \Sigma$ with $\V{y} \bot \V{x}$. Then
\begin{displaymath}
|\V{y}^T J(\V{x};\alpha) \V{y}| =
\left|
\frac
{ \V{y}^T \left( (m-1) \T{A}\V{x}^{m-2} \right) \V{y} + \alpha }
{\lambda + \alpha}
\right|
\end{displaymath}
The assumption that $(\lambda,\V{x})$ is negative stable means that
$C(\lambda,\V{x})$ is negative definite; therefore,
\begin{inlinemath}
\V{y}^T \left((m-1)\T{A}\V{x}^{m-2}\right) \V{y} < \lambda.
\end{inlinemath}
On the other hand, by the definition of $\beta$,
\begin{displaymath}
\rho\!\left((m-1)\T{A}\V{x}^{m-2}\right) \leq \beta(\T{A}).
\end{displaymath}
Thus, using the fact that $\lambda + \alpha$ is positive, we have
\begin{displaymath}
0 <
\frac{-\beta(\T{A}) + \alpha}{\lambda + \alpha}
\leq \frac
{ \V{y}^T \left( (m-1) \T{A}\V{x}^{m-2} \right) \V{y} + \alpha }
{\lambda + \alpha}
<
\frac{\lambda + \alpha}{\lambda + \alpha} = 1
\end{displaymath}
Hence, $\rho(J(\V{x};\alpha)) < 1$, and $\V{x}$ is a linearly attracting fixed point.
On the other hand, if $(\lambda,\V{x})$ is not negative stable, then there exists
$\V{w} \in \Sigma$ such that $\V{w} \bot \V{x}$ and $\V{w}^T \left( (m-1) \T{A}\V{x}^{m-2} \right)
\V{w} \geq \lambda$. Thus,
\begin{displaymath}
\V{w}^T J(\V{x};\alpha) \V{w} =
\frac
{ \V{w}^T \left( (m-1) \T{A}\V{x}^{m-2} \right) \V{w} + \alpha }
{\lambda + \alpha}
\geq
\frac{\lambda + \alpha}{\lambda + \alpha} = 1.
\end{displaymath}
Consequently, $\rho(J(\V{x};\alpha)) \geq 1$, and $\V{x}$ is not a linearly attracting fixed point
by \Thm{fixed_point} and \Thm{unstable_fixed_point}. \end{proof}
In fact, we can see from the proof of \Thm{fp} that if the eigenpair $(\lambda,\V{x})$ is not negative stable, there is no choice of $\alpha \in \mathbb{R}$ that will make $\rho(J(\V{x};\alpha))<1$. For $\V{x}$ to be a fixed point at all, we must have $\lambda + \alpha > 0$, and this is sufficient to obtain $\rho(J(\V{x};\alpha)) \ge 1$ if $(\lambda,\V{x})$ is not negative stable. In other words, smaller values of $\alpha$ do not induce ``accidental'' convergence to any additional eigenpairs.
An alternative argument establishes, for $\alpha > \beta(\T{A})$, the slightly broader result that any attracting fixed point, regardless of order of convergence, must be a strict constrained local maximum of $f(\V{x}) = \T{A} \V{x}^m$ on $\Sigma$. That is, the marginally attracting case corresponds to a stationary point that has degenerate $C(\lambda,\V{x})$ but is still a maximum. This follows from \Thm{cvx}, where the needed convexity holds for $\alpha > \beta(\T{A})$, so that any vector $\V{x}' \in \Sigma$ in the neighborhood of convergence of $\V{x}$ must satisfy $f(\V{x}') < f(\V{x})$. One can convince oneself that the converse also holds for $\alpha > \beta(\T{A})$, i.e., any strict local maximum corresponds to an attracting fixed point. This is because the strict monotonicity of $f$ under iteration (other than at a fixed point) implies that the iteration acts as a contraction on the region of closed contours of $f$ around the maximum.
The counterpart of \Thm{fp} for the concave case is as follows.
\begin{corollary}
\label{cor:fp}
Let $(\lambda,\V{x})$ be an eigenpair of a symmetric tensor $\T{A} \in \RT{m}{n}$.
Assume $\alpha \in \mathbb{R}$ such that $\alpha < -\beta(\T{A})$, where
$\beta(\T{A})$ is as defined in \Eqn{beta}.
Let $\phi(\V{x})$ be given by \Eqn{phi}.
Then $(\lambda,\V{x})$ is positive stable if and only if
$\V{x}$ is a linearly attracting fixed point of $-\phi$. \end{corollary}
\begin{example}
We return again to $\T{A} \in \RT{4}{3}$ as defined in \Ex{KoRe02_ex1}.
\Fig{jac_KoRe02_ex1} shows the spectral radius of the Jacobian of
the fixed point iteration for varying values of $\alpha$ for all
eigenpairs that are positive or negative stable.
At $\alpha=0$, the spectral radius is greater
than 1 for every eigenvalue, and this is why S-HOPM never
converges. At $\alpha=2$, on the other hand, we see that the
spectral radius is less than 1 for all of the negative stable
eigenpairs. Furthermore, the spectral radius stays less than 1 as
$\alpha$ increases. Conversely, at $\alpha=-2$, the spectral radius
is less than 1 for all the eigenpairs that are positive stable.
In \Fig{rate_KoRe02_ex1}, we plot example iteration sequences for $\|\V{x}_{k+1}-\V{x}_*\|/\|\V{x}_k - \V{x}_*\|$ for each eigenpair, using $\alpha = 2$ for the negative stable eigenpairs and $\alpha = -2$ for the positive stable eigenpairs.
We expect $\|\V{x}_{k+1}-\V{x}_*\| = \sigma \|\V{x}_k - \V{x}_*\|$ where $\sigma$ is the spectral radius of the Jacobian $J(\V{x};\alpha)$. For example, for $\lambda = -1.0954$, we have $\sigma=0.4$ (shown as a dashed line) and this precisely matched the observed rate of convergence (shown as a solid line).
\Fig{sphere_f} plots $f(\V{x})$ on the unit sphere using color to
indicate function value.
We show the front and back of the sphere. Notice that the
horizontal axis is from $1$ to $-1$ in the left plot and from $-1$ to $1$ in the
right plot, as if walking around the sphere. In this image, the
horizontal axis corresponds to $x_2$ and the vertical axis to $x_3$;
the left image is centered at $x_1 = 1$ and the right image at
$x_1=-1$. Since $m$ is even, the function is symmetric, i.e.,
$f(\V{x}) = f(-\V{x})$.
The eigenvectors are shown as white, gray, and black circles
corresponding to their classification as negative stable, positive
stable, and unstable, respectively;
in turn, these correspond to maxima, minima, and saddle
points of $f(\V{x})$.
\Fig{sphere_convex} shows the basins of attraction for
SS-HOPM with $\alpha = 2$.
Every grid point on the sphere was used as a starting point for
SS-HOPM, and it is colored\footnote{Specifically, each block on the
sphere is colored according to the convergence of its lower left
point.} according to which eigenvalue it converged
to.
In this case, every run converges to a
negative stable eigenpair (labeled
with a white circle). Recall that SS-HOPM
must converge to some eigenpair per \Thm{main}, and \Thm{fp} says
that it is generically a negative stable eigenpair.
Thus, the
non-attracting points lie on the boundaries of the domains of
attraction.
\Fig{sphere_concave} shows the basins of attraction for SS-HOPM with
$\alpha = -2$. In this case, every starting point converges to an
eigenpair that is positive stable (shown
as gray circles). \end{example}
\begin{figure}
\caption{Spectral radii of the Jacobian $J(\V{x};\alpha)$ for
different eigenpairs as $\alpha$ varies.}
\label{fig:jac_KoRe02_ex1}
\label{fig:jac_odd}
\end{figure}
\begin{figure}
\caption{Example plots of $\|\V{x}_{k+1}-\V{x}_*\|/\|\V{x}_k - \V{x}_*\|$.
The expected rate of convergence from $J(\V{x}_*;\alpha)$ is shown
as a dashed line.}
\label{fig:rate_KoRe02_ex1}
\label{fig:rate_odd}
\label{fig:rate}
\end{figure}
\begin{figure}
\caption{Illustrations for $\T{A} \in \RT{4}{3}$ from
\Ex{KoRe02_ex1}. The horizontal axis corresponds
to $\VE{x}{2}$ and the vertical axis to $\VE{x}{3}$; the left
image is centered at $\VE{x}{1}=1$ and the right at
$\VE{x}{1}=-1$. White, gray, and black dots indicate eigenvectors
that are negative stable, positive stable, and unstable,
respectively.}
\label{fig:sphere_f}
\label{fig:sphere_convex}
\label{fig:sphere_concave}
\end{figure}
\begin{figure}
\caption{Illustrations for $\T{A} \in \RT{3}{3}$ from \Ex{odd}.
The horizontal axis corresponds
to $\VE{x}{2}$ and the vertical axis to $\VE{x}{3}$; the left
image is centered at $\VE{x}{1}=1$ and the right at
$\VE{x}{1}=-1$. White, gray, and black dots indicate eigenvectors
that are negative stable, positive stable, and unstable,
respectively. }
\label{fig:sphere_odd_f}
\label{fig:sphere_odd_zero}
\label{fig:sphere_odd_convex}
\end{figure}
\begin{example}
We return again to $\T{A} \in \RT{3}{3}$ from \Ex{odd}, which is
interesting because S-HOPM was able to find 2 of its eigenpairs
without any shift.
In \Fig{sphere_odd_f},
$f(\V{x})$ is plotted on the unit sphere, along with each
eigenvector, colored white, gray, or black based on whether it is
negative stable, positive stable, or unstable, respectively. Observe that
the function is antisymmetric, i.e., $f(\V{x}) = -f(-\V{x})$.
\Fig{sphere_odd_zero} shows the basins of attraction for S-HOPM
(i.e., SS-HOPM with $\alpha = 0$). Every starting point converges to
one of the 2 labeled eigenpairs.
This is not surprising because \Fig{jac_odd} shows that there are
2 eigenvalues for which the spectral radius of the Jacobian is
less than 1 ($\lambda = 0.8730$ and $0.4306$). The other 2
eigenvalues are non-attracting for $\alpha=0$. \Fig{rate_odd} shows
the observed rates of convergence.
\Fig{sphere_odd_convex} shows the basins of attraction for SS-HOPM
with $\alpha = 1$; each negative stable eigenpair
(shown as a white circle) is an attracting
eigenpair.
The concave case is just a mirror image and is not
shown. \end{example}
As the previous example reminds us, for odd order, there is no need to try both positive and negative $\alpha$ because the definiteness of $C$ flips for eigenvectors of opposite sign.
Two additional examples of SS-HOPM are presented in \App{more}.
\subsection{Relationship to power method for matrix eigenpairs}
The power method for matrix eigenpairs is a technique for finding the largest-magnitude eigenvalue (and corresponding eigenvector) of a diagonalizable symmetric matrix \cite{GoVa96}. Let $\M{A}$ be a symmetric real-valued $n \times n$ matrix. Then the matrix power method is defined by \begin{displaymath}
\V{x}_{k+1} = \frac{\M{A}\V{x}_k}{\|\M{A}{\V{x}_k}\|}. \end{displaymath}
Assume that $\M{V} \M{\Lambda} \M{V}^T$ is the Schur decomposition of $\M{A}$ with eigenvalues satisfying $|\lambda_1| >
|\lambda_2| \geq \cdots \geq |\lambda_n|$ (note the strict inequality between the two largest magnitudes). The sequence $\{\V{x}_k\}$ produced by the matrix power method always converges (up to sign) to the eigenvector associated with $\lambda_1$. Shifting the matrix by $\M{A} \leftarrow \M{A} + \alpha \M{I}$ shifts the eigenvalues by $\lambda_j \leftarrow \lambda_j + \alpha$, potentially altering which eigenvalue has the largest magnitude.
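For reference, the $m=2$ specialization of \Alg{sshopm} is exactly this shifted matrix power iteration; a numpy sketch (ours, with our own stopping rule) is:
\begin{verbatim}
import numpy as np

def shifted_matrix_power(Amat, x0, alpha, tol=1e-14, maxit=1000):
    x = x0 / np.linalg.norm(x0)
    lam = x @ Amat @ x
    for _ in range(maxit):
        y = Amat @ x + alpha * x
        if alpha < 0:                         # concave (flipped) case
            y = -y
        x = y / np.linalg.norm(y)
        lam_new = x @ Amat @ x
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
\end{verbatim}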
In the matrix case, the eigenvalues of the Jacobian defined by \Eqn{J} for an eigenpair $(\lambda_j,\V{x}_j)$ are given by \begin{displaymath}
\{0\} \cup \left\{ \frac{\lambda_i + \alpha}{\lambda_j + \alpha} : 1 \le i \le n \text{ with } i \ne j \right\}. \end{displaymath} Thus, the Jacobian at $\V{x}_1$ is the only one such that $\rho(J(\V{x};\alpha)) < 1$; no other eigenvectors are stable fixed points of the iteration. This corresponds to \Thm{fp} (or \Cor{fp}), since the most positive eigenvalue is negative stable, the most negative eigenvalue is positive stable, and every other eigenvalue is unstable. The eigenpair $(\lambda_1,\V{x}_1)$ is an attractor for ordinary (convex) power iteration if $\lambda_1 > 0$ or for flipped (concave) power iteration if $\lambda_1 < 0$.
In contrast to the matrix power method, SS-HOPM can find multiple eigenpairs since there may be multiple positive and negative stable eigenpairs. But, as for matrices, since the most positive and most negative eigenvalues correspond to the global maximum and minimum of $f(\V{x})$, they must be negative stable and positive stable respectively. Thus, choosing $\alpha$ positive is necessary for finding the most positive tensor eigenvalue; conversely, $\alpha$ negative is necessary for finding the most negative tensor eigenvalue. Unfortunately, the ability to find multiple eigenpairs means that there is no guarantee that the iterates will converge to an extremal eigenpair from every starting point. In fact, multiple starting points may be needed.
\subsection{Comparison to other methods} SS-HOPM is useful for its guaranteed convergence properties and its simple implementation based on tensor-vector multiplication. For fixed $m$ and large $n$, the computational complexity of each iteration of SS-HOPM is $O(n^m)$, which is the number of individual terms to be computed in $\T{A}\V{x}^{m-1}$. This is analogous to the $O(n^2)$ complexity of matrix-vector multiplication as used in the matrix power method. We do not yet know how the number of iterations needed for numerical convergence of SS-HOPM depends on $m$ and $n$.
The convergence of SS-HOPM to only a subset of eigenvalues, which tend to be among the largest in magnitude, is beneficial when the large eigenvalues are of primary interest, as in the rank-1 approximation problem \cite{KoRe02}. In particular, the most positive eigenvalue and most negative eigenvalue always have a region of stable convergence for a suitable choice of shift. However, the lack of stable convergence to certain other eigenvalues is a disadvantage if those eigenvalues are of interest.
One evident computational approach for finding tensor eigenpairs should be compared with SS-HOPM\@. This is to apply a numerical solver for nonlinear equation systems, such as Newton's method, directly to the eigenvalue equations \Eqn{EVP}. The computational complexity of each iteration of Newton's method for this system is that of SS-HOPM plus the construction and inversion of the $(n + 1) \times (n + 1)$ Jacobian for $(\lambda, \V{x})$. The Jacobian construction is effectively included in SS-HOPM, since it is dominated by computing $\T{A}\V{x}^{m-2}$, which is a precursor of $\T{A}\V{x}^{m-1}$. The additional work for inversion is $O(n^3)$, and for $m \ge 3$ it does not affect the complexity scaling, which remains $O(n^m)$.
Two advantages of an approach such as Newton's method are generic locally stable convergence, which enables finding eigenpairs not found by SS-HOPM, and the quadratic order of convergence, which can be expected to require fewer iterations than the linearly convergent SS-HOPM\@. On the other hand, there is no known guarantee of global convergence as there is for SS-HOPM, and it is possible that many starting points fail to converge. Even those that do converge may lead to eigenpairs of less interest for a particular application. Furthermore, certain tensor structures can be more efficiently handled with SS-HOPM than with Newton's method. For example, consider a higher-order symmetric tensor expressed as a sum of terms, each of which is an outer product of matrices. The computation of $\T{A}\V{x}^{m-1}$ then reduces to a series of matrix-vector multiplications, which are $O(n^2)$. This compares favorably to the $O(n^3)$ of Newton's method for the same tensor. Further investigation of general nonlinear solver approaches to the tensor eigenvalue problem will be beneficial.
Finally, we consider a polynomial solver approach, such as we implemented in Mathematica. This can find all eigenpairs (subject to numerical conditioning issues) but becomes computationally expensive for large $m$ and $n$. In part this is simply because, from \Thm{neigs}, the number of eigenpairs grows exponentially with $n$. The solver in Mathematica is designed to find all solutions; it is not clear whether a substantial improvement in efficiency would be possible if only one or a few solutions were required.
Nevertheless, for comparison with the iterative approaches discussed above, we have measured the computational time per eigenpair on a desktop computer for various values of $m$ and $n$, as shown in \Fig{mathematica-timing}. The complexity of the polynomial solution, even measured per eigenpair, is seen to increase extremely rapidly (faster than exponentially) with $n$. Thus the polynomial solver approach is not expected to be practical for large $n$.
\begin{figure}
\caption{Average time (over 10 trials) required to compute all eigenpairs, divided by the number of eigenpairs,
for random symmetric tensors in $\RT{m}{n}$. Note logarithmic vertical scale. Measured using \texttt{NSolve} in
Mathematica on a 4 GHz Intel Core i7.}
\label{fig:mathematica-timing}
\end{figure}
\section{Complex case} \label{sec:complex}
We present the more general definition of complex eigenpairs and some related results, and propose an extension of the SS-HOPM algorithm to this case.
\subsection{Eigenrings}
Some of the solutions of the polynomial system that results from the eigenvalue equations may be complex; thus, the definition can be extended to the complex case as follows, where $^\dagger$ denotes the conjugate transpose.
\begin{definition}
\label{def:EVP-COMPLEX}
Assume that $\T{A}$ is a symmetric $m^\text{th}$-order $n$-dimensional
real-valued tensor.
Then $\lambda \in \mathbb{C}$ is an
\emph{eigenvalue} of $\T{A}$ if there exists $\V{x} \in \mathbb{C}^n$ such that
\begin{equation}\label{eq:EVP-COMPLEX}
\T{A}\V{x}^{m-1} = \lambda \V{x} \qtext{and} \V{x}^\dagger\V{x}=1.
\end{equation}
The vector $\V{x}$ is a corresponding \emph{eigenvector}, and
$(\lambda,\V{x})$ is called an \emph{eigenpair}. \end{definition}
\Def{EVP-COMPLEX} is closely related to the E-eigenpairs defined by Qi \cite{Qi05,Qi07} but differs in the constraint on $\V{x}$.\footnote{Qi
\cite{Qi05,Qi07} requires $\V{x}^T\V{x}=1$ rather than
$\V{x}^\dagger\V{x}=1$.} It can also be considered as the obvious extension of ($l^2$-)eigenpairs to $\mathbb{C}$.
It has been observed \cite{Qi07,CaSt10} that the complex eigenpairs of a tensor form equivalence classes under a multiplicative transformation. Specifically, if $(\lambda,\V{x})$ is an eigenpair of $\T{A} \in \RT{m}{n}$ and $\V{y} = e^{i\varphi}\V{x}$ with $\varphi \in \mathbb{R}$, then $\V{y}^\dagger\V{y} = \V{x}^\dagger\V{x} = 1$ and \begin{displaymath}
\T{A}\V{y}^{m-1}
= e^{i(m-1)\varphi} \T{A}\V{x}^{m-1}
= e^{i(m-1)\varphi} \lambda \V{x}
= e^{i(m-2)\varphi} \lambda \V{y}. \end{displaymath} Therefore $(e^{i(m-2)\varphi}\lambda,e^{i\varphi}\V{x})$ is also an eigenpair of $\T{A}$ for any $\varphi \in \mathbb{R}$. Consequently, if $\lambda$ is an eigenvalue, then any other $\lambda' \in \mathbb{C}$ with
$|\lambda'| = |\lambda|$ is also an eigenvalue. This leads to the notion of an eigenring.
\begin{definition}[Eigenring]
For any $(\lambda,\V{x}) \in \mathbb{C} \times \mathbb{C}^n$ that is an
eigenpair of $\T{A} \in \RT{m}{n}$, we define a corresponding
equivalence class of (vector-normalized) eigenpairs
\begin{displaymath}
\mathcal{P}(\lambda,\V{x}) =
\{ (\lambda',\V{x}') :
\lambda' = e^{i(m-2)\varphi} \lambda,
\V{x}' = e^{i\varphi} \V{x},
\varphi \in \mathbb{R}
\},
\end{displaymath}
as well as a corresponding \emph{eigenring}
\begin{displaymath}
\mathcal{R}(\lambda) = \{ \lambda' \in \mathbb{C} : |\lambda'| =
|\lambda| \}.
\end{displaymath} \end{definition} Thus, any eigenring contains either 1 or 2 real eigenvalues. The special case of real eigenpairs occurs whenever the corresponding eigenvector for one of those real eigenvalues is also real.
Since we assume that $\T{A}$ is real-valued, any nonreal eigenpairs must come in sets of 2 related by complex conjugation, because taking the conjugate of the eigenvalue equation does not change it. Such conjugate eigenpairs are not members of the same equivalence class unless they are equivalent to a real eigenpair.
An elegant result has recently been derived for the number of distinct (non-equivalent) eigenvalues of a symmetric tensor. The result was first derived for even-order tensors by Ni et al.\@ \cite{NiQiWaWa07} and later generalized by Cartwright and Sturmfels \cite{CaSt10} for all cases. The case of $m=2$ requires application of l'H\^opital's rule to see that there are $n$ eigenvalues.
\begin{theorem}[{Cartwright and Sturmfels \cite{CaSt10}}]
\label{thm:neigs}
A generic symmetric tensor $\T{A}\in\RT{m}{n}$ has
$((m-1)^n-1)/(m-2)$ distinct eigenvalue equivalence classes. \end{theorem}
These papers \cite{NiQiWaWa07,CaSt10} use the condition $\V{x}^T\V{x} = 1$ to normalize eigenpairs, but in the generic case the result is the same for our condition $\V{x}^\dagger\V{x} = 1$. This is because the eigenpairs with $\V{x}^\dagger\V{x} = 1$ generically consist of isolated equivalence classes that have $\V{x}^T\V{x} \ne 0$ and thus can be rescaled to satisfy $\V{x}^T\V{x} = 1$, giving a one-to-one correspondence between the distinct eigenvalues in the two normalizations. In special cases, however, the condition $\V{x}^\dagger\V{x} = 1$ admits additional eigenpairs with $\V{x}^T\V{x} = 0$. Furthermore, tensors can be constructed with a continuous family of non-equivalent eigenvectors that correspond to the same eigenvalue when normalized by $\V{x}^T\V{x}$ but to infinitely many distinct eigenvalues when normalized by $\V{x}^\dagger\V{x}$ \cite[Example 5.7]{CaSt10}.
The polynomial system solver using Gr\"obner bases mentioned earlier can also be used to find complex solutions. A complication is that our normalization condition $\V{x}^\dagger\V{x} = 1$ is nonpolynomial due to the complex conjugation. The system, however, becomes polynomial if the alternate normalization condition $\V{x}^T\V{x} = 1$ is temporarily adopted. Any such $\V{x}$ can be rescaled to satisfy $\V{x}^\dagger\V{x} = 1$. Complex eigenvectors with $\V{x}^T\V{x} = 0$ will not be found, but, as just noted, these do not occur generically. Any nonreal solutions are transformed to a representative of the eigenring with positive real $\lambda$ by setting \begin{displaymath}
(\lambda, \V{x}) \leftarrow \left(\frac{|\lambda|}{(\V{x}^\dagger\V{x})^{m/2-1}},\, \left(\frac{|\lambda|}{\lambda}\right)^{1/(m-2)}\! \frac{\V{x}}{(\V{x}^\dagger\V{x})^{1/2}}\right). \end{displaymath} This polynomial system solution becomes prohibitively expensive for large $m$ or $n$; however, for \Ex{KoRe02_ex1}, the nonreal eigenpairs can be computed this way and are presented in \Tab{KoRe02_ex1_nonreal}. Thus, from this and \Tab{KoRe02_ex1}, we have found the 13 eigenvalue equivalence classes (real and nonreal) guaranteed by \Thm{neigs}. \begin{table}
\centering
\footnotesize
\begin{tabular}{|c|c|} \hline $\lambda$ & $\V{x}^T$ \\ \hline
$\phantom{-}0.6694$ & [ $\phantom{-}0.2930 + 0.0571 i$ $\phantom{-}0.8171 - 0.0365 i$ $-0.4912 - 0.0245 i$ ] \\ \hline
$\phantom{-}0.6694$ & [ $\phantom{-}0.2930 - 0.0571 i$ $\phantom{-}0.8171 + 0.0365 i$ $-0.4912 + 0.0245 i$ ] \\ \hline
\end{tabular}
\caption{Nonreal eigenpairs for $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1}.}
\label{tab:KoRe02_ex1_nonreal} \end{table}
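The rescaling displayed before \Tab{KoRe02_ex1_nonreal} can be coded directly. The following sketch (ours) maps a polynomial-solver solution normalized by $\V{x}^T\V{x}=1$ to the eigenring representative with positive real $\lambda$ and $\V{x}^\dagger\V{x}=1$; \texttt{np.vdot} supplies the conjugated inner product.
\begin{verbatim}
import numpy as np

def to_ring_representative(lam, x, m):
    lam = complex(lam)
    s = np.vdot(x, x).real                    # x^dagger x (real and positive)
    lam_new = abs(lam) / s ** (m / 2 - 1)
    phase = (abs(lam) / lam) ** (1.0 / (m - 2))
    return lam_new, phase * x / np.sqrt(s)
\end{verbatim}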
\subsection{SS-HOPM for Complex Eigenpairs}
We propose an extension of the SS-HOPM algorithm to the case of complex vectors in \Alg{csshopm}. Observe that the division by $\lambda_k + \alpha$ keeps the phase of $\V{x}_k$ from changing unintentionally. It is akin to taking the negative in the concave case in \Alg{sshopm}. It is important to note that even if an eigenpair is real, there is no guarantee that the complex SS-HOPM will converge to the real eigenpair; instead, it will converge to some random rotation in the complex plane. We have no convergence theory in the convex case, but we present several promising numerical examples.
\begin{algorithm}
\caption{Complex SS-HOPM}
\label{alg:csshopm}
Given a tensor $\T{A} \in \RT{m}{n}$.
\begin{algorithmic}[1]
\Require $\V{x}_0 \in \mathbb{C}^n$ with $\| \V{x}_0 \| = 1$. Let
$\lambda_0 = \T{A} \V{x}_0^{m}$.
\Require $\alpha \in \mathbb{C}$
\For{$k=0,1,\dots$}
\State $\hat \V{x}_{k+1} \gets (\T{A} \V{x}_k^{m-1} + \alpha \V{x}_k) /
(\lambda_k + \alpha)$
\State $\V{x}_{k+1} \gets \hat \V{x}_{k+1} / \| \hat \V{x}_{k+1} \|$
\State $\lambda_{k+1} \gets \V{x}_{k+1}^\dagger\T{A}\V{x}_{k+1}^{m-1}$
\EndFor
\end{algorithmic} \end{algorithm}
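A numpy sketch of \Alg{csshopm} (ours; complex arithmetic throughout, with \texttt{np.vdot} supplying $\V{x}^\dagger\T{A}\V{x}^{m-1}$, and the tolerance and iteration cap chosen to match our experiments) is:
\begin{verbatim}
import numpy as np

def Axk(A, x, k):
    for _ in range(k):
        A = np.tensordot(A, x, axes=1)        # no conjugation, as in A x^{m-1}
    return A

def css_hopm(A, x0, alpha, tol=1e-16, maxit=1000):
    m = A.ndim
    x = x0.astype(complex) / np.linalg.norm(x0)
    lam = np.vdot(x, Axk(A, x, m - 1))        # x^dagger A x^{m-1}
    for _ in range(maxit):
        y = (Axk(A, x, m - 1) + alpha * x) / (lam + alpha)
        x = y / np.linalg.norm(y)
        lam_new = np.vdot(x, Axk(A, x, m - 1))
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
\end{verbatim}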
\begin{example}
We once again revisit $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1} and test the complex version
of SS-HOPM in \Alg{csshopm}. \Tab{eigenrings_pos} shows the
results of 100 runs using the same experimental conditions as in
\Ex{KoRe02_ex1} except with complex random starting vectors.
We find 7 distinct eigenrings --- the 6 stable real
eigenpairs as well as a ring corresponding to the 2 complex
eigenpairs.
\Fig{eigenrings_pos} shows the individual $\lambda_*$ values plotted
on the complex plane. As mentioned above, it may converge anywhere
on the eigenring, though there is clear bias toward the
value of $\alpha$.
To investigate this phenomenon further, we do another experiment with
$\alpha = -(1+i)/\sqrt{2}$. It finds the same eigenrings as before
as shown in \Tab{eigenrings_imag}, but this time the
$\lambda_*$ values are distributed mostly in the lower left quadrant of the
complex plane as shown in \Fig{eigenrings_imag}, again close to the value
of $\alpha$. In the case of
the 2 complex eigenpairs with the same eigenring, the method
finds the 2 distinct eigenvectors (i.e., defining 2
different equivalence classes) in the 4 different times it
converges to that eigenvalue; this is not
surprising since the complex eigenvalue has 2 different
eigenvectors as shown in \Tab{KoRe02_ex1}.
We also ran an experiment with $\alpha=0$. In this case, 95 trials
converged, but to non-eigenpairs (all with $|\lambda| = 0.3656$). In
each case, even though $\{\lambda_k\}$ converged, we had
$\|\V{x}_{k+1} - \V{x}_k\| \rightarrow 1.1993$, indicating that the
sequence $\{\V{x}_k\}$ had not converged and hence we did not obtain an $\V{x}_*$
satisfying the eigenvalue equation \Eqn{EVP-COMPLEX}. Although it is
not shown, in all the convergent examples with the shifts mentioned
above, the $\{\V{x}_k\}$ sequence converged. \end{example}
\begin{table}[htbp]
\centering
\caption{Eigenrings computed for $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1} by complex SS-HOPM with 100 random starts.}
\subfloat[$\alpha = 2$.]{
\label{tab:eigenrings_pos}
\footnotesize
\begin{tabular}{|c|c|}\hline
\# Occurrences & $|\lambda|$ \\ \hline
18 & 1.0954 \\ \hline
18 & 0.8893 \\ \hline
21 & 0.8169 \\ \hline
1 & 0.6694 \\ \hline
22 & 0.5629 \\ \hline
8 & 0.3633 \\ \hline
12 & 0.0451 \\ \hline
\end{tabular}
}
~~~
\subfloat[$\alpha = -(1+i)/\sqrt{2}$ (2 failures).]{
\label{tab:eigenrings_imag}
\footnotesize
\begin{tabular}{|c|c|}\hline
\# Occurrences & $|\lambda|$ \\ \hline
22 & 1.0954 \\ \hline
15 & 0.8893 \\ \hline
12 & 0.8169 \\ \hline
4 & 0.6694 \\ \hline
16 & 0.5629 \\ \hline
9 & 0.3633 \\ \hline
20 & 0.0451 \\ \hline
\end{tabular}
} \end{table}
\begin{figure}
\caption{For $\T{A} \in \RT{4}{3}$ from \Ex{KoRe02_ex1}, final $\lambda$ values (indicated by
red asterisks) for 100 runs of complex SS-HOPM\@. The green lines
denote the eigenrings.}
\label{fig:eigenrings_pos}
\label{fig:eigenrings_imag}
\label{fig:complex}
\end{figure}
\section{Conclusions} \label{sec:conclusions} We have developed the shifted symmetric higher-order power method (SS-HOPM) for finding tensor eigenvalues. The method can be considered as a higher-order analogue to the power method for matrices. Just as in the matrix case, it cannot find all possible eigenvalues, but it is guaranteed to be able to find the largest-magnitude eigenvalue. Unlike the matrix case, it can find multiple eigenvalues; multiple starting points are typically needed to find the largest eigenvalue. A GPU-based implementation of SS-HOPM has been reported \cite{GPU}.
Building on previous work \cite{KoRe02,ReKo03,Er09}, we show that SS-HOPM will always converge to a real eigenpair for an appropriate choice of $\alpha$. Moreover, using fixed point analysis, we characterize exactly which real eigenpairs can be found by the method, i.e., those that are positive or negative stable. Alternative methods will need to be developed for finding the unstable real eigenpairs, i.e., eigenpairs for which $C(\lambda,\V{x})$ is indefinite. A topic for future investigation is that the boundaries of the basins of attraction for SS-HOPM seem to be defined by the non-attracting eigenvectors.
We present a complex version of SS-HOPM and limited experimental results indicating that it finds eigenpairs, including complex eigenpairs. Analysis of the complex version is a topic for future study.
Much is still unknown about tensor eigenpairs. For example, how do the eigenpairs change with small perturbations of the tensor entries? Is there an eigendecomposition of a tensor? Can the convergence rate of the current method be accelerated? How does one numerically compute unstable eigenpairs? For computing efficiency, what is the optimal storage for symmetric tensors? These are all potential topics of future research.
\section*{Acknowledgments} We thank Fangxiang Jiao and Chris Johnson (U. Utah), David Rogers (Sandia), and Dennis Bernstein (U. Michigan) for contacting us with interesting applications and providing test data. We thank Arindam Banerjee (U. Minnesota) for providing yet another motivating application.
We thank Dustin Cartwright and Bernd Sturmfels (UC Berkeley) for helpful discussions, especially about the number of eigenpairs for a problem of a given size.
We also thank our colleagues at Sandia for numerous helpful conversations in the course of this work, especially Grey Ballard (intern from UC Berkeley).
We are grateful to the three anonymous referees for identifying important references that we missed and for constructive feedback that greatly improved the manuscript.
\opt{draft,siam}{
} \opt{arXiv}{ \input{paper.bbl} }
\appendix \setlength{\floatsep}{0pt} \section{Further examples} \label{sec:more}
For additional insight, we consider two analytical examples. In the experiments in this section, each random trial used a point $\V{x}_0$
chosen from a uniform distribution on $[-1,1]^n$. We allow up to 1000 iterations and say that the algorithm has converged if $|\lambda_{k+1}
- \lambda_k | < 10^{-15}$.
Consider the tensor $\T{A}\in\RT{3}{3}$ whose entries are 0 except where the indices are all unequal, in which case the entries are 1, i.e., \begin{equation}\label{eq:simple1}
\TE{a}{ijk} =
\begin{cases}
1 & \text{if $(i,j,k)$ is some permutation of $(1,2,3)$}, \\
0 & \text{otherwise}.
\end{cases} \end{equation} Any eigenpair $(\lambda,\V{x})$ must satisfy the following system of equations: \begin{align*} 2x_2x_3 &= \lambda x_1, & 2x_1x_3 &= \lambda x_2, & 2 x_1 x_2 &= \lambda x_3, & x_1^2 + x_2^2 + x_3^2 &= 1. \end{align*} The 7 real eigenpairs can be computed analytically and are listed in \Tab{simple1}, from which we can see that there are 4 negative stable eigenpairs that should be identifiable by SS-HOPM\@. \Fig{jac_simple1} shows the spectral radius of the Jacobian as $\alpha$ varies; the curve is identical for all 4 negative stable eigenpairs.
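The following self-contained sketch (ours) constructs this tensor as a numpy array and verifies one of the analytic eigenpairs directly:
\begin{verbatim}
import itertools
import numpy as np

def Axk(A, x, k):
    for _ in range(k):
        A = np.tensordot(A, x, axes=1)
    return A

n = 3
A = np.zeros((n, n, n))
for i, j, k in itertools.permutations(range(n)):
    A[i, j, k] = 1.0

x = np.ones(n) / np.sqrt(3)
lam = 2 / np.sqrt(3)
print(np.allclose(Axk(A, x, 2), lam * x))     # True: (lam, x) is an eigenpair
\end{verbatim}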
Another example is given as follows. Define the tensor $\T{A} \in \RT{4}{2}$ by \begin{equation}
\label{eq:simple2}
a_{ijkl} = 0 \qtext{for all} i,j,k,l \qtext{except} a_{1111} = 1
\qtext{and} a_{2222} = -1. \end{equation} The eigenpairs can be computed analytically as solutions to the following system: \begin{align*} x_1^3 &= \lambda x_1, & -x_2^3 &= \lambda x_2, & x_1^2 + x_2^2 & = 1. \end{align*} The 2 real eigenpairs are listed in \Tab{simple2}, from which we can see that one is negative stable and the other is positive stable. \Fig{jac_simple1} shows the spectral radius of the Jacobian as $\alpha$ varies. In this case, the spectral radius of the Jacobian can be computed analytically; for $\lambda=1$, it is $\frac{\alpha}{1+\alpha}$ and hence there is a singularity for $\alpha=-1$.
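The same kind of sketch adapts to this fourth-order example by changing only the tensor contraction; again, this code is our own illustration rather than part of the reported experiments.
\begin{verbatim}
import numpy as np

def Axm1(A, x):
    # (A x^{m-1})_i = sum_{j,k,l} a_{ijkl} x_j x_k x_l for m = 4
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

A = np.zeros((2, 2, 2, 2))
A[0, 0, 0, 0] = 1.0
A[1, 1, 1, 1] = -1.0

x = np.random.uniform(-1, 1, 2)
x /= np.linalg.norm(x)
alpha = 0.5
for _ in range(1000):
    xhat = Axm1(A, x) + alpha * x    # shifted step with alpha = 0.5
    x = xhat / np.linalg.norm(xhat)
print(x @ Axm1(A, x), x)   # typically ~ 1.0 and x ~ (+-1, 0)
\end{verbatim}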
\begin{table}[htbp]
\centering
\caption{Eigenpairs for two analytical problems.}
\subfloat[Eigenpairs for $\T{A} \in \RT{3}{3}$ from \Eqn{simple1}.]{
\label{tab:simple1}
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
$\lambda$ & $\V{x}$ & eigenvalues of $C(\lambda,\V{x})$ & Type \\ \hline
$0$ & [ $1$ $0$ $0$ ] & $\{$ $-2$ , $2$ $\}$ & Unstable \\ \hline
$0$ & [ $0$ $1$ $0$ ] & $\{$ $-2$ , $2$ $\}$ & Unstable \\ \hline
$0$ & [ $0$ $0$ $1$ ] & $\{$ $-2$ , $2$ $\}$ & Unstable \\ \hline
$2/\sqrt{3}$ & [ $\phantom{-}1/\sqrt{3}$ $\phantom{-}1/\sqrt{3}$ $\phantom{-}1/\sqrt{3}$ ] & $\{$ $-2.3094$ , $-2.3094$ $\}$ & Neg. Stable \\ \hline
$2/\sqrt{3}$ & [ $\phantom{-}1/\sqrt{3}$ $-1/\sqrt{3}$ $-1/\sqrt{3}$ ] & $\{$ $-2.3094$ , $-2.3094$ $\}$ & Neg. Stable \\ \hline
$2/\sqrt{3}$ & [ $-1/\sqrt{3}$ $\phantom{-}1/\sqrt{3}$ $-1/\sqrt{3}$ ] & $\{$ $-2.3094$ , $-2.3094$ $\}$ & Neg. Stable \\ \hline
$2/\sqrt{3}$ & [ $-1/\sqrt{3}$ $-1/\sqrt{3}$ $\phantom{-}1/\sqrt{3}$ ] & $\{$ $-2.3094$ , $-2.3094$ $\}$ & Neg. Stable \\ \hline
\end{tabular}
}\\
\subfloat[Eigenpairs for $\T{A} \in \RT{4}{2}$ from \Eqn{simple2}.]{
\label{tab:simple2}
\footnotesize
\begin{tabular}{|c|c|c|c|} \hline
$\lambda$ & $\V{x}$ & eigenvalues of $C(\lambda,\V{x})$ & Type \\ \hline
$1$ & [ $1$ $0$ ] & $\{$ $-1$ $\}$ & Neg. Stable \\ \hline
$-1$ & [ $0$ $1$ ] & $\{$ $1$ $\}$ & Pos. Stable \\ \hline
\end{tabular}
} \end{table}
\begin{figure}
\caption{Spectral radii of the Jacobian $J(\V{x};\alpha)$ for
different eigenpairs as $\alpha$ varies. }
\label{fig:jac_simple1}
\label{fig:jac_simple2}
\end{figure}
For \Eqn{simple1}, we ran 100 trials with $\alpha=0$, and none converged, as expected per \Fig{jac_simple1}. The results of 100 random trials with $\alpha=12$ (the ``conservative choice'') are shown in \Tab{simple1-12}, in which case every trial converged to one of the 4 negative stable eigenpairs. (Note that $2/\sqrt{3} \approx 1.1547$ and $1/\sqrt{3} \approx 0.5774$.)
\Tab{simple1-1} shows the results of 100 random trials with $\alpha=1$. As expected (per \Fig{jac_simple1}), the convergence is much faster. \begin{table}[htbp]
\centering
\caption{Eigenpairs for $\T{A}\in\RT{3}{3}$ from \Eqn{simple1}
computed by SS-HOPM\@.}
\label{tab:simple1-results}
\subfloat[$\alpha=12$]{
\label{tab:simple1-12}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
22 & $\phantom{-}1.1547$ & [ $-0.5774$ $\phantom{-}0.5774$ $-0.5774$ ] & 92 \\ \hline
18 & $\phantom{-}1.1547$ & [ $\phantom{-}0.5774$ $\phantom{-}0.5774$ $\phantom{-}0.5774$ ] & 90 \\ \hline
29 & $\phantom{-}1.1547$ & [ $-0.5774$ $-0.5774$ $\phantom{-}0.5774$ ] & 91 \\ \hline
31 & $\phantom{-}1.1547$ & [ $\phantom{-}0.5774$ $-0.5774$ $-0.5774$ ] & 94 \\ \hline
\end{tabular}
}\\
\subfloat[$\alpha=1$]{
\label{tab:simple1-1}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
22 & $\phantom{-}1.1547$ & [ $\phantom{-}0.5774$ $-0.5774$ $-0.5774$ ] & 9 \\ \hline
25 & $\phantom{-}1.1547$ & [ $-0.5774$ $\phantom{-}0.5774$ $-0.5774$ ] & 9 \\ \hline
26 & $\phantom{-}1.1547$ & [ $\phantom{-}0.5774$ $\phantom{-}0.5774$ $\phantom{-}0.5774$ ] & 9 \\ \hline
27 & $\phantom{-}1.1547$ & [ $-0.5774$ $-0.5774$ $\phantom{-}0.5774$ ] & 9 \\ \hline
\end{tabular}
} \end{table}
For \Eqn{simple2}, we ran 100 trials with $\alpha=0.5$ (\Tab{simple2-p}) and 100 trials with $\alpha=-0.5$ (\Tab{simple2-n}). We find the negative stable and positive stable eigenvalues as expected.
\begin{table}[h]
\centering
\caption{Eigenpairs for $\T{A}\in\RT{4}{2}$ from \Eqn{simple2}
computed by SS-HOPM\@.}
\label{tab:simple2-results}
\subfloat[$\alpha=0.5$]{
\label{tab:simple2-p}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
100 & $\phantom{-}1.0000$ & [ $-1.0000$ $\phantom{-}0.0000$ ] & 16 \\ \hline
\end{tabular}
}\\
\subfloat[$\alpha=-0.5$]{
\label{tab:simple2-n}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
\# Occurrences & $\lambda$ & $\V{x}$ & Median Its. \\ \hline
100 & $-1.0000$ & [ $-0.0000$ $\phantom{-}1.0000$ ] & 16 \\ \hline
\end{tabular}
} \end{table}
\end{document} | arXiv |
\begin{document}
\title{Lipschitz interpolative nonlinear ideal procedure}
\begin{abstract} We treat the general theory of nonlinear ideals and extend as many notions as possible from the linear theory to the nonlinear setting. We define nonlinear ideals with special properties that associate new nonlinear ideals to given ones and establish several properties and characterizations of them. Building upon the results of U. Matter, we define a Lipschitz interpolative nonlinear ideal procedure between metric spaces and Banach spaces, establish that this class of Lipschitz operators forms an injective Banach nonlinear ideal, and prove several basic properties of it. Extending the work of J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez, we define Lipschitz $\left(p,\theta, q, \nu\right)$-dominated operators for $1\leq p, q <\infty$ and $0\leq \theta, \nu< 1$ and establish several characterizations. Afterwards we generalize the notion of a Lipschitz interpolative nonlinear ideal procedure to arbitrary metric spaces and prove that it yields a nonlinear ideal. Finally, we present some basic counterexamples for the Lipschitz interpolative nonlinear ideal procedure between arbitrary metric spaces. \end{abstract}
2010 AMS Subject Classification. Primary 47L20; Secondary 26A16, 47A57.
\section{Notations and Preliminaries}\label{Sec. 1}
We introduce the concepts and notations that will be used in this article. The letters $E$, $F$ and $G$ will denote Banach spaces. The closed unit ball of a Banach space $E$ is denoted by $B_{E}$. The dual space of $E$ is denoted by $E^{*}$. The class of all bounded linear operators between arbitrary Banach spaces will be denoted by $\mathfrak{L}$. The symbols $\mathbb{K}$ and $\mathbb{N}$ stand for the scalar field and the set of all natural numbers, respectively. The symbols $W(B_{E^{*}})$ and $W(B_{X^{\#}})$ stand for the set of all Borel probability measures defined on $B_{E^{*}}$ and $B_{X^{\#}}$, respectively. The value of a functional $a\in E^{*}$ at an element $x\in E$ is denoted by $\left\langle x, a\right\rangle$. We put $E^{\text{inj}}:=\ell_{\infty}(B_{E^{*}})$ and $J_{E} x:=\left(\left\langle x, a\right\rangle\right)_{a\in B_{E^{*}}}$ for $x\in E$. Clearly $J_{E}$ is a metric injection from $E$ into $E^{\text{inj}}$. Let $0<p<\infty$. The Banach space of all absolutely $p$-summable sequences $\mathbf{x}=(x_j)_{j\in \mathbb{N}}$, where $x_j\in E$, is denoted by $\ell_{p}(E)$. We put
$$\left\|\mathbf{x}\Big|\ell_{p}(E)\right\|=\Bigg[\sum\limits_{j=1}^{ \infty}\left\|x_j\right\|^{p}\Bigg]^{\frac{1}{p}}<\infty.$$
The Banach space of all weakly absolutely $p$-summable sequences $\mathbf{x}\subset E$, is denoted by $\ell_{p}^{w}(E)$. We put \begin{equation}
\left\|\mathbf{x}\Big|\ell_{p}^{w}(E)\right\|=\sup\limits_{a\in B_{E^{*}}}\Bigg[\sum\limits_{j=1}^{\infty}\left|\left\langle x_j, a\right\rangle\right|^{p}\Bigg]^{\frac{1}{p}}. \end{equation}
For the triple sequence $(\sigma, x', x'')\subset\mathbb{R}\times X\times X$. We put
$$\left\|(\sigma,x',x'')\Big|\ell_p(\mathbb{R}\times X\times X)\right\|=\Bigg[\sum\limits_{j=1}^{\rm\infty}\left|\sigma_j\right|^{p} d_X(x'_j,x''_j)^{p}\Bigg]^{\frac{1}{p}}.$$
And
$$\left\|(\sigma,x',x'')\Big|\ell_p^{L,w}(\mathbb{R}\times X\times X)\right\|=\sup\limits_{f\in B_{{X}^{\#}}}\Bigg[\sum\limits_{j=1}^{\infty}\left|\sigma_j\right|^{p}\left|\left\langle f,x'_j\right\rangle-\left\langle f,x''_j\right\rangle\right|^{p}\Bigg]^{\frac{1}{p}}. $$
For $0\leq \theta <1$ and $1\leq p<\infty$ we define
$$\left\| \mathbf{x} \Big|\delta_{p,\theta} (E)\right\| =\underset{\xi \in B_{E^{\ast }}}{\sup }\left( \underset{j=1}{\overset{\infty }{\sum }}\left( \left|\left\langle x_{j} , \xi \right\rangle\right|^{1-\theta }\left\Vert x_{j}\right\Vert ^{\theta }\right) ^{\frac{p}{1-\theta }}\right) ^{\frac{1-\theta }{p}}.$$
Also for all sequences $(\sigma ,x',x'')\subset\mathbb{R}\times X\times X$, we define
$$\left\|(\sigma,x',x'')\Big|\delta_{p,\theta}^{L}(\mathbb{R}\times X\times X)\right\|=\sup\limits_{f\in B_{X^{\#}}}\left[ \sum\limits_{j=1}^{\infty }\left( \left\vert \sigma _{j}\right\vert \left\vert f(x_{j}^{\prime })-f(x_{j}^{\prime \prime })\right\vert ^{1-\theta }d_{X}(x_{j}^{\prime },x_{j}^{\prime \prime })^{\theta }\right) ^{\frac{p}{1-\theta }}\right] ^{ \frac{1-\theta }{p}} $$
Recall the definition of an operator ideal between arbitrary Banach spaces due to A. Pietsch \cite{P07} and \cite{P87}. Suppose that, for every pair of Banach spaces $E$ and $F$, we are given a subset $\mathfrak{A}(E,F)$ of $\mathfrak{L}(E,F)$. The class $$\mathfrak{A}:=\bigcup_{E,F}\mathfrak{A}(E,F)$$ is said to be an operator ideal, or just an ideal, if the following conditions are satisfied:
\begin{enumerate}\label{Auto5}
\item[$\bf (OI_0)$] $a^{*}\otimes e\in\mathfrak{A}(E,F)$ for $a^{*}\in E^{*}$ and $e\in F$.
\item[$\bf (OI_1)$] $S + T\in\mathfrak{A}(E,F)$ for $S,\: T\in\mathfrak{A}(E,F)$.
\item[$\bf (OI_2)$] $BTA\in\mathfrak{A}(E_{0},F_{0})$ for $A\in \mathfrak{L}(E_{0},E)$, $T\in\mathfrak{A}(E,F)$, and $B\in\mathfrak{L}(F,F_{0})$. \end{enumerate}
Condition $\bf (OI_0)$ implies that $\mathfrak{A}$ contains nonzero operators.
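As a simple illustration of this definition (a standard example, recorded here only for convenience), the class $\mathfrak{L}$ of all bounded linear operators is itself an operator ideal, and the class $\mathfrak{F}$ of all finite rank operators, i.e.\ the operators of the form $T=\sum_{j=1}^{m} a_{j}^{*}\otimes e_{j}$ with $a_{j}^{*}\in E^{*}$ and $e_{j}\in F$, is the smallest one: conditions $\bf (OI_0)$ and $\bf (OI_1)$ show that every ideal contains all finite rank operators, while the identity $B\left(a^{*}\otimes e\right)A=\left(A^{*}a^{*}\right)\otimes\left(Be\right)$ gives $\bf (OI_2)$ for $\mathfrak{F}$.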
\begin{remark} The normed (Banach) operator ideal is designated by $\left[\textfrak{A} , \mathbf{A} \right]$. \end{remark}
\section{Introduction}\label{Sec. 2}
One important example of operator ideals is the class of $p$-summing operators defined by A. Pietsch \cite{P78} as follows: a bounded operator $T$ from $E$ into $F$ is called $p$-summing if and only if there is a constant $C\geq 0$ such that \begin{equation}\label{gfgfgfgfgjh20}
 \left\| (T x_{j})_{j=1}^{m}\Big|\ell_p(F)\right\|\leq C\cdot\left\|\left( x_{j}\right)_{j=1}^{m}\Big|\ell_p^{w}(E)\right\| \end{equation} for every finite sequence $\left(x_{j}\right)_{j=1}^{m}$ in $E$ and every $m\in\mathbb{N}$. Let us denote by $\Pi_{p}(E,F)$ the class of all $p$-summing operators from $E$ into $F$; the $p$-summing norm $\pi_{p}(T)$ is the infimum of such constants $C$.
J. D. Farmer and W. B. Johnson \cite{J09} defined a true extension of the linear concept of $p$-summing operators as follows: a Lipschitz operator $T\in \Lip(X,Y)$ is called a Lipschitz $p$-summing map if there is a nonnegative constant $C$ such that for all $m\in\mathbb{N}$ and all finite sequences $x'$, $x''$ in $X$ and $\lambda$ in $\mathbb{R}^{+}$, the inequality
$$\left\|(\lambda, Tx',Tx'')\Big|\ell_p(\mathbb{R}\times Y\times Y)\right\|\leq C\cdot\left\|(\lambda,x',x'')\Big|\ell_p^{L,w}(\mathbb{R}\times X\times X)\right\| $$ holds. Let us denote by $\Pi_{p}^{L}(X, Y)$ the class of all Lipschitz $p$-summing maps from $X$ into $Y$; the Lipschitz $p$-summing norm $\pi_{p}^{L}(T)$ is the infimum of such constants $C$.
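As a simple illustration (a routine verification, included here only as a sketch), every $f\in X^{\#}$ is Lipschitz $p$-summing with $\pi_{p}^{L}(f)=\Lip(f)$: if $f\neq 0$, then $g:=f/\Lip(f)\in B_{X^{\#}}$ and
$$\Bigg[\sum\limits_{j=1}^{m}\left|\lambda_j\right|^{p}\left|f(x'_j)-f(x''_j)\right|^{p}\Bigg]^{\frac{1}{p}}=\Lip(f)\Bigg[\sum\limits_{j=1}^{m}\left|\lambda_j\right|^{p}\left|g(x'_j)-g(x''_j)\right|^{p}\Bigg]^{\frac{1}{p}}\leq\Lip(f)\cdot\left\|(\lambda,x',x'')\Big|\ell_p^{L,w}(\mathbb{R}\times X\times X)\right\|,$$
so $\pi_{p}^{L}(f)\leq\Lip(f)$; the converse inequality follows by applying the defining inequality to a single pair $x', x''$ with weight $\lambda=1$.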
Jarchow and Matter \cite{JM88} defined a general interpolation procedure for creating new operator ideals between arbitrary Banach spaces. Moreover, U. Matter defined in his seminal paper \cite{Matter87} a new interpolative ideal procedure as follows: let $0\leq \theta< 1$ and $\left[\textfrak{A}, \mathbf{A}\right]$ be a normed operator ideal. A bounded operator $T$ from $E$ into $F$ belongs to $\textfrak{A}_{\theta}(E, F)$ if there exist a Banach space $G$ and a bounded operator $S\in\textfrak{A} (E, G)$ such that \begin{equation}\label{prooor}
\left\|Tx|F\right\|\leq\left\|Sx |G\right\|^{1-\theta}\cdot \left\|x\right\|^{\theta},\ \ \forall\: x \in E. \end{equation} For each $T\in\textfrak{A} _{\theta}(E, F)$, we set
\begin{equation}\label{poorooor} \mathbf{A}_{\theta}(T):=\inf\mathbf{A}(S)^{1-\theta} \end{equation} where the infimum is taken over all bounded operators $S$ admitted in (\ref{prooor}).
\begin{prop} \cite {Matter87} $\left[\textfrak{A} _{\theta}, \mathbf{A}_{\theta}\right]$ is an injective complete quasinormed operator ideal. \end{prop}
U. Matter \cite {Matter87} applied Inequality (\ref{prooor}) to the ideal $\left[\Pi_p, \pi_p\right]$ of absolutely $p$-summing operators and obtained the injective operator ideal $\left(\Pi_{p}\right)_{\theta}$ which is complete with respect to the ideal norm $\left(\pi_{p}\right)_{\theta}$ and established the fundamental theorem of $(p,\theta)$-summing operators for $1\leq p<\infty$ and $0\leq \theta <1$ as follows:
\begin{thm}\cite {Matter87} Let $T$ be a bounded operator from $E$ into $F$ and $C\geq 0$. The following are equivalent: \begin{enumerate}
\item $T\in\left(\Pi_{p}\right)_{\theta}(E, F)$.
\item There exist a constant $C$ and a probability measure $\mu$ on $B_{E^{*}}$ such that
$$\left\|T x | F\right\| \leq C \cdot\left(\int\limits_{B_{E^{*}}}\left(\left|\left\langle x, x^{*}\right\rangle\right|^{1-\theta} \left\|x\right\|^{\theta}\right)^{\frac{p}{1-\theta}} d\mu(x^{*})\right)^{\frac{1-\theta}{p}}, \forall \;x\in E.$$
\item There exists a constant $C\geq 0$ such that for any $\left( x_{j}\right)_{j=1}^{m}\subset E$ and $m\in\mathbb{N}$ we have $$\left\Vert \left( Tx_{j}\right)_{j=1}^{m}\Big|\ell_{\frac{p}{1-\theta }}(F)\right\Vert \leq C\cdot \left\|\left( x_{j}\right)_{j=1}^{m}\Big|\delta_{p,\theta}(E)\right\|.$$
\end{enumerate} In addition, $\left(\pi_{p}\right)_{\theta}(T)$ is the smallest number $C$ for which, respectively, (2) and (3) hold. \end{thm}
Another example of operator ideals is the class of $(r,p, q)$-summing operators defined by A. Pietsch \cite [Sec. 17.1.1] {P78} as follows: Let $0<r, p, q\leq\infty$ and $\frac{1}{r}\leq\frac{1}{p}+ \frac{1}{q}$. A bounded operator $T$ from $E$ into $F$ is called $(r,p, q)$-summing if there is a constant $C\geq 0$ such that \begin{equation}\label{20}
\left\| \left( \left\langle T x_{j}, b_{j}\right\rangle\right)_{j=1}^{m}\Big|\ell_r \right\Vert\leq C\cdot\left\|\left( x_{j}\right)_{j=1}^{m}\Big|\ell_p^{w}(E)\right\| \left\|\left( b_{j}\right)_{j=1}^{m}\Big|\ell_q^{w}(F^{*})\right\| \end{equation} for arbitrary finite sequences $\left(x_{j}\right)_{j=1}^{m}$ in $E$ and $\left(b_{j}\right)_{j=1}^{m}$ in $F^{*}$, $m\in\mathbb{N}$. Let us denote by $\mathfrak{P}_{(r, p, q)}(E,F)$ the class of all $(r, p, q)$-summing operators from $E$ into $F$; the $(r,p,q)$-summing norm $\textbf{P}_{(r, p, q)}(T)$ is the infimum of such constants $C$.
\begin{prop} \cite [Sec. 17.1.2] {P78} $\left[\mathfrak{P}_{(r, p, q)}, \textbf{P}_{(r, p, q)}\right]$ is a normed operator ideal. \end{prop}
Let $0<p, q\leq\infty$. A. Pietsch \cite{P78} also defined $(p, q)$-dominated operators as follows: a bounded operator $T$ from $E$ into $F$ is called $(p, q)$-dominated if it belongs to the quasi-normed ideal $$\left[\mathcal{D}_{\left(p, q\right)}, D_{\left(p, q\right)}\right]:=\left[\mathfrak{P}_{(r, p, q)}, \textbf{P}_{(r, p, q)}\right],$$ where $\frac{1}{r}=\frac{1}{p} + \frac{1}{q}$. In the special case $q=\infty$ one has $\left[\mathcal{D}_{\left(p, \infty\right)}, D_{\left(p, \infty\right)}\right]=\left[\Pi_{p}, \pi_{p}\right]$.
J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez \cite{ms93} established the following important characterization of $(p, q)$-dominated operators.
\begin{prop} \cite {ms93} Let $E$ and $F$ be Banach spaces and $T\in\mathfrak{L}(E,F)$. The following are equivalent:
\begin{enumerate}
\item $T\in\mathcal{D}_{\left(p, q\right)}(E,F)$.
\item There exist Banach spaces $G$ and $H$, bounded operators $S_{1}\in\Pi_{p}(E, G)$ and $S_{2}\in\Pi_{q}(F^{*}, H)$, and $C>0$ such that \begin{equation}
 \left|\left\langle Tx, b\right\rangle\right|\leq C \left\|S_{1} x\right\| \left\|S_{2} b\right\|, \ \ \forall\; x \in E, \ \forall\; b\in F^{*}. \end{equation} \end{enumerate} \end{prop}
A more general class of $(p, q)$-dominated operators was also defined by J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez \cite{ms93} as follows: let $1\leq p, q <\infty$ and $0\leq \theta, \nu< 1$ be such that $\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}=1$ with $1\leq r <\infty$. A bounded operator $T$ from $E$ to $F$ is called $\left(p,\theta, q, \nu\right)$-dominated if there exist Banach spaces $G$ and $H$, a bounded operator $S\in\Pi_{p} (E,G)$, a bounded operator $R\in\Pi_{q}(F^{*},H)$, and a positive constant $C$ such that \begin{equation}\label{lastrrrwagen1}
 \left|\left\langle Tx, b^{*}\right\rangle\right|\leq C\cdot \left\|x\right\|^{\theta}\left\|Sx | G\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|R(b^{*})|H\right\|^{1-\nu} \end{equation} for all $x\in E$ and $b^{*}\in F^{*}$.
Let us denote by $\mathcal{D}_{\left(p, \theta, q, \nu\right)} (E,F)$ the class of all $\left(p, \theta, q, \nu\right)$-dominated operators from $E$ to $F$, with $$D_{\left(p, \theta, q, \nu\right)} (T)=\inf\left\{C \cdot\pi_{p} (S)^{1-\theta}\cdot \pi_{q}(R)^{1-\nu}\right\},$$ where the infimum is taken over all bounded operators $S$ and $R$ and all constants $C$ admitted in (\ref{lastrrrwagen1}). They also established the following important characterization of $\left(p,\theta, q, \nu\right)$-dominated operators.
\begin{thm} Let $E$ and $F$ be Banach spaces and $T\in\mathfrak{L}(E,F)$. The following are equivalent:
\begin{enumerate}
\item [$\bf (1)$] $T\in\mathcal{D}_{\left(p, \theta, q, \nu\right)} (E,F)$.
\item [$\bf (2)$] There is a constant $C\geq 0$ and regular probabilities $\mu$ and $\tau$ on $B_{E^{*}}$ and $B_{F^{**}}$, respectively such that for every $x$ in $X$ and $b^{*}$ in $F^{*}$ the following inequality holds
$$\left|\left\langle Tx, b^{*}\right\rangle\right|\leq C \cdot\left[\int\limits_{B_{E^{*}}}\left(\left| \left\langle x, a\right\rangle\right|^{1-\theta} \left\|x\right\|^{\theta}\right)^\frac{p}{1-\theta} d\mu(a)\right]^\frac{1-\theta}{p}
\cdot\left[\int\limits_{B_{F^{**}}}\left(\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{1-\nu} \left\|b^{*}\right\|^{\nu}\right)^\frac{q}{1-\nu} d\tau(b^{**})\right]^\frac{1-\nu}{q}. $$ \item[$\bf (3)$] There exists a constant $C\geq 0$ such that for all finite sequences $\textbf{x}$ in $E$ and $b^{*}\subset F^{*}$ the inequality \begin{equation}
 \left\| \left\langle Tx, b^{*}\right\rangle|\ell_{r'}\right\|\leq C\cdot\left\|\textbf{x}\Big|\delta_{p,\theta} (E)\right\|\left\| b^{*} \Big|\delta_{q,\nu}(F^{*})\right\| \end{equation} holds. \item[$\bf (4)$] There are a Banach space $G$, a bounded operator $A\in\left(\Pi_{p}\right)_{\theta}(E,G)$ and a bounded operator $B\in\mathfrak{L}(G,F)$ such that $B^{*}\in\left(\Pi_{q}\right)_{\nu}(F^{*}, G^{*})$ and $T=BA$. \end{enumerate} In this case, $D_{\left(p, \theta, q, \nu\right)}(T)$ is equal to the infimum of such constants $C$ in either $\bf (2)$ or $\bf (3)$.
\end{thm}
We now describe the contents of this paper. In Section \ref{Sec. 1}, we introduce the notations and preliminaries that will be used throughout. In Section \ref{Sec. 2}, we first recall those operators that map weakly (Lipschitz) $p$-summable sequences in an arbitrary Banach (metric) space into strongly (Lipschitz) $p$-summable ones; these operators are the (Lipschitz) $p$-summing operators defined by A. Pietsch \cite{P78} and by J. D. Farmer and W. B. Johnson \cite{J09}, respectively. Jarchow and Matter \cite{JM88} defined a general interpolation procedure to create a new ideal from given ideals, and U. Matter introduced a new interpolative ideal procedure in his seminal paper \cite{Matter87}, where he established the fundamental characterization of $(p,\theta)$-summing operators for $1\leq p<\infty$ and $0\leq \theta <1$. For $0<p, q\leq\infty$, A. Pietsch \cite{P78} defined $(p, q)$-dominated operators between arbitrary Banach spaces, and J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez \cite{ms93} established the fundamental characterization of $(p, q)$-dominated operators. Afterwards, a more general class of $(p, q)$-dominated operators was defined by J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez \cite{ms93}; these operators are called $\left(p,\theta, q, \nu\right)$-dominated for $1\leq p, q <\infty$ and $0\leq \theta, \nu< 1$ such that $\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}=1$ with $1\leq r <\infty$, and they proved an important characterization of them. In Section \ref{Sec. 3}, we treat the general theory of nonlinear operator ideals. The basic idea is to extend as many notions as possible from the linear theory to the nonlinear theory. Therefore, we start by recalling the fundamental concept of an operator ideal defined by A. Pietsch \cite{P78}, see also \cite{P07}.
Then, we introduce the corresponding definitions for nonlinear operator ideals in a version close to that of A. Jim{\'e}nez-Vargas, J. M. Sepulcre, and Mois{\'e}s Villegas-Vallecillos \cite{mjam15}. Afterwards, we define nonlinear ideals with special properties which associate new nonlinear ideals to given ones; again, this is parallel to the linear theory. For $0<p\leq 1$ we also define a Lipschitz $p$-norm on a nonlinear ideal and prove that the injective hull $\textfrak{A}^{L}_{\text{inj}}$ is a $p$-normed nonlinear ideal. We generalize U. Matter's interpolative ideal procedure to its nonlinear (Lipschitz) version between metric spaces and Banach spaces, establish that this class of operators forms an injective Banach nonlinear ideal, and show several basic properties of this class. Extending the work of J. A. L\'{o}pez Molina and E. A. S\'{a}nchez P\'{e}rez, we define Lipschitz $\left(p,\theta, q, \nu\right)$-dominated operators for $1\leq p, q <\infty$ and $0\leq \theta, \nu< 1$ such that $\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}=1$ with $1\leq r <\infty$, establish several characterizations analogous to the linear case of \cite{ms93}, and prove that the class of Lipschitz $\left(p,\theta, q, \nu\right)$-dominated operators is a Banach nonlinear ideal under the Lipschitz $\left(p,\theta, q, \nu\right)$-norm. In Section \ref{Sec. 4}, we define the concept of a nonlinear operator ideal between arbitrary metric spaces, again in a version close to that of \cite{mjam15}. We generalize the notion of a Lipschitz interpolative nonlinear ideal procedure to arbitrary metric spaces and prove that it yields a nonlinear ideal. Finally, we present some basic counterexamples for the Lipschitz interpolative nonlinear ideal procedure between arbitrary metric spaces.
\section{\bf Nonlinear ideals between arbitrary metric spaces and Banach spaces} \label{Sec. 3}
\begin{definition}\label{A7777}
Suppose that, for every pair of metric spaces $X$ and Banach spaces $F$, we are given a subset $\textfrak{A}^{L}(X,F)$ of $\Lip(X,F)$. The class $$\textfrak{A}^{L}:=\bigcup_{X,F}\textfrak{A}^{L}(X,F)$$ is said to be a complete $p$-normed (Banach) nonlinear ideal $\left(0<p\leq 1\right)$, if the following conditions are satisfied:
\begin{enumerate}
\item[$\bf (\widetilde{PNOI_0})$] $g\boxdot e\in\textfrak{A}^{L}(X,F)$ and $\mathbf{A}^{L}\left(g\boxdot e\right)=\Lip(g)\cdot\left\|e\right\|$ for $g\in X^{\#}$ and $e\in F$.
\item[$\bf (\widetilde{PNOI_1})$] $S + T\in\textfrak{A}^{L}(X,F)$ and the $p$-triangle inequality holds:
$$\mathbf{A}^{L}\left(S + T\right)^{p}\leq\mathbf{A}^{L}(S)^{p} + \mathbf{A}^{L}(T)^{p} \ \text{for} \ S,\: T\in\textfrak{A}^{L}(X,F).$$
\item[$\bf (\widetilde{PNOI_2})$] $BTA\in\textfrak{A}^{L}(X_{0},F_{0})$ and $\mathbf{A}^{L}\left(BTA\right)\leq\left\|B\right\|\mathbf{A}^{L}(T)\: \Lip(A)$ for $A\in \Lip(X_{0},X)$, $T\in\textfrak{A}^{L}(X,F)$, and $B\in\mathfrak{L}(F,F_{0})$.
\item[$\bf (\widetilde{PNOI_3})$] All linear spaces $\textfrak{A}^{L}(X,F)$ are complete with respect to $\mathbf{A}^{L}$, which is called a Lipschitz $p$-norm on $\textfrak{A}^{L}$ with values in $\mathbb{R}^{+}$.
\end{enumerate}
\end{definition}
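As a simple illustration of Definition \ref{A7777} (a routine verification which we only sketch), the class $\Lip$ of all Lipschitz operators between metric spaces and Banach spaces, equipped with $\mathbf{A}^{L}:=\Lip(\cdot)$, is itself a Banach nonlinear ideal, and it is the largest one: $\Lip(g\boxdot e)=\Lip(g)\cdot\left\|e\right\|$ because $\left\|(g\boxdot e)x'-(g\boxdot e)x''\right\|=\left|g(x')-g(x'')\right|\left\|e\right\|$ for all $x',x''\in X$; the Lipschitz constant is subadditive and satisfies $\Lip(BTA)\leq\left\|B\right\|\Lip(T)\Lip(A)$; and each component $\Lip(X,F)$ is complete since $F$ is a Banach space.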
\begin{remark}\label{soon}
\begin{description}
\item[$\bf (1)$] If $p=1$, then $\mathbf{A}^{L}$ is simply called a Lipschitz norm and $\left[\textfrak{A}^{L}, \mathbf{A}^{L}\right]$ is said to be a Banach nonlinear ideal.
\item[$\bf (2)$] If $\left[\textfrak{A}^{L}, \mathbf{A}^{L}\right]$ is a normed nonlinear ideal, then $\textfrak{A}^{L}\left(X, \mathbb{R}\right)=X^{\#}$ with $\Lip(g)=\mathbf{A}^{L}(g)$ for all $g\in X^{\#}$.
\end{remark}
\begin{prop}\label{Auto4} Let $\textfrak{A}^{L}$ be a nonlinear ideal. Then all components $\textfrak{A}^{L}(X,F)$ are linear spaces. \end{prop}
\begin{proof} By the condition of $\bf (\widetilde{PNOI_1})$ it remains to show that $T\in\textfrak{A}^{L}(X,F)$ and $\lambda\in\mathbb{K}$ imply $\lambda\cdot T\in\textfrak{A}^{L}(X,F)$. This follows from $\lambda\cdot T=\left(\lambda\cdot I_{F}\right)\circ T\circ I_{X}$ and $\bf (\widetilde{PNOI_2})$. \\ \end{proof}
\begin{prop} If $\left[\textfrak{A}^{L}, \mathbf{A}^{L}\right]$ is a normed nonlinear ideal, then $\Lip(T)\leq\mathbf{A}^{L}(T)$ for all $T\in\textfrak{A}^{L}$. \end{prop}
\begin{proof} Let $T$ be an arbitrary Lipschitz operator in $\textfrak{A}^{L}(X,F)$. \begin{align}
\Lip(T)=\left\|T^{\#}_{|_{{F}^{*}}}\right\|&=\sup\left\{\Lip(T^{\#} b^{*}) : b^{*}\in B_{{F}^{*}}\right\}\nonumber \\ &=\sup\left\{\Lip\left(b^{*}\circ T\right) : b^{*}\in B_{{F}^{*}}\right\} \nonumber \end{align} Now from Remark \ref{soon} we have $\Lip(b^{*}\circ T)=\mathbf{A}^{L}(b^{*}\circ T)$ for $b^{*}\in {F}^{*}$. It follows $$\Lip(T)=\sup\left\{\mathbf{A}^{L}(b^{*}\circ T) : b^{*}\in B_{{F}^{*}}\right\}\leq\mathbf{A}^{L}(T).$$ \end{proof}
\subsection{\bf Nonlinear Ideals with Special Properties}\label{mAuto1818}
\subsubsection{Lipschitz Procedures}
A rule $$\text{new}: \mathfrak{A}\longrightarrow \mathfrak{A}^{L}_{new}$$
which defines a new nonlinear ideal $\mathfrak{A}^{L}_{new}$ for every ideal $\mathfrak{A}$ is called a Lipschitz semi-procedure. A rule $$\text{new} : \textfrak{A}^{L}\longrightarrow \textfrak{A}^{L}_{new}$$
which defines a new nonlinear ideal $\textfrak{A}^{L}_{new}$ for every nonlinear ideal $\textfrak{A}^{L}$ is called a Lipschitz procedure.
\begin{remark} We now define the following special properties:
\begin{enumerate}
\item[$\bf (M')$] If $\textfrak{A}^{L}\subseteq\textfrak{B}^{L}$, then $\textfrak{A}^{L}_{new}\subseteq\textfrak{B}^{L}_{new}$ (strong monotony).
\item[$\bf (M'')$] If $\mathfrak{A}\subseteq\mathfrak{B}$, then $\mathfrak{A}^{L}_{new}\subseteq\mathfrak{B}^{L}_{new}$ (monotony).
\item[$\bf (I)$] $\left(\textfrak{A}^{L}_{new}\right)_{new}=\textfrak{A}^{L}_{new}$ for all $\textfrak{A}^{L}$ (idempotence).
\end{enumerate}
A strong monotone and idempotent Lipschitz procedure is called a Lipschitz hull procedure if $\textfrak{A}^{L}\subseteq\textfrak{A}^{L}_{new}$ for all nonlinear ideals.
\end{remark}
\subsubsection{Closed Nonlinear Ideals}\label{Auto1818} Let $\textfrak{A}^{L}$ be a nonlinear ideal. A Lipschitz operator $T\in \Lip(X,F)$ belongs to the closure $\textfrak{A}^{L}_{clos}$ if there are $T_{1}, T_{2}, T_{3},\cdots\in\textfrak{A}^{L}(X, F)$ with $\lim\limits_{n} \Lip\left(T - T_{n}\right)=0$. It is not difficult to prove the following result.
\begin{prop} $\textfrak{A}^{L}_{clos}$ is a nonlinear ideal. \end{prop}
The following statement is evident.
\begin{prop} The rule $$\text{clos}: \textfrak{A}^{L}\longrightarrow\textfrak{A}^{L}_{clos}$$ is a hull Lipschitz procedure. \end{prop}
\begin{definition} The nonlinear ideal $\textfrak{A}^{L}$ is called closed if $\textfrak{A}^{L}=\textfrak{A}^{L}_{clos}$. \end{definition}
\begin{prop}
Let $\textfrak{G}^{L}$ be the nonlinear ideal of Lipschitz approximable operators. Then $\textfrak{G}^{L}$ is the smallest closed nonlinear ideal. \end{prop}
\begin{proof} By the definition of Lipschitz approximable operators in \cite{JAJM14} we have $\textfrak{G}^{L}=\textfrak{F}^{L}_{clos}$. Hence $\textfrak{G}^{L}$ is closed. Let $\textfrak{A}^{L}$ be a closed nonlinear ideal. Since $\textfrak{F}^{L}$ is the smallest nonlinear ideal, we obtain from the monotonicity of the closure procedure $$ \textfrak{G}^{L}=\textfrak{F}^{L}_{clos} \subseteq \textfrak{A}^{L}_{clos} = \textfrak{A}^{L}.$$ \end{proof}
\subsubsection{Dual Nonlinear Ideals}\label{Auto19}
Let $\mathfrak{A}$ be an ideal. A Lipschitz operator $T\in \Lip(X,F)$ belongs to the Lipschitz dual ideal $\mathfrak{A}^{L}_{dual}$ if $T^{\#}_{|_{{F}^{*}}}\in\mathfrak{A}(F^{*}, X^{\#})$.
\begin{lemma}\label{Auto244}
Let $T$ in $\textfrak{F}^{L}(X, F)$ with $T=\sum\limits_{j=1}^{m} g_{j}\boxdot e_{j}$. Then $T^{\#}_{|_{{F}^{*}}}=\sum\limits_{j=1}^{m} \hat{e}_{j}\otimes g_{j}$, where $e\longmapsto \hat{e}$ is the natural embedding of the space $F$ into its second dual $F^{**}$. \end{lemma}
\begin{proof} We have $Tx=\sum\limits_{j=1}^{m} g_{j}(x) e_{j}$ for $x\in X$. So for $b^{*}\in F^{*}$,
$$\left\langle T^{\#}_{|_{{F}^{*}}} b^{*},x\right\rangle_{(X^{\#},X)}=\left\langle b^{*}, Tx\right\rangle_{(F^{*},F)}=\sum\limits_{j=1}^{m}g_{j}(x) b^{*} (e_{j}).$$ Hence $T^{\#}_{|_{{F}^{*}}} b^{*}=\sum\limits_{j=1}^{m} b^{*}(e_{j}) g_{j}$. This proves the statement for $T^{\#}_{|_{{F}^{*}}}$. \end{proof}
\begin{lemma}\label{Auto16} Let $T, S\in \Lip(X,F)$, $A\in \Lip(X_{0},X)$, and $B\in\mathfrak{L}(F,F_{0})$. Then \begin{enumerate}
\item $\left(T+S\right)^{\#}_{|_{{F}^{*}}}=T^{\#}_{|_{{F}^{*}}} + S^{\#}_{|_{{F}^{*}}}$.
\item $\left(BTA\right)^{\#}_{|_{F^{*}_{0}}}=A^{\#}T^{\#}_{|_{{F}^{*}}}B^{*}$. \end{enumerate}
\end{lemma}
\begin{proof} For $b^{*}\in F^{*}$ and $x\in X$, we have \begin{align}
\left\langle \left(T+S\right)^{\#}_{|_{{F}^{*}}} b^{*},x\right\rangle_{(X^{\#},X)}&=\left\langle b^{*}, (T+S)x\right\rangle_{(F^{*},F)}=\left\langle b^{*}, Tx+Sx\right\rangle_{(F^{*},F)} \nonumber \\ &=\left\langle b^{*}, Tx\right\rangle_{(F^{*},F)} + \left\langle b^{*}, Sx\right\rangle_{(F^{*},F)}\nonumber \\
&=\left\langle T^{\#}_{|_{{F}^{*}}} b^{*},x\right\rangle_{(X^{\#},X)} + \left\langle S^{\#}_{|_{{F}^{*}}} b^{*},x\right\rangle_{(X^{\#},X)}. \nonumber \end{align}
Hence $\left(T+S\right)^{\#}_{|_{{F}^{*}}}=T^{\#}_{|_{{F}^{*}}} + S^{\#}_{|_{{F}^{*}}}$. For $b^{*}_{0}\in F_{0}^{*}$ and $x_{0}\in X_{0}$, we have \begin{align} \left\langle b_{0}^{*}, BTA (x_{0})\right\rangle_{(F_{0}^{*},F_{0})}&=\left\langle b_{0}^{*}, B(TA x_{0})\right\rangle_{(F_{0}^{*},F_{0})}=\left\langle B^{*} b_{0}^{*}, T(A x_{0})\right\rangle_{(F^{*},F)} \nonumber \\
&=\left\langle T^{\#}_{|_{{F}^{*}}} B^{*} b_{0}^{*}, A x_{0}\right\rangle_{(X^{\#}, X)}=\left\langle A^{\#} T^{\#}_{|_{{F}^{*}}} B^{*} b_{0}^{*}, x_{0}\right\rangle_{(X_{0}^{\#}, X_{0})}. \nonumber \end{align}
But also $\left\langle b_{0}^{*}, BTA (x_{0})\right\rangle_{(F_{0}^{*},F_{0})}=\left\langle \left(BTA\right)^{\#}_{|_{F^{*}_{0}}} b_{0}^{*}, x_{0}\right\rangle_{(X_{0}^{\#}, X_{0})}$. Therefore $\left(BTA\right)^{\#}_{|_{F^{*}_{0}}}=A^{\#}T^{\#}_{|_{{F}^{*}}}B^{*}$. \end{proof}
\begin{prop}\label{Auto26} $\mathfrak{A}^{L}_{dual}$ is a nonlinear ideal. \end{prop}
\begin{proof}
The algebraic condition $\bf (\widetilde{PNOI_0})$ is satisfied, from Lemma \ref{Auto244} we obtain $\left(g\boxdot e\right)^{\#}_{|_{{F}^{*}}}=\hat{e}\otimes g\in\mathfrak{A}(F^{*}, X^{\#})$. To prove the algebraic condition $\bf (\widetilde{PNOI_1})$, let $T$ and $S$ in $\mathfrak{A}^{L}_{dual}(X,F)$. Let $T^{\#}_{|_{{F}^{*}}}$ and $S^{\#}_{|_{{F}^{*}}}$ in $\mathfrak{A}(F^{*}, X^{\#})$, from Lemma \ref {Auto16} we have $\left(T+S\right)^{\#}_{|_{{F}^{*}}}=T^{\#}_{|_{{F}^{*}}} + S^{\#}_{|_{{F}^{*}}}\in\mathfrak{A}(F^{*}, X^{\#})$. Let $A\in \Lip(X_{0},X)$, $T\in\mathfrak{A}^{L}_{dual}(X,F)$, and $B\in\mathfrak{L}(F,F_{0})$. Also from Lemma \ref {Auto16} we have $\left(BTA\right)^{\#}_{|_{F^{*}_{0}}}=A^{\#}T^{\#}_{|_{{F}^{*}}}B^{*}\in\mathfrak{A}(F_{0}^{*}, X_{0}^{\#})$, hence the algebraic condition $\bf (\widetilde{PNOI_2})$ is satisfied. \end{proof}
The following proposition is obvious. \begin{prop}\label{tensor2} The rule $$dual: \mathfrak{A}\longrightarrow \left(\mathfrak{A}\right)^{L}_{dual}$$ is a monotone Lipschitz semi-procedure. \end{prop}
\begin{prop}\label{peep98} Let $\textfrak{F}^{L}$ be the nonlinear ideal of Lipschitz finite rank operators, $\mathfrak{F}$ the ideal of finite rank operators, and $(\mathfrak{F})^{L}_{dual}$ the Lipschitz dual nonlinear ideal associated to $\mathfrak{F}$. Then $\textfrak{F}^{L}=(\mathfrak{F})^{L}_{dual}$. \end{prop}
\begin{proof}
Let $T\in\textfrak{F}^{L}(X, F)$, then $T$ can be represented in the form $\sum\limits_{j=1}^{m} g_{j}\boxdot e_{j}$. From Lemma \ref{Auto244} and $E^{*}\otimes F\equiv\mathfrak{F}(E, F)$ we have $T^{\#}_{|_{{F}^{*}}}=\sum\limits_{j=1}^{m}\hat{e}_{j}\otimes g_{j}\in F^{**}\otimes X^{\#}\equiv\mathfrak{F}(F^{*}, X^{\#})$. Hence $T\in\mathfrak{F}^{L}_{dual}(X, F)$.
Let $T\in\mathfrak{F}^{L}_{dual}(X, F)$ then $T^{\#}_{|_{{F}^{*}}}\in\mathfrak{F}(F^{*}, X^{\#})$ hence $T^{\#}_{|_{{F}^{*}}}$ can be represented in the form $\sum\limits_{j=1}^{m} \hat{e}_{j}\otimes g_{j}$. For $b^{*}\in F^{*}$ and $x\in X$, we have \begin{align}
\left\langle b^{*}, T x\right\rangle_{(F^{*},F)}&=\left\langle T^{\#}_{|_{{F}^{*}}} b^{*}, x\right\rangle_{(X^{\#}, X)}=\left\langle \sum\limits_{j=1}^{m} \hat{e}_{j}\otimes g_{j}\ (b^{*}), x\right\rangle_{(X^{\#}, X)}=\left\langle \sum\limits_{j=1}^{m} \hat{e}_{j}(b^{*})\cdot g_{j}\ , x\right\rangle_{(X^{\#}, X)} \nonumber \\ &=\left\langle \sum\limits_{j=1}^{m} b^{*}({e}_{j})\cdot g_{j}\ , x\right\rangle_{(X^{\#}, X)}=\sum\limits_{j=1}^{m} g_{j}(x)\cdot b^{*}(e_{j})=\left\langle b^{*}, \sum\limits_{j=1}^{m} g_{j}\boxdot e_{j} \ (x)\right\rangle_{(F^{*},F)}. \nonumber \end{align} Hence $T=\sum\limits_{j=1}^{m} g_{j}\boxdot e_{j}\in\textfrak{F}^{L}(X, F)$. \end{proof}
\subsubsection{Injective Nonlinear Ideals} Let $\textfrak{A}^{L}$ be a nonlinear ideal. A Lipschitz operator $T\in \Lip(X,F)$ belongs to the injective hull $\textfrak{A}^{L}_{inj}$ if $J_{F}T\in\textfrak{A}^{L}(X, F^{inj})$.
\begin{prop}\label{Auto29} $\textfrak{A}^{L}_{inj}$ is a nonlinear ideal. \end{prop}
\begin{proof} The algebraic condition $\bf (\widetilde{PNOI_0})$ is satisfied, since $g\boxdot e\in\textfrak{A}^{L}(X, F)$ and using nonlinear composition ideal property we have $J_{F}(g\boxdot e)\in\textfrak{A}^{L}(X, F^{inj})$. To prove the algebraic condition $\bf (\widetilde{PNOI_1})$, let $T$ and $S$ in $\textfrak{A}^{L}_{inj}(X,F)$. Then $J_{F}T$ and $J_{F}S$ in $\textfrak{A}^{L}(X, F^{inj})$, we have $J_{F}(T+S)=J_{F}T + J_{F}S\in\textfrak{A}^{L}(X, F^{inj})$. Let $A\in \Lip(X_{0},X)$, $T\in\textfrak{A}^{L}_{inj}(X, F)$, and $B\in\mathfrak{L}(F,F_{0})$. Since $F_{0}^{inj}$ has the extension property, there exists $B^{inj}\in\mathfrak{L}(F^{inj},F_{0}^{inj})$ such that $$ \begin{tikzcd}[row sep=5.0em, column sep=5.0em] X \arrow{r}{T} & F \arrow{r}{J_F} \arrow{d}{B} & F^{inj} \arrow{d}{B^{inj}} \\ X_0 \arrow{r}{BTA} \arrow{u}{A} & F_0 \arrow{r}{J_{F_0}} & F_0^{inj} \\ \end{tikzcd}
$$ Consequently $J_{F_{0}}\left(BTA\right)=B^{inj}\left(J_{F}T\right)A\in\textfrak{A}^{L}$, hence the algebraic condition $\bf (\widetilde{PNOI_2})$ is satisfied. \end{proof}
\begin{lemma}\label{Auto21} Let $F$ be a Banach space possessing the extension property. Then $\textfrak{A}^{L}(X, F)=\textfrak{A}^{L}_{inj}(X, F)$. \end{lemma}
\begin{proof} By hypothesis there exists $B\in\mathfrak{L}(F^{inj},F)$ such that $BJ_{F}=I_{F}$. Therefore $T\in\textfrak{A}^{L}_{inj}(X, F)$ implies that $T=B\left(J_{F}T\right)\in\textfrak{A}^{L}(X, F)$. This proves that $\textfrak{A}^{L}_{inj}\subseteq\textfrak{A}^{L}$. The converse inclusion is obvious. \end{proof}
\begin{prop}\label{Auto 14} The rule $$inj: \textfrak{A}^{L}\longrightarrow\textfrak{A}^{L}_{inj}$$ is a hull Lipschitz procedure. \end{prop}
\begin{proof} The property $\bf (M')$ is obvious. To show the idempotence, let $T\in \Lip(X,F)$ belong to $\left(\textfrak{A}^{L}_{inj}\right)_{inj}$. Then $J_{F}T\in\textfrak{A}^{L}_{inj}(X, F^{inj})$, and the preceding lemma implies $J_{F}T\in\textfrak{A}^{L}(X, F^{inj})$. Consequently $T\in\textfrak{A}^{L}_{inj}(X, F)$. Thus $\left(\textfrak{A}^{L}_{inj}\right)_{inj}\subseteq\textfrak{A}^{L}_{inj}$. The converse inclusion is trivial. \end{proof}
\subsection{Minimal Nonlinear Ideals}\label{Auto3030} Let $\mathfrak{A}$ be an ideal. A Lipschitz operator $T\in \Lip(X,F)$ belongs to the associated minimal ideal $(\mathfrak{A})^{L}_{min}$ if $T=BT_{0}A$, where $B\in\mathfrak{G}(F_{0}, F)$, $T_{0}\in\mathfrak{A}(G_{0}, F_{0})$, and $A\in\textfrak{G}^{L}(X, G_{0})$. In other words, $(\mathfrak{A})^{L}_{min}:=\mathfrak{G}\circ\mathfrak{A}\circ\textfrak{G}^{L}$, where $\mathfrak{G}$ denotes the ideal of approximable operators between arbitrary Banach spaces.
\begin{prop}\label{Auto30} $(\mathfrak{A})^{L}_{min}$ is a nonlinear ideal. \end{prop}
\begin{proof} The algebraic condition $\bf (\widetilde{PNOI_0})$ is satisfied, since the elementary Lipschitz tensor $g\boxdot e$ admits a factorization $$g\boxdot e : X\stackrel{g\boxdot 1}{\longrightarrow} \mathbb{K}\stackrel{1\otimes 1}{\longrightarrow}\mathbb{K}\stackrel{1\otimes e}{\longrightarrow} F,$$ where $1\otimes e\in\mathfrak{G}\left(\mathbb{K}, F\right)$, $1\otimes 1\in\mathfrak{A}\left(\mathbb{K}, \mathbb{K}\right)$, and $g\boxdot 1\in\textfrak{G}^{L}\left(X, \mathbb{K}\right)$. To prove the algebraic condition $\bf (\widetilde{PNOI_1})$, let $T_{i}\in\mathfrak{G}\circ\mathfrak{A}\circ\textfrak{G}^{L}(X, F)$. Then $T_{i}=B_{i}T_{0}^{i} A_{i}$, where $B_{i}\in\mathfrak{G}(F_{0}^{i}, F)$, $T_{0}^{i}\in\mathfrak{A}(G_{0}^{i}, F_{0}^{i})$, and $A_{i}\in\textfrak{G}^{L}(X, G_{0}^{i})$. Put $B:=B_{1}\circ Q_{1} + B_{2}\circ Q_{2}$, $T_{0}:=\tilde{J}_{1}\circ T_{0}^{1}\circ \tilde{Q}_{1} + \tilde{J}_{2}\circ T_{0}^{2}\circ \tilde{Q}_{2}$, and $A:=J_{1}\circ A_{1} + J_{2}\circ A_{2}$. Now $T_{1} + T_{2}= B\circ T_{0}\circ A$, $B\in\mathfrak{G}(F_{0}, F)$, $T_{0}\in\mathfrak{A}(G_{0}, F_{0})$, and $A\in\textfrak{G}^{L}(X, G_{0})$ imply $T_{1} + T_{2}\in\mathfrak{G}\circ\mathfrak{A}\circ\textfrak{G}^{L}(X, F)$. Let $A\in \Lip(X_{0},X)$, $T\in\mathfrak{G}\circ\mathfrak{A}\circ\textfrak{G}^{L}(X, F)$, and $B\in\mathfrak{L}(F,R_{0})$. Then $T$ admits a factorization $$T: X\stackrel{\widetilde{A}}{\longrightarrow} G_{0}\stackrel{T_{0}}{\longrightarrow} F_{0}\stackrel{\widetilde{B}}{\longrightarrow} F,$$ where $\widetilde{B}\in\mathfrak{G}(F_{0}, F)$, $T_{0}\in\mathfrak{A}(G_{0}, F_{0})$, and $\widetilde{A}\in\textfrak{G}^{L}(X, G_{0})$. To show that $BTA\in\mathfrak{G}\circ\mathfrak{A}\circ\textfrak{G}^{L}(X_{0},R_{0})$. By using the linear and nonlinear composition ideal properties, we obtain $B\circ\widetilde{B}\in\mathfrak{G}\left(F_{0}, R_{0}\right)$ and $\widetilde{A}\circ A\in\textfrak{G}^{L}\left(X_{0}, G_{0}\right)$. Hence the Lipschitz operator $BTA$ admits a factorization $$BTA: X_{0}\stackrel{\widetilde{\widetilde{A\,}}}{\longrightarrow} G_{0}\stackrel{T_{0}}{\longrightarrow} F_{0}\stackrel{\widetilde{\widetilde{B\,}}}{\longrightarrow} R_{0},$$ where $\widetilde{\widetilde{B\,}}=B\circ\widetilde{B}$ and $\widetilde{\widetilde{A\,}}=\widetilde{A}\circ A$, hence the algebraic condition $\bf (\widetilde{PNOI_2})$ is satisfied. \end{proof}
\begin{prop}\label{Auto31} The rule $$min: \mathfrak{A}\longrightarrow(\mathfrak{A})^{L}_{min}$$ is a monotone Lipschitz semi-procedure. \end{prop}
\begin{remark}\label{remmin}
\begin{enumerate}
\item[$\bf (1)$] It is evident $(\mathfrak{A})^{L}_{min}\subseteq\textfrak{G}^{L}$.
\item[$\bf (2)$] If $\textfrak{A}^{L}$ is a closed nonlinear ideal, then $(\mathfrak{A})^{L}_{min}\subseteq\textfrak{A}^{L}$. \end{enumerate} \end{remark}
\begin{prop}\label{propmin}
Let $\mathfrak{A}$ be a closed ideal. Then $(\mathfrak{A})^{L}_{min}=\textfrak{G}^{L}$.
In particular, the linear and nonlinear ideals of approximable operators are related by $(\mathfrak{G})^{L}_{min} = \textfrak{G}^{L}$. \end{prop}
The proof of the counterpart of this proposition for ideals of linear operators requires the notion of idempotence of ideals, see \cite[Prop. 4.8.4]{P78}. In particular, the equalities \begin{equation}\label{idemFG}
\mathfrak{F} \circ \mathfrak{F} = \mathfrak{F} \ \ \ \text{and} \ \ \ \mathfrak{G} \circ \mathfrak{G} = \mathfrak{G} \end{equation} are needed. Since idempotence does not make sense for nonlinear ideals, we instead use the following equalities.
\begin{prop}\label{idemFlGl}
Any Lipschitz finite operator can be written as a product of a linear operator with finite rank and a Lipschitz finite operator.
Any Lipschitz approximable operator can be written as a product of a linear approximable operator and a Lipschitz approximable operator.
That is, $\mathfrak{F} \circ \textfrak{F}^L = \textfrak{F}^L$ and $\mathfrak{G} \circ \textfrak{G}^L = \textfrak{G}^L$. \end{prop}
\begin{proof}
Let $T=\sum\limits_{j=1}^{m} g_{j}\boxdot e_{j}$ with $g_{1},\cdots, g_{m}$ in $X^{\#}$ and $e_{1},\cdots, e_{m}$ in $F$ be a Lipschitz finite operator. Let $F_0$ be the finite dimensional subspace of $F$ spanned by $e_{1},\cdots, e_{m}$ and let $J:F_0 \to F$ be the embedding. Obviously, $J$ is a linear operator with finite rank. Moreover, let $T_0$ be the operator $T$ considered as an operator from $X$ to $F_0$. Then $T=J T_0$ is the required factorization. Observe that we also have $\Lip(T_0) \|J\|=\Lip(T)$. The inclusion $\mathfrak{F} \circ \textfrak{F}^L \subseteq \textfrak{F}^L$ is obvious.
Now let $T \in \textfrak{G}^L(X,F)$. Since $T$ can be approximated by Lipschitz finite operators, we can also find Lipschitz finite operators $T_n\in \textfrak{F}^L(X,F)$ such that the sum $T=\sum\limits_{n=1}^{\infty} T_n$ converges absolutely in $\Lip(X,F)$, i.e. $\sum\limits_{n=1}^{\infty} \Lip(T_n) < \infty$. Now each $T_n$ can be factored as $T_n=V_n U_n$ with
$V_n \in \mathfrak{F}(F_n,F)$ and $U_n \in \textfrak{F}^L(X,F_n)$ such that $F_n$ is a suitable Banach space and $\|V_n\| \Lip(U_n) =\Lip(T_n)$. By homogeneity, we may assume that $\|V_n\|^2 = \Lip(U_n)^2 = \Lip(T_n)$. Let $M:=\ell_{2}(F_{n})$ be the $\ell_2$-sum of the spaces $F_n$, with canonical injections $J_{n}:F_{n}\to M$ and projections $Q_{n}:M\to F_{n}$, and put $$V:=\sum\limits_{n=1}^{\infty} V_{n} Q_{n}\ \ \ \text{and} \ \ \ U:=\sum\limits_{n=1}^{\infty} J_{n} U_{n}.$$ Then $$\Lip(U)^{2}\leq\sum\limits_{n=1}^{\infty} \Lip(U_{n})^{2}<\infty\ \ \ \text{and} \ \ \ \left\|V\right\|^{2}\leq\sum\limits_{n=1}^{\infty} \left\|V_{n}\right\|^{2}<\infty.$$ This shows that the series defining $U$ and $V$ converge in $\Lip(X,M)$ and $\mathfrak{L}(M,F)$, respectively. Hence $U \in \textfrak{G}^L(X,M)$ and $V \in \mathfrak{G}(M,F)$ and $T=VU$ is the required factorization. Again, $\mathfrak{G} \circ \textfrak{G}^L \subseteq \textfrak{G}^L$ is obvious.
We can now prove Proposition \ref{propmin}.
\begin{proof}[Proof of Proposition \ref{propmin}]
By \eqref{idemFG} and Proposition \ref{idemFlGl}, we have
$$(\mathfrak{G})^{L}_{min} = \mathfrak{G} \circ \mathfrak{G} \circ \textfrak{G}^{L} = \mathfrak{G} \circ \textfrak{G}^{L} = \textfrak{G}^{L}.$$
If now $\mathfrak{A}$ is a closed ideal, then $\mathfrak{G} \subseteq \mathfrak{A}$ implies
$$ \textfrak{G}^{L} = (\mathfrak{G})^{L}_{min} = \mathfrak{G} \circ \mathfrak{G} \circ \textfrak{G}^{L} \subseteq \mathfrak{G} \circ \mathfrak{A} \circ \textfrak{G}^{L} = (\mathfrak{A})^{L}_{min}.$$
The reverse inclusion was already observed in Remark \ref{remmin}. \end{proof}
\subsection{Lipschitz interpolative ideal procedure between metric spaces and Banach spaces}\label{Auto20}
\begin{prop} $\left[\textfrak{A}^{L}_{\text{inj}}, \mathbf{A}^{L}_{\text{inj}}\right]$ is a $p$-normed nonlinear ideal. \end{prop}
\begin{proof} By Proposition \ref{Auto29} the algebraic conditions of Definition \ref{A7777} hold. Then the injective hull $\textfrak{A}^{L}_{inj}$ is a nonlinear ideal. To prove the norm condition $\bf (\widetilde{PNOI_1})$, let $T$ and $S$ be in $\textfrak{A}^{L}_{\text{inj}}(X,F)$. Then \begin{align} \mathbf{A}^{L}_{\text{inj}}\left(S+T\right)^{p}&:=\mathbf{A}^{L}\left[J_{F} \left(S+T\right)\right]^{p}=\mathbf{A}^{L}\left[J_{F} S+ J_{F} T\right]^{p} \nonumber \\ &\leq\mathbf{A}^{L}\left(J_{F} S\right)^{p} + \mathbf{A}^{L}\left(J_{F} T\right)^{p} \nonumber \\ &=\mathbf{A}^{L}_{\text{inj}}(S)^{p} + \mathbf{A}^{L}_{\text{inj}}(T)^{p}. \nonumber \end{align} Let $A\in \Lip(X_{0},X)$, $T\in\textfrak{A}^{L}_{\text{inj}}(X,F)$, and $B\in\mathfrak{L}(F,F_{0})$. Then \begin{align} \mathbf{A}^{L}_{\text{inj}}\left(BTA\right)&:=\mathbf{A}^{L}\left(J_{F_{0}} \left(BTA\right)\right)=\mathbf{A}^{L}\left(B^{\text{inj}} \left(J_{F} T\right) A\right) \nonumber \\
&\leq\left\|B^{\text{inj}}\right\|\mathbf{A}^{L}\left(J_{F} T\right) \Lip(A) \nonumber \\
&=\left\|B\right\|\mathbf{A}^{L}_{\text{inj}}(T) \Lip(A). \nonumber \end{align} Hence the norm condition $\bf (\widetilde{PNOI_2})$ is satisfied. \end{proof}
\begin{definition}\label{Torezen} Let $0\leq \theta< 1$ and $\left[\textfrak{A}^{L}, \mathbf{A}^{L}\right]$ be a normed nonlinear ideal. A Lipschitz operator $T$ from $X$ into $F$ belongs to $\textfrak{A}^{L}_{\theta}(X, F)$ if there exist a Banach space $G$ and a Lipschitz operator $S\in\textfrak{A}^{L}(X, G)$ such that \begin{equation}\label{pooor}
\left\|Tx'-Tx''|F\right\|\leq\left\|Sx'-Sx''|G\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta},\ \ \forall\: x',\; x'' \in X. \end{equation} For each $T\in\textfrak{A}^{L}_{\theta}(X, F)$, we set
\begin{equation}\label{pooooor} \mathbf{A}^{L}_{\theta}(T):=\inf\mathbf{A}^{L}(S)^{1-\theta} \end{equation} where the infimum is taken over all Lipschitz operators $S$ admitted in (\ref{pooor}).
Note that $\Lip(T)\leq\mathbf{A}^{L}_{\theta}(T)$, by definition. \end{definition}
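Before proceeding, we record a simple example (a direct check, included only as a sketch): every nonzero $T\in\textfrak{A}^{L}(X,F)$ belongs to $\textfrak{A}^{L}_{\theta}(X,F)$. Indeed, taking $G:=F$ and $S:=\Lip(T)^{\frac{\theta}{1-\theta}}\, T\in\textfrak{A}^{L}(X,F)$, we get for all $x',x''\in X$
$$\left\|Sx'-Sx''|F\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta}=\Lip(T)^{\theta}\left\|Tx'-Tx''|F\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta}\geq\left\|Tx'-Tx''|F\right\|,$$
since $\left\|Tx'-Tx''|F\right\|\leq\Lip(T)\, d_{X}(x',x'')$; consequently $\textfrak{A}^{L}\subseteq\textfrak{A}^{L}_{\theta}$ with $\mathbf{A}^{L}_{\theta}(T)\leq\Lip(T)^{\theta}\,\mathbf{A}^{L}(T)^{1-\theta}\leq\mathbf{A}^{L}(T)$.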
\begin{prop}\label{Flughafen} $\left[\textfrak{A}^{L}_{\theta}, \mathbf{A}^{L}_{\theta}\right]$ is an injective Banach nonlinear ideal. \end{prop}
\begin{proof} The injectivity and the algebraic conditions of Definition \ref{A7777} are not difficult to verify. Let $x',\; x'' \in X$ and $e\in F$; the norm condition $\bf (\widetilde{PNOI_0})$ is satisfied. Indeed, \begin{equation}
\left\|(g\boxdot e) x'- (g\boxdot e) x''|F\right\|\leq\left\|\left(\Lip(g)\cdot\left\|e\right\|\right)^{\frac{\theta}{1-\theta}}\left[(g\boxdot e) x'- (g\boxdot e) x''\right]|F\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta}. \end{equation}
Since a Lipschitz operator $S:=(\Lip(g)\cdot\left\|e\right\|)^{\frac{\theta}{1-\theta}} (g\boxdot e)\in\textfrak{A}^{L}(X, F)$, hence $g\boxdot e\in\textfrak{A}^{L}_{\theta}(X, F)$. From (\ref{pooooor}) we have \begin{align}
\mathbf{A}^{L}_{\theta}(g\boxdot e)&:=\inf\mathbf{A}^{L}\big((\Lip(g)\cdot\left\|e\right\|)^{\frac{\theta}{1-\theta}} g\boxdot e\big)^{1-\theta} \nonumber \\
&\leq(\Lip(g)\cdot\left\|e\right\|)^{\theta}\mathbf{A}^{L}(g\boxdot e\big)^{1-\theta} \nonumber \\
&=(\Lip(g)\cdot\left\|e\right\|)^{\theta}\cdot(\Lip(g)\cdot\left\|e\right\|)^{1-\theta} \nonumber \\
&=\Lip(g)\cdot\left\|e\right\|.\nonumber \end{align} The converse inequality is obvious. To prove the norm condition $\bf (\widetilde{PNOI_1})$, let $T_{1}$ and $T_{2}$ in $\textfrak{A}^{L}_{\theta}(X, F)$. Given $\epsilon > 0$, there is a Banach space $G_{i}$ and a Lipschitz operator $S_{i}\in\textfrak{A}^{L}(X, G_{i})$, $i=1, 2$ such that \begin{equation}
\left\|T_{i}x'-T_{i}x''|F\right\|\leq\left\|\mathbf{A}^{L}(S_{i})^{-\theta}\left(S_{i}x'-S_{i}x''\right)|G_{i}\right\|^{1-\theta} \left(\mathbf{A}^{L}(S_{i})^{1-\theta}\right)^{\theta}\cdot d_{X}(x',x'')^{\theta},\ \ \forall\: x',\; x'' \in X \end{equation} and $\mathbf{A}^{L}(S_{i})^{1-\theta}\leq (1+\epsilon)\cdot \mathbf{A}^{L}_{\theta}(T_{i})\ \left(i=1, 2\right)$. Introducing the $\ell_{1}$-sum $G:=G_{1}\oplus G_{2}$ and the Lipschitz operator $S:=\mathbf{A}^{L}(S_{1})^{-\theta} J_{1} S_{1} + \mathbf{A}^{L}(S_{2})^{-\theta} J_{2} S_{2}\in\textfrak{A}^{L}(X, G)\ \left(J_{1}, J_{2}\ \text{the canonical injections}\right)$ and applying the H\"older inequality, we get for all $x', x''\in X$ \begin{align}
 \left\|(T_{1}+T_{2})x' - (T_{1}+T_{2})x''|F\right\|&\leq\left\|T_{1}x'- T_{1}x''|F\right\|+\left\|T_{2}x'- T_{2}x''|F\right\|\nonumber \\
&\leq\sum\limits_{i=1}^{2}\left\|\mathbf{A}^{L}(S_{i})^{-\theta}\left(S_{i}x'-S_{i}x''\right)|G_{i}\right\|^{1-\theta}\left(\mathbf{A}^{L}(S_{i})^{1-\theta}\right)^{\theta}\cdot d_{X}(x',x'')^{\theta} \nonumber \\
&\leq\left[\sum\limits_{i=1}^{2}\left\|\mathbf{A}^{L}(S_{i})^{-\theta}\left(S_{i}x'-S_{i}x''\right)|G_{i}\right\|\right]^{1-\theta}\left(\sum\limits_{i=1}^{2}\mathbf{A}^{L}(S_{i})^{1-\theta}\right)^{\theta} d_{X}(x',x'')^{\theta} \nonumber \\
&=\left(\mathbf{A}^{L}(S_{1})^{1-\theta} + \mathbf{A}^{L}(S_{2})^{1-\theta}\right)^{\theta}\left\|Sx'-Sx''|G\right\|^{1-\theta} \cdot d_{X}(x',x'')^{\theta}. \nonumber \end{align} Hence $T_{1}+ T_{2}\in\textfrak{A}^{L}_{\theta}(X, F)$ and furthermore, for $p=1$ we have \begin{align} \mathbf{A}^{L}_{\theta}(T_{1} + T_{2})&\leq\left[\mathbf{A}^{L}(S_{1})^{1-\theta} + \mathbf{A}^{L}(S_{2})^{1-\theta}\right]^{\theta}\mathbf{A}^{L}(S)^{1-\theta}\nonumber \\ &\leq\mathbf{A}^{L}(S_{1})^{1-\theta} + \mathbf{A}^{L}(S_{2})^{1-\theta} \nonumber \\ &\leq (1+\epsilon)\cdot\left(\mathbf{A}^{L}_{\theta}(T_{1})+\mathbf{A}^{L}_{\theta}(T_{2})\right). \nonumber \end{align}
To prove the norm condition $\bf (\widetilde{PNOI_2})$, let $A\in \Lip(X_{0},X)$, $T\in\textfrak{A}^{L}_{\theta}(X, F)$, $B\in\mathfrak{L}(F, F_{0})$, and $x'_{0}, x''_{0}$ in $X_{0}$. Then \begin{align}
\left\|BTA x'_{0} - BTA x''_{0}|F_{0}\right\|&\leq \left\|B\right\|\cdot \left\|TA x'_{0} - TA x''_{0}|F\right\| \nonumber \\
&\leq\left\|B\right\|\cdot\left\|S (Ax'_{0})- S (Ax''_{0})| G\right\|^{1-\theta}\cdot d_{X}(Ax'_{0},Ax''_{0})^{\theta} \nonumber \\
&\leq \left\|B\right\|\cdot\Lip(A)^{\theta}\cdot \left\|S\circ A (x'_{0}) - S\circ A (x''_{0})| G\right\|^{1-\theta} d_{X_{0}}(x'_{0},x''_{0})^{\theta} \nonumber \\
&\leq \left\|\left\|B\right\|^{\frac{1}{1-\theta}}\cdot\Lip(A)^{\frac{\theta}{1-\theta}} \left(S\circ A (x'_{0})-S\circ A (x''_{0})\right)| G\right\|^{1-\theta} d_{X_{0}}(x'_{0},x''_{0})^{\theta}. \end{align}
Since a Lipschitz map $\widetilde{S}:=\left(\left\|B\right\|^{\frac{1}{1-\theta}}\cdot\Lip(A)^{\frac{\theta}{1-\theta}}\right) S\circ A\in\textfrak{A}^{L}(X_{0}, G)$, hence $BTA\in\textfrak{A}^{L}_{\theta}(X_{0}, F_{0})$. Moreover, from (\ref{pooooor}) we have \begin{align}\label{joo} \mathbf{A}^{L}_{\theta}(BTA)&:=\inf \mathbf{A}^{L}(\widetilde{S})^{1-\theta} \nonumber \\
&\leq\mathbf{A}^{L}\left((\left\|B\right\|^{\frac{1}{1-\theta}}\cdot\Lip(A)^{\frac{\theta}{1-\theta}}) S\circ A\right)^{1-\theta} \nonumber \\
&\leq \left\|B\right\|\cdot\Lip(A)^{\theta}\cdot \mathbf{A}^{L}(S\circ A)^{1-\theta} \nonumber \\
&\leq \left\|B\right\|\cdot\Lip(A)\cdot\mathbf{A}^{L}(S)^{1-\theta}. \end{align} Taking the infimum over all such $S\in\textfrak{A}^{L}(X, G)$ on the right side of (\ref{joo}), we have
$$\mathbf{A}^{L}_{\theta}(BTA)\leq\left\|B\right\|\cdot\mathbf{A}^{L}_{\theta}(T)\cdot\Lip(A).$$
To prove the completeness, condition $\bf (\widetilde{PNOI_3})$, let $(T_{n})_{n\in\mathbb{N}}$ be a sequence of Lipschitz operators in $\textfrak{A}^{L}_{\theta}(X, F)$ such that $\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}_{\theta}(T_{n})<\infty$. Since $\Lip(T)\leq\mathbf{A}^{L}_{\theta}(T)$ and $\Lip(X, F)$ is a Banach space, there exists $T=\sum\limits_{n=1}^{\infty} T_{n}\in\Lip(X, F)$. Given $\epsilon>0$, for each $n$ choose $S_{n}\in\textfrak{A}^{L}(X, G_{n})$ such that $$\left\|T_{n}x'-T_{n}x''|F\right\|\leq\left\|S_{n}x'-S_{n}x''|G_{n}\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta},\ \ \forall\: x',\; x'' \in X,$$ and $\mathbf{A}^{L}(S_{n})^{1-\theta}\leq\mathbf{A}^{L}_{\theta}(T_{n})+\frac{\epsilon}{2^{n}}$. Then $$\left(\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}(S_{n})\right)^{1-\theta}\leq\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}(S_{n})^{1-\theta}\leq\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}_{\theta}(T_{n})+\epsilon<\infty.$$ Let $S:=\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}(S_{n})^{-\theta} S_{n}\in\textfrak{A}^{L}(X, G)$, where $G$ is the $\ell_{1}$-sum of all $G_{n}$. Hence, by H\"older's inequality, \begin{align}
\left\|Tx'-Tx''|F\right\|\leq\sum\limits_{n=1}^{\infty}\left\|T_{n}x'-T_{n}x''|F\right\|&\leq\sum\limits_{n=1}^{\infty}\left\|S_{n}x'-S_{n}x''|G_{n}\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta}\nonumber \\
 &\leq\left\|Sx'-Sx''|G\right\|^{1-\theta}\left(\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}(S_{n})^{1-\theta}\right)^{\theta}\cdot d_{X}(x',x'')^{\theta}.\nonumber \end{align} This implies that $T\in\textfrak{A}^{L}_{\theta}(X, F)$ and $$\mathbf{A}^{L}_{\theta}(T)\leq\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}(S_{n})^{1-\theta}\leq\sum\limits_{n=1}^{\infty}\mathbf{A}^{L}_{\theta}(T_{n})+\epsilon<\infty.$$ We have $$\mathbf{A}^{L}_{\theta}\left(T-\sum\limits_{j=1}^{n} T_{j}\right)=\mathbf{A}^{L}_{\theta}\left(\sum\limits_{k=n+1}^{\infty} T_{k}\right)\leq\sum\limits_{k=n+1}^{\infty}\mathbf{A}^{L}(S_{k})^{1-\theta}.$$ Thus, $T=\sum\limits_{n=1}^{\infty} T_{n}$. \end{proof} \begin{remark} If $\theta=0$, then the nonlinear ideal $\textfrak{A}^{L}_{\theta}$ is just the injective hull of the nonlinear ideal $\textfrak{A}^{L}$, and the Lipschitz norms coincide. Further properties are given in. \end{remark}
\begin{prop} Let $0\leq \theta, \theta_{1}, \theta_{2} < 1$. Then the following holds. \begin{enumerate}
\item[$\bf (a)$] $\textfrak{A}^{L}_{\theta_{1}}\subset \textfrak{A}^{L}_{\theta_{2}}$ if $\theta_{1}\leq\theta_{2}$.
\item[$\bf (b)$] $\textfrak{A}^{L}_{\text{inj}}\subset \textfrak{A}^{L}_{\theta}$.
\item[$\bf (c)$] $\left(\textfrak{A}^{L}_{\theta_{1}}\right)_{\theta_{2}}\subset\textfrak{A}^{L}_{\theta_{1}+\theta_{2}-\theta_{1}\theta_{2}}$. \end{enumerate} \end{prop}
\begin{proof} To verify $\bf (a)$, let $T\in\textfrak{A}^{L}_{\theta_{1}}(X, F)$ and $\epsilon > 0$. Then
$$\left\|Tx'-Tx''|F\right\|\leq\left\|Sx'-Sx''|G\right\|^{1-\theta_{1}}\cdot d_{X}(x',x'')^{\theta_{1}},\ \ \forall\: x',\; x'' \in X,$$ holds for a suitable Banach space $G$ and a Lipschitz operator $S\in\textfrak{A}^{L}(X, G)$ with $\mathbf{A}^{L}(S)^{1-\theta_{1}}\leq (1+\epsilon)\cdot\mathbf{A}^{L}_{\theta_{1}}(T)$. Since
$$\left\|Tx'-Tx''|F\right\|\leq\Lip(S)^{\theta_{2}-\theta_{1}}\left\|Sx'-Sx''|G\right\|^{1-\theta_{2}}\cdot d_{X}(x',x'')^{\theta_{2}},\ \ \forall\: x',\; x'' \in X,$$ and the Lipschitz operator $\widetilde{S}:=\Lip(S)^{\frac{\theta_{2}-\theta_{1}}{1-\theta_{2}}} S$ belongs to $\textfrak{A}^{L}(X, G)$, we obtain $T\in\textfrak{A}^{L}_{\theta_{2}}(X, F)$ and $$\mathbf{A}^{L}_{\theta_{2}}(T)\leq\mathbf{A}^{L}(\widetilde{S})^{1-\theta_{2}}\leq\Lip(S)^{\theta_{2}-\theta_{1}}\mathbf{A}^{L}(S)^{1-\theta_{2}}\leq\mathbf{A}^{L}(S)^{1-\theta_{1}}\leq (1+\epsilon)\cdot\mathbf{A}^{L}_{\theta_{1}}(T).$$ To verify $\bf (b)$, let $T\in\textfrak{A}^{L}_{\text{inj}}(X, F)$. Then $J_{F}T\in\textfrak{A}^{L}(X, F^{\text{inj}})$ and \begin{align}
\left\|Tx'-Tx''|F\right\|&=\left\|J_{F}(Tx')-J_{F}(Tx'')|F^{\text{inj}}\right\|\nonumber \\
&\leq\Lip(T)^{\theta}\cdot\left\|J_{F}\circ T(x')-J_{F}\circ T(x'')|F^{\text{inj}}\right\|^{1-\theta}\cdot d_{X}(x',x'')^{\theta}. \nonumber \end{align} Since $G:=F^{\text{inj}}$ and a Lipschitz map $S:=\Lip(T)^{\frac{\theta}{1-\theta}} J_{F}\circ T\in\textfrak{A}^{L}(X, G)$, hence $T\in\textfrak{A}^{L}_{\theta}(X, F)$. Moreover, \begin{align} \mathbf{A}^{L}_{\theta}(T):=\inf\mathbf{A}^{L}(S)^{1-\theta}&\leq\mathbf{A}^{L}(\Lip(T)^{\frac{\theta}{1-\theta}} J_{F}\circ T)^{1-\theta} \nonumber \\ &\leq\Lip(T)^{\theta}\cdot\mathbf{A}^{L}(J_{F}\circ T)^{1-\theta} \nonumber \\ &:=\Lip(T)^{\theta}\cdot\mathbf{A}_{\text{inj}}^{L}(T)^{1-\theta} \nonumber \\ &\leq\mathbf{A}_{\text{inj}}^{L}(T)^{\theta}\cdot\mathbf{A}_{\text{inj}}^{L}(T)^{1-\theta} \nonumber \\ &=\mathbf{A}_{\text{inj}}^{L}(T). \nonumber \end{align}
To verify $\bf (c)$, let $T\in\left(\textfrak{A}^{L}_{\theta_{1}}\right)_{\theta_{2}}(X, F)$ and $\epsilon > 0$. Then \begin{equation}\label{traurig}
\left\|Tx'-Tx''|F\right\|\leq\left\|Sx'-Sx''|G\right\|^{1-\theta_{2}}\cdot d_{X}(x',x'')^{\theta_{2}},\ \ \forall\: x',\; x'' \in X, \end{equation} holds for a suitable Banach space $G$ and a Lipschitz operator $S\in\textfrak{A}_{\theta_{1}}^{L}(X, G)$ with $\mathbf{A}_{\theta_{1}}^{L}(S)^{1-\theta_{2}}\leq (1+\epsilon)\cdot\left(\mathbf{A}_{\theta_{1}}^{L}(T)\right)_{\theta_{2}}$ and \begin{equation}\label{traurig1}
\left\|Sx'-Sx''|G\right\|\leq\left\|Rx'-Rx''|\widetilde{G}\right\|^{1-\theta_{1}}\cdot d_{X}(x',x'')^{\theta_{1}},\ \ \forall\: x',\; x'' \in X, \end{equation} holds for a suitable Banach space $\widetilde{G}$ and a Lipschitz operator $R\in\textfrak{A}^{L}(X, \widetilde{G})$ with $\mathbf{A}^{L}(R)^{1-\theta_{1}}\leq (1+\epsilon)\cdot\mathbf{A}_{\theta_{1}}^{L}(S)$. From (\ref{traurig}) and (\ref{traurig1}) we have \begin{align}
\left\|Tx'-Tx''|F\right\|&\leq\left\|Rx'-Rx''|\widetilde{G}\right\|^{({1-\theta_{1}})\cdot ({1-\theta_{2}})}\cdot d_{X}(x',x'')^{\theta_{1}\cdot (1-\theta_{2})}\cdot d_{X}(x',x'')^{\theta_{2}}\nonumber \\
&\leq\left\|Rx'-Rx''|\widetilde{G}\right\|^{1-\theta_{2}-\theta_{1}+\theta_{1}\cdot\theta_{2}}\cdot d_{X}(x',x'')^{\theta_{2}+\theta_{1}-\theta_{1}\cdot\theta_{2}},\nonumber \end{align} hence $T\in\textfrak{A}^{L}_{\theta_{1}+\theta_{2}-\theta_{1}\theta_{2}}(X, F)$. Moreover, \begin{align} \mathbf{A}^{L}_{\theta_{1}+\theta_{2}-\theta_{1}\theta_{2}}(T)&\leq\mathbf{A}^{L}(R)^{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}} \nonumber \\ &\leq (1+\epsilon)^{\frac{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}}\cdot\mathbf{A}_{\theta_{1}}^{L}(S)^{\frac{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}} \nonumber \\ &\leq (1+\epsilon)^{\frac{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}}\cdot (1+\epsilon)^{\frac{1}{1-\theta_{2}}\cdot\frac{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}}\cdot\left(\mathbf{A}_{\theta_{1}}^{L}(T)\right)_{\theta_{2}}^{\frac{1}{1-\theta_{2}}\cdot\frac{1-\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}} \nonumber \\ &= (1+\epsilon)^{\frac{2-2\theta_{1}-\theta_{2}+\theta_{1}\theta_{2}}{1-\theta_{1}}}\cdot\left(\mathbf{A}_{\theta_{1}}^{L}(T)\right)_{\theta_{2}}. \nonumber \end{align} \end{proof}
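\begin{remark} For a concrete instance of $\bf (c)$: taking $\theta_{1}=\frac{1}{2}$ and $\theta_{2}=\frac{1}{3}$ gives $$\theta_{1}+\theta_{2}-\theta_{1}\theta_{2}=\frac{1}{2}+\frac{1}{3}-\frac{1}{6}=\frac{2}{3},$$ so $\left(\textfrak{A}^{L}_{1/2}\right)_{1/3}\subset\textfrak{A}^{L}_{2/3}$. Note also that $\theta_{1}+\theta_{2}-\theta_{1}\theta_{2}=1-(1-\theta_{1})(1-\theta_{2})<1$ whenever $0\leq\theta_{1},\theta_{2}<1$, so the iterated procedure never leaves the scale $\left\{\textfrak{A}^{L}_{\theta}:0\leq\theta<1\right\}$. \end{remark}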
\subsection{Lipschitz $\left(p,\theta, q, \nu\right)$-dominated operators}\label{Auto220}
Throughout this subsection, let $1\leq p, q <\infty$, $0\leq \theta, \nu< 1$ and $1\leq r <\infty$ be such that $\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}=1$. We then introduce the following definition.
\begin{definition}\label{ten1}
A Lipschitz operator $T$ from $X$ to $F$ is called Lipschitz $\left(p,\theta, q, \nu\right)$-dominated if there exist Banach spaces $G$ and $H$, a Lipschitz operator $S\in\Pi_{p}^{L}(X,G)$, a bounded operator $R\in\Pi_{q}(F^{*},H)$ and a positive constant $C$ such that
\begin{equation}\label{lastwagen1}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C\cdot d_{X}(x', x'')^{\theta}\left\|Sx'-Sx''| G\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|R(b^{*})|H\right\|^{1-\nu} \end{equation} for all $x',\; x''$ in $X$ and all $b^{*}\in F^{*}$.
Let us denote by $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ the class of all Lipschitz $\left(p, \theta, q, \nu\right)$-dominated operators from $X$ to $F$ with $$D_{\left(p, \theta, q, \nu\right)}^{L}(T)=\inf\left\{C \cdot\pi_{p}^{L}(S)^{1-\theta}\cdot \pi_{q}(R)^{1-\nu}\right\},$$ where the infimum is taken over all Lipschitz operator $S$, bounded operator $R$, and constant $C$ admitted in (\ref{lastwagen1}). \end{definition}
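As an orienting example (any admissible parameters would do), take $p=q=2$ and $\theta=\nu=\frac{1}{2}$. Then $\frac{1-\theta}{p}+\frac{1-\nu}{q}=\frac{1}{4}+\frac{1}{4}=\frac{1}{2}$, so the compatibility condition $\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}=1$ forces $r=2$, and (\ref{lastwagen1}) reads $$\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C\cdot d_{X}(x', x'')^{\frac{1}{2}}\left\|Sx'-Sx''| G\right\|^{\frac{1}{2}}\left\|b^{*}\right\|^{\frac{1}{2}}\left\|R(b^{*})|H\right\|^{\frac{1}{2}}.$$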
\begin{prop}\label{sun1} The ordered pair $\left(\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F), D_{\left(p, \theta, q, \nu\right)}^{L}\right)$ is a normed space. \end{prop}
\begin{proof} We prove the triangle inequality. For $i=1,2$, let $T_{i}\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$. For each $\epsilon> 0$, there exist Banach spaces $G_{i}$ and $H_{i}$, Lipschitz operators $S_{i}\in\Pi_{p}^{L}(X,G_{i})$, bounded operators $R_{i}\in\Pi_{q}(F^{*},H_{i})$ and positive constants $C_{i}$ such that
\begin{equation}\label{lastwagen2}
\left|\left\langle T_{i}x'- T_{i}x'', b^{*}\right\rangle\right|\leq C_{i}\cdot d_{X}(x', x'')^{\theta}\left\|S_{i}x'-S_{i}x''| G_{i}\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|R_{i}(b^{*})|H_{i}\right\|^{1-\nu},\ \ \forall\: x',\; x'' \in X,\ \forall\: b^{*}\in F^{*},
and
\begin{equation}\label{lastwagen3} C_{i} \cdot\pi_{p}^{L}(S_{i})^{1-\theta}\cdot \pi_{q}(R_{i})^{1-\nu}\leq D_{\left(p, \theta, q, \nu\right)}^{L}(T_{i}) + \epsilon. \end{equation}
For $x', x'' \in X$ and $b^{*}\in F^{*}$ we have
\begin{align}\label{lastwagen4}
\left|\left\langle T_{i}x'- T_{i}x'', b^{*}\right\rangle\right|&\leq C_{i} \cdot d_{X}(x', x'')^{\theta}\left\|S_{i}x'-S_{i}x''| G_{i}\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|R_{i}(b^{*})|H_{i}\right\|^{1-\nu} \nonumber \\
&=\widetilde{C}_{i} \cdot d_{X}(x', x'')^{\theta}\left\|\widetilde{S}_{i}x'-\widetilde{S}_{i}x''| G_{i}\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|\widetilde{R}_{i}(b^{*})|H_{i}\right\|^{1-\nu}, \nonumber \end{align}
where $\widetilde{C}_{i}=C_{i}^{\frac{1}{r}}\cdot\pi_{p}^{L}(S_{i})^{\frac{1-\theta}{r}}\cdot\pi_{q}(R_{i})^{\frac{1-\nu}{r}}$, $\widetilde{S}_{i}=C_{i}^{\frac{1}{p}}\cdot\pi_{p}^{L}(S_{i})^{\frac{1-\theta}{p}}\cdot\pi_{q}(R_{i})^{\frac{1-\nu}{p}}\frac{S_{i}}{\pi_{p}^{L}(S_{i})}$, and $\widetilde{R}_{i}=C_{i}^{\frac{1}{q}}\cdot\pi_{p}^{L}(S_{i})^{\frac{1-\theta}{q}}\cdot\pi_{q}(R_{i})^{\frac{1-\nu}{q}}\frac{R_{i}}{\pi_{q}(R_{i})}$.
Hence, replacing $C_{i}$, $S_{i}$ and $R_{i}$ by $\widetilde{C}_{i}$, $\widetilde{S}_{i}$ and $\widetilde{R}_{i}$ if necessary, from (\ref{lastwagen3}) we may assume that $$C_{i}\leq \left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{i}) + \epsilon\right)^{\frac{1}{r}},$$
\begin{equation}\label{lastwagen5} \pi_{p}^{L}(S_{i})\leq \left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{i}) + \epsilon\right)^{\frac{1}{p}} \ \text{and}\ \pi_{q}(R_{i})\leq \left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{i}) + \epsilon\right)^{\frac{1}{q}}. \end{equation}
Let $G$ and $H$ be the Banach spaces obtained as the $\ell_{p}$-direct sum of $G_{1}$ and $G_{2}$ and the $\ell_{q}$-direct sum of $H_{1}$ and $H_{2}$, respectively. Let $S$ be a Lipschitz operator from $X$ into $G$ such that $S(x)=(S_{i}(x))_{i=1}^{2}$ for $x\in X$ and let $R$ be a bounded operator from $F^{*}$ into $H$ such that $R(b)=(R_{i}(b))_{i=1}^{2}$ for $b\in F^{*}$. For each finite sequence $x'$, $x''$ in $X$ we have
\begin{align}
\left\|(S(x'_{j})-S(x''_{j}))_{j=1}^{n}|\ell_{p}(G)\right\|&=\left[\sum\limits_{j=1}^{n}\left\|S(x'_{j})-S(x''_{j})|G\right\|^{p}\right]^{\frac{1}{p}}=\left[\sum\limits_{j=1}^{n}\sum\limits_{i=1}^{2}\left\|S_{i}(x'_{j})-S_{i}(x''_{j})|G_{i}\right\|^{p}\right]^{\frac{1}{p}} \nonumber \\
&\leq\left[\sum\limits_{i=1}^{2}\pi_{p}^{L}(S_{i})^{p}\sup\limits_{f\in B_{{X}^{\#}}}\sum\limits_{j=1}^{n}\left|fx'_j-fx''_j\right|^{p}\right]^{\frac{1}{p}} \nonumber \\
&=\sup\limits_{f\in B_{{X}^{\#}}}\left[\sum\limits_{j=1}^{n}\left|fx'_j-fx''_j\right|^{p}\right]^{\frac{1}{p}}\left(\sum\limits_{i=1}^{2}\pi_{p}^{L}(S_{i})^{p}\right)^{\frac{1}{p}} \nonumber \end{align}
\begin{equation} \pi_{p}^{L}(S)\leq\left(\sum\limits_{i=1}^{2}\pi_{p}^{L}(S_{i})^{p}\right)^{\frac{1}{p}}\leq\left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1})+D_{\left(p, \theta, q, \nu\right)}^{L}(T_{2}) + 2\epsilon\right)^{\frac{1}{p}}. \end{equation}
Similarly, \begin{equation} \pi_{q}(R)\leq\left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1})+D_{\left(p, \theta, q, \nu\right)}^{L}(T_{2}) + 2\epsilon\right)^{\frac{1}{q}}. \end{equation}
\begin{align}
&\left|\left\langle (T_{1}+T_{2})x'- (T_{1}+T_{2}) x'', b^{*}\right\rangle\right|\leq\sum\limits_{i=1}^{2} C_{i} d_{X}(x', x'')^{\theta}\left\|S_{i}x'-S_{i}x''| G_{i}\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|R_{i}(b^{*})|H_{i}\right\|^{1-\nu} \nonumber \\
&\leq d_{X}(x', x'')^{\theta}\left\|b^{*}\right\|^{\nu}\left(\sum\limits_{i=1}^{2} C_{i}^{r}\right)^{\frac{1}{r}} \left(\sum\limits_{i=1}^{2} \left\|S_{i}x'-S_{i}x''| G_{i}\right\|^{p}\right)^{\frac{1-\theta}{p}} \left(\sum\limits_{i=1}^{2} \left\|R_{i}(b^{*})|H_{i}\right\|^{q}\right)^{\frac{1-\nu}{q}} \nonumber \\
&=d_{X}(x', x'')^{\theta}\left\|b^{*}\right\|^{\nu}\left(\sum\limits_{i=1}^{2} C_{i}^{r}\right)^{\frac{1}{r}}\left\|Sx'-Sx''| G\right\|^{1-\theta} \left\|R(b^{*})|H\right\|^{1-\nu} \nonumber \end{align}
\begin{align} D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1}+T_{2})&\leq\left(\sum\limits_{i=1}^{2} C_{i}^{r}\right)^{\frac{1}{r}}\pi_{p}^{L}(S)^{1-\theta}\pi_{q}(R)^{1-\nu} \nonumber \\ &\leq \left(D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1})+ D_{\left(p, \theta, q, \nu\right)}^{L}(T_{2})+ 2\epsilon\right)^{\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}}. \nonumber \end{align}
Since $\epsilon>0$ was arbitrary, we conclude that $D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1}+T_{2})\leq D_{\left(p, \theta, q, \nu\right)}^{L}(T_{1})+ D_{\left(p, \theta, q, \nu\right)}^{L}(T_{2})$.
\end{proof}
\begin{remark}
If $\theta=\nu=0$, then the class of all Lipschitz $\left(p, \theta, q, \nu\right)$-dominated operators from $X$ to $F$ coincides with the class of all Lipschitz $\left(p, q\right)$-dominated operators from $X$ to $F$ considered in \cite{CD11}, that is, $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)=\mathcal{D}_{\left(p, q\right)}^{L}(X,F)$. \end{remark}
\begin{thm}\label{thm1} Let $X$ be a metric space, $F$ be a Banach space and $T\in Lip(X,F)$. The following conditions are equivalent.
\begin{enumerate}
\item[$\bf (1)$] $T\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$.
\item[$\bf (2)$] There exist a constant $C\geq 0$ and regular probabilities $\mu$ and $\tau$ on $B_{X^{\#}}$ and $B_{F^{**}}$, respectively, such that for every $x'$, $x''$ in $X$ and $b^{*}$ in $F^{*}$ the following inequality holds \begin{align}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C &\cdot\left[\int\limits_{B_{X^{\#}}}\left(\left|f(x')-f(x'')\right|^{1-\theta} d_X(x',x'')^{\theta}\right)^\frac{p}{1-\theta} d\mu(f)\right]^\frac{1-\theta}{p}\nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}}\left(\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{1-\nu} \left\|b^{*}\right\|^{\nu}\right)^\frac{q}{1-\nu} d\tau(b^{**})\right]^\frac{1-\nu}{q}. \nonumber \end{align}
\item[$\bf (3)$] There exists a constant $C\geq 0$ such that for all finite sequences $x'$, $x''$ in $X$, $\sigma$ in $\mathbb{R}$ and $b^{*}$ in $F^{*}$ the inequality \begin{equation}
\left\|\sigma\cdot\left\langle Tx'- Tx'', b^{*}\right\rangle|\ell_{r'}\right\|\leq C\cdot\left\|(\sigma,x',x'')\Big|\delta_{p,\theta}^{L}(\mathbb{R}\times X\times X)\right\|\left\| b^{*} \Big|\delta_{q,\nu}(F^{*})\right\| \end{equation} holds.
\end{enumerate} In this case, $D_{\left(p, \theta, q, \nu\right)}^{L}$ is equal to the infimum of such constants $C$ in either $\bf (2)$, or $\bf (3)$. \end{thm}
\begin{proof} $\bf (1)\Longrightarrow\bf (2)$ If $T\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$, then there exist Banach spaces $G$ and $H$, a Lipschitz operator $S_{1}\in\Pi_{p}^{L}(X,G)$, a bounded operator $S_{2}\in\Pi_{q}(F^{*},H)$ and a positive constant $C$ such that \begin{equation}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C\cdot d_{X}(x', x'')^{\theta}\left\|S_{1}x'-S_{1}x''| G\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|S_{2}(b^{*})|H\right\|^{1-\nu} \end{equation} for all $x',\; x''$ in $X$ and all $b^{*}\in F^{*}$. Since $S_{1}$ is a Lipschitz $p$-summing operator and $S_{2}$ is a $q$-summing operator, there exist regular probabilities $\mu$ and $\tau$ on $B_{X^{\#}}$ and $B_{F^{**}}$, respectively, such that \begin{align}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C\cdot\pi_{p}^{L}(S_{1})^{1-\theta}\cdot\pi_{q}(S_{2})^{1-\nu}&\left[\int\limits_{B_{X^{\#}}}\left(\left|f(x')-f(x'')\right|^{1-\theta} d_X(x',x'')^{\theta}\right)^\frac{p}{1-\theta} d\mu(f)\right]^\frac{1-\theta}{p}\nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}}\left(\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{1-\nu} \left\|b^{*}\right\|^{\nu}\right)^\frac{q}{1-\nu} d\tau(b^{**})\right]^\frac{1-\nu}{q}. \nonumber \end{align} $\bf (2)\Longrightarrow\bf (1)$ Let $x', x''\in X$ and $b^{*}\in F^{*}$. Let $\varphi_{b^{*}}(b^{**}):=\left\langle b^{*}, b^{**}\right\rangle$ for $b^{**}\in B_{F^{**}}$. For each $x\in X$, let $\delta_{(x,0)}: X^{\#}\longrightarrow \mathbb{R}$ be the linear map defined by $$\delta_{(x,0)}(f)=f(x)\ \ \ (f\in X^{\#}).$$ By setting $S_{1}x:=\delta_{(x,0)}$, $S_{2}b^{*}:=\varphi_{b^{*}}$, $G:=L_{p}(B_{X^{\#}},\mu)$, and $H:=L_{q}(B_{F^{**}},\tau)$ we obtain a Lipschitz operator $S_{1}\in\Pi_{p}^{L}(X,G)$ with $\pi_{p}^{L}(S_{1})\leq 1$ and an operator $S_{2}\in\Pi_{q}(F^{*},H)$ with $\pi_{q}(S_{2})\leq 1$ such that \begin{align}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|&\leq C\cdot\left[\int\limits_{B_{X^{\#}}}\left|f(x')-f(x'')\right|^{p} d\mu(f)\right]^\frac{1-\theta}{p} \left[\int\limits_{B_{F{**}}}\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau(b^{**})\right]^\frac{1-\nu}{q}d_X(x',x'')^{\theta}\left\|b^{*}\right\|^{\nu} \nonumber\\
&= C\cdot d_{X}(x', x'')^{\theta}\left\|S_{1}x'-S_{1}x''| G\right\|^{1-\theta}\left\|b^{*}\right\|^{\nu}\left\|S_{2}(b^{*})|H\right\|^{1-\nu}\nonumber \end{align} $\bf (2)\Longrightarrow\bf (3)$ Let $x'$, $x''$ be finite sequences in $X$, $\sigma$ a finite sequence in $\mathbb{R}$ and $b^{*}$ a finite sequence in $F^{*}$. By $\bf (2)$ and the H\"older inequality with exponents satisfying $1=\frac{1}{r}+\frac{1-\theta}{p}+\frac{1-\nu}{q}$ we have \begin{align}
\left[\sum\limits_{j=1}^{m}\left|\sigma_{j}\right|^{r'}\left|\left\langle Tx'_{j}- Tx''_{j}, b_{j}^{*}\right\rangle\right|^{r'}\right]^{\frac{1}{r'}}\leq C & \cdot\left[\sum\limits_{j=1}^{m}\left(\int\limits_{B_{X^{\#}}}\left(\left|\sigma_{j}\right|\left|f(x'_{j})-f(x''_{j})\right|^{1-\theta} d_X(x'_{j},x''_{j})^{\theta}\right)^\frac{p}{1-\theta} d\mu(f)\right)\right]^\frac{1-\theta}{p}\nonumber \\
&\cdot\left[\sum\limits_{j=1}^{m}\left(\int\limits_{B_{F^{**}}}\left(\left|\left\langle b_{j}^{*}, b^{**}\right\rangle\right|^{1-\nu} \left\|b_{j}^{*}\right\|^{\nu}\right)^\frac{q}{1-\nu} d\tau(b^{**})\right)\right]^\frac{1-\nu}{q} \nonumber \\
&\leq C\cdot\sup\limits_{f\in B_{{X}^{\#}}}\Bigg[\sum\limits_{j=1}^{m}\left[\left|\sigma_j\right|\left|fx'_j-fx''_j\right|^{1-\theta} d_X(x'_j,x''_j)^{\theta}\right]^{\frac{p}{1-\theta}}\Bigg]^{\frac{1-\theta}{p}}\nonumber \\
&\cdot\sup\limits_{b^{**}\in B_{{F}^{**}}}\Bigg[\sum\limits_{j=1}^{m}\left[\left|\left\langle b^{*}_j, b^{**}\right\rangle \right|^{1-\nu} \left\|b^{*}_j\right\|^{\nu}\right]^{\frac{q}{1-\nu}}\Bigg]^{\frac{1-\nu}{q}}\nonumber \\
&= C\cdot\left\|(\sigma,x',x'')\Big|\delta_{p,\theta}^{L}(\mathbb{R}\times X\times X)\right\|\left\|b^{*} \Big|\delta_{q,\nu}(F^{*})\right\| \nonumber \end{align} $\bf (3)\Longrightarrow\bf (2)$ Take $\left[C(B_{X^{\#}})\times C(B_{F^{**}})\right]^{*}$ equipped with the weak $C(B_{X^{\#}})\times C(B_{F^{**}})$-topology. Then $W(B_{X^{\#}})\times W(B_{F^{**}})$ is a compact convex subset. For any finite sequences $\sigma$ in $\mathbb{R}$, $x'$, $x''$ in $X$ and $b^{*}$ in $F^{*}$ the equation
\begin{align}
\Psi(\mu, \tau)=\sum\limits_{j=1}^{n}\bigg(\frac{1}{r'}\left|\sigma_{j}\left\langle Tx'_{j}-Tx''_{j}, b^{*}_{j}\right\rangle\right|^{r'}&
-\frac{C^{r'}}{\frac{p}{1-\theta}} \int\limits_{B_{X^{\#}}}\left|\sigma_{j}\right|^{\frac{p}{1-\theta}} d_{X} (x'_j, x''_j)^{\frac{\theta p}{1-\theta}}\left|f(x'_{j})-f(x''_{j})\right|^{p} d\mu(f)\nonumber \\
& -\frac{C^{r'}}{\frac{q}{1-\nu}}\int\limits_{B_{F^{**}}} \left\|b^{*}_{j}\right\|^{\frac{\nu q}{1-\nu}}\left|\left\langle b_{j}^{*}, b^{**}\right\rangle\right|^{q} d\tau(b^{**})\bigg) \nonumber \end{align} defines a continuous convex function $\Psi$ on $W(B_{X^{\#}})\times W(B_{F^{**}})$. From the compactness of $B_{X^{\#}}$ and $B_{F^{**}}$, there exist $f_{0}\in B_{X^{\#}}$ and $b^{**}_{0}\in B_{F^{**}}$ at which the corresponding suprema over $B_{X^{\#}}$ and $B_{F^{**}}$ are attained. Put
$$\zeta=\Bigg[\sum\limits_{j=1}^{n}\left[\left|\sigma_j\right|\left|f_0 \:x'_j-f_0 \:x''_j\right|^{1-\theta} d_X(x'_j,x''_j)^{\theta}\right]^{\frac{p}{1-\theta}}\Bigg]^{\frac{1-\theta}{p}}$$
and
$$\beta=\Bigg[\sum\limits_{j=1}^{n}\left[\left|\left\langle b^{*}_{j},b^{**}_{0}\right\rangle \right|^{1-\nu}\left\|b^{*}_{j}\right\|^{\nu}\right]^{\frac{q}{1-\nu}}\Bigg]^{\frac{1-\nu}{q}}.$$
If $\delta (f_{0})$ and $\delta (b^{**}_{0})$ denote the Dirac measures at $f_{0}$ and $b^{**}_{0}$, respectively, then we have
\begin{align}
\Psi\left(\delta (f_{0}), \delta (b^{**}_{0})\right)=\sum\limits_{j=1}^{n}\bigg(\frac{1}{r'}\left|\sigma_j\left\langle Tx'_{j}-Tx''_{j}, b^{*}_{j}\right\rangle\right|^{r'}& - \frac{C^{r'}}{\frac{p}{1-\theta}} \left|\sigma_{j}\right|^{\frac{p}{1-\theta}} d_{X} (x'_j, x''_j)^{\frac{\theta p}{1-\theta}}\left|f_{0}(x'_{j})-f_{0}(x''_{j})\right|^{p} \nonumber \\
& -\frac{C^{r'}}{\frac{q}{1-\nu}} \left\|b^{*}_{j}\right\|^{\frac{\nu q}{1-\nu}}\left|\left\langle b_{j}^{*}, b_{0}^{**}\right\rangle\right|^{q}\bigg) \nonumber \end{align} \begin{align}
\ \ \ \ \ \: &=\frac{1}{r'}\sum\limits_{j=1}^{n}\left|\sigma_j\right|^{r'}\left|\left\langle Tx'_{j}-Tx''_{j}, b^{*}_{j}\right\rangle\right|^{r'} - C^{r'}\left(\frac{1-\theta}{p} \zeta^{\frac{p}{1-\theta}}+\frac{1-\nu}{q} \beta^{\frac{q}{1-\nu}}\right) \nonumber \\
&\leq\frac{1}{r'}\sum\limits_{j=1}^{n}\left|\sigma_j\right|^{r'}\left|\left\langle Tx'_{j}-Tx''_{j}, b^{*}_{j}\right\rangle\right|^{r'}-\frac{C^{r'}}{r'}(\zeta\cdot\beta)^{r'} \nonumber \\
&=\frac{1}{r'}\left[\sum\limits_{j=1}^{n}\left|\sigma_j\right|^{r'}\left|\left\langle Tx'_{j}-Tx''_{j}, b^{*}_{j}\right\rangle\right|^{r'}- (C\cdot\zeta\cdot\beta)^{r'}\right] \nonumber \\ &\leq 0, \nonumber \end{align}
where the final inequality holds by $\bf (3)$. Since the collection $\mathfrak{Q}$ of all functions $\Psi$ obtained in this way is concave, by \cite [E.4.2] {P78} there are $\mu_{0}\in W(B_{X^{\#}})$ and $\tau_{0}\in W(B_{F^{**}})$ such that $\Psi\left(\mu_{0}, \tau_{0} \right)\leq 0$ for all $\Psi\in\mathfrak{Q}$. In particular, if $\Psi$ is generated by a single $\sigma$ in $\mathbb{R}$, $x'$, $x''$ in $X$ and $b^{*}$ in $F^{*}$, it follows that
\begin{align}
\frac{1}{r'}\left|\sigma\left\langle Tx'-Tx'', b^{*}\right\rangle\right|^{r'}&-\frac{C^{r'}}{\frac{p}{1-\theta}}\int\limits_{B_{X^{\#}}}\left|\sigma\right|^{\frac{p}{1-\theta}} d_{X} (x', x'')^{\frac{\theta p}{1-\theta}}\left|fx'-fx''\right|^{p} d\mu_{0}(f) \nonumber \\
&-\frac{C^{r'}}{\frac{q}{1-\nu}}\int\limits_{B_{F^{**}}} \left\|b^{*}\right\|^{\frac{\nu q}{1-\nu}}\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{0}(b^{**})\leq 0. \nonumber \end{align}
Finally, we put
$$s_{1}:=\left[\int\limits_{B_{X^{\#}}}\left|\sigma\right|^{\frac{p}{1-\theta}} d_{X} (x', x'')^{\frac{\theta p}{1-\theta}}\left|fx'-fx''\right|^{p} d\mu_{0}(f)\right]^{\frac{1-\theta}{p}}$$ and
$$s_{2}:=\left[\int\limits_{B_{F^{**}}} \left\|b^{*}\right\|^{\frac{\nu q}{1-\nu}}\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{0}(b^{**})\right]^{\frac{1-\nu}{q}}.$$ Then \begin{align}
\left|\sigma\left\langle Tx'-Tx'', b^{*}\right\rangle\right|&=s_{1}s_{2}\left|(s_{1}^{-1}\sigma)\left\langle Tx'-Tx'', s_{2}^{-1} b^{*}\right\rangle\right| \nonumber \\
&\leq C s_{1} s_{2} \bigg[\frac{r'}{\frac{p}{1-\theta}}\int\limits_{B_{X^{\#}}}\left|s_{1}^{-1}\sigma\right|^{\frac{p}{1-\theta}} d_{X} (x', x'')^{\frac{\theta p}{1-\theta}}\left|fx'-fx''\right|^{p} d\mu_{0}(f) \nonumber \\
& + \frac{r'}{\frac{q}{1-\nu}}\int\limits_{B_{F^{**}}} \left\|s_{2}^{-1} b^{*}\right\|^{\frac{\nu q}{1-\nu}}\left|\left\langle s_{2}^{-1} b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{0}(b^{**})\bigg]^{\frac{1}{r'}}. \nonumber \\ &\leq C \ s_{1} \ s_{2}. \nonumber \end{align} Hence \begin{align}
\left|\left\langle Tx'- Tx'', b^{*}\right\rangle\right|\leq C &\cdot\left[\int\limits_{B_{X^{\#}}}\left(\left|f(x')-f(x'')\right|^{1-\theta} d_X(x',x'')^{\theta}\right)^\frac{p}{1-\theta} d\mu(f)\right]^\frac{1-\theta}{p}\nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}}\left(\left|\left\langle b^{*}, b^{**}\right\rangle\right|^{1-\nu} \left\|b^{*}\right\|^{\nu}\right)^\frac{q}{1-\nu} d\tau(b^{**})\right]^\frac{1-\nu}{q}. \nonumber \end{align}
\end{proof}
Theorem \ref{thm1} will be used to prove the next result. \begin{cor}\label{sun2} The linear space $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ is a Banach space under the norm $D_{\left(p, \theta, q, \nu\right)}^{L}$. \end{cor}
\begin{proof} To prove that $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ is complete, we consider an arbitrary Cauchy sequence $\left(T_{n}\right)_{n\in\mathbb{N}}$ in $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ and show that $\left(T_{n}\right)_{n\in\mathbb{N}}$ converges to $T\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$. Since $\left(T_{n}\right)_{n\in\mathbb{N}}$ is Cauchy, for every $\epsilon>0$ there is an $n_{0}$ such that
\begin{equation}\label{zzzzzwzzzzzz1212} D_{\left(p, \theta, q, \nu\right)}^{L}\left(T_{m}-T_{n}\right)\leq\epsilon\ \ \text{for} \ \ m,n\geq n_{0}. \end{equation}
Since $\Lip\left(T_{m}-T_{n}\right)\leq D_{\left(p, \theta, q, \nu\right)}^{L}\left(T_{m}-T_{n}\right)$, the sequence $\left(T_{n}\right)_{n\in\mathbb{N}}$ is also a Cauchy sequence in the Banach space $\Lip(X,F)$, and there is a Lipschitz map $T$ with $$\lim_{n\rightarrow\infty}\Lip\left(T-T_{n}\right)=0.$$
From $\bf (2)$ of Theorem \ref{thm1} given $\epsilon>0$ there is $n_{0}\in\mathbb{N}$ such that, for each $n, m\in\mathbb{N}$, $n, m\geq n_{0}$, there exist probabilities $\mu_{n m}$ on $B_{X^{\#}}$ and $\tau_{n m}$ on $B_{F^{**}}$ such that for every $x'$, $x''$ in $X$ and $b^{*}$ in $F^{*}$ the following inequality holds \begin{align}
\left|\left\langle (T_{m}-T_{n}) \; x'- (T_{m}-T_{n}) \; x'', b^{*}\right\rangle\right|\leq \epsilon \; d_X(x',x'')^{\theta} \; \left\|b^{*}\right\|^{\nu} &\cdot\left[\int\limits_{B_{X^{\#}}} \left|f(x')-f(x'')\right|^{p} d\mu_{n m}(f)\right]^{1-\theta} \nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}} \left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{n m}(b^{**})\right]^ {1-\nu} . \nonumber \end{align} Fixed $n\geq n_{0}$, by the weak compactness of $W\left(B_{X^{\#}}\right)$ and $W\left(B_{F^{**}}\right)$, there is a sub-net $\left(\mu_{n m}(\alpha), \tau_{n m}(\alpha)\right)_{\alpha\in\mathcal{A}}$ convergent to $\left(\mu_{n }, \tau_{n }\right)\in W(B_{X^{\#}})\times W(B_{F^{**}})$ in the topology $\sigma\left((C(B_{X^{\#}}\times C(B_{F^{**}})))^{\ast}, C(B_{X^{\#}})\times C(B_{F^{**}})\right)$. Then, there is $\alpha_{0}\in\mathcal{A}$ such that for each $x'$, $x''$ in $X$, $b^{*}$ in $F^{*}$ and $\alpha\in\mathcal{A}$ with $\alpha\geq\alpha_{0}$ we have \begin{align}
&\left|\left\langle (T_{m(\alpha)}-T_{n}) \; x'- (T_{m(\alpha)}-T_{n}) \; x'', b^{*}\right\rangle\right|\leq \epsilon \; d_X(x',x'')^{\theta} \; \left\|b^{*}\right\|^{\nu} \nonumber \\
&\cdot\left[\int\limits_{B_{X^{\#}}} \left|f(x')-f(x'')\right|^{p} d(\mu_{{n m}(\alpha)}-\mu_{n})(f) + \int\limits_{B_{X^{\#}}} \left|f(x')-f(x'')\right|^{p} d\mu_{n}(f)\right]^{1-\theta} \nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}} \left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d(\tau_{{n m}(\alpha)}-\tau_{n})(b^{**}) + \int\limits_{B_{F^{**}}} \left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{n} (b^{**})\right]^ {1-\nu} . \nonumber \end{align} and taking limits when $\alpha\in\mathcal{A}$ we have
\begin{align}
\left|\left\langle (T -T_{n}) \; x'- (T -T_{n}) \; x'', b^{*}\right\rangle\right|\leq \epsilon \; d_X(x',x'')^{\theta} \; \left\|b^{*}\right\|^{\nu} &\cdot\left[\int\limits_{B_{X^{\#}}} \left|f(x')-f(x'')\right|^{p} d\mu_{n}(f)\right]^{1-\theta} \nonumber \\
&\cdot\left[\int\limits_{B_{F^{**}}} \left|\left\langle b^{*}, b^{**}\right\rangle\right|^{q} d\tau_{n}(b^{**})\right]^{1-\nu} \nonumber \end{align} for every $x'$, $x''$ in $X$ and $b^{*}$ in $F^{*}$. It follows that $T -T_{n}\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ and therefore $T\in\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$. From the last inequality it follows that $D_{\left(p, \theta, q, \nu\right)}^{L}(T -T_{n})\leq\epsilon$ if $n\geq n_{0}$ and hence $\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}(X,F)$ is a Banach space. \end{proof}
By Proposition \ref{sun1}, Theorem \ref{thm1}, and Corollary \ref{sun2} we obtain the following result.
\begin{prop} $\left[\mathcal{D}_{\left(p, \theta, q, \nu\right)}^{L}, D_{\left(p, \theta, q, \nu\right)}^{L}\right]$ is a Banach nonlinear ideal. \end{prop}
\begin{remark}
Definition \ref{Torezen} can be generalized as follows. Let $0\leq \theta< 1$ and let $\left[\textfrak{A}^{L}, \mathbf{A}^{L}\right]$ and $\left[\textfrak{B}^{L}, \mathbf{B}^{L}\right]$ be normed nonlinear ideals. A Lipschitz operator $T$ from $X$ into $F$ belongs to $\left(\textfrak{A}^{L}, \textfrak{B}^{L}\right)_{\theta}(X, F)$ if there exist Banach spaces $G_{1}$, $G_{2}$ and Lipschitz operators $S_{1}\in\textfrak{A}^{L}(X, G_{1})$ and $S_{2}\in\textfrak{B}^{L}(X, G_{2})$ such that \begin{equation}\label{lastwagen}
\left\|Tx'-Tx''|F\right\|\leq\left\|S_{1}x'-S_{1}x''|G_{1}\right\|^{1-\theta}\cdot \left\|S_{2}x'-S_{2}x''|G_{2}\right\|^{\theta},\ \ \forall\: x',\; x'' \in X. \end{equation} For each $T\in\left(\textfrak{A}^{L}, \textfrak{B}^{L}\right)_{\theta}(X, F)$, we set \begin{equation} \left(\mathbf{A}^{L}, \mathbf{B}^{L}\right)_{\theta}(T):=\inf\mathbf{A}^{L}(S_{1})^{1-\theta}\cdot\mathbf{B}^{L}(S_{2})^{\theta} \end{equation} where the infimum is taken over all Lipschitz operators $S_{1}$, $S_{2}$ admitted in (\ref{lastwagen}).
Note that $\Lip(T)\leq\left(\mathbf{A}^{L}, \mathbf{B}^{L}\right)_{\theta}(T)$, by definition. The nonlinear ideal $\left[\textfrak{A}^{L}_{\theta}, \mathbf{A}^{L}_{\theta}\right]$ now appear as $\left[\left(\textfrak{A}^{L}, \Lip\right)_{\theta}, \left(\mathbf{A}^{L}, \Lip(\cdot)\right)_{\theta}\right]$. \end{remark}
\section{\bf Nonlinear operator ideals between metric spaces}\label{Sec. 4}
\begin{definition}\label{Auto777777777} Suppose that, for every pair of metric spaces $X$ and $Y$, we are given a subset $\mathscr{A}^{L}(X,Y)$ of $\mathscr{L}(X,Y)$. The class $$\mathscr{A}^{L}:=\bigcup_{X,Y}\mathscr{A}^{L}(X,Y)$$ is said to be a nonlinear operator ideal, or just a nonlinear ideal, if the following conditions are satisfied:
\begin{enumerate}
\item[$\bf (\widetilde{\widetilde{NOI_0\,}})$] If $Y=F$ is a Banach space, then $g\boxdot e\in\mathscr{A}^{L}(X,F)$ for $g\in X^{\#}$ and $e\in F$.
\item[$\bf (\widetilde{\widetilde{NOI_1\,}})$] $BTA\in\mathscr{A}^{L}(X_{0}, Y_{0})$ for $A\in \mathscr{L}(X_{0},X)$, $T\in\mathscr{A}^{L}(X, Y)$, and $B\in\mathscr{L}(Y, Y_{0})$. \end{enumerate} Condition $\bf (\widetilde{\widetilde{NOI_0}})$ implies that $\mathscr{A}^{L}$ contains nonzero Lipschitz operators. \end{definition}
\subsection{Lipschitz interpolative ideal procedure between metric spaces}\label{Au3to20}
\begin{definition}\label{fruher} Let $0\leq \theta< 1$. A Lipschitz map $T$ from $X$ into $Y$ belongs to $\mathscr{A}^{L}_{\theta}(X, Y)$ if there exist a constant $C\geq 0$, a metric space $Z$ and a Lipschitz map $S\in\mathscr{A}^{L}(X, Z)$ such that \begin{equation}\label{poooor} d_{Y}(Tx', Tx'')\leq C \cdot d_{Z}(Sx', Sx'')^{1-\theta}\cdot d_{X}(x',x'')^{\theta},\ \ \forall\: x',\; x'' \in X. \end{equation} For each $T\in\mathscr{A}^{L}_{\theta}(X, Y)$, we set \begin{equation}\label{fliehen} \mathbf{A}^{L}_{\theta}(T):=\inf C\cdot \mathbf{A}^{L}(S)^{1-\theta} \end{equation} where the infimum is taken over all constants $C$ and Lipschitz operators $S$ admitted in (\ref{poooor}). \end{definition}
\begin{prop} $\mathscr{A}^{L}_{\theta}$ is a nonlinear ideal with $\mathbf{A}^{L}_{\theta}(BTA)\leq\Lip(B)\cdot\mathbf{A}^{L}_{\theta}(T)\cdot\Lip(A)$ for $A\in \mathscr{L}(X_{0},X)$, $T\in\mathscr{A}^{L}_{\theta}(X, Y)$ and $B\in\mathscr{L}(Y, Y_{0})$. \end{prop}
\begin{proof} The proof of condition $\bf (\widetilde{\widetilde{NOI_0\,}})$ is similar to the proof of algebraic condition $\bf (\widetilde{PNOI_0})$ in Proposition \ref{Flughafen}. To prove the condition $\bf (\widetilde{\widetilde{NOI_1\,}})$, let $A\in \mathscr{L}(X_{0},X)$, $T\in\mathscr{A}^{L}_{\theta}(X, Y)$, $B\in\mathscr{L}(Y, Y_{0})$ and $x'_{0}, x''_{0}$ in $X_{0}$. Then \begin{align} d_{Y_{0}}(BTA x'_{0}, BTA x''_{0})&\leq \Lip(B)\cdot d_{Y}(TA x'_{0}, TA x''_{0}) \nonumber \\ &\leq\Lip(B)\cdot C\cdot d_{Z}(S\circ A (x'_{0}), S\circ A (x''_{0}))^{1-\theta}\cdot d_{X}(Ax'_{0},Ax''_{0})^{\theta} \nonumber \\ &\leq C \cdot\Lip(B)\cdot\Lip(A)^{\theta}\cdot d_{Z}(S\circ A (x'_{0}), S\circ A (x''_{0}))^{1-\theta} d_{X_{0}}(x'_{0},x''_{0})^{\theta}. \nonumber \end{align} Since the Lipschitz map $\widetilde{S}:=S\circ A$ belongs to $\mathscr{A}^{L}(X_{0}, Z)$ and $\widetilde{C}:=C \cdot\Lip(B)\cdot\Lip(A)^{\theta}$, it follows that $BTA\in\mathscr{A}^{L}_{\theta}(X_{0}, Y_{0})$. Moreover, from (\ref{fliehen}) we have \begin{align}\label{jo} \mathbf{A}^{L}_{\theta}(BTA)&:=\inf \widetilde{C}\cdot \mathbf{A}^{L}(\widetilde{S})^{1-\theta} \nonumber \\ &\leq C \cdot\Lip(B)\cdot\Lip(A)^{\theta}\cdot \mathbf{A}^{L}(S\circ A)^{1-\theta} \nonumber \\ &\leq C \cdot\Lip(B)\cdot\Lip(A)\cdot \mathbf{A}^{L}(S)^{1-\theta}. \end{align} Taking the infimum over all admissible $C$ and $S\in\mathscr{A}^{L}(X, Z)$ on the right side of (\ref{jo}), we have $$\mathbf{A}^{L}_{\theta}(BTA)\leq\Lip(B)\cdot\mathbf{A}^{L}_{\theta}(T)\cdot\Lip(A).$$ \end{proof}
\subsection{Basic Examples of Lipschitz interpolative ideal procedure}
\begin{enumerate}
\item[$\bf (1)$] Lipschitz $\left(p,s,\theta\right)$-summing maps\label{schlumpf02}
A Lipschitz map $T$ from $X$ to $Y$ is called Lipschitz $\left(p,s,\theta\right)$-summing if there is a constant $C\geq 0$ such that
\begin{equation}\label{metaphor}
\left\|(\sigma,Tx',Tx'')\Big|\ell_{\frac{p}{1-\theta}}(\mathbb{R}\times Y\times Y)\right\|\leq C\cdot\left\|(\sigma,x',x'')\Big|\delta_{s,\theta}^{L}(\mathbb{R}\times X\times X)\right\|. \end{equation}
for arbitrary finite sequences $x'$, $x''$ in $X$ and $\sigma$ in $\mathbb{R}$. Let us denote by $\Pi^{L}_{\left(p,s,\theta\right)}(X,Y)$ the class of all Lipschitz $\left(p,s,\theta\right)$-summing maps from $X$ to $Y$ with $$\pi^{L}_{\left(p,s,\theta\right)}(T)=\inf C,$$ where the infimum is taken over all constants $C$ satisfying (\ref{metaphor}). The following result is not difficult to prove.
\begin{prop}\label{schlumrtpf02} $\left[\Pi^{L}_{\left(p,s,\theta\right)}, \pi^{L}_{\left(p,s,\theta\right)}\right]$ is a nonlinear ideal. \end{prop}
\begin{remark}
\item[$\bf (1)$] If $\theta = 0$, then the class $\Pi^{L}_{\left(p,s,\theta\right)}(X,Y)$ coincides with the class $\Pi^{L}_{\left(p,s\right)}(X,Y)$ which was considered in \cite {Mass15} for $\infty\geq p\geq s > 0$ and in \cite{JA12} for $1\leq s< p$.
\item[$\bf (2)$] In the special case $p=s$, one recovers the Lipschitz $\left(p,\theta\right)$-summing maps defined in \cite{AcRuYa}: $T$ is Lipschitz $\left(p,\theta\right)$-summing if there is a constant $C\geq 0$ such that \begin{equation}\label{metaphor1}
\left\|(\lambda,Tx',Tx'')\Big|\ell_{\frac{p}{1-\theta}}(\mathbb{R}\times Y\times Y)\right\|\leq C\cdot\left\|(\lambda,x',x'')\Big|\delta_{p,\theta}^{L}(\mathbb{R}\times X\times X)\right\|. \end{equation} for arbitrary finite sequences $x'$, $x''$ in $X$ and $\lambda$ in $\mathbb{R^{+}}$. Let us denote by $\Pi^{L}_{\left(p,\theta\right)}(X,Y)$ the class of all Lipschitz $\left(p,\theta\right)$-summing maps from $X$ to $Y$ with $$\pi^{L}_{\left(p,\theta\right)}(T)=\inf C,$$ where the infimum is taken over all constants $C$ satisfying (\ref{metaphor1}). \end{remark} The next result is a consequence of Proposition \ref{schlumrtpf02}. \begin{cor} $\left[\Pi^{L}_{\left(p,\theta\right)}, \pi^{L}_{\left(p,\theta\right)}\right]$ is a nonlinear ideal. \end{cor}
As a consequence of the general definition of the Lipschitz interpolative ideal procedure between metric spaces, Lipschitz $(p, \theta)$-summing maps admit the following characterization.
\begin{thm} \cite{AcRuYa} \label{dom} Let $1\leq p<\infty $, $0\leq \theta <1$ and $T\in \mathrm{Lip} (X,Y)$. \textit{The following statements are equivalent.}
\begin{enumerate} \item[(i)] $T\in \Pi _{p,\theta }^{L}(X,Y)$.
\item[(ii)] \textit{There is a constant }$C\geq 0$ and a regular Borel probability measure $\mu $ on $B_{X^{\#}}$ such that \begin{equation*} d_{Y}(Tx', Tx'')\leq C\left( \int\nolimits_{B_{X^{\#}}}\left(
|fx'- fx''|^{1-\theta } d_{X}(x', x'')^{\theta }\right) ^{\frac{p }{1-\theta }} d\mu \left( f\right) \right) ^{\frac{1-\theta }{p}} \end{equation*} for all $x', x''\in X$.
\item[(iii)] There is a constant $C\geq 0$ such that for all $ (x'_{j})_{j=1}^{m},(x''_{j})_{j=1}^{m}$ in $X$ and all $ (a_{j})_{j=1}^{m}\subset \mathbb{R}^{+}$ we have \begin{equation*} \begin{array}{l} \displaystyle\hspace{-1cm}\left( \sum_{j=1}^{m}a_{j} d_{Y}(T(x'_{j}),T(x''_{j}))^{\frac{p}{1-\theta } }\right) ^{\frac{1-\theta }{p}} \\ \displaystyle\leq C\underset{f\in B_{X^{\#}}}{\sup }\left(
\sum_{j=1}^{m} a_{j}\left( |f(x'_{j})-f(x''_{j})|^{1-\theta }d_{X}(x'_{j}, x''_{j})^{\theta }\right) ^{\frac{p}{1-\theta }}\right) ^{ \frac{1-\theta }{p}}\text{.} \end{array} \end{equation*}
\item [(iv)] There exists a regular Borel probability measure $\mu $\textit{on }$B_{X^{\#}}$ \textit{and a Lipschitz operator } $v:X_{p,\theta }^{\mu }\rightarrow Y$\textit{\ such that the following diagram commutes} \begin{equation*} \xymatrix{X\ar[r]^T\ar[d]^{\delta_{X}} & Y\\ \delta_{X}(X) \ar[r]^{\phi\circ i} &X_{p,\theta }^\mu \ar[u]^v} \end{equation*} Furthermore, the infimum of the constants $C\geq 0$ in $(ii)$ and $(iii)$ is $ \pi _{p,\theta }^{L}\left( T\right) $.
\item[$\bf (2)$] Lipschitz $ \left( s;q,\theta \right) $-mixing operators\label{schl45umpf02}
D. Achour, E. Dahia and M. A. S. Saleh \cite{aem18} defined a Lipschitz operator $T$ from $X$ to $Y$ to be Lipschitz $ \left( s;q,\theta \right) $-mixing if there is a constant $C\geq 0$ such that \begin{equation} \mathfrak{m}_{(s;q)}^{L,\theta }(\sigma ,Tx^{\prime },Tx^{\prime \prime })\leq C\cdot \delta _{q\theta }^{L}(\sigma ,x^{\prime },x^{\prime \prime }) \label{flat1} \end{equation} for arbitrary finite sequences $x^{\prime }$, $x^{\prime \prime }$ in $X$ and $\sigma $ in $\mathbb{R}$. Let us denote by $\mathbf{M}_{(s;q)}^{L,\theta }(X,Y)$ the class of all Lipschitz $\left( s;q,\theta \right) $-mixing maps from $X$ to $Y.$ In this case, we put \begin{equation*} \mathbf{m}_{(s;q)}^{L,\theta }(T)=\inf C, \end{equation*} where the infimum is taken over all constants $C$ satisfying (\ref{flat1}). The following result is not difficult to prove.
\begin{prop}\label{schwwlumrtpf02} $\left[\mathbf{M}_{(s;q)}^{L,\theta }, \mathbf{m}_{(s;q)}^{L,\theta }\right]$ is a nonlinear ideal. \end{prop}
\end{enumerate}
\begin{remark}
Definition \ref{fruher} can be generalized as follows. Let $0\leq \theta< 1$ and let $\mathscr{A}^{L}$ and $\mathscr{B}^{L}$ be nonlinear ideals between metric spaces, equipped with norms $\mathbf{A}^{L}$ and $\mathbf{B}^{L}$, respectively. A Lipschitz map $T$ from $X$ into $Y$ belongs to $\left(\mathscr{A}^{L}, \mathscr{B}^{L}\right)_{\theta}(X, Y)$ if there exist a constant $C\geq 0$, metric spaces $Z_{1}$, $Z_{2}$ and Lipschitz maps $S_{1}\in\mathscr{A}^{L}(X, Z_{1})$ and $S_{2}\in\mathscr{B}^{L}(X, Z_{2})$ such that \begin{equation}\label{qlastwagen} d_{Y}(Tx', Tx'')\leq C\cdot d_{Z_{1}}(S_{1}x', S_{1}x'')^{1-\theta}\cdot d_{Z_{2}}(S_{2}x', S_{2}x'')^{\theta},\ \ \forall\: x',\; x'' \in X. \end{equation} For each $T\in\left(\mathscr{A}^{L}, \mathscr{B}^{L}\right)_{\theta}(X, Y)$, we set \begin{equation} \left(\mathbf{A}^{L}, \mathbf{B}^{L}\right)_{\theta}(T):=\inf C\cdot\mathbf{A}^{L}(S_{1})^{1-\theta}\cdot\mathbf{B}^{L}(S_{2})^{\theta} \end{equation} where the infimum is taken over all constants $C$ and Lipschitz operators $S_{1}$, $S_{2}$ admitted in (\ref{qlastwagen}). The nonlinear ideal $\left[\mathscr{A}^{L}_{\theta}, \mathbf{A}^{L}_{\theta}\right]$ now appears as $\left[\left(\mathscr{A}^{L}, \mathscr{L}\right)_{\theta}, \left(\mathbf{A}^{L}, \Lip(\cdot)\right)_{\theta}\right]$.
\end{remark}
\end{document} | arXiv |
Structure-based directed evolution improves S. cerevisiae growth on xylose by influencing in vivo enzyme performance
Misun Lee1,
Henriëtte J. Rozeboom1,
Eline Keuning1,
Paul de Waal2 &
Dick B. Janssen ORCID: orcid.org/0000-0002-0834-20431
Biotechnology for Biofuels volume 13, Article number: 5 (2020)
Efficient bioethanol production from hemicellulose feedstocks by Saccharomyces cerevisiae requires xylose utilization. Whereas S. cerevisiae does not metabolize xylose, engineered strains that express xylose isomerase can metabolize xylose by converting it to xylulose. For this, the type II xylose isomerase from Piromyces (PirXI) is used but the in vivo activity is rather low and very high levels of the enzyme are needed for xylose metabolism. In this study, we explore the use of protein engineering and in vivo selection to improve the performance of PirXI. Recently solved crystal structures were used to focus mutagenesis efforts.
We constructed focused mutant libraries of Piromyces xylose isomerase by substitution of second shell residues around the substrate- and metal-binding sites. Following library transfer to S. cerevisiae and selection for enhanced xylose-supported growth under aerobic and anaerobic conditions, two novel xylose isomerase mutants were obtained, which were purified and subjected to biochemical and structural analysis. Apart from a small difference in response to metal availability, neither the new mutants nor mutants described earlier showed significant changes in catalytic performance under various in vitro assay conditions. Yet, in vivo performance was clearly improved. The enzymes appeared to function suboptimally in vivo due to enzyme loading with calcium, which gives poor xylose conversion kinetics. The results show that better in vivo enzyme performance is poorly reflected in kinetic parameters for xylose isomerization determined in vitro with a single type of added metal.
This study shows that in vivo selection can identify xylose isomerase mutants with only minor changes in catalytic properties measured under standard conditions. Metal loading of xylose isomerase expressed in yeast is suboptimal and strongly influences kinetic properties. Metal uptake, distribution and binding to xylose isomerase are highly relevant for rapid xylose conversion and may be an important target for optimizing yeast xylose metabolism.
Efficient xylose conversion is an important property when selecting or engineering yeast strains to be used in second-generation bioethanol production. Fermentation of lignocellulose-derived feedstocks, which contain up to 30% d-xylose, is often carried out by Saccharomyces cerevisiae. Since this yeast does not metabolize the aldopentose d-xylose naturally, incorporation of either xylose isomerase or a combination of xylose reductase and xylitol dehydrogenase is necessary to convert d-xylose to the ketose d-xylulose, which can be metabolized [1,2,3,4]. The use of xylose isomerase has the advantage over the xylose reductase–xylitol dehydrogenase system that there is no intermediate production of xylitol and less formation of side products, but combining the pathways may also have certain benefits [5]. Several efforts to find suitable xylose isomerases have been reported [6,7,8,9,10,11]. A xylose isomerase discovered by Kuyper et al. [12, 13] in the fungal strain Piromyces E2 through genome mining (PirXI) is an attractive candidate for xylose isomerization in engineered S. cerevisiae strains, and is used in several studies [14]. However, in vivo performance of the enzyme is modest, as indicated by the high copy number (up to 10) of the chromosomally inserted XI-encoding gene observed in evolved strains that are capable of anaerobic d-xylose fermentation [15]. A multi-copy plasmid leading to overproduction of the PirXI protein has also been used for enhanced xylose metabolism [16]. The engineering of yeast strains showing faster xylose metabolism is an important challenge in the pursuit of strain improvement for second-generation bioethanol production [17,18,19,20].
The observation that strains with multiple copies of PirXI genes evolve during prolonged adaptation suggests that in vivo enzyme activity in S. cerevisiae is limiting xylose turnover [21]. Mutations in different xylose isomerases can lead to accelerated xylose metabolism and protein engineering of xylose isomerase is receiving significant attention [11, 14, 19, 20]. However, it is unclear which properties of the enzyme need to be tailored to improve its in vivo performance. A straightforward hypothesis is that the kinetic properties as reflected in catalytic rate (kcat) and/or substrate affinity (KM) at physiological conditions are not optimal for efficient xylose metabolism. On the other hand, metal affinity and in vivo metal content of xylose isomerase may also play an important role causing the enzyme to function suboptimally. Xylose isomerase is a metalloenzyme that requires two divalent metals for activity, and the wild type shows the best activity with Mn2+ [22]. However, metal content of the enzyme expressed in yeast may vary [21, 22]. A yeast strain with a mutation in its PMR1 gene which influences manganese homeostasis and increases Mn2+ content of the PirXI protein showed an enhanced rate of xylose consumption [21]. The high expression levels of the PirXI protein in selected xylose-metabolizing strains could be a burden for cell growth, and reduced expression of a more active enzyme would improve xylose consumption and growth rates. In view of the complexity of yeast cells, there may well be other factors that determine the performance of heterologously expressed enzymes, including compartmentalization, enzyme stability, and competition for metals with other cellular components.
Modern protein engineering tools enable tailoring of enzyme properties for specific applications. A particularly effective strategy is the use of directed evolution, i.e., the construction of mutant libraries and screening those for improved variants [23,24,25]. This approach does not require structural information. Here, one can take advantage of the fact that xylose isomerase activities limit the growth of S. cerevisiae on d-xylose and employ a random mutagenesis method with in vivo selection for improved growth [9, 14]. This allowed the discovery of unexpected xylose isomerase mutations, some of which were far away from the active site. It is known that distant mutations can enhance activity, e.g., by influencing enzyme surface properties [26]. On the other hand, the lack of focus in random mutagenesis protocols yields libraries with a low abundance of beneficial mutations, and a very large number of mutants often must be screened to discover better enzymes. So-called smart libraries, which incorporate phylogenetic and structural information in the design, are assumed to better cover functional sequence space, increasing the chance of discovering useful mutations and reducing the need for extensive screening [27,28,29].
To support PirXI engineering and understand the effects of selected mutations, we have recently characterized the enzyme both structurally and biochemically [22]. Even though the unidentified causes of the modest in vivo performance of PirXI and the complexity of the kinetic mechanism cause uncertainty about the types of mutation to introduce, the structures still provide useful information by revealing the residues that shape the substrate- and metal-binding sites. In the crystal structures, PirXI appeared as a homotetramer with each monomer (49.5 kDa, 437 aa) possessing an active site in which two divalent metal ions are bound. Soaking and cocrystallization studies showed that the ring-opened xylose binds in between two fully conserved tryptophan residues (Trp50 and Trp189) which play a role in the correct positioning of the substrate for catalysis [22, 30]. Of the two active site metals, one (M1) is responsible for substrate binding while the other (M2) is essential for catalysis by polarizing the M2-bound catalytic water that protonates O1 of the substrate and consequently generates a carbocation on C1 promoting the C2 to C1 hydride shift [22, 31]. The catalytic metal M2 moves during the reaction from the M2a to the M2b position, which is also visible in structures with certain combination of ligands: 5NH7 (xylose and Mg2+), 5NHC (xylulose and Co2+), 5NHD (xylose and Ni2+) and 5NHE (xylose and Cd2+) (Fig. 1) [22].
Fig. 1 Structure of the PirXI active site and design of mutant library LibM1. The figure shows the active site structure of PirXI with xylose (yellow) and Mg2+ ions (green spheres) bound (PDB: 5NH7). The target residues (orange) are located near the active site. The catalytic metal (M2) can occupy two positions (M2a and M2b). W307′ is a residue from a neighboring subunit
To examine if structure-inspired mutagenesis can contribute to obtaining improved xylose isomerase variants and to investigate the possibility that such mutants may be useful to identify catalytic properties relevant for in vivo performance, we now designed focused mutant libraries with mutations surrounding the residues involved in metal and substrate binding. These were screened for enhanced growth after expression in yeast. Improved enzymes were indeed discovered and their biochemical properties were investigated, in comparison to previously reported PirXI variants. The results indicate that small changes in catalytic properties may be accompanied by significant effects on xylose-supported growth. Furthermore, in vivo selection may govern mutations that improve xylose metabolism without changing kinetic properties measured under standard conditions with Mg2+ as the activating metal.
To discover mutants of PirXI that enhance xylose metabolism, we designed and constructed small focused mutant libraries followed by in vivo screening for better variants. Recent structural and biochemical information was used to select target positions for mutagenesis, focusing on residues that surround the metal-binding sites. Replacing the second shell residues might have an effect on metal binding or reactivity and thereby influence the activity. The fully conserved metal-coordinating residues in PirXI are Glu233, Glu269, Asp297 and Asp340 for site M1 and Glu269, His272, Asp308 and Asp310 for the catalytic metal M2. For the first library (LibM1), we targeted residues that are in close proximity of the M2 site. Five residues were selected: three (Val270, Ala273 and Thr274) that lie on the same helix as the metal binding residues (His272 and Glu269), and two (Trp307 and Thr309) that are on a nearby loop (Fig. 1). Amino acid diversity to be introduced at each position was selected based on phylogenetic diversity, in silico-predicted stabilities of the mutants, and visual inspection of the predicted mutant structures. For phylogenetic input, a multiple sequence alignment was performed on 22 different class I and 100 class II XI sequences. Considering conservation scores and similarities between amino acid properties, the library diversity at each position was chosen. For example, residue Ala273 is fully conserved throughout all class II enzymes and therefore the diversity at this position was restricted to alanine and glycine to avoid extreme modifications. Changes in free energy of folding (ΔΔGfold) of mutants relative to the wild-type enzyme were predicted using FoldX calculations [32]. Large decreases in predicted stability were used to dismiss mutations from the library design. The resulting LibM1 library included 1008 different variants (Table 1).
Table 1 Design of PirXI library LibM1
To construct the LibM1 mutant library, at each target position a minimum number of partially undefined codons covering the selected set of amino acid substitutions was chosen using a spreadsheet implementing the CodonFinder routine [33]. The codons were selected in such a way that all the desired amino acids (including the wild-type residues) are incorporated at balanced coverage without introduction of undesired codons like stop codons. The 1008 LibM1 mutant library was covered by 8 partially undefined codons (Table 1).
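To illustrate the kind of computation behind this codon-selection step, the short script below enumerates IUPAC degenerate codons that encode only a desired residue set (and no stop codons) and reports how many of the desired residues each covers. It is a simplified, hypothetical stand-in for the CodonFinder routine cited above, not the authors' actual spreadsheet, and the example residue set (alanine plus glycine, as in the restricted position described above) is used only for illustration.

```python
from itertools import product

# Standard genetic code; codons ordered with bases T, C, A, G (third base cycling fastest).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

# IUPAC degenerate nucleotide codes and the bases they stand for.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "AG", "Y": "CT", "S": "GC",
         "W": "AT", "K": "GT", "M": "AC", "B": "CGT", "D": "AGT", "H": "ACT",
         "V": "ACG", "N": "ACGT"}

def encoded(degenerate_codon):
    """Set of amino acids (with '*' for stop) encoded by a degenerate codon."""
    return {CODE["".join(c)] for c in product(*(IUPAC[b] for b in degenerate_codon))}

def candidate_codons(allowed):
    """Degenerate codons encoding only residues from `allowed` (and no stops),
    sorted so that codons covering more of the allowed set come first."""
    hits = []
    for letters in product(IUPAC, repeat=3):
        aa = encoded(letters)
        if aa <= set(allowed):
            hits.append(("".join(letters), "".join(sorted(aa))))
    return sorted(hits, key=lambda hit: -len(hit[1]))

# Example: a position where only alanine and glycine are wanted.
for codon, covered in candidate_codons("AG")[:5]:
    print(codon, "->", covered)
```

Balancing such codons over several positions at once, while keeping the wild-type residues and avoiding unwanted substitutions, is the kind of task the cited CodonFinder routine automates.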
Library DNA was obtained by generating gene fragments using PCR and subsequent cloning into E. coli–yeast shuttle expression vector (pRS426-URA) as described in "Methods". Transformation of the library DNA to E. coli resulted in over 8000 clones, of which pooled plasmid DNA was transformed to S. cerevisiae strain DS75543, producing 6000 colonies. Considering the library size of 1008, these numbers are sufficient for near full library coverage [33]. Prior to yeast transformation, library diversity was confirmed by sequencing a mixture of plasmids isolated from the mixed collection of E. coli transformants. The sequencing results showed that all expected bases were incorporated at the correct positions, indicating sufficient library quality to proceed to screening.
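As a rough plausibility check of the statement that these clone numbers give near-full coverage, one can compute the expected fraction of the 1008 designed variants that is sampled, under the idealized assumption that each transformant carries one variant drawn uniformly and independently at random (ignoring cloning bias). The snippet below is only such a back-of-the-envelope sketch.

```python
def expected_coverage(library_size: int, n_clones: int) -> float:
    """Expected fraction of distinct variants among n_clones, assuming each clone
    carries one variant drawn uniformly and independently at random."""
    return 1.0 - (1.0 - 1.0 / library_size) ** n_clones

LIBRARY_SIZE = 1008  # designed diversity of LibM1
for n_clones in (1008, 3000, 6000, 8000):
    print(f"{n_clones:>5} clones -> expected coverage {expected_coverage(LIBRARY_SIZE, n_clones):.3f}")
```

Under this assumption, ~6000 yeast colonies would be expected to contain roughly 99.7% of the designed variants, consistent with the near-full coverage claimed above.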
Screening for improved xylose utilization
Library LibM1 was screened by growth competition of S. cerevisiae strain DS75543 transformed with library plasmids DNA. The entire collection of yeast transformants was inoculated into xylose medium and cells were cultivated with serial transfers to fresh xylose medium. Faster growing cells, which over time became dominant, were assumed to harbor an improved PirXI. Screening was performed both under aerobic and anaerobic (oxygen-limited) conditions, each in duplicate, as different variants can be expected depending on the metabolic status of the cells. For anaerobic growth, the cultures were kept oxygen-limited as described in "Methods". The effect of limited oxygen availability was reflected in the final cell densities (OD600) of the cultures, which were ~ 3 and > 20 for anaerobic and aerobic conditions, respectively. Anaerobic cultures initially required 8–9 days before growth occurred. A reduced lag time and/or increased growth rate was observed after multiple transfers with all four selection cultures, also in comparison to a control culture harboring only wild-type PirXI.
Both aerobic and anaerobic cultures were harvested after the 10th transfer and plasmids were isolated to evaluate the selected PirXI genes. Sequencing showed that all four cultures, i.e., both the aerobic duplicates and anaerobic duplicates, contained only one PirXI variant, which carried the mutations V270A and A273G.
Effect of V270A–A273G PirXI on xylose utilization
The consistent selection of the V270A–A273G variant from library LibM1 suggested that the screening method was reliable and sensitive. Nevertheless, it is possible that other events such as genomic mutations or variations in expression level were responsible for the improved growth of yeast carrying the V270A–A273G mutant PirXI. The replicon of the pRS416 vector used in this work is derived from yeast plasmid 2µ, and plasmid copy numbers can vary from culture to culture [34]. Furthermore, in laboratory evolution of S. cerevisiae for growth on xylose, cells may acquire diverse chromosomal mutations that cause improved growth [35, 36]. To prove that in our case the selected mutations in the PirXI structural gene caused improved growth on xylose, the mutations were reconstructed by site-directed mutagenesis in the original PirXI gene and cloned in a vector that was not subjected to previous selection. S. cerevisiae DS75543 cells were then transformed with the freshly prepared constructs and their growth performance in a 96-well plate was monitored. We have repeated this process several times and consistently observed that the cells containing the mutant PirXI grow on xylose better compared to those containing wild-type PirXI (Fig. 2a). We have also observed that general growth performance of the cells slightly differs between experiments as well as between clones picked from a single transformation experiment. Figure 2b shows a high variability in growth between 30 independent transformants despite their identical genotype. Nevertheless, the results clearly indicate that the V270A–A273G mutant PirXI improves growth on xylose as compared to wild type, especially in the earlier phases of growth (Fig. 2b). On average, the mutant cultures started to grow earlier and more quickly reached their final density.
Fig. 2 PirXI V270A–A273G improves growth of S. cerevisiae strain DS75543 on xylose. The orange and green lines represent growth on xylose (20 g l−1) of yeast expressing the mutant and the wild-type PirXI, respectively. a, b Growth in 96-well plate with continuous measurement of optical density. Each line represents an individual clone selected from a transformation plate. c Growth in triplicate in batch culture with intermittent sampling and measurement of optical densities
In 96-well plates, oxygen availability may not be well controlled and results could be influenced by evaporation. Therefore, a comparison between the wild-type and V270A–A273G XI variants was also performed using shake flask cultures with replicates inoculated with pre-cultures from independent transformants. The resulting growth curves (Fig. 2c) confirm that the mutated PirXI is beneficial for growth on xylose. The specific growth rates (µ) were calculated from the exponential part of the curves, using the following equation for fitting: lnX = lnX0 + µ(t − t0), where X is the measured OD600 and µ is the rate. The average growth rates of the wild type and the mutant are 0.13 ± 0.01 and 0.18 ± 0.01 h−1, respectively. These results show that the observed improved growth is due to the V270A–A273G mutations in PirXI, not by unidentified mutations elsewhere on the plasmid or in the chromosome of the selected transformants.
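The fit of lnX = lnX0 + µ(t − t0) to the exponential phase amounts to a linear regression of ln(OD600) on time. A minimal sketch of such a fit (with invented OD readings, not the measured data) could look like this:

```python
import numpy as np

# Hypothetical OD600 readings during the exponential phase (time in hours).
t = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0])
od = np.array([0.21, 0.30, 0.43, 0.62, 0.89, 1.27])

# Linear regression of ln(OD600) against time; the slope is the specific growth rate mu (h^-1).
mu, ln_x0 = np.polyfit(t, np.log(od), 1)
print(f"mu = {mu:.3f} per hour, doubling time = {np.log(2) / mu:.1f} h")
```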
Yeast cells selected for good growth on xylose show high overexpression of PirXI [12, 15], which may be a metabolic burden for the cells and trigger selection of mutants with a higher activity:expression ratio in competition experiments. To examine if the PirXI mutations affected enzyme expression, we studied XI levels in cells grown on xylose. The specific activities with 100 mM xylose measured in cell-free extracts were 0.94 U/mg and 0.54 U/mg for the wild type and the mutant, respectively. SDS-PAGE gels revealed that the expression levels for the wild-type and the mutant enzyme were similar (Fig. 3).
Fig. 3 Expression level and cell-free extract activities of PirXI variants. Samples of 2 µg of crude extract protein prepared from cells grown on xylose medium were loaded on an SDS gel. The dominant bands at ~ 50 kDa represent PirXI. The expression level calculated by measuring the intensity of the bands is about 25% for both samples. WT: wild-type PirXI; VA: V270A–A273G PirXI
The uncertainty of in vivo metal binding properties of PirXI and metal content of the yeast cytoplasm makes it difficult to define a metal composition for assays that gives results reporting on in vivo performance. When we measured the PirXI activity of extracts of S. cerevisiae DS75543 cells without metal addition, the results indicated that the activity of the wild-type enzyme was almost twofold higher compared to the mutant. This unexpected observation could be caused by changes in PirXI metal composition during enzyme preparation and dilution, e.g., due to binding of metals released from organelles such as vacuoles or from changes in metal–protein interactions.
To examine if the individual mutations in the PirXI variant are both necessary for improved growth, we constructed the single mutants V270A and A273G and examined the effect on growth on xylose, particularly on the early growth phase. The growth curves indicate that the V270A mutation has a larger effect, showing much earlier initiation of exponential growth (Fig. 4), but also cells containing the A273G mutant PirXI showed a slight growth improvement compared to the cells expressing the wild-type enzyme.
Fig. 4 Effect of the mutations V270A and A273G on growth. Growth in xylose medium of S. cerevisiae cells containing PirXI variants was followed. Wild type (green lines), V270A (blue lines), A273G (red lines) or V270A–A273G (orange lines). Measurements were performed in triplicate, shown as individual lines
Kinetic properties of PirXI V270A–A273G
We expected that the positive effect of the V270A–A273G mutations on xylose-supported growth would be due to improved catalytic parameters, i.e., increased catalytic rate (kcat) or a better substrate affinity (reduced KM). Since PirXI can be activated by different divalent metals and activities depend on the type of metal that is bound [22], we measured XI activities with metals that the enzyme potentially binds in vivo as previously found by metal analysis (Mg2+, Mn2+ or Ca2+) [21]. With none of these metals, the in vitro activities revealed an increased kcat or decreased KM for the mutant enzyme in comparison to wild type. In contrast, the wild type performed better in the presence of all metals tested, showing slightly higher catalytic rates and substrate affinities (Table 2). Especially, with Mn2+, the activity of the mutant decreased 50% compared to the wild type. This result indicates that an increase in specific activity with these metals, at least individually, is not responsible for the improved growth on xylose of yeast expressing V270A–A273G PirXI.
Table 2 Kinetic parameters of PirXI variants
Besides the metal-dependence of the isomerase, metal affinities were considered as a possible cause of improved in vivo enzyme performance. We estimated metal affinities of the wild-type and the mutant enzyme by measuring the activation constant (Kact) for each metal. This constant represents the metal concentration giving half-maximal enzyme activity. Since xylose isomerase requires two metals for activity, the value depends on the binding sites with the lowest affinity if both sites must be occupied. For measuring Kact with Mg2+ and Mn2+, 100 mM xylose was used as substrate. In case of Ca2+, 400 mM xylose was used since the KM,xylose of the PirXI-Ca2+ is very high (Table 2). The data showed that the V270A–A273G mutant showed slightly higher affinity for Ca2+ and Mn2+, whereas the wild-type enzyme has slightly higher affinity for Mg2+ (Table 2). However, the differences are small and do not indicate a shift in metal affinity as the cause of improved growth.
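Both the substrate-saturation parameters (kcat, KM) and the activation constant Kact describe hyperbolic saturation behaviour, so they are typically extracted by nonlinear least-squares fits of the same functional form. The sketch below, with invented rate data, shows one way such a fit could be set up; it is not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, vmax, k_half):
    """Michaelis-Menten / hyperbolic activation curve: v = Vmax * x / (K + x)."""
    return vmax * x / (k_half + x)

# Hypothetical initial rates (U/mg) at increasing xylose concentrations (mM).
s = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
v = np.array([0.9, 1.6, 2.6, 3.7, 4.6, 5.2, 5.6])

(vmax, km), _ = curve_fit(hyperbola, s, v, p0=(6.0, 30.0))
print(f"Vmax = {vmax:.2f} U/mg, KM = {km:.0f} mM")
# kcat follows from Vmax and the molar enzyme concentration used in the assay.
# Kact is obtained with the same hyperbola, with metal concentration on the x-axis
# at a fixed, saturating xylose concentration.
```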
Metal affinity was also examined by measuring the effect of metal addition on PirXI thermostability since metal binding can stabilize metalloenzymes [37]. Effects on apparent melting temperatures were measured in the presence of different concentrations of metals using thermal shift assays (Fig. 5). The results showed that the apo forms of wild-type and V270A–A273G PirXI have a similar thermostability. Interestingly, whereas Tm,app of the wild-type PirXI increased with metal concentration according to a hyperbolic saturation-like curve, the Tm,app of the mutant enzyme was constant up to ca. 200 µM of Mn2+ or Ca2+ added, with an increase at higher metal concentrations (Fig. 5a, c). In contrast, when Mg2+ was added, the thermostability of the mutant enzyme was not increased even at concentrations that were saturating for enzyme activity (Fig. 5b). The difference between the metal-concentration dependence of mid points of thermal shift assays and Kact values measured in the presence of substrate suggests that substrate influences metal binding, as also observed when examining X-ray structures of the enzyme with different combinations of ligands [22]. Only in the presence of the substrate xylose both metal-binding sites in the crystal structures were occupied, whereas in xylitol- or glycerol-bound enzyme only the M1 metal site was occupied with a metal ion.
Effect of metals on thermostability of PirXI. The graphs show the apparent melting temperatures of purified and EDTA-treated wild-type PirXI (green lines) and the V270A–A273G mutant PirXI (orange lines) in the presence of various concentrations of metals (a Mn2+; b Mg2+; and c Ca2+). The Tm values are averages from two independent measurements. [E] = 20 μM. Error bars represent standard deviations
The metal content of yeast cells is complex and consists of both free metal ions and metal ions bound to macromolecules [38]. In general, in vivo metal binding by metalloproteins is controlled by mechanisms such as intracellular metal homeostasis, localization of protein folding, and activities of metal transporters and metallochaperones [39]. In a previous study, we showed that changes in intracellular metal composition affect metal composition of PirXI, which in turn influences catalytic performance [21]. PirXI isolated from yeast grown on xylose is mostly bound with Ca2+, which barely activates the enzyme. Therefore, a large portion of PirXI does not contribute to in vivo conversion of xylose. In contrast, the smaller fraction of PirXI that is bound with the strongly activating Mn2+ contributes most to the in vivo enzyme activity [21].
In view of the complex metal composition of S. cerevisiae, we measured the activities of the wild type and the V270A–A273G mutant in the presence of varying concentrations of Mn2+ and a fixed high concentration (1 mM) of Ca2+ (Fig. 6). As expected, both variants showed higher activity with increasing concentration of Mn2+. Interestingly, the degree to which Mn2+ influenced the activity was different between the wild-type and the mutant enzyme. At low concentrations of Mn2+ (10–100 µM) and in the presence of 1 mM Ca2+ the mutant enzyme showed slightly higher activity. This indicates that the activation of the mutant enzyme by Mn2+ in the presence of a high concentration of Ca2+ is improved. Even though the activity of the mutant is lower in the presence of Mn2+ or Ca2+ alone, at certain low Mn2+/Ca2+ concentration ratios, the V270A–A273G mutant enzyme is better activated than the wild type. These results suggest that differences in in vivo metal activation may be responsible for the improved growth of yeast cells expressing the V270A–A273G mutant PirXI.
PirXI activity in the presence of Ca2+ and Mn2+. The activities of wild-type enzyme (green line) and the variant V270A–A273G (orange line) on 100 mM xylose were measured in the presence of a mixture of 1 mM Ca2+ and various concentrations of Mn2+. The data represent the average values from duplicate measurements and the error bars represent standard deviations
Crystal structures of PirXI wild type and V270A–A273G
To examine possible structural changes in the V270A–A273G PirXI, we solved and compared crystal structures of this variant and the wild-type enzyme purified from yeast (Fig. 7). The overall structures of the wild type and mutant enzyme are very similar and confirmed the mutations. In the mutant structure, the side chain of Phe280 moves ~ 0.5 Å in the direction of Ala270. Due to the decrease in hydrophobicity and size the surrounding waters also shift towards Ala270. Mutation A273G shows no effect on the structure. There is no significant difference between the two structures to explain the improved in vivo performance of the mutant.
Structural alignment of active sites of wild-type and V270A–A273G PirXI. The X-ray structures are of wild-type (gray) and V270A–A273G mutant (orange) PirXI, both isolated from yeast cells grown on xylose. Ligand colors: xylose (yellow—wild type, cyan—mutant); metals (light green—wild type, dark green—mutant)
The enzyme crystals were prepared without removal or addition of metal ions so that only intrinsic metal ions are present. When the metal ions were refined as Mg2+, the Fo–Fc map showed unaccounted electron density at the metal positions, suggesting the presence of heavier ions. In an anomalous electron density map a clear signal was observed at the M1 and M2a positions with σ levels of 4.8 and 3.5, respectively. Mg2+ ions do not have an anomalous signal at the in-house used wavelength of 1.54 Å. However, a comparison with anomalous maps of previously determined structures [22] shows similar peak heights in the Ca–xylose structure of PirXI (PDB code 5NH8). Therefore, metal ions at the M1 and M2a positions were refined as Ca2+ with 100% occupancy, resulting in a flat Fo–Fc map in both the wild-type and the double-mutant structures. The temperature factors (B-factors) of the two Ca2+ ions are 11.9 and 16.4 for the wild type and 11.4 and 19.3 for the mutant, which are lower than those of the surrounding residues. The distances of the coordinating side chains to the M1 ion in the wild-type PirXI and the mutant enzyme isolated from yeast are similar to those in the wild-type Ca–xylose structure reported earlier (5NH8). These results indicate that most of the metal-binding sites of PirXI isolated from yeast are occupied by poorly activating Ca2+ ions, both in the wild-type and in the PirXI V270A–A273G mutant enzyme. Other metal ions, such as Mg2+, Fe2+, Mn2+ or Co2+, may be bound with low occupancy.
Construction and screening of library LibM2
A second library design for discovery of better xylose isomerase mutants focused on mutations in a stretch of six residues flanking the substrate binding site. In this case, to avoid the risk of improved growth by chromosomal mutations, we compared growth properties of yeast clones transformed with the library DNA (Table 3). The growth of library colonies on solid medium containing xylose as sole carbon source was monitored by visual inspection. Plasmid DNA was isolated from suspected positive (larger) clones, retransformed to yeast and rescreened.
Table 3 Library LibM2 variants and codons
Library design again included selection of target positions and diversity to be introduced at each position (Fig. 8). The residues at the six target positions (Ser141, Thr142, Ala143, Asn144, Val145 and Gly147) at the C5 side of the substrate interact with the substrate either directly or indirectly. Therefore, it was expected that modifying these residues can improve substrate binding and the catalytic rate. Residue Thr142 is fully conserved throughout all known xylose isomerase sequences. In the structure it is connected to O5 of the substrate via a water molecule. To keep this interaction, we limited the diversity at this position to Thr and Ser. In a previous study, the PirXI T142S mutation was discovered to improve the growth of yeast on xylose [14]. We preserved Phe146 as it is fully conserved and it plays an important role in keeping the active site hydrophobic. Together with Trp189 and other hydrophobic aromatic residues (Trp50 and Phe61) this promotes the hydride shift by shielding the hydride from solvent [40,41,42]. The resulting library consists of 3584 variants (Table 3).
Target residues for library LibM2. Residues mutated in library M2 (magenta) surround the substrate binding site. Residue Thr142 and xylose (yellow) may interact (dashed lines) via a water molecule (red sphere). Magnesium ions and the coordinating residues are depicted as green spheres and gray. The metal binding residues and conserved active site hydrophobic residues Phe146 and Trp189 are shown as gray sticks. PDB 5NH7
The library was constructed using the same strategy as for library LibM1. The initial E. coli transformation yielded ca. 8000 colonies and the diversity was confirmed by sequencing a plasmid mixture obtained from pooled transformants. The subsequent transformation to S. cerevisiae DS75543 also resulted in over 8000 colonies. For identification of clones showing improved xylose utilization, transformed cells were washed from glucose plates and spread on xylose plates (see "Methods" for details). Many cells did not grow at all or started to grow very slowly, causing visible differences between individual colonies, also for wild type. The latter indicated that factors other than PirXI activity influenced colony growth, for example the physiological status of transformed cells at the moment of plating. After three rounds of screening and retransformation, 46 colonies which showed superior growth were selected. To identify the best variant, growth in xylose-containing liquid medium was measured using 96-well plates and compared to wild type. Most of the 46 variants reproducibly showed improved growth. The xylose isomerase genes from the 24 best growing variants were sequenced, revealing 10 different variants, one of which was wild type. The sequences that appeared most frequently (4–8 times) were reconstructed in a clean background and the effects on growth on xylose were evaluated after transformation to fresh DS75543 cells. Among these mutants, variant S1 (S141N–T142S–A143S–G147A) consistently showed the biggest improvement of growth on xylose when several independent cultures were tested. As shown with variant V270A–A273G, the most significant effect of mutant S1 also appears to be on the earlier start of the growth while showing a slightly increased exponential growth rate (Fig. 9).
PirXI mutant S1 (S141N–T142S–A143S–G147A) improves growth on xylose. Growth of S. cerevisiae DS75543 cells harboring wild-type PirXI (green) or mutant PirXI S1 (magenta) on xylose (20 g l−1). Each line represents a biological replicate. The calculated average growth rates were 0.13 ± 0.01 h−1 and 0.18 ± 0.01 h−1 for the wild type and the mutant, respectively
Kinetic properties of PirXI S1
The reconstructed PirXI mutant S1 was purified from E. coli and its activity was measured after reconstitution with different metals. As with variant V270A–A273G, the Michaelis–Menten kinetic parameters measured in the presence of Mg2+, Mn2+ or Ca2+ revealed reduced kcat values as compared to wild type (Table 2). The KM for xylose was also several fold higher in the presence of Mg2+ or Mn2+ compared to the wild type. Furthermore, the Kact values indicate that a shift in metal affinity does not promote better in vivo performance of the mutant enzyme as the affinity towards the most activating metal Mn2+ decreased, while the affinity towards Ca2+, which poorly activates the enzyme, increased.
Performance of other xylose isomerase mutants
The results described above indicate that both libraries yielded PirXI mutants that caused accelerated growth on xylose. However, their properties exhibited disconnection between in vivo performance and in vitro catalytic properties. We further explored the ambiguous relation between in vivo and in vitro enzyme properties by studying PirXI mutants discovered independently in previous studies, using different S. cerevisiae host strains [9, 14]. Using directed evolution with random mutant libraries, Lee et al. discovered PirXI variant E15D–T142S which increased the growth rate on xylose from 0.01 h−1 up to 0.06 h−1 [14]. Activities of these enzymes have only been measured with cell lysates, making a comparison difficult, but KM values appear high. Later, Katahira et al. discovered that mutations at position N338, especially substitution N338C, improved growth of yeast on xylose. This mutation was effective not only in PirXI but also in related XIs [9]. It was reported that yeast cells carrying the N338C variant of PirXI consumed xylose 3–4 times faster than cells carrying wild-type PirXI. The catalytic properties of the mutant enzymes from these studies have not been described. Very recently, when our study was nearing completion, Seike et al. described mutations in XI from Lachnoclostridium phytofermentans (LpXI) that enhanced d-xylose metabolism [11]. The most effective mutations were T63I and V162A. The corresponding positions in PirXI are distant from the active site and the mutations were not examined here.
We constructed the two earlier PirXI mutants [9, 14], E15D–T142S and N338C, and confirmed that the variants are beneficial for PirXI-mediated growth of strain DS75543 on d-xylose as well (Fig. 10). Subsequently, we expressed the mutants in E. coli, purified the enzymes and measured kinetic parameters. For this, activities were determined with Mg2+, Mn2+ or Ca2+ added to the apoenzyme and the activation constants were also determined (Table 2). As with the new mutants described in the current paper, Michaelis–Menten parameters and metal affinities did not reflect the positive effect of the mutations on growth, with the exception of an increased kcat of the N338C mutant in the presence of Mg2+ and Mn2+. However, this enzyme also has a higher KM. This result shows that the disconnection between in vivo performance and in vitro properties of PirXI is not dependent on the screening strain or selection conditions.
Growth of S. cerevisiae expressing PirXI variants E15D–T142S or N338C. Growth of cells expressing the wild-type PirXI (green lines), a mutant PirXI E15D–T142S (red lines) or N338C (blue lines) on xylose (20 g l−1) medium were measured using a microtiter plate reader. Individual lines represent biological replicates
Improving d-xylose metabolism is an ongoing challenge for optimizing second-generation bioethanol production by engineered strains of S. cerevisiae. Important improvements in yeast performance have been achieved by evolutionary and metabolic engineering, which can influence different steps in central metabolism [16, 43], and by enhancing the xylose uptake system [44, 45]. Introduction of efficient enzymes for initial isomerization of d-xylose to d-xylulose is an equally important target. The incorporation of Piromyces xylose isomerase in xylulose fermenting yeast strains allowed xylose utilization [12, 13], but very high expression levels are needed for optimal performance, as indicated by gene amplification up to over 10 copies during adaptation [15, 21], leading to production of xylose isomerase at up to 25% of the cellular protein (Fig. 3). This suggested that the enzyme has poor in vivo kinetics and stimulated research aimed at discovering better xylose isomerase variants [9, 14].
In view of the catalytic properties of PirXI, the need for such a high expression level is unexpected. At the observed xylose isomerase content of 25%, one would expect the isomerization reaction not to be growth-limiting. The relation between growth rate μ and xylose consumption V can be expressed as:
$$\mu = V \cdot Y, \quad \text{with}$$
$$V = [E] \cdot \frac{V_{\max} \cdot [S]}{K_{\mathrm{M}} + [S]},$$
where [E] is the enzyme content (ca. 0.1 g PirXI per g biomass, estimated based on 25% of the total protein being XI and a total protein content of 0.4 g per g biomass [46, 47]), Vmax (8.3 U/mg, from kcat = 6.9 s−1 with Mn2+) and KM (6.2 mM) have their usual meaning. Y represents the yield on xylose (0.25 g cell dry weight/g xylose converted) estimated from a previous study performed with S. cerevisiae grown under similar conditions [48]. This predicts a PirXI activity at [S] = 1–10 mM (a reasonable intracellular substrate concentration) [49, 50] of 1–4.6 g xylose converted per g biomass per h, allowing a growth rate of μ = 0.25–1.15 h−1. The experimentally observed growth was around 0.13 h−1, suggesting that xylose isomerase should not be rate limiting if it were fully active. PirXI is also well expressed and folded in vivo, which may be troublesome with other XIs, as illustrated by the extreme case of the xylose isomerase from Actinoplanes missouriensis, which in vitro looks catalytically superior to PirXI but fails to function in S. cerevisiae [6, 51]. In view of earlier work on the effect of metals on PirXI activity and the observation that mutations influencing manganese homeostasis can improve xylose metabolism, we initially suspected that the modest activity of the enzyme might be due to suboptimal metal loading and that xylose utilization could be improved by engineering PirXI variants carrying mutations surrounding the metal-binding sites.
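As a quick numerical check, the estimate above can be reproduced in a few lines of code. This is only an illustrative sketch: the parameter values are the assumptions quoted in the text (enzyme content, kcat with Mn2+, KM and biomass yield), and the variable names are ours.

```python
# Back-of-envelope reproduction of the growth-rate estimate above.
# All numbers are the illustrative assumptions quoted in the text.
E = 0.1             # g PirXI per g biomass (~25% of 0.4 g total protein per g biomass)
Vmax = 8.3          # U/mg = umol xylose min^-1 (mg enzyme)^-1 = mmol min^-1 (g enzyme)^-1
Km = 6.2            # mM (with Mn2+)
Y = 0.25            # g cell dry weight per g xylose converted
MW_XYLOSE = 150.13  # g/mol

for S in (1.0, 10.0):                        # assumed intracellular xylose, mM
    v = E * Vmax * S / (Km + S)              # mmol xylose (g biomass)^-1 min^-1
    v_g_per_h = v * 60 * MW_XYLOSE / 1000    # g xylose (g biomass)^-1 h^-1
    mu = Y * v_g_per_h                       # predicted growth rate, h^-1
    print(f"[S] = {S:4.1f} mM: V = {v_g_per_h:.2f} g/g/h, mu = {mu:.2f} 1/h")
```

At [S] = 1 and 10 mM this reproduces the quoted ranges of roughly 1–4.6 g xylose per g biomass per h and μ ≈ 0.25–1.15 h−1.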
The discovery and engineering of XI variants that improve xylose metabolism has been pursued by different groups [9, 11, 52]. However, the connection between in vitro kinetics of xylose isomerase and yeast growth on xylose remains rather unclear. Recently, Seike et al. [11] compared xylose isomerases from different organisms and found that an enzyme from L. phytofermentans (LpXI) and two mutants thereof gave the highest xylose consumption rate even though its activity measured in cell lysates was not better than that of cells expressing PirXI [7]. Mutants of LpXI that gave better xylose consumption were found. A higher activity (Vmax) and lower KM were found with a double mutant of LpXI, but the experiments were done with whole-cell lysates reconstituted with Mg2+, so a comparison of intrinsic kinetic parameters is difficult. Similarly, an XI from Burkholderia cenocepacia which gave higher in vitro activity of cell lysates compared to PirXI and LpXI [52] did not seem beneficial for xylose fermentation [11]. In the same context, a recently discovered XI from the gut bacterium R. speratus was found to be better for xylose fermentation than PirXI, but this could not be explained by differences in in vitro catalytic performance [9].
We initially expected that selection of faster growing yeast strains from libraries expressing mutants of PirXI would give variants with improved kinetic parameters (higher kcat, lower KM) with Mn2+. Also changes in metal binding affinity or shifts in metal preference could be expected. Two focused libraries with good diversity at the target positions were constructed and improved mutants were indeed obtained by batch culture selection and plate screening for higher growth rates. From the first library (LibM1), the same V270A–A273G PirXI variant was repeatedly retrieved, both from aerobic and anaerobic duplicate cultures. This indicates that the features of PirXI which limit growth of yeast on xylose are not dependent on oxygen availability, and that aerobic screening is possible to discover mutations that contribute to anaerobic xylose metabolism as well. Examining a reconstructed V270A–A273G PirXI mutant demonstrated that the improved performance was due to the mutations in the PirXI structural gene. A second focused library (LibM2) was screened on solid xylose medium and led to the discovery of the fourfold mutant S141N–T142S–A143S–G147A. Again, the contribution of chromosomal mutations was excluded. After discovery of these new mutants, we investigated the relation between enzyme kinetics and improved growth, an issue that is also still open for earlier mutants of PirXI which were obtained by enrichment after error-prone PCR [9, 14]. With the reconstructed mutant genes expressed in a clean expression vector and host, we found that all four PirXI variants improved growth on xylose of S. cerevisiae DS75543, a strain different from the one used earlier by others [9, 14].
Using purified proteins of the two new mutants (V270A–A273G and S141N–T142S–A143S–G147A) as well as of the earlier variants (E15D–T142S and N338C) [9, 14], we measured activities and kinetic parameters under a variety of conditions. Activities at physiological pH (~ 7) and temperature (30 °C) initially did not reveal any obvious features of the mutant enzymes that account for faster growth on xylose. Measurements with enzyme variants that were reconstituted with a single type of metal indicated that the mutants under these conditions had no advantage over the wild type, but this likely reflects in vivo conditions only incompletely, since in vivo the enzyme binds mixtures of metals and needs to function in the complex environment of the cytoplasm. Metal binding in the cytosol of S. cerevisiae is dependent on metal availability as well as binding affinities, which differ among xylose isomerases and between the two binding sites [53, 54]. The most pronounced yet small effect was the increased kcat of the N338C mutant with Mn2+ and Mg2+ as activating metals, as well as a slightly improved activity of the V270A–A273G variant in the presence of a low concentration of Mn2+ and a high concentration of Ca2+. Thus, at certain concentrations of these two metals, the mutant enzyme can be more active than the wild type. With all PirXI variants examined, Mn2+ gave much better activity than Ca2+, emphasizing the importance of Mn2+ homeostasis for enzyme activity, which was also shown in our previous study where cellular manganese content was enhanced by mutations in a metal transporter [21].
We also found that the main metal in PirXI isolated from yeast is Ca2+ and only a small amount of Mn2+ is present [21]. This metal composition of the enzyme is far from optimal for catalysis as Ca2+-bound PirXI shows very high KM for xylose, over 200-fold higher than with the catalytically preferred Mn2+ (Table 2). A shift in metal preference thus can explain improved in vivo performance, although establishing a quantitative correlation is impossible since in vivo metal binding of the enzyme is difficult to predict and measure, especially because the enzyme has two metal-binding sites with different affinities, with one site essential for catalysis yet probably only occupied when substrate is bound [22]. Indeed, the fact that the metal composition of PirXI does not strictly follow the apparent metal affinities (Kact) and intracellular metal composition of S. cerevisiae indicates a complex in vivo metal binding mechanism of the enzyme [21]. Changes in metal binding were also suggested by a different response of the wild-type and mutant PirXIs to metal titration followed by thermal shift assays. While the thermostability of wild type increases with increasing metal concentrations, the mutant V270A–A273G required high concentrations of the metals (> 200 µM) for an effect on thermostability, illustrating that the typical increase in thermostability of a metalloenzyme upon metal binding is abolished by the mutations. It is possible that binding of metals is affected by the availability and binding of substrate [22, 40].
Even though the mutations changed the metal specificity of the enzyme, increasing the affinity for both Mn2+ and Ca2+ slightly, crystal structures of mutated PirXI isolated from yeast showed that the enzyme still has Ca2+ ions occupying both metal-binding sites. It is interesting that the structure of PirXI isolated from E. coli showed a metal composition quite different from that of the enzyme produced in yeast, with only the M1 binding site occupied by Fe2+ (25%), Ca2+ (40%) and Mg2+ (35%), as estimated from structural refinement, and with the M2 site left empty. These data confirm that in yeast most of the enzyme is loaded with the catalytically impractical Ca2+ ions, with a high impact on the in vivo performance of the enzyme. This indicates that conclusions about increased Vmax values of evolved XIs should be considered with care, especially in case of measurements performed with cell-free extracts instead of purified enzyme and in the presence of an excess of added Mg2+, which is routinely used in XI assays [11, 14].
Other factors that might possibly influence in vivo enzyme performance include enzyme stability (lifetime), compartmentalization, and interaction with other cellular macromolecules. PirXI originates from a heterologous host causing the enzyme to be not evolutionarily optimized for functioning in S. cerevisiae. The distribution of metals over cellular compartments and the sequestration of metals by other macromolecules are likely to have a major impact on the metal availability for PirXI in vivo [55]. Such differences in enzyme properties beyond kinetic parameters may influence the performance of xylose isomerase variants. Differences between host strains, as well as variations in cultivation conditions and assay conditions make it difficult to compare the performance of xylose isomerases in yeast xylose metabolism.
As part of developing efficient second-generation bioethanol production, there have been efforts to engineer xylose isomerase variants that improve growth of S. cerevisiae on xylose. We found that design of focused libraries of Piromyces xylose isomerase, based on inspection of the crystal structure, followed by growth-based screening, gives mutants that improve the growth of yeast on xylose. The mutations differ from those found earlier in random mutant libraries constructed by error-prone PCR. The new mutants described here, and the two mutants discovered earlier, did not show improved xylose isomerization kinetics when tested in vitro in the presence of an excess of single metals. Yet, metal occupation of the enzyme is of key importance as indicated by the low in vivo activities and the high calcium content found by metal analysis and X-ray crystallography of xylose isomerase isolated from yeast. Small differences in relative metal affinities and activities can explain the improved growth caused by mutations in the second shell of metal coordination. Rational redesign of xylose isomerase for better in vivo performance would require a theoretical framework that describes xylose isomerase structure–activity relations as a function of metal incorporation and activation.
Strain and plasmid construct
The E. coli strain used in this work is NEB 10 β (New England Biolab). S. cerevisiae strain DS75543 is derived from RWB217 [16] and was constructed for xylose fermentation by genetic engineering and further improved for growth rate on xylose by laboratory evolution. Plasmids bearing the XKS1 and PirXI expression cassette were cured from the strain, and an XKS1 overexpression cassette was re-introduced by integration in the yeast genome. The relevant genotype of DS75543 is the following: MATa, ura3-52, leu2-112, gre3::loxP, loxP-Ptpi::TAL1, loxP-Ptpi::RKI1, loxP-Ptpi::TKL1, loxP-Ptpi::RPE1, TY1::Padh1XKS1 + LEU2.
A yeast codon-optimized xylose isomerase gene-expression cassette, PTPI1_XylA_TCYC1, was obtained from Prof. J.T. Pronk, TU Delft, and cloned into the 2µ plasmid pRS426-URA using SacI and SalI restriction sites. For E. coli expression, a pBAD/myc-His-derived plasmid containing E. coli codon-optimized XylA was used as described in our previous study [22].
The library construction strategy is schematically shown in Fig. 11. A mutant PirXI library was created by cloning XylA fragments containing mutations into the pRS426-URA vector. The fragments and the vector were designed to contain > 20 bp overlaps for use of the Gibson assembly method for cloning. Appropriate partially undefined codons for generating the library were designed using a spreadsheet-based site-restricted library design tool called CoFinder [33]. Using primers containing the partially undefined codons and pRS426_pTPI_XylA as the template, PCR was performed to generate the library fragments. For creating the backbone, linearized pRS426_pTPI_XylA was used as the PCR template. The plasmid was linearized by using a restriction enzyme that cuts the DNA once outside of the backbone region. An AatII site was created for library LibM1 and the single existing BglII site was used for library LibM2. For both the fragment and the backbone generation, Phusion high-fidelity polymerase (ThermoFisher) was used and the instructions of the manufacturer were followed for the PCR reactions. After PCR, template fragments were degraded by incubating the reaction mixtures with 1 µl of DpnI at 37 °C for 3 h. For Gibson assembly of library fragments and backbone, a total of ~ 100 ng DNA was added at a 1:3 ratio of purified backbone to gene fragments. The library DNA fragments were mixed proportionally to the number of variants that is theoretically carried by each fragment. The assembly reactions were performed following the protocol established by Gibson et al. [56]. Subsequently, three aliquots of 100 µl E. coli were transformed with 10 µl reaction mixtures and transformants were selected on LB agar plates containing 50 µg ml−1 ampicillin. Plasmids from the entire E. coli transformant mixture were isolated, sequenced to confirm the diversity of the library and transformed to S. cerevisiae.
Library construction scheme
In silico prediction of enzyme stability
The relative folding free energy differences ΔΔGFold between PirXI wild type and variants were predicted using FoldX calculations [32] based on the X-ray structure of the wild-type enzyme (5NH7) and in silico-generated mutants.
Selected PirXI variants for expression in E. coli and S. cerevisiae were reconstructed by QuikChange site-directed mutagenesis. For PCR, PfuUltra II Hotstart master mix (Agilent) was used following the manufacturer's instructions. Subsequently, 1 µl DpnI was added to the reaction mixture, which was incubated at 37 °C for 3 h. Next, 10 µl of the reaction mixture were used to transform 100 µl NEB 10 β cells.
For E. coli transformation, RbCl2-competent NEB 10 β cells were used and a standard heat-shock protocol was applied. S. cerevisiae transformation was performed using the LiAc/SS carrier-DNA/PEG method. The protocol established by Gietz et al. was followed [57], adjusting the duration of the heat shock to what works best for the DS75543 strain, which was approximately 1 h.
Yeast cultivation
For all yeast growth experiments a defined medium prepared according to Verduyn et al. [58] was used, either supplemented with 20 g l−1 glucose (glucose medium) or 20 g l−1 xylose (xylose medium) unless stated otherwise. For solid medium 2% agar was added prior to autoclaving. Cells were pre-grown on glucose medium and leftover glucose was washed away with ddH2O. The washed cells were resuspended in xylose medium and diluted appropriately. On-line growth measurements of S. cerevisiae were performed using a microplate reader (Synergy H1, BioTek). For this, 200 µl xylose medium was used in 96-well cell culture plates (Eppendorf) and samples from pre-cultures were added to an OD600 of 0.02. Plates were covered with an optical-clear gas-permeable seal (Breath-easy, Diversified Biotech). Cultures were grown at 30 °C with continuous shaking (linear, 731 cpm, 2 mm shaking amplitude) and the OD600 was measured at 30-min intervals. For off-line growth measurements cells were cultivated at 30 °C in 100-ml shake flasks containing 25 ml xylose medium and the optical density at 600 nm was followed in a spectrophotometer using plastic cuvettes.
Library screening by competitive growth
After transformation, yeast cells were plated on solid glucose medium. All transformants were collected by gently scraping the colonies from the plate and divided for duplicates of aerobic and anaerobic screening. Prior to the screening, cells were pre-grown on glucose until the mid-exponential phase. For the aerobic screening, the pre-cultures were diluted in 50 ml xylose medium to an OD600 of ~ 0.1 and grown at 30 °C with shaking at 135 rpm. For anaerobic growth, cells prepared from pre-cultures were inoculated in 100 ml xylose medium supplemented with 420 mg l−1 Tween 80 and 10 mg l−1 ergosterol. To keep the conditions oxygen-limited, the cultures were grown in 100-ml Schott glass bottles which were kept air-tight with a rubber stopper and a glass airlock. Autoclaved medium was flushed with argon for at least 20 min prior to inoculation. The cultures were grown at 30 °C without shaking, but occasionally stirred briefly to keep homogeneous cell suspensions. For aerobic and anaerobic screening, transfers to fresh xylose medium were carried out when the cultures reached an OD600 of 8–10 and 2, respectively. After the 10th transfer, cultures were harvested to investigate the evolved library diversity. Cells were pre-treated with zymolyase (Amsbio) and plasmids were isolated using a miniprep kit (Qiagen). The isolated plasmids were transformed to E. coli to produce sufficient plasmid DNA for sequencing. The plasmids were isolated from E. coli transformants using a miniprep kit and sequenced by GATC (Konstanz, Germany).
Library screening by comparative growth
All library transformants were collected from glucose medium plates, washed and diluted with sterile ddH2O and plated on solid xylose medium. After 2–3 days of incubation, around 100 colonies that grew faster were selected by comparing colony sizes with cells expressing wild-type PirXI. Plasmids were isolated from selected transformants as described above and retransformed into fresh non-evolved yeast cells. The selection procedure was repeated twice and after the last round of yeast transformation and selection approximately 200 random colonies were picked and replicated both on glucose and xylose plates. Several colonies harboring wild-type PirXI were included as controls. Next, 46 colonies that grew faster than the controls on the xylose medium were selected and the corresponding colonies were picked from the glucose plate. The growth of the selected transformants in liquid xylose medium was measured in a 96-well plate using a microtiter plate reader. The growth experiments with these 46 transformants and wild-type PirXI carrying clones were performed in duplicate. The 24 best growing colonies were selected, plasmids were isolated and sequenced as described above.
Enzyme expression in E. coli and purification
For in vitro enzyme analysis, PirXI variants were expressed in E. coli and purified. NEB 10β cells harboring PirXI variants were grown in TB medium (12 g/l tryptone, 24 g/l yeast extract, 5 ml/l glycerol, 2.31 g/l KH2PO4, and 16.43 g/l K2HPO4·3H2O) containing 50 μg ml−1 ampicillin at 37 °C. For inducing expression, 0.2% (w/v) l-arabinose was added and the cells were cultivated for 16 h at 37 °C. The cells were harvested by centrifugation and purification of the overexpressed XI was done as described previously [22].
In vitro enzyme activity and metal affinity
For measuring the XI activity in the presence of different metal cofactors, purified enzymes were first incubated overnight with 10 mM EDTA. Subsequently, any metal–EDTA complex and excess EDTA were removed by buffer exchange to 20 mM MOPS (pH 7.0) using EconoPac 10-DG desalting columns (Bio-Rad). All enzyme activities were measured with sorbitol dehydrogenase (SDH, Roche Diagnostics GmbH)—coupled assay at 30 °C and pH 7.0 (20 mM MOPS). The reactions were followed either using a spectrophotometer (1 ml mixtures) (Jasco) or with a microplate reader in (200 µl reactions) (Synergy H1, BioTek). All reactions contained 0.15 mM NADH, 1 mM divalent metal, 1.5 unit/ml SDH and d-xylose. The mixtures were incubated at 30 °C for 5 min and the reactions were initiated by addition of 0.05–0.2 µM apo XI. For measuring the activation constant for each metal, activities of XI on 100 mM (for Mg2+ and Mn2+) or 400 mM (for Ca2+) d-xylose in the presence of various concentrations of metals were determined using a microplate reader. Various concentrations of metals and PirXI were used according to the level of metal-dependent activity of the enzyme.
Thermostability measurement
Thermostability of PirXI variants was determined by measuring the increase in fluorescence of Sypro Orange (Life Technologies, Carlsbad, CA, USA) during thermal unfolding [59]. The change of fluorescence emission at 575 nm was measured with a CFX RT-PCR system (Bio-Rad) while increasing the temperature from 20 to 90 °C at a rate of 0.5 °C min−1. In order to evaluate the metal affinity of PirXI variants, various concentrations of divalent metal ions (MgCl2, MnCl2 or CaCl2) ranging from 0 to 1.28 mM were included in 25-µl reaction mixtures containing 1 mg/ml apo PirXI and 2× Sypro Orange dye.
Cell extract activity
Saccharomyces cerevisiae cells grown on xylose were harvested, washed with sterile ddH2O and resuspended with lysis buffer containing 20 mM MOPS (pH 7.0) and 100 U/g cells (w/w) zymolyase (Amsbio). The cell suspension was incubated at 30 °C with mild shaking at 50 rpm for 20 min. Next, the zymolyase-treated cells were disrupted by vortexing for 30 s with an equal volume of glass beads followed by cooling on ice for 1 min. The process was repeated five times. The cell lysate was spun down by centrifugation at 17,000×g and the supernatant was collected as the cell-free extract (CFE). The total protein concentration of the CFE was measured by the Bradford assay using bovine serum albumin (BSA) as the standard. Activity of the CFE on 100 mM xylose was measured by the SDH-coupled assay as described above without addition of any metals.
Enzyme expression levels
Crude extracts of S. cerevisiae expressing either wild-type or V270A–A273G PirXI grown on xylose were prepared by lysing the cells as described above. Total protein concentrations were determined by the Bradford assay and 2 µg of protein from each cell extract was used for SDS-PAGE analysis. Along with the crude extract samples, 0.2, 0.4, 0.6, 0.8, 1 and 2 µg of purified PirXI were loaded on the same gel as references. An image of the gel was taken and analyzed using the image processing program Image J (https://imagej.net). The intensity of the bands corresponding to PirXI was quantified and the expression levels were calculated.
Purification of PirXI expressed in S. cerevisiae for crystallography
A single colony of DS75543 cells containing PirXI was inoculated in 5 ml glucose medium and grown overnight. Next, cultures were diluted into 50 ml glucose medium to an OD600 of 0.1 and cultivated at 30 °C until mid-exponential phase. The cells were collected by centrifugation, washed and diluted into 2 l xylose medium to a starting OD600 of 0.1. The cultures were grown at 30 °C and harvested when the OD600 reached 3–4. In order to minimize metal contamination and preserve in vivo enzyme-bound metals, we applied a gentle cell lysis method and the minimal purification steps required, as described below. Buffers used are as follows: A, 10 mM MOPS, pH 7.5; B, 10 mM MOPS, pH 7.5 + 5 mM DTT; C, 10 mM MOPS, pH 7.5 + 0.5 M sucrose; D, 10 mM MOPS, pH 7.5 + 0.5 M sucrose + 100 U/g cells (w/w) of zymolyase + EDTA-free protease inhibitor cocktail tablets (Roche); and E, 10 mM MOPS, pH 7.5 + 0.5 M KCl. The harvested cells were washed with buffer A and the pellets were resuspended in buffer B and subsequently incubated for 10 min on ice. The cells were centrifuged and washed with buffer C. The pellets were resuspended in 25 ml of buffer D and incubated at 30 °C while gently shaken at 50 rpm for 1 h. The disruption of the cell walls of the yeast was monitored under a light microscope. Next, 25 ml of buffer A was added to the cell lysate mixtures and the cells were spun down at 1500 g for 5 min. The supernatants were carefully collected and centrifuged at 16,500 rpm for 30 min. PirXI was purified from the CFE using an anion exchange column (Resource Q) by applying an ionic strength gradient using buffer A and buffer E. Prior to sample loading, the CFEs were diluted appropriately to reduce the ionic strength of the samples to a level similar to that of buffer A. Most of the PirXI eluted with 50–100 mM KCl and the fractions of highest purity were collected for crystallization.
Crystallization and structure determination
Wild-type xylose isomerase and the XI mutant V270A–A273G were crystallized by the hanging-drop vapor diffusion method with 14–17% PEG3350, 0.08 M ammonium sulfate and 0.1 M HEPES, pH 7.0 [22]. For soaking experiments, the stabilizing solution was supplemented with 2 M xylose. Datasets were collected at the in-house source at 110 K [22]. Details and processing statistics are given in Table 4. Processing was done with XDS [60]. The structure of xylose isomerase (PDB code 5NH5), with all waters and ligands removed, was used as a starting model. Refinement was done with Refmac5 [61]. Sugar ligands and ions were manually placed in sigmaA-weighted 2Fo–Fc, Fo–Fc and anomalous electron density maps [62] with the program Coot [63]. The atomic coordinates and the structure factors of the structures have been deposited in the Protein Data Bank (code 6T8E for native and 6T8F for V270A–A273G PirXI).
Table 4 Data collection and refinement statistics
E. coli expression clones of PirXI mutants reported in this study are available from the corresponding author.
XI: xylose isomerase
PirXI: xylose isomerase from Piromyces
CFE: cell-free extract
SDH: sorbitol dehydrogenase
Briggs KA, Lancashire WE, Hartley BS. Molecular cloning, DNA structure and expression of the Escherichia coli D-xylose isomerase. EMBO J. 1984;3:611–6.
Jansen MLA, Bracher JM, Papapetridis I, Verhoeven MD, de Bruijn H, de Waal PP, et al. Saccharomyces cerevisiae strains for second-generation ethanol production: from academic exploration to industrial implementation. FEMS Yeast Res. 2017;17:1–20.
Walfridsson M, Anderlund M, Bao X, Hahn-Hägerdal B. Expression of different levels of enzymes from the Pichia stipitis XYL1 and XYL2 genes in Saccharomyces cerevisiae and its effects on product formation during xylose utilisation. Appl Microbiol Biotechnol. 1997;48:218–24.
Kötter P, Ciriacy M. Xylose fermentation by Saccharomyces cerevisiae. Appl Microbiol Biotechnol. 1993;38:776–83.
Cunha JT, Soares PO, Romaní A, Thevelein JM, Domingues L. Xylose fermentation efficiency of industrial Saccharomyces cerevisiae yeast with separate or combined xylose reductase/xylitol dehydrogenase and xylose isomerase pathways. Biotechnol Biofuels. 2019;12:1–14.
Amore R, Wilhelm M, Hollenberg CP. The fermentation of xylose—an analysis of the expression of Bacillus and Actinoplanes xylose isomerase genes in yeast. Appl Microbiol Biotechnol. 1989;30:351–7.
Brat D, Boles E, Wiedemann B. Functional expression of a bacterial xylose isomerase in Saccharomyces cerevisiae. Appl Environ Microbiol. 2009;75:2304–11.
de Figueiredo Vilela L, de Mello VM, Reis VCB, da Silva Bon EP, Gonçalves Torres FA, Neves BC, et al. Functional expression of Burkholderia cenocepacia xylose isomerase in yeast increases ethanol production from a glucose-xylose blend. Bioresour Technol. 2012;128:792–6.
Katahira S, Muramoto N, Moriya S, Nagura R, Tada N, Yasutani N, et al. Screening and evolution of a novel protist xylose isomerase from the termite Reticulitermes speratus for efficient xylose fermentation in Saccharomyces cerevisiae. Biotechnol Biofuels. 2017;10:1–18.
Hector RE, Dien BS, Cotta MA, Mertens JA. Growth and fermentation of D-xylose by Saccharomyces cerevisiae expressing a novel D-xylose isomerase originating from the bacterium Prevotella ruminicola TC2-24. Biotechnol Biofuels. 2013;6:84.
Seike T, Kobayashi Y, Sahara T, Ohgiya S, Kamagata Y. Molecular evolutionary engineering of xylose isomerase to improve its catalytic activity and performance of micro-aerobic glucose/xylose co-fermentation in Saccharomyces cerevisiae. Biotechnol Biofuels. 2019;12:1–16.
Kuyper M, Harhangi HR, Stave AK, Winkler AA, Jetten MSM, de Laat WTAM, et al. High-level functional expression of a fungal xylose isomerase: the key to efficient ethanolic fermentation of xylose by Saccharomyces cerevisiae? FEMS Yeast Res. 2003;4:69–78.
Harhangi HR, Akhmanova AS, Emmens R, van der Drift C, de Laat WTAM, van Dijken JP, et al. Xylose metabolism in the anaerobic fungus Piromyces sp. strain E2 follows the bacterial pathway. Arch Microbiol. 2003;180:134–41.
Lee SM, Jellison T, Alper HS. Directed evolution of xylose isomerase for improved xylose catabolism and fermentation in the yeast Saccharomyces cerevisiae. Appl Environ Microbiol. 2012;78:5708–16.
Zhou H, Cheng JS, Wang BL, Fink GR, Stephanopoulos G. Xylose isomerase overexpression along with engineering of the pentose phosphate pathway and evolutionary engineering enable rapid xylose utilization and ethanol production by Saccharomyces cerevisiae. Metab Eng. 2012;14:611–22.
Kuyper M, Hartog MMP, Toirkens MJ, Almering MJH, Winkler AA, van Dijken JP, et al. Metabolic engineering of a xylose-isomerase-expressing Saccharomyces cerevisiae strain for rapid anaerobic xylose fermentation. FEMS Yeast Res. 2005;5:399–409.
Bracher JM, Martinez-Rodriguez OA, Dekker WJC, Verhoeven MD, van Maris AJA, Pronk JT. Reassessment of requirements for anaerobic xylose fermentation by engineered, non-evolved Saccharomyces cerevisiae strains. FEMS Yeast Res. 2019. https://doi.org/10.1093/femsyr/foy104.
Kwak S, Jin YS. Production of fuels and chemicals from xylose by engineered Saccharomyces cerevisiae: a review and perspective. Microb Cell Fact. 2017;16:1–15.
Hoang PTN, Ko JK, Gong G, Um Y, Lee SM. Genomic and phenotypic characterization of a refactored xylose-utilizing Saccharomyces cerevisiae strain for lignocellulosic biofuel production. Biotechnol Biofuels. 2018;11:1–13.
Young E, Lee S, Alper H. Optimizing pentose utilization in yeast: the need for novel tools and approaches. Biotechnol Biofuels. 2010;3:24.
Verhoeven MD, Lee M, Kamoen L, Van Den Broek M, Janssen DB, Daran JMG, et al. Mutations in PMR1 stimulate xylose isomerase activity and anaerobic growth on xylose of engineered Saccharomyces cerevisiae by influencing manganese homeostasis. Sci Rep. 2017;7:1–11.
Lee M, Rozeboom HJ, de Waal PP, de Jong RM, Dudek HM, Janssen DB. Metal dependence of the xylose isomerase from Piromyces sp. E2 explored by activity profiling and protein crystallography. Biochemistry. 2017;56:5991–6005.
Cherry JR, Fidantsef AL. Directed evolution of industrial enzymes: an update. Curr Opin Biotechnol. 2003;14:438–43.
Packer MS, Liu DR. Methods for the directed evolution of proteins. Nat Rev Genet. 2015;16:379–94.
Kuchner O, Arnold FH. Directed evolution of enzyme catalysts. Trends Biotechnol. 1997;15:523–30.
de Kreij A, van Burg B, Den Venema G, Vriend G, Eijsink VGH, Nielsen JE. The effects of modifying the surface charge on the catalytic activity of a thermolysin-like protease. J Biol Chem. 2002;277:15432–8.
Tawfik DS, Goldsmith M. Directed enzyme evolution: beyond the low-hanging fruit. Curr Opin Struct Biol. 2012;22:406–12.
Lutz S. Beyond directed evolution—semi-rational protein engineering and design. Curr Opin Biotechnol. 2011;21:734–43.
van Leeuwen JGE, Wijma HJ, Floor RJ, van der Laan JM, Janssen DB. Directed evolution strategies for enantiocomplementary haloalkane dehalogenases: from chemical waste to enantiopure building blocks. ChemBioChem. 2012;13:137–48.
Lambeir AM, Lauwereys M, Stanssens P, Mrabet NT, Snauwaert J, Van Tilbeurgh H, et al. Protein engineering of xylose (glucose) isomerase from Actinoplanes missouriensis. 2. Site-directed mutagenesis of the xylose binding site. Biochemistry. 1992;31:5459–66.
Kovalevsky AY, Hanson L, Fisher SZ, Mustyakimov M, Mason SA, Trevor Forsyth V, et al. Metal ion roles and the movement of hydrogen during reaction catalyzed by D-xylose isomerase: a joint X-ray and neutron diffraction study. Structure. 2010;18:688–99.
Guerois R, Nielsen JE, Serrano L. Predicting changes in the stability of proteins and protein complexes: a study of more than 1000 mutations. J Mol Biol. 2002;320:369–87.
van Leeuwen JGE. Library design and screening strategies for efficient enzyme evolution [Doctoral dissertation]. Groningen: University of Groningen; 2015.
Futcher AB, Cox BS. Copy number and the stability of 2-micron circle-based artificial plasmids of Saccharomyces cerevisiae. J Bacteriol. 1984;157:283–90.
Kim SR, Skerker JM, Kang W, Lesmana A, Wei N, Arkin AP, et al. Rational and evolutionary engineering approaches uncover a small set of genetic changes efficient for rapid xylose fermentation in Saccharomyces cerevisiae. PLoS ONE. 2013;8:e57048.
Papapetridis I, Verhoeven MD, Wiersma SJ, Goudriaan M, van Maris AJA, Pronk JT. Laboratory evolution for forced glucose-xylose co-consumption enables identification of mutations that improve mixed-sugar fermentation by xylose-fermenting Saccharomyces cerevisiae. FEMS Yeast Res. 2018;18:1–17.
Coolbear T, Whittaker JM, Daniel RM. The effect of metal ions on the activity and thermostability of the extracellular proteinase from a thermophilic Bacillus, strain EA.1. Biochem J. 1992;287:367–74.
Dean KM, Qin Y, Palmer AE. Visualizing metal ions in cells: an overview of analytical techniques, approaches, and probes. Biochim Biophys Acta. 2012;1823:1406–15.
Foster AW, Osman D, Robinson NJ. Metal preferences and metallation. J Biol Chem. 2014;289:28095–103.
Rangarajan M, Hartley BS. Mechanism of d-fructose isomerization by Arthrobacter D-xylose isomerase. Biochem J. 1992;283:223–33.
Collyer CA, Blow DM. Observations of reaction intermediates and the mechanism of aldose-ketose interconversion by D-xylose isomerase. Proc Natl Acad Sci. 1990;87:1362–6.
Collyer CA, Henrick K, Blow DM. Mechanism for aldose-ketose interconversion by D-xylose isomerase involving ring opening followed by a 1,2-hydride shift. J Mol Biol. 1990;212:211–35.
Kuyper M, Toirkens MJ, Diderich JA, Winkler AA, van Dijken JP, Pronk JT. Evolutionary engineering of mixed-sugar utilization by a xylose-fermenting Saccharomyces cerevisiae strain. FEMS Yeast Res. 2005;5:925–34.
Nijland JG, Shin HY, Boender LGM, de Waal PP, Klaassen P, Driessen AJM. Improved xylose metabolism by a CYC8 mutant of Saccharomyces cerevisiae. Appl Environ Microbiol. 2017;83:1–12.
Nijland JG, Shin HY, de Jong RM, de Waal PP, Klaassen P, Driessen AJM. Engineering of an endogenous hexose transporter into a specific D-xylose transporter facilitates glucose-xylose co-consumption in Saccharomyces cerevisiae. Biotechnol Biofuels. 2014;7:1–11.
Milo R, Phillips R. Cell biology by the numbers. 1st ed. New York: Garland Science; 2015.
Huang M, Bao J, Hallström BM, Petranovic D, Nielsen J. Efficient protein production by yeast requires global tuning of metabolism. Nat Commun. 2017;8:1131.
Osiro KO, Borgström C, Brink DP, Fjölnisdóttir BL, Grauslund MFG. Exploring the xylose paradox in Saccharomyces cerevisiae through in vivo sugar signalomics of targeted deletants. Microb Cell Fact. 2019;18:88.
Wasylenko TM, Stephanopoulos G. Metabolomic and 13C-metabolic flux analysis of a xylose- consuming Saccharomyces cerevisiae strain expressing xylose isomerase. Biotechnol Bioeng. 2015;112:470–83.
Hasunuma T, Sanda T, Yamada R, Yoshimura K, Ishii J, Kondo A. Metabolic pathway engineering based on metabolomics confers acetic and formic acid tolerance to a recombinant xylose-fermenting strain of Saccharomyces cerevisiae. Microb Cell Fact. 2011;10:2.
van Bastelaere PBM, Kersters-hilderson HLM, Lambeirt A. Wild-type and mutant D-xylose isomerase from Actinoplanes missouriensis: metal-ion dissociation constants, kinetic parameters of deuterated and non-deuterated substrates and solvent-isotope effects. Biochem J. 1995;307:135–42.
Vieira IPV, Cordeiro GT, Gomes DEB, Melani RD, Vilela LF, Domont GB, et al. Understanding xylose isomerase from Burkholderia cenocepacia: insights into structure and functionality for ethanol production. AMB Express. 2019;9:73.
Callens M, Tomme P, Kersters-Hilderson H, Cornelis R, Vangrysperre W, De Bruyne CK. Metal ion binding to D-xylose isomerase from Streptomyces violaceoruber. Biochem J. 1988;250:285–90.
van Bastelaere PBM, Vangrysperre W, Kersters-Hildersont H. Kinetic studies of Mg2+-, Co2+- and Mn2+-activated D-xylose isomerases. Biochem J. 1991;278:285–92.
Dudev T, Lim C. Competition among metal ions for protein binding sites: determinants of metal ion selectivity in proteins. Chem Rev. 2014;114:538–56.
Gibson DG, Young L, Chuang RY, Venter JG, Hutchion CA III, Smith HO. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nat Methods. 2009;6:343–5.
Gietz RD, Schiestl RH. High-efficiency yeast transformation using the LiAc/SS carrier DNA/PEG method R. Nat Protoc. 2007;2:31–4.
Verduyn C, Postma E, Scheffers WA, van Dijken JP. Effect of benzoic acid on metabolic fluxes in yeasts: a continuous-culture study on the regulation of respiration and alcoholic fermentation. Yeast. 1992;8:501–17.
Ericsson UB, Hallberg BM, DeTitta GT, Dekker N, Nordlund P. Thermofluor-based high-throughput stability optimization of proteins for structural studies. Anal Biochem. 2006;357:289–98.
Kabsch W. XDS. Acta Cryst. 2010;66:125–32.
Murshudov GN, Skubák P, Lebedev AA, Pannu NS, Steiner RA, Nicholls RA, et al. REFMAC5 for the refinement of macromolecular crystal structures. Acta Cryst. 2011;67:355–67.
Winn MD, Ballard CC, Cowtan KD, Dodson EJ, Emsley P, Evans PR, et al. Overview of the CCP4 suite and current developments. Acta Cryst. 2011;67:235–42.
Emsley P, Lohkamp B, Scott WG, Cowtan K. Features and development of Coot. Acta Cryst. 2010;66:486–501.
We acknowledge Jack Pronk (TU Delft) for providing the xylose isomerase gene expression cassette PTPI1_XylA_TCYC1 and for his scientific advice. We thank Marcelo Masman for performing the FoldX calculations of PirXI mutants.
This work was performed within the BE-Basic (project number F01.012) R&D Program (http://www.be-basic.org/), which is financially supported by an EOS Long Term grant from the Dutch Ministry of Economic Affairs, Agriculture and Innovation (EL&I).
Biochemical Laboratory, Groningen Biomolecular Sciences and Biotechnology Institute, University of Groningen, Nijenborgh 4, 9747 AG, Groningen, The Netherlands
Misun Lee, Henriëtte J. Rozeboom, Eline Keuning & Dick B. Janssen
DSM Biotechnology Center, Alexander Fleminglaan 1, 2613 AX, Delft, The Netherlands
Paul de Waal
This study was designed by ML, PdW and DBJ. ML conducted all experiments except X-ray crystallography. HJR performed the crystallization and determined the structures. EK contributed to the design of PirXI libraries. The manuscript was written by ML, HJR and DBJ. All authors read and approved the final manuscript.
Correspondence to Dick B. Janssen.
PdW is employed at the DSM Biotechnology Center, part of the company Royal DSM. Royal DSM owns intellectual property rights and commercializes aspects of the technology discussed in this paper. PdW, DBJ, and ML are listed on a patent application on xylose isomerase (WO2018073107).
Lee, M., Rozeboom, H.J., Keuning, E. et al. Structure-based directed evolution improves S. cerevisiae growth on xylose by influencing in vivo enzyme performance. Biotechnol Biofuels 13, 5 (2020). https://doi.org/10.1186/s13068-019-1643-0
Metalloenzyme | CommonCrawl |
The Ins and Outs of Histograms
In this issue, we will tackle the probability distribution inference for a random variable.
Why do we care? As a start, no matter how good a stochastic model you have, you will always end up with an error term (aka shock or innovation), and the uncertainty (e.g., risk, forecast error) of the model is determined solely by this random variable. Second, uncertainty is commonly expressed as a probability distribution, so there is no escape!
One of the main problems in practical applications is that the needed probability distribution is usually not readily available. This distribution must be derived from other existing information (e.g. sample data).
What we mean by probability distribution analysis is essentially the selection process of a distribution function (parametric or non-parametric).
In this paper, we'll start with the non-parametric distribution functions: (1) the empirical (cumulative) distribution function and (2) the Excel histogram. In a later issue, we'll also go over the kernel density estimate (KDE).
1. Empirical Distribution Function (EDF)
The empirical distribution function (EDF), or empirical cdf is a step function that jumps by 1/N at the occurrence of each observation.
$$\mathrm{EDF}(x)=F_N (x)=\frac{1}{N}\sum_{i=1}^N I\{x_i \leqslant x\}$$
$I\{A\}$ is the indicator function of the event $A$:
$I\{x_i \leqslant x\}=\begin{cases} 1 & \text{ if } x_i \leqslant x \\ 0 & \text{ if } x_i > x \end{cases}$
The EDF estimates the true underlying cumulative distribution function of the points in the sample; it is virtually guaranteed to converge to the true distribution as the sample size gets sufficiently large.
To obtain the probability density function (PDF), one needs to take the derivative of the CDF, but the EDF is a step function and differentiation is a noise-amplifying operation. As a result, the consequent PDF is very jagged and needs considerable smoothing for many areas of application.
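For readers who want to reproduce the EDF outside of Excel, here is a minimal sketch in Python/NumPy. It is our own illustration (the function and variable names are ours, and it is not NumXL code); it simply counts the fraction of observations at or below each point:

```python
import numpy as np

def edf(sample, x):
    """Empirical distribution function: fraction of observations <= x."""
    sample = np.asarray(sample, dtype=float)
    return np.mean(sample <= x)

# Example: evaluate the EDF of simulated daily returns on a grid of points
returns = np.random.normal(loc=0.0, scale=0.006, size=498)
grid = np.linspace(returns.min(), returns.max(), 100)
F = np.array([edf(returns, x) for x in grid])   # non-decreasing values in [0, 1]
```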
2. Histograms
The (frequency) histogram is probably the most familiar and intuitive distribution estimate, and it gives a fair approximation of the PDF.
In statistics, a histogram is a graphical representation showing a visual impression of the distribution of data. Histograms are used to plot the density of data, and often for density estimation, or estimating the probability density function of the underlying variable.
In mathematical terms, a histogram is a set of counts $m_i$, where $m_i$ is the number of observations whose values fall into the $i$-th of $k$ disjoint intervals (aka bins), so that
$$N=\sum_{i=1}^k m_i$$
$N$ is the total number of observations in the sample data
$k$ is the number of bins
$m_i$ is the histogram value for the i-th bin
And the cumulative histogram is defined as the running total of the bin counts:
$$M_i=\sum_{j=1}^{i} m_j$$
The frequency function $f_i$ (aka relative histogram) is computed simply by dividing the histogram value by the total number of observations:
$$f_i=\frac{m_i}{N}$$
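These quantities are straightforward to compute programmatically as well; the following NumPy sketch (ours, not part of NumXL) returns the counts, relative frequencies and cumulative counts for a given bin number:

```python
import numpy as np

def relative_histogram(sample, k):
    """Bin counts m_i, relative frequencies f_i and cumulative counts M_i."""
    m, edges = np.histogram(sample, bins=k)   # m_i; m.sum() equals N
    f = m / m.sum()                           # f_i = m_i / N
    M = np.cumsum(m)                          # cumulative histogram M_i
    return m, f, M, edges
```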
One of the major drawbacks of the histogram is that its construction requires an arbitrary choice of bar width (or bin count) and bar positions, which means that unless one has access to a very large amount of data, the shape of the distribution estimate varies heavily as the bar width (or bin count) and positions are altered.
Furthermore, for large sample size, the outliers are difficult or perhaps impossible to see in the histogram, except when they cause the x-axis to expand.
Having said that, there are a few methods for inferring the number of histogram bins, but care must be taken to understand the assumptions behind their formulation.
Sturges' formula
Sturges' method assumes the sample data follow an approximately normal distribution (i.e. bell shape).
$$k=\left \lceil \log_2 N +1 \right \rceil$$
$\left \lceil X \right \rceil$ is the ceiling operator
Square root formula
This method is used by Excel and other statistical packages. It does not assume any shape of the distribution:
$$k= \sqrt {N}$$
Scott's (normal reference) choice
Scott's choice is optimal for random samples from a normal distribution; it prescribes the bin width $h$:
$$h=\frac{3.5\hat\sigma}{\sqrt[3]{N}}$$
$\hat\sigma$ is the estimated sample standard deviation; the number of bins $k$ then follows from the bin width via $k=\left \lceil \left ( x_{\mathrm{max}}-x_{\mathrm{min}} \right ) / h \right \rceil$, as in the Freedman-Diaconis rule below
Freedman-Diaconis's choice
$$h=2\frac{\mathrm{IQR}}{\sqrt[3]{N}}$$
$h$ is the bin size
$\mathrm{IQR}$ is the inter-quartile range
$$k=\left \lceil \frac{x_{\mathrm{max}}-x_{\mathrm{min}}}{h} \right \rceil$$
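The four rules above translate into a few lines of code. The sketch below (our own; it assumes equal-width bins over the sample range and uses NumPy) returns the suggested bin count for each rule:

```python
import numpy as np

def bin_rules(x):
    """Suggested number of bins k according to the four rules of thumb."""
    x = np.asarray(x, dtype=float)
    n = x.size
    span = x.max() - x.min()
    sturges = int(np.ceil(np.log2(n) + 1))
    square_root = int(np.ceil(np.sqrt(n)))
    h_scott = 3.5 * x.std(ddof=1) / n ** (1.0 / 3.0)   # Scott's bin width
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    h_fd = 2.0 * iqr / n ** (1.0 / 3.0)                # Freedman-Diaconis bin width
    return {
        "Sturges": sturges,
        "Square root": square_root,
        "Scott": int(np.ceil(span / h_scott)),
        "Freedman-Diaconis": int(np.ceil(span / h_fd)),
    }
```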
Decision based on minimization of risk function ($L^2$)
$$\mathrm{min}_{h}\{L^2\}=\mathrm{min}_{h} \left ( \frac{2\bar m-v}{h^2} \right )$$
where the mean and (biased) variance of the bin counts are
$$\bar m = \frac{1}{k}\sum_{i=1}^k m_i=\frac{N}{k}$$ $$v=\frac{1}{k}\sum_{i=1}^k (m_i - \bar m)^2=\frac{1}{k}\sum_{i=1}^k m_i^2-\frac{N^2}{k^2}$$
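This criterion is easy to scan numerically over a range of candidate bin counts. A possible sketch (our own, not NumXL's implementation; it assumes equal-width bins):

```python
import numpy as np

def best_bins_l2(x, k_min=2, k_max=50):
    """Return the bin count k minimizing the L2 risk (2*mean - var) / h**2."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    best_k, best_cost = k_min, np.inf
    for k in range(k_min, k_max + 1):
        m, _ = np.histogram(x, bins=k)
        h = span / k                                  # bin width for k equal bins
        cost = (2 * m.mean() - m.var()) / h ** 2      # biased variance, as in v above
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```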
3. Kernel Density Estimate (KDE)
An alternative to the Excel histogram is kernel density estimation (KDE), which uses a kernel to smooth the sample. This constructs a smooth probability density function, which will in general reflect the underlying variable more accurately. We mention the KDE for the sake of completeness, but we will postpone its discussion to a later issue.
EUR/USD Returns Application
Let's consider the daily log returns of the EUR/USD exchange rate sample data. In our earlier analysis (ref: NumXL Tips and Hints – Price this), the data were shown to behave as Gaussian white noise. The EDF of those returns (n=498) is shown below:
For the Excel histogram, we calculated the number of bins using the four methods above.
Next, we plot the relative Excel histogram using those different bin counts, and we overlay the normal probability density function (red curve) for comparison.
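A plot of this kind can be reproduced outside Excel with a short script; the following matplotlib/SciPy sketch (ours; the styling and the fitted mean/standard deviation are assumptions, not the exact NumXL output) overlays a fitted normal PDF on a density-normalized histogram:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

def plot_relative_hist(returns, k):
    """Density-normalized histogram of the returns with a normal PDF overlay."""
    returns = np.asarray(returns, dtype=float)
    plt.hist(returns, bins=k, density=True, alpha=0.6, label=f"histogram, k={k}")
    grid = np.linspace(returns.min(), returns.max(), 200)
    mu, sigma = returns.mean(), returns.std(ddof=1)
    plt.plot(grid, norm.pdf(grid, mu, sigma), "r-", label="normal PDF")
    plt.legend()
    plt.show()
```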
Although we have a relatively large data set (n=498), and both the EDF and the statistical tests point to Gaussian-distributed data, the choice of bin count can visibly distort the estimated density function.
Scott's choice (k=15) describes the density function best, followed by Sturges'.
In this issue, we attempted to derive an approximation of the underlying probability density using the sample's Excel histogram and the (cumulative) empirical distribution function.
Although the data sample is relatively large (n=498), the Excel histogram is still a fairly crude approximation and very sensitive to the number of bins used.
Using the rules of thumb (e.g. Sturges' rule, Scott's choice, etc.) can improve the process of finding a better bin count, but each rule makes its own assumptions about the shape of the distribution, and an experienced (manual) examination (or eyeballing) is still needed to ensure a proper Excel histogram.
\begin{document}
\newcommand {\bbox} {\vrule height7pt width4pt depth1pt} \newcommand {\ra} {\rangle} \newcommand {\la} {\langle} \title{QUANTUM COMPUTATION\footnote{To appear in {\it Annual Reviews of Computational Physics} VI, Edited by Dietrich Stauffer, World Scientific, 1998}} \author{Dorit Aharonov\\ {\small Departments of Physics and Computer Science,} \\{\small The Hebrew University, Jerusalem, Israel} }
\maketitle
\begin{abstract} \it{
In the last few years, theoretical study of
quantum systems serving as computational devices has achieved tremendous progress. We now have strong theoretical evidence that
quantum computers, if built, might be used as a dramatically powerful computational tool, capable of performing tasks which seem intractable for classical computers. This review tells the story of theoretical quantum computation. I left out the developing topic of experimental realizations of the model, and neglected other closely related topics, namely quantum information and quantum communication.
As a result of narrowing the scope of this paper, I hope it has gained the benefit of being an almost self-contained
introduction to the exciting field of quantum computation.
The review begins with background on theoretical computer science, Turing machines and Boolean circuits. In light of these models, I define quantum computers, and discuss the issue of universal quantum gates. Quantum algorithms, including Shor's factorization algorithm and Grover's algorithm for searching databases, are explained. I will devote much attention to understanding what the origins of the quantum computational power are, and what the limits of this power are. Finally, I describe the recent theoretical results which show that quantum computers
maintain their complexity power even in the presence of noise, inaccuracies and finite precision.
This question cannot be separated from that of quantum complexity, because any realistic model will inevitably
be subject to such inaccuracies. I tried to put all results in their context, asking what the implications to other issues in computer science and physics are. In the end of this review I make these connections explicit,
discussing the possible implications of
quantum computation on fundamental physical questions, such as the transition from quantum to classical physics.}
\end{abstract}
\newtheorem{theo}{Theorem} \newtheorem{lemm}{Lemma} \newtheorem{conj}{conjecture} \newtheorem{deff}{Definition} \newtheorem{coro}{Corollary}
\section{Overview}
Since ancient times, humanity has been seeking tools to help us perform tasks which
involve calculations. Examples include computing the area of a plot of land, computing the stresses on rods in bridges, or finding the shortest route from one place to another. A common feature of all these tasks is their structure: \begin{quote} \bf ~~~~~~~~~~~~~~~ Input ------$>$ Computation ------$>$ Output \end{quote}
The computation part of the process is inevitably performed by a dynamical physical system, evolving in time. In this sense, the question of what can be computed, is intermingled with the physical question of which systems can be physically realized. If one wants to
perform a certain computation task, one should seek the appropriate physical system,
such that the evolution in time of the system corresponds to the desired computation process. If such a system is initialized according to the input, its final state will correspond to the desired output.
A very nice such example was invented by Gaud\'{i}, a great Spanish architect, who lived around the turn of the century. His design of the holy family church, ({\it la sagrada familia}) in Barcelona is a masterpiece of art, and is still in the process of building, after almost a hundred years. The church resembles a sand palace, with a tremendous complexity of delicate thin but tall
towers and arcs. Since the plan of the church was so complicated,
towers and arcs emerging from unexpected places,
leaning on other arcs and towers, it is practically impossible to solve the set of equations which corresponds to the requirement of equilibrium in this complex. Instead of solving this impossible task, Gaud\'{i} thought of the following ingenious idea: For each arc he desired in his complex, he took a rope, of length proportional to the length of the arc. He tied the edges of one rope to the middle of some other rope, or where the arcs were supposed to lean on each other. Then he just tied the edges of the ropes corresponding to the lowest arcs, to the ceiling. All the computation was instantaneously done by gravity! The set of arcs arranged itself such that the whole complex is in equilibrium, but upside down. Everything was there, the angles between the different arcs, the radii of the arcs. Putting a mirror under the whole thing, he could simply see the design of
the whole church! \cite{gaudi}.
Many examples of analog computers exist, each invented to solve one complicated task. Such are the differential analyzer invented by Lord Kelvin in 1870\cite{kelvin}, which uses friction, wheels, and pressure to draw the solution of an input differential equation. The spaghetti sort is another example, and there
are many more\cite{vergis}. Are these systems ``computers''? We do not want to construct and build a completely different machine for each task that we have to compute. We would rather have a general
purpose machine, which is ``universal''. A mathematical model for a ``universal'' computer was defined long before the invention of computers and is called the Turing machine\cite{turing}. Let me describe this model briefly. A Turing machine consists of an infinite tape, a head that reads and writes on the tape,
a machine with finitely many possible states,
and a transition function $\delta$. Given what the head reads at time $t$, and the machine's state at time $t$,
$\delta$ determines what the head will write, to which direction
it will move and what will be the new machine's state at time $t+1$.
The Turing machine model seems to capture the entire concept of computability, according to the following thesis\cite{church}:
\begin{quote} {\bf Church Turing Thesis:} A
Turing machine can compute any function computable by a reasonable physical device \end{quote}
What does ``reasonable physical device'' mean? This thesis is a physical statement, and as such it cannot be proven. But one knows a physically unreasonable device when one sees it. Up till now there are no candidates for counterexamples to this thesis (but see Ref. \cite{geroch}). All physical systems, (including quantum systems), seem to have a simulation by a Turing Machine.
It is an astonishing fact that there are families of functions which cannot be computed. In fact, most of the functions cannot be computed. There are trivial reasons for this:
There are more functions than there are ways to compute them. The reason for this is that the set of Turing machines is countable, where as the set of {\it families} of functions is not. In spite of the simplicity of this argument (which can be formalized using the {\it diagonal argument}) this observation came
as a complete surprise in the 1930's when it was first discovered. The subject of computability of functions is a cornerstone in computational complexity. However, in the theory of computation, we are interested not only in the question of which functions can be computed,
but mainly in the {\it cost} of computing these functions. The cost, or {\it computational complexity}, is measured naturally by the physical resources invested in order to
solve the problem, such as time, space, energy, etc. A fundamental question in computation complexity
is how the cost function behaves as a function of the input size, $n$, and in particular whether it is exponential or polynomial in $n$. In computer science, problems which can only be solved at exponential cost are regarded as intractable, and any reader who has ever tried to perform an exponentially slow simulation will appreciate this characterization. The class of tractable problems consists of those problems which have polynomial solutions.
It is worthwhile to reconsider what it means to {\it solve} a problem. One of the most important conceptual breakthroughs in modern mathematics was the understanding\cite{rabin79} that sometimes it is advantageous to
relax the requirement that a solution be always correct,
and allow some (negligible) probability for an error. This gave rise to much more rapid solutions to different problems, which make use of random coin flips, such as the Miller-Rabin randomized algorithm to test whether an integer is prime or not\cite{fft}. Here is a simple example of the advantage of probabilistic algorithms:
\begin{quote}
We have access to a database of
$N$ bits, and we are told that they are either all equal, (``constant'') or half are $0$ and half are $1$ (``balanced''). We are asked to distinguish between the two cases. \end{quote}
A deterministic algorithm will have to observe $N/2+1$ bits in order to
always give a correct answer. To solve this problem probabilistically, toss a random $i$ between $1$ and $N$, observe the $i$'th
bit, and repeat this experiment $k$ times. If two different bits are found, the answer is ``balanced'', and if all bits are equal, the answer is ``constant''. Of course, there is a chance that we are wrong when declaring ``constant'', but this chance can be made arbitrarily small. The probability for an error
equals the chance of tossing a fair coin $k$ times and getting always $0$, and it decreases exponentially with $k$. For example, in order for the error probability to be less than $10^{-10}$,
$k=100$ suffices. In general, for any desired confidence, a constant $k$ will do. This
is a very helpful shortcut if $N$ is very large. Hence, if we allow negligible probability of error,
we can do much better!
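To make the constants explicit (a rough estimate along the lines of the coin-tossing picture above): if the database is balanced, the probability that all $k$ sampled bits agree is at most $2^{-(k-1)}$. Hence
\begin{equation}
2^{-(k-1)}\le 10^{-10} \quad \Longleftrightarrow \quad k\ge 1+10\log_2 10\approx 34.2 ,
\end{equation}
so $k=35$ (and certainly $k=100$) already suffices, independently of $N$.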
The class of tractable problems is now considered as those problems solvable with a negligible probability for error in polynomial time. These solutions will be computed by a probabilistic Turing machine, which is defined exactly as a deterministic Turing machine, except that the transition function can change the configuration in one of several possible ways, randomly. The modern Church thesis refines the Church thesis and asserts that the probabilistic Turing machine
captures the entire concept of computational complexity:
\begin{quote}{\bf The modern Church thesis}: A probabilistic Turing machine can simulate
any reasonable physical device in polynomial cost. \end{quote}
It is worthwhile considering a few models which
might seem to contradict this thesis at first sight. One such model is the DNA computer which enables a solution of $NP$-complete problems (these are hard problems to be defined later)
in polynomial time\cite{adleman2, lipton}. However, the cost of the solution is exponential
because the number of molecules in the system grows exponentially with the size of the computation. Vergis et al\cite{vergis} suggested a machine which
seems to be able to solve instantaneously an $NP$-complete problem using a construction of rods and balls, which is designed such that the
structure moves according to the solution to the problem. A careful consideration\cite{simon2} reveals that though we tend to think of rigid rods as transferring the motion instantaneously, there will be a
time delay in the rods, which will accumulate and cause an exponential overall delay. Shamir\cite{shamir} showed how to
factorize
an integer in polynomial time {\em and} space, but using another physical resource exponentially, namely
precision. In fact, J. Simon showed that
extremely hard problems (the class of problems called polynomial space, which are believed to be even harder than NP problems) can be solved with polynomial cost in time and space\cite{jsimon}, but with exponential precision. Hence all these suggestions for computational models do not provide counterexamples to the modern Church thesis, since they require exponential physical resources. However, note that all the suggestions mentioned above rely on classical physics.
In the early 80's Benioff\cite{benioff1,benioff2} and Feynman\cite{feynman2}
started to discuss the question of whether computation can be done in the scale of quantum physics. In classical computers, the elementary information unit is a {\it bit}, i.e. a value which is either $0$ or $1$. The quantum analog of a bit would be a two state particle, called a quantum bit or a {\bf qubit}.
A two state quantum system is described by a unit vector in the Hilbert space $C^2$, where $C$ are the complex numbers.
One of the two states will be denoted by $|0\ra$, and corresponds to the vector $(1,0)$. The other state, which is orthogonal to the first one,
will be denoted by $|1\ra=(0,1)$. These two states constitute an orthogonal basis to the Hilbert space. To build a computer, we need to compose a large number of these two state particles. When $n$ such qubits are composed to one system, their Hilbert space is the tensor product of $n$ spaces: $C^2\otimes C^2\otimes \cdots \otimes C^2$. To understand this space better, it is best to think of it as the space spanned by its basis. As the natural basis for this space, we take the basis consisting of $2^n$ vectors, which is sometimes called the computational basis:
\begin{eqnarray}
|0\ra\otimes|0\ra\otimes\cdots\otimes|0\ra~\\
|0\ra\otimes|0\ra\otimes\cdots\otimes|1\ra\nonumber~\\
\vdots~~~~~~~~~~~~~~~~~~~~~\nonumber\\
|1\ra\otimes|1\ra\otimes\cdots\otimes|1\ra\nonumber.
\end{eqnarray}
Naturally classical strings of bits will correspond to quantum states: \begin{equation}
i_1i_2...i_n \longleftrightarrow |i_1\ra\otimes|i_2\ra\otimes\cdots\otimes|i_n\ra\equiv|i_1....i_n\ra \end{equation}
How can one perform computation using qubits? Suppose, e.g., that we want to compute the function $f:i_1i_2...i_n \longmapsto f(i_1,....i_n)$, from $n$ bits to $n$ bits. We would like the system to evolve according to
the time evolution operator $U$: \begin{equation}\label{f}
|i_1i_2...i_n\ra \longmapsto U|i_1i_2...i_n\ra=|f(i_1,....i_n)\ra. \end{equation} We therefore have to find the Hamiltonian ${\cal H}$
which generates this evolution according to
Schr$\ddot{{\rm o}}$dinger's equation:
\(i\hbar\frac{d}{dt}|\Psi(t)\ra={\cal H }|\Psi(t)\ra\). This means that we have to solve for ${\cal H}$ given the desired $U$:
\begin{equation}\label{evu}
|\Psi_f\ra=
\exp\left(-\frac{i}{\hbar}\int{\cal{H}}dt\right)|\Psi_0\ra=
U|\Psi_0\ra \end{equation}
A solution for ${\cal H}$ always exists,
as long as the linear operator $U$ is unitary. It is important to pay attention to the unitarity restriction. Note that the quantum analog of a classical operation will be unitary only if $f$ is one-to-one, or reversible. Hence, reversible classical functions can be implemented by a physical Hamiltonian. Researchers investigated
the question of reversible classical functions in connection with completely different problems, e.g.
the problem of whether computation can be done without generating heat (which is inevitable in irreversible operations) and as a solution to the ``Maxwell's demon'' paradox\cite{landauer2,bennett2,bennett4,keyes2}. It turns out that any classical function can be represented as a reversible function\cite{lecerf,bennett1} on a few more bits, and the computation of $f$ can be made reversible without losing much in efficiency. Moreover,
if $f$ can be computed classically by polynomially many elementary reversible steps,
the corresponding $U$ is also decomposable into a sequence of polynomially many elementary unitary operations. We see that quantum systems can imitate
all computations which can be done by classical systems, and do not lose much in
efficiency.
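As a minimal illustration of this correspondence (a standard textbook example chosen here for concreteness, not part of the constructions cited above), consider the one-bit NOT operation $f(x)=1-x$, which is reversible. The corresponding unitary acts as $U_{NOT}|x\ra=|1-x\ra$, i.e. it is the Pauli matrix $\sigma_x$, and a time-independent Hamiltonian generating it in time $T$ is
\begin{equation}
{\cal H}=\frac{\pi\hbar}{2T}\,\sigma_x, \qquad
\exp\left(-\frac{i}{\hbar}{\cal H}T\right)=\exp\left(-i\frac{\pi}{2}\sigma_x\right)=-i\,\sigma_x ,
\end{equation}
which equals $U_{NOT}$ up to an irrelevant global phase.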
Quantum computation is interesting not because it can imitate classical computation, but because it can probably do much more. In a seminal paper\cite{feynman1}, Feynman
pointed out the fact that quantum systems of $n$ particles seem exponentially hard to simulate by classical devices. In other words, quantum systems do not seem to obey the modern Church thesis, i.e. they do not seem to be
polynomially equivalent to classical systems! If quantum systems are hard to simulate, then quantum systems,
harnessed as computational devices, might be dramatically more powerful than other computational devices.
Where can the ``quantumness'' of the particles be used? When I described how quantum systems imitate
classical computation, the quantum particles were either
in the state $|0\ra$ or $|1\ra$. However, quantum theory asserts that a quantum system, like Schr$\ddot{\rm{o}}$dinger's cat,
need not be in one of the basis states $|0\ra$ and $|1\ra$,
but can also be in a {\it linear superposition} of those. Such a superposition can be written as: \begin{equation}
c_0 |0\ra + c_1 |1\ra \end{equation}
where $c_0,c_1$ are complex numbers and $|c_0|^2+|c_1|^2=1$. The wave function, or superposition, of $n$ such quantum bits, can be in a superposition
of all of the $2^n$ possible basis states! Consider for example the following state of $3$ particles, known as the GHZ state\cite{ghz}: \begin{equation}
\frac{1}{\sqrt{2}}(|000\ra +|111\ra) \end{equation} What is the superposition describing the first qubit? The answer is that there is no such superposition. Each one of the $3$ qubits does not have a state of its own;
the state of the system is not a tensor product of the states of each particle, but is some superposition which describes quantum correlations between these particles. Such particles are said to be quantumly {\it entangled}. The Einstein Podolski Rosen paradox\cite{epr}, and Bell inequalities\cite{bell,bell1, clauser,ghz},
correspond to this puzzling quantum feature by which a quantum particle does not have a state of its own. Because of the entanglement or quantum correlations
between the $n$ quantum particles, the state of the system cannot be specified by simply describing the state of each of the $n$ particles. Instead, the state of $n$ quantum bits
is a complicated superposition of all $2^n$ basis states, so
$2^n$ complex coefficients are needed in order to describe it. This exponentiality of the Hilbert space is a crucial ingredient in quantum computation. To gain more understanding of the advantages of the exponentiality of the space,
consider the following
superposition of $n$ quantum bits. \begin{equation}
\frac{1}{\sqrt{2^n}}\sum_{i_1,i_2,...,i_n=0}^{1} |i_1,i_2,...,i_n\ra \end{equation} This is a uniform superposition of all possible basis states of $n$ qubits. If we now apply the unitary operation which computes
$f$, from equation \ref{f}, to this state,
we will get, simply from linearity of quantum mechanics: \begin{equation} \frac{1}{\sqrt{2^n}}\sum_{i_1,i_2,...,i_n=0}^{1}
|i_1,i_2,...,i_n\ra\longmapsto
\frac{1}{\sqrt{2^n}}\sum_{i_1,i_2,...,i_n=0}^{1} |f(i_1, i_2,...,i_n)\ra. \end{equation} Applying $U$ once computes $f$ simultaneously
on all the $2^n$ possible inputs $i$, which is an enormous power of parallelism!
It is tempting to think that exponential parallelism immediately implies exponential computational power, but this is not the case. In fact, classical computations can be viewed as having exponential parallelism as well-- we will devote much attention to this later on. The problem lies in the question of how to
extract the exponential information out of the system. In quantum computation, in order
to extract quantum information one has to {\it observe} the system.
The measurement process causes the famous {\it collapse of the wave
function}. In a nutshell, this means that after the measurement the state is projected to
only one of the exponentially many possible states, so that the exponential amount of information which has been computed is completely lost! In order to gain advantage of
exponential parallelism, one needs to combine it with another
quantum feature, known as interference. Interference allows the exponentially many computations done in parallel to cancel each other, just like destructive interference of waves or light. The goal is to
arrange the cancelation such that only the computations which we are interested in remain, and all the rest cancel out. The combination of exponential parallelism and interference is what makes quantum computation powerful, and plays an important role in quantum algorithms.
A quantum algorithm is a sequence of elementary unitary steps,
which manipulate the initial quantum state $|i\ra$ (for an input $i$) such that a measurement of the final state of the system yields the correct output. The first quantum algorithm which combines interference and exponentiality to solve a problem faster than classical computers was discovered by Deutsch and Jozsa\cite{deutsch3}. This algorithm addresses the problem we have encountered before in connection with probabilistic algorithms: Distinguish between ``constant'' and ``balanced'' databases. The quantum algorithm solves this problem {\it exactly}, in polynomial cost.
As we have seen, classical computers cannot do this, and must relax the restriction of exactness. Deutsch and Jozsa made use of the most powerful tool in quantum algorithms, the {\it Fourier transform}, which indeed manifests
interference and exponentiality.
Simon's algorithm\cite{simon} uses similar techniques, and was the seed for the most important quantum algorithm known today: Shor's algorithm.
Shor's algorithm (1994) is a polynomial quantum algorithm for factoring integers, and for finding the discrete logarithm over a finite field\cite{shor1}. For both problems, the best known classical
algorithms are super-polynomial.
However, there is no proof that classical efficient algorithms do not exist. Shor's result is regarded as extremely important both theoretically and practically, mainly due to the fact that the assumption that factorization is hard
lies at the heart of the $RSA$ cryptographic system \cite{rsa,fft}. A cryptosystem is supposed to be a secure way to transform information such that an eavesdropper will not be able to learn, in reasonable time, significant information about the message sent. The RSA cryptosystem is used very heavily: The CIA uses it,
the security embedded into Netscape and the Explorer Web browsers is based on RSA, banks use RSA for internal security as well as securing external connections. However, RSA can be cracked by any one who has an efficient algorithm for factoring. It is therefore understandable why the publication of the factorization algorithm caused a rush of excitement all over the world.
It is important that the quantum computation power does not rely on unreasonable precision but a polynomial amount of precision in the computational elements is enough\cite{bv}. This means that the new model requires physically reasonable resources, in terms of time, space, and precision, but yet it is (possibly) exponentially stronger than the ordinary model of probabilistic Turing machine. As such, it is the only model which really threatens the modern Church thesis.
There are a few major developing directions of research in the area of
quantum computation. In $1995$ Grover\cite{grover1} discovered an algorithm which searches an unsorted database of $N$ items and finds a specific
item in $\sqrt{N}$ time steps. This result is surprising, because intuitively, one cannot search the database without going through all the items. Grover's solution is
quadratically better than any possible classical algorithms, and
was followed by numerous extensions and applications\cite{boyer1,grover2,grover3,durr,brassard3,brassard4}, all achieving polynomial advantage over classical algorithms.
A promising new branch in quantum complexity theory is the study
of a class of problems which
is the quantum analog of the complexity
class NP\cite{kitaevNP}.
Another interesting direction in quantum computation is concerned with quantum computers simulating efficiently other physical systems such as many body Fermi systems\cite{zalka1,abrams, weisner2,bogosian}. This direction pursues the original suggestion by Feynman\cite{feynman1}, who noticed that quantum systems are hard to simulate by classical devices. An important direction
of investigation is the search for a different, perhaps stronger, quantum computation model. For example, consider the introduction of slight non-linearities into quantum mechanics. This is
completely hypothetical, as all experiments verify the linearity of quantum mechanics. However, such slight non linearities
would imply extremely strong quantum algorithms\cite{ abrams2}. A very interesting quantum computation model which is based on anyons, and uses non-local features of quantum
mechanics,
was suggested by Kitaev\cite{kitaev3}. A possibly much stronger model, based on quantum field theory, was sketched recently by Freedman, but it has not been rigorously defined yet\cite{freedman}. One other direction is oracle results in
quantum complexity. This direction compares
quantum complexity power and classical complexity power
when the two models are allowed to have access to an oracle, i.e. a black box which can compute a certain (possibly difficult) function in one step \cite{bv, bbbv, bert2,bert3}. In fact, the result of Bernstein and Vazirani\cite{bv} from $1993$, demonstrating a superpolynomial gap between quantum and classical computational complexity with access to a certain oracle, initiated the sequence of results leading to Shor's algorithm. An important recent result\cite{beals2} in quantum complexity
shows that quantum computers have no more than polynomial advantage
in terms of number of accesses to the inputs. As of now, we are very far from understanding the computational power of quantum systems. In particular, it is not
known whether quantum systems can efficiently solve
$NP$ complete problems or not.
Quantum information theory, a subject which is intermingled with quantum computation, provides a bunch of quantum magic tricks,
which might be used to construct more powerful quantum algorithms. Probably the first ``quantum pearl'' that one encounters in quantum mechanics
is the Einstein Podolsky Rosen paradox,
which, as is best explained by Bell's inequalities, establishes the existence of correlations between quantum particles, which are stronger than any classical model can explain. Another ``quantum pearl''
which builds on quantum entanglement,
is teleportation\cite{bennett13}. This is an amazing quantum recipe which
enables two parties (Alice and Bob) which are far apart,
to transfer an unknown quantum state of a particle in Alice's hands onto a particle in Bob's hand, without sending the actual particle. This can be done if Alice and Bob share a pair of particles which interacted in the past and therefore are quantumly entangled. Such quantum effects already serve as ingredients in different computation and communication tasks. Entanglement can be used, for example, in order to gain advantage in communication. If two parties, Alice and Bob, want to communicate, they can save bits of communication if they share entangled pairs of qubits\cite{cleve2,cleve3, cleve4,barenco2}. Teleportation can be viewed as a quantum computation\cite{brassard5},
and beautiful connections
were drawn\cite{bennett14} between teleportation and quantum algorithms which are used to correct quantum noise.
All these are uses of quantum effects in quantum computation. However,
I believe that the
full potential of
quantum mechanics in the context of complexity and algorithmic problems is yet to be revealed.
Despite the impressive progress in quantum computation,
a menacing question still remained. Quantum information is extremely fragile, due to inevitable interactions between the system and its environment. These interactions cause the system to lose part of its quantum nature, a process called {\it decoherence}\cite{stern1,zurek1}. In addition, quantum elementary operations (called {\it gates})
will inevitably suffer from inaccuracies. Will physical realizations of the model of quantum computation still be as powerful as the ideal model? In classical computation, it was already shown by von-Neumann\cite{neumann} how to compute when the elements of the computation are faulty, using redundant information. Indeed, nowadays error corrections are seldom used in computers because of extremely high reliability of the elements, but quantum elements are much more fragile, and it is almost certain that quantum error corrections will be necessary in future quantum computers. It was shown that if the errors are not corrected during quantum computation,
they soon accumulate and ruin the entire computation\cite{decoherence,decoherence2,barenco6, miquel1}. Hence, a method to correct the effect of quantum noise is necessary. Physicists
were pessimistic about the question of whether such a correction method exists\cite{landauer1,unroh1}. The reason is that quantum information in general cannot be cloned\cite{dieks,wootters,barnum2}, and so the information cannot be simply protected by redundancy, as is done classically. Another problem is that in contrast to the
discreteness of digital computers, a quantum system can be in a superposition of eigenstates with continuous coefficients. Since the range of allowed coefficients is continuous, it seems impossible to distinguish between bona fide information and information which has been contaminated.
As opposed to the physical intuition, it turns out that clever techniques enable quantum information to be protected. The conceptual breakthrough in quantum error corrections
was the understanding that quantum errors, which are continuous, can be viewed as a discrete process in which one out of four quantum operations occurs. Moreover, these errors can be viewed as classical errors, called bit flips, and quantum errors, called phase flips.
Bit flip errors can be corrected using classical error correction techniques. Fortunately, phase flips transform to bit flips, using the familiar Fourier transform.
This understanding allowed using classical error correction codes techniques in the quantum setting. Shor was the first to present
a scheme that reduces the effect of noise and inaccuracies, building on the discretization of errors\cite{shor2}. As in classical error correcting codes, quantum states of $k$ qubits are {\it encoded} on states of more qubits.
Spreading the state of a few qubits on more qubits, allows correction of the information, if part of it has been contaminated. These ideas were extended \cite{calshor,steane1} to show that a quantum state of $k$ qubits
can be encoded on $n$ qubits, such that if the $n$ qubits are sent through a noisy channel, the original state of the $k$ qubits can be recovered. $k/n$ tends asymptotically to a constant {\it transmission rate} which is non zero.
This is analogous to Shannon's result from noisy classical communication\cite{shannon}. Many different examples of quantum
error correcting codes followed\cite{steane2,laflamme2,chuang1,knill5, rains2, leung}, and a group theoretical framework for most quantum codes was
established\cite{gf4,calderbank3,gottesman2}.
Resilient quantum computation is more complicated than simply protecting quantum information which is sent through a noisy quantum channel. Naturally, to protect the information we would compute on encoded states. There are two problems with noisy computation on encoded states.
The first is that the error correction is done with faulty gates, which cause errors themselves\cite{barenco7}. We should be careful that the error correction does not cause more harm than good. The second problem is that
when computing on encoded states, qubits interact with each other through the gates, and this way
errors can {\it propagate} through the gates, from one qubit to another. The error can spread in this way to the entire set of qubits very quickly. In order to deal with these problems, the idea is to perform computation and error correction in a {\it distributed manner}, such that each qubit can effect only a small number of other qubits.
Kitaev\cite{kitaev2}
showed how to perform the computation of error correction with faulty gates. Shor discovered\cite{shor3} how
to perform a general computation
in the presence of noise, under the unphysical assumption that
the noise decreases (slowly) with
the size of the computation.
A more physically reasonable assumption would be that the devices
used in the laboratory have a constant amount of noise,
independent of the size of the computation.
To achieve fault tolerance against such noise, we apply a concatenation of Shor's scheme. We encode the state once, and then encode the encoded state, and so on for several levels.
This technique enabled
the proof of the {\it threshold theorem}\cite{knill1, knill2,gottesman5,aharonov1, kitaev3,preskill2}, which asserts that it is possible to perform resilient quantum computation for as long as we wish,
if the noise is smaller than a certain {\it threshold}. Decoherence and imprecision are therefore no longer considered insurmountable obstacles to realizing a quantum computation.
In accord with these optimistic theoretical
results, attempts at implementations of quantum circuits are now
being carried out all over the world. Unfortunately, the progress in this direction is much slower than the impressive pace in which theoretical quantum computation has progressed. The reason is that handling quantum systems experimentally is extremely difficult. Entanglement is a necessary ingredient in quantum computers, but experimentally, it is a fragile property which is difficult to create and preserve\cite{cirac3}. So far, entangled pairs of photons were created successfully\cite{kwiat,tittel}, and entanglement features such as violation of Bell inequalities were demonstrated \cite{aspect1,aspect2}. Even entangled pairs of atoms were created\cite{hagley}. However quantum computation is advantageous only when macroscopically many particles are entangled\cite{jozsa2,aharonov2},
a task which seems impossible as of now. Promising experimental developments come from the closely related subject of quantum cryptography\cite{brassard6,bennett13,brassard2}. Quantum communication was successfully tested\cite{hughes,mattle}. Bouwmeester {\it et. al.} have recently reported on experimental realization of quantum teleportation\cite{bouwmeester} . Suggestions for implementations of quantum computation \cite{ cirac1,cory4,gershenfeld,lloyd3,div-rev,jones2,berman1, loss, jones2,pellizzari,privman,steane6} include
quantum dots, cold trapped ions and nuclear magnetic resonance, and some of these suggestions were already implemented \cite{monroe2,turchette,mattle,gershenfeld,cory2}. Unfortunately, these implementations were so far limited to three qubits. With three qubits it is possible to perform partial error correction, and successful implementation of error correction of phases using NMR was reported\cite{cory3,chuang2}. Using nuclear magnetic resonance techniques, a quantum algorithm was implemented which achieves proven advantage over classical algorithms\cite{chuang3}. It should be noted, however, that all these suggestions for implementation
suffer from severe problems. In nuclear magnetic resonance the signal-to-noise ratio
decays exponentially with the number of qubits\cite{warren1}, though
a theoretical solution to this problem was given recently\cite{umesh}. Other implementations do not allow parallel operations, which are necessary for error resilience\cite{aharonov2}. In all the above systems, controlling thousands of qubits seems hopeless at present. Nevertheless, the experimental successes encourage our hope that the ambitious task of realizing quantum computation might be possible.
The exciting developments in quantum computation give rise to deep new open questions in both the fields of computer science and physics. In particular, computational complexity questions
shed new light on old questions in fundamental quantum physics such as the transition from quantum to classical physics, and the measurement process. I shall discuss these interesting topics at the end of the paper.
We will start with
a survey of the important concepts connected to computation, in section $2$.
The model of quantum computation is defined in section $3$.
Section $4$ discusses elementary quantum operations.
Section $5$ describes basic quantum algorithms by Deutsch and Jozsa's and by Simon.
Shor's factorization algorithm is presented in section $6$,
while Fourier transforms are discussed separately in section $7$,
together with an alternative factorization algorithm by Kitaev.
Grover's database search and variants are explained in section $8$.
Section $9$ discusses the origins for the power of quantum computation,
while section $10$ discusses weaknesses of quantum computers.
Sections $11,12$ and $13$ are devoted to noise, error correction and
fault tolerant computation. In Section $14$ I conclude with a few
remarks of a philosophical flavor.
\section{What is a Computer?}
Let us discuss now the basic notions of computational complexity theory:
Turing machines, Boolean circuits, computability and computational complexity.
The important complexity classes $P$ and $NP$ are also defined in this section.
For more background, consult \cite{fft,papa}.
We begin with the definition of a Turing machine:
\begin{deff} A Turing machine (TM) is a triplet $M=(\Sigma,K,\delta)$. \begin{enumerate} \item $\Sigma=\{\sqcup,0,1,...\}$ is a finite set of symbols which we call the alphabet. $\sqcup$ is a special symbol called the blank symbol. \item $K$ is a finite set of ``machine states'', with two special states: $s\in K$ the initial state and $h\in K$ the final state. \item A transition function $ \delta: K\times \Sigma \longmapsto K\times
\Sigma\times \{-1,0,1\} $ \end{enumerate} \end{deff}
The machine works as follows: the tape has a head which can read and write on the tape during the computation. The tape is thus used as working space, or memory. The computation starts with an input of $n$ symbols written in positions $[1,...n]$
on the tape;
all other symbols are blank ($\sqcup$), the head is initially at position $1$, and the state is initially $s$. At each time step, the machine evolves according to the transition function $\delta$ in the following way. If the current state of the machine is $q$, the symbol in the current place of the tape is $\sigma$, and $\delta(q, \sigma)=(q',\sigma',\epsilon)$, then
the machine state is changed to $q'$, the symbol under the head is replaced by $\sigma'$
and the tape head moves one step in direction $\epsilon$ (if $\epsilon=0$ the head does not move). Here is a schematic description of a Turing machine:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,35)(-60,0)
\put(20,0){\line(1,0){80}} \qbezier[10](10,0)(15,0)(20,0) \qbezier[10](100,0)(105,0)(110,0) \put(22,0){\line(0,1){7}}\put(26,3){\makebox(0,0){$\sqcup$}} \put(29,0){\line(0,1){7}}\put(33,3){\makebox(0,0){0}} \put(36,0){\line(0,1){7}}\put(39,3){\makebox(0,0){1}} \put(43,0){\line(0,1){7}}\put(46,3){\makebox(0,0){0}} \put(50,0){\line(0,1){7}}\put(54,3){\makebox(0,0){$\sqcup$}} \put(57,0){\line(0,1){7}}\put(61,3){\makebox(0,0){$\sqcup$}} \put(64,0){\line(0,1){7}}\put(68,3){\makebox(0,0){$\sqcup$}} \put(71,0){\line(0,1){7}}\put(75,3){\makebox(0,0){$\sqcup$}} \put(78,0){\line(0,1){7}}\put(82,3){\makebox(0,0){$\sqcup$}} \put(85,0){\line(0,1){7}}\put(89,3){\makebox(0,0){$\sqcup$}} \put(92,0){\line(0,1){7}}\put(96,3){\makebox(0,0){$\sqcup$}} \put(20,7){\line(1,0){80}} \qbezier[10](10,7)(15,7)(20,7) \qbezier[10](100,7)(105,7)(110,7) \put(60,20){\framebox(12,12){$q$}} \put(51,20){\oval(30,10)[br]} \put(51,10){\oval(10,10)[tl]} \put(46,10){\vector(0,-1){2}} \end{picture}
{~}
Note that the operation of the Turing machine is local: It depends only on the current state of
the machine and the symbol written in the current position of the tape. Thus the operation of the machine is a sequence of {\it elementary steps} which require a constant amount of effort. If the machine gets to the final state $h$, we say that the machine has ``halted''.
What is written at that time on the tape should contain the output. (Typically, the output will be given in the form ``yes'' or ``no''.) One can easily construct examples in which the machine never halts on a given input, for example by entering into an infinite loop.
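As a minimal illustration (a toy example of mine, not taken from the references), consider a machine with $\Sigma=\{\sqcup,0,1\}$ and $K=\{s,h\}$ whose transition function is
\[
\delta(s,0)=(s,1,+1),\qquad \delta(s,1)=(s,0,+1),\qquad \delta(s,\sqcup)=(h,\sqcup,0).
\]
Started on any input string, this machine sweeps to the right, flipping each bit, and halts on the first blank with the bitwise complement of the input written on the tape.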
According to the definition above, there are many possible Turing machines, each designed to compute a specific task, according to the transition function. However, there exists one Turing machine, $U$, which, when presented with an input, interprets this input as a description of another Turing machine, $M$, concatenated with the description of the input to $M$, call it $x$. $U$ will simulate efficiently the behavior of $M$ when presented with the input $x$, and we write $U(M,x)=M(x)$. This $U$ is called a
{\it universal} Turing machine. More precisely, the description of $M$ should be given with some fixed notation. Without loss of generality, all the symbols
and states of $M$ can be given numbers from $1$ to $|K|+|\Sigma|$.
The description of $M$ should contain $|K|$, $|\Sigma|$ and the transition function, which will be described by a (finite) set of rules of the form $((q, \sigma) (q',\sigma',\epsilon))$. For this, $U$'s set of symbols will contain the symbols ``('' and ``)'' apart from $\sqcup,0,1$. $U$ will contain a few machine states, such as: ``$q_1$: now reading input'', ``$q_2$: looking for an appropriate rule to apply'' and so on. I will not go through the details, but it should be convincing that with such a finite set of states, $U$ can simulate the operation of any $M$ on any input $x$, because the entire set of rules of the transition function is written on the tape.
The existence of a universal Turing machine leads naturally to the deep and beautiful subject of {\it non-computability.}
A function is non-computable if it cannot be computed by a Turing machine,
i.e. there is no Turing machine which for any given input, halts and
outputs the correct answer. The most famous example is the HALTING problem. The problem is this: Given a description of a Turing machine $M$ and its input $x$, will $M$ halt on $x$?
\begin{theo} There is no Turing machine that solves the HALTING problem on all inputs $(M,x)$. \end{theo}
{\bf Proof:} The proof of this theorem is conceptually puzzling. It uses the so called diagonal argument. Assume that $H$ is a Turing machine, such that $H(M,x)$ is $''yes'' $ if $M(x)$ halts and $''no''$ otherwise. Modify $H$ to obtain $\tilde{H}$, such that
\begin{eqnarray*} H(M,M)=''\rm{yes}''& \longmapsto &\tilde{H}(M)~ \rm{enters ~ an ~infinite~ loop}.\\ H(M,M)=''\rm{no} ''& \longmapsto &\tilde{H}(M)=''\rm{yes}''. \end{eqnarray*}
The modification is done easily by replacing a few rules in the transition
function of $H$.
A rule which writes ``yes'' on the tape and causes $H$ to halt
is replaced by a rule that takes the machine into an infinite loop.
A rule which writes ``no'' on the tape and causes $H$ to halt
is replaced by a rule that writes ``yes'' on the tape and then halts $H$.
This way, $\tilde{H}$ is a ``twisted'' version of $H$. Now, does
$\tilde{H}(\tilde{H})$ halt or not? We obtain a contradiction in both ways. Suppose it does halt. This means that $H(\tilde{H},\tilde{H})=''no''$ so $\tilde{H}(\tilde{H})$ does not halt! If $\tilde{H}(\tilde{H})$ does not halt, this means
$H(\tilde{H},\tilde{H})=''yes''$ so $\tilde{H}(\tilde{H})$ does halt! $\bbox$
This beautiful proof shows that there are functions which cannot be computed. The Turing machine is actually used to {\it define} which functions are computable and which are not.
It is sometimes more convenient to use another universal model,
which is polynomially equivalent to Turing machines, called the Boolean circuit model. We will use the quantum analog of this model throughout this review.
A Boolean circuit is a directed acyclic graph, with nodes which are associated with Boolean functions. These nodes are sometimes called {\it logical gates}. A node with $n$ input wires and $m$ output wires is associated with a function $f:\{0,1\}^n \longmapsto \{0,1\}^m$. Here is a simple example:
\setlength{\unitlength}{0.030in} \begin{picture}(40,40)(-80,5) \put(0,15){\vector(1,0){10}} \put(0,25){\vector(1,0){10}} \put(0,35){\vector(1,0){50}} \put(10,13){\framebox(15,15){$OR$}} \put(25,25){\vector(1,0){5}} \put(30,21){\framebox(15,7){$NOT$}} \put(45,25){\vector(1,0){5}} \put(50,23){\framebox(15,15){$AND$}} \put(65,30){\vector(1,0){10}} \end{picture} {~}
Given some string of bits as input, the wires carry the values of the bits, until a node is reached. The node computes a logical function of the bits (this function can be NOT, OR, AND, etc.)
The output wires of the node,
carry the output bits to the next node, until the computation ends at the output wires. The input wires can carry {\it constants} which do not vary with the different inputs to the circuit, but
are part of the hardware of the circuit. In a Turing machine the transition function is local, so the operation is a sequence of elementary steps. In the circuit model the same requirement translates to the fact that the gates are local, i.e. that the number of wires which each node operates on is bounded above by a constant.
To measure the cost of the
computation we can use different parameters: $S$, the number of gates in the circuit, or $T$, the time, or {\it depth} of the circuit. In this review, we will mainly be considered with $S$, the number of gates. We will be interested in the behavior of the cost, $S$, as a function of the size of the input, i.e. the number of wires input to the circuit, which we will usually denote by $n$. To find the cost function $S(n)$, we will look at a function $f$ as a family of functions $\{f_n\}_{n=1}^{\infty}$,
computed by a family of circuits $\{C_n\}_{n=1}^{\infty}$,
each operating on $n$ input bits; $S(n)$ will be the size of the circuit $C_n$.
I would like to remark here on an important distinction between the model of Turing machines and that of circuits.
A lot of information can get into the circuit through the hardware.
If we do not specify how long it takes to design the hardware, such circuits can compute even non-computable functions. This can be easily seen by an example. Define the circuit $C_n$ to be a very simple circuit, which outputs a constant bit regardless of the $n$ input bits. This constant bit will be $0$ or $1$ according to whether the $n'$th Turing machine, $M_n$ (ordered according to the numerical description of Turing machines) halts on the input $M_n$ or not. The family of circuits $\{C_n\}_{n=1}^{\infty}$ computes the non-computable HALTING problem with all the circuits having only one gate! This unreasonable computational power of circuits is due
to the fact that we haven't specified who constructs the hardware of the circuit.
We want to avoid such absurdity and concentrate on interesting
and realistic cases. We will therefore require that the hardware of the circuits
which compute $\{f_n\}_{n=1}^{\infty}$ can be designed with polynomial cost by a Turing machine. The Turing machine is given as an input the integer $n$, and outputs the specification of the circuit $C_n$. This model is called the ``uniform circuit model'', as opposed to the ``non uniform'' one, which is too strong. The models of uniform Boolean circuits and Turing machines are polynomially equivalent. This means that given a Turing machine which computes in polynomial time $f(x)$, there is a family of polynomial circuits $\{C_n\}_{n=0}^{\infty}$, specified by a polynomial Turing machine, such that $C_n$ computes $f_n$. This correspondence is true also in reverse order, i.e. given the family of circuits there is a Turing machine that simulates them. Therefore the complexity of a computation does not depend (except for polynomial factors) on the model used. From now on, we will work only in the uniform circuit model.
One of the main questions in this review is whether the cost of the computation
grows like a polynomial in $n$ or an exponential in $n$. This distinction might seem arbitrary, but is better understood in the context of the complexity classes $P$ and $NP$.
The complexity class $P$ is essentially the class of "easy" problems,
which can be solved with polynomial cost:
\begin{deff} {\bf: Complexity class P}
\noindent $ f=\{f_n\}_{n=1}^{\infty}\in P$ if there exists a uniform family of circuits $\{C_n\}_{n=1}^{\infty}$ of poly($n$) size, where $C_n$ computes the function $f_n(x)$ for all $x\in \{0,1\}^n$. \end{deff}
The class of {\it Non-deterministic Polynomial time} (in short, $NP$)
is a class of much harder problems.
For a problem to be in $NP$, we do not require that there exists a
polynomial algorithm that solves it. We merely require that there exists an
algorithm which can verify that a solution is correct in polynomial time.
Another way to view this is that the algorithm is provided with
the input for the problem and a {\it hint}, but the hint may be misleading.
The algorithm should solve the problem in polynomial time when the hint is good,
but it should not be mislead by bad hints.
In the formal definition which follows, $y$ plays the role of the hint.
\begin{deff} {\bf: Complexity class NP}
\noindent $f=\{f_n\}_{n=1}^{\infty}\in NP$ if
there exists a uniform family of circuits, $\{C_n\}_{n=1}^{\infty}$, of poly($n$) size, such that
$~~~~~~~$ If $x$ satisfies $f_n(x)=''yes''$ $\longmapsto$
there exists a string $y$ of $\rm{poly}(n)$ size such that $C_n(x,y)=1$,
$~~~~~$ If $x$ satisfies $f_n(x)=''no''$ there is no such $y$, i.e. for all $y's$,
$C_n(x,y)=''no''$. \end{deff}
To understand this formal definition better,
let us consider the following example for an $NP$ problem which is called {\it satisfiability}:
\begin{quote} {\bf Input}: A formula of $n$ Boolean variables, $X_1,...X_n$, of the form \[ g(X_1,...X_n)=( X_{i}\cup \neg X_{j} \cup X_k )\bigcap
( X_{m}\cup \neg X_{i})....\] which is the logical AND of poly$(n)$ clauses, each clause is the logical OR of poly$(n)$ variables or their negation.
{\bf Output}: $f(g)=1$ if there exists a satisfying assignment of the variables $X_1,...X_n$ so that $g(X_1,...X_n)$ is true. Else, $f(g)=0$. \end{quote}
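For concreteness, here is a tiny illustrative instance (my own, added only to make the definition tangible): the two-variable formula $( X_{1}\cup \neg X_{2} )\bigcap ( \neg X_{1}\cup X_{2} )$ is satisfiable, e.g. by the assignment $X_1=X_2=1$, so $f=1$ for it, whereas $( X_{1} )\bigcap ( \neg X_{1} )$ admits no satisfying assignment, so $f=0$.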
To see that satisfiability is in $NP$, define
the circuit $C_n$ to
get as input the specification of the
formula $g$ and a possible assignment $X_1,...X_n$. The circuit will output $C_n(g,X_1,...X_n)=g(X_1,...X_n)$. It is easy to see that these circuits satisfy the requirements of the definition of $NP$ problems. However, nobody knows how to build a polynomial circuit which gets $g$ as an input, and finds whether a satisfying assignment exists. It seems impossible to find a satisfying assignment without literally checking all $2^n$ possibilities. Hence satisfiability is not known to be in $P$.
Satisfiability
belongs to a very important subclass of $NP$, namely the $NP$ {\it complete}
problems. These are the hardest problems in $NP$,
in the sense that if we know how to solve an NP-complete problem efficiently, we can solve any problem in $NP$ with only polynomial slowdown. In other words, a problem $f$ is $NP$-complete if any
NP problem can be {\it reduced} to $f$ in polynomial time.
Garey and Johnson\cite{grey} give hundreds of examples of
$NP$-complete problems, all of which
are {\it reducible} one to another with polynomial slowdown,
and therefore they are all equivalently hard. As of now, the best known algorithm for any
$NP$-complete problem is exponential, and the widely believed
conjecture is that there is no polynomial algorithm,
i.e. $P\not= NP$.
Perhaps the most important open question in complexity theory
today, is proving this conjecture.
Another interesting class consists of those problems solvable with negligible probability for error in polynomial time by a probabilistic Turing machine.
This machine is defined exactly as deterministic TM, except that
the transition function can change the configuration in one of several possible ways, randomly. Equivalently, we can define {\it randomized circuits},
which are Boolean circuits with the advantage that apart from the input of $n$ bits, they also get as input random bits which they can use as random coin flips. The class of problems solvable by uniform polynomial randomized circuits with bounded error probability is called
$BPP$ ({\it bounded probability polynomial}): \begin{deff} $f=\{f_n\}_{n=1}^{\infty}\in BPP$ if
there exists a family of uniform randomized circuits, $\{C_n\}_{n=1}^{\infty}$, of poly($n$) size such that $\forall x\in \{0,1\}^n,$ probability$(C_n(x,y)=f_n(x))\ge2/3$, where
the probability is measured with respect to a uniformly random $y$. \end{deff} Until the appearance of quantum computers, the modern Church thesis which asserts that a
probabilistic Turing machine, or equivalently randomized uniform circuits, can simulate
any reasonable physical device in polynomial time, held with no counterexamples. The quantum model, which I will define in the next chapter, is the only model which seems to be qualitatively different from all the others. We can define the quantum complexity classes: \begin{deff} The complexity classes $QP$ and $BQP$ are defined like
$P$ and $BPP$, respectively, only with quantum circuits. \end{deff} It is known that $P\subseteq QP$ and $BPP \subseteq BQP$, as we will see very soon.
\section{The Model of Quantum Computation} Deutsch was the first to define a rigorous model of quantum computation, first of quantum Turing machines\cite{deutsch1} and then of quantum circuits\cite{deutsch2}. I will describe first the model of quantum circuits, which is much simpler. At the end of the chapter, I present the model of quantum Turing machines, for completeness.
For background on basic quantum mechanics such as Hilbert spaces, Schr$\ddot{\rm{o}}$dinger equation and measurements I recommend to consult the books by Sakurai\cite{sakurai}, and by Cohen-Tanoudji\cite{cohentan}. As for more advanced material, the book by Peres\cite{peres} would be a good reference. However, I will give here all the necessary definitions.
A quantum circuit is a system built of two state quantum particles, called qubits. We will work with $n$ qubits, the state of which is a unit vector in the complex Hilbert space
${\cal C}^2\otimes {\cal C}^2\otimes \cdots\otimes {\cal C}^2$. As the natural basis for this space, we take the basis consisting of $2^n$ vectors: \begin{eqnarray}
|0\ra\otimes|0\ra\otimes\cdots\otimes|0\ra~\\
|0\ra\otimes|0\ra\otimes\cdots\otimes|1\ra\nonumber~\\
\vdots~~~~~~~~~~~~~~~~~~~~~\nonumber\\
|1\ra\otimes|1\ra\otimes\cdots\otimes|1\ra\nonumber.
\end{eqnarray}
For brevity, we will sometimes omit the tensor product, and denote
\begin{eqnarray}
|i_1\ra\otimes|i_2\ra\otimes\cdots\otimes|i_n\ra=|i_1,i_2,...,i_n\ra\equiv|i\ra
\end{eqnarray}
where $i_1,i_2,...,i_n$ is the binary representation of the integer $i$, a number between $0$ and $2^n-1$. This is an important step, as this representation allows us to use our quantum system to encode integers. This is where the quantum system starts being a computer. The general state which describes this system is a complex unit vector in the Hilbert space, sometimes called the {\it superposition:} \begin{equation}
\sum_{i=0}^{2^n-1} c_i |i\ra
\end{equation}
where $\sum_i |c_i|^2=1$. The initial state will correspond to the ``input''
for the computation. Let us agree that for an input string $i$,
the initial state of the system will be $|i\ra$: \begin{equation}
i \longmapsto |i\ra \end{equation} We will then perform ``elementary operations'' on the system. These operations will correspond to the computational steps in the computation, just like logical gates are the elementary steps in classical computers. In the meantime we will assume that all the operations are performed on an isolated system, so the evolution can always be described by a unitary matrix operating on the state of the system. Recall that a unitary matrix satisfies $UU^{\dagger}=I$, where $U^{\dagger}$ is the
transposed complex conjugate of $U$.
\begin{deff} A {\it quantum gate} on $k$ qubits is
a unitary matrix $U$ of dimensions $2^k\times 2^k$. \end{deff}
Here is an example of a simple quantum gate, operating on one qubit. \begin{equation} NOT=\left(\begin{array}{ll} 0&1\\ 1&0 \end{array} \right) \end{equation}
Recalling that in our notation
$|0\ra=(1,0)$ and $|1\ra=(0,1)$, we have that $NOT|0\ra=|1\ra$ and
$NOT|1\ra=|0\ra$. Hence, this gate flips the bit, and thus it is justified to call this gate the $NOT$ gate. The $NOT$ gate can operate on superpositions as well. From linearity of the operation,
\[NOT(c_0|0\ra+c_1|1\ra)= c_0|1\ra+c_1|0\ra.\]
This linearity is responsible for the quantum parallelism (see Margolus\cite{margolus2}) which we
will encounter in all powerful quantum algorithms. When the NOT gate operates on the first qubit in a system of $n$ qubits, in the state
$\sum_i c_i |i_1i_2...i_n\ra $ this state transforms to
$\sum_i c_i (NOT|i_1\ra)|i_2...i_n\ra=\sum_i c_i |\neg i_1i_2...i_n\ra $.
Formally, the time evolution of the system is described by a unitary matrix,
which is a tensor product of the gate operating on the first qubit and the identity
matrix $I$ operating on the rest of the qubits.
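As a concrete numerical illustration (my own sketch, not part of the text), the following few lines of Python show this linearity, and show that a gate acting on the first of two qubits is the tensor product of the gate with the identity on the untouched qubit:
\begin{verbatim}
# Sketch: a one-qubit gate acts linearly, and on a register it is
# tensored with the identity on the untouched qubits.
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]])
I2 = np.eye(2)

# One qubit: NOT acts linearly on the superposition c0|0> + c1|1>.
c0, c1 = 0.6, 0.8
print(NOT @ np.array([c0, c1]))          # -> [0.8, 0.6] = c0|1> + c1|0>

# Two qubits: NOT on the first qubit is the matrix NOT (x) I.
state = np.array([1, 1, 0, 0]) / np.sqrt(2)    # (|00> + |01>)/sqrt(2)
print(np.kron(NOT, I2) @ state)                # -> (|10> + |11>)/sqrt(2)
\end{verbatim}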
Another important quantum gate is the {\it controlled} $NOT$ gate acting on two qubits, which computes the classical function:
$(a,b)\longmapsto (a, a \oplus b)$ where $a \oplus b = (a+b)$ mod $2$ and $a,b \in \{0,1\}$. This function can be represented by the matrix operating on all $4$ configurations of $2$ bits:
\begin{equation} CNOT=\left(\begin{array}{llll} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0 \end{array} \right) \end{equation} The above matrix, as all matrices in this review, is written in the computational basis in lexicographic order. This gate is also called the
{\it exclusive or} or XOR gate (On its importance see \cite{divincenzo2}.) The XOR gate applies a $NOT$ on the second bit, called the {\it target} bit, conditioned that the first {\it control} bit is $1$. If a black circle denotes the bit we condition upon, we can denote the XOR gate by:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,25)(-90,10)
\put(5,15){\line(1,0){7}} \put(15,15){\line(1,0){7}} \put(5,30){\line(1,0){17}}
\put(13,30){\circle*{3}}
\put(13,16){\line(0,1){14}}
\put(12,13){\makebox(3,3){$\oplus$}} \end{picture}
In the same way, all classical Boolean functions can be transformed to quantum gates. The matrix representing a classical
gate which computes a reversible function (in particular, the number of inputs to the gate equals the number of outputs) is a permutation on all the possible classical strings. Such a permutation is easily seen to be unitary. Of course, not all functions are reversible, but they can easily be converted to reversible functions,
by writing down the input bits instead of erasing them. For a function $f$ from $n$ bits to $m$ bits, we get the reversible function from $m+n$ bits to $m+n$ bits:
\begin{equation}\label{rev}\begin{array}{c}
f: i\longmapsto f(i)\\
\Downarrow\\
f_r: (i,j)\longmapsto (i,f(i)\oplus j).
\end{array}\end{equation}
Applying this method, for example, to the logical AND gate, \( (a,b) \longmapsto ab, \)
we obtain the well-known Toffoli gate\cite{toffoli} $ (a,b,c)\longmapsto (a,b,c\oplus ab),$ which is described by the unitary matrix on three qubits: \begin{equation}\label{tof} T= \left( \begin{array}{llllllll}
1&&&&&&&\\
& 1 &&&&&&\\
& &1&&&&&\\ & &&1&&&&\\ & &&&1&&&\\ & &&&&1&&\\ & &&&&& 0& 1\\ & &&&&& 1& 0 \end{array}\right) \end{equation} The Toffoli gate applies NOT on the last bit, conditioned that the other bits are $1$, so we can describe it by the following diagram:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,45)(-80,5)
\put(0,15){\line(1,0){8}} \put(23,15){\line(1,0){7}} \put(0,30){\line(1,0){30}} \put(0,45){\line(1,0){30}}
\put(15,30){\circle*{3}} \put(15,45){\circle*{3}}
\put(15,22){\line(0,1){25}}
\put(8,10){\framebox(15,12){$NOT$}}
\end{picture}
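For readers who want to check such matrices numerically, here is a small sketch (my own illustration) that builds the permutation matrix of any reversible classical gate and reproduces the $8\times 8$ Toffoli matrix of equation \ref{tof}:
\begin{verbatim}
# Sketch: the unitary matrix of a reversible classical gate is a
# permutation matrix on the basis states (lexicographic order).
import numpy as np

def classical_to_unitary(f, n_bits):
    dim = 2 ** n_bits
    U = np.zeros((dim, dim))
    for x in range(dim):
        U[f(x), x] = 1          # column x has a single 1, in row f(x)
    return U

# Toffoli: (a, b, c) -> (a, b, c XOR ab), a being the most significant bit.
def toffoli(x):
    a, b, c = (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (a << 2) | (b << 1) | (c ^ (a & b))

print(classical_to_unitary(toffoli, 3).astype(int))
\end{verbatim}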
Quantum gates can perform more complicated tasks than simply computing classical functions. An example of such a quantum gate, which is not a classical gate in disguise, is
a gate which applies a general rotation on one qubit:
\begin{equation} G_{\theta,\phi}=\left(\begin{array}{ll} \cos(\theta) & \sin(\theta)e^{i\phi}\\ -\sin(\theta)e^{-i\phi} & \cos(\theta) \end{array} \right) \end{equation}
To perform a quantum computation, we apply a sequence of elementary quantum gates
on the qubits in our system. Suppose now, that we have applied all the quantum gates
in our algorithm, and the computation has come to an end. The state which was initially a basis state has been
{\it rotated} to the state $|\alpha\ra\in C^{2^n}$. We now want to extract the output from this state. This is done by the process of {\it measurement}.
The notion of measurement in quantum mechanics is puzzling. For example, consider
a measurement of a qubit in the state $|\alpha\rangle=c_0|0\rangle +c_1|1\rangle$.
This qubit is neither in the state $|0\rangle$ nor in $|1\rangle$. Yet, the {\it measurement postulate}
asserts that when the state of this qubit is observed, it must decide on one of the two possibilities.
This decision is made non-deterministically.
The classical outcome of the measurement
would be $0$ with probability $|c_0|^2$ and $1$ with probability
$|c_1|^2$. After the measurement, the state of the qubit is
either $|0\rangle$ or $|1\rangle$, in consistency with the classical outcome
of the measurement. Geometrically, this process can be interpreted as a projection of the state on one of the two orthogonal subspaces, $S_0$ and $S_1$,
where $S_0=\rm{span}\{|0\ra\}$ and $S_1=\rm{span}\{|1\ra\}$; a measurement of the qubit $|\alpha\rangle$ is actually an observation of which of the two subspaces the state is in, in spite of the fact that the state might lie in neither.
The probability that the decision is $S_0$
is the norm squared of the projection of $|\alpha\ra$
on $S_0$, and likewise for
$1$. Due to the fact that the norm of $|\alpha\ra$ is one,
these probabilities add up to one. After the measurement $|\alpha\ra$
is projected to the space $S_0$ if the answer is $0$, and to the space $S_1$ if the answer is $1$.
This projection is the famous {\it collapse} of the wave function.
Now what if we measure a qubit in a system of $n$ qubits?
Again, we project the state onto one of two subspaces, $S_0$ and $S_1$,
where $S_a$ is the subspace spanned by all
basis states in which the measured qubit is $a$. The rule is that
if the measured superposition is
$\sum_i c_i |i_1,...i_n\ra$, a measurement of the first qubit will give the outcome $0$ with probability
$\rm{Prob}(0)=\sum_{i_2,...i_n} |c_{0,i_2,...i_n}|^2$, and the superposition will collapse to
\[\frac{1}{\sqrt{\rm{Prob}(0)}} \sum_{i_2,...i_n} c_{0,i_2,...i_n}|0,i_2,...i_n\ra,\] and likewise with $1$. Note the division by $\sqrt{\rm{Prob}(0)}$, which renormalizes the projected state. Here is a simple example: Given the state of two qubits:
\[\frac{1}{\sqrt{3}} \big(|00\ra+|01\ra-|11\ra\big),\] the probability to measure $0$ in the left qubit is $2/3$, and the probability to measure $1$ is $1/3$. After measuring the left qubit, the state has collapsed to
$\frac{1}{\sqrt{2}} \big(|00\ra+|01\ra\big)$ with probability Pr$(0)=2/3$
and to $ -|11\ra$ with probability Pr$(1)=1/3$. Thus, the resulting state depends on the outcome of the measurement. After the collapse, the projected state is renormalized back to $1$.
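The following short simulation sketch (my own; the random seed is arbitrary) implements this measurement rule and applies it to the two-qubit example above:
\begin{verbatim}
# Sketch: measuring the left qubit of (|00> + |01> - |11>)/sqrt(3).
import numpy as np

state = np.array([1, 1, 0, -1]) / np.sqrt(3)   # basis order |00>,|01>,|10>,|11>

def measure_first_qubit(state, rng):
    half = len(state) // 2
    p0 = np.sum(np.abs(state[:half]) ** 2)      # probability of outcome 0
    if rng.random() < p0:
        post = np.concatenate([state[:half], np.zeros(half)]) / np.sqrt(p0)
        return 0, post
    post = np.concatenate([np.zeros(half), state[half:]]) / np.sqrt(1 - p0)
    return 1, post

rng = np.random.default_rng(0)
outcome, post = measure_first_qubit(state, rng)
print(outcome, np.round(post, 3))   # 0 with prob 2/3, 1 with prob 1/3
\end{verbatim}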
We can now summarize the definition of the model of quantum circuits.
A quantum circuit is a directed acyclic
graph, where each node in the graph is associated with a quantum gate.
This is exactly the definition from section $2$ of classical Boolean circuits,
except that the gates are quantum. The input for the circuit is a basis
state, which evolves in time according to the operations of the quantum gates. At the end of the computation we apply measurements on the output qubits (the order does not matter). The string of classical outcome
bits is the classical
output of the quantum computation. This output is in general probabilistic. This concludes the definition of the model.
Let us now
build a repertoire of quantum computations step by step. We have seen that classical gates can be implemented quantumly, by making the computation reversible. More explicitly, \begin{lemm} Let $f$ be a function from $n$ bits to $m$ bits, computed by a Boolean circuit $C$ of size $S$. There exists a quantum circuit $Q$ which computes the unitary transformation on $n+m$ qubits:
$|0^b,i,j\ra \longmapsto |0^b,i,f(i)\oplus j\ra$. $b$ and the size of $Q$ are linear in $S$. \end{lemm} {\bf Proof:} Replace each gate in $C$ by its reversible extension, according to equation \ref{rev}. We will add $b$ extra bits for this
purpose. The input for this circuit is thus $(0^b,i)$. The modified $C$, denoted by $ \tilde{C},$
can be viewed as a quantum circuit since all its nodes correspond to unitary matrices. The function that it computes is still not the required function, because the input $i$ is not necessarily part of the output as it should be.
To solve this problem, we add to $\tilde{C}$ $m$ extra wires, or qubits.
The input to these wires is $0$. At the end of
the sequence of gates of $\tilde{C}$, we copy the $m$ ``result'' qubits in $\tilde{C}$ on
these $m$ blank qubits by applying $m$ CNOT gates. We now apply in reverse order the reversed gates of all the gates applied so far, except the $CNOT$ gates. This will reverse all operations, and retain the input $(0^b,i)$, while the $m$ last qubits contain the desired $f(i)$. $\bbox$
The state of the system is always a basis state
during the computation which is described in the proof. Hence measurements of the final state will yield exactly the expected result. This shows that any computation which can be done classically can also be done quantumly with the same efficiency, i.e. the same order of number of gates. We have shown: \begin{theo} $P\subseteq QP$ \end{theo}
In the process of conversion to reversible operations,
each gate is replaced by a gate operating on more qubits. This means that making circuits reversible costs
a linear number of extra qubits. In \cite{bennett9}, Bennett used a nice pebbling argument to show that the space cost can be decreased to a logarithmic factor with only a minor cost in time: $T\longmapsto T^{1+\epsilon}$.
Thus the above conversion to quantum circuit can be made very efficient.
To implement classical computation we must also show how to implement probabilistic algorithms. For this we need a quantum subroutine that generates a random bit. This is done easily by measurements. We define the Hadamard gate which acts on one qubit. It is an extremely useful gate in quantum algorithms.
\begin{equation}\label{hadamard} H=\left(\begin{array}{ll} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \end{array} \right) \end{equation}
Applying this gate on a qubit in the state $|0\ra$ or $|1\ra$, we get a superposition:
\(\frac{1}{\sqrt{2}}(|0\ra\pm|1\ra)\). A measurement of this qubit yields a random bit. Any classical circuit that uses random bits can be converted to a quantum circuit by replacing the gates with reversible gates and adding the ``quantum random bit'' subroutine when needed.
Note that here we allow measuring in the middle of the computation. This shows that: \begin{theo} $BPP\subseteq BQP$ \end{theo}
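As a toy illustration (my own sketch, simulated classically), the ``quantum random bit'' subroutine amounts to the following:
\begin{verbatim}
# Sketch: |0> --H--> (|0>+|1>)/sqrt(2); measuring yields a fair coin flip.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

def quantum_coin_flip(rng):
    amps = H @ np.array([1.0, 0.0])                  # state after the Hadamard
    return int(rng.random() < np.abs(amps[1]) ** 2)  # outcome 1 w.p. |amp_1|^2

rng = np.random.default_rng(1)
print([quantum_coin_flip(rng) for _ in range(10)])
\end{verbatim}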
The repertoire of classical algorithms can therefore be simulated efficiently by
quantum computers. But quantum systems feature characteristics which are far more interesting. We will encounter these possibilities when we discuss quantum algorithms.
Let me define here also the
model of quantum Turing Machine\cite{deutsch1,bv,solovay} ($QTM$) which
is the quantum analog of the classical TM. The difference is that all the degrees of freedom become quantum: Each cell in the tape, the state of the machine, and the reading head of the tape can all be in linear superpositions of their different possible classical states.
\begin{deff} A quantum Turing machine is specified by the following items: \begin{enumerate} \item A finite alphabet $\Sigma=\{\sqcup,0,1...\}$ where $\sqcup$ is the blank symbol. \item A finite set $K=\{q_0,...q_s\}$ of ``machine states'', with $h,s\in K$ two special states. \item A transition function $ \delta: K\times \Sigma\times K\times
\Sigma\times \{-1,0,1\} \longmapsto {\cal C}$ \end{enumerate} \end{deff}
As in classical TM, the tape is associated a head that reads and writes on that tape. A classical configuration, $c$, of the Turing machine is specified by the head's position, the contents of the tape and the machine's state. The Hilbert space of the QTM is defined as the vector space, spanned by all possible classical configurations
$\{|c\ra\}$. The dimension of this space is infinite. The computation starts with the QTM in a basis state $|c\ra$, which corresponds to the following classical configuration: An input of $n$ symbols is written in positions $1,..,n$
on the tape,
all symbols except these $n$ symbols are blank ($\sqcup$), and the head is at position $1$. Each time step, the machine evolves according to an infinite unitary matrix which is defined in the following way.
$U_{c,c'}$, the probability amplitude to transform from configuration
$c$ to $c'$ is determined by the transition function $\delta$. If in $c$, the state of the machine is $q$ and the symbol in the current place of the tape head is $\sigma$ then $\delta(q,\sigma,q',\sigma',\epsilon)$ is the probability amplitude to go from $c$ to $c'$, where $c'$ is equal to $c$ everywhere except locally. The machine state in $c'$, $q$, is changed to $q'$, the symbol under the head is changed to $\sigma'$ and the tape head moves one step in direction $\epsilon$. Note that the operation of the Turing machine is local, i.e. it
depends only on the current state of
the machine and the symbol now read by the tape. Unitarity of infinite matrices is not easy to check, and conditions for unitarity
were given by Bernstein and Vazirani\cite{bv}.
In my opinion, the QTM model is less appealing than the model of quantum circuits,
for a few reasons. First, QTMs involve infinite unitary matrices. Second, it seems very unlikely that a physical quantum computer
will resemble this model, because the head, or the apparatus executing the quantum operations, is most likely to be classical in its position and state. Another point is that the QTM model is a sequential model, which means that it is able to apply only one operation at each time step. Aharonov and Ben-Or showed\cite{aharonov2} that a sequential model is fundamentally incapable of operating fault tolerantly in the presence of noise. Above all, constructing algorithms is much simpler in the circuit model. For these reasons
I will restrict this review to quantum circuits. The model of quantum circuits, just like that of classical circuits, has a ``uniform'' and ``non-uniform'' versions. Again, we will restrict ourselves to the uniform model, i.e. quantum circuits which can be designed in polynomial time on a classical Turing Machine.
Yao\cite{yao} showed that uniform
quantum circuits are polynomially equivalent to quantum Turing machines, by a proof which is surprisingly complicated. This proof enables us the freedom of choosing whichever model is more convenient for us.
Another model worth mentioning in this context is
the quantum cellular automaton\cite{margolus2,watrous2,durr1,vandam2}. This model resembles quantum circuits, but is different in the fact that the operations are homogeneous, or periodic,
in space and in time. The definition of this model is subtle and, unlike the case of
quantum circuits, it is not trivial to decide whether a given quantum cellular automaton obeys the rules of quantum mechanics or
not\cite{durr1}. Another interesting quantum model is that of a finite state quantum automaton, which is similar to a quantum Turing machine except it can only read and not write, so it has no memory. It is therefore a very limited model. In this model Watrous\cite{watrous} showed an interesting algorithm which uses interference, and is able to compute a function which cannot be
computed in the analogous classical model.
\section{Universal Quantum Gates}
What kind of elementary gates can be used in a quantum computation program? We would like to write our program using elementary steps: i.e., the algorithm should be a sequence of steps, each potentially implementable in the laboratory.
It seems that achieving controlled interactions between a large number of qubits in one elementary step is extremely difficult. Therefore it is reasonable to require an ``elementary gate'' to operate on a small number of qubits, (independent of $n$ which can be very large.) We want our computer to be able to compute any function. The set of elementary gates used should thus be
{\it universal}. For classical reversible computation, there exists a single universal gate\cite{fredkin, toffoli}, called the Toffoli gate, which we have already encountered. This gate computes the function \[a,b,c \longmapsto a,b,ab\oplus c.\] The claim is that any reversible function can be represented as a concatenation of the Toffoli gate on different inputs. For example, to construct the logical AND gate on $a,b$, we simply input $c=0$, and the last bit will contain $ab\oplus 0=AND(a,b)$. To implement the NOT gate on the third bit we set the first two bits to be equal to $1$. We now have what is well known to be a universal set of gates, the NOT and AND gates. In the quantum case, the notion of universality is slightly more complicated, because operations are continuous. We need not require that all operations are achieved exactly, but a very good approximation suffices. The notion of approximation is very important in quantum computation. Frequently operations are approximated instead of achieved exactly, without significantly damaging the correctness of the computation. \begin{deff} {\bf Approximation:}
\noindent A unitary matrix $U$ is said to be approximated to within $\epsilon$
by a unitary matrix $U'$ if $|U-U'|\le \epsilon$. \end{deff} The norm we use is the one induced by the Euclidean norm on vectors in the Hilbert space.
Note that unitary transformations can be thought of as rigid rotations of the Hilbert space. This means that angles between vectors are preserved during the computation. The result of using $U'$
instead of $U$, where $|U-U'|\le \epsilon$,
is that the state is tilted by an angle of order $\epsilon$
from the correct state.
However this angle does not grow during the computation, because the rotation is rigid. The state always remains within $\epsilon$ angle from the correct state. Therefore the overall error in the entire computation is additive: it is at most the sum of the errors in all the gates. This shows that the accuracy to which the gates should be approximated is not very large. If $S$ gates are used in the circuit, it suffices to approximate each gate to within $O(\frac{1}{S})$, in order that the computation is correct with constant probability\cite{bv}.
We can now define the notion of universal gates, which approximate any possible quantum operation:
\begin{deff} {\bf Universal Set of Gates:}
A set of quantum gates, $\cal{G}$, is called {\it universal}
if for any $\epsilon$ and any unitary matrix $U$ on any number of bits, $U$ can be approximated to within $\epsilon >0$ by a sequence of gates from $\cal{G}$. In other words, the subgroup generated by $\cal{G}$ is dense in the group of unitary operators,
$U(n)$, for all $n$. \end{deff}
Deutsch was the first to show a universal elementary gate,
which operates
on three qubits\cite{deutsch2}.
Bernstein and Vazirani\cite{bv} gave another proof of universality in terms of $QTM$.
It was then shown by DiVincenzo that two-qubit gates are universal\cite{twobit}. This is an important result, since it seems impossible to
control interactions
between three particles, whereas two particle interactions are likely to be
much easier to implement. It was a surprising achievement, since in reversible classical computation, which is a special case of quantum computation, there is no set of two bit gates which is universal. Note that one qubit gate is certainly not enough to construct all operations. Barenco\cite{barenco1} and Deutsch {\it et.al}\cite{deutsch4}
showed that almost any two-bit gate is universal (See also Lloyd
\cite{lloyd2,lloyd4}). An improvement of DiVincenzo's result was achieved later by Barenco {\it et.al}\cite{barenco4}, where it was shown that the classical controlled not gate, together with all one-qubit gates construct a universal set as well! In fact, one 1-qubit gate and the controlled not gate will do. This is perhaps the simplest and most economic set constructed so far. Implementation of one qubit gates are feasible, and experimentalists have already implemented a controlled not gate \cite{turchette}. However, there are other possible sets of gates. Adleman, {\it et. al.}\cite{adleman} and Solovay\cite{solovay}
suggested a set of gates, where all entries of the matrices
are $\pm\frac{3}{5}$ and $\pm\frac{4}{5}$ and $\pm 1$. Other universal sets of gates were suggested in connection with fault tolerant quantum computation\cite{shor3,aharonov1,knill2}.
Why do we need so many possible universal sets to choose from? Universal sets of gates are our computer languages. At the lowest level,
we need {\it quantum assembly}, the machine language by which everything will be implemented. For this purpose, we will use the set which consists of the easiest gates to implement in the laboratory. Probably, the set of one and two qubit gates will be most appropriate. Another incentive is analyzing the complexity power of quantum computers. For this the set suggested by Solovay and by Adleman {\it et. al.} seems more appropriate. (Fortnow recently reported on bounds using this set\cite{fortnow}). We will see that for error correction purposes, we will need a completely different universal set of gates. An important question should arise here. If our computer is built using one set, how can we design algorithms using another set, and analyze the computational power using a third set? The answer is that since they are all universal sets, there is a way to {\it translate} between all these languages. A gate from one set can be approximated by a sequence of gates from another set. It turns out that in all the universal sets described here, the approximation to within $\epsilon$ of an operation on $k$ qubits takes
$\rm{poly}(\rm{log}(\frac{1}{\epsilon}),2^k)$ gates from the set. As long as the gates are local (i.e $k$ is constant) the translation between different universal sets is efficient.
Now that the concept of a universal set of gates is understood, I would like to present an example of a simple universal set of gates. It relies on the proof of Deutsch's universal gate. The idea underlying Deutsch's universal gate is that reversible computation is a special case of quantum computation. It is therefore natural that
universal quantum computation can be achieved by generalizing universal reversible computation. Deutsch showed how to generalize Toffoli's gate so that it becomes a universal gate for quantum computation:
{~}
{~}
\setlength{\unitlength}{0.030in} \begin{picture}(40,40)(-80,5) \put(0,15){\line(1,0){10}} \put(20,15){\line(1,0){10}} \put(0,30){\line(1,0){30}} \put(0,45){\line(1,0){30}} \put(15,30){\circle*{3}} \put(15,45){\circle*{3}} \put(15,20){\line(0,1){25}} \put(10,10){\framebox(10,10){$Q$}} \end{picture}
The $NOT$ matrix in the original Toffoli gate (see equation \ref{tof})
is replaced by another
unitary matrix on one qubit, $Q$, such that
$Q^n$ can approximate any $2\times 2$ matrix. I will present here a modification of Deutsch's proof, using two gates of the above form. Define: \begin{equation}
U= \left( \begin{array}{ll}
\rm{cos}(2\pi\alpha)&\rm{sin}(2\pi\alpha)\\
-\rm{sin}(2\pi\alpha)&\rm{cos}(2\pi\alpha) \end{array}\right),
W= \left( \begin{array}{ll}
1&0\\ 0& e^{i2\pi\alpha} \end{array}\right). \end{equation} We have freedom in choosing $\alpha$, except we require that
the sequence \newline\(\alpha ~\rm{mod}~1, 2\alpha ~\rm{mod}~1, 3\alpha ~\rm{mod}~1,...\)
hits the $\epsilon$-neighborhood of any number in $[0,1]$, within
poly($\frac{1}{\epsilon}$) steps. Clearly, $\alpha$ should be irrational,
but not all irrational numbers satisfy this property.
It is not very difficult to see that an irrational root of a polynomial of degree $2$
satisfies the required property. Let $U_3$ $(W_3)$ be the generalized Toffoli gate with $U$ ($W$) playing the role of the conditioned matrix, $Q$, respectively. Then \begin{theo} $\{U_3, W_3\}$ is a universal set of quantum gates. \end{theo}
\noindent{\bf Proof:} First, note that according to the choice of $\alpha$, $U$ approximates any rotation in the real plane, and $W$ approximates any rotation in the complex plane.
Given an $8\times 8$ unitary matrix $U$, let us denote its $8$ eigenvectors as $|\psi_j\ra$ with corresponding
eigenvalues $e^{i\theta_j}$.
$U$ is determined by \(U|\psi_j\ra= e^{i\theta_j}|\psi_j\ra.\) Define: \begin{equation}
U_k |\psi_j\ra = \left\{\begin{array}{ll}
|\psi_j\ra &\mbox{if $k\ne j$} \\
e^{i\theta_k}|\psi_k\ra & \mbox{if $k= j$}\end{array}\right. \end{equation} Then $U=U_7U_6....U_0$.
$U_k$ can be achieved by first taking $|\psi_k\ra$ to $|111\ra$,
by a transformation which we will denote by $R$.
After $R$ we apply $W$ the correct number of times to approximate
$|111\ra \longmapsto e^{i\theta_k}|111\ra$ and then
we take $|111\ra$ to $|\psi_k\ra$ by applying the reverse transformation of $R$, $R^{-1}$.
It is left to show how to apply $R$, i.e. how to take a general state
$|\psi\ra=\sum_{i=0}^{7} c_i |i\ra$ to $|111\ra$. For this, note that $U_3^n$ can approximate the Toffoli gate, and therefore can approximate
all permutations on basis states. To apply
$|\psi\ra\longmapsto |111\ra$, first
turn the coefficient on the coordinate $|110\ra$ to $0$. This is done by applying $W$ an appropriate number of times so that the phase
in the coefficient of $|110\ra$ will equal that of $|111\ra$. The coefficients now become $c_6=r_6e^{i\phi}, c_7=r_7e^{i\phi}$. Let $\theta$ be such that $r_6=r\rm{sin}(\theta),r_7=r\rm{cos}{\theta}$. Now apply $U$ an appropriate number of times to approximate a rotation by $-\theta$. This will transform all the
weight of $|110\ra$ to $|111\ra$.
In the same way we transform the weight from all coordinates to $|111\ra$, using permutations between
coordinates. This achieves $|\psi\ra\longmapsto |111\ra$, i.e. the transformation $R$.
$R^{-1}$ is constructed in the same way.
We have shown that all three qubit operations can be approximated. For operations on more qubits, note that the group generated by $\{U_m,W_m\}$ is dense in all operations on $m$ bits, by the same reasoning. To create $U_m$ ( $W_m$)
from $U_3$ ( $W_3$) use recursion: compute the logical AND of the first two bits by a Toffoli gate writing it on an extra bit, and then apply $U_{m-1}$ ( $W_{m-1}$). The reader can verify that the approximation is polynomially fast, i.e. for fixed $m$,
any unitary matrix on $m$ qubits
can be approximated to within $\epsilon$ by $poly(\frac{1}{\epsilon})$ applications of the gates $U_3$ and $W_3$. $\Box$
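To get a numerical feel for the role of the irrational $\alpha$ in this construction, here is a small sketch (my own; the choice $\alpha=\sqrt{2}-1$, a degree-$2$ irrational, and the target angle are arbitrary) showing that the multiples of $\alpha$ modulo $1$ quickly come close to any target, so that some power of $U$ approximates the desired rotation:
\begin{verbatim}
# Sketch: multiples of an irrational alpha (mod 1) approach any target angle.
import numpy as np

alpha = np.sqrt(2) - 1
target = 0.3                       # target angle, as a fraction of a full turn

best_n, best_err = None, 1.0
for n in range(1, 20000):
    err = abs((n * alpha) % 1.0 - target)
    if err < best_err:
        best_n, best_err = n, err

print(best_n, best_err)            # U^best_n approximates the target rotation
\end{verbatim}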
{~}
The generalized Toffoli gates operate on three qubits.
Barenco {\it et. al.}\cite{barenco4} show an explicit sequence of two bit gates
which constructs any matrix on three qubits,
of the form of a generalized Toffoli gate:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,60)(-10,0)
\put(0,15){\line(1,0){10}} \put(20,15){\line(1,0){10}} \put(0,30){\line(1,0){30}} \put(0,45){\line(1,0){30}}
\put(15,30){\circle*{3}} \put(15,45){\circle*{3}}
\put(15,20){\line(0,1){25}}
\put(10,10){\framebox(10,10){$Q$}}
\end{picture} \begin{picture}(20,60)(0,0)
\put(0,0){\makebox(20,60){$=$}}
\end{picture} \begin{picture}(90,60)(0,0)
\put(0,15){\line(1,0){10}} \put(20,15){\line(1,0){20}} \put(50,15){\line(1,0){20}} \put(80,15){\line(1,0){10}} \put(0,30){\line(1,0){90}} \put(0,45){\line(1,0){90}}
\put(30,30){\circle{6}} \put(60,30){\circle{6}}
\put(15,30){\circle*{3}} \put(45,30){\circle*{3}} \put(30,45){\circle*{3}} \put(60,45){\circle*{3}} \put(75,45){\circle*{3}}
\put(75,20){\line(0,1){25}} \put(15,20){\line(0,1){10}} \put(30,27){\line(0,1){18}} \put(45,20){\line(0,1){10}} \put(60,27){\line(0,1){18}}
\put(10,10){\framebox(10,10){$V$}} \put(70,10){\framebox(10,10){$V$}} \put(40,10){\framebox(10,10){$V^{\scriptsize \dag}$}}
\end{picture}
where $V=\sqrt{Q}$. Thus, two bit gates are universal. $\Box$
{~}
It was further shown\cite{barenco4} that a one-qubit matrix conditioned on one other qubit can be expressed as a sequence of one-qubit matrices and $CNOT$s. So the generalized Toffoli gate of Deutsch can be written as a finite sequence of one-qubit gates and $CNOT$s. This shows that the set $\{$one-qubit gates, $CNOT\}$ is universal.
The description above shows how to approximate unitary matrices
using poly($\frac{1}{\epsilon}$) gates from the universal set.
In fact, an exponentially faster approximation is possible
due to a theorem by Kitaev \cite{kitaev0},
which was also proved by Solovay\cite{solovay}:
\begin{theo}
Let the matrices $U_1,...U_r\in SU(n)$ generate a dense subset in $SU(n)$.
Then any matrix $U\in SU(n)$ can be approximated to within $\epsilon$ by a product
of poly$(\rm{log}(\frac{1}{\epsilon}))$ matrices from
$U_1,...U_r,U_1^{\dagger},...U_r^{\dagger}$.
\end{theo}
$SU(n)$ is the set of $n\times n$ unitary matrices
with determinant $1$.
Given a universal quantum set, we can easily convert it to a set in $SU(n)$
by multiplying each matrix with an overall complex scalar
of absolute value $1$, namely a phase.
This overall phase does not affect the result of any
measurement,
so any gate can be multiplied by a phase without affecting the computation.
We thus have:
\begin{coro}
The approximation rate of
any universal set of quantum gates is exponential.
\end{coro}
The idea of the proof of the theorem is to
construct finer and finer nets of points in $SU(n)$.
The $2k$'th net is constructed
by taking commutators of points from the $k$'th net.
Each point in the $k'$th net is a product of a linear (in $k$)
number of gates from the set of gates. It turns out that
the distance between two adjacent points in the net
decreases exponentially with $k$. $\bbox$
Having chosen the set of gates to write algorithms with, actually writing the algorithm in this assembler-like language
seems like a very tedious task! Just like higher languages in ordinary computer programming, it is desirable that quantum operations which are commonly used can be treated as black boxes,
without rewriting them from the beginning with elementary gates. Steps in this direction were made by \cite{barenco4,barenco3,beckman1, nielsen1,vedral}.
\section{Quantum Algorithms} The first and simplest quantum algorithm which achieves advantage over classical algorithms was presented by Deutsch and Jozsa\cite{deutsch2}. Deutsch and Jozsa's algorithm
addresses a problem which we have encountered before, in the context of probabilistic algorithms. \begin{quote}
$f$ is a Boolean function from $\{1,N\}$ to $\{0,1\}$. Assume $N=2^n$ for
some integer $n$. We are promised that $f(i)$ are either all equal to $0$, (``constant'') or half are $0$ and half are $1$ (``balanced''). We are asked to distinguish between the two cases. \end{quote}
The question is presented in the {\it oracle} setting. This means that the circuit does not get $f(1),....f(N)$ as input.
Instead, the circuit has access to an {\it oracle} for $f$.
A {\it query} to the oracle is a gate with $n$
input wires carrying an integer $i\in \{1,N\}$ in
bit representation. The output from the oracle gate is $f(i)$.
A quantum query to the oracle means applying the unitary transformation
\(|i\ra|j\ra\longmapsto |i\ra|j \oplus f(i)\ra\). The cost is measured by the number of queries to the oracle. A classical algorithm that solves this question exactly will need $O(N)$ queries. The quantum algorithm of Deutsch and Jozsa solves the problem exactly, with merely one quantum query! The algorithm makes use of a transformation known as the discrete Fourier transform over the group $Z_2^n$. \begin{equation}
|i\ra \stackrel{\rm Fourier~Transform}{\longrightarrow} \frac{1}{\sqrt{N}}
\sum_j (-1)^{i\cdot j}|j\ra \end{equation} where $i,j$ are strings of length $n$, and $i\cdot j=\sum_{k=1}^n i_kj_k ~mod ~2$,
the inner product of $i$ and $j$ modulo $2$. Meanwhile, we need only one easily verified fact about the Fourier transform over $Z_2^n$: To apply this transformation on $n$ qubits, we simply apply the Hadamard transform $H$ from equation \ref{hadamard} on each of the $n$ qubits. Note also that the reversed Fourier transform, $FT^{-1}$ is equal to the $FT$. We now turn to solve Deutsch and Jozsa's problem. We will work with two registers, one will hold a number between $1$ to $N$ and therefore will consist of $n$ qubits, and the other register will consist of one qubit that will carry the value of the function.
{~}
\frame{\begin{minipage} [70mm]{100mm} ~\\\raggedright~\\ \raggedright{\bf $~~~~~$ Deutsch and Jozsa's Algorithm}\\~\\
\raggedright$~~~~~~|0^n\ra\otimes |1\ra$\\~\\ \raggedright$~~~~~$ Apply Fourier transform on first register.\\ \raggedright$~~~~~$ Apply $H$ on last qubit\\ \center{ $\Downarrow$}\\~\\ \raggedright
\(~~~~~\frac{1}{\sqrt{N}}\sum_{i=1}^{N} |i>\otimes(\frac{1}{\sqrt{2}}
|0\ra - \frac{1}{\sqrt{2}}|1\ra)\) \\~\\\raggedright
$~~~~~$ Call oracle, \(|i\ra|j\ra\longmapsto |i\ra|j \oplus f(i)\ra\).\\
\center{ $\Downarrow$}\\~\\
\raggedright\(~~~\frac{1}{\sqrt{N}}\sum_{i=1}^{N} (-1)^{f(i)}|i>\otimes (\frac{1}{\sqrt{2}}
|0\ra - \frac{1}{\sqrt{2}}|1\ra)\)\\~\\\raggedright $~~~~~$ Apply reversed Fourier transform on first register\\ \center{$\Downarrow$}\\~\\\raggedright
\(~~~~~|\psi\ra\otimes
(\frac{1}{\sqrt{2}} |0\ra - \frac{1}{\sqrt{2}}|1\ra)\)\\~\\
$~~~~~$ Measure first register\\ \center{$\Downarrow$}\\~\\ \raggedright $~~~~~$ If outcome equals $0^n$, output ``constant''\\ \raggedright $~~~~~$ Else, output ``balanced'' ~\\~ \end{minipage}}
$~$
To see why this algorithm indeed works, let us denote by $|\psi_c\ra$
the vector $|\psi\ra$ in the case ``constant'',
and $|\psi_b\ra$
the vector $|\psi\ra$ in the case ``balanced''. Note that if $f(i)$ is constant, the second Fourier
transform merely undoes the first Fourier transform, so $|\psi_c\ra=|0^n\ra$. On the other hand, if $f(i)$ is balanced,
the vector \[\frac{1}{\sqrt{N}}\sum_{i=1}^{N} (-1)^{f(i)}|i>\] is orthogonal to
\[\frac{1}{\sqrt{N}}\sum_{i=1}^{N}|i>.\] Since unitary operations preserve angles between vectors,
$|\psi_b\ra$ is orthogonal to $|\psi_c\ra$. Hence the probability to measure $0^n$ in the ``balanced'' case is zero. Hence, the algorithm gives the correct answer with probability $1$. This algorithm shows the advantage of exact quantum complexity over exact classical complexity. However, when the restriction to exact solution is released, this advantage is gone. A classical probabilistic machine can solve the problem
using a constant number of queries - though not by one query! (This was shown in the overview).
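The following short simulation sketch (my own, not part of the original exposition) runs the algorithm on three qubits. The oracle query is simulated by applying the phases $(-1)^{f(i)}$ directly to the first register, which is equivalent to the query $|i\ra|j\ra\longmapsto |i\ra|j\oplus f(i)\ra$ acting on the second register prepared in $\frac{1}{\sqrt{2}}(|0\ra-|1\ra)$:
\begin{verbatim}
# Sketch: simulating Deutsch and Jozsa's algorithm on the first register only.
import numpy as np

def hadamard_n(n):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)      # Fourier transform over Z_2^n is H x ... x H
    return out

def deutsch_jozsa(f, n):
    N = 2 ** n
    state = np.zeros(N)
    state[0] = 1.0                                    # |0^n>
    state = hadamard_n(n) @ state                     # uniform superposition
    state = np.array([(-1) ** f(i) for i in range(N)]) * state   # oracle call
    state = hadamard_n(n) @ state                     # reversed Fourier transform
    return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

n = 3
constant = lambda i: 0
balanced = lambda i: i & 1                            # half zeros, half ones
print(deutsch_jozsa(constant, n), deutsch_jozsa(balanced, n))
\end{verbatim}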
Let me remark that discussing exact solutions is problematic
in the context of quantum algorithms,
because of the continuous characteristics of quantum operators. Almost all quantum computations cannot be achieved exactly, when using a finite universal set of gates; the set of unitary operations is continuous, while the set of achievable operations using a finite universal set of gates is countable. Moreover, the notion of exact quantum algorithms is not robust, because the set of
problems that have an exact solution depends very strongly on the universal set of gates. The function AND, for example, cannot be computed exactly by Deutsch's universal machine!
In the next algorithm, due to Simon, the exponential advantage is achieved even without requiring exact solutions. The problem can be specified as follows:
\begin{quote} {\bf Simon's Problem:}
$f$ is a function from $\{1,N\}$ to $\{1,N\}$, where $N=2^n$.
We are promised that one of two cases occurs:
Either all $f(i)$ are different, i.e. $f$ is ``one to one'',
or
$f$ satisfies that $\exists s, f(i)=f(j)$ if and only if $i=j$ or $i=j\oplus s$, i.e $f$ is ``two to one''.
We are asked to distinguish between the two cases. \end{quote}
Here a classical computer will need an exponential number of queries, of order $\sqrt{N}$, even when an error is allowed. Simon's quantum algorithm can solve this question with the expected number of queries being $O(\rm{log}(N))$. (In fact, Brassard {\it et.al.} improved this result from expected $O(\rm{log}(N))$ queries to worst case $O(\rm{log}(N))$ queries\cite{brassard4}.)
We will work with two registers of $n$ qubits; both will hold an integer between $1$ and $N$. The first register will carry numbers in the domain of the function. The second register will carry the value of the function.
\frame{\begin{minipage} [70mm]{160mm} ~\\\raggedright~\\\raggedright $~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {\bf Simon's Algorithm}\\
~\\\raggedright$~~~~~~|0^n\ra\otimes |0^n\ra$\\~\\ \raggedright$~~~~~$ Apply Fourier transform on first register.\\ \center{ $\Downarrow$}\\~\\ \raggedright
\(~~~~~\frac{1}{\sqrt{N}}\sum_{i=1}^{N} |i>\otimes |0^n\ra\) \\~\\\raggedright $~~~~~$ Call oracle\\
\center{ $\Downarrow$}\\~\\
\raggedright\(~~~~\frac{1}{\sqrt{N}}\sum_{i=1}^{N} |i>\otimes|f(i)\ra\) \\~\\\raggedright $~~~~~$ Apply Fourier transform on first register.\\ \center{$\Downarrow$}\\~\\
\raggedright\(~~~~\frac{1}{N}\sum_{k=1}^{N} |k>\otimes \sum_{i=1}^{N}(-1)^{i\cdot k}|f(i)\ra\)\\~\\ \raggedright
$~~~~~$ Measure first register. Let $k_1$ be the outcome.\\ $~~~~~$ Repeat the previous steps $cn$ times to get $k_1$, $k_2$,..., $k_{cn}$.\\
\center{ $\Downarrow$}\\~\\ \raggedright $~~~~~$Apply Gauss elimination to find a non-trivial solution for $s$ in the set of equations: \begin{eqnarray} k_1\cdot s=0~ mod~ 2\nonumber\\ k_2\cdot s=0~mod ~2\nonumber\\ \vdots \nonumber\\ k_{cn}\cdot s=0~mod~2\nonumber \end{eqnarray} \\ \center{ $\Downarrow$}\\~\\ \raggedright $~~~~~$ If found, output ``two to one''. If not, declare ``one to one''. ~\\~ \end{minipage}}
{\bf Proof of correctness:} To see why this algorithm works, let us analyze the probability to measure $k_1=k$, in the two cases. In the case of ``one to one'', the probability to measure $k_1=k$ is independent of $k$:
\begin{equation}
\rm{Prob}(k_1=k)=\sum_i \left| \frac{(-1)^{i\cdot k}}{N}\right|^2 =\frac{1}{N}.
\end{equation}
The above formula is derived by computing the squared norm of the projection
of the measured vector
on $|k\rangle\otimes |f(i)\rangle$ and summing over all possible $f(i)$.
If we do the same thing in the ``two to one'' case, the projection
on $|k\rangle\otimes |f(i)\rangle$ will consist of two terms:
one comes from $i$ and the other from $i\oplus s$, since $f(i)=f(i\oplus s)$.
Hence, in the following sum we divide by $2$ to correct for
the fact that every term is counted twice. In the case ``two to one'', we derive:
\begin{equation}
\rm{Prob}(k_1=k)=\frac{1}{2}\sum_i
\frac{1}{N^2}|(-1)^{i\cdot k}+(-1)^{(i\oplus s) \cdot k}|^2
=\left\{\begin{array}{ll}\frac{2}{N} & \mbox{if $k\cdot s=0~ mod~ 2$}\\
0 & \mbox{otherwise}\end{array}
\right.
\end{equation}
So we will only measure $k$ which is orthogonal to $s$. In order to distinguish between the cases, we repeat the experiment many times, and observe whether the space spanned by the random vectors is the whole space or a subspace. If we perform a large enough number of trials, we can be almost sure that in the ``one to one'' case, the vectors will span the whole space. Hence finding a non trivial solution will mean that we are in the ``two to one'' case. A more precise argument follows. Let $V$ be a vector space of dimension $n$ over $Z_2$.
Let $S\subset V$ be the subspace spanned by the vectors, $k_1,....k_t$, which were measured at the first $t$ trials. If $S$ is not equal to $V$, a random vector $k_{t+1}$ from $V$ will be in $S$ with probability at most $\frac{1}{2}$. Hence, with probability greater than half, the dimension of span$\{S,k_{t+1}\}$ is larger than that of $S$. By Chernoff's law\cite{chernoff},
the probability the vectors will not span the whole space after $cn$ trials is exponentially small in $n$. $\Box$
{~}
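The classical post-processing step, Gauss elimination over $Z_2$, is simple enough to spell out. Here is a sketch (my own, with a small hand-picked example in which $s=101$) that recovers a candidate $s$ from the measured vectors $k_1,\ldots,k_t$, or reports that only the trivial solution exists:
\begin{verbatim}
# Sketch: Gauss elimination over GF(2) for the system k_i . s = 0 (mod 2).
import numpy as np

def solve_gf2(ks, n):
    rows = [[(k >> (n - 1 - j)) & 1 for j in range(n)] for k in ks]
    A = np.array(rows) % 2
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(A)) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]              # bring the pivot row up
        for i in range(len(A)):
            if i != r and A[i, c]:
                A[i] = (A[i] + A[r]) % 2       # eliminate column c elsewhere
        pivots.append(c)
        r += 1
    free = [c for c in range(n) if c not in pivots]
    if not free:
        return None                            # only s = 0: "one to one"
    s = np.zeros(n, dtype=int)
    s[free[0]] = 1                             # set one free variable to 1
    for row, c in zip(A, pivots):
        s[c] = row[free[0]]                    # back-substitute the pivots
    return s                                   # candidate s: "two to one"

print(solve_gf2([0b010, 0b111], n=3))          # -> [1 0 1]
\end{verbatim}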
This algorithm is exponentially more efficient than any randomized classical algorithm! This seems like an extremely strong result, but it is very important to notice here that the problem is stated in the oracle setting and that the algorithm does not apply to
any oracle, but only to oracles from a restricted set: either ``one to one'' or ``two to one'' functions. This restriction is called in complexity theory a ``promise'' to the algorithm: the algorithm is ``promised'' that the oracle is from some restricted subset.
We will see later, in section $10$, that without such a "promise", quantum computation
and classical computation are polynomially equivalent in terms of number of queries
to the oracle. This shows that in the absence of a promise, i.e. full range input,
the quantum advantage is exhibited not in the number of accesses to the input,
but in the way the information is processed. We will see an example for this in the next section,
in Shor's factorization algorithm.
\section{Shor's Algorithm for Factoring Integers}
Shor's algorithm is the most important algorithmic result
in quantum computation. The algorithm
builds on ideas that already appear in Deutsch and Jozsa's algorithm and in Simon's algorithm, and like these algorithms, the basic ingredient of the algorithm is the Fourier transform. The problem can be stated as follows: \begin{quote}{\bf Input:} An integer N \\ {\bf Output:} A non-trivial factor of N, if exists.\nonumber \end{quote}
There is no proof that there is no polynomial classical factorization algorithm. The problem is not even known to be $NP$-complete. However, factorization is regarded as hard, because many people have tried to solve it efficiently and failed. In $1994$, Shor published a polynomial (in $\rm{log}(N)$)
quantum algorithm for solving this problem \cite{shor1}. This result is regarded as extremely important both theoretically and practically, although there is no proof that a classical algorithm does not exist. The reason for the importance of this algorithm is mainly the fact that the security of the RSA cryptosystem, which is so widely used, is based on the assumed hardness of factoring integers. Before explaining the algorithm, I would like to explain here in short how this cryptosystem works.
A cryptosystem is a secure way to transform information such that an eavesdropper will not have any information about the message sent. In the RSA method,
the receiver, Bob, who will get the message, sends first a public key to Alice. Alice uses this key to encode her message, and sends it to Bob. Bob is the only one who can decode the message, assuming factoring is hard.
{~}
\frame{\begin{minipage} [70mm]{160mm} \raggedright~\\\raggedright $~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {\bf The RSA cryptosystem}\\\raggedright~\\\raggedright $~~~~~~$~~~~~~~~~~ {\bf Alice}$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ ~~~~~ {\bf Bob} \\\raggedright \begin{eqnarray} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ &~~~~~~~~~~~~N,E~~~~~~~~~~ &P,Q~ large~ primes.~
Set ~ N=PQ.~~~~~~~~~~~~~~~~~\nonumber \\
&\longleftarrow& E~ coprime~ with~ P-1, Q-1 \nonumber\\ ~\nonumber\\ Message~ M~~~~~~~&M^E mod~N&\nonumber\\ &\longrightarrow&\nonumber\\ ~\nonumber\\ &&Computes~ E^{-1} mod(P-1)(Q-1),\nonumber\\ &&Computes~ (M^{E})^{E^{-1}}modN~=~M\nonumber \end{eqnarray} \\~ \end{minipage}}
{~}
The key is chosen as follows: Bob chooses two large primes $P$ and $Q$. He then computes $N=PQ$, and also picks an integer $E$ co-prime to $(P-1)(Q-1)=\phi(N),$ the number of co-primes to $N$ smaller than $N$. Bob sends $E$ and $N$ to the sender, Alice, using a public domain (newspaper, phone...) The pair $(E,N)$ is called Bob's {\it public key.} Bob keeps secret $D=E^{-1}mod (P-1)(Q-1)$, which he can compute easily knowing $P$ and $Q$, using the extended Euclid's algorithm\cite{fft}. The pair $(N,D)$ is called Bob's {\it secret key.} Alice computes her message, $M$, to the power of $E$, modulo $N$, and sends this number in a public channel to Bob. Note that Alice's computation is easy: taking a number $Y$ to the power of $X$ modulo $N$ is done by writing $X$ in binary representation: $X=x_1...x_n$. Then one can square
$(Y^{x_i})$ $i$ times to get $(Y^{x_i})^{2^i}$,
multiply the results for all $i$, reducing modulo $N$ at each step. Bob decodes Alice's message using his secret key by computing $(M^{E})^{D} mod N$.
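As an aside, here is a small sketch of the repeated-squaring computation just described (my own illustration; Python's built-in three-argument pow performs the same task):
\begin{verbatim}
# Sketch: computing Y^X mod N by repeated squaring.
def power_mod(Y, X, N):
    result, base = 1, Y % N
    while X:
        if X & 1:                       # current bit of X is 1
            result = (result * base) % N
        base = (base * base) % N        # square, reducing mod N at each step
        X >>= 1
    return result

print(power_mod(7, 560, 561), pow(7, 560, 561))   # the two agree
\end{verbatim}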
Why does Bob get the correct message $M$? This follows from Fermat's little theorem and the Chinese remainder theorem which together imply\cite{fft} that for
any $M$, $M^{k\phi(N)+1}=M~ mod ~N$. The security of this cryptosystem rests on the difficulty of factoring large numbers. If the eavesdropper has a factorization algorithm, he knows
the factors $P,Q$, and he can simply play the role of Bob in the last step of the cryptographic protocol. The converse statement, which asserts that in order to crack RSA one must have a factoring algorithm, is not proven. However, all known methods to crack $RSA$ can be polynomially
converted to a factorization algorithm. Since factorization is assumed hard, classically, RSA is believed to be a secure cryptosystem to use.
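For concreteness, here is a toy numerical run of the protocol above (my own sketch; the primes are far too small to be secure, and the inverse computed by pow(E, -1, phi) requires Python 3.8 or later):
\begin{verbatim}
# Sketch: the RSA steps with tiny numbers, for illustration only.
P, Q = 61, 53
N = P * Q                      # 3233
phi = (P - 1) * (Q - 1)        # 3120
E = 17                         # coprime to phi
D = pow(E, -1, phi)            # Bob's secret exponent, E^{-1} mod phi

M = 1234                       # Alice's message
cipher = pow(M, E, N)          # Alice sends M^E mod N over the public channel
print(pow(cipher, D, N))       # Bob recovers 1234
\end{verbatim}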
In order to use RSA securely, one should work with integers that are a few hundred digits in length,
since factoring smaller integers is still practical.
Integers of up to $130$ digits have been factorized by classical computers in no longer
than a few weeks. Due to the fact that the best known classical factorization algorithms are (sub)exponential, factorizing a number with twice the number of digits will take an eavesdropper not twice the time, but on the order of a million years. If Alice and Bob work with numbers of the order of hundreds of digits, they are presumably secure against classical eavesdroppers.
Shor's algorithm provides
a quantum efficient way to break the RSA cryptosystem. In fact, Shor presented a quantum algorithm not for factoring, but for a different problem: \begin{quote}{\bf Order modulo N:}\\ {\bf Input:} An integer $N$, and $Y$ coprime to $N$ \\ {\bf Output:} The order of $Y$, i.e. the minimal positive integer $r$ such that $Y^r=1~ mod~ N$.\nonumber \end{quote}
The problem of factorization can be polynomially reduced to the problem of finding the order
modulo $N$, using results from number theory.
I will not describe the reduction here; an explanation can be found in an excellent review on Shor's algorithm\cite{ekert2}. Instead, I will show a way\cite{cleve4} to crack RSA given an efficient algorithm to find the order modulo $N$: Suppose the message sent is $M^E$. Find the order $r$ of $M^E$ modulo $N$; $r$ is also the order of $M$, since $E$ is coprime to $(P-1)(Q-1)=\phi(N)$. It is easy to find efficiently
the inverse of $E$, $D'=E^{-1}$ modulo $r$, using Euclid's algorithm. Then simply, $(M^{E})^{D'}\equiv M~ mod ~N$, since $M^r\equiv~ 1~ mod ~N$.
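The following sketch (my own) spells out this attack on the toy numbers used earlier, with the order found by brute force; it is exactly this order-finding step that Shor's algorithm replaces by an efficient quantum procedure:
\begin{verbatim}
# Sketch: breaking the toy RSA example given only (N, E) and M^E mod N,
# assuming we can find orders modulo N (here by brute force).
def order(Y, N):
    r, x = 1, Y % N
    while x != 1:
        x = (x * Y) % N
        r += 1
    return r

N, E = 3233, 17
cipher = pow(1234, E, N)       # the intercepted message M^E mod N
r = order(cipher, N)           # order of M^E, which equals the order of M
D_prime = pow(E, -1, r)        # E^{-1} mod r
print(pow(cipher, D_prime, N)) # -> 1234, without ever factoring N
\end{verbatim}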
Let me now present Shor's beautiful algorithm for finding the order of $Y$, for any given $Y$, modulo $N$. The description follows\cite{ekert2}. In short, the idea of the algorithm is to create a state with periodicity $r$, and then apply Fourier transform over $Z_Q$, (the additive group of integers modulo $Q$), to reveal this periodicity. The Fourier transform over the group $Z_Q$ is defined as follows: \begin{equation}
|a\ra \longmapsto \frac{1}{\sqrt{Q}}\sum_{b=0}^{Q-1} e^{2\pi iab/Q} |b\ra=|\Psi_{Q,a}\ra \end{equation} The algorithm to compute this Fourier transform will be given in the next section, which is devoted entirely to Fourier transforms. Again we will work with two registers. The first will hold a number between $0$ and $Q-1$. ($Q$ will be fixed later: it is much larger than $N$, but still polynomial in $N$.) The second register will carry numbers between $1$ and $N$. Hence the two registers will consist of $O(\rm{log}(N))$ qubits.
{~}
\frame{\begin{minipage} [70mm]{165mm} ~\\\raggedright~\\\raggedright $~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {\bf Shor's Algorithm}\\\raggedright~\\\raggedright
$~~~~~~|\stackrel{\rightarrow}{0}\ra\otimes |\stackrel{\rightarrow}{0}\ra$\\\raggedright~\\\raggedright $~~~~~$ Apply Fourier Transform over $Z_Q$
on the first register\\
\center{ $\Downarrow$}\\~\\ \raggedright
\(~~~~~\frac{1}{\sqrt{Q}}\sum_{l=0}^{Q-1} |l\ra\otimes |\stackrel{\rightarrow}{0}\ra\) \\~\\ \raggedright $~~~~~$ Call subroutine
which computes $|l\ra|d\ra \longmapsto|l\ra |d\oplus Y^l~ mod ~N\ra$ \\
\center{ $\Downarrow$}\\~\\
\raggedright\(~~~~\frac{1}{\sqrt{Q}}\sum_{l=0}^{Q-1} |l\ra\otimes|Y^l mod N\ra\) \\~\\\raggedright $~~~~~$ Measure second register.\\
\center{ $\Downarrow$}\\~\\
\raggedright\(~~~~
\frac{1}{\sqrt{A}}\sum_{l=0|Y^l=Y^{l_0}}^{Q-1} |l\ra\otimes|Y^{l_0}\ra=
\frac{1}{\sqrt{A}}\sum_{j=0}^{A-1} |jr+l_0\ra\otimes|Y^{l_0}\ra \) \\~\\ \raggedright$~~~~~$ Apply Fourier Transform over $Z_Q$ on the first register\\ \center{$\Downarrow$}\\~\\ \raggedright\(~~~~ \frac{1}{\sqrt{Q}}\sum_{k=0}^{Q-1}\left(\frac{1}{\sqrt{A}} \sum_{j=0}^{A-1} e^{2\pi i (jr+l_0)k/Q}\right)
|k\ra\otimes|Y^{l_0}\ra\)\\ \center{ $\Downarrow$}\\~\\\raggedright
$~~~~~$ Measure first register. Let $k_1$ be the outcome.\\ \raggedright
$~~~~~$ Approximate the fraction $\frac{k_1}{Q}$ by a fraction with
denominator smaller than $N$,
\\ \raggedright $~~~~~$ using the (classical) method of continued fractions.\\ \raggedright $~~~~~$ If the denominator $d$ doesn't satisfy $Y^d=1 mod ~N$, throw it away.\\ \raggedright$~~~~~$ Else call the denominator $r_1$.
\\ \center{ $\Downarrow$}\\~\\ \raggedright $~~~~~$ Repeat all previous steps $\rm{poly}(\rm{log}(N))$ times to get $r_1$, $r_2$,..\\ \raggedright $~~~~~$ Output the minimal $r$. ~\\~ \end{minipage}}
{~}
Let us now understand how this algorithm works. In the second step of the algorithm, all numbers between $0$ and $Q-1$ are present in the superposition, with equal weights. In the third step of the algorithm, they are separated to sets, each has periodicity $r$. This is done as follows: there are $r$ possible values written on the second register: $a\in\{Y^0,Y^1,....Y^{r-1}\}$. The third state can thus be written as:
\[\frac{1}{\sqrt{Q}}\left( (\sum_{l=0|Y^l=Y}^{Q-1}
|l\ra\otimes|Y\ra)+
(\sum_{l=0|Y^l=Y^2}^{Q-1} |l\ra\otimes|Y^2\ra)+....+
(\sum_{l=0|Y^l=Y^{r}}^{Q-1} |l\ra\otimes|Y^r=1\ra)\right)\] Note that the values $l$ that give $Y^l=a$ have periodicity $r$: If the smallest such $l$ is $l_0$, then $l=l_0+r,l_0+2r,..$ will also give $Y^l=a$. Hence each term in the brackets
has periodicity $r$. Each set of $l'$s, with periodicity $r$, is attached to a different state of the second register. Before the computation of $Y^l$, all $l$'s appeared equally in the superposition.
Writing down the $Y^l$
on the second register can be thought of as giving a different ``color'' to each periodic set in $[0,Q-1]$. Visually, this can be viewed as follows:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,30)(-10,0)
\put(0,0){\vector(1,0){130}}
\put(2,-5){\makebox(0,0){$0$}}\put(2,0){\line(0,1){20}} \put(10,-5){\makebox(0,0){$1$}}\qbezier[10](10,0)(10,10)(10,20) \put(18,-5){\makebox(0,0){$2$}} \put(30,-5){\makebox(0,0){$...$}} \put(38,-5){\makebox(0,0){$r$}}\put(38,0){\line(0,1){20}} \put(48,-5){\makebox(0,0){$r+1$}}\qbezier[10](48,0)(48,10)(48,20) \put(60,-5){\makebox(0,0){$...$}} \put(72,-5){\makebox(0,0){$2r$}}\put(72,0){\line(0,1){20}} \put(84,-5){\makebox(0,0){$2r+1$}}\qbezier[10](82,0)(82,10)(82,20) \put(97,-5){\makebox(0,0){$...$}} \put(120,-5){\makebox(0,0){$Q-1$}} \put(135,0){\makebox(0,0){$l$}} \end{picture}
{~}
{~}
The measurement of the second register picks randomly one of these sets, and the state collapses to a superposition of $l'$s with periodicity $r$, with an arbitrary shift $l_0$. Now, how to obtain the periodicity? The first idea that comes to mind is to measure the first register twice, in order to get two samples from the same periodic set, and somehow deduce $r$ from these samples. However, the probability that the measurement of the second register yields the same shift in two runs of the algorithm, i.e. that the same periodic set is chosen twice,
is exponentially small. How to gain information about the periodicity in the state without simply sampling it? This is done by the Fourier transform. To understand the operation of the Fourier transform, we use a diagram again:
{~}
\setlength{\unitlength}{0.030in}
\begin{picture}(40,50)(-10,0)
\put(0,0){\vector(1,0){130}} \put(0,40){\vector(1,0){130}} \put(135,40){\makebox(0,0){$k$}} \put(2,44){\makebox(0,0){$0$}} \put(15,44){\makebox(0,0){$1$}} \put(28,44){\makebox(0,0){$2$}} \put(41,44){\makebox(0,0){$3$}}
\put(2,0){\vector(2,3){27}} \put(2,0){\vector(0,1){40}} \put(2,0){\vector(1,3){13}} \put(2,0){\vector(1,1){40}} \put(60,44){\makebox(0,0){$...$}} \put(122,44){\makebox(0,0){$Q-1$}} \put(2,-5){\makebox(0,0){$0$}} \put(10,-5){\makebox(0,0){$1$}} \put(18,-5){\makebox(0,0){$2$}} \put(30,-5){\makebox(0,0){$...$}} \put(38,-5){\makebox(0,0){$r$}} \qbezier(38,0)(38,0)(2,40) \qbezier(38,0)(38,0)(15,40) \qbezier(38,0)(38,0)(28,40) \qbezier(38,0)(38,0)(41,40)
\put(38,0){\vector(-1,4){10}} \qbezier(72,0)(72,0)(2,40) \qbezier(72,0)(72,0)(15,40) \qbezier(72,0)(72,0)(28,40) \qbezier(72,0)(72,0)(41,40)
\put(48,-5){\makebox(0,0){$r+1$}} \put(60,-5){\makebox(0,0){$...$}} \put(72,-5){\makebox(0,0){$2r$}} \put(83,-5){\makebox(0,0){$2r+1$}} \put(98,-5){\makebox(0,0){$...$}} \put(120,-5){\makebox(0,0){$Q-1$}} \put(135,0){\makebox(0,0){$l$}} \end{picture}
{~}
{~}
Each edge in the diagram indicates that there is some probability amplitude to transform from the bottom basis state to the upper one. We now measure the first register, to obtain $k$. To find the probability to measure each $k$, we need to sum up the weights coming from all the $j'$s in the periodic set.
\begin{equation}
\rm{Prob}(k)=|\frac{1}{\sqrt{QA}} \sum_{j=0}^{A-1} e^{2\pi i k(jr+l_0)/Q}|^2=
|\frac{1}{\sqrt{QA}} \sum_{j=0}^{A-1} (e^{2\pi i kr/Q})^j|^2
\end{equation}
\noindent Hence, in order to compute the probability to measure each $k$, we need to evaluate a geometrical series. Equivalently, the geometrical series can be viewed as a sum of unit vectors in the complex plane.
{~}
\noindent {\bf Exact periodicity:} Let us assume for a second {\it exact periodicity}, i.e. that $r$ divides $Q$ exactly. Then $A=Q/r$. In this case, the above geometrical series is equal to zero, unless $e^{2\pi i kr/Q}=1$. Thus we measure with probability $1$ only $k's$ such that
$kr=0 ~mod~Q$. This is where destructive interference comes into play: only ``good'' $k'$s, which satisfy $kr=0 ~mod~Q$, remain, and all the others cancel out. Why are such $k'$s ``good''? We can write $kr=mQ$, for some integer $m$, or $ k/Q=m/r$. We know $Q$, and we know $k$ since we have measured it. Therefore we can reduce the fraction $k/Q$. If $m$ and $r$ are coprime,
the denominator will be exactly the $r$ which we are looking for! There are at least of the order of $n/log(n)$ numbers smaller than $n$ which are coprime to $n$ (for example, by the prime number theorem, all the primes smaller than $n$ which do not divide $n$), so since
$m$ is chosen randomly, repeating the experiment a large enough number of times we will with very high probability eventually get $m$ coprime to $r$.
{~}
\noindent {\bf Imperfect periodicity:} In the general case, $r$ does not divide $Q$, and this means that the picture is less clear. ``Bad'' $k'$s do not completely cancel out. We distinguish between two types of $k'$s, for which the geometrical series of vectors in the complex plane looks as follows:
{~}
\setlength{\unitlength}{0.00083300in}
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}} \expandafter\x\fmtname xxxxxx\relax \def\y{splain} \ifx\x\y \gdef\SetFigFont#1#2#3{
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname} \else \gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname} \fi \fi\endgroup \begin{picture}(5870,2718)(1866,-5095) \thicklines \put(3226,-3736){\oval(2704,2704)} \put(6376,-3736){\oval(2704,2704)} \put(3226,-3736){\line( 0, 1){1200}} \put(3226,-3736){\line( 6, 5){900}} \put(4126,-2986){\line(-1, 0){ 75}} \put(3226,-3736){\line( 5,-1){1052.885}} \put(3226,-3736){\line( 1,-3){352.500}} \put(3226,-3736){\line(-3,-5){588.971}} \put(3226,-3736){\line(-1, 0){1125}} \put(2551,-2761){\line( 2,-3){692.308}} \multiput(3301,-2536)(-6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(3301,-2536)(6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(3901,-2986){\line( 1, 0){225}} \put(4126,-2986){\line(-1,-3){ 75}} \multiput(4201,-3811)(3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(4276,-3961)(-7.50000,-3.75000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(3676,-4636)(-3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(3601,-4786)(-7.50000,3.75000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(2776,-4711){\line(-1, 0){150}} \put(2626,-4711){\line( 0, 1){150}} \multiput(2176,-3511)(-3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(2101,-3661)(3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(2551,-2911){\line( 0, 1){150}} \put(2551,-2761){\line( 1, 0){150}} \put(6301,-3736){\line( 1, 0){1275}} \put(6301,-3736){\line( 4, 1){1200}} \put(6301,-3736){\line( 4,-1){1270.588}} \put(6301,-3736){\line( 5, 3){1036.765}} \put(6301,-3736){\line( 5,-3){1036.765}} \multiput(7201,-2986)(6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(7351,-3136){\line( 0,-1){150}} \multiput(7426,-3286)(3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(7501,-3436){\line( 0,-1){150}} \multiput(7501,-3586)(3.75000,-7.50000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(7501,-3811)(6.25000,6.25000){13}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(7501,-3886){\line( 1,-3){ 75}} \multiput(7576,-4111)(-7.50000,-3.75000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(7351,-4186){\line( 0,-1){150}} \multiput(7351,-4336)(-7.50000,-3.75000){21}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \end{picture}
{~}
In the left case, all vectors point in different directions, and they tend to cancel each other. This will cause destructive interference, so the amplitude of such $k'$s will be small. In the right case, all vectors point in almost the same direction. In this case there will be constructive interference of all the vectors. This happens when $e^{2\pi i kr/Q}$ is close to one, or when $kr ~mod ~Q$ is close to zero. This means that with high probability, we will measure only $k's$ which satisfy an {\it approximate} criterion $kr\approx 0 ~mod ~Q$. In particular, consider $k$'s which satisfy:
\begin{equation}\label{criterion}
-r/2 \le kr ~{\rm mod} ~Q \le r/2
\end{equation}
There are exactly $r$ values of $k$ satisfying this requirement, because $k$ runs from $0$ to $Q-1$, therefore $kr$ runs from $0$ to $(Q-1)r$,
and this set of integers contains exactly $r$ multiples of $Q$. Note that for such $k'$s all the
complex vectors lie in the upper half of the complex plane, so they are
constructively interfering. Now the probability to measure such a $k$ is bounded from below by choosing the largest exponent possible: \begin{eqnarray*}
\rm{Prob}(k)=
|\frac{1}{\sqrt{QA}} \sum_{j=0}^{A-1} (e^{2\pi i kr/Q})^j|^2\ge
|\frac{1}{\sqrt{QA}} \sum_{j=0}^{A-1} (e^{i \pi r/Q})^j|^2 \\
=\frac{1}{QA}|\frac{1-e^{\pi i rA/Q}}{1-e^{i\pi r/Q}}|^2
= \frac{1}{QA}|\frac{sin(\frac{\pi r A}{2 Q})}{sin(\frac{\pi r}{2 Q})}|^2
\approx \frac{4}{\pi^2 r}
\end{eqnarray*}
where the approximation is due to the fact that $Q$ is chosen to be much larger than $N> r$; therefore the sine in the numerator is close to $1$,
with a negligible correction of the order of $r/Q$.
In the denominator we use the approximation
$\rm{sin}(x)\approx x$ for small $x$, and the correction is again
of the order of $r/Q$. The total probability to measure some $k$
which satisfies \ref{criterion} is therefore approximately $4/\pi^2$,
since there are $r$ such $k'$s.
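To make this estimate concrete, the distribution $\rm{Prob}(k)$ can be computed numerically directly from the amplitude formula. The following Python sketch does this for an arbitrarily chosen small example ($N=21$, $Y=2$, so $r=6$, with $Q=512\ge N^2$ and an arbitrary shift $l_0=3$; these numbers are purely illustrative and are not part of the algorithm):
\begin{verbatim}
import numpy as np

# Illustrative values only: N = 21, Y = 2 (order r = 6), Q = 512 >= N^2.
N, Y, Q = 21, 2, 512
r = next(d for d in range(1, N) if pow(Y, d, N) == 1)      # r = 6
l0 = 3                                  # shift picked by the measurement
A = len(range(l0, Q, r))                # number of l's with l = l0 (mod r)

prob = np.zeros(Q)
for k in range(Q):
    amp = sum(np.exp(2j*np.pi*k*(j*r + l0)/Q) for j in range(A))
    prob[k] = abs(amp)**2 / (Q*A)

good = [k for k in range(Q) if min(k*r % Q, Q - k*r % Q) <= r/2]
print(len(good), sum(prob[k] for k in good))   # r values, total prob > 4/pi^2
\end{verbatim}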
Why are such $k'$s ``good''? Given an integer $k$ which satisfies the criterion
\ref{criterion}, we can find $r$ with reasonably high probability. Note that for ``good'' $k$'s, there exists an integer $m$ such that:
\[ |\frac{k}{Q}-\frac{m}{r}|\le \frac{1}{2Q}.\]
Remember that $Q$ is chosen to be much larger than $N$, say $Q\ge N^2$. This means that $\frac{k}{Q}$, a fraction with denominator $\ge N^2$, can be approximated by $\frac{m}{r}$, a fraction with denominator smaller than $N$, to within $\frac{1}{N^2}$.
There is only one fraction with denominator smaller than $N$ that approximates a fraction with such a large denominator so well (two distinct fractions with denominators smaller than $N$ differ by more than $\frac{1}{N^2}$). Given $k/Q$, the approximating fraction, $\frac{m}{r}$,
can be found efficiently, using the method of continued fractions: \[a=a_0+\frac{1}{a_1+\frac{1}{a_2+...}},\] where $a_i$ are all integers. Finding this fraction, its denominator will be $r$! Well, not precisely. Again, it might be the case that $m$ and $r$ are not coprime, and the number we find will be the denominator of the reduced fraction of $\frac{m}{r}$. In this case the number $d$ that we find will fail the test $Y^d=1 ~mod ~N$ which is included in Shor's algorithm, and it will be thrown away. Fortunately, the probability for $m$ to be coprime to $r$ is large enough: it is greater than $1/log(r)$.
We repeat the experiment until this happens.
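For concreteness, here is a minimal Python sketch of this classical post-processing step, reusing the illustrative numbers $N=21$, $Y=2$, $Q=512$ from before; the measured value $k=85$ is a hypothetical outcome lying close to $Q/r$. The standard library routine \texttt{Fraction.limit\_denominator} performs exactly the continued-fraction approximation described above.
\begin{verbatim}
from fractions import Fraction

N, Y, Q = 21, 2, 512
k = 85                        # hypothetical measurement outcome, close to Q/6
d = Fraction(k, Q).limit_denominator(N - 1).denominator
if pow(Y, d, N) == 1:
    print("order found:", d)                  # prints: order found: 6
else:
    print("bad luck (m not coprime to r); repeat the experiment")
\end{verbatim}
A hypothetical outcome near $2Q/r$, say $k=171$, would instead give denominator $3$, fail the test, and be thrown away, exactly as described above.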
This concludes Shor's algorithm. In the next chapter we will see an alternative algorithm by Kitaev for finding the order modulo $N$.
\section{Fourier Transforms} The ability to efficiently apply Fourier transforms over groups with exponentially many elements
is unique to the quantum world. In fact, Fourier transforms are the {\it only} known tool in quantum computation which gives exponential advantage. For this reason it is worthwhile to devote a whole chapter for Fourier transforms. The Fourier transform is defined as follows. Denote the additive group of integers modulo $Q$ by $Z_Q$.
Let $f$ be a function from the group $Z_Q$ to the complex numbers: \begin{equation} f:a\longmapsto f(a)\in C \end{equation} The Fourier transform of this function is
another function from $Z_Q$ to the complex numbers: \begin{equation}
\hat{f}:a \longmapsto \hat{f}(a)=\frac{1}{\sqrt{Q}}
\sum_{b\in Z_Q} e^{2\pi i ab/Q} f(b)\in C
\end{equation}
The straightforward way to compute the $Q$ Fourier coefficients of the function, $\hat{f}(a)$ $\forall a$,
will take $O(Q^2)$ time. When $Q$ is a power of $2$,
there is a way to speed up the trivial Fourier transform algorithm using recursion. This is called the fast Fourier transform, or in short $FFT$, and it makes it possible to compute the Fourier transform within $O(Q\rm{log}(Q))$ time steps\cite{fft}. When $Q$ is very large, this is still a very slow operation.
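As a reminder of how the classical recursion works, here is a minimal Python sketch of the radix-$2$ FFT (the $\frac{1}{\sqrt{Q}}$ normalization of the definition above is omitted):
\begin{verbatim}
import cmath

def fft(f):
    # F(a) = sum_b exp(2*pi*i*a*b/Q) f(b), for Q a power of two.
    Q = len(f)
    if Q == 1:
        return list(f)
    even, odd = fft(f[0::2]), fft(f[1::2])
    out = [0j] * Q
    for k in range(Q // 2):
        w = cmath.exp(2j * cmath.pi * k / Q)
        out[k] = even[k] + w * odd[k]
        out[k + Q // 2] = even[k] - w * odd[k]
    return out
\end{verbatim}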
In the quantum world, a function from the Abelian group $G=Z_Q$ to the complex numbers $f: a\longmapsto f(a)$ can be represented
by a superposition $|f\ra=\sum_{a=0}^{Q-1} f(a)|a\ra$ (perhaps normalized.) The Fourier transform of the function will be
$|\hat{f}\ra=\sum_{a=0}^{Q-1} \hat{f}(a)|a\ra$. Note that in the quantum setting, the function on $Q$ elements is represented compactly as a superposition on $log(Q)$ qubits. This compact representation allows in some cases to apply
the transformation $|f\ra \longmapsto |\hat{f}\ra$ very efficiently,
in time which is only polynomial in $log(Q)$. Of course, reading out all the Fourier coefficients would still take time which is exponential in $log(Q)$, simply because the number
of coefficients is exponential. However, the actual transformation from a superposition to its Fourier transform will be very fast.
In order to apply the Fourier transformation on general states,
it suffices to apply the following transformation on the basis states: \begin{equation}
|a\ra \longmapsto
|\Psi_{Q,a}\ra=\frac{1}{\sqrt{Q}}\sum_{b=0}^{Q-1}
e^{2\pi i ab/Q}
|b\ra. \end{equation}
We will first consider the special case of $Q=2^m$, which is simpler than the
general case, since classical techniques for fast Fourier transforms can be adapted\cite{shor1,cleve3,coppersmith, deutsch3,griffiths}. I will give here a nice description by Cleve {\it et. al.}\cite{cleve4}. Later I'll describe Kitaev's\cite{kitaev1} more general quantum Fourier transform, for any Abelian group, which implies a beautiful alternative factorization algorithm.
{~}
\noindent{\bf Quantum fast Fourier transform.} Let $Q=2^m$. An integer $a\in\{0,1,...,2^m-1\}$ is represented in binary representation by
$|a_1...a_m\ra$, so $a=a_{1}2^{m-1}+a_{2}2^{m-2}+....+a_{m-1} 2^1+a_m$. Interestingly, the Fourier state in this case is not entangled, and can be written as a tensor product:
\begin{equation}\label{ft}|\Psi_{Q,a}\ra=\frac{1}{\sqrt{Q}}\sum_{b=0}^{Q-1}e^{2\pi iab/Q}
|b\ra=\frac{1}{\sqrt{2^m}}(|0\ra+e^{2\pi i 0.a_m}|1\ra)
(|0\ra+e^{2\pi i 0.a_{m-1}a_m}|1\ra)...(|0\ra+e^{2\pi i 0.a_{1}...a_{m-1}a_m}
|1\ra)\end{equation} We can see this by computing the coefficient of $b$ in this formula. In fact, what matters is that the phases in the coefficient of $b$ from both sides of the equality are equal (modulo $1$).
To see this, observe that the phase of $|b\rangle$ in the left term is
$2^{-m}ab=2^{-m}\sum_{i,j=1}^{m} a_i 2^{m-i} b_j 2^{m-j}$, which can be seen to
be equal modulo $1$ to $0.a_m\cdot b_1 + 0.a_{m-1}a_m\cdot b_2 +... +0.a_1...a_{m-1}a_m\cdot b_m$ which is the phase of $|b\rangle$ in the right term.
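This identity is easy to check numerically. The following Python sketch builds both sides of equation \ref{ft} for a given $a$ and compares them; the convention that $a_1$ is the most significant bit, and that the first tensor factor corresponds to $b_1$, follows the text. The particular values of $m$ and $a$ at the end are arbitrary.
\begin{verbatim}
import numpy as np

def fourier_state(a, m):
    Q = 2**m
    return np.array([np.exp(2j*np.pi*a*b/Q) for b in range(Q)]) / np.sqrt(Q)

def product_state(a, m):
    bits = [(a >> (m - 1 - i)) & 1 for i in range(m)]    # a_1 ... a_m
    state = np.array([1.0 + 0j])
    for l in range(1, m + 1):
        # binary fraction 0.a_{m-l+1} ... a_m
        frac = sum(bits[m - l + j - 1] / 2**j for j in range(1, l + 1))
        state = np.kron(state, np.array([1, np.exp(2j*np.pi*frac)]) / np.sqrt(2))
    return state

m, a = 4, 11
print(np.allclose(fourier_state(a, m), product_state(a, m)))   # True
\end{verbatim}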
To apply the QFFT, we will need only two gates.
The first is the Hadamard gate on one qubit. The second gate is a gate on two qubits, which applies a conditioned phase shift on one qubit, given that the other qubit is in state $|1\ra$.
$R_k$ denotes
the phase shift on one qubit by $e^{2\pi i/2^k}$. \begin{equation}\label{fast} R_{k}=\left(\begin{array}{cc}
1& 0\\
0 & e^{2\pi i/2^k} \end{array}\right) ~~~, ~~~ H=\left(\begin{array}{ll} \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}} \end{array} \right) \end{equation} We will operate the following gate array:
{~}
\setlength{\unitlength}{0.030in}
\begin{picture}(40,60)(-10,0)
\put(-11,0){\makebox(0,0){$|a_m\ra$}} \put(0,0){\line(1,0){90}} \put(92,0){\makebox(0,0){$....$}} \put(95,0){\line(1,0){20}}\put(117,0){\circle{6}} \put(117,0){\makebox(0,0){$H$}} \put(121,0){\line(1,0){5}}
\put(155,0){\makebox(0,0){$|0\ra+\exp(2\pi i 0.a_m)|1\ra$}}
\put(-8,8){\makebox(0,0){$|a_{m-1}\ra$}} \put(0,8){\line(1,0){90}} \put(92,8){\makebox(0,0){$....$}} \put(95,8){\line(1,0){5}}\put(102,8){\circle{6}} \put(102,8){\makebox(0,0){$H$}} \put(106,8){\line(1,0){2}} \put(108,6){\framebox(6,6){$R_2$}} \put(111,0){\line(0,1){6}} \put(111,0){\circle*{2}} \put(114,8){\line(1,0){12}}
\put(160,8){\makebox(0,0){$|0\ra+\exp(2\pi i 0.a_{m-1}a_m)|1\ra$}}
\put(-11,35){\makebox(0,0){$\vdots$}}
\put(-11,40){\makebox(0,0){$|a_{2}\ra$}} \put(0,40){\line(1,0){52}}\put(56,40){\circle{6}} \put(56,40){\makebox(0,0){$H$}} \put(60,40){\line(1,0){2}} \put(62,40){\makebox(0,0){$...$}} \put(64,40){\line(1,0){2}} \put(66,38){\framebox(12,6){$R_{m-2}$}} \put(70,8){\line(0,1){30}} \put(70,8){\circle*{2}} \put(78,40){\line(1,0){2}} \put(80,38){\framebox(12,6){$R_{m-1}$}} \put(84,0){\line(0,1){38}} \put(84,0){\circle*{2}} \put(92,40){\line(1,0){2}} \put(97,40){\makebox(0,0){$....$}} \put(100,40){\line(1,0){26}}
\put(170,40){\makebox(0,0){$|0\ra+\exp(2\pi i 0.a_2a_3...a_{m-1}a_m)|1\ra$}}
\put(-11,48){\makebox(0,0){$|a_{1}\ra$}} \put(0,48){\line(1,0){5}}\put(7,49){\circle{6}} \put(7,49){\makebox(0,0){$H$}} \put(11,48){\line(1,0){2}} \put(13,46){\framebox(8,6){$R_{2}$}} \put(17,40){\line(0,1){6}} \put(17,40){\circle*{2}} \put(21,48){\line(1,0){2}} \put(26,48){\makebox(0,0){$....$}} \put(29,48){\line(1,0){2}} \put(31,46){\framebox(12,6){$R_{m-1}$}} \put(36,8){\line(0,1){38}} \put(36,8){\circle*{2}} \put(43,48){\line(1,0){2}} \put(45,46){\framebox(8,6){$R_{m}$}} \put(48,0){\line(0,1){46}} \put(48,0){\circle*{2}} \put(53,48){\line(1,0){73}}
\put(172,48){\makebox(0,0){$|0\ra+exp(2\pi i 0.a_1a_2a_3...a_{m-1}a_m)|1\ra$}}
\end{picture}
{~}
{~}
We claim that this gate array implements the FT, except that
the output is in reverse order of bits. To prove this, we show that each bit gains the phase it is supposed to gain, according to equation \ref{ft}. The first $H$ on the first bit $a_1$ produces the state on $m$ qubits:
\[(|0\ra +e^{2\pi i (0.a_1)}|1\ra)|a_2...a_m\ra\] and the next $R_2$ makes it
\[(|0\ra +e^{2\pi i (0.a_1a_2)}|1\ra)|a_2...a_m\ra,\] and so on until the first qubit is in the correct state (of the last bit in equation \ref{ft}):
\[(|0\ra +e^{2\pi i (0.a_1a_2...a_m)}|1\ra)|a_2...a_m\ra.\] In the same way the phases of the rest of the qubits are fixed,
one by one. We now simply reverse the order of the bits to obtain the correct FT.
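The claim can also be verified numerically. The following Python sketch builds the full unitary matrix of this gate array (Hadamard gates and controlled phase shifts, followed by the reversal of the bits) and compares it with the Fourier transform matrix; this is only a brute-force check on a few qubits, not an efficient implementation.
\begin{verbatim}
import numpy as np

def bit(x, q, m):                 # value of qubit q (q = 1 is most significant)
    return (x >> (m - q)) & 1

def hadamard_on(q, m):
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    return np.kron(np.kron(np.eye(2**(q - 1)), H), np.eye(2**(m - q)))

def controlled_R(c, t, k, m):     # phase exp(2 pi i / 2^k) if qubits c and t are both 1
    phase = np.exp(2j*np.pi / 2**k)
    return np.diag([phase if bit(x, c, m) and bit(x, t, m) else 1.0
                    for x in range(2**m)])

def bit_reversal(m):
    P = np.zeros((2**m, 2**m))
    for x in range(2**m):
        P[int(format(x, '0{}b'.format(m))[::-1], 2), x] = 1
    return P

def qft_circuit(m):
    U = np.eye(2**m, dtype=complex)
    for q in range(1, m + 1):
        U = hadamard_on(q, m) @ U
        for k in range(2, m - q + 2):
            U = controlled_R(q + k - 1, q, k, m) @ U
    return bit_reversal(m) @ U

def qft_direct(m):
    Q = 2**m
    return np.array([[np.exp(2j*np.pi*a*b/Q) for b in range(Q)]
                     for a in range(Q)]) / np.sqrt(Q)

print(np.allclose(qft_circuit(3), qft_direct(3)))   # True
\end{verbatim}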
Note that the number of gates is $m(m-1)/2$ which is $O(log^2(Q))$. In fact, many of these gates can be omitted, because
for large $k$ the phase shift $R_{k}$ is exponentially close to the identity;
omitting such gates, we still obtain a very good approximation of the Fourier transform\cite{coppersmith}.
{~}
\noindent{\bf Kitaev's algorithm}: Kitaev's algorithm\cite{kitaev1} shows how to approximate efficiently
the FT over the cyclic group $Z_Q$ for any $Q$ (a cyclic group is a group that is generated by one element). The generalization to any Abelian group is simple\cite{kitaev1}, but will not be described here. The sequence of operation is the following:
{~}
\frame{\begin{minipage} [70mm]{160mm} \raggedright~\\\raggedright $~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {\bf Fourier Transform a la Kitaev}\\\raggedright~\\\raggedright
$~~~|a\ra\otimes|0\ra ~~\Longrightarrow ~~
|a\ra\otimes|\Psi_{Q,0}\ra ~~\Longrightarrow~~
|a\ra\otimes|\Psi_{Q,a}\ra~~\Longrightarrow ~~
|0\ra\otimes|\Psi_{Q,a}\ra~~ \Longrightarrow~~
|\Psi_{Q,a}\ra\otimes|0\ra ~~~$
{~}
\end{minipage}}
\noindent The most important and difficult step in this algorithm is the third step. Let us understand how to perform each of the other steps first: \begin{enumerate}
\item $|0\ra \longmapsto |\Psi_{Q,0}\ra$ is the quantum analogue of a simple classical operation: picking an integer between $0$ and $Q-1$ uniformly at random. It is done using a recursive procedure. Let $2^{n-1}<Q<2^n$. Denote $Q_0=2^{n-1}$ and $Q_1=Q-Q_0$. Apply the one qubit gate \(
|0\ra \longmapsto \sqrt{\frac{Q_0}{Q}}|0\ra +\sqrt{\frac{Q_1}{Q}}|1\ra\). Now, conditioned on the first bit $x$, create on the last $n-1$ bits, the state $|\Psi_{Q_x,0}\ra$ recursively.
\item
$|a\ra\otimes|\Psi_0\ra ~~\Longrightarrow~~
|a\ra\otimes|\Psi_a\ra$ is achieved by applying
\(|a,b\ra \longmapsto e^{2\pi i ab/Q} |a,b\ra.\)
\item The third operation is, perhaps surprisingly, the
most difficult part in the FT, and I will sketch the idea next. \item The last operation is merely swapping the bits. \end{enumerate}
To apply the third step, we note that
the vectors $|\Psi_{Q,a}\ra$
are eigenvectors of the unitary operation $U: |g^m\ra\longmapsto |g^{m+1}\ra$, where $g$ is the generator of the cyclic group, with eigenvalues $e^{- 2\pi i a/Q}$.
The operation \(|a\ra\otimes|\Psi_{Q,a}\ra~~\Longrightarrow ~~
|0\ra\otimes|\Psi_{Q,a}\ra\) is actually the reverse of computing the eigenvalue of an eigenvector. We need to be able to write down the eigenvalues of a given unitary matrix. Kitaev has proved the following lemma: \begin{lemm} (Kitaev) Let $U$ be a unitary matrix on $n$ qubits such that $U,U^2,U^4...U^{2^n}$ can be applied efficiently.
Let $ |\Psi_{\theta}\ra$ be $U$'s eigenvectors with corresponding
eigenvalues $e^{i\theta}$. Then the transformation \(|\Psi_{\theta}\ra\otimes|0\rangle~~\Longrightarrow ~~|\Psi_{\theta}\ra\otimes|\theta\ra\) can be approximated to exponential accuracy, efficiently. \end{lemm}
{\bf Proof:} The idea that lies behind this theorem is {\it interference.} The eigenvalues are phases, and in order
to gain information about a phase we need to compare it with some
reference phase, just like what happens in an interferometer. The implementation of this idea in the setting of qubits is done by adding a control qubit. We proceed as follows.
We apply the Hadamard transform $H$ on the control qubit, which
separates the state to two paths,
one in which the control qubit is in state $|1\ra$
and the other in which it is $|0\ra$. Now $U$ is applied on $|\Psi_\theta\ra$, {\it conditioned} that the control qubit is $1$. This adds a phase $e^{i\theta}$ on one of the paths, which can be compared to the reference path. Finally, the controlled qubit is rotated again by a Hadamard transform. The following diagram captures the idea schematically:
\setlength{\unitlength}{0.030in} \begin{picture}(100,50)(-20,-20)
\put(-10,0){\makebox(5,5){$|\Psi_\theta,0\ra$}} \put(5,0){\vector(2,1){20}}
\put(30,12){\makebox(50,5){$\frac{1}{\sqrt{2}}|\Psi_\theta,0\ra~~~
\longmapsto~~~\frac{1}{\sqrt{2}}|\Psi_\theta,0\ra $}}
\put(30,-17){\makebox(50,5){$\frac{1}{\sqrt{2}}|\Psi_\theta,1\ra ~~~
\longmapsto~~~\frac{1}{\sqrt{2}}e^{i\theta}|\Psi_\theta,1\ra$}} \put(5,0){\vector(2,-1){20}} \put(25,10){\vector(1,0){60}} \put(25,-10){\vector(1,0){60}} \put(85,10){\vector(2,-1){20}} \put(85,-10){\vector(2,1){20}}
\put(115,0){\makebox(50,5){$|\Psi_\theta\ra
\otimes(\frac{1+e^{i\theta}}{2}|0\ra+\frac{1-e^{i\theta}}{2}|1\ra)$}} \end{picture}
{~}
The control qubit is now in a state
$|\beta\ra=(\frac{1+e^{i\theta}}{2}|0\ra+\frac{1-e^{i\theta}}{2}|1\ra)$,
which is a qubit biased according to the eigenvalue. If we measure this qubit, it behaves
like a coin flip
with bias $p=|1-e^{i\theta}|^2/4=\frac{1-\rm{cos}\theta}{2}$.
The idea is to create many control qubits, and measure all of them.
This is like performing many independent coin tosses.
We can deduce $\theta$ from the ratio between the number of times we got $1$
and the number of times we got $0$. For this, we will apply a classical algorithm
on the outcomes of the measurements.
However, there are two problems with this idea. One is that the outcome of the algorithm will
be classical, while we want to create
a unitary transformation which writes down the eigenvalues
and can be applied on superpositions. We will deal with
this problem later. A more severe problem is that the algorithm should find $\theta$ with
exponential accuracy (polynomially many bits), since there are exponentially many eigenvalues.
To achieve exponential accuracy in $\theta$ we need, by Chernoff's inequality\cite{fft}, exponentially many coin tosses.
Since we are limited to polynomial algorithms, we can only deduce $\theta$ with
polynomial accuracy. The solution to this problem takes advantage
of the fact that the powers of $U$ can be applied efficiently.
To deduce $\theta$ to higher accuracy,
we slightly modify the interference scheme: instead of $U$,
we apply $U^2$. This will generate another set of biased qubits,
from which we can deduce $2\theta$ with polynomial accuracy.
The same thing can be done using \(U^4,...,U^{2^n},\)
and this will generate $n$ sets of $m=poly(n)$ biased qubits.
From the outcomes of the measurements of the $j'$th set, we compute
$2^j\theta$ with polynomial accuracy. It is easy to construct a
polynomial classical algorithm that computes $\theta$ with exponential precision
(which is what we need) from the polynomial approximations of
$\theta$,$2\theta$,$4\theta$,... $2^n \theta$.
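The following Python sketch illustrates one possible version of this classical post-processing step. It assumes that each measured portion gives the fractional part of $2^j\theta/2\pi$ to within $1/16$; the phase value and the noise model are of course only illustrative.
\begin{verbatim}
import random

def reconstruct(betas):
    # betas[j] estimates the fractional part of 2^j * phi, to within ~1/16.
    x = betas[0] % 1.0
    for j in range(1, len(betas)):
        k = round(x * 2**j - betas[j])      # integer part consistent with x
        x = ((k + betas[j]) / 2**j) % 1.0   # accuracy doubles at every step
    return x

phi = 0.3718224                             # the eigenphase theta / (2 pi)
n = 20
betas = [((2**j * phi) % 1.0 + random.uniform(-1/16, 1/16)) % 1.0
         for j in range(n)]
print(abs(reconstruct(betas) - phi))        # of order 2^(-n)
\end{verbatim}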
It is left to show how the above computation can be made unitary.
The idea is that it is not necessary to measure each set of qubits,
in order to count the number of $1'$s. Instead of measuring these bits, we will apply a unitary transformation that counts the portion of $1$'s out of $m$
and writes this portion down on a {\it counting} register. If we denote by $w(i)$ the number of $1'$s in a string $i$, or the {\it weight} of the string, then this transformation will be: \begin{equation}\label{statekitaev}
|i\ra|0\ra \longmapsto |i\ra|w(i)/m\ra. \end{equation} The resulting state will look something like: \begin{equation}\label{stateprob}
|\Psi\ra\otimes \sum_i \sqrt{p^{w(i)}(1-p)^{m-w(i)}} |i\ra|w(i)\ra \end{equation} with perhaps extra phases. Most of the weight in this state is concentrated on strings with approximately $pm$ $1'$s, like in a Bernoulli experiment. For each set of control qubits, we obtain some portion, written on
the counting register of that set. We denote the $n$ portions by
\(w_\theta,w_{2\theta}...w_{2^n\theta}. \)
We can now apply the unitary version of the classical algorithm which
computes an exponentially
close approximation of $\theta$ given the portions $w$.
If we call this procedure $T$, we have:
\begin{equation}
|w_\theta\ra|w_{2\theta}\ra\cdots|w_{2^n\theta}\ra|0\ra
\stackrel{T}{\longrightarrow}
|w_\theta\ra|w_{2\theta}\ra\cdots|w_{2^n\theta}\ra|\theta\ra
\end{equation}
We now have $\theta$ written down on the last register.
Let us denote by $Q'$ the unitary operation which the algorithm applies so far.
It is tempting to think that $Q'$ is the desired transformation,
\(|\Psi_{\theta}\ra\otimes|0\rangle~~\Longrightarrow ~~|\Psi_{\theta}\ra\otimes|\theta\ra.\)
This is not true. Actually, $Q'$ is exponentially close
to
\(|\Psi_{\theta}\ra\otimes|0\rangle\otimes|0\rangle~~\stackrel{Q'}{\Longrightarrow}
~~|\Psi_{\theta}\ra\otimes|\theta\ra\otimes|garbage_\theta\rangle,\)
where the last register consists of all the control qubits and ancilla qubits which we have used
during the computation.
The reason for the fact that $Q'$ is not exactly $Q$, is that
in the classical coin tossing, there is an exponentially
small probability to get a result which is very far from the expected number of $1'$s, $mp$.
This translates in
equation \ref{stateprob} to the appearance, with exponentially small
weight, of strings $i$ which are very far from the
expected number of $1'$s, $mp$. We now want to ask why the garbage qubits matter.
These qubits carry information which is no longer needed,
but are nevertheless entangled with the rest of the computer.
The point is that their existence might prevent interference in future computation.
We will develop tools to think about interference in section $9$, but roughly, garbage has the
same effect as interaction with the environment, which is known to cause decoherence.
How to get rid of the garbage?
The problem is that we cannot simply erase the garbage by setting all the garbage qubits
to $|0\rangle$, because the transformation that takes a general
state to $|0\rangle$ is not unitary.
Fortunately, in our case there is a unitary transformation that erases the garbage.
We do the following:
We copy $\theta$, which is written on the last register,
on an extra register which is initialized
in the state $|0\rangle$. The copying is done bit by bit, using polynomially many
$CNOT$ gates. We now apply in reverse order the reverse of all transformations done so far
in the algorithm, except for the $CNOT$ gates.
The overall transformation is exponentially close to the following sequence
of operations:
apply $Q$, then copy $\theta$ and then apply $Q^{-1}$.
This sequence of operation indeed achieves the desired transformation
without garbage:
\begin{eqnarray*}
|\Psi_{\theta}\ra\otimes|0\rangle\otimes|0\rangle\otimes|0\rangle~~\stackrel{Q}{\Longrightarrow}
~~|\Psi_{\theta}\ra\otimes|\theta\ra\otimes|garbage_\theta\rangle\otimes|0\rangle
~~\stackrel{CNOT~~gates} {\Longrightarrow}\\
~~|\Psi_{\theta}\ra\otimes|\theta\ra\otimes|garbage_\theta\rangle\otimes|\theta\rangle
~~\stackrel{Q^{-1}} {\Longrightarrow}
~~|\Psi_{\theta}\ra\otimes|0\ra\otimes|0\rangle\otimes|\theta\rangle.
\end{eqnarray*}
One can save many qubits by erasing garbage in the middle of the computation, when it
is no longer needed, and using these erased qubits as register in the rest of the computation. A different proof of this lemma can be found in \cite{cleve4}, where QFFT over $Z_2^n$ is used. $\Box$
{~}
This concludes the Fourier transform algorithm. Kitaev's procedure of writing the eigenvalue down
implies a very simple alternative factorization algorithm. An integer $N$ is factorized, again, by finding the order of a number $Y$ which is coprime to $N$. (Recall that the order of $Y$ is the least $r$ such that $Y^r=1 ~ mod ~N$.) Consider the unitary transformation
\(U:|g\ra \longmapsto |gY ~mod ~N\ra\). The eigenvectors of $U$, $\{|\Psi\ra\}$,
are exactly the linear superpositions of all configurations in the subgroup $\{Y, Y^2, Y^3,...Y^r\}$, or any coset of this subgroup, $\{gY, gY^2,gY^3,...gY^r\}$, with appropriate phases:
\[U|\Psi_a\ra=U(\sum_j e^{2\pi i ja/r} |gY^j\ra)=e^{-2\pi i a/r}
(\sum_j e^{2\pi i ja/r} |gY^j\ra) .\] The eigenvalues of $U$ hold information about $r$! The idea would be to apply Kitaev's lemma, write down $\theta=2\pi a/r$ and deduce $r$ from it.
We start with the basis state $|1\rangle$ (the identity element of the group), which can be written as an equal superposition of all the
eigenvectors: $|1\ra\propto\sum_a |\Psi_a\ra$,
as you can easily check. Applying Kitaev's lemma to the state $|1\ra$
we get on the second register all eigenvalues written with uniform probability. We now measure this
register, which carries an exponentially close approximation of $2\pi a/r$.
We divide by $2\pi$ to get $c$, an exponentially good approximation of $a/r$.
Now, using the method of continued fraction, like in Shor's algorithm, we find
the closest fraction to $c$ with denominator less than $N$. With high enough probability $a$ and $r$ are coprime, so we get $r$ in the denominator. If not, the denominator $d$ that we find does not satisfy $Y^d=1~mod ~N$, and we repeat the experiment again. Here is a summary of the algorithm:
{~}
\frame{\begin{minipage} [70mm]{130mm} \raggedright~\\\raggedright $~~~~~~~~~~~~~~~~~~~~~~~~~~~$ {\bf Factorization a la Kitaev}\\~\\ \raggedright
$~~~~~~\sum_{a} |\Psi_a\ra|0^n\ra $\\ \raggedright
$~~~~~$ Apply Kitaev's transformation $|\Psi_a\ra|0\ra \longmapsto
|\Psi_a\ra|2\pi a/r\ra$
\\ \center{ $\Downarrow$}\\~\\ \raggedright
$~~~~~\sum_{a} |\Psi_a\ra|2\pi a/r\ra$\\ \raggedright
$~~~~~$Measure the second register. Classically compute $r$ from the outcome. \\ {~}
\end{minipage}}
{~}
Factorization can be viewed as finding the order of elements in Abelian groups. Many people tried to generalize Shor's and Kitaev's algorithms to non-Abelian groups. It is conjectured that Fourier transforms over non-Abelian groups would be helpful tools, however they are much more complicated operations since the Fourier coefficients are complex {\it matrices}, and not complex numbers!
Beals\cite{beals1} made the first (and only) step in this direction by discovering
an efficient quantum Fourier transform algorithm for the non-Abelian permutation group $S_n$, building on the classical FFT over $S_n$\cite{diaconis,clausen}. Beals was motivated by an old hard problem in computer science: given two graphs, can we say whether they are isomorphic (i.e. one is simply a permutation of the other) or not? This problem is not known to be $NP$-complete, but the best known algorithm is exponential. It is still not known whether Beals' Fourier transform
can be used for solving graph isomorphism. A very interesting open question is whether efficient quantum Fourier transforms can be done over any group, and can they be used to solve other problems.
\section{Grover's Algorithm for Finding a Needle in a Haystack} Grover's algorithm is surprising and counter intuitive at first sight, though it achieves only a polynomial (quadratic) improvement over classical algorithms. It deals with the {\it database search problem.} Suppose you have access to an unsorted database of size $N$. You are looking for an item $i$ which satisfies some property. It is easy to check whether the property is satisfied or not.
How long will it take you to find such an item, if it exists? If you are using classical computation, obviously it can take you $N$ steps. If you are using probabilistic classical computation, you can reduce it to $N/2$ expected steps. But if you are using a quantum computer, you can find the item in $O(\sqrt{N})$ steps! I will present here the algorithm which was found by Grover\cite{grover1} in $1995$. However, I will use here a different representation of the algorithm, which is mainly based on the geometrical interpretation by Boyer {\it et.al.} \cite{boyer1,brassard1}.
The algorithm works as follows. Set $\rm{log}(N)=n$, and let us define a function \( f:\{0,1\}^n\longmapsto \{0,1\}\) where $f(i)=0$ if the $i'th$ item does not satisfy the desired property, and $f(i)=1$ in the case it does. Let $t$ be the number of items such that $f(i)=1$. For the moment, we assume that $t=1$. The algorithm operates in the Hilbert space of $n$ qubits. Its main part actually works in a subspace of dimension $2$ of this space. This subspace is the one which is spanned by the two vectors: \begin{equation}
|a\ra=\frac{1}{\sqrt{N}}\sum_{i=0}^{2^n-1}|i\ra~~~,~~~
|b\ra= \frac{1}{\sqrt{N-1}}\sum_{i=0|f(i)=0}^{2^n-1}|i\ra. \end{equation}
\setlength{\unitlength}{0.030in} \begin{picture}(100,40)(-60,-5) \put(0,0){\framebox(60,34)} \put(5,5){\vector(4,3){30}} \put(5,5){\vector(1,0){37}}
\put(45,5){\makebox(0,0){$|b\ra$}} \put(14,9){\makebox(0,0){$\theta$}}
\put(38,27){\makebox(0,0){$|a\ra$}} \qbezier(35,10)(35,20)(30,20) \end{picture}
We begin by applying a FT on $|0\ra$ which generates the uniform vector $ |a\ra$,
using $n$ Hadamard gates. We now want to rotate the vector in the two dimensional subspace spanned by $ |a\ra$ and $|b\ra$, so that eventually we have large projection on the direction orthogonal to
$|b\ra$, which is exactly the item we want. The idea is that a rotation by the angle $2\theta$, is equivalent to two {\it reflections},
first with respect to $|b\ra$, and then with respect
to $ |a\ra$. We define a Boolean function $g(i)$ to be $0$ only for $i=0$, and $1$
for the rest. A reflection around $|0\ra$ is obtained by
$R_0: |i\ra \longmapsto (-1)^{g(i)} |i\ra$. A reflection around $|a\ra$ is achieved by: $R_a=FT\circ R_0\circ FT$. To reflect around $|b\ra$, apply the transformation:
$R_b:|i\ra \longmapsto (-1)^{f(i)} |i\ra$. A rotation by an angle $2\theta$ is achieved by applying $R_aR_b$.
{~}
\frame{\begin{minipage} [30mm]{100mm} $~~~~~~~~$\center{\bf{Grover's algorithm}}\\
\raggedright $~~~~$ Apply Fourier transform on $|0\ra$ to get $|a\ra$.\\ \raggedright $~~~~$ Apply $R_aR_b ~~~\sqrt{N}\pi/4$ times.\\
\raggedright $~~~~$ Measure all bits.\\ {~}\\ \end{minipage}}
{~}
\noindent The crucial point is that $\theta$ satisfies $\rm{cos}(\theta)=\sqrt{\frac{N-1}{N}}$ so for large $N$, we have \begin{equation} \theta\approx \rm{sin}(\theta)=\frac{1}{\sqrt{N}} \end{equation} Therefore after $O(\sqrt{N})$ rotations,
with high probability the measurement yields
an item satisfying $f(i)=1$. Note that this algorithm relies heavily on the assumption
that the number of ``good'' items is one. If, for example, the number of ``good'' items is $t=4$, then after the same number of iterations we will have almost $0$ probability to measure a ``good'' item, exactly when we expect this probability to be almost one! There are several ways to generalize this algorithm to the general case where the number of ``good'' items is not known. One is a known classical reduction\cite{valiant3}.
Another generalization was suggested in \cite{boyer1}. This suggestion not only finds a ``good'' item regardless of what the number, $t$, of ``good'' items is, but also gives a good estimation of $t$. The idea is that the probability to measure a ``good'' item is a periodic function of the number of Grover iterations, where this period depends on $t$ in a well defined way. The period can be found using ideas similar to what is used in Shor's algorithm, by Fourier transforms. Grover's algorithm can be used to solve $NP$-complete problems in time $\sqrt{2^n}$, instead of the classical $2^n$, which simply goes over all the $2^n$ items in the database.
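The rotation picture makes this easy to simulate. The following Python sketch computes the success probability in the two dimensional subspace, for an iteration count tuned to $t=1$, and shows the failure described above when the actual number of ``good'' items is four times larger (the value of $N$ is arbitrary):
\begin{verbatim}
import numpy as np

def success_prob(N, t, steps):
    # The state starts at angle theta from |b>, with sin(theta) = sqrt(t/N),
    # and each iteration R_a R_b rotates it by 2*theta.
    theta = np.arcsin(np.sqrt(t / N))
    return np.sin((2*steps + 1) * theta)**2

N = 2**16
steps = int(np.pi * np.sqrt(N) / 4)         # iteration count tuned for t = 1
for t in (1, 4):
    print(t, success_prob(N, t, steps))     # ~1 for t = 1, ~0 for t = 4
\end{verbatim}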
{~}
\noindent Grover's algorithm provides a quadratic advantage over any possible classical algorithm; this advantage is optimal, as was shown by Bennett {\it et.al.}\cite{bbbv,boyer1,zalka2}, a result which I will discuss when dealing with quantum lower bounds in section \ref{bounds}. Let me now describe several variants on Grover's algorithm,
all using Grover's iteration as the basic step. (These variants and others can be found in Refs.
\cite{brassard3, grover2, durr, grover3,brassard4,boyer1} and \cite{grover4}.)
{~}
\noindent {\bf Estimating the median to a precision $\epsilon$.}\cite{grover4,grover2} \begin{quote} $f$ is a function from $\{1,..N\}$ to $\{1,..N\}$ where $N$ is extremely large. We are given $\epsilon>0$. We want to find the median $M$, where we allow a deviation by $\epsilon$, i.e. the
number of items smaller than $M$ should be between $\frac{(1-\epsilon)N}{2}$ and $\frac{(1+\epsilon)N}{2}$.
We also allow an exponentially small (in $1/\epsilon$) probability for an error. \end{quote} We assume that $N$ is very large, and so only polylog($N$) operations are considered feasible. Classically, this means that the Median cannot be computed exactly but only estimated probabilistically. A classical probabilistic algorithm
cannot do better than sample random elements $f(i)$, and compute their median.
An error would occur if more than half of the sampled elements are chosen from the top
$\frac{(1-\epsilon)N}{2}$ items, or from the bottom $\frac{(1-\epsilon)N}{2}$ items.
For these events to have exponentially small probability, we need
$O(\frac{1}{\epsilon^2})$ samples, by Chernoff's law\cite{fft}. The following quantum algorithm performs the task in $O(\frac{1}{\epsilon})$ steps.
The idea is to find $M$ by binary search, starting with some value, $M_0$, as a guess. We will estimate up to precision $\epsilon$, the number
$|\eta|$ such that $(1+\eta)N/2$ items satisfy $f(i)> M_0$. This will take us $O(\frac{1}{\epsilon})$ steps. We can now continue the binary search of $M$, according to the $\eta$ which we have found. Note that since we do not have information about the sign of $\eta$, a simple binary search will not do, but a slight modification will.
Each step reduces the possible range of $M$ by a factor of half, and thus the search will take polylog($N$)$O(\frac{1}{\epsilon})$ steps.
It is therefore enough to estimate $|\eta|$ in $O(\frac{1}{\epsilon})$ steps, given a guess for the median, $M_0$. Here is how it is done.
We define
$f_0(i)=1 $ if $f(i)>M_0$, and $f_0(i)=0 $ if $f(i)\le M_0$. Our basic iteration will be a
rotation in the subspace spanned by two vectors:
\begin{equation}
|\alpha\ra=\frac{1}{\sqrt{N}}\sum_{i=0}^{2^n-1}|i\ra,~~~~
|\beta\ra= \frac{1}{\sqrt{N}}\sum_{i=0}^{2^n-1} (-1)^{f_0(i)}|i\ra \end{equation}
Let $|\gamma\rangle$ be a vector orthogonal to $|\beta\rangle$ in
the two dimensional subspace. The angle between $|\alpha\ra$ and $|\gamma\ra$, is $\theta\approx \rm{sin}(\theta)=\eta$. Rotation by $2\theta$ can be done like in Grover's algorithm.
We start with $|\alpha\ra$ and rotate by $2\theta$ $\frac{1}{2\epsilon}$ times.
The angle between our vector and $|\alpha\ra$
is $\eta/\epsilon$. We can now project on $|\alpha\ra$ (by rotating $|\alpha\ra$ to $|0\ra$ and projecting on $|0\ra$). The result is distributed like a coin flip with bias $\cos^2(\eta/\epsilon)$. We can repeat this experiment poly($\frac{1}{\epsilon}$) number of times. This will allow us to estimate the bias $\cos^2(\eta/\epsilon)$ and
from it
$|\eta|/\epsilon$, up to a $1/4$, with exponentially small error probability.
Thus we can estimate $|\eta|$ up to $\epsilon/4$ in $O(\frac{1}{\epsilon})$ time.
{~}
\noindent {\bf Estimating the mean to a precision $\epsilon$.} \begin{quote} $f$ is a function from $\{1,..N\}$ to $[-0.5,0.5]$, where $N$ is assumed to be very large. We are given $\epsilon>0$, We want to estimate the mean $M$ up to a precision $\epsilon$. \end{quote} Again, classically, this will take $O(\frac{1}{\epsilon^2})$, assuming that $N$ is extremely large. Grover suggested a quantum algorithm to solve this problem in $O(\frac{1}{\epsilon})$ steps\cite{grover2}. Instead of showing Grover's version, I will
show a simple classical reduction\cite{wigderson} which allows solving the mean estimation problem given the median algorithm.
The idea is that for Boolean functions the mean and median problems coincide. We write the real number $f(i)$, which is between $-0.5$ and $0.5$, in its binary representation (after shifting it so that it lies between $0$ and $1$): $f(i)=0.f_1(i)f_2(i)f_3(i).....$ up to $\rm{log}(\frac{2}{\epsilon})$ digits, where $f_j(i)$ is the $j'$th bit of $f(i)$.
Hence, $f_j(i)$ are Boolean functions. We can denote by $M_j$ the mean of $f_j$, which can be estimated by the median algorithm. The mean of $f$ can be computed from
$\frac{1}{N}\sum_i f(i)=\sum_j 2^{-j} (\frac{1}{N}\sum_i f_j(i))= \sum_j 2^{-j} M_j$. Cutting the number of digits causes at most $\frac{\epsilon}{2}$ error in $M$. Each $M_j$ will be estimated to precision $\epsilon/2$, and this will cause $\frac{\epsilon}{2}$
additional error all together.
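The following Python sketch illustrates the reduction classically: the exact Boolean average plays the role of the quantum estimator (a stand-in, not the quantum algorithm itself), and the values are shifted by $0.5$ so that the binary expansion makes sense on the range $[-0.5,0.5]$. All the numbers are illustrative.
\begin{verbatim}
import random

def boolean_mean(bits):            # stand-in for the quantum estimator
    return sum(bits) / len(bits)

N, eps = 10000, 1e-3
f = [random.random() - 0.5 for _ in range(N)]
digits = 11                        # ~ log2(2/eps)

estimate = -0.5                    # undo the shift by 0.5 at the end
for j in range(1, digits + 1):
    f_j = [int((x + 0.5) * 2**j) % 2 for x in f]   # j-th bit of f(i) + 0.5
    estimate += 2**(-j) * boolean_mean(f_j)

print(abs(estimate - sum(f) / N))  # smaller than eps
\end{verbatim}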
{~}
\noindent {\bf Finding the minimum} \begin{quote} $f$ is a function from $\{1,..N\}$ to $\{1,..N\}$ . We want to find $i$ such that $f(i)$ is minimal. \end{quote} Classically, this will take $O(N)$, if the database is not sorted. Durr and Hoyer\cite{durr} show a quantum algorithm which finds the minimum in $O(\sqrt{N})$. This is done by a binary search of the minimum: At each step $j$ , we have a threshold $\theta_j$. This defines a function: $f_j(i)=1$ if $f(i)<\theta_j$, and $f_j(i)=0$ otherwise. $\theta_0$ is fixed
to be $N/2$, i.e. in the middle of the interval $[1,...N]$. Then we apply Grover's search, to find an $i$ such that
$f_0(i)=1$. If we find such an $i$, we fix the new threshold, $\theta_1$ to be $f(i)$. Else, we fix $\theta_1=3N/4$, i.e. in the middle of the interval $[N/2,...N]$. We continue this binary search until the current interval has shrunk to the size of one number. This is the minimum.
{~}
Grover's iteration can be used to achieve a quadratic gap also between quantum and classical communication complexity\cite{buhrman},
an issue which is beyond the scope of this review.
\section{What Gives Quantum Computers their (Possible) Extra Power}\label{path} Let us ask ourselves why quantum computers can perform tasks which seem hard or impossible to do efficiently by classical machines. This is a delicate question which is still an issue of debate. One way to look at this question is using Feynman's path integrals. We will associate a diagram with a computation, in which
the vertical
axis will run over all $2^n$ possible classical configurations, and the horizontal axis will be time. Here is an example of such a diagram:
{~}
\setlength{\unitlength}{0.030in}
\begin{picture}(100,60)(-50,0) \put(0,0){\vector(2,1){30}} \put(12,0){\makebox(5,5){$-1$}} \put(0,0){\vector(1,0){30}} \put(12,8){\makebox(5,5){$1$}}
\put(-6,-3){\makebox(5,5){$11$}} \put(-6,13){\makebox(5,5){$10$}} \put(-6,28){\makebox(5,5){$01$}} \put(-6,43){\makebox(5,5){$00$}}
\put(98,-3){\makebox(5,5){$11$}} \put(98,13){\makebox(5,5){$10$}} \put(98,28){\makebox(5,5){$01$}} \put(98,43){\makebox(5,5){$00$}}
\put(14,-9){\makebox(5,5){$I\otimes H$}}
\put(32,16){\vector(1,1){31}}\put(39,28){\makebox(5,5){$1$}} \put(32,16){\vector(1,0){30}}\put(38,16){\makebox(5,5){$-1$}} \put(32,0){\vector(1,1){30}}\put(44,8){\makebox(5,5){$1$}} \put(32,0){\vector(1,0){30}}\put(45,0){\makebox(5,5){$-1$}}
\put(44,-9){\makebox(5,5){$H\otimes I$}}
\put(65,16){\vector(1,0){30}}\put(73,17){\makebox(5,5){$1$}} \put(65,16){\vector(2,-1){30}}\put(67,9){\makebox(5,5){$1$}}
\put(65,47){\vector(1,0){30}}\put(73,48){\makebox(5,5){$1$}} \put(65,47){\vector(2,-1){30}}\put(66,40){\makebox(5,5){$1$}}
\put(65,0){\vector(1,0){30}}\put(74,0){\makebox(5,5){$-1$}} \put(65,0){\vector(2,1){30}}\put(70,4){\makebox(5,5){$1$}}
\put(65,32){\vector(2,1){30}}\put(73,26){\makebox(5,5){$-1$}} \put(65,32){\vector(1,0){30}}\put(67,34){\makebox(5,5){$1$}}
\put(74,-9){\makebox(5,5){$I\otimes H$}}
\end{picture}
{~}
{~}
In this diagram, the state is initially $|11\ra$. The operation $H$ is applied three times: first on the second bit, then on the first bit, and then again on the second bit (the gates $I\otimes H$, $H\otimes I$, $I\otimes H$ in the diagram). The numbers near the edges indicate the probability amplitudes to transform between configurations: $-1$ corresponds to $-\frac{1}{\sqrt{2}}$ and $1$ corresponds to $\frac{1}{\sqrt{2}}$. Let us now compute the weight of each basis state in the final superposition. This weight is the sum of the weights of all paths leading from the initial configuration to the final one, where the weight of each path is the product of the weights on the edges of the path.
\begin{equation}
\rm{Quantum}: ~~~~\rm{Prob}(j)=|\sum_{d:i\mapsto j} w(d)|^2 \end{equation} One can see that in the above diagram the weights of $10$ and $00$ in the final superposition are zero, because the two paths leading to each one of these states cancel one another.
What can we learn from this diagram? In order to analyze this diagram, I would like to define a classical computation model, called {\it stochastic circuits}
which can be associated with very similar diagrams. The comparison between the two models is quite instructive. The nodes in a stochastic circuit
have an equal number of inputs and outputs, like nodes in a quantum circuit. Instead of unitary matrices,
the nodes will be associated with stochastic matrices, which means that the entries of the matrices are non-negative reals, and
the columns are probability distributions. Such matrices correspond to applying stochastic transformations on the bits, i.e. a string $i$ transforms to a string $j$ with probability equal to the matrix entry $R_{j,i}$. For example, let $R$ be the stochastic matrix on one bit: \begin{equation} R=\left(\begin{array}{cc} \frac{1}{2} &\frac{1}{2}\\ \frac{1}{2} &\frac{1}{2} \end{array}\right) \end{equation} This matrix takes any input to a uniformly random bit. Consider the probabilistic computation on two bits, where we apply $R$ on the second bit, then on the first bit, and then again on the second bit (the gates $I\otimes R$, $R\otimes I$, $I\otimes R$ in the diagram below). The diagram we get is:
{~}
\setlength{\unitlength}{0.030in}
\begin{picture}(100,60)(-50,-10) \put(0,0){\vector(2,1){30}} \put(0,0){\vector(1,0){30}}
\put(-6,-3){\makebox(5,5){$11$}} \put(-6,13){\makebox(5,5){$10$}} \put(-6,28){\makebox(5,5){$01$}} \put(-6,43){\makebox(5,5){$00$}}
\put(98,-3){\makebox(5,5){$11$}} \put(98,13){\makebox(5,5){$10$}} \put(98,28){\makebox(5,5){$01$}} \put(98,43){\makebox(5,5){$00$}}
\put(14,-9){\makebox(5,5){$I\otimes R$}}
\put(32,16){\vector(1,1){31}} \put(32,16){\vector(1,0){30}} \put(32,0){\vector(1,1){30}} \put(32,0){\vector(1,0){30}}
\put(44,-9){\makebox(5,5){$R\otimes I$}}
\put(65,16){\vector(1,0){30}} \put(65,16){\vector(2,-1){30}}
\put(65,47){\vector(1,0){30}} \put(65,47){\vector(2,-1){30}}
\put(65,0){\vector(1,0){30}} \put(65,0){\vector(2,1){30}}
\put(65,32){\vector(2,1){30}} \put(65,32){\vector(1,0){30}}
\put(74,-9){\makebox(5,5){$I\otimes R$}}
\end{picture}
{~}
{~} \noindent where the weights of all edges are $\frac{1}{2}$. Just like in quantum computation, the probability for a configuration in the final state is computed by summing over the weights of all paths leading to that configuration, where the weight of each path is the product of the weights of the edges participating in the path.
\begin{equation} \rm{Stochastic}: ~~~~\rm{Prob}(j)=\sum_{d:i\mapsto j} \rm{Prob}(d) \end{equation}
In this diagram all the configurations in the final state
have probability $\frac{1}{4}$.
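The two rules can be compared directly on the three-gate example above. The following Python sketch enumerates all paths through the diagram and applies the quantum rule (sum the signed weights of the paths, then take the absolute value squared) and the stochastic rule (sum the positive weights):
\begin{verbatim}
import numpy as np
from itertools import product

s = 1 / np.sqrt(2)
H = np.array([[s, s], [s, -s]])        # amplitudes, with a negative entry
R = np.array([[.5, .5], [.5, .5]])     # probabilities, all positive

def on_first(G):  return np.kron(G, np.eye(2))
def on_second(G): return np.kron(np.eye(2), G)

def path_sum(gates, start, end):
    total = 0.0
    for mid in product(range(4), repeat=len(gates) - 1):
        cfgs = [start, *mid, end]
        w = 1.0
        for t, G in enumerate(gates):
            w *= G[cfgs[t + 1], cfgs[t]]   # weight of the edge cfgs[t] -> cfgs[t+1]
        total += w
    return total

for end in range(4):
    amp = path_sum([on_second(H), on_first(H), on_second(H)], 0b11, end)
    p   = path_sum([on_second(R), on_first(R), on_second(R)], 0b11, end)
    print(format(end, '02b'), round(abs(amp)**2, 3), round(p, 3))
\end{verbatim}
The output shows the destructive interference explicitly: the quantum probabilities of $00$ and $10$ vanish, while all four stochastic probabilities are $\frac{1}{4}$.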
We now have two models which are very similar. It can be easily seen that stochastic circuits are equivalent to probabilistic TM.
This means that we can find the advantage of quantum computation over classical computation
in the difference
between quantum circuits and stochastic circuits. It is sometimes tempting to say that quantum computation is powerful
because it has exponential parallelism. For $n$ particles, the vertical axis will run over
$2^n$ possible classical states. But this will also be true in the diagram of stochastic computation on $n$ bits! The difference between quantum and classical computations is therefore more subtle.
To reduce the difference between the two models even further, it can be shown\cite{bv} that the complex numbers in quantum computation can be replaced with real numbers, without damaging the computational power. This is done by adding an extra qubit to the entire circuit, which will carry the information of whether we are working in the real or imaginary part of the numbers. The correspondence between the superpositions of the complex circuit to the real circuit will be: \begin{equation}
\sum_i c_i|i\ra \longmapsto \sum_i Re(c_i)|i,0\ra + Im(c_i)|i,1\ra \end{equation}
Hence quantum computers maintain their computational power even if they use only real valued unitary gates.
There are two differences between these gates and stochastic gates.
One is that stochastic gates have positive entries while real unitary gates have
positive and negative entries. The other difference is that unitary gates preserve
the $L_2$ norm of vectors, while stochastic gates preserve $L_1$ norm.
The difference between the quantum and classical models can therefore be summarized in the following table:
\begin{eqnarray*} ~~~~~~~ \underline{Quantum} ~~~~~~~ & ~~~\underline{Stochastic}\\ Negative ~+~ Positive & ~~~Positive\\ ~~~~~~L_2~ Norm~~~~~~ & ~~~L_1~ Norm \end{eqnarray*}
Why are negative numbers so important? The fact that weights can be negative allows different paths to cancel each other. We can have many non-zero paths leading to the same final configuration, all cancelling each other, causing destructive interference. This is exactly what happens in Deutsch and Jozsa's algorithm, Simon's algorithm and Shor's algorithm, where the paths that lead to ``bad'' strings in the last step of the algorithm are destructively interfering, and at the same time paths that lead to ``good'' strings are constructively interfering. In the probabilistic case, interference cannot occur. Paths do not {\it talk} to each other, there is no influence of one path on the other. Probabilistic computation has the power of exponentiality, but lacks the power of
interference offered by computation that uses negative numbers. An exponential advantage in
computational power of negative numbers is already familiar
from classical complexity theory, when comparing Boolean circuits with monotone Boolean circuits\cite{valiant2}.
There are other computational models which exhibit interference, such as optical computers. However, these models do not exhibit exponentiality. It is only the quantum model which combines the two features of exponential space which can be explored in polynomial time,
together with the ability of interference. (See also \cite{aharonov5}.)
Another point of view of the origin of the power of quantum computation is quantum correlations, or {\it entanglement}. Two qubits are said to be entangled if their state cannot be written as a tensor
product of two single qubit states, as for example the EPR pair $\frac{1}{\sqrt{2}}(|00\ra+|11\ra)$. In a system of $n$ qubits, the entanglement can be spread over
macroscopic range, like in the state $\frac{1}{\sqrt{2}}(|0^n\ra+|1^n\ra)$, or it can be concentrated between pairs of particles like in the state
$\bigotimes_{n/2} \frac{1}{\sqrt{2}}(|00\ra+|11\ra)$.
It can be shown that quantum computational power exists only when the entanglement is spread over macroscopically many particles. If the entanglement is not macroscopically spread,
the system can be easily simulated by a classical computer\cite{aharonov2}. For the importance of entanglement see for example
Jozsa's review\cite{jozsa2}. This macroscopic spread of entanglement lies in the essence of another important topic, quantum error correcting codes, which we will encounter later.
\section{What We Cannot Do with Quantum Computers}\label {bounds} Now that we have all this repertoire of algorithms in our hands, it is tempting to try and solve everything on a quantum computer! Before doing that, it is worthwhile to understand the limitations of this model. The first thing to know is that this model cannot solve any question which is undecidable by a classical machine. This is simply due to the fact that anything that can be done in this model can be simulated on a classical machine by computing the coefficients of the superposition and writing them down. This will take an exponential amount of time, but finally will solve anything which can be done quantumly. Therefore the only difference between classical and quantum computation lies
in the computational cost.
The trivial simulation of quantum computers by classical machines is exponential both in time and space. Bernstein and Vazirani\cite{bv} showed that classical Turing machines can simulate quantum computers in polynomial space, although still in exponential time:
\begin{theo}(Bernstein, Vazirani) \(BQP\subseteq Pspace\) \end{theo}
The theorem means that anything that can be done on a quantum machine can be done by a classical machine which uses only polynomial space. To prove this result, have another look
at the Feynman path graph presented in Sec. $9$. To compute the weight of one path, we need only polynomial space. We can run over all paths leading to the same configuration, computing the weights one by one, and adding them up. The squared absolute value of this sum gives the probability of that configuration. To compute the probability to measure $0$, we add the probabilities of all the configurations with the result bit being $0$. This again will take exponential time, but only polynomial space. $\bbox$
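Here is a toy Python version of this argument: the amplitude $\langle j|U_T\cdots U_1|i\rangle$ is computed by a depth-first recursion over the intermediate configurations, so that only a polynomial amount of memory is ever in use, while the running time is exponential. The three-gate circuit of the path diagrams above is reused as an (arbitrary) example.
\begin{verbatim}
import numpy as np

def amplitude(gates, i, j):
    # <j| U_T ... U_1 |i>, by summing over the intermediate configuration
    # of the last step; depth-first, so only poly space is ever in use.
    if not gates:
        return 1.0 if i == j else 0.0
    last = gates[-1]
    return sum(last[j, m] * amplitude(gates[:-1], i, m)
               for m in range(last.shape[0]))

s = 1 / np.sqrt(2)
H = np.array([[s, s], [s, -s]])
circuit = [np.kron(np.eye(2), H), np.kron(H, np.eye(2)), np.kron(np.eye(2), H)]

# probability that the first bit of the result is 0, starting from |11>
prob0 = sum(abs(amplitude(circuit, 0b11, j))**2 for j in (0b00, 0b01))
print(prob0)                                   # 0.5
\end{verbatim}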
Valiant improved this result\cite{bv} to show that $BQP$ is contained in a complexity class which is weaker than $Pspace$,
namely $P^{\#P}$, which I will not define here. It might still be that quantum computation is much less powerful, but we still do not have a proof for that.
In particular, the relation between $BQP$ and $NP$ is not known yet.
We do understand a lot about the following question: \begin{quote}
Can quantum computation be much more efficient than classical
computation in terms of number of accesses to the input?
\end{quote} Consider accessing the $N$ input bits $X_1,...X_N$,
for a problem or a function via an oracle, i.e. by applying the unitary transformation: \begin{equation}
|i\ra|0\ra \longmapsto |i\ra|X_i\ra \end{equation}
\noindent This unitary transformation corresponds to the classical operation of asking: ``what is the $i$'th bit? `` and getting the answer $X_i$. One might hope to make use of quantum parallelism, and query the oracle by
the superposition $1/\sqrt{N}\sum_i|i\ra|0\ra \longmapsto 1/\sqrt{N}
\sum_i|i\ra|X_i\ra$. In one query to the oracle, the algorithm can read all the $N$ bits, so intuitively no quantum algorithm needs more than one query to the oracle. It turns out that this intuition is completely wrong. It can be shown, using the notion of von Neumann entropy (see \cite{peres})
that there are no more than $log(N)$ bits of information in the
state $1/\sqrt{N}
\sum_i|i\ra|X_i\ra$. Bennett {\it et.al.}\cite{bbbv} show that if the quantum algorithm is supposed to compute the $OR$ of the oracle bits $X_1,...X_N$, then at least $O(\sqrt{N})$ queries are needed. Note that $OR$ is exactly the function computed by Grover's database search. Hence this gives a lower bound of $O(\sqrt{N})$ for database search, and shows that Grover's algorithm is optimal. \begin{theo} Any quantum algorithm that computes $OR(X_1...X_N)$ requires at least $O(\sqrt{N})$ steps. \end{theo}
\noindent The idea of the proof
is that if the number of the queries to the oracle is small,
there exists at least one index $i$ such that the algorithm will be almost indifferent to $X_i$, and so will not distinguish between the case of all bits $0$ and the case that all bits are zero except $X_i=1$. Since the function which the algorithm computes is OR, this is a contradiction.
\noindent Beals {\it et. al.}\cite{beals2} recently generalized the above result
building on classical results by Nisan and Szegedy\cite{nisan}. Beals {\it et.al.} compare the minimal number of queries to the oracle which are needed in a quantum algorithm, with the minimal number of queries which are needed in a classical algorithm. Let us denote by $D(f)$ and $Q(f)$ the minimal number of queries in a classical and quantum algorithm respectively. Beals {\it et.al.}\cite{beals2} show that $D(f)$ is at most polynomial in $Q(f)$.
\begin{theo} \(D(f)=O(Q(f)^6)\) \end{theo} Beals {\it et. al.} use similar methods to give lower bounds on the time required to quantumly compute the functions MAJORITY, PARITY\cite{farhi}, OR and AND:
{~}
$~~~~~~~~~~~~~~~~~~~$\begin{tabular}{|l|l|}\hline OR & \(\Theta(\sqrt{N})\)\\ \hline AND & $\Theta(\sqrt{N})$\\ \hline PARITY & $N/2$\\ \hline MAJORITY & $\Theta(N)$\\ \hline \end{tabular}
{~}
(Here $f=\Theta (g)$ means that $f$ and $g$ behave the same asymptotically.)
The lower bounds are achieved by showing that the number of times the
algorithm is required to access the input is large. This is intuitive, since these functions are very sensitive to their input bits. For example, the string $0^N$ satisfies $OR(0^N)=0$, but flipping any bit will give $OR(0^{N-1}1)=1$.
The meaning of these results, is that in terms of the number of accesses to the input,
quantum algorithms have no more than polynomial advantage
over classical algorithms\cite{ozhi}.
This polynomial relation can give us a hint when looking for
computational problems in which quantum algorithms may have
an exponential advantage over classical algorithms.
These problems will have the property that in a classical algorithm that solves them,
the bottle neck is the information processing, while the number of accesses to the
input can be very small. Factorization is exactly such a problem. $D(f)$ is $log(N)$, because
the algorithm simply needs to read the number $N$ in binary representation,
but the classical information processing takes a number of steps that grows super-polynomially in $\log(N)$ for all known classical algorithms.
Shor's quantum algorithm enables an exponential speed up in the information processing.
An opposite example is the database search. Here, the bottle neck in classical computation
is not the information processing but simply the fact that the size of the input
is very large. Indeed, in this case, quantum computers have only quadratic advantage
over classical computers.
Now that we understand some of the limitations and advantages of the quantum model, let us go on to the subject of quantum noise.
\section{Worries about Decoherence, Precision and Inaccuracies} Learning about the possibilities which lie in quantum computation gave rise to a lot of enthusiasm, but many physicists\cite{landauer1,unroh1,decoherence,barenco7} were at the same time very sceptical about the entire field. The reason was that all quantum algorithms achieve their advantage over classical algorithms when assuming that the gates and wires operate
without any inaccuracies or errors. Unfortunately, in reality we cannot expect any system to be ideal.
Quantum systems in particular tend to lose their quantum nature easily.
Inaccuracies and errors may cause the damage to accumulate exponentially fast during
the time of the computation\cite{decoherence,decoherence2,barenco6,barenco7, miquel1}.
In order to perform computations, one must be able to reduce the effects of
inaccuracies and errors, and to correct the quantum state.
Let us try to understand the types of errors and inaccuracies that might occur in a quantum
computer. The simplest problem is that the gates
perform unitary operations which slightly deviate from the correct ones. Indeed, it was shown by Bernstein and Vazirani\cite{bv} that it suffices that the entries of the gates are precise only up to
$1/n$, where $n$ is the size of the computation. However,
it is not reasonable to assume that inaccuracies decrease as $1/n$. What seems to be reasonable to assume is that the devices we will use in the laboratory have some finite precision, independent of the size of the computation. Errors, that might occur, will behave, presumably, according to the same law of constant probability for error per element per time step.
Perhaps the most severe problem was that of
{\it decoherence}\cite{mott,stern1,zurek1,palma,gardiner}.
Decoherence is the physical process,
in which quantum systems lose some of their quantum characteristics due to
interactions with environment.
Such interactions are inevitable because no system can be kept entirely isolated
from the environment.
The effect of entanglement with the environment can be viewed as if the environment applied
a partial measurement on the system, which caused the wave function to collapse,
with certain probability. This collapse of the wave function seems to be an irreversible process by definition. How can we correct a wave function which has collapsed?
In order to solve the problem of correcting the effects of noise, we have to give a formal description of the noise process. Observe that the most general quantum operation on a system is a unitary operation on the system and its environment. Noise, inaccuracies, and decoherence can all be described in this form. Formally, the model of noise is that in between the time steps, we will allow a ``noise'' operator to operate on the system and an environment. We will assume that the environment is renewed each time step,
so there are no correlations between the noise processes at different times.
Another crucial assumption is that the noise is {\it local}. This means that each qubit interacts with its own environment during the noise process, and that there are no interactions or correlations between these environments. In other words, the noise operator on $n$ qubits, at each time step,
can be written as a tensor product of $n$ local noise operators, each operating on one qubit: \[{\cal E}= {\cal E}_1\otimes{\cal E}_2\otimes\cdots\otimes{\cal E}_n.\]
If the qubits were correlated in the last time step by a quantum gate,
the local noise operator operates on all the qubits participating in one gate together. This noise model assumes that correlations between errors on different qubits can only appear due to the qubits interacting through a gate.
Otherwise, each qubit interacts with its own environment.
The most general noise operator on one qubit is a general unitary transformation on the qubit
and its environment: \begin{eqnarray}
|e\rangle |0\rangle \rightarrow
|e_{0}\rangle|0\rangle + \ |e^b_{0}\rangle|1\rangle \\ \nonumber
|e\rangle|1\rangle \rightarrow
|e_{1}\rangle |1\rangle + \ |e^b_{1}\rangle|0\ra \label{env} \end{eqnarray}
When qubits interact via a gate, the most general noise operation would
be a general unitary transformation on the qubit participating in the gate
and their environments.
When dealing with noise, it is more convenient to use the language of density matrices, instead of vectors in the Hilbert space. I will define them here, so that I can explain the notion of ``amount of noise'' in the system, however they will rarely be used again later in this review. The
density matrix describing a system in the state $|\alpha\ra$ is
$\rho=|\alpha\ra\la \alpha|$. The density matrix of part $A$ of the system can be derived from $\rho$ by tracing out, or integrating, the degrees of freedom which are not in $A$. The unitary operation on the environment and the system, which corresponds to quantum noise, can be viewed as a linear operator on the density matrix describing only the system. As a metric on density matrices we can use the fidelity\cite{wootters1}, or the trace metric\cite{aharonov4}, where the exact definition does not matter now. Two quantum operations are said to be close if when operating on the same density matrix, they generate two close density matrices. We will say that the {\it noise rate} in the system is $\eta$ if each of the local noise operators is within $\eta$ distance from the identity map on density matrices.
We now want to find a way to compute
fault tolerantly in the presence of noise rate $\eta$, where we do not
want to assume any knowledge about the noise operators, except the noise rate.
We will first concentrate on a simple special case, in which the computation consists
of one time step which computes the identity operator on all qubits.
This problem is actually
equivalent to the problem of communicating with noisy channels.
In order to understand the subtle points when trying to communicate with noisy channels,
let us consider the analogous classical case. Classical information is represented by a string of bits instead of qubits, and the error model is simply that each bit flips with probability $\eta$.
Suppose Alice wants to send Bob a string of bits, and the channel which they use is noisy, with noise rate $\eta$, i.e. each bit flips with probability $\eta$.
In order to protect information against noise, Alice can use redundancy. Instead of sending $k$ bits, Alice will encode her bits on more bits, say $m$, such that Bob can apply some recovery operation to get the original $k$ bits. The idea is that to first approximation, most of the bits will not be damaged, and the encoded bits, sometimes called
the {\it logical bits}, can be recovered. The simplest example of a classical code is
the majority code, which encodes one logical bit on three bits. \[0 \longmapsto 0_L=000 ~~~~,~~~~~ 1\longmapsto 1_L=111\] This classical code corrects one error, because if
one bit has flipped, taking the majority vote of the three bits still recovers the logical bit. However, if more than one bit has flipped, the logical bit can no longer be recovered.
If the probability for a bit flip is $\eta$, then the probability that the three bits cannot be recovered, i.e. the effective noise rate $\eta_e$, equals: \[\eta_e=3\eta^2(1-\eta)+\eta^3.\] If we require that we gain some advantage in reliability by the code, then $\eta_e< \eta$ implies a {\it threshold} on $\eta$, which is $\eta<0.5$. If $\eta$ is above the threshold, using the code will only decrease the reliability.
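This threshold is easy to check numerically; the short Python sketch below (an added illustration) evaluates the effective noise rate of the majority code for a few values of $\eta$:
\begin{verbatim}
def eta_eff(eta):
    # probability that 2 or 3 of the three copies flip,
    # i.e. that the majority vote fails
    return 3 * eta**2 * (1 - eta) + eta**3

for eta in (0.01, 0.1, 0.3, 0.5, 0.6):
    print(f"eta={eta:.2f}  eta_eff={eta_eff(eta):.4f}  "
          f"code helps: {eta_eff(eta) < eta}")
\end{verbatim}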
The majority code becomes extremely inefficient when Alice wants to send long messages.
If we require that Bob receives all the logical bits with high probability of being correct,
Alice will have to use an amount of redundancy per bit that grows with the length of the message.
However, there are error correcting codes which map $k$ bits to $m=O(k)$ bits,
such that the probability for Bob to get the original message
of $k$ bits correct is high, even when $k$ tends to infinity.
A very useful class of error correcting codes are the {\it linear} codes,
for which the mapping from $k$ bits to $m$ bits
is linear, and the set of {\it code words},
i.e. the image of the mapping, is a linear subspace of $F_2^m$. A code is said to correct $d$ errors if a recovery
operation exists
even if $d$ bits have flipped. The {\it Hamming distance} between two strings is defined to be the number of coordinates by which the two strings differ. Being able to recover the string after $d$ bit flips have occurred
implies that the distance between two possible code words is at least $2d+1$, so that each word is corrected uniquely.
For an introduction to the subject of classical error correcting codes, see van Lint\cite{lint}.
We define a quantum code in a similar way. The state of $k$ qubits is mapped into the state
of $m$ qubits. The term {\it logical state} is used for the original state of the $k$ qubits.
We say that such a code corrects $d$ errors, if there exists a recovery operation
such that
if not more than $d$ qubits were damaged, the logical state can still be
recovered. It is important here that Bob has no control over the environment
with which the qubits interacted during the noise process. Therefore
we require that the recovery operation does not operate on the environment but merely
on the $m$ qubits carrying the message and perhaps some ancilla qubits. The image of the map in the Hilbert space of $m$ qubits will be called a {\it quantum code}.
Let us now try to construct a quantum code. Suppose that Alice wants to send Bob a qubit in the state
$c_0|0\ra+c_1|1\ra.$ How can she encode the information? One way to do this is simply to send the classical information describing $c_0$ and $c_1$ up to the desired accuracy. We will not be interested in this way, because when Alice wants to send Bob a state of $n$ qubits, the amount of classical bits that needs to be sent grows exponentially with $n$. We will want to encode qubits on qubits, to prevent this exponential overhead. The simplest idea that comes to mind is that
Alice generates a few copies of the same state,
and sends the following state to Bob:
\[c_0|0\ra+c_1|1\ra\longmapsto \left(c_0|0\ra+c_1|1\ra\right)\otimes
\left(c_0|0\ra+c_1|1\ra\right)\otimes
\left(c_0|0\ra+c_1|1\ra\right).\] Then Bob is supposed to apply some majority vote among the qubits. Unfortunately, a quantum majority vote among general quantum states is not a linear
operation. Therefore, simple redundancy will not do. Let us try another
quantum analog of the classical majority code:
\[ c_0|0\ra+c_1|1\ra
\longmapsto c_0|000\ra+c_1|111\ra\] This code turns out to be another bad quantum code. It does not protect the quantum information even against
one error. Consider for example, the local noise operator which operates on the first qubit in the encoded state $c_0|000\ra+c_1|111\ra$. It does nothing to that qubit, but it changes the state of the environment according to whether this bit is $0$ or $1$: \begin{eqnarray}
|0\ra\otimes |e\ra & \longmapsto & |0\ra \otimes |e_0\ra \\
|1\ra\otimes |e\ra & \longmapsto & |1\ra \otimes |e_1\ra \nonumber \end{eqnarray}
Here $\la e_0|e_1\ra =0$. Even though only an identity operation was applied on the first bit, the fact that the environment changed according to the state of this bit is equivalent to the environment {\it measuring } the state of the first qubit. This measurement is an irreversible process. After the noise operation, the environment is no longer in a tensor product with the state.
Bob can only apply local operations on his system, and cannot control the environment. This means that the entanglement
between the state of the first qubit, and the environment cannot be broken during the recovery operation; the coherence of the state is lost. A theorem due to Schumacher and Nielsen\cite{schumacher2}
formalizes this intuition. The claim is that if the reduced density matrix of the environment is different for different code words, then there is no unitary operation that operates on the system and recovers the logical state.
\begin{theo} It is impossible to recover the logical state, if information about it has leaked to the environment via the noise process. \end{theo}
This theorem underlines the main distinction between quantum error correcting codes and classical error correcting codes.
Quantum codes try to {\it hide} information from the environment. In contrast, the protection of classical information from noise is completely orthogonal to the question of hiding secrets. The theorem gives us insight into the basic idea of quantum error correction: the idea is to spread the quantum information over more than $d$ qubits, in a non-local way, such that the environment, which can access only a small number of qubits, can gain no information about the quantum logical state,
and this information will be protected. Now, that we have some intuition about the requirements
from quantum codes, we can proceed to show how to construct such codes.
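Before constructing such codes, it is instructive to see the loss of coherence in a short calculation. The following numpy sketch (added here; the choice $c_0=c_1=1/\sqrt{2}$ and the two dimensional environment are just for illustration) builds the state $c_0|000\ra|e_0\ra+c_1|111\ra|e_1\ra$, traces out the environment, and prints the off-diagonal element of the reduced density matrix between $|000\ra$ and $|111\ra$: it is $1/2$ when the environment keeps no record ($\la e_0|e_1\ra=1$) and $0$ when it keeps a perfect record ($\la e_0|e_1\ra=0$), in which case no operation on the three qubits alone can restore the coherence.
\begin{verbatim}
import numpy as np

c0 = c1 = 1 / np.sqrt(2)

def basis(i, dim=8):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

def reduced_system_state(env_overlap):
    # state c0|000>|e0> + c1|111>|e1>, with <e0|e1> = env_overlap,
    # followed by a partial trace over the single environment qubit
    e0 = np.array([1.0, 0.0])
    e1 = np.array([env_overlap, np.sqrt(1 - env_overlap**2)])
    psi = c0 * np.kron(basis(0b000), e0) + c1 * np.kron(basis(0b111), e1)
    rho = np.outer(psi, psi).reshape(8, 2, 8, 2)
    return np.trace(rho, axis1=1, axis2=3)

for overlap in (1.0, 0.0):
    rho = reduced_system_state(overlap)
    print("overlap", overlap, "coherence", abs(rho[0b000, 0b111]))
\end{verbatim}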
\section{Correcting Quantum Noise} In order to succeed in correcting quantum noise, we need to consider more carefully the process of noise. The first and most crucial step is the discovery that quantum noise can be treated as discrete. In the quantum setting, we assume all qubits undergo a noise of size $\eta$. We want to replace this with the case in which a few qubits are completely damaged, but the rest of the qubits are completely fine. This can be done by rewriting
the effect of a general noise operator. Let the state of $m$ qubits be $|\alpha\ra$. If the noise rate is $\eta$, we can develop the operation of a general
noise operator operating on $|\alpha\ra$
by orders of magnitude of $\eta$: \begin{equation}\label{dis}\begin{array}{l}
{\cal E}_1 {\cal E}_2....{\cal E}_m|\alpha\ra =\\ (I_1+\eta {\cal E'}_1)(I_2+\eta {\cal E'}_2)...
(I_m+\eta {\cal E'}_m)|\alpha\ra=\\
I_1I_2...I_m |\alpha\ra+
\eta \left({\cal E'}_1I_2...I_m+...+I_1I_2...I_{m-1}{\cal E'}_m\right)|\alpha\ra+ ....+
\eta^m\left({\cal E'}_1{\cal E'}_2...{\cal E'}_m\right)|\alpha\ra. \end{array}\end{equation}
The lower orders in $\eta$ correspond to a small number of qubits being operated upon, and higher orders in $\eta$ correspond to more qubits being contaminated. This way of writing the noise operator is the beginning of discretization of the quantum noise, because in each term a qubit is either damaged or not. For small $\eta$, we can neglect higher order terms and
concentrate on the lower orders, where only one or two qubits are damaged out of $m$. A special case of this model is the probabilistic model, in which the local noise operator applies a certain operation with probability $\eta$ and the identity operation with probability $(1-\eta)$. In this model, if the quantum system consists of $m$ qubits, we can assume that with high probability only a few of the qubits went through some noise process. There are noise operators, such as amplitude damping, which do not obey this probabilistic behavior. However, their description by equation (\ref{dis}) shows that we can treat them in the same discrete manner.
The second step is the discretization of the noise operation itself. The most general quantum operation on the $k$'th qubit and its environment is described by:
\begin{eqnarray}
|e\rangle |0_k\rangle \rightarrow
|e_{0}\rangle|0_k\rangle + \ |e^b_{0}\rangle|1_k\rangle \\ \nonumber
|e\rangle|1_k\rangle \rightarrow
|e_{1}\rangle |1_k\rangle + \ |e^b_{1}\rangle|0_k\ra \end{eqnarray}
This operation, applied on any logical state $c_0|0_L\ra +c_1|1_L\ra$, acts as the following operator:
\begin{equation}
(c_0|0_L\ra +c_1|1_L\ra)\rightarrow \Big (|e_+\rangle {\cal I} + |e_-\rangle \sigma_z^k +
|e^b_+\rangle \sigma_x^k - |e^b_-\rangle i\sigma_y^k \Big )(c_0|0_L\ra +c_1|1_L\ra)
\,, \label{pauli} \end{equation} where $\sigma_i^k$ are the Pauli operators acting on the $k$'th qubit: \begin{equation} {\cal I} =\left(\begin{array}{cc}1& 0 \\0 & 1\end{array}\right),
\sigma_x=\left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right), \sigma_y=\left(\begin{array}{cc}0 & -i \\i & 0\end{array}\right), \sigma_z=\left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right). \end{equation} The environment states are
defined as $|e_\pm\rangle=(|e_0\rangle \pm |e_1\rangle)/2$,
$|e^b_{\pm}\rangle=(|e^b_0\rangle \pm |e^b_1\rangle)/2$. The most crucial observations, which enable the correction of quantum errors, hide in equation (\ref{pauli}). The first observation is that everything that can happen to a qubit is composed of four basic operations, so it is enough to correct
for these four errors\cite{bennett14,ekert3,knill3}. This resembles a discrete model more than a continuous one, and gives hope that such discrete errors can be corrected. The second crucial point is
that the states of the environment which are entangled with the system after the operation of noise,
are {\it independent} of $(c_0|0_L\ra +c_1|1_L\ra)$ and depend only on which of the four operations
$ \sigma_i^k$ were applied. In particular, for any superposition of the logical states $|0_L\ra, |1_L\ra$, the operator will look the same.
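The first of these observations can be checked numerically. In the following added sketch, a randomly chosen matrix stands in for a generic one-qubit error operator, and its expansion coefficients in the Pauli basis are obtained from ${\rm tr}(\sigma_i M)/2$:
\begin{verbatim}
import numpy as np

I  = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

rng = np.random.default_rng(0)
E = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # generic operator

coeffs = [np.trace(P @ E) / 2 for P in (I, sx, sy, sz)]
rebuilt = sum(c * P for c, P in zip(coeffs, (I, sx, sy, sz)))
print(np.allclose(rebuilt, E))   # True: E = a*I + b*sx + c*sy + d*sz
\end{verbatim}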
This suggests the following scheme of breaking the entanglement of the system
with the environment. The idea is to measure which
one of the four possible operators was applied. This is called the {\it syndrome} of the error. Measuring the syndrome will collapse the system to a state which is one of the following tensor products of the system and the environment:
\begin{equation}
\Big(|e_+\rangle {\cal I} + |e_-\rangle \sigma_z^k +
|e^b_+\rangle \sigma_x^k - |e^b_-\rangle i\sigma_y^k \Big)
(c_0|0_L\ra+c_1|1_L\ra)\stackrel{measure}{\longrightarrow} \left\{\begin{array}{l}
|e_+\rangle {\cal I}\Big(c_0|0_L\ra+c_1|1_L\ra\Big)\\
|e_-\rangle \sigma_z^k \Big(c_0|0_L\ra+c_1|1_L\ra\Big)\\
|e^b_+\rangle \sigma_x^k\Big(c_0|0_L\ra+c_1|1_L\ra\Big)\\
|e^b_-\rangle i\sigma_y^k\Big(c_0|0_L\ra+c_1|1_L\ra\Big) \end{array}\right. \label{collapse} \end{equation}
After we know which of the operators had occurred, we can simply apply its reverse, and the state $c_0|0_L\ra +c_1|1_L\ra$ will be recovered. This reduces the problem of error correction
to being able to detect which of the four operators had occurred. The operator $\sigma_x$ corresponds to a {\it bit flip}, which is a classical error. This suggests the following idea: If the superposition of the encoded
state, is a sum of strings $|i\ra$ where the $i'$s are strings from a classical code, then bit flips can be detected by applying classical techniques.
Correcting the noise operator $\sigma_z$, which is a {\it phase flip}, seems harder, but an important observation is that $\sigma_z=H \sigma_x H$, where $H$ is the Hadamard transform. Therefore, phase flips correspond to bit flips occurring in the Fourier transform of the state! If the Fourier transform of the state is also a superposition of strings in a classical code, this enables a correction of phase flips by
correcting the bit flips in the Fourier transform basis. This idea was discovered by Calderbank and Shor\cite{calshor} and Steane\cite{steane1}.
A simple version of the recipe they discovered for cooking a quantum code goes as follows.
Let $C\subset F_2^m$ be a linear classical code, which corrects $d$ errors, such that
$C^\perp$, the set of all strings orthogonal over $F_2$
to all vectors in $C$, is strictly contained in $C$.
We look at the cosets of $C^\perp$ in $C$, i.e. we partition
$C$ to non intersecting sets which are translations of $C^\perp$
of the form $C^\perp+v$. The set of vectors in $C$, with the identification of
$w$ with $w'$ when $w-w'\in C^{\perp}$ is called $C/ C^\perp$.
For each $w\in C/ C^\perp$ we associate a code word: \begin{equation}\label{code}
|w\ra \longmapsto |w_L\ra = \sum_{i\in C^\perp} |i+w\ra
\end{equation}
where we omit overall normalization factors.
Note that all the strings which appear in the superposition are vectors in the code $C$.
It is easy to check that
the same is true for the Fourier transform over $Z_2^m$ of the code words, which is achieved
by applying the Hadamard gate, $H,$
on each qubit: \begin{equation}
H\otimes H\otimes ....\otimes H|w_L\ra= \sum_{j\in C} (-1)^{w\cdot j} |j\ra. \end{equation}
The error correction goes as follows. To detect bit flips, we apply the classical error correction according to the classical code $C$ on the states in equation (\ref{code}). This operation
computes the syndrome (in parallel for all strings)
and writes it on some
ancilla qubits. Measuring the ancilla will collapse the state to a state with a specific syndrome, and we can compute according to the result of the measurement which qubits were affected by a bit flip, and apply $NOT$ on those qubits. To detect phase flips we apply Fourier transform on the entire state, and correct bit flips classically according to the code $C$. Then we apply the reverse of the Fourier transform. This operation will correct phase flips. $\sigma_y$ is a combination of a bit flip and a phase flip, and is corrected by the above sequence of error corrections\cite{calshor}.
The number of qubits which can be encoded by this code is the logarithm with base $2$
of the dimension of the space spanned by the code words. To calculate this
dimension, observe
that the code words
for different $w$'s in $C/ C^\perp$ are perpendicular.
The dimension of the quantum code is equal to the
number of different words in $C/ C^\perp$,
which is $2^{dim(C/ C^\perp)}$. Hence the number of qubits which can be encoded
by this quantum code is $dim(C/ C^\perp)$.
Here is an example, due to Steane\cite{steane1}. Steane's code
encodes one qubit on seven qubits, and corrects one error. It is constructed from the classical code known as the Hamming code, which is the subspace of $F_2^7$ spanned by the four vectors: \newline\(C=span\{1010101,0110011,0001111,1111111\}\).
$C^\perp$ is spanned by the three vectors: $1010101,0110011,0001111$.
Since $C$ is of dimension $4$, and $C^\perp$ is of dimension $3$, the number of qubits
which we can encode is $1$. The two code words are:
\begin{eqnarray}
|0_L\ra=|0000000\ra+|1010101\ra+|0110011\ra+|1100110\ra \\ \nonumber
+|0001111\ra+|1011010\ra+|0111100\ra+|1101001\ra\\ \nonumber
|1_L\ra=|1111111\ra+|0101010\ra+|1001100\ra+|0011001\ra \\ \nonumber
+|1110000\ra+|0100101\ra+|1000011\ra+|0010110\ra \end{eqnarray}
Observe that the minimal Hamming distance between two words in $C$ is $3$, so
one bit flip and one phase flip can be corrected.
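The construction can be verified mechanically. The following Python sketch (an added illustration) rebuilds $C$ and $C^\perp$ from the generators listed above, checks that $C^\perp$ is strictly contained in $C$, computes the minimum distance, and prints the strings appearing in the superpositions $|0_L\ra$ and $|1_L\ra$:
\begin{verbatim}
from itertools import product

gens_C = ["1010101", "0110011", "0001111", "1111111"]  # generators of C
gens_Cperp = gens_C[:3]                                # C-perp: first three

def span(gens):
    # all F_2 linear combinations of the generator strings
    words = set()
    for coeffs in product((0, 1), repeat=len(gens)):
        w = [0] * len(gens[0])
        for c, g in zip(coeffs, gens):
            if c:
                w = [(a + int(b)) % 2 for a, b in zip(w, g)]
        words.add("".join(map(str, w)))
    return words

C, Cperp = span(gens_C), span(gens_Cperp)
print("C-perp strictly contained in C:", Cperp < C)
print("minimum distance of C:", min(w.count("1") for w in C if "1" in w))
print("dim C =", len(C).bit_length() - 1,
      " dim C-perp =", len(Cperp).bit_length() - 1)

flip = lambda w: "".join(str(1 - int(ch)) for ch in w)  # add 1111111 over F_2
print("|0_L> superposes:", sorted(Cperp))
print("|1_L> superposes:", sorted(flip(w) for w in Cperp))
\end{verbatim}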
One qubit cannot be encoded on less than $5$ qubits, if we require that
an error correction of one general error can be done. This was shown by
Knill and Laflamme\cite{knill3}. Such a code, called a perfect quantum code,
was found by Bennett et
al\cite{bennett14} and by Laflamme {\it et.al.} \cite{laflamme2}. If we restrict the error,
e.g. only bit flips or only phase flips occur, then one qubit can
be encoded on less than $5$ qubits.
The theory of quantum error correcting codes has further developed.
A group theoretical structure was discovered \cite{calderbank3,gf4,gottesman1,gottesman2,knill3,shor4},
which most of the known quantum error correcting codes obey.
Codes that obey this structure are called stabilizer codes\cite{gottesman1,gottesman2}, and their group theoretical structure gives a recipe for constructing more quantum codes. Quantum codes are used for purposes of quantum communication
with noisy channels, which is out of the scope of this review.
For an overview on the subject of quantum communication
consult Refs. \cite{barnum3,optic} and \cite{lloyd5}. We now have the tools to deal with the question of quantum computation in the presence of noise, which I will discuss in the next section.
\section{Fault Tolerant Computation}
In order to protect quantum computation, the idea is that one should compute on encoded states. The entire operation will occur in the protected subspace, and every once in a while an error correction procedure will be applied, to ensure that errors do not accumulate. The original quantum circuit will be replaced by a quantum circuit which operates on encoded state. Suppose we use a quantum code which encodes one qubit into a block of $5$ qubits. Then in the new circuit, each wire will be replaced by five wires, and the state of the new circuit will encode the state of the
original circuit. In order to apply computation on encoded states, the original gates will be replaced by procedures which apply the corresponding operation. If $\Phi$ is the encoding and $U$ is a quantum gate, then $\Phi(U)$ should be the ``encoded gate'' $U$, which preserves the encoding. In other words, the following diagram should be commutative:
\setlength{\unitlength}{0.030in}
\begin{picture}(40,60)(-50,0)
\put(10,10){\makebox(0,0){$|\alpha\ra$}}
\put(10,40){\makebox(0,0){$\Phi(|\alpha\ra)$}} \put(10,13){\vector(0,1){25}} \put(7,26){\makebox(0,0){$\Phi$}}
\put(50,10){\makebox(0,0){$U|\alpha\ra$}} \put(50,13){\vector(0,1){25}}
\put(50,40){\makebox(0,0){$\Phi(U|\alpha\ra)$}} \put(53,26){\makebox(0,0){$\Phi$}}
\put(17,40){\vector(1,0){21}} \put(19,10){\vector(1,0){25}} \put(30,43){\makebox(0,0){$\Phi(U)$}} \put(30,7){\makebox(0,0){$U$}}
\end{picture}
Hence, using a code $\Phi$, which takes one qubit to $m$ qubits, we replace a quantum circuit by another circuit which operates on encoded states, in this circuit \begin{itemize} \item 1 qubit $\longmapsto $ $m$ qubits \item A gate $U$ $\longmapsto$ $\Phi(U)$ \item Every few time steps, an error correction procedure
is applied. \end{itemize}
However, this naive scheme encounters deep problems. Since quantum gates create interactions between qubits, errors may propagate through the gates. Even a small number of errors might spread to more qubits than the error correction can recover. Moreover, we can no longer assume that the recovery operation is error free. The correction procedure might cause more damage than it recovers. Consider, for example, a code $\Phi$ that takes one qubit to $5$ qubits. A gate on two qubits, $U$,
is replaced in the encoded circuit by the
encoded gate $\Phi(U)$ which operates on $10$ qubits. Let us consider two scenarios:
{~}
{~}
\setlength{\unitlength}{0.030in}
\begin{picture}(40,60)(-10,0)
\put(-2,60){\makebox(0,0){x}}
\put(0,5){\line(1,0){65}} \put(0,10){\line(1,0){65}} \put(0,15){\line(1,0){65}} \put(0,20){\line(1,0){65}} \put(0,25){\line(1,0){65}}
\qbezier[40](12,61)(32,61)(65,61)
\qbezier[10](41,54)(41,51)(48,51) \qbezier[25](48,51)(59,51)(65,51)
\qbezier[10](11,59)(11,56)(13,56) \qbezier[30](13,56)(25,56)(65,56)
\qbezier[30](26,54)(26,21)(28,21) \qbezier[25](28,21)(50,21)(65,21)
\qbezier[20](41,19)(41,11)(43,11) \qbezier[20](43,11)(55,11)(65,11)
\qbezier[25](56,11)(56,44)(58,44) \qbezier[10](58,44)(61,44)(65,44)
\put(67,60){\makebox(0,0){x}} \put(67,50){\makebox(0,0){x}} \put(67,45){\makebox(0,0){x}} \put(67,20){\makebox(0,0){x}} \put(67,55){\makebox(0,0){x}} \put(67,10){\makebox(0,0){x}} \put(0,40){\line(1,0){65}} \put(0,45){\line(1,0){65}} \put(0,50){\line(1,0){65}} \put(0,55){\line(1,0){65}} \put(0,60){\line(1,0){65}}
\put(88,60){\makebox(0,0){x}} \put(10,55){\circle*{2}} \put(10,60){\circle*{2}} \put(10,55){\line(0,1){5}}
\put(10,40){\circle*{2}} \put(10,45){\circle*{2}} \put(10,40){\line(0,1){5}}
\put(10,5){\circle*{2}} \put(10,10){\circle*{2}} \put(10,5){\line(0,1){5}}
\put(10,20){\circle*{2}} \put(10,25){\circle*{2}} \put(10,20){\line(0,1){5}}
\put(25,20){\circle*{2}} \put(25,55){\circle*{2}} \put(25,20){\line(0,1){35}}
\put(40,10){\circle*{2}} \put(40,20){\circle*{2}} \put(40,10){\line(0,1){10}}
\put(40,50){\circle*{2}} \put(40,55){\circle*{2}} \put(40,50){\line(0,1){5}}
\put(55,10){\circle*{2}} \put(55,45){\circle*{2}} \put(55,10){\line(0,1){35}}
\put(30,-6){\makebox(0,0){a}}
\put(90,5){\line(1,0){60}} \put(90,10){\line(1,0){60}} \put(90,15){\line(1,0){60}} \put(90,20){\line(1,0){60}} \put(90,25){\line(1,0){60}}
\put(90,40){\line(1,0){60}} \put(90,45){\line(1,0){60}} \put(90,50){\line(1,0){60}} \put(90,55){\line(1,0){60}} \put(90,60){\line(1,0){60}}
\put(100,25){\circle*{2}} \put(100,60){\circle*{2}} \put(100,25){\line(0,1){35}}
\put(110,20){\circle*{2}} \put(110,55){\circle*{2}} \put(110,20){\line(0,1){35}}
\put(120,15){\circle*{2}} \put(120,50){\circle*{2}} \put(120,15){\line(0,1){35}}
\put(130,10){\circle*{2}} \put(130,45){\circle*{2}} \put(130,10){\line(0,1){35}}
\put(140,5){\circle*{2}} \put(140,40){\circle*{2}} \put(140,5){\line(0,1){35}}
\qbezier[25](100,61)(125,61)(150,61) \qbezier[20](101,59)(100,26)(105,26) \qbezier[20](105,26)(130,26)(150,26)
\put(152,25){\makebox(0,0){x}} \put(152,60){\makebox(0,0){x}}
\put(120,-6){\makebox(0,0){b}}
\end{picture}
{~}
{~}
In figure $(a)$, the encoded gate is a gate array with large connectivity. An error which occurred in the first qubit will propagate through the gates to five more qubits. At the end of the procedure, the number of damaged qubits is too large for any error correction to take care of. Such a procedure will not tolerate even one error! In figure $(b)$, we see an alternative way to implement $\Phi(U)$, in which the error cannot propagate to more than one qubit
in each block. If the gate is encoded such that one error affects only one qubit in each block, we say that the encoded gate is implemented {\it distributively}. Such damage will be corrected during the error corrections. Of course, the error correction procedures should also be implemented in a distributed manner. Otherwise the errors generated during the correction procedure itself will contaminate the state.
Probably the simplest gate to implement distributively is the encoded
NOT gate on Steane's code. The encoded NOT is simply achieved by applying a NOT gate bitwise on each qubit in the code. The implementation of the XOR gate is applied bitwise as well, and the network is the same as that in figure $(b)$, only on $7$ qubits instead of five. However, for other gates much more work needs to be done. Shor\cite{shor3}, showed a way to implement a universal set of gates in this way, where the implementation of some of the gates, and Toffoli's gate in particular, require some hard work and the use of additional ``ancilla'' or ``working''
qubits. Together with the set of universal encoded gates, one also needs an error correction procedure, an encoding procedure to be used in the beginning of the computation, and a decoding procedure to be used at the end. All these procedures should be
implemented distributively, to prevent propagation of errors. A code
which is accompanied by a set of universal gates, encoding, decoding
and correction procedures, all implemented distributively, will be called a {\it quantum computation code}. Since Shor's suggestion, other computation codes were found\cite{aharonov1,knill2}. Gottesman\cite{gottesman2} has generalized these results and showed how to construct a computation code from any
stabilizer code.
Is the encoded circuit more reliable? The {\it effective noise rate}, $\eta_e$ of the encoded circuit, is the probability for an encoded gate to
suffer a number of errors which cannot be corrected. In the case of figure $(b)$, one error is still recoverable, but two are not. The effective noise rate is thus the probability for two or more
errors to occur in $\Phi(U)$. Let $A$ denote the
number of places in the implementation of $\Phi(U)$ where errors can occur. $A$ stands for the {\it area} of $\Phi(U)$. The probability for more than $d$ errors to occur can be bounded
from above, using simple counting arguments: \begin{equation}\label{noise} \eta_e\le \left(\begin{array}{c} A\\d+1\end{array}\right)\eta^{d+1} \end{equation} We will refer to this bound as the {\it effective noise rate.} To make a computation of size $n$ reliable, we need an effective noise rate of the order of $\frac{1}{n}$. Using a code with blocks of $\rm{log}(n)$ qubits, Shor\cite{shor3}
managed to show that
the computation will be reliable, with polynomial cost. However, Shor had to assume that $\eta$ is as small as $O(\frac{1}{\log^4(n)})$. This assumption is not physically reasonable,
since $\eta$ is a parameter of the system, independent of the computation size. The reader is urged to play with the parameters of equation \ref{noise} in order to be convinced that assuming $\eta$ to be constant cannot lead to a polynomially small effective noise rate, as required.
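Following this suggestion, here is a small Python sketch (the values $A=300$ and $d=3$ are illustrative choices, not numbers taken from the text) that evaluates the bound of equation (\ref{noise}); with a constant $\eta$ the result is again a constant, better than $\eta$ but not shrinking as $1/n$:
\begin{verbatim}
from math import comb

def eta_eff(eta, A, d):
    # upper bound on the probability of more than d faults among A locations
    return comb(A, d + 1) * eta ** (d + 1)

eta, A, d = 1e-3, 300, 3
print(eta_eff(eta, A, d))   # ~3e-4: better than eta, but still a constant
\end{verbatim}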
Another idea, which was found independently by several groups \cite{ aharonov1,knill2, kitaev2,gottesman5}
was needed to close the gap, and to show that computation in the presence of a constant noise rate and finite precision is possible. The idea is simple: apply Shor's scheme recursively, gaining a small improvement in the effective noise rate at each level. Each circuit is replaced by a slightly more reliable circuit, which is replaced again by yet another circuit. If each level gains only a slight improvement from $\eta$ to $\eta^{1+\epsilon}$, then the final circuit
which is the one implemented in the laboratory, will have an effective noise rate exponentially smaller: \[\eta\longmapsto \eta^{1+\epsilon}\longmapsto (\eta^{1+\epsilon})^{1+\epsilon}... \longmapsto \eta^{(1+\epsilon)^r}\] The number of levels the recursion should be applied to get a polynomially small effective noise rate is only $O(\log(\log(n)))$. The cost in time and space is thus only polylogarithmic.
A similar concatenation scheme was used in the context of classical self-correcting cellular automata\cite{tsirelson,gacs}.
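The double exponential suppression obtained by concatenation can be seen by iterating the bound of equation (\ref{noise}); the sketch below (again with illustrative parameters) also prints the corresponding threshold, the value of $\eta$ at which the right hand side of equation (\ref{noise}) equals $\eta$ itself, below which each level improves on the previous one.
\begin{verbatim}
from math import comb

A, d = 300, 3
eta_c = comb(A, d + 1) ** (-1 / d)
print("threshold for these parameters:", eta_c)   # roughly 1.4e-3

eta = 1e-4                                        # assumed below the threshold
for level in range(5):
    print(level, eta)
    eta = comb(A, d + 1) * eta ** (d + 1)         # one more level of concatenation
\end{verbatim}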
The requirement that the noise rate is improved
from one level to the next
imposes a threshold requirement on $\eta$:
\[ \left(\begin{array}{c} A\\d+1\end{array}\right)\eta^{d+1} < \eta\]
If $\eta$ satisfies the above requirement, fault tolerant computation can be achieved. This is known as the threshold result\cite{ aharonov1,knill2, kitaev0,gottesman5}:
\begin{theo}\label{fault} {\bf Fault tolerance: } Quantum computation of any length
can be applied efficiently with arbitrary level of confidence, if the noise rate is smaller than the threshold $ \eta_c$. \end{theo}
The {\it threshold} $\eta_c$,
depends on the parameters of the computation code: $A$, the largest procedure's area, and $d$, the number
of errors which the code can correct. Estimations\cite{aharonov1,knill2,gottesman2,gottesman5, knill4, preskill2} of $\eta_c$ are in the range between $ 10^{-4}$ and $10^{-6}$. Presumably the correct threshold is much higher. The highest noise rate in which
fault tolerance is possible is not known yet.
The rigorous proof of the threshold theorem is quite complicated.
To gain some insight we can view the
final $r'$th circuit as a multi scaled system, where computation and error correction are
applied in many scales simultaneously. The largest procedures, computing on the largest (highest level) blocks,
correspond to operations on the logical qubits, i.e. qubits in the original circuit. The smaller procedures, operating on smaller blocks, correspond to computation in lower levels. Note, that each level simulates the error corrections in the previous level, and adds error corrections in the current level. The final circuit, thus, includes error corrections of all the levels, where during the computation of error corrections of larger blocks
smaller blocks of lower levels are being corrected. The lower the level, the more often error corrections of this level are applied, which is in correspondence with the fact that smaller blocks
are more likely to be quickly damaged.
The actual system consists of $m=n\log^c(n)$ qubits (where $n$ is the size of the original circuit), with a Hilbert space
${\cal H}=C^{2^m}$. In this Hilbert space we find a subspace, isomorphic to $C^{2^n}$, which is protected against noise. This subspace is a complicated multi-scaled construction, which is small in dimensions, compared to the Hilbert space of the system, but not negligible. The subspace is protected against noise for almost as long as we wish, and the quantum computation is done exactly in this protected subspace. The rate by which the state increases its distance from this subspace corresponds to the noise rate. The efficiency of the
error correction determines the rate at which the distance from this subspace decreases. The threshold in the noise rate is the point where the distance decreases faster than it increases. In a sense, the situation can be viewed as the operation of a renormalization group,
the change in the noise rate being the renormalization flow.
{~}
\setlength{\unitlength}{0.030in} \begin{picture}(40,0)(-40,0) \put(20,0){\vector(-1,0){20}} \put(20,0){\vector(1,0){80}} \put(0,-2){\line(0,1){4}} \put(20,-2){\line(0,1){4}} \put(100,-2){\line(0,1){4}} \put(19,5){\makebox(0,0){$\eta_c$}}
\end{picture}
{~}
It should be noted that along the proof of fault tolerance, a few implicit assumptions were made \cite{steane7}. The ancilla qubits that we need in the middle of the computation for error correction are assumed to
be prepared in state $|0\ra$ {\it when needed}, and not at the beginning of the computation. This requires the ability to cool part of the system constantly. It was shown by Aharonov {\it et. al.}\cite{aharonov4} that if all operations are unitary, the system keeps warming (in the sense of getting more noise) with no way to cool, and the rate at which the system warms up is {\it exponential}.
Fault tolerant quantum computation requires using non-unitary gates which enables to cool a qubit. This ability to cool qubits is used implicitly in all fault tolerant schemes. Another point which should be mentioned is that
fault tolerant computation uses immense parallelism, i.e. there are many gates which are applied at the same time. Again, this implicit assumption is essential. If operations were sequential, fault tolerant computation would have been impossible, as was shown by Aharonov and Ben-Or\cite{aharonov1}. However, with massive parallelism, a constant supply of cold qubits and a noise rate which is smaller than $\eta_c$, it is possible to perform fault tolerant computation.
The fault tolerance result holds for the general local noise model, as defined before,
and this includes probabilistic collapses, inaccuracies, systematic errors, decoherence, etc. One can compute fault tolerantly
also with quantum circuits which are
allowed to operate only on nearest neighbor qubits\cite{aharonov1} (in this case the threshold $\eta_c$ will be smaller, because the procedures are bigger when only nearest neighbor interactions are allowed). In a sense, the question of noisy quantum computation is theoretically closed. But a question still ponders our minds: are the assumptions on the noise correct? Dealing with non-local noise is an open and challenging problem.
\section{Conclusions and Fundamental Questions} We cannot foresee which goals will be
achieved if quantum computers become the
next step in the evolution of computation\cite{haroche}. This question involves two directions of research. From the negative side,
we are still very far from understanding the limitations of quantum computers as computation devices. It is possible that quantum Fourier transforms are the only truly
powerful tool in quantum computation. Up to now, this is the only tool which implies exponential advantage over classical algorithms. However,
such a strong statement of the uniqueness of the Fourier transform is not known. Taking a more positive view, the goal is to find other techniques in addition to the Fourier transform. One of the main directions of research in quantum algorithms is finding efficient solutions for a number of problems which are not known to be NP-complete, but do not have a known efficient classical solution. Such is the problem of checking whether two graphs are isomorphic, known as {\it Graph Isomorphism}. Another important direction in quantum algorithms is finding algorithms that simulate quantum physical systems more efficiently. The field of quantum complexity is still in its infancy.
Hand in hand with the complexity questions arise deep fundamental questions about quantum physics. The computational power of all classical systems seems to be equivalent, whereas quantum complexity, in light of the above results,
seems inherently different.
If it is true that quantum systems are exponentially better
computation devices than classical systems, this
can give rise to a new definition of quantum versus classical
physics, and might lead to a change in the
way we understand the transition
from quantum to classical physics. The ``phase diagram'' of quantum versus classical behavior can be viewed as follows:
{~}
\setlength{\unitlength}{0.00083300in}
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}} \expandafter\x\fmtname xxxxxx\relax \def\y{splain} \ifx\x\y \gdef\SetFigFont#1#2#3{
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname} \else \gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname} \fi \fi\endgroup \begin{picture}(6324,2499)(1639,-4648) \thicklines \put(2026,-3436){\line( 1, 0){5550}} \put(2401,-3361){\line( 0,-1){150}} \put(2251,-3736){\line( 0,-1){450}} \multiput(2101,-3886)(6.00000,6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(2251,-3736)(6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(2026,-3361){\line( 0,-1){150}} \put(7201,-3361){\line( 0,-1){225}} \put(7576,-3361){\line( 0,-1){225}} \put(7351,-3661){\line( 0,-1){450}} \multiput(7201,-3811)(6.00000,6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(7351,-3661)(6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(5101,-2536)(6.00000,-6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \multiput(5101,-2836)(6.00000,6.00000){26}{\makebox(6.6667,10.0000){\SetFigFont{7}{8.4}{rm}.}} \put(4351,-2686){\line( 1, 0){900}} \put(1651,-4636){\framebox(6300,2475){}} \put(1801,-4411){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}QUANTUM}}} \put(6751,-4336){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}CLASSICAL}}} \put(4501,-4186){\makebox(0,0)[lb]{\smash{\SetFigFont{34}{40.8}{rm}?}}} \put(4351,-2461){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}noise rate}}} \put(1951,-3361){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}0}}} \put(2251,-3361){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}0.0001}}} \put(6976,-3361){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}0.96}}} \put(7501,-3361){\makebox(0,0)[lb]{\smash{\SetFigFont{14}{16.8}{rm}1}}} \end{picture}
{~}
Changing the noise rate, the system transforms from quantum behavior to classical behavior. As was shown by Aharonov and Ben-Or\cite{aharonov2}, there is a
constant $\eta$ bounded away from $1$ where the system cannot perform quantum computation at all. Fault tolerance shows that there is a constant $\eta$ bounded away from $0$ for which quantum systems exhibit their full quantum computation power. The regimes are characterized by the range of quantum entanglement: in the quantum regime this range is macroscopic, and quantum computation is possible. In the right, ``classical'', regime, entanglement is confined to microscopic clusters. A very interesting question is how the transition between the two regimes occurs. In \cite{aharonov2} we gave indications
that the transition is sharp and has many characteristics of a
phase transition (see also \cite{tsirelson1}). The order parameter corresponds to the range of entanglement, or to the size of entangled clusters of qubits. Unfortunately, we have not yet been able to prove the existence of such a phase transition, presumably because we lack the correct definition of an order parameter that
quantifies ``quantumness over large scales''. Nevertheless, I conjecture that the transition from macroscopic quantum behavior to macroscopic classical behavior occurs as a phase transition. The idea that the transition from quantum to classical physics is abrupt stands in contrast to the standard view of a gradual transition due to decoherence\cite{zurek1}. I believe that the elusive frontier between quantum and classical physics will be better understood if we gain a better understanding of the transition from quantum to classical computational behavior.
An interesting conclusion of the threshold result is that one dimensional quantum systems can exhibit a non trivial phase transition at a critical noise rate $\eta_c$, below which
the mixing time of the system is exponential,
but above which the system mixes rapidly. This phase transition might be different from the transition from classical to quantum behavior, or it might be the same. This existence of a one dimensional phase transition is interesting because
one dimensional phase transitions are rare, also in classical systems, though there exist several complicated examples\cite{mukamel, gacs1}.
Perhaps a vague, but
deeper, and more thought-provoking question is that of the postulates of quantum mechanics. A physical realization of the model would enable a thorough test of
some of the more philosophical aspects of quantum theory, such as understanding the collapse of the wave function, the process of measurement, and other elements which are used as everyday tools in quantum algorithms. It might be that the realization of quantum computation will reveal the fact that what we understand in quantum physics
is merely an approximation, holding only for a small number of
particles, which we extrapolated to
many particles. Such questions are appealing motivations for this extremely challenging task of
realizing the quantum computation model physically. It seems that successes, and also failures, in achieving this ambitious task, will open new exciting paths and possibilities
in both computer science and fundamental physics.
\section{Acknowledgments} I am most grateful to Michael Ben Or who introduced me to this beautiful subject. We had a lot of fascinating discussions together on many of the things I presented.
Noam Nisan taught me a lot simply by asking the right questions, with his clear point of view. It was a pleasure not to know the answers. It was great fun to argue with Avi Wigderson on quantum computation and other things. It is a special pleasure to thank my colleagues Peter Hoyer, Lidror Troyanski, Ran Raz and particularly Michael Nielsen. They all
read the manuscript, corrected
many errors, and had extremely helpful suggestions.
Finally, I thank Ehud Friedgut for direct and
indirect contributions to this review.
\footnotesize
\end{document} | arXiv |
Effect of textile dyes on activity and differential regulation of laccase genes from Pleurotus ostreatus grown in submerged fermentation
Verónica Garrido-Bazán1,
Maura Téllez-Téllez2,
Alfredo Herrera-Estrella3,
Gerardo Díaz-Godínez4,
Soley Nava-Galicia1,
Miguel Ángel Villalobos-López1,
Analilia Arroyo-Becerra1 &
Martha Bibbins-Martínez1
AMB Express volume 6, Article number: 93 (2016)
Abstract

This research was conducted to extend the knowledge of the differential regulation of laccase genes in response to dyes. In order to accomplish this, we analyzed both the expression of five laccase genes by real-time RT-qPCR and the laccase activity and isoform patterns during the time course of a Pleurotus ostreatus submerged fermentation supplemented with either acetyl yellow G (AYG) or remazol brilliant blue R (RBBR) dyes. For the purpose of obtaining a stable reference gene for optimal normalization of RT-quantitative PCR gene expression assays, we tested four candidate reference genes. As a result of this analysis, gpd was selected as the reference index for data normalization. The addition of dyes had an induction effect on the enzymatic activity and also modified the zymogram profile. Fermentation with RBBR showed the highest laccase activity and number of isoforms along the course of the fermentation. Laccase gene expression profiles displayed up/down regulation along the fermentation time in four laccase genes (pox4, pox3, poxa1b and pox2), while pox1 was not expressed in either of the fermentation conditions. AYG addition caused the highest induction and repression levels for the genes pox3 and poxa1b, respectively. The expression levels of all genes in the presence of RBBR were lower than with AYG, and in both conditions this response was growth-time dependent. These results show the influence of the nature of dyes on the induction level of laccase activity and on the differential regulation of laccase gene expression in P. ostreatus.
Introduction

Of all industrial sector effluents, wastewater from the textile industry is classified as one of the most polluting, in terms of both volume and composition (Vandevivere et al. 1998; López et al. 2006). Inefficient industrial textile processes produce residual water with a high concentration of synthetic dyes (Asgher et al. 2009). Currently, more than 10,000 different dyes and pigments are used in the dyeing and printing industry worldwide. World production has been estimated at 800,000 tons per year, with at least 10–15 % of the pigments used discharged into the environment through wastewater (Levin et al. 2004; Palmieri et al. 2005; Revankar and Lele 2007). Many textile dyes are believed to be toxic or carcinogenic (Hamedaani et al. 2007). These compounds are considered xenobiotic and recalcitrant and, in most cases, are very difficult to remove.
Owing to their ligninolytic enzymes, comprising mainly laccases, manganese peroxidases, lignin peroxidases and veratryl alcohol oxidases (Wesenberg et al. 2003; Swamy and Ramsay 1999; Tavčar et al. 2006), white rot fungi are capable of degrading a variety of compounds, including textile dyes (López et al. 2006).
Laccases (benzenediol: oxygen oxidoreductases, EC 1.10.3.2) are glycoproteins classified as multi-copper oxidases that use the distinctive redox ability of copper ions to concomitantly catalyze the oxidation of a wide range of aromatic substrates with the reduction of molecular oxygen to water (Thurston 1994; Solomon et al. 1996). Given their high and non-specific oxidation potential laccases are biocatalysts useful for a wide range of biotechnology applications. These enzymes are used efficiently in the detoxification of the wastewater produced in pulp bleaching processes (Bajpai 1999), in the treatment of wastewater from industrial plants (Durán and Esposito 2000), the enzymatic modification of fibers and the decoloration of effluent (Abadulla et al. 2000).
Pleurotus ostreatus has been reported to contain several laccase isoenzymes encoded by multigene families (Giardina et al. 2010). These isoenzymes often present differences in terms of their catalytic properties, regulation mechanisms and location. The transcriptional activity of laccase-encoding genes is often regulated by metal ions (Collins and Dobson 1997; Galhaup et al. 2002), aromatic compounds or lignin derivatives (Terrón et al. 2004), as well as the source and concentration of nitrogen (Collins and Dobson 1997) and/or carbon (Soden and Dobson 2001). The above-mentioned factors may act synergistically or antagonistically (Baldrian and Gabriel 2002; Faraco et al. 2003; Periasamy and Palvannan 2010).
The physiological mechanisms that control fungal development are also known to modulate the expression levels of laccase isoenzymes, since some isoenzymes have been observed during the exponential growth phase, and could participate in the degradation of the substrate. Other isoenzymes have been found during the stationary phase, which may be related to both morphogenesis processes and spore pigmentation (Temp and Eggert 1999; Lettera et al. 2010). Several reports indicate that laccases produced by P. ostreatus are the main enzymes that mediate dye decolourisation, due to their enzymatic properties and also their potential for degrading dyes of diverse chemical structure, therefore the development of processes based on laccases represent an effective tool for application in the textile effluent degradation (Palmieri et al. 2005).
The main objective of this research was to study the effect of chemically different dyes on the production and the differential regulation of laccase genes from P. ostreatus.
Organism

A strain of P. ostreatus from the American Type Culture Collection (ATCC 32783) (Manassas, Virginia, USA) was used.
Submerged cultures
The fermentations were performed in 125 mL Erlenmeyer flasks containing 50 mL of basal medium (BM) of the following composition (g/L): yeast extract, 5; glucose, 10; K2HPO4, 0.4; ZnSO4·7H2O, 0.001; KH2PO4, 0.6; FeSO4·7H2O, 0.05; MnSO4·H2O, 0.05; MgSO4·7H2O, 0.5; CuSO4·7H2O, 0.25 (Téllez-Téllez et al. 2008). Three fermentations of P. ostreatus grown in basal medium (BMF) and in the presence of either 500 ppm of RBBR (remazol brilliant blue R dye, SIGMA) (BBF) or 500 ppm of AYG (acetyl yellow G, ALDRICH) (AYF) were established. Each flask was inoculated with three mycelial plugs taken from the periphery of P. ostreatus colonies grown for 7 days at 25 °C in Petri dishes containing potato dextrose agar. The cultures were incubated at 25 °C for 23 days on a rotary shaker at 120 rpm. Three flasks were taken as samples at 120, 168, 240, 288, 336, 408, 480 and 576 h of fermentation. The enzymatic extract (EE) was obtained by filtration of the cultures using filter paper (Whatman No. 4), and stored at −20 °C until it was analyzed, while the mycelium was rinsed with 0.9 % NaCl and stored at −70 °C until the total RNA extraction procedure was conducted or used for biomass (X) determination as difference of dry weight (g/L) (Additional file 1: Figure S1). Experiments were performed in triplicate, with the values shown being representative of at least two of the experiments.
Enzyme assays
Laccase activity was determined by measuring changes in absorbance at 468 nm, using the extinction coefficient ε468 = 35,645 M⁻¹ cm⁻¹ and 2,6-dimethoxyphenol (DMP) as the substrate. The assay mixture contained 950 μl of substrate (2 mM DMP in 0.1 M phosphate buffer at pH 6.5) and 50 μl EE, and was incubated at 40 °C for 1 min (Téllez-Téllez et al. 2008). The activity was expressed in international units (U/mL).
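For orientation, the conversion from the measured absorbance change to enzyme activity can be sketched in Python as follows (an added illustration; the ΔA/min value of 0.5 is hypothetical, one unit is taken as 1 µmol of DMP oxidized per minute, and a 1 cm light path and the 950 µl + 50 µl assay volumes described above are assumed):

def laccase_activity_u_per_ml(delta_abs_per_min,
                              extinction=35645.0,  # M^-1 cm^-1, DMP at 468 nm
                              path_cm=1.0,
                              v_total_ml=1.0,      # 950 ul substrate + 50 ul EE
                              v_sample_ml=0.05):
    # international units (umol substrate oxidized per min) per mL of extract
    rate_m_per_min = delta_abs_per_min / (extinction * path_cm)   # Beer-Lambert
    umol_per_min = rate_m_per_min * 1e6 * (v_total_ml / 1000.0)   # in the cuvette
    return umol_per_min / v_sample_ml

print(laccase_activity_u_per_ml(0.5))   # hypothetical dA/min of 0.5 -> ~0.28 U/mL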
Zymogram analysis
Laccase activity was also detected through zymograms, using the modified SDS-PAGE technique (Laemmli 1970). The running gel contained 100 g acrylamide/L and 27 g bis-acrylamide/L. The stacking gel contained 40 g acrylamide/L and 27 g bis-acrylamide/L. Each EE (approx. 20 µl) was mixed with sample buffer without a reducing agent for the disulphide bonds. The samples were loaded onto Mini-Protean III electrophoresis system (BioRad) gels (0.75 mm thick), and 150 V was applied for 1–1.25 h. After the electrophoresis, the gels were washed with deionized water on an orbital shaker (20–30 rpm) for 30 min, with the water changed every 10 min to remove SDS. Finally, the gels were incubated at room temperature in substrate solution (2 mM DMP). Laccase activity bands from the oxidation of the substrate appeared on the gel after approximately 1 h (Téllez-Téllez et al. 2008).
Nucleic acid extraction and real time qPCR
Total RNA was isolated from frozen mycelia harvested at different fermentation times, using TRIZOL (Invitrogen) extraction, and was spectrophotometrically quantified by determining the absorbance ratio at OD260/280. RNA was treated with RNAse-free DNase I (Invitrogen). The final RNA concentration was set to 500 ng/µl. Subsequently, 1 µg of total RNA was reverse-transcribed into cDNA in a 20 µl volume using the SuperScript™II Reverse Transcriptase (Invitrogen) by following the manufacturer protocol.
The procedure for reverse transcription quantitative PCR experiments was adapted from Castanera et al. (2015). RT-qPCRs were performed in a StepOnePlus® (Applied Biosystems), using SYBR green dye to detect product amplification. A set of specific primers was designed for the amplification of the transcript from the four laccase genes identified in the genome (Table 1). Primers corresponding to the panel of reference genes were designed using the filtered model transcript sequence of PC15 (v2.0) (http://www.jgi.doe.gov) and Primer Express® 3.0 (Applied Biosystems) (Additional file 1: Table S1). With a final volume of 20 µl, each reaction mixture contained 10 µl Maxima Probe/ROX qPCR Master Mix (2X) (ThermoScientific), 200 nM forward and reverse primer, and 1 µl of a 1:10 dilution of the RT product. Amplifications were performed with an initial 5 min step at 95 °C followed by 40 cycles of denaturation at 95 °C for 30 s and primer annealing and extension at 60 °C for 40 s. The melting curves ranged from 60 to 95 °C and temperature was increased in increments of 0.3 °C. StepOne software was used to confirm the occurrence of specific amplification peaks. All RT-qPCR reactions were carried out in triplicate, with template-free negative controls run in parallel. The crossing-point (Cp) values and relative fluorescence units were recorded, with the latter used to calculate amplification efficiencies via linear regression. The PCR efficiency (E) and the regression coefficient (R²) were calculated using the slope of the standard curve according to the equation E = [10^(−1/slope) − 1] × 100 %.
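As a worked illustration of this efficiency formula, the short sketch below fits a standard curve and applies E = (10^(−1/slope) − 1) × 100 %; the dilution series and Cp values are invented for the example.

```python
# Worked example of the standard-curve efficiency calculation,
# E = (10^(-1/slope) - 1) x 100 %. The dilution series and Cp values below
# are invented for illustration only.
import numpy as np


def efficiency_from_standard_curve(log10_dilutions, cp_values):
    """Return (slope, efficiency %) from Cp values regressed on log10 dilution."""
    slope, _intercept = np.polyfit(log10_dilutions, cp_values, 1)
    efficiency = (10 ** (-1.0 / slope) - 1.0) * 100.0
    return slope, efficiency


# Ten-fold dilution series of the RT product and the Cp values it produced.
log10_dil = np.array([0, -1, -2, -3, -4], dtype=float)
cps = np.array([18.1, 21.4, 24.8, 28.1, 31.5])

slope, eff = efficiency_from_standard_curve(log10_dil, cps)
print(f"slope = {slope:.2f}, E = {eff:.1f} %")   # ~ -3.35 and ~99 %
```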
Table 1 Primer sequence, product length and amplification efficiencies used in this study
Reference genes, quantification of RT-qPCR data, and statistical analyses
Four genes of different functional class were selected as reference candidates. The gene panel used in this study contained housekeeping genes, such as glyceraldehyde 3-phosphate dehydrogenase (gpd), β-tubulin (tub), actin (act) and peptidase (pep) (Additional file 1: Table S1). The expression of the genes was evaluated in six samples corresponding to our experimental conditions. GeNorm (Vandesompele et al. 2002) and NormFinder (Andersen et al. 2004) algorithms were applied to rank the four candidates according to their expression stability, and a reference index consisting of the geometric mean of the best-performing candidates was used for RT-qPCR data normalization.
Data pre-processing was performed using Microsoft Excel 2007 and included efficiency and reference gene normalization. The fold expression was calculated by the 2−ΔΔCt method as described by Pfaffl (2001) (Eq. 1).
$$\text{ratio =}\frac{(E_{\text{target}})^{\Delta{\text{Cp}}_{\text{target}}(\text{control}-\text{sample})}}{(E_{\text{ref}})^{\Delta{\text{Cp}}_{\text{ref}}(\text{control}-\text{sample})}}$$
In the above equation, E_target is the real-time PCR efficiency of the target gene transcript; E_ref is the real-time PCR efficiency of the reference gene transcript; ΔCp_target is the Cp deviation (control − sample) of the target gene transcript, and ΔCp_ref is the corresponding Cp deviation for the reference gene transcript. All other multiple comparisons were performed using the statistical analysis software SAS 2002 (SAS Institute Inc., Cary, NC, USA).
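To make Eq. 1 concrete, a minimal sketch of the calculation is given below; the efficiencies (expressed as amplification factors, where 2.0 corresponds to 100 %) and the Cp deviations are hypothetical values chosen only for illustration.

```python
# Hedged sketch of the Pfaffl relative-expression calculation (Eq. 1).
# E values are amplification factors (2.0 = 100 % efficiency); the example
# Cp deviations are made up for illustration only.

def pfaffl_ratio(e_target: float, dcp_target: float,
                 e_ref: float, dcp_ref: float) -> float:
    """Relative expression ratio of the target gene versus the reference gene.

    dcp_* = Cp(control) - Cp(sample) for the target / reference transcript.
    """
    return (e_target ** dcp_target) / (e_ref ** dcp_ref)


if __name__ == "__main__":
    # Target Cp drops by 3 cycles in the induced sample, reference by 0.2.
    ratio = pfaffl_ratio(e_target=2.0, dcp_target=3.0,
                         e_ref=2.0, dcp_ref=0.2)
    print(f"fold change = {ratio:.2f}")   # ~7-fold induction
```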
Effect of dyes on laccase activity
Pleurotus ostreatus was grown in liquid fermentation at 25 °C for 23 days. Samples were taken at regular intervals and filtered, and the supernatant obtained was then used to measure laccase activity. Figure 1 shows the laccase activity, which increased from the beginning of fermentation in BMF, with maximal activity observed at 408 h (239 U/mL). In BBF, the activity was low from the beginning of the fermentation until 336 h (approx. 25 U/mL), after which the activity increased and reached its peak at 480 h (452 U/mL), while, in AYF, the activity was low until 168 h of fermentation (approx. 11 U/mL), with the maximal activity value being 410 U/mL at 576 h.
Time course of extracellular laccase activity of P. ostreatus obtained in submerged fermentations in BMF (■), BBF (●) and AYF (▲) media. The error bars represent the standard deviation of three different fermentation runs
Effect of dyes on laccase isoenzymes production
Laccase isoenzymes produced during the fermentation process are shown in Fig. 2. Two to four isoenzymes were observed in enzymatic extracts (EEs) obtained from the BMF (Fig. 2a). Figure 2b shows the laccase isoenzyme profile obtained in BBF, in which two isoenzymes were observed in EE collected at 120 and 168 h, four isoenzymes at 240 and 288 h and three at the later stages of the fermentation.
Zymograms of laccase isoenzymes produced by P. ostreatus grown in basal medium BMF (a) and in the presence of either 500 ppm of remazol brilliant blue R dye BBF (b) or 500 ppm of acetyl yellow G dye AYF (c)
The growth of the fungus in AYF resulted in EEs with fewer isoenzymes than BMF and BBF, with one isoenzyme observed at 288 and 336 h of fermentation, two isoenzymes observed at 168, 408, 480 and 576 h, and only three isoenzymes observed at 240 h (Fig. 2c).
Identification and validation of reference genes for qPCR analysis
To evaluate the stability of the reference genes across experimental conditions, the transcript abundance of the four candidate reference genes was assessed from their mean Ct values (Additional file 1: Figure S2). The GeNorm algorithm identified gpd and act as the most stable genes across all the conditions assayed, displaying an expression stability value (M-value) of 0.213. In addition, the NormFinder algorithm identified gpd as the most stable gene (Additional file 1: Figure S3). As a consequence of this analysis, gpd was selected as the reference index for data normalization.
Effect of dyes on the expression of laccase genes
The expression of laccase genes pox1, pox2, pox3, pox4 and poxa1b in response to the addition of dyes was evaluated at the transcriptional level. First, P. ostreatus was grown in BMF (reference condition). Then we monitored by RT-qPCR, using specific primers, the time course (120–552 h) of transcriptional changes of the five laccase genes in both fermentations supplemented with dyes (BBF and AYF). The 2−ΔΔCt method (Pfaffl 2001) was applied to the transcriptional analysis to quantify the relative expression of each gene with respect to the corresponding un-induced value for the given time point (reference condition). Figure 3a, b shows the laccase gene expression profiles where, in general terms, the RBBR and AYG dyes display up/down regulation along the fermentation time in four laccase genes (pox4, pox3, poxa1b and pox2), while pox1 was not expressed in either of the two fermentation conditions. AYG addition caused the highest induction in the transcript level of gene pox3, which became several orders of magnitude higher than that of the other analyzed genes (up to a 12-fold increase), followed by pox4 (tenfold increase) and pox2 (ninefold increase), all at 408 h (Fig. 3a). On the other hand, poxa1b showed the highest down-regulation (−6.39-fold) at 240 h and remained almost constant along the fermentation time. The expression levels for all genes in the presence of RBBR (Fig. 3b) were lower than with AYG; pox4 showed the highest induction (6.85- and 6.47-fold) at 408 and 144 h respectively, followed by pox3 (5.89-fold) at 408 h, and pox2 (5.61-fold) and poxa1b (4.60-fold), both at 144 h.
Expression levels of four laccase genes from P. ostreatus during time course fermentation with the addition of acetyl yellow G dye (a) and remazol brilliant blue R dye (b). Error bars represent the standard deviations of the means of three independent amplifications, and asterisks mean that the changes referred to the basal condition are statistically significant at p < 0.05
Percentage contribution of each pox gene to the global laccase expression
In order to analyze the contribution of each pox gene to the total relative expression over the time course of fermentation, their transcriptional levels were also shown as a percentage of the total expression (Fig. 4). For the AYG fermentation, pox2 represents 36.7 % of the total laccase expression, followed by pox4 (32.31 %), pox3 (28.96 %) and poxa1b (1.9 %) (Fig. 4a); for the RBBR fermentation, the contribution was pox3 (37.5 %), pox2 (29.46 %), pox4 (25.22 %) and poxa1b (7.59 %) (Fig. 4b). It is clear that changing the type of dye in the fermentation led to different transcriptional profiles for each laccase gene, with up- and down-regulation depending on the fermentation sampling time. Furthermore, the addition of dyes to the culture medium caused a strong induction of pox3 and pox4 and, to a lesser extent, of pox2 and poxa1b, this response being growth-time dependent. On the other hand, the transcriptional levels of genes pox2, pox3 and pox4 represent the main contribution to the global laccase expression.
Relative expression of laccase genes as percentage of the total expression in AYF (a) and BBF (b)
The highest laccase activity was produced in the stationary growth phase of the fungus in all conditions applied in this study. However, the addition of either remazol brilliant blue R or acetyl yellow G dye had an induction effect on the enzymatic activity, which almost doubled for both dyes in comparison with the basal fermentation. The addition of phenolic and aromatic compounds, such as the dyes used in this study, has been proven to increase laccase production, given that laccase induction by phenolic substances is a putative response mechanism developed by fungi against toxic compounds (Pezzella et al. 2013; Casas et al. 2013). On the other hand, the induction level mediated by dyes has been reported to be highly sensitive to small differences in their chemical structures (Vanhulle et al. 2007). In this work we used azo (RBBR) and sulphonic (AYG) dyes, and a differential effect was observed on both laccase activity and gene expression level.
The appearance of laccase gene families is very common in fungi, with the synthesis and secretion of each family member strongly influenced by nutrient level, culture conditions, and developmental stage. While the genome of P. ostreatus contains 11 laccase encoding genes, to date only six laccase isoenzymes have been isolated and characterized (Pezzella et al. 2013): POX2 (59 kDa by SDS-PAGE) (Palmieri et al. 1993), POXA1w (57 kDa) (Palmieri et al. 1997), POXA1b (62 kDa) (Giardina et al. 1999), POXA2 (61 kDa) (Palmieri et al. 1997), and POXA3a and POXA3b (67 kDa) (Palmieri et al. 2003). POX2 is a typical laccase and is the most widely produced under different growth conditions (Palmieri et al. 1993). POXA1b is a neutral blue laccase, very stable at alkaline pH (Giardina et al. 1999) and with a high redox potential (Garzillo et al. 2001). Other laccase encoding genes have been identified in P. ostreatus, such as pox3, pox4 and pox5, though their corresponding proteins have never been isolated in culture broth. The heterologous expression of these genes in the yeasts S. cerevisiae and K. lactis produced very unstable laccases with expression problems (Pezzella et al. 2009).
The profile of laccase isoenzymes can be the result of either the expression of different genes or post-translational modifications. As shown in Fig. 2, zymograms taken during this research showed up to four isoenzymes; however, the addition of dyes also modified the zymographic pattern, and the fermentation conducted in the presence of the remazol brilliant blue R dye showed both the highest laccase activity and the highest number of isoforms. It has been reported that P. ostreatus grown on agar with starch as a carbon source presented two isoenzymes at an initial pH of 6.5 (Téllez-Téllez et al. 2005). Téllez-Téllez et al. (2008) grew P. ostreatus in submerged fermentation at pH 6.5 and observed two and four isoenzymes during the exponential and stationary growth phases, respectively. Recently, the number of P. ostreatus laccase isoenzymes in buffered and non-buffered media was determined with the initial pH adjusted to 3.5 in both culture media. One laccase isoenzyme was produced in both media during the entire fermentation process. In the non-buffered medium, an additional isoenzyme of lower molecular weight than that produced throughout the fermentation appeared at the beginning of the exponential phase of growth, when the pH reached a value of 6.5 (Díaz et al. 2011).
Laccase gene transcription is regulated by metal ions, various aromatic compounds related to lignin or lignin derivatives, nitrogen and carbon sources, factors which cause specific laccase transcriptional profiles with variations among not only different species but also different isoforms in the same strain (Piscitelli et al. 2011; Pezzella et al. 2013). As expected, the transcriptional profiles differ in this study depending on the condition tested. However, the addition of dyes resulted in the induction of all genes evaluated except gene pox1, an effect which was observed from the beginning of the fermentation onwards.
In the case of gene pox1, there was no amplification in any fermentation, with some reports indicating that the laccase isoenzyme gene pox1 is closely related to pox2, since their cDNA sequences show 84 % similarity (Giardina et al. 1995). Pezzella et al. (2013) reported that, due to the high sequence similarity between pox1 and pox2, it is difficult to distinguish their expression profiles in P. ostreatus. This means that transcription levels are considered as the sum of both genes (pox1/pox2); however, a more comprehensive analysis would be needed in order to arrive at this conclusion in this study. On the other hand, while pox2 was amplified in all conditions evaluated in this study, the dyes induced its expression. It has been reported that the promoter of pox2 contains at least eight putative metal-responsive elements (MRE), which leads to a strong transcriptional induction being observed in the copper-supplemented culture (Moussa 2009; Amore et al. 2012). Given that the basal medium used in this research was also supplemented with copper, this might explain the transcriptional profile obtained in the basal fermentation for gene pox2. Furthermore, the promoter region of the pox2 gene also shows a possible xenobiotic responsive element (XRE), with industrial dyes being considered xenobiotics. Pezzella et al. (2013) reported that pox2 may fulfill this role during vegetative growth, which might explain why expression was observed during the complete fermentation process. In addition, POX2 has been reported to be the enzyme most abundantly produced under several growth conditions (Palmieri et al. 2005; Castanera et al. 2012; Parenti et al. 2013). The most marked effects of the dyes on transcriptional induction were observed for the pox3 and pox4 genes, with pox3 presenting the highest induction level (up to a 12-fold increase). Interestingly, the promoter region of pox3 presented three putative XREs, compared with just one for pox2 and none for pox4 and poxa1b. In addition, nucleotide sequence analysis predicted the presence of five and one MREs in pox3 and pox4, respectively (Pezzella et al. 2009). These results may suggest a dye-responsive induction pathway. However, the location and orientation of these and other responsive elements may also play a role in the dye response. The close relationship between the pox2, pox1 and pox4 genes has been reported, where they present exactly the same gene organization, while, on the contrary, pox3 exhibits a very different structure from that of the other family members (Janusz et al. 2013; Pezzella et al. 2009). poxa1b showed the highest repression level of all genes evaluated in both fermentations conducted with added dyes, and seems to be the most affected by copper and/or dye among the P. ostreatus laccase transcripts analyzed in this research, with this response possibly being growth-time dependent. Our results are in agreement with Pezzella et al. (2013), who reported that poxa1b (lacc6) was barely induced in the presence of two inducers (Cu-ferulic acid) and its induction was limited to the latest stage of cultivation (7th day). Analysis of the poxa1b promoter showed the presence of several putative responsive elements, such as antioxidant response elements (ARE) and MREs, but not XREs or C and N nutrient responsive elements (Amore et al. 2012; Miele et al. 2010; Piscitelli et al. 2011); the lack of XREs may explain the weak induction level observed for this gene under the assay conditions evaluated in this study.
The activity, isoforms and transcriptional profiles obtained in this investigation show the complex regulation of the laccase genes by xenobiotic compounds such as the dyes tested in combination with such other factors as culture conditions, developmental stage, and variations in medium composition during P. ostreatus growth.
The textile dyes RBBR and AYG acted as inducers of laccase activity and modified the zymographic and expression profiles of laccase genes. Laccase activity may be defined by the expression of genes pox2, pox3 and pox4, and the oxidation of the dyes under study may be the result of these gene products. The high induction level of genes pox3 and pox4 mediated by dyes suggests that the laccases coded by them could be the main activity present in the dye fermentations. Given what is known about the presence of response elements (metal ions, xenobiotics, stress, glucose, and nitrogen) in laccase gene promoters, the dyes may be involved in the regulation of expression. However, the precise molecular mechanism that regulates gene expression through these potential response elements is unknown and needs to be fully explored in future work.
AYG:
acetyl yellow G
RBBR:
remazol brilliant blue R
BM:
basal medium
BMF:
basal medium fermentation
BBF:
remazol brilliant blue R fermentation
AYF:
acetyl-yellow G fermentation
EE:
enzymatic extract
DMP:
2,6-dimethoxyphenol
RT-qPCR:
reverse transcription-quantitative PCR
gpd:
glyceraldehyde 3-phosphate dehydrogenase
tub:
β-Tubulin
pep:
peptidase
Abadulla E, Tzanov T, Costa S, Robra KH, Cavaco-Paulo A, Gübitz GM (2000) Decolorization and detoxification of textile dyes with a laccase from Trametes hirsuta. Appl Environ Microbiol 66:3357–3362. doi:10.1128/AEM.66.8.3357-3362.2000
Amore A, Honda Y, Faraco V (2012) Copper induction of enhanced green fluorescent protein expression in Pleurotus ostreatus driven by laccase poxa1b promoter. FEMS Microbiol Lett 337:155–163. doi:10.1111/1574-6968.12023
Andersen CL, Jensen JL, Orntoft TF (2004) Normalization of real-time quantitative reverse transcription-PCR data: a model-based variance estimation approach to identify genes suited for normalization, applied to bladder and colon cancer data sets. Cancer Res 64:5245–5250. doi:10.1158/0008-5472.CAN-04-0496
Asgher M, Azim N, Bhatti HN (2009) Decolorization of practical textile industry effluents by white rot fungus Coriolus versicolor IBL-04. Biochem Eng J 47:61–65. doi:10.1016/j.bej.2009.07.003
Bajpai P (1999) Application of enzymes in the pulp and paper industry. Biotechnol Prog 15:147–157. doi:10.1021/bp990013k
Baldrian P, Gabriel J (2002) Copper and cadmium increase laccase activity in Pleurotus ostreatus. FEMS Microbiol Lett 2:69–74. doi:10.1016/S0378-1097(01)00519-5
Casas N, Blánquez P, Vincent T, Sarrá M (2013) Laccase production by Trametes versicolor under limited-growth conditions using dyes as inducers. Environ Technol 34:113–119. doi:10.1080/09593330.2012.683820
Castanera R, Pérez G, Omarini A, Alfaro M, Pisabarro AG, Faraco V, Ramírez L (2012) Transcriptional and enzymatic profiling of Pleurotus ostreatus laccase genes in submerged and solid-state fermentation cultures. Appl Environ Microbiol 78:4037–4045. doi:10.1128/AEM.07880-11
Castanera R, López-Varas L, Pisabarro AG, Ramírez L (2015) Validation of reference genes for transcriptional analyses in Pleurotus ostreatus by using reverse transcription-quantitative PCR. Appl Environ Microbiol 81(12):4120–4129
Collins PJ, Dobson A (1997) Regulation of laccase gene transcription in Trametes versicolor. Appl Environ Microbiol 63:3444–3450
Díaz R, Alonso S, Sánchez C, Tomasini A, Bibbins M, Díaz G (2011) Characterization of the growth and laccase activity of strains of Pleurotus ostreatus in submerged fermentation. BioResourses 6:282–290
Durán N, Esposito E (2000) Potential applications of oxidative enzymes and phenoloxidase-like compounds in wastewater and soil treatment: a review. Appl Catal B Environ 28:83–99. doi:10.1016/S0926-3373(00)00168-5
Faraco V, Giardina P, Sannia G (2003) Metal-responsive elements in Pleurotus ostreatus laccase gene promoters. Microbiology 149:2155–2162. doi:10.1099/mic.0.26360-0
Galhaup C, Goller S, Peterbauer CK, Strauss J, Haltrich D (2002) Characterization of the major laccase isoenzyme from Trametes pubescens and regulation of its synthesis by metal ions. Microbiology 148:2159–2169. doi:10.1099/00221287-148-7-2159
Garzillo AM, Colao MC, Buonocore V, Oliva R, Falcigno L, Saviano M, Sannia G (2001) Structural and kinetic characterization of native laccases from Pleurotus ostreatus, Rigidoporus lignosus and Trametes trogii. J Protein Chem 20:191–201. doi:10.1023/A:1010954812955
Giardina P, Cannio R, Martirani L, Marzullo L, Palmieri G, Sannia G (1995) Cloning and sequencing of a laccase gene from the lignin-degrading basidiomycete Pleurotus ostreatus. Appl Environ Microbiol 61:2408–2413
Giardina P, Palmieri G, Scaloni A, Fontanella B, Faraco V, Cennamo G, Sannia G (1999) Protein and gene structure of a blue laccase from Pleurotus ostreatus. Biochem J 341:655–663
Giardina P, Faraco V, Pezzella C, Piscitelli A, Vanhulle S, Sannia G (2010) Laccases: a never-ending story. Cell Mol Life Sci 67:369–385. doi:10.1007/s00018-009-0169-1
Hamedaani HR, Sakurai A, Sakakibara M (2007) Decolorization of synthetic dyes by a new manganese peroxidase producing white rot fungus. Dyes Pigm 72:157–162. doi:10.1016/j.dyepig.2005.08.010
Janusz G, Kucharzyk KH, Pawlik A, Staszczak M, Paszczynski AJ (2013) Fungal laccase, manganese peroxidase and lignin peroxidase: gene expression and regulation. Enzyme Microb Technol 52:1–12. doi:10.1016/j.enzmictec.2012.10.003
Laemmli UK (1970) Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature 227(5259):680–685. doi:10.1038/227680a0
Lettera V, Piscitelli A, Leo G, Birolo L, Pezzela C, Sannia G (2010) Identification of a new member of Pleurotus ostreatus laccase family from mature fruiting body. Fungal Biol 114:724–730. doi:10.1016/j.funbio.2010.06.004
Levin L, Papinutti L, Forchiassin F (2004) Evaluation of argentinean white rot fungi for their ability to produce lignin-modifying enzymes and decolorize industrial dyes. Bioresour Technol 94:169–176. doi:10.1016/j.biortech.2003.12.002
López MJ, Guisado G, Vargas-García MC, Estrella FS, Moreno J (2006) Decolourization of industrial dyes by ligninolytic microorganisms isolated from compositing environment. Enzyme Microb Technol 40:42–45. doi:10.1016/j.enzmictec.2005.10.035
Miele A, Giardina P, Sannia G, Faraco V (2010) Random mutants of a Pleurotus ostreatus laccase as new biocatalysts for industrial effluents bioremediation. J Appl Microbiol 108:998–1006. doi:10.1111/j.1365-2672.2009.04505.x
Moussa TA (2009) Molecular characterization of the phenol oxidase (pox2) gene from the ligninolytic fungus Pleurotus ostreatus. FEMS Microbiol Lett 298:131–142. doi:10.1111/j.1574-6968.2009.01708.x
Palmieri G, Giardina P, Marzullo L, Desiderio B, Nittii G, Cannio R, Sannia G (1993) Stability and activity of a phenol oxidase from the ligninolytic fungus Pleurotus ostreatus. Appl Microbiol Biotechnol 39:632–636. doi:10.1007/BF00205066
Palmieri G, Giardina P, Bianco C, Scaloni A, Capasso A, Sannia G (1997) A novel white laccase from Pleurotus ostreatus. J Biol Chem 272:31301–31307. doi:10.1074/jbc.272.50.31301
Palmieri G, Cennamo G, Faraco V, Amoresano A, Sannia G, Giardina P (2003) Atypical laccase isoenzymes from copper supplemented Pleurotus ostreatus cultures. Enzyme Microb Technol 33:220–230. doi:10.1016/S0141-0229(03)00117-0
Palmieri G, Cennamo G, Sannia G (2005) Remazol brilliant blue R decolourisation by Pleurotus ostreatus and its oxidative enzymatic system. Enzyme Microb Technol 36:17–24. doi:10.1016/j.enzmictec.2004.03.026
Parenti A, Muguerza E, Redin-Iroz A, Omarini A, Conde E, Alfaro M, Pisabarro AG (2013) Induction of laccase activity in the white rot fungus Pleurotus ostreatus using water polluted with wheat straw extracts. Bioresour Technol 133:142–149. doi:10.1016/j.biortech.2013.01.072
Periasamy R, Palvannan T (2010) Optimization of laccase production by Pleurotus ostreatus IMI 395545 using the Taguchi DOE methodology. J Basic Microbiol 50:548–556. doi:10.1002/jobm.201000095
Pezzella C, Autore F, Giardina P, Piscitelli A, Sannia G, Faraco V (2009) The Pleurotus ostreatus laccase multi-gene family: isolation and heterologous expression of new family members. Curr Genet 55:45–57. doi:10.1007/s00294-008-0221-y
Pezzella C, Lettera V, Piscitelli A, Giardina PS, Sannia G (2013) Transcriptional analysis of Pleurotus ostreatus laccase genes. Appl Microbiol Biotechnol 97:705–717. doi:10.1007/s00253-012-3980-9
Pfaffl MW (2001) A new mathematical model for relative quantification in real-time RT-PCR. Nucleic Acids Res 29:203–207. doi:10.1093/nar/29.9.e45
Piscitelli A, Giardina P, Lettera V, Pezzella CS, Sannia G, Vincenza F (2011) Induction and transcriptional regulation of laccases in fungi. Curr Genomics 12:104–112. doi:10.2174/138920211795564331
Revankar MS, Lele SS (2007) Synthetic dye decolorization by white rot fungus, Ganoderma sp., WR-1. Bioresour Technol 98:775–780. doi:10.1016/j.biortech.2006.03.020
Soden DM, Dobson AD (2001) Differential regulation of laccase gene expression in Pleurotus sajor-caju. Microbiology 147:1755–1763. doi:10.1099/00221287-147-7-1755
Solomon EI, Sundaram UM, Machonkin TE (1996) Multicopper oxidases and oxygenases. Chem Rev 7:2563–2606. doi:10.1021/cr950046o
Swamy J, Ramsay JA (1999) The evaluation of white rot fungi in the decoloration of textile dyes. Enzyme Microb Technol 24:130–137. doi:10.1016/S0141-0229(98)00105-7
Tavčar M, Svobodová K, Kuplenk J, Novotný ČP, Pavko A (2006) Biodegradation of azo dye RO16 in different reactors by immobilized Irpex lacteus. Acta Chim Slov 53:338–343
Téllez-Téllez M, Sánchez C, Loera O, Díaz-Godínez G (2005) Differential patterns of constitutive intracellular laccases of the vegetative phase of Pleurotus species. Biotechnol Lett 27:1391–1394. doi:10.1007/s10529-005-3687-4
Téllez-Téllez M, Fernández FJ, Montiel-González AM, Sánchez C, Díaz-Godínez G (2008) Growth and laccase production by Pleurotus ostreatus in submerged and solid-state fermentation. Appl Microbiol Biotechnol 81:675–679. doi:10.1007/s00253-008-1628-6
Temp U, Eggert C (1999) Novel interaction between laccase and cellobiose dehydrogenase during pigment synthesis in the white rot fungus Pycnoporus cinnabarinus. Appl Environ Microbiol 65:389–395
Terrón MC, González T, Carbajo JM, Yagüe S, Arana-Cuenca A, Téllez A, González AE (2004) Structural close-related aromatic compounds have different effects on laccase activity and on lcc gene expression in the ligninolytic fungus Trametes sp. I-62. Fungal Genet Biol 41:954–962. doi:10.1016/j.fgb.2004.07.002
Thurston CF (1994) The structure and function of fungal laccases. Microbiology 140:19–26. doi:10.1099/13500872-140-1-19
Vandesompele J, De Preter K, Pattyn F, Poppe B, Van Roy N, De Paepe A, Speleman F (2002) Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biol 3:1–12. doi:10.1186/gb-2002-3-7-research0034
Vandevivere P, Bianchi R, Verstraete W (1998) Review: treatment and reuse of wastewater from the textile wet-processing industry: review of emerging technologies. J Chem Technol Biotechnol 72:289–302. doi:10.1002/(SICI)1097-4660(199808)72:4<289:AID-JCTB905>3.0.CO;2-#
Vanhulle S, Enaud E, Trovaslet M, Nouaimeh N, Bols CM, Keshavarz T, Corbisier M (2007) Overlap of laccases/cellobiose dehydrogenase activities during the decolourisation of anthraquinonic dyes with close chemical structures by Pycnoporus strains. Enzyme Microbial Technol 40:1723–1731. doi:10.1016/j.enzmictec.2006.10.033
Wesenberg D, Kyriakides I, Agathos SN (2003) White-rot fungi and their enzymes for the treatment of industrial dye effluents. Biotechnol Adv 22:161–187. doi:10.1016/j.biotechadv.2003.08.011
MBM, AHE and GDG designed the research; VGB, MTT and SNG performed all experiments; MAVL and AAB contributed to data analysis. All authors were involved in data interpretation and the writing of the paper. All authors read and approved the final manuscript.
This article does not contain any studies involving experiments on humans or animals.
This work was supported by the Mexican Council of Science and Technology (CONACYT) Project No. CB-2009-134348) and the Instituto Politécnico Nacional (IPN) Project No. SIP 20161426, which are gratefully acknowledged. MTT was a postdoctoral fellow with a CONACYT Postdoctoral fellowship No. 17159.
Centro de Investigación en Biotecnología Aplicada-Instituto Politécnico Nacional, Carretera Estatal Sta Inés Tecuexcomac-Tepetitla, km. 1.5, C.P: 90700, Tepetitla de Lárdizabal, Tlaxcala, Mexico
Verónica Garrido-Bazán, Soley Nava-Galicia, Miguel Ángel Villalobos-López, Analilia Arroyo-Becerra & Martha Bibbins-Martínez
Centro de Investigaciones Biológicas, Universidad Autónoma del Estado de Morelos, Cuernavaca, Morelos, Mexico
Maura Téllez-Téllez
Laboratorio Nacional de Genómica para la Biodiversidad, Cinvestav, Irapuato, Gto, Mexico
Alfredo Herrera-Estrella
Laboratory of Biotechnology, Research Center for Biological Sciences, Universidad Autónoma de Tlaxcala, Tlaxcala, Mexico
Gerardo Díaz-Godínez
Verónica Garrido-Bazán
Soley Nava-Galicia
Miguel Ángel Villalobos-López
Analilia Arroyo-Becerra
Martha Bibbins-Martínez
Correspondence to Martha Bibbins-Martínez.
Growth of P. ostreatus and pH profile in submerged fermentations in BMF (●), BBF (■) and AYF (♦) media. The error bars represent the standard deviation of three different fermentation runs. Figure S2. GeNorm analysis of the expression stability of 4 reference genes. Figure S3. Variability of Cp values of 4 reference genes tested under the 3 different fermentation conditions using NormFinder. Table S1. Identifiers and product lengths of reference gene primers used in this study.
Garrido-Bazán, V., Téllez-Téllez, M., Herrera-Estrella, A. et al. Effect of textile dyes on activity and differential regulation of laccase genes from Pleurotus ostreatus grown in submerged fermentation. AMB Expr 6, 93 (2016). https://doi.org/10.1186/s13568-016-0263-3
Laccases
Isoenzymes | CommonCrawl |
BMC Pregnancy and Childbirth
Prevalence, trend and determinants of adolescent childbearing in Burundi: a multilevel analysis of the 1987 to 2016–17 Burundi Demographic and Health Surveys data
Jean Claude Nibaruta1,
Bella Kamana2,
Mohamed Chahboune1,
Milouda Chebabe1,
Saad Elmadani1,
Jack E. Turman Jr.3,
Morad Guennouni1,
Hakima Amor4,
Abdellatif Baali4 &
Noureddine Elkhoudri1
BMC Pregnancy and Childbirth volume 22, Article number: 673 (2022)
Very little is known about factors influencing adolescent childbearing despite an upward trend in adolescent childbearing prevalence in Burundi, and its perceived implications for the rapid population growth and ill-health of young mothers and their babies. To address this gap, this study aimed to examine the prevalence, trends and determinants of adolescent childbearing in Burundi.
Secondary analyses of the 1987, 2010 and 2016–17 Burundi Demographic and Health Surveys (BDHS) data were conducted using STATA. Weighted samples of 731 (1987 BDHS), 2359 (2010 BDHS) and 3859 (2016–17 BDHS) adolescent girls aged 15–19 years old were used for descriptive and trend analyses. Both bivariable and multivariable two-level logistic regression analyses were performed to identify the main factors associated with adolescent childbearing using only the 2016–17 BDHS data.
The prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17. Factors such as adolescent girls aged 18–19 years old (aOR = 5.85, 95% CI: 3.54–9.65, p < 0.001), adolescent illiteracy (aOR = 4.18, 95% CI: 1.88–9.30, p < 0.001), living in poor communities (aOR = 2.19, 95% CI: 1.03–4.64, p = 0.042), early marriage (aOR = 9.28, 95% CI: 3.11–27.65, p < 0.001), lack of knowledge of any contraceptive methods (aOR = 5.33, 95% CI: 1.48–19.16, p = 0.010), and non-use of modern contraceptive methods (aOR = 24.48, 95% CI: 9.80–61.14, p < 0.001) were associated with higher odds of adolescent childbearing, while factors such as living in the richest household wealth quintile (aOR = 0.52, 95% CI: 0.45–0.87, p = 0.007), living in the West region (aOR = 0.26, 95% CI: 0.08–0.86, p = 0.027) or in the South region (aOR = 0.31, 95% CI: 0.10–0.96, p = 0.041) were associated with lower odds of adolescent childbearing.
Our study found an upward trend in adolescent childbearing prevalence, and there were significant variations in the odds of adolescent childbearing by some individual- and community-level factors. School- and community-based intervention programs aimed at promoting girls' education, improving socioeconomic status, increasing knowledge and utilization of contraceptives, and preventing early marriage among adolescent girls are crucial to reduce adolescent childbearing in Burundi.
The World Health Organization (WHO) and United Nations entities define an adolescent as an individual aged 10–19 years [1, 2]. Adolescent childbearing is a major global public health issue because of its many adverse health and socio-economic consequences for both young mothers and their babies, particularly in Sub-Saharan Africa (SSA) [3, 4]. While adolescent childbearing declined significantly overall since 2004 [5], significant disparities persist between and within countries and among population groups, particularly in SSA [3, 6,7,8]. In 2015–2020, SSA had the highest levels of adolescent childbearing, followed by Asia and Latin America and the Caribbean [6]. Almost one-fifth (18.8%) of adolescent girls got pregnant in Africa, and a higher prevalence (21.5%) was observed in the East African sub-region where Burundi is located [3]. Several studies state that adolescent childbearing is associated with higher maternal mortality and morbidity and adverse child outcomes including a higher prevalence of low birth weight and higher perinatal and neonatal mortality as compared to older women [3, 4, 9]. Adolescent early initiation into childbearing lengthens the reproductive period and subsequently increases a woman's lifetime fertility rate, contributing to rapid population growth [10,11,12].
The Burundian population is characterized by its extreme youth, with 65% under the age of 25, and almost a quarter of this growing population (23%) are adolescents [13]. In Burundi, adolescent childbearing remains an important issue because of its perceived implications for the rapid population growth and ill-health of adolescent mothers and their babies [11]. According to the report of the latest Burundi Demographic and Health Survey (BDHS) [14], 8% of women aged 15–19 had begun childbearing, including 6% who had at least one live birth and 2% who were pregnant with their first child. Despite good progress in reducing the maternal mortality ratio [14], a large number of adolescent girls are still dying from pregnancy- and childbirth-related complications. The maternal mortality rate among Burundian adolescent girls is estimated at 150 maternal deaths per 1000 women aged 15–19 years [14]. Maternal disorders are the fourth highest cause of death among teenage mothers in Burundi [13]. Early marriage and adolescent pregnancy could lead to or aggravate anemia in mothers and result in low iron stores in the offspring [15], or in prematurity or low birth weight babies [16]. Approximately 36% of Burundian adolescent girls are anemic and 0.4% have obstetric fistula [14]. On the other hand, the infant mortality rate among adolescent girls in Burundi is estimated at 59 deaths per 1000 live births, of which 30% are neonatal and 29% post-neonatal [14]. In addition, the prevalence of low birth weight is higher among adolescent mothers (7.2%) than among women aged 20–34 years (4.7%) [14].
Several studies were conducted to examine the factors influencing adolescent pregnancy and motherhood in various settings. The results of these studies showed that early marriage or sexual intercourse [4, 7, 9], illiteracy or low level of education and poverty [3, 7, 9, 10] or living in poor neighborhoods [17, 18], age of the adolescent [4, 10, 19], marital status [3, 4, 10], rural residence and geographic regions [3, 4, 10, 20] are important factors influencing adolescent childbearing. Despite an upward trend in adolescent childbearing prevalence and its perceived implications for the rapid population growth and poor health of young mothers and their babies, very little is known about factors influencing adolescent childbearing in Burundi [21,22,23]. Only two BDHS reports [14, 24] containing information on factors influencing adolescent childbearing are available in Burundi. The results of these two surveys are limited to a few determinants of adolescent childbearing and are fully descriptive, and therefore do not make it possible to know the net effect of each of the factors influencing adolescent childbearing in the Burundian settings. To address this gap, we aim to examine the prevalence, trend and determinants of adolescent childbearing using the 1987 to 2016–17 BDHS data.
Data sources and population
This study used data on adolescent women (aged 15–19 years) extracted from the three BDHS conducted in 1987 [25], 2010 [24] and 2016–2017 [14] for descriptive statistics and the assessment of the trend in adolescent childbearing. For the second objective of identifying factors associated with adolescent childbearing, only adolescent women data from the most recent BDHS [14] were used. The BDHS are nationally representative surveys with samples based on a two-stage stratified sampling procedure: enumeration areas or clusters in the first stage and households in the second stage. In sampled households, all women aged between 15 and 49 years who consented to participate in the survey were interviewed. In total, 731, 2359, and 3859 adolescent women aged 15–19 years were successfully interviewed during the 1987, 2010 and 2016–17 BDHS surveys respectively; thus, the current study used three weighted samples of 731, 2359, and 3859 adolescent women aged 15–19 years. A detailed description of the sampling procedure for each of these three surveys is presented in the final report for each survey [14, 24, 25].
Variables of the study
Outcome variable
The outcome variable of interest in this study is adolescent childbearing, which refers to the sum of the percentage of adolescents aged 15–19 who are already mothers (have had at least a live birth) and the percentage of adolescents who are pregnant with their first child at the time of the interview [4, 26]. Thus, any adolescent who was already a mother or pregnant with her first child was coded one (1) and zero (0) in the opposite case.
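A minimal sketch of how this coding could be derived from a DHS women's recode file is shown below; the column names (v012 = age, v201 = children ever born, v213 = currently pregnant) follow the usual DHS recode convention but are assumptions here rather than the authors' actual code, which was written in STATA.

```python
# Minimal sketch of the outcome coding from a DHS women's recode file. The
# column names (v012 = age, v201 = children ever born, v213 = currently
# pregnant) are assumed for illustration only.
import pandas as pd


def flag_childbearing(df: pd.DataFrame) -> pd.DataFrame:
    adolescents = df[df["v012"].between(15, 19)].copy()
    adolescents["childbearing"] = (
        (adolescents["v201"] > 0) | (adolescents["v213"] == 1)
    ).astype(int)   # 1 = mother or pregnant with first child, 0 = otherwise
    return adolescents


# Tiny mock dataset for illustration
mock = pd.DataFrame({
    "v012": [16, 19, 18, 15],   # age
    "v201": [0, 1, 0, 0],       # number of live births
    "v213": [0, 0, 1, 0],       # currently pregnant
})
print(flag_childbearing(mock)["childbearing"].tolist())   # [0, 1, 1, 0]
```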
Based on a prior literature review, our independent variables were classified into individual-level factors and community-level factors. The individual-level factors include: adolescent's age, education, household wealth index, working status, religion, access to mass media, age at first marriage, knowledge of any contraceptive methods, and modern contraceptive use. Community-level factors include: place of residence, health regions, community-level education, and community-level poverty. It should be noted that of the four community-level variables, two variables (community-level education, and community-level poverty) were created by aggregating individual-level factors (adolescent's education, and household wealth index) since these two variables are not directly found from the 2016–17 BDHS dataset.
Operational definitions
Access to mass media
Created by combining the following three variables: frequencies of listening to radio, watching TV, and reading newspapers and coded as "yes" if the adolescent was exposed to at least one of the three media and "no" in the opposite case.
Health regions
This variable had eighteen categories corresponding to the eighteen current provinces of Burundi. To reduce its excessive number of categories, it was recoded into five regions such as North Region, Central-East Region, West Region, South Region and Bujumbura Mairie [11].
Community-level education
Aggregate values measured by the proportion of adolescents with a minimum of primary-level education, derived from data on each adolescent's education. It was then dichotomized at the national median value into low (communities with < 50% of adolescents having at least primary education) and high (communities with ≥50% of adolescents having at least primary education) community-level adolescent education.
Community-level poverty
Aggregate values measured by the proportion of adolescents living in households classified as poorest/poorer, derived from data on the household wealth index. It was then dichotomized at the national median value into low (communities with < 50% of adolescents living in poorest/poorer households) and high (communities with ≥50% of adolescents living in poorest/poorer households) community-level adolescent poverty.
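As an illustration of the aggregation step behind both community-level variables, the sketch below builds the community-level poverty indicator with pandas; the cluster identifier ("v001") and the individual poverty flag are assumed column names, not the authors' code.

```python
# Sketch of the aggregation step behind the community-level variables, using
# community-level poverty as the example. Column names ("v001" = cluster id,
# "poor_household" = 1 if the household is in the poorest/poorer quintiles)
# are assumptions for illustration only.
import pandas as pd


def add_community_poverty(df: pd.DataFrame) -> pd.DataFrame:
    # proportion of adolescents from poorest/poorer households in each cluster
    prop = (df.groupby("v001")["poor_household"]
              .mean()
              .rename("prop_poor")
              .reset_index())
    out = df.merge(prop, on="v001")
    # >= 50 % of adolescents in poorest/poorer households => "high" poverty
    out["community_poverty"] = (out["prop_poor"] >= 0.5).map(
        {True: "high", False: "low"})
    return out


# Tiny mock dataset: two clusters, one mostly poor, one mostly non-poor.
mock = pd.DataFrame({
    "v001":           [1, 1, 1, 2, 2, 2],
    "poor_household": [1, 1, 0, 0, 0, 1],
})
print(add_community_poverty(mock)[["v001", "community_poverty"]].drop_duplicates())
```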
Data management and statistical analysis
After data were extracted, recoded and reorganized, the statistical analysis was performed using STATA statistical software version 14.2. During all statistical analyses, the weighted samples were used to adjust for non-proportional sample selection and for non-responses to ensure that our results were nationally representative. Frequency and percentage were used to describe the sociodemographic characteristics as well as the sexual and reproductive health history of the sample across the three surveys. The trend analysis of adolescent childbearing was evaluated using the extended Mantel-Haenszel chi-square test for linear trend in the OpenEpi (version 3.01) dose-response program [4, 27]. A p-value ≤0.05 was used to declare the existence of a significant trend.
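The trend statistic can be approximated outside OpenEpi with a chi-square test for linear trend in proportions (Cochran-Armitage form, closely related to the extended Mantel-Haenszel statistic). In the sketch below, the case counts are back-calculated from the reported prevalences and weighted sample sizes, so the result only approximates the published p-value.

```python
# Rough reconstruction of the chi-square test for linear trend in proportions.
# Case counts are back-calculated from the reported prevalences and weighted
# sample sizes of the three surveys, so they are approximations only.
import math


def chi2_trend(scores, cases, totals):
    """Chi-square (1 df) and two-sided p-value for a linear trend in proportions."""
    n_total = sum(totals)
    r_total = sum(cases)
    p_bar = r_total / n_total
    t = (sum(s * r for s, r in zip(scores, cases))
         - p_bar * sum(s * n for s, n in zip(scores, totals)))
    var = p_bar * (1 - p_bar) * (
        sum(n * s * s for s, n in zip(scores, totals))
        - sum(s * n for s, n in zip(scores, totals)) ** 2 / n_total)
    chi2 = t * t / var
    p = math.erfc(math.sqrt(chi2) / math.sqrt(2.0))  # two-sided, 1 df
    return chi2, p


# Survey time (years since 1987), approximate number of girls who had begun
# childbearing (~5.9 %, 9.6 % and 8.3 %), and weighted sample sizes.
scores = [0, 23, 29.5]
cases = [43, 226, 320]
totals = [731, 2359, 3859]

chi2, p = chi2_trend(scores, cases, totals)
print(f"chi-square for trend = {chi2:.2f}, p = {p:.3f}")
```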
During the BDHS data collection, two-stage stratified cluster sampling procedures were used and therefore the data were hierarchical. To obtain correct estimates in inferential analyses, advanced statistical models such as multilevel modeling that considers independent variables measured at individual- and community-levels should be used to account for the clustering effect/dependency [28,29,30,31]. Thus, bivariable and multivariable multilevel logistic regression analyses were conducted to identify factors associated with adolescent childbearing by using only the most recent BDHS [14]. We first performed the bivariable multilevel logistic regression analysis to examine associations between adolescent childbearing and the selected individual and community-level variables. Then variables with a p-value ≤0.2 in the bivariate analysis were included in the multivariable multilevel logistic regression analysis to assess the net effects of each independent variable on adolescent childbearing after adjusting for potential confounders. The fixed effects were reported in terms of adjusted odds ratios (aOR) with 95% confidence intervals (CI) and p-values. Variables with p-value < 0.05 were declared to be significantly associated with adolescent childbearing in the multivariate analysis.
Before performing these multilevel logistic regression analyses, an empty model was conducted to calculate the extent of variability in adolescent childbearing between clusters (between communities). The existence of this variability was assessed using the Intra-Class correlation Coefficient (ICC) and the Median Odds Ratio (MOR) [29,30,31,32]. The ICC represents the proportion of the between-cluster variation in the total variation (the between- plus the within-Cluster variation) of the chances of adolescent childbearing [28, 29]. It can be computed with the following formula:
$$\mathrm{ICC}=\frac{\sigma^2}{\sigma^2+\pi^2/3}=\frac{\sigma^2}{\sigma^2+3.29},\quad\text{where } \sigma^2 \text{ represents the cluster-level variance}$$
The MOR is the median value of the odds ratio between the cluster at higher risk and the cluster at lower risk of adolescent childbearing when randomly picking two adolescent women from two different clusters [29, 30]. It can be computed with the following formula:
$$\mathrm{MOR}=\exp\left[\sqrt{2\times\sigma^2}\times 0.6745\right]$$

$$\mathrm{MOR}\cong \exp\left(0.95\times \sqrt{\sigma^2}\right)$$
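Both quantities follow directly from the cluster-level variance of the empty model. In the sketch below, σ² is back-solved from the reported ICC of 20.2 %, so the printed MOR should land close to, but not exactly on, the published value of 2.37.

```python
# Small helper that turns the cluster-level variance of the empty (null)
# multilevel model into the ICC and MOR defined above. The sigma^2 used in the
# example is back-solved from the reported ICC of 20.2 %, so the output should
# be close to (not exactly equal to) the published MOR of 2.37.
import math

PI2_OVER_3 = math.pi ** 2 / 3   # ~3.29, level-1 variance of the logistic distribution


def icc(sigma2: float) -> float:
    return sigma2 / (sigma2 + PI2_OVER_3)


def mor(sigma2: float) -> float:
    return math.exp(0.6745 * math.sqrt(2.0 * sigma2))


sigma2 = 0.202 * PI2_OVER_3 / (1 - 0.202)   # cluster variance implied by ICC = 20.2 %
print(f"ICC = {icc(sigma2):.3f}, MOR = {mor(sigma2):.2f}")
```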
The deviance (−2 log-likelihood), Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) were used to compare the fit to the data of the null model and that of the full model, where we favored the model with smaller values of these indices [4, 30, 33].
Sociodemographic characteristics of samples
The sociodemographic characteristics of the adolescents included in the three surveys are summarized in Table 1. The analysis of adolescents' age showed that the majority of them (53.4, 61.1 and 64.5% in the 1987, 2010 and 2016–17 BDHS respectively) were between 15 and 17 years old. Similarly, most of participants resided in rural areas: 95.7% (1987 BDHS), 88.4% (2010 BDHS) and 85.8% (2016–17 BDHS). A large proportion of adolescents (75.8 and 76.5% in the 2010 and 2016–17 BDHS respectively) lived in three health regions (North, Central-East and South). Similarly, most adolescent girls were still single: 93.2% (1987 BDHS), 90.2% (2010 BDHS) and 93.3% (2016–17 BDHS). The proportion of illiterate adolescents decreased from 73.3% (1987 BDHS) to 7.3% (2016–17 BDHS). On the other hand, the percentages of adolescents who were currently working increased from 7.5% (1987 DHS) to 57.6% (2016-17DHS). More than half of adolescent girls (58.5 and 53.6% in the 2010 and 2016–17 BDHS surveys respectively) were from very poor/poor/middle-income households. Similarly, analysis of religious affiliation showed that most adolescents were Catholic: 61.1% (2010 BDHS) and 55.7% (2016–17 BDHS).
Table 1 Sociodemographic characteristics of adolescents in Burundi using the 1987, 2010 and 2016/17 BDHS
Sexual and reproductive health characteristics of the samples
The percentage of adolescents who had their first sexual intercourse at age ≤ 14 years increased from 0.7% (1987 BDHS) to 2.6% (2016–17 BDHS). Similarly, the percentage of adolescents who had their first birth at age ≤ 17 years increased from 1.7% (1987 BDHS) to 3.3% (2016–17 BDHS). In contrast, the proportion of adolescents who had their first marriage at age ≤ 17 decreased slightly from 4% (1987 BDHS) to 3.8% (2016–17 BDHS). Similarly, 40.1% (1987 BDHS) of adolescents had knowledge of any contraceptive methods compared to 89.9% (2016–17 BDHS). The percentage of adolescents who do not intend to use contraception increased from 17.8% (2010 BHDS) to 24.8% (2016–17 BDHS). On the other hand, there was a reduction in the proportion of adolescents with unmet need for contraception, which decreased from 3.2% (2010 BDHS) to 2.5% (2016–17 BDHS). Regarding fertility preference, 5.8% (2010 BDHS) of adolescents wanted to have another pregnancy compared to 96.5% in the 2016–17 BDHS (See Table 2).
Table 2 Sexual and reproductive health characteristics of adolescents in Burundi using the 1987, 2010 and 2016/17 BDHS data
Prevalence and trends of adolescent childbearing
The prevalence and trends of adolescent childbearing were examined in its two components: prevalence and trend of adolescents who have had at least one live birth and prevalence and trend of those who were pregnant with their first child at the time of the survey (see Fig. 1). Thus, the prevalence of adolescent childbearing increased from 5.9% (95% CI: 4.3–7.8) in 1987 to 9.6% (95% CI: 8.4–10.4) in 2010, and then decreased from 9.6 to 8.3% (95% CI: 7.4–9.2) in 2016/17. The trend analysis shows that there was an increase of 2.4% from 1987 to 2016/17 although this increase was not statistically significant (P-value = 0.0503). Indeed, the prevalence of adolescents who have had at least one live birth increased from 3.2% (95% CI: 2.0–4.7) in 1987 to 6.7% (95% CI: 5.7–7.7) in 2010, and then decreased from 6.7 to 6.1% (95% CI: 5.3–6.8) in 2016/17. The trend analysis shows that there was an increase of 2.9% from 1987 to 2016/17 and this increase was statistically significant (P-value = 0.0036). On the other hand, the prevalence of adolescents who were pregnant with their first child increased from 2.7% (95% CI: 1.7–4.2) in 1987 to 2.9% (95% CI: 2.2–3.6) in 2010, and then decreased from 2.9 to 2.2% (95%CI: 1.7–2.7) in 2016/17. The trend analysis shows that there was a decrease of 0.5% from 1987 to 2016/17 but this decrease was not statistically significant (P-value = 0.3593).
Prevalence and trends of adolescent childbearing in Burundi using the 1987, 2010 and 2016–17 BDHS Data
Determinants of adolescents childbearing
Bivariable and multivariable multilevel logistic regression analyses were conducted to identify individual and community-level factors associated with adolescent childbearing by using only the most recent (2016–17) BDHS data. First, an empty model was performed to calculate the extent of variability in adolescent childbearing between clusters by using the ICC and the MOR indicators. The deviance, AIC, and BIC were also used to select the model that best fit the data. The results of bivariable and multivariable analyses, random effect model and model fitness are summarized in Table 3.
Table 3 Results of bivariable and multivariable multilevel logistic regression analyses of factors associated with adolescent childbearing in Burundi
According to the findings in Table 3, the ICC of the empty model was estimated at 20.2%, which indicated that about 20.2% of the variation in adolescent childbearing was attributable to community differences. Similarly, the MOR of the empty model was estimated at 2.37, which means that if we randomly selected two adolescent girls from two different communities, the one from the higher-risk community had 2.37 times higher odds of childbearing than the one from the lower-risk community. The model fitness findings revealed that the best-fitted model was the full model (model with individual and community-level factors), since it had significantly (p < 0.001) lower values of deviance (905.70), AIC (955.71), and BIC (1112.16) compared to those of the empty model. In the bivariable analysis, factors like adolescent's age, education, working status, household wealth index, religion, access to mass media, age at first marriage, knowledge of any contraceptive methods, modern contraceptive use, health regions and community-level poverty met the minimum criteria (p ≤ 0.2) to be included in the multivariable analysis.
In the multivariable analysis, only factors such as adolescent's age, adolescent's education, household wealth index, age at first marriage, knowledge of any contraceptive methods, modern contraceptive use, health regions, and community-level poverty remained significantly associated with adolescent childbearing. Indeed, adolescents aged 18–19 years had about 6 times higher odds (aOR = 5.85, 95% CI: 3.54–9.65, p < 0.001) of childbearing than those aged 15–17 years. The odds of childbearing among adolescents who had no education were about 4 times higher (aOR = 4.18, 95% CI: 1.88–9.30, p < 0.001), and among those who had only a primary education about 2 times higher (aOR = 2.58, 95% CI: 1.54–4.25, p < 0.001), than among adolescents who had a secondary or higher education. The adolescents in the richest household quintile had 48% lower odds (aOR = 0.52, 95% CI: 0.45–0.87, p = 0.007) of childbearing compared to those in the poorest household quintile.
Similarly, the odds of childbearing among adolescents who got married at ≤17 years old were about 9 times higher (aOR = 9.28, 95% CI: 3.11–27.65, p < 0.001) than among those who got married at the age of 18 or 19. Moreover, the adolescents who did not have knowledge of any contraceptive methods had about 5 times higher odds (aOR = 5.33, 95% CI: 1.48–19.16, p = 0.010) of childbearing than those who had knowledge of any contraceptive methods. Similarly, the odds of childbearing among adolescents who were not using modern contraceptive methods were about 24 times higher (aOR = 24.48, 95% CI: 9.80–61.14, p < 0.001) than among those who were using modern contraceptive methods. Also, the odds of childbearing among adolescents living in the West region and those in the South region were about 74% (aOR = 0.26, 95% CI: 0.08–0.86, p = 0.027) and 69% (aOR = 0.31, 95% CI: 0.10–0.96, p = 0.041) lower, respectively, than among those living in Bujumbura Mairie. Finally, the odds of childbearing among adolescents living in high community-level poverty were about 2 times higher (aOR = 2.19, 95% CI: 1.03–4.64, p = 0.042) than among those living in low community-level poverty.
This study aimed to analyze the prevalence, trend and determinants of adolescent childbearing in Burundi using data from the three DHS conducted in Burundi in 1987 [25], 2010 [24], and 2016–17 [14] respectively. Our findings showed that the prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17. Indeed, analysis of the trend in adolescent childbearing over a 30-year period (1987 to 2017) shows that there was an increase in adolescent childbearing between 1987 and 2010, which would likely be the result of the various consequences of the 1993–2005 civil war. These consequences include sexual violence [34], the increase in the poverty rate [13, 35, 36] and the gradual deterioration of social norms that prohibited pregnancy outside of marriage, especially in urban areas [37]. Afterwards, there was a slight decrease in adolescent childbearing between 2010 and 2017, which would be attributable to the general increase in education in Burundi since 1987, and especially since 2010, following the implementation of the free Primary School Policy (FPSP) by the Burundian government in 2005 [38]. However, the effect of this general increase in school enrollment (at the individual and especially at the community level) would have been mitigated by several factors: the increase in the household poverty rate, especially after the 2015 post-election crisis [39], as some girls opt for early marriage to escape the poor household conditions in the parental home [35] while others move alone to the cities, especially Bujumbura Mairie, in search of work and are often vulnerable to sexual exploitation, which puts them at high risk of becoming pregnant [34]; the gradual deterioration of social norms that severely prohibited pregnancy outside of marriage, especially in urban areas [37]; and finally the difficulties of access to and low utilization of family planning services by adolescent girls in Burundi [23, 40, 41]. Although this upward trend in adolescent childbearing was not statistically significant, Burundi should make greater efforts to reverse this trend given the negative impact of adolescent childbearing in Burundi on the well-being of young mothers and their babies [21, 34, 42] and on the current demographic pressure [11, 13]. Moreover, several studies showed that the high level of maternal and infant morbidity and mortality can be reduced by reducing the adolescent childbearing rates in developing countries [3, 43, 44]. In addition, Burundi should take as a good example most of its neighboring countries, which are currently showing a downward trend in adolescent childbearing after having made enormous efforts [4, 7].
Our study identified some key determinants of adolescent childbearing in the Burundian settings. Indeed, our findings indicated that adolescents aged 18–19 years were more likely to start childbearing than those aged 15–17 years. This positive correlation between adolescent age and risk of childbearing could be explained by increased exposure to sexual intercourse and marriage as the age of the adolescent increases [4, 10]. Our results are consistent with those of many previous studies [4, 7, 10] that showed that the odds of adolescent pregnancy increase with adolescent age. However, it should be noted that the consequences of childbearing can be much more serious for 15–17 year old girls than for 18–19 year old girls, both in terms of their health (given their physical immaturity) and that of their babies, in terms of acceptance in the community given that the legal age of marriage in Burundi is 18, and in terms of an increase in their reproductive age, which would contribute to a high fertility rate, further exacerbating the demographic pressure in Burundi [11]. Therefore, intervention programs to reduce/prevent adolescent childbearing in Burundi should preferably target all age groups of adolescent girls.
Similarly, our results showed that adolescents who had no education were more likely to start childbearing than those who had a secondary or higher education. Such an association could be explained by the fact that out-of-school adolescent girls do not have access to comprehensive sexuality education (CSE) [45] and the skills necessary to negotiate sexuality and reproductive options [3]. The protective effect of education against adolescent childbearing has also been reported in several previous studies. Indeed, adolescents who had no education had about 2 times higher odds of childbearing compared to those who were in school [3], and teenage girls who had no education had about 3 times higher odds of childbearing than those who had a secondary or higher education [45]. Other similar results were reported in studies conducted in Malawi [10], and in five East African countries that do not include Burundi [7]. In Burundi, a significant increase in the school attendance rate, especially at the primary level, was observed following the implementation of the FPSP initiated by the Burundian government since 2005 [38]. However, there is still a gender gap in school attendance, especially at the secondary and higher levels [14, 38]. Moreover, CSE has certainly been integrated into the education program in Burundi, and even into extracurricular school clubs [22]. However, this is not enough, as the emphasis was placed on abstinence as the only accepted method for avoiding adolescent pregnancy [37, 38]. The information available on the benefits of using contraceptive methods is also too limited to have a positive effect on girls' ability to protect themselves [22]. Furthermore, many adolescent girls are eventually forced to drop out of school because of the very poor living conditions in the parental home [35, 36] and face an increased risk of pregnancy while trying to provide for their basic needs themselves [34, 35, 38]. Given the importance of education, particularly at the secondary and tertiary levels, in preventing teenage childbearing, policymakers should do everything possible to promote girls' education at all levels of the Burundian education system while significantly improving household socio-economic conditions and the quality of the CSE provided.
Our findings also revealed that household poverty or living in poor communities is associated with higher odds of adolescent childbearing. In the Burundian context, this association could be explained by the fact that Burundian society was severely affected economically by the civil war of 1993–2005 [34, 37]. Consequently, 64.9% of Burundians live below the national poverty line of US$1.27 and 38.7% live in extreme poverty [35, 36]. Thus, some rural adolescents arrive alone in cities in search of work and are often vulnerable to sexual exploitation, which exposes them to a high risk of unwanted pregnancies [34, 38]. On the other hand, some adolescent girls, especially those from rural areas, are eventually forced to drop out of school, either because they have no money to buy sanitary pads during menstruation or because they are unable to learn much without some food before school or at lunchtime [38]. Some malicious men (shopkeepers, drivers, teachers, etc.) take advantage of this precariousness to offer them money in exchange for sex, which often results in unwanted pregnancies [13, 22]. Our results corroborate those of the study by Vikat et al. [17] and those of the study by Kearney and Levine [18]. Although the relationship between poverty and adolescent childbearing may be a vicious cycle [3], our findings and available evidence [7, 9, 13] underscore the importance of improving household socioeconomic status in general, and especially that of disadvantaged communities, to reduce the prevalence of adolescent childbearing, thereby improving adolescents' sexual and reproductive health.
Unexpectedly, Bujumbura Mairie, which is generally considered less poor than other regions and where more youth have access to education [38], was found to be associated with a higher risk of adolescent pregnancy than other regions. This finding could be explained by two main reasons. The first is that in order to escape poor living conditions in parental households, some rural adolescents arrive alone in Bujumbura Mairie in search of work and are often vulnerable to sexual exploitation, which puts them at increased risk of becoming pregnant [34]. The second reason is that rural families are even more attached to social norms against out-of-wedlock pregnancies than urban families [34, 37]. Therefore, to escape the stigma of their families, some rural adolescents who experience an unwanted pregnancy prefer to move to Bujumbura Mairie as soon as possible before the family realizes that their daughter is pregnant.
This study also found that adolescent early marriage is associated with higher odds of childbearing. This link between early marriage and a higher risk of adolescent childbearing could be explained by the fact that early marriage implies early sexual debut and therefore a major risk of early pregnancy and childbearing [7, 9, 46]. In addition, several previous studies [3, 4, 9, 46] reported similar results. In Burundi, early marriage is associated not only with young mothers' and their babies' poor health outcomes [14], but also with a high fertility rate [11]. While the official age of marriage for girls in Burundi is 18, early marriage remains a common practice, especially in rural areas, as a way to escape poor living conditions in the parental home [35]. Therefore, the Burundian government should ensure the strict enforcement of any law aimed at combating early marriage while improving the socio-economic conditions of households. Indeed, apart from the findings of our study, several other researchers [3, 4, 46, 47] suggest that investing in the prevention of child marriage is important not only to reduce teenage pregnancies and related complications, but also to improve a country's economic development.
Similarly, our findings showed that both the lack of knowledge of any contraceptive methods and the non-use of modern contraceptive methods were associated with higher odds of adolescent childbearing. The positive influence of good knowledge and use of family planning services in preventing or reducing the rate of unintended pregnancies among adolescent girls has been widely reported in the scientific literature [9, 10, 42, 46]. However, most Burundian adolescent girls do not use contraception, and some do not even plan to use it in the future [14]. Indeed, the prevalence of contraceptive use among adolescent girls remains very low (2.5%) and the percentage of adolescent girls who do not intend to use contraception increased from 17.8% in 2010 to 24.8% in 2016–17. Moreover, the percentage of adolescents who had knowledge of any contraceptive methods decreased from 91.8% in 2010 to 89.9% in 2016–17 [14, 24]. The results of this study as well as the available evidence [46, 47] highlight the importance of interventions such as CSE [42] at all levels of the Burundian education system, provision of contraceptive services [48] to adolescents, and the creation of supportive environments (such as knowledge and support from parents, teachers, church, mass media campaigns, governance, and peer education programs) [42, 46] to reduce the prevalence of adolescent childbearing in Burundi. A strength of our study is that it is among the first to focus on trend analyses and community-level factors in the analysis of determinants of adolescent childbearing in Burundi. In addition, this study is the first to use an advanced logistic regression model (a multilevel model) to investigate the determinants of adolescent childbearing in Burundi. However, our study also suffers from some limitations. The 1987 DHS database did not contain some of the variables of interest to our study. Therefore, we limited ourselves to the analysis of the available variables. Moreover, the results of this study may suffer from misreporting bias regarding the respondents' current ages. Indeed, respondents' ages may not always have been reported correctly, either intentionally, by trying to report a higher age than the real age given the stigma surrounding adolescent pregnancy [21] and the legal consequences of early marriage, or by not knowing the real age, given that Burundi has suffered from repeated outbreaks of mass violence and political crisis [34, 37] during which registration of birth dates in government records was often impossible [49]. In addition, our study looked only at current pregnancies or previous births of adolescents to assess the prevalence of adolescent childbearing and did not consider adolescent pregnancies that ended in miscarriage, abortion, or stillbirth. This is important for readers to consider when interpreting the results of this study, as there may be an underestimation bias in the prevalence. Indeed, given the Burundian culture, which still considers pregnancy outside of marriage to be a disgrace to the family [21], many cases of induced and clandestine abortion are quite possible in Burundi, as was found in two recent studies conducted in two of Burundi's neighboring countries, Uganda [50] and Ethiopia [51], which showed that nearly one in six adolescent pregnancies ends in an induced and clandestine abortion.
Further studies that include adolescent pregnancies that ended in miscarriage, abortion, or stillbirth in the prevalence estimates are needed to better understand the extent of the problem in Burundi.
The prevalence of adolescent childbearing increased from 5.9% in 1987 to 8.3% in 2016/17, although this increase was not statistically significant. There were variations in the odds of adolescent childbearing by some individual and community-level factors. Factors such as late adolescent age, adolescent illiteracy, household poverty or high community-level poverty, early marriage, lack of knowledge of any contraceptive methods, non-use of modern contraceptive methods, and living in Bujumbura Mairie were associated with higher odds of adolescent childbearing. School- and community-based intervention programs aimed at promoting girls' education and improving socioeconomic status, knowledge and utilization of contraceptives, and prevention of early marriage among adolescent girls are crucial to reduce adolescent childbearing in Burundi.
The data that support the findings of this study are available for download upon a formal application from the DHS Program web site https://dhsprogram.com/data/available-datasets.cfm, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of the DHS Program.
AIC: Akaike Information Criterion
aOR: Adjusted Odds Ratio
BDHS: Burundi Demographic and Health Survey
BIC: Bayesian Information Criterion
CSE: Comprehensive Sexuality Education
FPSP: Free Primary Schooling Policy
ICC: Intra-Class Correlation Coefficient
MOR: Median Odds Ratio
SSA: Sub-Saharan Africa
WHO. Guidance on ethical considerations in planning and reviewing research studies on sexual and reproductive.pdf. Geneva: WHO; 2018.
Plummer ML, Baltag V, Strong K, Dick B, Ross DA, World Health Organization, et al. Global Accelerated Action for the Health of Adolescents (AA-HA!): guidance to support country implementation. 2017. Available from: http://apps.who.int/iris/bitstream/10665/255415/1/9789241512343-eng.pdf. Cited 2020 Feb 19
Kassa GM, Arowojolu AO, Odukogbe AA, Yalew AW. Prevalence and determinants of adolescent pregnancy in Africa: a systematic review and Meta-analysis. Reprod Health. 2018;15:195.
Kassa GM, Arowojolu AO, Odukogbe A-TA, Yalew AW. Trends and determinants of teenage childbearing in Ethiopia: evidence from the 2000 to 2016 demographic and health surveys. Ital J Pediatr. 2019;45:153.
World Bank, International Monetary Fund. Global Monitoring Report 2015/2016: Development Goals in an Era of Demographic Change. Washington, DC: World Bank; 2016.
United Nations. World Fertility 2019 : early and later childbearing among adolescent women (ST/ESA/SER.A/446). 2019. Available from: https://www.un.org/en/development/desa/population/publications/index.asp. Cited 2021 Jan 13
Wado YD, Sully EA, Mumah JN. Pregnancy and early motherhood among adolescents in five East African countries: a multi-level analysis of risk and protective factors. BMC Pregnancy Childbirth. 2019;19:59. Available on: https://bmcpregnancychildbirth.biomedcentral.com/articles/10.1186/s12884-019-2204-z.
WHO. Adolescent pregnancy. Available from: https://www.who.int/news-room/fact-sheets/detail/adolescent-pregnancy. Cited 2021 Jan 12
World Health Organization. Regional Office for South-East Asia. Adolescent pregnancy situation in South-East Asia Region. Geneva: World Health Organization; 2015.
Palamuleni ME. Determinants of adolescent fertility in Malawi. Gend Behav. 2017;15:10126–41.
Nibaruta JC, Elkhoudri N, Chahboune M, Chebabe M, Elmadani S, Baali A, et al. Determinants of fertility differentials in Burundi: evidence from the 2016–17 Burundi demographic and health survey. PAMJ. 2021;38 Available from: https://www.panafrican-med-journal.com/content/article/38/316/full. Cited 2021 Apr 2.
Islam MM. Adolescent childbearing in Bangladesh. Asia Pacific Population Journal Economic and social commission for Asia and the pacific. 1999;14:73–87.
Rasmussen B, Sheehan P, Sweeny K, Symons J, Maharaj N, Kumnick M, et al. Adolescent Investment Case in Burundi: Estimating the Impacts of Social Sector Investments for adolescents. Bujumbura: Burundi: UNICEF Burundi; 2019.
Ministère à la Présidence chargé de la Bonne Gouvernance et du Plan (MPBGP), Ministère de la Santé Publique et de la Lutte Contre le Sida (MSPLS), Institut de Statistiques et d'Études Économiques du Burundi (ISTEEBU), ICF. Troisième Enquête Démographique et de Santé 2016–2017. Bujumbura, Burundi: ISTEEBU, MSPLS, and ICF.; 2017. 679. Available from: https://dhsprogram.com/publications/publication-FR335-DHS-Final-Reports.cfm
Kalaivani K. Prevalence & consequences of anaemia in pregnancy. Indian J Med Res Citeseer. 2009;130:627–33.
Ahmad MO, Kalsoom U, Sughra U, Hadi U, Imran M. Effect of maternal anaemia on birth weight. J Ayub Med Coll Abbottabad. 2011;23:77–9.
Vikat A, Rimpelä A, Kosunen E, Rimpelä M. Sociodemographic differences in the occurrence of teenage pregnancies in Finland in 1987–1998: a follow up study. J Epidemiol Community Health BMJ Publishing Group Ltd. 2002;56:659–68.
Kearney MS, Levine PB. Why is the teen birth rate in the United States so high and why does it matter? J Econ Perspect. 2012;26:141–63.
Gideon R. Factors associated with adolescent pregnancy and fertility in Uganda: analysis of the 2011 demographic and health survey data. Am J Sociol Res. 2013;3:30–5.
Neal S, Ruktanonchai C, Chandra-Mouli V, Matthews Z, Tatem AJ. Mapping adolescent first births within three East African countries using data from demographic and health surveys: exploring geospatial methods to inform policy. Reprod Health BioMed Central. 2016;13:1–29.
Ruzibiza Y. 'They are a shame to the community … ' stigma, school attendance, solitude and resilience among pregnant teenagers and teenage mothers in Mahama refugee camp, Rwanda. Glob Public Health. 2021;16:763–74.
Munezero D, Bigirimana J. Jont program "Menyumenyeshe" for improving sexual and reproductive health of adolescents and youth in Burundi. Bujumbura: Ministry of Public health and for fighting against Aids; 2017. p. 120. Available from: http://www.careevaluations.org/evaluation/improving-sexual-and-reproductive-health-of-adolescents-and-youth-in-burundi/
French H. How the "joint program" intervention should or might improve adolescent pregnancy in Burundi, how these potential effects could be encouraged, and where caution should be given; 2019.
Institut de Statistiques et d'Études Économiques du Burundi (ISTEEBU), Ministère de la Santé Publique et de la Lutte, contre le Sida (MSPLS), ICF International. Enquête Démographique et de Santé 2010. Bujumbura, Burundi: ISTEEBU, MSPLS, et ICF International.; 2012. 419. Available from: https://dhsprogram.com/publications/publication-FR253-DHS-Final-Reports.cfm
Segamba L, Ndikumasabo V, Makinson C, Ayad M. Enquête Démographique et de Santé au Burundi 1987. Columbia: Ministère de l'Intérieur Département de la Population/Burundi and Institute for Resource Development/Westinghouse; 1988. p. 385. Available from: https://dhsprogram.com/publications/publication-FR6-DHS-Final-Reports.cfm
Croft TN, Marshall AM, Allen CK, Arnold F, Assaf S, Balian S. Guide to DHS statistics; 2018. p. 645.
Dean AG, Sullivan KM, Soe MM. OpenEpi: open source epidemiologic statistics for public health, version 3.01. www.OpenEpi.com, updated 2013/04/06. 2013; Available from: http://www.openepi.com/DoseResponse/DoseResponse.htm
Sommet N, Morselli D. Keep calm and learn multilevel logistic modeling: a simplified three-step procedure using Stata, R, Mplus, and SPSS. Int Rev Soc Psychol. 2017;30:203–18.
Merlo J, Chaix B, Ohlsson H, Beckman A, Johnell K, Hjerpe P, et al. A brief conceptual tutorial of multilevel analysis in social epidemiology: using measures of clustering in multilevel logistic regression to investigate contextual phenomena. J Epidemiol Community Health. 2006;60:290–7.
Tesema GA, Worku MG. Individual-and community-level determinants of neonatal mortality in the emerging regions of Ethiopia: a multilevel mixed-effect analysis. BMC Pregnancy Childbirth. 2021;21:12.
Teshale AB, Tesema GA. Determinants of births protected against neonatal tetanus in Ethiopia: a multilevel analysis using EDHS 2016 data. Das JK, editor. Plos One. 2020;15:e0243071.
Tessema ZT, Tamirat KS. Determinants of high-risk fertility behavior among reproductive-age women in Ethiopia using the recent Ethiopian demographic health survey: a multilevel analysis. Trop Med Health BioMed Central. 2020;48:1–9.
Heck RH, Thomas S, Tabata L. Multilevel modeling of categorical outcomes using IBM SPSS: Routledge Academic; 2013. Available from: https://books.google.fr/books?id=PJsTMAuPv6kC&hl=fr&source=gbs_book_other_versions
Sommers M. Adolescents and violence: lessons from Burundi. Belgium: Belgique: Universiteit Antwerpen, Institute of Development Policy (IOB); 2013.
Berckmoes L, White B. Youth, farming and Precarity in rural Burundi. Eur J Dev Res. 2014;26:190–203.
Tokindang J, Bizabityo D, Coulibaly S, Nsabimana J-C. Profil et déterminants de la pauvret : Rapport de l'enquête sur les Conditions de Vie et des Ménages (ECVMB-2013/2014). Bujumbura: Institut de Statistiques et d'Études Économiques du Burundi; 2015. p. 91.
Schwarz J, Merten S. 'The body is difficult': reproductive navigation through sociality and corporeality in rural Burundi. Cult Health Sex. 2022;10:1–16.
Cieslik K, Giani M, Munoz Mora JC, Ngenzebuke RL, Verwimp P. Inequality in education, school-dropout and adolescent lives in Burundi. Brussels: UNICEF-Burundi/Université Libre de Bruxelles; 2014.
Arieff A. Burundi's Electoral Crisis: In Brief. Washington, DC: Congressional Research Service; 2015.
Westeneng J, Reis R, Berckmoes LH, Berckmoes LH. The effectiveness of sexual and reproductive health education in Burundi: policy brief. Paris: UNESCO; 2020.
Nzokirishaka A, Itua I. Determinants of unmet need for family planning among married women of reproductive age in Burundi: a cross-sectional study. Contracept Reprod Med. 2018;3:11.
Hindin MJ, Kalamar AM, Thompson T, Upadhyay UD. Interventions to prevent unintended and repeat pregnancy among young people in low-and middle-income countries: a systematic review of the published and gray literature. J Adolesc Health Elsevier. 2016;59:S8–15.
Nove A, Matthews Z, Neal S, Camacho AV. Maternal mortality in adolescents compared with women of other ages: evidence from 144 countries. Lancet Global Health Elsevier. 2014;2:e155–64.
Olausson PO, Cnattingius S, Haglund B. Teenage pregnancies and risk of late fetal death and infant mortality. BJOG. Wiley Online Library. 1999;106:116–21.
Islam MM, Islam MK, Hasan MS, Hossain MB. Adolescent motherhood in Bangladesh: trends and determinants. Khan HTA, editor. Plos One. 2017;12:e0188294.
WHO. Preventing early pregnancy and poor reproductive outcomes among adolescents in developing countries: What the evidence says? Geneva: World Health Organization; 2011. https://www.who.int/publications-detail-redirect/9789241502214. Accessed 31 Aug 2022.
WHO. WHO recommendations on adolescent sexual and reproductive health and rights. Geneva: World Health Organization; 2018. https://www.who.int/publications-detail-redirect/9789241514606.
Darroch JE, Woog V, Bankole A, Ashford LS, Points K. Costs and benefits of meeting the contraceptive needs of adolescents; 2016.
Isteebu. Recensement Général de la Population et de l'Habitat au Burundi en 2008. Bujumbura, Burundi: Institut de Statistiques et d'Études Économiques du Burundi; 2008. Available from: https://www.isteebu.bi/rgph-2008/
Sully EA, Atuyambe L, Bukenya J, Whitehead HS, Blades N, Bankole A. Estimating abortion incidence among adolescents and differences in postabortion care by age: a cross-sectional study of postabortion care patients in Uganda. Contraception Elsevier. 2018;98:510–6.
Sully E, Dibaba Y, Fetters T, Blades N, Bankole A. Playing it safe: legal and clandestine abortions among adolescents in Ethiopia. J Adolesc Health Elsevier. 2018;62:729–36.
DHS Program. The DHS Program - Request Access to Datasets. The Demographic and health surveys Program. 2020. Available from: https://dhsprogram.com/data/new-user-registration.cfm. Cited 2020 Apr 21
We extend our sincere thanks to the Measure DHS program for granting permission to access and use the 1987, 2010 and 2016/17 BDHS data for this study.
This study is financed by the Burundian government through the scholarship granted to Mr. Jean Claude Nibaruta under contract No.611/BBES/0134/12/2017/2018 within the framework of his PhD studies in Morocco. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of this manuscript.
Hassan First University of Settat, Higher Institute of Health Sciences, Laboratory of Health Sciences and Technologies, Settat, Morocco
Jean Claude Nibaruta, Mohamed Chahboune, Milouda Chebabe, Saad Elmadani, Morad Guennouni & Noureddine Elkhoudri
Hassan II University, Ibn Rochd University Hospital of Casablanca, Haematology laboratory, Casablanca, Morocco
Bella Kamana
Indiana University, Richard M. Fairbanks School of Public Health, Departments of Social and Behavioral Sciences, Indianapolis, IN, USA
Jack E. Turman Jr.
Cadi Ayyad University of Marrakech, Semlalia Faculty of Science, Departments of Biology, Marrakech, Morocco
Hakima Amor & Abdellatif Baali
JCN and NK conceived the idea and design and contributed in data analysis, interpretation of results, discussion and manuscript drafting. BK, MG, MC and MC substantively contributed in discussion and manuscript drafting. SM was a major contributor in data analysis and interpretation of results. While JET, HA and AB advised on data analysis and substantively revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Jean Claude Nibaruta.
The 1987, 2010, and 2016–17 survey protocols, consent forms, and data collection instruments were reviewed and approved by the National Ethics Committee for the Protection of Human Beings Participating in Biomedical and Behavioral Research in Burundi and the Institutional Review Board of ICF International. In addition, data were collected after informed consent was obtained from the participants and all information was kept confidential. For this study, permission was given by the MEASURE DHS program to access and download the three datasets after reviewing a short summary of this study submitted to the MEASURE DHS program via its website [52]. All the three datasets were treated with confidentiality and all methods were carried out in accordance with relevant guidelines and regulations.
Nibaruta, J.C., Kamana, B., Chahboune, M. et al. Prevalence, trend and determinants of adolescent childbearing in Burundi: a multilevel analysis of the 1987 to 2016–17 Burundi Demographic and Health Surveys data. BMC Pregnancy Childbirth 22, 673 (2022). https://doi.org/10.1186/s12884-022-05009-y
\begin{document}
\title{Liouville Operators over the Hardy Space}
\begin{abstract}The role of Liouville operators in the study of dynamical systems through the use of occupation measures has been an active area of research in control theory over the past decade. This manuscript investigates Liouville operators over the Hardy space, which encode complex ordinary differential equations as operators over a reproducing kernel Hilbert space. \end{abstract}
\section{Introduction} \label{intro}
Traditionally the study of Liouville operators has been constrained to Banach spaces of continuous functions, where moment problems relating Liouville operators and occupation measures have been investigated. It was observed in \cite{rosenfeld2019occupation} that the functional relationship between occupation measures and Liouville operators can be fruitfully exploited within the context of reproducing kernel Hilbert spaces, which gave rise to occupation kernels (cf. \cite{rosenfeld2019occupation, rosenfeldCDC2019}).
This shift in the study of Liouville operators has led to several nontrivial results in system identification and dynamic mode decomposition (DMD) \cite{rosenfeld2021dynamic,rosenfelddmd}. DMD has traditionally invoked Liouville operators as continuous generators for Koopman (or composition) semi-groups corresponding to discrete time dynamics \cite{kutz2016dynamic}. Work such as \cite{williams2015kernel} connected the study of DMD and Koopman operators with RKHSs, where continuous time dynamics were discretized and the discretized system was analyzed as a proxy for the continuous time system. This discretization has thus far been necessary for the application of DMD to a dynamical system. Through the incorporation of occupation kernels, \cite{rosenfelddmd} gave a method for the direct DMD analysis of continuous time systems through the Liouville operator over a RKHS. It should be emphasized that the collection of Liouville operators is strictly larger than that of Koopman generators, where the latter requires that a dynamical system admits a discretization. This is not always possible, since dynamics such as $\dot x = 1+x^2$ are not discretizable, yielding a solution with finite escape time.
These recent results position the study of Liouville operators over RKHSs as an important research direction for both pure and applied mathematics. Important for both is the characterization of densely defined Liouville operators over various RKHSs, where each new characterization opens new relations for the data driven study of continuous dynamical systems. The resolution of these questions informs the selection of a RKHS for particular applications in systems theory. Moreover, the introduction of \emph{scaled} Liouville operators (cf. \cite{rosenfelddmd}) allows for the representation of a dynamical system through a compact operator over the exponential dot product kernel's native space (the real valued counterpart of the Fock space). The idea of scaled Liouville operators is expanded here as Liouville weighted composition operators, which combine a composition operator with the Liouville operator to produce a bounded and sometimes compact operator that represents a dynamical system.
This manuscript investigates Liouville and Liouville weighted composition operators over the Hardy space. The Hardy space provides a model for the broader investigation of these operators, where properties such as the inner outer factorization of Hardy space functions \cite{hoffman2007banach}, the characterization of densely defined multiplication operators \cite{sarason2008unboundedtoeplitz}, and the representation of Hardy space functions both as analytic functions on the disc and as a subspace of $L^2$ of the circle provide tools through which Liouville operators, their spectrum, and their symbols may be investigated.
\section{Definitions and Preliminaries} \begin{definition}A reproducing kernel Hilbert space over a set $X$ is a Hilbert space of functions in which point evaluation $e_x(f)=f(x)$ is a continuous linear functional for all $x\in X$. Thus Riesz representation guarantees that for each $x\in X$ there exists a function $k_x\in H$ such that $f(x)=\ip{f}{k_x}$. \end{definition} One of the most studied reproducing kernel Hilbert spaces is the complex Hardy-Hilbert space over the disc. The Hardy space over the disc consists of analytic continuations of $L^2$ functions of the circle $\mathbb{T}$ whose Fourier coefficients, $\hat f(n) := \int_{-\pi}^\pi f(e^{i\theta}) e^{-i n \theta} d\theta$, are non-zero only for $n\geq 0$. In fact, one can define \[H^2(\mathbb{T})=\left\{f\in L^2(\mathbb{T}): f(z)=\sum_{n=0}^\infty \hat{f}(n)z^n\right\}.\] Functions in the Hardy space over the \emph{disc} can then be obtained by taking functions in $H^2(\mathbb{T})$ and integrating them against the Poisson kernel \cite{katznelson}. This amounts to defining $f(z)=\sum_{n=0}^\infty f_nz^n$ for $z\in \mathbb{D}$ where the Taylor coefficients agree with the Fourier coefficients of the function on the circle, $f_n=\hat{f}(n)$. Thus,
\[H^2(\mathbb{D})=\left\{f:\mathbb{D}\rightarrow\mathbb{C}: f(z)=\sum_{n=0}^\infty f_nz^n \text{ and }\sum_{n=0}^\infty |f_n|^2<\infty \right\},\] where radial limits exist almost everywhere. Of vast importance in Hardy space theory is the inner-outer factorization theorem \cite{cimaross, rosenblumrovnyak}. \begin{theorem} Every function $f\in H^2$ admits a representation as $f=F\varphi$ where $F$ is an inner function and $\varphi$ is an outer function. This factorization is unique up to a unimodular constant. \end{theorem}
This representation is called an \emph{inner-outer factorization} for the function $f\in H^2$. An inner function is a bounded analytic function $F$ on $\mathbb{D}$ whose radial limits satisfy $|F(e^{i\theta})|=1$ almost everywhere on $\mathbb{T}$, and an outer function $G\in H^2$ is a function whose polynomial multiples are dense in $H^2$ (equivalently, for $G\in H^\infty$, $GH^2$ is dense in $H^2$). Outer functions have representation as exponentials \cite{vukotic}, i.e. if $G$ is outer then
\[G(z)=\exp\left (\frac{1}{2\pi}\int_0^{2\pi}\frac{e^{i\theta}+z}{e^{i\theta}-z}\log|\tilde{G}(e^{i\theta})|d\theta\right)\] for some function $\tilde{G}$. This inner outer factorization is used in establishing many properties of functions in $H^2$. As an example, bounded multiplication operators on $H^2(\mathbb{D})$ have been extensively studied, and it has been determined that the bounded multipliers of $H^2(\mathbb{D})$ are precisely the bounded analytic functions on the disc, $H^\infty(\mathbb{D})$. In contrast, densely defined multiplication operators have received less attention, but they admit a complete description, due to Sarason, which utilizes the inner outer factorization of functions in $H^2$ \cite{sarason2008unboundedtoeplitz}.
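Since point evaluations (and, later, evaluations of derivatives) of Hardy space functions appear repeatedly in what follows, we record the reproducing kernel of $H^2(\mathbb{D})$ explicitly as a brief worked illustration and to fix notation. For $w\in\mathbb{D}$, the Szeg\H{o} kernel
\[ K_w(z)=\frac{1}{1-\bar{w}z}=\sum_{n=0}^\infty \bar{w}^nz^n \]
lies in $H^2(\mathbb{D})$ since $\sum_{n=0}^\infty|w|^{2n}<\infty$, and for every $f(z)=\sum_{n=0}^\infty f_nz^n\in H^2(\mathbb{D})$,
\[ \langle f, K_w\rangle_{H^2}=\sum_{n=0}^\infty f_n w^n=f(w). \]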
\section{Liouville, Liouville Weighted Composition Operators, and Occupation Kernels}
Given a function $f:\mathbb{D} \to \mathbb{C}$, and setting $\mathcal{D}(A_f) := \{ g \in H^2 : f(\cdot) \frac{d}{dz} g(\cdot) \in H^2\}$, the Liouville operator with symbol $f$, $A_f : \mathcal{D}(A_f) \to H^2$, is given as $A_f g = f(\cdot) \frac{d}{dz} g(\cdot)$. Liouville operators are automatically closed with this domain (cf. \cite{rosenfeld2019occupation}), and thus, when $\mathcal{D}(A_f)$ is all of $H^2$, $A_f$ is bounded. $A_f$ is nearly always unbounded, owing to the inclusion of the differentiation operator, with the notable exception of $f \equiv 0$. When $A_f$ is densely defined, it possesses a well defined adjoint \cite{pedersen2012analysis}.
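As a minimal illustrative computation (the symbol $f(z)=z$ reappears when spectra are computed below), note that for monomials
\[ A_z z^n = z\frac{d}{dz} z^n = n z^n, \qquad n \ge 0, \]
so that $\|A_z z^n\|_{H^2} = n\|z^n\|_{H^2}$. Thus $A_z$ cannot be bounded on $H^2$, even though every polynomial lies in $\mathcal{D}(A_z)$.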
Let $T > 0$ and suppose that $\theta : [0,T] \to \mathbb{D}$ defines a continuous signal in $\mathbb{D}$. The functional on $H^2$ given as $g \mapsto \int_0^T g(\theta(t)) dt$ is bounded, and hence, there is a function in $H^2$, denoted $\Gamma_{\theta}$, such that $\int_0^T g(\theta(t)) dt = \langle g, \Gamma_{\theta} \rangle_{H^2}$ which is called the occupation kernel corresponding to $\theta$ in $H^2$. When $\gamma: [0,T] \to \mathbb{D}$ is a trajectory satisfying $\dot \gamma = f(\gamma)$, then $\Gamma_{\gamma} \in \mathcal{D}(A_f^*)$ and \begin{equation}\label{eq:occupationkernelrelation}
A_f^* \Gamma_\gamma = K_{\gamma(T)} - K_{\gamma(0)}, \end{equation} which is a critical relation in the development of finite rank representations of Liouville operators in \cite{rosenfelddmd}.
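As an elementary example of an occupation kernel, consider the constant signal $\theta(t) \equiv w$ for some $w \in \mathbb{D}$. Then $\int_0^T g(\theta(t))\,dt = T g(w) = \langle g, T K_w\rangle_{H^2}$ for every $g \in H^2$, so $\Gamma_\theta = T K_w$, where $K_w(z) = (1-\bar{w}z)^{-1}$ is the Szeg\H{o} kernel. If, in addition, $w$ is an equilibrium of the dynamics, $f(w)=0$, then $\gamma \equiv w$ satisfies $\dot\gamma = f(\gamma)$ and both sides of \eqref{eq:occupationkernelrelation} vanish: the right-hand side because $\gamma(T)=\gamma(0)$, and the left-hand side because $\langle A_f g, K_w\rangle_{H^2} = f(w)g'(w) = 0$ for every $g \in \mathcal{D}(A_f)$, so that $A_f^* K_w = 0$.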
Clearly, as Liouville operators are typically unbounded, a sequence of finite rank representations is not expected to converge to the Liouville operator itself. A remedy for this shortcoming was the introduction of scaled Liouville operators, $A_{f,a} g = a f(\cdot) \frac{d}{dz} g(a \cdot)$ for $0 < a < 1$, which are compact operators for a wide range of $f$ (e.g. $A_{f,a}$ is compact when $f$ is a polynomial) (cf. \cite{rosenfelddmd}). Though this was demonstrated for the exponential dot product space, the same proof holds over the Hardy space. Scaled Liouville operators can be seen as a special case of \textit{Liouville weighted composition operators} introduced in this manuscript as $A_{f,\varphi} g := f \frac{d}{dz} g(\varphi(\cdot)) \frac{d}{dz} \varphi(\cdot)$.
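To see this containment concretely (reading $\frac{d}{dz} g(a\,\cdot)$ as the derivative of $g$ evaluated at $az$), take $\varphi(z) = az$ for a fixed $0 < a < 1$, so that $\frac{d}{dz}\varphi \equiv a$ and
\[ A_{f,\varphi}\, g(z) = f(z)\, g'(az)\, a = a f(z) g'(az) = A_{f,a}\, g(z); \]
that is, every scaled Liouville operator arises from a Liouville weighted composition operator with an affine composition symbol.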
During the preparation of this manuscript, the authors became aware of a recently published work \cite{fatehi2021normality}, which gives a similar operator, $W_{f,\varphi} g = f(\cdot)\frac{d}{dz} g(\varphi(\cdot))$, and which contains Liouville weighted composition operators as a proper subset. However, with the generality introduced in \cite{fatehi2021normality}, there is a loss of structure of the operators that is exploited here. Specifically, note that for the same trajectory, $\gamma$, given above, the relation \eqref{eq:occupationkernelrelation} extends to Liouville weighted composition operators as follows:
\begin{theorem}\label{thm:occupationkernel_weightedcomp} Suppose that $A_{f,\varphi}:\mathcal{D}(A_{f,\varphi}) \to H^2$ is a closed densely defined operator with domain $\mathcal{D}(A_{f,\varphi}) := \{ g \in H^2 : f(\cdot) \frac{d}{dz}g(\varphi(\cdot))\frac{d}{dz}\varphi(\cdot) \in H^2\}$, and suppose that $\gamma:[0,T]\to \mathbb{D}$ satisfies $\dot \gamma = f(\gamma)$. Then $\Gamma_{\gamma} \in \mathcal{D}(A_{f,\varphi}^*)$ and \begin{equation}\label{eq:weightedcomprelation} A_{f,\varphi}^*\Gamma_{\gamma} = K_{\varphi(\gamma(T))} - K_{\varphi(\gamma(0))}. \end{equation} \end{theorem}
\begin{proof} Let $g \in \mathcal{D}(A_{f,\varphi})$, then \begin{gather*}
\langle A_{f,\varphi} g, \Gamma_{\gamma}\rangle_{H^2} = \int_{0}^T \frac{d}{dz} g(\varphi(\gamma(t))) \frac{d}{dz}\varphi(\gamma(t)) f(\gamma(t))\, dt\\
= \int_0^T \frac{d}{dt}\left[ g(\varphi(\gamma(t))) \right] dt = g(\varphi(\gamma(T))) - g(\varphi(\gamma(0)))\\
= \langle g, K_{\varphi(\gamma(T))} - K_{\varphi(\gamma(0))}\rangle_{H^2}. \end{gather*}
Hence, the functional $g \mapsto \langle A_{f,\varphi} g, \Gamma_{\gamma}\rangle_{H^2}$ is bounded by an application of the Cauchy-Schwarz inequality and the theorem follows. \end{proof}
To wit, the inclusion of $\frac{d}{dz}\varphi$ in the multiplication symbol of the operator allows for the exchange of $\frac{d}{dz} g(\varphi(\gamma(t))) \frac{d}{dz}\varphi(\gamma(t)) f(\gamma(t))$ with $\frac{d}{dt}\left[g(\varphi(\gamma(t)))\right]$. This replacement yields the adjoint relation in \eqref{eq:weightedcomprelation}.
It should be noted that for any continuous signal, $\theta : [0,T] \to \mathbb{D}$, the occupation kernel corresponding to $\theta$ is contained in the domain of the adjoint of any densely defined Liouville weighted composition operator through a different argument. This follows from expressing the occupation kernel as \begin{gather*} \Gamma_{\theta}(z) = \langle \Gamma_{\theta}, K_z \rangle_{H^2(\mathbb{D})} = \overline{\langle K_z, \Gamma_{\theta} \rangle_{H^2(\mathbb{D})}}= \int_{0}^T \overline{K_z(\theta(t))} dt = \sum_{n=0}^\infty \left( \int_0^T \overline{\theta(t)}^n dt \right) z^n. \end{gather*}
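As a worked example of this expansion, let $\theta(t) = re^{it}$ on $[0,2\pi]$ for a fixed $0 < r < 1$. Then $\int_0^{2\pi} \overline{\theta(t)}^n\,dt = r^n\int_0^{2\pi} e^{-int}\,dt$, which equals $2\pi$ for $n = 0$ and vanishes for $n \ge 1$, so $\Gamma_\theta \equiv 2\pi$. Equivalently, $\langle g, \Gamma_\theta\rangle_{H^2} = 2\pi g(0) = \int_0^{2\pi} g(re^{it})\,dt$, which recovers the mean value property of analytic functions.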
\section{Densely Defined Liouville Operators over the Hardy Space}
As polynomials are dense inside of $H^2(\mathbb{D})$, the existence of densely defined Liouville operators over the Hardy space follows immediately. In particular, if $f$ is a polynomial, $A_f p(z)$ is a polynomial for every polynomial $p(z)$. The establishment of a broader class of densely defined Liouville operators can be obtained by following the classification of densely defined multiplication operators over the Hardy space, as was done in \cite{sarason2008unboundedtoeplitz}. In particular, \cite{sarason2008unboundedtoeplitz} appealed to the Smirnov class, $N^+ := \{ b/a : b, a \in H^\infty \text{ and } a \text{ outer} \},$ and the inner outer factorization of functions in $H^2(\mathbb{D})$ to obtain the following theorem.
\begin{theorem}[Sarason \cite{sarason2008unboundedtoeplitz}]\label{thm:sarason}
Let $f : \mathbb{D} \to \mathbb{C}$ be the symbol for the multiplication operator, $M_f : \mathcal{D}(M_{f}) \to H^2(\mathbb{D})$ with $\mathcal{D}(M_{f}) := \{ g \in H^2(\mathbb{D}) : fg \in H^2(\mathbb{D}) \}$. $\mathcal{D}(M_f)$ is dense in $H^2(\mathbb{D})$ iff $f \in N^+$ given as $f = b/a$ where $b,a \in H^\infty$ satisfy $|a|^2 + |b|^2 = 1$, $a$ outer, in which case $\mathcal{D}(M_f) = aH^2(\mathbb{D})$.
The reverse implication of Theorem \ref{thm:sarason} is relatively trivial and exploits the fact that a function $a \in H^\infty$ is outer iff $aH^2(\mathbb{D})$ is dense in $H^2(\mathbb{D})$. The approach to determining which symbols yield densely defined Liouville operators leverages outer functions in combination with the antiderivative operator, $J : H^2(\mathbb{D}) \to H^2(\mathbb{D})$, given as $Jh(z) := \int_0^z h(w) dw = z\int_0^1 h(tz) dt.$
\begin{lemma}\label{lem:dense-J} Let $a \in H^\infty$ be outer. Then $\mathbb{C}+JaH^2$ is dense in $H^2$. \end{lemma}
\begin{proof}
Let $h \in H^2$ where $h(z) = \sum_{n=0}^\infty h_n z^n$ and $\sum_{n=0}^\infty |h_n|^2 < \infty$. Then $Jh(z) = \sum_{n=0}^\infty \frac{h_n}{n+1}z^{n+1},$ and as $\sum_{n=0}^\infty \left|\frac{h_n}{n+1}\right|^2 \le \| h\|_{H^2}^2 < \infty$, we have $Jh \in H^2$. That is, $J:H^2 \to H^2$, and $J$ is norm decreasing. Let $p$ be a polynomial and $\epsilon > 0$. The function $\frac{d}{dz} p$ is also a polynomial, and thus in $H^2(\mathbb{D})$. Since $a$ is outer, $aH^2(\mathbb{D})$ is dense in $H^2(\mathbb{D})$, so there is a function $a g \in aH^2$ such that $\| \frac{d}{dz} p - ag \|_{H^2} < \epsilon$. As $J$ is norm decreasing and $J\left(\frac{d}{dz}p\right) = p - p(0)$, it follows that $\| (p - p(0)) - J(ag) \|_{H^2} < \epsilon$, and hence $p = p(0) + (p-p(0))$ lies in the closure of $\mathbb{C}+JaH^2(\mathbb{D})$. Since polynomials are dense in $H^2(\mathbb{D})$, the space $\mathbb{C} + JaH^2(\mathbb{D})$ is dense inside of $H^2(\mathbb{D})$.
\begin{proposition} If $f \in N^+$ with representation $f=b/a$ as above, then $A_f$ is densely defined, with domain, $\mathcal{D}(A_f),$ containing the dense space $\mathbb{C}+J a H^2,$ where $J h(z) = \int_0^z h(w) dw$. \end{proposition}
\begin{proof} By Lemma \ref{lem:dense-J}, $\mathbb{C}+JaH^2$ is a dense subspace of $H^2$. Moreover, if $g \in \mathbb{C}+JaH^2$, then $g(z) = c + J(ah)(z)$ for some $h \in H^2$ and $c \in \mathbb{C}$. Observe that $\frac{d}{dz} g = ah$, and $f\frac{d}{dz} g = fah = \frac{b}{a} ah = bh \in H^2.$ Therefore $\mathbb{C}+JaH^2 \subset \mathcal{D}(A_f)$, and $A_f$ is densely defined. \end{proof}
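As an illustration of the reach of this proposition (using the standard fact that a polynomial with no zeros in the open disc is outer), consider $f(z) = (1-z)^{-1}$. The polynomial $1-z$ is outer, so $f$ belongs to the Smirnov class $N^+$ and $A_f$ is therefore densely defined, even though $f(z) = \sum_{n=0}^\infty z^n$ lies neither in $H^\infty(\mathbb{D})$ nor in $H^2(\mathbb{D})$.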
\begin{proposition} If $f$ is the symbol for a densely defined operator $A_f$, then $f$ is analytic, and $f = \frac{b}{a \frac{d}{dz} \Phi}$ where $b \in H^2$, $a \in H^2$, $a$ outer, and $\Phi$ a function in BMOA. \end{proposition}
\begin{proof}Suppose that $\mathcal{D}(A_f)$ is dense inside of $H^2$. Select a nonconstant function $g \in \mathcal{D}(A_f)$. The derivative of $g$, $\frac{d}{dz} g$, is analytic, and hence, so is $h(z) := f(z) \frac{d}{dz} g(z)$. Therefore, $f(z) = \frac{h(z)}{\frac{d}{dz}g(z)}$ is analytic wherever the derivative of $g$ is nonvanishing. By density, for each $z_0 \in \mathbb{D}$ there is a corresponding $g \in \mathcal{D}(A_f)$ with nonvanishing derivative at $z_0$. Hence, $f$ is analytic on $\mathbb{D}$. By \cite{cohn1999factorization}, $\frac{d}{dz} g = a \frac{d}{dz} \Phi$ where $a \in H^2$ and outer, and $\Phi$ is BMOA. The proposition follows. \end{proof}
\section{Adjoints of Liouville Operators in the Hardy Space}
\begin{proposition}\label{prop:derivative-adjoint}Let $g^{[j]}_w(z) := \frac{d^j}{d\bar w^j} \left(\frac{1}{1-\bar w z}\right)$, then for all $j \in \mathbb{N}$, $g^{[j-1]}_w \in \mathcal{D}(A_f^*)$, and \begin{equation}\label{eq:adjoint-on-kernel}A_f^* g^{[j-1]}_w = \sum_{\ell = 0}^{j-1} \binom{j-1}{\ell}\overline{f^{(\ell)}(w)} g^{[j-\ell]}_w.\end{equation} \end{proposition}
\begin{proof}
Let $w \in \mathbb{D}$ and set \[g^{[j]}_w(z) = \frac{d^j}{d\bar w^j} \left(\frac{1}{1-\bar w z}\right) = \sum_{n=j}^\infty \frac{n!}{(n-j)!} z^n \bar{w}^{n-j},\] then $g^{[j]}_w \in H^2$ and $\langle h, g^{[j]}_w\rangle_{H^2}=h^{(j)}(w)$.
Suppose $h \in \mathcal{D}(A_f)$, and consider, using the Leibniz rule, \[ \langle A_f h, g^{[j-1]}_w \rangle_{H^2} = \langle h'\cdot f, g^{[j-1]}_w \rangle_{H^2} = \left(f h'\right)^{(j-1)}(w) = \sum_{\ell = 0}^{j-1} \binom{j-1}{\ell} f^{(\ell)}(w) h^{(j-\ell)}(w) = \left\langle h, \sum_{\ell = 0}^{j-1} \binom{j-1}{\ell} \overline{f^{(\ell)}(w)} g^{[j-\ell]}_w \right\rangle_{H^2}.\] Thus, $g^{[j-1]}_w \in \mathcal{D}(A_f^*)$ for all $j \in \mathbb{N}$ and $w \in \mathbb{D}$, and \[ A_f^* g^{[j-1]}_w = \sum_{\ell = 0}^{j-1} \binom{j-1}{\ell} \overline{f^{(\ell)}(w)} g^{[j-\ell]}_w. \]
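Specializing to $j = 1$ gives an identity that is worth recording separately, as it is used below: for every $w \in \mathbb{D}$ the Szeg\H{o} kernel $K_w = g^{[0]}_w$ lies in $\mathcal{D}(A_f^*)$ and
\[ A_f^* K_w = \overline{f(w)}\, g^{[1]}_w, \]
that is, the adjoint maps the kernel of point evaluation at $w$ to $\overline{f(w)}$ times the kernel that evaluates derivatives at $w$.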
\begin{proposition} Suppose that $\gamma:[0,T] \to \mathbb{D}$ is a continuous trajectory satisfying $\frac{d}{dt} \gamma(t) = f(\gamma(t))$, then $A_f^* \Gamma_{\gamma} = k_{\gamma(T)} - k_{\gamma(0)}.$ More generally, suppose that $\theta : [0,T] \to \mathbb{D}$ is merely a continuous signal, then $A_f^* \Gamma_\theta(z) = \int_0^T \overline{ f(\theta(t))}\, g^{[1]}_{\theta(t)}(z)\, dt.$ \end{proposition}
\begin{proof} Let $g \in \mathcal{D}(A_f)$ and consider \begin{gather*} \langle A_f g, \Gamma_{\gamma} \rangle_{H^2(\mathbb{D})} = \int_{0}^T f(\gamma(t)) g'(\gamma(t)) dt\\ = \int_{0}^T \frac{d}{dt} g(\gamma(t)) dt = g(\gamma(T)) - g(\gamma(0)) = \langle g, k_{\gamma(T)} - k_{\gamma(0)} \rangle_{H^2(\mathbb{D})}. \end{gather*} Hence, $A_f^* \Gamma_{\gamma} = k_{\gamma(T)} - k_{\gamma(0)}.$
For the application of the adjoint on $\Gamma_{\theta}$, note that the functional \[g \mapsto \langle A_{f} g, \Gamma_{\theta} \rangle_{H^2(\mathbb{D})} = \int_0^T f(\theta(t))g'(\theta(t))dt\] is bounded as $f$ is continuous and the image of $\theta$ is compact within $\mathbb{D}$ and $g'(\theta(t)) = \langle g, g^{[1]}_{\theta(t)} \rangle_{H^2(\mathbb{D})}$. Hence, there is a function $\tilde h$ such that \[ \langle A_{f} g, \Gamma_{\theta} \rangle_{H^2(\mathbb{D})} = \int_0^T f(\theta(t))g'(\theta(t))dt = \langle g, \tilde h \rangle_{H^2(\mathbb{D})}, \] and $\tilde h = A_f^* \Gamma_{\theta}$. Moreover, since $g'(\theta(t)) = \langle g, g^{[1]}_{\theta(t)} \rangle_{H^2(\mathbb{D})}$, we have $\langle g, \tilde h \rangle_{H^2(\mathbb{D})} = \left\langle g, \int_0^T \overline{f(\theta(t))}\, g^{[1]}_{\theta(t)}\, dt \right\rangle_{H^2(\mathbb{D})}$ for all $g$ in the dense set $\mathcal{D}(A_f)$, so that \[\tilde h(z) = \int_0^T \overline{ f(\theta(t))}\, g^{[1]}_{\theta(t)}(z)\, dt.\]
The above propositions establish the action of the adjoint on particular vectors related to kernel functions. Similar results can be worked out for any RKHS. However, the establishment of a general expression for the adjoint of the Liouville operator is much more involved, and a closed form solution is not expected to be able to be found for general RKHSs. The remainder of the section establishes a formula for the adjoint of the Liouville operator over the Hardy space, which nontrivially leverages the Hardy space's connection with $L^2(\mathbb{T})$ through radial limits.
\begin{theorem}\label{adjoint formula} Let $f$ be the symbol for a densely defined Liouville operator over the Hardy space, and suppose that $h \in \mathcal{D}(A_f^*)$, then \[A_f^* h(z)=P_{H^2}\left(\overline{\frac{f(z)}{z}}\frac{d}{dz}(zh(z))-\overline{\frac{df}{dz}}h(z)\right).\] \end{theorem}
\begin{proof} Suppose that $g \in \mathcal{D}(A_f)$ and $h \in \mathcal{D}(A_f^*)$, then \begin{align*} \ip{A_fg}{h}_{H^2}&=\lim_{r\rightarrow 1}\int_0^{2\pi} f(z)\frac{d}{dz}\left(g(z)\right)\overline{h(z)}\, d\theta\\ &=\lim_{r\rightarrow 1}\int_0^{2\pi} \left(\frac{1}{iz}\frac{dg}{d\theta}\right)f(z)\overline{h(z)}\, d\theta\\ &=\lim_{r\rightarrow 1}\int_0^{2\pi}\frac{dg}{d\theta}\left(\frac{1}{iz}f(z)\overline{h(z)}\right)\, d\theta\\ &=\lim_{r\rightarrow 1}\left[\frac{1}{iz}f(z)g(z)\overline{h(z)}\Bigg\vert^{2\pi}_0\right]-\lim_{r\rightarrow 1}\int_0^{2\pi}g(z)\frac{d}{d\theta}\left[ \frac{1}{iz}f(z)\overline{h(z)}\right]\, d\theta\\ &=-\lim_{r\rightarrow 1}\int_0^{2\pi}g(z)\left[ \frac{1}{iz}f(z)\frac{d\bar{h}}{d\theta}+\overline{h(z)}\frac{d}{d\theta}\left(\frac{f(z)}{iz}\right)\right]\, d\theta\\ &=-\lim_{r\rightarrow 1}\int_0^{2\pi}g(z)\left[ \frac{1}{iz}f(z)\left(-i\bar{z}\overline{\frac{dh}{dz}}\right)+\overline{h(z)}\left(\frac{df}{dz}-\frac{f(z)}{z}\right)\right]\, d\theta\\ &=\lim_{r\rightarrow 1}\int_0^{2\pi}g(z)\left[ \left(\frac{f(z)}{z}\right)\overline{\left(z\frac{dh}{dz}+h(z)\right)}-\overline{h(z)}\frac{df}{dz}\right]\, d\theta\\ &=\ip{g}{\overline{\frac{f(z)}{z}}\frac{d}{dz}(zh(z))-\overline{\frac{df}{dz}}h(z)}_{L^2(\mathbb{T})}\\ &=\ip{P_{H^2}g}{\overline{\frac{f(z)}{z}}\frac{d}{dz}(zh(z))-\overline{\frac{df}{dz}}h(z)}_{L^2(\mathbb{T})}\\ &=\ip{g}{P_{H^2}\left(\overline{\frac{f(z)}{z}}\frac{d}{dz}(zh(z))-\overline{\frac{df}{dz}}h(z)\right)}_{H^2} \end{align*} \end{proof}
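As a brief sanity check of the formula, take $f(z) = z$, so that $\overline{f(z)/z} = 1$ and $\overline{\frac{df}{dz}} = 1$ on the circle. Then
\[ A_z^* h(z) = P_{H^2}\left(\frac{d}{dz}(zh(z)) - h(z)\right) = P_{H^2}\left(zh'(z)\right) = z h'(z) = A_z h(z), \]
in agreement with the symmetry of $A_z$ established in Lemma \ref{lem:Az-symmetric} below.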
\section{Spectrum of Liouville Operators} The connection between Liouville operators and dynamical systems is realized through the eigenfunctions of the Liouville operator, where if $\phi$ is an eigenfunction of $A_f$ with eigenvalue $\lambda$ and $\gamma:[0,T] \to \mathbb{D}$ is a trajectory satisfying $\frac{d}{dt} \gamma(t) = f(\gamma(t))$, then $\frac{d}{dt} \phi(\gamma(t)) = \phi'(\gamma(t)) f(\gamma(t)) = A_f \phi(\gamma(t)) = \lambda \phi(\gamma(t)).$ Hence, $\phi(\gamma(t)) = \phi(\gamma(0)) e^{\lambda t}$ for all $t \in [0,T]$. This connection is leveraged in the study of DMD to provide data driven models of nonlinear dynamical systems \cite{rosenfelddmd}. The application of Liouville operators to the study of DMD for nonlinear dynamical systems motivates the further investigation of the spectrum of these operators.
In general, the spectrum of the Liouville operator is at least dependent on the properties of the symbol $f$. We start with a proposition which ties the spectrum to the existence of zeros in the disc. \begin{proposition}
Let $f\in H^2(\mathbb{D})$ be a function with no zeros in a neighborhood of the closed disc. Then $\sigma(A_f)=\mathbb{C}$ \end{proposition} \begin{proof} We will show that $(A_f-\lambda)$ is not an injective operator, i.e. there exists a non-zero $g(z)$ such that $(A_f-\lambda)g(z)=0$. This function is given by \[g(z)=C\exp\left(\int_0^z\frac{\lambda}{f(z)}dz\right)\] where the above is the path integral from zero to $z$. Note, $\frac{\lambda}{f(z)}$ is bounded, hence the integral and ultimately $g(z)$ is bounded. \end{proof}
In the next sub-section we show that it is possible to get a spectrum which is \emph{not} the entire plane if the symbol is allowed to have zeros in the disc.
\subsection{Symbols with zeros in $\mathbb{D}$} \begin{lemma}\label{lem:Az-symmetric} The operator $A_z$ is symmetric over $H^2(\mathbb{D})$. \end{lemma}
\begin{proof} Let $g(z) = \sum_{n=0}^\infty g_n z^n \in \mathcal{D}(A_z)$ and $h(z) = \sum_{n=0}^\infty h_n z^n\in \mathcal{D}(A_z)$. Then \begin{gather*}\langle A_z g, h \rangle_{H^2} = \left \langle \sum_{n=1}^\infty n g_n z^n, \sum_{n=0}^\infty h_{n} z^n \right\rangle_{H^2}\\ = \sum_{n=1}^\infty n g_n \overline{h_n} = \sum_{n=1}^\infty g_n \overline{n h_n} = \langle g, A_z h \rangle_{H^2}. \end{gather*} \end{proof}
\begin{proposition} If $A_z:D\subset H^2(\mathbb{D})\rightarrow H^2(\mathbb{D})$ is the densely defined Liouville operator given by $A_z(h(z))=zh'(z)$, then $\sigma(A_z)=\{0,1,2,\ldots\}$. \end{proposition}
\begin{proof} Since $A_z$ is symmetric by Lemma \ref{lem:Az-symmetric}, its eigenvalues are real. By inspection, for each integer $n\geq 0$ the monomial $z^n$ is an eigenfunction with eigenvalue $n$. It remains to show that every $\lambda\notin\{0,1,2,\ldots\}$ lies in the resolvent set. Given $g\in H^2$, we seek $h\in H^2(\mathbb{D})$ such that
\[ (A_z - \lambda) h(z) = g(z) \] for a given $g \in H^2.$ Suppose that $h(z) = \sum_{n=0}^\infty h_n z^n$, then \[h'(z) = \sum_{n=0}^\infty h_{n+1}~(n+1)~z^n \quad\text{and}\quad zh'(z) = \sum_{n=1}^\infty h_{n}\, n\, z^n.\] If $g(z)=\sum_{n=0}^\infty g_nz^n$, the equation becomes
\[ -\lambda h_0 + \sum_{n=1}^\infty (nh_n - \lambda h_n)z^n = \sum_{n=0}^\infty g_n z^n.\]
For $n \ge 0$ we have $h_n = \frac{g_n}{n-\lambda}$. Provided that $\lambda \not\in \{0,1,2,\ldots\}$, the quantity $|n-\lambda|$ is bounded away from zero, so $\sum_{n=0}^\infty |h_n|^2 < \infty$, which means that $\lambda \in \rho(A_z)$. \end{proof} Given a non-real $\alpha\in \mathbb{C}$, the symbol $f(z)=\alpha z$ will not give rise to a symmetric operator. The next proposition shows that $f(z)=z$ is not the only symbol such that $\sigma(A_f)\neq \mathbb{C}$.
\begin{proposition}
If $f(z)=\alpha z+\beta$ for $\alpha,\beta \in \mathbb{C}$ with $|\beta/\alpha|<1$, then $\sigma(A_f)=\{\alpha\cdot n\mid n=0,1,2,\ldots\}$. \end{proposition} \begin{proof} In this instance we compute the spectrum of the adjoint $A_f^*$ and note that $\sigma(A_f^*)=\overline{\sigma(A_f)}$. Suppose that $(A_f^*-\bar{\lambda})h(z)=g(z)$ for some $h,g\in H^2$. Suppose $h(z)=\sum_{n=0}^\infty h_nz^n$ and invoke Theorem \ref{adjoint formula}. We get \begin{align} \nonumber\sum_{n=1}^\infty \bar{\alpha}nh_nz^n+\sum_{n=2}^\infty\bar{\beta}(n-1)h_{n-1}z^n+\sum_{n=1}^\infty \bar{\beta}h_{n-1}z^n-\sum_{n=0}^\infty\bar{\lambda}h_nz^n&=\sum_{n=0}^\infty g_nz^n\\ \label{eq:eigenvectorequation}-\bar{\lambda}h_0 +(\bar{\alpha}h_1+\bar{\beta}h_0-\bar{\lambda}h_1)z+\sum_{n=2}^\infty \left[(\bar{\alpha}n-\bar{\lambda})h_n+\bar{\beta}nh_{n-1}\right]z^n&=\sum_{n=0}^\infty g_nz^n \end{align} From the above we get \[h_0=\frac{-g_0}{\bar{\lambda}}, \quad\text{ and }\quad h_{n}=\frac{g_n-n\bar{\beta}h_{n-1}}{(n\bar{\alpha}-\bar{\lambda})} \quad \text{for } n\geq 1.\]
Assume that $\lambda \neq \alpha n$ for any $n \in \mathbb{N}$. Write $h_n = d_n g_n + e_n h_{n-1}$, where $d_n = \frac{1}{n\bar{\alpha}-\bar{\lambda}}$ and $e_n = \frac{-n\bar{\beta}}{n\bar{\alpha}-\bar{\lambda}}$. The sequence $e_n \to -\frac{\bar \beta}{\bar \alpha}$, so $\limsup |e_n| = |\beta/\alpha| < 1$, and $d_n \to 0$. Without loss of generality, assume that $|e_n| < e < 1$ and $|d_n| < d < 1$ for all $n \ge 1$. Write $E_m := \prod_{n=1}^m e_n$, with $E_0 := 1$.
Note that $h_1 = d_1 g_1 + e_1 h_0,$ $h_2 = d_2 g_2 + e_2 d_1 g_1 + E_2 h_0,$ $h_3 = d_3 g_3 + (E_3/E_2) d_2 g_2 + (E_3/E_1) d_1 g_1 + E_3 h_0,$
and more generally, \[h_n = d_n g_n + (E_n/E_{n-1}) d_{n-1} g_{n-1} + \cdots + (E_{n}/E_1) d_1 g_1 + E_n h_0.\]
The function $h(z) = \sum_{n=0}^\infty h_n z^n$ may be expressed as the sum of the following terms \begin{align*}
    h_0 && E_1 h_0 z && E_2 h_0 z^2 && E_3 h_0 z^3 && E_4 h_0 z^4 && \cdots\\
&& d_1 g_1 z && E_2/E_1 d_1 g_1 z^2 && E_3/E_1 d_1 g_1 z^3 && E_4/E_1 d_1 g_1 z^4 && \cdots\\
&& && d_2 g_2 z^2 && E_3/E_2 d_2 g_2 z^3 && E_4/E_2 d_2 g_2 z^4 && \cdots\\
&& && && d_3 g_3 z^3 && E_4/E_3 d_3 g_3 z^4 && \cdots\\
&& && && && d_4 g_4 z^4 && \cdots. \end{align*}
Define each sum along the diagonals as $z^iG_i(z)$. Since $|E_{k+i}/E_k| \le e^i$ for every $k \ge 0$ and $|h_0| = |g_0|/|\lambda|$, each coefficient of $G_i$ is bounded by $e^i \max(d, 1/|\lambda|)$ times the corresponding coefficient of $g$, so $\| G_i \| \le e^i \max(d,1/|\lambda|) \|g\|$. Hence, $h(z) = \sum_{i=0}^\infty z^i G_i(z)$, and $\| h \| \le \|g\| \max(d,1/|\lambda|) \sum_{i=0}^\infty e^i = \| g\| \frac{\max(d,1/|\lambda|)}{1-e} < \infty$. Thus, $h \in H^2$, as absolutely convergent series converge in Banach spaces.
Thus, as long as $\lambda \neq \alpha n$ for every $n$, $A^*_{\alpha z + \beta} - \bar \lambda$ is invertible. If there exists $n \in \mathbb{N}$ such that $\lambda = \alpha n$, then the coefficient on $h_n$ in the left hand side of \eqref{eq:eigenvectorequation} is zero. Hence, $h_n$ is unconstrained and $A_{\alpha z + \beta}^* - \bar\lambda$ is not invertible. \end{proof}
\begin{lemma} Let $m,j\in \mathbb{N}$ then, \[A_{z^m}^*(z^j)= \left\{\begin{array}{llr} 0 & \ & j=0, \ldots, m-1\\ \ & &\\ (j-m+1)z^{j-m+1}&\ & j\geq m \end{array} \right.\] \end{lemma} \begin{proof} Apply the adjoint formula. \end{proof} \begin{proposition} For each $m > 1$, every $\lambda \in \mathbb{C}$ is an eigenvalue for $A_{z^m}^*$ with an eigenspace of dimension at least $m-1$. \end{proposition} \begin{proof} We will produce the eigenvectors using series methods. Applying the adjoint formula from above, for $f(z)=z^m$ and $h(z)=\sum_{n=0}^\infty h_nz^n$ we have that \[A_f^*h(z)=\sum_{n=0}^\infty nh_{n+m-1}z^n.\] Assume $\lambda \neq 0$ (for $\lambda=0$ the claim is immediate from the lemma, since $1,z,\ldots,z^{m-1}$ lie in the kernel of $A_{z^m}^*$) and apply the eigenvector equation to $h(z)$. We get that \[\lambda h_n=nh_{n+m-1}.\] Thus, for constants $h_1,\ldots,h_{m-1}$,
\[h(z)=\sum_{k=1}^{m-1}h_k\cdot\left[\sum_{n=0}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}z^{k+n(m-1)}\right].\] For $k\in \{1,\ldots, m-1\}$ define \[H_k(z)=\sum_{n=0}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}z^{k+n(m-1)}.\] We now show that these are eigenfunctions for $A_f^*$ for $f(z)=z^m$. Let $\lambda\in \mathbb{C}$ and $k\in \{1,\ldots, m-1\}$ be fixed, and set \[a_n=\frac{\lambda^{2n}}{\left[\prod_{j=0}^{n-1}(k+j(m-1))\right]^2}.\] Note,
\[\lim_{n\rightarrow \infty}\left|\frac{a_{n+1}}{a_{n}}\right|=\lim_{n\rightarrow \infty}\left|\frac{\lambda^2}{(k+n(m-1))^2}\right|=0.\] Hence $\sum_n|a_n|<\infty$ by the ratio test, so $\|H_k\|^2=\sum_n|a_n|<\infty$ and $H_k\in H^2(\mathbb{D}).$ Since monomials are in the domain of the adjoint $A_{z^m}^*$, we have the following computation whenever $H_k(z)$ is in the domain of $A_{z^m}^*$. \begin{align*}
A_{z^m}^*(H_k)(z)&=\sum_{n=0}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}A_{z^m}^*\left(z^{k+n(m-1)}\right)\\
&=0+\sum_{n=1}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}A_{z^m}^*\left(z^{k+n(m-1)}\right)\\
&=\sum_{n=1}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}\cdot(k+n(m-1)-m+1)\cdot\left(z^{k+n(m-1)-m+1}\right)\\
&=\sum_{n=1}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}\cdot(k+(n-1)(m-1))\cdot\left(z^{k+(n-1)(m-1)}\right)\\
&=\sum_{n=1}^\infty \frac{\lambda\cdot\lambda^{n-1}}{\prod_{j=0}^{n-2}(k+j(m-1))}\left(z^{k+(n-1)(m-1)}\right)\\
&=\lambda\cdot \sum_{n=0}^\infty \frac{\lambda^n}{\prod_{j=0}^{n-1}(k+j(m-1))}z^{k+n(m-1)}\\
&=\lambda H_k(z) \end{align*} The above calculation is valid if and only if the resulting series lands back in $H^2$, which, by the estimate above, is the case for every $\lambda\in \mathbb{C}$. \end{proof}
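To make the eigenfunctions concrete, take $m=2$ and $k=1$, so that $\prod_{j=0}^{n-1}(1+j) = n!$ and
\[ H_1(z) = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\, z^{n+1} = z e^{\lambda z}. \]
The lemma above gives $A_{z^2}^*(z^{n+1}) = n z^n$ for $n \ge 1$ and $A_{z^2}^*(z) = 0$, so term by term
\[ A_{z^2}^*\left(z e^{\lambda z}\right) = \sum_{n\ge1} \frac{\lambda^n}{(n-1)!}\, z^n = \lambda z e^{\lambda z}; \]
thus $z e^{\lambda z}$ is an eigenfunction of $A_{z^2}^*$ for every $\lambda \in \mathbb{C}$.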
\begin{corollary}Let $f$ be the symbol for a densely defined Liouville operator, $A_f$, and let $\{ z_j \}_{j \in \Lambda}$ (where $\Lambda$ is a potentially infinite index set) be the collection of zeros of $f$ with multiplicities $\{ m_j \}_{j \in \Lambda}$. Then $0$ is an eigenvalue for $A_f^*$ with eigendimension $\sum_{j \in \Lambda} m_j$. \end{corollary} \begin{proof}Since $A_f$ is densely defined, $f$ is analytic in the disc. Then for each $i \in \Lambda$, $f(z) = (z-z_i)^{m_i}f_{i}(z)$, where $f_i(z)$ does not vanish at $z_i$. Note that $f^{(\ell)}(z_i) = 0$ for $\ell=0,\ldots,m_i-1$.
By \eqref{eq:adjoint-on-kernel}, \[ A_f^* g^{[j-1]}_{z_i} = \sum_{\ell = 0}^{j-1} \binom{j-1}{\ell} \overline{f^{(\ell)}(z_i)} g^{[j-\ell]}_{z_i} = 0\] for $j = 1,\ldots,{m_i}.$ Hence, $g^{[j-1]}_{z_i}$ is an eigenfunction for $A_f^*$ with eigenvalue $0$, and since the functions $g^{[0]}_{z_i},\ldots,g^{[m_i-1]}_{z_i}$ are linearly independent, there is a contribution of $m_i$ dimensions to the zero eigenspace. As the above argument applies for all $i \in \Lambda$, the conclusion follows. \end{proof}
\section{An Application of the Adjoint Formula} We will show that the only symbols that can give rise to a self-adjoint Liouville operator are $f(z)=cz$ for $c\in \mathbb{R}$. This is more restrictive than what is seen for Toeplitz operators, where self-adjointness requires only a real-valued symbol.
\begin{theorem}If $A_f$ is self-adjoint then $f(z)=cz$ for $c\in \mathbb{R}$. \end{theorem} \begin{proof} Let $f(z)=\sum_{n=0}^\infty f_n z^n$ be the symbol of a Liouville operator and suppose $A_f$ is self-adjoint. By Proposition \ref{prop:derivative-adjoint}, the kernel functions $K_w$ lie in the domain of $A_f^*$, and hence in the domain of $A_f$ by self-adjointness. If we apply the two operators to the kernel function $K_w(z)$ we have that \[A_f[K_w](z)=\bar{w}f_0+(\bar{w}f_1+2f_0\bar{w}^2)z+\ldots\] and \[A_f^*[K_w](z)= 0+\overline{f(w)}\, z+ \ldots \] Moreover, the above holds for all $w\in \mathbb{D}$. Comparing constant terms gives $f_0=0$, and comparing the coefficients of $z$ then gives $\overline{f(w)}=f_1\bar{w}$ for all $w\in\mathbb{D}$, so that $f(w)=\overline{f_1}\,w$. Comparing Taylor coefficients yields $f_1=\overline{f_1}$, so $f_1\in\mathbb{R}$ and $f(z)=f_1z$. This is consistent with the fact that self-adjoint operators must have real spectrum, since $\sigma(A_f)=\{f_1n\mid n\in \mathbb{N}\}$ by the proposition above. \end{proof} \section{Weighted Liouville Operators} In this section, we discuss weighted Liouville operators and prove some results in analogy to what is known for weighted composition operators. For a good overview of weighted composition operators over the Hardy space see \cite{matache2008weighted}.
\begin{definition} A weighted Liouville operator, with symbols $f$ and $\varphi$, is given formally as \[A_{f,\varphi}g(z)=f(z)\frac{d}{dz}(g(\varphi(z)))=f(z)\varphi'(z)g'(\varphi(z)).\] \end{definition}
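For a simple illustration, take $f(z)=1$ and $\varphi(z)=z^2$: then $A_{1,z^2}g(z)=2zg'(z^2)$, so on monomials $A_{1,z^2}(z^n)=2nz^{2n-1}$.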
\subsection{Conditions for Self-Adjointness of Liouville Weighted Composition Operators} Here we wish to show how the selection of composition symbol, $\varphi(z)$, can influence the form of the symbol $f(z)$. Specifically, assuming $A_{f,\varphi}$ is a bounded and self-adjoint operator, then $A_{f,\varphi} K(\cdot,z) = A_{f,\varphi}^* K(\cdot,z)$, which can be utilized to extract conditions connecting $f$ and $\varphi$.
\begin{definition} Define \[K^{(1)}_w(z)=\frac{z}{(1-\bar{w}z)^2}.\] We call $K^{(1)}_w(z)\in H^2$ the reproducing kernel for the derivative of a Hardy space function at the point $w\in \mathbb{D}$; indeed $K^{(1)}_w(z)=\sum_{n\geq 1}n\bar{w}^{\,n-1}z^n$, so that $\ip{g}{K^{(1)}_w}=g'(w)$ for every $g\in H^2$. \end{definition} \begin{lemma}For a densely defined weighted Liouville operator we have $A_{f,\varphi}^*K_w=\overline{f(w)\varphi'(w)}K^{(1)}_{\varphi(w)}.$
\end{lemma}
\begin{proof}Suppose that $g \in \mathcal{D}(A_{f,\varphi})$. Then by the reproducing property \[\ip{A_{f,\varphi}g}{K_w}=g'(\varphi(w))\varphi'(w)f(w)=\ip{g}{\overline{f(w)\varphi'(w)}K^{(1)}_{\varphi(w)}}.\] Therefore, \[A_{f,\varphi}^*K_w=\overline{f(w)\varphi'(w)}K^{(1)}_{\varphi(w)}.\] \end{proof}
\begin{theorem}\label{thm:self-adjoint} If $A_{f,\varphi}$ is bounded and self-adjoint, then necessarily \begin{equation}\label{relationonsymbols}
\varphi'(z)f(z)=\frac{(z-\overline{\varphi(0)}z^2)\left(\overline{\varphi'(0)f'(0)+f(0)\varphi''(0)}\right)+2z^2\overline{\varphi'(0)^2f(0)}}{(1-\overline{\varphi(0)}z)^3} \end{equation} is satisfied. \end{theorem}
\begin{proof} If $A_{f,\varphi}^* K(z,\alpha) = A_{f,\varphi} K(z,\alpha)$, then \begin{equation}\label{eq:selfadjointrelation} \frac{\overline{\varphi'(\alpha) f(\alpha)} z}{(1-\overline{\varphi(\alpha)}z)^2} = \frac{\varphi'(z)f(z) \bar\alpha}{(1-\bar \alpha \varphi(z))^2}\end{equation} for the specific case of the Hardy space. This gives the necessary condition for a bounded self-adjoint Liouville weighted composition operator over the Hardy space. As $z$ is independent of $\bar{\alpha}$, we may differentiate both sides with respect to $\bar{\alpha}$ and subtract: \begin{equation} \frac{z(1-\overline{\varphi(\alpha)}z)\left(\overline{\varphi'(\alpha)f'(\alpha)+f(\alpha)\varphi''(\alpha)}\right)+2z^2\overline{\varphi'(\alpha)^2f(\alpha)}}{(1-\overline{\varphi(\alpha)}z)^3}-\frac{\bigl[(1-\bar{\alpha}\varphi(z))+2\bar{\alpha}\varphi(z)\bigr]\varphi'(z)f(z)}{(1-\bar{\alpha}\varphi(z))^3}=0. \end{equation} Setting $\bar{\alpha}=0$ and rearranging gives \[\varphi'(z)f(z)=\frac{(z-\overline{\varphi(0)}z^2)\left(\overline{\varphi'(0)f'(0)+f(0)\varphi''(0)}\right)+2z^2\overline{\varphi'(0)^2f(0)}}{(1-\overline{\varphi(0)}z)^3}.\] \end{proof}
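As a quick sanity check (not needed in the sequel), take $\varphi(z)=z$, so that $\varphi(0)=0$, $\varphi'(0)=1$ and $\varphi''(0)=0$. Then \eqref{relationonsymbols} reads \[f(z)=\overline{f'(0)}\,z+2\overline{f(0)}\,z^2,\] which forces $f(0)=0$ and $f'(0)=\overline{f'(0)}$, that is, $f(z)=cz$ with $c\in\mathbb{R}$, recovering the necessary condition obtained for unweighted Liouville operators in the previous section.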
\begin{proposition}\label{prop:occkernelselfadjoint} Consider the Liouville weighted composition operator, $A_{f,\varphi}$, as in Theorem \ref{thm:self-adjoint}. Let $T > 0$ and suppose that $\gamma:[0,T] \to \mathbb{D}$ satisfies $\dot \gamma(t) = f(\gamma(t))$ for $t \in [0,T]$, and let $\Gamma_{\gamma} \in H^2$ be the occupation kernel corresponding to $\gamma$. Then the following relation holds: \begin{equation}\label{eq:occkernel_selfadjoint} \left[ \int_0^T K'_{\gamma(t)}(z)dt \right] \varphi'(z)f(z) = K_{\varphi(\gamma(T))} - K_{\varphi(\gamma(0))}. \end{equation} \end{proposition}
\begin{proof} The relation in \eqref{eq:occkernel_selfadjoint} follows from taking the derivative under the integral of $\Gamma_{\gamma}(z) = \int_0^T K_{\gamma(t)}(z) dt$ on the left hand side, and leveraging Theorem \ref{thm:occupationkernel_weightedcomp} on the right. \end{proof}
It can be noted that \eqref{eq:selfadjointrelation} follows from Proposition \ref{prop:occkernelselfadjoint} after taking the derivative with respect to $T$ of an appropriate trajectory.
\subsection{Boundedness for Liouville Weighted Composition Operators} The action of the adjoint of a weighted composition operator on a kernel function gives some immediate boundedness conditions. One can establish that for a weighted composition operator we have $W^*_{f,\varphi}K_\omega=\overline{f(\omega)}K_{\varphi(\omega)}$. As a corollary, if $W_{f,\varphi}$ is a bounded weighted composition operator on $H^2(\mathbb{D})$ then necessarily
$B:=\sup\left\{\frac{|f(\omega)|^2(1-|\omega|^2)}{1-|\varphi(\omega)|^2}\ : \ \omega\in \mathbb{D}\right\}<\infty$. We will prove an analogous proposition.
\begin{proposition}\label{conj1} If $A_{f,\varphi}$ is bounded then necessarily
\[B':=\sup\left\{\frac{|f(\omega)|^2|\varphi'(\omega)|^2(1-|\omega|^2)}{(1-|\varphi(\omega)|^2)^2}\cdot\left(\frac{1+|\varphi(\omega)|^2}{1-|\varphi(\omega)|^2}\right)\ : \ \omega\in \mathbb{D}\right\}<\infty \] \end{proposition} \begin{proof}
Let $k_w=\sqrt{1-|w|^2}K_w$ be the normalized kernel at $w\in \mathbb{D}$. Note that \begin{equation}\label{action on kernel}
\|A_{f,\varphi}^*k_w\|^2=|f(w)|^2|\varphi'(w)|^2\left(1-|w|^2\right)\|K^{(1)}_{\varphi(w)}\|^2 \end{equation} where, since $K^{(1)}_u(z)=\sum_{n\geq 1}n\bar{u}^{\,n-1}z^n$,
\[\|K^{(1)}_{\varphi(w)}\|^2=\sum_{n\geq 1}n^2|\varphi(w)|^{2(n-1)}=\frac{1+|\varphi(w)|^2}{\left(1-|\varphi(w)|^2\right)^3}.\]
The proposition is a direct consequence of the above formulas and the fact that \[\|A_{f,\varphi}^*k_w\|^2\leq \|A_{f,\varphi}^*\|^2=\|A_{f,\varphi}\|^2.\] \end{proof} If one has the additional assumption that $\varphi$ is a finite Blaschke product, then a stronger statement can be made. In particular, the additional structure given by the appearance of the derivative of $\varphi$ leads to the following corollary. \begin{corollary} If $\varphi$ is a finite Blaschke product and $A_{f,\varphi}$ is bounded then necessarily
\[B':=\sup\left\{\frac{|f(\omega)|^2(1+|\varphi(\omega)|^2)}{(1-|\varphi(\omega)|^2)^2}\ : \ \omega\in \mathbb{D}\right\}<\infty \] \end{corollary} \begin{proof} We need only note that
\[\lim_{|\omega|\rightarrow 1}\frac{|\varphi'(\omega)|(1-|\omega|^2)}{1-|\varphi(\omega)|^2}=1\] when $\varphi$ is a finite Blaschke product, as noted in \cite{Ross-Garcia-Masreghi}. \end{proof}
\subsection{Compactness for Liouville Weighted Composition Operators} Some immediate insight can also be gained concerning the compactness of Liouville weighted composition operators.
\begin{proposition}If $A_{f,\varphi}$ is a compact Liouville weighted composition operator then necessarily,
\[\lim_{|\omega|\rightarrow 1^-}\frac{|f(\omega)|^2|\varphi'(\omega)|^2(1-|\omega|^2)}{(1-|\varphi(\omega)|^2)^2}\cdot\left(\frac{1+|\varphi(\omega)|^2}{1-|\varphi(\omega)|^2}\right)=0\] \end{proposition} \begin{proof}
Since $A_{f,\varphi}$ is compact and $k_w$ converges weakly to zero as $|w|\rightarrow 1^-$, we have $\|A^*_{f,\varphi}k_w\|\rightarrow 0$, which by Equation \ref{action on kernel} is equivalent to the expression above. \end{proof} \begin{proposition}\label{liminf} If $A_{f,\varphi}$ is compact with $f\in H^2(\mathbb{D})$ and not identically zero then necessarily
\[\limsup_n \{n^2|\varphi(z)|^{2n-2}|\varphi'(z)|^2\}<1\] almost everywhere on $\mathbb{T}$. \end{proposition} \begin{proof}
For the sake of contradiction, assume there exists a set $E\subset \mathbb{T}$ of positive measure on which the above condition fails. Since compact operators take weakly convergent sequences to norm convergent sequences and $z^n$ converges weakly to zero, we must have $\|A_{f,\varphi}(z^n)\|_2\rightarrow 0$. On the other hand,
\[\|A_{f,\varphi}(z^n)\|_2^2=\int_{\mathbb{T}}|f(z)|^2 n^2|\varphi(z)|^{2n-2}|\varphi'(z)|^2\,dz\geq\int_{E}|f(z)|^2\,dz\] and this lower bound holds for any function $f$. If $f$ is not identically zero we get a contradiction. \end{proof}
The above approach of computing the $L^2(\mathbb{T})$ norm for the monomials can be pushed further. Namely, one can compute the Hilbert-Schmidt norm. When finite, this implies the operator is compact and thus gives a sufficient condition for compactness.
\begin{proposition} $A_{f,\varphi}$ is Hilbert-Schmidt if and only if
\[\int_{\mathbb{T}}|f(z)|^2|\varphi'(z)|^2\frac{1+|\varphi(z)|^2}{(1-|\varphi(z)|^2)^3}\, dz<\infty\] \end{proposition} \begin{proof}
Since $\varphi(z)$ and its derivative are analytic, in order for the quantity appearing in Proposition \ref{liminf} to be bounded we need $|\varphi|<1$ on $\mathbb{T}$. Hence, using the standard orthonormal basis for $H^2(\mathbb{D})$,
\[\|A_{f,\varphi}\|_{\text{HS}}^2=\sum_{n=0}^\infty \int_{\mathbb{T}}|f(z)|^2 n^2|\varphi(z)|^{2n-2}|\varphi'(z)|^2\, dz = \int_{\mathbb{T}}|f(z)|^2|\varphi'(z)|^2\frac{1+|\varphi(z)|^2}{(1-|\varphi(z)|^2)^3}\, dz.\] \end{proof}
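As a concrete instance of the criterion (included only as a sanity check), take $f\equiv 1$ and $\varphi(z)=z/2$. Then $A_{1,z/2}(z^n)=\frac{n}{2^n}z^{n-1}$, so \[\|A_{1,z/2}\|_{\text{HS}}^2=\sum_{n\geq 1}\frac{n^2}{4^n}=\frac{20}{27}<\infty,\] and $A_{1,z/2}$ is Hilbert-Schmidt, as expected since $|\varphi|=\frac{1}{2}$ on $\mathbb{T}$ keeps the integrand above bounded.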
\end{document} | arXiv |
\begin{document}
\begin{abstract} Let $\phi: X \to {\mathbb P}^n$ be a double cover
branched along a smooth hypersurface of degree $2m, 2 \leq m \leq n-1$.
We study the varieties of minimal rational tangents ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ at a general point $x$ of $X$. We describe the homogeneous ideal of ${\mathcal C}_x$ and show that the projective isomorphism type of ${\mathcal C}_x$ varies in a maximal way as $x$ varies over general points of $X$. Our description of the ideal of ${\mathcal C}_x$ implies a certain rigidity property of the covering morphism $\phi$. As an application of this rigidity, we show that any finite morphism between such double covers with $m=n-1$ must be an isomorphism. We also prove that the Liouville-type extension property holds with respect to minimal rational curves on $X$. \end{abstract}
\maketitle
\noindent {\sc Keywords.} double covers of projective space, Fano manifolds, varieties of minimal rational tangents
\noindent {\sc AMS Classification.} 14J45
\section{Introduction} Throughout the paper, we will work over the field of complex numbers. Let $X$ be a Fano manifold of Picard number 1. For a general point $x \in X$, a rational curve through $x$ is called a minimal rational curve if its degree with respect to $K^{-1}_X$ is minimal among all rational curves through $x$. Denote by ${\mathcal K}_x$ the space of minimal rational curves through $x$. The projective subvariety ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ defined as the union of tangent directions to members of ${\mathcal K}_x$ is called the variety of minimal rational tangents (VMRT) at $x$. The projective geometry of ${\mathcal C}_x$ plays a key role
in understanding the geometry of $X$, often leading to a certain
rigidity phenomenon (cf. the survey \cite{Hw01}) on $X$. This motivates
the study of the geometry of ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ for various examples of $X$.
In the current article, we study the case when $X$ is a double
cover $\phi: X \to {\mathbb P}^n, n \geq 3,$ of projective space ${\mathbb P}^n$
branched along a smooth hypersurface
$Y \subset {\mathbb P}^n$ of degree $2m, 2 \leq m \leq n-1$. Although this is
one of the basic examples of Fano manifolds, its VMRT ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ has not
been described explicitly.
Our first result
is the following description of the defining equations of the VMRT.
\begin{theorem}\label{t.VMRT} For a double cover $X \to {\mathbb P}^n, n \geq 3,$
branched along a smooth hypersurface of degree $2m, 2 \leq m \leq n-1$,
the VMRT ${\mathcal C}_x \subset {\mathbb P}
T_x(X)$ at a general point $x \in X$ is a smooth complete
intersection of multi-degree $(m+1, m+2, \ldots, 2m)$.
\end{theorem}
It is enlightening to compare Theorem \ref{t.VMRT} with the case when $X$ is a smooth hypersurface of degree $m, 2 \leq m \leq n$, in ${\mathbb P}^{n+1}$. In the latter case, it is classical that the VMRT at a general point is a smooth complete intersection of multi-degree $(2,3, \ldots, m)$ (e.g. Example 1.4.2 in \cite{Hw01} or \cite{LR}).
In the course of proving Theorem \ref{t.VMRT}, we will also prove the following partial converse to it.
\begin{theorem}\label{t.converse} Let $Z \subset {\mathbb P}^{n-1}, n \geq 3,$ be a general complete intersection of multi-degree $(m+1, m+2, \ldots, 2m)$ with $2 \leq m \leq n-1$. Then there exists a smooth hypersurface $Y \subset {\mathbb P}^n$ of degree $2m$ such that a double cover $X$ of ${\mathbb P}^n$ branched along $Y$ has a point $x \in X$ with its VMRT ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ isomorphic to $Z \subset {\mathbb P}^{n-1}$. \end{theorem}
Theorem \ref{t.VMRT} and Theorem \ref{t.converse} are proved by explicit computation for a certain choice of $Y$, based on the fact that minimal rational curves of $X$ correspond to lines of ${\mathbb P}^n$ which have even contact order with $Y \subset {\mathbb P}^n$ as recalled in Proposition \ref{p.mrc}. Then Theorem \ref{t.VMRT} for arbitrary smooth $Y$ can be obtained by a flatness argument.
Our explicit computation enables us to study also the variation of the VMRT ${\mathcal C}_x$ as $x$ varies over $X$. Describing the variation of VMRT is not an easy problem even for very simple Fano manifolds, such as hypersurfaces in ${\mathbb P}^{n+1}.$ In \cite{LR}, Landsberg and Robles proved that when $X$ is a general hypersurface of degree $\leq n$ in ${\mathbb P}^{n+1}$, the VMRT at general points of $X$ have maximal variation. We will prove the following analogue of their result in our setting.
\begin{theorem}\label{t.LR} Let $Y\subset {\mathbb P}^n, n \geq 4,$ be a general hypersurface of degree $2m, 2 \leq m \leq n-1$, and let $X$ be a double cover of ${\mathbb P}^n$ branched along $Y$. Then the family of VMRT's
$$\{ {\mathcal C}_x \subset {\mathbb P} T_x(X) \ | \mbox{ general } x \in X \}$$ has maximal variation. More precisely, for a general point $x \in X$, choose a trivialization of ${\mathbb P} T(U) \cong {\mathbb P}^{n-1} \times U$ in a neighborhood $U$ of $x$. Define a morphism $\zeta: U \to {\rm Hilb}({\mathbb P}^{n-1})$ by $\zeta(y) := [{\mathcal C}_y]$ for $y \in U$. Then the rank of $d\zeta_x$ is $n$ and the intersection of the image of $\zeta$ and the $GL(n,{\mathbb C})$-orbit of $\zeta(x)$ is isolated at $\zeta(x)$. \end{theorem}
The condition $n \geq 4$ in Theorem \ref{t.LR} excludes the case of $(n,m) = (3,2)$. It is likely that the statement of Theorem \ref{t.LR} holds also for this case. However, this case seems to require much more complicated computation.
As mentioned at the beginning, the geometry of ${\mathcal C}_x$ often leads to a certain rigidity result. What rigidity phenomenon does our description of ${\mathcal C}_x$ exhibit? The double covering morphism $\phi: X \to {\mathbb P}^n$ sends members of ${\mathcal K}_x$ to lines. We show that this property characterizes $\phi$ in the following strong sense.
\begin{theorem}\label{t.germ} Let $Y\subset {\mathbb P}^n, n \geq 3,$ be a smooth
hypersurface of degree $2m, 2 \leq m \leq n-1$, and let $\phi: X \to {\mathbb P}^n$ be a double cover branched along $Y$. Let $U \subset X$ be a neighborhood (in classical topology) of a general point $x \in X$ and $\varphi: U \to {\mathbb P}^n$ be a biholomorphic immersion such that for any member $C$ of ${\mathcal K}_y, y \in U$, the image $\varphi(C \cap U)$ is contained in a line in ${\mathbb P}^n$. Then there exists a projective transformation $\psi:
{\mathbb P}^n \to {\mathbb P}^n$ such that $\varphi= \psi \circ (\phi|_U)$. \end{theorem}
As a consequence, we obtain the following algebraic version.
\begin{corollary}\label{c.finite} In the setting of Theorem \ref{t.germ}, let $\hat{X}$ be an $n$-dimensional projective variety equipped with generically finite surjective morphisms $g: \hat{X} \to X$ and $h: \hat{X} \to {\mathbb P}^n$ such that for a minimal rational curve $C$ through a general point of $X$, there exists an irreducible component $C'$ of $g^{-1}(C)$ whose image $h(C') \subset {\mathbb P}^n$ is a line. Then there exists an automorphism $\psi: {\mathbb P}^n \to {\mathbb P}^n$ such that $h= \psi \circ \phi \circ g$. \end{corollary}
This is a remarkable property of the double cover $\phi:X \to {\mathbb P}^n$, because an analogous statement fails drastically for many examples of Fano manifolds of Picard number 1 as the following example shows.
\begin{example}\label{e.counter} Let $X \subset {\mathbb P}^N$ be a Fano manifold embedded in projective space of dimension $N > n = \dim X$ such that lines of ${\mathbb P}^N$ lying on $X$ cover $X$, i.e, minimal rational curves on $X$ are lines of ${\mathbb P}^N$ lying on $X$. There are many such examples, e.g., rational homogeneous spaces under a minimal embedding or complete intersections of low degree in ${\mathbb P}^N$. We can define a finite projection $\phi: X \to {\mathbb P}^n$ to a linear subspace ${\mathbb P}^n \subset {\mathbb P}^N$ by choosing a suitable linear subspace ${\mathbb P}^{N-n-1} \subset ({\mathbb P}^N \setminus X).$ Then minimal rational curves are sent to lines in ${\mathbb P}^n$ by $\phi$. There are many different ways to choose such ${\mathbb P}^{N-n-1}$ and projections. For most examples of $X$, different choices of $\phi$ need not be related by projective transformations of ${\mathbb P}^n$. \end{example}
What makes the difference between Corollary \ref{c.finite} and Example \ref{e.counter}? The key point is that the ideal defining the VMRT of the double cover, as described in Theorem
\ref{t.VMRT}, does not contain a quadratic polynomial. In fact, we will prove a general version,
Theorem \ref{t.general}, of Theorem \ref{t.germ} where the double cover $X$ is replaced by any Fano manifold whose VMRT at a general
point is not contained in a hyperquadric. In this regard, we should mention that our double cover $\phi:X \to {\mathbb P}^n$ is the {\em first known example} of a Fano manifold with Picard number 1 whose VMRT at a general point is not contained in a hyperquadric. Note that the VMRT's of Fano manifolds in Example \ref{e.counter} are contained in hyperquadrics coming from the second fundamental form of $X \subset {\mathbb P}^N$.
Theorem \ref{t.germ} has an application in the study of morphisms between
double covers. There
have been several works, e.g., \cite{A1}, \cite{A2}, \cite{HM03},
\cite{IS} and \cite{S}, classifying finite morphisms between Fano threefolds of Picard number 1. But there still remain a few unsettled cases. One such case is finite morphisms between double covers of ${\mathbb P}^3$ branched along smooth quartic surfaces. When the quartic surfaces do not contain lines, Theorem 1.5 (more precisely, in the subsection (4.2.2)) of \cite{S} proves that such a morphism must be an isomorphism. However, the approach of \cite{S} has technical difficulties when the quartic surfaces contain lines. Using Theorem \ref{t.germ}, we can settle this case. More precisely, we obtain the following.
\begin{theorem}\label{t.map}
Let $Y_1, Y_2 \subset {\mathbb P}^n, n \geq 3,$ be two smooth hypersurfaces of degree $2n-2$. Let $\phi_1: X_1 \to {\mathbb P}^n$ (resp. $\phi_2: X_2 \to {\mathbb P}^n$) be a double cover of ${\mathbb P}^n$ branched along $Y_1$ (resp. $Y_2$). Suppose there exists a finite morphism $f:X_1 \to X_2$. Then $f$ is an isomorphism. \end{theorem}
Another application of Theorem \ref{t.germ} is the following problem, which is Problem 7.9 in \cite{Hw12}.
\begin{problem}[Liouville-type extension problem]\label{q.Liouville} Let $X$ be a Fano manifold of Picard number 1. Let $U_1$ and $U_2$ be two connected open subsets (in classical topology) in $X$. Suppose that we are given a biholomorphic map $\gamma: U_1 \to U_2$ such that for any minimal rational curve $C \subset X$, there exists another minimal rational curve $C'$ with $\gamma(U_1
\cap C) = U_2 \cap C'$. Then does there exist $\Gamma \in {\rm Aut}(X)$ with $\Gamma|_{U_1} = \gamma$? \end{problem}
Problem \ref{q.Liouville} is called Liouville-type extension,
because Liouville's theorem in conformal geometry
gives an affirmative answer to Problem \ref{q.Liouville} when $X$ is a smooth quadric hypersurface in ${\mathbb P}^{n+1}$. An affirmative answer to Problem \ref{q.Liouville} is known if $\dim {\mathcal K}_x>0$ for a general $x \in X$ (this is essentially proved in \cite{HM01}). However, when $\dim {\mathcal K}_x =0$,
affirmative answers are known only in a small number of examples of
$X$, such as hypersurfaces of degree $n$ in
${\mathbb P}^{n+1}$ and Mukai-Umemura threefolds (see Section 7 of \cite{Hw12}).
Theorem \ref{t.germ} enables us to give a stronger form of Liouville-type extension for our double cover $X$ as follows.
\begin{theorem}\label{t.Liouville}
Let $Y_1, Y_2 \subset {\mathbb P}^n, n \geq 3,$ be two smooth hypersurfaces of degree $2m, 2\leq m \leq n-1$. Let $\phi_1: X_1 \to {\mathbb P}^n$ (resp. $\phi_2: X_2 \to {\mathbb P}^n$) be a double cover of ${\mathbb P}^n$ branched along $Y_1$ (resp. $Y_2$). Let $U_1 \subset X_1$ and $U_2 \subset X_2$ be two connected open subsets. Suppose that we are given a biholomorphic map $\gamma: U_1 \to U_2$ such that for any minimal rational curve $C_1 \subset X_1$, there exists a minimal rational curve $C_2 \subset X_2$ with $\gamma(U_1 \cap C_1) = U_2 \cap C_2$. Then we can find a biregular morphism $\Gamma: X_1 \to X_2$
with $\Gamma|_{U_1} = \gamma$. \end{theorem}
The organization of this paper is as follows. In Section 2, we will present some basic facts concerning double covers of ${\mathbb P}^n$ and their minimal rational curves. Theorem \ref{t.VMRT} and Theorem \ref{t.converse} will be proved in Section 3. In Section 4, the variation of VMRT is studied and Theorem \ref{t.LR} will be proved. Finally, in Section 5, we review the notion of projective connections to prove a general version of Theorem \ref{t.germ} and explain how Theorem \ref{t.map} and Theorem \ref{t.Liouville} can be derived from Theorem \ref{t.germ}.
\section{Minimal rational curves and ECO lines}
Throughout, we will fix integers $n \geq 3$ and $2 \leq m \leq n-1$. Let $Y\subset \mathbb P^n$ be a smooth hypersurface of degree $2m$. Let $\phi: X\rightarrow \mathbb P^n$ be a double cover of $\mathbb P^n$ ramified along $Y$. Such a double cover arises as a submanifold in the line bundle ${\mathcal O}_{{\mathbb P}^n}(m)$ as explained in pp.242-244 of \cite{La}. This implies the following uniqueness result, where the slightly awkward appearance of the open subsets $U_1$ and $U_2$ are for our later use in Section 5.
\begin{lemma}\label{l.unique} Given a smooth hypersurface $Y \subset {\mathbb P}^n$ of degree $2m$, let $\phi_1: X_1 \to {\mathbb P}^n$ and $\phi_2: X_2 \to {\mathbb P}^n$ be two choices of double covers of ${\mathbb P}^n$ branched along $Y$. Let $U_1 \subset X_1$ and $U_2 \subset X_2$ be two connected open subsets (in classical topology) with $\phi_1(U_1) = \phi_2(U_2)$. Then there exists a biregular morphism $\Gamma: X_1 \to X_2$ with $\Gamma(U_1) = U_2$ and $\phi_1 = \phi_2\circ\Gamma$. \end{lemma}
\begin{definition}\label{d.mrc} Let $\phi: X \to {\mathbb P}^n$ be a double cover branched along a smooth hypersurface $Y \subset {\mathbb P}^n$ of degree $2m$. A rational curve $C \subset X$ with $\phi(C) \not\subset Y$
is a {\em minimal rational curve} if it has degree 1 with respect to $\phi^* {\mathcal O}_{{\mathbb P}^n}(1)$.
For $x \in X \setminus \phi^{-1}(Y)$, we denote by ${\mathcal K}_x$ the
(normalized) space of minimal rational curves through $x$. It is
known (e.g. II.3.11.5 in \cite{Ko}) that ${\mathcal K}_x$ is a union
of finitely many nonsingular projective varieties. From the adjunction formula $$K_X=\phi^*(K_{\mathbb P^n}+\frac{1}{2}[Y])=\phi^*\mathcal O_{\mathbb P^n}(-n+m-1),$$ $X$ is a Fano manifold of index $n-m+1$ and $\dim {\mathcal K}_x = n-m-1$. \end{definition}
\begin{definition}\label{d.ecoline} Let $Y \subset {\mathbb P}^n$ be an irreducible reduced hypersurface. A line $\ell \subset {\mathbb P}^n$ is an {\em ECO (Even Contact Order) line } with respect to $Y$ if $\ell\not\subset Y$ and the local intersection number at each point of $\ell \cap Y$ is even. For a point $x \in {\mathbb P}^n \setminus Y$, identify the space of lines through $x$ with the projective space ${\mathbb P} T_x({\mathbb P}^n)$ and denote by ${\mathcal E}^Y_x \subset {\mathbb P} T_x({\mathbb P}^n)$ the space of ECO lines through $x$ with respect to $Y$. \end{definition}
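For instance, when $m=2$, so that $\deg Y=4$, a line $\ell\not\subset Y$ is an ECO line precisely when its intersection multiplicities with $Y$ form the partition $(2,2)$ or $(4)$ of $4$; that is, when $\ell$ is tangent to $Y$ at two distinct points or meets $Y$ at a single point with contact of order $4$.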
The next proposition is a direct generalization of well-known facts for $(n, m) = (3,2)$ (e.g. \cite{Ti}).
\begin{proposition}\label{p.mrc} In the setting of Definition \ref{d.mrc}, an irreducible reduced curve $C\subset X$ with
$\phi(C) \not\subset Y$ is a minimal rational curve if and only if the image curve $\phi(C)\subset \mathbb P^n$ is an ECO line with respect to $Y$. Moreover, a minimal rational curve $C$ is smooth and $\phi|_C: C \to \phi(C)$ is an isomorphism. \end{proposition}
\begin{proof} Let $C \subset X$ be an irreducible curve such that $\ell:=\phi(C)$ is an ECO line with respect to $Y$. Suppose that $\phi|_C: C \to \ell$ is not birational, i.e., $C = \phi^{-1}(\phi(C))$. For a point $z\in \phi(C)\cap Y$, let $t$ be a local uniformizing parameter on $\ell$ at $z$ and let $r_z$ be the local intersection number of $\ell$ and $Y$ at $z$. Then, over a neighborhood of $z$, $C$ is analytically defined by the equation $s^2=t^{r_z}$ (cf. \cite{La} pp.242-244). Let $\tilde{C}$ be the normalization of
$C$. Since $r_z$ is even for any choice of $z \in \phi(C) \cap Y$, the composition of the normalization morphism $\tilde C\rightarrow C$ and the covering morphism $\phi|_C: C\rightarrow
\ell$ induces a morphism $ \tilde C\rightarrow \ell$ of degree 2 without ramification point, a contradiction. Thus $\phi|_C: C \to \ell$ is birational and $C$ has degree 1 with respect to $\phi^*{\mathcal O}_{{\mathbb P}^n}(1)$.
Conversely, if $C$ is a minimal rational curve, then
$\ell:=\phi(C)$ is a line in $\mathbb P^n$ with $\ell \not\subset Y$ and $\phi|_C: C \to \ell$ must be birational. Thus $\phi^{-1}(\ell)$ has an irreducible component $C'$ different from $C$ with $\phi(C \cap C') = \ell \cap Y$. By the same argument as before, if the local intersection number $r_z$ at $z \in \ell \cap Y$ is odd, the germ of $\phi^{-1}(\ell)$ over $z$, defined by $s^2 = t^{r_z}$, is irreducible, a contradiction. Thus $r_z$ is even for all $z \in \ell \cap Y$ and $\ell$ is an ECO line. Moreover,
$C$ must be smooth and the morphism $\phi|_C: C \to \ell$ is an isomorphism. \end{proof}
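The local fact used twice in the proof can be seen explicitly in the simplest cases: the germ $s^2=t^2$ splits into the two smooth branches $s=\pm t$, while $s^2=t^3$ is an irreducible cuspidal germ; more generally, $s^2=t^{r}$ is analytically reducible if and only if $r$ is even.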
We have the following consequence.
\begin{proposition}\label{p.unique} In the setting of Proposition \ref{p.mrc}, let $Y' \subset {\mathbb P}^n$ be an irreducible reduced hypersurface distinct from $Y$. Then a general ECO line with respect to $Y$ intersects $Y'$ transversally. In particular, a general ECO line with respect to $Y$ cannot be an ECO line with respect to $Y'$. \end{proposition}
\begin{proof} On a Fano manifold $X$, for any subset $Z \subset X$ of codimension $\geq 2$ and any reduced hypersurface $D \subset X$, a general minimal rational curve is disjoint from $Z$ (e.g. Lemma 2.1 in \cite{Hw01}) and intersects $D$
transversally (the proof is similar to the proof of Lemma 2.1 in \cite{Hw01}).
Putting $Z= \phi^{-1}(Y \cap Y')$ and
$D= \phi^{-1}(Y')$ for our $\phi: X \to {\mathbb P}^n$ branched along $Y$,
we see that a general minimal rational curve $C$ intersects $\phi^{-1}(Y')$ transversally and $\phi(C) \cap Y \cap Y' = \emptyset$. Thus $\phi(C)$ intersects $Y'$ transversally.
\end{proof}
\begin{proposition}\label{p.isom} In the setting of Proposition \ref{p.mrc}, let $x$ be a general point of $X$. Let $\tau_x: {\mathcal K}_x \to {\mathbb P} T_x(X)$ be the tangent morphism associating each member of ${\mathcal K}_x$ its tangent direction at $x$. Then $\tau_x$ is an embedding and the VMRT $\mathcal C_x= {\rm Im}(\tau_x) \subset {\mathbb P} T_x(X)$ is a nonsingular projective variety with finitely many components of dimension $n-m-1$, isomorphic to ${\mathcal E}_{\phi(x)} \subset {\mathbb P} T_{\phi(x)}({\mathbb P}^n)$. \end{proposition}
\begin{proof} The differential $d \phi_x: {\mathbb P} T_x(X) \to {\mathbb P} T_{\phi(x)}({\mathbb P}^n)$ sends ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ isomorphically to ${\mathcal E}^Y_{\phi(x)} \subset {\mathbb P} T_{\phi(x)}({\mathbb P}^n)$ by Proposition \ref{p.mrc}. It follows that $\tau_x$ is injective because lines on ${\mathbb P}^n$ are determined by their tangent directions.
Since we know that ${\mathcal K}_x$ is nonsingular of dimension $n-m-1$, to prove that $\tau_x$ is an embedding, it remains to show that $\tau_x$ is an immersion. By Proposition 1.4 in \cite{Hw01}, this is equivalent to showing that for any member $C\subset X$ of ${\mathcal K}_x$, the normal bundle $N_{C/X}$ satisfies $$N_{C/X}= \mathcal O_{\mathbb P^1}(1)^{n-m-1}\oplus \mathcal O_{\mathbb P^1}^{m}.$$ By the generality of $x$, we can write $$N_{C/X}= \mathcal O_{\mathbb P^1}(a_1)\oplus \cdots \oplus \mathcal O_{\mathbb P^1}(a_{n-1})$$ for integers $a_1\geq \cdots \geq a_{n-1}\geq 0$ satisfying $\sum_{i} a_i = n-m-1$.
Since $\phi$ is unramified at general points of $C$ and $\phi|_C: C \to \ell := \phi(C)$
is an isomorphism,
we have an injective sheaf homomorphism $$\phi_*: N_{C/X} \to N_{\ell/{\mathbb P}^n} =\mathcal O(1)^{n-1}.$$ Thus $a_1 \leq 1$.
It follows that $a_1 = \cdots = a_{n-m-1} =1$ and $a_{n-m} = \cdots = a_{n-1} =0$. \end{proof}
\section{Defining equations of VMRT}\label{s.ECO}
\begin{definition}\label{d.weight} A polynomial $A(t_1, \ldots, t_m)$ in $m$ variables is said to be {\em weighted homogeneous of weighted degree $k$} if it is of the form $$A(t_1,...,t_m)=\sum_{1\cdot i_1+\cdots +m\cdot i_m=k}c_{i_1,...,i_m}t_1^{i_1}\cdots t_m^{i_m}$$ with coefficients $c_{i_1,...,i_m}\in \mathbb C$. An equivalent way of defining it is as follows. We define the {\em weighted degree} of each variable $t_i$ by ${\rm wt}(t_i) := i$ and each monomial by ${\rm wt}(t_{i_1} \cdots t_{i_N}) := \sum_{j=1}^N {\rm wt}(t_{i_j})$. Then $A$ is weighted homogeneous of weighted degree $k$ if all monomial terms in $A$ have weighted degree $k$. \end{definition}
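For example, when $m\geq 2$ the monomials $t_1t_2$ and $t_1^3$ both have weighted degree $3$, while $t_2^2$, $t_1^2t_2$ and $t_1^4$ have weighted degree $4$.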
\begin{definition}\label{d.ecopoly} A polynomial of degree $2m, m\geq 1,$ in one variable with complex coefficients is an {\em ECO polynomial} if it can be written as the square of a polynomial of degree $m$. \end{definition}
\begin{proposition}\label{p.ecopoly} For any positive integer $m$, there exists a weighted homogeneous polynomial $A_{k}(t_1, \ldots, t_m)$ of weighted degree $k$ for each $k, m+1 \leq k \leq 2m$,
such that a polynomial in one variable $\lambda$ of degree $2m$ $$a_{2m} \lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda +1$$ is an ECO polynomial if and only if $ a_k = A_k(a_1, \ldots,a_m) $ for each $m+1 \leq k \leq 2m.$ \end{proposition}
\begin{remark} Our proof below gives a recursive formula for $A_k$, but an explicit expression of the polynomials $A_k$ will not be needed in this paper. \end{remark}
\begin{proof} Suppose that $$a_{2m} \lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda +1$$ is an ECO polynomial. We can find $(\sigma_1, \ldots, \sigma_m) \in {\mathbb C}^m$ such that $$a_{2m} \lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda +1 = (\sigma_m \lambda^m + \sigma_{m-1} \lambda^{m-1} + \cdots + \sigma_1 \lambda+1)^2.$$ For convenience, define $\sigma_0=1, \sigma_{m+1} = \cdots = \sigma_{2m}=0,$ so that we can write, for each $k, 1 \leq k \leq 2m$, $$a_k = \sum_{i=0}^k \sigma_i \sigma_{k-i}. $$ Using $${a}_k=\sum_{i=0}^{k}\sigma_i\sigma_{k-i}=\sum_{i=1}^{k-1}\sigma_i\sigma_{k-i}+2\sigma_k,$$ we have $$\sigma_k=\frac{{a}_k-\sum_{i=1}^{k-1}\sigma_i\sigma_{k-i}}{2} $$ for $k=1,2,...,m$. Thus \begin{align*}
\sigma_1=\frac{{a}_1}{2}, \
\sigma_2=\frac{{a}_2-\sigma_1^2}{2}=\frac{{a}_2}{2}-\frac{{a}_1^2}{8}, \ \cdots. \end{align*} Using induction on $k$, we see that $$ \sigma_k = G_k(a_1, \ldots, a_m) \mbox{ for each } k, 1 \leq k \leq m,$$ where $G_k(t_1, \ldots, t_m)$ is a weighted homogeneous polynomial of weighted degree $k$.
Setting $G_0=1, G_{m+1} = \cdots = G_{2m} =0$, we see that $$a_{k} = \sum_{\ell=0}^{k} G_{\ell}(a_1, \ldots, a_m) G_{k-\ell}(a_1, \ldots, a_m)$$ for all $m+1\leq k \leq 2m$. Define $$A_{k}(t_1, \ldots, t_m):= \sum_{\ell=0}^{k} G_{\ell}(t_1, \ldots, t_m) G_{k-\ell}(t_1, \ldots, t_m).$$ Then $A_k$ is a weighted homogeneous polynomial of weighted degree $k$ such that $a_k= A_k(a_1, \ldots, a_m)$ for each $m+1 \leq k \leq 2m$.
Conversely, given any $(a_1, \ldots, a_m) \in {\mathbb C}^m$, let $$ a_{m+i} = A_{m+i}(a_1, \ldots,a_m) \mbox{ for each } 1 \leq i \leq m$$ where $A_{m+i}$ is defined above. Then for $\sigma_i = G_j(a_1, \ldots, a_m)$, we see that $$(\sigma_m \lambda^m + \cdots + \sigma_1 \lambda + 1)^2 = a_{2m} \lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda + 1$$ and $$a_{2m}\lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda +1$$ is an ECO polynomial. \end{proof}
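For illustration (the following expressions are obtained directly from the recursion above and are not used elsewhere), in the case $m=2$ one has $\sigma_1=\frac{a_1}{2}$ and $\sigma_2=\frac{a_2}{2}-\frac{a_1^2}{8}$, whence \[A_3(t_1,t_2)=\frac{t_1t_2}{2}-\frac{t_1^3}{8},\qquad A_4(t_1,t_2)=\left(\frac{t_2}{2}-\frac{t_1^2}{8}\right)^2,\] which are weighted homogeneous of weighted degrees $3$ and $4$, as claimed.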
\begin{corollary}\label{c.ecopoly} Regard the affine space $$\mathbb A^{2m}:= \{(a_{2m}, a_{2m-1},
\ldots, a_1)\ | \ a_i \in {\mathbb C} \}$$ as the set of polynomials $$ a_{2m} \lambda^{2m} + a_{2m-1} \lambda^{2m-1} + \cdots + a_1 \lambda + 1$$ of degree $2m$ with the constant term 1. Then the set ${\mathcal D} \subset \mathbb A^{2m}$ of ECO-polynomials is a smooth complete intersection of $m$ divisors $D_1, \ldots, D_m$ where $D_j$ is the smooth divisor defined by $a_{m+j} = A_{m+j}(a_1, \ldots, a_m)$ where $A_{m+j}$ is the weighted homogeneous polynomial of weighted degree $m+j$ defined in Proposition \ref{p.ecopoly}. \end{corollary}
Using Corollary \ref{c.ecopoly}, we will study the space of ECO lines defined in Definition \ref{d.ecoline}. For our computation, we introduce the following notation.
\begin{notation}\label{n.coordi} Choose a homogeneous coordinate system $t_0$,...,$t_n$ on $\mathbb P^n$. Let ${\mathbb P}^{n-1}_{\infty} \subset {\mathbb P}^n$ be the hyperplane defined by $t_0=0$. The restriction of $t_1, \ldots, t_n$ on ${\mathbb P}^{n-1}_{\infty}$ will be denoted by $z_1, \ldots, z_n$. They provide a homogeneous coordinate system on ${\mathbb P}^{n-1}_{\infty}.$ Define the projective isomorphism $\upsilon_y: {\mathbb P}^{n-1}_{\infty} \to {\mathbb P} T_y({\mathbb P}^n)$ at each point $y=[1:y_1: \cdots : y_n] \in {\mathbb P}^n \setminus {\mathbb P}^{n-1}_{\infty}$ by sending $[z_1:\cdots:z_n]\in {\mathbb P}^{n-1}_{\infty}$ to the tangent direction of the line $$\{(y_1 + \lambda z_1, \ldots, y_n +
\lambda z_n)\ | \ \lambda \in {\mathbb C}\}$$ at the point $y$. The collection $\{ \upsilon_y^{-1} \ | \ y \in {\mathbb P}^n \setminus {\mathbb P}^{n-1}_{\infty}\}$ determines a canonical trivialization of the projectivized tangent bundle $$\upsilon^{-1}: {\mathbb P} T({\mathbb P}^n \setminus {\mathbb P}^{n-1}_{\infty}) \cong ({\mathbb P}^n \setminus {\mathbb P}^{n-1}_{\infty}) \times {\mathbb P}^{n-1}_{\infty}.$$ \end{notation}
\begin{definition}\label{d.f} For a homogeneous polynomial $f(t_0, \ldots, t_n)$ of degree $2m$, $2 \leq m \leq n-1$ and for each integer $k, 0 \leq k \leq 2m,$ define $a^f_k(y;z) = a^f_k(y_1,\ldots,y_n;z_1, \ldots, z_n)$ to be the polynomial in $2n$ variables satisfying $$f(1, y_1 + \lambda z_1, \ldots, y_n + \lambda z_n) = a^f_0(y;z) + a^f_1(y;z) \lambda + \cdots + a^f_{2m}(y;z) \lambda^{2m}.$$ Note that for a fixed $y$, $a^f_k(y;z)$ is a homogeneous polynomial in $z$ of degree $k$. In particular, $a^f_0(y;z) = f(1,y_1, \ldots, y_n)$ is independent of $z$. \end{definition}
\begin{proposition}\label{p.maineco} In Notation \ref{n.coordi} and Definition \ref{d.f}, let $Y \subset {\mathbb P}^n$ be the hypersurface defined by $f(t_0,\ldots,t_n)=0$. For any point $y \in {\mathbb P}^n \setminus (Y \cup {\mathbb P}^{n-1}_{\infty}),$ the variety $\upsilon_y^{-1}({\mathcal E}^Y_y) \subset {\mathbb P}^{n-1}_{\infty}$ is (set-theoretically) the common zero set of the homogeneous polynomials in $z$, $B^f_{k}(y;z),m+1 \leq k \leq 2m$, defined by $$B^f_k(y;z)= B^f_k(y_1,...,y_n;z_1,...,z_n):=\frac{a^f_k(y;z)}{a^f_0(y;z)}-A_k \left ( \frac{a^f_1(y;z)}{a^f_0(y;z)},...,\frac{a^f_m(y;z)}{a^f_0(y;z)}\right ),$$ where $A_k$ is as in Proposition \ref{p.ecopoly}. Note that $B^f_k(y;z)$ is homogeneous in $z$ of degree $k$ because $a^f_k(y;z)$ is homogeneous in $z$ of degree $k$ and $A_k$ is weighted homogeneous of weighted degree $k$. In particular, if ${\mathcal E}^Y_y$ is of pure dimension $n-m-1$, then it is set-theoretically a complete intersection of multi-degree $(m+1, m+2, \ldots, 2m)$. \end{proposition}
\begin{proof} A point $[z_1:\cdots:z_n] \in {\mathbb P}^{n-1}_{\infty}$ belongs to $\upsilon_y^{-1}({\mathcal E}^Y_y)$ if and only if the polynomial in $\lambda$ $$f(1, y_1 + \lambda z_1, \ldots, y_n + \lambda z_n) = a^f_0(y;z) + a^f_1(y;z) \lambda + \cdots + a^f_{2m}(y;z) \lambda^{2m}$$ is an ECO polynomial. By Corollary \ref{c.ecopoly}, we see that $\upsilon_y^{-1}({\mathcal E}^Y_y)$ is the common zero set of $B^f_k(y;z), m+1 \leq k \leq 2m$. \end{proof}
\begin{proposition}\label{p.converse} Given a general smooth complete intersection $Z \subset {\mathbb P}^{n-1}$ of multi-degree $(m+1, \ldots, 2m),$ there exist a smooth hypersurface $Y \subset {\mathbb P}^n$ of degree $2m$ and a point $y \in {\mathbb P}^n \setminus Y$, such that $\mathcal E_y^Y \subset \mathbb P T_y({\mathbb P}^n)$ is projectively equivalent to $Z \subset {\mathbb P}^{n-1}$. In particular, for a general hypersurface $Y \subset {\mathbb P}^n$ of degree $2m$ and a general $y\in {\mathbb P}^n \setminus Y$, the variety of ECO lines $\mathcal E^Y_y\subset \mathbb P T_y({\mathbb P}^n)$ is a smooth complete intersection of degree $(m+1,...,2m)$. \end{proposition}
\begin{proof}
Denote by $\{b_{k}(z_1, \ldots, z_n)\ | \ m+1 \leq k \leq 2m\}$ homogeneous polynomials with $\deg b_k = k$ defining $Z$. By the generality of $Z$, we may assume that \begin{itemize} \item[(i)] the affine hypersurface $$1 + b_{m+1}(t_1, \ldots, t_n) + \cdots + b_{2m}(t_1, \ldots, t_n) =0$$ in ${\mathbb C}^n = \{ (t_1, \ldots, t_n), t_i \in {\mathbb C} \}$ is smooth and \item[(ii)] the projective hypersurface $$b_{2m}(z_1, \ldots, z_n) =0$$ in ${\mathbb P}^{n-1}$ with homogeneous coordinates $(z_1:z_2: \cdots : z_n)$ is smooth. \end{itemize} Let $Y \subset {\mathbb P}^n$ be the hypersurface of degree $2m$ defined by the polynomial $$f(t_0, t_1, \ldots, t_n) := t_0^{2m} + t_0^{m-1} b_{m+1}(t_1, \ldots, t_n) + \cdots + t_0 b_{2m-1}(t_1, \ldots, t_n) + b_{2m}(t_1, \ldots, t_n).$$ Then $Y$ is a smooth hypersurface because it has no singular point on its intersection with the hyperplane $t_0 =0$ by the assumption (ii), while it has no singular point on the affine space $t_0 \neq 0$ by the assumption (i). In Notation \ref{n.coordi}, consider the point $y=[1:0:0:\cdots:0] \in {\mathbb P}^n \setminus ({\mathbb P}^{n-1}_{\infty}\cup Y)$. Then $$f(1, y_1 + \lambda z_1, \ldots, y_n +\lambda z_n) = f(1, \lambda z_1, \ldots, \lambda z_n) = 1+ \lambda^{m+1} b_{m+1}(z) + \cdots + \lambda^{2m} b_{2m}(z).$$ Comparing with Definition \ref{d.f}, we obtain $$a^f_0(y;z) =1, \; a_1^f(y;z)= \cdots = a^f_m(y;z) =0, \; a^f_k(y;z) = b_k(z) \mbox{ for } m+1 \leq k \leq 2m.$$ In the notation of Proposition \ref{p.maineco}, $$B^f_k(y; z_1, \ldots, z_n) = a^f_k(z_1, \ldots, z_n) = b_k(z_1, \ldots, z_n)$$ for $m+1 \leq k \leq 2m.$ This implies that ${\mathcal E}^Y_y, y:= [1:0:\cdots:0]$, is projectively equivalent to $Z \subset {\mathbb P}^{n-1}$. \end{proof}
\begin{proof}[Proof of Theorem \ref{t.VMRT} and Theorem \ref{t.converse}]
Theorem \ref{t.converse} is a direct consequence of Proposition \ref{p.isom} and Proposition \ref{p.converse}.
To prove Theorem \ref{t.VMRT}, it suffices to show by Proposition \ref{p.isom} that for any smooth $Y \subset {\mathbb P}^n$ and a general point $x \in {\mathbb P}^n \setminus Y$, the subvariety ${\mathcal E}^Y_x \subset {\mathbb P} T_x({\mathbb P}^n)$ is a smooth complete intersection of multi-degree $(m+1, \ldots, 2m)$. Proposition \ref{p.converse} says that this is O.K. if $Y$ is a general hypersurface.
To check it for any smooth $Y \subset {\mathbb P}^n$, choose a deformation
$\{ Y_t \ | \; |t|< \epsilon \}$ of $Y=Y_0$ such that for a Zariski open subset $U_t \subset {\mathbb P}^n \setminus Y_t$, ${\mathcal E}^{Y_t}_x \subset {\mathbb P} T_{x}({\mathbb P}^n)$ is a smooth complete intersection for any $t \neq 0$ and any $x \in U_t$. By shrinking $\epsilon$ if necessary, the intersection $\cap_{t\neq 0} U_t$ is nonempty. By Proposition \ref{p.isom}, we have a Zariski open subset $U \subset {\mathbb P}^n \setminus Y$ such that ${\mathcal E}^Y_x$ is smooth for any $x \in U$. Pick a point $x \in (\cap_{t \neq 0} U_t) \cap U $.
We can construct a smooth family $\{\phi_t:X_t \to {\mathbb P}^n\ | \; |t| < \epsilon\}$ of double covers of ${\mathbb P}^n$ branched along $Y_t$'s. Choose $z_t \in \phi_t^{-1}(x)$ in a continuous way. The proof (e.g. II.3.11.5 in \cite{Ko}) of the smoothness of ${\mathcal K}_x$ mentioned in Definition \ref{d.mrc} works for the family
${\mathcal K}_{z_t}$, i.e., the family $\{ {\mathcal K}_{z_t}\ | \; |t| <\epsilon\}$
is a flat family of nonsingular projective subvarieties. Via Proposition \ref{p.isom}, this implies that $\{ {\mathcal E}^{Y_t}_x\ | \;
|t|<\epsilon\}$ is a flat family of nonsingular projective varieties in ${\mathbb P} T_x({\mathbb P}^n)$.
By our choice of $x$, ${\mathcal E}^{Y_t}_x$ is (scheme-theoretically) a smooth complete intersection for $t \neq 0$, while ${\mathcal E}^{Y_0}_x$ is a nonsingular variety which is set-theoretically a complete intersection of the same multi-degree as ${\mathcal E}^{Y_t}_x, t\neq 0,$ by Proposition \ref{p.maineco}. We conclude that ${\mathcal E}^{Y_0}_x$ is also a smooth complete intersection of multi-degree $(m+1, \ldots, 2m)$. \end{proof}
\section{Variation of VMRT}
\begin{notation}\label{n.V} Let $V_k$ be the vector space of homogeneous polynomials of degree $k$ in $z_1$,...,$z_n$. Each polynomial $h\in V_k$ is of the form $$h(z_1,...,z_n)=\sum_{i_1+\cdots+i_n=k}e_{i_1,...,i_n}z_1^{i_1}\cdots z_n^{i_n}.$$ Regarding $V_k$ as a complex manifold, take $\{e_{i_1,...,i_n}\}_{i_1+\cdots+i_n=k}$ as linear coordinates on $V_k$ and $$\left\{ \frac{\partial}{\partial e_{i_1,...,i_n}} \
\Bigr| \ i_1+\cdots+i_n=k \right\}$$ as a basis for the tangent spaces $T_{h}(V_k)$ of $V_k$ at each $h\in V_k$. There is a canonical isomorphism between $V_k$ and $T_{h}(V_k)$ identifying a polynomial $$\sum_{i_1+\cdots+i_n=k}E_{i_1,...,i_n}z_1^{i_1}\cdots z_n^{i_n}\in V_k$$ with the tangent vector $$\sum_{i_1+\cdots+i_n=k}E_{i_1,...,i_n}\frac{\partial}{\partial e_{i_1,...,i_n}}\in T_h(V_k).$$ \end{notation}
\begin{notation}\label{n.f_k} For a homogeneous polynomial $f(t_0, \ldots, t_n)$ of degree $2m, 2 \leq m \leq n-1$, write $$f(t_0,...,t_n)=t_0^{2m}f_0(t_1,...,t_n)+t_0^{2m-1}f_1(t_1,...,t_n)+\cdots+f_{2m}(t_1,...,t_n)$$ where $f_k(t_1,...,t_n)$ is a homogeneous polynomial of degree $k=0,...,2m$ in $t_1$,...,$t_n$. Comparing with Definition \ref{d.f}, we have $$f(1,0,...,0)=f_0(z_1,...,z_n) = a^f_0(0;z) \mbox{ and } a^f_i(0;z)=f_i(z) \mbox{ for } i=1,..., 2m.$$ \end{notation}
\begin{definition}\label{d.phi} For a homogeneous polynomial $f(t_0, \ldots, t_n)$ of degree $2m, 2 \leq m \leq n-1$, let $Y \subset {\mathbb P}^n$ be the hypersurface defined by $f(t_0,\ldots,t_n)=0$ and define a morphism $$\mu: {\mathbb P}^n \setminus ({\mathbb P}^{n-1}_{\infty}\cup Y) \to V_{m+1}$$ by sending $y=[1:y_1: \cdots :y_n]$ to the polynomial in $z$ $$\mu(y):= [B^f_{m+1}(y;z)] \in V_{m+1}$$ with $B^f_{m+1}(y;z)$ as in Proposition \ref{p.maineco}. \end{definition}
\begin{proposition}\label{p.dphi} In Notation \ref{n.f_k} and Definition \ref{d.phi},
assume that $$f(1,0,...,0)=f_0(z)=1 \mbox{ and } f_1(z)=\cdots=f_m(z)=0.$$ Then for $x=[1:0:\cdots:0] \in {\mathbb P}^n \setminus ({\mathbb P}^{n-1}_{\infty}\cup Y)$ and $\sum_{i=1}^n v_i \frac{\partial}{\partial y_i} \in T_x({\mathbb P}^n)$, $$d \mu_x(\sum_{i=1}^n v_i \frac{\partial}{\partial y_i}) = [\sum_{i=1}^n v_i \frac{\partial f_{m+2}}{\partial t_i}(z)] \in T_{\mu(x)}(V_{m+1}) = V_{m+1}.$$ \end{proposition}
We will use the following lemma.
\begin{lemma}\label{l.Tim} In Notation \ref{n.f_k}, set $f_{2m+1} =0$ for convenience. Assume that $$f(1,0,...,0)=f_0(z)=1.$$ Then
$B^f_{k}(y;z)$ of Proposition \ref{p.maineco} satisfies
\begin{eqnarray*} \frac{\partial B^f_{k}(y;z)}{\partial y_i}\Bigr|_{(0;z)} &=& \frac{\partial f_{k+1}}{\partial t_i}(z)-f_k(z)\frac{\partial f_1 }{\partial t_i}(z) \\ & & -\sum_{j=1}^{m}\frac{\partial A_{k}}{\partial x_j}(f_1(z),...,f_m(z))\left( \frac{\partial f_{j+1}}{\partial t_i}(z)-f_j(z)\frac{\partial f_1 }{\partial t_i}(z)\right) \end{eqnarray*} for all $k=m+1,...,2m$ and $i=1,...,n$. \end{lemma} \begin{proof}
In the equality $$ \frac{\partial f(t)}{\partial t_i}\Bigr|_{t=(1,\lambda z_1,...,\lambda z_n)} =\frac{\partial f(1,y_1+\lambda z_1,...,y_n+\lambda z_n)}{\partial y_i}\Bigr|_{y_1=\cdots=y_n=0},$$ the left hand side can be written, via Notation \ref{n.f_k}, $$\frac{\partial f_1}{\partial t_i}(z_1,...,z_n)+ \frac{\partial f_2}{\partial t_i}(z_1,...,z_n)\lambda+\cdots+\frac{\partial f_{2m+1}}{\partial t_i}(z_1,...,z_n)\lambda^{2m}.$$ On the other hand, the right hand side is, by Definition \ref{d.f}, $$\frac{\partial a^f_0}{\partial y_i}(0;z)+\frac{\partial a^f_1}{\partial y_i}(0;z)\lambda+\cdots+\frac{\partial a^f_{2m}}{\partial y_i}(0;z)\lambda^{2m}.$$ Therefore for each $i=1, \ldots, n$, $$\frac{\partial a^f_{2m}}{\partial y_i}(0;z) =0 \mbox{ and } \frac{\partial a^f_k}{\partial y_i}(0;z)=\frac{\partial f_{k+1}}{\partial t_i}(z)\text{ for } k=0,...,2m.$$
From this and the assumption that $a^f_0(0;z)=f(1,0,...,0)=1$, we obtain \begin{eqnarray*} \frac{\partial}{\partial y_i}\left
(\frac{a^f_k(y;z)}{a^f_0(y;z)}\right )\Bigr|_{(y;z)=(0;z)}&=& \frac{a^f_0(0;z)\frac{\partial a^f_k}{\partial y_i}(0;z)-a^f_k(0;z)\frac{\partial a^f_0}{\partial y_i}(0;z)}{a^f_0(0;z)^2} \\ &=& \frac{\partial f_{k+1}}{\partial t_i}(z)-f_k(z)\frac{\partial f_1 }{\partial t_i}(z) \end{eqnarray*} for all $k=0,...,2m$ and $i=1,...,n$. Thus
\begin{eqnarray*}\frac{\partial B^f_{k}(y;z)}{\partial y_i} \Bigr|_{(0;z)} &=& \frac{\partial}{\partial y_i}\left
(\frac{a^f_{k}(y;z)}{a^f_0(y;z)}\right )\Bigr|_{(0;z)}\\ & & -\sum_{j=1}^{m}\frac{\partial A_{k}}{\partial x_j}\left(\frac{a^f_1(0;z)}{a^f_0(0;z)},...,\frac{a^f_m(0;z)}{a^f_0(0;z)}\right)\frac{\partial}{\partial y_i}\left
(\frac{a^f_{j}(y;z)}{a^f_0(y;z)}\right )\Bigr|_{(0;z)}\\ &=& \frac{\partial f_{k+1}}{\partial t_i}(z)-f_k(z)\frac{\partial f_1 }{\partial t_i}(z) \\ & & -\sum_{j=1}^{m}\frac{\partial A_{k}}{\partial x_j}(f_1(z),...,f_m(z))\left(\frac{\partial f_{j+1}}{\partial t_i}(z)-f_j(z)\frac{\partial f_1 }{\partial t_i}(z)\right)\end{eqnarray*} for all $k=m+1,...,2m$ and $i=1,...,n$. \end{proof}
\begin{proof}[Proof of Proposition \ref{p.dphi}] Since $A_k(x_1,...,x_m)$ is weighted homogeneous of weighted degree $k$ and $k \geq m+1$, the polynomial $A_k(x_1,...,x_m)$ has no linear terms in $x_1$,...,$x_m$: a linear monomial $x_j$ has weighted degree $j\leq m<k$. Therefore for all $k=m+1,...,2m$ and $j=1,...,m,$
$$\frac{\partial A_k}{\partial x_j}\Bigr|_{(0,...,0)}=0.$$ Thus putting $f_1= \cdots = f_m=0$ in Lemma \ref{l.Tim}, we obtain $$\frac{\partial B^f_{m+1}}{\partial y_i}(0;z)=\frac{\partial f_{m+2}}{\partial t_i}(z).$$ It follows that $$d \mu_x(\sum_{i=1}^n v_i \frac{\partial}{\partial y_i}) =\sum_{i=1}^n v_i\frac{\partial B^f_{m+1}}{\partial y_i}(0;z) = \sum_{i=1}^n v_i\frac{\partial f_{m+2}}{\partial t_i}(z).$$ \end{proof}
\begin{notation}\label{n.GL} Denote the action of $A \in GL(n,\mathbb C)$ on ${\mathbb C}^n$ by $(z_1, \ldots, z_n) \mapsto A(z_1, \ldots, z_n).$ We have the natural induced action on $V_k$ given by $$(A. h)(z_1,...,z_n):=h(A^{-1}(z_1,...,z_n)),\ h\in V_k. $$ Denote the orbit of $h\in V_k$ by
$$GL(n,\mathbb C).h:=\{A.h\bigm|\ A\in GL(n,\mathbb C)\}\subset V_k.$$ \end{notation}
\begin{proposition}\label{p.Torbit} We use the terminology of Notation \ref{n.V} and Notation \ref{n.GL}. A tangent vector $$\sum_{i_1+\cdots+i_n=k}E_{i_1,...,i_n} \frac{\partial}{\partial e_{i_1,...,i_n}} \in T_h(V_k)$$ is tangent to the orbit $GL(n,\mathbb C).h$ if and only if there exists an $(n\times n)$ matrix $(s_{j}^i)_{i,j=1,...,n}$ such that
$$\sum_{i_1+\cdots+i_n=k}E_{i_1,...,i_n}z_1^{i_1}\cdots z_n^{i_n}=\frac{d}{dt} h(z_1+t\sum_{i=1}^n s_1^iz_i,...,z_n+t\sum_{i=1}^n s_n^iz_i)\Bigr|_{t=0}.$$ \end{proposition} \begin{proof} Define a morphism $$\alpha_h:GL(n,\mathbb C)\rightarrow V_k$$ sending $A$ to $A.h$. Then $GL(n,\mathbb C).h$ is the image of $\alpha_h$ and $\alpha_h(I)=h$ where $I$ is the $(n\times n)$ identity matrix. The tangent space of $GL(n,\mathbb C).h$ at $h$ is the image of the differential $$d(\alpha_{h})_{I}:T_{I}(GL(n,\mathbb C))\rightarrow T_h(V_k).$$ Let us identify $T_{I}(GL(n,\mathbb C))$ with the vector space $M_n$ of all $(n\times n)$ matrices so that $A\in M_n$ corresponds to the tangent vector at $I$ of the curve $c(t)=I+tA$ which is indeed a curve on $GL(n,\mathbb C)$ for sufficiently small $t$. Since $\alpha_h\circ c(t)$ is the polynomial $h((I+tA)^{-1}(z_1, \ldots, z_n))$, the differential $d(\alpha_h)_I$ sends
$A=\frac{d}{dt} c(t)\Bigr|_{t=0}$ to $\frac{d}{dt}
h((I+tA)^{-1}(z_1, \ldots, z_n))\Bigr|_{t=0}$ which is of the form on the right hand side of the equation in the proposition. \end{proof}
\begin{proposition}\label{p.ImOb} In the setting of Proposition \ref{p.dphi},
$$ T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=\left\{\sum_{i,j=1}^n s_j^iz_i \frac{\partial f_{m+1}}{\partial t_j}(z) \ \Bigr| \ s^i_j\in\mathbb C\right\}\subset T_{\mu(x)}(V_{m+1}) = V_{m+1}.$$\end{proposition}
\begin{proof} From $\mu(x)=[B_{m+1}(0;z)] \in V_{m+1}$ and Proposition \ref{p.Torbit}, we get \begin{equation*}\label{eq.1} T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=\left\{ \frac{d}{dt}
B^f_{m+1}(0; z_1+t\sum_{i=1}^n s_1^iz_i,...,z_n+t\sum_{i=1}^n s_n^iz_i)\Bigr|_{t=0} \; \Bigr| \ s^i_j\in\mathbb C\right\}.\end{equation*} From Notation \ref{n.f_k}, we have $a^f_0(0;z)=f(1,0,...,0)=1$ and $a^f_i(0;z)=f_i(z)=0$ for $i=1,...,m$. Thus $$B^f_{m+1}(0;z)=\frac{a^f_{m+1}(0;z)}{a^f_0(0;z)}-A_{m+1} \left
( \frac{a^f_1(0;z)}{a^f_0(0;z)},...,\frac{a^f_m(0;z)}{a^f_0(0;z)}\right )=f_{m+1}(z).$$ So the following equalities hold \begin{align*}
&\frac{d}{dt} B^f_{m+1}(0;z_1+t\sum_{i=1}^n s_1^iz_i,...,z_n+t\sum_{i=1}^n s_n^iz_i)\Bigr|_{t=0}\\
&=\frac{d}{dt} f_{m+1}(z_1+t\sum_{i=1}^n s_1^iz_i,...,z_n+t\sum_{i=1}^n s_n^iz_i)\Bigr|_{t=0}\\ &=\sum_{i,j=1}^{n}s^i_jz_i\frac{\partial f_{m+1}}{\partial t_j}(z). \end{align*} Putting it in the above expression for $T_{\mu(x)}(GL(n,\mathbb C).\mu(x))$, we obtain the result. \end{proof}
\begin{proposition}\label{p.trans} There exists a smooth hypersurface $Y\subset {\mathbb P}^n, n \geq 4,$ defined by a homogeneous polynomial $f$ of degree $2m, 2 \leq m \leq n-1,$ such that, for a general $x\in {\mathbb P}^n \setminus ({\mathbb P}^{n-1}_{\infty} \cup Y)$, using the terminology of Definition \ref{d.phi}, $${\rm rank}(d\mu_x)=n,\; \dim_{\mathbb C} T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=n^2\; \text{ and } $$ $${\rm Im}(d\mu_{x})\cap T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=0.$$ \end{proposition}
\begin{proof} First, consider the case $m=2$. Set $$f(t_0,t_1,...,t_n)=t_0^4+b(t_1^3+\cdots+t_n^3)t_0+(t_1^{4}+\cdots+t_n^{4})+ \sum_{1\leq i_1<i_2<i_3<i_4\leq n} c t_{i_1}t_{i_2}t_{i_3}t_{i_4}$$ with some constants $b,c\in \mathbb C^*$. Using Notation \ref{n.f_k}, we have $$f_1=f_2=0, \; f_{3}=b(t_1^3+\cdots+t_n^3) \mbox{ and }$$ $$f_{4}=(t_1^{4}+\cdots+t_n^{4})+\sum_{1\leq i_1<i_2<i_3<i_4\leq n}c t_{i_1}t_{i_2}t_{i_3}t_{i_4}.$$ Since the Fermat hypersurface in ${\mathbb P}^n$ defined by
$t_0^{4}+t_1^{4}+\cdots+t_n^{4}=0$ is smooth, the hypersurface
$Y$
defined by $f=0$ is smooth if we choose general $b$ and $c$.
Set $x:=[1:0:\cdots:0]$. By Propositions \ref{p.dphi} and \ref{p.ImOb}, we have
\begin{align*}{\rm Im}(d\mu_x)&= \left\{\sum_{i=1}^{n}v_i\frac{\partial f_{4}}{\partial t_i}(z) \ \Bigr| \ v_i\in \mathbb C\right\}=
\left\{\sum_{i=1}^n v_i(4z_i^3+\sum_{1\leq i_1<i_2<i_3\leq n,\forall i_k\neq i}c z_{i_1}z_{i_2}z_{i_3}) \ \Bigr| \ v_i\in \mathbb C\right\}\end{align*} and
\begin{align*}T_{\mu(x)}(GL(n,\mathbb C).\mu(x))&= \left\{\sum_{i,j=1}^ns^i_jz_i\frac{\partial f_{3}}{\partial t_j}(z) \ \Bigr| \ s^i_j\in \mathbb C\right\}=\left\{\sum_{i,j=1}^ns^i_jz_iz_j^2\ \Bigr| \ s^i_j\in \mathbb C\right\} .\end{align*} From this it follows that ${\rm rank}(d\mu_x)=n$ and $\dim_{\mathbb C} T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=n^2$. Also if
there exist $s^i_j$ and $v_i$ such that $$\sum_{i,j=1}^ns^i_jz_iz_j^2=\sum_{i=1}^nv_i(4z_i^3+\sum_{1\leq i_1<i_2<i_3\leq n, \forall i_k\neq i} c z_{i_1}z_{i_2}z_{i_3}),$$ then $s^i_j=0$ and $v_i=0$ for all $i$ and $j$. Therefore $${\rm Im}(d\mu_{x})\cap T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=0.$$
Next, assume that $m\geq 3$. Pick $$f(t_0, \ldots, t_n) = t_0^{2m} +b(t_1^{m+1}+\cdots+t_n^{m+1})t_0^{m-1} + ct_1t_2t_3(t_4^{m-1}+\cdots+t_n^{m-1})t_0^{m-2}+t_1^{2m}+\cdots+t_n^{2m}$$
with some constants $b,c\in \mathbb C^*$. Using Notation \ref{n.f_k}, we have $$f_1=\cdots=f_m=f_{m+3}=\cdots=f_{2m-1}=0, \; f_{m+1}=b(t_1^{m+1}+\cdots+t_n^{m+1}),$$ $$f_{m+2}=ct_1t_2t_3(t_4^{m-1}+\cdots+t_n^{m-1}) \mbox{ and } f_{2m}=t_1^{2m}+\cdots+t_n^{2m}.$$ From the smoothness of the Fermat hypersurface in ${\mathbb P}^n$ defined by
$t_0^{2m}+t_1^{2m}+\cdots+t_n^{2m}=0$, we can see that the hypersurface
$Y$
defined by $f=0$ is smooth for general $b$ and $c$. Set $x:=[1:0:\cdots:0]$.
Propositions \ref{p.dphi} and \ref{p.ImOb} show that
\begin{align*}{\rm Im}(d\mu_x)&=\left\{\sum_{i=1}^{n}v_i\frac{\partial f_{m+2}}{\partial t_i}(z)\ \Bigr| \ v_i\in \mathbb C\right\}\\& =\left\{(v_1z_2z_3+v_2z_1z_3+v_3z_1z_2)(z_4^{m-1}+\cdots+z_n^{m-1})+\sum_{i=4}^nv_iz_1z_2z_3z_i^{m-2}\
\Bigr|\ v_i\in \mathbb C\right\}\end{align*} and
\begin{align*}T_{\mu(x)}(GL(n,\mathbb C).\mu(x))&=\left\{\sum_{i,j=1}^ns^i_jz_i\frac{\partial f_{m+1}}{\partial t_j}(z)\ \Bigr| \ s^i_j\in \mathbb C \right\}
=\left\{\sum_{i,j=1}^ns^i_jz_iz_j^m\ \Bigr| \ s^i_j\in \mathbb C \right\} .\end{align*} The condition $m \geq 3$ implies that ${\rm rank}(d\mu_x)=n$. It is easy to see that $$ \dim_{\mathbb C} T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=n^2 \mbox{ and } {\rm Im}(d\mu_{x})\cap T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=0.$$ \end{proof}
\begin{proof}[Proof of Theorem \ref{t.LR}] By Proposition \ref{p.isom}, we may prove the corresponding statement for the morphism $\eta: W \to {\rm Hilb}({\mathbb P}^{n-1})$ defined on a neighborhood $W$ of a general point in ${\mathbb P}^n$ by $\eta(y):= [{\mathcal E}^Y_y]$ for $y \in W$. Since ${\mathcal E}^Y_x$ is a complete intersection of multi-degree $(m+1, \ldots, 2m)$ for general $Y$ and general $x \in {\mathbb P}^n \setminus Y$, the equation $B_{m+1}$ of degree $m+1$ is uniquely determined up to $GL(n,\mathbb C)$-action by the projective equivalence class of $\mathcal E_x^Y$. Thus it suffices to show that ${\rm rank}(d\mu_x)=n$ and $d\mu_x(T_x({\mathbb P}^n)) \cap T_{\mu(x)}(GL(n,\mathbb C).\mu(x))=0$ for a general $Y$ and general $x$. This follows from Proposition \ref{p.trans}. \end{proof}
\section{Projective connections and rigidity of maps}
\begin{definition}\label{d.connection} Given a complex manifold $M$ of dimension $n$, the projectivized tangent bundle $\pi: {\mathbb P} T(M) \to M$ is equipped with the tautological line bundle $\xi \subset \pi^*T(M)$ whose fiber at $\alpha \in {\mathbb P} T(M)$ is given by $\hat{\alpha} \subset T_{\pi(\alpha)}(M)$, the 1-dimensional subspace corresponding to $\alpha \in {\mathbb P} T_{\pi(\alpha)} (M).$ We have the vector subbundle ${\mathcal T} \subset T({\mathbb P} T(M))$ of rank $n$ whose fiber at $\alpha \in {\mathbb P} T(M)$ is given by $${\mathcal T}_{\alpha} := d\pi_{\alpha}^{-1}(\hat{\alpha})$$ where $d \pi_{\alpha} : T_{\alpha}({\mathbb P} T(M)) \to T_{\pi(\alpha)}(M)$ is the differential of the projection $\pi.$ A {\em projective connection} on $M$ is a homomorphism $p: \xi \to {\mathcal T}$ of vector bundles which splits the exact sequence of vector bundles on ${\mathbb P} T(M)$ $$ 0 \longrightarrow T^{\pi} \longrightarrow {\mathcal T} \longrightarrow {\mathcal T}/T^{\pi} \cong \xi \longrightarrow 0$$ where $T^{\pi} \subset T({\mathbb P} T(M))$ is the relative tangent bundle of $\pi$. Given a projective connection $p: \xi \to {\mathcal T}$, the image $p(\xi) \subset {\mathcal T} \subset T({\mathbb P} T(M))$ is a line subbundle in the tangent bundle of ${\mathbb P}(T(M))$ and defines a foliation of rank 1 on ${\mathbb P} T(M)$. \end{definition}
\begin{example}\label{e.P} On ${\mathbb P}^n$, we have a canonical projective connection $p: \xi \to {\mathcal T}$ such that the leaves of the foliation $p(\xi)$ are exactly the tangent directions of lines on ${\mathbb P}^n$. We call this the {\em flat projective connection} and denote it by $p^{\rm flat}$. Let $U$ be a connected complex manifold of dimension $n$ and let $\varphi: U \to {\mathbb P}^n$ be an immersion. Via the biholomorphic morphism ${\mathbb P} T(U) \cong {\mathbb P} T(\varphi(U))$, we have an induced projective connection $\varphi^* p^{\rm flat}$ on $U$. By the affirmative answer to Problem \ref{q.Liouville} when $X = {\mathbb P}^n$ (see the remark after Problem \ref{q.Liouville}), two immersions $\varphi_i: U \to {\mathbb P}^n, i = 1,2,$ are related by a projective transformation, i.e., there exists an automorphism $\psi: {\mathbb P}^n \to {\mathbb P}^n$ such that $\varphi_2 = \psi \circ \varphi_1$, if and only if the two projective connections $\varphi_1^* p^{\rm flat}$ and $ \varphi_2^* p^{\rm flat}$ coincide. \end{example}
\begin{proposition}\label{p.uniqueconnection} In the setting of Definition \ref{d.connection}, let ${\mathcal C} \subset {\mathbb P} T(M)$ be a closed subvariety dominant over $M$ such that for a general point $x \in M$, the fiber ${\mathcal C}_x \subset {\mathbb P} T_x(M)$ is not contained in a quadric hypersurface. Suppose that $p_1, p_2: \xi \to {\mathcal T}$ are two projective connections on $M$ such that
$p_1|_{{\mathcal C}} = p_2|_{{\mathcal C}}.$ Then $p_1=p_2$. \end{proposition}
\begin{proof} Since $p_1$ and $p_2$ split the exact sequence in Definition \ref{d.connection}, the difference $p_1-p_2$ determines an element $\sigma \in H^0( {\mathbb P} T(M), T^{\pi} \otimes \xi^{-1}).$ For a general $x \in M,$ $\sigma_x $ is a section of $T({\mathbb P} T_x(M))\otimes \xi^{-1}$ on the projective space ${\mathbb P} T_x(M)$.
The condition $p_1|_{{\mathcal C}} = p_2|_{{\mathcal C}}$ implies that $\sigma_x$ vanishes on the subvariety ${\mathcal C}_x$. In terms of a homogeneous coordinate system on projective space ${\mathbb P}^{n-1}$, a nonzero section of $T({\mathbb P}^{n-1})\otimes {\mathcal O}_{{\mathbb P}^{n-1}}(1)$ is represented by a homogeneous polynomial vector field with quadratic coefficients. In particular, the zero set of such a section must be contained in some quadric hypersurface. By the assumption that ${\mathcal C}_x$ is not contained in a quadric hypersurface, we see that $\sigma_x =0$. Since this holds for a general $x \in M$, we obtain $p_1=p_2$. \end{proof}
We have the following general version of Theorem \ref{t.germ}. In fact, Theorem \ref{t.germ} is a corollary of Theorem \ref{t.general} by Theorem \ref{t.VMRT}.
\begin{theorem}\label{t.general} Let $X$ be a Fano manifold. For a general point $x \in X$, we denote by ${\mathcal K}_x$ the space of minimal rational curves through $x$ and by ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ the VMRT at $x$. Assume that ${\mathcal C}_x$ is not contained in a quadric hypersurface in ${\mathbb P} T_x(X)$. Let $U \subset X$ be a connected neighborhood of a general point $x \in X$ and $\varphi_1, \varphi_2: U \to {\mathbb P}^n$ be two biholomorphic immersions such that for any $y \in U$ and any member $C$ of ${\mathcal K}_y$, both $\varphi_1(C \cap U)$ and $\varphi_2(C \cap U)$ are contained in lines in ${\mathbb P}^n$. Then there exists a projective transformation $\psi: {\mathbb P}^n \to {\mathbb P}^n$ such that $\varphi_2= \psi \circ \varphi_1$. \end{theorem}
\begin{proof} Let ${\mathcal C} \subset {\mathbb P} T(X)$ be the closure of the union of ${\mathcal C}_x \subset {\mathbb P} T_x(X)$ as $x$ varies over general points of $X$. For a member $C$ of ${\mathcal K}_x$ and its smooth locus $C^o \subset C$, the curve ${\mathbb P} T(C^o) \subset {\mathbb P} T(X)$ lies in ${\mathcal C}$. In fact, by the definition of ${\mathcal C}$ such curves cover a dense open subset in ${\mathcal C}$.
Consider the projective connections $\varphi_i^* p^{\rm flat}$ on $U$. Let $C$ be a general minimal rational curve intersecting $U$. Since $\varphi_1(C \cap U)$ and $\varphi_2(C \cap U)$ are contained in lines in ${\mathbb P}^n$, the difference $$\varphi_1^* p^{\rm flat} -\varphi_2^* p^{\rm flat} \in H^0({\mathbb P} T(U), T^{\pi} \otimes \xi^{-1}),$$ in the notation of Definition \ref{d.connection} with $M=U$, vanishes along the Riemann surface ${\mathbb P} T(C^o) \cap {\mathbb P} T(U)$. Since such Riemann surfaces cover a dense open subset in ${\mathcal C} \cap {\mathbb P} T(U)$, the two projective connections must agree on ${\mathcal C} \cap {\mathbb P} T(U)$. Applying Proposition \ref{p.uniqueconnection} with $M=U$, we conclude $\varphi_1^* p^{\rm flat} = \varphi_2^* p^{\rm flat}.$ As mentioned in Example \ref{e.P}, this implies the existence of a projective transformation $\psi$ satisfying $\varphi_2 = \psi \circ \varphi_1.$ \end{proof}
\begin{proof}[Proof of Theorem \ref{t.map}] Putting $m=n-1$ in the proof of Proposition \ref{p.isom}, we see that minimal rational curves on $X_i, i=1,2,$ have trivial normal bundles and rational curves through general points with trivial normal bundles are minimal rational curves. By Proposition 6 of \cite{HM03} (also cf. Theorem 3.1 (iv) in \cite{S}), for a general minimal rational curve $C \subset X_2$, each irreducible component of $f^{-1}(C)$ is a minimal rational curve in $X_1$. In other words, $f$ sends minimal rational curves of $X_1$ through a general point to those of $X_2$. Putting $$\hat{X}= X_1, X = X_2, g=f, \phi= \phi_2, \mbox{ and } h = \phi_1$$ in Corollary \ref{c.finite}, we see that $\phi_1 = \psi \circ \phi_2 \circ f$ for some projective transformation $\psi$. Thus $f$ must be birational, and hence an isomorphism. \end{proof}
\begin{proof}[Proof of Theorem \ref{t.Liouville}] Applying Theorem \ref{t.germ} to $\varphi:= \phi_2 \circ \gamma: U_1 \to \phi_2(U_1) \subset {\mathbb P}^n$, we have a projective transformation $\psi \in {\rm Aut}({\mathbb P}^n)$ such that $\psi \circ
\phi_1|_{U_1} = \phi_2 \circ \gamma$. By the assumption on $\gamma$ and Proposition \ref{p.mrc}, we have $d\psi({\mathcal E}^{Y_1}_x) = {\mathcal E}^{Y_2}_{\psi(x)}$ for $x \in \phi_1(U_1)$. By Proposition \ref{p.unique}, this implies $\psi(Y_1)= Y_2.$ Thus replacing $Y_1$ by $\psi(Y_1)$ and $\phi_1$ by $\psi \circ \phi_1$, we may assume that $Y_1=Y_2$ and $\phi_1(U_1) = \phi_2(U_2)$. From Lemma \ref{l.unique}, there exists a biregular
morphism $\Gamma: X_1 \to X_2$ with $\Gamma|_{U_1} = \gamma$. \end{proof}
\end{document} | arXiv |
\begin{document}
\numberwithin{equation}{section} \numberwithin{figure}{section} \numberwithin{table}{section}
\title[Tropical correspondence for del Pezzo log Calabi-Yau pairs]{Tropical correspondence for smooth del Pezzo log Calabi-Yau pairs} \author{Tim Graefnitz} \address{University of Hamburg \\ Department of Mathematics \\ Germany} \email{[email protected]}
\begin{abstract} Consider a log Calabi-Yau pair $(X,D)$ consisting of a smooth del Pezzo surface $X$ of degree $\geq 3$ and a smooth anticanonical divisor $D$. We prove a correspondence between genus zero logarithmic Gromov-Witten invariants of $X$ intersecting $D$ in a single point with maximal tangency and the consistent wall structure appearing in the dual intersection complex of $(X,D)$ from the Gross-Siebert reconstruction algorithm. More precisely, the logarithm of the product of functions attached to unbounded walls in the consistent wall structure gives a generating function for these invariants. \end{abstract}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section*{Introduction}
\thispagestyle{empty}
A smooth projective surface $X$ over the complex numbers $\mathbb{C}$ together with a reduced effective anticanonical divisor $D$ forms a \textit{log Calabi-Yau pair} $(X,D)$, meaning that $K_X+D$ is numerically trivial. The case where $D=D_1+\ldots+D_m$ is a cycle of smooth rational curves (\textit{maximal boundary}) has been studied in \cite{GHK1}. In \cite{GPS} it was shown that generating functions of logarithmic Gromov-Witten invariants of $X$ with maximal tangency at a single point on $D$ in this case can be read off from a certain \textit{scattering diagram}. The statement of \cite{GPS} was generalized in \cite{Bou2} to $q$-refined scattering diagrams and generating functions of higher genus Gromov-Witten invariants.
In this work we consider the somewhat complementary case with $D$ a smooth irreducible divisor. We restrict to the case where $X$ has very ample anticanonical bundle $-K_X$, i.e., is a smooth del Pezzo surface of degree $\geq 3$. In this case there is a Fano polytope $Q\subset\mathbb{R}^2$, that is, a convex lattice polytope containing the origin and with all vertices being primitive integral vectors, from which one can construct (see \S\ref{S:smoothing}) a family $(\mathfrak{X}_Q\rightarrow\mathbb{A}^2,\mathfrak{D}_Q)$ such that fixing one coordinate $s\neq 0$ on $\mathbb{A}^2$ one obtains a toric degeneration of $(X,D)$ and fixing $s=0$ gives a toric degeneration of $(X^0,D^0)$, where $X^0$ is a smooth nef toric surface admitting a $\mathbb{Q}$-Gorenstein smoothing to $X$ and $D^0=\partial X^0$ is the toric boundary.
Let $(B,\mathscr{P},\varphi)$ be the \textit{dual intersection complex} of the toric degeneration of $(X,D)$. The affine manifold with singularities $B$ is non-compact without boundary. In \cite{CPS} it is described how to construct a tropical superpotential from such a triple $(B,\mathscr{P},\varphi)$, leading to a Landau-Ginzburg model\footnote{In fact, there is additional information captured in so called \textit{gluing data} (\cite{DataI}, Definition 2.25). In this paper we always make the trivial choice, setting $s_e=1$ for any inclusion $e : \omega \rightarrow \tau$ of cells $\omega,\tau\in\mathscr{P}$, and will not mention gluing data.}. This perfectly fits into the picture, since the idea of the \textit{Gross-Siebert program} is that toric degenerations constructed from Legendre dual polarized polyhedral affine manifolds are mirror to each other, and in fact the mirror of a Fano variety together with a choice of anticanonical divisor is believed to be a Landau-Ginzburg model. The construction involves the scattering calculations described in \cite{GS11}, leading to a \textit{consistent wall structure} $\mathscr{S}_\infty$ on $(B,\mathscr{P},\varphi)$. This is a collection of codimension $1$ polyhedral subsets of $B$ (\textit{slabs} and \textit{walls}) with attached functions describing the gluing of canonical thickenings of affine pieces necessary to obtain a toric degeneration with intersection complex $(B,\mathscr{P},\varphi)$. See \cite{Inv} for an overview of this construction.
\subsection*{The statement}
\label{defi:beta1} For an effective curve class $\underline{\beta}\in H_2^+(X,\mathbb{Z})$ let $\beta$ be the class of $1$-marked stable log maps to $X$ of genus $0$, class $\underline{\beta}$ and maximal tangency with $D$ at a single unspecified point. Let $\mathscr{M}(X,\beta)$ be the moduli space of basic stable log maps of class $\beta$ (see \cite{LogGW}). By the results of \cite{LogGW} this is a proper Deligne-Mumford stack and admits a virtual fundamental class $\llbracket\mathscr{M}(X,\beta)\rrbracket$. It has virtual dimension zero. The corresponding logarithmic Gromov-Witten invariant is defined by integration, i.e., proper pushforward to a point: \[ N_\beta = \int_{\llbracket\mathscr{M}(X,\beta)\rrbracket} 1. \]
\begin{thmintro} \label{thm:trop} Let $\mathfrak{H}_\beta$ be the set of tropical curves that arise as tropicalizations of stable log maps in $\mathscr{M}(X,\beta)$, see Definition \ref{defi:H} for the precise definition. Denote by $\textup{Mult}(h)$ the multiplicity of a tropical curve $h$ (Definition \ref{defi:mult}). Then \[ N_\beta = \sum_{h\in\mathfrak{H}_\beta}\textup{Mult}(h). \] \end{thmintro}
Let $\mathscr{S}_\infty$ be the consistent wall structure defined by the dual intersection complex $(B,\mathscr{P},\varphi)$ of $(X,D)$ via the Gross-Siebert algorithm \cite{GS11}. Figure \ref{fig:main} shows $\mathscr{S}_\infty$ for $(\mathbb{P}^2,E)$ up to order $6$.
\begin{figure}
\caption{The wall structure of $(\mathbb{P}^2,E)$ consistent to order $6$. For the functions attached to unbounded walls see \S\ref{S:calcP2}.}
\label{fig:main}
\end{figure}
The unbounded walls in $\mathscr{S}_\infty$ are all parallel in direction $m_{\textup{out}}\in \Lambda_B$. Here $\Lambda_B$ is the sheaf of integral tangent vectors on $B$ and $m_{\textup{out}}$ is the primitive vector in the unique unbounded direction of $B$ (the upward direction in Figure \ref{fig:main}). Let $f_{\textup{out}}$ be the product of all functions attached to unbounded walls in $\mathscr{S}_\infty$, regarded as elements of $\mathbb{C}\llbracket x\rrbracket$ for $x:=z^{(-m_{\textup{out}},0)}\in\mathbb{C}[\Lambda_B\oplus\mathbb{Z}]$. Then the main theorem is the following. It can be interpreted as a \textit{tropical correspondence theorem}, since the wall structure $\mathscr{S}_\infty$ is combinatorial in nature and supported on the dual intersection complex $(B,\mathscr{P},\varphi)$ of $(X,D)$. Notably, it is a tropical correspondence theorem in a \textit{non-toric} setting, as the smooth divisor $D$ has genus $1$ and thus is non-toric. So far, most such theorems have been obtained only in toric cases, a remarkable exception being \cite{Arg}.
\begin{thmintro} \label{thm:main} \[ \textup{log }f_{\textup{out}} = \sum_{\underline{\beta}\in H_2^+(X,\mathbb{Z})} (D \cdot\underline{\beta}) \cdot N_\beta \cdot x^{D \cdot \underline{\beta}}. \] \end{thmintro}
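For example, for $(\mathbb{P}^2,E)$ every effective curve class is $\underline{\beta}=d[\textup{line}]$ with $E\cdot\underline{\beta}=3d$, so, writing $N_d$ for the sum of the $N_\beta$ with $E\cdot\underline{\beta}=3d$ (cf.\ Definition \ref{defi:Nd}), the formula reads
\[ \textup{log }f_{\textup{out}} = \sum_{d\geq 1} 3d\,N_d\,x^{3d} = 27\,x^3+\tfrac{405}{2}\,x^6+\ldots, \]
where the numerical coefficients use the values $N_1=9$ and $N_2=\tfrac{135}{4}$ computed in \S\ref{S:calc}.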
For $(\mathbb{P}^2,E)$, this correspondence respects the torsion points on $E$: Consider the group law on $E$ with identity a flex point of $E$. The $3d$-torsion points form a subgroup $T_d$ of $S^1\times S^1$ isomorphic to $\mathbb{Z}_{3d}\times\mathbb{Z}_{3d}$. The stable log maps contributing to $N_d$ meet $E$ in such a $3d$-torsion point (Lemma \ref{lem:torsion}). For $P\in\cup_{d\geq 1}T_d$, let $k(P)$ be the smallest integer such that $P\in T_{k(P)}$. Let $N_{d,k}$ be the logarithmic Gromov-Witten invariant of stable log maps contributing to $N_d$ and intersecting $E$ in a point $P$ with $k(P)=k$. In \S\ref{S:torsion} we will show that this is well-defined.
Let $s_{k,l}$ be the number of points in $T_d \simeq \mathbb{Z}_{3d}\times\mathbb{Z}_{3d}$ with $k(P)=k$ that are fixed by $M_l=\left(\begin{smallmatrix}1& 3l\\ 0 & 1\end{smallmatrix}\right)$, but not fixed by $M_{l'}$ for any $l'<l$. Let $r_l$ be the number of points on $S^1$ of order $3l$, defined recursively in Lemma \ref{lem:rl}. For an unbounded wall $\mathfrak{p}\in\mathscr{S}_\infty$ let $l(\mathfrak{p})$ be the smallest integer such that $\textup{log }f_{\mathfrak{p}}$ has non-trivial $x^{3l(\mathfrak{p})}$-coefficient. The number of walls in $\mathscr{S}_\infty$ with $l(\mathfrak{p})=l$ is $r_l$.
\begin{thmintro} \label{thm:torsion} Let $\mathfrak{p}$ be an unbounded wall in $\mathscr{S}_\infty$ with $l(\mathfrak{p})=l$. Then \[ \textup{log }f_{\mathfrak{p}} = \sum_{d=1}^\infty 3d \left(\sum_{k: l\mid k\mid d} \frac{s_{k,l}}{r_l} N_{d,k}\right) x^{3d}. \] \end{thmintro}
Subtracting multiple cover contributions of curves of smaller degree, one obtains log BPS numbers $n_d$ and $n_{d,k}$ (see \S\ref{S:BPS}). Some of the $n_{d,k}$ have been calculated in \cite{Ta1}. The logarithmic Gromov-Witten invariants $N_d$ and $N_{d,k}$ and the log BPS numbers $n_d$ and $n_{d,k}$ are calculated, among other invariants, for $d\leq 6$, in \S\ref{S:calc}. Some of these numbers are new:
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|} \hline $n_{4,1}=14$ & $n_{4,2}=14$ & $n_{4,4}=16$ & $n_{6,1}=927$ & $n_{6,2}=938$ & $n_{6,3}=936$ \\ \hline \end{tabular} \end{center}
\begin{remintro} There is a generalization of the above theorems to $q$-refined wall structures and higher genus logarithmic Gromov-Witten invariants, similar to the maximally degenerated case \cite{Bou2}, since the main arguments used to obtain higher genus statements in \cite{Bou1} and \cite{Bou2}, the gluing and vanishing properties of $\lambda$-classes, are purely local. We briefly sketch these ideas in \S\ref{S:genus}. See also \cite{Bou4}, Theorem 5.2.1.
An extension of the above theorems to $2$-marked invariants and broken lines will be established in \cite{Gra}. Building on this, the author jointly with Helge Ruddat and Eric Zaslow is working on an equality between the proper Landau-Ginzburg potential, defined via broken lines, and the open mirror map \cite{GRZ}.
In \cite{Bou3} Pierrick Bousseau proves an equality of the consistent wall structure $\mathscr{S}_\infty$ for $\mathbb{P}^2$ and a wall structure $\mathscr{S}_{\text{stab}}$ describing the wall crossing behavior of stability conditions on $D^b\text{Coh}(\mathbb{P}^2)$, the bounded derived category of coherent sheaves on $\mathbb{P}^2$. $\mathscr{S}_{\text{stab}}$ can be interpreted as describing wall crossing of counts of coherent sheaves (generalized Donaldson-Thomas invariants) on $\mathbb{P}^2$. In \cite{Bou4}, building on \cite{Bou3} and Theorem \ref{thm:torsion} above, he proves a conjecture of Takahashi (\cite{Ta2}, Conjecture 1.6) relating the $N_{d,k}$ with the primitive invariants $N_{d',d'}$. Yu-Shen Lin \cite{Lin} worked out a symplectic analogue of the correspondence described in this paper. \end{remintro}
\subsection*{Motivation}
The reason for an enumerative meaning of wall structures is the following. By the Strominger-Yau-Zaslow conjecture \cite{SYZ}, mirror dual Calabi-Yau varieties admit mirror dual Lagrangian torus fibrations. To construct the mirror to a given Calabi-Yau, one first constructs the \textit{semi-flat} mirror by dualizing the non-singular torus fibers. Then one corrects the complex structure of the semi-flat mirror such that it extends across the locus of singular fibers. It is expected that these corrections are determined by counts of holomorphic discs in the original variety with boundary on torus fibers \cite{SYZ}\cite{Fuk}.
Kontsevich and Soibelman \cite{KS} showed that in dimension two and with at most nodal singular fibers in the torus fibration, corrections of the complex structure are determined by algebraic self-consistency constraints which can be encoded by trees of gradient flow lines in the fan picture (dual intersection complex) of the degeneration, with certain automorphisms attached to the edges of the trees. From this they constructed a rigid analytic space from $B$, in dimension two.
Under the discrete Legendre transform (\cite{DataI}, {\S}1.4) the gradient flow lines in the fan picture become straight lines in the cone picture (intersection complex). This was used by Gross and Siebert to construct a toric degeneration from the cone picture in any dimension. In the cone picture the self-consistency calculations are described by scattering diagrams (locally) and wall structures (globally). The fact that wall structures are used to construct a complex manifold in the cone picture and at the same time give generating functions for holomorphic curve counts of the fan picture can be seen as an explicit explanation for the connection between deformations and holomorphic curves in mirror symmetry.
In \cite{GHK1} Gross, Hacking and Keel construct the mirror to a log Calabi-Yau surface $(X,D)$ with maximal boundary. They use the above correspondence to define a canonical consistent scattering diagram from the enumerative geometry of $(X,D)$. There is an affine singularity at the vertex of this scattering diagram. Hence, the scattering diagram only gives an open subscheme of the mirror. To obtain the whole mirror they use broken lines to construct theta functions -- certain canonical global sections of line bundles on $\check{X}^\circ$. This gives enough functions to define an embedding of $\check{X}^\circ$ into an affine space. Taking the closure gives the mirror to $(X,D)$ as a partial compactification of $\check{X}^\circ$. It can be defined explicitly as the spectrum of an explicit algebra generated by theta functions, and with multiplication rule defined by the enumerative geometry of $(X,D)$. This has led to the modern viewpoint of \textit{intrinsic mirror symmetry} \cite{Intr1}\cite{Intr2}. It circumvents the constructions of scattering diagrams and broken lines and directly defines the mirror to $(X,D)$ as the spectrum of an algebra with multiplication rule defined by certain \textit{punctured Gromov-Witten invariants} of $(X,D)$ \cite{Intr1}\cite{ACGS2}.
\subsection*{Plan of the paper}
In \S\ref{S:smoothing} we describe how smoothing the boundary of a Fano polytope leads to a family $(\mathfrak{X}_Q\rightarrow\mathbb{A}^2,\mathfrak{D}_Q)$ as above. Fixing one parameter $s\neq 0$ gives a toric degeneration $(\mathfrak{X}\rightarrow\mathbb{A}^1,\mathfrak{D})$ of $(X,D)$. It contains logarithmic singularities lying on the central fiber, corresponding to affine singularities in the dual intersection complex $(B,\mathscr{P},\varphi)$. In \S\ref{S:resolution} we describe a small log resolution of these singularities, leading to a log smooth degeneration $(\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1,\tilde{\mathfrak{D}})$ of $(X,D)$. In \S\ref{S:tropmap} we describe tropicalizations of stable log maps to the central fiber of $(\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1,\tilde{\mathfrak{D}})$ and show that there is a finite number of them. The tropicalizations induce a refinement of $\mathscr{P}$ and hence a logarithmic modification. This is a degeneration $(\tilde{\mathfrak{X}}_d\rightarrow\mathbb{A}^1,\tilde{\mathfrak{D}}_d)$ of $(X,D)$ such that stable log maps to the central fiber are torically transverse. This enables us to use the degeneration formula of logarithmic Gromov-Witten theory in \S\ref{S:degformula}. It gives a description of $N_\beta$ in terms of invariants $N_V$ labeled by vertices of the tropical curves found in \S\ref{S:tropmap}. In \S\ref{S:scattering} we show that the scattering calculations of \cite{GS11} give a similar formula for the logarithm of functions attached to unbounded walls in the consistent wall structure $\mathscr{S}_\infty$. This ultimately leads to a proof of Theorem \ref{thm:main}. In \S\ref{S:torsion} we explain that this correspondence respects the torsion points on $E$, leading to Theorem \ref{thm:torsion}. In \S\ref{S:genus} we discuss higher genus versions of the above statements. In \S\ref{S:calc} we explicitly calculate some invariants for $\mathbb{P}^2$, $\mathbb{P}^1\times\mathbb{P}^1$ and the cubic surface. In Appendix \ref{A:artin} we give some background on logarithmic modifications.
\section{Deforming toric degenerations} \label{S:smoothing}
\begin{defi} A \textit{smooth very ample log Calabi-Yau pair} is a log Calabi-Yau pair $(X,D)$ consisting of a smooth del Pezzo surface $X$ of degree $d\geq 3$ and a smooth very ample anticanonical divisor $D$. \end{defi}
\subsection{The cone picture}
\begin{con} \label{con:family1} Let $M\simeq\mathbb{Z}^2$ be a lattice and let $M_{\mathbb{R}}=M\otimes_{\mathbb{Z}}\mathbb{R}$ be the corresponding vector space. Let $Q\subset M_{\mathbb{R}}$ be a Fano polytope, i.e., a convex lattice polytope containing the origin and with all vertices being primitive integral vectors. The polytope $Q$ can be seen as an affine manifold via its embedding into $M_{\mathbb{R}}\simeq\mathbb{R}^2$. Let $\check{\mathscr{P}}$ be the polyhedral decomposition of $Q$ obtained by inserting edges connecting the vertices of $Q$ to the origin. Let $\check{\varphi} : Q \rightarrow \mathbb{R}$ be the strictly convex piecewise affine function on $(Q,\check{\mathscr{P}})$ defined by $\check{\varphi}(0)=0$ and $\check{\varphi}(v)=1$ for all vertices $v$ of $Q$. This means $\check{\varphi}$ is affine on the maximal cells of $\check{\mathscr{P}}$ and locally at each vertex $v$ of $\check{\mathscr{P}}$ gives a strictly convex piecewise affine function on the fan $\Sigma_v$ describing $\check{\mathscr{P}}$ locally. The triple $(Q,\check{\mathscr{P}},\check{\varphi})$ is a \textit{polarized polyhedral affine manifold} (\cite{GHS}, Construction 1.1). From this one obtains a toric degeneration of a toric del Pezzo surface with cyclic quotient singularities via the construction of Mumford \cite{Mum} (see also \cite{Inv}, \S1) as follows. Let
\[ Q_{\check{\varphi}} = \left\{(m,h) \in M_{\mathbb{R}} \times \mathbb{R} \ | \ h \geq \check{\varphi}(m), m \in Q\right\} \] be the convex upper hull of $\check{\varphi}$ and let \[ C(Q_{\check{\varphi}}) = \text{cl}\left(\mathbb{R}_{\geq 0} \cdot (Q_{\check{\varphi}} \times \{1\})\right) \subset M_{\mathbb{R}}\times\mathbb{R}\times\mathbb{R} \] be the cone over $Q_{\check{\varphi}}$. The ring $\mathbb{C}[C(Q_{\check{\varphi}})\cap(M\times\mathbb{Z}\times\mathbb{Z})]$ is graded by the last component and we can define \[ \mathfrak{X}^0 := \text{Proj}\left(\mathbb{C}[C(Q_{\check{\varphi}})\cap (M\times\mathbb{Z}\times\mathbb{Z})]\right). \] By construction $\mathfrak{X}^0$ comes with an embedding into $\mathbb{P}^{N-1}\times\textup{Spec }\mathbb{C}[t]$, where $N$ is the number of lattice points of $Q$ and $t=z^{(0,0,1,0)}$. Projection to the last coordinate gives a toric degeneration $\mathfrak{X}^0 \rightarrow \mathbb{A}^1$ of a toric del Pezzo surface $X^0$ with quotient singularities. The polytope $Q$ is the momentum polytope of $X^0$ and the Fano condition on $Q$ corresponds to the condition that $X$ has very ample anticanonical bundle. The divisor \[ \mathfrak{D}^0=\{z^{(0,0,0,1)}=0\} \subset \mathfrak{X}^0 \] defined by setting the coordinate corresponding to the origin in $Q$ to zero is a very ample anticanonical divisor, since it corresponds to the pullback of the line bundle $\mathcal{O}_{\mathbb{P}^{N-1}}(1)$, which is the anticanonical bundle on the general fiber of $\mathfrak{X}^0$. By construction, the polarized polyhedral affine manifold $(Q,\check{\mathscr{P}},\check{\varphi})$ is the \textit{intersection complex} (\cite{DataI}, {\S}4.2) of the toric degeneration $\mathfrak{X}^0$.
\begin{rem}
Let $M'\subset M$ be the sublattice generated by the vertices of $\check{\mathscr{P}}$. We naturally have an embedding $\mathfrak{X}^0 \subset \mathbb{P}_{\check{B},\check{\mathscr{P}}}\times\mathbb{A}^1$, where $\mathbb{P}_{\check{B},\check{\mathscr{P}}}$ is the weighted projective space of dimension $|Q\cap M'|-1$ and weights $(1,\ldots,1,d)$ for $d$ the index of $M'$ in $M$. \end{rem}
One can deform $\mathfrak{X}^0$ by perturbing its defining equations. This means we add a term $t^lsf$ to each equation, where $l$ is the lowest non-trivial $t$-order in the defining equations of $\mathfrak{X}^0$, $s\in\mathbb{A}^1$ is the deformation parameter and $f$ is a general polynomial defining a section of the anticanonical bundle of the general fiber of $\mathfrak{X}^0$. We give some examples below. By \cite{Pri1}, Theorem 1.1, this leads to a flat $2$-parameter family \[ (\mathfrak{X}_Q \rightarrow \mathbb{A}^2,\mathfrak{D}_Q) \] such that \begin{compactenum}[(1)] \item for $s=0$ we have a toric degeneration $(\mathfrak{X}^0\rightarrow\mathbb{A}^1,\mathfrak{D}^0)$ of a log Calabi-Yau pair $(X^0,D^0)$ consisting of a toric del Pezzo surface with quotient singularities $X^0$ and its toric boundary $D^0=\partial X^0$; \item for $s\neq 0$ we have a toric degeneration $(\mathfrak{X}\rightarrow\mathbb{A}^1,\mathfrak{D})$ of a smooth log Calabi-Yau pair $(X,D)$ consisting of a $\mathbb{Q}$-Gorenstein smoothing $X$ of $X^0$, i.e., a smooth del Pezzo surface of the same degree, and a smooth anticanonical divisor $D$. For different choices of $s\neq 0$ these toric degenerations are related via smooth deformation. We only care about $(X,D)$ up to smooth deformation, since log Gromov-Witten invariants are invariant under such deformations (\cite{MR}, Appendix A). \end{compactenum} \end{con}
\begin{notation} We write the fibers of $(\mathfrak{X}_Q\rightarrow\mathbb{A}^2,\mathfrak{D}_Q)$ as $(X_t^s,D_t^s)$, where $s$ is the deformation parameter and $t$ is the parameter for the toric degeneration. We denote the $1$-parameter families defined by fixing one parameter by $(\mathfrak{X}_t\rightarrow\mathbb{A}^1,\mathfrak{D}_t)$ and $(\mathfrak{X}^s\rightarrow\mathbb{A}^1,\mathfrak{D}^s)$, respectively. When we fix a parameter different from zero, we sometimes omit the index, e.g. $X=X_t^s$ for $s,t\neq 0$. When writing $(\mathfrak{X}\rightarrow\mathbb{A}^1,\mathfrak{D})$ we will always mean the toric degeneration $(\mathfrak{X}^s\rightarrow\mathbb{A}^1,\mathfrak{D}^s)$ for some $s\neq 0$. By (2) above this notation makes sense. Moreover, we often suppress the divisor in the notation. \end{notation}
Let $\check{B}$ be the affine manifold with singularities obtained from $Q$ by introducing affine singularities on the interior edges of $\check{\mathscr{P}}$ such that the boundary of $\check{B}$ is a straight line, and let $(\check{B},\check{\mathscr{P}},\check{\varphi})$ be the corresponding polarized polyhedral affine manifold. Of course, there is a choice of the exact position of the affine singularities along the interior edges. In fact, one could form families of affine manifolds as in \cite{Pri1}. However, we don't care about the exact position, as we only care about the degeneration $(\mathfrak{X}\rightarrow\mathbb{A}^1,\mathfrak{D})$ up to deformation. So we may place the affine singularities in the middle of the interior edges. Note that in \cite{GS11} the affine singularities are required to have irrational coordinates, since otherwise some walls in the induced wall structure may cross them. However, this doesn't happen in our special case, so the middle of the edges will be a valid choice.
\begin{prop} The intersection complex of $\mathfrak{X}\rightarrow\mathbb{A}^1$ is $(\check{B},\check{\mathscr{P}},\check{\varphi})$, while the intersection complex of $\mathfrak{X}^0\rightarrow\mathbb{A}^1$ is $(Q,\check{\mathscr{P}},\check{\varphi})$. \end{prop}
\begin{proof} First note that $\check{B}$ as above exists, since by reflexivity for any vertex $v$ the integral tangent vectors of any adjacent vertex together with $v-v_0$ generate the full lattice (see \cite{CPS}, Construction 6.2). Here $v_0$ is the unique interior vertex.
The $t$-constant term in the defining equation for $X^0$ is independent of the variable $z^{(0,0,0,1)}$, since $(0,0,0,1)$ is the only lattice point at which $\check{\varphi}=0$. So the central fiber of $\mathfrak{X}^s\rightarrow\mathbb{A}^1$ is independent of $s$. As a consequence, the maximal cells of the intersection complex of $\mathfrak{X}^s\rightarrow\mathbb{A}^1$ are the same for each $s$. So the parameter $s$ only changes the affine structure, given by the fan structures at vertices of the intersection complex. These fan structures are defined by local models for the family at zero-dimensional toric strata of the central fiber.
For $s\neq 0$, locally at the zero-dimensional toric stratum of $X_0^s$ corresponding to a vertex $v$ on the boundary of $\check{B}$, the family $\mathfrak{X}^s$ is given by $\{xy=t^l\}\subset\mathbb{A}^4$ for some $l>0$. So the fan structure at $v$ is given by the fan of $\mathbb{P}^1\times\mathbb{A}^1$. This shows that the boundary is a straight line.
Note that for $s=0$ locally at a $0$-dimensional stratum corresponding to $v\in\partial \check{B}$ the family $\mathfrak{X}^0$ is given by $\{xy=t^lw\}\subset\mathbb{A}^4$ for some $l>0$. The fan structure at $v$ is given by the fan with ray generators $(1,0)$, $(0,1)$ and $(1,1)$. So the affine charts are compatible and there are no affine singularities. \end{proof}
\begin{expl} \label{expl:P2} Figure \ref{fig:P2} shows the intersection complex $(\check{B},\check{\mathscr{P}},\check{\varphi})$ of a toric degeneration of the log Calabi-Yau pair $(\mathbb{P}^2,E)$, where $E\subset\mathbb{P}^2$ is a smooth anticanonical divisor, i.e., an elliptic curve. This is obtained by smoothing a toric degeneration of $(\mathbb{P}^2,\partial\mathbb{P}^2)$, where $\partial\mathbb{P}^2$ is the toric boundary of $\mathbb{P}^2$. One can write down such a smoothing explicitly as follows. \begin{eqnarray*} \mathfrak{X}_Q &=& V\left(XYZ-t^3(W+sf_3)\right) \subset \mathbb{P}(1,1,1,3) \times \mathbb{A}^2 \\ \mathfrak{D}_Q &=& V(W)\subset\mathfrak{X}_Q \end{eqnarray*} Here $X,Y,Z,W$ are the coordinates of $\mathbb{P}(1,1,1,3)$, as shown in Figure \ref{fig:P2}, and $f_3$ is a general homogeneous degree $3$ polynomial in $X,Y,Z$.
\begin{figure}\label{fig:P2}
\end{figure}
For $t\neq 0$ we have $X_t^s=\mathbb{P}^2$, since we can eliminate $W$ by $W=t^{-3}XYZ-sf_3$. For $s\neq 0$, $D_t^s\subset\mathbb{P}^2$ is defined by a general degree $3$ polynomial, so is an elliptic curve, and $D_t^0\subset\mathbb{P}^2$ is a cycle of three lines. For $t=0$ we have $X_0^s=V(XYZ)$ in $\mathbb{P}(1,1,1,3)$. This is a union of three $\mathbb{P}(1,1,3)$ glued along toric divisors as described by the combinatorics of Figure \ref{fig:P2}. $D_0^s$ is again a cycle of three lines. \end{expl}
\begin{expl} \label{expl:cubic} Figure \ref{fig:cubic} shows the intersection complex of a toric degeneration of a smooth cubic surface $X$ (del Pezzo surface of degree $3$) obtained by smoothing the Fano polytope of the toric Gorenstein del Pezzo surface $X^0 = \mathbb{P}^2/\mathbb{Z}_3$, where $\mathbb{Z}_3$ acts by $(x,y,z) \mapsto (x,\zeta y,\zeta^{-1}z)$ for $\zeta$ a nontrivial third root of unity. This can be given explicitly as follows. \begin{eqnarray*} \mathfrak{X}_Q &=& V\left(XYZ-t^3(W^3+sf_3)\right) \subset \mathbb{P}^3\times\mathbb{A}^2 \\ \mathfrak{D}_Q &=& V(W) \subset \mathfrak{X}_Q \end{eqnarray*} Again, $X,Y,Z,W$ are the projective coordinates and $f_3$ is a general homogeneous degree $3$ polynomial in $X,Y,Z$. For $t,s\neq 0$, $X_t^s$ is a smooth cubic surface, and $D_t^s$ is a hyperplane section. For $t\neq 0,s=0$, $X_t^0$ is given by $V(XYZ-tW^3)\subset\mathbb{P}^3$, thus is a $\mathbb{Z}_3$-quotient of $\mathbb{P}^2$, and $D_t^0$ is a cycle of three lines. For $t=0$ we have $X_0^s=V(XYZ)\subset\mathbb{P}^3$. This is a union of three $\mathbb{P}^2$ glued as described by the combinatorics of Figure \ref{fig:cubic}, and again $D_0^s$ is a cycle of three lines. \end{expl}
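To make the identification in Example \ref{expl:cubic} explicit: for $t\neq 0$ and a choice of third root $t^{1/3}$, the morphism
\[ \mathbb{P}^2\rightarrow\mathbb{P}^3, \quad (x:y:z)\mapsto\left(x^3:y^3:z^3:t^{-1/3}xyz\right), \]
maps onto $V(XYZ-tW^3)$, since $x^3\cdot y^3\cdot z^3=(xyz)^3$, and its fibers are exactly the orbits of the $\mathbb{Z}_3$-action above; the monomials $x^3,y^3,z^3,xyz$ span the degree $3$ invariants of this action.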
\begin{figure}\label{fig:cubic}
\end{figure}
\begin{expl} \label{expl:8'a} Figure \ref{fig:8'a} shows the intersection complex of a toric degeneration of $\mathbb{P}^1\times\mathbb{P}^1$ obtained by smoothing the Fano polytope of $\mathbb{P}(1,1,2)$. This can be given explicitly as follows, with $f_2$ a general homogeneous degree $2$ polynomial in $X,Y,Z,U$ and $W$ the degree $2$ coordinate, \begin{eqnarray*} \mathfrak{X}_Q &=& V\left(XY-U^2+t^2sf_2,ZU-t^2(W+sf_2)\right)\subset\mathbb{P}(1,1,1,1,2)\times\mathbb{A}^2 \\ \mathfrak{D}_Q &=& V(W)\subset\mathfrak{X}_Q \end{eqnarray*} Indeed, $t=0$ implies $Z=0$ or $U=0$ which in turn implies $X=0$ or $Y=0$. We have $V(X)=V(Y)=\mathbb{P}(1,1,2)$ and $V(Z)=\{XY=U^2\}\subset\mathbb{P}(1,1,1,2)$ which is isomorphic to $\mathbb{P}(1,1,4)$. For $t\neq 0$ we have $X_t^s=\{XY=U^2+t^2sf_2\}\subset\mathbb{P}^3$ by elimination of $W$. For $s=0$ this is a singular quadric $X_t^0\simeq\mathbb{P}(1,1,2)$. For $s\neq 0$ it is a smooth quadric $X_t^s\simeq\mathbb{P}^1\times\mathbb{P}^1$. Again, $D_t^s$ is smooth if and only if $t,s\neq 0$. \end{expl}
\begin{figure}\label{fig:8'a}
\end{figure}
\begin{defi} \label{defi:toricmodel} Let $X$ be a smooth del Pezzo surface. A \textit{toric model} for $X$ is a toric del Pezzo surface with cyclic quotient singularities $X^0$ that admits a $\mathbb{Q}$-Gorenstein deformation to $X$. \end{defi}
\begin{rem} Note that there may be different toric models $X^0$ for the same smooth del Pezzo surface $X$. In fact, the Fano polytopes $Q$ of such $X^0$ are related via \textit{combinatorial mutations} (\cite{ACC+}, Theorem 3, see also \cite{CGG+}). \end{rem}
\begin{prop} For each smooth del Pezzo surface $X$ with very ample anticanonical class there exists a toric model $X^0$ with at most Gorenstein singularities. \end{prop}
\begin{proof} For any $\mathbb{Q}$-Gorenstein deformation $\mathfrak{X}\rightarrow\mathbb{A}^1$, the relative canonical class $K_{\mathfrak{X}/\mathbb{A}^1}$ is $\mathbb{Q}$-Cartier. By definition, the degree of $X^0$ is the self-intersection of its (anti)canonical class. Hence, the degree of $X$ equals the degree of any of its toric models $X^0$. We need to show that the degrees of the given toric models are the ones shown in Figure \ref{fig:list}. The Fano polytope $Q$ of $X^0$ is exactly the Newton polytope of its anticanonical class. By the duality between subdivisions of Newton polytopes and tropical curves, the self intersection can easily be computed by intersecting two tropical curves dual to the Fano polytope $Q$. For (3a), i.e., $X^0=\mathbb{P}^2/\mathbb{Z}_3$ as in Example \ref{expl:cubic}, the intersection of tropical curves is the following: \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,0) -- (-1,-1); \draw (0,0) -- (-1,2); \draw (0,0) -- (2,-1); \draw (3,1) -- (1,-1); \draw (3,1) -- (2,3); \draw (3,1) -- (5,0); \end{tikzpicture} \end{center}
The determinant of primitive tangent vectors at the intersection point is $|\textup{det}\left(\begin{smallmatrix}\textup{ }\ 1&2\\-1&1\end{smallmatrix}\right)|=3$. Indeed, this is the degree of $\mathbb{P}^2/\mathbb{Z}_3$. Similarly one computes the degrees of the other cases in Figure \ref{fig:list}. Alternatively, one can use the fact that the degree of a del Pezzo surface equals $|\check{B}\cap M|+1$ (see \cite{CPS}, {\S}6).
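To spell out one more case: for $X^0=\mathbb{P}^2$ as in Example \ref{expl:P2}, each edge of $Q$ has lattice length $3$, so the dual tropical curve is a tropical line with all three rays taken with weight $3$. Two generic translates meet in a single point, where, say, rays in the directions $(1,1)$ and $(-1,0)$ cross, contributing $3\cdot 3\cdot|\textup{det}\left(\begin{smallmatrix}1&-1\\1&0\end{smallmatrix}\right)|=9$, the degree of $\mathbb{P}^2$.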
There are two smooth del Pezzo surfaces of degree $8$, the blow up of $\mathbb{P}^2$ at a point and $\mathbb{P}^1\times\mathbb{P}^1$. The del Pezzo surface $X^0$ in case (8'a) is a toric model for $\mathbb{P}^1\times\mathbb{P}^1$. The other cases are determined, up to smooth deformations, by the degree, since del Pezzos of degree $\neq 8$ have a connected moduli space. \end{proof}
\begin{figure}\label{fig:list}
\end{figure}
\subsection{Fan picture and refinement}
Let $Q$ be a Fano polytope and let $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ be the family from Construction \ref{con:family1}. Let $(\check{B},\check{\mathscr{P}},\check{\varphi})$ be the intersection complex of the toric degeneration $\mathfrak{X}:=\mathfrak{X}^{s\neq 0}\rightarrow\mathbb{A}^1$, i.e., one of the polarized polyhedral affine manifolds in Figure \ref{fig:list}. Performing the discrete Legendre transform (\cite{DataI}, \S1.4) we obtain another polarized polyhedral affine manifold that is the dual intersection complex (\cite{DataI}, \S4.1) of $\mathfrak{X}\rightarrow\mathbb{A}^1$.
\begin{defi} \label{defi:sigma0} Let $\sigma_0$ be the unique bounded maximal cell of the dual intersection complex of $\mathfrak{X}\rightarrow\mathbb{A}^1$. \end{defi}
\begin{figure}
\caption{Dual intersection complexes $(B,\mathscr{P},\varphi)$ of smooth very ample log Calabi-Yau pairs. The shaded regions are cut out and the dashed lines are mutually identified. Compare this with \cite{KM}, Figure 2, and \cite{Pum}, Figure 5.15.}
\label{fig:listb}
\end{figure}
\begin{con} \label{con:family} Refine the dual intersection complex of $\mathfrak{X}\rightarrow\mathbb{A}^1$ by introducing rays starting at the origin and pointing to the integral points of the bounded maximal cell. This yields another polarized polyhedral affine manifold $(B,\mathscr{P},\varphi)$, as shown in Figure \ref{fig:listb}. A refinement of the dual intersection complex gives a logarithmic modification of $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ (see \S\ref{A:artin}). Since the deformation parameter $s$ is not part of the logarithmic data, the logarithmic modification does not change the general fiber $X=X_{t\neq0}^{s\neq0}$. It can be constructed as follows. \begin{compactenum}[(1)] \item Blow up $\mathfrak{X}_Q\subset\mathbb{P}_{\check{\mathscr{P}}}\times\mathbb{A}^2$ at $X_{\sigma_0}\times\{(0,0)\}$, where $X_{\sigma_0}$ is the point corresponding to $\sigma_0$. This corresponds to inserting edges from the origin to corners of $\sigma_0$. \item Introducing the ray starting at the origin and pointing in the direction of an integral vector on the interior of a bounded edge $\omega$ of $\mathscr{P}$ corresponds to a blow up at $X_\omega\times\mathbb{A}^1\times\{s=0\}$, where $X_\omega=\mathbb{P}^1$ is the component corresponding to $\omega$, i.e., the line through the points corresponding to the bounded maximal cell and the unbounded maximal cell containing $\omega$, respectively. \end{compactenum} In cases where $X^0$ is not smooth we refine the asymptotic fan of $\mathscr{P}$. This corresponds to a toric blow up of the toric model $X^0$. This blow up is nef but not necessarily ample. By \cite{KM}, Proposition A.2, the deformation of such a nef toric model still is $(X,D)$ and has Picard group isomorphic to $\text{Pic}(X)$. Note that $\text{Pic}(X)$ is isomorphic to $H_2(X,\mathbb{Z})$ for the del Pezzo surface $X$ by the Kodaira vanishing theorem and Poincar\'e duality. \end{con}
\begin{defi} By abuse of notation, from now on $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ will denote the logarithmic modification from Construction \ref{con:family}. Note that $X^0$ is smooth, toric and nef, but not necessarily ample. We call it a \textit{smooth toric model} of $X$. If $X^0$ is ample it coincides with the toric model of $X$ (Definition \ref{defi:toricmodel}). \end{defi}
\begin{rem} Note that $(B,\mathscr{P})$ is simple (\cite{DataI}, Definition 1.60), since all affine singularities have monodromy $\left(\begin{smallmatrix}1& 1\\ 0& 1\end{smallmatrix}\right)$ in suitable coordinates. Thus we can apply the reconstruction theorem (\cite{GS11}, Proposition 2.42) together with the construction of a tropical superpotential from \cite{CPS} to construct the mirror Landau-Ginzburg model to $(X,D)$. \end{rem}
\begin{expl} \label{expl:8'a2} Consider the smoothing of $\mathbb{P}(1,1,2)$ to $\mathbb{P}^1\times\mathbb{P}^1$ (case (8'a)) from Example \ref{expl:8'a}. The logarithmic modification from Construction \ref{con:family} is a $2$-parameter family $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ such that $\mathfrak{X}\rightarrow\mathbb{A}^1$ is a toric degeneration of $\mathbb{P}^1\times\mathbb{P}^1$ and $\mathfrak{X}^0\rightarrow\mathbb{A}^1$ is a toric degeneration of the Hirzebruch surface $\mathbb{F}_2$, the $\mathbb{P}^1$-bundle over $\mathbb{P}^1$ given by $\mathbb{F}_2=\mathbb{P}(\mathcal{O}_{\mathbb{P}^1}\oplus\mathcal{O}_{\mathbb{P}^1}(2))$. This is the smooth surface obtained by blowing up the singular point on $\mathbb{P}(1,1,2)$, corresponding to the subdivision of the asymptotic fan given in Figure \ref{fig:8'a2}.
The Picard group $\text{Pic}(\mathbb{F}_2)\simeq H_2(\mathbb{F}_2,\mathbb{Z})$ is generated by the class of a fiber $F$ and the class of a section, e.g., the exceptional divisor $E$ of the blow up. The intersection numbers are $F^2=0$, $E^2=-2$ and $E\cdot F=1$. The anticanonical bundle is $-K_{\mathbb{F}_2}=2F+S+E=4F+2E$, where $S=2F+E$ is the class of a section different from the exceptional divisor. The classes of curves corresponding to the rays in the fan of $\mathbb{F}_2$ are given in Figure \ref{fig:8'a2}.
\begin{figure}\label{fig:8'a2}
\end{figure}
The Picard group of $\mathbb{P}^1\times\mathbb{P}^1$ is generated by the class of a bidegree $(1,0)$ curve $L_1$ and a bidegree $(0,1)$ curve $L_2$, with intersection numbers $L_1^2=0$, $L_2^2=0$ and $L_1\cdot L_2=1$. There is an isomorphism \[ \text{Pic}(\mathbb{F}_2) \xrightarrow{\raisebox{-0.7ex}[0ex][0ex]{$\sim$}} \text{Pic}(\mathbb{P}^1\times\mathbb{P}^1), \ F \mapsto L_2, \ E \mapsto L_1-L_2. \] Note that there is another isomorphism by the symmetry $L_1\leftrightarrow L_2$ and we made a choice here, fixed by the deformation $\mathfrak{X}^0\hookrightarrow\mathfrak{X}_Q$. We will use this isomorphism in \S\ref{S:calc8'a} to calculate the logarithmic Gromov-Witten invariants of $\mathbb{P}^1\times\mathbb{P}^1$ in an alternative way. \end{expl}
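Note that the isomorphism in Example \ref{expl:8'a2} respects intersection numbers and anticanonical classes: $F^2=0\mapsto L_2^2=0$, $E\cdot F=1\mapsto(L_1-L_2)\cdot L_2=1$, $E^2=-2\mapsto(L_1-L_2)^2=-2$, and $-K_{\mathbb{F}_2}=4F+2E\mapsto 4L_2+2(L_1-L_2)=2L_1+2L_2=-K_{\mathbb{P}^1\times\mathbb{P}^1}$.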
\subsection{Affine charts} \label{S:affinecharts}
Figure \ref{fig:listb} shows the dual intersection complexes $(B,\mathscr{P},\varphi)$ in the chart of $\sigma_0$ (Definition \ref{defi:sigma0}). The shaded regions are cut out and the dashed lines are mutually identified, so in fact all unbounded edges are parallel.
\begin{defi} \label{defi:mout} Let $m_{\text{out}}\in\Lambda_B$ denote the primitive integral tangent vector pointing in the unique unbounded direction on $B$. \end{defi}
\begin{figure}
\caption{$(B,\mathscr{P},\varphi)$ for $(\mathbb{P}^2,E)$ in the chart of an unbounded maximal cell. The dark region is cut out and the dashed lines are mutually identified. A straight line is shown in red.}
\label{fig:dualb}
\end{figure}
\begin{expl} Figure \ref{fig:dualb} shows the dual intersection complex $(B,\mathscr{P},\varphi)$ of $(\mathbb{P}^2,E)$ in the chart of an unbounded maximal cell. Intuitively, this picture can be obtained by mutually gluing the dashed lines in Figure \ref{fig:listb}, (9). The two horizontal dashed lines are identified. The monodromy transformation by passing across the upper horizontal dashed line is given by $\Lambda_B \rightarrow \Lambda_B, m \mapsto \left(\begin{smallmatrix}1&9\\ 0&1\end{smallmatrix}\right) \cdot m$. \end{expl}
We can extend the description of the affine structure across the horizontal dashed line by giving a chart of a discrete covering space $\bar{B}$ of $B$ (Figure \ref{fig:dualc}). Passing from one fundamental domain to an adjacent one amounts to applying the monodromy transformation by crossing the horizontal dashed line in $B$.
This gives a trivialization $\Lambda_{\bar{B}} \simeq M = \mathbb{Z}^2$ on $\bar{B}\setminus(\textup{Int}(\bar{\sigma_0})\cup\bar{\Delta})$, where $\bar{\sigma_0}$ and $\bar{\Delta}$ are the preimages of the bounded maximal cell $\sigma_0$ and the discriminant locus $\Delta$, respectively. We will see in Lemma \ref{lem:global} that the consistent wall structure $\mathscr{S}_\infty$ defined by $(B,\mathscr{P},\varphi)$ has support disjoint from $\text{Int}(\sigma_0)$. Hence, the whole scattering procedure can be described in this affine chart of the covering space $\bar{B}$. This allows for a simple implementation of the scattering algorithm (see \S\ref{S:calc}).
\begin{figure}
\caption{A chart of a covering space $\bar{B}$ of $B$ with fundamental domain the white region (including one of the rays on its border). The preimage of the straight line from Figure \ref{fig:dualb} is shown in red.}
\label{fig:dualc}
\end{figure}
\section{Resolution of log singularities} \label{S:resolution}
Let $Q$ be a Fano polytope and consider the family $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ from Construction \ref{con:family}. Equip $\mathbb{A}^2$ with the divisorial log structure defined by $V(t)\subset\mathbb{A}^2$ and $\mathfrak{X}_Q$ with the divisorial log structure defined by $\mathfrak{X}_0\cup\mathfrak{D}_Q\subset\mathfrak{X}_Q$, that is, the sheaf of monoids \[ \mathcal{M}_{(\mathfrak{X}_Q,\mathfrak{X}_0\cup\mathfrak{D}_Q)} := (j_\star\mathcal{O}_{\mathfrak{X}_Q\setminus (\mathfrak{X}_0\cup\mathfrak{D}_Q)}^\times)\cap\mathcal{O}_{\mathfrak{X}_Q}, \quad j : \mathfrak{X}_Q\setminus (\mathfrak{X}_0\cup\mathfrak{D}_Q) \hookrightarrow \mathfrak{X}_Q. \] For an introduction to log structures see e.g. \cite{Kat1} or \cite{Gr10}, {\S}3.
If we consider the fibers $X_t^s$ or the families $\mathfrak{X}_t\rightarrow\mathbb{A}^1$ or $\mathfrak{X}^s\rightarrow\mathbb{A}^1$ as log schemes, we always mean equipped with the log structure by restriction of the above log structure. Now for each $s\in\mathbb{A}^1$ the family $\mathfrak{X}^s\rightarrow\mathbb{A}^1$ is log smooth away from finitely many points on the central fiber, corresponding to the affine singularities of the dual intersection complex $(B,\mathscr{P},\varphi)$. At these points $\mathfrak{X}^s$ is locally given by $\textup{Spec }\mathbb{C}[x,y,w,t]/(xy-t^l(w+s))$ with log structure given by $V(t)\cup V(w)$. This is isomorphic to $\text{Spec }\mathbb{C}[x,y,\tilde{w},t]/(xy-t^l\tilde{w})$ with $\tilde{w}=w+s$. The log structure is given by $V(t)\cup V(\tilde{w})$ for $s=0$ and by $V(t)$ for $s\neq 0$. Arguments as in \cite{Gr10}, Example 3.20, show that for $s\neq 0$ this is not fine at the point given by $x=y=w=t=0$. For $s=0$ the log structure is fine saturated but not log smooth. Following \cite{DataII}, Lemma 2.12, we describe a small log resolution $\tilde{\mathfrak{X}}^s \rightarrow \mathfrak{X}^s$ such that $\tilde{\mathfrak{X}}^s$ is fine and log smooth over $\mathbb{A}^1$.
\subsection{The local picture} \label{S:resloc}
$\text{Spec }\mathbb{C}[x,y,\tilde{w},t]/(xy-t^l\tilde{w})$ is the affine toric variety defined by the cone $\sigma$ generated by $(0,0,1)$, $(0,1,0)$, $(1,0,1)$ and $(l,1,0)$. In fact, \begin{eqnarray*} \textup{Spec }\mathbb{C}[\sigma^\vee \cap \mathbb{Z}^3] &=& \textup{Spec }\mathbb{C}[z^{(1,0,0)},z^{(-1,l,1)},z^{(0,0,1)},z^{(0,1,0)}] \\ &=& \textup{Spec }\faktor{\mathbb{C}[x,y,\tilde{w},t]}{(xy-\tilde{w}t^l)}. \end{eqnarray*} We obtain a toric blow up by subdividing the fan consisting of the single cone $\sigma$. There are two ways of doing this and they are related by a \textit{flop}. We choose the subdivision $\Sigma$ as in \cite{DataII}, Lemma 2.12, with maximal cones $\sigma_1$ generated by $(0,0,1)$, $(1,0,1)$ and $(0,1,0)$, and $\sigma_2$ generated by $(l,1,0)$, $(1,0,1)$ and $(0,1,0)$.
\begin{figure}
\caption{Generators of the cone defining a toric model of a log singularity and a choice of subdivision.}
\label{fig:cone}
\end{figure}
These cones define affine toric varieties \begin{eqnarray*} X_{\sigma_1} &=& \textup{Spec }\mathbb{C}[z^{(1,0,0)},z^{(0,1,0)},z^{(-1,0,1)}] = \textup{Spec }\mathbb{C}[x,t,u] = \mathbb{A}^3, \\ X_{\sigma_2} &=& \textup{Spec }\mathbb{C}[z^{(-1,l,1)},z^{(0,0,1)},z^{(0,1,0)},z^{(1,0,-1)}] = \textup{Spec }\mathbb{C}[y,\tilde{w},t,v]/(yv-t^l), \\ X_{\sigma_{12}} &=& \textup{Spec }\mathbb{C}[z^{(1,0,0)},z^{(0,1,0)},z^{\pm(-1,0,1)}] = \textup{Spec }\mathbb{C}[x,t,u^{\pm1}] = \mathbb{A}^2 \times \mathbb{G}_m. \end{eqnarray*} The toric variety $X_\Sigma$ defined by $\Sigma$ is obtained by gluing $X_{\sigma_1}$ and $X_{\sigma_2}$ along $X_{\sigma_{12}}$. This is the fibered coproduct (with $u=U/V$ and $v=V/U$) \[ X_\Sigma = X_{\sigma_1} \amalg_{X_{\sigma_{12}}} X_{\sigma_2} = \textup{Proj }\faktor{\mathbb{C}[x,y,\tilde{w},t][U,V]}{(\tilde{w}V-xU,yV-t^lU)}. \] Note that we take $\textup{Proj}$ of the polynomial ring with variables $U,V$ over the ring $\mathbb{C}[x,y,\tilde{w},t]$, so only $U,V$ are homogeneous coordinates, of degree $1$. The grading is given by degree in $U$ and $V$. The exceptional set of the resolution $X_\Sigma\rightarrow X_\sigma$ is a line contained in the irreducible component of the central fiber $X_{\Sigma,0}$ given by $y=0$.
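Explicitly, with $x=z^{(1,0,0)}$, $y=z^{(-1,l,1)}$, $\tilde{w}=z^{(0,0,1)}$, $t=z^{(0,1,0)}$, $u=z^{(-1,0,1)}$ and $v=z^{(1,0,-1)}$, the relations can be read off from the exponents: $(1,0,0)+(-1,l,1)=l\cdot(0,1,0)+(0,0,1)$ gives $xy=t^l\tilde{w}$, while $(-1,l,1)+(1,0,-1)=l\cdot(0,1,0)$ gives $yv=t^l$ on $X_{\sigma_2}$. Likewise, on the chart $\{V\neq 0\}$ of the above $\textup{Proj}$ the defining equations become $\tilde{w}=xu$ and $y=t^lu$ with $u=U/V$, recovering $X_{\sigma_1}=\textup{Spec }\mathbb{C}[x,t,u]$, and on $\{U\neq 0\}$ they become $x=\tilde{w}v$ and $yv=t^l$ with $v=V/U$, recovering $X_{\sigma_2}$.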
\begin{figure}
\caption{Local picture of the central fiber of the resolution, with exceptional line shown in red.}
\label{fig:local}
\end{figure}
Equip $X_\Sigma$ with the divisorial log structure by its central fiber $X_{\Sigma,0}$ and pull this log structure back to $X_{\Sigma,0}$. Then $X_{\sigma_1}$ and $X_{\sigma_2}$ are log smooth with respect to the restriction of this log structure, since they are simple normal crossings. They form an affine cover of $X_{\Sigma,0}$, so $X_{\Sigma,0}$ is log smooth.
Similarly, if we make the opposite choice of subdivision, the exceptional line is contained in the irreducible component of the central fiber given by $x=0$.
\subsection{The global picture}
There are two geometric descriptions of the toric blow up $X_\Sigma \rightarrow X_\sigma$ considered in \S\ref{S:resloc} above: \begin{compactenum}[(1)] \item $X_\Sigma\rightarrow X_\sigma$ is given by blowing up $X_\sigma$ along $\{y=t=0\}$. Indeed, this corresponds to subdividing $\sigma$ by cones connecting the face of $\sigma$ corresponding to $\{y=t=0\}$, in our case the ray generated by $(1,0,1)$, with all other faces of $\sigma$, leading to the fan $\Sigma$. \item Let $X_{\Sigma'}$ be the blow up of $X_\sigma$ along the origin. This corresponds to inserting a ray in the center of $\sigma$ and connecting all faces of $\sigma$ with this ray, leading to a fan $\Sigma'$. The exceptional set of this blow up is isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$. Choose one of the $\mathbb{P}^1$-factors and partially contract the exceptional set in $X_{\Sigma'}$ by projecting to this factor. This corresponds to one of the two ways to pair off the four maximal cones in $\Sigma'$ into two cones. One choice leads to $\Sigma$, so we obtain $X_{\Sigma'} \rightarrow X_\Sigma$ by a partial contraction of the exceptional set in $X_{\Sigma'}$. \end{compactenum} These constructions can also be performed globally on $\mathfrak{X}_Q$.
In (1) we blow up $\mathfrak{X}_Q$ along one of the irreducible components of $\mathfrak{X}_0$. We can do this for all irreducible components of $\mathfrak{X}_0$ and obtain a log smooth family over $\mathbb{A}^2$. However, this family will depend on the order of the blow ups and the irreducible components of its central fiber will contain different numbers of exceptional lines.
In (2) we blow up $\mathfrak{X}_Q$ along curves on $\mathfrak{X}_0$ and then partially contract the exceptional sets. In each step we have two ways to choose the contraction. Making the right choices we obtain a more symmetric resolution.
\begin{con}[The log smooth degeneration] \label{con:deg2} For each log singularity on $\mathfrak{X}_Q$ we have two choices of a small resolution as in (2) above, fixed by choosing which irreducible component of $\tilde{X}_0^{s\neq 0}$ contains the exceptional line. We make a symmetric choice such that we have one exceptional line on each irreducible component of $\tilde{X}_0$ (see Figure \ref{fig:bigpicture1}). The only reason for doing so is to avoid case distinctions. We obtain a log smooth family $(\tilde{\mathfrak{X}}_Q,\tilde{\mathfrak{D}}_Q)\rightarrow \mathbb{A}^2$. Since we only change the fibers $X_0^s$, we still have that $(\tilde{\mathfrak{X}}^0,\mathfrak{D}^0)\rightarrow\mathbb{A}^1$ is a degeneration of $X^0$ and $\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1$ is a degeneration of $X$, but these are not toric degenerations. \end{con}
\begin{figure}\label{fig:bigpicture1}
\end{figure}
The small resolution does not change the local toric models at generic points of toric strata. As a consequence, the dual intersection complex $\tilde{B}$ of $\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1$ is homeomorphic to the dual intersection complex $B$ of $\mathfrak{X}\rightarrow\mathbb{A}^1$. But there is one difference here. The irreducible components of $\tilde{X}_0$ are non-toric, so there is no natural fan structure at the vertices. Further, $\tilde{X}_0$ has no log singularities. Hence, there is no focus-focus singularity on bounded edges in the dual intersection complex. However, the pieces are still glued in such a way that the unbounded edges are parallel, and this gluing produces affine singularities at the vertices of $\tilde{B}$. This gives a triple $(\tilde{B},\mathscr{P},\varphi)$ as in Figure \ref{fig:duald}. Affine manifolds with singularities at vertices have been considered by Gross-Hacking-Keel \cite{GHK1} to construct mirrors to log Calabi-Yau surfaces.
\begin{figure}
\caption{The dual intersection complex $(\tilde{B},\mathscr{P},\varphi)$ of $\tilde{X}_0$ for $(\mathbb{P}^2,E)$ away from $\sigma_0$, with choices of resolutions indicated.}
\label{fig:duald}
\end{figure}
\begin{defi} \label{defi:Lexc} For a vertex $v$ of $\mathscr{P}$, let $L^{\textup{exc}}_v$ be the unique exceptional line contained in the irreducible component $\tilde{X}_v$ of $\tilde{X}_0$ corresponding to $v$. \end{defi}
For later convenience we indicate the choices of small resolutions by red stubs attached to the vertices of $\mathscr{P}$. The stub at a vertex $v$ points in the direction corresponding to the toric divisor of $X_v$ intersecting $L^{\textup{exc}}_v$. Denote the primitive vector in the direction of the red stub adjacent to $v$ by $m_{v,+}\in\Lambda_{\tilde{B},v}$. Denote the primitive vector in the direction of the other edge of $\sigma_0$ adjacent to $v$ by $m_{v,-}\in\Lambda_{\tilde{B},v}$. Further, $m_{\textup{out}}$ is the unique unbounded direction (Definition \ref{defi:mout}).
\subsection{Logarithmic Gromov-Witten invariants}
Logarithmic Gromov-Witten invariants have been defined in \cite{Che1}\cite{AC} and \cite{LogGW} as counts of stable log maps. A stable log map is a stable map defined in the category of log schemes with additional logarithmic data at the marked points, allowing for specification of contact orders. This leads to a generalization of Gromov-Witten theory in log smooth situations. For example, Gromov-Witten invariants relative to a (log) smooth divisor can be defined in this context, avoiding the target expansion of relative Gromov-Witten theory \cite{Li}\cite{Li2}. This is the case of interest to us.
Let $\tilde{\mathfrak{X}}:=\tilde{\mathfrak{X}}^{s\neq 0}\rightarrow\mathbb{A}^1$ be the log smooth family from Construction \ref{con:deg2}. Note that $\tilde{X}_{t\neq 0}=X$. For the definition of stable log maps and their classes, see \cite{LogGW}.
The group of curve (= divisor) classes on $X$ is isomorphic to the singular homology group $H_2(X,\mathbb{Z})$ by Poincar\'e duality and since $H^1(X,\mathcal{O}_X)=0$ for del Pezzo surfaces by the Kodaira vanishing theorem. We write $H_2^+(X,\mathbb{Z})$ for the monoid of effective curve classes.
\begin{defi} \label{defi:beta} For an effective curve class $\underline{\beta}\in H_2^+(X,\mathbb{Z})\simeq H_2^+(X^0,\mathbb{Z})$ define a class $\beta$ of stable log maps to $\tilde{\mathfrak{X}}_Q \rightarrow \mathbb{A}^2$ as follows: \begin{compactenum}[(1)] \item genus $g=0$; \item $k=1$ marked point $p$; \item fibers have curve class $\underline{\beta}$; \item contact data $u_p = (D \cdot \underline{\beta}) m_{\textup{out}}$, that is, full tangency with $D$ at the marked point. Here $m_{\textup{out}}\in\Lambda_{\tilde{B}}$ is the primitive integral tangent vector pointing in the unbounded direction on $\tilde{B}$ (Definition \ref{defi:mout}). \end{compactenum} A choice of $s\in\mathbb{A}^1$ gives an embedding $\gamma : \mathbb{A}^1 \rightarrow \mathbb{A}^2$. Let $\gamma^!$ be the corresponding refined Gysin homomorphism (\cite{Ful}, {\S}6.6). Then $\gamma^!\beta$ defines a class of stable log maps to $\mathfrak{X}^s\rightarrow\mathbb{A}^1$ that, by abuse of notation, we also write as $\beta$. \end{defi}
\begin{rem}
One comment is in order about the space in which $u_p$ lives. By definition (\cite{LogGW}, Discussion 1.8(ii)), $u_p$ is an element of $P_p^\vee:=\textup{Hom}(f^\star\overline{\mathcal{M}}_{\tilde{\mathfrak{X}}_Q}|_p,\mathbb{N})$ and $m_{\textup{out}}$ is an element of $\Lambda_{\tilde{B}}$, the sheaf of integral tangent vectors on the dual intersection complex $\tilde{B}$. Let $\omega$ be the cell of $\mathscr{P}$ corresponding to the minimal stratum of $\tilde{X}_0^s$ to which the marked point is mapped. Then $\omega$ is an unbounded $1$- or $2$-dimensional cell and $m_{\textup{out}}$ defines an element of $\Lambda_{\tilde{B},\omega}$. Both, $P_p^\vee$ and $\Lambda_{\tilde{B},\omega}$ are submonoids of $\Lambda_{\Sigma(\tilde{X}_0^s),\omega}$ and their intersection is $\mathbb{N}\cdot m_{\textup{out}}\subseteq\Lambda_{\tilde{B},\omega}$. Thus $\mathbb{N}\cdot m_{\textup{out}}$ can be viewed as a submonoid of $P_p^\vee$, so the definition above makes sense. \end{rem}
\begin{defi} Let $\mathscr{M}(\tilde{\mathfrak{X}},\beta)$ be the moduli space of basic stable log maps to $\tilde{\mathfrak{X}}:=\mathfrak{X}^{s\neq 0}\rightarrow\mathbb{A}^1$ of class $\beta$. \end{defi}
By \cite{LogGW}, Theorems 0.2 and 0.3, $\mathscr{M}(\tilde{\mathfrak{X}},\beta)$ is a proper Deligne-Mumford stack and admits a virtual fundamental class $\llbracket\mathscr{M}(\tilde{\mathfrak{X}},\beta)\rrbracket$. Since $(X,D)$ is a log Calabi-Yau pair, the class $\beta$ is combinatorially finite (\cite{LogGW}, Definition 3.3). Hence, the virtual dimension of $\mathscr{M}(\tilde{\mathfrak{X}},\beta)$ is zero and the following definition makes sense.
\begin{defi} \label{defi:Nb} For $\beta$ as in Definition \ref{defi:beta} define the logarithmic Gromov-Witten invariant \[ N_\beta = \int_{\llbracket\mathscr{M}(\tilde{\mathfrak{X}},\beta)\rrbracket} 1. \] \end{defi}
\begin{defi} \label{defi:Nd}
Define $w_{\textup{out}} = \textup{min}\{D\cdot\underline{\beta} \ | \ \underline{\beta}\in H_2^+(X,\mathbb{Z}) \}$ so e.g. for $(X,D)=(\mathbb{P}^2,E)$ we have $w_{\text{out}}=3$, since $E$ has degree $3$. For $d>0$ define \[ N_d = \sum_{\substack{\underline{\beta} \in H_2^+(X,\mathbb{Z}) \\ D\cdot\underline{\beta}=dw_{\textup{out}}}} N_\beta. \] \end{defi}
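For $(X,D)=(\mathbb{P}^2,E)$ this merely reorganizes the invariants by degree: $H_2^+(\mathbb{P}^2,\mathbb{Z})=\mathbb{N}\cdot L$ with $L$ the class of a line and $E\cdot dL = 3d$, so the sum has a single term,
\[ N_d = N_{dL}, \]
the virtual count of rational degree $d$ curves meeting $E$ in a single point with maximal tangency $3d$.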
\begin{rem} \label{rem:constant} Logarithmic Gromov-Witten invariants are constant in log smooth families (\cite{MR}, Appendix A). This means the following. Let $\gamma : \{\text{pt}\} \rightarrow \mathbb{A}^1$ be a point and let $\gamma^!$ be the corresponding refined Gysin homomorphism (\cite{Ful}, {\S}6.6). Then $\gamma^!\beta$ defines a class of stable log maps to the fiber $\tilde{X}_t$ that, by abuse of notation, we also write as $\beta$. We get a moduli space and a virtual fundamental class $\llbracket\mathscr{M}(\tilde{X}_t,\beta)\rrbracket$. Then, for all $t\in\mathbb{A}^1$, \[ N_\beta = \int_{\llbracket\mathscr{M}(\tilde{X}_t,\beta)\rrbracket}1. \] This shows that $N_\beta$ equals the logarithmic Gromov-Witten invariant $N_\beta$ defined in the introduction. Moreover, as shown in \cite{AMW}, $N_\beta$ equals the relative Gromov-Witten invariant of the smooth pair $(X,D)$ as defined in \cite{Li}. \end{rem}
\subsection{Log BPS numbers} \label{S:BPS}
The logarithmic Gromov-Witten invariants $N_\beta$ are in general not integers but rational numbers. The fractional part comes from multiple cover contributions of curves of class $\beta'$ such that $\underline{\beta}=k\cdot\underline{\beta}'$ for some $k>1$.
\begin{prop}[\cite{GPS}, Proposition 6.1] The $k$-fold cover of an irreducible curve of class $\beta'$ contributes the following factor to $N_{k\cdot\beta'}$: \[ M_{\beta'}[k] = \frac{1}{k^2}\binom{k(D\cdot\underline{\beta}'-1)-1}{k-1} \] \end{prop}
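For instance, for a class $\beta'$ with $D\cdot\underline{\beta}'=3$, such as the class of a line for $(\mathbb{P}^2,E)$, the first multiple cover contributions are
\[ M_{\beta'}[1]=1, \qquad M_{\beta'}[2]=\frac{1}{4}\binom{3}{1}=\frac{3}{4}, \qquad M_{\beta'}[3]=\frac{1}{9}\binom{5}{2}=\frac{10}{9}. \]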
We use the same formula for reducible curves, though it may be unclear how to interpret this as a multiple cover contribution.
\begin{defi} Define numbers $n_\beta$ by subtracting multiple cover contributions: \[ N_\beta = \sum_{\beta' : \underline{\beta}=k\cdot\underline{\beta}'} M_{\beta'}[k] \cdot n_{\beta'}. \] They are called \textit{Gopakumar-Vafa invariants} or \textit{log BPS numbers} as they are related to BPS state counts in string theory \cite{GV}. \end{defi}
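In the simplest non-trivial case $\underline{\beta}=2\underline{\beta}'$ with $\underline{\beta}'$ primitive and $D\cdot\underline{\beta}'=3$, the defining relation unwinds to
\[ N_\beta = n_\beta + M_{\beta'}[2]\,n_{\beta'} = n_\beta + \frac{3}{4}\,n_{\beta'}, \]
so $n_\beta$ is obtained from $N_\beta$ by subtracting the contribution of double covers of curves of class $\beta'$.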
\begin{rem} \label{rem:local} The logarithmic Gromov-Witten invariants $N_\beta$ are related to local Gromov-Witten invariants $N_\beta^{\textup{loc}}$ of the total space of the canonical bundle $K_X$ of $X$ by the formula $N_\beta = (-1)^{D\cdot\underline{\beta}-1}(D\cdot\underline{\beta})N_\beta^{\textup{loc}}$. This was conjectured by Takahashi (\cite{Ta2}, Remark 1.11) and proved by Gathmann (\cite{Ga}, Example 2.2) and more generally by van Garrel, Graber and Ruddat \cite{vGGR}. The log BPS numbers $n_d$ were shown to be integers in \cite{vGWZ}, using integrality of local BPS numbers. \end{rem}
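For $(\mathbb{P}^2,E)$ and $\underline{\beta}=dL$ the relation of Remark \ref{rem:local} reads
\[ N_{dL} = (-1)^{3d-1}\,3d\,N^{\textup{loc}}_{dL} = (-1)^{d-1}\,3d\,N^{\textup{loc}}_{dL}, \]
with $N^{\textup{loc}}_{dL}$ the genus $0$ local Gromov-Witten invariant of $K_{\mathbb{P}^2}$ in degree $d$.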
\section{Tropical curves and refinement} \label{S:tropmap}
In this section we analyze what tropicalizations of stable log maps contributing to $N_d$ look like. We prove that for each $d$ there are only finitely many such tropical curves (Corollary \ref{cor:finite}). Choosing a subdivision of the dual intersection complex $(\tilde{B},\mathscr{P},\varphi)$ such that tropicalizations contributing to $N_d$ are contained in the $1$-skeleton of the polyhedral decomposition leads to a logarithmic modification $\tilde{\mathfrak{X}}_d$ of $\tilde{\mathfrak{X}}$ (Construction \ref{con:deg3}) with the property that stable log maps to the central fiber $Y$ of $\tilde{\mathfrak{X}}_d$ contributing to $N_d$ are torically transverse.
\subsection{Tropicalization of stable log maps}
\begin{defi}[\cite{ACGS1}, 2.1.1, 2.1.2] \label{defi:Cones} Define $\textbf{Cones}$ to be the category whose objects are pairs $(\sigma_{\mathbb{R}},M)$ where $M\cong\mathbb{Z}^n$ is a lattice and $\sigma_{\mathbb{R}}\subseteq M_{\mathbb{R}}=M\otimes_{\mathbb{Z}}\mathbb{R}$ is a top-dimensional strictly convex rational polyhedral cone. A morphism of cones $\varphi : \sigma_1\rightarrow \sigma_2$ is a homomorphism $\varphi : M_1 \rightarrow M_2$ which takes $\sigma_{1\mathbb{R}}$ into $\sigma_{2\mathbb{R}}$. It is a \textit{face morphism} if it identifies $\sigma_{1\mathbb{R}}$ with a face of $\sigma_{2\mathbb{R}}$ and $M_1$ with a saturated sublattice of $M_2$. A \textit{generalized cone complex} is a topological space with a presentation as the colimit of an arbitrary finite diagram in the category $\textbf{Cones}$ with all morphisms being face morphisms. \end{defi}
\begin{defi}[\cite{ACGS1}, 2.1.4] \label{defi:trop} Let $X$ be a fine saturated log scheme with log structure defined in the Zariski topology. For the generic point $\eta$ of a stratum of $X$, its characteristic monoid $\overline{\mathcal{M}}_{X,\eta}$ defines a dual monoid $(\overline{\mathcal{M}}_{X,\eta})^\vee := \textup{Hom}(\overline{\mathcal{M}}_{X,\eta},\mathbb{N})$ lying in the group $(\overline{\mathcal{M}}_{X,\eta})^\star := \textup{Hom}(\overline{\mathcal{M}}_{X,\eta},\mathbb{Z})$, hence a dual cone \[ \sigma_\eta := \left((\overline{\mathcal{M}}_{X,\eta})_{\mathbb{R}}^\vee,(\overline{\mathcal{M}}_{X,\eta})^\star\right). \] If $\eta$ is specialization of $\eta'$, there is a well-defined generization map $\overline{\mathcal{M}}_{X,\eta}\rightarrow \overline{\mathcal{M}}_{X,\eta'}$. Dualizing, we obtain a face morphism $\sigma_{\eta'}\rightarrow\sigma_\eta$. This gives a diagram of cones indexed by strata of $X$ with face morphisms, hence gives a generalized cone complex $\Sigma(X)$, the \textit{tropicalization} of $X$. This construction is functorial. \end{defi}
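For example, if $X$ is a toric variety with its toric (divisorial) log structure, then the generization maps are dual to the face inclusions of cones and $\Sigma(X)$ recovers the fan of $X$, regarded as a generalized cone complex.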
Let $Q$ be a Fano polytope and let $\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1$ be the log smooth degeneration of the corresponding smooth very ample log Calabi-Yau pair $(X,D)$ from Construction \ref{con:deg2}. Let $\underline{\beta}\in H_2^+(X,\mathbb{Z})$ be an effective curve class and consider a basic stable log map $f:C/\textup{pt}_{Q_{\text{basic}}}\rightarrow\tilde{X}_0/\textup{pt}_{\mathbb{N}}$ of class $\beta$ (Definition \ref{defi:beta}). Here $Q_{\text{basic}}$ is the basic monoid\footnote{The basic monoid $Q$ has the property that $\Sigma(\textup{pt}_{Q_{\text{basic}}})=\textup{Hom}(Q_{\text{basic}},\mathbb{R}_{\geq 0})$ is the moduli space of deformations of $\Sigma(C)$ as a tropical curve preserving its combinatorial type (\cite{LogGW}, Remark 1.21).} of $f$ (\cite{LogGW}, {\S}1.5). We will see in Corollary \ref{cor:finite} that in our situation $Q_{\text{basic}}=\mathbb{N}$. \begin{equation} \label{eq:stablelog} \begin{xy} \xymatrix{ C \ar[r]^f \ar[d]^\gamma & \tilde{X}_0 \ar[d]^{\tilde{\pi}_0} \\ \textup{pt}_{Q_{\text{basic}}} \ar[r]^g & \textup{pt}_{\mathbb{N}} } \end{xy} \end{equation} Tropicalization of \eqref{eq:stablelog} gives a diagram of generalized cone complexes. Note that $\Sigma(\textup{pt}_{\mathbb{N}})=\mathbb{R}_{\geq 0}$. The fiber $\Sigma(\tilde{\pi}_0)^{-1}(1)$ is homeomorphic to the dual intersection complex $\tilde{B}$ of $\tilde{X}_0$. Similarly, for a general element $b$ of the cone $\Sigma(\text{pt}_{Q_{\text{basic}}})$ the fiber $\Sigma(\gamma)^{-1}(b)$ is homeomorphic to the dual intersection graph $\Gamma_C$ of $C$. Hence, tropicalization of \eqref{eq:stablelog} and restriction to the fiber over $1\in\Sigma(\textup{pt}_{\mathbb{N}})=\mathbb{R}_{\geq 0}$ gives a map \begin{equation} \label{eq:Delta} \tilde{h} : \Gamma_C \rightarrow \tilde{B}. \end{equation}
There is additional data on $\Gamma_C$ making $\tilde{h} : \Gamma_C\rightarrow \tilde{B}$ into a tropical curve in the sense of \cite{ACGS1}. Note that such a tropical curve only fulfills a modified version of the balancing condition (\cite{LogGW}, Proposition 1.15). In \S\ref{S:balancing} we will see what this means in our case.
To make the connection with scattering diagrams in \S\ref{S:scattering} it is useful to consider tropical curves on $B$ (not $\tilde{B}$) that are balanced in the usual sense but may have some bounded legs.
There are many slightly different definitions of parametrized tropical curves, depending on the context in which they are used. The following definition is a synthesis of the definition in \cite{ACGS1} and \cite{Gr10}, Definition 1.32. In \cite{ACGS1} only tropical curves with no bounded legs are considered, while in \cite{Gr10} tropical curves are required to be balanced.
\begin{defi} \label{defi:tropical} Let $B$ be a $2$-dimensional integral affine manifold with singularities. Let $\Delta\subset B$ be the discriminant locus and write $B_0:=B\setminus\Delta$. A \textit{(parametrized) tropical curve on $B$}, written $h : \Gamma \rightarrow B$, is a homogeneous map $h : \Gamma \rightarrow B_0$, where $\Gamma$ is the topological realization of a graph\footnote{The topological realization of a graph $\Gamma$ is a topological space which is the union of line segments corresponding to the edges. By abuse of notation, we also denote this by $\Gamma$. Whenever we talk about a map from a graph we mean a homogeneous map from its topological realization.}, possibly with some non-compact edges (\textit{legs}), together with \begin{compactenum}[(1)] \item a non-negative integer $g_V$ (\textit{genus}) for each vertex $V$; \item a non-negative integer $\ell_E$ (\textit{length}) for each compact edge $E$; \item an element $u_{(V,E)}\in i_\star\Lambda_{B_0,h(V)}$ (\textit{weight vector}) for every vertex $V$ and edge or leg $E$ adjacent to $V$. Here $\Lambda_{B_0}$ is the sheaf of integral affine tangent vectors on $B$ and $i : B_0 \hookrightarrow B$ is the inclusion. The index of $u_{(V,E)}$ in the lattice $\Lambda_{B,h(V)}$ is called the \textit{weight} $w_E$ of $E$; \end{compactenum} such that \begin{compactenum}[(i)] \item if $E$ is a compact edge with vertices $V_1$, $V_2$, then $h$ maps $E$ affine linearly\footnote{The affine structure on $\Gamma$ is given by the lengths $\ell_E$ of its edges.} to the line segment connecting $h(V_1)$ and $h(V_2)$, and $h(V_2)-h(V_1)=\ell_Eu_{(V_1,E)}$. In particular, $u_{(V_1,E)} = -u_{(V_2,E)}$; \item if $E$ is a leg with vertex $V$, then $h$ maps $E$ affine linearly either to the ray $h(V)+\mathbb{R}_{\geq 0}u_{(V,E)}$ or to the line segment $[h(V),\delta)$ for $\delta$ an affine singularity of $B$ such that $\delta-h(V)\in\mathbb{R}_{>0} u_{(V,E)}$, i.e., $u_{(V,E)}$ points from $h(V)$ to $\delta$. \end{compactenum} We write the set of compact edges of $\Gamma$ as $E(\Gamma)$, the set of legs as $L(\Gamma)$, the set of legs mapped to a ray (\textit{unbounded legs}) as $L_\infty(\Gamma)$ and the set of legs mapped to an open line segment (\textit{bounded legs}) as $L_\Delta(\Gamma)$ (since such edges end at the singular locus $\Delta$ of $B$).
The \textit{genus} of a parametrized tropical curve $h : \Gamma \rightarrow B$ is defined by \[ g_h := g_\Gamma + \sum_{V\in V(\Gamma)}g(V), \] where $g_\Gamma$ is the genus (first Betti number) of the graph $\Gamma$. \end{defi}
\begin{rem} Note that if $E$ is a leg of $\Gamma$, then $h(E)$ must be parallel to the edge of $\mathscr{P}$ containing $\delta$, since there is only one tangent direction at $\delta$, i.e., $\Lambda_{B,\delta}\simeq\mathbb{Z}$. \end{rem}
\begin{defi} \label{defi:aut} An \textit{isomorphism} of tropical curves $h:\Gamma\rightarrow B$ and $h':\Gamma'\rightarrow B$ is a homeomorphism $\phi : \Gamma \rightarrow \Gamma'$ such that $h=h'\circ\phi$, $g_{\phi(V)}=g_V$ and $u_{(\phi(V),\phi(E))}=u_{(V,E)}$. An \textit{automorphism} of a tropical curve $h$ is an isomorphism of $h$ with itself. Here we use the convention that an edge $E$ is a pair of orientations of $E$, so that the automorphism group of a graph with a single loop is $\mathbb{Z}/2\mathbb{Z}$. \end{defi}
\begin{rem} We will only consider tropical curves of genus $0$. In particular, our tropical curves will have no loops. \end{rem}
\begin{defi} Let $(B,\mathscr{P})$ be a $2$-dimensional polyhedral affine manifold. A tropical curve $h : \Gamma\rightarrow B$ is \textit{compatible with $\mathscr{P}$} if \begin{compactenum}[(1)] \item the edges of $\Gamma$ do not extend across several maximal cells of $\mathscr{P}$. In other words, we have a well-defined map $E(\Gamma) \cup L(\Gamma) \rightarrow \mathscr{P}$ associating to an edge or leg $E$ the minimal cell of $\mathscr{P}$ containing it. \item there are no bivalent vertices in $\Gamma$ mapped to a maximal cell of $\mathscr{P}$. \end{compactenum} \end{defi}
\begin{con} \label{con:tropical} Let $\tilde{h} : \Gamma_C \rightarrow \tilde{B}$ be the continuous map from \eqref{eq:Delta}. We describe additional data making $\tilde{h}$ a tropical curve compatible with $\mathscr{P}$. \begin{compactenum}[(1)] \item For each vertex $V$, the genus is $g_V=0$. \item Let $E\in E(\Gamma_C)$ be a compact edge with vertices $V_1,V_2$, corresponding to a node $q\in C$. Then $\overline{\mathcal{M}}_{C,q}$ is isomorphic to the submonoid $S_{e_q}$ of $\mathbb{N}^2$ generated by $(e_q,0)$, $(0,e_q)$ and $(1,1)$ for some $e_q\in\mathbb{N}_{>0}$ (\cite{Kat2}, 1.8). Moreover, there is an equation $\tilde{h}(V_2)-\tilde{h}(V_1)=\pm e_qu_q$ for some $u_q\in\Lambda_{\tilde{B}}$ (see \cite{LogGW}, Discussions 1.8, 1.13). Then the length of $E$ is $\ell_E=e_q$ and the weight vectors are $u_{(V_i,E)}=\pm u_q$, with sign chosen such that $u_{(V_i,E)}$ points away from $\tilde{h}(V_i)$. \item Let $E\in L_\infty(\Gamma_C)$ be an unbounded leg with vertex $V$, corresponding to a marked point $p\in C$. Then $\overline{\mathcal{M}}_{C,p}$ is isomorphic to $\mathbb{N}\oplus\mathbb{N}$ and
\[ f^\star\overline{\mathcal{M}}_{X}|_p\rightarrow\overline{\mathcal{M}}_{C,p}\overset{\text{pr}_2}{\rightarrow}\mathbb{N} \]
is determined by an element of $P_p^\vee=\textup{Hom}(f^\star\overline{\mathcal{M}}_{X}|_p,\mathbb{N})$, inducing an element $u_p\in\Lambda_{\tilde{B},\tilde{h}(V)}$. The weight vector is $u_{(V,E)}=u_p$. \end{compactenum} The properties for $\tilde{h} : \Gamma_C \rightarrow \tilde{B}$ to be compatible with $\mathscr{P}$ can be achieved by (1) inserting vertices at points mapping to vertices or edges of $\mathscr{P}$ and (2) removing bivalent vertices mapping to a maximal cell of $\mathscr{P}$, by replacing a chain of edges connected via bivalent vertices with a single edge. The latter is possible, since vertices of $\Gamma_C$ not mapping to vertices of $\mathscr{P}$ are balanced by Proposition \ref{prop:balancing}, (I), below. \end{con}
\begin{defi} Let $B$ be a $2$-dimensional integral affine manifold with singularities and $m\in\Lambda_B$ an integral tangent vector. A tropical curve $h:\Gamma\rightarrow B$ is called \textit{of degree $d$ relative to $m$} if there is exactly one unbounded leg $E_{\textup{out}}\in L_\infty(\Gamma)$, and its weight vector is $u_{(V_{\textup{out}},E_{\textup{out}})}=d \cdot m$. Here $V_{\textup{out}}$ is the unique vertex of $E_{\textup{out}}$. \end{defi}
\begin{prop} \label{prop:tropical} Given a stable log map $f:C/\textup{pt}_{Q_{\text{basic}}}\rightarrow \tilde{X}_0/\textup{pt}_{\mathbb{N}}$ of class $\beta$ as in Definition \ref{defi:beta}, the continuous map $\tilde{h}:\Gamma_C\rightarrow \tilde{B}$ from \eqref{eq:Delta} together with the additional data defined in Construction \ref{con:tropical} is a tropical curve without bounded legs, of genus $0$, degree $D\cdot\underline{\beta}$ relative to $m_{\textup{out}}\in\Lambda_{\tilde{B}}$ (Definition \ref{defi:mout}) and compatible with the dual intersection complex $\mathscr{P}$. By abuse of notation, we call this tropical curve $\tilde{h}:\Gamma_C\rightarrow \tilde{B}$ the \textit{tropicalization} of $f$. \end{prop}
\begin{proof} The properties (i) and (ii) of Definition \ref{defi:tropical} follow from the structure of $f : C \rightarrow \tilde{X}_0$ on the level of ghost sheaves (see \cite{LogGW}, Discussions 1.8, 1.13). Hence, $\tilde{h}$ is a tropical curve. Moreover, these discussions show that $\tilde{h}$ has no bounded legs. Finally, $\tilde{h}$ is of degree $D\cdot\underline{\beta}$ relative to $m_{\textup{out}}$ by Definition \ref{defi:beta}, (4). \end{proof}
\begin{rem} There is one issue here, since we lost some information by smoothing $(X^0,D^0)$. An effective curve class $\beta\in H_2^+(X^0,\mathbb{Z})$ is determined by its intersection numbers $d_1,\ldots,d_k$ with the components of $D^0=D_1,\ldots,D_k$. After smoothing $D^0$ we only see the sum $d=d_1+\ldots+d_k$. In particular, if $X^0$ is a smooth toric del Pezzo surface with Picard number $>1$, i.e., different from $(\mathbb{P}^2,E)$, then we only see the total degree, not the multi-degree. One could solve this problem using non-trivial gluing data capturing information of the divisor $D^0$. We will give a more geometric solution in \S\ref{S:degdiv} by looking at the limit of curves under $s\rightarrow 0$, where $s\in\mathbb{A}^2$ is the deformation parameter of the family $\mathfrak{X}_Q\rightarrow\mathbb{A}^2$ from Construction \ref{con:family}. \end{rem}
\subsection{Types of vertices} \label{S:balancing}
Let $f:C/\textup{pt}_{Q_{\text{basic}}}\rightarrow\tilde{X}_0/\textup{pt}_{\mathbb{N}}$ be a stable log map of class $\beta$ as in Definition \ref{defi:beta} and let $\tilde{h}:\Gamma_C\rightarrow \tilde{B}$ be the corresponding tropical curve.
\begin{prop} \label{prop:balancing} Let $C_V$ be an irreducible component of $C$, corresponding to a vertex $V$ of $\Gamma_C$. Then the following three cases can occur: \begin{compactenum}[(I)] \item If $C_V$ is mapped to a $0$- or $1$-dimensional toric stratum of $\tilde{X}_0$, i.e., if $V$ is not mapped to a vertex of $\mathscr{P}$, then the ordinary balancing condition holds: \[ \sum_{E\ni V}u_{(V,E)} = 0. \] The sum is over all edges or legs $E\in E(\Gamma_C)\cup L(\Gamma_C)$ containing $V$. \item If $C_V$ is mapped onto an exceptional divisor $L^{\textup{exc}}_v$ on some component $\tilde{X}_v$ of $\tilde{X}_0$ (Definition \ref{defi:Lexc}), then $C_V$ is a $k$-fold multiple cover of $L^{\textup{exc}}_v\simeq\mathbb{P}^1$ for some $k>0$. It is fully ramified at the point $p=L^{\textup{exc}}_v\cap\partial\tilde{X}_v$, where $\partial\tilde{X}_v$ is the proper transform of the toric boundary $\partial X_v$ under the resolution from \S\ref{S:resolution}. The vertex $V$ is mapped to the vertex $v$ of $\mathscr{P}$. It is $1$-valent with adjacent edge $E$ mapped onto the edge of $\mathscr{P}$ containing the red stub adjacent to $v$. The balancing condition reads (with $m_{v,+}$ as in Figure \ref{fig:duald}) \[ u_{(V,E)} = km_{v,+}. \] \item Otherwise, $V$ is mapped to a vertex $v$ of $\mathscr{P}$ and has exactly one adjacent edge or leg $E_{V,\textup{out}}$ that is not mapped onto a compact edge of $\mathscr{P}$. All other edges (possibly none) are compact with other vertex of type (II) above. In this case, for some $k\geq 0$, the following balancing condition holds: \[ \sum_{E\ni V} u_{(V,E)} + km_{v,+} = 0. \] \end{compactenum} \end{prop}
\begin{proof} If $C_V$ does not intersect an exceptional line, the log structure on $\tilde{X}_0$ along the image of $C_V$ is the toric one. Then by \cite{ACGS2}, Remark 2.26, the ordinary balancing condition holds. This proves (I).
If $C_V$ is mapped onto an exceptional line $L^{\textup{exc}}_v\simeq\mathbb{P}^1$ on some component $\tilde{X}_v$, it is a $k$-fold multiple cover for some $k>0$. Suppose it is not fully ramified at the point where $L^{\textup{exc}}_v$ meets $\partial \tilde{X}_v$, i.e., $V$ has valency $>1$. Let $E_1,E_2$ be two distinct edges adjacent to $V$. We have $V \neq V_{\textup{out}}$, since $C_V$ does not meet the toric divisor of $\tilde{X}_v$ belonging to $\tilde{D}_0$. By Proposition \ref{prop:tropical}, $\Gamma_C$ has only one leg, and this leg is attached to $V_{\textup{out}}$. Thus $E_1$ and $E_2$ are bounded. Let $V_1,V_2$ be the vertices of $E_1,E_2$ different from $V$, respectively. There is a chain of vertices and edges (possibly the trivial one) connecting $V_1$ to $V_{\textup{out}}$ and similarly for $V_2$. These two chains form a cycle of the graph $\Gamma_C$, so $g(\Gamma_C)>0$ in contradiction with rationality of $C$. Hence, there is a unique bounded edge $E$ adjacent to $V$. Let $V'$ be its other vertex. Then $h(V')-h(V)$ points in the direction of $m_{v,+}$, since the only special point (node) of $C_V$ is mapped to $L^{\textup{exc}}_v\cap\partial\tilde{X}_v$. It follows by Definition \ref{defi:tropical} that $u_{(V,E)}$ points in the direction of $m_{v,+}$. Its affine length, the weight $w_E$, is the multiplicity of the node which is the ramification order $k$. This proves (II).
Let $C_V$ be a component of $C$ that intersects an exceptional divisor $L^{\textup{exc}}_v$ on some component $\tilde{X}_v$ but is not mapped onto it. We first prove the balancing condition. Let $m\in\Gamma(C_V,f^\star\overline{\mathcal{M}}_X|_{C_V})$ be the generator of the submonoid $\mathbb{N}$ of $\Gamma(C_V,f^\star\overline{\mathcal{M}}_X|_{C_V})$ corresponding to the exponent of the degeneration parameter $t$. Then by \cite{LogGW}, Lemma 1.14, the line bundle associated to $m$ is $f^\star\nu^\star\mathcal{O}_{X_v}(-kD_{v,+})$, where $\nu : \tilde{X}_v \rightarrow X_v$ is the resolution and $D_{v,+}$ is the toric divisor of $X_v$ whose proper transform in $\tilde{X}_v$ intersects $L_v^{\textup{exc}}$. This is because the condition that $C_V$ intersects $L_v^{\textup{exc}}$ is equivalent to the condition that $\nu(C_V)$ intersects $D_{v,+}$ in the point $\nu(L_v^{\text{exc}})$. The corresponding contact data is $km_{v,+}$. This shows $\sum_{E\ni V} u_{(V,E)} + km_{v,+} = 0$, where $k$ is the sum of the affine lengths of the additional $u_p$. Now we show uniqueness of an edge or leg $E_{V,\text{out}}$ as claimed. If all legs and edges adjacent to $V$ are mapped onto compact edges of $\mathscr{P}$, this balancing condition cannot be achieved. So there is at least one such edge or leg. We show by contradiction that there is at most one. Assume that there are two edges or legs $E, E'$ adjacent to $V$ that are not mapped onto compact edges of $\mathscr{P}$. Then $E,E'$ are either unbounded legs or bounded edges with other vertex of type (I). If $E$ or $E'$ is unbounded, then, since vertices of type (I) fulfill the ordinary balancing condition, we would have at least two unbounded legs, contradicting the assumptions. If $E$ and $E'$ are bounded edges with other vertices of type (I), their paths to $V_{\text{out}}$ form a cycle, contradicting $g=0$. So there is a unique edge or leg not mapped onto a compact edge of $\mathscr{P}$. This proves (III). \end{proof}
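For instance, if a vertex $V$ of type (III) has no adjacent compact edges, then $E_{V,\textup{out}}$ is its only adjacent edge or leg and the balancing condition reduces to $u_{(V,E_{V,\textup{out}})}=-km_{v,+}$, recording, as in the proof above, that $\nu(C_V)$ meets $D_{v,+}$ at the point $\nu(L^{\textup{exc}}_v)$ with total multiplicity $k$.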
\begin{defi} \label{defi:types} Denote the set of vertices of the given types in Proposition \ref{prop:balancing} by $V_{I}(\Gamma_C)$, $V_{II}(\Gamma_C)$ and $V_{III}(\Gamma_C)$, respectively. \end{defi}
\begin{defi} \label{defi:tildeH} Let $\tilde{\mathfrak{H}}_d$ be the set of isomorphism classes of tropical curves $\tilde{h}:\tilde{\Gamma}\rightarrow \tilde{B}$ compatible with $\mathscr{P}$ of genus $0$ and degree $dw_{\textup{out}}$ relative to $m_{\textup{out}}\in\Lambda_{\tilde{B}}$, without bounded legs and with vertices of one of the types (I)-(III) above. \end{defi}
\begin{figure}
\caption{A tropical curve $\tilde{h}:\tilde{\Gamma}\rightarrow\tilde{B}$ in $\tilde{\mathfrak{H}}_3$ for $(\mathbb{P}^2,E)$.}
\label{fig:balancing}
\end{figure}
\begin{lem} \label{lem:finite} Let $\tilde{h}:\tilde{\Gamma}\rightarrow \tilde{B}$ be a tropical curve in $\tilde{\mathfrak{H}}_d$ for some $d>0$. Then $\tilde{h}(\tilde{\Gamma})$ is disjoint from the interior of $\sigma_0$ (Definition \ref{defi:sigma0}). \end{lem}
\begin{proof} Let $\tilde{h}:\tilde{\Gamma}\rightarrow \tilde{B}$ be a tropical curve in $\tilde{\mathfrak{H}}_d$. Give $\tilde{\Gamma}$ the structure of a rooted tree by defining the root vertex to be the vertex $V_{\textup{out}}$ of the unique unbounded leg $E_{\textup{out}}$. Let $V$ be a vertex of $\tilde{\Gamma}$ and let $E_{V,\textup{out}}$ be the edge connecting $V$ with its parent, or $E_{V,\textup{out}}=E_{\textup{out}}$ if $V$ is the root vertex $V_{\textup{out}}$. By Proposition \ref{prop:balancing}, if $V$ is mapped to a vertex $v$ of $\mathscr{P}$, hence of type (II) or (III), then $E_{V,\textup{out}}$ is mapped to the conical subset $\mathbb{R}_{\leq 0}m_{v,+}+\mathbb{R}_{\leq 0}m_{v,-}$ of $\tilde{B}$, and if $V$ is of type (I), then by induction $E_{V,\textup{out}}$ is mapped to the subset $\bigcup_{V'} \mathbb{R}_{\leq 0}m_{\tilde{h}(V'),+}+\mathbb{R}_{\leq 0}m_{\tilde{h}(V'),-}$ of $\tilde{B}$, where the union is over all vertices of type (II) and (III) in the subgraph of $\tilde{\Gamma}$ with root $V$. In particular, $\tilde{h}(\tilde{\Gamma})$ is contained in \[\bigcup_{V'\in V_{II}(\tilde{\Gamma})\cup V_{III}(\tilde{\Gamma})} \mathbb{R}_{\leq 0}m_{\tilde{h}(V'),+}+\mathbb{R}_{\leq 0}m_{\tilde{h}(V'),-} \] This is disjoint from the interior of $\sigma_0$. \end{proof}
\subsection{Balanced tropical curves}
We describe a procedure to obtain tropical curves in $\tilde{\mathfrak{H}}_d$ from tropical curves to $B$ (not $\tilde{B}!$) that are balanced in the usual sense. This makes the connection to scattering diagrams in \S\ref{S:scattering} more transparent. Moreover, the degeneration formula gets more symmetric when expressed in invariants labeled by balanced tropical curves (see Theorem \ref{thm:degmax}).
\begin{defi} \label{defi:balancedtrop} Let $\mathfrak{H}_d$ be the set of isomorphism classes of tropical curves $h:\Gamma\rightarrow B$ compatible with $\mathscr{P}$, possibly with bounded legs, of genus $0$ and degree $dw_{\textup{out}}$ relative to $m_{\textup{out}}$, satisfying the ordinary balancing condition at each vertex $V$ of $\Gamma$: \[ \sum_{E\ni V}u_{(V,E)} = 0, \] \end{defi}
\begin{con} \label{con:tilde} We construct a surjective map $\mathfrak{H}_d \rightarrow \tilde{\mathfrak{H}}_d$ as follows.
Let $h:\Gamma\rightarrow B$ be a tropical curve in $\mathfrak{H}_d$. Let $E\in L_\Delta(\Gamma)$ be a bounded leg with vertex $V$. Then $E$ is mapped to the line segment $[h(V),\delta)$ for $\delta$ an affine singularity on an edge $\omega$ of $\mathscr{P}$. Since $\Lambda_{B,\delta}$ is one-dimensional, $h(E)$ is parallel to $\omega$. Since $h$ is compatible with $\mathscr{P}$ and by the balancing condition, $h(V)$ must be a vertex $v$ of $\mathscr{P}$. Let $m_{v,\delta}$ be the primitive integral tangent vector pointing from $v$ to $\delta$ and let $m_{v,+},m_{v,-}$ be as in Figure \ref{fig:duald}. Two cases can occur. \begin{compactenum}[(1)] \item If $m_{v,\delta}=m_{v,+}$, i.e., if $E$ is mapped in the direction of the red stub attached to $v$, then remove $E$ from $\Gamma$. \item Otherwise, $m_{v,\delta}=m_{v,-}$. Then add a vertex $V'$ to $E$ to obtain a compact edge $\tilde{E}$. Define $u_{(V',E)}=-u_{(V,E)}$ and $\tilde{h}(\tilde{E})=\omega$, such that $\tilde{h}(V')=v'$ is a vertex of $\mathscr{P}$. This determines the length $\ell_{\tilde{E}}$ by Definition \ref{defi:tropical}, (i). \end{compactenum} We show that the map $\mathfrak{H}_d\rightarrow\tilde{\mathfrak{H}}_d$ constructed this way is surjective. Let $\tilde{h} : \tilde{\Gamma} \rightarrow \tilde{B}$ be a tropical curve in $\tilde{\mathfrak{H}}_d$. We can construct a preimage of $\tilde{h}$ as follows. (1) For each vertex $V\in V_{III}(\tilde{\Gamma})$, add a bounded leg $E$ with vertex $V$ and weight vector $u_{(V,E)}=-\sum_{E'\ni V} u_{(V,E')}$. The image of $E$ is specified by Definition \ref{defi:tropical}, (ii). (2) For each vertex $V\in V_{II}(\tilde{\Gamma})$, let $E$ be the unique adjacent edge. It is a bounded edge and we remove the vertex $V$ from $E$ to obtain a bounded leg. This shows that the map $\mathfrak{H}_d\rightarrow\tilde{\mathfrak{H}}_d$ is surjective. Note that in step (1) we could also add several bounded legs with weights a partition of $\sum_{E'\ni V} u_{(V,E')}$, so the number of preimages of $\tilde{h}$ is the number of such partitions. \end{con}
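For instance, if a vertex $V\in V_{III}(\tilde{\Gamma})$ satisfies $\sum_{E'\ni V}u_{(V,E')}=-2m_{v,+}$, then in step (1) one may attach at $V$ either a single bounded leg of weight $2$ or two bounded legs of weight $1$, in each case pointing from $v$ towards the affine singularity in direction $m_{v,+}$; this yields two preimages of $\tilde{h}$ differing only at $V$.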
\begin{defi} Let $(\bar{B},\bar{\mathscr{P}})$ be the covering space of $(B,\mathscr{P})$ described in \S\ref{S:affinecharts}. Let $\bar{\mathfrak{H}}_d$ be the set of isomorphism classes of balanced tropical curves $\bar{h}:\bar{\Gamma}\rightarrow\bar{B}$ compatible with $\bar{\mathscr{P}}$ of genus $0$ and degree $dw_{\textup{out}}$ relative to $m_{\textup{out}}$ satisfying the ordinary balancing condition and such that the image of $E_{\textup{out}}$ lies in a fixed fundamental domain. \end{defi}
\begin{con} \label{con:bij} Define a map $\bar{\mathfrak{H}}_d \rightarrow \mathfrak{H}_d$ by sending $\bar{h} : \Gamma \rightarrow \bar{B}$ to $h : \Gamma \rightarrow B$, where $h$ is the composition of $\bar{h}$ with the covering map $\bar{B} \rightarrow B$. This map is bijective. The inverse map is given as follows. Let $h : \Gamma \rightarrow B$ be a tropical curve in $\mathfrak{H}_d$ and choose an unbounded maximal cell of $\mathscr{P}$. Choose a fundamental domain of $\bar{B} \rightarrow B$ and let $\bar{h}(E_{\textup{out}})$ be the preimage of $h(E_{\textup{out}})$ in that fundamental domain. Whenever the image $h(V)$ of a vertex $V$ lies on the horizontal dashed line in Figure \ref{fig:dualb} with respect to the chart on the unbounded maximal cell chosen, we change the fundamental domain and apply the monodromy transformation. \end{con}
\begin{figure}
\caption{A balanced tropical curve $h:\Gamma\rightarrow B$ in $\mathfrak{H}_3$ for $(\mathbb{P}^2,E)$ giving the tropical curve in Figure \ref{fig:balancing} under the map from Construction \ref{con:tilde}. The integers are weights of edges $\neq 1$.}
\label{fig:balancing2}
\end{figure}
\begin{lem} \label{lem:finite2} The set $\bar{\mathfrak{H}}_d$ is finite. \end{lem}
\begin{proof}
Let $\bar{h}:\bar{\Gamma}\rightarrow \bar{B}$ be a tropical curve in $\bar{\mathfrak{H}}_d$. The graph $\bar{\Gamma}$ together with the set of weight vectors of bounded legs $\{u_{(V,E)} \ | \ V\in E\in L_\Delta(\bar{\Gamma})\}$ determines the image of $\bar{h}$. Indeed, for a bounded leg $E$, the weight vector $u_{(V,E)}$ determines its image, since all edges containing affine singularities have different direction. The images of all other edges are determined by the balancing condition. If we only know the set $\{u_{(V,E)} \ | \ V\in E\in L_\Delta(\bar{\Gamma})\}$, there are finitely many possibilities for $\bar{\Gamma}$, since the number of leaves is specified. So we need to show that there are only finitely many possible sets $\{u_{(V,E)} \ | \ V\in E\in L_\Delta(\bar{\Gamma})\}$ for a tropical curve $\bar{h}:\bar{\Gamma}\rightarrow \bar{B}$ in $\bar{\mathfrak{H}}_d$.
Let $V_{\text{out}}$ be the vertex of the unique unbounded edge $E_{\text{out}}$ and let $\sigma_{\text{out}}$ be an unbounded maximal cell containing $V_{\text{out}}$. Let $\varphi_{\text{out}}$ be a representative of the piecewise affine function $\varphi$ on $\sigma_{\text{out}}$. Note that via an affine transformation of $\bar{B}$ we can achieve that $\varphi_{\text{out}}(m)=\braket{m,m_{\text{out}}}$. Let $E$ be a bounded leg of $\bar{\Gamma}$ with vertex $V$ and write $u_{(V,E)}=w_Em_E$ with $w_E\in\mathbb{Z}_{>0}$ and $m_E\in\Lambda_{\bar{h}(V)}\simeq \mathbb{Z}^2$ primitive. Let $E'$ be an edge on the path from $V$ to $V_{\text{out}}$ that is \textit{pointing towards} $V_{\text{out}}$, i.e., such that the ray $\bar{h}(V)+\mathbb{R}_{>0}m_E$ intersects the interior of the bounded maximal cell $\sigma_0$. Then $\varphi_{\text{out}}(m_{E'})>0$ and by convexity of the bounded maximal cell and of $\varphi_{\text{out}}$ we have $w_{E'}\varphi_{\text{out}}(m_{E'}) \geq w_E|\varphi_{\text{out}}(m_E)|$. In particular $w_{E_{\text{out}}}\varphi_{\text{out}}(m_{\text{out}})=dw_{\text{out}} \geq w_E|\varphi(m_E)|$. This gives a bound $w_E \leq \lfloor\frac{dw_{\text{out}}}{|\varphi_{\text{out}}(m_E)|}\rfloor$ and there are only finitely many bounded legs $E$ with $|\varphi_{\text{out}}(m_E)| \leq dw_{\text{out}}$, i.e., such that this bound is nonzero. \end{proof}
\begin{cor} \label{cor:finite} The sets $\mathfrak{H}_d$ and $\tilde{\mathfrak{H}}_d$ are finite. In particular, tropical curves in $\tilde{\mathfrak{H}}_d$ are rigid and the basic monoid of any stable log map with tropicalization in $\tilde{\mathfrak{H}}_d$ is $Q_{\text{basic}}=\mathbb{N}$. \end{cor}
\subsection{The limit $s\rightarrow 0$} \label{S:degdiv}
Here we consider the $2$-parameter family $\tilde{\mathfrak{X}}_Q\rightarrow\mathbb{A}^2$ and describe limits of stable log maps to $\mathfrak{X}:=\mathfrak{X}^{s\neq 0}$ under $s\rightarrow 0$. This will enable us to read off the curve class $\underline{\beta}\in H_2^+(X,\mathbb{Z})$ of a stable log map from its tropicalization.
\begin{defi} For an effective curve class $\underline{\beta}\in H_2^+(X,\mathbb{Z})$ let $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$ be the moduli space of basic stable log maps to $\tilde{\mathfrak{X}}_Q\rightarrow\mathbb{A}^2$ of class $\beta$ (Definition \ref{defi:beta}). Since $\tilde{\mathfrak{X}}_Q$ is projective over $\mathbb{A}^1$ by projection to $s$, the moduli space $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$ is proper over $\mathbb{A}^1$ by \cite{LogGW}, Theorem 0.2. Figure \ref{fig:bigpicture} shows the fibers of a stable log map of degree $1$ in $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$ for $(\mathbb{P}^2,E)$. \end{defi}
\begin{lem} Let $f : \mathfrak{C} \rightarrow \mathfrak{X}$ be a stable log map in $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$. Then the fibers $f_t^0 : C_t^0 \rightarrow X_t^0$ map entirely to the divisor $D_t^0$. \end{lem}
\begin{proof} Suppose there is an irreducible component of $C_t^0$ not mapped to $D_t^0$. Then, since the marked point is mapped to $D_t^0$, there exists an irreducible component of $C_t^0$ that is not mapped onto $D_t^0$ and is not contracted to a point. But then the tropicalization of $f_t^0$ must have at least two legs, since the balancing condition implies balancing of the legs. This means $C_t^0$ must have at least two marked points, in contradiction with the definition of $\beta$ (Definition \ref{defi:beta}). \end{proof}
\begin{figure}
\caption{The fibers of a stable log map of degree $1$ in $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$ for $(\mathbb{P}^2,E)$.}
\label{fig:bigpicture}
\end{figure}
\begin{defi} \label{defi:G} Let $\Gamma_{D_t^0}$ be the dual intersection graph of $D_t^0$. This is a cycle with $k$ vertices, where $k$ is the number of irreducible components of $D_t^0$. Let $\mathfrak{G}_d$ be the set of graph morphisms $g : \Gamma \rightarrow \Gamma_{D_t^0}$ where $\Gamma$ is a tree (genus $0$ graph) with vertices $V$ decorated by $d_V\in\mathbb{N}_{>0}$ such that $\sum_V d_V = d$. \end{defi}
\begin{con} \label{con:HG} Note that projection to the unique unbounded direction defines a map $B \rightarrow \Gamma_{D_t^0}$, where vertices of $\Gamma_{D_t^0}$ correspond to unbounded edges of $B$. Define a surjective map \[ \mathfrak{H}_d \rightarrow \mathfrak{G}_d \] by composing $h : \Gamma \rightarrow B$ with this projection and defining the label $d_V$ at a vertex $V$ as follows. Let $h:\Gamma\rightarrow B$ be a tropical curve in $\mathfrak{H}_d$. Give $\Gamma$ the structure of a rooted tree by defining the root vertex to be the vertex $V_{\textup{out}}$ of the unique unbounded leg $E_{\textup{out}}$. Let $V$ be a vertex of $\Gamma$ and let $E_{V,\textup{out}}$ be the edge connecting $V$ with its parent, or $E_{V,\textup{out}}=E_{\textup{out}}$ if $V$ is the root vertex $V_{\textup{out}}$. Let $E_1,\ldots,E_r$ be the other edges of $\Gamma$ adjacent to $V$. Then define \[ d_V = \varphi(u_{(V,E_{V,\text{out}})}) - \sum_{i=1}^r \varphi(-u_{(V,E_i)}). \] \end{con}
Let $f : C \rightarrow \mathfrak{X} := \mathfrak{X}^{s\neq 0}$ be a stable log map in $\mathscr{M}(\mathfrak{X},\beta)$. Since all the fibers $\mathfrak{X}^s$ are isomorphic for $s\neq 0$ this gives a family of stable log maps over $\mathbb{A}^1 \times (\mathbb{A}^1 \setminus \{0\})$. Since $\mathscr{M}(\tilde{\mathfrak{X}}_Q,\beta)$ is proper this family can be uniquely completed to a family over $\mathbb{A}^2$. In other words, the limit of a stable log map in $\mathscr{M}(\mathfrak{X},\beta)$ under $s\rightarrow 0$ is well defined.
\begin{prop} \label{prop:limit} Let $f : \mathfrak{C} \rightarrow \mathfrak{X}_Q$ be a stable log map whose tropicalization $\tilde{h}$ is the image of $h\in\mathfrak{H}_d$ under the map from Construction \ref{con:tilde}. The limit of $f$ with respect to the family $\mathfrak{X}_t^0\rightarrow\mathbb{A}^1$ has dual graph given by the image of $h$ under the map from Construction \ref{con:HG}. \end{prop}
\begin{proof} Let $f : \mathfrak{C} \rightarrow \mathfrak{X}_Q$ be a stable log map whose tropicalization $\tilde{h}$ is the image of $h : \Gamma \rightarrow B$. Consider the fiber $f_0^0 : C_0^0 \rightarrow X_0^0$. If a vertex $V$ of $\Gamma$ is mapped to a vertex $v$ of $\mathscr{P}$ or the unbounded edge adjacent to $v$, then the corresponding irreducible component $C_V$ of $C_0^0$ is mapped to the irreducible component $X_v$ of $X_0^0$ corresponding to $v$. But then, for $t\neq 0$, the corresponding irreducible component $C_V$ of $C_t^0$ is mapped to the irreducible component $D_v$ of $D_t^0$ corresponding to $v$. This is the image of $V$ under the map from Construction \ref{con:HG}. The restriction $f_t^0|_{C_V} : C_V \rightarrow D_v \simeq \mathbb{P}^1$ is a multiple cover of a line. Its degree is precisely the label $d_V$ of $V$ as above. \end{proof}
\begin{figure}
\caption{The tropical curve from Figure \ref{fig:balancing2} under $\mathfrak{H}_d \rightarrow \mathfrak{G}_d$ gives $[2,1]$ for some choice of $D_i$ and some choice of ordering.}
\label{fig:degreesplitting}
\end{figure}
For $s=0$ fix a cyclic labelling of the cycle of lines $D_t^0 = D_1 + \ldots + D_k$ and write $v_1,\ldots,v_k$ for the corresponding vertices of $\Gamma_{D_t^0}$.
\begin{defi} \label{defi:splitting} Given $g : \Gamma \rightarrow \Gamma_{D_t^0}$ in $\mathfrak{G}_d$ write \[ d_i := \sum_{\substack{V \in \Gamma^{[0]} \\ g(V) = v_i}} d_V. \] Then the collection $[d_1,\ldots,d_k]$ gives the degrees of the curve over the lines $D_i=\mathbb{P}^1$. If all components of $D_t^0$ are isomorphic as divisors of $X_t^0$, e.g. for $X=\mathbb{P}^2$ or $X=\mathbb{P}^1\times\mathbb{P}^1$, we omit all zeros in this collection. We call this collection the \textit{degree splitting} corresponding to $g$ or any tropical curve $h$ mapping to $g$ under the map from Construction \ref{con:HG}. For example, the tropical curve for $(\mathbb{P}^2,E)$ in Figure \ref{fig:balancing2} has degree splitting $[2,1,0]$ for any choice of labelling of $D_t^0=D_1+D_2+D_3$, and we simply write this as $[2,1]$. \end{defi}
\begin{con} \label{con:GH} Define a surjective map \[ \mathfrak{G}_d \rightarrow H_2^+(X,\mathbb{Z}) \] by sending an element $g : \Gamma \rightarrow \Gamma_{D_t^0}$ of $\mathfrak{G}_d$ with degree splitting $[d_1,\ldots,d_k]$ to the curve class $\underline{\beta}$ defined by \[ D_i \cdot \underline{\beta} = d_i. \] This is well-defined by the balancing condition and since $H_2^+(X,\mathbb{Z})\simeq H_2^+(X^0,\mathbb{Z})$, where $X^0$ is a toric variety. \end{con}
\begin{defi} \label{defi:H} Let $\mathfrak{H}_\beta$ be the set of tropical curves mapping to $\underline{\beta}$ under the composition of the maps from Constructions \ref{con:HG} and \ref{con:GH}. Let $\tilde{\mathfrak{H}}_\beta$ be the image of $\mathfrak{H}_\beta$ under the map from Construction \ref{con:tilde}. \end{defi}
A direct consequence of Proposition \ref{prop:limit} is the following.
\begin{cor} The tropicalization of a stable log map of class $\beta$ is in $\tilde{\mathfrak{H}}_\beta$. \end{cor}
\subsection{Refinement and logarithmic modification} \label{S:refinement}
To apply the degeneration formula in \S\ref{S:degformula} we need a degeneration of $(X,D)$ such that all stable log maps to the central fiber are torically transverse. We achieve this as follows.
\begin{con}[The refined degeneration] \label{con:deg3} Let $\mathscr{P}_d$ be a refinement of $\mathscr{P}$ such that each tropical curve in $\mathfrak{H}_{\leq d}=\cup_{d'\leq d}\mathfrak{H}_{d'}$ (or equivalently in $\tilde{\mathfrak{H}}_{\leq d}$) is contained in the $1$-skeleton of $\mathscr{P}_d$. This is well-defined by finiteness of $\mathfrak{H}_d$ (Corollary \ref{cor:finite}) and defines a refinement of the generalized cone complex $\Sigma(\tilde{X}_0)$ by taking cones over cells of $\mathscr{P}_d$. In turn, $\mathscr{P}_d$ induces a logarithmic modification $\tilde{\mathfrak{X}}_d\rightarrow\mathbb{A}^1$ of $\tilde{\mathfrak{X}}\rightarrow\mathbb{A}^1$ (see \S\ref{A:artin}) without changing the generic fiber. By making a base change $\mathbb{A}^1 \rightarrow \mathbb{A}^1, t \mapsto t^e$ we can scale $\mathscr{P}_d$ and thus assume it has integral vertices (c.f. \cite{NS}, Proposition 6.3). \end{con}
\begin{rem} \label{rem:transverse} The dual intersection complex of the central fiber $Y$ of $\tilde{\mathfrak{X}}_d$ is given by $(\tilde{B},\mathscr{P}_d,\varphi)$. Hence, all stable log maps to $Y\rightarrow\textup{pt}_{\mathbb{N}}$ of class $\beta$ as in Definition \ref{defi:beta} are torically transverse, since their tropicalizations map onto the $1$-skeleton of $\mathscr{P}_d$, with vertices mapping to vertices of $\mathscr{P}_d$ (see \cite{MR}, Proposition 4.6). \end{rem}
Gromov-Witten invariants are invariant under logarithmic modifications \cite{AW}. Hence, \[ N_\beta = \int_{\llbracket\mathscr{M}(Y,\beta)\rrbracket}1. \] In the next section we will apply the degeneration formula of logarithmic Gromov-Witten theory to get a formula for $N_d$ in terms of logarithmic Gromov-Witten invariants of irreducible components of $Y$.
\section{The degeneration formula} \label{S:degformula}
Consider a projective semi-stable degeneration $\pi : \mathfrak{X} \rightarrow T = \textup{Spec }R$, for $R$ a discrete valuation ring. This is a projective surjection such that the generic fiber is smooth and the fiber $X=\pi^{-1}(0)$ over the closed point $0\in T$ is simple normal crossings with two smooth connected (hence irreducible) components $X_1,X_2$ meeting in a smooth connected divisor $D$. In the logarithmic language this means that $\pi$ is log smooth when $T$ and $\mathfrak{X}$ carry the divisorial log structures given by $0\in T$ and $X\subseteq\mathfrak{X}$, respectively. The degeneration formula relates invariants (relative or logarithmic Gromov-Witten invariants) on the generic fiber of $\pi : \mathfrak{X} \rightarrow T$ to invariants on the components $X_1,X_2$ of the special fiber.
The degeneration formula was proved for stable relative maps, in symplectic geometry \cite{LiRu}\cite{IoPa} and in algebraic geometry \cite{Li2}\cite{AF}, as well as for stable log maps using expanded degenerations \cite{Che2}. A pure log-geometric version avoiding the target expansions of relative Gromov-Witten theory was worked out by Kim, Lho and Ruddat \cite{KLR} using logarithmic Gromov-Witten theory \cite{LogGW}.
The formula in \cite{KLR} is stated in the setup above, with two smooth irreducible components $X_1$ and $X_2$. In our setup we have several irreducible components, indexed by the vertices of a tropical curve. One could generalize the formula in \cite{KLR} to the case where $X_1,X_2$ are only log smooth, in particular they might be reducible, and then apply it repeatedly to get a formula for several components. For the sake of self-containedness we will take a different approach and prove the formula more explicitly in our setting, only referring to \cite{KLR} for some general statements.
Fix an integer $d>0$ and let $\tilde{\mathfrak{X}}_d\rightarrow\mathbb{A}^1$ be the refined log smooth degeneration of $(X,D)$ from Construction \ref{con:deg3}. Write the central fiber as $Y$ and let $\beta$ be a class of stable log maps as in Definition \ref{defi:beta} with $D\cdot\underline{\beta}\leq dw_{\text{out}}$. Let $Y^\circ$ be the complement of zero-dimensional toric strata in $Y$ and write \[ \mathscr{M}_\beta := \mathscr{M}(Y,\beta). \]
Since $\mathscr{M}(Y^\circ,\beta)$ is canonically isomorphic to the moduli space of torically transverse stable log maps to $Y$ of class $\beta$ and all such maps are torically transverse (Remark \ref{rem:transverse}), the canonical inclusion gives an isomorphism $\mathscr{M}_\beta \cong \mathscr{M}(Y^\circ,\beta)$.
For a vertex $v\in\mathscr{P}_d^{[0]}$, let $Y_v^\circ$ be the complement of the $0$-dimensional toric strata of the irreducible component $Y_v$ of $Y$ corresponding to $v$. Then $Y^\circ$ is a union of finitely many log smooth schemes $Y_v^\circ$ over $\textup{pt}_{\mathbb{N}}$, with $Y_v^\circ \cap Y_{v'}^\circ = \emptyset$ if there is no edge connecting $v$ and $v'$, and $D_E^\circ:=Y_v^\circ \cap Y_{v'}^\circ$ log smooth and a divisor of both $Y_v^\circ$ and $Y_{v'}^\circ$ if there is an edge $E$ connecting $v$ and $v'$. The intersection of any triple of components is empty. Hence, we can apply the degeneration formula.
\subsection{Toric invariants} \label{S:toricinv}
We introduce logarithmic Gromov-Witten invariants of toric varieties with point conditions on the toric boundary, following \cite{GPS}.
Let $M\simeq\mathbb{Z}^2$ be a lattice and let $M_{\mathbb{R}}=M\otimes_{\mathbb{Z}}\mathbb{R}$ be the associated vector space. Let $\textbf{m}=(m_1,\ldots,m_n)$ be an $n$-tuple of distinct nonzero primitive vectors in $M$ and let $\textbf{w}=(\textbf{w}_1,\ldots,\textbf{w}_n)$ be an $n$-tuple of weight vectors $\textbf{w}_i=(w_{i1},\ldots,w_{il_i})$ with $l_i>0$, $w_{ij}\in \mathbb{N}$ such that
\[ \sum_{i=1}^n|\textbf{w}_i|m_i = w_{\textup{out}} m_{\textup{out}} \]
for $0\neq m_{\text{out}}\in M$ primitive and $w_{\textup{out}}> 0$. Here $|\textbf{w}_i| := \sum_{j=1}^{l_i} w_{ij}$. Let $\Sigma$ be the complete rational fan in $M_{\mathbb{R}}$ whose rays are generated by $-m_1,\ldots,-m_n,m_{\textup{out}}$ and let $X_\Sigma$ be the corresponding toric surface over $\mathbb{C}$. By refining $\Sigma$ if necessary, we can assume that $X_\Sigma$ is nonsingular. Let $D_1,\ldots,D_n,D_{\textup{out}}\subseteq X_\Sigma$ be the toric divisors corresponding to the given rays. Let $X_\Sigma^\circ$ be the complement of the $0$-dimensional torus orbits in $X_\Sigma$, and let $D_i^\circ=D_i\cap X_\Sigma^\circ$, $D_{\textup{out}}^\circ = D_{\textup{out}}\cap X_\Sigma^\circ$. Then define a class $\beta_{\textbf{w}}$ of stable log maps to $X_\Sigma$ as follows. \begin{compactenum}[(1)] \item genus $g=0$; \item $k=l_1+\ldots+l_n+1$ marked points $p_{ij}, i=1,\ldots,n, j=1,\ldots,l_i$ and $p$; \item $\underline{\beta}_{\textbf{w}}\in H_2(X_\Sigma,\mathbb{Z})$ defined by intersection numbers with toric divisors,
\[ D_i \cdot \underline{\beta}_{\textbf{w}} = |\textbf{w}_i|, \quad D_{\textup{out}}\cdot\underline{\beta}_{\textbf{w}} = w_{\textup{out}}; \] \item contact data $u_{p_{ij}}=w_{ij}m_i$ and $u_p=w_{\textup{out}}m_{\textup{out}}$. \end{compactenum} By restriction we get a class of stable log maps to $X_\Sigma^\circ$ that we also denote by $\beta_{\textbf{w}}$. The moduli space $\mathscr{M}(X_\Sigma^\circ,\beta_{\textbf{w}})$ in general is not proper, since $X_\Sigma^\circ$ is not proper. However, the evaluation map \[ \textup{ev}^\circ : \mathscr{M}(X_\Sigma^\circ,\beta_{\textbf{w}}) \rightarrow \prod_{i=1}^n(D_i^\circ)^{l_i} \] is proper (\cite{GPS}, Proposition 4.2) and we obtain a proper moduli space via base change to a point. To be precise, let $\gamma : \textup{Spec }\mathbb{C} \rightarrow \prod_{i=1}^n(D_i^\circ)^{l_i}$ be a point. Then \[ \mathscr{M}_\gamma := \textup{Spec }\mathbb{C} \times_{\prod_{i=1}^n(D_i^\circ)^{l_i}} \mathscr{M}(X_\Sigma^\circ,\beta_{\textbf{w}}) \] is a proper Deligne-Mumford stack admitting a virtual fundamental class, and we can define the logarithmic Gromov-Witten invariant \begin{equation} \label{eq:Ntor} N_{\textbf{m}}(\textbf{w}) := \int_{\mathscr{M}_\gamma}\gamma^!\llbracket\mathscr{M}(X_\Sigma^\circ,\beta_{\textbf{w}})\rrbracket. \end{equation} Since the codimension of $\gamma$ equals the virtual dimension of $\mathscr{M}(X_\Sigma^\circ,\beta_{\textbf{w}})$, this definition makes sense. Note that we may add further primitive vectors $m_i$ to $\textbf{m}$, with weight vectors $\textbf{w}_i=0$. This leads to a subdivision of $\Sigma$, hence to a toric blow up of $X_\Sigma$, but the logarithmic Gromov-Witten invariants do not change.
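To illustrate the definition, take $n=2$, $m_1=(1,0)$, $m_2=(0,1)$ and $\textbf{w}_1=\textbf{w}_2=(1)$, so that $w_{\textup{out}}=1$ and $m_{\textup{out}}=(1,1)$. The fan $\Sigma$ has rays generated by $(-1,0)$, $(0,-1)$ and $(1,1)$, hence $X_\Sigma\simeq\mathbb{P}^2$, and $\underline{\beta}_{\textbf{w}}$ is the class of a line. The image of a stable log map of class $\beta_{\textbf{w}}$ passing through the two prescribed general points of $D_1^\circ$ and $D_2^\circ$ is the line through these points, and indeed $N_{\textbf{m}}(\textbf{w})=1$ in this case.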
\subsection{The decomposition formula} \label{S:decomposition}
By the decomposition formula for stable log maps (\cite{ACGS1}, Theorem 1.2), the moduli space $\mathscr{M}_\beta$ decomposes into moduli spaces indexed by certain decorated tropical curves. Here decorated means that there are classes of stable log maps $\beta_V$ attached to the vertices. In this section we show that a tropical curve in $\tilde{\mathfrak{H}}_d$ automatically carries such decorations.
\begin{prop} \label{prop:decorations} Let $f : C/\textup{pt}_{\mathbb{N}} \rightarrow Y/\textup{pt}_{\mathbb{N}}$ be a stable log map in $\mathscr{M}_\beta$ with tropicalization $\tilde{h} : \tilde{\Gamma} \rightarrow \tilde{B}$. For each vertex $V$ of $\tilde{\Gamma}$, the class $[C_V]\in H_2^+(Y_{\tilde{h}(V)},\mathbb{Z})$ of the corresponding irreducible component $C_V$ of $C$ is uniquely determined by the intersection numbers of $C_V$ with components of $\partial Y_{\tilde{h}(V)}$, i.e., by $\tilde{h}$. \end{prop}
\begin{proof} If $V$ is of type (I) as in Definition \ref{defi:types}, then $Y_{\tilde{h}(V)}$ is a toric variety, so the statement is true. If $V$ is of type (II), then $C_V$ is a multiple cover of some exceptional line $L^{\textup{exc}}_v$ (Definition \ref{defi:Lexc}). Its intersection with $\partial Y_v$ determines the degree $d$ of the multiple cover, hence the curve class $[C_V] = d[L^{\textup{exc}}_v]\in H_2^+(Y_v,\mathbb{Z})$. Let $V$ be a vertex of type (III). It is mapped to a vertex $v$ of $\mathscr{P}$. Let $X_v$ be the corresponding component of $X_0$. This is a toric variety. By Proposition \ref{prop:balancing}, (III), we know the intersection of the image of $C_V$ under the resolution $\nu : \tilde{\mathfrak{X}} \rightarrow \mathfrak{X}$ from \S\ref{S:resolution} with the toric divisors of $X_v$, hence the curve class $[\nu(C_V)]\in H_2^+(X_v,\mathbb{Z})$. But this determines $[C_V] = [\nu(C_V)] - k[L^{\textup{exc}}_v] \in H_2^+(Y_v,\mathbb{Z})$, where $k$ is as in Proposition \ref{prop:balancing}, (III). \end{proof}
\begin{defi} \label{defi:Mh} For $\tilde{h} : \tilde{\Gamma} \rightarrow \tilde{B}$ in $\tilde{\mathfrak{H}}_d$, let $\mathscr{M}_{\tilde{h}}$ be the moduli space of stable log maps with tropicalization $\tilde{h}$. This is proper by \cite{ACGS1}, Proposition 2.34. \end{defi}
\begin{rem} \label{rem:tau} In fact \cite{ACGS1} deals with moduli spaces $\mathscr{M}_{\tilde{\tau}}$ of stable log maps \textit{marked by} $\tilde{\tau}=(\tau,\textbf{A})$, where $\tau$ is a type of tropical maps and $\textbf{A}$ is a vertex decoration by curve classes. Since the virtual dimension of $\mathscr{M}_\beta$ is zero and tropical curves in $\tilde{\mathfrak{H}}_d$ are rigid, such $\tilde{\tau}$ are in bijection with vertex decorated tropical curves. We showed in Proposition \ref{prop:decorations} that tropical curves in $\tilde{\mathfrak{H}}_d$ carry unique vertex decorations. So $\tilde{\tau}$ uniquely defines a tropical curve $\tilde{h}$ and $\mathscr{M}_{\tilde{\tau}}$ equals $\mathscr{M}_{\tilde{h}}$. \end{rem}
\begin{prop}[Decomposition formula] \label{prop:dec}
\[ \llbracket\mathscr{M}_\beta\rrbracket = \sum_{\tilde{h}\in\tilde{\mathfrak{H}}_\beta}\frac{l_{\tilde{\Gamma}}}{|\textup{Aut}(\tilde{h})|} F_\star\llbracket\mathscr{M}_{\tilde{h}}\rrbracket, \]
where $l_{\tilde{\Gamma}} := \textup{lcm}\{w_E \ | \ E\in E(\tilde{\Gamma})\}$ and $F:\mathscr{M}_{\tilde{h}}\rightarrow\mathscr{M}_\beta$ is the forgetful map. Here $\text{Aut}(\tilde{h})$ is the group of automorphisms of $\tilde{h}$ (Definition \ref{defi:aut}). \end{prop}
\begin{proof}
The decomposition formula (\cite{ACGS1}, Theorem 1.2) gives $\llbracket\mathscr{M}_\beta\rrbracket$ as a sum over decorated types of tropical maps $\tilde{\tau}$. By Remark \ref{rem:tau} this is a summation over $\tilde{\mathfrak{H}}_\beta$. The multiplicity $m_\tau$ in \cite{ACGS1}, Theorem 1.2, is defined as the index of the image of the lattice $\Sigma(\textup{pt}_{\mathbb{N}})=\mathbb{N}$ inside the lattice $\Sigma(\textup{pt}_\mathbb{N})=\mathbb{N}$. Here the first $\text{pt}_{\mathbb{N}}$ is the base of the curve while the second $\text{pt}_{\mathbb{N}}$ is the base of $Y$. In other words, $m_\tau$ is the smallest integer such that scaling $\tilde{B}$ by $m_\tau$ leads to a tropical curve with integral vertices and edge lengths. By Construction \ref{con:deg3} $\mathscr{P}_d$ has integral vertices (by the base change $t\mapsto t^e$) and tropical curves in $\tilde{\mathfrak{H}}_\beta$ are contained in the $1$-skeleton of $\mathscr{P}_d$ with vertices mapping to vertices of $\mathscr{P}_d$. The affine length of the image of an edge $E\in E(\tilde{\Gamma})$ is $\ell_Ew_E$ by Definition \ref{defi:tropical}, (i). So the scaling necessary to obtain integral edge lengths is $m_\tau = l_{\tilde{\Gamma}} := \textup{lcm}\{w_E \ | \ E\in E(\tilde{\Gamma})\}$. Moreover $\text{Aut}(\tau)=\text{Aut}(\tilde{h})$ by our definition of automorphisms (see Definition \ref{defi:aut}). Then \cite{ACGS1}, Theorem 1.2, gives the formula above. \end{proof}
\subsection{Contributions of the vertices} \label{S:contributions}
Let $\tilde{h} : \tilde{\Gamma} \rightarrow \tilde{B}$ be a tropical curve in $\tilde{\mathfrak{H}}_d$. For a vertex $V$ of $\tilde{\Gamma}$, with $\beta_V$ the class attached to $V$ as in \S\ref{S:decomposition}, define \[ \mathscr{M}_V := \mathscr{M}(Y_{\tilde{h}(V)},\beta_V) \quad\text{and}\quad \mathscr{M}_V^\circ := \mathscr{M}(Y_{\tilde{h}(V)}^\circ,\beta_V), \] where $Y_{\tilde{h}(V)}^\circ$ is the complement of the $0$-dimensional toric strata in $Y_{\tilde{h}(V)}$.
For $V\in V_{II}(\tilde{\Gamma})$ (Definition \ref{defi:types}) with adjacent edge $E$, the moduli space $\mathscr{M}_V^\circ$ is proper, since it is isomorphic to the moduli space of $w_E$-fold multiple covers of $\mathbb{P}^1$ totally ramified at a point.
For $V\in V(\tilde{\Gamma})\setminus V_{II}(\tilde{\Gamma})$ we obtain a proper moduli space as follows. Again, $\tilde{\Gamma}$ is a rooted tree with root vertex $V_{\textup{out}}$. There is a natural orientation of the edges of $\tilde{\Gamma}$ by choosing edges to point from a vertex to its parent. For each vertex $V\in V(\tilde{\Gamma})\setminus V_{II}(\tilde{\Gamma})$ there is an evaluation map \[ \textup{ev}_{V,-}^\circ : \mathscr{M}_V^\circ \rightarrow \prod_{E\rightarrow V}D_E^\circ, \] where the product is over all edges of $\tilde{\Gamma}$ adjacent to $V$ and pointing towards $V$.
\begin{lem} \label{lem:proper} The evaluation map $\textup{ev}_{V,-}^\circ$ is proper. \end{lem}
\begin{proof} For $V\in V_I(\tilde{\Gamma})$ this is \cite{GPS}, Proposition 4.2. For $V\in V_{III}(\tilde{\Gamma})$ it is similar to \cite{GPS}, Proposition 5.1. Let us carry this out. Let $V\in V(\tilde{\Gamma})$ be a vertex. We use the valuative criterion for properness, so let $R$ be a valuation ring with residue field $K$, and suppose we are given a diagram \begin{equation*} \begin{xy} \xymatrix{ T=\text{Spec }K \ar[r]\ar[d] & \mathscr{M}_V^\circ \ar[r]\ar[d]^{\textup{ev}_{V,-}^\circ} & \mathscr{M}_V \ar[d]^{\textup{ev}_{V,-}} \\ S=\text{Spec }R \ar[r] & \prod_{E\rightarrow V}D_E^\circ \ar[r] & \prod_{E\rightarrow V}D_E } \end{xy} \end{equation*} Since $\mathscr{M}_V$ is proper, $\textup{ev}_{V,-}$ is proper and we obtain a unique family of stable log maps \begin{equation*} \begin{xy} \xymatrix{ \mathcal{C} \ar[r]\ar[d] & Y_{\tilde{h}(V)}\times S \ar[d] \\ S \ar[r]^{=} & S } \end{xy} \end{equation*} We will show that $f$ is a family of stable log maps to $Y_{\tilde{h}(V)}^\circ$. Let $0\in S$ be the closed point and consider $f_0 : C_0 \rightarrow Y_{\tilde{h}(V)}$. The marked points of $C_0$ map to $Y_{\tilde{h}(V)}^\circ$.
Suppose $f_0(C_0)$ intersects a toric divisor $D\subset Y_{\tilde{h}(V)}$ at a point of $D\setminus Y_{\tilde{h}(V)}^\circ$. The intersection number of $C_0$ with $D$ is accounted for in $Y_{\tilde{h}(V)}^\circ$. For $V\in V_I(\tilde{\Gamma})$ this is clear and for $V\in V_{III}(\tilde{\Gamma})$ this follows since the intersection number is accounted for after composing with the resolution $\tilde{\mathfrak{X}}\rightarrow\mathfrak{X}$ from \S\ref{S:resolution}. Hence, there must be an irreducible component $C$ of $C_0$ dominating $D$. Let $D_1$ and $D_2$ be the two distinct toric divisors of $Y_{\tilde{h}(V)}$ intersecting $D$ in the two distinct torus fixed points of $D$. It was shown in \cite{GPS}, Proposition 4.2, that there are irreducible components $C_1,C_2\subset C_0$ intersecting $C$ and dominating $D_1$ and $D_2$, respectively. By applying this statement repeatedly, replacing $C$ with $C_1$ or $C_2$, we find that $C_0$ contains a cycle of components dominating the union of toric divisors of $Y_{\tilde{h}(V)}$. But then $C_0$ would have genus $g>0$, contradicting the assumptions. We have shown by contradiction that $f : \mathcal{C} \rightarrow Y_{\tilde{h}(V)} \times S$ is a family of stable log maps to $Y_{\tilde{h}(V)}^\circ$. Hence, $\textup{ev}_{V,-}^\circ$ is proper by the valuative criterion for properness. \end{proof}
Since properness of morphisms is stable under base change, we obtain a proper moduli space by base change to a point \[ \gamma_V : \textup{Spec }\mathbb{C} \rightarrow \prod_{E\rightarrow V}D_E^\circ, \] that is, \[ \mathscr{M}_{\gamma_V} := \textup{Spec }\mathbb{C} \times_{\prod_{E\rightarrow V}D_E^\circ} \mathscr{M}_V^\circ \] is a proper Deligne-Mumford stack.
\begin{lem} \label{lem:vdim} For $V\in V_{II}(\tilde{\Gamma})$ the virtual dimension of $\mathscr{M}_V$ is zero. Otherwise the virtual dimension of $\mathscr{M}_V$ equals the codimension of $\gamma_V$. \end{lem}
\begin{proof} For $V\in V_{II}(\tilde{\Gamma})$ with adjacent edge $E$ the moduli space $\mathscr{M}_V$ is isomorphic to the moduli space of $w_E$-fold multiple covers of $\mathbb{P}^1$ totally ramified at a point. However, the two moduli spaces carry obstruction theories which differ by $H^1(C,f^\star\mathcal{O}_{\mathbb{P}^1}(-1))$ at a moduli point $[f:C\rightarrow L_V^{\text{exc}}]$ (c.f. \cite{GPS}, {\S}5.3). The rank of $H^1(C,f^\star\mathcal{O}_{\mathbb{P}^1}(-1))$ is $w_E-1$ and so is the virtual dimension of $\mathscr{M}(\mathbb{P}^1/\infty,w_E)$. Hence, the virtual dimension of $\mathscr{M}_V$ is zero.
Otherwise, the virtual dimension is easily seen to be the number of edges of $\tilde{\Gamma}$ pointing towards $V$, with orientation of $\tilde{\Gamma}$ as given above. By definition this is the codimension of $\gamma_V$. \end{proof}
\begin{defi} \label{defi:NV} For a vertex $V$ of $\tilde{\Gamma}$ define \[ N_V := \begin{cases} \int_{\llbracket\mathscr{M}_V^\circ\rrbracket}1, & V \in V_{II}(\tilde{\Gamma}); \\ \int_{\mathscr{M}_{\gamma_V}}\gamma_V^!\llbracket\mathscr{M}_V^\circ\rrbracket, & V \in V_I(\tilde{\Gamma})\cup V_{III}(\tilde{\Gamma}). \end{cases} \] This is a finite number by Lemma \ref{lem:vdim} and independent of $\gamma_V$ by Lemma \ref{lem:proper}. \end{defi}
\begin{prop} \label{prop:N} We give $N_V$ for the different types of vertices (Definition \ref{defi:types}). \begin{compactenum}[(I)] \item For $V\in V_I(\tilde{\Gamma})$ let $e_1,\ldots,e_n$ be the edges of $\mathscr{P}_d$ adjacent to $\tilde{h}(V)$, let $m_1,\ldots,m_n$ be the corresponding primitive vectors, and write $\textbf{m}=(m_1,\ldots,m_n)$. Let $\textbf{w}_{i}=(w_{i1},\ldots,w_{il_i})$ be the weights of edges of $\tilde{\Gamma}$ mapping to $e_i$ and write $\textbf{w}=(\textbf{w}_1,\ldots,\textbf{w}_n)$. Then $N_V$ is the toric invariant from \eqref{eq:Ntor}, \[ N_V = N_{\textbf{m}}(\textbf{w}). \] \item If $V\in V_{II}(\tilde{\Gamma})$, then \[ N_V = \frac{(-1)^{w_E-1}}{w_E^2}, \] where $E$ is the unique edge adjacent to $V$. \item If $V\in V_{III}(\tilde{\Gamma})$, then
\[ N_V = \sum_{\textbf{w}_{V,+}} \frac{N_{\textbf{m}}(\textbf{w})}{|\textup{Aut}(\textbf{w}_{V,+})|} \prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}}. \]
The sum is over all weight vectors $\textbf{w}_{V,+}=(w_{V,1},\ldots,w_{V,l_V})$ such that $|\textbf{w}_{V,+}| := \sum_{i=1}^{l_V} w_{V,i} = k$, with $k$ as in Proposition \ref{prop:balancing}, (III). Further, $N_{\textbf{m}}(\textbf{w})$ is as in \eqref{eq:Ntor} with $\textbf{m}=(m_{v,-},m_{v,+})$ and $\textbf{w}=((w_E)_{E\in E_{V,-}(\tilde{\Gamma})},\textbf{w}_{V,+})$, where $E_{V,-}$ is the set of edges adjacent to $V$ and mapped to direction $m_{v,-}$. \end{compactenum} \end{prop}
\begin{proof} (I) is by the definition of $N_{\textbf{m}}(\textbf{w})$ in \eqref{eq:Ntor}. For (II) recall from the proof of Lemma \ref{lem:vdim} that $\mathscr{M}_V$ is isomorphic to $\mathscr{M}(\mathbb{P}^1/\infty,w_E)$ with obstruction theory differing by $H^1(C,f^\star\mathcal{O}_{\mathbb{P}^1}(-1))$. Hence, \[ N_V=\int_{\llbracket\mathscr{M}(\mathbb{P}^1/\infty,w_E)\rrbracket}e(H^1(C,f^\star\mathcal{O}_{\mathbb{P}^1}(-1))) \]
which is equal to $(-1)^{w_E-1}/w_E^2$ by the genus zero part of \cite{BP}, Theorem 5.1; see also \cite{GPS}, Propositions 5.2 and 6.1. For (III) we apply \cite{GPS}, Proposition 5.3. We only blow up one point on the divisor $D_{v,+}$, so in the notation of \cite{GPS}, Proposition 5.3, we have $\textbf{P}=(P_+)$ with $P_+ = k$ as in Proposition \ref{prop:balancing}, (III). Note that our $i$ is called $j$ in \cite{GPS} and the $i$ of \cite{GPS} is equal to $1$ here. Further, $R_{\textbf{P}_+|\textbf{w}_+} = \prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}^2}$ by \cite{GPS}, Proposition 5.2, and the discussion thereafter. Then \cite{GPS}, Proposition 5.3, gives (III). \end{proof}
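The combinatorics entering (II) and (III) is easy to evaluate in examples. The following sketch (illustrative only, not part of the proof) computes the two local contributions; it assumes that weight vectors are taken up to reordering, i.e.\ as partitions, with $\textup{Aut}$ the stabilizer in the symmetric group, and it leaves the toric input $N_{\textbf{m}}(\textbf{w})$ as a user-supplied callback, since those values come from the separate computation \eqref{eq:Ntor}.
\begin{verbatim}
# Illustrative sketch of Proposition (II)/(III); N_m is a callback
# returning the toric invariant N_m(w), computed elsewhere.
from fractions import Fraction
from math import factorial, prod
from collections import Counter

def partitions(k, largest=None):
    """Yield all partitions of k as weakly decreasing tuples."""
    largest = k if largest is None else largest
    if k == 0:
        yield ()
        return
    for first in range(min(k, largest), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def aut(w):
    """|Aut(w)|: order of the stabilizer of the weight vector w."""
    return prod(factorial(m) for m in Counter(w).values())

def N_type_II(w_E):
    """Contribution (-1)^(w_E - 1) / w_E^2 of a type (II) vertex."""
    return Fraction((-1) ** (w_E - 1), w_E ** 2)

def N_type_III(k, weights_minus, N_m):
    """Sum over weight vectors w_{V,+} with |w_{V,+}| = k."""
    total = Fraction(0)
    for w_plus in partitions(k):
        term = Fraction(1, aut(w_plus)) * N_m(tuple(weights_minus), w_plus)
        for w in w_plus:
            term *= Fraction((-1) ** (w - 1), w)
        total += term
    return total
\end{verbatim}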
\subsection{Gluing}
Define $\bigtimes_{V\in V(\tilde{\Gamma})}\mathscr{M}_V$ to be the moduli space of stable log maps in $\prod_V\mathscr{M}_V$ matching over the divisors $D_E$, $E\in E(\tilde{\Gamma})$, i.e., the fiber product \begin{equation*} \begin{xy} \xymatrix{ \displaystyle\bigtimes_V\mathscr{M}_V \ar[rr]\ar[d] && \displaystyle\prod_V\mathscr{M}_V \ar[d]^{\textup{ev}} \\ \displaystyle\prod_{E\in E(\tilde{\Gamma})} D_E \ar[rr]^\delta && \displaystyle\prod_V\prod_{\substack{E\in E(\tilde{\Gamma}) \\ V \in E}} D_E } \end{xy} \end{equation*} Here $\textup{ev}$ is the product of evaluation maps to common divisors (labeled by compact edges) and $\delta$ is the diagonal map. Similarly define $\bigtimes_V\mathscr{M}_V^\circ$.
\begin{defi} Let $\textup{cut} : \mathscr{M}_{\tilde{h}} \rightarrow \bigtimes_V\mathscr{M}_V$ be the morphism defined by cutting a curve along its gluing nodes. For a precise definition see \cite{Bou1}, {\S}7.1. Here $\mathscr{M}_{\tilde{h}}$ denotes the moduli space of stable log maps with tropicalization $\tilde{h}$ (Definition \ref{defi:Mh}). Since every stable log map in $\mathscr{M}_{\tilde{h}}$ is torically transverse (Remark \ref{rem:transverse}) this is in fact a morphism \[ \text{cut} : \mathscr{M}_{\tilde{h}} \rightarrow \bigtimes_V\mathscr{M}_V^\circ. \] \end{defi}
\begin{lem} \label{lem:degcut} The morphism $\text{cut}$ is \'etale of degree \[ \textup{deg}(\textup{cut}) = \frac{\prod_{E\in E(\tilde{\Gamma})}w_E}{l_{\tilde{\Gamma}}}, \] where $l_{\tilde{\Gamma}} = \textup{lcm}\{w_E\}$. \end{lem}
\begin{proof} Since stable log maps in $\mathscr{M}_{\tilde{h}}$ are torically transverse by Construction \ref{con:deg3}, locally we are gluing along a smooth divisor as in \cite{KLR}. Then the statement is \cite{KLR}, Lemma 9.2 (4), and the degree is computed by \cite{KLR}, (6.13). We will briefly explain how to arrive at this expression.
For each edge $E$ we have a choice of $w_E$-th root of unity in the log structure of $C$ at the corresponding node, contributing a factor of $w_E$ to $\text{deg}(\text{cut})$. This was computed e.g. in \cite{NS}, Proposition 7.1, \cite{Gr10}, Proposition 4.23, and \cite{Bou1}, Proposition 18. In fact, it is a bit more involved: In the definition of tropicalization we removed some of the bivalent vertices from the dual intersection graph $\Gamma_C$. For each compact edge $E$ in $\Gamma_C$ we have a choice of $w_E$-th root of unity as follows. Locally at a node $q$ we have $C=\text{Spec }\mathbb{C}[u,v]/(uv)$ and $\tilde{\mathfrak{X}}_d=\text{Spec }\mathbb{C}[x,y,w^{\pm 1},t]/(xy-t^\ell)$ for some $\ell\in\mathbb{Z}_{>0}$, so $Y:=\tilde{X}_{d,0}=\text{Spec }\mathbb{C}[x,y,w^{\pm 1}]/(xy)$. Locally at $q$ a log structure on $C$ is given by a commutative diagram \begin{equation*} \begin{xy} \xymatrix{ f^{-1}\mathcal{M}_Y \ar[r]^{f^\#}\ar[d]^{\alpha_Y} & \mathcal{M}_C \ar[d]^{\alpha_C} \\ f^{-1}\mathcal{O}_Y \ar[r]^{f^\star} & \mathcal{O}_C } \end{xy} \end{equation*} For any $w_E$-th root of unity $\zeta$ there is a chart for $\mathcal{M}_C$ locally at $q$ given by \[ S_\ell \rightarrow \mathcal{O}_C, \ (a,b,c) \mapsto \begin{cases} (\zeta^{-1}u)^av^b & c = 0, \\ 0 & c \neq 0. \end{cases} \] Here $S_\ell = \mathbb{N}^2 \oplus_{\mathbb{N}} \mathbb{N}$ with $\mathbb{N} \rightarrow \mathbb{N}^2$ the diagonal embedding and $\mathbb{N} \rightarrow \mathbb{N}, 1 \mapsto \ell$ (see Construction \ref{con:tropical}, (2)). None of these choices are identified via a scheme theoretically trivial isomorphism and all possible extensions are of the above form (see \cite{Gr10}, Proposition 4.23, Step 2).
Now consider a chain of edges of $\Gamma_C$ connected by bivalent vertices not mapping to vertices of $\mathscr{P}_d$. These bivalent vertices get removed in the construction of the tropicalization and the chain of edges is replaced by a single edge $E$. In this case there are some isomorphisms between the above stable log maps that are not scheme theoretically trivial. Up to such isomorphisms there are exactly $w_E$ stable log maps (see \cite{Gr10}, Proposition 4.23, Step 3). So we really only get one factor of $w_E$ for each edge in the tropicalization. The log structure at general points and marked points is uniquely determined (see \cite{Gr10}, Proposition 4.23, Step 1). This gives the numerator of $\textup{deg}(\textup{cut})$ as above.
There are further isomorphisms of stable log maps given by the action of $l_\Gamma$-th roots of unity on the base of the curves (see \cite{KLR}, discussion before (6.13)). This gives the denominator of $\textup{deg}(\textup{cut})$ as above. \end{proof}
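For concreteness, the degree formula of Lemma \ref{lem:degcut} is elementary to evaluate; the following lines (purely illustrative, with made-up weights) compute $\textup{deg}(\textup{cut})$ from a list of edge weights.
\begin{verbatim}
# deg(cut) = (product of edge weights) / lcm of the edge weights
from math import gcd, prod

def lcm(values):
    result = 1
    for v in values:
        result = result * v // gcd(result, v)
    return result

def deg_cut(edge_weights):
    return prod(edge_weights) // lcm(edge_weights)

# made-up example: weights 2, 3, 4 give 24 / 12 = 2
assert deg_cut([2, 3, 4]) == 2
\end{verbatim}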
\begin{prop}[Gluing formula] \label{prop:gluing} We have \[ \textup{cut}_\star\llbracket\mathscr{M}_{\tilde{h}}\rrbracket = \frac{1}{\ell_{\tilde{\Gamma}}}\prod_{E\in E(\tilde{\Gamma})}w_E \cdot \delta^!\prod_{V\in V(\tilde{\Gamma})}\llbracket\mathscr{M}_V^\circ\rrbracket. \] \end{prop}
\begin{proof} By compatibility of obstruction theories (see \cite{KLR}, {\S}9, and \cite{Bou1}, {\S}7.3) we have \[ \llbracket\mathscr{M}_{\tilde{h}}\rrbracket = \text{cut}^\star \delta^!\prod_{V\in V(\tilde{\Gamma})}\llbracket\mathscr{M}_V^\circ\rrbracket. \] By the projection formula, $\text{cut}_\star\text{cut}^\star$ is multiplication with $\text{deg}(\text{cut})$ which is $\frac{1}{\ell_{\tilde{\Gamma}}}\prod_{E\in E(\tilde{\Gamma})}w_E$ by Lemma \ref{lem:degcut}. \end{proof}
\begin{prop} \label{prop:loop} We have \[ \int_{\llbracket\mathscr{M}_{\tilde{h}}\rrbracket}1 = \frac{1}{\ell_{\tilde{\Gamma}}}\prod_{E\in E(\tilde{\Gamma})}w_E \cdot \int_{\delta^!\prod_V\llbracket\mathscr{M}_V\rrbracket}1. \] \end{prop}
\begin{proof} By Proposition \ref{prop:gluing} the cycles $\text{cut}_\star\llbracket\mathscr{M}_{\tilde{h}}\rrbracket$ and $\frac{1}{\ell_{\tilde{\Gamma}}}\prod_{E\in E(\tilde{\Gamma})}w_E \cdot \delta^!\prod_V\llbracket\mathscr{M}_V\rrbracket$ have the same restriction to the open substack $\bigtimes_V\mathscr{M}_V^\circ$ of $\bigtimes_V\mathscr{M}_V$. Hence by \cite{Ful}, Proposition 1.8, their difference is rationally equivalent to a cycle supported on the closed substack $Z:=(\bigtimes_V\mathscr{M}_V)\setminus(\bigtimes_V\mathscr{M}_V^\circ)$. Suppose there exists an element $(f_V : C_V \rightarrow Y_{\tilde{h}(V)})_{V\in V(\tilde{\Gamma})}\in Z$. Then by the loop construction in the proof of Lemma \ref{lem:proper} at least one of the source curves $C_V$ would contain a nontrivial cycle of components, contradicting $g=0$. So $Z$ is empty, completing the proof. \end{proof}
\begin{prop}[Identifying the pieces] \label{prop:pieces} We have \[ \int_{\delta^!\prod_{V}\llbracket\mathscr{M}_V\rrbracket} 1 = \prod_{V\in V(\tilde{\Gamma})}N_V. \] \end{prop}
\begin{proof} This is similar to the proof of \cite{Bou1}, Proposition 22. By definition of $\delta$ we have \[ \int_{\delta^!\prod_{V}\llbracket\mathscr{M}_V\rrbracket} 1 = \int_{\prod_{V}\llbracket\mathscr{M}_V\rrbracket} \textup{ev}^\star[\delta], \] where $[\delta]$ is the class of the diagonal $\prod_E D_E$. Since each $D_E$ is a projective line, we have \[ [\delta] = \prod_{E\in E(\tilde{\Gamma})}(\textup{pt}_E \times 1 + 1 \times \textup{pt}_E). \] As before we give $\tilde{\Gamma}$ the structure of a rooted tree by choosing the root vertex to be the vertex $V_{\textup{out}}$ of the unique unbounded leg $E_{\textup{out}}$. For a bounded edge $E$ let $V_{E,+}$ and $V_{E,-}$ be the vertices of $E$ such that $V_{E,+}$ is the parent of $V_{E,-}$.
We will show by dimensional arguments that the only term of \[ \textup{ev}^\star[\delta] = \prod_{E\in E(\tilde{\Gamma})} \left((\textup{ev}_{V_{E,-}})^\star[\textup{pt}_E] + (\textup{ev}_{V_{E,+}})^\star[\textup{pt}_E]\right) \] giving a nonzero contribution after integration over $\prod_{V}\llbracket\mathscr{M}_V\rrbracket$ is $\prod(\textup{ev}_{V_{E,+}})^\star[\textup{pt}_E]$. In other words:
\underline{Claim:} For each compact edge $E$, a term of $\textup{ev}^\star[\delta]$ giving a nonzero contribution after integration over $\prod_{V}\llbracket\mathscr{M}_V\rrbracket$ does not contain a factor $(\textup{ev}_{V_{E,-}})^\star[\textup{pt}_E]$.
Let $E$ be a compact edge with $V_{E,-}$ a vertex of type (II) as in Definition \ref{defi:types}. By Lemma \ref{lem:vdim}, the virtual dimension of $\mathscr{M}_{V_{E,-}}$ is zero. Hence, $(\textup{ev}_{V_{E,-}})^\star[\textup{pt}_E]=0$, since its insertion over $\mathscr{M}_{V_{E,-}}$ defines an enumerative problem of virtual dimension $-1$.
Now consider a compact edge $E$ with $V_{E,-}$ of type (III). Let $E_i, i\in I$ be the edges adjacent to $V_{E,-}$ and different from $E$ (possibly $I=\emptyset$). By Proposition \ref{prop:balancing}, (III), the edges $E_i$ connect $V_{E,-}$ with a vertex $V_i$ of type (II). The terms in $\textup{ev}^\star[\delta]$ containing a factor $(\textup{ev}_{V_{E_i,-}})^\star[\textup{pt}_{E_i}]$ give zero after integration over $\llbracket\mathscr{M}_{V_i}\rrbracket$ by the dimensional argument above. Hence, to give a nonzero contribution, a term of $\textup{ev}^\star[\delta]$ must contain the factor $\prod_{i\in I}(\textup{ev}_{V_{E_i,+}})^\star[\textup{pt}_{E_i}]$. By Lemma \ref{lem:vdim}, the virtual dimension of $\mathscr{M}_{V_{E,-}}$ is $|I|$, so the insertion of $\prod_{i\in I}(\textup{ev}_{V_{E_i,+}})^\star[\textup{pt}_{E_i}]$ in $\mathscr{M}_{V_{E,-}}$ defines an enumerative problem of virtual dimension $0$. Any further insertion would reduce the virtual dimension to $-1$, so a term of $\textup{ev}^\star[\delta]$ giving a nonzero contribution does not contain the factor $(\textup{ev}_{V_{E,-}})^\star[\textup{pt}_E]$.
We will show the claim for compact edges $E$ with $V_{E,-}$ a vertex of type (I) by induction on the height of $V_{E,-}$, that is, the maximal length of chains connecting $V_{E,-}$ with a leaf of $\tilde{\Gamma}$. By Proposition \ref{prop:balancing}, (I), a vertex of type (I) fulfills the ordinary balancing condition. In particular, it must have more than one adjacent edge, hence cannot be a leaf. This shows the set of leaves of $\tilde{\Gamma}$ is contained in $V_{II}(\tilde{\Gamma})\cup V_{III}(\tilde{\Gamma})$. Thus we have already shown that the claim is true for compact edges $E$ with $V_{E,-}$ of height $0$. This is the base case. For the induction step assume that the claim is true for compact edges $E$ with $V_{E,-}$ of height $\leq k$ for some $k\in\mathbb{N}$ and consider a compact edge $E$ with $V_{E,-}$ of height $k+1$. Assume that $V_{E,+}$ is of type (I), since otherwise the claim is true by the above arguments. Let $E_i,i\in I$ be the edges connecting $V_{E,-}$ with its children. By Lemma \ref{lem:vdim}, the virtual dimension of $\mathscr{M}_{V_{E,-}}$ is $|I|$. By the induction hypothesis, a term of $\textup{ev}^\star[\delta]$ giving a nonzero contribution must contain the factor $\prod_{i\in I}(\textup{ev}_{V_{E_i,+}})^\star[\textup{pt}_{E_i}]$. Inserting this factor over $\mathscr{M}_{V_{E,-}}$ gives an enumerative problem of virtual dimension $0$. Again, for dimensional reasons, a term of $\textup{ev}^\star[\delta]$ giving a nonzero contribution cannot contain the factor $(\textup{ev}_{V_{E,-}})^\star[\textup{pt}_E]$, hence it must contain the factor $(\textup{ev}_{V_{E,+}})^\star[\textup{pt}_E]$. This proves the claim. Now \[ \int_{\prod_{V\in V(\tilde{\Gamma})}\llbracket\mathscr{M}_V\rrbracket} \prod_{E\in E(\tilde{\Gamma})}(\textup{ev}_{V_{E,+}})^\star[\textup{pt}_E] = \prod_{V\in V(\tilde{\Gamma})} \int_{\llbracket\mathscr{M}_V\rrbracket}(\textup{ev}_{E\rightarrow V})^\star [\textup{pt}] = \prod_{V\in V(\tilde{\Gamma})}N_V, \] completing the proof. \end{proof}
\subsection{The degeneration formula}
Combining the decomposition formula and the gluing formula, we obtain the \textit{degeneration formula}, expressing $N_\beta$ in terms of logarithmic Gromov-Witten invariants $N_V$ labeled by vertices of tropical curves.
\begin{prop}[Degeneration formula] \label{prop:deg}
\[ N_\beta = \sum_{\tilde{h}\in\tilde{\mathfrak{H}}_\beta} \frac{1}{|\textup{Aut}(\tilde{h})|} \cdot \prod_{E\in E(\tilde{\Gamma})}w_E \cdot \prod_{V\in V(\tilde{\Gamma})} N_V. \] \end{prop}
\begin{proof} Since the virtual dimension of $\mathscr{M}_\beta$ is zero, integration (i.e., proper pushforward to a point) of the decomposition formula (Proposition \ref{prop:dec}) gives
\[ N_\beta = \sum_{\tilde{h}\in\tilde{\mathfrak{H}}_\beta} \frac{l_{\tilde{\Gamma}}}{|\textup{Aut}(\tilde{h})|}\int_{\llbracket\mathscr{M}_{\tilde{h}}\rrbracket} 1, \] with $l_{\tilde{\Gamma}}=m_\tau$ the multiplicity appearing in the decomposition formula. Using Propositions \ref{prop:loop} and \ref{prop:pieces}, the factors $l_{\tilde{\Gamma}}$ cancel and we get the above formula. \end{proof}
As mentioned earlier, summation over balanced tropical curves in $\mathfrak{H}_\beta$ will give a more symmetric version of the above formula:
\begin{defi} \label{defi:Ntor} Let $h : \Gamma \rightarrow B$ be a tropical curve in $\mathfrak{H}_\beta$ and let $V$ be a vertex of $\Gamma$. Then the image of $V$ under the map from Construction \ref{con:tilde} is a vertex of $\tilde{\Gamma}$ of type (I) or (III). Let $\textbf{m}$ and $\textbf{w}$ be as in the respective case of Proposition \ref{prop:N} and define $ N_V^{\textup{tor}} := N_{\textbf{m}}(\textbf{w}). $ Note that $N_V^{\textup{tor}}=N_V$ for vertices of type (I). \end{defi}
\begin{defi} \label{defi:Nh} For a tropical curve $h : \Gamma \rightarrow B$ in $\mathfrak{H}_d$ for some $d$ define
\[ N_h := \left(\frac{1}{|\textup{Aut}(h)|} \cdot \prod_{E\in E(\Gamma)}w_E\cdot \prod_{E\in L_\Delta(\Gamma)}\frac{(-1)^{w_E-1}}{w_E}\cdot\prod_{V\in V(\Gamma)} N_V^{\textup{tor}}\right), \] where $L_\Delta(\Gamma)$ is the set of bounded legs (see Definition \ref{defi:tropical}). \end{defi}
\begin{thm}[Symmetric version of the degeneration formula] \label{thm:degmax} \[ N_\beta = \sum_{h\in\mathfrak{H}_\beta} N_h. \] \end{thm}
\begin{proof} Using Propositions \ref{prop:deg} and \ref{prop:N}, we have \begin{eqnarray*}
N_\beta &=& \sum_{\tilde{h}\in\tilde{\mathfrak{H}}_\beta}\left( \frac{1}{|\textup{Aut}(\tilde{h})|} \cdot \prod_{E\in E(\tilde{\Gamma})} w_E \cdot \prod_{V\in V_I(\tilde{\Gamma})}N_V^{\textup{tor}} \cdot \prod_{V\in V_{II}(\tilde{\Gamma})}\frac{(-1)^{w_{E_V}-1}}{w_{E_V}^2}\right. \\
&& \textup{ } \hspace{3cm} \left.\cdot \prod_{V\in V_{III}(\tilde{\Gamma})}\left(\sum_{\textbf{w}_{V,+}} N_V^{\textup{tor}} \frac{1}{|\textup{Aut}(\textbf{w}_{V,+})|}\prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}}\right)\right). \end{eqnarray*} Canceling the $w_{E_V}$ for vertices of type (II) in the first product against the ones in the denominator of the third product and factoring out the second sum we get \begin{eqnarray*}
N_\beta &=& \sum_{\tilde{h}\in\tilde{\mathfrak{H}}_\beta}\sum_{(\textbf{w}_{V,+})_{V\in V_{III}(\tilde{\Gamma})}} \left(\frac{1}{|\textup{Aut}(\tilde{h})||\textup{Aut}(\textbf{w}_{V,+})|} \cdot \prod_{E\in E(\tilde{\Gamma})\setminus\cup_{V\in V_{II}(\tilde{\Gamma})}\{E_V\}}w_E \right. \\ && \textup{ } \hspace{0cm} \left. \cdot \prod_{V\in V_I(\tilde{\Gamma})\cup V_{III}(\tilde{\Gamma})} N_V^{\textup{tor}} \cdot \prod_{V\in V_{II}(\tilde{\Gamma})}\frac{(-1)^{w_{E_V}-1}}{w_{E_V}}\cdot \prod_{V\in V_{III}(\tilde{\Gamma})}\prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}}\right). \end{eqnarray*} Now by the construction of the map $\mathfrak{H}_d \rightarrow \tilde{\mathfrak{H}}_d$ in Construction \ref{con:tilde}, the two summations can be replaced by a summation over $\mathfrak{H}_d$. Note that for $\tilde{h}\in\tilde{\mathfrak{H}}_d$ we have
\[ \sum_{h\mapsto\tilde{h}}\frac{1}{|\textup{Aut}(h)|} = \frac{1}{|\textup{Aut}(\tilde{h})|}\sum_{(\textbf{w}_{V,+})_{V\in V_{III}(\tilde{\Gamma})}} \frac{1}{|\textup{Aut}(\textbf{w}_{V,+})|}, \]
where the sum is over all $h\in\mathfrak{H}_d$ giving $\tilde{h}$ via the map from Construction \ref{con:tilde}. This can be seen by multiplying both sides with $|\text{Aut}(\tilde{h})|$.
Moreover, note that $V(\Gamma)=V_I(\tilde{\Gamma})\cup V_{III}(\tilde{\Gamma})$ and $E(\Gamma)=E(\tilde{\Gamma})\setminus\cup_{V\in V_{II}(\tilde{\Gamma})}\{E_V\}$, where, for a vertex $V$ of type (II), $E_V$ is the unique edge containing the vertex $V$. Then \begin{eqnarray*}
N_\beta &=& \sum_{h\in\mathfrak{H}_\beta}\left(\frac{1}{|\textup{Aut}(h)|} \cdot \prod_{E\in E(\Gamma)} w_E \cdot \prod_{V\in V(\Gamma)} N_V^{\textup{tor}} \right. \\ && \textup{ } \hspace{2cm} \left. \cdot \prod_{V\in V_{II}(\tilde{\Gamma})}\frac{(-1)^{w_{E_V}-1}}{w_{E_V}}\cdot \prod_{V\in V_{III}(\tilde{\Gamma})}\prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}} \right). \end{eqnarray*} Using that \[ \prod_{E\in L_\Delta(\Gamma)} \frac{(-1)^{w_E-1}}{w_E} = \prod_{V\in V_{II}(\tilde{\Gamma})}\frac{(-1)^{w_{E_V}-1}}{w_{E_V}}\cdot \prod_{V\in V_{III}(\tilde{\Gamma})}\prod_{i=1}^{l_V}\frac{(-1)^{w_{V,i}-1}}{w_{V,i}} \] completes the proof. \end{proof}
\begin{cor} \[ N_d = \sum_{h\in\mathfrak{H}_d} N_h. \] \end{cor}
\begin{proof} This follows from Theorem \ref{thm:degmax} and $\mathfrak{H}_d = \coprod_{\substack{\underline{\beta}\in H_2^+(X,\mathbb{Z}) \\ D\cdot\underline{\beta}=dw_{\text{out}}}} \mathfrak{H}_\beta$. \end{proof}
\begin{defi} \label{defi:mult}
Let $h : \Gamma \rightarrow B$ be a tropical curve. For a trivalent vertex $V\in V(\Gamma)$ define $m_V=\lvert u_{(V,E_1)}\wedge u_{(V,E_2)}\rvert=\lvert\text{det}(u_{(V,E_1)}|u_{(V,E_2)})\rvert$, where $E_1,E_2$ are any two edges adjacent to $V$. For a vertex $V\in V(\Gamma)$ of valency $\nu_V>3$ let $h_V$ be the one-vertex tropical curve describing $h$ locally at $V$ and let $h'_V$ be a deformation of $h_V$ to a trivalent tropical curve. It has $\nu_V-2$ vertices. Define $m_V=\prod_{V'\in V(h'_V)}m_{V'}$. For a bounded leg $E\in L_\Delta(\Gamma)$ define $m_E=(-1)^{w_E+1}/w_E^2$. Then define the \textit{multiplicity} of $h$ to be
\[ \text{Mult}(h) = \frac{1}{|\text{Aut}(h)|} \cdot \prod_V m_V \cdot \prod_{E\in L_\Delta(\Gamma)} m_E. \] \end{defi}
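Since $\textup{Mult}(h)$ is a finite product of elementary local factors, it can be evaluated directly from the combinatorial data of a tropical curve. The following sketch is illustrative only; it assumes that $u_{(V,E)}$ denotes the weighted integral direction vector of $E$ at $V$, and the toy data at the end are made up.
\begin{verbatim}
# Mult(h) = 1/|Aut(h)| * prod_V m_V * prod_{bounded legs E} m_E
from fractions import Fraction

def mult_vertex(u1, u2):
    """m_V = |det(u1 | u2)| for two of the vectors u_{(V,E)} at a
    trivalent vertex (by balancing, any two give the same value)."""
    return abs(u1[0] * u2[1] - u1[1] * u2[0])

def mult_leg(w):
    """m_E = (-1)^(w+1) / w^2 for a bounded leg of weight w."""
    return Fraction((-1) ** (w + 1), w ** 2)

def mult_curve(vertex_dirs, leg_weights, aut_order):
    m = Fraction(1, aut_order)
    for u1, u2 in vertex_dirs:             # one pair per trivalent vertex
        m *= mult_vertex(u1, u2)
    for w in leg_weights:                  # bounded legs
        m *= mult_leg(w)
    return m

# toy data: one trivalent vertex with u-vectors (1,0), (0,1),
# two bounded legs of weight 1, trivial automorphism group
print(mult_curve([((1, 0), (0, 1))], [1, 1], 1))   # -> 1
\end{verbatim}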
\begin{prop} For a tropical curve $h : \Gamma \rightarrow B$ in $\mathfrak{H}_d$ we have \[ N_h = \textup{Mult}(h) \] \end{prop}
\begin{proof} By the tropical correspondence theorem with point conditions on toric divisors (\cite{GPS}, Theorem 3.4) we have $m_V = \prod_{E\rightarrow V}w_E \cdot N_V^{\text{tor}}$. The product is over all edges and bounded legs of $\Gamma$ pointing towards $V$ with respect to the orientation of $\Gamma$ such that all edges point towards the root vertex $V_{\text{out}}$. Then $\prod_{V\in V(\Gamma)}m_V = \prod_{E\in E(\Gamma)\cup L_\Delta(\Gamma)} w_E \cdot \prod_{V\in V(\Gamma)} N_V^{\text{tor}}$, as each edge and each bounded leg points towards exactly one vertex. Plugging this and $m_E=(-1)^{w_E+1}/w_E^2$ into the definition of $\text{Mult}(h)$, the extra factor $w_E$ for each bounded leg combines with $m_E$ to give $(-1)^{w_E-1}/w_E$, and we obtain $N_h$. \end{proof}
Together with Theorem \ref{thm:degmax} this gives the tropical correspondence theorem (Theorem \ref{thm:trop}).
\subsection{Invariants with prescribed degree splitting}
\begin{defi} As in \S\ref{S:degdiv} fix a cyclic labelling of $D_t^0=D_1+\ldots+D_k$. For $[d_1,\ldots,d_k] \in \mathbb{N}^k$ define \[ N_{[d_1,\ldots,d_k]} = \sum_{h\mapsto[d_1,\ldots,d_k]} N_h, \] where the sum is over all $h \in \mathfrak{H}_d$ with degree splitting $[d_1,\ldots,d_k]$ (Definition \ref{defi:splitting}). \end{defi}
\begin{rem} Barrott and Nabijou \cite{BN} define invariants with prescribed degree splittings by looking at the family $\mathfrak{X}_{t\neq 0} \rightarrow \mathbb{A}^1$ only degenerating the divisor and using torus localization. We conjecture that these coincide with the invariants defined above. This question will be investigated in future work. \end{rem}
\section{Scattering calculations} \label{S:scattering}
In this section we recall the notions of wall structures and scattering diagrams from \cite{GS11}, restricting to the $2$-dimensional case with trivial gluing data. We then explain how the dual intersection complex $(B,\mathscr{P},\varphi)$ of the toric degeneration $(\mathfrak{X},\mathfrak{D})\rightarrow\mathbb{A}^1$ of $(X,D)$ defines a consistent wall structure $\mathscr{S}_\infty$. For toric varieties the correspondence between scattering diagrams (wall structures with one vertex) and logarithmic Gromov-Witten invariants was shown in \cite{GPS}. We use the relation between scattering diagrams and tropical curves from \cite{GPS}, Theorem 2.8, (see Lemma \ref{lem:2.8}) to extend this correspondence to our non-toric case.
\subsection{Scattering diagrams} \label{S:scatteringdiag}
Let $M\simeq\mathbb{Z}^2$ be a lattice and write $M_{\mathbb{R}}=M\otimes_{\mathbb{Z}}\mathbb{R}$. Let $\Sigma$ be a fan in $M_{\mathbb{R}}$ and let $\varphi$ be an integral strictly convex piecewise affine function on $\Sigma$ with $\varphi(0)=0$. Note that the rays of $\Sigma$ form the corner locus of $\varphi$. Let $P_\varphi$ be the monoid of integral points in the upper convex hull of $\varphi$,
\[ P_\varphi = \{p=(\overline{p},h)\in M\oplus\mathbb{Z} \ | \ h \geq \varphi(\overline{p})\}. \] Write $t:=z^{(0,1)}$ and let $R_\varphi$ be the $\mathbb{C}\llbracket t\rrbracket$-algebra obtained by completion of $\mathbb{C}[P_\varphi]$ with respect to $(t)$, \[ R_\varphi = \varprojlim\mathbb{C}[P_\varphi]/(t^k). \]
\begin{defi} A \textit{ray} for $\varphi$ is a half-line $\mathfrak{d} = \mathbb{R}_{\geq 0}\cdot m_{\mathfrak{d}} \subseteq M_{\mathbb{R}}$, with $m_{\mathfrak{d}}\in M$ primitive, together with an element $f_{\mathfrak{d}}\in R_\varphi$ such that \begin{compactenum}[(1)] \item each exponent $p=(\overline{p},h)$ in $f_{\mathfrak{d}}$ satisfies $\overline{p}\in\mathfrak{d}$ or $-\overline{p}\in\mathfrak{d}$. In the first case the ray is called \textit{incoming}, in the latter it is called \textit{outgoing}; \item if $m_{\mathfrak{d}}$ is a ray generator of $\Sigma$, then $f_{\mathfrak{d}} \equiv 1 \textup{ mod }(z^{m_{\mathfrak{d}}})$; \item if $m_{\mathfrak{d}}$ is not a ray generator of $\Sigma$, then $f_{\mathfrak{d}} \equiv 1 \textup{ mod }(z^{m_{\mathfrak{d}}}t)$. \end{compactenum} A \textit{scattering diagram} for $\varphi$ is a set $\mathfrak{D}$ of rays for $\varphi$ such that for every power $k>0$ there are only a finite number of rays $(\mathfrak{d},f_{\mathfrak{d}})\in\mathfrak{D}$ with $f_{\mathfrak{d}} \not\equiv 1 \textup{ mod }(t^k)$. \end{defi}
Let $\mathfrak{D}$ be a scattering diagram for $\varphi$. Let $\gamma : [0,1] \rightarrow M_{\mathbb{R}}$ be a closed immersion not meeting the origin and with endpoints not contained in any ray of $\mathfrak{D}$. Then, for each power $k>0$, we can find numbers $0<r_1\leq r_2\leq\ldots\leq r_s<1$ and rays $\mathfrak{d}_i=(\mathbb{R}_{\geq 0}m_i,f_i)\in\mathfrak{D}$ with $f_i \not\equiv 1 \textup{ mod }(t^k)$ such that (1) $\gamma(r_i)\in\mathfrak{d}_i$, (2) $\mathfrak{d}_i\neq\mathfrak{d}_j$ if $r_i=r_j$ and $i\neq j$, and (3) $s$ is taken as large as possible.
For each ray $\rho\in\Sigma^{[1]}$ write $f_{\rho}=1+z^{(m_\rho,\varphi(m_\rho))}$ for $m_\rho\in M$ the primitive generator of $\rho$, and define \[ \tilde{R}_\varphi^k = \left(R_\varphi/(t^{k+1})\right)_{\prod_{\rho}f_\rho}. \] For each $i$, define a $\mathbb{C}\llbracket t\rrbracket$-algebra automorphism of $\tilde{R}_\varphi^k$ by $\theta_{\mathfrak{d}_i}=\text{exp}(\text{log}(f_i)\partial_{n_i})$ for $\partial_n(z^p):=\braket{n,\overline{p}}z^p$, i.e., \[ \theta_{\mathfrak{d}_i}^k(z^p) = f_i^{-\braket{n_i,\overline{p}}}z^p, \] where $n_i\in N=\text{Hom}(M,\mathbb{Z})$ is the unique primitive vector satisfying $\braket{n_i,m_i} = 0$ and $\braket{n_i,\gamma'(r_i)} > 0$. Define \[ \theta_{\gamma,\mathfrak{D}}^k = \theta^k_{\mathfrak{d}_s} \circ \ldots \circ \theta^k_{\mathfrak{d}_1}. \] If $r_i=r_j$, then $\theta_{\mathfrak{d}_i}$ and $\theta_{\mathfrak{d}_j}$ commute. Hence, $\theta_{\gamma,\mathfrak{D}}^k$ is well-defined. Moreover, define \[ \theta_{\gamma,\mathfrak{D}} = \lim_{k\rightarrow\infty}\theta_{\gamma,\mathfrak{D}}^k. \]
\begin{defi} A scattering diagram $\mathfrak{D}$ is \textit{consistent to order $k$} if, for all $\gamma$ such that $\theta_{\gamma,\mathfrak{D}}^k$ is defined, \[ \theta_{\gamma,\mathfrak{D}}^k \equiv 1 \textup{ mod }(t^{k+1}). \] It is \textit{consistent to any order}, or simply \textit{consistent}, if $\theta_{\gamma,\mathfrak{D}} = 1$. \end{defi}
\begin{prop}[\cite{GPS}, Theorem 1.4] \label{prop:scatt} Let $\mathfrak{D}$ be a scattering diagram such that $f_{\mathfrak{d}} \equiv 1 \text{ mod } (t)$ for all $\mathfrak{d}\in\mathfrak{D}$. Then there exists a consistent scattering diagram $\mathfrak{D}_\infty$ containing $\mathfrak{D}$ such that $\mathfrak{D}_\infty\setminus\mathfrak{D}$ consists only of outgoing rays. \end{prop}
\begin{proof} The proof is constructive, so we will give it here.
Take $\mathfrak{D}_0=\mathfrak{D}$. We will show inductively that there exists a scattering diagram $\mathfrak{D}_k$ containing $\mathfrak{D}_{k-1}$ that is consistent to order $k$. Let $\mathfrak{D}'_{k-1}$ consist of those rays $\mathfrak{d}$ in $\mathfrak{D}_{k-1}$ with $f_{\mathfrak{d}} \not\equiv 1 \text{ mod } (t^{k+1})$. Let $\gamma$ be a closed simple loop around the origin. Then $\theta_{\gamma,\mathfrak{D}_{k-1}} \equiv \theta_{\gamma,\mathfrak{D}'_{k-1}} \text{ mod } (t^{k+1})$. By the induction hypothesis this can be uniquely written, modulo $(t^{k+1})$, as \[ \theta_{\gamma,\mathfrak{D}'_{k-1}} = \text{exp}\left(\sum_{i=1}^s c_iz^{m_i}\partial_{n_i}\right) \] with $m_i\in M\setminus\{0\}$, $n_i\in m_i^\bot$ primitive and $c_i\in (t^k)$. Define
\[ \mathfrak{D}_k = \mathfrak{D}_{k-1} \cup \left\{(\mathbb{R}_{\geq 0}m_i,1\pm c_iz^{m_i}) \ | \ i=1,\ldots,s\right\}, \] with the sign in each new ray chosen such that its contribution to $\theta_{\gamma,\mathfrak{D}_k}$ cancels $\text{exp}(c_iz^{m_i}\partial_{n_i})$ modulo $(t^{k+1})$. The new rays thus exactly cancel the contributions to $\theta_{\gamma,\mathfrak{D}_k}$ coming from $\mathfrak{D}_{k-1}$, so $\theta_{\gamma,\mathfrak{D}_k} \equiv 1 \text{ mod }(t^{k+1})$.
Then take $\mathfrak{D}_\infty$ to be the non-disjoint union of the $\mathfrak{D}_k$ for all $k\in\mathbb{N}$. The diagram $\mathfrak{D}_\infty$ will usually have infinitely many rays. \end{proof}
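The order-by-order procedure is easy to run in examples. In the standard first example, $\mathfrak{D}$ consists of two lines with attached functions $1+tx$ and $1+ty$, and $\mathfrak{D}_\infty\setminus\mathfrak{D}$ consists of the single outgoing ray $\mathbb{R}_{\geq 0}(1,1)$ with attached function $1+t^2xy$. The following sketch is illustrative only; it works in the convention of \cite{GPS} with $\varphi\equiv 0$ and explicit $t$-orders, uses one fixed choice of signs for the normals $n_i$ (which may differ from the conventions above), and assumes the \texttt{sympy} library. It checks consistency of this diagram modulo $t^3$ by composing the wall-crossing automorphisms along a counterclockwise loop.
\begin{verbatim}
# Consistency check mod t^3 for the diagram
#   { line in direction (1,0) with 1 + t*x,
#     line in direction (0,1) with 1 + t*y,
#     outgoing ray in direction (1,1) with 1 + t^2*x*y }.
import sympy as sp

x, y, t = sp.symbols('x y t')

def cross_wall(expr, n, f):
    """Apply the wall-crossing automorphism z^p -> z^p * f^<n,p>,
    i.e. x -> x*f^n[0], y -> y*f^n[1], to expr."""
    return expr.subs({x: x * f**n[0], y: y * f**n[1]}, simultaneous=True)

# Walls crossed by a large counterclockwise loop starting just below the
# positive x-axis; n is the primitive normal with <n, gamma'> > 0 at the
# crossing, f the attached function.
crossings = [
    ((0, 1),  1 + t * x),         # positive x-axis
    ((-1, 1), 1 + t**2 * x * y),  # outgoing ray in direction (1,1)
    ((-1, 0), 1 + t * y),         # positive y-axis
    ((0, -1), 1 + t * x),         # negative x-axis
    ((1, 0),  1 + t * y),         # negative y-axis
]

for generator in (x, y):
    expr = generator
    for n, f in crossings:
        expr = cross_wall(expr, n, f)
    # the full loop automorphism fixes the generator modulo t^3
    difference = sp.series(expr - generator, t, 0, 3).removeO()
    assert sp.simplify(difference) == 0

print("theta_gamma = id mod t^3: consistent")
\end{verbatim}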
\begin{defi} Two scattering diagrams $\mathfrak{D}$, $\mathfrak{D}'$ are \textit{equivalent} if $\theta_{\gamma,\mathfrak{D}}=\theta_{\gamma,\mathfrak{D}'}$ for any closed immersion $\gamma$ for which both sides are defined. \end{defi}
\begin{defi} A scattering diagram $\mathfrak{D}$ is called \textit{minimal} if \begin{compactenum}[(1)] \item any two rays $\mathfrak{d},\mathfrak{d}'$ in $\mathfrak{D}$ have distinct support, i.e., $m_{\mathfrak{d}}\neq m_{\mathfrak{d}'}$; \item it contains no \textit{trivial} ray, i.e., with $f_{\mathfrak{d}}=1$. \end{compactenum} \end{defi}
\begin{rem} Every scattering diagram $\mathfrak{D}$ is equivalent to a unique minimal scattering diagram. In fact, if $\mathfrak{d},\mathfrak{d}'$ have the same support, then we can replace these two rays with a single ray with the same support and attached function $f_{\mathfrak{d}}\cdot f_{\mathfrak{d}'}$. Moreover, we can remove any trivial ray without affecting $\theta_{\gamma,\mathfrak{D}}$. \end{rem}
\subsection{Scattering diagrams and toric invariants}
In \cite{GPS}, Theorem 2.4, a bijective correspondence between certain scattering diagrams and tropical curves is established, leading to an enumerative correspondence (\cite{GPS}, Theorem 2.8). Combining this result with the tropical correspondence theorem for torically transverse stable log maps with point conditions on the toric boundary (\cite{GPS}, Theorems 3.4, 4.4), we get the following.
\begin{lem} \label{lem:2.8} Let $\textbf{m}=(m_1,\ldots,m_n)$ be an $n$-tuple of (not necessarily distinct) primitive vectors of $M$. Let $\Sigma$ be a fan in $M_{\mathbb{R}}$ and let $\varphi$ be an integral strictly convex piecewise affine function on $\Sigma$ such that $\varphi(0)=0$. Let $\mathfrak{D}$ be a scattering diagram for $\varphi$ consisting of a number of lines\footnote{This means that the rays in $\mathfrak{D}$ come in pairs, one incoming and one outgoing, with the same attached function.}, one in each direction $m_i$ and with attached function $f_i\in\mathbb{C}[z^{(-m_i,0)}]\subseteq P_\varphi$. Write the logarithm of $f_i$ as \[ \textup{log }f_i = \sum_{w=1}^\infty a_{iw}z^{(-wm_i,0)}, \quad a_{iw}\in\mathbb{C}. \] Let $\mathfrak{D}_\infty$ be the associated minimal consistent scattering diagram and let $\mathfrak{d}\in \mathfrak{D}_\infty\setminus\mathfrak{D}$ be a ray in direction $m_{\mathfrak{d}}$ with attached function $f_{\mathfrak{d}}$. Then
\[ \textup{log }f_{\mathfrak{d}} = \sum_{w=1}^\infty\sum_{\textbf{w}}w\frac{N_{\textbf{m}}(\textup{\textbf{w}})}{|\textup{Aut}(\textup{\textbf{w}})|} \left(\prod_{\substack{1\leq i\leq n\\ 1\leq j\leq l_i}} a_{iw_{ij}}\right) z^{(-wm_{\mathfrak{d}},0)}, \] where the sum is over all $n$-tuples of weight vectors $\textbf{w}=(\textbf{w}_1,\ldots,\textbf{w}_n)$ satisfying
\[ \sum_{i=1}^n |\textbf{w}_i|m_i = wm_{\mathfrak{d}}. \] Here $N_{\textbf{m}}(\textbf{w})$ is the toric logarithmic Gromov-Witten invariant defined in \S\ref{S:toricinv} and $\text{Aut}(\textbf{w})$ is the subgroup of the permutation group $S_n$ stabilizing $(w_1,\ldots,w_n)$.
Moreover, let $m\in\mathbb{Q}_{>0}m_1+\ldots+\mathbb{Q}_{>0}m_n$ be a primitive vector. If there is no ray $\mathfrak{d}\in\mathfrak{D}_\infty$ in direction $m$, then $N_{\textbf{m}}(\textbf{w})=0$ for all $\textbf{w}$ satisfying $\sum_{i=1}^n |\textbf{w}_i|m_i = wm$ for some $w\in\mathbb{Z}_{>0}$. \end{lem}
\begin{rem} Note that \cite{GPS} deals with the case $\varphi\equiv 0$, where the $t$-order of an element $z^{(\overline{p},h)}$ is simply given by $h$. In our case, the $t$-order is $\varphi(-\overline{p})+h\geq 0$: \[ z^{(\overline{p},h)} = \left(z^{(-\overline{p},\varphi(-\overline{p}))}\right)^{-1} t^{\varphi(-\overline{p})+h}. \] The formula in \cite{GPS}, Theorem 2.8, contains some explicit $t$-factors. These are not visible in Lemma \ref{lem:2.8} due to this different notion of $t$-order. \end{rem}
\subsection{Wall structures}
Let $(B,\mathscr{P},\varphi)$ be a $2$-dimensional polarized polyhedral affine manifold. Note that for each $x\in B\setminus\Delta$ (where $\Delta$ is the singular locus of $B$), $\varphi$ defines an integral strictly convex piecewise affine function \[ \varphi_x : \Lambda_{B,x}\otimes_{\mathbb{Z}}\mathbb{R} \simeq M_{\mathbb{R}} \rightarrow \mathbb{R} \] on $\Sigma_x$, the fan describing $(B,\mathscr{P})$ locally at $x$. If $\tau_x\in\mathscr{P}$ is the smallest cell containing $x$, then this is given by
\[ \Sigma_x = \{K_{\tau_x}\sigma \ | \ \tau_x\subseteq\sigma\in\mathscr{P}\}, \] where $K_{\tau_x}\sigma$ is the cone generated by $\sigma$ relative to $\tau_x$, i.e.,
\[ K_{\tau_x}\sigma = \mathbb{R}_{\geq 0}(\sigma-\tau_x) = \{m \in M_{\mathbb{R}} \ | \ \exists m_0\in\tau_x, m_1\in\sigma,\lambda\in\mathbb{R}_{\geq 0} : m=\lambda(m_1-m_0)\}. \] As in \S\ref{S:scatteringdiag} this defines a monoid by the integral points in the upper convex hull of $\varphi_x$, \begin{equation} \label{eq:Px}
P_x := P_{\varphi_x} = \{p=(\overline{p},h)\in\Lambda_{B,x}\oplus\mathbb{Z} \ | \ h\geq\varphi_x(\overline{p})\}. \end{equation} Note that $\textup{Spec }\mathbb{C}[P_x]$ gives a local toric model for the toric degeneration defined by $(B,\mathscr{P},\varphi)$ at a point on the interior of the toric stratum corresponding to $\tau_x$.
\begin{defi} \label{defi:structure} Let $(B,\mathscr{P},\varphi)$ be a $2$-dimensional polarized polyhedral affine manifold such that $(B,\mathscr{P})$ is simple. For $x,x'\in B$ integral points, let $m_{xx'}\in\Lambda_B$ denote the primitive vector pointing from $x$ to $x'$. For a $1$-cell $\rho$ and $x\in\rho\setminus\Delta$ let $v[x]$ be the vertex of the irreducible component of $\rho\setminus\Delta$ containing $x$. This is unique by the construction of the discriminant locus $\Delta$ for polyhedral affine manifolds (\cite{GHS}, Construction 1.1). \begin{compactenum}[(1)] \item A \textit{slab} $\mathfrak{b}$ on $(B,\mathscr{P},\varphi)$ is a $1$-dimensional rational polyhedral subset of a $1$-cell $\rho_{\mathfrak{b}}\in\mathscr{P}^{[1]}$ together with elements $f_{\mathfrak{b},x}\in\mathbb{C}[P_x]$, one for each $x\in\mathfrak{b}\setminus\Delta$, satisfying the following conditions. \begin{compactenum}[(1)] \item $f_{\mathfrak{b},x} \equiv 1 \textup{ mod } (t)$ if $\rho_{\mathfrak{b}}$ does not contain an affine singularity; \item $f_{\mathfrak{b},x} \equiv 1+z^{(m_{v[x]\delta},\varphi(m_{v[x]\delta}))} \textup{ mod } (t)$ if $\rho_{\mathfrak{b}}$ contains an affine singularity $\delta$; \item $f_{\mathfrak{b},x} = z^{(m_{v[x]v[x']},\varphi(m_{v[x]v[x']}))}f_{\mathfrak{b},x'}$ for all $x,x'\in\mathfrak{b}\setminus\Delta$. \end{compactenum} Note that conditions (1) and (2) are compatible with (3). \item A \textit{wall} $\mathfrak{p}$ on $(B,\mathscr{P},\varphi)$ is a $1$-dimensional rational polyhedral subset of a maximal cell $\sigma_{\mathfrak{p}}\in\mathscr{P}^{[2]}$ with $\mathfrak{p}\cap\textup{Int}(\sigma_{\mathfrak{p}})\neq\emptyset$ together with (i) a \textit{base point} \textup{Base}($\mathfrak{p})\in\mathfrak{p}\setminus\partial\mathfrak{p}$, (ii) an \textit{exponent} $p_{\mathfrak{p}} \in \Gamma(\sigma_{\mathfrak{p}},\Lambda \oplus \underline{\mathbb{Z}})$ such that $p_{\mathfrak{p},x}=(\overline{p}_{\mathfrak{p},x},h_{\mathfrak{p},x})\in P_x$ for all $\mathfrak{p}\setminus\Delta$ with $h_{\mathfrak{p},x}>\varphi(\overline{p}_{\mathfrak{p},x})$ for $x\neq\textup{Base}(\mathfrak{p})$, and (iii) $c_{\mathfrak{p}}\in\mathbb{C}$, such that \[ \mathfrak{p} = (\textup{Base}(\mathfrak{p}) - \mathbb{R}_{\geq 0}\overline{p}_{\mathfrak{p}}) \cap \sigma_{\mathfrak{p}}. \] For each $x\in\mathfrak{p}\setminus\Delta$ this defines a function \[ f_{\mathfrak{p},x} = 1+c_{\mathfrak{p}}z^{p_{\mathfrak{p}}} \in \mathbb{C}[P_x]. \]
\item A \textit{wall structure} $\mathscr{S}$ on $(B,\mathscr{P},\varphi)$ is a locally finite set of slabs and walls with a polyhedral decomposition $\mathscr{P}_{\mathscr{S}}$ of its support $|\mathscr{S}|=\cup_{\mathfrak{b}\in\mathscr{S}}\mathfrak{b}$ such that \begin{compactenum}[(1)] \item The map sending a slab $\mathfrak{b}\in\mathscr{S}$ to its underlying $1$-cell of $\mathscr{P}$ is injective;
\item Each closure of a connected component of $B\setminus|\mathscr{S}|$ (\textit{chamber}) is convex and its interior is disjoint from any wall; \item Any wall in $\mathscr{S}$ is a union of elements of $\mathscr{P}_{\mathscr{S}}$; \item Any maximal cell of $\mathscr{P}$ contains only finitely many slabs or walls in $\mathscr{S}$. \end{compactenum} \item A \textit{joint} $\mathfrak{j}$ of a wall structure $\mathscr{S}$ on $(B,\mathscr{P},\varphi)$ is a vertex of $\mathscr{P}_{\mathscr{S}}$. At each joint $\mathfrak{j}$, the wall structure defines a scattering diagram $\mathfrak{D}_{\mathfrak{j}}$ for $\varphi_{\mathfrak{j}}$. \end{compactenum} \end{defi}
\begin{defi} \label{defi:S0} A polarized polyhedral affine manifold $(B,\mathscr{P},\varphi)$ induces an \textit{initial wall structure} $\mathscr{S}_0$ consisting only of slabs as follows. For each $1$-cell $\rho$ containing an affine singularity $\delta$ there is a slab $\mathfrak{b}$ with underlying polyhedral subset $\rho$ and with \[ f_{\mathfrak{b},v} = 1+z^{(m_{v\delta},\varphi(m_{v\delta}))}, \] where $m_{v\delta}\in \Lambda_{B,v}$ is the primitive vector pointing from $v$ to $\delta$. \end{defi}
\begin{defi}[\cite{GS11}, Definition 2.28] A wall structure $\mathscr{S}$ is \textit{consistent (to order $k$) at a joint $\mathfrak{j}$} if the associated scattering diagram $\mathfrak{D}_{\mathfrak{j}}$ is consistent (to order $k$). It is \textit{consistent (to order $k$)} if it is consistent (to order $k$) at any joint. \end{defi}
\begin{defi}[\cite{GS11}, Definition 2.41] Two wall structures $\mathscr{S}$, $\mathscr{S}'$ are \textit{compatible to order $k$} if the following conditions hold. \begin{compactenum}[(1)] \item If $\mathfrak{p}\in\mathscr{S}$ is a wall with $c_{\mathfrak{p}}\neq 0$ and $f_{\mathfrak{p},x} \not\equiv 1 \textup{ mod }(t^{k+1})$ for some $x\in\mathfrak{p}\setminus\Delta$, then $\mathfrak{p}\in\mathscr{S}'$, and vice versa. \item If $x\in \textup{Int}(\mathfrak{b})\cap\textup{Int}(\mathfrak{b}')$ for slabs $\mathfrak{b}\in\mathscr{S},\mathfrak{b}'\in\mathscr{S}'$, then $f_{\mathfrak{b},x} \equiv f_{\mathfrak{b}',x} \textup{ mod }(t^{k+1})$. \end{compactenum} \end{defi}
\begin{prop} \label{prop:Sk} If $(B,\mathscr{P},\varphi)$ is a polarized polyhedral affine manifold with $(B,\mathscr{P})$ simple, then there exists a sequence of wall structures $(\mathscr{S}_k)_{k\in\mathbb{N}}$ such that \begin{compactenum}[(1)] \item $\mathscr{S}_0$ is the initial wall structure defined by $(B,\mathscr{P},\varphi)$; \item $\mathscr{S}_k$ is consistent to order $k$; \item $\mathscr{S}_k$ and $\mathscr{S}_{k+1}$ are compatible to order $k$. \end{compactenum} \end{prop}
\begin{proof} By \cite{GS11}, Remark 1.29, if $(B,\mathscr{P})$ is simple, then the central fiber of the corresponding toric degeneration is locally rigid for any choice of open gluing data. In this case, the existence of a sequence of wall structures as claimed is the main part ({\S}3, {\S}4) of \cite{GS11}. Roughly speaking, the proof goes by induction as follows. To obtain $\mathscr{S}_k$ from $\mathscr{S}_{k-1}$, for each joint $\mathfrak{j}$ of $\mathscr{S}_{k-1}$ we calculate the scattering diagram $\mathfrak{D}_{\mathfrak{j},k}$ consistent to order $k$ from $\mathfrak{D}_{\mathfrak{j},k-1}$ as in Proposition \ref{prop:scatt}. Then we add walls corresponding to these rays to the wall structure $\mathscr{S}_{k-1}$. This may produce new joints or change the scattering diagrams at other joints. However, it is shown in \cite{GS11} that this procedure terminates after finitely many steps and gives a wall structure $\mathscr{S}_k$ consistent to order $k$. \end{proof}
\subsection{Proof of the main theorem}
Let $Q$ be a $2$-dimensional Fano polytope and let $(B,\mathscr{P},\varphi)$ be the dual intersection complex of the corresponding toric degeneration. Let $\mathscr{S}_\infty$ be the consistent wall structure defined by $(B,\mathscr{P},\varphi)$, i.e., the limit of the $\mathscr{S}_k$ in Proposition \ref{prop:Sk}. Let $\sigma_0$ be as in Definition \ref{defi:sigma0}.
\begin{lem} \label{lem:global}
The support $|\mathscr{S}_\infty|$ is disjoint from the interior of $\sigma_0$. \end{lem}
\begin{proof} $(B,\mathscr{P},\varphi)$ defines an initial wall structure $\mathscr{S}_0$ as in Definition \ref{defi:S0}. The joints of $\mathscr{S}_0$ are the vertices of $\mathscr{P}$. Let $v$ be such a vertex. By \cite{GS11}, Proposition 3.9, the walls in $\mathscr{S}_\infty$ with base point $v$ lie in the cone $v + \mathbb{R}_{\leq 0}m_{v\delta} + \mathbb{R}_{\leq 0}m_{v\delta'} \subseteq B$. Here $\delta,\delta'$ are the affine singularities on edges adjacent to $v$ and $m_{v\delta}$ is the primitive integral tangent vector on $B$ pointing from $v$ to $\delta$.
By inductively using \cite{GS11}, Proposition 3.9, all walls in $\mathscr{S}_\infty$ lie in the union of these cones, i.e.,
\[ |\mathscr{S}_\infty| \subseteq \bigcup_{v\in\mathscr{P}^{[0]}} v + \mathbb{R}_{\leq 0}m_{v\delta} + \mathbb{R}_{\leq 0}m_{v\delta'}. \] In particular, there are no walls in $\mathscr{S}_\infty$ supported on the interior of $\sigma_0$. \end{proof}
The unbounded walls in $\mathscr{S}_\infty$ are all parallel in the direction $m_{\textup{out}}\in \Lambda_B$. Let $f_{\textup{out}}$ be the product of all functions $f_{\mathfrak{p}}$ attached to unbounded walls $\mathfrak{p}$ in $\mathscr{S}_\infty$. Then $f_{\text{out}}$ can be regarded as an element of $\mathbb{C}\llbracket x\rrbracket$ for $x:=z^{(-m_{\textup{out}},0)}\in\mathbb{C}[\Lambda_B\oplus\mathbb{Z}]$.
\begin{thm}[Theorem \ref{thm:main}] \label{thm:scattering} \[ \textup{log }f_{\textup{out}} = \sum_{d=1}^\infty d\,w_{\textup{out}} \cdot N_d \cdot x^{d\,w_{\textup{out}}}, \] where $d\,w_{\textup{out}}=D\cdot\underline{\beta}$ for curve classes $\underline{\beta}$ of degree $d$. \end{thm}
In fact, we will prove a more general statement, giving an enumerative meaning for the function attached to any wall in $\mathscr{S}_\infty$. For this we need the following.
\begin{defi}[\cite{GrP2}, Definition 2.2] A \textit{tropical disk} $h : \Gamma \rightarrow B$ is a tropical curve with the choice of univalent vertex $V_\infty$, adjacent to a unique edge $E_\infty$, such that $h$ is balanced for all vertices $V \neq V_\infty$. \end{defi}
\begin{defi} Let $\mathfrak{p}\in\mathscr{S}_\infty$ be a wall and choose $x\in\textup{Int}(\mathfrak{p})$. Define $\mathfrak{H}_{\mathfrak{p},w}$ to be the set of all tropical disks $h : \Gamma \rightarrow B$ with $h(V_\infty)=x$ and $u_{(V_\infty,E_\infty)}=-w\cdot m_{\mathfrak{p}}$. \end{defi}
\begin{defi} For $h\in \mathfrak{H}_{\mathfrak{p},w}$ define, with $N_V^{\textup{tor}}$ as in Definition \ref{defi:Ntor},
\[ N_h = \frac{1}{|\textup{Aut}(h)|}\prod_{E\in E(\Gamma)}w_E\cdot\prod_{V\neq V_{\infty}} N_V^{\textup{tor}} \cdot \prod_{E\in L_\Delta(\Gamma)}\frac{(-1)^{w_E-1}}{w_E}. \] \end{defi}
\begin{rem} Note that the sets $\mathfrak{H}_{\mathfrak{p},w}$ are in bijection for different choices of $x$. Moreover, for an unbounded wall $\mathfrak{p}$ the set $\mathfrak{H}_{\mathfrak{p},w}$ is empty for $w_{\text{out}}\nmid w$, and for each $d$ there is an injective map $\iota:\mathfrak{H}_{\mathfrak{p},dw_{\text{out}}} \hookrightarrow \mathfrak{H}_d$ by removing $V_\infty$ and extending $E_\infty$ to infinity, giving $E_{\text{out}}$. Hence, $N_{\iota(h)}=dw_{\text{out}}N_h$. \end{rem}
\begin{prop} \label{prop:scattering} For a wall $\mathfrak{p}$ of $\mathscr{S}_\infty$ we have \[ \textup{log }f_{\mathfrak{p}} = \sum_{w=1}^\infty\sum_{h\in\mathfrak{H}_{\mathfrak{p},w}} N_h z^{(-wm_{\mathfrak{p}},0)}. \] \end{prop}
\begin{proof} We want to prove the claimed equality by induction, so we need a well-ordered set. For a wall $\mathfrak{p}$ of $\mathscr{S}_\infty$, define a set
\[ \textup{Parents}(\mathfrak{p}) = \{\mathfrak{p}'\in\mathscr{S}_\infty \ | \ \mathfrak{p}\cap\mathfrak{p}'=\textup{Base}(\mathfrak{p}) \neq \textup{Base}(\mathfrak{p}')\}. \] Here $\textup{Base}(\mathfrak{p})$ is the base point of $\mathfrak{p}$ (see Definition \ref{defi:structure}, (2)). Note that $\textup{Base}(\mathfrak{p}')$ is only defined for walls, so we define the condition $\textup{Base}(\mathfrak{p})\neq\textup{Base}(\mathfrak{p}')$ to be always satisfied when $\mathfrak{p}'$ is a slab. Then define inductively \[ \textup{Ancestors}(\mathfrak{p}) = \{\mathfrak{p}\} \cup \bigcup_{\mathfrak{p}'\in\textup{Parents}(\mathfrak{p})} \textup{Ancestors}(\mathfrak{p}'). \]
For each $k$, the set of walls in $\mathscr{S}_k$ is finite and partially ordered by $\mathfrak{p}_1 \leq \mathfrak{p}_2$ if and only if $\mathfrak{p}_1 \in \textup{Ancestors}(\mathfrak{p}_2)$. Hence, this order is well-founded and we can use induction. The set of minimal elements with respect to this ordering is $\{\mathfrak{p}\in\mathscr{S}_\infty \textup{ wall}\ | \ \textup{Base}(\mathfrak{p})\in\mathscr{P}_{\mathscr{S}_0}^{[0]}=\mathscr{P}^{[0]}\}$. For such $\mathfrak{p}$, the set $\textup{Ancestors}(\mathfrak{p})$ consists of $\mathfrak{p}$ and two slabs $\mathfrak{b}_1,\mathfrak{b}_2$. This defines a scattering diagram at the joint $\textup{Base}(\mathfrak{p})$ that is equivalent to the scattering diagram obtained from two lines in the directions $m_1,m_2$ of the slabs $\mathfrak{b}_1,\mathfrak{b}_2$ with attached functions $f_1=1+z^{(-m_1,0)}$ and $f_2=1+z^{(-m_2,0)}$, respectively. Note that \[ \textup{log }f_i = \sum_{w=1}^\infty \frac{(-1)^{w-1}}{w}z^{(-wm_i,0)}. \] Then Lemma \ref{lem:2.8} gives
\[ \textup{log }f_{\mathfrak{p}} = \sum_{w=1}^\infty\sum_{\textbf{w}}w\frac{N_{\textbf{m}}(\textup{\textbf{w}})}{|\textup{Aut}(\textup{\textbf{w}})|} \left(\prod_{\substack{1\leq i\leq n\\ 1\leq j\leq l_i}} \frac{(-1)^{w_{ij}-1}}{w_{ij}}\right) z^{(-wm_{\mathfrak{p}},0)}, \] where the sum is over all $n$-tuples of weight vectors $\textbf{w}=(\textbf{w}_1,\ldots,\textbf{w}_n)$ satisfying
\[ \sum_{i=1}^n |\textbf{w}_i|m_i = wm_{\mathfrak{p}}. \]
Here $N_{\textbf{m}}(\textbf{w})$ is the toric logarithmic Gromov-Witten invariant defined in \S\ref{S:toricinv}. Tropical disks $h$ in $\mathfrak{H}_{\mathfrak{p},w}$ have a $1$-valent vertex $V_\infty$ mapping to the interior of $\mathfrak{p}$ and another vertex $V$ with one compact edge of weight $w$ and several bounded legs in directions $m_1$ or $m_2$ with weights $\textbf{w}_1=(w_{11},\ldots,w_{1l_1})$ and $\textbf{w}_2=(w_{21},\ldots,w_{2l_2})$, respectively, such that $\sum_{i=1}^n |\textbf{w}_i|m_i = wm_{\mathfrak{p}}$. Hence, the set $\mathfrak{H}_{\mathfrak{p},w}$ is in bijection with the set of pairs $\textbf{w}$ as above, with $w_{ij}$ being the weights of the bounded legs of the corresponding tropical disk $h$. Moreover, $|\textup{Aut}(h)|=|\textup{Aut}(\textbf{w})|$ and $N_V^{\textup{tor}}=N_{\textbf{m}}(\textbf{w})$ by definition. Hence, the summation over $\textbf{w}$ can be replaced by a summation over $\mathfrak{H}_{\mathfrak{p},w}$, and we obtain
\[ \textup{log }f_{\mathfrak{p}} = \sum_{w=1}^\infty\sum_{h\in\mathfrak{H}_{\mathfrak{p},w}} w\frac{N_V^{\textup{tor}}}{|\textup{Aut}(h)|}\left(\prod_{E\in L_\Delta(\Gamma)}\frac{(-1)^{w_E-1}}{w_E}\right)z^{(-wm_{\mathfrak{p}},0)}. \] This is precisely the claimed formula for such $\mathfrak{p}$, completing the base case.
For the induction step, let $\mathfrak{p}$ be a wall of $\mathscr{S}_\infty$ and assume the claimed formula holds for all walls $\mathfrak{p}'\in\textup{Ancestors}(\mathfrak{p})\setminus\{\mathfrak{p}\}$. By Lemma \ref{lem:2.8} and using the induction hypothesis,
\[ \textup{log }f_{\mathfrak{p}} = \sum_{w=1}^\infty\sum_{\textbf{w}}\left(w\frac{N_{\textbf{m}}(\textup{\textbf{w}})}{|\textup{Aut}(\textup{\textbf{w}})|} \prod_{\substack{\mathfrak{p}'\in\textup{Parents}(\mathfrak{p})\\ 1\leq j\leq l_{\mathfrak{p}'}}} \sum_{h'\in\mathfrak{H}_{\mathfrak{p}',w_{\mathfrak{p}'j}}} N_{h'} \right) z^{(-wm_{\mathfrak{p}},0)}, \] where the second sum is over all tuples $\textbf{w}=(\textbf{w}_{\mathfrak{p}'})_{\mathfrak{p}'\in\textup{Parents}(\mathfrak{p})}$ of weight vectors $\textbf{w}_{\mathfrak{p}'}=(w_{\mathfrak{p}'1},\ldots,w_{\mathfrak{p}'l_{\mathfrak{p}'}})$ with
\[ \sum_{\mathfrak{p}'\in\textup{Parents}(\mathfrak{p})}|\textbf{w}_{\mathfrak{p}'}|m_{\mathfrak{p}'} = wm_{\mathfrak{p}}. \] Factoring out, we can replace the product over $\textup{Parents}(\mathfrak{p})$ and $1\leq j \leq l_{\mathfrak{p}'}$ and the sum over $h'\in\mathfrak{H}_{\mathfrak{p}',w_{\mathfrak{p}'j}}$ by a sum over tuples $(h')_{\mathfrak{p}',j}:=(h'\in\mathfrak{H}_{\mathfrak{p}',w_{\mathfrak{p}'j}})_{\substack{\mathfrak{p}'\in\textup{Parents}(\mathfrak{p}) \\ 1\leq j \leq l_{\mathfrak{p}'}}}$:
\[ \textup{log }f_{\mathfrak{p}} = \sum_{w=1}^\infty\sum_{\textbf{w}}\left(w\frac{N_{\textbf{m}}(\textup{\textbf{w}})}{|\textup{Aut}(\textup{\textbf{w}})|} \sum_{(h')_{\mathfrak{p}',j}} \prod_{h'\in (h')_{\mathfrak{p}',j}} N_{h'} \right) z^{(-wm_{\mathfrak{p}},0)}. \] Further, we can replace the summations over $\textbf{w}$ and $(h')_{\mathfrak{p}',j}$ by a summation over $\mathfrak{H}_{\mathfrak{p},w}$, since this is precisely the data that determines a tropical disk in $\mathfrak{H}_{\mathfrak{p},w}$. We get the claimed formula by using that, for $h\in\mathfrak{H}_{\mathfrak{p},w}$ determined by $\textbf{w}$ and $(h')_{\mathfrak{p}',j}$, we have
\[ |\text{Aut}(h)| = |\text{Aut}(\textbf{w})| \cdot \prod_{(h')_{\mathfrak{p}',j}} |\text{Aut}(h')|, \] so
\[ N_h = w\frac{N_{\textbf{m}}(\textbf{w})}{|\text{Aut}(\textbf{w})|} \cdot \prod_{h'\in (h')_{\mathfrak{p}',j}} N_{h'}. \] This completes the proof. \end{proof} Now Theorem \ref{thm:scattering} follows by summation over all outgoing walls $\mathfrak{p}$ in $\mathscr{S}_\infty$.
\section{Torsion points} \label{S:torsion}
In this section we consider $(\mathbb{P}^2,E)$. Choose a flex point $O$ on the elliptic curve $E$ and consider the group law on $E$ with $O$ the identity. An \textit{$m$-torsion point} on $E$ is a point $P$ such that $m \cdot P = O$. As a topological group, the elliptic curve is a torus $S^1\times S^1=\mathbb{R}^2/\mathbb{Z}^2$. The $m$-torsion points form a group $\mathbb{Z}_m \times \mathbb{Z}_m$.
\begin{lem} \label{lem:torsion} If $C$ is a rational degree $d$ curve intersecting $E$ in a single point $P$, then $P$ is a $3d$-torsion point. \end{lem}
\begin{proof} Let $C$ be a rational degree $d$ curve intersecting $E$ in a single point $P$ and let $L$ be the tangent line to $E$ at $O$; since $O$ is a flex, $L$ meets $E$ only in $O$, with multiplicity $3$. The cycle $C-dL$ has degree $0$, so it is linearly equivalent to zero, since $\textup{Pic }\mathbb{P}^2 \cong \mathbb{Z}$ by the degree map. Hence its restriction to $E$, which is the cycle $3d(P-O)$, is linearly equivalent to zero on $E$. In other words, $3d\cdot P=O$ in the group law on $E$, i.e., $P$ is a $3d$-torsion point. \end{proof}
Let $T_d\simeq\mathbb{Z}_{3d}\times\mathbb{Z}_{3d}$ be the set of $3d$-torsion points on $E$ and let $\beta_d$ be the class of degree $d$ stable log maps (Definition \ref{defi:beta}). Then we have a decomposition \[ \mathscr{M}(\mathbb{P}^2,\beta_d) = \coprod_{P\in T_d} \mathscr{M}(\mathbb{P}^2,\beta_d)_P, \] where $\mathscr{M}(\mathbb{P}^2,\beta_d)_P$ is the subspace of $\mathscr{M}(\mathbb{P}^2,\beta_d)$ of maps intersecting $E$ in $P$. Let $\llbracket\mathscr{M}(\mathbb{P}^2,\beta_d)_P\rrbracket$ be the restriction of $\llbracket\mathscr{M}(\mathbb{P}^2,\beta_d)\rrbracket$ to $\mathscr{M}(\mathbb{P}^2,\beta_d)_P$ and define \[ N_{d,P} := \int_{\llbracket\mathscr{M}(\mathbb{P}^2,\beta_d)_P\rrbracket}1. \]
\begin{defi} For $P\in\cup_{d\geq 1}T_d$ denote by $k(P)$ the smallest integer $k\geq 1$ such that $P$ is $3k$-torsion. \end{defi}
\begin{lem} $N_{d,P}$ only depends on $P$ through $k(P)$. \end{lem}
\begin{proof} This was shown in \cite{Bou4}, Lemma 1.2, using ideas from \cite{CvGKT2}. The freedom of choice of $O$ and the fact that the monodromy of the family of smooth cubics in $\mathbb{P}^2$ maps surjectively to $SL(2,\mathbb{Z})$ acting on $T_k \simeq \mathbb{Z}_{3k} \times \mathbb{Z}_{3k}$ imply that two points $P,P'$ with $k(P)=k(P')$ are related to each other via a monodromy transformation. Then the deformation invariance of logarithmic Gromov-Witten invariants shows that $N_{d,P}=N_{d,P'}$ for all $d$. \end{proof}
\begin{defi} Write $N_{d,k}$ for $N_{d,P}$ with $P$ such that $k(P)=k$. \end{defi}
Under the toric degeneration $\mathfrak{X} \rightarrow \mathbb{A}^1$ different $3d$-torsion points may map to the same point on the central fiber $X_0$, and even to a $0$-dimensional stratum. However, the limits of the $3$-torsion points all lie on the $1$-dimensional strata. The intersection points with $3d$-torsion correspond to the unbounded walls $\mathfrak{p}$ in $\mathscr{S}_\infty$ with non-zero $t^{3d}$-coefficient of $\textup{log }f_{\mathfrak{p}}$. Their number is exactly $3d$ and they are distributed as the $3d$-torsion points of a circle (see Figure \ref{fig:main} and Figure \ref{fig:sage2}).
This can be explained as follows. The $2$-dimensional torus is a $S^1$-fibration over $S^1$. In the SYZ limit, the $S^1$-fiber shrinks to a point and we only see the $S^1$-base, which is the tropicalization. In this limit, the $\mathbb{Z}_{3k}$-fibers of $T_k\simeq\mathbb{Z}_{3k}\times\mathbb{Z}_{3k}$ are identified and we only see the $\mathbb{Z}_{3k}$ in the base. The toric degeneration of divisors $\mathfrak{D}\rightarrow\mathbb{A}^1$ is an elliptic fibration. It contains a singular fiber that is a cycle of three $\mathbb{P}^1$, i.e., an $I_3$ fiber in Kodaira's classification of singular elliptic fibers. Then the monodromy acting on the first cohomology class of the general fiber is given by $M_1=\left(\begin{smallmatrix}1&3\\ 0&1\end{smallmatrix}\right)$ up to conjugation. Now the action of $M_1$ on $\mathbb{Z}_3\times\mathbb{Z}_3$ is trivial. This means that each $3$-torsion point really defines a section of the family, which will have a limit on the special fiber. Such limit is necessarily on the smooth part of the special fiber, i.e., on a $1$-dimensional toric stratum (see \cite{SS}, Theorem 6.3). The action of $M_1$ on $\mathbb{Z}_6\times\mathbb{Z}_6$ has some fixed points, corresponding to $6$-torsion points on $E$ that define sections with limit on the smooth part of the special fiber, i.e., on $1$-dimensional strata. The other $6$-torsion points are permuted by the action of $M_1$, so they only define multisections with limit on the singular part of the special fiber, i.e., on $0$-dimensional strata.
Now consider the refined degeneration $\tilde{\mathfrak{X}}_d\rightarrow\mathbb{A}^1$. By construction of the refinement, the limits of all $3d$-torsion points lie on $1$-dimensional strata of the central fiber $X:=Y$. Indeed, the central fiber of the elliptic fibration $\tilde{\mathfrak{D}}\rightarrow\mathbb{A}^1$ is a cycle of $3d$ lines, i.e., an $I_{3d}$-fiber. Then the monodromy acting on $\mathbb{Z}_{3k}\times\mathbb{Z}_{3k}$ is given by $\left(\begin{smallmatrix}1&3d\\ 0&1\end{smallmatrix}\right)$ which is the identity for all $k\mid d$.
\begin{defi} Let $s_{k,l}$ be the number of points $P$ on $E$ with $k(P)=k$, fixed by the action of $M_l:=\left(\begin{smallmatrix}1&3l\\ 0&1\end{smallmatrix}\right)$, but not fixed by the action of $M_{l'}$ for all $l'<l$. Note that $s_k:=\sum_{l\mid k}s_{k,l}$ is the number of points $P$ on $E$ with $k(P)=k$ and that $\sum_{k\mid d}s_k=(3d)^2$ is the number of $3d$-torsion points. \end{defi}
\begin{table}[h!]
\begin{tabular}{|l|l|l|l|l|l|} \cline{1-1} $s_{1,1}=9$ \\ \cline{1-2} $s_{2,1}=9$ & $s_{2,2}=18$ \\ \cline{1-3} $s_{3,1}=18$ & & $s_{3,3}=54$ \\ \cline{1-4} $s_{4,1}=18$ & $s_{4,2}=18$ & & $s_{4,4}=72$ \\ \cline{1-5} $s_{5,1}=36$ & & & & $s_{5,5}=180$ \\ \cline{1-6} $s_{6,1}=18$ & $s_{6,2}=36$ & $s_{6,3}=54$ & & & $s_{6,6}=108$ \\ \hline \end{tabular}
\label{tab:s} \caption{The number $s_{k,l}$ of points $P$ on $E$ with $k(P)=k$, fixed by $M_l$, but not fixed by $M_{l'}$ for $l'<l$.} \end{table}
\begin{defi} For a wall $\mathfrak{p}\in\mathscr{S}_\infty$ let $l(\mathfrak{p})$ be the smallest number such that $\textup{log }f_{\mathfrak{p}}$ has non-trivial $t^{3l(\mathfrak{p})}$-coefficient. Let $r_l$ be the number of walls with $l(\mathfrak{p})=l$. \end{defi}
\begin{lem} \label{lem:rl} The number $r_l$ can be defined recursively by \[ r_1 = 3, \quad r_l = 3l - \sum_{l'\mid l,\ l'<l}r_{l'}. \] \end{lem}
\begin{proof} For a wall $\mathfrak{p}\in\mathscr{S}_\infty$, the condition $l(\mathfrak{p})=l$ means that the corresponding toric stratum of $X:=Y$ contains the limits of the points on $E$ that are fixed by the action of $M_l$ but not fixed by $M_{l'}$ for $l'<l$. Since in the SYZ limit we only see the base of the fibration, $r_l$ equals the number of points on a circle $S^1$ that are $3l$-torsion but not $3l'$-torsion for any $l'<l$. The circle has exactly $3l$ points of $3l$-torsion, and each of them is $3l'$-torsion for a unique minimal $l'\mid l$, so these counts satisfy the recursion above. \end{proof}
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|c|c|} \hline $r_1=3$ & $r_2=3$ & $r_3=6$ & $r_4=6$ & $r_5=12$ & $r_6=6$ \\ \hline \end{tabular}
\label{tab:r} \caption{The number $r_l$ of walls $\mathfrak{p}$ with $l(\mathfrak{p})=l$.} \end{table}
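The recursion of Lemma \ref{lem:rl} is easy to check by hand or by machine; the following small Python snippet (independent of the sage package mentioned in \S\ref{S:calc}, and assuming only the recursion as stated) reproduces Table \ref{tab:r}:
\begin{verbatim}
r = {}
for l in range(1, 7):
    # r_l = 3l minus the contributions of the proper divisors of l
    r[l] = 3*l - sum(r[lp] for lp in r if l % lp == 0)
print(r)   # {1: 3, 2: 3, 3: 6, 4: 6, 5: 12, 6: 6}
\end{verbatim}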
Note that $s_{k,l}/r_l$ is the number of points $P$ with $k(P)=k$ and with limit on the stratum corresponding to a particular wall $\mathfrak{p}$ with $l(\mathfrak{p})=l$. A direct consequence of Proposition \ref{prop:scattering} is the following.
\begin{cor}[Theorem \ref{thm:torsion}] \label{cor:torsion} Let $\mathfrak{p}$ be an unbounded wall of order $l$ in $\mathscr{S}_\infty$. Then \[ \textup{log }f_{\mathfrak{p}} = \sum_{d=1}^\infty 3d \left(\sum_{k: l\mid k\mid d} \frac{s_{k,l}}{r_l} N_{d,k}\right) x^{3d}. \] \end{cor}
\begin{defi} Similarly to \S\ref{S:BPS} we can define integers $n_{d,k}$ recursively by \[ n_{d,d} = N_{d,d}, \quad n_{d,k} = N_{d,k} - \sum_{d':\, k\mid d'\mid d,\ d'\neq d} M_{d'}[d/d'] \cdot n_{d',k}. \] \end{defi}
Some of the numbers $n_{d,k}$ have been calculated by Takahashi \cite{Ta1}. Their relation to local BPS numbers is studied in \cite{CvGKT}\cite{CvGKT2}\cite{CvGKT3}.
\begin{rem} \label{rem:combine} Unfortunately we are not able to apply the methods of this section to the refined situation of \S\ref{S:degdiv} in order to calculate the contributions to $N_{[d_1,\ldots,d_m]}$ with prescribed torsion: Let $N_{[d_1,\ldots,d_m],k}$ be the logarithmic Gromov-Witten invariant of stable log maps contributing to $N_{[d_1,\ldots,d_m]}$ and meeting $E$ in a fixed point of order $3k$, and let $n_{[d_1,\ldots,d_m],k}$ be the corresponding log BPS number. Define $l([d_1,\ldots,d_m]):=l(\mathfrak{p})$ for $\mathfrak{p}$ the unbounded wall for $[d_1,\ldots,d_m]$. Then \begin{eqnarray*} \sum_{k=1}^d \frac{s_{k,l}}{r_l} n_{[d_1,\ldots,d_m],k} &=& n_{[d_1,\ldots,d_m]} \quad \text{for } l=l([d_1,\ldots,d_m])\\
\sum_{l([d_1,\ldots,d_m])=l} n_{[d_1,\ldots,d_m],k} &=& n_{d,k} \qquad\quad\ \ \text{for all } l|k \end{eqnarray*} This gives a system of linear equations for the indeterminates $n_{[d_1,\ldots,d_m],k}$. In general the number of equations will be smaller than the number of indeterminates, so there will be no unique solution. However, for $d\leq 3$ we indeed have enough equations to determine the numbers $n_{[d_1,\ldots,d_m],k}$ as we will show in \S\ref{S:calcP2c}. \end{rem}
\section{Higher genus and $q$-refined invariants} \label{S:genus}
For an effective curve class $\underline{\beta}$ of $X$ let $\beta^g$ be the class of $1$-marked stable log maps to $X$ of genus $g$, class $\underline{\beta}$ and maximal tangency with $D$ at a single unspecified point. The moduli space $\mathscr{M}(X,\beta^g)$ of basic stable log maps of class $\beta^g$ has virtual dimension $g$. We can cut this dimension down to zero by inserting a \textit{lambda class}. Let $\pi : \mathcal{C} \rightarrow \mathscr{M}(X,\beta^g)$ be the universal curve, with relative dualizing sheaf $\omega_\pi$. Then $\mathbb{E}=\pi_\star\omega_\pi$ is a rank $g$ vector bundle over $\mathscr{M}(X,\beta^g)$, called the Hodge bundle. The lambda classes are the Chern classes of the Hodge bundle, $\lambda_j=c_j(\mathbb{E})$. We can define higher genus $1$-marked log Gromov-Witten invariants \[ N_\beta^g = \int_{\llbracket\mathscr{M}(X,\beta^g)\rrbracket} (-1)^g\lambda_g \in \mathbb{Q}. \]
\begin{defi} Let $h : \Gamma \rightarrow B$ be a tropical curve. For a trivalent vertex $V$ with multiplicity $m_V$ (Definition \ref{defi:mult}) define, with $q=e^{i\hbar}$, \[ m_V(q) = \frac{1}{i\hbar}\left(q^{m_V/2}-q^{-m_V/2}\right). \] For a vertex with higher valency define $m_V(q) = \prod_{V'\in V(h'_V)} m_{V'}(q)$ with $h'_V$ as in Definition \ref{defi:mult}. For a bounded leg $E$ with weight $w_E$ define \[ m_E(q) = \frac{(-1)^{w_E+1}}{w_E}\cdot \frac{i\hbar}{q^{w_E/2}-q^{-w_E/2}}. \] Then define the \textit{$q$-refined multiplicity} of $h$ to be
\[ m_h(q) = \frac{1}{|\text{Aut}(h)|} \cdot \prod_{V\in V(\Gamma)} m_V(q) \cdot \prod_{E\in L_\Delta(\Gamma)} m_E(q). \] \end{defi}
\begin{thm} We have, with $q=e^{i\hbar}$, \[ \sum_{g\geq 0} N_\beta^g\hbar^{2g} = \sum_{h\in\mathfrak{H}_\beta} m_h(q) \] \end{thm}
\begin{proof} Consider a stable log map in $\mathscr{M}(X,\beta^g)$ and let $h : \Gamma \rightarrow B$ be its tropicalization. In Definition \ref{defi:tropical} we defined the genus of $h$ to be $g_h = g_\Gamma + \sum_V g_V$. Using gluing and vanishing properties of lambda classes, Bousseau showed in \cite{Bou1} that $\Gamma$ is still a tree ($g_\Gamma=0$), i.e., all contributions to $g_h$ come from vertices. Hence, $h$ maps to an element of $\mathfrak{H}_\beta$ by forgetting genera at vertices $g_V$. So we can sum over $\mathfrak{H}_\beta$ but have to consider $q$-refined contributions of vertices. By \cite{Bou1}, Proposition 29, the contribution of a vertex $V$ with classical multiplicity $m_V$ is $m_V(q)$. By \cite{Bou2}, Lemma 5.9, the contribution of a bounded leg $L$ with weight $w_L$ is $m_L(q)$. \end{proof}
\begin{expl} For the tropical curve in Figure \ref{fig:balancing2} we have \begin{eqnarray*} m_h(q) &=& \frac{1}{i\hbar}\left(q^{9/2}-q^{-9/2}\right) \cdot \frac{1}{i\hbar}\left(q^3-q^{-3}\right) \cdot \frac{-1}{2} \cdot \frac{i\hbar}{q^1-q^{-1}} \cdot \left(\frac{i\hbar}{q^{1/2}-q^{-1/2}}\right)^2 \\ &=& -\frac{27}{2} + \frac{999}{16}\hbar^2 - \frac{137781}{1280}\hbar^4 + \ldots \end{eqnarray*} So this contributes $\frac{999}{16}$ to $N_3^1(\mathbb{P}^2)$. \end{expl}
To obtain a higher genus version of Theorem \ref{thm:main}, we have to $q$-refine the slab functions in the initial wall structure $\mathscr{S}_0$. For $q$-refined wall structures it turns out to be more convenient to work with the logarithm of such functions. Define the $q$-refined initial wall structure $\mathscr{S}_0(q)$ to have the same slabs as $\mathscr{S}_0$ but with slab functions $f_{\mathfrak{b},v}=1+z^{(m_{v\delta},0)}$ replaced by $f_{\mathfrak{b},v}(q)$, where \[ \text{log }f_{\mathfrak{b},v}(q) = \sum_{k\geq 1} \frac{(-1)^{k+1}i\hbar}{q^{k/2}-q^{-k/2}}z^{(km_{v\delta},0)}. \] Note that the coefficient of $z^{(km_{v\delta},0)}$ is the $q$-multiplicity of a bounded leg of weight $k$. Let $\mathscr{S}_\infty(q)$ be the consistent $q$-refined wall structure obtained from $\mathscr{S}_0(q)$, let $f_{\text{out}}(q)$ be the product of all functions attached to unbounded walls, and write $x=z^{(-m_{\textup{out}},0)}$.
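As a consistency check (this is just the elementary limit, not an additional assumption): since $q^{k/2}-q^{-k/2}=2i\sin(k\hbar/2)$ for $q=e^{i\hbar}$, the coefficient $\frac{(-1)^{k+1}i\hbar}{q^{k/2}-q^{-k/2}}=\frac{(-1)^{k+1}\hbar}{2\sin(k\hbar/2)}$ tends to $\frac{(-1)^{k+1}}{k}$ as $\hbar\rightarrow 0$, so in the classical limit $\textup{log }f_{\mathfrak{b},v}(q)$ recovers $\textup{log}\big(1+z^{(m_{v\delta},0)}\big)$, i.e., the unrefined slab function $f_{\mathfrak{b},v}$.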
\begin{thm} \[ \textup{log }f_{\textup{out}}(q) = \sum_{\underline{\beta}\in H_2^+(X,\mathbb{Z})} \sum_{g\geq 0} (D\cdot\underline{\beta}) N_\beta^g\hbar^{2g} x^{D\cdot\underline{\beta}}. \] \end{thm}
\begin{proof} The higher genus version of Lemma \ref{lem:2.8} is \cite{Bou2}, Proposition 6.2. Applying it inductively as in the proof of Proposition \ref{prop:scattering} we obtain the above formula. \end{proof}
For $(\mathbb{P}^2,E)$, let $N_{d,k}^g$ be the log Gromov-Witten invariant of $1$-marked genus $g$, degree $d$ stable log maps meeting $E$ in a specified point $P$ with $k(P)=k$ with respect to the group law from \S\ref{S:torsion}. Everything we said respects the group structure of $E$, so we get a $q$-refined version of Theorem \ref{thm:torsion}:
\begin{thm} Let $\mathfrak{p}$ be an unbounded wall of order $l$ in $\mathscr{S}_\infty(q)$ for $(\mathbb{P}^2,E)$. Then \[ \textup{log }f_{\mathfrak{p}}(q) = \sum_{d=1}^\infty \sum_{g\geq 0} 3d \left(\sum_{k: l\mid k\mid d} \frac{s_{k,l}}{r_l} N_{d,k}^g\right) \hbar^{2g}x^{3d}. \] \end{thm}
\section{Explicit calculations} \label{S:calc}
In this section we will calculate some logarithmic Gromov-Witten invariants and log BPS numbers explicitly. To this end, I wrote a sage code for calculating scattering diagrams and wall structures. It can be found on my webpage\footnote{\url{http://timgraefnitz.com/}}.
\subsection{$(\mathbb{P}^2,E)$} \label{S:calcP2}
We want to calculate the numbers $N_{d,k}$ and $n_{d,k}$ for $d\leq 6$ as well as the numbers $n_{[d_1,\ldots,d_k]}$ for $d\leq 4$. Loading the code into a sage shell and typing \begin{eqnarray*} &&\verb+D = Diagram(case="P2",order=6)+ \\ &&\verb+D2 = D.scattering(order=6,case="P2")+\\ &&\verb+D2.tex(initial_diagram=D,print_directions=[(1,0)])+ \end{eqnarray*} one can produce a TikZ code that, after some small changes, gives Figure \ref{fig:sage}. It shows the part of the wall structure $\bar{\mathscr{S}}_6$ on the discrete covering space $\bar{B}$ (see \S\ref{S:affinecharts}) that is relevant for computing the functions on the central maximal cell. The full $\bar{\mathscr{S}}_6$ would be symmetric, carrying many more walls on the outer area. We have \begin{eqnarray*} \text{log }f_{\text{out}} &=& 27x^3 + \frac{405}{2}x^6 + 2196x^9 + \frac{110997}{4}x^{12} \\ && \text{ } \qquad\ \ + \frac{1906902}{5}x^{15} + 5527710x^{18} + \mathcal{O}(x^{20}) \end{eqnarray*} This yields the following logarithmic Gromov-Witten invariants:
\renewcommand{\arraystretch}{1.2} \begin{table}[h!]
\begin{tabular}{|c|c|c|c|c|c|} \hline $N_1=9$ & $N_2=\frac{135}{4}$ & $N_3=244$ & $N_4=\frac{36999}{16}$ & $N_5=\frac{635634}{25}$ & $N_6=307095$ \\ \hline \end{tabular}
\label{tab:N} \caption{The invariants $N_d$ of $(\mathbb{P}^2,E)$ for $d \leq 6$.} \end{table}
Subtracting multiple cover contributions we get the following log BPS numbers:
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|c|c|} \hline $n_1=9$ & $n_2=27$ & $n_3=234$ & $n_4=2232$ & $n_5=25380$ & $n_6=305829$ \\ \hline \end{tabular}
\label{tab:n} \caption{The log BPS numbers $n_d$ of $(\mathbb{P}^2,E)$ for $d \leq 6$.} \end{table}
They are related to the local BPS numbers $n_d^{\text{loc}}$, shown in \cite{CKYZ}, Table 1, by Remark \ref{rem:local} and \cite{CKYZ}, (2.1).
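The passage from the invariants $N_d$ to the log BPS numbers $n_d$ above can be automated. The following short Python script is only an illustration (it is not part of the sage package above); it assumes that the multiple cover contribution of a degree $d'$ curve under a $k$-fold cover is $M_{d'}[k]=\frac{1}{k^2}\binom{k(3d'-1)-1}{k-1}$, a formula which reproduces all values of $M_{d'}[k]$ quoted in \S\ref{S:calcP2a}:
\begin{verbatim}
from fractions import Fraction as F
from math import comb

# x^{3d}-coefficients of log f_out as printed above
c = {1: F(27), 2: F(405, 2), 3: F(2196), 4: F(110997, 4),
     5: F(1906902, 5), 6: F(5527710)}
N = {d: c[d] / (3*d) for d in c}          # N_d = coefficient / (3d)

def M(dp, k):
    # assumed multiple cover contribution M_{d'}[k]; it gives back the
    # values 3/4, 10/9, 35/16, 77/6, 9/4, 91/9, 15/4 used in the text
    return F(comb(k*(3*dp - 1) - 1, k - 1), k*k)

n = {}
for d in sorted(N):
    n[d] = N[d] - sum(M(dp, d // dp) * n[dp] for dp in n if d % dp == 0)
print({d: int(n[d]) for d in n})
# {1: 9, 2: 27, 3: 234, 4: 2232, 5: 25380, 6: 305829}
\end{verbatim}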
\begin{figure}\label{fig:sage}
\end{figure}
\subsubsection{Torsion points} \label{S:calcP2a}
Write $f_l$ for the function $f_{\mathfrak{p}}$ attached to a wall $\mathfrak{p}$ with $l(\mathfrak{p})=l$.
\begin{figure}
\caption{The wall structure $\mathscr{S}_6$ on one unbounded maximal cell of $\mathscr{P}$, showing the relevant attached functions $f_l$. }
\label{fig:sage2}
\end{figure}
The sage code gives the following: \begin{align*} \textup{log }f_1 &= 9x^{3}+\frac{63}{2}x^{6}+246x^{9}+\frac{9279}{4}x^{12}+\frac{127134}{5}x^{15}+307041x^{18} + \mathcal{O}(x^{21}) \\ \textup{log }f_2 &= 36x^{6} + 2322x^{12} + 307164x^{18} + \mathcal{O}(x^{21}) \\ \textup{log }f_3 &= 243x^9 + \frac{614061}{2}x^{18} + \mathcal{O}(x^{21}) \\ \textup{log }f_4 &= 2304x^{12} + \mathcal{O}(x^{21}) \\ \textup{log }f_5 &= 25425x^{15} + \mathcal{O}(x^{21}) \\ \textup{log }f_6 &= 307152x^{18} + \mathcal{O}(x^{21}) \end{align*} From Corollary \ref{cor:torsion} we get: \begin{compactenum}[(1)] \item For $l=6$ we get \[ 307152 = 18 \cdot \frac{108}{6} \cdot N_{6,6}, \] hence $N_{6,6}=948$. There are no multiple cover contributions, as there are no curves of degree $<6$ meeting a point with $l=6$. This shows $n_{6,6}=948$. \item For $l=5$ we get \[ 25425 = 15 \cdot \frac{180}{12} \cdot N_{5,5}, \] hence $N_{5,5}=113$. There are no multiple cover contributions, so $n_{5,5}=113$. \item For $l=4$ we get \[ 2304 = 12 \cdot \frac{72}{6} \cdot N_{4,4}, \] hence $N_{4,4}=16$. There are no multiple cover contributions, so $n_{4,4}=16$. \item For $l=3$ we get \[ 243 = 9 \cdot \frac{54}{6} \cdot N_{3,3}, \] hence $N_{3,3}=3$. There are no multiple cover contributions, so $n_{3,3}=3$. Moreover, \[ \frac{614061}{2} = 18 \cdot \left(\frac{54}{6} \cdot N_{6,3} + \frac{54}{6} \cdot 948\right), \] so $N_{6,3}=\frac{3789}{4}$. Subtracting $M_3[2]n_{3,3}=\frac{15}{4}\cdot 3$ we get $n_{6,3}=936$. \item For $l=2$ we get \[ 36 = 6 \cdot \frac{18}{3} \cdot N_{2,2}, \] hence $N_{2,2}=1$. There are no multiple cover contributions, so $n_{2,2}=1$, and \[ 2322 = 12\cdot\left(\frac{18}{3}\cdot N_{4,2}+\frac{18}{3}\cdot 16\right), \] so $N_{4,2}=\frac{65}{4}$. Subtracting $M_2[2]n_{2,2}=\frac{9}{4}\cdot 1$ we get $n_{4,2}=14$. Moreover, \[ 307164 = 18\cdot\left(\frac{18}{3}\cdot N_{6,2}+\frac{36}{3}\cdot 948\right), \] hence $N_{6,2}=\frac{8533}{9}$. Subtracting $M_2[3]n_{2,2}=\frac{91}{9}\cdot 1$ we get $n_{6,2}=938$. \item For $l=1$ we get \[ 9 = 3 \cdot \frac{9}{3} \cdot N_{1,1}, \] hence $N_{1,1}=1$. There are no multiple cover contributions, so $n_{1,1}=1$, and \[ \frac{63}{2} = 6\cdot\left(\frac{9}{3}\cdot N_{2,1} + \frac{9}{3}\cdot 1\right), \] so $N_{2,1}=\frac{3}{4}$. Subtracting $M_1[2]n_{1,1}=\frac{3}{4}\cdot 1$ we get $n_{2,1}=0$. Moreover, \[ 246 = 9\cdot\left(\frac{9}{3}\cdot N_{3,1}+\frac{18}{3}\cdot 3\right), \] so $N_{3,1}=\frac{28}{9}$. Subtracting $M_1[3]n_{1,1}=\frac{10}{9}\cdot 1$ we get $n_{3,1}=2$. Moreover, \[ \frac{9279}{4} = 12\cdot\left(\frac{9}{3}\cdot N_{4,1}+\frac{9}{3}\cdot\frac{65}{4}+\frac{18}{3}\cdot 16\right), \] so $N_{4,1}=\frac{259}{16}$. Subtracting $M_1[4]n_{1,1}+M_2[2]n_{2,1}=\frac{35}{16}\cdot 1+\frac{9}{4}\cdot 0$ we get $n_{4,1}=14$. Finally, \[ 307041 = 18 \cdot \left(\frac{9}{3}\cdot N_{6,1}+\frac{9}{3}\cdot\frac{8533}{9}+\frac{18}{3}\cdot\frac{3789}{4}+\frac{18}{3}\cdot 948\right), \] so $N_{6,1}=\frac{2842}{3}$. Subtracting $M_1[6]n_{1,1}+M_2[3]n_{2,1}+M_3[2]n_{3,1}=\frac{77}{6}\cdot 1+\frac{91}{9}\cdot 0+\frac{15}{4}\cdot 2$ we get $n_{6,1}=927$. \end{compactenum} In summary, the numbers $n_{d,k}$ for $d\leq 6$ are shown in Table \ref{tab:results}. The $n_{d,d}$ coincide with the $m_d$ in \cite{Ta2}, Theorem 1.4. The numbers $n_{d,k}$ for $d\leq 3$ are calculated in \cite{Ta1}. The sum $\sum_{k\mid d}s_kn_{d,k}$ is the log BPS number $n_d$ of $(\mathbb{P}^2,E)$. From $n_5$ and $n_{5,5}$ one also obtains $n_{5,1}$. To the best of my knowledge the numbers $n_{4,1}$, $n_{4,2}$, $n_{6,1}$, $n_{6,2}$ and $n_{6,3}$ are new.
\begin{table}[h!]
\begin{tabular}{|l|l|l|l|l|l|} \cline{1-1} $n_{1,1}=1$ \\ \cline{1-2} $n_{2,1}=0$ & $n_{2,2}=1$ \\ \cline{1-3} $n_{3,1}=2$ & & $n_{3,3}=3$ \\ \cline{1-4} $n_{4,1}=14$ & $n_{4,2}=14$ & & $n_{4,4}=16$ \\ \cline{1-5} $n_{5,1}=108$ & & & & $n_{5,5}=113$ \\ \cline{1-6} $n_{6,1}=927$ & $n_{6,2}=938$ & $n_{6,3}=936$ & & & $n_{6,6}=948$ \\ \hline \end{tabular}
\label{tab:results} \caption{The numbers $n_{d,k}$ calculated below.} \end{table}
\subsubsection{Degenerating the divisor} \label{S:calcP2b}
The limit $s\rightarrow 0$ corresponds to a degeneration of the elliptic curve $E$ to a cycle of three lines $D_t^0=D_1+D_2+D_3$. In this section we consider invariants of $(\mathbb{P}^2,E)$ with given degrees over the components $D_i$, i.e., with given degree splitting $[d_1,\ldots,d_k]$ (Definition \ref{defi:splitting}). Since the components $D_i$ are isomorphic as divisors of $\mathbb{P}^2$ this does not depend on the labelling of the $D_i$ and we can omit zeros in $[d_1,\ldots,d_k]$. The tropical curves contributing to the invariants $N_{[d_1,\ldots,d_k]}$ for $d=\sum d_i \leq 4$ are shown in Figure \ref{fig:colors}. There is more than one tropical curve with degree splitting $[2,2]$ and $[2,1,1]$, respectively. We have: \begin{eqnarray*} f_{[1]} &=&1+9x^3 + \mathcal{O}(x^6) \\ f_{[2]} &=&(1+9x^3)(1+72x^6) + \mathcal{O}(x^9) \\ f_{[1,1]} &=& 1+36x^6 + \mathcal{O}(x^9) \\ f_{[3]} &=& (1+9x^3)(1+72x^6)(1-78x^9) + \mathcal{O}(x^{12}) \\ f_{[2,1]} &=& 1+243x^9 + \mathcal{O}(x^{12}) \\ f_{[1,1,1]} &=& 1+81x^9 + \mathcal{O}(x^{12}) \\ f_{[4]} &=& (1+9x^3)(1+72x^6)(1-78x^9)(1+5256x^{12}) + \mathcal{O}(x^{15}) \\ f_{[3,1]} &=& 1+1872x^{12} + \mathcal{O}(x^{15}) \\ f_{[2,2]} &=& (1+36x^6)(1+1296x^{12})(1+1530x^{12}) + \mathcal{O}(x^{15}) \\ f_{[2,1,1]} &=& (1+144x^{12})(1+432x^{12})(1+1296x^{12}) + \mathcal{O}(x^{15}) \end{eqnarray*}
The invariant $N_{[d]}$ has $k$-fold cover contributions from $n_{[d/k]}$ for $k\mid d$, $k>1$, and $N_{[2,2]}$ has a $2$-fold cover contribution from $n_{[1,1]}$. This gives the following log BPS numbers:
\begin{table}[h!] \centering
\begin{tabular}{|l|l|l|l|} \hhline{|-|~|~|~|}
\cellcolor{red!50}$n_{[1]}=3$ \\ \hhline{|-|-|~|~|}
\cellcolor{red!50}$n_{[2]}=3$ & \cellcolor{violet!50}$n_{[1,1]}=6$ \\ \hhline{|-|-|-|~|}
\cellcolor{red!50}$n_{[3]}=15$ & \cellcolor{green!50}$n_{[2,1]}=27$ & \cellcolor{blue!50}$n_{[1,1,1]}=9$ \\ \hhline{|-|-|-|-|} \cellcolor{red!50}$n_{[4]}=72$ & \cellcolor{orange!50}$n_{[3,1]}=156$ & \cellcolor{violet!50}$n_{[2,2]}=168$ & \cellcolor{brown!70}$n_{[2,1,1]}=156$ \\ \hline \end{tabular}
\label{tab:colors} \caption{Log BPS numbers of $(\mathbb{P}^2,E)$ with given degrees over the $D_i$.} \end{table}
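For instance, the value $n_{[2]}=3$ can be checked directly from $f_{[2]}$ above, assuming the same multiple cover contribution $M_1[2]=\frac{3}{4}$ as in \S\ref{S:calcP2a}: \[ \textup{log }f_{[2]} = 9x^3+\frac{63}{2}x^6+\mathcal{O}(x^9), \] so $N_{[2]}=\frac{1}{6}\cdot\frac{63}{2}=\frac{21}{4}$ and $n_{[2]}=\frac{21}{4}-\frac{3}{4}\cdot n_{[1]}=\frac{21}{4}-\frac{9}{4}=3$, in agreement with the table.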
\text{ } \\[-12mm] \text{ }
\begin{figure}
\caption{Tropical curves contributing to $n_{[d_1,\ldots,d_k]}$ for $(\mathbb{P}^2,E)$.}
\label{fig:colors}
\end{figure}
\subsubsection{Combining the methods} \label{S:calcP2c}
As indicated in Remark \ref{rem:combine}, in general it is not possible to combine the above methods and calculate invariants with given degree splitting that meet $E$ in a point with prescribed torsion. However, for $d\leq 3$ this is indeed possible, as we will show now.
For $d=1$ we have $\mathcal{G}_1=\{[1]\}$, so $n_{[1],1}=n_{1,1}=1$. For $d=2$ there are two types of tropical curves. One of them contributes to $N_{[1,1]}$ and corresponds to stable log maps meeting $E$ in a point of order $6$, so $n_{[1,1],1}=0$ and $n_{[1,1],2}=n_{2,2}=1$. The other one contributes to $N_{[2]}$, so $n_{[2],1}=n_{2,1}=0$.
For $d=3$ we have $l([3])=1$, $l([2,1])=3$ and $l([1,1,1])=1$. So $n_{[2,1],1}=0$ and the equations in Remark \ref{rem:combine} are the following: \begin{align*} 3n_{[3],1} + 6n_{[3],3} &= 15 & n_{[3],1} + n_{[1,1,1],1} &= 2 \\ 9n_{[2,1],3} &= 27 & n_{[3],3} + n_{[1,1,1],3} &= 3 \\ 3n_{[1,1,1],1} + 6n_{[1,1,1],3} &= 9 & n_{[2,1],3} &= 3 \end{align*} This system of linear equations has the unique solution: \begin{table}[h!] \centering
\begin{tabular}{|l|l|l|} \hline $n_{[3],1}=1$ & $n_{[2,1],1}=0$ & $n_{[1,1,1],1}=1$ \\ \hline $n_{[3],3}=2$ & $n_{[2,1],3}=3$ & $n_{[1,1,1],3}=1$ \\ \hline \end{tabular}
\label{tab:P1xP1} \caption{Log BPS numbers of $(\mathbb{P}^2,E)$ for $d=3$ with prescribed degree splitting and torsion point.} \end{table} \text{ }\\ If we try the same method for $d>3$, we have more indeterminates than independent equations, so there is no unique solution.
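For completeness, the $d=3$ system can also be solved mechanically; the following snippet (using sympy, purely as an illustration and with hypothetical variable names) returns the values in the table above:
\begin{verbatim}
from sympy import symbols, linsolve

n31_1, n31_3, n21_3, n111_1, n111_3 = symbols('n31_1 n31_3 n21_3 n111_1 n111_3')
eqs = [3*n31_1 + 6*n31_3 - 15,      # 3 n_{[3],1} + 6 n_{[3],3} = 15
       9*n21_3 - 27,                # 9 n_{[2,1],3} = 27
       3*n111_1 + 6*n111_3 - 9,     # 3 n_{[1,1,1],1} + 6 n_{[1,1,1],3} = 9
       n31_1 + n111_1 - 2,          # n_{[3],1} + n_{[1,1,1],1} = 2
       n31_3 + n111_3 - 3,          # n_{[3],3} + n_{[1,1,1],3} = 3
       n21_3 - 3]                   # n_{[2,1],3} = 3
print(linsolve(eqs, [n31_1, n31_3, n21_3, n111_1, n111_3]))   # {(1, 2, 3, 1, 1)}
\end{verbatim}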
\subsubsection{Higher genus}
Let us compute the higher genus numbers $N_1^g$ for $g\leq 4$. For $d=1$ we have three tropical curves, all isomorphic to each other. They have one vertex of classical multiplicity $m_V=3$ and two bounded legs of weight $w_E=1$ each. This gives, with $q=e^{i\hbar}$, \begin{eqnarray*} m_h(q) &=& \frac{1}{i\hbar}\left(q^{3/2}-q^{-3/2}\right) \cdot \left(\frac{i\hbar}{q^{1/2}-q^{-1/2}}\right)^2 \\ &=& 3 - \frac{7}{8}\hbar^2 + \frac{29}{640}\hbar^4 - \frac{137}{322560}\hbar^6 + \frac{41}{7372800}\hbar^8 + \mathcal{O}(\hbar^{10}) \end{eqnarray*} Multiplying by $3$ gives \begin{table}[h!] \centering
\begin{tabular}{|c|c|c|c|c|} \hline $N_1^0=9$ & $N_1^1=-\frac{21}{8}$ & $N_1^2=\frac{87}{640}$ & $N_1^3=-\frac{137}{107520}$ & $N_1^4=\frac{41}{2457600}$ \\ \hline \end{tabular}
\label{tab:Ng} \caption{Higher genus log Gromov-Witten invariants $N_d^g$ for $(\mathbb{P}^2,E)$.} \end{table}
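The $\hbar$-expansion above is straightforward to reproduce symbolically; using $q^{m/2}-q^{-m/2}=2i\sin(m\hbar/2)$, the following sympy snippet (an illustration only) returns the series displayed above:
\begin{verbatim}
from sympy import symbols, sin, series

h = symbols('hbar')
# the d=1 contribution m_h(q) rewritten via sines
m_h = (2*sin(3*h/2)/h) * (h/(2*sin(h/2)))**2
print(series(m_h, h, 0, 10))
# 3 - 7*hbar**2/8 + 29*hbar**4/640 - 137*hbar**6/322560 + ...
\end{verbatim}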
\subsection{$\mathbb{P}^1\times\mathbb{P}^1$}
Similarly, executing the code for $\mathbb{P}^1\times\mathbb{P}^1$ gives the following: \begin{eqnarray*} \text{log }f_{\text{out}} = 16x^2 + 72x^4 + 352x^6 + 3108x^8 + \frac{120016}{5}x^{10} + 198384x^{12} + \mathcal{O}(x^{14}) \end{eqnarray*}
\begin{table}[h!] \centering
\begin{tabular}{|c|c|c|c|c|c|} \hline $n_1=8$ & $n_2=16$ & $n_3=72$ & $n_4=368$ & $n_5=2400$ & $n_6=16320$ \\ \hline \end{tabular}
\label{tab:n} \caption{The log BPS numbers $n_d$ of $\mathbb{P}^1\times\mathbb{P}^1$ for $d \leq 6$.} \end{table}
The functions and invariants with prescribed degree splitting are: \begin{eqnarray*} f_{[1]} &=& 1+4x^2 + \mathcal{O}(x^4) \\ f_{[2]} &=& (1+4x^2)(1+10x^4) + \mathcal{O}(x^6) \\ f_{[1,1]} &=& 1+16x^4 + \mathcal{O}(x^6) \\ f_{[3]} &=& (1+4x^2)(1+10x^4)(1-20x^6) + \mathcal{O}(x^8) \\ f_{[2,1]} &=& 1+36x^6 + \mathcal{O}(x^8) \\ f_{[1,1,1]} &=& 1+36x^6 + \mathcal{O}(x^8) \\ f_{[4]} &=& (1+4x^2)(1+10x^4)(1-20x^6)(1+115x^8) + \mathcal{O}(x^{10}) \\ f_{[3,1]} &=& 1+64x^8 + \mathcal{O}(x^{10}) \\ f_{[2,2]} &=& (1+16x^4)(1+64x^8)(1+264x^8) + \mathcal{O}(x^{10}) \\ f_{[2,1,1]} &=& 1+64x^8 + \mathcal{O}(x^{10}) \\ f_{[1,2,1]} &=& 1+256x^8 + \mathcal{O}(x^{10}) \\ f_{[1,1,1,1]} &=& 1+64x^8 + \mathcal{O}(x^{10}) \end{eqnarray*} This gives the following log BPS numbers:
\begin{table}[h!] \centering
\begin{tabular}{|l|l|l|l|l|l|} \hhline{|-|~|~|~|~|~|}
$n_{[1]}=2$ \\ \hhline{|-|-|~|~|~|~|}
$n_{[2]}=0$ & $n_{[1,1]}=4$ \\ \hhline{|-|-|-|~|~|~|}
$n_{[3]}=0$ & $n_{[2,1]}=6$ & $n_{[1,1,1]}=6$ \\ \hhline{|-|-|-|-|-|-|} $n_{[4]}=0$ & $n_{[3,1]}=8$ & $n_{[2,2]}=24$ & $n_{[2,1,1]}=8$ & $n_{[1,2,1]}=32$ & $n_{[1,1,1,1]}=8$ \\ \hline \end{tabular}
\label{tab:P1xP1} \caption{The log BPS numbers of $\mathbb{P}^1\times\mathbb{P}^1$ with given degree splitting.} \end{table}
From this we compute the log BPS numbers with given curve class (bidegree) as follows. The factors are the number of unbounded walls in the wall structure contributing to $n_{[d_1,\ldots,d_m]}$ for different labellings of $D=D_1+D_2+D_3+D_4$. \begin{align*} n_{(1,0)} &= 2 \cdot n_{[1]} &= 4 \\ n_{(2,0)} &= 2 \cdot n_{[2]} &= 0 \\ n_{(1,1)} &= 4 \cdot n_{[1,1]} &= 16 \\ n_{(3,0)} &= 2 \cdot n_{[3]} &= 0 \\ n_{(2,1)} &= 4 \cdot n_{[2,1]} + 2 \cdot n_{[1,1,1]} &= 36 \\ n_{(4,0)} &= 2 \cdot n_{[4]} &= 0 \\ n_{(3,1)} &= 4 \cdot n_{[3,1]} + 4 \cdot n_{[2,1,1]} &= 64 \\ n_{(2,2)} &= 4 \cdot n_{[2,2]} + 4 \cdot n_{[1,2,1]} + 4 \cdot n_{[1,1,1,1]} &= 256 \end{align*}
\subsubsection{Deforming $\mathbb{F}_2$} \label{S:calc8'a}
Consider the toric degeneration of $\mathbb{P}^1\times\mathbb{P}^1$ by deformation of the Hirzebruch surface $\mathbb{F}_2$ (case (8'a) in Figure \ref{fig:listb}) from Example \ref{expl:8'a2}.
\begin{figure}
\caption{Tropical curves corresponding to $\underline{\beta}=L_1+L_2$.}
\label{fig:calc8'a}
\end{figure}
For $d=1$ it is clear by the symmetry $L_1\leftrightarrow L_2$ that $n_{(1,0)}=n_{L_1}=n_{L_2}=4$. For $d=2$ we have $n_{(2,0)}=n_{2L_1}=n_{2L_2}$ and $n_{(1,1)}=n_{L_1+L_2}$. Figure \ref{fig:calc8'a} shows the tropical curves corresponding to stable log maps of class $\underline{\beta}=L_1+L_2$. The first one has multiplicity $4$, the second one $-4$, the third one $8$ and the last one $4$. By symmetry there are two tropical curves similar to the red ones at the lower vertex, again with multiplicities $-4$ and $8$. This gives $n_{(1,1)}=n_{L_1+L_2}=16$ and in turn $n_{(2,0)}=0$. One can proceed similarly for higher degrees.
\subsection{Cubic surface} \label{S:calccubic}
The dual intersection complex of the cubic surface (case (3a) in Figure \ref{fig:listb}) is quite similar to the one of $(\mathbb{P}^2,E)$. The only differences are that for each vertex the determinant of primitive generators of adjacent bounded edges is $1$ instead of $3$ and that there are three affine singularities on each bounded edge. As a consequence, by the change of lattice trick (\cite{GHK2}, Proposition C.13), the wall structure of $(X,D)$ is in bijection with the wall structure of $(\mathbb{P}^2,E)$, and the functions attached to walls in direction $m_{\text{out}}$, in particular the unbounded walls, coincide. This immediately implies $N_d(X,D) = 3 \cdot N_d(\mathbb{P}^2,E)$. Subtracting multiple covers we get (note that $w_{\text{out}}=1$ and the multiple cover contributions of degree $1$ curves are $M_1[k]=\frac{1}{k^2}\binom{-1}{k-1}=\frac{(-1)^{k-1}}{k^2}$):
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|c|c|} \hline $n_1=27$ & $n_2=108$ & $n_3=729$ & $n_4=6912$ & $n_5=76275$ & $n_6=920727$ \\ \hline \end{tabular}
\label{tab:n} \caption{The log BPS numbers $n_d$ of the cubic surface for $d \leq 6$.} \end{table}
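As a quick check, using only the numbers already computed for $(\mathbb{P}^2,E)$, \[ n_2 = 3\cdot N_2(\mathbb{P}^2,E) - M_1[2]\, n_1 = \frac{405}{4}-\Big(-\frac{1}{4}\Big)\cdot 27 = 108, \] in agreement with the table above.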
\subsubsection{Curve classes} A smooth cubic surface $X$ can be given by blowing up six general points on $\mathbb{P}^2$. Let $e_1,\ldots,e_6$ be the classes of the exceptional divisors and let $\ell$ be the pullback of the class of a line in $\mathbb{P}^2$. Then $\ell,e_1,\ldots,e_6$ generate $\text{Pic}(X)\simeq H_2^+(X,\mathbb{Z})\simeq\mathbb{Z}^7$.
\begin{figure}
\caption{The dual intersection complex of a smooth nef toric surface $X^0$ deforming to a smooth cubic surface $X$.}
\label{fig:fan}
\end{figure}
The dual intersection complex of a smooth nef toric surface $X^0$ deforming to $X$ is shown in Figure \ref{fig:fan}. Its asymptotic fan is the fan of $X^0$. Denote the curves corresponding to the rays of this fan by $C_1,\ldots,C_9$, labelled as in Figure \ref{fig:fan}. Then (see \cite{KM}, \S4) an isomorphism $\text{Pic}(X^0) \simeq \text{Pic}(X)$ is given as follows: \begin{align*} [C_1] &\mapsto e_2-e_5 & [C_2] &\mapsto \ell-e_2-e_3-e_6 & [C_3] &\mapsto e_6 \\ [C_4] &\mapsto e_3-e_6 & [C_5] &\mapsto \ell-e_1-e_3-e_4 & [C_6] &\mapsto e_4 \\ [C_7] &\mapsto e_1-e_4 & [C_8] &\mapsto \ell-e_1-e_2-e_5 & [C_9] &\mapsto e_5 \\ \end{align*} \text{ }\\[-14mm] Now we know the curve classes of the cubic surface $X$ corresponding to the unbounded edges in the dual intersection complex $(B,\mathscr{P},\varphi)$. In turn, we are able to associate to each tropical curve in $(B,\mathscr{P},\varphi)$ the curve class of the corresponding stable log maps, by composition of the maps from Constructions \ref{con:HG} and \ref{con:GH}.
As shown in \cite{Hos}, the Weyl group $W_{E_6}$ of type $E_6$ acts on $\text{Pic}(X)$ as symmetries of configurations of the $27$ lines and this action preserves the local Gromov-Witten invariants $N_\beta^{\text{loc}}$ of $X$. Hence, by the log-local correspondence \cite{vGGR}, it preserves the logarithmic Gromov-Witten invariants $N_\beta$ of $X$ considered here. The curve classes $\underline{\beta}$ of the cubic $X$ giving a nonzero contribution $N_\beta^{\text{loc}}$, up to action of $W_{E_6}$, are given in \cite{KM}, Table 1, along with the corresponding local BPS number $n_\beta^{\text{loc}}$.
For $d=1$ and $d=2$ there is, up to the action of $W_{E_6}$, only one curve class giving a nonzero contribution, so this is trivial. For $d=1$ this is $\underline{\beta}=e_6$ and the length of its orbit is $27$, so $n_\beta=1$. For $d=2$ it is $\underline{\beta}=\ell-e_1$, with orbit length $27$, so $n_\beta=4$. For $d=3$ there are two equivalence classes giving a nonzero contribution, with representatives $\ell$ and $3\ell-\sum_{i=1}^6 e_i$, respectively.
\begin{figure}
\caption{Tropical curves corresponding to $\underline{\beta}=3\ell-\sum_{i=1}^6e_i$.}
\label{fig:calccubic}
\end{figure}
The red tropical curve in Figure \ref{fig:calccubic} corresponds to the class \[ 1 \cdot (e_2-e_5) + 2 \cdot (\ell-e_2-e_3-e_6) + 3 \cdot e_6 + 2 \cdot (e_3-e_6) + 1 \cdot (\ell-e_1-e_3-e_4) = 3\ell-\sum_{i=1}^6e_i \] and similarly for the blue tropical curve. Changing the affine singularities in which the bounded legs end may change the curve class. It turns out that for the red tropical curve any change leads to the class $\underline{\beta}=\ell$ or to a class giving no contribution. Its multiplicity is $18$, so together with the choice of outgoing edge this gives a contribution of $54$ to $n_{3\ell-\sum_{i=1}^6e_i}$. For the blue tropical curve there are two changes leading again to $\underline{\beta}=3\ell-\sum_{i=1}^6e_i$ and six changes leading to $\underline{\beta}=\ell$. The multiplicity of any of these tropical curves is $3$. Together with the choice of outgoing edge this gives a contribution of $3 \cdot 3 \cdot 3 = 27$ to $n_{3\ell-\sum_{i=1}^6e_i}$. The orbit length is $1$, so $n_{3\ell-\sum_{i=1}^6e_i}=81$. The orbit length of $\ell$ is $72$, so $n_\ell = (729-81)/72 = 9$. So, in agreement with \cite{KM}, Table 1, we have: \begin{table}[h!]
\begin{tabular}{|c|c|c|c|} \hline $n_{e_i}=1$ & $n_{\ell-e_i}=4$ & $n_\ell=9$ & $n_{3\ell-\sum_{i=1}^6e_i}=81$ \\ \hline \end{tabular}
\label{tab:n} \caption{The log BPS numbers $n_\beta$ of the cubic surface for $d \leq 3$.} \end{table}
\begin{figure}
\caption{Tropical curves corresponding to $\underline{\beta}=\ell$.}
\label{fig:calccubic2}
\end{figure}
Figure \ref{fig:calccubic2} shows tropical curves corresponding to stable log maps of class $\underline{\beta}=\ell$. In particular, any change of bounded legs of the green tropical curves still leads to class $\underline{\beta}=\ell$. For instance, the red tropical curve has class \[ 1 \cdot (e_2-e_5) + 2 \cdot (\ell-e_2-e_3-e_6) + 3 \cdot e_6 + 1 \cdot (e_3-e_6) = 2\ell-e_2-e_3-e_5 \] Under the action of $W_{E_6}$ this is equivalent to $\underline{\beta}=2\ell-e_1-e_2-e_3$ and in turn (via the map $s_6$ from \cite{KM}, \S3) to \[ 2 \cdot (2\ell-e_1-e_2-e_3)-(\ell-e_2-e_3)-(\ell-e_1-e_3)-(\ell-e_1-e_2) = \ell. \] Similarly, one computes the classes of the other tropical curves.
\appendix
\section{Artin fans and logarithmic modifications} \label{A:artin}
An \textit{Artin fan} is a logarithmic algebraic stack that is logarithmically \'etale over a point. Artin fans were introduced in \cite{AW} to prove the invariance of logarithmic Gromov-Witten invariants under \textit{logarithmic modifications}, that is, proper birational logarithmically \'etale morphisms. We will briefly summarize this subject.
To any fine saturated log smooth scheme $X$ one can associate an Artin fan $\mathcal{A}_X$. It has an \'etale cover by finitely many \textit{Artin cones} -- stacks of the form $[V/T]$, where $V$ is a toric variety and $T$ its dense torus. In this way, $\mathcal{A}_X$ encodes the combinatorial structure of $X$. A \textit{subdivision} of the Artin fan $\mathcal{A}_X$ induces a logarithmic modification of $X$ via pull-back. Moreover, all logarithmic modifications of $X$ arise this way. This ultimately leads to a proof of the birational invariance in logarithmic Gromov-Witten theory \cite{AW}.
Olsson \cite{Ol} showed that a logarithmic structure on a given underlying scheme $\underline{X}$ is equivalent to a morphism $\underline{X}\rightarrow\underline{\textbf{Log}}$, where $\underline{\textbf{Log}}$ is a zero-dimensional algebraic stack -- the moduli stack of logarithmic structures. It carries a universal logarithmic structure whose associated logarithmic algebraic stack we denote by $\textbf{Log}$ -- providing a universal family of logarithmic structures $\textbf{Log}\rightarrow\underline{\textbf{Log}}$. As shown in \cite{AW}, if $X$ is a fine saturated log smooth scheme, then the morphism $X\rightarrow\textbf{Log}$ factors through an initial morphism $X\rightarrow\mathcal{A}_X$, where $\mathcal{A}_X$ is an Artin fan and $\mathcal{A}_X\rightarrow\textbf{Log}$ is \'etale and representable. While this serves as a definition of the associated Artin fan $\mathcal{A}_X$, there is a more explicit description of $\mathcal{A}_X$ in terms of the \textit{tropicalization} of $X$, given below.
Let $S$ be a fine saturated log scheme.
\begin{defi} \label{defi:Log} The \textit{moduli stack of log structures over $S$} is the category $\underline{\textbf{Log}_S}$ fibered over the category of $\underline{S}$-schemes defined as follows. The objects over a scheme morphism $\underline{X}\rightarrow\underline{S}$ are the log morphisms $X\rightarrow S$ over $\underline{X}\rightarrow\underline{S}$. The morphisms from $X\rightarrow S$ to $X'\rightarrow S$ are the log morphisms $h : X \rightarrow X'$ over $S$ for which $h^\star\mathcal{M}_{X'}\rightarrow\mathcal{M}_X$ is an isomorphism. \end{defi}
\begin{prop}[\cite{Ol} Theorem 1.1] $\underline{\textup{\textbf{Log}}_S}$ is an algebraic stack locally of finite presentation over $\underline{S}$. \end{prop}
\begin{defi} An \textit{Artin fan} is a logarithmic algebraic stack that is logarithmically \'etale over a point. An \textit{Artin cone} is a logarithmic algebraic stack isomorphic to $[V/T]$, where $V$ is a toric variety and $T$ its dense torus. \end{defi}
\begin{rem} \label{rem:artin} If a logarithmic algebraic stack has a strict representable \'etale cover by Artin cones, then it is an Artin fan. In fact, in \cite{AW} Artin fans were defined this way. Later the definition was generalized to the one above. \end{rem}
\begin{lem}[\cite{AW}, Lemma 2.3.1] \label{lem:artin} An algebraic stack that is representable and \'etale over $\textup{\textbf{Log}}$ has a strict \'etale cover by Artin cones. \end{lem}
\begin{prop}[\cite{ACMW}, Proposition 3.1.1] \label{prop:artin} Let $X$ be a logarithmic algebraic stack that is locally connected in the smooth topology. Then there is an initial factorization of $X\rightarrow \textbf{Log}$ through a strict \'etale morphism $\mathcal{A}_X \rightarrow \textbf{Log}$ which is representable by algebraic spaces. \end{prop}
\begin{defi} Let $X$ be a fine saturated log smooth scheme. The \textit{Artin fan of $X$} is the stack $\mathcal{A}_X$ from Proposition \ref{prop:artin}. Indeed, this is an Artin fan by Lemma \ref{lem:artin} and Remark \ref{rem:artin}. \end{defi}
We now give a more explicit description of the Artin fan $\mathcal{A}_X$ of a fine saturated log smooth scheme $X$. By Lemma \ref{lem:artin} and Proposition \ref{prop:artin}, $\mathcal{A}_X$ has a strict \'etale cover by Artin cones. In fact, $\mathcal{A}_X$ is a colimit of Artin cones $\mathcal{A}_\sigma$ corresponding to the cones $\sigma$ in the tropicalization $\Sigma(X)$ of $X$.
\begin{defi} For a cone $\sigma\subseteq N_{\mathbb{R}}$, let $P=\sigma^\vee\cap M$ be the corresponding monoid. The \textit{Artin cone defined by $\sigma$} is the logarithmic algebraic stack \[ \mathcal{A}_\sigma = \left[\faktor{\textup{Spec }\mathbb{C}[P]}{\textup{Spec }\mathbb{C}[P^{\textup{gp}}]}\right] \] with the toric log structure coming from the global chart $P \rightarrow \mathbb{C}[P]$. \end{defi}
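For orientation, here is the most basic instance (a standard example, not specific to the situation above).

\begin{expl} For $\sigma=\mathbb{R}_{\geq 0}\subseteq\mathbb{R}$ one has $P=\sigma^\vee\cap M\simeq\mathbb{N}$, hence $\textup{Spec }\mathbb{C}[P]\simeq\mathbb{A}^1$ and $\textup{Spec }\mathbb{C}[P^{\textup{gp}}]\simeq\mathbb{G}_m$, so $\mathcal{A}_\sigma\simeq[\mathbb{A}^1/\mathbb{G}_m]$. For $\sigma=\{0\}$ one gets $\mathcal{A}_\sigma\simeq\textup{Spec }\mathbb{C}$ with the trivial log structure. \end{expl}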
\begin{defi} Let $\Sigma$ be a generalized cone complex (Definition \ref{defi:Cones}) that is a colimit of a diagram of cones $s : I \rightarrow \textup{\textbf{Cones}}$. Then define $\mathcal{A}_\Sigma$ to be the colimit as sheaves over $\textup{Log}$ of the corresponding diagram of sheaves given by $I \ni i \mapsto \mathcal{A}_{s(i)}$. \end{defi}
\begin{prop}[\cite{ACGS1}, Proposition 2.2.2] Let $X$ be a fine saturated log smooth scheme with tropicalization (Definition \ref{defi:trop}) a generalized cone complex $\Sigma(X)$. Then \[ \mathcal{A}_X \cong \mathcal{A}_{\Sigma(X)}. \] \end{prop}
\begin{defi} A \textit{subdivision} of an Artin fan $\mathcal{X}$ is a morphism of Artin fans $\mathcal{Y}\rightarrow\mathcal{X}$ whose base change via any map $\mathcal{A}_\sigma\rightarrow\mathcal{X}$ is isomorphic to $\mathcal{A}_\Sigma$ for some subdivision $\Sigma$ of $\sigma$. \end{defi}
\begin{defi} A \textit{logarithmic modification} of fine saturated log smooth schemes is a proper surjective logarithmically \'etale morphism. \end{defi}
Let $X$ be a fine saturated log smooth scheme with tropicalization $\Sigma(X)$. Then a subdivision $\tilde{\Sigma}(X)$ of $\Sigma(X)$ gives a subdivision $\mathcal{A}_{\tilde{\Sigma}(X)} \rightarrow \mathcal{A}_X$ of the Artin fan of $X$. The pull back $\tilde{X} := \mathcal{A}_{\tilde{\Sigma}(X)} \times_{\mathcal{A}_X} X \rightarrow X$ is a logarithmic modification. Moreover, all logarithmic modifications of $X$ arise this way:
\begin{prop}[\cite{AW}, Corollary 2.6.7] If $Y\rightarrow X$ is a logarithmic modification of fine saturated log smooth schemes, then $Y\rightarrow\mathcal{A}_Y \times_{\mathcal{A}_X} X$ is an isomorphism. \end{prop}
\begin{thm}[\cite{AW}, Theorem 1.1] \label{thm:AW} Let $h : Y \rightarrow X$ be a logarithmic modification of log smooth schemes. This induces a projection $\pi : \bar{\mathscr{M}}(Y) \rightarrow \bar{\mathscr{M}}(X)$ with \[ \pi_\star\llbracket\bar{\mathscr{M}}(Y)\rrbracket = \llbracket\bar{\mathscr{M}}(X)\rrbracket, \] where $\bar{\mathscr{M}}(X)$ is the stack of stable log maps to $X$. \end{thm}
\begin{cor} Logarithmic Gromov-Witten invariants are invariant under logarithmic modifications. \end{cor}
\end{document} | arXiv |
doi: 10.3934/ipi.2021056
Small defects reconstruction in waveguides from multifrequency one-side scattering data
Éric Bonnetier 1, Angèle Niclas 2,*, Laurent Seppecher 2 and Grégory Vial 2
Institut Fourier, Université Grenoble Alpes, France
Institut Camille Jordan, École Centrale Lyon, France
* Corresponding author: Angèle Niclas
Received: January 2021. Revised: June 2021. Early access: September 2021.
Localization and reconstruction of small defects in acoustic or electromagnetic waveguides is of crucial interest in nondestructive evaluation of structures. The aim of this work is to present a new multi-frequency inversion method to reconstruct small defects in a 2D waveguide. Given one-side multi-frequency wave field measurements of propagating modes, we use a Born approximation to provide an $ \text{L}^2 $-stable reconstruction of three types of defects: a local perturbation inside the waveguide, a bending of the waveguide, and a localized defect in the geometry of the waveguide. This method is based on a mode-by-mode spatial Fourier inversion from the available partial data in the Fourier domain. Indeed, in the available data, some high and low spatial frequency information about the defect is missing. We overcome this issue using both a compact support hypothesis and a minimal smoothness hypothesis on the defects. We also provide a suitable numerical method for efficient reconstruction of such defects and we discuss its applications and limits.
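As a purely illustrative sketch (not the paper's algorithm (91), and with all discretization choices hypothetical): a compactly supported profile can be recovered from band-limited Fourier-domain data by Tikhonov-regularized least squares, for example

    import numpy as np

    # hypothetical discretization: profile f supported in [0.5, 1.5],
    # partial Fourier data d_j = sum_m exp(-i*k_j*x_m) f(x_m) dx, k_j in a band K
    x = np.linspace(0.5, 1.5, 101)
    dx = x[1] - x[0]
    K = np.linspace(0.01, 40.0, 400)
    A = np.exp(-1j * np.outer(K, x)) * dx
    f_true = np.where((x > 0.8) & (x < 1.2), (x - 0.8) * (1.2 - x), 0.0)
    d = A @ f_true                           # synthetic data
    lam = 1e-3                               # regularization parameter
    # minimize ||A f - d||^2 + lam * ||f||^2
    f_rec = np.linalg.solve(A.conj().T @ A + lam * np.eye(x.size),
                            A.conj().T @ d).real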
Keywords: Inverse problem, Helmholtz equation, waveguides, multi-frequency data, Born approximation.
Mathematics Subject Classification: 35R30, 78A46.
Citation: Éric Bonnetier, Angèle Niclas, Laurent Seppecher, Grégory Vial. Small defects reconstruction in waveguides from multifrequency one-side scattering data. Inverse Problems & Imaging, doi: 10.3934/ipi.2021056
Figure 1. Representation of the three types of defects: in $ (1) $ a local perturbation $ q $, in $ (2) $ a bending of the waveguide, in $ (3) $ a localized defect in the geometry of $ \Omega $. A controlled source $ s $ generates a wave field $ u^\text{inc}_k $. When it crosses the defect, it generates a scattered wave field $ u^s_k $. Both $ u^\text{inc}_k $ and $ u^s_k $ are measured on the section $ \Sigma $
Figure 2. Condition number of $ M_t^TM_t $ for different sizes of support and values of $ \omega_0 $. Here, $ X $ is the discretization of $ [1-r, 1+r] $ with $ 500r+1 $ points. The $ x $-axis represents the evolution of $ r $, and the $ y $-axis $ \text{cond}_2(M_t^TM_t) $. Each curve corresponds to a value of $ \omega_0 $ as indicated in the left rectangle
Figure 3. Representation of a bend in a waveguide
Figure 4. Representation of a shape defect in a waveguide
Figure 5. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different values of $ \omega_1 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 10\omega_1 $ points, and $ K $ is the discretization of $ [0.01, \omega_1] $ with $ 1000 $ points
Figure 6. $ \text{L}^2 $-error between $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ and its reconstruction $ f_{app} $ for different values of $ \omega_1 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 10\omega_1 $ points, and $ K $ is the discretization of $ [0.01, \omega_1] $ with $ 1000 $ points
Figure 7. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different values of $ \omega_0 $ and $ r = 0.5 $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [0.5, 1.5] $ with $ 251 $ points, and $ K $ is the discretization of $ [\omega_0, 50] $ with $ 1000 $ points
Figure 8. Reconstruction of $ f(x) = (x-0.8)(1.2-x)\textbf{1}_{0.8\leq x\leq 1.2} $ for different sizes of support $ r $ and $ \omega_0 = 3\pi $ using the discrete operator $ \gamma $ and the algorithm (91) with $ \lambda = 0.001 $. Here, $ X $ is the discretization of $ [1-r, 1+r] $ with $ 500r+1 $ points, and $ K $ is the discretization of $ [3\pi, 50] $ with $ 1000 $ points
Figure 9. Reconstruction of two different bends. The black lines represent the initial shape of $ \Omega $, and the red the reconstruction of $ \Omega $. In both cases, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points, and the reconstruction is obtained by (94). On the left, the initial parameters of the bend are $ (x_c, r, \theta) = (4, 10, \pi/12) $ and on the right, $ (x_c, r, \theta) = (2, 5, \pi/6) $
Figure 10. Reconstruction of a waveguide with two successive bends. The black lines represent the initial shape of $ \Omega $, and the red the reconstruction of $ \Omega $, slightly shifted for comparison purposes. In both cases, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points. The parameters of the two bends are $ (x_c^{(1)}, r^{(1)}, \theta^{(1)}) = (2, 10, \pi/30)) $ and $ (x_c^{(2)}, r^{(2)}, \theta^{(2)}) = (3.8, 8, -\pi/20)) $
Figure 11. Reconstruction of two shape defects. In black, the initial shape of $ \Omega $, and in red the reconstruction, slightly shifted for comparison purposes. In both cases, $ K $ is the discretization of $ [0.01, 70]\setminus \{[n\pi-0.2, n\pi+0.2], n\in \mathbb{N}\} $ with $ 300 $ points, $ X $ is the discretization of $ [3, 4.5] $ with $ 151 $ points and we use the algorithm (91) with $ \lambda = 0.08 $ to reconstruct $ s_0 $ and $ s_1 $. On the left, $ h(x) = \frac{5}{16}\textbf{1}_{3.2\leq x\leq 4.2}(x-3.2)^2(4.2-x)^2 $ and $ g(x) = -\frac{35}{16}\textbf{1}_{3.4\leq x\leq 4}(x-3.4)^2(4-x)^2 $. On the right, $ h(x) = \frac{125}{16}\textbf{1}_{3.7\leq x\leq 4.2}(x-3.7)^2(4.2-x)^2 $ and $ g(x) = \frac{125}{16}\textbf{1}_{3.4\leq x\leq 4}(x-3.4)^2(4-x)^2 $
Figure 12. Reconstruction of $ h_n $ for $ 0\leq n\leq 9 $, where $ h(x) = 0.05\textbf{1}_{\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|\leq 1}\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|^2 $. In blue, we represent $ h_n $ and in red the reconstruction of $ h_{n_{\text{app}}} $. In every reconstruction, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3.8, 4.2] $ with $ 101 $ points and we use the algorithm (91) with $ \lambda = 0.002 $ to reconstruct every $ h_n $
Figure 13. Reconstruction of an inhomogeneity $ h $, where $ h(x) = 0.05\textbf{1}_{\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|\leq 1}\left|\left(\frac{x-4}{0.05}, \frac{y-0.6}{0.15}\right)\right|^2 $. On the left, we represent the initial shape of $ h $, and on the right the reconstruction $ h_{\text{app}} $. Here, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3.8, 4.2] $ with $ 101 $ points and we use the algorithm (91) with $ \lambda = 0.002 $ to reconstruct every $ h_n $. We used $ N = 20 $ modes to reconstruct $ h $
Figure 14. Reconstruction of an inhomogeneity $ h $. From top to bottom, the initial representation of $ h $, the reconstruction $ h_{\text{app}} $ and the reconstruction $ h_{\text{app}} $ with the knowledge of the positivity of $ h $. Here, $ K $ is the discretization of $ [0.01, 150] $ with $ 200 $ points, $ X $ is the discretization of $ [3, 6] $ with $ 3001 $ points and we use the algorithm (91) with $ \lambda = 0.01 $ to reconstruct every $ h_n $. We used $ N = 20 $ modes to reconstruct $ h $
Table 1. Relative errors on the reconstruction of $ (x_c, r, \theta) $ for different bends. In each case, $ K $ is the discretization of $ [0.01, 40] $ with $ 100 $ points, and the reconstruction is obtained by (94)
$ (x_c, r, \theta) $ $ (2.5, 40, \pi/80) $ $ (4, 10, \pi/12) $ $ (2, 5, \pi/6) $
relative error on $ x_c $ $ 1.8\% $ $ 0\% $ $ 7.6\% $
relative error on $ r $ $ 3.0\% $ $ 7.5\% $ $ 23.8\% $
relative error on $ \theta $ $ 1.6\% $ $ 10.7\% $ $ 16.9\% $
Table 2. Relative errors on the reconstruction of $ h $ for different amplitudes $ A $. We choose $ h(x) = A\textbf{1}_{3\leq x\leq 5}(x-3)^2(5-x)^2 $ and $ g(x) = 0 $. In every reconstruction, $ K $ is the discretization of $ [0.01, 40]\setminus \{[n\pi-0.2, n\pi+0.2], n\in \mathbb{N}\} $ with $ 100 $ points, $ X $ is the discretization of $ [1, 7] $ with $ 601 $ points and we use the algorithm (91) with $ \lambda = 0.08 $ to reconstruct $ h' $
$ A $ $ 0.1 $ $ 0.2 $ $ 0.3 $ $ 0.5 $
$ \Vert h-h_{\text{app}}\Vert_{\text{L}^2( \mathbb{R})}/\Vert h\Vert_{\text{L}^2( \mathbb{R})} $ $ 8.82\% $ $ 10.41\% $ $ 15.12\% $ $ 54.99\% $
Why is acceleration directed inward when an object rotates in a circle?
Somebody (in a physics video) said that the acceleration points inward if you rotate a ball on a rope around yourself.

The other man (an ex Navy SEAL, also on YouTube) said that obviously it points outward, because if you release the ball, it flies off in an outward direction. Then somebody said that the second man doesn't know physics; the acceleration points inward.
acceleration vectors rotational-kinematics centripetal-force
R S
Please please please link the video

– Noumeno

I'm glad that guy is an ex Navy Seal.

– garyp

The second person's argument is like saying that, because if you stopped lifting weights they'd fall down on you, you must be pulling them inward. It's a sign error.

– J.G.

Answers here adopt physics technical terminology, where "acceleration" means rate of change of velocity vector. That is directed in the direction in which the velocity is changing, so inwards during circular motion, and zero after release. But I expect the ex SEAL guy is using a non-technical language where he means that after the ball is released the velocity is now outwards compared to what it would have been if it had not been released. So he is using the idea of "a difference in the velocity compared to what would have been the case if ..." rather than "rate of change of velocity".

– Andrew Steane

xkcd.com/123
As a rule of thumb: when somebody states that something is obvious you should really doubt everything he says. Especially if he is an ex navy seal :)
Think about the ball moving in a circle: Newton's first law of dynamics states that if an object is left alone, meaning it is not subjected to forces, it will keep moving with the same velocity. Remember that velocity is a vector, so this statement means that the object left alone would also keep the same direction of motion.
But in the case of a ball moving in circle of course its direction of motion changes with time, this must imply that the ball is subjected to a force (remember that a force $\vec{F}$ creates an acceleration $\vec{a}$ according to the second law of dynamics: $\vec{F}=m\vec{a})$. Ok, but the force pulls inward or outward? (That is analogous to asking: the acceleration is directed inward or outward?) Well think again about the velocity of the ball: as time passes the velocity curves inward, this must mean that the acceleration is directed inward.
But why then if you let the ball free it moves outward? The answer is that it doesn't really move outward, it simply begins moving in a straight line again since you are no longer applying force to it, as the first principle of dynamics states. Everything is consistent. Of course moving in a straight line in this context means moving away from the previous location of the rotational motion, so an observer has the impression of the ball moving away from the center, when the ball is as stated simply continuing his motion with the velocity it had at the time of release.
Noumeno
You can't push rope.
Intuitively, rope is only useful under tension and not compression - you can pull an object with a rope, but not push it. It should be obvious that when you swing a ball on a rope, you are pulling on the rope. You can't use just a rope to accelerate an object away from you (i.e. push something), you can only use it to accelerate an object toward you (i.e. pull something).
From this very simple fact, we can surmise that when swinging a ball on a rope, the ball is accelerating toward the center, since it is impossible for the rope to impart a force on the ball in any other direction. To suggest that the ball is accelerating outward when it's released would mean that the person provides a "push" when letting go, and that the rope is capable of transmitting such a push, both of which are false - even if the person swinging the ball does "push" when they let go, there is simply no way for a rope to transmit that push to the ball.
Nuclear Hoagie
Elegant. 15 chars.
– YSC
explanation golf
– bunyaCloven
Great thought exercise. Might be worth noting that the acceleration is still inward if the circle constraint is not via a rope or other tension, like a curved track pushing a marble/car inward.
– aschepler
I would explain the correct answer without reference to forces. Basically, this is a question about acceleration, and I would not introduce forces or another reference system in addition to the one in which the motion is described as circular.
The very simple kinematic fact is that the acceleration vector at a given time $t$ is defined as the derivative of the velocity at the same time $t$. If one would like to avoid derivatives, it is enough to analyze the average acceleration over a small interval of time $\Delta t$. Provided $\Delta t$ is small enough that the value of the average acceleration $\vec{a}_m=\frac{{\vec v}(t+\Delta t) - \vec{v}(t)}{\Delta t}$ does not change significantly for any smaller interval of time, this average acceleration can be used as the acceleration $\vec{a}(t)$.
Now, in a circular motion (uniform or not, does not matter), the velocities at two times $t$ and $t+\Delta t$ are not aligned (the velocity is always tangent to the circle). Moreover, whatever is the direction of $\vec{v}(t)$, $\vec{v}(t+\Delta t)$ bends toward the side of the trajectory where the center of the circle is.
The following picture shows the geometry
In particular, the difference vector ${\vec v}(t+\Delta t) - \vec{v}(t)$ has the tail on the tip of the vector $\vec{v}(t)$ and its tip on the tip of the vector ${\vec v}(t+\Delta t)$ (parallelogram rule). It should be clear that it is impossible to have an acceleration pointing in the direction opposite to the direction where the trajectory bends.
PostScriptum for more formal readers:
Of course, the previous elementary argument can be made completely formal by using a little differential geometry of curves in 2 and 3 dimensions.
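Concretely, for readers comfortable with derivatives, here is the standard worked example (assuming uniform circular motion of radius $R$ and angular speed $\omega$): the position is $\vec{r}(t)=R\,(\cos\omega t,\ \sin\omega t)$, so differentiating twice gives the velocity $\vec{v}(t)=R\omega\,(-\sin\omega t,\ \cos\omega t)$ and the acceleration $\vec{a}(t)=-R\omega^{2}\,(\cos\omega t,\ \sin\omega t)=-\omega^{2}\,\vec{r}(t)$. The minus sign shows that $\vec{a}$ points from the ball back toward the center, with magnitude $\omega^{2}R=v^{2}/R$.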
GiorgioP
Let's consider an everyday example: driving a car or a bike. If you drive in a straight line at constant speed you do not experience any force. That's boring (and not part of your question), so let's drive in a circle. If we drive in a circle in the counter-clockwise direction, we are constantly turning to the left.
However, in order to move to the left we must experience a force, which is pushing/pulling us to the left. Hence, taking this perspective it becomes clear that the force we are experiencing must be directed inwards, to the center of the circle.
The situation is reversed if we take the perspective of being the inwards pulling force. So if we have a mass on a string and we rotate it in a circle, the mass becomes the car/bike of the former story and we take the role of the inwards pulling force. Since the mass experiences an inwards pulling force, and since forces come in action-reaction pairs (Newton's third law), we must experience an outwards directed reaction force.
Hence, whether we experience a force which is directed inwards or outwards depends on the role we play. Hope this helps.
Semoi
As usual, a picture is worth 1,000 words.
The object is the large dot. It rotates around the circle counterclockwise. You can see it at two different times. The arrows represent the velocity of the object, the direction indicating the direction it is moving. The acceleration is, in effect, the change between the two velocities at those two times - and in general, incorporates both the change to the direction, as well as the speed.
Note the direction of the arrows. Which way does the second arrow (counterclockwise from the first) tilt, compared to the first? Toward, or away from, the center? Thus in what direction is the tendency to accelerate?
(Note: don't let the different positions of the arrows fool you. That's part of the trick with vectors - they live in their own little "world", so to speak, and always come out of the same point therein, but that "world" is "pasted" onto the object as it moves.)
The velocity has to tilt inward, because that way it stays near the central point. As it moves forward in any direction away from the circle rim, it also needs to move a little bit inward on the next "step", so to speak, to compensate for that.
And in terms of forces, what he misses is that if you are at the circle's center and holding it by a rope, then you are providing the acceleration through the force you are applying via the rope. Which way do you have to pull to keep the object going in the circular path? Away from you, or toward you?
The_Sympathizer
If you want an object to rotate around a point you need to change its velocity, because if you don't, the object will continue to go straight with its current velocity. The change you need for the object to stay on a circle is not a change in the magnitude of the velocity but a change in its direction. You want the direction of the velocity to change constantly towards the middle point that you want your object to rotate around, in order to make the object curve towards that point instead of going straight. This change in velocity is your (centripetal) acceleration, WHICH POINTS TO THE MIDDLE (this acceleration is caused by the rope). If you were to stop accelerating towards the middle (rope breaking) there would be no change in the object's velocity and it would fly straight wherever its current velocity is pointing.
BUT if you consider the non-inertial system (which corresponds to imagining being stuck to the rope or the object and thus seeing everything around you moving instead of you moving yourself), you can calculate that there is a force acting outwards, a so-called "fictitious force". This force's acceleration is called the centrifugal acceleration and is equal in magnitude (but opposite in direction) to the centripetal acceleration.
If you haven't heard of fictitious forces and inertial systems, ignore the second paragraph. For better visualisation google the following in images: "centripetal force and centrifugal force".
Roger
because if you release the ball, it's going to fly in outward direction
The other man is thinking from a different frame of reference, and they're disagreeing on terminology.
When you release the ball, it travels in a straight line. This is easily shown by looking at the hammer throwing discipline, which is pretty much the perfect practical experiment to our theoretical discussion.
But the other man says "outward". That is because he is thinking that the circular path (when holding the ball) is the normal path, and the straight path (when releasing the ball) is outside the circle.
But that is not an objective frame. In fact, it's the other way around.
All objects that are not under specific forces travel in a straight line. In order to have an object travel differently, you must apply a force to it. So let's think back to our ball throwing example, but let's start from a straight line situation.
We want to make the ball curve left (and end up in a circular path). So which way do we push on the ball? Left. And because we want the path to be circular, we supply a constant left pressure on the ball (where "left" rotates as the ball rotates).
If you draw this on a diagram, you will see that this "left force" points towards the center.
The black path shows the trajectory of the ball. The red arrows are the direction the ball is traveling in. The blue arrows show you the force that you have to apply in order to make the ball go round, i.e. "rotating" the red arrow.
The blue arrows point inward. In a better drawn diagram, they'd be pointing to the center of the circle. This means that it is an inward force.
Intuitively, we could learn this by participating in the hammer throw competition. Think about this: when the hammer thrower is spinning around, does he feel like he's performing a pulling or pushing motion?
Pulling. Because the hammer keeps trying to move in a straight line (which eventually gets further away from the thrower). To prevent that from happening, the hammer thrower pulls on the hammer, therefore applying inward force to the hammer.
As an aside, to resolve the "different frame of reference" conflict here:
The inward force is called the centripetal force. The alleged outward force is called the centrifugal force. You'll find many opinions online that claim centrifugal force doesn't exist. And they're mostly right (though I disagree that we therefore should not talk about it at all).
Centrifugal force is actually the desire for the object to move in a straight line (which is not a force, it is the absence of force). But if you think that the "normal" trajectory is the circular one (like the Navy SEAL in your question does), then this straight line appears to be a deviation from the "normal" trajectory.
Which leads the Navy SEAL to conclude that there must be a force causing this deviation.
But he's got it the wrong way around. The circular path was the deviation, and it was kept alive because of an inward force constantly deviating the normal trajectory. When that inward force stopped, the trajectory stopped being deviated, and therefore took the "normal" path again, i.e. moving in a straight line.
Centrifugal force is a perceived force. It's not real. But because the object wants to move in a straight line and fights going in a circle, the supplier of the inward force feels as if the object is trying to "pull away" from him, which is why he perceives it as a force. But it isn't.
Flater
$\begingroup$ "the supplier of the inward force feels as if the object is trying to 'pull away' from him, which is why he perceives it as a force. But it isn't." This paragraph is misleading at best. The force the supplier feels definitely is a real force, otherwise they would feel it. It's Newton's third law of motion: supplier pulls in, ball pulls out. $\endgroup$
– Vaelus
$\begingroup$ @Vaelus: I actually agree that centrifugal force exists (in the same way I think that "cold" exists, even though it's technically only an absence of heat), but the centrifugal force is perceived as being a fundamental force, when it is really just the combination of inertia (straight line movement), centripetal force (inward force) and a warped frame of reference (i.e. considering the circular path as the "normal" unpowered path). Perceptions are real, but can be skewed, as is the case here; hence why I call it a perceived force. $\endgroup$
– Flater
$\begingroup$ @Vaelus: Similar to my cold/hot example, while I absolutely agree that we can semantically discuss centrifugal forces (just like we can say that something is cold), I do feel like a more scientific approach focuses on the actual fundamentals. I.e. scientists talk about an amount of energy. And similarly, kineticists (if that is not a word, it totally should be) talk about centripetal force and inertia, not centrifugal force. Science should avoid perception, which is inherently subjective, and instead aim to objectively focus on the fundamentals. $\endgroup$
$\begingroup$ Centrifugal force may only be an artifact of rotating frames of reference, but the force the anchor feels from the ball isn't centrifugal force. It's the ball which experiences centrifugal force in the rotating frame of reference, not the anchor (which cancels out the centripetal force from the anchor, because in the rotating frame of reference it's not accelerating). The force on the anchor from the ball exists in all frames of reference. (The anchor does experience centrifugal force in the rotating frame of reference centered on the ball, but it is away from the ball, not toward it). $\endgroup$
$\begingroup$ All this to say, a better example of perceived centrifugal force is the outward force felt by someone standing on a spinning platform. $\endgroup$
Assume that there are only two nearby things in the universe:
you, and
an object at the end of a string that you're swinging in a circle.
If you let go of the string, the object flies off in a straight line, travelling away from you at a constant velocity. Newton's first law says that an object that's travelling at a constant velocity experiences no (net) force: after you've let go, there aren't any forces on the object. If you're still holding onto the string, the object would be travelling away from you – but something's stopping it: a force is opposing that motion (the tension in the string, from you holding onto the end).
In what direction do you have to pull an object to stop it flying outwards?
Newton's second law says that, if there's a (net) force on an object, the object's accelerating in the same direction as the force, so the acceleration must be in the same direction as your pulling.
But why does the object keep going at the same speed, if it's constantly accelerating? Well, for the same reason that your car accelerates when you press the accelerator, then accelerates (in the opposite direction – also known as "deceleration") when you press the brake, but doesn't have to keep getting faster forever.
If acceleration is in the same direction as motion, you get faster.
If acceleration is in the opposite direction to motion, you get slower.
If acceleration is completely sideways to motion, you don't get faster or slower; you just change direction without changing speed.
(If you want to be fancy, you can split all different directions of acceleration up into forwards / backwardsness and sidewaysness, and work out how much your speed changes and how much you change direction, but that isn't necessary for understanding this.)
If the acceleration is always sideways (perpendicular) to motion, then the object will just keep changing direction without speeding up or slowing down. And if you draw a diagram, you'll see that the inwards / outwards line is always sideways compared to the outside of the circle; if you keep pulling towards the circle, the object will keep going 'round it.
wizzwizz4
Trying the shortest possible answer:
The ball is not a rocket. It has no mechanism to accelerate on its own; that is, it cannot change its own velocity. Therefore, the ball cannot accelerate once it is released. The ball flies straight away (Newton's first law).
The mechanism by which it changes its velocity is obviously the rope, providing an external force.
It's the same as pulling a heavy block with a rope. You'll feel a counter-force (stiction force; centripetal force for the rotating ball), but the resulting acceleration is towards you.
HawkingRadiation
If there were no force, the object would move along in a straight line along the tangent. But since that is not happening and the object is moving in a circle, there must be a force acting inwards that is constantly changing its direction. This is called a centripetal force.
Neelim
There are some detailed explanations and some really good discussions here, but the confusion about the direction of acceleration has a very simple and short answer: it depends on the reference frame. You must specify which reference frame you're in while defining your acceleration. If you're standing on the ground and look at the spinning ball, then the acceleration is inwards (centripetal), but if you were to choose the ball as your reference frame, then the direction of acceleration flips (centrifugal).
You see, Newton's laws only work in an inertial reference frame (a frame of reference that isn't accelerating). The ground is (very much) an inertial reference frame, but the spinning ball definitely isn't.
In the reference frame of the ball, you must introduce a pseudo-force that is opposite in direction but equal in magnitude to the actual force (the string pulling the ball inwards). So, in that non-inertial reference frame (ball's), the acceleration is outwards.
Here's another classic example to make the idea rock-solid: suppose you're in a rocket in space and that rocket is accelerating upwards with an acceleration a. When you're inside the rocket, you'll feel as if something is pulling you downwards. But if someone is looking at you from outside the rocket, they'll tell you that no, the rocket is moving upwards and that's what is pushing against you.
Do you see it here as well? Your reference frame (inside the rocket) is non-inertial, so you conclude that there's this magical force which is pulling you downwards, so the acceleration must be down as well. But someone floating outside (inertial reference frame) will conclude the exact opposite. You're clearly accelerating upwards from his point of view.
Apekshik Panigrahi
The original question mentions an object (ball), a rope and someone swinging the rope. This answer explains the point of view of someone in the ball, but OP does not talk about that. The person who said "acceleration goes out" explicitly had an exterior perspective, the one of the rope holder.
– Rainer Blome
Thank you for the comment, but I'm very well aware of that. I mention both these reference frames because these two are confused with each other a lot. The subtle difference between these two is what causes everyone to either say acceleration is inwards or outwards. The distinction isn't explicit in our minds and we tend to make mistakes regarding it, so that might be one of the reasons why their opinions on the problem differ.
– Apekshik Panigrahi
A moving object continues in a straight line unless a force is applied to it. If a ball is whirled in a circle at the end of a string, it is caused to move in a circle by the pull of the string. If the string breaks, the ball proceeds in a straight line unless gravity pulls it downward. The ball's straight line is a tangent to the circle. See the previous drawings showing that.
If there were a centrifugal force, the released ball would move from its position directly away from the center of the circle, like the arrow on the symbol for Mars. It does not do that.
dennis g
\begin{document}
\title{Sweep maps: A continuous family of sorting algorithms}
\date{\today}
\author{Drew Armstrong} \address{Dept. of Mathematics\\
University of Miami \\
Coral Gables, FL 33146} \email{[email protected]}
\author{Nicholas A. Loehr} \address{Dept. of Mathematics\\
Virginia Tech \\
Blacksburg, VA 24061-0123 \\ and Mathematics Dept. \\ United States Naval Academy \\ Annapolis, MD 21402-5002} \email{[email protected]}
\thanks{This work was partially supported by a grant from the Simons
Foundation (\#244398 to Nicholas Loehr).}
\author{Gregory S. Warrington} \address{Dept. of Mathematics and Statistics\\
University of Vermont \\
Burlington, VT 05401} \email{[email protected]}
\thanks{Third author supported in part by National Science Foundation
grant DMS-1201312.}
\vspace*{.3in}
\begin{abstract}
We define a family of maps on lattice paths, called \emph{sweep maps},
that assign levels to each step in the path and sort steps according to
their level. Surprisingly, although sweep maps act by sorting, they
appear to be bijective in general. The sweep maps give
concise combinatorial formulas for the $q,t$-Catalan numbers, the
higher $q,t$-Catalan numbers, the $q,t$-square numbers, and many
more general polynomials connected to the nabla operator and rational
Catalan combinatorics. We prove that many algorithms that have appeared
in the literature (including maps studied by
Andrews, Egge, Gorsky, Haglund, Hanusa, Jones, Killpatrick,
Krattenthaler, Kremer, Orsina, Mazin, Papi, Vaill{\'e}, and the
present authors) are all special cases of the sweep maps or their
inverses. The sweep maps provide a very simple unifying framework
for understanding all of these algorithms.
We explain how inversion of the sweep map (which is an open
problem in general) can be solved in known special cases by finding a
``bounce path'' for the lattice paths under consideration.
We also define a generalized sweep map acting on words over arbitrary
alphabets with arbitrary weights, which is also conjectured to be bijective. \end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
This paper introduces a family of sorting maps on words that we call
\emph{sweep maps}. In its simplest form, a sweep map $\sw_{r,s}$ uses coprime parameters $r$ and $s$ to associate a \emph{level} $l_i$ to each letter $w_i$ in a word $w=w_1w_2\cdots w_n$ consisting of $|s|$
copies of the letter $\mathrm{N}$ and $|r|$ copies of the letter $\mathrm{E}$. (Note that $r$ or $s$ may be negative.) This assignment is done recursively: use the convention that $l_0=0$; for $i\geq 1$ we set $l_i = l_{i-1}+r$ if $w_i=\mathrm{N}$ and $l_i=l_{i-1}+s$ if $w_i=\mathrm{E}$. The word $\sw_{r,s}(w)$ is then obtained by sorting the letters in $w$ according to level, starting with $-1,-2,-3,\ldots$, then continuing with $\ldots,2,1,0$. Figure~\ref{fig:sweepmap1} provides an example of $\sw_{5,-3}$ acting on the word $w=\NE{ENEENNEE}$. (Here we have identified $w$ with a lattice path in the plane: each $\mathrm{N}$ corresponds to a unit-length north step, while each $\mathrm{E}$ corresponds to a unit-length east step.)
\begin{figure}
\caption{The action of the sweep map $\sw_{5,-3}$ on the word
$w=\NE{ENEENNEE}$. Next to each step in $w$ is written
its level, $l_i$. Each step in $\sw_{5,-3}(w)$ has been
labeled by the level of the corresponding step in $w$.}
\label{fig:sweepmap1}
\end{figure}
Surprisingly, even though sweep maps act by sorting, they are (apparently) bijective. The reader may find it useful to check this bijectivity by hand for $\sw_{3,-2}$ acting on the set of all lattice paths from $(0,0)$ to $(3,2)$. As detailed in Conjecture~\ref{conj:gen-sweep}, bijectivity seems to hold even for the general sweep maps over arbitrary alphabets with arbitrary weights, described in Section~\ref{subsec:gen}. The bijectivity of the general sweep maps appears to be a very subtle and difficult fact.
\begin{remark}
The order in which the levels are traversed is a key ingredient to
bijectivity. For example, in the case of $r=3$, $s=-2$,
if we scan levels in the order $k=\ldots,2,1,0,-1,-2,\ldots$, both
of the paths $\NE{NENEE}$ and $\NE{NEENE}$ map to $\NE{NNEEE}$. \end{remark}
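For readers who wish to experiment, the following short Python sketch (not part of the formal development; the helper name \texttt{sweep\_map} is ours) implements the map just described, computing the levels $l_i$ and then collecting letters level by level; ties within a level are broken by scanning $w$ from right to left, which matches the negative-type map $\sw^{-}_{r,s}$ of Section~\ref{subsec:def-sweep}. It reproduces the example of Figure~\ref{fig:sweepmap1} and automates the bijectivity check suggested above.

\begin{verbatim}
def sweep_map(w, r, s):
    # Levels via the east-north convention: l_0 = 0; an N adds r, an E adds s.
    levels, l = [], 0
    for c in w:
        l += r if c == 'N' else s
        levels.append(l)
    # Collect letters level by level: first -1, -2, ..., then ..., 2, 1, 0,
    # scanning w from right to left within each level.
    neg = sorted({k for k in levels if k < 0}, reverse=True)
    nonneg = sorted({k for k in levels if k >= 0}, reverse=True)
    return ''.join(w[i] for k in neg + nonneg
                        for i in reversed(range(len(w))) if levels[i] == k)

# Figure 1 example:  sweep_map('ENEENNEE', 5, -3) == 'EEENENNE'
# Bijectivity check suggested above (paths from (0,0) to (3,2), r=3, s=-2):
from itertools import permutations
words = {''.join(p) for p in permutations('NNEEE')}
print(len(words), len({sweep_map(w, 3, -2) for w in words}))  # both are 10
\end{verbatim}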
The sweep maps encode complex combinatorial information related to $q,t$-Catalan numbers, the Bergeron-Garsia nabla operator, and other constructions arising in representation theory, algebraic geometry, and symmetric functions. Researchers have discovered special cases of the sweep map in many different guises over the last fifteen years or so. One of the goals of this paper is to present a unifying framework for all of this work. In~\cite{loehr-mcat,loehr-thesis,loehr-trapz}, Loehr introduced bijections on $m$-Dyck paths, as well as generalizations to lattice paths contained in certain trapezoids, that turn out to be special cases of the sweep map. The bijection in the case $m=1$ also appears in a paper of Haglund and Loehr~\cite{HL-park} and is foreshadowed by a counting argument in Haglund's seminal paper on $q,t$-Catalan numbers~\cite[proof of Thm. 3.1]{Hag-bounce}. The inverse bijection in the case $m=1$ appears even earlier, where it was used by Andrews et al.~\cite{AKOP-lie} in their study of $ad$-nilpotent $\mathfrak{b}$-ideals in the Lie algebra $\mathfrak{sl}(n)$. (See also~\cite{vaille}.) More recently, special cases of the sweep map have arisen while studying lattice paths in squares~\cite{LW-square}; partition statistics~\cite{LW-ptnid}; simultaneous core partitions~\cite{AHJ}; and compactified Jacobians~\cite{GM-jacI,GM-jacII}. We discuss a number of these connections in more detail in Section~\ref{sec:alg-sweep}.
We suspect that to the typical mathematician, the most interesting question regarding the sweep maps is whether they are bijective (as conjectured in Conjecture~\ref{conj:gen-sweep}). For a researcher interested in the $q,t$-Catalan numbers, however, of comparable interest is the connection between the sweep maps and statistics on lattice paths such as $\mathsf{area}$, $\mathsf{bounce}$ and $\mathsf{dinv}$. Since shortly after Haiman's introduction of $\mathsf{dinv}$, it has been known that a ``zeta map'' takes $\mathsf{dinv}$ to $\mathsf{area}$ to $\mathsf{bounce}$. One point of view, then, is that rather than having three statistics on Dyck paths, we have one statistic --- $\mathsf{area}$ --- and a sweep map. Many polynomials related to the $q,t$-Catalan numbers can be defined using only an ``area'' and an appropriate sweep map. That these polynomials are jointly symmetric (conjecturally) supports the utility of this view (see Section~\ref{sec:area-qtcat}).
The structure of this paper is as follows. Section~\ref{sec:basic} introduces the necessary background on lattice paths. We then define sweep maps and present Conjecture~\ref{conj:gen-sweep} on their bijectivity in Section~\ref{sec:intro-sweep}. Section~\ref{sec:alg-sweep} reviews various algorithms that have appeared in the literature that are equivalent to special cases of the sweep map, while Section~\ref{sec:invert-sweep} describes how to invert these maps (when known). Finally, Section~\ref{sec:area-qtcat} shows how the sweep maps may be used to give concise combinatorial formulas for the higher $q,t$-Catalan numbers and related polynomials formed by applying the nabla operator to appropriate symmetric functions and then extracting the coefficient of $s_{(1^n)}$.
An extended abstract of this paper appears as~\cite{ALW-sweep-fpsac}.
\section{Partitions, Words, and Lattice Paths} \label{sec:basic}
This section introduces our basic conventions regarding partitions, words, and lattice paths. Integer parameters $a$ and $b$ will serve to restrict our attention to such objects fitting within a rectangle of height $a$ and width $b$. Integer parameters $r$ and $s$ will be used to assign a ``level'' to the components of the various objects considered. Of particular interest is the case of $r=b$ and $s=-a$. The constraint of $\gcd(a,b)=1$ arises naturally in some particular sweep maps such as the map due to Armstrong, Hanusa and Jones~\cite{AHJ} and the map due to Gorsky and Mazin~\cite{GM-jacI,GM-jacII} (see Sections~\ref{subsec:zetamap} and~\ref{subsec:gorsky-mazin}, respectively).
Let $a,b \in \mathbb{Z}_{\geq 0}$. Integer partition diagrams with at most $a$ parts and largest part at most $b$ (drawn according to the English convention) fit in the rectangle with vertices $(0,0)$, $(b,0)$, $(0,a)$ and $(b,a)$. We denote the set of such partitions (identified with their diagrams, which are collections
of unit squares in the first quadrant) by $\mathcal{R}^{\mathrm{ptn}} = \mathcal{R}^{\mathrm{ptn}}(a,b)$. Let $\{\mathrm{N},\mathrm{E}\}^*$ denote the set of all words $w = w_1w_2\cdots w_n$, $n\geq 0$, for which each $w_j \in \{\mathrm{N},\mathrm{E}\}$, and let $\mathcal{R}^{\mathrm{word}} = \mathcal{R}^{\mathrm{word}}(\mathrm{N}^a\mathrm{E}^b)$ denote the subset of words consisting of $a$ copies of $N$ and $b$ copies of $E$. Finally, let $\mathcal{R}^{\mathrm{path}} = \mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$ denote the set of lattice paths from $(0,0)$ to $(b,a)$ consisting of $a$ unit-length north steps and $b$ unit-length east steps.
There are natural bijections among the three sets $\mathcal{R}^{\mathrm{ptn}}$, $\mathcal{R}^{\mathrm{word}}$ and $\mathcal{R}^{\mathrm{path}}$. Each word $w\in \mathcal{R}^{\mathrm{word}}$ encodes a lattice path in $\mathcal{R}^{\mathrm{path}}$ by letting each $\mathrm{E}$ correspond to an east step and each $\mathrm{N}$ correspond to a north step. The frontier of a partition $\pi\in \mathcal{R}^{\mathrm{ptn}}$ also naturally encodes a path in $\mathcal{R}^{\mathrm{path}}$. We write $\mkwd(P)$ or $\mkwd(\pi)$ for the word associated with a path $P$ or a partition $\pi$, respectively. Operators $\mkpath$ and $\mkptn$ are defined analogously. For example, taking $a=3$ and $b=5$, the path $P$ shown on the left in Figure~\ref{fig:sweepmap1} has $\mkwd(P)=\NE{ENEENNEE}$ and $\mkptn(P)=(3,3,1)$. For the word $w=\NE{EEENENNE}$, $\mkpath(w)$ is the path shown on the right in Figure~\ref{fig:sweepmap1}, whereas $\mkptn(w)=(4,4,3)$.
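As a quick illustration of these encodings (a sketch in our own notation, verified against the two examples above), the partition $\mkptn(w)$ can be read off from the $x$-coordinates of the north steps of $\mkpath(w)$:

\begin{verbatim}
def mkptn(w):
    # Parts of the partition are the x-coordinates of the north steps of
    # mkpath(w), listed from the top row down (weakly decreasing).
    x, parts = 0, []
    for c in w:
        if c == 'E':
            x += 1
        else:
            parts.append(x)
    return tuple(reversed(parts))

# mkptn('ENEENNEE') == (3, 3, 1);   mkptn('EEENENNE') == (4, 4, 3)
\end{verbatim}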
Let $r,s \in \mathbb{Z}$. We assign a \textbf{level} to each square of a partition and each step of a path in the following manner. First, assign to each lattice point $(x,y)$ the \textbf{$\boldsymbol{(r,s)}$-level} $ry+sx$. Assign to each lattice square the level of its southeast corner. We will have occasion to consider two different conventions for associating levels to north and east steps of paths. For the \textbf{east-north (E-N)} convention, each east (resp. north) step inherits the level of its eastern (resp. northern) endpoint. For the \textbf{west-south (W-S)} convention, each east (resp. north) step inherits the level of its western (resp. southern) endpoint. Figure~\ref{fig:labeling} illustrates the various levels relevant to the word $\NE{NNENE}$ for $r=8$ and $s=-5$.
\begin{figure}
\caption{Illustration of level-assignment conventions for lattice
points, squares, path steps with the east-north convention, and
path steps with the west-south convention for the case of $r=8$
and $s=-5$.}
\label{fig:labeling}
\end{figure}
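In code (a sketch; the helper name \texttt{step\_levels} is ours), the two step-labeling conventions differ only in whether a step is labeled before or after its coordinate is incremented; the E-N levels are exactly the running sums used by the sweep map sketched in Section~\ref{sec:intro}.

\begin{verbatim}
def step_levels(w, r, s, convention='EN'):
    # Lattice point (x, y) has (r, s)-level r*y + s*x.  Under 'EN' a step
    # inherits the level of its east/north endpoint; under 'WS' that of
    # its west/south endpoint.
    x = y = 0
    out = []
    for c in w:
        if convention == 'WS':
            out.append(r * y + s * x)
        if c == 'N':
            y += 1
        else:
            x += 1
        if convention == 'EN':
            out.append(r * y + s * x)
    return out

# step_levels('NNENE', 8, -5) == [8, 16, 11, 19, 14]   (E-N convention)
\end{verbatim}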
Let $\mathcal{D}^{\mathrm{path}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)$ denote the set of lattice paths in $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$ whose steps all have nonnegative $(r,s)$-levels under the E-N convention. We call these paths \textbf{$\boldsymbol{(r,s)}$-Dyck paths}. A word $w\in\mathcal{R}^{\mathrm{word}}(\mathrm{N}^a\mathrm{E}^b)$ is an \textbf{$\boldsymbol{(r,s)}$-Dyck word} iff $\mkpath(w)$ is an $(r,s)$-Dyck path;
let $\mathcal{D}^{\mathrm{word}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)$ denote the set of such words. A partition $\pi\in\mathcal{R}^{\mathrm{ptn}}(a,b)$ is an \textbf{$\boldsymbol{(r,s)}$-Dyck partition} iff $\mkpath(\pi)$ is an $(r,s)$-Dyck path;
let $\mathcal{D}^{\mathrm{ptn}}_{r,s}(a,b)$ denote the set of such partitions.
\section{The Sweep Map} \label{sec:intro-sweep}
We begin in Section~\ref{subsec:def-sweep} by giving an algorithmic description of the basic sweep maps for words over the alphabet $\{N,E\}$. Some minor variations are presented in Sections~\ref{subsec:irrat-sweep} and~\ref{subsec:minor-var}. We then present a general sweep map in Section~\ref{subsec:gen} that acts on words over any alphabet with arbitrary weights assigned to each letter.
\subsection{The Basic Sweep Map} \label{subsec:def-sweep}
Let $r,s\in \mathbb{Z}$. We first describe the \textbf{$\boldsymbol{(r,s)^{-}}$-sweep map}, $\sw^{-}_{r,s}:\{\mathrm{N},\mathrm{E}\}^*\rightarrow\{\mathrm{N},\mathrm{E}\}^*$. Given $w \in \{\mathrm{N},\mathrm{E}\}^*$, assign levels using the east-north convention applied to $\mkpath(w)$. Define a word $y=\sw^{-}_{r,s}(w)$ by the following algorithm. Initially, $y$ is the empty word. For $k=-1,-2,-3,\ldots$ and then for $k=\ldots,3,2,1,0$, scan $w$ from right to left. Whenever a letter $w_i$ is encountered with level equal to $k$, append $w_i$ to $y$. The \textbf{$\boldsymbol{(r,s)^{+}}$-sweep map} $\sw^{+}_{r,s}$ is defined the same way as $\sw^{-}_{r,s}$, except that: the value $0$ is the first value of $k$ used rather than the last; and for each value of $k$, $w$ is scanned from left to right. Figure~\ref{fig:negsweep-ex} illustrates the action of both $\sw^{+}_{3,-2}$ and $\sw^{-}_{3,-2}$ on a path in $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^8\mathrm{E}^{10})$. We define the action of $\sw^{-}_{r,s}$ on a partition $\pi$ as $\mkptn(\sw_{r,s}^{-}(\mkwd(\pi)))$ and the action of $\sw_{r,s}^{-}$ on a path as $\mkpath(\sw_{r,s}^{-}(\mkwd(P)))$; similarly for $\sw^{+}_{r,s}$.
\begin{figure}\label{fig:negsweep-ex}
\end{figure}
Geometrically, we think of each step in $\mkpath(w)$ as a \emph{wand} whose \emph{tip} is located at the lattice point at the end of each step. This visual description reminds us that the maps $\sw^{\pm}_{r,s}$ are assigning levels to steps in the path using the east-north convention. For $r>0$ and $s<0$, $\sw^-_{r,s}$ acts by scanning southwest along each diagonal line $ry+sx=k$ (taking $k$'s in the appropriate order) and ``sweeping up'' the wands whose tips lie on each of these diagonals. The wands are laid out in the order in which they were swept up to produce the output lattice path. The labels on the wand tips are \emph{not} part of the final output. It is clear from this description, or from the original definition, that the sweep map depends only on the ``slope'' $-s/r$; i.e., $\sw_{r,s}^{\pm}=\sw_{rm,sm}^{\pm}$ for every positive integer $m$.
\begin{remark}
The $\mathsf{area}^*_{r,s}$ statistic introduced in
Section~\ref{subsec:area-stat} is computed using the $(r,s)$-levels
of the steps in $w$, which depend not just on the ratio $-s/r$ but
on the specific values of $r$ and $s$.
So it is not always safe to assume that $r$ and $s$ are relatively prime. \end{remark}
Since the sweep maps rearrange the letters in the input word $w$, it is immediate that both $\sw^{-}_{r,s}$ and $\sw^{+}_{r,s}$ map each set $\mathcal{R}^{\mathrm{word}}(\mathrm{N}^a\mathrm{E}^b)$ into itself. We will see later that $\sw^{-}_{r,s}$ maps each set $\mathcal{D}^{\mathrm{word}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)$ into itself.
\subsection{The Irrational-Slope Sweep Map} \label{subsec:irrat-sweep}
So far, for each rational $-s/r$, we have defined the basic sweep map $\sw^-_{r,s}$ (which we also call the \textbf{negative-type} sweep map) and the \textbf{positive-type} sweep map $\sw^+_{r,s}$. We can extend this setup to define sweep maps $\sw_{\beta}$ indexed by \emph{irrational} numbers $\beta$. We regard inputs to $\sw_{\beta}$ as lattice paths $\mkpath(w)\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$ consisting of ``wands'' with tips at their north ends and east ends. There is a ``sweep line'' $y-\beta x=k$ that sweeps through the plane as $k$ decreases from just below zero to negative infinity, then from positive infinity to zero. Because $\beta$ is irrational, every sweep line intersects at most one wand tip in $\mkpath(w)$. We obtain $\sw_{\beta}(w)$ by writing the wands in the order in which the sweep line hits the wand tips.
For fixed $a,b\geq 0$ and fixed $r>0,s<0$, one readily checks that $\sw^{-}_{r,s}(w)=\sw_{\beta}(w)$ for all irrationals $\beta$ with $\beta< -s/r$ and $\beta$ sufficiently close to $-s/r$. On the other hand, $\sw_{r,s}^+(w)=\sw_{\beta}(w)$ for all irrationals $\beta$ with $\beta> -s/r$ and $\beta$ sufficiently close to $-s/r$. This explains the terminology ``positive-type'' and ``negative-type'' sweep map. One approach to studying the sweep map is to understand the ``jump discontinuities'' between $\sw^{+}_{r,s}$ and $\sw^{-}_{r,s}$ that occur at certain critical rationals $-s/r$.
For irrational $\beta$, we say $w$ is a \textbf{$\beta$-Dyck word} iff for all lattice points $(x,y)$ visited by $\mkpath(w)$, $y-\beta x\geq 0$.
\begin{prop} (a) For all irrational $\beta$, if $w\in\{\mathrm{N},\mathrm{E}\}^*$ is a $\beta$-Dyck word then $v=\sw_{\beta}(w)$ is a $\beta$-Dyck word. (b) For all $r,s\in\mathbb{Z}$, if $w\in\{\mathrm{N},\mathrm{E}\}^*$ is an $(r,s)$-Dyck word, then $v=\sw_{r,s}^-(w)$ is an $(r,s)$-Dyck word. \end{prop} \begin{proof} (a) If $\beta<0$ then all words are $\beta$-Dyck words, so the proposition certainly holds. Therefore, in the following we assume $\beta > 0$. Fix $j > 0$; it suffices to show that the point $(x,y)$ reached by the first $j$ steps of $\mkpath(v)$ satisfies $y>\beta x$. Since $\beta$ is irrational and $w$ is a $\beta$-Dyck word, there exists a real $k>0$ such that $v_1\cdots v_j$ consists of all the symbols in $w$ with wand tips at levels higher than $k$. In other words, $v_1\cdots v_j$ is a rearrangement of all the steps of $\mkpath(w)$ that end at points $(x,y)$ with $y>\beta x+k$. Now, $\mkpath(w)$ begins at the origin, which has level zero. In general, this path enters and leaves the region $R_k=\{(x,y): y>\beta x+k\}$ several times. Let $w^{(1)},\ldots,w^{(t)}$ be the maximal substrings of consecutive letters in $w$ such that every step of $w^{(i)}$ ends in $R_k$. Every $w^{(i)}$ begins with a north step that enters $R_k$ from below, and every $w^{(i)}$ except possibly $w^{(t)}$ is followed by an east step that exits $R_k$ to the right. Suppose $w^{(i)}$ consists of $a_i$ north steps and $b_i$ east steps; since the boundary of $R_k$ is a line of slope $\beta$, the geometric fact in the previous sentence implies that $a_i/b_i>\beta$ for $1\leq i\leq t$. By definition, $v_1\cdots v_j$ is some rearrangement of $a=a_1+\cdots+a_t$ north steps and $b=b_1+\cdots+b_t$ east steps. Now $a_i>\beta b_i$ for all $i$ implies $a>\beta b$. Thus, $v_1\cdots v_j$ is a path from $(0,0)$ to $(b,a)$, where $a>\beta b$. Since $j$ was arbitrary, $\mkpath(v)$ is a $\beta$-Dyck path.
(b) It suffices to treat the case $r>0$, $s<0$. Fix $a,b\geq 0$ and $w\in\mathcal{D}^{\mathrm{word}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)$. We can choose an irrational $\beta<-s/r$ so close to $-s/r$ that the region $S=\{(x,y)\in\mathbb{R}^2: \beta x\leq y<(-s/r)x,y\leq a\}$ contains no lattice points, and $v=\sw_{\beta}(w)=\sw_{r,s}^{-}(w)$. Since $w$ is an $(r,s)$-Dyck word and $\beta<-s/r$, $w$ is also a $\beta$-Dyck word. By part (a), $v=\sw_{\beta}(w)$ is a $\beta$-Dyck word. Since $S$ contains no lattice points, $v$ is an $(r,s)$-Dyck word, as needed.
\end{proof}
\subsection{Reversed and Transposed Sweeps} \label{subsec:minor-var}
For a fixed choice of $r$ and $s$, there are four parameters that can be used to define a potential sweep map: \begin{itemize} \item the level to start sweeping at, \item the direction of sweep for a given level (i.e., right-to-left or left-to-right), \item the relative order in which to visit levels (i.e., $k+1$ after level
$k$ versus $k-1$ after level $k$), \item the convention for levels assigned to steps (i.e., using the
west-south convention or the east-north convention). \end{itemize} Empirical evidence suggests that for each of the $8=2^3$ possible choices for the second through fourth parameters, there is a unique choice of starting level that will lead to a bijective sweep map for general $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$. In fact, each of these maps is closely related to the others through the following two natural involutions on words. Let $\rev:\{\mathrm{N},\mathrm{E}\}^*\rightarrow\{\mathrm{N},\mathrm{E}\}^*$ be the \textbf{reversal map} given by $\rev(w_1w_2\cdots w_n)=w_n\cdots w_2 w_1$. Let $\flip:\{\mathrm{N},\mathrm{E}\}^*\rightarrow\{\mathrm{N},\mathrm{E}\}^*$ be the \textbf{transposition map} that acts by interchanging $\mathrm{N}$'s and $\mathrm{E}$'s. Evidently, both $\rev$ and $\flip$ are involutions; $\rev$ maps $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$ bijectively onto itself, whereas $\flip$ maps $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^a\mathrm{E}^b)$ bijectively onto $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^b\mathrm{E}^a)$. We can modify the sweep maps by composing on the left or right with $\rev$ and/or $\flip$. The new maps are bijections (between appropriate domains and codomains) iff the original sweep maps are bijections. Table~\ref{tab:sweeps} displays the eight maps along with their relationships to $\sw^{-}_{r,s}$ and $\sw^{+}_{r,s}$. One can also check that $$\rev\circ\sw^+_{-r,-s}=\sw^-_{r,s}\mbox{ and }
\flip\circ\sw^{-}_{s,r}\circ\> \flip=\sw^-_{r,s}.$$
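These identities are easy to spot-check by computer; the following sketch (reusing the \texttt{sweep\_map} function sketched in Section~\ref{sec:intro}, with an arbitrarily chosen test family of words) checks the second identity for $(r,s)=(3,-2)$ on all words with two $\mathrm{N}$'s and three $\mathrm{E}$'s.

\begin{verbatim}
def rev(w):
    return w[::-1]

def flip(w):
    return w.translate(str.maketrans('NE', 'EN'))

# Check flip o sw^-_{s,r} o flip == sw^-_{r,s} for (r, s) = (3, -2):
from itertools import permutations
words = {''.join(p) for p in permutations('NNEEE')}
print(all(flip(sweep_map(flip(w), -2, 3)) == sweep_map(w, 3, -2)
          for w in words))
\end{verbatim}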
The following degenerate cases of the sweep map are readily verified. \begin{itemize} \item If $r<0$ and $s<0$, then $\sw_{r,s}^{\pm}=\id$, the
identity map on $\{\mathrm{N},\mathrm{E}\}^*$. \item If $r>0$ and $s>0$, then $\sw_{r,s}^{\pm}=\rev$, the reversal map. \item For $r=s=0$, $\sw_{0,0}^-=\rev$ and $\sw_{0,0}^+=\id$. \item If $r=0$ and $s<0$, then $\sw_{r,s}^+=\id$,
whereas $\sw_{r,s}^-$ maps $\mathrm{N}^{a_0}\mathrm{E}\mathrm{N}^{a_1}\mathrm{E}\mathrm{N}^{a_2}\cdots\mathrm{E}\mathrm{N}^{a_k}$ (where $a_j\geq 0$) to $\mathrm{N}^{a_1}\mathrm{E}\mathrm{N}^{a_2}\mathrm{E}\cdots\mathrm{N}^{a_k}\mathrm{E}\mathrm{N}^{a_0}$. Similar statements hold in the cases: $r=0$ and $s>0$;
$s=0$ and $r<0$; $s=0$ and $r>0$. \end{itemize} \begin{table}[ht]
\centering
\caption{Symmetries of sweep maps.}
\begin{tabular}{@{}cccccc@{}} \toprule
Map & Step-labeling & Order to & Sweep direction & Start level\\
& convention & scan levels& on each level & \\\midrule
$\sw^{-}_{r,s}$ & E-N & decreasing & $\leftarrow$ & $-1$\\
$\sw^{+}_{r,s}$ & E-N & decreasing & $\rightarrow$ & $0$ \\
$\rev \circ \sw^{-}_{r,s}$ & E-N & increasing & $\rightarrow$ & $0$ \\
$\rev \circ \sw^{+}_{r,s}$ & E-N & increasing & $\leftarrow$ & $1$ \\
$\sw^{-}_{r,s}\circ\> \rev$ & W-S & increasing & $\rightarrow$ & $ra+sb+1$ \\
$\sw^{+}_{r,s}\circ\> \rev$ & W-S & increasing & $\leftarrow$ & $ra+sb$ \\
$\rev\circ \sw^{-}_{r,s}\circ\> \rev$ & W-S & decreasing & $\leftarrow$
& $ra+sb$ \\
$\rev\circ \sw^{+}_{r,s}\circ\> \rev$ & W-S & decreasing & $\rightarrow$
& $ra+sb-1$ \\\bottomrule
\end{tabular}
\label{tab:sweeps} \end{table}
\subsection{The General Sweep Map} \label{subsec:gen}
Suppose $A=\{x_1,\ldots,x_k\}$ is a given alphabet and $\wt:A\rightarrow\mathbb{Z}$ is a function assigning an integer \textbf{weight} to each letter in $A$. Given a word $w=w_1w_2\cdots w_n\in A^*$, define the \emph{levels $l_0,\ldots,l_n$ relative to
the weight function $\wt$} by setting $l_0=0$ and, for $1\leq i\leq n$, letting $l_{i}=l_{i-1}+\wt(w_i)$. (These levels are essentially computed according to the east-north convention, though the west-south convention works equally well.) Define $\sw_{\wt}:A^*\rightarrow A^*$ as follows: For each $k$ from $-1$ down to $-\infty$ and then from $\infty$ down to $0$, scan $w$ from right to left, writing down each $w_i$ with $l_i=k$ and $i>0$. Let $\mathcal{R}^{\mathrm{word}}(x_1^{n_1}\cdots x_k^{n_k})$ be the set of words $w\in A^*$ consisting of $n_j$ copies of $j$ for $1\leq j\leq k$. Let $\mathcal{D}^{\mathrm{word}}_{\wt}(x_1^{n_1}\cdots x_k^{n_k})$ be the set of such words for which all levels $l_i$ are nonnegative.
\begin{conj}\label{conj:gen-sweep}
Let $A=\{x_1,\ldots,x_k\}$ be an alphabet and $\wt:A\rightarrow\mathbb{Z}$
a weight function. For any nonnegative integers $n_1,n_2,\ldots,
n_k$, \begin{itemize} \item[(a)]
$\sw_{\wt}$ maps $\mathcal{R}^{\mathrm{word}}(x_1^{n_1}\cdots x_k^{n_k})$ bijectively to itself. \item[(b)]
$\sw_{\wt}$ maps $\mathcal{D}^{\mathrm{word}}_{\wt}(x_1^{n_1}\cdots x_k^{n_k})$ bijectively to itself. \end{itemize} \end{conj}
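The general sweep map is just as easy to implement as the basic one; the following sketch (our own transcription, with an arbitrarily chosen alphabet and weight function in the test) ends with a small brute-force test of part (a) of the conjecture: the two printed counts agree exactly when $\sw_{\wt}$ is injective on the test set.

\begin{verbatim}
def general_sweep(w, wt):
    # Levels: l_0 = 0 and l_i = l_{i-1} + wt(w_i).
    levels, l = [], 0
    for c in w:
        l += wt[c]
        levels.append(l)
    # Scan levels -1, -2, ... and then ..., 2, 1, 0, right to left in w.
    neg = sorted({k for k in levels if k < 0}, reverse=True)
    nonneg = sorted({k for k in levels if k >= 0}, reverse=True)
    return ''.join(w[i] for k in neg + nonneg
                        for i in reversed(range(len(w))) if levels[i] == k)

# Brute-force test of the bijectivity conjecture on R^word(a^2 b^2 c^1):
from itertools import permutations
wt = {'a': 2, 'b': -1, 'c': -3}
words = {''.join(p) for p in permutations('aabbc')}
print(len(words), len({general_sweep(w, wt) for w in words}))
\end{verbatim}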
\section{Algorithms Equivalent to a Sweep Map} \label{sec:alg-sweep}
This section reviews some algorithms that have appeared in the literature that are equivalent to special cases of the sweep map and its variations. We describe each algorithm and indicate its exact relation to the general sweep map. The algorithms reviewed here fall into three main classes: algorithms that operate on area vectors, algorithms that operate on hook-lengths of cells in a partition, and algorithms involving generators of certain semi-modules. The algorithms based on area vectors arose in the study of the $q,t$-Catalan polynomials and their generalizations; these polynomials will be discussed at greater length later in this paper. The algorithms involving hook-lengths and semi-modules were introduced to study the special case of Dyck objects where the dimensions $a$ and $b$ are coprime. The sweep map provides a single unifying framework that simultaneously generalizes all these previously studied algorithms. We find it remarkable that this map, which has such a simple definition, encodes such a rich array of mathematical structures.
\subsection{Algorithms Based on Area Vectors} \label{subsec:alg-area-vector}
\subsubsection{Introduction}
This subsection studies several algorithms that operate on lattice paths by manipulating an \emph{area vector} that records how many area cells in each row lie between the path and a diagonal boundary. The simplest version of these algorithms is a bijection on Dyck paths described in a paper by Haglund and Loehr~\cite[\S3, Bijections]{HL-park}. In~\cite{loehr-mcat,loehr-thesis,loehr-trapz}, Loehr generalized this bijection to define a family of maps $\phi$ acting on $m$-Dyck paths and on lattice paths contained in certain trapezoids. We begin our discussion with the maps for trapezoidal lattice paths, which contain the earlier maps as special cases. We then look at a generalization of $\phi$ acting on lattice paths inside squares, followed by a different variation that acts on Schr\"oder paths containing diagonal steps.
\subsubsection{Trapezoidal Lattice Paths} \label{subsubsec:trapz}
Fix integers $k\geq 0$ and $n,m>0$. Let $T_{n,k,m}$ denote the set of \emph{trapezoidal lattice paths} from $(0,0)$ to $(k+mn,n)$ that never go strictly to the right of the line $x=k+my$. The paper~\cite{loehr-trapz} introduces a bijection $\phi=\phi_{n,k,m}:T_{n,k,m}\rightarrow T_{n,k,m}$ and its inverse. That paper (last paragraph of Section 3.1) accidentally switches the roles of $\phi$ and $\phi^{-1}$ compared to~\cite{loehr-mcat} and other literature. The map $\phi_{n,k,m}$ discussed below is the composite $\alpha^{-1}\circ\beta\circ\gamma$ from~\cite{loehr-trapz} (which is erroneously denoted $\phi^{-1}$ in that paper). After recalling the definition of this map, we show that a variant of $\phi_{n,k,m}$ is a sweep map.
Given a path $P\in T_{n,k,m}$, we first construct the \textbf{ area vector} $g(P)=(g_0,g_1,\ldots,g_{n-1})$, where $g_i$ is the number of complete lattice squares in the horizontal strip $\{(x,y): x\geq 0, i\leq y\leq i+1\}$ that lie to the right of $P$ and to the left of the line $x=k+my$. The area vector $g(P)$ has the following properties: $0\leq g_0\leq k$; $g_i$ is a nonnegative integer for $0\leq i<n$; and $g_i\leq g_{i-1}+m$ for $1\leq i<n$. One readily checks that $P\mapsto g(P)$ is a bijection from $T_{n,k,m}$ to the set of vectors of length $n$ with the properties just stated.
For $P\in T_{n,k,m}$, we compute $\phi(P)$ by concatenating lattice paths (regarded as words in $\{\mathrm{N},\mathrm{E}\}^*$) that are built up from subwords of $g(P)$ as follows. For $i=0,1,2,\ldots$, let $z^{(i)}$ be the subword of $g(P)$ consisting of symbols in the set $\{i,i-1,i-2,\ldots,i-m\}$; let $M$ be the largest $i$ such that $z^{(i)}$ is nonempty. Create a word $\sigma^{(i)}\in\{\mathrm{N},\mathrm{E}\}^*$ from $z^{(i)}$ by replacing each symbol $i$ in $z^{(i)}$ by $\mathrm{N}$ and replacing all other symbols in $z^{(i)}$ by $\mathrm{E}$. Let $\sigma$ be the concatenation of words \[ \sigma=\sigma^{(0)}\,\,\mathrm{E}\sigma^{(1)}\,\,\mathrm{E}\sigma^{(2)}\cdots\,\, \mathrm{E}\sigma^{(k)}\, \sigma^{(k+1)}\,\cdots\,\sigma^{(M)}, \] in which an extra east step is added after the first $k$ words. Define $\phi(P)=\mkpath(\sigma)$. It is proved in~\cite[Sec. 3]{loehr-trapz} that $\phi(P)$ always lies in $T_{n,k,m}$, and that $\phi_{n,k,m}$ is a bijection.
To relate $\phi$ to the sweep map, we need to introduce a modified map $\phi'$ that incorporates the bijection described in~\cite[Sec. 4]{loehr-trapz}. Keep the notation of the previous paragraph. For all $i$ with $k<i\leq M$, note that $\sigma^{(i)}$ must begin with an $\mathrm{E}$, so we can write $\sigma^{(i)}=\mathrm{E}\tilde{\sigma}^{(i)}$. Let $\tau^{(i)}=\rev(\sigma^{(i)})$ for $0\leq i\leq k$, and let $\tau^{(i)}=\mathrm{E}\,\rev(\tilde{\sigma}^{(i)})$ for $k<i\leq M$. Define $\phi'(P)=\mkpath(\tau)$, where \[ \tau=\tau^{(0)}\,\,\mathrm{E}\tau^{(1)}\,\,\mathrm{E}\tau^{(2)}\cdots\,\,\mathrm{E}\tau^{(k)} \,\tau^{(k+1)}\,\cdots\,\tau^{(M)}. \] \begin{example}\label{ex:phi} Let $n=8$, $k=2$, $m=2$, and $\mkwd(P)=\NE{ENNEENEEEEENNEEENNEEENEEEE}$. Then $g(P)=(1,3,3,0,2,1,3,2)$, so \[ \begin{array}{llllll} z^{(0)}=0, & z^{(1)}=101, & z^{(2)}=10212, & z^{(3)}=1332132, & z^{(4)}=33232,& z^{(5)}=333, \\ \sigma^{(0)}=\NE{N}, & \sigma^{(1)}=\NE{NEN}, & \sigma^{(2)}=\NE{EENEN}, & \sigma^{(3)}=\NE{ENNEENE}, & \sigma^{(4)}=\NE{EEEEE}, & \sigma^{(5)}=\NE{EEE},\\ \tau^{(0)}=\NE{N}, & \tau^{(1)}=\NE{NEN}, & \tau^{(2)}=\NE{NENEE}, & \tau^{(3)}=\NE{EENEENN}, & \tau^{(4)}=\NE{EEEEE}, & \tau^{(5)}=\NE{EEE}, \end{array}\] \begin{align*}
\mkwd(\phi(P)) &= \sigma =
\NE{N\,\,ENEN\,\,EEENEN\,\,ENNEENE\,\,EEEEE\,\,EEE}, \\
\mkwd(\phi'(P)) &= \tau =
\NE{N\,\,ENEN\,\,ENENEE\,\,EENEENN\,\,EEEEE\,\,EEE}. \end{align*} \end{example}
\begin{theorem}\label{thm:trapz-vs-sweep}
For all $k\geq 0$, $n,m>0$, and $P\in T_{n,k,m}$, $$\mkwd(\phi'_{n,k,m}(P))= \flip\circ\rev\circ\sw^{-}_{1,-m}\circ\>\rev\circ\flip(\mkwd(P)).$$ \end{theorem} \begin{proof} \noindent\textbf{Step 1.} Write $w=\mkwd(P)=w_1w_2\cdots w_{k+n+nm}$ and $\sw'=\flip\circ\rev\circ\sw^{-}_{1,-m}\circ\>\rev\circ\flip$. One may routinely check that $\sw'(w)$ may be computed by the following algorithm. Let $l_1=k$, $l_{j+1}=l_{j}+m$ if $w_j=\mathrm{N}$, and $l_{j+1}=l_{j}-1$ if $w_j=\mathrm{E}$. (Thus in this variation, a north step $w_j$ from $(x,y)$ to $(x,y+1)$ has associated level $l_j=k+my-x\geq 0$, whereas an east step $w_j$ from $(x,y)$ to $(x+1,y)$ has associated level $l_j=k+my-x>0$. Up to the shift by $k$, this is the west-south convention for assigning levels.) Generate an output word $y$ from left to right as follows. For each level $L=0,1,2,\ldots$, scan $w$ from right to left, and append the letter $w_j$ to the right end of $y$ whenever an index $j\leq k+n+nm$ is scanned for which $l_{j}=L$. For each $i\geq 0$, let $\rho^{(i)}$ be the subword of $y$ generated in the $L=i$ iteration of the algorithm.
In the preceding example, the sequence of steps and levels is
\begin{center} {\setlength{\tabcolsep}{1pt} \begin{tabular}{ccccccccccccccccccccccccccc}
E&N&N&E&E&N&E&E&E&E&E&N&N&E&E&E&N&N&E&E&E&N&E&E&E&E & \\2&1&3&5&4&3&5&4&3&2&1&0&2&4&3&2&1&3&5&4&3&2&4&3&2&1 &(0), \end{tabular}} \end{center}
\noindent where the zero in parentheses is the level following the final east step. The subwords $\rho^{(i)}$ are
\[ \rho^{(0)}=\NE{N},\ \rho^{(1)}=\NE{ENEN},\ \rho^{(2)}=\NE{ENENEE},\
\rho^{(3)}=\NE{EENEENN},\ \rho^{(4)}=\NE{EEEEE},\ \rho^{(5)}=\NE{EEE}. \] By definition of the levels, one sees that the maximum level of any letter in $w$ is the same value $M$ appearing in the definition of $\phi'(P)$. Since $y=\sw'(w)=\rho^{(0)}\rho^{(1)}\cdots\rho^{(M)}$, it will suffice to prove that $\rho^{(0)}=\tau^{(0)}$, $\rho^{(i)}=\mathrm{E}\tau^{(i)}$ for $1\leq i\leq k$, and $\rho^{(i)}=\tau^{(i)}$ for all $i$ with $k<i\leq M$ (as illustrated by Example~\ref{ex:phi}). Define $\tilde{\rho}^{(i)}=\rev(\rho^{(i)})$ for all $i$; this is the word obtained by scanning $w$ from left to right and taking all letters at level $i$. By reversing everything, it is enough to prove that $\tilde{\rho}^{(0)}=\sigma^{(0)}$, $\tilde{\rho}^{(i)}=\sigma^{(i)}\mathrm{E}$ for $1\leq i\leq k$, and $\tilde{\rho}^{(i)}=\tilde{\sigma}^{(i)}\mathrm{E}$ for all $i$ with $k<i\leq M$.
\noindent\textbf{Step 2.}
Fix a level $L=i$. We define an \textbf{event
sequence} in $\{\mathrm{A},\mathrm{B},\mathrm{C}\}^*$ associated
with a left-to-right scan of level $i$. It follows from Step 1 that
the levels of the north steps of $P$ will be
$g_0,g_1,\ldots,g_{n-1}$ in this order. As we scan $w$ during this
iteration, the following \textbf{events} may occur:
\begin{itemize}
\item[A.] We scan an $\mathrm{N}$ of $w$ at level $i$,
which appends an $\mathrm{N}$ onto both $\sigma^{(i)}$ and $\tilde{\rho}^{(i)}$.
\item[B.] We scan an $\mathrm{E}$ of $w$ at level $i$, which appends an $\mathrm{E}$
onto $\tilde{\rho}^{(i)}$.
\item[C.] We scan an $\mathrm{N}$ of $w$ with level in $\{i-1,i-2,\ldots,i-m\}$,
which appends an $\mathrm{E}$ onto $\sigma^{(i)}$.
\end{itemize}
Consider the sequence of events A, B, C that occur during the $L=i$
scan. In our example, the $L=2$ scan has event sequence BCBCABCAB,
whereas the $L=3$ scan has event sequence CAABCBCABCB.
\noindent\textbf{Step 3.}
We prove that $\tilde{\rho}^{(0)}=\sigma^{(0)}$.
For the $L=0$ scan,
events B and C are impossible, since the path stays within the
trapezoid. So the event sequence consists of $j$ A's for some $j$,
and $\tilde{\rho}^{(0)}$ and $\sigma^{(0)}$ both consist of $j$ $N$'s.
\noindent\textbf{Step 4.}
For $0<i\leq M$, we analyze the possible transitions
between events A, B, and C that may occur during the $L=i$
scan. Note that events A and B can only occur when the level of the
current character in $w$ is $\geq i$, whereas event C only occurs
when this level is $<i$. Moreover, the only way to transition from a
level $\geq i$ to a level $<i$ is via event B, and the only way to
transition from a level $<i$ to a level $\geq i$ is via event C.
Consequently, in the event sequence for $L=i$, every A (not at the
end) can only be followed by A or B; every B (not at the end) can
only be followed by C; and every C (not at the end) can only be
followed by A or B. The path $P$ ends at level $0<i$, so the event
sequence must end in a B.
\noindent\textbf{Step 5.}
We prove that $\tilde{\rho}^{(i)}=\sigma^{(i)} \mathrm{E}$ for $1\leq i\leq k$.
Since $i\leq k$ and the origin has level $k$, the first letter
in the event sequence must be A or B. By Step 4, the event sequence
is some rearrangement of A's and BC's, except there is an unmatched
B at the end. By definition of the events in Step 2, this means
that $\tilde{\rho}^{(i)}$ and $\sigma^{(i)}$ agree, except for an
extra $\mathrm{E}$ at the end of $\tilde{\rho}^{(i)}$.
\noindent\textbf{Step 6.}
We prove that $\tilde{\rho}^{(i)}=\tilde{\sigma}^{(i)} \mathrm{E}$
for $k<i\leq M$. Since $i>k$ and the origin has level $k$, the first letter
in the event sequence must be an unmatched C. Thereafter, the event
sequence consists of A's and matched BC pairs, with one unmatched B
at the end. The initial C gives the initial $\mathrm{E}$ in $\sigma^{(i)}$ that is
deleted to form $\tilde{\sigma}^{(i)}$. As in Step 5, we see that
$\tilde{\rho}^{(i)}$ and $\tilde{\sigma}^{(i)}$ agree,
except for an extra $\mathrm{E}$ at the end of $\tilde{\rho}^{(i)}$
caused by the unmatched B. \end{proof}
The proof structure above can be readily adapted to show that other algorithms based on area vectors are equivalent to suitable sweep maps. For this reason, we will omit the details of these proofs in the remainder of this subsection. For instance, one can modify the preceding proof to show that the map $\phi_{n,0,1}$ (which acts on Dyck paths of order $n$) is also a sweep map.
\begin{theorem} For all $n>0$ and $P\in T_{n,0,1}$, $\mkwd(\phi_{n,0,1}(P))= \flip\circ\rev\circ\sw^{-}_{1,-1}(\mkwd(P))$. \end{theorem}
Similarly, let $\phi_{\mathrm{HL}}$ denote the map described in~\cite[\S3, Bijections]{HL-park} that sends unlabeled Dyck paths to unlabeled Dyck paths.
\begin{theorem} For all $n>0$ and $P\in T_{n,0,1}$, $\mkwd(\phi_{\mathrm{HL}}(P))= \sw^{-}_{1,-1}(\mkwd(P))$. \end{theorem}
We note that the partition $\mkptn(\phi_{n,0,1}(P))$ is the transpose of the partition $\mkptn(\phi_{\mathrm{HL}}(P))$. Since \[ \flip\circ\rev(\mkwd(\pi))=\mkwd(\pi') \] for all partitions $\pi$ (where $\pi'$ denotes the transpose of $\pi$), the theorem for $\phi_{\mathrm{HL}}$ follows from the theorem for $\phi_{n,0,1}$ and vice versa.
\subsubsection{Square Lattice Paths}
In~\cite{LW-square}, Loehr and Warrington modified the map $\phi_{\mathrm{HL}}$ to obtain a bijection $\phi_{\mathrm{LW}}$ on $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$, the set of lattice paths in an $n\times n$ square. Given $P\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$, we define its \textbf{area vector} $g(P)=(g_0,g_1,\ldots,g_{n-1})$ by letting $g_i+n-i$ be the number of complete squares in the strip $\{(x,y): x\geq 0, i\leq y\leq i+1\}$ that lie to the right of $P$ and to the left of $x=n$. (This reduces to the previous area vector if $P$ is a Dyck path.) The area vectors of paths in $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$ are characterized by the following properties: $g_0\leq 0$; $g_i+n-i\geq 0$ for $0\leq i<n$; and $g_i\leq g_{i-1}+1$ for $1\leq i<n$.
Given $P\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$, we define a new path $\phi_{\mathrm{LW}}(P)$ as follows. For all $i\in\mathbb{Z}$, let $z^{(i)}$ be the subword of $g(P)$ consisting of all occurrences of $i$ and $i-1$. Create words $\sigma^{(i)}$ from $z^{(i)}$ by replacing each $i$ by $\mathrm{E}$ and each $i-1$ by $\mathrm{N}$. For all $i\geq 0$, let $\tau^{(i)}$ be the reversal of $\sigma^{(i)}$. For all $i<0$, $\sigma^{(i)}$ must end in $\mathrm{E}$, so we can write $\sigma^{(i)}=\tilde{\sigma}^{(i)}\mathrm{E}$; let $\tau^{(i)}=\rev(\tilde{\sigma}^{(i)})\mathrm{E}$. Finally, define \[ \tau=\tau^{(-1)}\tau^{(-2)}\cdots \tau^{(-n)}
\tau^{(n)}\cdots \tau^{(2)}\tau^{(1)}\tau^{(0)}, \] and set $\phi_{\mathrm{LW}}(P)=\mkpath(\tau)$.
\begin{example} Let $P\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^{16}\mathrm{E}^{16})$ be such that \[\mkwd(P)=w=\NE{ENEENENNNNEENEEEENNNENEENNNNENEE}.\] Then $g(P)=(-1,-2,-2,-1,0,1,0,-3,-2,-1,-1,-2,-1,0,1,1)$, so (for instance) $z^{(1)}=010011$, $\sigma^{(1)}=\NE{NENNEE}$, $\tau^{(1)}=\NE{EENNEN}$, $z^{(-2)}=\text{$-2$ $-2$ $-3$ $-2$ $-2$}$, $\sigma^{(-2)}=\NE{EENEE}$, $\tau^{(-2)}=\NE{ENEEE}$, and so on. Next, $\tau$ is the concatenation of the words $\tau^{(-1)}=\NE{NEENENNEE}$, $\tau^{(-2)}=\NE{ENEEE}$, $\tau^{(-3)}=\NE{E}$, $\tau^{(2)}=\NE{NNN}$, $\tau^{(1)}=\NE{EENNEN}$, and $\tau^{(0)}=\NE{ENNNEENN}$, and $\phi_{\mathrm{LW}}(P)=\mkpath(\tau)$. On the other hand, the reader can check that $\tau=\sw^{-}_{1,-1}(w)$. In fact, for all $i\in\mathbb{Z}$, the subword $\tau^{(i)}$ is precisely the subword $\rho^{(i)}$ of letters in $\sw^{-}_{1,-1}(w)$ coming from letters in $w$ with $(1,-1)$-level (using the E-N convention) equal to $i$. One can prove this always happens, by adapting the ideas in the proof of Theorem~\ref{thm:trapz-vs-sweep}, to obtain the following theorem. \end{example}
\begin{theorem} For all $n>0$ and all $P\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$, $\mkwd(\phi_{\mathrm{LW}}(P))=\sw^{-}_{1,-1}(\mkwd(P))$. \end{theorem}
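
For concreteness, the following Python sketch (ours, not part of the construction above) computes $\phi_{\mathrm{LW}}$ directly from the word of a path, omitting the conversions $\mkwd$ and $\mkpath$. It uses the fact that the $i$-th entry of the area vector equals $i-x_i$, where $x_i$ is the $x$-coordinate of the $i$-th north step, and it reproduces the word $\tau$ of the preceding example.

\begin{verbatim}
def area_vector(w):
    # g_i = i - x_i, where x_i is the x-coordinate of the i-th north step
    g, x = [], 0
    for c in w:
        if c == 'E':
            x += 1
        else:
            g.append(len(g) - x)
    return g

def phi_LW(w):
    g = area_vector(w)
    n = len(g)
    pieces = []
    # concatenate tau^(-1), ..., tau^(-n), tau^(n), ..., tau^(1), tau^(0)
    for i in list(range(-1, -n - 1, -1)) + list(range(n, -1, -1)):
        z = [v for v in g if v in (i, i - 1)]
        sigma = ''.join('E' if v == i else 'N' for v in z)
        if not sigma:
            continue
        if i >= 0:
            pieces.append(sigma[::-1])
        else:                       # for i < 0, sigma ends in 'E'
            pieces.append(sigma[:-1][::-1] + 'E')
    return ''.join(pieces)

# phi_LW('ENEENENNNNEENEEEENNNENEENNNNENEE') returns the word tau above.
\end{verbatim}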
\subsubsection{Schr\"oder Lattice Paths}
A \textbf{Schr\"oder path} of order $n$ is a path from the origin $(0,0)$ to $(n,n)$, never going below $y=x$, with the allowed steps being north steps of length $1$, east steps of length $1$, and a northeast step of length $\sqrt{2}$. In~\cite[Theorem 6]{EHKK}, Egge, Haglund, Killpatrick, and Kremer extend $\phi_{\mathrm{HL}}$ to a bijection $\phi_{\mathrm{EHKK}}$ acting on Schr\"oder paths.
\begin{theorem}
After converting from paths to words, $\phi_{\mathrm{EHKK}}$ is
the sweep map $\sw_{\wt}$ associated to the alphabet $A=\{\mathrm{N},\mathrm{D},\mathrm{E}\}$
with weight function $\wt(\mathrm{N}) = 1$, $\wt(\mathrm{D}) = 0$, and $\wt(\mathrm{E}) = -1$. \end{theorem}
\subsection{An Algorithm Based on Hook-Lengths} \label{subsec:zetamap}
Throughout this subsection, fix positive integers $a$ and $b$ with $\gcd(a,b)=1$. In~\cite{AHJ}, D.~Armstrong, C.~Hanusa and B.~Jones investigate the combinatorics of $(a,b)$-cores. In the process, they define a map $\mathsf{zeta}: \mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b) \rightarrow \mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$. For $\pi\in\mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$, the partition $\mathsf{zeta}(\pi)$ is defined in two stages. First, create a partition $\nu = f(\pi)$ as follows. Consider the levels of all lattice squares lying above $by=ax$ and below the path $\mkpath(\pi)$. Since $\gcd(a,b)=1$, these levels must all be distinct. Sort these levels into increasing order, and write them in a column from bottom to top. Let $\nu = f(\pi)$ be the unique partition such that these levels are the hook-lengths of the cells in the first column. (Recall that the \textbf{hook} of a cell $c$ in a partition diagram consists of $c$ itself, all cells below $c$ in its column, and all cells right of $c$ in its row. The \textbf{hook-length} of $c$, denoted $h(c)$, is the number of cells in the hook of $c$.)
The second stage maps $\nu$ to a new partition $\rho = g(\nu)$ as follows. There will be one nonzero row of $\rho$ for each row of $\nu$ whose first-column hook-length is the level of a square directly east of a north step of $\mkpath(\pi)$. To determine the length of each row in $\rho$, count the number of cells of $\nu$ in the corresponding row whose hook-length is less than or equal to $b$. The $\mathsf{zeta}$ map is then defined by $\mathsf{zeta}(\pi) = g \circ f(\pi)$. See Figure~\ref{fig:drewmap-ex} for an example.
\begin{figure}
\caption{Example of the $\mathsf{zeta}$ map applied to $\pi = (4,4,4,2,2,1)$
for $a=7$ and $b=10$. The partitions $\nu=f(\pi)$
and $\rho=g(\nu)=\mathsf{zeta}(\pi)=(8,6,4,2)$ are shown.
Each cell of $\nu$ is labeled with its hook-length.}
\label{fig:drewmap-ex}
\end{figure}
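
As a concrete illustration, the following Python sketch (ours) carries out the two stages of the $\mathsf{zeta}$ map for a partition given by its list of parts. It assumes that the level of the lattice square with south-west corner $(x,y)$ is $by-a(x+1)$; this convention is an assumption on our part, but it reproduces the first-column hook-lengths and the output partition of Figure~\ref{fig:drewmap-ex}.

\begin{verbatim}
def zeta(parts, a, b):
    # assumed convention: the square with SW corner (x, y) has level b*y - a*(x+1)
    lam = sorted(parts, reverse=True) + [0] * (a - len(parts))
    xs = list(reversed(lam))            # x-coordinate of the north step in row y
    level = lambda x, y: b * y - a * (x + 1)
    # stage f: first-column hook-lengths of nu = sorted levels of the squares
    # between the path and the line b*y = a*x
    fchl = sorted(level(x, y) for y in range(a)
                  for x in range(xs[y], b) if level(x, y) >= 0)
    r = len(fchl)
    nu = [fchl[r - 1 - t] - (r - 1 - t) for t in range(r)]   # rows, top to bottom
    conj = [sum(1 for m in nu if m > j) for j in range(nu[0])]
    # stage g: keep the rows indexed by levels of squares east of north steps,
    # and count the cells with hook-length at most b in each kept row
    keep = {level(xs[y], y) for y in range(a)}
    rho = []
    for t in range(r):
        if fchl[r - 1 - t] in keep:
            hooks = [nu[t] - j + conj[j] - t - 1 for j in range(nu[t])]
            rho.append(sum(1 for h in hooks if h <= b))
    return rho

# zeta([4, 4, 4, 2, 2, 1], 7, 10) returns [8, 6, 4, 2].
\end{verbatim}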
\begin{theorem}\label{thm:drew-sweep}
For all $\pi\in \mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$,
$\mkwd(\mathsf{zeta}(\pi)) = \sw^{+}_{b,-a}\circ\rev(\mkwd(\pi))$. \end{theorem}
To prove this theorem, we will introduce alternate formulations of the maps $f$ and $g$, denoted $\tilde{f}$ and $\tilde{g}$, that focus attention on the lattice paths making up the frontiers of $\pi$, $\nu$, and $\rho$. This will enable us to compare the action of the $\mathsf{zeta}$ map on partitions to the action of the sweep map on lattice paths. To start, define $\tilde{f}: \mathcal{D}^{\mathrm{ptn}} \rightarrow \mathcal{R}^{\mathrm{ptn}}$ by setting $\tilde{f}(\pi) = \mkptn(z_0z_1z_2\cdots)$, where $z_0=\mathrm{E}$, and for all $i > 0$, \begin{equation*}
z_i =
\begin{cases}
\mathrm{N}, & \text{if the square with level $i$ lies between $\mkpath(\pi)$
and the line $by=ax$},\\
\mathrm{E}, & \text{otherwise.}
\end{cases} \end{equation*} Since $\mkpath(\pi)$ begins at $(0,0)$ and ends at $(b,a)$, we must have $z_i = \mathrm{E}$ for all $i > ab-a-b$.
\begin{lemma}\label{lem:nu}
For all $\pi \in \mathcal{D}^{\mathrm{ptn}}$, $\tilde{f}(\pi) = f(\pi)$. \end{lemma} \begin{proof}
We first observe that the partitions $\tilde{f}(\pi)$ and
$f(\pi)$ will have the same number of (positive-length) rows.
For, on one hand,
the first-column hook-lengths in $f(\pi)$ will be the levels
of the squares between $\mkpath(\pi)$ and $by=ax$.
On the other hand, these same levels
will be the indices of the north steps of $\mkpath(\tilde{f}(\pi))$.
Now we show that the row lengths will be the same. Consider the
$i$-th row from the bottom (starting with $i=1$ for the bottom row)
in each partition. Suppose the first-column hook-length in this row
in $f(\pi)$ is $k$. By definition, the length of this row in
$f(\pi)$ will be $k-i+1$. Additionally, since we are in the
$i$-th row from the bottom, it follows that of the values
$\{0,1,2,\ldots,k-1\}$, exactly $i-1$ are levels of lattice squares
below $\mkpath(\pi)$, while the remaining $k-i+1$ values are levels of lattice
squares above $\mkpath(\pi)$. But the levels corresponding to squares above
$\mkpath(\pi)$ map to $\mathrm{E}$'s under $\tilde{f}$. So
$|\{j:\,0\leq j\leq k\mbox{ and } z_j = \mathrm{E}\}| = k-i+1$,
which implies that the number of cells in the $i$-th
row from the bottom of $\tilde{f}(\pi)$ will also be $k-i+1$. \end{proof}
We now define an analog of the map $g$ that maps $\nu = \tilde{f}(\pi)=f(\pi)$ to a new partition $\rho$. With the word $z=z_0z_1z_2\cdots$ defined as above, let $y=y_0y_1y_2\cdots$ be the subword of $z$ formed by retaining only those $z_i$ for which $i+a$ is the level (using the W-S convention) of a step of $\mkpath(\pi)$; then set $\tilde{g}(\nu)=\mkptn(y)$. (Technically, $\tilde{g}$ depends not only on the partition $\nu$, but on $a$, $b$ and $\pi = \tilde{f}^{-1}(\nu)$ as well. However, $a$ and $b$ are fixed and we consider $\tilde{g}$ only as part of the composition $\tilde{g}\circ \tilde{f}$.)
\begin{lemma}\label{lem:drew-sweep}
For all $\pi\in\mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$, $\mkwd(\tilde{g}\circ\tilde{f}(\pi)) = \sw^{+}_{b,-a}\circ \rev(\mkwd(\pi))$. \end{lemma} \begin{proof}
Recall from Table~\ref{tab:sweeps} that the sweep map variation on
the right side of the lemma acts on $\mkwd(\pi)$ by scanning the
levels $0,1,2,\ldots$ in this order, sweeping up path steps with
levels assigned according to the W-S convention. (Since
$\gcd(a,b)=1$, each level appears at most once in $\mkwd(\pi)$.
Also, for $r=b$ and $s=-a$, $ra + sb = 0$.)
To compare this sweep map to the action of $\tilde{g}\circ\tilde{f}$,
first note that a north step on the frontier of $\pi$ has level $i+a$
iff the lattice square directly east of that north step has level $i$.
Such lattice squares are encoded as $z_i=\mathrm{N}$ by $\tilde{f}$ and
are then retained by $\tilde{g}$.
Similarly, an east step on the frontier of $\pi$ has level $i+a$
iff the lattice square directly north of that east step has level $i$.
Such lattice squares are encoded as $z_i=\mathrm{E}$ by $\tilde{f}$ and
are then retained by $\tilde{g}$. All other lattice squares not of
these two types are discarded by $\tilde{g}$. Thus
$\mkwd(\tilde{g}\circ\tilde{f}(\pi))$,
which is precisely the subword of $z_0z_1z_2\cdots$
consisting of letters retained by $\tilde{g}$, will be the same
word produced by the sweep map. \end{proof}
Intuitively, one can think of $\tilde{f}$ as sweeping up \emph{all}
lattice-square levels, and then $\tilde{g}$ keeps only those levels
of squares that are ``adjacent'' to the frontier of $\pi$ in the
sense described above. Below, we will call these squares \emph{frontier
squares} of $\pi$.
Before proving our final lemma, we need to introduce some temporary
notation for describing the cells and rows of $\nu=f(\pi)=\tilde{f}(\pi)$.
First, let $\mathrm{FCHL}_{\nu}$ be the set of hook-lengths of cells
in the first (leftmost) column of $\nu$. In our running example,
$\mathrm{FCHL}_{\nu}=\{1,2,4,5,8,9,11,15,18,25\}$. Each square $c$ in the
diagram of $\nu$ lies due north of an east step on the frontier of
$\nu$, say $z_i=\mathrm{E}$; and $c$ lies due west of a north step on the
frontier of $\nu$, say $z_m=\mathrm{N}$. Identify the square $c$ with
the ordered pair of labels $[i,m]$. Observe that $[i,m]$ is the
label of some cell $c$ in the diagram of $\nu$ iff $0\leq i<m$
and $z_i=\mathrm{E}$ and $z_m=\mathrm{N}$; in this case, we must have $m\in\mathrm{FCHL}_{\nu}$.
It is routine to check that the hook-length $h(c)$ is $m-i$.
For all $m\in\mathrm{FCHL}_{\nu}$, the \emph{row of $\nu$ indexed by $m$}
is the row with leftmost cell $[0,m]$, whose hook-length is $m$.
\begin{lemma}\label{lem:rho}
For all $\pi \in \mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$,
$\tilde{g}(\tilde{f}(\pi)) = g(f(\pi))$. \end{lemma} \begin{proof}
Let $\nu=\tilde{f}(\pi)=f(\pi)$.
We must show $g(\nu)=\tilde{g}(\nu)$.
\noindent\textbf{Step 1.} We show that $g$ and
$\tilde{g}$ keep the same rows of $\nu$. On one hand, the definition
of $g$ tells us to keep the rows of $\nu$ indexed by those
$m\in\mathrm{FCHL}_{\nu}$ appearing as the level of a square immediately east of
a north step in $\mkpath(\pi)$. These squares are the frontier
squares of $\pi$ below $\mkpath(\pi)$. On the other hand, the
definition of $\tilde{g}$ tells us to retain the frontier steps
$z_m=\mathrm{N}$ of $\nu$ for those $m\in\mathrm{FCHL}_{\nu}$ such that $m+a$ is the
level of a north step of $\mkpath(\pi)$. As observed in the earlier
lemma, these $m$'s correspond to $m$'s that are the levels of
frontier squares of $\pi$ below $\mkpath(\pi)$. So $g$ and
$\tilde{g}$ do retain the same rows of $\nu$.
\noindent\textbf{Step 2.}
For each fixed $m\in\mathrm{FCHL}_{\nu}$, we compare the cells in the row
of $\nu$ indexed by $m$ that are discarded by $g$ and $\tilde{g}$. On one hand, let \[ C_m = \{c:\,\text{$c$ is a cell
in the row of $\nu$ indexed by $m$, and $h(c)>b$}\}. \] The cells in $C_m$ are erased by $g$. So, for those row indices $m$
retained by $g$, $|C_m|$ is the difference between the length of this row in $\nu$ and the length of the corresponding row in $g(\nu)$. On the other hand, let \[ B_m = \{j:\,1\leq j < m, z_j = \mathrm{E},\text{ and
the square with level $j$ is not a frontier square of $\pi$}\}. \] The values $j\in B_m$ index the east steps $z_j=\mathrm{E}$ prior to the north step $z_m=\mathrm{N}$ that are discarded by $\tilde{g}$. So, for those row indices
$m$ retained by $\tilde{g}$, $|B_m|$ is the difference between the length of this row in $\nu$ and the length of the corresponding row in
$\tilde{g}(\nu)$. Since $g$ and $\tilde{g}$ retain the same row indices $m$ (by Step 1), it will now suffice to show that $|B_m|=|C_m|$ for all $m\in\mathrm{FCHL}_{\nu}$.
\noindent\textbf{Step 3.} Fix $m\in\mathrm{FCHL}_{\nu}$; we define bijections $G:B_m\rightarrow C_m$ and $H:C_m\rightarrow B_m$. Given $j\in B_m$, let $G(j)=[j-b,m]$. Given a cell $[i,m]\in C_m$, let $H([i,m])=i+b$. It is clear that $H\circ G$ and $G\circ H$ are identity maps, so the proof will be complete once we check that $G$ does map $B_m$ into $C_m$, and $H$ does map $C_m$ into $B_m$. Consider a fixed $j\in B_m$. Since $z_j=\mathrm{E}$, $j$ is the level of a square above $\mkpath(\pi)$, but this square is not a frontier square of $\pi$. Hence, the square directly below this square (whose level is $j-b$) is also above $\mkpath(\pi)$. This implies $j-b\geq 0$ and $z_{j-b}=\mathrm{E}$. Moreover, since $j<m$, the hook-length of the cell $[j-b,m]$ is $m-(j-b)=b+(m-j)>b$, proving that $G(j)=[j-b,m]\in C_m$. Now consider a fixed cell $[i,m]\in C_m$. By definition of $C_m$, we must have $z_i=\mathrm{E}$ and $m-i>b$. So the square with level $i$ is above $\mkpath(\pi)$, and hence the square with level $i+b$ is also above $\mkpath(\pi)$ and is not a frontier square of $\pi$. In particular, $z_{i+b}=\mathrm{E}$. Finally, $i+b<m$ and $i+b>0$, so $i+b\in B_m$. We conclude that $H([i,m])\in B_m$. \end{proof}
In our running example, the row indices $m$ retained by both $g$ and $\tilde{g}$ are $5,9,15,25$. For $m=25$, we have \[ B_{25}=\{10,13,16,17,20,22,23,24\}; \] \[ C_{25}=\{[0,25],[3,25],[6,25],[7,25],[10,25],[12,25],[13,25],[14,25]\}. \] ($C_{25}$ is the set of the leftmost eight cells in the top row of $\nu$.) The map $j\mapsto [j-10,25]$ defines a bijection from $B_{25}$ to $C_{25}$.
\subsection{An Algorithm Based on Semi-Module Generators} \label{subsec:gorsky-mazin}
In~\cite{GM-jacI,GM-jacII}, E.~Gorsky and M.~Mazin relate the $q,t$-Catalan numbers and their generalizations to the homology of compactified Jacobians for singular plane curves with Puiseux pair $(a,b)$. In the course of their investigations, they introduce the following map $\mathrm{G}_{b,a}$ on partitions in $\mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$ (we follow the notation of ~\cite{GM-jacII}). Let $a,b\in\mathbb{Z}_{>0}$ with $\gcd(a,b)=1$ and $\pi\in\mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$. For $1\leq i\leq b$, define the \textbf{$\boldsymbol{b}$-generators of $\boldsymbol{\pi}$}, denoted $\beta_1 < \cdots < \beta_b$, to be the levels of the squares immediately above $\mkpath(\pi)$. Define $\Delta = \Delta(\pi)$ to be the set of levels of \emph{all} lattice squares lying north or west of $\mkpath(\pi)$ (i.e., including squares not adjacent to $\mkpath(\pi)$). Equivalently, $\Delta=\mathbb{Z}_{\geq 0}\setminus\Delta^c$ where $\Delta^c$ is the finite set of levels of lattice squares between $\mkpath(\pi)$ and $by=ax$. We then define a new partition $\rho = \mathrm{G}_{b,a}(\pi)$ by setting the $i$-th column of $\rho$ to have length \begin{equation*}
g_{b,a}(\beta_i) = |\{\beta_i,\beta_i+1,\ldots,\beta_i+a-1\} \setminus \Delta|
= |\{\beta_i,\beta_i+1,\ldots,\beta_i+a-1\}\cap\Delta^c|. \end{equation*}
For our running example where $\pi = (4,4,4,2,2,1)$, $a=7$, and $b=10$, the $10$-generators are $\{0,3,6,7,12,14,19,21,28,35\}$, \begin{equation*} \Delta^c = \{1,2,4,5,8,9,11,15,18,25\}, \end{equation*} and $\Delta=\mathbb{Z}_{\geq 0}\setminus\Delta^c$. It follows that \begin{align*}
  g_{10,7}(0) &= |\{0,\ldots,6\} \setminus \{0,3,6\}| = 4,\\
  g_{10,7}(3) &= |\{3,\ldots,9\} \setminus \{3,6,7\}| = 4,\\
  g_{10,7}(6) &= |\{6,\ldots,12\} \setminus \{6,7,10,12\}| = 3,\\
  g_{10,7}(7) &= |\{7,\ldots,13\} \setminus \{7,10,12,13\}| = 3,\\
  g_{10,7}(12) &= |\{12,\ldots,18\} \setminus \{12,13,14,16,17\}| = 2,\\
  g_{10,7}(14) &= |\{14,\ldots,20\} \setminus \{14,16,17,19,20\}| = 2,\\
  g_{10,7}(19) &= |\{19,\ldots,25\} \setminus \{19,20,21,22,23,24\}| = 1,\\
  g_{10,7}(21) &= |\{21,\ldots,27\} \setminus \{21,22,23,24,26,27\}| = 1,\\
  g_{10,7}(28) &= |\{28,\ldots,34\} \setminus \{28,\ldots,34\}| = 0,\\
  g_{10,7}(35) &= |\{35,\ldots,41\} \setminus \{35,\ldots,41\}| = 0. \end{align*} The vector $(g_{10,7}(0),g_{10,7}(3),\ldots,g_{10,7}(35)) = (4,4,3,3,2,2,1,1,0,0)$ gives the column lengths of the partition $\rho=\mathrm{G}_{10,7}(\pi)=(8,6,4,2)$. See Figure~\ref{fig:GMmap}.
\begin{figure}
  \caption{Example of the Gorsky-Mazin map $\mathrm{G}_{10,7}$.}
\label{fig:GMmap}
\end{figure}
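
The computation above can be summarised by the following Python sketch (ours). It uses the same square-level convention as the sketch in \S\ref{subsec:zetamap} and reads `immediately above $\mkpath(\pi)$' as the square directly north of each east step; both are assumptions on our part, but with them the sketch reproduces the example of Figure~\ref{fig:GMmap}.

\begin{verbatim}
def gorsky_mazin(parts, a, b):
    # assumed convention: the square with SW corner (x, y) has level b*y - a*(x+1)
    lam = sorted(parts, reverse=True) + [0] * (a - len(parts))
    xs = list(reversed(lam))            # x-coordinate of the north step in row y
    level = lambda x, y: b * y - a * (x + 1)
    # b-generators: levels of the squares directly above the b east steps
    gens = sorted(level(x, sum(1 for v in xs if v <= x)) for x in range(b))
    # Delta^c: levels of the squares between the path and the line b*y = a*x
    delta_c = {level(x, y) for y in range(a)
               for x in range(xs[y], b) if level(x, y) >= 0}
    # column lengths g_{b,a}(beta_1), ..., g_{b,a}(beta_b) of G_{b,a}(pi)
    return [len(set(range(g, g + a)) & delta_c) for g in gens]

# gorsky_mazin([4, 4, 4, 2, 2, 1], 7, 10) returns [4, 4, 3, 3, 2, 2, 1, 1, 0, 0].
\end{verbatim}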
The preceding example suggests that the Gorsky-Mazin map coincides with the map discussed in~\S\ref{subsec:zetamap}. We now prove this fact.
\begin{theorem}\label{thm:gm}
For $a,b\in\mathbb{Z}_{>0}$ with $\gcd(a,b)=1$ and all $\pi\in \mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$, \[ \mkwd(\mathrm{G}_{b,a}(\pi)) = \sw^{+}_{b,-a}\circ\rev(\mkwd(\pi)). \] \end{theorem}
\begin{proof}
Let $\pi\in\mathcal{D}^{\mathrm{ptn}}_{b,-a}(a,b)$ have $b$-generators $\beta_1 < \beta_2
< \cdots < \beta_b$. Recall from Table~\ref{tab:sweeps} that
$\sw^+_{b,-a}\circ\rev$ uses the west-south convention to assign
levels to steps of a lattice path. It follows that the levels of
east steps in $\mkpath(\pi)$ are precisely the numbers
$a+\beta_1<a+\beta_2<\cdots<a+\beta_b$. The $i$-th east step in the
output of $\sw^+_{b,-a}\circ\rev$ will be preceded by all the north
steps whose levels are less than $a+\beta_i$ and followed by all the
north steps whose levels are greater than $a+\beta_i$. Since the
output has exactly $a$ north steps total, it will suffice to prove
(for each fixed $i$) that \[ \mbox{(the number of north steps of level $<a+\beta_i$)}
+g_{b,a}(\beta_i)=a. \]
For each north step of level $a+k$, the lattice square with level $k$ lies
below $\mkpath(\pi)$, but all of the squares to the left lie west of
$\mkpath(\pi)$ and have levels of the form $k+ja$ for some $j\in
\mathbb{Z}_{\geq 1}$. On the other hand, for each
$b$-generator $\beta_i$, $g_{b,a}(\beta_i)$ is the number of levels in the set
$\{\beta_i,\beta_i+1,\ldots,\beta_i+a-1\}$ that are the levels of
squares below $\mkpath(\pi)$. For each north step of level $a+k$
with $k \geq \beta_i$, \[|\{\beta_i,\ldots,\beta_i+a-1\}
\cap \{k+ja\}_{j > 0}| = 0.\] But for each north step of level $a+k$
with $k < \beta_i$, then the cardinality will be exactly $1$.
Thus the number of levels removed from the $a$-element set
$\{\beta_i,\ldots,\beta_i+a-1\}$ to obtain $g_{b,a}(\beta_i)$
is the same as the number of north steps of level $<a+\beta_i$, as needed. \end{proof}
\section{Inverting the Sweep Map} \label{sec:invert-sweep}
\subsection{Introduction.} \label{subsec:invert-intro}
The main open problem in this paper is to prove that \emph{all sweep maps are bijections}. Even in the two-letter case, this problem appears to be very difficult in general. Nevertheless, many special cases of the sweep map are known to be invertible. After discussing the basic strategy for inversion (which involves recreating the labels on the output steps by drawing a suitable ``bounce path''), we describe the inverse sweep maps that have appeared in the literature in various guises. We omit detailed proofs that the inverse maps work, since these appear in the references.
\subsection{Strategy for Inversion.} \label{subsec:strategy-invert}
In Figure~\ref{fig:negsweep-ex}, we showed the computation of $Q=\sw_{3,-2}^-(P)$ where $P,Q$ are paths in $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^8\mathrm{E}^{10})$. The output $Q$ is the path shown on the far right of the figure, \emph{not including labels}. Suppose we were given $Q$ and needed to compute $P=(\sw_{3,-2}^-)^{-1}(Q)$. If we could somehow recreate the labels on the steps of $Q$ (as shown in the figure), then the sweep map could be easily inverted, as follows. By counting the total number of north and east steps, we deduce that $P$ must end at level $8\cdot 3 + 10\cdot (-2) = 4$. We now reconstruct the steps of $P$ in reverse order. The last step of $P$ must be the first step of $Q$ in the collection of steps labeled 4 (since, when sweeping $P$ to produce $Q$, level 4 is swept from right to left). We mark that step of $Q$ as being used. Since it is a north step, the preceding step of $P$ must end at level 1. We now take the first unused step of $Q$ labeled 1 (which is a north step), mark it as used, and note that the preceding step of $P$ must end at level $-2$. We continue similarly, producing $P$ in reverse until reaching the origin, and marking steps in $Q$ as they are used. Because $Q$ is in the image of the sweep map, this process must succeed (in the sense that all steps of $Q$ are used at the end, and we never get stuck at some level where all steps in $Q$ with that label have already been used). Evidently, the strategy outlined here works for any choice of weights, including the general case of alphabets with more than one letter. Variations of the sweep map (such as $\sw^+$) can be handled analogously. The crucial question is \emph{how to recreate the labels on the steps of $Q$}.
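
Assuming for a moment that the labels are known, the reconstruction just described can be summarised by the following Python sketch (ours). It takes the steps of $Q$ in order, each paired with the level at which the corresponding step of $P$ ends, and it assumes, as in the example above, that each level is swept from right to left.

\begin{verbatim}
def unsweep_from_labels(labeled_steps, r, s):
    # labeled_steps: list of (letter, level) pairs, in the order the steps occur in Q
    weight = {'N': r, 'E': s}
    by_level = {}
    for letter, lvl in labeled_steps:
        by_level.setdefault(lvl, []).append(letter)
    level = sum(weight[letter] for letter, _ in labeled_steps)   # final level of P
    built = []
    for _ in labeled_steps:
        # a level is swept from right to left, so the first unused step of Q at
        # this level is the rightmost unused step of P at this level
        letter = by_level[level].pop(0)
        built.append(letter)
        level -= weight[letter]      # level at which the preceding step of P ends
    return ''.join(reversed(built))  # the steps of P were rebuilt in reverse order
\end{verbatim}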
This question has been answered in the literature for Dyck paths, $m$-Dyck paths, trapezoidal lattice paths, square paths, Schr\"oder paths, $(n,nm+1)$-Dyck paths, and $(n,nm-1)$-Dyck paths. In every known case, the key to recreating the labels is to define a \emph{bounce path} for a lattice path $Q$. The steps of $Q$ associated with the ``$i$-th bounce'' in the bounce path receive label $i$. Once labels have been assigned, one can reverse the sweep map as described in the previous paragraph. We begin by discussing the simplest instance of the bounce path, which is used to invert the map $\sw_{1,-1}^-=\phi_{\mathrm{HL}}$ (see~\S\ref{subsubsec:trapz}) acting on Dyck paths. Ironically, several different authors independently introduced inverse sweep maps even before the map $\phi_{\mathrm{HL}}$ was proposed in the context of $q,t$-Catalan numbers. We describe these inverses in \S\ref{subsec:vaille} and \S\ref{subsec:andrews} below.
\subsection{Inversion of $\phi_{\mathrm{HL}}$ via Haglund's Bounce Path} \label{subsec:invert-HL}
Figure~\ref{fig:invert-HL} shows the computation of $Q=\sw_{1,-1}^-(P)=\phi_{\mathrm{HL}}(P)$ for a Dyck path $P\in\mathcal{D}^{\mathrm{path}}(\mathrm{N}^{14}\mathrm{E}^{14})$. To understand how to find $P=\phi_{\mathrm{HL}}^{-1}(Q)$ given $Q$, it suffices (by the above remarks) to see how to pass from the unlabeled path $Q$ on the right side of the figure to the labeled path in the middle of the figure.
\begin{figure}\label{fig:invert-HL}
\end{figure}
To recreate the labels, we draw the \emph{bounce path} for the Dyck path $Q$, using the following definition due to Haglund~\cite{Hag-bounce}. The bounce path starts at $(n,n)$ and makes a sequence of horizontal moves $H_i$ and vertical moves $V_i$, for $i=0,1,2,\ldots$, until reaching $(0,0)$. Each horizontal move is determined by moving west from the current position as far as possible without going strictly left of the path $Q$. Then the next vertical move goes south back to the diagonal $y=x$. Figure~\ref{fig:sweep-bounce} shows the bounce path for our example path $Q$. In this example, the labels we are trying to recreate are related to the bounce path as follows: every step of $Q$ located above the bounce move $H_i$ and to the left of bounce move $V_{i-1}$ has label $i$. As special cases, the steps of $Q$ above $H_0$ have label zero, and the steps of $Q$ to the left of the last bounce move $V_s$ have label $s+1$. We claim that this relation between the labels and the bounce path holds in general, for any Dyck path $Q$ of the form $\phi_{\mathrm{HL}}(P)$. This claim implies that $P$ can be uniquely recovered from $Q$, so that $\phi_{\mathrm{HL}}$ is injective and hence bijective.
\begin{figure}
\caption{Bounce path for path $Q$ of Figure~\ref{fig:invert-HL}.}
\label{fig:sweep-bounce}
\end{figure}
To prove the claim, let $h_i$ be the length of the horizontal move $H_i$ of the bounce path for $Q$, and let $v_i=h_i$ be the length of the vertical move $V_i$ of the bounce path for $Q$. Also define $v_i=h_i=0$ for $i<0$ and $i>s$, where $s+1$ is the total number of horizontal bounces. Finally, let $n_i$ (resp. $e_i$) denote the number of north (resp. east) steps of $P$ with $(1,-1)$-level equal to $i$.
We first prove this lemma: $n_i = e_{i-1}$ for all $i\in\mathbb{Z}$. For any level $i$, the number of times the path $P$ arrives at this level (via a north step of level $i$ or an east step of level $i$) equals the number of times $P$ leaves this level (via a north step of level $i+1$ or an east step of level $i-1$). This holds even when $i=0$, since $P$ begins and ends at level zero. It follows that $n_i + e_i = n_{i+1} + e_{i-1}$ for all $i\in\mathbb{Z}$. Also $n_i=0$ for all $i\leq 0$, and $e_i=0$ for all $i<0$. Thus $n_i=e_{i-1}=0$ for all integers $i\leq 0$. Fix an integer $i\geq 0$, and assume that $n_i=e_{i-1}$. Then $n_{i+1}=n_i+e_i-e_{i-1}=e_i$, so the lemma follows by induction.
Now we show that for all $i\in\mathbb{Z}$, $h_i = e_i$ and $v_{i-1} = n_i$. Since $P$ is a Dyck path, the assertion holds for all $i<0$. We prove the two equalities for $i \geq 0$ by induction on $i$, starting with the base case of $i=0$. Recall that the steps of $P$ are swept in decreasing order by level. We know that $v_{-1} = n_0 = 0$. Since $n_0 = 0$, the path $Q$ ends in $e_0$ east steps. Hence $h_0 \geq e_0$. Since $P$ starts at level $0$, the first step in $P$ at \emph{any} level $i > 0$ must be a north step. It follows that the last step in $Q$ labeled with a 1 is a north step and, consequently, that $h_0 = e_0$.
Assume now that $h_k = e_k$ and $v_{k-1} = n_k$ for some fixed $k\geq 0$. It follows from the bounce mechanism that $v_k = h_k$. We know that $h_k = e_k$ by the induction hypothesis. Finally, $e_k = n_{k+1}$ by the above discussion. Combining these equalities, we find that $v_k = n_{k+1}$. We know that $h_{k+1} \geq e_{k+1}$ using $v_k = n_{k+1}$ and the fact that the east steps of $P$ at level $k+1$ must be swept after any steps at level $k+2$. As observed above, the first step in $P$ at level $k+2$ (if it exists) is a north step. Hence $h_{k+1} = e_{k+1}$.
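
The following Python sketch (ours) recreates the labels of $Q=\phi_{\mathrm{HL}}(P)$ from the bounce path, using the relations $h_i=e_i$ and $v_{i-1}=n_i$ established above: the labels occur along $Q$ in consecutive blocks of decreasing value, namely $v_s$ steps labeled $s+1$, then $h_i+v_{i-1}$ steps labeled $i$ for $i=s,\ldots,1$, and finally $h_0$ steps labeled $0$.

\begin{verbatim}
def haglund_bounce_labels(w):
    # x-coordinate of the north step of the Dyck word w at each height 0..n-1
    x_of_height, x = [], 0
    for c in w:
        if c == 'E':
            x += 1
        else:
            x_of_height.append(x)
    n = len(x_of_height)
    heights = [n]                    # heights of the horizontal moves H_0, H_1, ...
    while heights[-1] > 0:
        heights.append(x_of_height[heights[-1] - 1])
    h = [heights[i] - heights[i + 1] for i in range(len(heights) - 1)]  # h_i = v_i
    s = len(h) - 1
    labels = [s + 1] * h[s]          # north steps to the left of V_s
    for i in range(s, 0, -1):        # block of h_i + v_{i-1} steps labeled i
        labels += [i] * (h[i] + h[i - 1])
    labels += [0] * h[0]             # east steps above H_0
    return labels                    # one label per letter of w
\end{verbatim}

Combining this labeling with the reconstruction sketch of \S\ref{subsec:strategy-invert} inverts $\phi_{\mathrm{HL}}$.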
\subsection{Vaill{\'e}'s Bijection} \label{subsec:vaille}
In 1997, Vaill{\'e}~\cite{vaille} defined a bijection $\omega$ mapping Dyck paths to Dyck paths, which is the inverse of the map $\phi_{n,0,1}$ defined in~\S\ref{subsubsec:trapz}. (Recall that $\phi_{n,0,1}$ differs from $\phi_{\mathrm{HL}}$ by reversing and flipping the output lattice path.) Vaill{\'e} gives this example of his bijection $\omega$ in~\cite[Fig. 3, p. 121]{vaille}: \begin{align*}
P&=\NE{NNEENNNNNEENNEENEEENNEENNEEENNEE},\\
\omega(P)&=\NE{NENNENNENNNENNENEEENEEENENNEENEE}. \end{align*} The bounce path of $P$ is clearly visible in the left panel of that figure, although here the bounce path moves from $(0,0)$ north and east to $(n,n)$ as a result of the reversal and flipping.
\subsection{The Bijection of Andrews et al.} \label{subsec:andrews}
Andrews, Krattenthaler, Orsina, and Papi~\cite{AKOP-lie} described a bijection mapping Dyck partitions to Dyck paths that is essentially the inverse of $\sw_{1,-1}^{-}$. They give an example starting with an input partition $\pi=(10,10,9,6,5,4,4,3,1,1,1,1,0)$ in~\cite[Fig. 2, p. 3841]{AKOP-lie}. The word of this partition (after adding one more zero part at the end) is \[ y=\mkwd(\pi)=\NE{NNENNNNEENENNENENEEENENNEEEE}. \] This partition maps to the output Dyck path shown in~\cite[Fig. 3, p. 3846]{AKOP-lie}, which has word \[ w=\NE{NENNNEEENNENNEENNNEENEEENNEE}. \] One may check that $\sw_{1,-1}^-(w)=y$, and similarly for other objects, so these authors have inverted the sweep map on Dyck paths. Here too, Haglund's bounce path construction (this time proceeding from $(n,n)$ to $(0,0)$) is visible in Figure 2 of~\cite{AKOP-lie}.
\subsection{Inverting $\phi_{n,k,m}$ and $\phi'_{n,k,m}$.} \label{subsec:invert-trapz}
Loehr describes $\phi_{n,0,m}$ and its inverse in~\cite{loehr-mcat}. The maps $\phi_{n,k,m}$, $\phi'_{n,k,m}$, and their inverses are treated in~\cite{loehr-trapz}. The key to inversion is defining the bounce path for a trapezoidal lattice path $Q\in T_{n,k,m}$. This bounce path starts at $(0,0)$ and moves north and east to $(k+nm,n)$. For $i\geq 0$, the $i$-th bounce moves north $v_i$ steps from the current location as far as possible without going strictly north of the path $Q$. The $i$-th bounce continues by moving east $h_i=v_i+v_{i-1}+\cdots+v_{i-(m-1)}+s$ steps, where $v_j=0$ for $j<0$, $s=1$ for $0\leq i<k$, and $s=0$ for $i\geq k$. One can show that if $Q$ is produced from $P$ via sweeping (as described in Step 1 of the proof of Theorem~\ref{thm:trapz-vs-sweep}), then the steps of $Q$ located north of the $(i-1)$-th horizontal bounce move and west of the $i$-th vertical bounce move receive label $i$. Thus, we can invert the sweep map in this case.
\subsection{Inverting $\phi_{\mathrm{LW}}$.} \label{subsec:invert-LW}
Loehr and Warrington describe the inverse of $\phi_{\mathrm{LW}}$ in~\cite{LW-square} using the language of area vectors. Their result amounts to inverting the sweep map $\sw_{1,-1}^-$ on the domain $\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$ of lattice paths in an $n\times n$ square. As usual, it suffices to discuss the construction of the ``square bounce path.'' Given $Q=\sw_{1,-1}^-(P)$ with $P,Q\in\mathcal{R}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^n)$, first choose the maximum integer $k$ such that the path $Q$ touches the line $y=x-k$. Call this line the \emph{break diagonal}. The \emph{break point} of $Q$ is the lowest point $(x,y)$ of $Q$ on the line $y=x-k$. The \emph{positive bounce path} of $Q$ starts at $(n,n)$ and moves to the break point as follows. First go south $k$ steps from $(n,n)$ to $(n,n-k)$; call this move $V_{-1}$. Repeat until reaching the break point, taking $i=0,1,2,\ldots$: In move $H_i$, go west until blocked by the north end of a north step of $Q$; in move $V_i$, go south to the break diagonal. Next, the \emph{negative bounce path} of $Q$ starts at $(0,0)$ and moves to the break point as follows. First go east $k$ steps from $(0,0)$ to $(k,0)$; call this move $H_{-1}$. Repeat until reaching the break point, taking $i=-2,-3,\ldots$: In move $V_i$, go north to the first lattice point on $Q$; in move $H_i$, go east to the break diagonal. (Since the negative bounce path is blocked by lattice points, not edges, on $Q$, this rule is not the reflection of the rule for the positive bounce path.) One can check that when sweeping $P$ to produce $Q$, the steps of $Q$ located north of move $H_i$ and west of move $V_{i-1}$ receive label $i$. It is now straightforward to invert $\phi_{\mathrm{LW}}$.
\subsection{Inverting $\phi_{\mathrm{EHKK}}$.} \label{subsec:invert-EHKK}
Egge, Haglund, Killpatrick, and Kremer describe the inverse of their map $\phi_{\mathrm{EHKK}}$ in~\cite[p. 15]{EHKK}. Given a Schr\"oder path $Q$, their algorithm to compute $P=\phi_{\mathrm{EHKK}}^{-1}(Q)$ begins by dividing the steps of $Q$ into regions based on a version of the bounce path defined for Schr\"oder paths. This amounts to reconstructing the labels of the steps of $Q$ created when applying the sweep map to $P$. They then reconstruct the area vector of $P$ (modified to allow diagonal steps) by an insertion process that reverses the action of the sweep map. This special case of sweep inversion is notable because it inverts a sweep map on a three-letter alphabet (although one of the letters has weight zero).
\subsection{Inverting $\mathrm{G}_{b,a}$.} \label{subsec:invert-gorsky-mazin}
In~\cite{GM-jacII}, Gorsky and Mazin describe how to invert special cases of their map $\mathrm{G}_{b,a}$ using the language of semi-module generators and bounce paths. Their results amount to inverting $\sw_{b,-n}$ on the domain $\mathcal{D}^{\mathrm{path}}(\mathrm{N}^n\mathrm{E}^b)$ in the case where $b=nm\pm 1$ for some $m$. The case $b=nm+1$ essentially duplicates the $m$-bounce path construction of~\cite{loehr-mcat} (although there is a lot of new material relating this construction to semi-modules). The case $b=nm-1$ is a new inversion result obtained via a modification of the $m$-bounce paths. Specifically, one constructs the bounce path's vertical moves $v_i$ (for $i\geq 0$) as described in~\S\ref{subsec:invert-trapz} above, but now $h_i=v_i+v_{i-1}+\cdots+v_{i-(m-1)}+t$, where $t=-1$ for $i=m-1$ and $t=0$ for all other $i$.
\section{Area Statistics and Generalized $q,t$-Catalan Numbers} \label{sec:area-qtcat}
This section applies the sweep map to provide new combinatorial generalizations of the $q,t$-Catalan numbers~\cite{GH-qtcat} and the $q,t$-square numbers~\cite{LW-square}. We make several conjectures regarding the joint symmetry of these polynomials and their connections to the nabla operator $\nabla$ on symmetric functions introduced by A.~Garsia and F.~Bergeron~\cite{nabla1,nabla2,nabla3}.
\subsection{Area Statistics} \label{subsec:area-stat}
For any word $w\in\{\mathrm{N},\mathrm{E}\}^*$, let $\mathsf{area}(w)$ be the number of pairs $i<j$ with $w_i=\mathrm{E}$ and $w_j=\mathrm{N}$. This is the area of the partition diagram $\mkptn(w)$ consisting of the squares above and to the left of steps in $\mkpath(w)$. For $r,s\in\mathbb{Z}$, let $w$ have $(r,s)$-levels $l_0,l_1,\ldots$ (E-N convention). Let $\mathsf{ml}_{r,s}(w)=\min\{l_0,l_1,\ldots\}$, and set $\mathsf{area}^*_{r,s}(w)=\mathsf{area}(w)+\mathsf{ml}_{r,s}(w)$. Note that $\mathsf{area}^*_{r,s}(w)\neq\mathsf{area}^*_{rm,sm}(w)$ in general, so we cannot necessarily assume that $\gcd(r,s)=1$ when using $\mathsf{area}^*$.
\begin{remark}
The function $\mathsf{ml}_{b,-a}$ appears in~\cite{ALW-RPF} as $\mathsf{ml}_{b,a}$. \end{remark}
\begin{remark}
The correspondence between partitions and paths lying in a fixed triangle
has led to inconsistent terminology: ``area'' can refer to
either the number of squares lying in the partition determined by a
path (as above) or the number of squares between the path and a
diagonal (such as in~\cite{ALW-RPF}). \end{remark}
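
For reference, these statistics can be computed as in the following Python sketch (ours). It assumes that the $(r,s)$-level of a letter in the E-N convention is the running sum of the weights $\wt(\mathrm{N})=r$, $\wt(\mathrm{E})=s$ up to and including that letter; the exact convention is an assumption on our part.

\begin{verbatim}
def area(w):
    # number of pairs i < j with w[i] = 'E' and w[j] = 'N'
    east, total = 0, 0
    for c in w:
        if c == 'E':
            east += 1
        else:
            total += east
    return total

def area_star(w, r, s):
    # assumed convention: the level of a letter is the running sum of weights
    # wt(N) = r, wt(E) = s up to and including that letter
    lvl, levels = 0, []
    for c in w:
        lvl += r if c == 'N' else s
        levels.append(lvl)
    return area(w) + min(levels)
\end{verbatim}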
\subsection{Generalized $q,t$-Catalan Polynomials} \label{subsec:gen-qt-cat}
For $r,s\in\mathbb{Z}$ and $a,b\geq 0$, define the \textbf{$q,t$-Catalan
numbers for slope $\boldsymbol{(-s/r)}$ ending at
$\boldsymbol{(b,a)}$} by \[ C_{r,s,a,b}(q,t)=\sum_{w\in\mathcal{D}^{\mathrm{word}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)}
q^{\mathsf{area}(w)}t^{\mathsf{area}(\sw_{r,s}^-(w))}. \]
\begin{conj}[Joint Symmetry]~\label{conj:qt-joint}
For all $r,s\in\mathbb{Z}$ and all $a,b\geq 0$, $C_{r,s,a,b}(q,t)=C_{r,s,a,b}(t,q)$. \end{conj}
Note that the conjectured bijectivity of $\sw_{r,s}^-$ on the domain $\mathcal{D}^{\mathrm{word}}_{r,s}(\mathrm{N}^a\mathrm{E}^b)$ would imply the weaker univariate symmetry property $C_{r,s,a,b}(q,1)=C_{r,s,a,b}(1,q)$.
The rational $q,t$-Catalan polynomials defined in~\cite{ALW-RPF} arise from a sweep map that, in the case $\gcd(a,b)=1$ considered in that paper, reduces to $\sw^{+}_{b,-a}\circ\rev$. As such, the joint symmetry conjecture~\cite[Conj. 19]{ALW-RPF} is not quite a special case of Conjecture~\ref{conj:qt-joint}.
Let $w\in\mathcal{D}^{\mathrm{path}}_{1,-1}(\mathrm{N}^n\mathrm{E}^n)$ be a ``classical'' Dyck path with area vector $(g_1,\ldots,g_n)$ (see~\S\ref{subsubsec:trapz}). Let $\mathsf{Area}(w)=g_1+\cdots+g_n$, which is the number of area squares between the path $w$ and the line $y=x$, and let $\mathsf{dinv}(w)$ be the number of $i<j$ with $g_i-g_j\in\{0,1\}$. The Garsia-Haiman $q,t$-Catalan numbers~\cite{GH-qtcat} can be defined by the combinatorial formula \[ C_n(q,t)=\sum_{w\in\mathcal{D}^{\mathrm{path}}_{1,-1}(\mathrm{N}^n\mathrm{E}^n)} q^{\mathsf{Area}(w)}t^{\mathsf{dinv}(w)}. \] To relate this polynomial to the one defined above, note that $\mathsf{area}(w)+\mathsf{Area}(w)=n(n-1)/2$. Similarly, it follows from Theorem~\ref{thm:trapz-vs-sweep} and~\cite[\S2.5]{loehr-mcat} that $\mathsf{area}(\sw_{1,-1}^-(w))+\mathsf{dinv}(w)=n(n-1)/2$. Therefore, \[ C_n(q,t)=(qt)^{n(n-1)/2}C_{1,-1,n,n}(1/q,1/t). \] Combining this with a theorem of Garsia and Haglund~\cite{nablaproof}, we get \[ (qt)^{n(n-1)/2}C_{1,-1,n,n}(1/q,1/t)=\langle\nabla(e_n),s_{(1^n)}\rangle. \] More generally, for any positive integers $m,n$, the higher-order $q,t$-Catalan numbers~\cite{loehr-mcat,loehr-thesis} satisfy \[ C_n^{(m)}(q,t)=(qt)^{mn(n-1)/2}C_{m,-1,n,mn}(1/q,1/t). \] The main conjecture for these polynomials can be stated as follows: \[ (qt)^{mn(n-1)/2}C_{m,-1,n,mn}(1/q,1/t)
=\langle\nabla^m(e_n),s_{(1^n)}\rangle. \] An interesting open problem is to find formulas relating the general polynomials $C_{r,s,a,b}(q,t)$ to nabla or related operators.
\subsection{Generalized $q,t$-Square Numbers} \label{subsec:gen-qt-square}
Next we generalize the $q,t$-square numbers studied in~\cite{LW-square}. For $a,b\geq 0$, define the \textbf{$\boldsymbol{q,t}$-rectangle numbers for the $\boldsymbol{a\times b}$ rectangle} by \[ S_{a,b}(q,t)=\sum_{w\in\mathcal{R}^{\mathrm{word}}(\mathrm{N}^a\mathrm{E}^b)}
q^{\mathsf{area}^*_{b,-a}(w)}t^{\mathsf{area}^*_{b,-a}(\sw_{b,-a}^-(w))}. \]
\begin{conj}[Joint Symmetry] For all $a,b$, $S_{a,b}(q,t)=S_{a,b}(t,q)$. \end{conj}
The joint symmetry conjecture is known to hold when $a=b$. This follows from the stronger statement \[ (qt)^{n(n-1)/2}S_{n,n}(1/q,1/t) =2\langle (-1)^{n-1}\nabla(p_n),s_{(1^n)}\rangle, \] which was conjectured in~\cite{LW-square} and proved in~\cite{CL-sqthm}. We conjecture the following more general relationship between certain $q,t$-rectangle numbers and higher powers of $\nabla$.
\begin{conj} For all $m\geq 0$ and $n>0$, \[ (qt)^{mn(n-1)/2}S_{n,mn}(1/q,1/t)
=(-1)^{n-1}(m+1)\langle \nabla^m(p_n),s_{(1^n)}\rangle. \] \end{conj}
\subsection{Specialization at $t=1/q$} \label{subsec:t=1/q}
Recall the definitions of $q$-integers, $q$-factorials, and $q$-binomial coefficients: $[n]_q=1+q+q^2+\cdots+q^{n-1}$, $[n]!_q=[n]_q[n-1]_q\cdots [2]_q[1]_q$, and ${\tqbin{a+b}{a,b}{q}=[a+b]!_q/([a]!_q[b]!_q)}$.
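
The $q$-binomial coefficients can be computed, for instance, with the standard recurrence $\tqbin{m}{k,m-k}{q}=\tqbin{m-1}{k-1,m-k}{q}+q^k\,\tqbin{m-1}{k,m-1-k}{q}$, as in the following Python sketch (ours), which returns the list of coefficients of $1,q,q^2,\ldots$.

\begin{verbatim}
def q_binomial(m, k):
    # coefficient list of the Gaussian binomial coefficient [m; k, m-k]_q
    if k < 0 or k > m:
        return [0]
    if k == 0 or k == m:
        return [1]
    a = q_binomial(m - 1, k - 1)
    b = [0] * k + q_binomial(m - 1, k)      # multiplication by q^k
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(max(len(a), len(b)))]

# q_binomial(4, 2) == [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4.
\end{verbatim}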
In~\cite[Conj. 21]{ALW-RPF}, the authors make the following conjecture for coprime $a$ and $b$: \begin{equation*}
q^{(a-1)(b-1)/2} \sum_{D\in\mathcal{D}^{\mathrm{word}}(\mathrm{N}^a\mathrm{E}^b)} q^{\mathsf{area}(\sw^{+}_{b,-a}(\rev(D)))-\mathsf{area}(D)}=\frac{1}{[a+b]_q}\dqbin{a+b}{a,b}{q}. \end{equation*} We conjecture here that $\sw^{+}_{b,-a}\circ\rev$ can be replaced by $\sw^{-}_{b,-a}$: \begin{conj} For all coprime $a,b>0$, \begin{equation*}
q^{(a-1)(b-1)/2} C_{b,-a,a,b}(q,1/q) = \frac{1}{[a+b]_q}\dqbin{a+b}{a,b}{q}. \end{equation*} \end{conj}
We also introduce two conjectures regarding the $t=1/q$ specialization of paths in a rectangle. \begin{conj} For all $m,n\geq 0$, \[ q^{m\binom{n}{2}}S_{n,mn}(q,1/q)
=\frac{(m+1)}{[m+1]_{q^n}}\dqbin{mn+n}{mn,n}{q}. \] \end{conj}
This conjecture generalizes to arbitrary rectangles as follows:
\begin{conj} For all $a,b\geq 0$, write $b=b'k$ and $a=a'k$ for integers $a',b',k\geq 0$ with $\gcd(a',b')=1$. Then \[ q^{k(a'-1)(b'-1)/2 + a'b'\binom{k}{2}}S_{a,b}(q,1/q)=\frac{(a'+b')}{[a'+b']_{q^k}}
\dqbin{a+b}{a,b}{q}. \] \end{conj}
\begin{remark}
A Sage worksheet containing code to compute the images of paths
under various versions of the sweep map as well as to check the
conjectures of Section~\ref{sec:area-qtcat} can be found at the
third author's web page~\cite{sage-worksheet}. \end{remark}
\section*{Acknowledgments}
The authors gratefully acknowledge discussions with Jim Haglund, Mark Haiman and Michelle Wachs.
\end{document} | arXiv |
\begin{document}
\title{ \huge\bfseries\sffamily Detecting multi-modality in probabilistic regression models }
\author[1]{Andrew Polar} \author[2]{Michael Poluektov} \affil[1]{Independent Software Consultant, Duluth, GA, USA} \affil[2]{Department of Mathematics, School of Science and Engineering, University of Dundee, Dundee DD1 4HN, UK}
\date{ \huge\normalfont\sffamily DRAFT: \today }
\maketitle
\setlength{\absleftindent}{2.0cm} \setlength{\absrightindent}{2.0cm} \setlength{\absparindent}{0em} \begin{abstract} This paper focuses on building models of stochastic systems with aleatoric uncertainty. The nature of the considered systems is such that identical inputs can result in different outputs, i.e. the output is a random variable. The algorithm suggested in this paper targets the identification of multi-modal properties of the output distributions, even when they depend on the inputs and vary significantly throughout the dataset. The ability of the suggested method to recognise complex, and not only bell-shaped, distributions follows from its construction and is backed up by the provided experimental results. In general, the suggested method belongs to the category of boosted ensemble learning techniques, where the single deterministic component can be an arbitrarily chosen regression model. The algorithm does not require any special properties of the chosen regression model, other than having descriptive capabilities with some expected accuracy for the given type of training data. \\ \textbf{Keywords:} uncertainty quantification, Kolmogorov-Arnold representation, deep ensemble learning, divisive data resorting, multi-modality in posterior distributions. \end{abstract}
\section{Introduction} \label{sec:intro}
Many real-life systems are intrinsically stochastic, which creates a challenge for modelling such systems. One can consider systems that produce datasets that contain individual records with vector inputs and corresponding scalar outputs\footnote{Data records are also called `instances' or `entries', inputs are also called `features' and outputs --- `targets' in literature.}. Different records are considered to be independent and can be used in modelling in any order. The output may take different values if an experiment is repeated with the identical inputs, which means that modelling of such data falls into the category of aleatoric uncertainty quantification. The goal is to obtain the probability distribution of the output as a function of the vector input.
The appropriate modelling choice depends on the dataset type. In many cases, the data are observations or measurements of natural phenomena, where stochastic properties are expected due to the known nature of the system, but there are no scientifically founded suggestions about the type of the distribution of the modelled outputs. In this case, researchers accept the convenient assumption of a uni-modal bell-shaped distribution and expect the model to return variances or confidence intervals in addition to expectations. This may be acceptable in many particular cases. This article targets the other cases, when the distribution of the outputs is not uni-modal, more specifically, when it varies depending on the inputs and when it is crucial to find it for unobserved new inputs after the training of the model.
Probabilistic and Bayesian Neural Networks (BNNs) \cite{Neal1993,Mohebali2020} are traditional modelling choices for problems of this type, but their advantages come with some burden attached. Only experts can design their own code by following the published concepts; most researchers use publicly available libraries, usually one of the three biggest: TensorFlow, Keras or PYRO. The posterior distributions returned by them are so-called intractable\footnote{https://github.com/krasserm/bayesian-machine-learning}, which means that analytical expressions for them cannot be obtained, although priors and inference steps are defined. These libraries are used as black boxes with all built-in assumptions/approximations, which can lead to difficulties in the assessment of the accuracy of the result with respect to multi-modality. When a BNN-based method from a library returns a uni-modal posterior distribution, a non-expert researcher has no choice other than to accept it.
Another large group of methods (different from BNNs) is ensemble training \cite{Abdar2021}, including bagging \cite{Efron1979,Breiman1996}, boosting \cite{Schapire1990,Freund1997} and stacking \cite{Wolpert1992,Smyth1999}. A recent comparison \cite{Lakshmi2017} showed that probabilistic methods do not have general advantages over deep ensembles and that the latter can be as accurate as BNNs or better\footnote{This, however, can be disputed. An elementary example where a bagging ensemble fails can easily be constructed. If the data represent points randomly selected close to two parallel lines $y = ax + b$ and $y = ax + b + \Delta b$ with a clear gap between them, the bagging method will create multiple deterministic models located between these lines, since each one is created by minimisation of residual errors. Using such models for predictions simply gives a wrong result. For such an example, other methods can be efficient, such as a support vector machine (SVM); however, this will not give a regression model as a result.}. The aim of this paper is to propose a lightweight algorithm, capable of building an ensemble of models, which recognises multi-modality and which is not limited to a specific model or its identification technique, assuming only that the models are built in a process of minimisation of residual errors. The code of all supporting tests is shared by the authors online\footnote{https://github.com/andrewpolar}.
\section{Intractable posteriors of BNNs} \label{sec:intractableBNN}
The term `intractable posteriors' means the inability to obtain an analytical expression for the posterior distribution, although the priors and inference steps are analytically defined. Expectations and variances returned by probabilistic models are usually accurate, but the type of the output distribution is not always correct. In all benchmark examples found by the authors, the test data are sampled from the normal distribution, which is then identified in the demonstrated probabilistic modelling. However, when the given data were changed by the authors into multi-modal data (a sum of two shifted normal distributions), the posteriors returned by the models were still uni-modal. Two published examples have been chosen for this test from the two popular libraries PYRO\footnote{https://num.pyro.ai/en/stable/examples/bnn.html} and Keras\footnote{https://github.com/krasserm/bayesian-machine-learning/tree/dev/bayesian-neural-networks}.
The result for the original PYRO example with its given data is shown in figure \ref{fig:unimodal_pyro}. The points, which are normally distributed around the expectation curve and are denoted by crosses, are shown in the left-hand side image. The solid blue line is the identified expectation and the light blue area is the $90\%$ confidence interval. The right-hand side image shows the histogram at the central point ($x = 0$) obtained by the Markov Chain Monte Carlo (MCMC) method. The same uni-modal, close-to-normal distribution can be found for any other argument (not shown here).
The new test for the (changed) multi-modal data is shown in figure \ref{fig:multimodal_pyro}. The expectation and the confidence interval are recalculated accordingly, but the posterior distribution shown on the right at $x = 0$ is clearly of the wrong type; it is similar for other points of the input interval and stays that way for repeated experiments. This modified version of the provided example can be found online\footnote{https://github.com/andrewpolar/BNN\_accuracy\_test/blob/main/bnn.py}.
A similar test for the Keras example is shown in figure \ref{fig:multimodal_keras}. The modified multi-modal data, the expectation line and the confidence interval are shown in the left-hand side image; the distribution at point $x = 0$ is shown on the right. It clearly cannot be qualified as two shifted normal distributions with a gap between them: although it does not look like a normal distribution, it is still uni-modal. The same type of distribution can be seen for any other point of the data definition interval and when the experiment is repeated. The modified code is available online\footnote{https://github.com/andrewpolar/BNN\_accuracy\_test/blob/main/KrasserMExample.py}.
In these cases, the inaccuracy in the estimation of the probability density can clearly be noticed; however, for large datasets with multiple inputs, a mismatch in the distribution type will not be that obvious or independently verifiable. The identification of multi-modality in posteriors is recognised as a challenge and is in the focus of recent research \cite{JerfelG}; however, the implementation of new theoretical results requires expert knowledge in the field and, thus, most researchers resort to using publicly available libraries.
The next section provides a simple but efficient way of obtaining so-called `tractable posteriors', where the connection between multi-modal data and multi-modal posteriors can be traced and explained.
\begin{figure}
\caption{Uni-modal data and histogram at x = 0.0 (PYRO).}
\label{fig:unimodal_pyro}
\end{figure}
\begin{figure}
\caption{Multi-modal data and histogram at x = 0.0 (PYRO).}
\label{fig:multimodal_pyro}
\end{figure}
\begin{figure}
\caption{Multi-modal data and histogram at x = 0.0 (Keras).}
\label{fig:multimodal_keras}
\end{figure}
\section{Divisive data resorting (DDR)} \label{sec:DDR}
The dataset is considered to contain independent records $\left( X^i, y^i \right)$, $i \in \left\lbrace 1, \ldots, N \right\rbrace$, where $X^i \in \mathbb{R}^m$ is the input of the $i$-th record, $y^i \in \mathbb{R}$ is the output of the $i$-th record and $N$ is the number of records. The records are the observations of a system with an aleatoric uncertainty. The independence implies that each individual record is not related to other records\footnote{An example of a system with order-dependent records is a dynamical system.}.
Suppose some deterministic regression model $M_0: \mathbb{R}^m \to \mathbb{R}$ can be built using a particular error minimisation process, \begin{equation*}
M_0 = \underset{M\in\mathcal{A}}{\operatorname{argmin}} \sum_i \left( y^i - M\left(X^i\right) \right)^2 , \end{equation*} where $\mathcal{A}$ is the space of possible models. Such models will be called expectation models further in this paper.
The simplest version of the DDR method starts by building one expectation model $M_{1,1}$ for the entire dataset, where the first subscript is the step number and the second subscript is the index of the model within the step. Then, the data records are resorted according to the residual \begin{equation*}
r^i = y^i - M_{1,1} \left(X^i\right) \end{equation*} and divided into two even clusters over the median of the residual in the sorted list. At the second step, new expectation models are built for each cluster, resulting in $M_{2,1}$ and $M_{2,2}$. Then, the records are again resorted according to the residual error within each cluster and are subdivided into two clusters each over the median error, resulting in four clusters in total, for which new expectation models $M_{3,1}$, $M_{3,2}$, $M_{3,3}$ and $M_{3,4}$ are built. The process is continued in a similar way. The average error for each cluster should decline in the division process. When the models become sufficiently accurate, the process stops. The ensemble of models obtained at the last divisive step can now be used to approximate the distribution of the output for each individual record: the outputs of the ensemble can be handled as a sample from the probability distribution of the real output.
The proposed approach falls into the boosting category, since a series of models is built sequentially, where the next group of models depends on the previous one. It has similarities to some known methods, but, in the form proposed here, it is novel to the best knowledge of the authors. A possibly close concept is called anticlustering \cite{Valev1983,Spath1986,Papenberg2021}, where the data are clustered such that the clusters have maximum similarity, while having maximum diversity of the elements within them.
In the simplest version of the DDR method introduced above, the number of clusters doubles at each step. This can be relaxed, leading to a general version of the DDR algorithm. The entire process of constructing the ensemble of models can be defined by a sequence of increasing integers that denote the number of clusters at each step: $w_1, w_2, \ldots, w_p$, where $p$ is the total number of steps of the algorithm and $w_1 = 1$, which indicates that there is only one cluster at the first step (the entire dataset). It is also useful to denote $w_p = W$, indicating that $W$ models must constitute the final ensemble at the last step. The general algorithm is as follows (a code sketch implementing it is given after the list): \begin{enumerate}
\item Start with $j = 1$;
\item Split the dataset into $w_j$ disjoint clusters of equal size, such that all records belonging to one cluster are encountered consecutively within the entire dataset; if $N$ is not divisible by $w_j$, there will be some remaining records; \label{st:split}
\item Build $w_j$ deterministic models $M_{j,k}$, $k \in \left\lbrace 1, \ldots, w_j \right\rbrace$, one for each cluster; \label{st:mod}
\item For each record, calculate the residual using the model corresponding to the parent cluster of the record;
\item Within each cluster, resort the records according to the residuals;
\item If $j=p$, then finish with models obtained at step \ref{st:mod} being the final ensemble of models; if $j<p$, then increase $j$ by $1$ and go to step \ref{st:split}. \end{enumerate}
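
A minimal Python sketch of this procedure is given below. It is only an illustration: it assumes that the inputs are stored in a NumPy array \texttt{X} (one row per record) with outputs \texttt{y}, that the user supplies a routine \texttt{fit\_model} returning a fitted deterministic model with a \texttt{predict} method, and that the few records left over when $N$ is not divisible by $w_j$ are simply kept in place.

\begin{verbatim}
import numpy as np

def ddr_ensemble(X, y, widths, fit_model):
    # widths = (w_1, ..., w_p) with w_1 = 1
    order = np.arange(len(y))                    # current order of the records
    for w in widths:
        size = len(y) // w                       # cluster size (remainder kept in place)
        models = []
        new_order = order.copy()
        for k in range(w):
            idx = order[k * size:(k + 1) * size]
            m = fit_model(X[idx], y[idx])        # one expectation model per cluster
            models.append(m)
            res = y[idx] - m.predict(X[idx])     # residuals within the cluster
            new_order[k * size:(k + 1) * size] = idx[np.argsort(res)]
        order = new_order
    return models, order              # final ensemble and final order of the records
\end{verbatim}

For a new input, the predictions of the returned models are treated as a sample from the conditional distribution of the output.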
Now, given some input $X$, the models of the ensemble can be used to calculate $W$ different outputs. These outputs can be used to calculate some statistics, for example, the mean or the standard deviation. Furthermore, these outputs can be used to build the so-called empirical cumulative distribution function (ECDF) of the output. This ECDF will obviously be input-dependent and is the ultimate goal of the uncertainty quantification problem for the considered stochastic system.
It should be noted that if $W$ is relatively small, the ECDF will be rather coarse. A technique resembling a `sliding window' concept can be used to build a finer ECDF. Having the final order of data records (i.e. after the final resorting), additional models $M_{k}^\mathrm{cdf}$, $k \in \left\lbrace 1, \ldots, W_\mathrm{cdf} \right\rbrace$ can be built, such that each model is identified on a set of consecutive records, which contains $r$ records and which starts from the record with index $\left(dk-d+1\right)$. It can be seen that these additional models are no longer identified on disjoint subsets, but rather on overlapping subsets, each shifted by $d$ records with respect to the neighbouring subsets. Given some input $X$, a finer ECDF can then be constructed using the outputs of these additional models (or the additional ensemble).
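
A corresponding sketch of the `sliding window' ensemble is given below; it reuses the final record order returned by the previous sketch and the same hypothetical \texttt{fit\_model} routine.

\begin{verbatim}
def ddr_cdf_models(X, y, order, r, d, fit_model):
    # overlapping windows of r consecutive records (in the final DDR order),
    # each window shifted by d records with respect to the previous one
    models, start = [], 0
    while start + r <= len(order):
        idx = order[start:start + r]
        models.append(fit_model(X[idx], y[idx]))
        start += d
    return models
\end{verbatim}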
\subsection{DDR concept explained} \label{subsec:DDRExplained}
The concept of detecting multi-modality by the proposed algorithm is illustrated schematically in figure \ref{fig:example}. The columns of different output values can be interpreted as multiple outputs for a single argument ($8$ outputs per one input in the figure) or as outputs of neighbouring input points located in the disjoint hyper-spheres of the vector input, projected onto an axis. The red lines are the expectation models in the ensemble. The line densities within each column approximately match the densities of the output values, which follows from the construction of the ensemble of models: the first model ($M_{1,1}$, not illustrated) will use all data points and will pass through the expectation values for each column, the data points will be subdivided into two clusters (`above' and `below' the model), and the subsequent models will be built only using the data points of the corresponding cluster. Suppose now that the bagging algorithm is applied instead to the data of figure \ref{fig:example}, as suggested in e.g. \cite{Lakshmi2017}. In this case, multiple models built on random samples of $60\%$ to $80\%$ of all points, or $100\%$ of the points with random initialisation, will form a cloud near the expectation values for each column, which will not approximate the probabilistic properties of the output. For expectation models with the output continuously dependent on the input, the gradual change in the distribution of the output is crucial for the accuracy of the DDR ensemble. The figure shows only $4$ lines to illustrate the concept; the dividing steps in the DDR algorithm continue until the residual errors become sufficiently small.
\begin{figure}
\caption{A schematic illustration of the concept of the proposed algorithm. The blue crosses are multiple output values for different inputs. Red lines are individual DDR expectation models.}
\label{fig:example}
\end{figure}
\section{Numerical example} \label{sec:simulation}
The expectation model (i.e. the deterministic component of the ensemble) chosen for this experiment was the Kolmogorov-Arnold representation \cite{Arnold1957,Kolmogorov1956}: \begin{equation} \hat{y}^i = \sum_{k=1}^{n} \varPhi^k \left( \sum_{j=1}^{m} f^{kj} \left(X^i_j\right) \right) , \label{eq:Kolmogorov} \end{equation} where $\hat{y}^i$ is the calculated model output of the $i$-th record, $X^i_j$ denote the $j$-th component of input vector $X^i$, functions $f^{kj}: \mathbb{R} \to \mathbb{R} \in C\left[0,1\right]$ and $\varPhi^k: \mathbb{R} \to \mathbb{R} \in C\left(\mathbb{R}\right)$ constitute the model. In the original work \cite{Arnold1957,Kolmogorov1956}, it has been shown that the model with $n=2m+1$ can represent any continuous multivariate function. This decomposition can be used for a general problem of data modelling, where the output is a continuous function of the inputs\footnote{More recently the restrictions on continuity have been somewhat relaxed, e.g. see discussion in \cite{Ismailov2008}.}. To handle the model, the underlying functions, $f^{kj}$ and $\varPhi^k$, must be specified in a form allowing computations. In the previous publication by the authors \cite{Polar2021}, it has been suggested to use the piecewise-linear representation of the functions. Having such functions, the identification of the model is reduced to estimation of a finite number of the nodal values (i.e. points where the functions change the slope). The identification algorithm is described in detail in \cite{Polar2021}. Also, as shown in \cite{Polar2021}, the descriptive capabilities of the Kolmogorov-Arnold model are similar to that of the neural networks; furthermore, it has been used in a variety of applications \cite{Bryant2008,Liu2015}.
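To make the structure of the model concrete, the snippet below evaluates \eqref{eq:Kolmogorov} when the inner and outer functions are stored as nodal values of piecewise-linear functions on equidistant grids. The shapes and the random nodal values are placeholders chosen for illustration; the actual identification of the nodal values follows \cite{Polar2021} and is not reproduced here.
\begin{verbatim}
import numpy as np

m, n = 5, 11                # number of inputs and of outer terms (e.g. n = 2m+1)
p_in, p_out = 5, 7          # nodes per inner / outer function
inner_nodes = np.linspace(0.0, 1.0, p_in)             # inputs assumed scaled to [0,1]
inner_vals = np.random.uniform(-1, 1, (n, m, p_in))   # nodal values of f^{kj}
outer_nodes = np.linspace(-m, m, p_out)               # range of the inner sums
outer_vals = np.random.uniform(-1, 1, (n, p_out))     # nodal values of Phi^k

def ka_model(x):
    """y_hat = sum_k Phi^k( sum_j f^{kj}(x_j) ) with piecewise-linear f and Phi."""
    y = 0.0
    for k in range(n):
        s = sum(np.interp(x[j], inner_nodes, inner_vals[k, j]) for j in range(m))
        y += np.interp(s, outer_nodes, outer_vals[k])
    return y

print(ka_model(np.random.uniform(0, 1, m)))
\end{verbatim}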
The evaluation of the performance of the DDR algorithm is made using the synthetic data, since it requires simulating the cumulative distribution functions of the outputs using the Monte-Carlo technique for a given record, which is not possible if data recordings of a real physical system are taken. The generated synthetic data is a set of records with observed input vectors $X^i$ and corresponding output scalars $y^i$. The outputs are computed by formula $y^i = \mathcal{F}\left(X^i + \xi^i\right)$, where inputs $X^i$ are perturbed by a uniformly-distributed noise $\xi^i$ and $\mathcal{F}$ is given below. When the ensemble of models is obtained and a sample of the outputs is computed for particular $X^i$, this sample can be compared to the probability distribution or the expected value of $y^i$.
A synthetic example must be challenging enough to test the proposed procedure to the limit. This means that the stochastic system must be such that the statistical behaviour of the output changes significantly as a function of the inputs --- the probability density function (PDF) changes qualitatively not only from symmetric to non-symmetric, but also by changing the number of minima/maxima as the inputs are changing. Thus, the following system is taken: \begin{equation} \begin{split}
&y = \frac{2 + 2 X_3^*}{3\pi} \left( \operatorname{arctan}\left( 20\left( X_1^* - \frac{1}{2} + \frac{X_2^*}{6}\right)\exp\left(X_5^*\right) \right) + \frac{\pi}{2} \right) + \\
&\vphantom{y} + \frac{2 + 2 X_4^*}{3\pi} \left( \operatorname{arctan}\left( 20\left( X_1^* - \frac{1}{2} - \frac{X_2^*}{6}\right)\exp\left(X_5^*\right) \right) + \frac{\pi}{2} \right) , \\
&X_j^* = X_j + 0.4 \left(C_j - 0.5\right) , \quad j \in \left\lbrace 1, \ldots, 5 \right\rbrace , \end{split} \label{eq:formula2} \end{equation} where $C_j \sim \operatorname{unif}\left(0,1\right)$ are uniformly distributed random variables. If $C_j$ are constants (i.e. the system becomes deterministic), then the Kolmogorov-Arnold representation builds an accurate model with near $100\%$ accuracy, which means that there is almost no epistemic uncertainty.
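A sketch of a data generator implementing \eqref{eq:formula2} in Python is given below (the function and variable names are illustrative choices, not taken from the authors' implementation):
\begin{verbatim}
import numpy as np

def generate(n, seed=0):
    """Records (X^i, y^i) of the stochastic system defined above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, (n, 5))                       # observed inputs
    Xs = X + 0.4 * (rng.uniform(0.0, 1.0, (n, 5)) - 0.5)    # X_j^* = X_j + 0.4 (C_j - 0.5)
    a1 = np.arctan(20 * (Xs[:, 0] - 0.5 + Xs[:, 1] / 6) * np.exp(Xs[:, 4])) + np.pi / 2
    a2 = np.arctan(20 * (Xs[:, 0] - 0.5 - Xs[:, 1] / 6) * np.exp(Xs[:, 4])) + np.pi / 2
    y = (2 + 2 * Xs[:, 2]) / (3 * np.pi) * a1 + (2 + 2 * Xs[:, 3]) / (3 * np.pi) * a2
    return X, y

X, y = generate(10**6)      # the experiments below use N = 10^6 records
\end{verbatim}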
For uniformly-random $C_j$, the probability density of the output indeed significantly changes depending on the inputs. In figure \ref{fig:atanGeom}, in the insets (blue figures), four examples of probability densities of $y$ are shown for four different inputs: \begin{align*}
&X^1 = \begin{bmatrix} 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \end{bmatrix} , \\
&X^2 = \begin{bmatrix} 0.65 & 0 & 0.5 & 0.5 & 0.5 \end{bmatrix} , \\
&X^3 = \begin{bmatrix} 0.68 & 1 & 0.5 & 0.5 & 0.5 \end{bmatrix} , \\
&X^4 = \begin{bmatrix} 0.74 & 1 & 0.5 & 0.5 & 1 \end{bmatrix} , \end{align*} corresponding to subfigures (a)-(d), respectively. The PDFs are built using the Monte-Carlo sampling of $10^5$ points.
\begin{figure}
\caption{The empirical cumulative distribution functions (ECDFs) for the considered stochastic system obtained using the Monte-Carlo sampling (black) and using the realisations of the DDR algorithm (grey). Averages over the realisations are shown in red. The corresponding probability density functions (PDFs) obtained using the Monte-Carlo sampling are shown in the insets. Subfigures (a)-(d) correspond to inputs $X^1$-$X^4$.}
\label{fig:atanGeom}
\end{figure}
To benchmark the DDR procedure, a total of $40$ runs of the programme have been performed. During each run, a dataset of $N = 10^6$ records has been generated, the ensemble of models has been constructed using the DDR algorithm, and the ECDFs for the four points given above have been calculated using the `sliding' window technique. For data generation, inputs $X_j \sim \operatorname{unif}\left(0,1\right)$ were taken. The structure of the Kolmogorov-Arnold model has been selected based on a separate parametric study --- $5$ and $7$ equidistant nodes for the inner and the outer functions, respectively. The noise-reduction parameter of the identification of the Kolmogorov-Arnold model has been selected to be $\mu = 0.002$. During the identification of each model, $4$ runs through the dataset have been performed. To construct the ensemble of models, $w \in \left\lbrace 1, 2, 3, 5, 7, 11, 17, 23, 29 \right\rbrace$ divisions have been made. For the `sliding' window technique, parameters have been selected to be $r = 30000$ and $d = 5000$.
In figure \ref{fig:atanGeom}, the four inputs given above are considered: the ECDFs built using the Monte-Carlo sampling are shown in black colour and the ECDFs obtained using the DDR procedure (with the `sliding' window technique) are shown in grey colour. Furthermore, an average ECDF across all realisations (i.e. the average of the grey curves) is shown in red colour. For the purpose of the discussion, the Monte-Carlo sampling ECDFs are referred to as the `exact' ones. It can be seen that the ensemble of models reproduces the major features of the exact CDFs qualitatively and predicts the range of the output well quantitatively. The subfigures show different representative scenarios: for input $X^1$, the ensemble ECDF is very close to the exact CDF; for input $X^2$, the ensemble ECDF is somewhat far from the exact CDF quantitatively, although it qualitatively reproduces the change of the slope; for inputs $X^3$ and $X^4$, the ensemble ECDF is rather close to the exact CDF, although somewhat smoothed.
The DDR algorithm builds the ensemble of models using the specifically created set of disjoints. Therefore, it is crucial to emphasise its advantage over an ensemble built using disjoints containing random records. To show this, a set of $100$ input points have been randomly generated. For each input point, the true mean and the true standard deviation have been calculated using the Monte-Carlo sampling of $10^5$ points. Next, for each input point, the mean and the standard deviation have been calculated using the DDR ensembles (i.e. built using the DDR algorithm) and the random ensembles (i.e. built using random disjoints). All ensembles had the same number of models --- $29$. In figure \ref{fig:atanMV}, the results are compared and it can be seen that the random ensembles can model the means well, which is a known fact, but cannot model the standard deviations at all. Meanwhile, the DDR ensembles predict the standard deviations both qualitatively and quantitatively.
\begin{figure}
\caption{The mean and the standard deviation for $100$ input points. The points are sorted based on their $X_1$ value. The values are obtained using the Monte-Carlo sampling (red squares), using the ensemble of models built using the realisations of the DDR algorithm (blue crosses) and using the realisations of the ensemble of models built on randomly selected disjoint sets of records (grey pluses).}
\label{fig:atanMV}
\end{figure}
To estimate quantitatively whether the DDR ensembles can give the ECDFs that are close to the real CDFs (built using the Monte-Carlo sampling) across a wide range of inputs, the standard Kolmogorov-Smirnov goodness test with level $\alpha = 0.05$ has been used. This test quantifies the distance between the distribution functions and indicates whether they can be regarded to be sampled from the same reference distribution. For the $100$ input points generated above, the DDR ensembles have failed the Kolmogorov-Smirnov test at $31.8 \pm 4.3$ points, when compared to the Monte-Carlo sampling. This might seem to be a large percentage, but the underlying system has specifically been chosen to be extremely challenging for the modelling. Furthermore, the random ensembles have failed the test at $98.8 \pm 1.0$ points.
\section{Comparison to published benchmarks} \label{sec:nonsynthetic}
Each new method should certainly be tested using at least a few known benchmarks, and the accuracy and the performance should be compared to coding examples written by other researchers, preferably industry experts. Luckily, there is no shortage of data and coding samples in the public domain. Several examples from companies with well-established reputations have been chosen for the comparison.
\subsection{Wine quality (Keras)} \label{subsec:winequality}
The code and the data are taken from online documentation `Probabilistic Bayesian Neural Network'\footnote{https://keras.io/examples/keras\_recipes/bayesian\_neural\_networks/}. The testing dataset --- wine quality\footnote{https://archive.ics.uci.edu/ml/datasets/wine+quality} is frequently used for benchmarking. The number of records is $4898$, the number of inputs is $11$, the predicted output (wine quality) is an integer between $0$ and $10$, the training subset is $85\%$ of the full dataset, the remaining $15\%$ are used for the validation.
The Keras example returns the expectation and the standard deviation for each record of the validation subset. The predicted expectations were compared to the actual values, and the root mean square error (RMSE) was used as the accuracy metric. The resulting RMSE was approximately $0.75$ for the actual range of variation of the output $[3, 9]$. The standard deviations could not be compared to the actual values, since only one actual output was available per record. For that reason, the mean values of all returned standard deviations were computed and are shown in the table below in the second column from the left.
When the same data was processed by the DDR algorithm, the ensemble of models has been built; thus, the samples of multiple possible outputs were computed for each validation record and were used for computing the expectations and the standard deviations. The same accuracy metrics were applied and the results are shown in the third and the forth columns of the table below. The results given by both algorithms show surprising similarity, which rarely happens for completely different algorithms. The different rows in the table below are different experiments to illustrate the stability of the results.
\begin{center}
\begin{tabular}{||c c c c||}
\hline
\multicolumn{2}{||c}{Keras} & \multicolumn{2}{c||}{DDR} \\
\hline
RSME & STDV & RSME & STDV \\ [0.5ex]
\hline\hline
0.75 & 0.75 & 0.75 & 0.76 \\
\hline
0.74 & 0.74 & 0.71 & 0.76 \\
\hline
0.76 & 0.75 & 0.77 & 0.76 \\ [1ex]
\hline
\end{tabular} \end{center}
While the accuracy of both methods is approximately similar, the execution time is significantly different. The C\# implementation of the DDR algorithm needs approximately $7$ seconds for processing, while the Keras example in Python requires approximately $2$ minutes on the same computer.
\subsection{Test for the variance} \label{subsec:variance}
The previous experiment assessed the accuracy of the expectations, but not of the standard deviations, about which it can only be concluded that they match the DDR result. In order to assess the variance, the wine quality data was replaced by the records generated by equation \eqref{eq:formula2}. The original code was slightly modified\footnote{https://github.com/andrewpolar/Benchmark5} to be able to work with a different dataset. The training subset was $10^4$ records, the validation subset was $100$ records. The actual expectation and variance were generated for each validation record by the Monte-Carlo simulation and, in this experiment, both values are available for the accuracy assessment.
The Pearson correlation coefficient between the actual and the modelled values was chosen as the accuracy metric. The results are shown in the table below. Different rows show results for different executions of the code. The DDR code in C\# is available online\footnote{https://github.com/andrewpolar/TKAR}.
\begin{center}
\begin{tabular}{||c c c c||}
\hline
\multicolumn{2}{||c}{Keras} & \multicolumn{2}{c||}{DDR} \\
\hline
Expectations & Variances & Expectations & Variances \\ [0.5ex]
\hline\hline
0.98 & 0.92 & 0.99 & 0.96 \\
\hline
0.99 & 0.96 & 0.99 & 0.95 \\
\hline
0.98 & 0.95 & 0.99 & 0.96 \\ [1ex]
\hline
\end{tabular} \end{center}
\subsection{Expectation and variance only} \label{subsec:expandvar}
When obtaining the full probability distribution density of the outputs for individual inputs is not the goal, the input-dependent variances can be modelled similar to the expectations. After expectation model $M_\mathrm{E}$ is obtained, it can be used for the generation of new output $v_i$ for the training dataset as $v_i = \left( y_i - M_\mathrm{E}(X_i) \right)^2$. Then, new model $M_\mathrm{V}$, which is built using $X_i$ as inputs and $v_i$ as outputs, will constitute the model for the variance.
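A minimal sketch of this two-model approach is shown below, with a generic scikit-learn regressor standing in for the Kolmogorov-Arnold expectation model used in the paper:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_expectation_and_variance(X, y):
    m_e = GradientBoostingRegressor().fit(X, y)     # expectation model M_E
    v = (y - m_e.predict(X)) ** 2                   # v_i = (y_i - M_E(X_i))^2
    m_v = GradientBoostingRegressor().fit(X, v)     # variance model M_V
    return m_e, m_v

# m_e.predict(x) estimates E[y|x];
# np.sqrt(np.maximum(m_v.predict(x), 0.0)) estimates the standard deviation of y|x.
\end{verbatim}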
Although it is a very simple and straightforward way, the experiments with data generated using equation \eqref{eq:formula2} showed good accuracy. The synthetic data allowed to generate the actual variance, which was compared to the result returned by $M_\mathrm{V}$ and the BNN. The accuracy metric was the Pearson correlation coefficient. In both cases (BNN and DDR), it was approximately between $0.96$ and $0.98$ for the unseen data (not used in the training). That clearly shows, that when there are no reasons to assume multi-modality of the outputs, or when the actual distribution is simply not a target, two deterministic models ($M_\mathrm{E}$ and $M_\mathrm{V}$) can be a quick and simple solution.
\subsection{Modelling of a social system} \label{subsec:socialsystem}
In this example, the outcomes of the English Premier League football matches were modelled. The predicted value was the goal difference as a real number, which was used to make a bet on any of `home win', `draw' or `away win'. The expectation model was the Kolmogorov-Arnold representation. The ensemble of expectation models returned a sample of values, which were used for the estimation of the probabilities for the betting choices. The bets were selected according to the maximum possible monetary gain: \begin{equation} \hat{M}_\rho = \hat{P}_\rho W_\rho - \left(1 - \hat{P}_\rho\right) B_\rho , \quad\quad \rho \in \left\lbrace \mathrm{h}, \mathrm{d}, \mathrm{a} \right\rbrace , \label{eq:bettingdecide} \end{equation} where $B_\rho$ is the bet amount (i.e. money that the gambler gives to the bookmaker), $W_\rho$ is the gain amount in the case of a win (i.e. the bookmaker gives $W_\rho + B_\rho$ to the gambler in this case), $\hat{P}_\rho$ is the probability of the event, $\hat{M}_\rho$ is the estimated profit, which can also be negative depending on the bet, the gain amount and the probability. Subscripts $\mathrm{h}, \mathrm{d}, \mathrm{a}$ stand for `home', `draw' and `away', respectively. Having made the bet, the actual outcome was then used to determine whether the bet has been successful and the total amount for the entire season was then computed.
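A sketch of this decision rule is given below, where the probabilities are estimated from the sample of predicted goal differences returned by the DDR ensemble; the $\pm 0.5$ band used to classify a predicted goal difference as a draw is an illustrative choice for this sketch, not taken from the actual model.
\begin{verbatim}
import numpy as np

def choose_bet(goal_diff_sample, W, B):
    """W and B are dicts of gain and stake for 'h', 'd', 'a'."""
    s = np.asarray(goal_diff_sample)
    P = {"h": np.mean(s > 0.5),
         "d": np.mean(np.abs(s) <= 0.5),
         "a": np.mean(s < -0.5)}                          # estimated probabilities
    M = {r: P[r] * W[r] - (1.0 - P[r]) * B[r] for r in P} # expected profit of each bet
    best = max(M, key=M.get)
    return best, M
\end{verbatim}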
The predicted seasons were 2018-2019 and 2019-2020, and the training data included the $14$ preceding seasons. The inputs were the outcomes of the matches against the same opponent on the same field (either at home or away). For example, the goal differences in Team A vs Team B and Team A vs Team C somehow point to a possible outcome of Team B vs Team C. By using a large set of similar observations, the predictions become more accurate, but the underlying system is still stochastic. The detailed explanation of the model is left out of the main text of this paper; those who are interested may find properly commented code and the data published by the authors\footnote{https://github.com/andrewpolar/premierleague3}\footnote{https://github.com/andrewpolar/premierleague7}\footnote{https://github.com/andrewpolar/premierleague17}.
The DDR modelling returned a statistically positive balance, approximately $20\%$ of the total amount used for the betting. It was compared to the bagging approach, where individual models were built using random sampling. Bagging ended up with a very low profit of $0$ to $5\%$, and completely random betting led to small losses, negative $5\%$, due to bookmakers' commissions.
The prediction of the outcomes of the football matches is a popular topic in literature, see e.g. \cite{Yavuz2021,Odachowski2013}. The positive monetary balance due to data modelling is not a surprise, but a commonly reported result. The bookmakers are aware that the artificial intelligence gives a big advantage and usually disallow it for the bettors. That does not mean that the bookmakers' models are weak. They have to make their bets attractive to the public, which is biased, and, therefore, the bets sometimes must be adjusted contrary to the bookmakers' models.
\section{Conclusions} \label{sec:conclusion}
The divisive data resorting (DDR) algorithm presented in this paper is capable of identifying the multi-modality of the distributions of the outputs of stochastic systems, which follows from its construction. Other probabilistic methods can recognise the multi-modality theoretically, but this capability cannot be clearly discerned from the theoretical steps of the inference of posterior distributions.
Comparison to simple bagging showed a clear advantage of DDR. While, for the given example, $68\%$ of output samples generated by the DDR ensemble passed the Kolmogorov-Smirnov goodness-of-fit test, only about $1\%$ of output samples passed the same test for the bagging ensemble. When the bagging and the DDR algorithms were applied to predictions of football outcomes, the DDR clearly showed a monetary gain, while bagging showed only an insignificant improvement compared to purely random betting. One must also note that, in this modelling example, the success critically depends on the accuracy of the predicted probabilities.
The efficiency of the DDR algorithm relies on the distribution of the output depending gradually on the input of the system. In the case when this is not fulfilled (the statistical properties of the output changing randomly with the input), any method (including DDR) may return questionable results.
Another advantage of the DDR algorithm is openness to any data modelling technique, assuming only that the model identification uses the minimisation of the residual errors. It can be applied to models, for example, consisting of systems of algebraic equations, which use physical properties of the modelled phenomenon. In contrast to this, when the most popular and thoroughly-tested libraries are used (TensorFlow, Keras, PYRO), the users have no choice other than using a neural network as a model.
The C\# implementations used in the examples of this article are about $700$ lines of code and use only built-in compiler standard types, while the Python implementations in the examples required installation and configuration of multiple dedicated libraries. Finally, there is a significant difference in the run-time performance --- for the provided examples, the results were obtained $10$ to $20$ times faster using the C\# implementation of the DDR algorithm.
\end{document} | arXiv |
Flux of electric field through a closed surface with no charge inside? [duplicate]
Gauss's law not making sense (1 answer)
I'm reading the Feynman lectures on electromagnetism, and in Vol. II, Chapter 1-4 he talks about the flux of the electric field and says that the flux of $E$ through a closed surface is equal to the net charge inside divided by $\epsilon_0$.
If there are no charges inside the surface, even though there are charges nearby outside the surface, the average normal component of $E$ is zero, so there is no net flux through the surface
I cannot see why the net flux is zero here. Say we have a closed unit sphere at the origin with no charge inside it and at the point $(2, 0, 0)$ we have some charge $q$.
Well doesn't this charge then define the electric field $E$ for the system and it will flow into the unit sphere on the right hand side, and out of the unit sphere on the left hand side?
Furthermore, as the strength of the electric field decreases with distance from $q$ won't we have more flux going into the right hand side which is closer to the charge $q$, and less flux leaving through the left hand side as it is further away - and hence we should have a non-zero flux?
Can someone please explain what I am misinterpreting here?
electrostatics electric-fields gauss-law
Ron Ronson
@RobJeffries I think perhaps that should be an answer – David Z♦ Sep 16 '15 at 16:15
possible duplicate of Gauss's law not making sense – Rob Jeffries Sep 17 '15 at 6:45
You have more flux per unit area going into the right side, but the area on the right side is smaller. These two balance out so that the total flux is the same going in as going out.
The part of the sphere which has electric flux going in, traced in red, is less than half the area of the sphere.
Incidentally, flux per unit area is just the electric field.
David Z
Thanks...but although the flux "spreads out" on the left hand side means that less is going out there....is there not EVEN less again going out due to the fall-off in electrical force with distance from the source? – Ron Ronson Sep 16 '15 at 15:59
@Riggs I think you're double-counting the effect. The "spreading out" of the flux is exactly the same physical effect as the $1/r^2$ decrease of the electric field. – David Z♦ Sep 16 '15 at 16:15
The strength of the field near the charge is higher (because it is closer to the charge) but this electric field is entering through a smaller area $S_1$ whereas the electric field leaving the sphere is relatively weaker (as it is further away from the charge) but leaves through a larger area $S_2$ as visualised below. Hence, the flux going in is exactly equal to the flux going out, and the net flux is 0.
To understand this better, you might see the proof of Gauss's Law using solid angles. You'll see that the area through which the field leaves or enters is proportional to $r^2$ whereas the field itself is proportional to $\frac{1}{r^2}$ and hence, the flux leaving or entering doesn't depend on the position of the charge. (I mean, it does not depend on where it is placed inside or where it is placed outside)
Note: This picture isn't that of the sphere and charge described exactly, it's sort of flipped but that won't be a problem, I think.
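If you want a quick numerical sanity check of this cancellation, here is a short Python/NumPy sketch (units chosen with $k=1$, so the flux for an enclosed charge is $4\pi q$; the function name and grid sizes are just my choices):

    import numpy as np

    def flux_through_sphere(charge_pos, q=1.0, R=1.0, n_theta=400, n_phi=400):
        # numerically integrate E . n dA over a sphere of radius R centred at the origin
        dth = np.pi / n_theta
        dph = 2.0 * np.pi / n_phi
        theta = (np.arange(n_theta) + 0.5) * dth          # midpoint grid in theta
        phi = np.arange(n_phi) * dph                      # periodic grid in phi
        T, P = np.meshgrid(theta, phi, indexing="ij")
        n = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)
        d = R * n - np.asarray(charge_pos, dtype=float)   # from the charge to the surface point
        r = np.linalg.norm(d, axis=-1, keepdims=True)
        E = q * d / r**3                                  # Coulomb field with k = 1
        return np.sum(np.sum(E * n, axis=-1) * np.sin(T)) * R**2 * dth * dph

    print(flux_through_sphere([2.0, 0.0, 0.0]))  # charge outside: ~ 0
    print(flux_through_sphere([0.3, 0.0, 0.0]))  # charge inside:  ~ 4*pi = 12.566...

The first value comes out near zero (only quadrature error), the second near $4\pi$, exactly as Gauss's law predicts.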
Aritra Das
Solve for: -4x+40 < 4x+16
Expression: $-4x+40 < 4x+16$
Move the variable to the left-hand side and change its sign
$-4x+40-4x < 16$
Move the constant to the right-hand side and change its sign
$-4x-4x < 16-40$
Collect like terms
$-8x < 16-40$
$-8x < -24$
Divide both sides of the inequality by $-8$ and flip the inequality sign
$\begin{align*}&x > 3 \\&\begin{array} { l }x \in \langle3, +\infty\rangle\end{array}\end{align*}$
Math 641-600 — Fall 2018
Assignment 1 - Due Wednesday, September 5, 2018
Read sections 1.1-1.4
Do the following problems.
Section 1.1: 4, 5, 7(a), 8, 9(a) (Do the first 3, but without software.)
Section 1.2: 9
Let $U$ be a subspace of an inner product space $V$, with the inner product and norm being $\langle\cdot,\cdot \rangle$ and $\|\cdot\|$. Also, let $v$ be in $V$. (Do not assume that $U$ is finite dimensional or use arguments requiring a basis.)
Fix $v\in V$. Show that there is a unique vector $p \in U$ that satisfies $\min_{u\in U}\|v-u\| = \|v-p\|$ if and only if $v-p\in U^\perp$.
Suppose $p$ exists for every $v\in V$. Since $p$ is uniquely determined by $v$, we may define a map $P: V \to U$ via $Pv:=p$. Show that $P$ is a linear map and that $P$ satisfies $P^2 = P$. ($P$ is called an orthogonal projection. The vector $p$ is the orthogonal projection of $v$ onto $U$.)
If the projection $P$ exists, show that for all $w,z\in V$, $\langle Pw,z\rangle = \langle Pw,Pz\rangle= \langle w,Pz\rangle$. Use this to show that $U^\perp= \{w\in V\colon Pw=0\}$.
Suppose that the projection $P$ exists. Show that $V=U\oplus U^\perp$, where $\oplus$ indicates the direct sum of the two spaces. (This is easy, but important.)
Let $U$ and $V$ be as in the previous exercise. Suppose that $U$ is finite dimensional and that $B=\{u_1,u_2,\ldots,u_n\}$ is an ordered basis for $U$. In addition, let $G$ be the $n\times n$ matrix with entries $G_{jk}= \langle u_k,u_j\rangle$.
Show that $G$ is positive definite and thus invertible.
Let $v\in V$ and $d_k := \langle v,u_k\rangle$. Show that $p$ exists for every $v$ and is given by $p=\sum_j x_j u_j\in U$, where the $x_j$'s satisfy the normal equations, $d_k = \sum_{j=1}^n G_{kj}x_j$.
Show that if B is orthonormal, then $Pv=\sum_j \langle v,u_j\rangle u_j$.
Assignment 2 - Due Wednesday, September 12, 2018.
Read the notes on Banach spaces and Hilbert Spaces, and sections 2.1 and 2.2 in Keener.
Section 1.2: 10(a,b) Hint for 10(a): You may choose the norms $\| \phi_j\|$ and $\|\psi_k\|$ to be any (convenient) positive numbers.
Section 1.3: 2, 3
Find the set of biorthogonal vectors corresponding to the set $\left\{\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix}2 \\ 1\\ 0\end{pmatrix}\right\}$. Suppose that $\{\mathbf a_1, \mathbf a_2, \ldots, \mathbf a_n\}$ is a set of linearly independent vectors in $\mathbb R^n$. What is the corresponding set of biorthogonal vectors?
This problem concerns several important inequalities.
Show that if $\alpha, \beta$ are positive and $\alpha + \beta = 1$, then for all $u, v \ge 0$ we have
$u^\alpha v^\beta \le \alpha u + \beta v$.
Let $x, y \in \mathbb{R}^n$, let $p > 1$ and define $q$ by $q^{-1} = 1 - p^{-1}$. Prove Hölder's inequality,
$\big|\sum_j x_j y_j\big| \le \|x\|_p \, \|y\|_q$.
Hint: use the inequality in part (a), but with appropriate choices of the parameters. For example, $u = \left(|x_j|/\|x\|_p\right)^p$.
Let $x, y \in \mathbb{R}^n$, and let $p > 1$. Prove Minkowski's inequality,
$\|x+y\|_p \le \|x\|_p + \|y\|_p$.
Use this to show that $\|x\|_p$ defines a norm on $\mathbb{R}^n$. Hint: you will need to use Hölder's inequality, along with a trick.
Find the $QR$ factorization for the matrix $A=\begin{pmatrix} 1 & 2 & 0\\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}$. Use it to solve $Ax=b$, where $b=\begin{pmatrix} 1\\ 3\\ 7 \end{pmatrix}$.
Let $\mathbf y\in \mathbb R^n$. Use the normal equations for a minimization problem to show that the minimizer of $\| \mathbf y - A\mathbf x\|$ is given by $\mathbf x_{min} = R^{-1}Q^\ast \mathbf y$. ($Q^\ast=Q^T$, since we are dealing with real scalars.)
Let U be a unitary, n×n matrix. Show that the following hold.
$\langle Ux, Uy \rangle = \langle x, y \rangle$
The eigenvalues of U all lie on the unit circle, |λ|=1.
Eigenvectors corresponding to distinct eigenvalues are orthogonal.
Read Keener's sections 2.1 and the notes on Lebesgue integration.
Before one can define a norm or inner product on some set, one has to show that the set is a vector space -- i.e., that linear combinations of vectors are in the space. Do this for the spaces of sequences below. The inequalities from the previous assignment will be useful.
$\ell^2=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty |x_j|^2 < \infty\}$
$\ell^p=\{x=\{x_n\}_{n=1}^\infty\colon \sum_{j=1}^\infty |x_j|^p < \infty\}$, all $1\le p<\infty$, $p\ne 2$.
$\ell^\infty = \{x=\{x_n\}_{n=1}^\infty\colon \sup_{j\ge 1}|x_j|<\infty \}$.
Show that, for all $1\le p <\infty$, $\|x\|_p := \big(\sum_{j=1}^\infty |x_j|^p \big)^{1/p}$ defines a norm on $\ell^p$.
Show that $\ell^2$ is an inner product space, with $\langle x,y\rangle = \sum_{j=1}^\infty x_j \bar y_j$ being the inner product, and that with this inner product it is a Hilbert space. Bonus: show that it is separable.
Let $C^1[0,1]$ be the set of all continuously differentiable real-valued functions on $[0,1]$. Show that $C^1[0,1]$ is a Banach space if $\|f\|_{C^1} := \max_{x\in [0,1]}|f(x)| + \max_{x\in [0,1]}|f'(x)|$.
Let $f\in C^1[0,1]$. Show that $\|f\|_{C[0,1]}\le C\|f\|_{H^1[0,1]}$, where $C$ is a constant independent of $f$ and $\|f\|_{H^1[0,1]}^2 := \int_0^1\big( |f(x)|^2 + |f'(x)|^2\big)dx$.
A measurable function whose range consists of a finite number of values is a simple function — see Lebesgue integration, p. 5. Use the definition of the Lebesgue integral in in terms of Lebesgue sums, from eqn. 2, to show that, in terms of this definition, the integral of a simple function ends up being the one in eqn. 3 on p. 6.
Read the notes on Lebesgue integration and on Orthonormal sets and expansions.
Section 2.1: 10
Section 2.2: 1 (Use $w=1$.), 8(a,b,c) (FYI: the formula for $T_n(x)$ has an $n!$ missing in the numerator.), 9
This problem is aimed at showing that the Chebyshev polynomials form a complete set in $L^2_w$, which has the weighted inner product \[ \langle f,g\rangle_w := \int_{-1}^1 \frac{f(x)\overline{g(x)}dx}{\sqrt{1 - x^2}}. \]
Show that the continuous functions are dense in $L^2_w$. Hint: if $f\in L^2_w$, then $ \frac{f(x)}{(1 - x^2)^{1/4}}$ is in $L^2[-1,1]$.
Show that if $f\in L^\infty[-1,1]$, then $\|f\|_w \le \sqrt{\pi}\|f\|_\infty$.
Follow the proof given in the notes on Orthonormal Sets and Expansions showing that the Legendre polynomials form a complete set in $L^2[-1,1]$ to show that the Chebyshev polynomials form a complete orthogonal set in $L^2_w$.
Let $F(s) = \int_0^\infty e^{-st} f(t)\,dt$ be the Laplace transform of $f \in L^1([0,\infty))$. Use the Lebesgue dominated convergence theorem to show that $F$ is continuous from the right at $s = 0$. That is, show that
$\lim_{s\downarrow 0} F(s) = F(0) = \int_0^\infty f(t)\,dt$
Let $f_n(x) = n^{3/2}\, x\, e^{-nx}$, where $x \in [0,1]$ and $n = 1, 2, 3, \ldots$.
Verify that the pointwise limit of $f_n(x)$ is $f(x) = 0$.
Show that $\|f_n\|_{C[0,1]} \to \infty$ as $n \to \infty$, so that $f_n$ does not converge uniformly to $0$.
Find a constant $C$ such that for all $n$ and $x$ fixed, $f_n(x) \le C x^{-1/2}$, $x \in (0,1]$.
Use the Lebesgue dominated convergence theorem to show that
$\lim_{n\to\infty} \int_0^1 f_n(x)\,dx = 0$.
Let $U:=\{u_j\}_{j=1}^\infty$ be an orthonormal set in a Hilbert space $\mathcal H$. Show that the two statements are equivalent. (You may use what we have proved for o.n. sets in general; for example, Bessel's inequality, minimization properties, etc.)
$U$ is maximal in the sense that there is no non-zero vector in $\mathcal H$ that is orthogonal to $U$. (Equivalently, $U$ is not a proper subset of any other o.n. set in $\mathcal H$.)
Every vector in $\mathcal H$ may be uniquely represented as the series $f=\sum_{j=1}^\infty \langle f, u_j\rangle u_j$.
Assignment 5 - Due Wednesday, October 3, 2018.
Read sections 2.2.2-2.2.4 and the notes on Approximation of Continuous Functions.
In proving the Weierstrass Approximation, we did the case $x>j/n$. Do the case $x < j/n$.
Let $\delta>0$. We define the modulus of continuity for $f\in C[0,1]$ by $\omega(f,\delta) := \sup_{\,|\,s-t\,|\,\le\, \delta,\,s,t\in [0,1]}|f(s)-f(t)|$.
Fix $\delta>0$. Let $S_\delta = \{ \epsilon >0 \colon |f(t) - f(s)| < \epsilon \, \forall\ s,t \in [0,1], \ |s - t| \le \delta\}$. In other words, for given $\delta$, $S_\delta$ is in the set of all $\epsilon$ such that $|f(t) - f(s)| < \epsilon$ holds for all $|s - t|\le \delta$. Show that $\omega(f, \delta) = \inf S_\delta$
Show that $\omega(f,\delta)$ is non decreasing as a function of $\delta$. (Or, more to the point, as $\delta \downarrow 0$, $\omega(f,\delta)$ gets smaller.)
Show that $\lim_{\delta \downarrow 0} \omega(f,\delta) = 0$.
Let $g$ be $C^2$ on an interval $[a,b]$. Let $h = b - a$. Show that if $g(a) = g(b) = 0$, then $ \|g\|_{C[a,b]} \le (h^2/8) \|g''\|_{C[a,b]}$. Give an example that shows that $1/8$ is the best possible constant.
Use the previous part to show that if $f \in C^2[0,1]$, then the equally spaced linear spline interpolant $s_f$ satisfies $\|f - s_f\|_{C[0,1]} \le (8n^2)^{-1}\|f''\|_{C[0,1]}$.
Let $f(x)$ be continuous on $[0,1]$ and let $s_f(x)$ be the linear spline for $f$ with equally spaced points $j/n$, where $j=0, 1,2,\ldots,n$.
Show that $\int_0^1s_f(x)dx$ is equal to the trapezoidal (quadrature) rule for approximating $\int_0^1f(x)dx$.
Let $E=\big|\int_0^1f(x)dx - \int_0^1 s_f(x)dx\big|$ be the quadrature error. If $f\in C^2[0,1]$, use the previous problem to show that $E\le (8n^2)^{-1}\|f''\|_{C[0,1]}$.
Show that, in terms of the Bernstein polynomials $\beta_{j,n}$, \[ x^k = \sum_{j=k}^n\frac{\binom{j}{k}}{\binom{n}{k}}\beta_{j,n}(x), \] where $k=0,1, 2, \ldots, n$.
Assignment 6 - Due Wednesday, October 10, 2018.
Read sections 2.2.2-2.2.4, the notes on Fourier series, and the notes on the discrete Fourier transform.
Prove this: Let $g$ be a $2\pi$ periodic function (a.e.) that is integrable on each bounded interval in $\mathbb R$. Then, $\int_{-\pi+c}^{\pi+c} g(u)du$ is independent of $c$. In particular, $\int_{-\pi+c}^{\pi+c} g(u)du=\int_{-\pi}^\pi g(u)du$.
Compute the Fourier series for the following functions.
$f(x) = x$, $0 \le x \le 2\pi$
$f(x) = |x|$, $-\pi \le x \le \pi$
$f(x) = e^{2x}$, $-\pi \le x \le \pi$ (complex form).
Compute the complex form of the Fourier series for $f(x) = e^{2x}$, $0 \le x \le 2\pi$. Why is this different from 3(c) above? Use this Fourier series and Parseval's theorem to sum the series $\sum_{k=-\infty}^\infty (4+k^2)^{-1}$.
The following problem is aimed at showing that $\{e^{inx}\}_{n=-\infty}^\infty$ is complete in $L^2[-\pi,\pi]$.
Consider the series $\sum_n c_n e^{inx}$, where $\sum_n |c_n| < \infty$. Show that $\sum_n c_n e^{inx}$ converges uniformly to a continuous function $f(x)$ and that the series is the Fourier series for $f$. (It's possible for a trigonometric series to converge pointwise to a function, but not be the Fourier series for that function.)
Use the previous problem to show that if $f$ is a continuous, piecewise smooth $2\pi$-periodic function, then the FS for $f$ converges uniformly to $f$. (Hint: Show that if $f'\in L^2[-\pi,\pi]$, then series $\sum_{k=-\infty}^\infty k^2|c_k|^2$ is convergent.)
Apply this result to show that the FS for a linear spline $s(x)$, which satisfies $s(-\pi)=s(\pi)$, is uniformly convergent to $s(x)$. Show that such splines are dense in $L^2[-\pi,\pi]$.
Show that $\{e^{inx}\}_{n=-\infty}^\infty$ is complete in $L^2[-\pi,\pi]$.
Let $\mathcal S_n$ be the set of $n$-periodic, complex-valued sequences.
Suppose that $\mathbf x \in \mathcal S_n$. Show that $ \sum_{j=m}^{m+n-1}{\mathbf x}_j = \sum_{j=0}^{n-1}{\mathbf x}_j $. (This is the DFT analogue of problem 1 above.)
Prove the Convolution Theorem for the DFT. (See Notes on the Discrete Fourier Transform, pg. 3.)
Read section 2.2.7 and the notes on Splines and Finite Element Spaces.
Section 2.2: 18(a,d). (Both of these use the formula $N_m(x)=\frac{x}{m-1}N_{m-1}(x)+\frac{m-x}{m-1}N_{m-1}(x-1)$, together with induction).
Let $f(t)=10\cos(2t)$ and consider the ODE $u''+2u'+2u=f(t)$.
Verify that the general solution to the equation is $u=Ae^{-t}\cos(t)+ Be^{-t}\sin(t) +2\sin(2t)-\cos(2t)$; consequently the "steady state" periodic solution is $u_p(t)=2\sin(2t)-\cos(2t)$.
Let $n=2^L$ and $h=\frac{2\pi}{n}$. For $L=3,5,8,\text{and}\ 10$, sample $f$ at $jh$, $j=0\ldots n-1$; let $f_j:=f(jh)$. Use your favorite program to find the FFT of $\{f_0,f_1,\ldots,f_{n-1}\}$ and, using the method outlined in the notes on the discrete Fourier transform, find $\hat u_k$. Finally, apply your program's inverse FFT to the $\hat u_k$'s to obtain the approximation $u_j$ to $u_p$ at $jh$. For each $L$, plot the $u_j$'s and the $u_p(jh)$'s. The $u_j$'s may have a small complex part due to roundoff error; just plot the real parts of the $u_j$'s you found by the procedure above. Be sure to label your plots.
For each $L$ plot the error $\{ |u_0-u_p(0)|,|u_1-u_p(h)|,\ldots, |u_{n-1}-u_p((n-1)h)|\}$; again, label your plots.
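For reference, one way to carry out the FFT steps above is sketched in Python/NumPy below; the language, the variable names, and the FFT conventions are choices made here and are not part of the assignment.

    import numpy as np

    def spectral_solution(L):
        n = 2 ** L
        h = 2 * np.pi / n
        t = h * np.arange(n)
        f = 10 * np.cos(2 * t)
        k = np.fft.fftfreq(n, d=1.0 / n)        # integer wave numbers 0, 1, ..., -1
        u_hat = np.fft.fft(f) / ((1j * k) ** 2 + 2 * (1j * k) + 2)
        u = np.real(np.fft.ifft(u_hat))         # discard the round-off imaginary part
        u_exact = 2 * np.sin(2 * t) - np.cos(2 * t)
        return np.max(np.abs(u - u_exact))

    for L in (3, 5, 8, 10):
        print(L, spectral_solution(L))          # errors stay at round-off level: f has only frequency 2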
Let α, ξ, η be n-periodic sequences, and let a, x, y be column vectors with entries $a_0, \ldots, a_{n-1}$, etc. Show that the convolution η = α∗ξ is equivalent to the matrix equation y = Ax, where A is an n×n matrix whose first column is $\mathbf a$, and whose remaining columns are $\mathbf a$ with the entries cyclically permuted. Such matrices are called cyclic. Use the DFT and the convolution theorem to find the eigenvalues of a cyclic matrix. Use this method, along with your favorite software, to find the eigenvalues and eigenvectors of the matrix below. (For this matrix, $\mathbf a =(3\ 1\ 4\ 5)^T$.) \[ \begin{pmatrix} 3 &5 &4 &1 \\ 1 &3 &5 &4 \\ 4 &1 &3 &5\\ 5 &4 &1 &3 \end{pmatrix} \]
Let $S^{1/n}(1,0)$ be the space of piecewise linear splines, with knots at $x_j=j/n$, and let $N_2(x)$ be the linear B-spline ("tent function", see Keener, p. 81 or my notes on splines.)
Let $\phi_j(x):= N_2(nx +1 -j)$. Show that $\{\phi_j(x)\}_{j=0}^n$ is a basis for $S^{1/n}(1,0)$.
Let $S_0^{1/n}(1,0):=\{s\in S^{1/n}(1,0):s(0)=s(1)=0\}$. Show that $S_0^{1/n}(1,0)$ is a subspace of $S^{1/n}(1,0)$ and that $\{\phi_j(x)\}_{j=1}^{n-1}$ is a basis for it.
Read section 2.2.7, the notes on Splines and Finite Element Spaces, and on Bounded Operators & Closed Subspaces.
Section 2.2: 25(a,b), 26(b), 27(a)
Consider the space of cubic Hermite splines $S_0^{1/n}(3,1)\subset S^{1/n}(3,1)$ that satisfy $s(0)=s(1)=0$. Show that $\langle u,v\rangle = \int_0^1 u''v''dx$ defines an inner product on $S_0^{1/n}(3,1)$.
We want to use a Galerkin method to numerically solve the boundary value problem (BVP): $-u'' = f(x)$, $u(0) = u(1) = 0$, $f \in C[0,1]$. Let $H^1_0$ be the space of all functions $g:[0,1]\to \mathbb R$ such that $g'$ is in $L^2[0,1]$ and $g(0)=g(1)=0$. Define an inner product on $H^1_0$ by $ \langle f,g\rangle_{H^1_0}=\int_0^1 f'g'dx$. You are given that $H^1_0$ is a Hilbert space.
Weak form of the problem. Suppose that $v\in H^1_0$. Multiply both sides of the equation $-u''=f$ by $v$ and use integration by parts to show that $ \langle u,v\rangle_{H^1_0} = \langle f,v\rangle_{L^2[0,1]}$. This is called the ``weak'' form of the BVP.
Conversely, suppose that $u\in H^1_0$ is also in $C^2[0,1]$, and show that if for all $v\in H^1_0$, $u$ satisfies \[ \langle u,v\rangle_{H^1_0} = \langle f,v\rangle_{L^2[0,1]}, \] then $-u''=f$.
Let $S_0^{1/n}(1,0) \subset S^{1/n}(1,0)$ be the set of all linear splines that are $0$ at $x=0$ and $x=1$; note that $S_0^{1/n}(1,0)$ a subspace of $H^1_0$. Let $s_n\in S^{1/n}_0(1,0)$ satisfy $\|u-s_n\|_{H^1_0} = \min_{s\in S^{1/n}_0(1,0)}\|u - s\|_{H^1_0}$; thus, $s_n$ is the least-squares approximation to $u$ from $S^{1/n}_0(1,0)$. Expand $s_n$ in the basis from Assignment 7, problem 4(b): $s_n = \sum_{j=1}^{n-1}\alpha_j\phi_j$ Use the normal equations for the problem in connection with the weak form of the problem to show that the $\alpha_j$'s satisfy $G\alpha = \beta$, where $\beta_j= \langle f,\phi_j\rangle_{L^2[0,1]}$ and $G_{kj} =\langle \phi_j,\phi_k\rangle_{H_0}$
Show that $ G=\begin{pmatrix} 2n& -n &0 &\cdots &0\\ -n & 2n& -n &0 &\cdots \\ 0&-n& 2n& \ddots &\ddots \\ \vdots &\cdots &\ddots &\ddots &-n\\ 0 &\cdots &0 &-n &2n \end{pmatrix}. $
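For reference, a small numerical illustration of the last two parts of this problem in Python/NumPy; the choice $f(x) = \pi^2\sin(\pi x)$ (for which the exact solution is $u=\sin(\pi x)$) and the midpoint quadrature used for $\beta$ are choices made here, not part of the assignment.

    import numpy as np

    n = 16
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = lambda t: np.pi**2 * np.sin(np.pi * t)

    # stiffness matrix G_{kj} = <phi_j, phi_k>_{H^1_0}
    G = n * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))

    # beta_j = int_0^1 f phi_j dx, midpoint rule on the two elements supporting phi_j
    beta = np.array([h * 0.5 * (f(x[j] - h / 2) + f(x[j] + h / 2)) for j in range(1, n)])

    alpha = np.linalg.solve(G, beta)          # coefficients of s_n in the basis {phi_j}
    print(np.max(np.abs(alpha - np.sin(np.pi * x[1:n]))))   # small: s_n is close to u at the knots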
Assignment 9 - Due Wednesday, November 7, 2018.
Read sections 3.1-3.3, the notes on the projection theorem, the Riesz representation theorem, etc, and the notes on an example of the Fredholm alternative and finding a resolvent
Section 3.2: 3(d) (Assume the appropriate operators are closed and that λ is real.)
Section 3.3: 2 (Assume the appropriate operators are closed and that λ is real.)
Let $V$ be a subspace of a Hilbert space $\mathcal H$. Show that $(V^\perp)^\perp= \overline{V}$, where $\overline{V}$ is the closure of $V$ in $\mathcal H$. Use this to show that $\mathcal H = \overline{V}\oplus V^\perp$.
Let V be a Banach space. Show that a linear operator L:V → V is bounded if and only if L is continuous.
Let $k(x,y)$ be defined by \[ k(x,y) = \left\{ \begin{array}{cl} y, & 0 \le y \le x\le 1, \\ x, & x \le y \le 1. \end{array} \right. \]
Let $L$ be the integral operator $L\,f = \int_0^1 k(x,y)f(y)dy$. Show that $L:C[0,1]\to C[0,1]$ is bounded and that the norm $\|L\|_{C[0,1]\to C[0,1]}\le 1$. Bonus (5 pts.): Show that $\|L\|_{C[0,1]\to C[0,1]}=1/2$.
Show that $k(x,y)$ is a Hilbert-Schmidt kernel and that $\|L\|_{L^2\to L^2} \le \sqrt{\frac{1}{6}}$.
Finish the proof of the Projection Theorem: If for every $f\in \mathcal H$ there is a $p\in V$ such that $\|p-f\|=\min_{v\in V}\|v-f\|$ then $V$ is closed.
Let L be a bounded linear operator on Hilbert space $\mathcal H$. Show that these two formulas for $\|L\|$ are equivalent:
$\|L\| = \sup \{\|Lu\| : u \in {\mathcal H},\ \|u\| = 1\}$
$\|L\| = \sup \{|\langle Lu,v\rangle| : u,v \in {\mathcal H},\ \|u\|=\|v\|=1\}$
Assignment 10 - Due Wednesday, November 14, 2018.
Read sections 3.3-3.5, and my notes on Compact Operators, and on the Closed Range Theorem.
Section 3.4: 2(b)
Consider the Hilbert space $\mathcal H=\ell^2$ and let $S=\{x=(x_{1}\ x_{2}\ x_3\ \ldots)\in \ell^2: \sum_{n=1}^\infty (n^2+1)|x_n|^2 <1\}$. Show that $S$ is a precompact subset of $\ell^2$.
Let $S$ be a bounded subset (not a subspace!) of a Hilbert space $\mathcal H$. Show that $S$ is precompact if and only if every sequence in $S$ has a convergent subsequence. (Note: If $S$ is just precompact, the limit point of the sequence may not be in $S$, because $S$ may not be closed.)
Show that every compact operator on a Hilbert space is bounded.
Consider the finite rank (degenerate) kernel
$k(x,y) = \varphi_1(x)\psi_1(y) + \varphi_2(x)\psi_2(y)$, where $\varphi_1 = 6x-3$, $\varphi_2 = 3x^2$, $\psi_1 = 1$, $\psi_2 = 8x - 6$.
Let $Ku = \int_0^1 k(x,y)u(y)\,dy$. Assume that $L = I-\lambda K$ has closed range.
For what values of $\lambda$ does the integral equation
$u(x) - \lambda\int_0^1 k(x,y)u(y)\,dy = f(x)$
have a solution for all $f \in L^2[0,1]$?
For these values, find the solution $u = (I - \lambda K)^{-1}f$ — i.e., find the resolvent.
For the values of λ for which the equation does not have a solution for all f, find a condition on f that guarantees a solution exists. Will the solution be unique?
In the following, H is a Hilbert space and B(H) is the set of bounded linear operators on H. Let L be in B(H) and let N:= sup {|< Lu, u>| : u ∈ H, ||u|| = 1}.
Verify the identity < L(u+αv), u+αv> − < L(u-αv), u-αv> = 2α< Lu,v>+2α< Lv,u>, where |α| = 1.
Show that N ≤ ||L||.
Let L be a self-adjoint operator on H, which may be real or complex. Use (a) and (b) to show that N= ||L||. (Hint: In the complex case, choose α so that α< Lu,v> = |< Lu,v>|. For the real case, use $\alpha=\pm 1$, as required.)
Suppose that H is a complex Hilbert space. If L ∈ B(H), then use (a) and (b) to show that
N ≤ ||L|| ≤ 2N.
For the real Hilbert space, H = R2, let $L = \begin{pmatrix} 0& 1\\ -1 & 0 \end{pmatrix}. $ Show that $||L|| = 1$, but $N=0$.
Assignment 11 - Due Monday, November 26, 2018.
Read sections 3.3-3.5, and my notes on and my notes on Spectral Theory for Compact Operators.
Section 3.4: 2(a)
Let $L\in \mathcal B(\mathcal H)$. Suppose that for all $f\in N(L)^\perp$ there is a constant $c>0$ such that $\|Lf\|\ge c\|f\|$, where $c$ is independent of $f$. Show that $R(L)$ is closed.
Finish the proof of Proposition 2.5 in my notes on Compact Operators
Consider the kernel $k(x,y)=\min(x,y)$, $0\le x,y\le 1$.
Show that $Ku=\int_0^1 k(x,y)u(y)dy$ is a compact, self-adjoint operator operator.
Let $U(x)=\int_0^x u(y)dy - \int_0^1 u(y)dy$. Show that $Ku(x) = -\int_0^x U(y)dy$, and that $ \int_0^1 Ku(x)\,u(x)dx = \int_0^1 U(x)^2dx$.
Use this identity to show that $0$ is not an eigenvalue of $K$ — i.e., $N(K)=\{0\}$.
Show that there is no constant $c>0$ such that $c\|u\|\le \|Ku\|$. Explain why this implies $K^{-1} \not\in \mathcal B(\mathcal H)$. (Hint: consider the sequence $u_n(x) = \sqrt{2} \cos(n\pi x)$.) The point here is that $\lambda=0$ is not an eigenvalue of $K$, but is in the spectrum of $K$.
Assignment 12 - Due Wednesday, December 5, 2018.
Read sections 4.1, 4.2, 4.3.1, 4.3.2, 4.5.1 and my notes on and my notes on example problems for distributions.
Section 3.4: 2(d) (You may use problem 4 from Assignment 11.)
Section 4.2: 1, 3, 4
Let $Ku(x)=\int_0^1 k(x,y)u(y)dy$, where $k(x,y)$ is defined by $ k(x,y) = \left\{ \begin{array}{cl} y, & 0 \le y \le x\le 1, \\ x, & x \le y \le 1. \end{array} \right.$
Show that $0$ is not an eigenvalue of $K$.
Show that $Ku(0)=0$ and $(Ku)'(1)=0$.
Find the eigenvalues and eigenvectors of $K$. Explain why the (normalized) eigenvectors of $K$ are a complete orthonormal basis for $L^2[0,1]$.
Let $Lu=-u''$, $u(0)=0$, $u'(1)=2u(1)$.
Show that the Green's function for this problem is \[ G(x,y)=\left\{ \begin{array}{rl} -(2y-1)x, & 0 \le x < y \le 1\\ -(2x-1)y, & 0 \le y< x \le 1. \end{array} \right. \]
Let $Kf(x) := \int_0^1G(x,y)f(y)dy$. Show that $K$ is a self-adjoint Hilbert-Schmidt operator, and that $0$ is not an eigenvalue of $K$.
Use (b) and the spectral theory of compact operators to show the orthonormal set of eigenfunctions for $L$ form a complete set in $L^2[0,1]$.
Updated 11/26/2018. | CommonCrawl |
Decomposition analysis of earnings inequality in rural India: 2004–2012
Shantanu Khanna1,
Deepti Goel ORCID: orcid.org/0000-0002-3876-9590 2,3 &
René Morissette4
We analyze the changes in earnings of paid workers (wage earners) in rural India from 2004/05 to 2011/12. Real earnings increased at all percentiles, and the percentage increase was larger at the lower end. Consequently, earnings inequality declined. Recentered influence function decompositions show that throughout the earnings distribution, except at the very top, both changes in "worker characteristics" and in "returns to these characteristics" increased earnings, with the latter having played a bigger role. Decompositions of inequality measures reveal that although the change in characteristics had an inequality-increasing effect, chiefly attributable to increased education levels, inequality declined because workers at lower quantiles experienced greater improvements in returns to their characteristics than those at the top.
JEL Classification: J30, J31, O53
In their discussion of India's economic growth, Kotwal et al. (2011) point to the existence of two Indias: "One of educated managers and engineers who have been able to take advantage of the opportunities made available through globalization and the other—a huge mass of undereducated people who are making a living in low productivity jobs in the informal sector—the largest of which is still agriculture." This paper is about the second India that mainly resides in its rural parts. Agriculture, the mainstay of the rural economy, continues to employ the largest share of the Indian workforce, but its contribution to gross value added (GVA) is much smaller. In 2011, the employment shares of agriculture, industry, and services were 49, 24 and 27 %, respectively, whereas their shares in GVA were 19, 33, and 48 %, respectively (GOI 2015). In addition, between 2004/05 and 2011/12, real gross domestic product (GDP) in these sectors grew at 4.2, 8.5 and 9.6 % per annum, respectively, making agriculture the slowest growing sector of the economy (authors' calculations based on RBI 2015). Given these figures, the concern about whether high overall GDP growth has benefitted those at the bottom, and to what extent they have benefitted compared to those at the top, is even more pertinent for rural India. We therefore focus on rural India and examine how real earnings of paid workers (wage earners) evolved over the 7-year period between 2004/05 and 2011/12.
Several studies have documented that along with the high growth rates of GDP that have characterized the Indian economy since the 1980s, there has been an increase in inequality.Footnote 1 However, most of these studies have either focused on consumption expenditure (Sen and Himanshu 2004; Cain et al. 2010; Motiram and Vakulabharanam 2012; Jayaraj and Subramanian 2015; Datt et al. 2016)Footnote 2 or on earnings of paid workers in urban India (Kijima 2006; Azam 2012a). Two notable exceptions are Hnatkovska and Lahiri (2013) and Jacoby and Dasgupta (2015). Hnatkovska and Lahiri (2013) focus on wage comparisons between rural and urban areas between 1983 and 2010. They find that urban agglomeration led to a massive increase in urban labor supply that in turn reduced the rural-urban wage gap. Unlike Hnatkovska and Lahiri (2013), we focus exclusively on rural India to provide a more detailed picture of the changes within this sector. Jacoby and Dasgupta (2015) adopt the supply-demand-institutions (SDI) framework pioneered by Katz and Murphy (1992) and Bound and Johnson (1992), to decompose wage changes between 1993 and 2011 in both rural and urban India. We use a very different approach, namely, the recentered influence function (RIF) decomposition developed by Firpo, Fortin, and Lemieux (2009) to study earnings evolution in rural India.Footnote 3 Jacoby and Dasgupta (2015) decompose the change in an indirect measure of wage inequality, namely, the relative wages of educated and uneducated workers, into changes in employment shares of different demographic groups and changes in the industrial composition. In this paper, we focus on direct measures of inequality such as the Gini and the 90/10 percentile ratio, and decompose changes in these measures into changes in worker characteristics and changes in returns to these characteristics. Our finding that the change in returns to characteristics is driving the decline in earnings inequality in rural India is a novel one. Moreover, we document changes not just at the mean but also at various quantiles. It is important to do so because several studies have found that earnings inequality is mainly concentrated at the upper end. For India, Azam (2012a) and Kijima (2006) find this for urban wage earners and Banerjee and Piketty (2005) find it for income tax payers. We use unconditional quantile regressions to account for the effects of workers' characteristics at different quantiles and thereby make inferences about their effects on earnings inequality. Finally, we use the RIF decompositions to divide the overall change in earnings inequality into a composition effect (the component due to changes in the distribution of worker characteristics) and a structure effect (the component due to changes in returns to these characteristics).
We find that during the period from 2004 to 2012, real earnings among paid workers increased at all percentiles and the percentage increase was greater at lower percentiles. Consequently, earnings inequality declined in rural India. The RIF decompositions reveal that throughout the earnings distribution, except at the very top, both the composition effect and the structure effect increased earnings, with changes in the latter having played a bigger role. Decompositions of inequality measures reveal that in spite of the composition effect having had an inequality-increasing role, inequality fell because workers at lower quantiles experienced greater improvements in returns to their characteristics than those at the top. Earnings inequality increased as workers acquired higher levels of education. At the same time, lower returns to higher education reduced inequality.
The rest of the paper is organized as follows. Section 2 discusses the methodology used to analyze the change in earnings. Section 3 describes the data and the analysis sample. Section 4 presents the results, and Section 5 concludes.
We briefly explain the RIF regression for unconditional quantiles, followed by the RIF decomposition technique. For a detailed exposition of this and other decomposition techniques, see Fortin et al. 2011.
Unconditional quantile regressions
Unconditional quantile regressions (UQR) introduced by Firpo et al. (2009) help us examine the marginal effects of covariates on the unconditional quantiles of an outcome variable. UQR differ from the traditional quantile regressions (Koenker and Bassett 1978) in that the latter examine the marginal effects on the conditional quantiles. For instance, if we observe that the conditional quantile regression coefficients for college education increase as we move from the first to the ninth decile, we can say that having more people with a college education would increase earnings dispersion within a group of individuals having the same vector of covariate values. However, in order to claim that college education increases overall earnings dispersion (among all individuals irrespective of their covariates), we need to rely on unconditional quantile regressions. To understand UQRs, we begin with the concept of an influence function (IF).
The IF of any distributional statistic represents the influence of an observation on that statistic. Specifically, let w denote earnings and let q θ denote the θth quantile of the unconditional earnings distribution. Then,
$$ \mathrm{IF}\left(w, q_{\theta}\right) = \left(\theta - \mathbb{I}\left\{w \le q_{\theta}\right\}\right)/f_w\left(q_{\theta}\right) \qquad (1) $$
where \( \mathbb{I}\left\{\cdot\right\} \) is an indicator function and \( f_w \) is the density of the marginal distribution of earnings. The RIF is obtained by adding back the statistic to the IF. Thus, the RIF for the θth quantile is given by:
$$ \mathrm{RIF}\left(w, q_{\theta}\right) = q_{\theta} + \mathrm{IF}\left(w, q_{\theta}\right) = q_{\theta} + \left(\theta - \mathbb{I}\left\{w \le q_{\theta}\right\}\right)/f_w\left(q_{\theta}\right) \qquad (2) $$
Note that the expected value of the RIF is \( q_{\theta} \) itself. The conditional expectation of the RIF modelled as a function of certain explanatory variables, X, gives us the UQR or RIF regression model:
$$ E\left[\mathrm{RIF}\left(w, q_{\theta}\right) \mid \boldsymbol{X}\right] = m_{\theta}\left(\boldsymbol{X}\right) \qquad (3) $$
In its simplest form,
$$ E\left[\mathrm{RIF}\left(w, q_{\theta}\right) \mid \boldsymbol{X}\right] = \boldsymbol{X}\boldsymbol{\beta} \qquad (4) $$
where β represents the marginal effect of X on the θth quantile. β can be estimated by ordinary least squares (OLS) wherein the dependent variable is replaced by the estimated RIF. The RIF is estimated by plugging the sample quantile, \( \widehat{q_{\theta }} \), and the empirical density, \( \widehat{f_w\left({q}_{\theta}\right)} \), the latter estimated using kernel methods, in Eq. (2).
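As an illustration, the following minimal Python sketch implements this two-step procedure (estimate the RIF of a quantile, then regress it on covariates by OLS). The names log_earnings and covariates are placeholders; a faithful replication would also incorporate the NSSO sampling weights and cluster-robust standard errors.

import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

def rif_quantile(w, theta):
    """Recentered influence function of the theta-th quantile of w, as in Eq. (2)."""
    q = np.quantile(w, theta)
    f_q = gaussian_kde(w)(q)[0]           # kernel density estimate at the sample quantile
    return q + (theta - (w <= q)) / f_q

def uqr(w, X, theta):
    """Unconditional quantile (RIF) regression: OLS of RIF(w; q_theta) on X, as in Eq. (4)."""
    return sm.OLS(rif_quantile(w, theta), sm.add_constant(X)).fit()

# Example usage (placeholders): fit = uqr(log_earnings, covariates, 0.5); fit.params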
RIF decomposition
The RIF decomposition divides the overall change in any distributional statistic into a structure effect (due to the changes in returns to characteristics/covariates) and a composition effect (due to the changes in the distribution of covariates). Compared to other decomposition methods such as the Machado-Mata (Machado and Mata 2005), the RIF decomposition has the added advantage of further dividing the structure and composition effects into the contribution of each covariate. In this way, it is closest in spirit to the decomposition method proposed by Blinder (1973) and Oaxaca (1973).
In the case of quantiles, the RIF decomposition is carried out using the estimated UQR/RIF regression coefficients explained in Section 2.1. The RIF regression coefficients for each year (T) are given by:
$$ \widehat{\boldsymbol{\beta}}_{T,\theta} = \left(\sum_{i\in T}\boldsymbol{X}_{Ti}\cdot \boldsymbol{X}_{Ti}^{\prime}\right)^{-1}\sum_{i\in T}\widehat{\mathrm{RIF}}\left(w_{Ti}, q_{T\theta}\right)\cdot \boldsymbol{X}_{Ti}, \qquad T=1,2 \qquad (5) $$
The aggregate decomposition for any unconditional quantile θ is given by:
$$ \widehat{\varDelta}_{\mathrm{Total}}^{\theta} = \underset{\widehat{\varDelta}_{\mathrm{Structure}}^{\theta}}{\underbrace{\overline{\boldsymbol{X}}_2\left(\widehat{\boldsymbol{\beta}}_{2,\theta} - \widehat{\boldsymbol{\beta}}_{1,\theta}\right)}} + \underset{\widehat{\varDelta}_{\mathrm{Composition}}^{\theta}}{\underbrace{\left(\overline{\boldsymbol{X}}_2 - \overline{\boldsymbol{X}}_1\right)\widehat{\boldsymbol{\beta}}_{1,\theta}}} \qquad (6) $$
To examine the contribution of each covariate, the two terms in (6) can be further written as:
$$ \widehat{\varDelta}_{\mathrm{Composition}}^{\theta} = \sum_{k=1}^{K}\left(\overline{X}_{2k} - \overline{X}_{1k}\right)\widehat{\beta}_{1k,\theta} \qquad (7) $$
$$ \widehat{\varDelta}_{\mathrm{Structure}}^{\theta} = \sum_{k=0}^{K}\overline{X}_{2k}\left(\widehat{\beta}_{2k,\theta} - \widehat{\beta}_{1k,\theta}\right) \qquad (8) $$
Equations (7) and (8) represent the detailed decompositions of the composition and structure effects, respectively.
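A minimal sketch of how (6)-(8) can be computed once the year-specific RIF regression coefficients are in hand; the design matrices X1, X2 are assumed to carry a constant in column 0, and all names are illustrative rather than taken from the paper's code.

import numpy as np

def rif_decomposition(X1, X2, b1, b2):
    """Aggregate and detailed RIF decomposition, Eqs. (6)-(8).

    X1, X2 : (n_T, K+1) design matrices for years 1 and 2 (constant in column 0)
    b1, b2 : (K+1,) estimated RIF regression coefficients for years 1 and 2
    """
    x1bar, x2bar = X1.mean(axis=0), X2.mean(axis=0)
    structure_k = x2bar * (b2 - b1)        # Eq. (8); element k=0 is the constant term
    composition_k = (x2bar - x1bar) * b1   # Eq. (7); the constant column contributes zero
    return {
        "structure": structure_k.sum(),      # first term of Eq. (6)
        "composition": composition_k.sum(),  # second term of Eq. (6)
        "structure_by_covariate": structure_k,
        "composition_by_covariate": composition_k,
    }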
The detailed decomposition of the structure effect has a limitation when categorical variables are included as covariates. The choice of the omitted or reference group (for caste, education, industry, occupation, or state of residence in our analysis) can influence the contribution of each covariate to the structure effect. Since the choice of the reference categories is arbitrary, results of the detailed decomposition can vary. Existing solutions to the omitted category problem come at the cost of interpretability (see Fortin et al. 2011). To ensure the robustness of our results regarding the contribution of factor-specific structure effects, we use several specifications, each of which uses a different set of omitted categories for the categorical variables.
Though the above discussion on RIF decomposition focused on quantiles, it is also applicable to any other distributional statistic. We present the RIF decomposition for quantiles as well as selected inequality measures including the Gini.
We use two rounds of the nationally representative Employment Unemployment Survey (EUS) conducted by the National Sample Survey Organization (NSSO) for the years 2004/05 and 2011/12. Our target population is wage earners between the ages of 15 and 64 (working age), living in rural areasFootnote 4 of 23 major states of India.Footnote 5
In both years, wage earners constituted around 25 % of the rural working age population.Footnote 6 Nominal earnings are converted into real terms (2004/05 prices) using consumer price indices provided by the Labour Bureau, Government of India.Footnote 7 We also trim the real earnings distribution of each year by dropping 0.1 % of observations from the top and the bottom.Footnote 8 Ultimately, our analysis sample consists of 44,634 workers in 2004/05 and 36,050 in 2011/12. This corresponds to about 104 million paid workers in 2004/05 and about 118 million in 2011/12.
In this section we present our findings related to the evolution of the earnings distribution in rural India between 2004/05 and 2011/12.
Changes in the distribution of earnings from paid work
Figure 1 presents the kernel density estimates of the log of real weekly earnings for 2004/05 and 2011/12. The earnings density for each year is skewed to the right, implying that the median earning was less than the mean. Over the 7-year period, the earnings density shifted to the right and became more peaked (less dispersed). The mean real weekly earnings increased from 391 to about 604 rupees, while the median increased from 263 to 457 rupees. For 2004/05, the all-India rural poverty line (defined in terms of minimum consumption expenditure needed to meet a specified nutritional and living standard) was 447 rupees per capita per month (Planning Commission 2014).Footnote 9 Thus, the mean (median) real monthly earnings was 3.5 (2.4) times the poverty line, and in 2011/12 it was 5.4 (4.1) times this value.
Fig. 1 Earnings densities, 2004/05 and 2011/12
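Densities like those in Fig. 1 can be estimated with a weighted kernel density estimator; a minimal sketch follows (the array names are placeholders for the survey earnings and weights):

import numpy as np
from statsmodels.nonparametric.kde import KDEUnivariate

def log_earnings_density(earnings, weights, grid):
    """Weighted kernel density estimate of log real weekly earnings on a grid."""
    kde = KDEUnivariate(np.log(earnings))
    kde.fit(weights=weights, fft=False)   # the FFT-based estimator does not accept weights
    return kde.evaluate(grid)

# Example usage: grid = np.linspace(3, 9, 200); density_2004 = log_earnings_density(e04, w04, grid)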
Changes in earnings inequality
Figure 2 plots the real weekly earnings (in rupees) at each percentile for 2004/05 and 2011/12. At each percentile, earnings were higher in 2011/12 than in 2004/05. The gap between the two curves reveals that the increase in earnings was, in absolute terms (i.e., measured in rupees), greater for higher percentiles. For instance, real weekly earnings increased by 99 rupees at the first decile, 194 rupees at the median, and 307 rupees at the ninth decile. However, as seen in Fig. 3, the percentage increase in earnings was greater at the lower end of the distribution.Footnote 10 For instance, earnings increased by 91 % at the first decile, 74 % at the median, and 44 % at the ninth decile. Thus, earnings inequality―defined in relative rather than absolute terms―declined over the 7-year period.
Fig. 2 Real weekly earnings, by percentile, 2004/05 and 2011/12
Fig. 3 Change in log real weekly earnings, by percentile, 2004/05 to 2011/12
Figure 4 confirms the decline in earnings inequality: It shows that the Lorenz curve of weekly earnings for 2011/12 lies above the one for 2004/05, unambiguously indicating that inequality declined.
Fig. 4 Lorenz curves of real weekly earnings, 2004/05 and 2011/12
Table 1 supplements Figs. 2, 3, and 4 and shows how various summary measures of inequality changed over time. The ratio of the (raw) earnings at the 25th to the 10th percentile was steady at about 1.52. At the middle of the distribution, there was some decrease in inequality as measured by the ratio of the 60th to the 40th percentile. In contrast, the ratio of the 90th to the 75th percentile fell very sharply from 1.72 to 1.53. Thus, it is clear that the decrease in inequality came mainly from changes at the top and middle of the distribution rather than from the bottom.
Table 1 Inequality measures for real weekly earnings from paid work
The decrease in inequality is also reflected in the variance of log earnings and in the Gini coefficients. The Gini of real weekly earnings fell from 0.462 to 0.396.Footnote 11 This is in sharp contrast to the picture in urban India where earnings inequality remained virtually unchanged over the period: The Gini of real weekly earnings in urban India was 0.506 in 2004/5 and 0.499 in 2011/12. Jayaraj and Subramanian (2015) use consumption expenditure data (also from the NSSO) and find that between 2004/05 and 2009/10, the Gini declined from 0.305 to 0.299 in rural India. For urban India, it increased from 0.376 to 0.393. It is noteworthy that while the direction of change in rural inequality that they find using consumption expenditure is the same as what we find using earnings, this is not the case for urban inequality. This makes a strong case for studying both consumption and earnings inequality.
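The summary measures reported in Table 1 can be computed directly from the weighted earnings data; a minimal sketch, with earnings and weights as placeholder arrays for the survey variables:

import numpy as np

def weighted_quantile(x, w, probs):
    order = np.argsort(x)
    x, w = x[order], w[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(probs, cdf, x)

def weighted_gini(x, w):
    order = np.argsort(x)
    x, w = x[order], w[order]
    lorenz = np.cumsum(x * w) / np.sum(x * w)          # cumulative earnings share
    lorenz_prev = np.concatenate(([0.0], lorenz[:-1]))
    return 1.0 - np.sum(w * (lorenz + lorenz_prev)) / np.sum(w)   # 1 - 2 * area under the Lorenz curve

# Example usage (placeholders):
# p10, p90 = weighted_quantile(earnings, weights, [0.10, 0.90]); ratio_90_10 = p90 / p10
# var_log = np.cov(np.log(earnings), aweights=weights)           # variance of log earnings
# gini = weighted_gini(earnings, weights)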
Wage rates or days worked: decomposition of the variance in log earnings
So far our analysis has been about weekly earnings. The EUS also collects data on the number of half-days worked during the week. The following equations illustrate the decomposition of earnings inequality as measured by the variance in log earnings:
$$ \begin{array}{l} \text{Weekly earnings}\ (E) = \text{Average daily wage rate}\ (W) \times \text{Number of days worked}\ (D) \\ \Rightarrow\ \ln(E) = \ln(W) + \ln(D) \\ \Rightarrow\ \underset{1}{\underbrace{\mathrm{Var}\left[\ln(E)\right]}} = \underset{2}{\underbrace{\mathrm{Var}\left[\ln(W)\right]}} + \underset{3}{\underbrace{\mathrm{Var}\left[\ln(D)\right]}} + \underset{4}{\underbrace{2 \times \mathrm{Covariance}\left[\ln(W), \ln(D)\right]}} \end{array} $$
The decomposition tells us how much of the earnings inequality (1) is accounted by inequality of wage rates (2), inequality of workdays (3), and the co-movement of wage rates and workdays (4). We implement this decomposition for both years and then calculate the difference between corresponding terms.Footnote 12 The results are shown in Table 2.
Table 2 Decomposition of earnings inequality
In both years, the covariance between wage rates and days worked was positive, implying that highly paid workers worked a greater number of days. Also, earnings inequality was largely on account of inequality of wage rates rather than inequality of days worked or because highly paid workers also worked for a longer time: Over 70 % of the earnings inequality was due to inequality of wage rates.Footnote 13
The last row of Table 2 presents the decomposition of decline in earnings inequality as seen in the decrease in the variance of log earnings. About 50 % of this decline was due to a decline in inequality of wage rates. The rest was due to a decrease in inequality of days worked (about 30 %) and a weaker relationship between highly paid workers working more number of days (about 20 %).
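A minimal sketch of the variance decomposition underlying Table 2 (array names are placeholders for daily wage rates and days worked per worker; survey weights are omitted for brevity):

import numpy as np

def earnings_variance_decomposition(wage_rate, days_worked):
    """Split Var[ln(E)], with E = W * D, into wage-rate, workday, and covariance terms."""
    log_w, log_d = np.log(wage_rate), np.log(days_worked)
    var_w, var_d = np.var(log_w), np.var(log_d)
    cov_term = 2.0 * np.cov(log_w, log_d, ddof=0)[0, 1]
    return {"total": var_w + var_d + cov_term,   # equals Var[ln(W) + ln(D)] = Var[ln(E)]
            "wage rates": var_w, "days worked": var_d, "covariance": cov_term}

# Differencing the two years' results reproduces the split of the decline reported in Table 2.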
Unconditional quantile regression results
Before moving to the regression results, we present some descriptive statistics in Table 3 for paid workers in rural India. Mean (log) weekly earnings increased over the period. The average age also increased by about 1.7 years, perhaps an indication of later entry into the labor market as more people acquire higher education. There was also an increase in the share of males, married workers, and Muslims. The proportion of those belonging to ST (Scheduled Tribes) and SC (Scheduled Castes) declined.Footnote 14 Education levels rose significantly: The share of illiterates decreased by around 11 percentage points, while the share of each schooling level, including college education, increased.
Table 3 Descriptive statistics, wage earners in rural India
We classify industries into seven categories: agriculture, manufacturing (including mining), construction, utilities, wholesale and retail trade, public administration (including defense), and other services (including education, health, real estate, and finance). Over the period, the major change in the industrial distribution came primarily from agriculture, which saw a 12 percentage point decrease, and construction, which saw a roughly equivalent increase.Footnote 15
Next, we estimate earnings regressions (both OLS and UQR) separately for the years 2004/05 and 2011/12 with the log of real weekly earnings as the dependent variable. The covariates include all characteristics shown in Table 3 and the state of residence.Footnote 16 Age enters the regressions in a quadratic form as a proxy for work experience. "Others", and illiterates, are the omitted categories for caste and education, respectively. Agriculture, and laborers and unskilled workers, are the omitted categories for industry and occupation, respectively. Figures 5 and 6 plot regression coefficients for select covariates. The left column of plots is for 2004/05 and the right for 2011/12. For each selected covariate, UQR regression coefficients are plotted against the corresponding nine deciles. The dashed lines represent the 95 % confidence interval of the coefficients. The solid horizontal line is the OLS coefficient. As we move across deciles, whether coefficients for a particular characteristic are increasing or decreasing reveals the effect of changing the characteristic on wage inequality. An upward slope suggests that increasing the share of workers with that characteristic would increase inequality, while a downward slope would decrease it. It is important to note that these predictions are based on the assumption that the wage structure, i.e., the returns to observed worker characteristics, remains intact as the distribution of characteristics changes. In effect, this amounts to assuming away the presence of general equilibrium effects, a standard assumption made in this literature.
Fig. 5 UQR coefficients for select covariates, 2004/05 and 2011/12
Fig. 6 UQR coefficients for education categories, 2004/05 and 2011/12
The first row of plots in Fig. 5 shows that the coefficients for being male were positive and significant, implying the presence of a gender earnings gap. The UQR male coefficients were decreasing across deciles: In 2011/12, the male coefficient value was 0.69 at the first decile, 0.44 at the median, and 0.40 at the ninth decile. This is termed as the "sticky floor" effect and shows that while men earned more than women throughout the distribution, the penalty for being female was more pronounced at the bottom of the distribution.Footnote 17 The decreasing UQR coefficients also mean that having a greater proportion of men would reduce earnings inequality among wage earners. This was unambiguously true for 2004/05 as the coefficients decline monotonically across deciles, and it was true for the lower part of the 2011/12 distribution.
The second through fourth rows of plots in Fig. 5 show the presence of caste earnings gaps, though we do not see such gaps in all parts of the distribution. In 2004/05, the UQR coefficients for ST, SC, and Other Backward Classes (OBC) vis-à-vis "Others" show that there was an earnings penalty for all three groups at the upper deciles but not at the lower ones.Footnote 18 In 2011/12, the caste penalty for ST persisted, although, unlike 2004/05, it was experienced at the lower deciles. Surprisingly, the caste penalty for SC and OBC disappeared in 2011/12. Interestingly, in the regressions without industry and occupation controls, the caste earnings gap for SC and OBC persisted even for 2011/12. This suggests that in 2011/12, the caste earnings gaps were overwhelmingly because of occupation and industrial segregation by caste.
The fifth row of Fig. 5 indicates that returns to being married moved from being insignificant at lower deciles to being positive at upper ones. Thus, if the proportion of married individuals were to increase, earnings inequality among wage earners would increase. Except at the ninth decile in 2004/05, there was no penalty for being Muslim in both years.
Figure 6 examines coefficients for various education categories vis-à-vis the illiterates. First, there is clear evidence of positive returns to education. Additionally, in 2004/05, for each education category, there was a monotonic increase in returns as we moved up the earnings distribution, with an especially sharp increase at the ninth decile. This pattern persisted in 2011/12 for all categories except primary and middle: For instance, the coefficient of "college and beyond" was 0.22 at the first decile, 0.28 at the median, and 1.7 at the ninth decile. Thus, educating the illiterate population would increase earnings dispersion.Footnote 19 Figure 6 also reveals how the impact of education on earnings dispersion changed over time. The profile of UQR coefficients across deciles was flatter in 2011/12 than what it was in 2004/05 revealing that the inequality enhancing effect of education weakened over the period. The detailed decomposition of the structure effect in Section 4.3.3 shows this more formally.
RIF decomposition results
Next we turn to RIF decompositions to understand the factors behind the changes in the real earnings distribution. We first present the aggregate decomposition followed by the detailed decompositions of the composition and structure effects.
Aggregate decomposition of change in earnings
Figure 7 shows the results of the aggregate decomposition of the change in the (log) real earnings distribution at different vigintiles. We present the decomposition based on the counterfactual that relies on the characteristics of 2004/05 and returns of 2011/12.Footnote 20 For each vigintile, the total difference in log real earnings over the period is plotted (solid line). The downward slope of the total difference graph once again shows that the lower quantiles experienced a larger percentage increase in earnings than the higher quantiles.
Fig. 7 The RIF aggregate decomposition
The total difference is decomposed into the structure (dashed) and the composition effects (dotted). Both components made significant contributions to the overall increase in earnings over the 7-year period. The only exception to this is at the 19th vigintile (95th percentile), where the structure effect is not significant. Thus, the contribution of the structure effect to the overall increase in earnings was positive and much larger than the composition effect at all but the top vigintile.Footnote 21
An important conclusion from the decomposition is that most of the decline in inequality occurred because the returns to characteristics improved a lot more at lower percentiles. In fact, it is clear that while changing characteristics did lead to an improvement in real earnings throughout the distribution, it had an inequality-increasing effect: The composition effect increased sharply after the eighth decile, implying that had "returns to characteristics" been held constant over the period, earnings inequality would have risen.
Table 4 confirms this by decomposing several measures of inequality.Footnote 22 The first column shows the difference between the log of real weekly earnings at the 90th and the 10th percentiles, while the second and the third columns present the 50-10 and 90-50 differences. The final column gives the Gini values for real weekly earnings. The third row presents the difference between the years that is to be decomposed. Aggregate decompositions of all four inequality measures confirm that the structure effect had an inequality decreasing effect, while the composition effect (with the exception of the 50-10 measure which was statistically insignificant) had an inequality-increasing effect. In other words, had labor market characteristics remained the same in 2011/12 as they were in 2004/05, earnings inequality would have dropped: e.g., the Gini coefficient would have dropped from 0.461 to 0.389 instead of the observed Gini of 0.396 in 2011/12. Decompositions of the 90-50 and 50-10 measures reveal that the inequality-increasing effect of the composition effect was mainly coming from changes at the top end of the wage distribution. This is reflected by the larger contribution of the composition effect on the 90-50 measure compared to the 50-10 measure and the fact that the latter is not statistically significant.
Table 4 Decomposition of changes in inequality measures from 2004/05 to 2011/12
In summary, the aggregate decomposition of all inequality measures reveals that the decline in inequality came exclusively from the structure effect, but the detailed decomposition that follows presents a more nuanced picture.
Detailed decomposition of the composition effect
The second panel of Table 4 and Fig. 8 present the detailed decomposition of the composition effect to ascertain which set of covariates were important in driving the total composition effect. Looking at the 90-10 and the Gini, we find that the inequality-increasing effect was mainly driven by changes in the distribution of education, and to a lesser extent of experience and occupation. The same pattern is observed when we focus at the top of the distribution (90-50 measure). However, education and occupation did not play a significant role at the bottom (50-10 measure). On the other hand, the change in the industrial distribution had a significant inequality decreasing effect, confined to the top of the distribution (the change was significant for the 90-50 measure but not for the 50-10). Further decomposing the industry category into its constituents points to a large contribution from the shift into construction. The large shift from agriculture to construction noted earlier decreased earnings inequality. The greater proportion of male workers also contributed to the decline in inequality, mainly driven by changes at the bottom of the distribution (the change was significant for the 50-10 measure but not for 90-50). Changes in the distribution of state of residence, marital status, caste, and religion did not have a major effect on change in inequality.
Fig. 8 Detailed decomposition of the composition effect for select covariates
Before we move to the detailed decomposition of the structure effect, we would like to remark on the inclusion of industry and occupation as separate factors in the decomposition. Changes in the composition of and returns to industry and occupation may be partly driven by changes in education. To that extent, we should not be including them as controls if we are interested in studying the overall contribution of education. Following the decomposition literature, we also estimate Table 4 without industry and occupation controls. The results are in Appendix 1.Footnote 23 Comparing with Table 4, one major difference with regard to the composition effect is that without industry and occupation controls, the change in distribution of education plays a significant role even in the bottom of the distribution (as seen by the 50-10 measure). Otherwise, the conclusions are qualitatively the same.
Detailed decomposition of the structure effect
The bottom panel of Table 4 presents the decomposition of the structure effect. Both the 90-10 and the Gini decompositions reveal that education, occupation, and being married were largely responsible for the negative structure effect. Further, comparing the 50-10 and 90-50 measures shows that for all three characteristics, it was changes in returns at the top end of the distribution that mainly contributed to the overall negative structure effect.
This was also noted in Fig. 6 where the returns to education (with illiterates as the base category) actually declined at the higher end of the wage distribution, whereas returns did not change significantly in the middle. The same is true for the return to higher occupations (with laborers and unskilled workers as the base category). Comparing with Appendix 1 (without industry and occupation controls), the conclusions broadly remain the same.
The contribution of returns to industry in Table 4 is interesting: it changed in such a manner that it had an inequality decreasing effect at the bottom and an inequality-increasing effect at the top as seen by the negative and positive effects for the 50-10 and 90-50 measures, respectively. It is therefore not surprising that it has an insignificant contribution toward the 90-10 measure.
In Table 4, the contribution of the "constant" term to the overall structure effect is large and statistically significant. It is hard to give a meaningful interpretation to it as it depends on the choice of omitted categories for categorical variables. As described in Section 2.2, the choice of omitted category affects the decomposition of the structure effect. We test for the sensitivity of our results vis-à-vis choice of omitted categories by re-estimating Table 4 using two additional specifications presented in Appendix 2. Given that returns to education were largely driving the structure effect, in the first specification we change the omitted category for education from illiterates to the highest educational category, namely, "college and beyond". As seen in Appendix 2, the returns to education are now positive (vis-à-vis college and beyond) and the constant term is now negative. The broad conclusions are therefore the same. In the second specification, we convert all categorical variables into dummy variables by defining the variable to be "0" for the omitted category and defining it to be "1" for the remaining categories.Footnote 24 Education continues to explain a large part of the composition and structure effects.
Robustness check using state poverty lines
Recall that we used the Consumer Price Index – Rural Labourers (CPI-RL) to deflate nominal earnings to 2004/05 prices. These price indices do not account for spatial price adjustment across states. As a robustness check, we use state-level poverty lines computed using the Tendulkar methodology (Planning Commission 2014) which account for spatial variation across states. We replicate Tables 1 and 4 using state-level poverty lines and present them in Appendix 3. Our results are robust to the choice of deflators.
Using nationally representative data from the Employment Unemployment Survey, we examine the changes in real weekly earnings from paid work for rural India from 2004/05 to 2011/12.
For wage earners who constituted about a quarter of the rural working age population, we find that their real earnings increased at all percentiles. Using consumption expenditure data that span the entire population, other studiesFootnote 25 have also documented an improvement in all parts of the distribution. Taken together, there is clear evidence that economic growth in the post-reform period (after the early 1990s) has been accompanied by a reduction in poverty.Footnote 26 At the same time, according to official estimates, in 2011/12, 25.7 % of the rural population was below the poverty line. This figure represents about 216.7 million poor persons, a large number of people living below a minimum acceptable standard.Footnote 27
Our analysis also reveals that earnings inequality in rural India decreased over the 7-year period, and about half of the decline can be accounted for by the decline in daily wage inequality. However, while the rural Gini fell over this period, it remained virtually unchanged in urban India. This suggests that the dynamics of earnings is different for the two sectors. This could be because the underlying structural characteristics are different across the two sectors. For example, while agriculture is the largest employer in rural India, for urban India it is services. It could also be the result of different redistributive policies followed in the two sectors. These aspects need to be recognized when designing future policies to tackle inequality in the two regions.
Aggregate decompositions of the change in inequality measures reveal that the change in returns to worker characteristics was mainly responsible for the decrease in earnings inequality. Further detailed decompositions reveal that higher levels of education in the population contributed to an increase in earnings inequality, while lower returns to higher education contributed to a decrease. Rural India experienced a construction boom during this period that also contributed to the decrease in earnings inequality.
Some studies (Datt et al. 2016; Thomas 2015) have attributed the tightening of the rural casual labor market between 2000 and 2012 to the expansion of schooling and to the construction boom. Others (Azam 2012b; Berg et al. 2015; Imbert and Papp 2015) have found that the MGNREGS (Mahatma Gandhi National Rural Employment Guarantee Scheme), a large-scale employment guarantee scheme initiated in rural India in 2005, led to an increase in casual wages.
One cannot be certain that this trend of rising casual wages and declining earnings inequality will continue into the future. Regardless of the underlying causes of the recent decline in earnings inequality in rural India, volatility in global crop prices and the drought conditions currently experienced by large parts of the country because of two consecutive weak monsoons are important reminders that policies designed to foster employment opportunities and wage growth of unskilled workers outside of agriculture are crucial for improving the economic wellbeing of rural India.
Finally, we end with the caveat that although India has the lowest Gini value among the BRICS countries,Footnote 28 and we find that earnings inequality declined in rural India between 2004/05 and 2011/12, these facts mask extreme deprivations and inequities in access to health care, education, and physical infrastructure such as safe water and sanitation (Drèze and Sen 2013). One needs to be cognizant that extreme inequalities prevail in many other dimensions beyond earnings and consumption expenditure.
Notes

1. A notable exception is Dutta (2005). For the period 1983–1999, at the all-India level, she finds an increase in wage rate inequality among regular salaried workers, but a decrease among casual labor.

2. There are some advantages in looking at consumption expenditure instead of earnings (Goldberg and Pavcnik 2007). The former are a better measure of lifetime wellbeing and suffer from fewer reporting errors. In spite of this, we feel that it is important to juxtapose the two to get a complete picture. This is especially important as the two measures may exhibit different trends. Krueger and Perri (2006) document this for the USA and then develop a model to show how income inequality can affect consumption inequality.

3. It is hard to establish the superiority of one approach over the other. In the SDI framework, changes in supply (changes in employment shares of demographic groups) and demand (changes in industrial composition) are assumed exogenous and therefore unaffected by changes in the relative wage structure. In the RIF decomposition, the feedback between changing characteristics and changing returns is ignored. Both these assumptions ignore general equilibrium effects.

4. In 2004/05, 75.3 % of India's working age population lived in rural areas, while in 2011/12 this figure was 71.1 %.

5. In 2004/05 India had 28 states and 7 union territories. We excluded the states and union territories for which there were no price deflators. The 23 included states are Andhra Pradesh, Assam, Bihar, Chhattisgarh, Gujarat, Haryana, Himachal Pradesh, Jammu and Kashmir, Jharkhand, Karnataka, Kerala, Madhya Pradesh, Maharashtra, Manipur, Meghalaya, Orissa, Punjab, Rajasthan, Tamil Nadu, Tripura, Uttar Pradesh, Uttaranchal, and West Bengal. In both years, they constituted 99.3 % of India's rural working age population.

6. In 2011/12, of the remaining rural working age population, 30 % were self-employed, 2 % were unemployed, and 43 % were not in the labor force. The main reason for restricting our analysis to wage earners is that the EUS does not collect earnings data for self-employed individuals. Kijima (2006) imputes the earnings of the self-employed using Mincerian equations estimated on the sample of regular wage/salaried workers. We refrain from this imputation as it imposes identical returns to covariates for both sets of workers, an assumption that may not be true.

7. We use the Consumer Price Index – Rural Labourers (CPI-RL), the relevant price index for rural areas.

8. While we are aware that this may underestimate our inequality measures, we do this in order to remove potential data entry errors.

9. The poverty line is based on the methodology proposed by the Tendulkar Committee in 2009. The committee was appointed by the Planning Commission, Government of India.

10. Using consumption expenditure data (also collected by the NSSO), for the period between 2004/05 and 2009/10, Jayaraj and Subramanian (2015) find a similar pattern of an increase in real consumption expenditures at all deciles for rural India, with the highest growth occurring at the third and fourth deciles.

11. If we consider daily wage rates instead of real weekly earnings, the Gini fell from 0.398 to 0.358. This indicates that it is wage rates, and not so much the time spent working, that is driving the decrease in earnings inequality. We study this in detail in the next sub-section where we show the same result by decomposing the variance in log earnings.

12. Although the variance of log weekly earnings allows us to quantify a "wage rate effect", a "workday effect", and a "covariance effect", it does not necessarily fall when one rupee is transferred from a rich worker to a poor one. However, this limitation is inconsequential since we have shown (using the Lorenz curves) that inequality has unambiguously fallen over time.

13. Admittedly, as there are bounds to the number of days worked, ranging from half a day to 7 days, this may have partly contributed to the lower inequality of days worked.

14. Scheduled Castes and Tribes (SC and ST, respectively) are administrative categories and represent groups of castes and tribes that are entitled to benefits from affirmative action policies such as reservations in educational institutions and government jobs to overcome historical social and economic discrimination against them. OBC stands for Other Backward Classes and is a collective term used by the Government of India to classify other castes that are socially and educationally backward (for details on the caste system, see Deshpande 2011).

15. This shift in industrial distribution in rural India has been documented in several other studies including Thomas 2015 and Jacoby and Dasgupta 2015.

16. Following the literature on earnings regressions, we also estimated the regressions and decompositions without the industry and occupation controls. The results are qualitatively the same and are available from the authors on request.

17. Deshpande et al. (2015) also find a sticky floor for 1999/2000 and 2009/10 among regular salaried workers in India.

18. The "Others" group includes, but is not confined to, the Hindu upper castes as the EUS data do not allow us to isolate the Hindu upper castes. Consequently, this four-way division understates the gaps between the Hindu upper castes and the most marginalized ST and SC groups (Deshpande 2011).

19. This finding for rural India is similar to the evidence presented in Azam 2012a for regular salaried workers in urban India. Using conditional quantile regressions on EUS data for 1983, 1993/94, and 2004/05, he finds that returns to secondary and tertiary education have increased over time and are larger at higher quantiles.

20. The results based on the other counterfactual that relies on the characteristics of 2011/12 and returns of 2004/05 are very similar and are available on request.

21. We also implemented the aggregate decomposition using Melly's refinement (Melly 2006) of the Machado-Mata decomposition (Machado and Mata 2005) and found similar results.

22. Standard errors for Table 4 (and for all its variants in various appendices) were calculated using 1000 replications of the bootstrap procedure followed by Fortin et al. (2011). The basic codes for this are available from Fortin's website http://faculty.arts.ubc.ca/nfortin/datahead.html and were suitably modified for this paper.

23. We decided to present the decomposition with industry and occupation controls in the main text because, as noted earlier, there was a massive shift from agriculture to industry which we believe was largely exogenous to education. Because this change has been widely discussed in related literature on the Indian economy, we feel that readers may be more interested in the specification that includes industry and occupation controls, despite the endogeneity issue that it suffers from.

24. We had to exclude controls for state of residence, as there is no natural criterion for classifying the states as high or low.

25. Kotwal et al. 2011, for all-India, 1983–2004/05; Jayaraj and Subramanian 2015, for rural and urban separately, 2004/05–2009/10.

26. Using NSS data on consumption expenditure from 1957 to 2012, Datt et al. (2016) provide direct evidence that growth in India has been accompanied by a decline in poverty, especially after economic reforms were initiated in the early 1990s.

27. The corresponding figures for the below poverty line population in urban India are 13.7 % (53.1 million).

28. According to estimates from the World Bank, the Gini values for BRICS countries are as follows: Brazil-0.539 (2009); Russia-0.397 (2009); India-0.339 (2009); China-0.421 (2010), and South Africa-0.630 (2008). These are available at Gini Index (World Bank Estimate) http://data.worldbank.org/indicator/SI.POV.GINI. Accessed on June 1, 2016.
Abbreviations

BRICS: Brazil, Russia, India, China, South Africa
CPI-RL: Consumer Price Index – Rural Labourers
EUS: Employment Unemployment Survey
GDP: Gross domestic product
GVA: Gross value added
IF: Influence functions
MGNREGS: Mahatma Gandhi National Rural Employment Guarantee Scheme
NSSO: National Sample Survey Organization
OBC: Other Backward Classes
OLS: Ordinary least squares
RIF: Recentered influence functions
SDI: Supply, demand, and institutions
UQR: Unconditional quantile regressions
Azam M (2012a) Changes in Wage Structure in Urban India, 1983–2004: A Quantile Regression Decomposition. World Dev 40(6):1135-1150
Azam M (2012b) The Impact of Indian Job Guarantee Scheme on Labor Market Outcomes: Evidence from a Natural Experiment. IZA Discussion Papers; IZA DP No. 6548
Banerjee A, Piketty T (2005) Top Indian incomes, 1922-2000. World Bank Econ Rev 19(1):1–20
Berg E, Bhattacharyya S, Rajasekhar D, Manjula R (2015) Can Public Works Increase Equilibrium Wages? Evidence from India's National Rural Employment Guarantee. Available at http://www.erlendberg.info/agwages.pdf. Accessed on 1 June 2016
Blinder A (1973) Wage discrimination: reduced form and structural estimates. J Hum Resour 8:436–455
Bound J, Johnson G (1992) Changes in the structure of wages in the 1980's: an evaluation of alternative explanations. Am Econ Rev 82(3):371–392
Cain JS, Hasan R, Magsombol R, Tandon A (2010) Accounting for inequality in India: evidence from household expenditures. World Dev 38(3):282–297
Datt G, Ravallion M, Murgai R (2016) Growth, Urbanization, and Poverty Reduction in India. World Bank Group, Policy Research Working Paper 7568
Deshpande A (2011) The grammar of caste: economic discrimination in contemporary India. Oxford University Press, New Delhi
Deshpande A, Goel D, Khanna S (2015) Bad Karma or Discrimination? Male-Female Wage Gaps among Salaried Workers in India. IZA Discussion Papers, IZA DP No. 9485
Drèze J, Sen A (2013) An Uncertain Glory: India and its Contradictions. Princeton University Press
Dutta P V (2005) Accounting for Wage Inequality in India. Poverty Research Unit at Sussex, PRUS Working Paper No. 29
Firpo S, Fortin NM, Lemieux T (2009) Unconditional quantile regressions. Econometrica 77:953–973. doi:10.3982/ECTA6822
Fortin NM, Lemieux T, Firpo S (2011) Decomposition methods in economics. In: Ashenfelter O, Card DE (eds) Handbook of labor economics, vol 4A, Chapter 1
GOI (2015) Economic survey 2014-15. Government of India, Ministry of Finance
Goldberg PK, Pavcnik N (2007) Distributional effects of globalization in developing countries. J Econ Lit XLV:39–82
Hnatkovska V, Lahiri A (2013) Structural Transformation and the Rural-Urban Divide. Working Paper, International Growth Center, London School of Economics
Imbert C, Papp J (2015) Labor market effects of social programs: evidence from India's employment guarantee. Am Econ J Appl Econ 7(2):233–263
Jacoby H, Dasgupta B (2015) Changing Wage Structure in India in the Post-reform Era: 1993-2011. Policy Research Working Paper 7426, World Bank
Jayaraj D, Subramanian S (2015) Growth and Inequality in the Distribution of India's Consumption Expenditure: 1983-2009-10. Econ Pol Wkly 50(32):39–47
Katz LF, Murphy KM (1992) Changes in relative wages, 1963‐1987: supply and demand factors. Q J Econ 107(1):35–78
Kijima Y (2006) Why did wage inequality increase? Evidence from urban India 1983–99. J Dev Econ 81:97–117
Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46:33–50
Kotwal A, Ramaswami B, Wadhwa W (2011) Economic liberalization and Indian economic growth: what's the evidence? J Econ Lit 49(4):1152–1199
Krueger D, Perri F (2006) Does income inequality lead to consumption inequality? evidence and theory. Rev Econ Stud 73(1):163–93
Machado JF, Mata J (2005) Counterfactual decomposition of changes in wage distributions using quantile regression. J Appl Economet 20:445–465
Melly B (2006) Estimation of Counterfactual Distributions using Quantile Regression. University of St. Gallen, Discussion Paper
Motiram S, Vakulabharanam V (2012) Indian Inequality: Patterns and Changes, 1993 – 2010. India Development Report, vol 7. New Delhi: Oxford University Press, p 224–232
Oaxaca RL (1973) Male-female wage differentials in urban labor markets. Int Econ Rev 14:693–709
Planning Commission (2014) Report of the expert group to review the methodology for measurement of poverty. Planning Commission, Government of India
RBI (2015) Handbook of statistics on the Indian economy 2014-15., Reserve Bank of India
Sen A, Himanshu (2004) Poverty and inequality in India: II: widening disparities during the 1990s. Econ Pol Wkly 39(39):4361–4375
Thomas JJ (2015) India's labour market during the 2000s: an overview. In: Ramaswamy KV (ed) Labour, employment and economic growth in India. Cambridge University Press, New Delhi
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 290752. The views expressed in this paper are those of the authors and do not reflect the views of Statistics Canada or of other institutions that the authors are affiliated to. We are grateful to participants at the Nopoor India Policy Conference in Delhi, and to an anonymous referee and the editor for many insightful comments, which greatly improved the paper.
Responsible editor: David Lam
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 290752. The funding agency had no role in collecting data, interpreting the results, or writing the manuscript.
This paper uses the Employment Unemployment Survey data collected by the National Sample Survey Organization (NSSO), Government of India. This data is available for purchase from the NSSO. http://mail.mospi.gov.in/index.php/home
The IZA Journal of Labor & Development is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.
University of California, Irvine, 3151 Social Science Plaza A, Irvine, CA, 92617, USA
Shantanu Khanna
Department of Economics, Delhi School of Economics, University of Delhi, 110007, Delhi, India
Deepti Goel
IZA-Institute of Labor, Bonn, Germany
Statistics Canada, 100 Tunney's Pasture Driveway, Ottawa, Ontario, K1A 0T6, Canada
René Morissette
Correspondence to Deepti Goel.
Appendix 1: Re-estimating Table 4 without controls for industry and occupation
Appendix 2: Sensitivity checks to choice of omitted categories
Appendix 3: Robustness check using state-level poverty lines as deflators
Khanna, S., Goel, D. & Morissette, R. Decomposition analysis of earnings inequality in rural India: 2004–2012. IZA J Labor Develop 5, 18 (2016) doi:10.1186/s40175-016-0064-8
Keywords: Earnings distribution
\begin{document}
\date{}
\title{Finite-Time Error Bounds for Distributed Linear Stochastic Approximation
}
\thispagestyle{empty}
\begin{abstract} This paper considers a novel multi-agent linear stochastic approximation algorithm driven by Markovian noise and general consensus-type interaction, in which each agent evolves according to its local stochastic approximation process which depends on the information from its neighbors. The interconnection structure among the agents is described by a time-varying directed graph. While the convergence of consensus-based stochastic approximation algorithms when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation) has been studied, less is known about the case when the interconnection matrix is simply stochastic. For any uniformly strongly connected graph sequences whose associated interaction matrices are stochastic, the paper derives finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. For the case of interconnection matrices being stochastic, the equilibrium point can be any unspecified convex combination of the local equilibria of all the agents in the absence of communication. Both the cases with constant and time-varying step-sizes are considered. In the case when the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be uni-directional, so that doubly stochastic matrices cannot be implemented in a distributed manner, the paper proposes a push-sum-type distributed stochastic approximation algorithm and provides its finite-time bound for the time-varying step-size case by leveraging the analysis for the consensus-type algorithm with stochastic matrices and developing novel properties of the push-sum algorithm. \end{abstract}
\section{Introduction}
The use of reinforcement learning (RL) to obtain policies that describe solutions to a Markov decision process (MDP) in which an autonomous agent interacting with an unknown environment aims to optimize its long term reward is now standard \cite{sutton2018reinforcement}. Multi-agent or distributed reinforcement learning is useful when a team of agents interacts with an unknown environment or system and aims to collaboratively accomplish tasks involving distributed decision-making. Distributed here implies that agents exchange information only with their neighbors according to a certain communication graph. Recently, many distributed algorithms for multi-agent RL have been proposed and analyzed \cite{zhang2019multi}. The basic result in such works is of the type that if the graph describing the communication among the agents is bi-directional (and hence can be represented by a doubly stochastic matrix), then an algorithm that builds on traditional consensus algorithms converges to a solution in terms of policies to be followed by the agents that optimize the sum of the utility functions of all the agents; further, both finite and infinite time performance of such algorithms can be characterized \cite{doan2019convergence,kaiqing}.
This paper aims to relax the assumption of requiring bi-directional communication among agents in a distributed RL algorithm. This assumption is arguably restrictive and will be violated due to reasons such as packet drops or delays, differing privacy constraints among the agents, heterogeneous capabilities among the agents in which some agents may be able to communicate more often or with more power than others, adversarial attacks, or even sophisticated resilient consensus algorithms being used to construct the distributed RL algorithm. A uni-directional communication graph can be represented through a (possibly time-varying) stochastic -- which may not be doubly stochastic -- matrix being used in the algorithm. As we discuss in more detail below, relaxing the assumption of a doubly stochastic matrix to simply a stochastic matrix in the multi-agent and distributed RL algorithms that have been proposed in the literature, however, complicates the proofs of their convergence and finite time performance characterizations. The main result in this paper is to provide a finite time bound on the mean-square error for a multi-agent linear stochastic approximation algorithm in which the agents interact over a time-varying directed graph characterized by a stochastic matrix. This paper, thus, extends the applicability of distributed and multi-agent RL algorithms presented in the literature to situations such as those mentioned above where bidirectional communication at every time step cannot be guaranteed. As we shall see, this extension is technically challenging and requires new proof techniques that may be of independent interest for the theory of distributed optimization and learning.
{\bf Related Work \;}
A key tool used for designing and analyzing RL algorithms is stochastic approximation \cite{robbins1951stochastic}, e.g., for policy evaluation, including temporal difference (TD) learning as a special case~\cite{sutton2018reinforcement}. Convergence study of stochastic approximation based on ordinary differential equation (ODE) methods has a long history \cite{borkar2000ode}. Notable examples are~\cite{tsitsiklis1997analysis,dayan1992convergence} which prove asymptotic convergence of TD($\lambda$). Recently, {\em finite-time performance} of single-agent stochastic approximation and TD algorithms has been studied in \cite{dalal2018finite,lakshminarayanan2018linear,bhandari2018finite,Srikant,gupta2019finite,wang2017finite,ma2020variance,xu2019two,chen2020finite}; many other works have now appeared that perform finite-time analysis for other RL algorithms, see, e.g.,~\cite{zou,qu2020finite,wu2020finite,xu2019finite,weng2020mean,wang2020finite_gq,iclr_gq,chen2020explicit,wang2019multistep,dalal2018finite_2,2timeSA}, just to name a few.
Many distributed reinforcement learning algorithms have now been proposed in the literature. In this setting, each agent can receive information only from its neighbors, and no single agent can solve the problem alone or by `taking the lead'. A backbone of almost all distributed RL algorithms proposed in the literature is the consensus-type interaction among the agents, dating back at least to~\cite{Ts3}. Many works have analyzed asymptotic convergence of such RL algorithms using ODE methods \cite{zhang2019distributed,kaiqing,wes,zhang2018networked,Yixuan}. This can be viewed as an application of ideas from distributed stochastic approximation~\cite{kushner87,stankovic2010decentralized,huang2012stochastic,stankovic2016multi,bianchi2013performance,stankovic2016distributed}. Finite-time performance guarantees for distributed RL have also been provided in works, most notably in~\cite{doan2019convergence,doan2019finite,wang2020decentralized,zhang2018finite,sun2020finite,zeng2020finite}.
The assumption that is the central concern of this paper and is made in all the existing finite-time analyses for distributed RL algorithms is that the consensus interaction is characterized by doubly stochastic matrices \cite{doan2019convergence,doan2019finite,wang2020decentralized,zhang2018finite,sun2020finite,zeng2020finite} at every time step, or at least in expectation, i.e., $W\mathbf{1}=\mathbf{1}$ and $\mathbf{1}^\top\mathbf{E}(W)=\mathbf{1}^\top$ \cite{bianchi2013performance}. Intuitively, doubly stochastic matrices imply symmetry in the communication graph, which almost always requires bidirectional communication graphs. More formally, the assumption of doubly stochastic matrices is restrictive since distributed construction of a doubly stochastic matrix needs to either invoke algorithms such as the Metropolis algorithm \cite{metro2} which requires bi-directional communication of each agent's degree information; or to utilize an additional distributed algorithm~\cite{gharesifard2012distributed} which significantly increases the complexity of the whole algorithm design. Doubly stochastic matrices in expectation can be guaranteed via so-called broadcast gossip algorithms which still requires bi-directional communication for convergence \cite{bianchi2013performance}. In a realistic network, especially with mobile agents such as autonomous vehicles, drones, or robots, uni-directional communication is inevitable due to various reasons such as asymmetric communication and privacy constraints, non-zero communication failure probability between any two agents at any given time, and application of resilient consensus in the presence of adversary attacks \cite{vaidya2012iterative,leblanc2013resilient}, all leading to an interaction among the agents characterized by a stochastic matrix, which may further be time-varying. The problem of design of distributed RL algorithms with time-varying stochastic matrices and characterizing either their asymptotic convergence or finite time analysis remains open.
As a step towards solving this problem, we propose a novel distributed stochastic approximation algorithm and provide its convergence analyses when a time-dependent stochastic matrix is being used due to uni-directional communication in a dynamic network. One of the first guarantees to be lost as the assumption of doubly stochastic matrices is removed is that the algorithm converges to a ``policy'' that maximizes the sum of reward functions of all the agents. Instead, the convergence is to a set of policies that optimize a convex combination of the network-wise accumulative reward, with the exact combination depending on the limit product of the infinite sequence of stochastic matrices. Nonetheless, by defining the error as the deviation of the output of the algorithm from the eventual equilibrium point, we derive finite-time bounds on the mean-square error. We consider both the cases with constant and time-varying step sizes. In the important special case where the goal is to optimize the average of the individual accumulative rewards of all the agents, we provide a distributed stochastic approximation algorithm, which builds on the push-sum idea \cite{pushsum} that has been used to solve distributed averaging problem over strongly connected graphs,
and characterize its finite-time performance. Thus, this paper provides the first distributed algorithm that can be applied (e.g., in TD learning) to converge to the policy maximizing the team objective of the sum of the individual utility functions over time-varying, uni-directional communication graphs,
and characterizes the finite-time bounds on the mean-square error of the algorithm output from the equilibrium point under appropriate assumptions.
{\bf Technical Innovation and Contributions \;}
There are two main technical challenges in removing the assumption of doubly stochastic matrices being used in the analysis of distributed stochastic approximation algorithms. The first is in the direction of finite-time analysis. For distributed RL algorithms, finite-time performance analysis essentially boils down to two parts, namely bounding the consensus error and bounding the ``single-agent'' mean-square error. For the case when consensus interaction matrices are all doubly stochastic, the consensus error bound can be derived by analyzing the square of the 2-norm of the deviation of the current state of each agent from the average of the states of the agents. With consensus in the presence of doubly stochastic matrices, the average of the states of the agents remains invariant. Thus, it is possible to treat the average value as the state of a fictitious agent to derive the mean-square consensus error bound with respect to the limiting point. More formally, this process relies on two properties of a doubly stochastic matrix $W$, namely that (1) $\mathbf{1}^\top W =\mathbf{1}^\top$, and (2) if $x_{t+1}=Wx_t$, then $\|x_{t+1} - \frac{1}{n}(\mathbf{1}^\top x_{t+1})\mathbf{1}\|_2 \le \sigma_2(W) \|x_{t} - \frac{1}{n}(\mathbf{1}^\top x_{t})\mathbf{1}\|_2$, where $n$ is the number of agents and $\sigma_2(W)$ denotes the second largest singular value of $W$ (which is strictly less than one if $W$ is irreducible and has positive diagonal entries). Even if the doubly stochastic matrix is time-varying (denoted by $W_t$), property (1) still holds and property (2) can be generalized as in \cite{nedic2018network}. Thus, the square of the 2-norm $\|x_{t} - \frac{1}{n}(\mathbf{1}^\top x_{t})\mathbf{1}\|_2^2$ is a quadratic Lyapunov function for the average consensus process. Doubly stochastic matrices in expectation can be treated in the same way by looking at the expectation. This is the core on which all the existing finite-time analyses of distributed RL algorithms are based.
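For intuition only, property (2) is easy to check numerically; the following small Python snippet (an illustration, not part of the analysis) verifies the contraction of the consensus error for a lazy-averaging doubly stochastic matrix:
\begin{verbatim}
import numpy as np
n = 4
W = 0.5 * np.full((n, n), 1.0 / n) + 0.5 * np.eye(n)   # doubly stochastic; sigma_2 = 0.5
sigma2 = np.linalg.svd(W, compute_uv=False)[1]          # second largest singular value
x = np.random.randn(n)
consensus_err = lambda v: np.linalg.norm(v - np.mean(v) * np.ones(n))
assert consensus_err(W @ x) <= sigma2 * consensus_err(x) + 1e-12
\end{verbatim}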
However, if each consensus interaction matrix is stochastic, and not necessarily doubly stochastic, the above two properties may not hold. In fact, it is well known that quadratic Lyapunov functions for general consensus processes $x_{t+1}=S_tx_t$, with $S_t$ being stochastic, do not exist \cite{olshevsky2008nonexistence}. This breaks down all the existing analyses and provides the first technical challenge that we tackle in this paper. Specifically, we appeal to the idea of quadratic comparison functions for general consensus processes. This was first proposed in \cite{touri2012product} and makes use of the concept of ``absolute probability sequences''. We provide a general analysis methodology and results that subsume the existing finite-time analyses for single-timescale distributed linear stochastic approximation and TD learning as special cases.
The second technical challenge arises from the fact that with stochastic matrices, the distributed RL algorithms may not converge to the policies that maximize the average of the utility functions of the agents. To regain this property, we propose a new algorithm that utilizes a push-sum protocol for consensus. However, finite-time analysis for such a push-based distributed algorithm is challenging. Almost all, if not all, the existing push-based distributed optimization works build on the analysis in \cite{nedic}; however, that analysis assumes that a convex combination of the entire history of the states of each agent (and not merely the current state of the agent) is being calculated. This assumption no longer holds in our case. To obtain a direct finite-time error bound without this assumption, we propose a new approach to analyze our push-based distributed algorithm by leveraging our consensus-based analyses to establish direct finite-time error bounds for stochastic approximation. Specifically, we tailor an ``absolute probability sequence'' for the push-based stochastic approximation algorithm and exploit its properties. Such properties have never been found in the existing literature and may be of independent interest for analyzing any push-sum based distributed algorithm.
We now list the main contributions of our work. We propose a novel consensus-based distributed linear stochastic approximation algorithm driven by Markovian noise in which each agent evolves according to its local stochastic approximation process and the information from its neighbors. We assume only a (possibly time-varying) stochastic matrix being used during the consensus phase, which is a more practical assumption when only unidirectional communication is possible among agents. We establish both convergence guarantees and finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. The equilibrium point can be an ``uncontrollable'' convex combination of the local equilibria of all the agents in the absence of communication. We consider both the cases of constant and time-varying step-sizes. Our results subsume the existing results on convergence and finite-time analysis of distributed RL algorithms that assume doubly stochastic matrices and bi-directional communication as special cases.
In the case when the convex combination is required to be a straight average and the interaction between any pair of neighboring agents may be uni-directional, we propose a push-type distributed stochastic approximation algorithm and establish its finite-time performance bound. It is worth emphasizing that it is straightforward to extend our algorithm from the straight average point to any pre-specified convex combination. Since it is well known that TD algorithms can be viewed as a special case of linear stochastic approximation \cite{tsitsiklis1997analysis}, our distributed linear stochastic approximation algorithms and their finite-time bounds can be applied to TD algorithms in a straightforward manner.
\label{sec:introdcution}
{\bf Notation \;}
We use $X_t$ to represent that a variable $X$ is time-dependent and $t\in\{0,1,2,\ldots\}$ is the discrete time index. The $i$th entry of a vector $x$ will be denoted by $x^i$ and, also, by $(x)^{i}$ when convenient. The $ij$th entry of a matrix $A$ will be denoted by $a^{ij}$ and, also, by $(A)^{ij}$ when convenient. We use $\mathbf{1}_n$ to denote the vector in ${\rm I\!R}^n$ whose entries all equal $1$, and $I$ to denote the identity matrix, whose dimension is to be understood from the context.
Given a set $\scr S$ with finitely many elements, we use $|\scr S|$ to denote the cardinality of $\scr S$.
We use $\ceil{\cdot}$ to denote the ceiling function.
A vector is called a stochastic vector if its entries are nonnegative and sum to one. A square nonnegative matrix is called a row stochastic matrix, or simply stochastic matrix, if its row sums all equal one. Similarly, a square nonnegative matrix is called a column stochastic matrix if its column sums all equal one. A square nonnegative matrix is called a doubly stochastic matrix if its row sums and column sums all equal one. The graph of an $n\times n$ matrix is a directed graph with $n$ vertices and a directed edge from vertex $i$ to vertex $j$ whenever the $ji$-th entry of the matrix is nonzero. A directed graph is strongly connected if it has a directed path from any vertex to any other vertex. For a strongly connected graph $\mathbb G$, the distance from vertex $i$ to another vertex $j$ is the length of the shortest directed path from $i$ to $j$; the longest distance among all ordered pairs of distinct vertices $i$ and $j$ in $\mathbb G$ is called the diameter of $\mathbb G$. The union of two directed graphs, $\mathbb G_p$ and $\mathbb G_q$, with the same vertex set, written $\mathbb G_p \cup \mathbb G_q$, is the directed graph with the same vertex set and with edge set equal to the union of the edge sets of $\mathbb G_p$ and $\mathbb G_q$. Since this union is a commutative and associative binary operation, the definition extends unambiguously to any finite sequence of directed graphs.
\section{Distributed Linear Stochastic Approximation} \label{sec:SA}
Consider a network consisting of $N$ agents. For the purpose of presentation, we label the agents from $1$ through $N$. The agents are not aware of such a global labeling, but can differentiate between their neighbors. The neighbor relations among the $N$ agents are characterized by a time-dependent directed graph $\mathbb{G}_t = (\mathcal{V},\mathcal{E}_t)$ whose vertices correspond to agents and whose directed edges (or arcs) depict neighbor relations, where $\mathcal{V}=\{1,\ldots,N\}$ is the vertex set and $\mathcal{E}_t \subseteq \mathcal{V} \times \mathcal{V}$ is the edge set at time $t$. Specifically, agent $j$ is an in-neighbor of agent $i$ at time $t$ if $(j,i)\in\scr{E}_t$, and similarly, agent $k$ is an out-neighbor of agent $i$ at time $t$ if $(i,k)\in\scr{E}_t$. Each agent can send information to its out-neighbors and receive information from its in-neighbors. Thus, the directions of edges represent the directions of information flow. For convenience, we assume that each agent is always an in- and out-neighbor of itself, which implies that $\mathbb{G}_t$ has self-arcs at all vertices for all time $t$. We use $\mathcal{N}_t^{i}$ and $\mathcal{N}_t^{i-}$ to denote the in- and out-neighbor set of agent $i$ at time $t$, respectively, i.e., \begin{align*}
\mathcal{N}_t^{i} = \{ j \in \mathcal{V} \; : \;( j, i ) \in \mathcal{E}_t \}, \;\;\;
\mathcal{N}_t^{i-} = \{ k \in \mathcal{V} \; : \; ( i, k ) \in \mathcal{E}_t \}. \end{align*}
It is clear that $\mathcal{N}_t^{i}$ and $\mathcal{N}_t^{i-}$ are nonempty as they both contain index $i$.
We propose the following distributed linear stochastic approximation over a time-varying neighbor graph sequence $\{\mathbb{G}_t\}$. Each agent $i$ has control over a random vector $\theta^i_t$ which is updated~by \begin{align} \label{eq:theta update}
\theta_{t+1}^i = \sum_{j \in \mathcal{N}_t^i} w_t^{ij} \theta_t^j + \alpha_t \bigg(A(X_t)\sum_{j \in \mathcal{N}_t^i} w_t^{ij}\theta_t^j + b^i(X_t)\bigg),\;\;\; i\in\scr{V},\;\;\; t\in\{0,1,2,\ldots\}, \end{align} where $w_t^{ij}$ are consensus weights, $\alpha_t$ is the step-size at time $t$, $A(X_t)$ is a random matrix and $b^i(X_t)$ is a random vector, both generated based on the Markov chain $\{ X_t \}$ with state spaces $\mathcal{X}$. It is worth noting that the update \eqref{eq:theta update} of each agent only uses its own and in-neighbors' information and thus is distributed.
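As a concrete illustration (not part of the analysis), the following minimal Python sketch simulates \eqref{eq:theta update} on synthetic data; the directed ring graph, the mixing weights, and the i.i.d.\ noise model are placeholder assumptions chosen only for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, T = 4, 3, 2000              # agents, dimension, iterations
A_bar = -np.eye(K)                # illustrative Hurwitz "mean" matrix
b_bar = rng.normal(size=(N, K))   # illustrative per-agent mean vectors

# fixed directed ring: agent i receives from itself and i-1 (row-stochastic W)
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = 0.5

Theta = rng.normal(size=(N, K))
for t in range(T):
    alpha = 1.0 / (t + 1)                        # diminishing step-size
    A_t = A_bar + 0.1 * rng.normal(size=(K, K))  # noisy observation of A
    b_t = b_bar + 0.1 * rng.normal(size=(N, K))  # noisy observations of b^i
    mixed = W @ Theta                            # consensus step
    drive = mixed @ A_t.T + b_t                  # row i: A(X_t) sum_j w^{ij} theta^j + b^i(X_t)
    Theta = mixed + alpha * drive                # local SA step, cf. the update above

print(Theta)   # the rows should be nearly identical (consensus)
\end{verbatim}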
\begin{remark} \label{remark:ode}
The work of \cite{kushner87} considers a different consensus-based networked linear stochastic approximation as follows: \begin{align}\label{eq:yin}
\theta_{t+1}^i = \sum_{j \in \mathcal{N}_t^i} w_t^{ij} \theta_t^j + \alpha_t \left(A(X_t)\theta_{t}^i + b^i(X_t)\right),\;\;\; i\in\scr{V},\;\;\; t\in\{0,1,2,\ldots\}, \end{align} whose state form is $ \Theta_{t+1} = W_t \Theta_t + \alpha_t \Theta_t A(X_t)^\top + \alpha_t B(X_t)$, and mainly focuses on asymptotic weak convergence for the fixed step-size case (i.e., $\alpha_t=\alpha$ for all $t$). Under a similar set of conditions, with its condition (C3.4') being a stochastic analogue of Assumption~\ref{assum:limit_pi}, Theorem~3.1 in \cite{kushner87} shows that \eqref{eq:yin} has a limit which can be verified to be the same as $\theta^*$, the limit of \eqref{eq:theta update}. How to apply the finite-time analysis tools in this paper to \eqref{eq:yin} has so far eluded us. The two updates \eqref{eq:theta update} and \eqref{eq:yin} are analogous to the ``combine-then-adapt'' and ``adapt-then-combine'' diffusion strategies in distributed optimization \cite{chen2012diffusion}.
$\Box$ \end{remark}
We impose the following assumption on the weights $w_t^{ij}$ which has been widely adopted in consensus literature \cite{vicsekmodel,survey,tacrate}.
\begin{assumption}\label{assum:weighted matrix} There exists a constant $\beta>0$ such that for all $i,j\in\scr V$ and $t$, $w_t^{ij} \ge \beta$ whenever $j\in\scr{N}_t^{i}$. For all $i\in\scr V$ and $t$, $\sum_{j\in\scr{N}_t^{i}} w_t^{ij} = 1$. \end{assumption} Let $W_t$ be the $N\times N$ matrix whose $ij$th entry equals $w_t^{ij}$ if $j\in\scr{N}_t^i$ and zero otherwise. From Assumption~\ref{assum:weighted matrix}, each $W_t$ is a stochastic matrix that is compliant with the neighbor graph $\mathbb{G}_t$. Since each agent $i$ is always assumed to be an in-neighbor of itself, all diagonal entries of $W_t$ are positive. Thus, if $\mathbb{G}_t$ is strongly connected, $W_t$ is irreducible and aperiodic. To proceed, define \begin{align*}
\Theta_t = \left[
\begin{array}{c}
(\theta_t^1)^\top \\
\vdots \\
(\theta_t^N)^\top
\end{array}
\right], \;\;\;
B(X_t) = \left[
\begin{array}{c}
(b^1(X_t))^\top \\
\vdots \\
(b^N(X_t))^\top
\end{array}
\right]. \end{align*} Then, the $N$ linear stochastic recursions in \eqref{eq:theta update} can be combined and written as \begin{align} \label{eq:updtae_Theta}
\Theta_{t+1} = W_t \Theta_t + \alpha_t W_t \Theta_t A(X_t)^\top + \alpha_t B(X_t),\;\;\; t\in\{0,1,2,\ldots\}. \end{align} The goal of this section is to characterize the finite-time performance of~\eqref{eq:theta update}, or equivalently~\eqref{eq:updtae_Theta}, with the following standard assumptions, which were adopted e.g. in \cite{Srikant,doan2019convergence}.
\begin{assumption} \label{assum:A and b}
There exists a matrix $A$ and vectors $b^i$, $i\in\scr V$, such that
\begin{align*}
\lim_{t\to\infty} \mathbf{E}[A(X_t)] = A, \;\;\;
\lim_{t\to\infty} \mathbf{E}[b^i(X_t)] = b^i,\;\;\; i\in\scr V.
\end{align*}
Define $b_{\max} = \max_{i\in\mathcal{V}}\sup_{x\in\mathcal{X}} \| b^i(x) \|_2 < \infty$ and $A_{\max} = \sup_{x\in\mathcal{X}} \| A(x) \|_2 < \infty $.
Then, $\| A \|_2 \le A_{\max}$ and $\| b^i \|_2 \le b_{\max}$, $i\in\scr V$. \end{assumption}
\begin{assumption} \label{assum:mixing-time}
Given a positive constant $\alpha$, we use $\tau(\alpha)$ to denote the mixing time of the Markov chain $\{ X_t \}$ for which
\begin{align*}
\left\{
\begin{array}{ll}
\| \mathbf{E}[A(X_t) - A | X_0 = X] \|_2 \le \alpha, & \forall X, \;\; \forall t \ge \tau(\alpha),\\\\
\| \mathbf{E}[ b^i(X_t) - b^i | X_0 = X] \|_2 \le \alpha, & \forall X, \;\; \forall t \ge \tau(\alpha), \;\; \forall i\in\scr{V}.
\end{array}
\right.
\end{align*}
The Markov chain $\{ X_t \}$ mixes at a geometric rate, i.e., there exists a constant $C$ such that $\tau(\alpha) \le - C \log \alpha$.
\end{assumption}
\begin{assumption} \label{assum:lyapunov}
All eigenvalues of $A$ have strictly negative real parts, i.e., $A$ is a Hurwitz matrix. Then, there exists a symmetric positive definite matrix $P$, such that $A^\top P + P A = - I$.
Let $\gamma_{\max}$ and $\gamma_{\min}$ be the maximum and minimum eigenvalues of $P$, respectively. \end{assumption}
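For a given Hurwitz $A$, the matrix $P$ and the eigenvalue bounds $\gamma_{\min}$, $\gamma_{\max}$ can be computed numerically; the sketch below is only a sanity check with a placeholder $A$ and is not part of the analysis.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])      # illustrative Hurwitz matrix

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q;
# with a = A^T and q = -I this yields A^T P + P A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

eigs = np.linalg.eigvalsh(P)      # P is symmetric positive definite
gamma_min, gamma_max = eigs.min(), eigs.max()
print(P, gamma_min, gamma_max)
\end{verbatim}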
\begin{assumption} \label{assum:step-size}
The step-size sequence $\{\alpha_t\}$ is positive, non-increasing, and satisfies $\sum_{t=0}^\infty \alpha_t = \infty$ and $\sum_{t=0}^\infty \alpha_t^2 < \infty$. \end{assumption}
To state our first main result, we need the following concepts.
\begin{definition}
A graph sequence $\{ \mathbb{G}_t \}$ is uniformly strongly connected if there exists a positive integer $L$ such that for any $t\ge 0$, the union graph $\cup_{k=t}^{t+L-1} \mathbb{G}_k$ is strongly connected.
If such an integer exists, we sometimes say that $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$. \end{definition}
\begin{remark} \label{remark:uniformly} Two popular joint connectivity definitions in consensus literature are ``$B$-connected'' \cite{nedic2009distributed_quan} and ``repeatedly jointly strongly connected'' \cite{reachingp1}. A graph sequence $\{ \mathbb{G}_t \}$ is $B$-connected if there exists a positive integer $B$ such that the union graph $\cup_{t=kB}^{(k+1)B-1} \mathbb{G}_t$ is strongly connected for each integer $k\ge 0$. Although uniform strong connectedness looks more restrictive than $B$-connectedness at first glance, the two notions are in fact equivalent. To see this, first it is easy to see that if $\{ \mathbb{G}_t \}$ is uniformly strongly connected, $\{ \mathbb{G}_t \}$ must be $B$-connected; now supposing $\{ \mathbb{G}_t \}$ is $B$-connected, for any fixed $t$, the union graph $\cup_{k=t}^{t+2B-1} \mathbb{G}_k$ must be strongly connected, and thus $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $2B$. Thus, the two definitions are equivalent. It is also not hard to show that uniform strong connectedness is equivalent to being ``repeatedly jointly strongly connected'' provided the directed graphs under consideration all have self-arcs at all vertices, with the latter notion being defined via ``graph composition'' \cite{reachingp1}.
$\Box$ \end{remark}
\begin{definition}\label{def: absolute prob} Let $\{ W_t \}$ be a sequence of stochastic matrices. A sequence of stochastic vectors $\{ \pi_t \}$ is an absolute probability sequence for $\{ W_t \}$ if $\pi_t^\top = \pi_{t+1}^\top W_t$ for all $t\ge0$. \end{definition}
This definition was first introduced by Kolmogorov \cite{kolmogorov}. It was shown by Blackwell \cite{blackwell} that every sequence of stochastic matrices has an absolute probability sequence. In general, a sequence of stochastic matrices may have more than one absolute probability sequence; when the sequence of stochastic matrices is ``ergodic'', it has a unique absolute probability sequence \cite{tacrate}. It is easy to see that when $W_t$ is a fixed irreducible stochastic matrix $W$, $\pi_t$ is simply the normalized left eigenvector of $W$ for eigenvalue one. More can be said.
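For instance, when $W_t \equiv W$ for a fixed irreducible stochastic matrix $W$, the constant sequence $\pi_t \equiv \pi$, with $\pi$ the normalized left eigenvector of $W$ for eigenvalue one, is an absolute probability sequence; the sketch below (with an arbitrary illustrative $W$) computes it numerically.
\begin{verbatim}
import numpy as np

W = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])          # irreducible, row stochastic (not doubly stochastic)

eigvals, eigvecs = np.linalg.eig(W.T)    # columns are left eigenvectors of W
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                       # normalize to a stochastic vector

print(pi, np.allclose(pi @ W, pi))       # pi^T W = pi^T, so pi_t = pi for all t
\end{verbatim}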
\begin{lemma} \label{lemma:bound_pi_jointly}
Suppose that Assumption~\ref{assum:weighted matrix} holds. If $\{\mathbb{G}_t\}$ is uniformly strongly connected, then there exists a unique absolute probability sequence $\{ \pi_t \}$ for the matrix sequence $\{W_t\}$ and a constant $\pi_{\min} \in (0,1)$ such that $\pi_t^i \ge \pi_{\min}$ for all $i$ and $t$.
\end{lemma}
Let $\langle \theta \rangle_t = \sum_{i=1}^N\pi^i_t \theta^i_t$, which is a column vector and a convex combination of all the $\theta_t^i$. It is easy to see that $\langle \theta \rangle_t = (\pi_t^\top\Theta_t)^\top=\Theta_t^\top \pi_t$. From Definition~\ref{def: absolute prob} and \eqref{eq:updtae_Theta}, we have $ \pi^\top_{t+1} \Theta_{t+1} = \pi^\top_{t+1} W_t \Theta_t + \alpha_t \pi^\top_{t+1} W_t \Theta_t A(X_t)^\top + \alpha_t \pi^\top_{t+1} B(X_t)
= \pi^\top_t \Theta_t + \alpha_t \pi^\top_{t} \Theta_t A(X_t)^\top + \alpha_t \pi^\top_{t+1} B(X_t)$,
which implies that \begin{align} \label{eq:update of average_time-varying}
\langle \theta \rangle_{t+1} &= \langle \theta \rangle_t + \alpha_t A(X_t) \langle \theta \rangle_t + \alpha_t B(X_t)^\top \pi_{t+1}. \end{align}
Asymptotic performance of \eqref{eq:theta update} with any uniformly strongly connected neighbor graph sequence is characterized by the following two theorems.
\begin{theorem} \label{thm:consensus_time-varying_jointly}
Suppose that Assumptions~\ref{assum:weighted matrix}, \ref{assum:A and b} and \ref{assum:step-size} hold. Let $\{ \theta_t^i \}$, $i\in \mathcal{V}$, be generated by \eqref{eq:theta update}. If $\{\mathbb{G}_t\}$ is uniformly strongly connected,
then $\lim_{t\rightarrow{\infty}}\|\theta^i_t-\langle \theta\rangle_t\|_2=0$ for all $i\in\scr V$. \end{theorem}
Theorem~\ref{thm:consensus_time-varying_jointly} only shows that all the sequences $\{ \theta_t^i \}$, $i\in\scr V$, generated by \eqref{eq:theta update} will asymptotically reach a consensus, but they are not necessarily convergent or bounded. To guarantee the convergence of the sequences, we further need the following assumption, whose validity is discussed in Remark~\ref{remark:on assmption}.
\begin{assumption} \label{assum:limit_pi}
The absolute probability sequence $\{ \pi_t \}$ for the stochastic matrix sequence $\{W_t\}$ has a limit, i.e., there exists a stochastic vector $\pi_{\infty}$ such that $\lim_{t\to\infty} \pi_t = \pi_{\infty}$. \end{assumption}
\begin{theorem} \label{thm:theta^*_jointly}
Suppose that Assumptions~\ref{assum:weighted matrix}--\ref{assum:limit_pi} hold. Let $\{ \theta_t^i \}$, $i\in \mathcal{V}$, be generated by \eqref{eq:theta update} and $\theta^*$ be the unique equilibrium point of the ODE
\begin{align} \label{eq:definition theta^*}
\dot \theta = A \theta + b, \;\;\; b=\sum_{i=1}^N \pi_{\infty}^i b^i,
\end{align}
where $A$ and $b^i$ are defined in Assumption~\ref{assum:A and b} and $\pi_{\infty}$ is defined in Assumption~\ref{assum:limit_pi}.
If $\{ \mathbb{G}_t \}$ is uniformly strongly connected, then all $\theta_t^i$ will converge to $\theta^*$ both with probability 1 and in mean square. \end{theorem}
\begin{remark} \label{remark:on assmption} Though Assumption~\ref{assum:limit_pi} may look restrictive at first glance, simple simulations show that the sequences $\{ \theta_t^i \}$, $i\in\scr V$, do not converge if the assumption does not hold (e.g., even when $W_t$ changes periodically). It is worth emphasizing that the existence of $\pi_{\infty}$ does not imply the existence of $\lim_{t\rightarrow\infty}W_t$, though the converse is true. Indeed, the assumption subsumes various cases including (a) all $W_t$ are doubly stochastic matrices, and (b) all $W_t$ share the same left eigenvector for eigenvalue 1, which may arise from the scenario when the number of in-neighbors of each agent does not change over time \cite{olshevsky2013degree}. An important implication of Assumption~\ref{assum:limit_pi} is that when the consensus interaction among the agents, characterized by $\{W_t\}$, is replaced by resilient consensus algorithms such as \cite{vaidya2012iterative,leblanc2013resilient} in order to attenuate the effect of unknown malicious agents, the resulting dynamics of the non-malicious agents, in general, will not converge: the resulting interaction stochastic matrices among the non-malicious agents depend on the state values transmitted by the malicious agents, which can be arbitrary, so the resulting stochastic matrix sequence, in general, does not have a convergent absolute probability sequence. Of course, in this case, the trajectories of all the non-malicious agents will still reach a consensus as long as the step-size is diminishing, as implied by Theorem~\ref{thm:consensus_time-varying_jointly}. Further discussion on Assumption~\ref{assum:limit_pi} can be found in Appendix~\ref{discussionAss6}.
$\Box$ \end{remark}
We now study the finite-time performance of the proposed distributed linear stochastic approximation \eqref{eq:theta update} for both fixed and time-varying step-size cases. Its finite-time performance is characterized by the following theorem.
Let $\eta_t = \| \pi_t - \pi_\infty \|_2$ for all $t\ge 0$. From Assumption~\ref{assum:limit_pi}, $\eta_t$ converges to zero as $t\rightarrow\infty$.
\begin{theorem} \label{thm:bound_jointly_SA}
Let the sequences $\{ \theta_t^i \}$, $i \in \mathcal{V}$, be generated by \eqref{eq:theta update}. Suppose that Assumptions~\ref{assum:weighted matrix}--\ref{assum:lyapunov},~\ref{assum:limit_pi} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$. Let $q_t$ and $m_t$ be the unique integer quotient and remainder of $t$ divided by $L$, respectively. Let $\delta_t$ be the diameter of $\cup_{k=t}^{t+L-1} \mathbb{G}_k$, $ \delta_{\max} = \max_{t\ge 0} \delta_t$, and
\begin{align}
\epsilon & = \bigg(1+\frac{2 b_{\max}}{A_{\max}}-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}} \bigg)( 1 + \alpha A_{\max})^{2L} - \frac{2 b_{\max}}{ A_{\max}} (1 + \alpha A_{\max})^{L}, \label{eq:define epsilon_jointly}
\end{align} where $0 < \alpha < \min \{ K_1 ,\; \frac{\log2}{A_{\max} \tau(\alpha)},\; \frac{0.1}{K_2 \gamma_{\max}} \}$.
{\rm\bf 1) Fixed step-size:}
Let $\alpha_t = \alpha$ for all $t\ge 0$.
For all $ t\ge T_1 $,
\begin{align} \label{eq:bound_jointly_fixed}
\sum_{i=1}^N \pi_{t}^i \mathbf{E}\left[\left\|\theta_{t}^i - \theta^*\right\|_2^2\right]
&\le 2 \epsilon^{q_{t}} \sum_{i=1}^N \pi_{m_t}^i \mathbf{E}\left[\left\| \theta_{m_t}^i - \langle \theta \rangle_{m_t} \right\|_2^2 \right] + C_1 \bigg( 1 - \frac{0.9 \alpha}{\gamma_{\max}} \bigg)^{{t}-T_1} + C_2 \nonumber \\
&\;\;\; + \frac{\gamma_{\max}}{\gamma_{\min}} 2\alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \bigg(1-\frac{0.9 \alpha}{\gamma_{\max}} \bigg)^{k}.
\end{align}
{\rm\bf 2) Time-varying step-size:}
Let $\alpha_t = \frac{\alpha_0}{t+1}$ with $\alpha_0 \ge \frac{\gamma_{\max}}{0.9}$. For all $t\ge LT_2$,
\begin{align}
&\sum_{i=1}^N \pi_{t}^i \mathbf{E}\left[\left\|\theta_{t}^i - \theta^*\right\|_2^2\right]
\; \le \; 2 \epsilon^{q_{t}-T_2} \sum_{i=1}^N \pi_{LT_2+m_t}^i \mathbf{E}\left[\left\| \theta_{LT_2+m_t}^i - \langle \theta \rangle_{LT_2+m_t} \right\|_2^2\right] \nonumber \\
& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; + C_3 \left( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L}\right) + \frac{1}{t} \bigg(C_4 \log^2\Big(\frac{t}{\alpha_0}\Big)+C_5\sum_{k = LT_2}^{t} \eta_{k} + C_6\bigg). \label{eq:bound_jointly_time-varying}
\end{align}
Here $T_1, T_2, K_1, K_2, C_1 - C_6$ are finite constants whose definitions are given in Appendix~\ref{sec:thmSA_constant}. \end{theorem}
Since $\pi_t^i$ is uniformly bounded below by $\pi_{\min}\in(0,1)$ from Lemma~\ref{lemma:bound_pi_jointly}, it is easy to see that the above bound, divided by $\pi_{\min}$, also holds for each individual $\mathbf{E}[\|\theta_t^i - \theta^*\|_2^2]$. To better understand the theorem, we provide the following remark.
\begin{remark} \label{remark:exists_T_1} In Appendix~\ref{sec:proof_jointly_fixed}, we show that both $\epsilon$ and $(1-\frac{0.9 \alpha}{\gamma_{\max}})$ lie in the interval $(0,1)$. It is easy to show that $\epsilon$ is monotonically increasing in $\delta_{\max}$ and $L$, and monotonically decreasing in $\beta$ and $\pi_{\min}$. Also, $\lim_{t\to\infty} \sum_{k=0}^{t-T_1} \eta_{t+1-k} (1-\frac{0.9 \alpha}{\gamma_{\max}})^k \le \lim_{t\to\infty} \frac{\gamma_{\max}}{0.9 \alpha} [ \eta_{\ceil{\frac{t-T_1}{2}}} + \eta_1 (1-\frac{0.9 \alpha}{\gamma_{\max}})^{\frac{t-T_1}{2}} ] = 0 $. Therefore, the summands in the finite-time bound \eqref{eq:bound_jointly_fixed} for the fixed step-size case are exponentially decaying except for the constant $C_2$, which implies that
$\limsup_{t\rightarrow\infty}\sum_{i=1}^N \pi_t^i \mathbf{E}[\|\theta_t^i - \theta^*\|_2^2]
\le C_2$,
providing a constant limiting bound.
From Appendix~\ref{sec:constants}, $C_2$ is monotonically increasing in $\gamma_{\max}, \delta_{\max}, b_{\max}$ and $L$, and monotonically decreasing in $\gamma_{\min}, \pi_{\min}$ and $\beta$. In Appendix~\ref{sec:proof_jointly_time-varying}, we show that $\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{k=1}^t \eta_k=0$, which implies that the finite-time bound \eqref{eq:bound_jointly_time-varying} for the time-varying step-size case converges to zero as $t\rightarrow\infty$. We next comment on $0.1$ in the inequality defining $\alpha$.
Actually, we can replace $0.1$ with any constant $c\in (0,1)$, which will affect the value of $\epsilon$ and the feasible set of $\alpha$, with the latter becoming $0 < \alpha < \min \{ K_1 ,\; \frac{\log2}{A_{\max} \tau(\alpha)},\; \frac{c}{K_2 \gamma_{\max}}\}.$ Thus, the smaller the value of $c$ is, the smaller is the feasible set of $\alpha$, though the feasible set is always nonempty. For convenience, we simply pick $c = 0.1$ in this paper; that is why we also have $0.9$ in \eqref{eq:bound_jointly_fixed}. Lastly, we comment on $\alpha_0$ in the time-varying step-size case. We set $\alpha_0 \ge \frac{\gamma_{\max}}{0.9}$ for the purpose of getting a cleaner expression of the finite-time bound. For $\alpha_0 < \frac{\gamma_{\max}}{0.9}$, our approach still works, but will yield a more complicated expression. The same is true for Theorem~\ref{thm:bound_time-varying_step_Push_SA}.
$\Box$ \end{remark}
{\bf Technical Challenge and Proof Sketch \;}
As described in the introduction, the key challenge of analyzing the finite-time performance of the distributed stochastic approximation \eqref{eq:theta update} lies in the condition that the consensus-based interaction matrix is time-varying and stochastic (not necessarily doubly stochastic). To tackle this, we appeal to the absolute probability sequence $\pi_t$ of the time-varying interaction matrix sequence and introduce the quadratic Lyapunov comparison function $\sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \theta^*\|_2^2]$. Then, using the inequality
$\sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \theta^*\|_2^2]
\le 2 \sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 ] + 2 \mathbf{E} [\| \langle \theta \rangle_{t} - \theta^*\|_2^2]$,
the next step is to find the finite-time bounds of $\sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 ]$ and $ \mathbf{E} [\| \langle \theta \rangle_{t} - \theta^*\|_2^2]$, respectively. The latter term is essentially the ``single-agent'' mean-square error. Our main analysis contribution here is to bound the former term for both fixed and time-varying step-size cases.
\section{Push-SA} \label{sec:SA_pushsum}
The preceding section shows that the limiting state of consensus-based distributed stochastic approximation depends on $\pi_{\infty}$, which leads to a convex combination of the local equilibria of all the agents in the absence of communication, but the convex combination is in general ``uncontrollable''. Note that this convex combination corresponds to a convex combination of the agents' individual accumulative rewards in applications such as distributed TD learning. In the important case when the convex combination is desired to be the straight average, the existing literature, e.g. \cite{doan2019convergence,doan2019finite}, relies on doubly stochastic matrices, for which $\pi_{\infty}=(1/N)\mathbf{1}_N$. As mentioned in the introduction, doubly stochastic matrices implicitly require bi-directional communication between any pair of neighboring agents; see e.g. gossiping \cite{boyd052,pieee} and the Metropolis algorithm \cite{metro2}. A popular method to achieve the straight average target while allowing uni-directional communication between neighboring agents is to appeal to the so-called ``push-sum'' idea \cite{pushsum}, which was tailored for solving the distributed averaging problem over directed graphs and has been applied to distributed optimization \cite{nedic}. In this section, we will propose a push-based distributed stochastic approximation algorithm tailored for uni-directional communication and establish its finite-time error bound.
Each agent $i$ has control over three variables, namely $y_t^i$, $\tilde{\theta}_t^i$ and $\theta_t^i$, in which $y^{i}_t$ is scalar-valued with initial value 1, $\tilde{\theta}_t^i$ can be arbitrarily initialized, and $\theta_0^i=\tilde{\theta}_0^i$. At each time $t\ge 0$, each agent $i$ sends its weighted current values $\hat w_t^{ji} y^i_{t}$ and $\hat w_t^{ji}(\tilde\theta^i_{t} + \alpha_t A(X_t) \theta^i_t + \alpha_t b^i(X_t)) $ to each of its current out-neighbors $j\in\scr{N}_t^{i-}$, and updates its variables as follows:
\begin{empheq}[left = \empheqlbrace]{align}\label{eq:SA_push-sum} y^i_{t+1}&=\sum_{j \in \mathcal{N}_t^i} \hat w_t^{ij } y^j_{t}, \;\;\;\;\; y^i_0=1,\nonumber\\ \tilde \theta^i_{t+1} &= \sum_{j \in \mathcal{N}_t^i} \hat w_t^{ij } \left[ \tilde\theta^j_{t} + \alpha_t \left(A(X_{t}) \theta^j_{t} + b^j(X_{t} ) \right) \right],\\ \theta^i_{t+1}&=\frac{\tilde \theta^i_{t+1}}{y^i_{t+1}}, \;\;\;\;\; \theta_0^i=\tilde{\theta}_0^i,\nonumber \end{empheq}
where $\hat w_t^{ij}=1/|\mathcal{N}_t^{j-}|$. It is worth noting that the algorithm is distributed yet requires that each agent be aware of the number of its out-neighbors.
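As a concrete illustration (not part of the analysis), the following Python sketch simulates \eqref{eq:SA_push-sum} on synthetic data over a fixed directed ring, so each agent has exactly two out-neighbors and $\hat w_t^{ij}=1/2$; the graph, the noise model, and all constants are placeholder assumptions for this example only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, K, T = 4, 3, 5000
A_bar = -np.eye(K)                 # illustrative Hurwitz "mean" matrix
b_bar = rng.normal(size=(N, K))    # illustrative per-agent mean vectors

# directed ring: agent j sends to itself and j+1, so |N_t^{j-}| = 2 and
# hat_w^{ij} = 1/2 for j in {i, i-1}; W_hat is column stochastic
W_hat = np.zeros((N, N))
for j in range(N):
    W_hat[j, j] = W_hat[(j + 1) % N, j] = 0.5

y = np.ones(N)
theta_tilde = rng.normal(size=(N, K))
theta = theta_tilde.copy()
for t in range(T):
    alpha = 1.0 / (t + 1)
    A_t = A_bar + 0.1 * rng.normal(size=(K, K))
    b_t = b_bar + 0.1 * rng.normal(size=(N, K))
    msg = theta_tilde + alpha * (theta @ A_t.T + b_t)  # tilde_theta^j + alpha(A theta^j + b^j)
    y = W_hat @ y                                      # push-sum weight update
    theta_tilde = W_hat @ msg
    theta = theta_tilde / y[:, None]                   # ratio step

print(theta)  # rows should approach the equilibrium of dot(theta) = A theta + (1/N) sum_i b^i
\end{verbatim}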
Asymptotic performance of \eqref{eq:SA_push-sum} with any uniformly strongly connected neighbor graph sequence is characterized by the following theorem.
\begin{theorem} \label{thm:push_meansq}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:step-size} hold. Let $\{ \theta_t^i \}$, $i\in \mathcal{V}$, be generated by \eqref{eq:SA_push-sum} and $\theta^*$ be the unique equilibrium point of the ODE \begin{align} \label{eq:ode_pushsum}
\dot \theta = A \theta + \frac{1}{N} \sum_{i=1}^N b^i, \end{align}
where $A$ and $b^i$ are defined in Assumption~\ref{assum:A and b}.
If $\{ \mathbb{G}_t \}$ is uniformly strongly connected, then $\theta_t^i$ will converge to $\theta^*$ in mean square for all $i\in\scr V$. \end{theorem}
In this section, we define $\langle \tilde \theta \rangle_t = \frac{1}{N} \sum_{i=1}^N \tilde \theta_t^i$ and $\langle \theta \rangle_t = \frac{1}{N} \sum_{i=1}^N \theta_t^i$. To help understand these definitions, let $\hat W_t$ be the $N\times N$ matrix whose $ij$-th entry equals $\hat w_t^{ij}$ if $j\in\scr{N}_t^i$, otherwise equals zero. It is easy to see that each $\hat W_t$ is a column stochastic matrix whose diagonal entries are all positive. Then, $\pi_t = \frac{1}{N}\mathbf{1}_N$ for all $t \ge 0$ can be regarded as an absolute probability sequence of $\{ \hat W_t \}$. Thus, the above two definitions are intuitively consistent with $\langle \theta \rangle_t$ in the previous section.
Finite-time performance of \eqref{eq:SA_push-sum} with any uniformly strongly connected neighbor graph sequence is characterized by the following theorem.
Let $ \mu_t = \|A(X_t) (\langle \theta \rangle_t - \langle \tilde \theta \rangle_t)\|_2$. In Appendix~\ref{sec:proof_push}, we show that $\|\langle \theta \rangle_t - \langle \tilde \theta \rangle_t\|_2$ converges to zero as $t\rightarrow\infty$, and so does $ \mu_t$.
\begin{theorem} \label{thm:bound_time-varying_step_Push_SA}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:lyapunov}
hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$.
Let $\{ \theta_t^i \}$, $i\in \mathcal{V}$, be generated by \eqref{eq:SA_push-sum}
with $\alpha_t = \frac{\alpha_0}{t+1}$ and $\alpha_0 \ge \frac{\gamma_{\max}}{0.9}$. Then, there exists a nonnegative $\bar\epsilon \le (1-\frac{1}{N^{NL}})^{\frac{1}{L}}$ such that
for all $t\ge \bar T$,
\begin{align}
\sum_{i=1}^N \mathbf{E}\left[\left\|\theta_{t+1}^i - \theta^*\right\|_2^2\right]
\le \;\; & C_7 \bar\epsilon^t + C_8 \left( \alpha_0 \bar\epsilon^{\frac{t}{2}} + \alpha_{\ceil{\frac{t}{2}}} \right)+ C_9 \alpha_t \nonumber \\
& + \frac{1}{t}\bigg(C_{10} \log^2\Big(\frac{t}{\alpha_0}\Big) + C_{11}\sum_{k = \bar T}^{t} \mu_{k} +C_{12}\bigg), \label{eq:bound_SA}
\end{align}
where $\bar T$ and $C_7 - C_{12}$ are finite constants whose definitions are given in Appendix~\ref{sec:thmPush_constant}. \end{theorem}
In Appendix~\ref{sec:proof_push}, we show that $\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{k=1}^t \mu_k=0$, which implies that the finite-time bound \eqref{eq:bound_SA} converges to zero as $t\rightarrow\infty$. It is worth mentioning that the theorem does not consider the fixed step-size case, as our current analysis approach cannot be directly applied for this case.
{\bf Proof Sketch and Technical Challenge \;} Using the inequality
$$\sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \theta^*\|_2^2]
\le 2 \sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2^2 ] + 2 N \mathbf{E} [\| \langle \tilde \theta \rangle_t - \theta^*\|_2^2],$$
our goal is to derive the finite-time bounds of $\sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2^2 ]$ and $\mathbf{E} [\| \langle \tilde \theta \rangle_t - \theta^*\|_2^2]$, respectively. Although this looks similar to the proof of Theorem~\ref{thm:bound_jointly_SA}, the derivation is quite different. First, the iteration of $\langle \tilde \theta \rangle_t$ is a single-agent stochastic approximation (SA) plus a disturbance term $\langle \theta \rangle_t-\langle \tilde \theta \rangle_t$, so we cannot directly apply the existing single-agent SA finite-time analyses to bound $\mathbf{E} [\| \langle \tilde \theta \rangle_t - \theta^*\|_2^2]$; instead, we have to show that $\langle \theta \rangle_t-\langle \tilde \theta \rangle_t$ will diminish and quantify the diminishing ``speed''.
Second, both the proof that $\langle \theta \rangle_t-\langle \tilde \theta \rangle_t$ diminishes and the derivation of the bound on $\sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2^2 ]$ involve a key challenge: proving that the sequence $\{ \theta_t^i \}$ generated by the Push-SA \eqref{eq:SA_push-sum} is bounded almost surely. To tackle this, we introduce a novel way of constructing an absolute probability sequence for the Push-SA as follows. From~\eqref{eq:SA_push-sum}, $ \theta^i_{t+1}
= \sum_{j=1}^N \tilde w_t^{ij} [ \theta^j_{t} + \alpha_t A(X_t) \frac{ \theta^j_{t}}{y_t^j} + \alpha_t \frac{ b^j(X_t ) }{y_t^j}]$,
where $\tilde w_t^{ij} = (\hat w_t^{ij} y_t^j)/(\sum_{k=1}^N \hat w_t^{ik } y^k_{t})$. We show that each matrix $\tilde W_t = [\tilde w_t^{ij}]$ is stochastic, and there exists a unique absolute probability sequence $\{ \tilde \pi_t \} $ for the matrix sequence $\{ \tilde W_t \} $ such that $ \tilde \pi_t^i \ge \tilde \pi_{\min}$ for all $i\in\scr V$ and $t\ge 0$, with the constant $ \tilde \pi_{\min}\in(0,1)$. Most importantly, we show two critical properties of $\{ \tilde W_t \} $ and $\{ \tilde \pi_t \} $, namely $\lim_{t\to\infty} (\Pi_{s=0}^{t} \tilde W_s)= \frac{1}{N}\mathbf{1}_N \mathbf{1}^\top_N$ and $\frac{\tilde \pi_t^i}{y_t^i} = \frac{1}{N}$ for all $i\in\scr V$ and $t \ge 0$, which, to the best of our knowledge, have not been reported in the literature even though push-sum-based distributed algorithms have been extensively studied.
\begin{remark} It is worth mentioning that our novel approach for analyzing push-SA can be used to establish a better convergence rate for the push-subgradient algorithm in \cite{nedic}, which is currently under preparation \cite{yixuanpush}.
$\Box$ \end{remark}
\section{Concluding Remarks} \label{sec:conclusion}
In this paper, we have established both asymptotic and non-asymptotic analyses for a consensus-based distributed linear stochastic approximation algorithm over uniformly strongly connected graphs, and proposed a push-based variant for coping with uni-directional communication. Both algorithms and their analyses can be directly applied to TD learning. One limitation of our finite-time bounds is that they involve quite a few constants which are well defined and characterized but whose values are not easy to compute. Future directions include leveraging the analyses for resilience in the presence of malicious agents and extending the tools to more complicated RL.
\appendix
\section{List of Constants} \label{sec:constants}
In this appendix, we list all the constants used in our main results, Theorems~\ref{thm:bound_jointly_SA} and \ref{thm:bound_time-varying_step_Push_SA}. They are finite and their expressions do not affect the understanding of the theorems. Since their expressions are quite long and complicated, we begin with the following set of constants, based on which we will be able to present the constants used in the theorems and the proofs of the theorems in an easier way. We hope that this way can also help the readers to better understand and follow our results and analyses.
The first constant $\zeta_1$ is defined as follows. Recall that $\epsilon$ is given in \eqref{eq:define epsilon_jointly} as \begin{align*}
\epsilon = \bigg(1+\frac{2 b_{\max}}{A_{\max}}-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}} \bigg)( 1 + \alpha A_{\max})^{2L} - \frac{2 b_{\max}}{ A_{\max}} (1 + \alpha A_{\max})^{L}. \end{align*} $\zeta_1$ is defined as the unique value of $\alpha$ for which $\epsilon = 1$. The following remark shows why $\zeta_1$ exists and is unique.
\begin{remark} \label{remark:exists_Psi9}
From~\eqref{eq:define epsilon_jointly}, it is easy to see that $ \epsilon$ is monotonically increasing in $\alpha$ for $\alpha>0$. Define the corresponding monotonic function as \begin{align*}
f(\alpha) = \bigg(1+\frac{2 b_{\max}}{A_{\max}}-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}} \bigg)( 1 + \alpha A_{\max})^{2L} - \frac{2 b_{\max}}{ A_{\max}} (1 + \alpha A_{\max})^{L}. \end{align*} Note that $0<f(0) < 1$ and $f(+\infty)=+\infty$. Thus, $ f(\alpha) = 1$ has a unique solution $\zeta_1$.
$\Box$ \end{remark}
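Since $f$ is continuous and strictly increasing on $(0,\infty)$ with $f(0)<1$ and $f(+\infty)=+\infty$, $\zeta_1$ can be located numerically by bisection; the constants in the sketch below are illustrative placeholders only.
\begin{verbatim}
def f(alpha, A_max=1.0, b_max=1.0, pi_min=0.1, beta=0.5, L=3, delta_max=3):
    r = (1.0 + alpha * A_max) ** L
    return (1 + 2*b_max/A_max - pi_min*beta**(2*L)/(2*delta_max)) * r**2 \
           - (2*b_max/A_max) * r

def zeta1(lo=0.0, hi=1.0, tol=1e-12):
    while f(hi) < 1.0:              # grow the bracket until f(hi) >= 1
        hi *= 2.0
    while hi - lo > tol:            # bisection on the increasing function f
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(zeta1())                      # the unique alpha with f(alpha) = 1
\end{verbatim}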
The other constants are defined as follows:
\begin{align}
\zeta_2 &= \frac{4 b_{\max}^2}{ A_{\max}^2}\left[(1 + \alpha A_{\max})^L-1\right]^2
+ 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{ A_{\max}} (1 + \alpha A_{\max})^{L}\label{eq:define Psi10} \\
\zeta_3 &= \left( 144 + 4 A_{\max}^2 + 912 \tau(\alpha) A_{\max}^2 + 168 \tau(\alpha) A_{\max} b_{\max} \right) \| \theta^*\|_2^2 \nonumber \\
&\;\;\; + \tau(\alpha) A_{\max}^2 \bigg[152 \bigg(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \bigg)^2 + \frac{48 b_{\max}}{A_{\max}} \bigg(\frac{b_{\max}}{A_{\max}} + 1 \bigg)^2 + \frac{87b_{\max}^2}{A_{\max}^2} + \frac{12 b_{\max}}{A_{\max}} \bigg] \nonumber\\
& \;\;\; + 2 + 2 b_{\max}^2 + 4 \|\theta^* \|_2^2 + \frac{48b_{\max}^2}{A_{\max}^2} \label{eq:define Psi4} \end{align} \begin{align}
\zeta_4 &= \sqrt{N}b_{\max} \bigg( 2 + \frac{12 b_{\max}^2}{A_{\max}^2}+ 38 \| \theta^* \|_2^2 \bigg)\label{eq:define Psi5} \\
\zeta_5 & = 144 + 916 A_{\max}^2 + 168 A_{\max} b_{\max} \label{eq:define Psi7} \\
\zeta_6 & = 4 b_{\max}^2 \alpha L^2 ( 1 + \alpha A_{\max})^{2L-2} +2 b_{\max}L ( 1 + \alpha A_{\max})^{2L-1}
\label{eq:define Psi11} \\
\zeta_7 & = (148 + 916 A_{\max}^2 + 168 A_{\max} b_{\max}) \| \theta^*\|_2^ 2 + 2 + \frac{48b_{\max}^2}{A_{\max}^2} + 152 \bigg(b_{\max} + A_{\max} \| \theta^* \|_2 \bigg)^2 \nonumber\\
&\;\;\; + 89 b_{\max}^2 + 12 A_{\max}b_{\max} + 48 A_{\max}b_{\max} \bigg(\frac{b_{\max}}{A_{\max}} + 1 \bigg)^2 \label{eq:define Psi8}\\
\zeta_8 &= 144 + 916 A_{\max}^2 + 168 A_{\max} b_{\max} + 144 A_{\max} \mu_{\max} \label{eq:definition_Psi12} \\
\zeta_9 &= \bigg[ 2 + ( 4 + \zeta_8) \|\theta^* \|_2^2 + 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\;\; + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) \bigg(\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 \bigg)^2 + 89 (b_{\max}+ \mu_{\max})^2 \bigg]\label{eq:definition_Psi13}
\end{align}
Here $\mu_{\max}=(N+1) A_{\max}C_\theta$, where $C_\theta$ is a finite number defined in Lemma~\ref{lemma:bound_theta} which can be regarded as an upper bound on the 2-norm of each agent's state $\theta^i_t$ generated by the Push-SA algorithm \eqref{eq:SA_push-sum}.
\subsection{Constants used in Theorem~\ref{thm:bound_jointly_SA}} \label{sec:thmSA_constant}
\begin{align}
K_1 &= \min\bigg\{ \zeta_1, \; \frac{\gamma_{\max}}{0.9} \bigg\} \hspace{4in}\nonumber\\
K_2 &= 144 + 4 A_{\max}^2 + 912 \tau(\alpha) A_{\max}^2 + 168 \tau(\alpha) A_{\max} b_{\max} \label{eq:define Psi3}\\
C_1 &= \frac{\gamma_{\max}}{\gamma_{\min}} \left( 8 \exp\left\{ 2 \alpha A_{\max}T_1 \right\}+4 \right) \mathbf{E}\left[\| \langle \theta \rangle_{0} -\theta^* \|_2^2\right] \nonumber\\
&\;\;\;\;\;+ 8 \frac{\gamma_{\max}}{\gamma_{\min}} \exp\left\{ 2\alpha A_{\max}T_1 \right\} \bigg( \|\theta^*\|_2 + \frac{b_{\max}}{A_{\max}} \bigg)^2 \nonumber\\
C_2 &= \frac{2\zeta_2}{1- \epsilon} + \frac{\gamma_{\max}}{\gamma_{\min}}\cdot \frac{ 2 \alpha \zeta_3 \gamma_{\max}}{0.9 } \nonumber\\
C_3 &= \frac{2\zeta_6}{1-\epsilon} \nonumber\\
C_4 &= 2\zeta_7 \alpha_0 C \frac{\gamma_{\max}}{\gamma_{\min}} \nonumber \\
C_5 &= 2 \alpha_0 \zeta_4 \frac{\gamma_{\max}}{\gamma_{\min}}\nonumber \\
C_6 &= 2LT_2 \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}\left[\| \langle \theta \rangle_{LT_2} -\theta^* \|_2^2 \right] \nonumber
\end{align}
$T_1$ is any positive integer such that for all $t \ge T_1$, there hold $t\ge \tau(\alpha)$ and
$36 \sqrt{N}b_{\max} \eta_{t+1} \gamma_{\max} + K_2 \alpha \gamma_{\max} \le 0.1 $.
\begin{remark} We show that $T_1$ must exist. From $0 < \alpha < \min \{ K_1 ,\; \frac{\log2}{A_{\max} \tau(\alpha)},\; \frac{0.1}{K_2 \gamma_{\max}} \}$, it is easy to see that the feasible set of $\alpha$ is nonempty and $ K_2 \alpha \gamma_{\max} < 0.1 $. Since $\lim_{t\to\infty} \eta_t = 0$ by Lemma~\ref{lemma:eta_sum} and $\tau(\alpha) \le -C\log \alpha$ by Assumption~\ref{assum:mixing-time}, there exists a time instant $ T \ge -C\log \alpha$ such that for any $t\ge T$, there hold $t \ge \tau(\alpha)$ and $\eta_{t+1} \le (0.1 - K_2 \alpha \gamma_{\max})/(36 \sqrt{N}b_{\max} \gamma_{\max})$, which implies that $T_1$ must exist.
$\Box$ \end{remark}
$T_2$ is any positive integer such that for all $t\ge LT_2$, there hold $\alpha_t \le \alpha$, $2\tau(\alpha_t) \le t$, $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \min \{ \frac{ \log2}{A_{\max}},\; \frac{0.1}{\zeta_5 \gamma_{\max}} \}$ and $ \zeta_5 \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} + 36\sqrt{N} b_{\max} \eta_{t+1} \gamma_{\max} \le 0.1 $.
\begin{remark} \label{remark:exists_T_2} We explain why $T_2$ must exist.
Since $\alpha_t = \frac{\alpha_0}{t+1}$ is monotonically decreasing for $t$ and $\tau(\alpha_t) \le -C\log \alpha_t = -C\log \alpha_0 + C\log (t+1) $ from Assumption~\ref{assum:mixing-time}, there exists a positive $S_{1}$ such that for any $t\ge S_{1}$, we have $\alpha_t \le \alpha$ and $t\ge 2\tau(\alpha_t)$ for any constant $0 < \alpha < \min \{ K_1 ,\; \frac{\log2}{A_{\max} \tau(\alpha)},\; \frac{0.1}{K_2 \gamma_{\max}} \}$. Moreover, it is easy to show that \begin{align*}
\lim_{t\to\infty} t-\tau(\alpha_t) & \ge \lim_{t\to\infty} t+ C\log \alpha_0 - C\log (t+1) = +\infty, \\
\lim_{t\to\infty}\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} & \le \lim_{t\to\infty} \frac{-C \alpha_0 \log \alpha_0 + C\alpha_0\log(t+1)}{t-\tau(\alpha_t)+1} = 0. \end{align*} Then, there exists a positive $S_{2}$ such that for any $t\ge S_{2}$, we have $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \min \{ \frac{ \log2}{A_{\max}},\; \frac{0.1}{\zeta_5 \gamma_{\max}} \}$. In addition, since $\lim_{t\to\infty} \eta_t = 0$ from Lemma~\ref{lemma:eta_sum}, when $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \frac{0.1}{\zeta_5 \gamma_{\max}}$, there exists a positive $S_{3}$ such that for any $t\ge S_{3}$, we have $\eta_{t+1} \le (0.1 - \zeta_5 \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max})/(36\sqrt{N} b_{\max} \gamma_{\max})$. Therefore, $T_2$ must exist as we can simply set $T_2 = \max\{ S_{1},\; S_{2},\; S_{3} \}$.
$\Box$ \end{remark}
\subsection{Constants used in Theorem~\ref{thm:bound_time-varying_step_Push_SA}} \label{sec:thmPush_constant}
\begin{flalign}
C_7 &= \frac{16}{\epsilon_1} \mathbf{E}\bigg[ \Big\| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\tilde \theta_0^i + \alpha_0 b^i(X_0) \Big\|_2\bigg] \hspace{3in}\nonumber \\
C_8 &= \frac{16}{\epsilon_1} \cdot\frac{ A_{\max} C_\theta + b_{\max}}{1-\bar\epsilon}\nonumber \\
C_9 &= 2 A_{\max} C_\theta + 2 b_{\max} \nonumber\\
C_{10} &= {2 N \zeta_9 \alpha_0 C } \frac{\gamma_{\max}}{\gamma_{\min}} \nonumber\\
C_{11} &= 2 \alpha_0 N \frac{\gamma_{\max}}{\gamma_{\min}} \nonumber\\
C_{12} &= {2\bar T N} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}\left[\|\langle \tilde \theta \rangle_{\bar T} -\theta^* \|_2^2 \right] \nonumber \end{flalign}
Here $\epsilon_1$ is a positive constant defined as $\epsilon_1 = \inf_{t\ge 0} \min_{i\in\scr V} (\hat W_t \cdots \hat W_0 \mathbf{1}_N)^i $.
From Corollary~2~(b) in \cite{nedic} and the fact that each $\hat W_t$ is column stochastic, $\epsilon_1 \in [\frac{1}{N^{NL}}, 1]$. See Lemma~\ref{lemma:bound_consensus_time-varying_push_SA} for more~details.
$\bar T$ is any positive integer such that for all $t\ge \bar T$, there hold $2\tau(\alpha_t) \le t$, $ \mu_{t} + \tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \zeta_8 \le \frac{0.1}{\gamma_{\max}}$ and $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \min \{ \frac{ \log2}{A_{\max}},\; \frac{0.1}{\zeta_8 \gamma_{\max}} \}$.
\begin{remark} From Lemma~\ref{lemma:eta_limit_Push_SA}, $\lim_{t\to\infty} \mu_t = 0$. Then, using the similar arguments as in Remark~\ref{remark:exists_T_2}, we can show the existence of $\bar T$.
$\Box$ \end{remark}
\section{Discussion on Assumption~\ref{assum:limit_pi}}\label{discussionAss6} In this appendix, we contend that Assumption~\ref{assum:limit_pi} has more general applications than the previously known cases and discuss why it is not overly restrictive from a theoretical point of view.
\subsection{Applications}
First, as mentioned in Remark~\ref{remark:on assmption}, there are at least two cases which satisfy Assumption~\ref{assum:limit_pi}, yet cannot be directly handled by the existing analysis tools, which were developed only for doubly stochastic matrices. Case 1 is when the number of in-neighbors of each agent is unchanged over time. This case has an interesting behavioral interpretation in fish biology, and has been adopted in bio-inspired distributed algorithm design \cite{abaid2010consensus}. Case 2 is when the interaction matrix changes arbitrarily over time during an initial period, after which it finally becomes fixed. As we describe below, Case 2 occurs naturally in certain multi-agent systems.
Case 1 is mathematically equivalent to the situation when all stochastic matrices share the same left dominant eigenvector, which subsumes doubly stochastic matrices as a special case; thus it could be analyzed by carefully choosing a fixed norm. There may be different choices: one choice is to apply our time-varying quadratic Lyapunov comparison function $\sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \theta^*\|_2^2]$ to the time-invariant case (i.e., $\pi_t^i$ does not change over time), which leads to the weighted Frobenius norm defined in Appendix~\ref{analysis}.
The extension to Case 1 just described may be straightforward, but Case 2 is not. As we proved in Theorems~\ref{thm:theta^*_jointly} and \ref{thm:bound_jointly_SA}, when the interaction matrix arbitrarily changes over time for an initial period, say of length $T$, and finally becomes a fixed matrix or enters Case 1, all agents' trajectories determined by \eqref{eq:theta update} will converge in mean square. Also, recall that the corresponding finite-time error bounds in this case were derived using the ``absolute probability sequence'' technique. Note that the existing techniques can only be applied to analyze \eqref{eq:theta update} after time $T$; when $T$ is very large, such an analysis is undesirable, since the focus and challenge here are for ``finite'' time.
It is important to note that Case 2 provides a realistic model for certain systems. Consider scenarios in which some agents do not function stably and thus they communicate with their neighbors sporadically for a certain period, leading to a time-varying stochastic matrix. Such scenarios occur naturally when there is unstable communication due to environmental changes or movement of agents (e.g., robots or UAVs may need to move into a new formation while continuing computation). After this unstable period, which could be long, the whole system then enters a stable operation status. This satisfies Case 2 and our finite-time analysis can be applied to the whole process, no matter how long the unstable period could be, as long as it is finite. In addition to this example, Case 2 and our analysis can be applied to certain scenarios in the presence of malicious agents. Suppose the system is aware that a small subset of agents have potentially been attacked and are thus behaving maliciously. To protect the system, the consensus interaction among the agents can switch to resilient consensus algorithms such as \cite{vaidya2012iterative,leblanc2013resilient} in order to attenuate the effect of malicious agents. In this situation, the resulting dynamics of the non-malicious agents are in general characterized by a time-varying stochastic matrix. After identifying and/or fixing the malicious agents, which could be a very slow process, the system can switch back to normal operation status. This example again satisfies Case 2, and our analysis can be applied to the whole procedure. As we mentioned in Remark~\ref{remark:on assmption}, if some malicious agents always exist, the non-malicious agents in general will not converge, and thus a finite-time analysis is probably meaningless. The non-convergence issue will be further explained in the next subsection.
Whether Assumption~\ref{assum:limit_pi} can represent more realistic/analytic examples is a very interesting future direction. Though consensus has been extensively studied and the ``absolute probability sequence'' was proposed decades ago, this question has never been explored. The development of more advanced analysis tools is an interesting topic as well.
\subsection{Necessity}
We now elaborate on why Assumption~\ref{assum:limit_pi} is not restrictive from a theoretical point of view.
As mentioned in Remark~\ref{remark:on assmption}, distributed SA with time-varying stochastic matrices does not converge, in general, if Assumption~\ref{assum:limit_pi} does not hold. Assumption~\ref{assum:limit_pi} is sufficient to guarantee the convergence of the distributed SA algorithm \eqref{eq:theta update} when the interaction matrix is row stochastic and time-varying. Let us denote the necessary and sufficient condition for convergence of consensus-based distributed SA as Condition A, which is currently unknown. It is possible that there is a large gap between Assumption~\ref{assum:limit_pi} and Condition A. But Assumption~\ref{assum:limit_pi} is (to our knowledge) the most general sufficient condition that has been proposed so far; one indirect justification of this claim is that Assumption~\ref{assum:limit_pi} is an analogue of condition (C3.4') in \cite{kushner87}, which is itself a sufficient condition guaranteeing the asymptotic convergence of a different form of distributed SA. While \cite{kushner87} only provided asymptotic analysis, we provided both asymptotic and finite-time analyses using a novel tool. Assumption~\ref{assum:limit_pi} subsumes the existing analysis for doubly stochastic matrices as a special case, and can be used for more general, nontrivial cases (see the examples provided in the discussion of Case 2 above). Existing analysis tools cannot be applied to Case 2. From a theoretical point of view, our paper reduces the gap between the doubly stochastic matrices assumption and Condition A to the smaller gap between Assumption~\ref{assum:limit_pi} and Condition A, for finite-time analysis of consensus-based distributed~SA.
In addition, the other equally important main contribution of our paper, push-SA, does not need Assumption~\ref{assum:limit_pi}, though its analysis still relies on the ``absolute probability sequence'' technique.
\subsection{Contributions}
Next, we present a high-level view of our paper, which may help the readers to better understand our overall contributions.
There are three major information fusion schemes in the vast distributed algorithms literature: ``consensus" (time-varying stochastic matrices), ``averaging" (time-varying doubly stochastic matrices which include gossiping), and ``push-sum" (time-varying column stochastic matrices). The consensus-based scheme can guarantee an agreement among the agents, but the agreement point in general cannot be specified, especially when the interaction is time-varying. The averaging scheme can specify the agreement point to be the average among all agents using doubly stochastic matrices, but these only work for undirected graphs (i.e., bi-directional communication is required between any pair of neighbors); typical examples are the Metropolis algorithm \cite{metro2} and gossiping \cite{boyd052}. The push-sum scheme is able to not only achieve agreement on the average, but it also works for directed graphs, allowing uni-directional communication. The push-sum scheme can also be straightforwardly modified to achieve any given convex combination agreement among all agents. The three schemes are widely used, depending on task specifications. Push-sum appears to be the most powerful, but the other two also have advantages; e.g., consensus can be modified to be more resilient against malicious agents, and averaging is easier in algorithm design (especially gossiping) and analysis (due to nicer properties of doubly stochastic matrices). There is a very recently proposed scheme called push-pull, but it is not yet that popular, so we focus our attention on the three major schemes.
With the above background in mind, there are three major information fusion schemes that can be used to design distributed SA (as well as RL). The existing literature has only analyzed the averaging scheme (doubly stochastic matrices), which to us appears to be the easiest among the three. Finite-time analyses of the other two schemes are untouched in the literature. Our paper is the first to consider both.
As explained in the preceding subsection, our result and analysis for the consensus scheme (based on Assumption~\ref{assum:limit_pi}) are the most general so far and generalize the existing tools in a nontrivial manner. This leads to very interesting open research problems, such as finding a necessary and sufficient condition for convergence of distributed SA, and designing resilient consensus fusion rules that guarantee convergence of distributed SA.
\section{Analysis and Proofs}\label{analysis}
In this appendix, we provide the analysis of our two algorithms, \eqref{eq:theta update} and \eqref{eq:SA_push-sum}, and the proofs of all the assertions in the paper. We begin with some notation.
\subsection{Notation}
We use $\mathbf{0}_n$ to denote the vector in ${\rm I\!R}^n$ whose entries all equal $0$. For any vector $x\in{\rm I\!R}^n$, we use ${\rm diag}(x)$ to denote the $n\times n$ diagonal matrix whose $i$th diagonal entry equals $x^i$. We use $\|\cdot\|_F$ to denote the Frobenius norm. For any positive diagonal matrix $W\in{\rm I\!R}^{n\times n}$, we use $\|A\|_W$ to denote the weighted Frobenius norm for $A\in{\rm I\!R}^{n\times m}$, defined as $\|A\|_W = \|W^{\frac{1}{2}}A\|_F$. It is easy to see that $\|\cdot\|_W$ is a matrix norm. We use $\mathbf{P}(\cdot)$ to denote the probability of an event and $\mathbf{E}(X)$ to denote the expected value of a random variable $X$.
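As a quick numerical illustration (with arbitrary placeholder matrices), the weighted Frobenius norm can be evaluated as follows; it coincides with the trace form $\sqrt{{\rm tr}(A^\top W A)}$.
\begin{verbatim}
import numpy as np

W = np.diag([0.2, 0.3, 0.5])                  # positive diagonal weight matrix
A = np.arange(6.0).reshape(3, 2)              # arbitrary 3 x 2 matrix

weighted_fro = np.linalg.norm(np.sqrt(W) @ A, 'fro')  # ||A||_W = ||W^{1/2} A||_F
trace_form = np.sqrt(np.trace(A.T @ W @ A))           # equivalent expression
print(weighted_fro, trace_form)
\end{verbatim}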
\subsection{Distributed Stochastic Approximation}\label{proofs:sa}
In this subsection, we analyze the distributed stochastic approximation algorithm \eqref{eq:theta update} and provide the proofs of the results in Section~\ref{sec:SA}. We begin with the asymptotic performance.
{\bf Proof of Lemma~\ref{lemma:bound_pi_jointly}:} Since uniform strong connectedness is equivalent to $B$-connectedness, as discussed in Remark~\ref{remark:uniformly}, the existence is proved in Lemma 5.8 of \cite{touri2012product}, and the uniqueness is proved in Lemma 1 of \cite{tacrate}.
$ \rule{.08in}{.08in}$
{\bf Proof of Theorem~\ref{thm:consensus_time-varying_jointly}:} Without loss of generality, let $\{\mathbb{G}_t\}$ be uniformly strongly connected by sub-sequences of length $L$. Note that for any $i\in\scr V$, we have \begin{align}\label{eq:proof_th1_1_jointly}
0 \le \pi_{\min}\|\theta^i_t-\langle \theta\rangle_t\|_2^2 \le \pi_{\min} \sum_{j=1}^N \|\theta^j_t-\langle \theta\rangle_t\|_2^2 \le \sum_{j=1}^N \pi_t^j \|\theta^j_t-\langle \theta\rangle_t\|_2^2, \end{align} where $\pi_{\min}$ is defined in Lemma~\ref{lemma:bound_pi_jointly}. From Lemma~\ref{lemma:bound_consensus_time-varying_jointly}, \begin{align} \label{eq:proof_th1_2_jointly}
&\;\;\;\;\lim_{t\to\infty}\sum_{i=1}^N \pi_{t}^i \| \theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 \nonumber\\
& \le \lim_{t\to\infty} \epsilon^{q_t - {T_2}} \sum_{i=1}^N \pi_{T_2L+m_t}^i \| \theta_{T_2L+m_t}^i - \langle \theta \rangle_{T_2L+m_t} \|_2^2 + \lim_{t\to\infty} \frac{\zeta_6}{1-\epsilon} \left( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L} \right) = 0. \end{align}
Combining \eqref{eq:proof_th1_1_jointly} and \eqref{eq:proof_th1_2_jointly}, it follows that for all $i\in\scr V$, $\lim_{t\to\infty} \pi_{\min}\|\theta^i_t-\langle \theta\rangle_t\|_2^2 = 0$.
Since $\pi_{\min}>0$ by Lemma~\ref{lemma:bound_pi_jointly}, $\lim_{t\to\infty} \|\theta^i_t-\langle \theta\rangle_t\|_2 = 0$ for all $i\in\scr V$.
$ \rule{.08in}{.08in}$
\noindent {\bf Proof of Theorem~\ref{thm:theta^*_jointly}:}
From Theorem~\ref{thm:consensus_time-varying_jointly}, all $\theta_t^i$, $i\in \mathcal{V}$, will reach a consensus with $ \langle \theta \rangle_t $ and the update of $ \langle \theta \rangle_t $ is given in \eqref{eq:update of average_time-varying}, which can be treated as a single-agent linear stochastic approximation whose corresponding ODE is \eqref{eq:definition theta^*}. From \cite{kushner87, kushner1983averaging},\footnote{On page 1289 of \cite{kushner87}, it says that the idea in \cite{kushner1983averaging} can be adapted to get the w.p.1 convergence result. } we know that $ \langle \theta \rangle_t $ will converge to $\theta^*$ w.p.1, which implies that $\theta_t^i$ will converge to $\theta^*$ w.p.1. In addition, from Theorem~\ref{thm:bound_jointly_SA}-(2) and Lemma~\ref{lemma:eta_sum},
$\lim_{t\to\infty}\sum_{i=1}^N \pi_t^i \mathbf{E}[\|\theta_t^i - \theta^*\|_2^2]=0$. Since $\pi_t^i$ is uniformly bounded below by $\pi_{\min}>0$, as shown in Lemma~\ref{lemma:bound_pi_jointly}, it follows that $\theta_t^i$ will converge to $\theta^*$ in mean square for all $i\in\scr V$.
$ \rule{.08in}{.08in}$
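To make the limit point concrete, consider a toy scalar instance (an illustration of our own, with hypothetical numbers): $K=1$, $A=-1$, and $b=2$, so the unique solution of $A\theta^*+b=0$ is $\theta^*=2$. Once $A(X_t)$ and $B(X_t)^\top\pi_{t+1}$ have mixed to their limits $A$ and $b$, the update \eqref{eq:update of average_time-varying} has conditional mean increment approximately $\alpha_t(A\langle\theta\rangle_t+b)=-\alpha_t(\langle\theta\rangle_t-2)$, i.e., a noisy contraction toward $\theta^*$, and Theorem~\ref{thm:theta^*_jointly} states that every local iterate $\theta_t^i$ inherits this limit.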
We now analyze the finite-time performance of \eqref{eq:theta update}. In the sequel, we use $K$ to denote the dimension of each $\theta_t^i$, i.e., $\theta_t^i \in {\rm I\!R}^K$ for all $i\in\scr V$.
\subsubsection{Fixed Step-size} \label{sec:proof_jointly_fixed}
We first consider the fixed step-size case and begin with validation of two ``convergence rates'' in Theorem~\ref{thm:bound_jointly_SA}.
\begin{lemma}\label{lemma:ration_in_0_1}
Both $\epsilon$ and $(1-\frac{0.9 \alpha}{\gamma_{\max}})$ lie in the interval $(0,1)$. \end{lemma}
{\bf Proof of Lemma~\ref{lemma:ration_in_0_1}:} Since $0<\alpha < K_1 = \min\{ \zeta_1, \; \frac{\gamma_{\max}}{0.9}\}$ as imposed in Theorem~\ref{thm:bound_jointly_SA}, we have $0< \alpha < \zeta_1$ and $0<\alpha< \frac{\gamma_{\max}}{0.9}$. The latter immediately implies that $1-\frac{0.9 \alpha}{\gamma_{\max}} \in (0,1)$. From Remark~\ref{remark:exists_Psi9}, $ \epsilon$ is monotonically increasing for $\alpha>0$. In addition, it follows from the definition of $\zeta_1$ in Appendix~\ref{sec:constants} that if $\alpha = \zeta_1$, then $\epsilon=1$. Since $0<\alpha<\zeta_1$, we conclude that $0<\epsilon <1$.
$ \rule{.08in}{.08in}$
To proceed, we need the following derivation and lemmas.
Let $Y_t = \Theta_t - \mathbf{1}_N \langle \theta \rangle_t^\top = (I - \mathbf{1}_N \pi_t^\top)\Theta_t$. For any $t\ge s\ge 0$, let $W_{s:t} = W_t W_{t-1} \cdots W_s$. Then,
\begin{align} \label{eq:update_y_fixed}
Y_{t+1} &= \Theta_{t+1} - \mathbf{1}_N \langle \theta \rangle_{t+1}^\top \nonumber \\
&= W_t \Theta_t + \alpha W_t \Theta_t A^\top(X_t) + \alpha B(X_t) - \mathbf{1}_N (\langle \theta \rangle^\top_t + \alpha \langle \theta \rangle^\top_t A^\top(X_t) + \alpha \pi_{t+1}^\top B(X_t) )\nonumber\\
&= W_t (I - \mathbf{1}_N \pi_t^\top) \Theta_t + \alpha W_t (I - \mathbf{1}_N \pi_t^\top) \Theta_t A^\top(X_t) + \alpha (I - \mathbf{1}_N \pi_{t+1}^\top) B(X_t)\nonumber \\
&= W_t Y_t + \alpha W_t Y_t A^\top(X_t) + \alpha (I - \mathbf{1}_N \pi_{t+1}^\top) B(X_t). \end{align} For simplicity, let $Y_{t}^i$ be the $i$-th column of matrix $Y_{t}^\top$. Then, \begin{align}\label{eq:yyy_fixed}
Y_{t+1}^i = \sum_{j=1}^N w_t^{ij} Y^j_t + \alpha A(X_t) \sum_{j=1}^N w_t^{ij} Y^j_t + \alpha \left( b^i(X_t) - B^\top(X_t) \pi_{t+1}\right). \end{align} From \eqref{eq:update_y_fixed}, we have \begin{align}\label{eq:update_y_constant_jointly}
Y_{t+L}
&= W_{t+L-1} Y_{t+L-1} ( I + \alpha A^\top(X_{t+L-1})) + \alpha (I - \mathbf{1}_N \pi_{{t+L}}^\top) B(X_{t+L-1}) \nonumber\\
&= W_{t+L-1} W_{t+L-2} Y_{t+L-2} ( I + \alpha A^\top(X_{t+L-2})) ( I + \alpha A^\top(X_{t+L-1})) \nonumber \\
&\;\;\; + \alpha W_{t+L-1} (I - \mathbf{1}_N \pi_{{t+L-1}}^\top) B(X_{t+L-2}) ( I + \alpha A^\top(X_{t+L-1})) + \alpha (I - \mathbf{1}_N \pi_{{t+L}}^\top) B(X_{t+L-1})\nonumber \\
&= W_{t:t+L-1} Y_{t} ( I + \alpha A^\top(X_{t})) \cdots ( I + \alpha A^\top(X_{t+L-1})) + \alpha (I - \mathbf{1}_N \pi_{{t+L}}^\top) B(X_{t+L-1}) \nonumber \\
& \;\;\; + \alpha \sum_{k=t}^{t+L-2} W_{k+1:t+L-1} (I - \mathbf{1}_N \pi_{{k+1}}^\top) B(X_{k}) \left(\Pi_{j=k+1}^{t+L-1} ( I + \alpha A^\top(X_{j})) \right), \end{align} and from \eqref{eq:yyy_fixed}, \eq{
Y_{t+L}^i = \left(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j + \alpha \hat b_{t+L}^i, \label{eq:xxx}} where \begin{align*}
\hat b_{t+L}^i & = (b^i(X_{t+L-1}) - B(X_{t+L-1})^\top \pi_{{t+L}}) \\
& \;\;\; + \sum_{k=t}^{t+L-2} \left(\Pi_{j=k+1}^{t+L-1} ( I + \alpha A(X_{j})) \right) \sum_{j=1}^N w_{k+1:t+L-1}^{ij} (b^j(X_{k}) - B(X_{k})^\top \pi_{{k+1}}). \end{align*}
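As a quick sanity check on \eqref{eq:xxx} (a consistency observation only), when $L=1$ the sum over $k$ in the definition of $\hat b_{t+L}^i$ is empty, so $\hat b_{t+1}^i = b^i(X_{t}) - B(X_{t})^\top \pi_{{t+1}}$ and \eqref{eq:xxx} reduces exactly to the one-step recursion \eqref{eq:yyy_fixed}.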
\begin{lemma} \label{lemma:lower_bound_jointly}
Suppose that Assumption~\ref{assum:weighted matrix} holds and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$. Then, for all $t \ge 0$,
\begin{align*}
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{k=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{ik}\| Y_{t}^j - Y_{t}^k \|_2^2 \ge \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2,
\end{align*} where $\beta>0$ and $\pi_{\min}>0$ are given in Assumption~\ref{assum:weighted matrix} and Lemma~\ref{lemma:bound_pi_jointly}, respectively. \end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:lower_bound_jointly}:} We first consider the case when $K=1$, i.e., $Y_t^i \in {\rm I\!R}$ for all $i$. From Lemma~\ref{lemma:bound_pi_jointly},
\begin{align*}
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2 \ge \pi_{\min} \sum_{i=1}^N \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2.
\end{align*} Let $j^*$ and $l^*$ be the indices such that $
|Y_{t}^{j^*} - Y_{t}^{l^*}| = \max_{1\le j,l \le N} | Y_{t}^j - Y_{t}^l |. $ From the definition of~$Y_t$, $ Y_{t}^j - Y_{t}^l = \theta_{t}^{j} - \theta_{t}^{l} $ for all $j,l\in\scr V$, which implies that $$
|Y_{t}^{j^*} - Y_{t}^{l^*}| = \max_{1\le j,l \le N} | Y_{t}^j - Y_{t}^l |
= \max_{1\le j,l \le N} | \theta_{t}^j - \theta_{t}^l |=|\theta_{t}^{j^*} - \theta_{t}^{l^*}|. $$ Since $\cup_{k=t}^{t+L-1} \mathbb{G}_k$ is a strongly connected graph for all $t\ge0$,
we can find a shortest path from agent~$j^*$ to agent $l^*$: $( j_0, j_1 ), \cdots, (j_{p-1}, j_p)$ with $j_0 = j^*$, $j_p = l^*$, and each $( j_{m-1}, j_m )$, $1 \le m \le p$, an edge of the graph $\cup_{k=t}^{t+L-1} \mathbb{G}_k$, which implies that
\begin{align} \label{eq:lemma1_2_jointly}
\sum_{i=1}^N \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2 \ge \sum_{i=1}^N \sum_{m=1}^p w_{t:t+L-1}^{ij_{m-1}} w_{t:t+L-1}^{ij_m} ( Y_{t}^{j_{m-1}} - Y_{t}^{j_m} )^2. \end{align} Moreover, we have \begin{align} \label{eq:lemma1_21_jointly}
\sum_{i=1}^N w_{t:t+L-1}^{ij_{m-1}} w_{t:t+L-1}^{ij_m} \ge w_{t:t+L-1}^{j_{m-1} j_{m-1}} w_{t:t+L-1}^{j_{m-1} j_m} + w_{t:t+L-1}^{j_{m} j_{m-1}} w_{t:t+L-1}^{j_{m} j_m} \ge \beta^{2L}. \end{align} Then, from Jensen's inequality, \eqref{eq:lemma1_2_jointly} and \eqref{eq:lemma1_21_jointly}, we have \begin{align} \label{eq:lemma1_3_jointly}
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2
& \ge \pi_{\min} \sum_{i=1}^N \sum_{m=1}^p w_{t:t+L-1}^{ij_{m-1}} w_{t:t+L-1}^{ij_m} ( Y_{t}^{j_{m-1}} - Y_{t}^{j_m} )^2 \nonumber \\
& \ge \frac{\pi_{\min} \beta^{2L}}{p} ( Y_{t}^{j^*} - Y_{t}^{l^*} )^2 = \frac{\pi_{\min} \beta^{2L}}{ \delta_t} ( \theta_{t}^{j^*} - \theta_{t}^{l^*} )^2. \end{align} For the case when $K > 1$, let $Y_{t}^{ik}$ be the $k$-th entry of vector $Y_{t}^i$. Then, \begin{align*}
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2 = \sum_{k=1}^K \sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il} (Y_{t}^{jk} - Y_{t}^{lk})^2 . \end{align*} For each entry $k$, we have \begin{align} \label{eq:lemma1_4_jointly}
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il} (Y_{t}^{jk} - Y_{t}^{lk})^2 \ge \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \max_{1\le j,l \le N} (\theta_{t}^{jk} - \theta_{t}^{lk})^2 , \end{align} where $\theta_t^{ik}$ is the $k$-th entry of vector $\theta_t^{i}$. Moreover, let $\Theta_t^{\bm{\cdot} k}$ be the $k$-th column of matrix $\Theta_t$. Since $2 x_1 x_2 \le x_1^2 + x_2^2$, we have for any $k \in\{1,\ldots, K\}$, \begin{align*}
\sum_{i=1}^N \pi_{t}^i( Y_{t}^{ik} )^2 &= \sum_{i=1}^N \pi_t^i \| \theta_t^{ik} - \pi_t^\top \Theta_t^{\bm{\cdot} k} \|_2^2
\le \max_{1\le i \le N} \left( \theta_t^{ik} - \pi_t^\top \Theta_t^{\bm{\cdot} k} \right)^2 = \max_{1\le i \le N} \left( \pi_t^\top (\mathbf{1}_N \theta_t^{ik} - \Theta_t^{\bm{\cdot} k}) \right)^2 \nonumber\\
& = \max_{1\le i \le N} \Big( \sum_{j=1}^N \pi_t^j ( \theta_t^{ik} - \theta_t^{jk} ) \Big)^2 = \max_{1\le i \le N} \sum_{j=1}^N \sum_{l=1}^N \pi_t^j \pi_t^l ( \theta_t^{ik} - \theta_t^{jk} )( \theta_t^{ik} - \theta_t^{lk} ) \nonumber\\
& \le \max_{1\le i \le N} \sum_{j=1}^N \sum_{l=1}^N \pi_t^j \pi_t^l \, \frac{( \theta_t^{ik} - \theta_t^{jk} )^2 + ( \theta_t^{ik} - \theta_t^{lk} )^2}{2} = \max_{1\le i \le N} \sum_{j=1}^N \pi_t^j ( \theta_t^{ik} - \theta_t^{jk} )^2
\le \max_{1\le i \le N} \max_{1\le j \le N} ( \theta_t^{ik} - \theta_t^{jk} )^2. \end{align*}
Then, combining this inequality with \eqref{eq:lemma1_3_jointly} and \eqref{eq:lemma1_4_jointly}, we have \begin{align*}
& \;\;\; \sum_{k=1}^K \sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il} (Y_{t}^{jk} - Y_{t}^{lk})^2 \\
& \ge \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \sum_{k=1}^K \max_{1\le j,l \le N} (\theta_t^{jk} - \theta_t^{lk})^2 \ge \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \sum_{k=1}^K \sum_{i=1}^N \pi_{t}^i( Y_t^{ik} )^2 = \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i} \|_2^2. \end{align*}
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:bound_consensus_jointly}
Suppose that Assumptions~\ref{assum:weighted matrix} and \ref{assum:A and b} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$. Then, when $\alpha \in (0, \zeta_1)$, we have for all $t \ge \tau(\alpha)$, $$
\sum_{i=1}^N \pi_{t}^i \| \theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 \le \epsilon^{q_{t}} \sum_{i=1}^N \pi_{m_t}^i \| \theta_{m_t}^i - \langle \theta \rangle_{m_t} \|_2^2 + \frac{\zeta_2}{1- \epsilon}, $$ where $\zeta_1$ is defined in Appendix~\ref{sec:constants}, $ \epsilon$ and $\zeta_2$ are defined in \eqref{eq:define epsilon_jointly} and \eqref{eq:define Psi10}, respectively. \end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:bound_consensus_jointly}:} Let $M_t = {\rm diag}{(\pi_t)}$. From \eqref{eq:xxx},
\begin{align}
\| Y_{t+L} \|_{M_{t+L}}^2
&= \sum_{i=1}^N \pi_{t+L}^i \Big\| \Big(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \Big) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \Big\|_2^2 \label{eq:matrix_1_jointly}\\
&\;\;\; + \alpha^2 \sum_{i=1}^N \pi_{t+L}^i \| \hat b_{t+L}^i\|_2^2 \label{eq:matrix_2_jointly}\\
& \;\;\; + 2 \alpha \sum_{i=1}^N \pi_{t+L}^i
(\hat b_{t+L}^i)^\top \left(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j. \label{eq:matrix_3_jointly} \end{align} We derive bounds for \eqref{eq:matrix_1_jointly}--\eqref{eq:matrix_3_jointly} separately.
For \eqref{eq:matrix_1_jointly}, since $ 2 (x_1)^\top x_2 = \|x_1\|_2^2+\|x_2\|_2^2 - \| x_1 - x_2 \|_2^2 $ and $\pi_{t}^\top = \pi_{t+L}^\top W_{t:t+L-1}$, we have \begin{align}
&\;\;\; \sum_{i=1}^N \pi_{t+L}^i \Big\| \Big(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \Big) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \Big\|_2^2 \le ( 1 + \alpha A_{\max})^{2L} \sum_{i=1}^N \pi_{t+L}^i \Big\| \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \Big\|_2^2 \nonumber\\
& = (1 + \alpha A_{\max})^{2L} \sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\frac{1}{2} \left( \|Y_{t}^j\|_2^2+\|Y_{t}^l\|_2^2 - \| Y_{t}^j - Y_{t}^l \|_2^2 \right) \nonumber \\
& = ( 1 + \alpha A_{\max})^{2L} \Big( \sum_{i=1}^N \pi_{t}^i \|Y_{t}^i\|_2^2 - \frac{1}{2} \sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2 \Big) \nonumber. \end{align} From Lemma~\ref{lemma:lower_bound_jointly}, $
\sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{k=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{ik}\| Y_{t}^j - Y_{t}^k \|_2^2 \ge \frac{\pi_{\min} \beta^{2L}}{\delta_{\max}} \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2, $ which implies that \begin{align}
\sum_{i=1}^N \pi_{t+L}^i \Big\| \Big(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \Big) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \Big\|_2^2 \le ( 1 + \alpha A_{\max})^{2L} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2. \label{eq:matrix_proof_1_jointly} \end{align}
For \eqref{eq:matrix_2_jointly}, since $\| b^i(X_t) - B^\top(X_t) \pi_{t+1} \|_2 \le 2 b_{\max}$ for all $i$, \begin{align*}
\| \hat b_{t+L}^i\|_2
& \le \| (b^i(X_{t+L-1}) - B(X_{t+L-1})^\top \pi_{{t+L}}) \|_2 \\
& \;\;\; + \sum_{k=t}^{t+L-2} \Big\|\Big(\Pi_{j=k+1}^{t+L-1} ( I + \alpha A(X_{j})) \Big) \Big\|_2 \sum_{j=1}^N w_{k+1:t+L-1}^{ij} \| (b^j(X_{k}) - B(X_{k})^\top \pi_{{k+1}})\|_2 \\
& \le 2 b_{\max} \sum_{j=0}^{L-1} ( 1 + \alpha A_{\max})^j \le 2 b_{\max} ( 1 + \alpha A_{\max})^{L-1} \sum_{j=0}^{L-1} \frac{1}{( 1 + \alpha A_{\max})^j} \\
& \le 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{\alpha A_{\max}}, \end{align*} which implies that \begin{align} \label{eq:eq:matrix_proof_21_jointly}
\alpha^2 \sum_{i=1}^N \pi_{t+L}^i \| \hat b_{t+L}^i\|_2^2
& \le \frac{4 b_{\max}^2}{ A_{\max}^2}\left((1 + \alpha A_{\max})^L-1\right)^2. \end{align}
For \eqref{eq:matrix_3_jointly}, since $2\| x \|_2 \le 1+\|x\|_2^2 $ holds for any vector $x$, \begin{align} \label{eq:matrix_proof_3_jointly}
&\;\;\;\; 2 \alpha \sum_{i=1}^N \pi_{t+L}^i
(\hat b_{t+L}^i)^\top \left(\Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \nonumber\\
& \le 2 \alpha \sum_{i=1}^N \pi_{t+L}^i
\| \hat b_{t+L}^i\|_2 \| \Pi_{k=t}^{t+L-1}( I + \alpha A(X_k)) \|_2 \sum_{j=1}^N w_{t:t+L-1}^{ij} \| Y_{t}^j \|_2 \nonumber \\
& \le 4 \alpha b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{\alpha A_{\max}} (1 + \alpha A_{\max})^{L} \sum_{i=1}^N \pi_{t}^i \| Y_{t}^i \|_2 \nonumber \\
& \le 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{ A_{\max}} (1 + \alpha A_{\max})^{L}\Big( \sum_{i=1}^N \pi_{t}^i \| Y_{t}^i \|_2^2+1 \Big). \end{align} From \eqref{eq:matrix_proof_1_jointly}--\eqref{eq:matrix_proof_3_jointly}, we have \begin{align*}
\| Y_{t+L} \|_{M_{t+L}}^2
& \le ( 1 + \alpha A_{\max})^{2L} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2 + \frac{4 b_{\max}^2}{ A_{\max}^2}\left((1 + \alpha A_{\max})^L-1\right)^2 \nonumber \\
& \;\;\; + 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{ A_{\max}} (1 + \alpha A_{\max})^{L} \Big( \sum_{i=1}^N \pi_{t}^i \| Y_{t}^i \|_2^2+1 \Big) \nonumber \\
& = \bigg( ( 1 + \alpha A_{\max})^{2L}\Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) + 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{ A_{\max}} (1 + \alpha A_{\max})^{L} \bigg) \| Y_{t} \|_{M_{t}}^2 \nonumber \\
& \;\;\; + \frac{4 b_{\max}^2}{ A_{\max}^2}\left((1 + \alpha A_{\max})^L-1\right)^2
+ 2 b_{\max} \frac{(1 + \alpha A_{\max})^L-1}{ A_{\max}} (1 + \alpha A_{\max})^{L}. \nonumber \end{align*} From Lemma~\ref{lemma:ration_in_0_1}, $0 < \epsilon <1$ when $ 0 < \alpha < \zeta_1$. With the definition of $ \epsilon$ and $\zeta_2$ in \eqref{eq:define epsilon_jointly} and \eqref{eq:define Psi10}, \begin{align*}
\| Y_{t+L} \|_{M_{t+L}}^2
\le \epsilon \| Y_{t} \|_{M_{t}}^2 + \zeta_2 \le \epsilon^{q_{t+L}} \| Y_{m_t}\|_{M_{m_t}}^2 + \zeta_2 \sum_{k=0}^{q_{t+L}-1} \epsilon^k
\le \epsilon^{q_{t+L}} \| Y_{m_t}\|_{M_{m_t}}^2 + \frac{\zeta_2}{1- \epsilon}, \end{align*} which implies that $$
\sum_{i=1}^N \pi_{t}^i \| \theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 \le \epsilon^{q_{t}} \sum_{i=1}^N \pi_{m_t}^i \| \theta_{m_t}^i - \langle \theta \rangle_{m_t} \|_2^2 + \frac{\zeta_2}{1- \epsilon}, $$ where $q_{t}$ and $m_t$ are defined in Theorem~\ref{thm:bound_jointly_SA}. This completes the proof.
$ \rule{.08in}{.08in}$
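To get a feel for the bound in Lemma~\ref{lemma:bound_consensus_jointly}, consider purely illustrative numbers (our own choice, not tied to any particular instance): if $\epsilon=0.5$, $\zeta_2=1$, and the initial weighted consensus error equals $10$, then after $q_t$ blocks of length $L$ the error is at most $0.5^{q_t}\cdot 10+2$, i.e., it contracts geometrically down to the floor $\frac{\zeta_2}{1-\epsilon}=2$. Moreover, this floor vanishes as $\alpha\to 0$, since the constant terms collected into $\zeta_2$ in the derivation above vanish as $\alpha\to 0$, while $\epsilon$ stays bounded away from $1$.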
\begin{lemma} \label{lemma:fixed_single_3}
Suppose that Assumptions~\ref{assum:A and b} and \ref{assum:mixing-time} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected. If the step-size $\alpha$ and the corresponding mixing time $\tau(\alpha)$ satisfy
$
0< \alpha\tau(\alpha) < \frac{\log2}{A_{\max}}$,
then for any $t \ge \tau(\alpha)$, \begin{align}
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} \|_2 & \le 2 \alpha A_{\max} \tau(\alpha) \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + 2 \alpha \tau(\alpha) b_{\max} \label{eq:fixed_single_3_1}\\
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} \|_2 & \le 6 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t} \|_2 + 5 \alpha \tau(\alpha) b_{\max} \label{eq:fixed_single_3_2}\\
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 & \le 72 \alpha^2 \tau^2(\alpha) A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 \alpha^2 \tau^2(\alpha) b_{\max}^2 \le 8 \| \langle \theta \rangle_{t} \|_2^2 + \frac{6b_{\max}^2}{A_{\max}^2}. \label{eq:fixed_single_3_3} \end{align}
\end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:fixed_single_3}:} With $\alpha_t = \alpha$ for all $t\ge 0$, the update of $\langle \theta \rangle_t$ in \eqref{eq:update of average_time-varying} becomes $
\langle \theta \rangle_{t+1} = \langle \theta \rangle_t + \alpha A(X_t) \langle \theta \rangle_t + \alpha B(X_t)^\top\pi_{t+1}. $ Then, $$
\| \langle \theta \rangle_{t+1} \|_2 \le \| \langle \theta \rangle_t \|_2 + \alpha A_{\max} \| \langle \theta \rangle_t \|_2 + \alpha b_{\max}
\le (1+\alpha A_{\max}) \| \langle \theta \rangle_t \|_2 + \alpha b_{\max}. $$ Using $(1+x)\le \exp(x)$, we have for $u \in [t-\tau(\alpha), t]$, \begin{align*}
\| \langle \theta \rangle_{u} \|_2
& \le (1+\alpha A_{\max})^{u-t+\tau(\alpha)} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + \alpha b_{\max} \sum_{l = t-\tau(\alpha)}^{u-1} (1+\alpha A_{\max})^{u-1-l} \\
& \le (1+\alpha A_{\max})^{\tau(\alpha)} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + \alpha b_{\max} \sum_{l = t-\tau(\alpha)}^{u-1} (1+\alpha A_{\max})^{u-1-t+\tau(\alpha)} \\
& \le \exp(\alpha \tau(\alpha) A_{\max}) \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + \alpha \tau(\alpha) b_{\max} \exp(\alpha \tau(\alpha) A_{\max}). \end{align*} Since $\alpha \tau(\alpha) A_{\max} \le \log2 < \frac{1}{3}$, $\exp(\alpha \tau(\alpha) A_{\max}) \le 2$, which implies that $
\| \langle \theta \rangle_{u} \|_2
\le 2 \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + 2 \alpha \tau(\alpha) b_{\max}. $ Thus, we can use this to prove \eqref{eq:fixed_single_3_1} for all $t\ge \tau(\alpha)$, i.e., \begin{align*}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha)} \|_2
& \le \sum_{u=t-\tau(\alpha)}^{t-1} \| \langle \theta \rangle_{u+1} - \langle \theta \rangle_{u} \|_2 \le \alpha A_{\max} \sum_{u=t-\tau(\alpha)}^{t-1} \| \langle \theta \rangle_{u} \|_2 + \alpha \tau(\alpha) b_{\max} \\
& \le \alpha A_{\max} \sum_{u=t-\tau(\alpha)}^{t-1} \left( 2 \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + 2 \alpha \tau(\alpha) b_{\max} \right) + \alpha \tau(\alpha) b_{\max} \\
& \le 2 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + 2 \alpha^2 \tau^2(\alpha) A_{\max} b_{\max} + \alpha \tau(\alpha) b_{\max} \\
& \le 2 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + \frac{5}{3} \alpha \tau(\alpha) b_{\max} \\
& \le 2 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + 2 \alpha \tau(\alpha) b_{\max}. \end{align*} Moreover, we can prove \eqref{eq:fixed_single_3_2} using the equation above for all $t\ge \tau(\alpha)$ as follows: \begin{align*}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha)} \|_2
& \le 2 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 + \frac{5}{3} \alpha \tau(\alpha) b_{\max} \\
& \le \frac{2}{3} \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha)} \|_2 + 2 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t} \|_2 + \frac{5}{3} \alpha \tau(\alpha) b_{\max} \\
& \le 6 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t} \|_2 + 5 \alpha \tau(\alpha) b_{\max}. \end{align*} Next, using the inequality $(x+y)^2 \le 2x^2 + 2 y^2$ for all $x, y$, we can show \eqref{eq:fixed_single_3_3} using \eqref{eq:fixed_single_3_2}, i.e., \begin{align*}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha)} \|_2^2
\le 72 \alpha^2 \tau^2(\alpha) A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 \alpha^2 \tau^2(\alpha) b_{\max}^2
\le 8 \| \langle \theta \rangle_{t} \|_2^2 + \frac{ 6b_{\max}^2}{A_{\max}^2}, \end{align*} where we use $\alpha \tau(\alpha) A_{\max} < \frac{1}{3}$ in the last inequality.
$ \rule{.08in}{.08in}$
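Spelling out the arithmetic behind the last step of \eqref{eq:fixed_single_3_3}: since $\alpha\tau(\alpha)A_{\max}<\frac{1}{3}$, we have $72\alpha^2\tau^2(\alpha)A_{\max}^2<\frac{72}{9}=8$ and $50\alpha^2\tau^2(\alpha)b_{\max}^2=50\alpha^2\tau^2(\alpha)A_{\max}^2\cdot\frac{b_{\max}^2}{A_{\max}^2}<\frac{50}{9}\cdot\frac{b_{\max}^2}{A_{\max}^2}\le\frac{6b_{\max}^2}{A_{\max}^2}$.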
\begin{lemma} \label{lemma:bound_fixed_Ab}
Let $\mathcal{F}_t = \sigma( X_k,\; k\le t )$ denote the $\sigma$-algebra generated by $\{X_k\}_{k\le t}$.
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:lyapunov} and~\ref{assum:limit_pi} hold. If $\{ \mathbb{G}_t \}$ is uniformly strongly connected and
$
0< \alpha < \frac{ \log2}{A_{\max} \tau(\alpha)}, $
then for any $t \ge \tau(\alpha)$,
\begin{align*}
& \;\;\;\; \left|\mathbf{E} \left[ (\langle \theta \rangle_t - \theta^* )^\top (P+P^\top) \big( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A\langle \theta \rangle_t - b\big) \;|\; \mathcal{F}_{t-\tau(\alpha)} \right]\right| \nonumber \\
& \le \alpha \gamma_{\max} \left( 72 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \mathbf{E}\left[ \| \langle \theta \rangle_{t} \|_2^2 \;|\; \mathcal{F}_{t-\tau(\alpha)} \right] \nonumber \\
&\;\;\; + \alpha \gamma_{\max} \bigg[ 2 + 4 \|\theta^* \|_2^2 + \frac{48b_{\max}^2}{A_{\max}^2} + \tau(\alpha) A_{\max}^2 \bigg(152 \Big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \Big)^2 + \frac{48b_{\max}}{A_{\max}} \Big(\frac{b_{\max}}{A_{\max}} + 1 \Big)^2 \nonumber\\
&\;\;\; + \frac{87b_{\max}^2}{A_{\max}^2} + \frac{12b_{\max}}{A_{\max}} \bigg)\bigg] + 2 \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \;|\; \mathcal{F}_{t-\tau(\alpha)} ] + \frac{6b_{\max}^2}{A_{\max}^2}+ \| \theta^* \|_2^2 \Big).
\end{align*} \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_fixed_Ab}:} Note that for $t\ge\tau(\alpha)$, we have \begin{align}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber \\
& \le |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha)} \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \label{eq:fixed_bound_Ab_1} \\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \label{eq:fixed_bound_Ab_2} \\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)})^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha)} \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \label{eq:fixed_bound_Ab_3}\\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)})^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \label{eq:fixed_bound_Ab_4}\\
&\;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)})^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \label{eq:fixed_bound_Ab_5}\\
&\;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]|\label{eq:fixed_bound_Ab_6}. \end{align} We derive bounds for \eqref{eq:fixed_bound_Ab_1}--\eqref{eq:fixed_bound_Ab_6} separately.
First, using the mixing time in Assumption~\ref{assum:mixing-time}, we can get the bounds for \eqref{eq:fixed_bound_Ab_1} and \eqref{eq:fixed_bound_Ab_6} for $t\ge\tau(\alpha)$ as follows: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha)} \; | \; \mathcal{F}_{t-\tau(\alpha)} ]|\nonumber\\
& \le |( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top) \mathbf{E}[A(X_t) - A \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \langle \theta \rangle_{t-\tau(\alpha)} | \nonumber\\
& \le 2 \alpha \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* \|_2 \| \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
& \le \alpha \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* \|_2^2 + \| \langle \theta \rangle_{t-\tau(\alpha)}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber \\
& \le \alpha \gamma_{\max} \mathbf{E}[ 2 \|\theta^* \|_2^2 + 3 \| \langle \theta \rangle_{t-\tau(\alpha)}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber \\
& \le 6 \alpha \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha)}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 6 \alpha \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 2 \alpha \gamma_{\max} \|\theta^* \|_2^2 \nonumber \\
& \le 54 \alpha \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 36 \alpha \gamma_{\max} (\frac{b_{\max}}{A_{\max}})^2 + 2 \alpha \gamma_{\max} \|\theta^* \|_2^2, \label{eq:fixed_bound_Ab_1_bounded} \end{align} where in the last inequality we use \eqref{eq:fixed_single_3_3} from Lemma~\ref{lemma:fixed_single_3}. Then, from the definition of $\pi_{\infty}$ in Assumption~\ref{assum:limit_pi}, \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]|\nonumber\\
& \le |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)(\sum_{i=1}^N \pi_{t+1}^i(b^i(X_t) - b^i ) + \sum_{i=1}^N (\pi_{t+1}^i - \pi_{\infty}^i) b^i ) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]|\nonumber\\
& \le | ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)(\sum_{i=1}^N \pi_{t+1}^i \mathbf{E}[ b^i(X_t) - b^i \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + \sum_{i=1}^N (\pi_{t+1}^i - \pi_{\infty}^i) b^i ) |\nonumber\\
& \le 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]\nonumber\\
& \le 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \left( \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + \| \theta^* \|_2 \right) \nonumber\\
& \le 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \big( 1 + \frac{1}{2} \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + \frac{1}{2} \| \theta^* \|_2^2 \big) \nonumber\\
& \le 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \left( 1 + \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 + \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + \| \theta^* \|_2^2 \right) \nonumber\\
& \le 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \big),
\label{eq:fixed_bound_Ab_6_bounded} \end{align} where we also use \eqref{eq:fixed_single_3_3} from Lemma~\ref{lemma:fixed_single_3} in the last inequality.
Next, using Assumption~\ref{assum:A and b}, \eqref{eq:fixed_single_3_1} and \eqref{eq:fixed_single_3_3}, we have \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} ) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber\\
&\le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
&\le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2 + \| \theta^* \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
&\le 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \| \theta^* \|_2 \nonumber\\
& \;\;\; + 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \big(\frac{ b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
&\le 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \| \theta^* \|_2 \nonumber\\
& \;\;\; + 4 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]
+ 4 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \big(\frac{ b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2, \nonumber
\end{align} which implies that \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha)} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} ) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber\\
&\le 12 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 8 \alpha \tau(\alpha) \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\le 24 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 8 \alpha \tau(\alpha) \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
& \;\;\; + 24 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
&\le 216 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 144 \alpha \tau(\alpha) \gamma_{\max} b_{\max}^2
+ 8 \alpha \tau(\alpha) \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\le 216 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 152 \alpha \tau(\alpha) \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2.
\label{eq:fixed_bound_Ab_2_bounded} \end{align} In additional, using \eqref{eq:fixed_single_3_1} and \eqref{eq:fixed_single_3_3}, we have \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha)} \; | \; \mathcal{F}_{t-\tau(\alpha)} ]|\nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \| \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]\nonumber\\
& \le 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max} \mathbf{E}[ A_{\max } \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 + b_{\max} \| \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]\nonumber\\
& \le 4 \alpha \tau(\alpha) \gamma_{\max} A_{\max} (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 4 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \nonumber\\
& \le 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max} (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha)} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
&\;\;\; + 8 \alpha \tau(\alpha) \gamma_{\max} A_{\max} (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 4 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \nonumber \\
& \le 72 \alpha \tau(\alpha) \gamma_{\max} A_{\max} (2 A_{\max } + b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 48 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2.
\label{eq:fixed_bound_Ab_3_bounded} \end{align} Moreover, we can get the bound for \eqref{eq:fixed_bound_Ab_4} using \eqref{eq:fixed_single_3_3} as follows: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} ) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ 72 \alpha^2 \tau^2(\alpha) A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 \alpha^2 \tau^2(\alpha) b_{\max}^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber\\
& \le 96 \alpha \tau(\alpha) A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 67 \alpha \tau(\alpha) b_{\max}^2 \gamma_{\max}.
\label{fixed_bound_Ab_4_bounded} \end{align} Finally, using \eqref{eq:fixed_single_3_2} we can get the bound for \eqref{eq:fixed_bound_Ab_5}: \begin{align}
&\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)} )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ \|\langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha)}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ 6 \alpha \tau(\alpha) A_{\max} \| \langle \theta \rangle_{t} \|_2 + 5 \alpha \tau(\alpha) b_{\max} \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber \\
& \le 12 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \mathbf{E}[\| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 12 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} + 20 \alpha \tau(\alpha) b_{\max}^2 \gamma_{\max}.
\label{eq:fixed_bound_Ab_5_bounded} \end{align} Then, using \eqref{eq:fixed_bound_Ab_1_bounded}--\eqref{eq:fixed_bound_Ab_5_bounded}, we have \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t^\top - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber \\
& \le 54 \alpha \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 36 \alpha \gamma_{\max} (\frac{b_{\max}}{A_{\max}})^2 + 2 \alpha \gamma_{\max} \|\theta^* \|_2^2+ 12 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \nonumber \\
&\;\;\;+ 216 \alpha \tau(\alpha) \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 152 \alpha \tau(\alpha) \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2\nonumber \\
& \;\;\;+ 72 \alpha \tau(\alpha) \gamma_{\max} A_{\max} (2 A_{\max } + b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 48 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2 \nonumber\\
& \;\;\;+ 96 \alpha \tau(\alpha) A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 12 \alpha \tau(\alpha) \gamma_{\max} A_{\max} b_{\max} \mathbf{E}[\| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ]\nonumber \\
& \;\;\;+ 2 \gamma_{\max} (\alpha + \eta_{t+1}\sqrt{N}b_{\max}) \big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \big) + 87 \alpha \tau(\alpha) b_{\max}^2 \gamma_{\max},\nonumber
\end{align*} which implies that \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t^\top - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha)} ]| \nonumber \\
& \le \alpha \gamma_{\max} \left( 72 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] \nonumber \\
&\;\;\; + \alpha \gamma_{\max} \Big[ 2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + \tau(\alpha) A_{\max}^2 \Big(152 \big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2 + 48 \frac{b_{\max}}{A_{\max}} (\frac{b_{\max}}{A_{\max}} + 1 )^2 \nonumber\\
&\;\;\; + 87 (\frac{b_{\max}}{A_{\max}})^2 + 12 \frac{b_{\max}}{A_{\max}} \Big)\Big] + 2 \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha)} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \Big). \end{align*} This completes the proof.
$ \rule{.08in}{.08in}$
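It is worth noting the structure of the bound in Lemma~\ref{lemma:bound_fixed_Ab}: it consists of an $O(\alpha)$ multiple of $\mathbf{E}[\|\langle\theta\rangle_t\|_2^2\,|\,\mathcal{F}_{t-\tau(\alpha)}]$, an $O(\alpha)$ constant term stemming from the Markovian mixing error, and an $O(\eta_{t+1})$ term stemming from the distance between $\pi_{t+1}$ and its limit $\pi_{\infty}$. This is precisely the decomposition exploited in the proof of Lemma~\ref{lemma:bound_average} below.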
\begin{lemma} \label{lemma:bound_average}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:lyapunov} and \ref{assum:limit_pi} hold. Then, when
$
0< \alpha < \min\{\frac{ \log2}{A_{\max} \tau(\alpha)},\; \frac{0.1}{K_2 \gamma_{\max}}\}, $
we have for any $t \ge T_1$,
\begin{align*}
\mathbf{E}[\|\langle \theta \rangle_{t+1} -\theta^* \|_2^2 ]
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}\left[ \| \langle \theta \rangle_{T_1} - \theta^* \|_2^2 \right] + \frac{\alpha \zeta_3 \gamma_{\max}^2}{0.9 \gamma_{\min} } \\
&\;\;\; + \frac{\gamma_{\max}}{\gamma_{\min}} \alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k} \\
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} ( 4 \exp\left\{ 2 \alpha A_{\max}T_1 \right\}+2) \mathbf{E}[\|\langle \theta \rangle_{0} -\theta^* \|_2^2] \nonumber\\
&\;\;\; + 4 \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} \exp\left\{ 2\alpha A_{\max}T_1 \right\} \Big( \|\theta^*\|_2 + \frac{b_{\max}}{A_{\max}} \Big)^2 \\
& \;\;\; + \frac{\alpha \zeta_3 \gamma_{\max}^2}{0.9 \gamma_{\min} } + \frac{\gamma_{\max}}{\gamma_{\min}} \alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k}, \end{align*} where $\zeta_3$, $\zeta_4$ and $K_2$ are defined in \eqref{eq:define Psi4}, \eqref{eq:define Psi5} and \eqref{eq:define Psi3}, respectively. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_average}:} Let $H(\langle \theta \rangle_t ) = ( \langle \theta \rangle_t - \theta^* )^\top P ( \langle \theta \rangle_t - \theta^* ) $. From Assumption~\ref{assum:lyapunov}, $$
\gamma_{\min} \| \langle \theta \rangle_t - \theta^* \|_2^2 \le H(\langle \theta \rangle_t ) \le \gamma_{\max} \| \langle \theta \rangle_t - \theta^* \|_2^2. $$ Moreover, from Assumption~\ref{assum:A and b}, we have for all $t\ge 0$, \begin{align}
&\;\;\;\; H( \langle \theta \rangle_{t+1} ) = ( \langle \theta \rangle_{t+1} - \theta^* )^\top P ( \langle \theta \rangle_{t+1} - \theta^* ) \nonumber\\
& = ( \langle \theta \rangle_t + \alpha A(X_t) \langle \theta \rangle_t + \alpha B(X_t)^\top\pi_{t+1} - \theta^* )^\top P (\langle \theta \rangle_t + \alpha A(X_t) \langle \theta \rangle_t + \alpha B(X_t)^\top\pi_{t+1} - \theta^* ) \nonumber\\
& = ( \langle \theta \rangle_t - \theta^* )^\top P (\langle \theta \rangle_t - \theta^* ) + \alpha^2 ( A(X_t) \langle \theta \rangle_t )^\top P ( A(X_t) \langle \theta \rangle_t ) \nonumber \\
& \;\;\;\; + \alpha^2 (B(X_t)^\top\pi_{t+1})^\top P (B(X_t)^\top\pi_{t+1}) + \alpha^2 ( A(X_t) \langle \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1}) \nonumber\\
& \;\;\;\; + \alpha ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A\langle \theta \rangle_t - b) \nonumber\\
& \;\;\;\; + \alpha ( \langle \theta \rangle_t - \theta^* )^\top P( A\langle \theta \rangle_t + b) + \alpha ( A\langle \theta \rangle_t + b)^\top P( \langle \theta \rangle_t - \theta^* ) \nonumber\\
& = H( \langle \theta \rangle_t ) + \alpha^2 ( A(X_t) \langle \theta \rangle_t )^\top P ( A(X_t) \langle \theta \rangle_t ) + \alpha ( \langle \theta \rangle_t - \theta^* )^\top (PA+A^\top P ) (\langle \theta \rangle_t -\theta^*) \nonumber \\
& \;\;\;\; + \alpha^2 (B(X_t)^\top\pi_{t+1})^\top P (B(X_t)^\top\pi_{t+1}) + \alpha^2 ( A(X_t) \langle \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1}) \nonumber\\
& \;\;\;\; + \alpha ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \label{eq:fixed_proof_1}, \end{align} where we use the fact that $A\theta^* +b =0 $ in the last equality.
Next, we can take expectation on both sides of \eqref{eq:fixed_proof_1}. From Assumption~\ref{assum:lyapunov} and Lemma~\ref{lemma:bound_fixed_Ab}, we have for $t\ge T_1$, \begin{align}
&\;\;\;\; \mathbf{E}[H( \langle \theta \rangle_{t+1} )] \nonumber\\
& = \mathbf{E}[H( \langle \theta \rangle_t )] + \alpha^2 \mathbf{E}[( A(X_t) \langle \theta \rangle_t )^\top P ( A(X_t) \langle \theta \rangle_t )] - \alpha \mathbf{E}[\| \langle \theta \rangle_t - \theta^* \|_2^2] \nonumber \\
& \;\;\;\; + \alpha^2 \mathbf{E}[(B(X_t)^\top\pi_{t+1})^\top P (B(X_t)^\top\pi_{t+1})] + \alpha^2 \mathbf{E}[( A(X_t) \langle \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1})] \nonumber\\
& \;\;\;\; + \alpha \mathbf{E}[( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A\langle \theta \rangle_t - b)] \nonumber\\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - \alpha\mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + \alpha^2 A_{\max}^2 \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2^2 ] + 2 \alpha^2 A_{\max} b_{\max} \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2 ] \nonumber\\
& \;\;\; + \alpha^2 b_{\max}^2 \gamma_{\max} + \alpha^2 \gamma_{\max} \left( 72 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 ] \nonumber \\
&\;\;\; + \alpha^2 \gamma_{\max} \Big[ 2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + \tau(\alpha) A_{\max}^2 \Big(152 \big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2 + 48 \frac{b_{\max}}{A_{\max}} (\frac{b_{\max}}{A_{\max}} + 1 )^2 \nonumber\\
&\;\;\; + 87 (\frac{b_{\max}}{A_{\max}})^2 + 12 \frac{b_{\max}}{A_{\max}} \Big)\Big] + 2 \alpha \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \Big) \nonumber \\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - \alpha\mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + 2 \alpha^2 A_{\max}^2 \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2^2 ] + 2 \alpha^2 b_{\max}^2 \gamma_{\max} \nonumber\\
& \;\;\; + \alpha^2 \gamma_{\max} \left( 72 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 ] \nonumber \\
&\;\;\; + \alpha^2 \gamma_{\max} \Big[ 2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + \tau(\alpha) A_{\max}^2 \Big(152 \big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2 + 48 \frac{b_{\max}}{A_{\max}} (\frac{b_{\max}}{A_{\max}} + 1 )^2 \nonumber\\
&\;\;\; + 87 (\frac{b_{\max}}{A_{\max}})^2 + 12 \frac{b_{\max}}{A_{\max}} \Big)\Big] + 2 \alpha \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \Big). \nonumber \end{align}
Since $ \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 ] \le 2 \mathbf{E}[ \| \langle \theta \rangle_{t}- \theta^* \|_2^2 ] + 2 \| \theta^*\|_2^ 2 $, we have \begin{align}
&\;\;\;\; \mathbf{E}[H( \langle \theta \rangle_{t+1} )] \nonumber\\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - \alpha\mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + 2 \alpha^2 b_{\max}^2 \gamma_{\max} \nonumber\\
& \;\;\; + \alpha^2 \gamma_{\max} \left( 72 + 2 A_{\max}^2 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) (2 \mathbf{E}[ \| \langle \theta \rangle_{t}- \theta^* \|_2^2 ] + 2 \| \theta^*\|_2^ 2) \nonumber \\
&\;\;\; + \alpha^2 \gamma_{\max} \Big[ 2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + \tau(\alpha) A_{\max}^2 \Big(152 \big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2 + 48 \frac{b_{\max}}{A_{\max}} (\frac{b_{\max}}{A_{\max}} + 1 )^2 \nonumber\\
&\;\;\; + 87 (\frac{b_{\max}}{A_{\max}})^2 + 12 \frac{b_{\max}}{A_{\max}} \Big)\Big] + 2 \alpha \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 18 \mathbf{E}[ \| \langle \theta \rangle_{t}- \theta^* \|_2^2 ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ 19 \| \theta^* \|_2^2 \Big) \nonumber \\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] + \left( - \alpha + 2\alpha^2 \gamma_{\max} \left( 72 + 2 A_{\max}^2 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \right) \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] \nonumber\\
& \;\;\; + 2 \alpha^2 \gamma_{\max} \left( 72 + 2 A_{\max}^2 + 456 \tau(\alpha) A_{\max}^2 + 84 \tau(\alpha) A_{\max} b_{\max} \right) \| \theta^*\|_2^2\nonumber \\
&\;\;\; + \alpha^2 \gamma_{\max} \Big[ 2 + 2 b_{\max}^2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + \tau(\alpha) A_{\max}^2 \Big(152 \big(\frac{b_{\max}}{A_{\max}} + \| \theta^* \|_2 \big)^2 \nonumber\\
&\;\;\; + 48 \frac{b_{\max}}{A_{\max}} (\frac{b_{\max}}{A_{\max}} + 1 )^2 + 87 (\frac{b_{\max}}{A_{\max}})^2 + 12 \frac{b_{\max}}{A_{\max}} \Big)\Big] \nonumber \\
& \;\;\;+ 2 \alpha \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 18 \mathbf{E}[ \| \langle \theta \rangle_{t}- \theta^* \|_2^2 ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ 19 \| \theta^* \|_2^2 \Big)
\nonumber \\
&\le \mathbf{E}[H( \langle \theta \rangle_t )] + (-\alpha + \alpha^2 \gamma_{\max} K_2 + 36\alpha \eta_{t+1}\sqrt{N}b_{\max} \gamma_{\max} )\mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + \alpha^2 \zeta_3 \gamma_{\max} + \alpha \gamma_{\max} \eta_{t+1}\zeta_4. \nonumber \end{align}
By Lemma~\ref{lemma:ration_in_0_1}, we know $1-\frac{0.9 \alpha}{\gamma_{\max}} \in (0,1)$. From the definition of $T_1$ and $\alpha < \frac{0.1}{K_2 \gamma_{\max}}$, we have
\begin{align*}
\mathbf{E}[H( \langle \theta \rangle_{t+1} )]
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - 0.9 \alpha \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + \alpha^2 \zeta_3 \gamma_{\max} + \alpha \gamma_{\max} \eta_{t+1}\zeta_4 \nonumber \\
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)\mathbf{E}[H( \langle \theta \rangle_t )] + \alpha^2 \zeta_3 \gamma_{\max} + \alpha \gamma_{\max} \eta_{t+1}\zeta_4 , \end{align*} which implies that \begin{align*}
\mathbf{E}[H( \langle \theta \rangle_{t+1} )]
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \mathbf{E}[H( \langle \theta \rangle_{T_1} )] + \alpha^2 \zeta_3 \gamma_{\max} \sum_{k={T_1}}^t \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t-k} \nonumber \\
&\;\;\; + \alpha \gamma_{\max} \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k} \nonumber \\
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \mathbf{E}[H( \langle \theta \rangle_{T_1} )] + \frac{\alpha \zeta_3 \gamma_{\max}^2}{0.9} + \alpha \gamma_{\max} \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k}. \end{align*} In addition, \begin{align}
&\;\;\;\; \mathbf{E}[\|\langle \theta \rangle_{t+1} -\theta^* \|_2^2 ]
\le \frac{1}{\gamma_{\min}} \mathbf{E}[H( \langle \theta \rangle_{t+1} )] \nonumber\\
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\| \langle \theta \rangle_{T_1} -\theta^* \|_2^2 ] + \frac{\alpha \zeta_3 \gamma_{\max}^2}{0.9 \gamma_{\min} } + \frac{\gamma_{\max}}{\gamma_{\min}} \alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k} . \label{eq:lemma_fix_1} \end{align}
Next, we consider the bound for $\mathbf{E}[\| \langle \theta \rangle_{T_1} -\theta^* \|_2^2 ]$. Since $1+x \le \exp \{x\}$ for any $x$, we have for any~$t$, \begin{align*}
\| \langle \theta \rangle_{t+1} - \langle \theta \rangle_{0} \|_2
& = \| \langle \theta \rangle_t - \langle \theta \rangle_{0} + \alpha A(X_t) (\langle \theta \rangle_t - \langle \theta \rangle_{0} ) + \alpha B(X_t)^\top\pi_{t+1} + \alpha A(X_t) \langle \theta \rangle_{0} \|_2 \nonumber \\
& \le (1+ \alpha A_{\max}) \| \langle \theta \rangle_t - \langle \theta \rangle_{0} \|_2 + \alpha\left( A_{\max} \|\langle \theta \rangle_{0} \|_2 + b_{\max} \right)\nonumber \\
& \le \alpha\left( A_{\max} \| \langle \theta \rangle_{0} \|_2 + b_{\max} \right) \sum_{l=0}^t\left(1+ \alpha A_{\max}\right)^l\nonumber \\
& \le \left( A_{\max} \|\langle \theta \rangle_{0} \|_2 + b_{\max} \right) \frac{\left(1+ \alpha A_{\max}\right)^{t+1}}{A_{\max}}\nonumber \\
& \le \Big( \|\langle \theta \rangle_{0} - \theta^* \|_2 + \|\theta^* \|_2 + \frac{b_{\max}}{A_{\max}} \Big) \exp\left\{ \alpha A_{\max}(t+1)\right\}, \end{align*} which implies that $
\| \langle \theta \rangle_{T_1} - \langle \theta \rangle_{0} \|_2
\le ( \|\langle \theta \rangle_{0} - \theta^* \|_2 + \|\theta^* \|_2 + \frac{b_{\max}}{A_{\max}} ) \exp\{ \alpha A_{\max}T_1\}. $ Then, \begin{align}\label{eq:lemma_fix_2}
&\;\;\;\; \mathbf{E}[\| \langle \theta \rangle_{T_1} - \theta^* \|_2^2 ]
\le 2 \| \langle \theta \rangle_{T_1} - \langle \theta \rangle_{0} \|_2^2 + 2 \| \langle \theta \rangle_{0} -\theta^* \|_2^2\nonumber\\
& \le ( 4 \exp\left\{ 2 \alpha A_{\max}T_1 \right\}+2) \mathbf{E}[ \|\langle \theta \rangle_{0} -\theta^* \|_2^2] + 4 \exp\left\{ 2\alpha A_{\max}T_1 \right\} \Big( \|\theta^*\|_2 + \frac{b_{\max}}{A_{\max}} \Big)^2. \end{align} From \eqref{eq:lemma_fix_1} and \eqref{eq:lemma_fix_2}, we have
\begin{align*}
\mathbf{E}[\|\langle \theta \rangle_{t+1} -\theta^* \|_2^2 ]
& \le \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} ( 4 \exp\left\{ 2 \alpha A_{\max}T_1 \right\}+2) \mathbf{E}[\|\langle \theta \rangle_{0} -\theta^* \|_2^2] \nonumber\\
&\;\;\; + 4 \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{t+1-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} \exp\left\{ 2\alpha A_{\max}T_1 \right\} \Big( \|\theta^*\|_2 + \frac{b_{\max}}{A_{\max}} \Big)^2 \\
& \;\;\; + \frac{\alpha \zeta_3 \gamma_{\max}^2}{0.9 \gamma_{\min} } + \frac{\gamma_{\max}}{\gamma_{\min}} \alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k} . \end{align*} This completes the proof.
$ \rule{.08in}{.08in}$
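For the reader tracking constants in the proof above, the term $\frac{\alpha\zeta_3\gamma_{\max}^2}{0.9}$ arises simply from bounding the geometric sum: $\alpha^2\zeta_3\gamma_{\max}\sum_{k=T_1}^{t}\big(1-\frac{0.9\alpha}{\gamma_{\max}}\big)^{t-k}\le\alpha^2\zeta_3\gamma_{\max}\cdot\frac{\gamma_{\max}}{0.9\alpha}=\frac{\alpha\zeta_3\gamma_{\max}^2}{0.9}$.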
We are now in a position to prove the fixed step-size case in Theorem~\ref{thm:bound_jointly_SA}.
\noindent {\bf Proof of Case 1) in Theorem~\ref{thm:bound_jointly_SA}:}
From Lemmas~\ref{lemma:bound_consensus_jointly} and \ref{lemma:bound_average}, we have for any $t \ge T_1$,
\begin{align*}
&\;\;\;\; \sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \theta^*\|_2^2] \le 2 \sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 ] + 2 \mathbf{E} [\| \langle \theta \rangle_{t} - \theta^*\|_2^2] \\
&\le 2 \epsilon^{q_{t}} \sum_{i=1}^N \pi_{m_t}^i \mathbf{E}[\| \theta_{m_t}^i - \langle \theta \rangle_{m_t} \|_2^2] + \frac{2 \zeta_2}{1- \epsilon} + \frac{2 \alpha \zeta_3 \gamma_{\max}^2}{0.9 \gamma_{\min} } + \frac{\gamma_{\max}}{\gamma_{\min}} 2\alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k} \nonumber\\
&\;\;\; + \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{{t}-T_1}\frac{\gamma_{\max}}{\gamma_{\min}} ( 8 \exp\left\{ 2 \alpha A_{\max}T_1 \right\}+4) \mathbf{E}[\| \langle \theta \rangle_{0} -\theta^* \|_2^2] \nonumber\\
&\;\;\; + 8 \Big(1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{{t}-T_1} \frac{\gamma_{\max}}{\gamma_{\min}} \exp\left\{ 2\alpha A_{\max}T_1 \right\} \Big( \|\theta^*\|_2 + \frac{b_{\max}}{A_{\max}} \Big)^2 \\
&\le 2 \epsilon^{q_{t}} \sum_{i=1}^N \pi_{m_t}^i \mathbf{E}[\| \theta_{m_t}^i - \langle \theta \rangle_{m_t} \|_2^2 ] + C_1 \Big( 1 - \frac{0.9 \alpha}{\gamma_{\max}} \Big)^{{t}-T_1} + C_2 + \frac{\gamma_{\max}}{\gamma_{\min}} 2\alpha \zeta_4 \sum_{k={0}}^{t-T_1} \eta_{t+1-k} \Big(1-\frac{0.9 \alpha}{\gamma_{\max}} \Big)^{k}, \end{align*} where $C_1$ and $C_2$ are defined in Appendix~\ref{sec:thmSA_constant}. This completes the proof.
$ \rule{.08in}{.08in}$
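Before turning to the time-varying step-size case, we include a small, self-contained numerical sketch of the fixed step-size recursion in matrix form, $\Theta_{t+1}=W_t\Theta_t+\alpha W_t\Theta_tA^\top(X_t)+\alpha B(X_t)$, used in \eqref{eq:update_y_fixed}. All data in the snippet below (the matrices $A$ and $b^i$, the doubly stochastic ring weights, the i.i.d.\ noise in place of a Markovian $X_t$, and the step-size) are illustrative choices of our own and are not the experimental setup of this paper; the snippet merely visualizes the two conclusions of Case 1) of Theorem~\ref{thm:bound_jointly_SA}: the consensus error becomes small and the network-average iterate settles near $\theta^*$, the unique solution of $A\theta^*+b=0$.
\begin{verbatim}
import numpy as np

# Toy instance of Theta_{t+1} = W Theta_t + alpha*W Theta_t A(X_t)^T + alpha*B(X_t)
# with i.i.d. (rather than Markovian) noise -- an illustrative simplification.
rng = np.random.default_rng(0)
N, K = 4, 2                              # number of agents, dimension of theta
A = np.array([[-1.0, 0.2],
              [0.0, -0.5]])              # Hurwitz "mean" matrix
b_local = rng.normal(size=(N, K))        # rows are agent-specific mean vectors b^i
b_bar = b_local.mean(axis=0)             # W below is doubly stochastic, so pi is uniform
theta_star = -np.linalg.solve(A, b_bar)  # unique solution of A theta + b = 0

# Doubly stochastic weights on a ring (strongly connected at every step, L = 1).
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])

alpha = 0.05
Theta = rng.normal(size=(N, K))          # rows are the local iterates theta^i_0
for t in range(20000):
    A_t = A + 0.1 * rng.normal(size=A.shape)        # noisy A(X_t)
    B_t = b_local + 0.1 * rng.normal(size=(N, K))   # noisy B(X_t), rows b^i(X_t)
    mixed = W @ Theta                                # consensus step
    Theta = mixed + alpha * mixed @ A_t.T + alpha * B_t

print("max consensus error :", np.abs(Theta - Theta.mean(axis=0)).max())
print("||average - theta*||:", np.linalg.norm(Theta.mean(axis=0) - theta_star))
\end{verbatim}
With these (arbitrary) choices, the two printed quantities settle at a small, step-size-dependent level rather than at zero, which is consistent with the constant floors $\frac{2\zeta_2}{1-\epsilon}$ and $C_2$ appearing in Case 1) of the theorem.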
\subsubsection{Time-varying Step-size} \label{sec:proof_jointly_time-varying}
In this subsection, we consider the time-varying step-size case and begin with a property of $\eta_t$.
\begin{lemma} \label{lemma:eta_sum}
Suppose that Assumption~\ref{assum:limit_pi} holds. Then, $\lim_{t \to \infty} \eta_t =0$ and $\lim_{t \to \infty} \frac{1}{t+1} \sum_{k=0}^t \eta_{k} = 0.$ \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:eta_sum}:} From Assumption~\ref{assum:limit_pi}, we know that $\pi_t$ will converge to $ \pi_\infty$, and thus $\eta_t$ will converge to 0. Next, we will prove that $\lim_{t \to \infty} \frac{1}{t+1} \sum_{k=0}^t \eta_{k} = 0.$ For any positive constant $ c > 0$, there exists a positive integer $ T(c)$, depending on $c$, such that $ \forall t \ge T(c) $, we have $\eta_{t} < c$. Thus, \begin{align*}
\frac{1}{t} \sum_{k=0}^{t-1} \eta_{k}
= \frac{1}{t}\sum_{k=0}^{T(c)} \eta_k + \frac{1}{t}\sum_{k=T(c)+1}^{t-1} \eta_k
\le \frac{1}{t}\sum_{k=0}^{T(c)} \eta_k + \frac{t-1-T(c)}{t} c. \end{align*} Let $t \to \infty$ on both sides of the above inequality. Then, we have \begin{align*}
\lim_{t \to \infty}\frac{1}{t} \sum_{k=0}^{t-1} \eta_{k}
& \le \lim_{t \to \infty} \frac{1}{t}\sum_{k=0}^{T(c)} \eta_k + \lim_{t \to \infty} \frac{t-1-T(c)}{t} c = c. \end{align*} Since the above argument holds for arbitrary positive $c$, then $\lim_{t \to \infty} \frac{1}{t+1} \sum_{k=0}^t \eta_{k} = 0.$
$ \rule{.08in}{.08in}$
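For instance (a standard example recorded only for intuition), if $\eta_t=\frac{1}{t+1}$, then $\eta_t\to 0$ and $\frac{1}{t+1}\sum_{k=0}^{t}\eta_k=\frac{1}{t+1}\sum_{k=0}^{t}\frac{1}{k+1}\le\frac{1+\ln(t+1)}{t+1}\to 0$, in agreement with Lemma~\ref{lemma:eta_sum}.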
From the updates of the time-varying step-size case, given in \eqref{eq:updtae_Theta} and \eqref{eq:update of average_time-varying},
and proceeding as in the derivation of \eqref{eq:update_y_constant_jointly}, the update for $Y_{t}$ in this case can be written as \begin{align*}
Y_{t+L}
&= W_{t:t+L-1} Y_t ( I + \alpha_{t} A^\top(X_{t})) \cdots ( I + \alpha_{t+L-1} A^\top(X_{t+L-1})) + \alpha_{t+L-1} (I - \mathbf{1}_N \pi_{{t+L}}^\top) B(X_{t+L-1}) \nonumber \\
& \;\;\; + \sum_{k=t}^{t+L-2} \alpha_k W_{k+1:t+L-1} (I - \mathbf{1}_N \pi_{{k+1}}^\top) B(X_{k}) \left(\Pi_{j=k+1}^{t+L-1} ( I + \alpha_j A^\top(X_{j})) \right), \end{align*} and $
Y_{t+L}^i = \big(\Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \big) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j + \tilde b_{t+L}^i, $ where \begin{align*}
\tilde b_{t+L}^i & = \alpha_{t+L-1} (b^i(X_{t+L-1}) - B(X_{t+L-1})^\top \pi_{{t+L}}) \\
& \;\;\; + \sum_{k=t}^{t+L-2} \alpha_k \left(\Pi_{j=k+1}^{t+L-1} ( I + \alpha_j A(X_{j})) \right) \sum_{j=1}^N w_{k+1:t+L-1}^{ij} (b^j(X_{k}) - B(X_{k})^\top \pi_{{k+1}}). \end{align*}
To prove the theorem, we need the following lemmas.
\begin{lemma} \label{lemma:bound_consensus_time-varying_jointly} Suppose that Assumptions~\ref{assum:weighted matrix} and \ref{assum:A and b} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$.
Given $\alpha_t$ and $T_2$ defined in Theorem~\ref{thm:bound_jointly_SA}, for all $t \ge T_2L $,
\begin{align*}
\sum_{i=1}^N \pi_{t}^i \| \theta_{t}^i - \langle \theta \rangle_{t} \|_2^2
& \le \epsilon^{q_{t}-{T_2}} \sum_{i=1}^N \pi_{T_2L + m_t }^i \| \theta_{T_2L + m_t }^i - \langle \theta \rangle_{T_2L+ m_t } \|_2^2 + \frac{\zeta_6}{1-\epsilon} \left( \epsilon^{\frac{q_t - 1}{2}} \alpha_{m_t} + \alpha_{\ceil{\frac{q_t - 1}{2}} L+m_t} \right) \\
& \le \epsilon^{q_t - {T_2}} \sum_{i=1}^N \pi_{T_2L+m_t}^i \| \theta_{T_2L+m_t}^i - \langle \theta \rangle_{T_2L+m_t} \|_2^2 + \frac{\zeta_6}{1-\epsilon} \left( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L} \right),
\end{align*} where $\epsilon$ and $\zeta_6$ are defined in \eqref{eq:define epsilon_jointly} and \eqref{eq:define Psi11}, respectively. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_consensus_time-varying_jointly}:} Similar to the proof of Lemma~\ref{lemma:bound_consensus_jointly}, we have \begin{align}
\| Y_{t+L} \|_{M_{t+L}}^2
&= \sum_{i=1}^N \pi_{t+L}^i \| \left(\Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j\|_2^2 \label{eq:matrix_1_time_jointly}\\
&\;\;\; + \sum_{i=1}^N \pi_{t+L}^i \| \tilde b_{t+L}^i\|_2^2 \label{eq:matrix_2_time_jointly}\\
& \;\;\; + 2 \sum_{i=1}^N \pi_{t+L}^i
(\tilde b_{t+L}^i)^\top \left(\Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j. \label{eq:matrix_3_time_jointly} \end{align}
By Lemma~\ref{lemma:lower_bound_jointly}, the item given by \eqref{eq:matrix_1_time_jointly} can be bounded as follows: \begin{align}
&\;\;\; \sum_{i=1}^N \pi_{t+L}^i \| \left(\Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j\|_2^2 \nonumber\\
& \le \Pi_{k=t}^{t+L-1} ( 1 + \alpha_k A_{\max})^{2} \bigg[ \sum_{i=1}^N \pi_{t}^i \|Y_{t}^i\|_2^2 - \frac{1}{2} \sum_{i=1}^N \pi_{t+L}^i \sum_{j=1}^N \sum_{l=1}^N w_{t:t+L-1}^{ij} w_{t:t+L-1}^{il}\| Y_{t}^j - Y_{t}^l \|_2^2 \bigg] \nonumber \\
& \le \Pi_{k=t}^{t+L-1} ( 1 + \alpha_k A_{\max})^{2} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2. \label{eq:matrix_proof_1_time_jointly} \end{align}
Since $\| b^i(X_{t}) - B(X_{t})^\top \pi_{{t+1}} \|_2 \le 2 b_{\max}$ holds for all $i$, \begin{align*}
\| \tilde b_{t+L}^i\|_2
& \le \alpha_{t+L-1} \| (b^i(X_{t+L-1}) - B(X_{t+L-1})^\top \pi_{{t+L}}) \|_2 \\
& \;\;\; + \sum_{k=t}^{t+L-2} \alpha_k \|\left(\Pi_{j=k+1}^{t+L-1} ( I + \alpha_j A(X_{j})) \right) \|_2 \sum_{j=1}^N w_{k+1:t+L-1}^{ij} \| (b^j(X_{k}) - B(X_{k})^\top \pi_{{k+1}})\|_2 \\
& \le 2 b_{\max} \bigg[ \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \left(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \right)\bigg]. \end{align*} Then, we can bound the item given by \eqref{eq:matrix_2_time_jointly} as follows: \begin{align} \label{eq:eq:matrix_proof_21_time_jointly}
\sum_{i=1}^N \pi_{t+L}^i \| \tilde b_{t+L}^i\|_2^2
& \le 4 b_{\max}^2 \bigg( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \left(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \right)\bigg)^2. \end{align} As for the item given by \eqref{eq:matrix_3_time_jointly}, we have \begin{align} \label{eq:matrix_proof_3_time_jointly}
&\;\;\;\; 2 \sum_{i=1}^N \pi_{t+L}^i
(\tilde b_{t+L}^i)^\top \left(\Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \right) \sum_{j=1}^N w_{t:t+L-1}^{ij} Y_{t}^j \nonumber\\
& \le 2 \sum_{i=1}^N \pi_{t+L}^i
\| \tilde b_{t+L}^i\|_2 \| \Pi_{k=t}^{t+L-1}( I + \alpha_k A(X_k)) \|_2 \sum_{j=1}^N w_{t:t+L-1}^{ij} \| Y_{t}^j \|_2 \nonumber \\
& \le 2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \left(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \right)\Big) \left(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \right) \Big(\sum_{i=1}^N \pi_{t}^i \| Y_{t}^i \|_2^2 +1 \Big). \end{align} From \eqref{eq:matrix_proof_1_time_jointly}--\eqref{eq:matrix_proof_3_time_jointly}, we have \begin{align*}
& \;\;\; \| Y_{t+L} \|_{M_{t+L}}^2 \\
& \le \Pi_{k=t}^{t+L-1} ( 1 + \alpha_k A_{\max})^{2} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) \sum_{i=1}^N \pi_{t}^i \| Y_{t}^{i}\|_2^2 \nonumber\\
& \;\;\;+ 4 b_{\max}^2 \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big)^2 \nonumber\\
& \;\;\; +2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big) \Big(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \Big) \Big(\sum_{i=1}^N \pi_{t}^i \| Y_{t}^i \|_2^2 +1 \Big) \nonumber \\
& = \bigg[ 2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big) \Big(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \Big) \nonumber\\
& \;\;\; + \Pi_{k=t}^{t+L-1} ( 1 + \alpha_k A_{\max})^{2} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big) \bigg] \| Y_{t} \|_{M_{t}}^2 \nonumber \\
& \;\;\;+ 4 b_{\max}^2 \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big)^2 \nonumber\\
& \;\;\; +2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big) \Big(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \Big) \\
& = \epsilon_{t} \| Y_{t} \|_{M_{t}}^2 + 4 b_{\max}^2 \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big)^2 \nonumber\\
& \;\;\; +2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big) \Big(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \Big), \end{align*} where \begin{align*}
\epsilon_{t} & = 2 b_{\max} \Big( \alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \Big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \Big)\Big) \Big(\Pi_{k=t}^{t+L-1}( 1 + \alpha_k A_{\max}) \Big) \nonumber\\
& \;\;\; + \Pi_{k=t}^{t+L-1} ( 1 + \alpha_k A_{\max})^{2} \Big(1-\frac{\pi_{\min} \beta^{2L}}{2 \delta_{\max}}\Big). \end{align*} Since $\alpha_{t} \le \alpha$ when $t\ge T_2L$, we have for $t\ge T_2L$, $0 \le \epsilon_{t} \le \epsilon < 1$ and \begin{align*}
\alpha_{t+L-1} + \sum_{k=t}^{t+L-2} \alpha_k \big(\Pi_{j=k+1}^{t+L-1} ( 1 + \alpha_j A_{\max}) \big)
\le \sum_{k=t}^{t+L-1} \alpha_k ( 1 + \alpha A_{\max})^{t+L-k-1}
\le ( 1 + \alpha A_{\max})^{L-1} \sum_{k=t}^{t+L-1} \alpha_k. \end{align*} Since $\sum_{k=t}^{t+L-1} \alpha_k \le L\alpha_{t} \le L \alpha$, \begin{align*}
\| Y_{t+L} \|_{M_{t+L}}^2
& \le \epsilon \| Y_{t} \|_{M_{t}}^2 + 4 b_{\max}^2 ( 1 + \alpha A_{\max})^{2L-2} \Big(\sum_{k=t}^{t+L-1} \alpha_k\Big)^2 +2 b_{\max} ( 1 + \alpha A_{\max})^{2L-1} \Big(\sum_{k=t}^{t+L-1} \alpha_k\Big) \\
& \le \epsilon \| Y_{t} \|_{M_{t}}^2 +\left( 4 b_{\max}^2 \alpha L^2 ( 1 + \alpha A_{\max})^{2L-2} +2 b_{\max}L ( 1 + \alpha A_{\max})^{2L-1} \right) \alpha_{t} \\
& \le \epsilon \| Y_{t} \|_{M_{t}}^2 + \zeta_6 \alpha_{t}, \end{align*} where $ \epsilon$ and $\zeta_6$ are defined in \eqref{eq:define epsilon_jointly} and \eqref{eq:define Psi11}, respectively. Then, \begin{align*}
\| Y_{t+L} \|_{M_{t+L}}^2
& \le \epsilon \| Y_{t} \|_{M_{t}}^2 + \zeta_6 \alpha_{t} \\
& \le \epsilon^{q_{t+L}-{T_2}} \| Y_{m_t + T_2L}\|_{M_{m_t + T_2L}}^2 + \zeta_6 \sum_{k=T_2}^{q_t} \epsilon^{q_t-k} \alpha_{kL+m_t} \\
& \le \epsilon^{q_{t+L}-{T_2}} \| Y_{T_2L + m_t}\|_{M_{T_2L + m_t}}^2 + \zeta_6 \Big( \sum_{k=0}^{\floor{\frac{q_t}{2}}} \epsilon^{q_t-k} \alpha_{kL+m_t} + \sum_{k=\ceil{\frac{q_t}{2}}}^{q_t} \epsilon^{q_t-k} \alpha_{kL+m_t} \Big) \\
& \le \epsilon^{q_{t+L}-{T_2}} \| Y_{T_2L + m_t}\|_{M_{T_2L + m_t}}^2 + \frac{\zeta_6}{1-\epsilon} \left( \epsilon^{\frac{q_t}{2}} \alpha_{m_t} + \alpha_{\ceil{\frac{q_t}{2}} L+m_t} \right), \end{align*} which implies that \begin{align*}
\sum_{i=1}^N \pi_{t}^i \| \theta_{t}^i - \langle \theta \rangle_{t} \|_2^2
& \le \epsilon^{q_{t}-{T_2}} \sum_{i=1}^N \pi_{T_2L + m_t }^i \| \theta_{T_2L + m_t }^i - \langle \theta \rangle_{T_2L+ m_t } \|_2^2 + \frac{\zeta_6}{1-\epsilon} \left( \epsilon^{\frac{q_t - 1}{2}} \alpha_{m_t} + \alpha_{\ceil{\frac{q_t - 1}{2}} L+m_t} \right) \\
& \le \epsilon^{q_t - {T_2}} \sum_{i=1}^N \pi_{T_2L+m_t}^i \| \theta_{T_2L+m_t}^i - \langle \theta \rangle_{T_2L+m_t} \|_2^2 + \frac{\zeta_6}{1-\epsilon} \left( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L} \right) . \end{align*} This completes the proof.
$ \rule{.08in}{.08in}$
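The estimate just derived has the scalar form $V_{t+L}\le \epsilon V_t+\zeta_6\alpha_t$ with $V_t=\|Y_t\|_{M_t}^2$, $\epsilon\in(0,1)$ and $\alpha_t\to 0$. The short Python sketch below iterates this recursion for illustrative values of $\epsilon$, $\zeta_6$, $L$, $\alpha_0$ and $V_0$ (all of them assumptions made only for the illustration), showing the qualitative decay of the consensus-error bound.
\begin{verbatim}
# Scalar illustration of the recursion V_{t+L} <= eps*V_t + zeta6*alpha_t
# with alpha_t = alpha0/(t+1).  All constants are illustrative assumptions.
eps, zeta6, alpha0, L = 0.8, 5.0, 1.0, 3
V, t = 10.0, 0                      # V stands for ||Y_0||_{M_0}^2
for step in range(2001):
    V = eps * V + zeta6 * alpha0 / (t + 1)
    t += L
    if step % 500 == 0:
        print(t, V)                 # the bound decays towards 0
\end{verbatim}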
\begin{lemma} \label{lemma:timevarying_single_3}
Suppose that Assumptions~\ref{assum:A and b} and \ref{assum:mixing-time} hold. When the step-size $\alpha_t$ and corresponding mixing time $\tau(\alpha_t)$ satisfy
$
0< \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) < \frac{\log2}{A_{\max}}, $
we have for any $t \ge T_2L$, \begin{align}
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 & \le 2 A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_t)} \|_2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 2 b_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k, \label{eq:timevarying_single_3_1}\\
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 & \le 6 A_{\max} \| \langle \theta \rangle_{t} \|_2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 5 b_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k, \label{eq:timevarying_single_3_2}\\
\| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 & \le 72 \alpha_{t-\tau(\alpha_t)}^2 \tau^2(\alpha_t) A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 \alpha_{t-\tau(\alpha_t)}^2 \tau^2(\alpha_t) b_{\max}^2 \nonumber\\
& \le 8 \| \langle \theta \rangle_{t} \|_2^2 + \frac{6b_{\max}^2}{A_{\max}^2}. \label{eq:timevarying_single_3_3} \end{align} \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:timevarying_single_3}:} From the update of $\langle \theta \rangle_t $ in \eqref{eq:update of average_time-varying}, $
\| \langle \theta \rangle_{t+1} \|_2 \le \| \langle \theta \rangle_t \|_2 + \alpha_{t} A_{\max} \| \langle \theta \rangle_t \|_2 + \alpha_{t} b_{\max}
\le (1+\alpha_{t} A_{\max}) \| \langle \theta \rangle_t \|_2 + \alpha_{t} b_{\max}. $ Similar to the proof of Lemma~\ref{lemma:fixed_single_3}, for all $u \in [t-\tau(\alpha_{t}), t]$, \begin{align*}
\| \langle \theta \rangle_{u} \|_2
& \le \Pi_{k = t-\tau(\alpha_{t})}^{u-1} (1+\alpha_{k} A_{\max})\| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + b_{\max} \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \Pi_{l=k+1}^{u-1} (1+\alpha_{l} A_{\max}) \\
& \le \exp\{ \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_{k} A_{\max}\} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + b_{\max} \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \exp\{ \sum_{l=k+1}^{u-1} \alpha_{l} A_{\max}\} \\
& \le \exp\{ \alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max}\} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + b_{\max} \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \exp\{ \alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max}\} \\
& \le 2 \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + 2 b_{\max} \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k, \end{align*} where we use $\alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max} \le \log2 < \frac{1}{3}$ in the last inequality. Thus, for all $t\ge T_2L$, we can get \eqref{eq:timevarying_single_3_1} as follows: \begin{align*}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha_{t})} \|_2
& \le \sum_{k=t-\tau(\alpha_{t})}^{t-1} \| \langle \theta \rangle_{k+1} - \langle \theta \rangle_{k} \|_2 \le A_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \| \langle \theta \rangle_{k} \|_2 + b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le A_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k \Big( 2 \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + 2 b_{\max} \sum_{l=t-\tau(\alpha_{t})}^{k-1} \alpha_l \Big) + b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \left( 2 A_{\max} \tau(\alpha_{t}) \alpha_{t-\tau(\alpha_{t})} + 1 \right) b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \frac{5}{3} b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + 2 b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} . \end{align*} Moreover, using the above inequality, we can get \eqref{eq:timevarying_single_3_2} for $t\ge T_2L$ as follows: \begin{align*}
&\;\;\;\; \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha_{t})} \|_2
\le 2 A_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \frac{5}{3} b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \tau(\alpha_{t}) \alpha_{t-\tau(\alpha_{t})}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2
+ 2 A_{\max} \| \langle \theta \rangle_{t} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \frac{5}{3} b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 6 A_{\max} \| \langle \theta \rangle_{t} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k
+ 5 b_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k}. \end{align*} Next, using \eqref{eq:timevarying_single_3_2} and the inequality $(x+y)^2 \le 2x^2 + 2y^2$ for any $x, y$, we can get \eqref{eq:timevarying_single_3_3} as follows: \begin{align*}
\| \langle \theta \rangle_{t} - \langle \theta \rangle_{t - \tau(\alpha_{t})} \|_2^2
& \le 72 A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 (\sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k )^2
+ 50 b_{\max}^2 (\sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k})^2 \\
& \le 72 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) b_{\max}^2 \\
& \le 8 \| \langle \theta \rangle_{t} \|_2^2 + 6 (\frac{ b_{\max}}{A_{\max}})^2, \end{align*} where we use $\alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max} < \frac{1}{3}$ in the last inequality.
$ \rule{.08in}{.08in}$
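Several of the bounds in this subsection are invoked only once the product $\alpha_{t-\tau(\alpha_t)}\tau(\alpha_t)$ is sufficiently small (see, e.g., Lemma~\ref{lemma:bound_timevarying_Ab}). The following sketch, which is only an illustration, takes $\alpha_t=\alpha_0/(t+1)$ and a mixing time of the logarithmic form $\tau(\alpha)=\lceil C\log(1/\alpha)\rceil$, and locates the first index at which this product falls below a user-chosen threshold; the values of $\alpha_0$, $C$ and the threshold are assumptions made only for the sketch.
\begin{verbatim}
# Find the first t with alpha_{t - tau(alpha_t)} * tau(alpha_t) <= c_thr,
# where alpha_t = alpha0/(t+1) and tau(alpha) = ceil(C*log(1/alpha)).
# alpha0, C and c_thr are illustrative assumptions.
import math

alpha0, C, c_thr = 1.0, 2.0, 0.05
alpha = lambda t: alpha0 / (t + 1)
tau = lambda a: max(1, math.ceil(C * math.log(1.0 / a)))

t = 1
while True:
    tt = tau(alpha(t))
    if t >= tt and alpha(t - tt) * tt <= c_thr:
        print("product below threshold from t =", t)
        break
    t += 1
\end{verbatim}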
\begin{lemma} \label{lemma:bound_timevarying_Ab}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:limit_pi} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected. Then, when
$
0< \alpha_{t - \tau(\alpha_t) } \tau(\alpha_t) < \frac{ \log2}{A_{\max} }, $
we have for any $t \ge T_2L$,
\begin{align*}
& \;\;\;\; \left| \mathbf{E}\left[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top) \Big( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b \Big) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} \right] \right| \nonumber \\
& \le \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} ] \nonumber \\
&\;\;\; + \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 4 \|\theta^* \|_2^2 + \frac{48b_{\max}^2}{A_{\max}^2} + 152 \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 + 12 A_{\max}b_{\max} \nonumber\\
&\;\;\; + 48 A_{\max}b_{\max} \Big(\frac{b_{\max}}{A_{\max}} + 1\Big)^2 + 87 b_{\max}^2 \bigg] \nonumber \\
& \;\;\;+ 2 \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \Big( 1 + 9 \mathbf{E}\left[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} \right] + \frac{6b_{\max}^2}{A_{\max}^2}+ \| \theta^* \|_2^2 \Big). \end{align*} \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_timevarying_Ab}:} Note that for all $t\ge T_2L$, we have \begin{align}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_t \; | \; \mathcal{F}_{t-\tau(\alpha_{t})}]| \nonumber\\
&\;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_1} \\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_2} \\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_3}\\
& \;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_4}\\
&\;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_5}\\
&\;\;\; + |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\label{eq:timevarying_bound_Ab_6}. \end{align} Similar to the proof of Lemma~\ref{lemma:bound_fixed_Ab}, using the mixing time in Assumption~\ref{assum:mixing-time}, we can get the bound for \eqref{eq:timevarying_bound_Ab_1} and \eqref{eq:timevarying_bound_Ab_6} for $t\ge T_2L$ as follows: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le |( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top) \mathbf{E}[A(X_t) - A \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \langle \theta \rangle_{t-\tau(\alpha_{t})} | \nonumber\\
& \le 2 \alpha_{t} \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \| \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
& \le \alpha_{t} \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2^2 + \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le \alpha_{t} \gamma_{\max} \mathbf{E}[ 2 \|\theta^* \|_2^2 + 3 \| \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le 6 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2 \nonumber \\
& \le 54 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 36 \alpha_{t} \gamma_{\max} (\frac{b_{\max}}{A_{\max}})^2 + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2, \label{eq:timevarying_bound_Ab_1_bounded} \end{align} where in the last inequality, we use \eqref{eq:timevarying_single_3_3} from Lemma~\ref{lemma:timevarying_single_3}, and \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)\Big(\sum_{i=1}^N \pi_{t+1}^i(b^i(X_t) - b^i ) + \sum_{i=1}^N (\pi_{t+1}^i - \pi_{\infty}^i) b^i \Big) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber \end{align} \begin{align}
& \le | ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)\Big(\sum_{i=1}^N \pi_{t+1}^i \mathbf{E}[ b^i(X_t) - b^i \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + \sum_{i=1}^N (\pi_{t+1}^i - \pi_{\infty}^i) b^i \Big) |\nonumber\\
& \le 2 \gamma_{\max} (\alpha_{t} + \eta_{t+1}\sqrt{N}b_{\max}) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]\nonumber\\
& \le 2 \gamma_{\max} (\alpha_{t} + \eta_{t+1}\sqrt{N}b_{\max}) \Big( 1 + \frac{1}{2} \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + \frac{1}{2} \| \theta^* \|_2^2 \Big) \nonumber\\
& \le 2 \gamma_{\max} (\alpha_{t} + \eta_{t+1}\sqrt{N}b_{\max}) \left( 1 + \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} - \langle \theta \rangle_{t} \|_2^2 + \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + \| \theta^* \|_2^2 \right) \nonumber\\
& \le 2 \gamma_{\max} (\alpha_{t} + \eta_{t+1}\sqrt{N}b_{\max}) \Big( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \Big),
\label{eq:timevarying_bound_Ab_6_bounded} \end{align} where in the last inequality we use \eqref{eq:timevarying_single_3_3}. Next, by Assumption~\ref{assum:A and b}, \eqref{eq:timevarying_single_3_1} and \eqref{eq:timevarying_single_3_3}, we have \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
&\le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
&\le 4 \gamma_{\max} A_{\max}
\mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 + \| \theta^* \|_2 \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2
\; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
&\le 8 \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 8 \gamma_{\max} A_{\max} b_{\max} \| \theta^* \|_2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \;\;\; + 8 \gamma_{\max} A_{\max}^2 \Big(\frac{ b_{\max}}{A_{\max}} + \| \theta^* \|_2 \Big) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
&\le \gamma_{\max} A_{\max}^2 \Big( 12 \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 8 \Big(\frac{ b_{\max}}{A_{\max}} + \| \theta^* \|_2 \Big)^2 \Big) \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k ,\nonumber
\label{eq:timevarying_bound_Ab_2_bounded} \end{align} which implies that \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
&\le 24 \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t} - \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \;\;\; + 24 \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 8 \gamma_{\max} A_{\max}^2 \Big(\frac{ b_{\max}}{A_{\max}} + \| \theta^* \|_2 \Big)^2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
&\le \gamma_{\max} \left( 216 A_{\max}^2 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 152 \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \right) \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\end{align} In additional, using \eqref{eq:timevarying_single_3_1} and \eqref{eq:timevarying_single_3_3}, we have the following bound for \eqref{eq:timevarying_bound_Ab_3}: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t -\langle \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) \langle \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \| \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]\nonumber\\
& \le 8 \gamma_{\max} A_{\max} \mathbf{E}[ A_{\max } \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 + b_{\max} \| \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \left( (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + b_{\max} \right) \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber
\end{align} \begin{align}
& \le 8 \gamma_{\max} A_{\max} (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} -\langle \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
&\;\;\; + 8 \gamma_{\max} A_{\max} (2 A_{\max }+ b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 4 \gamma_{\max} A_{\max} b_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber \\
& \le 72 \ \gamma_{\max} A_{\max} (2 A_{\max } + b_{\max} ) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \;\;\;+ 48 \gamma_{\max} A_{\max} b_{\max} \Big(\frac{b_{\max}}{A_{\max}} + 1 \Big)^2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\label{eq:timevarying_bound_Ab_3_bounded} \end{align} Moreover, using \eqref{eq:timevarying_single_3_3}, we can get the bound for \eqref{eq:timevarying_bound_Ab_4} as follows: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ 72 A_{\max}^2 \| \langle \theta \rangle_{t} \|_2^2 + 50 b_{\max}^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \Big( \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \Big)^2 \nonumber\\
& \le 96 A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 67 b_{\max}^2 \gamma_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\label{timevarying_bound_Ab_4_bounded} \end{align} Finally, we can get the bound for \eqref{eq:timevarying_bound_Ab_5} using \eqref{eq:timevarying_single_3_2}: \begin{align}
&\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})} ) (P+P^\top)(B(X_t)^\top\pi_{t+1} - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ \| \langle \theta \rangle_t - \langle \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ 6 A_{\max} \| \langle \theta \rangle_{t} \|_2 + 5 b_{\max} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber \\
& \le \gamma_{\max}\left( 12 A_{\max} b_{\max} \mathbf{E}[\| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 12 A_{\max} b_{\max} + 20 b_{\max}^2 \right)\sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\label{eq:timevarying_bound_Ab_5_bounded} \end{align} Then, using \eqref{eq:timevarying_bound_Ab_1_bounded}--\eqref{eq:timevarying_bound_Ab_5_bounded}, we have \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le 54 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 36 \alpha_{t} \gamma_{\max} (\frac{b_{\max}}{A_{\max}})^2 + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2 \nonumber \\
&\;\;\; + \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \Big[ \left( 216 \gamma_{\max} A_{\max}^2 + 12 \gamma_{\max} A_{\max} (20 A_{\max } + 7b_{\max} ) \right) \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 67 b_{\max}^2 \gamma_{\max} \\
& \;\;\; + 152 \gamma_{\max} \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2
+ 48 \gamma_{\max} A_{\max} b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2 + ( 12 A_{\max} b_{\max} + 20 b_{\max}^2 )\gamma_{\max} \Big] \\
& \;\;\; + 2 \gamma_{\max} (\alpha_{t} + \eta_{t+1}\sqrt{N}b_{\max}) \left( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \right),
\end{align*} which implies that \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} ] \nonumber \\
&\;\;\; + \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + 152 \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 + 12 A_{\max}b_{\max} \nonumber\\
&\;\;\; + 48 A_{\max}b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2 + 87 b_{\max}^2 \bigg] \nonumber \\
& \;\;\;+ 2 \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \left( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \right), \end{align*} where we use $\alpha_t \le \alpha_{t-\tau(\alpha_t)}$ from Assumption~\ref{assum:step-size} and $\tau(\alpha_t) \ge 1$ in the last inequality. This completes the proof.
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:bound_average_time-varying_jointly}
Under Assumptions~\ref{assum:weighted matrix}--\ref{assum:limit_pi}, when $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \min \{ \frac{ \log2}{A_{\max}}, \; \frac{0.1}{\zeta_5 \gamma_{\max}} \}$, we have for any $t\ge T_2L$,
$$
\mathbf{E} \left[\|\langle \theta \rangle_{t} -\theta^* \|_2^2 \right] \le \frac{T_2L}{t} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\| \langle \theta \rangle_{T_2L} -\theta^* \|_2^2 ]
+ \frac{\zeta_7 \alpha_0 C \log^2(\frac{t}{\alpha_0})}{t} \frac{\gamma_{\max}}{\gamma_{\min}} + \alpha_0 \zeta_4 \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = T_2L}^{t} \eta_{l}}{t}, $$ where $T_2$ is defined in Appendix~\ref{sec:thmSA_constant}, and $\zeta_4$, $\zeta_5$, $\zeta_7$ are defined in \eqref{eq:define Psi5}, \eqref{eq:define Psi7}, \eqref{eq:define Psi8}, respectively. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_average_time-varying_jointly}:}
Consider the update of $\langle \theta \rangle_t $ given in \eqref{eq:update of average_time-varying}.
Note that $ \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 ] \le 2 \mathbf{E}[ \| \langle \theta \rangle_{t}- \theta^* \|_2^2 ] + 2 \| \theta^*\|_2^ 2 \le \frac{2}{\gamma_{\min}} \mathbf{E}[H( \langle \theta \rangle_t )] + 2 \| \theta^*\|_2^ 2 $, then from \eqref{eq:fixed_proof_1} and Lemma~\ref{lemma:bound_timevarying_Ab}, for $t\ge T_2L$ we have \begin{align}
&\;\;\;\; \mathbf{E}[ H( \langle \theta \rangle_{t+1} ) ] \nonumber\\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - \alpha_t \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + \alpha_t^2 A_{\max}^2 \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2^2 ] + 2 \alpha_t^2 A_{\max} b_{\max} \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2 ] \nonumber\\
& \;\;\;\; + \alpha_t \mathbf{E}[( \langle \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \theta \rangle_t - b) ] + \alpha_t^2 b_{\max}^2 \gamma_{\max}\nonumber\\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] - \alpha_t \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + 2 \alpha_t^2 A_{\max}^2 \gamma_{\max} \mathbf{E}[\| \langle \theta \rangle_t\|_2^2 ] + 2 \alpha_t^2 b_{\max}^2 \gamma_{\max} \nonumber\\
& \;\;\; + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} \right) \mathbf{E}[ \| \langle \theta \rangle_{t} \|_2^2 ] \nonumber \\
&\;\;\; + \alpha_{t} \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 152 \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 + 48 A_{\max}b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2 + 12 A_{\max}b_{\max} + 2 \nonumber\\
&\;\;\; + 4 \|\theta^* \|_2^2 + 48(\frac{b_{\max}}{A_{\max}})^2 + 87 b_{\max}^2 \bigg] + 2 \alpha_t \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \left( 1 + 9 \mathbf{E}[ \| \langle \theta \rangle_{t}\|_2^2 ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ \| \theta^* \|_2^2 \right) \nonumber\\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] + \left( - \alpha_t + 2 \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 458 A_{\max}^2 + 84 A_{\max} b_{\max} \right) \right) \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] \nonumber\\
& \;\;\; + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \Big[ 2\left( 72 + 458 A_{\max}^2 + 84 A_{\max} b_{\max} \right) \| \theta^*\|_2^ 2 + 152 \left(b_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\; + 2 + 4 \|\theta^* \|_2^2 + 12 A_{\max}b_{\max} + 48(\frac{b_{\max}}{A_{\max}})^2 + 48 A_{\max}b_{\max} (\frac{b_{\max}}{A_{\max}} + 1 )^2 + 89 b_{\max}^2\bigg] \nonumber\\
&\;\;\; + 2 \alpha_t \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \left( 1 + 18 \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] + 6 (\frac{b_{\max}}{A_{\max}})^2+ 19 \| \theta^* \|_2^2 \right) \nonumber \\
& \le \mathbf{E}[H( \langle \theta \rangle_t )] + \left( - \alpha_t + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \zeta_5 + 36 \alpha_t \gamma_{\max} \eta_{t+1}\sqrt{N}b_{\max} \right) \mathbf{E}[\| \langle \theta \rangle_t - \theta^*\|_2^2 ] \nonumber\\
& \;\;\; + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \zeta_7 \nonumber +
\alpha_t \gamma_{\max} \eta_{t+1}\zeta_4, \end{align} where $\zeta_4$, $\zeta_5$ and $\zeta_7$ are defined in \eqref{eq:define Psi5}, \eqref{eq:define Psi7} and \eqref{eq:define Psi8}, respectively. Moreover, from $\alpha_t = \frac{\alpha_0}{t+1}$, $\alpha_0\ge \frac{\gamma_{\max}}{0.9}$ and the definition of $T_2$, we have for all $t \ge T_2L$
\begin{align*}
\mathbf{E}[H( \langle \theta \rangle_{t+1} )] &\le \left(1 - \frac{0.9 \alpha_t}{\gamma_{\max}} \right) \mathbf{E}[H( \langle \theta \rangle_t )] + \alpha_t \gamma_{\max} \eta_{t+1}\zeta_4 + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \zeta_7 \nonumber \\
& \le \frac{t}{t+1} \mathbf{E}[H( \langle \theta \rangle_t )] + \alpha_0 \gamma_{\max} \zeta_4 \frac{\eta_{t+1}}{t+1} + \frac{\alpha_0^2 C\log(\frac{t+1}{\alpha_0}) \gamma_{\max} \zeta_7 }{(t+1)(t-\tau(\alpha_t)+1)} \\
& \le \frac{T_2L}{t+1} \mathbf{E}[H( \langle \theta \rangle_{T_2L} )] + \alpha_0 \gamma_{\max} \zeta_4 \sum_{l = T_2L}^{t} \frac{\eta_{l+1}}{l+1} \Pi_{u=l+1}^t\frac{u}{u+1} \nonumber\\
& \;\;\; + \alpha_0^2 \gamma_{\max} \zeta_7
\sum_{l=T_2L}^{t} \frac{ C\log(\frac{l+1}{\alpha_0}) }{(l+1)(l-\tau(\alpha_l)+1)} \Pi_{u=l+1}^t\frac{u}{u+1} \nonumber \\
& \le \frac{T_2L}{t+1} \mathbf{E}[H( \langle \theta \rangle_{T_2L} )] + \alpha_0 \gamma_{\max} \zeta_4 \frac{\sum_{l = T_2L}^{t} \eta_{l+1}}{t+1} + \frac{\zeta_7 \alpha_0 \gamma_{\max} C \log^2(\frac{{t+1}}{\alpha_0})}{t+1}\\
& \le \frac{T_2L}{t+1} \mathbf{E}[H( \langle \theta \rangle_{T_2L} )] + \alpha_0 \gamma_{\max} \zeta_4 \frac{\sum_{l = T_2L}^{t+1} \eta_{l}}{t+1} + \frac{\zeta_7 \alpha_0 \gamma_{\max} C \log^2(\frac{{t+1}}{\alpha_0})}{t+1}, \end{align*} where we use $
\sum_{l=T_2}^t \frac{2 \alpha_0 \log(\frac{l+1}{\alpha_0}) }{l+1} \le \log^2(\frac{t+1}{\alpha_0}) $
to get the last inequality. Then, we can get the bound of $ \mathbf{E}[\| \langle \theta \rangle_{t+1} -\theta^* \|_2^2 ] $ as follows:
\begin{align*}
&\;\;\;\; \mathbf{E}[\| \langle \theta \rangle_{t+1} -\theta^* \|_2^2 ] \le \frac{1}{\gamma_{\min}} \mathbf{E}[H( \langle \theta \rangle_{t+1} )] \nonumber\\
& \le \frac{T_2L}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\| \langle \theta \rangle_{T_2L} -\theta^* \|_2^2 ]
+ \frac{\zeta_7 \alpha_0 C \log^2(\frac{t+1}{\alpha_0})}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}}
+ \alpha_0 \zeta_4 \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = T_2L}^{t+1} \eta_{l}}{t+1}. \end{align*} This completes the proof.
$ \rule{.08in}{.08in}$
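The essential step in the proof above is the scalar recursion $\mathbf{E}[H(\langle\theta\rangle_{t+1})]\le(1-\tfrac{0.9\alpha_t}{\gamma_{\max}})\mathbf{E}[H(\langle\theta\rangle_t)]+c_1\alpha_t\eta_{t+1}+c_2\alpha_t\alpha_{t-\tau(\alpha_t)}\tau(\alpha_t)$ with $\alpha_t=\alpha_0/(t+1)$. The sketch below iterates this recursion with illustrative constants ($\gamma_{\max}$, $c_1$, $c_2$, $C$, $\alpha_0$, the sequence $\eta_t$ and the initial value are all assumptions) and prints $t\,H_t/\log^2 t$, which stays bounded, consistent with the $O(\log^2 t/t)$ bound of the lemma.
\begin{verbatim}
# Scalar illustration of the recursion behind the O(log^2 t / t) rate:
#   H_{t+1} = (1 - 0.9*alpha_t/gamma)*H_t + c1*alpha_t*eta_{t+1}
#             + c2*alpha_t*alpha_{t-tau}*tau(alpha_t).
# gamma, c1, c2, C, alpha0, eta_t and H_0 are illustrative assumptions.
import math

alpha0, gamma, c1, c2, C = 1.0, 0.9, 1.0, 1.0, 2.0
alpha = lambda t: alpha0 / (t + 1)
tau = lambda a: max(1, math.ceil(C * math.log(1.0 / a)))
eta = lambda t: 1.0 / (t + 1)                     # synthetic eta_t -> 0

H = 5.0
for t in range(1, 10**5 + 1):
    tt = tau(alpha(t))
    H = (1 - 0.9 * alpha(t) / gamma) * H \
        + c1 * alpha(t) * eta(t + 1) + c2 * alpha(t) * alpha(max(t - tt, 0)) * tt
    if t in (10**2, 10**3, 10**4, 10**5):
        print(t, H, t * H / math.log(t) ** 2)     # last column stays bounded
\end{verbatim}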
We are now in a position to prove the time-varying step-size case in Theorem~\ref{thm:bound_jointly_SA}.
\noindent {\bf Proof of Case 2) in Theorem~\ref{thm:bound_jointly_SA}:} From Lemmas~\ref{lemma:bound_consensus_time-varying_jointly} and \ref{lemma:bound_average_time-varying_jointly}, for any $t \ge T_2L$, we have
\begin{align*}
& \;\;\;\; \sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \theta^*\|_2^2]
\le 2 \sum_{i=1}^N \pi_{t}^i \mathbf{E}[\|\theta_{t}^i - \langle \theta \rangle_{t} \|_2^2 ] + 2 \mathbf{E} [\| \langle \theta \rangle_{t} - \theta^*\|_2^2]\\
& \le 2 \epsilon^{q_{t}-{T_2}} \sum_{i=1}^N \pi_{T_2L+m_t}^i \mathbf{E}[\| \theta_{T_2L+m_t}^i - \langle \theta \rangle_{T_2L+m_t} \|_2^2] + \frac{2T_2L}{t} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\| \langle \theta \rangle_{T_2L} -\theta^* \|_2^2 ] \\
& \;\;\;
+ \frac{2\zeta_7 \alpha_0 C \log^2(\frac{t}{\alpha_0})}{t} \frac{\gamma_{\max}}{\gamma_{\min}}
+ 2 \alpha_0 \zeta_4 \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = T_2L}^{t} \eta_{l}}{t}+ \frac{2\zeta_6}{1-\epsilon} ( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L}) \\
& \le 2 \epsilon^{q_{t}-T_2} \sum_{i=1}^N \pi_{LT_2+m_t}^i \mathbf{E}\left[\left\| \theta_{LT_2+m_t}^i - \langle \theta \rangle_{LT_2+m_t} \right\|_2^2\right] + C_3 \left( \alpha_0 \epsilon^{\frac{q_t-1}{2}} + \alpha_{\ceil{\frac{q_t-1}{2}}L}\right) \nonumber \\
& \;\;\; + \frac{1}{t} \bigg(C_4 \log^2\Big(\frac{t}{\alpha_0}\Big)+C_5\sum_{k = LT_2}^{t} \eta_{k} + C_6\bigg),
\end{align*}
where $C_3 - C_6$ are defined in Appendix~\ref{sec:thmSA_constant}.
This completes the proof.
$ \rule{.08in}{.08in}$
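To make the object of the theorem concrete, the following Python sketch simulates a consensus-based linear stochastic approximation of the same general form as the updates analyzed above, with the time-varying step-size $\alpha_t=\alpha_0/(t+1)$. For simplicity it uses i.i.d. observation noise instead of Markovian samples, a small synthetic network whose two weight matrices are applied alternately (so the graphs are only jointly connected), and synthetic $A$ and $b^i$; all of these choices are assumptions made only for this illustration and are not part of the analysis.
\begin{verbatim}
# Minimal simulation sketch: consensus + local linear SA with
# alpha_t = alpha0/(t+1).  Graphs, A, b^i and the i.i.d. noise are
# illustrative assumptions, not the setting of the theorem.
import numpy as np

rng = np.random.default_rng(0)
N, K, alpha0 = 4, 2, 1.0
A = np.array([[-1.0, 0.2], [0.0, -0.5]])        # Hurwitz drift matrix
b_i = [rng.normal(size=K) for _ in range(N)]    # local vectors b^i
b = np.mean(b_i, axis=0)
theta_star = -np.linalg.solve(A, b)             # A theta* + b = 0

# two doubly stochastic weight matrices used alternately (jointly connected)
W1 = np.array([[.5,.5,0,0],[.5,.5,0,0],[0,0,.5,.5],[0,0,.5,.5]])
W2 = np.array([[.5,0,.5,0],[0,.5,0,.5],[.5,0,.5,0],[0,.5,0,.5]])

Theta = rng.normal(size=(N, K))                 # row i is theta^i_0
for t in range(20000):
    W = W1 if t % 2 == 0 else W2
    alpha_t = alpha0 / (t + 1)
    noise = 0.1 * rng.normal(size=(N, K))       # i.i.d. noise (assumption)
    local = Theta @ A.T + np.array(b_i) + noise # row i: A theta^i + b^i + noise
    Theta = W @ (Theta + alpha_t * local)       # consensus after the local step
    if t in (10, 100, 1000, 19999):
        err = np.mean(np.sum((Theta - theta_star) ** 2, axis=1))
        print(t, err)                           # mean-square error decays
\end{verbatim}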
\subsection{Push-SA}\label{proofs:push}
In this subsection, we analyze the push-based distributed stochastic approximation algorithm \eqref{eq:SA_push-sum} and provide the proofs of the results in Section~\ref{sec:SA_pushsum}. We begin with the proof of asymptotic performance of \eqref{eq:SA_push-sum}.
\noindent {\bf Proof of Theorem~\ref{thm:push_meansq}:}
From Lemma~\ref{lemma:bound_consensus_time-varying_push_SA}, since $\bar \epsilon \in(0,1)$ and $\alpha_t = \frac{\alpha_0}{t}$, we have $\lim_{t\to\infty}\| \theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2 = 0$, which implies that all $\theta_{t+1}^i$, $i\in \mathcal{V}$, will reach a consensus with $ \langle \tilde \theta \rangle_t $. The update of $ \langle \tilde \theta \rangle_t $ is given in \eqref{eq:update_average_tilde_theta}, which can be treated as a single-agent linear stochastic approximation whose corresponding ODE is \eqref{eq:ode_pushsum}. In addition, from Theorem~\ref{thm:bound_time-varying_step_Push_SA} and Lemma~\ref{lemma:eta_limit_Push_SA},
we have $\lim_{t\rightarrow\infty}\sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \theta^*\|_2^2]=0$, and it follows that $\theta_{t+1}^i$ converges to $\theta^*$ in mean square for all $i\in\scr V$.
$ \rule{.08in}{.08in}$
We next analyze the finite-time performance of \eqref{eq:SA_push-sum}.
Let $\hat W_t$ be the matrix whose $ij$-th entry is $\hat w_t^{ij}$. Then, from \eqref{eq:SA_push-sum} we have \begin{align}
\theta^i_{t+1}&=\frac{\tilde \theta^i_{t+1}}{y^i_{t+1}} = \frac{\sum_{j=1}^N \hat w_t^{ij } ( \tilde \theta^j_{t} + \alpha_t A(X_t) \theta^j_{t} + \alpha_t b^j(X_t ))} {y^i_{t+1}} \nonumber \\
&= \sum_{j=1}^N \frac{\hat w_t^{ij} y_t^j}{\sum_{k=1}^N \hat w_t^{ik } y^k_{t}} \left[ \frac{\tilde \theta^j_{t}}{y_t^j} + \alpha_t A(X_t) \frac{ \theta^j_{t}}{y_t^j} + \alpha_t \frac{ b^j(X_t ) }{y_t^j}\right] \nonumber \\
&= \sum_{j=1}^N \tilde w_t^{ij} \left[ \theta^j_{t} + \alpha_t A(X_t) \frac{ \theta^j_{t}}{y_t^j} + \alpha_t \frac{ b^j(X_t ) }{y_t^j}\right], \label{eq:push-sum_ratio} \end{align} where $ \tilde w_t^{ij} = \frac{\hat w_t^{ij} y_t^j}{\sum_{k=1}^N \hat w_t^{ik } y^k_{t}}$ and $ \tilde W_t = [ \tilde w_t^{ij}]$ is a row stochastic matrix, i.e., \begin{align*}
\sum_{j=1}^N \tilde w_t^{ij} = \frac{\sum_{j=1}^N \hat w_t^{ij} y_t^j}{\sum_{k=1}^N \hat w_t^{ik } y^k_{t}} =1, \;\;\; \forall i. \end{align*}
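The reweighting just introduced, which turns the column-stochastic weights $\hat w_t^{ij}$ into row-stochastic weights $\tilde w_t^{ij}$ through the push-sum variables $y_t^i$, can be checked directly. The sketch below builds one random column-stochastic matrix $\hat W_t$ with self-loops, propagates $y_{t+1}=\hat W_t y_t$, forms $\tilde W_t$, and verifies that its rows sum to one and that the total mass $\mathbf{1}_N^\top y_t=N$ is preserved; the random weights are an illustrative assumption.
\begin{verbatim}
# One step of the push-sum reweighting: hat_W is column stochastic,
# y_next = hat_W @ y, and tilde_W[i,j] = hat_W[i,j]*y[j]/y_next[i]
# is row stochastic.  The random hat_W is an illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 5
hat_W = rng.random((N, N)) + np.eye(N)       # positive entries, self-loops
hat_W /= hat_W.sum(axis=0, keepdims=True)    # normalize columns

y = np.ones(N)                               # push-sum weights, 1^T y = N
y_next = hat_W @ y
tilde_W = hat_W * y[None, :] / y_next[:, None]

print(np.allclose(tilde_W.sum(axis=1), 1.0)) # rows sum to one: True
print(np.isclose(y_next.sum(), N))           # mass preserved:  True
\end{verbatim}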
Let $\Theta_t = [\theta_t^1, \cdots, \theta_t^N]^\top$ and $\tilde \Theta_t = [\tilde \theta_t^1, \cdots, \tilde \theta_t^N]^\top$. Then \eqref{eq:SA_push-sum} and \eqref{eq:push-sum_ratio} can be written as \begin{align}
\tilde\Theta_{t+1} &= \hat W_t \left[\tilde\Theta_{t} + \alpha_t
\left[
\begin{array}{c}
( \tilde \theta_t^1)^\top/ y_t^1 \\
\cdots \\
( \tilde \theta_t^N)^\top/ y_t^N
\end{array}
\right] A(X_t)^\top
+ \alpha_t B(X_t) \right] \\
\Theta_{t+1} &= \tilde W_t \left[\Theta_{t} + \alpha_t
\left[
\begin{array}{c}
( \theta_t^1)^\top/ y_t^1 \\
\cdots \\
( \theta_t^N)^\top / y_t^N
\end{array}
\right] A(X_t)^\top
+ \alpha_t
\left[
\begin{array}{c}
(b^1(X_t))^\top / y_t^1 \\
\cdots \\
(b^N(X_t))^\top / y_t^N
\end{array}
\right]
\right]. \label{eq:update Theta} \end{align}
Since each matrix $\tilde W_t = [\tilde w_t^{ij}]$ is stochastic, from Lemma~\ref{lemma:bound_pi_jointly}, there exists a unique absolute probability sequence $\{ \tilde \pi_t \} $ for the matrix sequence $\{ \tilde W_t \} $ such that $ \tilde \pi_t^i \ge \tilde \pi_{\min}$ for all $i\in\scr V$ and $t\ge 0$, with the constant $ \tilde \pi_{\min}\in(0,1)$.
\begin{lemma} \label{lemma:pushsum_product}
Suppose that $\{ \mathbb{G}_t \}$ is uniformly strongly connected. Then, $\Pi_{s=0}^t \hat W_s $ will converge to the set $\{v\mathbf{1}_N^\top \; :\; v\in{\rm I\!R}^N\}$ exponentially fast as $t\rightarrow\infty$. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:pushsum_product}:} The lemma is a direct consequence of Theorem~2 in \cite{hajnal}.
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:push-sum_pi_intfty}
Suppose that $\{ \mathbb{G}_t \}$ is uniformly strongly connected. Then,
$ (\Pi_{l=s}^{t} \tilde W_l)^{ij}= \frac{y_s^j}{y_{t+1}^i} (\Pi_{l=s}^{t} \hat W_l)^{ij} $ and $\frac{\tilde \pi_s^i}{y_s^i} = \frac{1}{y_s^i}\lim_{t\to\infty} (\Pi_{l=s}^{t} \tilde W_l)^{ji} = \frac{1}{N}$ for all $i,j\in\scr V$ and $s \ge 0$. \end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:push-sum_pi_intfty}:} Note that for all $l \ge 0$, we have $ \tilde w_l^{ij} = \frac{\hat w_l^{ij} y_l^j}{y^i_{l+1}}$. Let $\hat W_{s:t} = \Pi_{l=s}^t \hat W_l$ for all $t \ge s \ge 0$. We claim that $
(\Pi_{l=s}^{t} \tilde W_l)^{ij} = \frac{ y_s^j \hat w_{s:t}^{ij}}{y_{t+1}^i}, $ where $\hat w_{s:t}^{ij}$ is the $i,j$-th entry of the matrix $\hat W_{s:t}^{ij}$. The claim will be proved by induction on $t$. When $t=s+1$, \begin{align*} ( \tilde W_{s+1} \tilde W_s)^{ij} &= \sum_{k=1}^N \tilde w_{s+1}^{ik} \cdot \tilde w_s^{kj} = \sum_{k=1}^N \frac{ y_{s+1}^k \hat w_{s+1}^{ik}}{y_{s+2}^i} \frac{ y_s^j \hat w_{s}^{kj}}{y_{s+1}^k} = \frac{ y_s^j }{y_{s+2}^i} \sum_{k=1}^N \hat w_{s+1}^{ik}\hat w_{s}^{kj} = \frac{ y_s^j }{y_{s+2}^i} \hat w_{s:s+1}^{ij}. \end{align*} Thus, in this case the claim is true. Now suppose that the claim holds for all $t=\tau \ge s$, where $\tau$ is a positive integer. For $t=\tau+1$, we have \begin{align*} (\Pi_{s=1}^{\tau+1} \tilde W_s)^{ij} &= \sum_{k=1}^N \tilde w_{\tau+1}^{ik} \cdot \frac{ y_s^j \hat w_{s:\tau}^{kj}}{y_{\tau+1}^k} = \sum_{k=1}^N \frac{\hat w_{\tau+1}^{ik} y_{\tau+1}^k}{y_{\tau+2}^i} \cdot \frac{ y_s^j \hat w_{s:\tau}^{kj}}{y_{\tau+1}^k} = \frac{ y_s^j }{y_{\tau+2}^i} \sum_{k=1}^N \hat w_{\tau+1}^{ik} \cdot\hat w_{s:\tau}^{kj} = \frac{ y_s^j }{y_{\tau+2}^i} \hat w_{s:\tau+1}^{ij}, \end{align*} which establishes the claim by induction.
From Lemma~\ref{lemma:pushsum_product}, for given $s\ge 0$, we have $\lim_{t\to\infty}\hat W_{s:t} = v_{s,\infty} \mathbf{1}_N^\top $, with the understanding here that $v_{s,\infty}$ is not a constant vector. Then, since $y_{t+1} = \hat W_t y_t = \Pi_{l=s}^t \hat W_l y_s$ for all $t \ge s$, we have \begin{align*} \lim_{t\to\infty}(\Pi_{l=s}^{t} \tilde W_l)^{ij} &= \lim_{t\to\infty} \frac{ y_s^j \hat w_{s:t}^{ij}}{y_{t+1}^i} = \lim_{t\to\infty} \frac{ y_s^j \hat w_{s:t}^{ij}}{\sum_{k=1}^N \hat W_{s:t}^{ik} y_s^k} = \frac{ y_s^j \lim_{t\to\infty} \hat w_{s:t}^{ij}}{ \lim_{t\to\infty} \sum_{k=1}^N \hat W_{s:t}^{ik} y_s^k} = \frac{y_s^j v_{s,\infty}^i }{ \sum_{k=1}^N v_{s,\infty}^i y_s^k} = \frac{y_s^j}{N}, \end{align*} where we use the fact that $\mathbf{1}_N^\top y_{s} = N $ for all $s \ge 0$ in the last equality. This completes the proof.
$ \rule{.08in}{.08in}$
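Lemma~\ref{lemma:push-sum_pi_intfty} can also be checked numerically: the backward products of the matrices $\tilde W_l$ should approach the rank-one matrix $\mathbf{1}_N (y_s/N)^\top$, i.e., $\tilde\pi_s^i\approx y_s^i/N$. The sketch below generates a random sequence of column-stochastic matrices with self-loops, runs the push-sum recursion, and compares a long backward product with $y_s/N$; the random model is an illustrative assumption.
\begin{verbatim}
# Numerical check of tilde_pi_s^i ~ y_s^i / N:
# the product tilde_W_{T-1} ... tilde_W_s approaches 1 * (y_s/N)^T.
# The random column-stochastic hat_W_t with self-loops is an illustration.
import numpy as np

rng = np.random.default_rng(2)
N, s, T = 4, 3, 200
y = np.ones(N)
tilde_Ws, ys = [], []
for t in range(T):
    hat_W = rng.random((N, N)) + np.eye(N)
    hat_W /= hat_W.sum(axis=0, keepdims=True)    # column stochastic
    y_next = hat_W @ y
    tilde_Ws.append(hat_W * y[None, :] / y_next[:, None])
    ys.append(y.copy())
    y = y_next

P = np.eye(N)
for l in range(s, T):                            # left-multiply later matrices
    P = tilde_Ws[l] @ P
print(np.max(np.abs(P - np.outer(np.ones(N), ys[s] / N))))   # close to 0
\end{verbatim}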
To proceed, let \begin{align*}
h^j(\Theta_n,y_n) &= A \frac{ \theta^j_{n}}{y_n^j} + \frac{ b^j}{y_n^j}\\
M^j_n &= \left(A(X_n) - \mathbb{E}[A(X_n)|\mathcal{F}_{n-\tau(\alpha_n)}]\right) \frac{ \theta^j_{n}}{y_n^j} + \frac{ 1 }{y_n^j} \left(b^j(X_n ) - \mathbb{E}[b^j(X_n )|\mathcal{F}_{n-\tau(\alpha_n)}]\right)\\
G^j_n &= \left(\mathbb{E}[A(X_n)|\mathcal{F}_{n-\tau(\alpha_n)} ] - A\right) \frac{ \theta^j_{n}}{y_n^j} + \frac{ 1 }{y_n^j} \left( \mathbb{E}[b^j(X_n )|\mathcal{F}_{n-\tau(\alpha_n)}] - b^j\right). \end{align*} From \eqref{eq:push-sum_ratio}, we have $
\theta^i_{n+1}
= \sum_{j=1}^N \tilde w_n^{ij} \left[ \theta^j_{n} + \alpha_n h^j(\theta_n,y_n) + \alpha_n M^j_n + \alpha_n G^j_n \right]. $ Let $h = [h^1, \cdots, h^N]^\top $, $M = [M^1, \cdots, M^N]^\top $ and $G = [G^1, \cdots, G^N]^\top $. Note that \begin{align*}
\mathbb{E}[ M^j_n | \mathcal{F}_n] &= \left( \mathbb{E}[A(X_n) | \mathcal{F}_n] - \mathbb{E}[ \mathbb{E}[A(X_n)|\mathcal{F}_{n-\tau(\alpha_n)}] | \mathcal{F}_n]\right) \frac{ \theta^j_{n}}{y_n^j} \\
& \;\;\; + \frac{ 1 }{y_n^j} \left(\mathbb{E}[b^j(X_n ) | \mathcal{F}_n] - \mathbb{E}[\mathbb{E}[b^j(X_n )|\mathcal{F}_{n-\tau(\alpha_n)}]| \mathcal{F}_n]\right) = 0 \end{align*} and for all $n \ge \tau(\alpha_n)$ \begin{align*}
& \;\;\;\; \mathbb{E}[ \| M_n\|_F^2 | \mathcal{F}_n] = \sum_{j=1}^N \mathbb{E}[ \| M^j_n\|_2^2 | \mathcal{F}_n] \\
&= \sum_{j=1}^N \mathbb{E}[ \| \left(A(X_n) - \mathbb{E}[A(X_n)|\mathcal{F}_{n-\tau(\alpha_n)}]\right) \frac{ \theta^j_{n}}{y_n^j} + \frac{ 1 }{y_n^j} \left(b^j(X_n ) - \mathbb{E}[b^j(X_n )|\mathcal{F}_{n-\tau(\alpha_n)}]\right) \|_2^2 | \mathcal{F}_n]\\
&\le \sum_{j=1}^N \left( \frac{2A_{\max}+\alpha_0}{\beta}
\| \theta^j_{n} \|_2 + \frac{2b_{\max}+\alpha_0}{\beta} \right)^2
\le \frac{2(2A_{\max}+\alpha_0)^2}{\beta^2} \| \Theta_{n} \|_F^2 + \frac{2N}{\beta^2} (2b_{\max}+\alpha_0)^2,	\end{align*}
then $\{ M_n \}$ is a martingale difference sequence satisfying $ \mathbb{E}[ \| M_n\|_F^2 | \mathcal{F}_n] \le \hat C ( 1+ \| \Theta_{n} \|_F^2 )$, where $\hat C = \max\{\frac{2(2A_{\max}+\alpha_0)^2}{\beta^2}, \frac{2N}{\beta^2} (2b_{\max}+\alpha_0)^2 \}$. Define $h_c : {\rm I\!R}^{N\times K}\times {\rm I\!R}^{N} \to {\rm I\!R}^{N \times K}$ as $h_c(x, y) = h(cx, y) c^{-1}$ with some $c\ge 1$ and $\tilde h_c(z) :{\rm I\!R}^{K} \to {\rm I\!R}^{K}$ as $ \tilde h_c(z) = h_c(\mathbf{1}_N \cdot z^\top,y_n)^\top \tilde \pi_n$. By Lemma~\ref{lemma:push-sum_pi_intfty},
h_c(\Theta_n, y_n) = \left[
\begin{array}{c}
( A\frac{ \theta_n^1 }{ y_n^1} + \frac{b^1}{y_n^1 c})^\top \\
\cdots \\
(A\frac{ \theta_n^N }{ y_n^N} + \frac{b^N}{y_n^N c})^\top \\
\end{array}
\right] , \;\;\;\;\;
\tilde h_c(z) = A z + \sum_{i=1}^N \frac{b^i}{ N c}. \end{align*} Then, $ \tilde h_c(z) \to \tilde h_{\infty}(z)= Az$ as $c \to \infty$ uniformly on compact sets. Let $\phi_c(z,t)$ and $\phi_\infty(z,t)$ denote the solutions of the ODE: \begin{align}
\dot z(t) = \tilde h_{c}(z(t)), \;\;\; z(0) = z \label{eq:ODE_infty}\\
\dot z(t) = \tilde h_{\infty}(z(t))= Az(t), \;\;\; z(0) = z \nonumber 
\end{align} respectively. Furthermore, since the origin is the unique globally asymptotically stable equilibrium of the ODE $\dot z(t) = Az(t)$, we have the following lemma.
\begin{lemma} \label{lemma:bound_z}
There exist constants $c_0 > 0$ and $T > 0$ such that for all initial conditions $z$ in the ball $\{ z : \| z\|_2 \le \frac{1}{N^{1/2}} \}$ and all $c \ge c_0$, we have $\| \phi_c(z,t) \|_2 < \frac{1-\kappa}{N^{1/2}} $ for $t \in [T, T+1]$ for some $0< \kappa <1$. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_z}:} Similar to the proof of Lemma 5 in \cite{mathkar2016nonlinear}.
$ \rule{.08in}{.08in}$
Define $t_0 =0$ and $t_n = \sum_{i=0}^{n-1} \alpha_i$ for $n \ge 1$. Define $\bar \Theta(t), t\ge 0 $ as $\bar \Theta(t_n) = \Theta_{n} $ with linear interpolation on each interval $[t_n, t_{n+1}]$. In addition, let $T_0 = 0$ and $T_{n+1} = \min\{ t_m : t_m \ge T_n + T \}$ for all $n \ge 0$. Then, $T_{n+1} \in [T_n + T, T_n + T + \sup_n \alpha_n]$. Let $m(n)$ be the integer such that $T_n = t_{m(n)}$ for any $n\ge 0$. Define the piecewise continuous trajectory $\hat \Theta(t) = \bar \Theta(t) \cdot r_n^{-1}$ for $t \in [T_n, T_{n+1})$, where $r_n = \max \{ \| \bar \Theta(T_n)\|_F, 1\}$.
\begin{lemma} \label{lemma:boundedness of hat theta}
There exists a positive constant $C_{\hat \theta}< \infty$ such that $\sup_{t} \| \hat \Theta(t) \|_F < C_{\hat \theta}$.
\end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:boundedness of hat theta}:} First, we write the update of $\hat\Theta(t_k)$ for $k \in [ m(n), m(n+1) )$ \begin{align} \label{eq:update_ratio_hat}
\hat \Theta(t_{k+1}) &= \tilde W_{t_{k}} \left[\hat \Theta(t_{k}) + \alpha_{t_{k}}
\left[
\begin{array}{c}
(\hat \theta^1(t_{k}))^\top/ y_{t_{k}}^1 \\
\cdots \\
(\hat \theta^N(t_{k}))^\top/ y_{t_{k}}^N
\end{array}
\right]A(X_{t_{k}})^\top
+ \alpha_{t_{k}}
\left[
\begin{array}{c}
(b^1(X_{t_{k}}))^\top/( y_{t_{k}}^1 r_n) \\
\cdots \\
(b^N(X_{t_{k}}))^\top/( y_{t_{k}}^N r_n)
\end{array}
\right]
\right]. \end{align}
Since $\tilde W_{t_{k}}$ is a row-stochastic matrix with nonnegative entries, $\| \tilde W_{t_{k}} \|_{\infty} = 1$, and thus we have \begin{align*}
\| \hat \Theta(t_{k+1}) \|_{\infty}
&\le \| \tilde W_{t_{k}} \|_{\infty} \left( \| \hat \Theta(t_{k}) \|_{\infty} + \alpha_{t_{k}}
\left\| \left[
\begin{array}{c}
A(X_{t_{k}}) \hat \theta^1(t_{k})/ y_{t_{k}}^1 \\
\cdots \\
A(X_{t_{k}}) \hat \theta^N(t_{k})/ y_{t_{k}}^N
\end{array}
\right] \right\|_{\infty}
+ \alpha_{t_{k}} \left\| \left[
\begin{array}{c}
\frac{b^1(X_{t_{k}})}{ y_{t_{k}}^1 r_n} \\
\cdots \\
\frac{b^N(X_{t_{k}})}{ y_{t_{k}}^N r_n}
\end{array}
\right] \right\|_{\infty} \right) \\
&\le \| \hat \Theta(t_{k}) \|_{\infty} + \frac{\alpha_{t_{k}} \sqrt{K} A_{\max}}{\beta} \|
\hat \Theta(t_{k}) \|_{\infty} + \frac{\alpha_{t_{k}} \sqrt{K} b_{\max} }{\beta r_n} \\
&\le \| \hat \Theta(t_{m(n)}) \|_{\infty} + \sqrt{K} \sum_{l=m(n)}^{k} \Big( \frac{\alpha_{t_{l}} A_{\max} }{\beta} \| \hat \Theta(t_{l}) \|_{\infty} + \frac{\alpha_{t_{l}} b_{\max} }{\beta r_n} \Big)\\
&\le \sqrt{K} + \frac{( T + \sup_l \alpha_{l}) \sqrt{K} b_{\max} }{\beta} + \sum_{l=m(n)}^{k} \frac{\alpha_{t_{l}}\sqrt{K} A_{\max}}{\beta} \| \hat \Theta(t_{l}) \|_{\infty}, \end{align*}
where we use the fact that $\| \hat \Theta(t_{m(n)}) \|_F \le 1$ and $r_n \ge 1$ in the last inequality. Therefore, using the discrete-time Gr\"{o}nwall inequality, we have \begin{align*}
\sup_{m(n)\le k<m(n+1)} \| \hat \Theta(t_{k+1}) \|_{\infty}
&\le \sqrt{K} \Big(1 + \frac{( T + \sup_l \alpha_{l}) b_{\max}}{\beta}\Big) \exp\left\{ \frac{ A_{\max}\sqrt{K} }{\beta} (T+ \sup_l \alpha_l)\right\}. \end{align*}
Since $ T+ \sup_l \alpha_l < \infty $ and the bound above does not depend on $n$, we have $\sup_{n}\sup_{m(n)\le k<m(n+1)} \| \hat \Theta(t_{k+1}) \|_{\infty} < \infty$. By the equivalence of norms on finite-dimensional spaces, we further obtain that $\sup_{t} \| \hat \Theta(t) \|_F < \infty $.
$ \rule{.08in}{.08in}$
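The discrete-time Gr\"{o}nwall inequality used above can be stated as follows: if $x_{k+1}\le(1+a_k)x_k+c_k$ with $a_k,c_k\ge 0$, then $x_K\le\big(x_0+\sum_{k<K}c_k\big)\exp\big(\sum_{k<K}a_k\big)$. The short sketch below checks this bound numerically for random nonnegative sequences; the sequences and the initial value are assumptions made only for the illustration.
\begin{verbatim}
# Numerical check of the discrete Gronwall bound:
# x_{k+1} <= (1+a_k)x_k + c_k  implies  x_K <= (x_0 + sum c_k)*exp(sum a_k).
# The random sequences a_k, c_k and x_0 are illustrative assumptions.
import math, random

random.seed(0)
K = 500
a = [0.01 * random.random() for _ in range(K)]
c = [0.05 * random.random() for _ in range(K)]

x = 1.0
for k in range(K):
    x = (1 + a[k]) * x + c[k]            # recursion taken with equality
bound = (1.0 + sum(c)) * math.exp(sum(a))
print(x <= bound, x, bound)              # True: the Gronwall bound dominates
\end{verbatim}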
For $ n \ge 0 $, let $z^n(t)$ denote the trajectory of $\dot z = \tilde h_c(z)$ with $c = r_n$ and $z^n(T_n) = \sum_{i=1}^N \tilde\pi_{T_n}^i \hat \theta^i_{T_n} $, for $t \in [T_n, T_{n+1} )$.
\begin{lemma} \label{lemma:hat_to_z}
$ \lim_{n\to\infty} \sup_{t \in[T_n, T_{n+1})} \| \hat \Theta(t) - \mathbf{1}_N (z^n(t))^\top \|_F = 0 $. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:hat_to_z}:} From \eqref{eq:push-sum_ratio} and \eqref{eq:update_ratio_hat}, for any $k \in [m(n), m(n+1))$, by Lemma~\ref{lemma:push-sum_pi_intfty}, we have \begin{align*}
\sum_{i=1}^N \tilde \pi^i_{n+1} \theta^i_{n+1} &=
\Theta_{n+1}^\top \tilde \pi_{n+1}
= \left(\Theta_{n} + \alpha_n
\left[
\begin{array}{c}
(A(X_n) \theta_n^1)^\top/ y_n^1 \\
\cdots \\
(A(X_n) \theta_n^N)^\top/ y_n^N
\end{array}
\right]
+ \alpha_n
\left[
\begin{array}{c}
(b^1(X_n))^\top / y_n^1 \\
\cdots \\
(b^N(X_n))^\top / y_n^N
\end{array}
\right]
\right)^\top \tilde \pi_{n} \nonumber \\
&= \sum_{i=1}^N \tilde \pi^i_{n} \theta^i_{n} + \alpha_n \sum_{i=1}^N \tilde \pi_{n}^i (A(X_n) \theta_n^i/ y_n^i + b^i(X_n) / y_n^i) \nonumber \\
&= \sum_{i=1}^N \tilde \pi^i_{n} \theta^i_{n} + \frac{\alpha_n }{N}A(X_n)\sum_{i=1}^N \theta_n^i + \frac{\alpha_n}{N}\sum_{i=1}^N b^i(X_n). \end{align*} Similarly, we have \begin{align*}
& \;\;\;\; \sum_{i=1}^N \tilde \pi^i_{t_{k+1}} \hat \theta^i_{t_{k+1}}
= \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \alpha_t \sum_{i=1}^N \tilde \pi_{{t_{k}}}^i (A(X_{t_{k}}) \hat \theta_{t_{k}}^i/ y_{t_{k}}^i + b^i(X_{t_{k}}) / (y_{t_{k}}^i r_n)) \nonumber \\
&=\sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \alpha_{t_{k}}\left( A(X_{t_{k}}) \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{1}{N r_n} \sum_{i=1}^N b^i(X_{t_{k}}) \right) + \alpha_{t_{k}} \frac{A(X_{t_{k}})}{N} \sum_{i=1}^N \left( \hat \theta_{t_{k}}^i - \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} \right)\\
&= \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \alpha_{t_{k}}
\left( A \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{1}{N r_n} \sum_{i=1}^N b^i \right) + \alpha_{t_{k}} \frac{A(X_{t_{k}})}{N} \sum_{i=1}^N \left( \hat \theta_{t_{k}}^i - \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} \right) \nonumber\\
&\;\;\; + \alpha_{t_{k}} \left( A(X_{t_{k}}) - \mathbb{E}[A(X_{t_{k}}) | \mathcal{F}_{t_k - \tau(\alpha_{t_k})}] \right)\sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{\alpha_{t_{k}}}{N r_n} \sum_{i=1}^N \left( b^i(X_{t_{k}}) - \mathbb{E}[b^i(X_{t_{k}}) | \mathcal{F}_{t_k - \tau(\alpha_{t_k})}] \right) \nonumber \\
&\;\;\; + \alpha_{t_{k}} \left( \left( \mathbb{E}[A(X_{t_{k}}) | \mathcal{F}_{t_k - \tau(\alpha_{t_k})}] - A \right)\sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{1}{N r_n} \sum_{i=1}^N \left( \mathbb{E}[b^i(X_{t_{k}}) | \mathcal{F}_{t_k - \tau(\alpha_{t_k})}] - b^i \right) \right). \end{align*} To proceed, let \begin{align*}
\hat M_{t_k}
&= \left(A(X_t) - \mathbb{E}[A(X_{t_k})|\mathcal{F}_{{t_k}-\tau(\alpha_{t_k})}]\right) \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{ 1 }{N r_n} \sum_{i=1}^N \left(b^i(X_{t_k} ) - \mathbb{E}[b^i(X_{t_k} )|\mathcal{F}_{{t_k}-\tau(\alpha_{t_k})}]\right)\\
\hat G_{t_k} &= \left(\mathbb{E}[A(X_{t_k})|\mathcal{F}_{{t_k}-\tau(\alpha_{t_k})} ] - A\right) \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} + \frac{ 1 }{N r_n} \sum_{i=1}^N \left( \mathbb{E}[b^i(X_{t_k} )|\mathcal{F}_{{t_k}-\tau(\alpha_{t_k})}] - b^i\right) \\
& \;\;\; + \frac{A(X_{t_{k}})}{N} \sum_{i=1}^N \left( \hat \theta_{t_{k}}^i - \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} \right). \end{align*}
It is easy to verify that $\{ \hat M_{t_k} \}$ is a martingale difference sequence satisfying $ \mathbb{E}[\| \hat M_{t_k} \|_2^2 | \mathcal{F}_{t_k} ] \le \bar C (1+\| \sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} \|_2^2) $ for some $\bar C < \infty$. In addition, we have \begin{align*}
\hat\theta_{t_k}^i - \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}}
&=\sum_{j=1}^N (\tilde w_{t_s:t_k}^{ij} - \tilde \pi_{t_s}^j) \hat \theta_{t_s}^j + \sum_{r = s+1}^k \alpha_{t_r} \sum_{j=1}^N (\tilde w_{t_r:t_k}^{ij} - \tilde \pi_{t_r}^j)(A(X_{t_r}) \hat \theta_{t_r}^j/y_{t_r}^j + b^j(X_{t_r})/y_{t_r}^j). \end{align*} Since $\{ \mathbb{G}_t \}$ is uniformly strongly connected, for any $s\ge 0$, $W_{s:t}$ converges to $\mathbf{1} \pi_s^\top$ exponentially fast as $t \to \infty$, and there exist a finite positive constant $C$ and a constant $0 \le \lambda <1$ such that $
| \tilde w_{s:t}^{ij} - \tilde \pi_s^j | \le C \lambda^{t-s} $ for all $i,j \in \mathcal{V}$ and $s \ge 0$. Then, for any $k \in [m(n), m(n+1))$, we have \begin{align*}
&\;\;\;\; \| \hat\theta_{t_k}^i - \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}} \|_2 \\
&\le \sum_{j=1}^N \| \tilde w_{t_{m(n)}:t_k}^{ij} - \tilde \pi_{t_{m(n)}}^j\|_2 \| \hat \theta_{t_{m(n)}}^j \|_2 + \sum_{r = m(n)+1}^k \alpha_{t_r} \sum_{j=1}^N \| \tilde w_{t_r:t_k}^{ij} - \tilde \pi_{t_r}^j\|_2 \frac{A_{\max} \| \hat \theta_{t_r}^j\|_2 + b_{\max}}{\beta}\\
&\le \sum_{j=1}^N C \lambda^{t_k-t_{m(n)}} \| \hat \theta_{t_{m(n)}}^j \|_2 + \sum_{r = m(n)+1}^k \alpha_{t_r} \sum_{j=1}^N C \lambda^{t_k-t_r} (\frac{A_{\max} \| \hat \theta_{t_r}^j\|_2 + b_{\max}}{\beta})\\
&\le N C \lambda^{t_k-t_{m(n)}} + \frac{\alpha_{t_{m(n)}} N C}{1-\lambda} \frac{A_{\max} C_{\hat\theta} + b_{\max}}{\beta}, \end{align*}
where in the last inequality, we use the fact that for all $n\ge 0$, we have $\| \hat \Theta(t_{m(n)}) \|_F = 1$, $\alpha_{n+1} \le \alpha_n$, and the boundedness of $\| \hat \Theta_n \|_F$ from Lemma~\ref{lemma:boundedness of hat theta}. Since $\alpha_{t_k} \to 0$ as $k \to \infty$, it follows that $
\lim_{k\to\infty}\| \hat\theta_{t_k}^i - \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}} \|_2 = 0, $ which implies that $
\lim_{k\to\infty}\left\| \frac{A(X_{t_k})}{N} \sum_{i=1}^N (\hat\theta_{t_k}^i - \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}} ) \right\|_2 = 0. $ Then, \begin{align*}
\lim_{k\to\infty} \| \hat G_{t_k} \|_2
&\le \lim_{k\to\infty} \alpha_{t_k} ( \| \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}} \|_2 + 1)
+ \lim_{k\to\infty}\left\| \frac{A(X_{t_k})}{N} \sum_{i=1}^N (\hat\theta_{t_k}^i - \sum_{j=1}^N \tilde \pi^j_{t_{k}} \hat \theta^j_{t_{k}} ) \right\|_2 = 0. \end{align*} Therefore, by Corollary~8 and Theorem~9 in Chapter~6 of \cite{borkar2008stochastic}, we obtain that $\sum_{i=1}^N \tilde \pi^i_{t_{k}} \hat \theta^i_{t_{k}} \to z^n(t)$ as $n \to \infty$, namely $k \to \infty$. Furthermore, we obtain that $ \hat \theta_{t_{k+1}}^i \to z^n(t)$ as $n \to \infty$ for all $i \in \mathcal{V}$, which concludes the proof following Theorem~2 in Chapter~2 of \cite{borkar2008stochastic}.
$ \rule{.08in}{.08in}$
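
The exponential rank-one convergence of the backward weight products invoked in the proof above can be checked numerically. The following Python sketch is purely illustrative and not part of the analysis: the randomly generated row-stochastic weight sequence, the network size, and the horizon are assumptions, and every individual graph is taken to be strongly connected (a stronger requirement than the uniform strong connectivity assumed here). It tracks the column-wise spread of the backward product $W_t\cdots W_s$, which dominates $\max_{i,j}|\tilde w_{s:t}^{ij}-\tilde \pi_s^j|$ and decays geometrically in $t-s$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 6  # number of agents (illustrative)

def random_row_stochastic(n):
    # self-loop + directed ring + one random edge per row with positive
    # weights, so each individual graph is strongly connected (an
    # assumption made only to keep the sketch short)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = {i, (i + 1) % n, int(rng.integers(0, n))}
        w = rng.random(len(nbrs)) + 0.1
        W[i, list(nbrs)] = w / w.sum()
    return W

P = np.eye(N)                        # backward product W_t ... W_s
for t in range(1, 61):
    P = random_row_stochastic(N) @ P
    if t % 10 == 0:
        spread = np.max(P.max(axis=0) - P.min(axis=0))
        print(f"t - s = {t:2d}, max column spread = {spread:.3e}")
# The spread decays geometrically, i.e. the rows of the product approach a
# common vector pi_s^T, which is the |w_{s:t}^{ij} - pi_s^j| <= C lambda^{t-s}
# estimate used above.
\end{verbatim}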
\begin{lemma} \label{lemma:bound_theta}
The sequence $\{ \Theta_n \}$ generated by \eqref{eq:update Theta} is bounded almost surely, i.e., $ C_\theta = \sup_{n} \| \Theta_n \|_F<\infty$ almost surely. \end{lemma}
\noindent {\bf Proof of Lemma~\ref{lemma:bound_theta}:}
We first show that $\sup_n \| \bar \Theta(T_n) \|_F < \infty$. If this does not hold, there exists a subsequence $T_{n_1}, T_{n_2}, \cdots $ such that $\| \bar \Theta(T_{n_k}) \|_F \to \infty $, i.e., $r_{n_k} \to \infty$. If $r_n > c_0$ and $\| \hat \Theta(T_n) \|_F = 1$, then $\| z^n(T_n) \|_2 = \| \sum_{i=1}^N \tilde \pi^i_{T_n} \hat \theta^i_{T_n} \|_2 \le N^{-1/2}$. Using Lemma~\ref{lemma:bound_z}, we have $\| \mathbf{1}_N \cdot (z^n(T_{n+1}^-))^\top \|_F = N^{1/2} \| z^n(T_{n+1}^-) \|_2 \le 1 - \kappa $. In addition, using Lemma~\ref{lemma:hat_to_z}, for $n$ sufficiently large there exists a constant $0 < \kappa' < \kappa$ such that $\| \hat \Theta(T_{n+1}^-) \|_F < 1 - \kappa'$. Hence for $r_n > c_0$ and $n$ sufficiently large, \begin{align*}
\frac{\| \bar \Theta(T_{n+1})\|_F}{\| \bar \Theta(T_{n})\|_F} = \frac{\| \hat \Theta(T_{n+1}^-)\|_F}{\| \hat \Theta(T_{n})\|_F} \le 1-\kappa'. \end{align*}
This shows that if $\| \bar \Theta(T_{n})\|_F > c_0$, then $\| \bar \Theta(T_{k})\|_F$, $k \ge n$, falls back into the ball of radius $c_0$ at an exponential rate.
Thus, if $\| \bar \Theta(T_{n})\|_F > c_0$, then $\| \bar \Theta(T_{n-1})\|_F$ is either greater than $\| \bar \Theta(T_{n})\|_F $ or lies inside the ball of radius $c_0$. Since we assume $r_{n_k} \to \infty$, we can find a time $T_n$ such that $\| \bar \Theta(T_{n})\|_F < c_0$ while $\| \bar \Theta(T_{n+1})\|_F$ is arbitrarily large. However, using the discrete-time Gr\"{o}nwall inequality, we have \begin{align*}
\| \bar \Theta(T_{n+1})\|_\infty
&\le \| \bar \Theta(T_{n+1}-1)\|_\infty + \alpha_{T_{n+1}-1}\frac{\sqrt{K} A_{\max} }{\beta} \| \bar \Theta(T_{n+1}-1)\|_\infty + \alpha_{T_{n+1}-1}\sqrt{K} \frac{b_{\max} }{\beta} \\
&\le \| \bar \Theta(T_{n})\|_\infty + \sqrt{K} \sum_{s=0}^{T_{n+1} - T_n} \left( \alpha_{T_{n}+s} \frac{ A_{\max} }{\beta} \| \bar \Theta(T_{n}+s)\|_\infty + \alpha_{T_{n}+s}\frac{b_{\max} }{\beta} \right) \\
&\le \sqrt{K}c_0 + \sqrt{K}(T+\sup_n \alpha_n)\frac{b_{\max} }{\beta} + \frac{\sqrt{K} A_{\max} }{\beta} \sum_{s=0}^{T_{n+1} - T_n} \alpha_{T_{n}+s} \| \bar \Theta(T_{n}+s)\|_\infty \\
&\le \sqrt{K}(c_0 + (T+\sup_n \alpha_n)\frac{b_{\max} }{\beta}) \exp\left\{(T+\sup_n \alpha_n)\frac{\sqrt{K} A_{\max} }{\beta}\right\}, \end{align*}
which implies that $\| \bar \Theta(T_{n+1})\|_F$ can be bounded if $\| \bar \Theta(T_{n})\|_F < c_0$. This leads to a contradiction.
Moreover, let $C_{\bar \theta} = \sup_n \| \bar \Theta(T_n) \|_F < \infty$, then $C_{\theta} = \sup_n \| \Theta_n \|_F \le C_{\bar \theta} C_{\hat \theta} < \infty$.
$ \rule{.08in}{.08in}$
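
The discrete-time Gr\"{o}nwall step used above admits a quick numerical sanity check. The Python sketch below is illustrative only: the constants $c_1$, $c_2$, the initial value, and the step sizes are arbitrary stand-ins for $\sqrt{K}A_{\max}/\beta$, $b_{\max}/\beta$, $c_0$, and $\{\alpha_{T_n+s}\}$, and the bound being evaluated has the same product/exponential form as the display above.
\begin{verbatim}
import numpy as np

c1, c2 = 1.3, 0.7                     # illustrative constants
x0 = 0.5                              # plays the role of ||bar Theta(T_n)||
a = 0.05 / (np.arange(40) + 1.0)      # step sizes over one interval

x = x0
for ak in a:                          # x_{k+1} = x_k + a_k (c1 x_k + c2)
    x = x + ak * (c1 * x + c2)

bound = (x0 + c2 * a.sum()) * np.exp(c1 * a.sum())
print(f"iterated value : {x:.6f}")
print(f"Gronwall bound : {bound:.6f}  (bound >= value: {bound >= x})")
\end{verbatim}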
From \eqref{eq:SA_push-sum},
by using the definition of $\langle \tilde \theta \rangle_t = \frac{1}{N} \sum_{i=1}^N \tilde \theta_t^i$ and $\langle \theta \rangle_t = \frac{1}{N} \sum_{i=1}^N \theta_t^i$, we have \begin{align} \label{eq:update_average_tilde_theta}
\langle \tilde \theta \rangle_{t+1}
&= \langle \tilde \theta \rangle_t + \alpha_t A(X_t) \langle \theta \rangle_t + \frac{\alpha_t}{N}\sum_{i=1}^N b^i(X_t) \nonumber \\
&= \langle \tilde \theta\rangle_t + \alpha_t A(X_t) \langle \tilde \theta \rangle_t + \frac{\alpha_t}{N}\sum_{i=1}^N b^i(X_t) + \alpha_t \rho_t, \end{align}
where $ \rho_t = A(X_t) \langle \theta \rangle_t - A(X_t) \langle \tilde \theta \rangle_t$. From Lemma~\ref{lemma:bound_theta}, we have $\| \langle \theta \rangle_t \|_2 \le \max_{i\in\mathcal{V}} \| \theta_t^i \|_2 \le C_\theta$ for all $t \ge 0$, which implies that $\| \langle \tilde \theta \rangle_t \|_2 \le N C_\theta$ and $
\mu_t = \| \rho_t \|_2 = \left\| A(X_t) \langle \theta \rangle_t - A(X_t) \langle \tilde \theta \rangle_t \right\|_2 \le \mu_{\max}, $ where $\mu_{\max} = (N+1) A_{\max} C_\theta$.
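
As a purely illustrative aside, the perturbed averaged recursion \eqref{eq:update_average_tilde_theta} can be visualized with a small simulation. The Python sketch below is not the algorithm analyzed in this paper: it uses fixed doubly stochastic weights (so that the push-sum ratio is trivial and $\tilde\pi^i_t \equiv 1/N$), i.i.d. noise in place of Markovian samples, and an arbitrary Hurwitz matrix $A$ with local offsets $b^i$; it is only meant to show the network average tracking the centralized recursion and all local iterates approaching $\theta^* = -A^{-1}b$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, d = 5, 3                               # agents and dimension (illustrative)

M = rng.standard_normal((d, d))
A = -(np.eye(d) + 0.3 * M @ M.T)          # negative definite => Hurwitz
b = rng.standard_normal((N, d))           # local offsets b^i
theta_star = -np.linalg.solve(A, b.mean(axis=0))

W = np.zeros((N, N))                      # symmetric ring => doubly stochastic
for i in range(N):
    W[i, [i, (i - 1) % N, (i + 1) % N]] = [0.5, 0.25, 0.25]

theta = rng.standard_normal((N, d))
for t in range(20000):
    alpha = 1.0 / (t + 100)
    A_t = A + 0.2 * rng.standard_normal((d, d))   # i.i.d. proxy for A(X_t)
    b_t = b + 0.2 * rng.standard_normal((N, d))   # i.i.d. proxy for b^i(X_t)
    theta = W @ (theta + alpha * (theta @ A_t.T + b_t))

print("max_i ||theta^i - theta*||  =",
      np.linalg.norm(theta - theta_star, axis=1).max())
print("max_i ||theta^i - average|| =",
      np.linalg.norm(theta - theta.mean(axis=0), axis=1).max())
\end{verbatim}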
\begin{lemma} \label{lemma:bound_consensus_time-varying_push_SA}
Suppose that Assumptions~\ref{assum:A and b} and \ref{assum:step-size} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$.
Let $\epsilon_1 = \inf_{t\ge 0} \min_{i\in\mathcal{V}} (\hat W_t \cdots \hat W_0 \mathbf{1}_N)^i $.
For all $t \ge 0$ and $i \in \mathcal{V}$,
\begin{align*}
\| \theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2
&\le\frac{8}{\epsilon_1} \bar\epsilon^t \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2 \\
& \;\;\; + \frac{8}{\epsilon_1} \frac{ A_{\max} C_\theta + b_{\max}}{1-\bar\epsilon} \left( \alpha_0 \bar\epsilon^{t/2} + \alpha_{\ceil{\frac{t}{2}}} \right) + \alpha_t A_{\max} C_\theta + \alpha_t b_{\max}, \end{align*}
where $\epsilon_1 > 0$ and $\bar\epsilon \in (0,1)$ satisfy $\epsilon_1 \ge \frac{1}{N^{NL}}$ and $\bar\epsilon \le (1-\frac{1}{N^{NL}})^{1/L}$. \end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:bound_consensus_time-varying_push_SA}:} Since $\epsilon_1 = \inf_{t\ge 0} \min_{i\in\mathcal{V}} (\hat W_t \cdots \hat W_0 \mathbf{1}_N)^i$ and all weight matrices $\hat W_s$ are column stochastic matrices for all $s\ge 0$, from Corollary~2~(b) in \cite{nedic}, we know that $\epsilon_1 \ge \frac{1}{N^{NL}}$. If the weight matrices are doubly stochastic matrices, then $ \epsilon_1 = 1$.
From Assumption~\ref{assum:A and b} and Lemma~\ref{lemma:bound_theta}, we know that $\|A(X_t) \theta_t^i + b^i(X_t)\|_2 \le A_{\max} C_\theta + b_{\max}$. Then, by Lemma~1 in \cite{nedic}, for all $t \ge 0$ and $i \in \mathcal{V}$ we have \begin{align*}
& \;\;\;\; \| \theta_{t+1}^i - \langle \tilde \theta \rangle_t - \alpha_t A(X_t) \langle \theta \rangle_t - \frac{\alpha_t}{N} \sum_{i=1}^N b^i(X_t)\|_2 \\
&\le\frac{8}{\epsilon_1} ( \bar\epsilon^t \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2 + \sum_{s=0}^t \bar\epsilon^{t-s} \alpha_s (A_{\max} C_\theta + b_{\max})) \\
&\le\frac{8}{\epsilon_1} \bar\epsilon^t \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2 + \frac{8}{\epsilon_1} (A_{\max} C_\theta + b_{\max})\left(\sum_{s=0}^{\floor{\frac{t}{2}}} \bar\epsilon^{t-s} \alpha_s + \sum_{s=\ceil{\frac{t}{2}}}^{t} \bar\epsilon^{t-s} \alpha_s \right) \\
&\le\frac{8}{\epsilon_1} \bar\epsilon^t \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2 + \frac{8}{\epsilon_1} \frac{ A_{\max} C_\theta + b_{\max}}{1-\bar\epsilon} \left( \alpha_0 \bar\epsilon^{t/2} + \alpha_{\ceil{\frac{t}{2}}} \right), \end{align*} which implies that \begin{align*}
\| \theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2
&\le \| \theta_{t+1}^i - \langle \tilde \theta \rangle_t - \alpha_t A(X_t) \langle \theta \rangle_t - \frac{\alpha_t}{N} \sum_{i=1}^N b^i(X_t)\|_2 + \alpha_t \| A(X_t) \langle \theta \rangle_t + \frac{1}{N} \sum_{i=1}^N b^i(X_t) \|_2 \\
&\le\frac{8}{\epsilon_1} \bar\epsilon^t \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2 + \frac{8}{\epsilon_1} \frac{ A_{\max} C_\theta + b_{\max}}{1-\bar\epsilon} \left( \alpha_0 \bar\epsilon^{t/2} + \alpha_{\ceil{\frac{t}{2}}} \right) \\
& \;\;\; + \alpha_t A_{\max} C_\theta + \alpha_t b_{\max} . \end{align*}
This completes the proof.
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:eta_limit_Push_SA}
$\lim_{t \to \infty} \mu_t = \lim_{t \to \infty} \| \rho_t \|_2 =0$ and $\lim_{t \to \infty} \frac{\sum_{k=0}^t \mu_k}{t+1} = \lim_{t \to \infty} \frac{\sum_{k=0}^t \| \rho_k \|_2}{t+1} = 0.$ \end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:eta_limit_Push_SA}:} From Lemma~\ref{lemma:bound_consensus_time-varying_push_SA}, we have \begin{align*}
\mu_t & = \| \rho_t \|_2
= \left\| A(X_t) \langle \theta \rangle_t - A(X_t) \langle \tilde \theta \rangle_t \right\|_2 \\
&\le \frac{8A_{\max}}{\epsilon_1} \bar\epsilon^t \| \tilde \Theta_0 \|_1 + \frac{8A_{\max}}{\epsilon_1} \frac{N \sqrt{K} (A_{\max} C_\theta + b_{\max})}{1-\bar\epsilon} \left( \alpha_0 \bar\epsilon^{t/2} + \alpha_{\ceil{\frac{t}{2}}} \right). \end{align*}
Since $\bar\epsilon \in (0,1)$ and $\alpha_t \to 0$, we have $\lim_{t \to \infty} \| \rho_t \|_2 =0$.
Next, we will prove that $\lim_{t \to \infty} \frac{1}{t+1} \sum_{k=0}^t \| \rho_k \|_2 = 0.$ For any positive constant $ c > 0$, there exists a positive integer $ T(c)$, depending on $c$, such that $ \forall t \ge T(c) $, we have $\| \rho_t \|_2 < c$. Thus, \begin{align*}
\frac{1}{t} \sum_{k=0}^{t-1} \| \rho_k \|_2
= \frac{1}{t}\sum_{k=0}^{T(c)} \| \rho_k \|_2 + \frac{1}{t}\sum_{k=T(c)+1}^{t-1} \| \rho_k \|_2
\le \frac{1}{t}\sum_{k=0}^{T(c)} \| \rho_k \|_2+ \frac{t-1-T(c)}{t} c. \end{align*} Let $t \to \infty$ on both sides of the above inequality. Then, we have \begin{align*}
\lim_{t \to \infty}\frac{1}{t} \sum_{k=0}^{t-1} \| \rho_k \|_2
& \le \lim_{t \to \infty} \frac{1}{t}\sum_{k=0}^{T(c)} \| \rho_k \|_2 + \lim_{t \to \infty} \frac{t-1-T(c)}{t} c = c. \end{align*}
Since the above argument holds for arbitrary positive $c$, we conclude that $\lim_{t \to \infty} \frac{1}{t+1} \sum_{k=0}^t \| \rho_k \|_2 = 0.$
$ \rule{.08in}{.08in}$
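
The Ces\`{a}ro-type averaging argument above can be illustrated with a one-line experiment. In the sketch below the sequence $\mu_t$ is synthetic, chosen to mimic the geometric-plus-step-size form of the bound in the proof; all constants are arbitrary.
\begin{verbatim}
import numpy as np

t = np.arange(1, 200001)
mu = 0.9 ** t + 1.0 / np.ceil(t / 2)        # synthetic mu_t -> 0
running_avg = np.cumsum(mu) / t
for T in (10**2, 10**3, 10**4, 2 * 10**5):
    print(f"t = {T:6d}:  mu_t = {mu[T-1]:.2e},  average = {running_avg[T-1]:.2e}")
# both mu_t and its running average tend to zero, as claimed in the lemma
\end{verbatim}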
\begin{lemma} \label{lemma:timevarying_single_3_Push_SA}
Suppose that Assumptions~\ref{assum:A and b} and \ref{assum:mixing-time} hold. When the step-size $\alpha_t$ and corresponding mixing time $\tau(\alpha_t)$ satisfy
$
0< \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) < \frac{\log2}{A_{\max}} $,
we have for any $t \ge \bar T$, \begin{align}
\|\langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2
& \le 2 A_{\max} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + 2 (b_{\max}+ \mu_{\max} ) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k}, \label{eq:timevarying_single_3_Push_SA_1}\\
\| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2
& \le 6 A_{\max} \| \langle \tilde \theta \rangle_{t} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k
+ 5 ( b_{\max} + \mu_{\max}) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k}, \label{eq:timevarying_single_3_Push_SA_2}\\
\| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2^2
& \le 72 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) A_{\max}^2 \| \langle \tilde \theta \rangle_{t} \|_2^2 + 50 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) ( b_{\max} + \mu_{\max})^2 \nonumber \\
& \le 8 \| \langle \tilde \theta \rangle_{t} \|_2^2 + \frac{ 6 (b_{\max} + \mu_{\max})^2}{A_{\max}^2}. \label{eq:timevarying_single_3_Push_SA_3} \end{align}
\end{lemma} \noindent {\bf Proof of Lemma~\ref{lemma:timevarying_single_3_Push_SA}:} From \eqref{eq:update_average_tilde_theta}, $
\| \langle \tilde \theta \rangle_{t+1} \|_2
\le (1+\alpha_{t} A_{\max}) \|\langle \tilde \theta \rangle_t \|_2 + \alpha_{t} b_{\max} + \alpha_{t} \mu_{\max}. $ In addition, for all $u \in [t-\tau(\alpha_{t}), t]$, we have \begin{align*}
\| \langle \tilde \theta \rangle_{u} \|_2
& \le \Pi_{k = t-\tau(\alpha_{t})}^{u-1} (1+\alpha_{k} A_{\max})\| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + (b_{\max} + \mu_{\max} ) \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \Pi_{l=k+1}^{u-1} (1+\alpha_{l} A_{\max}) \\
& \le \exp\{ \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_{k} A_{\max}\} \|\langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + (b_{\max} + \mu_{\max} ) \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \exp\{ \sum_{l=k+1}^{u-1} \alpha_{l} A_{\max}\} \\
& \le \exp\{ \alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max}\} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + (b_{\max} + \mu_{\max} ) \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k \exp\{ \alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max}\} \\
& \le 2 \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + 2 (b_{\max} + \mu_{\max} ) \sum_{k = t-\tau(\alpha_{t})}^{u-1} \alpha_k, \end{align*}
where we use $\alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max} \le \log2 < \frac{1}{3}$ in the last inequality. Thus, for all $t\ge \bar T$, we have $\| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2 \le \sum_{k=t-\tau(\alpha_{t})}^{t-1} \| \langle \tilde \theta \rangle_{k+1} - \langle \tilde \theta \rangle_{k} \|_2$. Then, we can get \eqref{eq:timevarying_single_3_Push_SA_1} as follows: \begin{align*}
&\;\;\;\; \| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2
\le A_{\max} \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \| \langle \tilde \theta \rangle_{k} \|_2 + (b_{\max}+ \mu_{\max} ) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k \left [ A_{\max}\left( 2 \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 + 2 (b_{\max}+ \mu_{\max} ) \sum_{l=t-\tau(\alpha_{t})}^{k-1} \alpha_l \right) + (b_{\max}+ \mu_{\max} ) \right]\\
& \le 2 A_{\max} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \left( 2 A_{\max} \tau(\alpha_{t}) \alpha_{t-\tau(\alpha_{t})} + 1 \right) (b_{\max}+ \mu_{\max} ) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \frac{5}{3} (b_{\max}+ \mu_{\max} ) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + 2 (b_{\max}+ \mu_{\max} ) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} . \end{align*} Moreover, using the above inequality, we can get \eqref{eq:timevarying_single_3_Push_SA_2} for all $t\ge \bar T$ as follows: \begin{align*}
& \;\;\;\; \| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2
\le 2 A_{\max} \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k + \frac{5}{3} ( b_{\max} + \mu_{\max}) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 2 A_{\max} \tau(\alpha_{t}) \alpha_{t-\tau(\alpha_{t})}
\| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2
+ \left[ 2 A_{\max} \| \langle \tilde \theta \rangle_{t} \|_2 + \frac{5}{3} ( b_{\max} + \mu_{\max}) \right] \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k} \\
& \le 6 A_{\max} \| \langle \tilde \theta \rangle_{t} \|_2 \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k
+ 5 ( b_{\max} + \mu_{\max}) \sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k}. \end{align*} Next, using \eqref{eq:timevarying_single_3_Push_SA_2} and the inequality $(x+y)^2 \le 2x^2 + 2y^2$ for all $x, y$, we can get \eqref{eq:timevarying_single_3_Push_SA_3} as follows: \begin{align*}
\| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t - \tau(\alpha_{t})} \|_2^2
& \le 72 A_{\max}^2 \| \langle \tilde \theta \rangle_{t} \|_2^2 (\sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_k )^2
+ 50 ( b_{\max} + \mu_{\max})^2 (\sum_{k=t-\tau(\alpha_{t})}^{t-1} \alpha_{k})^2 \\
& \le 72 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) A_{\max}^2 \| \langle \tilde \theta \rangle_{t} \|_2^2 + 50 \alpha_{t-\tau(\alpha_{t})}^2 \tau^2(\alpha_t) ( b_{\max} + \mu_{\max})^2 \\
& \le 8 \| \langle \tilde \theta \rangle_{t} \|_2^2 + \frac{ 6 (b_{\max} + \mu_{\max})^2}{A_{\max}^2}, \end{align*} where we use $\alpha_{t-\tau(\alpha_{t})} \tau(\alpha_t) A_{\max} < \frac{1}{3}$ in the last inequality.
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:bound_timevarying_Ab_push_SA}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:step-size} hold and $\{ \mathbb{G}_t \}$ is uniformly strongly connected by sub-sequences of length $L$. When
$
0< \alpha_{t - \tau(\alpha_t) } \tau(\alpha_t) < \frac{ \log2}{A_{\max} } $,
we have for any $t \ge \bar T$, \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \tilde \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} ] \nonumber \\
&\;\;\; + \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 4 \|\theta^* \|_2^2 + 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\; + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 87 (b_{\max}+ \mu_{\max})^2 \bigg]. \end{align*} \end{lemma}
{\bf Proof of Lemma~\ref{lemma:bound_timevarying_Ab_push_SA}:} Note that for all $t\ge \bar T$, we have \begin{align}
& \;\;\;\; |\mathbf{E}[( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N} B(X_t)^\top\mathbf{1}_N - A\langle \tilde \theta \rangle_t - b)\; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_Push_SA1} \\
& \;\;\; + |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_Push_SA2} \\
& \;\;\; + |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_Push_SA3}\\
& \;\;\; + |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_Push_SA4}\\
&\;\;\; + |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)(\frac{1}{N} B(X_t)^\top\mathbf{1}_N - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \label{eq:timevarying_bound_Ab_Push_SA5}\\
&\;\;\; + |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)(\frac{1}{N} B(X_t)^\top\mathbf{1}_N - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\label{eq:timevarying_bound_Ab_Push_SA6}. \end{align} Using the mixing time in Assumption~\ref{assum:mixing-time}, we can get the bound for \eqref{eq:timevarying_bound_Ab_Push_SA1} and \eqref{eq:timevarying_bound_Ab_Push_SA6} for all $t\ge \bar T$: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le |( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top) \mathbf{E}[A(X_t) - A \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} | \nonumber\\
& \le 2 \alpha_{t} \gamma_{\max} \mathbf{E}[\| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
& \le \alpha_{t} \gamma_{\max} \mathbf{E}[\| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2^2 + \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le \alpha_{t} \gamma_{\max} \mathbf{E}[ 2 \|\theta^* \|_2^2 + 3 \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le 6 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2 \nonumber \\
& \le 54 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 36 \alpha_{t} \gamma_{\max} \frac{(b_{\max} + \mu_{\max})^2 }{A_{\max}^2} + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2, \label{eq:timevarying_bound_Ab_Push_SA1_bounded} \end{align} where in the last inequality, we use \eqref{eq:timevarying_single_3_Push_SA_1} from Lemma~\ref{lemma:timevarying_single_3_Push_SA}. \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)(\frac{1}{N}B(X_t)^\top\mathbf{1}_N - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le | ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top) \frac{1}{N} \sum_{i=1}^N \mathbf{E}[ b^i(X_t) - b^i \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] |\nonumber\\
& \le 2 \gamma_{\max} \alpha_{t} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \le 2 \gamma_{\max} \alpha_{t} \left( 1 + \frac{1}{2} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + \frac{1}{2} \| \theta^* \|_2^2 \right) \nonumber\\
& \le 2 \gamma_{\max} \alpha_{t} \left( 1 + 9 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 \frac{(b_{\max} +\mu_{\max})^2}{A_{\max}^2}+ \| \theta^* \|_2^2 \right),
\label{eq:timevarying_bound_Ab_Push_SA6_bounded} \end{align} where in the last inequality we use \eqref{eq:timevarying_single_3_Push_SA_1}. Next, using Assumption~\ref{assum:A and b}, \eqref{eq:timevarying_single_3_Push_SA_1} and \eqref{eq:timevarying_single_3_Push_SA_3}, we have \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* )^\top (P+P^\top)( A(X_t) - A) ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
&\le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} - \theta^* \|_2 \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
&\le 4 \gamma_{\max} A_{\max} \left[ \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 + \| \theta^* \|_2 \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \right] \nonumber\\
&\le \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \gamma_{\max} \Big[ 8 A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 8 A_{\max} (b_{\max} + \mu_{\max})\| \theta^* \|_2 \nonumber\\
& \;\;\; + 8 A_{\max}^2 \left(\frac{ b_{\max} + \mu_{\max}}{A_{\max}} + \| \theta^* \|_2 \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \Big] \nonumber\\
&\le \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \gamma_{\max} \left[ 12 A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 8 A_{\max}^2 \left(\frac{ b_{\max} + \mu_{\max}}{A_{\max}} + \| \theta^* \|_2 \right)^2 \right]
\nonumber\\
&\le \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \gamma_{\max} \Bigg[ 24 A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 24 A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]
\nonumber\\
& \;\;\; + 8 A_{\max}^2 \left(\frac{ b_{\max} + \mu_{\max} }{A_{\max}} + \| \theta^* \|_2 \right)^2 \Bigg]\nonumber\\
&\le \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \left[
216 \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 152 \gamma_{\max} \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \right].
\label{eq:timevarying_bound_Ab_Push_SA2_bounded} \end{align}
In addition, for the bound on \eqref{eq:timevarying_bound_Ab_Push_SA3}, using \eqref{eq:timevarying_single_3_Push_SA_1} and \eqref{eq:timevarying_single_3_Push_SA_3}, we have \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_t -\langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]|\nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]\nonumber\\
& \le 8 \gamma_{\max} A_{\max} \mathbf{E}[ A_{\max } \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 + (b_{\max} + \mu_{\max} ) \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k
\left[ (2 A_{\max }+ b_{\max} + \mu_{\max} ) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + b_{\max} + \mu_{\max}\right] \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \Big[
2 (2 A_{\max }+ b_{\max} + \mu_{\max}) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} -\langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + b_{\max} + \mu_{\max} \nonumber\\
&\;\;\; + 2 (2 A_{\max }+ b_{\max} + \mu_{\max}) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \Big] \nonumber\\
& \le 72 \ \gamma_{\max} A_{\max} (2 A_{\max } + b_{\max} + \mu_{\max}) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber\\
& \;\;\;+ 48 \gamma_{\max} A_{\max} (b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\label{eq:timevarying_bound_Ab_Push_SA3_bounded} \end{align} Moreover, using \eqref{eq:timevarying_single_3_Push_SA_3}, we can get the bound for \eqref{eq:timevarying_bound_Ab_Push_SA4} as follows: \begin{align}
& \;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)( A(X_t) - A) ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber\\
& \le 4 \gamma_{\max} A_{\max} \mathbf{E}[ 72 A_{\max}^2 \| \langle \tilde \theta \rangle_{t} \|_2^2 + 50 (b_{\max}+\mu_{\max})^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \left( \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \right)^2 \nonumber\\
& \le 96 A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k + 67 (b_{\max}+\mu_{\max})^2 \gamma_{\max} \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k.
\label{eq:timevarying_bound_Ab_Push_SA4_bounded} \end{align} Finally, we can get the bound of \eqref{eq:timevarying_bound_Ab_Push_SA5} using \eqref{eq:timevarying_single_3_Push_SA_2}: \begin{align}
&\;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})} )^\top (P+P^\top)(\frac{1}{N} B(X_t)^\top\mathbf{1}_N - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_t - \langle \tilde \theta \rangle_{t-\tau(\alpha_{t})}\|_2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber \\
& \le 4 \gamma_{\max} b_{\max} \mathbf{E}[ 6 A_{\max} \| \langle \tilde \theta \rangle_{t} \|_2 + 5 ( b_{\max} + \mu_{\max} ) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \nonumber \\
& \le \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \gamma_{\max}b_{\max} \left( 12 A_{\max} \mathbf{E}[\| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 12 A_{\max} + 20 b_{\max} + 20 \mu_{\max} \right).
\label{eq:timevarying_bound_Ab_Push_SA5_bounded} \end{align} Then, using \eqref{eq:timevarying_bound_Ab_Push_SA1_bounded}--\eqref{eq:timevarying_bound_Ab_Push_SA5_bounded}, we have \begin{align*}
& \;\;\;\; |\mathbf{E}[ ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + B(X_t)^\top\pi_{t+1} - A \langle \tilde \theta \rangle_t - b) \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ]| \nonumber \\
& \le 54 \alpha_{t} \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 36 \alpha_{t} \gamma_{\max} \frac{(b_{\max} + \mu_{\max})^2 }{A_{\max}^2} + 2 \alpha_{t} \gamma_{\max} \|\theta^* \|_2^2 \\
& \;\;\; + 2 \gamma_{\max} \alpha_{t} \left( 1 + 9 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 6 \frac{(b_{\max} +\mu_{\max})^2}{A_{\max}^2}+ \| \theta^* \|_2^2 \right) \\
& \;\;\; + \sum_{k=t-\tau(\alpha_t)}^{t-1} \alpha_k \Big[ 216 \gamma_{\max} A_{\max}^2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t}\|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 152 \gamma_{\max} \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
& \;\;\; + 72 \ \gamma_{\max} A_{\max} (2 A_{\max } + b_{\max} + \mu_{\max}) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + 96 A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] \nonumber\\
& \;\;\; + 48 \gamma_{\max} A_{\max} (b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 67 (b_{\max}+\mu_{\max})^2 \gamma_{\max} \\
& \;\;\; + 12 \gamma_{\max} A_{\max} b_{\max} \mathbf{E}[\| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_{t})} ] + ( 12 A_{\max} + 20 b_{\max} + 20 \mu_{\max} )\gamma_{\max} b_{\max} \Big] \\%,
& \le \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 \; | \; \mathcal{F}_{t-\tau(\alpha_t)} ] \nonumber \\
&\;\;\; + \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 4 \|\theta^* \|_2^2 + 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\; + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 87 (b_{\max}+ \mu_{\max})^2 \bigg], \end{align*} where we use $\alpha_t \le \alpha_{t-\tau(\alpha_t)}$ from Assumption~\ref{assum:step-size} and $\tau(\alpha_t) \ge 1$ in the last inequality. This completes the proof.
$ \rule{.08in}{.08in}$
\begin{lemma} \label{lemma:bound_average_time-varying_push_SA}
Suppose that Assumptions~\ref{assum:A and b}--\ref{assum:lyapunov} hold and $\alpha_t = \frac{\alpha_0}{t+1}$. When $ \mu_{t} + \tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \zeta_8 \le \frac{0.1}{\gamma_{\max}}$ and $\tau(\alpha_t) \alpha_{t-\tau(\alpha_t)} \le \min \{ \frac{ \log2}{A_{\max}},\; \frac{0.1}{\zeta_8 \gamma_{\max}} \}$, we have for $t\ge \bar T$,
\begin{align*}
\mathbf{E}[\|\langle \tilde \theta \rangle_{t+1} -\theta^* \|_2^2 ]
& \le \frac{\bar T}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\|\langle \tilde \theta \rangle_{\bar T} -\theta^* \|_2^2 ]
+ \frac{\zeta_9 \alpha_0 C \log^2(\frac{t+1}{\alpha_0})}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}}
+ \alpha_0 \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = \bar T}^{t+1} \mu_{l} }{t+1}, \end{align*} where $\bar T$ is defined in Appendix~\ref{sec:thmPush_constant}, $\zeta_8$ and $\zeta_9$ are defined in \eqref{eq:definition_Psi12} and \eqref{eq:definition_Psi13}, respectively. \end{lemma}
{\bf Proof of Lemma~\ref{lemma:bound_average_time-varying_push_SA}:} Let $H(\langle \tilde \theta \rangle_t ) = ( \langle \tilde \theta \rangle_t - \theta^* )^\top P ( \langle \tilde \theta \rangle_t - \theta^* ) $. From Assumption~\ref{assum:lyapunov}, we know that $
\gamma_{\min} \| \langle \tilde \theta \rangle_t - \theta^* \|_2^2 \le H(\langle \tilde \theta \rangle_t ) \le \gamma_{\max} \| \langle \tilde \theta \rangle_t - \theta^* \|_2^2. $
From Assumption~\ref{assum:A and b} and \eqref{eq:update_average_tilde_theta}, for $t\ge \bar T$ we have \begin{align}
&\;\;\;\; H( \langle \tilde \theta \rangle_{t+1} )
= ( \langle \tilde \theta \rangle_{t+1} - \theta^* )^\top P ( \langle \tilde \theta \rangle_{t+1} - \theta^* ) \nonumber\\
& = ( \langle \tilde \theta \rangle_t - \theta^* )^\top P (\langle \tilde \theta \rangle_t - \theta^* ) + \alpha_t^2 ( A(X_t) \langle \tilde \theta \rangle_t )^\top P ( A(X_t) \langle \tilde \theta \rangle_t ) \nonumber \\
& \;\;\;\; + \frac{\alpha_t^2}{N^2} (B(X_t)^\top \mathbf{1}_N)^\top P (B(X_t)^\top\mathbf{1}_N) + \frac{ \alpha_t^2}{N} ( A(X_t) \langle \tilde \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\mathbf{1}_N) + \alpha_t^2 \rho_t^\top P \rho_t\nonumber\\
& \;\;\;\; + \alpha_t^2 ( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N}B(X_t)^\top\mathbf{1}_N )^\top (P+P^\top)\rho_t + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top) \rho_t \nonumber\\
& \;\;\;\; + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N}B(X_t)^\top\mathbf{1}_N - A\langle \tilde \theta \rangle_t - b) \nonumber\\
& \;\;\;\; + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top P( A\langle \tilde \theta \rangle_t + b) + \alpha_t ( A\langle \tilde \theta \rangle_t + b)^\top P( \langle \tilde \theta \rangle_t - \theta^* ), \nonumber
\end{align} which implies that \begin{align}
H( \langle \tilde \theta \rangle_{t+1} )
& = H( \langle \tilde \theta \rangle_t )+ \alpha_t^2 ( A(X_t) \langle \tilde \theta \rangle_t )^\top P ( A(X_t) \langle \tilde \theta \rangle_t ) + \frac{\alpha_t^2}{N^2} (B(X_t)^\top \mathbf{1}_N)^\top P (B(X_t)^\top\mathbf{1}_N) \nonumber \\
& \;\;\;\; + \frac{ \alpha_t^2}{N} ( A(X_t) \langle \tilde \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\mathbf{1}_N) + \alpha_t^2 \rho_t^\top P \rho_t\nonumber\\
& \;\;\;\; + \alpha_t^2 ( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N}B(X_t)^\top\mathbf{1}_N )^\top (P+P^\top)\rho_t + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)\rho_t \nonumber\\
& \;\;\;\; + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N}B(X_t)^\top\mathbf{1}_N - A\langle \tilde \theta \rangle_t - b) \nonumber\\
& \;\;\;\; + \alpha_t ( \langle \tilde \theta \rangle_t - \theta^* )^\top (PA+A^\top P ) (\langle \tilde \theta \rangle_t -\theta^*) \label{eq:push_SA_proof_1}, \end{align} where we use the fact that $A\theta^* +b =0 $ in the last equality.
Next, we can take expectation on both sides of \eqref{eq:push_SA_proof_1}. From Assumption~\ref{assum:lyapunov} and Lemma~\ref{lemma:bound_timevarying_Ab_push_SA}, for $t\ge \bar T$ we have \begin{align}
\mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )]
& = \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + \alpha_t^2 \mathbf{E}[( A(X_t) \langle \tilde \theta \rangle_t )^\top P ( A(X_t) \langle \tilde \theta \rangle_t )] - \alpha_t \mathbf{E}[\| \langle \tilde \theta \rangle_t - \theta^* \|_2^2] + \mathbf{E}[\alpha_t^2 \rho_t^\top P \rho_t]\nonumber \\
& \;\;\; + \frac{\alpha_t^2 }{N^2} \mathbf{E}[(B(X_t)^\top\mathbf{1}_N)^\top P (B(X_t)^\top\mathbf{1}_N)] + \frac{\alpha_t^2}{N} \mathbf{E}[( A(X_t) \langle \tilde \theta \rangle_t )^\top (P+P^\top)(B(X_t)^\top\mathbf{1}_N)] \nonumber\\
& \;\;\; + \alpha_t^2 \mathbf{E}[( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N}B(X_t)^\top\mathbf{1}_N )^\top (P+P^\top) \rho_t] + \alpha_t \mathbf{E}[( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)\rho_t] \nonumber\\
& \;\;\; + \alpha_t \mathbf{E}[( \langle \tilde \theta \rangle_t - \theta^* )^\top (P+P^\top)( A(X_t) \langle \tilde \theta \rangle_t + \frac{1}{N} B(X_t)^\top\mathbf{1}_N - A\langle \tilde \theta \rangle_t - b)], \nonumber
\end{align} which implies that \begin{align}
&\;\;\; \mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )] \nonumber\\
& \le \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + \alpha_t^2 A_{\max}^2 \gamma_{\max} \mathbf{E}[ \| \langle \tilde \theta \rangle_t \|_2^2 ] - \alpha_t \mathbf{E}[\| \langle \tilde \theta \rangle_t - \theta^* \|_2^2] + 2 \alpha_t \gamma_{\max} \| \rho_{t} \|_2 \mathbf{E}[\|\langle \tilde \theta \rangle_t - \theta^* \|_2]\nonumber \\
& \;\;\; + \alpha_t^2 \gamma_{\max} (b_{\max}^2+\mu_{\max}^2) + 2 \alpha_t^2 \gamma_{\max} A_{\max} b_{\max} \mathbf{E}[ \|\langle \tilde \theta \rangle_t\|_2] + 2 \alpha_t^2 \gamma_{\max} \mu_{\max} ( A_{\max} \mathbf{E}[\| \langle \tilde \theta \rangle_t \|_2] + b_{\max}) \nonumber\\
& \;\;\; + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 456 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 ] \nonumber \\
& \;\;\; + \alpha_t \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\; + 4 \|\theta^* \|_2^2 + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 87 (b_{\max}+ \mu_{\max})^2 \bigg] \nonumber\\
& \le \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + ( - \alpha_t + \alpha_t \gamma_{\max} \| \rho_{t} \|_2 ) \mathbf{E}[\| \langle \tilde \theta \rangle_t- \theta^* \|_2^2] + \alpha_t \gamma_{\max} \| \rho_{t} \|_2 \nonumber\\
& \;\;\; + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 458 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 ] \nonumber \\
& \;\;\;\; + \alpha_t \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 2 + 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 \nonumber\\
&\;\;\; + 4 \|\theta^* \|_2^2 + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 89 (b_{\max}+ \mu_{\max})^2 \bigg]. \nonumber \end{align}
Using the facts that $ \mathbf{E}[ \| \langle \tilde \theta \rangle_{t} \|_2^2 ] \le 2 \mathbf{E}[ \| \langle \tilde \theta \rangle_{t}- \theta^* \|_2^2 ] + 2 \| \theta^*\|_2^ 2 $ and $\gamma_{\min} \| \langle \tilde \theta \rangle_t - \theta^* \|_2^2 \le H(\langle \tilde \theta \rangle_t ) \le \gamma_{\max} \| \langle \tilde \theta \rangle_t - \theta^* \|_2^2$, then \begin{align}
&\;\;\;\; \mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )] \nonumber\\
& \le \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + ( - \alpha_t + \alpha_t \gamma_{\max} \mu_{t} ) \mathbf{E}[\| \langle \tilde \theta \rangle_t- \theta^* \|_2^2] + \alpha_t \gamma_{\max} \mu_{t} + 2 \alpha_t^2 \gamma_{\max} (b_{\max} + \mu_{\max})^2 \nonumber\\
& \;\;\; + 2 \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 458 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \mathbf{E}[ \| \langle \tilde \theta \rangle_{t}- \theta^* \|_2^2 ] \nonumber \\
& \;\;\; + 2 \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \left( 72 + 458 A_{\max}^2 + 84 A_{\max} b_{\max} + 72 A_{\max} \mu_{\max} \right) \| \theta^*\|_2^2 \nonumber \\
& \;\;\; + \alpha_t \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \bigg[ 48\frac{(b_{\max}+ \mu_{\max})^2}{A_{\max}^2} + 152 \left(b_{\max} + \mu_{\max} + A_{\max} \| \theta^* \|_2 \right)^2 + 4 \|\theta^* \|_2^2 \nonumber\\
&\;\;\; +2 + 12 A_{\max}b_{\max} + 48 A_{\max}(b_{\max}+ \mu_{\max}) (\frac{b_{\max} + \mu_{\max}}{A_{\max}} + 1 )^2 + 87 (b_{\max}+ \mu_{\max})^2 \bigg] \nonumber \\
& \le \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + ( - \alpha_t + \alpha_t \gamma_{\max} \mu_{t} + \alpha_t \alpha_{t-\tau(\alpha_t)} \tau(\alpha_t) \gamma_{\max} \zeta_8 ) \mathbf{E}[\| \langle \tilde \theta \rangle_t- \theta^* \|_2^2] \nonumber \\
& \;\;\; + \alpha_t \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \zeta_9 + \alpha_t \gamma_{\max} \mu_{t} \nonumber. \end{align} Moreover, from $\alpha_t = \frac{\alpha_0}{t+1}$, $\alpha_0\ge \frac{\gamma_{\max}}{0.9}$ and the definition of $\bar T$, for all $t \ge \bar T$ we have \begin{align}
\mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )]
& \le (1-\frac{0.9\alpha_t} {\gamma_{\max}})\mathbf{E}[H( \langle \tilde \theta \rangle_t )] + \alpha_t \alpha_{t -\tau(\alpha_t) } \tau(\alpha_t) \gamma_{\max} \zeta_9 + \alpha_t \gamma_{\max} \mu_{t} \nonumber\\
& \le \frac{t}{t+1} \mathbf{E}[H( \langle \tilde \theta \rangle_t )] + \alpha_0 \gamma_{\max} \frac{ \mu_{t} }{t+1} + \frac{\alpha_0^2 C\log(\frac{t+1}{\alpha_0}) \gamma_{\max} \zeta_9 }{(t+1)(t-\tau(\alpha_t)+1)} \nonumber \\
& \le \frac{\bar T}{t+1} \mathbf{E}[H( \langle \tilde \theta \rangle_{\bar T} )] + \alpha_0 \gamma_{\max} \sum_{l = \bar T}^t \left( \frac{ \mu_{l} }{l+1} + \frac{\alpha_0 \zeta_9 C\log(\frac{l+1}{\alpha_0}) }{(l+1)(l-\tau(\alpha_l)+1)} \right) \Pi_{u=l+1}^t\frac{u}{u+1} \nonumber\\
& = \frac{\bar T}{t+1} \mathbf{E}[H( \langle \tilde \theta \rangle_{\bar T} )] + \alpha_0 \gamma_{\max} \sum_{l = \bar T}^t \frac{ \mu_{l} }{t+1} + \frac{\alpha_0^2 \gamma_{\max} \zeta_9 }{t+1}
\sum_{l=\bar T}^t \frac{ C\log(\frac{l+1}{\alpha_0}) }{l-\tau(\alpha_l)+1} , \nonumber
\end{align} which implies that \begin{align}
\mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )]
& \le \frac{\bar T}{t+1} \mathbf{E}[H( \langle \tilde \theta \rangle_{\bar T} )] + \alpha_0 \gamma_{\max} \frac{\sum_{l = \bar T}^t \mu_{l} }{t+1} + \frac{\alpha_0^2 \gamma_{\max} \zeta_9 }{t+1}
\sum_{l=\bar T}^t \frac{ 2 C\log(\frac{l+1}{\alpha_0}) }{l+1} \nonumber \\
& \le \frac{\bar T}{t+1} \mathbf{E}[H( \langle \tilde \theta \rangle_{\bar T} )] + \alpha_0 \gamma_{\max} \frac{\sum_{l = \bar T}^{t+1} \mu_{l} }{t+1} + \frac{\zeta_9 \alpha_0 \gamma_{\max} C \log^2(\frac{t+1}{\alpha_0})}{t+1}, \label{eq:timevarying_singlebound_1_Push_SA} \end{align} where we use $
\sum_{l=\bar T}^t \frac{2 \alpha_0 \log(\frac{l+1}{\alpha_0}) }{l+1} \le \log^2(\frac{t+1}{\alpha_0}) $
to get the last inequality. Then, we can get the bound of $ \mathbf{E}[\| \langle \tilde \theta \rangle_{t+1} -\theta^* \|_2^2 ] $ from \eqref{eq:timevarying_singlebound_1_Push_SA} as follows: \begin{align*}
&\;\;\;\; \mathbf{E}[\|\langle \tilde \theta \rangle_{t+1} -\theta^* \|_2^2 ]
\le \frac{1}{\gamma_{\min}} \mathbf{E}[H( \langle \tilde \theta \rangle_{t+1} )] \nonumber\\
& \le \frac{\bar T}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\|\langle \tilde \theta \rangle_{\bar T} -\theta^* \|_2^2 ]
+ \frac{\zeta_9 \alpha_0 C \log^2(\frac{t+1}{\alpha_0})}{t+1} \frac{\gamma_{\max}}{\gamma_{\min}}
+ \alpha_0 \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = \bar T}^{t+1} \mu_{l} }{t+1}. \end{align*} This completes the proof.
$ \rule{.08in}{.08in}$
We are now in a position to prove Theorem~\ref{thm:bound_time-varying_step_Push_SA}.
\noindent {\bf Proof of Theorem~\ref{thm:bound_time-varying_step_Push_SA}:} Note that
$
\sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \theta^*\|_2^2]
\le 2 \sum_{i=1}^N \mathbf{E}[\|\theta_{t+1}^i - \langle \tilde \theta \rangle_t \|_2^2 ] + 2 N \mathbf{E} [\| \langle \tilde \theta \rangle_t - \theta^*\|_2^2] .
$ From Lemmas~\ref{lemma:bound_consensus_time-varying_push_SA} and \ref{lemma:bound_average_time-varying_push_SA}, we have for any $t \ge \bar T$,
\begin{align*}
\sum_{i=1}^N \mathbf{E}\left[\left\|\theta_{t+1}^i - \theta^*\right\|_2^2\right]
& \le \frac{16}{\epsilon_1} \bar\epsilon^t \mathbf{E}[ \| \sum_{i=1}^N \tilde \theta_0^i + \alpha_0 A(X_0)\theta_0^i + \alpha_0 b^i(X_0) \|_2] + 2 \alpha_t A_{\max} C_\theta + 2 \alpha_t b_{\max} \\
& \;\;\; + \frac{16}{\epsilon_1} \frac{ A_{\max} C_\theta + b_{\max}}{1-\bar\epsilon} \left( \alpha_0 \bar\epsilon^{t/2} + \alpha_{\ceil{\frac{t}{2}}} \right)+ \frac{2\bar TN}{t} \frac{\gamma_{\max}}{\gamma_{\min}} \mathbf{E}[\|\langle \tilde \theta \rangle_{\bar T} -\theta^* \|_2^2 ] \\
&\;\;\;
+ \frac{2 N \zeta_9 \alpha_0 C \log^2(\frac{t}{\alpha_0})}{t} \frac{\gamma_{\max}}{\gamma_{\min}}
+ 2 \alpha_0 N \frac{\gamma_{\max}}{\gamma_{\min}} \frac{\sum_{l = \bar T}^{t} \mu_{l} }{t} \\
&\le C_7 \bar\epsilon^t + C_8 \left( \alpha_0 \bar\epsilon^{\frac{t}{2}} + \alpha_{\ceil{\frac{t}{2}}} \right)+ C_9 \alpha_t + \frac{1}{t}\bigg(C_{10} \log^2\Big(\frac{t}{\alpha_0}\Big) + C_{11}\sum_{l = \bar T}^{t} \mu_{l} +C_{12}\bigg).
\end{align*} This completes the proof.
$ \rule{.08in}{.08in}$ \label{sec:proof_push}
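
To see which group of terms in the bound of Theorem~\ref{thm:bound_time-varying_step_Push_SA} governs the asymptotic rate, the following Python sketch evaluates them for a few values of $t$. All constants are purely illustrative placeholders: it does not compute the actual $C_7,\ldots,C_{12}$, $\bar\epsilon$, $\bar T$, $\alpha_0$, or $\mu_l$ from problem data, and the synthetic $\mu_l$ merely mimics the geometric-plus-step-size decay established in Lemma~\ref{lemma:eta_limit_Push_SA}.
\begin{verbatim}
import numpy as np

C7, C8, C9, C10, C11, C12 = 5.0, 2.0, 1.0, 3.0, 1.0, 4.0   # placeholders
alpha0, eps_bar, T_bar = 1.0, 0.9, 10                       # placeholders

alpha = lambda t: alpha0 / (t + 1)
mu = lambda l: eps_bar ** l + alpha(l // 2)                 # synthetic mu_l

for t in (10**2, 10**3, 10**4, 10**5):
    geometric = C7 * eps_bar ** t \
        + C8 * (alpha0 * eps_bar ** (t / 2) + alpha(-(-t // 2)))  # ceil(t/2)
    step = C9 * alpha(t)
    tail = (C10 * np.log(t / alpha0) ** 2
            + C11 * sum(mu(l) for l in range(T_bar, t + 1)) + C12) / t
    print(f"t={t:>7d}  geometric={geometric:.2e}  "
          f"C9*alpha_t={step:.2e}  (1/t)-terms={tail:.2e}")
# with these placeholders the O(log^2(t)/t) group dominates for large t,
# matching the stated rate
\end{verbatim}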
\end{document} | arXiv |
Partitioning of thermostable glucoamylase in polyethyleneglycol/salt aqueous two-phase system
Vinayagam Ramesh and Vytla Ramachandra Murty
A major challenge in downstream processing is the separation and purification of a target biomolecule from the fermentation broth, which is a cocktail of various biomolecules present as impurities. An aqueous two-phase system (ATPS) can address this issue to a great extent, so that the separation and partial purification of a target biomolecule can be integrated into a single step. In the food industry, starch processing is carried out using thermostable glucoamylase. Humicola grisea serves as an attractive source for extracellular production of glucoamylase.
In the present investigation, the possibility of using polyethylene glycol (PEG)/salt-based ATPS for the partitioning of glucoamylase from H. grisea was explored for the first time. Experiments were conducted using a one-variable-at-a-time approach in which independent parameters such as PEG molecular weight, type of phase-forming salt, tie line length, phase volume ratio, and neutral salt concentration were optimized. It was found that the PEG 4000/potassium phosphate system was suitable for the extraction of glucoamylase from the fermentation broth. At a phase composition of 22 % w/w PEG 4000 and 12 % w/w phosphate, in the presence of 2 % w/w NaCl and at pH 8, glucoamylase partitioned into the salt-rich phase with a maximum yield of 85.81 %.
A range of parameters had a significant influence on aqueous two-phase extraction of glucoamylase from H. grisea. The feasibility of using aqueous two-phase extraction (ATPE) as a preliminary step for the partial purification of glucoamylase was clearly proven.
Background

Glucoamylase (EC 3.2.1.3) is a hydrolytic enzyme that degrades starch and related oligosaccharides, leading to the production of β-d-glucose. Sectors that benefit from glucoamylase include the brewing, textile, food, paper, and pharmaceutical industries [1]. Glucoamylase is obtained from diverse microbial sources such as bacteria, yeasts, and fungi. The commercial production of glucoamylase has been mainly carried out using the genera Aspergillus and Rhizopus [2]. For the manufacture of high-fructose corn syrups, starch first needs to be converted to glucose by high-temperature liquefaction and saccharification [3]. Much attention is currently focused on the thermostability of the glucoamylase used in starch processing; hence, a highly thermostable and environmentally compatible glucoamylase is essential for industrial purposes [4]. The main benefits of using thermostable enzymes in the starch processing industry include increased reaction rates, decreased contamination risk, and reduced cooling costs [5, 6]. The thermophilic fungus Humicola grisea possesses an efficient hydrolytic system for the production of glucoamylase, and the enzyme remains stable when exposed to high temperature for long durations. In view of these advantages, glucoamylase derived from the thermophilic fungus H. grisea MTCC 352 was used in the current study [3].
A variety of downstream processing techniques such as ion exchange chromatography, hydrophobic interaction chromatography, and gel filtration chromatography have been exploited for the purification of glucoamylase [1, 7–11]. The drawback of these procedures is that they are expensive, time consuming, and often multistep, low-yield protocols that are not suitable for large-scale production. In this regard, the use of aqueous two-phase systems (ATPSs) for the extraction and purification of glucoamylase has been attempted in the present investigation. Aqueous two-phase extraction (ATPE) has been widely used as a rapid and economical method for the separation and partial purification of many intracellular and extracellular enzymes [12–15].
ATPS can be formulated by mixing appropriate quantities of two hydrophilic polymers, or of a hydrophilic polymer and a salt. However, the use of ATPS based on a hydrophilic polymer and a salt has attracted many researchers because of the following advantages: ease of separation, low cost, ease of scale-up and operation, biocompatibility, and high water content. Moreover, ATPE has high capacity and yield [16]. The partitioning of a protein in any ATPS depends on many factors such as hydrophobic interactions, hydrogen bonding, ionic interactions, and van der Waals forces. Therefore, the partitioning behavior varies with the type of polymer, the polymer molecular weight and concentration, the type and concentration of salt, the tie line length (TLL), the phase volume ratio (V R), and other processing parameters such as pH, temperature, and the presence of neutral salts [17, 18].
Over the years, ATPSs have been widely used in the purification of monoclonal antibodies, extractive fermentation, and the recovery of industrial enzymes [18]. Recent studies have employed ATPS (polyethylene glycol (PEG)/potassium phosphate) for biomolecule extraction and primary purification to a great extent. Nandini and Rastogi [19] dealt with the partitioning of lactoperoxidase from milk whey and studied the effect of phase-forming salt, PEG molecular weight, pH, TLL, and V R, resulting in a purification factor (PF) of 2.31. Ratanapongleka [20] studied the partitioning behavior of laccase from Lentinus polychrous Lev., examining the effect of PEG molecular weight and concentration, salt concentration, pH, and NaCl, leading to 99 % yield and a PF of 3. Babu et al. [21] studied the extraction of polyphenol oxidase from pineapple with respect to PEG molecular weight and concentration, salt concentration, and pH, which gave 90 % recovery and a PF of 2.7. Naganagouda and Mulimani [22] carried out ATPE of α-galactosidase from Aspergillus oryzae and studied the effect of PEG molecular weight, salt concentration, pH, and NaCl, resulting in a PF of 3.6 and a recovery of 87.71 %. The partitioning of glucoamylase from Aspergillus awamori NRRL 3112 was studied by Minami and Kilikian [23] using a two-step ATPE consisting of PEG/phosphate systems, which achieved a threefold PF. Glucoamylase from the same organism was partitioned using bioaffinity extraction with starch as a free bioligand by de Gouveia and Kilikian [24]. To the best of our knowledge, no studies are available on the ATPE of glucoamylase from any thermophilic fungus.
The present investigation was undertaken to understand and enhance the partitioning of glucoamylase. Accordingly, studies were systematically carried out by varying the stated parameters through the one-variable-at-a-time approach. In the current study, the phase-forming salt was chosen first, followed by the molecular weight of PEG (with the concentrations of PEG and salt fixed at a constant level). Next, the influence of process parameters such as tie line length, phase volume ratio, and pH was investigated. Finally, the effect of the presence of a neutral salt (sodium chloride) on the partitioning behavior of glucoamylase was studied.
Methods

Polyethylene glycol (molecular weight (MW) 1000, 2000, 4000, and 6000), dipotassium hydrogen orthophosphate, potassium dihydrogen orthophosphate, trisodium citrate, tripotassium citrate, magnesium sulfate, magnesium sulfate heptahydrate, sodium chloride, and calcium chloride were obtained from Merck (India). Potato dextrose agar, yeast extract, and soluble starch were obtained from Hi Media Laboratories Pvt. Ltd (India). The glucose oxidase/peroxidase (GOD-POD) assay kit was obtained from Agappe Diagnostics Ltd (India). All chemicals were of analytical grade. The fungus H. grisea MTCC 352 was obtained from the Microbial Type Culture Collection, Chandigarh, India.
Enzyme production and preparation of crude enzyme
The microorganism was maintained on potato dextrose agar (PDA) slants, grown at 45 °C for 10 days before being stored at 4 °C. Glucoamylase was produced through submerged cultivation in a chemically defined medium consisting of 2.84 g soluble starch, 0.96 g yeast extract, 0.05 g KH2PO4, 0.24 g K2HPO4, 0.05 g NaCl, 0.05 g CaCl2, 0.19 g MgSO4.7H2O, and 0.1 mL of Vogel's trace elements solution. The pH of the medium was adjusted to 6 [3]. Cultures were incubated with agitation at 150 rpm at 45 °C for 4 days. The fermented broth was filtered through Whatman No. 1 filter paper to remove the fungal mycelia, and the filtrate was centrifuged at 10,000 rpm for 10 min. The cell-free supernatant was referred to as the crude enzyme and was used throughout the experiments.
Partitioning studies in aqueous two-phase system
Aqueous two-phase systems were prepared by mixing the requisite amounts of PEG and the various salts (trisodium citrate, tripotassium citrate, magnesium sulfate, and mono/dibasic potassium phosphate). The total weight of each system was 10 g, and the crude enzyme amounted to 10 % of the total system. The tubes were vigorously vortexed and centrifuged at 3000 rpm for 10 min to speed up the separation process. Phase equilibration was achieved by overnight incubation of the tubes, after which samples were withdrawn from each phase and analyzed for total protein and glucoamylase activity. The samples were analyzed against blanks of similar phase composition but without the enzyme, to avoid interference from the phase components.
Glucoamylase activity
An appropriate amount of the crude enzyme was allowed to react with 1 % (w/v) soluble starch solution in 50 mM citrate buffer (pH 5.5), at 60 °C for 10 min. The concentration of the glucose produced was estimated by GOD-POD method using a standard glucose curve prepared under similar conditions. One unit of glucoamylase activity was defined as the amount of enzyme that releases 1 μmol of glucose from soluble starch per minute under assay conditions.
The total protein was estimated, as described by Bradford [25], using bovine serum albumin as a standard.
Estimation of partition parameters
The partitioning parameters in ATPS were calculated as follows.
The phase volume ratio (V R) was defined as the ratio of volume in the top phase (V T) and bottom phase (V B).
$$ {V}_{\mathrm{R}}=\frac{V_{\mathrm{T}}}{V_{\mathrm{B}}} $$
The partition coefficient for glucoamylase (K GA) was defined as the ratio of glucoamylase activity in the top phase (A T) to that in the bottom phase (A B).
$$ {K}_{\mathrm{GA}}=\frac{A_{\mathrm{T}}}{A_{\mathrm{B}}} $$
The partition coefficient for total protein (K TP) was defined as the ratio of protein concentration in the top phase (C T) to that in the bottom phase (C B).
$$ {K}_{\mathrm{T}\mathrm{P}}=\frac{C_{\mathrm{T}}}{C_{\mathrm{B}}} $$
The specific activity (SA) was defined as the ratio of glucoamylase activity (A) to protein concentration (C) in the respective phases.
$$ \mathrm{S}\mathrm{A}=\frac{A}{C} $$
The purification factor (PF) was calculated by the ratio of the specific activity in the bottom phase (SAB) to the specific activity in the crude extract (SAF).
$$ \mathrm{P}\mathrm{F}=\frac{{\mathrm{SA}}_{\mathrm{B}}}{{\mathrm{SA}}_{\mathrm{F}}} $$
The glucoamylase yield in the bottom phase is given by the following equation.
$$ \mathrm{Yield}\ \left(\%\right)=\frac{100}{1+{V}_{\mathrm{R}}{K}_{\mathrm{GA}}\ } $$
The TLL is defined as
$$ \mathrm{T}\mathrm{L}\mathrm{L}\left(\%\right)=\sqrt{{\left({C}_{\mathrm{PT}}-{C}_{\mathrm{PB}}\right)}^2+{\left({C}_{\mathrm{SB}}-{C}_{\mathrm{ST}}\right)}^2} $$
where C PT and C PB are the PEG concentrations (% w/w) in the top and bottom phases, respectively, and C ST and C SB are the salt concentrations (% w/w) in the top and bottom phases, respectively.
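As a purely illustrative example (hypothetical values, not data from this study), a system with V R = 1.0 and K GA = 0.5 would give
$$ \mathrm{Yield}\ \left(\%\right)=\frac{100}{1+1.0\times 0.5}\approx 66.7\ \% $$
that is, about two-thirds of the enzyme activity would report to the bottom phase; the smaller V R or K GA, the higher the bottom-phase yield.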
The essence of ATPE lies in the differential partitioning of the target biomolecule to one phase and the contaminants to the other; it is this mechanism that leads to the purification of a target biomolecule. The extent of partitioning in an ATPS is difficult to predict theoretically, primarily because a complex set of parameters decides it. These include the properties of the biomolecule (size, charge, and hydrophobicity) and the properties of the system, such as (i) type and concentration of phase-forming salt, (ii) concentration and molecular weight of phase-forming polymer, (iii) tie line length, (iv) phase volume ratio, (v) pH of the system, and (vi) concentration of neutral salts. Details of the selection of each of these parameters and their effect on the partitioning of glucoamylase are presented in the following sections.
Effect of phase-forming salts
Due to the significant influence of the phase-forming salt on the system environment, its selection has a direct consequence on the separation, concentration, and purification of a given biomolecule in ATPE [26]. In order to identify the most appropriate salt for the recovery of glucoamylase and ensure its efficient extraction, ATPE experiments were performed with phase systems of PEG (MW 4000) and four different phase-forming salts: trisodium citrate, tripotassium citrate, magnesium sulfate, and mono/dibasic potassium phosphate. The partition coefficients of H. grisea-derived glucoamylase and total protein using 15 % (w/w) PEG 4000 + 15 % (w/w) salt are shown in Fig. 1. In all the phase systems studied, the values of K GA and K TP were less than 1. Glucoamylase partitioned preferentially to the bottom phase, indicating a strong preference of the enzyme for that phase and resulting in low partition coefficients in the range of 0.28–0.78. This is in agreement with the ATPE studies of Minami and Kilikian [23] on glucoamylase from A. awamori. The disparities in the values of K GA arise from the non-uniform distribution of the salt ions between the top and bottom phases, and from the difference in electric potential, which alters protein mobility between the phases through electrostatic repulsion/attraction, hydrophobicity, and the size of the salt ions [18, 27]. The specific activities of the top and bottom phases, and the yield and purification factor for the bottom phase, in the systems with different phase-forming salts are given in Table 1. The yield was higher in the bottom phase (69.73–82.48 %) for all the phase-forming salts. Except for the magnesium salt, the specific activity (U/mg) and the purification factor were also higher in the bottom phase. Trisodium citrate and potassium phosphate exhibited relatively higher yields of 82.23 and 82.48 %, respectively; however, potassium phosphate gave the higher PF (1.46). The potassium phosphate system has also been reported to be more effective for lactoperoxidase [19], laccase [20], polyphenol oxidase [21], and α-galactosidase [22]. Based on these preliminary results, the PEG/potassium phosphate (K2HPO4 and KH2PO4) system was used for further studies.
Effect of phase-forming salt on the partitioning of glucoamylase
Table 1 Effect of phase-forming salts on glucoamylase partitioning
Effect of PEG molecular weight
The molecular weight of PEG decides the extent of partitioning of the target biomolecule and of the other molecules in the extract. As the chain length of PEG increases, the volume exclusion effect generally follows an increasing trend, and in the presence of salt the hydrophobicity of the polymer-rich top phase increases with chain length [18]. The extraction efficiency is influenced by the composition of the phases and by the number of polymer–protein interactions, factors that are governed by the degree of polymerization of the polymer [28]. In order to determine the most appropriate molecular weight of PEG for the recovery of glucoamylase, partitioning studies were carried out in the PEG/KH2PO4–K2HPO4 system with different molecular weights of PEG (1000, 2000, 4000, and 6000), keeping the phase composition and pH constant (15 % (w/w) PEG + 15 % (w/w) potassium phosphate, pH 7). The partition coefficients of glucoamylase and total protein are shown in Fig. 2. Both K GA and K TP were found to decrease with an increase in PEG molecular weight. This decrease can be ascribed to the volume exclusion effect, which increases with the molecular weight of the polymer, so that the biomolecules partition selectively to the bottom phase. Similar results were observed by Nandini and Rastogi [26], Priyanka et al. [29], and Lakshmi et al. [30]. The specific activities of the top and bottom phases, and the yield and purification factor for the bottom phase, for the different PEGs (MW 1000, 2000, 4000, and 6000) are shown in Table 2. The specific activity of the bottom phase was greater than that of the top phase irrespective of the molecular weight of the polymer. With a rise in the molecular weight of PEG, the yield of glucoamylase in the bottom phase increased. This trend can be explained by the increase in top-phase hydrophobicity: as the chain length of PEG increases, fewer hydroxyl groups are present for the same concentration of the polymer [12]. The specific activity of the enzyme in the bottom phase rose as the molecular weight increased from 1000 to 4000; with PEG 6000 it dipped, as did the purification factor. This behavior may arise because the bottom phase approaches its solubility limit with respect to glucoamylase, and the resulting salting-out effect tends to push the enzyme to the top phase. Similar results were observed by Yuzugullu and Duman [31] and Madhusudhan and Raghavarao [32]. Since the highest combination of yield and purification factor was observed with PEG 4000, it was chosen for further studies.
Effect of PEG molecular weight on the partitioning of glucoamylase
Table 2 Effect of PEG molecular weight on glucoamylase partitioning
Effect of TLL
The effect of TLL (22.91–31.61 %) on glucoamylase partitioning was investigated in PEG 4000/potassium phosphate systems. The compositions of the PEG-salt systems within the specified TLL range were obtained from the liquid-liquid equilibrium data provided by Carvalho et al. [33]. The phase volume ratio was maintained at 1 for this set of experiments. The partition coefficients of both glucoamylase and total protein increased with an increase in TLL (Fig. 3). This could be because of the decrease in the relative free volume in the bottom phase and the consequent decrease in the solubility of the biomolecules [32, 34]. As depicted in Fig. 4, the increase in the partition coefficient of glucoamylase with increasing TLL at constant volume ratio results in a decrease in glucoamylase yield (Eq. 6). The purification factor increased and reached a maximum value of 1.72 at a TLL of 30.62 %; it decreased with a further increase in TLL, possibly because the high salt concentration in the bottom phase affects the solubility of glucoamylase [30].
Effect of TLL on the partitioning of glucoamylase
Effect of TLL on the recovery of glucoamylase
Effect of phase volume ratio
In order to further purify the enzyme, various volume ratios (0.41–1.57) were examined at the TLL of 30.62 %, and their effect on PF and yield was investigated. As seen in Fig. 5, the phase volume ratio strongly influences the recovery, since increasing V R reduces the bottom phase volume [32]. A lower PF was observed at lower V R because the larger bottom phase volume at low V R promotes the partitioning of contaminant proteins into the bottom phase. A maximum PF of 1.84 was observed at a V R of 1.37, and a further increase resulted in a decrease in PF. In contrast, the yield decreased with increasing phase volume ratio. A similar result was observed by Chethana et al. [35].
Effect of V R on the recovery of glucoamylase
Effect of pH
One of the significant factors that govern the partition behavior of biomolecules in an ATPS is the pH at which the process is carried out, since any change in pH can influence the charge of the solute or the ratio of charged molecules. The phase system selected in the previous step (22 % (w/w) PEG 4000 and 12 % (w/w) potassium phosphate) was therefore studied over the pH range 6 to 9. The variation of the partition coefficients with the pH of the system is shown in Fig. 6. The increase in pH improved the migration of contaminant proteins to the top phase, and the partition coefficient of glucoamylase was found to decrease; this enhanced the purification factor and the yield in the bottom phase. The results are in accordance with the reported literature (Nandini and Rastogi [19]; Naganagouda and Mulimani [22]). As shown in Fig. 7, the increase in pH had a positive effect on glucoamylase yield, which reached a maximum of 82.62 % at pH 9. The PF, however, reached a maximum of 2.61 at pH 8 and decreased thereafter; the low stability of glucoamylase at higher pH could be a possible reason for this reduction [4]. The partitioning depends on the system pH relative to the isoelectric point of glucoamylase, which the literature reports to be greater than 8 for H. grisea [8, 36, 37]. A decrease in pH makes the glucoamylase more positively charged and leads to stronger interaction between glucoamylase and the polymer, which drives more of the enzyme to the PEG-rich phase. Similar results were observed by Nandini and Rastogi [19] and Ratanapongleka [20].
Effect of pH on the partitioning of glucoamylase
Effect of pH on the recovery of glucoamylase
Effect of NaCl
One of the definitive methods of tuning the selectivity and yield has been the addition of neutral salts to the ATPS [16]. To examine the effect of a neutral salt on the partitioning of the enzyme, the concentration of NaCl was varied from 0 to 5 % (w/w) in the system selected in the previous step (22 % (w/w) PEG 4000 + 12 % (w/w) phosphate, pH 8.0). In general, the addition of neutral salts to an ATPS changes the partitioning behavior of proteins by changing the electrostatic potential difference between the phases or by increasing hydrophobic interactions [38]. Owing to the change in the electrostatic potential difference, increasing the NaCl concentration promoted further partitioning of glucoamylase to the bottom phase, and the lowest partition coefficient of 0.126 was obtained at 2 % NaCl (Fig. 8). This system resulted in a PF of 2.68 and a yield of 85.81 %. As shown in Fig. 8, a further increase in NaCl concentration reduced the PF, which could be a consequence of increased hydrophobic interactions between the protein and PEG in the top phase [39].
Effect of NaCl on the recovery of glucoamylase
Based on the above observations, it is clear that the PEG 4000/KH2PO4–K2HPO4 phase system is a potential technique for the separation and partial purification of glucoamylase.
The recovery of glucoamylase from a thermophilic fungal source using aqueous two-phase extraction was reported for the first time. The influence of various parameters on the separation and partial purification of glucoamylase from H. grisea in aqueous two-phase systems was established. The PEG 4000/potassium phosphate phase system was found to be the most efficient for the extraction of glucoamylase when compared with the other salt systems, and glucoamylase partitioned preferentially to the salt-rich bottom phase. The optimized conditions were a tie line length of 30.62 %, a phase volume ratio of 0.53, pH 8, and 2 % (w/w) NaCl. These conditions provided a maximum yield of 85.81 % and a 2.68-fold purification compared with the crude extract. Overall, the results demonstrated the feasibility of using ATPE as a preliminary step for the partial purification of glucoamylase.
Riaz M, Perveen R, Javed MR, Nadeem HU, Rashid MH (2007) Kinetic and thermodynamic properties of novel glucoamylase from Humicola sp. Enzyme Microb Tech 41:558–564
Pandey A (1995) Glucoamylase research: an overview. Starch 47:439–445
Ramesh V, Murty VR (2014) Sequential statistical optimization of media components for the production of glucoamylase by thermophilic fungus Humicola grisea MTCC 352. Enzyme Res. http://www.hindawi.com/journals/er/2014/317940/
Gomes E, Souza SR, Grandi RP, Da Silva R (2005) Production of thermostable glucoamylase by Aspergillus flavus A 1.1 and Thermomyces Lanuginosus A 13.37. Braz J Microbiol 36:75–82
Kaur P, Satyanarayana T (2004) Production and starch saccharification by a thermostable and neutral glucoamylase of a thermophilic mould Thermomucor indicae-seudaticae. World J Microbiol Biotechnol 20:419–425
Koç O, Metin K (2010) Purification and characterization of a thermostable glucoamylase produced by Aspergillus flavus HBF34. African J Biotechnol 9(23):3414–3424
Ferreira-Nozawa MS, Rezende JL, Guimarães LHS, Terenzi HF, Jorge JA, Polizeli MLTM (2008) Mycelial glucoamylases produced by the thermophilic fungus Scytalidium thermophilum strains 15.1 and 15.8. Purification and biochemical characterization. Braz J Microbiol 39(2):344–352
Campos L, Felix CR (1995) Purification and characterization of a glucoamylase from Humicola grisea. Appl Env Microbiol 61(6):2436–2438
Nguyen QD, Rezessy-Szabó JM, Claeyssens M, Stals I, Hoschke A (2002) Purification and characterization of amylolytic enzymes from thermophilic fungus Thermomyces lanuginosus strain ATCC 34626. Enzyme Microb Tech 31:345–352
Thorsen TS, Johnsen AH, Josefsen K, Jensen B (2006) Identification and characterization of glucoamylase from the fungus Thermomyces lanuginosus. Biochim Biophys Acta 1764(4):671–676
Negi S, Gupta S, Banerjee R (2011) Extraction and purification of glucoamylase and protease produced by Aspergillus awamori in a single-stage fermentation. Food Technol Biotechnol 49:310–315
Gautam S, Simon L (2006) Partitioning of β-glucosidase from Trichoderma reesei in poly(ethylene glycol) and potassium phosphate aqueous two-phase systems: influence of pH and temperature. Biochem Eng J 30:104–108
Madhusudhan MC, Raghavarao KSMS, Nene S (2008) Integrated process for extraction and purification of alcohol dehydrogenase from baker's yeast involving precipitation and aqueous two phase extraction. Biochem Eng J 38:414–420
Kammoun R, Chouayekh H, Abid H, Naili B, Bejar S (2009) Purification of CBS 819.72 α-amylase by aqueous two-phase systems: modelling using response surface methodology. Biochem Eng J 46:306–312
Kianmehr A, Pooraskari M, Mousavikoodehi B, Mostafavi SS (2014) Recombinant D-galactose dehydrogenase partitioning in aqueous two-phase systems: effect of pH and concentration of PEG and ammonium sulfate. Bioresource Bioprocess 1:6
Albertsson PA (1987) Partitioning of cell particles and macromolecules, 3rd edn. New York, John Wiley and Sons
Benavides J, Rito-Palomares M (2008) Practical experiences from the development of aqueous two-phase processes for the recovery of high value biological products. J Chem Technol Biotechnol 83:133–142
Raja S, Murty VR, Thivaharan V, Rajasekar V, Ramesh V (2011) Aqueous two phase systems for the recovery of biomolecules—a review. Science Technol 1:7–16
Nandini KE, Rastogi NK (2011) Integrated downstream processing of lactoperoxidase from milk whey involving aqueous two-phase extraction and ultrasound-assisted ultrafiltration. Appl Biochem Biotechnol 163:173–185
Ratanapongleka K (2012) Partitioning behavior of laccase from Lentinus polychrous Lev in aqueous two phase systems. Songklanakarin J Sci Technol 34(1):69–76
Babu BR, Rastogi NK, Raghavarao KSMS (2008) Liquid–liquid extraction of bromelain and polyphenol oxidase using aqueous two-phase system. Chem Eng Process 47:83–89
Naganagouda K, Mulimani VH (2008) Aqueous two-phase extraction (ATPE): an attractive and economically viable technology for downstream processing of Aspergillus oryzae α-galactosidase. Process Biochem 43:1293–1299
Minami NM, Kilikian BV (1998) Separation and purification of glucoamylase in aqueous two-phase systems by a two-step extraction. J Chromatogr B 711:309–312
de Gouveia T, Kilikian BV (2000) Bioaffinity extraction of glucoamylase in aqueous two-phase systems using starch as free bioligand. J Chromatogr B 743:241–246
Bradford MM (1976) A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem 72:248–254
Nandini KE, Rastogi NK (2011) Liquid–liquid extraction of lipase using aqueous two-phase system. Food Bioprocess Technol 4:295–303
Nagaraja VH, Iyyaswami R (2015) Aqueous two phase partitioning of fish proteins: partitioning studies and ATPS evaluation. J Food Sci Technol 52(6):3539–3548. http://www.ncbi.nlm.nih.gov/pubmed/26028736
Mohamadi HS, Omidinia E (2007) Purification of recombinant phenylalanine dehydrogenase by partitioning in aqueous two-phase systems. J Chromatogr B 854:273–278
Priyanka BS, Rastogi NK, Raghavarao KSMS, Thakur MS (2012) Downstream processing of luciferase from fireflies (Photinus pyralis) using aqueous two-phase extraction. Process Biochem 47:1358–1363
Lakshmi MC, Madhusudhan MC, Raghavarao KSMS (2012) Extraction and purification of lipoxygenase from soybean using aqueous two-phase system. Food Bioprocess Technol 5:193–199
Yuzugullu Y, Duman YA (2015) Aqueous two-phase (PEG4000/Na2SO4) extraction and characterization of an acid invertase from potato tuber (Solanum tuberosum). Prep Biochem Biotechnol 45(7):696–711. http://www.ncbi.nlm.nih.gov/pubmed/25127162
Madhusudhan MC, Raghavarao KSMS (2011) Aqueous two phase extraction of invertase from baker's yeast: effect of process parameters on partitioning. Process Biochem 46:2014–2020
Carvalho CP, Coimbra JSR, Costa IAF, Minim LA, Silva LHM, Maffia MC (2007) Equilibrium data for PEG 4000 plus salt plus water systems from (278.15 to 318.15) K. J Chem Eng Data 52:351–356
Selvakumar P, Ling TC, Walker S, Lyddiatt A (2012) Recovery of glyceraldehyde 3-phosphate dehydrogenase from an unclarified disrupted yeast using aqueous two-phase systems facilitated by distribution analysis of radiolabelled analytes. Sep Purif Technol 85:28–34
Chethana S, Nayak CA, Raghavarao KSMS (2007) Aqueous two phase extraction for purification and concentration of betalains. J Food Eng 81:679–687
Cereia M, Guimaraes LHS, Nogueira SCP, Jorge JA, Terenzi HF, Greene LJ, Polieli MLTM (2006) Glucoamylase isoform (GAII) purified from a thermophilic fungus Scytalidium thermophilum 15.8 with biotechnological potential. African J Biotechnol 5(12):1239–1245
Aquino ACMM, Jorge JA, Terenzi HF, Polizeli MLTM (2001) Thermostable glucose-tolerant glucoamylase produced by thermophilic fungus Scytalidiuyem thermophilum. Folia Microbiol 46(1):11–16
Kavakçıoğlu B, Tarhan L (2013) Initial purification of catalase from Phanerochaete chrysosporium by partitioning in poly(ethylene glycol)/salt aqueous two phase systems. Sep Purif Technol 105:8–14
Raja S, Murty VR (2013) Optimization of aqueous two-phase systems for the recovery of soluble proteins from tannery wastewater using response surface methodology. J Eng. http://www.hindawi.com/journals/je/2013/217483/
The authors gratefully acknowledge the Department of Biotechnology, MIT, Manipal University for providing the facilities to carry out the research work.
Department of Biotechnology, Manipal Institute of Technology, Manipal University, Manipal, 576104, Karnataka, India
Vinayagam Ramesh & Vytla Ramachandra Murty
Correspondence to Vinayagam Ramesh.
Both authors participated actively in the implementation and analysis of the present study. Ramesh performed the research protocols and wrote the manuscript. Both authors have read and approved the final version of the manuscript.
Ramesh, V., Murty, V.R. Partitioning of thermostable glucoamylase in polyethyleneglycol/salt aqueous two-phase system. Bioresour. Bioprocess. 2, 25 (2015). https://doi.org/10.1186/s40643-015-0056-6
Aqueous two-phase systems (ATPS)
Glucoamylase
Humicola grisea | CommonCrawl |
\begin{document}
\title{Dzyaloshinskii-Moriya interaction as a fast quantum information scrambler}
\author{Fatih Ozaydin} \email{(\Letter) [email protected]} \affiliation{Institute for International Strategy, Tokyo International University, 1-13-1 Matoba-kita, Kawagoe, Saitama, 350-1197, Japan} \affiliation{CERN, CH-1211 Geneva 23, Switzerland}
\author{Azmi Ali Altintas} \email{[email protected]} \affiliation{Department of Physics, Faculty of Science, Istanbul University, 34116, Vezneciler, Istanbul, Turkey}
\author{Can Yesilyurt} \email{can\texttt{-{}-}[email protected]} \affiliation{Department of Physics, Faculty of Science, Istanbul University, 34116, Vezneciler, Istanbul, Turkey}
\author{Cihan Bay\i nd\i r} \email{[email protected]} \affiliation{\.{I}stanbul Technical University, Engineering Faculty, 34469 Maslak, \.{I}stanbul, Turkey} \affiliation{Bo\u{g}azi\c{c}i University, Engineering Faculty, 34342 Bebek, \.{I}stanbul, Turkey} \affiliation{CERN, CH-1211 Geneva 23, Switzerland}
\date{\today}
\begin{abstract} Black holes are conjectured to be the fastest information scramblers, and within holographic duality, the speed of quantum information scrambling in thermal states of quantum systems is at the heart of studies of chaos and black hole dynamics. Here, considering the Ising interaction on the thermal state of spin chains with Dzyaloshinskii-Moriya (DM) interaction and measuring out-of-time-order correlation functions, we study the effect of the DM interaction on the speed of scrambling of quantum information. Contrary to its advantages in quantum information and metrology, such as exciting entanglement and quantum Fisher information, we show that the DM interaction speeds up information scrambling. We also show that increasing temperature slows down the scrambling process due to vanishing quantum correlations.
\end{abstract}
\keywords{Dzyaloshinskii-Moriya interaction; spin chain; quantum chaos; information scrambling; out-of-time-order correlations; OTOC}
\maketitle \section{Introduction} The anisotropic antisymmetric Dzyaloshinskii-Moriya (DM) interaction has been attracting particular attention in quantum information science since Zhang found that it excites the entanglement of spin chains, or protects it against increasing temperature~\cite{Zhang07PRA}, motivating several other works on various quantum systems~\cite{Jafari08PRB,Kargarian09PRA,Mehran14PRA,Song14PhysA,Ozaydin2020LP}. Sharma et al. studied entanglement sudden death and birth~\cite{Sharma13QIP} and entanglement dynamics~\cite{Sharma14QIP} of qubit-qutrit systems, as well as of Werner states~\cite{Sharma15QIP} and qubit-qutrit systems with x-component DM interaction~\cite{Sharma16CTP}. Because quantum Fisher information (QFI), unlike entanglement measures, is not an entanglement monotone and can be increased via local operations and classical communication~\cite{Erol14SRep}, we asked whether it too can be excited via the DM interaction, and showed that it can be~\cite{Ozaydin15SRep,Ozaydin20OQEL}. Bound entanglement (BE)~\cite{Horodecki1998PRLIsThere} and its activation are among the most interesting phenomena in quantum mechanics~\cite{Horodecki1999PRL,Ozaydin-BoundZeno}. Along this vein, Sharma et al. showed that the DM interaction can be used as an agent to free BE~\cite{Sharma16QIP}. Besides these \textit{positive} influences on quantum dynamics, Vahedi et al. showed that the DM interaction can drive a quantum system into chaos~\cite{Vahedi16Chaos}, which motivates the present work, given the close relation between information scrambling and chaos in quantum systems.
Measuring out-of-time-order correlation (OTOC) functions has recently opened new insights in \textit{black hole in the lab} studies, in particular in quantum chaos within holographic duality. By reversing the sign of the Hamiltonian of a many-body system, Swingle et al. showed that the scrambling of quantum information can be probed~\cite{Swingle16PRA}. Dag et al. studied information scrambling and OTOCs in cold atoms~\cite{Ceren19PRA}, and also showed that OTOC functions can detect quantum phases~\cite{Ceren19PRL}. Very recently, Sharma et al. have established mathematical connections between quantum information scrambling and quantum correlation quantifiers through OTOC functions~\cite{Sharma21QIP}.
In this work, we investigate the influence of DM interaction on the speed of information scrambling in quantum systems by calculating the OTOC functions. DM interaction has a good reputation in quantum information and computation tasks.
However, finding that it scrambles quantum information faster could damage this reputation in systems which exhibit chaotic dynamics under certain conditions. On the other hand, it has the potential to contribute to \textit{black hole in the lab} studies. Either way, it offers new insights into our understanding of the fundamentals of quantum mechanics and into the development of quantum technologies.
\section{Physical Model and Information Scrambling} For the interaction Hamiltonian $H_I$, we consider the following Ising model, which is a simplified version of the power-law quantum Ising model considered by Swingle and Halpern~\cite{Swingle18PRA} for a rapid scrambling at early times \begin{equation}\label{eq:HamIsing}
H_{\text{I}}=-\sum^{n-1}_{r=1}J \sigma^z_r \sigma^z_{r+1} - \sum^{n}_{r=1} h^x \sigma^x_r - \sum^{n}_{r=1} h^z_r \sigma^z_r, \end{equation}
\noindent where the interaction-energy scale is chosen to be $J=-1$, the transverse field and the position-dependent longitudinal field are chosen to be $h^x=1.05$, and $h_r^z=0.375 (-1)^r$, respectively.
Although this simplified version does not lead to such rapid scrambling with an exponential growth of OTOCs at early times, it is sufficient for observing the influence of the DM interaction on the scrambling.
The OTOC operators are chosen to be the Pauli-x operators on the first and the last qubits, i.e. $V=\sigma_1^x$ and $W=\sigma_n^x$. The operator $W$ is subject to a time evolution described by the Hamiltonian $H$ as $W_t = U(-t) W U(t)$, where $U(t) = e^{-iHt}$ with $\hbar=1$. We measure the out-of-time-order correlation function through the fidelity $\mathcal{F}$ of two instances of the initial state $\rho_i$ as $F(t)=\sqrt {Re[\mathcal{F}(\rho_a,\rho_b)]}$ with $\rho_a = W_t V \rho_i V^{\dagger} W_t^{\dagger}$ and $\rho_b = V W_t \rho_i W_t^{\dagger} V^{\dagger}$. Hence, the deviation of $F(t)$ from unity, i.e., the discrepancy between $\rho_a$ and $\rho_b$, detects the scrambling of quantum information.
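For concreteness, a minimal numerical sketch of how $F(t)$ can be evaluated for a small chain is given below (plain NumPy/SciPy; an illustrative reconstruction, not the code used to produce the figures). The maximally mixed initial state, the squared (Uhlmann) fidelity convention and the zero-based site indexing are assumptions made only for this sketch; $\rho_i$ should be replaced by the thermal state of the DM chain defined below.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, sqrtm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(single, site, n):
    # embed a single-site operator at position `site` of an n-qubit chain
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n, J, hx = 4, -1.0, 1.05            # small chain, parameters quoted above
H = np.zeros((2**n, 2**n), dtype=complex)
for r in range(n - 1):              # sites are zero-indexed here
    H += -J * site_op(sz, r, n) @ site_op(sz, r + 1, n)
for r in range(n):
    H += -hx * site_op(sx, r, n) - 0.375 * (-1)**(r + 1) * site_op(sz, r, n)

V, W = site_op(sx, 0, n), site_op(sx, n - 1, n)
rho_i = np.eye(2**n, dtype=complex) / 2**n   # placeholder: replace with the
                                             # thermal DM state exp(-H_DM/kT)/Z

def fidelity(a, b):                 # squared (Uhlmann) fidelity convention
    s = sqrtm(a)
    return np.real(np.trace(sqrtm(s @ b @ s)))**2

def F(t):
    U = expm(-1j * H * t)
    Wt = U.conj().T @ W @ U
    rho_a = Wt @ V @ rho_i @ V.conj().T @ Wt.conj().T
    rho_b = V @ Wt @ rho_i @ Wt.conj().T @ V.conj().T
    return np.sqrt(np.real(fidelity(rho_a, rho_b)))
\end{verbatim}
Such a brute-force construction is limited to short chains, but it already exhibits the deviation of $F(t)$ from unity discussed below.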
\begin{figure}
\caption{Information scrambling speed with respect to strength of Dzyaloshinskii-Moriya interaction. }
\label{fig:fig1}
\end{figure}
Along with such a spin-1/2 interaction model and OTOC operators, one can consider an arbitrary two-level quantum system to study information scrambling. The system we consider in this work is the thermal state of a chain of $n$ spin-1/2 particles with DM interaction, described by the Hamiltonian
\begin{equation}\label{eq:Ham}
H_{\text{DM}}=\sum^{n-1}_{k=1} {1 \over 2} \left[ J_x \sigma^x_k \sigma^x_{k+1}
+ J_y \sigma^y_k \sigma^y_{k+1}
+ J_z \sigma^z_k \sigma^z_{k+1}
+ \overrightarrow{D} \cdot (\overrightarrow{\sigma}_{k} \times \overrightarrow{\sigma}_{k+1}) \right], \end{equation}
\noindent where $J$ is the coupling constant, $D$ is the strength of DM interaction, and we choose $J_x = J_y$, $J_z < 0$ and $\overrightarrow{D}= D \overrightarrow{z}$. The density matrix of the thermal entangled state of the spin-1/2 system which we use as the initial state $\rho_i$ is found as \begin{equation}
\rho_i = e^{- H_{\text{DM}} / kT } / \text{Tr}(e^{- H_{\text{DM}} / kT }), \end{equation}
\noindent where $k$ is the Boltzmann constant and $T$ is the temperature.
Since our major question in this work is whether the DM interaction speeds up quantum information scrambling, we first set $J_z = -1$ and choose $T = 0.05$. For a set of values of the DM interaction strength ranging from $D = 0$ to $D = 1$, we plot $F(t)$ in Fig.~\ref{fig:fig1}, which clearly shows that as the strength of the DM interaction increases, quantum information is scrambled faster.\\ \indent In the above result, in order to observe the effect of the DM interaction alone, we kept the other system parameters constant. However, thermalization and decoherence effects are at the heart of scrambling in quantum systems. Lewis-Swan et al. showed that fidelity OTOCs can shed light on the connections between scrambling, thermalization, entanglement and quantum chaos~\cite{SwanNatComm}. In order to discriminate scrambling from decoherence, Landsman et al. implemented and verified scrambling on an ion trap quantum computer by designing a teleportation circuit~\cite{LandsmanNature}. Because temperature is a key factor in the quantum dynamics of the thermal state of a spin chain, it is natural to investigate how increasing temperature affects the scrambling speed. One might expect in general that temperature speeds up the scrambling process. However, as can be seen from Fig.~\ref{fig:fig2}, where we set $J_z = -1$ and $D = 1$, scrambling slows down with increasing temperature. The physical interpretation of this result is that as the temperature increases, the quantum correlations of the initial state decrease, as can be calculated easily, beyond the point that the DM interaction can excite them. Hence scrambling, which is a purely quantum mechanical effect, tends to vanish. In other words, at high temperatures there is no significant quantum information left to scramble.
\begin{figure}
\caption{Information scrambling speed with respect to temperature. }
\label{fig:fig2}
\end{figure}
\section{Conclusion} In conclusion, we have studied the influence of the DM interaction on the speed of information scrambling in quantum systems. Focusing on a spin-1/2 chain with DM interaction and considering the Ising interaction, we have shown that the DM interaction speeds up quantum information scrambling. Our work has the potential to contribute to studies of chaos and black hole dynamics within holographic duality.
\section*{Acknowledgments} \noindent F.O. acknowledges the financial support of Tokyo International University Personal Research Fund. C.Y. acknowledges the Istanbul University Scientific Research Fund, Grant No. BAP-2019-33825.
\section*{Compliance with ethical standards} \noindent \textbf{Conflict of interest} We have no competing interests.
\noindent \textbf{Ethics statement} This work did not involve any active collection of human data.
\noindent \textbf{Data accessibility statement} This work does not have any experimental data.
\noindent \textbf{Funding} We have no competing financial interests.
\end{document} | arXiv |
Why does this large Newtonian telescope's front cover have two or three holes in it?
The Michael Bernardo video How to use an Equatorial Mount for Beginners shows a large Newtonian telescope on an equatorial mount.
The cover of the telescope's large aperture shows what looks like three holes, each with its own removable sub-cover, but I might be misunderstanding what I'm looking at; perhaps the "central object" is just a handle to rotate the cover when the other two holes are open.
But that raises the question "why two holes" rather than just one. I'm pretty sure that these are not used for this kind of thing but it would be great to know how they are used!
observational-astronomy telescope amateur-observing optics
uhoh
These are used to reduce the aperture of the instrument. While you see three in this image, likely only one is removable.
The instrument appears to be a Newtonian reflector, which means it would have a secondary mirror near the front (and center). So the thing that looks like a knob in the center of the telescope cover is probably just for grip (and not likely removable).
One of the off-axis caps is probably an actual cap that can be removed. The other thing that looks like a cap is typically non-removable; it is a holder for the cap on the opposite side (the one that can be removed).
If you observe a bright object, such as the Moon, you can use this to reduce the total amount of light.
Solar filters on large aperture telescopes will often employ the same idea. The Sun is plenty bright; collecting "enough" light isn't the problem. Using a solid cap with an off-axis opening (off-axis only because it is necessary to avoid the central obstruction in the telescope) can help limit the total energy entering the telescope to avoid over-heating internal components and damaging the instrument.
Tim Campbell
mystery solved, thanks! – uhoh Sep 13 '20 at 2:44
While the image cannot tell whether one, two or three of the caps are removable, two or three things can be said about what would be sensible:
The central one likely is not removable as it is where the obstruction for the central secondary mirror sits.
The other two - hopefully - are removable. Ideally those would be three openings forming an equilateral triangle, but two will do for most purposes: The optical resolution of an instrument depends on its aperture - which means its diameter, not its area, even when parts are obstructed: $\theta = \frac{\lambda}{D}$ where $\theta$ is the angular resolution (an important property of telescopes) and $D$ the aperture. The resolution in one direction is independent of the resolution in the perpendicular direction. Thus with two openings, you keep the resolution of the original aperture in one dimension, but have the resolution of a single small opening in the perpendicular dimension. Using three small openings keeps the resolution equal in both directions of the focal plane while still reducing the clear aperture considerably (rough numbers below).
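To put rough numbers on the two-hole case (hypothetical sizes, since the actual dimensions of this instrument aren't given): two 5 cm holes whose centers span about 20 cm give, at $\lambda \approx 550\,\mathrm{nm}$, roughly $\lambda/B \approx 0.6''$ along the line joining the holes (with $B$ the center-to-center spacing), but only about $1.22\,\lambda/d \approx 2.8''$ across it; a third hole would even this out.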
You'd want to use a front cap like that shown, with removable caps for the holes, when you want to observe the Sun. You can apply solar filters to the smaller holes and still have a nearly identical optical resolution of the telescope as at night. At the same time you save on possibly expensive solar filter material, and, assuming the usual sun-filter foil, you minimize the risk from damaged filters (a larger filter is more easily damaged by handling than two small ones). The price argument would be even stronger if you assume ND filters instead of self-made filters with optical sun-filter foil. Ideally you'd have 3 equally-spaced caps so you don't lose resolution in one direction.
Keeping the size of the aperture makes sense for amateur telescopes, too: with a 5 cm aperture one gets a resolution of 2 arc seconds, with 20 cm you have 0.5 arc seconds, and with a 50 cm aperture one gets 0.2 arc seconds. Assume a typical DSLR camera with about 5500 pixels across the sensor width and a telescope with 2 m focal length. That gives an angular pixel resolution of 0.4" - that's well below the 2" for a 5 cm aperture, but of the order of the optics for a typical 20 cm aperture - typical for a C8 or similar. Wikipedia gives best seeing conditions as 0.4" - so being able to resolve that optically makes sense for such instruments - or they would waste their potential. It's not unheard of that amateur astrophotographers take their equipment to places like La Palma or Hawaii in their holidays to profit from good environmental conditions... However, generally we want the atmosphere to be the limit, not the optics, so that we have the chance to get good results in good conditions or even via lucky imaging as possible on bright objects, especially the Sun.
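Spelling out that pixel-scale arithmetic (with my assumed sensor width, which isn't stated above): a roughly 22 mm wide APS-C sensor with 5500 pixels has a pixel pitch of about $22\,\mathrm{mm}/5500 \approx 4\,\mu\mathrm{m}$, and at 2 m focal length that corresponds to $206265'' \times \frac{4\times10^{-6}\,\mathrm{m}}{2\,\mathrm{m}} \approx 0.4''$ per pixel.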
I've seen telescopes where both outer caps could be removed, and some where one was just fake.
planetmaker
Analysis of the heterogeneous multiscale method for elliptic homogenization problems
by Weinan E, Pingbing Ming and Pingwen Zhang
J. Amer. Math. Soc. 18 (2005), 121-156
A comprehensive analysis is presented for the heterogeneous multiscale method (HMM for short) applied to various elliptic homogenization problems. These problems can be either linear or nonlinear, with deterministic or random coefficients. In most cases considered, optimal estimates are proved for the error between the HMM solutions and the homogenized solutions. Strategies for retrieving the microstructural information from the HMM solutions are discussed and analyzed.
R.A. Adams and J.J.F. Fournier, Sobolev Spaces, second edition, Academic Press, New York, 2003.
Grégoire Allaire, Homogenization and two-scale convergence, SIAM J. Math. Anal. 23 (1992), no. 6, 1482–1518. MR 1185639, DOI 10.1137/0523084
Ivo Babuška, Homogenization and its application. Mathematical and computational problems, Numerical solution of partial differential equations, III (Proc. Third Sympos. (SYNSPADE), Univ. Maryland, College Park, Md., 1975) Academic Press, New York, 1976, pp. 89–116. MR 0502025
Ivo Babuška, Solution of interface problems by homogenization. I, SIAM J. Math. Anal. 7 (1976), no. 5, 603–634. MR 509273, DOI 10.1137/0507048
Alain Bensoussan, Jacques-Louis Lions, and George Papanicolaou, Asymptotic analysis for periodic structures, Studies in Mathematics and its Applications, vol. 5, North-Holland Publishing Co., Amsterdam-New York, 1978. MR 503330
L. Boccardo and F. Murat, Homogénéisation de problèmes quasi-linéaires, Publ. IRMA, Lille, 3 (1981), no. 7, 13–51.
J. F. Bourgat, Numerical experiments of the homogenization method for operators with periodic coefficients, Computing methods in applied sciences and engineering (Proc. Third Internat. Sympos., Versailles, 1977) Lecture Notes in Math., vol. 704, Springer, Berlin, 1979, pp. 330–356. MR 540121
Achi Brandt, Multi-level adaptive solutions to boundary-value problems, Math. Comp. 31 (1977), no. 138, 333–390. MR 431719, DOI 10.1090/S0025-5718-1977-0431719-X
Susanne C. Brenner and L. Ridgway Scott, The mathematical theory of finite element methods, Texts in Applied Mathematics, vol. 15, Springer-Verlag, New York, 1994. MR 1278258, DOI 10.1007/978-1-4757-4338-8
Philippe G. Ciarlet, The finite element method for elliptic problems, Studies in Mathematics and its Applications, Vol. 4, North-Holland Publishing Co., Amsterdam-New York-Oxford, 1978. MR 0520174
P. G. Ciarlet and P.-A. Raviart, The combined effect of curved boundaries and numerical integration in isoparametric finite element methods, The mathematical foundations of the finite element method with applications to partial differential equations (Proc. Sympos., Univ. Maryland, Baltimore, Md., 1972) Academic Press, New York, 1972, pp. 409–474. MR 0421108
Ph. Clément, Approximation by finite element functions using local regularization, Rev. Française Automat. Informat. Recherche Opérationnelle Sér. 9 (1975), no. R-2, 77–84 (English, with Loose French summary). MR 0400739
Joseph G. Conlon and Ali Naddaf, On homogenization of elliptic equations with random coefficients, Electron. J. Probab. 5 (2000), no. 9, 58. MR 1768843, DOI 10.1214/EJP.v5-65
L.J. Durlofsky, Numerical calculation of equivalent grid block permeability tensors for heterogeneous porous media, Water Resour. Res., 28 (1992), 699-708.
Weinan E, Homogenization of linear and nonlinear transport equations, Comm. Pure Appl. Math. 45 (1992), no. 3, 301–326. MR 1151269, DOI 10.1002/cpa.3160450304
Weinan E and Bjorn Engquist, The heterogeneous multiscale methods, Commun. Math. Sci. 1 (2003), no. 1, 87–132. MR 1979846, DOI 10.4310/CMS.2003.v1.n1.a8
W. E and B. Engquist, The heterogeneous multiscale method for homogenization problems, submitted to MMS, 2002.
Weinan E and Bjorn Engquist, Multiscale modeling and computation, Notices Amer. Math. Soc. 50 (2003), no. 9, 1062–1070. MR 2002752
W. E and X.Y. Yue, Heterogeneous multiscale method for locally self-similar problems, Comm. Math. Sci., 2 (2004), 137–144.
Yalchin R. Efendiev, Thomas Y. Hou, and Xiao-Hui Wu, Convergence of a nonconforming multiscale finite element method, SIAM J. Numer. Anal. 37 (2000), no. 3, 888–910. MR 1740386, DOI 10.1137/S0036142997330329
Mark Freidlin, Functional integration and partial differential equations, Annals of Mathematics Studies, vol. 109, Princeton University Press, Princeton, NJ, 1985. MR 833742, DOI 10.1515/9781400881598
M. I. Freidlin and A. D. Wentzell, Random perturbations of dynamical systems, 2nd ed., Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 260, Springer-Verlag, New York, 1998. Translated from the 1979 Russian original by Joseph Szücs. MR 1652127, DOI 10.1007/978-1-4612-0611-8
N. Fusco and G. Moscariello, On the homogenization of quasilinear divergence structure operators, Ann. Mat. Pura Appl. (4) 146 (1987), 1–13. MR 916685, DOI 10.1007/BF01762357
David Gilbarg and Neil S. Trudinger, Elliptic partial differential equations of second order, 2nd ed., Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 224, Springer-Verlag, Berlin, 1983. MR 737190, DOI 10.1007/978-3-642-61798-0
Thomas Y. Hou and Xiao-Hui Wu, A multiscale finite element method for elliptic problems in composite materials and porous media, J. Comput. Phys. 134 (1997), no. 1, 169–189. MR 1455261, DOI 10.1006/jcph.1997.5682
Ioannis G. Kevrekidis, C. William Gear, James M. Hyman, Panagiotis G. Kevrekidis, Olof Runborg, and Constantinos Theodoropoulos, Equation-free, coarse-grained multiscale computation: enabling microscopic simulators to perform system-level analysis, Commun. Math. Sci. 1 (2003), no. 4, 715–762. MR 2041455, DOI 10.4310/CMS.2003.v1.n4.a5
J. Knap and M. Ortiz, An analysis of the quasicontinuum method, J. Mech. Phys. Solids, 49 (2001), 1899-1923.
S. M. Kozlov, The averaging of random operators, Mat. Sb. (N.S.) 109(151) (1979), no. 2, 188–202, 327 (Russian). MR 542557
A. M. Matache, I. Babuška, and C. Schwab, Generalized $p$-FEM in homogenization, Numer. Math. 86 (2000), no. 2, 319–375. MR 1777492, DOI 10.1007/PL00005409
Graeme W. Milton, The theory of composites, Cambridge Monographs on Applied and Computational Mathematics, vol. 6, Cambridge University Press, Cambridge, 2002. MR 1899805, DOI 10.1017/CBO9780511613357
P.B. Ming and X.Y. Yue, Numerical methods for multiscale elliptic problems, preprint, 2003.
Shari Moskow and Michael Vogelius, First-order corrections to the homogenised eigenvalues of a periodic composite medium. A convergence proof, Proc. Roy. Soc. Edinburgh Sect. A 127 (1997), no. 6, 1263–1299. MR 1489436, DOI 10.1017/S0308210500027050
François Murat and Luc Tartar, $H$-convergence, Topics in the mathematical modelling of composite materials, Progr. Nonlinear Differential Equations Appl., vol. 31, Birkhäuser Boston, Boston, MA, 1997, pp. 21–43. MR 1493039
Gabriel Nguetseng, A general convergence result for a functional related to the theory of homogenization, SIAM J. Math. Anal. 20 (1989), no. 3, 608–623. MR 990867, DOI 10.1137/0520043
J. Tinsley Oden and Kumar S. Vemaganti, Estimation of local modeling error and goal-oriented adaptive modeling of heterogeneous materials. I. Error estimates and adaptive algorithms, J. Comput. Phys. 164 (2000), no. 1, 22–47. MR 1786241, DOI 10.1006/jcph.2000.6585
G. C. Papanicolaou and S. R. S. Varadhan, Boundary value problems with rapidly oscillating random coefficients, Random fields, Vol. I, II (Esztergom, 1979) Colloq. Math. Soc. János Bolyai, vol. 27, North-Holland, Amsterdam-New York, 1981, pp. 835–873. MR 712714
Rolf Rannacher and Ridgway Scott, Some optimal error estimates for piecewise linear finite element approximations, Math. Comp. 38 (1982), no. 158, 437–445. MR 645661, DOI 10.1090/S0025-5718-1982-0645661-4
Christoph Schwab and Ana-Maria Matache, Generalized FEM for homogenization problems, Multiscale and multiresolution methods, Lect. Notes Comput. Sci. Eng., vol. 20, Springer, Berlin, 2002, pp. 197–237. MR 1928567, DOI 10.1007/978-3-642-56205-1_{4}
Ridgway Scott, Optimal $L^{\infty }$ estimates for the finite element method on irregular meshes, Math. Comp. 30 (1976), no. 136, 681–697. MR 436617, DOI 10.1090/S0025-5718-1976-0436617-2
Sergio Spagnolo, Convergence in energy for elliptic operators, Numerical solution of partial differential equations, III (Proc. Third Sympos. (SYNSPADE), Univ. Maryland, College Park, Md., 1975) Academic Press, New York, 1976, pp. 469–498. MR 0477444
Luc Tartar, An introduction to the homogenization method in optimal design, Optimal shape design (Tróia, 1998) Lecture Notes in Math., vol. 1740, Springer, Berlin, 2000, pp. 47–156. MR 1804685, DOI 10.1007/BFb0106742
Jinchao Xu, Two-grid discretization techniques for linear and nonlinear PDEs, SIAM J. Numer. Anal. 33 (1996), no. 5, 1759–1777. MR 1411848, DOI 10.1137/S0036142992232949
V. V. Yurinskiĭ, Averaging of symmetric diffusion in a random medium, Sibirsk. Mat. Zh. 27 (1986), no. 4, 167–180, 215 (Russian). MR 867870
V. V. Zhikov, On an extension and an application of the two-scale convergence method, Mat. Sb. 191 (2000), no. 7, 31–72 (Russian, with Russian summary); English transl., Sb. Math. 191 (2000), no. 7-8, 973–1014. MR 1809928, DOI 10.1070/SM2000v191n07ABEH000491
V. V. Zhikov, S. M. Kozlov, and O. A. Oleĭnik, Usrednenie differentsial′nykh operatorov, "Nauka", Moscow, 1993 (Russian, with English and Russian summaries). MR 1318242
Weinan E
Affiliation: Department of Mathematics and PACM, Princeton University, Princeton, New Jersey 08544 and School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China
Email: [email protected]
Pingbing Ming
Affiliation: No. 55, Zhong-Guan-Cun East Road, Institute of Computational Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100080, People's Republic of China
Email: [email protected]
Pingwen Zhang
Affiliation: School of Mathematical Sciences, Peking University, Beijing 100871, People's Republic of China
Email: [email protected]
Received by editor(s): January 2, 2003
Published electronically: September 16, 2004
Additional Notes: The work of the first author was partially supported by ONR grant N00014-01-1-0674 and the National Natural Science Foundation of China through a Class B Award for Distinguished Young Scholars 10128102.
The work of the second author was partially supported by the Special Funds for the Major State Basic Research Projects G1999032804 and was also supported by the National Natural Science Foundation of China 10201033.
The work of the third author was partially supported by the Special Funds for the Major State Research Projects G1999032804 and the National Natural Science Foundation of China for Distinguished Young Scholars 10225103.
We thank Bjorn Engquist for inspiring discussions on the topic studied here.
The copyright for this article reverts to public domain 28 years after publication.
Journal: J. Amer. Math. Soc. 18 (2005), 121-156
MSC (2000): Primary 65N30, 74Q05; Secondary 74Q15, 65C30
DOI: https://doi.org/10.1090/S0894-0347-04-00469-2 | CommonCrawl |
Starwerk Articles
This is where you'll find our thoughts about recent (and not) astronomy, earth science and other science news, our answers to questions we get from people a lot, plus whatever else strikes our fancy. Sometimes we get into a few of the mathematical or scientific details, but never too deeply.
Ever falling, ever missing…
That is an orbit.
2020-02-01 0:00 Filed in: Astronomy | Physics. Written by: Kevin McLin
What Is An Orbit?
To understand orbits you must first understand motion. In particular, you must understand circular motion and what causes it. Many people think that nothing causes circular motion, and that if an object is moving in a circular path, then it will continue to move in a circular path on its own. Forever.
According to Newton's First Law of Motion (which was actually discovered by Galileo and later adopted by Newton), an object in motion will continue to move along at a constant speed and with a constant direction unless acted upon by a force. Which is to say that unless something induces an object to change either its speed or its direction, both will stay the same. Forever.
In an orbit, the direction, at least, is constantly changing. So that means that there must be a force acting to maintain the orbital motion, at least according to Newton. To understand how this works, lets first consider the motion of something more down to earth.
Imagine you are swinging an object, perhaps a tennis ball, on the end of a string. Imagine further that the string maintains a constant length, so the tennis ball will move in a circular path. If the speed of the object is constant, then what force is acting on the ball to make it change its direction? The answer is probably fairly obvious: the string pulls it into a circle. If you let go of the string, then the tension in it drops to zero and it no longer exerts a force on the ball. The ball will go flying off in whatever direction it was headed at the moment you let go. If you don't believe this is true - and a significant number of people do not - I urge you to give this a try. Just make sure you do it in a place where you are not going to knock somebody with your missile when you let go of the string… And make sure you use something light, like a tennis ball, not something heavy, like a rock.
We can understand how this force changes the direction of the object's motion by applying Newton's Second Law of Motion.
$$ \vec{F}=m\vec{a} $$
The little arrows above the \(F\) and the \(a\) mean that these quantities, force and acceleration, are vectors. They have both a size (how big the force is, for example) and a direction. What's more, since the left and right sides of this equation are equal to each other, the force \(\vec{F}\) and acceleration \(\vec{a}\) must point in the same direction because they are the only two vectors present. The mass \(m\) has no direction (it is called a scalar); it just scales the acceleration, adjusting its length and converting it into a force.
At this point, if you don't know what vectors are, or if you are a little bit rusty about how they work, you should have a look at my blog entry on vectors. It's short and will provide the basic ideas that you will need for this article.
Circular Kinematics
Have a look at the diagram below. It shows an object moving in a circular path. This could be any circular path. Its velocity and acceleration vectors are shown as well.
The critical thing to notice in this diagram is that the acceleration vector and the velocity vector are perpendicular to each other. For circular motion this is always true. This is because for circular motion the acceleration points directly at the center of the circle; it is called centripetal acceleration, a fancy name for center-seeking acceleration. Motion along any path is always tangent to the path at whatever point the moving object happens to be. For a circle, a tangent line at any point is always perpendicular to the radial direction, and so the velocity and the centripetal acceleration are always perpendicular. Mathematically, the magnitude of the centripetal acceleration always satisfies the equation below; the variables have the same meanings as in the diagram above, except we have placed a subscript \(c\) to emphasize that the expression refers specifically to centripetal acceleration. This will be important later for understanding the shape of orbits.
$$ a_c=\frac{v^2}{R} $$
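As a quick sense of scale (an added illustration with round numbers): the Moon travels at about \(v \approx 1.02\) kilometers per second at a distance \(R \approx 3.84\times10^{8}\) meters from Earth's center, which gives

$$ a_c=\frac{v^2}{R}\approx\frac{(1.02\times 10^{3}\ \mathrm{m/s})^2}{3.84\times 10^{8}\ \mathrm{m}}\approx 2.7\times 10^{-3}\ \mathrm{m/s^2} $$

That is only about 1/3600 of the 9.8 m/s\(^2\) acceleration we experience at Earth's surface.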
From the physics of motion (kinematics) we learn that the acceleration is the change of velocity with time. In fact, this is how acceleration is defined, so it is true for any acceleration from any cause. Mathematically we can write this as follows.
$$ \vec{a}=\frac{\Delta \vec{v}}{\Delta t} $$
The symbol \(\Delta\vec{v}\) represents the change in the velocity that occurs over some time interval \(\Delta t\). Because the velocity is a vector, the change in velocity is also a vector. That means the velocity can change in two ways: it can change its magnitude (the object can speed up or slow down) or it can change its direction. It can also do both, of course, but we will leave that case aside for the moment. For circular orbits, only the direction of the velocity changes. How do we know that? Let's have a look at the next diagram to begin to understand why that is the case.
In the diagram below, the change in the velocity over some time interval has replaced the acceleration. Both vectors point in the same direction. They must, because they are proportional to each other: to get the change in velocity we simply multiply the acceleration by the scalar quantity \(\Delta t.\) By the way, that should give you some insight into why scalars are given that name - they scale the length of vectors (and convert their units sometimes, too, as here).
From the diagram above, we know that during the time interval in question, the orbiting object will acquire a new velocity, call it \(\vec{v}^\prime .\) This new velocity is found by adding the change in velocity, \(\Delta \vec{v},\) to the original velocity \(\vec{v}.\) In the diagram, \(\vec{v}^\prime\) is represented by the green arrow.
$$ \vec{v}^\prime=\vec{v}+\Delta \vec{v} $$
In the time required to reach this new velocity, the object will have moved along its orbit to a new position, and this position can be found by parallel transporting the \(\vec{v}^\prime\) vector so as to find the point on the circle where the vector runs tangent to the circular path - keep in mind that we can move vectors around, and as long as we preserve their length and direction they remain the same. This is why we can move \(\vec{v}^\prime\) in the manner described. The intersection of the dotted green vector and the circle shows roughly where the object will be located when it acquires the velocity \(\vec{v}^\prime .\) The point is marked with a white dot.
As an aside, in an actual orbit the velocity is changing all the time, so even over arbitrarily small time intervals there will be (quite small) changes to the velocity, and the orbit is thus a smooth curve, not a disjointed sequence of discrete jumps in velocity as we have implied here.
But how do we know that the speed does not change? Well, for a circular path, the acceleration is always perpendicular to the velocity. That means that there is no amount of acceleration that is in the same direction (or the opposite direction) as the velocity vector. Only such an acceleration component could change the speed; a perpendicular acceleration component can only change the velocity's direction. If you know how work is defined in physics, then you can think of this in those terms: a force (or acceleration) that is perpendicular to a displacement (or velocity) cannot do any work on an object, and so cannot change the object's energy (or in other words, change its speed). If you don't know about the definition of work, then no worries. If you understand the tail-to-head addition rule for vectors, then you can probably convince yourself that the speed of a particle moving in a circle will remain constant if the only acceleration acting on it is the centripetal acceleration.
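If you would rather see this numerically, here is a minimal sketch (Python; the radius, speed, and time step are arbitrary example values, not numbers from this article) that pushes a velocity vector with an acceleration kept perpendicular to it and checks that the speed stays essentially constant while the direction swings around.

```python
import math

# A minimal numerical check: apply an acceleration that is always
# perpendicular to the velocity (magnitude v^2 / R) and watch the speed.
R = 2.0              # radius of the circular path, m (arbitrary)
vx, vy = 3.0, 0.0    # initial velocity components, m/s (arbitrary)
dt = 1.0e-5          # time step, s

v0 = math.hypot(vx, vy)
for _ in range(200_000):                  # about 2 s of motion
    v = math.hypot(vx, vy)
    a = v**2 / R                          # centripetal magnitude
    ax, ay = -a * vy / v, a * vx / v      # rotate v by 90 degrees to get the direction
    vx += ax * dt
    vy += ay * dt

print(f"initial speed: {v0:.6f} m/s")
print(f"final speed:   {math.hypot(vx, vy):.6f} m/s")
# The two agree to a few parts in 100,000; the tiny leftover drift is a
# discretization artifact that shrinks as dt is made smaller.
```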
Orbital Dynamics
The kinematics discussed above are true for any type of circular motion. They are true for a tennis ball swinging with constant speed on a string with constant length, a car traveling around a curve of constant curvature with constant speed, or a satellite moving along a circular orbit at a constant speed. All of these are examples of circular motion, and the relationship between speed, velocity and acceleration will be the same for all of them. However, the causes of those accelerations are quite different for each of these cases, and it is the forces that we turn to now.
In the discussion below we will assume that the central object is much more massive than the orbiting object. This is so we can imagine the central object is essentially stationary as the smaller object travels around it. In fact, both objects orbit about their common center of mass. For a central object that is much more massive, the center of mass of the two nearly coincides with the center of mass of the central object. Under these assumptions only a small error is introduced. So just keep in mind that our treatment assumes a very massive, immobile central object that is orbited by a much less massive "test object." If you would like to see a more general treatment you should have a look at a text book on orbital dynamics.
As we mentioned already, the force that causes a tennis ball to accelerate inward is the tension acting through the string. The force causing the acceleration of a car around a curve is the friction between the road and its tires. If you doubt this, try turning on an icy road sometime. You will find the exercise fraught with difficulty.
For an orbiting object, which is the primary concern of this article, the force that creates the centripetal acceleration is gravity. That is the force that keeps the planets and other members of the solar system from flying off into space. Gravity also keeps us all from springing up from the ground and flying into the sky. In the Newtonian view, gravity always acts along the line connecting two objects, and it is always attractive. So if we imagine a planet orbiting a star, we will have a diagram just like first one, with the force of gravity providing the centripetal acceleration. We could say that in this case the centripetal acceleration is the gravitational acceleration.
(We usually discuss gravity in terms of the force it exerts, as that force is described by Newton's Universal Law of Gravitation. In many ways it makes a lot more sense to think of gravity as an acceleration instead of a force. After going through this article, perhaps you will believe that this is a better way to think about gravity too.)
The diagram above has a lot going on, and each of its parts is described below. It shows the centripetal acceleration, which is explicitly given as the gravitational acceleration, \(\vec{a}_g.\) It shows the orbital velocity of the object, \(\vec{v}.\) It also gives the radius of the orbit, \(r,\) and the mass of the central object, \(M.\) The symbol \(\hat{r}\) is a radial unit vector: it points in the radial direction, always toward the orbiting object, and it has unit length (a length of one in arbitrary units). It is a mathematical indicator that the gravitational force and acceleration point in the opposite direction of the radius vector, which points from the center of the circle to the orbiting object. Don't confuse the orbital size, \(r,\) with the orbital radius vector \(\vec{r}=r\hat{r}\). The size is shown in the diagram, but \(\vec{r}\) has been left off because showing it would make the diagram busier than it already is. Just keep in mind, if you like, that \(\vec{r}\) points in the direction of \(\hat{r}\) and has length \(r.\)
\(\vec{a}_g\) - gravitational acceleration
\(M\) - object at the center of the orbit, the Sun in the solar system, Earth for the Moon and other satellites.
\(r\) - the size of the orbit, the orbital radius.
\(\hat{r}\) - radial unit vector. It always points radially outward from the center of the circle toward the orbiting object. It has unit length (a length of 1 in arbitrary units) and no physical units like meters, inches, feet, etc. It is a mathematical object that tells us that the gravitational force and acceleration act along the radial direction only.
\(\vec{v}\) - the orbital velocity. Its magnitude, \(v\), is the speed of the orbiting object.
The gravitational acceleration is found by using Newton's Second Law of Motion along with his Universal Law of Gravitation. Together they give the expression shown above. We demonstrate this below.
Begin with the Law of Gravitation in the first line, then substitute using the Second Law in the second line. We then cancel the common factor of \(m\), the mass of the orbiting object, to arrive at the gravitational acceleration in the third line.
\begin{equation}
\begin{split}
\vec{F}_g &=-\frac{GMm}{r^2}\hat{r}\\
m\vec{a}_g & = -\frac{GMm}{r^2}\hat{r}\\
\vec{a}_g & = -\frac{GM}{r^2}\hat{r}
\end{split}
\end{equation}
The gravitational acceleration is only in the radial direction (it points in the \(-\hat{r}\) direction) and so the object orbits at constant speed, just as we found before for circular motion. Only with gravity we can think a bit more deeply about the motion than we did previously.
In order for the orbit to be circular, the velocity must be tuned just right. The tendency of the test object to fall toward the center under the influence of gravity must exactly match its tendency to move away from the center provided by its orbital speed. For a given gravitational acceleration there is only a single orbital speed that will satisfy this. It is the speed that you would put into the expression for centripetal acceleration to get the gravitational acceleration that the object feels. Only at this speed will the orbit be a circle. This case is represented in the diagram above.
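To make that "just right" speed concrete, you can set the centripetal acceleration \(v^2/r\) equal to the gravitational acceleration \(GM/r^2\) and solve for \(v\). The short Python sketch below does this for a satellite a few hundred kilometers above Earth's surface; the constants are standard approximate values and the altitude is just an illustrative choice, not something taken from this article.

```python
import math

G = 6.674e-11        # gravitational constant, N m^2 kg^-2 (approximate)
M_earth = 5.972e24   # mass of Earth, kg (approximate)
R_earth = 6.371e6    # mean radius of Earth, m (approximate)

altitude = 400e3                 # a typical low-Earth-orbit altitude, m
r = R_earth + altitude           # orbital radius measured from Earth's centre

# Setting v^2 / r = G M / r^2 and solving for v gives the circular orbital speed.
v_circ = math.sqrt(G * M_earth / r)

print(f"circular orbital speed at {altitude/1e3:.0f} km altitude: {v_circ/1e3:.2f} km/s")
# ~7.7 km/s, which is why low orbiting satellites circle Earth in roughly 90 minutes.
```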
For other speeds, we get orbits that are not circular. For example, let's imagine that the orbital velocity of the test object is zero. In that case, the test object will fall toward the center of the circle, speeding up as it goes. Eventually it will collide with the central object that attracts it. But what if there is nothing at the center? Perhaps we are dealing with a star that is part of a large cluster of stars, and so its orbit is determined by the mutual gravity of all the stars, not just a single massive object at the center of its orbit. In that case, some of the details of our treatment would have to be modified. In particular, the gravitational acceleration would no longer depend on \(1/r^2\) all the way to the center, but we can still use this simple picture to help ourselves understand the orbital properties of the system.
As the test object approaches the center of the star cluster, there will presumably not be anything at the center, at least not for long. The object will fall toward the center faster and faster, and when it arrives there, finding nothing to collide with, it will race right through. As the test object travels outward from the cluster center it will begin to feel the tug of gravitational attraction pulling it back toward the center: the gravitational acceleration now points opposite its velocity, and so the test object slows down. Eventually, as it reaches the original distance from which it fell, though now on the opposite side of the cluster, the object will gradually stop. It will then fall back, racing through the cluster in the opposite direction to return to its original position. The object will continue to oscillate this way forever, or until something stops it; perhaps it will at some point collide with another cluster member. This plunging orbit case, with zero orbital velocity, is depicted below, at upper left.
No Orbital Velocity
Large Orbital Velocity
Small Orbital Velocity
Very Large Orbital Velocity
In the schematic diagrams above, the dotted green circle is understood to have a constant size. It represents the orbit of an object with a set of orbital parameters (energy and angular momentum) that will produce an orbit of that particular shape and size. The solid green curves are for orbits with different energy and angular momentum. These produce orbits with different shapes and sizes. The circular orbit is shown on each diagram to make comparison possible between the various other orbits. The velocity required to produce this circular orbit would lie somewhere between the "Large Orbital Velocity" and "Small Orbital Velocity" of the middle pair of examples.
The case just considered, that of zero orbital velocity, is an extreme case. In general objects do have some non-radial velocity component. Let's imagine giving our test object a small nudge sideways. Now part of its velocity is not radial. What is the result of that? Well, the orbiting body will still fall toward the center, but as it does so it will move sideways, too. So it won't oscillate along a line as described above. However, as it falls inward its original sideways velocity will begin to feel a stronger and stronger gravitational effect acting to slow it down. Eventually, its sideways velocity will stop completely. Thereafter the sideways motion will reverse. The upshot of its motion and the changing gravitational attraction (again, we assume that the gravitational acceleration is proportional to \(1/r^2\) ) will cause the test object to move faster and faster along its path, which will sweep closely to the massive central object and then arc out to the point at which it was first set in motion. It will have exactly its initial velocity when it arrives there, and so the motion will repeat this way indefinitely. The situation is depicted at top right, above. Notice that the orbit is now a narrow ellipse, so still not a circle, but not a line, either.
We can imagine giving the test object larger and larger sideways velocities to start out its motion. As we do this, the arc it follows will broaden more and more as the initial (sideways) speed increases, and its closest passage to the central object (called perihelion for an object orbiting the Sun, say) becomes larger and larger. At some point the orbit will be a circle. That will happen when the initial sideways kick is enough to exactly cancel the inward fall toward the center, and so the object will follow a path of constant distance from the center of the orbit, a circle, just as we imagined initially. Incidentally, a circle is still an ellipse; it's just a special case, as we'll see below. The circular path is shown using a dashed pattern, and in each of the diagrams this circle should be taken as being identical in size to all the others. Its depiction has been altered so that the different orbits, which each have different sizes, can be shown in the various diagrams.
If we give the body a larger velocity kick than this, then gravity will be too weak to counteract the sideways motion and keep the orbit a circle. Instead, the object will move a small distance away from the center. In this case the orbit will again be an ellipse, but it will lie outside the initial circle. The object will eventually slow down, cease its outward motion and begin to fall back. In this case it will sweep back past the central object and return to its initial position. The orbit might look something like the path shown above at lower left. Finally, we could give the object so much velocity that it simply never comes back. In that case its energy of motion (kinetic energy) is larger than the energy in its gravitational attraction (gravitational potential energy) and the orbit will be a hyperbola. The orbit does not close back on itself, and the object leaves the system. That case is shown above at lower right.
There is a borderline condition on the orbit as well, the boundary between the closed orbits and the open ones discussed above. In this case the kinetic energy of the orbit exactly equals the absolute value of the potential energy. Orbits with this condition are parabolas. They are not closed, and in fact the object never falls back. Instead, it moves outward forever, slowing more and more as it does, and it approaches, but never quite reaches, stillness the farther it travels. These orbits differ from the hyperbolic orbits because in the latter, the motion of the object approaches a finite, non-zero speed as it moves farther away. These parabolic orbits appear quite similar to the hyperbolic ones, though they are narrower, and so there is no separate diagram shown for them.
We will look at the effect of energy on the orbit in greater detail in the next section.
You probably noticed that the elliptical orbits are offset from the circle. This effect is real: in elliptical orbits the center of attraction is offset from the center of the orbit. In the solar system, the Sun seems to be at the center, but this is only true because we generally only think about the planets, and they have near-circular orbits. For comets, which have very elliptical orbits, the Sun is nowhere near the center. It is offset and is located at one of the foci of the comet's elliptical orbit. Ellipses have two foci, and nothing is located at the second one. As an ellipse becomes closer and closer to a circle the foci move closer together. Finally, for a circle they lie directly on top of one another. So even for a circular orbit the central object is at a focus of the ellipse, but in the case of a circle this happens to be at its center.
The diagram below shows an ellipse and its two foci, along with the geometrical relationships between the axes of the ellipse and the location of the foci. The curve is the set of points that satisfy the expression \(r_1+r_2=2a\) where \(a,\) shown in blue, is called the semi-major axis of the ellipse. It is half the distance across the longest straight path through the ellipse. If you consider this mathematical relationship for the case of a circle, where the two foci lie on top of one another, you will see that it is still satisfied, so a circle is indeed a special case of an ellipse.
In the solar system, the Sun is at one of the foci. Nothing is located at the other focus. This is typical for systems with a single dominant object. For example, the planets all have nearly circular orbits. That means the two foci lie nearly on top of one another, well inside the Sun. For objects with highly elliptical orbits, like the shape shown in the figure, the foci are quite distant from one another. These are the kinds of orbits followed by many comets and asteroids.
Of course, some objects are not bound to the Sun at all. They fall through the solar system on orbits that never loop back and return. Those orbits would be parabolae or hyperbolae, not ellipses. Their second foci are located an infinite distance away from the Sun. All of these shapes are allowed orbits in Newtonian gravity. They are called conic sections because they are the curves created by the intersection of a plane with the surface of a cone.
Orbits and Conservation Principles
The various cases outlined above can all be understood as expressions of two basic physical laws: the law of conservation of energy and the law of conservation of angular momentum. The first law states that the total energy of a system remains constant. The second says that the angular momentum of a system remains constant. Let's break these down so that we understand what they mean.
The energy of any object has two parts. One is the energy it has because of its motion. This is called kinetic energy. In the non-relativistic case (when speeds are not close to the speed of light) the kinetic energy of an object is given by the expression below. The kinetic energy is often represented by the symbol \(T,\) and that is how we will represent it.
$$ T = \frac{1}{2}mv^2 $$
The expression gives the kinetic energy of an object with mass \(m\) that moves with a speed \(v.\)
Lots of other types of energy are really just kinetic energy masquerading as something else. For example, the temperature of a gas is really just a representation of the average kinetic energy of the particles in the gas. While there is always a spread of speeds of particles, the faster the typical particle moves, the higher the average kinetic energy and the higher the temperature of the gas.
The second type of energy a system can have is called potential energy. Potential energy results from interactions between objects. So, for example, if there is an attractive interaction between two objects, like gravity, then there will be energy associated with that interaction. Since the strength of the interaction usually depends on the distance between the objects, the potential energy will change when the objects move either closer or farther from one another. For gravity, the potential energy is given by the expression below. Potential energy is usually represented by \(U.\)
$$ U = -\frac{GM m}{r} $$
The \(G\) in this equation is the gravitational constant, the same constant is we find in Newton's Universal Law of Gravitation. Remember, it is there to keep track of the units of measure we are using (feet or meters for length, kilograms or slugs for mass, for example), and to give us some idea about how strong the gravitational interaction is. Beyond this it is not very important. In the SI system of units, \(G\) has a value of \(\rm{6.67\times10^{-11}\,N\,m^2\,kg^{-2}}.\) That is quite small, so gravity is only a strong interaction when very large amounts of mass are involved - or perhaps over very tiny distances.
Note that the potential energy related to gravity is always negative. This reflects the fact that Newtonian gravity always produces an attractive force. It is never repulsive. The negative sign for attractive forces is a convention, but it carries across to all branches of physics. For example, in contrast with gravity, the electric force can be attractive or repulsive. Two positive charges repel one another, as do two negative charges. Only when there is a negative charge paired with a positive charge is the force attractive. So the electric potential energy can be positive or negative, it just depends on the sign of the charges. With gravity, since there is only one kind of mass (there is no negative mass), the energy is always negative, and therefore by convention attractive.
Potential and kinetic are the only two kinds of energy there are. All forms or sources of energy that you have heard about are combinations of the two of them. For example, the energy in fuels like oil or wood is a type of electrical potential energy. This energy is stored in the chemical bonds (interactions between electrons) in the fuel. The energy extracted by a dam to drive an electric generator starts out as gravitational potential energy. It is converted to kinetic energy as the water falls from the top of the dam to the bottom where the generator is located, and is finally extracted from the kinetic energy of the water and stored in the electric and magnetic fields of the generator - potential energy. Even nuclear energy is a form of potential energy, but in this case it is energy stored in the interactions between particles in the atomic nucleus, not electrons.
There are exactly four kinds of potential energy, one for each of the four fundamental interactions: gravitational, electromagnetic, strong nuclear and weak nuclear. Fortunately, to understand orbits in the solar system or in galaxies, etc., we only need to worry about gravitational potential energy. The other fundamental interactions do not play a role.
For an orbiting object, we can write an expression for its total energy, which is the sum of its potential energy and kinetic energy. The total energy is given below.
$$ E = T+U =\frac{1}{2}mv^2 -\frac{GMm}{r} $$
Consider this expression. It provides useful (and quite general) insights into orbits and orbital motion. First, note that the total energy can be either positive, negative or zero. The kinetic energy is always positive, of course, because it is the product of terms that are themselves always positive. As we mentioned already, the potential energy for gravitation is always negative. We have three possible cases to consider, and only these three cases.
E > 0
The total energy is positive. The kinetic energy exceeds the (absolute value of the) potential energy.
In this case the energy of motion is greater than the energy of attraction, so the system is not bound.
The system will not be stable and objects within the system will generally follow paths that are not closed, as shown in the lower right diagram above.
E= 0
The total energy is zero. The kinetic energy exactly equals the (absolute value of the) potential energy.
In this case the energy of motion balances the energy of attraction, so the system is marginally bound/unbound.
The system is not really stable, as it will expand to infinite size in time. The paths of particles are not closed, as shown in the lower right diagram above.
E< 0
The total energy is negative. The kinetic energy is less than the (absolute value of the) potential energy.
In this case the energy of motion is smaller than the energy of attraction, so the system is bound.
The system is stable, and objects within it will generally follow closed orbital paths as in any of the three diagrams above, excluding the lower right diagram.
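As a quick illustration of these three cases, the Python sketch below computes the total energy of a small body and reports whether the resulting orbit is bound, marginally bound, or unbound. The masses, speeds, and distances are made-up example values (and the constants are standard approximate ones), used only to show the logic.

```python
import math

G = 6.674e-11        # N m^2 kg^-2 (approximate)
M_sun = 1.989e30     # kg (approximate)

def orbit_type(m, v, r, M=M_sun):
    """Classify an orbit from the total energy E = (1/2) m v^2 - G M m / r."""
    E = 0.5 * m * v**2 - G * M * m / r
    if E < 0:
        return E, "bound (elliptical or circular orbit)"
    elif E > 0:
        return E, "unbound (hyperbolic path)"
    # Exact equality is a mathematical boundary case; it rarely occurs with floats.
    return E, "marginally bound (parabolic path)"

# A 1000 kg probe at 1 au from the Sun, at three different speeds (made-up values).
r = 1.496e11  # 1 au in metres (approximate)
for v in (30e3, 42e3, 50e3):  # m/s
    E, kind = orbit_type(1000.0, v, r)
    print(f"v = {v/1e3:4.0f} km/s  ->  E = {E:+.2e} J  ->  {kind}")
```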
From the three cases above we see that the boundary between bound and unbound orbits occurs when the total energy is zero. This allows us to assign a specific value to what is meant by "Very Large Orbital Velocity" in the description of orbits. To do so, we set the total energy equal to zero and solve for the velocity.
\begin{equation}
\begin{split}
0 &=\frac{1}{2}mv^2-\frac{GMm}{r}\\
0 & = \frac{1}{2}v^2-\frac{GM}{r}\\
v^2 & = \frac{2GM}{r}\\
v_{esc} & = \sqrt{\frac{2GM}{r}}
\end{split}
\end{equation}
So this critical speed, above which an object becomes unbound and below which it is bound, is given by the simple expression above. It is important enough that it has a special name: it is the escape velocity. You can understand why it is so named by considering the following cases.
First, imagine you throw an object, say your keys, upward. You know they will rise, ever more slowly, until they finally stop. Then they will fall, increasing their speed as they fall. This slowing and falling is a direct example of the exchange of energy between kinetic and potential. You impart a certain amount of kinetic energy to the keys when you throw them upward. That kinetic energy is converted to potential as they rise, and when all of it has been converted they no longer have any energy of motion. They have stopped. Of course, gravity has not ceased its pull, so the keys begin to fall. As they do so, they lose potential energy, converting it into kinetic. The keys speed up. Energy is conserved.
Now imagine that you throw the keys a little harder. They have a bit more kinetic energy at the outset, so they must rise a little bit farther before it is all used up, converted into potential energy. They then fall back down and arrive back at the ground moving with their initial speed, but now going down instead of up. You can repeat the experiment over and over, giving a little more energy each time. As a result the keys will travel higher before they stop and fall back down. Is there a point at which you have thrown the keys so fast that they just never come back down? If you have understood the derivation of the escape velocity, you will know that the answer is yes. If you throw the keys at the escape velocity - or at a higher speed - then they have so much kinetic energy that gravity will never be able to stop them and pull them back down.
For Earth, the escape velocity is about \(\rm{11\,km\,s^{-1}}.\) That is quite fast. Certainly it is faster than anyone could possibly throw a set of keys. However, it is not so fast that a rocket cannot propel objects fast enough to escape. But Earth is so huge that even a giant rocket like the Saturn V could propel only a tiny capsule out of Earth's immediate vicinity and send it to the Moon. Escaping from an even larger object, the Sun for example, is harder still.
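You can check the 11 km/s figure yourself with the formula above. The short sketch below evaluates \(v_{esc}=\sqrt{2GM/r}\) for a few bodies; the constants are standard approximate values, and the Moon and Sun rows are extra examples added for comparison, not taken from this article.

```python
import math

G = 6.674e-11  # gravitational constant, N m^2 kg^-2 (approximate)

# (name, mass in kg, radius in m) - standard approximate values
bodies = [
    ("Earth", 5.972e24, 6.371e6),
    ("Moon",  7.342e22, 1.737e6),
    ("Sun",   1.989e30, 6.957e8),
]

for name, M, r in bodies:
    v_esc = math.sqrt(2 * G * M / r)   # escape speed from the surface
    print(f"{name:5s}: v_esc ~ {v_esc/1e3:6.1f} km/s")
# Earth ~ 11.2 km/s, Moon ~ 2.4 km/s, Sun ~ 617 km/s
```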
So objects that travel at or above the escape velocity are not bound, and they escape out into space. But what happens to objects that move more slowly? They will be bound, but why then don't they simply fall into the object attracting them? Why doesn't the Moon fall into Earth? Why doesn't Earth fall into the Sun? After all, the energy of gravitational attraction for these bound objects is greater than their energy of motion. Why doesn't gravity simply pull them down into the center of their orbit? In short, it doesn't because it can't, at least not according to Newtonian gravity (general relativity, a more complete theory of gravity, says something else, but we will leave that to a different article).
Conservation of energy is important for orbits, but it does not alone determine them. Conservation of angular momentum also plays a vital role. It is conservation of angular momentum that prevents orbiting objects from collapsing inward, and that keeps them stable over time. Conservation of angular momentum is the subject to which we turn our attention next.
The diagram above depicts an object at three places along its orbital path. At left and denoted by \(p\) is the periapsis, the point of closest approach to the attracting body. For a body orbiting the Sun this is given the special name perihelion (after Helios, the sun god in Greek mythology), and for a body orbiting Earth it is called perigee (after Gaia, the Greek goddess of Earth). In any case, this is the point of an orbit's closest passage to the central attracting object.
At right is the apoapsis, the point at which the object is at its farthest point from the central object. For objects orbiting the Sun this point is called aphelion, and for those orbiting Earth it is called apogee. Similar terms pertain for objects orbiting other stars: periastron and apastron. Collectively, the extreme points of an orbit are generically called the apsides (ap-si-dees), regardless of the body being orbited. These points in the orbit are important not just because they are the turning points, where an orbiting body changes from moving toward/away from its central object, but because they are the easiest points at which to determine the orbital angular momentum.
Angular momentum depends both on an object's linear momentum, \(\vec{p}\), and its position, \(\vec{r}\). It is generally denoted by a capital \(\vec{L}\). The angular momentum of an object is always computed in reference to some particular point in space. For orbital motion, it is generally convenient to take as a reference the central object that is being orbited (like the Sun, for example), and that is what we do here. The blue dot in the diagram represents that object. All distances are measured from that point, and so all the radius vectors in the diagram start there. The velocities are all tangent to the orbit at the point where the orbiting body is located at the moment of interest. With these conventions in mind, the angular momentum can be computed in a consistent way using the expression given below.
$$\vec{L} = \vec{r} \times \vec{p}$$
In this expression, \(\vec{p}\) is the linear momentum, the product of the object's mass and its velocity.
$$\vec{p} = m \vec{v}$$
From the notation we are using it is clear that \(\vec{L}\) is a vector. Its direction does not really concern us here, so we will not describe how it is determined. If you want to know, take a look at the blog post about why galaxies are disks. That topic is intimately tied up with the direction of the vector \(\vec{L}\), so we spend much more time discussing it there.
For our purposes presently, we are more concerned with the magnitude of the angular momentum vector, or in other words, its length. That is given by the expression below.
$$L =rp\sin\theta$$
The expression gives the magnitude of the angular momentum, \(L\), at any point in the orbit. In general, the values of \(r\), \(p\) and \(\theta\) will be different at each point in the orbit. To understand the expression, look at the diagram above, and consider the case when the object is between apoapsis and periapsis. Notice that the angle between the direction of the vectors \(\vec{r}\) and \(\vec{v}\) (the velocity vector is always tangent to the curve at the point of interest) is an angle we call \(\theta\). The angle \(\theta\) varies as the particle moves around its path, just as \(r\) and \(v\) do. The direction of \(\vec{r}\) is made more clear by extending it with the dotted line. The magnitude of the angular momentum, \(\vec{L}\), depends not only on the magnitudes of the position and velocity vectors, but on their relative orientations. If \(\vec{r}\) and \(\vec{v}\) are parallel (\(\theta = 0\)), or antiparallel (\(\theta = 180^{\circ}\)), then there is no angular momentum at all because \(\sin 0\,= 0\) and \(\sin 180^{\circ}\, =\, 0\). If \(\vec{r}\) and \(\vec{v}\) are perpendicular ( \(\theta = 90^{\circ}\) ), then the angular momentum is simply their product, because \(\sin 90^{\circ}\, =\,1\). That is the case at both the apoapsis and periapsis.
The important thing to understand about the angular momentum is that it is a constant of the orbit, meaning it does not change as the object moves around its orbit. This is true for both the direction of \(\vec{L}\) and its magnitude, but as we stated, we are presently only concerned with the magnitude. So in particular, consider the cases of perihelion and aphelion. The angular momentum must be the same at these points in the orbit.
\begin{equation}
\begin{split}
L_a &= L_p\\
r_a p_a & = r_p p_p\\
r_a m v_a & = r_p m v_p\\
r_a v_a & = r_p v_p\\
v_a &= \left(\frac{r_p}{r_a}\right) v_p
\end{split}
\end{equation}
Have a look at the last line in the equations above. It says that the speed of the orbiting object when it is at apoapsis is smaller than its speed at periapsis in exactly the proportion that its distance at apoapsis is larger than its distance at periapsis. And of course, the same is true at all the other points in the orbit, with the caveat that at those positions we have to consider the angle \(\theta\) because it will not be \(90^{\circ}\) at those points, it will vary. What this means is that the object moves slowest at apoapsis, where it is farthest away, and it moves fastest at periapsis, where it is closest. At other points in the orbit it moves with some intermediate speed. But there is a limit to how small \(r_p\) can be because there is a limit on how large \(v_p\) can be. Certainly the velocity at periapsis cannot be so large that the kinetic energy at periapsis causes the total energy there to exceed the total energy at apoapsis. If it did it would violate energy conservation, an independently satisfied condition. The two conservation laws together place an absolute constraint on how close the object can come to the central gravitating object at its closest approach.
The momentum relations alone do not give us any information about the actual speeds at any point in the orbit, they just tell us how the speed at any given point compares to speeds at other places along the path. Similarly, the energy equations alone do not tell us anything about the shape of the orbit. That depends in detail on the orbital angular momentum. The two sets of equations together, however, along with the expression for the gravitational force (or equivalently, the gravitational potential energy), provide us with all the information we need to compute the exact orbit of the object.
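As an illustration of how the two conservation laws combine, the sketch below solves the pair of equations \(\tfrac{1}{2}v_p^2 - GM/r_p = \tfrac{1}{2}v_a^2 - GM/r_a\) (energy) and \(r_p v_p = r_a v_a\) (angular momentum) for the periapsis and apoapsis speeds of a comet around the Sun. The apsidal distances are made-up, roughly Halley-like values chosen only for illustration, and the constants are standard approximate ones.

```python
import math

G = 6.674e-11        # N m^2 kg^-2 (approximate)
M_sun = 1.989e30     # kg (approximate)
au = 1.496e11        # metres per astronomical unit (approximate)

# Roughly Halley-like apsidal distances (illustrative values only)
r_p = 0.59 * au      # perihelion distance
r_a = 35.0 * au      # aphelion distance

# Substituting v_a = (r_p / r_a) v_p from angular momentum conservation into
# the energy equation and solving for v_p gives:
v_p = math.sqrt(2 * G * M_sun * r_a / (r_p * (r_p + r_a)))
v_a = (r_p / r_a) * v_p

print(f"speed at perihelion: {v_p/1e3:5.1f} km/s")
print(f"speed at aphelion:   {v_a/1e3:5.1f} km/s")
# The comet races past the Sun at tens of km/s and crawls near aphelion,
# exactly as conservation of angular momentum requires.
```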
The basic takeaway is that orbital motion consists of objects falling toward each other under the influence of gravity, but not usually merging because conservation of angular momentum keeps them from getting too close. Instead, they fall toward and then away from each other in a continuous cycle, exchanging kinetic energy for potential energy, over and over again. That is what an orbit is.
This article covers the basic concepts needed to understand orbits. If you are interested in seeing the mathematical details that go along with it, you should consult a book on celestial dynamics. An excellent one is called Fundamentals of Astrodynamics by Roger R. Bate, Donald D. Mueller and Jerry E. White. There is a Dover edition available. The mathematics required is not too difficult. One needs to know only what is learned in the first year or two of introductory college level physics and math courses. If you don't want to trouble with that, then hopefully this short description will give you a better understanding of what orbits are and how they work.
Show that $\lambda^{d}(E_{d})=\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)}$ and determine behavior for $d \to \infty$
Let $d \in \mathbb N$ and $E_{d}:=\{x \in \mathbb R^{d}:|x|\leq 1\}$
Prove that $$ \lambda^{d}(E_{d})=\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)} $$
and determine $\lambda^{d}(E_{d})$ as $d \to \infty$
I struggle with d-dimensional volume, so I will try the behavior for $d \to \infty$
$$\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)}=\frac{\pi^{\frac{d}{2}}}{\int_{0}^{\infty}x^{\frac{d}{2}}e^{-x}dx}$$
Looking particularly at:
$\int_{0}^{\infty}x^{\frac{d}{2}}e^{-x}dx$ it looks like partial integration, but I wouldn't as $d \in \mathbb N$. I would use substitution, namely $y = x^{\frac{d}{2}}\Rightarrow \frac{2}{d}dy=x^{\frac{d}{2}-1}dx$. But this is a dead end, as $x$ does not disappear
real-analysis integration measure-theory lebesgue-integral
MinaThuma
$\begingroup$ Are you looking for the derivation of the volume or the limiting behavior? $\endgroup$ – user1337 Jan 8 '19 at 12:12
$\begingroup$ Rather the deviation as $d$ gets larger $\endgroup$ – MinaThuma Jan 8 '19 at 12:21
$\begingroup$ I think you are looking for volume of n-ball. $\endgroup$ – StubbornAtom Jan 8 '19 at 12:59
$\begingroup$ Also see math.stackexchange.com/q/67039/321264. $\endgroup$ – StubbornAtom Jan 8 '19 at 13:05
Let $t=d/2$. Observe that $$0 \leq \frac{\pi^t}{\Gamma(t+1)} \leq \frac{\pi^{\lfloor t \rfloor+1}}{\Gamma(\lfloor t \rfloor+1)} $$ and recall that the series $$\sum_{n=0}^\infty \frac{\pi^n}{n!} $$ converges (to $e^\pi$). Convergent series have their terms approaching zero, and the squeeze law shows that the same is true for $\lambda^d (E_d)$.
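A quick numerical sanity check of this limit, using Python's math.gamma (just an illustration, not part of the proof):

```python
import math

def unit_ball_volume(d):
    # lambda^d(E_d) = pi^(d/2) / Gamma(d/2 + 1)
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

for d in (1, 2, 3, 5, 10, 20, 50, 100):
    print(d, unit_ball_volume(d))
# The values peak near d = 5 and then fall off toward 0 very quickly.
```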
Hint for the derivation: compute $\int_{\mathbb{R}^d}{e^{-|x|^2}}$ by separating integrals and by polar coordinates.
For the asymptotic behavior: use the functional equation of $\Gamma$ to have a non-integral formula for $d=2p$ and $d=2p-1$. Then let $p$ go to infinity in both formulas and use Stirling's theorem.
Mindlack
$\begingroup$ Why would I use the integral $\int_{\mathbb R^{d}}e^{-|x|^2}$ ? $\endgroup$ – MinaThuma Jan 8 '19 at 16:42
$\begingroup$ It is the shortest way I know to derive the "volume" of the unit sphere thus the volume of the unit ball. $\endgroup$ – Mindlack Jan 8 '19 at 17:13
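Spelling out that hint (a sketch only; here $\sigma(S^{d-1})$ denotes the surface measure of the unit sphere): on one hand, by Fubini,
$$\int_{\mathbb R^d} e^{-|x|^2}\,d\lambda^d(x)=\left(\int_{\mathbb R}e^{-t^2}\,dt\right)^d=\pi^{\frac d2},$$
and on the other hand, in polar coordinates,
$$\int_{\mathbb R^d} e^{-|x|^2}\,d\lambda^d(x)=\sigma(S^{d-1})\int_0^\infty r^{d-1}e^{-r^2}\,dr=\sigma(S^{d-1})\cdot\tfrac12\Gamma\!\left(\tfrac d2\right).$$
Hence $\sigma(S^{d-1})=\frac{2\pi^{d/2}}{\Gamma(d/2)}$, and since $\lambda^d(E_d)=\int_0^1 \sigma(S^{d-1})r^{d-1}\,dr=\frac{\sigma(S^{d-1})}{d}$, we get
$$\lambda^d(E_d)=\frac{2\pi^{\frac d2}}{d\,\Gamma(\frac d2)}=\frac{\pi^{\frac d2}}{\Gamma(\frac d2+1)}.$$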
BCIT Astronomy 7000: A Survey of Astronomy
By the end of this section, you will be able to:
Understand how astronomers can learn about a star's radius and composition by studying its spectrum
Explain how astronomers can measure the motion and rotation of a star using the Doppler effect
Describe the proper motion of a star and how it relates to a star's space velocity
Analyzing the spectrum of a star can teach us all kinds of things in addition to its temperature. We can measure its detailed chemical composition as well as the pressure in its atmosphere. From the pressure, we get clues about its size. We can also measure its motion toward or away from us and estimate its rotation.
Clues to the Size of a Star
As we shall see in The Stars: A Celestial Census, stars come in a wide variety of sizes. At some periods in their lives, stars can expand to enormous dimensions. Stars of such exaggerated size are called giants. Luckily for the astronomer, stellar spectra can be used to distinguish giants from run-of-the-mill stars (such as our Sun).
Suppose you want to determine whether a star is a giant. A giant star has a large, extended photosphere. Because it is so large, a giant star's atoms are spread over a great volume, which means that the density of particles in the star's photosphere is low. As a result, the pressure in a giant star's photosphere is also low. This low pressure affects the spectrum in two ways. First, a star with a lower-pressure photosphere shows narrower spectral lines than a star of the same temperature with a higher-pressure photosphere ([link]). The difference is large enough that careful study of spectra can tell which of two stars at the same temperature has a higher pressure (and is thus more compressed) and which has a lower pressure (and thus must be extended). This effect is due to collisions between particles in the star's photosphere—more collisions lead to broader spectral lines. Collisions will, of course, be more frequent in a higher-density environment. Think about it like traffic—collisions are much more likely during rush hour, when the density of cars is high.
Second, more atoms are ionized in a giant star than in a star like the Sun with the same temperature. The ionization of atoms in a star's outer layers is caused mainly by photons, and the amount of energy carried by photons is determined by temperature. But how long atoms stay ionized depends in part on pressure. Compared with what happens in the Sun (with its relatively dense photosphere), ionized atoms in a giant star's photosphere are less likely to pass close enough to electrons to interact and combine with one or more of them, thereby becoming neutral again. Ionized atoms, as we discussed earlier, have different spectra from atoms that are neutral.
Spectral Lines.
Figure 1. This figure illustrates one difference in the spectral lines from stars of the same temperature but different pressures. A giant star with a very-low-pressure photosphere shows very narrow spectral lines (bottom), whereas a smaller star with a higher-pressure photosphere shows much broader spectral lines (top). (credit: modification of work by NASA, ESA, A. Field, and J. Kalirai (STScI))
Abundances of the Elements
Absorption lines of a majority of the known chemical elements have now been identified in the spectra of the Sun and stars. If we see lines of iron in a star's spectrum, for example, then we know immediately that the star must contain iron.
Note that the absence of an element's spectral lines does not necessarily mean that the element itself is absent. As we saw, the temperature and pressure in a star's atmosphere will determine what types of atoms are able to produce absorption lines. Only if the physical conditions in a star's photosphere are such that lines of an element should (according to calculations) be there can we conclude that the absence of observable spectral lines implies low abundance of the element.
Suppose two stars have identical temperatures and pressures, but the lines of, say, sodium are stronger in one than in the other. Stronger lines mean that there are more atoms in the stellar photosphere absorbing light. Therefore, we know immediately that the star with stronger sodium lines contains more sodium. Complex calculations are required to determine exactly how much more, but those calculations can be done for any element observed in any star with any temperature and pressure.
Of course, astronomy textbooks such as ours always make these things sound a bit easier than they really are. If you look at the stellar spectra such as those in [link], you may get some feeling for how hard it is to decode all of the information contained in the thousands of absorption lines. First of all, it has taken many years of careful laboratory work on Earth to determine the precise wavelengths at which hot gases of each element have their spectral lines. Long books and computer databases have been compiled to show the lines of each element that can be seen at each temperature. Second, stellar spectra usually have many lines from a number of elements, and we must be careful to sort them out correctly. Sometimes nature is unhelpful, and lines of different elements have identical wavelengths, thereby adding to the confusion. And third, as we saw in the chapter on Radiation and Spectra, the motion of the star can change the observed wavelength of each of the lines. So, the observed wavelengths may not match laboratory measurements exactly. In practice, analyzing stellar spectra is a demanding, sometimes frustrating task that requires both training and skill.
Studies of stellar spectra have shown that hydrogen makes up about three-quarters of the mass of most stars. Helium is the second-most abundant element, making up almost a quarter of a star's mass. Together, hydrogen and helium make up from 96 to 99% of the mass; in some stars, they amount to more than 99.9%. Among the 4% or less of "heavy elements," oxygen, carbon, neon, iron, nitrogen, silicon, magnesium, and sulfur are among the most abundant. Generally, but not invariably, the elements of lower atomic weight are more abundant than those of higher atomic weight.
Take a careful look at the list of elements in the preceding paragraph. Two of the most abundant are hydrogen and oxygen (which make up water); add carbon and nitrogen and you are starting to write the prescription for the chemistry of an astronomy student. We are made of elements that are common in the universe—just mixed together in a far more sophisticated form (and a much cooler environment) than in a star.
As we mentioned in The Spectra of Stars (and Brown Dwarfs) section, astronomers use the term "metals" to refer to all elements heavier than hydrogen and helium. The fraction of a star's mass that is composed of these elements is referred to as the star's metallicity. The metallicity of the Sun, for example, is 0.02, since 2% of the Sun's mass is made of elements heavier than helium.
Appendix K lists how common each element is in the universe (compared to hydrogen); these estimates are based primarily on investigation of the Sun, which is a typical star. Some very rare elements, however, have not been detected in the Sun. Estimates of the amounts of these elements in the universe are based on laboratory measurements of their abundance in primitive meteorites, which are considered representative of unaltered material condensed from the solar nebula (see the Cosmic Samples and the Origin of the Solar System chapter).
Radial Velocity
When we measure the spectrum of a star, we determine the wavelength of each of its lines. If the star is not moving with respect to the Sun, then the wavelength corresponding to each element will be the same as those we measure in a laboratory here on Earth. But if stars are moving toward or away from us, we must consider the Doppler effect (see The Doppler Effect section). We should see all the spectral lines of moving stars shifted toward the red end of the spectrum if the star is moving away from us, or toward the blue (violet) end if it is moving toward us ([link]). The greater the shift, the faster the star is moving. Such motion, along the line of sight between the star and the observer, is called radial velocity and is usually measured in kilometers per second.
Doppler-Shifted Stars.
Figure 2. When the spectral lines of a moving star shift toward the red end of the spectrum, we know that the star is moving away from us. If they shift toward the blue end, the star is moving toward us.
William Huggins, pioneering yet again, in 1868 made the first radial velocity determination of a star. He observed the Doppler shift in one of the hydrogen lines in the spectrum of Sirius and found that this star is moving toward the solar system. Today, radial velocity can be measured for any star bright enough for its spectrum to be observed. As we will see in The Stars: A Celestial Census, radial velocity measurements of double stars are crucial in deriving stellar masses.
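For example, the (non-relativistic) Doppler relation \(v = c\,\Delta\lambda/\lambda_{rest}\) turns a measured wavelength shift directly into a radial velocity. The sketch below applies it to a hypothetical measurement of the hydrogen H-alpha line; the observed wavelength is an illustrative made-up number, not a value from the text.

```python
c = 299_792.458          # speed of light, km/s

lambda_rest = 656.281    # H-alpha rest wavelength, nm (laboratory value)
lambda_obs = 656.325     # hypothetical observed wavelength, nm

delta = lambda_obs - lambda_rest
v_radial = c * delta / lambda_rest   # km/s; positive means moving away (redshift)

print(f"shift: {delta:+.3f} nm  ->  radial velocity: {v_radial:+.1f} km/s")
# A shift of only ~0.04 nm already corresponds to about +20 km/s.
```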
Proper Motion
There is another type of motion stars can have that cannot be detected with stellar spectra. Unlike radial motion, which is along our line of sight (i.e., toward or away from Earth), this motion, called proper motion, is transverse: that is, across our line of sight. We see it as a change in the relative positions of the stars on the celestial sphere ([link]). These changes are very slow. Even the star with the largest proper motion takes 200 years to change its position in the sky by an amount equal to the width of the full Moon, and the motions of other stars are smaller yet.
Large Proper Motion.
Figure 3. Three photographs of Barnard's star, the star with the largest known proper motion, show how this faint star has moved over a period of 20 years. (modification of work by Steve Quirk)
For this reason, with our naked eyes, we do not notice any change in the positions of the bright stars during the course of a human lifetime. If we could live long enough, however, the changes would become obvious. For example, some 50,000 years from now, terrestrial observers will find the handle of the Big Dipper unmistakably more bent than it is now ([link]).
Changes in the Big Dipper.
Figure 4. This figure shows changes in the appearance of the Big Dipper due to proper motion of the stars over 100,000 years.
We measure the proper motion of a star in arcseconds (1/3600 of a degree) per year. That is, the measurement of proper motion tells us only by how much of an angle a star has changed its position on the celestial sphere. If two stars at different distances are moving at the same velocity perpendicular to our line of sight, the closer one will show a larger shift in its position on the celestial sphere in a year's time. As an analogy, imagine you are standing at the side of a freeway. Cars will appear to whiz past you. If you then watch the traffic from a vantage point half a mile away, the cars will move much more slowly across your field of vision. In order to convert this angular motion to a velocity, we need to know how far away the star is.
To know the true space velocity of a star—that is, its total speed and the direction in which it is moving through space relative to the Sun—we must know its radial velocity, proper motion, and distance ([link]). A star's space velocity can also, over time, cause its distance from the Sun to change significantly. Over several hundred thousand years, these changes can be large enough to affect the apparent brightnesses of nearby stars. Today, Sirius, in the constellation Canis Major (the Big Dog) is the brightest star in the sky, but 100,000 years ago, the star Canopus in the constellation Carina (the Keel) was the brightest one. A little over 200,000 years from now, Sirius will have moved away and faded somewhat, and Vega, the bright blue star in Lyra, will take over its place of honor as the brightest star in Earth's skies.
Space Velocity and Proper Motion.
Figure 5. This figure shows the true space velocity of a star. The radial velocity is the component of the space velocity projected along the line of sight from the Sun to a star. The transverse velocity is a component of the space velocity projected on the sky. What astronomers measure is proper motion (μ), which is the change in the apparent direction on the sky measured in fractions of a degree. To convert this change in direction to a speed in, say, kilometers per second, it is necessary to also know the distance (d) from the Sun to the star.
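Numerically, with the proper motion μ in arcseconds per year and the distance d in parsecs, the transverse velocity in km/s is v_T = 4.74 μ d, and the space velocity follows from the radial and transverse components by the Pythagorean theorem. The R sketch below uses values roughly appropriate for Barnard's star; treat them as illustrative rather than precise.

# Transverse velocity from proper motion and distance: v_T = 4.74 * mu * d,
# with mu in arcsec/year and d in parsecs (4.74 converts AU/year into km/s).
mu_arcsec_yr <- 10.3    # proper motion, roughly that of Barnard's star
d_parsec     <- 1.83    # distance in parsecs (about 6 light-years)
v_radial     <- -110    # radial velocity in km/s (negative means approaching)

v_transverse <- 4.74 * mu_arcsec_yr * d_parsec
v_space      <- sqrt(v_radial^2 + v_transverse^2)
c(transverse = v_transverse, space = v_space)   # both in km/s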
We can also use the Doppler effect to measure how fast a star rotates. If an object is rotating, then one of its sides is approaching us while the other is receding (unless its axis of rotation happens to be pointed exactly toward us). This is clearly the case for the Sun or a planet; we can observe the light from either the approaching or receding edge of these nearby objects and directly measure the Doppler shifts that arise from the rotation.
Stars, however, are so far away that they all appear as unresolved points. The best we can do is to analyze the light from the entire star at once. Due to the Doppler effect, the lines in the light that come from the side of the star rotating toward us are shifted to shorter wavelengths and the lines in the light from the opposite edge of the star are shifted to longer wavelengths. You can think of each spectral line that we observe as the sum or composite of spectral lines originating from different speeds with respect to us. Each point on the star has its own Doppler shift, so the absorption line we see from the whole star is actually much wider than it would be if the star were not rotating. If a star is rotating rapidly, there will be a greater spread of Doppler shifts and all its spectral lines should be quite broad. In fact, astronomers call this effect line broadening, and the amount of broadening can tell us the speed at which the star rotates ([link]).
Using a Spectrum to Determine Stellar Rotation.
Figure 6. A rotating star will show broader spectral lines than a nonrotating star.
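Because one limb of the star approaches us while the other recedes, the full Doppler spread across a spectral line is roughly 2v/c times the rest wavelength for a star seen equator-on. A short sketch of turning a measured broadening into a rotation speed (the line width below is invented):

# Rotation speed from Doppler line broadening:
# full spread Delta_lambda ~ 2 * v_rot * lambda_rest / c for a star viewed equator-on.
c_km_s       <- 299792.458
lambda_rest  <- 656.28    # nm
delta_lambda <- 0.30      # nm, hypothetical full width contributed by rotation

v_rot <- c_km_s * delta_lambda / (2 * lambda_rest)
v_rot                     # about 68 km/s; only a lower limit if the spin axis is tilted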
Measurements of the widths of spectral lines show that many stars rotate faster than the Sun, some with periods of less than a day! These rapid rotators spin so fast that their shapes are "flattened" into what we call oblate spheroids. An example of this is the star Vega, which rotates once every 12.5 hours. Vega's rotation flattens its shape so much that its diameter at the equator is 23% wider than its diameter at the poles ([link]). The Sun, with its rotation period of about a month, rotates rather slowly. Studies have shown that stars decrease their rotational speed as they age. Young stars rotate very quickly, with rotational periods of days or less. Very old stars can have rotation periods of several months.
Comparison of Rotating Stars.
Figure 7. This illustration compares the more rapidly rotating star Altair to the slower rotating Sun.
As you can see, spectroscopy is an extremely powerful technique that helps us learn all kinds of information about stars that we simply could not gather any other way. We will see in later chapters that these same techniques can also teach us about galaxies, which are the most distant objects that can we observe. Without spectroscopy, we would know next to nothing about the universe beyond the solar system.
Astronomy and Philanthropy
Throughout the history of astronomy, contributions from wealthy patrons of the science have made an enormous difference in building new instruments and carrying out long-term research projects. Edward Pickering's stellar classification project, which was to stretch over several decades, was made possible by major donations from Anna Draper. She was the widow of Henry Draper, a physician who was one of the most accomplished amateur astronomers of the nineteenth century and the first person to successfully photograph the spectrum of a star. Anna Draper gave several hundred thousand dollars to Harvard Observatory. As a result, the great spectroscopic survey is still known as the Henry Draper Memorial, and many stars are still referred to by their "HD" numbers in that catalog (such as HD 209458).
In the 1870s, the eccentric piano builder and real estate magnate James Lick ([link]) decided to leave some of his fortune to build the world's largest telescope. When, in 1887, the pier to house the telescope was finished, Lick's body was entombed in it. Atop the foundation rose a 36-inch refractor, which for many years was the main instrument at the Lick Observatory near San Jose.
Henry Draper (1837–1882) and James Lick (1796–1876).
Figure 8. (a) Draper stands next to a telescope used for photography. After his death, his widow funded further astronomy work in his name. (b) Lick was a philanthropist who provided funds to build a 36-inch refractor not only as a memorial to himself but also to aid in further astronomical research.
The Lick telescope remained the largest in the world until 1897, when George Ellery Hale persuaded railroad millionaire Charles Yerkes to finance the construction of a 40-inch telescope near Chicago. More recently, Howard Keck, whose family made its fortune in the oil industry, gave $70 million from his family foundation to the California Institute of Technology to help build the world's largest telescope atop the 14,000-foot peak of Mauna Kea in Hawaii (see the chapter on Astronomical Instruments to learn more about these telescopes). The Keck Foundation was so pleased with what is now called the Keck telescope that they gave $74 million more to build Keck II, another 10-meter reflector on the same volcanic peak.
Now, if any of you become millionaires or billionaires, and astronomy has sparked your interest, do keep an astronomical instrument or project in mind as you plan your estate. But frankly, private philanthropy could not possibly support the full enterprise of scientific research in astronomy. Much of our exploration of the universe is financed by federal agencies such as the National Science Foundation and NASA in the United States, and by similar government agencies in the other countries. In this way, all of us, through a very small share of our tax dollars, are philanthropists for astronomy.
Key Concepts and Summary
Spectra of stars of the same temperature but different atmospheric pressures have subtle differences, so spectra can be used to determine whether a star has a large radius and low atmospheric pressure (a giant star) or a small radius and high atmospheric pressure. Stellar spectra can also be used to determine the chemical composition of stars; hydrogen and helium make up most of the mass of all stars. Measurements of line shifts produced by the Doppler effect indicate the radial velocity of a star. Broadening of spectral lines by the Doppler effect is a measure of rotational velocity. A star can also show proper motion, due to the component of a star's space velocity across the line of sight.
For Further Exploration
Berman, B. "Magnitude Cum Laude." Astronomy (December 1998): 92. How we measure the apparent brightnesses of stars is discussed.
Dvorak, J. "The Women Who Created Modern Astronomy [including Annie Cannon]." Sky & Telescope (August 2013): 28.
Hearnshaw, J. "Origins of the Stellar Magnitude Scale." Sky & Telescope (November 1992): 494. A good history of how we have come to have this cumbersome system is discussed.
Hirshfeld, A. "The Absolute Magnitude of Stars." Sky & Telescope (September 1994): 35.
Kaler, J. "Stars in the Cellar: Classes Lost and Found." Sky & Telescope (September 2000): 39. An introduction is provided for spectral types and the new classes L and T.
Kaler, J. "Origins of the Spectral Sequence." Sky & Telescope (February 1986): 129.
Skrutskie, M. "2MASS: Unveiling the Infrared Universe." Sky & Telescope (July 2001): 34. This article focuses on an all-sky survey at 2 microns.
Sneden, C. "Reading the Colors of the Stars." Astronomy (April 1989): 36. This article includes a discussion of what we learn from spectroscopy.
Steffey, P. "The Truth about Star Colors." Sky & Telescope (September 1992): 266. The color index and how the eye and film "see" colors are discussed.
Tomkins, J. "Once and Future Celestial Kings." Sky & Telescope (April 1989): 59. Calculating the motion of stars and determining which stars were, are, and will be brightest in the sky are discussed.
Discovery of Brown Dwarfs: http://w.astro.berkeley.edu/~basri/bdwarfs/SciAm-book.pdf.
Listing of Nearby Brown Dwarfs: http://www.solstation.com/stars/pc10bd.htm.
Spectral Types of Stars: http://www.skyandtelescope.com/astronomy-equipment/the-spectral-types-of-stars/.
Stellar Velocities: https://www.e-education.psu.edu/astro801/content/l4_p7.html.
Unheard Voices! The Contributions of Women to Astronomy: A Resource Guide: http://multiverse.ssl.berkeley.edu/women and http://www.astrosociety.org/education/astronomy-resource-guides/women-in-astronomy-an-introductory-resource-guide/.
When You Are Just Too Small to be a Star: https://www.youtube.com/watch?v=zXCDsb4n4KU. 2013 Public Talk on Brown Dwarfs and Planets by Dr. Gibor Basri of the University of California–Berkeley (1:32:52).
Collaborative Group Activities
The Voyagers in Astronomy feature on Annie Cannon: Classifier of the Stars discusses some of the difficulties women who wanted to do astronomy faced in the first half of the twentieth century. What does your group think about the situation for women today? Do men and women have an equal chance to become scientists? Discuss with your group whether, in your experience, boys and girls were equally encouraged to do science and math where you went to school.
In the section on magnitudes in The Brightness of Stars, we discussed how this old system of classifying how bright different stars appear to the eye first developed. Your authors complained about the fact that this old system still has to be taught to every generation of new students. Can your group think of any other traditional systems of doing things in science and measurement where tradition rules even though common sense says a better system could certainly be found? Explain. (Hint: Try Daylight Savings Time, or metric versus English units.)
Suppose you could observe a star that has only one spectral line. Could you tell what element that spectral line comes from? Make a list of reasons with your group about why you answered yes or no.
A wealthy alumnus of your college decides to give $50 million to the astronomy department to build a world-class observatory for learning more about the characteristics of stars. Have your group discuss what kind of equipment they would put in the observatory. Where should this observatory be located? Justify your answers. (You may want to refer back to the Astronomical Instruments chapter and to revisit this question as you learn more about the stars and equipment for observing them in future chapters.)
For some astronomers, introducing a new spectral type for the stars (like the types L, T, and Y discussed in the text) is similar to introducing a new area code for telephone calls. No one likes to disrupt the old system, but sometimes it is simply necessary. Have your group make a list of steps an astronomer would have to go through to persuade colleagues that a new spectral class is needed.
1: What two factors determine how bright a star appears to be in the sky?
2: Explain why color is a measure of a star's temperature.
3: What is the main reason that the spectra of all stars are not identical? Explain.
4: What elements are stars mostly made of? How do we know this?
5: What did Annie Cannon contribute to the understanding of stellar spectra?
6: Name five characteristics of a star that can be determined by measuring its spectrum. Explain how you would use a spectrum to determine these characteristics.
7: How do objects of spectral types L, T, and Y differ from those of the other spectral types?
8: Do stars that look brighter in the sky have larger or smaller magnitudes than fainter stars?
9: The star Antares has an apparent magnitude of 1.0, whereas the star Procyon has an apparent magnitude of 0.4. Which star appears brighter in the sky?
10: Based on their colors, which of the following stars is hottest? Which is coolest? Archenar (blue), Betelgeuse (red), Capella (yellow).
11: Order the seven basic spectral types from hottest to coldest.
12: What is the defining difference between a brown dwarf and a true star?
Thought Questions
13: If the star Sirius emits 23 times more energy than the Sun, why does the Sun appear brighter in the sky?
14: How would two stars of equal luminosity—one blue and the other red—appear in an image taken through a filter that passes mainly blue light? How would their appearance change in an image taken through a filter that transmits mainly red light?
15: [link] lists the temperature ranges that correspond to the different spectral types. What part of the star do these temperatures refer to? Why?
16: Suppose you are given the task of measuring the colors of the brightest stars, listed in Appendix J, through three filters: the first transmits blue light, the second transmits yellow light, and the third transmits red light. If you observe the star Vega, it will appear equally bright through each of the three filters. Which stars will appear brighter through the blue filter than through the red filter? Which stars will appear brighter through the red filter? Which star is likely to have colors most nearly like those of Vega?
17: Star X has lines of ionized helium in its spectrum, and star Y has bands of titanium oxide. Which is hotter? Why? The spectrum of star Z shows lines of ionized helium and also molecular bands of titanium oxide. What is strange about this spectrum? Can you suggest an explanation?
18: The spectrum of the Sun has hundreds of strong lines of nonionized iron but only a few, very weak lines of helium. A star of spectral type B has very strong lines of helium but very weak iron lines. Do these differences mean that the Sun contains more iron and less helium than the B star? Explain.
19: What are the approximate spectral classes of stars with the following characteristics?
Balmer lines of hydrogen are very strong; some lines of ionized metals are present.
The strongest lines are those of ionized helium.
Lines of ionized calcium are the strongest in the spectrum; hydrogen lines show only moderate strength; lines of neutral and ionized metals are present.
The strongest lines are those of neutral metals and bands of titanium oxide.
20: Look at the chemical elements in Appendix K. Can you identify any relationship between the abundance of an element and its atomic weight? Are there any obvious exceptions to this relationship?
21: Appendix I lists some of the nearest stars. Are most of these stars hotter or cooler than the Sun? Do any of them emit more energy than the Sun? If so, which ones?
22: Appendix J lists the stars that appear brightest in our sky. Are most of these hotter or cooler than the Sun? Can you suggest a reason for the difference between this answer and the answer to the previous question? (Hint: Look at the luminosities.) Is there any tendency for a correlation between temperature and luminosity? Are there exceptions to the correlation?
23: What star appears the brightest in the sky (other than the Sun)? The second brightest? What color is Betelgeuse? Use Appendix J to find the answers.
24: Suppose hominids one million years ago had left behind maps of the night sky. Would these maps represent accurately the sky that we see today? Why or why not?
25: Why can only a lower limit to the rate of stellar rotation be determined from line broadening rather than the actual rotation rate? (Refer to [link].)
26: Why do you think astronomers have suggested three different spectral types (L, T, and Y) for the brown dwarfs instead of M? Why was one not enough?
27: Sam, a college student, just bought a new car. Sam's friend Adam, a graduate student in astronomy, asks Sam for a ride. In the car, Adam remarks that the colors on the temperature control are wrong. Why did he say that?
Figure 9. (credit: modification of work by Michael Sheehan)
28: Would a red star have a smaller or larger magnitude in a red filter than in a blue filter?
29: Two stars have proper motions of one arcsecond per year. Star A is 20 light-years from Earth, and Star B is 10 light-years away from Earth. Which one has the faster velocity in space?
30: Suppose there are three stars in space, each moving at 100 km/s. Star A is moving across (i.e., perpendicular to) our line of sight, Star B is moving directly away from Earth, and Star C is moving away from Earth, but at a 30° angle to the line of sight. From which star will you observe the greatest Doppler shift? From which star will you observe the smallest Doppler shift?
31: What would you say to a friend who made this statement, "The visible-light spectrum of the Sun shows weak hydrogen lines and strong calcium lines. The Sun must therefore contain more calcium than hydrogen."?
Figuring for Yourself
32: In Appendix J, how much more luminous is the most luminous of the stars than the least luminous?
33: For [link] through [link], use the equations relating magnitude and apparent brightness given in the section on the magnitude scale in The Brightness of Stars and [link].
34: Verify that if two stars have a difference of five magnitudes, this corresponds to a factor of 100 in the ratio $\left(\frac{{b}_{2}}{{b}_{1}}\right);$ that 2.5 magnitudes corresponds to a factor of 10; and that 0.75 magnitudes corresponds to a factor of 2.
35: As seen from Earth, the Sun has an apparent magnitude of about −26.7. What is the apparent magnitude of the Sun as seen from Saturn, about 10 AU away? (Remember that one AU is the distance from Earth to the Sun and that the brightness decreases as the inverse square of the distance.) Would the Sun still be the brightest star in the sky?
36: An astronomer is investigating a faint star that has recently been discovered in very sensitive surveys of the sky. The star has a magnitude of 16. How much less bright is it than Antares, a star with magnitude roughly equal to 1?
37: The center of a faint but active galaxy has magnitude 26. How much less bright does it look than the very faintest star that our eyes can see, roughly magnitude 6?
38: You have enough information from this chapter to estimate the distance to Alpha Centauri, the second nearest star, which has an apparent magnitude of 0. Since it is a G2 star, like the Sun, assume it has the same luminosity as the Sun and the difference in magnitudes is a result only of the difference in distance. Estimate how far away Alpha Centauri is. Describe the necessary steps in words and then do the calculation. (As we will learn in the Celestial Distances chapter, this method—namely, assuming that stars with identical spectral types emit the same amount of energy—is actually used to estimate distances to stars.) If you assume the distance to the Sun is in AU, your answer will come out in AU.
39: Do the previous problem again, this time using the information that the Sun is 150,000,000 km away. You will get a very large number of km as your answer. To get a better feeling for how the distances compare, try calculating the time it takes light at a speed of 299,338 km/s to travel from the Sun to Earth and from Alpha Centauri to Earth. For Alpha Centauri, figure out how long the trip will take in years as well as in seconds.
40: Star A and Star B have different apparent brightnesses but identical luminosities. If Star A is 20 light-years away from Earth and Star B is 40 light-years away from Earth, which star appears brighter and by what factor?
41: Star A and Star B have different apparent brightnesses but identical luminosities. Star A is 10 light-years away from Earth and appears 36 times brighter than Star B. How far away is Star B?
42: The star Sirius A has an apparent magnitude of −1.5. Sirius A has a dim companion, Sirius B, which is 10,000 times less bright than Sirius A. What is the apparent magnitude of Sirius B? Can Sirius B be seen with the naked eye?
43: Our Sun, a type G star, has a surface temperature of 5800 K. We know, therefore, that it is cooler than a type O star and hotter than a type M star. Given what you learned about the temperature ranges of these types of stars, how many times hotter than our Sun is the hottest type O star? How many times cooler than our Sun is the coolest type M star?
giant
a star of exaggerated size with a large, extended photosphere
proper motion
the angular change per year in the direction of a star as seen from the Sun
radial velocity
motion toward or away from the observer; the component of relative velocity that lies in the line of sight
space velocity
the total (three-dimensional) speed and direction with which an object is moving through space relative to the Sun
BCIT Astronomy 7000: A Survey of Astronomy by OpenStax is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
Revisiting area risk classification of visceral leishmaniasis in Brazil
Gustavo Machado ORCID: orcid.org/0000-0001-7552-6144,
Julio Alvarez2,3 na1,
Haakon Christopher Bakka4 na1,
Andres Perez5,
Lucas Edel Donato6,
Francisco Edilson de Ferreira Lima Júnior6,
Renato Vieira Alves6 &
Victor Javier Del Rio Vilas7
BMC Infectious Diseases volume 19, Article number: 2 (2019)
Visceral leishmaniasis (VL) is a neglected tropical disease of public health relevance in Brazil. To prioritize disease control measures, the Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) uses retrospective human case counts from VL surveillance data to inform a municipality-based risk classification. In this study, we compared the underlying VL risk, using a spatiotemporally explicit Bayesian hierarchical model (BHM), with the risk classification currently in use by Brazil's Ministry of Health. We aim to assess how well the current risk classes capture the underlying VL risk as modelled by the BHM.
Annual counts of human VL cases and the population at risk for all Brazil's 5564 municipalities between 2004 and 2014 were used to fit a relative risk BHM. We then computed the predicted counts and exceedence risk for each municipality and classified them into four categories to allow comparison with the four risk categories by the SVS/MH.
Municipalities identified as high-risk by the model partially agreed with the current risk classification by the SVS/MH. Our results suggest that counts of VL cases may suffice as general indicators of the underlying risk, but can underestimate risks, especially in areas with intense transmission.
According to our BHM the SVS/MH risk classification underestimated the risk in several municipalities with moderate to intense VL transmission. Newly identified high-risk areas should be further evaluated to identify potential risk factors and assess the needs for additional surveillance and mitigation efforts.
Visceral leishmaniasis (VL) in the Americas is a vector-borne neglected zoonosis caused by the intracellular protozoan Leishmania infantum [1, 2]. If left untreated, VL is fatal in more than 90% of cases, within two years of the onset of the disease [3].
Every year approximately 200,000–400,000 new cases of VL are registered worldwide [4]. In 2015, 88.8% of VL cases were reported from six countries: Brazil, Ethiopia, India, Somalia, South Sudan and Sudan [4]. Brazil was ranked second, reporting 3289 new cases, 14% of the total reported worldwide, surpassed only by India [5]. In the Americas, Brazil accounts for 95% of total occurrences [6].
In Latin America transmission is mediated by the vectors Lutzomyia longipalpis and Lutzomyia cruzi [7,8,9], synanthropic sandflies with a wide geographic distribution in Brazil [10], with the domestic dog acting as the main animal reservoir in urban and rural areas. Control measures applied against the vector and the reservoir have shown limited success [11].
The Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) is responsible for the planning, implementation and evaluation of VL surveillance in Brazil. VL surveillance data are used by the SVS/MH to classify municipalities into four VL risk categories. This risk classification is the main pillar for the management of VL control in the country, and is currently based on the average number of reported cases per municipality over 3-year periods, without considering the human population at risk. Such a simple classification and ranking approach does not account for uncertainty around the average number of cases and variability around risk metrics, and may be unable to fully recognize and address spatial and spatiotemporal dependencies in the data [12].
In this study, we evaluate the spatiotemporal pattern of VL risk in Brazil and generate alternative risk categories to compare with the current SVS/MH risk-classification. We aim to provide additional insights in the epidemiology of VL in Brazil, and inform how accurately the current risk categories reflect the underlying VL risk at the municipality level.
Data source and collection
The study area comprised all 5564 municipalities in Brazil as listed by the Instituto Brasileiro de Geografia e Estatística (IBGE) database (IBGE general information http://www.ibge.gov.br/english/). Municipality-specific annual counts of VL cases for the period 2004–2014, and the official risk classification status for the period 2008–2014 were provided by the SVS/MH.
In order to account for the population at risk, we computed the municipality-specific standardized incidence ratios (SIR), $\mathrm{SIR}_{it}=\frac{y_{it}}{e_{it}}$, where, for municipality $i$ and year $t$, $y_{it}$ is the count of VL cases and $e_{it}$ the expected number of cases, calculated by multiplying the population of municipality $i$ in year $t$ (based on 2010 national census data) by the incidence of VL in the country.
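As a minimal sketch, the standardization step above can be written in a few lines of R; the toy counts and populations below are invented for illustration and do not come from SINAN.

# Toy data: one row per municipality (i) and year (t) with observed VL cases and population.
dat <- data.frame(
  muni  = rep(c("A", "B", "C"), each = 2),
  year  = rep(c(2013, 2014), times = 3),
  cases = c(0, 2, 5, 7, 1, 0),
  pop   = c(50000, 50000, 120000, 121000, 30000, 30000)
)

national_rate <- sum(dat$cases) / sum(dat$pop)  # country-wide VL incidence
dat$expected  <- dat$pop * national_rate        # expected counts e_it
dat$sir       <- dat$cases / dat$expected       # SIR_it = y_it / e_it
dat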
At the first level of the BHM, the observed number of human VL cases in municipality $i$ and year $t$ ($y_{it}$) was assumed to follow a Poisson distribution, $y_{it} \sim \mathrm{Poisson}(e_{it}\theta_{it})$, where $e_{it}$ is defined above and $\theta_{it}$ is the unknown municipality-specific annual relative risk.
The log of $\theta_{it}$ was then decomposed additively into spatial and temporal effects and a space-time interaction term, so that
$$ \log\left(\theta_{it}\right)=\alpha +\upsilon_i+\nu_i+\gamma_t+\delta_{it} $$
where $\alpha$ is the intercept, representing the population average risk, $\upsilon_i$ and $\nu_i$ describe respectively the spatially structured and unstructured variation in VL risk, $\gamma_t$ represents the structured temporal effect, and $\delta_{it}$ is a space-time interaction term given by the Kronecker product $\gamma_t \otimes \upsilon_i$. Given the large number of municipalities with zero case counts, we explored other parameterizations, specifically a zero-inflated Poisson likelihood. We computed the Deviance Information Criterion (DIC) to compare the fit of our models [13].
A non-informative normal distribution with mean 0 and variance $\sigma_{\nu}^2$ was used as the prior distribution for the spatially unstructured random effect $\nu_i$, while the spatially structured effect $\upsilon_i$ was assigned a conditional autoregressive structure as previously described [14]. Briefly, $\upsilon_i$ was assumed to follow a normal distribution with mean conditional on the neighboring municipalities $\upsilon_j$, where neighborhood is defined in terms of geographical adjacency, and variance $\sigma_{\upsilon}^2$ dependent on the number of neighboring municipalities $n_i$,
$$ \upsilon_i \mid \upsilon_j,\ j\ \text{neighbor of}\ i \;\sim\; N\left(\frac{1}{n_i}\sum_{j=1}^{n_i}\upsilon_j,\ \frac{\sigma_{\upsilon}^2}{n_i}\right) $$
Finally, $\gamma_t$ was assigned a random walk of order 1 (RW1), $\gamma_t\sim N\left(\gamma_{t-1},\sigma_{\gamma}^{-1}\right)$. Exponential priors (3, 0.01) were assigned to all the standard deviations of the random effects [15]. In addition, we also investigated the sensitivity of our results to other, less informative priors with larger ranges. Model posterior parameters were estimated using Integrated Nested Laplace Approximation (INLA), with models fitted using the R-INLA package [16] in R [17]. Results were visualized using ArcGIS 10.4 (ESRI ArcMap, 2016).
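A sketch of how a model of this kind is typically specified with R-INLA is given below. The data frame, index columns and adjacency graph are placeholders, the space-time interaction is coded here as a simple iid term rather than the Kronecker-product structure described above, and default priors are used, so this is an illustrative outline rather than the authors' exact code.

library(INLA)

# dat: one row per municipality-year with columns cases, expected,
#      muni (1..N, matching the graph), muni2 (a copy of muni, since the same
#      index variable cannot be reused), year (1..T), muni_year (interaction index).
# g:   adjacency graph of the municipalities, e.g. built from a shapefile.

formula <- cases ~ 1 +
  f(muni, model = "besag", graph = g) +   # spatially structured effect (ICAR)
  f(muni2, model = "iid") +               # spatially unstructured effect
  f(year, model = "rw1") +                # structured temporal trend
  f(muni_year, model = "iid")             # space-time interaction (simplified here)

fit <- inla(formula,
            family = "poisson",
            data   = dat,
            E      = expected,            # expected counts e_it enter as an offset
            control.compute   = list(dic = TRUE),
            control.predictor = list(compute = TRUE))

fit$dic$dic   # DIC used to compare alternative model formulations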
Risk classification
The current SVS/MH risk classification is based upon the most recent three-year moving average of the number of VL human cases registered in each municipality. This classification is updated every June. Municipalities are classified as no transmission (class 0, no cases reported), sporadic transmission (class 1, moving average < 2.4), moderate transmission (class 2, moving average in the interval [2.4–4.4)), and intense transmission (class 3, moving average ≥ 4.4 cases) [18, 19].
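As a sketch, this rule reduces to a three-year moving average of case counts cut at 2.4 and 4.4; how municipalities with fewer than three years of data are handled below is our own assumption, since the text does not specify it.

# SVS/MH-style classification from the annual case counts of one municipality.
classify_svs_mh <- function(cases_by_year) {
  n <- length(cases_by_year)
  if (n < 3) return(NA_integer_)          # assumption: require three full years of data
  avg <- mean(cases_by_year[(n - 2):n])   # most recent three-year moving average
  if (avg >= 4.4) {
    3L   # intense transmission
  } else if (avg >= 2.4) {
    2L   # moderate transmission
  } else if (avg > 0) {
    1L   # sporadic transmission
  } else {
    0L   # no transmission
  }
}

classify_svs_mh(c(0, 1, 3, 6, 5))   # mean of the last three years is 4.67, so class 3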
In order to compare the current SVS/MH risk classification with the results of the BHM, we computed the posterior estimates of the 'exceedence' probability of risk, $\mathrm{Prob}(\theta_{it} > 1 \mid y)$ [20,21,22], further categorized into 4 classes (0, 1, 2, 3) if $\mathrm{Prob}(\theta_{it} > 1)$ was < 0.5, 0.5–0.75, 0.75–0.95 and > 0.95, respectively. Exceedence categories were compared with the four SVS/MH risk classes via the weighted Kappa correlation test. Finally, the corresponding three-year moving average of the annual number of cases per municipality predicted by the BHM ($\hat{y}_{it}$) was used to create a third risk classification in which municipalities were classified as no transmission (class 0, no cases predicted); sporadic transmission (class 1, predicted moving average < 2.4); moderate transmission (class 2, predicted moving average in the interval 2.4–4.4); and intense transmission (class 3, predicted moving average ≥ 4.4), to allow comparison with the SVS/MH classification based on observed cases ($y_{it}$). The agreement between this classification and that of the SVS/MH was also assessed via the weighted Kappa correlation test.
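Given posterior draws of $\theta_{it}$, the exceedence classification amounts to estimating Prob($\theta_{it}$ > 1 | y) and cutting it at 0.5, 0.75 and 0.95. A base-R sketch is shown below; the simulated draws stand in for the actual posterior, which would come from the fitted model.

# theta_samples: posterior draws of the relative risk, one column per municipality-year.
# Simulated here only to keep the example self-contained.
set.seed(1)
theta_samples <- cbind(rlnorm(2000, -0.5, 0.3),   # low-risk unit
                       rlnorm(2000,  0.1, 0.3),   # borderline unit
                       rlnorm(2000,  0.8, 0.3))   # high-risk unit

exceedence <- colMeans(theta_samples > 1)         # estimates Prob(theta_it > 1 | y)
risk_class <- cut(exceedence,
                  breaks = c(-Inf, 0.5, 0.75, 0.95, Inf),
                  labels = c(0, 1, 2, 3),
                  right  = FALSE)                 # [0.5, 0.75) -> class 1, and so on
data.frame(exceedence = round(exceedence, 3), risk_class)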
Descriptive results
From January 2004 to December 2014, a total of 37,405 VL cases were registered by the SINAN/SVS/MH Brazil. The annual average case count by municipality is shown in Fig. 1. The annual case count of VL during the study period (2004–2014), for the entire country, ranged between 2947 and 3713 cases (Fig. 2).
Spatial distribution of the annual average case count of visceral leishmaniasis by municipality, 2004–2014
Number of cases of visceral leishmaniasis reported in Brazil over 11 years (2004 to 2014)
Five municipalities (0.09%) accounted for almost 20% of the total number of cases reported during the period of study: Fortaleza (state of Ceará) 1865 (4.98%), Campo Grande (state of Mato Grosso do Sul) 1520 (4.06%), Araguaína (state of Tocantins) 1294 (3.45%), Belo Horizonte (state of Minas Gerais) 1176 (3.14%), and Teresina (state of Piauí) 961 (2.57%).
Bayesian hierarchical model
The BHM with a Poisson likelihood had the lowest DIC value (Table 1), and included spatially structured and unstructured effects, a temporal random effect, and an interaction term. Models were robust to different choices of priors.
Table 1 Composition of the eight different models, with a description of the likelihood; the DIC is reported for model diagnostics
The posterior estimates of the spatially structured random effect $\upsilon_i$ were higher for municipalities located in the central and eastern parts of Brazil, while the non-spatially structured estimates were scattered throughout the country (Fig. 3). The average standard deviation was calculated for all municipalities, and that of $\upsilon_i$ was shown to be 2.5 times larger than that of $\nu_i$ (6.96 versus 2.76), suggesting that a higher proportion of the unexplained risk of VL (not attributable to the size of the population at risk) was partially explained by factors with a spatial structure (Fig. 3). Finally, the proportions of the marginal variances were calculated for each parameter in the final model: the major contributors were the spatial effects $\nu$ (32.8%) and $\upsilon$ (57.8%), with less variance explained by the temporal effect $\gamma$ (1%) and the space-time interaction $\delta$ (9.3%).
Spatial distribution of the exponentiated spatially structured $\upsilon_i$ (left) and non-structured $\nu_i$ (right) random effects
Comparisons of the risk classifications
The proportion of municipalities that were classified in the same category by both the BHM via computation of the exceedence probabilities and the SVS/MH classification was 79.84%, very similar to the results obtained when the SVS/MH classification was compared with results using the predicted cases [78.05%, see Additional file 1: Figure S1 and Additional file 2: Figure S2]. This comparison (Table 2) revealed that the classifications based on the BHM (via exceedence probabilities or predicted cases) allocated a higher proportion of municipalities to categories two and three (moderate and intense transmission). Specifically, the classification based on the exceedence probabilities categorized between two and four times more municipalities as category three than the SVS/MH risk classification. Conversely, the current SVS/MH risk classification identified almost four times more municipalities as class one than the classification based on the posterior estimates of the exceedence probabilities. The average agreement between both classifications over the seven years was considered good (weighted Kappa = 0.69) [further information on yearly agreement is provided in Additional file 3: Table S1]. A good agreement (weighted Kappa = 0.63) on average was also obtained when the SVS/MH classification was compared with the one based on the predicted number of cases ($\hat{y}_{it}$) [see Additional file 4: Table S2 for yearly agreement]. However, if the lowest risk category (0) was excluded from the comparison, the agreement was much lower (0.17 and 0.12 when the SVS/MH classification was compared to the exceedence probabilities and predicted cases from the BHM, respectively), revealing that most of the discordant results were obtained in municipalities with some risk as determined by both proposed classifications [Table 2 and Additional file 1: Figure S1 to Additional file 2: Figure S2].
Table 2 Comparison of the number of municipalities allocated to the different risk levels depending on the classification followed (BHM or SVS/MH classification)
We explored the spatial distribution of all three classifications; as an example, Fig. 4 maps the 2014 pattern of intense transmission (class 3) according to the SVS/MH, BHM-exceedence and BHM-prediction classifications [see Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8 for the 2008 to 2013 maps], showing that discordant municipalities were located throughout the country.
Geographic patterns of the municipalities classified as high-risk (class 3) by the SVS/MH classification, the BHM exceedence probabilities, and the BHM-predicted number of cases ($\hat{y}_{it}$)
VL is endemic in Brazil, and has been historically distributed across multiple states, especially in the North and Northeast regions of the country. However, recent reports indicate that the disease is expanding within Brazil and is reaching neighboring countries like Argentina and Uruguay [23,24,25]. Recently affected areas in Brazil include states located in the South (such as Rio Grande do Sul) and in the Midwest region [10]. For the study period, municipalities that presented higher number of cases were mostly located in the states of Tocantins, Minas Gerais, Mato Grosso do Sul, Ceará and Piauí (Fig. 1), supporting the results observed in previous studies that had also identified the above states as high-risk areas [26,27,28,29,30]. For the 11 years studied here less than 10% of the municipalities reported at least one case of VL in any given year (mean of municipalities with one or more VL cases during 2004–2014 = 437, min = 380, max = 492). However, VL incidence varied largely in those affected municipalities.
The inclusion of both spatially structured and unstructured random effects in the model allowed a better understanding of the extent to which the risk was directly explained by the population at risk across the country. The exponentiated posterior estimates for the spatially structured random effect term were above one in multiple regions including the Central-West, the Northeast and especially the north of Roraima state (Fig. 3-left). High values of $\upsilon_i$ indicate a positive association between the spatially structured effects and VL in Brazil, signaling the presence of additional risk factors that are not directly related to VL occurrence and that have a spatial component. This spatially-dependent risk may be in part related to the local density of infected reservoirs (dogs), in line with previous studies that described a positive spatial dependency between the occurrence of human and canine VL cases [31]. Therefore, larger concentrations of infected dogs per inhabitant in certain municipalities could lead to increased risk, since dogs are considered the main reservoir of the disease in Latin America and in Brazil in particular [27, 32, 33].
Increased risk may be also explained by other factors. For example, in some areas with high VL incidence like Teresina (Northeastern Brazil) a correlation between VL incidence and more limited urban infrastructures and poorer living conditions has been previously described [26, 34, 35]. Future analysis can expand on our models by incorporating covariates explaining local development as one example. Changes in the environment, such as deforestation due to expansion of the road networks, have been also shown to have a major effect on the risk of VL and other vector-borne diseases [36]. Indeed, the expanding habitat of the vector may be associated to some extent with the increase in VL incidence in areas traditionally considered non-endemic in Brazil, especially in the South and Midwest regions, a situation that may become more concerning in the future [25].
The nearly 80% agreement between the SVS/MH and the BHM-exceedence and BHM-predicted risk classifications when all risk categories are considered suggests that the current strategy for the classification of municipalities may provide an acceptable approach in a significant proportion of the municipalities in the country. However, when results from municipalities classified in categories 1–3 (i.e., 'some risk') by the three approaches were compared, the agreement dropped considerably [Table 2, Additional file 1: Figure S1 and Additional file 2: Figure S2], and major disagreements were identified particularly regarding the category of highest risk (class 3) as classified by the BHM, which were evident throughout the study period [Additional file 3: Table S1, Additional file 4: Table S2, Fig. 4 and Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8 for the 2008 to 2013 maps]: a considerable proportion of these high-risk municipalities (between 58% in 2012 and 82% in 2013) were identified as having lower risk according to the SVS/MH classification. The SVS/MH classification seemed to be more sensitive to year-to-year changes (for example, there was a 30% drop in the number of municipalities classified as high risk between 2011 and 2012), which could be due to surveillance artifacts, since the risk of VL would not be expected to change so drastically in such a short time-span. The classification yielded by the BHM, on the other hand, provided a more stable risk landscape over time and space due to the smoothing stemming from the inclusion of spatial effects in the model [Fig. 4 and Additional file 5: Figure S3, Additional file 6: Figure S4, Additional file 7: Figure S5, Additional file 8: Figure S6, Additional file 9: Figure S7, Additional file 10: Figure S8]. This is obvious from a close look at the municipalities classified differently by the two approaches, showing that these were typically located neighboring others with a large spatially structured random effect term ($\upsilon_i$). The implications for the control of VL may be relevant if municipalities stop the application of control measures without accounting for the risk in neighboring municipalities (Fig. 4).
Both "moderate" and "intense transmission" municipalities according to SVS/MH (categories 2 and 3) are subjected to the same disease control measures in terms of resources and active surveillance activities. However, the BHM results suggest that a substantial underestimation may take place when only focusing on numerator data, since every year an average of 131 and 288 additional municipalities were classified as moderate (class 2) and intense (class 3) transmission areas, respectively, using this approach. This highlights the importance of incorporating information on the population at risk as well as spatial and temporal effects most related to the risk of infectious diseases. The comparison between the SVS/MH classification and those based on the exceedence probabilities or the predicted number of cases (\( \hat{y_{it}} \)) revealed that even though agreement was good (weighted Kappa min:0.66-max:0.69) discordances were not only found in municipalities classified as higher risk [Additional file 3: Table S1, Additional file 4: Table S2]. Our current analyses allow the identification of municipalities with higher VL risk that could have been previously inadequately classified according to the methodology adopted by the SVS/MH. The new classification proposed in this study may help to identify municipalities that, despite not presenting high morbidity, are under a high risk of disease transmission, and should therefore be subjected to improved surveillance.
Finally, the limitations of this study are mainly associated with the lack of information on neighboring countries for municipalities located at the edge of the study area (Paraguay, Argentina and Bolivia). In addition, the location of cases was based on where the notification took place, and may not indicate where the infection actually occurred. However, we suggest that modeling the incidence ratio, the inclusion of spatial and temporal effects, and the smoothing technique we used helped to remove the effects of the variation in case counts used by the current SVS/MH risk classification, and hence provide a better approximation of the municipality-level risk.
The comparison between the VL risk classification currently in use by the SVS/MH and that obtained through a BHM revealed that raw case counts of VL may be sufficient to indicate disease risk in a large proportion of the municipalities in Brazil, but may underestimate the risk in others, particularly those neighboring high-risk areas. Our results identified "hot" areas where disease clustered, and where control and surveillance efforts could be implemented in order to prevent further spread of VL in the country. Resources to support increased measures in those hot areas could come from the many more areas classified as "1" (sporadic transmission) by the SVS/MH compared to those identified by our models.
BHM:
Bayesian hierarchical model
DIC:
Deviance Information Criterion
IBGE:
Instituto Brasileiro de Geografia e Estatística
INLA:
Integrated Nested Laplace Approximation
RW1:
Random walk type 1
SIR:
Standardized incidence ratios
SVS/MH:
Secretaria de Vigilância em Saúde of Brazil's Ministry of Health
VL:
Visceral leishmaniasis
Harhay MO, Olliaro PL, Costa DL, Costa CHN. Urban parasitology: visceral leishmaniasis in Brazil. Trends Parasitol. 2011;27(9):403–9 PubMed PMID: WOS:000295207500007.
Malaviya P, Picado A, Singh SP, Hasker E, Singh RP, et al. Visceral Leishmaniasis in Muzaffarpur District, Bihar, India from 1990 to 2008. PLOS ONE. 2011;6(3):e14751.
WHO. WHO neglected tropical disease 2014. Available from: http://www.who.int/neglected_diseases/diseases/en/).
WHO. First WHO report on neglected tropical diseases 2010. Available from: http://www.who.int/neglected_diseases/2010report/en/.
WHO. Number of cases of visceral leishmaniasis reported data by country 2017. Available from: http://apps.who.int/gho/data/node.main.NTDLEISH?lang=en.
PAHO. Informe Epidemiológico das Américas. Leishmanioses 2017. Available from: http://iris.paho.org/xmlui/handle/123456789/34113.
Lainson R, Rangel EF. Lutzomyia longipalpis and the eco-epidemiology of American visceral leishmaniasis, with particular reference to Brazil - A Review. Mem I Oswaldo Cruz. 2005;100(8):811–27 PubMed PMID: WOS:000235006100001.
Missawa NA, Veloso MA, Maciel GB, Michalsky EM, Dias ES. Evidence of transmission of visceral leishmaniasis by Lutzomyia cruzi in the municipality of Jaciara, state of Mato Grosso, Brazil. Rev Soc Bras Med Trop. 2011;44(1):76–8 PubMed PMID: 21340413.
Dos Santos SO, Arias J, Ribeiro AA, Hoffmann MD, De Freitas RA, Malacco MAF. Incrimination of Lutzomyia cruzi as a vector of American Visceral Leishmaniasis. Med Vet Entomol. 1998;12(3):315–7 PubMed PMID: WOS:000075782800013.
Souza GD, dos Santos E, Andrade JD. The first report of the main vector of visceral leishmaniasis in America, Lutzomyia longipalpis (Lutz & Neiva) (Diptera: Psychodidae: Phlebotominae), in the state of Rio Grande do Sul, Brazil. Mem I Oswaldo Cruz. 2009;104(8):1181–2 PubMed PMID: WOS:000274413300017.
Romero GAS, Boelaert M. Control of Visceral Leishmaniasis in Latin America-A Systematic Review. Plos Neglect Trop D. 2010;4(1) PubMed PMID: WOS:000274179500012.
Courtemanche C, Soneji S, Tchernis R. Modeling Area-Level Health Rankings. Health Serv Res. 2015;50(5):1413–31. https://doi.org/10.1111/1475-6773.12352 PubMed PMID: 26256684; PubMed Central PMCID: PMCPMC4600354.
Spiegelhalter DJ, Best NG, Carlin BR, van der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc B. 2002;64:583–616 PubMed PMID: WOS:000179221100001.
Knorr-Held L, Besag J. Modelling risk from a disease in time and space. Stat Med. 1998;17(18):2045–60 PubMed PMID: WOS:000075939000002.
Simpson D, Håvard R, Martins GT, Riebler A, Sørbye GS. Penalising model component complexity: A principled, practical approach to constructing priors. Statistical Science. 2015;arXiv:1403.4630.
Martino S, Havard R. Implementing approximate Bayesian inference using integrated nested Laplace approximation: a manual for the inla program. NTNU, Norway: Department of Mathematical Sciences; 2009.
R Development Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2010.
Brasil Ministério da Saúde SdVeS. Manual de Vigilância e Controle da Leishmaniose Visceral. 2007.
Saúde Md. Guia de Vigilância em Saúde 2016 [cited 1]. Available from: http://portalarquivos.saude.gov.br/images/pdf/2016/novembro/18/Guia-LV-2016.pdf.
Lawson AB. Bayesian disease mapping: hierarchical modeling in spatial epidemiology. New York: CRC Press; 2013.
Richardson S, Thomson A, Best N, Elliott P. Interpreting posterior relative risk estimates in disease-mapping studies. Environ Health Persp. 2004;112(9):1016–25 PubMed PMID: WOS:000222315800011.
Rotejanaprasert C, Lawson A, Bolick-Aldrich S, Hurley D. Spatial Bayesian surveillance for small area case event data. Stat Methods Med Res. 2016;25(4):1101–17 PubMed PMID: WOS:000382871200003.
Salomon OD, Basmajdian Y, Fernandez MS, Santini MS. Lutzomyia longipalpis in Uruguay: the first report and the potential of visceral leishmaniasis transmission. Mem Inst Oswaldo Cruz. 2011;106(3):381–2 PubMed PMID: 21655832.
Salomon OD, Quintana MG, Bruno MR, Quiriconi RV, Cabral V. Visceral leishmaniasis in border areas: clustered distribution of phlebotomine sand flies in Clorinda, Argentina. Mem Inst Oswaldo Cruz. 2009;104(5):801–4 PubMed PMID: 19820846.
Peterson AT, Campbell LP, Moo-Llanes DA, Travi B, Gonzalez C, Ferro MC, et al. Influences of climate change on the potential distribution of Lutzomyia longipalpis sensu lato (Psychodidae: Phlebotominae). Int J Parasitol. 2017. https://doi.org/10.1016/j.ijpara.2017.04.007 PubMed PMID: 28668326.
Neto JC, Werneck GL, Costa CHN. Factors associated with the incidence of urban visceral leishmaniasis: an ecological study in Teresina, Piaui State, Brazil. Cad Saude Publica. 2009;25(7):1543–51 PubMed PMID: WOS:000267705400012.
Ashford DA, David JR, Freire M, David R, Sherlock I, Eulalio MC, et al. Studies on control of visceral leishmaniasis: impact of dog control on canine and human visceral leishmaniasis in Jacobina, Bahia, Brazil. Am J Trop Med Hyg. 1998;59(1):53–7 PubMed PMID: 9684628.
Vieira CP, Oliveira AM, Rodas LA, Dibo MR, Guirado MM, Chiaravalloti NF. Temporal, spatial and spatiotemporal analysis of the occurrence of visceral leishmaniasis in humans in the City of Birigui, state of Sao Paulo, from 1999 to 2012. Rev Soc Bras Med Trop. 2014;47(3):350–8 PubMed PMID: 25075487.
Margonari C, Freitas CR, Ribeiro RC, Moura ACM, Timbo M, Gripp AH, et al. Epidemiology of visceral leishmaniasis through spatial analysis, in Belo Horizonte municipality, state of Minas Gerais, Brazil. Mem I Oswaldo Cruz. 2006;101(1):31–8 PubMed PMID: WOS:000236054000007.
Antonialli SAC, Torres TG, Paranos AC, Tolezano JE. Spatial analysis of American Visceral leishmaniasis in Mato Grosso do Sul State, Central Brazil. J Infection. 2007;54(5):509–14 PubMed PMID: WOS:000246442900016.
Teixeira-Neto RG, da Silva ES, Nascimento RA, Belo VS, de Oliveira CD, Pinheiro LC, et al. Canine visceral leishmaniasis in an urban setting of Southeastern Brazil: an ecological study involving spatial analysis. Parasite Vector. 2014;7 PubMed PMID: WOS:000348547300001.
de Araujo VEM, Pinheiro LC, Almeida MCD, de Menezes FC, Morais MHF, Reis IA, et al. Relative Risk of Visceral Leishmaniasis in Brazil: A Spatial Analysis in Urban Area. Plos Neglect Trop D. 2013;7(11) PubMed PMID: WOS:000330378400025.
Souza VA, Cortez LR, Dias RA, Amaku M, Ferreira Neto JS, Kuroda RB, et al. Space-time cluster analysis of American visceral leishmaniasis in Bauru, Sao Paulo state, Brazil. Cad Saude Publica. 2012;28(10):1949–64 PubMed PMID: 23090174.
de Almeida AS, Medronho RD, Werneck GL. Identification of Risk Areas for Visceral Leishmaniasis in Teresina, Piaui State, Brazil. Am J Trop Med Hyg. 2011;84(5):681–7 PubMed PMID: WOS:000290365100006.
Werneck GL, Costa CHN, Walker AM, David JR, Wand M, Maguire JH. Multilevel modelling of the incidence of visceral leishmaniasis in Teresina, Brazil. Epidemiol Infect. 2007;135(2):195–201 PubMed PMID: WOS:000244652800003.
Seva AD, Mao L, Galvis-Ovallos F, Tucker Lima JM, Valle D. Risk analysis and prediction of visceral leishmaniasis dispersion in Sao Paulo State, Brazil. PLoS Negl Trop Dis. 2017;11(2):e0005353. https://doi.org/10.1371/journal.pntd.0005353 PubMed PMID: 28166251; PubMed Central PMCID: PMCPMC5313239.
We would like to thank Serviço de Vigilância em Saúde, Ministério da Saúde (SVS-MOH), Brasília, Brazil.
This study was funded by the Academic Health Center Faculty Research Development Grant Program (FRD #16.36) and CVM-Department of Population Health and Pathobiology- North Carolina State University, Grant/Award Number: Startup fund. The funder had no role in the collation of the data, development of the conceptual framework, analysis of data, interpretation of data, writing of the manuscript, or the decision to submit the paper for publication.
Data of reported cases are available through the Secretaria de Vigilância em Saúde of Brazil's Ministry of Health (SVS/MH) upon request and can also be retrieved from the National Information System of Health of the Ministry of Health (Sistema de Informação de Agravos de Notificação [SINAN] do Ministério da Saúde [MS] - http://portalsinan.saude.gov.br/doencas-e-agravos).
Julio Alvarez and Haakon Christopher Bakka contributed equally to this work.
Department of Population Health and Pathobiology, College of Veterinary Medicine, North Carolina State University, 1060 William Moore Drive, Raleigh, NC, 27607, USA
Gustavo Machado
VISAVET Health Surveillance Center, Universidad Complutense, Avda Puerta de Hierro S/N, 28040, Madrid, Spain
Julio Alvarez
Departamento de Sanidad Animal, Facultad de Veterinaria, Universidad Complutense, Avda Puerta de Hierro S/N, 28040, Madrid, Spain
CEMSE Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
Haakon Christopher Bakka
Department of Veterinary Population Medicine, College of Veterinary Medicine, University of Minnesota, St Paul, MN, 55108, USA
Andres Perez
Secretaria de Vigilância em Saúde, Ministério da Saúde (SVS-MH), Brasília, Brazil
Lucas Edel Donato, Francisco Edilson de Ferreira Lima Júnior & Renato Vieira Alves
School of Veterinary Medicine, University of Surrey, Guildford, Surrey GU2 7A, UK
Victor Javier Del Rio Vilas
Lucas Edel Donato
Francisco Edilson de Ferreira Lima Júnior
Renato Vieira Alves
GM, JA, VJDRVB and AP authors reviewed the literature, and contributed to the conception and design of the study. VJDRVB, LED, FEFLJ and RVA acquired the leishmaniasis data. GM wrote the code for the spatiotemporal and prior sensitivity analysis. GM, JA and HCB reviewed and improved the codes. GM, JA, VJDRVB, AP, LED, FEFLJ, HCB and RVA interpreted and discussed the results, wrote the manuscript, and revised it critically. All authors read and approved the final manuscript.
Correspondence to Gustavo Machado.
Additional file 1:
Figure S1. Proportion of municipalities classified by the BHM model exceedence probabilities and the SVS/MH classification. (TIFF 26367 kb)
Figure S2. Proportion of municipalities classified by the BHM model-predicted risk class and the SVS/MH classification. (TIFF 26367 kb)
Table S1. Weighted Kappa between BHM model-exceedence probabilities and the SVS/MH classification. (DOCX 18 kb)
Table S2. Weighted Kappa between BHM model-predicted risk class and the SVS/MH classification. (DOCX 18 kb)
Figure S3. The spatial distribution of all classifications SVS/MH, BHM-exceedence and BHM-predictions for 2008. (TIF 26986 kb)
Additional file 10:
Figure S8. The spatial distribution of all classifications SVS/MH, BHM-exceedence and BHM-predictions for 2013.
Machado, G., Alvarez, J., Bakka, H.C. et al. Revisiting area risk classification of visceral leishmaniasis in Brazil. BMC Infect Dis 19, 2 (2019). https://doi.org/10.1186/s12879-018-3564-0
Disease mapping
October 2017, 10(5): 1165-1174. doi: 10.3934/dcdss.2017063
Stability and bifurcation analysis in a chemotaxis bistable growth system
Shubo Zhao, Ping Liu and Mingchao Jiang
Y. Y. Tseng Functional Analysis Research Center and School of Mathematical Sciences, Harbin Normal University, Harbin, Heilongjiang 150025, China
* Corresponding author: Ping Liu
Received: September 2016. Revised: January 2017. Published: June 2017.
Fund Project: Partially supported by NSFC grants 11571086 and 11471091 and the Science Research Funds for Overseas Returned Chinese Scholars of Heilongjiang Province LC2013C01
The stability analysis of a chemotaxis model with a bistable growth term in both unbounded and bounded domains is studied analytically. By the global bifurcation theorem, we identify the full parameter regimes in which the steady state bifurcation occurs.
Keywords: Chemotaxis, bistable, bifurcation, stability.
Mathematics Subject Classification: 92C17, 74H60, 35B35, 35K55, 35K57.
Citation: Shubo Zhao, Ping Liu, Mingchao Jiang. Stability and bifurcation analysis in a chemotaxis bistable growth system. Discrete & Continuous Dynamical Systems - S, 2017, 10 (5) : 1165-1174. doi: 10.3934/dcdss.2017063
Figure 1. Basic phase portrait of (2)
Figure 2. Parameter space for Turing instability. The parameter values are $d_1=d_2=f=g=1$, $\nu=\frac{1}{4}$, $M=\frac{(\sqrt{d_1g}+\sqrt{(1-\nu)d_2})^2}{f}$
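For concreteness (and assuming the formula is to be read exactly as printed in the caption), substituting the quoted parameter values gives
$$M=\frac{\left(\sqrt{d_1g}+\sqrt{(1-\nu)d_2}\right)^{2}}{f}=\left(\sqrt{1}+\sqrt{\tfrac{3}{4}}\right)^{2}=\left(1+\tfrac{\sqrt{3}}{2}\right)^{2}=\tfrac{7}{4}+\sqrt{3}\approx 3.48,$$
so the parameter space in Figure 2 is plotted with $M\approx 3.48$.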
Results for 'Absolute Dependence'
Feeling of Absolute Dependence or Absolute Feeling of Dependence? (What Schleiermacher Really Said and Why It Matters). Georg Behrens - 1998 - Religious Studies 34 (4):471-481.
Friedrich Schleiermacher is known as the theologian who said that the essence of Christian faith is a state of mind called 'the feeling of absolute dependence'. In this respect, Schleiermacher's reputation owes much to the influential translation of his dogmatics prepared by Mackintosh, Stewart and others. I argue that the translation is misleading precisely as to the terms which Schleiermacher uses in order to refer to the religious state of mind. I also show that the translation obscures a problem of some substantive depth regarding what Schleiermacher thought to be the nature of pious feeling.
Religious Experience in Philosophy of Religion
'Feeling of Absolute Dependence' or 'Absolute Feeling of Dependence'? A Question Revisited. Hueston E. Finlay - 2005 - Religious Studies 41 (1):81-94.
The translation of Schleiermacher's key phrase 'das schlechthinnige Abhängigkeitsgefühl' is a matter of some contention. It has been suggested that the traditional translation is in fact inaccurate and that it should be replaced with the accurate 'absolute feeling of dependence'. This change would have serious implications for our understanding of Schleiermacher's theology. This essay examines the case for and against a change of translation. It concedes that the change is demanded if one strictly adheres to the rules of grammar but that there are several reasons for putting those rules to one side and holding firm to the traditional translation.
The Number of Gods in Philosophy of Religion
Absolute Dependence or Infinite Desire? Comparing Soteriological Themes in Schleiermacher and Kierkegaard. Claus-Dieter Osthövener, Theodor Jørgensen, Richard Crouter & Niels Jørgen Cappelørn - 2006 - In Claus-Dieter Osthövener, Theodor Jørgensen, Richard Crouter & Niels Jørgen Cappelørn (eds.), Schleiermacher Und Kierkegaard: Subjektivität Und Wahrheit / Subjectivity and Truth. Akten des Schleiermacher-Kierkegaard-Kongresses in Kopenhagen Oktober 2003 / Proceedings From the Schleiermacher-Kierkegaard Congress in Copenhagen October, 2003. Walter de Gruyter.
From Post-Traumatic Stress Disorder to Absolute Dependence in an Intensive Care Unit: Reflections on a Clinical Account. Tina Sideris - 2019 - Medical Humanities 45 (1):37-44.
This paper tells the story of one man's experience of terrifying hallucinations and nightmares in an intensive care unit, drawing attention to the reality that intensive care treatment induces emotional suffering severe enough to be identified as post-traumatic stress disorder. A body of international research, confirmed by South African studies, links life-saving critical care to symptoms which qualify for secondary psychiatric diagnosis, including post-traumatic stress. Risk factors include pre-ICU comorbid psychopathology. Early on in the clinical encounter with the patient in this paper it emerged that he bore the scars of another trauma. He had been a soldier. Recounting the terror he experienced when he was being weaned off mechanical ventilation evoked memories of his military history. Paradoxically, these shifted the focus away from the symptoms of PTSD, to make the helplessness and dependency of ICU patients more visible. This patient's clinical account and patient experiences in other studies reveal the relational vulnerability of ICU patients. In as much as experiences of ICU treatment can be terrifying, the non-response of carers distresses patients. This interplay of wounding and care provides a starting point from which to explore how we account for the neglect of relational care that is a recurring theme in medical contexts, without blaming the carers. These questions find resonance in a South African novel to which the paper refers. A novel about war and trauma movingly portrays the internal conflict of the central character, a nurse, and her quest not to care, as a defence against vulnerability. In these ways writing about the relational vulnerability of patients opened up questions about the social and institutional context of carer vulnerability.
Medical Ethics in Applied Ethics
4. Schleiermacher and Absolute Dependence. Louis P. Roy - 2001 - In Transcendent Experiences: Phenomenology and Critique. University of Toronto Press. pp. 47-68.
Feeling of Absolute Dependence or Will to Power? Jan-Olav Henriksen - 2003 - Neue Zeitschrift für Systematische Theologie Und Religionsphilosophie 45 (3):313-327.
Feeling of Absolute Dependence or Will to Power? Schleiermacher vs. Nietzsche on the Conditions for Religious Subjectivity. Jan-Olav Henriksen - 2003 - Neue Zeitschrift für Systematische Theologie Und Religionsphilosophie 45 (3):313-327.
19th Century German Philosophy in 19th Century Philosophy
Schleiermacher's Idea of Hermeneutics and the Feeling of Absolute Dependence. Ben Vedder - 1994 - Epoché: A Journal for the History of Philosophy 2 (1):91-111.
Faith, Language and Experience: An Analysis of the Feeling of Absolute Dependence. E. Mouton - 1990 - HTS Theological Studies 46 (3).
Hegelian 'Absolute Idealism' with Yogācāra Buddhism on Consciousness, Concept (Begriff), and Co-Dependent Origination (Pratītyasamutpāda). Adam Scarfe - 2006 - Contemporary Buddhism 7 (1):47-73.
Buddhism in Philosophy of Religion
Dependent Choice, Properness, and Generic Absoluteness. David Asperó & Asaf Karagila - forthcoming - Review of Symbolic Logic:1-25.
We show that Dependent Choice is a sufficient choice principle for developing the basic theory of proper forcing, and for deriving generic absoluteness for the Chang model in the presence of large cardinals, even with respect to $\mathsf{DC}$-preserving symmetric submodels of forcing extensions. Hence, $\mathsf{ZF}+\mathsf{DC}$ not only provides the right framework for developing classical analysis, but is also the right base theory over which to safeguard truth in analysis from the independence phenomenon in the presence of large cardinals. We also investigate some basic consequences of the Proper Forcing Axiom in $\mathsf{ZF}$, and formulate a natural question about the generic absoluteness of the Proper Forcing Axiom in $\mathsf{ZF}+\mathsf{DC}$ and $\mathsf{ZFC}$. Our results confirm $\mathsf{ZF}+\mathsf{DC}$ as a natural foundation for a significant portion of "classical mathematics" and provide support to the idea of this theory being also a natural foundation for a large part of set theory.
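For readers coming from outside set theory, the principle at issue has a standard textbook formulation (stated here as background, not quoted from the paper): Dependent Choice ($\mathsf{DC}$) says that for every nonempty set $X$ and every relation $R\subseteq X\times X$ such that each $x\in X$ has some $y\in X$ with $x\mathrel{R}y$, there is a sequence $(x_n)_{n\in\omega}$ in $X$ with
$$x_n \mathrel{R} x_{n+1}\quad\text{for all } n\in\omega.$$
This is the fragment of choice that underwrites the usual countable recursive constructions of classical analysis, which is why $\mathsf{ZF}+\mathsf{DC}$ is treated as the natural base theory here.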
Mathematical Logic in Formal Sciences
Model Theory in Logic and Philosophy of Logic
Set Theory in Philosophy of Mathematics
Absolute Biological Needs. Stephen McLeod - 2014 - Bioethics 28 (6):293-301.
Absolute needs (as against instrumental needs) are independent of the ends, goals and purposes of personal agents. Against the view that the only needs are instrumental needs, David Wiggins and Garrett Thomson have defended absolute needs on the grounds that the verb 'need' has instrumental and absolute senses. While remaining neutral about it, this article does not adopt that approach. Instead, it suggests that there are absolute biological needs. The absolute nature of these needs is defended by appeal to: their objectivity (as against mind-dependence); the universality of the phenomenon of needing across the plant and animal kingdoms; the impossibility that biological needs depend wholly upon the exercise of the abilities characteristic of personal agency; the contention that the possession of biological needs is prior to the possession of the abilities characteristic of personal agency. Finally, three philosophical usages of 'normative' are distinguished. On two of these, to describe a phenomenon or claim as 'normative' is to describe it as value-dependent. A description of a phenomenon or claim as 'normative' in the third sense does not entail such value-dependency, though it leaves open the possibility that value depends upon the phenomenon or upon the truth of the claim. It is argued that while survival needs (or claims about them) may well be normative in this third sense, they are normative in neither of the first two. Thus, the idea of absolute need is not inherently normative in either of the first two senses.
Moral Normativity, Misc in Meta-Ethics
Moral Reasoning and Motivation, Misc in Meta-Ethics
Value Theory, Misc in Value Theory, Miscellaneous
Dependence, Transcendence, and Creaturely Freedom: On the Incompatibility of Three Theistic Doctrines. Aaron Segal - forthcoming - Mind.
In this paper I argue for the incompatibility of three claims, each of them quite attractive to a theist. First, the doctrine of deep dependence: the universe depends for its existence, in a non-causal way, on God. Second, the doctrine of true transcendence: the universe is wholly distinct from God; God is separate and apart from the universe in respect of mereology, modes, and mentality. Third, the doctrine of robust creaturely freedom: some creature performs some act such that he could have done other than he in fact did. After laying out the claims, I show that their conjunction has its adherents—most clearly, the medieval Jewish philosopher, Maimonides. I then argue in detail that the claims are in fact incompatible. I conclude with a discussion of which of the claims is best jettisoned, drawing in part on the work of the Absolute Idealist, Mary Calkins.
Divine Attributes, Misc in Philosophy of Religion
Topics in Free Will, Misc in Philosophy of Action
Absoluteness Via Resurrection. Giorgio Audrito & Matteo Viale - 2017 - Journal of Mathematical Logic 17 (2):1750005.
The resurrection axioms are forcing axioms introduced recently by Hamkins and Johnstone, developing on ideas of Chalons and Veličković. We introduce a stronger form of resurrection axioms (for a class of forcings $\Gamma$ and a given ordinal $\alpha$), and show that $\mathrm{RA}_\omega$ implies generic absoluteness for the first-order theory of $H_{\gamma^+}$ with respect to forcings in $\Gamma$ preserving the axiom, where $\gamma=\gamma_\Gamma$ is a cardinal which depends on $\Gamma$. We also prove that the consistency strength of these axioms is below that of a Mahlo cardinal for most forcing classes, and below that of a stationary limit of supercompact cardinals for the class of stationary set preserving posets. Moreover, we outline that simultaneous generic absoluteness for $H_{\gamma_0^+}$ with respect to $\Gamma_0$ and for $H_{\gamma_1^+}$ with respect to $\Gamma_1$ with $\gamma_0=\gamma_{\Gamma_0}\neq\gamma_{\Gamma_1}=\gamma_1$ is in principle possible, and we present several natural models o...
What is Absolute Undecidability? Justin Clarke-Doane - 2013 - Noûs 47 (3):467-481.
It is often alleged that, unlike typical axioms of mathematics, the Continuum Hypothesis (CH) is indeterminate. This position is normally defended on the ground that the CH is undecidable in a way that typical axioms are not. Call this kind of undecidability "absolute undecidability". In this paper, I seek to understand what absolute undecidability could be such that one might hope to establish that (a) CH is absolutely undecidable, (b) typical axioms are not absolutely undecidable, and (c) if a mathematical hypothesis is absolutely undecidable, then it is indeterminate. I shall argue that on no understanding of absolute undecidability could one hope to establish all of (a)–(c). However, I will identify one understanding of absolute undecidability on which one might hope to establish both (a) and (c) to the exclusion of (b). This suggests that a new style of mathematical antirealism deserves attention—one that does not depend on familiar epistemological or ontological concerns. The key idea behind this view is that typical mathematical hypotheses are indeterminate because they are relevantly similar to CH.
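For reference (standard background rather than anything claimed in the abstract): the Continuum Hypothesis says that there is no cardinality strictly between that of the natural numbers and that of the real numbers, which, assuming the Axiom of Choice, is usually written
$$\mathrm{CH}:\qquad 2^{\aleph_0}=\aleph_1 .$$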
Axioms of Set Theory, Misc in Philosophy of Mathematics
Independence Results in Set Theory in Philosophy of Mathematics
Indeterminacy in Mathematics in Philosophy of Mathematics
Mathematical Proof in Philosophy of Mathematics
New Axioms in Set Theory in Philosophy of Mathematics
Objectivity Of Mathematics in Philosophy of Mathematics
The Continuum Hypothesis in Philosophy of Mathematics
Finite and Absolute Idealism. Robert Pippin - 2015 - In Sebastian Gardner & Matthew Grist (eds.), The Transcendental Turn. Oxford University Press UK.
Any interpretation of Hegel which stresses both his deep dependence on and radical revision of Kant must account for the nature of the difference between what Hegel calls a merely finite idealism and a so-called 'Absolute Idealism'. Such a clarification in turn depends on understanding Hegel's claim to have preserved the distinguishability of intuition and concept, but to have insisted on their inseparability, or, to have defended their 'organic' rather than 'mechanical' relation. This is the main issue in this chapter, which invokes John McDowell's notion of 'the unboundedness of the conceptual' to clarify the issue, as well as noting a number of similar claims in Wittgenstein. The implications of Hegel's view for the issues of metaphysics generally are explored.
State-Dependent Utilities. Mark J. Schervish, Teddy Seidenfeld & Joseph B. Kadane - unknown
Several axiom systems for preference among acts lead to a unique probability and a state-independent utility such that acts are ranked according to their expected utilities. These axioms have been used as a foundation for Bayesian decision theory and subjective probability calculus. In this article we note that the uniqueness of the probability is relative to the choice of what counts as a constant outcome. Although it is sometimes clear what should be considered constant, in many cases there are several possible choices. Each choice can lead to a different "unique" probability and utility. By focusing attention on state-dependent utilities, we determine conditions under which a truly unique probability and utility can be determined from an agent's expressed preferences among acts. Suppose that an agent's preference can be represented in terms of a probability P and a utility U. That is, the agent prefers one act to another iff the expected utility of that act is higher than that of the other. There are many other equivalent representations in terms of probabilities Q, which are mutually absolutely continuous with P, and state-dependent utilities V, which differ from U by possibly different positive affine transformations in each state of nature. We describe an example in which there are two different but equivalent state-independent utility representations for the same preference structure. They differ in which acts count as constants. The acts involve receiving different amounts of one or the other of two currencies, and the states are different exchange rates between the currencies. It is easy to see how it would not be possible for constant amounts of both currencies to have simultaneously constant values across the different states. Savage (1954, sec. 5.5) discovered a situation in which two seemingly equivalent preference structures are represented by different pairs of probability and utility. He attributed the phenomenon to the construction of a "small world." We show that the small world problem is just another example of two different, but equivalent, representations treating different acts as constants. Finally, we prove a theorem (similar to one of Karni 1985) that shows how to elicit a unique state-dependent utility and does not assume that there are prizes with constant value. To do this, we define a new hypothetical kind of act in which both the prize to be awarded and the state of nature are determined by an auxiliary experiment.
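The currency example described in the abstract is easy to make concrete. The sketch below is illustrative only: the state names, exchange rates, prize amounts, and helper functions are my own choices, not taken from Schervish, Seidenfeld and Kadane. It builds two equivalent representations, one treating dollar prizes as the constant outcomes and one treating pound prizes as constant, and shows that they rank every act identically while disagreeing about the "unique" probability of the states.

```python
# Illustrative sketch of the two-currency example described in the abstract.
# All numbers, state names and helper functions are my own choices, not the authors'.

# Two states of nature: the exchange rate (pounds per dollar) is either 2.0 or 0.5.
RATES = {"s1": 2.0, "s2": 0.5}

# Acts award a fixed nominal prize in a single currency, whatever the state.
ACTS = {
    "act_dollars": {"currency": "USD", "amount": 100},  # pays $100 in every state
    "act_pounds":  {"currency": "GBP", "amount": 130},  # pays £130 in every state
}

def dollar_value(act, state):
    """Prize measured in dollars, given the state's exchange rate."""
    if act["currency"] == "USD":
        return act["amount"]
    return act["amount"] / RATES[state]  # convert pounds to dollars

def pound_value(act, state):
    """Prize measured in pounds, given the state's exchange rate."""
    return dollar_value(act, state) * RATES[state]

# Representation A: dollar prizes count as constants, probability P is uniform,
# and utility is (state-independent) dollar value.
P = {"s1": 0.5, "s2": 0.5}
def expected_utility_A(act):
    return sum(P[s] * dollar_value(act, s) for s in RATES)

# Representation B: pound prizes count as constants. Rescaling the utility by the
# exchange rate in each state forces a compensating change in the probability:
# Q(s) is proportional to P(s) / rate(s), so the ranking of acts is unchanged.
_unnormalized = {s: P[s] / RATES[s] for s in RATES}
_Z = sum(_unnormalized.values())
Q = {s: _unnormalized[s] / _Z for s in RATES}
def expected_utility_B(act):
    return sum(Q[s] * pound_value(act, s) for s in RATES)

if __name__ == "__main__":
    for name, act in ACTS.items():
        print(name, round(expected_utility_A(act), 2), round(expected_utility_B(act), 2))
    print("P =", P)
    print("Q =", Q)  # same ranking of acts, two different "unique" probabilities
```

Because every expected utility in the second representation equals the first divided by the same constant, the two representations order all acts identically; yet one says the states are equally likely while the other assigns them probabilities 0.2 and 0.8, which is exactly the non-uniqueness the abstract describes.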
Topics in Decision Theory in Philosophy of Action
Vagueness and Grammar: The Semantics of Relative and Absolute Gradable Adjectives. Christopher Kennedy - 2007 - Linguistics and Philosophy 30 (1):1-45.
This paper investigates the way that linguistic expressions influence vagueness, focusing on the interpretation of the positive (unmarked) form of gradable adjectives. I begin by developing a semantic analysis of the positive form of 'relative' gradable adjectives, expanding on previous proposals by further motivating a semantic basis for vagueness and by precisely identifying and characterizing the division of labor between the compositional and contextual aspects of its interpretation. I then introduce a challenge to the analysis from the class of 'absolute' gradable adjectives: adjectives that are demonstrably gradable, but which have positive forms that relate objects to maximal or minimal degrees, and do not give rise to vagueness. I argue that the truth conditional difference between relative and absolute adjectives in the positive form stems from the interaction of lexical semantic properties of gradable adjectives—the structure of the scales they use—and a general constraint on interpretive economy that requires truth conditions to be computed on the basis of conventional meaning to the extent possible, allowing for context dependent truth conditions only as a last resort.
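To fix ideas, here is one common degree-semantic way of writing the positive form (my own schematic gloss of the standard treatment, not a quotation from the paper): a gradable adjective $A$ denotes a measure function $\mu_A$, and
$$[\![\mathrm{pos}\ A]\!](x)=1 \iff \mu_A(x)\succeq \mathrm{std}(A,c),$$
where $\mathrm{std}(A,c)$ is the standard of comparison. For a relative adjective such as 'tall', the standard is supplied by the context $c$, which is where vagueness enters; for absolute adjectives such as 'full' or 'straight', the scale has a maximal or minimal endpoint that can serve as a default standard, and this is the kind of scale-structure difference that the interpretive economy constraint described in the abstract is meant to exploit.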
Gradable Adjectives in Philosophy of Language
Theories of Vagueness, Misc in Philosophy of Language
Absolute Inhibition Is Incompatible with Conscious Perception. Michael Snodgrass, Howard Shevrin & Michael Kopka - 1993 - Consciousness and Cognition 2 (3):204-209.
Van Selst and Merikle argued that the critical Preference × Strategy interaction findings could be alternatively explained by positing individual differences as a function of preference and strategy. They further argued that ruling out conscious perception depends on making the exhaustiveness assumption. We argue that the inhibitory effects satisfy objective threshold criteria regardless of possible individual differences in thresholds. We further suggest that the inhibitory findings are inherently incompatible with the conscious perception explanation and that therefore we do not need to make the exhaustiveness assumption. We thus stand by our original conclusion that subliminal perception at the objective threshold has been demonstrated.
Issues in Psychology in Philosophy of Cognitive Science
Unconscious Perception in Philosophy of Cognitive Science
On the Significance of the Absolute Margin. Christian List - 2004 - British Journal for the Philosophy of Science 55 (3):521-544.
Consider the hypothesis H that a defendant is guilty, and the evidence E that a majority of h out of n independent jurors have voted for H and a minority of k := n-h against H. How likely is the majority verdict to be correct? By a formula of Condorcet, the probability that H is true given E depends only on each juror's competence and on the absolute margin between the majority and the minority h-k, but neither on the number n, nor on the proportion h/n. This paper reassesses that result and explores its implications. First, using the classical Condorcet jury model, I derive a more general version of Condorcet's formula, confirming the significance of the absolute margin, but showing that the probability that H is true given E depends also on an additional parameter: the prior probability that H is true. Second, I show that a related result holds when we consider not the degree of belief we attach to H given E, but the degree of support E gives to H. Third, I address the implications for the definition of special majority voting, a procedure used to capture the asymmetry between false positive and false negative decisions. I argue that the standard definition of special majority voting in terms of a required proportion of the jury is epistemically questionable, and that the classical Condorcet jury model leads to an alternative definition in terms of a required absolute margin between the majority and the minority. Finally, I show that the results on the significance of the absolute margin can be resisted if the so-called assumption of symmetrical juror competence is relaxed.
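The formula alluded to here is easy to reconstruct (this is the standard textbook calculation, stated under the abstract's own assumption of symmetrical juror competence, and not a quotation from the paper). If each juror is independently correct with probability $p$ regardless of whether $H$ is true, and the prior probability of $H$ is $\pi$, then Bayes' theorem gives
$$P(H\mid E)=\frac{\pi\,p^{h}(1-p)^{k}}{\pi\,p^{h}(1-p)^{k}+(1-\pi)(1-p)^{h}p^{k}}=\frac{\pi\,r^{\,h-k}}{\pi\,r^{\,h-k}+1-\pi},\qquad r=\frac{p}{1-p},$$
which depends on the votes only through the margin $h-k$ (and on $\pi$). For instance, with $p=0.6$ and $\pi=\tfrac12$, a margin of $5$ gives $r^{5}=1.5^{5}\approx 7.59$, hence $P(H\mid E)\approx 7.59/8.59\approx 0.88$, whether the vote is $8$ to $3$ or $53$ to $48$.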
Condorcet in 17th/18th Century Philosophy
Formal Social Epistemology, Misc in Epistemology
Social Choice Theory, Misc in Social and Political Philosophy
The Commitment in Feeling Absolutely Safe. Hermen Kroesbergen - 2018 - International Journal for Philosophy of Religion 84 (2):185-203.
The experience of feeling safe even in the midst of trials and temptations seems to be a central feature of the Christian faith. In this article I will try to solve some possible difficulties in understanding this kind of absolute safety by discussing some problems noted by philosophers in connection with the related statements by Socrates that a good man cannot be harmed, and by Wittgenstein that he sometimes feels absolutely safe, that nothing can injure him whatever happens. First, I will investigate whether there is an invalid prediction implied in this feeling of absolute safety: how can someone know that nothing will hurt him or her? Second, I will examine whether this experience of complete safety is dependent upon impossible requirements, such as to be a good man or an impeccable Christian. Third, I will consider the character of the people who claim absolute safety as portrayed by different philosophers: do these people really need to be so cold and inhumanly detached from the world for them to be able to say that nothing can hurt them? I will argue that if, instead of asking how someone can claim absolute safety, we ask to what someone commits him- or herself in making this claim, these difficulties disappear.
Mad Speculation and Absolute Inhumanism: Lovecraft, Ligotti, and the Weirding of Philosophy. Ben Woodard - 2011 - Continent 1 (1):3-13.
continent. 1.1 : 3-13. / 0/ – Introduction I want to propose, as a trajectory into the philosophically weird, an absurd theoretical claim and pursue it, or perhaps more accurately, construct it as I point to it, collecting the ground work behind me like the Perpetual Train from China Mieville's Iron Council which puts down track as it moves reclaiming it along the way. The strange trajectory is the following: Kant's critical philosophy and much of continental philosophy which has followed, (...) has been a defense against horror and madness. Kant's prohibition on speculative metaphysics such as dogmatic metaphysics and transcendental realism, on thinking beyond the imposition of transcendental and moral constraints, has been challenged by numerous figures proceeding him. One of the more interesting critiques of Kant comes from the mad black Deleuzianism of Nick Land stating, "Kant's critical philosophy is the most elaborate fit of panic in the history of the Earth." And while Alain Badiou would certainly be opposed to the libidinal investments of Land's Deleuzo-Guattarian thought, he is likewise critical of Kant's normative thought-bureaucracies: Kant is the one author for whom I cannot feel any kinship. Everything in him exasperates me, above all his legalism—always asking Quid Juris? Or 'Haven't you crossed the limit?'—combined, as in today's United States, with a religiosity that is all the more dismal in that it is both omnipresent and vague. The critical machinery he set up has enduringly poisoned philosophy, while giving great succour to the academy, which loves nothing more than to rap the knuckles of the overambitious [….] That is how I understand the truth of Monique David-Menard's reflections on the properly psychotic origins of Kantianism. I am persuaded that the whole of the critical enterprise is set up to to shield against the tempting symptom represented by the seer Swedenborg, or against 'diseases of the head', as Kant puts it. An entire nexus of the limits of reason and philosophy are set up here, namely that the critical philosophy not only defends thought from madness, philosophy from madness, and philosophy from itself, but that philosophy following the advent of the critical enterprise philosophy becomes auto-vampiric; feeding on itself to support the academy. Following Francois Laruelle's non-philosophical indictment of philosophy, we could go one step further and say that philosophy operates on the material of what is philosophizable and not the material of the external world. [1] Beyond this, the Kantian scheme of nestling human thinking between our limited empirical powers and transcendental guarantees of categorical coherence, forms of thinking which stretch beyond either appear illegitimate, thereby liquefying both pre-critical metaphysics and the ravings of the mad in the same critical acid. In rejecting the Kantian apparatus we are left with two entities – an unsure relation of thought to reality where thought is susceptible to internal and external breakdown and a reality with an uncertain sense of stability. These two strands will be pursued, against the sane-seal of post-Kantian philosophy by engaging the work of weird fiction authors H.P. Lovecraft and Thomas Ligotti. The absolute inhumanism of the formers universe will be used to describe a Shoggothic Materialism while the dream worlds of the latter will articulate the mad speculation of a Ventriloquil Idealism. 
But first we must address the relation of philosophy to madness as well as philosophy to weird fiction. /1/ – Philosophy and Madness There is nothing that the madness of men invents which is not either nature made manifest or nature restored. Michel Foucault. Madness and Civilization. The moment I doubt whether an event that I recall actually took place, I bring the suspicion of madness upon myself: unless I am uncertain as to whether it was not a mere dream. Arthur Schopenhauer. The World as Will and Idea, Vol. 3. Madness is commonly thought of as moving through several well known cultural-historical shifts from madness as a demonic or otherwise theological force, to rationalization, to medicalization psychiatric and otherwise. Foucault's Madness and Civilization is well known for orientating madness as a form of exclusionary social control which operated by demarcating madness from reason. Yet Foucault points to the possibility of madness as the necessity of nature at least prior to the crushing weight of the church. Kant's philosophy as a response to madness is grounded by his humanizing of madness itself. As Adrian Johnston points out in the early pages of Time Driven pre-Kantian madness meant humans were seized by demonic or angelic forces whereas Kantian madness became one of being too human. Madness becomes internalized, the external demonic forces become flaws of the individual mind. Foucault argues that, while madness is de-demonized it is also dehumanized during the Renaissance, as madmen become creatures neither diabolic nor totally human reduced to the zero degree of humanity. It is immediately clear why for Kant, speculative metaphysics must be curbed – with the problem of internal madness and without the external safeguards of transcendental conditions, there is nothing to formally separate the speculative capacities for metaphysical diagnosis from the mad ramblings of the insane mind – both equally fall outside the realm of practicality and quotidian experience. David-Menard's work is particularly useful in diagnosing the relation of thought and madness in Kant's texts. David-Menard argues that in Kant's relatively unknown "An Essay on the Maladies of the Mind" as well as his later discussion of the Seer of Swedenborg, that Kant formulates madness primarily in terms of sensory upheaval or other hallucinatory theaters. She writes: "madness is an organization of thought. It is made possible by the ambiguity of the normal relation between the imaginary and the perceived, whether this pertains to the order of sensation or to the relations between our ideas" Kant's fascination with the Seer forces Kant between the pincers of "aesthetic reconciliation" – namely melancholic withdrawal – and "a philosophical invention" – namely the critical project. Deleuze and Guattari's schizoanalysis is a combination and reversal of Kant's split, where an aesthetic over engagement with the world entails prolific conceptual invention. Their embrace of madness, however, is of course itself conceptual despite all their rhizomatic maneuvers. Though they move with the energy of madness, Deleuze and Guattari save the capacity of thought from the fangs of insanity by imbuing materiality itself with the capacity for thought. Or, as Ray Brassier puts it, "Deleuze insists, it is necessary to absolutize the immanence of this world in such a way as to dissolve the transcendent disjunction between things as we know them and as they are in themselves". 
That is, whereas Kant relied on the faculty of judgment to divide representation from objectivity Deleuze attempts to flatten the whole economy beneath the juggernaut of ontological univocity. Speculation, as a particularly useful form of madness, might fall close to Deleuze and Guattari's shaping of philosophy into a concept producing machine but is different in that it is potentially self destructive – less reliant on the stability of its own concepts and more adherent to exposing a particular horrifying swath of reality. Speculative madness is always a potential disaster in that it acknowledges little more than its own speculative power with the hope that the gibbering of at least a handful of hysterical brains will be useful. Pre-critical metaphysics amounts to madness, though this may be because the world itself is mad while new attempts at speculative metaphysics, at post-Kantian pre-critical metaphysics, are well aware of our own madness. Without the sobriety of the principle of sufficient reason we have a world of neon madness: "we would have to conceive what our life would be if all the movements of the earth, all the noises of the earth, all the smells, the tastes, all the light – of the earth and elsewhere, came to us in a moment, in an instant – like an atrocious screaming tumult of things". Speculative thought may be participatory in the screaming tumult of the world or, worse yet, may produce its spectral double. Against theology or reason or simply commonsense, the speculative becomes heretical. Speculation, as the cognitive extension of the horrorific sublime should be met with melancholic detachment. Whereas Kant's theoretical invention, or productivity of thought, is self -sabotaging, since the advent of the critical project is a productivity of thought which then delimits the engine of thought at large either in dogmatic gestures or non-systematizable empirical wondrousness. The former is celebrated by the fiction of Thomas Ligotti whereas the latter is espoused by the tales of H.P. Lovecraft. /2/ – Weird Fiction and Philosophy. Supernatural horror, in all its eerie constructions, enables a reader to taste treats inconsistent with his personal welfare. Thomas Ligotti Songs of a Dead Dreamer. I choose weird stories because they suit my inclination best—one of my strongest and most persistent wishes being to achieve,momentarily, the illusion of some strange suspension or violation of the galling limitations of time, space, and natural law which forever imprison us and frustrate our curiosity about the infinite cosmic spaces beyond the radius of our sight and analysis H.P. Lovecraft. "Notes on Writing Weird Fiction" Lovecraft states that his creation of a story is to suspend natural law yet, at the same time, he indexes the tenuousness of such laws, suggesting the vast possibilities of the cosmic. The tension that Lovecraft sets up between his own fictions and the universe or nature is reproduced within his fictions in the common theme of the unreliable narrator; unreliable precisely because they are either mad or what they have witnessed questions the bounds of material reality. In "The Call of Cthulhu" Lovecraft writes: The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. 
The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age. Despite Lovecraft's invocations of illusion, he is not claiming that his fantastic creations such as the Old Ones are supernatural but, following Joshi, are only ever supernormal. One can immediately see that instead of nullifying realism Lovecraft in fact opens up the real to an unbearable degree. In various letters and non-fictional statements Lovecraft espoused strictly materialist tenets, ones which he borrowed from Hugh Elliot namely the uniformity of law, the denial of teleology and the denial of non-material existence. Lovecraft seeks to explore the possibilities of such a universe by piling horror upon horror until the fragile brain which attempts to grasp it fractures. This may be why philosophy has largely ignored weird fiction – while Deleuze and Guattari mark the turn towards weird fiction and Lovecraft in particular, with the precursors to speculative realism as well as contemporary related thinkers have begun to view Lovecraft as making philosophical contributions. Lovecraft's own relation to philosophy is largely critical while celebrating Nietzsche and Schopenhauer. This relationship of Lovecraft to philosophy and philosophy to Lovecraft is coupled with Lovecraft's habit of mercilessly destroying the philosopher and the figure of the academic more generally in his work, a destruction which is both an epistemological destruction and an ontological destruction. Thomas Ligotti's weird fiction which he has designated as a kind of "confrontational escapism" might be best described in the following quote from one of his shortstories, "The human phenomenon is but the sum of densely coiled layers of illusion each of which winds itself on the supreme insanity. That there are persons of any kind when all there can be is mindless mirrors laughing and screaming as they parade about in an endless dream". Whereas Lovecraft's weirdness draws predominantly from the abyssal depths of the uncharted universe, Ligotti's existential horror focuses on the awful proliferation of meaningless surfaces that is, the banal and every day function of representation. In an interview, Ligotti states: We don't even know what the world is like except through our sense organs, which are provably inadequate. It's no less the case with our brains. Our whole lives are motored along by forces we cannot know and perceptions that are faulty. We sometimes hear people say that they're not feeling themselves. Well, who or what do they feel like then? This is not to say that Ligotti sees nothing beneath the surface but that there is only darkness or blackness behind it, whether that surface is on the cosmological level or the personal. By addressing the implicit and explicit philosophical issues in Ligotti's work we will see that his nightmarish take on reality is a form of malevolent idealism, an idealism which is grounded in a real, albeit dark and obscure materiality. If Ligotti's horrors ultimately circle around mad perceptions which degrade the subject, it takes aim at the vast majority of the focus of continental philosophy. 
While Lovecraft's acidic materialism clearly assaults any romantic concept of being from the outside, Ligotti attacks consciousness from the inside: Just a little doubt slipped into the mind, a little trickle of suspicion in the bloodstream, and all those eyes of ours, one by one, open up to the world and see its horror [...] Not even the solar brilliance of a summer day will harbor you from horror. For horror eats the light and digests it into darkness. Clearly, the weird fiction of Lovecraft and Ligotti amount to a anti-anthrocentric onslaught against the ramparts of correlationist continental philosophy. /3/ – Shoggothic Materialism or the Formless Formless protoplasm able to mock and reflect all forms and organs and processes—viscous agglutinations of bubbling cells—rubbery fifteen-foot spheroids infinitely plastic and ductile—slaves of suggestion, builders of cities—more and more sullen, more and more intelligent, more and more amphibious, more and more imitative—Great God! What madness made even those blasphemous Old Ones willing to use and to carve such things? H.P. Lovecraft. "At the Mountains of Madness" On the other hand, affirming that the universe resembles nothing and is only formless amounts to saying that the universe is something like a spider or spit. Georges Bataille. "Formless". The Shoggoths feature most prominently in H.P. Lovecraft's shortstory "At the Mountains of Madness" where they are described in the following manner: It was a terrible, indescribable thing vaster than any subway train – a shapeless congeries of protoplasmic bubbles, faintly self -luminous, and with myriads of temporary eyes forming and un-forming as pustules of greenish light all over the tunnel-filling front that bore down upon us, crushing the frantic penguins and slithering over the glistening floor that it and its kind had swept so evilly free of all litter. The term is a litmus test for materialism itself as the Shoggoth is an amorphous creature. The Shoggoths were living digging machines bio engineered by the Elder Things, and their protoplasmic bodies being formed into various tools by their hypnotic powers. The Shoggoths eventually became self aware and rose up against their masters in an ultimately failed rebellion. After the Elder Ones retreated into the oceans leaving the Shoggoths to roam the frozen wastes of the Antarctic. The onto-genesis of the Shoggoths and their gross materiality, index the horrifyingly deep time of the earth a concept near and dear to Lovecraft's formulation of horror as well as the fear of intelligences far beyond, and far before, the ascent of humankind on earth and elsewhere. The sickly amorphous nature of the Shoggoths invade materialism at large, where while materiality is unmistakably real ie not discursive, psychological, or otherwise overly subjectivist, it questions the relation of materialism to life. As Eugene Thacker writes: The Shoggoths or Elder Things do not even share the same reality with the human beings who encounter them—and yet this encounter takes place, though in a strange no-place that is neither quite that of the phenomenal world of the human subject or the noumenal world of an external reality. Amorphous yet definitively material beings are a constant in Lovecraft's tales. 
In his tale "The Dream-Quest of Unknown Kadatth" Lovecraft describes Azathoth as, "that shocking final peril which gibbers unmentionably outside the ordered universe," that, "last amorphous blight of nethermost confusion which blashphemes and bubbles at the centre of all infinity," who, "gnaws hungrily in inconceivable, unlighted chambers beyond time". Azathoth's name may have multiple origins but the most striking is the alchemy term azoth which is both a cohesive agent and a acidic creation pointing back to the generative and the decayed. The indistinction of generation and degradation materially mirrors the blur between the natural and the unnatural as well as life and non-life. Lovecraft speaks of the tension between the natural and the unnatural is his short story "The Unnameable." He writes, "if the psychic emanations of human creatures be grotesque distortions, what coherent representation could express or portray so gibbous and infamous a nebulousity as the spectre of a malign, chaotic perversion, itself a morbid blasphemy against Nature?". Lovecraft explores exactly the tension outlined at the beginning of this chapter, between life and thought. At the end of his short tale Lovecraft compounds the problem as the unnameable is described as "a gelatin—a slime—yet it had shapes, a thousand shapes of horror beyond all memory". Deleuze suggests that becoming-animal is operative throughout Lovecraft's work, where narrators feel themselves reeling at their becoming non-human or of being the anomalous or of becoming atomized. Following Eugene Thacker however, it may be far more accurate to say that Lovecraft's tales exhibit not a becoming-animal but a becoming-creature. Where the monstrous breaks the purportedly fixed laws of nature, the creature is far more ontologically ambiguous. The nameless thing is an altogether different horizon for thought. The creature is either less than animal or more than animal – its becoming is too strange for animal categories and indexes the slow march of thought towards the bizarre. This strangeness is, as aways, some indefinite swirling in the category of immanence and becoming. Bataille begins "The Labyrinth" with the assertion that being, to continue to be, is becoming. More becoming means more being hence the assertion that Bataille's barking dog is more than the sponge. This would mean that the Shoggotth is altogether too much being, too much material in the materialism. Bataille suggests that there is an immanence between the eater and the eaten, across the species and never within them. That is, despite the chaotic storm of immanence there must remain some capacity to distinguish the gradients of becoming without reliance upon, or at least total dependence upon, the powers of intellection to parse the universe into recognizable bits, properly digestible factoids. That is, if we undo Deleuze's aforementioned valorization of sense which, for his variation of materialism, performed the work of the transcendental, but refuse to reinstate Kant's transcendental disjunction between thing and appearance, then it must be a quality of becoming-as-being itself which can account for the discernible nature of things by sense. In an interview with Peter Gratton, Jane Bennett formulates the problem thusly: What is this strange systematicity proper to a world of Becoming? What, for example, initiates this congealing that will undo itself? Is it possible to identify phases within this formativity, plateaus of differentiation? 
If so, do the phases/plateaus follow a temporal sequence? Or, does the process of formation inside Becoming require us to theorize a non-chronological kind of time? I think that your student's question: "How can we account for something like iterable structures in an assemblage theory?" is exactly the right question. Philosophy has erred too far on the side of the subject in the subject-object relation and has furthermore, lost the very weirdness of the non-human. Beyond this, the madness of thought need not override. /4/ - Ventriloquial Idealism or the Externality of Thought My aim is the opposite of Lovecraft's. He had an appreciation for natural scenery on earth and wanted to reach beyond the visible in the universe. I have no appreciation for natural scenery and want the objective universe to be a reflection of a character. Thomas Ligotti. "Devotees of Decay and Desolation." Unless life is a dream, nothing makes sense. For as a reality, it is a rank failure [….] Horror is more real than we are. Thomas Ligotti. "Professor Nobody's Little Lectures on Supernatural Horror". Thomas Ligotti's tales are rife with mannequins, puppets, and other brainless entities which of replace the valorized subject of philosophy – that of the free thinking human being. His tales such as "The Dream of the Manikin" aim to destroy the rootedness of consciousness. James Trafford has connected the anti-egoism of Ligotti to Thomas Metzinger – where the self is at best an illusion and we plead desperately for someone else to acknowledge that we are real. Trafford has stated it thus, "Life is played out as an inescapable puppet show, an endless dream in which the puppets are generally unaware that they are trapped within a mesmeric dance of whose mechanisms they know nothing and over which they have no control". An absolute materialism, for Ligotti, implies an alienation of the idea which leads to a ventriloquil idealism. As Ligotti notes in an interview, "the fiasco and nightmare of existence, the particular fiasco and nightmare of human existence, the sense that people are puppets of powers they cannot comprehend, etc." And then further elaborates that,"[a]ssuming that anything has to exist, my perfect world would be one in which everyone has experienced the annulment of his or her ego. That is, our consciousness of ourselves as unique individuals would entirely disappear". The externality of the idea leads to the unfortunate consequence of consciousness eating at itself through horror which, for Ligotti, is more real than reality and goes beyond horror-as-affect. Beyond this, taking together with the unreality of life and the ventriloquizing of subjectivity, Ligotti's thought becomes an idealism in which thought itself is alien and ultimately horrifying. The role of human thought and the relation of non-relation of horror to thought is not completely clear in Ligotti's The Conspiracy Against the Human Race. Ligotti argues in his The Conspiracy Against the Human Race,that the advent of thought is a mistake of nature and that horror is being in the sense that horror results from knowing too much. Yet, at the same time, Ligotti seems to suggest that thought separates us from nature whereas, for Lovecraft, thought is far less privileged – mind is just another manifestation of the vital principal, it is just another materialization of energy. 
In his brilliant "Prospects for Post-Copernican Dogmatism" Iain Grant rallies against the negative definition of dogmatism and the transcendental, and suggests that negatively defining both over-focuses on conditions of access and subjectivism at the expense of the real or nature. With Schelling, who is Grant's champion against the subjectivist bastions of both Fichte and Kant, Ligotti's idealism could be taken as a transcendental realism following from an ontological realism. Yet the transcendental status of Ligotti's thought move towards a treatment of the transcendental which may threaten to leave beyond its realist ground. Ligotti states: Belief in the supernatural is only superstition. That said, a sense of the supernatural, as Conrad evidenced in Heart of Darkness, must be admitted if one's inclination is to go to the limits of horror. It is the sense of what should not be- the sense of being ravaged by the impossible. Phenomenally speaking, the super-natural may be regarded as the metaphysical counterpart of insanity, a transcendental correlative of a mind that has been driven mad. Again, Ligotti equates madness with thought, qualifying both as supernatural while remaining less emphatic about the metaphysical dimensions of horror. The question becomes one of how exactly the hallucinatory realm of the ideal relates to the black churning matter of Lovecraft's chaos of elementary particles. In his tale "I Have a Special Plan for This World" Ligotti formulates thus: A: There is no grand scheme of things. B: If there were a grand scheme of things, the fact – the fact – that we are not equipped to perceive it, either by natural or supernatural means, is a nightmarish obscenity. C: The very notion of a grand scheme of things is a nightmarish obscenity. Here Ligotti is not discounting metaphysics but implying that if it does exist the fact that we are phenomenologically ill-equipped to perceive that it is nightmarish. For Ligotti, nightmare and horror occur within the circuit of consciousness whereas for Lovecraft the relation between reality and mind is less productive on the side of mind. It is easier to ascertain how the Kantian philosophy is a defense against the diseases of the head as Kant armors his critical enterprise from too much of the world and too much of the mind. The weird fiction of both Lovecraft and Ligotti demonstrates that there is too much of both feeding into one another in a way that corrodes the Kantian schema throughly, breaking it down into a dead but still ontologically potentiated nigredo. The haunting, terrifying fact of Ligotti's idealism is that the transcendental motion which brought thought to matter, while throughly material and naturalized, brings with it the horror that thought cannot be undone without ending the material that bears it either locally or completely. Thought comes from an elsewhere and an elsewhen being-in-thought. The unthinkable outside thought is as maddening as the unthought engine of thought itself within thought which doesn't exist except for the mind, the rotting décor of the brain. /5/ - Hyperstitional Transcendental Paranoia or Self -Expelled Thought Weird fiction has been given some direct treatment in philosophy in the mad black Deleuzianism of Nick Land. Nick Land along with others in the 1990s created the Cyber Culture Research Unit as well as the research group Hyperstition. 
The now defunct hyperstitional website, an outgrowth of the Cybernetic Culture Research Unit, defined hyperstition in the following fourfold: 1-Element of effective culture that makes itself real. 2-Fictional quantity functional as a time-traveling device. 3-Coincidence intensifier. 4-Call to the Old Ones. The distinctively Lovecraftian character of hyperstition is hard to miss, as are its Deleuzo-Guattarian roots. In the opening pages of A Thousand Plateaus Deleuze and Guattari write, "We have been criticized for over-quoting literary authors. But when one writes, the only question is which other machine the literary machine can be plugged into". The indistinction of literature and philosophy mirrors the mess of being and knowing in post-correlationist philosophy, where philosophy tries to make itself real whereas literature, especially the weird, aims itself at the brain-circuit of horror. The texts of both Lovecraft and Ligotti work through horror as epistemological plasticity meeting with proximity as well as the deep time of Lovecraft and the glacially slow time of paranoia in Ligotti. Against Deleuze, and following Brassier, we cannot allow the time of consciousness, the Bergsonian time of the durée, to override natural time, but instead acknowledge that it is an unfortunate fact of existence as a thinking being. Horror-time, the time of consciousness, with all its punctuated moments and drawn out terrors, cannot compare to the deep time of non-existence both in the unreachable past and the unknown future. The crystalline cogs of Kant's account of experience as the leading light for the possibility of metaphysics must be thoroughly obliterated. His gloss of experience in Prolegomena to Any Future Metaphysics could not be more sterile: Experience consists of intuitions, which belong to the sensibility, and of judgments, which are entirely a work of the understanding. But the judgments which the understanding makes entirely out of sensuous intuitions are far from being judgments of experience. For in the one case the judgment connects only the perceptions as they are given in sensuous intuition [....] Experience consists in the synthetic connection of appearances in consciousness, so far as this connection is necessary. Here it is difficult to dismiss the queasiness that Kant's legalism induces upon sight for both Badiou and David-Ménard. Kant's thought becomes, as Foucault says when reflecting on Sade's text in relation to nature, "the savage abolition of itself". For Badiou, Kant's philosophy simply closes off too much of the outside, freezing the world of thought in an all too limited formalism. Critical philosophy is simply the systematized quarantine on future thinking, on thinking which would threaten the formalism which artificially grants thought its own coherency in the face of madness. Even the becoming-mad of Deleuze, while escaping the rumbling ground, makes grounds for itself, mad grounds but grounds which are thinkable in their affect. The field of effects allows for Deleuze's aesthetic and radical empiricism, in which effects and/or occasions make up the material of the world to be thought as a chaosmosis of simulacra. Given a critique of an empiricism of aesthetics, of the image, it may be difficult to justify an attack on Kantian formalism with the madness of literature, which does not aim to make itself real but which we may attempt to make real.
That is, how do Lovecraft's and Ligotti's materials, as materials for philosophy to work on, differ from either the operative formalisms of Kant or the implicitly formalized images of Deleuzian empiricism? It is simply that such texts do not aim to make themselves real, and make claims to the real which are more alien to us than familiar, which is why their horror is immediately more trustworthy. This is the madness which Blanchot discusses in The Infinite Conversation through Cervantes and his knight – the madness of book-life, of the perverse unity of literature and life, a discussion which culminates in the discussion of one of the weird's masters, that of Kafka. The text is the knowing of madness, since madness, in its moment of becoming-more-mad, cannot be frozen in place but by the solidifications of externalizing production. This is why Foucault ends his famous study with works of art. Furthermore, extilligence, the ability to export the products of our maligned brains, is the companion of the attempts to export, or discover the possibility of, intelligences outside of our heads, in order for philosophy to survive the solar catastrophe. To borrow again from Deleuze, writing is inseparable from becoming. The mistake is to believe that madness is reabsorbed by extilligence, by great works, or that it could be exorcised by the expelling of thought into the inorganic or differently organic. Going out of our heads does not guarantee that we will not still go out of our minds. This is simply because of the outside, of matter, or force, or energy, or thing-in-itself, or Schopenhauerian Will. In Lovecraft's "The Music of Erich Zann" an "impoverished student of metaphysics" becomes intrigued by strange viol music coming from above his room. After meeting the musician the student discovers that each night he plays frantic music at a window in order to keep some horridness at bay, some "impenetrable darkness with chaos and pandemonium". The aesthetic defenses provided by the well-trained brain can bear the hex of matter for only so long, the specter of unalterability within it which too many minds obliterate, collapsing everything before the thought of thought as thinkable or at least noetically mutable on our own terms. Transcendental paranoia is the concurrent nightmare and promise of Paul Humphrey's work, of being literally out of our minds. It is the gothic counterpart of thinking non-conceptually but also of thinking never belonging to any instance of purportedly solid being. As Bataille stated, "At the boundary of that which escapes cohesion, he who reflects within cohesion realizes there is no longer any room for him". Thought is immaterial only to the degree that it is inhuman; it is a power that tries, always with failure, to ascertain its own genesis. Philosophy, if it can truly return to the great outdoors, if it can leave behind the dead loop of the human skull, must recognize not only the non-priority of human thought, but that thought never belongs to the brain that thinks it; thought comes from somewhere else. To return to the train image from the beginning – "a locomotive rolling on the surface of the earth is the image of continuous metamorphosis" – this is the problem of thought, and of thinking thought, of being no longer able to isolate thought, with only a thought-formed structure.
[1] One of the central tenets of François Laruelle's non-philosophy is that philosophy has traditionally operated on material already presupposed as thinkable instead of trying to think the real in itself. Philosophy, according to Laruelle, remains fixated on transcendental synthesis which shatters immanence into an empirical datum and an a priori factum which are then fused by a third thing such as the ego. For a critical account of Laruelle's non-philosophy see Ray Brassier's Nihil Unbound.
Alain Badiou in Continental Philosophy
Gilles Deleuze in Continental Philosophy
Does the Cosmological Argument Depend on the Ontological? William F. Vallicella - 2000 - Faith and Philosophy 17 (4):441-458.
Does the cosmological argument (CA) depend on the ontological (OA)? That depends. If the OA is an argument "from mere concepts," then no; if the OA is an argument from possibility, then yes. That is my main thesis. Along the way, I explore a number of subsidiary themes, among them, the nature of proof in metaphysics, and what Kant calls the "mystery of absolute necessity.".
Cosmological Arguments for Theism in Philosophy of Religion
Ontological Arguments for Theism in Philosophy of Religion
Universals Without Absolutes: A Theory of Media Ethics. Christopher Meyers - 2016 - Journal of Media Ethics 31 (4):198-214.
The global turn in media ethics has presented a tough challenge for traditional models of moral theory: How do we assert common moral standards while also showing respect for the values of those from outside the Western tradition? The danger lies in advocating for either extreme: reason-dependent absolutism or cultural relativism. In this paper, I reject Cliff Christian's attempts to solve the problem and propose instead a moral theory of universal standards that are discovered via a mix of rationally grounded (...) methods. Such universality refutes relativism but, because it is grounded in evolutionary naturalism and life-world philosophy—as opposed to a Kantian or theological transcendentalism—it also avoids absolutism. (shrink)
Relative and Absolute Presence. Sean Enda Power - 2016 - In B. Mölder, V. Arstila & P. Øhrstrøm (eds.), Philosophy and Psychology of Time. Springer. pp. 69-100.
Different ways of thinking about presence can have significant consequences for one's thinking about temporal experience. Temporal presence can be conceived of as either absolute or relative. Relative presence is analogous to spatial presence, whereas absolute presence is not. For each of these concepts of presence, there is a theory of time which holds that this is how presence really is. For the A-theory, temporal presence is absolute; it is a special moment in time, a time defined (...) by events in what has been called the A-series. For the B-theory, temporal presence is relative ; it is itself defined relative to moments in time, a time defined by events in the B-series. Many A-theorists go further to claim that the present is the only real moment in time; the past and future are unreal. One can have different sets of problems depending on whether one thinks in terms of absolute presence or relative presence. For example, there is the concept of the 'specious' present – a duration many theorists claim that we perceptually experience. It is argued in this paper that the specious present has problems given absolute presence, which it does not have given relative presence. Many of the problems are avoided by having an extended present. However, A-theory, the standard theory of time which advocates absolute presence, cannot have an extended present. Further, the best solution for absolute presence which is extended, durational presentism, involves denying the standard theories in the philosophy of time. (shrink)
Philosophy of Time, Misc in Metaphysics
The Specious Present in Philosophy of Mind
Time and Consciousness in Psychology in Philosophy of Cognitive Science
Online Recognition of Music Is Influenced by Relative and Absolute Pitch Information. Sarah C. Creel & Melanie A. Tumlin - 2012 - Cognitive Science 36 (2):224-260.
Three experiments explored online recognition in a nonspeech domain, using a novel experimental paradigm. Adults learned to associate abstract shapes with particular melodies, and at test they identified a played melody's associated shape. To implicitly measure recognition, visual fixations to the associated shape versus a distractor shape were measured as the melody played. Degree of similarity between associated melodies was varied to assess what types of pitch information adults use in recognition. Fixation and error data suggest that adults naturally recognize (...) music, like language, incrementally, computing matches to representations before melody offset, despite the fact that music, unlike language, provides no pressure to execute recognition rapidly. Further, adults use both absolute and relative pitch information in recognition. The implicit nature of the dependent measure should permit use with a range of populations to evaluate postulated developmental and evolutionary changes in pitch encoding. (shrink)
Absolute Creationism and Divine Conceptualism. William Lane Craig - 2017 - Philosophia Christi 19 (2):431-438.
The contemporary debate over God and abstract objects is hampered by a lack of conceptual clarity concerning two distinct metaphysical views: absolute creationism and divine conceptualism. This confusion goes back to the fount of the current debate, the article "Absolute Creation" by Thomas Morris and Christopher Menzel, who were not of one mind concerning God's relation to abstract objects. Confusion has followed in their wake. Going forward, theistic philosophers need to distinguish more clearly between a sort of modified (...) Platonism, according to which abstract objects depend ontologically on God, and a sort of divine psychologism, according to which objects typically thought to be abstract are, in fact, concrete mental entities of some sort. (shrink)
Absolute Knowledge and the Problem of Systematic Completeness in Hegel's Philosophy. Ph D. Edward Beach - 1981 - The Owl of Minerva 13 (2):8-8.
From the author: This dissertation undertakes a critical examination of one central problem in Hegelian philosophy: viz., whether the final realization of "absolute knowledge" is logically consistent with significant epistemic progress in the system's continuing development. Serious consideration of the concept of systematic completeness, as interpreted on Hegel's terms, uncovers the existence of a profound paradox. On the one hand, if the Truth is the Whole, then the truth of any finite part or aspect of that Whole depends upon (...) its place within the system as a totality. In order to grasp the part and comprehend it correctly, one must already have a systematically ordered knowledge of the universe in its entirety. Yet the methodological open-endedness of dialectical thinking precludes the acceptance of any particular theoretical formulation as final or complete in itself. Hegel interprets the history of philosophy as a progression of successive formulations of the Truth, each of which improves on its predecessors by reconciling their internal contradictions, but no one of which is adequate in itself. Hence, no theoretical world view, including Hegel's own, can rest content within its limits, for the inevitable advance of thought must in the end throw it down. (shrink)
G. W. F. Hegel in 19th Century Philosophy
Measuring Absolute Velocity. Sebastián Murgueitio Ramírez & Ben Middleton - 2021 - Australasian Journal of Philosophy 99 (4):806-816.
ABSTRACT We argue that Roberts's argument for the thesis that absolute velocity is not measurable in a Newtonian world is unsound, because it depends on an analysis of measurement that is not extensionally adequate. We propose an alternative analysis of measurement, one that is extensionally adequate and entails that absolute velocity is measured in at least one Newtonian world. If our analysis is correct, then this Newtonian world is a counterexample to the widely endorsed thesis that if a (...) property varies under the symmetries of a theory then, according to that theory, the property could not be measured. Thus, our paper shows that the debate over the measurability of symmetry-variant properties is more unsettled than previously supposed. (shrink)
William Crookes and the Quest for Absolute Vacuum in the 1870s. Robert K. DeKosky - 1983 - Annals of Science 40 (1):1-18.
This essay examines the technical evolution and scientific context of William Crookes's effort to achieve an absolute vacuum in the 1870s. Prior to late 1876, along with interrogation of the radiometer effect, the quest for perfect vacuum was a major motive of his research programme. At this time, no absolutely dependable method existed to determine exactly the pressures at extreme rarefactions. Crookes therefore employed changes in radiometric, viscous and electrical effects with changing pressure in order to monitor the progress (...) of exhaustion. After late 1876, his research priorities shifted because he had reached a plateau of technical accomplishment in the effort to attain extreme vacua, and because observed effects in vacua—particularly electrical—assumed an importance in their own right, and as bases for elucidation and defence of his concept of a 'fourth state of matter' at very low pressures. (shrink)
Upādāyaprajñaptiḥ and the Meaning of Absolutives: Grammar and Syntax in the Interpretation of Madhyamaka. [REVIEW] Mattia Salvini - 2011 - Journal of Indian Philosophy 39 (3):229-244.
The article discusses the relevance of the syntactical implications of the absolutive ending (lyabanta) in interpreting the Madhyamaka term upādāyaprajñapti, and hence Mūlamadhyamakakārikā 18.24. The views of both Sanskrit and Pāli classical grammarians are taken into account, and a comparison is made between some contemporary English translations of MMK 18.24 as against Candrakīrti's commentary. The conclusion suggests that Candrakīrti is grammatically accurate and perceptive, that he may have been aware of the tradition of Candragomin's grammar, and that the structural analogy (...) between upādāyaprajñapti and pratītyasamutpāda may be relevant in understanding the relationship between notional and existential dependence. (shrink)
Indian Philosophy in Asian Philosophy
Absolute Knowledge and the Problem of Systematic Completeness in Hegel's Philosophy. Beach - 1981 - The Owl of Minerva 13 (2):10-10.
As an important corollary of this interpretation of absolute knowledge, the dissertation concludes with the suggestion that Hegelian philosophy need not be regarded merely as an interesting curiosity in the history of ideas, but rather that it can serve as a vital and potentially rewarding source of fresh theoretical insights. ;Instead, the concrete completeness of speculative philosophy can only consist in the activity of a dynamical, ceaselessly self-examining and self-regulating intellectual community. In one sense, of course, no finite system (...) can ever be complete, insofar as it will inevitably be guilty of errors and misconceptions. Yet at the same time, the infinite System is perfect and complete, insofar as it contains within itself the means whereby its own unavoidable flaws will be discovered, corrected, and "forgiven"--i.e. aufgehoben into vanishing moments of a universal totality. ;In opposition to most traditional interpretations of Hegel, the position is maintained that Hegel himself deliberately faced this problem, and that a true understanding of his proposed solution requires a radical Aufhebung of the concept of cognition as such. Relying primarily on the penultimate chapters of the Phenomenology of Spirit, the dissertation argues that absolute knowledge can neither be a finalized system of fully explicit content, nor an endless movement toward such an ideal state. ;This dissertation undertakes a critical examination of one central problem in Hegelian philosophy: viz., whether the final realization of "absolute knowledge" is logically consistent with significant epistemic progress in the system's continuing development. Serious consideration of the concept of systematic completeness, as interpreted on Hegel's terms, uncovers the existence of a profound paradox. On the one hand, if the Truth is the Whole , then the truth of any finite part or aspect of that Whole depends upon its place within the system as a totality. In order to grasp the part and comprehend it correctly, one must already have a systematically ordered knowledge of the universe in its entirety. Yet the methodological open-endedness of dialectical thinking precludes the acceptance of any particular theoretical formulation as final or complete in itself. Hegel interprets the history of philosophy as a progression of successive formulations of the Truth, each of which improves on its predecessors by reconciling their internal contradictions, but no one of which is adequate in itself. Hence, no theoretical world view, including Hegel's own, can rest content within its limits, for the inevitable advance of thought must in the end throw it down. (shrink)
Hegel: Metaphysics in 19th Century Philosophy
'Absolute' Adjectives in Belief Contexts. Charlie Siu - 2020 - Linguistics and Philosophy (4):1-36.
It is a consequence of both Kennedy and McNally's typology of the scale structures of gradable adjectives and Kennedy's economy principle that an object is clean just in case its degree of cleanness is maximal. So they jointly predict that the sentence `Both towels are clean, but the red one is cleaner than the blue one' is a contradiction. Surely, one can account for the sentence's assertability by saying that the first instance of `clean' is used loosely: Since `clean' pragmatically (...) conveys the property of being close to maximally clean rather than the property of being maximally clean, the sentence as a whole conveys a consistent proposition. I challenge this semantics-pragmatics package by considering the sentence `Mary believes that both towels are clean but that the red one is cleaner than the blue one'. We can certainly use this sentence to attribute a coherent belief to Mary: One of its readings says that she believes that the towels are clean by a contextually salient standard (e.g. the speaker's); the other says that she believes that the towels are clean by her own standard. I argue that Kennedy's semantics-pragmatics package can't deliver those readings, and propose that we drop the economy principle and account for those readings semantically by assigning to the belief sentence two distinct truth conditions. I consider two ways to deliver those truth-conditions. The first one posits world-variables in the sentence's logical form and analyzes those truth-conditions as resulting from two binding possibilities of those variables. The second one proposes that the threshold function introduced by the phonologically null morpheme pos is shiftable in belief contexts. (shrink)
Attitude Ascriptions, Misc in Philosophy of Language
Absolutely Clean Hands? Responsibility for What's Allowed in Refraining From What's Not Allowed. Suzanne Uniacke - 1999 - International Journal of Philosophical Studies 7 (2):189-209.
This paper examines the absolutist grounds for denying an agent's responsibility for what he allows to happen in 'keeping his hands clean' in acute circumstances. In defending an agent's non-prevention of what is, viewed impersonally, the greater harm in such cases, absolutists typically insist on a difference in responsibility between what an agent brings about as opposed to what he allows. This alleged difference is taken to be central to the absolutist justification of non-intervention in acute cases: the agent's obligation (...) not to do harm is held to be more stringent than his obligation to prevent (comparable) harm, since as agents we are principally responsible for what we ourselves do. The paper's central point is that this representation of the absolutist response to acute cases- as grounded in a difference in responsibility for what we do as opposed to what we allow- involves a misleading theoretical inversion. I argue that the absolutist justification of non-intervention in acute cases must depend on a direct defence of the nature and the stringency of the moral norm with which the agent's non-intervention complies. The nature and stringency of this norm are basic to attribution of agent responsibility in acute cases, and not the other way around. (shrink)
Control and Responsibility in Meta-Ethics
Free Will and Responsibility in Philosophy of Action
Freedom and Liberty in Social and Political Philosophy
Motivation and Will in Philosophy of Action
Medical Confidentiality: An Intransigent and Absolute Obligation. M. H. Kottow - 1986 - Journal of Medical Ethics 12 (3):117-122.
Clinicians' work depends on sincere and complete disclosures from their patients; they honour this candidness by confidentially safeguarding the information received. Breaching confidentiality causes harms that are not commensurable with the possible benefits gained. Limitations or exceptions put on confidentiality would destroy it, for the confider would become suspicious and un-co-operative, the confidant would become untrustworthy and the whole climate of the clinical encounter would suffer irreversible erosion. Excusing breaches of confidence on grounds of superior moral values introduces arbitrariness and (...) ethical unreliability into the medical context. Physicians who breach the agreement of confidentiality are being unfair, thus opening the way for, and becoming vulnerable to, the morally obtuse conduct of others. Confidentiality should not be seen as the cosy but dispensable atmosphere of clinical settings; rather, it constitutes a guarantee of fairness in medical actions. Possible perils that might accrue to society are no greater than those accepted when granting inviolable custody of information to priests, lawyers and bankers. To jeopardize the integrity of confidential medical relationships is too high a price to pay for the hypothetical benefits this might bring to the prevailing social order. (shrink)
In Defense of Absolute Creationism. William Lane Craig - 2017 - Review of Metaphysics 71 (3).
Absolute creationism is a sort of theistic Platonism, which preserves intact the host of abstract objects but renders them dependent upon God. From its inception, absolute creationism has been dogged by a vicious circularity that has come to be known as the bootstrapping objection. Many philosophers, including the author, have taken the bootstrapping objection to be decisive against absolute creationism. But a review of the most sophisticated statement of the objection suggests a way out for the absolute creationist. By denying a constituent ontology the absolute creationist can avoid the vicious circularity, since explanatorily prior to his creation of properties God can be just as he is without exemplifying properties. Still, in light of the metaphysical idleness of such abstract entities, theists would be well advised to deny instead the Platonist's presumed criterion of ontological commitment and so to avoid realism altogether.
The Quasi-Doppler Experiment According to Absolute Space-Time Theory. Stefan Marinov - 1981 - Foundations of Physics 11 (1-2):115-120.
We find the relation between the frequencies received by two observers placed at a given parallel with 180° difference in longitude when they observe a distant light (radio) source. This relation depends on the absolute velocity of the Earth; however, because of the occurrence of aberration, the effect cannot be registered in practice.
Space and Time in Philosophy of Physical Science
The Ingardenian Distinction Between Inseparability and Dependence: Historical and Systematic Considerations. Marek Piwowarczyk - 2020 - HORIZON. Studies in Phenomenology 9 (2):532-551.
In this paper I present the Ingardenian distinction between inseparability and dependence. My considerations are both historical and systematic. The historical part of the paper accomplishes two goals. First, I show that in the Brentanian tradition the problem of existential conditioning was entangled into parts-whole theories. The best examples of such an approach are Kazimierz Twardowski's theory of the object and Edmund Husserl's theory of parts and wholes. Second, I exhibit the context within which Ingarden distinguished inseparability and dependence. Moreover, Ingarden's motivations are presented: the problem of understanding the Husserlian concept of "immanent transcendence," the issue of the existence of purely intentional objects, and finally the problem of the relationship between individual objects and ideas. The systematic part deals with the ambiguity of Ingarden's definition of inseparability. I seek to improve this definition by reference to the distinction between absolute and summative wholes. I also present some divisions of inseparability and dependence and investigate whether these types of existential conditioning are reflexive, symmetric, or transitive.
Roman Ingarden in Continental Philosophy
Boundedness and Absoluteness of Some Dynamical Invariants in Model Theory. Krzysztof Krupiński, Ludomir Newelski & Pierre Simon - 2019 - Journal of Mathematical Logic 19 (2):1950012.
Let [Formula: see text] be a monster model of an arbitrary theory [Formula: see text], let [Formula: see text] be any tuple of bounded length of elements of [Formula: see text], and let [Formula: see text] be an enumeration of all elements of [Formula: see text]. By [Formula: see text] we denote the compact space of all complete types over [Formula: see text] extending [Formula: see text], and [Formula: see text] is defined analogously. Then [Formula: see text] and [Formula: see (...) text] are naturally [Formula: see text]-flows. We show that the Ellis groups of both these flows are of bounded size, providing an explicit bound on this size. Next, we prove that these Ellis groups do not depend on the choice of the monster model [Formula: see text]; thus, we say that these Ellis groups are absolute. We also study minimal left ideals of the Ellis semigroups of the flows [Formula: see text] and [Formula: see text]. We give an example of a NIP theory in which the minimal left ideals are of unbounded size. Then we show that in each of these two cases, boundedness of a minimal left ideal is an absolute property and that whenever such an ideal is bounded, then in some sense its isomorphism type is also absolute. Under the assumption that [Formula: see text] has NIP, we give characterizations of when a minimal left ideal of the Ellis semigroup of [Formula: see text] is bounded. Then we adapt the proof of Theorem 5.7 in Definably amenable NIP groups, J. Amer. Math. Soc. 31 609–641 to show that whenever such an ideal is bounded, a certain natural epimorphism 863–932]) from the Ellis group of the flow [Formula: see text] to the Kim–Pillay Galois group [Formula: see text] is an isomorphism. We also obtain some variants of these results, formulate some questions, and explain differences which occur when the flow [Formula: see text] is replaced by [Formula: see text]. (shrink)
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
Experience and the Absolute in the Light of Idealism. Marco Gomboso - 2020 - Idealistic Studies 50 (1):19-31.
The question of whether the true character of reality is monistic or pluralistic spans almost the entire history of metaphysics. Though little discussed in recent decades, it presents problems that are nowadays considered of the utmost importance. Think, for instance, of the ultimate nature of elements such as matter, elemental particles or physical fields. Are they self-sufficient? Do they depend on a higher reality? A major discussion regarding the metaphysical grounds of such questions took place in Britain during the late (...) nineteenth century. It saw Francis Herbert Bradley and James Ward trying to understand the nature of experience. By recalling that specific discussion, this article seeks to show why the monistic character of reality prevails. (shrink)
Levinas's Agapeistic Metaphysics of Morals: Absolute Passivity and the Other as Eschatological Hierophany. John J. Davenport - 1998 - Journal of Religious Ethics 26 (2):331-366.
This article evaluates Emmanuel Levinas's novel "ethical metaphysics" of interpersonal relations from a religious perspective. Levinas presents a unique version of agape ethics that can be evaluated in terms of a number of the dilemmas that have traditionally attended Christian discussions of neighbor-love. Because Levinas's analysis makes our responsibility for other persons depend on their eschatological significance, it has the same problems that hamper all theories of neighbor-love that lack a sufficient role for reciprocity.
The Settlement Structure Is Reflected in Personal Investments: Distance-Dependent Network Modularity-Based Measurement of Regional Attractiveness. Laszlo Gadar, Zsolt T. Kosztyan & Janos Abonyi - 2018 - Complexity 2018:1-16.
How are ownership relationships distributed in the geographical space? Is physical proximity a significant factor in investment decisions? What is the impact of the capital city? How can the structure of investment patterns characterize the attractiveness and development of economic regions? To explore these issues, we analyze the network of company ownership in Hungary and determine how are connections are distributed in geographical space. Based on the calculation of the internal and external linking probabilities, we propose several measures to evaluate (...) the attractiveness of towns and geographic regions. Community detection based on several null models indicates that modules of the network coincide with administrative regions, in which Budapest is the absolute centre, and where county centres function as hubs. Gravity model-based modularity analysis highlights that, besides the strong attraction of Budapest, geographical distance has a significant influence over the frequency of connections and the target nodes play the most significant role in link formation, which confirms that the analysis of the directed company-ownership network gives a good indication of regional attractiveness. (shrink)
Human Striving and Absolute Reliance Upon God: A Kierkegaardian Paradox. Lee C. Barrett - 2021 - Kierkegaard Studies Yearbook 26 (1):139-164.
Kierkegaard's texts suggest countervailing construals of the respective roles of divine and human agency in an individual's pursuit of blessedness. Kierkegaard paradoxically suggests that the individual must depend entirely on grace for the birth and development of faith, and at the same time actively cultivate faithful dispositions and passions. But Kierkegaard did not espouse Calvinistic divine determinism, or Pelagian autonomous human agency, or the Arminian cooperation of the two. For Kierkegaard, the ostensible paradox of grace and free will is not (...) a cognitive conundrum but is rather a challenge to integrate faith as a gift and faith as a task. (shrink)
First-Order Response Dependencies at a Differential Brightness Threshold. R. G. Lathrop - 1966 - Journal of Experimental Psychology 72 (1):120.
The Independence/Dependence Paradox Within John Rawls's Political Liberalism. Ali Rizvi - manuscript
Rawls in his later philosophy claims that it is sufficient to accept political conception as true or right, depending on what one's worldview allows, on the basis of whatever reasons one can muster, given one's worldview (doctrine). What political liberalism is interested in is a practical agreement on the political conception and not in our reasons for accepting it. There are deep issues (regarding deep values, purpose of life, metaphysics etc.) which cannot be resolved through invoking common reasons (this is (...) the fact of free reason itself), and trying to resolve them would involve us in interminable debates and would hamper the practical task of agreement on the political conception. Given the absolute necessity of a political society which is stable and enduring, it is thus wise to avoid these issues in founding a political society and choosing its basic principles - this is the pragmatic part of Rawls's position. In this paper I argue that this strategy leads Rawls into a paradox: (i) although the intention is to stay independent of comprehensive doctrines, the political conception is in fact totally (and precariously) dependent on comprehensive doctrines (not just on one doctrine but on each and every major doctrine in society). It is dependent on them: for its conceptualisation as an independent idea, for its justification, for the check of its reasonability in relation to the external world, for the formation of identities and value inculcation and hence for the formation of its model citizen; (ii) the very search for independence makes the political conception more dependent on comprehensive doctrines, and by extension makes it potentially more prone to intervention in and tampering with comprehensive doctrines (it is enough to show that it is a strong conceptual possibility to cast doubt on the whole strategy). Thus, for example, the political conception relies on the hope that "firmly held convictions gradually change" and that it would "in fact . . . have the capacity to shape those doctrines toward itself". The purpose of the Rawlsian conjecture is to give these "hopes" a concrete, practical form by giving advice to proponents of the comprehensive doctrine on how they can do all this and "try to show them that, despite what they might think, they can still endorse a reasonable political conception". I further argue that this paradox can be overcome by making the core of political liberalism more flexible. (shrink)
John Rawls in 20th Century Philosophy
Liberalism in Social and Political Philosophy
Political Constructivism in Social and Political Philosophy
Does Being Rational Require Being Ideally Rational? 'Rational' as a Relative and an Absolute Term. Wes Siscoe - forthcoming - Philosophical Topics.
A number of formal epistemologists have argued that perfect rationality requires probabilistic coherence, a requirement that they often claim applies only to ideal agents. However, in "Rationality as an Absolute Concept", Roy Sorensen contends that 'rational' is an absolute term. Just as Peter Unger argued that being flat requires that a surface be completely free of bumps and blemishes, Sorensen claims that being rational requires being perfectly rational. However, when we combine these two views, they lead to counterintuitive (...) results. If being rational requires being perfectly rational, and only the probabilistically coherent are perfectly rational, then this indicts all ordinary agents as irrational. In this paper, I will attempt to resolve this conflict by arguing that Sorensen is only partly correct. One important sense of 'rational,' the sanctioning sense of 'rational', is an absolute term, but another important sense of 'rational,' the sense in which someone can have rational capacities, is not. I will, then, show that this distinction has important consequences for theorizing about ideal rationality, developing an account of the relationship between ordinary and ideal rationality. Because the sanctioning sense of 'rational' is absolute, it is rationally required to adopt the most rational attitude available, but which attitude is most rational can change depending on whether we are dealing with ideal agents or people more like ourselves. (shrink)
Philosophy as Self-Transformation: Shusterman's Somaesthetics and Dependent Bodies. Talia Welsh - 2014 - Journal of Speculative Philosophy 28 (4):489-504.
Part of Nietzsche's blistering attack against Western morality is the argument that it stems from a lack of self-control that the weak have. Since the moralist cannot control and direct his own sexuality, he creates a "universal" set of moral values to be imposed externally on everyone. Despite the enchanting diversity of life, moralists prefer drab worlds of absolutes to help bolster their weak-willed selves: "Let us finally consider how naïve it is altogether to say: 'Man ought to be such (...) and such!' Reality shows us an enchanting wealth of types, the abundance of a lavish play and change of forms—and some wretched loafer of a moralist comments: 'No! Man ought to be different.' He even knows what man should be .. (shrink)
Michel Foucault in Continental Philosophy
Nietzsche: Character and Virtue Ethics in 19th Century Philosophy
Somatic and Feeling Theories of Emotion in Philosophy of Mind
Rāmānuja's Viśiṣṭādvaita and Hegel's Absolute Idealism - A Comparative Study. Shakuntala Gawde - 2018 - Journal of the Oriental Institute 67 (1-4):93-114.
Rāmānuja is known as a theistic ācārya who interpreted Brahmasūtras in Viśiṣṭādvaita point of view. He propounded his philosophy by refuting Kevāldvaita system of Śaṅkara. He criticized the existence and knowledge of indeterminate objects and refuted the concept of Nirviśeṣa Brahman. Therefore, Brahman for him is Saviśeṣa. The name Viśiṣṭādvaita itself signifies that it is Qualified Monism. Brahman is qualified by matter and soul. Matter and soul though real are completely dependent on Brahman for their existence. Hegel is a German (...) Philosopher who propounded Absolute Idealism. Hegel solved the problem of reality from a synthetic and positive point of view. Predecessors of Hegel were reflecting on Reality with one sided abstractions. Absolute is ultimately real for him. Hegel's Absolute is not devoid of all objects and qualities. Absolute is inclusive of all the categories and all things of the world. Absolute is not abstract like Śaṅkara's Brahman but it goes very much closer to Ramanuja's Saviśeṣa Brahman. Thus, both these philosopher though differ in some principles which are very peculiar to them, they definitely meet at on one point of 'Concrete Monism'. Logical method adopted by them to reach towards their goal is also strikingly similar. The aim of this paper is to analyse the concept of monism according to Rāmānuja and Hegel with philosophical point of view. Comparative study of both will throw light on some striking similarities as well as some differences of these great philosophers. (shrink)
Postmodern or Late Modern? On the Significance of Louis Dupré's The Quest of the Absolute. Guido Vanheeswijck - 2014 - International Journal of Philosophy and Theology 75 (3):223-235.
The latest book by Louis Dupré, The Quest of the Absolute, is the third and final volume of a trilogy on the intellectual history of modernity. It follows Passage to Modernity and The Enlightenment and the Intellectual Foundations of Modern Culture. Elegant writing and remarkable erudition go hand in hand with a deep insight into the objectives, achievements and deadlocks of the Romantic movement. It is not possible to look into the overwhelming variety of issues and figures that come (...) to the fore in this book and the trilogy as a whole; instead, this article focuses on Dupré's central claim as to the development and significance of modern Western culture, starting from a specific question that time and again recurs as a key motive throughout the three volumes of his trilogy: are we postmodern or late modern? Dupré's answer that we are dwellers of a late modern era rather than inhabitants of a postmodern age is dependent on his definition of modernity as a still ongoing 'event that has transformed the relation between the cosmos, its transcendent source, and its human interpreter'. Since we are still standing in the midst of the event of modernity, shaped by the evolutionary process of and the strains and tensions within and between its three waves, Dupré underlines the necessity to move from hermeneutic to ontological questions. He even explicitly pleads for the rediscovery of a symbolic religious language in a tentative search for its ontological dimension and for a source of significance beyond the realm of human mind. The main question, however, is whether contemporary Western man is still capable of such a rediscovery. (shrink) | CommonCrawl |
Anti-glaucoma potential of Heliotropium indicum Linn in experimentally-induced glaucoma
Samuel Kyei (1,2),
George Asumeng Koffuor (1,3),
Paul Ramkissoon (1) &
Osei Owusu-Afriyie (4)
Heliotropium indicum is used as a traditional remedy for hypertension in Ghana. The aim of the study was to evaluate the anti-glaucoma potential of an aqueous whole plant extract of H. indicum to manage experimentally-induced glaucoma.
The percentage change in intraocular pressure (IOP), after inducing acute glaucoma (15 mLkg−1 of 5 % dextrose, i.v.), in New Zealand White rabbits pretreated with Heliotropium indicum aqueous extract (HIE) (30–300 mgkg−1), acetazolamide (5 mgkg−1), and normal saline (10 mLkg−1) per os was measured. IOPs were also monitored in chronic glaucoma in rabbits (induced by 1 % prednisolone acetate drops, 12 hourly for 21 days) after treatments with the same doses of HIE, acetazolamide, and normal saline for 2 weeks. The anti-oxidant property of the extract was assessed by assaying for glutathione levels in the aqueous humour. Glutamate concentration in the vitreous humour was also determined using ELISA technique. Histopathological assessment of the ciliary bodies was made.
The extract significantly reduced intraocular pressure (p ≤ 0.05–0.001) in acute and chronic glaucoma, preserved glutathione levels and glutamate concentration (p ≤ 0.01–0.001). Histological assessment of the ciliary body showed a decrease in inflammatory infiltration in the extract and acetazolamide-treated group compared with the normal saline-treated group.
The aqueous whole plant extract of Heliotropium indicum has ocular hypotensive, anti-oxidant and possible neuro-protective effects, which underscore its plausible utility as an anti-glaucoma drug, pending further investigation.
Glaucoma, referred to as the silent thief of sight, is recorded as the second most important cause of blindness and the leading cause of irreversible blindness globally [1, 2].
It is said to be a heterogeneous group of diseases resulting from multiple causative factors including increase in intraocular pressure (IOP) and vascular dysregulation. These factors largely contribute to the initial injury in this disorder by hindering axoplasmic flow within the retinal ganglion cell (RGC) axons at the lamina cribrosa, impairing the optic nerve microcirculation at the level of lamina, and changing the laminar glial and connective tissue [3]. Factors leading to further damage include excitotoxicity caused by glutamate or glycine that is freed from injured neurons and oxidative damage [4]. Despite the provision of appropriate treatment, blindness still occurs in nearly 10 % of sufferers [5]. The most common form of the glaucomas, primary open angle glaucoma (POAG), presents with no warning symptoms, especially at its early stages [6].
Ghana is one of the worst-affected countries in the world as it is ranked second after St. Lucia in terms of glaucoma prevalence [7]. It is also reported to have an early age of onset (30 years) compared to the global trend of 40 years, with risk factors such as age and ethnicity [8–10]. The most aggressive form of glaucoma has been reported among people of African descent, who are three times more likely to suffer from glaucoma than Caucasians [11]. Cost-of-illness studies have shown the importance of this disease, with the United Kingdom spending more than £300 million in 2002 on glaucoma prevention and treatment [12]. In the United States, it is the reason for over 10 million visits to physicians annually, with a yearly estimated cost of over $1.5 billion to its government [5]. Elsewhere in Africa, where there are reliable data, it is evident that the middle-class spent more than half their monthly income, while low-income earners spent virtually all their monthly take-home salary to treat glaucoma [13]. This makes it an expensive disease so far as its management is concerned.
Management options for glaucoma include the use of medicines, as well as lasers and incisional surgery, with medical therapy being the most common [14]. None of these management procedures are free of complications, with some leading to loss of vision instead of its preservation [15, 16]. The development of new treatment options with minimal side effects is therefore important, specifically those that target the modifiable pathogenic factor of ocular hypertension in addition to others.
It is within this context that the current study investigated the anti-glaucoma potential of an aqueous whole plant extract of Heliotropium indicum L. (Boraginaceae), also known as cock's comb, to manage experimentally-induced glaucoma, as an initial step in bioprospecting for treatment options for the disease. In Ghana and elsewhere in Africa, H. indicum is widely used as a traditional remedy for several diseases such as abdominal pain, convulsion, cataract, conjunctivitis, cold and high blood pressure, among others [17, 18]. The plant is prepared and applied in various forms, such as a decoction, powder, cold infusion, poultice or concoction, or by squeezing its juice onto the affected area, depending on the ailment. In some localities in Ghana, it is used in the preparation of soup for postpartum women to treat inflammatory reactions. For its pressure-lowering effect, preparations of H. indicum are taken orally as a decoction, a concoction or a dietary ingredient in locally prepared soups.
Plant collection
Heliotropium indicum was collected in November, 2012, from the University of Cape Coast botanical gardens (5.1036° N, 1.2825° W), located in the Central Region of Ghana. It was identified and authenticated by a botanist at the School of Biological Sciences, College of Agricultural and Natural Sciences, University of Cape Coast, Cape Coast, Ghana. A voucher specimen, numbered 4873, has been deposited at the herbarium.
Preparation of the H. indicum aqueous extract (HIE)
Whole plants of H. indicum were washed thoroughly with tap water and shade-dried. The dried plants were milled into a coarse powder (1.5 kg) with a hammer mill (Schutte Buffalo, New York, NY), then mixed with 1 liter of water. The mixture was Soxhlet-extracted at 80 °C for 24 h, and the aqueous extract was freeze-dried (Hull freeze-dryer/lyophilizer 140 SQ, Warminster, PA). The powder obtained (yield 12.2 %) was labelled HIE and stored at a temperature of 4 °C. The HIE was reconstituted in normal saline to the desired concentration for dosing in this study.
Drugs and chemicals used
Prednisolone acetate ophthalmic suspension (1 %) (Alcon Laboratories, Inc. Texas, USA) was used to induce ocular hypertension. Proparacaine hydrochloride ophthalmic solution (Ashford Laboratories Ltd, China Macau) was used as a local anaesthetic in the eyes during IOP measurements. Acetazolamide (Ernest Chemists Ltd, Tema, Ghana) was used as the reference anti-glaucoma drug.
Experimental animals and husbandry
Twenty five New Zealand White rabbits, weighing 1.0 ± 0.2 kg, were housed singly in aluminium cages (34 cm × 47 cm × 18 cm) with soft wood shavings as bedding, under ambient conditions (temperature 28 ± 2 °C, relative humidity 60–70 %, and a normal light–dark cycle) in the Animal House of the School of Biological Sciences, University of Cape Coast, Ghana. They were fed on a normal commercial pellet diet (Agricare Ltd, Kumasi, Ghana) and had access to water ad libitum.
Ethical and biosafety considerations
The study protocol was approved by the Institutional Review Board on Animal Experimentation, Faculty of Pharmacy and Pharmaceutical Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Ethical clearance number: FPPS/PCOL/0030/2013). All activities performed during the study conformed to acceptable principles on the use and care of laboratory animals (EU directive of 1986: 86/609/EEC), and the Association for Research in Vision and Ophthalmology (ARVO) statement for the use of animals in ophthalmic and vision research. Biosafety guidelines for protection of personnel in the laboratory were also observed.
Preliminary phytochemical screening
Screening was performed on HIE to ascertain the presence of phytochemicals using standard procedures described by Harborne [19] and Kujur et al. [20].
Assessing hypotensive effect of HIE in an acute glaucoma model
The basal IOP in each eye of each rabbit was measured using an improved Schiotz indentation tonometer (J. Sklar Manufacturing Company, Long Island City, N.Y), which was calibrated by an open manometric calibration procedure as described elsewhere [21]. Care was taken to prevent the nictitating membrane from coming under the base of the tonometer. Tension was recorded each time by two weights (5.5 g and 10 g), and the mean of the two recordings was calculated. The animals were then put into five groups (n = 5) labelled A-E. Groups A, B, and C received 30, 100 and 300 mgkg−1 HIE respectively, while Groups D and E received 5 mgkg−1 acetazolamide and 10 mLkg−1 normal saline respectively. All administration was by mouth using an oral gavage. This was to mimic its ethno-pharmacological use. Each animal received not more than 1 mL of HIE. After 30 min, 15 mLkg−1 of 5 % dextrose solution was administered intravenously, through the marginal ear vein. IOP measurements were made every 20 min for 120 min in each eye. The percentage change in IOPs was then determined by the following formula:
$$ \%\,\text{Change in IOP} = \frac{\mathrm{IOP}_{t} - \mathrm{IOP}_{0}}{\mathrm{IOP}_{0}} \times 100 $$
Where IOPt is the ocular tension measured at a given time after dextrose or steroid (prednisolone) administration and IOP0 is the ocular tension before dextrose or steroid (prednisolone) administration (i.e. at time zero).
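For illustration, the calculation can be expressed in a few lines of code; the IOP values used below are hypothetical examples, not measurements from this study.

```python
def percent_change_iop(iop_t: float, iop_0: float) -> float:
    """Percentage change in IOP relative to the baseline reading at time zero."""
    return (iop_t - iop_0) / iop_0 * 100.0

# Hypothetical example: a baseline of 18 mmHg rising to 27 mmHg corresponds to a
# +50 % change, the threshold used later to classify ocular hypertension.
print(percent_change_iop(27.0, 18.0))  # 50.0
```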
Assessing the hypotensive effect of HIE in a chronic glaucoma model
Induction of ocular hypertension in rabbits
After baseline measurements of IOPs, ocular hypertension was induced in rabbits by instilling 1 % prednisolone acetate in each eye, twice daily (12 hourly) for 21 days, while measuring the IOP weekly (between 8.30 and 9.00 AM). Animals with at least a 50 % increase in IOP, characterized by one or more of the following clinical signs: bulging eyeball (buphthalmic eyes), fixed dilated pupils, sluggish pupillary reaction, and limbal injection [22], were selected for this study.
Assessment of ocular hypotensive effect of HIE
Rabbits with ocular hypertension were divided into five groups labelled I-V. Each group was treated orally, twice daily (12 hourly), with 30, 100 or 300 mgkg−1 HIE, 5 mgkg−1 acetazolamide (positive control), or 10 mLkg−1 normal saline (negative control) for 2 weeks, with intraocular pressure measurements being made in each eye every other day over the same period.
Determination of glutathione in aqueous humour
Total glutathione in the aqueous humour of the experimental animals was determined using a commercial kit (Cayman Chemicals, Ann Arbor, MI, USA). The animals were euthanized and the anterior chamber punctured with a 30-gauge needle. The aqueous humour was collected from both eyes and stored in sterile Eppendorf tubes. The aqueous humour was then deproteinated using metaphosphoric acid and 4 M triethanolamine according to the manufacturer's instructions. A 50 μL volume of the deproteinated aqueous humour and the standards (constituted per the manufacturer's directive) were pipetted into a 96-well plate, incubated in the dark on an orbital shaker, and read at 405 nm using a URIT-660 microplate reader (URIT Medical Electronic Co., Ltd, Guangxi, China). Each determination was performed in duplicate.
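As a rough sketch of how absorbance readings from such a plate assay are converted to concentrations, a linear standard curve can be fitted and sample wells interpolated against it. The standard concentrations and absorbance values below are invented for illustration; they are not the kit's actual calibrators or data from this study.

```python
# Illustrative standard-curve interpolation for a colorimetric plate assay read
# at 405 nm; all numbers are placeholders, not data from this study or the kit.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])       # standard concentrations (µM)
std_abs = np.array([0.05, 0.12, 0.20, 0.36, 0.68, 1.30])  # mean absorbance of duplicate wells

slope, intercept = np.polyfit(std_conc, std_abs, 1)       # linear fit: A = slope*C + intercept

def conc_from_absorbance(a405: float) -> float:
    """Back-calculate concentration from a sample well's absorbance."""
    return (a405 - intercept) / slope

print(round(conc_from_absorbance(0.45), 2))  # concentration for a sample reading 0.45 AU
```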
Evaluation of glutamate in vitreous humour
Glutamate concentration in the vitreous humour of experimental animals was determined using a glutamate assay kit. The vitreous humour in each eye was collected in separate sterile Eppendorf tubes after accessing it through a scleral puncture at the lateral canthus. The vitreous bodies were sonicated in 0.2 M perchloric acid containing 0.1 % Na2S2O5 and 0.1 % EDTA. Homogenates were centrifuged at 15,000 g for 5 min at 4 °C and the supernatants were used for the glutamate concentration assay. The samples and standards were prepared according to the manufacturer's instructions, pipetted into a 96-well microplate, and read at 405 nm using the URIT-660 microplate reader (URIT Medical Electronic Co., Ltd, Guangxi, China). Each determination was performed in triplicate.
Histopathological assessment
The enucleated eyes of the animals were fixed in 10 % phosphate-buffered paraformaldehyde, and embedded in paraffin for histopathological assessment. Sections were made and stained with haematoxylin and eosin and alcian blue [23]. Sections were fixed on glass slides for microscopic examination by a specialist pathologist at the Pathology Department of the Komfo Anokye Teaching Hospital, Kumasi, Ghana.
Results were analysed using one-way analysis of variance followed by Dunnett's multiple comparisons test using GraphPad Prism (version 5.03; GraphPad, La Jolla, CA). Values were expressed as the mean ± standard error of the mean and p ≤ 0.05 was considered statistically significant.
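The same ANOVA-then-Dunnett workflow can be reproduced with open-source tools. The sketch below uses SciPy (scipy.stats.dunnett requires SciPy 1.11 or later) on made-up group values; it illustrates the reported analysis in general, not the authors' actual GraphPad Prism session or data.

```python
# Sketch of one-way ANOVA followed by Dunnett's comparisons against the vehicle
# control; the arrays are placeholder values, not data from this study.
import numpy as np
from scipy import stats  # scipy >= 1.11 for stats.dunnett

saline = np.array([48.2, 51.0, 47.5, 50.3, 49.1])         # normal saline (control) group
hie_100 = np.array([30.1, 28.4, 32.0, 29.5, 31.2])        # 100 mg/kg HIE group
acetazolamide = np.array([27.9, 26.5, 29.3, 28.0, 27.1])  # reference drug group

f_stat, p_anova = stats.f_oneway(saline, hie_100, acetazolamide)
dunnett_res = stats.dunnett(hie_100, acetazolamide, control=saline)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
print("Dunnett p-values vs control:", dunnett_res.pvalue)
```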
HIE pretreatment significantly (p ≤ 0.001) prevented the expected rise in IOP in dextrose-induced ocular hypertension compared to the normal saline-pretreated rabbits (Fig. 1); the effect was comparable to acetazolamide pretreatment (p ≤ 0.001), as there was no significant difference (p > 0.05) between the IOP-lowering effects of acetazolamide and HIE. Similarly, oral treatment of steroid-induced ocular hypertension with HIE produced a significant (p ≤ 0.05–0.001) reduction in the IOPs of the right and left eyes of the rabbits vs. normal saline-treated animals (Fig. 2). The effects were comparable (p ≤ 0.001) to acetazolamide treatment.
Time-course curves and areas under the curve for the acute glaucoma study. Time-course curves (a & c) and areas under the curve (b & d) for the effects of pretreatment with 30, 100, and 300 mgkg−1 of HIE, 5 mgkg−1 acetazolamide (ACET), and 10 mLkg−1 normal saline (NS) on dextrose-induced ocular hypertension of the right eye (a, b) and left eye (c, d) in New Zealand White Rabbits. Values plotted represent mean ± SEM (n = 5). ***p ≤ 0.001, ANOVA followed by Dunnett's post-hoc test
Time-course curves and areas under the curve for the chronic glaucoma study. Time-course curves (a & c) and areas under the curve (b & d) for the effects of treatment with 30, 100, and 300 mgkg−1 of HIE, 5 mgkg−1 acetazolamide (ACET), and 10 mLkg−1 normal saline (NS) on steroid-induced ocular hypertension of the right eye (a, b) and left eye (c, d) in New Zealand White Rabbits. Values plotted represent mean ± SEM (n = 5). ***p ≤ 0.001, **p ≤ 0.01, *p ≤ 0.05. ANOVA followed by Dunnett's post-hoc test
Level of glutathione in aqueous humour
In the chronic glaucoma model, the HIE and acetazolamide treatments significantly (p ≤ 0.01–0.001) reduced oxidative stress by preserving endogenous glutathione levels in the aqueous humour (Table 1).
Table 1 Total glutathione (GSH) in the aqueous humour and glutamate levels in the vitreous of controls and HIE-treated chronic ocular hypertensive New Zealand White Rabbits
Concentration of glutamate in Vitreous humour
Treatment with HIE and acetazolamide caused a significant (p ≤ 0.01–0.001) reduction in the level of the excitotoxin glutamate in the vitreous humour of the treated ocular hypertensive animals (Table 1).
The histopathological assessment of the structures of the anterior chamber indicated relatively fewer morphological changes in the ciliary bodies of all rabbits treated with HIE or acetazolamide. However, there were histopathological signs of tissue alteration characterized by mononuclear infiltration into the ciliary body (Fig. 3).
Photomicrographs of the anterior chamber of ocular hypertensive rabbits per the various treatments. Photomicrograph of the anterior chamber of rabbits (H and E × 100): (a) glaucomatous rabbit with 10 mLkg−1 normal saline treatment (control) showing intense neutrophilic infiltration in the ciliary body, (b) glaucomatous rabbit with 5 mgkg−1 acetazolamide treatment; a normal marginal zone of the ciliary process with normal architecture is shown, (c) glaucomatous rabbit with 30 mgkg−1 HIE treatment indicating moderate neutrophilic infiltration in the ciliary body, (d) glaucomatous rabbit with 100 mgkg−1 HIE treatment indicating mild neutrophilic infiltration, and (e) glaucomatous rabbit with 300 mgkg−1 HIE treatment showing moderate oedema of the ciliary body
Glaucoma is described as an assemblage of ocular disorders with multi-factorial causes united by a clinically characteristic optic neuropathy with or without a rise in intraocular pressure (IOP). As it is not a single disease entity, it is sometimes referred to as "the glaucomas" [3] for which IOP reduction remains the only evidence-based treatment approach. Experimental glaucoma is a model that mimics the human condition, and is very useful in studies aimed at understanding the pathophysiology of the disease and in pre-clinical studies of potential anti-glaucoma agents [24].
Acute glaucoma was induced in rabbits using 5 % dextrose administered intravenously; this method has obvious advantages over the water-loading model [25]. Intravenously administered dextrose lowers serum osmolarity after the sugar has been cleared from circulation. This reduced serum osmolarity leads to the movement of water into the eye, thereby increasing IOP [24, 25]. Pre-treatment with the extract prevented the expected rise in IOP (p ≤ 0.001) compared to the normal saline pretreated group, indicating that the extract could be acting by reducing aqueous humour production or increasing outflow facility [26]. A previous study that assessed the hypotensive effect of H. indicum extract on systemic hypertension indicated that it exerts its hypotensive effect via muscarinic receptor stimulation [27], implying that its ocular hypotensive effect could be due to enhanced outflow facility rather than reduced aqueous humour production. Again, an important relation has been found between systemic blood pressure and the development of glaucoma. That is, an increased blood pressure as in the case of hypertension impairs autoregulation of blood flow, which consequently affects blood circulation to the optic nerve, inducing glaucoma via ischemic tendencies [28]. On the other hand, hypotension has also been named as a risk factor for glaucoma; therefore, further studies would be needed to ascertain the clinical application of HIE in glaucoma management, as it has been reported in some studies to reduce blood pressure by 48.25 ± 3.56 % [27, 29].
The anti-glaucoma potential of HIE was further substantiated by testing its ocular hypotensive effect on a more sustained (chronic) model of ocular hypertension. The corticosteroid–induced model bears semblance of POAG, and is characterized by aqueous outflow obstruction, optic nerve cupping and visual field defects [22, 30]. HIE treatment reduced (p ≤ 0.05–0.001) the IOP induced by steroid pretreatment in the rabbits. A recent study showed that POAG, as modelled by corticosteroid-induced ocular hypertension in rabbits, is a multi-tissue disease entity involving the trabecular meshwork, the optic nerve head, the lateral geniculate nuclei, and the visual cortex. Stressors such as repeated steroid intake triggers oxidative stress resulting in compromised aqueous humour antioxidant system and apoptotic trabecular meshwork cell loss. This apoptotic cell loss is informed by severe mitochondrial damage altering tissue function and integrity [31]. The extract could therefore be exerting its ocular hypotensive effect via improving aqueous outflow, protection of the structural integrity of the trabecular meshwork or both [31]. The reference drug, acetazolamide, on the other hand, is a specific carbonic anhydrase inhibitor that lowers the intraocular pressure of mammal's eyes by partially inhibiting aqueous humour formation [32]. In addition to the inhibition of aqueous production, it has been reported to also decrease oxidative damage of the trabecular meshwork and more so in the presence of active mitochondria [33]. Mitochondria are predicted as the key intracellular target for most drugs with antioxidant properties [34]. This is suggestive of the possibility of multiple mechanistic pathways in exerting their therapeutic effect. Acetazolamide is one of the few medications that exist in both oral and topical forms that are effective in reducing IOP and improving retinal blood flow [35]. Its oral formulation affords ophthalmic caregivers the options of achieving greater bioavailability (oral bioavailability of more than 90 %) for aggressive forms of glaucoma when the short precorneal residence time poses the challenge of poor bioavailability upon topical application [36]. However, oral doses of acetazolamide are associated with a myriad of systemic side effects due to the wide distribution of the carbonic anhydrase enzyme, which has many functions including transporting CO2 from the tissues to the lung, excreting and reabsorbing electrolytes and H+ ions in the kidney, secreting H+ ions into the gastric mucosa, and maintaining the major buffer system of the human body [37, 38]. Preliminary data from our laboratory indicates that topical application of HIE into the conjunctival cul-de-sac is safe, but medium term oral (subchronic) usage of therapeutic doses produced subtle morphometric changes in the liver, kidney and the spleen upon histological assessment. Further studies are still ongoing in this regard.
It is clear that oxidative stress, which is an important etiologic factor in the pathogenesis of glaucoma [39], is mainly driven by free radicals in living systems when endogenous antioxidant defences are deficient [40]. It is proven in humans that both elevated IOP and visual field loss are notably related to the amount of oxidative DNA damage affecting trabecular meshwork (TM) cells, thereby affecting outflow facility [41]. Glutathione has been found in significant proportions in the aqueous humour and plays an essential role in defending the system against oxidative stress-provoked diseases [42]. The antioxidant status of a biological sample is therefore a useful marker of oxidative stress [43]. The extract treatment preserved endogenous aqueous humour glutathione levels (p ≤ 0.01–0.001), which suggests its usefulness not only in reducing IOP, but also in providing protection against oxidative damage critical in advancing the progression of glaucomatous neurodegeneration. This presupposes that HIE targets the mitochondria of trabecular meshwork cells in exerting its effect.
Excitotoxicity elicited by the amino acid glutamate is gaining attention as a mediator of neuronal death in many disorders. An understanding of excitotoxic injury provides clues in the search for answers to such fundamental questions as the continual loss of retinal ganglion cells despite achieving IOP control [44]. Amidst the ongoing controversy over its pathogenic role in glaucoma [45, 46], a reduction in glutamate concentration (p ≤ 0.01–0.001) was observed in the extract-treated rabbits. Studies have shown that the vitreous is easily obtainable and remains an important biological sample in postmortem analysis, in that it is less prone to putrefaction and contamination relative to other body fluids, as postmortem biochemical changes occur more slowly in the eye [47]. The hypothesized association between glutamate excitotoxicity and neurological disorders such as glaucoma [48, 49] was thus well addressed by the HIE treatment.
Histopathological changes were marked in the anterior chamber of normal saline treated animals but relatively minimal in the extract-treated and the acetazolamide-treated rabbits. Glaucoma is exceptional amongst ocular disorders in that its principal pathophysiology involves structures in both the anterior and posterior segments of the eye. This affords the option of tracking pathological changes in either segment or both [23].
The extract owes its net anti-glaucoma effect to the synergistic action of its phytochemicals acting concomitantly on the diverse etiologic factors. Other researchers have established that some alkaloids possess a hypotensive effect, particularly via muscarinic action [50]. Saponins have also been demonstrated to have some hypotensive activity [51]. Flavonoids, in general, have been proven to possess antioxidant activity relevant for the free radical scavenging that is necessary to preserve the eye's endogenous antioxidant system [52]. This cocktail of bioactive compounds detected in HIE affirms the mechanistic multiplicity of its therapeutic effect in experimental glaucoma management.
The aqueous whole plant extract of H. indicum exhibits ocular hypotensive, antioxidant, and potentially neuroprotective effects; hence, with further studies, it could prove a useful anti-glaucoma drug.
Availability of supporting data
Supporting data are all available in this study.
Thompson Jr EH, Kaye LW. A man's guide to healthy aging: stay smart, strong, and active. Baltimore: Johns Hopkins University Press; 2013.
Kingman S. Glaucoma is second leading cause of blindness globally. Bull World Health Organ. 2004;82:887–8.
Casson RJ, Chidlow G, Wood JP, Crowston JG, Goldberg I. Definition of glaucoma: clinical and experimental concepts. Clin Experiment Ophthalmol. 2012;40:341–9.
Kaushik S, Pandav SS, Ram J. Neuroprotection in glaucoma. J Postgrad Med. 2003;49:90–5.
Glaucoma Research Foundation. Glaucoma Facts and Stats. 2013. http://www.glaucoma.org/glaucoma/glaucoma-facts-and-stats.php. Accessed on 24 Sept 2014.
Acton AQ. Open-angle glaucoma: new insights for the healthcare professional. 2013th ed. Atlanta, Georgia: ScholarlyEditions; 2013.
Ntim-Amponsah CT, Amoaku WM, Ofosu-Amaah S, Ewusi RK, Idirisuriya-Khair R, Nyatepe-Coo E, et al. Prevalence of glaucoma in an African population. Eye (Lond). 2004;18:491–7.
Budenz DL, Barton K, Whiteside-de Vos J, Schiffman J, Bandi J, Nolan W, et al. Prevalence of glaucoma in an urban West African population: the Tema Eye Survey. JAMA Ophthalmol. 2013;131:651–8.
Quigley HA, Broman AT. The number of people with glaucoma worldwide in 2010 and 2020. Br J Ophthalmol. 2006;90:262–7.
Friedman DS, Jampel HD, Muñoz B, West SK. The Prevalence of open-angle glaucoma among blacks and whites 73 years and older: the Salisbury Eye Evaluation Glaucoma Study. Arch Ophthalmol. 2006;124:1625–30.
Friedman DS, Wolfs RC, O'Colmain BJ, Klein BE, Taylor HR, West S, et al. Prevalence of open-angle glaucoma among adults in the United States. Arch Ophthalmol. 2004;122:532–8.
Rouland JF, Berdeaux G, Lafuma A. The economic burden of glaucoma and ocular hypertension: implications for patient management: a review. Drugs Aging. 2005;22:315–21.
Adio AO, Onua AA. Economic burden of glaucoma in Rivers State, Nigeria. Clin Ophthalmol. 2012;6:2023–31.
Schwartz K, Budenz D. Current management of glaucoma. Curr Opin Ophthalmol. 2004;15(2):119–26.
Vijaya L, Manish P, Ronnie G, Shantha B. Management of complications in glaucoma surgery. Indian J Ophthalmol. 2011;59 Suppl 1:131–40.
Banaszek A. Company profits from side effects of glaucoma treatment. CMAJ. 2011;183(14):E1058.
Togola A, Diallo D, Dembélé S, Barsett H, Paulsen BS. Ethnopharmacological survey of different uses of seven medicinal plants from Mali, (West Africa) in the regions Doila, Kolokani and Siby. J Ethnobiol Ethnomed. 2005;1:7.
Science and technology policy research (STEPRI), Council for scientific and industrial research (CSIR). Ghana Herbal Pharmacopoiea. Accra, Ghana: Advent Press; 2007.
Harborne JB. Phytochemical methods: a guide to modern techniques of plant analysis. 3rd ed. London, UK: Chapman and Hall; 1998.
Kujur RS, Singh V, Ram M, Yadava HN, Singh KK, Kumari S, et al. Antidiabetic activity and phytochemical screening of crude extract of Stevia rebaudiana in alloxan-induced diabetic rats. Pharmacognosy Res. 2010;2:258–63.
Best M, Pola R, Galin MA, Blumenthal M. Tonometric calibration for the rabbit eye. Arch Ophthalmol. 1970;84:200–5.
Kersey JP, Broadway DC. Corticosteroid-induced glaucoma: a review of the literature. Eye (Lond). 2006;20:407–16.
Ticho U, Lahav M, Berkowitz S, Yoffe P. Ocular changes in rabbits with corticosteroid induced ocular hypertension. Br J Ophthalmol. 1979;63:646–50.
Shah GB, Sharma S, Mehta AA, Goyal RK. Oculohypotensive effect of angiotensin-converting enzyme inhibitors in acute and chronic models of glaucoma. J Cardiovasc Pharmacol. 2000;36:169–75.
Bonomi L, Tomazzoli L, Jaria D. An improved model of an experimentally induced ocular hypertension in the rabbit. Invest Ophthalmol. 1976;15:781–4.
Panchal S, Mehta A, Santani D. Occulohypotensive effect of Torasamide in experimental glaucoma. Int J Pharmacol. 2007;5:4.
Koffuor GA, Boye A, Ameyaw EO, Amoateng P, Abaitey AK. Hypotensive effect of an aqueous extract of Heliotropium indicum Linn (Boraginaceae). Int Res J Pharm Pharmacol. 2012;2:103–9.
Gangwani RA, Lee JW, Mo HY, Sum R, Kwong AS, Wang JH, et al. The correlation of retinal nerve fiber layer thickness with blood pressure in a chinese hypertensive population. Medicine (Baltimore). 2015;94:e947.
Memarzadeh F, Ying-Lai M, Chung J, Azen SP, Varma R, Los Angeles Latino Eye Study Group. Blood pressure, perfusion pressure, and open-angle glaucoma: the Los Angeles Latino Eye Study. Invest Ophthalmol Vis Sci. 2010;51:2872–7.
Sapir-Pichhadze R, Blumenthal EZ. Steroid induced glaucoma. Harefuah. 2003;142:137–40. 157.
Saccà SC, Pulliero A, Izzotti A. The dysfunction of the trabecular meshwork during glaucoma course. J Cell Physiol. 2015;230:510–25.
de Carvalho CA, Lawrence C, Stone HH. Acetazolamide (Diamox) therapy in chronic glaucoma: a 3-year follow-up study. Arch Ophthalmol. 1958;59:840–9.
Saccà SC, La Maestra S, Micale RT, Larghero P, Travaini G, Baluce B, et al. Ability of Dorzolamide Hydrochloride and Timolol Maleate to target mitochondria in glaucoma therapy. Arch Ophthalmol. 2011;129:48–55.
Kniep EM, Roehlecke C, Ozkucur N, Steinberg A, Reber F, Knels L, et al. Inhibition of apoptosis and reduction of intracellular pH decrease in retinal neural cell cultures by a blocker of carbonic anhydrase. Invest Ophthalmol Vis Sci. 2006;47:1185–92.
Detry-Morel M. Side effects of glaucoma medications. Bull Soc Belge Ophtalmol. 2006;299:27–40.
Patsalos PN. The Epilepsy prescriber's guide to antiepileptic drugs. 2nd ed. Cambridge: University Press; 2010.
Pfeiffer N. Dorzolamide: development and clinical application of a topical carbonic anhydrase inhibitor. Surv Ophthamol. 1997;42:137–51.
Goodfield M, Davis J, Jeffcoate W. Acetazolamide and symptomatic metabolic acidosis in mild renal failure. Br Med J (Clin Res Ed). 1982;284:422.
Ferreira SM, Lerner SF, Brunzini R, Evelson PA, Llesuy SF. Oxidative stress markers in aqueous humour of glaucoma patients. Am J Ophthalmol. 2004;137:62–9.
Grüb M, Mielke J. Aqueous humour dynamics. Ophthalmologe. 2004;101:357–65.
Saccà SC, Izzotti A, Rossi P, Traverso C. Glaucomatous outflow pathway and oxidative stress. Exp Eye Res. 2007;84:389–99.
Richer SP, Rose RC. Water soluble antioxidants in mammalian aqueous humor: interaction with UV B and hydrogen peroxide. Vision Res. 1998;38:2881–8.
Oduntan OA, Mashige KP. A review of the role of oxidative stress in the pathogenesis of eye diseases. S Afr Optom. 2011;70:191–9.
Mark LP, Prost RW, Ulmer JL, Smith MM, Daniels DL, Strottmann JM, et al. Pictorial review of glutamate excitotoxicity: fundamental concepts for neuroimaging. AJNR Am J Neuroradiol. 2001;22:1813–24.
Lotery AJ. Glutamate excitotoxicity in glaucoma: truth or fiction? Eye (Lond). 2005;19:369–70.
Salt TE, Cordeiro MF. Glutamate excitotoxicity in glaucoma: throwing the baby out with the bathwater? Eye (Lond). 2006;20:730–1. author reply 731–2.
Paranitharan P, Pollanen MS. Utility of postmortem vitreous biochemistry. Sri Lanka J Forensic Med Sci Law. 2011;2(1):23–5.
Azuma N, Kawamura M, Kohsaka S. Morphological and immunohistochemical studies on degenerative changes of the retina and the optic nerve in neonatal rats injected with monosodium-L-glutamate. Nihon Ganka Gakkai Zasshi. 1989;93:72–9.
Siliprandi R, Canella R, Carmignoto G, Schiavo N, Zanellato A, Zanoni R, et al. N-methyl-D-aspartate-induced neurotoxicity in the adult rat retina. Vis Neurosci. 1992;8:567–73.
Wynter-Adams DM, Simon OR, Gossell-Williams MD, West ME. Isolation of a muscarinic alkaloid with ocular hypotensive action from Trophis racemosa. Phytother Res. 1999;13:670–4.
Hiwatashi K, Shirakawa H, Hori K, Yoshiki Y, Suzuki N, Hokari M, et al. Reduction of blood pressure by soybean saponins, renin inhibitors from soybean, in spontaneously hypertensive rats. Biosci Biotechnol Biochem. 2010;74:2310–2.
Pragada RR, Rao Ethadi S, Yasodhara B, Praneeth Dasari VS, Mallikarjuna RT. In-vitro antioxidant and antibacterial activities of different fractions of Heliotropium indicum L. J Pharm Res. 2012;5:1051.
The authors are grateful to the management and staff of Life Science Diagnostic Centre, Cape Coast, for permitting us to use their facility for various aspects of this study.
Source of funding
This study was partly funded by University of Cape Coast.
This article results from research towards a PhD (Optometry) degree in the Discipline of Optometry at the University of KwaZulu Natal under the supervision of Dr. George A. Koffuor and co-supervision of Prof. Paul Ramkissoon.
Discipline of Optometry, School of Health Sciences, College of Health Sciences, University of KwaZulu- Natal, Durban, South Africa
Samuel Kyei, George Asumeng Koffuor & Paul Ramkissoon
Department of Optometry, School of Physical Sciences, University of Cape-Coast, Cape-Coast, Ghana
Department of Pharmacology, Faculty of Pharmacy and Pharmaceutical Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
George Asumeng Koffuor
Department of Pathology, Komfo Anokye Teaching Hospital, Kumasi, Ghana
Osei Owusu-Afriyie
Correspondence to Samuel Kyei.
Author SK conceived the idea, designed the study, wrote the protocol, managed the literature searches, collected data, and wrote the first draft of the manuscript. Authors GAK and PR were involved in the conception and design of the study, and managed the analyses of the study. Author OAO performed the histological evaluations, interpretation of data and critically revised the content. All authors read and approved the final manuscript.
Kyei, S., Koffuor, G.A., Ramkissoon, P. et al. Anti-glaucoma potential of Heliotropium indicum Linn in experimentally-induced glaucoma. Eye and Vis 2, 16 (2015). https://doi.org/10.1186/s40662-015-0027-1
Glutathione assay
Steroid-induced glaucoma
New Zealand white rabbit
Anti-glaucoma drug
Ultrafast doublon dynamics in photoexcited $1T$-${\mathrm{TaS}}_{2}$
Ligges, M. Faculty of Physics, University of Duisburg-Essen, Germany
Avigo, I. Faculty of Physics, University of Duisburg-Essen, Germany
Golež, Denis Department of Physics, University of Fribourg, Switzerland
Strand, Hugo U. R. Department of Physics, University of Fribourg, Switzerland
Beyazit, Y. Faculty of Physics, University of Duisburg-Essen, Germany
Hanff, K. Institute of Experimental and Applied Physics, University of Kiel, Germany
Diekmann, F. Institute of Experimental and Applied Physics, University of Kiel, Germany
Stojchevska, L. Faculty of Physics, University of Duisburg-Essen, Germany
Kalläne, M. Institute of Experimental and Applied Physics, University of Kiel, Germany
Zhou, P. Faculty of Physics, University of Duisburg-Essen, Germany
Rossnagel, K. Institute of Experimental and Applied Physics, University of Kiel, Germany
Eckstein, Martin Max Planck Research Department for Structural Dynamics, University of Hamburg, Germany
Werner, Philipp Department of Physics, University of Fribourg, Switzerland
Bovensiepen, U. Faculty of Physics, University of Duisburg-Essen, Germany
Physical Review Letters. - 2018, vol. 120, no. 16, p. 166401
Strongly correlated materials exhibit intriguing properties caused by intertwined microscopic interactions that are hard to disentangle in equilibrium. Employing nonequilibrium time-resolved photoemission spectroscopy on the quasi-two-dimensional transition-metal dichalcogenide 1T-TaS2, we identify a spectroscopic signature of doubly occupied sites (doublons) that reflects fundamental Mott physics. Doublon-hole recombination is estimated to occur on timescales of electronic hopping ℏ/J≈14 fs. Despite strong electron-phonon coupling, the dynamics can be explained by purely electronic effects captured by the single-band Hubbard model under the assumption of weak hole doping, in agreement with our static sample characterization. This sensitive interplay of static doping and vicinity to the metal-insulator transition suggests a way to modify doublon relaxation on the few-femtosecond timescale.
Département de Physique
DOI 10.1103/PhysRevLett.120.166401
\begin{document}
\title{Data-Driven Reduction for Multiscale Stochastic Dynamical Systems\thanks{
C.J.D. was supported by the Department of Energy Computational Science Graduate Fellowship (CSGF), grant number DE-FG02-97ER25308, and the National Science Foundation Graduate Research Fellowship, Grant No. DGE 1148900.
R.T. was supported by the European Union's Seventh Framework Programme (FP7) under Marie Curie Grant 630657 and by the Horev Fellowship.
R.T and R.R.C. were supported by the National Science Foundation, Award No. 1309858.
I.G.K. was supported by the NSF and the AFOSR.}} \newcommand{\slugmaster}{ \slugger{siads}{xxxx}{xx}{x}{x--x}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\footnotetext[2]{Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey, 08544, USA} \footnotetext[3]{Department of Electrical Engineering, Technion – Israel Institute of Technology, Haifa, Israel, 3200003} \footnotetext[4]{Department of Mathematics, Yale University, New Haven, Connecticut, 06520, USA} \footnotetext[5]{Program in Applied and Computational Mathematics, Princeton University, Princeton, New Jersey, 08544, USA}
\renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract}
Multiple time scale stochastic dynamical systems are ubiquitous in science and engineering, and the reduction of such systems and their models to only their slow components is often essential for scientific computation and further analysis.
Rather than being available in the form of an explicit analytical model, often such systems can only be observed as a data set which exhibits dynamics on several time scales.
We will focus on applying and adapting data mining and manifold learning techniques to detect the slow components in such multiscale data.
Traditional data mining methods are based on metrics (and thus, geometries) which are not informed of the multiscale nature of the underlying system dynamics; such methods cannot successfully recover the slow variables.
Here, we present an approach which utilizes both the local geometry and the {\em local dynamics} within the data set through a metric which is both insensitive to the fast variables and more general than simple statistical averaging.
Our analysis of the approach provides conditions for successfully recovering the underlying slow variables, as well as an empirical protocol guiding the selection of the method parameters.
\end{abstract}
\begin{keywords} multiscale dynamical systems, Mahalanobis distance, diffusion maps \end{keywords}
\begin{AMS} 37M10, 62-07 \end{AMS}
\pagestyle{myheadings} \thispagestyle{plain} \markboth{C.~J. DSILVA {\it ET AL}}{DATA-DRIVEN REDUCTION OF SDES}
\section{Introduction}
Dynamical systems of engineering interest often contain several disparate time scales.
When the evolving variables are strongly coupled, resolving the dynamics at all relevant scales can be computationally challenging and pose problems for analysis.
Often, the goal is to write a reduced system of equations which accurately captures the dynamics on the slow time scales.
These reduced models can greatly accelerate simulation efforts, and are more appropriate for integration into larger modeling frameworks.
Following the methods of Mori \cite{mori1965transport}, Zwanzig \cite{zwanzig1961memory}, and others \cite{brey1981nonlinear, chorin2000optimal, hijon2010mori}, one can reduce the number of variables needed to describe a system of differential equations.
However, in general, this reduction introduces memory terms.
It transforms a system of differential equations into a system of (lower-dimensional) {\em integro-}differential equations, so that the reduction of the number of variables is counterpoised by the increased complexity of the reduced model.
Here, we will study the special case of evolution equations which contain an inherent time scale separation; in this case, it is possible, in principle, to obtain a reduced system of differential equations in only the slow variables {\em without memory terms}.
Such an analysis crucially hinges on knowing in which variables (or, more generally, functions of variables) one can write such a reduced system of slow evolution equations.
Moving averages and subsampling have often been used in simple cases as appropriate functions of variables in which to formulate slow lower-dimensional models \cite{pavliotis2007parameter}.
However, if the underlying dynamics are sufficiently nonlinear, such statistics may fail to capture the relevant structures and time scales within the data (see Figure~\ref{fig:schematic_fastslow} for a schematic illustration).
For well-studied systems, one often has some {\em a priori} knowledge of the appropriate observables (such as phase field variables) with which to formulate the reduced dynamics \cite{chen2002phase, wheeler1992phase}.
However, such observables may not be immediately obvious upon inspection for new complex systems, and so we require an automated approach to construct such slow variables.
Given an explicit system of ordinary differential equations, one can make numerical approximations, such as the quasi-steady state approximation \cite{segel1989quasi} or the partial equilibrium approximation \cite{gallagher1986combined}, to reduce the system dimensionality without introducing memory terms.
There has been some recent analytical work on extending and generalizing such ideas to more complex systems of equations \cite{ait2008closed, calderon2007fitting, contou2011model, dong2007simplification, givon2004extracting, pavliotis2007parameter, sotiropoulos2009model}.
However, in many instances, closed form, analytical models are not given explicitly, but can only be inferred from simulation and/or experimental data.
We therefore turn to data-driven techniques to analyze such systems and uncover the relevant dynamical modes.
In particular, we will use a manifold-learning based approach, as such methods can accommodate nonlinear structures in high-dimensional data.
The core of most manifold learning methods is having a notion of similarity between data points, usually through a distance metric \cite{Belkin2003, Coifman2006, coifman2005geometric, roweis2000nonlinear, tenenbaum2000global}.
The distances are then integrated into a global parametrization of the data, typically through the solution of an eigenproblem.
In this paper, we will analyze multiple time scale stochastic dynamical systems using data-driven methods.
Standard ``off-the-shelf'' manifold learning techniques which utilize the Euclidean distance are not appropriate for analyzing data from such multiscale systems, since this metric does not account for the disparate time scales.
Research efforts have addressed the construction of more informative distance metrics, which are less sensitive to noise and can better recover the true underlying structure in the data by suppressing unimportant sources of variability \cite{berry2013time, gepshtein2013image, rubner2000earth, simonyan2013fisher, xing2002distance}.
The Mahalanobis distance is one such metric.
It was shown that the Mahalanobis distance can remove the effect of {\em observing} the underlying system variables through a complex, nonlinear function \cite{dsilva2013nonlinear, singer2008non, talmon2013empirical}.
Here, we will show the analogy between removing the effects of such nonlinear observation functions (in the context of data analysis), and reducing a dynamical system to remove the effects of the fast variables.
Our approach will build a parametrization of the data which is consistent with the underlying slow variables.
Because our approach is data-driven, we require no explicit description of the model, and can extract the underlying slow variables from either simulation or experimental data.
Furthermore, the approach implicitly identifies the slow variables within the data and does not require any {\em a priori} knowledge of the fast or slow variability sources.
Even when the underlying dynamical system is complex with nonlinear coupling between the fast and slow variables, we will show that our approach has the potential to isolate the underlying slow modes.
We will present detailed analysis for our method, and provide conditions under which it will successfully recover the slow variables.
Furthermore, based on this analysis, we will present data-driven protocols to tune the parameters of the method appropriately.
Our presentation and discussion will address two-time-scale stochastic systems; however, we claim that our framework and analysis readily extends to systems with multiple time scale separations.
\begin{figure}
\caption{(a) Schematic of a two-dimensional, two-time scale ($\tau_1 = 50$ and $\tau_2=2$) ordinary differential equation system where the value of $x_2$ becomes slaved to the value of $x_1$.
In such an example, traditional data mining algorithms are sufficient to recover the slow variable. (b) Schematic of a two-scale two-dimensional stochastic dynamical system where {\em the statistics} of $x_2$ become slaved to $x_1$.
In such an example, traditional data mining algorithms will not recover the slow variable if the variance in the fast variable is too large. }
\label{fig:schematic_fastslow}
\end{figure}
\section{Multiscale Stochastic Systems} \label{subsec:multiscale_SDE}
Consider the following two-time-scale system of stochastic differential equations (SDEs), \begin{equation} \label{eq:general_SDE} \begin{aligned} dx_i(t) &= a_i(\mathbf{x}(t)) dt + dW_i(t), & \: 1 \le i \le m \\ dx_i(t) &= \frac{a_i(\mathbf{x}(t))}{\epsilon} dt + \frac{1}{\sqrt{\epsilon}} dW_i(t) , & \: m+1 \le i \le n \end{aligned} \end{equation} where $W_i(t)$ are independent standard Brownian motions, $\mathbf{x}(t) = \begin{bmatrix} x_1(t) & \cdots & x_n(t) \end{bmatrix}^T \in \mathbb{R}^n$, and $\epsilon \ll 1$.
In the simple case of a linear drift function, i.e., when $a_i(\mathbf{x}(t)) = \mu _i x_i$ with $\mu_i < 0$, the probability density function of $x_i$ approaches a Gaussian with (finite) variance $-1/(2\mu_i)$.
The time constant of the approach of the variance to equilibrium is $-1/\mu_i$ for $i=1,\ldots,m$ and $-\epsilon/\mu_i$ for $i=m+1,\ldots,n$ \cite{lelievre2013optimal}.
Thus, the last $n-m$ variables of \eqref{eq:general_SDE} rapidly approach a local equilibrium measure and exhibit fast dynamics, while the first $m$ variables exhibit slow dynamics.
A short burst of simulation will yield a cloud of points which is broadly distributed in the fast directions but narrowly distributed in the slow ones.
With the appropriate conditions on $a_i(\mathbf{x})$, the same can be said for more general drift functions, where $\mu_i$ are the eigenvalues of the Jacobian of $\mathbf{a}(\mathbf{x}) = \begin{bmatrix} a_1(\mathbf{x}) & \cdots & a_n(\mathbf{x}) \end{bmatrix}^T$ \cite{villani2009hypocoercivity}.
Therefore, \eqref{eq:general_SDE} defines an $n$-dimensional stochastic system with $m$ slow variables and $n-m$ fast variables, and $\epsilon$ defines the time scale separation.
The ratio of the powers of $\epsilon$ in the drift and diffusion terms in \eqref{eq:general_SDE} is essential, as we require the square of the diffusivity to be of the same order as the drift as $\epsilon \rightarrow 0$ \cite{berglund2003geometric}.
If the diffusivity is larger, then, as $\epsilon \rightarrow 0$, the equilibrium measure will be unbounded.
Conversely, if the diffusivity is smaller, the equilibrium measure will go to $0$ as $\epsilon \rightarrow 0$.
Assuming the sample average of $a_i(\mathbf{x})$ converges to a distribution which is only a function of the slow variables, then by the averaging principle \cite{freidlin2012random}, we can write a reduced SDE in {\em only} the slow variables $x_1, \dots, x_m$.
The aim of our work is to show how we can detect such slow variables {\em automatically} from data, in order to help inform modeling efforts and aid in the writing of such reduced stochastic models.
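For concreteness, the discussion below can be traced through a simple numerical sketch. The following Euler--Maruyama integrator (a minimal sketch assuming a Python/NumPy environment; the function name, the drift shown at the end, and all numerical values are illustrative choices rather than part of the model above) simulates \eqref{eq:general_SDE}:
\begin{verbatim}
import numpy as np

def simulate_fast_slow(a, e, x0, dt, n_steps, rng=None):
    """Euler-Maruyama for dx_i = (a_i(x)/e_i) dt + dW_i/sqrt(e_i),
    with e_i = 1 for the slow variables and e_i = epsilon for the fast ones."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty((n_steps + 1, len(x0)))
    x[0] = x0
    for k in range(n_steps):
        x[k + 1] = (x[k] + (a(x[k]) / e) * dt
                    + np.sqrt(dt / e) * rng.standard_normal(len(x0)))
    return x

# hypothetical example: one slow and one fast linear (Ornstein-Uhlenbeck) variable
a = lambda x: np.array([-x[0], -x[1]])
e = np.array([1.0, 1e-3])          # e_1 = 1 (slow), e_2 = epsilon (fast)
path = simulate_fast_slow(a, e, x0=np.zeros(2), dt=1e-4, n_steps=3000)
\end{verbatim}
Short bursts of such a simulation are exactly the raw material used for the local covariance estimates in Section~\ref{sec:analysis}.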
In general, we are not given the variables $\mathbf{x}(t)$ from the original SDE system, but instead, we are given some {\em observations}
in the form $\mathbf{y}(t) = \mathbf{f} (\mathbf{x}(t))$.
We assume that $\mathbf{f}: \mathbb{R}^n \mapsto \mathbb{R}^d$, $n \le d$, is a deterministic (possibly nonlinear) function whose image is an $n$-dimensional manifold $\mathcal{M}$ in $\mathbb{R}^d$.
For our analysis, we require $\mathbf{g} = \mathbf{f} ^{-1}$ to be well-defined on $\mathcal{M}$, and both $\mathbf{f}$ and $\mathbf{g}$ to be continuously differentiable to fourth order.
Given data $\mathbf{y}(t_1),\ldots,\mathbf{y}(t_N)$ on $\mathcal{M}$ we would like to recover a parametrization of the data that is one-to-one with the slow variables $x_1, \dots, x_m$.
\section{Local Invariant Metrics}
In order to recover the slow variables from data, we will utilize a local metric that collapses the fast directions.
Typically, such a metric averages out the fast variables.
However, simple averages are inadequate to describe data which is observed through a complicated nonlinear function.
Instead, we propose to use the Mahalanobis distance, which measures distances normalized by the respective variances in each local principal direction.
Using this metric, we still retain information about both the fast and slow directions and can more clearly observe complex dynamic behavior within the data set.
If two points $\mathbf{x}(t_1)$ and $\mathbf{x}(t_2)$ are drawn from an $n$-dimensional Gaussian distribution with covariance $\mathbf{C}_x$, the Mahalanobis distance between the points is defined as \cite{mahalanobis1936generalized} \begin{equation}
\| \mathbf{x}(t_1) - \mathbf{x}(t_2) \| _M = \sqrt{ (\mathbf{x}(t_1) - \mathbf{x}(t_2))^T \mathbf{C}_x^{-1} (\mathbf{x}(t_1) - \mathbf{x}(t_2) ) }. \end{equation} In particular,
for \eqref{eq:general_SDE}, whose covariance does not depend on $\mathbf{x}$, $\mathbf{C}_x^{-1} = \mathrm{diag}(e_1, \ldots, e_n)$ is a constant matrix where \begin{equation} \label{eq:e_def} \begin{aligned} e_i =& 1, \: & 1 \le i \le m \\ e_i =& \epsilon, \: & m+1 \le i \le n, \end{aligned} \end{equation} and the Mahalanobis distance between samples is \begin{equation} \label{eq:rescale_x_dist}
\| \mathbf{x}(t_2) - \mathbf{x}(t_1) \|^2_M = \sum_{i=1}^n e_i \left( x_i(t_2) - x_i(t_1) \right)^2. \end{equation} Note that in \eqref{eq:rescale_x_dist}, the fast variables are collapsed and become $\mathcal{O}(\sqrt{\epsilon})$ small, and so this metric is implicitly insensitive to variations in the fast variables.
The metric \eqref{eq:rescale_x_dist} can be rewritten as \begin{equation} \label{eq:norm_z}
\| \mathbf{x}(t_2) - \mathbf{x}(t_1) \|^2_M = \| \mathbf{z}(t_2) - \mathbf{z}(t_1) \|^2_2 \end{equation} where \begin{equation} \label{eq:general_rescale} z_i(t) = \sqrt{e_i} x_i(t). \end{equation} $\mathbf{z}(t)$ is a stochastic process of the same dimension as $\mathbf{x}(t)$, rescaled so that each variable has unit diffusivity.
This rescaling transforms our problem from one of detecting the slow variables within dynamic data to one of traditional data mining.
The Mahalanobis distance incorporates information about the dynamics and relevant time scales, so that using traditional data mining techniques with this metric will allow us to detect the slow variables in our data \cite{singer2009detecting}.
It is important to note that, in practice, we {\em never construct} $\mathbf{z}(t)$ {\em explicitly}.
It was shown in \cite{singer2008non} that, assuming $\mathbf{f}$ is bilipschitz, the Mahalanobis distance can be extended to approximate (to fourth order) the Euclidean distance between the rescaled samples $\mathbf{z}(t)$ from accessible $\mathbf{y}(t) = \mathbf{f} (\mathbf{x}(t))$,
\begin{equation} \label{eq:mahalanobis}
\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M = \| \mathbf{z}(t_2) - \mathbf{z}(t_1) \|^2_2 + \mathcal{O}(\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^4_2). \end{equation}
This approximation is accurate when $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|$ is small.
Because we will integrate these distances into a manifold learning algorithm which only considers local distances, we can recover a parametrization of the data which is consistent with the underlying system variables $\mathbf{x}(t)$, even when the data are obscured by a function $\mathbf{f}$.
In Section~\ref{sec:analysis}, we will show how we can approximate this distance directly from data $\mathbf{y}(t)$.
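Although $\mathbf{z}(t)$ is never formed in practice, the identity between \eqref{eq:rescale_x_dist} and \eqref{eq:norm_z} is easily verified numerically; the following sketch (assuming NumPy, with arbitrary illustrative values) is included purely as a consistency check:
\begin{verbatim}
import numpy as np

eps = 1e-3
e = np.array([1.0, eps])                   # e_i: slow first, fast second
C_x = np.diag(1.0 / e)                     # covariance, so that C_x^{-1} = diag(e)
x_a, x_b = np.array([0.2, 3.0]), np.array([0.5, -2.0])

maha_sq = (x_b - x_a) @ np.linalg.inv(C_x) @ (x_b - x_a)
z_a, z_b = np.sqrt(e) * x_a, np.sqrt(e) * x_b   # rescaling z_i = sqrt(e_i) x_i
assert np.isclose(maha_sq, np.sum((z_b - z_a) ** 2))
\end{verbatim}
The fast coordinate contributes only at $\mathcal{O}(\epsilon)$ to the squared distance, which is precisely the collapsing effect exploited above.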
\section{Diffusion Maps for Global Parametrization}
From pairwise distances, we want to extract a {\em global} parametrization of the data that represents the slow variables.
We will use diffusion maps \cite{Coifman2006, coifman2005geometric}, a kernel-based manifold learning technique, to extract a global parametrization using the local distances that we described in the previous section.
Given data $\mathbf{y}(t_1), \dots, \mathbf{y}(t_N)$, we first construct the kernel matrix $\mathbf{W} \in \mathbb{R}^{N \times N}$, where \begin{equation} \label{eq:dmaps_kernel}
W_{ij} = \exp \left( -\frac{\|\mathbf{y}(t_i) - \mathbf{y}(t_j) \|^2}{\sigma_{kernel}^2} \right). \end{equation}
Here, $\| \cdot \|$ denotes the appropriate norm (in our case, the Mahalanobis distance), and $\sigma_{kernel}$ is the kernel scale and denotes a characteristic distance within the data set.
Note that $\sigma_{kernel}$ induces a notion of locality: if $\|\mathbf{y}(t_i) - \mathbf{y}(t_j) \| \gg \sigma_{kernel}$, then $W_{ij}$ is negligible.
Therefore, we only need our metric to be informative within a ball of radius $\sigma _{kernel}$.
We then construct the diagonal matrix $\mathbf{D} \in \mathbb{R}^{N \times N}$, with \begin{equation} D_{ii} = \sum_{j=1}^N W_{ij}. \end{equation}
We compute the eigenvalues $\lambda_0, \dots, \lambda_{N-1}$ and eigenvectors $\phi_0, \dots, \phi_{N-1}$ of the matrix $\mathbf{A} = \mathbf{D}^{-1}\mathbf{W}$, and order them such that $1 = \lambda_0 \ge |\lambda_1| \ge \dots \ge |\lambda_{N-1}|$.
$\phi_0 = \begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}^T$ is the trivial eigenvector; the next few eigenvectors provide embedding coordinates for the data, so that $\phi_j(i)$, the $i^{th}$ entry of $\phi_j$, provides the $j^{th}$ embedding coordinate for $\mathbf{y}(t_i)$ (modulo higher harmonics which characterize the same direction in the data \cite{ferguson2010systematic}).
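A minimal sketch of this construction, starting from a precomputed matrix of pairwise distances (assuming a Python/NumPy environment; the symmetric conjugation below is a standard numerical convenience, not part of the formulation above), reads:
\begin{verbatim}
import numpy as np

def diffusion_maps(dist, sigma_kernel):
    """Eigenvalues and embedding coordinates from an N x N distance matrix."""
    W = np.exp(-dist**2 / sigma_kernel**2)        # kernel matrix W
    D = W.sum(axis=1)
    S = W / np.sqrt(np.outer(D, D))               # symmetric, similar to A = D^{-1} W
    evals, V = np.linalg.eigh(S)                  # real spectrum
    order = np.argsort(-np.abs(evals))
    evals, V = evals[order], V[:, order]
    phi = V / np.sqrt(D)[:, None]                 # right eigenvectors of A
    return evals, phi                             # phi[:, 0] is the trivial eigenvector
\end{verbatim}
Swapping the metric used to fill \texttt{dist} (Euclidean versus Mahalanobis) is then the only change needed to produce the comparison shown later in Figure~\ref{fig:NIV_versus_DMAPS}.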
\section{Estimation of the Mahalanobis Distance} \label{sec:analysis}
\begin{figure}
\caption{Illustration of how to choose $\delta t$ and $\sigma_{kernel}$ appropriately. Curvature effects and other nonlinearities should be negligible within a time window $\delta t$ and within a ball of radius $\sigma_{kernel}$.}
\label{fig:schematic}
\end{figure}
As previously mentioned, we do not have access to the original variables $\mathbf{x}(t)$ from the underlying original SDE system.
Instead, we only have measurements $\mathbf{y}(t) = \mathbf{f} (\mathbf{x}(t))$, and we want to estimate the Mahalanobis distance between the $\mathbf{x}$ variables from observations $\mathbf{y}(t)$.
The traditional Mahalanobis distance is defined for a fixed distribution, whereas here we are dealing with a distribution that possibly changes as a function of position due to nonlinearities in the observation function $\mathbf{f}$ and in the drift $\mathbf{a}(\mathbf{x})$.
Consequently, we use the following modified definition for the Mahalanobis distance between two points, \begin{equation} \label{eq:mahalanobis_distance}
\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M =
\frac{1}{2} (\mathbf{y}(t_2) - \mathbf{y}(t_1))^T \left( \mathbf{C}^{\dagger}(\mathbf{y}(t_1)) + \mathbf{C}^{\dagger}(\mathbf{y}(t_2)) \right) (\mathbf{y}(t_2) - \mathbf{y}(t_1)),
\end{equation} where $\mathbf{C}(\mathbf{y}(t))$ is the covariance of the observed stochastic process {\em at the point} $\mathbf{y}(t)$,
and $\dagger$ denotes the Moore-Penrose pseudoinverse (since $d$ may exceed $n$).
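A direct transcription of \eqref{eq:mahalanobis_distance} (a sketch assuming NumPy; the two local covariance matrices are taken as given here, and their estimation from data is discussed next) is:
\begin{verbatim}
import numpy as np

def mahalanobis_sq(y1, y2, C1, C2):
    """Squared modified Mahalanobis distance between observations y1 and y2,
    using the pseudoinverses of the local covariances C(y1) and C(y2)."""
    dy = y2 - y1
    return float(dy @ (0.5 * (np.linalg.pinv(C1) + np.linalg.pinv(C2))) @ dy)
\end{verbatim}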
To motivate this definition of the Mahalanobis distance, we first consider the simple linear case where $\mathbf{f} (\mathbf{x}) = \mathbf{A} \mathbf{x}$, with $\mathbf{A} \in \mathbb{R}^{d \times n}$.
The covariance of the observed stochastic process $\mathbf{f} (\mathbf{x})$ is given by $\mathbf{C}=\mathbf{AC}_x\mathbf{A}^T$.
Let $\mathbf{A}= \mathbf{U} \mathbf{\Lambda} \mathbf{V}^T$ be the singular value decomposition (SVD) of $\mathbf{A}$, where $\mathbf{U} \in \mathbb{R}^{d \times n}$, $\mathbf{\Lambda} \in \mathbb{R}^{n \times n}$, and $\mathbf{V} \in \mathbb{R}^{n \times n}$.
The pseudoinverse of the covariance matrix is $\mathbf{C}^{\dagger} = \mathbf{U} \mathbf{\Lambda}^{-1} \mathbf{V}^T \mathbf{C}_x^{-1} \mathbf{V \Lambda} ^{-1} \mathbf{U}^T$. Consequently, the Mahalanobis distance \eqref{eq:mahalanobis_distance} is reduced to \begin{equation} \begin{aligned}
\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M &= (\mathbf{y}(t_2) - \mathbf{y}(t_1))^T \mathbf{C}^{\dagger} (\mathbf{y}(t_2) - \mathbf{y}(t_1)) \\
&= (\mathbf{x}(t_2) - \mathbf{x}(t_1))^T \mathbf{A}^T \mathbf{C}_x^{-1} \mathbf{A} (\mathbf{x}(t_2) - \mathbf{x}(t_1)) \\
&= (\mathbf{x}(t_2) - \mathbf{x}(t_1))^T \mathbf{V \Lambda U}^T \mathbf{ U \Lambda}^{-1} \mathbf{V}^T \mathbf{C}_x^{-1} \mathbf{V \Lambda} ^{-1} \mathbf{U}^T \mathbf{U \Lambda V}^T (\mathbf{x}(t_2) - \mathbf{x}(t_1)) \\
&= (\mathbf{x}(t_2) - \mathbf{x}(t_1))^T \mathbf{C}_x^{-1} (\mathbf{x}(t_2) - \mathbf{x}(t_1)) \\
&= \| \mathbf{x}(t_2) - \mathbf{x}(t_1) \|^2_M = \| \mathbf{z}(t_2) - \mathbf{z}(t_1) \|^2_2 . \end{aligned} \end{equation} Hence evaluating the Mahalanobis distances of the observations $\mathbf{y}(t) = \mathbf{f}(\mathbf{x}(t))$ using \eqref{eq:mahalanobis_distance} allows us to estimate the Euclidean distances of the rescaled variables $\mathbf{z}$ (in which the fast coordinates are collapsed).
Following \cite{singer2008non}, we will show via Taylor expansion that the Mahalanobis distance between the observations \eqref{eq:mahalanobis_distance} approximates the Euclidean distance {\em in the rescaled variables} for general nonlinear observation functions $\mathbf{f}$ (provided $\mathbf{f}$ is bilipschitz and both $\mathbf{f}$ and $\mathbf{f}^{-1}$ are differentiable to fourth order).
\eqref{eq:mahalanobis_distance} cannot be evaluated directly since we do not have access to the covariance matrices, so we will instead estimate the covariances directly from data.
We can estimate the covariance $\mathbf{C}(\mathbf{y}(t_0))$ empirically from a set of values $\mathbf{y}(t_1), \dots, \mathbf{y}(t_q)$ drawn from the local distribution at $\mathbf{y}(t_0)$.
One way to obtain such a set of points is to run $q$ simulations for a short time, $\delta t$, each starting from $\mathbf{y}(t_0)$.
Alternatively, we can consider a single time series of length $q \delta t$ starting from $\mathbf{y}(t_0)$, and then estimate the covariance from the increments $\Delta \mathbf{y}(t_i) = \mathbf{y}(t_i) -\mathbf{y}(t_{i-1})$.
Although we will present analysis and results for the first type of estimation, the second approach is often more practical.
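Both estimation strategies admit short implementations (a sketch assuming NumPy; here \texttt{y\_end} collects the $q$ burst end points started from $\mathbf{y}(t_0)$, and \texttt{y\_traj} is a single short trajectory sampled every $\delta t$):
\begin{verbatim}
import numpy as np

def covariance_from_bursts(y_end, dt):
    """Local covariance estimate from q bursts of length dt: cov of end points / dt."""
    return np.cov(y_end, rowvar=False) / dt

def covariance_from_increments(y_traj, dt):
    """Local covariance estimate from the increments of one short trajectory."""
    return np.cov(np.diff(y_traj, axis=0), rowvar=False) / dt
\end{verbatim}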
Errors in our estimation of the Mahalanobis distance arise from three sources.
One source of error is approximating the function $\mathbf{f}$ locally as a linear function by truncating the Taylor expansion of $\mathbf{f}$ at first order.
An additional source of error arises from disregarding the drift in the stochastic process, and assuming that samples are drawn from a Gaussian distribution.
The third source comes from finite sampling effects.
In this work, we will address and discuss the first two sources of error (the finite sampling effects are the subject of future research).
We can control the effects of the errors due to truncation of the Taylor expansion by adjusting $\sigma_{kernel}$; the higher-order terms in this expansion will be small for points which are close, such that adjusting $\sigma_{kernel}$ will allow us to only consider distances which are sufficiently accurate in our overall computation scheme.
Furthermore, we can control the errors incurred by disregarding the drift by adjusting the time scale of our simulation bursts $\delta t$.
Figure~\ref{fig:schematic} illustrates some of the issues in choosing the sizes $\delta t$ (or $q \delta t$ if the alternate method is used) and the parameter $\sigma_{kernel}$.
We will present both analytical results for the error bounds, as well as an empirical methodology to set the parameters $\sigma_{kernel}$ and $\delta t$ for our method to accurately recover the slow variable(s).
\subsection{Error due to the observation function $\mathbf{f}$}
We want to relate the distance in the rescaled space, $\|\mathbf{z}(t_2) - \mathbf{z}(t_1)\|_2$, to the estimated Mahalanobis distance between the observations $\| \mathbf{y}(t_2) - \mathbf{y}(t_1)\|_M$.
We define the error incurred by using the Mahalanobis distance to approximate the true distance as \begin{equation}
E_M(\mathbf{y}(t_1), \mathbf{y}(t_2)) = \|\mathbf{z}(t_2) - \mathbf{z}(t_1)\|_2^2 - \| \mathbf{y}(t_2) - \mathbf{y}(t_1)\|^2_M . \end{equation}
By Taylor expansion of $\mathbf{g}(y) = \mathbf{f}^{-1}(y)$ around $\mathbf{y}(t_1)$ and $\mathbf{y}(t_2)$ and averaging the two expansions, we obtain
\begin{equation} \label{eq:mahanaobis_error} \begin{aligned} E_M\left( \mathbf{y}(t_1), \mathbf{y}(t_2) \right)
=& \begin{aligned}[t]
\frac{1}{2} \sum_{i=1}^n \sum_{jkl=1}^{d} & \left( g_{i, (j)} (\mathbf{y}(t_1)) g_{i, (k,l)} (\mathbf{y}(t_1)) - g_{i, (j)} (\mathbf{y}(t_2)) g_{i, (k,l)} (\mathbf{y}(t_2)) \right) \times \\ & ({y}_j(t_2) - {y}_j(t_1)) ({y}_k(t_2) - {y}_k(t_1))({y}_l(t_2) - {y}_l(t_1)) \end{aligned} \\ +& \begin{aligned}[t] \frac{1}{8} \sum_{i=1}^n \sum_{jklm=1}^d & \left( g_{i, (j,k)} (\mathbf{y}(t_1)) g_{i, (l,m)} (\mathbf{y}(t_1)) + g_{i, (j,k)} (\mathbf{y}(t_2)) g_{i, (l,m)} (\mathbf{y}(t_2)) \right) \times
\\ &({y}_j(t_2) - {y}_j(t_1)) ({y}_k(t_2) - {y}_k(t_1))({y}_l(t_2) - {y}_l(t_1)) ({y}_m(t_2) - {y}_m(t_1)) \end{aligned} \\ +& \begin{aligned} [t] \frac{1}{6} \sum_{i=1}^n \sum_{jklm=1}^d & \left( g_{i, (j)} (\mathbf{y}(t_1)) g_{i, (k,l,m)} (\mathbf{y}(t_1)) + g_{i, (j)} (\mathbf{y}(t_2)) g_{i, (k,l,m)} (\mathbf{y}(t_2)) \right) \times \\ & ({y}_j(t_2) - {y}_j(t_1)) ({y}_k(t_2) - {y}_k(t_1))({y}_l(t_2) - {y}_l(t_1))({y}_m(t_2) - {y}_m(t_1)) \end{aligned} \\
+& \mathcal{O} \left(\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^6_2 \right) , \end{aligned} \end{equation}
where
\begin{equation} \begin{aligned} g_{i,(j)} &= \sqrt{e_i} \frac{\partial g_i}{\partial y_j} \\ g_{i,(j,k)} &= \sqrt{e_i} \frac{\partial^2 g_i}{\partial y_j \partial y_k} \\ g_{i,(j,k,l)} &= \sqrt{e_i} \frac{\partial^3 g_i}{\partial y_j \partial y_k \partial y_l} . \end{aligned} \end{equation}
In \cite{singer2008non}, it was shown that the error incurred by using the Mahalanobis distance to approximate the $L_2$-distance between points $\mathbf{z}(t)$ is $\mathcal{O} (\|\mathbf{y}_1 - \mathbf{y}_2 \|_2^4 )$ (see the Supplementary Materials for details).
We now see from \eqref{eq:mahanaobis_error} that the error is an explicit function of the second- and higher-order derivatives of $\mathbf{g} = \mathbf{f}^{-1}$ and the distance between samples $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|_2$.
We would like to note that this error does not depend on the dynamics of the underlying stochastic process (as we assume the covariances at each point on the manifold are known), but is only a function of the measurement function $\mathbf{f}$.
The parameter $\sigma_{kernel}$ in the diffusion maps calculation determines how much $E_M$ contributes to the overall analysis.
From \eqref{eq:dmaps_kernel}, distances which are much greater than $\sigma_{kernel}$ are negligible in the diffusion maps computation because of the exponential kernel.
Therefore, we want to choose $\sigma_{kernel}^2$ on the order of $\|\mathbf{y}(t_2) - \mathbf{y}(t_1)\|^2_M$ in a regime where $| E_M(\mathbf{y}(t_1), \mathbf{y}(t_2))| \ll \|\mathbf{y}(t_2) - \mathbf{y}(t_1)\|^2_M$.
This is illustrated in Figure~\ref{fig:schematic}, where we want to choose $\sigma_{kernel}$ small enough so that the curvature and other nonlinear effects (captured in the error term $E_M$) are negligible.
This will ensure that the errors in the Mahalanobis distance approximation do not greatly effect our overall analysis.
On first inspection, it would appear that our analysis indicates that $\sigma_{kernel}$ should be chosen arbitrarily small.
However, to obtain a meaningful parametrization of the data set, there must be a nonnegligible number of data points within a ball of radius $\sigma_{kernel}$ around each sample.
Therefore, the sampling density on the underlying manifold provides a lower bound for $\sigma_{kernel}$.
\subsection{Error due to the dynamics} \label{subsec:cov_est}
To compute the Mahalanobis distance in \eqref{eq:mahalanobis_distance}, we require $\mathbf{C}$, the covariance of the observed stochastic process $\mathbf{y}(t) = \mathbf{f}( \mathbf{x}(t))$.
We will use simulation bursts to locally explore the dynamics on the manifold of observations in order to estimate the covariance at a point $\mathbf{y}(t)$ from data \cite{talmon2014manifold, talmon2014intrinsic}.
We write the elements of the estimated covariance $\hat{\mathbf{C}}(\mathbf{y}(t), \delta t)$ as \begin{equation}\label{eq:estimated_cov_expected_value} \hat{C}_{ij}(\mathbf{y}(t), \delta t) = \frac{1}{\delta t} \left( \mathbb{E} \left[ y_i (t+\delta t) y_j (t+ \delta t) \mid \mathbf{y}(t) \right] - \mathbb{E} \left[ y_i (t+\delta t) \mid \mathbf{y}(t) \right] \mathbb{E} \left[ y_j (t+\delta t) \mid \mathbf{y}(t) \right] \right) , \end{equation}
where $\delta t > 0$ is the length of the simulation burst.
Due to the drift in the stochastic process and the (perhaps nonlinear) measurement function $\mathbf{f}$, we incur some error by approximating the covariance at a point $\mathbf{y}(t)$ using simulations of length $\delta t > 0$.
Define the error in this approximation as \begin{equation} \mathbf{E}_C(\mathbf{y}(t), \delta t) = \hat{\mathbf{C}}(\mathbf{y}(t), \delta t) - \mathbf{C}(\mathbf{y}(t)). \end{equation}
By It\^{o}-Taylor expansion of $\mathbf{f}$ and $\mathbf{x}(t)$ \cite{kloeden1992numerical},
\begin{equation} \label{eq:cov_error} \begin{aligned} E_{C, ij} (\mathbf{x}(t), \delta t) =&
\frac{1}{\delta t} \sum_{k=1}^n f_{i,(k)}(\mathbf{x}(t)) \mathbb{E} \left[ \int_t^{t+\delta t} \left( \int_{s_2}^{t+\delta t} f_{j,(k,0)}(\mathbf{x}(s_1)) ds_1 + \int_t^{s_2} f_{j,(0,k)}(\mathbf{x}(s_1)) ds_1 \right) ds_2 \right] \\ &+ \frac{1}{\delta t} \sum_{k=1}^n f_{j,(k)}(\mathbf{x}(t)) \mathbb{E} \left[ \int_t^{t+\delta t} \left( \int_{s_2}^{t + \delta t} f_{i,(k,0)}(\mathbf{x}(s_1)) ds_1 + \int_t^{s_2} f_{i,(0,k)}(\mathbf{x}(s_1)) ds_1 \right) ds_2 \right] \\ &+ \frac{1}{\delta t} \sum_{k,l=1}^n \mathbb{E} \left[ \int_t^{t+\delta t}\left( \int_t^{s_2} f_{i,(k,l)}(\mathbf{x}(s_1)) dW_{s_1, k} \right) \left( \int_t^{s_2} f_{j,(k,l)}(\mathbf{x}(s_1)) dW_{s_1, k} \right) ds_2 \right] \\ &+ \mathcal{O} (\delta t^{3/2}) \end{aligned} \end{equation}
where \begin{equation} \begin{aligned} f_{i,(k)} &= \frac{1}{\sqrt{e_k}} \frac{\partial f_i}{\partial x_k} \\ f_{i,(k,l)} &= \frac{1}{\sqrt{e_k e_l}} \frac{\partial^2 f_i}{\partial x_k \partial x_l} \\ f_{i,(k,0)} &= \frac{1}{\sqrt{e_k}} \sum_{l=1}^n \left( \frac{\partial}{\partial x_k} \left( \frac{a_l(\mathbf{x})}{e_l} \frac{\partial f_i}{\partial x_l} \right) + \frac{1}{2 e_l} \frac{\partial^3 f_i}{\partial x_k \partial x_l^2} \right) \\ f_{i,(0, k)} &= \frac{1}{\sqrt{e_k}} \sum_{l=1}^n \left( \frac{a_l(\mathbf{x})}{e_l} \frac{\partial^2 f_i}{\partial x_k \partial x_l} +\frac{1}{2 e_l} \frac{\partial^3 f_i}{\partial x_k \partial^2 x_l} \right). \end{aligned} \end{equation}
From \eqref{eq:cov_error}, the error in the covariance is $\mathcal{O}(\delta t)$ (as the $ds$ integrals are each $\mathcal{O}(\delta t)$ and the $dW$ integrals are each $\mathcal{O}(\sqrt{\delta t})$) and a function of the derivatives of the observation function $\mathbf{f}$ and the drift $\mathbf{a}$.
We want to set $\delta t$ such that $\|\mathbf{E}_C \| \ll \| \mathbf{C} \|$ (this is illustrated in Figure~\ref{fig:schematic}), so that the estimated covariances are accurate.
Note that in practice, we compute $\hat{\mathbf{C}}$ by running many simulations of length $\delta t$ starting from $\mathbf{x}(t)$, and use the sample average to approximate the expected values in \eqref{eq:estimated_cov_expected_value}.
We therefore incur additional error due to finite sampling; this error is ignored for the purposes of this analysis, and quantifying this error is the subject of future research.
Our analysis reveals that the errors decrease with decreasing $\delta t$; at first inspection, one would want to set $\delta t$ arbitrarily small to obtain the highest accuracy possible.
However, often in practice, one cannot obtain an arbitrarily refined sampling rate, such that a smaller $\delta t$ results in fewer samples with which to approximate the local covariance. When also accounting for these finite sampling errors, and one should take $\delta t$ as long as possible while still maintaining negligable errors from the observation function $\mathbf{f}$ and the drift $\mathbf{a}$.
\section{Illustrative Examples}
For illustrative purposes, we consider the following two-dimensional SDE \begin{equation} \label{eq:specific_SDE} \begin{aligned} dx_1(t) &=& adt &+& dW_1(t)\\ dx_2(t) &=& -\frac{x_2(t)}{\epsilon} dt &+& \frac{1}{\sqrt{\epsilon}} dW_2(t) \end{aligned} \end{equation}
where $a$ is an $\mathcal{O}(1)$ constant, as a specific example of \eqref{eq:general_SDE}.
$x_1$ is the slow variable, and $x_2$ is a fast noise whose equilibrium measure is bounded and $\mathcal{O}(1)$.
Figure~\ref{fig:initial_data} shows data simulated from this SDE colored by time.
We would like to recover a parametrization of this data which is one-to-one with the slow variable $x_1$.
\begin{figure}
\caption{Data, simulated from \eqref{eq:specific_SDE} with $a=3$ and $\epsilon = 10^{-3}$, for $3000$ time steps with $dt = 10^{-4}$. The data are colored by time.}
\label{fig:initial_data}
\end{figure}
\subsection{Linear function} \label{subsec:linear_example}
In the first example, our observation function $\mathbf{f}$ will be the identity function,
\begin{equation} \label{eq:linear_transform} \begin{aligned} \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} &=& \mathbf{f}(\mathbf{x}(t)) &=& \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} \\ \mathbf{g}(\mathbf{y}(t)) &=& \mathbf{f}^{-1} (\mathbf{y}(t)) &=& \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} \end{aligned} \end{equation}
where the fast and slow variables remain uncoupled.
In this case, there is no error incurred due to the measurement function $\mathbf{f}$ ($E_M = 0$), as the second- and higher-order derivatives of $\mathbf{g}$ are identically 0.
\subsubsection{Importance of using the Mahalanobis distance}
We want to demonstrate the utility of using the Mahalanobis distance compared to the typical Euclidean distance.
We compute the diffusion map embedding for the data in Figure~\ref{fig:initial_data}, using both the standard Euclidean distance and the Mahalanobis distance for the computation of the kernel in \eqref{eq:dmaps_kernel}.
The data, colored by $\phi_1$ using the two different metrics, are shown in Figure~\ref{fig:NIV_versus_DMAPS}.
When using the standard Euclidean distance which does not account for the underlying dynamics, the first diffusion maps recovers the fast variable $x_2$, suggesting the fast modes is the dominant scale purely in terms of data analysis (Figure~\ref{fig:NIV_versus_DMAPS}(a)).
In contrast, the slow variable is recovered when using the Mahalanobis distance, as the coloring in Figure~\ref{fig:NIV_versus_DMAPS}(b) (where the data are colored by the first diffusion maps variable) is consistent with the coloring in Figure~\ref{fig:initial_data} (where the data are colored by time).
\begin{figure}
\caption{Comparison of using the Euclidean distance and the Mahalanobis distance in multiscale data mining. (a) The data from Figure~\ref{fig:initial_data}, colored by the first diffusion map coordinate when using the Euclidean distance in the kernel in \eqref{eq:dmaps_kernel}. Note that we do {\em not} recover the slow variable. (b) The data from Figure~\ref{fig:initial_data}, colored by the first diffusion map coordinate when using the Mahalanobis distance in the kernel in \eqref{eq:dmaps_kernel}. The good correspondence between this coordinate and the slow variable is visually obvious.}
\label{fig:NIV_versus_DMAPS}
\end{figure}
\subsubsection{Errors in covariance estimation}
For the example in \eqref{eq:linear_transform}, the analytical covariance is
\begin{equation} \label{eq:cov_linear_example} \mathbf{C}(\mathbf{x}(t)) = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{\epsilon} \end{bmatrix}. \end{equation}
From \eqref{eq:cov_error}, we find
\begin{equation} \mathbf{E}_C(\mathbf{x}(t), \delta t) = \begin{bmatrix} 0 & 0 \\ 0 & -\frac{\delta t}{\epsilon^2} \end{bmatrix} + \mathcal{O} (\delta t^{3/2}) . \end{equation}
Therefore, $\| \mathbf{C} \| = \mathcal{O} \left( \frac{1}{\epsilon} \right)$ and $\|\mathbf{E}_C \| = \mathcal{O}\left(\frac{\delta t}{\epsilon^2} \right)$ (provided $\frac{1}{\epsilon^2} \gg \sqrt{\delta t}$; this will be discussed further in Section~\ref{subsec:fastvar}).
These terms are shown in Figure~\ref{fig:cov_error}(a) as a function of $\delta t$.
We want to choose $\delta t$ in a regime where $\| \mathbf{E}_C \| \ll \| \mathbf{C} \|$ (the yellow shaded region in Figure~\ref{fig:cov_error} indicates where $\| \mathbf{E}_C \| < \| \mathbf{C} \|$), so that the errors in the estimated covariance are small with respect to the covariance.
When we do not analytically know the functions $\mathbf{f}$ or $\mathbf{g}$, we can find such a regime empirically by estimating the covariance for several values of $\delta t$.
This provides an estimate of $\hat{\mathbf{C}} = \mathbf{C} + \mathbf{E}_C$ as a function of $\delta t$.
From Figure~\ref{fig:cov_error}(a), we expect a ``knee" in the plot of $\| \hat{\mathbf{C}} \|$ versus $\delta t$ when $\| \mathbf{E}_C \|$ becomes larger than $\| \mathbf{C}\|$.
Figure~\ref{fig:cov_error}(b) shows the empirical $\| \hat{\mathbf{C}} \|$ as a function of $\delta t$ for the data in Figure~\ref{fig:initial_data}, and the knee in this curve is consistent with the intersection in Figure~\ref{fig:cov_error}(a).
\begin{figure}\label{fig:cov_error}
\end{figure}
\subsubsection{Recovery of the fast variable} \label{subsec:fastvar}
Note that, for the example in \eqref{eq:linear_transform}, $\mathbf{E}_C$ is a constant diagonal matrix.
Therefore, taking $\delta t$ too large will not lead to nonlinear effects or mixing of the fast and slow variables.
Rather, changing $\delta t$ will only affect the perceived ratio of the fast and slow timescales.
To see this behavior in our diffusion maps results, we must first discuss the interpretation of the diffusion maps eigenspectrum.
The diffusion maps eigenvectors provide embedding coordinates for the data, and the corresponding eigenvalues provide a measure of the importance of each coordinate.
However, some eigenvectors can be harmonics of previous eigenvectors; for example, for a data set parameterized by a variable $x$, both $\cos x$ and $\cos 2x$ will appear as diffusion maps eigenvectors (see \cite{ferguson2010systematic} for a more detailed discussion).
These harmonics do not capture any new direction within the data set, but do appear as additional eigenvector/eigenvalue pairs.
Therefore, for the two-dimensional data considered here, the fast variable will not necessarily appear as the second (non-trivial) eigenvector.
As the time scale separation increases, the relative importance of the slow and fast directions will also increase.
This implies that the eigenvalue corresponding to the eigenvector which parameterizes the fast direction will decrease, and the number of harmonics of the slow mode which appear before the fast mode will increase.
Figure~\ref{fig:recover_fast} shows results for three different values of $\delta t$ (the corresponding values are indicated by the dashed lines in Figure~\ref{fig:cov_error}).
When the time scale of the simulation burst used to estimate the local covariance (indicated by the red clouds in the top row of figures), is sufficiently shorter than that of the equilibration time of the fast variable, the estimated local covariance is accurate and the fast variable is collapsed significantly relative to the slow variable.
This means that the fast variable is recovered {\em very} far down in the diffusion maps eigenvectors.
The left two columns of Figure~\ref{fig:recover_fast} show that, for this example, when the simulation burst is shorter than the equilibration time, the fast variable is recovered as $\phi_{10}$.
However, if the time scale of the burst is {\em longer} than the saturation time of the fast variable, the estimated covariance changes: the variance in the slow direction continues to grow, while the variance in the fast direction is fixed.
This means that the apparent time scale separation is smaller, the collapse of the fast variable is less pronounced relative to the slow variable, and the fast variable is recovered in an earlier eigenvector (in our ordering of the spectrum).
The right column of Figure~\ref{fig:recover_fast} shows that, when the burst is now longer than the equilibration time, the fast variable appears earlier in the eigenvalue spectrum and is recovered as $\phi_6$.
\begin{figure}
\caption{Relationship between changing $\delta t$ and recovery of the variables. From left to right, the columns correspond to $\delta t = 10^{-6}, 10^{-5}, 10^{-3}$. (Row~1) Data (gray) and representative burst (red) used to estimate the local covariance. (Row~2) Correlation between first diffusion maps coordinate and the slow variable $x_1$. (Row~3) Correlation between the relevant diffusion maps coordinate and the fast variable $x_2$. Note that for $\delta t = 10^{-6}$ and $\delta t = 10^{-5}$, $x_2$ is correlated with $\phi_{10}$. When $\delta t = 10^{-3}$, $x_2$ is correlated with $\phi_6$. (Row 4) Diffusion maps eigenvalue spectra. The eigenvalues corresponding to the coordinates for the slow and fast modes are indicated by red circles. Note that when $\delta t$ is too large, the apparent time scale separation decreases and the coordinate corresponding to the fast variable appears earlier in the spectrum. }
\label{fig:recover_fast}
\end{figure}
\subsection{Nonlinear observation function} \label{subsec:nonlinear_example}
In the second example, our data will be warped into ``half-moon" shapes via the function \begin{equation} \label{eq:nonlinear_function} \begin{aligned} \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} &=& \mathbf{f}(\mathbf{x}(t)) &=& \begin{bmatrix} x_1(t) + x_2^2(t) \\ x_2(t) \end{bmatrix}\\ \mathbf{g}(\mathbf{y}(t)) &=& \mathbf{f}^{-1} (\mathbf{y}(t)) &=& \begin{bmatrix} y_1(t) - y_2^2(t) \\ y_2(t) \end{bmatrix} . \end{aligned} \end{equation}
Figure~\ref{fig:initial_data_nonlinear} shows the data from Figure~\ref{fig:initial_data} transformed by the function $\mathbf{f}$ in \eqref{eq:nonlinear_function} and colored by time.
It is important to note that this is a difficult class of problem in practice, as none of the observed variables are purely fast or slow, and the observed system appears, at first inspection, to possess no separation of time scales.
For this example, the analytical covariance and inverse covariance are \begin{equation} \begin{aligned} \mathbf{C}(\mathbf{x}(t)) =& \frac{1}{\epsilon}
\begin{bmatrix} \epsilon + 4x_2^2(t) & 2x_2(t) \\ 2x_2(t) & 1 \end{bmatrix}\\ \mathbf{C}^{\dagger}(\mathbf{x}(t)) =& \begin{bmatrix} 1 & -2 x_2(t) \\ -2 x_2(t) & \epsilon+ 4 x_2^2(t) \end{bmatrix} . \end{aligned} \end{equation}
The fast and slow variables are now coupled through the function $\mathbf{f}$, and the Euclidean distance is not informative about the fast {\em or} the slow variables.
We need to use the Mahalanobis distance to obtain a parametrization that is consistent with the underlying fast-slow dynamics.
\begin{figure}
\caption{The data from Figure~\ref{fig:initial_data}, transformed by $\mathbf{f}$ in \eqref{eq:nonlinear_function}}
\label{fig:initial_data_nonlinear}
\end{figure}
\subsubsection{Errors in Mahalanobis distance}
We can bound the Mahalanobis distance by the eigenvalues of $\mathbf{C}^{\dagger}$, \begin{equation}
\lambda_{C^{\dagger},1} \| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|_2^2 \le
\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M \le
\lambda_{C^{\dagger},2} \| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|_2^2 \end{equation} where $\lambda_{C^{\dagger},1} \le \lambda_{C^{\dagger},2}$ are the two eigenvalues of $\mathbf{C}^{\dagger}$.
Therefore, for the example in \eqref{eq:nonlinear_function}, we have \begin{equation} E_M(\mathbf{y}(t_1), \mathbf{y}(t_2)) = - (y_2(t_2) - y_2(t_1))^4. \end{equation}
Figure~\ref{fig:cov_error_nonlinear}(a) shows $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M$ and $| E_M |$ as a function of $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|_2$.
The Mahalanobis distance is an accurate approximation to the true intrinsic distance $\| \mathbf{z}(t_2) - \mathbf{z}(t_1) \|_2$ when $|E_M| \ll \| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M$ (the shaded yellow region in the plot indicates where $|E_M| < \| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M$).
We want to choose $\sigma_{kernel}^2$ in a regime where $|E_M(\mathbf{y}(t_1), \mathbf{y}(t_2))| \ll \| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M$, so that the distances we utilize in the diffusion maps calculation are accurate.
We can find such a regime empirically by plotting $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|^2_M$ as a function of $\| \mathbf{y}(t_2) - \mathbf{y}(t_1) \|_2$, and assessing when the relationship deviates from quadratic.
This is shown in Figure~\ref{fig:cov_error_nonlinear}(b), and the deviation from quadratic behavior is consistent with the intersection of the analytical expressions plotted in Figure~\ref{fig:cov_error_nonlinear}(a).
Figures~\ref{fig:colored_data_nonlinear_cases}(a) and (b) show the data from Figure~\ref{fig:initial_data_nonlinear}, colored by $\phi_1$ for two different values of $\sigma_{kernel}$.
The corresponding values of $\sigma_{kernel}^2$ are indicated by the dashed lines.
When $\sigma_{kernel}^2$ corresponds to a region where $ |E_M | \ll \|\mathbf{y}(t_2) - \mathbf{y}(t_1) \|_M^2$, $\phi_1$ is well correlated with the slow variable.
However, when $\sigma_{kernel}^2$ corresponds to a region where $|E_M | \gg \|\mathbf{y}(t_2) - \mathbf{y}(t_1) \|_M^2$, the slow variable is no longer recovered.
\begin{figure}\label{fig:cov_error_nonlinear}
\end{figure}
\subsubsection{Errors in covariance estimation}
From \eqref{eq:cov_error}, we find that, for the example in \eqref{eq:nonlinear_function},
\begin{equation} \small \begin{aligned} E_{C,11} (\mathbf{x}(t), \delta t) =& \frac{2 \delta t}{\epsilon^2} - \frac{8 x_2(t)}{\epsilon^2 \delta t} \mathbb{E} \left[ \int_t^{t+\delta t} \left( \int_{s_2}^{t+\delta t} 2 x_2(s_1) ds_1 + \int_t^{s_2} x_2(s_1) ds_1 \right) ds_2\right] + \mathcal{O} (\delta t^{3/2}) \\
E_{C, 12} (\mathbf{x}(t), \delta t) = & E_{C, 21} (\mathbf{x}(t), \delta t)\\ =& - \frac{x_2(t) \delta t}{\epsilon^2} - \frac{2}{\epsilon^2 \delta t} \mathbb{E} \left[ \int_t^{t+\delta t} \left( \int_{s_2}^{t + \delta t} 2 x_2(s_1) ds_1 + \int_t^{s_2} x_2(s_1) ds_1 \right) ds_2 \right] + \mathcal{O} (\delta t^{3/2})\\
E_{C, 22} (\mathbf{x}(t), \delta t) =& -\frac{\delta t}{\epsilon^2} + \mathcal{O} (\delta t^{3/2}) \end{aligned} \end{equation}
The error in the covariance is $\mathcal{O} \left( \frac{\delta t}{\epsilon^2} \right)$.
As expected, the error grows with increasing $\delta t$.
We can also see the explicit dependence of the covariance error on the time scale separation $\epsilon$; larger time scale separation results in a larger covariance error, as a more refined simulation burst is required to estimate the covariance of the fast directions.
$\|\mathbf{C} \|$ and $\| \mathbf{E}_C\| $ are plotted as a function of $\delta t$ in Figure~\ref{fig:cov_error_nonlinear}(c); the shaded yellow portion denotes the region where $\| \mathbf{E}_C \| < \| \mathbf{C} \|$.
As in the previous example, we can empirically find where $\| \mathbf{E}_C \| \ll \| \mathbf{C} \|$ by plotting $\| \hat{\mathbf{C}} \|$ as a function of $\delta t $ and looking for a knee in the plot.
These results are shown in Figure~\ref{fig:cov_error_nonlinear}(d).
Figures~\ref{fig:colored_data_nonlinear_cases}(a) and (c) show the data from Figure~\ref{fig:initial_data_nonlinear}, colored by $\phi_1$ for two different values of $\delta t$.
The corresponding values of $\delta t$ are indicated by the dashed lines in Figure~\ref{fig:cov_error_nonlinear}(c)~and~(d).
When $\delta t$ corresponds to a region where $\|\mathbf{E}_C \| \ll \| \mathbf{C} \|$, the slow variable is recovered by the first diffusion maps coordinate.
However, when $\delta t$ corresponds to a region where $\|\mathbf{E}_C \| \gg \| \mathbf{C} \|$, the slow variable is no longer recovered.
\def \figheight {1.6in}
\begin{figure}
\caption{Data from Figure~\ref{fig:initial_data_nonlinear}, colored by the first diffusion maps variable $\phi_1$ using the Mahalanobis distance for three different parameter settings. The relevant values of $\delta t$ and $\sigma_{kernel}$ are indicated by the red dashed lines on the corresponding plots. (Row~1) $\delta t = 10^{-7}$ and $\sigma_{kernel}^2 = 10^{-2}$. Note that the parametrization is one-to-one with the slow variable. (Row~2) $\delta t = 10^{-7}$ and $\sigma_{kernel}^2 = 10^{1}$. We do not recover the slow variable because $\sigma_{kernel}$ is too large. (Row~3) $\delta t = 10^{-3}$ and $\sigma_{kernel}^2 = 10^{-2}$. We do not recover the slow variable because $\delta t$ is too large. }
\label{fig:colored_data_nonlinear_cases}
\end{figure}
\section{Conclusions}
We have presented a methodology to compute a parametrization of a data set which respects the slow variables in the underlying dynamical system.
The approach utilizes diffusion maps, a kernel-based manifold learning technique, with the Mahalanobis distance as the metric.
We showed that the Mahalanobis distance collapses the fast directions within a data set, allowing for successful recovery of the slow variables.
Furthermore, we showed how to estimate the covariances (required for the Mahalanobis distance) directly from data.
A key point in our approach is that the embedding coordinates we compute are not only insensitive to the fast variables, but are also invariant to nonlinear observation functions.
Therefore, the approach can be used for data fusion: data collected from the same system via different measurement functions can be combined and merged into a single coordinate system.
In the examples presented, the initial data came from a single trajectory of a dynamical system, and the local covariance at each point in the trajectory was estimated using brief simulation bursts.
However, the initial data need not be collected from a single trajectory, and other sampling schemes could be employed.
Brief time series are required to estimate the local covariances, but given a simulator, one could reinitialize brief simulation bursts which are sufficiently short and refined from each sample point.
In our examples, we controlled the time scale of sampling and could therefore set the time scale over which to estimate the covariance and the simulation time step arbitrarily small.
However, in some settings, such as previously collected historical data, it is not uncommon to have a fixed sampling rate and be unable to reinitialize simulations.
In such cases, it is possible that we cannot find an appropriate kernel scale given the fixed $\delta t$ such that we can accurately recover the slow variables.
For these cases, the data cannot be processed as given, and it is necessary to construct intermediate observers, such as histograms, Fourier coefficients, or scattering transform coefficients \cite{mallat2012group, talmon2014intrinsic, talmon2014manifold}.
Such intermediates are more complex statistical functions than simple averages and can capture additional structure within the data.
They also reduce the effects of noise and permit a larger time step.
However, constructing such intermediates often requires additional {\em a priori} knowledge about the system dynamics and noise structure.
Clearly, in our analysis, we have ignored the finite sampling effects in our estimation.
In reality, both the number of samples used to estimate the covariances, as well as the density of sampled points on the manifold, affect the recovered parametrization and provide additional constraints on $\delta t$ and $\sigma_{kernel}$.
Future work involves extending our analysis to the finite sample case, and providing guidelines for the amount of data required to apply our methodology.
The methods presented here provide a bridge between traditional data mining and multiple time scale dynamical systems.
With this interface established, one can now consider using such data-driven methodologies to extract reduced models (either explicitly, or implicitly via an equation-free framework \cite{erban2006gene, kevrekidis2004equation, kevrekidis2003equation, kevrekidis2009equation}) which also respect the underlying slow dynamics and geometry of the data.
Such reduced models hold the promise of accelerated analysis and reduced simulation of dynamical systems whose effective dynamics are obscure upon simple inspection.
\end{document} | arXiv |
概要:二次元乱流を特徴づける性質として, 高レイノルズ数におけるエネルギー密度スペクトル中のエンストロフィーカスケードとエネルギー逆カスケードに対応する領域の出現が挙げられる. 言い換えれば, 二次元乱流とは粘性ゼロ極限における流体のエネルギー保存とエンストロフィー散逸によって特徴付けられる. よって非粘性流体の運動を記述するEuler方程式の解でこのような性質を持つものが存在すれば, 二次元乱流の数学的構造を解析するうえで重要であると考えられる. しかし, 二次元Euler方程式においてエンストロフィー散逸解を直接構成するには数学的な困難が伴う. 本セミナーでは, Euler方程式の正則化方程式である. Euler-$\alpha$方程式を利用したエンストロフィー散逸解の構成について紹介し, 特に点渦の自己相似3体衝突が生むエンストロフィー散逸に関する数学的結果について説明する予定である.
概要: Mathematical models and statistical arguments play a central role in the assessment of the changes that are observed in Earth's climate system. While much of the discussion of climate change is focused on large-scale computational models, the theory of dynamical systems provides the language to distinguish natural variability from change. In this talk I will discuss some problems of current interest in climate science and indicate how, as mathematicians, we can find inspiration for new applications.
概要: Strong \(A_\infty\) weights are introduced. Then, degenerate elliptic equations with respect to a power of a strong \(A_\infty\) weight are studied. Then, Harnack inequality and local regularity results for weak solutions of a quasilinear degenerate equation in divergence form under natural growth conditions are proved. We stress that regularity results are achieved under minimal assumptions on the coefficients.
概要: In this talk, we will introduce a high-precision numerical method for studying quasicrystals, i.e., the projection method. This method is based on the philosophy that a continuous distributed quasicrystal is a continuous function over a quasilattice. It can be used to study the soft quasicrystals. In particular, the projection method decomposes the quasiperiodic structure by a combination of the almost periodic functions, and provides an efficient algorithm to calculate the combinational coefficients in the higher-dimensional space. At the same time, the projection method provides a unified computational framework for the periodic crystals and quasicrystals. The free energies of the two kinds of ordered structures can be obtained with the same accuracy. Therefore, it can be used to determine the thermodynamic stability of periodic and quasiperiodic crystals in theory. We have applied the algorithm to a series of coarse-grained density functional theories, and obtained 2-dimensional 8-, 10, 12-fold symmetric quasicrystals (computed in the 4-dimensional space), and 3-dimensional icosahedral quasicrystals (calculated in the 6-dimensional space). The corresponding phase diagrams, including periodic crystals and quasicrystals, have been constructed. | CommonCrawl |
Adaptation and psychometric validation of Diabetes Health Profile (DHP-18) in patients with type 2 diabetes in Quito, Ecuador: a cross-sectional study
Ikram Benazizi ORCID: orcid.org/0000-0001-7210-9193 1,
Mari Carmen Bernal-Soriano 1,2,
Yolanda Pardo 2,3,4,
Aida Ribera2,5,
Andrés Peralta-Chiriboga1,6,
Montserrat Ferrer2,3,4,
Alfonso Alonso-Jaquete7,
Jordi Alonso2,3,8,
Blanca Lumbreras 1,2 &
Lucy Anne Parker1,2
The Diabetes Health Profile (DHP‐18), structured in three dimensions (psychological distress (PD), barriers to activity (BA) and disinhibited eating (DE)), assesses the psychological and behavioural burden of living with type 2 diabetes. The objectives were to adapt the DHP‐18 linguistically and culturally for use with patients with type 2 DM in Ecuador, and to evaluate its psychometric properties.
Participants were recruited using purposive sampling through patient clubs at primary health centres in Quito, Ecuador. The DHP-18 validation consisted of a linguistic validation, carried out by two Ecuadorian doctors and through eight patient interviews, and a psychometric validation, in which participants provided clinical and sociodemographic data and responded to the SF-12v2 health survey and the linguistically and culturally adapted version of the DHP-18. The original measurement model was evaluated with confirmatory factor analysis (CFA). Reliability was assessed through internal consistency, using Cronbach's alpha, and test–retest reproducibility, by administering the DHP-18 to a random subgroup of the participants two weeks later (n = 75) and calculating the intraclass correlation coefficient (ICC). Convergent validity was assessed by testing a priori hypotheses about the expected correlations with the SF-12v2, using Spearman's coefficient.
Firstly, the DHP-18 was linguistically and culturally adapted. Secondly, in the psychometric validation, we included 146 participants: 58.2% were female, the mean age was 56.8 years and 31% had diabetes complications. The CFA indicated a good fit to the original three-factor model (χ2 (132) = 162.738, p < 0.001; CFI = 0.990; TLI = 0.989; SRMR = 0.086 and RMSEA = 0.040). The BA dimension showed the lowest standardized factorial loads (λ) (ranging from 0.21 to 0.77), while λ ranged from 0.57 to 0.89 and from 0.46 to 0.73 for the PD and DE dimensions, respectively. Cronbach's alphas were 0.81, 0.63 and 0.74 and ICCs 0.70, 0.57 and 0.62 for PD, BA and DE, respectively. Regarding convergent validity, we observed weaker correlations than expected between DHP-18 dimensions and SF-12v2 dimensions (r > −0.40 in two of three hypotheses).
The original three factor model showed good fit to the data. Although reliability parameters were adequate for PD and DE dimensions, the BA presented lower internal consistency and future analysis should verify the applicability and cultural equivalence of some of the items of this dimension to Ecuador.
Diabetes mellitus (DM) is a high priority public health problem. It is the most frequent chronic disease in the world and, in 2014, affected 422 million people. According to the World Health Organization, people with type 2 Diabetes mellitus (T2DM) represent 90% of all diabetics. The prevalence of T2DM has increased more rapidly in low- and middle-income countries than in high-income countries, as is the case in Latin America and Ecuador [1]. In 2016, the prevalence of T2DM in Ecuador was estimated at 7.3% and has been rising significantly in all age groups [2,3,4,5]. According to data from the STEPS Survey of Ecuador in 2018, the prevalence of diabetes was 6.6% in both sexes (6.6% in men and 6.5% in women) of the Ecuadorian population between 18 and 69 years of age, and increased to 10.7% in the age group between 45 and 69 years in both sexes [6].
T2DM is the most common metabolic cause of mortality, due to its complications and associated pathologies [7]. It negatively affects quality of life [8], defined as a person's individual perception of the physical, emotional and social state [9], as a result of associated physical disabilities and mental health problems [10]. Clinical measures can provide a good estimate of disease control, but the ultimate goal of DM care is to maintain or improve the patient's quality of life [11].
There are generic instruments to measure quality of life that can be used both in the general population and in all disease groups [12, 13]. However, specific instruments have been developed to measure specific effects of diseases and are more responsive to changes. Disease-specific instruments can help determine which conditions best explain a patient's limitations in physical and / or mental function, and, therefore, are more useful in outcome research, health care cost studies, and clinical practice [14].
In Ecuador, advanced age, longer disease duration, hypertension and kidney disease are associated with a lower health related quality of life in patients with T2DM [15, 16]. In addition, a direct relationship was found between low socioeconomic status and the development of the disease [17].
Despite the rapid growth in the prevalence of T2DM and the existence of different instruments to measure quality of life in diabetic patients, none of them have been linguistically or psychometrically validated in Ecuador. Several questionnaires are available to assess quality of life in diabetic patients [18], such as the Diabetes Care Profile, which assesses factors important in a patient's adjustment to diabetes and its treatment in daily life and consists of 234 items; the Appraisal of Diabetes Scale, which assesses diabetes-related distress with 7 items; and the Diabetes Distress Scale, which measures diabetes-related emotional distress for use in research and clinical practice with 17 items, among others [19]. We chose the Diabetes Health Profile (DHP) because of its advantages over other diabetes-specific patient-reported outcome measures. It is a specific instrument to evaluate the psychological and behavioural impact of living with diabetes [20]. It generates a health profile that measures psychological distress, barriers to activity and disinhibited eating. Each answer is rated on a scale, and the scores by dimension are presented on a scale on which a higher DHP value is associated with a worse perception of quality of life. The short version of the DHP, with 18 items, has been used in different countries, demonstrating adequate metric properties [21,22,23].
The objectives of this study are to adapt the Diabetes Health Profile-18 (DHP‐18) both linguistically and culturally for use with patients with T2DM in Ecuador, and to evaluate its psychometric properties.
We included type 2 diabetic patients, who were at least 18 years of age, had been diagnosed for at least 12 months, resided in Quito with no intention of moving in the near future and were native Spanish speakers. Recruitment to the study used purposive sampling through a patient club for people with diabetes at the Chimbacalle Health Center and contacts from health promoters from several health centres in Quito (Número 1, Jardín del Valle, Cotocollao, Jaime Roldos Aguilera, Corazón de Jesus, Comité del Pueblo, San Antonio de Pichincha, Colinas del Norte, Pomasqui, Carcelén Bajo, El Condado, Mena del Hierro, La Bota, Pisulí, Puellaro, Chavezpamba, Cotocollao Alto and Calacalí).
In this setting, clubs for patients with diabetes are sometimes established in primary health care centres, either on the initiative of the health staff or of the patients themselves. The role of patient clubs is to motivate patients through the exchange of experiences among their members, in addition to the orientation, advice and guidance offered by health professionals on behaviour modification (physical activity/diets) [24, 25].
Our selection sought to include a group of patients that was heterogeneous in terms of sex, age and level of education. All participants gave their consent to participate in the study.
The interviews were carried out between February and July 2020. The DHP-18 validation process consisted of 2 phases.
Linguistic and cultural adaptation
Two Ecuadorian medical researchers reviewed the original version of the DHP-18 (English) and the existing translation (Spanish for the United States) to assess the cultural and linguistic relevance for its use in Ecuador. They suggested some changes in text, as well as the reasons for these changes and provided a new recommended translation. Changes were discussed with the other members of the team and a new adapted version of the questionnaire was proposed. Subsequently, 2 different researchers carried out interviews to assess the linguistic and cultural understanding of the adapted questionnaire with 8 people with T2DM of Ecuadorian nationality in the Chimbacalle Health Centre. Participants were asked to answer the questions and then, the necessary time was recorded, the answer options were discussed, the wording that was difficult to understand was commented, and alternative wording was suggested based on the participants' own words. A second adapted version was proposed. The interviews were recorded and transcribed verbatim for analysis. Finally, participants' responses were summarized in a pilot test report including recommended changes and suggestions. The report was then sent to the original authors of the questionnaire for verification and approval.
Psychometric validation
Firstly, we recruited 146 participants for the baseline test, in which they responded to the previously linguistically validated DHP-18 instrument for Ecuador and to a second tool (the SF-12v2, in its version for use in Ecuador) [26], in order to assess the correlation with generic quality of life as a test of construct validity. Two weeks later, we assessed the intra-observer reliability of the new tool in a random sample of 75 of the previously interviewed patients, in which only the DHP-18 was re-administered, along with the following question: "Compared to the last time you completed the questionnaire, how do you assess your condition today? (1) unchanged, (2) improved, (3) greatly improved, (4) impaired or (5) highly impaired".
The 8 interviews carried out during the linguistic and cultural adaptation were held face to face but, given the situation generated by the COVID-19 pandemic [27], the data for the psychometric validation were collected through individual telephone interviews. Responses were digitally recorded by the interviewer using the Kobo Toolbox (http://www.kobotoolbox.org/) free open-source software on electronic tablets. Informed consents were provided orally and were audio recorded.
DHP-18 questionnaire
Participants responded to the adapted version of the DHP-18. We used the Diabetes Health Profile (DHP)-18 because it is a shortened version of the DHP-1, a specific instrument for measuring the psychological and behavioural impact of type 1 diabetes. We decided to use the short version of the DHP because it can be used in people with both type 1 and type 2 diabetes aged 11 and older, because the instrument has demonstrated adequate metric properties, and because its completion time is approximately 5–6 min. Items are scored using a 4-point Likert-type scale ranging from 0 to 3. Items are provided with one of four sets of responses: (1) never, sometimes, generally, always; (2) never, sometimes, often, very often; (3) not at all, a little, a lot, very much; and (4) very likely, quite likely, unlikely, not at all likely. The raw subscale scores are transformed into a common score range from 0 to 100, with 0 representing no dysfunction.
The DHP-18 consists of three dimensions: psychological distress (includes questions like depressed from diabetes; more arguments or upsets at home than there would be if you did not have diabetes; losing your temper over unimportant things; etc.), barriers to activity (includes questions like food controls life; difficult staying out late; avoid going out when sugar is low; etc.) and disinhibited eating (includes questions like hard to say no to food you like; ease of stopping when you eat; wish there were not so many nice things to eat; etc.).
SF-12 v2
The SF-12 v2 is an instrument for measuring health-related quality of life [26], based on the SF-36. It includes twelve items, has an application time of approximately two minutes, and is used to evaluate the degree of well-being and functional capacity of people over 14 years of age. The response options form Likert-type scales (where the number of options varies from three to six points, depending on the item), which assess the intensity and/or frequency of people's health status. The score ranges from 0 to 100, where a higher score implies a better health-related quality of life. The SF-12v2 has demonstrated adequate validity and reliability in the United States and internationally, and the Spanish version has been used successfully in Latin America and with Spanish-speaking populations in the United States. Investigations that use these twelve items of the SF have verified that the instrument is a valid and reliable measure in adult populations in Latin American countries such as Colombia and Chile, and a translated version is available for Ecuador.
The SF12v2 includes questions related to health status and limitations in doing activities, problems with work or other regular daily activities due to physical health, due to emotional problems, pain, feelings, etc.
Sociodemographic and clinical variables
We collected sociodemographic and clinical variables (all self-reported by the participants): age, sex, marital status, ethnicity (mestizo or other minorities; mestizos are an ethnic group of mixed Spanish and indigenous heritage), educational level, monthly income, employment status, smoking status, alcohol intake, weight, height, duration of illness, use of medications, diabetes complications and comorbidities.
We report descriptive statistics as frequencies, means (standard deviation) or medians (interquartile range), as appropriate. The psychometric characteristics of the DHP-18 were assessed according to the consensus-based standards for the selection of health status measurement instruments (COSMIN) guidelines [28]. Missing values for the DHP-18 and SF-12v2 were substituted with the mean of the completed questions for those dimensions in which ≥ 50% of questions had been completed [29, 30].
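For illustration only (this is not the authors' code), the per-dimension mean substitution rule described above could be sketched in R as follows; the data frame of item scores and any variable names are hypothetical.

```r
# Hypothetical sketch: replace missing DHP-18 items with the mean of the
# completed items of the same dimension, only when at least half of the
# items in that dimension were answered.
impute_dimension <- function(items) {
  # items: data frame of raw item scores (0-3) for one DHP-18 dimension
  out <- t(apply(items, 1, function(row) {
    n_answered <- sum(!is.na(row))
    if (n_answered >= length(row) / 2 && n_answered < length(row)) {
      row[is.na(row)] <- mean(row, na.rm = TRUE)
    }
    row
  }))
  as.data.frame(out)
}
```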
We evaluated floor and ceiling effects by calculating the percentage of patients scoring either the lowest or highest possible dimensional scores. If more than 15% of respondents achieve the lowest or highest possible score, then floor or ceiling effects are present [31].
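A minimal R sketch of this check, assuming the 0–100 transformed dimension scores are stored in a hypothetical data frame `dhp`:

```r
# Percentage of respondents at the floor (0) and ceiling (100) of a dimension score
floor_ceiling <- function(score) {
  c(floor_pct   = 100 * mean(score == 0,   na.rm = TRUE),
    ceiling_pct = 100 * mean(score == 100, na.rm = TRUE))
}
# floor_ceiling(dhp$psychological_distress)  # flag the effect if either exceeds 15%
```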
Statistical analyses were performed using Stata Version 15 (StataCorp LP; College Station, TX) and R software, version R 4.0.0 (R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org) was used to perform the confirmatory factor analysis. The level of statistical significance was set at p < 0.05.
Structural validity
We performed a confirmatory factor analysis (CFA) because the factor structure had already been determined [32] and confirmed for other language translations [23]. In this case, we performed the CFA using Diagonally Weighted Least Squares (DWLS) [33,34,35] to test the hypothesis that the general construct of the DHP is composed of three individual and correlated factors: psychological distress (6 items), barriers to activity (7 items) and disinhibited eating (5 items). To evaluate model fit, we used the following criteria: values > 0.95 for the Tucker-Lewis index (TLI) or the comparative fit index (CFI), and a root mean square error of approximation (RMSEA) < 0.06 or a standardized root mean square residual (SRMR) < 0.08, are considered a good model fit [36, 37]. Factor loadings of 0.3 or greater were considered suitable.
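The paper states that the CFA was run in R but does not name a package; a minimal sketch using the lavaan package (an assumption on our part), with hypothetical item variable names dhp1–dhp18 and an illustrative item-to-dimension grouping, might look as follows.

```r
# Assumed sketch (lavaan is not named in the paper); the item grouping is illustrative.
library(lavaan)

model <- '
  BA =~ dhp1 + dhp3 + dhp4 + dhp5 + dhp6 + dhp7 + dhp12   # barriers to activity (7 items)
  PD =~ dhp2 + dhp14 + dhp15 + dhp16 + dhp17 + dhp18      # psychological distress (6 items)
  DE =~ dhp8 + dhp9 + dhp10 + dhp11 + dhp13               # disinhibited eating (5 items)
'

fit <- cfa(model, data = dhp,
           estimator = "DWLS",                # diagonally weighted least squares
           ordered   = paste0("dhp", 1:18))   # treat the 4-point items as ordinal

fitMeasures(fit, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
standardizedSolution(fit)                     # standardized loadings (lambda)
```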
To measure internal consistency reliability, we used Cronbach's alpha coefficient, where values > 0.7 are considered as acceptable [36]. The homogeneity of items was verified by the analysis of item-rest and inter-item correlations for the items constituting each dimension of the scale. The usual rule of thumb is that an item should correlate between 0.3 and 0.7 with the total score of the factor (excluding that item), using Pearson's coefficient. Additionally, average inter-item correlations for items in the same factor should correlate moderately, between 0.15 and 0.5, to ensure that they measure the same construct but not so closely as to be too redundant [38].
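These statistics could be obtained, for example, with the psych package (an assumption; the paper does not specify how they were computed). Here `pd_items` is a hypothetical data frame holding the six psychological distress items; the same call would be repeated for each dimension.

```r
# Assumed sketch using the psych package; repeat per DHP-18 dimension.
library(psych)

a <- alpha(pd_items)
a$total$raw_alpha     # Cronbach's alpha (values > 0.7 considered acceptable)
a$item.stats$r.drop   # item-rest correlations (rule of thumb: 0.3-0.7)
a$total$average_r     # average inter-item correlation (rule of thumb: 0.15-0.5)
```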
We measured test–retest reliability in patients reporting no change in the global assessment of change question; we considered that an individual's health was significantly better if they responded "much better" or "somewhat better" in the global assessment, or significantly worse if they responded "somewhat worse" or "much worse" [39]. We used the intraclass correlation coefficient (ICC) under a 2-way random effects model with absolute agreement [40], and its associated 95% confidence interval. We considered that a questionnaire exhibits substantial reliability when the ICC is between 0.40 and 0.75, and a value greater than 0.90 represents excellent reliability [36].
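One possible implementation of this ICC uses the irr package (an assumption; the paper does not name the package); `pd_test` and `pd_retest` are hypothetical vectors of dimension scores for the retested subgroup.

```r
# Assumed sketch: ICC(2,1), 2-way random effects, absolute agreement, single measure.
library(irr)

icc(cbind(baseline = pd_test, retest = pd_retest),
    model = "twoway",      # 2-way random effects
    type  = "agreement",   # absolute agreement
    unit  = "single")      # returns the ICC and its 95% confidence interval
```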
Measurement errors were determined by calculating the standard error of measurement (SEM) and the smallest detectable change (SDC). We calculated the SEM as the square root of the error variance derived from a two-way analysis of variance (ANOVA) with repeated measures [41]. The SDC for individuals (SDCindividual) and for groups (SDCgroup) were calculated with the following formulas [41]:
$$\text{SDC}_{\text{individual}} = 1.96 \times \sqrt{2} \times \text{SEM}$$

$$\text{SDC}_{\text{group}} = \text{SDC}_{\text{individual}} / \sqrt{n}$$

where n is the number of subjects in the sample.
We estimated the minimally important difference (MID) for each DHP-18 dimension using three distribution-based methods to estimate MID: 0.2 and 0.5 standard deviation (SD) and SEM estimations. Formulas:
$$\begin{aligned} 0.2\text{SD} &= 0.2 \times \text{SD}_{\text{baseline}} \\ 0.5\text{SD} &= 0.5 \times \text{SD}_{\text{baseline}} \\ 1\text{SEM} &= \text{SEM} \end{aligned}$$
We also estimated Cohen's d effect size (ES) of the change in DHP-18 dimensions for those reporting a small but important change and those reporting no change in the global assessment rating. Cohen's d was calculated with the following formula [42]:
$$\text{ES} = (\text{Score}_{\text{baseline}} - \text{Score}_{\text{retest}}) / \text{SD}_{\text{baseline}}$$

where SDbaseline is the standard deviation of the baseline score.
An effect size of 0.2 was considered small, 0.5 moderate and 0.8 large [43].
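For illustration, the quantities defined above (SEM, SDC, MID and ES) could be computed in base R roughly as follows, assuming a hypothetical long-format data frame `long` with columns `id`, `time` ("test"/"retest") and `score` for one DHP-18 dimension.

```r
# Hypothetical sketch of the measurement-error quantities for one dimension.
fit <- aov(score ~ factor(id) + factor(time), data = long)  # two-way ANOVA
SEM <- sqrt(deviance(fit) / df.residual(fit))               # sqrt of the error variance

SDC_individual <- 1.96 * sqrt(2) * SEM
SDC_group      <- SDC_individual / sqrt(length(unique(long$id)))

sd_base <- sd(long$score[long$time == "test"])
MID <- c("0.2SD" = 0.2 * sd_base, "0.5SD" = 0.5 * sd_base, "1SEM" = SEM)

# Cohen's d effect size of the change, relative to the baseline SD
ES <- (mean(long$score[long$time == "test"]) -
       mean(long$score[long$time == "retest"])) / sd_base
```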
We assessed construct validity of the DHP questionnaire using three approaches. Firstly, we assessed convergent validity using bivariate correlation analysis (Spearman's r, due to non-normal value distributions) between the DHP-18 and the SF-12v2. Before starting the analysis, we set up the following a priori hypotheses: (1) scores of the "psychological distress" dimension in the DHP-18 correlate negatively with scores of the "mental health" dimension in the SF-12v2; (2) scores of the "barriers to activity" dimension in the DHP-18 correlate negatively with the "physical dimension" in the SF-12v2; (3) scores of the "disinhibited eating" dimension in the DHP-18 correlate negatively with the "physical dimension" in the SF-12v2.
Secondly, we explored discriminant validity by comparing the correlation among the three dimensions of the DHP-18 scale.
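Both the convergent and the discriminant validity checks rest on Spearman correlations, which could be sketched in base R as follows (all column names are hypothetical).

```r
# Convergent validity: a DHP-18 dimension vs the corresponding SF-12v2 dimension
cor.test(dhp$psychological_distress, sf12$mental_health, method = "spearman")

# Discriminant validity: correlations among the three DHP-18 dimensions
cor(dhp[, c("psychological_distress", "barriers_to_activity", "disinhibited_eating")],
    method = "spearman", use = "pairwise.complete.obs")
```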
Thirdly, we evaluated known-group validity by comparing DHP-18 scores in patients according to sex, education level, obesity, and clinical characteristics such as duration of diabetes, presence of comorbidities and/or diabetes-related complications using a Student's t-test or ANOVA. We tested the following pre-defined hypotheses:
H1: Individuals with longer duration of illness would have higher DHP-18 scores (poorer quality of life) than those with shorter illness duration [44].
H2: Obese individuals would have higher DHP-18 values (poorer quality of life) than non-obese individuals [45].
H3: Women would report higher DHP-18 values (poorer quality of life) than men [46].
H4: Individuals with comorbidities would have higher DHP-18 values (poorer quality of life) [44].
H5: Individuals with a higher education level would have lower DHP-18 values (better quality of life) than those with a lower education level [47, 48].
H6: Individuals with diabetes-related complications would have higher DHP-18 values (poorer quality of life) than patients without complications [44].
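Known-group comparisons such as H1–H6 above could be tested in base R roughly as follows; the data frame `d` and its variable names are hypothetical, and this is a sketch rather than the authors' actual analysis.

```r
# Hypothetical sketch of the known-group comparisons.
t.test(psychological_distress ~ sex, data = d, var.equal = TRUE)        # H3: women vs men
t.test(disinhibited_eating ~ I(bmi >= 30), data = d, var.equal = TRUE)  # H2: obese vs non-obese
summary(aov(barriers_to_activity ~ education_level, data = d))          # H5: ANOVA across levels
```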
Linguistic and cultural adaptation
Two Ecuadorian medical researchers modified some linguistic expressions in the Spanish version for the United States in items 2, 3, 4, 5, 6, 10, 11, 12, 13, 14, 15 and 18 and in some answer options. In the linguistic and cultural review, six women and two men participated: a 28-year-old person, a 49-year-old person and a 52-year-old person, and the rest of the participants were over 70 years old. They made further changes to items 5, 6, 10 and 12 and proposed reformulation of some expressions. Most modifications were minor linguistical issues to use terms more commonly used in Ecuador, for example the expression "staying out" was changed by "going out of the house", the term "edgy" was changed by "nervous", and the term "lose your temper" was changed by "get angry easily". Other changes were made to improve comprehension by simplifying technical terms, for example "influenza" was changed to "flu", "depressed" was changed to "sad". One of the items was flagged as having potential difficulties because participants would be asked to reflect on their sugar levels, and there was a very low availability of glucometers in homes. The expression "on the low side" was changed to "having low or very low sugar levels". Similarly, in item 6, the word "monitor" was replaced by the expression "take the sugar test" to improve its understanding. The original author approved the new tool, linguistically and culturally adapted to the context of Quito, Ecuador.
We recruited 146 patients diagnosed with T2DM. Table 1 describes the characteristics of the study population. The mean age of the participants was 56.8 years, 58.2% were women and 80.1% were mestizo. The population studied had relatively low educational qualifications, with 56.8% having primary or no education, 27.6% were not working and 61.4% had incomes of less than $375 per month.
Table 1 Description of study population
Regarding diabetes medication, the majority were on oral antidiabetic therapy (66.2%), 11.7% of patients were treated with insulin, 2.1% with only diet and the rest (20%) were on combined therapy (oral + insulin). We found that 37.5% were overweight and 29.5% were obese.
Of the 146 respondents, 135 (92.5%) answered all 18 items of the DHP-18; 9 (6.2%) omitted one item and 2 (1.3%) omitted two or more items. The missing values were in items 4, 5, 7, 12, 13 and 14. One item (question 14) of the DHP-18 version showed unbalanced responses, with 75% of respondents reporting "never".
Seventy-five (51.4%) participants were retested with the DHP-18. There were no differences in socio-demographic or clinical characteristics between participants who were retested and those who were not (Table 1). In the DHP-18 retest, there were two missing values in item 4 and two in item 14: two participants did not answer one item and one participant did not answer two items.
The CFA, with values of χ2 (132) = 162.738, p < 0.001; CFI = 0.990; TLI = 0.989; SRMR = 0.086 and RMSEA = 0.040, indicated a good fit to the data, except for the SRMR. The standardized factorial loads (λ) of each item on its respective factor were all statistically significant (p < 0.001) and ranged from 0.57 to 0.89, from 0.21 to 0.77 and from 0.46 to 0.73 for psychological distress, barriers to activity and disinhibited eating, respectively. The covariance between the three latent variables ranged from 0.54 to 0.90, with psychological distress and disinhibited eating presenting the highest covariance. Using a one-factor model, two items (questions 1 and 3) showed a λ value below 0.3 (Fig. 1). When we repeated the analysis excluding these 2 items, we observed a significant improvement in all the indicators, including the SRMR, which was the only one that showed a value slightly higher than recommended (CFI = 0.997; TLI = 0.996; RMSEA = 0.027 (90% confidence interval: 0.000–0.052); SRMR = 0.078).
Fig. 1 Confirmatory factor analysis performed based on the polychoric correlation matrix, using diagonally weighted least squares (DWLS). CFI = 0.990; TLI = 0.989; RMSEA = 0.040 (90% confidence interval: 0.011–0.059); SRMR = 0.086
Overall Cronbach's alpha was 0.77 and dimensional alphas were 0.81, 0.63 and 0.74 for psychological distress, barriers to activity and disinhibited eating, respectively. The three dimensions were in a suitable range (0.15–0.50) for average interitem correlation values which ranged from 0.38 to 0.47, from 0.17 to 0.26 and from 0.32 to 0.41 for psychological distress, barriers to activity and disinhibited eating, respectively. Item-rest correlation values ranged from 0.39 to 0.61 for disinhibited eating while values ranged from 0.43 to 0.72 for psychological distress dimension, where item 17 showed the highest value (0.72) and ranged from 0.07 to 0.47 for barriers to activity, where item 1 showed the lowest value (0.07).
When we repeated the analysis excluding question 1 which had an item-rest correlation value below 0.30 (value: 0.07) and a λ value < 0.3 in the barriers to activity dimension, the dimensional and overall Cronbach's alpha changed to 0.67 and 0.76, respectively. When question 17 (item-rest correlation value slightly higher than 0.7) was excluded from psychological distress dimension, the dimensional and overall Cronbach's alpha changed to 0.75 and 0.76, respectively.
ICC values for a total of 75 retested participants (Table 2) were 0.70 (95%CI: 0.57, 0.80), 0.67 (95%CI 0.56–0.77), 0.73 (95%CI 0.64–0.81) for psychological distress, barriers to activity and disinhibited eating, respectively.
Table 2 Test–retest reliability of the Diabetes Health Profile-18 subdimensions overall, and considering a reported change in global assessment of health
Among the retest participants, thirty-nine (52%) reported that their condition was unchanged from baseline to retest (ICC values in Table 2) and 36 (48%) reported that their condition had changed from baseline to retest. Fifteen (20%) participants reported that their condition had improved and 21 (28%) reported that their condition had deteriorated. ICC values for participants reporting that their condition stayed the same were 0.69 (95%CI 0.48–0.83), 0.66 (95%CI 0.50–0.79), 0.66 (95%CI 0.50–0.80) for psychological distress, barriers to activity and disinhibited eating, respectively.
Our assessment of convergent validity showed an inverse relationship between DHP-18 dimensions and SF12v2 dimensions and the results verified two of three a priori hypotheses with correlation values between 0.4 and 0.7 (Table 3).
Table 3 Convergent validity: correlation (Spearman's r) between the Diabetes Health Profile-18 dimensions and the SF12v2 dimensions
For discriminant validity, correlations between the DHP-18 dimensions were 0.4 or more, ranging from 0.40 to 0.74. The highest correlation was between psychological distress and disinhibited eating (r = 0.74), followed by the correlation between psychological distress and barriers to activity (r = 0.45) and the lowest was the correlation between barriers to activity and disinhibited eating (r = 0.40).
With regard to known-group validity, our results showed the expected tendency in three (H2, H3 and H6) of the 6 initial hypotheses. Compared to individuals with BMI < 30 kg/m2, those with BMI ≥ 30 kg/m2 (H2) showed higher values for each dimension, although only those associated with disinhibited eating were statistically significant. For H3 and H6, the expected tendency of scores for each dimension was obtained, with higher scores in women than in men and in patients with diabetes-related complications than in those without, but there were no statistically significant differences (Table 4).
Table 4 Known group validity of the DHP-18
Regarding hypotheses H1, H4 and H5, score patterns were different from those expected. Individuals with longer duration of illness (H1) had lower scores reflecting improved quality of life, although the differences were not statistically significant. Similarly, regarding educational level (H5), scores did not show a clear tendency, with the exception of lower scores for barriers to activity dimension with increasing educational level. Finally, there were no differences by presence of comorbidities (H4) but we found differences between patients with or without specific comorbidities such as hypertension and depression. Having hypertension was associated with better evaluation of two dimensions (psychological distress and disinhibited eating), while depression was associated with worse evaluation of two dimensions (barriers to activity and disinhibited eating) (Table 4).
In the present study, we linguistically and culturally adapted the DHP-18 and investigated its psychometric properties in people resident in Quito, Ecuador. Satisfactory psychometric properties were observed in a substantial number of aspects. The factor structure was adequate, but two items, belonging to the barriers to activity dimension, loaded below the recommended value. Although reliability parameters were adequate for the psychological distress and disinhibited eating dimensions, barriers to activity presented lower internal consistency and future analysis should verify the applicability and cultural equivalence of some of the items of this dimension to Ecuador.
Except for the barriers to activity dimension, good internal consistency was found. The internal consistency of the barriers to activity dimension contrasts with another study [21] and may be related to the different populations of patients investigated [21, 22, 32], since some studies included people with both type 1 and type 2 diabetes. Based on a more detailed analysis of the total item statistics, we observed that the elimination of items 1 and 17, with the lowest and highest item-rest correlation values, did not produce significant increases in overall and dimensional consistency, as observed in another study [23].
The test–retest reliability showed substantial reliability values in accordance with the recommendations of the literature [36]. Moreover, the sample size used is within the ranges recommended for psychometric validation studies, which could be considered a strength of our study [49].
Regarding convergent validity, a strong correlation was shown between the psychological distress dimension of the DHP-18 and the mental health dimension of the SF-12v2, and between the barriers to activity dimension of the DHP-18 and the role physical dimension of the SF-12v2, corroborating two of the predefined hypotheses. Similar results have been observed in previous studies [21, 23]. However, the disinhibited eating dimension was related to the role emotional dimension and not to the role physical dimension, as had been hypothesized based on other studies [21].
Discriminant validity showed adequate correlations between the 3 dimensions, higher than those indicated in the literature. These results differ from other studies that showed an overall low correlation between the dimensions of DHP-18 [20, 23].
Regarding known-group validity, our results showed the expected trend in three of the 6 initial hypotheses. With respect to the hypothesis related to educational level, lower scores were obtained for the barriers to activity dimension with increasing educational level. For comorbidities, there were also significant differences for specific conditions such as hypertension and depression. These results are corroborated by other studies, in which the presence of hypertension resulted in a significantly lower score in the disinhibited eating dimension [50]. In the case of the duration of the disease, and despite the fact that the differences were not significant, we did see that people with a disease of longer duration reported a better quality of life. One possible explanation is that the longer the disease lasts, the more likely the patient is to have adapted to the care requirements, including behaviour modification [51, 52].
The CFA indicated an adequate fit to the original three-factor model, with the exception of the SRMR indicator. The barriers to activity dimension showed the lowest standardized factorial loads, while those of the psychological distress and disinhibited eating dimensions were adequate. Using a one-factor model, two of the 18 items, both from the barriers to activity dimension, were loaded below the recommended value of 0.3; item 3 loaded with a value of 0.3 and item 1 with a value less than 0.3. Regarding item 1, the problem may be due to a lack of understanding, perhaps not at the linguistic level but at the conceptual level, since in the linguistic comprehension and cultural adaptation interviews it was observed that the question was simple, short and easily understood, but sometimes people were unsure whether food "controlling one's life" referred to the need to observe and take care of one's diet, or whether it referred to having one's life structured and organized around food, such as the timing of meals, the physical exercise depending on the amount of food, etc. Perhaps a clarification with examples could be added to overcome this issue in future uses of the questionnaire in Ecuador. Item 3, about being "tied to meal times", was also flagged as potentially problematic in the initial round of reviews by the 2 medical researchers who evaluated linguistic understanding and cultural adaptation. It should be added that we carried out a new confirmatory factor analysis eliminating these 2 items and observed a significant improvement in all the indicators, including the SRMR, which was the only one that showed a value slightly higher than recommended.
This study has some limitations in addition to the factors discussed above. Although the DHP-18 can be used with people with either type 1 or type 2 diabetes, the psychometric testing was not performed in type 1 diabetes, limiting the applicability of the results to patients with type 2 diabetes. In addition, an important factor to take into account is the context in which the study was carried out: in a pandemic situation it is difficult to assess the possible changes produced, and there may be factors external to the disease that can influence the results, especially those of repeatability and concordance, owing to rapid changes in context that can affect the quality of life of patients. Another limitation is that SF-12v2 summary scores for physical and mental health can be misleading if proprietary scores are used, as a low physical health summary score tends to inflate the mental health summary score and vice versa; this must be taken into account when interpreting the results [53, 54]. Despite this, the results are significant and similar to those obtained in other studies.
The strength of this study lies in the fact that this is the first adaptation and validation of a questionnaire to assess the quality of life in diabetic patients in Ecuador. Hence, it provides a practical tool to evaluate aspects such as self-control of food intake, limitations, barriers and anxiety related to daily activities, feelings, emotions, mood and irritability in people with diabetes.
The study adds to the evidence for the DHP-18, showing that it is a short, acceptable, valid and reliable instrument to measure the impact of living with diabetes from the patient perspective. However, future analyses should verify the applicability and cultural equivalence of some items of the barriers to activity dimension in Ecuador. Using the DHP-18 enables clinicians to conduct appropriate educational or therapeutic interventions to alleviate or address dysfunctional life outcomes for people living with diabetes.
The datasets analysed during the current study are available from the corresponding author on reasonable request.
PD: Psychological distress
BA: Barriers to activity
DE: Disinhibited eating
T2DM: Type 2 diabetes mellitus
DHP: Diabetes Health Profile
DHP-18: Diabetes Health Profile-18
CFA: Confirmatory factor analysis
DWLS: Diagonally Weighted Least Squares
TLI: Tucker-Lewis index
CFI: Comparative fit index
RMSEA: Root mean square error of approximation
SRMR: Standardized root mean square residual
ICC: Intraclass correlation coefficient
SEM: Standard error of measurement
SDC: Smallest detectable change
MID: Minimally important difference
World Health Organization. Diabetes. [cited 2021 Jan 21]. https://www.who.int/es/news-room/fact-sheets/detail/diabetes
Orces CH, Lorenzo C. Prevalence of prediabetes and diabetes among older adults in Ecuador: analysis of the SABE survey. Diabetes Metab Syndr. 2018;12(2):147–53.
Cordero LCA, C MAV, Cordero G, Álvarez R, Añez R, Rojas J, et al. Prevalencia de la diabetes mellitus tipo 2 y sus factores de riesgo en individuos adultos de la ciudad de Cuenca-Ecuador. Av En Biomed. 2017;6(1):10–21.
Pan American Health Organization / World Health Organization. La diabetes, un problema prioritario de salud pública en el Ecuador y la región de las Américas. 2014 [cited 2021 Jan 21]. https://www.paho.org/ecu/index.php?option=com_content&view=article&id=1400:la-diabetes-un-problema-prioritario-de-salud-publica-en-el-ecuador-y-la-regionde-las-americas&Itemid=360
Organización Mundial de la Salud – Perfiles de los países para la diabetes. 2016 [cited 2021 Jan 21]. https://www.who.int/diabetes/countryprofiles/ecu_es.pdf?ua=1
MSP, INEC, OPS/OMS. ENCUESTA STEPS ECUADOR. 2018 [cited 2021 Jan 21]. https://www.salud.gob.ec/wpcontent/uploads/2020/10/INFORMESTEPS.pdf
Glovaci D, Fan W, Wong ND. Epidemiology of diabetes mellitus and cardiovascular disease. Curr Cardiol Rep. 2019;21(4):21.
Solli O, Stavem K, Kristiansen IS. Health-related quality of life in diabetes: the associations of complications with EQ-5D scores. Health Qual Life Outcomes. 2010;8:18.
Rubin RR, Peyrot M. Quality of life and diabetes. Diabetes Metab Res Rev. 1999;15(3):205–18.
Cannon A, Handelsman Y, Heile M, Shannon M. Burden of illness in type 2 diabetes mellitus. J Manag Care Spec Pharm. 2018;24(9-a Suppl):S5-13.
Saleh F, Ara F, Mumu SJ, Hafez MA. Assessment of health-related quality of life of Bangladeshi patients with type 2 diabetes using the EQ-5D: a cross-sectional study. BMC Res Notes. 2015;8:497.
Zhou T, Guan H, Yao J, Xiong X, Ma A. The quality of life in Chinese population with chronic non-communicable diseases according to EQ-5D-3L: a systematic review. Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 2018;27(11):2799–814.
Pequeno NPF, Cabral NLDA, Marchioni DM, Lima SCVC, Lyra CDO. Quality of life assessment instruments for adults: a systematic review of population-based studies. Health Qual Life Outcomes. 2020;18(1):208.
Ware JE, Gandek B, Guyer R, Deng N. Standardizing disease-specific quality of life measures across multiple chronic conditions: development and initial evaluation of the QOL Disease Impact Scale (QDIS®). Health Qual Life Outcomes. 2016;14:84.
Romero-Naranjo F, Espinosa-Uquillas C, Gordillo-Altamirano F, Barrera-Guarderas F. Which factors may reduce the health-related quality of life of Ecuadorian patients with diabetes? Proc R Health Sci J. 2019;38(2):102–8.
Lara M, José M. Diabetes mellitus y sus factores de riesgo en el Ecuador. agosto de 2016 [cited 2021 Jan 21]. http://repositorio.usfq.edu.ec/handle/23000/5697
Jácome Á, Francisco J. Factores de riesgo socioeconómicos en la prevalencia de diabetes tipo II: evidencia en el Ecuador ENSANUT-ECU 2011–2013. 2018 [cited 2021 Jan 30]. http://repositorio.puce.edu.ec:80/xmlui/handle/22000/15622
Pereira EV, Tonin FS, Carneiro J, Pontarolo R, Wiens A. Evaluation of the application of the Diabetes Quality of Life Questionnaire in patients with diabetes mellitus. Arch Endocrinol Metab. 2020;64(1):59–65.
Palamenghi L, Carlucci MM, Graffigna G. Measuring the quality of life in diabetic patients: a scoping review. J Diabetes Res. 2020;2020:5419298.
Meadows K, Steen N, McColl E, Eccles M, Shiels C, Hewison J, et al. The Diabetes Health Profile (DHP): a new instrument for assessing the psychosocial profile of insulin requiring patients–development and psychometric evaluation. Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 1996;5(2):242–54.
Santos Cruz R, Leitão CE, Lopes FP. Determinantes do estado de saúde dos diabéticos. Rev Port Endocrinol Diabetes E Metab. 2016;11(2):188–96.
Tan ML, Khoo EY, Griva K, Lee YS, Amir M, Zuniga YL, et al. Diabetes health profile-18 is reliable, valid and sensitive in Singapore. Ann Acad Med. 2016;45(9):383–93.
Jelsness-Jørgensen L-P, Jensen Ø, Gibbs C, Bekkhus Moe R, Hofsø D, Bernklev T. Psychometric testing of the Norwegian Diabetes Health Profile (DHP-18) in patients with type 1 diabetes. BMJ Open Diabetes Res Care. 2018;6(1):e000541.
García-Carrera C, Gutierrez-Fuentes E, Borroel-Saligan L, Oramas P, Vidal-López M. Club de diabéticos y su impacto en la disminución de glicemia del diabético tipo 2. Salud en Tabasco. 2002;8(1):16–9.
Olvera JP. La influencia del grupo de autoayuda de pacientes diabéticos en el control de su enfermedad. Horiz Sanit. 2009;8(1):44–58.
Gandek B, Ware JE, Aaronson NK, Apolone G, Bjorner JB, Brazier JE, et al. Cross-validation of item selection and scoring for the SF-12 Health Survey in nine countries: results from the IQOLA Project. International Quality of Life Assessment. J Clin Epidemiol. 1998;51(11):1171–8.
Omary MB, Eswaraka J, Kimball SD, Moghe PV, Panettieri RA, Scotto KW. The COVID-19 pandemic and research shutdown: staying safe and productive. J Clin Investig. 2020;130(6):2745–8.
Mokkink LB, Prinsen CAC, Bouter LM, de Vet HCW, Terwee CB. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) and how to select an outcome measurement instrument. Braz J Phys Ther. 2016;20(2):105–13.
Bell ML, Fairclough DL, Fiero MH, Butow PN. Handling missing items in the Hospital Anxiety and Depression Scale (HADS): a simulation study. BMC Res Notes. 2016;9(1):479.
Meadows K. Diabetes health profile user manual—sample pages. 2015.
McHorney CA, Tarlov AR. Individual-patient monitoring in clinical practice: are available health status surveys adequate? Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 1995;4(4):293–307.
Meadows KA, Abrams C, Sandbaek A. Adaptation of the Diabetes Health Profile (DHP-1) for use with patients with Type 2 diabetes mellitus: psychometric evaluation and cross-cultural comparison. Diabet Med J Br Diabet Assoc. 2000;17(8):572–80.
Mîndrilă D. Maximum likelihood (ML) and diagonally weighted least squares (DWLS) estimation procedures: a comparison of estimation bias with ordinal and multivariate non-normal data. Int J Digit Soc. 2010;1(1):60–6.
Flora DB, Curran PJ. An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychol Methods. 2004;9(4):466–91.
Forero CG, Maydeu-Olivares A, Gallardo-Pujol D. Factor analysis with ordinal indicators: a Monte Carlo study comparing DWLS and ULS estimation. Struct Equ Model Multidiscip J. 2009;16(4):625–41.
Prinsen CAC, Vohra S, Rose MR, Boers M, Tugwell P, Clarke M, et al. How to select outcome measurement instruments for outcomes included in a "Core Outcome Set"—a practical guideline. Trials. 2016;17(1):449.
Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.
Clark LA, Watson D. Constructing validity: basic issues in objective scale development. Psychol Assess. 1995;7(3):309–19.
Mulhern B, Meadows K. Investigating the minimally important difference of the Diabetes Health Profile (DHP-18) and the EQ-5D and SF-6D in a UK diabetes mellitus population. Health (NY). 2013;05(06):1045–54.
Terwee CB, Bot SDM, de Boer MR, van der Windt DAWM, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60(1):34–42.
Bland JM, Altman DG. Measurement error. BMJ. 1996;312(7047):1654.
Kazis LE, Anderson JJ, Meenan RF. Effect sizes for interpreting changes in health status. Med Care. 1989;27(3 Suppl):S178-189.
Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale: L. Erlbaum Associates; 1988.
Jing X, Chen J, Dong Y, Han D, Zhao H, Wang X, et al. Related factors of quality of life of type 2 diabetes patients: a systematic review and meta-analysis. Health Qual Life Outcomes. 2018;16(1):189.
Rozjabek H, Fastenau J, LaPrade A, Sternbach N. Adult obesity and health-related quality of life, patient activation, work productivity, and weight loss behaviors in the United States. Diabetes Metab Syndr Obes Targets Ther. 2020;13:2049–55.
Alshayban D, Joseph R. Health-related quality of life among patients with type 2 diabetes mellitus in Eastern Province, Saudi Arabia: a cross-sectional study. PLoS ONE. 2020;15(1):e0227573.
Norris SL. Health-related quality of life among adults with diabetes. Curr Diab Rep. 2005;5(2):124–30.
Glasgow RE, Ruggiero L, Eakin EG, Dryfoos J, Chobanian L. Quality of life and associated characteristics in a large national sample of adults with diabetes. Diabetes Care. 1997;20(4):562–7.
Polit DF. Getting serious about test–retest reliability: a critique of retest research and some recommendations. Qual Life Res. 2014;23(6):1713–20.
Goddijn P, Bilo H, Meadows K, Groenier K, Feskens E, Meyboom-de JB. The validity and reliability of the Diabetes Health Profile (DHP) in NIDDM patients referred for insulin therapy. Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 1996;5(4):433–42.
Lambrinou E, Hansen TB, Beulens JW. Lifestyle factors, self-management and patient empowerment in diabetes care. Eur J Prev Cardiol. 2019;26(2_suppl):55–63.
Chrvala CA, Sherr D, Lipman RD. Diabetes self-management education for adults with type 2 diabetes mellitus: a systematic review of the effect on glycemic control. Patient Educ Couns. 2016;99(6):926–43.
Farivar SS, Cunningham WE, Hays RD. Correlated physical and mental health summary scores for the SF-36 and SF-12 Health Survey, V.I. Health Qual Life Outcomes. 2007;5:54.
Fleishman JA, Selim AJ, Kazis LE. Deriving SF-12v2 physical and mental health summary scores: a comparison of different scoring algorithms. Qual Life Res Int J Qual Life Asp Treat Care Rehabil. 2010;19(2):231–41.
The authors wish to thank María Hernández, Blanca Chamorro, Sofía Mosquera, and Cristian Cuhunay for interviewing the participants. To Jimmy Martin Delgado for his help with the translation of the questionnaire. To Amparito Carrera, the president of the Chimbacalle diabetic patients club. To the health promoters of the participating health centers. And a special thanks to all the study participants.
This research was funded by a H2020 European Research Council 2018 Starting Grant, Grant Number 804761—CEAD.
Department of Public Health, Universidad Miguel Hernández, Sant Joan d'Alacant, Alicante, Spain
Ikram Benazizi, Mari Carmen Bernal-Soriano , Andrés Peralta-Chiriboga, Blanca Lumbreras & Lucy Anne Parker
CIBER de Epidemiología y Salud Pública (CIBERESP), Madrid, Spain
Mari Carmen Bernal-Soriano , Yolanda Pardo , Aida Ribera, Montserrat Ferrer, Jordi Alonso, Blanca Lumbreras & Lucy Anne Parker
Health Services Research Group, IMIM (Hospital del Mar Medical Research Institute), Barcelona, Spain
Yolanda Pardo , Montserrat Ferrer & Jordi Alonso
Universitat Autònoma de Barcelona, Barcelona, Spain
Yolanda Pardo & Montserrat Ferrer
Cardiovascular Epidemiology and Research Unit, University Hospital and Research Institute Vall d'Hebron (VHIR), Barcelona, Spain
Aida Ribera
Instituto de Salud Pública, Pontificia Universidad Católica del Ecuador, Quito, Ecuador
Andrés Peralta-Chiriboga
Unidad Docente de Medicina Preventiva y Salud Pública de Cantabria, Consejería de Sanidad de Cantabria, Santander, Spain
Alfonso Alonso-Jaquete
Department of Experimental and Health Sciences, Pompeu Fabra University, Barcelona, Spain
Jordi Alonso
LAP and IB designed the study and all authors reviewed and approved the proposed methodology. IB, AP-C, AA-J and MCB-S carried out linguistical and cultural adaptation of the questionnaire. IB was responsible for data acquisition in the psychometric validation. IB, MCB-S, YP and AR analysed and interpreted the data. IB, MCB-S and LAP drafted the manuscript, which was critically reviewed and discussed with all co-authors. All authors read and approved the final manuscript.
Correspondence to Ikram Benazizi.
All participants provided informed consent. The study protocol was approved by the ethical board at the Universidad Central del Ecuador (UCE, reference 00022-UMHE-E-2019).
All authors fulfil the criteria for authorship and have read and approved the final version of this manuscript.
Benazizi, I., Bernal-Soriano, M.C., Pardo, Y. et al. Adaptation and psychometric validation of Diabetes Health Profile (DHP-18) in patients with type 2 diabetes in Quito, Ecuador: a cross-sectional study. Health Qual Life Outcomes 19, 189 (2021). https://doi.org/10.1186/s12955-021-01818-5
Linear algebra
Linear algebra is the branch of mathematics concerning linear equations such as:
$a_{1}x_{1}+\cdots +a_{n}x_{n}=b$,
linear maps such as:
$(x_{1},\ldots ,x_{n})\mapsto a_{1}x_{1}+\cdots +a_{n}x_{n}$,
and their representations in vector spaces and through matrices.[1][2][3]
Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions.
Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point.
History
See also: Determinant § History, and Gaussian elimination § History
The procedure (using counting rods) for solving simultaneous linear equations now called Gaussian elimination appears in the ancient Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations.[4]
Systems of linear equations arose in Europe with the introduction in 1637 by René Descartes of coordinates in geometry. In fact, in this new geometry, now called Cartesian geometry, lines and planes are represented by linear equations, and computing their intersections amounts to solving systems of linear equations.
The first systematic methods for solving linear systems used determinants and were first considered by Leibniz in 1693. In 1750, Gabriel Cramer used them for giving explicit solutions of linear systems, now called Cramer's rule. Later, Gauss further described the method of elimination, which was initially listed as an advancement in geodesy.[5]
In 1844 Hermann Grassmann published his "Theory of Extension" which included foundational new topics of what is today called linear algebra. In 1848, James Joseph Sylvester introduced the term matrix, which is Latin for womb.
Linear algebra grew with ideas noted in the complex plane. For instance, two numbers w and z in $\mathbb {C} $ have a difference w – z, and the line segments wz and 0(w − z) are of the same length and direction. The segments are equipollent. The four-dimensional system $\mathbb {H} $ of quaternions was discovered by W.R. Hamilton in 1843.[6] The term vector was introduced as v = xi + yj + zk representing a point in space. The quaternion difference p – q also produces a segment equipollent to pq. Other hypercomplex number systems also used the idea of a linear space with a basis.
Arthur Cayley introduced matrix multiplication and the inverse matrix in 1856, making possible the general linear group. The mechanism of group representation became available for describing complex and hypercomplex numbers. Crucially, Cayley used a single letter to denote a matrix, thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants".[5]
Benjamin Peirce published his Linear Associative Algebra (1872), and his son Charles Sanders Peirce extended the work later.[7]
The telegraph required an explanatory system, and the 1873 publication of A Treatise on Electricity and Magnetism instituted a field theory of forces and required differential geometry for expression. Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations.
The first modern and more precise definition of a vector space was introduced by Peano in 1888;[5] by 1900, a theory of linear transformations of finite-dimensional vector spaces had emerged. Linear algebra took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. The development of computers led to increased research in efficient algorithms for Gaussian elimination and matrix decompositions, and linear algebra became an essential tool for modelling and simulations.[5]
Vector spaces
Main article: Vector space
Until the 19th century, linear algebra was introduced through systems of linear equations and matrices. In modern mathematics, the presentation through vector spaces is generally preferred, since it is more synthetic, more general (not limited to the finite-dimensional case), and conceptually simpler, although more abstract.
A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The axioms that addition and scalar multiplication must satisfy are the following. (In the list below, u, v and w are arbitrary elements of V, and a and b are arbitrary scalars in the field F.)[8]
• Associativity of addition: u + (v + w) = (u + v) + w
• Commutativity of addition: u + v = v + u
• Identity element of addition: there exists an element 0 in V, called the zero vector (or simply zero), such that v + 0 = v for all v in V.
• Inverse elements of addition: for every v in V, there exists an element −v in V, called the additive inverse of v, such that v + (−v) = 0.
• Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
• Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
• Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v [note 1]
• Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity of F.
The first four axioms mean that V is an abelian group under addition.
An element of a specific vector space may have various nature; for example, it could be a sequence, a function, a polynomial or a matrix. Linear algebra is concerned with those properties of such objects that are common to all vector spaces.
Linear maps
Main article: Linear map
Linear maps are mappings between vector spaces that preserve the vector-space structure. Given two vector spaces V and W over a field F, a linear map (also called, in some contexts, linear transformation or linear mapping) is a map
$T:V\to W$
that is compatible with addition and scalar multiplication, that is
$T(\mathbf {u} +\mathbf {v} )=T(\mathbf {u} )+T(\mathbf {v} ),\quad T(a\mathbf {v} )=aT(\mathbf {v} )$
for any vectors u,v in V and scalar a in F.
This implies that for any vectors u, v in V and scalars a, b in F, one has
$T(a\mathbf {u} +b\mathbf {v} )=T(a\mathbf {u} )+T(b\mathbf {v} )=aT(\mathbf {u} )+bT(\mathbf {v} )$
When V = W are the same vector space, a linear map T : V → V is also known as a linear operator on V.
A bijective linear map between two vector spaces (that is, every vector from the second space is associated with exactly one in the first) is an isomorphism. Because an isomorphism preserves linear structure, two isomorphic vector spaces are "essentially the same" from the linear algebra point of view, in the sense that they cannot be distinguished by using vector space properties. An essential question in linear algebra is testing whether a linear map is an isomorphism or not, and, if it is not an isomorphism, finding its range (or image) and the set of elements that are mapped to the zero vector, called the kernel of the map. All these questions can be solved by using Gaussian elimination or some variant of this algorithm.
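These computations can be illustrated numerically: the rank of a matrix representing a linear map gives the dimension of its image, and a basis of its kernel can be read off from the null space. The sketch below is only an illustration; it assumes NumPy and SciPy are available and uses an arbitrary singular matrix as example.

```python
import numpy as np
from scipy.linalg import null_space

# The linear map T : R^3 -> R^3 represented by this (singular) example matrix
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)    # dimension of the image (range) of T
kernel = null_space(A)             # columns form an orthonormal basis of the kernel

print(rank)                        # 2, so T is not surjective on R^3
print(kernel.shape[1])             # 1, so the kernel is nontrivial and T is not injective
# T would be an isomorphism exactly when the rank is 3 and the kernel is {0}.
```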
Subspaces, span, and basis
Main articles: Linear subspace, Linear span, and Basis (linear algebra)
The study of those subsets of vector spaces that are in themselves vector spaces under the induced operations is fundamental, similarly as for many mathematical structures. These subsets are called linear subspaces. More precisely, a linear subspace of a vector space V over a field F is a subset W of V such that u + v and au are in W, for every u, v in W, and every a in F. (These conditions suffice for implying that W is a vector space.)
For example, given a linear map T : V → W, the image T(V) of V, and the inverse image T−1(0) of 0 (called kernel or null space), are linear subspaces of W and V, respectively.
Another important way of forming a subspace is to consider linear combinations of a set S of vectors: the set of all sums
$a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k},$
where v1, v2, ..., vk are in S, and a1, a2, ..., ak are in F form a linear subspace called the span of S. The span of S is also the intersection of all linear subspaces containing S. In other words, it is the smallest (for the inclusion relation) linear subspace containing S.
A set of vectors is linearly independent if none is in the span of the others. Equivalently, a set S of vectors is linearly independent if the only way to express the zero vector as a linear combination of elements of S is to take zero for every coefficient ai.
A set of vectors that spans a vector space is called a spanning set or generating set. If a spanning set S is linearly dependent (that is, not linearly independent), then some element w of S is in the span of the other elements of S, and the span would remain the same if one removed w from S. One may continue to remove elements of S until obtaining a linearly independent spanning set. Such a linearly independent set that spans a vector space V is called a basis of V. The importance of bases lies in the fact that they are simultaneously minimal generating sets and maximal independent sets. More precisely, if S is a linearly independent set, and T is a spanning set such that S ⊆ T, then there is a basis B such that S ⊆ B ⊆ T.
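The passage from a spanning set to a basis can be sketched numerically by keeping only the vectors that increase the rank; the example below assumes NumPy, and the input vectors are an arbitrary illustration.

```python
import numpy as np

def extract_basis(vectors):
    """Return a maximal linearly independent subset of the given vectors."""
    basis = []
    for v in vectors:
        candidate = np.column_stack(basis + [v])
        # keep v only if it is not already in the span of the vectors chosen so far
        if np.linalg.matrix_rank(candidate) > len(basis):
            basis.append(v)
    return basis

vectors = [np.array([1.0, 0.0, 1.0]),
           np.array([2.0, 0.0, 2.0]),   # dependent: twice the first vector
           np.array([0.0, 1.0, 0.0]),
           np.array([1.0, 1.0, 1.0])]   # dependent: sum of the first and third

print(len(extract_basis(vectors)))      # 2: the span of these four vectors has dimension 2
```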
Any two bases of a vector space V have the same cardinality, which is called the dimension of V; this is the dimension theorem for vector spaces. Moreover, two vector spaces over the same field F are isomorphic if and only if they have the same dimension.[9]
If any basis of V (and therefore every basis) has a finite number of elements, V is a finite-dimensional vector space. If U is a subspace of V, then dim U ≤ dim V. In the case where V is finite-dimensional, the equality of the dimensions implies U = V.
If U1 and U2 are subspaces of V, then
$\dim(U_{1}+U_{2})=\dim U_{1}+\dim U_{2}-\dim(U_{1}\cap U_{2}),$
where U1 + U2 denotes the span of U1 ∪ U2.[10]
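This formula can be checked numerically when U1 and U2 are represented by matrices whose linearly independent columns span them; the sketch below assumes NumPy and SciPy, and uses the fact that, under this independence assumption, the solution space of Ax = By has the same dimension as U1 ∩ U2.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])  # U1 = span{e1, e2}
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # U2 = span{e2, e3}

dim_U1 = np.linalg.matrix_rank(A)
dim_U2 = np.linalg.matrix_rank(B)
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))        # dim(U1 + U2)
# Pairs (x, y) with A x = B y describe vectors lying in both subspaces.
dim_int = null_space(np.hstack([A, -B])).shape[1]         # dim(U1 ∩ U2)

print(dim_sum, dim_U1 + dim_U2 - dim_int)                 # 3 3: the two sides agree
```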
Matrices
Main article: Matrix (mathematics)
Matrices allow explicit manipulation of finite-dimensional vector spaces and linear maps. Their theory is thus an essential part of linear algebra.
Let V be a finite-dimensional vector space over a field F, and (v1, v2, ..., vm) be a basis of V (thus m is the dimension of V). By definition of a basis, the map
${\begin{aligned}(a_{1},\ldots ,a_{m})&\mapsto a_{1}\mathbf {v} _{1}+\cdots +a_{m}\mathbf {v} _{m}\\F^{m}&\to V\end{aligned}}$
is a bijection from Fm, the set of the sequences of m elements of F, onto V. This is an isomorphism of vector spaces, if Fm is equipped with its standard structure of vector space, where vector addition and scalar multiplication are done component by component.
This isomorphism allows representing a vector by its inverse image under this isomorphism, that is by the coordinate vector (a1, ..., am) or by the column matrix
${\begin{bmatrix}a_{1}\\\vdots \\a_{m}\end{bmatrix}}.$
If W is another finite dimensional vector space (possibly the same), with a basis (w1, ..., wn), a linear map f from W to V is well defined by its values on the basis elements, that is (f(w1), ..., f(wn)). Thus, f is well represented by the list of the corresponding column matrices. That is, if
$f(w_{j})=a_{1,j}v_{1}+\cdots +a_{m,j}v_{m},$
for j = 1, ..., n, then f is represented by the matrix
${\begin{bmatrix}a_{1,1}&\cdots &a_{1,n}\\\vdots &\ddots &\vdots \\a_{m,1}&\cdots &a_{m,n}\end{bmatrix}},$
with m rows and n columns.
Matrix multiplication is defined in such a way that the product of two matrices is the matrix of the composition of the corresponding linear maps, and the product of a matrix and a column matrix is the column matrix representing the result of applying the represented linear map to the represented vector. It follows that the theory of finite-dimensional vector spaces and the theory of matrices are two different languages for expressing exactly the same concepts.
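The compatibility between composition of linear maps and matrix multiplication can be checked directly on examples; a short sketch assuming NumPy, with arbitrary matrices.

```python
import numpy as np

F = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])       # matrix of f : R^2 -> R^3
G = np.array([[1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])  # matrix of g : R^3 -> R^2

x = np.array([1.0, -1.0])        # coordinate vector of an element of R^2

# Applying f and then g gives the same result as multiplying by the product G F.
print(np.allclose(G @ (F @ x), (G @ F) @ x))   # True
```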
Two matrices that encode the same linear transformation in different bases are called similar. It can be proved that two matrices are similar if and only if one can transform one into the other by elementary row and column operations. For a matrix representing a linear map from W to V, the row operations correspond to change of bases in V and the column operations correspond to change of bases in W. Every matrix is similar to an identity matrix possibly bordered by zero rows and zero columns. In terms of vector spaces, this means that, for any linear map from W to V, there are bases such that a part of the basis of W is mapped bijectively on a part of the basis of V, and that the remaining basis elements of W, if any, are mapped to zero. Gaussian elimination is the basic algorithm for finding these elementary operations, and proving these results.
Linear systems
Main article: System of linear equations
A finite set of linear equations in a finite set of variables, for example, x1, x2, ..., xn, or x, y, ..., z is called a system of linear equations or a linear system.[11][12][13][14][15]
Systems of linear equations form a fundamental part of linear algebra. Historically, linear algebra and matrix theory has been developed for solving such systems. In the modern presentation of linear algebra through vector spaces and matrices, many problems may be interpreted in terms of linear systems.
For example, let
${\begin{alignedat}{7}2x&&\;+\;&&y&&\;-\;&&z&&\;=\;&&8\\-3x&&\;-\;&&y&&\;+\;&&2z&&\;=\;&&-11\\-2x&&\;+\;&&y&&\;+\;&&2z&&\;=\;&&-3\end{alignedat}}$
(S)
be a linear system.
To such a system, one may associate its matrix
$M=\left[{\begin{array}{rrr}2&1&-1\\-3&-1&2\\-2&1&2\end{array}}\right].$
and its right member vector
$\mathbf {v} ={\begin{bmatrix}8\\-11\\-3\end{bmatrix}}.$
Let T be the linear transformation associated to the matrix M. A solution of the system (S) is a vector
$\mathbf {X} ={\begin{bmatrix}x\\y\\z\end{bmatrix}}$
such that
$T(\mathbf {X} )=\mathbf {v} ,$
that is an element of the preimage of v by T.
Let (S′) be the associated homogeneous system, where the right-hand sides of the equations are put to zero:
${\begin{alignedat}{7}2x&&\;+\;&&y&&\;-\;&&z&&\;=\;&&0\\-3x&&\;-\;&&y&&\;+\;&&2z&&\;=\;&&0\\-2x&&\;+\;&&y&&\;+\;&&2z&&\;=\;&&0\end{alignedat}}$
(S′)
The solutions of (S′) are exactly the elements of the kernel of T or, equivalently, M.
Gaussian elimination consists of performing elementary row operations on the augmented matrix
$\left[\!{\begin{array}{c|c}M&\mathbf {v} \end{array}}\!\right]=\left[{\begin{array}{rrr|r}2&1&-1&8\\-3&-1&2&-11\\-2&1&2&-3\end{array}}\right]$
for putting it in reduced row echelon form. These row operations do not change the set of solutions of the system of equations. In the example, the reduced echelon form is
$\left[{\begin{array}{rrr|r}1&0&0&2\\0&1&0&3\\0&0&1&-1\end{array}}\right],$
showing that the system (S) has the unique solution
${\begin{aligned}x&=2\\y&=3\\z&=-1.\end{aligned}}$
It follows from this matrix interpretation of linear systems that the same methods can be applied for solving linear systems and for many operations on matrices and linear transformations, which include the computation of ranks, kernels, and matrix inverses.
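The worked example above can be reproduced numerically; a minimal sketch in Python, assuming NumPy is available (np.linalg.solve relies on an LU factorization, a variant of Gaussian elimination).

```python
import numpy as np

M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

X = np.linalg.solve(M, v)
print(X)                       # [ 2.  3. -1.], i.e. x = 2, y = 3, z = -1
print(np.allclose(M @ X, v))   # True: X is indeed a preimage of v under T
```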
Endomorphisms and square matrices
Main article: Square matrix
A linear endomorphism is a linear map that maps a vector space V to itself. If V has a basis of n elements, such an endomorphism is represented by a square matrix of size n.
With respect to general linear maps, linear endomorphisms and square matrices have some specific properties that make their study an important part of linear algebra, which is used in many parts of mathematics, including geometric transformations, coordinate changes, quadratic forms, and many other areas.
Determinant
Main article: Determinant
The determinant of a square matrix A is defined to be[16]
$\sum _{\sigma \in S_{n}}(-1)^{\sigma }a_{1\sigma (1)}\cdots a_{n\sigma (n)},$
where Sn is the group of all permutations of n elements, σ is a permutation, and (−1)σ the parity of the permutation. A matrix is invertible if and only if the determinant is invertible (i.e., nonzero if the scalars belong to a field).
Cramer's rule is a closed-form expression, in terms of determinants, of the solution of a system of n linear equations in n unknowns. Cramer's rule is useful for reasoning about the solution, but, except for n = 2 or 3, it is rarely used for computing a solution, since Gaussian elimination is a faster algorithm.
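For the 3 × 3 system (S) above, Cramer's rule can be written out explicitly and compared with the elimination-based solver; this sketch, assuming NumPy, is only an illustration of the rule, not a recommended way of solving systems.

```python
import numpy as np

M = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
v = np.array([8.0, -11.0, -3.0])

det_M = np.linalg.det(M)           # -1 (up to rounding), nonzero, so M is invertible
solution = []
for i in range(3):
    Mi = M.copy()
    Mi[:, i] = v                   # replace the i-th column by the right member
    solution.append(np.linalg.det(Mi) / det_M)

print(np.round(solution, 10))                          # [ 2.  3. -1.]
print(np.allclose(solution, np.linalg.solve(M, v)))    # True
```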
The determinant of an endomorphism is the determinant of the matrix representing the endomorphism in terms of some ordered basis. This definition makes sense, since this determinant is independent of the choice of the basis.
Eigenvalues and eigenvectors
Main article: Eigenvalues and eigenvectors
If f is a linear endomorphism of a vector space V over a field F, an eigenvector of f is a nonzero vector v of V such that f(v) = av for some scalar a in F. This scalar a is an eigenvalue of f.
If the dimension of V is finite, and a basis has been chosen, f and v may be represented, respectively, by a square matrix M and a column matrix z; the equation defining eigenvectors and eigenvalues becomes
$Mz=az.$
Using the identity matrix I, whose entries are all zero, except those of the main diagonal, which are equal to one, this may be rewritten
$(M-aI)z=0.$
As z is supposed to be nonzero, this means that M – aI is a singular matrix, and thus that its determinant det (M − aI) equals zero. The eigenvalues are thus the roots of the polynomial
$\det(xI-M).$
If V is of dimension n, this is a monic polynomial of degree n, called the characteristic polynomial of the matrix (or of the endomorphism), and there are, at most, n eigenvalues.
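For illustration, the eigenvalues of a small matrix can be computed both directly and as the roots of its characteristic polynomial; the sketch below assumes NumPy and uses an arbitrary 2 × 2 example (in practice, eigenvalue solvers do not go through the characteristic polynomial, whose root finding is numerically ill-conditioned).

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals = np.linalg.eigvals(M)       # direct computation
char_poly = np.poly(M)               # coefficients of det(xI - M): [1, -4, 3]
roots = np.roots(char_poly)          # roots of the characteristic polynomial

print(np.sort(eigvals), np.sort(roots))   # both give [1. 3.]
```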
If a basis exists that consists only of eigenvectors, the matrix of f on this basis has a very simple structure: it is a diagonal matrix such that the entries on the main diagonal are eigenvalues, and the other entries are zero. In this case, the endomorphism and the matrix are said to be diagonalizable. More generally, an endomorphism and a matrix are also said to be diagonalizable if they become diagonalizable after extending the field of scalars. In this extended sense, if the characteristic polynomial is square-free, then the matrix is diagonalizable.
A real symmetric matrix is always diagonalizable. There are non-diagonalizable matrices, the simplest being
${\begin{bmatrix}0&1\\0&0\end{bmatrix}}$
(it cannot be diagonalizable since its square is the zero matrix, and the square of a nonzero diagonal matrix is never zero).
When an endomorphism is not diagonalizable, there are bases on which it has a simple form, although not as simple as the diagonal form. The Frobenius normal form does not require extending the field of scalars and makes the characteristic polynomial immediately readable on the matrix. The Jordan normal form requires extending the field of scalars so that it contains all eigenvalues, and differs from the diagonal form only by some entries just above the main diagonal that are equal to 1.
Duality
Main article: Dual space
A linear form is a linear map from a vector space V over a field F to the field of scalars F, viewed as a vector space over itself. Equipped with pointwise addition and multiplication by a scalar, the linear forms form a vector space, called the dual space of V, and usually denoted V*[17] or V′.[18][19]
If v1, ..., vn is a basis of V (this implies that V is finite-dimensional), then one can define, for i = 1, ..., n, a linear map vi* such that vi*(vi) = 1 and vi*(vj) = 0 if j ≠ i. These linear maps form a basis of V*, called the dual basis of v1, ..., vn. (If V is not finite-dimensional, the vi* may be defined similarly; they are linearly independent, but do not form a basis.)
For v in V, the map
$f\to f(\mathbf {v} )$
is a linear form on V*. This defines the canonical linear map from V into (V*)*, the dual of V*, called the bidual of V. This canonical map is an isomorphism if V is finite-dimensional, and this allows identifying V with its bidual. (In the infinite dimensional case, the canonical map is injective, but not surjective.)
There is thus a complete symmetry between a finite-dimensional vector space and its dual. This motivates the frequent use, in this context, of the bra–ket notation
$\langle f,\mathbf {x} \rangle $
for denoting f(x).
Dual map
Main article: Transpose of a linear map
Let
$f:V\to W$
be a linear map. For every linear form h on W, the composite function h ∘ f is a linear form on V. This defines a linear map
$f^{*}:W^{*}\to V^{*}$
between the dual spaces, which is called the dual or the transpose of f.
If V and W are finite dimensional, and M is the matrix of f in terms of some ordered bases, then the matrix of f* over the dual bases is the transpose MT of M, obtained by exchanging rows and columns.
If elements of vector spaces and their duals are represented by column vectors, this duality may be expressed in bra–ket notation by
$\langle h^{\mathsf {T}},M\mathbf {v} \rangle =\langle h^{\mathsf {T}}M,\mathbf {v} \rangle .$
For highlighting this symmetry, the two members of this equality are sometimes written
$\langle h^{\mathsf {T}}\mid M\mid \mathbf {v} \rangle .$
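For real matrices and the standard pairing, this identity reduces to the associativity of the matrix product, with the dual map represented by the transpose; a quick numerical check assuming NumPy and arbitrary data.

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # matrix of f : R^3 -> R^2
h = np.array([2.0, -1.0])         # a linear form on R^2, given by its coordinates
x = np.array([1.0, 1.0, 1.0])     # a vector of R^3

# Pairing h with f(x) equals pairing the pulled-back form f*(h) = M^T h with x.
print(np.dot(h, M @ x), np.dot(M.T @ h, x))   # both give 2.0
```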
Inner-product spaces
Main article: Inner product space
Besides these basic concepts, linear algebra also studies vector spaces with additional structure, such as an inner product. The inner product is an example of a bilinear form, and it gives the vector space a geometric structure by allowing for the definition of length and angles. Formally, an inner product is a map
$\langle \cdot ,\cdot \rangle :V\times V\to F$
that satisfies the following three axioms for all vectors u, v, w in V and all scalars a in F:[20][21]
• Conjugate symmetry:
$\langle \mathbf {u} ,\mathbf {v} \rangle ={\overline {\langle \mathbf {v} ,\mathbf {u} \rangle }}.$
In $\mathbb {R} $, it is symmetric.
• Linearity in the first argument:
${\begin{aligned}\langle a\mathbf {u} ,\mathbf {v} \rangle &=a\langle \mathbf {u} ,\mathbf {v} \rangle .\\\langle \mathbf {u} +\mathbf {v} ,\mathbf {w} \rangle &=\langle \mathbf {u} ,\mathbf {w} \rangle +\langle \mathbf {v} ,\mathbf {w} \rangle .\end{aligned}}$
• Positive-definiteness:
$\langle \mathbf {v} ,\mathbf {v} \rangle \geq 0$
with equality only for v = 0.
We can define the length of a vector v in V by
$\|\mathbf {v} \|^{2}=\langle \mathbf {v} ,\mathbf {v} \rangle ,$
and we can prove the Cauchy–Schwarz inequality:
$|\langle \mathbf {u} ,\mathbf {v} \rangle |\leq \|\mathbf {u} \|\cdot \|\mathbf {v} \|.$
In particular, the quantity
${\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |}{\|\mathbf {u} \|\cdot \|\mathbf {v} \|}}\leq 1,$
and so we can call this quantity the cosine of the angle between the two vectors.
Two vectors are orthogonal if ⟨u, v⟩ = 0. An orthonormal basis is a basis where all basis vectors have length 1 and are orthogonal to each other. Given any finite-dimensional inner product space, an orthonormal basis can be found by the Gram–Schmidt procedure. Orthonormal bases are particularly easy to deal with, since if v = a1 v1 + ⋯ + an vn, then
$a_{i}=\langle \mathbf {v} ,\mathbf {v} _{i}\rangle .$
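A minimal sketch of the Gram–Schmidt procedure, assuming NumPy and an arbitrary linearly independent family of input vectors, together with the recovery of coordinates as inner products.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a family of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)  # remove components along previous vectors
        basis.append(w / np.linalg.norm(w))
    return basis

vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(vectors)

# In an orthonormal basis, the coordinates of a vector are plain inner products.
v = 2.0 * e[0] - 1.0 * e[1] + 0.5 * e[2]
print([np.dot(v, ei) for ei in e])   # approximately [2.0, -1.0, 0.5]
```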
The inner product facilitates the construction of many useful concepts. For instance, given a transform T, we can define its Hermitian conjugate T* as the linear transform satisfying
$\langle T\mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {u} ,T^{*}\mathbf {v} \rangle .$
If T satisfies TT* = T*T, we call T normal. It turns out that normal matrices are precisely the matrices that have an orthonormal system of eigenvectors that span V.
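The defining property of the Hermitian conjugate and the behaviour of normal matrices can be checked on a small example; the sketch assumes NumPy and uses an arbitrary real symmetric matrix, which is automatically normal.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 3.0]])         # real symmetric, hence normal
u = np.array([1.0, -2.0])
v = np.array([0.5, 4.0])

T_star = T.conj().T                # Hermitian conjugate (here simply the transpose)
print(np.dot(T @ u, v), np.dot(u, T_star @ v))   # equal: the defining property of T*
print(np.allclose(T @ T_star, T_star @ T))       # True: T is normal

# Normal matrices admit an orthonormal basis of eigenvectors:
eigenvalues, Q = np.linalg.eigh(T) # columns of Q are orthonormal eigenvectors of T
print(np.allclose(Q.T @ Q, np.eye(2)))           # True
```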
Relationship with geometry
There is a strong relationship between linear algebra and geometry, which started with the introduction by René Descartes, in 1637, of Cartesian coordinates. In this new (at that time) geometry, now called Cartesian geometry, points are represented by Cartesian coordinates, which are sequences of three real numbers (in the case of the usual three-dimensional space). The basic objects of geometry, which are lines and planes, are represented by linear equations. Thus, computing intersections of lines and planes amounts to solving systems of linear equations. This was one of the main motivations for developing linear algebra.
Most geometric transformations, such as translations, rotations, reflections, rigid motions, isometries, and projections, transform lines into lines. It follows that they can be defined, specified and studied in terms of linear maps. This is also the case for homographies and Möbius transformations, when considered as transformations of a projective space.
Until the end of the 19th century, geometric spaces were defined by axioms relating points, lines and planes (synthetic geometry). Around this date, it appeared that one may also define geometric spaces by constructions involving vector spaces (see, for example, Projective space and Affine space). It has been shown that the two approaches are essentially equivalent.[22] In classical geometry, the involved vector spaces are vector spaces over the reals, but the constructions may be extended to vector spaces over any field, allowing one to consider geometry over arbitrary fields, including finite fields.
Presently, most textbooks introduce geometric spaces from linear algebra, and geometry is often presented, at an elementary level, as a subfield of linear algebra.
Usage and applications
Linear algebra is used in almost all areas of mathematics, thus making it relevant in almost all scientific domains that use mathematics. These applications may be divided into several wide categories.
Geometry of ambient space
The modeling of ambient space is based on geometry. Sciences concerned with this space use geometry widely. This is the case with mechanics and robotics, for describing rigid body dynamics; geodesy for describing Earth shape; perspectivity, computer vision, and computer graphics, for describing the relationship between a scene and its plane representation; and many other scientific domains.
In all these applications, synthetic geometry is often used for general descriptions and a qualitative approach, but for the study of explicit situations, one must compute with coordinates. This requires the heavy use of linear algebra.
Functional analysis
Functional analysis studies function spaces. These are vector spaces with additional structure, such as Hilbert spaces. Linear algebra is thus a fundamental part of functional analysis and its applications, which include, in particular, quantum mechanics (wave functions).
Study of complex systems
Most physical phenomena are modeled by partial differential equations. To solve them, one usually decomposes the space in which the solutions are searched for into small, mutually interacting cells. For linear systems this interaction involves linear functions. For nonlinear systems, this interaction is often approximated by linear functions.[note 2] This is called a linear model or first-order approximation. Linear models are frequently used for complex nonlinear real-world systems because they make parametrization more manageable.[23] In both cases, very large matrices are generally involved. Weather forecasting (or more specifically, parametrization for atmospheric modeling) is a typical example of a real-world application, where the whole Earth atmosphere is divided into cells of, say, 100 km of width and 100 km of height.
Scientific computation
Nearly all scientific computations involve linear algebra. Consequently, linear algebra algorithms have been highly optimized. BLAS and LAPACK are the best known implementations. For improving efficiency, some of them configure the algorithms automatically, at run time, for adapting them to the specificities of the computer (cache size, number of available cores, ...).
Some processors, typically graphics processing units (GPU), are designed with a matrix structure, for optimizing the operations of linear algebra.
Extensions and generalizations
This section presents several related topics that do not appear generally in elementary textbooks on linear algebra, but are commonly considered, in advanced mathematics, as parts of linear algebra.
Module theory
Main article: Module (mathematics)
The existence of multiplicative inverses in fields is not involved in the axioms defining a vector space. One may thus replace the field of scalars by a ring R, and this gives a structure called module over R, or R-module.
The concepts of linear independence, span, basis, and linear maps (also called module homomorphisms) are defined for modules exactly as for vector spaces, with the essential difference that, if R is not a field, there are modules that do not have any basis. The modules that have a basis are the free modules, and those that are spanned by a finite set are the finitely generated modules. Module homomorphisms between finitely generated free modules may be represented by matrices. The theory of matrices over a ring is similar to that of matrices over a field, except that determinants exist only if the ring is commutative, and that a square matrix over a commutative ring is invertible only if its determinant has a multiplicative inverse in the ring.
Vector spaces are completely characterized by their dimension (up to an isomorphism). In general, there is no such complete classification for modules, even if one restricts oneself to finitely generated modules. However, every module is a cokernel of a homomorphism of free modules.
Modules over the integers can be identified with abelian groups, since multiplication by an integer may be identified with repeated addition. Most of the theory of abelian groups may be extended to modules over a principal ideal domain. In particular, over a principal ideal domain, every submodule of a free module is free, and the fundamental theorem of finitely generated abelian groups may be extended straightforwardly to finitely generated modules over a principal ideal domain.
There are many rings for which there are algorithms for solving linear equations and systems of linear equations. However, these algorithms generally have a computational complexity that is much higher than that of the similar algorithms over a field. For more details, see Linear equation over a ring.
Multilinear algebra and tensors
In multilinear algebra, one considers multivariable linear transformations, that is, mappings that are linear in each of a number of different variables. This line of inquiry naturally leads to the idea of the dual space, the vector space V* consisting of linear maps f : V → F where F is the field of scalars. Multilinear maps T : Vn → F can be described via tensor products of elements of V*.
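As a small illustration, a bilinear map taking one argument in a 2-dimensional and one in a 3-dimensional coordinate space can be stored as an array of coefficients, one index per argument, and evaluated by contracting both arguments; the sketch assumes NumPy and uses arbitrary coefficients.

```python
import numpy as np

B = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])   # coefficients B[i, j] = T(e_i, f_j) of a bilinear map

u = np.array([1.0, 2.0])          # vector of R^2
w = np.array([0.0, 1.0, 1.0])     # vector of R^3

# Bilinearity lets T(u, w) be computed by contracting the coefficient array.
print(u @ B @ w)                          # 10.0
print(np.einsum('i,ij,j->', u, B, w))     # the same value, written as a tensor contraction
```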
If, in addition to vector addition and scalar multiplication, there is a bilinear vector product V × V → V, the vector space is called an algebra; for instance, associative algebras are algebras with an associative vector product (like the algebra of square matrices, or the algebra of polynomials).
Topological vector spaces
Main articles: Topological vector space, Normed vector space, and Hilbert space
Vector spaces that are not finite dimensional often require additional structure to be tractable. A normed vector space is a vector space along with a function called a norm, which measures the "size" of elements. The norm induces a metric, which measures the distance between elements, and induces a topology, which allows for a definition of continuous maps. The metric also allows for a definition of limits and completeness; a normed vector space that is complete with respect to this metric is known as a Banach space. An inner product space (a vector space equipped with a conjugate symmetric sesquilinear form) that is complete with respect to the induced metric is known as a Hilbert space, which is in some sense a particularly well-behaved Banach space. Functional analysis applies the methods of linear algebra alongside those of mathematical analysis to study various function spaces; the central objects of study in functional analysis are Lp spaces, which are Banach spaces, and especially the L2 space of square integrable functions, which is the only Hilbert space among them. Functional analysis is of particular importance to quantum mechanics, the theory of partial differential equations, digital signal processing, and electrical engineering. It also provides the foundation and theoretical framework that underlies the Fourier transform and related methods.
See also
• Fundamental matrix (computer vision)
• Geometric algebra
• Linear programming
• Linear regression, a statistical estimation method
• List of linear algebra topics
• Multilinear algebra
• Numerical linear algebra
• Transformation matrix
Explanatory notes
1. This axiom is not asserting the associativity of an operation, since there are two operations in question: scalar multiplication, bv, and field multiplication, ab.
2. This may have the consequence that some physically interesting solutions are omitted.
Citations
1. Banerjee, Sudipto; Roy, Anindya (2014). Linear Algebra and Matrix Analysis for Statistics. Texts in Statistical Science (1st ed.). Chapman and Hall/CRC. ISBN 978-1420095388.
2. Strang, Gilbert (July 19, 2005). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 978-0-03-010567-8.
3. Weisstein, Eric. "Linear Algebra". MathWorld. Wolfram. Retrieved 16 April 2012.
4. Hart, Roger (2010). The Chinese Roots of Linear Algebra. JHU Press. ISBN 9780801899584.
5. Vitulli, Marie. "A Brief History of Linear Algebra and Matrix Theory". Department of Mathematics. University of Oregon. Archived from the original on 2012-09-10. Retrieved 2014-07-08.
6. Koecher, M., Remmert, R. (1991). Hamilton’s Quaternions. In: Numbers. Graduate Texts in Mathematics, vol 123. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1005-4_10
7. Benjamin Peirce (1872) Linear Associative Algebra, lithograph, new edition with corrections, notes, and an added 1875 paper by Peirce, plus notes by his son Charles Sanders Peirce, published in the American Journal of Mathematics v. 4, 1881, Johns Hopkins University, pp. 221–226, Google Eprint and as an extract, D. Van Nostrand, 1882, Google Eprint.
8. Roman (2005, ch. 1, p. 27)
9. Axler (2015) p. 82, §3.59
10. Axler (2015) p. 23, §1.45
11. Anton (1987, p. 2)
12. Beauregard & Fraleigh (1973, p. 65)
13. Burden & Faires (1993, p. 324)
14. Golub & Van Loan (1996, p. 87)
15. Harper (1976, p. 57)
16. Katznelson & Katznelson (2008) pp. 76–77, § 4.4.1–4.4.6
17. Katznelson & Katznelson (2008) p. 37 §2.1.3
18. Halmos (1974) p. 20, §13
19. Axler (2015) p. 101, §3.94
20. P. K. Jain, Khalil Ahmad (1995). "5.1 Definitions and basic properties of inner product spaces and Hilbert spaces". Functional analysis (2nd ed.). New Age International. p. 203. ISBN 81-224-0801-X.
21. Eduard Prugovec̆ki (1981). "Definition 2.1". Quantum mechanics in Hilbert space (2nd ed.). Academic Press. pp. 18 ff. ISBN 0-12-566060-X.
22. Emil Artin (1957) Geometric Algebra Interscience Publishers
23. Savov, Ivan (2017). No Bullshit Guide to Linear Algebra. MinireferenceCo. pp. 150–155. ISBN 9780992001025.
General and cited sources
• Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
• Axler, Sheldon (18 December 2014), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer Publishing (published 2015), ISBN 978-3-319-11079-0
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
• Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3
• Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations, Johns Hopkins Studies in Mathematical Sciences (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 978-0-8018-5414-9
• Halmos, Paul Richard (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics (1958 2nd ed.), Springer Publishing, ISBN 0-387-90093-4, OCLC 1251216
• Harper, Charlie (1976), Introduction to Mathematical Physics, New Jersey: Prentice-Hall, ISBN 0-13-487538-9
• Katznelson, Yitzhak; Katznelson, Yonatan R. (2008), A (Terse) Introduction to Linear Algebra, American Mathematical Society, ISBN 978-0-8218-4419-9
• Roman, Steven (March 22, 2005), Advanced Linear Algebra, Graduate Texts in Mathematics (2nd ed.), Springer, ISBN 978-0-387-24766-3
Further reading
History
• Fearnley-Sander, Desmond, "Hermann Grassmann and the Creation of Linear Algebra", American Mathematical Monthly 86 (1979), pp. 809–817.
• Grassmann, Hermann (1844), Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert, Leipzig: O. Wigand
Introductory textbooks
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Banerjee, Sudipto; Roy, Anindya (2014), Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science (1st ed.), Chapman and Hall/CRC, ISBN 978-1420095388
• Bretscher, Otto (2004), Linear Algebra with Applications (3rd ed.), Prentice Hall, ISBN 978-0-13-145334-0
• Farin, Gerald; Hansford, Dianne (2004), Practical Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-1-56881-234-2
• Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M.
• Kolman, Bernard; Hill, David R. (2007), Elementary Linear Algebra with Applications (9th ed.), Prentice Hall, ISBN 978-0-13-229654-0
• Lay, David C. (2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall, ISBN 978-0-13-185785-8
• Murty, Katta G. (2014) Computational and Algorithmic Linear Algebra and n-Dimensional Geometry, World Scientific Publishing, ISBN 978-981-4366-62-5. Chapter 1: Systems of Simultaneous Linear Equations
• Noble, B. & Daniel, J.W. (1977), Applied Linear Algebra (2nd ed.), Pearson Higher Education, ISBN 978-0130413437.
• Poole, David (2010), Linear Algebra: A Modern Introduction (3rd ed.), Cengage – Brooks/Cole, ISBN 978-0-538-73545-2
• Ricardo, Henry (2010), A Modern Introduction To Linear Algebra (1st ed.), CRC Press, ISBN 978-1-4398-0040-9
• Sadun, Lorenzo (2008), Applied Linear Algebra: the decoupling principle (2nd ed.), AMS, ISBN 978-0-8218-4441-0
• Strang, Gilbert (2016), Introduction to Linear Algebra (5th ed.), Wellesley-Cambridge Press, ISBN 978-09802327-7-6
• The Manga Guide to Linear Algebra (2012), by Shin Takahashi, Iroha Inoue and Trend-Pro Co., Ltd., ISBN 978-1-59327-413-9
Advanced textbooks
• Bhatia, Rajendra (November 15, 1996), Matrix Analysis, Graduate Texts in Mathematics, Springer, ISBN 978-0-387-94846-1
• Demmel, James W. (August 1, 1997), Applied Numerical Linear Algebra, SIAM, ISBN 978-0-89871-389-3
• Dym, Harry (2007), Linear Algebra in Action, AMS, ISBN 978-0-8218-3813-6
• Gantmacher, Felix R. (2005), Applications of the Theory of Matrices, Dover Publications, ISBN 978-0-486-44554-0
• Gantmacher, Felix R. (1990), Matrix Theory Vol. 1 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-1376-8
• Gantmacher, Felix R. (2000), Matrix Theory Vol. 2 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-2664-5
• Gelfand, Israel M. (1989), Lectures on Linear Algebra, Dover Publications, ISBN 978-0-486-66082-0
• Glazman, I. M.; Ljubic, Ju. I. (2006), Finite-Dimensional Linear Analysis, Dover Publications, ISBN 978-0-486-45332-3
• Golan, Johnathan S. (January 2007), The Linear Algebra a Beginning Graduate Student Ought to Know (2nd ed.), Springer, ISBN 978-1-4020-5494-5
• Golan, Johnathan S. (August 1995), Foundations of Linear Algebra, Kluwer, ISBN 0-7923-3614-3
• Greub, Werner H. (October 16, 1981), Linear Algebra, Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-0-8018-5414-9
• Hoffman, Kenneth; Kunze, Ray (1971), Linear algebra (2nd ed.), Englewood Cliffs, N.J.: Prentice-Hall, Inc., MR 0276251
• Halmos, Paul R. (August 20, 1993), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-90093-3
• Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (September 7, 2018), Linear Algebra (5th ed.), Pearson, ISBN 978-0-13-486024-4
• Horn, Roger A.; Johnson, Charles R. (February 23, 1990), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
• Horn, Roger A.; Johnson, Charles R. (June 24, 1994), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1
• Lang, Serge (March 9, 2004), Linear Algebra, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-0-387-96412-6
• Marcus, Marvin; Minc, Henryk (2010), A Survey of Matrix Theory and Matrix Inequalities, Dover Publications, ISBN 978-0-486-67102-4
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on October 31, 2009
• Mirsky, L. (1990), An Introduction to Linear Algebra, Dover Publications, ISBN 978-0-486-66434-7
• Shafarevich, I. R.; Remizov, A. O (2012), Linear Algebra and Geometry, Springer, ISBN 978-3-642-30993-9
• Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover Publications, ISBN 978-0-486-63518-7
• Shores, Thomas S. (December 6, 2006), Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-33194-2
• Smith, Larry (May 28, 1998), Linear Algebra, Undergraduate Texts in Mathematics, Springer, ISBN 978-0-387-98455-1
• Trefethen, Lloyd N.; Bau, David (1997), Numerical Linear Algebra, SIAM, ISBN 978-0-898-71361-9
Study guides and outlines
• Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs Quick Review), Cliffs Notes, ISBN 978-0-8220-5331-6
• Lipschutz, Seymour; Lipson, Marc (December 6, 2000), Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill, ISBN 978-0-07-136200-9
• Lipschutz, Seymour (January 1, 1989), 3,000 Solved Problems in Linear Algebra, McGraw–Hill, ISBN 978-0-07-038023-3
• McMahon, David (October 28, 2005), Linear Algebra Demystified, McGraw–Hill Professional, ISBN 978-0-07-146579-3
• Zhang, Fuzhen (April 7, 2009), Linear Algebra: Challenging Problems for Students, The Johns Hopkins University Press, ISBN 978-0-8018-9125-0
External links
Online Resources
• MIT Linear Algebra Video Lectures, a series of 34 recorded lectures by Professor Gilbert Strang (Spring 2010)
• International Linear Algebra Society
• "Linear algebra", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Linear Algebra on MathWorld
• Matrix and Linear Algebra Terms on Earliest Known Uses of Some of the Words of Mathematics
• Earliest Uses of Symbols for Matrices and Vectors on Earliest Uses of Various Mathematical Symbols
• Essence of linear algebra, a video presentation from 3Blue1Brown of the basics of linear algebra, with emphasis on the relationship between the geometric, the matrix and the abstract points of view
Online books
• Beezer, Robert A. (2009) [2004]. A First Course in Linear Algebra. Gainesville, Florida: University Press of Florida. ISBN 9781616100049.
• Connell, Edwin H. (2004) [1999]. Elements of Abstract and Linear Algebra. University of Miami, Coral Gables, Florida: Self-published.
• Hefferon, Jim (2020). Linear Algebra (4th ed.). Ann Arbor, Michigan: Orthogonal Publishing. ISBN 978-1-944325-11-4. OCLC 1178900366. OL 30872051M.
• Margalit, Dan; Rabinoff, Joseph (2019). Interactive Linear Algebra. Georgia Institute of Technology, Atlanta, Georgia: Self-published.
• Matthews, Keith R. (2013) [1991]. Elementary Linear Algebra. University of Queensland, Brisbane, Australia: Self-published.
• Mikaelian, Vahagn H. (2020) [2017]. Linear Algebra: Theory and Algorithms. Yerevan, Armenia: Self-published – via ResearchGate.
• Sharipov, Ruslan, Course of linear algebra and multidimensional geometry
• Treil, Sergei, Linear Algebra Done Wrong
| Wikipedia |
Power associativity
In mathematics, specifically in abstract algebra, power associativity is a property of a binary operation that is a weak form of associativity.
Definition
An algebra (or more generally a magma) is said to be power-associative if the subalgebra generated by any element is associative. Concretely, this means that if an element $x$ is performed an operation $*$ by itself several times, it doesn't matter in which order the operations are carried out, so for instance $x*(x*(x*x))=(x*(x*x))*x=(x*x)*(x*x)$.
Examples and properties
Every associative algebra is power-associative, but so are all other alternative algebras (like the octonions, which are non-associative) and even some non-alternative algebras like the sedenions and Okubo algebras. Any algebra whose elements are idempotent is also power-associative.
Exponentiation to the power of any positive integer can be defined consistently whenever multiplication is power-associative. For example, there is no need to distinguish whether $x^{3}$ should be defined as $(xx)x$ or as $x(xx)$, since these are equal. Exponentiation to the power of zero can also be defined if the operation has an identity element, so the existence of identity elements is useful in power-associative contexts.
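As a concrete check, the octonions mentioned above can be modelled via the Cayley–Dickson construction and the equality of the different groupings verified numerically. The Python sketch below is only an illustration: the function names and the particular sign convention $(a,b)(c,d)=(ac-d^{*}b,\ da+bc^{*})$ (with $^{*}$ denoting conjugation) are choices made here, and any of the standard conventions yields an alternative, hence power-associative, algebra.

import random

def cd_mult(a, b):
    # Cayley-Dickson product of two coordinate tuples of length 2^k
    # (length 1 = reals, 2 = complexes, 4 = quaternions, 8 = octonions).
    n = len(a)
    if n == 1:
        return (a[0] * b[0],)
    h = n // 2
    conj = lambda x: (x[0],) + tuple(-t for t in x[1:])
    add = lambda x, y: tuple(u + v for u, v in zip(x, y))
    sub = lambda x, y: tuple(u - v for u, v in zip(x, y))
    p, q, r, s = a[:h], a[h:], b[:h], b[h:]
    return sub(cd_mult(p, r), cd_mult(conj(s), q)) + add(cd_mult(s, p), cd_mult(q, conj(r)))

for _ in range(100):
    x = tuple(random.randint(-5, 5) for _ in range(8))    # a random integral octonion
    xx = cd_mult(x, x)
    assert cd_mult(x, xx) == cd_mult(xx, x)                # x(xx) = (xx)x
    assert cd_mult(cd_mult(xx, x), x) == cd_mult(xx, xx)   # ((xx)x)x = (xx)(xx)

A generic triple of octonions fails (xy)z = x(yz), so the assertions above really do exercise power associativity rather than associativity.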
Over a field of characteristic 0, an algebra is power-associative if and only if it satisfies $[x,x,x]=0$ and $[x^{2},x,x]=0$, where $[x,y,z]:=(xy)z-x(yz)$ is the associator (Albert 1948).
Over an infinite field of prime characteristic $p>0$ there is no finite set of identities that characterizes power-associativity, but there are infinite independent sets, as described by Gainov (1970):
• For $p=2$: $[x,x^{2},x]=0$ and $[x^{n-2},x,x]=0$ for $n=3,2^{k}$ ($k=2,3,\ldots$)
• For $p=3$: $[x^{n-2},x,x]=0$ for $n=4,5,3^{k}$ ($k=1,2,\ldots$)
• For $p=5$: $[x^{n-2},x,x]=0$ for $n=3,4,6,5^{k}$ ($k=1,2,\ldots$)
• For $p>5$: $[x^{n-2},x,x]=0$ for $n=3,4,p^{k}$ ($k=1,2,\ldots$)
A substitution law holds for real power-associative algebras with unit, which basically asserts that multiplication of polynomials works as expected. For f a real polynomial in x, and for any a in such an algebra define f(a) to be the element of the algebra resulting from the obvious substitution of a into f. Then for any two such polynomials f and g, we have that (fg)(a) = f(a)g(a).
See also
• Alternativity
References
• Albert, A. Adrian (1948). "Power-associative rings". Transactions of the American Mathematical Society. 64: 552–593. doi:10.2307/1990399. ISSN 0002-9947. JSTOR 1990399. MR 0027750. Zbl 0033.15402.
• Gainov, A. T. (1970). "Power-associative algebras over a finite-characteristic field". Algebra and Logic. 9 (1): 5–19. doi:10.1007/BF02219846. ISSN 0002-9947. MR 0281764. Zbl 0208.04001.
• Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998). The book of involutions. Colloquium Publications. Vol. 44. With a preface by Jacques Tits. Providence, RI: American Mathematical Society. ISBN 0-8218-0904-0. Zbl 0955.16001.
• Okubo, Susumu (1995). Introduction to octonion and other non-associative algebras in physics. Montroll Memorial Lecture Series in Mathematical Physics. Vol. 2. Cambridge University Press. p. 17. ISBN 0-521-01792-0. MR 1356224. Zbl 0841.17001.
• Schafer, R. D. (1995) [1966]. An introduction to non-associative algebras. Dover. pp. 128–148. ISBN 0-486-68813-5.
| Wikipedia |
\begin{definition}[Definition:Variable/Descriptive Statistics]
A '''variable''' is a characteristic property of all individuals in a population or sample.
It is a categorization of the population such that each individual can be unambiguously described with respect to said variable.
\end{definition} | ProofWiki |
Equipollence (geometry)
In Euclidean geometry, equipollence is a binary relation between directed line segments. A line segment AB from point A to point B has the opposite direction to line segment BA. Two parallel line segments are equipollent when they have the same length and direction.
Parallelogram property
A definitive feature of Euclidean space is the parallelogram property of vectors: If two segments are equipollent, then they form two sides of a parallelogram:
If a given vector holds between a and b, c and d, then the vector which holds between a and c is the same as that which holds between b and d.
— Bertrand Russell, The Principles of Mathematics, page 432
History
The concept of equipollent line segments was advanced by Giusto Bellavitis in 1835. Subsequently, the term vector was adopted for a class of equipollent line segments. Bellavitis's use of the idea of a relation to compare different but similar objects has become a common mathematical technique, particularly in the use of equivalence relations. Bellavitis used a special notation for the equipollence of segments AB and CD:
$AB\bumpeq CD.$
The following passages, translated by Michael J. Crowe, show the anticipation that Bellavitis had of vector concepts:
Equipollences continue to hold when one substitutes for the lines in them, other lines which are respectively equipollent to them, however they may be situated in space. From this it can be understood how any number and any kind of lines may be summed, and that in whatever order these lines are taken, the same equipollent-sum will be obtained...
In equipollences, just as in equations, a line may be transferred from one side to the other, provided that the sign is changed...
Thus oppositely directed segments are negatives of each other: $AB+BA\bumpeq 0.$
The equipollence $AB\bumpeq n.CD,$ where n stands for a positive number, indicates that AB is both parallel to and has the same direction as CD, and that their lengths have the relation expressed by AB = n.CD.[1]
The segment from A to B is a bound vector, while the class of segments equipollent to it is a free vector, in the parlance of Euclidean vectors.
Extension
Geometric equipollence is also used on the sphere:
To appreciate Hamilton's method, let us first recall the much simpler case of the Abelian group of translations in Euclidean three-dimensional space. Each translation is representable as a vector in space, only the direction and magnitude being significant, and the location irrelevant. The composition of two translations is given by the head-to-tail parallelogram rule of vector addition; and taking the inverse amounts to reversing direction. In Hamilton's theory of turns, we have a generalization of such a picture from the Abelian translation group to the non-Abelian SU(2). Instead of vectors in space, we deal with directed great circle arcs, of length < π on a unit sphere S2 in a Euclidean three-dimensional space. Two such arcs are deemed equivalent if by sliding one along its great circle it can be made to coincide with the other.[2]
On a great circle of a sphere, two directed circular arcs are equipollent when they agree in direction and arc length. An equivalence class of such arcs is associated with a quaternion versor
$\exp(ar)=\cos a+r\sin a,$ where a is arc length and r determines the plane of the great circle by perpendicularity.
References
1. Michael J. Crowe (1967) A History of Vector Analysis, "Giusto Bellavitis and His Calculus of Equipollences", pp 52–4, University of Notre Dame Press
2. N. Mukunda, Rajiah Simon and George Sudarshan (1989) "The theory of screws: a new geometric representation for the group SU(1,1)", Journal of Mathematical Physics 30(5): 1000–1006 MR0992568
• Giusto Bellavitis (1835) "Saggio di applicazioni di un nuovo metodo di Geometria Analitica (Calcolo delle equipollenze)", Annali delle Scienze del Regno Lombardo-Veneto, Padova 5: 244–59.
• Giusto Bellavitis (1854) Sposizione del Metodo della Equipollenze, link from Google Books.
• Charles-Ange Laisant (1874): French translation with additions of Bellavitis (1854) Exposition de la méthode des equipollences, link from Google Books.
• Giusto Bellavitis (1858) Calcolo dei Quaternioni di W.R. Hamilton e sua Relazione col Metodo delle Equipollenze, link from HathiTrust.
• Charles-Ange Laisant (1887) Theorie et Applications des Equipollence, Gauthier-Villars, link from University of Michigan Historical Math Collection.
• Lena L. Severance (1930) The Theory of Equipollences; Method of Analytical Geometry of Sig. Bellavitis, link from HathiTrust.
| Wikipedia |
\begin{document}
\title{\textbf{\large{The Monomial Lattice in Modular Symmetric Power Representations}}}
\author{Eknath Ghate and Ravitheja Vangala}
\date{}
\maketitle
\begin{abstract} Let $p$ be a prime. We study the structure of and the inclusion relations among the terms in the monomial lattice in the modular symmetric power representations of $\mathrm{GL}_2(\mathbb{F}_p)$. We also determine the structure of certain related quotients of the symmetric power representations which arise when studying the reductions of local Galois representations of slope at most $p$. In particular, we show that these quotients are periodic and depend only on the congruence class modulo $p(p-1)$. Many of our results are stated in terms of the sizes of various sums of digits in base $p$-expansions and in terms of the vanishing or non-vanishing of certain binomial coefficients modulo $p$. \end{abstract}
\keywords{Modular representations of $ \operatorname{GL}_{2}(\mathbb{F}_{p}) $,
structure of monomial submodules,
reductions of crystalline representations}
\subjclass{20C33, 20C20, 11T06, 11F80}
\section{Introduction}
Let $V_r$, for $r \geq 0$, be the $r^{\mathrm{th}}$-symmetric power representation over the field $k$, of the general linear group $\Gamma = \mathrm{GL}_2(k)$. This paper studies the monomial lattice in $V_r$ when $k = \mathbb{F}_p$ is the finite field with $p$ elements, for $p$ a prime number. It also studies related quotients of $V_r$ which arise in number theoretic problems involving Galois representations.
For a general field $k$, the symmetric power representations $V_r$ have models over $k$ consisting of homogeneous polynomials $F(X,Y)$ in two variables of degree $r$ defined over $k$, with action given by:
\begin{align}\label{G action}
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot F(X,Y) =
F(aX+cY,bX+dY), \quad \forall \
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma = \operatorname{GL}_{2}(k). \end{align}
When $k = \mathbb{C}$ is the field of complex numbers,
the representations $V_r$ are irreducible. But the $V_r$ are not always irreducible if $k$ has characteristic $p$. In this introduction we consider two special cases. We assume that $k = {\mathbb{F}}_p$, so that $\Gamma = \mathrm{GL}_2(\mathbb{F}_p)$ is a finite group (of Lie type), or $k = \bar{\mathbb F}_p$, the algebraic closure of $\mathbb{F}_p$, so that $\Gamma = \mathrm{GL}_2(\bar{\mathbb{F}}_p)$ is an algebraic group (more precisely, the $\bar{\mathbb{F}}_p$-valued points of the algebraic group $\mathrm{GL}_2$). In both these cases it is well known that the representations $V_r$ of $\Gamma$ are irreducible if and only if $r \leq p-1$. Thus, it is natural to ask what the structure of various $\Gamma$-submodules of these representations are. A particularly important class of $\Gamma$-submodules are those generated by the monomials $X^{r-i}Y^i \in V_r$, for $0 \leq i \leq r$. Set
\[
X_{r-i} := \langle X^{r-i}Y^i \rangle \subset V_r, \]
for $0 \leq i \leq r$, which we also denote by $X_{r-i,\,r}$ when we wish to specify the ambient space $V_r$.
We refer to the lattice of submodules in $V_r$ defined by the monomial submodules $X_{r-i}$, with partial order determined by inclusion, as the {\it monomial lattice}.
It is not difficult to see (Lemma~\ref{first row filtration}) that the monomial lattice starts off as an increasing filtration
\begin{eqnarray}
\label{first row}
X_r \subset X_{r-1} \subset \cdots \subset X_{r-(p-1)} \end{eqnarray}
and so by the Weyl involution $w = \left( \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \right)$, which flips $X$ and $Y$, ends as a decreasing filtration
\begin{eqnarray*}
X_{p-1} \supset X_{p-2} \supset \cdots \supset X_1 \supset X_0. \end{eqnarray*}
However, the other inclusion relations between these two extremes are not well understood. In fact, the very next monomial submodule $X_{r-p}$ behaves erratically with respect to the filtration \eqref{first row}, in the sense that there are infinite families of $r$ (e.g., $r = p^m + p$, for $m \geq 2$) such that $X_{r-p}$ does not contain $X_{r-(p-1)}$. Similarly, there are other infinite families of $r$ (e.g., $r = p^m+(p-1)$, for $m \geq 2$) such that $X_{r-p}$ is not contained in $X_{r-(p-1)}$.
When we are in the algebraic group case, so that $\Gamma = {\mathrm{GL}}_2(k)$ with $k = \bar{\mathbb F}_p$, S. Doty has given an elegant description of all inclusion relations among the monomial submodules in terms of {\it carry patterns}, even for the general linear groups $\mathrm{GL}_n(k)$, for $n \geq 2$, of higher rank \cite{Doty} (see also \cite{DW96}). We describe his result in the present setting ($n= 2$). Let $0 \leq i \leq r$. Each of the three numbers involved in the identity
\begin{eqnarray}
\label{carry}
r = i + (r-i), \end{eqnarray}
has a base $p$-expansion. However (as every schoolchild will attest to when $p$ is the non-prime $10$), the sum of the individual $p$-adic digits of $i$ and $r-i$ do not necessarily coincide with the corresponding $p$-adic digits of $r$. The discrepancy is measured by a sequence of $0$s and $1$s, called the carry pattern (of $i$). Doty proved that, for $0 \leq i,j \leq r$, there is an inclusion $X_{r-j} \subset X_{r-i}$ of monomial submodules if and only if the carry pattern of $j$ is less than the carry pattern of $i$ with respect to the lexicographic ordering on the set of all sequences of $0$s and $1$s, thereby setting up an isomorphism of posets between the monomial lattice and the poset of carry patterns with their lexicographic ordering. Moreover, Doty showed that any $\Gamma$-submodule of the symmetric power representation $V_r$ is a sum of monomial submodules essentially reducing the study of arbitrary submodules of $V_r$ to the monomial submodules of $V_r$. A crucial role in his arguments is played by the fact that $k = \bar{\mathbb F}_p$ is an infinite field.\footnote{In fact, Doty's results
also hold if $k = {\mathbb F}_q$ is a finite field of cardinality $q > r$. However, in the applications we have in mind, one wishes to treat all $r$ at the same time for the fixed finite field $k = \mathbb{F}_p$, so this more refined version of his result becomes of limited use.}
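To fix ideas, take $p = 5$ and $r = 31 = (111)_5$. For $i = 1$ the base $5$ digits of $1$ and of $r - 1 = 30 = (110)_5$ add up, place by place, to the digits of $31$, so no carries occur and the carry pattern of $1$ is identically zero. For $i = 2$ one has $r - 2 = 29 = (104)_5$, and adding $2$ to $29$ in base $5$ produces a carry out of the units place, so the carry pattern of $2$ is non-zero. Since the zero pattern is smaller than any non-zero pattern in the lexicographic order, Doty's criterion gives $X_{r-1} \subset X_{r-2}$ over $\bar{\mathbb{F}}_p$, but rules out the reverse inclusion.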
In this paper, we study the monomial lattice when $k = \mathbb{F}_p$, which we assume to be the case from now on. We will focus on the first $p$ monomial submodules in the filtration \eqref{first row} where there are already many new phenomena to be discovered. It seems to be generally acknowledged (see, e.g., the book of Humphreys \cite[\S 19.2]{Humph}) that the study of modular representations of algebraic groups is in general easier than the study of modular representations of finite groups of Lie type. This is certainly borne out in the study of the monomial lattice as one moves from $k = \bar{\mathbb F}_p$ to $k = {\mathbb F}_p$. For instance, while the fact that the carry pattern of $j$ should be less than the carry pattern of $i$ is clearly still a necessary condition for the inclusion $X_{r-j} \subset X_{r-i}$, it is no longer a sufficient condition, not even for the first two monomial submodules in \eqref{first row} above! Indeed, if the constant term $r_0$ in the base $p$-expansion of $r$ is non-zero, then the carry patterns with respect to the equation \eqref{carry} for $i = 0$ and $i = 1$ are the same (a string of $0$s) even though it is easily checked that the inclusion $X_r \subsetneq X_{r-1}$ is strict, for $r \geq p$ \cite[Lemma 4.1]{BG15}.
For an integer $n \geq 0$, let $\Sigma_p(n)$ denote the sum of the $p$-adic digits in the base $p$-expansion of $n$. The first result of this paper that we state here gives necessary and sufficient conditions for all inclusion relations among the first $p$ monomial submodules in the filtration \eqref{first row} in terms of
additional data involving such sums.
\begin{theorem}
Let $p \geq 2$, $0 \leq j < i \leq p-1$, $r \geq p$ and $r_0$ be
the constant term in the base $p$-expansion of $r$.
Then the monomial submodules $X_{r-j} = X_{r-i}$ are equal if and only if
$i$ and $j$ have the same carry patterns, $\Sigma_p(r-j) \leq p-1$
and $\Sigma_p(r-r_0) \leq j$. \end{theorem}
\noindent The theorem is proved in several lemmas leading up to Lemma~\ref{final X_r-i = X_r-j}, noting that the cases $p = 2$ and $j = 0$ are trivially true. It may be viewed as a refinement of Doty's first result recalled above when one moves from $k = \bar{\mathbb F}_p$ to $k = \mathbb{F}_p$, at least for the first $p$ monomial submodules \eqref{first row} in the monomial lattice.
We now remark that Doty's second result mentioned above also fails when one moves from $k = \bar{\mathbb F}_p$ to $k = {\mathbb F}_p$. Let $$\theta(X,Y) = X^pY-XY^p$$ be the Dickson polynomial of degree $p+1$, and let $V_r^{(1)} \subset V_r$ denote the $\Gamma$-submodule of $V_r$ consisting of all polynomials $F(X,Y)$ divisible by $\theta(X,Y)$ studied by Glover \cite{Glover}, in what was perhaps the first thorough study of the symmetric power representations $V_r$ when $\Gamma = \mathrm{GL}_2({\mathbb F}_p)$. Since $\Gamma$ acts on $\theta(X,Y)$ via the determinant character (this really uses the fact that one is working over ${\mathbb F}_p$), the submodule $V_r^{(1)}$ is indeed stable by $\Gamma$.
But the sum of the coefficients of a polynomial in $V_r^{(1)}$ vanishes, hence $V_r^{(1)}$ does not contain any monomials
and, in particular, $V_r^{(1)}$ cannot be spanned by monomial submodules.
Nonetheless, there are many interesting subquotients of $V_r$ involving monomial submodules, so it is important to describe the structure of these submodules when $k = \mathbb{F}_p$. The submodule $X_r$ generated by the highest monomial $X^r$ was essentially described by Glover himself \cite{Glover} and the structure of the next submodule $X_{r-1}$ generated by the second highest monomial $X^{r-1}Y$ was described completely in \cite{BG15}, for $p \geq 3$. There does not seem to be any general literature beyond these results on the other submodules in the monomial lattice when $\Gamma =\mathrm{GL}_2(\mathbb{F}_p)$.
The first main goal of this paper is to extend the aforementioned results to give the structure of the first $p$ monomial submodules $X_{r-i}$ in \eqref{first row} in the monomial lattice. To this end, we note that there is a surjection of $\Gamma$-modules $\phi_i : X_{r-i,\,r-i} \otimes V_i \twoheadrightarrow X_{r-i,\,r}$ given by multiplication (Lemma~\ref{surjection1}), where $X_{r-i,\,r-i}$ is the submodule generated by the highest monomial $X^{r-i}$ in $V_{r-i}$. We prove that generically this map is an isomorphism. More precisely, we have
\begin{theorem}
Let $p \geq 3$, $0 \leq i \leq p-1$, $r \geq (i+1)(p+1)$ and $r_0$ be the
constant term in the base $p$-expansion of $r$. If $\Sigma_p(r-r_0) \geq p$,
then
\begin{eqnarray*}
X_{r-i,\,r} \cong X_{r-i,\,r-i} \otimes V_i.
\end{eqnarray*}
If $\Sigma_p(r - r_0) < p$, the structure of $X_{r-i,\,r}$ can also be described in terms of explicit extensions of tensor products of irreducible and principal series representations. \end{theorem}
\noindent Here the principal series representations are by definition $\mathrm{ind}_B^\Gamma (a^{m}d^n)$, where $B$ is the Borel subgroup of upper triangular matrices $\{ \left( \begin{smallmatrix} a & b \\ 0 & d \end{smallmatrix} \right) : a,d \in \mathbb{F}_{p}^\ast, b \in \mathbb{F}_{p} \} \subset \Gamma$ and $m$, $n$ are integers. If $r_0 \geq i$, then $\Sigma_p(r-i) = \Sigma_p(r-r_0) + (r_0 - i)$ and the theorem follows from Theorem~\ref{Structure r_0 >i}. If $r_0 < i$, it follows from Theorem~\ref{Structure r_0<i}. We refer the reader to these theorems for more details.
The proofs of these theorems reduce the study of the structure of $X_{r-i}$ to that of $X_{r-(i-1)}$, by noting that the quotients $X_{r-i}/X_{r-(i-1)}$ are homomorphic images of the principal series representation $\mathrm{ind}_B^\Gamma (a^{r-i}d^i)$, for $1 \leq i \leq p-1$ (Corollary~\ref{induced and successive}). In any case, these theorems allow us to state the following explicit dimension formulas for the first $p$ monomial submodules $X_{r-i}$ in \eqref{first row} (see Corollaries~\ref{dimension r_0 > i} and \ref{dimension r_0 < i}).
\begin{corollary}
Let $ p \geq 3$, $0 \leq i \leq p-1$, $r \geq (i+1)(p+1)$ and $r_0$ be
the constant term in the base $p$-expansion of $r$.
\noindent If $r_{0} \geq i$, then
\begin{align*}
\dim X_{r-i} =
\begin{cases}
(r_{0}+1)(\Sigma_{p}(r-r_{0})+1) , & \mathrm{if} ~ \Sigma_{p}(r-i) \leq r_{0}, \\
(i+1)\left( \Sigma_{p}(r-i) +1 \right), &\mathrm{if} ~
r_{0} \leq \Sigma_{p}(r-i) \leq p,\\
(i+1)(p+1), & \mathrm{if} ~ \Sigma_{p}(r-i) \geq p.
\end{cases}
\end{align*}
\noindent If $r_{0}<i$, then
\begin{align*}
\dim X_{r-i} =
\begin{cases}
(p+r_{0}+1) \Sigma_{p}(r-r_{0}),
&~\mathrm{if}~ \Sigma_{p}(r-i) < p, \\
(i - r_{0})(p+1)+ (\Sigma_{p}(r-r_{0})+1)(r_{0}+1), &
~ \mathrm{if} ~ \Sigma_{p}(r-i) \geq p, ~ \Sigma_{p}(r-r_{0}) \leq p, \\
(i+1)(p+1), & ~ \mathrm{if} ~ \Sigma_{p}(r-i) \geq p, ~ \Sigma_{p}(r-r_{0}) \geq p. \\
\end{cases}
\end{align*} \end{corollary}
\noindent We remark that the first dimension formula is continuous at both the boundaries, whereas the second formula is not continuous at the first boundary.
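\noindent These formulas lend themselves to direct numerical verification for small $p$: the short Python sketch below (an illustration of ours, whose helper names are not taken from the literature) spans the $\Gamma$-orbit of $X^{r-i}Y^i$ in the monomial basis of $V_r$ and row-reduces over $\mathbb{F}_p$, so its output can be compared with the displayed dimension formulas.
\begin{verbatim}
from itertools import product
from math import comb

def orbit_rows(p, r, i):
    # Coefficient vectors (indexed by the exponent of Y) of
    # g.(X^{r-i} Y^i) = (aX+cY)^{r-i} (bX+dY)^i, for g in GL_2(F_p).
    rows = []
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 0:
            continue
        coeff = [0] * (r + 1)
        for k in range(r - i + 1):
            for l in range(i + 1):
                m = (comb(r - i, k) * comb(i, l) * pow(a, r - i - k, p)
                     * pow(c, k, p) * pow(b, i - l, p) * pow(d, l, p))
                coeff[k + l] = (coeff[k + l] + m) % p
        rows.append(coeff)
    return rows

def rank_mod_p(rows, p):
    # Plain Gaussian elimination over F_p.
    rows = [row[:] for row in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((j for j in range(rank, len(rows)) if rows[j][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for j in range(len(rows)):
            if j != rank and rows[j][col]:
                f = rows[j][col]
                rows[j] = [(x - f * y) % p for x, y in zip(rows[j], rows[rank])]
        rank += 1
    return rank

def dim_monomial_submodule(p, r, i):
    return rank_mod_p(orbit_rows(p, r, i), p)
\end{verbatim}
For instance, \verb|dim_monomial_submodule(5, 128, 2)| computes $\dim X_{126,\,128}$ over $\mathbb{F}_5$ directly from the definition.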
Although this was not part of the initial goal of this paper, we decided to also include in the final section, as an example of things to come, the structure of the next submodule $X_{r-p}$ in the monomial lattice, generated by $X^{r-p}Y^p$. The module $X_{r-p}$ behaves more erratically with respect to the filtration \eqref{first row} when $k = \mathbb{F}_p$ than when $k = \bar{\mathbb F}_p$, since there are infinite families of $r$ (e.g., $r= p^m+p-1$, for $m \geq 2$) such that $X_{r-p}$ {\it simultaneously} neither contains nor is contained in $X_{r-(p-1)}$. In spite of this, we show that $X_{r-p}$ has a relatively simple structure when $k = {\mathbb F}_p$, namely, it is isomorphic to the monomial submodule $X_{s-1,\,s} \subset V_s$ generated by the second highest monomial, for some $s$, which is often though not always equal to $r$. More precisely, we establish the following trichotomy in Propositions~\ref{p divides r}, \ref{BG Proposition 3.3}, \ref{remaining case r = 1 mod p-1}, \ref{BG Lemma 4.5}, \ref{4 JH factors} and \ref{BG Lemma 4.8}.
\begin{theorem}
Let $p \geq 3$ and $r \geq 2p+1$. Then
\begin{eqnarray*}
X_{r-p} \simeq
\begin{cases}
X_{r/p-1, \,r/p}, & \text{if } \Sigma_p(r-p) < \Sigma_p(r-1), \\
X_{r-1, \,r}, & \text{if } \Sigma_p(r-p) = \Sigma_p(r-1), \\
X_{rp-1, \,rp}, & \text{if } \Sigma_p(r-p) > \Sigma_p(r-1). \\
\end{cases}
\end{eqnarray*} \end{theorem}
\noindent Since the structure of the submodule $X_{s-1, \,s}$ was determined in \cite[\S 2, \S 3]{BG15}, the theorem also gives the structure of the submodule $X_{r-p}$, for $p \geq 3$ (see Theorem~\ref{Main theorem part 1 and 2} and Theorem~\ref{Main theorem part 4} for the precise structure).\footnote{ Though we caution the reader that there are infinite families of $r$ (e.g., $r=p^m+p+1$, for $m \geq 2$) for which one is in the middle case of the theorem, so that $X_{r-1} \cong X_{r-p}$, yet neither $X_{r-p}$ contains nor is contained in $X_{r-1}$.} We remark that in the first case we have $p \mid r$, by Lemma~\ref{Equality of sum of p-adic digits of (r-1),(r-p)}. As a corollary, we obtain:
\begin{corollary}
Let $p\geq 3$ and $r \geq p(2p+1)$.
Write $r=p^{n}u$,
with $p \nmid u$. Set $\delta =1$ if $n \geq 2$ or
$\Sigma_{p}(r-p) > \Sigma_{p}(r-1)$ and $0$ otherwise.
Then
\begin{align*}
\mathrm{dim} \ X_{r-p} =
\begin{cases}
2 \Sigma_p(r) + \delta(p+2-\Sigma_p(r)),
\ & \text{if} \ \Sigma_p(r) \leq p, \\
2p+2, & \text{if} \ \Sigma_p(r) > p.
\end{cases}
\end{align*} \end{corollary}
\noindent The proof follows from the theorem and \cite[Corollary 1.6]{BG15} (or \Cref{dimension formula for X_{r-1}}), by checking various cases.
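\noindent For example, let $p = 5$ and $r = 130 = (1010)_5$. Then $\Sigma_5(r-p) = \Sigma_5(125) = 1$ while $\Sigma_5(r-1) = \Sigma_5(129) = 5$, so we are in the first case of the theorem (and indeed $5 \mid 130$), whence $X_{r-p} \simeq X_{25,\,26}$; both the corollary (with $n = 1$, $\delta = 0$ and $\Sigma_5(130) = 2$) and \Cref{dimension formula for X_{r-1}} applied to $X_{25,\,26}$ give $\dim X_{r-p} = 4$.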
The second main goal of this paper is to give a complete description of certain quotients of the symmetric power representations $V_r$ involving the monomial submodules $X_{r-i}$ in \eqref{first row}.
Let $V_r^{(m)}$, for $m \geq 0$, be the $\Gamma$-submodule of $V_r$ consisting of all polynomials $F(X,Y)$ divisible by $\theta(X,Y)^m$. The submodules $V_r^{(m)}$ cut out an exhaustive decreasing filtration of $V_r$:
\begin{eqnarray} \label{theta filtration}
V_r \supset V_r^{(1)} \supset V_r^{(2)} \supset
\cdots \supset V_r^{(m)} \supset \cdots \supset 0. \end{eqnarray}
This filtration is much better understood than the monomial filtration \eqref{first row}. For one, the structure of each of the individual terms in this filtration is easy to write down. By what we have said above, we have $V_r^{(m)} \cong V_{r-m(p+1)} \otimes D^m$, for $D : \Gamma \rightarrow {\mathbb F}_p^\ast$ the determinant character. Moreover, for all (but possibly the last) non-zero subquotients in the filtration \eqref{theta filtration}, we have
\begin{eqnarray*}
V_r^{(m)} / V_r^{(m+1)} \cong \mathrm{ind}_B^\Gamma (a^md^{r-m}) \end{eqnarray*}
are principal series representations (Lemma~\ref{induced and star}) depending only on the congruence classes of $r$ modulo $(p-1)$ and $m$ modulo $(p-1)$, so extensions of length two, which split if and only if $r \equiv 2m \mod (p-1)$ (Lemma~\ref{Breuil map}).
Now, consider the quotients $Q(i)$ and $P(i)$ of $V_r$ involving the monomial submodules $X_{r-i}$, defined by
\begin{eqnarray*}
Q(i) & := & \dfrac{V_r}{X_{r-i} + V_r^{(i+1)}}, \\ \end{eqnarray*}
for $i \geq 0$, and
\begin{eqnarray*}
P(i) & := & \dfrac{V_r}{X_{r-(i-1)} + V_r^{(i+1)}}, \\ \end{eqnarray*} for $i > 0$.
These quotients are of number theoretic importance. They arise when one is trying to compute the reductions of crystalline two-dimensional representations of the Galois group of ${\mathbb Q}_p$ of Hodge-Tate weights $(0, r+1)$ and positive slope, using the mod $p$ Local Langlands Correspondence. The quotient $P(i)$ arises for integral slopes $i > 0$ and $Q(i)$ for fractional slopes in the interval $(i, i+1)$ with $i \geq 0$, by \cite[Remark 4.4]{BG09}. Thus, it is important to know the structure of the quotients $P(i)$ and $Q(i)$ as $\Gamma$-modules. In this paper, we will restrict to discussing the structure of these quotients, deferring the number theoretic applications to future works, except for an example at the end of this introduction (Corollary~\ref{cor reduction}). Moreover, since $P(i)$ is closely related to $Q(i-1)$, for $i > 0$, by the discussion around the exact sequence \eqref{Q i-1 and P i} below, in this paper we will restrict to essentially discussing the structure of the quotients $Q(i)$, for $i \geq 0$.
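\noindent Again, one can experiment numerically for small $p$: reusing the helpers \verb|orbit_rows| and \verb|rank_mod_p| from the sketch above, $\dim Q(i) = \dim V_r - \dim\bigl(X_{r-i} + V_r^{(i+1)}\bigr)$, and $V_r^{(i+1)}$ is spanned by $\theta^{i+1}$ times the monomials of degree $r - (i+1)(p+1)$, so that the snippet below (again with ad hoc names of ours, and assuming $r \geq (i+1)(p+1)$) computes $\dim Q(i)$ directly from the definition.
\begin{verbatim}
def poly_mult(f, g, p):
    # Product of homogeneous polynomials given as coefficient lists
    # indexed by the exponent of Y, with coefficients reduced mod p.
    h = [0] * (len(f) + len(g) - 1)
    for a, fa in enumerate(f):
        for b, gb in enumerate(g):
            h[a + b] = (h[a + b] + fa * gb) % p
    return h

def theta_power_rows(p, r, m):
    # Rows spanning V_r^(m): theta^m times each monomial of degree r - m(p+1).
    theta = [0] * (p + 2)
    theta[1], theta[p] = 1, p - 1          # theta = X^p Y - X Y^p
    t = [1]
    for _ in range(m):
        t = poly_mult(t, theta, p)
    s = r - m * (p + 1)
    rows = []
    for k in range(s + 1):                 # multiply theta^m by X^(s-k) Y^k
        row = [0] * (r + 1)
        for j, c in enumerate(t):
            row[j + k] = c
        rows.append(row)
    return rows

def dim_Q(p, r, i):
    rows = orbit_rows(p, r, i) + theta_power_rows(p, r, i + 1)
    return (r + 1) - rank_mod_p(rows, p)
\end{verbatim}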
The quotient $Q(0)$ is irreducible \cite{BG09}, whereas the quotient $Q(1)$ has at most three Jordan-H\"older (JH) factors \cite{BG15}. In this paper, we prove several results which in principle allow us to deduce the JH factors of the quotients $Q(i)$, for all $1 \leq i \leq p-1$. We summarize these results now. For an integer $n$, let $[n] \in \{1, 2, \ldots, p-1\}$ be such that $n \equiv [n] \mod (p-1)$. Let $r \equiv a \mod (p-1)$, for $a \in \{1, 2, \ldots, p-1\}$, so that $a = [r]$. We will also need to consider certain intervals of the congruence classes $\{0,1, \ldots, p-1 \}$ mod $p$.
We first treat the case when $i$ is neither $a$ nor $p-1$. \begin{theorem}\label{i not a nor p - 1} Let $p \geq 3$, $1 \leq i \leq p-1$, $r \geq i(p+1)+p$, $r \equiv a \mod (p-1)$ with $a \in \{1, 2, \ldots, p-1\}$ and $r_0$ be the constant term in the base $p$-expansion of $r$. If $i \neq a$, $p-1$ and $j = \min \{i, [a-i] \}$, then there is an exact sequence of $\Gamma$-modules
\begin{eqnarray}
\label{intro i vs j-1}
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(j-1) \rightarrow 0, \end{eqnarray}
where $W$ is an explicit subquotient of $V_r^{(j)}/V_r^{(i+1)}$, completely determined by the relationship between $r_0$ and certain explicit interval(s) of the congruence classes mod $p$ which depend only on $a$ and $i$ (and, in some cases, a comparison between $[a-r_0]$ and $r_0$). \end{theorem}
The explicit intervals depend on the vanishing (or non-vanishing) of certain binomial coefficients modulo $p$. More precisely, if $i$ is neither $a$ nor $p-1$, then we have the following explicit version of Theorem~\ref{i not a nor p - 1} which deal with the cases (i) $i < [a-i]$, (ii) $i = [a-i]$ and (iii) $i > [a-i]$ separately.
\begin{theorem}
\label{intro i < [a-i]}
Let $p \geq 3$, $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0}~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and let
$1 \leq i < [a-i] < p-1$. If
$r \geq i(p+1)+p $, then we have the following exact sequence
\begin{align*}
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0,
\end{align*}
where $W= V_{p-1-[a-2i]} \otimes D^{a-i}$ if
$\binom{r-[a-i]-1}{i} \not \equiv 0 \mod p$ and
$W= V_{r}^{(i)}/V_{r}^{(i+1)}$ otherwise. \end{theorem}
\begin{theorem}
\label{intro i = [a-i]}
Let $p \geq 3$, $r \equiv a ~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and
let $1 \leq i < p-1$ with $i=[a-i]$. If $ r \geq i(p+1)+p$, then
\[
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0,
\]
where
\begin{align*}
W \cong
\begin{cases}
0,
&\mathrm{if}~ \binom{r-i+1}{i+1} \not \equiv 0 \mod p , \\
V_{p-1} \otimes D^{i}, & \mathrm{if}~ r_{0}=i, \\
V_{0} \otimes D^{i}, & \mathrm{if}~r_{0}=i-1, \\
V_{r}^{(i)}/V_{r}^{(i+1)}, &\mathrm{otherwise}.
\end{cases}
\end{align*} \end{theorem}
\begin{theorem}
\label{intro i > [a-i]}
Let $p \geq 3$, $r \equiv a ~ \mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$ and
let $r \equiv r_{0}~ \mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
Let $1 \leq [a-i]<i < p-1$ and $i(p+1)+p \leq r$.
Then we have an exact sequence of $\Gamma$-modules
\begin{align*}
0 \rightarrow W \rightarrow Q(i) \rightarrow Q([a-i]-1) \rightarrow 0,
\end{align*}
where
\begin{enumerate}[label= \emph{(\roman*)}]
\item If $ \binom{r-[a-i]+1}{i+1} \not \equiv 0 \mod p$,
then $W= 0$.
\item If $\binom{r-[a-i]+1}{i+1} \equiv 0 \mod p$ and
$\binom{r-i-1}{[a-i]} \not \equiv 0 \mod p$, then we have
\begin{enumerate}
\item[\em{(a)}] If $[a-r_{0}] < r_{0}+1$, then
$
0 \rightarrow V_{r}^{([a-r_{0}]+1)}/V_{r}^{(i+1)} \rightarrow W
\rightarrow V_{[a-2r_{0}]} \otimes D^{r_{0}} \rightarrow 0.
$
\item[\em{(b)}] If $[a-r_{0}] = r_{0} +1 $, then
$W= V_{r}^{([a-r_{0}])}/ V_{r}^{(i+1)}$.
\item[\em{(c)}] If $ [a-r_{0}] > r_{0}+1 $,
then
$
0 \rightarrow V_{r}^{([a-r_{0}])}/V_{r}^{(i+1)} \rightarrow W
\rightarrow V_{p-1-[2r_{0}+2-a]} \otimes D^{r_{0}+1} \rightarrow 0.
$
\end{enumerate}
\item If $\binom{r-i-1}{[a-i]} \equiv 0 \mod p$ and
$r \not \equiv [a-i]+i ~\mathrm{mod}~p$,
then $W= V_{r}^{([a-i])}/V_{r}^{(i+1)}$.
\end{enumerate} \end{theorem}
On the other hand, when $i = a$ or $i = p-1$ we have the following theorem.
\begin{theorem}\label{i = a or p - 1}
Let $p \geq 3$, $r \equiv a \mod (p-1)$ with $a \in \{1, 2, \ldots, p-1\}$. Let
$i = a$ or $p-1$. If $r \geq i(p+1)+p$, then there is an exact sequence of
$\Gamma$-submodules
\begin{eqnarray*}
0 \rightarrow W' \rightarrow P(i) \rightarrow Q(i) \rightarrow 0,
\end{eqnarray*} where $W'$ is explicitly determined by whether $r_0$ lies in an explicit interval of the congruence classes mod $p$ which depends only on $a$. \end{theorem}
\noindent Explicit versions of this theorem can be found in Theorem~\ref{Structure of Q(i) if i = a} for $i = a$ and Theorem~\ref{Structure of Q(i) if i = p - 1} for $i = p-1$.
The proof of Theorem~\ref{i not a nor p - 1} uses several auxiliary results which may be of independent interest. The key point is to determine $W$ explicitly in the exact sequence \eqref{intro i vs j-1}. To this end, we note that there is an exact sequence (this is the first column in \eqref{commutative diagram} below)
\begin{equation} \label{W exact sequence intro}
0 \rightarrow X_{r-i}^{(j)}/X_{r-i}^{(i+1)} \rightarrow
V_{r}^{(j)}/V_{r}^{(i+1)} \rightarrow W \rightarrow 0,
\end{equation}
for $0 \leq j \leq i \leq p-1$, so the proof of Theorem~\ref{i not a nor p - 1} reduces to determining the quotients
\begin{equation}\label{successive quotients intro}
X_{r-i}^{(j)}/X_{r-i}^{(j+1)}, \end{equation}
for all $0 \leq i, j \leq p-1$. A novel argument involving principal series representations coming from the $R$-valued points of $\mathrm{GL}_2$, where $R$ is the ring of dual numbers $\mathbb{F}_{p}[\epsilon]$ with $\epsilon^2 = 0$ and higher generalizations of this ring, allows us to show that $Q(i)$ and all the terms in \eqref{W exact sequence intro} and \eqref{successive quotients intro} are periodic in $r$ modulo $p(p-1)$ (\Cref{arbitrary quotient periodic}). Determining the quotients \eqref{successive quotients intro} in the case $j = 0$ is easy (\Cref{Structure X(1)}). One may reduce the case of a given $j \geq 1$ to three special values of $i$, namely $i = j$, $[a-j]$ and $0$ (\Cref{reduction}). The subcase $i = 0$ is treated in Proposition~\ref{singular quotient X_{r}}. The subcases $j = i$ and $[a-i]$ are treated in Propositions~\ref{singular quotient i < [a-i]}, \ref{singular i= [a-i]} and \ref{singular i>r-i}, by dividing our discussion into the three cases mentioned above, namely (i) $i < [a-i]$, (ii) $i = [a-i]$ and (iii) $i > [a-i]$. The answers are determined by explicit intervals of the congruence classes modulo $p$. This determines $W$ explicitly in terms of these intervals. By the exact sequence \eqref{intro i vs j-1}, we obtain the structure of $Q(i)$ in terms of $W$ and $Q(j-1)$ as in Theorems~\ref{Structure of Q(i) if i<[a-i]}, \ref{Structure of Q i=[a-i]} and \ref{Structure of Q(i) i>[a-i]}. This proves Theorem~\ref{i not a nor p - 1}. The more explicit versions, Theorems~\ref{intro i < [a-i]}, \ref{intro i = [a-i]} and \ref{intro i > [a-i]}, follow immediately using a criterion for membership (or lack thereof) in these intervals in terms of the vanishing (or non-vanishing) of certain binomial coefficients mod $p$ (cf. Lemma~\ref{interval and binomial}).
The proof of Theorem~\ref{i = a or p - 1} is simpler and uses similar ideas. The module $W'$ is determined in Propositions~\ref{singular i=a} and \ref{singular i=p-1}, proving the theorem (see Theorems~\ref{Structure of Q(i) if i = a}, \ref{Structure of Q(i) if i = p - 1}).
We now illustrate how (the explicit versions) of Theorem~\ref{i not a nor p - 1}
and \ref{i = a or p - 1} above can in principle be used to recursively determine all the JH factors of $Q(i)$, for all $1 \leq i \leq p-1$, reducing the computation to $Q(0)$ or $Q(1)$. This also allows us to introduce the explicit intervals mentioned above. We first treat the case $i \neq a$, $p-1$. We divide our discussion according to the three cases (i), (ii), (iii) mentioned above.
(i) Assume that $a$ is strictly larger than $2i$, that is, $i < a-i = [a-i]$, so $i < a$. By Theorem~\ref{i not a nor p - 1}, we see that $Q(i)$ is determined in terms of $W$ and $Q(i-1)$. In this case the interval of residue classes modulo $p$ mentioned in the statement of Theorem~\ref{i not a nor p - 1} is $${\mathcal I}(a,i) = \{a-i+1, a-i+2, \ldots, a-1,a \},$$ and $W$ is all of (respectively, the cosocle of) $V_r^{(i)}/V_r^{(i+1)}$ if $r_0 \in {\mathcal I}(a,i)$ (respectively, if $r_0 \not\in {\mathcal I}(a,i)$). Indeed, the interval above is precisely the residue classes of $r$ modulo $p$ for which the binomial coefficient in Theorem~\ref{intro i < [a-i]} vanishes modulo $p$. Applying Theorem~\ref{intro i < [a-i]} recursively, we see that $Q(i)$ has all the cosocle JH factors of $V_r^{(j)}/V_r^{(j+1)}$, for $0 \leq j \leq i_0-1$, and all the JH factors of $V_r^{(i_0)}/V_r^{(i + 1)}$, where $1 \leq i_0 \leq i$ is the smallest integer such that $r_0 \in {\mathcal I}(a,i_0)$ if it exists, else $i_0 = i+1$.
If $i < [a-i]$, but with $i > a$ instead, then $i - 1$ still satisfies these inequalities if $i-1 > a$, so we can again recursively apply Theorems~\ref{i not a nor p - 1} and \ref{intro i < [a-i]},
this time with the interval $$\mathcal{I}(a,i) = \{a, a+1, \ldots, [a-i]-1, [a-i] \}^c,$$ where $c$ denotes the complement in the residue classes $\{0,1,\ldots,p-1\}$ mod $p$, to determine the JH factors of $Q(i)$ in terms of the $W$ and $Q(a)$. We may then apply Theorem~\ref{i = a or p - 1} to determine the JH factors of $Q(a)$ and therefore of $Q(i)$.
(ii) When $i \geq [a-i]$ with $i \neq a$, $p-1$, we also need to consider the intervals of residue classes modulo $p$ \begin{eqnarray*}
\mathcal{J}(a,i) = \begin{cases}
\{a-i-1, a-i, \ldots, a-2, a-1\}, & \text{if } i < a, \\
\{a-1, a, \ldots, [a-i]-3, [a-i]-2\}^c, & \text{if } i > a.
\end{cases} \end{eqnarray*} If we are at the boundary of the cases treated in (i), namely $i = [a-i]$, with $i \neq a$, $p-1$, then it is the interval $\mathcal{J}(a,i)$ that plays a role in determining $W$ in Theorem~\ref{i not a nor p - 1}, since $\mathcal{J}(a,i)$ is precisely the residue classes of $r$ modulo $p$ for which the binomial coefficient in Theorem~\ref{intro i = [a-i]} vanishes modulo $p$.
Applying Theorem~\ref{intro i = [a-i]}, we are reduced to determining the structure of $Q(i')$ with $i'=i-1$. If $i'=0$, we are done, else $i'$ satisfies $1 \leq i' < [a-i']$ and we can apply the arguments in (i) to determine $Q(i')$, unless $i' = a$, in which case we apply Theorem~\ref{i = a or p - 1} instead.
(iii) Finally, if $[a-i] < i < p-1$ (the hardest case), then $j = [a-i]$ in Theorem~\ref{i not a nor p - 1}, and both the intervals ${\mathcal I}(a,[a-i])$ and $\mathcal{J}(a,i)$ (along with the size of $[a-r_0]$ compared to $r_0$, in some cases) play a role in determining $W$, by Theorem~\ref{intro i > [a-i]}. So we are reduced to determining $Q(i')$, for $i' = [a-i]-1$. As in case (ii), if $i'=0$ we are done, else $i'$ satisfies $1 \leq i' < [a-i']$, and we are reduced to case (i), unless $i' = a$, in which case we apply Theorem~\ref{i = a or p - 1} instead.
We now make some remarks about determining the JH factors of $Q(i)$ when $i = a$ and $p-1$. If $i = a$, then Theorem~\ref{i = a or p - 1} determines $W'$, so the JH factors of $Q(a)$ can be determined from those of $P(a)$, hence by what we have said above, from those of $Q(a-1)$. If $a = 2$, we are reduced to $Q(1)$ and are done, else $i' = a-1$ satisfies $[a-i']<i'$, so applying Theorem~\ref{i not a nor p - 1} with $j = \min\{i',[a-i']\} = 1$, we are reduced to $Q(0)$. Finally, if $i = p-1$ and $i \neq a$, then $Q(p-1)$ is determined by $P(p-1)$ and $W'$ by Theorem~\ref{i = a or p - 1}, hence by $Q(p-2)$, hence by $Q(a)$ if $a = p-2$, and again by $Q(a)$ if $a \leq p-3$, applying Theorem~\ref{i not a nor p - 1} with $j = \min\{p-2, [a-(p-2)]\} = a +1$. But we have just determined the JH factors of $Q(a)$ in all cases, so we are again done.
As an example of the strategy outlined above, we now determine all cases for which the quotient $Q(i)$ is irreducible, for $1 \leq i \leq p-1$ (see Theorem~\ref{irreducible Q(i)}).
\begin{theorem}\label{irreducible}
Let $p \geq 3$, $1 \leq i \leq p-1$, $r \geq i(p+1)+p$, $r \equiv a \mod (p-1)$
with $a \in \{1, 2, \ldots, p-1\}$ and let $r_0$ be the constant term in the
base $p$-expansion of $r$. Then the quotient $Q(i)$ of $V_r$ is irreducible
if and only if either
\begin{itemize}
\item $i = a-1$ or $a$, and $r_0 \in \{a, a+1, \ldots p-1\}$, or,
\item $i = p-1$, $a = 1$ and $r_0 = 0$.
\end{itemize} \end{theorem}
This result is of special number theoretic interest since it immediately solves the reduction problem for the Galois representations mentioned above in the cases that $Q(i)$ is irreducible and does not have dimension $p-1$. We state the result, assuming some familiarity with the notation. Let $k \geq 2$ be an integer and $a_p \in \bar{\mathbb{Q}}_p$ have positive $p$-adic valuation $v(a_p)$, where $v$ is normalized so that $v(p) = 1$. Let $V_{k,a_p}$ be the unique two-dimensional $p$-adic crystalline representation defined over $\bar{\mathbb{Q}}_p$ of the Galois group of $\mathbb{Q}_p$ attached to this data, having Hodge-Tate weights $(0, r+1)$, for $r =k-2$, and slope $v(a_p)>0$.
\begin{corollary} \label{cor reduction}
Let $r = k-2 \equiv a \mod (p-1)$, for $a \in \{1, 2, \ldots, p-1\}$,
and assume that the constant term $r_0$ lies in the range
$\{a, a+1, \ldots, p-1 \}$.
If the slope $v(a_p)$ is fractional, with either
\begin{itemize}
\item[$\bullet$] $v(a_p) \in (a-1, a)$ for $2 \leq a \leq p-1$, or
\item[$\bullet$] $v(a_p) \in (a, a+1)$ for $a \neq p-2$,
\end{itemize}
then the reduction of the crystalline representation $\bar{V}_{k,a_p}$ of $V_{k,a_p}$ is {\em irreducible}. \end{corollary}
\noindent In fact, one checks that the reduction $\bar{V}_{k,a_p}$ is isomorphic to the induced representation $\operatorname{ind}(\omega_2^{a+1})$, where $\omega_2$ is the fundamental character of level $2$ of the Galois group of the quadratic unramified extension of $\mathbb{Q}_p$. When $i = a = p-2$ or $i = p-1$, $a = 1$, the quotient $Q(i)$ is irreducible but has dimension $p-1$ and one may only conclude that $\bar{V}_{k,a_p} \cong \operatorname{ind}(\omega_2^{a+1})$ {\it if} it is irreducible.
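By way of illustration, take $p = 7$ and $k = 28$, so that $r = 26 \equiv 2 \mod 6$, $a = 2$ and $r_0 = 5 \in \{2, 3, \ldots, 6\}$. The corollary then says that $\bar{V}_{28,\,a_p}$ is irreducible, and by the preceding remark isomorphic to $\operatorname{ind}(\omega_2^{3})$, whenever the slope $v(a_p)$ lies in $(1,2)$ or in $(2,3)$.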
\section{Preliminaries}
The aim of this section is to recall some basic results concerning the symmetric power representations $V_{r}$ and the principal series representations of $\operatorname{GL}_{2}(\mathbb{F}_{p})$, and to prove some explicit results involving binomial coefficients in characteristic $p$.
\textbf{Notation}: We fix a prime number $p$. We write $\mathbb{Q}_p$ (resp. $\mathbb{Z}_p$) for the $p$-adic completion of $\mathbb{Q}$ (resp. $\mathbb{Z}$), $\mathbb{F}_{p}$ for the field with $p$ elements, $\bar{\mathbb{F}}_{p}$ for a fixed algebraic closure of $\mathbb{F}_{p}$. We let $M := \mathrm{M}_{2}(\mathbb{F}_{p})$, $\Gamma := \operatorname{GL}_{2}(\mathbb{F}_{p})$, $B \subset \Gamma$ the subgroup of upper triangular matrices, $U \subset B$ the subgroup of unipotent matrices and $H \subset B $ the subgroup of diagonal matrices.
For a positive integer $r$, let $\Sigma_{p}(r)$ denote the sum of digits in the base $p$-expansion of $r$. It is easy to see that $\Sigma_{p}(r) \equiv r \mod (p-1)$, for every $r \in \mathbb{N}$. Also $\Sigma_{p}(p^{n}r) = \Sigma_{p}(r)$, for all $n$, $r \geq 0$ and $\Sigma_{p}(r-1) = \Sigma_{p}(r)-1$ if $p \nmid r$. We will be considering the base $p$-expansion of $r$ quite often which we denote by
\begin{align} \label{base p expansion of r}
r= r_{m}p^{m}+ \cdots +r_{1}p+r_{0}, \end{align}
where $r_{m} \neq 0$ and $0 \leq r_{j} \leq p-1$. The constant term $r_0$ and the linear term $r_1$ will play key roles.
For $n \in \mathbb{Z}$, define $[n] \in \lbrace 1,\ldots, p-1 \rbrace$ by $n \equiv [n] $ mod ($p-1$). Note that $[[m]-[n]] = [m-[n]] = [[m]- n] = [m-n]$, $\forall$ $m$, $n \in \mathbb{Z}$. We finally recall the Kronecker delta function: if $S$ is any set, and $s_{1}$, $s_{2} \in S$, then we define
\begin{align*}
\delta_{s_{1},s_{2}} =
\begin{cases}
0, &\mathrm{if} ~ s_{1} \neq s_{2}, \\
1, &\mathrm{if} ~ s_{1} = s_{2}.
\end{cases} \end{align*}
Let $V_{r} $ denote the space of homogeneous polynomials $F(X,Y)$ of degree $r$ in two variables $X$, $Y$ with coefficients in $\mathbb{F}_{p}$. The semigroup $M$ acts on $V_{r}$ by $\begin{psmallmatrix} a & b \\ c & d\end{psmallmatrix} \cdot F(X,Y)= F(aX+cY, bX+dY)$, for $\begin{psmallmatrix} a & b \\ c & d\end{psmallmatrix} \in M$. An $\mathbb{F}_{p}[M]$-module $V$ is called {\it singular} if every singular matrix $t \in M$ annihilates $V$, i.e., if $t \cdot V =0$, $\forall$ $t \in M \smallsetminus \Gamma$. The largest singular submodule of an arbitrary $\mathbb{F}_{p}[M]$-module $V$ is denoted by $V^{\ast}$. Let $D : \Gamma \rightarrow \mathbb{F}_{p}^\ast$ denote the determinant character of $\Gamma$. Recall the Dickson invariant \[
\theta := X^{p}Y-XY^{p} = -X \cdot \prod\limits _{\lambda \in \mathbb{F}_{p}}
(Y- \lambda X) \in V_{p+1} \] on which $\Gamma$ acts by $D$. Also, for each $m \in \mathbb{N}$, define \[
V_{r}^{(m)} = \{ F(X,Y) \in V_{r} : \theta^{m} \text{ divides } F(X,Y)
\text{ in } \mathbb{F}_{p}[X,Y] \}, \] so that $ V_{r} \supseteq V_{r}^{(1)} \supseteq V_{r}^{(2)} \supseteq \cdots $ is a chain of $\Gamma$-modules of length $\lfloor \frac{r}{p+1}\rfloor +1$. By \cite[(4.1)]{Glover}, we have $V_{r}^{\ast} =V_{r}^{(1)}$ and $ V_{r}^{(m)} \cong V_{r-m(p+1)} \otimes D^{m}$, for all $m \in \mathbb{N}$.
\subsection{Modular representations of \texorpdfstring{$M$ and $\Gamma$}{}}
\subsubsection{Results on \texorpdfstring{$V_{r}$}{}.} \label{prelim}
Let $X_{r-i,\,r}$ be the $\mathbb{F}_{p}[\Gamma]$-submodule of $V_{r}$ generated by the monomial $X^{r-i} Y^{i}$, for $0 \leq i \leq r$. The representations $V_r$ were studied by Glover \cite{Glover}. In this subsection, we recall a few results from \cite{Glover} and \cite{BG15} about $V_{r}$ and its $\Gamma$-submodules $X_{r,\,r}$ and $X_{r-1,\,r}$. One has to be careful with notation when using the results of \cite{Glover} as Glover indexed the symmetric power representations by dimension instead of by the degree of the polynomials involved.
We start with the following well-known result describing the irreducible representations of $\Gamma$ (see \cite{BN41}). These representations form the Jordan-H\"older (JH) factors of the various representations of $\Gamma$ studied later.
\begin{lemma}
If $0 \leq r \leq p-1$ and $1 \leq j \leq p-1$, then $V_{r} \otimes D^{j}$ is
an irreducible $\Gamma $-module. In fact these $p(p-1)$ modules are the
set of all irreducible $\Gamma$-modules. \end{lemma}
We note the following congruence modulo $p$ which we use often. With the convention $0^0=1$ we have for any $i \geq 0$, \begin{align}\label{sum fp}
\sum_{\lambda \in \mathbb{F}_{p}} \lambda^{i} \equiv
\begin{cases}
-1, &\mathrm{if}~i=n(p-1), ~ \mathrm{for ~ some ~} n \geq 1, \\
0, & \mathrm{otherwise}.
\end{cases} \end{align}
Next we show that the $\Gamma$-modules generated by the first $p$ monomials, i.e., $X^{r-i}Y^{i}$, for $0 \leq i \leq p-1$, form an ascending chain of submodules of $V_{r}$.
\begin{lemma}\label{first row filtration}
For $r \geq p$, we have $X_{r,\,r} \subseteq X_{r-1,\,r} \subseteq \cdots
\subseteq X_{r-i,\,r}\subseteq \cdots \subseteq X_{r-(p-1),\,r}$. \end{lemma}
\begin{proof}
Let $1 \leq i \leq p-1$. We have
\begin{align*}
\sum_{a \in \mathbb{F}_{p}^{\ast}} a^{-1}
\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} \cdot X^{r-i}Y^{i}
& = \sum_{a \in \mathbb{F}_{p}^{\ast}} a^{-1} X^{r-i} (aX+Y)^{i} \\
& = \sum_{a \in \mathbb{F}_{p}^{\ast}} a^{-1} \sum_{j=0}^ {i}
\binom{i}{j}a^{j} X^{r-i+j} Y^{i-j} \\
& = \sum_{j=0}^ {i} \binom{i}{j} X^{r-i+j} Y^{i-j}
\sum_{a \in \mathbb{F}_{p}^{\ast}} a^{j-1} =-i X^{r-(i-1)}Y^{i-1}.
\end{align*}
Since $i \not\equiv 0$ mod $p$, it follows that $X^{r-(i-1)}Y^{i-1} \in X_{r-i,\,r}$,
hence $X_{r-(i-1),\,r} \subseteq X_{r-i,\,r}$.
\end{proof}
By the lemma, $X_{r-i,\,r}$ is $M$-stable, for $0 \leq i \leq p-1$, since if $t$ is singular, then $t \cdot X^{r-i}Y^i \in X_{r, \, r}$, by \cite[(4.4)]{Glover}.
We next recall a Clebsh-Gordon type result from \cite{Glover} which gives the decomposition of the tensor product of two irreducible representations of $\Gamma$.
\begin{lemma}\label{ClebschGordan}\emph{\cite[(5.5)]{Glover}}
Let $p \geq 2$ and $0 \leq m \leq n \leq p-1 $.
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $0 \leq m+n \leq p-1$, then
\begin{align*}
V_{m} \otimes V_{n}
\cong V_{m+n} \oplus (V_{m-1} \otimes V_{n-1} \otimes D)
\cong \bigoplus_{l=0}^{m} V_{m+n-2l} \otimes D^{l}.
\end{align*}
\item If $p \leq m+n \leq 2p-2$, then
\begin{align*}
V_{m} \otimes V_{n} & \cong V_{p(m+n+2-p)-1} \oplus
V_{(p-n-2)} \otimes V_{(p-m-2)} \otimes D^{m+n+2-p} \\
& \cong V_{p(m+n+2-p)-1} \oplus
\bigoplus_{l=0}^{p-n-2} V_{2p-2-m-n-2-2l} \otimes D^{m+n+2-p+l}.
\end{align*}
\end{enumerate}
\end{lemma}
The following dimension formula for $X_{r-1,\,r}$ was proved in \cite{BG15}.
\begin{lemma}\emph{\cite[Corollary 1.6]{BG15}} \label{dimension formula for X_{r-1}}
Let $p\geq 3$ and $r \geq 2p+1$. Set $\delta =1$ if $p \mid r$ and
$\delta=0$ otherwise. Then
\begin{align*}
\mathrm{dim}\ X_{r-1,\,r} =
\begin{cases}
2\Sigma_{p}(r)+\delta(p+2-\Sigma_{p}(r)), & \mathrm{if} ~ \Sigma_{p}(r)
\leq p, \\
2p+2, & \mathrm{if} ~ \Sigma_{p}(r) > p.
\end{cases}
\end{align*} \end{lemma} \begin{proof}
Write $r=p^{n}u$, with $p\nmid u$. Then $\Sigma_{p}(u-1) = \Sigma_{p}(u)-1 = \Sigma_{p}(p^{n}u)-1
=\Sigma_{p}(r)-1$. Substituting $\Sigma_{p}(u-1)=\Sigma_{p}(r)-1$ in \cite[Corollary 1.6]{BG15}
we obtain the lemma. \end{proof}
We next recall the structure of $X_{r,\, r}$ and obtain a dimension formula for it. By \cite[(4.5)]{Glover}, for $r \geq p$, $r \equiv a$ mod $(p-1)$ with $1 \leq a \leq p-1$, we have an exact sequence of $M$-modules
\begin{align}\label{Glover 4.5}
0 \rightarrow X_{r,\,r}^{(1)} \rightarrow X_{r,\,r} \rightarrow V_{a} \rightarrow 0, \end{align}
where $X_{r,\, r}^{(1)} = X_{r,\,r} \cap V_{r}^{(1)}$. More precisely, we have
\begin{lemma}\label{dimension formula for X_{r}}
Let $p\geq 3$ and $1 \leq r \equiv a \mod ~(p-1)$ with $1 \leq a \leq p-1$. Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $\Sigma_{p}(r)=a$, then $X_{r,\, r} \cong V_{a}$, as an $M$-module.
\item If $\Sigma_{p}(r)>a$, or equivalently $\Sigma_{p}(r) \geq a+p-1$, then there is a
short exact sequence of $M$-modules
\begin{align*}
0 \rightarrow V_{p-a-1} \otimes D^{a} \rightarrow X_{r,r}
\rightarrow V_{a} \rightarrow 0.
\end{align*}
\end{enumerate}
Moreover, $ \mathrm{dim} ~ X_{r,\,r} = p+1$ if and only if
$X_{r,\,r}^{(1)} \neq 0$, and
\begin{align*}
\mathrm{dim} ~ X_{r,\,r} =
\begin{cases}
\Sigma_{p}(r)+1, & \mathrm{if} ~~\Sigma_{p}(r) \leq p-1, \\
p+1, & \mathrm{if} ~~ \Sigma_{p}(r) > p-1.
\end{cases}
\end{align*} \end{lemma}
\begin{proof}
This is well known, and can be deduced from the results of \cite{Glover}, \cite{BG15}.
Write $r=p^{n}u$ with $p \nmid u$. Then $\Sigma_{p}(r) = \Sigma_{p}(u) $ and
$X_{u,\,u} \cong X_{r,\,r}$ via the map $F \mapsto F^{p^{n}}$. So it is enough
to prove the lemma with $r$ replaced by $u$. Note that
$p \nmid u$ implies that $\Sigma_{p}(u-1) = \Sigma_{p}(u)-1$.
For $a=1$, part (i) follows from the fact that $\Sigma_{p}(u)=1$ is equivalent to
$u=1$. The cases $\Sigma_{p}(u)=p$ and $\Sigma_{p}(u) > p$ of part (ii) follow
from \cite[Proposition 3.3]{BG15} and \cite[Proposition 3.8]{BG15} respectively.
For $2 \leq a \leq p-1$, part (i) follows from \cite[Lemma 4.5]{BG15}, noting
that $X_{u,\, u} \cong V_{a}$ if $u < p$. For part (ii), if $\Sigma_{p}(u)\geq a+p-1 > p$,
then $u \geq2 p+1$ and so by \Cref{dimension formula for X_{r-1}}, we have
dim $X_{u-1,\,u}= 2p+2$. Thus by \cite[Lemma 3.5]{BG15}, we have
dim $X_{u,\,u} = p+1$. Thus by \eqref{Glover 4.5}, we have $X_{u,\,u}^{(1)} \neq 0$.
Further by \cite[Lemma 4.6]{BG15}, we have
$X_{u,\,u}^{(1)} \cong V_{p-a-1} \otimes D^{a}$. This proves part (ii), for
$2 \leq a \leq p-1$. The other assertions are clear from the exact sequence \eqref{Glover 4.5}
and parts (i), (ii). \end{proof}
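By way of example (with numbers chosen only for illustration), let $p=5$. For $r=31=1\cdot 5^{2}+1\cdot 5+1$ we have $a=3=\Sigma_{5}(31)$, so $X_{31,\,31}\cong V_{3}$ and $\mathrm{dim}~X_{31,\,31}=4$; for $r=39=1\cdot 5^{2}+2\cdot 5+4$ we have $a=3$ and $\Sigma_{5}(39)=7=a+p-1$, so $X_{39,\,39}$ is an extension of $V_{3}$ by $V_{1}\otimes D^{3}$ and $\mathrm{dim}~X_{39,\,39}=p+1=6$.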
\subsubsection{Principal series.} In this subsection, we recall a few results about principal series representations. These representations play a central role in this article as they are related to modules such as $X_{r-i} /X_{r-(i-1)}$ and $V_{r}^{(i)}/V_{r}^{(i+1)}$, for $0 \leq i \leq p-1$, which we study in later sections.
Let $ (\sigma, V)$ be a representation of $B$. The induced representation $\operatorname{ind}_{B}^{\Gamma}(\sigma) $ is defined as the space of functions $f: \Gamma \rightarrow V $ satisfying $ f(b\gamma) = \sigma(b) f(\gamma)$, $\forall$ $b \in B$, $ \gamma \in \Gamma $, this space being endowed with a left $\Gamma$-action defined by right
translation of functions, i.e.,
$(\gamma\cdot f)(\gamma ') = f(\gamma' \gamma)$,
$\forall$ $\gamma,\gamma' \in \Gamma$. For any $ \gamma \in \Gamma$, $v \in V$, we define the function $[\gamma,v] \in \operatorname{ind}_{B}^{\Gamma}(\sigma)$ by
\begin{align*}
[\gamma, v] (\gamma') =
\begin{cases}
\sigma(\gamma' \gamma) v, & \mathrm{if} ~ \gamma' \in B \gamma^{-1},\\
0, & \mathrm{otherwise}.
\end{cases} \end{align*}
Every element of $\operatorname{ind}_{B}^{\Gamma}(\sigma)$ is a linear combination of functions of the form $[\gamma,v]$, for $\gamma \in \Gamma$ and $v \in V$. It can be checked that $\gamma' \cdot [\gamma, v] = [\gamma' \gamma, v]$, $\forall$ $\gamma,\gamma' \in \Gamma$. If $\sigma$ is a 1-dimensional representation, then $\operatorname{ind}_{B}^{\Gamma}(\sigma)$ is called a principal series representation. Since $\lvert \Gamma/ B \rvert =p+1$, the dimension of a principal series representation equals $p+1$.
Let $w = \begin{psmallmatrix} 0 & 1 \\ 1 & 0 \end{psmallmatrix}$. For a character $\chi: H \rightarrow \mathbb{F}_{p}^{\ast}$, we define $\chi^{w}: H \rightarrow \mathbb{F}_{p}^{\ast}$ by $\chi^w(h) = \chi(whw)$, for all $h \in H$.
Let $\chi_{1}$, $\chi_{2} : H \rightarrow \mathbb{F}_{p}^{\ast}$ be the characters defined by
\begin{align*}
\chi_{1}\left(\begin{psmallmatrix} a & 0 \\ 0 & d \end{psmallmatrix}\right) =
a, \quad \chi_{2}\left(\begin{psmallmatrix} a & 0 \\ 0 & d \end{psmallmatrix}\right) =
d, ~ \forall ~ \begin{psmallmatrix} a & 0 \\ 0 & d \end{psmallmatrix} \in H. \end{align*}
These can also be thought of as characters of $B$ via $ B \twoheadrightarrow H$. Clearly $(\chi_{1}^{i} \chi_{2}^{j})^{w} =\chi_{1}^{j} \chi_{2}^{i}$. It is well known that every character $\chi: B \rightarrow \mathbb{F}_{p}^{\ast}$ is of the form $\chi_{1}^{i} \chi_{2}^{j}$, for $1 \leq i,j \leq p-1$. For a character $\chi: B \rightarrow \mathbb{F}_{p}^{\ast}$, let $e_{\chi}$ denote a (fixed) non-zero element of the 1-dimensional representation $(\chi, V_{\chi})$. The following result explicitly describes the Jordan-H\"older (JH) factors of principal series representations and the basis elements of the underlying spaces.
\begin{lemma}\label{Structure of induced}
Let $p \geq 2$, $1 \leq i,j \leq p-1$ and $\chi =\chi_{1}^{i} \chi_{2}^{j}$. Then we have
the following exact sequence of $\Gamma$-modules
\begin{align*}
0 \rightarrow V_{[ j-i ] } \otimes D^{i} \rightarrow \operatorname{ind}_{B}^{\Gamma}
(\chi_{1}^{i}\chi_{2}^{j}) \rightarrow V_{p-1-[j-i]} \otimes D^{j}
\rightarrow 0 .
\end{align*}
The sequence splits if and only if $i = j$. Moreover,
\begin{enumerate}[label=\emph{(\roman*)}]
\item An $\mathbb{F}_{p}$-basis of the image of $V_{[j-i]}\otimes D^{i} $
in $\operatorname{ind}_{B}^{\Gamma} (\chi)$ is given by
\begin{align*}
\sum\limits_{\lambda \in \mathbb{F}_{p}} \lambda^{l}
\begin{pmatrix} \lambda & 1 \\ 1 & 0 \end{pmatrix} [1, e_{\chi}],
~ \mathrm{for} ~ 0 \leq l < [j-i]; ~ ~
\sum\limits_{\lambda \in \mathbb{F}_{p}} \lambda^{[j-i]}
\begin{pmatrix} \lambda & 1 \\ 1 &0 \end{pmatrix}
[1, e_{\chi}] + (-1)^{j} [1, e_{\chi}].
\end{align*}
\item The elements of $\operatorname{ind}_{B}^{\Gamma} (\chi)$
\begin{align*}
\sum\limits_{\lambda \in \mathbb{F}_{p}} \lambda^{l}
\begin{pmatrix} \lambda & 1 \\ 1 &0 \end{pmatrix}
[1, e_{\chi}], ~ \mathrm{for} ~ [j-i] \leq l \leq p-1,
\end{align*}
map to an $\mathbb{F}_{p}$-basis of $ V_{p-1-[j-i]} \otimes D^{j}$.
\end{enumerate}
\end{lemma}
\begin{proof}
See \cite[Proposition 2.4]{Morra} and
Lemmas 2.3, 2.6, 2.7 and Theorem 2.4 of
\cite{BP12}. \end{proof}
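To illustrate the lemma (this example is not used elsewhere), take $p=5$ and $\chi=\chi_{1}\chi_{2}^{3}$, so that $i=1$, $j=3$ and $[j-i]=2$. Then $\operatorname{ind}_{B}^{\Gamma}(\chi)$ is a non-split extension of $V_{2}\otimes D^{3}$ by $V_{2}\otimes D$, of dimension $3+3=p+1$.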
\begin{corollary}\label{Common JH factor}
Let $\chi$, $\eta : B \rightarrow \mathbb{F}_{p}^{\ast}$ be characters. Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item The Jordan-H\"older factors of $\operatorname{ind}_{B}^{\Gamma}(\chi)$ are distinct.
\item The socle of $\operatorname{ind}_{B}^{\Gamma}(\chi)$ is isomorphic to the socle of
$\operatorname{ind}_{B}^{\Gamma}(\eta)$
if and only if $\eta = \chi$.
\item The socle of $\operatorname{ind}_{B}^{\Gamma}(\chi)$ is isomorphic to the cosocle of
$\operatorname{ind}_{B}^{\Gamma}(\eta)$
if and only if $\eta = \chi^{w}$.
\end{enumerate}
Therefore $\operatorname{ind}_{B}^{\Gamma}(\chi)$ and $\operatorname{ind}_{B}^{\Gamma}(\eta)$ have a
common Jordan-H\"older factor if and only if $\eta = \chi$ or $\eta = \chi^{w}$. \end{corollary} \begin{proof}
This is an easy consequence of \Cref{Structure of induced}. \end{proof}
\subsubsection{The filtration \texorpdfstring{$V_{r}^{(m)}$}{}.}
We now prove some results concerning the modules $V_{r}^{(m)}$, for $m \in \mathbb{N}$.
We begin by giving a criterion that allows one to check when an arbitrary
polynomial $F \in V_{r}$ is divisible by $\theta ^{m}$,
slightly generalizing \cite[Lemma 3.1]{SB18}.
\begin{lemma}\label{divisibility1}
Let $p\geq 2$, $r>p$ and $F(X,Y) = \sum\limits_{j=0}^{r} a_{j} X^{r-j}Y^{j} \in V_{r}$.
Then for any $1 \leq m \leq p$, we have $F \in V_{r}^{(m)}$ if and only if
the following hold
\begin{enumerate}[label=\emph{(\roman*)}]
\item $a_{j} \neq 0 \; \Longrightarrow \; m \leq j \leq r-m$,
\item $\sum\limits_{j \, \equiv \, l \: \mathrm{mod} ~ (p-1)}^{}
\binom{j}{i} a_{j} = 0 $ in $\mathbb{F}_{p}$, $\forall$ $0 \leq i \leq m-1 $ and
$1 \leq l \leq p-1 $.
\end{enumerate} \end{lemma} \begin{proof}
We follow the proof of \cite[Lemma 3.1]{SB18}.
Consider $f(z) = \sum_{j=0}^{r} a_{j} z^{j} \in \mathbb{F}_{p} [z]$, so that $F(X,Y)=
X^{r}f(Y/X)$. Note that
\begin{align*}
\theta^{m} \mid F(X,Y)
& \Longleftrightarrow F(X,Y) =(-XY)^{m} \prod_{\lambda \in \mathbb{F}_{p}^{\ast}}
(Y-\lambda X)^{m} F_{1}(X,Y), ~ \mathrm{for ~ some} ~ F_{1}
\in V_{r-mp-m} \\
& \Longleftrightarrow X^{m}, Y^{m} \mid F(X,Y) ~
\mathrm{and} ~ f(Y/X) =
(-1)^{m}\prod_{\lambda \in \mathbb{F}_{p}^{\ast}} (Y/X - \lambda)^{m} F_{1}(1,Y/X) \\
& \Longleftrightarrow X^{m},Y^{m} \mid F(X,Y) ~ \mathrm{and} ~ f(z)
= \prod_{\lambda \in \mathbb{F}_{p}^{\ast}} (z - \lambda)^{m} f_{1}(z), ~ \mathrm{for ~ some}
~ f_{1} \in \mathbb{F}_{p}[z] \\ & \Longleftrightarrow X^{m}, Y^{m} \mid F(X,Y)~
\mathrm{and} ~ (z - \lambda )^{m} \mid f(z), \ \forall \ \lambda \in \mathbb{F}_{p}^{\ast}.
\end{align*}
The conditions $X^{m},Y^{m} \mid F(X,Y)$ are equivalent to
$a_{i} \neq 0 \Longrightarrow
m \leq i \leq r-m$, and $(z- \lambda)^{m} \mid f(z)$ if and only if
$f(\lambda) = f'(\lambda)= \cdots = f^{(m-1)} (\lambda) = 0$ in $\mathbb{F}_{p}$.
For $i \geq 0$ and $\lambda \in \mathbb{F}_{p}^{\ast}$,
we have
\begin{align*}
f^{(i)}(\lambda)
& = \sum_{j} j (j-1) \cdots (j-i+1)a_{j} \lambda^{j-i}
= \sum_{j} i ! \binom{j}{i} a_{j} \lambda^{j-i} \\
&= \sum_{l=1}^{p-1} \lambda^{l-i} \sum_{j \, \equiv \, l ~ \mathrm{mod} ~
(p-1)} i ! \binom{j}{i} a_{j}.
\end{align*}
Since $f^{(i)}(\lambda) =0$, $\forall ~ \lambda \in \mathbb{F}_{p}^{\ast}$ and $0 \leq i
\leq m-1$, by the non-vanishing of the Vandermonde determinant
and $p \nmid i!$, we obtain (ii). This completes the proof. \end{proof}
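As a quick illustration (the example is ours and is not needed later), take $p=3$, $r=5$, $m=1$ and $F=X^{4}Y-X^{2}Y^{3}=X\,(X^{3}Y-XY^{3})$, a multiple of $\theta$, so that $F \in V_{5}^{(1)}$. Accordingly, the non-zero coefficients occur at $j=1,3$, which lie in $[1,4]$, and the sums in (ii) (here $i=0$) over the two residue classes mod $2$ are $a_{1}+a_{3}+a_{5}=1-1+0=0$ and $a_{0}+a_{2}+a_{4}=0$, as the lemma predicts.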
\begin{corollary}
Let $p\geq 2$, $r > p$ and $F(X,Y) = \sum\limits_{j=0}^{r} a_{j} X^{r-j}Y^{j} \in V_{r}$.
For $1 \leq l \leq p-1$, define
$F_{l}(X,Y) = \sum\limits_{j \equiv l~ \mathrm{mod}~(p-1)}
a_{j} X^{r-j}Y^{j} \in V_{r}$.
Then, for $1 \leq m \leq p$, we have
$F(X,Y) \in V_{r}^{(m)}$ if and only if $F_{1}(X,Y), \ldots, F_{p-1}(X,Y)
\in V_{r}^{(m)}$. \end{corollary} For $r \geq p $, the map $F \mapsto \left( \gamma \mapsto F((0,1) \gamma) \right)$ defines a $\Gamma$-linear isomorphism from $V_{r}/V_{r}^{(1)}$ to $\operatorname{ind}_{B}^{\Gamma}(\chi_{2}^{r})$, see, for example, \cite[Lemma 2.4]{sandra}.
We generalize this result as follows:
\begin{lemma}\label{induced and star}
For $p \geq 2$, $m \geq 0$ and $r \geq m(p+1)+p$,
we have
$V_{r}^{(m)}/ V_{r}^{(m+1)} \cong \operatorname{ind}_{B}^{\Gamma} (\chi_{1}^{m} \chi_{2}^{r-m})$, as
$\Gamma$-modules. Furthermore, if $r< m(p+1)+p $, then
$V_{r}^{(m)}/V_{r}^{(m+1)} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{m} \chi_{2}^{r-m})$.
\end{lemma}
\begin{proof}
Let $r' = r-m(p+1)$, for $r \geq m(p+1)$. Assume $r \geq m(p+1)+p$.
By \cite[(4.1)]{Glover}, we see that
$V_{r}^{(m)}/ V_{r}^{(m+1)} \cong
V_{r'}/V_{r'}^{(1)} \otimes D^{m}$ as $\Gamma$-modules. Since
$r' \geq p$, we have
$V_{r'}/V_{r'}^{(1)} \cong \operatorname{ind}_{B}^{\Gamma}
(\chi_{2}^{r'})$. Hence
\begin{equation} \label{ps local}
V_{r}^{(m)}/V_{r}^{(m+1)} \cong \operatorname{ind}_{B}^{\Gamma}
(\chi_{2}^{r'}) \otimes D^{m} \cong \operatorname{ind}_{B}^{\Gamma}
(\chi_{2}^{r'} \otimes D^{m}) = \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{m}
\chi_{2}^{r-m}).
\end{equation}
This proves the first assertion. If $r < m(p+1)$, then
$V_{r}^{(m)} =0$,
so the
second assertion is trivial in this case. Assume
$m(p+1) \leq r < m(p+1)+p $. Then
$V_{r}^{(m)} \cong V_{r'} \otimes D^{m}$ and
$V_{r}^{(m+1)} =0$, so
$V_{r}^{(m)}/V_{r}^{(m+1)} \cong V_{r'} \otimes D^{m}$, which is
$V_{[r-2m]} \otimes D^{m}$ if $1 \leq r' \leq p-1$,
since $r' \equiv r-2m \mod (p-1)$, and is $V_0 \otimes D^m$ if $r' = 0$. In either case,
this submodule is contained in the socle of
the principal series representation
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{m} \chi_{2}^{r-m})$ in \eqref{ps local},
by \Cref{Structure of induced}
(the latter because we are in the split case of that lemma).
\end{proof}
It follows from the lemma that
$\dim V_{r}/V_{r}^{(m)} = \sum_{n=0}^{m-1} \dim V_{r}^{(n)}/V_{r}^{(n+1)} = m(p+1)$, for $m \geq 1$ and $r \geq m(p+1)-1$. We have:
\begin{lemma} \label{basis} Let $p \geq 2$, $m \geq 1$ and $r \geq m(p+1)-1$. Then \[
\Lambda = \lbrace X^{r-i}Y^{i}+ V_{r}^{(m)} : 0 \leq i \leq m(p+1)-(m+1) \rbrace \cup
\lbrace X^{i}Y^{r-i}+ V_{r}^{(m)} : 0 \leq i < m \rbrace \] is a basis of $V_{r}/V_{r}^{(m)}$. \end{lemma} \begin{proof} Since the cardinality of $\Lambda$ equals $\dim V_{r}/V_{r}^{(m)}$, it is enough to show that $\Lambda$ spans $V_{r}/V_{r}^{(m)} $ as an $\mathbb{F}_{p}$-vector space. For this we
induct on $i$ to show that $X^{r-i}Y^{i}+ V_{r}^{(m)}$
belongs to the $\mathbb{F}_{p}$-span of $\Lambda$, for all $0\leq i \leq r$.
If $0 \leq i \leq m(p+1)-(m+1)$ or $r-m < i \leq r$, then
$X^{r-i}Y^{i}+ V_{r}^{(m)}$ belongs to $\Lambda$. Assume that $mp = m(p+1)-m \leq i \leq r-m$, then \[
X^{r-i}Y^{i} - X^{r-i-m} Y^{i-mp}(XY^{p}-X^{p}Y)^{m} =
- \sum_{j=1}^{m} (-1)^{j} \binom{m}{j} X^{r-i+jp-j} Y^{i-jp+j}. \] Observe that the degree of $Y$ on right hand side above is strictly less than $i$. By induction, the right hand side modulo $V_{r}^{(m)}$ belongs to the $\mathbb{F}_{p}$-span of $\Lambda$, so $\Lambda$
is an $\mathbb{F}_{p}$-basis of $V_{r}/V_{r}^{(m)}$. \end{proof}
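For instance (a toy example, not used later), let $p=3$, $m=1$ and $r=5$. Then $\Lambda$ consists of the classes of $X^{5}$, $X^{4}Y$, $X^{3}Y^{2}$ and $Y^{5}$, and the remaining monomials reduce to these: $X^{2}Y^{3}=X^{4}Y+X\,(XY^{3}-X^{3}Y)\equiv X^{4}Y$ and $XY^{4}=X^{3}Y^{2}+Y\,(XY^{3}-X^{3}Y)\equiv X^{3}Y^{2}$ modulo $V_{5}^{(1)}$, so that $\mathrm{dim}~V_{5}/V_{5}^{(1)}=4=m(p+1)$, as expected.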
\begin{lemma}\label{Breuil map}
Let $p \geq 2$, $m \geq 0$ and $m(p+1)+p \leq r \equiv a ~\mathrm{mod}~(p-1)$
with $1 \leq a \le p-1$. Then we
have a short exact sequence of $\Gamma$-modules
\begin{align}\label{exact sequence Vr}
0 \rightarrow V_{[a-2m]} \otimes D^{m} \rightarrow V_{r}^{(m)}/ V_{r}^{(m+1)}
\rightarrow
V_{p-1-[a-2m]} \otimes D^{a-m} \rightarrow 0
\end{align}
and this sequence splits if and only if $a \equiv 2m$ $\mathrm{mod}~(p-1)$.
If $r' = r-m(p+1)$ and $a' =[a-2m]$,
then the rightmost map in \eqref{exact sequence Vr} is given on a basis by
$$
\theta^{m} X^{r'-i}Y^{i}~\mathrm{mod}~V_{r}^{(m+1)} \longmapsto
\begin{cases}
0, & \text{if} ~ 0 \leq i < a' \text{ or } i = r', \\
(-1)^{r'-i} \binom{p-1-a'}{i-a'} X^{p-1-i}Y^{i-a'}, & \text {if} ~ a' \leq i \leq p-1.
\end{cases}
$$
\end{lemma} \begin{proof}
The exact sequence \eqref{exact sequence Vr} follows from \Cref{induced and star}
and \Cref{Structure of induced}. The explicit description
of the rightmost map in \eqref{exact sequence Vr} follows from results in \cite{Glover} and \cite{Breuil}.
Indeed, by \cite[(4.1)]{Glover}, we see that $V_{r}^{(m)}/ V_{r}^{(m+1)} \cong
V_{r'}/V_{r'}^{(1)} \otimes D^{m}$ as $\Gamma$-modules.
By the proof of \cite[(4.2)]{Glover}, we see that the map
$ \psi: V_{a'+p-1}/V_{a'+p-1}^{(1)} \rightarrow V_{r'}/V_{r'}^{(1)}$ induced by
$X^{a'+p-1-i} Y^{i}
\mapsto X^{r'-i}Y^{i}$,
for $0 \leq i \leq p-1$,
and $Y^{a'+p-1} \mapsto Y^{r'}$, is an $M$-linear isomorphism. Thus, the composition
$\phi$ $$ V_{r}^{(m)}/ V_{r}^{(m+1)} \cong V_{r'}/V_{r'}^{(1)} \otimes D^{m}
\stackrel{\psi^{-1}}{\longrightarrow}
V_{a'+p-1}/V_{a'+p-1}^{(1)} \otimes D^{m},$$
given by
$\theta^{m} X^{r'-i} Y^{i} \mapsto X^{a'+p-1-i} Y^{i}$,
for $0 \leq i \leq p-1$, and $ \theta^{m} Y^{r'} \mapsto Y^{a'+p-1}$, is a
$\Gamma$-linear isomorphism.
Applying $\phi$ and using
\cite[Lemma 5.1.3]{Breuil} for
$V_{a'+p-1}/V_{a'+p-1}^{(1)}$, we obtain the explicit description
of the rightmost map in \eqref{exact sequence Vr}. \end{proof}
Let $0 \leq m \leq p-1$ and $F(X,Y) \in V_{r}^{(m)}$ be a polynomial such that the coefficients of $X^{r-j} Y^{j}$ are non-zero only if
$j \equiv l \mod (p-1)$ for some $l$. The following lemma gives another useful way to compute the image of $F(X,Y)$ under the rightmost map of the exact sequence \eqref{exact sequence Vr}.
\begin{lemma}\label{breuil map quotient}
Let $0 \leq m \leq p-1$ and $m(p+1) +p\leq r \equiv a \mod (p-1)$ with $1 \leq a
\leq p-1$. Let $F(X,Y) = \sum\limits_{j=0}^{r} a_{j} X^{r-j} Y^{j} \in V_{r}$ with
$a_{j} \neq 0$ only if $j \equiv l \mod (p-1)$, for some $l$. If $F(X,Y) \in V_{r}^{(m)}$,
then
\begin{align*}
F(X,Y) \equiv ~ \theta ^{m} G(X,Y) + \theta^{m} \left( a_{m} X^{r-m(p+1)} +
(-1)^{m} a_{r-m} Y^{r-m(p+1)} \right) \mod V_{r}^{(m+1)},
\end{align*}
where $$G(X,Y)= \left( \sum_{j=0}^{r} a_{j} \binom{j}{m} - a_{m}+
(-1)^{m+1} a_{r-m} \right) X^{r-m(p+1)-[l-m]}Y^{[l-m]}.$$ Further,
the image of $F(X,Y)$ under the quotient map
$V_{r}^{(m)}/ V_{r}^{(m+1)} \rightarrow V_{p-1-[a-2m]} \otimes D^{a-m} $
in \eqref{exact sequence Vr} is the same as the image of $\theta^{m}G(X,Y)$.
\end{lemma} \begin{proof}
Let $r'= r-m(p+1)$. Since $\theta^{m} \mid F(X,Y)$, there exist
$H(X,Y)= \sum_{j=0}^{r'} b_{j} X^{r'-j}Y^{j} \in V_{r'}$ such that
$F(X,Y) = \theta^{m} H(X,Y)$. Differentiating both sides with respect to $Y$
$m$-times and substituting $X=Y=1$ we get
\begin{align*}
m! \sum_{j=0}^{r} a_{j} \binom{j}{m} =
\left( \frac{\partial^{m}}{\partial Y^{m}} F(X,Y) \right)
\Biggr|_{\substack{X=1\\Y=1}}= m! ~ H(X,Y) \Bigr|_{\substack{X=1\\Y=1}}
=m! \sum_{j=0}^{r'} b_{j}.
\end{align*}
Comparing the coefficients of $X^{r-m}Y^{m}$, $X^{m}Y^{r-m}$ in $F(X,Y)$ and
$\theta^{m} H(X,Y)$ we get $a_{m}=b_{0}$ and $a_{r-m} = (-1)^{m} b_{r'}$.
Hence
\begin{align}\label{breuil quotient sum relation}
\sum\limits_{0<j<r'}b_{j}= \left( \sum\limits_{j=0}^ {r'} b_{j} \right)
- b_{0}-b_{r'} = \sum\limits_{j=0}^ {r} a_{j}
\binom{j}{m} - a_{m} + (-1)^{m+1} a_{r-m}.
\end{align}
Since $a_{j}=0$ if $j \not \equiv l$ mod $(p-1)$, it can be
checked that $b_{j} = 0$ if $j \not \equiv l-m$ mod $(p-1)$.
Therefore
\begin{align*}
F(X,Y) & = \theta ^{m} H(X,Y)
= \theta^{m}
\sum_{\substack{0<j<r' \\ j \equiv l-m ~\mathrm{mod}~(p-1)}} b_{j}
X^{r'-j} Y^{j} + \theta^{m} (b_{0} X^{r'} +b_{r'}Y^{r'}) \\
& \equiv \theta^{m} \left(
\sum_{0<j<r'} b_{j} \right) X^{r'-[l-m]} Y^{[l-m]} +
\theta^{m} (b_{0} X^{r'} +b_{r'}Y^{r'})
\mod V_{r}^{(m+1)} \\
& \stackrel{\eqref{breuil quotient sum relation}}{\equiv} \theta ^{m} G(X,Y) + \theta^{m} \left( a_{m} X^{r-m(p+1)} +
(-1)^{m} a_{r-m} Y^{r-m(p+1)} \right) \mod V_{r}^{(m+1)},
\end{align*}
as required.
The last assertion is clear from \Cref{Breuil map}, as
$\theta^{m}X^{r'}$, $\theta^{m}Y^{r'}$ map to zero under the rightmost map
in \eqref{exact sequence Vr}. \end{proof}
\subsection{Binomial coefficients mod $p${}.}
In this subsection, we prove several elementary results involving binomial coefficients in characteristic $p$ which will be used in later sections. We begin by recalling Lucas' theorem.
\begin{lemma} \label{lucas} \emph{(\textbf{Lucas' theorem})}
For any prime $p$, let $m$ and $n$ be two non-negative integers with
base $p$-expansions
given by $m= m_{k}p^{k}+m_{k-1}p^{k-1}+ \cdots + m_{0}$ and
$n= n_{k}p^{k}+n_{k-1}p^{k-1}+ \cdots + n_{0}$ respectively. Then
$\binom{m}{n} \equiv \binom{m_{k}}{n_{k}} \cdot \binom{m_{k-1}}{n_{k-1}} \cdots
\binom{m_{0}}{n_{0}} \mod p$, with the convention that $\binom{a}{b}=0$,
if $b>a$. \end{lemma}
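For example (the numbers here are chosen only for illustration), let $p=5$ and $m=38=1\cdot 5^{2}+2\cdot 5+3$. Taking $n=13=2\cdot 5+3$, Lucas' theorem gives $\binom{38}{13}\equiv\binom{1}{0}\binom{2}{2}\binom{3}{3}=1 \mod 5$, whereas taking $n=9=1\cdot 5+4$ gives $\binom{38}{9}\equiv\binom{1}{0}\binom{2}{1}\binom{3}{4}\equiv 0 \mod 5$.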
\begin{comment} We note the following congruence mod $p$ which we use often. With the convention $0^0=1$ we have for any $i \geq 0$, \begin{align}\label{sum fp}
\sum_{\lambda \in \mathbb{F}_{p}} \lambda^{i} \equiv
\begin{cases}
-1, &\mathrm{if}~i=n(p-1), ~ \mathrm{for ~ some ~} n \geq 1, \\
0, & \mathrm{otherwise}.
\end{cases} \end{align} \end{comment}
We next prove a lemma concerning sums involving products of binomial coefficients. \begin{lemma}\label{binomial sum}
For $m \geq 0$, $1 \leq b \leq p-1$ and $m < r \equiv a \mod (p-1)$ with
$1 \leq a \leq p-1$, we have
$$
S_{r, b, m}:=\sum\limits_{\substack {0 \leq l \leq r \\ l \equiv b ~
\mathrm{mod}~ (p-1)}}
\binom{r}{l} \binom{l}{m} \equiv \binom{r}{m} \binom{[a-m]}{[b-m]}
+ \binom{r}{m} \delta_{p-1,[b-m]} \mod p,
$$
where $\delta$ is the Kronecker delta function. \end{lemma}
\begin{proof}
Observe that
$$
S_{r, b, m}=\sum\limits_{\substack {m \leq l \leq r \\ l \equiv b ~\mathrm{mod}~
(p-1)}}
\binom{r}{l} \binom{l}{m} =
\sum\limits_{\substack {m \leq l \leq r \\ l \equiv b ~\mathrm{mod}~ (p-1)}}
\binom{r-m}{l-m} \binom{r}{m} =
\binom{r}{m} S_{r-m, [b-m], 0}.
$$
Put $r' = r-m$ and $a' = [a-m]$ and $b'= [b-m]$.
To prove the lemma we compute the following sum in two different ways. Let
\begin{align*}
T_{r',b'}:= \sum_{\lambda \in \mathbb{F}_{p}} \lambda^{p-1-b'} (1+\lambda)^{r'}.
\end{align*}
First we note that
\begin{align*}
T_{r',b'}
&= \sum_{\lambda \in \mathbb{F}_{p}^\ast} \lambda^{p-1-b'} (1+\lambda)^{r'} + \delta_{p-1,b'}
= \sum_{j=0}^{r'} \binom{r'}{j} \sum_{\lambda \in \mathbb{F}_{p}^\ast} \lambda ^{j-b'}
+ \delta_{p-1,b'} \\
& \stackrel{\eqref{sum fp}}{\equiv} -S_{r', b', 0}+ \delta_{p-1,b'} \mod p.
\end{align*}
On the other hand since $r>m$, we have
\begin{align*}
T_{r',b'}
& = \sum_{\lambda \in \mathbb{F}_{p} \smallsetminus \lbrace-1 \rbrace}
\lambda^{p-1-b'} (1+\lambda)^{r'}
= \sum_{\lambda \in \mathbb{F}_{p} \smallsetminus \lbrace-1 \rbrace}
\lambda^{p-1-b'} (1+\lambda)^{a'} \\
& = \sum_{j=0}^{a'} \binom{a'}{j} \sum_{\lambda \in \mathbb{F}_{p} \smallsetminus
\lbrace-1 \rbrace}
\lambda^{p-1+j-b'} \\
&= \left(\sum_{j=0}^{a'} \binom{a'}{j} \sum_{\lambda \in \mathbb{F}_{p} }
\lambda^{p-1+j-b'} \right) - \left(\sum_{j=0}^{a'} \binom{a'}{j} (-1)^{p-1+j-b'}
\right)
\equiv - \binom{a'}{b'} \mod p.
\end{align*}
Here the last congruence follows from the observation that the first sum
doesn't vanish if and only if $a' \geq b' $, in which case it equals $-\binom{a'}{b'}$
and for the second sum note that $\sum_{j=0}^{a'} \binom{a'}{j} (-1)^j$
$=(1-1)^{a'}=0$. Hence $S_{r', b',0} \equiv \binom{a'}{b'} + \delta_{p-1,b'}
\mod p$, as desired. \end{proof}
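As a sanity check (the numerical values are chosen only for illustration), take $p=5$, $r=7$, $b=3$ and $m=1$, so that $a=3$. Then $S_{7,3,1}=\binom{7}{3}\binom{3}{1}+\binom{7}{7}\binom{7}{1}=105+7=112\equiv 2 \mod 5$, while the right hand side equals $\binom{7}{1}\binom{[2]}{[2]}+0=7\equiv 2 \mod 5$.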
In a few proofs, we need to choose some numbers $s$ satisfying appropriate conditions. This choice is made using the following lemma.
\begin{lemma}\label{choice of s}
Let $p \leq r =r_{m}p^{m}+ \cdots +r_{1}p+r_{0}$ be the base $p$-expansion
of $r$. Then, for every $0 \leq b \leq r_{0} $ and
$1 \leq u \leq \Sigma_{p}(r)-r_{0}$, there exists a positive integer $s$ with
$p \leq s \leq r$, $s \equiv b \mod p$ such that $\Sigma_{p}(s)=b+u$ and $\binom{r}{s}
\not \equiv 0 \mod p$. In addition,
if $u< \Sigma_{p}(r)-r_{0}$,
then
$s \leq r-p$. \end{lemma}
\begin{proof}
Since $1 \leq u \leq \Sigma_{p}(r) - r_{0} = \sum_{i=1}^{m}r_{i} $, we can
find integers $s_{i}$ for $1 \leq i \leq m$, such that $0 \leq s_{i} \leq r_{i}$ and
$\sum_{i=1}^{m}s_{i}=u$. Put $s= s_{m}p^{m}+ \cdots + s_{1}p+b$.
Since $s_{i} \leq r_{i}$ and $b \leq r_{0}$ we have $s \leq r$. Also
$s \geq \sum_{i=1}^{m} s_{i} p = u p \geq p$. Clearly $s \equiv b \mod p$
and $\Sigma_{p}(s)=b+u$. By Lucas' theorem and choice of $s_{i}$, we have
$\binom{r}{s} \equiv \binom{r_{m}}{s_{m}} \cdots \binom{r_{1}}{s_{1}}
\binom{r_{0}}{b} \not\equiv 0$ mod $p$.
Further if $\sum_{i=1}^{m}s_{i} = u < \sum_{i=1}^{m}r_{i}$, then $s_{j}<r_{j} $
for some $j \geq 1$, whence $r-s \geq (r_{j}-s_{j})p^{j} \geq p$. \end{proof}
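For example (an illustrative choice only), let $p=5$ and $r=38=1\cdot 5^{2}+2\cdot 5+3$, so that $r_{0}=3$ and $\Sigma_{5}(38)=6$. For $b=2$ and $u=2$ one may take $s=12=2\cdot 5+2$: then $s\equiv 2 \mod 5$, $\Sigma_{5}(12)=4=b+u$ and $\binom{38}{12}\equiv\binom{1}{0}\binom{2}{2}\binom{3}{2}=3\not\equiv 0 \mod 5$; moreover $u<\Sigma_{5}(38)-r_{0}=3$ and indeed $s=12\leq r-p=33$.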
Next we determine when certain matrices built out of binomial coefficients are invertible mod $p$. These matrices are typical of the ones we encounter later.
\begin{proposition}\label{matrix det}
Suppose that $r \geq 2p$ and $1 \leq a \leq p-1$.
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $0 \leq i \leq j \leq a \leq p-1 $, then the matrix
$$
\left( \binom{a-n}{j-m}\right)_{0 \leq m,n \leq i}
$$
is invertible modulo $p$.
\item If $0 \leq i \leq j \leq i+j \leq a < p+i$, then
$$
\det_{0 \leq m,n \leq i } \left( \binom{r-n}{m} \binom{a-m-n}{j-m} \right)
= \binom{a-2i}{j-i}
\prod \limits _{l=0}^{i-1} \frac{(a-i-l)!(r-(a-l))^{i-l}}{(j-l)!(a-j-l)!}.
$$
The corresponding matrix is invertible mod $p$ $\iff r \not \equiv a-i+1$, $a-i+2,\ldots$,
$a-1$, $a \mod p$.
\item If $1 \leq a-i \leq i$ and $r \not \equiv a-i-1 ~\mathrm{mod}~p$ and
$r \not \equiv i+1,\ldots, a-1$, $a ~\mathrm{mod}~ p$, then the matrix
$$
\left(\begin{array}{c|c}
A ' & \mathbf{v}^{t} \\
\hline
\mathbf{w} & 0
\end{array}
\right)
$$
is invertible modulo $p$,
where $A'$ is the matrix $$\left( \binom{r-n}{m}
\binom{a-m-n}{i-m} \right)_{0 \leq m,n \leq a-i-1}$$ and
$\mathbf{v}$, $\mathbf{w} $ are the $1 \times a-i$ row vectors
$\left( \binom{i}{0}, \ldots, \binom{i}{a-i-1} \right)$,
$ \left( \binom{r}{r-(a-i)}, \ldots, \binom{r-(a-i-1)}{r-(a-i)} \right)$,
respectively.
\end{enumerate} \end{proposition}
\begin{proof}
We use elementary row and column operations to reduce the given matrices
to a particular form to which we can apply results from \cite{Viennot}
and \cite{Kra99}.
\begin{enumerate}
\item[(i)] By reversing the rows
and columns we have $ \det\limits_{0 \leq m,n \leq i}
\left( \binom{a-n}{j-m}\right)
= \det\limits_{0 \leq m,n \leq i} \left( \binom{a-i+n}{j-i+m}\right)$.
Applying the formula on Line -7 of \cite[ p. 308]{Viennot} with
$k=(i+1)$, $b= j-i$ and $a_{1}=a-i$, $a_{2}=a-i+1, \ldots$,
$a_{i+1}=a$ we get
\begin{align*}
\det_{0 \leq m,n \leq i} \left( \binom{a-i+n}{j-i+m}\right)
&= ((j-i)!)^{i+1} \frac{\binom{a-i}{j-i} \cdot \binom{a-i+1}{j-i}
\cdots \binom{a}{j-i}}{(j-i)! \cdot(j-i+1)!\cdots j!} 1! \cdot 2! \cdots i! \\
&=\frac{\binom{a-i}{j-i} \cdots\binom{a}{j-i}}{\binom{j-i}{j-i} \cdots
\binom{j}{j-i}}.
\end{align*}
Since, for $0 \leq l \leq i$, we have $0 \leq j-i \leq j-l \leq a-l \leq p-1$,
it follows from Lucas' theorem that the above determinant is non-zero
modulo $p$ and hence the matrix is invertible.
\item[(ii)] Pulling out a factor of $1/(m! (j-m)!)$ and
$(a-i-n)!/(a-j-n)!$ from the $(m+1)^{th}$-row and the $(n+1)^{th}$-column
respectively, for $0 \leq m,n \leq i$, we get
\begin{align}\label{Matrix det eqn 1}
\det_{0 \leq m,n \leq i } \left( \binom{r-n}{m} \binom{a-m-n}{j-m} \right)
&= \prod\limits_{l=0}^{i} \frac{(a-i-l)!}{(a-j-l)! (j-l)! l !} ~ \times \\
& ~~~ \det_{0 \leq m,n \leq i } \left( \frac{(r-n)!(a-m-n)!}{(r-m-n)!(a-i-n)!}
\right).\nonumber
\end{align}
Applying \cite[Lemma 3]{Kra99} with $n=i+1$, and
$$X_{1} = r, X_{2}=r-1,\ldots,X_{i+1} = r-i, $$
$$A_{2}= a-r, A_{3} =a-r-1,\ldots, A_{i+1} = a-r-(i-1),$$
$$B_{2} =0, B_{3}=-1,\ldots, B_{i+1} = -(i-1),$$
we get the determinant of the transpose of
$\left( \frac{(r-n)!(a-m-n)!}{(r-m-n)! (a-i-n)!}\right)_{0 \leq m,n \leq i } $
equals
\begin{eqnarray*}
\prod_{ l=1}^{i+1}\prod_{1 \leq l'<l} (X_{l'} - X_{l})
\times
\prod_{l=0}^{i-1} \prod_{\substack{ 2 \leq m \leq n \leq i+1 \\ n-m = l} }
(B_{m}-A_{n})
&=& \prod_{ l=1}^{i+1} (l-1)! \prod_{l=0}^{i-1} (r-(a-l))^{i-l}.
\end{eqnarray*}
Substituting the above expression in \eqref{Matrix det eqn 1}, we obtain the formula in (ii).
The statement about the invertibility can be deduced as in
(i).
\item[(iii)]
If $a-i=1$, then the matrix is equal to
$\begin{psmallmatrix} a & 1 \\ r & 0 \end{psmallmatrix}$,
which is invertible in $M_{2}(\mathbb{F}_{p})$ if $p \nmid r$. So assume
$a-i \geq 2$.
Multiplying the $(n+1)^{th}$-column by $(r-n+1)/(a-i-n+1)$ and subtracting
from the $n^{th}$-column, for $1 \leq n \leq a-i-1 $, we get
\begin{align*}
\det \left(\begin{array}{c|c}
A ' & \mathbf{v}^{t} \\
\hline
\mathbf{w} & 0
\end{array} \right) ~ = ~ -
\frac{(a-r)^{a-i-1}}{(a-i)!} (r-(a-i-1))
\det \left(\begin{array}{c|c}
A '' & \mathbf{v}^{t}
\end{array} \right),
\end{align*}
where
$A'' = \left( \binom{r-n}{m} \binom{a-1-m-n}{i-m} \right)$ and
$0 \leq m \leq a-i-1$, $0 \leq n \leq a-i-2$.
Now multiplying the $(m+1)^{th}$-row by $m/(i-m+1)$ and subtracting
from the $m^{th}$-row, for $1 \leq m \leq a-i-1$, we get
\begin{align*}
\det
\left(\begin{array}{c|c}
A ' & \mathbf{v}^{t} \\
\hline
\mathbf{w} & 0
\end{array}
\right) ~ = ~-
\frac{(a-r)^{a-i-1}}{(a-i)!} & (r-(a-i-1)) \frac{(a-1-r)^{a-i-1}}{(a-i-1)!}
\binom{i}{a-i-1} \\
& \times \det\left(\binom{r-n}{m} \binom{a-2-m-n}{i-m} \right)_{0 \leq m,n
\leq a-i-2}.
\end{align*}
Now (iii) follows from (ii). \qedhere
\end{enumerate} \end{proof}
Let us set
\begin{align}\label{A(a,i,j,r) matrix}
A(a,i,j,r) := \left( \binom{r-n}{m} \binom{[a-m-n]}{j-m} \right)_{0 \leq m, n \leq i},
\end{align} for $1 \leq a \leq p-1$ and $0 \leq i \leq j \leq r$. We have the following corollaries.
\begin{corollary}\label{A(a,i,j,r) invertible}
Let $2p \leq r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$
and let $1 \leq a \leq p-1 $.
Suppose that $0 \leq i \leq j < [a-i] < p-1$.
Then the matrix $A(a,i,j,r)$
is invertible if $r_{0} \not \in \mathcal{I}(a,i)$, where
\begin{align}\label{interval I for i < a-i}
\mathcal{I}(a,i) =
\begin{cases}
\lbrace a-i+1, a-i+2, \ldots, a-1,a \rbrace, & \mathrm{if}~
i <[a-i] <a, \\
\lbrace 0,1, \ldots a-1 \rbrace \cup
\lbrace p+a-i, p+a-i+1, \ldots, p-1 \rbrace, & \mathrm{if}~a<i<[a-i].
\end{cases}
\end{align} \end{corollary} \begin{proof}
If $i<a$, then $[a-i] =a-i$ and the condition $i < [a-i] $
implies $ 2i < a$. Thus, for $i<a$ we have
$[a-m-n] =a-m-n$, for all $0 \leq m,n \leq i$.
For $i>a$, we have
\begin{align}\label{binomial identity i>a}
\binom{[a-m-n]}{j-m} & =
\begin{cases}
\binom{p-1+a-m-n}{j-m}, ~ &\mathrm{if} ~ m+n \geq a,
\vspace*{1 mm} \\
\binom{a-m-n}{j-m}, ~ &\mathrm{if} ~ m+n < a,
\end{cases} \nonumber \\
& \equiv \binom{p-1+a-m-n}{j-m} ~ \mathrm{mod}~p, \end{align} where the first equality follows from the definition of [ $\cdot$ ] and the second follows from Lucas' theorem as the assumption $a<i \leq j$ implies $\binom{p-1+a-m-n}{j-m}$, $\binom{a-m-n}{j-m} \equiv 0$ mod $p$ in the case $m+n <a$. Let $a' = [a-i]+i$. Note that $a'=a$ (resp. $p-1+a$) if $i<a$ (resp. $i>a$). Using these observations, we see that
\begin{align}\label{A(a,i,j,r) expression}
A(a,i,j,r) =
\left( \binom{r-n}{m} \binom{a'-m-n}{j-m}
\right)_{0\leq m,n \leq i}.
\end{align}
Since $[a-i] \leq p-1$ and $j < [a-i]$, we see that
$0 \leq i \leq j \leq i+j \leq a' < p+i$.
Using \Cref{matrix det} (ii),
it follows that $A(a,i,j,r)$ is invertible
if $r \not \equiv a'-i+1, a'-i+2, \ldots, a'-1,a' $ mod $p$
if and only if $r_{0} \not \in \mathcal{I}(a,i)$. \end{proof}
\begin{corollary}\label{block matrix invertible}
Let
$ 2p \leq r \equiv r_{0} ~\mathrm{mod}~p$
with $0 \leq r_{0} \leq p-1$ and $ 1 \leq a \leq p-1$.
Suppose $1 \leq [a-i] \leq i < p-1$. Let
$A' = A(a,[a-i]-1,i,r)$
and let
$\mathbf{v}$, $\mathbf{w} $ be the $1 \times [a-i]$ row vectors
given by $\left( \binom{i}{0}, \ldots, \binom{i}{[a-i]-1} \right)$,
$ \left( \binom{r}{r-[a-i]}, \ldots, \binom{r-([a-i]-1)}{r-[a-i]} \right)$
respectively. Then the matrix
$$
\left(\begin{array}{c|c}
A ' & \mathbf{v}^{t} \\
\hline
\mathbf{w} & 0
\end{array}
\right)
$$
is invertible mod $p$ if
$r \not \equiv [a-i]+i ~\mathrm{mod}~p$ and
$r_{0} \not \in \mathcal{J}(a,i) \smallsetminus \lbrace i \rbrace $,
where
\begin{align} \label{interval J first}
\mathcal{J}(a,i) =
\begin{cases}
\lbrace a-i-1, a-i, \ldots, a-1 \rbrace, & \mathrm{if} ~
[a-i] \leq i <a, \\
\lbrace 0,1, \ldots, a-2 \rbrace \cup
\lbrace p-2+a-i, \ldots , p-1 \rbrace, & \mathrm{if} ~
a< [a-i] \leq i .
\end{cases}
\end{align} \end{corollary} \begin{proof}
Observe that $[a-i]-1 < i < i+1 =[a-([a-i]-1)]$.
Let $a'=[a-i]+i$. By a check similar to
\eqref{A(a,i,j,r) expression}, we have
\[
A' = \left( \binom{r-n}{m}
\binom{a'-m-n}{i-m} \right)_{0 \leq m,n \leq [a-i]-1}.
\]
Indeed, let $0 \leq m,n \leq [a-i]-1$.
If $i < a$, then $a \leq 2i$, so $a -m - n \geq 2i -a + 2\geq 2$,
so $[a-m-n] = a-m-n$. If $i > a$, then $p-1 + a \leq 2i$, so if $m+n < a$, then $[a-m-n] = a-m-n$,
but $\binom{p-1+a-m-n}{i-m}$, $\binom{a-m-n}{i-m} \equiv 0$ mod $p$ by Lucas' theorem,
and if $m+n \geq a$, then $p-1+a-m-n \geq p-1 + a -2([a-i] -1) = 2i - (p-1) -a + 2 \geq 2$, so $[a-m-n] = p-1+a-m-n$.
Also, by the definition of $[~ \cdot ~]$ (by considering the cases
$i<a$ and $i>a$ separately), it follows that
\begin{align*}
\mathbf{v} &= \left( \binom{i}{0}, \ldots, \binom{i}{a'-i-1} \right) \\
\mathbf{w} & =
\left( \binom{r}{r-(a'-i)}, \ldots, \binom{r-(a'-i-1)}{r-(a'-i)} \right).
\end{align*}
Applying \Cref{matrix det} (iii) (with $a$ there equal to $a'$),
we see that the matrix in the statement is invertible if $r \not \equiv a'-i-1$ mod $p$ and
$r \not \equiv i+1,\ldots, a'-1, a'$ mod $p$, which happens if
$r \not \equiv a'$ mod $p$ and $r_{0} \not \in \mathcal{J}(a,i)
\smallsetminus \lbrace i \rbrace$.
\end{proof}
\section{The first \texorpdfstring{$p$}{} monomial submodules}
In this section, we determine the structure of the monomial submodules $X_{r-i,\,r}$, the $\Gamma$-submodule of $V_{r}$ generated by $X^{r-i}Y^{i}$, for $0 \leq i \leq p-1$. Recall that these modules are $M$-stable.
We begin by describing an $\mathbb{F}_{p}$-generating
set for $X_{r-i, \,r}$, for $0 \leq i \leq p-1$.
\begin{lemma}\label{Basis of X_r-i}
If $0 \leq i \leq p-1$, then $\lbrace X^{l}(kX+Y)^{r-l}, X^{r-l}Y^{l} : 0 \leq l \leq i,
~ k \in \mathbb{F}_{p} \rbrace$ generates $X_{r-i,\,r}$ as an $\mathbb{F}_{p}$-vector space.
Hence $\mathrm{dim}~X_{r-i,\,r} \leq (i+1)(p+1)$. \end{lemma} \begin{proof}
By Bruhat decomposition, $\Gamma= B \sqcup B w B$, where
$w=\begin{psmallmatrix} 0 & 1 \\ 1 & 0 \end{psmallmatrix}$. We first look at the
action of the Borel subgroup $B$ on $X^{r-i} Y^{i}$. Observe that
\begin{align*}
\begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \cdot X^{r-i}Y^{i} = a^{r-i}
X^{r-i}(bX+dY)^{i} = a^{r-i} \sum\limits_{l=0}^{i} \binom{i}{l} b^{i-l}d^{l}
X^{r-l} Y^{l}.
\end{align*}
Therefore $B \cdot X^{r-i}Y^{i} \subset \mathbb{F}_{p}$-span of
$\lbrace X^{r}, X^{r-1}Y, \ldots ,X^{r-i}Y^{i} \rbrace$. It is clear that any element
of $Bw$ is of the form $\begin{psmallmatrix} a & b \\ c & 0 \end{psmallmatrix}$
with $bc \neq 0$. For every $0 \leq l \leq i$, we have
\begin{align*}
\begin{pmatrix} a & b \\ c & 0 \end{pmatrix} \cdot X^{r-l}Y^{l} = (aX+cY)^{r-l} (bX)^{l}=
b^{l} c^{r-l} X^{l}(ac^{-1}X+Y)^{r-l}.
\end{align*}
Hence
\begin{align*}
BwB \cdot X^{r-i} Y^{i} & \subset \mathbb{F}_{p} \text{-span of} ~\lbrace B w \cdot X^{r-l}
Y^{l} : 0 \leq l \leq i \rbrace \\ & \subset \mathbb{F}_{p} \text{-span of} ~ \lbrace X^{l}
(kX+Y)^{r-l} : 0 \leq l \leq i , \ k \in \mathbb{F}_{p} \rbrace.
\end{align*}
Combining these observations,
we get $\gamma \cdot X^{r-i} Y^{i}
\in \mathbb{F}_{p} \text{-span of} ~\lbrace X^{l}(kX+Y)^{r-l}, X^{r-l}Y^{l} : 0 \leq l \leq i,
~ k \in \mathbb{F}_{p} \rbrace$.
This
completes the proof of the lemma as $X^{r-i}Y^{i}$ generates $X_{r-i,\,r}$
as a $\Gamma$-module. \end{proof}
We next define a surjection $X_{r-i,\,r-i} \otimes V_{i} \twoheadrightarrow X_{r-i,\,r}$, for $0 \leq i \leq p-1$, which generalizes \cite[Lemma 3.6]{BG15}.
\begin{lemma}\label{surjection1} For $r \geq i$ and $0 \leq i \leq p-1$, there exists an $M$-linear
surjection
$$\phi_{i}: X_{r-i,\,r-i} \otimes V_{i} \twoheadrightarrow X_{r-i,\,r}.$$ \end{lemma} \begin{proof}
The map $\phi_{r-i, \, i}:V_{r-i} \otimes V_{i} \rightarrow V_{r}$ sending
$F \otimes G \mapsto FG$ for $F \in V_{r-i}$ and $G \in V_{i}$, is $M$-linear by \cite[(5.1)]{Glover}.
Let $\phi_{i}$ be the restriction of $\phi_{r-i,\, i}$ to the $M$-submodule
$ X_{r-i,\,r-i} \otimes V_{i} \subseteq V_{r-i} \otimes V_{i}$. As an $M$-module
$X_{r-i, \, r-i} \otimes V_{i}$ is generated by $X^{r-i} \otimes X^{i}$, $X^{r-i}
\otimes X^{i-1}Y, \ldots ,X^{r-i} \otimes Y^{i}$ whose images $X^{r}$, $X^{r-1}Y,
\ldots, X^{r-i}Y^{i}$ lie in $X_{r-i, \, r}$, by \Cref{first row filtration}.
The surjectivity follows as $\phi_{i}(X^{r-i}
\otimes Y^{i}) = X^{r-i}Y^{i}$ generates $X_{r-i,\,r}$ as an $M$-module. \end{proof}
We next define a $\Gamma$-linear surjection from an induced representation to the quotient $X_{r-i,\,r}/X_{r-j,\,r}$, for $0 \leq j \leq i \leq p-1$. This map will be crucially used in later sections to obtain the structure of $X_{r-i,\,r}$.
\begin{proposition}\label{Succesive quotient} Let $0 \leq j \leq i \leq p-1 < r$. Then there is a $\Gamma$-linear surjection $$
\operatorname{ind}_{B}^{\Gamma}(V_{i-j-1} \otimes \chi_{1}^{r-i} \chi_{2}^{j+1})
\twoheadrightarrow X_{r-i, \, r} / X_{r-j,\,r}, $$ where $V_{-1}=0$ by convention. \end{proposition} \begin{proof}
Let $\chi = \chi_{1}^{r-i} \chi_{2}^{j+1}$ and $e_{\chi}$
be a non-zero
element in the representation $( \chi, V_{\chi})$. If $i=j$, then
$X_{r-i, \, r} / X_{r-j,\,r}=0$ and there is nothing to prove. Assume
$i> j$. Consider the $\mathbb{F}_{p}$-linear map $\psi : V_{i-j-1} \otimes
\chi_{1}^{r-i} \chi_{2}^{j+1} \rightarrow X_{r-i, \, r} / X_{r-j,\,r}$ defined by
$$
X^{i-j-1-l} Y^{l} \otimes e_{\chi} \mapsto \binom{j+l+1}{j+1}^{-1}
X^{r-j-l-1} Y^{j+l+1},
~ \forall ~ 0 \leq l \leq i-j-1.
$$
For $0 \leq l \leq i-j-1$, we have
$0 \leq j+1 \leq j+l+1 \leq i \leq p-1$, whence by Lucas' theorem,
$\binom{j+l+1}{j+1} \not \equiv 0$ mod $p$, so $\psi$ is
well defined. We claim that $\psi$ is a $B$-linear map. For $0 \leq n \leq i-j-1$ and $\gamma = \begin{psmallmatrix}
a & b \\ 0 & d \end{psmallmatrix} \in B$, we have
\begin{align*}
\gamma \cdot (X^{i-j-1-n} Y^{n} \otimes e_{\chi}) & =
(aX)^{i-j-1-n }(bX+dY)^{n} \otimes a^{r-i}d^{j+1} e_{\chi} \\
&= \sum_{l=0}^{n} a^{r-j-n-1} b^{n-l} d^{j+l+1} \binom{n}{l}
\left( X^{i-j-1-l } Y^{l} \otimes e_{\chi} \right).
\end{align*}
Therefore
\begin{align*}
\psi \left( \gamma \cdot (X^{i-j-1-n} Y^{n} \otimes e_{\chi}) \right)
& = \sum\limits_{l=0}^{n} a^{r-j-n-1}b^{n-l}d^{j+l+1} \binom{n}{l}
\binom{j+l+1}{j+1}^{-1} X^{r-j-l-1} Y^{j+l+1} \\
& = \sum\limits_{l=j+1}^{j+n+1} a^{r-j-n-1}b^{j+n+1-l}d^{l} \binom{n}{l-(j+1)}
\binom{l}{j+1}^{-1} X^{r-l} Y^{l} \\
&=\binom{j+n+1}{j+1}^{-1} \sum\limits_{l=j+1}^{j+n+1} a^{r-j-n-1}b^{j+n+1-l}
d^{l} \binom{j+n+1}{l} X^{r-l} Y^{l} \\
&=\binom{j+n+1}{j+1}^{-1} (aX)^{r-j-n-1}(bX+dY)^{j+n+1} \\
& \quad -\binom{j+n+1}{j+1}^{-1} \sum\limits_{l=0}^{j} a^{r-j-n-1}b^{j+n+1-l}
d^{l} \binom{j+n+1}{l} X^{r-l} Y^{l} \\
& = \binom{\,j+n+1}{j+1}^{-1} \begin{pmatrix} a & b \\ 0 & d \end{pmatrix}
\cdot X^{r-j-n-1}Y^{j+n+1} \quad \mod X_{r-j,\,r} \\
& = \gamma \cdot \psi( X^{i-j-1-n} Y^{n} \otimes e_{\chi}).
\end{align*}
This shows that $\psi$ is $B$-linear. By Frobenius reciprocity (alternatively \cite[Lemma 4, $\S 8$]{Alperin}),
we see that $\psi$ extends to a
$\Gamma$-linear map $\operatorname{ind}_{B}^{\Gamma}(V_{i-j-1}
\otimes \chi_{1}^{r-i} \chi_{2}^{j+1}) \rightarrow X_{r-i, \, r} / X_{r-j ,\,r}$.
As $X^{r-i}Y^{i} = \binom{i}{j+1}\psi(Y^{i-j-1} \otimes e_{\chi})$
generates $X_{r-i,\,r}$ as a $\Gamma$-module, the surjectivity of $\psi$ follows. \end{proof}
In particular, taking $j=i-1$ in the above proposition, we see the successive quotients $X_{r-i,\,r}/ X_{r-(i-1),\,r}$ are isomorphic to quotients of principal series representations. \begin{corollary}\label{induced and successive}
If $r \geq p$ and $1 \leq i \leq p-1$, then the map
\begin{align*}
\psi_{i} : \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})
~ & \longrightarrow ~ X_{r-i,\,r}/ X_{r-(i-1),\,r} \\
[\gamma, e_{\chi_{1}^{r-i} \chi_{2}^{i}}]
~ &\longmapsto ~ \gamma \cdot X^{r-i}Y^{i}
\end{align*}
is a $\Gamma$-linear surjection. \end{corollary}
\subsection{The case \texorpdfstring{$\boldsymbol{r_{0} \geq i}$}{}}
\label{section r0>i}
In this subsection, we determine the structure of $X_{r-i,\,r}$,
for $0 \leq i \leq p-1$, in the case $r_{0} \geq i$, where $r_{0}$ is
as in \eqref{base p expansion of r}.
The structure of $X_{r-1,\,r}$ was determined in \cite{BG15} using the surjection $\phi_{1} : X_{r-1,\,r-1} \otimes V_{1} \rightarrow X_{r-1,\,r}$. It turns out that the map $\phi_{1}$ is an isomorphism if $p \nmid r$, i.e., $r_{0} \geq 1$ in \eqref{base p expansion of r}. This fact can be verified using \Cref{dimension formula for X_{r}} in conjunction with \cite[Proposition 3.13]{BG15} and \cite[Proposition 4.9]{BG15} (see also \Cref{Structure r_0 >i}, noting that the first case there does not occur when $i = 1$). So one might expect that the surjection $\phi_{i}$ obtained in \Cref{surjection1} is an isomorphism in the case $r_{0}\geq i$, for arbitrary $i$. We show that this is indeed true if $r_{0} \geq i$ and $\Sigma_{p}(r-i) \geq r_{0}$, by showing dim $X_{r-i,\,r}$ equals dim $X_{r-i,\,r-i} \otimes V_{i}$. See \Cref{dim large 1} and \Cref{not equal and not full 1} for details.
Furthermore, in \Cref{terms equal in filtration r0 >i}, we show that $\Sigma_{p}(r-i) < r_{0}$ if and only if $X_{r-i,\,r} = X_{r-(i-1),\,r}$. Applying this successively allows us to reduce to the case just treated by replacing $i$ with $\Sigma_{p}(r-r_{0})$, see \Cref{Structure r_0 >i}. We remark that the case $\Sigma_{p}(r-i) < r_{0}$ never happens if $i=1$ and $r\geq p$, by \cite[Lemma 4.1]{BG15}.
We first determine a necessary condition for the equality $X_{r-i, \, r} = X_{r-j, \, r}$, for $1 \leq j < i \leq p-1$. For $r_{0} \geq i$, note that $\Sigma_{p}(r-i) =\Sigma_{p}(r)-i$.
\begin{lemma}\label{X_{r-i}=X_{r-j}}
Let $p \leq r= r_{m}p^{m}+r_{m-1}p^{m-1}+ \cdots + r_{0}$ be the base
$p$-expansion of $r$. Let $ 1 \leq j < i \leq p-1$ with $r_{0} \geq i$. If
$r_{m}+r_{m-1}+ \cdots +r_{1} > j$, then $X_{r-j, \, r} \neq X_{r-i, \, r}$.
In particular, if $ \Sigma_{p}(r- r_{0}) = r_{m}+r_{m-1}+ \cdots +r_{1} \geq i$, then $X_{r-(i-1), \, r} \neq X_{r-i, \, r}$. \end{lemma}
\begin{proof}
If $X_{r-j,\,r} = X_{r-(j+1),\,r}$, then
by \Cref{Basis of X_r-i},
there exist $a_{k,l} \in \mathbb{F}_{p}$ and $b_{l} \in \mathbb{F}_{p}$, for $k = 0$, $1, \ldots,p-1 $
and $l = 0$, $1, \ldots, j$, such that
\begin{equation}\label{expression r_0>i 1}
X^{r-j-1}Y^{j+1} = \sum_{l=0}^{j} \sum_{k=0}^{p-1} a_{k,l} X^{l}(kX+Y)^{r-l} +
\sum_{l=0}^{j}b_{l} X^{r-l}Y^{l}.
\end{equation}
For every positive integer $t$, put $A_{t,l} := \sum\limits_{k=1}^{p-1} a_{k,l} k^{r-l-t} $.
Comparing the coefficients of $X^{r-t}Y^{t}$ on both sides of
\eqref{expression r_0>i 1},
we get
\begin{equation}\label{compare coeff r_0 >i 1}
\sum_{l=0}^{j} \binom{r-l}{t} A_{t,l} = \delta_{j+1,t}, ~
\forall ~ j < t < r-j.
\end{equation}
For every $1 \leq s \leq j+1 $, choose $0 \leq t_{n,s} \leq r_{n}$ for $1 \leq n
\leq m$ such that $\sum\limits_{n=1}^{m} t_{n,s} =s$. Put $t_{s}= t_{m,s}p^{m}+
\cdots + t_{1,s}p+(j+1-s)$. Clearly $\Sigma_{p}(t_{s}) = j+1$ and
$t_{s} \equiv \Sigma_{p}(t_{s})\equiv j+1$ mod ($p-1$). Since $ j < \Sigma_{p}(t_{s}) \leq t_{s}$
and $r-t_{s} \geq \sum\limits_{n=1}^{m} (r_{n} -t_{n,s} ) p + r_{0} -(j+1-s)
\geq (j+1-s)(p-1)+r_{0} \geq i > j$, we get $j< t_{s} <r-j$.
By \eqref{compare coeff r_0 >i 1}, and
$A_{t,l} = A_{t',l}$ if $t \equiv t'$ mod $(p-1)$, we have
\begin{equation}\label{eq 3.18}
\sum\limits_{l=0}^{j} \binom{r-l}{t_{s}} A_{j+1,l}
= \sum\limits_{l=0}^{j} \binom{r-l}{t_{s}} A_{t_{s},l} = 0.
\end{equation}
Applying Lucas' theorem we get $\binom{r-l}{t_{s}} \equiv
\binom{r_{m}}{t_{m,s}} \cdots \binom{r_{1}}{t_{1,s}}
\binom{r_{0}-l}{j+1-s} $ mod $p$,
$\forall ~ 0 \leq l \leq j$. Substituting this in \eqref{eq 3.18}
and dividing both sides by
$ \binom{r_{m}}{t_{m,s}} \cdots \binom{r_{1}}{t_{1,s}}$, we obtain
$$
\sum_{l=0}^{j} \binom{r_{0}-l}{j+1-s} A_{j+1,l} =0 ,~ \forall ~ 1
\leq s \leq j+1.
$$
Putting these set of equations in matrix form, we get
$$
\begin{pmatrix} \binom{r_{0}}{j} & \binom{r_{0}-1}{j} & \cdots &
\binom{r_{0}-j}{j} \\
\binom{r_{0}}{j-1} & \binom{r_{0}-1}{j-1} & \cdots & \binom{r_{0}-j}{j-1} \\
\vdots & \vdots & \ddots & \vdots \\
\binom{r_{0}}{0} & \binom{r_{0}-1}{0} & \cdots & \binom{r_{0}-j}{0}
\end{pmatrix}
\begin{pmatrix} A_{j+1,0} \\ A_{j+1,1} \\ \vdots \\ A_{j+1,j} \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} .
$$
Applying \Cref{matrix det} (i) (with $a=r_{0}$ and $i=j$),
we obtain that the above matrix is invertible, whence
$A_{j+1,0} = A_{j+1,1} = \cdots = A_{j+1,j}=0$.
Taking $t=j+1$ in \eqref{compare coeff r_0 >i 1} we get
$$
\sum\limits_{l=0}^{j} \binom{r-l}{j+1} A_{j+1,l} = 1,
$$
which leads to a contradiction.
Hence $X_{r-j,\,r} \subsetneq X_{r-(j+1),\,r} \subseteq X_{r-i,\,r}$.
This proves the lemma. \end{proof}
By \Cref{Basis of X_r-i}, we know that dim $X_{r-i,r} \leq (i+1)(p+1)$,
for all $0 \leq i \leq p-1$. We next show that this inequality is indeed an
equality if $\Sigma_{p}(r)$ is large. More precisely:
\begin{proposition}\label{dim large 1}
Let $p \geq 3$, $0 \leq i \leq p-1$ and $p \leq r
\equiv r_{0} \mod p$ with $0 \leq r_{0} \leq p-1$. If $r_{0} \geq i$ and
$\Sigma_{p}(r-i) \geq p$, then $\mathrm{dim}~X_{r-i,r} = (i+1)(p+1)$.
As a consequence, we have $X_{r-i,\,r} \cong X_{r-i,\,r-i} \otimes V_{i}$
as $M$-modules. \end{proposition} \begin{proof}
The last assertion is an easy consequence of the first assertion and
\Cref{surjection1}, noting that dim($X_{r-i,\,r-i} \otimes V_{i})
\leq (i+1)(p+1)$.
We prove the proposition by induction on $i$. The cases $i=0$, $1$
follow from Lemmas \ref{dimension formula for X_{r}}, \ref{dimension formula for
X_{r-1}} respectively. Assume that the result holds for all $j < i$, for some
$2 \leq i \leq p-1$. We need to prove the proposition is true for $i$. By
\Cref{Basis of X_r-i}, the vectors
$\{ X^{l}(kX+Y)^{r-l}, X^{r-l}Y^{l} : k \in \mathbb{F}_{p}, 0 \leq l \leq i \} $ span
$X_{r-i, \, r}$ as an $\mathbb{F}_{p}$-vector space. We claim that they are $\mathbb{F}_{p}$-linearly
independent. Suppose there exist constants $A_{l}, B_{l}, c_{k,l} \in \mathbb{F}_{p} $
for $l=0$, $1,\ldots, i$ and $k= 1$, $2, \ldots, p-1 $ such that
\begin{align}\label{3.4}
\sum\limits_{l=0}^{i} A_{l} X^{r-l}Y^{l} + \sum\limits_{l=0}^{i} B_{l} X^{l} Y^{r-l} +
\sum\limits_{l=0}^{i} \sum\limits_{k =1}^{p-1} c_{k,l} X^{l}(kX+Y)^{r-l} =0.
\end{align}
We need to show that $A_{l},B_{l},c_{k,l}=0$ for all $k$, $l$.
For $0 \leq l \leq i$ and $t \in \mathbb{Z}$, let
$C_{t,l} := \sum\limits_{k=1}^{p-1} k^{r-l-t}c_{k,l}$.
Note that $C_{t,l}$ depends only on the congruence class of $t$ mod $(p-1)$.
By the non-vanishing of the Vandermonde determinant, we have
for every $l$, $C_{t,l} =0$, for all $t$, if and only if
$c_{k,l}=0$ for all $k$. Comparing the coefficients of $X^{r-t}Y^{t}$ on
both sides of \eqref{3.4}, we get
\begin{align}\label{3.5}
\sum\limits_{l=0}^{i} \delta_{t,l} A_{l} + \sum\limits_{l=0}^{i} \delta_{r-l,t} B_{l} +
\sum\limits_{l=0}^{i} \binom{r-l}{t} C_{t,l}=0.
\end{align}
\noindent
\underline{Claim}: $C_{1,0} = C_{2,0} = \cdots = C_{p-1,0} =0$. \\
Assuming the claim, we complete the proof of the proposition.
Taking $t=r$ in \eqref{3.5} and noting that $C_{1,0} = C_{2,0}
= \cdots = C_{p-1,0} =0$ by the claim, we get $B_{0}=0$.
Also, since $C_{t,0}$ depends only on $t$ modulo $(p-1)$, the claim gives $C_{t,0}=0$ for all $t$,
whence $c_{1,0} = c_{2,0} = \cdots = c_{p-1,0}=0$ by the Vandermonde argument above. Thus every
remaining term of \eqref{3.4} is divisible by $X$, and dividing both sides by $X$, we get
\begin{align}\label{reducing 3.4}
\sum\limits_{l=0}^{i} A_{l} X^{r-1-l}Y^{l} + \sum\limits_{l=0}^{i-1} B_{l+1}
X^{l} Y^{r-1-l} + \sum\limits_{l=0}^{i-1}
\sum\limits_{k =1}^{p-1} c_{k,l+1} X^{l}(kX+Y)^{r-1-l} = 0.
\end{align}
Let $r=r_{m}p^{m}+ \cdots + r_{1}p+r_{0}$ be
the base $p$-expansion of $r$. Then
$r-1 = r_{m}p^{m}+ \cdots + r_{1}p+(r_{0}-1)$.
Since $r_{m}+ \cdots +r_{1} = \Sigma_{p}(r)-r_{0} \geq p+i-r_{0} > i$,
we get $r > p$ and $X_{r-1-(i-1), \,r-1} \neq X_{r-1-i, \,r-1} $,
by \Cref{X_{r-i}=X_{r-j}}. This forces $A_{i}=0$, as otherwise
$X_{r-1-(i-1), \,r-1} =X_{r-1-i, \,r-1}$.
By induction for $r-1$, we have dim $X_{r-1-(i-1), \,r-1} = i(p+1)$.
Hence $\lbrace X^{r-1-l}Y^{l}, X^{l}(kX+Y)^{r-1-l} : k \in \mathbb{F}_{p}, \ 0 \leq l
\leq i-1 \rbrace$ is an $\mathbb{F}_{p}$-basis of $X_{r-1-(i-1), \,r-1} $ by
\Cref{Basis of X_r-i}. Thus $A_{l}=B_{l}=0$ and $c_{k,l} =0$ for all $k,l$
by \eqref{reducing 3.4} as $A_{i} =0$.
Therefore dim $X_{r-i,\,r}=(i+1)(p+1)$.
\noindent
\underline{Proof of the claim} :
We first show that $C_{1,0} , \ldots ,C_{i,0}=0$ and
$C_{r_{0}+1,0}, \ldots , C_{p-1,0}=0 $.
Let $s$ be a positive integer congruent to $ r_{0}$ mod $p$.
By Lucas' theorem, for $0 \leq l \leq i \leq r_{0} $, we see that
$ \binom{r-l}{s} \not \equiv 0~ \mathrm{mod} ~p \Rightarrow r-l \equiv r_{0}$,
$r_{0}+1,\ldots, p-1 ~ \text{mod} ~ p \Leftrightarrow l =0$.
Taking $t=s$ and applying this in \eqref{3.5}, we get
\begin{align}\label{3.11}
\sum\limits_{l=0}^{i} \delta_{l,s} A_{l} +
\sum\limits_{l=0}^{i}\delta_{r-l,s} B_{l} + \binom{r}{s} C_{s,0}=0.
\end{align}
By \Cref{choice of s}, for every $1 \leq u \leq p+i-r_{0}-1 \leq \Sigma_{p}(r) -r_{0}-1$,
there exists $p \leq s_{u} \leq r-p$
such that $s_{u} \equiv r_{0} $ mod $p$, $\Sigma_{p}(s_{u})= r_{0}+u$ and $\binom{r}{s_{u}}
\not \equiv 0$ mod
$p$. Therefore $\delta_{l,s_{u}} = 0 = \delta_{r-l,s_{u}}$, for all $0\leq l \leq i$.
So \eqref{3.11}
implies that $C_{s_{u},0}=0$, for all $1 \leq u \leq p+i-r_{0}-1$. Since
$s_{u} \equiv \Sigma_{p}(s_{u})=u+r_{0}$ mod $(p-1)$,
we get $C_{u+r_{0},0}=0$, for all $1 \leq u \leq p+i-r_{0}-1$.
So $C_{1,0} , \ldots ,C_{i,0}=0$ and $C_{r_{0}+1,0}, \ldots , C_{p-1,0}=0 $.
This finishes the proof of the claim, if $r_{0}=i$.
Else choose $t\geq 1$ such that $ i+1 \leq t+i \leq r_{0}$. Clearly
$i < t+i \leq r_{0} \leq p-1 < \Sigma_p(r-i) \leq r-i$. Since $t+i \leq r_{0} \leq p-1$, by Lucas' theorem, we
have $\binom{r-l}{t+i} \equiv \binom{r_{0}-l}{t+i}$ mod $p$, $\forall$
$0 \leq l \leq i \leq r_{0}$.
Therefore, by \eqref{3.5}, we have
\begin{align}\label{3.12}
\sum\limits_{l=0}^{i} \binom{r_{0}-l}{t+i} C_{t+i,l}=0.
\end{align}
For every $0 \leq w \leq i-1$, note that $0 \leq w+t \leq t+i \leq r_{0}$
and $1 \leq i-w < i+1 \leq \Sigma_{p}(r)-r_{0}$. Thus
by \Cref{choice of s} (applied with $b= w+t$ and $u=i-w$),
there exists $ p \leq s_{w} \leq r-p$
such that $s_{w} \equiv w+t$ mod $p$, $\Sigma_{p}(s_{w}) = t+i$ and
$\binom{r}{s_{w}} \not \equiv 0$ mod $p$, for all $0 \leq w \leq i-1$.
Let $s_{w} = s_{w,m}p^{m}+\cdots+
s_{w,1} p+ (w+t)$ be the base $p$-expansion of
$s_{w}$. By Lucas' theorem, we get
$\binom{r-l}{s_{w}} \equiv \binom{r_{0}-l}{w+t} \prod\limits_{j=1}^{m} \binom{r_{j}}
{s_{w,j}} \equiv \binom{r_{0}-l}{w+t} \binom{r_{0}}{w+t}^{-1} \binom{r}{s_{w}} $
mod $p$, for $0 \leq l \leq i \leq r_{0}$. Noting
$s_{w} \equiv \Sigma_{p}(s_{w}) \equiv t+i$ mod $(p-1)$ and
$\binom{r_{0}}{w+t}$, $\binom{r}{s_{w}} \not \equiv 0$ mod $p$, for
$0 \leq w \leq i-1 $, it follows from \eqref{3.5} that
\begin{align*}
\sum\limits_{l=0}^{i} \binom{r_{0}-l}{w+t} C_{t+i,l} = 0, \quad \quad
\forall \; \ 0 \leq w \leq i-1.
\end{align*}
Combining the above set of equations with \eqref{3.12}, we get
\begin{align*}
\begin{pmatrix}
\binom{r_{0}}{t+i} & \binom{r_{0}-1}{t+i} &\cdots &\binom{r_{0}-i}{t+i} \\
\binom{r_{0}}{t+i-1} & \binom{r_{0}-1}{t+i-1} &\cdots & \binom{r_{0}-i}{t+i-1} \\
\vdots & \vdots & \ddots & \vdots \\
\binom{r_{0}}{t} & \binom{r_{0}-1}{t} & \cdots &\binom{r_{0}-i}{t} \\
\end{pmatrix}
\begin{pmatrix}
C_{t+i,0} \\ C_{t+i,1} \\ \vdots \\ C_{t+i, i}
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 0 \\ \vdots \\ 0
\end{pmatrix}.
\end{align*}
By \Cref{matrix det} (i) (with $a=r_{0}$ and $j=t+i$), the above matrix is invertible,
whence $C_{t+i,0}=0$, where $i+1 \leq t+i \leq r_{0}$. This finishes
the proof of the claim, as we have already shown $C_{1,0} , \ldots ,C_{i,0}=0$ and
$C_{r_{0}+1,0}, \ldots , C_{p-1,0}=0 $. \end{proof}
We next consider the case $r_{0} \leq \Sigma_{p}(r-i) \leq p$ and still show that $X_{r-i,\,r} \cong X_{r-i,\,r-i} \otimes V_{i} $ as $M$-modules. Note that the case $\Sigma_{p}(r-i) = p$ was treated in the proposition above.
\begin{proposition}\label{not equal and not full 1}
Let $p\geq 3$, $ 0 \leq i \leq p-1$ and $p \leq r
\equiv r_{0} \mod p$ with
$0 \leq r_{0} \leq p-1$. If $r_{0} \geq i$ and $r_{0} \leq \Sigma_{p}(r-i) \leq p$, then
$\dim X_{r-i,\,r} = (i+1)(\Sigma_{p}(r-i)+1)$. Furthermore, $
X_{r-i,\,r} \cong X_{r-i,\,r-i} \otimes V_{i} $ as $M$-modules. \end{proposition}
\begin{proof}
We prove the proposition by induction on $i$.
The case $i=0$ follows from
\Cref{dimension formula for X_{r}}. Assume that the proposition holds for all
$j < i$ for some $i \geq 1$. If $\Sigma_{p}(r-i) =p$, then as remarked earlier
the proposition follows from \Cref{dim large 1}. In the case $r_{0} \leq \Sigma_{p}(r)-i \leq p-1$,
by \Cref{X_{r-i}=X_{r-j}} and \Cref{induced and successive},
we see that $X_{r-i, \, r}/X_{r-(i-1),\,r} $ is isomorphic to a non-zero
quotient of $\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$.
Since $r \equiv \Sigma_{p}(r)$ mod $(p-1)$ and
$2i \leq r_{0}+i \leq \Sigma_{p}(r) < p-1+2i$, we see that
$[2i-r] = [2i- \Sigma_{p}(r)] = p-1+2i-\Sigma_{p}(r)$. Thus by \Cref{Structure of induced},
we see that any non-zero quotient of
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$ has dimension
at least $(\Sigma_{p}(r)-2 i+1)$ (note that the quantity $(\Sigma_{p}(r)-2 i+1)$ equals one if
$\Sigma_{p}(r)=2i$). Since $r_{0}+1 \leq \Sigma_{p}(r-(i-1)) \leq p$, by induction we have
\begin{align*}
\dim X_{r-i, \, r}
& = \dim X_{r-(i-1), \, r} + \dim\left(\frac{X_{r-i, \, r}}{X_{r-(i-1),\,r} }\right) \\
& \geq i(\Sigma_{p}(r)-i+2)+ (\Sigma_{p}(r)-2i+1) \\
& = (i+1)\left( \Sigma_{p}(r)-i +1 \right).
\end{align*}
Since $\Sigma_{p}(r-i) \leq p-1$, by \Cref{dimension formula for X_{r}},
we have $\dim X_{r-i,\,r-i} = \Sigma_{p}(r-i)+1 =
\Sigma_{p}(r)-i+1$.
So $\dim X_{r-i,\,r-i} \otimes V_{i} $ $\leq (i+1)( \Sigma_{p}(r)-i +1 )
\leq \dim X_{r-i, \, r} $. Now the proposition follows from \Cref{surjection1}. \end{proof} We now prove the converse of \Cref{X_{r-i}=X_{r-j}}. \begin{lemma}\label{terms equal in filtration r0 >i}
Let $p \leq r= r_{m}p^{m}+r_{m-1}p^{m-1}+ \cdots + r_{0}$ be the base
$p$-expansion of $r$ with $r_{0} \geq i$. For $ 1 \leq j < i
\leq p-1$, we have $X_{r-j, \, r} = X_{r-i, \, r}$
if and only if $r_{m}+r_{m-1}+ \cdots +r_{1} \leq j$.
In particular, $X_{r-(i-1), \, r} = X_{r-i, \, r} \Longleftrightarrow
r_{m}+r_{m-1}+ \cdots +r_{1} < i \Longleftrightarrow
\Sigma_{p}(r-i) < r_{0}$. \end{lemma} \begin{proof}
The \enquote*{only if} part is \Cref{X_{r-i}=X_{r-j}}.
For the \enquote*{if} part, assume $r_{m}+r_{m-1}+ \cdots +r_{1} \leq j$.
By \Cref{dimension formula for X_{r}}, we have
$\dim X_{r-r_{0},\,r-r_{0}} = \Sigma_{p}(r-r_{0})+1$. Thus,
by \Cref{surjection1}, we have $\dim X_{r-r_{0},\,r} \leq (r_{0}+1)
(\Sigma_{p}(r-r_{0})+1)$. Since $\Sigma_{p}(r-r_{0}) \leq j \leq r_{0}$,
we have $\Sigma_{p}(r- \Sigma_{p}(r-r_{0}))
= \Sigma_{p}(r)-\Sigma_{p}(r-r_{0})=\Sigma_{p}(r)-\Sigma_{p}(r)+r_{0}
= r_{0} \leq p-1$. Hence, by \Cref{not equal and not full 1} (with
$i$ there equal to $\Sigma_{p}(r-r_{0})$),
we have
$\dim X_{r- \Sigma_{p}(r-r_{0}), \, r} = (\Sigma_{p}(r-r_{0})+1)(r_{0}+1)$. As
$0 \leq \Sigma_{p}(r-r_{0}) \leq j < i \leq r_{0} \leq p-1 $,
by \Cref{first row filtration}, we see that
$X_{r- \Sigma_{p}(r-r_{0}), \, r} \subseteq X_{r-j,\,r} \subseteq
X_{r-i,\,r} \subseteq X_{r-r_{0},\,r}$.
As the dimension of the rightmost term is less than or equal to the
dimension of the leftmost term,
it follows that $ X_{r-j,\,r} = X_{r-i,\,r}$. \end{proof}
Putting together all the results obtained so far, we have the following theorem. \begin{theorem}\label{Structure r_0 >i}
Let $p \geq 3$, $ 0 \leq i \leq p-1$ and $p
\leq r \equiv r_{0}
\mod p$ with $0 \leq r_{0} \leq p-1$. If $r_{0} \geq i$, then
as $M$-modules, we have
\begin{align*}
X_{r-i,\,r} \cong
\begin{cases}
X_{r- \Sigma_{p}(r-r_{0}),\, r-\Sigma_{p}(r-r_{0})} \otimes V_{\Sigma_{p}(r-r_{0})}, &
\mathrm{if} ~ \Sigma_{p}(r-i) < r_{0}, \\
X_{r-i, \, r-i} \otimes V_{i}, & \mathrm{if} ~ \Sigma_{p}(r-i)
\geq r_{0}.
\end{cases}
\end{align*} \end{theorem}
\begin{proof}
Note that $\Sigma_{p}(r-i) = \Sigma_{p}(r)-i = \Sigma_{p}(r-r_{0})+r_{0}-i$.
The case $\Sigma_{p}(r-i) \geq r_{0}$ follows immediately from
\Cref{dim large 1} and \Cref{not equal and not full 1}.
If $\Sigma_{p}(r-i)< r_{0}$, then by \Cref{terms equal in filtration r0 >i}, we have
$X_{r-\Sigma_{p}(r-r_{0}),\,r} = X_{r-i,\,r} $. Also note that
$r_{0} \geq i > \Sigma_{p}(r-r_{0})$ and $\Sigma_{p}(r- \Sigma_{p}(r-r_{0})) =
\Sigma_{p}(r) - \Sigma_{p}(r-r_{0}) = \Sigma_{p}(r) - \Sigma_{p}(r)+ r_{0} = r_{0} $.
Applying \Cref{not equal and not full 1} with $i = \Sigma_{p}(r-r_{0})$,
we obtain the theorem in the case $\Sigma_{p}(r-i)<r_{0}$. This completes the proof. \end{proof}
Using the above theorem and \Cref{dimension formula for X_{r}} in conjunction with \Cref{ClebschGordan}, one can determine the JH factors of $X_{r-i,r}$, for all $0\leq i \leq p-1$, in the case $r_{0} \geq i$. We also have the following dimension formula. \begin{corollary}\label{dimension r_0 > i}
Let $p\geq 3$, $ 0 \leq i \leq p-1$ and $p
\leq r \equiv r_{0}
\mod p$ with $0 \leq r_{0} \leq p-1$. If $r_{0} \geq i$, then
\begin{align*}
\dim X_{r-i, \, r} =
\begin{cases}
(r_{0}+1)(\Sigma_{p}(r-r_{0})+1) , & \mathrm{if} ~ \Sigma_{p}(r-i) < r_{0}, \\
(i+1)\left( \Sigma_{p}(r-i) +1 \right), &\mathrm{if} ~
r_{0} \leq \Sigma_{p}(r-i) \leq p,\\
(i+1)(p+1), & \mathrm{if} ~ \Sigma_{p}(r-i) \geq p.
\end{cases}
\end{align*}
\end{corollary}
\begin{proof}
This follows from \Cref{Structure r_0 >i} and
\Cref{dimension formula for X_{r}}. Note that the formulas match at the
boundary $\Sigma_{p}(r-i) = p$.
\end{proof}
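As a quick sanity check of the formula, continue with $p=5$ and $r=59$ as above, so that $r_{0}=4$ and $\Sigma_{5}(59)=7$. The three cases give
\[
\dim X_{55,\,59} = 5\cdot 4 = 20, \qquad \dim X_{56,\,59} = 4\cdot 5 = 20, \qquad \dim X_{58,\,59} = 2\cdot 6 = 12,
\]
corresponding to $i=4$ (where $\Sigma_{5}(55)=3<r_{0}$), $i=3$ (where $\Sigma_{5}(56)=4=r_{0}$) and $i=1$ (where $\Sigma_{5}(58)=6 \geq p$), respectively. Note that the first two dimensions agree, as they must by \Cref{terms equal in filtration r0 >i}.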
As a corollary, we obtain the structure of the successive quotients $X_{r-i, \, r}/ X_{r-(i-1),\,r}$, for $r_{0} \geq i$. \begin{corollary} Let $p\geq 3$, $1 \leq i \leq p-1$ and $p
\leq r \equiv r_{0}
\mod p$ with $0 \leq r_{0} \leq p-1$. If $r_{0} \geq i$, then
\begin{align*}
\frac{X_{r-i, \, r}}{X_{r-(i-1),\,r}} \cong
\begin{cases}
(0), & \mathrm{if} ~ \Sigma_{p}(r-i) < r_{0}, \\
V_{p-1-[2i-r]} \otimes D^{i}, &\mathrm{if} ~
r_{0} \leq \Sigma_{p}(r-i) \leq p-1,\\
\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i}),
& \mathrm{if} ~ \Sigma_{p}(r-i) \geq p.
\end{cases}
\end{align*} \end{corollary} \begin{proof}
Note that $\Sigma_{p}(r-(i-1)) = \Sigma_{p}(r-i)+1 =\Sigma_{p}(r)-i+1$, for $r_{0} \geq i$.
Using \Cref{dimension r_0 > i}, one checks that dim $X_{r-i, \, r}-$
dim $X_{r-(i-1),\,r}$ equals $0$, $\Sigma_{p}(r)-2i+1$ and $p+1$, in the cases
described in the corollary. Now the result follows from
\Cref{induced and successive} and \Cref{Structure of induced}, noting that
$p-1-[2i-r] = \Sigma_{p}(r)-2i$,
if $r_{0} \leq \Sigma_{p}(r-i) \leq p-1$. \end{proof}
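Continuing the example $p=5$, $r=59$: for $i=2$ we have $\Sigma_{5}(57)=5 \geq p$, so $X_{57,\,59}/X_{58,\,59} \cong \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{57}\chi_{2}^{2})$; for $i=3$ we have $r_{0} \leq \Sigma_{5}(56)=4 \leq p-1$ and $\Sigma_{5}(59)-2i=1$, so $X_{56,\,59}/X_{57,\,59} \cong V_{1} \otimes D^{3}$; and for $i=4$ we have $\Sigma_{5}(55)=3 < r_{0}$, so $X_{55,\,59}/X_{56,\,59} = (0)$.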
\subsection{The case \texorpdfstring{$\boldsymbol{r_{0} < i}$}{}} \label{section r0<i}
In this subsection, we determine the structure of $X_{r-i,\,r}$,
for $1 \leq i \leq p-1$, in the case $r_{0} < i$, where $r_{0}$ is
as in \eqref{base p expansion of r}. We first analyze the quotients $X_{r-j,\,r}/X_{r-(j-1),\,r}$ for $r_{0} < j \leq i$,
which depend only on $\Sigma_{p}(r-j)$, see \Cref{successive quotients r_0 < i}. Building upon this result we determine $X_{r-i,\,r} / X_{r-r_{0},\,r}$ in \Cref{final quotient structure}. This combined with the results of the previous section allows us to determine the structure of $X_{r-i,\,r}$, for $r_{0}<i$, see \Cref{Structure r_0<i}.
We begin with the following useful lemma.
\begin{lemma}\label{Sp(r-i), Sp(r-j)}
Let $1 \leq i \leq p-1$ and $p \leq r = r_{m}p^{m}+ \cdots +r_{1}p+r_{0}$ be the base
$p$-expansion of $r$ with $r_{0} < i$. Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item $\Sigma_{p}(r-j) = \Sigma_{p}(r-i)+i-j$, for all $r_{0}<j \leq i$.
\item If $\Sigma_{p}(r-i) \leq p-1$, then $r_{1} \neq 0$, $\Sigma_{p}(r-i) =
\Sigma_{p}(r)+p-1-i$ and $\Sigma_{p}(r) \geq r _{0}+1$.
\end{enumerate} \end{lemma}
\begin{proof}
We have
\begin{enumerate}
\item[(i)] Let $r-i = r_{m}'p^{m}+ \cdots +r_{1}'p+r_{0}'$ be the base $p$-expansion
of $r-i$. As $p+r_{0}-i \equiv r-i \equiv r_{0}'$ mod $p$ and
$0 \leq p+r_{0}-i, r_{0}' \leq p-1$ we
have $r_{0}'=p+r_{0}-i $. Since $0 \leq r_{0}'+i-j = p+r_{0}-j \leq p-1$, we obtain
the base $p$-expansion of $r-j$ is given by $r_{m}'p^{m}+
\cdots +r_{1}'p+(r_{0}'+i-j)$.
Hence $\Sigma_{p}(r-j) = r_{m}'+ \cdots +r_{0}'+i-j = \Sigma_{p}(r-i)+i-j$.
\item[(ii)] Suppose $r_{1}=0$. As $r \geq p$
there exists $2 \leq l \leq m$ such that $r_l \neq 0$.
Let $l$ be minimal. Then
$r-i = r_{m}p^{m} + \cdots + r_{l+1}p^{l+1}+(r_{l}-1)p^{l}+(p-1)p^{l-1} +
\cdots +(p-1)p+ (p+r_{0}-i)$ and
\begin{align*}
\Sigma_{p}(r-i) & \geq (p-1)+p+r_{0}-i \quad \mathrm( \because ~ l \geq 2 ) \\
& \geq p \quad \mathrm( \because ~ i \leq p-1,~r_{0} \geq 0 ~),
\end{align*}
which is a contradiction. Hence $r_{1} \neq 0$ and $r-i = r_{m}p^{m} + \cdots +
r_{2}p^{2}+(r_{1}-1)p+ (p+r_{0}-i)$. Thus $\Sigma_{p}(r-i)= r_{m}+
\cdots+r_{1}-1+p+r_{0}-i
= \Sigma_{p}(r)+p-1-i$. Since $r_{1} \geq 1$, we have $1+r_{0} \leq \Sigma_{p}(r)$.
\qedhere
\end{enumerate} \end{proof}
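As a quick check of the lemma, take $p=5$, $r = 32 = 1\cdot 5^{2}+1\cdot 5+2$ (so $r_{0}=2$) and $i=4 > r_{0}$. Then $r-i = 28 = 1\cdot 5^{2}+0\cdot 5+3$, so $\Sigma_{5}(28)=4 \leq p-1$; accordingly $r_{1}=1 \neq 0$, $\Sigma_{5}(28) = \Sigma_{5}(32)+p-1-i = 4+4-4$ and $\Sigma_{5}(32)=4 \geq r_{0}+1$, as in part (ii). Moreover, for $j=3$ we have $\Sigma_{5}(29)=5 = \Sigma_{5}(28)+i-j$, as in part (i).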
By \cite[(5.1)]{Glover}, for every $r \geq p$, we have an exact sequence
$$
0 \rightarrow V_{r-2} \otimes V_{0} \otimes D \xrightarrow{\theta_{r-1,1}}
V_{r-1} \otimes V_{1} \xrightarrow{\varphi_{r-1,1}} V_{r} \rightarrow 0.
$$ As in \Cref{surjection1}, we have $\varphi_{r-1,1}(X_{r-i,\,r-1} \otimes V_{1}) = X_{r-i,\,r}$, for all $1 \leq i \leq p-1$. Hence, for $2 \leq i \leq p-1$, we have \begin{align}\label{surjection3}
\frac{X_{r-i,\,r-1} \otimes V_{1}}
{X_{r-(i-1),\,r-1} \otimes V_{1}} \twoheadrightarrow \frac{X_{r-i,\,r}}{X_{r-(i-1),r}}. \end{align}
We now derive a sufficient condition under which $X_{r-(i-1),\,r} = X_{r-i,\,r}$,
for $r_{0}< i \leq p-1$. \begin{lemma}\label{r-i small}
Let $1 \leq i \leq p-1$, $r \geq p$ and $r=r_{m}p^{m}+ \cdots + r_{1}p+ r_{0}$ be
the base $p$-expansion of $r$. If $r_{0}< i$ and $\Sigma_{p}(r-i)< p-1$, then
$X_{r-(i-1),\,r} = X_{r-i,\,r}$. \end{lemma} \begin{proof}
We prove the lemma by induction on $i$. For $i=1$, the condition
$r_{0} < 1$ implies $r_{0}=0$, i.e., $p \mid r$. Thus $\Sigma_{p}(r-1) \geq p-1$
as $r \geq p$. For $r \geq p$, by \cite[Lemma 4.1]{BG15}, we have $X_{r-1,\, r}
\neq X_{r,\,r}$. So the lemma is vacuously true for $i=1$. So we may assume
$i \geq 2$. By \eqref{surjection3}, it is enough to show
$X_{r-1-(i-2),\,r-1}= X_{r-1-(i-1),\,r-1}$.
If $r=p$, then $V_{r-1}$ is irreducible, so
$X_{r-1-(i-2),\,r-1}= X_{r-1-(i-1),\,r-1}$.
Assume $r>p$.
We first consider the case $r_{0} =0$.
By Lemma~\ref{Sp(r-i), Sp(r-j)} (ii), we see that
$\Sigma_{p}(r-i) < p-1$ implies $r_{1} \neq 0$.
Thus $r-1 = r_{m}p^{m}+\cdots+r_{2}p^{2}+
(r_{1}-1)p+p-1$. Note that $\Sigma_{p}(r-1-(i-1))=\Sigma_{p}(r-i) < p-1$.
Applying \Cref{terms equal in filtration r0 >i} for $r-1$, we get
$X_{r-1-(i-2),\,r-1}= X_{r-1-(i-1),\,r-1}$.
If $1 \leq r_{0} < i $, then the base $p$-expansion of $r-1$
is given by $r_{m}p^{m}+\cdots+r_{2}p^{2}+ r_{1}p+(r_{0}-1)$.
Thus by induction (for $r-1$ and $i-1$), we have
$X_{r-1-(i-2),\,r-1} = X_{r-1-(i-1),\, r-1}$.
This proves the inductive step
and the lemma follows.
\end{proof}
We next prove a result analogous to \Cref{terms equal in filtration r0 >i}, in the case $r_{0}<i$.
\begin{lemma}\label{X_r-i = X_ r-j}
Let $p \leq r=r_{m}p^{m}+ \cdots + r_{1}p+r_{0}$ be the base $p$-expansion of
$r$. For $1 \leq j < i \leq p-1 $ and $r_{0}<j$, we have
$X_{r-j,\,r} = X_{r-i,\,r}~ \mathrm{ if ~ and ~ only ~ if}~ \Sigma_{p}(r-j)\leq p-1 $.
\end{lemma}
\begin{proof}
Since $r_{0}< j$, by Lemma~\ref{Sp(r-i), Sp(r-j)} (i)
(with $i$ there equal to $l$),
we have $\Sigma_{p}(r-l) = \Sigma_{p}(r-j)-(l-j) < p-1$, for all $l$ such that
$j+1 \leq l \leq i$. By \Cref{r-i small}, we have $X_{r-(l-1),\,r} = X_{r-l,\,r}$
for all $l$ such that
$j+1 \leq l \leq i$, whence $X_{r-j,\,r} = X_{r-(j+1),\,r} = \cdots = X_{r-(i-1),\,r}
= X_{r-i,\,r}$.
This proves the \enquote*{if} part.
For the converse we claim that if $\Sigma_{p}(r-j) > p-1$, then
$X_{r-j,\,r} \subsetneq X_{r-(j+1),\,r}$.
Suppose not. Then by \Cref{first row filtration}, we have
$X_{r-j,\,r}=X_{r-(j+1),\,r}$.
Thus by \Cref{Basis of X_r-i}, there exist
$a_{k,l} \in \mathbb{F}_{p}$ and $b_{l} \in \mathbb{F}_{p}$,
for $k \in \mathbb{F}_{p}$ and $0 \leq l \leq j$,
such that
\begin{align}\label{Relation r-i small}
X^{r-j-1} Y^{j+1}= \sum_{l=0}^{j} \sum_{k=0}^{p-1} a_{k,l} X^{l} (kX+Y)^{r-l}
+ \sum\limits_{l=0}^{j} b_{l} X^{r-l}Y^{l}.
\end{align}
As above, for every positive integer $t $ and $0 \leq l \leq j$, define
$A_{t,l} := \sum\limits_{k=1}^{p-1} a_{k,l}k^{r-l-t}$. For
every $l$, note that $A_{t,l} $ depends only on the congruence class of $t$ mod $(p-1)$.
Comparing the coefficients of $X^{r-t}Y^{t}$ on both sides of
\eqref{Relation r-i small}, we get
\begin{align}\label{3.23}
\sum\limits_{l=0}^{j} \binom{r-l}{t} A_{t,l} = \delta_{j+1,t}, ~ \forall ~
j < t < r-j.
\end{align}
Since $r_{0} < j < p-1$, by Lucas' theorem,
we see that $ \binom{r-l}{j+1} \equiv
\binom{r_{0}-l}{j+1} \equiv 0 \mod p$, for $0 \leq l \leq r_{0}$.
Thus, taking $t=j+1$ in \eqref{3.23} we get
\begin{align}\label{Relation r-i small i}
\sum\limits_{l=r_{0}+1}^{j} \binom{r-l}{j+1} A_{j+1,l} =1.
\end{align}
Below we show that
$A_{j+1,r_{0}+1}, \ldots , A_{j+1,j} =0$ by solving a system of linear equations.
This contradicts \eqref{Relation r-i small i}.
Let $r_{m}'p^{m}+ \cdots + r_{1}'p+r_{0}'$ be the base
$p$-expansion of $r-j$. Clearly $0 \leq r_{0}'$, $p+r_{0}-j \leq p-1$
and $r_{0}' \equiv r-j \equiv p+r_{0}-j$ mod $p$. So $r_{0}' = p+r_{0}-j$.
Then the assumption $\Sigma_{p}(r-j) \geq p$ implies
$r_{m}'+ \cdots + r_{1}' \geq p-r_{0}' = j-r_{0} \geq 1$. Thus $r= r-j+j
\geq p +(j+r_{0}) = 2p+r_{0} \geq 2p$. Also note that for
all $l$ such that
$r_{0}+1 \leq l \leq j $, the base $p$-expansion of $r-l$ is given by
\[
r -l = r -j+ (j-l) = r_{m}'p^{m}+ \cdots + r_{1}'p+ (p+r_{0}-l).
\]
Thus
$\Sigma_{p}(r - (r_{0}+1)) -(p-1)= r_{m}'+ \cdots + r_{1}' \geq j-r_{0} \geq 1$
and $r-(r_{0}+1) \geq ( r_{m}'+ \cdots + r_{1}')p \geq p$.
For every $1 \leq u \leq j-r_{0}$, let $b_{u} = j+1-u $.
Clearly $1 \leq b_{u} \leq j \leq p-1$.
Applying \Cref{choice of s} (for $r- (r_{0}+1)$),
for every $1 \leq u \leq j-r_{0}$
there exists
$s_{u}$ such that
$p \leq s_{u} \leq r- r_{0}-1$, $s_{u} \equiv b_{u} =j+1-u \mod p$,
$\Sigma_{p}(s_{u}) = b_{u}+u = j+1$ and
$\binom{r-r_{0}-1}{s_{u}} \not \equiv 0 \mod p$.
Let $s_{u} = s_{u,m}p^{m}+ \cdots +s_{u,1}p+ s_{u,0}$ be the base
$p$-expansion of $s_{u}$.
By the above conditions for every $1 \leq u \leq j-r_{0}$,
we have $s_{u,0} = j+1-u$
and $s_{u,n} \leq r_{n}'$, by Lucas' theorem, for $1 \leq n \leq m$.
If $ 1\leq u < j- r_{0}\leq \sum_{n=1}^{m}r_{n}' $, then by \Cref{choice of s} we have
$s_{u} \leq (r-r_{0}-1)-p \leq r-p< r-j $, and also if $u=j-r_{0}$, then
$r- (r_{0} +1) - s_{u} \geq (p-1)-s_{j-r_{0},0} = p-1 -(r_{0}+1) > j-(r_{0}+1)$.
Thus for all $1 \leq u \leq j-r_{0}$,
we have $j< p \leq s_{u} < r-j$, so we may apply \eqref{3.23} to obtain
\begin{align}\label{Relation r-i small 2}
\sum\limits_{l=0}^{j} \binom{r-l}{s_{u}} A_{s_{u},l} =0, ~
\forall~1 \leq u \leq j-r_{0}.
\end{align}
Since $ s_{u,0} = j+1-u \geq r_{0}+1 $ by Lucas' theorem,
for all $l$ such that $0 \leq l \leq r_{0}$, we have
\[
\binom{r-l}{s_{u}} \equiv \binom{r_{m}}{s_{u,m}} \cdots \binom{r_{1}}{s_{u,1}}
\binom{r_{0}-l}{s_{u,0}} \equiv 0 \mod p.
\]
Again by Lucas' theorem,
for all $l$ such that $r_{0}+1 \leq l \leq j$, we have
\begin{align*}
\binom{r-l}{s_{u}} & \equiv
\binom{r_{m}'}{s_{u,m}} \cdots \binom{r_{1}'}{s_{u,1}} \binom{p+r_{0}-l}{j+1-u}
\mod p \\
&\equiv \binom{r-r_{0}-1}{s_{u}} \binom{p-1}{j+1-u}^{-1}
\binom{p+r_{0}-l}{j+1-u} \mod p.
\end{align*}
Since $s_{u} \equiv \Sigma_{p}(s_{u}) = j+1 $ mod $(p-1)$ and
$\binom{r-r_{0}-1}{s_{u}} \not \equiv 0$ mod $p$,
it follows from the above computations and \eqref{Relation r-i small 2} that
\begin{align*}
\sum\limits_{l=r_{0}+1}^{j} \binom{p+r_{0}-l}{j+1-u} A_{j+1,l} =0, ~
\text{for} ~ 1 \leq u \leq j-r_{0}.
\end{align*}
Writing the above set of equations in matrix form and
applying \Cref{matrix det} (i) (with $a=p-1$ and $i=j-r_{0}-1$),
we obtain $A_{j+1,r_{0}+1} = \cdots = A_{j+1,j}=0$.
Substituting $A_{j+1,r_{0}+1}, \ldots , A_{j+1,j}=0$
in \eqref{Relation r-i small i} leads to a contradiction.
This shows $X_{r-j,\,r} \subsetneq X_{r-(j+1),\,r} \subseteq X_{r-i,\,r} $
and proves the \enquote*{only if} part.
\end{proof}
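For instance, take $p=5$ and $r = 30 = 1\cdot 5^{2}+1\cdot 5+0$, so $r_{0}=0$. For $j=2 < i=4$ we have $\Sigma_{5}(28)=4 \leq p-1$, so $X_{28,\,30} = X_{26,\,30}$, whereas $\Sigma_{5}(29)=5 > p-1$, so $X_{29,\,30} \neq X_{28,\,30}$.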
Combining the above lemma with \Cref{terms equal in filtration r0 >i}, we have the following criterion for when $X_{r-j,\,r} = X_{r-i,\,r}$ are equal, for $1 \leq j < i \leq p-1$. \begin{lemma}\label{final X_r-i = X_r-j}
Let $1 \leq j < i \leq p-1$ and $p \leq r = r_{m}p^{m}+ \cdots +
r_{1}p+r_{0}$ be the base $p$-expansion of $r$. Then
\[
X_{r-j,\,r} = X_{r-i,\,r} ~ \Longleftrightarrow~
r_{0} \neq j, j+1, \ldots, i-1 , ~
\Sigma_{p}(r-j) \leq p-1 ~ \mathrm{and} ~ \Sigma_{p}(r-r_{0}) \leq j.
\] \end{lemma}
\begin{proof}
Clearly the \enquote*{if} part follows from \Cref{terms equal in filtration r0 >i}
and \Cref{X_r-i = X_ r-j} in the case $r_0 \geq i$ and
$r_0 < j$ respectively.
For the converse assume $X_{r-j,\,r} = X_{r-i,\,r}$.
Then we claim that $ r_{0} \neq j, j+1, \ldots, i-1$.
Suppose not.
Then the coefficient of $X^{r-i}Y^{i}$ in $X^{l}(kX+Y)^{r-l}$ is congruent to $0$ mod $p$, for all
$k \in \mathbb{F}_{p}$ and $0 \leq l \leq j$, since
$\binom{r-l}{i}$ vanishes,
by Lucas' theorem.
Hence $X^{r-i} Y^{i} \not \in \mathbb{F}_{p}$-span of $\lbrace X^{l}(kX+Y)^{r-l}, X^{r-l}Y^{l} :
k \in \mathbb{F}_{p}, 0 \leq l \leq j\rbrace = X_{r-j,\,r}$ by \Cref{Basis of X_r-i}, which is a
contradiction.
Thus $r_{0} \neq j$, $ j+1, \ldots, i-1$. If
$r_{0} \geq i$, then by
\Cref{terms equal in filtration r0 >i}, we have $\Sigma_{p}(r-r_{0}) \leq j \leq p-1$, whence
$\Sigma_{p}(r-j) = \Sigma_{p}(r-r_{0})+r_{0}-j \leq r_{0} \leq p-1$. Similarly if $r_{0}<j$,
then by \Cref{X_r-i = X_ r-j}, we have
$\Sigma_{p}(r-j) \leq p-1$, whence
$\Sigma_{p}(r-r_{0}) = \Sigma_{p}(r)-r_{0}= \Sigma_{p}(r-j) +j- (p-1)-r_{0} \leq j $,
by Lemma~\ref{Sp(r-i), Sp(r-j)} (ii).
This finishes the proof. \end{proof}
\begin{lemma}\label{dim not large}
Let $1 \leq i \leq p-1$, $p \leq r \equiv r_{0}
~ \mathrm{mod}~p$ with $0 \leq r_{0}<i$.
If $\Sigma_{p}(r-i)=p-1$, then $X_{r-i,\,r}/ X_{r-(i-1),\,r} \cong V_{p-1-i}
\otimes D^{i}$. \end{lemma}
\begin{proof}
For any integer $t \in \lbrace p, \ldots , p+i-2 \rbrace$
one checks that $\Sigma_{p}(t-i) = t-i < p-1$.
So the condition $\Sigma_{p}(r-i) = p-1$ implies that
$r \geq p+i-1$.
Let $s= r-i+1$. Then $s \geq p$, $\Sigma_{p}(s-1)=p-1$
and $s=(s-1)+1 \equiv \Sigma_{p}(s-1)+1 \equiv 1 $ mod $(p-1)$.
Applying \cite[Lemma 3.2]{BG15} for $s$, we get
$$
\sum_{k=0}^{p-1} X(kX+Y)^{s-1} = -X^{s}.
$$
Multiplying the above equation by $X^{i-1}$
we get
\begin{align}\label{Sp(r-i) =p-1 relation}
\sum_{k=0}^{p-1} X^{i}(kX+Y)^{r-i} = -X^{r}.
\end{align}
Since $r-i \equiv \Sigma_{p}(r-i)= p-1 \equiv 0$ mod $(p-1)$, we have
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i}) =
\operatorname{ind}_{B}^{\Gamma}(\chi_{2}^{i})$. Using
\Cref{Structure of induced} and \Cref{induced and successive},
we get
\[
\begin{tikzcd}
0 \arrow[r, rightarrow] & V_{i} \otimes D^{p-1} \arrow[r, rightarrow]
& \operatorname{ind}_{B}^{\Gamma} (\chi_{2}^{i}) \arrow[r, rightarrow]
\arrow[d, twoheadrightarrow, "\psi_{i}"] & V_{p-1-i} \otimes D^{i}
\arrow[r, rightarrow] & 0. \\
& & X_{r-i,\,r}/X_{r-(i-1),\,r} & &
\end{tikzcd}
\]
By \Cref{Structure of induced} (i) (for $l=0$), we have
$\sum\limits_{k \in \mathbb{F}_{p}}^{} [ \begin{psmallmatrix}
k & 1 \\ 1 & 0 \end{psmallmatrix} , e_{\chi_{2}^{i}}] $
is an element of
$V_{i} \otimes D^{p-1} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma} (\chi_{2}^{i})$.
Also by \Cref{induced and successive}, \eqref{Sp(r-i) =p-1 relation}
and \Cref{first row filtration} we see that
\[
\psi_{i} \bigg( \sum_{k \in \mathbb{F}_{p}}^{} [\begin{psmallmatrix}
k & 1 \\ 1 & 0 \end{psmallmatrix} , e_{\chi_{2}^{i}}] \bigg)
= \sum_{k=0}^{p-1} X^{i}(kX+Y)^{r-i} = -X^{r} \in X_{r}
\subseteq X_{r-(i-1)}.
\]
Thus
we get the composition $V_{i} \otimes D^{p-1} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma} (\chi_{1}^{r-i}\chi_{2}^{i}) \twoheadrightarrow
X_{r-i,\,r}/X_{r-(i-1),\,r}$ is the zero map. Hence
$V_{p-1-i} \otimes D^{i} \twoheadrightarrow X_{r-i,\,r}/X_{r-(i-1),\,r}$.
If $r_{0} =i-1$, then $X_{r-i,\,r}/ X_{r-(i-1),\,r} \neq 0$
by \Cref{final X_r-i = X_r-j}. If $r_{0}< i-1 $, then
$\Sigma_{p}(r-(i-1)) = \Sigma_{p}(r-i)+1 =p$ by Lemma~\ref{Sp(r-i), Sp(r-j)} (i),
whence $X_{r-i,\,r}/ X_{r-(i-1),\,r} \neq 0$,
by \Cref{final X_r-i = X_r-j}.
Therefore $X_{r-i,\,r}/ X_{r-(i-1),\,r} \cong V_{p-1-i} \otimes D^{i}$
in either case.
\end{proof}
Next we prove that if $\Sigma_{p}(r-i)> p-1$, then $X_{r-i,\,r}/X_{r-(i-1),\,r}$ is isomorphic to the principal series representation $\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$.
\begin{lemma}\label{Sp(r-i) large}
Let $1 \leq i \leq p-1$, $(i+1)(p+1) \leq r \equiv r_{0} \mod p$ with $0 \leq r_{0} <i$.
If $\Sigma_{p}(r-i)>p-1$, then $X_{r-i,\,r}/X_{r-(i-1),\,r}
\cong \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$. As a consequence,
$\dim X_{r-i,\,r}/X_{r-(i-1),\,r} = p+1$. \end{lemma}
\begin{proof}
By \Cref{induced and successive}, we have
$\psi_{i} : \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})
\twoheadrightarrow X_{r-i,\,r}/X_{r-(i-1),\,r} $.
We claim that $\psi_{i}$ is an isomorphism. As $\psi_{i}$ is
surjective, it is
enough to show that $\psi_{i}$ is injective. By
\Cref{Structure of induced} (i) (for $l=0$) and (ii) (for $l=p-1$),
we know that
$\sum_{\lambda \in \mathbb{F}_{p}} \begin{psmallmatrix}
\lambda & 1 \\ 1 &0 \end{psmallmatrix} [1, e_{\chi_{1}^{r-i}\chi_{2}^{i}}] $
and $ \sum_{\lambda \in \mathbb{F}_{p}^{\ast}} \begin{psmallmatrix}
\lambda & 1 \\ 1 &0 \end{psmallmatrix} [1, e_{\chi_{1}^{r-i}\chi_{2}^{i}}] $ are
elements of the two sub-quotients $V_{[2i-r]} \otimes D^{r-i}$
and $V_{p-1 - [2i-r]} \otimes D^{i}$ respectively of
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$. Hence, it is enough
to prove that these elements have non-zero image under
$\psi_{i}$. Indeed, if
\[
F(X,Y) := \sum_{\lambda \in \mathbb{F}_{p}} \begin{pmatrix}
\lambda & 1 \\ 1 &0 \end{pmatrix} \psi_{i}
([1, e_{\chi_{1}^{r-i}\chi_{2}^{i}}]) =
\sum_{\lambda \in \mathbb{F}_{p}} X^{i}( \lambda X+ Y)^{r-i}
\in X_{r-(i-1),\,r},
\]
then by \Cref{Basis of X_r-i}, there exist
$A_{l}, B_{l}$ and $c_{k,l} \in \mathbb{F}_{p}$ for $0 \leq l \leq i-1$
and $1 \leq k \leq p-1$, such that
\begin{align}\label{Relation r-j large}
F(X,Y)= \sum_{l=0}^{i-1} A_{l} X^{l} Y^{r-l}+ \sum_{l=0}^{i-1} B_{l} X^{r-l}Y^{l}
- \sum_{l=0}^{i-1} \sum_{k=1}^{p-1} c_{k,l} X^{l} (kX+Y)^{r-l}.
\end{align}
Observe that the coefficient of $X^{t}Y^{r-t}$ in $F(X,Y)$
equals
$\sum_{\lambda \in \mathbb{F}_{p} } \binom{r-i}{r-t} \lambda^{t-i}$ which is zero if
$t \not \equiv i$ mod $(p-1)$, by \eqref{sum fp}.
Therefore \eqref{Relation r-j large} reduces to
\begin{align}\label{Relation r-j large 1}
F(X,Y) =
\sum_{l=0}^{i-1} B_{l} X^{r-l}Y^{l}
-\sum_{l=0}^{i-1} C_{l} \sum_{\substack {l \leq j \leq r \\
j \equiv i~ \text{mod}~ (p-1)}} \binom{r-l}{r-j} X^{j}Y^{r-j},
\end{align}
where $C_{l} = \sum\limits_{k=1}^{p-1} k^{i-l} c_{k,l} $.
Comparing the coefficients of $X^{t}Y^{r-t}$ on both sides of
\eqref{Relation r-j large 1}, we get
\begin{align}\label{compare coeff r-j large}
\binom{r-i}{t-i} = \sum_{l=0}^{i-1} C_{l} \binom{r-l}{t-l},
~ \forall~ i < t \leq r-i ~\mathrm{and}~
t \equiv i ~\mathrm{mod}~(p-1).
\end{align}
Let $r = r_{m}p^{m}+ \cdots +r_{1}p+r_{0}$ be the base $p$-expansion of $r$.
Since $r-(r_{0}+1) \equiv p-1$ mod $p$ and $r-(r_{0}+1) < r$, note that the
base $p$-expansion of $r-(r_{0}+1)$ is of the form
$r_{m}'p^{m}+ \cdots +r_{1}'p+p-1$ for some $r_{m}', \ldots, r_{1}' $. As
$r_{0}<i$, we have the base $p$-expansion of $r-i$ is
given by $r_{m}'p^{m}+ \cdots + r_{1}'p+(p+r_{0}-i)$. Then the condition
$\Sigma_{p}(r-i) > p-1$ implies $r_{m}'+\cdots+ r_{1}' \geq i-r_{0}$.
We now show that $C_{r_{0}+1}, \ldots , C_{i-1} =0 $.
Note that this statement is vacuous if $i=r_{0}+1$.
So we may assume $i> r_{0}+1$.
For every $1 \leq u < i-r_{0}$, let $b_{u}=i-u$. Clearly
$0 \leq r_{0} \leq b_{u} \leq i-1 \leq p-1$.
By \Cref{choice of s} (applied for $r-(r_{0}+1)$), for every
$1 \leq u < i-r_{0}$, we can find an integer $s_{u}$
such that $p \leq s_{u} \leq r-r_{0}-1-p \leq r-p$,
$s_{u} \equiv b_{u}= i-u$ mod $p$, $\Sigma_{p}(s_{u}) = b_{u}+u=i$ and
$\binom{r-(r_{0}+1)}{s_{u}} \not \equiv 0 \mod p$.
Noting that $s_{u} \equiv \Sigma_{p}(s_{u}) =i$ mod $(p-1)$, by
\eqref{compare coeff r-j large}, we get
\[
\binom{r-i}{s_{u}-i}
= \sum_{l=0}^{i-1} C_{l} \binom{r-l}{s_{u}-l}.
\]
Since $s_{u} \equiv i-u $ mod $p$ and $i-u > r_{0}$,
for $1 \leq u < i-r_{0}$,
by Lucas' theorem, we have
\begin{align*}
\binom{r-l}{s_{u}-l} &\equiv
\begin{cases}
(*) \binom{r_{0}-l}{i-u-l}, & \mathrm{if} ~0 \leq l \leq r_{0},\\
(*) \binom{p+r_{0}-l}{p+i-u-l}, & \mathrm{if}~i-u <l \leq i,
\end{cases} \\
& \equiv 0 \mod p,
\end{align*}
where $(*)$ denotes the contribution from the higher order terms in
the base $p$-expansion. Hence, for every $1 \leq u < i-r_{0}$, we have
\begin{align}\label{Compare r-j large}
\sum_{l=r_{0}+1}^{i-u} C_{l} \binom{r-l}{s_{u}-l} =0.
\end{align}
Since $s_{u} \equiv i-u$ mod $p$ and $s_{u} \leq r$,
we get the base $p$-expansion of $s_{u}$ is given by
$s_{u} = s_{u,m} p^{m}+ \cdots + s_{u,1}p+ (i-u)$,
for some $s_{u,m}, \ldots , s_{u,1}$. By Lucas' theorem
and the choice of $s_{u}$, for $1 \leq u <i-r_{0}$,
we have
\begin{align*}
\binom{r-i+u}{s_{u}-i+u}& \equiv \binom{r_{m}'}{s_{u,m}} \cdots
\binom{r_{1}'}{s_{u,1}} \binom{p+r_{0}-i+u}{0} \mod p \\
&\equiv \binom{r-r_{0}-1}{s_{u}} \binom{p-1}{i-u}^{-1} \mod p \\
&\not \equiv 0 \mod p.
\end{align*}
Writing \eqref{Compare r-j large} in matrix form and noting that the
anti-diagonal entries $\binom{r-i+u}{s_{u}-i+u} \not \equiv 0$ mod $p$
for $1 \leq u < i-r_{0}$, we see
that $C_{r_{0}+1}= \cdots = C_{i-1} =0$. Thus
\eqref{compare coeff r-j large} reduces to
\begin{align}\label{compare coeff r-j large 1}
\binom{r-i}{r-t} = \sum_{l=0}^{r_{0}} C_{l} \binom{r-l}{r-t},
~ \forall~ i < t \leq r-i ~\mathrm{and}~
t \equiv i ~\mathrm{mod}~(p-1).
\end{align}
We now show that the above system of equations leads to a contradiction
by considering various cases.
If $r_{1}=0$, then $m \geq 2$ and $r \geq p^{2}$. Let
$j>1$ be the smallest positive integer such that $r_{j} \neq 0$.
Then
\[ r-i= r_{m}p^{m}+ \cdots +r_{j+1}p^{j+1}+ (r_{j}-1)p^{j}+
(p-1)p^{j-1}+ \cdots+(p-1)p+ (p+r_{0}-i).
\]
Let $s=(p-2)p+i+1 \equiv i$ mod $(p-1)$.
Since $i \leq p-1$, note that $p \leq s \leq p^{2} -p \leq r-p$.
By \eqref{compare coeff r-j large 1},
we have
\[
\binom{r-i}{s-i} =\sum_{l=0}^{r_{0}} C_{l} \binom{r-l}{s-l}.
\]
By Lucas' theorem, $\binom{r-i}{s-i} \equiv \binom{p-1}{p-2}
\binom{p+r_{0}-i} {1} \not \equiv 0$ mod $p$, and
for $0 \leq l \leq r_{0}$, we have
\begin{align*}
\binom{r-l}{s-l} & \equiv
\begin{cases}
\binom{r_{1}}{p-2}\binom{r_{0}-l}
{i+1-l}, & \mathrm{if}~ i<p-1, \\
\binom{r_{1}}{p-1}\binom{r_{0}-l}
{0} ,& \mathrm{if}~ i = p-1, l=0, \\
\binom{r_{1}}{p-2}\binom{r_{0}-l}
{p-l}, & \mathrm{if}~ i = p-1, l \neq 0, \\
\end{cases} \\
& \equiv 0 ~\mathrm{mod}~ p ~ ( ~\because r_{1}=0, ~ r_{0}<p).
\end{align*}
This leads to a contradiction.
If $r_{m}+ \cdots + r_{1} \geq i+1$, then by \Cref{choice of s}
(applied for $r-i$, $u=i$ and $b=p-i-1 \leq p+r_{0}-i$),
there exists $p \leq s \leq r-i-p$, $s \equiv p-i-1$ mod $p$,
$\Sigma_{p}(s) = p-1$ and $\binom{r-i}{s} \not
\equiv 0$ mod $p$. Taking $t=s+i$ in
\eqref{compare coeff r-j large 1}, we get
$$
\binom{r-i}{s} =\sum_{l=0}^{r_{0}} C_{l} \binom{r-l}{s+i-l} .
$$
As $s+i-l \equiv p-1-l \mod p$, for $0 \leq l \leq r_{0}$, we
have $0 \leq r_{0}-l < i-l \leq p-1-l$, whence by Lucas' theorem,
$\binom{r-l}{s+i-l} \equiv (*) \binom{r_{0}-l}{p-1-l} \equiv 0$
mod $p$, where $(*)$ denotes the contribution from higher order terms in the
base $p$-expansion. However by the choice of $s$ we have
$\binom{r-i}{s} \not \equiv 0$ mod $p$. Again we obtain a contradiction.
Finally, suppose $r_1 \neq 0 $ and $r_{m}+ \cdots + r_{1} \leq i$.
So $r-i = r_{m}p^{m}+ \cdots + r_{2}p^{2}+(r_{1}-1)p+p+r_{0}-i$ in this case.
By the
hypotheses $\Sigma_{p}(r-i) > p-1$ and $r_{1} \neq 0 $, it follows $i-r_{0} < r_{m}+
\cdots + r_{1}$.
Hence $r_{m}+ \cdots + r_{1} = i+1-r_{0}+w$, for some $0 \leq w \leq r_{0}-1$.
If $m=1$, then $(i+1)(p+1) \leq r = r_{1}p+r_{0} = (i+1-r_{0}+w)p+r_{0} < ip+i$
which is not possible. So $m \geq 2$.
Let $s = r-w-p$ and $s' = r-w-p^{m}$. Since $\Sigma_{p}(r) \equiv r \mod (p-1)$, we
see that $s$, $s' \equiv i \mod (p-1)$. Since $r_{m} , r_{1} \geq 1$ we have
$r \geq p^{m}+ p+ r_{0} \geq p^{m}+p+w$, so $p \leq s,s' \leq r-p$.
By \eqref{compare coeff r-j large 1} with $t = s$, $s'$, we get
$$
\binom{r-i}{s-i} =\sum_{l=0}^{r_{0}} C_{l} \binom{r-l}{s-l} , \quad
\binom{r-i}{s'-i}= \sum_{l=0}^{r_{0}} C_{l} \binom{r-l}{s'-l} .
$$
By Lucas' theorem, $\binom{r-l}{s-l} \equiv \binom{r_{m}}{r_{m}} \cdots
\binom{r_{2}}{r_{2}} \binom{r_{1}}{r_{1}-1} \binom{r_{0}-l}{r_{0}-w-l}
\equiv r_{1} \binom{r_{0}-l}{r_{0}-w-l} \mod p$ if $0 \leq l \leq r_{0}-w$.
Similarly, $\binom{r-l}{s'-l} \equiv \binom{r_{m}}{r_{m}-1} \cdots
\binom{r_{2}}{r_{2}} \binom{r_{1}}{r_{1}} \binom{r_{0}-l}{r_{0}-w-l}
\equiv r_{m} \binom{r_{0}-l}{r_{0}-w-l} \mod p$ if $0 \leq l \leq r_{0}-w$.
If
$r_{0}-w +1 \leq l \leq r_{0}$, then $s-l$, $s'-l \equiv p+r_{0}-w-l$
mod $p$ and $p+r_{0}-w-l> r_{0}-l$, so
it follows from Lucas' theorem that $\binom{r-l}{s-l}$, $\binom{r-l}{s'-l} \equiv 0$
mod $p$, for $r_{0}-w+1 \leq l \leq r_{0}$. Therefore
$$
\binom{r-i}{s-i} =r_{1}\sum_{l=0}^{r_{0}-w} C_{l}
\binom{r_{0}-l}{r_{0}-w-l}, \quad
\binom{r-i}{s'-i}=r_{m} \sum_{l=0}^{r_{0}-w} C_{l} \binom{r_{0}-l}{r_{0}-w-l}.
$$
Thus $\binom{r-i}{s-i} \binom{r-i}{s'-i}^{-1} \equiv r_{1} r_{m}^{-1} \mod p$. By Lucas' theorem,
$$
\binom{r-i}{s-i} \binom{r-i}{s'-i}^{-1} \equiv r_{m}^{-1} (r_{1}-1) \mod p.
$$
Since $r_{1} \neq r_{1}-1$ in $\mathbb{F}_{p}$, this again leads to a contradiction.
Next consider $G(X,Y) := \sum_{\lambda \in \mathbb{F}_{p}^{\ast}} \begin{psmallmatrix}
\lambda & 1 \\ 1 &0 \end{psmallmatrix} \psi_{i}
([1, e_{\chi_{1}^{r-i}\chi_{2}^{i}}]) = \sum_{\lambda \in \mathbb{F}_{p}^{\ast}}
X^{i}( \lambda X+ Y)^{r-i} $ and note that $F(X,Y)-G(X,Y) = X^{i}Y^{r-i}$.
So the coefficients of $X^{s}Y^{r-s}$ in $F(X,Y)$ and $G(X,Y)$ agree if $s \neq i$.
In the proof above for $F(X,Y)$,
we only compared the coefficients of $X^{s}Y^{r-s}$
for $s \neq i$. So imitating
the proof above for $F(X,Y)$, we get $G(X,Y) \not \in X_{r-(i-1),r}$.
This finishes the proof of the lemma. \end{proof}
Putting together all the results obtained so far, we have the following result. \begin{corollary}\label{successive quotients r_0 < i}
Let $p \geq 3$, $1 \leq i \leq p-1$, $(i+1)(p+1)<r$ and $r_{0} <i$. Then
\begin{align*}
\frac{X_{r-i,\,r}}{ X_{r-(i-1),\,r}} \cong
\begin{cases}
(0), &\mathrm{if} ~ \Sigma_{p}(r-i)<p-1, \\
V_{p-1-i} \otimes D^{i}, &\mathrm{if} ~ \Sigma_{p}(r-i)= p-1, \\
\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i}),
&\mathrm{if} ~ \Sigma_{p}(r-i)>p-1.
\end{cases}
\end{align*} \end{corollary}
\begin{proof}
The three assertions follow from
\Cref{r-i small},
\Cref{dim not large} and \Cref{Sp(r-i) large} respectively.
\end{proof}
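To illustrate the three cases, take $p=5$ and $i=3$, so that $(i+1)(p+1)=24$: for $r=30$ we have $\Sigma_{5}(27)=3<p-1$ and $X_{27,\,30}/X_{28,\,30}=(0)$; for $r=55$ we have $\Sigma_{5}(52)=4=p-1$ and $X_{52,\,55}/X_{53,\,55} \cong V_{1} \otimes D^{3}$; and for $r=57$ we have $\Sigma_{5}(54)=6>p-1$ and $X_{54,\,57}/X_{55,\,57} \cong \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{54}\chi_{2}^{3})$.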
\begin{lemma}\label{final quotient structure}
Let $p\geq 3$, $1 \leq i \leq p-1$,
$(i+1)(p+1)<r \equiv r_{0} ~ \mathrm{mod}~p$ with
$0 \leq r_{0}<i$. Then we have
\begin{enumerate}[label= \emph{(\roman*)}]
\item If $\Sigma_{p}(r-i)>p-1$, then
$X_{r-i,\,r}/ X_{r-r_{0},\,r} \cong V_{i-r_{0}-1}
\otimes \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{r_{0}+1})$.
\item If $\Sigma_{p}(r-i) \leq p-1$, then
$$ 0 \rightarrow V_{\Sigma_{p}(r-r_{0})-2} \otimes \operatorname{ind}_{B}^{\Gamma}
(\chi_{1} \chi_{2}^{r_{0}+1}) \rightarrow X_{r-i,\,r}/ X_{r-r_{0},\,r}
\rightarrow V_{p-1-\Sigma_{p}(r)} \otimes D^{\Sigma_{p}(r)} \rightarrow 0.
$$
\end{enumerate} \end{lemma} \begin{proof}
Note that
\begin{enumerate}
\item[(i)] By Lemma~\ref{Sp(r-i), Sp(r-j)} (i),
we have $\Sigma_{p}(r-j)= \Sigma_{p}(r-i)+i-j > p-1$, for all
$r_{0} < j \leq i$. Hence by \Cref{Sp(r-i) large}, we have
dim $X_{r-j,r} / X_{r-(j-1),r} = p+1$, for all $r_{0} < j \leq i$.
Thus $\dim X_{r-i,\,r}/ X_{r-r_{0},\,r} =
\sum\limits_{j=r_{0}+1}^{i} \dim X_{r-j,\,r} / X_{r-(j-1),\,r}
= (i-r_{0})(p+1)=
( \dim V_{i-r_{0}-1}) \times
( \dim\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{r_{0}+1}))$.
Now assertion (i) follows from \Cref{Succesive quotient}.
\item[(ii)] By Lemma~\ref{Sp(r-i), Sp(r-j)} (ii), we have
$\Sigma_{p}(r) =\Sigma_{p}(r-i)-(p-1)+ i \leq i$ and $\Sigma_{p}(r) \geq r_{0}+1$.
Thus by Lemma~\ref{Sp(r-i), Sp(r-j)} (i),
we have $\Sigma_{p}(r-\Sigma_{p}(r)) = \Sigma_{p}(r-i)+i-\Sigma_{p}(r) = p-1$ whence
$ X_{r-\Sigma_{p}(r), \, r} = X_{r-i, \,r}$ and
$X_{r-\Sigma_{p}(r), \, r}/ X_{r-(\Sigma_{p}(r)-1), \, r} \cong V_{p-1-\Sigma_{p}(r)}
\otimes D^{\Sigma_{p}(r)}$ by \Cref{X_r-i = X_ r-j} and \Cref{dim not large}
respectively.
We now claim that the inequality $\Sigma_{p}(r) \geq r_{0}+1$ is strict.
If
$\Sigma_{p}(r) = r_{0}+1$, then $r=p^{n}+r_{0}$
for some $n \geq 1$.
Since $\Sigma_{p}(r-i) \leq p-1$ and $r_{0}<i$, we have $r=p+r_{0}$. Thus
$r= p+r_{0} < 2p \leq (i+1)(p+1)$ which is a contradiction
as $i \geq 1$. Hence $\Sigma_{p}(r) > r_{0}+1$.
So by Lemma~\ref{Sp(r-i), Sp(r-j)} (i),
we have $\Sigma_{p}(r-(\Sigma_{p}(r)-1)) = \Sigma_{p}(r-i)+i-\Sigma_{p}(r)+1 = p$, whence
$X_{r-(\Sigma_{p}(r)-1), \, r} / X_{r-r_{0},\,r} \cong
V_{\Sigma_{p}(r)-r_{0}-2} \otimes \operatorname{ind}_{B}^{\Gamma}(\chi_{1} \chi_{2}^{r_{0}+1})$,
by part (i).
Putting all these together in the following
exact sequence
\[
0 \rightarrow X_{r-(\Sigma_{p}(r)-1), \, r} / X_{r-r_{0},\,r} \rightarrow
X_{r-\Sigma_{p}(r), \, r} / X_{r-r_{0}, \,r} \rightarrow
X_{r-\Sigma_{p}(r), \, r}/ X_{r-(\Sigma_{p}(r)-1),\, r} \rightarrow 0,
\]
part (ii) follows as
the middle term equals $X_{r-i, \, r} / X_{r-r_{0}, \,r}$. \qedhere
\end{enumerate} \end{proof}
Using the above lemma in conjunction with \Cref{Structure r_0 >i}, we obtain the following result which describes the structure of $X_{r-i,\,r}$ in the case $r_{0}<i$. \begin{theorem}\label{Structure r_0<i}
Let $p\geq 3$, $1 \leq i \leq p-1$ and $(i+1)(p+1)<r \equiv r_{0}
~ \mathrm{mod}~p$ with $0 \leq r_{0}<i$. Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $\Sigma_{p}(r-r_{0}) \geq p$, then
as $M$-modules we have
$X_{r-i,\,r} \cong X_{r-i,\,r-i} \otimes V_{i}$.
\item If $\Sigma_{p}(r-r_{0}) \leq p-1$ and $\Sigma_{p}(r-i) \geq p $, then
as $\Gamma$-modules we have
\begin{align*}
0 \rightarrow V_{\Sigma_{p}(r-r_{0})} \otimes V_{r_{0}}
\rightarrow X_{r-i,\,r}
\rightarrow V_{i-r_{0}-1} \otimes \operatorname{ind}_{B}^{\Gamma}
(\chi_{1}^{r-i}\chi_{2}^{r_{0}+1}) \rightarrow 0.
\end{align*}
\item If $\Sigma_{p}(r-r_{0}) \leq p-1 $ and $\Sigma_{p}(r-i) \leq p-1$, then
as $\Gamma$-modules we have
\begin{align*}
0 \rightarrow V_{\Sigma_{p}(r-r_{0})} \otimes V_{r_{0}}
\rightarrow X_{r-i,\,r}
\rightarrow W \rightarrow 0,
\end{align*}
where $W$ is an extension of $V_{\Sigma_{p}(r-r_{0})-2} \otimes
\operatorname{ind}_{B}^{\Gamma}
(\chi_{1}\chi_{2}^{r_{0}+1})$ by $V_{p-1- \Sigma_{p}(r)}
\otimes D^{\Sigma_{p}(r)}$.
\end{enumerate} \end{theorem} \begin{proof}
By \Cref{first row filtration}, we get the following
exact sequence
\[
0 \rightarrow X_{r-r_{0},\,r} \rightarrow X_{r-i,\,r}
\rightarrow X_{r-i,\,r}/ X_{r-r_{0},\,r} \rightarrow 0.
\]
If $\Sigma_{p}(r-i) \leq p-1$, then by Lemma~\ref{Sp(r-i), Sp(r-j)} (ii),
we have $\Sigma_{p}(r)-r_{0} = \Sigma_{p}(r-i) -(p-1)+i -r_{0} \leq i-r_{0} \leq p-1$.
Thus $\Sigma_{p}(r)-r_{0} =\Sigma_{p}(r-r_{0}) \geq p$ implies
$\Sigma_{p}(r-i) \geq p$. Hence by
\Cref{dim large 1} (for $i =r_{0}$) and
\Cref{final quotient structure} (i), we get dim $X_{r-i,\,r} =
\dim X_{r-r_{0},\,r} + \dim X_{r-i,\,r}/ X_{r-r_{0},\,r} =
(r_{0}+1)(p+1)+ (i-r_{0})(p+1)=
(i+1)(p+1) \geq \dim X_{r-i,\, r-i} \times \dim V_{i} $.
Thus by \Cref{surjection1}, we have $X_{r-i,\,r} \cong X_{r-i,\,r-i}
\otimes V_{i}$. This proves (i).
By \Cref{Structure r_0 >i} (applied for $i=r_{0}$), we have
\begin{align*}
X_{r-r_{0},\,r} \cong
\begin{cases}
X_{r-r_{0},\, r-r_{0}} \otimes V_{r_{0}},
&~\mathrm{if}~ r_{0} \leq \Sigma_{p}(r-r_{0}) \leq p-1, \\
X_{r-\Sigma_{p}(r-r_{0}), \,r-\Sigma_{p}(r-r_{0})}\otimes V_{\Sigma_{p}(r-r_{0})},
& ~\mathrm{if}~ \Sigma_{p}(r-r_{0}) < r_{0}.
\end{cases}
\end{align*}
Note that since $r \geq p$, we have $\Sigma_{p}(r-r_{0}) \geq 1$ in the setting of the theorem.
If $r_{0} \leq \Sigma_{p}(r-r_{0}) \leq p-1$, then since
$r-r_{0} \equiv \Sigma_{p}(r-r_{0})$ mod $(p-1)$,
we have $X_{r-r_{0},\,r} \cong V_{\Sigma_{p}(r-r_{0})} \otimes V_{r_{0}}$,
by \Cref{dimension formula for X_{r}} (i). If
$1 \leq \Sigma_{p}(r-r_{0}) < r_{0}$, then since
$r- \Sigma_{p}(r-r_{0}) \equiv r_{0} $ mod $(p-1)$ with
$1 \leq r_{0} \leq p-1$, we have
$ X_{r-r_{0},\,r} \cong V_{r_{0}} \otimes V_{\Sigma_{p}(r-r_{0})}$,
by \Cref{dimension formula for X_{r}} (i).
Hence
$X_{r-r_{0},\,r} \cong V_{\Sigma_{p}(r-r_{0})} \otimes V_{r_{0}}$ in either
case. Now parts (ii) and (iii) follow
from the short exact sequence above,
and \Cref{final quotient structure} (i) and (ii) respectively.
\end{proof}
\begin{remark} \label{r - i vs r - r_0}
Using Lemma \ref{Sp(r-i), Sp(r-j)} (ii),
the condition $\Sigma_{p}(r-i) \leq p-1$ implies that
$\Sigma_{p}(r-r_{0}) = \Sigma_{p}(r)-r_{0}= \Sigma_{p}(r-i)+i-(p-1)-r_{0} \leq i -r_{0} \leq p-1$.
So the extra assumption $\Sigma_{p}(r-r_{0}) \leq p-1$ in part (iii) of the above
theorem
is redundant.
\end{remark} As a corollary, we have the following dimension formula. \begin{corollary} \label{dimension r_0 < i}
Let $p\geq 3$, $1 \leq i \leq p-1$ and $(i+1)(p+1)<r \equiv r_{0}
~ \mathrm{mod}~p$ with $0 \leq r_{0}<i$. Then
\begin{align*}
\dim X_{r-i,\,r} =
\begin{cases}
(i+1)(p+1), & ~ \mathrm{if} ~ \Sigma_{p}(r-r_{0}) \geq p, \\
( i-r_{0})(p+1)+ (\Sigma_{p}(r-r_{0})+1)(r_{0}+1), &
~ \mathrm{if} ~ \Sigma_{p}(r-r_{0}) \leq p,~
\Sigma_{p}(r-i) \geq p, \\
(p+r_{0}+1) \Sigma_{p}(r-r_{0}),
&~ \mathrm{if} ~ \Sigma_{p}(r-i) < p.
\end{cases}
\end{align*} \end{corollary}
\section{Structure of \texorpdfstring{$Q(i)$}{}} \label{section Q}
Recall that $\theta = X^{p}Y- XY^{p}$, $V_{r}^{(m)} = \lbrace F(X,Y) \in \mathbb{F}_{p}[X,Y] : \theta^{m} \mid F ~ \text{in} ~ \mathbb{F}_{p}[X,Y] \rbrace$ and $X_{r-i,\,r}^{(m)} = V_{r}^{(m)} \cap X_{r-i,\,r}$ for $0 \leq i \leq r$ and $m \in \mathbb{Z}_{\geq 0}$. In this section, we study the quotient module $Q(i)$ of $V_{r}$, for $0 \leq i \leq p-1$, defined by
\begin{align*}
Q(i) = \frac{V_{r}}{X_{r-i,\,r}+V_{r}^{(i+1)}}. \end{align*}
Similarly, for $1 \leq i \leq p-1$, let
\begin{align*}
P(i) = \frac{V_{r}}{X_{r-(i-1),\,r}+V_{r}^{(i+1)}}. \end{align*}
These modules play an important role in the study of the reductions of Galois representations mod $p$ via the mod $p$ local Langlands correspondence.
Observe that, for $1 \leq i \leq p-1$, we have the following exact sequence
\begin{align}\label{Q i-1 and P i}
0 \rightarrow \frac{X_{r-(i-1),\,r}+V_{r}^{(i)}}{X_{r-(i-1),\,r}+V_{r}^{(i+1)}}
\rightarrow P(i) \rightarrow Q(i-1) \rightarrow 0,
\end{align}
where the first map is the inclusion and the last map is the quotient
map. By the second isomorphism theorem, we have
$(X_{r-(i-1),\,r}+V_{r}^{(i)})/(X_{r-(i-1),\,r}+V_{r}^{(i+1)})$
is isomorphic to $V_{r}^{(i)}/(X_{r-(i-1),\, r}^{(i)}+ V_{r}^{(i+1)})$,
which is also the cokernel of the map
\[ X_{r-(i-1),\,r}^{(i)}/X_{r-(i-1),\,r}^{(i+1)}
\hookrightarrow V_{r}^{(i)}/V_{r}^{(i+1)} . \]
In the process of determining the $Q(i)$ in
this section, we will
need to determine the quotients
$X_{r-i,\,r}^{(j)}/X_{r-i,\,r}^{(j+1)}$, for $0 \leq i,j \leq p-1$, cf.
\Cref{Structure X(1)}, \Cref{reduction}, \Cref{singular quotient X_{r}},
\Cref{singular quotient i < [a-i]}, \Cref{singular i= [a-i]},
\Cref{singular i>r-i}, \Cref{singular i=a} and \Cref{singular i=p-1} below.
Since the structure of
$V_{r}^{(i)}/V_{r}^{(i+1)}$ is well known, cf. \Cref{Breuil map}, we can
deduce the structure of the cokernel of the map above.
So in principle we may deduce
the structure of $P(i)$ from $Q(i-1)$, and so for the rest of this paper
we concentrate on determining just the $Q(i)$.
Note that the definition of $Q(i)$ and $P(i)$ involves $r$. In this section, to simplify notation we often denote
$X_{r-i,\,r}$ by just $X_{r-i}$ etc.
For every $0 \leq i \leq p-1$ and $ m,n \in \mathbb{Z}_{\geq 0}$ with $n \leq m$, we have
$$
X_{r-i}^{(n)} \hookrightarrow V_{r}^{(n)} \twoheadrightarrow
V_{r}^{(n)}/V_{r}^{(m)}.
$$
Clearly the kernel of the composition is $X_{r-i}^{(m)} $.
For every $ 0 \leq i \leq p-1$, we have an exact sequence
\begin{align}\label{Q(i) exact sequence}
0 \rightarrow X_{r-i}/X_{r-i}^{(i+1)} \rightarrow V_{r}/V_{r}^{(i+1)} \rightarrow
Q(i) \rightarrow 0,
\end{align}
where the leftmost map is induced by inclusion and the rightmost map is the quotient map.
In \cite[(4.2)]{Glover}, Glover showed that $V_r/V_r^{(1)}$ is periodic with period $p-1$, for $r \geq p$. We generalize this result, working over the truncated polynomial ring $\mathbb{F}_{p}[\epsilon]/(\epsilon^{m})$ (a generalization of the ring of dual numbers), to show that $V_{r}/V_{r}^{(m)}$ is periodic with period $p(p-1)$, i.e., $V_{r}/V_{r}^{(m)} \cong V_{r+p(p-1)}/V_{r+p(p-1)}^{(m)}$, for $1 \leq m \leq p-1$ and $r \geq m(p+1)-1$.
\begin{lemma}\label{quotient periodic}
Let $1 \leq m \leq p$, and let $r \geq s \geq
m(p+1)-1 $. If $r \equiv s ~ \mathrm{mod}~p(p-1)$, then
\begin{align*}
\frac{V_{r}}{V_{r}^{(m)}} \cong \frac{V_{s}}{V_{s}^{(m)}}.
\end{align*} \end{lemma} \begin{proof}
Let $\epsilon$ be a variable and let $R = \mathbb{F}_{p}[\epsilon]/(\epsilon^{m})$.
Let $G(R) = \mathrm{GL}_{2}(R)$ and $B(R)$ denote the Borel subgroup
consisting of the upper triangular matrices in $G(R)$.
Define a map
$ \psi_{r}: V_{r} \rightarrow \operatorname{ind}_{B(R)}^{G(R)}(\chi_2^{r})$, by
$$
\psi_{r}(P(X,Y))(\gamma)= P((0,1)\gamma) = P(c,d),
$$
for all $\gamma = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix}
\in G(R)$ and $P(X,Y) \in V_{r}$.
Observe that, for $\gamma =
\begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in G(R)$,
$\gamma' = \begin{psmallmatrix} a' & b '\\ 0 & d' \end{psmallmatrix}
\in B(R)$ and $P(X,Y) \in V_{r}$, we have
\[
\psi_{r}(P(X,Y))(\gamma' \gamma) = P(cd',dd') = d'^{r} P(c,d)
= \gamma' \cdot \psi_{r}(P(X,Y))(\gamma),
\]
so $\psi_{r}(P(X,Y)) \in \operatorname{ind}_{B(R)}^{G(R)}(\chi_2^{r})$ and
$\psi_{r}$ is well-defined. For $\gamma_{1} =
\begin{psmallmatrix} a_{1} & b_{1} \\ c_{1} & d_{1} \end{psmallmatrix}
\in \Gamma$ and $\gamma_{2} =
\begin{psmallmatrix} a_{2} & b_{2} \\ c_{2} & d_{2} \end{psmallmatrix}
\in G(R)$, we have
\begin{align*}
\psi_{r}(\gamma_{1} \cdot P (X,Y)) (\gamma_{2}) & =
(\gamma_{1} \cdot P)(c_{2}, d_{2}) =
P(a_{1}c_{2}+c_{1}d_{2}, b_{1}c_{2}+ d_{1}d_{2})=
\psi_{r}(P(X,Y)) (\gamma_{2} \gamma_{1}) \\
& = (\gamma_{1} \cdot \psi_{r}(P(X,Y)) ) (\gamma_{2}).
\end{align*}
Thus $\psi_{r}$ is $\mathbb{F}_{p}[\Gamma]$-linear, as $\psi_{r}$ is $\mathbb{F}_{p}$-linear.
We next show that ker $\psi_{r} = V_{r}^{(m)}$. For
$c = c_{0}+c_{1} \epsilon+ \cdots + c_{m-1} \epsilon^{m-1}$
and $d =d_{0}+d_{1} \epsilon+ \cdots + d_{m-1} \epsilon^{m-1} \in R$,
we have
\[
c^{p} d - c d^{p} = c_{0} d - c d_{0}
\in \epsilon R.
\]
So $(c^{p} d - c d^{p})^{m} =0$ and $V_{r}^{(m)} \subseteq$ ker $\psi_{r}$.
Conversely let $P(X,Y) = \sum\limits_{i=0}^{r} a_{i} X^{r-i}Y^{i} \in $ ker $\psi_{r}$.
Then
\[
\psi_{r}(P(X,Y))
\begin{psmallmatrix} \epsilon^{m-1} & -1 \\ 1 & \epsilon \end{psmallmatrix}
= P(1,\epsilon) = \sum\limits_{i=0}^{r} a_{i} \epsilon^{i}
= \sum\limits_{i=0}^{m-1} a_{i} \epsilon^{i} \quad
( ~\because \epsilon^{m}=0).
\]
So $a_{0}$, $a_{1}, \ldots , a_{m-1} =0$ and $Y^{m} \mid P(X,Y)$.
Since $\psi_{r}$ is $\Gamma$-linear we see that
for all $\gamma \in \Gamma$,
$\gamma \cdot P(X,Y) \in $ ker $\psi_{r}$,
whence $\gamma^{-1} \cdot Y^{m} \mid P(X,Y)$.
Taking $\gamma =
\begin{psmallmatrix} -\lambda & -1 \\ 1 & 0 \end{psmallmatrix} $ for
$\lambda \in \mathbb{F}_{p}$, we see that $(X- \lambda Y)^{m} \mid P(X,Y)$, for
$\lambda \in \mathbb{F}_{p}$.
Hence $P(X,Y) \in V_{r}^{(m)}$ and ker $\psi_{r} \subseteq V_{r}^{(m)}$.
This shows that ker $\psi_{r} = V_{r}^{(m)}$. Thus $\psi_{r}$ induces
an injective map $\psi_{r}: V_{r}/V_{r}^{(m)} \rightarrow
\operatorname{ind}_{B(R)}^{G(R)}(\chi_2^{r})$. Let $r$, $s$ be as in the hypothesis. Then
we have $\operatorname{ind}_{B(R)}^{G(R)}(\chi_2^{s}) = \operatorname{ind}_{B(R)}^{G(R)}(\chi_2^{r})$,
as $\chi_2^{r-s}$ is the trivial character.
By Lemma~\ref{basis}, the set $\Lambda$ provides a basis of $V_r/V_{r}^{(m)}$ and similarly of $V_s/V_{s}^{(m)}$.
For $\gamma =
\begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in G(R) $ and
$0 \leq i \leq s-m$, we have
\begin{align*}
\psi_{s}(X^{s-i}Y^{i})(\gamma)
& = \begin{cases}
0, &\mathrm{if }~ c \in \epsilon R, \\
c^{s-i} d^{i}, & \mathrm{if } ~ c \not \in \epsilon R.
\end{cases} \\
&= \psi_{r}(X^{r-i}Y^{i})(\gamma),
\end{align*}
since $r \equiv s ~\mathrm{mod}~p(p-1)$.
Since $ s-m \geq m(p+1)-(m+1)$, we see that $\psi_{s}$ and $\psi_{r}$
agree on the
first kind of basis elements in $\Lambda$.
Similarly for $s-m < i \leq s$, one checks that
$\psi_{s}(X^{s-i}Y^{i})(\gamma) = \psi_{r}(X^{s-i}Y^{r-s+i})(\gamma)$.
Since $r-m < r-s+i \leq r$, we see that $\psi_{s}$ and $\psi_{r}$
also agree on the
second kind of basis elements in $\Lambda$.
Thus
$\psi_{s}(V_{s}/V_{s}^{(m)}) = \psi_{r}(V_{r}/V_{r}^{(m)})$ and we
have a $\Gamma$-linear isomorphism
\begin{equation*}
\psi: V_{s}/V_{s}^{(m)} \overset{\psi_{s}}{\longrightarrow}
\psi_{s}(V_{s}/V_{s}^{(m)}) = \psi_{r}(V_{r}/V_{r}^{(m)})
\overset{\psi_{r}^{-1}}{\longrightarrow} V_{r}/V_{r}^{(m)}. \qedhere
\end{equation*}
\end{proof}
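For instance, for $p=3$ and $m=2$ the lemma gives $V_{7}/V_{7}^{(2)} \cong V_{13}/V_{13}^{(2)}$, since $13 \equiv 7 \mod p(p-1)=6$ and both $7$, $13 \geq m(p+1)-1=7$. As a consistency check, both quotients are $8$-dimensional: $V_{7}^{(2)}=(0)$ because $\deg \theta^{2}=8>7$, while $V_{13}^{(2)}=\theta^{2}V_{5}$ has dimension $6$.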
For $n \geq 0$ and $0 \leq j \leq i \leq p-1$, we
have $X_{r-j}^{(n)} \subseteq X_{r-i}^{(n)}$, by \Cref{first row filtration}.
Therefore, for every $0 \leq j \leq i \leq p-1$ and $0 \leq n \leq m$, we have
\[ X_{r-j}^{(n)}/X_{r-j}^{(m)} \subseteq X_{r-i}^{(n)}/X_{r-i}^{(m)} \subseteq
V_{r}^{(n)}/V_{r}^{(m)}. \] As a corollary, we obtain
\begin{corollary}\label{arbitrary quotient periodic}
Let $0 \leq n < m \leq p$, $0 \leq i \leq p-1$, $r \geq s \geq
m(p+1)-1$. If $r \equiv s ~ \mathrm{mod}~p(p-1)$, then
\begin{align*}
\frac{V_{r}^{(n)}}{V_{r}^{(m)}} \cong \frac{V_{s}^{(n)}}{V_{s}^{(m)}} \quad \text{and} \quad
\frac{X_{r-i}^{(n)}}{X_{r-i}^{(m)}} \cong \frac{X_{s-i}^{(n)}}{X_{s-i}^{(m)}}.
\end{align*}
In particular, $Q(i)$ is periodic with period $p(p-1)$,
for all $0 \leq i \leq p-1$ and all $r \geq i(p+1) +p$. \end{corollary} \begin{proof}
Let $\psi: V_{s} / V_{s}^{(m)} \rightarrow V_{r}/ V_{r}^{(m)}$
be defined as in the proof of \Cref{quotient periodic}.
Explicitly we have
\begin{align*}
\psi(X^{s-i}Y^{i}+V_{s}^{(m)}) =
\begin{cases}
X^{r-i}Y^{i}+V_{r}^{(m)}, &~ \mathrm{if}~0\leq i \leq s-m \\
X^{s-i}Y^{r-s+i}+V_{r}^{(m)}, &~ \mathrm{if}~s-m < i \leq s.
\end{cases}
\end{align*}
We claim that
$\psi (V_{s}^{(n)}/V_{s}^{(m)}) = V_{r}^{(n)}/V_{r}^{(m)}$.
Let $F(X,Y) = \sum_{j=0}^{s} a_{j} X^{s-j}Y^{j} \in V_{s}^{(n)}$. Then
\[
\psi(F(X,Y)+V_{s}^{(m)}) = \sum_{j=0}^{s-m} a_{j} X^{r-j}Y^{j} +
\sum_{j=s-(m-1)}^{s} a_{j} X^{s-j}Y^{r-s+j}+V_{r}^{(m)}.
\]
Clearly, for $0 \leq i < n$, the coefficient of $X^{s-i}Y^{i}$ (resp. $X^{i}Y^{s-i}$)
in $F(X,Y)$ equals the coefficient of $X^{r-i}Y^{i}$ (resp. $X^{i}Y^{r-i}$)
in $\psi(F(X,Y))$, so both vanish as
$F(X,Y) \in V_{s}^{(n)}$. Also noting that
$ r \equiv s$ mod $p$, for
$1 \leq t \leq p-1$ and $0 \leq l < n \leq p-1$, by Lucas' theorem,
we have
\begin{align*}
\sum_{\substack{0 \leq j \leq s-m \\ j \equiv t~\mathrm{mod}~(p-1)}}^{}
a_{j} \binom{j}{l} +
\sum_{\substack{s-m< j \leq s \\ j \equiv t~\mathrm{mod}~(p-1)}}^{}
a_{j} \binom{r-s+j}{l} \equiv
\sum_{\substack{0 \leq j \leq s \\ j \equiv t~\mathrm{mod}~(p-1)}}^{}
a_{j} \binom{j}{l} \mod p.
\end{align*}
Since $F(X,Y) \in V_{s}^{(n)}$, we see that the right hand side vanishes, by
\Cref{divisibility1}. Hence by \Cref{divisibility1}, we obtain
$\psi(F) \in V_{r}^{(n)}$, so $\psi$ induces an injective map
$V_{s}^{(n)} / V_{s}^{(m)} \rightarrow V_{r}^{(n)}/ V_{r}^{(m)}$,
for all $0 \leq n \leq m$.
Since dim $V_{s}^{(n)} / V_{s}^{(m)} = \dim V_{r}^{(n)}/ V_{r}^{(m)}$,
the above map is an isomorphism.
Since $X^{r-i}Y^{i}$ generates $X_{r-i}$ and
$\psi(X^{s-i}Y^{i}+V_{s}^{(m)}) = X^{r-i}Y^{i}+V_{r}^{(m)}$ we see that
$\psi(X_{s-i} +V_{s}^{(m)}) = X_{r-i}+V_{r}^{(m)}$. Thus
$\psi(X_{s-i}^{(n)} +V_{s}^{(m)}) = \psi((X_{s-i} +V_{s}^{(m)}) \cap( V_{s}^{(n)} +V_{s}^{(m)}))
= \psi(X_{s-i}+V_{s}^{(m)}) \cap \psi(V_{s}^{(n)}+V_{s}^{(m)})
= (X_{r-i} \cap V_{r}^{(n)})+V_{r}^{(m)} = X_{r-i}^{(n)}+V_{r}^{(m)}$.
By the second isomorphism theorem, we have
\[
\frac{X_{s-i}^{(n)}}{ X_{s-i}^{(m)} } \cong
\frac{X_{s-i}^{(n)} +V_{s}^{(m)}}{V_{s}^{(m)}} \overset{\psi} {\cong}
\frac{X_{r-i}^{(n)} +V_{r}^{(m)}}{V_{r}^{(m)} }
\cong \frac{X_{r-i}^{(n)}}{ X_{r-i}^{(m)}}.
\] The periodicity of $Q(i)$ now follows immediately from the exact sequence \eqref{Q(i) exact sequence}. \end{proof}
For every $0 \leq j \leq i \leq p-1$, we have the following
commutative diagram:
\begin{equation}\label{commutative diagram}
\begin{tikzcd}
& 0 \arrow{d} & 0 \arrow{d} & 0 \arrow{d} & \\
0 \arrow{r} & \frac{X_{r-i}^{(j)}}{ X_{r-i}^{(i+1)}} \arrow{r} \arrow{d}&
\frac{X_{r-i}} {X_{r-i}^{(i+1)}} \arrow{r} \arrow{d}&
\frac{X_{r-i}}{ X_{r-i}^{(j)}} \arrow{r} \arrow{d} & 0 \\
0 \arrow{r} & \frac{V_{r}^{(j)}}{ V_{r}^{(i+1)}} \arrow{r} \arrow{d}&
\frac{V_{r}} {V_{r}^{(i+1)}} \arrow{r} \arrow{d} &
\frac{V_{r}}{ V_{r}^{(j)}} \arrow{r} \arrow{d} & 0 \\
0 \arrow{r} & \frac{V_{r}^{(j)}}{ X_{r-i}^{(j)}+V_{r}^{(i+1)}} \arrow{r} \arrow{d}&
Q(i) \arrow{r} \arrow{d} &
\frac{V_{r}}{ X_{r-i}+V_{r}^{(j)}} \arrow{r} \arrow{d} & 0,\\
& 0 & 0 & 0 &
\end{tikzcd} \end{equation} where the leftmost bottom entry is isomorphic to $(X_{r-i}+V_{r}^{(j)})/ (X_{r-i}+V_{r}^{(i+1)})$, by the second isomorphism theorem. Recall that $V_{r} \supseteq V_{r}^{(1)} \supseteq \cdots $ is a descending chain of submodules of $V_{r}$. By \Cref{Breuil map}, we know the structure of the quotients of successive terms in the chain, hence we know the structure of arbitrary quotients in the chain. Thus by the commutative diagram above, to determine the structure of $Q(i)$, it is enough to determine the quotients $X_{r-i}^{(n)}/X_{r-i}^{(m)}$, for all $0 \leq n \leq m$. These quotients in turn are determined by $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$, for $j \geq 0$.
The first result describes $X_{r-i}/ X_{r-i}^{(1)}$, for $0 \leq i \leq p-1$.
\begin{lemma}\label{Structure X(1)}
Let $p \geq 2$ and $p \leq r \equiv a \mod(p-1)$ with $1 \leq a \le p-1$. For
$ 0 \leq i \leq p-1$ we have
\begin{align*}
\frac{X_{r-i}}{X_{r-i}^{(1)}} \cong
\begin{cases}
V_{a}, & \mathrm{if} ~ 0 \leq i <a, \\
V_{r}/V_{r}^{(1)}, & \mathrm{if} ~ a \leq i \leq p-1.
\end{cases}
\end{align*} \end{lemma} \begin{proof}
By \cite[(4.5)]{Glover}, we have $X_{r}/X_{r}^{(1)} \cong V_{a}$.
For $0 \leq i \leq p-1$, we have $V_{a} \cong X_{r}/X_{r}^{(1)}
\subseteq X_{r-i}/X_{r-i}^{(1)}$.
Since $X^{r-i}Y^{i}$ generates $X_{r-i}$ as a $\Gamma$-module, we see that
$X^{r-i}Y^{i}$ generates $X_{r-i}/X_{r-i}^{(1)}$.
Therefore $X_{r-i}/X_{r-i}^{(1)}= V_{r}/V_{r}^{(1)} $ if and only if the image of
$X^{r-i}Y^{i}$ in the quotient $V_{p-1-a} \otimes D^{a}$ of
$V_{r}/V_{r}^{(1)}$ in the exact sequence
\eqref{exact sequence Vr} (for $m=0$) is non-zero
if and only if $i \geq a$, by \Cref{Breuil map} (for $m=0$). This finishes the proof. \end{proof}
Recall that $[n]$ denotes the congruence class of $n$ modulo $(p-1)$ in $\lbrace 1, 2 , \ldots, p-1 \rbrace$, for $n \in \mathbb{Z}$. We now make an important further reduction. For $1 \leq j \leq p-1$, we show that $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$ equals $X_{r-j'}^{(j)}/X_{r-j'}^{(j+1)}$, where $j'$ is the largest integer among the integers $0$, $j$ and $[r-j]$ which is less than or equal to $i$. In order to do this, it is convenient to introduce the following notation. For every $0 \leq i , j \leq p-1$, define the following subquotient of $V_{r}^{(j)}/V_{r}^{(j+1)}$ by \[
Y_{i,j} := \frac{X_{r-i}^{(j)}/ X_{r-i}^{(j+1)}}{X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)}}. \] By the second and third isomorphism theorems respectively, we have
\begin{align*}
\frac{X_{r-i}^{(j)}/ X_{r-i}^{(j+1)}}{X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)}}
& \cong \frac{X_{r-i}^{(j)}/X_{r-i}^{(j+1)}} {(X_{r-(i-1)}^{(j)}+
X_{r-i}^{(j+1)})/X_{r-i}^{(j+1)}}
\cong \frac{X_{r-i}^{(j)}}{X_{r-(i-1)}^{(j)}+X_{r-i}^{(j+1)}},
\end{align*}
and, by the analog of the Zassenhaus lemma \cite[Lemma 3.3]{Lang}
for modules and the inclusions
$X_{r-i}^{(j+1)} \subseteq X_{r-i}^{(j)} \subseteq V_{r}^{(j)} $, we have
\begin{align*}
\frac{X_{r-i}^{(j)}}{X_{r-(i-1)}^{(j)}+X_{r-i}^{(j+1)}}
= \frac{\left(X_{r-i} \cap V_{r}^{(j)} \right) + X_{r-i}^{(j+1)}}
{ \left(X_{r-(i-1)} \cap V_{r}^{(j)} \right) + X_{r-i}^{(j+1)}}
\cong
\frac{X_{r-i}^{(j)}+X_{r-(i-1)}}{X_{r-i}^{(j+1)}+X_{r-(i-1)}}.
\end{align*}
Hence
\begin{align}\label{Y i,j}
Y_{i,j} \cong \frac{X_{r-i}^{(j)}+X_{r-(i-1)}}{X_{r-i}^{(j+1)}+X_{r-(i-1)}}.
\end{align}
\begin{lemma}\label{reduction}
Let $p \geq 2$, $0 \leq i \leq p-1$ and $1 \leq j \leq p-1$. For every
$p \leq r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$, we have
\begin{align*}
\frac{ X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}}=
\begin{cases}
X_{r-j}^{(j)}/ X_{r-j}^{(j+1)},
& \mathrm{if}~ j \leq i < [a-j ]
~ \mathrm{or} ~ [a-j] \leq j \leq i , \\
X_{r-[a-j]}^{(j)}/ X_{r-[a-j ]}^{(j+1)},
& \mathrm{if}~ [ a-j ] \leq i < j ~
\mathrm{or}~ j \leq [a-j ] \leq i , \\
X_{r}^{(j)}/X_{r}^{(j+1)}, & \mathrm{if}~ i < j \leq [a-j]~
\mathrm{or} ~ i < [a-j] \leq j.
\end{cases} \end{align*} \end{lemma}
\begin{proof}
If $i=0$, then there is nothing to prove. So assume $i \geq 1$.
We claim that for $i \geq 1$ and $j \neq i$, $[a-i]$, we have
$X_{r-i}^{(j)}/ X_{r-i}^{(j+1)} = X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)}$. Suppose not.
Then $Y_{i,j}$, a non-zero subquotient of $V_{r}^{(j)}/V_{r}^{(j+1)}$, and
$(X_{r-i}^{(j)}+ X_{r-(i-1)})/ (X_{r-i}^{(j+1)}+ X_{r-(i-1)})$,
a subquotient of $X_{r-i}/X_{r-(i-1)}$, have a common JH factor, by \eqref{Y i,j}.
Therefore, by \Cref{induced and star} and \Cref{induced and successive}, we get
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{j} \chi_{2}^{r-j})$
and $\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})$ have a common JH factor.
But this is not possible by \Cref{Common JH factor},
since $j \neq [a-i] $, $i$.
This proves the claim.
Since $1 \leq i, j \leq p-1$, we have $j \neq i$, $[a-i]$
if and only if $i \neq j$, $[a-j]$.
If $j \leq i < [a-j ]$ or $ [a-j] \leq j \leq i $, then applying the
claim above repeatedly, we have
$ X_{r-i}^{(j)}/ X_{r-i}^{(j+1)} = X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)} =
\cdots = X_{r-j}^{(j)}/ X_{r-j}^{(j+1)}$. Similarly if $[a-j] \leq i < j $ or
$j \leq [a-j] \leq i $, then by the claim above, we have $ X_{r-i}^{(j)}/ X_{r-i}^{(j+1)} =
X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)} =\cdots = X_{r-[a-j]}^{(j)}/ X_{r-[a-j]}^{(j+1)}$.
Finally if $i < j \leq [a-j]$ or $i<[a-j] \leq j$, then by the claim above
we have $ X_{r-i}^{(j)}/ X_{r-i}^{(j+1)}
= X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)} = \cdots = X_{r}^{(j)}/ X_{r}^{(j+1)}$. \end{proof}
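For example, let $p=7$ and $a=3$, so that $p \leq r \equiv 3 \mod 6$. For $i=5$ and $j=2$ we have $[a-j]=1 \leq j \leq i$, so $X_{r-5}^{(2)}/X_{r-5}^{(3)} = X_{r-2}^{(2)}/X_{r-2}^{(3)}$, while for $i=2$ and $j=5$ we have $i < [a-j]=4 \leq j$, so $X_{r-2}^{(5)}/X_{r-2}^{(6)} = X_{r}^{(5)}/X_{r}^{(6)}$, which vanishes by \Cref{singular quotient X_{r}} below, since $5 \neq a$.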
\begin{corollary}\label{reduction corollary}
Let $p \leq r \equiv a ~\mathrm{mod}~ (p-1)$ with
$1 \leq a \leq p-1$ and $0 \leq i, j \leq p-1$
with $i \neq a$. Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $ j \neq i, [a-i]$, then we have
$X_{r-i}^{(j)}/ X_{r-i}^{(j+1)} = X_{r-(i-1)}^{(j)}/ X_{r-(i-1)}^{(j+1)}$.
\item Let $i'=\min \lbrace i, [a-i] \rbrace$. Then $X_{r-i} =
X_{r-(i-1)} + X_{r-i}^{(i')}$.
As a consequence, $X_{r-i} \subseteq X_{r-(i-1)} +V_{r}^{(i')}$.
\end{enumerate} \end{corollary}
\begin{proof}
The first assertion follows from \Cref{Structure X(1)} if $j=0$ and
the claim proved in \Cref{reduction}
if $j \geq 1$.
Thus, by the second isomorphism theorem, we have
\[
\frac{X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}} = \frac{X_{r-(i-1)}^{(j)}}{X_{r-(i-1)}^{(j+1)}}
= \frac{X_{r-(i-1)}^{(j)}+X_{r-i}^{(j+1)}}{X_{r-i}^{(j+1)}}, ~
\forall \; 0 \leq j < i' .
\]
So $X_{r-i}^{(j)} = X_{r-(i-1)}^{(j)}+X_{r-i}^{(j+1)} \subseteq
X_{r-(i-1)}+X_{r-i}^{(j+1)}$, for all $0 \leq j < i'$. Thus we have
\[
X_{r-i} \subseteq X_{r-(i-1)}+X_{r-i}^{(1)} \subseteq \cdots
\subseteq X_{r-(i-1)}+X_{r-i}^{(i')}.
\]
Since both $X_{r-(i-1)}$, $X_{r-i}^{(i')} \subseteq X_{r-i}$,
the second assertion follows from the inclusion above.
\end{proof}
By Lemma~\ref{reduction}, to determine $X_{r-i}^{(j)}/ X_{r-i}^{(j+1)}$, for all $0 \leq i$, $j \leq p-1$, it is enough to determine $X_{r-j}^{(j)}/ X_{r-j}^{(j+1)}$, $X_{r-[a-j]}^{(j)}/ X_{r-[a-j]}^{(j+1)}$ and $X_{r}^{(j)}/ X_{r}^{(j+1)} $, for all $1 \leq j \leq p-1$, as we already know the structure of $X_{r-i}/X_{r-i}^{(1)}$, by \Cref{Structure X(1)}.
We begin by determining $X_{r}^{(j)}/X_{r}^{(j+1)}$, for $0 \leq j \leq p-1$.
\begin{lemma}\label{star=double star}
Let $0 \leq r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$.
Then $X_{r}^{(1)} = X_{r}^{(2)} = \cdots = X_{r}^{(a)}$. \end{lemma}
\begin{proof}
If $a=1$, then there is nothing to prove. Assume $2 \leq a \leq p-1$.
If $X_{r}^{(1)} =\cdots = X_{r}^{(m)} \neq X_{r}^{(m+1)}$,
for some $1 \leq m \leq a-1$, then \cite[Lemma 4.6]{BG15} (which in fact holds for all $r \geq 0$)
implies that
$X_{r}^{(1)} = X_{r}^{(m)} \cong V_{p-a-1} \otimes D^{a}$ and
$X_{r}^{(m+1)}= (0)$.
Therefore $V_{p-a-1} \otimes D^{a} \cong X_{r}^{(m)}/X_{r}^{(m+1)}
\hookrightarrow V_{r}^{(m)}/V_{r}^{(m+1)} \hookrightarrow \operatorname{ind}_B^\Gamma(\chi_1^{m} \chi_2^{r-m})$,
by Lemma~\ref{induced and star}. But this is not possible by Lemma~\ref{Structure of induced}, as $1 \leq m \leq a-1$. Therefore
$X_{r}^{(1)} = X_{r}^{(a)}$. \end{proof}
For $r \equiv a$ mod $(p-1)$, by the above lemma we have $X_{r}^{(j)}/X_{r}^{(j+1)} = (0)$, for all $1 \leq j < a$. We next derive a necessary and sufficient condition under which $ X_{r}^{(a)}/X_{r}^{(a+1)}$ is non-zero. We will soon see that $X_{r}^{(j)}/X_{r}^{(j+1)} = (0)$
if $1 \leq j \leq p-1$ and $j \neq a$ (cf. \Cref{singular quotient X_{r}}). \begin{lemma}\label{quotient image}
Let $ p \leq r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$. Then
\begin{align*}
G_{r}(X,Y) := \sum_{\lambda \in \mathbb{F}_{p}} (\lambda X+Y)^{r} +
\delta_{a,p-1}X^{r}
\end{align*}
belongs to $ X_{r}^{(a)}$.
Further, $G_{r}(X,Y) \in V_{r}^{(p)} \Leftrightarrow G_{r}(X,Y) \in V_{r}^{(a+1)}
\Leftrightarrow
\binom{r}{a} \equiv 0 \mod p$. Consequently,
$X_{r}^{(a)}/
X_{r}^{(a+1)}$ is non-zero if and only if $\binom{r}{a} \not \equiv 0 \mod p$. \end{lemma}
\begin{proof}
By \Cref{Basis of X_r-i}, we see that $G_{r}(X,Y) \in X_{r}$. Note that
\begin{align}\label{G r expression}
G_{r}(X,Y) ~
& = ~ \sum_{l=0}^{r} \binom{r}{l} X^{r-l} Y^{l} \sum_{\lambda \in \mathbb{F}_{p}}
\lambda^{r-l} + \delta_{a,p-1} X^{r} \nonumber \\
& \overset{\eqref{sum fp}}{\equiv} ~
- \sum_{\substack{ 0 < l < r \\ l \equiv a~ \mathrm{mod}~ (p-1)} }
\binom{r}{l} X^{r-l} Y^{l} \mod p.
\end{align}
Clearly the coefficient of $X^{r}$, $Y^{r}$ in $G_{r}(X,Y)$ are zero. By
\cite[Lemma 2.5]{BG15}, we have
\begin{align*}
\sum_{\substack{ 0 < l < r \\ l \equiv a ~ \mathrm{mod}~
(p-1)} } \binom{r}{l} \equiv 0 \mod p.
\end{align*}
Thus $G_{r}(X,Y) \in X_{r} \cap V_{r}^{(1)}$ by \Cref{divisibility1},
whence $G_{r}(X,Y) \in X_{r}^{(a)}$ by \Cref{star=double star}.
This proves the first part.
If $G_{r}(X,Y) \in V_{r}^{(p)}$, then clearly $G_{r}(X,Y) \in V_{r}^{(a+1)}$
as $a \leq p-1$.
If $G_{r}(X,Y) \in V_{r}^{(a+1)}$, then the coefficient of $X^{r-a} Y^{a}$ in
$G_{r}(X,Y)$ is zero, i.e., $\binom{r}{a} \equiv 0 \mod p$. Next suppose that
$\binom{r}{a} \equiv 0 \mod p$. Then by Lucas' theorem, we get $r \equiv 0,1,
\ldots, a-1 \mod p$. This implies that the coefficients of $X^{r-a}Y^{a}$ and
$X^{p-1}Y^{r-(p-1)}$ in $G_{r}(X,Y)$ are zero. Since $a$ (resp. $r-(p-1)$) is
the only number between $1$ and $p-1$
(resp. $r-1$ and $r-(p-1)$)
which is congruent to $a$ mod $(p-1)$, we get $X^{p}$, $Y^{p} \mid G_{r}(X,Y)$.
Thus $G_{r}(X,Y)$ satisfies the condition (i) of \Cref{divisibility1} for $m=p$.
Moreover, by \Cref{binomial sum}, for $ 1 \leq a \leq n \leq p-1$, we have
\begin{align*}
\sum_{\substack{ 0 < l < r \\ l \equiv a ~ \mathrm{mod}~(p-1) } }
\binom{r}{l} \binom{l}{n} &=
\sum_{\substack{ 0 \leq l \leq r \\ l \equiv a ~ \mathrm{mod}~(p-1)}}
\binom{r}{l} \binom{l}{n} - \binom{r}{n} \\
& \equiv \binom{r}{n} \binom{[a-n]}{[a-n]} + \delta_{p-1,[a-n]} \binom{r}{n}
- \binom{r}{n} \\
& = \delta_{n,a} \binom{r}{a}
\equiv 0 \mod p,
\end{align*}
where in the second last step we used $[a-n]=p-1 \Leftrightarrow a=n$, as
$1 \leq a \leq n \leq p-1$.
Therefore $G_{r}(X,Y) \in V_{r}^{(p)}$ by \Cref{divisibility1}, as $G_{r}(X,Y)$
already belongs to $V_{r}^{(a)}$. \end{proof}
The next result describes all the quotients $X_{r}^{(m)}/X_{r}^{(m+1)}$, for $0 \leq m \leq p-1$. \begin{proposition}\label{singular quotient X_{r}}
Let $p \geq 3$, $p \leq r\equiv a \mod (p-1)$ with $1 \leq a \leq p-1$.
For $0 \leq m \leq p-1$, we have
\begin{align*}
\frac{X_{r}^{(m)}}{X_{r}^{(m+1)}} \cong \begin{cases}
V_{a}, & \mathrm{if} ~ m=0,\\
V_{p-1-a} \otimes D^{a}, & \mathrm{if}~ m=a~ \mathrm{and}~
r \equiv a,a+1, \ldots, p-1 ~\mathrm{mod}~ p,\\
0, & \mathrm{if}~
m=a ~ \mathrm{and}~ r \equiv 0,1, \ldots, a-1 ~\mathrm{mod}~ p, ~ \mathrm{or} ~ m \neq 0, a.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
If $m = 0$, we have $X_{r}/X_{r}^{(1)} \cong V_{a}$, by the exact sequence \eqref{Glover 4.5}, so assume $m \geq 1$.
Suppose
$r \equiv a, a+1, \ldots, p-1$ mod $p$. Then, by Lucas' theorem, we have
$\binom{r}{a} \not \equiv 0$ mod $p$, so by \Cref{quotient image},
we have $0 \neq G_{r}(X,Y) \in X_{r}^{(a)}/X_{r}^{(a+1)}$.
By \cite[Lemma 4.6]{BG15}, we have $X_{r}^{(1)} = (0) $ or $V_{p-1-a} \otimes D^{a}$.
Since $X_{r}^{(1)} \supseteq X_{r}^{(2)}
\supseteq \cdots \supseteq X_{r}^{(a)} \supsetneq X_{r}^{(a+1)}$ is a descending chain,
we see that $X_{r}^{(a)}/X_{r}^{(a+1)} = V_{p-1-a} \otimes D^{a}$ and
$X_{r}^{(m)}/X_{r}^{(m+1)} = 0$, for $m \geq 1$ and $m \neq a$. Next suppose
that $r \equiv 0,1, \ldots, a-1$ mod $p$. Then, by Lucas' theorem we have
$\binom{r}{a} \equiv 0$ mod $p$. If
$X_{r}^{(1)}=0$,
then $X_{r}^{(m)}/X_{r}^{(m+1)}=0$, for all $m \geq 1$. If $X_{r}^{(1)} \neq 0$,
then by \Cref{dimension formula for X_{r}}, we have $\dim X_{r} = p+1$.
Thus $G_{r}(X,Y) \neq 0$, as $G_{r}(X,Y)$ is a non-zero linear combination of the
basis elements of $X_{r}$ (cf. \Cref{Basis of X_r-i}).
By \Cref{quotient image}, we see that $G_{r}(X,Y)
\in X_{r}^{(p)}$. As $X_{r}^{(1)}$
is irreducible by \cite[Lemma 4.6]{BG15}, we obtain $X_{r}^{(1)} =
\cdots = X_{r}^{(p)}$,
whence $X_{r}^{(m)}/X_{r}^{(m+1)}=0$, for all $1 \leq m \leq p-1$. \end{proof}
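To illustrate \eqref{G r expression} and \Cref{singular quotient X_{r}} in the smallest case, take for instance $p=3$ and $r=5$, so that $a=1$. Then \eqref{G r expression} gives
\begin{align*}
G_{5}(X,Y) \equiv -\binom{5}{1} X^{4}Y - \binom{5}{3} X^{2}Y^{3} \equiv X^{4}Y + 2X^{2}Y^{3} \mod 3.
\end{align*}
The coefficient of $X^{r-a}Y^{a} = X^{4}Y$ is non-zero and $\binom{r}{a} = \binom{5}{1} \not\equiv 0 \mod 3$, in accordance with the second part of \Cref{quotient image}. Moreover $r \equiv 2 \mod 3$, so \Cref{singular quotient X_{r}} gives $X_{5}/X_{5}^{(1)} \cong V_{1}$, $X_{5}^{(1)}/X_{5}^{(2)} \cong V_{1} \otimes D$ and $X_{5}^{(2)}/X_{5}^{(3)} = 0$.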
\begin{corollary}\label{reduction corollary 2}
Let $0 \leq i \leq p-1$, $1 \leq j \leq p-1$ and
$p \leq r \equiv a ~\mathrm{mod}~
(p-1)$ with $1 \leq a \leq p-1$. If $j \neq a$ and $i < j \leq [a-j]$
or $i < [a-j] \leq j$, then $X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = (0)$. \end{corollary} \begin{proof}
By the third part of \Cref{reduction}, we have
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = X_{r}^{(j)}/X_{r}^{(j+1)}$. Since $j \neq 0$,
$a$, the corollary follows from \Cref{singular quotient X_{r}}. \end{proof}
We now introduce some notation. Let $0 \leq i \leq p-1$ and let $r \geq p$. Recall that, by \Cref{induced and successive}, the element $[ \begin{psmallmatrix} \lambda & 1 \\ 1 & 0 \end{psmallmatrix}, e_{\chi_{1}^{r-i} \chi_{2}^{i}}]$ maps to $X^{i} (\lambda X + Y)^{r-i}$ under $\psi_{i}: \operatorname{ind}_{B}^{\Gamma} (\chi_{1}^{r-i}\chi_{2}^{i}) \twoheadrightarrow X_{r-i}/X_{r-(i-1)}$.
Let \begin{align}\label{F i,r definition}
F_{i,r}(X,Y) ~ &:=~ \sum_{\lambda \in \mathbb{F}_{p}} \lambda^{[2i-a]} X^{i} (\lambda X + Y)^{r-i}
\in X_{r-i}. \end{align} By \Cref{Structure of induced} (ii) (with $l = [2i-a]$),
if $r-i \not \equiv i$ mod $(p-1)$, then
$\sum_{\lambda \in \mathbb{F}_{p}} \lambda^{[2i-a]} [ \begin{psmallmatrix} \lambda & 1 \\ 1 & 0 \end{psmallmatrix}, e_{\chi_{1}^{r-i} \chi_{2}^{i}}]$ generates $\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})$ as a $\Gamma$-module, so its image $F_{i,r}(X,Y)$ mod $X_{r-(i-1)}$ under $\psi_{i}$ generates $X_{r-i}/X_{r-(i-1)}$ as a $\Gamma$-module. Let
\begin{align}\label{G i,r definition}
G_{i,r} (X,Y)~ &:= ~ \sum_{\lambda \in \mathbb{F}_{p}} X^{i} (\lambda X + Y)^{r-i}
\in X_{r-i}. \end{align} By \Cref{Structure of induced} (i) (with $l=0$), we have $V_{[2i-a]} \otimes D^{a-i} \hookrightarrow \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$ is generated by $ \sum_{\lambda \in \mathbb{F}_{p}} [ \begin{psmallmatrix} \lambda & 1 \\ 1 & 0 \end{psmallmatrix}, e_{\chi_{1}^{r-i} \chi_{2}^{i}}]$ as a $\Gamma$-module, so \begin{align}\label{W definition}
W_{i,r} ~:= ~ \mathrm{Image ~ of ~} V_{[2i-a]} \otimes D^{a-i} ~
\mathrm{under} ~ \psi_{i} \end{align}
is generated by $G_{i,r}(X,Y)$ mod $X_{r-(i-1)}$. Clearly $X^{i}G_{r-i}(X,Y)-G_{i,r}(X,Y) = \delta_{[a-i],p-1} X^{r} \in X_{r} \subseteq X_{r-(i-1)}$, by \Cref{first row filtration}. By \Cref{quotient image} and \Cref{surjection1}, we see that \begin{align}\label{G i,r}
G_{i,r}(X,Y)+ \delta_{[a-i],p-1} X^{r} =
X^{i}G_{r-i}(X,Y) = \phi_{i}(X^{i} \otimes G_{r-i}(X,Y))
\in X_{r-i}^{([a-i])}. \end{align} Thus $G_{i,r}(X,Y) \in X_{r-i}^{([a-i])} + X_{r-(i-1)}$, whence $ W_{i,r} \subseteq (X_{r-i}^{([a-i])}+X_{r-(i-1)})/ X_{r-(i-1)}$.
\subsection{The case \texorpdfstring{$\boldsymbol{i \neq a, ~p-1}$}{•} } \label{Section i not a nor p - 1}
In this subsection, we determine the quotients $Q(i)$ when $i \neq a$, $p-1$, by determining the structure of $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ and $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)}$.
The structure of $Q(0)$ is well known \cite{Glover}, \cite{BG09}. We have \begin{lemma}\label{Structure Q(0)}
Let $p \leq r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
Then $Q(0) \cong V_{p-1-a} \otimes D^{a} $. \end{lemma} \begin{proof}
By the exact sequence \eqref{exact sequence Vr} (with $m=0$), we have
\[
0 \rightarrow V_{a} \rightarrow V_{r}/V_{r}^{(1)} \rightarrow V_{p-1-a}
\otimes D^{a} \rightarrow 0.
\]
By \Cref{singular quotient X_{r}}, we have $X_{r}/X_{r}^{(1)} \cong V_{a}$.
Now the lemma follows from the exact sequence \eqref{Q(i) exact sequence}. \end{proof}
\begin{lemma}\label{socle term singular}
Let $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$ and $ 1 \leq i < p-1$
with $i \leq [a -i]$. Suppose $r \geq [a-i](p+1)+p$. Then
$$ V_{[2i-a]} \otimes D^{a-i} \hookrightarrow X_{r-i}^{([a-i])}/
X_{r-i}^{([a-i]+1)} \Longleftrightarrow
\binom{r-i}{[a-i]} \not \equiv 0~ \mathrm{mod}~ p.$$ \end{lemma}
\begin{proof}
Let $j=[a-i]$. Then $r-i \equiv j$ mod $(p-1)$ and $1 \leq j \leq p-1$.
Let $F(X,Y):= G_{i,r}(X,Y) + \delta_{[a-i],p-1} X^{r}= X^{i} G_{r-i}(X,Y)$.
By \eqref{G i,r}, we have $F(X,Y) \in X_{r-i}^{(j)}$.
Assume $\binom{r-i}{j} \not \equiv 0$ mod $p$. By
\eqref{G r expression} we have
\begin{align}\label{F expression in socle term singular}
F(X,Y) =
-\sum_{\substack{0 < l < r-i \\ l \equiv j ~ \mathrm{mod}~(p-1)}}^{}
\binom{r-i}{l} X^{r-l}Y^{l}.
\end{align}
The coefficient of $X^{r-j}Y^{j}$ in $F(X,Y)$ equals $-\binom{r-i}{j}$,
which is non-zero modulo $p$ by assumption. Hence $F(X,Y)
\not \in V_{r}^{(j+1)}$, by \Cref{divisibility1} and
$0 \neq F(X,Y) \in X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$. We now show that
the image of $F(X,Y)$ is zero under the surjection
$V_{r}^{(j)}/V_{r}^{(j+1)} \twoheadrightarrow V_{p-1-[a-2j]} \otimes D^{a-j}$
in \eqref{exact sequence Vr}. By assumption $i \leq j$.
If $i <j $, then $r-j \equiv i \not \equiv j$
mod $(p-1)$, so the coefficient of $X^{j}Y^{r-j}$ in
$F(X,Y)$ equals zero, by \eqref{F expression in socle term singular}.
If $i=j$, the coefficient of $X^{j}Y^{r-j}$ in $F(X,Y)$ is still zero,
by \eqref{F expression in socle term singular}, as $r-j = r-i $.
By \Cref{binomial sum},
if $i \leq j$, we have
\begin{align*}
\sum_{\substack{0 < l < r-i \\ l \equiv j ~ \mathrm{mod}~(p-1)}}^{}
- \binom{r-i}{l} \binom{l}{j} &=
-\sum_{\substack{0 \leq l \leq r-i \\ l \equiv j ~ \mathrm{mod}~(p-1)}}^{}
\binom{r-i}{l} \binom{l}{j} + \binom{r-i}{j} \\
&\equiv -\delta_{p-1,[j-j]}\binom{r-i}{[a-i]} ~\mathrm{mod}~p \\
& = -\binom{r-i}{j}\\
& = \left(\text{the coefficient of } X^{r-j}Y^{j} \text{ in } F(X,Y)\right) -
(-1)^{j+1}\left(\text{the coefficient of } X^{j}Y^{r-j} \text{ in } F(X,Y)\right).
\end{align*}
Thus by \Cref{breuil map quotient}, we have the image of $F(X,Y)$ under
$V_{r}^{(j)}/V_{r}^{(j+1)} \twoheadrightarrow V_{p-1-[a-2j]} \otimes D^{a-j}$
is zero. Therefore $ 0 \neq F(X,Y) \in V_{[a-2j]} \otimes D^{j} \hookrightarrow
V_{r}^{(j)}/V_{r}^{(j+1)}$. This proves the \enquote*{if} part.
For the converse, assume $\binom{r-i}{j} \equiv 0$ mod $p$.
Thus by \Cref{quotient image}, we have $F(X,Y) \in X_{r-i}^{(p)}$.
Since $i-1 <i \leq [a-i] $,
by the third part of \Cref{reduction} (with $i$ there equal to $i-1$ and $j=[a-i]$),
we have
$X_{r-(i-1)}^{(j)} / X_{r-(i-1)}^{(j+1)} = X_{r}^{(j)} / X_{r}^{(j+1)}$.
By \Cref{singular quotient X_{r}}, we have $ X_{r}^{(j)} / X_{r}^{(j+1)} =(0)$,
as $j \neq a$ since $i \neq p-1$.
Hence by \eqref{Y i,j}, we have
\[
\frac{X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}} = Y_{i,j} \cong
\frac{ X_{r-i}^{(j)}+ X_{r-(i-1)}} { X_{r-i}^{(j+1)}+X_{r-(i-1)}}.
\]
Thus $ V_{[2i-a]} \otimes D^{a-i} $ is a JH factor of $X_{r-i}^{(j)}/
X_{r-i}^{(j+1)} $ if and only if
$ V_{[2i-a]} \otimes D^{a-i} $ is a JH factor of
$(X_{r-i}^{(j)}+ X_{r-(i-1)} )/ (X_{r-i}^{(j+1)}+X_{r-(i-1)})$.
By \Cref{Structure of induced} and \Cref{induced and successive},
we have a map
\[
V_{[2i-a]} \otimes D^{a-i} \hookrightarrow \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}
\chi_{2}^{i})
\overset{\psi_{i}}{\twoheadrightarrow } \frac{X_{r-i}}{X_{r-(i-1)}}
\twoheadrightarrow
\frac{ X_{r-i}} { X_{r-i}^{(j+1)}+X_{r-(i-1)}} .
\]
Let $\psi'$ denote the above composition. Since $W_{i,r}$
is the image of $V_{[2i-a]} \otimes D^{a-i}$ under $\psi_{i}$
and $G_{i,r}(X,Y)$ generates $W_{i,r}$,
we see that the image of $\psi '$ is also generated by $G_{i,r}(X,Y)$
mod $X_{r-i}^{(j+1)}+X_{r-(i-1)}$. Note that
\[
G_{i,r}(X,Y) = X^{i}G_{r-i}(X,Y) - \delta_{j,p-1}X^{r}
= F(X,Y) -\delta_{j,p-1}X^{r} \in X_{r-i}^{(p)}+X_{r-(i-1)}.
\]
So the map $\psi'$ is zero. By \Cref{Structure of induced},
the JH factor $V_{[2i-a]} \otimes D^{a-i}$ occurs with multiplicity
one in $ \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})$. Thus it follows that
$V_{[2i-a]} \otimes D^{a-i}$ is not a JH factor of
$X_{r-i}/ (X_{r-i}^{(j+1)}+X_{r-(i-1)})$, hence not a JH factor of its submodule
$(X_{r-i}^{(j)}+ X_{r-(i-1)} )/ (X_{r-i}^{(j+1)}+X_{r-(i-1)})$, and hence not of
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$. This proves
the \enquote*{only if} part.
\end{proof}
To state the results in Sections~\ref{section i < a-i}, \ref{section i = a-i}, \ref{section i > a-i}, we need to extend the definitions of the sets $\mathcal{I}(a,i)$ and $\mathcal{J}(a,i)$ introduced in \eqref{interval I for i < a-i} and \eqref{interval J first} respectively. For $1 \leq a$, $i \leq p-1$ and $i \neq a,$ $p-1$, we define $\mathcal{I}(a,i) \subseteq \lbrace 0,1, \ldots, p-1 \rbrace$, a subset of the congruence classes modulo $p$, by \begin{align}\label{interval I}
\mathcal{I}(a,i) =
\begin{cases}
\lbrace a-i+1, a-i+2, \ldots , a-1, a \rbrace, & \mathrm{if}~ i < a-i < a, \\
\lbrace a-i, a-i+1, \ldots , a-1, a\rbrace, & \mathrm{if}~ a-i \leq i < a, \\
\lbrace a, a+1, \ldots, [a-i]-1, [a-i]\rbrace^{c}, & \mathrm{if}~ a < i < [a-i], \\
\lbrace a, a+1, \ldots, [a-i]-2, [a-i]-1 \rbrace^{c}, & \mathrm{if}~ a < [a-i] \leq i,
\end{cases} \end{align} where $c$ in the superscript denotes the complement in
$\lbrace 0 , 1, \ldots, p-1 \rbrace$. Since any $p-1$ consecutive integers represent distinct congruence classes modulo $p$, we may view $ \mathcal{I}(a,i) $ as an interval. We leave it to the reader to check that the above listed cases are mutually exclusive and cover all possibilities. Also it can be checked that if $i\neq a$, $a+1$, then $\mathcal{I}(a,i-1) \subseteq \mathcal{I}(a,i)$, for $i \geq 2$.
Similarly, we define the subset $\mathcal{J}(a,i) \subseteq \lbrace 0,1, \ldots, p-1 \rbrace$ of the congruence classes modulo $p$
as follows \begin{align}\label{interval J}
\mathcal{J}(a,i) =
\begin{cases}
\lbrace a-i, a-i+1 , \ldots, a-2, a-1\rbrace, & \mathrm{if}~
i < a-i <a ,\\
\lbrace a-i-1, a-i, \ldots, a-2, a-1\rbrace, & \mathrm{if}~
a-i \leq i <a , \\
\lbrace a-1,a, \ldots, [a-i]-2, [a-i]-1 \rbrace^{c}, &\mathrm{if}~
a<i < [a-i], \\
\lbrace a-1, a, \ldots, [a-i]-3, [a-i]-2 \rbrace^{c}, &\mathrm{if}~
a<[a-i] \leq i,
\end{cases} \end{align} where $c$ in the superscript again denotes the complement in $\lbrace 0, 1, 2, \ldots, p-1 \rbrace$. As in the case of $\mathcal{I}(a,i)$, we think of $\mathcal{J}(a,i) $ as an interval. Again we have $\mathcal{J}(a,i-1) \subseteq \mathcal{J}(a,i)$, for all $2 \leq i \neq a$, $a+1$.
Note that $\mathcal{J}(a,i) $ is essentially the translate of $\mathcal{I}(a,i)$ to the `left' by 1 in the set of congruence classes modulo $p$. More precisely, we have
\begin{lemma} \label{I vs J} Let $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$, $1 \leq i \leq p-1$ with $i \neq a, p-1$, $r \equiv r_0 \mod p$ with $0 \leq r_0 \leq p-1$. \begin{enumerate}
\item[(i)] If $i<[a-i]$, then $r_{0} \in \mathcal{J}(a,i)$
$\Leftrightarrow$
$r_{0} \in \mathcal{I}(a,i) \cup \lbrace [a-i] \rbrace$ and
$r \not \equiv [a-i]+i$ mod $p$.
\item[(ii)] If $i \geq [a-i]$, then $r_{0} \in \mathcal{J}(a,i)$
$\Leftrightarrow$
$r_{0} \in \mathcal{I}(a,i) \cup \lbrace [a-i]-1 \rbrace$ and
$r \not \equiv [a-i]+i$ mod $p$. \end{enumerate}
\end{lemma} \begin{proof}
This follows from the definitions of the intervals above, noting that
$[a-i]+i \equiv a$ (resp. $a -1$) mod $p$ if $i<a$ (resp. $i>a$). \end{proof}
Next we give a criterion to check when the congruence class of $r$ modulo $p$ belongs to the intervals above in terms of binomial coefficients. \begin{lemma} \label{interval and binomial}
Let $r \geq p$ and $r \equiv r_{0} \mod p$ with $0 \leq r_{0} \leq p-1$.
For $1 \leq a$, $i \leq p-1$ with $i \neq a$, $p-1$, we have
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $i<[a-i]$, then $r_{0} \in \mathcal{I}(a,i)$ if and only if
$\binom{r-[a-i]-1}{i} \equiv 0 \mod p$.
\item If $i \geq [a-i]$, then $r_{0} \in \mathcal{I}(a,i)$ if and only if
$\binom{r-[a-i]}{i+1} \equiv 0 \mod p$.
\item If $i < [a-i]$, then $r_{0} \in \mathcal{J}(a,i)$ if and only if
$\binom{r-[a-i]}{i} \equiv 0 \mod p$.
\item If $i \geq [a-i]$, then $r_{0} \in \mathcal{J}(a,i)$ if and only if
$\binom{r-[a-i]+1}{i+1} \equiv 0 \mod p$.
\end{enumerate} \end{lemma} \begin{proof}
The lemma follows from Lucas' theorem and the definitions of the intervals
\eqref{interval I} and \eqref{interval J}. To illustrate the proof we prove (i).
By Lucas' theorem, we have $\binom{r-[a-i]-1}{i} \equiv 0 \mod p$
if and only if $r-[a-i]-1 \equiv 0, 1, \ldots, i-1 \mod p$
if and only if $r \equiv [a-i]+1, [a-i]+2, \ldots, [a-i]+i$.
If $i<a$, then $[a-i] =a-i$ and (i) follows in this case.
If $i > a$, then $[a-i] =p-1+a-i$ and (i) follows by reducing the congruence
classes modulo $p$, whenever they are at least $p$, so that they lie between $0$ and $p-1$.
The other assertions are proved in a similar manner. \end{proof}
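To illustrate these definitions and \Cref{interval and binomial}, take for instance $p=7$ and $a=3$. For $i=1$ we have $[a-i]=2>i$, $\mathcal{I}(3,1)=\lbrace 3 \rbrace$ and $\mathcal{J}(3,1)=\lbrace 2 \rbrace$; part (i) reads $\binom{r-3}{1} \equiv 0 \mod 7$, i.e., $r \equiv 3 \mod 7$, and part (iii) reads $\binom{r-2}{1} \equiv 0 \mod 7$, i.e., $r \equiv 2 \mod 7$. For $i=2$ we have $[a-i]=1 \leq i$, $\mathcal{I}(3,2)=\lbrace 1,2,3 \rbrace$ and $\mathcal{J}(3,2)=\lbrace 0,1,2 \rbrace$; part (ii) reads $\binom{r-1}{3} \equiv 0 \mod 7$, i.e., $r \equiv 1,2,3 \mod 7$, and part (iv) reads $\binom{r}{3} \equiv 0 \mod 7$, i.e., $r \equiv 0,1,2 \mod 7$. Note also that $\mathcal{J}(3,1) = \lbrace 2 \rbrace$ is obtained from $\mathcal{I}(3,1) \cup \lbrace [a-i] \rbrace = \lbrace 2,3 \rbrace$ by removing the class of $[a-i]+i = 3$, as predicted by \Cref{I vs J}.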
We now determine the structure of $Q(i)$, for $i \neq a$, $p-1$, by considering the cases $i < [a-i]$, $i =[a-i]$ and $i> [a-i]$.
\subsubsection{The case \texorpdfstring{$\boldsymbol{i < [a-i].}$}{}} \label{section i < a-i} In this subsection, we determine the structure of the quotients $Q(i)$, for all $1 \leq i < p-1$,
such that $i \not \equiv r$ mod $(p-1)$ and $i < [a-i]$. Taking $j=i$ in diagram \eqref{commutative diagram} and using \Cref{reduction corollary} (ii), we see that the rightmost bottom entry there equals $Q(i-1)$. Thus, to determine the structure of $Q(i)$ in terms of $Q(i-1)$, it is enough to determine the quotient $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$. The structure of $Q(i)$ is described in terms of $Q(i-1)$ in \Cref{Structure of Q(i) if i<[a-i]}.
Before we proceed further, we note that for $1 \leq a,i \leq p-1$, we have $[a-i] < p-1$ if and only if $i \neq a$. Thus, the conditions $i \neq a$, $p-1$ and $i<[a-i]$ are equivalent to $1\leq i <[a-i] < p-1$.
\begin{lemma}\label{Large Cong class Quotient non zero}
Let $p \geq 3$, $ r \equiv a ~\mathrm{mod} ~ (p-1)$ with
$1 \leq a \leq p-1$, $r \equiv r_{0}~ \mathrm{mod}~p$ with
$0 \leq r_{0} \leq p-1$ and suppose $1 \leq i< [a-i] < p-1$.
If $i(p+1)+ p \leq r $ and $r_{0} \not \in \mathcal{I}(a,i)$, then
$X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}$ contains $V_{[a-2i]} \otimes D^{i}$ as a
$\Gamma$-module. \end{lemma}
\begin{proof}
Recall that $A(a,i,i,r)= \left( \binom{r-n}{m} \binom{[a-m-n]}{i-m}
\right)_{0 \leq m,n \leq i} $ (cf. \eqref{A(a,i,j,r) matrix}).
Under the above hypotheses,
we have $A(a,i,i,r)$ is invertible, by Corollary~\ref{A(a,i,j,r) invertible}.
Hence there exist $C_{0}, \ldots , C_{i} \in \mathbb{F}_{p} $ such that
$A(a,i,i,r) (C_{0}, \ldots,C_{i-1}, C_{i})^{t} =(0,\ldots,0,1)^{t} $,
i.e.,
\begin{align}\label{choice C_n for i<a-i}
\sum_{n=0}^{i} C_{n} \binom{r-n}{m}\binom{[a-m-n]}
{i-m} \equiv \delta_{i,m} \mod ~ p, ~\forall~ 0\leq m \leq i.
\end{align}
Consider the following polynomial
\begin{align}
F(X,Y)
& := \sum _{n=0}^{i} C_{n} \sum_{k \in \mathbb{F}_{p}^{\ast}}
k^{i+n-a} X^{n} (kX+Y)^{r-n} \nonumber \\
& \stackrel{\eqref{sum fp}}{=} - \sum_{n=0}^{i} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv i \mathrm{~mod}~ (p-1)}}
\binom{r-n}{l}X^{r-l} Y^{l} . \label{polynomial F in i < a-i}
\end{align}
Note that $F(X,Y) \in X_{r-i}$, by \Cref{Basis of X_r-i}.
We claim that $0 \neq F(X,Y) \in X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$.
Since $1 \leq i < p-1$, we have $0$, $1, \ldots, i-1 \not \equiv i
\mathrm{~mod}~(p-1)$, so the
coefficients of $X^{r}, \ldots, X^{r-(i-1)}Y^{i-1}$ in $F(X,Y)$ are zero.
Since $r- [a-i] \equiv i$ mod $(p-1)$ and $r-(r-[a-i]) =[a-i] < p-1$,
we have $ r-[a-i]+1$, $ \ldots, r-1, r \not \equiv i \mathrm{~mod}~(p-1)$,
so the coefficients
of $X^{[a-i]-1}Y^{r-[a-i]+1}, \ldots, XY^{r-1}, Y^{r}$ in $F(X,Y)$ are also zero.
As $i < [a-i] $,
we see that $F(X,Y)$ satisfies condition (i) of \Cref{divisibility1} for $m=i$.
Further, by \Cref{binomial sum}, for $ 0 \leq m \leq i$, we have
\begin{align*}
\sum_{n=0}^{i} C_{n} \sum_{\substack{0 \leq l \leq r-n
\\ l \equiv i \mathrm{~mod}~ (p-1)}} \binom{r-n}{l} \binom{l}{m}
& \equiv \sum_{n=0}^{i} C_{n} \binom{r-n}{m} \left( \binom{[a-m-n]}
{[i-m]} + \delta_{[p-1],[i-m]} \right) \mod ~ p \\
& \equiv \sum_{n=0}^{i} C_{n} \binom{r-n}{m}\binom{[a-m-n]}
{i-m} \mod ~ p \\
& \stackrel{\eqref{choice C_n for i<a-i}}{\equiv} \delta_{i,m} \mod ~ p,
\end{align*}
where the second last step is obvious if $m<i$, and if
$m =i$, then $\binom{[a-m-n]}{[i-m]} = \binom{[a-i-n]}{p-1}=0
$ as $1 \leq [a-i-n] = [ [a-i] -n]< p-1$.
Thus, by \Cref{divisibility1}, we have $F(X,Y) \in
V_{r}^{(i)}$ and $ F(X,Y) \not \in V_{r}^{(i+1)}$, whence
$0 \neq F(X,Y) \in X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$, so
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ is a non-zero submodule of $V_{r}^{(i)}/V_{r}^{(i+1)}$.
Since $i < [a-i]= [r-i]$, the sequence \eqref{exact sequence Vr}
with $m=i$ is non-split, so the socle of $V_{r}^{(i)}/V_{r}^{(i+1)}$ equals
$V_{[a-2i]} \otimes D^{i}$ and every non-zero submodule of $V_{r}^{(i)}/V_{r}^{(i+1)}$ contains it.
Therefore $V_{[a-2i]} \otimes D^{i}
\hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$. \end{proof}
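As a quick sanity check on the first step of the proof, take $i=1$ and $3 \leq a \leq p-1$, so that $1 = i < [a-i] = a-1 < p-1$. The matrix $A(a,1,1,r)$ then has entries $\binom{r-n}{m}\binom{[a-m-n]}{1-m}$ for $0 \leq m,n \leq 1$, namely $a$, $a-1$, $r$ and $r-1$, and its determinant is congruent to $r-a$ modulo $p$. It is therefore invertible precisely when $r \not\equiv a \mod p$, i.e., when $r_{0} \not\in \mathcal{I}(a,1) = \lbrace a \rbrace$, consistent with the use of Corollary~\ref{A(a,i,j,r) invertible} above.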
We next determine the quotient $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)}$, when $r \equiv [r-i]+i$ mod $p$. This result will be used in \Cref{exceptional case 1} and \Cref{medium i full}. \begin{lemma}\label{large a-i}
Let $p \geq 3$, $ r \equiv a ~\mathrm{mod}~ (p-1)$ with $1 \leq a \leq p-1$ and
$1 \leq i< [a-i] <p-1$. If $2p-4 \leq r \equiv [a-i]+i~\mathrm{mod}~ p$,
then
\[
\frac{X_{r-i}^{([a-i])}}{X_{r-i}^{([a-i]+1)}} = \frac{V_{r}^{([a-i])}}{V_{r}^{([a-i]+1)}}.
\] \end{lemma} \begin{proof}
Note that
the smallest positive integer satisfying $s \equiv [a-i]+i$ mod $p$
and $s \equiv a$ mod $(p-1)$ is $[a-i]+i \leq 2p-5$.
Since $r \geq 2p-4$, we have $r \geq (p-1)p+[a-i]+i \geq ([a-i]+1)p+[a-i]
= [a-i](p+1)+p$.
Recall that by \eqref{F i,r definition}, we have
\[
F_{i,r}(X,Y) = \sum_{\lambda \in \mathbb{F}_{p}} \lambda^{[2i-a]} X^{i} (\lambda X + Y)^{r-i}
\in X_{r-i}.
\]
Consider the following polynomial
\begin{align*}
F(X,Y) & : = (r-i+1) F_{i,r}(X,Y) - (r-2i+1) \sum_{k \in \mathbb{F}_{p}^{\ast}}
k^{2i-a-1} X^{i-1}(kX+Y)^{r-(i-1)} \\
& \stackrel{\eqref{sum fp}}{=} -
\sum_{\substack{0 \leq l \leq r-i \\ l \equiv i ~\mathrm{mod}~(p-1)}}
(r-i+1) \binom{r-i}{l} X^{r-l}Y^{l} \\
& \qquad \qquad \qquad \qquad +
\sum_{\substack{0 \leq l \leq r-i+1 \\ l \equiv i ~\mathrm{mod}~(p-1)}}
(r-2i+1)\binom{r-i+1}{l} X^{r-l}Y^{l}.
\end{align*}
Observe that $F(X,Y) \in X_{r-i}$, by \Cref{Basis of X_r-i}.
We claim that $ F(X,Y) \in X_{r-i}^{([a-i])}$
and generates $V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$.
Clearly the coefficient of $X^{r-i}Y^{i}$ in $F(X,Y)$ equals
\begin{align}\label{coeff X^r-i Y^i}
(r-2i+1) \binom{r-i+1}{i} - (r-i+1) \binom{r-i}{i} =0.
\end{align}
Also the coefficient of $X^{[a-i]}Y^{r-[a-i]}$ in $F(X,Y)$ equals
\[
(r-2i+1) \binom{r-i+1}{r-[a-i]} - (r-i+1) \binom{r-i}{r-[a-i]}
= \binom{r-i+1}{r-[a-i]} (r-2i+1 - [a-i]+i-1) = 0 ,
\]
as $r \equiv [a-i]+i $ mod $p$.
Since $i$ (resp. $r-[a-i]$) is the only number less than $p$
(resp. greater than $r-p$)
congruent to $i$ mod $(p-1)$, we see that
the coefficients of $X^{r},X^{r-1}Y, \ldots, X^{r-(p-1)}Y^{p-1}$ and
$Y^{r}, XY^{r-1}, \ldots , X^{p-1}Y^{r-(p-1)}$ in $F(X,Y)$ are zero.
Using \Cref{binomial sum} and Lucas' theorem,
one checks that, for $0 \leq m < [a-i]$,
we have
\begin{align*}
& \sum_{\substack{0 \leq l \leq r-i \\ l \equiv i ~\mathrm{mod}~ (p-1)}}
(r-i+1) \binom{r-i}{l} \binom{l}{m} -
\sum_{\substack{0 \leq l \leq r-i+1 \\ l \equiv i ~\mathrm{mod}~ (p-1)}}
(r-2i+1)\binom{r-i+1}{l} \binom{l}{m} \\
& ~~\equiv (r-i+1) \binom{r-i}{m} \binom{[a-i-m]}{[i-m]}
- (r-2i+1) \binom{r-i+1}{m} \binom{[a-i+1-m]}{[i-m]} \\
& ~~\qquad \qquad \qquad + \delta_{p-1,[i-m]} \left[ (r-i+1) \binom{r-i}{m}
- (r-2i+1) \binom{r-i+1}{m} \right]~\mathrm{mod}~p \\
& ~~ = \binom{r-i+1}{m} \left[ (r-i-m+1) \binom{[a-i]-m}{[i-m]}
- (r-2i+1) \binom{[a-i]-m+1}{[i-m]} \right] \\
& ~~ \equiv \binom{r-i+1}{m} \left[ ([a-i]-m+1) \binom{[a-i]-m}{[i-m]}
- ([a-i]-i+1) \binom{[a-i]-m+1}{[i-m]} \right] \\
& \qquad \qquad \qquad \qquad ~\mathrm{mod}~p,
\end{align*}
where in the penultimate step we have used $0 \leq m < [a-i] < p-1$
and \eqref{coeff X^r-i Y^i} and in the last step we have used
$r \equiv [a-i]+i$ mod $p$.
If $0 \leq m < i$, then $[i-m] = i-m$ so the sum vanishes.
Since $i \neq a$, we see that $1 \leq [a-i] < p-1$.
If $i \leq m <[a-i]$, then $[i-m]= p-1+i-m > [a-i]+1 -m$, whence
by Lucas' theorem the sum vanishes.
Hence by \Cref{divisibility1}, we get $F(X,Y) \in X_{r-i}^{([a-i])} $.
Since the sequence \eqref{exact sequence Vr} for $m=[a-i]$ is non-split,
to show $F(X,Y)$ generates $V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$
it is enough to show the image of $F(X,Y)$ is non-zero
under the rightmost map of the sequence \eqref{exact sequence Vr},
for $m=[a-i]$.
Noting that $r \equiv [a-i]+i ~\mathrm{mod}~ p$ and $i < [a-i] < p-1$,
by \Cref{binomial sum} and Lucas' theorem, we have
\begin{align}\label{large a-i breuil map}
&\sum_{\substack{0 \leq l \leq r-i \\ l \equiv i ~\mathrm{mod}~ (p-1)}}
(r-i+1) \binom{r-i}{l} \binom{l}{[a-i]} -
\sum_{\substack{0 \leq l \leq r-i+1 \\ l \equiv i ~\mathrm{mod}~ (p-1)}}
(r-2i+1)\binom{r-i+1}{l}
\binom{l}{[a-i]} \nonumber \\
&\equiv (r-i+1) \binom{r-i}{[a-i]} \binom{p-1}{[i-[a-i]]} -
(r-2i+1) \binom{r-i+1}{[a-i]} \binom{1}{[i-[a-i]]} \mod p \nonumber \\
&\equiv ([a-i]+1) \left[ \binom{p-1}{p-1+i-[a-i]} -
([a-i]-i+1) \binom{1}{p-1+i-[a-i]} \right] \mod p \nonumber \\
& = ([a-i]+1) \binom{p-1}{p-1+i-[a-i]},
\end{align}
where in the last step we used that $p-1+i-[a-i] > i \geq 1$.
Since the coefficients of $X^{[a-i]}Y^{r-[a-i]}$ and $X^{r-[a-i]}Y^{[a-i]}$
in $F(X,Y)$ are zero, by \Cref{breuil map quotient} and
\eqref{large a-i breuil map}, we have
$$
F(X,Y) \equiv -([a-i]+1) \binom{p-1}{p-1+i-[a-i]} \theta^{[a-i]}
X^{r-[a-i](p+1)-[2i-a]} Y^{[2i-a]} \mod V_{r}^{([a-i]+1)}.
$$
Thus, by \Cref{Breuil map} and Lucas' theorem, the image
of $F(X,Y)$ under the rightmost map of the sequence
\eqref{exact sequence Vr}, for $m=[a-i]$, is non-zero,
as $1 \leq i < [a-i] < p-1 $. Hence $F(X,Y)$ generates
$V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$. \end{proof}
We next prove the converse of \Cref{Large Cong class Quotient non zero}. \begin{lemma}\label{exceptional case 1}
Let $p\geq 3$, $r \equiv a ~\mathrm{mod} ~ (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0}~\mathrm{mod} ~ p$ with $0 \leq r_{0} \leq p-1$
and $1 \leq i < [a-i] < p-1$.
If $i(p+1)+p \leq r$ and $r_{0} \in
\mathcal{I}(a,i)$, then
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} = (0)$. \end{lemma} \begin{proof}
Observe that $ [a-i]+i$ equals $a$ and $p-1+a$
if $ i < a$ and $i>a$ respectively.
Note that $a$ (resp. $a-1$) belongs to $\mathcal{I}(a,i)$ in the case
$i<a$ (resp. $i>a$).
We prove the lemma by considering the
cases $r \equiv [a-i]+i$ mod $p$ and
$r \not \equiv [a-i]+i$ mod $p$.
If $r \equiv [a-i]+i$ mod $p$, then by
\Cref{large a-i}, we have $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)} =
V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$. Since $i-1< i < [a-i]$
and $[a-i] \neq 0$, $a$,
by \Cref{reduction corollary 2},
we have $X_{r-(i-1)}^{([a-i])}/X_{r-(i-1)}^{([a-i]+1)} = (0)$.
Thus, by \eqref{Y i,j} (with $j=[a-i]$), we have
\[
\frac{X_{r-i}^{([a-i])}+X_{r-(i-1)}}{X_{r-i}^{([a-i]+1)}+X_{r-(i-1)}}
\cong Y_{i,[a-i]} = \frac{X_{r-i}^{([a-i])}}{X_{r-i}^{([a-i]+1)}} =
\frac{V_{r}^{([a-i])}}{V_{r}^{([a-i]+1)}}
\]
has dimension $p+1$.
Since $X_{r-(i-1)} \subseteq X_{r-i}^{([a-i]+1)}+X_{r-(i-1)}
\subseteq X_{r-i}^{([a-i])}+X_{r-(i-1)} \subseteq X_{r-i}$
and $\dim X_{r-i}/X_{r-(i-1)} \leq p+1$, it follows
that $X_{r-i} = X_{r-i}^{([a-i])}+X_{r-(i-1)}$. Since $i < [a-i]$,
we see that $X_{r-i} = X_{r-i}^{(i+1)}+X_{r-(i-1)}$.
Since $i \neq 0$, $a$ and $ i-1 < i < [a-i]$, by
\Cref{reduction corollary 2}, we have
$X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)}= (0)$, i.e., $X_{r-(i-1)}^{(i)} =
X_{r-(i-1)}^{(i+1)}$. Thus
\[
X_{r-i}^{(i)} = X_{r-i}^{(i)} \cap
(X_{r-(i-1)} + X_{r-i}^{(i+1)}) = X_{r-(i-1)}^{(i)} + X_{r-i}^{(i+1)}
= X_{r-(i-1)}^{(i+1)} + X_{r-i}^{(i+1)} = X_{r-i}^{(i+1)}.
\]
Now suppose $r \not \equiv [a-i]+i$ mod $p$. By hypothesis,
$r_{0} \in \mathcal{I}(a,i)$. If $i<a$, then $a-i+1 \leq
r_{0} \leq a-1$. If $i>a$, then $0 \leq r_{0} \leq a-2$ or
$p+a-i \leq r_{0} \leq p-1$.
Let
\begin{align*}
s=
\begin{cases}
(a-r_{0})p^{3}+r_{0}, &\mathrm{if}~ i<a ~\mathrm{so}~
a-i+1 \leq r_{0} \leq a-1, \\
(a-r_{0}-1)p^{3}+p+r_{0}, & \mathrm{if} ~ i>a ~\mathrm{and}
~ 0 \leq r_{0} \leq a-2, \\
(a-r_{0}+p-1)p^{3}+r_{0}, & \mathrm{if} ~ i>a ~\mathrm{and}
~p+a-i \leq r_{0} \leq p-1.
\end{cases}
\end{align*}
Observe that $s \geq p^{3}$, $\Sigma_{p}(s- r_{0}) \leq i-1$ and
$\Sigma_{p}(s-(i-1))=[a-i]+1 \leq p-1$.
Since $i<[a-i]$, we have $r_{0} \neq i-1$.
Then by \Cref{final X_r-i = X_r-j}, we have
$X_{s-i} = X_{s-(i-1)}$. Thus $X_{s-i}^{(i)}/X_{s-i}^{(i+1)}
= X_{s-(i-1)}^{(i)}/X_{s-(i-1)}^{(i+1)} = (0)$,
by \Cref{reduction corollary 2}, as
$ i-1< i <[a-i]$ and $i \neq 0$, $a$.
Since $r \equiv s$ mod $p(p-1)$, by \Cref{arbitrary quotient periodic}
we have
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)}= X_{s-i}^{(i)}/X_{s-i}^{(i+1)} = (0)$.
\end{proof} \begin{remark}\label{remark i< a-i singular zero}
The argument in the case $r \not \equiv [a-i]+i$ mod $p$ and
$r_{0} \in \mathcal{I}(a,i)$ in the above lemma also works in the case
$i =[a-i]$, as we didn't require $i$ to be strictly less than $[a-i]$. \end{remark} We are now ready to determine the $\Gamma$-modules $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ and $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)}$ in the case $1 \leq i < [a-i] < p-1$ and $i \neq a$, $p-1$. Before we state the result observe that, for $r \equiv a $ mod $(p-1)$ with $1 \leq a \leq p-1$, $1 \leq i <[a-i] <p-1$ and $j \in \lbrace i, [a-i] \rbrace$, we have
\begin{align}\label{dimension singular quotient}
\dim \left( \frac{X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}} \right) & =
\dim X_{r-i}^{(j)} - \dim X_{r-(i-1)}^{(j)} + \dim X_{r-(i-1)}^{(j+1)}
- \dim X_{r-i}^{(j+1)}\nonumber \\
&= \dim (X_{r-i}^{(j)}+X_{r-(i-1)}) -\dim (X_{r-i}^{(j+1)}+X_{r-(i-1)}), \end{align}
where the first equality follows from \Cref{reduction corollary 2}, since $i-1 <i <[a-i]$,
and the second equality follows from the dimension formula for sum of two vector subspaces.
Recall that $W_{i,r}$ is the image of $V_{[2i-a]} \otimes D^{a-i} \hookrightarrow \operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i})$ under the $\Gamma$-linear map $\psi_{i}:\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i}\chi_{2}^{i}) \rightarrow X_{r-i}/X_{r-(i-1)}$, as defined in \Cref{surjection1}. Also $W_{i,r} \subseteq (X_{r-i}^{([a-i])}+X_{r-(i-1)})/X_{r-(i-1)}$. By the second and fourth parts of \eqref{interval I}, for $1 \leq a,i \leq p-1$ with $i< [a-i] < p-1$, we have
\mathcal{I}(a,[a-i]) =
\begin{cases}
\lbrace i, i+1, \ldots, a-1, a \rbrace, & \mathrm{if}~ i<a, \\
\lbrace 0, 1, \ldots , a-2, a-1 \rbrace \cup
\lbrace i, i+1, \ldots , p-1 \rbrace,
& \mathrm{if}~ i>a.
\end{cases} \end{align} Thus, for $r \equiv r_{0}$ mod $p$ with $0 \leq r_{0} \leq p-1$, by Lucas' theorem, we have $\binom{r-i}{[a-i]} \equiv 0$ mod $p$ if and only if
$r_{0} \in \mathcal{I}(a,[a-i])$ and $r \not \equiv [a-i]+i $ mod $p$.
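For instance, for $p=7$, $a=3$ and $i=1$ we have $[a-i]=2$, $\mathcal{I}(3,2)=\lbrace 1,2,3 \rbrace$ and $[a-i]+i=3$, so the above says that $\binom{r-1}{2} \equiv 0 \mod 7$ if and only if $r \equiv 1,2 \mod 7$, as one checks directly using Lucas' theorem.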
\begin{proposition}\label{singular quotient i < [a-i]}
Let $p\geq 3$, $ r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0}\leq p-1$
and $1 \leq i <[a-i]<p-1$.
For $ j \in \lbrace i, [a-i] \rbrace$ and $r \geq j(p+1)+p$, we have
\begin{align*}
\frac{X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}} \cong
\begin{cases}
V_{[a-2j]} \otimes D^{j}, & \mathrm{if} ~ r_{0} \not \in \mathcal{I}(a,j), \\[3pt]
V_{r}^{(j)}/V_{r}^{(j+1)}, & \mathrm{if}~ j= [a-i] ~
\mathrm{and} ~ r \equiv [a-i]+i ~\mathrm{mod}~p, \\[3pt]
(0), &\mathrm{otherwise.}
\end{cases}
\end{align*} \end{proposition}
\begin{proof}
We consider the cases $j=i$ and $j=[a-i]$ separately.
\textbf{Case} $\boldsymbol{j=i}$:
If $r_{0} \not \in \mathcal{I}(a,i)$, then by \Cref{Large Cong class Quotient non zero}, we have $V_{[a-2i]} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$. To prove the proposition in this case, we need to show that the inclusion is an isomorphism. Note that by \eqref{dimension singular quotient} and $W_{i,r} \subseteq (X_{r-i}^{([a-i])}+X_{r-(i-1)})/X_{r-(i-1)} \subseteq (X_{r-i}^{(i+1)}+X_{r-(i-1)})/X_{r-(i-1)}$ (as $[a-i] \geq i+1$), we have
\begin{align*}
\dim \left( \frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}} \right)
& \leq \dim X_{r-i} - \dim ( X_{r-i}^{(i+1)}+X_{r-(i-1)} ) \\
& = \dim \left( \frac{X_{r-i}}{X_{r-(i-1)}} \right) -
\dim \left( \frac{X_{r-i}^{(i+1)}+X_{r-(i-1)}}{X_{r-(i-1)}}\right) \\
& \leq \dim \left( \frac{X_{r-i}/X_{r-(i-1)}}{W_{i,r}}\right)
\leq p-[2i-a] \leq p,
\end{align*}
where the second last inequality holds because $\psi_{i}$ induces a surjection from
$\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})/ (V_{[2i-a]} \otimes D^{a-i})$
onto $(X_{r-i}/X_{r-(i-1)})/W_{i,r}$.
As the exact sequence \eqref{exact sequence Vr}
does not split for $m=i$, we have
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} \cong V_{[a-2i]} \otimes D^{i}$.
If $r_{0} \in \mathcal{I}(a,i)$, then by \Cref{exceptional case 1}, we have
$ X_{r-i}^{(i)}/X_{r-i}^{(i+1)} =0$.
\textbf{Case} $\boldsymbol{j=[a-i]}$:
If $r_{0} \not \in \mathcal{I}(a,j)$, then from above we have
$\binom{r-i}{[a-i]} \not \equiv 0$ mod $p$. Thus,
by \Cref{socle term singular}, we have
$V_{[2i-a]} \otimes D^{a-i}
\hookrightarrow X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$. As $\mathcal{I}(a,i)
\subseteq \mathcal{I}(a, [a-i])= \mathcal{I}(a,j)$,
by the case $j=i$, we have $X_{r-i}^{(i)}/X_{r-i}^{(i+1)} \cong V_{[a-2i]}
\otimes D^{i}$. Therefore
\begin{align*}
p+1 & = [a-2i]+1 + [2i-a]+1 \\
& \leq \dim \left( \frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}} \right) +
\dim \left( \frac{X_{r-i}^{(j)}}{X_{r-i}^{(j+1)}} \right) \\
& \leq
\dim \left( \frac{X_{r-i}^{(i)}+ X_{r-(i-1)}}{X_{r-i}^{(i+1)}+X_{r-(i-1)}} \right)
+ \dim \left( \frac{X_{r-i}^{(j)}+ X_{r-(i-1)}}{X_{r-i}^{(j+1)}+X_{r-(i-1)}} \right)
~~ \mathrm{by}~ \eqref{dimension singular quotient} \\
& \leq \dim\left( \frac{X_{r-i}^{(i)}+X_{r-(i-1)}}{X_{r-i}^{(j+1)}+X_{r-(i-1)}}\right)
\leq \dim\left( \frac{X_{r-i}}{X_{r-(i-1)}}\right) \leq p+1,
\end{align*}
where the inequalities on the last line follow from the fact
$X_{r-(i-1)} \subseteq
X_{r-i}^{(j+1)}+X_{r-(i-1)} \subseteq
X_{r-i}^{(j)}+X_{r-(i-1)} \subseteq
X_{r-i}^{(i+1)}+X_{r-(i-1)} \subseteq X_{r-i}^{(i)} + X_{r-(i-1)} \subseteq X_{r-i}$
and \Cref{induced and successive}. Therefore
dim $X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = [2i-a]+1$ and
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} \cong V_{[2i-a]} \otimes D^{a-i}
=V_{[a-2j]} \otimes D^{j}$.
If $r \equiv [a-i]+i$ mod $p$, then by \Cref{large a-i}, we have
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = V_{r}^{(j)}/V_{r}^{(j+1)}$.
So assume $r_{0} \in \mathcal{I}(a,j)$ and $ r \not \equiv [a-i]+i $
mod $p$.
Again from above, we get
$\binom{r-i}{[a-i]} \equiv 0$ mod $p$.
Thus, by \Cref{socle term singular}, we have
$ V_{[2i-a]} \otimes D^{a-i}
\not \hookrightarrow X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$.
Since the exact sequence \eqref{exact sequence Vr} doesn't split for $m=j$
and $ V_{[2i-a]} \otimes D^{a-i} $
is the socle of $V_{r}^{(j)}/V_{r}^{(j+1)}$,
we see that $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}=(0)$. \end{proof}
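For instance, for $p=7$, $a=3$, $i=1$ and $r \equiv 3 \mod 6$ with $r \geq 23$, \Cref{singular quotient i < [a-i]} gives $X_{r-1}^{(1)}/X_{r-1}^{(2)} \cong V_{1} \otimes D$ if $r \not\equiv 3 \mod 7$ and $X_{r-1}^{(1)}/X_{r-1}^{(2)} = (0)$ if $r \equiv 3 \mod 7$, while $X_{r-1}^{(2)}/X_{r-1}^{(3)} \cong V_{5} \otimes D^{2}$ if $r \equiv 0,4,5,6 \mod 7$, $X_{r-1}^{(2)}/X_{r-1}^{(3)} = V_{r}^{(2)}/V_{r}^{(3)}$ if $r \equiv 3 \mod 7$, and $X_{r-1}^{(2)}/X_{r-1}^{(3)} = (0)$ if $r \equiv 1,2 \mod 7$.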
\begin{theorem}\label{Structure of Q(i) if i<[a-i]}
Let $p \geq 3$, $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0}~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and let
$1 \leq i < [a-i] < p-1$. If
$i(p+1)+p \leq r$, then we have the following exact sequence
\begin{align*}
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0,
\end{align*}
where $W= V_{p-1-[a-2i]} \otimes D^{a-i}$ if $r_{0} \not\in \mathcal{I}(a,i)$ and
$W= V_{r}^{(i)}/V_{r}^{(i+1)}$ if $r_{0} \in \mathcal{I}(a,i)$.
\end{theorem} \begin{proof}
Since $i<[a-i]$, by \Cref{reduction corollary} (ii), we have $ X_{r-i}+ V_{r}^{(i)} = X_{r-(i-1)}+V_{r}^{(i)}$.
Thus $V_{r} / (X_{r-i}+ V_{r}^{(i)}) = V_{r}/(X_{r-(i-1)}+V_{r}^{(i)})=Q(i-1)$. Taking $j=i$ in \eqref{commutative diagram} and noting that the rightmost bottom entry is $Q(i-1)$, we get $$
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0, $$ where $W$ is the quotient of $V_{r}^{(i)}/V_{r}^{(i+1)}$ by $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$. Now it follows from \Cref{singular quotient i < [a-i]} and the exact sequence \eqref{exact sequence Vr} that $W= V_{p-1-[a-2i]} \otimes D^{a-i}$ if $r_{0} \not\in \mathcal{I}(a,i)$ and $W= V_{r}^{(i)}/V_{r}^{(i+1)}$ if $r_{0} \in \mathcal{I}(a,i)$. This finishes the proof.
\end{proof}
If $1 \leq i < a $ in the theorem above, then $i-1$ also satisfies the hypotheses if it is positive.
Thus repeated application of the above theorem gives the structure of $Q(i)$ in terms of $Q(0)$.
But if $i>a$, then $i-1$ satisfies the hypotheses provided $i-1 > a$. Thus
in the case $i>a$, the theorem above determines the structure of $Q(i)$
modulo the structure of $Q(a)$. The structure of $Q(a)$ will be determined in
$\S \ref{section i = a}$. This will give the structure of $Q(i)$ in all cases when $i < [a-i]$ and $i \neq a$, $p-1$.
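For instance, if $1 \leq i < a-i$, $r \geq i(p+1)+p$ and $r_{0} \not\in \mathcal{I}(a,i)$, then $r_{0} \not\in \mathcal{I}(a,l)$ for all $1 \leq l \leq i$, by the inclusions $\mathcal{I}(a,l-1) \subseteq \mathcal{I}(a,l)$ noted after \eqref{interval I}, and repeated application of \Cref{Structure of Q(i) if i<[a-i]} together with \Cref{Structure Q(0)} shows that the JH factors of $Q(i)$ are $V_{p-1-a+2l} \otimes D^{a-l}$ for $0 \leq l \leq i$.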
\subsubsection{ The case \texorpdfstring{$ \boldsymbol{i = [a-i].}$}{}} \label{section i = a-i}
In this subsection, we determine the structure of $Q(i)$ in the case $i = [a-i]$ and $i \neq a $, $p-1$. Assume $r \geq i(p+1)+p$. Taking $j=i$ in \eqref{commutative diagram} and using \Cref{reduction corollary} (ii), we see that the rightmost bottom entry equals $Q(i-1)$. Thus to determine the structure of $Q(i)$ in terms of $Q(i-1)$ it is enough to determine $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$.
By the exact sequence \eqref{exact sequence Vr}, we have $V_{r}^{(i)}/V_{r}^{(i+1)} \cong V_{0} \otimes D^{i} \oplus V_{p-1} \otimes D^{i}$. By \Cref{socle term singular}, we have $V_{p-1} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ if and only if $\binom{r-i}{i} \not \equiv 0$ mod $p$. So to describe $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ completely we are reduced to determining necessary and sufficient conditions under which $V_{0} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$.
The next two lemmas deal with this question.
Recall that, by the second and fourth parts of \eqref{interval J}, for $1 \leq a,i \leq p-1$ with $1 \leq [a-i] \leq i < p-1$, we have \begin{align}\label{interval J for i > a-i}
\mathcal{J}(a,i) =
\begin{cases}
\lbrace a-i-1, a-i, \ldots, a-1 \rbrace, & \mathrm{if}~i <a, \\
\lbrace 0,1, \ldots , a-2 \rbrace \cup
\lbrace p-2+a-i, p-1+a-i, \ldots, p-1 \rbrace,
& \mathrm{if}~i > a.
\end{cases} \end{align}
\begin{lemma}\label{a=2i 1-dim is JH factor} Let $p \geq 3$, $r \equiv a ~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$, $r \equiv r_{0}~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$ and let $1 \leq i < p-1$ with $i=[a-i]$. If $i(p+1)+p \leq r $ and $r_{0} \not \in \mathcal{J}(a,i)
\smallsetminus \lbrace i \rbrace$, then $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ contains $V_{0} \otimes D^{i}$. \end{lemma}
\begin{proof}
Since $r \equiv 2i$ mod $(p-1)$ we see that the exact sequence
\eqref{exact sequence Vr} is split for $m=i$.
So to prove the lemma it is enough to exhibit a polynomial
$F(X,Y) \in X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ whose projection under
$V_{r}^{(i)}/V_{r}^{(i+1)} \twoheadrightarrow V_{0} \otimes D^{i}$
is non-zero.
Let
\begin{align*}
A' & = \left( \binom{r-n}{m} \binom{[a-m-n]}{i-m}
\right)_{0 \leq m,n \leq i-1}, \\
\mathbf{v} & =\left( \binom{i}{0}, \binom{i}{1}, \ldots , \binom{i}{i-1} \right), \\
\mathbf{w}&= \left( \binom{r}{r-i}, \binom{r-1}{r-i}, \ldots , \binom{r-(i-1)}
{r-i} \right).
\end{align*}
Note that $[a-i] =i$ and
$r_{0} \not \in \mathcal{J}(a,i) \smallsetminus \lbrace i \rbrace$.
If $r \not\equiv 2i \mod p$, by \Cref{block matrix invertible},
the matrix
\[
A =
\left(
\begin{array}{c|c}
A' & \mathbf{v}^{t} \\
\hline
\mathbf{w} & 0
\end{array}
\right) ~~\mathrm{is ~ invertible},
\]
so we may choose $C_{0}, \ldots , C_{i} \in \mathbb{F}_{p} $ such that $A (C_{0}, \ldots, C_{i})^{t} =
(0,\ldots, 0, 1)^{t}$, i.e.,
\begin{align}
\sum\limits_{n=0}^{i-1} C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{i-m} + C_{i} \binom{i}{m} &=
0, ~ \forall ~ 0 \leq m \leq i-1, \label{1-dim choice C i eq 1} \\
\sum\limits_{n=0}^{i-1} C_{n} \binom{r-n}{r-i} &=1 \label{1-dim choice C i eq 2}.
\end{align}
If $r \equiv 2i \mod p$, we take $C_0 = C_1 = \cdots = C_{i-2} = 0$, $C_{i-1} = (i+1)^{-1}$ and $C_i = - 1$.
Then \eqref{1-dim choice C i eq 1} and \eqref{1-dim choice C i eq 2} still hold.
Indeed, if $0 \leq m \leq i-1$, by Lucas' theorem, we have
\begin{align*}
C_{i-1} \binom{r-(i-1)}{m} \binom{[a-m-(i-1)]}{i-m}
= (i+1)^{-1} \binom{i+1}{m} \binom{i+1-m}{i-m}
= \binom{i}{m}
\end{align*}
and
\[
\sum\limits_{n=0}^{i-1} C_{n} \binom{r-n}{r-i}
= C_{i-1} \binom{r-i+1}{r-i} = (i+1)^{-1} (r-i+1)
\equiv 1 \mod p.
\]
Let
\begin{align*}
F(X,Y) & := C_{i} X^{r-i}Y^{i} -\sum\limits_{n=0}^{i-1} C_{n}
\sum_{k \in \mathbb{F}_{p}^{\ast}} k^{i+n-a} X^{n} (k X +Y)^{r-n} \\
& \stackrel{\eqref{sum fp}}{=} C_{i}X^{r-i}Y^{i} +
\sum_{n=0}^{i-1} C_{n} \sum_{\substack{0 \leq l \leq r-n \\ l \equiv i
~ \mathrm{mod}~ (p-1)}} \binom{r-n}{l} X^{r-l}Y^{l}.
\end{align*}
Clearly $F(X,Y) \in X_{r-i}$, by \Cref{Basis of X_r-i}. As above, since
$i$ (resp. $r-i$) is the smallest (resp. largest) number between $0$ and
$r$ congruent to $i$ mod $(p-1)$, we see that the coefficients of
$X^{r}, \ldots X^{r-(i-1)}Y^{i-1}$ and $X^{i-1}Y^{r-(i-1)}, \ldots, Y^{r}$
in $F(X,Y)$ are zero. Hence $F(X,Y)$ satisfies condition (i)
in \Cref{divisibility1} for $m=i$.
For $0 \leq m \leq i-1$, by \Cref{binomial sum}, we have
\begin{align*}
C_{i} \binom{i}{m}+\sum\limits_{n=0}^{i-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv i ~ \mathrm{mod}~ (p-1)}}
\binom{r-n}{l} \binom{l}{m} & \; \equiv C_{i} \binom{i}{m} +
\sum_{n=0}^{i-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m} \\
& \stackrel{\eqref{1-dim choice C i eq 1}}{\equiv} 0 \mod p.
\end{align*}
Hence by \Cref{divisibility1}, we have $F(X,Y) \in X_{r-i}^{(i)}$.
Clearly the coefficient of
$X^{r-i}Y^{i}$ in $F(X,Y)$ is $C_{i}+\sum_{n=0}^{i-1}
C_{n} \binom{r-n}{i} $. Also, the coefficient of $X^{i}Y^{r-i}$ in $F(X,Y)$
is $\sum_{n=0}^{i-1} C_{n} \binom{r-n}{r-i} =1$, by
\eqref{1-dim choice C i eq 2}.
By \Cref{binomial sum}, we have
\[
C_{i} \binom{i}{i}+ \sum\limits_{n=0}^{i-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv i ~ \mathrm{mod}~ (p-1)}}
\binom{r-n}{l} \binom{l}{i} \equiv C_{i} +
\sum\limits_{n=0}^{i-1} C_{n} \binom{r-n}{i} \mod p,
\]
which also equals the coefficient of $X^{r-i}Y^{i}$ in $F(X,Y)$.
Thus, by \Cref{breuil map quotient} (with $m=i$), we have
\begin{align*}
F(X,Y) \equiv (-1)^{i+1} \theta^{i} X^{r-i(p+1)-(p-1)}Y^{p-1}
~~\mathrm{mod} ~ V_{r}^{(i+1)},
\end{align*}
up to terms of the form $\theta^{i} X^{r-i(p+1)}$, $\theta^{i} Y^{r-i(p+1)}$.
It follows from \Cref{Breuil map}
that the image of $F(X,Y)$ in $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}
\hookrightarrow V_{r}^{(i)}/V_{r}^{(i+1)} \twoheadrightarrow
V_{0} \otimes D^{i}$ equals $(-1)^{i+1} \neq 0$.
This completes the proof of the lemma. \end{proof}
We next prove the converse of the above lemma. Recall, by \eqref{F i,r definition}, that for $i=[a-i]$, we have $F_{i,r}(X,Y) = \sum_{k \in \mathbb{F}_{p}}^{} k^{[2i-a]} X^{i}(kX+Y)^{r-i} = \sum_{k \in \mathbb{F}_{p}}^{} k^{p-1} X^{i}(kX+Y)^{r-i}$. \begin{lemma}\label{a=2i 1-dim is not JH}
Let $p\geq3$, $r \equiv a ~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and
let $1 \leq i < p-1$ with $i=[a-i]$. If $ r \geq i(p+1)+p$ and
$r_{0} \in \mathcal{J}(a,i)
\smallsetminus \lbrace i \rbrace$, then
$V_{0} \otimes D^{i} \not \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$. \end{lemma}
\begin{proof}
Since $[a-i]=i$, by \eqref{interval J for i > a-i}
and the definition of $ \mathcal{I}(a,i)$, we have
$ r_{0} \in \mathcal{J}(a,i) \smallsetminus \lbrace i,i-1 \rbrace$
implies that $r_{0} \in \mathcal{I}(a,i)$ and $ r \not \equiv [a-i]+i $ mod $p$.
So if $r_{0} \in \mathcal{J}(a,i) \smallsetminus \lbrace i,i-1 \rbrace$,
then by \Cref{remark i< a-i singular zero}, we have
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} =(0)$. So assume $r \equiv i-1$ mod $p$.
Since $i-1 < i =[a-i] < p-1$, by \Cref{reduction corollary 2}
(with $i$ there equal to $i-1$ and $j = i$),
we have $X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)} =0 $. Thus,
by \eqref{Y i,j}, we have
\[
\frac{X_{r-i}^{(i)}+X_{r-(i-1)}}{X_{r-i}^{(i+1)}+X_{r-(i-1)} }
\cong Y_{i,i} \cong \frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}}.
\]
By \Cref{reduction corollary} (ii), we have
$ X_{r-i} =X_{r-(i-1)} +X_{r-i}^{(i)} $.
Let $\chi = \chi_{1}^{r-i} \chi_{2}^{i}$. Thus, by
\Cref{induced and successive}, we have
\begin{align*}
\operatorname{ind}_{B}^{\Gamma}(\chi) \overset{\psi_{i}}{\twoheadrightarrow}
\frac{X_{r-i}}{X_{r-(i-1)}} = \frac{X_{r-(i-1)} +X_{r-i}^{(i)}}{X_{r-(i-1)}}
\twoheadrightarrow \frac{X_{r-i}^{(i)}+X_{r-(i-1)}}{X_{r-i}^{(i+1)}+X_{r-(i-1)} }
\cong \frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}} \hookrightarrow
\frac{V_{r}^{(i)}}{V_{r}^{(i+1)}}.
\end{align*}
By \Cref{Structure of induced} and \Cref{induced and star}, we have
$ V_{0} \otimes D^{i} \oplus V_{p-1} \otimes D^{i} \cong
\operatorname{ind}_{B}^{\Gamma}(\chi) \cong V_{r}^{(i)}/V_{r}^{(i+1)} $. Thus,
$V_{0} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$
if and only if the map
\[
V_{0} \otimes D^{i} \hookrightarrow \operatorname{ind}_{B}^{\Gamma}(\chi)
\rightarrow V_{r}^{(i)}/V_{r}^{(i+1)}
\twoheadrightarrow V_{0} \otimes D^{i}
\]
induced by the composition above is non-zero.
By \Cref{Structure of induced} (ii) (for $l=p-1$),
$\sum_{\lambda\in \mathbb{F}_{p}}^{} \lambda^{p-1} \left[ \begin{psmallmatrix}
\lambda & 1 \\ 1 & 0 \end{psmallmatrix} , e_{\chi} \right]$ is a basis element of
$V_{0} \otimes D^{i} \hookrightarrow \operatorname{ind}_{B}^{\Gamma}(\chi) $. By
\Cref{induced and successive}, we see that
$\psi_{i}(\sum_{\lambda\in \mathbb{F}_{p}}^{} \lambda^{p-1} \left[ \begin{psmallmatrix}
\lambda & 1 \\ 1 & 0\end{psmallmatrix} , e_{\chi} \right])=F_{i,r}(X,Y)$.
So to prove the lemma it is enough to show that the image of $F_{i,r}(X,Y)$ is zero.
Since $[a-i] =i <p-1$, we have $[a-(i-1)] = [i+1] =i+1 >i$. Thus,
by Corollary~\ref{A(a,i,j,r) invertible},
we see that the matrix
$A(a,i-1,i,r) =
\left( \binom{r-n}{m} \binom{[a-m-n]}{i-m} \right)_{0 \leq m,n \leq i-1}$
is invertible if $r \equiv i-1$ mod $p$. So there exist
$C_{0}, \ldots$, $C_{i-1} \in \mathbb{F}_{p}$ such that
\begin{align}\label{choice C_i exceptional case a=2i}
\sum_{n=0}^{i-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m}
= \binom{r-i}{m}, ~~ \forall ~ 0 \leq m \leq i-1.
\end{align}
Let $C_{i} = -1$. Consider the following polynomial
\begin{align*}
F(X,Y) &:= F_{i,r}(X,Y) - \sum_{n=0}^{i-1} C_{n} \sum_{k \in \mathbb{F}_{p}^{\ast}}^{}
k^{i+n-a} X^{n} (k X +Y)^{r-n} \\
& = - \sum_{n=0}^{i} C_{n} \sum_{k \in \mathbb{F}_{p}^{\ast}}^{}
k^{i+n-a} X^{n} (k X +Y)^{r-n} \\
& \stackrel{\eqref{sum fp}}{\equiv} \sum_{n=0}^{i} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\
l \equiv i ~\mathrm{mod}~ (p-1)}}^{} \binom{r-n}{l} X^{r-l} Y^{l}
\mod p.
\end{align*}
Since $i$ (resp. $r-i$) is the smallest (resp. largest) between
$0$ and $r$ congruent to $i$ mod $(p-1)$, we see that
$X^{i}, Y^{i} \mid F(X,Y)$.
Further by \Cref{binomial sum}, for $0 \leq m \leq i-1$, we have
\begin{align*}
\sum_{n=0}^{i} C_{n} \sum_{\substack{0 \leq l \leq r-n \\
l \equiv i ~\mathrm{mod}~ (p-1)}}^{} \binom{r-n}{l} \binom{l}{m}
\equiv
\sum_{n=0}^{i-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m}
-\binom{r-i}{m} \overset{\eqref{choice C_i exceptional case a=2i}}{\equiv} 0
\mod p.
\end{align*}
Hence by \Cref{divisibility1}, we have $F(X,Y) \in V_{r}^{(i)}$.
Also note that by \Cref{Basis of X_r-i}, we have
$F(X,Y)-F_{i,r} (X,Y) \in X_{r-(i-1)}$, whence the images of
$F(X,Y)$, $F_{i,r}(X,Y)$ under
\begin{align*}
\frac{X_{r-i}}{X_{r-(i-1)}} = \frac{X_{r-(i-1)} +X_{r-i}^{(i)}}{X_{r-(i-1)}}
\twoheadrightarrow \frac{X_{r-i}^{(i)}+X_{r-(i-1)}}{X_{r-i}^{(i+1)}+X_{r-(i-1)} }
\cong \frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}} \hookrightarrow
\frac{V_{r}^{(i)}}{V_{r}^{(i+1)}} \twoheadrightarrow V_{0} \otimes D^{i}
\end{align*}
are the same. Since $F(X,Y) \in X_{r-i}^{(i)}$, the image of $F(X,Y)$
under the above composition is the same as the image of
$F(X,Y)$ under the last surjection
which by \Cref{breuil map quotient} equals zero as
\begin{align*}
\sum_{n=0}^{i} & C_{n} \sum_{\substack{0 \leq l \leq r-n \\
l \equiv i ~\mathrm{mod}~ (p-1)}}^{} \binom{r-n}{l} \binom{l}{i}
- \sum_{n=0}^{i} C_{n} \binom{r-n}{i} +
(-1)^{i+1} \sum_{n=0}^{i} C_{n} \binom{r-n}{r-i} \\
& \equiv C_{i} \binom{r-i}{i} -(-1)^{i} \sum_{n=0}^{i} C_{n} \binom{r-n}{r-i}
\mod p~ \mathrm{ (by~ \Cref{binomial sum}) } \\
& \equiv C_{i} \binom{p-1}{i} - (-1)^{i} C_{i}\binom{r-i}{r-i}
\equiv 0\mod p ~~ ( \text{by Lucas' theorem and}
~ r \equiv i-1 ~ \mathrm{mod}~ p).
\end{align*}
This proves the lemma.
\end{proof}
We are now ready to describe the quotient $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ when $i=[r-i]$. \begin{proposition}\label{singular i= [a-i]}
Let $p \geq 3$, $r \equiv a ~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and
let $1 \leq i < p-1$ with $i=[a-i]$. If $ r \geq i(p+1)+p$, then
\begin{align*}
\frac{X_{r-i}^{(i)}}{X_{r-i}^{(i+1)}} \cong
\begin{cases}
V_{r}^{(i)}/V_{r}^{(i+1)},
&\mathrm{if}~ r_{0} \not \in \mathcal{J}(a,i), \\
V_{0} \otimes D^{i}, & \mathrm{if}~ r_{0}=i, \\
V_{p-1} \otimes D^{i}, & \mathrm{if}~r_{0}=i-1, \\
(0), &\mathrm{otherwise}.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
Using $[a-i] = i$ and \eqref{interval J for i > a-i}, one checks that
$\binom{r-i}{i} \not \equiv 0$ mod $p$ if and only if
$r_{0} \not \in \mathcal{J}(a,i) \smallsetminus \lbrace i-1 \rbrace$.
Thus, by \Cref{socle term singular}, we have
$V_{p-1} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)} $
if and only if
$r_{0} \not \in \mathcal{J}(a,i) \smallsetminus \lbrace i-1 \rbrace$
if and only if $r_{0} \not \in \mathcal{J}(a,i)$ or $r_{0}= i-1$.
By Lemmas \ref{a=2i 1-dim is JH factor} and
\ref{a=2i 1-dim is not JH}, we see that
$V_{0} \otimes D^{i} \hookrightarrow X_{r-i}^{(i)}/X_{r-i}^{(i+1)} $
if and only if
$r_{0} \not \in \mathcal{J}(a,i) \smallsetminus \lbrace i \rbrace$
if and only if $r_{0} \not \in \mathcal{J}(a,i)$ or $r_{0}=i$.
Since the exact sequence \eqref{exact sequence Vr} splits for $m=i$,
the module $V_{r}^{(i)}/V_{r}^{(i+1)} \cong V_{p-1} \otimes D^{i} \oplus V_{0} \otimes D^{i}$ is semisimple, so
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ is isomorphic to the direct sum of those factors among $V_{p-1} \otimes D^{i}$ and $V_{0} \otimes D^{i}$ which it contains. The proposition follows immediately by putting these facts together. \end{proof} We now determine the structure of $Q(i)$ in the case $i=[a-i]$. \begin{theorem}\label{Structure of Q i=[a-i]}
Let $p \geq 3$, $r \equiv a ~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~ p$ with $0 \leq r_{0} \leq p-1$ and
let $1 \leq i < p-1$ with $i=[a-i]$. If $ r \geq i(p+1)+p$, then
\[
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0,
\]
where
\begin{align*}
W \cong
\begin{cases}
(0),
&\mathrm{if}~ r_{0} \not \in \mathcal{J}(a,i), \\
V_{p-1} \otimes D^{i}, & \mathrm{if}~ r_{0}=i, \\
V_{0} \otimes D^{i}, & \mathrm{if}~r_{0}=i-1, \\
V_{r}^{(i)}/V_{r}^{(i+1)}, &\mathrm{otherwise}.
\end{cases}
\end{align*} \end{theorem} \begin{proof}
Taking $j=i$ in the diagram \eqref{commutative diagram} and using
\Cref{reduction corollary} (ii), we see that the rightmost bottom entry there equals
$Q(i-1)$. Thus, we have
\[
0 \rightarrow W \rightarrow Q(i) \rightarrow Q(i-1) \rightarrow 0,
\]
where $W$ denotes the cokernel of the map
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} \hookrightarrow V_{r}^{(i)}/V_{r}^{(i+1)}$.
By the exact sequence \eqref{exact sequence Vr} (with $m=i$), we see that
$V_{r}^{(i)}/V_{r}^{(i+1)} \cong V_{0} \otimes D^{i} \oplus V_{p-1} \otimes D^{i}$.
Now the theorem follows from \Cref{singular i= [a-i]}. \end{proof}
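As an illustration, take $p=7$, $a=6$ and $i=3$, so that $i=[a-i]$, and let $r \geq 31$ with $6 \mid r$. Then $\mathcal{J}(6,3)=\lbrace 2,3,4,5 \rbrace$ by \eqref{interval J for i > a-i}, and \Cref{Structure of Q i=[a-i]} gives $W=(0)$ if $r \equiv 0,1,6 \mod 7$, $W \cong V_{6} \otimes D^{3}$ if $r \equiv 3 \mod 7$, $W \cong V_{0} \otimes D^{3}$ if $r \equiv 2 \mod 7$, and $W \cong V_{r}^{(3)}/V_{r}^{(4)}$ if $r \equiv 4,5 \mod 7$.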
The theorem can be used to give the complete structure of $Q(i)$, when $i = [a-i]$ and $i \neq a$, $p-1$, as follows. It already gives the structure of $Q(i)$ in terms of $W$ and $Q(i-1)$. If $i = 1$, we are done. Else $1 \leq i-1 < [a-(i-1)] = [a-i] + 1 \leq p-1$. If the last inequality is strict, we can use Theorem~\ref{Structure of Q(i) if i<[a-i]} to determine $Q(i-1)$, as explained at end of the \S \ref{section i < a-i}. If the last inequality is an equality, i.e., if $i-1 = a$, then the structure of $Q(a)$ will be determined in \S \ref{section i = a}. In either case, we obtain the structure of $Q(i)$.
\subsubsection{The case \texorpdfstring{$ \boldsymbol{i > [a-i].}$}{•}} \label{section i > a-i}
In this subsection, we determine the structure of the quotients $Q(i)$ in the case $i>[a-i]$ and $i \neq a$, $p-1$. As in Sections~\ref{section i < a-i} and \ref{section i = a-i}, we determine $Q(i)$ recursively. By \Cref{reduction corollary} (ii), we see that $X_{r-i} = X_{r-(i-1)} + X_{r-i}^{([a-i])} \supseteq X_{r-(i-1)} + X_{r-i}^{(i)}$. In most cases it turns out that the last containment is strict. Thus the natural choice $j=i$ in
diagram \eqref{commutative diagram}, which was taken in \S \ref{section i < a-i} and \S \ref{section i = a-i}, does not work here. But it turns out that the choice $j=[a-i]$ works, as we will show that $X_{r-i}+V_{r}^{([a-i])} = X_{r-([a-i]-1)}+V_{r}^{([a-i])}$, so that one can realize the rightmost bottom entry of diagram \eqref{commutative diagram} (applied with $j=[a-i]$) as $Q([a-i]-1)$. This reduces the computation of $Q(i)$ to \S \ref{section i < a-i} once one knows the
leftmost bottom module of diagram \eqref{commutative diagram} for $j=[a-i]$. In order to determine this module we need to determine $X_{r-i}^{([a-i])}/X_{r-i}^{(i+1)}$ as we already know the JH factors of $V_{r}^{([a-i])}/V_{r}^{(i+1)}$, by \Cref{Breuil map}. To do this we need to determine the successive quotients of the following ascending chain of modules \begin{align}\label{4.1.3 ascending chain}
X_{r-i}^{(i+1)} \subseteq X_{r-i}^{(i)} \subseteq \cdots \subseteq
X_{r-i}^{([a-i]+1)} \subseteq X_{r-i}^{([a-i])}. \end{align}
We start by determining the last quotient $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)}$ in the chain \eqref{4.1.3 ascending chain} for $[a-i]<i < p-1$.
For $1 \leq a,i \leq p-1$, with $1 \leq [a-i] < i < p-1$, it follows from the first and third parts of the definition of
$\mathcal{J}(a,i)$ (cf. \eqref{interval J}) and the fact that $[a-[a-i]] =i$, that \begin{align}\label{interval J for [a-i]}
\mathcal{J}(a,[a-i]) =
\begin{cases}
\lbrace i, i+1, \ldots, a-2,a-1 \rbrace, & \mathrm{if}~
[a-i]< i<a, \\
\lbrace 0,1, \ldots, a-2 \rbrace \cup
\lbrace i, i+1, \ldots , p-1 \rbrace, & \mathrm{if}~
a<[a-i]<i.
\end{cases} \end{align} \begin{lemma}\label{medium a-i full}
Let $p \geq 3$, $r \equiv a \mod (p-1) $ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0}\leq p-1$
and $1 \leq [a-i] < i < p-1$.
If $[a-i](p+1)+p \leq r $ and
$r_{0} \not \in \mathcal{J}(a,[a-i])$,
then $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)} =
V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$. \end{lemma}
\begin{proof}
Put $j=[a-i]$. Let
\begin{align*}
A &= \left( \binom{r-n}{m} \binom{[a-m-n]}{j-m}\right)_{0 \leq m,n \leq j-1} , \\
\mathbf{b} &=\left( \binom{r-i}{0}, \binom{r-i}{1}, \ldots,
\binom{r-i}{j-1} \right)^{t}.
\end{align*}
Note that $A= A(a,j-1,j,r)$, cf. \eqref{A(a,i,j,r) matrix}.
Note that $[a-(j-1)] = [i+1] =i+1 > j > j-1$. Thus,
by Corollary~\ref{A(a,i,j,r) invertible},
we see that $A$ is invertible if
$r_{0} \not \in \mathcal{I}(a,[a-i]-1)$
(if $[a-i]=1$, then $A= (a)$ which is trivially invertible).
From \eqref{interval J for [a-i]}
we see that $r_{0} \not \in \mathcal{J}(a,[a-i])$ and
$r \not \equiv [a-i]+i$ mod $p$ implies that
$r_{0} \not \in \mathcal{I}(a,[a-i]-1)$.
So, $A \mathbf{x} = \mathbf{b}$ has a solution if
$r_{0} \not \in \mathcal{J}(a,[a-i])$ and $r \not \equiv [a-i]+i$ mod $p$.
We claim that $A\mathbf{x} = \mathbf{b}$ has a
solution even if $r \equiv [a-i]+i =j+i \mod p$.
Indeed, by Lucas' theorem, we have
$\mathbf{b}= \big( \binom{j}{0}, \ldots, \binom{j}{m}, \ldots,
\binom{j}{j-1} \big)^{t}$ and since $[a-m-(j-1)]= [i+1-m]$,
we also have the last column of $A$ is equal to
$\big( \binom{i+1}{0} \binom{i+1}{j}, \ldots, \binom{i+1}{m} \binom{i+1-m}{j-m},
\ldots,\binom{i+1}{j-1}\binom{i+1-(j-1)}{j-(j-1)} \big)^{t}$.
As
$\binom{i+1}{m} \binom{i+1-m}{j-m} \binom{i+1}{j}^{-1} = \binom{j}{m}$,
we see that
$A(0, \ldots, 0, \binom{i+1}{j}^{-1}) ^{t}= \mathbf{b}$.
Therefore the linear system $A\mathbf{x} = \mathbf{b}$ has a solution
if $r_{0} \not \in \mathcal{J}(a,[a-i])$.
Let $C_{0}, \ldots, C_{[a-i]-1}$ be a solution to $A\mathbf{x} = \mathbf{b}$,
i.e.,
\begin{align}\label{Choice C_i for [a-i]<i}
\sum_{n=0}^{[a-i]-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{[a-i]-m}
= \binom{r-i}{m}, ~ \forall ~ 0 \leq m \leq [a-i]-1.
\end{align}
Let
\begin{align*}
F(X,Y) & =
X^{i}Y^{r-i}
+ \sum_{n=0}^{[a-i]-1} C_{n} \sum_{k \in \mathbb{F}_{p}^{\ast}} k^{n-i}
X^{n} (kX+ Y)^{r-n} \\
& \stackrel{\eqref{sum fp}}{\equiv} X^{i}Y^{r-i}
- \sum_{n=0}^{[a-i]-1} C_{n} \sum_{\substack{ 0 \leq l
\leq r-n \\ l \equiv [a-i] \mod (p-1)}}
\binom{r-n}{l} X^{r-l} Y^{l} \mod p.
\end{align*}
We show below that $F(X,Y) \in X_{r-i}^{([a-i])}$ and
$F(X,Y)$ generates $V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$.
Clearly $F(X,Y) \in X_{r-i}$, by \Cref{Basis of X_r-i}.
Since $[a-i]$ (resp. $r-i$) is the smallest (resp. largest)
number between $0$ and $r$ congruent to $a-i$ mod $(p-1)$,
we see that the coefficients of
$X^{r}, X^{r-1}Y, \ldots$, $X^{r-[a-i]+1}Y^{[a-i]-1}$ (resp.
$ X^{i-1}Y^{r-i+1}, X^{i-2}Y^{r-i+2}, \ldots,Y^{r}$) in $F(X,Y)$ are zero.
As $[a-i] <i$, we see that
$F(X,Y)$ satisfies condition (i) of \Cref{divisibility1}
for $m=[a-i]$. For $0 \leq m \leq [a-i]-1$, by \Cref{binomial sum},
we have
\begin{align*}
\sum_{n=0}^{[a-i]-1} C_{n} \sum_{\substack{ 0 \leq l
\leq r-n \\ l \equiv [a-i] ~ \mathrm{mod} ~(p-1)}}
\binom{r-n}{l} \binom{l}{m}
& \; \; \equiv \sum_{n=0}^{[a-i]-1} C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{[a-i]-m} \mod p \\
&\overset{\eqref{Choice C_i for [a-i]<i}}{\equiv} \binom{r-i}{m}
~\mathrm{mod}~p.
\end{align*}
Thus, $F(X,Y) \in V_{r}^{([a-i])}$, by \Cref{divisibility1}.
Again by \Cref{binomial sum}, for $0\leq n \leq [a-i]-1$, we have
\begin{align*}
\sum_{\substack{ 0 \leq l \leq r-n \\ l \equiv [a-i] ~ \mathrm{mod} ~(p-1)}}
\binom{r-n}{l} \binom{l}{[a-i]}
& \equiv \binom{r-n}{[a-i]}
\binom{[a-n-[a-i]]}{p-1} + \binom{r-n}{[a-i]} ~\mathrm{mod}~p \\
& \equiv \binom{r-n}{[a-i]} ~\mathrm{mod}~p,
\end{align*}
where in the last step we used $\binom{[a-n-[a-i]]}{p-1} =
\binom{[i-n]}{p-1} = \binom{i-n}{p-1} = 0$, as $ n< [a-i] < i< p-1$. Hence
\begin{align*}
\binom{r-i}{[a-i]}-\sum_{n=0}^{[a-i]-1} C_{n}
\sum_{\substack{ 0 \leq l \leq r-n \\ l \equiv [a-i] ~ \mathrm{mod} ~(p-1)}}
\binom{r-n}{l} \binom{l}{[a-i]}
\equiv \binom{r-i}{[a-i]}-
\sum_{n=0}^{[a-i]-1} C_{n} \binom{r-n}{[a-i]} ~ \mathrm{mod} ~p.
\end{align*}
Clearly the coefficient of $X^{r-[a-i]}Y^{[a-i]}$ in $F(X,Y)$ is
$-\sum\limits_{n=0}^{[a-i]-1} C_{n}
\binom{r-n}{[a-i]} $. Also, from above we have
the coefficient of $X^{[a-i]}Y^{r-[a-i]}$
in $F(X,Y)$ is zero.
Putting all these together, it follows from \Cref{breuil map quotient}
with $m=[a-i]$, that
\begin{align*}
F(X,Y) \equiv & \binom{r-i}{[a-i]} \theta^{[a-i]} X^{r-[a-i](p+1)-(p-1)}Y^{p-1}
\mod V_{r}^{([a-i]+1)},
\end{align*}
up to terms involving $\theta^{[a-i]}X^{r-[a-i](p+1)}$ and $\theta^{[a-i]}Y^{r-[a-i](p+1)}$. Thus, by \Cref{Breuil map}, we have that the image of $F(X,Y)$ under $V_{r}^{([a-i] )}/ V_{r}^{([a-i]+1)} \twoheadrightarrow V_{p-1-[2i-a]} \otimes D^{i}$ is $(-1)^{r} \binom{r-i}{[a-i]} Y^{p-1-[2i-a]}$, which is non-zero by Lemma~\ref{interval and binomial} (iii) with $i$ there equal to $[a-i]$, since $r_{0} \not \in \mathcal{J}(a,[a-i])$
(this is where we discard $r_{0}=i$, as all the previous statements are valid even if $r_{0}=i$). As the exact sequence \eqref{exact sequence Vr} doesn't split for $m=[a-i]$, we obtain $F(X,Y)$ generates $V_{r}^{([a-i]) }/ V_{r}^{([a-i]+1)}$. \end{proof}
We next describe the first and last quotients in the chain \eqref{4.1.3 ascending chain} when the hypothesis of the lemma above fails to hold. \begin{lemma}\label{medium a-i not full}
Let $ p\geq 3$, $p \leq r \equiv a \mod (p-1) $ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0}\leq p-1$
and suppose $1 \leq [a-i] < i < p-1$. If
$r_{0} \in \mathcal{J}(a,[a-i])$, then $X_{r-i}= X_{r-(i-1)}+X_{r-i}^{(i+1)}$.
Furthermore, $X_{r-i}^{(i)}/X_{r-i}^{(i+1)} =
X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)}$ and $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)} =
X_{r-[a-i]}^{([a-i])}/X_{r-[a-i]}^{([a-i]+1)}$. \end{lemma}
\begin{proof}
Since $[a-i] < i$, we have $r - i \not\equiv i \mod p-1$.
Recall that the polynomial
\begin{align*}
F_{i,r}(X,Y)
&\stackrel{\eqref{F i,r definition}}{=} \sum_{\lambda \in \mathbb{F}_{p}} \lambda^{[2i-a]} X^{i}(\lambda X+Y)^{r-i}
\stackrel{\eqref{sum fp}}{\equiv}
- \sum_{\substack{0 \leq l \leq r-i \\ l \equiv i ~\mathrm{mod}~(p-1)}}^{}
\binom{r-i}{l}X^{r-l}Y^{l} \mod p
\end{align*}
generates $X_{r-i}/X_{r-(i-1)}$.
We claim that $F_{i,r}(X,Y) \in V_{r}^{(i+1)}$,
which implies the first assertion in the lemma.
Clearly the coefficients
of $X^{r-i}Y^{i}$ and $X^{[a-i]}Y^{r-[a-i]}$ in $F_{i,r}(X,Y)$ are
$-\binom{r-i}{i}$ and $-\binom{r-i}{r-[a-i]}$ respectively.
Since $i> [a-i]$, we have $\binom{r-i}{r-[a-i]}=0$.
By Lemma~\ref{interval and binomial} (iii) with $i$ there equal to $[a-i]$, and the hypothesis $i>[a-i]$, we see that
$\binom{r-i}{i} \equiv 0$ mod $p$, for $r_{0} \in \mathcal{J}(a,[a-i])$.
As $i$
(resp. $r-[a-i]$) is the only number between $0$ and $p-1$
(resp. $r$ and $r-(p-1)$) congruent to $i \mod (p-1)$, we see that
$X^{p},Y^{p} \mid F_{i,r}(X,Y)$. So $F_{i,r}(X,Y)$
satisfies condition (i) of \Cref{divisibility1} for $m=i+1$.
Further, for $0 \leq m \leq i$, by \Cref{binomial sum}, we have
\begin{align*}
\sum_{\substack{0 \leq l \leq r-i \\ l \equiv i ~ \mathrm{mod}~ (p-1) }}
\binom{r-i}{l} \binom{l}{m} \equiv \binom{r-i}{m} \left( \binom{[a-i-m]}{[i-m]}
+ \delta_{[i-m],p-1} \right) \mod p.
\end{align*}
If $0 \leq m < [a-i]$, then $[a-i-m] = [a-i]-m < i-m =[i-m]$ so
$\binom{[a-i-m]}{[i-m]} =0$.
If $[a-i] \leq m \leq i$ as we just saw
$\binom{r-i}{m} \equiv 0 ~\mathrm{mod}~ p$ for
$r_{0} \in \mathcal{J}(a,[a-i])$. Thus, the sum above vanishes for all
$0 \leq m \leq i$. Hence
by \Cref{divisibility1}, we have $F_{i,r}(X,Y) \in V_{r}^{(i+1)}$.
This completes the proof of the first statement of the lemma.
Since $[a-i]< i$, we have
\[
X_{r-(i-1)}+X_{r-i}^{(i+1)} \subseteq X_{r-(i-1)}+X_{r-i}^{(i)}
\subseteq X_{r-(i-1)}+X_{r-i}^{([a-i]+1)} \subseteq
X_{r-(i-1)}+X_{r-i}^{([a-i])} \subseteq X_{r-i}.
\]
As the extreme terms in the above chain are equal,
all the terms are equal. Thus, for $j \in \lbrace i,[a-i] \rbrace$,
it follows from \eqref{Y i,j} that
\[
(0) = \frac{X_{r-i}^{(j)}+X_{r-(i-1)}}{X_{r-i}^{(j+1)}+X_{r-(i-1)}}
\cong Y_{i,j} =
\frac{X_{r-i}^{(j)}/X_{r-i}^{(j+1)}}{X_{r-(i-1)}^{(j)}/X_{r-(i-1)}^{(j+1)}}.
\]
Thus $X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = X_{r-(i-1)}^{(j)}/X_{r-(i-1)}^{(j+1)} $,
for $j \in \lbrace i,[a-i] \rbrace$. Since $[a-i]\leq i -1 <i$ and $[a-[a-i]]=i$,
it follows from the second and first parts of \Cref{reduction} (with $i$ there equal to $i-1$), that
$X_{r-(i-1)}^{(j)}/X_{r-(i-1)}^{(j+1)} = X_{r-[a-i]}^{(j)}/X_{r-[a-i]}^{(j+1)} $,
for $j \in \lbrace i,[a-i] \rbrace$. This finishes the proof of the lemma. \end{proof}
Since $[a-i]<i=[a-[a-i]]$, the lemma above in conjunction with \Cref{singular quotient i < [a-i]} (applied with $i$ there equal to $[a-i]$) can be used to describe the first quotient $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ and the last quotient $X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)}$ in the chain \eqref{4.1.3 ascending chain} if
$r_{0} \in \mathcal{J}(a,[a-i])$.
Note that $\mathcal{J}(a,[a-i]) \subset \mathcal{J}(a,i)$ if $[a-i]<i$.
In the next lemma we determine the first quotient $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}$ in the chain \eqref{4.1.3 ascending chain} when $r_{0} \not \in \mathcal{J}(a,i)$.
\begin{lemma}\label{medium i full}
Let $p \geq 3$, $ r \equiv a~\mathrm{mod}~ (p-1) $ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0}\leq p-1$ and suppose $1 \leq [a-i]<i < p-1$.
If $i(p+1)+p \leq r$ and $r_{0} \not \in \mathcal{J}(a,i)$,
then $X_{r-i}^{(i)}/X_{r-i}^{(i+1)} =
V_{r}^{(i)}/V_{r}^{(i+1)}$. \end{lemma} \begin{proof}
Note that the congruence class of $[a-i]+i$ mod $p$
doesn't belong to $\mathcal{J}(a,i)$, by
\eqref{interval J for i > a-i}.
If $r \equiv [a-i]+i \mod p$, then by \Cref{large a-i}
(applied with $i$ there equal to $[a-i]$), we have
$X_{r-[a-i]}^{(i)}/ X_{r-[a-i]}^{(i+1)} = V_{r}^{(i)}/ V_{r}^{(i+1)}$.
Since $X_{r-[a-i]}^{(i)}/ X_{r-[a-i]}^{(i+1)} \subseteq
X_{r-i}^{(i)}/ X_{r-i}^{(i+1)} \subseteq V_{r}^{(i)}/ V_{r}^{(i+1)}$,
we get
$X_{r-i}^{(i)}/ X_{r-i}^{(i+1)} = V_{r}^{(i)}/ V_{r}^{(i+1)}$.
So assume that $r_{0} \not \in
\mathcal{J}(a,i)$ but $r \not \equiv [a-i]+i $ mod $p$.
We claim that there exist constants $B$, $C_{0}, \ldots , C_{i}$
such that
\begin{align*}
F(X,Y) & :=B X^{r-i}Y^{i} - \sum_{n=0}^{i} C_{n}
\sum_{k \in \mathbb{F}_{p}^{\ast}} k^{i+n-a} X^{n} (k X+Y)^{r-n} \\
& \stackrel{\eqref{sum fp}}{\equiv}
B X^{r-i}Y^{i} + \sum_{n=0}^{i} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv i ~ \mathrm{mod} ~(p-1)}}
\binom{r-n}{l} X^{r-l}Y^{l}
~\mathrm{mod}~p
\end{align*}
generates $V_{r}^{(i)}/V_{r}^{(i+1)}$. Clearly $F(X,Y) \in X_{r-i}$,
by \Cref{Basis of X_r-i}, for any choice of $B$, $C_{n}$.
To show such a choice exists, let
$A'= \left( \binom{r-n}{m} \binom{[a-m-n]}{i-m} \right)_{0 \leq m,n \leq [a-i]-1}$,
$\mathbf{v}= \left( \binom{i}{0}, \ldots , \binom{i}{[a-i]-1} \right)$ and
$\mathbf{w} = \left( \binom{r}{r-[a-i]}, \ldots, \binom{r-([a-i]-1)}{r-[a-i]} \right)$.
By \Cref{block matrix invertible}, we see that the matrix
\[
A = \left( \begin{array}{c|c} A' & \mathbf{v}^{t} \\ \hline \mathbf{w} & 0
\end{array} \right),
\]
is invertible if
$r_{0} \not \in \mathcal{J}(a,i)$ and $r _{0} \not \equiv [a-i]+i$ mod $p$.
Choose constants $B$, $C_{0}, \ldots, C_{[a-i]-1}$ such that
$A (C_{0}, \ldots, C_{[a-i]-1}, B)^{t} = -(\binom{r-[a-i]}{0},
\ldots, \binom{r-[a-i]}{[a-i]-1},1)^{t}$, i.e.,
\begin{align}
\sum_{n=0}^{[a-i]-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m}
+ B \binom{i}{m}
&= - \binom{r-[a-i]}{m}, ~ \forall~ 0 \leq m \leq [a-i]-1,
\label{i> a-i choice C_i 1} \\
\sum_{n=0}^{[a-i]-1} C_{n} \binom{r-n}{r-[a-i]}
& = -1. \label{i> a-i choice C_i 2}
\end{align}
Set $C_{[a-i]}=1$. Using \eqref{interval J for i > a-i}, we see that
if $r_{0} \not \in \mathcal{J}(a,i)$, then
$r- [a-m] \equiv m$, $m+1, \ldots, p+m-i-2$ mod $p$, for all
$[a-i] \leq m \leq i-1$.
By Lucas' theorem,
we have $\binom{r-[a-m]}{m} \not \equiv 0$ mod $p$, for all
$[a-i] \leq m \leq i-1$. Successively choose
$C_{i}, C_{i-1}, \ldots, C_{[a-i]+1}$, so that
for every $[a-i] \leq m < i$ we have
\begin{align}\label{choice C_i}
(-1)^{i-m} \binom{r-[a-m]}{m} C_{[a-m]} \equiv
-\sum_{n=0}^{[a-i]} & C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{i-m} + D \binom{i}{m} \nonumber \\
&-\sum_{n=[a-m]+1}^{i} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m}
~\mathrm{mod}~p.
\end{align}
We claim that, for the above choice of $B$, $C_{0}, \ldots , C_{i}$,
the polynomial $F(X,Y)$ generates $V_{r}^{(i)}/V_{r}^{(i+1)}$. Note that
the coefficient of $X^{r-l}Y^{l}$ in $F(X,Y)$ is zero if
$ l \not \equiv i $ mod $(p-1)$. Since $i$ is the smallest
number between $0$ and $r$ congruent to $i$ mod $(p-1)$,
we see that $Y^{i} \mid F(X,Y)$.
The coefficient of $X^{[a-i]}Y^{r-[a-i]}$ in $F(X,Y)$ equals
\begin{align*}
- \sum_{n=0}^{i} C_{n} \binom{r-n}{r-[a-i]} =
-\sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{r-[a-i]}
\overset{\eqref{i> a-i choice C_i 2}}{\equiv} 0
~\mathrm{mod}~ p ~~ ( ~\because ~ C_{[a-i]} =1).
\end{align*}
As $r-[a-i]$ is the only number between $r$ and $r-(p-1)$
congruent to $i$ mod $(p-1)$, we see that $X^{p} \mid F(X,Y)$.
So $X^{p},Y^{i} \mid F(X,Y)$, whence $F(X,Y)$ satisfies condition
(i) of \Cref{divisibility1} for $m=i$. For $0 \leq m < [a-i]$, by
\Cref{binomial sum}, we have
\begin{align*}
\sum_{n=0}^{i} C_{n} \sum_{\substack{0 \leq l \leq r-n \\
l \equiv i ~\mathrm{mod}~(p-1)} }
\binom{r-n}{l}\binom{l}{m} - D \binom{i}{m}
& \; \equiv \sum_{n=0}^{i} C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{i-m}- D \binom{i}{m} ~\mathrm{mod}~ p \\
& \;= \sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{i-m} - D \binom{i}{m} \\
& \overset{\eqref{i> a-i choice C_i 1}}{\equiv}
0 ~\mathrm{mod}~ p,
\end{align*}
where in the second last step we used that
$[a-m-n] = [a-n]-m < i-m$, for all $[a-i] < n \leq i$,
so $\binom{[a-m-n]}{i-m} =0$.
For $[a-i] \leq m < i$, by \Cref{binomial sum}, we have
\begin{align*}
\sum_{n=0}^{i} C_{n} & \sum_{\substack{0\leq l \leq r-n \\
l \equiv i ~\mathrm{mod}~(p-1)}}
\binom{r-n}{l} \binom{l}{m} - D \binom{i}{m} \\
& \equiv \sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{m}
\binom{[a-m-n]}{i-m}
+ \sum_{n=[a-m]}^{i} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{i-m}
- D \binom{i}{m} ~\mathrm{mod}~ p \\
& \overset{\eqref{choice C_i}}{\equiv} 0 ~\mathrm{mod}~ p,
\end{align*}
where in the second last step we used that
$[a-m-n] = [a-n]-m < i-m$, so
$\binom{[a-m-n]}{i-m} =0$, for all $[a-i] < n < [a-m]$
and
in the last step we used $\binom{p-1}{i-m} \equiv (-1)^{i-m}$
mod $p$. Hence by \Cref{divisibility1}, we have
$F(X,Y) \in V_{r}^{(i)}$. As the exact sequence
\eqref{exact sequence Vr} doesn't split for $m=i$, to show
$F(X,Y)$ generates $V_{r}^{(i)}/V_{r}^{(i+1)}$ it is enough to show
the image of $F(X,Y)$ under the rightmost map of the exact sequence
\eqref{exact sequence Vr} is non-zero.
By \Cref{binomial sum},
we have
\begin{align*}
B \binom{i}{i} - \sum_{n=0}^{i} C_{n} \sum_{\substack{0 \leq l \leq r-n\\ l
\equiv i ~\mathrm{mod}~(p-1)}}
\binom{r-n}{l} \binom{l}{i} & \equiv
B \binom{i}{i} -
\sum_{n=0}^{i} C_{n} \binom{r-n}{i} + \binom{r-[a-i]}{i}
~~\mathrm{mod}~ p \\
&= \mathrm{coefficient ~ of ~} X^{r-i}Y^{i} ~ \mathrm{in}~ F(X,Y) +
\binom{r-[a-i]}{i},
\end{align*}
where in the first congruence we used $[a-n-i] = p-1$ if and only if
$n=[a-i]$.
Since $X^{p} \mid F(X,Y)$, by \Cref{breuil map quotient}, we have
\begin{align*}
F(X,Y) \equiv \binom{r-[a-i]}{i} \theta^{i} & X^{r-i(p+1)-(p-1)}Y^{p-1}
\mod V_{r}^{(i+1)} ,
\end{align*}
up to terms involving $\theta^{i} X^{r-i(p+1)} $ and
$\theta^{i} Y^{r-i(p+1)}$.
By Lucas' theorem and \eqref{interval J for i > a-i}, we see that
$\binom{r-[a-i]}{i} \not \equiv 0$ mod $p$ for
$r_{0} \not \in \mathcal{J}(a,i)$. Thus, by \Cref{Breuil map},
the image of $F(X,Y)$ under the rightmost map
of \eqref{exact sequence Vr} is non-zero, as desired. \end{proof}
The next lemma determines the first quotient in the chain \eqref{4.1.3 ascending chain} in the remaining cases
by reducing to the results in \S \ref{section i < a-i}, noting that the inequality $i > [a-i]$ implies $[a-i] < [a-[a-i]]$. \begin{lemma}\label{medium i not full}
Let $p \geq 3$, $ r \equiv a \mod (p-1) $ with $1 \leq a \leq p-1$,
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0}\leq p-1$
and $1 \leq [a-i] < i < p-1$. If $i(p+1)+p \leq r$,
$ r_{0} \in \mathcal{J}(a,i)$ and
$r_{0} \not \in \mathcal{J}(a,[a-i])$, then
$X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}= X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)}$. \end{lemma} \begin{proof}
Since $[a-i] \leq i-1 < i$, by the second part of \Cref{reduction}
(with $i$ there equal to $i-1$ and $j=i$), we have
$X_{r-(i-1)}^{(i)}/ X_{r-(i-1)}^{(i+1)} = X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)} $.
So to prove the lemma it is enough to show
$X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}= X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)}$.
By \Cref{medium a-i full}, we have
$X_{r-i}^{([a-i])}/X_{r-i}^{([a-i]+1)} = V_{r}^{([a-i])}/V_{r}^{([a-i]+1)}$.
Since $[a-i] \leq i-1 < i$, by the first part of
\Cref{reduction} (with $i$ there equal to $i-1$ and $j=[a-i]$), we have
$X_{r-(i-1)}^{([a-i])}/X_{r-(i-1)}^{([a-i]+1)} =
X_{r-[a-i]}^{([a-i])}/X_{r-[a-i]}^{([a-i]+1)} $. By
\eqref{interval J for i > a-i} and \eqref{interval J for [a-i]},
we have
$$r_{0} \in \lbrace [a-i]-1, [a-i], \ldots, i-1 \rbrace.$$
By the first and third parts of \eqref{interval I} and the fact $[a-i]<[a-[a-i]] =i$,
we have
\begin{align}\label{interval [a-i], for i> [a-i]}
\mathcal{I}(a,[a-i]) =
\begin{cases}
\lbrace i+1,i+2, \ldots, a \rbrace,
& \mathrm{if}~ [a-i] < i<a, \\
\lbrace 0,1, \ldots, a-1 \rbrace \cup
\lbrace i+1, i+2, \ldots, p-1 \rbrace,
& \mathrm{if}~a < [a-i] < i,
\end{cases}
\end{align}
so $r_{0} \not \in \mathcal{I}(a,[a-i])$.
Thus, by \Cref{singular quotient i < [a-i]}
(with $i,j$ there equal to $[a-i]$), we have
$
X_{r-[a-i]}^{([a-i])}/X_{r-[a-i]}^{([a-i]+1)}
\cong V_{[2i-a]} \otimes D^{a-i}$, whence
$X_{r-(i-1)}^{([a-i])}/X_{r-(i-1)}^{([a-i]+1)}
\cong V_{[2i-a]} \otimes D^{a-i}$.
By the exact sequence
\eqref{exact sequence Vr} (with $m =[a-i]$), we have
$V_{p-1-[2i-a]} \otimes D^{i} \cong Y_{i,[a-i]}$.
Thus, by \eqref{Y i,j}, we have
\begin{align*}
V_{p-1-[2i-a]} \otimes D^{i} \cong
\frac{X_{r-i}^{([a-i])} +X_{r-(i-1)}}{X_{r-i}^{([a-i]+1)} +X_{r-(i-1)}}.
\end{align*}
Now assume $r_{0} \neq [a-i]-1$, i.e.,
$r_{0} \in \lbrace [a-i], \ldots, i-1 \rbrace$.
By Lemma~\ref{I vs J},
we see that $r_0 \in \mathcal{I}(a,i)$.
Since $[a-i] \leq i-1 <i$, by \Cref{reduction}, we have
$X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)} =
X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)} $.
By \Cref{singular quotient i < [a-i]}, noting $r \not \equiv [a-i]+i \mod p$,
we have
$X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)} = (0)$, whence
$X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)} =(0)$.
Suppose $X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}
\neq X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)}$, to get a contradiction.
Then $X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}\neq (0) $ and
$V_{[a-2i]} \otimes D^{i}
\hookrightarrow X_{r-i}^{(i)}/ X_{r-i}^{(i+1)}$ by the exact sequence \eqref{exact sequence Vr}
with $m=i$.
Therefore
\begin{align*}
V_{p-1-[2i-a]} \otimes D^{i} = V_{[a-2i]} \otimes D^{i} \hookrightarrow
Y_{i,i} \cong \frac{X_{r-i}^{(i)} +X_{r-(i-1)}}{X_{r-i}^{(i+1)} +X_{r-(i-1)}}.
\end{align*}
Since $[a-i]<i$, we have
$\frac{X_{r-i}^{([a-i])} +X_{r-(i-1)}}{X_{r-i}^{([a-i]+1)} +X_{r-(i-1)}}$ and
$\frac{X_{r-i}^{(i)} +X_{r-(i-1)}}{X_{r-i}^{(i+1)} +X_{r-(i-1)}}$ are distinct
subquotients of $X_{r-i}/X_{r-(i-1)}$. Thus, we obtain that
$V_{p-1- [2i-a]} \otimes D^{i}$
occurs twice as a JH factor of $X_{r-i}/X_{r-(i-1)}$
and hence also of $\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})$,
by \Cref{induced and successive}. Clearly this is not possible by
\Cref{Common JH factor} (i). This proves the lemma in the case
$r_{0} \neq [a-i]-1$.
Next we deal with the case $r \equiv [a-i]-1 \mod p$.
By Lucas' theorem, we have
$\binom{r-i}{[a-i]} \not \equiv 0$ mod $p$, as $[a-i] < i < p-1$.
By \eqref{interval [a-i], for i> [a-i]},
it follows
that $[a-i]-1 \not \in \mathcal{I}(a,[a-i])$. Let
$F(X,Y)$ be as defined in \eqref{polynomial F in i < a-i}
with $i$ there equal to $[a-i]$.
As in the proof of \Cref{Large Cong class Quotient non zero}, we have
$F(X,Y) \in X_{r-[a-i]}^{([a-i])}$. Let
$G_{i,r}(X,Y)$ be as in \eqref{G i,r definition} and let
\begin{align*}
H(X,Y) & = - G_{i,r}(X,Y) + \binom{r-i}{[a-i]} F(X,Y) \\
& = \sum_{ \substack{0 \leq l < r-i \\ l \equiv [a-i]
~\mathrm{mod}~(p-1)}} \binom{r-i}{l} X^{r-l}Y^{l}
- \binom{r-i}{[a-i]} \sum_{n=0}^{[a-i]} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv [a-i]
\mathrm{~mod}~ (p-1)}} \binom{r-n}{l}X^{r-l} Y^{l} ,
\end{align*}
where $C_{0}, \ldots, C_{[a-i]}$ satisfy
\eqref{choice C_n for i<a-i} with $i$ there equal to $[a-i]$.
Clearly $H(X,Y) \in X_{r-i}$.
We claim that $H(X,Y) \in X_{r-i}^{(i+1)}+ X_{r-(i-1)}$.
Assuming the claim for the moment, we
first finish the proof of the lemma.
Since $[a-i] \leq i-1$ and $F(X,Y) \in X_{r-[a-i]}$, we have
$H(X,Y) + G_{i,r}(X,Y) \in X_{r-(i-1)}$, by
\Cref{first row filtration}. Also recall that
$G_{i,r}(X,Y)$ generates $W_{i,r}$, the image of
$V_{[2i-a]} \otimes D^{a-i} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma}(\chi_{1}^{r-i} \chi_{2}^{i})
\overset{\psi_{i}}{\twoheadrightarrow} X_{r-i}/X_{r-(i-1)}$,
as a $\Gamma$-module. Therefore
\begin{align*}
V_{[2i-a]} \otimes D^{a-i}
\overset{\psi_{i}}{\twoheadrightarrow} W_{i,r}
\hookrightarrow
\frac{X_{r-i}^{(i+1)}+X_{r-(i-1)}}{X_{r-(i-1)} }.
\end{align*}
Using
\Cref{Structure of induced} in conjunction with
\Cref{induced and successive}, and noting that
$p-1-[2i-a]=[a-2i]$,
we get
\[
\begin{tikzcd}
0 \arrow[r, rightarrow] & V_{[2i-a]} \otimes D^{a-i} \arrow[r, rightarrow]
& \operatorname{ind}_{B}^{\Gamma} (\chi_{1}^{r-i}\chi_{2}^{i}) \arrow[r, rightarrow]
\arrow[d, twoheadrightarrow, "\psi_{i}"] & V_{[a-2i]} \otimes D^{i}
\arrow[r, rightarrow] & 0. \\
& & X_{r-i,\,r}/X_{r-(i-1),\,r} & &
\end{tikzcd}
\]
Together these facts give a surjection $V_{[a-2i]} \otimes D^{i}
\twoheadrightarrow
\frac{X_{r-i}}{X_{r-i}^{(i+1)} +X_{r-(i-1)}}$.
Since $[a-i]<i$, we have
\[
X_{r-(i-1)}+X_{r-i}^{(i+1)} \subseteq X_{r-(i-1)}+X_{r-i}^{(i)}
\subseteq X_{r-(i-1)}+X_{r-i}^{([a-i]+1)} \subseteq
X_{r-(i-1)}+X_{r-i}^{([a-i])} \subseteq X_{r-i}.
\]
But we already know that $V_{[a-2i]} \otimes D^{i} \hookrightarrow
\frac{X_{r-i}^{([a-i])} +X_{r-(i-1)}}{X_{r-i}^{([a-i]+1)} +X_{r-(i-1)}}$.
So
$ X_{r-(i-1)} + X_{r-i}^{([a-i]+1)}
= X_{r-(i-1)}+ X_{r-i}^{(i+1)} = X_{r-(i-1)}+ X_{r-i}^{(i)}$. Hence,
by \eqref{Y i,j},
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} = X_{r-(i-1)}^{(i)}/X_{r-(i-1)}^{(i+1)} $.
We now prove the claim.
Since $[a-i]<i < p-1$, by \eqref{G i,r}, we have $G_{i,r}(X,Y) = X^{i} G_{r-i}(X,Y)
\in X_{r-i}^{([a-i])}$.
So $H(X,Y) \in X_{r-i}^{([a-i])}$, as
$F(X,Y) \in X_{r-i}^{([a-i])}$.
We first show that $H(X,Y) \in V_{r}^{(i)}$.
The coefficient of $X^{r-[a-i]}Y^{[a-i]}$ in $H(X,Y)$ equals
\[
\binom{r-i}{[a-i]} - \binom{r-i}{[a-i]}
\sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{[a-i]}
\stackrel{\eqref{choice C_n for i<a-i}}{=}
\binom{r-i}{[a-i]} - \binom{r-i}{[a-i]} =
0,
\]
where we used \eqref{choice C_n for i<a-i} with
$i,m$ there equal to $[a-i]$.
Since $[a-i]$ is the only number between $0$ and $p-1$
congruent to $[a-i]$ mod $(p-1)$, we see that $Y^{p} \mid H(X,Y)$.
Also $r-i$ is the only number between $r$ and $r-(p-1)$ congruent
to $[a-i]$ mod $(p-1)$, so $X^{i} \mid H(X,Y)$.
So $H(X,Y)$ satisfies condition (i) of \Cref{divisibility1} for $m=i$.
It suffices to check condition (ii) of that lemma for $[a-i] \leq m \leq i$,
since $H(X,Y) \in V_{r}^{([a-i])}$.
By \Cref{binomial sum}, for $[a-i] \leq m \leq i$, we have
\begin{align}\label{i> a-i, G i,r binomial sum}
\sum_{ \substack{0 \leq l < r-i \\ l \equiv [a-i]
~\mathrm{mod}~(p-1)}} \binom{r-i}{l} \binom{l}{m}
& = \sum_{ \substack{0 \leq l \leq r-i \\ l \equiv [a-i]
~\mathrm{mod}~(p-1)}} \binom{r-i}{l} \binom{l}{m}
- \binom{r-i}{m} \nonumber \\
&\equiv \binom{r-i}{m} \left[ \binom{[a-i-m]}{[a-i-m]}+
\delta_{p-1,[a-i-m]} \right]
- \binom{r-i}{m}
\mod p \nonumber \\
&\equiv \binom{r-i}{m} \delta_{[a-i],m} \mod p.
\end{align}
Again by \Cref{binomial sum}, for $[a-i] \leq m \leq i-1$, we have
\begin{align}\label{i> a-i, F binomial sum}
\sum_{n=0}^{[a-i]} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv [a-i]
\mathrm{~mod}~ (p-1)}} \binom{r-n}{l}
\binom{l}{m} & \equiv
\sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{m}
\left[\binom{[a-m-n]}{[a-i-m]} + \delta_{p-1,[a-i-m]} \right]
~\mathrm{mod}~p \nonumber \\
& \equiv
\sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{m} \delta_{[a-i],m}
~\mathrm{mod}~p \nonumber \\
& \stackrel{\eqref{choice C_n for i<a-i}}{\equiv} \delta_{[a-i],m}
~\mathrm{mod}~p,
\end{align}
where in the second last step we used if $0 \leq n \leq [a-i]-1$, then
$r-n \equiv [a-i]-1-n$ mod $p$ and $0 \leq [a-i]-1-n < [a-i] \leq m$
so $\binom{r-n}{m} \equiv 0$ mod $p$, by Lucas' theorem,
and if $n=[a-i]$, then we have $\binom{[a-m-n]}{[a-i-m]}
= \binom{i-m}{p-1+[a-i]-m} = 0$ as $i< p-1 \leq p-1+[a-i]$.
Hence, the difference of the expressions
\eqref{i> a-i, G i,r binomial sum} and
$\binom{r-i}{[a-i]}$ times \eqref{i> a-i, F binomial sum}
is zero.
Thus, by \Cref{divisibility1}, we have $H(X,Y) \in V_{r}^{(i)}$.
Next we show that
\begin{align}\label{H in V i+1}
H(X,Y) \equiv
- (-1)^{a-i} C_{[a-i]} \binom{r-i}{[a-i]}
\theta ^{i} Y^{r-i(p+1)}
\mod V_{r}^{(i+1)}.
\end{align}
Indeed, since
$r-n \equiv [a-i]-1-n$ mod $p$ and $0 \leq [a-i]-1-n < i -n$,
for all $0 \leq n \leq [a-i]-1$, so
$\binom{r-n}{i-n} \equiv 0$ mod $p$, by Lucas' theorem. Thus,
by Lucas' theorem,
the coefficient of $X^{i}Y^{r-i}$ in $H(X,Y)$ is equal to
\begin{align}\label{i>a, coeff X i in H}
- \binom{r-i}{[a-i]} \sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{r-i}
&= - \binom{r-i}{[a-i]} \sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{i-n}
\nonumber\\
&\equiv - C_{[a-i]} \binom{r-i}{[a-i]} \binom{r-[a-i]}{i-[a-i]}
~\mathrm{mod}~p \nonumber \\
&\equiv - C_{[a-i]} \binom{r-i}{[a-i]}
\binom{p-1}{i-[a-i]} ~\mathrm{mod}~ p
\nonumber \\
& \equiv -(-1)^{a}C_{[a-i]} \binom{r-i}{[a-i]}
~\mathrm{mod}~p ,
\end{align}
which is clearly the coefficient of $X^{i}Y^{r-i}$
on the right hand side of \eqref{H in V i+1}.
Also, $Y^{p}$ divides the right hand side of
\eqref{H in V i+1} as $r-i(p+1) \geq p$. Thus
the difference of the two sides satisfies condition
(i) of \Cref{divisibility1} with $m=i+1$.
By \Cref{binomial sum}, we have
\begin{align}\label{i> a-i, i derivative condition for F}
\sum_{n=0}^{[a-i]} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv [a-i]
\mathrm{~mod}~ (p-1)}} \binom{r-n}{l}
\binom{l}{i} & \equiv
\sum_{n=0}^{[a-i]} C_{n} \binom{r-n}{i}
\binom{[a-i-n]}{[a-2i]}
\mod p \nonumber \\
&\equiv (-1)^ {a-i}C_{[a-i]} \mod p,
\end{align}
where in the last step we have used that if $0\leq n \leq [a-i]-1$,
then
$\binom{r-n}{i} \equiv 0$ mod $p$ as above,
and if $n=[a-i]$, then
$\binom{r-[a-i]}{i} \binom{p-1}{[a-2i]}
\equiv \binom{p-1}{i} \binom{p-1}{[a-2i]}
\equiv (-1)^{a-i}$ mod $p$
since $r \equiv [a-i]-1$ mod $p$ and $\binom{p-1}{j} \equiv (-1)^{j}
$ mod $p$. Thus, by
\eqref{i> a-i, G i,r binomial sum} and
\eqref{i> a-i, i derivative condition for F},
we have
\begin{align}\label{i> a-i, i derivative condition}
\sum_{ \substack{0 \leq l < r-i \\ l \equiv [a-i]
~\mathrm{mod}~(p-1)}} \binom{r-i}{l} \binom{l}{i}
-& \binom{r-i}{[a-i]} \sum_{n=0}^{[a-i]} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv [a-i]
\mathrm{~mod}~ (p-1)}} \binom{r-n}{l}
\binom{l}{i} \nonumber \\
& \equiv -(-1)^{a-i}\binom{r-i}{[a-i]} C_{[a-i]}
\mod p,
\end{align}
so that the difference of both sides of
\eqref{H in V i+1} also satisfies condition (ii)
of \Cref{divisibility1} with $m=i+1$ as desired.
Since $\binom{r-[a-i]}{i} \equiv \binom{p-1}{i} \not\equiv 0 \mod p$,
we have $X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)} \neq (0)$,
by Lemma~\ref{socle term singular} (with $i$ there equal to $[a-i]$).
Since $[a-i]< i$, the exact sequence
\eqref{exact sequence Vr} doesn't split for $m=[a-i]$
and
$ \theta^{i}Y^{r-i(p+1)} $
belongs to the socle of $V_{r}^{(i)}/V_{r}^{(i+1)}$,
by \Cref{Breuil map}. Thus, by \eqref{H in V i+1}
we see that
$H(X,Y) \in X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)}$.
Since $[a-i] \leq i-1 $, we get
$H(X,Y) \in X_{r-(i-1)} + V_{r}^{(i+1)}$,
so in $X_{r-(i-1)} + X_{r-i}^{(i+1)}$ because $H(X,Y) \in X_{r-i}$,
as claimed.
\end{proof}
We are now ready to determine the quotients $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}$, for $j \in \lbrace i , [a-i] \rbrace$ and $i>[a-i]$.
\begin{proposition}\label{singular i>r-i}
Let $p \geq 3$, $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} \mod p$ with $0 \leq r_{0} \leq p-1$
and suppose $1 \leq [a-i] < i < p-1$. If $j \in \lbrace i , [a-i] \rbrace$ and $r \geq j(p+1)+p$, then
\begin{align*}
\frac{X_{r-i}^{(j)} }{X_{r-i}^{(j+1)}} =
\begin{cases}
V_{r}^{(j)}/V_{r}^{(j+1)}, & \mathrm{if}~ r_{0} \not \in \mathcal{J}(a,j), \\
V_{[a-2i]} \otimes D^{i}, & \mathrm{if}~ r_{0} = [a-j]-1~
\mathrm{and}~ j \geq [a-j], \\
V_{p-1-[a-2i]} \otimes D^{a-i}, & \mathrm{if}~ r_{0} = [a-j]
~ \mathrm{and}~ j \leq [a-j], \\
(0), & \mathrm{if}~ r_{0} \in \mathcal{I}(a,j)
~\mathrm{and}~ r \not \equiv [a-i]+i ~\mathrm{mod}~p.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
We prove the proposition by treating
the cases $j=i$ and $j=[a-i]$ separately.
\textbf{Case} $\boldsymbol{j=i}$:
If $r_{0} \not \in \mathcal{J}(a,i)$,
then by Lemma~\ref{medium i full}
we have $X_{r-i}^{(i)}/X_{r-i}^{(i+1)}=V_{r}^{(i)}/V_{r}^{(i+1)}$.
So assume $r_{0} \in \mathcal{J}(a,i)$.
Then, by Lemmas \ref{medium a-i not full} and \ref{medium i not full},
we see that
$X_{r-i}^{(i)}/X_{r-i}^{(i+1)} = X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)}$.
As $[a-i]< i $, by Lemma~\ref{I vs J} (ii),
we see that
$ r_{0} \in \mathcal{I}(a, i) \cup \lbrace [a-i]-1 \rbrace$ and
$ r \not \equiv [a-i]+i $ mod $p$. Thus,
by \Cref{singular quotient i < [a-i]} (with
$i$ there equal to $[a-i]$ and $j$ equal to $i$),
we get
$X_{r-[a-i]}^{(i)}/X_{r-[a-i]}^{(i+1)} \cong
V_{[a-2i]} \otimes D^{i}$ if $r \equiv [a-i]-1$ mod $p$
and zero otherwise.
\textbf{Case} $\boldsymbol{j=[a-i]}$:
If $r_{0} \not \in \mathcal{J}(a,j)$,
then by Lemma~\ref{medium a-i full}
we have $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}=V_{r}^{(j)}/V_{r}^{(j+1)}$.
So assume $r_{0} \in \mathcal{J}(a,j)$, then by
\Cref{medium a-i not full}, we have
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = X_{r-[a-i]}^{(j)}/X_{r-[a-i]}^{(j+1)}$.
Since $j < [a-j]$, by Lemma~\ref{I vs J} (i),
we see that
$r_{0} \in \mathcal{I}(a,j) \cup \lbrace [a-j] \rbrace$
and $ r \not \equiv [a-j]+j$ mod $p$. Thus,
by \Cref{singular quotient i < [a-i]} (with both $i$ and $j$ there equal to
$[a-i]$), we see that
$X_{r-j}^{(j)}/X_{r-j}^{(j+1)} \cong
V_{p-1-[a-2i]} \otimes D^{a-i}$ if $r \equiv i \, =[a-j]$ mod $p$
and is zero otherwise. \end{proof}
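To illustrate the proposition in a concrete instance (recorded for orientation only): take $p=7$, $a=5$ and $i=4$, so that $[a-i]=1$ and $[a-2i]=3$. If $r=77$, then $r \equiv 5 ~\mathrm{mod}~ 6$, $r_{0}=0=[a-i]-1$ and $r \geq 4(p+1)+p$, so the second case of the proposition (with $j=i=4 \geq [a-j]=1$) gives
\[
X_{r-4}^{(4)}/X_{r-4}^{(5)} \cong V_{3} \otimes D^{4}.
\]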
\begin{remark}
Observe that if in the statement of the proposition above,
we replace the condition \enquote* {$r_{0}
\in \mathcal{I}(a,j)$ and $r \not \equiv [a-i]+i$ mod $p$}
by \enquote*{otherwise}, then we may include the case
$[a-i]=i$ in the statement, by \Cref{singular i= [a-i]}. \end{remark} \begin{corollary}\label{arbitrary singular i>r-i}
Let $p \geq 3$, $r \equiv a \mod (p-1)$ with $1 \leq a \leq p-1$,
$r \equiv r_{0} \mod p$ with $0 \leq r_{0} \leq p-1$
and suppose $1 \leq [a-i] < i < p-1$. Then for
$[a-i] \leq l \leq i$ and $r \geq i(p+1)+p$,
we have
\begin{enumerate}[label= \emph{(\roman*)}]
\item If $r_{0} \not \in \mathcal{J}(a,l)$,
then
$X_{r-i}^{([a-i])} / X_{r-i}^{(l+1)} = V_{r}^{([a-i])} / V_{r}^{(l+1)}$.
\item Assume $l \neq [a-l]$. If $r_{0} \in \mathcal{I}(a,l)$
and $ r \not \equiv [a-i]+i ~\mathrm{mod}~p$, then
$X_{r-i}^{(l)} / X_{r-i}^{(i+1)} = (0)$.
\end{enumerate}
\end{corollary} \begin{proof}
Let $j' := \max \lbrace j, [a-j] \rbrace$, for all $[a-i] \leq j \leq i$.
Note that
$ \lbrace j, [a-j] \rbrace = \lbrace j', [a-j'] \rbrace$.
One checks that $[a-i] \leq j, [a-j] \leq i$, for all $[a-i] \leq j \leq i$.
Thus $[a-i] \leq j' \leq i <p-1$,
for all $[a-i] \leq j \leq i$.
\begin{enumerate}[label= (\roman*)]
\item
Since $\mathcal{J}(a,[a-i]) \subseteq \cdots \subseteq
\mathcal{J}(a,l) \subseteq \cdots \subseteq \mathcal{J}(a,i)$,
we see that $r_{0} \not \in \mathcal{J}(a,l)$ implies that
$r_{0} \not \in \mathcal{J}(a,j)$, for all $[a-i] \leq j \leq l$.
Hence by \Cref{singular i>r-i}, we have
$X_{r-j'}^{(j)}/X_{r- j'}^{(j+1)} = V_{r}^{(j)}/V_{r}^{(j+1)}$, for every
$[a-i] \leq j \leq l$. Since $X_{r-j'}^{(j)}/X_{r-j'}^{(j+1)} \subseteq
X_{r-i}^{(j)}/X_{r-i}^{(j+1)} \subseteq V_{r}^{(j)}/V_{r}^{(j+1)}$,
for all $[a-i] \leq j \leq l$, we see that
$X_{r-i}^{([a-i])}/X_{r-i}^{(l+1)} = V_{r}^{([a-i])}/V_{r}^{(l+1)}$.
\item
Since $\mathcal{I}(a,[a-i]) \subseteq \cdots \subseteq
\mathcal{I}(a,l) \subseteq \cdots \subseteq \mathcal{I}(a,i)$,
we see that $r_{0} \in \mathcal{I}(a,l)$ and
$ r \not \equiv [a-i]+i $ mod $p$ implies
$r_{0} \in \mathcal{I}(a,j)$ and
$ r \not \equiv [a-i]+i $ mod $p$, for all $l \leq j \leq i$.
One checks that $[a-i]+i = [a-j]+j$, for all $[a-i] \leq j \leq i$.
So $r_{0} \in \mathcal{I}(a,j)$ and
$ r \not \equiv [a-j]+j = [a-j']+j'$ mod $p$, for all $l \leq j \leq i$.
We claim that $X_{r-i}^{(j)}/X_{r-i}^{(j+1)}= (0)$, for all
$l \leq j \leq i$. Clearly (ii) follows from the claim.
Fix $l \leq j \leq i$. If $[a-j] \leq j \leq i$, then
by the first part of \Cref{reduction}, we have
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = X_{r-j}^{(j)}/X_{r-j}^{(j+1)}$.
Similarly, by the second part, if $j \leq [a-j] \leq i$, then
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = X_{r-[a-j]}^{(j)}/X_{r-[a-j]}^{(j+1)}$.
So to prove the claim it is enough to show that
$X_{r-j'}^{(j)}/X_{r-j'}^{(j+1)} =(0)$.
If $[a-j'] < j'$, then
$X_{r-j'}^{(j)}/X_{r- j'}^{(j+1)} = (0)$, by \Cref{singular i>r-i}.
If $j'= [a-j']$, then $j' = j$.
So $[a-i] \leq l \lneq j \leq i$ because $[a-l] \neq l$,
whence
\[ [a-i] \leq
j-1 < j < [a-j]+1 = [a-(j-1)] \leq i+1 \leq p-1.
\]
Thus, by the first and third parts of \eqref{interval I}, we have
$j-1, j \not \in \mathcal{I}(a,j-1)$.
Since $ \mathcal{I}(a,l) \subseteq \mathcal{I}(a,j-1)
\subseteq \mathcal{I}(a,j)$,
we get $j-1, j \not \in \mathcal{I}(a,l)$.
So $r_{0} \neq j-1$, $j$.
Thus, $r_{0} \in \mathcal{I}(a,j) \smallsetminus
\lbrace j,j-1 \rbrace $ and
$r \not \equiv [a-i]+i$ mod $p$.
So, by Lemma~\ref{I vs J} (applied with $i$ there equal to $j$),
we see that $r_{0} \neq j$, $j-1$ and $r_{0} \in \mathcal{J}(a,j)$.
Hence, by \Cref{singular i= [a-i]},
we again have $X_{r-j}^{(j)}/X_{r- j}^{(j+1)} = (0)$.
This proves the claim.
\qedhere
\end{enumerate}
\end{proof}
For $0 \leq j \leq n \leq m $, we have the following commutative diagram of $\Gamma$-modules with exact rows, in which the vertical maps are the natural inclusions
\begin{equation}\label{commutative diagram arbitrary}
\begin{tikzcd}
0 \arrow{r} & \frac{X_{r-i}^{(n)}}{X_{r-i}^{(m)}} \arrow{r}
\arrow[hookrightarrow]{d}
& \frac{X_{r-i}^{(j)}}{X_{r-i}^{(m)}} \arrow{r} \arrow[hookrightarrow]{d}
& \frac{X_{r-i}^{(j)}}{X_{r-i}^{(n)}} \arrow{r} \arrow[hookrightarrow]{d} & 0 \\
0 \arrow{r} & \frac{V_{r}^{(n)}}{V_{r}^{(m)} } \arrow{r} & \frac{V_{r}^{(j)}}
{V_{r}^{(m)}}
\arrow{r} & \frac{V_{r}^{(j)}}{V_{r}^{(n)}} \arrow{r} & 0.
\end{tikzcd}
\end{equation}
Taking the cokernel of each of these inclusions and applying the snake lemma, we get
\begin{align}\label{Q(i) exact sequence 2}
0 \rightarrow \frac{V_{r}^{(n)}}{X_{r-i}^{(n)}+V_{r}^{(m)}} \rightarrow
\frac{V_{r}^{(j)}}{X_{r-i}^{(j)}+V_{r}^{(m)}} \rightarrow
\frac{V_{r}^{(j)}}{X_{r-i}^{(j)}+V_{r}^{(n)}} \rightarrow 0.
\end{align}
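For the reader's convenience, we spell this out: since the vertical maps in \eqref{commutative diagram arbitrary} are injective, the six-term exact sequence furnished by the snake lemma reads
\[
0 \rightarrow 0 \rightarrow 0 \rightarrow 0 \rightarrow \frac{V_{r}^{(n)}}{X_{r-i}^{(n)}+V_{r}^{(m)}} \rightarrow
\frac{V_{r}^{(j)}}{X_{r-i}^{(j)}+V_{r}^{(m)}} \rightarrow
\frac{V_{r}^{(j)}}{X_{r-i}^{(j)}+V_{r}^{(n)}} \rightarrow 0,
\]
that is, the three kernel terms vanish and only the sequence of cokernels \eqref{Q(i) exact sequence 2} survives.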
We now determine the structure of $Q(i)$ in terms of $Q([a-i]-1)$
when $1 \leq [a-i]<i < p-1$. \begin{theorem}\label{Structure of Q(i) i>[a-i]}
Let $p \geq 3$, $r \equiv a ~ \mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$ and
let $r \equiv r_{0}~ \mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
Let $1 \leq [a-i]<i < p-1$ and $i(p+1)+p \leq r$.
Then we have an exact sequence of $\Gamma$-modules
\begin{align*}
0 \rightarrow W \rightarrow Q(i) \rightarrow Q([a-i]-1) \rightarrow 0,
\end{align*}
where
\begin{enumerate}[label= \emph{(\roman*)}]
\item If $r_{0} \not \in \mathcal{J}(a,i)$,
then $W= (0)$.
\item If $r_{0} \in \mathcal{J}(a,i)
\smallsetminus \mathcal{I}(a,[a-i])$, then the following hold.
\begin{enumerate}
\item If $[a-r_{0}] < r_{0}+1$, then
$
0 \rightarrow V_{r}^{([a-r_{0}]+1)}/V_{r}^{(i+1)} \rightarrow W
\rightarrow V_{[a-2r_{0}]} \otimes D^{r_{0}} \rightarrow 0.
$
\item If $[a-r_{0}] = r_{0} +1 $, then
$W= V_{r}^{([a-r_{0}])}/ V_{r}^{(i+1)}$.
\item If $ [a-r_{0}] > r_{0}+1 $,
then
$
0 \rightarrow V_{r}^{([a-r_{0}])}/V_{r}^{(i+1)} \rightarrow W
\rightarrow V_{p-1-[2r_{0}+2-a]} \otimes D^{r_{0}+1} \rightarrow 0.
$
\end{enumerate}
\item If $r_{0}\in \mathcal{I}(a,[a-i])$ and
$r \not \equiv [a-i]+i ~\mathrm{mod}~p$,
then $W= V_{r}^{([a-i])}/V_{r}^{(i+1)}$.
\end{enumerate} \end{theorem} \begin{proof}
For $[a-i] \leq j \leq i$, we have $[a-i] \leq [a-j] \leq i$.
Thus, $j':= \min \lbrace [a-j], j \rbrace \geq [a-i]$,
for all $[a-i] \leq j \leq i$. By \Cref{reduction corollary} (ii),
we see that $X_{r-j} \subseteq X_{r-(j-1)}+ V_{r}^{(j')}
\subseteq X_{r-(j-1)}+ V_{r}^{([a-i])}$, for all
$[a-i] \leq j \leq i$.
Hence
\[
X_{r-i} \subseteq X_{r-(i-1)}+ V_{r}^{([a-i])}
\subseteq \cdots
\subseteq X_{r-[a-i]}+ V_{r}^{([a-i])}
\subseteq X_{r-([a-i]-1)}+ V_{r}^{([a-i])}.
\]
Therefore,
$X_{r-i}+V_{r}^{([a-i])} = X_{r-([a-i]-1)}+V_{r}^{([a-i])}$,
by \Cref{first row filtration}, and so
\[
\frac{V_{r}}{X_{r-i}+V_{r}^{([a-i])}}
= \frac{V_{r}}{X_{r-([a-i]-1)}+V_{r}^{([a-i])}} =Q([a-i]-1).
\]
Taking $j= [a-i]$ in diagram \eqref{commutative diagram} we see that
\[
0 \rightarrow W \rightarrow Q(i) \rightarrow Q([a-i]-1) \rightarrow 0,
\]
where $W$ is the quotient of $V_{r}^{([a-i])}/ V_{r}^{(i+1)}$
by $X_{r-i}^{([a-i])}/X_{r-i}^{(i+1)}$. To determine $W$
explicitly we consider the cases described in the theorem.
We first prove (i) and (iii).
\begin{enumerate}
\item[(i)] If $r_{0} \not \in \mathcal{J}(a,i)$, then by
\Cref{arbitrary singular i>r-i} (i), we have
$X_{r-i}^{([a-i])}/X_{r-i}^{(i+1)} = V_{r}^{([a-i])}/V_{r}^{(i+1)}$.
Thus $W =(0)$.
\item[(iii)] If $r_{0} \in \mathcal{I}(a,[a-i])$ and
$ r \not \equiv [a-i]+i $ mod $p$, then by
\Cref{arbitrary singular i>r-i} (ii),
we have
$X_{r-i}^{([a-i])}/X_{r-i}^{(i+1)} =(0)$. Thus
$W =V_{r}^{([a-i])}/V_{r}^{(i+1)}$.
\end{enumerate}
We now prove (ii). So we may assume $r_{0} \in
\mathcal{J}(a,i) \smallsetminus \mathcal{I}(a,[a-i])$.
By \eqref{interval J for i > a-i} and
\eqref{interval [a-i], for i> [a-i]}, we have
\begin{align*}
\mathcal{J}(a,i) \smallsetminus \mathcal{I}(a,[a-i])=
\lbrace [a-i]-1, [a-i], \ldots, i-1,i \rbrace.
\end{align*}
So the congruence class
of $[a-i]+i$ mod $p$ has a representative in $\mathcal{I}(a,[a-i])$ but not
in $\mathcal{J}(a,i)$.
Clearly $[a-i]-1 \leq r_{0} \leq i$.
One checks that $[a-i]-1 \leq [a-r_{0}]-1 \leq i$ as well.
We now prove (ii) according
to how the numbers $r_0$ and $[a-r_0]-1$ compare to each other.
\begin{enumerate}
\item[(a)]
If $ [a-r_{0}]-1 < r_{0}$, then
$ [a-i]+1 \leq r_{0} \leq i$
(because $[a-i] < i= [a-[a-i]]$ and $[a-i]-1 < i+1 = [a-([a-i]-1)]$).
So $[a-i] \leq [a-r_{0}]+1 \leq i$.
Since $2 \leq [a-i]+1 \leq r_{0} \leq i $ and
$r_{0}-1 \equiv [a-([a-r_{0}]+1)]$ mod $(p-1)$,
we have $r_{0} = [a-([a-r_{0}]+1)]+1$. Thus,
by \eqref{interval I},
we see that
$r_{0} \in \mathcal{I}(a,[a-r_{0}]+1)$.
Thus by \Cref{arbitrary singular i>r-i} (ii), we have
$X_{r-i}^{([a-r_{0}]+1)}/X_{r-i}^{(i+1)} =(0)$,
if $[a-r_{0}]+1 \neq r_{0} -1$. If $[a-r_{0}]+1 = r_{0}-1$,
then $r_{0} =[a-r_{0}]+2 \geq 3$ and
$[a-r_{0}]+2 > r_{0}-2 = [a-([a-r_{0}]+2)]$.
So by \Cref{arbitrary singular i>r-i} (ii),
we have
$X_{r-i}^{([a-r_{0}]+2)}/X_{r-i}^{(i+1)} =(0)$.
Further, by \eqref{interval J for i > a-i} and \Cref{singular i= [a-i]}, we have
$X_{r-[a-r_{0}]-1}^{([a-r_{0}]+1)}/X_{r-[a-r_{0}]-1}^{([a-r_{0}]+2)} =(0)$.
By the second part of \Cref{reduction}, we have
$X_{r-i}^{([a-r_{0}]+1)}/ X_{r-i}^{([a-r_{0}]+2)} =
X_{r-[a-r_{0}]-1}^{([a-r_{0}]+1)}/X_{r-[a-r_{0}]-1}^{([a-r_{0}]+2)} =(0)$.
So in either case, we have
$X_{r-i}^{([a-r_{0}]+1)}/X_{r-i}^{(i+1)} =(0)$.
Since $[a-r_{0}] \leq r_{0} =[a-[a-r_{0}]]$, by \Cref{singular i= [a-i]}
and \Cref{singular i>r-i},
we have $X_{r-r_{0}}^{([a-r_{0}])}/X_{r-r_{0}}^{([a-r_{0}]+1)}
\cong V_{p-1-[a-2r_{0}]} \otimes D^{a-r_{0}}$.
Since $[a-r_{0}] \leq r_{0} \leq i$, by the second part of \Cref{reduction}
(with $j$ there equal to $[a-r_{0}]$), we have
$X_{r-i}^{([a-r_{0}])}/X_{r-i}^{([a-r_{0}]+1)} =
X_{r-r_{0}}^{([a-r_{0}])}/X_{r-r_{0}}^{([a-r_{0}]+1)}
\cong V_{p-1-[a-2r_{0}]} \otimes D^{a-r_{0}}$. Thus by
\Cref{Breuil map} (with $m=[a-r_{0}]$) and the exact sequences
\eqref{commutative diagram arbitrary} and
\eqref{Q(i) exact sequence 2} (with $j=[a-r_{0}]$,
$n=[a-r_{0}]+1$, $m=i+1$), we see that
\[
0 \rightarrow V_{r}^{([a-r_{0}]+1)}/V_{r}^{(i+1)} \rightarrow
\frac{V_{r}^{([a-r_{0}])}}{X_{r-i}^{([a-r_{0}])} + V_{r}^{(i+1)}}
\rightarrow V_{[a-2r_{0}]} \otimes D^{r_{0}} \rightarrow 0.
\]
If $r_{0} =i$, then the middle term is $W$ and we are done.
If $r_{0}< i$, then $[a-r_{0}] \geq [a-i]+1$.
Thus $[a-r_{0}]-1 = [a-r_{0}-1]$ so
by \eqref{interval J for [a-i]} (with $i$ there equal to $r_{0}+1$), we have
$r_{0} \not \in \mathcal{J}(a,[a-r_{0}]-1)$, whence
by \Cref{arbitrary singular i>r-i} (i) we have
$X_{r-i}^{([a-i])}/ X_{r-i}^{([a-r_{0}])} =
V_{r}^{([a-i])}/ V_{r}^{([a-r_{0}])} $. Thus, by
the exact sequences
\eqref{commutative diagram arbitrary} and
\eqref{Q(i) exact sequence 2} (with $j=[a-i]$,
$n =[a-r_{0}]$, $m=i+1$),
we have
$W \cong \frac{V_{r}^{([a-r_{0}])}}{X_{r-i}^{([a-r_{0}])} + V_{r}^{(i+1)}}$,
whence (a) follows from the above exact sequence.
\item[(b)]
Assume $[a-r_{0}]=r_{0}+1$.
Since $[a-i] < i < i+1$, we have $r_{0} \neq i$, i.e.,
$[a-i]-1 \leq r_{0}<i$. So $[a-r_{0}] >[a-i]$.
If $r_{0} =0$, then $a=1$ and $[a-i] \leq 1$. So $[a-i]=[1-i]=1$,
which forces $i = 0$ or $p-1$, neither of which is possible.
Therefore, $[a-r_{0}] > r_{0} \geq 1$.
Note that
$r_{0}= [a-r_{0} -1] = [a-r_{0}] -1 < [a-r_{0}] = r_{0} +1
=[a-[a-r_{0}-1]] $.
Thus, by \eqref{interval J for [a-i]}, we
see that $r_{0} \not \in \mathcal{J}(a,[a-r_{0}-1])$.
Also, $ [a-i] \leq [a-r_{0}]-1 = r_{0} < i$
whence by \Cref{arbitrary singular i>r-i} (i),
we have $X_{r-i}^{([a-i])} / X_{r-i}^{([a-r_{0}])} =
V_{r}^{([a-i])}/V_{r}^{([a-r_{0}])}$.
Also, by \eqref{interval I for [a-i]} (with $i$ there equal to
$r_{0}$), we see that $r_{0} \in \mathcal{I}(a,[a- r_{0} ]) $
and
$ r \not \equiv [a-i]+i$ mod $p$, whence by
\Cref{arbitrary singular i>r-i} (ii), we have
$X_{r-i}^{([a-r_{0}])}/X_{r-i}^{(i+1)} = (0)$.
Thus, by the exact sequences
\eqref{commutative diagram arbitrary} and
\eqref{Q(i) exact sequence 2}
(with $j=[a-i]$, $n=[a-r_{0}]$, $m=i+1$) we see that
$W \cong V_{r}^{([a-r_{0}])}/V_{r}^{(i+1)}$. Since
$[a-r_{0}] = r_{0}+1$, we are done.
\item[(c)] If $ [a-r_{0}]> r_{0}+1 $, then $r_{0}
\neq i, i-1$ (because $[a-i] < i+1$ and $[a-(i-1)] = [a-i]+1
\leq (i-1)+1 $). So $[a-i]-1 \leq r_{0} < i-1$, hence
one checks that
$[a-i] < [a-(r_{0}+1)] \leq i $. Set $l =[a- (r_{0}+1)]$,
so $ [a-i]<l \leq i$. Since
$[a-l] =r_{0}+1 < i < p-1$, we have $[a-(l-1)] = [a-l]+1$
and so $r_{0} = [a-(l-1)] -2$. By \eqref{interval J}, we see that
$r_{0} \not \in \mathcal{J}(a,l-1)$.
As $[a-i]\leq l-1 \leq i-1$, it follows from
\Cref{arbitrary singular i>r-i} (i), that
$X_{r-i}^{([a-i])} / X_{r-i}^{(l)} =V_{r}^{([a-i])}/V_{r}^{(l)}$.
Since $[a-l] =r_{0}+1 \leq l$, by \Cref{singular i= [a-i]}
and \Cref{singular i>r-i}, we have $X_{r-l}^{(l)}/X_{r-l}^{(l+1)} =
V_{[a-2l]} \otimes D^{l}$, as $r_{0}=[a-l]-1$.
Since $[a-l] \leq l \leq i$, by the first part of \Cref{reduction}, we have
$X_{r-i}^{(l)}/X_{r-i}^{(l+1)} = X_{r-l}^{(l)}/X_{r-l}^{(l+1)}$.
Therefore, by the exact sequences
\eqref{commutative diagram arbitrary} and
\eqref{Q(i) exact sequence 2} (with $j=[a-i]$,
$n= l$ and $m=l+1$) and the exact sequence
\eqref{exact sequence Vr} (with $m=l$), we see that
$$
\frac{V_{r}^{([a-i])}}{X_{r-i}^{([a-i])} + V_{r}^{(l+1)}}
\cong V_{p-1-[a-2l]} \otimes D^{a-l}.
$$
If $l=i$, then we are done. If $l< i$, then
$[a-i] \leq l +1 =[a-r_{0}] \leq i$.
Noting $r_{0}< [a-r_{0}]$, by \eqref{interval I for [a-i]},
we see that
$r_{0} \in \mathcal{I}(a,[a-r_{0}])$. Thus, by
\Cref{arbitrary singular i>r-i} (ii), we have
$X_{r-i}^{(l+1)}/ X_{r-i}^{(i+1)} = (0)$.
Thus, by the exact sequences
\eqref{commutative diagram arbitrary} and
\eqref{Q(i) exact sequence 2} (with $j=[a-i]$,
$n=l+1$ and $m=i+1$),
and above we see that
\[
0 \rightarrow V_{r}^{(l+1)}/V_{r}^{(i+1)} \rightarrow W
\rightarrow V_{p-1-[a-2l]} \otimes D^{a-l} \rightarrow 0.
\]
Since $l =[a-r_{0}] -1$, this proves part (c). \qedhere
\end{enumerate} \end{proof}
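As an illustration of case (iii) (included for orientation only): let $p=7$, $a=2$ and $i=5$, so that $[a-i]=3$ and, by \eqref{interval [a-i], for i> [a-i]}, $\mathcal{I}(a,[a-i]) = \lbrace 0,1,6 \rbrace$. For $r=56$ we have $r \equiv 2 ~\mathrm{mod}~6$, $r \geq i(p+1)+p$, $r_{0}=0 \in \mathcal{I}(a,[a-i])$ and $r \not \equiv [a-i]+i \equiv 1 ~\mathrm{mod}~7$, so the theorem gives an exact sequence
\[
0 \rightarrow V_{56}^{(3)}/V_{56}^{(6)} \rightarrow Q(5) \rightarrow Q(2) \rightarrow 0.
\]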
\subsection{The case \texorpdfstring{$ \boldsymbol{ i = a, ~p-1}$}{}}
\label{Section i = a or p - 1}
Let $1 \leq a \leq p-1$ be such that $r \equiv a$ mod $(p-1)$. In this subsection, we determine the quotients $Q(a)$ and $Q(p-1)$. Recall that, for $1 \leq i \leq p$, we have defined \[
P(i) = \frac{V_{r}}{X_{r-(i-1)}+V_{r}^{(i+1)}}. \]
For $1 \leq i \leq p-1$, we have the following exact sequence
\[
0 \rightarrow \frac{X_{r-i}+V_{r}^{(i+1)}}{X_{r-(i-1)}+V_{r}^{(i+1)}}
\rightarrow P(i) \rightarrow Q(i) \rightarrow 0,
\]
where the first map is the inclusion and the last map is the quotient
map. Since $X_{r-(i-1)} \subseteq X_{r-i}$, one checks that
$X_{r-i} \cap (X_{r-(i-1)} + V_{r}^{(i+1)}) =
X_{r-(i-1)} + X_{r-i}^{(i+1)}$. Thus, by the second isomorphism theorem,
we have an exact sequence
\begin{align}\label{Q and P exact sequence}
0 \rightarrow
\frac{X_{r-i}}{X_{r-(i-1)} + X_{r-i}^{(i+1)}}
\rightarrow P(i) \rightarrow Q(i) \rightarrow 0.
\end{align}
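In more detail: applying the second isomorphism theorem with $A=X_{r-i}$ and $B=X_{r-(i-1)}+V_{r}^{(i+1)}$, and noting that $A+B=X_{r-i}+V_{r}^{(i+1)}$ because $X_{r-(i-1)} \subseteq X_{r-i}$, we get
\[
\frac{X_{r-i}+V_{r}^{(i+1)}}{X_{r-(i-1)}+V_{r}^{(i+1)}} = \frac{A+B}{B} \cong \frac{A}{A \cap B} = \frac{X_{r-i}}{X_{r-(i-1)}+X_{r-i}^{(i+1)}},
\]
which identifies the leftmost term of the previous exact sequence with the leftmost term of \eqref{Q and P exact sequence}.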
Thus to determine $Q(i)$ in terms of $P(i)$ it is enough to determine the leftmost module in the above exact sequence. Note that
we have an ascending chain
\begin{align}\label{ascending chain}
X_{r-(i-1)} + X_{r-i}^{(i+1)} \subseteq X_{r-(i-1)} + X_{r-i}^{(i)}
\subseteq \cdots \subseteq X_{r-(i-1)} + X_{r-i}^{(1)}
\subseteq X_{r-i}.
\end{align}
Recall
$Y_{i,j} =( X_{r-i}^{(j)}/X_{r-i}^{(j+1)})/( X_{r-(i-1)}^{(j)}/X_{r-(i-1)}^{(j+1)})$. By \eqref{Y i,j}, to determine the leftmost module in \eqref{Q and P exact sequence}, it is enough to determine the successive quotients $Y_{i,j}$, for all $0 \leq j \leq i$. Note that by \Cref{Structure X(1)} and the exact sequence
\eqref{exact sequence Vr} for $m=0$,
we have the successive quotient
$ Y_{a,0} \cong V_{p-1-a} \otimes D^{a} $.
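Note also that, since the successive quotients of the chain \eqref{ascending chain}, together with the top quotient $X_{r-i}/(X_{r-(i-1)}+X_{r-i}^{(1)}) \cong Y_{i,0}$, are exactly the modules $Y_{i,j}$, we have the dimension count
\[
\dim \left( \frac{X_{r-i}}{X_{r-(i-1)}+X_{r-i}^{(i+1)}} \right) = \sum_{j=0}^{i} \dim Y_{i,j}.
\]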
\subsubsection{The case \texorpdfstring{$\boldsymbol{i=a}$}{}} \label{section i = a}
We start with the case $i=a$, where $1 \leq a \leq p-1$. The first result asserts that most $Y_{i,j}$ vanish. \begin{lemma}\label{i=a smaller quotients}
Let $p \leq r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$,
and let
$r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
\begin{enumerate}[label= \emph{(\roman*)}]
\item If $a\leq p-1$, then $X_{r-a}^{(j)}/X_{r-a}^{(j+1)} =
X_{r-(a-1)}^{(j)}/X_{r-(a-1)}^{(j+1)}$, for all $1 \leq j < p-1$.
\item If $a=p-1$ and $r_{0} \neq p-2 $,
then $X_{r-a}^{(j)}/X_{r-a}^{(j+1)} =
X_{r-(a-1)}^{(j)}/X_{r-(a-1)}^{(j+1)}$, for all $1 \leq j \leq p-1$.
\end{enumerate} \end{lemma} \begin{proof}
Recall that
$G_{a,r}(X,Y)$ generates
$W_{a,r}$, the image of $V_{a} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma}(\chi_{2}^{a}) \overset{\psi_{i}}{\twoheadrightarrow}
X_{r-a}/X_{r-(a-1)}$ as a $\Gamma$-module. Assume that
$G_{a,r}(X,Y) \in X_{r-(a-1)}+X_{r-a}^{(n)}$, for some $n \geq 1$
(to be determined later). Thus,
$W_{a,r} \subseteq (X_{r-(a-1)}+X_{r-a}^{(n)}) /X_{r-(a-1)}$.
Note that we have the following diagram
\[
\begin{tikzcd}
0 \arrow[r, rightarrow] & V_{a} \arrow[r, rightarrow]
& \operatorname{ind}_{B}^{\Gamma} (\chi_{2}^{a}) \arrow[r, rightarrow]
\arrow[d, twoheadrightarrow, "\psi_{i}"] & V_{p-1-a} \otimes D^{a}
\arrow[r, rightarrow] & 0. \\
& & X_{r-a}/X_{r-(a-1)} & &
\end{tikzcd}
\]
Therefore, $ V_{p-1-a} \otimes D^{a} \twoheadrightarrow
X_{r-a}/(X_{r-(a-1)} +X_{r-a}^{(n)})$.
But, by \eqref{Y i,j}, we have
\[
V_{p-1-a} \otimes D^{a} \cong Y_{a,0}
\cong \frac{X_{r-a}}{X_{r-(a-1)}+X_{r-a}^{(1)}}.
\]
Thus, in the ascending chain of modules
\[
X_{r-(a-1)}+X_{r-a}^{(n)} \subseteq
X_{r-(a-1)}+X_{r-a}^{(n-1)} \subseteq
\cdots \subseteq X_{r-(a-1)}+X_{r-a}^{(1)}
\subseteq X_{r-a},
\]
we have $X_{r-(a-1)}+X_{r-a}^{(j)}
= X_{r-(a-1)}+X_{r-a}^{(j+1)}$, for all $1 \leq j < n$.
Thus, by \eqref{Y i,j}, we see that
$X_{r-a}^{(j)}/X_{r-a}^{(j+1)} =
X_{r-(a-1)}^{(j)}/X_{r-(a-1)}^{(j+1)}$,
for all $1 \leq j < n$.
By \eqref{G i,r} we have $G_{a,r}(X,Y)
\in X_{r-(a-1)}+X_{r-a}^{(p-1)} $. So if $a \leq p-1$,
we may take $n=p-1$ above and we obtain (i).
If $a=p-1$ and $r_{0} \neq a-1$,
then we can do better. Indeed, since $r-a \equiv r_{0}+1 \not\equiv p-1 \mod p$, Lucas' theorem gives $\binom{r-a}{p-1} \equiv 0 \mod p$, so by \Cref{quotient image}, we have
$G_{r-a}(X,Y) \in V_{r-a}^{(p)} \cap X_{r-a,\,r-a}$.
So $X^{a} G_{r-a}(X,Y) \in X_{r-a}^{(p)}$, by
\Cref{surjection1}, whence $G_{a,r}(X,Y) =
-X^{r}+ X^{a} G_{r-a}(X,Y) \in X_{r-(a-1)}+X_{r-a}^{(p)}$,
by \eqref{G i,r}. Taking $n=p$ above we obtain (ii). \end{proof}
\begin{lemma}\label{i=a, p-1 quotients}
Let $(p-1)(p+1)+p< r \equiv a ~\mathrm{mod}~ (p-1)$. Then $V_{a}
\hookrightarrow X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$
if and only if $r \equiv a-1 ~\mathrm{mod}~ p$. \end{lemma} \begin{proof}
The case $1 \leq a < p-1$ follows from \Cref{socle term singular}.
Assume $a=p-1$.
By \Cref{Structure X(1)},
we have $ X_{r-(p-1)} / X_{r-(p-1)}^{(1)} = V_{r}/V_{r}^{(1)}$
and $X_{r-(p-2)}/X_{r-(p-2)}^{(1)} \cong V_{p-1}$. Thus,
by \Cref{Breuil map} and the fact $X_{r-(p-2)} \subseteq
X_{r-(p-1)}$, we have
\[
V_{0} \cong
\frac{X_{r-(p-1)}/X_{r-(p-1)}^{(1)} }{X_{r-(p-2)}/X_{r-(p-2)}^{(1)}}
= Y_{p-1,0} \stackrel{\eqref{Y i,j}}{\cong}
\frac{X_{r-(p-2)}+X_{r-(p-1)}}{X_{r-(p-2)}+X_{r-(p-1)}^{(1)}}
\cong
\frac{X_{r-(p-1)}}{X_{r-(p-2)}+X_{r-(p-1)}^{(1)}}.
\]
Also by \Cref{Structure of induced} and \Cref{induced and successive},
we have
$
V_{0} \oplus V_{p-1} \cong \operatorname{ind}_{B}^{\Gamma}(1)
\overset {\psi_{p-1}}{\twoheadrightarrow}
X_{r-(p-1)}/X_{r-(p-2)}.
$
So
\begin{align}\label{a=p-1, i=p-1}
V_{p-1} \twoheadrightarrow \frac{X_{r-(p-2)}+X_{r-(p-1)}^{(1)}}{X_{r-(p-2)}}.
\end{align}
If $r \equiv p-2$ mod $p$, then by \Cref{quotient image},
we have $G_{r-(p-1)}(X,Y) \in X_{r-(p-1), \, r-(p-1)}^{(p-1)}$.
Thus, by \Cref{surjection1}, we see that
$X^{p-1}G_{r-(p-1)}(X,Y) \in X_{r-(p-1)}^{(p-1)}$.
Using \eqref{G r expression}, one checks that the coefficient of
$X^{r-(p-1)}Y^{p-1}$ in $X^{p-1}G_{r-(p-1)}(X,Y)$ equals
$-\binom{r-(p-1)}{p-1} \equiv -\binom{p-1}{p-1} \equiv -1 $ mod $p$,
by Lucas' theorem. Thus,
$X^{p-1}G_{r-(p-1)}(X,Y) \not \in X_{r-(p-1)}^{(p)}$,
by \Cref{divisibility1}, whence
$X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)} \neq (0)$.
Since $p-2 < p-1 = [p-1-(p-1)]$, by the third part of \Cref{reduction}
(with $i=p-2$ and $j= p-1$), we have
$X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)} \cong X_{r}^{(p-1)}/X_{r}^{(p)}$.
Further, by \Cref{singular quotient X_{r}} and $r \equiv p-2$ mod $p$,
we see that $X_{r}^{(p-1)}/X_{r}^{(p)} =(0)$.
Thus, by \eqref{Y i,j}, we have
\[
\frac{X_{r-(p-1)}^{(p-1)}+X_{r-(p-2)}}{X_{r-(p-1)}^{(p)}+X_{r-(p-2)}}
\cong Y_{p-1,p-1} = \frac{X_{r-(p-1)}^{(p-1)}}{X_{r-(p-1)}^{(p)}}
\neq (0).
\]
Combining this with \eqref{a=p-1, i=p-1}, we get
$X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)} \cong V_{p-1}$.
We now prove the converse. Assume $r \not \equiv p-2$ mod $p$.
Then by \Cref{quotient image}, we have
$X^{p-1}G_{r-(p-1)}(X,Y) \in X_{r-(p-1)}^{(p)}$, whence
$G_{p-1,r}(X,Y)= X^{p-1}G_{r-(p-1)}(X,Y) -
X^{r} \in X_{r-(p-1)}^{(p)}+X_{r-(p-2)}$, by \eqref{G i,r}.
Recall that $G_{p-1,r}(X,Y)$
generates $W_{p-1,r}$, the image of $V_{p-1} \hookrightarrow
\operatorname{ind}_{B}^{\Gamma}(1) \overset{\psi_{p-1}}
{\twoheadrightarrow} X_{r-(p-1)}/X_{r-(p-2)}$.
Thus, the image of $V_{p-1}$ in \eqref{a=p-1, i=p-1}
lies in $\frac{X_{r-(p-1)}^{(p)}+X_{r-(p-2)}}{ X_{r-(p-2)}}$. Thus
$X_{r-(p-1)}^{(p)}+X_{r-(p-2)} = X_{r-(p-1)}^{(p-1)}+X_{r-(p-2)}$, whence
\[
\frac{X_{r-(p-1)}^{(p-1)}}{X_{r-(p-1)}^{(p)}}
\stackrel{\eqref{Y i,j}}{=}
\frac{X_{r-(p-2)}^{(p-1)}}{X_{r-(p-2)}^{(p)}}
= \frac{X_{r}^{(p-1)}}{X_{r}^{(p)}} .
\]
Since $a=p-1$, by \Cref{singular quotient X_{r}}, we see that
$V_{p-1} \not \hookrightarrow X_{r}^{(p-1)}/X_{r}^{(p)}$. This completes the
proof. \end{proof}
\begin{proposition}\label{singular i=a}
Let $ r \equiv a ~ \mathrm{mod}~(p-1)$, with $1 \leq a \leq p-1$
and let $r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
For $j \in \lbrace a, p-1 \rbrace$ and $r \geq j(p+1)+p$, we have
\begin{align*}
\frac{X_{r-a}^{(j)}}{X_{r-a}^{(j+1)}} =
\begin{cases}
V_{p-1-a} \otimes D^{a}, & \mathrm{if}~j=a
~\mathrm{and}~r_{0}=a,a+1, \ldots, p-1, \\
V_{a}, & \mathrm{if}~j=p-1 ~\mathrm{and}~r_{0}=a-1, \\
(0), & \mathrm{otherwise}.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
Since $X_{r-(a-1)} \subseteq X_{r-(a-1)}+X_{r-a}^{(p)}
\subseteq X_{r-(a-1)}+X_{r-a}^{(p-1)} \subseteq
\cdots \subseteq X_{r-(a-1)}+X_{r-a}^{(1)} \subseteq X_{r-a}$,
we have
\begin{align*}
\dim \left( \frac{X_{r-(a-1)}+X_{r-a}^{(p-1)}}{X_{r-(a-1)}+X_{r-a}^{(p)}}
\right) +
\dim \left( \frac{X_{r-a}}{X_{r-(a-1)}+X_{r-a}^{(1)}}
\right)
\leq \dim \left( \frac{X_{r-a}}{X_{r-(a-1)}} \right)
\leq p+1.
\end{align*}
Since $Y_{a,0} \cong V_{p-1-a} \otimes D^{a}$, we have
$\dim Y_{a,p-1} \leq a+1$.
$\boldsymbol{\mathrm{Case} ~ a < p-1}$\textbf{:}
By \Cref{i=a smaller quotients} (i),
we have $X_{r-a}^{(a)}/X_{r-a}^{(a+1)} = X_{r-(a-1)}^{(a)}/X_{r-(a-1)}^{(a+1)}$.
Since $a-1 < a < [a-a] = p-1$, by the third part of \Cref{reduction}, we have
$X_{r-(a-1)}^{(a)}/X_{r-(a-1)}^{(a+1)} = X_{r}^{(a)}/X_{r}^{(a+1)}$.
So $X_{r-a}^{(a)}/X_{r-a}^{(a+1)}= X_{r}^{(a)}/X_{r}^{(a+1)}$ and
the assertion for $j=a$ follows from \Cref{singular quotient X_{r}}.
By \Cref{reduction corollary 2}, we have $X_{r-(a-1)}^{(p-1)}/X_{r-(a-1)}^{(p)}
= (0)$. So $Y_{a,p-1} = X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$.
Since the exact sequence \eqref{exact sequence Vr} doesn't split
for $m=p-1$, we have $X_{r-a}^{(p-1)}/X_{r-a}^{(p)} \neq (0)$
if and only if $V_{a} \hookrightarrow X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$.
Hence by \Cref{socle term singular}, we have
$X_{r-a}^{(p-1)}/X_{r-a}^{(p)} \neq (0)$ if and only if
$V_{a} \hookrightarrow X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$ if and only if
$r \equiv a-1 $ mod $p$. So if $r \equiv a-1$ mod $p$,
then $Y_{a,p-1} = X_{r-a}^{(p-1)}/X_{r-a}^{(p)} \cong V_{a}$
as $\dim Y_{a,p-1} \leq a+1$
and if $ r \not \equiv a-1$ mod $p$, then
$X_{r-a}^{(p-1)}/X_{r-a}^{(p)} =(0)$.
$\boldsymbol{\mathrm{Case} ~ a = p-1}$\textbf{:}
As earlier, one checks that $X_{r-(a-1)}^{(p-1)}/X_{r-(a-1)}^{(p)}
=X_{r}^{(p-1)}/X_{r}^{(p)}$.
If $r \not \equiv p-2$ mod $p$, then
$X_{r-a}^{(p-1)}/X_{r-a}^{(p)} = X_{r-(a-1)}^{(p-1)}/X_{r-(a-1)}^{(p)}$,
by Lemma~\ref{i=a smaller quotients}.
Thus, by \Cref{singular quotient X_{r}},
we have $X_{r-a}^{(p-1)}/X_{r-a}^{(p)} =V_{0}$
if $r \equiv p-1$ mod $p$ and zero if
$r \equiv 0,1, \ldots, p-3$ mod $p$.
Assume $r \equiv p-2$ mod $p$.
By Lemma~\ref{i=a, p-1 quotients}, we have
$V_{p-1} \hookrightarrow X_{r-a}^{(p-1)}/X_{r-a}^{(p)} $.
Also, by \Cref{singular quotient X_{r}}, we have
$X_{r-(a-1)}^{(p-1)}/X_{r-(a-1)}^{(p)}
=X_{r}^{(p-1)}/X_{r}^{(p)} =(0)$.
So $V_{p-1} \hookrightarrow X_{r-a}^{(p-1)}/X_{r-a}^{(p)}
= Y_{a,p-1}$. Since
$\dim Y_{a,p-1} \leq a+1 = p $, we get
$V_{p-1} = X_{r-a}^{(p-1)}/X_{r-a}^{(p)} $. \end{proof}
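For orientation, we record a small instance: for $p=5$, $a=2$ (so $r \equiv 2 ~\mathrm{mod}~4$) and $r \geq 29$, the proposition says that $X_{r-2}^{(2)}/X_{r-2}^{(3)} \cong V_{2} \otimes D^{2}$ if $r_{0} \in \lbrace 2,3,4 \rbrace$ and is zero otherwise, while $X_{r-2}^{(4)}/X_{r-2}^{(5)} \cong V_{2}$ if $r_{0}=1$ and is zero otherwise.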
\begin{theorem}\label{Structure of Q(i) if i = a}
Let $a(p+1)+p \leq r \equiv a ~ \mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
Then
\begin{enumerate}
\item[\emph{(i)}] If $a \neq p-1$ or $r \not \equiv p-2 ~ \mathrm{mod}~p$,
then
\[
0 \rightarrow V_{p-1-a} \otimes D^{a} \rightarrow P(a) \rightarrow
Q(a) \rightarrow 0.
\]
\item[\emph{(ii)}] If $a = p-1$ and $r \equiv p-2 ~ \mathrm{mod}~p$,
then
\[
0 \rightarrow V_{0} \oplus V_{p-1} \rightarrow P(a) \rightarrow
Q(a) \rightarrow 0.
\]
\end{enumerate} \end{theorem}
\begin{proof}
By \eqref{Q and P exact sequence}, we have an exact sequence
\[
0 \rightarrow
\frac{X_{r-a}}{X_{r-(a-1)} + X_{r-a}^{(a+1)}}
\rightarrow P(a) \rightarrow Q(a) \rightarrow 0.
\]
Note that we have an ascending chain of modules
\[
X_{r-(a-1)} + X_{r-a}^{(a+1)} \subseteq X_{r-(a-1)} + X_{r-a}^{(a)}
\subseteq \cdots \subseteq
X_{r-a}.
\]
By \eqref{Y i,j}, the successive quotients
are isomorphic to $Y_{a,j}$, for $0 \leq j \leq a$.
By Lemma~\ref{i=a smaller quotients}, we have
$Y_{a,j} = (0)$, for $1 \leq j <a$. So
\begin{align}\label{Q a proof exact sequence}
0 \rightarrow Y_{a,a} \rightarrow
\frac{X_{r-a}}{X_{r-(a-1)} + X_{r-a}^{(a+1)}} \rightarrow
Y_{a,0} \rightarrow 0.
\end{align}
Recall that $Y_{a,0} \cong V_{p-1-a} \otimes D^{a}$.
Thus it remains to determine $Y_{a,a}$.
By Lemma~\ref{i=a smaller quotients}, if $a \neq p-1$ or $r \not \equiv p-2$ mod $p$, then
$Y_{a,a} = (0)$.
If $a=p-1$ and $r \equiv p-2$ mod $p$, then
$X_{r-a}^{(p-1)}/X_{r-a}^{(p)} \cong V_{p-1}$, by \Cref{singular i=a}.
Also by the third part of \Cref{reduction} and
\Cref{singular quotient X_{r}}, we have
$X_{r-(a-1)}^{(p-1)}/X_{r-(a-1)}^{(p)} =
X_{r}^{(p-1)}/X_{r}^{(p)}=(0)$. So $Y_{a,a}
=Y_{p-1,p-1} \cong V_{p-1}$.
Since $V_{p-1}$ is projective, the exact sequence
\eqref{Q a proof exact sequence} splits
and this completes the proof of the theorem. \end{proof}
The above theorem completely determines the structure of $Q(a)$. Indeed, by the remarks at the beginning of Section~\ref{section Q}, the structure of $P(a)$ is completely determined by $Q(a-1)$ and $X_{r-(a-1)}^{(a)}/X_{r-(a-1)}^{(a+1)}$. The latter module is equal to $X_{r}^{(a)}/X_{r}^{(a+1)}$, by the third part of \Cref{reduction}, so is completely determined by \Cref{singular quotient X_{r}}. If $a=1$, then the structure of $Q(a-1) = Q(0)$ is known by \Cref{Structure Q(0)}. If $a \geq 2$, then $a-1 \geq [a-(a-1)] =1$, so the structure of $Q(a-1)$ can be determined using the results of \S \ref{section i = a-i}, \S \ref{section i > a-i}, again in terms of $Q(0)$ and $W$, see \Cref{Structure of Q i=[a-i]} and \Cref{Structure of Q(i) i>[a-i]}.
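For instance (to make the recursion explicit), for $a=1$, $p \geq 3$ and $r \geq 2p+1$ we have $a \neq p-1$, so part (i) of the theorem gives an exact sequence
\[
0 \rightarrow V_{p-2} \otimes D \rightarrow P(1) \rightarrow Q(1) \rightarrow 0,
\]
with $P(1)$ determined by $Q(0)$ and $X_{r}^{(1)}/X_{r}^{(2)}$ as explained above.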
\subsubsection{The case \texorpdfstring{$\boldsymbol{i=p-1}$}{}} \label{section i = p-1}
In this subsection we determine $Q(i)$ when $i=p-1$. Since the case $i=a=p-1$ was already treated in \S \ref{section i = a}, we may assume $a<p-1$.
\begin{lemma}\label{i=p-1 exceptional case}
Let $p \geq 3$,
$p \leq r \equiv a ~ \mathrm{mod}~(p-1)$, with $1 \leq a < p-1$.
If $r \equiv 0, 1, \ldots, a-2$
or
$p-1 ~\mathrm{mod}~p$,
then $X_{r-(p-1)} \subseteq X_{r-(p-2)}+V_{r}^{(p)}$.
Furthermore, $X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)}
= X_{r-a}^{(a)}/X_{r-a}^{(a+1)}$ and
$X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)}
= X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$. \end{lemma} \begin{proof}
Recall that
\begin{align*}
F_{p-1,r}(X,Y) &\stackrel{\eqref{F i,r definition}}{=} \sum_{\lambda \in \mathbb{F}_{p}}^{} \lambda^{[2p-2-a]}
X^{p-1}( \lambda X+Y)^{r-(p-1)} \\
& \stackrel{\eqref{sum fp}}{\equiv} -X^{r}
- \sum_{\substack{0 < l \leq r-(p-1)
\\ l \equiv 0 ~\mathrm{mod}~(p-1)}}^{} \binom{r-(p-1)}{l}
X^{r-l}Y^{l} \mod p
\end{align*}
generates the quotient
$X_{r-(p-1)}/X_{r-(p-2)}$ as a $\Gamma$-module
because $a \neq p-1$. So to prove the first statement of the lemma
it is enough to show
$F_{p-1,r}(X,Y) \in X_{r-(p-2)}+V_{r}^{(p)}$.
We claim that $F(X,Y):= F_{p-1,r}(X,Y) +X^{r} \in X_{r-(p-2)}+V_{r}^{(p)}$,
which proves the first statement of the lemma,
as $X^{r} \in X_{r-(p-2)}$, by \Cref{first row filtration}.
Since $r-(p-1) \equiv a \not \equiv p-1$ mod $(p-1)$,
we see that the coefficient of $X^{p-1}Y^{r-(p-1)}$ in $F(X,Y)$ is zero.
From the hypothesis we see that $r-(p-1) \not \equiv p-1$ mod $p$,
whence $\binom{r-(p-1)}{p-1} \equiv 0$ mod $p$ by Lucas' theorem.
Therefore $X^{p}$, $Y^{p} \mid F(X,Y)$.
So $F(X,Y)$ satisfies condition (i) of \Cref{divisibility1}
with $m=p$.
For $0 \leq m \leq p-1$, by \Cref{binomial sum}, we have
\begin{align*}
\sum_{\substack{0 < l \leq r-(p-1)
\\ l \equiv 0 ~\mathrm{mod}~(p-1)}}^{} \binom{r-(p-1)}{l}
\binom{l}{m} &= \sum_{\substack{0 \leq l \leq r-(p-1)
\\ l \equiv 0 ~\mathrm{mod}~(p-1)}}^{} \binom{r-(p-1)}{l}
\binom{l}{m} - \delta_{0,m} \\
& \equiv \binom{r-(p-1)}{m} \left[ \binom{[a-m]}{[p-1-m]}
+ \delta_{p-1,[p-1-m]} \right] - \delta_{0,m} ~\mathrm{mod}~ p.
\end{align*}
If $0 \leq m < a$, then $\binom{[a-m]}{[p-1-m]} = \binom{a-m}{p-1-m}
= 0$ as $1 \leq a<p-1$. So the above sum vanishes for $0 \leq m < a$.
By the assumption we have
$r-(p-1) \equiv 0$, $1, \ldots, a-1$ mod $p$, whence by Lucas'
theorem $\binom{r-(p-1)}{m} \equiv 0$ mod $p$, for $a \leq m \leq p-1$.
So the above sum vanishes, for all $0 \leq m \leq p-1$.
Thus by \Cref{divisibility1}, we have $F(X,Y) \in V_{r}^{(p)}$.
To prove the last assertion, note that
$X_{r-(p-1)} \subseteq X_{r-(p-2)}+V_{r}^{(p)}$ implies
$X_{r-(p-1)} = X_{r-(p-2)}+X_{r-(p-1)}^{(p)}$.
Recall that we have an ascending chain of modules
\[
X_{r-(p-2)}+X_{r-(p-1)}^{(p)} \subseteq
\cdots \subseteq X_{r-(p-2)}+X_{r-(p-1)}^{(a+1)}
\subseteq X_{r-(p-2)}+X_{r-(p-1)}^{(a)}
\subseteq \cdots \subseteq X_{r-(p-1)}.
\]
Since the extreme terms are equal, all the intermediate
terms are equal. Hence, by \eqref{Y i,j}, we have
$Y_{p-1,j} =(0)$ for $j=a$, $p-1$. Thus
$X_{r-(p-1)}^{(j)}/X_{r-(p-1)}^{(j+1)} = X_{r-(p-2)}^{(j)}/X_{r-(p-2)}^{(j+1)}$
for $j=a$, $p-1$.
Since $a \leq p-2 < p-1$, we have
$X_{r-(p-2)}^{(j)}/X_{r-(p-2)}^{(j+1)}= X_{r-a}^{(j)}/X_{r-a}^{(j+1)}$,
by the first (resp. second) part of \Cref{reduction}, for $j=a$
(resp. $p-1$).
Thus $X_{r-(p-1)}^{(j)}/X_{r-(p-1)}^{(j+1)}=
X_{r-a}^{(j)}/X_{r-a}^{(j+1)}$.
\end{proof}
\begin{lemma}\label{i=p-1, full}
Let $a(p+1)+p \leq r \equiv a ~ \mathrm{mod}~(p-1)$, with $1 \leq a < p-1$.
If $r \equiv a-1$, $a, \ldots, p-3$, $p-2~ \mathrm{mod}~p$, then
$X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)} = V_{r}^{(a)}/V_{r}^{(a+1)}$. \end{lemma}
\begin{proof}
First we consider the case $r \equiv a-1$ mod $p$. Let
$F(X,Y) := X^{p-1}Y^{r-(p-1)}- X^{r-a}Y^{a} \in X_{r-(p-1),\,r}$, by
\Cref{first row filtration}. Clearly $F(X,Y) \in V_{r}^{(1)}$, by
\Cref{divisibility1}.
Since $r \equiv a-1$
mod $p$, we see that $r-(p-1) \equiv a$ mod $p$. Thus,
by Lucas' theorem, we have
\[
\binom{r-(p-1)}{n} \equiv \binom{a}{n}, ~\forall ~0 \leq n \leq a.
\]
Hence, by \Cref{divisibility1}, we have $F(X,Y) \in V_{r}^{(a)}$. Also by
\Cref{breuil map quotient}, we have
$F(X,Y) \equiv \theta^{a}X^{r-a(p+1)-(p-1)}Y^{p-1} $
mod $V_{r}^{(a+1)}$, up to terms involving
$\theta^{a} X^{r-a(p+1)}$ and $\theta^{a} Y^{r-a(p+1)}$.
Clearly the image of $\theta^{a}X^{r-a(p+1)-(p-1)}Y^{p-1}$ under the quotient
map $V_{r}^{(a)}/V_{r}^{(a+1)} \rightarrow V_{p-1-[r-2a]}$ is non-zero by
\Cref{Breuil map}, and hence so is the image of $F(X,Y)$.
Since the sequence \eqref{exact sequence Vr} doesn't split for
$m=a$ and $a \neq p-1$,
we get $X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)} = V_{r}^{(a)}/V_{r}^{(a+1)}$.
So we may assume $r \equiv a, a+1, \ldots , p-2$ mod $p$.
If $a=1$, then $F(X,Y) \equiv (r+1) \theta X^{r-(p+1)-(p-1)}Y^{p-1}$
mod $V_{r}^{(a+1)}$ up to terms involving
$\theta X^{r-(p+1)}$ and $\theta Y^{r-(p+1)}$.
As above one checks that $F(X,Y)$ generates
$V_{r}^{(1)}/V_{r}^{(2)}$. So we may further assume $2 \leq a < p-1 $.
Let
\begin{align*}
A &= \left( \binom{r-n}{m} \binom{p-1+a-m-n}{a-m} \right)_{1 \leq m,n \leq a-1} \\
&= \left( \binom{r-1-n}{m+1} \binom{p-1+a-2-m-n}{a-1-m}
\right)_{0 \leq m,n \leq a-2}\\
&= \left( \frac{r-1-n}{m+1} \binom{r-2-n}{m} \binom{p-1+a-2-m-n}{a-1-m}
\right)_{0 \leq m,n \leq a-2} \\
& = D \left( \binom{r-2-n}{m} \binom{p-1+a-2-m-n}{a-1-m}
\right)_{0 \leq m,n \leq a-2} D',
\end{align*}
where $D=$
diag $(1 , 2^{-1}, \ldots, (a-1)^{-1})$ and
$D'=$ diag $(r-1, r-2, \ldots, r-(a-1))$ are diagonal matrices.
Applying \Cref{matrix det} (ii) (with $a$ there equal to $p-1+a-2$,
$j=a-1$ and $i=a-2$), we see that
$A$ is invertible. Choose $C_{1}, \ldots , C_{a-1}$ such that
\begin{align}\label{choice C , i=p-1}
\sum_{n=1}^{a-1} C_{n} \binom{r-n}{m} \binom{p-1+a-m-n}{a-m}
= \binom{r-(p-1)}{m} - \binom{a}{m}, ~ \forall ~ 1 \leq m \leq a-1.
\end{align}
Let
\begin{align*}
G(X,Y) & := F(X,Y) + \sum_{n=1}^{a-1} C_{n} \sum_{k \in \mathbb{F}_{p}^{\ast}}^{}
k^{n} X^{n} (k X+ Y)^{r-n} \\
& \stackrel{\eqref{sum fp}}{\equiv}
X^{p-1}Y^{r-(p-1)}- X^{r-a}Y^{a} - \sum_{n=1}^{a-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv a~\mathrm{mod}~(p-1)}}^{}
\binom{r-n}{l} X^{r-l} Y^{l} \mod p.
\end{align*}
By \Cref{Basis of X_r-i}, we see that $G(X,Y) \in X_{r-(p-1)}$.
We claim that $G(X,Y) \in V_{r}^{(a)}$ and generates
$V_{r}^{(a)}/V_{r}^{(a+1)}$.
Clearly $Y^{a} \mid G(X,Y)$ and the coefficient
of $Y^{r}$ in $G(X,Y)$ is zero. Since the smallest number
strictly less than $r$ congruent to $a$ mod $(p-1)$ is
$r-(p-1)$, we see that $X^{p-1} \mid G(X,Y)$.
So $G(X,Y)$ satisfies condition (i) of \Cref{divisibility1} for
$m=a$. By \Cref{binomial sum} with $m=0$, we have
\[
1-1-\sum_{n=1}^{a-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv a~\mathrm{mod}~(p-1)}}^{}
\binom{r-n}{l} \equiv - \sum_{n=1}^{a-1} C_{n} \binom{a-n}{a}
\equiv 0 ~\mathrm{mod}~p.
\]
For $1 \leq m \leq a-1$,
again by \Cref{binomial sum}, we have
\[
\sum_{n=1}^{a-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv a~\mathrm{mod}~(p-1)}}^{}
\binom{r-n}{l} \binom{l}{m}\equiv \sum_{n=1}^{a-1} C_{n}
\binom{r-n}{m} \binom{[a-m-n]}{a-m} \mod p.
\]
If $m+n<a$, then by Lucas' theorem we see that
$\binom{[a-m-n]}{a-m} \equiv \binom{a-m-n}{a-m} \equiv 0 $ mod $p$
and $\binom{p-1+a-m-n}{a-m} \equiv \binom{a-m-n-1}{a-m} \equiv 0$
mod $p$. If $m+n \geq a$, then $[a-m-n] =p-1+a-m-n$. Therefore
$\binom{[a-m-n]}{a-m} \equiv \binom{p-1+a-m-n}{a-m}$ mod $p$.
Thus
\begin{align*}
\sum_{n=1}^{a-1} C_{n} \binom{r-n}{m} \binom{[a-m-n]}{a-m}
& \equiv \sum_{n=1}^{a-1} C_{n} \binom{r-n}{m}
\binom{p-1+a-m-n}{a-m} \mod p\\
& \stackrel{ \eqref{choice C , i=p-1}}{\equiv}
\binom{r-(p-1)}{m} - \binom{a}{m} \mod p.
\end{align*}
Thus, by \Cref{divisibility1}, we see that $G(X,Y) \in V_{r}^{(a)}$. Since
$a \neq p-1$, the sequence \eqref{exact sequence Vr} doesn't split.
To show $X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)} = V_{r}^{(a)}/V_{r}^{(a+1)}$,
it is enough to show the image of $G(X,Y)$ under the rightmost
map in the exact sequence \eqref{exact sequence Vr} is non-zero.
By \Cref{binomial sum}, we have
\begin{align*}
-\sum_{n=1}^{a-1} C_{n}
\sum_{\substack{0 \leq l \leq r-n \\ l \equiv a~\mathrm{mod}~(p-1)}}^{}
\binom{r-n}{l} \binom{l}{a} - \binom{a}{a}
& \equiv -\sum_{n=1}^{a-1} C_{n} \binom{r-n}{a} -1 \mod p \\
& = \mathrm{the ~ coefficient~ of} ~ X^{r-a}Y^{a}~ \mathrm{in}~ G(X,Y).
\end{align*}
Since $X^{p-1} \mid G(X,Y)$, by \Cref{breuil map quotient},
we see that
\[
G(X,Y) \equiv \binom{r-(p-1)}{a} \theta^{a} X^{r-a(p+1)-(p-1)}Y^{p-1}
\mod V_{r}^{(a+1)},
\]
up to terms involving $\theta^{a}X^{r-a(p+1)}$ and
$\theta^{a} Y^{r-a(p+1)}$.
Since $r \equiv a, a+1, \ldots, p-2$ mod $p$, by Lucas' theorem,
we have $\binom{r-(p-1)}{a} \not \equiv 0$ mod $p$.
Thus, by \Cref{Breuil map}, the image of $G(X,Y)$
under the rightmost map in the sequence \eqref{exact sequence Vr} is non-zero.
This proves that
$G(X,Y)$ generates $V_{r}^{(a)}/V_{r}^{(a+1)}$ as a $\Gamma$-module
and this finishes the proof. \end{proof}
\begin{proposition}\label{singular i=p-1}
Let $p \geq 3$, $r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a < p-1$
and $r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
Let $j \in \lbrace a, p-1 \rbrace$. If $r \geq j(p+1)+p$, then
\begin{align*}
\frac{X_{r-(p-1)}^{(j)}}{X_{r-(p-1)}^{(j+1)}} =
\begin{cases}
V_{r}^{(a)}/V_{r}^{(a+1)}, & \mathrm{if}~
j=a~ \mathrm{and} ~r \equiv a-1, \ldots,p-2
~\mathrm{mod}~p \\
X_{r-a}^{(j)}/X_{r-a}^{(j+1)}, & \mathrm{otherwise}.
\end{cases}
\end{align*} \end{proposition} \begin{proof}
Since $a \leq p-2 < p-1$, by the first and
second parts of \Cref{reduction} with $j=a$ and $j=p-1$, we have
$X_{r-(p-2)}^{(a)}/X_{r-(p-2)}^{(a+1)} =
X_{r-a}^{(a)}/X_{r-a}^{(a+1)}$ and
$X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)}=
X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$
respectively. We consider the cases $j=a$ and $j=p-1$
separately.
$\boldsymbol{\mathrm{Case} ~ j = a}$\textbf{:}
If $ r \equiv a-1, \ldots,p-2 $ mod $p$, then
$X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)} = V_{r}^{(a)}/V_{r}^{(a+1)}$,
by Lemma~\ref{i=p-1, full}. If
$ r \not \equiv a-1, \ldots,p-2 $ mod $p$, then
$X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)} =
X_{r-a}^{(a)}/X_{r-a}^{(a+1)}$, by \Cref{i=p-1 exceptional case}.
$\boldsymbol{\mathrm{Case} ~ j = p-1}$\textbf{:}
So $j \neq a$.
If $r \not \equiv a-1, \ldots, p-2$ mod $p$, then
again by \Cref{i=p-1 exceptional case}, we have
$X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)}
= X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$ and we are done. So assume
$r \equiv a-1, \ldots, p-2$ mod $p$. Then by the
$j=a$ case, we have $X_{r-(p-1)}^{(a)}/X_{r-(p-1)}^{(a+1)}
= V_{r}^{(a)}/V_{r}^{(a+1)}$. Since
$X_{r-(p-2)}^{(a)}/X_{r-(p-2)}^{(a+1)} =
X_{r-a}^{(a)}/X_{r-a}^{(a+1)}$,
so $Y_{p-1,a} = (V_{r}^{(a)}/V_{r}^{(a+1)})/(X_{r-a}^{(a)}/X_{r-a}^{(a+1)}) $.
Thus, by \Cref{singular i=a}
and the exact sequence \eqref{exact sequence Vr}, we have
\begin{align}\label{Y p-1 a}
Y_{p-1,a} =
\begin{cases}
V_{r}^{(a)}/V_{r}^{(a+1)}, & \mathrm{if}~
r \equiv a-1 ~\mathrm{mod}~p, \\
V_{a}, & \mathrm{if}~ r \equiv a, a+1, \ldots,p-2
~\mathrm{mod}~p.
\end{cases}
\end{align}
By \eqref{Y i,j}
and the chain \eqref{ascending chain}, we see that
$\dim Y_{p-1,p-1} + \dim Y_{p-1,a} \leq
\dim X_{r-(p-1)}/X_{r-(p-2)} $. But by
\Cref{induced and successive}, we have
$ \dim X_{r-(p-1)}/X_{r-(p-2)} \leq p+1$. If $r \equiv a-1$ mod $p$,
then it follows
from \eqref{Y p-1 a} that $Y_{p-1,p-1} = (0)$,
so $X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)} =
X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)} =
X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$ and again we are done.
If $r \equiv a, \ldots, p-2$ mod $p$,
then by \Cref{singular i=a}, we have
$X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)}=X_{r-a}^{(p-1)}/X_{r-a}^{(p)}
=(0)$, so $Y_{p-1,p-1} =X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)}$.
Since the exact sequence \eqref{exact sequence Vr}
doesn't split for $m=p-1$ and $a< p-1$, we see that
$Y_{p-1,p-1} \neq (0)$ if and only if $V_{a}
\hookrightarrow Y_{p-1,p-1}$. By \Cref{induced and successive} and
\Cref{Common JH factor} (i), we see that $X_{r-(p-1)}/X_{r-(p-2)}$
doesn't have repeated JH factor. This forces $Y_{p-1,p-1} = (0)$,
as otherwise the distinct subquotients $Y_{p-1,a}$ and $Y_{p-1,p-1} $ of
$X_{r-(p-1)}/X_{r-(p-2)}$ would both contain $V_{a}$, by \eqref{Y p-1 a}.
\end{proof} The proposition above in conjunction with \Cref{singular i=a} determines the quotients stated explicitly. We finally determine $Q(p-1)$. \begin{theorem}\label{Structure of Q(i) if i = p - 1}
Let $(p-1)(p+1)+p \leq r \equiv a ~ \mathrm{mod}~(p-1)$
with $1 \leq a < p-1$. Then
\begin{enumerate}
\item[\emph{(i)}] If $r \equiv 0,1, \ldots, a-2$
or $ p-1 ~\mathrm{mod}~p$, then $Q(p-1) \cong P(p-1)$.
\item[\emph{(ii)}] If $r \equiv a-1~ \mathrm{mod}~p$,
then
\[
0 \rightarrow \frac{V_{r}^{(a)}}{V_{r}^{(a+1)}}
\rightarrow P(p-1) \rightarrow Q(p-1) \rightarrow 0.
\]
\item[\emph{(iii)}] If $r \equiv a, \ldots,p-2~ \mathrm{mod}~p$,
then
\[
0 \rightarrow V_{a} \rightarrow P(p-1) \rightarrow
Q(p-1) \rightarrow 0.
\]
\end{enumerate} \end{theorem} \begin{proof}
First we consider the case $r \equiv 0,1, \ldots, a-2 $ mod $p$
or $r \equiv p-1$ mod $p$. Then by the first part of
\Cref{i=p-1 exceptional case},
we have $X_{r-(p-1)}+V_{r}^{(p)}=X_{r-(p-2)}+V_{r}^{(p)}$.
Thus $Q(p-1) \cong P(p-1)$.
By the exact sequence \eqref{Q and P exact sequence}
with $i =p-1$, we have
\[
0 \rightarrow \frac{X_{r-(p-1)}}{X_{r-(p-2)} + X_{r-(p-1)}^{(p)}}
\rightarrow P(p-1) \rightarrow Q(p-1) \rightarrow 0.
\]
Observe that we have an ascending chain of modules
\[
X_{r-(p-2)} + X_{r-(p-1)}^{(p)} \subseteq
X_{r-(p-2)} + X_{r-(p-1)}^{(p-1)}
\subseteq \cdots \subseteq
X_{r-(p-2)} + X_{r-(p-1)}^{(1)}
\subseteq X_{r-(p-1)}.
\]
Note that by \eqref{Y i,j}, the successive quotients
are isomorphic to $Y_{p-1,j}$, for $0 \leq j \leq p-1$.
If $j \neq a$, $p-1$, then by \Cref{reduction corollary} (i),
we have $Y_{p-1,j} =(0)$.
By \Cref{singular i=p-1}, we have
$X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)} \subseteq
X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)} \subseteq
X_{r-a}^{(p-1)}/X_{r-a}^{(p)} \subseteq
X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)}$. So, $Y_{p-1,p-1}=(0)$.
By \eqref{Y p-1 a},
$Y_{p-1,a} = V_{r}^{(a)}/V_{r}^{(a+1)}$ if $r \equiv
a-1 $ mod $p$ and is equal to $V_{a}$ if $r \equiv a, \ldots, p-2 $
mod $p$. \end{proof}
The above theorem determines the structure of $Q(p-1)$ in terms of $P(p-1)$ which in principle is determined by $Q(p-2)$, by the remarks made at the beginning of Section~\ref{section Q}. If $a=p-2$, then $Q(p-2)=Q(a)$. Otherwise $[a-(p-2)] = a+1 \leq p-2 $, so $Q(p-2)$ is in turn determined by $Q(a)$ by \Cref{Structure of Q i=[a-i]} and \Cref{Structure of Q(i) i>[a-i]}. Thus, the structure of $Q(p-1)$ can in principle be obtained from $Q(a)$, which was determined in the previous subsection.
\subsection{Irreducibility of \texorpdfstring{$Q(i)$}{} }
It is possible to completely determine the JH factors of $Q(i)$ in all cases using the results of Sections~\ref{Section i not a nor p - 1} and \ref{Section i = a or p - 1}. As an example, in this section we determine when $Q(i)$ is irreducible, for $1 \leq i \leq p-1$.
\begin{lemma}
Let $p \geq 3$, $r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
If $0 \leq i < p-1$ and $r \geq i(p+1)+p$, then $Q(i) \neq (0)$. \end{lemma} \begin{proof}
Suppose $Q(i)=(0)$, towards a contradiction.
Then, by the exact sequence \eqref{Q(i) exact sequence},
we have $X_{r-i}/X_{r-i}^{(i+1)} = V_{r}/V_{r}^{(i+1)}$ and so
$X_{r-i}^{(j)}/X_{r-i}^{(j+1)} = V_{r}^{(j)}/V_{r}^{(j+1)}$, for all
$0 \leq j \leq i$. If $1 \leq i < a$, then by \Cref{Structure X(1)}, we have
$V_{a} \cong X_{r-i}/X_{r-i}^{(1)} \subsetneq V_{r}/V_{r}^{(1)} $,
a contradiction.
If $a \leq i < p-1$, then by the first part of \Cref{reduction}
(applied with $i$ there equal to $i$ and $j=a$), we see that
$X_{r-i}^{(a)}/X_{r-i}^{(a+1)}
= X_{r-a}^{(a)}/X_{r-a}^{(a+1)} $. Hence,
by \Cref{singular i=a} (applied with $j=a$), we have $X_{r-i}^{(a)}/X_{r-i}^{(a+1)}
\subsetneq V^{(a)}_{r}/V_{r}^{(a+1)} $, which is again a contradiction. \end{proof}
\begin{lemma}
Let $p \geq 3$, $r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
Let $1 \leq i < p-2$ with $i \neq a-1$, $a$.
If $i(p+1)+p \leq r$, then $Q(i)$ is reducible. \end{lemma} \begin{proof}
If $i< [a-i]$, then by \Cref{Structure of Q(i) if i<[a-i]},
we see that $Q(i)$ is reducible as both $W$ there and $Q(i-1)$ do not vanish.
If $i \geq [a-i]$, then $[a-i]-1 < i+1 = [a -([a-i]-1)]$.
Since $i \neq a-1$, $p-1$, we have $[a-i]-1 \neq 0$. Also
$[a-i]-1 \neq a$ as $i \neq p-2$. Thus $Q([a-i]-1)$ is reducible by what we
just proved. Hence, so is $Q(i)$ by Theorems~\ref{Structure of Q i=[a-i]}
and \ref{Structure of Q(i) i>[a-i]}. \end{proof}
We next consider the cases $i = a-1$ and $i = a$. \begin{lemma}\label{irred Q(a), Q(a-1)}
Let $p \geq 3$, $r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
Let $r \equiv r_{0} ~\mathrm{mod}~p$ with $0 \leq r_{0} \leq p-1$.
Let $\mathcal{J}(a,a-1) = \{ 0,1, \ldots, a-1 \}$.
\begin{enumerate}
\item[\em{(i)}] If $a >1$ and $r \geq (a-1)(p+1)+p$, then
$Q(a-1)$ is irreducible if and only if
$r_0 \not \in \mathcal{J}(a,a-1)$.
Furthermore, in this case $Q(a-1) \cong
V_{p-1-a} \otimes D^{a}$.
\item[\em{(ii)}] If $r \geq a(p+1)+p$, then $Q(a)$ is
irreducible if and only if
$r_0 \not \in \mathcal{J}(a,a-1)$.
Furthermore, in this case $Q(a) \cong
V_{a}$.
\end{enumerate} \end{lemma} \begin{proof}
Note that
\begin{enumerate}
\item[(i)] If $a \geq 2$, then $a-1 \geq 1 =[a-(a-1)]$,
so assertion (i) follows from \Cref{Structure of Q i=[a-i]}
(resp. \Cref{Structure of Q(i) i>[a-i]}) when $a=2$
(resp. $a \geq 3$) as $W = 0$ if and only if $r_0 \not\in \mathcal{J}(a,a-1)$,
and $Q(0) \cong V_{p-1-a} \otimes D^{a}$.
\item[(ii)] By \Cref{Structure of Q(i) if i = a}, we see that
the irreducibility of $Q(a)$ depends on $P(a)$. So we determine $P(a)$.
By the exact sequence \eqref{Q i-1 and P i}, we see that
\[
0 \rightarrow W'' \rightarrow P(a) \rightarrow Q(a-1) \rightarrow 0,
\]
where $W''$ is the cokernel of the map
$X_{r-(a-1)}^{(a)}/X_{r-(a-1)}^{(a+1)} \hookrightarrow V_{r}^{(a)}/V_{r}^{(a+1)}$.
By the third part of \Cref{reduction} (applied with $i= a-1$ and $j=a$),
we have $X_{r-(a-1)}^{(a)}/X_{r-(a-1)}^{(a+1)} = X_{r}^{(a)}/X_{r}^{(a+1)} $.
Thus, by \Cref{singular quotient X_{r}}, we see that
$W'' = V_{a}$ if $r_{0} \not \in \mathcal{J}(a, a-1)$
and $W'' = V_{r}^{(a)}/V_{r}^{(a+1)}$ otherwise. Combining this with
part (i), we see that $P(a)$ has two (resp. at least four) JH factors
if $r_{0} \not \in \mathcal{J}(a, a-1)$ (resp. $r_{0} \in \mathcal{J}(a, a-1)$).
Now the first assertion of (ii) follows from \Cref{Structure of Q(i) if i = a}.
If $r_{0} \not \in \mathcal{J}(a,a-1)$,
then $W'' = V_{a}$ and $Q(a-1) \cong V_{p-1-a} \otimes D^a$, so
the second assertion of (ii) follows from \Cref{Structure of Q(i) if i = a} (i). \qedhere
\end{enumerate} \end{proof}
We next consider the remaining cases of $Q(i)$, for $i = p-2$. \begin{lemma}
Let $p \geq 3$, $r \geq (p-1)(p+1)+p$,
$r \equiv a~\mathrm{mod}~(p-1)$ with $1 \leq a \leq p-1$.
If $p-2 \neq a$, $a-1$, then $Q(p-2)$ is reducible. \end{lemma} \begin{proof}
By hypothesis we have $[a-(p-2)] = a+1 \leq p-2$.
If $r_{0} \in \mathcal{J}(a,p-2)$, then by
Theorems~\ref{Structure of Q i=[a-i]} and \ref{Structure of Q(i) i>[a-i]},
we see that $Q(p-2)$ is reducible since
$W$, $Q(a) \neq 0$.
If $r_{0} \not \in \mathcal{J}(a,p-2)$, then again
by Theorems~\ref{Structure of Q i=[a-i]} and
\ref{Structure of Q(i) i>[a-i]},
we have $Q(p-2) \cong Q(a)$.
Since $a < [a-(p-2)] = a+1 \leq p-2$, by the fourth part of \eqref{interval J}, we have
$\mathcal{J}(a,p-2) =\{ a-1\}^{c}$. So $r_{0} \not \in \mathcal{J}(a,p-2)$
implies $r_{0} =a-1$, so $r_0 \in \mathcal{J}(a,a-1)$.
But by Lemma~\ref{irred Q(a), Q(a-1)} (ii), we have
$Q(a)$ is not irreducible and this finishes the proof of the lemma. \end{proof}
Since $V_{r} \supset V_{r}^{(1)} \supset \cdots \supset V_{r}^{(i+1)} \supset \cdots $ is a descending chain of $\Gamma$-modules, we have the JH factors of $ V_{r}/V_{r}^{(i+1)} = \bigcup\limits_{j=0}^{i}$ JH factors of $V_{r}^{(j)}/V_{r}^{(j+1)}$. Similarly, the JH factors of $ X_{r-i}/X_{r-i}^{(i+1)} = \bigcup\limits_{j=0}^{i}$ JH factors of $ X_{r-i}^{(j)}/X_{r-i}^{(j+1)} $. Thus, by the exact sequence \eqref{Q(i) exact sequence}, we see that \begin{align}\label{irred. criteria Q(i)}
Q(i) ~ \mathrm{is ~ irreducible} \Longleftrightarrow
\sum_{j=0}^{i} \lvert \{ \mathrm{JH ~ factors ~ of}~
V_{r}^{(j)}/V_{r}^{(j+1)} \} \rvert -
\lvert \{ \mathrm{JH ~ factors ~ of}~
X_{r-i}^{(j)}/X_{r-i}^{(j+1)} \} \rvert
\leq 1. \end{align}
Finally, we consider the case $Q(i)$, for $i = p-1$. \begin{lemma}
Let $p \geq 3$ and $(p-1)(p+1)+p \leq r \equiv a~\mathrm{mod}~(p-1)$
with $1 \leq a < p-1$. Then $Q(p-1)$ is irreducible
if and only if $r \equiv 1 ~\mathrm{mod}~(p-1)$ and $p \mid r$.
Furthermore, in this case, $Q(p-1) \cong V_{p-2} \otimes D$. \end{lemma} \begin{proof}
We consider the case $r \equiv a-1$ mod $p$ and
$r \not \equiv a-1$ mod $p$ separately.
By \Cref{singular i=p-1}, we have
$X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)} = X_{r-a}^{(p-1)}/X_{r-a}^{(p)}$.
If $r \not \equiv a-1$ mod $p$, then $X_{r-a}^{(p-1)}/X_{r-a}^{(p)}
=(0)$, by \Cref{singular i=a}, so $Q(p-1)$ is reducible by
\eqref{irred. criteria Q(i)}.
So assume $r \equiv a-1$ mod $p$. Then
$X_{r-a}^{(p-1)}/X_{r-a}^{(p)} = V_{a}$, by \Cref{singular i=a}.
By the second part of \Cref{reduction} (with $j=1$ and $i=p-1$), we see that
$X_{r-(p-1)}^{(1)}/X_{r-(p-1)}^{(2)} = X_{r-[a-1]}^{(1)}/X_{r-[a-1]}^{(2)}$.
If $a \geq 2$, then $ V_{a-2} \otimes D \cong X_{r-(a-1)}^{(1)}/X_{r-(a-1)}^{(2)}
\subsetneq V_{r}^{(1)}/V_{r}^{(2)}$,
by \Cref{singular i= [a-i]} with $i = j = 1$ when $a = 2$ (resp. \Cref{singular i>r-i}
with $i = a-1$ and
$j = 1$ when $a>2$). So $Q(p-1)$ is reducible again by \eqref{irred. criteria Q(i)}.
We finally consider the case $a=1$.
By \Cref{Structure of Q(i) if i = p - 1} (ii), we have
\[
0 \rightarrow V_{r}^{(1)}/V_{r}^{(2)} \rightarrow P(p-1) \rightarrow
Q(p-1) \rightarrow 0.
\]
Since
$X_{r-1}^{(p-1)}/X_{r-1}^{(p)} \subseteq
X_{r-(p-2)}^{(p-1)}/X_{r-(p-2)}^{(p)} \subseteq
X_{r-(p-1)}^{(p-1)}/X_{r-(p-1)}^{(p)}= X_{r-1}^{(p-1)}/X_{r-1}^{(p)} \cong V_{1}$,
by the exact sequence \eqref{Q i-1 and P i}
and the exact sequence \eqref{exact sequence Vr} (with $m=p-1$), we have
\[
0 \rightarrow V_{p-2} \otimes D \rightarrow P(p-1) \rightarrow Q(p-2)
\rightarrow 0.
\]
If $p=3$, then $Q(p-2)= Q(1)$. If $p >3$, then $2 = a+1 =[a-(p-2)]
< p-2$, so by \Cref{Structure of Q(i) i>[a-i]}, we see that $Q(p-2)= Q(1)$
as $W = 0$ since $a-1 =r_{0} \not \in \mathcal{J}(a,p-2) = \{ a-1\}^{c}$.
By \cite[Proposition 3.12 (iii)]{BG15}
and the exact sequence \eqref{exact sequence Vr} (with $m=1$), we see that
$Q(1) \cong V_{r}^{(1)}/V_{r}^{(2)}$. Comparing the JH factors in the
two exact sequences above, we obtain
$Q(p-1) \cong V_{p-2} \otimes D$. \end{proof}
Collecting the lemmas above, we obtain the following theorem on the irreducibility of the $Q(i)$.
\begin{theorem}\label{irreducible Q(i)}
Let $p \geq 3$, $1 \leq i \leq p-1$, $r \geq i(p+1)+p$, $r \equiv a \mod (p-1)$ with $a \in \{1, 2, \ldots, p-1\}$ and $r_0$ be
the constant term in the base $p$-expansion of $r$.
Then $Q(i)$ is irreducible if and only if
\begin{itemize}
\item $i = a-1$ or $a$, and $r_0 \not\in \mathcal{J}(a,a-1) = \{0,1, \ldots, a-2, a-1\}$, or,
\item $i = p-1$, $a = 1$ and $r_0 = 0$.
\end{itemize} \end{theorem}
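To illustrate the theorem, take $p=5$ and $r \equiv 3 ~\mathrm{mod}~4$, so that $a=3$ and $\mathcal{J}(3,2)=\{0,1,2\}$. For $r \geq 29$ (so that $r \geq i(p+1)+p$ holds for every $1 \leq i \leq 4$), the quotients $Q(2)$ and $Q(3)$ are irreducible exactly when the last base-$5$ digit $r_{0}$ of $r$ equals $3$ or $4$, while $Q(1)$ and $Q(4)$ are reducible for all such $r$.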
\section{Structure of \texorpdfstring{$X_{r-p, \,r}$}{}}
In this final section, we determine the structure of $X_{r-p,\,r}$, the monomial submodule generated by $X^{r-p}Y^p$. We exhibit an isomorphism between $X_{r-p,\,r}$ and
$X_{s-1,\,s}$, for some $s$ depending on $r$, allowing us to use the results of \cite[\S 2, \S 3]{BG15} to determine the structure of $X_{r-p,\,r}$. The proofs in this section use techniques very similar to those used in \cite[\S 2, \S 3]{BG15}.
First observe that $X^{r}= \begin{psmallmatrix} 1 & 1 \\ 0 & 1
\end{psmallmatrix} X^{r-p} Y^{p} - X^{r-p}Y^{p}$. Hence $X_{r,\,r}
\subseteq X_{r-p,\,r}$ and $X_{r-p,\,r}$ is an $M$-module.
The next lemma shows that this inclusion is strict if $p\nmid r$.
\begin{lemma}\label{BG Lemma 4.1}
For any $p\geq 2$, if $p \nmid r$ and $r>p$, then
$X_{r,\,r} \subsetneq X_{r-p,\,r}$. \end{lemma}
\begin{proof}
Suppose not.
Then $X_{r-p,\,r} = X_{r,\,r}$. By \Cref{Basis of X_r-i},
we have
$\lbrace (kX+Y)^{r},
X^{r} : k \in \mathbb{F}_{p} \rbrace$
is a spanning set for $X_{r,\,r}$ over $\mathbb{F}_{p}$. Let
\begin{align*}
X^{p}Y^{r-p} = AX^{r}+ \sum\limits_{k=0}^{p-1} c_{k} (kX+Y)^{r}.
\end{align*}
Comparing the coefficients of $XY^{r-1}$ and $X^{p}Y^{r-p}$ on both sides,
we get
\begin{align*}
r \sum\limits_{k =0}^{p-1} c_{k} k = 0 \quad \mathrm{and} \quad
\binom{r}{p} \sum\limits_{k=0}^{p-1} c_{k} k^{p}=1.
\end{align*}
Since $p\nmid r$, we get $\sum\limits_{k=0}^{p-1}c_{k} k=0$. Hence
$ 1= \binom{r}{p} \sum\limits_{k =0}^{p-1}
c_{k} k^{p} \equiv \binom{r}{p} \sum\limits_{k=0}^{p-1} c_{k} k \equiv 0 $
mod $p$. This is a contradiction. Therefore $X_{r,\,r} \subsetneq X_{r-p,\,r}$. \end{proof}
\begin{lemma}\label{surjection}
There is an $M$-linear surjection $\phi_{p}: X_{r-p,\,r-p}
\otimes V_{1} \rightarrow X_{r-p,\,r}$, given by
$\phi_{p}(u \otimes v) = uv^{p}$, for all $u \in X_{r-p,\, r-p}$ and $v \in V_{1}$.
In particular, $\dim X_{r-p,r} \leq 2p+2$. \end{lemma}
\begin{proof}
Clearly the map $\psi:V_{1} \rightarrow V_{p}$ defined by $(aX+bY)
\mapsto (aX+bY)^{p}$ is $M$-linear. By \cite[(5.1)]{Glover}, we have an
$M$-linear map $\varphi_{r-p,\, p}: V_{r-p} \otimes V_{p} \rightarrow V_{r}$,
given by $\varphi_{r-p,\,p} (u \otimes v) = uv$, for $u \in V_{r-p}$ and
$v \in V_{p}$. Let $\phi_{p} $ be the restriction of $\varphi_{r-p,\,p}
\circ ( \mathrm{id} \otimes \psi ) $ to the $M$-submodule $X_{r-p,\,r-p}
\otimes V_{1}$. Since $\begin{psmallmatrix}
1 & 1 \\ 0 & 0 \end{psmallmatrix} X^{r-p} \otimes Y = X^{r-p} \otimes X$,
we see that $X^{r-p} \otimes Y$ generates
$X_{r-p,\,r-p} \otimes V_{1}$ as an $M$-module. So $\phi_{p}
(X_{r-p,\,r-p} \otimes V_{1})$ is an $M$-module generated by
$ X^{r-p}Y^{p}$. But $X^{r-p} Y^{p}$ is a generator of $X_{r-p,\,r}$ as
an $M$-module. Hence $\phi_{p}(X_{r-p,\, r-p} \otimes V_{1}) = X_{r-p,\,r} $. \end{proof}
In the course of determining the structure of $X_{r-p,\,r}$ we often define
an $\mathbb{F}_{p}$-linear map $\eta: X_{s-1,\,s} \rightarrow X_{r-p,\,r}$, for some $s$. The
next result gives a criterion under which such a map $\eta$ is $M$-linear.
\begin{lemma} \label{M-linearity of map}
Let $s,s' \geq p$ be integers such that $s\equiv s' ~\mathrm{mod}~ (p-1)$. Let
$\eta: X_{s-1,\,s} \rightarrow X_{s'-p,\,s'}$ be an $\mathbb{F}_{p}$-linear map satisfying
\begin{enumerate}[label=\emph{(\roman*)}]
\item $\eta(X^{s})=X^{s'}$ and $\eta(X(kX+Y)^{s-1}) = X^{p}(kX+Y)^{s'-p}$,
$\forall \ k \in \mathbb{F}_{p}$, \item $\eta(Y^{s})=Y^{s'}$ and $\eta((X+kY)^{s-1}Y) =
(X+kY)^{s'-p}Y^{p}$, $\forall \ k \in \mathbb{F}_{p}$.
\end{enumerate}
Then $\eta$ is an $M$-linear surjection. \end{lemma}
\begin{proof}
We first claim that $\eta(\gamma \cdot X^{s-1}Y)= \gamma\cdot
\eta(X^{s-1}Y), ~ \forall ~\gamma \in M$. For $\gamma= \begin{psmallmatrix} a
& b \\ c & d \end{psmallmatrix} \in M$, we have
\begin{align}\label{4.1}
\eta(\gamma \cdot X^{s-1}Y)
&= \eta((aX+cY)^{s-1}(bX+dY)) \nonumber \\
& = b \eta(X(aX+cY)^{s-1})+ d\eta((aX+cY)^{s-1}Y)
\end{align}
Since $s\equiv s'$ mod $(p-1)$ it follows that
\begin{align*}
X(aX+cY)^{s-1} = \begin{cases} a^{s'-p}X^{s}, \ &\mathrm{if} \ c =0, \\
c^{s'-p}X(ac^{-1}X+Y)^{s-1}, \ &\mathrm{if} \ c \neq 0.\end{cases}
\end{align*}
This implies that
\begin{align*}
\eta(X(aX+cY)^{s-1} )=
\begin{cases} a^{s'-p}X^{s'}, \ &\mathrm{if} \ c =0, \\
c^{s'-p}X^{p}(ac^{-1}X+Y)^{s'-p}, \ &\mathrm{if} \ c \neq 0.
\end{cases}
\end{align*}
Hence $\eta(X(aX+cY)^{s-1} )= X^{p}(aX+cY)^{s'-p}$. A similar argument as
above shows that $\eta((aX+cY)^{s-1} Y)= (aX+cY)^{s'-p}Y^{p}$. Therefore
\begin{align*}
\eta(\gamma \cdot X^{s-1}Y)
~& \stackrel{\mathclap{\eqref{4.1}}}{=} ~b \eta(X(aX+cY)^{s-1})+
d\eta((aX+cY)^{s-1}Y)\\
~ & = ~ bX^{p}(aX+cY)^{s'-p}+d(aX+cY)^{s'-p}Y^{p} \\
~ &= ~(bX+dY)^{p}(aX+cY)^{s'-p}\\
~&= ~ \gamma\cdot \eta(X^{s-1}Y).
\end{align*}
Thus, for all $\gamma_{1},\gamma_{2} \in M$, we have
\begin{align}\label{eq4.2}
\eta((\gamma_{1} \gamma_{2}) \cdot X^{s-1}Y)
= (\gamma_{1} \gamma_{2}) \cdot \eta(X^{s-1}Y)
= \gamma_{1}\cdot( \gamma_{2} \cdot \eta(X^{s-1}Y))
= \gamma_{1}\cdot\eta( \gamma_{2} \cdot X^{s-1}Y).
\end{align}
Let $F(X,Y) \in X_{s-1,\,s}$. Since $X^{s-1}Y$ generates $X_{s-1,\,s}$ as
an $\mathbb{F}_{p}[M]$-module, we can write $F(X,Y)$ as $ \sum_{i=1}^{n} a_{i}\gamma
\cdot X^{s-1}Y$, for some $a_{i}\in \mathbb{F}_{p}$ and $\gamma_{i} \in M$. For every
$\gamma \in M$, we have
\begin{align*}
\eta(\gamma \cdot F(X,Y)) ~ &= ~ ~\sum\limits_{i=1}^{n} a_{i}
\eta(\gamma\gamma_{i}\cdot X^{s-1}Y) \quad
( \because \eta ~ \mathrm{is} ~ \mathbb{F}_{p} \text{-} \mathrm{linear} ) \\
~ &\stackrel{\mathclap{\eqref{eq4.2}}}{=} ~~\gamma \cdot \sum
\limits_{i=1}^{n} a_{i} \eta(\gamma_{i}\cdot X^{s-1}Y) =
\gamma \cdot \eta(F(X,Y)).
\end{align*}
This shows that $\eta$ is $M$-linear. Since $\eta(X^{s-1}Y) = X^{s'-p}Y^{p}$ is a
generator of $X_{s'-p,\,s'}$ as $M$-module, we get $\eta$ is onto. \end{proof}
As a consequence we have the following result. \begin{corollary}\label{M linear isomorhism}
Let $s,s' \geq p$ be integers such that
$s,s' \equiv a ~\mathrm{mod}~ (p-1)$.
If $\dim X_{s-1,\,s} = \dim X_{s'-p,\,s'} =2p+2 $, then
$X_{s-1,\,s} \cong X_{s'-p,\,s'}$ as $M$-modules. \end{corollary} \begin{proof}
By \Cref{Basis of X_r-i}, we have $\lbrace (kX+Y)^{s},
X(lX+Y)^{s-1}, X^{s}, X^{s-1}Y : l,k \in \mathbb{F}_{p}^{\ast} \rbrace $ forms a basis of
$X_{s-1,\,s}$. Define an $\mathbb{F}_{p}$-linear map $\eta:
X_{s-1,\,s} \rightarrow X_{s'-p,\,s'}$ by
$\eta( (kX+Y)^{s}) = (kX+Y)^{s'}$,
$\eta( X(l X+Y)^{s-1}) = X^{p}(lX+Y)^{s'-p}$,
$\eta(X^{s}) = X^{s'}$ and $\eta(X^{s-1}Y) = X^{s'-p}Y^{p}$.
Observe that for $k \in \mathbb{F}_{p}^{\ast}$, we have
\begin{align*}
\eta((X+kY)^{s-1} Y) &= k^{s-1} \eta ( (k^{-1}X+Y)^{s}) -
k^{s-2}\eta(X(k^{-1}X+Y)^{s-1}) \\
& = k^{-1}(X+kY)^{s'} - k^{-1}X^{p}(X+kY)^{s'-p}
= (X+kY)^{s'-p} Y^{p}.
\end{align*}
Thus, by \Cref{M-linearity of map}, we have $\eta$ is an $M$-linear
surjection. Since $\dim X_{s-1,s} = \dim X_{s'-p,s'}$, $\eta$ is
an isomorphism. \end{proof}
We now give a criterion which allows us to compare
the sum of $p$-adic digits of
$r-1$ and $r-p$ in terms of the constant and linear terms in the
base $p$-expansion of $r$. \begin{lemma}\label{Equality of sum of p-adic digits of (r-1),(r-p)}
Let $p \leq r= r_{m}p^{m}+\cdots+r_{1}p+r_{0}$ be the base $p$-expansion of $r$.
If $r>p$, then
\begin{enumerate}[label=\emph{(\roman*)}]
\item $\Sigma_{p}(r-1) = \Sigma_{p}(r-p)$ if and only if $r_{1} , r_{0} \neq 0$.
\item $\Sigma_{p}(r-1) < \Sigma_{p}(r-p)$ if and only if $r_{1}=0$ and
$r_{0} \neq 0$.
\item $\Sigma_{p}(r-1) > \Sigma_{p}(r-p)$ if and only if $r_{0} = 0$.
\end{enumerate} \end{lemma} \begin{proof}
It is enough to prove only the \enquote*{if} part of the above assertions, since the three conditions on $(r_{1},r_{0})$ are mutually exclusive and exhaustive.
Case (i): If $r_{1}, r_{0} \neq 0$, then $\Sigma_{p}(r-1)= (\sum_{i} r_{i})-1 =
\Sigma_{p}(r-p)$.
Case (ii): Assume $r_{1}=0$ and $r_{0} \neq 0$.
Let $i>1$ be smallest integer such that $r_{i} \neq 0$. Then
$r-1 = r_{m}p^{m}+\cdots+r_{i}p^{i}+ (r_{0}-1)$ and
$r-p = r_{m}p^{m}+ \cdots + r_{i+1}p^{i+1}+(r_{i}-1)p^{i}+(p-1)p^{i-1}+ \cdots +
(p-1)p+r_{0}$. Hence $\Sigma_{p}(r-1) =(\sum_{i} r_{i})-1 <
\sum_{i} r_{i}+(i-1)(p-1)-1= \Sigma_{p}(r-p)$.
Case (iii): Assume $r_{0}=0$.
Let $i \geq 1$ be smallest positive integer such that $r_{i} \neq 0$. Then
$r-1 = r_{m}p^{m}+\cdots+r_{i+1}p^{i+1}+(r_{i}-1)p^{i}+(p-1)p^{i-1}+ \cdots +
(p-1)$ and $r-p=r_{m}p^{m}+\cdots+r_{i+1}p^{i+1}+ (r_{i}-1)p^{i} + (p-1)p^{i-1}+
\cdots +(p-1)p$. Hence $\Sigma_{p}(r-1)= \sum_{i}^{}r_{i}+i(p-1)-1 >
\sum_{i}^{}r_{i}+(i-1)(p-1)-1 =
\Sigma_{p}(r-p)$. Therefore $\Sigma_{p}(r-1) > \Sigma_{p}(r-p)$. \end{proof}
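For instance, let $p=3$. Taking $r=5=(12)_{3}$ gives $\Sigma_{3}(4)=\Sigma_{3}(2)=2$, illustrating (i); taking $r=10=(101)_{3}$ gives $\Sigma_{3}(9)=1<3=\Sigma_{3}(7)$, illustrating (ii); and taking $r=6=(20)_{3}$ gives $\Sigma_{3}(5)=3>1=\Sigma_{3}(3)$, illustrating (iii).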
The following proposition shows that $X_{rp-p,\,rp} \cong X_{r-1,r}$ as $M$-modules. \begin{proposition} \label{p divides r}
If $r \geq 2p$ and $p \mid r$, then the map sending $X^{r/p -1}Y$ to $X^{r-p}
Y^{p}$ defines an $M$-linear isomorphism between $X_{r/p-1,r/p}$ and
$X_{r-p,\,r}$. \end{proposition}
\begin{proof}
The map $V_{r/p} \rightarrow V_{r}$ defined by $F(X,Y) \mapsto F(X,Y)^{p}
=F(X^{p},Y^{p}) $ is
an injective $M$-linear homomorphism. Restricting this map to
$X_{r/p-1,r/p}$ completes the proof. \end{proof}
This result, combined with the results of \cite[$\S$2, $\S$3]{BG15} determining the structure of $X_{s-1,\,s}$, determines the
structure of $X_{r-p,\,r}$ in the case $p \mid r$.
Hereafter, we will assume that
$p\nmid r$ which, by the above lemma, is equivalent to
$\Sigma_{p}(r-p) \geq \Sigma_{p}(r-1)$. If $p \leq r < 2p$, then $0 \leq r-p \leq p-1$, so the structure of $X_{r-p,\,r}$ can be treated by the methods of $\S3$ and for $r=2p$, we have $X_{r-p,\,r} \cong V_{2}$. So from now on we will also assume $r>2p$.
\subsection{The case \texorpdfstring{$r \equiv 1 ~\mathrm{mod} ~(p-1)$} {}}
In this section, we determine the structure of $X_{r-p,\,r}$ if $p \nmid r$ and $r \equiv 1$ mod $(p-1)$. Since $\Sigma_{p}(r-p) \equiv r-p \equiv 0$ mod $(p-1)$, we have $\Sigma_{p}(r-p)$ is a non-zero multiple of $p-1$. We first consider the case $\Sigma_{p}(r-p) = p-1$.
\begin{lemma}\label{BG Lemma 3.2}
If $p\geq 2$, $2p < r\equiv 1 ~\mathrm{mod}~ (p-1)$
and $\Sigma_{p}(r-p)=p-1$, then
\begin{align*}
\sum\limits_{k=0}^{p-1}X^{p}(kX+Y)^{r-p} \equiv -X^{r} \quad \text{and} \quad
\sum\limits_{k=0}^{p-1}(X+kY)^{r-p}Y^{p} \equiv -Y^{r} ~~\mathrm{mod } ~ p.
\end{align*} As a consequence, $\dim X_{r-p,\,r} \leq 2p$. \end{lemma} \begin{proof}
Let $s=r-p+1$. Clearly $s \equiv 1$ mod $(p-1)$ and $\Sigma_{p}(s-1) =
\Sigma_{p}(r-p) =p-1$. Further, $s-1 = r-p \geq p$. Therefore,
by \cite[Lemma 3.2]{BG15}, we have
\begin{align*}
\sum\limits_{k=0}^{p-1}X (kX+Y)^{s-1} \equiv -X^{s} \ \text{ and }\ \sum
\limits_{k=0}^{p-1}(X+kY)^{s-1}Y
\equiv -Y^{s} \text{ mod } p.
\end{align*}
Multiplying the first and second equation above by $X^{p-1}$ and $Y^{p-1}$
respectively we obtain the lemma. \end{proof}
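To illustrate the first congruence, take $p=3$ and $r=7$, so that $r \equiv 1 ~\mathrm{mod}~2$ and $\Sigma_{3}(r-p)=\Sigma_{3}(4)=2=p-1$. A direct computation gives
\[
\sum\limits_{k=0}^{2}(kX+Y)^{4} \equiv 2X^{4} \equiv -X^{4} ~~\mathrm{mod}~3,
\]
so multiplying by $X^{3}$ yields $\sum\limits_{k=0}^{2}X^{3}(kX+Y)^{4} \equiv -X^{7}~~\mathrm{mod}~3$, as claimed.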
\begin{proposition}\label{BG Proposition 3.3} For $p\geq 2$, if $p \nmid r$, $2p<r\equiv 1 ~\mathrm{mod}~ (p-1)$ and $\Sigma_{p}(r-p)=p-1 $, then $X_{r-p,\,r} \cong X_{r-1,r} \cong V_{2p-1}$, as $M$-modules. \end{proposition} \begin{proof}
First we claim that $\Sigma_{p}(r-1)=p-1$. By hypothesis we have $\Sigma_{p}
(r-1) \equiv r-1 \equiv 0$ mod $(p-1)$. Also,
$\Sigma_{p}(r-1) \leq \Sigma_{p}(r-p) =p-1$. Therefore,
$\Sigma_{p}(r-1)=0$ or $p-1$. Since $r>1$, we have
$\Sigma_{p}(r-1) \neq 0$, whence $\Sigma_{p}(r-1)=p-1$. Hence,
by \cite[Proposition 3.3]{BG15}, we have $X_{r-1,r} \cong V_{2p-1}$ and
$\lbrace X(kX+Y)^{r-1}, (X+lY)^{r-1}Y : k,l \in \mathbb{F}_{p} \rbrace$ is a basis of
$X_{r-1,r}$ over $\mathbb{F}_{p}$. Define an $\mathbb{F}_{p}$-linear map $\eta: X_{r-1,r}
\rightarrow X_{r-p,\,r}$, by $\eta(X(kX+Y)^{r-1})=X^{p}(kX+Y)^{r-p}$ and
$\eta((X+lY)^{r-1}Y)= (X+lY)^{r-p}Y^{p}$, for $k,l \in \mathbb{F}_{p}$. Then
\begin{alignat*}{3}
\eta(X^{r}) &= -\eta \Big ( \sum\limits_{k=0}^{p-1} X(kX+Y)^{r-1} \Big )
&& \quad \quad \text{( by \cite[Lemma~3.2]{BG15}) }\\
&= - \sum\limits_{k=0}^{p-1} X^{p}(kX+Y)^{r-p} \\
&= X^{r} && \quad \quad \text{(by \Cref{BG Lemma 3.2})}.
\end{alignat*}
Similarly $\eta(Y^{r})=Y^{r}$. Therefore, $\eta$ satisfies hypotheses
of \Cref{M-linearity of map} with $s=s'=r$. So $\eta$ is $M$-linear and onto.
Further, by $M$-linearity we have
$\eta(\sum_{i} a_{i} \gamma_{i}X^{r}) = \sum_{i} a_{i} \gamma_{i} \eta(X^{r}) =
\sum_{i} a_{i} \gamma_{i}X^{r}$. Therefore, the restriction of $\eta$ to $X_{r,\,r}$ is
the identity map. By \cite[Proposition 3.3]{BG15}, and the fact that
soc$(V_{2p-1})= V_{2p-1}^{(1)}$,
we have soc$(X_{r-1,r}) = X_{r-1,r}^{(1)} = X_{r,r}^{(1)}$. So ker$(\eta) \cap
\mathrm{soc}(X_{r-1,r}) = $ ker$(\eta) \cap X_{r,r}^{(1)} = (0)$,
since $\eta$ is injective on $X_{r,r}$. Hence
$\eta: X_{r-1,r} \rightarrow X_{r-p,\,r}$ is an isomorphism. \end{proof}
We next consider the remaining case, i.e., $\Sigma_{p}(r-p)>p-1$. \begin{proposition}\label{remaining case r = 1 mod p-1}
Let $p \geq 3$, $p\nmid r$, $2p<r\equiv1 ~\mathrm{mod}~ (p-1)$
and $\Sigma_{p}(r-p)> p-1$. Then
$X_{r-p,\,r} \cong X_{rp-1,rp}$ as
$M$-modules, and we have a short exact sequence of $M$-modules
$$
0 \rightarrow V_{1} \otimes D^{p-1} \rightarrow X_{r-p,\,r} \rightarrow
V_{2p-1} \rightarrow 0 .
$$
Moreover, if $\Sigma_{p}(r-p) = \Sigma_{p}(r-1) > p-1$, then $X_{r-p,\,r} \cong X_{r-1,\,r}$. \end{proposition} \begin{proof}
We claim that $\dim X_{r-p,\,r} =2p+2$.
We prove the proposition assuming the claim.
Note that $\Sigma_{p}(rp-1) = \Sigma_{p}((r-1)p+p-1)
= \Sigma_{p}(r-1)+p-1 > p-1$.
Thus, by \cite[Proposition 3.13 (ii)]{BG15}, we have dim $X_{rp-1,\,rp}=2p+2$.
Now the first two assertions of the proposition follow from \Cref{M linear isomorhism}
and \cite[Proposition 3.13 (ii)]{BG15}.
For the last assertion, by \cite[Proposition 3.8]{BG15}, we have
$\dim X_{r-1,\,r} =2p+2$ so $X_{r-1,\,r} \cong X_{r-p,\,r}$,
again by \Cref{M linear isomorhism}.
We now prove the claim. Note that
\begin{align}
\dim X_{r-p,\,r} & = \dim \left( \frac{X_{r-p,\,r}}{X_{r-p,\,r}^{(1)}} \right)
+ \dim \left( \frac{X_{r-p,\,r}^{(1)}}{X_{r-p,\,r}^{(2)}} \right)
+ \dim X_{r-p,\,r}^{(2)} \nonumber \\
& \geq \dim \left( \frac{X_{r-p,\,r}}{X_{r-p,\,r}^{(1)}} \right) +
\dim \left( \frac{X_{r,\,r}^{(1)}}{X_{r,\,r}^{(2)}} \right)
+ \dim X_{r-p,\,r}^{(2)} .
\end{align}
We now compute each of the terms on the right hand side
of the inequality.
Note that $X^{r-p}Y^{p} = (X^{r-p}Y^{p} - X^{r-1}Y) + X^{r-1}Y
\in X_{r-1,\,r} +V_{r}^{(1)}$. Thus, $X_{r-p,\,r}+ V_{r}^{(1)}
= X_{r-1,\,r} +V_{r}^{(1)}$. By the second isomorphism theorem, we have
\[
\frac{X_{r-1,\,r}}{X_{r-1,\,r}^{(1)}} \cong
\frac{X_{r-1,\,r} + V_{r}^{(1)}}{V_{r}^{(1)}} =
\frac{X_{r-p,\,r} + V_{r}^{(1)}}{V_{r}^{(1)}} \cong
\frac{X_{r-p,\,r}}{X_{r-p,\,r}^{(1)}}.
\]
Thus,
$\dim X_{r-p,\,r}/X_{r-p,\,r}^{(1)}= \dim X_{r-1,\,r}/X_{r-1,\,r}^{(1)}
=p+1$, by \Cref{Structure X(1)}.
By \cite[Lemma 3.1 (i)]{BG15}, we have
$\dim X_{r,\,r}^{(1)}/X_{r,\,r}^{(2)}= p-1$.
By \Cref{dimension formula for X_{r}}, we have
$\dim X_{r-p,\,r-p} = p+1$ and $X_{r-p,\,r-p}^{(1)} \neq (0)$.
Since $r-p \equiv p-1$ mod $(p-1)$, by Lemma~\ref{star=double star},
we have $X_{r-p,\,r-p}^{(1)} = X_{r-p,\,r-p}^{(2)}$.
If $0 \neq G(X,Y) \in X_{r-p,\,r-p}^{(2)}$, then
$X^{p}G(X,Y)$ and $Y^{p}G(X,Y)$ are distinct elements
of $X_{r-p,\,r}^{(2)}$, by \Cref{surjection}. So
$\dim X_{r-p,\,r}^{(2)} \geq 2$.
Putting all these facts together, the claim follows from \Cref{surjection}. \end{proof}
To summarize the structure of $X_{r-p,\,r}$ obtained so far, we record
the following theorem. \begin{theorem}\label{Main theorem part 1 and 2}
Let $p\geq 3$, $p \nmid r $ and $2p< r \equiv 1 ~\mathrm{mod}~ (p-1)$.
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $\Sigma_{p}(r-p)=p-1$, then $X_{r-p,\,r} \cong V_{2p-1}$ as an
$M$-module.
\item If $\Sigma_{p}(r-p) > p-1$, then
we have a short exact sequence of $M$-modules
$$
0 \rightarrow V_{1} \otimes D^{p-1} \rightarrow X_{r-p,\,r}
\rightarrow V_{2p-1} \rightarrow 0.
$$
\end{enumerate} \end{theorem}
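For example, take $p=3$ and $r=7$. Then $3 \nmid 7$, $7 \equiv 1 ~\mathrm{mod}~2$ and $\Sigma_{3}(7-3)=2=p-1$, so part (i) gives $X_{4,\,7} \cong V_{5}$ as $M$-modules.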
\subsection{The case \texorpdfstring{$r \not \equiv 1 \mod (p-1)$}{}}
In this section, we determine the structure of $X_{r-p,\,r}$ when $p \nmid r$ and $r \equiv a $ mod $(p-1)$ with $2 \leq a \leq p-1$. Thus, $\Sigma_{p}(r-p) \equiv r-p \equiv a-1$ mod $(p-1)$. We begin by considering the case $\Sigma_{p}(r-p) = a-1$. For simplicity, below we sometimes denote $r-p$ by $r''$. \begin{proposition}\label{BG Lemma 4.5}
Let $p\geq3, p \nmid r $ and let
$2p < r \equiv a ~\mathrm{mod}~ (p-1)$, with
$2 \leq a \leq p-1 $. If $\Sigma_{p}(r-p)=\Sigma_{p}(r-1) =a-1$,
then $X_{r-p,\,r} \cong X_{r-1,\,r} \cong
V_{a-2} \otimes D \oplus V_{a}$ as $M$-modules. \end{proposition} \begin{proof}
By \Cref{dimension formula for X_{r}} (i), we have dim
$X_{r'',r''}=a$ and $X_{r'',r''} \cong V_{a-1}$.
By \cite[(5.2)]{Glover}, we have
$ X_{r'',r''} \otimes V_{1} \cong V_{a-2} \otimes D \oplus V_{a}$.
Thus, by \Cref{surjection}, we have an $M$-linear surjection
\begin{align*}
V_{a-2} \otimes D \oplus V_{a} \cong X_{r'',r''} \otimes V_{1}
\xrightarrow{\phi_{p}} X_{r-p,\,r}.
\end{align*}
Since $\Sigma_{p}(r)=a$, we have $X_{r,r} \cong V_{a}$ and it follows
from \Cref{BG Lemma 4.1} that $\phi_{p}$ is an isomorphism.
By \cite[Lemma 4.5]{BG15}, we have $X_{r-1,\,r}
\cong V_{a-2} \otimes D \oplus V_{a}$ as $M$-modules
so $X_{r-1,\,r} \cong X_{r-p,\,r}$. \end{proof} Before we treat the case $\Sigma_{p}(r-p) > a-1$, or equivalently $\Sigma_{p}(r-p) \geq p+a-2 > p-1$, we need a few preparatory results. In the next two lemmas we show that $V_{p-a+1}\otimes D^{a-1}$ and $V_{a-2} \otimes D$ are JH factors of $X_{r-p,\,r}$ whenever $\Sigma_{p}(r-p)>p-1$. Observe that $\phi_{p}$ is $M$-linear and
$X_{r'',r''}^{(1)} \otimes V_{1}$ is singular, so we have $\phi_{p}(X_{r'',r''}^{(1)}
\otimes V_{1}) \subseteq X_{r-p,\,r}^{(1)}$. \begin{lemma}\label{JH1}
Let $ p \geq 3, p \nmid r $ and
$2p < r \equiv a ~\mathrm{mod}~(p-1)$ with
$2 \leq a \leq p-1$. If $\Sigma_{p}(r-p) > p-1$, then
$X_{r-p,\,r}^{(1)}$ contains $V_{p-a+1} \otimes D^{a-1}$
as an $M$-module. \end{lemma} \begin{proof}
By \Cref{dimension formula for X_{r}}, we have dim $X_{r'',r''}=p+1$
and $X_{r'',r''}^{(1)} \cong V_{p-a} \otimes D^{a-1}$.
For $F(X,Y) \in V_{m}$, define $\delta_{m}(F)=F_{X}
\otimes X +F_{Y}\otimes Y \in V_{m-1} \otimes V_{1}$, where $F_{X},F_{Y}$ are
the partial derivatives of $F$ w.r.t. $X,Y$, respectively.
It is shown on \cite[p. 449]{Glover}, that $\frac{1}{p-a+1}
\delta_{p-a+1}$ ($\bar{\phi}$ in the notation of \cite{Glover})
induces an $M$-linear injection $V_{p-a+1} \otimes D^{a-1} \hookrightarrow
(V_{p-a} \otimes D^{a-1}) \otimes V_{1} $. Let $F$ be the inverse
image of $X^{p-a}$ under the isomorphism
$X_{r'',r''}^{(1)} \cong V_{p-a} \otimes D^{a-1}$. Then
the composition of the maps
\begin{alignat*}{4}
V_{p-a+1} \otimes D^{a-1} & \hookrightarrow (V_{p-a} \otimes D^{a-1})
\otimes V_{1}
&&\stackrel{\simeq}{\longrightarrow} X_{r'',r''}^{(1)} \otimes V_{1}
&&&\stackrel{\phi_{p}}{\longrightarrow} X_{r-p,\,r}^{(1)} \\
\ \ \ \ \ \ \ \ X^{p-a+1}& \mapsto \ \ \ \ \ \ \ X^{p-a} \otimes X \ \ \
&&\longmapsto \ \ \ F\otimes X \ \ \ \ &&&\longmapsto \ FX^{p}
\neq 0
\end{alignat*}
is a non-zero $M$-linear map. Since $ V_{p-a+1} \otimes D^{a-1}$ is an
irreducible $\Gamma$-module, the composition is injective. Hence
$X_{r-p,\,r}^{(1)}$ contains $ V_{p-a+1} \otimes D^{a-1}$ as an $M$-module. \end{proof} \begin{lemma}\label{JH2}
Let $p \geq 3$, $p \nmid r $ and
$2p < r \equiv a ~\mathrm{mod}~(p-1)$ with $2 \leq a \leq p-1$.
Then $V_{a-2} \otimes D$ is a JH factor of $X_{r-p,\,r}$.
\end{lemma} \begin{proof}
We first treat the case $a=2$.
Consider the polynomial $F(X,Y) = X^{p}Y^{r-p} - X^{r-p}Y^{p}
\in X_{r-p,\,r}$. By \Cref{divisibility1}, we have
$F(X,Y) \in V_{r}^{(1)}$. We claim that the following map is non-zero,
hence surjective
\[
\frac{X_{r-p,\,r}^{(1)}}{X_{r-p,\,r}^{(2)}} \hookrightarrow
\frac{V_{r}^{(1)}}{V_{r}^{(2)}}
\twoheadrightarrow V_{0} \otimes D,
\]
where the rightmost map is induced by the quotient map of
the exact sequence \eqref{exact sequence Vr}.
Since $r > 2p$, by \Cref{breuil map quotient}, we have
$F(X,Y) \equiv r \theta X^{r-(p+1)-(p-1)}Y^{p-1}$
mod $V_{r}^{(2)}$.
By \Cref{Breuil map}, we have the image of $F(X,Y)$ in
$V_{r}^{(1)}/V_{r}^{(2)}
\twoheadrightarrow V_{0} \otimes D$ is non-zero, as $p \nmid r$.
This proves the lemma if $a=2$.
Assume $3 \leq a \leq p-1$.
We claim that $ X_{r-p,\,r}^{(1)}/X_{r-p,\,r}^{(2)} \neq 0$.
Consider the polynomial
\begin{align*}
G(X,Y) & := X^{r-p} Y^{p} + (a-1)^{-1}\sum\limits_{k=1}^{p-1}
k^{2-a} X^{p}(kX+Y)^{r-p} \\
& \stackrel{\eqref{sum fp}}{ \equiv}
X^{r-p} Y^{p} - (a-1)^{-1} \sum\limits_{\substack { 0 \leq j \leq
r-p\\ j \equiv 1 \ \mathrm{mod} \ (p-1)}} \binom{r-p}{j} X^{r-j}Y^{j}
\mod p.
\end{align*}
Clearly $G(X,Y) \in X_{r-p,\,r}$.
The coefficient of $X^{r-1}Y$ in $G(X,Y)$ is
equal to $-(r-p)/(a-1) \not \equiv 0$ mod $p$, so by \Cref{divisibility1},
we have $G(X,Y) \not \in V_{r}^{(2)}$. Clearly $X,Y \mid G(X,Y)$.
Further, by \Cref{binomial sum}, applied with $m=0$, we have
\[
1 - (a-1)^{-1} \sum\limits_{\substack { 0 \leq j \leq
r-p\\ j \equiv 1 ~ \mathrm{mod} ~ (p-1)}} \binom{r-p}{j}
\equiv 1 - (a-1)^{-1} \binom{a-1}{1} \equiv 0 \mod p.
\]
Thus, by \Cref{divisibility1}, we have $G(X,Y) \in V_{r}^{(1)}$
and $0 \neq G(X,Y) \in X_{r-p,\,r}^{(1)}/X_{r-p,\,r}^{(2)}$.
Since $3 \leq a \leq p-1$, the exact sequence \eqref{exact sequence Vr}
doesn't split for $m=1$. Hence $V_{a-2} \otimes D \hookrightarrow
X_{r-p,\,r}^{(1)}/X_{r-p,\,r}^{(2)}$. \end{proof}
\begin{proposition}\label{4 JH factors}
Let $p\geq 3, \ r>2p, \ p\nmid r$ and
$r \equiv a ~\mathrm{mod}~ (p-1)$ with $2 \leq a \leq p-1$.
If $\Sigma_{p}(r-1)> p-1$, then $X_{r-p,r} \cong X_{r-1,\,r} \cong
X_{rp-1,\,rp}$
as $M$-modules
and there is an exact sequence of $M$-modules
\[
0 \rightarrow V_{p-a-1} \otimes D^{a} \oplus V_{p-a+1} \otimes D^{a-1}
\rightarrow X_{r-p,\,r}
\rightarrow V_{a-2} \otimes D \oplus V_{a} \rightarrow 0 .
\] \end{proposition} \begin{proof}
By \Cref{dimension formula for X_{r}} (ii), we have dim $X_{r,r} =p+1$ and
$V_{a}$, $V_{p-a-1} \otimes D^{a}$ are JH factors of $X_{r,\,r}$,
hence of $X_{r-p,\,r}$. By \Cref{JH2}, we have $V_{a-2}
\otimes D$ is a JH factor of $X_{r-p,\,r}$.
By Lemma~\ref{Equality of sum of p-adic digits of (r-1),(r-p)}, $\Sigma_{p}(r-p) \geq \Sigma_{p}(r-1) \geq p $. Thus, by
Lemma~\ref{JH1}, $V_{p-a+1} \otimes D^{a-1}$ is a JH factor of $X_{r-p,\,r}$.
Adding the dimensions of these JH factors we get dim $X_{r-p,\,r} \geq 2p+2$. By
\Cref{surjection}, we get dim $X_{r-p,\,r} = 2p+2$.
Since $p \nmid r $ and $\Sigma_{p}(r-1) > p-1$, by
\Cref{dimension formula for X_{r-1}}, we have
$\dim X_{r-1,\,r} =2p+2$. Now the isomorphism $X_{r-p,\,r} \cong X_{r-1,\,r}$
follows from \Cref{M linear isomorhism}
and hence the exact sequence above
follows from \cite[Proposition 4.9 (iii)]{BG15}.
Finally, by \cite[Proposition 4.9 (iii)]{BG15}, we also have
$\dim X_{rp-1,\,rp} =2p+2$, so $X_{r-p,\,r} \cong X_{rp-1,\,rp} $, again
by \Cref{M linear isomorhism}. \end{proof}
We next treat the last case, i.e., $\Sigma_{p}(r-1) \leq p-1$ and
$\Sigma_{p}(r-p)>p-1$.
\begin{proposition} \label{BG Lemma 4.8}
Let $p\geq 3$, $p \nmid r$ and let
$r\equiv a ~\mathrm{mod}~ (p-1)$ with $2 \leq a \leq p-1$.
If $\Sigma_{p}(r-p)>p-1 > \Sigma_{p}(r-1)=a-1$,
then $X_{r-p,\,r} \cong X_{rp-1,\,rp}$ as $M$-modules and
we have an exact sequence of $M$-modules
\[
0 \rightarrow V_{p-a+1} \otimes D^{a-1} \rightarrow X_{r-p,\,r} \rightarrow
V_{a-2} \otimes D \oplus V_{a} \rightarrow 0.
\] \end{proposition} \begin{proof}
Clearly the map $F(X,Y) \mapsto F(X,Y)^{p}$ induces an
$M$-linear isomorphism $\eta': X_{r,\,r} \rightarrow
X_{rp,\,rp}$. Let $\eta: X_{rp,\,rp} \rightarrow X_{r,\,r}$ be the
inverse of $\eta'$.
We show that $\eta$ is the restriction of an $M$-linear
surjection $X_{rp-1,rp} \rightarrow X_{r-p,\,r}$ which we denote
again by $\eta$. Let $S= \lbrace
X^{rp-1}Y, X(kX+Y)^{rp-1}: k \in \mathbb{F}_{p} \rbrace \subset X_{rp-1,rp}$ and
$W \subset X_{rp-1,rp}$ be the vector space spanned by $S$. Let $W' =
X_{rp,\,rp}+W$. By \Cref{Basis of X_r-i}, we have $W' =X_{rp-1,\,rp}$.
Note that $\Sigma_{p}(rp) = \Sigma_{p}(r)=a$.
By \Cref{dimension formula for X_{r}} and
\Cref{dimension formula for X_{r-1}}, we have $\dim X_{rp,\,rp}=a+1$ and dim
$X_{rp-1,rp}=a+p+2$. So $\dim W \geq \dim W' - \dim X_{rp,rp} =p+1$.
Since Card$(S) \leq p+1$, we
have dim $W =p+1$ and
$X_{rp-1,rp} = X_{rp,\,rp} \oplus W$. Extend $\eta$ to an $\mathbb{F}_{p}$-linear map
$\eta: X_{rp-1,rp} \rightarrow X_{r-p,\,r}$ by setting $\eta(X(kX+Y)^{rp-1})=
X^{p} (kX+Y)^{r-p}$ and $\eta(X^{rp-1}Y)=X^{r-p}Y^{p}$.
Also, for $l \in \mathbb{F}_{p}^{\ast}$, we have
\begin{align*}
\eta( (X+lY)^{rp-1}Y)
& =\eta(l^{-1} (X+lY)^{rp} - l^{rp-2} X(l^{-1}X+Y)^{rp-1}) \\
&= l^{-1} (X+lY)^{r} - l^{rp-2} X^{p} (l^{-1}X+Y)^{r-p} \\
&=l^{-1} (X+lY)^{r} - l^{-1} X^{p} (X+lY)^{r-p} \\
&= (X+lY)^{r-p}Y^{p}.
\end{align*}
Therefore the extension $\eta$ satisfies the hypotheses
of \Cref{M-linearity of map} with
$s=rp$ and $s'=r$. Hence $\eta$ is an $M$-linear surjection.
We now show that $\eta$ is an isomorphism by showing $\dim X_{rp-1,\,rp}=
\dim X_{r-p,\,r}$.
Since $\Sigma_{p}(r-p)>p-1$, by Lemmas
\ref{JH1} and \ref{JH2}, we have $V_{p-a+1} \otimes D^{a-1}$
and $V_{a-2} \otimes D$ are JH factors of $X_{r-p,\,r}$.
Further, as $r \equiv a $ mod $(p-1)$ by \cite[(4.5)]{Glover},
we have $V_{a}$ is a JH factor of $X_{r,r}$. So $V_{a}$ is also a
JH factor of $X_{r-p,\,r}$. Adding the dimensions of the above
JH factors we get dim $X_{r-p,\,r} \geq a+p+2 =
\mathrm{dim} \ X_{rp-1,\,rp}$. So $\eta$ is an isomorphism.
Finally, the exact sequence follows from \cite[Proposition 4.9 (ii)]{BG15}.
\end{proof}
We collect all the results related to the structure of $X_{r-p,\,r}$ proved in this
subsection in the following theorem. \begin{theorem}\label{Main theorem part 4}
Let $p\geq 3$, $p\nmid r$,
$2p < r \equiv a ~\mathrm{mod}~ (p-1)$ with $2 \leq a \leq p-1$.
Then
\begin{enumerate}[label=\emph{(\roman*)}]
\item If $\Sigma_{p}(r-1) =\Sigma_{p}(r-p) = a-1$, then $X_{r-p,\,r}
\cong V_{a-2} \otimes D
\oplus V_{a}$.
\item If $\Sigma_{p}(r-1)=a-1$ and $\Sigma_{p}(r-p) >a-1$, then
we have the following exact sequence of $M$-modules
\begin{align*}
0 \rightarrow V_{p-a+1} \otimes D^{a-1} \rightarrow X_{r-p,\,r} \rightarrow
V_{a-2} \otimes D
\oplus V_{a} \rightarrow 0 .
\end{align*}
\item If $\Sigma_{p}(r-1)>a-1$, then we have the following
exact sequence of
$M$-modules
\begin{align*}
0 \rightarrow V_{p-a-1} \otimes D^{a} \oplus V_{p-a+1} \otimes D^{a-1}
\rightarrow X_{r-p,\,r}
\rightarrow V_{a-2} \otimes D \oplus V_{a} \rightarrow 0 .
\end{align*}
\end{enumerate} \end{theorem}
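For example, take $p=3$ and $r=8$, so that $a=2$, $3 \nmid 8$ and $\Sigma_{3}(r-1)=\Sigma_{3}(7)=3>a-1$. Part (iii) then gives a short exact sequence of $M$-modules
\[
0 \rightarrow V_{0} \otimes D^{2} \oplus V_{2} \otimes D \rightarrow X_{5,\,8} \rightarrow V_{0} \otimes D \oplus V_{2} \rightarrow 0;
\]
in particular, $\dim X_{5,\,8} = 2p+2 = 8$.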
\quad \\ \noindent {School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai-5, India}
\noindent{\tt email: [email protected], [email protected]}
\end{document} | arXiv |
Title: The single-degenerate model for the progenitors of accretion-induced collapse events
Authors: Bo Wang
(Submitted on 17 Aug 2018)
Abstract: It has been suggested that the accretion-induced collapse (AIC) of an oxygen-neon white dwarf (ONe WD) to a neutron star is a theoretically predicted outcome in stellar evolution, likely relating to the formation of some neutron star systems. However, the progenitor models of AIC events are still not well studied, and recent studies indicated that CO WD+He star systems may also contribute to the formation of neutron star systems through AIC process when off-centre carbon ignition happens on the surface of the CO WD. In this work, I studied the single-degenerate (SD) model of AIC events in a systematic way, including the contribution of the CO WD+He star channel and the ONe WD+MS/RG/He star channels. Firstly, I gave the initial parameter space of these SD channels for producing AIC events in the orbital period--secondary mass plane based on detailed binary evolution computations. Then, according to a binary population synthesis approach, I gave the rates and delay times of AIC events for these SD channels based on their initial parameter space. I found that the rates of AIC events in our galaxy are in the range of $\sim0.3-0.9\times10^{-3}$\,yr$^{-1}$,and that their delay times are $>$30\,Myr. I also found that the ONe WD+He star channel is the main way to produce AIC events, and that the CO WD+He star channel cannot be ignored when studying the progenitors of AIC events.
Comments: 14 pages, 8 figures, 1 table, accepted for publication in MNRAS
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); High Energy Astrophysical Phenomena (astro-ph.HE)
Journal reference: MNRAS, 481, 439 (2018)
DOI: 10.1093/mnras/sty2278
From: Bo Wang
[v1] Fri, 17 Aug 2018 20:48:47 UTC (92 KB)
\begin{document}
\begin{abstract} Motivated by the study of polytopes formed as the convex hull of permutation matrices and alternating sign matrices, we define several new families of polytopes as convex hulls of sign matrices, which are certain $\{0,1,-1\}$--matrices in bijection with semistandard Young tableaux. We investigate various properties of these polytopes, including their inequality descriptions, vertices, facets, and face lattices, as well as connections to alternating sign matrix polytopes and transportation polytopes. \end{abstract}
\title{Sign matrix polytopes from Young tableaux} \tableofcontents
\section{Introduction} \emph{Sign matrices} are defined as $\{0,1,-1\}$--matrices whose column partial sums are zero or one and whose row partial sums are nonnegative. Sign matrices were introduced by Aval~\cite{aval}, who showed they are in bijection with \emph{semistandard Young tableaux}. Young tableaux are well-loved objects for their nice combinatorial properties, including beautiful enumerative formulas, and nontrivial connections to Lie algebras, representation theory, and statistical physics \cite{BumpSchilling, fulton, KSS}. Aval used sign matrices to give a simple method for computing the \emph{left key} of a tableau by successively removing the negative ones from its corresponding sign matrix~\cite{aval}.
\emph{Alternating sign matrices} are $n \times n$ sign matrices with the additional properties that the rows and columns each sum to one and the row partial sums may not exceed one~\cite{MRRASM}. Alternating sign matrices were introduced by Robbins and Rumsey in their study of the $\lambda$-determinant~\cite{RobbinsRumsey}, with an enumeration formula conjectured by Mills, Robbins, and Rumsey~\cite{MRRASM}. The proof of this conjecture~\cite{zeilberger,kuperbergASMpf} was a major accomplishment in enumerative combinatorics in the 1990's. Alternating sign matrices are still a source of interest, in particular, with regard to intriguing open bijective questions involving plane partitions and connections to both the six-vertex model and various loop models in statistical physics~\cite{ZINNDPP,Bettinelli,Biane_Cheballah_1,BRESSOUDBOOK,razstrogpf2,razstrogpf,ProppManyFaces,razstrog,Striker_DPP,StrikerFPSAC2013,razstrogrow,STRIKERPOSET}.
In \cite{striker}, the second author examined alternating sign matrices from a geometric perspective by defining and studying the polytope formed by taking the convex hull of all $n\times n$ alternating sign matrices, as vectors in $\mathbb{R}^{n^2}$. She studied various aspects of the alternating sign matrix polytope, including its dimension, facet count, vertices, face lattice, and inequality description. Independently, Behrend and Knight~\cite{behrend} defined and studied the alternating sign matrix polytope. They proved the equivalence of the inequality and vertex descriptions, computed the Ehrhart polynomials to $n=5$, and studied lattice points in the $r$th dilate of the alternating sign matrix polytope, which they called {higher spin alternating sign matrices}.
In this paper, we extend this work by studying polytopes formed as convex hulls of sign matrices. We define two new families of polytopes: $P(m,n)$ as the convex hull of all $m\times n$ sign matrices and $P(\lambda,n)$ as the convex hull of sign matrices in bijection with semistandard Young tableaux of a given shape $\lambda$ and entries at most $n$. If we, furthermore, fix the entries in the first column of the tableaux to be determined by the vector $v$, we obtain a polytope $P(v,\lambda,n)$, whose nonnegative part we show in Theorem~\ref{thm:transportation} is a transportation polytope.
\textbf{Our main results} include Theorems~\ref{thm:ineqthmshape}, \ref{thm:ineqthm}, and \ref{thm:v_ineqthmshape}, in which we have found the set of inequalities that determine $P(\lambda,n)$, $P(m,n)$, and $P(v,\lambda,n)$ by an extension of the proof technique von Neumann used to show that the convex hull of $n\times n$ permutation matrices, the $n$th Birkhoff polytope, consists of all $n\times n$ doubly stochastic matrices~\cite{vonneumann}. Other main results include vertex characterizations (Theorems~\ref{thm:lambdavertex}, \ref{thm:mnvertex}, and \ref{thm:v_lambdavertex}), descriptions of the face lattices of these polytopes (Theorems~\ref{th:g_bijection}, \ref{thm:poset_iso}, \ref{th:g_bijection_shape}, and \ref{th:v_g_bijection_shape}), and enumerations of the facets (Theorems~\ref{thm:facets_mn} and \ref{thm:facets_lambda}).
\textbf{Our outline is as follows.} In Section~\ref{sec:ssyt}, we refine Aval's bijection between semistandard Young tableaux and sign matrices to account for the tableau shape. In Section~\ref{sec:Plambda_n}, we define the polytope $P(\lambda,n)$ as the convex hull of all $\lambda_1\times n$ sign matrices corresponding to semistandard Young tableaux of shape $\lambda$ and entries at most $n$, prove its dimension, and show that the vertices are all the sign matrices used in the construction. In Section~\ref{sec:Pmn}, we define the polytope $P(m,n)$ as the convex hull of all $m\times n$ sign matrices, find its dimension and vertices. Then in Section~\ref{sec:ineq}, we prove Theorems~\ref{thm:ineqthmshape} and \ref{thm:ineqthm}, giving an inequality description of $P(\lambda,n)$ and $P(m,n)$ respectively. In Section~\ref{sec:Pmninq}, we prove facet counts for both polytope families (Theorems~\ref{thm:facets_mn} and \ref{thm:facets_lambda}). In Section~\ref{sec:facelattice} we give a description of the face lattices of these polytopes. In Section~\ref{sec:connections}, we describe how $P(m,n)$ and $P(\lambda,n)$ relate to each other and give connections to alternating sign matrix polytopes. In Section~\ref{sec:transportation}, we define another polytope $P(v,\lambda,n)$ as the convex hull of sign matrices in bijection with semistandard Young tableaux of shape $\lambda$, entries at most $n$, and first column given by $v$. We then prove Theorem~\ref{thm:transportation}, relating these polytopes to transportation polytopes.
\section{Semistandard Young tableaux and sign matrices} \label{sec:ssyt}
In this section, we first define semistandard Young tableaux and sign matrices. We then discuss a bijection between them, due to Aval. We refine this bijection in Theorem~\ref{thm:MtoSSYT} to a bijection between semistandard Young tableaux with a given shape and sign matrices with prescribed row sums.
We use the following notation throughout the paper. \begin{definition} \label{def:YD} A \emph{partition} is a weakly decreasing sequence of positive integers $\lambda=[\lambda_1,\lambda_2,\ldots,\lambda_k]$. The positive integers $\lambda_i$ are called the \emph{parts} of the partition and $k$ is the \emph{length} of the partition. A \emph{Young diagram} is a visual representation of a partition $\lambda$ as a collection of boxes, or cells, arranged in left-justified rows, with $\lambda_i$ boxes in row $i$. We will refer to a partition and its Young diagram interchangeably.
Let $\lambda'$ denote the \emph{conjugate partition} of $\lambda$, that is, the Young diagram defined by reflecting $\lambda$ about the diagonal. Note $k=\lambda'_1$.
The \emph{frequency representation} of $\lambda$ is the sequence $[a_1,a_2,\ldots,a_{\lambda_1}]$ where $a_i$ equals the number of parts of $\lambda$ equal to $i$. We may also denote $\lambda$ using \emph{exponential notation} as $[\lambda_1^{a_{\lambda_1}},\ldots,i^{a_i},\ldots,\lambda_k^{a_{\lambda_k}}]$. \end{definition}
\begin{example} The partition $\lambda=[6,3,3,1]$ has $k=4$ parts. The exponential notation for $\lambda$ is $[6,3^2,1]$ and its frequency representation is $[1,0,2,0,0,1]$. The conjugate partition is $\lambda'=[4,3,3,1,1,1]$. See Figure~\ref{fig:ydssyt}. \end{example}
\begin{definition} A \emph{semistandard Young tableau (SSYT)} is a filling of a Young diagram with positive integers such that the rows are weakly increasing and the columns are strictly increasing. See Figure~\ref{fig:ydssyt}. \label{def:SSYT} \end{definition}
\begin{figure}
\caption{A Young diagram of partition shape $\lambda=[6,3,3,1]$ and a semistandard Young tableau of the same shape.}
\label{fig:ydssyt}
\end{figure}
\begin{definition} Let $SSYT(m,n)$ denote the set of {semistandard Young tableaux with at most $m$ columns and entries at most $n$}. \end{definition}
Gordon enumerated $SSYT(m,n)$ as follows.
\begin{theorem}[\cite{gordon}] \label{thm:Gordon} The number of SSYT with at most $m$ columns and entries at most $n$ is \[\displaystyle\prod_{1\le i\le j\le n} \frac{m+i+j-1}{i+j-1}.\] \end{theorem}
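For instance, when $m=n=2$ the product equals $\frac{3}{1}\cdot\frac{4}{2}\cdot\frac{5}{3}=10$. This matches a direct count: the empty tableau together with the nine tableaux with at most two columns and entries at most $2$, namely those of shapes $[1]$, $[2]$, $[1,1]$, $[2,1]$, and $[2,2]$.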
\begin{definition} Let $SSYT(\lambda,n)$ denote the set of {semistandard Young tableaux of partition shape $\lambda$ and entries at most $n$}. \end{definition}
For example, the tableau of Figure~\ref{fig:ydssyt} is in both $SSYT(6,n)$ and $SSYT([6,3,3,1],n)$ for any $n \geq 7$.
$SSYT(\lambda,n)$ is enumerated by Stanley's hook-content formula.
\begin{theorem} [\cite{rpstanley}] The number of SSYT of shape $\lambda$ with entries at most $n$ is \[ \displaystyle \prod_{u \in \lambda} \frac{n+c(u)}{h(u)} \] where $c(u)$ is the content of the box $u$, given by $c(u)=i-j$ for $u=(i,j)$, and $h(u)$ is the hook length of $u$, given by the number of squares directly below or to the right of $u$ (counting $u$ itself). \end{theorem}
Aval \cite{aval} defined a new set of objects, called sign matrices, which will be the building blocks of the polytopes that will be our main objects of study.
\begin{definition}[\cite{aval}] \label{def:sm} A \emph{sign matrix} is a matrix $M=\left(M_{ij}\right)$ with entries in $\left\{-1,0,1\right\}$ such that: \begin{align} \label{eq:sm1} \displaystyle\sum_{i'=1}^{i} M_{i'j} &\in \left\{0,1\right\}, & \mbox{ for all }i,j. \\ \label{eq:sm2} \displaystyle\sum_{j'=1}^{j} M_{ij'} &\geq 0, & \mbox{ for all }i,j. \end{align} \end{definition}
In words, the column partial sums from the top of a sign matrix equal either 0 or 1 and the partial sums of the rows from the left are non-negative.
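For example,
\[ M=\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \end{pmatrix} \]
is a sign matrix: every column partial sum from the top equals $0$ or $1$, and every row partial sum from the left is nonnegative.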
Aval showed that $m\times n$ sign matrices are in bijection with SSYT with at most $m$ columns and largest entry at most $n$ \cite[Proposition 1]{aval}. We now define the set of sign matrices we will show in Theorem~\ref{thm:MtoSSYT} to be in bijection with $SSYT(\lambda, n)$; this is a refinement of Aval's bijection. See Figure~\ref{fig:(3,3,1,1,1)} for an example of this bijection.
\begin{definition} \label{def:MtoSSYT} Fix a partition $\lambda$ with frequency representation $[a_1,a_2,\ldots,a_{\lambda_1}]$ and fix $n\in\mathbb{N}$. Let \emph{$M(\lambda,n)$} be the set of $\lambda_1\times n$ sign matrices $M=(M_{ij})$ such that: \begin{align} \label{eq:Mij_rowsum} \displaystyle\sum_{j=1}^n M_{ij} &= a_{\lambda_1-i+1},
& \mbox{ for all }1\le i \le \lambda_1. \end{align} Call $M(\lambda,n)$ the set of \emph{sign matrices of shape $\lambda$ and content at most $n$}. \end{definition}
\begin{theorem} \label{thm:MtoSSYT} $M(\lambda,n)$ is in explicit bijection with $SSYT(\lambda,n)$. \end{theorem}
\begin{proof} We first outline the bijection of Aval~\cite{aval} between SSYT and sign matrices. Given an $m\times n$ sign matrix $M$, we construct a tableau $\Phi(M)=T \in SSYT(m,n)$ such that the entries in the $i$th row of $M$ determine the $(m-i+1)$st column (from the left) of $T$. In the $i$th row of $M$, note which columns have a partial sum (from the top) of one. Record the numbers of the matrix columns in which this occurs, in increasing order from top down, to form column $m-i+1$ of $T$. Since we record the entries in increasing order for each column of $T$ and each entry only occurs once in a column, the columns of $T$ are strictly increasing. The rows of $T$ are weakly increasing, since by (\ref{eq:sm2}) the partial sums of the rows of $M$ are non-negative. Thus, $T$ is a SSYT. The length of the first row of $T$ is at most $m$ and the entries of $T$ are at most $n$, since $M$ is an $m\times n$ matrix. Thus $\Phi$ maps into $SSYT(m,n)$.
Aval proved in~\cite{aval} that $\Phi$ is an invertible map that gives a bijection between $SSYT(m,n)$ and $m\times n$ sign matrices. We refine this to a bijection between $SSYT(\lambda,n)$ and $M(\lambda,n)$ by keeping track of the row sums of $M$ and the shape of $T$. Given a tableau, $T\in SSYT(\lambda,n)$, we show that $\Phi^{-1}(T)=M\in M(\lambda,n)$. By~\cite{aval}, we know that $M$ is a sign matrix, so we only need to show it satisfies the condition (\ref{eq:Mij_rowsum}). Consider the frequency representation $[a_1,a_2,a_3, \dots, a_{\lambda_1}]$ of the partition $\lambda$. Consider columns $\lambda_1-i+1$ and $\lambda_1-i+2$ of $T$ (where column $\lambda_1+1$ is understood to be empty). If a number, $\ell$, appears in both columns $\lambda_1-i+1$ and $\lambda_1-i+2$ of $T$, then $M_{i\ell}=0$. So we can ignore when a number is repeated in adjacent columns of $T$, since it corresponds to a zero in $M$, which does not contribute to the row sum. Suppose $\ell$ appears in column $\lambda_1-i+2$ of $T$ but not column $\lambda_1-i+1$. Then $M_{i\ell}=-1$. Suppose $\ell$ appears in column $\lambda_1-i+1$ of $T$ but not column $\lambda_1-i+2$. Then $M_{i\ell}=1$. So the total row sum $\displaystyle\sum_{j'=1}^n M_{ij'}$ equals the number of entries that appear in column $\lambda_1-i+1$ of $T$ but not column $\lambda_1-i+2$ minus the number of entries that appear in column $\lambda_1-i+2$ but not column $\lambda_1-i+1$. This is exactly the length of column $\lambda_1-i+1$ minus the length of column $\lambda_1-i+2$, which is given by $a_{\lambda_1-i+1}$.
See Figure~\ref{fig:(3,3,1,1,1)} and Example~\ref{ex:MtoSSYT}. \end{proof}
\begin{example} \label{ex:MtoSSYT} In Figure~\ref{fig:(3,3,1,1,1)}, we have a semistandard Young tableau $T$ of shape $[3,3,1,1,1]$ and the corresponding sign matrix $M$ formed by the bijection discussed in Theorem~\ref{thm:MtoSSYT}. To see that $M$ satisfies (\ref{eq:Mij_rowsum}), note that the total row sums of $M$ are $2$, $0$ and $3$, while the frequency representation of the partition $[3,3,1,1,1]$ is $[3,0,2]$. \end{example}
\begin{figure}
\caption{The SSYT of shape $[3,3,1,1,1]$ and corresponding sign matrix from Example~\ref{ex:MtoSSYT}.}
\label{fig:(3,3,1,1,1)}
\end{figure}
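Before turning to polytopes, we note that the map $\Phi$ used in the proof of Theorem~\ref{thm:MtoSSYT} is easy to prototype. The sketch below (the function name is ours) returns the columns of the tableau, listed left to right, associated to a sign matrix; it is an illustrative aid rather than part of the formal development.
\begin{verbatim}
def sign_matrix_to_columns(M):
    """Columns of the SSYT corresponding to the sign matrix M, listed left
    to right; row i of M determines column m - i + 1 of the tableau."""
    m, n = len(M), len(M[0])
    partial = [0] * n
    columns = []
    for i in range(m):
        partial = [partial[j] + M[i][j] for j in range(n)]  # sums through row i+1
        columns.append([j + 1 for j in range(n) if partial[j] == 1])
    columns.reverse()                       # row i of M <-> column m - i + 1 of T
    return [col for col in columns if col]  # drop empty columns

# sign_matrix_to_columns([[0, 1, 1], [1, 0, -1]]) == [[1, 2], [2, 3]]:
# the tableau with rows (1, 2) and (2, 3), of shape [2, 2].
\end{verbatim}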
\section{Definition and vertices of $P(\lambda,n)$} \label{sec:Plambda_n}
In this section, we define the first of the two polytopes that we are studying and prove some of its properties.
\begin{definition} Let \emph{$P(\lambda,n)$} be the polytope defined as the convex hull, as vectors in $\mathbb{R}^{\lambda_1 n}$, of all the matrices in $M(\lambda,n)$. Call this the \emph{sign matrix polytope of shape $\lambda$.} \end{definition}
We now investigate the structure of this polytope, starting with its dimension.
\begin{proposition} \label{prop:dim} The dimension of $P(\lambda, n)$ is $\lambda_1(n-1)$ if $1 \leq k < n$. When $k=n$, the dimension is $(\lambda_1 - \lambda_n)(n-1).$ \end{proposition}
\begin{proof} Since each matrix in $M(\lambda,n)$ is $\lambda_1 \times n$, the ambient dimension is $\lambda_1 n$. However, when constructing the sign matrix corresponding to a tableau of shape $\lambda$, as in Theorem~\ref{thm:MtoSSYT}, the last column is determined by the shape $\lambda$ via the prescribed row sums (\ref{eq:Mij_rowsum}) of Definition~\ref{def:MtoSSYT}. This is the only restriction on the dimension when $1 \leq k < n$, where $k$ is the length of $\lambda$, reducing the free entries in the matrix by one column. Thus, the dimension is $\lambda_1(n-1)$.
When $k=n$ the dimension depends on the number of columns of length $n$ in $\lambda$; this is given by $\lambda_n$. A column of length $n$ in a SSYT with entries at most $n$ is forced to be filled with the numbers $1,2,\ldots,n$. So the matrix rows corresponding to these columns are determined, and thus do not contribute to the dimension. Thus the dimension is $(\lambda_1 - \lambda_n)(n-1)$. \end{proof}
From now on, we assume $k<n$. We now define a graph associated to any matrix. The graph will be useful in upcoming theorems; see Figure~\ref{fig:partialsums}.
\begin{definition} \label{def:gamma} We define the $m\times n$ \emph{grid graph} $\Gamma_{(m,n)}$ as follows. The vertex set is $V(m,n):=\{(i,j) : 1 \leq i \leq m+1, 1 \leq j \leq n+1 \}$. We separate the vertices into two categories. We say the \emph{internal vertices} are \{$(i,j)$ : $1 \leq i \leq m, 1 \leq j \leq n$\} and the \emph{boundary vertices} are $\{(m+1,j) \mbox{ and } (i,n+1) : 1 \leq i \leq m, 1 \leq j \leq n\}$. The edge set is \[E(m,n):= \begin{cases} (i,j) \text{ to } (i+1,j) & 1 \leq i \leq m, 1 \leq j \leq n\\ (i,j) \text{ to } (i,j+1) & 1 \leq i \leq m, 1 \leq j \leq n. \end{cases}\] We draw the graph with $i$ increasing down and $j$ increasing to the right, to correspond with matrix indexing. \end{definition}
\begin{definition} \label{def:Xhat} Given an $m\times n$ matrix $X$, we define a graph, $\thickhat{X}$, which is a labeling of the edges of $\Gamma_{(m,n)}$ from Definition~\ref{def:gamma}. The horizontal edges from $(i,j)$ to $(i,j+1)$ are each labeled by the corresponding row partial sum $r_{ij}= \displaystyle\sum_{j'=1}^{j} X_{ij'}$ ($1\leq i\leq m$, $1\leq j\leq n$). Likewise, the vertical edges from $(i,j)$ to $(i+1,j)$ are each labeled by the corresponding column partial sum $c_{ij} = \displaystyle\sum_{i'=1}^{i} X_{i'j}$ ($1\leq i\leq m$, $1\leq j\leq n$). In many of the figures, we will label the interior vertices with their corresponding matrix entry $\textcolor{blue}{\bf{X_{ij}}}$ ($1\leq i\leq m$, $1\leq j\leq n$). \end{definition}
\begin{remark} \label{remark:invertible} Note that given either the row or column partial sum labels of $\thickhat{X}$, one can uniquely recover the matrix $X$. \end{remark}
See Figures~\ref{fig:partialsums} and~\ref{fig:vertices}.
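In the spirit of Remark~\ref{remark:invertible}, recovering $X$ from the vertical edge labels of $\thickhat{X}$ amounts to taking successive differences of column partial sums; a two-line Python sketch (the function name is ours) suffices.
\begin{verbatim}
def matrix_from_column_labels(c):
    """Recover X from vertical-edge labels c[i][j] = X[0][j] + ... + X[i][j]."""
    return [[c[i][j] - (c[i - 1][j] if i > 0 else 0) for j in range(len(c[0]))]
            for i in range(len(c))]
\end{verbatim}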
\begin{figure}
\caption{The graph $\thickhat{X}$ from Definition~\ref{def:Xhat}, with dots on only the internal vertices. }
\label{fig:partialsums}
\end{figure}
The above notation will be used in proving the next theorem, which identifies the vertices of $P(\lambda,n)$.
\begin{theorem} \label{thm:lambdavertex} The vertices of $P(\lambda,n)$ are the sign matrices $M(\lambda,n)$. \end{theorem}
\begin{proof} Fix a sign matrix $M\in M(\lambda,n)$. In order to show that $M$ is a vertex of $P(\lambda,n)$, we need to find a hyperplane with $M$ on one side and all the other sign matrices in $M(\lambda,n)$ on the other side. Then since $P(\lambda,n)$ is the convex hull of $M(\lambda,n)$, $M$ will necessarily be a vertex.
Let $c_{ij}$ denote the column partial sums of $M$, as in Definition~\ref{def:Xhat}. Define $C_M:=\{(i,j)\ : \ c_{ij}=1\}$.
Note that $C_M$ is unique for each $M$, since the column partial sums can only be $0$ or $1$, and by Remark~\ref{remark:invertible}, we can recover $M$ from the $c_{ij}$. Also note that $|C_M|=|\lambda|$, that is, the number of partial column sums that equal one in $M$ equals the number of boxes in $\lambda$.
Define a hyperplane in $\mathbb{R}^{\lambda_1 n}$ as follows, on coordinates $X_{ij}$ corresponding to positions in a $\lambda_1\times n$ matrix.
\begin{equation} H_M(X):=\sum_{(i,j)\in C_M} \sum_{i'=1}^i X_{i'j}=|\lambda|-\frac{1}{2} \label{eq:shape_hyper} \end{equation}
If $X=M$, then $H_M(X)= H_M(M)=|\lambda|$, since $|C_M|=|\lambda|$. Given a hyperplane formed in this manner, we may recover the matrix from which it is formed, thus $H_M$ is unique for each $M$.
By definition, every matrix in $M(\lambda,n)$ has $|\lambda|$ partial column sums that equal $1$. Let $M'\neq M$ be another matrix in $M(\lambda,n)$. It must be that there is an $(i,j)$ where $c_{ij}=1$ in $M$ and $c_{ij}=0$ in $M'$. $H_{M}(M')$ will be smaller than $H_M(M)$ by one for every time this occurs. For any $(i,j)$ such that $c_{ij}=0$ in $M$ and $c_{ij}=1$ in $M'$, $(i,j)\not\in C_M$, so this partial sum does not contribute to $H_M$.
Therefore,
$H_M(M)=|\lambda|>|\lambda|-\frac{1}{2}$ while $H_M(M')<|\lambda|-\frac{1}{2}$. Thus the sign matrices of $M(\lambda,n)$ are the vertices of $P(\lambda,n)$. \end{proof}
\begin{figure}
\caption{The six graphs corresponding to the six sign matrices in $M([2,2],3)$; these matrices correspond to SSYT of shape $[2,2]$ with entries at most $3$.}
\label{fig:vertices}
\end{figure}
\begin{example} Figure~\ref{fig:vertices} gives the six graphs corresponding to the six sign matrices in $M(\lambda,3)$ for $\lambda=[2,2]$; these matrices correspond to SSYT of shape $[2,2]$ with entries at most $3$. Let $M_e$ be the sign matrix corresponding to the graph in Figure~\ref{fig:vertices}(e). The equation for the hyperplane, $H_{M_e}$, described in Theorem~\ref{thm:lambdavertex}, is
$H_{M_e}(X)=X_{11}+(X_{11}+X_{21})+X_{13}+(X_{12}+X_{22})=2X_{11}+X_{12}+X_{13}+X_{21}+X_{22}=|\lambda|-\frac{1}{2}=3.5$. Now we substitute the entries of each matrix in $M([2,2],3)$ into this equation to show $M_e$ is the only matrix on one side of this hyperplane.
\tabbedblock{ (a): \> $X_{11}=1, X_{12}=1, X_{13}=0, X_{21}=0, X_{22}=0$ \> $\rightarrow \; H_{M_e}(M_a) =2+ 1 + 0 + 0 + 0$ \> =\; 3;\\ (b): \> $X_{11}=1, X_{12}=0, X_{13}=1, X_{21}=0, X_{22}=0$ \> $\rightarrow \; H_{M_e}(M_b)=2 + 0 + 1 + 0 + 0$ \> =\; 3;\\ (c): \> $X_{11}=0, X_{12}=1, X_{13}=1, X_{21}=1, X_{22}=0$ \> $\rightarrow \; H_{M_e}(M_c)=0 + 1 + 1 + 1 + 0$ \> =\; 3;\\ (d): \> $X_{11}=0, X_{12}=1, X_{13}=1, X_{21}=0, X_{22}=0$ \> $\rightarrow \; H_{M_e}(M_d)=0 + 1 + 1 + 0 + 0$ \> =\; 2;\\ (e): \> $X_{11}=1, X_{12}=0, X_{13}=1, X_{21}=0, X_{22}=1$ \> $\rightarrow \; H_{M_e}(M_e) = 2 + 0 + 1 + 0 + 1$ \> =\; 4;\\ (f): \> $X_{11}=0, X_{12}=1, X_{13}=1, X_{21}=1, X_{22}=\; $-1 \> $\rightarrow \; H_{M_e}(M_f) = 0 + 1 + 1 + 1 + $(-1) \> =\; 2. }
Note that $M_e$ is on one side of $2X_{11}+X_{12}+X_{13}+X_{21}+X_{22}=3.5$ and the other five matrices in $M([2,2],3)$ are on the other side. \label{ex:vertices} \end{example}
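Example~\ref{ex:vertices} can also be replayed by brute force. The sketch below (all names are ours) enumerates $M([2,2],3)$ by testing every $2\times 3$ matrix with entries in $\{-1,0,1\}$ and the required row sums, and then checks that each matrix is strictly separated from the others by its hyperplane $H_M$, as in the proof of Theorem~\ref{thm:lambdavertex}.
\begin{verbatim}
from itertools import product

def col_partial_sums(A):
    out, run = [], [0] * len(A[0])
    for row in A:
        run = [r + x for r, x in zip(run, row)]
        out.append(run)
    return out                      # out[i][j] = A[0][j] + ... + A[i][j]

def is_sign_matrix(A):
    return (all(v in (0, 1) for row in col_partial_sums(A) for v in row)
            and all(sum(row[:j + 1]) >= 0 for row in A for j in range(len(row))))

def H(M, X):
    """H_M(X): sum the column partial sums of X over the positions in C_M."""
    cM, cX = col_partial_sums(M), col_partial_sums(X)
    return sum(x for crow, xrow in zip(cM, cX)
               for c, x in zip(crow, xrow) if c == 1)

# lambda = [2, 2], n = 3: the row sums must be (a_2, a_1) = (2, 0).
mats = [[list(e[:3]), list(e[3:])] for e in product((-1, 0, 1), repeat=6)
        if sum(e[:3]) == 2 and sum(e[3:]) == 0
        and is_sign_matrix([list(e[:3]), list(e[3:])])]
assert len(mats) == 6
for M in mats:
    assert all(H(M, X) < H(M, M) for X in mats if X != M)  # H_M(M) = |lambda| = 4
\end{verbatim}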
\section{Definition and vertices of $P(m,n)$} \label{sec:Pmn}
We will now define and study another family of polytopes, constructed using all $m \times n$ sign matrices.
\begin{definition} Let $P(m,n)$ be the polytope defined as the convex hull of all $m\times n$ sign matrices. Call this the \emph{$(m,n)$ sign matrix polytope.} \end{definition}
\begin{proposition} \label{prop:pmn_dim} The dimension of $P(m,n)$ is $mn$ for all $m > 1$. \end{proposition}
\begin{proof} The zero matrix and the $mn$ matrices with a single entry equal to $1$ (and all other entries $0$) are all sign matrices. These $mn+1$ matrices are affinely independent, so all $mn$ of the entries contribute to the dimension. \end{proof}
\begin{theorem} \label{thm:mnvertex} The vertices of $P(m,n)$ are the sign matrices of size $m \times n$. \label{thm:mnverts} \end{theorem}
\begin{proof} Fix an $m \times n$ sign matrix $M$. In order to show that $M$ is a vertex of $P(m,n)$, we need to find a hyperplane in $\mathbb{R}^{mn}$ with $M$ on one side and all the other $m\times n$ sign matrices on the other side. Then since $P(m,n)$ is the convex hull of all $m \times n$ sign matrices, $M$ would necessarily be a vertex.
Let $c_{ij}=\displaystyle\sum_{i'=1}^i X_{i'j}$ in $M$, as in Definition~\ref{def:Xhat}. Recall from the proof of Theorem~\ref{thm:lambdavertex} the notation $C_M=\{(i,j)\ : \ c_{ij}=1\mbox{ in }M\}$ and $H_M(X)=\displaystyle\sum_{(i,j)\in C_M} \displaystyle\sum_{i'=1}^i X_{i'j}$.
Define a hyperplane in $\mathbb{R}^{mn}$ as follows, on coordinates $X_{ij}$ corresponding to positions in an $m\times n$ matrix. \begin{equation} K_M(X):=
H_M(X) - \sum_{(i,j)\not\in C_M} \sum_{i'=1}^i X_{i'j} = |C_M|-\frac{1}{2}. \label{eq:mn_hyper} \end{equation}
Note that $C_M$ is unique for each sign matrix $M$ since we may recover any sign matrix from its column partial sums (see Remark~\ref{remark:invertible}). Therefore $K_M$ is unique for each matrix $M$.
We wish to show the hyperplane $K_M(X)=|C_M|-\frac{1}{2}$ has $M$ on one side and all the other $m\times n$ sign matrices on the other.
Note that if $X=M$, then $K_M(X)= K_M(M)=|C_M|$. So we wish to show that given any $M'\in M(m,n)$ such that $M'\neq M$, $K_M(M')<|C_M|-\frac{1}{2}$.
We have two cases:
\textit{Case 1}: There is a position $(i,j)$ such that $c_{ij}=0$ in $M$ and $c_{ij}=1$ in $M'$. In this case, $(i,j)\not\in C_M$. So in $K_M(M')$, this partial sum gets subtracted, making $K_M(M')$ one smaller than $K_M(M)$ for every such $(i,j)$.
\textit{Case 2}: There is a position $(i,j)$ such that $c_{ij}=1$ in $M$ and $c_{ij}=0$ in $M'$. In this case, $(i,j)\in C_M$. So this partial sum contributed one to $H_M(M)$, whereas in $H_M(M')$ there is a contribution of zero. Therefore $H_M(M)$ is one greater than $H_M(M')$ so that $K_M(M)$ is one greater than $K_M(M')$ for every such $(i,j)$.
Since $M$ and $M'$ must differ in at least one column partial sum, $|C_M|=K_M(M)\geq K_M(M')+1$ so that $K_M(M')<|C_M|-\frac{1}{2}$ for all $m \times n$ sign matrices $M'$. Thus the $m\times n$ sign matrices are the vertices of $P(m,n)$. \end{proof}
\begin{figure}
\caption{Four of the $29$ partial sum graphs corresponding to the sign matrices that are vertices in $P(2,3)$ but not in $P([2,2],3)$.}
\label{fig:mnvertices}
\end{figure}
\begin{example} \label{ex:mnverts}
Let $M_h$ be the sign matrix corresponding to the graph in Figure~\ref{fig:mnvertices}(h). So $H_{M_h}(X)=2X_{11}+X_{21}$, and therefore $H_{M_h}(M_a)=H_{M_h}(M_b)=H_{M_h}(M_e)=H_{M_h}(M_h)=H_{M_h}(M_i)=H_{M_h}(M_j)=2$. This shows that the hyperplane of Theorem~\ref{thm:lambdavertex} does not separate $M_h$ from all the other $m\times n$ sign matrices. But using Theorem~\ref{thm:mnverts}, we find the needed hyperplane to be $K_{M_h}(X)=X_{11}+(X_{11}+X_{21})-X_{12}-(X_{12}+X_{22})-X_{13}-(X_{13}+X_{23})=2X_{11}+X_{21}-2X_{12}-X_{22}-2X_{13}-X_{23}=|C_M|-\frac{1}{2}=2-\frac{1}{2}=1.5$. One may calculate the following:
$K_{M_h}(M_a) = K_{M_h}(M_b) = K_{M_h}(M_e) = 0; \; K_{M_h}(M_h) = 2; \; K_{M_h}(M_i)=-2; \; K_{M_h}(M_j)= -1$. This illustrates how the hyperplane $K_{M}(X)=|C_M|-\frac{1}{2}$ separates $M$ from the other $m\times n$ sign matrices, even though $H_{M}(X)=|C_M|-\frac{1}{2}$ fails to. \end{example}
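The hyperplanes $K_M$ of Theorem~\ref{thm:mnverts} can be checked by brute force as well; the sketch below (all names are ours) verifies that $K_M(X)=|C_M|-\frac{1}{2}$ separates each of the $35$ sign matrices of size $2\times 3$ from the other $34$.
\begin{verbatim}
from itertools import product

def csums(A):
    out, run = [], [0] * len(A[0])
    for row in A:
        run = [r + x for r, x in zip(run, row)]
        out.append(run)
    return out

def is_sign(A):
    return (all(v in (0, 1) for row in csums(A) for v in row)
            and all(sum(row[:j + 1]) >= 0 for row in A for j in range(len(row))))

def K(M, X):
    """K_M(X): add column partial sums of X over C_M and subtract the rest."""
    return sum(x if c == 1 else -x
               for crow, xrow in zip(csums(M), csums(X))
               for c, x in zip(crow, xrow))

mats = [[list(e[:3]), list(e[3:])] for e in product((-1, 0, 1), repeat=6)
        if is_sign([list(e[:3]), list(e[3:])])]
assert len(mats) == 35                       # all 2 x 3 sign matrices
for M in mats:
    cutoff = sum(v for row in csums(M) for v in row) - 0.5   # |C_M| - 1/2
    assert K(M, M) > cutoff
    assert all(K(M, X) < cutoff for X in mats if X != M)
\end{verbatim}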
In the following remark, we give some properties and non-properties of $P(m,n)$ and $P(\lambda,n)$.
\begin{remark} Both $P(\lambda,n)$ and $P(m,n)$ are \emph{integral polytopes}, since all of their vertices have integer coordinates. Neither $P(\lambda,n)$ nor $P(m,n)$ is a \emph{regular polytope}. (In particular, a regular polytope has the same number of edges adjacent to each vertex.) For example, some of the vertices in $P(2,2)$ from Figure~\ref{fig:faces} are adjacent to 4 edges, while others are adjacent to 5 or 6 edges. These polytopes are not \emph{simplicial} (where every facet is a simplex, with the minimal possible number of vertices), since the facets of these polytopes have varying numbers of vertices. For example, the facets of $P(2,2)$ have between $4$ and $7$ vertices. These polytopes are not \emph{simple} (where every vertex is contained in exactly $d$ facets, $d$ being the dimension of the polytope); the vertices corresponding to $\delta_5$ and $\delta_6$ in Figure~\ref{fig:faces} are contained in $20$ and $14$ facets, respectively. \end{remark}
\section{Inequality descriptions} \label{sec:ineq} In analogy with the Birkhoff polytope \cite{birkhoff,vonneumann} and the alternating sign matrix polytope \cite{behrend,striker}, we find an inequality description of $P(\lambda,n)$.
\begin{theorem} \label{thm:ineqthmshape} $P(\lambda,n)$ consists of all $\lambda_1\times n$ real matrices $X=(X_{ij})$ such that: \begin{align} \label{eq:eq1} 0 \leq \displaystyle\sum_{i'=1}^{i} X_{i'j} &\leq 1, &\mbox{ for all }1 \leq i\leq \lambda_1, 1\le j\le n \\ \label{eq:eq2} 0 \leq \displaystyle\sum_{j'=1}^{j} X_{ij'}, & &\mbox{ for all }1 \leq i\leq \lambda_1, 1\le j\le n \\ \label{eq:eq3} \displaystyle\sum_{j'=1}^n X_{ij'} &= a_{\lambda_1-i+1}, &\mbox{ for all } 1\leq i \leq \lambda_1. \end{align} \end{theorem}
\begin{proof} This proof builds on techniques developed by Von Neumann in his proof of the inequality description of the Birkhoff polytope \cite{vonneumann}. First we need to show that any $X\in P(\lambda,n)$ satisfies $(\ref{eq:eq1}) - (\ref{eq:eq3})$. Suppose $X \in P(\lambda,n)$. Thus $X=\displaystyle \sum_{\gamma} \mu_{\gamma} M_{\gamma}$ where $\displaystyle \sum_{\gamma} \mu_{\gamma}=1$ and the $M_{\gamma}\in M(\lambda,n)$. Since we have a convex combination of sign matrices, by Definition~\ref{def:sm} we obtain (\ref{eq:eq1}) and (\ref{eq:eq2}) immediately. (\ref{eq:eq3}) follows from (\ref{eq:Mij_rowsum}) in the definition of $M(\lambda,n)$ (Definition~\ref{def:MtoSSYT}). Thus $P(\lambda,n)$ fits the inequality description.
Let $X$ be a real-valued $\lambda_1\times n$ matrix satisfying $(\ref{eq:eq1}) - (\ref{eq:eq3})$. We wish to show that $X$ can be written as a convex combination of sign matrices in $M(\lambda,n)$, so that it is in $P(\lambda,n)$. Consider the corresponding graph $\thickhat{X}$ of Definition~\ref{def:Xhat}. Let $r_{i0}=0=c_{0j}$ for all $i,j$. Then for all $1\leq i\leq \lambda_1, 1\leq j\leq n$, we have $X_{ij}=r_{ij}-r_{i, j-1}=c_{ij}-c_{i-1,j}$. Thus, \begin{equation} \label{eq:rc} r_{ij}+c_{i-1,j}=c_{ij}+r_{i, j-1}. \end{equation} If $X$ has no non-integer partial sums, then $X$ is a $\lambda_1 \times n$ sign matrix, since $(\ref{eq:eq1}) - (\ref{eq:eq3})$ reduce to Definitions \ref{def:sm} and \ref{def:MtoSSYT}.
So we assume $X$ has at least one non-integer partial sum $r_{ij}$ or $c_{ij}$. We may furthermore assume $X$ has at least one non-integer column partial sum, since if all column partial sums of $X$ were integers, $X_{ij}=c_{ij}-c_{i-1,j}$ would imply the $X_{ij}$ would be integers, thus all row partial sums would also be integers.
We construct an \emph{open} or \emph{closed circuit} in $\thickhat{X}$ whose edges are labeled by non-integer partial sums. We say a \emph{closed circuit} is a simple cycle in $\thickhat{X}$, that is, it begins and ends at the same vertex with no repetitions of vertices, other than the repetition of the starting and ending vertex. We say an \emph{open circuit} is a simple path in $\thickhat{X}$ that begins and ends at different boundary vertices along the bottom of the graph, that is, it begins at a vertex $(\lambda_1+1,j)$ and ends at vertex $(\lambda_1+1,j_0)$ for some $j_0 \neq j$.
We create such a circuit by first constructing a path in $\hat{X}$ as follows. If there exists $j$ such that $0<c_{\lambda_1j}<1$, we start the path at bottom boundary vertex $(\lambda_1+1,j)$. If there is no such $j$, we find some $c_{ij}$ such that $0<c_{ij}<1$ and start at the vertex corresponding to $X_{ij}$. By (\ref{eq:rc}), at least one of $c_{i\pm 1,j},r_{i,j\pm 1}$ is also a non-integer. Therefore, we may form a path by moving through $\thickhat{X}$ vertically and horizontally along edges labeled by non-integer partial sums.
Now $\hat{X}$ is of finite size and all the boundary partial sums on the left, right, and top are integers (since for all $i$ and $j$, $r_{i0}=c_{0j}=0$ and $r_{in}=a_{\lambda_1-i+1}$). So the path eventually reaches one of the following: $(1)$ a vertex already in the path, or $(2)$ a vertex $(\lambda_1+1,j_0)$. In Case $(2)$, this means $c_{\lambda_1j_0}$ is not an integer. But the total sum of the matrix is $\displaystyle\sum_{i=1}^{\lambda_1} r_{in}=\displaystyle\sum_{i=1}^{\lambda_1} a_{\lambda_1-i+1}$. Each $a_{\lambda_1-i+1}$ is an integer, so the total sum of all matrix entries is an integer. Since $c_{\lambda_1j_0}$ is not an integer, there must be some other column sum $c_{\lambda_1j}$ that is also not an integer. By construction, the path began at a bottom boundary vertex $(\lambda_1+1,j)$ with $c_{\lambda_1 j}$ not an integer, for some $j\neq j_0$. So this process yields an open circuit whose edge labels are all non-integer. In Case $(1)$, the constructed path consists of a simple closed loop and possibly a simple path connected to the closed loop at some vertex $X_{i_0 j_0}$. We delete this path, and keep the closed loop. This process yields a closed circuit in $\thickhat{X}$ whose edge labels are all non-integer. See Figures~\ref{fig:opencirc} and~\ref{fig:closedcirc} for examples.
Let the following denote a circuit constructed as above, where the circled $c$ and $r$ values denote the edge labels as we traverse the circuit, and
the boxed $X_{ij}$'s denote the matrix entries corresponding to the vertices on the corners of the circuit where the path changes from vertical to horizontal or vice versa. (Note how the boxes and circles appear in Figures~\ref{fig:opencirc} and~\ref{fig:closedcirc}.)
\[\left( \circled{$c_{0}$},\ldots,\circled{$c'_{0}$},\boxed{X_{i_1,j_0}},\circled{$r_{1}$},\ldots,\circled{$r'_{1}$},\boxed{X_{i_1,j_1}},\circled{$c_{1}$},\ldots,\circled{$c'_{1}$},\boxed{X_{i_2,j_1}},\circled{$r_{2}$},\ldots\right)\] Using this circuit, we are able to write $X$ as the convex combination of two new matrices, call them $X^+$ and $X^-$, that each have at least one more partial sum equal to its maximum or minimum possible value.
Construct a matrix $X^+$ by setting \[X^+_{i_{\alpha},j_{\beta}} = \begin{cases} X_{i_{\alpha}j_{\beta}} + \ell^+ &\mbox{ if } \alpha+\beta \mbox{ is odd} \\ X_{i_{\alpha}j_{\beta}} - \ell^+ &\mbox{ if } \alpha+\beta \mbox{ is even} \end{cases}\] and setting all other entries equal to the corresponding entry of $X$. That is, construct $X^+$ by alternately adding and subtracting a number $\ell^+$ from each entry in $X$ that corresponds to a corner in the circuit and leaving all other matrix entries unchanged. We will choose $\ell^+$ to be the maximum possible value that preserves $(\ref{eq:eq1}) - (\ref{eq:eq3})$ when added and subtracted from the corners as indicated above. That is, $\ell^+$ equals the minimum value of the union of the following sets: \begin{align*}
\{&c_{ij} \ | \ \mbox{ the edge labeled by } c_{ij} \mbox{ is below a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ even}\},\\
\{&1-c_{ij} \ | \ \mbox{ the edge labeled by } c_{ij} \mbox{ is below a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ odd}\},\\
\{&r_{ij} \ | \ \mbox{ the edge labeled by } r_{ij} \mbox{ is to the right of a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ even}\}. \end{align*} Note $\ell^+>0$ since all the partial sums in the circuit are non-integer.
Construct a matrix $X^-$ by setting \[X^-_{i_{\alpha},j_{\beta}} = \begin{cases} X_{i_{\alpha}j_{\beta}} - \ell^- &\mbox{ if } \alpha+\beta \mbox{ is odd} \\ X_{i_{\alpha}j_{\beta}} + \ell^- &\mbox{ if } \alpha+\beta \mbox{ is even.} \end{cases}\] and setting all other entries equal to the corresponding entry of $X$. That is, construct $X^-$ by alternately subtracting and adding a number $\ell^-$ from each entry in $X$ that corresponds to a corner in the circuit and leaving all other matrix entries unchanged. We will choose $\ell^-$ to be the maximum possible value that preserves (\ref{eq:eq1}), (\ref{eq:eq2}), and (\ref{eq:eq3}) when subtracted and added from the corners as indicated above. That is, $\ell^-$ equals the minimum value of the union of the following sets:
\begin{align*}\{&c_{ij} \ | \ \mbox{ the edge labeled by } c_{ij} \mbox{ is below a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ odd}\},\\
\{&1-c_{ij} \ | \ \mbox{ the edge labeled by } c_{ij} \mbox{ is below a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ even}\},\\
\{&r_{ij} \ | \ \mbox{ the edge labeled by } r_{ij} \mbox{ is to the right of a circuit corner } X_{i_{\alpha}j_{\beta}} \mbox{ with } \alpha+\beta \mbox{ odd}\}. \end{align*} Note $\ell^->0$ since all the partial sums in the circuit are non-integer.
Now in the case of either an open or closed circuit, there will be an even number of corners in the circuit. Note that for open circuits, each row has an even number of corners and there will be two columns with an odd number of corners, namely the columns where the path begins and ends. Whenever there is an even number of circuit corners in a row or column, this means that the same number is alternately added to and subtracted from the corners, thus the total row or column sum is not changed. Whenever there is an odd number of circuit corners in a column, this means that the total column sum will change, however it will stay between $0$ and $1$. Thus our constructions of $X^+$ and $X^-$ above are well-defined.
Both $X^+$ and $X^-$ satisfy (\ref{eq:eq1})--(\ref{eq:eq3}) by construction. Also by construction, \[X=\frac{\ell^-}{\ell^++\ell^-}X^++\frac{\ell^+}{\ell^++\ell^-}X^-\] and $\frac{\ell^-}{\ell^++\ell^-} + \frac{\ell^+}{\ell^++\ell^-} = 1$. So $X$ is a convex combination of the two matrices $X^+$ and $X^-$ that still satisfy the inequalities and are each at least one step closer to being sign matrices, since they each have at least one more partial sum attaining its maximum or minimum bound. Hence, by iterating this process, $X$ can be written as a convex combination of sign matrices in $M(\lambda,n)$. \end{proof}
\begin{figure}
\caption{Left: A matrix $X$ in $P([3,3,1],4)$; Right: An open circuit in $\thickhat{X}$.}
\label{fig:opencirc}
\end{figure}
\begin{figure}
\caption{Left: A matrix $X$ in $P([3,3],4)$; Right: A closed circuit in $\thickhat{X}$.}
\label{fig:closedcirc}
\end{figure}
\begin{figure}
\caption{The decomposition of the matrix from Figure~\ref{fig:opencirc} as the convex combination of $X^+$ and $X^-$; see Example~\ref{ex:Xk}.}
\label{fig:Xk}
\end{figure}
\begin{example} Using the open circuit in Figure~\ref{fig:opencirc}, we will find $X^+, X^-, \ell^+$ and $\ell^-$. The circuit is $\left(\textbf{\circled{.9}},\; \textbf{\circled{.9}},\; \textbf{\circled{.9}},\;\boxed{\color{blue} \textbf{.9}}, \; \circled{.9},\; \circled{.9}, \;\boxed{\color{blue} \textbf{.3}},\; \textbf{\circled{.3}}, \;\boxed{\color{blue} \textbf{.6}},\; \circled{.7}, \;\boxed{\color{blue} \textbf{-.7}},\; \textbf{\circled{.1}},\; \textbf{\circled{.3}}\right)$, where the circled and bold entries are the partial column sums and the circled non-bold entries are the row partial sums of the circuit. The matrix entries at the corners of the circuit are boxed for emphasis. To construct $X^+$, we label the corner entries alternately plus and minus, so the plus value goes on the $\boxed{\color{blue} \textbf{.9}}$ and $\boxed{\color{blue} \textbf{.6}}$ corners and the minus on the $\boxed{\color{blue} \textbf{.3}}$ and $\boxed{\color{blue} \textbf{-.7}}$ corners. Looking at the partial sums, we see that $\ell^+$ will be the minimum of $\left\{.3,\; .1, \; .3 \right\}\cup\left\{1-.9,\; 1-.9,\; 1-.9\right\}\cup\emptyset$. Thus $\ell^+ =.1$, so $.1$ will be added to plus corners and subtracted from minus corners with $X^+$ as the result. We now switch the plus and minus corners. $\ell^-$ will be the minimum of $\left\{.9,\; .9,\; .9\right\}\cup\left\{1-.3,\; 1-.1, \; 1-.3 \right\}\cup\left\{.9,\; .9,\; .7 \right\}$ so $\ell^- = .7$. So then $.7$ is added to the plus corners and subtracted from the minus corners to get $X^-$. Thus we may write the matrix as the convex combination of the matrices $X^+$ and $X^-$ as in Figure~\ref{fig:Xk}. \label{ex:Xk} \end{example}
We now find an inequality description of $P(m,n)$.
\begin{theorem} \label{thm:ineqthm} $P(m,n)$ consists of all $m\times n$ real matrices $X=\{X_{ij}\}$ such that: \begin{align} 0 &\le \displaystyle\sum_{i'=1}^{i} X_{i'j} \le 1 &\mbox{ for all } 1\le i \le m, 1\le j\le n. \\ 0 &\le \displaystyle\sum_{j'=1}^{j} X_{ij'} &\mbox{ for all }1\le i \le m, 1 \le j \le n. \end{align} \end{theorem}
\begin{proof} The proof follows the proof of Theorem~\ref{thm:ineqthmshape}, with a few differences. The open circuits are no longer restricted to start and end at the bottom of the matrix; they may also start and end at vertices $(i,n+1)$ and $(i_0,n+1)$ ($i\neq i_0$) on the right border of $\Gamma_{(m,n)}$,
or they may start at the bottom at vertex $({m+1,j})$ and end on the right at vertex $(i,n+1)$. Therefore the evenness of corners is not needed here, since unlike in Theorem~\ref{thm:ineqthmshape}, there is no analogue of Equation~(\ref{eq:eq3}) that specifies the row sums. With these less restrictive exceptions, the matrices $X^-$ and $X^+$ will be found in the same way as in the proof of Theorem~\ref{thm:ineqthmshape}. \end{proof}
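Both inequality descriptions translate directly into membership tests. The following sketch (function names and the tolerance parameter are ours) checks the inequalities of Theorems~\ref{thm:ineqthmshape} and~\ref{thm:ineqthm} for a real matrix $X$; it is meant only as an illustration of the two theorems.
\begin{verbatim}
def in_P_mn(X, tol=1e-9):
    """Column partial sums in [0, 1] and row partial sums >= 0."""
    m, n = len(X), len(X[0])
    for j in range(n):
        c = 0.0
        for i in range(m):
            c += X[i][j]
            if c < -tol or c > 1 + tol:
                return False
    for i in range(m):
        r = 0.0
        for j in range(n):
            r += X[i][j]
            if r < -tol:
                return False
    return True

def in_P_shape(X, shape, tol=1e-9):
    """The P(m, n) inequalities together with the fixed row sums:
    row i of X must sum to a_{lambda_1 - i + 1}."""
    lam1 = shape[0]
    freq = [list(shape).count(p) for p in range(1, lam1 + 1)]  # a_1, ..., a_{lambda_1}
    return (all(abs(sum(X[i]) - freq[lam1 - i - 1]) <= tol for i in range(lam1))
            and in_P_mn(X, tol))

# in_P_shape([[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.0, 0.0]], [2, 2]) returns True,
# so this matrix lies in P([2, 2], 4) by the first of the two theorems.
\end{verbatim}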
\section{Facet enumerations} \label{sec:Pmninq} In this section, we use the inequality descriptions of the previous section to enumerate the facets in $P(m,n)$ and $P(\lambda,n)$. Note this is not as straightforward as counting the inequalities in the theorems of the previous section, as these inequality descriptions are not minimal.
\begin{theorem} \label{thm:facets_mn} $P(m,n)$ has $3mn-n-2(m-1)$ facets. \end{theorem}
\begin{proof} We have three defining inequalities in the inequality description of Theorem~\ref{thm:ineqthm} for each entry $X_{ij}$ of $X\in P(m,n)$: $0 \leq \displaystyle\sum_{i'=1}^{i} X_{i'j}$, $\displaystyle\sum_{i'=1}^{i} X_{i'j} \leq 1$, and $0 \leq \displaystyle\sum_{j'=1}^{j} X_{ij'}$. Therefore there are at most $3mn$ facets, each made by turning one of the inequalities to an equality. We now determine which of these inequalities give \emph{unique} facets. (See Figure~\ref{fig:facet_mn} for a visual representation of which inequalities determine duplicate facets.)
Notice first that we will always have $0 \leq X_{1j}$ (from the column partial sums). This implies that the partial sums of the first row are all nonnegative, since each entry in the first row must be nonnegative. Thus the inequalities $0 \leq \displaystyle\sum_{j'=1}^{j} X_{1j'}$ for $1\leq j\leq n$ are all unnecessary; there are $n$ inequalities of this form.
We have already counted $0 \leq X_{11}$ in the column partial sums. From the partial row sums, we have that $0 \leq X_{21}$. But in the partial column sum we have $0 \leq X_{11}+X_{21}$; this is implied by $0 \leq X_{11}$ and $0 \leq X_{21}$. Similarly, the partial column sums $0 \leq \displaystyle\sum_{i'=1}^{i} X_{i'1}$ for $2\leq i\leq m$ are all implied by $0 \leq X_{11}$ and the partial row sums $0 \leq X_{i'1}$ for $2\leq i'\leq i$.
Note that $\displaystyle\sum_{i'=1}^{m} X_{i'1} \leq 1$. Furthermore, note that $0 \leq X_{m1}$ from the row partial sums. Therefore we have that $\displaystyle\sum_{i'=1}^{m-1} X_{i'1}\leq 1 - X_{m1} \leq 1$. Similarly, the $m-1$ inequalities in the form of $\displaystyle\sum_{i'=1}^{i} X_{i'1} \leq 1$ for $1\leq i < m$ are all implied by $\displaystyle\sum_{i'=1}^{m} X_{i'1} \leq 1$ together with the partial row sums $0 \leq X_{i'1}$ for $i < i' \leq m$.
Therefore we have the number of facets to be at most $3mn-n-2(m-1)$. We claim this upper bound is the facet count. That is, a facet can be defined as all ${X} \in P(m,n)$ which satisfy exactly one of the following: \begin{align} \label{eq:f1} r_{ij} &= \displaystyle\sum_{j'=1}^{j} X_{ij'} = 0, \hspace{.3in} & 2 \leq i \leq m \text{ and } 1 \leq j \leq n \\ \label{eq:f2} c_{ij} &= \displaystyle\sum_{i'=1}^{i} X_{i'j} =0, \hspace{.3in} & 1 \leq i \leq m \text{ and } 2 \leq j \leq n \\ \label{eq:f3} c_{ij} &=\displaystyle\sum_{i'=1}^{i} X_{i'j}=1, \hspace{.3in} & 1 \leq i \leq m \text{ and } 2 \leq j \leq n \\ \label{eq:f4} r_{11} &= c_{11} = X_{11} =0 \\ \label{eq:f5} c_{m1} &= \displaystyle\sum_{i'=1}^{m} X_{i'1}=1. \end{align}
Note each equality fixes exactly one entry, thus lowering the dimension by one. Let two generic equalities of the form (\ref{eq:f1})-(\ref{eq:f5}) be denoted as $\alpha_{ij}=\gamma$ and $\beta_{de}=\delta$ for $\alpha,\beta\in\{r,c\}$ and $\gamma,\delta\in\{0,1\}$, where the choice of $r$ or $c$ for each of $\alpha$ and $\beta$ indicates whether the equality involves a row partial sum $r_{ij}$ or column partial sum $c_{ij}$, and the indices $(i,j)$ and $(d,e)$ must be in the corresponding ranges indicated by (\ref{eq:f1})-(\ref{eq:f5}). To finish the proof, we construct an $m\times n$ sign matrix $M$, such that $M$ satisfies $\alpha_{ij}=\gamma$ and not $\beta_{de}=\delta$. We work with $\thickhat{M}$ rather than $M$ itself, recalling the bijection between $M$ and $\thickhat{M}$. Recall from Definition~\ref{def:Xhat}, $\thickhat{M}$ is a graph whose horizontal edges are labeled by the partial row sums of $M$ and whose vertical edges are labeled by the partial column sums of $M$. Since all of the equalities in (\ref{eq:f1})-(\ref{eq:f5}) are given by setting a $c_{ij}$ equal to 0 or 1 or a $r_{ij}$ equal to 0, set the edge label of $\thickhat{M}$ corresponding to $\alpha_{ij}$ equal to $\gamma$ and the edge label corresponding to the equality $\beta_{de}$ equal to $1-\delta$. Now we transform $\thickhat{M}$ back to $M$ and if we can fill in the rest of the matrix so it is a sign matrix, the proof will be complete. In the cases below, we construct such a sign matrix $M$ satisfying equality $\alpha$ and not equality $\beta$.
\emph{Case 1}: $\alpha_{ij}=0$ and $\beta_{de}=1$. So in $\thickhat{M}$, $\beta_{de}=0$. It suffices to set $M$ equal to the zero matrix.
\emph{Case 2:} $\alpha_{ij}=0$ and $\beta_{de}=0$. So in $\thickhat{M}$, $\beta_{de}=1$. If $i\neq d$ and $j\neq e$, let $M_{de}=1$ and the rest of the entries equal to zero.
Suppose $\alpha=\beta=c$. If $j\neq e$, let $M_{de}=1$ and the rest of the entries equal to zero. If $j=e$ and $i<d$, let $M_{de}=1$ and the rest of the entries equal to zero. If $j=e$ and $i>d$, let $M_{de}=1$, $M_{d+1,e}=-1$, $M_{d+1,e-1}=1$, and the rest of the entries equal to zero. (Note $e\geq 2$ since $\beta=c$.)
Suppose $\alpha=\beta=r$. If $i\neq d$, let $M_{de}=1$ and the rest of the entries equal to zero. If $i=d$ and $j<e$, let $M_{de}=1$ and the rest of the entries equal to zero. If $i=d$ and $j>e$, let $M_{de}=1$, $M_{d,e+1}=-1$, $M_{d-1,e+1}=1$, and the rest of the entries equal to zero. Note since $\beta=r$, $d\geq 2$, so $d-1\geq 1$.
If $\alpha=r$ and $\beta=c$, let $M_{1e}=1$ and the rest of the entries equal to zero. (Note since $\alpha=r$, $i\geq 2$.)
If $\alpha=c$ and $\beta=r$, let $M_{d1}=1$ and the rest of the entries equal to zero. (Note since $\alpha=c$, $j\geq 2$.)
\emph{Case 3:} $\alpha_{ij}=1$ and $\beta_{de}=1$. So in $\thickhat{M}$, $\beta_{de}=0$. Note only column partial sums are set equal to 1 in the above list of equalities, so $\alpha=c$ and $\beta=c$. If $j\neq e$, set $M_{ij}=1$ and the rest of the entries of $M$ equal to zero. If $j=e$ and $i<d$, set $M_{ij}=M_{i+1,j-1}=1$ and $M_{i+1,j}=-1$ and all other entries equal to zero. Note $j-1\geq 1$ since (\ref{eq:f3}) requires that $2\leq j\leq n$. If $j=e$ and $i>d$, set $M_{ij}=1$ and the rest of the entries of $M$ equal to zero.
\emph{Case 4:} $\alpha_{ij}=1$ and $\beta_{de}=0$. So in $\thickhat{M}$, $\beta_{de}=1$. Note $\alpha=c$, so either $j\geq 2$ or $\alpha$ is the equality (\ref{eq:f5}), in which case $i=m$ and $j=1$. If $j\neq e$, let $M_{ij}=M_{de}=1$ and the rest of the entries zero. If $j=e$ and $\beta=c$, let $M_{1j}=1$ and the rest of the entries equal to zero. If $j=e$, $\beta=r$, and $i\neq d$, then for $j\geq 2$ let $M_{ij}=1$ and $M_{d1}=1$ (since $j\geq 2$, these ones are not in the same column) and the rest of the entries equal to zero, while for $j=1$ let $M_{d1}=1$ and the rest of the entries equal to zero (so that $c_{m1}=1$ and $r_{d1}=1$). If $j=e$, $\beta=r$, and $i=d$, set $M_{ij}=1$ and the rest of the entries equal to zero.
Thus we may always complete to a sign matrix. $M$ is constructed to satisfy $\alpha_{ij}=\gamma$ but not $\beta_{de}=\delta$, thus each of the equalities in (\ref{eq:f1})-(\ref{eq:f5}) gives rise to a unique facet. \end{proof}
\begin{figure}
\caption{$\Gamma_{(m,n)}$ decorated with symbols that represent the inequalities that do not determine facets of $P(m,n)$. Squares represent partial column sums of the form $\sum X_{ij}\leq 1$ and dots represent partial row or column sums of the form $\sum X_{ij}\geq 0$.}
\label{fig:facet_mn}
\end{figure}
We now state a theorem on the number of facets of $P(\lambda,n)$. We then give simpler formulas as corollaries in the special cases of two-row shapes, rectangles, and hooks. First, recall that $k$ is the number of parts of $\lambda$, and that $a_{\lambda_1}$ is the number of parts in $\lambda$ of size $\lambda_1$.
\begin{theorem} \label{thm:facets_lambda} The number of facets of $P(\lambda,n)$ is: \begin{center} \begin{equation} \label{eq:facets} 3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-\lambda_2+\lambda_{n-1}) -(k-a_{\lambda_1}) -2(\lambda_1-D(\lambda)) - C(\lambda) \end{equation}
\end{center} where $D(\lambda)$ is the number of distinct part sizes of $\lambda$ (each part size counts once, even though there may be multiple parts of a given size), we take $\lambda_i=0$ if $k<i$, and $C(\lambda)$ equals the following: \[C(\lambda)=\begin{cases}
2 & \mbox{ if } \ k=1, \\
1 & \mbox{ if } \ 1<k<n-1 \ \mbox{and} \ \lambda_1\neq\lambda_2, \\
0 & \mbox{ if } \ 1<k<n-1 \ \mbox{and} \ \lambda_1=\lambda_2, \\
2 & \mbox{ if } \ k=n-1 \ \mbox{and either} \ \lambda_1\neq\lambda_2 \ \mbox{or} \ \lambda=\lambda_1^{k}, \\
1 & \mbox{ if } \ k=n-1, \ \lambda_1=\lambda_2, \ \mbox{and} \ \lambda\neq \lambda_1^{k}. \\
\end{cases}\] \end{theorem} \begin{proof} By Theorem~\ref{thm:facets_mn}, since $P(\lambda,n)$ satisfies all the inequalities satisfied by $P(m,n)$ for $m=\lambda_1$, we have at most $3n\lambda_1 - n -2(\lambda_1-1)$ facets, given by the equalities (\ref{eq:f1})--(\ref{eq:f5}). See Figure~\ref{fig:facet_mn}.
But note equalities of the form (\ref{eq:f1}) with $j=n$ no longer give facets, since by (\ref{eq:eq3}) the total sum of each matrix row is fixed. There are $\lambda_1-1$ such inequalities, so we now have at most $3\lambda_1 n - n -3(\lambda_1-1)$ facets. See Figure~\ref{fig:facet_shape1}.
To prove our count in (\ref{eq:facets}), we determine which of the remaining equalities in (\ref{eq:f1})--(\ref{eq:f5}) are unnecessary. We discuss each remaining term of (\ref{eq:facets}) below. Let $X\in P(\lambda,n)$.
\begin{enumerate} \item \label{bullet1}$-{(n-2)(\lambda_1-\lambda_2+\lambda_{n-1})}$: First, suppose $\lambda_1\neq \lambda_2$, otherwise $(n-2)(\lambda_1-\lambda_2)=0$. Since $\lambda_1\neq\lambda_2$, the first row of $X$ sums to 1 and the next $\lambda_1-\lambda_2-1$ rows sum to 0. So the first $i$ rows all together sum to 1 for any $1\leq i\leq\lambda_1-\lambda_2$. That is, for any fixed $i\in[1,\lambda_1-\lambda_2]$, $\displaystyle\sum_{i'=1}^{i}\displaystyle\sum_{j'=1}^n X_{i'j'}=1$. Also, by (\ref{eq:eq1}), $\displaystyle\sum_{i'=1}^{i} X_{i'j}\geq 0$, and by (\ref{eq:eq2}), $\displaystyle\sum_{j'=1}^{j} X_{ij'}\geq 0$. So we have the following sum: \[1=\displaystyle\sum_{i'=1}^{i}\displaystyle\sum_{j'=1}^n X_{i'j'}=\displaystyle\sum_{i'=1}^{i}\underbrace{\displaystyle\sum_{j'=1}^{j-1} X_{i'j'}}_{\geq 0}+\displaystyle\sum_{j'=j}^{n} \underbrace{\displaystyle\sum_{i'=1}^{i} X_{i'j'}}_{\geq 0}. \] Since we have all positive terms summing to 1, none of these terms may exceed 1. Therefore, $\displaystyle\sum_{i'=1}^{i} X_{i'j}\leq 1$ for all $1\leq i\leq\lambda_1-\lambda_2$, $1\leq j\leq n$.
Thus the partial sums of the form $\displaystyle\sum_{i'=1}^i X_{i'j}\leq 1$ for $1\leq i\leq\lambda_1-\lambda_2$, $1\leq j\leq n$ are unnecessary. We have already disregarded these inequalities for $j=1$, $1\leq i\leq \lambda_1-1$ in Theorem~\ref{thm:facets_mn}. We will consider $j=1$, $i=\lambda_1$ (which occurs only when $\lambda_2=0$) in (\ref{bullet4}). We will count the partial column sums in the $n$th column in (\ref{bullet3}). Thus, for this term we count the $(n-2)(\lambda_1-\lambda_2)$ unnecessary inequalities $\displaystyle\sum_{i'=1}^i X_{i'j}\leq 1$ for $1\leq i\leq\lambda_1-\lambda_2$, $2\leq j\leq n-1$.
Now suppose $k=n-1$ so that $\lambda_{n-1}\neq 0$, otherwise $(n-2)\lambda_{n-1}=0$. Since $\lambda_{n-1}\neq 0$, the rows of $X$ below row $\lambda_1-\lambda_{n-1}+1$ each sum to $0$. That is, for any fixed $i\in[\lambda_1-\lambda_{n-1}+1,\lambda_1]$, $\displaystyle\sum_{i'=i+1}^{\lambda_1}\displaystyle\sum_{j'=1}^n X_{i'j'}=0$. Also, by (\ref{eq:eq3}), $\displaystyle\sum_{i'=1}^{\lambda_1}\displaystyle\sum_{j'=1}^n X_{i'j'}=\displaystyle\sum_{i'=1}^{\lambda_1}a_{\lambda_1-i'+1}=k=n-1$, since $\lambda$ has $n-1$ parts. Also, by (\ref{eq:eq1}), $\displaystyle\sum_{i'=1}^{i} X_{i'j}\leq 1$. So we have the following sum: \[n-1=\displaystyle\sum_{j'=1}^n\displaystyle\sum_{i'=1}^{\lambda_1}X_{i'j'}=\displaystyle\sum_{j'=1}^n\underbrace{\displaystyle\sum_{i'=1}^{i} X_{i'j'}}_{\leq 1}+\underbrace{\displaystyle\sum_{j'=1}^{n} \displaystyle\sum_{i'=i+1}^{\lambda_1} X_{i'j'}}_{=0} \] Since we have $n$ terms $\displaystyle\sum_{i'=1}^{i} X_{i'j'}$ summing to $n-1$, each at most $1$, none of these terms may be negative. Therefore, $\displaystyle\sum_{i'=1}^{i} X_{i'j}\geq 0$ for all $\lambda_1-\lambda_{n-1}+1\leq i\leq\lambda_1$, $1\leq j\leq n$.
Thus the partial sums of the form $\displaystyle\sum_{i'=1}^i X_{i'j}\geq 0$ for $\lambda_1-\lambda_{n-1}+1\leq i\leq\lambda_1$, $1\leq j\leq n$ are unnecessary. We have already disregarded these inequalities for $j=1$, $2\leq i\leq \lambda_1$ in Theorem~\ref{thm:facets_mn}. We will count the partial column sums in the $n$th column in (\ref{bullet3}). Thus, for this term we count the $(n-2)\lambda_{n-1}$ unnecessary inequalities $\displaystyle\sum_{i'=1}^i X_{i'j}\geq 0$ for $\lambda_1-\lambda_{n-1}+1\leq i\leq\lambda_1$, $2\leq j\leq n-1$. See Figure~\ref{fig:facet_shape1}.
\item \label{bullet2}$-(k-a_{\lambda_1})$: Let $i>1$. By (\ref{eq:eq3}), $\displaystyle\sum_{j'=1}^{n} X_{i j'}= a_{\lambda_1-i+1}$. Now $0\leq \displaystyle\sum_{i'=1}^{i-1} X_{i'n}$ and $\displaystyle\sum_{i'=1}^{i} X_{i'n}\leq 1$ imply $X_{in}\leq 1$, so we have $\displaystyle\sum_{j'=1}^{n-1} X_{i j'}\geq a_{\lambda_1-i+1}-1$. This implies the inequality $\displaystyle\sum_{j'=1}^{n-1} X_{i j'}\geq 0$ whenever $a_{\lambda_1-i+1}>0$. Similarly, $\displaystyle\sum_{j'=1}^{n-z} X_{i j'}\geq a_{\lambda_1-i+1}-z$ for all $1\leq z\leq a_{\lambda_1-i+1}$ since the last $z$ entries in that row sum to at most $z$ (since entries can be no more than 1, by the column partial sums). Thus, the $a_{\lambda_1-i+1}$ inequalities $\displaystyle\sum_{j'=1}^{n-z} X_{i j'}\geq 0$, $1\leq z\leq a_{\lambda_1-i+1}$, are unnecessary. By reindexing, this is equivalent to $\displaystyle\sum_{j'=1}^{j} X_{i j'}\geq 0$, $n-a_{\lambda_1-i+1}\leq j\leq n-1$.
We already discarded all the row partial sum inequalities in the first row in Theorem~\ref{thm:facets_mn}, so we do not count those here. Thus $a_{\lambda_1}$ is not included. So we have ${\displaystyle\sum_{i'=1}^{\lambda_1-1} a_{i'}}$ unnecessary partial sum inequalities. This equals the total number of parts of $\lambda$ minus the number of parts with part size $\lambda_1$, that is, $k-a_{\lambda_1}$. See Figure~\ref{fig:facet_shape3}.
\item \label{bullet3} $-2(\lambda_1-D(\lambda))$: Suppose $a_{\lambda_1-i+1}=0$ so that the total sum of row $i$ of $X$ equals 0. Then the last entry $X_{in}$ may not be greater than $0$, since this would contradict $\displaystyle\sum_{j'=1}^{n-1} X_{ij'}\geq 0$. So the inequality $\displaystyle\sum_{i'=1}^i X_{i'n} \leq 1$ is unnecessary. Also, since the total sum of row $i$ of $X$ equals 0, we have then $X_{in}= -\displaystyle\sum_{j=1}^{n-1} X_{ij}$. In addition, $\displaystyle\sum_{i'=1}^i X_{i'n}\geq 0$. We substitute the previous equality into this inequality to obtain $\displaystyle\sum_{i'=1}^{i-1} X_{i'n}-\displaystyle\sum_{j=1}^{n-1} X_{ij}\geq 0$. We know $\displaystyle\sum_{j=1}^{n-1} X_{ij}\geq 0$, so this implies $\displaystyle\sum_{i'=1}^{i-1} X_{i'n}\geq 0$.
So for each $a_{\lambda_1-i+1}=0$ we have two unnecessary inequalities: $\displaystyle\sum_{i'=1}^i X_{i'n} \leq 1$ and $\displaystyle\sum_{i'=1}^{i-1} X_{i'n}\geq 0$. The number of row sums equal to zero is given by the number of integers $\ell$ with $1\leq\ell\leq\lambda_1$ such that $a_{\ell}=0$. This count equals $\lambda_1-D(\lambda)$, where $D(\lambda)$ equals the number of distinct part sizes of $\lambda$. Thus, we have $2(\lambda_1-D(\lambda))$ unnecessary inequalities. See Figure~\ref{fig:facet_shape3}.
\item \label{bullet4} $-C(\lambda)$: We now have a few more border inequalities to discard, depending on $\lambda$. We take each case in turn. See Figure~\ref{fig:facet_shape4}.
\begin{enumerate} \item \label{a} When $\lambda_1\neq\lambda_2$, we may also discard the inequality $X_{1 n} \leq 1$, as this is a partial sum of the form $\displaystyle\sum_{i'=1}^i X_{i'n} \leq 1$ for $1\leq i\leq\lambda_1-\lambda_2$, which by reasoning in (\ref{bullet1}) may be discarded. The other inequalities of that form have already been counted in (3), thus we have one additional unnecessary inequality whenever $\lambda_1\neq\lambda_2$. Note, since $\lambda_2=0\neq\lambda_1$ for $k=1$, this inequality is also discarded in the case $k=1$.
\item \label{b} When $k=1$, since $\displaystyle\sum_{j'=1}^n X_{1j'}=1$ and $\displaystyle\sum_{j'=1}^n X_{ij'}=0$ for all $2\leq i\leq\lambda_1$, we have that the sum of all the entries in the matrix is $1$. This, together with the inequalities $\displaystyle\sum_{i'=1}^{\lambda_1} X_{i'j}\geq 0$, $2\leq j\leq n$, implies $\displaystyle\sum_{i'=1}^{\lambda_1} X_{i'1}\leq 1$. So we have one additional unnecessary inequality when $k=1$.
\item \label{c} When $1<k=n-1$, by the reasoning in the $k=n-1$ case of (\ref{bullet1}) we may discard the inequality $\displaystyle\sum_{i'=1}^{\lambda_1} X_{i'n}\geq 0$. If $k=1$, $n=2$, we may not discard this inequality, since in this case we have already discarded the inequality in (\ref{b}).
\item \label{d} Suppose $k=n-1$ and $\lambda$ is a rectangle, so $\lambda_{n-1}=\lambda_1$. In this case, we may also discard the inequality $X_{1 1} \geq 0$; this is a partial sum of the form $\displaystyle\sum_{i'=1}^i X_{i'1} \geq 0$ for $\lambda_1-\lambda_{n-1}+1\leq i\leq\lambda_1$ which by the reasoning in (\ref{bullet1}) may be discarded. The other inequalities of that form have already been disregarded in Theorem~\ref{thm:facets_mn}, thus we have one additional unnecessary inequality whenever $\lambda=\lambda_1^{n-1}$ and $k>1$. If $k=1$, $n=2$, we may not discard this inequality, since we have already discarded the inequality in (\ref{a}). \end{enumerate} \end{enumerate}
Thus the total number of facets is at most (\ref{eq:facets}). We claim this upper bound is the facet count. That is, a facet can be defined as all ${X} \in P(\lambda,n)$ which satisfy exactly one of the following: \begin{align} \label{eq:z1} r_{ij} &= \displaystyle\sum_{j'=1}^{j} X_{ij'} = 0, \hspace{.3in} & 2 \leq i \leq \lambda_1 \text{ and } 1 \leq j \leq n-a_{\lambda_1-i+1}-1 \\ \label{eq:z2} c_{ij} &= \displaystyle\sum_{i'=1}^{i} X_{i'j} =0, \hspace{.3in} & 1 \leq i \leq \lambda_1-\lambda_{n-1} \text{ and } 2 \leq j \leq n-1 \\ \label{eq:z2a} c_{in } &= \displaystyle\sum_{i'=1}^{i} X_{i'n} =0, \hspace{.3in} & (i=\lambda_1 \mbox{ and } k<n-1) \mbox{ or } (1 \leq i \leq \lambda_1-1 \text{ and } a_{\lambda_1-i}>0) \\ \label{eq:z3a} c_{ij} &=\displaystyle\sum_{i'=1}^{i} X_{i'j}=1, \hspace{.3in} & \lambda_1-\lambda_2+1 \leq i \leq \lambda_1 \text{ and } 2 \leq j \leq n-1 \\ \label{eq:z3n} c_{in} &=\displaystyle\sum_{i'=1}^{i} X_{i'n}=1, \hspace{.3in} & \lambda_1-\lambda_2+1 \leq i \leq \lambda_1 \text{ and } a_{\lambda_1-i+1}>0\\ \label{eq:z4} r_{11} &= c_{11} = X_{11} =0 & \text{unless } \lambda=\lambda_1^{n-1} \text{ and } k>1\\ \label{eq:z5} c_{\lambda_1 1} &= \displaystyle\sum_{i'=1}^{\lambda_1} X_{i'1}=1 & \text{if } k>1. \end{align}
Note each equality fixes exactly one matrix entry, lowering the dimension by one. By an argument similar to that given in Theorem~\ref{thm:facets_mn}, given any two equalities above, we may construct a sign matrix in $M(\lambda,n)$ that satisfies one but not the other. \end{proof}
\begin{figure}
\caption{$\Gamma_{(\lambda_1,n)}$ decorated with symbols that represent inequalities that do not determine facets of $P(\lambda,n)$. Squares represent partial column sums of the form $\sum X_{ij}\leq 1$ and dots represent partial row or column sums of the form $\sum X_{ij}\geq 0$. The filled-in shapes represent inequalities that were already removed in the facet proof for $P(m,n)$. The crosses represent the fixed row sums in $P(\lambda, n)$. The open squares and gray squares represent inequalities that are removed in (\ref{bullet1}) from the proof of Theorem~\ref{thm:facets_lambda}.}
\label{fig:facet_shape1}
\end{figure}
\begin{figure}\label{fig:facet_shape3}
\end{figure}
\begin{figure}
\caption{$\Gamma_{(\lambda_1,n)}$ decorated with symbols that represent inequalities removed in (\ref{bullet4}) from the proof of Theorem~\ref{thm:facets_lambda}. A is discussed in (\ref{d}), B is discussed in (\ref{a}), C is discussed in (\ref{b}), and D is discussed in (\ref{c}). }
\label{fig:facet_shape4}
\end{figure}
\begin{corollary} The number of facets of $P([\lambda_1,\lambda_2],n)$ when $\lambda_1 \ne \lambda_2$ is as follows: \begin{itemize} \item $3n\lambda_1-n-5(\lambda_1-1)-(n-2)(\lambda_1-\lambda_2)$, when $n>3$; \item $3n\lambda_1-n-5(\lambda_1-1)-(n-2)\lambda_1-1$, when $n=3$. \end{itemize} \end{corollary} \begin{proof} Suppose $\lambda_1\neq\lambda_2$ and $n > 3$. Then $a_{\lambda_1}=1$, $D(\lambda)=2$, and $C(\lambda)=1$ (from the definition in Theorem~\ref{thm:facets_lambda}). Thus since $k=2$, the formula of Theorem~\ref{thm:facets_lambda} specializes to $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-\lambda_2)-(2-1)-2(\lambda_1-2)-1$, which reduces to the above formula. Now suppose $\lambda_1\neq\lambda_2$ and $n = 3$. In this case $\lambda_{n-1}=\lambda_2$ and $C(\lambda)=2$ but the rest of the values remain the same. Thus, the formula of Theorem~\ref{thm:facets_lambda} specializes to the above. \end{proof}
In the above corollary, we required $\lambda_1 \ne \lambda_2$. The case $\lambda=[\lambda_1, \lambda_1]=[\lambda_1^2]$ is a special case of the next corollary, which enumerates the facets when $\lambda$ is a rectangle.
\begin{corollary} \label{cor:squares} The number of facets of $P(\lambda_1^k,n)$ is as follows: \begin{itemize} \item 0, when $k=n$; \item $2n\lambda_1-n-3(\lambda_1-1)$, when $k=n-1$ or $k=1$; \item $3n\lambda_1-n-5(\lambda_1-1)$, when $1<k<n-1$. \end{itemize} \end{corollary}
\begin{proof} Suppose $k=n$. By Proposition~\ref{prop:dim}, since $k=n$ we have that the dimension of $P(\lambda_1^k)$ equals $(\lambda_1-\lambda_n)(n-1)=(\lambda_1-\lambda_1)(n-1)=0$. Since the polytope is zero dimensional, there are no facets.
Suppose $k=n-1$. We then have the following: $\lambda_1=\lambda_2= \lambda_{n-1}$, $a_{\lambda_1}=k=n-1$, $D(\lambda)=1$, and $C(\lambda)=2$. Therefore by Theorem~\ref{thm:facets_lambda} the number of facets is $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(0+\lambda_1)-0-2(\lambda_1-1)-2$ which reduces to the formula above.
For $1<k<n-1$, by Theorem~\ref{thm:facets_lambda} the number of facets is $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-\lambda_2+\lambda_{n-1}) -(k-a_{\lambda_1})-2(\lambda_1-D(\lambda)) - C(\lambda)$. Since $\lambda_1=\lambda_2$ and $\lambda_{n-1}=0$, the 4th term equals $0$. The 5th term equals $0$ since $a_{\lambda_1}=k$. Note $D(\lambda_1^k)=1$, so the 6th term equals $2(\lambda_1-1)$. $C(\lambda)=0$, so the resulting count follows.
When $k=1$, by Theorem~\ref{thm:facets_lambda} the number of facets is $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-\lambda_2+\lambda_{n-1}) -(k-a_{\lambda_1})-2(\lambda_1-D(\lambda)) - C(\lambda) = 3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-0+0)-(1-1)-2(\lambda_1-1) - 2$, since $a_{\lambda_1}=D(\lambda)=1$ and $\lambda_2=0$. The resulting count follows. \end{proof}
Finally, we have the following corollary in the case that $\lambda$ is hook-shaped. \begin{corollary} The number of facets of $P([\lambda_1,1^{k-1}],n)$ is as follows: \begin{itemize} \item $2n(\lambda_1-1)-n-3(\lambda_1-2)$, when $k=n$; \item $2n\lambda_1-2n-3(\lambda_1-1)+4$, when $k=n-1$; \item $2n\lambda_1-3(\lambda_1-1)-k+2$, when $1<k <n-1$. \end{itemize} \end{corollary}
\begin{proof} When $k=n$, the first column of the tableau corresponding to any sign matrix in the polytope is fixed as $1, 2, \ldots, n$, so this reduces to the case of rectangles of one row, that is, shape $[\lambda_1-1]$. So by Corollary~\ref{cor:squares}, we have $2n(\lambda_1-1)-n-3((\lambda_1-1)-1)=2n\lambda_1-3n-3\lambda_1+6$ facets.
When $k=n-1$, in the formula in Theorem~\ref{thm:facets_lambda} we have that $\lambda_2=\lambda_{n-1}=1$, $a_{\lambda_1}=1$, $D(\lambda)=2$ and $C(\lambda)=2$. Therefore, by Theorem~\ref{thm:facets_lambda} the number of facets is $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-1+1)-(n-1-1)-2(\lambda_1-2)-2$, which when simplified yields the desired result.
When $1<k<n-1$, $a_{\lambda_1}=1$, $D(\lambda)=2$, and $\lambda_1\neq\lambda_2$ so $C(\lambda)=1$. So by Theorem~\ref{thm:facets_lambda} the number of facets is $3n\lambda_1-n-3(\lambda_1-1)-(n-2)(\lambda_1-1)-(k-1)-2(\lambda_{1}-2) -1$, which when simplified yields the desired result. \end{proof}
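For quick cross-checking of the counts above, the formula of Theorem~\ref{thm:facets_lambda} can be evaluated mechanically. The sketch below (all names are ours, and we assume $1\le k<n$) simply encodes the formula; on small shapes it agrees with the corollaries.
\begin{verbatim}
def facet_count(shape, n):
    """Evaluate the facet-count formula for P(lambda, n), where shape is the
    partition lambda with 1 <= k < n parts."""
    lam = list(shape)
    lam1, k = lam[0], len(lam)
    get = lambda i: lam[i - 1] if i <= k else 0           # lambda_i, zero if k < i
    a = [lam.count(p) for p in range(1, lam1 + 1)]        # a_1, ..., a_{lambda_1}
    D = sum(1 for x in a if x > 0)                        # distinct part sizes
    rect = all(p == lam1 for p in lam)                    # lambda = lambda_1^k
    if k == 1:
        C = 2
    elif k < n - 1:
        C = 1 if lam1 != get(2) else 0
    else:                                                 # k = n - 1
        C = 2 if (lam1 != get(2) or rect) else 1
    return (3 * n * lam1 - n - 3 * (lam1 - 1)
            - (n - 2) * (lam1 - get(2) + get(n - 1))
            - (k - a[lam1 - 1])
            - 2 * (lam1 - D)
            - C)

# facet_count([2, 2], 4) == 15 and facet_count([2, 2], 3) == 6,
# matching the rectangle cases 1 < k < n-1 and k = n-1 above.
\end{verbatim}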
\section{Face lattice descriptions} \label{sec:facelattice}
In this section, we determine the face lattice of the $P(m,n)$ and $P(\lambda,n)$ polytope families. We also show that given any two faces, we may determine the smallest dimensional face in which they are contained. The ideas for proving the face lattice were inspired by \cite{striker} and \cite{heean}.
\begin{definition}[\cite{ziegler}] The \emph{face lattice} of a convex polytope $P$ is the poset $L:= L(P)$ of all faces of $P$, partially ordered by inclusion. \end{definition}
\begin{definition} We define the \emph{complete partial sum graph} denoted $\overline\Gamma_{(m,n)}$ as the following labeling of the graph $\Gamma_{(m,n)}$. Each horizontal edge is labeled with the set $\{0,\star\}$, while each vertical edge is labeled with the set $\{0,1\}$. An example is shown for $P(3,5)$ in Figure~\ref{fig:completegraph}. \label{def:completegraph} \end{definition}
\begin{figure}
\caption{The complete partial sum graph $\overline\Gamma_{(3,5)}$.}
\label{fig:completegraph}
\end{figure}
\begin{definition} \label{def:0-dim} A \emph{0-dimensional} component of $\overline\Gamma_{(m,n)}$ is a labeling of $\Gamma_{(m,n)}$ such that the edge labels are one element subsets of the edge labels of $\overline\Gamma_{(m,n)}$ and such that the edge labels come from the partial sums of a sign matrix as follows: Let the edges be labeled as in $\thickhat{M}$ for some $m\times n$ sign matrix $M$, with the exception that horizontal edges labeled by nonzero numbers in $\thickhat{M}$ are now labeled as $\star$. For any $m\times n$ sign matrix $M$, let $g(M)$ be the $0$-dimensional component associated to $M$. \end{definition}
\begin{lemma} \emph{0-dimensional} components of $\overline\Gamma_{(m,n)}$ are in bijection with $m\times n$ sign matrices. \end{lemma} \begin{proof} Recall we may recover a sign matrix $M$ from its column partial sums. Thus, even though we are not keeping the exact values of the row partial sums, we still have enough information to recover a sign matrix $M$ from $g(M)$. Thus, given sign matrices $M_1\neq M_2$, $g(M_1)\neq g(M_2)$. \end{proof}
\begin{definition} \label{def:union} Let $\delta$ and $\delta'$ be labelings of $\Gamma_{(m,n)}$ such that the edge labels are subsets of the corresponding edge label sets in $\overline\Gamma_{(m,n)}$. Define the \emph{union} $\delta\cup \delta'$ as the labeling of $\Gamma_{(m,n)}$ such that each edge is labeled by the union of the corresponding labels on $\delta$ and $\delta'$, where we consider $0\cup\star=\star$. Define the \emph{intersection} $\delta\cap\delta'$ to be a labeling of $\Gamma_{(m,n)}$ such that each edge is labeled by the intersection of the corresponding labels on $\delta$ and $\delta'$, where we consider $0\cap\star=0$. So the vertical edges will have labels of $\emptyset, 0, 1$, or $\{0,1\}$ and the horizontal edges will have labels of $0$ or $\star$. In our figures, vertical edges labeled $\{0,1\}$ and horizontal edges labeled $\star$ will be darkened (blue). \end{definition}
\begin{definition} Let $\delta$ be a labeling of $\Gamma_{(m,n)}$ such that the edge labels are subsets of the corresponding edge label sets in $\overline\Gamma_{(m,n)}$. \begin{enumerate} \item $\delta$ is a \emph{component} of $\overline\Gamma_{(m,n)}$ if it is either the empty labeling of $\Gamma_{(m,n)}$ (we call this the \emph{empty component} denoted $\emptyset$) or if it can be presented as the union of any set of $0$-dimensional components. \item For two components $\delta$ and $\delta'$ of $\overline\Gamma_{(m,n)}$, we say $\delta$ is a \emph{component of} $\delta'$ if the edge labels of $\delta$ are each a subset of the corresponding edge labels of $\delta'$, where we consider $0$ to be a subset of $\star$. \end{enumerate} \label{def:face} \end{definition}
\begin{remark} \label{remark:union} Note if $\delta$ and $\delta'$ are components of $\overline{\Gamma}_{(m,n)}$, $\delta\cup\delta'$ is also a component. This is because each of $\delta$ and $\delta'$ is a union of 0-dimensional components, so $\delta\cup\delta'$ is as well. \end{remark}
\begin{figure}
\caption{A set of components of $\overline\Gamma_{(2,2)}$.}
\label{fig:faces}
\end{figure}
Next, we define a partial order on components of $\overline\Gamma_{(m,n)}$.
\begin{definition} Define a {partial order} $\Lambda_{(m,n)}$ on components of $\overline\Gamma_{(m,n)}$ by containment. That is, $\delta\leq \delta'$ in $\Lambda_{(m,n)}$ if and only if $\delta$ is a component of $\delta'$. Say $\delta'$ \emph{covers} $\delta$, denoted $\delta\lessdot\delta'$, if $\delta$ is contained in $\delta'$ and there is no component $\delta''$ of $\overline\Gamma_{(m,n)}$ such that $\delta<\delta''<\delta'$. \end{definition}
\begin{remark} \label{remark:meet_join} For components $\delta$ and $\delta'$ of $\overline\Gamma_{(m,n)}$, we may define $\delta \vee \delta' = \delta\cup\delta'$. By Remark~\ref{remark:union}, this is itself a component of $\overline\Gamma_{(m,n)}$. Also, it is the smallest component containing both $\delta$ and $\delta'$ as subcomponents, so this is the \emph{join operator} of $\Lambda_{(m,n)}$. We will show in Theorems~\ref{th:g_bijection} and \ref{thm:poset_iso} that $\Lambda_{(m,n)}$ is the face lattice of $P(m,n)$, thus there also exists a well-defined meet operator, since $\Lambda_{(m,n)}$ is a lattice. The {meet} $\delta \wedge \delta'$ will be the maximal component contained in the intersection $\delta \cap \delta'$; note this could be the empty component. \end{remark}
\begin{remark} \label{remark:max} Note the maximal component of $\Lambda_{(m,n)}$ is the union of all $0$-dimensional components. Thus, it has labels $\{0,1\}$ on the vertical edges of $\Gamma_{(m,n)}$ and $\star$ on the horizontal edges. \end{remark}
\begin{example} We show examples of several of the above definitions using Figure~\ref{fig:faces} (which by the upcoming Theorems~\ref{th:g_bijection} and \ref{thm:poset_iso} is the face lattice of one of the $3$-dimensional faces of $P(2,2)$).
\begin{enumerate} \item[i).] We first exhibit a component as a union of $0$-dimensional components: $\delta_{025}=\delta_0 \cup \delta_2 \cup \delta_5$. \item[ii).] We now show how the union of two components can contain more $0$-dimensional components than are contained in the original component: $\delta_{14} \cup \delta_{46} = \delta_{0123456}$. Note $\delta_{0123456}$ is the join. \item[iii).] Next we intersect two components: $\delta_{2456} \cap \delta_{015}= \delta_5$. Note $\delta_5$ is the meet. \item[iv).] To illustrate containment of components, note the $1$-dimensional components $\delta_{01}, \delta_{03}$, and $\delta_{13}$ are all contained in the $2$-dimensional component $\delta_{013}$. \end{enumerate} \end{example}
\begin{definition} \label{def:region} Given a component $\delta\in\Lambda_{(m,n)}$, consider the planar graph $G$ composed of the darkened edges of $\delta$; we regard any darkened edges on the right and bottom as meeting at a point in the exterior region. We say a \emph{region} of $\delta$ is defined as a planar region of $G$, excluding the exterior region. Let $\mathcal{R}(\delta)$ denote the number of regions of $\delta$. For consistency we set $\mathcal{R}(\emptyset)=-1$. \end{definition}
See Figure~\ref{fig:region} for an example of this definition.
We now state a lemma which shows that moving up in the partial order $\Lambda_{(m,n)}$ increases the number of regions. We will use this lemma in the proof of Theorem~\ref{thm:poset_iso}.
\begin{lemma} \label{prop:regions} Suppose a component $\delta\in\Lambda_{(m,n)}$ has $\mathcal{R}(\delta)=\omega$. If $\delta\lessdot\delta'$ then $\mathcal{R}(\delta')\geq \omega+1$. \end{lemma}
\begin{proof} By convention, the empty component has $\mathcal{R}(\emptyset)=-1$. If $\delta$ is a $0$-dimensional component, $\mathcal{R}(\delta)=0$, as there are no regions in a $0$-dimensional component. Suppose a component $\delta\in\Lambda_{(m,n)}$ has $\mathcal{R}(\delta)=\omega$. We wish to show if $\delta\lessdot\delta'$ then $\mathcal{R}(\delta')\geq \omega+1$. $\delta\lessdot\delta'$ implies that the labels of each edge of $\delta$ are subsets of the labels of each edge of $\delta'$. Thus all the $0$-dimensional components contained in $\delta$ are also contained in $\delta'$. $\delta'$ must contain at least one more $0$-dimensional component than $\delta$, otherwise $\delta'$ would equal $\delta$. This $0$-dimensional component differs from any other $0$-dimensional component in $\delta$ by at least one circuit of differing partial sums: consider a $0$-dimensional component in $\delta'$ that has a partial column sum that differs from the corresponding partial sum in any $0$-dimensional component in $\delta$. By Equation~(\ref{eq:rc}), at least one adjacent row or column partial sum of $\delta'$ must also differ from the corresponding partial sum in $\delta$. Thus, $\delta'$ has at least one new open or closed circuit of darkened edges, creating at least one new region. So $\mathcal{R}(\delta')\geq \omega+1$. \end{proof}
We now define a map, which we show in Theorem~\ref{th:g_bijection} gives a bijection between faces of $P(m,n)$ and components of $\overline{\Gamma}_{(m,n)}$. \begin{definition} \label{def:g(F)} Given a collection of sign matrices $\mathcal{M}=\{M_1,M_2,\dots,M_q\}$, we define the map $g(\mathcal{M})=\displaystyle\bigcup_{i=1}^q g(M_i)$, where $g(M_i)$ is as in Definition~\ref{def:0-dim}. \end{definition}
\begin{theorem} \label{th:g_bijection} Let $F$ be a face of $P(m,n)$ and $\mathcal{M}(F)$ be equal to the set of sign matrices that are vertices of $F$. The map $\psi:F\mapsto g(\mathcal{M}(F))$ is a bijection between faces of $P(m,n)$ and components of $\overline{\Gamma}_{(m,n)}$. \end{theorem}
\begin{proof} Let $F$ be a face of $P(m,n)$. Then $g(\mathcal{M}(F))$ is a component of $\overline\Gamma_{(m,n)}$ since $g(\mathcal{M}(F))=\displaystyle\bigcup_{i=1}^q g(M_i)$ is a union of 0-dimensional components. We now construct the inverse of $\psi$, call it $\varphi$. Given a component $\nu$ of $\overline\Gamma_{(m,n)}$, let $\varphi(\nu)$ be the face that results as the intersection of the facets corresponding to the not darkened edges of $\nu$.
We wish to show $\psi(\varphi(\nu))=\nu$. First, we show $\nu\subseteq \psi(\varphi(\nu))$. Let $M$ be a sign matrix such that $g(M)$ is a $0$-dimensional component of $\nu$. $M$ is in the intersection of the facets that yields $\varphi(\nu)$, since otherwise $g(M)$ would not be a $0$-dimensional component of $\nu$. Thus $g(M)$ is in $\psi(\varphi(\nu))$ as well. So $\nu\subseteq \psi(\varphi(\nu))$, which means the edge labels of $\nu$ must be subsets of the edge labels of $\psi(\varphi(\nu))$.
Next, we show $\nu=\psi(\varphi(\nu))$. Suppose not. Then there exists some edge $e$ of $\Gamma_{(m,n)}$ whose label in $\psi(\varphi(\nu))$ strictly contains the label of $e$ in $\nu$. Suppose $e$ is a horizontal edge, then the label of $e$ in $\nu$ is $0$ and the label of $e$ in $\psi(\varphi(\nu))$ is $\star$. Then the facet corresponding to the label 0 on $e$ would have been one of the facets intersected to get $\varphi(\nu)$. Therefore the matrix partial row sum corresponding to edge $e$ would be fixed as $0$ in each sign matrix in $\varphi(\nu)$. So in the union $\psi(\varphi(\nu))$, this edge label would be the union of the edge labels of all the sign matrices in $\varphi(\nu)$, and this union would be $0$. This is a contradiction. Now suppose $e$ is a vertical edge. Then the label of $e$ in $\nu$ is $0$ or $1$ and the label of $e$ in $\psi(\varphi(\nu))$ is $\{0,1\}$. Let $\gamma$ denote the label of $e$ in $\nu$. As in the previous case, the facet corresponding to the label $\gamma$ on $e$ would have been one of the facets intersected to get $\varphi(\nu)$. Therefore the matrix partial column sum corresponding to edge $e$ would be fixed as $\gamma$ in each sign matrix in $\varphi(\nu)$. So in the union $\psi(\varphi(\nu))$, that edge label would be the union of the edge labels of all the sign matrices in $\varphi(\nu)$, and this union would be $\gamma$. This is a contradiction. Thus $\nu=\psi(\varphi(\nu))$. \end{proof}
\begin{figure}
\caption{A 5-dimensional face of $P(3,5)$, where the five regions are numbered.}
\label{fig:region}
\end{figure}
\begin{theorem} \label{thm:poset_iso} $\psi$ is a poset isomorphism. Moreover, the dimension of a face of $P(m,n)$ equals the number of regions of the corresponding component of $\overline\Gamma_{(m,n)}$. That is, for every face $F$ in $P(m,n)$, \[ dim\; F= \mathcal{R}(\psi(F)).\] \end{theorem}
\begin{proof} Let $F_1$ and $F_2$ be faces of $P(m,n)$ such that $F_1 \subseteq F_2$. Then $F_1$ is an intersection of $F_2$ and some facet hyperplanes. In other words, $F_1$ is obtained from $F_2$ by setting one of the inequalities in Theorem~\ref{thm:ineqthm} to an equality. We have that $\psi(F_1)$ is obtained from $\psi(F_2)$ by changing at least one darkened edge to a non-darkened edge. Therefore we have $\psi(F_1) \subseteq \psi(F_2)$.
Conversely, suppose that $\psi(F_1) \subseteq \psi(F_2)$. Recall the inverse of $\psi$ is $\varphi$, where for any component $\nu$ of $\Gamma_{(m,n)}$, $\varphi(\nu)$ is the face of $P(m,n)$ that results as the intersection of the facets corresponding to the not darkened edges of $\nu$. Now if $\psi(F_1) \subseteq \psi(F_2)$, the darkened edges of $\psi(F_1)$ are a subset of the darkened edges of $\psi(F_2)$, so the not darkened edges of $\psi(F_2)$ are a subset of the not darkened edges of $\psi(F_1)$. So $\varphi(\psi(F_1))$ is an intersection of the facets intersected in $\varphi(\psi(F_2))$ and some additional facets (if $F_1\neq F_2$). Thus $F_1=\varphi(\psi(F_1))\subseteq \varphi(\psi(F_2))=F_2$.
Now, we prove the dimension claim. Recall that dim$(P(m,n))=mn$. Since $\psi$ is a poset isomorphism, $\psi$ maps a maximal chain of faces $F_0 \subset F_1 \subset \cdots \subset F_{mn}$ to the maximal chain $\psi(F_0) \subset \psi(F_1) \subset \cdots \subset \psi(F_{mn})$ in the components of $\overline\Gamma_{(m,n)}$. We know that the maximal component of $\Lambda_{(m,n)}$ has $mn$ regions, thus the result follows by Lemma~\ref{prop:regions} and by noting that for components $\nu$ and $\nu'$, $\nu \subsetneq \nu'$ implies $\mathcal{R}(\nu) < \mathcal{R}(\nu')$. \end{proof}
We now discuss the face lattice of $P(\lambda,n)$. We will restate the main result in this new setting, but since most of the definitions and proofs are exactly analogous, we only note where additional notation or arguments are needed.
\begin{definition} \label{def:completegraph_shape} Define the \emph{shape-complete partial sum graph} denoted $\overline\Gamma_{(\lambda,n)}$ as the following labeling of the graph $\Gamma_{( \lambda_1,n)}$. The vertical edges are labeled $\{0,1,\{0,1\}\}$ as before. The horizontal edges are labeled with the fixed row sum $\{0,\star\}$, except the last horizontal edge in row $i$ is labeled with $a_{\lambda_1-i+1}$. An example is shown in Figure~\ref{fig:shapecomplete}. \end{definition}
\begin{figure}
\caption{The shape-complete partial sum graph of $P([3,3,3,1],5)$.}
\label{fig:shapecomplete}
\end{figure}
\begin{remark} $0$-dimensional components, components, containment of components, and regions are defined analogously. Let $\Lambda_{(\lambda,n)}$ denote the {partial order} on components of $\overline\Gamma_{(\lambda,n)}$ by containment. \end{remark}
See Figure~\ref{fig:shaperegion} for an example of a component of $\Lambda_{(\lambda,n)}$.
\begin{remark} \label{remark:max_shape} Note the maximal component of $\Lambda_{(\lambda,n)}$ is the union of all $0$-dimensional components. Thus, it has labels $\{0,1\}$ on the vertical edges of $\Gamma_{(\lambda_1,n)}$ and $\star$ on the horizontal edges, but with the fixed row sums in the $n$th column. \end{remark}
\begin{theorem} \label{th:g_bijection_shape} Let $F$ be a face of $P(\lambda,n)$ and $\mathcal{M}(F)$ be equal to the set of sign matrices that are vertices of $F$. The map $\psi:F\mapsto g(\mathcal{M}(F))$ is a bijection between faces of $P(\lambda,n)$ and components of $\overline{\Gamma}_{(\lambda,n)}$. Moreover, $\psi$ is a poset isomorphism, and the dimension of $F$ is equal to the number of regions of $\psi(F)$. \end{theorem}
\begin{proof} The proof is analogous to the proofs of Theorems~\ref{th:g_bijection} and \ref{thm:poset_iso}; we need only check that the number of regions of the maximal component of $\Lambda_{(\lambda,n)}$ matches the dimension of $P(\lambda,n)$. Recall from Proposition~\ref{prop:dim} that the dimension of $P(\lambda,n)$ equals $\lambda_1(n-1)$ when $1 \leq k < n$, and $(\lambda_1 - \lambda_n)(n-1)$ when $k=n$. Note that when $1 \leq k < n$, there are $\lambda_1(n-1)$ regions in the maximal component of $\Lambda_{(\lambda,n)}$. When $k=n$ the column partial sums in the last $\lambda_n$ rows of $\Gamma_{\lambda,n}$ are all fixed to be one, due to the first $\lambda_n$ columns of the tableau being $1,\ldots, n$. Thus there will be no darkened vertical edges in the bottom $\lambda_n$ rows, so these edges will not bound regions. So there will be $(\lambda_1 - \lambda_n)(n-1)$ regions in the maximal component of $\Lambda_{(\lambda,n)}$. \end{proof}
\begin{figure}
\caption{An $8$-dimensional component of $P([4,4,4,1,1],6)$.}
\label{fig:shaperegion}
\end{figure}
\section{Connections and related polytopes} \label{sec:connections} In this section, we describe connections between sign matrix polytopes and related polytopes. First we describe how $P(\lambda,n)$ and $P(m,n)$ are related. We will need a few additional definitions in order to relate $P(\lambda,n)$ to $P(m,n)$ when $\lambda_1 < m$.
\begin{definition} \label{def:lambda1lessthanm} Fix $\lambda$ and $m$ such that $\lambda_1 \leq m$. Let $M_m(\lambda,n)$ be the set of $m\times n$ sign matrices $M=(M_{ij})$ such that: \begin{align} \label{eq:Mij_rowsum2} M_{ij} &=0 & \mbox{ for all } 1 \leq i\leq m-\lambda_1 \\ \label{eq:Mij_rowsum3} \displaystyle\sum_{j=1}^n M_{ij} &= a_{\lambda_1-(i-(m-\lambda_1))+1},
& \mbox{ for all }m-\lambda_1+1\le i \le m. \end{align} Let \emph{$P_m(\lambda,n)$} be the polytope defined as the convex hull, as vectors in $\mathbb{R}^{m n}$, of all the matrices in $M_m(\lambda,n)$. \end{definition}
Note that if $\lambda_1=m$, $M_m(\lambda,n)=M(\lambda,n)$ so that $P_m(\lambda,n)=P(\lambda,n)$.
\begin{remark} The only difference between $M_m(\lambda,n)$ and $M(\lambda,n)$ is that we have inserted $m-\lambda_1$ additional rows of zeros at the top of each matrix. Therefore, $P_m(\lambda,n)$ and $P(\lambda,n)$ have all the same combinatorial properties (dimension, face lattice, volume, etc.); the only difference is their ambient dimensions. In particular, a slight modification of the bijection of Theorem~\ref{thm:MtoSSYT} shows $M_m(\lambda,n)$ is also in bijection with $SSYT(\lambda,n)$. \end{remark}
We give below an inequality description of $P_m(\lambda,n)$, whose proof follows immediately from Theorem~\ref{thm:ineqthmshape} and Definition~\ref{def:lambda1lessthanm}.
\begin{corollary} \label{thm:ineqthmshape_m} $P_m(\lambda,n)$ consists of all $m\times n$ real matrices $X=(X_{ij})$ such that: \begin{align} \label{eq:eq1m} 0 \leq \displaystyle\sum_{i'=1}^{i} X_{i'j} &\leq 1, &\mbox{ for all }1 \leq i\leq m, 1\le j\le n \\ \label{eq:eq2m} 0 \leq \displaystyle\sum_{j'=1}^{j} X_{ij'}, & &\mbox{ for all }1 \leq i\leq m, 1\le j\le n \\ \label{eq:eq3m} \displaystyle\sum_{j'=1}^n X_{ij'} &= a_{\lambda_1-(i-(m-\lambda_1))+1},
& \mbox{ for all } m-\lambda_1+1\le i \le m \\
\label{eq:eq4m} X_{ij} &=0 & \mbox{ for all } 1 \leq i\leq m-\lambda_1, 1\le j\le n. \end{align} \end{corollary}
\begin{lemma} \label{thm:hyper} $P_m(\lambda,n)$ is the intersection of a ${\lambda_1 (n-1)}$--dimensional affine subspace of $\mathbb{R}^{mn}$ and $P(m,n)$. \end{lemma}
\begin{proof} The only differences between the inequality descriptions of $P_m(\lambda,n)$ and $P(m,n)$ are (\ref{eq:eq3m}) and (\ref{eq:eq4m}). (\ref{eq:eq4m}) fixes the first $m-\lambda_1$ matrix rows to contain all zeros, while (\ref{eq:eq3m}) fixes the remaining row total sums in $P_m(\lambda,n)$. So $P_m(\lambda,n)$ is the intersection of $P(m,n)$ and the affine subspace defined by (\ref{eq:eq3m}) and (\ref{eq:eq4m}). \end{proof}
See Figure~\ref{fig:cubes} for an example.
\begin{figure}
\caption{The cube above is $P(1,3)$; the $P(\lambda,3)$ polytopes for each partition of shape $\lambda$ in a $1\times 3$ box are also indicated. $P_1([~],3)$ and $P([1,1,1],3)$ are each a single point, while $P([1],3)$ and $P([1,1],3)$ are the indicated triangles cutting through $P(1,3)$. }
\label{fig:cubes}
\end{figure}
Alternating sign matrices are a motivating special case of sign matrices. We give the usual definition below and then relate it to sign matrices. \begin{definition}[\cite{MRRASM}] \label{def:asm} An \emph{alternating sign matrix} is a square matrix with entries in $\left\{-1,0,1\right\}$ such that the rows and columns each sum to one and the nonzero entries along any row or column alternate in sign. Let $A(n)$ denote the set of $n\times n$ alternating sign matrices. \end{definition}
The following lemma is implicit in Aval's paper on sign matrices. \begin{lemma}[\cite{aval}] \label{prop:asm_sign_matrix} $A(n)$ is the set of sign matrices $M=\left(M_{ij}\right)$ in $M([n,n-1,\ldots,2,1],n)$ satisfying the additional requirement: \begin{align} \label{eq:asm2}
&\displaystyle\sum_{j'=1}^{j} M_{ij'} \in\{0,1\} \mathrm{~for~all~} i,j. \end{align} \end{lemma}
\begin{proof} Let $M\in A(n)$. Then the nonzero entries of $M$ alternate between $1$ and $-1$ across any row or column. The first nonzero entry in a row or column must be a $1$, since otherwise that row or column would not sum to $1$. Thus (\ref{eq:sm1}) and (\ref{eq:sm2}) from Definition~\ref{def:sm} of a sign matrix and (\ref{eq:asm2}) above are satisfied. Also in an alternating sign matrix all of the total row sums are $1$. Recall from (\ref{eq:Mij_rowsum}) that the row sums of a sign matrix equal $a_{\lambda_1-i+1}$, so since each row sum of $M$ is $1$, $M$ must be in $M([n,n-1,\ldots,2,1],n)$.
Now let $M\in M([n,n-1,\ldots,2,1],n)$ satisfy (\ref{eq:asm2}). $M$ is an $n\times n$ matrix whose rows each sum to $1$ since $M\in M([n,n-1,\ldots,2,1],n)$. By (\ref{eq:sm1}) and the fact that the sum of all the matrix entries is $n$, we have that the columns must each sum to $1$. Then (\ref{eq:sm1}) and (\ref{eq:asm2}) imply that the nonzero entries of $M$ alternate in sign along each row and column. \end{proof}
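The characterization above is easy to test computationally. The following sketch (an illustration we add here, not taken from \cite{MRRASM} or \cite{aval}) checks whether a square integer matrix is an alternating sign matrix by verifying that all partial row and column sums lie in $\{0,1\}$ and that every full row and column sums to $1$:
\begin{verbatim}
def is_asm(M):
    # entries in {-1,0,1}, all partial row/column sums in {0,1},
    # every full row and column sums to 1
    n = len(M)
    if any(len(row) != n for row in M):
        return False
    if any(x not in (-1, 0, 1) for row in M for x in row):
        return False
    for i in range(n):
        if sum(M[i]) != 1 or sum(M[r][i] for r in range(n)) != 1:
            return False
        pr = pc = 0
        for j in range(n):
            pr += M[i][j]
            pc += M[j][i]
            if pr not in (0, 1) or pc not in (0, 1):
                return False
    return True

assert is_asm([[0, 1, 0],
               [1, -1, 1],
               [0, 1, 0]])
assert not is_asm([[1, 0],
                   [1, -1]])        # column sums are 2 and -1, not 1
\end{verbatim}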
\begin{remark} \label{re:ASMstairs} It is well-known (see e.g.~\cite{MRRASM}) that alternating sign matrices are in bijection with \emph{monotone triangles}, which are equivalent (by rotation) to semistandard Young tableau of staircase shape with first column $(1,2,\ldots,n)$ and such that each northeast to southwest diagonal is weakly increasing. This bijection is a specialization of the bijection of Theorem~\ref{thm:MtoSSYT}. \end{remark}
\begin{definition}[\cite{behrend,striker}] The $n$th alternating sign matrix polytope, denoted $ASM_n$, is the convex hull of all the $n \times n$ alternating sign matrices, considered as vectors in $\mathbb{R}^{n^2}$. \end{definition}
\begin{remark} Striker~\cite{striker} and Behrend and Knight~\cite{behrend} independently defined and proved several results about $ASM_n$. The dimension of $ASM_n$ is $(n-1)^2$, an inequality description of $ASM_n$ is that the rows and columns sum to $1$ and the partial sums are between $0$ and $1$, and the vertices of $ASM_n$ are all the $n\times n$ alternating sign matrices~\cite{behrend,striker}. $ASM_n$ has $4[(n-2)^2+1]$ facets and a nice face lattice description~\cite{striker}. These are a few of the results that inspired the research of this paper.
Some further properties of $ASM_n$ were studied by Brualdi and Dahl~\cite{BrualdiDahl}. These include results regarding edges of $ASM_n$, an alternative proof of the characterization of the vertices of $ASM_n$, and an alternative proof of the linear characterization of $ASM_n$. \end{remark}
We see the connection between $P(\lambda,n)$ and $ASM_n$ in the following theorem.
\begin{lemma} \label{thm:staircase} $P([n, n-1, \cdots, 2, 1],n)$ contains $ASM_n$. \end{lemma}
\begin{proof} Lemma~\ref{prop:asm_sign_matrix} gives that the set of $n\times n$ alternating sign matrices is a subset of $M([n, n-1, \cdots, 2, 1],n)$. So the convex hull of $n\times n$ alternating sign matrices will be contained in the convex hull of $M([n, n-1, \cdots, 2, 1],n)$, which is $P([n, n-1, \cdots, 2, 1],n)$. \end{proof}
The Birkhoff polytope contains no lattice points except the permutation matrices, which are its vertices. We show something similar happens in the case of sign matrices and alternating sign matrices.
\begin{theorem} \label{prop:lattice_points} There are no lattice points in $P(m,n)$, $P(\lambda,n)$, or $ASM_n$ other than the matrices used to construct them. \end{theorem} \begin{proof} Let $M$ be an integer-valued matrix inside the polytope $P(m,n)$. Then $M$ fits the inequality description of $P(m,n)$. From the inequalities, all partial column sums are either $0$ or $1$, thus the entries of $M$ must be in $\{-1,0,1\}$. Also, all partial row sums are nonnegative, so $M$ satisfies the definition of an $m\times n$ sign matrix.
By Lemma~\ref{thm:hyper}, $P(\lambda,n)$ is contained in $P(\lambda_1,n)$. By Lemma~\ref{thm:staircase}, $ASM_n$ is contained in $P([n, n-1, \cdots, 2, 1],n)$ which by Theorem~\ref{thm:hyper} is contained in $P(n,n)$. Thus, the results follow. \end{proof}
\section{$P(v,\lambda,n)$ and transportation polytopes} \label{sec:transportation} Thus far in this paper, we have defined and studied the sign matrix polytope $P(m,n)$ and the polytope $P(\lambda,n)$ whose vertices are the sign matrices with row sums determined by $\lambda$. We may furthermore restrict to sign matrices with prescribed column sums; we define this polytope below, calling it $P(v,\lambda,n)$. We show in Theorem~\ref{thm:transportation} that the nonnegative part of this polytope is a transportation polytope.
\begin{definition} Let $\lambda$ be a partition with $k$ parts and $v$ a vector of length $k$ with strictly increasing entries at most $n$. Let $SSYT(v,\lambda,n)$ denote the set of semistandard Young tableaux of shape $\lambda$ with entries at most $n$ and first column $v$. \end{definition} For example, the tableau of Figure~\ref{fig:ydssyt} is in $SSYT((1,2,3,6),[6,3,3,1],n)$ for any $n \geq 7$.
\begin{remark} We do not know an enumeration for $SSYT(v,\lambda,n)$, though the numbers we have calculated look fairly nice. \end{remark}
\begin{definition} \label{def:MtoSSYTv} Fix $\lambda$ and $n\in\mathbb{N}$ and $v$ a vector of length $k$ with strictly increasing entries at most $n$. Let \emph{$M(v,\lambda,n)$} be the set of $M\in M(\lambda,n)$ such that: \begin{align} \label{eq:Mij_colsum_v} \displaystyle\sum_{i=1}^{\lambda_1} M_{ij} &= 1, & \mbox{ if } j\in v\mbox{ and } 0\mbox{ otherwise.} \end{align} \end{definition}
\begin{theorem} $M(v,\lambda,n)$ is in explicit bijection with $SSYT(v,\lambda,n)$. \label{prop:MtoSSYTv} \end{theorem}
\begin{proof} We know that $M(\lambda,n)$ is in bijection with $SSYT(\lambda,n)$ from Theorem~\ref{thm:MtoSSYT}. So we only need to check (\ref{eq:Mij_colsum_v}). Consider $M \in M(v,\lambda,n)$ and follow the bijection of Theorem~\ref{thm:MtoSSYT} to construct the corresponding $T \in SSYT(\lambda,n)$. Recall that in $M(v,\lambda,n)$, $v$ records which columns of $M$ have a total sum of $1$. Thus, the numbers in $v$ are the entries of $T$ in the first column of $\lambda$, so $T \in SSYT(v,\lambda,n)$.
Now consider $T \in SSYT(v,\lambda,n)$ and its corresponding sign matrix $M\in M(\lambda,n)$. The first column of $T$ is fixed to be the numbers in $v$. The first column of $T$ gets mapped to the last row of $M$. That is, for each number in the first column of $T$, the corresponding column of $M$ will sum to $1$. The rest of the columns of $M$ will sum to $0$. Thus $M \in M(v,\lambda,n)$. \end{proof}
\begin{definition} Let \emph{$P(v,\lambda,n)$} be the polytope defined as the convex hull, as vectors in $\mathbb{R}^{\lambda_1 n}$, of all the matrices in $M(v,\lambda,n)$. We say this is the sign matrix polytope with row sums determined by $\lambda$ and column sums determined by $v$. \end{definition}
We now discuss analogous properties to those proved in the rest of the paper regarding $P(m,n)$ and $P(\lambda,n)$. Since many of these proofs are very similar to proofs we have already discussed, we only note how the proofs differ from those in the other cases.
\begin{proposition} \label{prop:transdim} The dimension of $P(v,\lambda, n)$ is $(\lambda_1-1)(n-1)$ if $1 \leq k < n$. When $k=n$, the dimension is $(\lambda_1 - \lambda_n)(n-1).$ \end{proposition}
\begin{proof} Since each matrix in $M(v,\lambda,n)$ is $\lambda_1 \times n$, the ambient dimension is $\lambda_1 n$. However, when constructing the sign matrix corresponding to a tableau of shape $\lambda$, as in Theorem~\ref{thm:MtoSSYT}, the last column is determined by the shape $\lambda$ via the prescribed row sums (\ref{eq:Mij_rowsum}) of Definition~\ref{def:MtoSSYT}. The last row of the matrix is determined by $v$ using (\ref{eq:Mij_colsum_v}). These are the only restrictions on the dimension when $1 \leq k < n$, reducing the free entries in the matrix by one column and one row. Thus, the dimension is $(\lambda_1-1)(n-1)$. When $k=n$, it must be that $v=(1,2,\ldots,n)$ and $P(v,\lambda,n)$ equals $P(\lambda,n)$, so we reduce to this case. \end{proof}
\begin{theorem} \label{thm:v_lambdavertex} The vertices of $P(v,\lambda,n)$ are the sign matrices $M(v,\lambda,n)$. \end{theorem} \begin{proof} The hyperplane constructed in the proof of Theorem~\ref{thm:lambdavertex} separates a given sign matrix from all other sign matrices in $M(\lambda,n)$, which includes $M(v,\lambda,n)$. \end{proof}
\begin{theorem} \label{thm:v_ineqthmshape} $P(v,\lambda,n)$ consists of all $\lambda_1\times n$ real matrices $X=(X_{ij})$ such that: \begin{align} \label{eq:eq1t} 0 \leq \displaystyle\sum_{i'=1}^{i} X_{i'j} &\leq 1, &\mbox{ for all }1 \leq i\leq \lambda_1, 1\le j\le n \\ \label{eq:eq2t} 0 \leq \displaystyle\sum_{j'=1}^{j} X_{ij'}, & &\mbox{ for all }1\le j\le n, 1\leq i\leq \lambda_1 \\ \label{eq:eq3t} \displaystyle\sum_{j'=1}^n X_{ij'} &= a_{\lambda_1-i+1}, &\mbox{ for all } 1\leq i \leq \lambda_1 \\ \label{eq:eq4t} \displaystyle\sum_{i'=1}^{\lambda_1} X_{i'j} &= 1, &\mbox{ if } j\in v\mbox{ and } 0\mbox{ otherwise.} \end{align} \end{theorem} \begin{proof} This proof follows the proof of Theorem~\ref{thm:ineqthmshape}, except since both the row and column sums are fixed, only closed circuits are needed. \end{proof}
\begin{definition} \label{def:v_completegraph_shape} Define $\overline\Gamma_{(v,\lambda,n)}$ as the following labeling of the graph $\Gamma_{( \lambda_1,n)}$. All edges are labeled as in $\overline\Gamma_{(\lambda,n)}$, except the last vertical edge in column $j$ is labeled $1$ if $j\in v$ and $0$ otherwise. $0$-dimensional components, components, containment of components, and regions are defined analogously. Let $\Lambda_{(v,\lambda,n)}$ denote the {partial order} on components of $\overline\Gamma_{(v,\lambda,n)}$ by containment. \end{definition}
\begin{theorem} \label{th:v_g_bijection_shape} Let $F$ be a face of $P(v,\lambda,n)$ and $\mathcal{M}(F)$ be equal to the set of sign matrices that are vertices of $F$. The map $\psi:F\mapsto g(\mathcal{M}(F))$ is a bijection between faces of $P(v,\lambda,n)$ and components of $\overline{\Gamma}_{(v,\lambda,n)}$. Moreover, $\psi$ is a poset isomorphism, and the dimension of $F$ is equal to the number of regions of $\psi(F)$. \end{theorem} \begin{proof} The proof is analogous to the proof of Theorem~\ref{th:g_bijection_shape}; we need only check that the number of regions of the maximal component of $\Lambda_{(v,\lambda,n)}$ matches the dimension of $P(v,\lambda,n)$. Recall from Proposition~\ref{prop:transdim} that the dimension of $P(v,\lambda,n)$ equals $(\lambda_1-1)(n-1)$ when $1 \leq k < n$, and $(\lambda_1 - \lambda_n)(n-1)$ when $k=n$. Note that when $1 \leq k < n$, there are $(\lambda_1-1)(n-1)$ regions in the maximal component of $\Lambda_{(v,\lambda,n)}$. When $k=n$, the only possible first column of $T\in SSYT(\lambda,n)$ is $v=(1,2,\ldots,n)$, thus $P(v,\lambda,n)=P(\lambda,n)$ and we may use Theorem~\ref{th:g_bijection_shape}. \end{proof}
Theorem~\ref{thm:transportation} relates sign matrix polytopes to transportation polytopes. We first give the following definition (see, for example, \cite{deloera} and references therein). \begin{definition}
Fix two integers $p,q \in \mathbb{Z}_{>0}$ and two vectors $y \in\mathbb{R}_{\geq 0}^p$ and $z \in \mathbb{R}_{\geq 0}^q$. The \emph{transportation polytope} $P_{(y,z)}$ is the convex polytope defined in the $pq$ variables $X_{ij} \in \mathbb{R}_{\geq 0}$ ($1\leq i\leq p$, $1\leq j \leq q$) satisfying the $p + q$ equations: \begin{align} \label{eq:transp1} \sum_{j'=1}^{q} X_{ij'} &= y_i, & \mbox{ for all } 1\leq i\leq p \\ \label{eq:transp2} \sum_{i'=1}^{p} X_{i'j} &=z_j, & \mbox{ for all } 1\leq j \leq q. \end{align} \end{definition}
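As a small illustration (ours, with made-up data), membership in $P_{(y,z)}$ amounts to checking nonnegativity of the entries together with the row and column sum equations (\ref{eq:transp1}) and (\ref{eq:transp2}):
\begin{verbatim}
def in_transportation_polytope(X, y, z, tol=1e-9):
    # X_{ij} >= 0, row i sums to y_i, column j sums to z_j
    p, q = len(y), len(z)
    if len(X) != p or any(len(row) != q for row in X):
        return False
    if any(x < -tol for row in X for x in row):
        return False
    rows_ok = all(abs(sum(X[i]) - y[i]) < tol for i in range(p))
    cols_ok = all(abs(sum(X[i][j] for i in range(p)) - z[j]) < tol
                  for j in range(q))
    return rows_ok and cols_ok

# a point of P_{(y,z)} with y = (1, 2) and z = (2, 1)
assert in_transportation_polytope([[0.5, 0.5], [1.5, 0.5]], [1, 2], [2, 1])
\end{verbatim}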
\begin{theorem} \label{thm:transportation} The nonnegative part of $P(v,\lambda,n)$ is the transportation polytope $P_{(y,z)}$, where $y_i=a_{\lambda_1-i+1}$ for all $1\leq i\leq \lambda_1$ and $z_j=$ 1 if $j\in v$ and $0$ otherwise. \end{theorem} \begin{proof} By Theorem~\ref{thm:v_ineqthmshape}, the nonnegative part of $P(v,\lambda,n)$ is contained in $P_{(y,z)}$, since for these choices of $y$ and $z$, (\ref{eq:transp1}) and (\ref{eq:transp2}) are exactly (\ref{eq:eq3t}) and (\ref{eq:eq4t}). For the reverse inclusion, note in addition that any matrix with nonnegative entries and column sums at most $1$ satisfies (\ref{eq:eq1t}) and (\ref{eq:eq2t}). \end{proof}
This is analogous to the fact that the non-negative part of the alternating sign matrix polytope is the {Birkhoff polytope}~\cite{behrend,striker}.
\section*{Acknowledgments}
The authors thank Jesus De Loera for helpful conversations on transportation polytopes, Dennis Stanton for making us aware of Theorem~\ref{thm:Gordon}, and the anonymous referee for helpful comments. The authors also thank the developers of \verb|SageMath|~\cite{sage} software, especially the code related to polytopes and tableaux, which was helpful in our research, and the developers of \verb|CoCalc|~\cite{SMC} for making \verb|SageMath| more accessible. JS was supported by a grant from the Simons Foundation/SFARI (527204, JS).
\end{document} | arXiv |
\begin{document}
\title[Cubes and Fifth Powers Sums]
{Remark on a Paper \\by Izadi and Baghalaghdam \\ about Cubes and Fifth Powers Sums} \author[G.~Iokibe]
{Gaku IOKIBE} \address[Gaku Iokibe]
{Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan} \email{[email protected]}
\subjclass{11D41; 11D45, 14H52} \keywords{Diophantine equations, Elliptic Curves}
\thanks{The present paper is to appear in {\it Math. J. Okayama University}.}
\begin{abstract} In this paper, we refine the method introduced by Izadi and Baghalaghdam to search for integer solutions to the Diophantine equation $X_1^5+X_2^5+X_3^5=Y_1^3+Y_2^3+Y_3^3$. We show that the Diophantine equation has infinitely many positive solutions. \end{abstract} \maketitle
\section{Introduction}
In \cite{I-B}, Izadi and Baghalaghdam consider the Diophantine equation: \begin{equation}\label{eqn:ib0} a(X_1^{\prime 5}+X_2^{\prime 5})+\sum_{i=0}^{n} a_iX_i^5=b(Y_1^{\prime 3}+Y_2^{\prime 3})+\sum_{i=0}^{m} b_iY_i^3 \end{equation} where $n,m\in \mathbb{N}\cup \{0\}, \ a,b\neq 0, \ a_i,b_i$ are fixed arbitrary rational numbers. They use theory of elliptic curves to find nontrivial integer solutions to (\ref{eqn:ib0}). In particular, they discuss the equation: \begin{equation}\label{eqn:ib} X_1^5+X_2^5+X_3^5=Y_1^3+Y_2^3+Y_3^3 \end{equation} and obtain integer solutions, for example: $$8^5+6^5+14^5=(-110)^3+124^3+14^3,$$ $$128122^5+(-79524)^5+48598^5=359227580^3+(-251874598)^3+107352982^3.$$ However, no positive solutions are presented in their paper \cite{I-B}. In this paper, we refine their method to find positive solutions to (\ref{eqn:ib}).
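These numerical identities can be checked by direct computation; for instance (a short verification script we include only for convenience):
\begin{verbatim}
lhs = 8**5 + 6**5 + 14**5
rhs = (-110)**3 + 124**3 + 14**3
assert lhs == rhs == 578368

lhs = 128122**5 + (-79524)**5 + 48598**5
rhs = 359227580**3 + (-251874598)**3 + 107352982**3
print(lhs == rhs)          # compares both sides of the second identity
\end{verbatim}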
Consider the Diophantine equation (\ref{eqn:ib}). Let:
\begin{equation}\label{transform_var} \left\{ \begin{split} \ X_1=t+x_1, \quad &X_2=t-x_1, \quad X_3=\alpha t, \\
Y_1=t+v, \quad\ &Y_2=t-v, \qquad Y_3=\beta t. \end{split} \right. \end{equation} Then we get a quartic curve: \begin{equation}\label{eqn:qc} C:v^2 = \frac{2+\alpha^5}{6}t^4 + \frac{20x_1^2-2-\beta^3}{6}t^2 + \frac{5x_1^4}{3} \end{equation} with parameters $x_1, \ \alpha, \ \beta \in \mathbb{Q}$. If we get a rational point $(t,v)$ on $C$, we can compute a rational solution to (\ref{eqn:ib}) (see \cite{I-B}).
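The algebra behind (\ref{eqn:qc}) can be reproduced symbolically. The short SymPy sketch below (included only as a check of the computation, with the symbols \texttt{alpha}, \texttt{beta}, \texttt{x1} standing for $\alpha$, $\beta$, $x_1$) substitutes (\ref{transform_var}) into (\ref{eqn:ib}), divides by $6t$ and isolates $v^2$:
\begin{verbatim}
import sympy as sp

t, x1, v, alpha, beta = sp.symbols('t x1 v alpha beta')
lhs = (t + x1)**5 + (t - x1)**5 + (alpha*t)**5    # X_1^5 + X_2^5 + X_3^5
rhs = (t + v)**3 + (t - v)**3 + (beta*t)**3       # Y_1^3 + Y_2^3 + Y_3^3

# lhs - rhs is divisible by t; dividing by 6t and moving v^2 across gives F(t)
F = sp.expand((lhs - rhs) / (6*t) + v**2)
target = (((2 + alpha**5)/6)*t**4
          + ((20*x1**2 - 2 - beta**3)/6)*t**2
          + sp.Rational(5, 3)*x1**4)
assert sp.expand(F - target) == 0                 # matches the quartic in the text
\end{verbatim}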
Once we obtain a rational solution to (\ref{eqn:ib}), we can obtain an integer solution by multiplying each $X_i$ by $c^3$ and each $Y_i$ by $c^5$, where $c$ is a common denominator of the $X_i$ and $Y_i$. In the same way, in order to obtain solutions in positive integers, it suffices to search for positive rational solutions to equation (\ref{eqn:ib}).
\section{Additional Requirements for Positive Solutions}
Suppose that a positive rational solution $(X_i, Y_i)_{1\leq i\leq 3}$ to (\ref{eqn:ib}) is obtained from a given point $(t,v)$ on the quartic $C$.
\begin{prop}\label{prop:1} Let $\alpha,\beta,x_1\in\mathbb{Q}$ and $$ F(t)=\frac{2+\alpha^5}{6}t^4 + \frac{20x_1^2-2-\beta^3}{6}t^2 + \frac{5x_1^4}{3}. $$ A rational point $(t,v)$ on the curve $C:v^2=F(t)$ in (\ref{eqn:qc}) produces a positive rational solution to (\ref{eqn:ib}) by (\ref{transform_var}) if and only if \begin{equation} \label{1st_condition} \alpha, \beta > 0, \quad 0\leq F(t)<t^2,\quad
t>|x_1| \end{equation} hold.
\end{prop}
\begin{proof} If $X_i$ and $Y_i$ are positive in the solution in the form (\ref{transform_var}), we have $t = (X_1 + X_2)/2 > 0$, $\alpha = 2X_3/(X_1 + X_2) > 0$ and $\beta = 2Y_3/(Y_1 + Y_2) > 0$. For $(t, v)\in C$, one has that $0\le v^2 = F(t) < v^2 + Y_1Y_2 = t^2$. It follows from $x_1^2< x_1^2 + X_1X_2 = t^2$ that
$t > |x_1|$ for $t > 0$. Conversely, suppose the inequalities in (\ref{1st_condition}) hold. Then the given point $(t,v)$ on $C$ satisfies $v^2=F(t)<t^2$. This and (\ref{1st_condition}) immediately imply $X_i,Y_i>0$ in (\ref{transform_var}).
\end{proof}
\begin{prop}\label{prop:2} Under the same assumption as Proposition \ref{prop:1}, let \begin{equation}\label{abc_def} a=\frac{2+\alpha^5}{6},\ b=\frac{20x_1^2-8-\beta^3}{6},\ c=\frac{5}{3}x_1^4. \end{equation} Then $a,b,c$ satisfy $b^2-4ac>0$ and $b<0$ if and only if there exists a real number $t$ such that $F(t)<t^2$. \end{prop} \begin{proof} Let $\tilde{F}(t)=F(t)-t^2$. Since $\tilde{F}(0)=5x_1^4/3\geq 0$ and, in this case, $a>0$, it is easy to see that the following conditions are equivalent to one another:
(i) There exists a real number $t$ such that $F(t)<t^2$.
(ii) The equation $\tilde{F}(t)=0$ has four distinct solutions.
(iii) The quadratic equation $ax^2+bx+c=0$ has two distinct non-negative solutions.
(iv) The discriminant $D=b^2-4ac$ of the quadratic function $f(x)=ax^2+bx+c$ is positive, and the axis of the quadratic function $-b/2a$ is positive, and $f(0)\geq 0$.
The condition (iv) holds if and only if ``$b^2-4ac>0$ and $b<0$'', since $a>0$ and $f(0)=c=5x_1^4/3\geq 0$. \end{proof}
\section{Example for $X_1^5+X_2^5+X_3^5=Y_1^3+Y_2^3+Y_3^3$}
Let us first search for parameters $(x_1,\alpha, \beta)$ such that $$ 0< \alpha, \beta, \quad b<0< b^2-4ac $$ with $a,b,c$ given by (\ref{abc_def}) and such that the quartic curve $C$ of (\ref{eqn:qc}) has at least one rational point. These conditions are necessary in order to satisfy the hypotheses of Propositions \ref{prop:1} and \ref{prop:2}. The curve $C$ is then birationally equivalent to an elliptic curve $E$ over $\mathbb{Q}$. If $E$ has positive rank, then $C$ has infinitely many rational points.
Let $(x_1,\alpha, \beta)=(2,1,16)$. Then the quartic: $$C:v^2=\frac{1}{2}t^4 - \frac{2009}{3}t^2 + \frac{80}{3},$$ has a rational point $(t,v)=(44,760).$ By $T=t-44$, we transform $C$ into $$ C' : v^2=\frac12 T^4+88 T^3+\frac{15415}{3} T^2+ \frac{334312}{3}T+ 760^2 $$ which is birationally equivalent over $\mathbb{Q}$ to the cubic elliptic curve (see \cite[Theorem 2.17]{W}, \cite{I-B}): $$E:y^2+\frac{41789}{285}xy+133760y=x^3-\frac{76876021}{324900}x^2-1155200x +\frac{2460032672}{9},$$ where: $$T=\frac{2\cdot 760(x+\frac{15415}{3})-\frac{334312^2}{2\cdot 3^2\cdot 760}}{y},\quad v=-760+\frac{T(Tx-\frac{334312}{3})}{2\cdot 760}. $$ Using the Sage software \cite{Sage}, we find that the cubic curve $E$ is an elliptic curve which has rank 2 and the generators of $E$ are: $$P_1=\left(-\frac{1802189}{1521},\frac{5513659679}{417430}\right), \quad P_2=\left(-\frac{351379}{363},\frac{47356344241}{2276010}\right). $$ We now consider the subset $$ C_0=\left\{ (t,v)\in C\mid 0 \le F(t)<t^2\right\} \subset C $$ whose points satisfy another condition (\ref{1st_condition}) of Proposition \ref{prop:1}. The two quartic equations: $$ F(t)=\frac{1}{2}t^4 - \frac{2009}{3}t^2 + \frac{80}{3}=0, \quad \tilde{F}(t)=\frac{1}{2}t^4 - \frac{2009}{3}t^2 + \frac{80}{3}-t^2=0 $$ have respectively solutions: $$t=\pm \frac{1}{3}\sqrt{6027\pm 3\sqrt{4035601}},\quad t=\pm \frac{2}{3}\sqrt{1509\pm 3\sqrt{252979}}.$$ Let us take larger solutions as: \begin{eqnarray*} a_1 &=& \frac{1}{3}\sqrt{6027+3\sqrt{4035601}}\simeq 36.59635926...\\ a_2 &=& \frac{2}{3}\sqrt{1509+3\sqrt{252979}}\simeq 36.62367500... \end{eqnarray*} If a point $(t_0,v_0)$ on $C$ satisfies $a_1\leq t_0\leq a_2$, then $(t_0,v_0)$ lies on $C_0$.
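The quartic and the bounds $a_1,a_2$ above are easy to check numerically; a short script (ours) confirms that $(44,760)$ lies on $C$ and evaluates $a_1$ and $a_2$:
\begin{verbatim}
from fractions import Fraction as Fr
import math

def F(t):                           # the quartic for (x_1, alpha, beta) = (2, 1, 16)
    return Fr(1, 2)*t**4 - Fr(2009, 3)*t**2 + Fr(80, 3)

assert F(44) == 760**2              # (44, 760) is a rational point on C

a1 = math.sqrt(6027 + 3*math.sqrt(4035601)) / 3
a2 = 2*math.sqrt(1509 + 3*math.sqrt(252979)) / 3
print(a1, a2)                       # approx. 36.5963... and 36.6236...
\end{verbatim}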
We now make use of the composition law of points on the elliptic curve $E$. Since $E$ has positive rank, we can test infinitely many rational points of $E$ until we find a point $(t_0,v_0)$ on $C_0$.
We find that the rational point $$ Q=2P_1-P_2=\left(\tfrac{304845381192111829037}{58470412871306667},-\tfrac{4767546475726965161322288395890039}{4652843756178203561643745770}\right) $$ on $E$ corresponds to $$ (t_0,v_0)=\left(\tfrac{170815619844155909156204}{4664941095250009917983}, -\tfrac{690740884062625663919872925291699877683029096}{21761675422152362106175457381859866386788289}\right) $$ on $C_0$, and creates a positive rational solution: \begin{align*} X_1&=\tfrac{180145502034655928992170}{4664941095250009917983}\simeq 38.61688676... \\
X_2&=\tfrac{161485737653655889320238}{4664941095250009917983}\simeq 34.61688676... \\ X_3&=\tfrac{170815619844155909156204}{4664941095250009917983}\simeq 36.61688676... \\ Y_1&=\tfrac{106103920658980331397442614601687483092587436}{21761675422152362106175457381859866386788289}\simeq 4.875723886... \\ Y_2&=\tfrac{1487585688784231659237188465185087238458645628}{21761675422152362106175457381859866386788289}\simeq 68.35804964... \\ Y_3&=\tfrac{2733049917506494546499264}{4664941095250009917983}\simeq 585.8701882... \end{align*}
Next we shall prove that the Diophantine equation (\ref{eqn:ib}) has infinitely many positive solutions. The real locus $E(\mathbb{R})$ of the elliptic curve can be regarded as a compact topological subspace of the complex projective variety $E$.
\begin{lem}\label{lem:1} If the rank of elliptic curve $E$ over $\mathbb{Q}$ is positive, every point of $E(\mathbb{Q})$ is an accumulation point in $E(\mathbb{R})$. \end{lem}
\begin{proof} Since $E(\mathbb{R})$ is a compact topological group, and $E(\mathbb{Q})$ is an infinite
subgroup of $E(\mathbb{R})$, there is at least one accumulation point of $E(\mathbb{Q})$ in $E(\mathbb{R})$. The group operations are homeomorphisms from $E(\mathbb{R})$ to itself, so the identity element, and hence every point of $E(\mathbb{Q})$, is an accumulation point of $E(\mathbb{Q})$ in $E(\mathbb{R})$. \end{proof}
\begin{thm} The Diophantine equation (\ref{eqn:ib}) has infinitely many positive solutions. \end{thm} \begin{proof} The set $C_0$ contains the rational point $(t_0,v_0)$ corresponding to the point $Q$ above. By Lemma \ref{lem:1}, $Q$ is an accumulation point of $E(\mathbb{Q})$ in $E(\mathbb{R})$, and hence $(t_0,v_0)$ is an accumulation point of $C(\mathbb{Q})$ in $C(\mathbb{R})$. Since $\alpha=1>0$, $\beta=16>0$ and $2=|x_1|<a_1<t_0<a_2$, every rational point of $C$ sufficiently close to $(t_0,v_0)$ lies on $C_0$ and satisfies the conditions of Proposition \ref{prop:1}, hence yields a positive rational solution to (\ref{eqn:ib}). Distinct points give distinct solutions, so $C_0$, and therefore (\ref{eqn:ib}), produces infinitely many positive rational solutions. \end{proof}
\section{Example for $X_1^5+X_2^5=Y_1^3+Y_2^3+Y_3^3$} Let $\alpha=0$. Then (\ref{eqn:ib}) gives another Diophantine equation: \begin{equation}\label{eqn:ib2} X_1^5+X_2^5=Y_1^3+Y_2^3+Y_3^3. \end{equation} In the same way, we can obtain rational or positive rational solutions to it. For example, let $x_1=10,\ \beta = 18$. Then the quartic curve:
$$ has a rational point $(t,v)=(-5,30)$ and can be regarded as an elliptic curve over $\mathbb{Q}$ that has rank 2. It is birationally equivalent to: $$ E: y^2+\frac{1867}{9}xy-400y= x^3-\frac{3676525}{324} x^2-1200 x+ \frac{367652500}{27}.
$$ From this, we can compute positive rational solutions to (\ref{eqn:ib2}). For example, there is a point $Q=(x_0,y_0)$ on $E$ with $$
x_0=\tfrac{9233921838917810856046138588468998730}{71226852166762122405616706766475947}
$$ corresponding to $(t_0,v_0)$ on $C$ with $$
t_0=\tfrac{7869911761727476320751662986237524106650}{180965667579279848488380712753242417827}
$$ which creates the following solution to (\ref{eqn:ib2}): \begin{align*} X_1&=\tfrac{9679568437520274805635470113769948284920}{180965667579279848488380712753242417827},\\ X_2&=\tfrac{6060255085934677835867855858705099928380}{180965667579279848488380712753242417827},\\ Y_1&=\tfrac{2102579397586077496858869804126511993988094601307100986270258503567645177035000}{32748572842414417658282657731373155447687070419319181813277645661864847401929},\\ Y_2&=\tfrac{745788273916000738265027095213285105579870143595196644344754733646843241464100}{32748572842414417658282657731373155447687070419319181813277645661864847401929},\\ Y_3&=\tfrac{141658411711094573773529933752275433919700}{180965667579279848488380712753242417827}. \end{align*}
The case of $\beta=0$ will be discussed briefly in 5.2 below.
\section{Parameters $(x_1,\alpha ,\beta)$ from Trivial Solutions} \subsection{} There are several trivial solutions; for example: $$1^5+1^5+1^5=1^3+1^3+1^3. $$ We call solutions to (\ref{eqn:ib}) which consist of $0,\ \pm 1$ trivial. \hspace{-0.5pc}We are going to check some of them to search for integer (or positive) solutions.

A trivial solution to (\ref{eqn:ib}) may determine the parameters $(x_1,\alpha,\beta)$. For example, when $X_i=Y_i=1$ ($i=1,2,3$),
we get $(x_1,\alpha ,\beta )=(0,1,1)$. Then: $$C:v^2=\frac{1}{2}t^4-\frac{1}{2}t^2$$ has a singular point $(t,v)=(0,0)$ \hspace{-0.5pc}and can be parametrized by one parameter. Let us divide both sides of $C$ by $t^4$ and
substitute $s,\ w$ for $1/t,\ v/t^2$ respectively. Then: $$C':w^2=\frac{1}{2}-\frac{1}{2}s^2$$ has a rational point $(s,w)=(1,0)$. Hence we can parametrize the rational points on $C'$, and thereby rational solutions to (\ref{eqn:ib}). That is to say, we have: \begin{align*} &\left(\frac{2k^2+1}{2k^2-1}\right)^5+ \left(\frac{2k^2+1}{2k^2-1}\right)^5+ \left(\frac{2k^2+1}{2k^2-1}\right)^5 \\ &= \left(\frac{4k^4-4k^3-2k-1}{(2k^2-1)^2}\right)^3 +\left(\frac{4k^4+4k^3+2k-1}{(2k^2-1)^2} \right)^3 +\left(\frac{2k^2+1}{2k^2-1}\right)^3
\end{align*} where $k\in \mathbb{Q}$. We can see that every sufficiently large $k$ gives a positive solution to (\ref{eqn:ib}). For example: $$\left(\frac{9}{7}\right)^5+\left(\frac{9}{7}\right)^5+\left(\frac{9}{7}\right)^5=\left(\frac{27}{49}\right)^3+\left(\frac{99}{49}\right)^3+\left(\frac{9}{7}\right)^3$$ where $k=2$. Since $X_1=X_2=X_3=Y_3$, this solution also gives a positive solution to another Diophantine equation $3X^5=Y_1^3+Y_2^3+X^3$. Moreover it satisfies $X_1+X_2+X_3=Y_1+Y_2+Y_3$ because $\alpha = \beta$.
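The whole family can be verified by direct computation; the following short check (ours) runs over a few values of $k$ and also confirms positivity:
\begin{verbatim}
from fractions import Fraction as Fr

for k in range(2, 8):
    X  = Fr(2*k**2 + 1, 2*k**2 - 1)                      # X_1 = X_2 = X_3 = Y_3
    Y1 = Fr(4*k**4 - 4*k**3 - 2*k - 1, (2*k**2 - 1)**2)
    Y2 = Fr(4*k**4 + 4*k**3 + 2*k - 1, (2*k**2 - 1)**2)
    assert 3*X**5 == Y1**3 + Y2**3 + X**3                # the displayed identity
    assert X > 0 and Y1 > 0 and Y2 > 0                   # positive for k >= 2
\end{verbatim}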
\subsection{} From another trivial solution: $$ 1^5+0^5+0^5=1^3+0^3+0^3, $$ we can derive parameters $(x_1,\alpha, \beta)=(\frac{1}{2},0,0)$. Then: $$C:v^2=\frac{1}{3}t^4+\frac{1}{2}t^2+\frac{5}{48}$$ is an elliptic curve defined over $\mathbb{Q}$ with rational point $(t,v)=(\frac{1}{2},\frac{1}{2})$. It is birationally equivalent to: $$ E:y^2+\frac{4}{3}xy+\frac{2}{3}y=x^3+\frac{5}{9}x^2-\frac{1}{3}x-\frac{5}{27} $$ over $\mathbb{Q}$ and has rank 1. Hence we can apply the method of Section 3 to compute positive solutions to \begin{equation} \label{caseC} X_1^5+X_2^5=Y_1^3+Y_2^3 \end{equation} as a special case of (\ref{eqn:ib}) with $X_1,X_2,Y_1,Y_2>0$, $X_3=Y_3=0$ (where $\alpha=\beta=0$ in (\ref{transform_var})). For example, a point $$ Q=\left(\tfrac{10017045137918654785}{165672066306928896}, \tfrac{29224609136538294659462738431}{67433225470590933809197056} \right)
$$ on $E$
corresponding to the point $$ (t_0,v_0)=\left( \tfrac{2806052350871126431439}{4379016004568066987998}, \tfrac{5797926783162005502807971914786692611082209}{9587890584131638439948667971418559938024002} \right) $$ on $C$ creates the positive solution to (\ref{caseC}): \begin{align*} X_1&=\tfrac{2497780176577579962719}{2189508002284033493999},\ X_2=\tfrac{308272174293546468720}{2189508002284033493999},\
\\ Y_1&=\tfrac{5970900430111130674379700360675596051258385}{4793945292065819219974333985709279969012001}, \\ Y_2&=\tfrac{172973646949125171571728445888903440176176}{4793945292065819219974333985709279969012001}.
\end{align*} \subsection{} There exists one more parameter with $\beta = 0$, $(x_1,\alpha, \beta)=(0,0,0)$, which is derived from the trivial solution: $$1^5+1^5+0^5=1^3+1^3+0^3.$$ Then the rational points on: $$C:v^2=\frac{1}{3}t^4-\frac{1}{3}t^2$$ can be parametrized. Thus we have: {\small \begin{equation*} \left(\frac{3k^2+1}{3k^2-1}\right)^5+\left(\frac{3k^2+1}{3k^2-1}\right)^5 =\left(\frac{9k^4-6k^3-2k-1}{(3k^2-1)^2}\right)^3 +\left(\frac{9k^4+6k^3+2k-1 }{(3k^2-1)^2}\right)^3,
\end{equation*} }
\noindent where $k\in \mathbb{Q}$. For example, substituting $2$ for $k$, we have: $$ \left(\frac{13}{11}\right)^5+\left(\frac{13}{11}\right)^5+0^5=\left(\frac{91}{121}\right)^3+\left(\frac{195}{121}\right)^3+0^3. $$ The solutions obtained in this way also give solutions to another Diophantine equation $2X^5=Y_1^3+Y_2^3$. \subsection{}
It is not simple to find parameters $(x_1,\alpha ,\beta )$ that produce elliptic curves for non-trivial solutions $(X_i, Y_i)_{1\leq i\leq 3}$. In particular, the author could not find a good parameter for $\beta =0,\ \alpha \neq 0$:
\begin{question} Find (a good method for) positive solutions to: \begin{equation*} X_1^5+X_2^5+X_3^5=Y_1^3+Y_2^3. \end{equation*} \end{question}
{\it Acknowledgement}: The author would like to thank the referee for many valuable suggestions to improve this article.
\end{document} | arXiv |
\begin{document}
\title{Objective Bayesian Model Discrimination \\ in Follow-up Experimental Designs} \author{GUIDO CONSONNI \and LAURA DELDOSSI\\ Dipartimento di Scienze Statistiche\\ Universit\`{a} Cattolica del Sacro Cuore\\ Largo Gemelli, 1; 20123 Milan, Italy\\ \url{[email protected]; [email protected]} } \date{ } \maketitle
\begin{abstract}
An initial screening experiment may lead to ambiguous conclusions regarding the factors which are active in explaining the variation of an outcome variable: thus adding follow-up runs becomes necessary. We propose a fully Bayes objective approach to follow-up designs, using prior distributions suitably tailored to model selection.
We adopt a model criterion based on a weighted average of Kullback-Leibler divergences between predictive distributions for all possible pairs of models. When applied to real data, our method produces results which compare favorably to previous analyses based on subjective weakly informative priors. Supplementary materials are available online.
KEY WORDS: Bayesian model selection; Kullback-Leibler divergence; Screening experiment. \end{abstract}
\section{Introduction}
In screening designs the objective is to discover which of the many potential factors are really active, i.e., contribute to explaining the variability of a response variable.
In this context, it is customary to assume that the response follows a normal linear regression model, where the predictors are the model-specific main effects together with all interactions up to a specified order (usually two).
In this way, for each given set of active factors, there is associated one and only one linear model.
If one considers $k$ factors, there exist $2^k$ distinct models, including the null model (no factor is active), and the full model (all factors are active).
We adopt a
Bayesian approach, wherein each uncertain quantity (such as model, parameter or future observation) is assigned a prior (distribution) which, in the light of data, is updated to a posterior. In particular the Bayesian approach produces a full posterior distribution on the space of all models, unlike in frequentist model selection procedures (e.g. AIC, BIC, or penalized regression methods such as the Lasso).
Often screening designs are based on a limited number of runs, and they may not lead to unequivocal conclusions as to which factors are active,
because the posterior probability on model space is not sufficiently concentrated on a few models; and similarly for the induced posterior probability that each factor is active.
As a consequence, extra runs are needed to resolve this ambiguity. The issue then becomes finding the combination of factor levels which best discriminates among rival models, and hence factors. This brings us to optimal follow-up designs, which is the core of this paper.
In this context, the following intuition can be helpful:
a new experiment is most useful whenever the predicted response varies widely across models, because this feature will facilitate model comparison. Accordingly, the follow-up runs are chosen so as to maximize a \emph{model discrimination} (MD) criterion, see \citet{Meye:Stei:Box:1996}.
To compute the posterior probability on each model, one requires a prior on model space, as well as a parameter prior on the space of parameters (conditionally on each single model).
A notorious difficulty associated with Bayesian model determination is its sensitivity to parameter priors; see \citet[ch. 7]{Ohag:Fors:2004}. This remark, and the practical difficulty of specifying distinct subjective priors for each of the entertained models, suggest to adopt an \emph{objective} Bayes approach \citep{Berg:Peri:2001}.
The latter program however cannot be carried out using
standard
noninformative priors for estimation purposes (if for no other reason than that they are typically improper); on the other hand proper weakly informative priors, as implemented for instance in
\citet{Meye:Stei:Box:1996}, are also questionable for Bayesian model choice (high sensitivity to prior specification of tuning parameters being an issue); for further discussion see \cite{Peri:2005}.
In this paper we address the problem of choosing follow-up experiments for optimal discrimination among factorial models, using a fully objective Bayesian approach.
This seems particularly attractive at the screening stage, especially if prior information is weak.
Specifically, we seek to maximize an MD criterion which is a weighted combination of Kullback-Leibler (KL) divergences between predictive distributions (for future follow-up observations conditionally on the available data) for all pairs of models, where the weights are the posterior probabilities of the corresponding model pair; see \cite{Box:Hill:1967} for a derivation of this criterion using the notion of expected change in entropy between input and output.
A related approach was used by \citet{Bing:Chip:inco:2007} for identifying most promising screening designs. Their MD criterion however uses the Hellinger distance, rather than KL-divergence, between pairs of (prior) predictive distributions.
Because of the structure of MD we decouple the problem into two separate sub-problems: i) finding the posterior probability on model space (which provides the weights of the MD); ii) finding the predictive distributions of future observations (required to compute the KL divergences).
The rest of this paper is organized as follows:
Section \ref{sec:objective priors} presents the objective model choice priors; Section \ref{sec:model discimination} introduces the model discrimination criterion; Section \ref{sec:applications} applies the methodology to a variety of data sets, and provides comparison with previous analyses. Finally Section \ref{sec:discussion} contains a brief discussion.
\section{Objective model choice priors}
\label{sec:objective priors}
\subsection{Model assumptions}
Consider $k$ categorical factors,
and
$n$ experimental runs for specific combinations of the factor levels.
Let $M_i$ be a model which specifies a set of $f_i$ active factors ($0 \leq f_i \leq k$) for the response $y$
\begin{eqnarray} \label{linearmodelMi}
y \,\vert\, \beta_0, \beta_i, \sigma, M_i \sim N_n(X_0\beta_0 +X_i \beta_i, \sigma^2 I_n),
\end{eqnarray}
where $X_0$ represents an $n \times t_0$ design matrix containing variables which appear in all models. Typically,
$X_0=1_n$, the $n$-dimensional unit vector; occasionally however, we may want to consider more general versions for $X_0$.
Both $\beta_0$ and $\sigma^2$ are regarded as parameters which are \lq \lq common\rq \rq{} across models, while $\beta_i$ is the model-specific vector of regression parameters.
The $t_i$ columns of the matrix $X_i$ contain suitable terms representing main effects and interactions of selected factors.
The model matrix $[X_0 \vdots X_i]$ is assumed to be of full column rank, so that the number of linearly independent terms in the regression structure cannot exceed $n$.
In fractional factorial designs this means that only estimable (non aliased) interactions, up to a desired order, are introduced besides the main effects, subject to the constraint that $n>t_0+t_i$.
\subsection{Prior distribution on model space} \label{subsec: prior on model space}
In this subsection we consider in greater detail the
prior on model space.
A typical assumption is that each factor is active (i.e. its effect will be included in any particular model) with some probability $\pi$ independently of the other factors.
If $M_i$ contains $f_i$ active factors, $f_i\in \{0, 1, \ldots, k \}$, then
\begin{eqnarray}
\label{probMigivenpi}
\mathrm{Pr}(M_i \,\vert\, \pi)=\pi^{f_i}(1-\pi)^{k-f_i}.
\end{eqnarray}
Of course $\pi$ is unknown, and from a Bayesian perspective it should be regarded as an uncertain quantity with its own distribution. Assuming that $\pi \sim Beta(a,b)$, then integrating (\ref{probMigivenpi}) with respect to this prior yields
\begin{eqnarray} \label{probMigeneral}
\mathrm{Pr}(M_i)=\int_{0}^1 \pi^{f_i}(1-\pi)^{k-f_i} p(\pi) d\pi=B(a+f_i, b+k-f_i)/B(a,b),
\end{eqnarray}
where $B(\cdot, \cdot)$ is the usual beta function.
Priors belonging to family (\ref{probMigeneral}) incorporate a multiplicity adjustment component; see \citet{Scot:Berg:2010} for an extensive study.
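To fix ideas, prior (\ref{probMigeneral}) is simple to evaluate. The sketch below (ours, using the uniform case $a=b=1$ adopted later) computes $\mathrm{Pr}(M_i)$ as a function of the number of active factors and checks that the probabilities of all $2^k$ models sum to one:
\begin{verbatim}
from math import lgamma, exp, comb

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def prior_model(f_i, k, a=1.0, b=1.0):
    # Pr(M_i) = B(a + f_i, b + k - f_i) / B(a, b) for a model with f_i active factors
    return exp(log_beta(a + f_i, b + k - f_i) - log_beta(a, b))

k = 5
print([prior_model(f, k) for f in range(k + 1)])       # one value per model size
total = sum(comb(k, f) * prior_model(f, k) for f in range(k + 1))
print(total)           # sums to 1 over all 2^k models (up to rounding)
\end{verbatim}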
\subsection{Parameter priors} \label{subsec: priors on parameter space}
Consider the comparison of two nested models (so that the sampling family under one model is a special case of the other) through the Bayes factor.
If one starts with an objective prior developed for estimation purposes, such as the Jeffreys or reference prior, a difficulty arises: since these priors are typically improper, the Bayes factor is undefined.
Several attempts have been made to circumvent this problem: intrinsic Bayes factor \citep{Berg:Peri:1996}, fractional Bayes factor \citep{Ohag:1995}, intrinsic priors \citep{Case:More:2006}, expected posterior prior \citep{Pere:Berg:2002}; see \citet{Peri:2005} for a comprehensive review.
Another approach, whose precursor was essentially \citet{Jeff:1961}, is to develop a \emph{proper} prior for the parameter \lq \lq specific\rq \rq{} to the current model
based on some reasonable intuition, and then test its reasonableness in specific settings using simulation studies and possibly theoretical results. Examples, with reference to variable selection in normal linear models, include \citet{Zell:Siow:1980} on $g$-priors, \cite{Lian:EtAl:2008}, \cite{Clyd:Ghos:Litt:2011}, \cite{Maru:Geor:2011}; see also \citet{Baya:Garc:2007} for generalized linear models.
Very recently, a different approach was proposed by \cite{Baya:Berg:Fort:Garc:2012}, where \emph{criteria} that should be satisfied by any model choice prior are first laid out in generality, with special consideration for the objective case. Next, one seeks priors which satisfy these requirements, in the specific setting under investigation. We find this research strategy convincing, and adopt it in this paper to propose a solution for the follow-up experiment.
Recall the structure of model $M_i$ presented in (\ref{linearmodelMi}).
On the other hand the null model $M_0$ prescribes $y \,\vert\, \beta_0, \sigma \sim N_n(\beta_0 X_0, \sigma^2 I_n)$.
Consider the following
hierarchical $g$-prior for model choice
\begin{eqnarray} \label{p^R} p^R(\beta_0, \beta_i, \sigma \,\vert\, M_i)= p(\beta_0, \sigma)p^R(\beta_i \,\vert\, \beta_0, \sigma, M_i)= \sigma^{-1} \times \int_0^{\infty} N_{t_i}(\beta_i \,\vert\, 0, g \Sigma_i)p^R(g \,\vert\, M_i)dg, \end{eqnarray} where $p(\beta_0, \sigma)$ is the prior on the common parameters shared by all models,
$\Sigma_i=\sigma^2 (V_i^{\prime}V_i)^{-1}$, $V_i=(I_n-X_0(X_0^{\prime}X_0)^{-1}X_0^{\prime})X_i$ and \begin{eqnarray*} \label{p^Rg} p^R(g \,\vert\, M_i)=\frac{1}{2} \left[ \frac{1+n}{t_i+t_0}\right]^{1/2}(g+1)^{-3/2}1_{(\frac{1+n}{t_i+t_0}-1, \infty)}(g), \end{eqnarray*} with $1_A(t)=1$ if $t \in A$ and 0 otherwise. Prior (\ref{p^R}) has been shown to satisfy all \emph{desiderata} from an objective model choice perspective;
see \citet[Section 2]{Baya:Berg:Fort:Garc:2012} (the superscript \lq \lq R\rq \rq{} stands for \lq \lq robust prior \rq \rq{}). Notice that (\ref{p^R}) is improper; however it scales appropriately when compared to the null model $M_0$ so that the resulting Bayes factor for the comparison of $M_i$ against $M_0$ is meaningful. Its expression is \begin{eqnarray} \label{BF_i0} \lefteqn{BF_{i0}(y)= \left[ \frac{n+1}{t_i+t_0} \right]^{-t_i/2}} \nonumber \\ && \hspace{-.5cm} \times \frac{Q_{i0}(y)^{-(n-t_0)/2}}{t_i+1} {_2F_1}\left[ \frac{t_i+1}{2}; \frac{n-t_0}{2}; \frac{t_i+3}{2}; \frac{(1-Q_{i0}(y)^{-1})(t_i+t_0)}{n+1} \right] , \end{eqnarray} where $_2F_1$ is the standard hypergeometric function \citep{Abra:Steg:1964}, and $Q_{i0}(y)=SSE_i(y)/SSE_0(y)$ is the ratio of the sum of squared errors of models $M_i$ and $M_0$.
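As a purely illustrative aid (the function name and interface are ours, and this is separate from the R and Fortran code mentioned later), formula (\ref{BF_i0}) can be evaluated directly from the two residual sums of squares using a standard implementation of the Gauss hypergeometric function.
\begin{verbatim}
# Illustrative sketch: Bayes factor BF_{i0} of formula (BF_i0), given
# n, t_0, t_i and the sums of squared errors of model M_i and the null M_0.
from scipy.special import hyp2f1

def bayes_factor_i0(n, t0, ti, sse_i, sse_0):
    q = sse_i / sse_0                         # Q_{i0}(y)
    z = (1.0 - 1.0 / q) * (ti + t0) / (n + 1.0)
    return (((n + 1.0) / (ti + t0)) ** (-ti / 2.0)
            * q ** (-(n - t0) / 2.0) / (ti + 1.0)
            * hyp2f1((ti + 1.0) / 2.0, (n - t0) / 2.0, (ti + 3.0) / 2.0, z))
\end{verbatim}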
\section{The model discrimination criterion}
\label{sec:model discimination}
Assuming that one of the entertained models is true,
the posterior probability of each model $M_i$ can be written in the convenient form \begin{eqnarray} \label{probMiGiveny} \mathrm{Pr}(M_i \,\vert\, y)=\frac{BF_{i0}(y)P_{i0}}{1+\sum_{j \neq 0} BF_{j0}(y)P_{j0}}, \end{eqnarray} where $P_{j0}$
is the prior odds of model $M_j$ relative to $M_0$ implied by (\ref{probMigeneral}), and $BF_{j0}(y)$ is defined in (\ref{BF_i0}).
In particular we adopt (\ref{probMigeneral}) with $(a=1,b=1)$ for which $P_{j0}=f_j !(k-f_j)!/k!$.
Having computed $\mathrm{Pr}(M_i \,\vert\, y)$, $i=1, \ldots, 2^k$, a useful by-product is the posterior probability $P_A(y)$ that factor $A$, say, is active; namely
$
\sum_{ \{M_j:\, \mbox{factor A is active} \}}\mathrm{Pr}(M_j \,\vert\, y)$.
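In practice these quantities are easy to assemble once the Bayes factors are available; the following sketch (Python, purely indicative, with names of our choosing) mirrors (\ref{probMiGiveny}) and the sum defining $P_A(y)$.
\begin{verbatim}
# Illustrative sketch: posterior model probabilities and the posterior
# probability that a given factor is active.
def posterior_model_probs(bf, prior_odds):
    # bf[i], prior_odds[i]: Bayes factor and prior odds of model i versus
    # the null model; index 0 is the null itself (bf[0] = prior_odds[0] = 1).
    weights = [b * p for b, p in zip(bf, prior_odds)]
    total = sum(weights)
    return [w / total for w in weights]

def prob_factor_active(post, active_sets, factor):
    # active_sets[i]: set of factor labels active under model i.
    return sum(p for p, s in zip(post, active_sets) if factor in s)
\end{verbatim}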
In order to determine the $n^*$ follow-up runs which best discriminate among potential explanatory models,
\citet{Meye:Stei:Box:1996} suggested maximizing the following model discrimination (MD) criterion
\begin{eqnarray}
\label{MD-PredictiveCrit}
MD=\sum_{ i \neq j} \mathrm{Pr}(M_i|y)\mathrm{Pr}(M_j|y)KL(m(\cdot|y, M_i), m(\cdot|y, M_j)),
\end{eqnarray}
where $m(\cdot|y, M_i)$ is the (posterior) predictive density for the vector of follow-up observations,
and
\begin{eqnarray}
KL(f,g)= \int f(x) \log \frac{f(x)}{g(x)}dx
\end{eqnarray}
is the Kullback-Leibler divergence of the density $f$ from $g$.
Notice that MD is a weighted average of the KL-divergences between all pairs of predictive distributions for the follow-up observations.
Adopting a standard reference prior $p^N(\beta_0, \beta_i, \sigma \,\vert\, M_i) \propto 1/\sigma$ for prediction purposes leads to a closed form expression for the MD criterion, which we label OMD (Objective MD). This is given by \begin{eqnarray} \label{MDfinal} && OMD= \\
&& \sum_{ i \neq j} \mathrm{Pr}(M_i|y)\mathrm{Pr}(M_j|y) \frac{1}{2} \left\{ tr(V_j^{* \,-1} V_i^*) + \frac{n-t_i-t_0}{SSE_i}(\hat{y}_i^*-\hat{y}_j^*)^{\prime} V_j^{* \,-1} (\hat{y}_i^*-\hat{y}_j^*)
-n^* \right\}, \nonumber \end{eqnarray}
where $\mathrm{Pr}(M_i|y)$ is defined in (\ref{probMiGiveny}), $SSE_i$ is the usual residual sum of squares under model $M_i$, and the remaining quantities are defined in (\ref{y_i*V_i*}) of Appendix A in the Supplementary Materials, which includes further details. To find the best follow-up runs one has to maximize OMD over all possible $n^*$ combinations with repetition from the set of runs of the full factorial design.
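To make the computation concrete, the sketch below (Python, purely indicative; all names are of our choosing, and the quantities $\hat{y}_i^*$ and $V_i^*$ are those of (\ref{y_i*V_i*}) in Appendix A) evaluates OMD for a given candidate follow-up design.
\begin{verbatim}
# Illustrative sketch of the OMD criterion (MDfinal); not the authors' code.
import numpy as np

def predictive_quantities(Z, y, Z_star):
    # OLS fit of one model on the screening data (Z = [X_0, X_i]) and
    # predictive mean / scale matrix for the follow-up rows Z_star.
    gamma_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ gamma_hat
    sse = float(resid @ resid)
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    y_star_hat = Z_star @ gamma_hat
    V_star = np.eye(Z_star.shape[0]) + Z_star @ ZtZ_inv @ Z_star.T
    return y_star_hat, V_star, sse

def omd(post_prob, y_star_hat, V_star, sse, dof):
    # post_prob[i]: posterior probability of model i; dof[i] = n - t_i - t_0.
    n_star = V_star[0].shape[0]
    total = 0.0
    for i in range(len(post_prob)):
        for j in range(len(post_prob)):
            if i == j:
                continue
            Vj_inv = np.linalg.inv(V_star[j])
            diff = y_star_hat[i] - y_star_hat[j]
            kl = 0.5 * (np.trace(Vj_inv @ V_star[i])
                        + (dof[i] / sse[i]) * diff @ Vj_inv @ diff
                        - n_star)
            total += post_prob[i] * post_prob[j] * kl
    return total
\end{verbatim}
The search for the best follow-up design then reduces to evaluating this quantity for every multiset of $n^*$ candidate runs, e.g. generated with \texttt{itertools.combinations\_with\_replacement}, as in the applications of Section \ref{sec:applications}.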
\citet{Meye:Stei:Box:1996} evaluate MD using
(\ref{probMigivenpi}) with $\pi$ equal to a fixed small value (the recommended choice was $\pi=0.25$) to induce factor sparsity in model selection.
Additionally, they chose $p(\beta_0, \sigma) \propto 1/\sigma$ as we did, while adopting a proper weakly informative Gaussian prior on $\beta_i\,\vert\, \beta_0, \sigma, M_i$, wherein each component of $\beta_i$ is assigned a normal distribution with zero expectation and standard deviation $\gamma \sigma$, with $\gamma$ a tuning parameter which needs to be specified by the user. We label the resulting criterion CMD (Conventional MD).
OMD has the advantage, with respect to CMD, of being fully Bayes, objective, and based on principled model selection priors. In particular, there is no need to tune hyperparameters, which makes it especially attractive from a practitioner's point of view. In fact, on the one hand it is well known that model selection is typically highly sensitive to the choice of hyperparameters; on the other hand this prior information is usually hard to elicit in screening experiments.
We have developed Fortran and R-code to find the optimal follow-up runs under OMD. This code relies on existing Fortran and R-code to carry out computations under CMD; see \citet{Meye:1996} and \citet{Barr:2013}.
\section{Applications} \label{sec:applications} \subsection{Injection molding experiment} \label{subsec:injecion}
We first consider the experiment on the percentage shrinkage in an injection molding process described in \citet[p. 398]{Box:Hunt:Hunt:1978} which contains eight factors labeled A through H. This experiment was
also analyzed in \citet{Meye:Stei:Box:1996}. The plan is a $2^{8-4}$ fractional factorial resolution IV design with generators I=ABDH=ACEH=BCFH=ABCG.
A preliminary analysis based on normal probability plots, and confirmed by a Bayesian analysis (be it conventional or objective), leads to the conclusion that the potentially active factors can be reduced to four, namely A, C, E and H.
Accordingly, we follow \citet{Box:Hunt:Hunt:1978} and collapse the $2^{8-4}$ fractional factorial design on the above factors, thus obtaining a replicated $2^{4-1}$ design with defining relation I=ACEH.
The posterior probabilities that each factor is active are reported in Table \ref{tab:Injection.Post.Prob.Factors.Active}.
They are essentially uniform for the case of three-factor interactions (3FI) both under the conventional and the objective approach; this result is only partly modified in the case of 2FI, where factor A appears unlikely to be active under the conventional approach.
It appears that additional runs are needed both to resolve the ambiguity regarding factor A, and to further investigate the role of the remaining factors.
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Injection molding experiment. Posterior probabilities that factors are active for the $2^{4-1}$ design}}} \label{tab:Injection.Post.Prob.Factors.Active} \begin{tabular}{*{6}{c}} \hline \hline & \multicolumn{2}{c}{2FI} & & \multicolumn{2}{c}{3FI } \\ \cline{2-3} \cline{5-6} Factor & Conventional Approach & Objective Approach & & Conventional Approach & Objective Approach\\
\hline $A$ & $0.18$ & $0.68$ & & $0.76$ & $0.87$ \\
$C$ & $1$ &$1$ & & $0.76$ & $0.88$ \\
$E$ & $1$ & $1$ & & $0.76$ & $0.87$\\
$H$ & $0.91$ & $0.95$ & & $0.76$ & $0.87$ \\
\hline \end{tabular} \end{table}
Table \ref{tab:Injection.Candidate.Runs} in the Supplementary materials reports the full $2^{4}$ design in the factors A, C, E, H, with the corresponding runs in the $2^{8-4}$ fractional design.
Assuming that $n^*=4$ follow-up runs have to be chosen, the number of possible follow-up designs (with replication) from the 16 candidate runs of the $2^4$ factorial design in the factors A, C, E, H is 3876.
The five best designs identified by the OMD criterion of formula (\ref{MDfinal}), along with those corresponding to the CMD criterion, are shown in Table \ref{tab:Injection.Top5}, separately for models having 2FI and 3FI. The CMD criterion was applied using the recommended settings $\pi=0.25$ and $\gamma=2$, and without a block effect to distinguish between screening and follow-up runs.
For completeness we also report the value of the criterion achieved by each of the listed designs. Note that these values are meaningful for comparison purposes within each criterion, but not between criteria.
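The count of 3876 designs quoted above is just the number of multisets of size $n^*=4$ drawn from the 16 candidate runs, $\binom{16+4-1}{4}$; a quick, purely illustrative check in Python:
\begin{verbatim}
# Number of 4-run follow-up designs with repetition:
# C(16 + 4 - 1, 4) = 3876 from 16 candidates (injection molding),
# C(32 + 4 - 1, 4) = 52360 from 32 candidates (reactor experiment).
import math
assert math.comb(19, 4) == 3876 and math.comb(35, 4) == 52360
\end{verbatim}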
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Injection molding experiment. Top five follow-up designs}}} \label{tab:Injection.Top5} \begin{tabular}{*{10}{c}} \hline \hline & \multicolumn{4}{c}{2FI} & & \multicolumn{4}{c}{3FI} \\ \cline{2-5} \cline{7-10} Model & CMD & runs & OMD & runs & & CMD & runs & OMD & runs \\ \hline
$1$ & 11.23 & 9 12 13 16 & $51.12$ & 9 13 15 16 & & $88.37$ & 9 11 12 15 & $103.41$ & 9 10 11 13 \\
$2$ & $11.08$ & 9 12 15 16 & $49.84$ & 9 12 13 16 & & $87.37$ & 9 12 12 15 & $100.39$ & 9 10 11 12 \\
$3$ & $10.99$ & 11 12 15 16 & $49.53$ & 9 9 13 15 & & $87.35$ & 9 9 12 15 & $99.03$ & 9 10 10 11 \\
$4$ & $10.92$ & 9 11 12 16 & $49.52$ & 9 9 13 16 & & $86.53$ & 9 12 14 15 & $98.34$ & 9 10 11 16 \\
$5$ & $10.87$ & 12 13 15 16 & $48.22$ & 9 11 13 16 & & $83.87$ & 9 11 12 12 & $98.09$ & 9 10 11 11 \\ \hline \end{tabular} \end{table}
A common feature is that all follow-up runs belong to the set $\{9, 10, \ldots,16\}$, i.e. the set of runs which were \textit{not} carried out in the initial screening experiment; this is reassuring, because the initial runs alone were not able to discriminate sufficiently among models.
Some differences emerge depending on the order of interactions allowed in the models, as well as on the criterion adopted (CMD or OMD), although runs 9 and 11 are broadly recurring.
\subsection{Reactor experiment} \label{subsec:reactor}
In this subsection we consider the reactor experiment described in \citet[p. 376]{Box:Hunt:Hunt:1978}. Table \ref{tab:Reactor.Full.Fact.Design} in the Supplementary Materials reports the complete $2^{5}$ factorial design, including the value of the response variable. This feature makes this experiment especially attractive, because we can actually verify the effectiveness of our approach in identifying active factors, as we detail below.
Following \citet[Section 3]{Meye:Stei:Box:1996}, we extract eight runs from the original experiment corresponding to the $2^{5-2}$ Resolution III design with generators I=ABD=ACE, and consider these runs as our initial screening design; see Table \ref{tab:Reactor.Screening.Design} in the Supplementary Materials.
The five highest posterior probability (top) models based on the objective Bayes approach are reported in Table \ref{tab:Reactor.Post.Prob.TopModelsAndActFac.2FI} for models with 2FI. The corresponding results for the case of 3FI are reported in Table \ref{tab:Reactor.Post.Prob.TopModelsAndActFac.3FI} in the Supplementary Materials.
For the sake of comparison
we also included the corresponding results based on the conventional approach derived in \citet{Meye:Stei:Box:1996} (setting $\gamma=0.4$ and $\pi=0.25$). The posterior probabilities of all models, and that the factors are active, are also displayed in Figure \ref{fig:Reactor.Post.Prob.Models.Factors.2FI} for the case of 2FI (and Figure \ref{fig:Reactor.Post.Prob.Models.Factors.3FI} in the Supplementary Materials for the case of 3FI).
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Posterior probabilities of top five models and that factors are active (2FI)}}} \label{tab:Reactor.Post.Prob.TopModelsAndActFac.2FI} \begin{tabular}{*{6}{c}} \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
Model & Factors & Posterior probability & & Factors & Posterior probability \\ \hline $1$ & $null$ & $0.23$ & & $null$ & $0.32$\\
$2$ & $B$ & $0.13$ & & $B,D,E$ & $0.10$ \\
$3$ & $D$ & $0.07$ & & $B$ & $0.08$\\
$4$ & $A$ & $0.07$ & & $A,D$ & $0.05$ \\
$5$ & $A,D$ & $0.05$ & & $B,D$ & $0.05$ \\ \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
& Factor & Posterior probability & & Factor & Posterior probability \\ \cline{2-6}
& $A$ & $0.27$ & & $A$ & $0.28$ \\
& $B$ & $0.38$ & & $B$ & $0.47$ \\
& $C$ & $0.17$ & & $C$ & $0.15$\\
& $D$ & $0.29$ & & $D$ & $0.39$ \\
& $E$ & $0.17$ & & $E$ & $0.21$ \\
\hline \end{tabular} \end{table}
\begin{figure}
\caption{{\itshape Reactor experiment. Posterior probabilities of models and that factors are active (2FI)}}
\label{fig:Reactor.Post.Prob.Models.Factors.2FI}
\end{figure}
It appears from Figure \ref{fig:Reactor.Post.Prob.Models.Factors.2FI} that the objective Bayes prior tends to favor, relative to the conventional approach, the null model as well as a few models containing three factors. This is due to the different nature of the respective priors on model space.
The posterior probabilities that factors are active do not point to a clear-cut conclusion. The highest scoring factor (B) does not even achieve the 50\% threshold; the remaining factors trail behind but each one has an appreciable probability of being active. Extra runs are needed in order to solve what appears to be an ambiguous outcome.
To facilitate the comparison with \citet{Meye:Stei:Box:1996}, we chose to add $n^*=4$ follow-up runs.
For this problem there exist 52360 four-run designs (with replications) from 32 candidates.
The five best follow-up designs selected by the OMD, as well as the CMD, criterion are shown in Table \ref{tab:Reactor.Top.Five.Follow-up.2FI} for the case of 2FI.
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Top five follow-up designs (2FI)}}} \label{tab:Reactor.Top.Five.Follow-up.2FI} \begin{tabular}{*{7}{c}} \hline \hline Model & & CMD & runs & & OMD & runs \\ \cline{1-1} \cline{3-4} \cline{6-7}
$1$ & & $0.5840$ & 4 10 12 26 & & $69.85$ & 11 15 26 29 \\
$2$ & & $0.5821$ & 4 12 26 27 & & $69.73$ & 15 15 29 30 \\
$3$ & & $0.5800$ & 10 12 26 27 & & $69.71$ & 11 15 26 30 \\
$4$ & & $0.5797$ & 4 11 12 26 & & $69.63$ & 11 15 29 30 \\
$5$ & & $0.5792$ & 4 10 26 28 & & $69.42$ & 11 15 25 30 \\ \hline \end{tabular} \end{table}
The best four runs under the OMD criterion only marginally overlap (run 26) with those obtained using CMD; on the other hand they do coincide when models with three-factor interactions are considered; see Table \ref{tab:Reactor.Top.Five.Follow-up.3FI} in the Supplementary Materials.
To validate the effectiveness of our approach, we re-ran the analysis using all 12 runs (screening \textit{and} follow-up). To account for potentially different experimental conditions, a block effect was added to each linear model. For models having 2FI, the results are summarized in Table \ref{tab:Reactor.Post.Prob.Combined.2FI}, and also displayed in Figure \ref{fig:Reactor.Post.Prob.Combined.2FI}.
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Posterior probabilities of top five models and that factors are active based on the combined screening and follow-up designs (2FI)}}} \label{tab:Reactor.Post.Prob.Combined.2FI} \begin{tabular}{*{6}{c}} \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
Model & Factors & Posterior probability & & Factors & Posterior probability \\ \hline $1$ & $B,D,E$ & $0.73$ & & $B,D,E$ & $0.86$\\
$2$ & $B,D$ & $0.09$ & & $B,D$ & $0.05$ \\
$3$ & $A,B,D,E$ & $0.06$ & & $B$ & $0.04$\\ $4$ & $B,C,D,E$ & $0.03$ & & $null$ & $0.01$ \\
$5$ & $B$ & $0.03$ & & $B,C,D$ & $0.01$ \\ \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
& Factor & Posterior probability & & Factor & Posterior probability \\ & $A$ & $0.08$ & & $A$ & $0.02$ \\
& $B$ & $0.97$ & & $B$ & $0.98$ \\
& $C$ & $0.06$ & & $C$ & $0.02$\\
& $D$ & $0.94$ & & $D$ & $0.93$ \\
& $E$ & $0.83$ & & $E$ & $0.87$ \\
\hline \end{tabular} \end{table}
It now appears clearly that the only model worthy of consideration is the one involving factors B, D and E; this conclusion is also reflected in the posterior probabilities that the factors are active.
Table \ref{tab:Reactor.Post.Prob.Combined.3FI} and Figure \ref{fig:Reactor.Post.Prob.Combined.3FI} in the Supplementary Materials illustrate the analysis for models involving three-factor interactions, with results broadly similar to those obtained in the 2FI case, the main difference being that factor E appears less likely to be active.
\begin{figure}
\caption{{\itshape Reactor experiment. Posterior model probabilities of models and that factors are active based on the combined screening and follow-up designs (2FI)}}
\label{fig:Reactor.Post.Prob.Combined.2FI}
\end{figure}
The above results, obtained on the basis of 12 runs, are in agreement with those that emerge from the normal probability plot of the contrasts based on the \textit{complete} set of 32 runs; see Figure \ref{fig: Reactor.Normal.Prob.Contrasts} in the Supplementary Materials.
Clearly the follow-up runs greatly contributed to differentiating among factors in terms of their likely activity. Which of the two approaches, conventional or objective, did a better job? Table \ref{tab:Reactor.Shannon} offers an answer: it reports the normalized Shannon heterogeneity index of the posterior distribution over models after (1) the screening experiment, and (2) the combined screening and follow-up experiment.
Clearly the index is lower in the latter situation, reflecting reduced heterogeneity (increased concentration). Our objective criterion not only scores lower than the conventional one after both (1) and (2), but also produces a greater relative reduction (71\% against 59\%).
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Shannon heterogeneity of model posterior probabilities}}} \label{tab:Reactor.Shannon} \begin{tabular}{{lcccc}} \hline \hline & & Conventional Approach & & Objective Approach \\ \cline{1-1} \cline{3-3} \cline{5-5} (1): Screening experiment & & $0.79$ & & $0.74$ \\ (2): Screening and follow-up experiment & & $0.32$ & & $0.21$ \\ \\ Relative reduction between (1) and (2) & & $59\%$ & & $71\%$ \\ \hline \end{tabular} \end{table}
A similar exercise was performed with respect to the posterior probabilities that the factors are active. In this case one can no longer use the Shannon heterogeneity index, because these probabilities do not sum to one (the events are not mutually exclusive). Accordingly, we chose the coefficient of variation, for which situation (2) corresponds to greater variation. Again OMD provides a higher score than CMD in both cases (1) and (2), even though CMD provides a greater improvement in relative terms; see Table \ref{tab:Reactor.Coefficient.Of.Variation}.
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Coefficient of variation of posterior probabilities that factors are active}}}
\label{tab:Reactor.Coefficient.Of.Variation} \begin{tabular}{{lcccc}} \hline \hline & & Conventional Approach & & Objective Approach \\ \cline{1-1} \cline{3-3} \cline{5-5} (1): Screening experiment & & $0.32$ & & $0.39$ \\ (2): Screening and follow-up experiment & & $0.72$ & & $0.80$ \\ \\ Relative increase between (1) and (2) & & $125\%$ & & $105\%$ \\ \hline \end{tabular} \end{table}
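For reference, the two summary measures used in Tables \ref{tab:Reactor.Shannon} and \ref{tab:Reactor.Coefficient.Of.Variation} can be computed as in the following sketch (Python, purely indicative); we assume here that the Shannon index is normalized by the logarithm of the number of models, and the population form of the standard deviation is used in the coefficient of variation.
\begin{verbatim}
# Illustrative sketch of the two summary measures used above.
import math

def normalized_shannon(probs):
    # probs: posterior probabilities of all models (summing to one)
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return h / math.log(len(probs))

def coefficient_of_variation(values):
    # values: e.g. the posterior probabilities that each factor is active
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return math.sqrt(var) / m
\end{verbatim}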
\section{Discussion} \label{sec:discussion}
In this paper we have developed an objective Bayesian method to obtain follow-up designs which are optimal in terms of predictive model discrimination.
In order to determine the posterior probabilities of the models, we have employed a multiplicity-correction prior on model space, and a principled hierarchical $g$-prior, developed for model selection, on the model parameters.
With regard to prediction, we have relied on a standard reference prior, which produces a closed-form expression for the model discrimination criterion, thus greatly enhancing the computational speed of searching through the space of potential designs.
Employing different priors for model selection and prediction implies
that our model discrimination criterion will no longer enjoy the theoretical properties described in the original contribution of \citet{Box:Hill:1967}. However, it will do so at least approximately, because predictions based on the standard reference prior are themselves an approximation to those computed using the model selection prior; see Appendix B of the Supplementary materials.
Finally, we remark that the practice of using distinct prior distributions for design and estimation-prediction dates back at least to \citet{Tsut:1972}. For a more recent example see \citet{Han:Chal:2004}, and references therein, where the motivation is that distinct researchers, with different priors, may be involved in the design and estimation stage.
Our objective Bayes approach requires that the design matrix be of full rank.
This is in contrast to what happens in subjective Bayes approaches where this condition can be relaxed at the expense of having to specify a prior covariance matrix on the regression coefficients. Substantive prior information of this kind is usually unavailable, and conventional choices are problematic because model selection is highly sensitive to such prior inputs; see \citet{Berg:Peri:2001}.
The requirement that the design matrix be of full rank implies that the set of models that can be entertained -for a given order of interactions- may be smaller than that of all potential models. This difficulty however can be typically overcome by omitting models containing higher-order interactions, or context variables (such as blocking). Since the main goal is obtaining the posterior probability of the active factors -rather than the posterior probability of the models- this simplification seems reasonable.
With regard to the prior on model space presented in Subsection \ref{subsec: prior on model space},
we adopted the values $(a=1,b=1)$. Recently the alternative choice $(a=1, b=k+1)$ has been advocated to achieve a stronger sparse modeling effect. This prior, besides performing multiplicity adjustment, is also optimal in terms of concentration of the posterior distribution around the true model; see \cite{Cast:Vaar:2012}.
Having experimented with this prior, we found that the main difference is that the choice $(a=1$, $b=k+1)$ gives more weight to parsimonious models, relative to $(a=1, b=1)$; the optimal follow-up runs, however, are broadly similar in the two cases.
The prior on model space adopted in this paper relies on the assumption of effect forcing whereby if a set of factors is inserted in the model, then all interactions (up to the desired order) must be included.
One could relax the assumption of effect forcing, and consider a more flexible approach, as advocated in \citet{Bing:Chip:inco:2007}, through the incorporation of prior opinions on structural aspects of effects such as Effect sparsity, Effect hierarchy and Effect heredity; see also \citet{Wolt:Bing:2011}.
The model discrimination criterion used in this work is based on the Kullback-Leibler divergence. Alternative divergence measures could be employed. For instance, within the context of screening experiments, \citet{Bing:Chip:inco:2007} suggest to use the Hellinger distance, which is symmetric and bounded above. Symmetry is useful from the computational perspective, because it avoids to sum over all pairs of distinct models, while a bounded index makes calibration and interpretation easier. We could implement our method using the Hellinger distance because its expression is also available in closed-form. The choice of the KL-divergence was mostly motivated for comparison purposes with results in the current literature.
\section*{ACKNOWLEDGMENTS} The R-code to find the optimal follow-up runs was developed by Marta Nai Ruscone, Dipartimento di Scienze Statistiche, Universit\`{a} Cattolica del Sacro Cuore, Milan.
We are indebted to the participants to the O-Bayes 2013 conference (December 15-17, 2013; Duke University) for useful comments on a preliminary version of this paper. In particular we thank Veronika R\v{o}ckov\'{a} for a detailed discussion of our work, including priors on model space and the derivation of the model discrimination criterion, as well as Gonzalo Garc\'{i}a-Donato for pointing out the relationship between the posterior under the hierarchical $g$-prior and that based on the reference prior.
\section*{Supplementary Materials}
\begin{description}
\item[Appendix A:] Derivation of KL-divergence between the predictive distributions for the follow-up runs under two models.
\item[Appendix B:] Relationship between the posterior distributions under the hierarchical $g$-prior and the reference prior.
\item[Tables and Figures:] A collection of Tables and Figures complementing those in the main text.
\end{description}
\subsection*{Appendix A: derivation of KL-divergence between the predictive distributions for the follow-up runs under two models}
Let $y^*$ denote the vector of observations for the $n^*$ follow-up runs. Under model $M_i$, let $\gamma_i^{\prime}=(\beta_0^{\prime}, \beta_i^{\prime})$, and denote with $p^N(\gamma_i, \sigma^2 \,\vert\, M_i)$ an objective estimation prior, where the superscript \lq \lq N\rq \rq{} stands for \lq \lq noninformative\rq \rq{}. Then \begin{eqnarray*} m(y^* \,\vert\, y, M_i)= \int \int f(y^* \,\vert\, \gamma_i, \sigma^2, M_i)p^N(\gamma_i, \sigma^2 \,\vert\, y, M_i) d\gamma_i d\sigma^2, \end{eqnarray*} where $f(y^* \,\vert\, \gamma_i, \sigma^2, M_i)=N_{n^*}(y^* \,\vert\, Z_i \gamma_i, \sigma^2 I_{n^*})$ is the usual Gaussian regression model having set
$Z_i=[X_0 \vdots X_i]$.
Standard computations yield \begin{eqnarray} \label{p^NGiveny} && p^N(\gamma_i, \sigma^2 \,\vert\, y,M_i) =p^N(\gamma_i \,\vert\, \sigma^2, y, M_i) p^N( \sigma^2 \,\vert\, y,M_i) \nonumber \\ &=& N_{t_i+t_0}(\gamma_i \,\vert\, \hat{\gamma_i}, \sigma^2 (Z_i^{\prime}Z_i)^{-1}) IGa(\sigma^2 \,\vert\, \frac{n-t_i-t_0}{2}, \frac{SSE_i}{2}), \end{eqnarray} where $\hat{\gamma}_i$ is the OLS estimate of $\gamma_i$ and $IGa(t \,\vert\, a,b)$ is the inverse gamma density having kernel $(1/t)^{a+1}\exp(-b/t)$.
As a consequence the predictive distribution of $y^*$, conditionally on $\sigma^2$ and under model $M_i$, can be written as \begin{eqnarray} \label{predictivey*} m(y^* \,\vert\, \sigma^2, y, M_i)=N_{n^*}(y^* \,\vert\, \hat{y}_i^*, \sigma^2 V_i^*), \end{eqnarray} where \begin{eqnarray} \label{y_i*V_i*} \hat{y}_{i}^*= Z_{i}^* \hat{\gamma}_i, \quad V_i^*=I_{n^*}+Z_i^*(Z_i^{\prime}Z_i)^{-1}Z_i^{*\,^{\prime}}. \end{eqnarray}
To compute the KL divergences between pairs of predictive distributions appearing in formula (\ref{MD-PredictiveCrit}) of the paper, we proceed in two steps. First we evaluate the KL divergence conditionally on $\sigma$, and then we take the expectations with respect to the posterior distribution of $\sigma^2$.
Conditionally on $\sigma$, the predictive distributions are multivariate normal, and the following Lemma is useful. \begin{lemma} Let $m_0(\cdot)$ and $m_1(\cdot)$ be two $s$-dimensional multivariate Gaussian distributions with expectations $\mu_0$ and $\mu_1$ and covariance matrices $\Sigma_0$ and $\Sigma_1$. Then \begin{eqnarray} \label{KLGaussian}
KL(m_0(\cdot), m_1(\cdot))=\frac{1}{2} \left\{ tr(\Sigma_1^{-1} \Sigma_0) + (\mu_1-\mu_0)^{\prime} \Sigma_1^{-1} (\mu_1- \mu_0) +\log \left( \frac{|\Sigma_1|}{|\Sigma_0|}\right) -s \right\}. \end{eqnarray} \end{lemma} As a corollary we get \begin{eqnarray} \label{KLPredictivesGivenSigma} \hspace{-1.5cm} && KL(m(\cdot \,\vert\, \sigma^2, y, M_i), m(\cdot \,\vert\, \sigma^2, y, M_j))= \nonumber \\ \hspace{-1.5cm} && \frac{1}{2} \left\{ tr(V_j^{* \,-1} V_i^*) + \frac{1}{\sigma^2}(\hat{y}_i^*-\hat{y}_j^*)^{\prime} V_j^{* \,-1} (\hat{y}_i^*-\hat{y}_j^*)
+ \log \left( \frac{|V_j^*|}{|V_i^*|}\right) -n^* \right\}. \end{eqnarray}
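Formula (\ref{KLGaussian}) translates directly into code; the following sketch (Python with numpy, purely indicative, with names of our choosing) is a literal transcription.
\begin{verbatim}
# Direct transcription of the Gaussian KL formula (KLGaussian); illustrative.
import numpy as np

def kl_gaussian(mu0, Sigma0, mu1, Sigma1):
    # KL divergence of N(mu0, Sigma0) from N(mu1, Sigma1), both s-dimensional
    s = len(mu0)
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  + logdet1 - logdet0 - s)
\end{verbatim}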
The last step involves an expectation with respect to the posterior distribution of $\sigma^2$. Since $ \sigma^2 \sim IGa( \frac{n-t_i-t_0}{2}, \frac{SSE_i}{2})$, we get $\mathbb{E}(1/\sigma^2 \,\vert\, y, M_i)=(n-t_i-t_0)/SSE_i$. Therefore \begin{eqnarray} \label{KLPredictives} \hspace{-1.5cm} && KL(m(\cdot \,\vert\, y, M_i), m(\cdot \,\vert\, y, M_j))= \nonumber \\ \hspace{-1.5cm} && \frac{1}{2} \left\{ tr(V_j^{* \,-1} V_i^*) + \frac{n-t_i-t_0}{SSE_i}(\hat{y}_i^*-\hat{y}_j^*)^{\prime} V_j^{* \,-1} (\hat{y}_i^*-\hat{y}_j^*)
+ \log \left( \frac{|V_j^*|}{|V_i^*|}\right) -n^* \right\}.
\end{eqnarray} When it comes to computing the criterion OMD of formula (\ref{MDfinal}) in the paper, all terms $\log(|V_j^*|/|V_i^*|)$ cancel, because the sum extends over all ordered pairs of indices $i \neq j$.
\subsection*{Appendix B: posterior distribution of $(\beta_0, \beta_i, \sigma)$ under the reference and the hierarchical $g$-prior}
Consider the linear model $M_i$ represented by equation (\ref{linearmodelMi}) in the paper, and assume for simplicity that $\beta_0$ is a scalar ($t_0=1$).
We want to show that the posterior distribution of $(\beta_0, \beta_i, \sigma)$ under the hierarchical $g$-prior can be approximated with the corresponding distribution under the reference prior, at least when $n$ is moderately large.
Consider first the posterior under the standard reference prior $p^N(\beta_0, \beta_i, \sigma \,\vert\, M_i) \propto 1/\sigma$. This is given by \begin{eqnarray*} \label{p^NbetaPosterior} p^R(\beta_0, \beta_i, \sigma \,\vert\, y, M_i)=N(\beta_0 \,\vert\, \bar{y}, \sigma^2/n) N_{t_i}(\beta_i \,\vert\, \hat{\beta}_i, \sigma^2(V_i^{\prime}V_i)^{-1}) IGa(\sigma^2 \,\vert\, \frac{n-t_i-1}{2}, \frac{SSE_i}{2}). \end{eqnarray*} On the other hand, if the prior is the hierarchical $g$-prior, see equation (\ref{p^R}) in the paper, the posterior becomes \begin{eqnarray*} \label{p^RbetaPosterior} \hspace{-1cm}&&p^R(\beta_0, \beta_i, \sigma \,\vert\, y, M_i)=N(\beta_0 \,\vert\, \bar{y}, \sigma^2/n) \nonumber \\ \hspace{-1cm}&&\int N_{t_i}(\beta_i \,\vert\, \frac{g}{g+1}\hat{\beta}_i, \frac{g}{g+1} \sigma^2(V_i^{\prime}V_i)^{-1}) IGa(\sigma^2 \,\vert\, \frac{n-1}{2}, \frac{g}{2(g+1)}(SSE_i+\frac{1}{g}SSE_0))p^R(g \,\vert\, M_i)dg.
\end{eqnarray*} Since $p^R(g \,\vert\, M_i)$ is positive only for $g> \frac{1+n}{t_i+t_0}-1$, it follows that as $n$ grows, so does $g$ in probability; in particular $\frac{g}{g+1} \stackrel{p} \rightarrow 1$ ($n \rightarrow \infty$), and the two posterior distributions become similar.
The above argument was developed in a preliminary version of the article \citet{Baya:Berg:Fort:Garc:2012}, but is not present in the final version of the paper.
\subsection*{Tables and Figures}
\renewcommand{1}{1} \begin{table}[htbp] \footnotesize \centering \caption{{\protect\small \textit{Injection molding experiment. Candidate follow-up runs}}} \label{tab:Injection.Candidate.Runs} \begin{tabular}{*{6}{c}} \hline Run in the $2^{4}$ full design & A & C & E & H & Corresponding runs in the $2^{8-4}$ fractional design \\ \hline $1$ & $-$ & $-$ & $-$ & $-$ & 14,16 \\ $2$ & $-$ & $-$ & $+$ & $+$ & 1,3 \\
$3$ & $-$ & $+$ & $-$ & $+$ & 5,7 \\
$4$ & $-$ & $+$ & $+$ & $-$ & 10,12 \\
$5$ & $+$ & $-$ & $-$ & $+$ & 2,4 \\
$6$ & $+$ & $-$ & $+$ & $-$ & 13,15 \\
$7$ & $+$ & $+$ & $-$ & $-$ & 9,11 \\
$8$ & $+$ & $+$ & $+$ & $+$ & 6,8 \\
$9$ & $-$ & $-$ & $-$ & $+$ & \\
$10$ & $-$ & $-$ & $+$ & $-$ & \\
$11$ & $-$ & $+$ & $-$ & $-$ & \\
$12$ & $-$ & $+$ & $+$ & $+$ & \\
$13$ & $+$ & $-$ & $-$ & $-$ & \\
$14$ & $+$ & $-$ & $+$ & $+$ & \\
$15$ & $+$ & $+$ & $-$ & $+$ & \\
$16$ & $+$ & $+$ & $+$ & $-$ & \\
\hline \end{tabular} \end{table}
\renewcommand{1}{1} \begin{table}[htbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Full $2^{5}$ factorial design}}} \label{tab:Reactor.Full.Fact.Design} \begin{tabular}{*{9}{c}} \hline Run & & A & B & C & D & E & & $y$ \\ \hline $1$ & & $-$ & $-$ & $-$ & $-$ & $-$ & & 61 \\
$2$ & & $+$ & $-$ & $-$ & $-$ & $-$ & &53\\
$3$ & & $-$ & $+$ & $-$ & $-$ & $-$ & & 63\\
$4$ & & $+$ & $+$ & $-$ & $-$ & $-$ & & 61\\
$5$ & & $-$ & $-$ & $+$ & $-$ & $-$ & & 53\\
$6$ & & $+$ & $-$ & $+$ & $-$ & $-$ & & 56\\
$7$ & & $-$ & $+$ & $+$ & $-$ & $-$ & & 54\\
$8$ & & $+$ & $+$ & $+$ & $-$ & $-$ & & 61\\
$9$ & & $-$ & $-$ & $-$ & $+$ & $-$ & & 69\\
$10$ & & $+$ & $-$ & $-$ & $+$ & $-$ & & 61\\
$11$ & & $-$ & $+$ & $-$ & $+$ & $-$ & & 94\\
$12$ & & $+$ & $+$ & $-$ & $+$ & $-$ & & 93\\
$13$ & & $-$ & $-$ & $+$ & $+$ & $-$ & & 66\\
$14$ & & $+$ & $-$ & $+$ & $+$ & $-$ & & 60\\
$15$ & & $-$ & $+$ & $+$ & $+$ & $-$ & & 95\\
$16$ & & $+$ & $+$ & $+$ & $+$ & $-$ & & 98\\
$17$ & & $-$ & $-$ & $-$ & $-$ & $+$ & & 56\\
$18$ & & $+$ & $-$ & $-$ & $-$ & $+$ & & 63\\
$19$ & & $-$ & $+$ & $-$ & $-$ & $+$ & & 70\\
$20$ & & $+$ & $+$ & $-$ & $-$ & $+$ & & 65\\
$21$ & & $-$ & $-$ & $+$ & $-$ & $+$ & & 59\\
$22$ & & $+$ & $-$ & $+$ & $-$ & $+$ & & 55\\
$23$ & & $-$ & $+$ & $+$ & $-$ & $+$ & & 67\\
$24$ & & $+$ & $+$ & $+$ & $-$ & $+$ & & 65\\
$25$ & & $-$ & $-$ & $-$ & $+$ & $+$ & & 44\\
$26$ & & $+$ & $-$ & $-$ & $+$ & $+$ & & 45\\
$27$ & & $-$ & $+$ & $-$ & $+$ & $+$ & & 78\\
$28$ & & $+$ & $+$ & $-$ & $+$ & $+$ & & 77\\
$29$ & & $-$ & $-$ & $+$ & $+$ & $+$ & & 49\\
$30$ & & $+$ & $-$ & $+$ & $+$ & $+$ & & 42\\
$31$ & & $-$ & $+$ & $+$ & $+$ & $+$ & & 81\\
$32$ & & $+$ & $+$ & $+$ & $+$ & $+$ & & 82\\ \hline \end{tabular} \end{table}
\renewcommand{1}{1} \begin{table}[htbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Screening design}}}
\label{tab:Reactor.Screening.Design} \begin{tabular}{*{9}{c}} \hline Run in the full design & Run & A & B & C & D & E & & $y$ \\ \hline $2$ & $1$ & $+$ & $-$ & $-$ & $-$ & $-$ && 53 \\ $7$ & $2$ & $-$ & $+$ & $+$ & $-$ & $-$ & &54\\
$12$ & $3$ & $+$ & $+$ & $-$ & $+$ & $-$ &&93\\
$13$ & $4$ & $-$ & $-$ & $+$ & $+$ & $-$ & &66\\
$19$ & $5$ & $-$ & $+$ & $-$ & $-$ & $+$ & &70\\
$22$ & $6$ & $+$ & $-$ & $+$ & $-$ & $+$ & &55\\
$25$ & $7$ & $-$ & $-$ & $-$ & $+$ & $+$ & &44\\
$32$ & $8$ & $+$ & $+$ & $+$ & $+$ & $+$ & & 82\\ \hline \end{tabular} \end{table}
\renewcommand{1}{1} \begin{table}[htbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Posterior probabilities of top five models and that factors are active (3FI)}}} \label{tab:Reactor.Post.Prob.TopModelsAndActFac.3FI} \begin{tabular}{*{6}{c}} \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
Model & Factors & Posterior probability & & Factors & Posterior probability \\ \hline $1$ & $null$ & $0.23$ & & $null$ & $0.46$\\
$2$ & $B$ & $0.13$ & &$B$ & $0.12$ \\
$3$ & $D$ & $0.07$ && $A,D$ & $0.07$\\
$4$ & $A$ & $0.07$ && $B,D$ & $0.07$ \\
$5$ & $A,B$ & $0.05$ && $A,B$ & $0.07$ \\ \hline \hline & \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
& Factor & Posterior probability & & Factor & Posterior probability \\ \cline{2-6} & $A$ & $0.27$ && $A$ & $0.20$ \\
& $B$ & $0.37$ & &$B$ & $0.31$ \\
& $C$ & $0.17$ && $C$ & $0.06$\\
& $D$ & $0.29$ & &$D$ & $0.21$ \\
& $E$ & $0.17$ && $E$ & $0.06$ \\
\hline \end{tabular} \end{table}
\renewcommand{1}{1} \begin{table}[htbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Top five follow-up designs (3FI)}}} \label{tab:Reactor.Top.Five.Follow-up.3FI} \begin{tabular}{*{7}{c}} \hline \hline Model & & CMD & runs & & OMD & runs \\ \cline{1-1} \cline{3-4} \cline{6-7}
$1$ && $0.6535$ & 4 10 11 28 && $1.5647$ & 4 10 11 28 \\
$2$ && $0.6529$ & 4 10 11 12 && $1.5625$ & 4 26 27 28 \\ $3$ && $0.6502$ & 10 11 12 26 &&$1.5624$ & 20 26 27 28 \\ $4$ && $0.6501$ & 10 12 26 27 && $1.5623$ & 4 10 16 28 \\
$5$ && $0.6499$ & 4 10 12 26 && $1.5610$ & 4 11 26 28 \\ \hline \end{tabular} \end{table}
\renewcommand{1}{1} \begin{table}[tbp] \footnotesize \centering \caption{{\protect\small \textit{Reactor experiment. Posterior probabilities of top five models and that factors are active based on the combined screening and follow-up designs (3FI)}}} \label{tab:Reactor.Post.Prob.Combined.3FI} \begin{tabular}{*{6}{c}} \hline \hline
& \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
Model & Factors & Posterior probability & & Factors & Posterior probability \\ \hline $1$ & $B,D,E$ & $0.38$ & & $null$ & $0.27$\\
$2$ & $B,D$ & $0.25$ && $B,D,E$ & $0.21$ \\
$3$ & $null$ & $0.11$ && $B,D$ & $0.20$\\
$4$ & $B$ & $0.11$ && $B$ & $0.10$\\
$5$ & $B,C,D,E$ & $0.05$ && $D$ & $0.04$\\ \hline \hline & \multicolumn{2}{c}{Conventional Approach} & & \multicolumn{2}{c}{Objective Approach} \\ \cline{2-3} \cline{5-6}
& Factor & Posterior probability & & Factor & Posterior probability \\ \cline{2-6}
& $A$ & $0.03$ && $A$ & $0.09$ \\
& $B$ & $0.82$ && $B$ & $0.62$ \\
& $C$ & $0.08$ & & $C$ & $0.08$\\
& $D$ & $0.74$ && $D$ & $0.53$ \\
& $E$ & $0.45$ & & $E$ & $0.27$ \\ \hline \end{tabular} \end{table}
\begin{figure}
\caption{\textit{Reactor experiment. Posterior probabilities of models and that factors are active (3FI)} }
\label{fig:Reactor.Post.Prob.Models.Factors.3FI}
\end{figure}
\begin{figure}
\caption{\textit{Reactor experiment. Posterior probabilities of models and that factors are active based on the combined screening and follow-up designs (3FI)} }
\label{fig:Reactor.Post.Prob.Combined.3FI}
\end{figure}
\begin{figure}
\caption{{\itshape Reactor experiment. Normal probability plot of the contrasts based on the complete set of 32 runs}}
\label{fig: Reactor.Normal.Prob.Contrasts}
\end{figure}
\end{document} | arXiv |
\begin{document}
\title{Distributions of Statistics over Pattern-Avoiding Permutations} \author{Michael Bukata \and Ryan Kulwicki \and Nicholas Lewandowski \and Lara Pudwell \and Jacob Roth \and Teresa Wheeland}
\date{Valparaiso University \\ \today}
\maketitle
\begin{abstract} We consider the distribution of ascents, descents, peaks, valleys, double ascents, and double descents over permutations avoiding a set of patterns. Many of these statistics have already been studied over sets of permutations avoiding a single pattern of length 3. However, the distribution of peaks over 321-avoiding permutations is new, and we relate it to statistics on Dyck paths. We also obtain new interpretations of a number of well-known combinatorial sequences by studying these statistics over permutations avoiding two patterns of length 3. \end{abstract}
\section{Introduction}
Let $\mathcal{S}_n$ denote the set of permutations of $\{1,2,\dots, n\}$ and let $\mathrm{red}(w_1\cdots w_m)$ be the word obtained by replacing the $i$th smallest digit(s) of $w$ with $i$. Given $\pi \in \mathcal{S}_n$ and $\rho \in \mathcal{S}_m$, we say that $\pi$ \emph{contains} $\rho$ as a pattern if there exist indices $1 \leq i_1 < i_2 < \cdots < i_m \leq n$ such that $\pi_{i_a} < \pi_{i_b}$ if and only if $\rho_a < \rho_b$ for all $1 \leq a < b \leq m$; that is, $\mathrm{red}(\pi_{i_1}\cdots \pi_{i_m})=\rho$. Otherwise $\pi$ \emph{avoids} $\rho$. For example, the permutation $18274635 \in \mathcal{S}_8$ contains the pattern $\rho=4312$ using $i_1=2$, $i_2=4$, $i_3=5$, and $i_4=8$ since the entries of $\pi_{i_1}\pi_{i_2}\pi_{i_3}\pi_{i_4}=8745$ are in the same relative order as 4312; i.e., $\mathrm{red}(8745)=4312$. Let $\mathcal{S}_n(\rho_1, \dots , \rho_p)$ be the set of permutations avoiding each of $\rho_1, \dots, \rho_p$; $\mathcal{S}_n(\rho_1, \dots , \rho_p)$ is called a \emph{pattern class} and the pattern(s) $\rho_1, \dots, \rho_p$ are called the \emph{basis} of the pattern class. Further, let $\mathrm{s}_n(\rho_1,\dots, \rho_p)=\left|\mathcal{S}_n(\rho_1, \dots, \rho_p)\right|$. It is well-known that $\mathrm{s}_n(\rho)=\frac{\binom{2n}{n}}{n+1}$ (\seqnum{A000108}) when $\rho \in \mathcal{S}_3$, and there are a variety of techniques for determining $\mathrm{s}_n(\rho_1, \dots, \rho_p)$, depending on the patterns to be avoided.
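For small $n$ these definitions are easy to check by brute force; the following sketch (in Python, purely illustrative, with function names of our choosing and no attempt at efficiency) implements reduction and pattern containment directly.
\begin{verbatim}
# Illustrative brute-force sketch of the definitions above.
from itertools import combinations

def red(w):
    # replace the i-th smallest value(s) of w by i
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(w)))}
    return tuple(ranks[x] for x in w)

def contains(pi, rho):
    m = len(rho)
    return any(red(sub) == tuple(rho) for sub in combinations(pi, m))

def avoids(pi, *patterns):
    return not any(contains(pi, rho) for rho in patterns)

# Example from the text: 18274635 contains 4312 via the subsequence 8745.
assert contains((1, 8, 2, 7, 4, 6, 3, 5), (4, 3, 1, 2))
\end{verbatim}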
Another well-known family of objects enumerated by the Catalan numbers (\seqnum{A000108}) is the set of \emph{Dyck paths} of semilength $n$. Here a Dyck path of semilength $n$ is a sequence of $n$ up-steps ($U=\langle 1,1\rangle$) and $n$ down-steps ($D=\langle 1,-1\rangle$) from $(0,0)$ to $(2n,0)$ that never falls below the $x$-axis. We let $\mathcal{D}_n$ denote the set of such paths. Further, we let $\mathcal{I}_n$ be the set of indecomposable Dyck paths of semilength $n$, where a path is indecomposable if it only touches the $x$-axis at $(0,0)$ and at $(2n,0)$. Because both $\mathcal{S}_n(\rho)$ and $\mathcal{D}_n$ have the same enumeration when $\rho \in \mathcal{S}_3$, bijections with Dyck paths are a powerful tool to better understand the structure of these pattern classes.
Some common permutations and constructions require additional notation. To this end, let $I_m = 1\cdots m$ be the increasing permutation of length $m$ and let $J_m = m(m-1)\cdots 1$ be the decreasing permutation of length $m$. Further, given permutations $\alpha \in \mathcal{S}_a$ and $\beta \in \mathcal{S}_b$, let $\alpha \oplus \beta \in \mathcal{S}_{a+b}$ denote the direct sum of $\alpha$ and $\beta$ and let $\alpha \ominus \beta\in \mathcal{S}_{a+b}$ denote the skew-sum of $\alpha$ and $\beta$, defined as follows: $$(\alpha \oplus \beta)(i) = \begin{cases} \alpha(i),& 1 \leq i \leq a;\\ a+\beta(i-a),& a+1 \leq i \leq a+b. \end{cases}$$ $$(\alpha \ominus \beta)(i) = \begin{cases} \alpha(i)+b,& 1 \leq i \leq a;\\ \beta(i-a),& a+1 \leq i \leq a+b. \end{cases}$$
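These constructions translate directly into code as well (again a purely illustrative sketch, not part of the paper):
\begin{verbatim}
# Illustrative sketch: I_m, J_m, direct sum and skew sum of permutations.
def incr(m): return tuple(range(1, m + 1))        # I_m = 1 2 ... m
def decr(m): return tuple(range(m, 0, -1))        # J_m = m (m-1) ... 1

def direct_sum(alpha, beta):                      # alpha (+) beta
    a = len(alpha)
    return alpha + tuple(x + a for x in beta)

def skew_sum(alpha, beta):                        # alpha (-) beta
    b = len(beta)
    return tuple(x + b for x in alpha) + beta
\end{verbatim}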
Another thread of research is to consider the distribution of permutation statistics over $\mathcal{S}_n$. Here, a \emph{permutation statistic} is a function $\mathrm{stat}: \mathcal{S}_n \to \mathbb{Z}^{+} \cup \{0\}$. Some common statistics include ascents ($\mathrm{asc}$), descents ($\mathrm{des}$), double ascents ($\mathrm{dasc}$), double descents ($\mathrm{ddes}$), peaks ($\mathrm{pk}$), and valleys ($\mathrm{vl}$), which are defined as follows:
$$\mathrm{asc}(\pi) = \left|\left\{i \middle| \pi_i< \pi_{i+1}\right\}\right|,$$
$$\mathrm{des}(\pi) = \left|\left\{i \middle| \pi_i> \pi_{i+1}\right\}\right|,$$
$$\mathrm{dasc}(\pi) = \left|\left\{i \middle| \pi_i< \pi_{i+1} \text{ and } \pi_{i+1}< \pi_{i+2}\right\}\right|,$$
$$\mathrm{ddes}(\pi) = \left|\left\{i \middle| \pi_i> \pi_{i+1} \text{ and } \pi_{i+1}> \pi_{i+2}\right\}\right|,$$
$$\mathrm{pk}(\pi) = \left|\left\{i \middle| \pi_i< \pi_{i+1} \text{ and } \pi_{i+1}> \pi_{i+2}\right\}\right|,$$
$$\mathrm{vl}(\pi) = \left|\left\{i \middle| \pi_i> \pi_{i+1} \text{ and } \pi_{i+1}< \pi_{i+2}\right\}\right|.$$
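These statistics translate directly into code; the sketch below (Python, purely illustrative and not part of the original argument) computes all six for a permutation given as a sequence.
\begin{verbatim}
# Illustrative sketch of the six statistics defined above.
def asc(p):  return sum(p[i] < p[i+1] for i in range(len(p) - 1))
def des(p):  return sum(p[i] > p[i+1] for i in range(len(p) - 1))
def dasc(p): return sum(p[i] < p[i+1] < p[i+2] for i in range(len(p) - 2))
def ddes(p): return sum(p[i] > p[i+1] > p[i+2] for i in range(len(p) - 2))
def pk(p):   return sum(p[i] < p[i+1] > p[i+2] for i in range(len(p) - 2))
def vl(p):   return sum(p[i] > p[i+1] < p[i+2] for i in range(len(p) - 2))

# Example: 18274635 has pk = 3 (peaks 8, 7, 6) and vl = 3 (valleys 2, 4, 3).
\end{verbatim}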
It is well-known that $\left|\left\{\pi \in \mathcal{S}_n \middle| \mathrm{asc}(\pi) = k\right\}\right|=\left|\left\{\pi \in \mathcal{S}_n \middle| \mathrm{des}(\pi) = k\right\}\right|$ is given by the Eulerian numbers (\seqnum{A008292}), while the distributions of $\mathrm{dasc}$, $\mathrm{ddes}$, $\mathrm{pk}$, and $\mathrm{vl}$ are newer to the literature or remain open.
Combining these two areas, we consider the distribution of permutation statistics over $\mathcal{S}_n(\rho_1, \dots, \rho_p)$. Let
$$\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1, \dots, \rho_p) = \left|\left\{\pi \in \mathcal{S}_n(\rho_1, \dots, \rho_p) \middle| \mathrm{stat}(\pi)=k\right\}\right|.$$ Further, for $\pi \in \mathcal{S}_n$, let $\pi^r = \pi_n \cdots \pi_1$ and $\pi^c = (n+1-\pi_1)\cdots (n+1-\pi_n)$ denote the reverse and complement of $\pi$ respectively. By symmetry, we observe the following:
\begin{align*} \mathrm{a}_{n,k}^{\mathrm{asc}}(\rho_1, \dots, \rho_p)&=\mathrm{a}_{n,k}^{\mathrm{des}}(\rho_1^r, \dots, \rho_p^r)\\ &=\mathrm{a}_{n,k}^{\mathrm{des}}(\rho_1^c, \dots, \rho_p^c)\\ &=\mathrm{a}_{n,k}^{\mathrm{asc}}(\rho_1^{rc}, \dots, \rho_p^{rc}), \end{align*}
\begin{align*} \mathrm{a}_{n,k}^{\mathrm{dasc}}(\rho_1, \dots, \rho_p)&=\mathrm{a}_{n,k}^{\mathrm{ddes}}(\rho_1^r, \dots, \rho_p^r)\\ &=\mathrm{a}_{n,k}^{\mathrm{ddes}}(\rho_1^c, \dots, \rho_p^c)\\ &=\mathrm{a}_{n,k}^{\mathrm{dasc}}(\rho_1^{rc}, \dots, \rho_p^{rc}), \end{align*}
\begin{align*} \mathrm{a}_{n,k}^{\mathrm{pk}}(\rho_1, \dots, \rho_p)&=\mathrm{a}_{n,k}^{\mathrm{pk}}(\rho_1^r, \dots, \rho_p^r)\\ &=\mathrm{a}_{n,k}^{\mathrm{vl}}(\rho_1^c, \dots, \rho_p^c)\\ &=\mathrm{a}_{n,k}^{\mathrm{vl}}(\rho_1^{rc}, \dots, \rho_p^{rc}). \end{align*}
In this paper, we consider $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1, \dots, \rho_p)$ where $p \in \left\{1, 2\right\}$ and where $\mathrm{stat} \in \left\{\mathrm{asc}, \mathrm{des}, \mathrm{dasc}, \mathrm{ddes}, \mathrm{pk}, \mathrm{vl}\right\}$. A summary of our results is given in Table \ref{T:allresults}. In Section \ref{S:history}, we detail known results for $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho)$ where $\rho \in \mathcal{S}_3$. While there are a number of previous results, $\mathrm{a}_{n,k}^{\mathrm{pk}}(321)$ is new, and we determine its distribution in Section \ref{S:peaks} via a bijection with Dyck paths. In Section \ref{S:patternsets} we consider $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1, \rho_2)$ for $\rho_1, \rho_2 \in \mathcal{S}_3$; while these enumerations yield a number of well-known combinatorial sequences, the particular interpretations in terms of permutation statistics are new.
\begin{table}[hbt] \begin{center} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|} \hline $B \backslash$ $\mathrm{st}$&$\mathrm{asc}$&$\mathrm{des}$&$\mathrm{pk}$&$\mathrm{vl}$&$\mathrm{dasc}$&$\mathrm{ddes}$\\ \hline 231&(known)&(known)&Thm. \ref{T:pk231}&Thms. \ref{T:biject312and321} and \ref{T:pk321}&(known)&(known)\\ \hline 321&(known)&(known)&Thm. \ref{T:pk321}&Thm. \ref{T:pk321}&(known)&(known)\\ \hline 213,312&Prop. \ref{P:213312asc}&Prop. \ref{P:213312asc}&Prop. \ref{P:213312dasc}&Prop. \ref{P:213312dasc}&Prop. \ref{P:213312pk}&Prop. \ref{P:213312vl}\\ \hline 132,213&Prop. \ref{P:132213asc}&Prop. \ref{P:132213asc}&Prop. \ref{P:132213dasc}&Prop. \ref{P:132213dasc}&Prop. \ref{pk132213}&Prop. \ref{pk132213}\\ \hline 213,231&Prop. \ref{P:132213asc}&Prop. \ref{P:132213asc}&Prop. \ref{P:132213dasc}&Prop. \ref{P:132213dasc}&Prop. \ref{pk132213}&Prop. \ref{pk132213}\\ \hline 123,132&Prop. \ref{asc123132}&Prop. \ref{des123132}&Prop. \ref{dasc123132}&Prop. \ref{P:123132ddes}&Prop. \ref{P:123132pk}&Prop. \ref{vl123132}\\ \hline 132,321&Prop. \ref{asc132321}&Prop. \ref{asc132321}&Prop. \ref{dasc132321}&Prop. \ref{ddes132321}&Prop. \ref{pk132321}&Prop. \ref{vl132321}\\ \hline \end{tabular}}
\section{History}\label{S:history}
The study of permutation statistics has a rich history, with over 300 possible statistics listed in the database FindStat \cite{FindStat} as of this writing. However, the distribution of statistics over pattern classes, rather than over all permutations, is newer. Robertson, Saracino, and Zeilberger \cite{RSZ03} and Mansour and Robertson \cite{MR03} studied the distribution of fixed points over pattern classes whose basis is a subset of $\mathcal{S}_3$. Elizalde \cite{E04} gave an alternate approach to the distribution of fixed points using bijections with Dyck paths and also determined the distribution of excedances over the same pattern classes.
Dokos, Dwyer, Johnson, Sagan, and Selsor \cite{DDJSS12} defined two pattern sets $\{\rho_1, \dots, \rho_p\}$ and $\{\rho_1^\prime, \dots, \rho_p^\prime\}$ to be $\mathrm{st}$-Wilf equivalent if $\mathrm{a}_{n,k}^{\mathrm{st}}(\rho_1, \dots, \rho_p)=\mathrm{a}_{n,k}^{\mathrm{st}}(\rho_1^\prime, \dots, \rho_p^\prime)$ for all $n$ and $k$ and determined all $\mathrm{st}$-Wilf equivalences for subsets of $\mathcal{S}_3$ when $\mathrm{st}$ is the number of inversions or the major index.
Fixed points and excedances are statistics involving a single digit of $\pi$ at a time, while inversions and major index involve multiple digits. The statistics we study in this paper may best be thought of as consecutive patterns in $\pi$. In particular, $\mathrm{asc}(\pi)$ is the number of consecutive 12 patterns in $\pi$, $\mathrm{des}(\pi)$ is the number of consecutive 21 patterns in $\pi$, $\mathrm{dasc}(\pi)$ is the number of consecutive 123 patterns in $\pi$ and $\mathrm{ddes}(\pi)$ is the number of consecutive 321 patterns in $\pi$. Meanwhile, $\mathrm{pk}(\pi)$ is the number of consecutive 132 patterns plus the number of consecutive 231 patterns in $\pi$ and $\mathrm{vl}(\pi)$ is the number of consecutive 213 patterns plus the number of consecutive 312 patterns in $\pi$. In the following subsections, we review the history of results involving these statistics over specific pattern classes.
\subsection{Ascents and Descents}
Studying ascents and descents over $\mathcal{S}_n(\rho)$ where $\rho \in \mathcal{S}_3$ yields one of exactly two sequences: \seqnum{A001263} (the Narayana numbers) or \seqnum{A091156}.
For $\rho \in \left\{132, 213, 231, 312\right\}$, $\mathrm{a}_{n,k}^{\mathrm{asc}}(\rho)= \mathrm{a}_{n,k}^{\mathrm{des}}(\rho) = \dfrac{\binom{n-1}{k}\binom{n}{k}}{k+1}$ (\seqnum{A001263}). This enumeration follows from a bijection between permutations in $\mathcal{S}_n(231)$ with $k$ ascents and Dyck paths of semilength $n$ with $k$ DU factors, which are known to be enumerated by \seqnum{A001263}. For more details, see Petersen \cite{P15}.
On the other hand, for $\rho \in \left\{123, 321\right\}$ $\mathrm{a}_{n,k}^{\mathrm{asc}}(\rho)= \mathrm{a}_{n,k}^{\mathrm{des}}(\rho)$ is given by \seqnum{A091156}. In 2010, Barnabei, Bonetti, and Silimbani \cite{BBS10} showed that $$G(q, z) = \sum_{n \geq 0} \sum_{k \geq 0} \mathrm{a}_{n,k}^{\mathrm{des}}(321) q^{k}z^n$$ satisfies $$z(1-z+qz)G^2 - G + 1 = 0.$$ Their work features a bijection between 321-avoiding permutations of length $n$ and Dyck paths of semilength $n$. Tracking descents in the permutations corresponds to tracking both DU and DDD factors in the corresponding Dyck path. In Section \ref{S:peaks}, we make use of the same bijection to study the distribution of peaks over 321-avoiding permutations. As a corollary, we obtain a simpler way of tracking descents in permutations via the corresponding Dyck paths.
\subsection{Peaks, Valleys, and More}
Table \ref{T:onepatstats} shows the distributions of $\mathrm{pk}$, $\mathrm{vl}$, $\mathrm{dasc}$, and $\mathrm{ddes}$ over $\mathcal{S}_n(\rho)$ for $\rho \in \mathcal{S}_3$. Notice that by reversal, understanding the distributions when $\rho \in \left\{231, 312, 321\right\}$ determines the distributions for the remaining patterns. The relationship between the first two rows of the table follows from the fact that $231^{rc}=312$.
\begin{table}[hbt] \begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline $\rho \backslash$ $\mathrm{st}$&$\mathrm{pk}$&$\mathrm{vl}$&$\mathrm{dasc}$&$\mathrm{ddes}$\\ \hline 231& \seqnum{A091894}& \seqnum{A236406} & \seqnum{A092107} & \seqnum{A092107}\\ \hline 312& \seqnum{A236406}& \seqnum{A091894} & \seqnum{A092107} & \seqnum{A092107}\\ \hline 321& \seqnum{A236406} & \seqnum{A236406} & new & (none)\\ \hline \end{tabular} \end{center} \caption{Distribution of statistics over $\mathcal{S}_n(\rho)$ for $\rho \in \mathcal{S}_3$} \label{T:onepatstats} \end{table}
In 2010, Barnabei, Bonetti, and Silimbani \cite{BBS10b} used bijections with Dyck paths to consider the joint distribution of various consecutive patterns with descents over $\mathcal{S}_n(312)$. In another recent paper, Pan, Qiu, and Remmel \cite{PQR18} also investigated the distribution of consecutive patterns of length 3 over $\mathcal{S}_n(132)$ and $\mathcal{S}_n(123)$. As seen above, their work directly addresses the distributions of $\mathrm{dasc}$ and $\mathrm{ddes}$. However, $\mathrm{pk}$ and $\mathrm{vl}$ involve combining the distributions of two of their statistics at a time. Since the results for $\mathrm{dasc}$ and $\mathrm{ddes}$ are already studied in \cite{BBS10b, PQR18}, we focus on $\mathrm{pk}$ and provide an alternate approach in Section \ref{S:peaks}. Once we have determined $a_{n,k}^{\mathrm{pk}}(\rho)$ for $\rho \in \{231, 312, 321\}$, by symmetry, we have determined the distributions of $\mathrm{pk}$ and $\mathrm{vl}$ over all $\mathcal{S}_n(\rho)$ for $\rho \in \mathcal{S}_3$. In Section \ref{S:patternsets} we extend this work to pattern classes that avoid two or more patterns.
\section{Peaks}\label{S:peaks}
We now wish to determine $a_{n,k}^{\mathrm{pk}}(\rho)$ for $\rho \in \{231, 312, 321\}$.
\begin{theorem}\label{T:pk231} For $n \geq 1$, $k \geq 0$, $a_{n,k}^{\mathrm{pk}}(231)=\dfrac{2^{n-2k-1}\binom{n-1}{2k}\binom{2k}{k}}{k+1}$. \end{theorem}
We prove Theorem \ref{T:pk231} via a bijection with Dyck paths. The bijection in this proof is given by Petersen \cite{P15} for the purpose of determining $\mathrm{a}_{n,k}^{\mathrm{des}}(\rho)$, and the enumeration in Theorem \ref{T:pk231} is given by Petersen in \seqnum{A091894} of the On-line Encyclopedia of Integer Sequences. We include the argument here for completeness.
\begin{proof} Define a bijection $\phi: \mathcal{S}_n(231) \to \mathcal{D}_n$ recursively as follows. The empty permutation maps to the empty path and $\phi(1)=UD$. Now, for $\pi \in \mathcal{S}_n(231)$ where $n \geq 2$, suppose that $\pi_i=n$ and write $\pi=\pi_1\cdots \pi_{i-1} n \pi_{i+1} \cdots \pi_n$. Let $\alpha=\mathrm{red}(\pi_1\cdots \pi_{i-1})$ and $\beta = \mathrm{red}(\pi_{i+1} \cdots \pi_n)$. Notice that $\alpha$ or $\beta$ could be empty. Now $\phi(\pi)=\phi(\alpha)U\phi(\beta)D$.
Notice that $\pi$ has a peak involving $n$ exactly when $\alpha$ and $\beta$ are both non-empty. Since $\phi(\alpha)$ ends in a D and $\phi(\beta)$ begins in a U, by construction, there is a peak in $\pi$ involving $n$ exactly when the corresponding $U$ in $\phi(\pi)$ is part of a DUU factor. Recursively, the number of peaks of $\pi$ corresponds to the number of DUU factors of $\phi(\pi)$, or, equivalently, to the number of DDU factors in the reversal of $\phi(\pi)$. The number of paths in $\mathcal{D}_n$ with $k$ DDU factors is given to be $\dfrac{2^{n-2k-1}\binom{n-1}{2k}\binom{2k}{k}}{k+1}$ in OEIS sequence \seqnum{A091894}. \end{proof}
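The recursion defining $\phi$ is easy to implement, and the peak/DUU correspondence can be confirmed by brute force for small $n$, as in the following purely illustrative sketch (the avoidance test is the naive one, and all function names are ours).
\begin{verbatim}
# Illustrative sketch: the bijection phi and the peak <-> DUU correspondence.
from itertools import combinations, permutations

def avoids231(p):
    # a 231 pattern is a triple (a, b, c) in position order with c < a < b
    return not any(c < a < b for a, b, c in combinations(p, 3))

def phi(p):
    if not p:
        return ""
    i = p.index(max(p))
    return phi(p[:i]) + "U" + phi(p[i+1:]) + "D"

def pk(p):
    return sum(p[i] < p[i+1] > p[i+2] for i in range(len(p) - 2))

for n in range(1, 8):
    for p in permutations(range(1, n + 1)):
        if avoids231(p):
            assert phi(p).count("DUU") == pk(p)
\end{verbatim}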
The fact that $a_{n,k}^{\mathrm{pk}}(312)=a_{n,k}^{\mathrm{pk}}(321)$ requires another well-known bijection.
\begin{theorem}\label{T:biject312and321} For all $n$ and $k$, $a_{n,k}^{\mathrm{pk}}(312)=a_{n,k}^{\mathrm{pk}}(321)$. \end{theorem}
The bijection below is a symmetry of a well-known bijection of Simion and Schmidt \cite{SS85} using left-to-right maxima. They used this bijection to show that $\mathrm{s}_n(312) = \mathrm{s}_n(321)$ for all $n$, while we use it to show the refinement that $a_{n,k}^{\mathrm{pk}}(312)=a_{n,k}^{\mathrm{pk}}(321)$. We say that $\pi_i$ is a \emph{left-to-right maximum} of $\pi$ if $\pi_i>\pi_j$ for $j<i$. For example, the left-to-right maxima of $32658741$ are 3, 6, and 8.
\begin{proof} We define a bijection $\zeta: \mathcal{S}_n(312) \to \mathcal{S}_n(321)$ that preserves left-to-right maxima.
Consider $\pi \in \mathcal{S}_n(312)$. Suppose that the left-to-right maxima of $\pi$ are $\ell_1, \dots, \ell_k$ and that they are located in positions $j_1, \dots, j_k$. We claim that $\pi$ is the unique 312-avoiding permutation with left-to-right maxima $\ell_1, \dots, \ell_k$ in positions $j_1, \dots, j_k$. In particular, we determine the other entries of $\pi$ from left to right by placing in position $i$ the largest unused digit that is smaller than the rightmost left-to-right maximum before position $i$.
Similarly, there is a unique 321-avoiding permutation with left-to-right maxima $\ell_1, \dots, \ell_k$ in positions $j_1, \dots, j_k$. In particular, the digits that are not left-to-right maxima must appear in increasing order. Let $\zeta(\pi)$ be this permutation.
Notice that any peak of $\pi$ must involve a left-to-right maximum as its middle entry, and similarly any peak of $\zeta(\pi)$ must involve a left-to-right maximum as its middle entry. Since $\zeta$ preserves left-to-right maxima both in value and in position, $\zeta$ preserves peaks. \end{proof}
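The map $\zeta$ is equally easy to implement; the following purely illustrative sketch (function names are ours) verifies for small $n$ that $\zeta$ maps $\mathcal{S}_n(312)$ into $\mathcal{S}_n(321)$ and preserves peaks.
\begin{verbatim}
# Illustrative sketch: the map zeta and a brute-force check for small n.
from itertools import combinations, permutations

def avoids(p, rho):
    def red(w):
        r = {v: i + 1 for i, v in enumerate(sorted(w))}
        return tuple(r[x] for x in w)
    return not any(red(s) == rho for s in combinations(p, len(rho)))

def zeta(p):
    maxima, m = {}, 0
    for i, x in enumerate(p):
        if x > m:
            maxima[i], m = x, x
    rest = iter(sorted(x for i, x in enumerate(p) if i not in maxima))
    return tuple(maxima[i] if i in maxima else next(rest) for i in range(len(p)))

def pk(p):
    return sum(p[i] < p[i+1] > p[i+2] for i in range(len(p) - 2))

for n in range(1, 8):
    for p in permutations(range(1, n + 1)):
        if avoids(p, (3, 1, 2)):
            q = zeta(p)
            assert avoids(q, (3, 2, 1)) and pk(q) == pk(p)
\end{verbatim}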
Finally, we determine $a_{n,k}^{\mathrm{pk}}(321)$, which is the central result of this paper. Previously, Baxter \cite{B14} computed data about $a_{n,k}^{\mathrm{pk}}(321)$ using an enumeration scheme algorithm; however our generating function is new and our bijective argument ties together a number of previously-known generating function results.
\begin{theorem} \label{T:pk321} $$\sum_{n \geq 0} \sum_{k \geq 0} a_{n,k}^{\mathrm{pk}}(321) q^kz^n=1+z\left(-\dfrac{-1+\sqrt{-4z^2q+4z^2-4z+1}}{2z(zq-z+1)}\right)^2.$$ \end{theorem}
We first describe a bijection $\psi: \mathcal{S}_n(321) \to \mathcal{D}_n$ that is due to Krattenthaler \cite{K01}. Consider $\pi \in \mathcal{S}_n(321)$ and plot the points $(i,\pi_i)$ for $1 \leq i \leq n$. Let $P=\left\{(p_1, \pi_{p_1}), \dots, (p_k, \pi_{p_k})\right\}$ be the set of points $(i, \pi_i)$ such that $\pi_i$ is not a left-to-right maximum of $\pi$ and $p_1<p_2< \cdots <p_k$. Then define a path of $E=\langle 1,0\rangle$ steps and $N=\langle 0,1 \rangle$ steps from $(1,0)$ to $(n+1,n)$ in the following way: use $p_1-1$ $E$ steps followed by $\pi_{p_1}$ $N$ steps to get from $(1,0)$ to $(p_1,\pi_{p_1})$. For $1 \leq i \leq k-1$, use $(p_{i+1}-p_i)$ $E$ steps followed by $(\pi_{p_{i+1}}-\pi_{p_i})$ $N$ steps to get from $(p_i,\pi_{p_i})$ to $(p_{i+1},\pi_{p_{i+1}})$. Finally, take $((n+1)-p_k)$ $E$ steps followed by $(n-\pi_{p_k})$ $N$ steps to get from $(p_k,\pi_{p_k})$ to $(n+1,n)$. Figure \ref{F:Krateg} shows this process for $\pi=617238459$. By construction, this path stays weakly below the line $y=x-1$, and we obtain the Dyck path $\psi(\pi)$ by replacing all $E$ steps with $U$ and all $N$ steps with $D$. Therefore, $\psi(617238459)=UDUUDUDUUDUDUUDDDD$.
\begin{figure}
\caption{Bijection $\psi$ applied to $\pi=617238459$}
\label{F:Krateg}
\end{figure}
We know that a 321-avoiding permutation can be partitioned into two increasing subsequences: namely, the left-to-right maxima, and the remaining digits. Necessarily, the middle digit of a peak in such a permutation must be a left-to-right maximum, and the final digit is not. After $\pi_1$, whenever we have a left-to-right maximum in $\pi$, we have a UU factor in $\psi(\pi)$. Whenever we have a non-left-to-right maximum in $\pi$, we have at least one D in $\psi(\pi)$. Therefore, a peak of $\pi \in \mathcal{S}_n(321)$ corresponds to a UUD factor in $\psi(\pi)$, with one exception. A UUD factor that is followed only by Ds indicates that $\pi$ ended with a left-to-right maximum. To this end, we introduce two statistics on Dyck paths. Let $\mathrm{st}(d)$ be the number of UUD factors in Dyck path $d$, and let $\mathrm{st^*}(d)$ be the number of UUD factors in Dyck path $d$ that appear before the last $U$. For example, $\mathrm{st}(UUUDDDUD) = \mathrm{st^*}(UUUDDDUD) = 1$, while $\mathrm{st}(UUUDDDUUDD)=2$ and $\mathrm{st^*}(UUUDDDUUDD)=1$. We have just seen that
$$\sum_{n \geq 0} \sum_{k \geq 0} a_{n,k}^{\mathrm{pk}}(321) q^kz^n=\sum_{n \geq 0} \sum_{d \in \mathcal{D}_n} q^{\mathrm{st^*}(d)}z^{\left|d\right|}.$$ It remains to study the distribution of $\mathrm{st^*}$ on Dyck paths of semilength $n$.
We define the following four generating functions, which are weight-enumerators on Dyck paths. Throughout, $\mathrm{st}(d)$ and $\mathrm{st^*}(d)$ are as defined above, $\mathcal{D}_n$ is the set of all Dyck paths of semilength $n$, and $\mathcal{I}_n$ is the set of indecomposable Dyck paths of semilength $n$.
\begin{center} \begin{tabular}{cc} $A:=\displaystyle{\sum_{n \geq 0} \sum_{d \in \mathcal{D}_n} q^{\mathrm{st}(d)}z^{n}},$&$B:=\displaystyle{\sum_{n \geq 0} \sum_{d \in \mathcal{I}_n} q^{\mathrm{st}(d)}z^{n}},$\\ $C:=\displaystyle{\sum_{n \geq 0} \sum_{d \in \mathcal{D}_n} q^{\mathrm{st^*}(d)}z^{n}},$&$D:=\displaystyle{\sum_{n \geq 0} \sum_{d \in \mathcal{I}_n} q^{\mathrm{st^*}(d)}z^{n}}.$ \end{tabular} \end{center}
Notice that our goal is to find $C(q,z)$. By construction we have $C = 1 + AD$ and $A = 1 + AB$. We prove Theorem \ref{T:pk321} by first determining $A$ and $D$.
\begin{lemma} \label{L:D} $$D(q,z)=\sum_{n \geq 0} \sum_{k \geq 0} a_{n,k}^{\mathrm{des}}(321) q^kz^{n+1}.$$ \end{lemma}
\begin{proof} Suppose $\pi \in \mathcal{S}_n(321)$. Any descent in $\pi$ consists of a left-to-right maximum followed by a non-left-to-right maximum. Using bijection $\psi$, defined above, $\pi$ has a left-to-right maximum at the beginning of $\pi$ and also whenever $\psi(\pi)$ has a UU factor. Similarly, $\pi$ has a non-left-to-right maximum whenever $\psi(\pi)$ has a UD factor, unless the D is at the end of $\psi(\pi)$. Together, we detect a descent in $\pi$ when $\psi(\pi)$ begins with a UD factor and whenever $\psi(\pi)$ has a UUD factor before the last $U$. To convert the first case into a UUD factor, let $\widehat{\psi}(\pi)$ be the Dyck path obtained by adding a U to the beginning and a D to the end of $\psi(\pi)$. By construction, $\widehat{\psi}(\pi)$ is an indecomposable Dyck path of semilength $n+1$. Now, each descent in $\pi$ corresponds to a UUD factor in $\widehat{\psi}(\pi)$ that appears before the final U, which proves the lemma. \end{proof}
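As a small example, consider $\pi = 231 \in \mathcal{S}_3(321)$, which has one descent. Its only non-left-to-right maximum is the digit $1$ in position $3$, so $\psi(231)=UUDUDD$ and $\widehat{\psi}(231)=UUUDUDDD$, an indecomposable Dyck path of semilength $4$. This path has exactly one UUD factor before its final U, matching $\mathrm{des}(231)=1$.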
Next, we consider $A(q,z)$.
\begin{lemma} \label{L:A} $$A(q,z)=\sum_{n \geq 0} \sum_{k \geq 0} a_{n,k}^{\mathrm{des}}(321) q^kz^{n}.$$ \end{lemma}
\begin{proof} By definition, $A(q,z)$ tracks all UUD factors across Dyck paths of semilength $n$.
We have seen in the proof of Lemma \ref{L:D} that there is at most one UUD factor in $\psi(\pi)$ that does not correspond to a descent of $\pi$, namely a UUD factor that is followed only by Ds. Similarly, there is at most one descent in $\pi$ that does not correspond to a UUD factor in $\psi(\pi)$, namely a descent at the beginning of $\pi$, which corresponds to $\psi(\pi)$ beginning with a UD factor. In other words, given $\pi \in \mathcal{S}_n(321)$ with $d=\psi(\pi)$, either $\mathrm{st}(d) = \mathrm{des}(\pi)$, $\mathrm{st}(d)+1 = \mathrm{des}(\pi)$, or $\mathrm{st}(d) = \mathrm{des}(\pi)+1$.
We prove the lemma by giving an involution $\iota$ on $\mathcal{D}_n$.
If $d=\psi(\pi)$ with $\mathrm{st}(d) = \mathrm{des}(\pi)$, then $\iota(d)=d$.
Now, consider $d=\psi(\pi)$ with $\mathrm{st}(d)=k$ and $\mathrm{des}(\pi)=k+1$. Since $\pi$ has one more descent than $\mathrm{st}(d)$, we know that $\psi(\pi)$ begins with UD and $d$ does not have a UUD factor at the end. In other words, $d=(UD)^id^{\prime}DUD^j$ for some positive $i$ and $j$ where $d^{\prime}$ is a sequence of $n-i-1$ Us and $n-i-j-1$ Ds that does not begin in $UD$. Let $\iota(d)=d^{\prime}DU(U^iD^i)D^j$. Now, by construction, $\iota(d)$ has $k+1$ UUD factors, since a new UUD factor was introduced at the end, but $\mathrm{des}(\psi^{-1}(\iota(d)))=k$ since there is no longer an initial UD in $\iota(d)$.
Finally, consider $d=\psi(\pi)$ with $\mathrm{st}(d)=k+1$ and $\mathrm{des}(\pi)=k$. Since $d$ has one more UUD factor than $\mathrm{des}(\pi)$, we know that $d$ ends with $DU^iD^j$ for some $j \geq i \geq 2$ and $d$ does not begin with UD. In other words, $d=d^{\prime}DU^iD^j$ where $d^{\prime}$ is a sequence of $n-i$ Us and $n-j-1$ Ds that does not begin in $UD$. Let $\iota(d)=(UD)^{i-1}d^{\prime}DUD^{j-i+1}$. Now, by construction, $\iota(d)$ has $k$ UUD factors, since a UUD factor was removed at the end, but $\mathrm{des}(\psi^{-1}(\iota(d)))=k+1$ since there is a new initial UD in $\iota(d)$.
By involution $\iota$, we see that UUD factors on Dyck paths are equidistributed with descents in 321-avoiding permutations. \end{proof}
An example of $\iota$ in action is shown in Figure \ref{F:involution}.
\begin{figure}
\caption{An example of $\iota(d)$}
\label{F:involution}
\end{figure}
As a consequence of Lemmas \ref{L:D} and \ref{L:A}, we see that $D=zA$. Therefore, $C(q,z)=1+AD = 1+zA^2$. Using Barnabei, Bonetti, and Silimbani's result for $A(q,z)$ in \cite{BBS10} yields Theorem \ref{T:pk321}.
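As a quick check, one can expand the generating function in Theorem \ref{T:pk321} and obtain
$$1+z+2z^2+(3+2q)z^3+(4+10q)z^4+\cdots,$$
which agrees with a direct count for small $n$; for instance, exactly $4$ of the $14$ permutations in $\mathcal{S}_4(321)$ have no peaks (namely $1234$, $2134$, $3124$, and $4123$) and the remaining $10$ have one peak.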
Two nice observations follow from this proof. First, Barnabei, Bonetti, and Silimbani determined $A(q,z)$ by counting the number of DU factors plus the number of DDU factors in the associated Dyck path. We have shown in Lemma \ref{L:A} that $A(q,z)$ can be determined by counting only the number of UUD factors in $\psi(\pi)$. Second, by Lemma \ref{L:A} and the fact that $A=1+AB$, we can determine $B(q,z)$. It turns out
$$B(q,z)=z(1-q)+\sum_{n \geq 0} \sum_{k \geq 0} a_{n,k}^{\mathrm{pk}}(231) q^{k+1}z^{n+1},$$ matching the enumeration in Theorem \ref{T:pk231}.
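For instance, the expansion of this expression begins $B(q,z)=z+qz^2+2qz^3+(4q+q^2)z^4+\cdots$, and the coefficient of $z^4$ agrees with the values $a_{3,0}^{\mathrm{pk}}(231)=4$ and $a_{3,1}^{\mathrm{pk}}(231)=1$ given by Theorem \ref{T:pk231}.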
Thus, whether we consider them over all Dyck paths or only over indecomposable Dyck paths, the distributions of both $\mathrm{st}$ and $\mathrm{st^*}$ coincide with the distributions of natural statistics on pattern-avoiding permutations.
\section{Avoiding Two Patterns}\label{S:patternsets}
We now consider $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1, \rho_2)$ where $\mathrm{stat} \in \left\{\mathrm{asc}, \mathrm{des}, \mathrm{dasc}, \mathrm{ddes}, \mathrm{pk}, \mathrm{vl}\right\}$ and $\rho_1,\rho_2 \in \mathcal{S}_3$. Using the symmetries of reverse and complement, there are 6 pairs of patterns to consider: $\{123,321\}$, $\{213, 312\}$, $\{132, 213\}$, $\{213, 231\}$, $\{123, 132\}$, and $\{132, 321\}$. Simion and Schmidt \cite{SS85} determined $\left|\mathcal{S}_n(\rho_1,\rho_2)\right|$ for each of these classes. We now use the permutation structures they determined to find $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1, \rho_2)$ for our desired statistics. We already know that $\left|\mathcal{S}_n(123,321)\right|=0$ for $n \geq 5$, so there are 5 non-trivial pairs of permutation patterns to consider. A summary of the results of this section is given in Tables \ref{T:pairs1}, \ref{T:pairs2}, and \ref{T:pairs3}. Just as many results for $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho)$ with $\rho \in \mathcal{S}_3$ follow from bijections with Dyck paths, many results in this section follow from bijections with binary sequences.
\begin{table} \begin{center}
\begin{tabular}{| c | c | c |}
\hline
Patterns \textbackslash Statistic & $\mathrm{asc}$ & $\mathrm{des}$\\ \hline
213,312 &
$\binom{n-1}{k}$ &
$\binom{n-1}{k}$\\
\hline
132,213 &
$\binom{n-1}{k}$ &
$ \binom{n-1}{k}$ \\ \hline
213,231 &
$\binom{n-1}{k}$ &
$\binom{n-1}{k}$ \\ \hline
123,132 &
$\binom{n}{2k}$ &
$\binom{n}{2(n-k-1)}$ \\ \hline
132,321 &
$\begin{aligned} &1, && k=n-1; \\ &\binom{n}{2}, && k=n-2. \end{aligned}$ &
$\begin{aligned} &1, && k=0; \\ &\binom{n}{2}, && k=1. \end{aligned}$ \\ \hline
\end{tabular} \end{center} \caption{Distribution of $\mathrm{asc}$ and $\mathrm{des}$ over pattern classes of the form $\mathcal{S}_n(\rho_1, \rho_2)$ with $\rho_1, \rho_2 \in \mathcal{S}_3$} \label{T:pairs1} \end{table}
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c |}
\hline
Patterns \textbackslash Statistic & $\mathrm{dasc}$ & $\mathrm{ddes}$ \\ \hline
213,312 &
$\begin{aligned} & n, && k=0; \\ &\binom{n-1}{k+1}, && k \geq 1. \end{aligned}$ &
$\begin{aligned} &n, && k=0; \\ &\binom{n-1}{k+1}, && k \geq 1. \end{aligned}$ \\
\hline
132,213 &
\seqnum{A076791} &
\seqnum{A076791} \\ \hline
213,231 &
\seqnum{A076791} &
\seqnum{A076791} \\ \hline
123,132 &
trivial &
$\binom{n-2}{k}+2\binom{n-3}{k}$\\ \hline
132,321 &
$\begin{aligned} & 1, && k = n-2; \\ & n, && k = n -3; \\ &\binom{n}{2}-n, && k = n -4. \end{aligned}$ &
trivial \\ \hline
\end{tabular}
\end{center}
\caption{Distribution of $\mathrm{dasc}$ and $\mathrm{ddes}$ over pattern classes of the form $\mathcal{S}_n(\rho_1, \rho_2)$ with $\rho_1, \rho_2 \in \mathcal{S}_3$} \label{T:pairs2} \end{table}
\begin{table} \begin{center}
\begin{tabular}{| c | c | c |}
\hline
Patterns \textbackslash Statistic & $\mathrm{pk}$ & $\mathrm{vl}$ \\ \hline
213,312 &
$\begin{aligned} & 2, && k=0; \\ & 2^{n-1}-2, && k=1. \end{aligned}$ & trivial \\
\hline
132,213 &
$\binom{n}{2k+1}$ &
$\binom{n}{2k+1}$ \\ \hline
213,231 &
$\binom{n}{2k+1}$ &
$\binom{n}{2k+1}$ \\ \hline
123,132 &
$\binom{n}{2k+1}$ &
$2\cdot\binom{n-1}{2k}$ \\ \hline
132,321 &
$\begin{aligned} & n, && k=0; \\ &\binom{n-1}{2}, && k=1. \end{aligned}$ &
$\begin{aligned}& 2, && k=0; \\ &\binom{n}{2}-1, && k=1. \end{aligned}$ \\ \hline
\end{tabular}
\end{center} \caption{Distribution of $\mathrm{pk}$ and $\mathrm{vl}$ over pattern classes of the form $\mathcal{S}_n(\rho_1, \rho_2)$ with $\rho_1, \rho_2 \in \mathcal{S}_3$} \label{T:pairs3} \end{table}
We consider each pattern pair in turn.
\subsection{Statistics on \texorpdfstring{$\mathcal{S}_n(213,312)$}{Sn(213,312)}}
We first describe the structure of a $\{213, 312\}$-avoiding permutation. Let $\pi \in \mathcal{S}_n(213,312)$. Suppose that $\pi_i=n$. Then $\pi_1 \cdots \pi_{i-1}$ must form an increasing subpermutation (otherwise $\pi$ has a 213 pattern), and $\pi_{i+1}\cdots \pi_n$ must form a decreasing subpermutation (otherwise $\pi$ has a 312 pattern). There are $\binom{n-1}{i-1}$ ways to choose the digits before $\pi_i=n$, so summing over all possible values for $i$, we have that $\left|\mathcal{S}_n(213,312)\right|=\sum_{i=1}^{n} \binom{n-1}{i-1}=2^{n-1}$. This structure helps prove the following propositions.
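For example, when $n=4$ the $2^{3}=8$ permutations in $\mathcal{S}_4(213,312)$ are $4321$; $1432$, $2431$, and $3421$; $1243$, $1342$, and $2341$; and $1234$, grouped according to whether $4$ appears in position $1$, $2$, $3$, or $4$.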
\begin{prop}\label{P:213312asc} $$\mathrm{a}_{n,k}^{\mathrm{asc}}(213,312)=\mathrm{a}_{n,k}^{\mathrm{des}}(213,312)=\binom{n-1}{k}.$$ \end{prop}
\begin{proof} By the structure above, $\pi \in \mathcal{S}_n(213,312)$ has $k$ ascents if and only if $\pi_{k+1}=n$. There are $\binom{n-1}{k}$ ways to determine the digits before $\pi_{k+1}$, which uniquely determines $\pi$.
Now, $\pi \in \mathcal{S}_n$ has $k$ descents if and only if $\pi$ has $n-k-1$ ascents. There are $\binom{n-1}{n-k-1}$ permutations $\pi \in \mathcal{S}_n(213,312)$ with $n-k-1$ ascents, so there are $\binom{n-1}{n-k-1} = \binom{n-1}{k}$ such permutations with $k$ descents. \end{proof}
Proposition \ref{P:213312asc} gives a new interpretation of Pascal's triangle (\seqnum{A007318}).
\begin{prop}\label{P:213312dasc} For $n \geq 1$, $$\mathrm{a}_{n,k}^{\mathrm{dasc}}(213,312)=\mathrm{a}_{n,k}^{\mathrm{ddes}}(213,312) = \begin{cases} n,&k=0;\\ \binom{n-1}{k+1},& k\geq 1. \end{cases}$$ \end{prop}
\begin{proof} Suppose $\pi \in \mathcal{S}_n(213,312)$ has no double ascents. Then either $\pi_1=n$ or $\pi_2=n$. In other words, the digit $\pi_1$ determines $\pi$, and there are $n$ choices of $\pi_1$, so we have the first case.
Otherwise, if $k \geq 1$, then $\pi \in \mathcal{S}_n(213,312)$ has $k$ double ascents if and only if $\pi_{k+2}=n$. There are $\binom{n-1}{k+1}$ ways to determine the digits before $\pi_{k+2}$, which uniquely determines $\pi$.
Since reversing $\pi$ is an involution on $\mathcal{S}_n(213,312)$ that sends double ascents to double descents and vice versa, we get the same enumeration for $\mathrm{a}_{n,k}^{\mathrm{ddes}}(213,312)$. \end{proof}
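For example, when $n=4$ the permutations $4321$, $1432$, $2431$, and $3421$ have no double ascents; the permutations $1243$, $1342$, and $2341$ have one double ascent; and $1234$ has two, matching the values $4$, $\binom{3}{2}=3$, and $\binom{3}{3}=1$ from the proposition.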
While the triangle in Proposition \ref{P:213312dasc} is straightforward to compute, it is new to OEIS and given in \seqnum{A299927}.
\begin{prop}\label{P:213312pk} $$\mathrm{a}_{n,k}^{\mathrm{pk}}(213,312) = \begin{cases} 2,&k=0;\\ 2^{n-1}-2,&k=1;\\ 0,&\text{otherwise}.\end{cases}$$ \end{prop}
\begin{proof} Consider $\pi \in \mathcal{S}_n(213,312)$. By the structure described above, $\pi$ has at most one peak, and if there is a peak, it must use $n$ as its middle digit. There are two ways to not have a peak; namely, the increasing permutation where $\pi_n=n$ and the decreasing permutation where $\pi_1=n$. All other $2^{n-1}-2$ permutation in $\mathcal{S}_n(213,312)$ have one peak. \end{proof}
\begin{prop}\label{P:213312vl} $$\mathrm{a}_{n,k}^{\mathrm{vl}}(213,312)=\begin{cases} 2^{n-1},&k=0;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} A valley is a consecutive occurrence of either a 213 pattern or a 312 pattern. By definition, every permutation in $\mathcal{S}_n(213,312)$ has 0 valleys. \end{proof}
\subsection{Statistics on \texorpdfstring{$\mathcal{S}_n(132,213)$}{Sn(132,213)} and \texorpdfstring{$\mathcal{S}_n(213,231)$}{Sn(213,231)}}
The pattern classes $\mathcal{S}_n(132,213)$ and $\mathcal{S}_n(213,231)$ provide the one non-trivial instance where $\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1,\rho_2)=\mathrm{a}_{n,k}^{\mathrm{stat}}(\rho_1^{\prime},\rho_2^{\prime})$ for two distinct pattern pairs and for every statistic considered in this paper.
We first describe the structure of a $\{132, 213\}$-avoiding permutation. Suppose $\pi \in \mathcal{S}_n(132,213)$ and suppose that $\pi_i=n$. Since $\pi$ avoids 213, all digits before $n$ must be in increasing order. Since $\pi$ avoids 132, all digits before $n$ are larger than all digits after $n$. These observations imply that if $\pi \in \mathcal{S}_n(132,213)$, then $\pi = I_{i_1} \ominus \cdots \ominus I_{i_m}$ for some positive integers $i_1, \dots, i_m$. In fact, there is a natural bijection $\phi_{132,213}$ between $\mathcal{S}_n(132,213)$ and binary sequences $s=s_1\cdots s_{n-1}$ of length $n-1$; namely, if $s=\phi_{132,213}(\pi)$ then $s_i=1$ when $\pi_i<\pi_{i+1}$ and $s_i=0$ when $\pi_i>\pi_{i+1}$. This bijection implies $\left|\mathcal{S}_n(132,213)\right| = 2^{n-1}$.
Next, we describe the structure of a $\{213, 231\}$-avoiding permutation. Suppose $\pi \in \mathcal{S}_n(213,231)$. Then, for all $i$, either $\pi_i =\min(\pi_i, \pi_{i+1}, \dots \pi_n)$ or $\pi_i =\max(\pi_i, \pi_{i+1}, \dots \pi_n)$. If not, then $\pi_i$ together with $\min(\pi_i, \pi_{i+1}, \dots \pi_n)$ and $\max(\pi_i, \pi_{i+1}, \dots \pi_n)$ form either a 213 pattern or a 231 pattern. Since there are two choices for each digit of $\pi$ before the last digit, $\left|\mathcal{S}_n(213,231)\right| = 2^{n-1}$. In fact, there is a natural bijection $\phi_{213,231}$ from $\mathcal{S}_n(213,231)$ to the set of binary sequences $s=s_1\cdots s_{n-1}$ of length $n-1$; namely, $s_i=0$ when $\pi_i =\max(\pi_i, \pi_{i+1}, \dots \pi_n)$ and $s_i=1$ when $\pi_i =\min(\pi_i, \pi_{i+1}, \dots \pi_n)$.
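To illustrate the two bijections, the layered permutation $45312 = I_2 \ominus I_1 \ominus I_2 \in \mathcal{S}_5(132,213)$ satisfies $\phi_{132,213}(45312)=1001$, recording its ascents and descents, while $51423 \in \mathcal{S}_5(213,231)$ satisfies $\phi_{213,231}(51423)=0101$, since its entries are alternately the maximum and the minimum of the remaining suffix.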
Both bijections $\phi_{132,213}$ and $\phi_{213,231}$ help prove the following propositions.
\begin{prop}\label{P:132213asc} $$\mathrm{a}_{n,k}^{\mathrm{asc}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{des}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{asc}}(213,231)=\mathrm{a}_{n,k}^{\mathrm{des}}(213,231)=\binom{n-1}{k}.$$ \end{prop}
\begin{proof} By construction $\pi \in \mathcal{S}_n(132,213)$ has an ascent at $i$ if and only if $s=\phi_{132,213}(\pi)$ has $s_i=1$. Therefore, $\mathrm{a}_{n,k}^{\mathrm{asc}}(132,213)$ is the number of binary sequences of length $n-1$ with exactly $k$ 1s, which is given by $\binom{n-1}{k}$. Also, $\mathrm{a}_{n,k}^{\mathrm{des}}(132,213)$ is the number of binary sequences of length $n-1$ with exactly $k$ 0s, which is given by $\binom{n-1}{k}$.
Similarly, $\pi \in \mathcal{S}_n(213, 231)$ has an ascent at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=1$ and $\pi$ has a descent at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=0$, so the same enumerations follow. \end{proof}
Proposition \ref{P:132213asc} gives a new interpretation of Pascal's triangle (\seqnum{A007318}).
\begin{prop} \label{P:132213dasc} $$\mathrm{a}_{n,k}^{\mathrm{dasc}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{dasc}}(213,231)=\mathrm{a}_{n,k}^{\mathrm{ddes}}(213,231)$$ and
$$\sum_{n \geq 0} \sum_{k \geq 0} \mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213)q^kz^n=\dfrac{1-qz}{1-z-z^2-qz+qz^2}.$$ \end{prop}
\begin{proof} By construction $\pi \in \mathcal{S}_n(132,213)$ has a double ascent at $i$ if and only if $s=\phi_{132,213}(\pi)$ has $s_i=s_{i+1}=1$ and $\pi$ has a double descent at $i$ if and only if $s=\phi_{132,213}(\pi)$ has $s_i=s_{i+1}=0$. Similarly, $\pi \in \mathcal{S}_n(213,231)$ has a double ascent at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=s_{i+1}=1$ and a double descent at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=s_{i+1}=0$. Therefore $\mathrm{a}_{n,k}^{\mathrm{dasc}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{dasc}}(213,231)=\mathrm{a}_{n,k}^{\mathrm{ddes}}(213,231)$.
While there is not a straightforward closed formula, the number of binary strings with $k$ 00 factors can be determined recursively.
Let $a(n,k)$ be the number of strings of length $n$ with $k$ 00 factors, and then let $a_i(n,k)$ be the number of strings of length $n$ with exactly $k$ 00 factors and that begin with exactly $i$ 0s. By definition $$\mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213) = a(n-1,k)=\sum_{i=0}^{n-1} a_i(n-1,k).$$
First, consider the case when $i=0$. We have $a_0(n,k)=a(n-1,k)$ since $i=0$ implies the string must start with 1. The remaining $n-1$ digits may be any string of length $n-1$ with $k$ 00 factors.
Now, for $i \geq 1$, we have $a_i(n,k)=a(n-1-i, k-(i-1))$. This is because the initial $i$ digits of our string are 0. These 0s account for $i-1$ 00 factors. The next digit is a 1. The remaining $n-1-i$ digits may be any binary string of length $n-1-i$ with $k-(i-1)$ 00 factors.
Together, we have:
\begin{align*} a(n-1,k)=\sum_{i=0}^{n-1} a_i(n-1,k) &= a(n-2,k) + \sum_{i=1}^{n-1} a(n-2-i, k-(i-1))\\ &= a(n-2,k) + \sum_{i=1}^{k+1} a(n-2-i, k-(i-1)). \end{align*}
Equivalently:
$$\mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213) = \mathrm{a}_{n-1,k}^{\mathrm{ddes}}(132,213)+\sum_{i=1}^{k+1} a(n-1-i, k-(i-1)).$$
This recurrence implies that $$\sum_{n \geq 0} \sum_{k \geq 0} \mathrm{a}_{n,k}^{\mathrm{ddes}}(132,213)q^kz^n=\dfrac{1-qz}{1-z-z^2-qz+qz^2}.$$ \end{proof}
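As a check, the expansion of this generating function begins $1+z+2z^2+(3+q)z^3+(5+2q+q^2)z^4+\cdots$; for instance, among the eight binary strings of length $3$, five have no 00 factor, two (namely $001$ and $100$) have one, and $000$ has two.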
The number of binary sequences with exactly $k$ 00 factors is given in OEIS entry \seqnum{A076791}, and Proposition \ref{P:132213dasc} gives a new permutation statistic interpretation of the sequence.
\begin{prop}\label{pk132213} $$\mathrm{a}_{n,k}^{\mathrm{pk}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{vl}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{pk}}(213,231)=\mathrm{a}_{n,k}^{\mathrm{vl}}(213,231)=\binom{n}{2k+1}.$$ \end{prop}
\begin{proof} By construction $\pi \in \mathcal{S}_n(132,213)$ has a peak at $i$ if and only if $s=\phi_{132,213}(\pi)$ has $s_i=1$ and $s_{i+1}=0$ and $\pi$ has a valley at $i$ if and only if $s=\phi_{132,213}(\pi)$ has $s_i=0$ and $s_{i+1}=1$. By symmetry, $\mathrm{a}_{n,k}^{\mathrm{pk}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{vl}}(132,213)$. Similarly, $\pi \in \mathcal{S}_n(213,231)$ has a peak at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=1$ and $s_{i+1}=0$ and a valley at $i$ if and only if $s=\phi_{213,231}(\pi)$ has $s_i=0$ and $s_{i+1}=1$. Therefore, $\mathrm{a}_{n,k}^{\mathrm{pk}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{vl}}(132,213)=\mathrm{a}_{n,k}^{\mathrm{pk}}(213,231)=\mathrm{a}_{n,k}^{\mathrm{vl}}(213,231)$.
Let $a(n,k)$ denote the number of binary sequences of length $n$ with $k$ 10 factors. We wish to determine $a(n-1,k)$.
Clearly $a(n,0)=n+1$, since a binary sequence with no 10 factors consists of $i$ 0s followed by $n-i$ 1s, and there are $n+1$ choices for the value of $i$. On the other hand, a sequence with $k$ 10 factors requires at least $2k$ digits, so if $n < 2k$, then $a(n,k)=0$. Similarly, $a(2k,k) = 1$ corresponds to the 1 way to have a binary sequence of length $2k$ with $k$ 10 factors, namely $1010\cdots 10$.
Now that we have determined the boundary conditions, suppose that $0 < k < \frac{n-1}{2}$. Now suppose $s$ is a binary sequence of length $n$ with $k$ 10 factors. We call a position $s_i$ a switch if $s_i \neq s_{i+1}$. In all, a sequence of length $n$ has $n-1$ positions where a switch could occur.
If $s$ starts with 1, the sequence switches from 1 to 0 $k$ times and from 0 to 1 either $k$ times or $k-1$ times, so there are $2k$ or $2k-1$ switches. In the first case, there are $\binom{n-1}{2k}$ ways to choose the locations of the switches and in the second case there are $\binom{n-1}{2k-1}$ ways to choose the locations of the switches for a total of $\binom{n-1}{2k}+\binom{n-1}{2k-1}=\binom{n}{2k}$ binary sequences of length $n$ with $k$ 10 factors that begin in 1.
If $s$ starts with 0, the sequence switches from 1 to 0 $k$ times and from 0 to 1 either $k$ times or $k+1$ times, so there are $2k$ or $2k+1$ switches. In the first case, there are $\binom{n-1}{2k}$ ways to choose the locations of the switches and in the second case there are $\binom{n-1}{2k+1}$ ways to choose the locations of the switches for a total of $\binom{n-1}{2k}+\binom{n-1}{2k+1}=\binom{n}{2k+1}$ binary sequences of length $n$ with $k$ 10 factors that begin in 0.
Combining these two cases, we have that $a(n,k)=\binom{n}{2k}+\binom{n}{2k+1} = \binom{n+1}{2k+1}$. Therefore, $$\mathrm{a}_{n,k}^{\mathrm{pk}}(132,213)=a(n-1,k)=\binom{n}{2k+1}.$$ \end{proof}
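For example, when $n=4$ the proposition predicts $\binom{4}{1}=4$ permutations with no peaks and $\binom{4}{3}=4$ with one peak; indeed, $1234$, $4123$, $4312$, and $4321$ have no peaks, while $2341$, $3412$, $3421$, and $4231$ each have one.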
Proposition \ref{pk132213} gives a new interpretation of OEIS sequence \seqnum{A034867}.
\subsection{Statistics on \texorpdfstring{$\mathcal{S}_n(123,132)$}{Sn(123,132)}}
We first describe the structure of a $\{123,132\}$-avoiding permutation. For $\pi \in \mathcal{S}_n(123,132)$, either $\pi_{n-1}=1$ or $\pi_n=1$; otherwise, $1$, $\pi_{n-1}$ and $\pi_n$ would form a forbidden pattern. There is a natural bijection $\phi_{123,132}$ between $\mathcal{S}_n(123,132)$ and binary sequences of length $n-1$ that is described recursively as follows: $\phi_{123,132}(1)=\epsilon$, the empty string. Then, for $\pi \in \mathcal{S}_n(123,132)$, $$\phi_{123,132}(\pi)=\begin{cases} \phi_{123,132}(\mathrm{red}(\pi_1\cdots \pi_{n-2}\pi_n))0,&\pi_{n-1}=1;\\ \phi_{123,132}(\mathrm{red}(\pi_1\cdots \pi_{n-1}))1,&\pi_n=1. \end{cases}$$ For example, $\phi_{123,132}(653241) = 11001$. We can also read a binary string $s$ of length $n-1$ from left to right to construct the corresponding permutation $\phi^{-1}_{123,132}(s)$. Namely, begin with $\pi^{(1)}=n$. Then for $1 \leq i \leq n-1$, if $s_i=0$, then $\pi^{(i+1)}=\pi^{(i)}_1\cdots \pi^{(i)}_{i-1}(n-i)\pi^{(i)}_{i}$, and if $s_i=1$, then $\pi^{(i+1)}=\pi^{(i)}(n-i)$. Finally, $\pi =\phi^{-1}_{123,132}(s) = \pi^{(n)}$.
Because of the bijection $\phi_{123,132}$ with binary strings, we have $\left|\mathcal{S}_n(123,132)\right| = 2^{n-1}$. We use this bijection to prove the following propositions.
\begin{prop}\label{asc123132} $$\mathrm{a}_{n,k}^{\mathrm{asc}}(123,132)=\binom{n}{2k}.$$ \end{prop}
\begin{proof} Suppose $\pi \in \mathcal{S}_n(123,132)$ has $k$ ascents and consider $s=\phi_{123,132}(\pi)$ and the sequence of partial permutations $\pi^{(1)}, \pi^{(2)}, \dots, \pi^{(n)}$ where $\pi^{(n)}=\pi$. By construction, $\mathrm{asc}(\pi^{(i+1)}) = \mathrm{asc}(\pi^{(i)})$ or $\mathrm{asc}(\pi^{(i+1)}) = \mathrm{asc}(\pi^{(i)})+1$ for all $i$, so we seek to characterize factors in $s$ that introduce a new ascent in $\pi^{(i+1)}$ compared to $\pi^{(i)}$.
By construction, $\mathrm{asc}(\pi^{(1)})=0$ and $\pi^{(2)}$ has an ascent if and only if $s_1=0$. For $i \geq 3$, $\mathrm{asc}(\pi^{(i)}) = \mathrm{asc}(\pi^{(i-1)})+1$ if and only if $s_{i-2}=1$ and $s_{i-1}=0$.
Therefore, in order to determine $\mathrm{a}_{n,k}^{\mathrm{asc}}(123,132)$ we wish to count binary strings of length $n-1$ that either begin with 0 and have $k-1$ 10 factors or that begin with 1 and have $k$ 10 factors. As before, we call a position $s_i$ a switch if $s_i \neq s_{i+1}$, and in all, a sequence of length $n-1$ has $n-2$ positions where a switch could occur.
In the first case, since $s_1=0$ and there are $k-1$ switches from 1 to 0, there must be either $k-1$ or $k$ switches from 0 to 1 for a total of either $2k-2$ or $2k-1$ switches. In all there are $\binom{n-2}{2k-2}+\binom{n-2}{2k-1} = \binom{n-1}{2k-1}$ such binary strings.
In the second case, since $s_1=1$ and there are $k$ switches from 1 to 0 there must be either $k-1$ or $k$ switches from 0 to 1 for a total of $2k-1$ or $2k$ switches. In all there are $\binom{n-2}{2k-1}+\binom{n-2}{2k} = \binom{n-1}{2k}$ such binary strings.
Combining both cases, there are $\binom{n-1}{2k-1}+\binom{n-1}{2k}=\binom{n}{2k}$ permutations of length $n$ that avoid 123 and 132 and have exactly $k$ ascents. \end{proof}
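For example, when $n=4$ the eight permutations in $\mathcal{S}_4(123,132)$ have ascent distribution $1$, $6$, $1$: only $4321$ has no ascents, only $3412$ has two ascents, and the remaining six permutations have exactly one, matching $\binom{4}{0}$, $\binom{4}{2}$, and $\binom{4}{4}$.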
Proposition \ref{asc123132} gives an alternate interpretation to OEIS \seqnum{A034839}.
\begin{prop} \label{des123132} $$\mathrm{a}_{n,k}^{\mathrm{des}}(123,132)=\binom{n}{2(n-k-1)}.$$ \end{prop}
\begin{proof} For any permutation $\pi \in \mathcal{S}_n$, $\mathrm{asc}(\pi)+\mathrm{des}(\pi)=n-1$. Therefore, a permutation of length $n$ with $k$ descents has $n-k-1$ ascents. By Proposition \ref{asc123132}, $\mathrm{a}_{n,k}^{\mathrm{des}}(123,132)=\binom{n}{2(n-k-1)}$. \end{proof}
Proposition \ref{des123132} gives an alternate interpretation to OEIS \seqnum{A109446}, which is a symmetry of OEIS \seqnum{A034839}.
\begin{prop} \label{dasc123132} For $n \geq 3$ $$\mathrm{a}_{n,k}^{\mathrm{dasc}}(123,132)=\begin{cases} 2^{n-1},&k=0;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} Since a consecutive 123 pattern is a double ascent, any permutation that avoids 123 has 0 double ascents. \end{proof}
\begin{prop} \label{P:123132ddes} For $n \geq 3$, $$\mathrm{a}_{n,k}^{\mathrm{ddes}}(123,132)=\binom{n-2}{k}+2\binom{n-3}{k}.$$ \end{prop}
\begin{proof} For $n \leq 2$, every permutation has 0 double descents, so we focus on the case where $n \geq 3$. Similarly, no permutation has more than $n-2$ double descents, so we focus on $k \leq n-2$.
Suppose $\pi \in \mathcal{S}_n(123,132)$ has $k$ double descents and consider $s=\phi_{123,132}(\pi)$ and the sequence of partial permutations $\pi^{(1)}, \pi^{(2)}, \dots, \pi^{(n)}$ where $\pi^{(n)}=\pi$. By construction, $\mathrm{ddes}(\pi^{(i+1)}) = \mathrm{ddes}(\pi^{(i)})$ or $\mathrm{ddes}(\pi^{(i+1)}) = \mathrm{ddes}(\pi^{(i)})+1$ for all $i$, so we seek to characterize factors in $s$ that introduce a new double descent in $\pi^{(i+1)}$ compared to $\pi^{(i)}$.
By construction, $\mathrm{ddes}(\pi^{(1)})=\mathrm{ddes}(\pi^{(2)})=0$ and $\pi^{(3)}$ has a double descent if and only if $s_1=s_2=1$. For $i \geq 4$, $\mathrm{ddes}(\pi^{(i)}) = \mathrm{ddes}(\pi^{(i-1)})+1$ if and only if $s_{i-2}=s_{i-1}=1$ or $s_{i-2}=s_{i-1}=0$.
Therefore we wish to count the number of binary strings of length $n-1$ that begin with 00 and have $k$ additional 00 or 11 factors plus the number of binary strings of length $n-1$ that do not begin with 00 and have $k$ total 00 or 11 factors.
Now, suppose $k=0$. By our characterization, there are exactly 3 such permutations. They correspond to $\phi^{-1}_{123,132}(0101\cdots)$, $\phi^{-1}_{123,132}(1010\cdots)$, and $\phi^{-1}_{123,132}(00101010\cdots)$. This matches our formula above since $\binom{n-2}{0}+2\binom{n-3}{0}=3$ for $n \geq 3$.
Notice that if $k =n-2$ there is $\binom{n-2}{n-2}+2\binom{n-3}{n-2}=1$ permutation with $n-2$ double descents, namely the strictly decreasing permutation, which corresponds to $\phi^{-1}_{123,132}(11\cdots 1)$.
Now, let $a_{n,k}$ be the number of binary strings of length $n$ with $k$ 00 or 11 factors (other than a possible initial 00). We wish to determine $a_{n-1,k}$. Suppose $n \geq 4$ and $s =s_1\cdots s_n$ is such a string. If $s_{n-1}=s_n$ then $s_1\cdots s_{n-1}$ is a string of length $n-1$ with $k-1$ 00 or 11 factors (other than a possible initial 00). If $s_{n-1}\neq s_n$, then $s_1\cdots s_{n-1}$ is a string of length $n-1$ with $k$ 00 or 11 factors (other than a possible initial 00). This implies that $a_{n,k}=a_{n-1,k-1}+a_{n-1,k}$.
We now proceed to show that $a_{n,k}=\binom{n-1}{k}+2\binom{n-2}{k}$ by induction on $n$. We have confirmed that this formula holds when $k=0$ and when $k$ is maximal. In particular, $a_{2,i}=\binom{1}{i}+2\binom{0}{i}$ for $0 \leq i \leq 1$, which establishes the base case $n=2$.
Now, suppose that $a_{n-1,i}=\binom{n-2}{i}+2\binom{n-3}{i}$ for $0 \leq i \leq n-2$. We know that $a_{n,k}=a_{n-1,k-1}+a_{n-1,k}$. Therefore: \begin{align*} a_{n,k}&=a_{n-1,k-1}+a_{n-1,k}\\ &=\binom{n-2}{k-1}+2\binom{n-3}{k-1}+\binom{n-2}{k}+2\binom{n-3}{k}\\ &=\left(\binom{n-2}{k-1}+\binom{n-2}{k}\right)+2\left(\binom{n-3}{k-1}+\binom{n-3}{k}\right)\\ &=\binom{n-1}{k}+2\binom{n-2}{k}, \end{align*} which is what we wanted to show. \end{proof}
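For example, when $n=4$ the formula gives $\binom{2}{k}+2\binom{1}{k}$, that is, $3$, $4$, $1$ for $k=0,1,2$; indeed, among the eight permutations in $\mathcal{S}_4(123,132)$, three ($3241$, $3412$, and $4231$) have no double descents, four ($3214$, $3421$, $4213$, and $4312$) have one, and $4321$ has two.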
Proposition \ref{P:123132ddes} gives a new interpretation of OEIS \seqnum{A093560}.
\begin{prop} \label{P:123132pk} $$\mathrm{a}_{n,k}^{\mathrm{pk}}(123,132)=\binom{n}{2k+1}.$$ \end{prop}
\begin{proof} Suppose $\pi \in \mathcal{S}_n(123,132)$ has $k$ peaks and consider $s=\phi_{123,132}(\pi)$ and the sequence of partial permutations $\pi^{(1)}, \pi^{(2)}, \dots, \pi^{(n)}$ where $\pi^{(n)}=\pi$. By construction, $\mathrm{pk}(\pi^{(i+1)}) = \mathrm{pk}(\pi^{(i)})$ or $\mathrm{pk}(\pi^{(i+1)}) = \mathrm{pk}(\pi^{(i)})+1$ for all $i$, so we seek to characterize factors in $s$ that introduce a new peak in $\pi^{(i+1)}$ compared to $\pi^{(i)}$.
By construction, $\mathrm{pk}(\pi^{(1)})=\mathrm{pk}(\pi^{(2)})=0$. Also, for $i \geq 3$, $\mathrm{pk}(\pi^{(i)}) = \mathrm{pk}(\pi^{(i-1)})+1$ if and only if $s_{i-2}=0$ and $s_{i-1}=1$. Therefore, we wish to count the number of binary strings $s$ of length $n-1$ with exactly $k$ 01 factors. We have two cases.
If $s$ begins with 0 then $s$ switches from 0 to 1 $k$ times and $s$ switches from 1 to 0 either $k-1$ times or $k$ times for a total of $2k-1$ or $2k$ switches. There are $\binom{n-2}{2k-1}+\binom{n-2}{2k} = \binom{n-1}{2k}$ sequences in this case.
If $s$ begins with a 1 then $s$ switches from 0 to 1 $k$ times and $s$ switches from 1 to 0 either $k$ times or $k+1$ times for a total of $2k$ or $2k+1$ switches. There are $\binom{n-2}{2k}+\binom{n-2}{2k+1} = \binom{n-1}{2k+1}$ sequences in this case.
Combining both cases yields a total of $\binom{n-1}{2k}+\binom{n-1}{2k+1}=\binom{n}{2k+1}$ binary sequences of length $n-1$ with $k$ 01 factors. By bijection $\phi_{123,132}$, this implies $\mathrm{a}_{n,k}^{\mathrm{pk}}(123,132)=\binom{n}{2k+1}$. \end{proof}
Proposition \ref{P:123132pk} gives a new interpretation of OEIS \seqnum{A034867}, which also appeared in Proposition \ref{pk132213}.
\begin{prop} \label{vl123132} $$\mathrm{a}_{n,k}^{\mathrm{vl}}(123,132)=2\binom{n-1}{2k}.$$ \end{prop}
\begin{proof} Suppose $\pi \in \mathcal{S}_n(123,132)$ has $k$ valleys and consider $s=\phi_{123,132}(\pi)$ and the sequence of partial permutations $\pi^{(1)}, \pi^{(2)}, \dots, \pi^{(n)}$ where $\pi^{(n)}=\pi$. By construction, $\mathrm{vl}(\pi^{(i+1)}) = \mathrm{vl}(\pi^{(i)})$ or $\mathrm{vl}(\pi^{(i+1)}) = \mathrm{vl}(\pi^{(i)})+1$ for all $i$, so we seek to characterize factors in $s$ that introduce a new valley in $\pi^{(i+1)}$ compared to $\pi^{(i)}$.
By construction, $\mathrm{vl}(\pi^{(1)})=\mathrm{vl}(\pi^{(2)})=0$, and $\mathrm{vl}(\pi^{(3)})=1$ if and only if $s_2=0$. For $i \geq 4$, $\mathrm{vl}(\pi^{(i)}) = \mathrm{vl}(\pi^{(i-1)})+1$ if and only if $s_{i-2}=1$ and $s_{i-1}=0$. In other words, $\mathrm{vl}(\pi)$ equals the number of 10 factors of $s$, plus one if $s$ begins with 00. Therefore, we wish to count the number of binary strings $s$ of length $n-1$ that either begin with 00 and have $k-1$ 10 factors or that do not begin with 00 and have $k$ 10 factors. We consider three cases: $s$ begins with 1, $s$ begins with 01, and $s$ begins with 00.
If $s_1=1$, there are $k$ switches from 1 to 0 and either $k-1$ or $k$ switches from 0 to 1 for a total of $2k-1$ or $2k$ switches. There are $\binom{n-2}{2k-1}+\binom{n-2}{2k} = \binom{n-1}{2k}$ such sequences of length $n-1$.
If $s_1=0$ and $s_2=1$ there are still $k$ switches from 1 to 0 and either $k-1$ or $k$ switches from 0 to 1 after $s_2$ for a total of $2k-1$ or $2k$ switches after $s_2$. There are $\binom{n-3}{2k-1}+\binom{n-3}{2k} = \binom{n-2}{2k}$ such sequences of length $n-1$.
If $s_1=0$ and $s_2=0$ there are $k-1$ switches from 1 to 0 and either $k-1$ or $k$ switches from 0 to 1 for a total of $2k-2$ or $2k-1$ switches. There are $\binom{n-3}{2k-2}+\binom{n-3}{2k-1} = \binom{n-2}{2k-1}$ such sequences of length $n-1$.
Combining these cases yields $$\binom{n-1}{2k} + \left(\binom{n-2}{2k}+\binom{n-2}{2k-1}\right) = \binom{n-1}{2k}+\binom{n-1}{2k} = 2\binom{n-1}{2k}$$ such sequences. \end{proof}
Proposition \ref{vl123132} gives a new interpretation of OEIS \seqnum{A119462}.
\subsection{Statistics on \texorpdfstring{$\mathcal{S}_n(132,321)$}{Sn(132,321)}}
We first describe the structure of a $\{132, 321\}$-avoiding permutation.
\begin{prop} \label{struct132321} If $\pi \in \mathcal{S}_n(132,321)$ then $\pi = \left(I_a \ominus I_b\right)\oplus I_{n-a-b}$ for some $1 \leq a \leq n$ and $0 \leq b \leq n-a$. \end{prop}
\begin{proof} We proceed by induction on $n$; that is, assume that every member of $\mathcal{S}_{n-1}(132,321)$ is of the form $\left(I_a \ominus I_b\right)\oplus I_{(n-1)-a-b}$ and prove this is the case for members of $\mathcal{S}_{n}(132,321)$.
For the base case, notice that $\mathcal{S}_1(132,321)=\left\{1\right\}$ and $1=I_1$, so the permutation 1 has the desired form where $a=1$ and $b=0$.
For the induction step, suppose $\pi \in \mathcal{S}_{n}(132,321)$. This implies that \\$\widehat{\pi}=\mathrm{red}(\pi_1\cdots \pi_{n-1}) \in \mathcal{S}_{n-1}(132,321)$. By the induction hypothesis, either $\widehat{\pi}=I_{n-1}$, $\widehat{\pi}=I_a \ominus I_{n-1-a}$ or $\widehat{\pi} = \left(I_a \ominus I_b\right)\oplus I_{(n-1)-a-b}$.
If $\widehat{\pi}=I_{n-1}$, there are two choices for $\pi_n$. Either $\pi_n=1$, which means $\pi = I_{n-1}\ominus I_1$ or $\pi_n=n$, which means $\pi=I_n$. Any other choice of $\pi_n$ produces a 132 pattern involving $\pi_n$.
If $\widehat{\pi}=I_a \ominus I_{n-1-a}$, there are two choices for $\pi_n$. Either $\pi_n = n-a$ which means $\pi = I_a \ominus I_{n-a}$ or $\pi_n=n$ which means $\pi=(I_a \ominus I_{n-1-a})\oplus I_1$. Any other choice of $\pi_n$ produces a 132 pattern or a 321 pattern involving $\pi_n$.
If $\widehat{\pi} = \left(I_a \ominus I_b\right)\oplus I_{(n-1)-a-b}$, then $\pi_n=n$, which means $\pi = \left(I_a \ominus I_b\right)\oplus I_{n-a-b}$. Any other choice for $\pi_n$ produces a 132 pattern or a 321 pattern. \end{proof}
As a consequence of Proposition \ref{struct132321}, we have the following Corollary. \begin{cor}
$\left|\mathcal{S}_n(132,321)\right| = \binom{n}{2}+1.$ \end{cor}
\begin{proof} The permutation $I_n$ is in $\mathcal{S}_n(132,321)$ for all $n$.
Otherwise, there are $n$ positions in $\pi$. We may choose one position to be the last digit of $I_a$ and a second position to be the last position of $I_b$. This choice of two positions uniquely determines the permutation. There are $\binom{n}{2}$ permutations in $\mathcal{S}_n(132,321) \setminus \left\{I_n\right\}$. \end{proof}
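For example, when $n=4$ the $\binom{4}{2}+1=7$ permutations in $\mathcal{S}_4(132,321)$ are $1234$, $2134$, $2314$, $2341$, $3124$, $3412$, and $4123$.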
The propositions below follow from the structure given in Proposition \ref{struct132321}.
\begin{prop}\label{asc132321} $$\mathrm{a}_{n,k}^{\mathrm{asc}}(132,321)=\begin{cases} 1,&k=n-1;\\ \binom{n}{2},&k=n-2;\\ 0,&\text{otherwise}. \end{cases}$$ and $$\mathrm{a}_{n,k}^{\mathrm{des}}(132,321)=\begin{cases} 1,&k=0;\\ \binom{n}{2},&k=1;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} We know $\pi = \left(I_a \ominus I_b\right)\oplus I_{n-a-b}$. If $a=n$, $\pi$ has $n-1$ ascents and 0 descents. Otherwise, the only descent in $\pi$ is at position $a$, so $\pi$ has $n-2$ ascents and 1 descent. \end{proof}
\begin{prop}\label{dasc132321} $$\mathrm{a}_{n,k}^{\mathrm{dasc}}(132,321)=\begin{cases} 1,&k=n-2;\\ n,&k=n-3;\\ \binom{n}{2}-n, &k=n-4;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} We know $\pi = \left(I_a \ominus I_b\right)\oplus I_{n-a-b}$.
If $a=n$, $\pi$ has $n-2$ double ascents. There is one such permutation.
If $a=n-1$ then $b=1$. This means $\pi$ has $n-3$ double ascents in $I_a$. There is one such permutation.
If $a=1$ then there are 0 double ascents in $I_a$ and there are $(n-1)-2$ double ascents in $I_b \oplus I_{n-a-b} = I_{n-1}$ for a total of $n-3$ double ascents. There are $n-1$ such permutations since there are $n-1$ choices for the value of $b$, i.e., $1 \leq b \leq n-1$.
So far we have accounted for 1 permutation with $n-2$ double ascents and $1+(n-1)=n$ permutations with $n-3$ double ascents.
If $2 \leq a \leq n-2$, then $\pi$ has $a-2$ double ascents in $I_a$ and $(n-a)-2$ double ascents in $I_b \oplus I_{n-a-b} = I_{n-a}$ for a total of $(a-2)+(n-a-2) = n-4$ double ascents. The remaining $\binom{n}{2} -n$ permutations fall into this category, which completes the proof. \end{proof}
\begin{prop}\label{ddes132321} For $n \geq 3$, $$\mathrm{a}_{n,k}^{\mathrm{ddes}}(132,321)=\begin{cases} \binom{n}{2}+1,&k=0;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} Since a consecutive 321 pattern is a double descent, any permutation that avoids 321 has 0 double descents. \end{proof}
\begin{prop}\label{pk132321} $$\mathrm{a}_{n,k}^{\mathrm{pk}}(132,321)=\begin{cases} n,&k=0;\\ \binom{n-1}{2},&k=1;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} There is at most one peak in a permutation of the form $\left(I_a \ominus I_b\right)\oplus I_{n-a-b}$. In particular, we get a peak exactly when $2 \leq a \leq n-1$.
There are $n-1$ permutations where $a=1$, and there is 1 permutation where $a=n$, so there are a total of $n$ permutations with 0 peaks.
The remaining $\binom{n}{2}+1 - n = \binom{n-1}{2}$ permutations have one peak. \end{proof}
\begin{prop}\label{vl132321} $$\mathrm{a}_{n,k}^{\mathrm{vl}}(132,321)=\begin{cases} 2,&k=0;\\ \binom{n}{2}-1,&k=1;\\ 0,&\text{otherwise}. \end{cases}$$ \end{prop}
\begin{proof} There is at most one valley in a permutation of the form $\left(I_a \ominus I_b\right)\oplus I_{n-a-b}$. In particular, we get a valley exactly when $2 \leq n-a \leq n-1$. The only permutations that violate this rule are when $a=n$ and when $a=n-1$. There is one permutation with $a=n$, i.e., $I_n$. There is one permutation with $a=n-1$, i.e., $I_{n-1}\ominus I_1$. All other $\binom{n}{2}-1$ permutations avoiding 132 and 321 have a valley involving the last digit in $I_a$ and the first two digits of $I_b \oplus I_{n-a-b}$. \end{proof}
\section{Acknowledgments} This work was partially supported by NSF grant DUE-1068346.
\end{document} | arXiv |
Nutrition Research and Practice
The Korean Nutrition Society (KNS)
Bimonthly
Agriculture, Fishery and Food > Food and Nutrition Science
Nutrition Research and Practice (NRP) is an official journal, jointly published by the Korean Nutrition Society and the Korean Society of Community Nutrition since 2007. The journal had been published quarterly at the initial stage and has been published bimonthly since 2010. NRP aims to stimulate research and practice across diverse areas of human nutrition. The Journal publishes peer-reviewed original manuscripts on nutrition biochemistry and metabolism, community nutrition, nutrition and disease management, nutritional epidemiology, nutrition education, institutional food service in the following categories: Original Research Articles, Notes, Communications, and Reviews. Reviews will be received by the invitation of the editors only. Statements made and opinions expressed in the manuscripts published in this Journal represent the views of authors and do not necessarily reflect the opinion of the Societies. This journal is indexed/tracked/covered by PubMed, PubMed Central, Science Citation Index Expanded (SCIE), SCOPUS, Chemical Abstracts Service (CAS), CAB International (CABI), KoreaMed, Synapse, KoMCI, CrossRef and Google Scholar.
http://www.nrpesubmit.org/
Antioxidant activity and anti-inflammatory activity of ethanol extract and fractions of Doenjang in LPS-stimulated RAW 264.7 macrophages
Kwak, Chung Shil;Son, Dahee;Chung, Young-Shin;Kwon, Young Hye 569
https://doi.org/10.4162/nrp.2015.9.6.569
BACKGROUND/OBJECTIVES: Fermentation can increase functional compounds in fermented soybean products, thereby improving antioxidant and/or anti-inflammatory activities. We investigated the changes in the contents of phenolics and isoflavones, antioxidant activity and anti-inflammatory activity of Doenjang during fermentation and aging. MATERIALS/METHODS: Doenjang was made by inoculating Aspergillus oryzae and Bacillus licheniformis in soybeans, fermenting and aging for 1, 3, 6, 8, and 12 months (D1, D3, D6, D8, and D12). Doenjang was extracted using ethanol, and sequentially fractioned by hexane, dichloromethane (DM), ethylacetate (EA), n-butanol, and water. The contents of total phenolics, flavonoids and isoflavones, 2,2-diphenyl-1 picryl hydrazyl (DPPH) radical scavenging activity, and ferric reducing antioxidant power (FRAP) were measured. Anti-inflammatory effects in terms of nitric oxide (NO), prostaglandin (PG) E2 and pro-inflammatory cytokine production and inducible nitric oxide synthase (iNOS) and cyclooxygenase (COX)-2 expressions were also measured using LPS-treated RAW 264.7 macrophages. RESULTS: Total phenolic and flavonoid contents showed a gradual increase during fermentation and 6 months of aging and were sustained thereafter. DPPH radical scavenging activity and FRAP were increased by fermentation. FRAP was further increased by aging, but DPPH radical scavenging activity was not. Total isoflavone and glycoside contents decreased during fermentation and the aging process, while aglycone content and its proportion increased up to 3 or 6 months of aging and then showed a slow decrease. DM and EA fractions of Doenjang showed much higher total phenolic and flavonoid contents, and DPPH radical scavenging activity than the others. At $100{\mu}g/mL$, DM and EA fractions of D12 showed strongly suppressed NO production to 55.6% and 52.5% of control, respectively, and PGE2 production to 25.0% and 28.3% of control with inhibition of iNOS or COX-2 protein expression in macrophages. CONCLUSIONS: Twelve month-aged Doenjang has potent antioxidant and anti-inflammatory activities with high levels of phenolics and isoflavone aglycones, and can be used as a beneficial food for human health.
Sonchus asper extract inhibits LPS-induced oxidative stress and pro-inflammatory cytokine production in RAW264.7 macrophages
Wang, Lan;Xu, Ming Lu;Liu, Jie;Wang, You;Hu, Jian He;Wang, Myeong-Hyeon 579
BACKGROUND/OBJECTIVES: Sonchus asper is used extensively as an herbal anti-inflammatory for treatment of bronchitis, asthma, wounds, burns, and cough; however, further investigation is needed in order to understand the underlying mechanism. To determine its mechanism of action, we examined the effects of an ethyl acetate fraction (EAF) of S. asper on nitric oxide (NO) production and prostaglandin-E2 levels in lipopolysaccharide (LPS)-stimulated RAW264.7 macrophages. MATERIALS/METHODS: An in vitro culture of RAW264.7 macrophages was treated with LPS to induce inflammation. RESULTS: Treatment with EAF resulted in significant suppression of oxidative stress in RAW264.7 macrophages as demonstrated by increased endogenous superoxide dismutase (SOD) activity and intracellular glutathione levels, decreased generation of reactive oxygen species and lipid peroxidation, and restoration of the mitochondrial membrane potential. To confirm its anti-inflammatory effects, analysis of expression of inducible NO synthase, cyclooxygenase-2, tumor necrosis factor-${\alpha}$, and the anti-inflammatory cytokines IL-$1{\beta}$ and IL-6 was performed using semi-quantitative RT-PCR. EAF treatment resulted in significantly reduced dose-dependent expression of all of these factors, and enhanced expression of the antioxidants MnSOD and heme oxygenase-1. In addition, HPLC fingerprint results suggest that rutin, caffeic acid, and quercetin may be the active ingredients in EAF. CONCLUSIONS: Taken together, findings of this study imply that the anti-inflammatory effect of EAF on LPS-stimulated RAW264.7 cells is mediated by suppression of oxidative stress.
Cytoprotective effect of rhamnetin on miconazole-induced H9c2 cell damage
Lee, Kang Pa;Kim, Jai-Eun;Park, Won-Hwan 586
BACKGROUND/OBJECTIVES: Reactive oxygen species (ROS) formation is closely related to miconazole-induced heart dysfunction. Although rhamnetin has antioxidant effects, it remained unknown whether it can protect against miconazole-induced cardiomyocyte apoptosis. Thus, we investigated the effects of rhamnetin on miconazole-stimulated H9c2 cell apoptosis. MATERIALS/METHODS: Cell morphology was observed by inverted microscope and cell viability was determined using a WelCount$^{TM}$ cell proliferation assay kit. Miconazole-induced ROS production was evaluated by fluorescence-activated cell sorting with 6-carboxy-2',7'-dichlorofluoroscein diacetate ($H_2DCF$-DA) stain. Immunoblot analysis was used to determine apurinic/apyrimidinic endonuclease 1 (APE/Ref-1) and cleaved cysteine-aspartic protease (caspase) 3 expression. NADPH oxidase levels were measured using real-time polymerase chain reaction. RESULTS: Miconazole (3 and $10{\mu}M$) induced abnormal morphological changes and cell death in H9c2 cells. Rhamnetin enhanced the viability of miconazole ($3{\mu}M$)-treated cells in a dose-dependent manner. Rhamnetin (1 and $3{\mu}M$) treatment downregulated cleaved caspase 3 and upregulated APE/Ref-1 expression in miconazole-stimulated cells. Additionally, rhamnetin significantly reduced ROS generation. CONCLUSIONS: Our data suggest that rhamnetin may have cytoprotective effects in miconazole-stimulated H9c2 cardiomyocytes via ROS inhibition. This effect most likely occurs through the upregulation of APE/Ref-1 and attenuation of hydrogen peroxide levels.
The micosporine-like amino acids-rich aqueous methanol extract of laver (Porphyra yezoensis) inhibits adipogenesis and induces apoptosis in 3T3-L1 adipocytes
Kim, Hyunhee;Lee, Yunjung;Han, Taejun;Choi, Eun-Mi 592
BACKGROUND/OBJECTIVES: Increased mass of adipose tissue in obese persons is caused by excessive adipogenesis, which is elaborately controlled by an array of transcription factors. Inhibition of adipogenesis by diverse plant-derived substances has been explored. The aim of the current study was to examine the effects of the aqueous methanol extract of laver (Porphyra yezoensis) on adipogenesis and apoptosis in 3T3-L1 adipocytes and to investigate the mechanism underlying the effect of the laver extract. MATERIALS/METHODS: 3T3-L1 cells were treated with various concentrations of laver extract in differentiation medium. Lipid accumulation, expression of adipogenic proteins, including CCAAT enhancer-binding protein ${\alpha}$, peroxisome proliferator-activated receptor ${\gamma}$, fatty acid binding protein 4, and fatty acid synthase, cell viability, apoptosis, and the total content and the ratio of reduced to oxidized forms of glutathione (GSH/GSSG) were analyzed. RESULTS: Treatment with laver extract resulted in a significant decrease in lipid accumulation in 3T3-L1 adipocytes, which showed correlation with a reduction in expression of adipogenic proteins. Treatment with laver extract also resulted in a decrease in the viability of preadipocytes and an increase in the apoptosis of mature adipocytes. Treatment with laver extract led to exacerbated depletion of cellular glutathione and abolished the transient increase in GSH/GSSG ratio during adipogenesis in 3T3-L1 adipocytes. CONCLUSION: Results of our study demonstrated that treatment with the laver extract caused inhibition of adipogenesis, a decrease in proliferation of preadipocytes, and an increase in the apoptosis of mature adipocytes. It appears that these effects were caused by increasing oxidative stress, as demonstrated by the depletion and oxidation of the cellular glutathione pool in the extract-treated adipocytes. Our results suggest that a prooxidant role of laver extract is associated with its antiadipogenic and proapoptotic effects.
Bioconversion of Citrus unshiu peel extracts with cytolase suppresses adipogenic activity in 3T3-L1 cells
Lim, Heejin;Yeo, Eunju;Song, Eunju;Chang, Yun-Hee;Han, Bok-Kyung;Choi, Hyuk-Joon;Hwang, Jinah 599
BACKGROUND/OBJECTIVES: Citrus flavonoids have a variety of physiological properties such as anti-oxidant, anti-inflammation, anti-cancer, and anti-obesity. We investigated whether bioconversion of Citrus unshiu with cytolase (CU-C) ameliorates the anti-adipogenic effects by modulation of adipocyte differentiation and lipid metabolism in 3T3-L1 cells. MATERIALS/METHODS: Glycoside forms of Citrus unshiu (CU) were converted into aglycoside forms with cytolase treatment. Cell viability of CU and CU-C was measured at various concentrations in 3T3L-1 cells. The anti-adipogenic and lipolytic effects were examined using Oil red O staining and free glycerol assay, respectively. We performed real time-polymerase chain reaction and western immunoblotting assay to detect mRNA and protein expression of adipogenic transcription factors, respectively. RESULTS: Treatment with cytolase decreased flavanone rutinoside forms (narirutin and hesperidin) and instead, increased flavanone aglycoside forms (naringenin and hesperetin). During adipocyte differentiation, 3T3-L1 cells were treated with CU or CU-C at a dose of 0.5 mg/ml. Adipocyte differentiation was inhibited in CU-C group, but not in CU group. CU-C markedly suppressed the insulin-induced protein expression of CCAAT/enhancer-binding protein ${\alpha}$ ($C/EBP{\alpha}$) and peroxisome proliferator-activated receptor gamma ($PPAR{\gamma}$) as well as the mRNA levels of $CEBP{\alpha}$, $PPAR{\gamma}$, and sterol regulatory element binding protein 1c (SREBP1c). Both CU and CU-C groups significantly increased the adipolytic activity with the higher release of free glycerol than those of control group in differentiated 3T3-L1 adipocytes. CU-C is particularly superior in suppression of adipogenesis, whereas CU-C has similar effect to CU on stimulation of lipolysis. CONCLUSIONS: These results suggest that bioconversion of Citrus unshiu peel extracts with cytolase enhances aglycoside flavonoids and improves the anti-adipogenic metabolism via both inhibition of key adipogenic transcription factors and induction of adipolytic activity.
Antiobesity effects of the water-soluble fraction of the ethanol extract of Smilax china L. leaf in 3T3-L1 adipocytes
Kang, Yun Hwan;Kim, Kyoung Kon;Kim, Dae Jung;Choe, Myeon 606
BACKGROUND/OBJECTIVES: Several medicinal properties of Smilax china L. have been studied including antioxidant, anti-inflammatory, and anti-cancer effects. However, the antiobesity activity and mechanism by which the water-soluble fraction of this plant mediates its effects are not clear. In the present study, we investigated the lipolytic actions of the water-soluble fraction of Smilax china L. leaf ethanol extract (wsSCLE) in 3T3-L1 adipocytes. MATERIALS/METHODS: The wsSCLE was identified by measuring the total polyphenol and flavonoid content. The wsSCLE was evaluated for its effects on cell viability, lipid accumulation, glycerol, and cyclic adenosine monophosphate (cAMP) contents. In addition, western blot analysis was used to evaluate the effects on protein kinase A (PKA), PKA substrates (PKAs), and hormone-sensitive lipase (HSL). For the lipid accumulation assay, 3T3-L1 adipocytes were treated with different doses of wsSCLE for 9 days starting 2 days post-confluence. In other cell experiments, mature 3T3-L1 adipocytes were treated for 24 h with wsSCLE. RESULTS: Results showed that treatment with wsSCLE at 0.05, 0.1, and 0.25 mg/mL had no effect on cell morphology and viability. Without evidence of toxicity, wsSCLE treatment decreased lipid accumulation compared with the untreated adipocyte controls as shown by the lower absorbance of Oil Red O stain. The wsSCLE significantly induced glycerol release and cAMP production in mature 3T3-L1 cells. Furthermore, protein levels of phosphorylated PKA, PKAs, and HSL significantly increased following wsSCLE treatment. CONCLUSION: These results demonstrate that the potential antiobesity activity of wsSCLE is at least in part due to the stimulation of cAMP-PKA-HSL signaling. In addition, the wsSCLE-stimulated lipolysis induced by the signaling is mediated via activation of the ${\beta}$-adrenergic receptor.
Effects of developmental iron deficiency and post-weaning iron repletion on the levels of iron transporter proteins in rats
Oh, Sugyoung;Shin, Pill-kyung;Chung, Jayong 613
BACKGROUND/OBJECTIVES: Iron deficiency in early life is associated with developmental problems, which may persist until later in life. The question of whether iron repletion after developmental iron deficiency could restore iron homeostasis is not well characterized. In the present study, we investigated the changes of iron transporters after iron depletion during the gestational-neonatal period and iron repletion during the post-weaning period. MATERIALS/METHODS: Pregnant rats were provided iron-deficient (< 6 ppm Fe) or control (36 ppm Fe) diets from gestational day 2. At weaning, pups from iron-deficient dams were fed either iron-deficient (ID group) or control (IDR group) diets for 4 week. Pups from control dams were continued to be fed with the control diet throughout the study period (CON). RESULTS: Compared to the CON, ID rats had significantly lower hemoglobin and hematocrits in the blood and significantly lower tissue iron in the liver and spleen. Hepatic hepcidin and BMP6 mRNA levels were also strongly down-regulated in the ID group. Developmental iron deficiency significantly increased iron transporters divalent metal transporter 1 (DMT1) and ferroportin (FPN) in the duodenum, but decreased DMT1 in the liver. Dietary iron repletion restored the levels of hemoglobin and hematocrit to a normal range, but the tissue iron levels and hepatic hepcidin mRNA levels were significantly lower than those in the CON group. Both FPN and DMT1 protein levels in the liver and in the duodenum were not different between the IDR and the CON. By contrast, DMT1 in the spleen was significantly lower in the IDR, compared to the CON. The splenic FPN was also decreased in the IDR more than in the CON, although the difference did not reach statistical significance. CONCLUSIONS: Our findings demonstrate that iron transporter proteins in the duodenum, liver and spleen are differentially regulated during developmental iron deficiency. Also, post-weaning iron repletion efficiently restores iron transporters in the duodenum and the liver but not in the spleen, which suggests that early-life iron deficiency may cause long term abnormalities in iron recycling from the spleen.
Effects of natural raw meal (NRM) on high-fat diet and dextran sulfate sodium (DSS)-induced ulcerative colitis in C57BL/6J mice
Shin, Sung-Ho;Song, Jia-Le;Park, Myoung-Gyu;Park, Mi-Hyun;Hwang, Sung-Joo;Park, Kun-Young 619
BACKGROUND/OBJECTIVES: Colitis is a serious health problem, and chronic obesity is associated with the progression of colitis. The aim of this study was to determine the effects of natural raw meal (NRM) on high-fat diet (HFD, 45%) and dextran sulfate sodium (DSS, 2% w/v)-induced colitis in C57BL/6J mice. MATERIALS/METHODS: Body weight, colon length, and colon weight-to-length ratio, were measured directly. Serum levels of obesity-related biomarkers, triglyceride (TG), total cholesterol (TC), low density lipoprotein (LDL), high density lipoprotein (HDL), insulin, leptin, and adiponectin were determined using commercial kits. Serum levels of pro-inflammatory cytokines including tumor necrosis factor-α (TNF-α), interleukin (IL)-1β, and IL-6 were detected using a commercial ELISA kit. Histological study was performed using a hematoxylin and eosin (H&E) staining assay. Colonic mRNA expressions of TNF-α, IL-1β, IL-6, inducible nitric oxide synthase (iNOS), and cyclooxygenase-2 (COX-2) were determined by RT-PCR assay. RESULTS: Body weight and obesity-related biomarkers (TG, TC, LDL, HDL, insulin, leptin, and adiponectin) were regulated and obesity was prevented in NRM treated mice. NRM significantly suppressed colon shortening and reduced colon weight-to-length ratio in HFD+DSS induced colitis in C57BL/6J mice (P < 0.05). Histological observations suggested that NRM reduced edema, mucosal damage, and the loss of crypts induced by HFD and DSS. In addition, NRM decreased the serum levels of pro-inflammatory cytokines, TNF-α, IL-1β, and IL-6 and inhibited the mRNA expressions of these cytokines, and iNOS and COX-2 in colon mucosa (P < 0.05). CONCLUSION: The results suggest that NRM has an anti-inflammatory effect against HFD and DSS-induced colitis in mice, and that these effects are due to the amelioration of HFD and/or DSS-induced inflammatory reactions.
Estrogen deprivation and excess energy supply accelerate 7,12-dimethylbenz(a)anthracene-induced mammary tumor growth in C3H/HeN mice
Kim, Jin;Lee, Yoon Hee;Yoon Park, Jung Han;Sung, Mi-Kyung 628
BACKGROUND/OBJECTIVES: Obesity is a risk factor of breast cancer in postmenopausal women. Estrogen deprivation has been suggested to cause alteration of lipid metabolism thereby creating a cellular microenvironment favoring tumor growth. The aim of this study is to investigate the effects of estrogen depletion in combination with excess energy supply on breast tumor development. MATERIALS/METHODS: Ovariectomized (OVX) or sham-operated C3H/HeN mice at 4 wks were provided with either a normal diet or a high-fat diet (HD) for 16 weeks. Breast tumors were induced by administration of 7,12-dimethylbenz(a)anthracene once a week for six consecutive weeks. RESULTS: Study results showed higher serum concentrations of free fatty acids and insulin in the OVX+HD group compared to other groups. The average tumor volume was significantly larger in OVX+HD animals than in other groups. Expressions of mammary tumor insulin receptor and mammalian target of rapamycin proteins as well as the ratio of pAKT/AKT were significantly increased, while pAMPK/AMPK was decreased in OVX+HD animals compared to the sham-operated groups. Higher relative expression of liver fatty acid synthase mRNA was observed in OVX+HD mice compared with other groups. CONCLUSIONS: These results suggest that excess energy supply affects the accelerated mammary tumor growth in estrogen deprived mice.
Evaluation of the efficacy of nutritional screening tools to predict malnutrition in the elderly at a geriatric care hospital
Baek, Myoung-Ha;Heo, Young-Ran 637
BACKGROUND/OBJECTIVES: Malnutrition in the elderly is a serious problem, prevalent in both hospitals and care homes. Due to the absence of a gold standard for malnutrition, herein we evaluate the efficacy of five nutritional screening tools developed or used for the elderly. SUBJECTS/METHODS: Selected medical records of 141 elderly patients (86 men and 55 women, aged 73.5 ± 5.2 years) hospitalized at a geriatric care hospital were analyzed. Nutritional screening was performed using the following tools: Mini Nutrition Assessment (MNA), Mini Nutrition Assessment-Short Form (MNA-SF), Geriatric Nutritional Risk Index (GNRI), Malnutrition Universal Screening Tool (MUST) and Nutritional Risk Screening 2002 (NRS 2002). A combined index for malnutrition was also calculated as a reference tool. Each patient evaluated as malnourished to any degree or at risk of malnutrition according to at least four out of five of the aforementioned tools was categorized as malnourished in the combined index classification. RESULTS: According to the combined index, 44.0% of the patients were at risk of malnutrition to some degree, while the nutritional risk and/or malnutrition varied greatly depending on the tool applied, ranging from 36.2% (MUST) to 72.3% (MNA-SF). MUST showed good validity (sensitivity 80.6%, specificity 98.7%) and almost perfect agreement (k = 0.81) with the combined index. In contrast, MNA-SF showed poor validity (sensitivity 100%, specificity 49.4%) and only moderate agreement (k = 0.46) with the combined index. CONCLUSIONS: MNA-SF was found to overestimate the nutritional risk in the elderly. MUST appeared to be the most valid and useful screening tool to predict malnutrition in the elderly at a geriatric care hospital.
Consumer attitudes, barriers, and meal satisfaction associated with sodium-reduced meal intake at worksite cafeterias
Lee, Jounghee;Park, Sohyun 644
BACKGROUND/OBJECTIVES: Targeting consumers who consume lunches at their worksite cafeterias would be a valuable approach to reduce sodium intake in South Korea. To assess the relationships between socio-demographic factors, consumer satisfaction, attitudes, barriers and the frequency of sodium-reduced meal intake. SUBJECTS/METHODS: We implemented a cross-sectional research, analyzing data from 738 consumers aged 18 years or older (327 males and 411 females) at 17 worksite cafeterias in South Korea. We used the ordinary least squares regression analysis to determine the factors related to overall satisfaction with sodium-reduced meal. General linear models with LSD tests were employed to examine the variables that differed by the frequency of sodium-reduced meal intake. RESULTS: Most subjects always or usually consumed the sodium-reduced meal (49%), followed by sometimes (34%) and rarely or never (18%). Diverse menus, taste and belief in the helpfulness of the sodium-reduced meal significantly increased overall satisfaction with the sodium-reduced diet (P < 0.05). We found importance of needs in the following order: 1) 'menu diversity' (4.01 points), 2) 'active promotion' (3.97 points), 3) 'display of nutrition labels in a visible location' (3.96 points), 4) 'improvement of taste' (3.88 points), and 5) 'education of sodium-reduction self-care behaviors' (3.82 points). CONCLUSION: Dietitians could lead consumers to choose sodium-reduced meals by improving their taste and providing diverse menus for the sodium-reduced meals at worksite cafeterias.
Dietary intake of fats and fatty acids in the Korean population: Korea National Health and Nutrition Examination Survey, 2013
Baek, Yeji;Hwang, Ji-Yun;Kim, Kirang;Moon, Hyun-Kyung;Kweon, Sanghui;Yang, Jieun;Oh, Kyungwon;Shim, Jae Eun 650
BACKGROUND/OBJECTIVES: The aim of this study was to estimate average total fat and fatty acid intakes as well as identify major food sources using data from the Korea National Health and Nutrition Examination Survey (KNHANES) VI-1 (2013). SUBJECTS/METHODS: Total fat and fatty acid intakes were estimated using 24-hour dietary recall data on 7,048 participants aged ≥ 3 years from the KNHANES VI-1 (2013). Data included total fat, saturated fatty acid (SFA), monounsaturated fatty acid (MUFA), polyunsaturated fatty acid (PUFA), n-3 fatty acid (n-3 FA), and n-6 fatty acid (n-6 FA) levels. Population means and standard errors of the mean were weighted in order to produce national estimates and separated based on sex, age, income, as well as residential region. Major food sources of fat, SFA, MUFA, PUFA, n-3 FA, and n-6 FA were identified based on mean consumption amounts of fat and fatty acids in each food. RESULTS: The mean intake of total fat was 48.0 g while mean intakes of SFA, MUFA, PUFA, n-3 FA, and n-6 FA were 14.4 g, 15.3 g, 11.6 g, 1.6 g, and 10.1 g, respectively. Intakes of MUFA and SFA were each higher than that of PUFA in all age groups. Pork was the major source of total fat, SFA, and MUFA, and soybean oil was the major source of PUFA. Milk and pork were major sources of SFA in subjects aged 3-11 years and ≥ 12 years, respectively. Perilla seed oil and soybean oil were main sources of n-3 FA in subjects aged ≥ 50 years and aged < 50 years, respectively. CONCLUSIONS: Estimation of mean fatty acid intakes of this study using nationally represented samples of the Korean population could be useful for developing and evaluating national nutritional policies.
Trends in adherence to dietary recommendations among Korean type 2 diabetes mellitus patients
Park, Kyong 658
BACKGROUND/OBJECTIVES: The current study examined trends in adherence to dietary recommendations and compared the levels of adherence between diagnosed and undiagnosed subjects with type 2 diabetes mellitus (T2DM) in Korea over the past 14 years. SUBJECTS/METHODS: Data were collected from the 1998-2012 Korea National Health and Nutrition Examination Surveys (KNHANES). Diagnosed diabetes was defined as giving a positive response to questions about awareness of the disease, a physician's diagnosis of diabetes, or medical treatment for diabetes, whereas undiagnosed diabetes was defined as having a fasting glucose level ≥ 126 mg/dl. Assessment of adherence level was based on 6 components of dietary guidelines, considering meal patterns and intake levels of calories, carbohydrates, vegetable/seaweed, sodium, and alcohol. The participants received 1 point if they met the criteria for each of the 6 components, and the total possible score ranged from 0 to 6 points. Multivariate generalized linear regression was performed, taking into account the complex survey design. RESULTS: Among all diabetic patients aged 30 years or older, the proportion of diagnosed diabetes increased dramatically, from 40.9% in 1998 to 75.9% in 2012 (P for trend < 0.001). The overall adherence levels to dietary recommendations were low and did not significantly differ between diagnosed and undiagnosed subjects with T2DM for all survey years. Several improvements were observed, including increased adherence to maintaining sufficient vegetable/seaweed consumption (increased from 0.12 to 0.16 points) and limiting sodium intake (increased from 0.12-0.13 points to 0.19-0.24 points; P for trend < 0.001), while adherence to maintaining moderate alcohol consumption decreased. CONCLUSIONS: Analysis of data collected by the KNHANES indicates that Korean T2DM patients have poor adherence to dietary recommendations and maintenance of a healthy lifestyle, regardless of disease awareness. This finding suggests that development of practical, evidence-based guidelines is necessary and that provision and expansion of educational programs for T2DM patients is critical after diagnosis.
The effect of providing nutritional information about fast-food restaurant menus on parents' meal choices for their children
Ahn, Jae-Young;Park, Hae-Ryun;Lee, Kiwon;Kwon, Sooyoun;Kim, Soyeong;Yang, Jihye;Song, Kyung-Hee;Lee, Youngmi 667
BACKGROUND/OBJECTIVES: To encourage healthier food choices for children in fast-food restaurants, many initiatives have been proposed. This study aimed to examine the effect of disclosing nutritional information on parents' meal choices for their children at fast-food restaurants in South Korea. SUBJECTS/METHODS: An online experimental survey using a menu board was conducted with 242 parents of children aged 2-12 years who dined with them at fast-food restaurants at least once a month. Participants were classified into two groups: the low-calorie group (n = 41) who chose at least one of the lowest calorie meals in each menu category, and the high-calorie group (n = 201) who did not. The attributes including perceived empowerment, use of provided nutritional information, and perceived difficulties were compared between the two groups. RESULTS: The low-calorie group perceived significantly higher empowerment with the nutritional information provided than did the high-calorie group (P = 0.020). Additionally, the low-calorie group was more interested in nutrition labeling (P < 0.001) and considered the nutritional value of menus when selecting restaurants for their children more than did the high-calorie group (P = 0.017). The low-calorie group used the nutritional information provided when choosing meals for their children significantly more than did the high-calorie group (P < 0.001), but the high-calorie group had greater difficulty using the nutritional information provided (P = 0.012). CONCLUSIONS: The results suggest that improving the empowerment of parents using nutritional information could be a strategy for promoting healthier parental food choices for their children at fast-food restaurants.
Attenuating effect of Lactobacillus brevis G101 on the MSG symptom complex in a double-blind, placebo-controlled study
Kim, Dong-Hyun;Choi, Yeji;Park, Sun-Sung;Kim, Se-Young;Han, Myung Joo 673
BACKGROUND/OBJECTIVES: Lactobacillus brevis G101 suppresses the absorption of monosodium glutamate (MSG) from the intestine into the blood in mice. Therefore, the attenuating effect of orally administered G101 on monosodium glutamate (MSG) symptom complex was investigated in humans. MATERIALS/METHODS: Capsules (300 mg) containing Lactobacillus brevis G101 (1 × 10^10 CFU/individual) or maltodextrin (placebo) were orally administered in 30 respondents with self-recognized monosodium glutamate (MSG) symptom complex for 5 days and the rice with black soybean sauce containing 6 g MSG (RBSM) was ingested 30 min after the final administration. Thereafter, the MSG symptom complex (rated on a 5-point scale: 1, none; 5, strong) was investigated in a double blind placebo controlled study. The intensity of the MSG symptom complex was significantly reduced in respondents of the G101 intake group (2.87 ± 0.73) compared to that in those treated with the placebo (3.63 ± 1.03) (P = 0.0016). Respondents in the placebo group exhibited more of the various major conditions of the MSG symptom complex than in the G101 intake group. Although there was no significant difference in the appearance time of the MSG symptom complex between subjects orally administered G101 and those administered the placebo, its disappearance in < 3 h was observed in 69.9% of subjects in the G101 treatment group and in 38.0% of subjects in the placebo group (P = 0.0841). CONCLUSIONS: Oral administration of Lactobacillus brevis G101 may be able to reduce the intensity of the MSG symptom complex.
\begin{document}
\begin{abstract} We give an explicit characterization of all principally polarized abelian varieties $(A,\Theta)$ such that there is a finite subgroup $G\subseteq\mathrm{Aut}(A,\Theta)$ such that the quotient variety $A/G$ is smooth. We also give a complete classification of smooth quotients of Jacobians of curves.\\
\noindent\textbf{MSC codes:} primary 14L30, 14K10; secondary 14H37, 14H40. \end{abstract}
\title{Smooth quotients of principally polarized abelian varieties}
\section{Introduction}
Let $(A,\Theta)$ be a complex principally polarized abelian variety and denote by $\aut(A,\Theta)$ the group of automorphisms of $A$ that fix the origin as well as the numerical class of $\Theta$. The purpose of this article is to give an explicit character\-ization of all principally polarized abelian varieties admitting a smooth quotient, that is, varieties $(A,\Theta)$ such that there exists a non-trivial subgroup $G\subseteq\aut(A,\Theta)$ such that $A/G$ is a smooth variety. We recall the well-known fact that $\aut(A,\Theta)$ is a finite group (see \cite[Corollary 5.1.9]{BL}) and so we are only dealing with quotients by finite groups.
This article can be seen as a continuation of the article \cite{ALA} by the same authors where abelian varieties that have a finite group of automorphisms with smooth quotient are characterized. The addition of a polarization to the study of smooth quotients throws a non-trivial extra ingredient into the mix and will allow us to study the problem in a moduli-theoretic context.
Our first main result (Theorem \ref{main thm}) states the following:
\begin{theorem} Let $(A,\Theta)$ be a principally polarized abelian variety. Then there exists a non-trivial subgroup $G\subseteq\aut(A,\Theta)$ such that $A/G$ is smooth if and only if $(A,\Theta)$ is the product of principally polarized elliptic curves and a principally polarized abelian variety that comes from what we call the \textit{standard construction}. \end{theorem}
The basic blocks of our standard construction (see Section \ref{standard} for details) are self-products of an elliptic curve with a natural action of the symmetric group $S_{g+1}$ that comes from its standard representation, and are equipped with a non-principal $S_{g+1}$-invariant polarization $\Xi_g$. The standard construction takes a product of such varieties, as well as an auxiliary polarized abelian variety, and yields a principally polarized abelian variety that is isogenous to the whole product and admits a smooth quotient.
We are able to give a moduli-theoretic interpretation of the situation as well (see Section \ref{sec moduli}). For $g_1,\ldots,g_s,n\in\mathbb{Z}_{\geq0}$ we define a morphism \[\Phi_{g_1,\ldots,g_s,n}:\left(\prod_{i=1}^s\cal X(g_i+1)\right)\times\mathcal{A}_n^D\to\mathcal{A}_g,\] where $g=n+z+\sum_{i=1}^sg_i$, $z$ is the number of $g_i$'s equal to 0, $\mathcal{X}(m)$ is the modular curve $\mathbb{H}/\Gamma(m)$ and $\mathcal{A}_n^D$ is the moduli space of polarized abelian varieties of dimension $n$ and type $D$ along with some extra data (where $D$ depends on the $g_i$), whose image consists of principally polarized abelian varieties admitting a smooth quotient. Theorem \ref{moduli} rephrases our main theorem by stating that a principally polarized abelian variety admits a smooth quotient if and only if it lies in the image of one of these morphisms. When $g_i>0$ for all $i$, the image of $\Phi_{g_1,\ldots,g_s,n}$ corresponds to varieties that come from the standard construction described above, and we prove (Proposition \ref{prop standard irred}) that for a very general element in the image of this morphism, its theta divisor is irreducible.
Since Jacobians are a special case of irreducible principally polarized abelian varieties, it is then natural to ask what Jacobians have smooth quotients. With a little more work we are able to classify all smooth quotients of Jacobians by groups of automorphisms that come from automorphisms of the curve in question. The classification we obtain for Jacobians is the following:
\begin{theorem}\label{thm jacob intro} Let $X$ be a smooth projective curve of genus $g$ and let $G$ be a (non-trivial) group of automorphisms of $X$. Then $J_X/G$ is smooth if and only if one of the following holds: \begin{enumerate} \item $g\leq 1$; \item $g=2$, $G\cong\mathbb{Z}/2{\mathbb{Z}}$ and $X\to X/G$ ramifies at two points. \item $g=3$, $G\cong\mathbb{Z}/2{\mathbb{Z}}$ and $X\to X/G$ is \'etale. \end{enumerate} \end{theorem}
Throughout this paper, a \textit{polarization} will be used interchangeably as an ample divisor, an ample line bundle, or an ample numerical class. This should not produce confusion. In the case of a principal polarization, we will say that it is \textit{irreducible} if any effective divisor inducing the polarization is irreducible. We will denote numerical equivalence by $\equiv$.
\section{Preliminary results: smooth quotients of abelian varieties}\label{sec prelim}
The following results are consequences of the main results of \cite{ALA} and we will be using them in what follows.
\begin{theorem}\label{thm classification} Let $A$ be an abelian variety of dimension $g$, and let $G$ be a (non-trivial) finite group of automorphisms of $A$ that fix the origin. Then the following conditions are equivalent: \begin{itemize} \item[(1)] $A/G$ is smooth and the analytic representation of $G$ is irreducible. \item[(2)] $A/G$ is smooth of Picard number 1. \item[(3)] $A/G\cong\mathbb{P}^g$. \item[(4)] There exists an elliptic curve $E$ such that $A\cong E^g$ (as a variety, not including the polarization) and $(A,G)$ satisfies exactly one of the following: \begin{enumerate}[label=(\alph*)] \item $G\cong C^g\rtimes S_g$ where $C$ is a (cyclic) subgroup of automorphisms of $E$ of order $\geq 2$ that fix the origin; here the action of $C^g$ is coordinatewise and $S_g$ permutes the coordinates.\label{ex1} \item $G\cong S_{g+1}$ and acts on \[A\cong\{(x_1,\ldots,x_{g+1})\in E^{g+1}:x_1+\cdots+x_{g+1}=0\},\] by permutations.\label{ex2} \item $g=2$, $E={\mathbb{C}}/{\mathbb{Z}}[i]$ and $G$ is the order 16 subgroup of $\mathrm{GL}_2({\mathbb{Z}}[i])$ generated by: \[\left\{\begin{pmatrix} -1 & 1+i \\ 0 & 1\end{pmatrix}\right.,\, \begin{pmatrix} -i & i-1 \\ 0 & i\end{pmatrix},\, \left.\begin{pmatrix} -1 & 0 \\ i-1 & 1\end{pmatrix} \right\},\] acting on $A$ in the obvious way.\label{ex3} \end{enumerate} \end{itemize} \end{theorem}
To see the proof of this theorem see \cite[Theorem 1.1]{ALA} and \cite[Theorem 1.1]{ALAQ}. We will refer to the three cases appearing in point (4) of this theorem as Example \ref{ex1}, Example \ref{ex2} and Example \ref{ex3} respectively.
The next two results are proved in \cite[Theorem 1.3]{ALA} and \cite[Prop. 2.9]{ALA}.
\begin{theorem}\label{thm red irred} Let $A$ be an abelian variety of dimension $g$, and let $G$ be a (non-trivial) finite group of automorphisms of $A$ that fix the origin. Assume that $A/G$ is smooth and $\dim(A^G)=0$. Then $G=\prod_{i=1}^rG_i$, $A=\prod_{i=1}^rA_i$, $G_i$ acts trivially on $A_j$ for $i\neq j$ and irreducibly on $A_i$ and $A_i/G_i$ is smooth for all $1\leq i\leq r$. In particular, \[A/G\cong A_1/G_1\times\cdots\times A_r/G_r.\] \end{theorem}
\begin{proposition}\label{prop And smooth} Let $A$ be an abelian variety of dimension $g$, and let $G$ be a (non-trivial) finite group of automorphisms of $A$ that fix the origin. Let $A_0$ be the connected component of $A^G$ containing 0 and let $P_G$ be its complementary abelian subvariety with respect to a $G$-invariant polarization. Then there exists a fibration $A/G\to A_0/(A_0\cap P_G)$ with fibers isomorphic to $P_G/G$. Moreover, $A/G$ is smooth if and only if $P_G /G$ is smooth. \end{proposition}
\section{Smooth quotients of principally polarized abelian varieties}
In this section, we give a full classification of smooth quotients of principally polarized abelian varieties.
We start with the case where $\dim(A^G)=0$, which is a direct application of Theorems \ref{thm classification} and \ref{thm red irred}. Here we only get direct products of copies of Example \ref{ex1}. Next, we present our standard construction, which uses copies of Example \ref{ex2} in order to obtain new smooth quotients of principally polarized abelian varieties. Finally, we show that in general, every principally polarized abelian variety admitting a smooth quotient is obtained as a direct product of these two cases. We conclude with a moduli-theoretic version of these results.
\subsection{The case $\dim(A^G)=0$}\label{sec dim A^G 0} Let $E$ be an elliptic curve, and on $E^g$ consider the natural principal polarization \begin{equation}\label{eqn Theta0} \Theta_{g}:=[0]\boxtimes\cdots\boxtimes[0], \end{equation} where $[0]$ denotes the divisor on $E$ consisting of the origin. Note that in particular $\Theta_1=[0]$.
\begin{proposition}\label{thm ppav irred} Let $(A,\Theta)$ be a principally polarized abelian variety of dimension $g$ and let $G\subseteq \aut(A,\Theta)$ be a (non-trivial) group of automorphisms of $A$. Assume that $A/G$ is smooth and that the analytic representation of $G$ is irreducible. Then $(A,G)$ is as in Example \ref{ex1} and $\Theta\equiv\Theta_{g}$. \end{proposition}
\begin{proof}
By Theorem \ref{thm classification}, the pair $(A,G)$ must be as in one of the Examples \ref{ex1}, \ref{ex2}, \ref{ex3}. In particular, $A/G\cong\bb P^g$ and thus $\text{NS}(A)^G={\mathbb{Z}}\Theta$. Therefore, if we denote by $\pi$ the morphism $A\to\bb P^g$, there exists $m\in\mathbb{Z}$ such that $\pi^*\mathcal{O}_{\mathbb{P}^g}(1)\equiv m\Theta.$ Then, by taking the self-intersection, we obtain $|G|=m^gg!$.
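Indeed, spelling out the self-intersection count: since $\pi\colon A\to A/G\cong\bb P^g$ is finite of degree $|G|$ and $\Theta$ is a principal polarization,
\[(\pi^*\mathcal{O}_{\mathbb{P}^g}(1))^g=\deg(\pi)\cdot\mathcal{O}_{\mathbb{P}^g}(1)^g=|G|\qquad\text{and}\qquad(m\Theta)^g=m^g\,\Theta^g=m^g\,g!,\]
the last equality by Riemann--Roch for the principal polarization $\Theta$.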
If $g>1$, this is only possible if $G\cong C^g\rtimes S_g$ with $m=|C|$ and therefore the pair $(A,G)$ is as in Example \ref{ex1}. If $g=1$ then this is trivially true, since in this case Example \ref{ex1} coincides with Example \ref{ex2}. Now, a direct computation in Example \ref{ex1} tells us that $\pi^*\mathcal{O}_{\mathbb{P}^g}(1)$ is indeed $m([0]\boxtimes\cdots\boxtimes[0])$. Hence $\Theta\equiv\Theta_{g}$. \end{proof}
\begin{corollary}\label{thm ppav inv of dim 0} Let $(A,\Theta)$ be a principally polarized abelian variety of dimension $g$ and let $G\leq \aut(A,\Theta)$ be a (non-trivial) group of automorphisms of $A$. Assume that $A/G$ is smooth and $\dim(A^G)=0$. Then the triple $(A,\Theta,G)$ is a direct product of triples as in Proposition \ref{thm ppav irred}. \end{corollary}
\begin{proof} Since $\dim(A^G)=0$ and $A/G$ is smooth, by Theorem \ref{thm red irred} we have $A= A_1\times \cdots \times A_r$, $G=G_1\times\cdots\times G_r$ and $G_i$ acts irreducibly on $A_i$. Then, by \cite[Prop.~61]{Kani}, \[\mathrm{NS}(A)\cong\mathrm{NS}(A_1)\oplus\cdots\oplus\mathrm{NS}(A_r)\oplus\bigoplus_{i<j}\mathrm{Hom}(A_i,A_j),\] where $G$ acts on each factor $\mathrm{NS}(A_i)$ by pullback and on $\mathrm{Hom}(A_i,A_j)$ by $\tau\cdot f=\tau f\tau^{-1}$. We note that there are no $G$-invariant elements in $\mathrm{Hom}(A_i,A_j)$ since $G_i$ acts irreducibly on $A_i$ and trivially on $A_j$.
Now, the coordinates of $\Theta$ with respect to the above decomposition are $G$-invariant. In particular, the coordinate of $\Theta$ in $\mathrm{Hom}(A_i,A_j)$ is 0. This implies that $\Theta$ splits as a sum of $G_i$-invariant principal polarizations $\Theta_i$ on each factor $A_i$. We can then apply Proposition \ref{thm ppav irred} to each factor. \end{proof}
\subsection{The standard construction}\label{standard} In the last section, we showed that when $\dim A^G=0$, then only products of Example \ref{ex1} can appear. In this section, we remove this hypothesis and present a way of constructing triples $(A,\Theta,G)$ such that $A/G$ is smooth and Example \ref{ex2} appears as a factor.\\
Consider the pair $(X,G)$ as in Example \ref{ex2}; in particular $X\cong E^g$ with $E$ an elliptic curve, $G\cong S_{g+1}$ and the analytic representation is the standard representation of $S_{g+1}$. Define the following polarization on $X$: \begin{equation}\label{eqn Xi0} \Xi_{g}:=\Theta_g+\ker(\Sigma), \end{equation} where $\Sigma$ is the sum morphism $X\cong E^g\to E$ and $\Theta_g$ was defined in \eqref{eqn Theta0}. Note that $\Xi_g$ is a generator of the group $\mathrm{NS}(X)^{G}\cong{\mathbb{Z}}$ for $g\geq 2$ since it is primitive (cf.~for instance \cite[\S2.3]{Auff}) and in dimension 1 we have $\Xi_1=2\Theta_1$. \\
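(For instance, when $g=1$ the sum morphism $\Sigma\colon E\to E$ is the identity, so $\ker(\Sigma)=[0]$ as a divisor and the definition indeed gives $\Xi_1=[0]+[0]=2\Theta_1$.)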
For a given polarization $\Xi$ on $X$, consider the group \[K(\Xi):=\{x\in X\mid t_x^*\cal O_X(\Xi)\cong \cal O_X(\Xi)\},\] where $t_x$ denotes translation by the element $x\in X$ (cf. \cite[Section 2.4]{BL}).
\begin{lemma}\label{lemma type Ex b} For $X$ as in Example \ref{ex2} and $\Xi_{g}$ as in \eqref{eqn Xi0}, we have \[K(\Xi_{g})=X^G=\{(x,\ldots,x)\mid x\in E[g+1]\}\subset X.\] In particular, $\Xi_g$ is of type $(1,\ldots,1,g+1)$. \end{lemma}
\begin{proof} Consider the lattice $(I_g\hspace{0.2cm}\tau I_g){\mathbb{Z}}^{2g}$ of $E^g$ where $E={\mathbb{C}}/({\mathbb{Z}}+\tau{\mathbb{Z}})$. With this lattice, the imaginary part of the first Chern class of $\Xi_g$ has matrix \[M=\begin{pmatrix} 0 & A+I_g \\ -A-I_g & 0 \end{pmatrix},\] where $A$ is the $g\times g$ matrix consisting of 1's in each coordinate. The group $K(\Xi_{g})$ corresponds to the set of all $x\in{\mathbb{Q}}^{2g}$, modulo ${\mathbb{Z}}^{2g}$, such that $x^tMy\in{\mathbb{Z}}^{2g}$ for all $y\in{\mathbb{Z}}^{2g}$, cf.~\cite[Lemma 2.4.5]{BL}. We obtain the group in the statement by a direct computation. \end{proof}
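For instance, for $g=2$ we have $A+I_2=\begin{pmatrix}2&1\\1&2\end{pmatrix}$, whose elementary divisors are $1$ and $3$; accordingly $\Xi_2$ is of type $(1,3)$ and $K(\Xi_2)\cong({\mathbb{Z}}/3{\mathbb{Z}})^2$ is the diagonal copy of $E[3]$, as in the statement of the lemma.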
Consider now a family of triples $(X_i,G_i,\Xi_{g_i})$ for $1\leq i \leq r$, where $(X_i,G_i)$ is as in Example \ref{ex2} and $\Xi_{g_i}$ is defined in \eqref{eqn Xi0}. Define then the triple $(X,G,\Xi_X)$ as: \begin{equation}\label{prodSn}X=\prod_{i=1}^rX_i,\quad G=\prod_{i=1}^rG_i,\quad \Xi_X=\Xi_{g_1}\boxtimes\cdots\boxtimes\Xi_{g_r}, \end{equation} with the obvious action of $G$ on $X$. Let $(1,\ldots,1,d_1,\ldots,d_s)$ be the type of $\Xi_X$, let $(Y,\Xi_Y)$ be a polarized abelian variety of dimension $\geq s$ and of type $(1,\ldots,1,d_1,\ldots,d_s)$, and let $G$ act \emph{trivially} on it.\\
Starting from $(X,\Xi_X)$ and $(Y,\Xi_Y)$, we construct a principally polarized abelian variety $(A,\Theta)$ with a $G$-action that fixes the origin and preserves the class of $\Theta$ following \cite[\S9.2]{Debarre}. Note that $G$ acts trivially on both $K(\Xi_X)$ and $K(\Xi_Y)$.
Now, using \cite[Lemma 6.6.3]{BL}, one can construct an isomorphism $\epsilon:K(\Xi_X)\to K(\Xi_Y)$ that is antisymplectic with respect to the alternating forms induced by the respective polarizations (cf.~\cite[\S6.6]{BL}). Let $\Gamma$ be the graph of this isomorphism. Then, by \cite[Corollary 6.3.5]{BL}, there exists a unique principal polarization $\Theta$ on $A=(X\times Y)/\Gamma$ such that the pullback to $X\times Y$ gives $\Xi_X\boxtimes\Xi_Y$. Since $G$ acts trivially on $\Gamma$ and fixes the classes of $\Xi_X$ and $\Xi_Y$, we see that $G$ acts on $A$ and fixes the class of $\Theta$. Moreover, by Proposition \ref{prop And smooth}, the quotient $A/G$ is smooth since $X/G$ is by construction.
\begin{definition} We say that a principally polarized abelian variety with $G$-action $(A,\Theta,G)$ is \emph{standard} if it can be obtained via the construction here above. \end{definition}
\subsection{The Main Theorem}
Having defined the standard construction in Section \ref{standard} and considering the results from Section \ref{sec dim A^G 0}, we are now ready to state our main result. It essentially states that the standard construction and $(E^g,\Theta_g)$ are the only direct factors of a principally polarized abelian variety admitting a smooth quotient.
\begin{theorem}\label{main thm} Let $(A,\Theta)$ be a principally polarized abelian variety and let $G\subseteq \mathrm{Aut}(A,\Theta)$ be a non-trivial subgroup of automorphisms of $A$. Assume that $A/G$ is smooth. Then, \[(A,\Theta)\cong\left(\prod_{i=1}^t(A_i,\Theta_{g_i})\right)\times (B,\Theta_B) \quad\text{and}\quad G\cong \left(\prod_{i=1}^tG_{i}\right)\times H,\] where $(B,\Theta_B,H)$ is standard, each $(A_i,\Theta_{g_i},G_i)$ is as in Proposition \ref{thm ppav irred} and $G$ acts on each factor in the obvious way.
In particular $G$ is a direct product of symmetric groups and of groups of the form $({\mathbb{Z}}/m{\mathbb{Z}})^{g_i}\rtimes S_{g_i}$ with $m\in\{2,3,4,6\}$. \end{theorem}
The idea of the proof is the following: we use the results given in Section \ref{sec prelim} in order to study and classify $G$-stable abelian subvarieties of $A$ according to whether they correspond to Examples \ref{ex1}, \ref{ex2} or \ref{ex3}. We will then use results by Debarre in order to prove that Example \ref{ex3} cannot occur if $A$ is principally polarized and that the subvarieties isomorphic to Example \ref{ex1} are already principally polarized and hence split as direct factors. Finally, we will prove that the remaining variety comes from the standard construction.\\
\begin{proof}[Proof of Theorem \ref{main thm}] Let $(A,\Theta)$ be a principally polarized abelian variety and let $G\subseteq \aut(A,\Theta)$ be such that $A/G$ is smooth. Let $Y=(A^G)^0$, $X$ its complementary abelian subvariety with respect to $\Theta$, and let $\Xi_Y:=\Theta\cap Y$ and $\Xi_X:=\Theta\cap X$. Let $\Gamma=\ker(X\times Y\xrightarrow{+} A)$. The addition map is $G$-equivariant, and actually $G$ acts trivially on $\Gamma$. Moreover, by \cite[Prop.~9.1]{Debarre}, $\Gamma$ is the graph of an isomorphism $f:K(\Xi_X)\to K(\Xi_Y)$ that is antisymplectic with respect to the alternating forms induced by the respective polarizations. In particular, $G$ acts trivially on $K(\Xi_X)$ and $K(\Xi_Y)$.
By Theorem \ref{thm red irred} and Proposition \ref{prop And smooth}, $X/G$ is smooth and therefore \[X\cong X_1\times\cdots\times X_r\quad\text{and}\quad G\cong G_1\times\cdots\times G_r,\] where $G_i$ acts on $X_i$ irreducibly and the quotient $X_i/G_i$ is smooth. Thus, by Theorem \ref{thm classification}, each pair $(X_i,G_i)$ corresponds to one of the Examples \ref{ex1}, \ref{ex2} or \ref{ex3}. Let $\Xi_{X_i}:=\Xi_X\cap X_i$ denote the restricted polarization. By following the proof of Corollary \ref{thm ppav inv of dim 0} (which does not use the fact that the polarization is principal until the very end), we deduce that \[\Xi_X\equiv \Xi_{X_1}\boxtimes\cdots\boxtimes\Xi_{X_r},\] and the numerical class of each $\Xi_{X_i}$ is fixed by $G$ (and thus $G_i$). An easy exercise gives us that \[K(\Xi_X)=K(\Xi_{X_1})\oplus\cdots\oplus K(\Xi_{X_r}),\] and $G$ (and thus $G_i$) acts trivially on each factor.
We will prove now that the polarizations $\Xi_{X_i}$ correspond to the ones we have defined above for Examples \ref{ex1} and \ref{ex2}. At the same time we will prove that Example \ref{ex3} cannot appear in this situation.
\begin{lemma}\label{lem reductions of Theta} Let $g_i$ denote the dimension of $X_i$. \begin{enumerate} \item If $(X_i,G_i)$ is isomorphic to Example \ref{ex1} and $g_i\geq 2$, then $\Xi_{X_i}$ is the principal polarization $\Theta_{g_i}$ defined in \eqref{eqn Theta0}. \item If $(X_i,G_i)$ is isomorphic to Example \ref{ex2} and $g_i\geq 2$, then $\Xi_{X_i}$ is the polarization $\Xi_{g_i}$ defined in \eqref{eqn Xi0}. \item If $g_i=1$, then either $\Xi_{X_i}\equiv\Theta_1$ or $\Xi_{X_i}\equiv\Xi_1\equiv 2\Theta_1$. In this last case, $G_i\cong{\mathbb{Z}}/2{\mathbb{Z}}$. \item None of the pairs $(X_i,G_i)$ can be isomorphic to Example \ref{ex3}. \end{enumerate} \end{lemma}
\begin{proof} If $(X_i,G_i)$ is isomorphic to Example \ref{ex1}, then $X_i\cong E^{g_i}$ for some elliptic curve $E$ and $\mathrm{NS}(X_i)^{G_i}={\mathbb{Z}}\cdot \Theta_{g_i}$. Therefore $\Xi_{X_i}\equiv m\Theta_{g_i}$ for some $m\in\mathbb{Z}_{>0}$. However, \[K(m\Theta_{g_i})=X_i[m]=E[m]^{g_i}\] which is only fixed by $G_i\cong ({\mathbb{Z}}/m{\mathbb{Z}})^{g_i}\rtimes S_{g_i}$ in the case when $m=1$, hence $\Xi_{X_i}\equiv \Theta_{g_i}$. This proves (1).
If $g_i=1$, then the pair $(X_i,G_i)$ is always isomorphic to Example \ref{ex1}, so that the previous analysis still holds. However, in this case $X_i$ is an elliptic curve $E$ and $G_i\cong{\mathbb{Z}}/m{\mathbb{Z}}$, so that $E[m]$ can be fixed by $G_i$ if $m=2$ and $G_i=\{\pm 1\}\simeq{\mathbb{Z}}/2{\mathbb{Z}}$. This proves (3).\\
If $(X_i,G_i)$ is isomorphic to Example \ref{ex2} and $g_i\geq 2$, then $X_i\cong E^{g_i}$ for some elliptic curve $E$ and $\mathrm{NS}(X_i)^{G_i}={\mathbb{Z}}\cdot \Xi_{g_i}$. Therefore $\Xi_{X_i}\equiv m\Xi_{g_i}$ for some $m\in\mathbb{Z}_{>0}$. However, \[K(m\Xi_{g_i})=m^{-1}(K(\Xi_{g_i}))\supset X_i[m],\] which is only fixed by $G_i\cong S_{g_i+1}$ in the case when $m=1$, hence $\Xi_{X_i}\equiv \Xi_{g_i}$. This proves (2).\\
If $(X_i,G_i)$ is isomorphic to Example \ref{ex3}, then with respect to the natural symplectic basis of the lattice $\Lambda:={\mathbb{Z}}^2+i{\mathbb{Z}}^2$, we have that $\rho_r(G_i)$ is generated by the matrices \[\left(\begin{array}{rrrr} -1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & -1 & -1 & 1 \\ 0 & 0 & 0 & 1 \end{array}\right), \left(\begin{array}{rrrr} 0 & -1 & -1 & 1 \\ 0 & 0 & 0 & 1 \\ 1 & -1 & 0 & -1 \\ 0 & -1 & 0 & 0 \end{array}\right), \left(\begin{array}{rrrr} -1 & 0 & 0 & 0 \\ -1 & 1 & 1 & 0 \\ 0 & 0 & -1 & 0 \\ -1 & 0 & -1 & 1 \end{array}\right),\] where $\rho_r:G\to\mathrm{GL}(\Lambda)$ denotes the rational representation of $G$. With respect to this basis the Riemann form is given by the matrix \[J=\left(\begin{array}{cc}0&I\\-I&0\end{array}\right),\] and a simple calculation with a computer program gives us that \[c_1\left(\sum_{g\in G_i}g^*\mathcal{O}_{X_i}(\Theta_2)\right)=\sum_{g\in G_i}\rho_r(g)^tJ\rho_r(g)=\left(\begin{array}{rrrr} 0 & 16 & 32 & -16 \\ -16 & 0 & -16 & 32 \\ -32 & 16 & 0 & 16 \\ 16 & -32 & -16 & 0 \end{array}\right).\] Therefore $\sum_{g\in G_i}g^*\Theta_2\equiv 16\Xi_{\text{\ref{ex3}}}$ for a \emph{primitive} polarization $\Xi_{\text{\ref{ex3}}}$, and so $\Xi_{\text{\ref{ex3}}}$ generates $\mathrm{NS}(X_i)^{G_i}$. However, it is easy to see that \[K(\Xi_{\text{\ref{ex3}}})=\langle(\tfrac{1+i}{2},0),(0,\tfrac{1+i}{2})\rangle,\] which is not in the fixed locus of $G$. Moreover, $K(m\Xi_{\ref{ex3}})= m^{-1}(K(\Xi_{\text{\ref{ex3}}}))$ which is not invariant by $G$ either. Therefore by the previous analysis it is impossible for Example (c) to appear. This proves (4). \end{proof}
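The sum computed in the proof above can be reproduced mechanically; the following short script (written in Python with NumPy purely as a convenient choice of tool) closes the three displayed generators under multiplication and accumulates $\rho_r(g)^tJ\rho_r(g)$ over the resulting group of order 16.
\begin{verbatim}
import numpy as np

gens = [np.array(m) for m in (
    [[-1, 1, 0, 1], [0, 1, 0, 0], [0, -1, -1, 1], [0, 0, 0, 1]],
    [[0, -1, -1, 1], [0, 0, 0, 1], [1, -1, 0, -1], [0, -1, 0, 0]],
    [[-1, 0, 0, 0], [-1, 1, 1, 0], [0, 0, -1, 0], [-1, 0, -1, 1]],
)]
J = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])

# Close the generators under multiplication (the group is finite).
group = {tuple(np.eye(4, dtype=int).ravel())}
frontier = [np.eye(4, dtype=int)]
while frontier:
    new = []
    for m in frontier:
        for g in gens:
            p = m @ g
            key = tuple(p.ravel())
            if key not in group:
                group.add(key)
                new.append(p)
    frontier = new

print(len(group))  # order of the group: 16
S = sum(np.array(k).reshape(4, 4).T @ J @ np.array(k).reshape(4, 4) for k in group)
print(S)           # the matrix displayed above, i.e. 16 times a primitive class
\end{verbatim}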
Lemma \ref{lem reductions of Theta} tells us that the triples $(X_i,\Xi_i,G_i)$ are either Example \ref{ex1} with the polarization $\Theta_g$ or Example \ref{ex2} with the polarization $\Xi_g$. Indeed, the only case that ``escapes'' from this fact is when $g_i=1$ and $\Xi_{X_i}\equiv \Xi_{1}$. But since in this case we have $G_i\cong S_2$, we may indeed interpret the pair $(X_i,G_i)$ as Example \ref{ex2} in dimension 1 with the polarization $\Xi_{1}$.
Having said this, up to rearranging the factors, we can write \[(X,\Xi_X)\cong \left(\prod_{i=1}^t(A_i,\Theta_{g_i})\right)\times\left(\prod_{j=1}^s(X_j,\Xi_{g_j})\right)\quad\text{and}\quad G\cong \left(\prod_{i=1}^tG_{i}\right)\times\left(\prod_{j=1}^sH_j\right), \] where the pairs $(A_i,G_i)$ are isomorphic to Example \ref{ex1} and the pairs $(X_j,H_j)$ are isomorphic to Example \ref{ex2}. Note that since the pairs $(A_i,\Theta_{g_i})$ are principally polarized, they split as direct factors of $(A,\Theta)$ as well. Thus, we only need to prove that the remaining factor, which corresponds to the subvariety \[B=Y+\sum_{j=1}^s X_j,\] equipped with the principal polarization $\Theta_B:=\Theta\cap B$, is standard. This is an immediate application of \cite[Prop.~9.1]{Debarre} since $Y$ is the complementary abelian subvariety of $\sum_{j=1}^s X_j$ in $B$ with respect to $\Theta_B$ and the polarizations $\Xi_{g_j}$ are the ones used in the standard construction. \end{proof}
We conclude this section with an immediate corollary to Theorem \ref{main thm}.
\begin{corollary} Let $(A,\Theta)$ be a principally polarized abelian variety. Let $\aut(A,\Theta)$ be the group of automorphisms of $A$ preserving the numerical class of $\Theta$. Assume that $A/\aut(A,\Theta)$ is smooth. Then $(A,\Theta)$ is a polarized product of elliptic curves. \end{corollary}
\begin{proof} It suffices to note that the standard factor $(B,\Theta_B)$ from Theorem \ref{main thm} always ``has more'' automorphisms that preserve the polarization. Indeed, multiplication by $-1$ is an automorphism that does not fix the subvariety $Y$ (unless it is trivial) in this factor and by definition $Y$ was invariant by the group $H$ acting on $B$. \end{proof}
\subsection{A moduli-theoretic interpretation}\label{sec moduli} We note that Theorem \ref{main thm} has a moduli-theoretic interpretation. Indeed, the moduli space of triples $(E^m,\Xi_m,f)$ where $E$ is an elliptic curve, $\Xi_m$ is the polarization defined in (\ref{eqn Xi0}), and \[f:K(\Xi_m)\to({\mathbb{Z}}/(m+1){\mathbb{Z}})^2,\] is a symplectic isomorphism is easily seen to be isomorphic to the modular curve $\cal X(m+1):=\mathbb{H}/\Gamma(m+1)$. As a consequence, the moduli space of all products of the form (\ref{prodSn}) along with a symplectic isomorphism \[K(\Xi_X)\to\bigoplus_{i=1}^r({\mathbb{Z}}/(g_i+1){\mathbb{Z}})^2,\] is isomorphic to \[\cal X(g_1+1)\times\cdots\times \cal X(g_r+1).\]
Now, let $D=(1,\ldots,1,d_1,\ldots,d_s)$ be the type of $\Xi_X$ and note that this tuple depends only on the numbers $(g_1+1),\ldots,(g_r+1)$. Let $\cal{A}_{n}^D$ be the moduli space of triples $(Y,\Xi_Y,h)$ where $(Y,\Xi_Y)$ is a polarized abelian variety of dimension $n$ and type $D$, and \[h:K(\Xi_Y)\to\bigoplus_{i=1}^r({\mathbb{Z}}/(g_i+1){\mathbb{Z}})^2,\] is a symplectic isomorphism. Then the standard construction can be easily interpreted as a morphism of moduli spaces \[\left(\prod_{i=1}^r\cal X(g_i+1)\right)\times\mathcal{A}_n^D\to\mathcal{A}_{g'},\] where $g'=n+\sum_{i=1}^r g_i$.
On the other hand, the factor in Theorem \ref{main thm} that does not come from the standard construction is just a product of principally polarized elliptic curves. Thus, its moduli space is simply a product of copies of $\mathcal{A}_1=\cal X(1)$. Therefore, we may interpret the set described in Theorem \ref{main thm} as the union of images of morphisms of moduli spaces \[\Phi_{g_1,\ldots,g_s,n}:\left(\prod_{i=1}^s\cal X(g_i+1)\right)\times\mathcal{A}_n^D\to\mathcal{A}_g,\] where $g_i,n\geq 0$, $\cal{A}_g$ is the moduli space of principally polarized abelian varieties of dimension $g$ and $g=n+z+\sum_{i=1}^sg_i$, where $z$ is the number of $i$'s such that $g_i=0$. The moduli-theoretic version of Theorem \ref{main thm} can henceforth be stated as follows:
\begin{theorem}\label{moduli} A principally polarized abelian variety $(A,\Theta)$ admits a non-trivial subgroup $G\subseteq\aut(A,\Theta)$ that gives a smooth quotient $A/G$ if and only if it is in the image of one of the morphisms $\Phi_{g_1,\ldots,g_s,n}$. \end{theorem}
We finish this section by studying the irreducibility of the theta divisor of a general element in the image of $\Phi_{g_1,\ldots,g_s,n}$. This will be useful in order to study smooth quotients of Jacobians. As it turns out, the only non-trivial case is the standard construction, which is studied in the following proposition.
\begin{proposition}\label{prop standard irred} If $g_i>0$ for all $i$, a very general element in the image of $\Phi_{g_1,\ldots,g_s,n}$ is irreducible. \end{proposition}
\begin{proof} Let $(A,\Theta)$ be a very general element of the image of $\Phi_{g_1,\ldots,g_s,n}$, where specifically we mean that \[A=(X_1\times\cdots\times X_s\times Y)/ \Gamma,\] where $\Gamma$ is the graph of a certain anti-symplectic isomorphism, $\Hom(X_i,X_j)=0$ for $i\neq j$, $\Hom(X_i,Y)=0$ for all $i$ and $Y$ is simple. This implies, in particular, that \begin{equation}\label{eqn End Q}
\End_{\mathbb{Q}}(A)\cong\End_{\mathbb{Q}}(X_1)\oplus\cdots\oplus\End_{\mathbb{Q}}(X_s)\oplus\End_{\mathbb{Q}}(Y), \end{equation} where the subscript ${\mathbb{Q}}$ means that we are tensoring with ${\mathbb{Q}}$.
It is well-known that $(A,\Theta)$ is reducible if and only if there is a non-trivial abelian subvariety $T\subseteq A$ such that $\Theta\cap T$ is principal. Assume this is the case and let $S$ be its complementary abelian subvariety (which is also principally polarized). By \eqref{eqn End Q}, either $T$ or $S$ must contain $Y$ and the other must be contained in $\prod_{i=1}^sX_i$. Assume without loss of generality that this is the case for $T$. Then \[(T,\Theta\cap T)=(T_1,\Theta\cap {T_1})\times\cdots\times (T_s,\Theta\cap {T_s}),\] where $T_i=T\cap X_i$. Moreover, $\Theta\cap T_i$ is principal for every $i$ since $\Theta\cap T$ is. The following lemma tells us then that this is impossible, concluding the proof. \end{proof}
\begin{lemma} Let $X\subset E^n$ be an abelian subvariety of dimension $m>0$. Then $\Xi_n\cap X$ is \emph{not} a principal polarization. \end{lemma}
\begin{proof} Assume that $\Xi_n\cap X$ is principal and let us proceed by contradiction. Using the definition of $\Xi_n$ given in \eqref{eqn Xi0} and denoting $D_i:=\pi_i^*([0])$, we have that \[m!=(\Xi_n\cap X)^m=\sum_{\Sigma k_i=m}\binom{m}{k_1,\ldots,k_{n+1}}D_1^{k_1}\cdots D_n^{k_n}(\ker\Sigma)^{k_{n+1}}\cdot X.\] However, since each $D_i$ and $\ker\Sigma$ are abelian subvarieties, their self-intersection is trivial in the Chow ring modulo numerical equivalence. Hence, every $k_i$ can be taken to be equal to 0 or 1, i.e. \begin{align*} (\Xi_n\cap X)^m &=\underset{k_i\in \{0,1\}}{\sum_{\Sigma k_i=m}}\binom{m}{k_1,\ldots,k_{n+1}}D_1^{k_1}\cdots D_n^{k_n}(\ker\Sigma)^{k_{n+1}}\cdot X,\\ &=m! \underset{k_i\in \{0,1\}}{\sum_{\Sigma k_i=m}}D_1^{k_1}\cdots D_n^{k_n}(\ker\Sigma)^{k_{n+1}}\cdot X. \end{align*} But since $(\Xi_n\cap X)^m=m!$ and the intersection between $X$ and each summand is greater than or equal to 0 since the $D_i$'s and $\ker\Sigma$ are effective, we see that there is only one non-zero summand, which is equal to 1.
Let $F$ be an irreducible component of $\Xi_n$. Then $\Xi_n-F$ is ample and thus $(\Xi_n-F)^m\cdot X>0$. However, since there is only one non-zero summand in the equality above, we see that this number is equal to 0 as soon as $F$ is taken to appear in the non-zero summand, which is a contradiction. \end{proof}
For the general case, note that if there exists $i$ such that $g_i=0$, then the theta divisor of every element of the image of $\Phi_{g_1,\ldots,g_s,n}$ is reducible. Putting this together with Proposition \ref{prop standard irred} we immediately get the following result.
\begin{theorem}\label{thm irred of Theta} If the theta divisor of a principally polarized abelian variety with smooth quotient is irreducible, then it comes from the standard construction. Moreover, a very general element of this construction is irreducible. \end{theorem}
\section{Smooth quotients of Jacobians}\label{jacobians}
In this section we prove Theorem \ref{thm jacob intro}. We start with a lemma on minimal morphisms. Following Kani (cf.~\cite{Kani2}), a cover $f:C\to E$ with $C$ a smooth curve and $E$ an elliptic curve is said to be \emph{minimal} if for every commutative diagram
\begin{equation}\label{eq diag minimal} \xymatrix@R=0.8em@C=0em{ & C \ar[dl]_h \ar[dr]^f & \\ F \ar[rr] && E, } \end{equation} where $F$ is an elliptic curve and $F\to E$ is an isogeny, we have $F=E$ and the isogeny is the identity. We have then the following Lemma:
\begin{lemma}\label{lem minimality} Let $f:C\to E$ be a Galois cover of an elliptic curve $E$. Then $f$ is minimal. \end{lemma}
\begin{proof} Consider the commutative diagram \eqref{eq diag minimal}. Since $f$ is Galois, $F=C/H$ for some subgroup $H\subseteq G$. Since the cover $F\to E$ is an unramified cover of elliptic curves, it is also Galois, and thus $H$ is normal in $G$. This implies that $h:C\to F$ is $G$-equivariant, and hence so is the morphism $h^*:F\to J_C$. Since $h^*F=f^*E\subset J_C^G$, $G$ acts trivially on $F$, and thus $H=G$ and $F=E$. \end{proof}
We are now ready to prove Theorem \ref{thm jacob intro}.
\begin{proof}[Proof of Theorem \ref{thm jacob intro}] It is obvious that the quotient $J_C/G$ is smooth if $g\leq 1$. Let us prove first then that either (2) or (3) implies that $J_C/G$ is smooth. Let $g'$ be the genus of $C':=C/G$ and let $R$ be the total ramification index of the covering $C\to C'$. We see then by Riemann-Hurwitz that $g'=1$ in case (2) and $g'=2$ in case (3), so $g'=g-1$ in both cases.
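Explicitly, Riemann--Hurwitz reads $2g-2=|G|(2g'-2)+R$: in case (2) this is $2=2(2g'-2)+2$, giving $g'=1$, and in case (3) it is $4=2(2g'-2)+0$, giving $g'=2$.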
We get then that $J_{C'}$ is isogenous to an abelian subvariety $A_0$ of $J_C$ of dimension $g'=g-1$, which corresponds to the connected component of $J_C^G$ that contains 0. Let $P_G$ be the complementary abelian subvariety of $A_0$ with respect to the theta divisor of $J_C$, which has dimension 1. Since the action of $G={\mathbb{Z}}/2{\mathbb{Z}}$ on $A_0$ is trivial and the theta divisor is $G$-invariant, we see that $G$ acts non-trivially on $P_G$ and thus $P_G /G\cong\bb P^1$ is smooth. Then by Proposition \ref{prop And smooth} we have that $J_C/G$ is smooth.\\
Assume now that $J_C/G$ is smooth with $g\geq 2$. We have to prove that $G\cong\mathbb{Z}/2{\mathbb{Z}}$ and that either (2) or (3) holds. The smoothness of $J_C/G$ at the image of $0$ tells us that $G$ is generated by pseudoreflections (i.e.~elements fixing pointwise a divisor passing through $0$) by the Chevalley-Shephard-Todd Theorem. Consider then a pseudoreflection $\sigma\in G$ and the subgroup $S\subset G$ generated by it. Then $J_C^S$ is a divisor and hence $J_{C/S}$ has dimension $g-1$. This implies that $C/S$ is a curve of genus $g-1$. A quick look at the Riemann-Hurwitz formula using $|S|\geq 2$ and $R\geq 0$ tells us that $g\leq 3$. We are left then with five possible cases: \[(g,g')\in \{(3,2),(3,1),(3,0),(2,1),(2,0)\}.\]
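To spell out the bound used above: $C/S$ has genus $g-1$, so Riemann--Hurwitz gives $2g-2=|S|\bigl(2(g-1)-2\bigr)+R$; since $g\geq 2$, $|S|\geq 2$ and $R\geq 0$, this forces $2g-2\geq 2(2g-4)$, that is, $g\leq 3$.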
In the case $(3,2)$, we obtain $|G|=2$ and $R=0$, which corresponds to case (3). In the case $(2,1)$, by \cite[Thm.~4.1]{Br} we have $|G|=2$ and the Riemann-Hurwitz formula yields $R=2$, which corresponds to case (2). We are left to prove then that the three other cases cannot give smooth quotients.\\
By Theorem \ref{thm irred of Theta}, we see that the pair $(J_C,\Theta_C)$ is standard since the theta divisor is an irreducible principal polarization. In particular $J_C$ is isogenous to a product $X\times Y$ with $Y$ a \emph{non-trivial} $G$-invariant abelian subvariety of $J_C$ (which \textit{a fortiori} corresponds to $f^*J_{C'}$) and $X$ a direct product of elliptic curves with $\dim(X^G)=0$ (which \textit{a fortiori} corresponds to the Prym subvariety of $J_C$ with respect to $G$). This discards immediately the cases $(2,0)$ and $(3,0)$, since then we have $\dim(J_C^G)=0$.
We are left then with the case $(3,1)$, where $X$ has dimension 2 and hence $G$ must be isomorphic to either $({\mathbb{Z}}/2{\mathbb{Z}})^2$ or $S_3$ by the standard construction. Here, $C'$ is an elliptic curve $E$ and thus $J_E=E$. Lemma \ref{lem minimality} tells us that the cover $f:C\to C'=E$ is minimal.
Assume that $G\cong S_3$, let $H$ denote the index 2 normal subgroup of $G$ and consider the quotient $C''=C/H$. Then the genus $g''$ of $C''$ must be 2 since it has to be $<3$ and if it was 1 we would contradict the minimality of $f:C\to E$. We get then that $C\to C''$ is a Galois cover with Galois group $H$ and $|H|> 2$. This contradicts the Riemann-Hurwitz formula.
Assume now that $G=({\mathbb{Z}}/2{\mathbb{Z}})^2$. Since $f$ is minimal and of degree $|G|$, \cite[Cor.~12.1.4]{BL} tells us that
\[|f^*E\cap X|=|G|^2=16.\]
But since the action of $G$ on $E$ is trivial and there exists $g\in G$ acting as $-1$ on $X$, we know that $f^*E\cap X\subset X^G\subset X[2]$ and hence $f^*E\cap X$ is 2-torsion. Since $f^*E$ is just an elliptic curve, $|f^*E\cap X|\leq |(f^*E)[2]|=4$, which yields a contradiction. \end{proof}
Theorem \ref{thm jacob intro} immediately gives us the following interesting corollary.
\begin{corollary} Let $g_1,\ldots,g_r,n\in\mathbb{Z}_{\geq0}$ and let $z$ be the number of $i$'s such that $g_i=0$. If $n+z+\sum_{i=1}^rg_i\geq4$, then the image of $\Phi_{g_1,\ldots,g_r,n}$ is disjoint from the Jacobian locus. \end{corollary}
\begin{rem} Using well-known results by Broughton \cite{Br}, we see that the moduli space of genus 3 \'etale double covers of genus 2 curves is a connected 3-dimensional subvariety of $\mathcal{M}_3$, and the moduli space of genus 2 double covers of elliptic curves is a connected 2-dimensional subvariety of $\mathcal{M}_2$. \end{rem}
\end{document} | arXiv |
Better explanation for beyond-line-of-sight VHF signal reach than "lower curvature to radio waves"
Asked 1 year, 10 months ago
I am currently studying for the Technician Exam and have come across an answer to a question I think is ridiculous. The question is:
Why do VHF/UHF signals usually travel somewhat farther than visual line-of-sight distance between two stations?
The given "correct" answer is,
Because the Earth seems less curved to radio waves than to light.
Come on, there are much better answers than that and one does not even have to go into the duality of particles and waves. My question is, asking a ham expert, what would be a better answer to this question?
Dave G
Could you clarify what the question you're asking us is, please? – Kevin Reid AG6YO♦ Feb 23 '19 at 17:44
Sorry - it's now in the form of a question instead of a statement – Dave G Feb 23 '19 at 17:59
so, the full question is "How can you explain that VHF/UHF signals reach farther than line of sight? (explain without using "curvature".)"? – Marcus Müller Feb 23 '19 at 21:09
By the way, I'm really angry about this answer. It doesn't even answer the question. It's literally answering something else. The question doesn't even mention that the other station is beyond the visual horizon due to earth curvature; it describes a lack of visual line of sight (literally!), which might mean there's earth curvature between, but also might mean there's a hill in between, a house, or a billboard made of thin cardboard. And even for the question it is answering, it's plain wrong. – Marcus Müller Feb 23 '19 at 21:16
Thank you Marcus - it's nice to know someone with a background such as yours agrees with me. – Dave G Feb 23 '19 at 21:35
The full question, and possible answers:
T3 C11
Why do VHF and UHF radio signals usually travel somewhat farther than the visual line of sight distance between two stations?
A. Radio signals move somewhat faster than the speed of light
B. Radio waves are not blocked by dust particles
C. The Earth seems less curved to radio waves than to light
D. Radio waves are blocked by dust particles
Of these, C is the only answer that makes any kind of sense. Arguably, one could take issue with the phrasing of the answer, because radio waves have no consciousness, and so nothing can "seem" anything to them. However, atmospheric refraction does bend the relatively low frequencies (compared to visible light) of VHF and UHF to a greater extent, and it's in human nature to transform our observations into impossible reference frames to more intuitively reason about the world. If your eyes could see in RF you would indeed see farther which means the Earth is effectively less curved, so even if it's an impossible scenario, it holds up to simple intuition and reason.
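To put a rough number on "seems less curved": a common engineering shortcut is to pretend the Earth has an effective radius of about 4/3 of its real one under average refraction conditions. The little Python sketch below (the 4/3 factor and the antenna heights are just illustrative assumptions, not part of the exam question) compares visual and radio horizon distances for a few antenna heights:

```python
# Rough illustration of "the Earth seems less curved to radio waves":
# horizon distance with and without standard atmospheric refraction.
from math import sqrt

R_EARTH_KM = 6371          # mean Earth radius
K_STANDARD = 4 / 3         # typical effective-earth-radius factor for VHF/UHF

def horizon_km(h_m: float, k: float = 1.0) -> float:
    """Distance to the horizon in km for an antenna height h_m (metres)."""
    return sqrt(2 * k * R_EARTH_KM * h_m / 1000)

for h in (2, 10, 30):      # antenna heights in metres
    print(f"h = {h:>2} m: visual {horizon_km(h):4.1f} km, "
          f"radio {horizon_km(h, K_STANDARD):4.1f} km")
```

With a 10 m antenna that works out to roughly 11 km versus 13 km — noticeably farther, but still very much a curved Earth.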
Trying to simplify a complex interaction of atmospheric conditions and electrodynamics into a single sentence question and answer in simple language is bound to generate some degree of disagreement about the single "best" sentence that captures the situation, and going to require some reasonable assumptions to be made. The best we can hope is that one of the four proposed answers is obviously correct, while the other three are obviously wrong, and by that standard, this question seems fine to me.
I'll agree with you on the "it's the least wrong option"; I think you made a very true statement for a lot of situations in life: When tasked with making a choice between multiple bad options, choose the one that's the least batpoop crazy. And I agree with you, with these alternatives, the choice is more than clear. – Marcus Müller Feb 23 '19 at 22:19
I believe an applicable term would be anthropomorphizing. – Glenn W9IQ Feb 24 '19 at 12:40
All excellent points Phil, Marcus, and Glen.... it would be difficult to describe the physical universe without the use of metaphor. – Dave G Feb 24 '19 at 13:40
"Atmospheric refraction" would be a much better answer than the earth appearing less curved – Brad Mace Sep 27 '20 at 0:49
Please accept Phil's answer. It's the sanest one.
Now, however to answer:
what would be a better answer to this question?
Because they are not the same as light; first of all, they aren't blocked by things like a thin cotton sheet.
I hear me muttering to myself:
Ok, you know exactly how that question was meant; don't be an arse:
Why can VHF/UHF waves be detected when no straight line through an RF-transparent medium connects emitter and receiver?
Because beam optics don't apply here.
Remember that the assumption that light forms perfectly straight beams is actually wrong; it just looks that way because the wavelength of visible light, which is an electromagnetic wave just like radio waves, is in the nanometers, and hence most effects that we can observe in light are well-modeled with a model of light following a straight beam.
However, that's just a justifiable simplification under the assumption that
all structures interacting with the wave are way, way larger than the wavelength
the medium (air, for example) through which the wave travels is homogeneous, and works the same no matter in which direction you travel through it
the sensitivity with which we look at light phenomena is low enough for us to ignore the effects that can't be explained by the model (this applies to any model of the world, by the way)
Large-scale-ness (Assumption 1)
To illustrate 1.: Maybe you've done slit experiments with light: Make a really, really narrow slit and place it between a source of light and a screen. You'll see that light really doesn't need a straight path to travel. Also, you'd ideally be able to observe interference effects on the screen, ie. not only light in places that should be dark, but also regular patterns of brightness fluctuating.
However, you need a really narrow slit to make this work visibly well (or even better, a lot of identical slits in equal distance, to make the effect stronger).
The same applies to radio waves and the earth: Say your VHF or UHF signal has a wavelength somewhere between a few meters and a few tens of centimeters. The earth has a radius of 6370 km. It's "huge enough" compared to the wavelength that radio waves, just like light, wouldn't "reach beyond the horizon", *if* there were no atmosphere around earth.
Homogeneous, isotropic medium (Assumption 2)
Now here's the thing: if the earth is really large-scale compared to UHF waves, the optical propagation model would work, and we shouldn't be able to receive things from a place to which a straight line goes through the earth.
However, the atmosphere doesn't play along here, at all:
On the lower end, it's limited by earth, or actually, globally, more often by seawater, which is at least partially a pretty well-reflecting surface for radio waves. On the upper end, we get the ionosphere, and troposphere and whatsitcalledsphere; basically, you get a refraction that "bends" the beam towards the earth just enough to make it "cling" to earth's surface a little more.
(I've written an advanced high-school physics level explanation for that here.)
So, what we see happen for microwave radio is similar to what happens in a (graded index multimode) optical fiber; the fiber doesn't have to run in a straight line, but a light beam can still pass relatively unhindered through it, by the simple fortune of always being refracted (and reflected) back towards the core of the fiber when it comes closer to the edge.
That doesn't make the fiber (earth) any less curved; it's just that macroscopically, the beam follows the curvature. The fact that the curvature isn't "eradicated" by this refractive model can pretty simply be verified by realizing that light still has to travel a distance longer (and hence, for a time measurably longer) than the straight connection between the points.
So, yes, I'd consider that answer wrong.
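For a rough sense of scale, link-budget engineering usually doesn't model the bending explicitly but folds it into an effective-earth-radius factor of about 4/3 for a "standard" atmosphere. A few lines of Python show how much extra range that rule of thumb predicts; the antenna height here is just an arbitrary example value:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius in metres

def horizon_km(antenna_height_m: float, k: float = 1.0) -> float:
    """Distance to the horizon for an antenna at the given height.

    k is the effective-earth-radius factor: k = 1 gives the purely
    geometric (visual) horizon, k = 4/3 is the common "standard
    atmosphere" value used to approximate tropospheric refraction.
    """
    return math.sqrt(2 * k * EARTH_RADIUS_M * antenna_height_m) / 1000

h = 20.0  # metres above ground, arbitrary example
print(f"visual horizon: {horizon_km(h):.1f} km")          # ~16.0 km
print(f"radio horizon : {horizon_km(h, k=4/3):.1f} km")   # ~18.4 km
```

So the "radio horizon" comes out about 15% farther than the visual one: the earth is just as curved, the rays simply bend a little on the way.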
Sensitivity too low to detect the difference to the simpler model (assumption 3)
While our eyes are tremendous instruments that can be continuously and smoothly adjusted to different lighting situations, their instantaneous dynamic range (the ratio of the weakest detectable to the strongest detectable light power) is limited: Numbers differ depending on who examined what, but it's fair to assume it doesn't exceed the order of magnitude of 1:1,000,000 (or 60 dB).
With radio receivers and transmitters, we can be a lot more sensitive through something that light waves don't allow us to do (because those waves aren't coherent):
We can take an arbitrarily long observation and boil it down to a number. (We call the increase in sensitivity processing gain. I find that name very fitting.)
You won't be able to see a lighthouse's atmospherically refracted light at all if the ambient light outshines it, even if you stood right next to it – but with radio receivers, we can suppress the effect of ambient and receiver noise. You can do something like:
Let the transmitter transmit a $\cos(2\pi 200\,\text{MHz} t)$ for 1 ms, then a $-\cos(2\pi 200\,\text{MHz} t)$ (i.e. a 180° phase shifted, or inverted, version of the same carrier), and repeat that pattern for an hour (that's 3,600,000 ms).
If our receiver knows that pattern, it could take all signals from the odd-numbered milliseconds, record them and add them up. Do the same for all the even-numbered milliseconds, but in a separate accumulator.
Now you get two different "sum-recordings", each with 1,800,000 summed up cosine segments, one with all the +cos, the other with all the -cos.
If the two bins don't look like cosines at all, then you pretty certainly received only noise with no cosines "hidden" in it.
If the one bin, however, looks like a cosine, and the other like a minus-cosine, then you can say with high certainty that you received the transmitter's signal. The same-phase cosines just added up constructively – you made a cosine with amplitude of 1.8 million out of a cosine with amplitude 1. The noise can't add up as much; the math is relatively easy here: If we say a cosine with amplitude 1 has power $P$, then a cosine with an amplitude of $A\cdot 1$ has power of $A^2 \cdot P$, i.e. you see a power increase in the sum signal of $1\,800\,000^2$. Uncorrelated noise's power adds up – so the power increase of the noise is "only" $A$, not $A^2$; hence, the signal-power-to-noise-power ratio (SNR) of the sum is $\frac{A^2}{A}=A$ times better than that of the original single observation.
That's a pretty dramatic effect – and it's the reason modes like WSPR or things like GPS work so robustly – your GPS receiver might work with a 4-bit ADC, which means the weakest-to-strongest instantaneously discriminable signal ratio, i.e. the dynamic range, is just 16, even ignoring that one bit should actually be the sign of the received signal. GPS signals are usually significantly below the noise floor – i.e., they have a negative SNR (in dB). Still, GPS works well! GPS receivers know the pattern in the signal sent by the satellites, and they add it up over a long time, which lifts the signal out of noise.
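If you want to convince yourself numerically that this pattern-integration trick really does pull a carrier from far below the noise, here is a small self-contained numpy toy. All numbers (frequency, amplitude, segment count) are made up for illustration and have nothing to do with actual WSPR or GPS parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1_000_000                              # sample rate, Hz
f0 = 10_000                                 # "carrier" frequency, Hz
t = np.arange(int(fs * 1e-3)) / fs          # one 1 ms segment of time stamps
template = np.cos(2 * np.pi * f0 * t)       # waveform the receiver expects

amp = 0.01                                  # signal amplitude, 40 dB below unit-RMS noise
n_seg = 10_000                              # number of 1 ms segments integrated (10 s total)

acc = np.zeros_like(t)
for k in range(n_seg):
    sign = 1.0 if k % 2 == 0 else -1.0      # agreed +cos / -cos segment pattern
    rx = sign * amp * template + rng.normal(0.0, 1.0, t.size)   # received segment
    acc += sign * rx                        # undo the known pattern, then accumulate

# Project the accumulator onto the expected waveform (matched-filter style).
stat = acc @ template / np.linalg.norm(template)
print(f"detection statistic: {stat:.0f}")   # ~2200 here; pure noise would give ~0 +/- 100
```

Even though the signal is 40 dB below the noise in any single segment, the accumulated detection statistic ends up a couple of dozen standard deviations above anything noise alone would produce.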
This all is just to illustrate that we have it much easier, because it's just such a standard thing to do, to detect sub-ambient RF signals than light signals; hence, it's easier to notice that things don't work like in the nice and simple optical (beam) model.
Because we're breaking the assumptions necessary for modelling electromagnetic waves as beams, we can't make any statements based on a beam-based model.
Especially, emitted energy does reach beyond the horizon due to wave (as opposed to ray) propagation effects such as refraction, diffraction and reflection.
From Electronics Notes, a British page.
Line of sight radio communications
It might be thought that most radio communications links at VHF and above follow a line of sight path. This is not strictly true and it is found that even under normal conditions radio signals are able to travel or propagate over distances that are greater than the line of sight.
The reason for the increase in distance travelled by the radio signals is that they are refracted by small changes that exist in the Earth's atmosphere close to the ground. It is found that the refractive index of the air close to the ground is very slightly higher than that higher up. As a result the radio signals are bent towards the area of higher refractive index, which is closer to the ground. It thereby extends the range of the radio signals.
The refractive index of the atmosphere varies according to a variety of factors. Temperature, atmospheric pressure and water vapour pressure all influence the value. Even small changes in these variables can make a significant difference because radio signals can be refracted over whole of the signal path and this may extend for many kilometres.
Rich Morgan - KF9F
$\begingroup$ Nice answer. Here on Stack Exchange, we don't put signatures on posts — the user card automatically provided at the bottom is your signature, and you can put whatever you like in your name and profile. I've done this for you. $\endgroup$ – Mike Waters♦ Feb 27 '19 at 0:05
From Marcus' answer of Feb 23, 2019: ...Especially, emitted energy does reach beyond the horizon due to wave (as opposed to ray) propagation effects such as refraction, diffraction and reflection.
Below is an extreme example of refraction in the VHF range.
Although the line-of-sight path there is severely shadowed by Earth curvature plus several terrain obstructions, enough radiated energy may reach beyond those obstructions to provide a useful signal to a suitable receive system.
Richard Fry
\begin{document}
\preprint{APS}
\title{Narrow entanglement beats} \author{Luis Roa} \affiliation{Center for Quantum Optics and Quantum Information, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, Casilla 160-C, Concepci\'{o}n, Chile.} \author{R. Pozo-Gonz\'{a}lez} \affiliation{Departamento de F\'{\i}sica, Tecnol\'ogico de Monterrey, Monterrey 64849, Mexico.} \affiliation{Center for Quantum Optics and Quantum Information, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, Casilla 160-C, Concepci\'{o}n, Chile.} \author{Marius Schaefer} \affiliation{Center for Quantum Optics and Quantum Information, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, Casilla 160-C, Concepci\'{o}n, Chile.} \affiliation{Eidgen\"{o}ssisches Institut f\"{u}r Schnee und Lawinenforschung, Fl\"{u}elastrasse 11, CH-7260, Davos Dorf, Switzerland} \author{P. Utreras-SM} \affiliation{Center for Quantum Optics and Quantum Information, Departamento de F\'{\i}sica, Universidad de Concepci\'{o}n, Casilla 160-C, Concepci\'{o}n, Chile.}
\date{\today}
\begin{abstract} We study how the entanglement between two atoms can be created or modified even when they do not interact but when each of them interacts dispersively, i.e., weak and far from the resonance with a single mode of the field. Considering that regime we apply a method which makes use of a small nonlinear deformation of the usual $SU(2)$ algebra in order to obtain the effective Hamiltonian describing correctly the dynamics for any initial states. In particular we study two cases: In the first one we consider each atom initially in a pure state and in the second case we assume that they start in a Werner state. We find that both atoms can reach, periodically, maximum entanglement if each of them starts in any eigenstate of $\sigma_x$, independent of the initial Fock state of the mode. Thus we find that a dispersive vacuum can generate entanglement between two two-level atoms. In the second case and when the field mode is initially in a coherent or thermal state, we find that in the high energy limit, in general, there is no entanglement between the two atoms however at well defined moments the initial entanglement is as suddenly recovered as removed. This time behavior looks like narrow beats separated by the so called \textit{entanglement dead valleys}. \end{abstract}
\pacs{03.67.-a, 03.65.-w} \maketitle
\section{Introduction} In 1935 E. Schr\"{o}dinger introduced the concept of entanglement into the quantum world by means of his communication addressing the gedanken experiment known as Schr\"{o}dinger's cat \cite{Schrodinger}. In the same year A. Einstein, B. Podolsky, and N. Rosen argued the incompleteness of quantum mechanics by describing the reality they sensed \cite{Einstein}. Later, in 1964, J. S. Bell reported that a special nonlocal operator must satisfy on average an inequality which can be violated only by some nonseparable states \cite{Bell}. Today the nonlocality effect, or entanglement, is considered a resource for the manipulation of quantum information which has no classical counterpart \cite{Bennett,Zoller}. Thus, during the last two decades a major research effort has been conducted in the emerging field of quantum information theory \cite{Nielsen} based on this renewed interest in nonlocality. With this motivation, there has been a lot of interest in understanding and quantifying the entanglement of pure and mixed states \cite{Caves,Vedral,DiVincenzo,Wootters}. The entanglement of two systems can arise through direct interaction between them as well as through the coupling of the systems with a common quantum bus in the form of an auxiliary system or environment \cite{Tessier,Steinbach,Sainz,Bose,Eberly,Davidovich}.
In this work we investigate whether and how the entanglement between two atoms can be generated or modified when they interact dispersively with the same single mode of the electromagnetic field. We use the method of the small nonlinear deformation of the usual $SU(2)$ algebra \cite{Klimov,Klimov2,Gottfried}, in order to obtain an effective Hamiltonian describing the dynamics of the system.
\section{The Hamiltonian Model}
We consider two noninteracting two-level atoms, labelled by sub- or superindices $a$ and $b$, each one coupled dispersively with a single mode characterized by the frequency $\omega$. The unitary dynamics in the whole tensorial product Hilbert space, $\textsc{H}=\textsc{H}_{a}\otimes \textsc{H}_{b}\otimes\textsc{H}_{\text{mode}}$, is driven by the Hamiltonian ($\hbar =1$) in the rotating wave approximation, \begin{eqnarray} \hat{H}&=&\frac{1}{2}\omega _a\sigma_z^{(a)}+\frac{1}{2}\omega_b\sigma_z^{(b)}+\omega b^{\dagger}b \nonumber \\ &&+g_a(\sigma_+^{(a)}b+\sigma_-^{(a)}b^{\dagger}) +g_b(\sigma_+^{(b)}b+\sigma_-^{(b)}b^{\dagger}), \label{H} \end{eqnarray} where $b$ and $b^{\dagger }$ are the single-mode annihilation and creation operators,
$\sigma _{+}^{(j)}=|1\rangle _{j{}j}\langle 0|$,
$\sigma_{-}^{(j)}=|0\rangle _{j{}j}\langle 1|$, and $\sigma _{z}^{(j)}$ is the $z-$
component of the effective angular spin-half operator whose eigenstates are $\{|0\rangle _{j},|1\rangle _{j}\}$, for $j=a,b$.
Taking into account that the excitation number operator $\hat{N}=(\sigma _{z}^{(a)}+\sigma _{z}^{(b)})/2+b^{\dagger}b$ is a constant of motion, $[\hat{H},\hat{N}]=0$, the (\ref{H}) Hamiltonian can be written by $\hat{H}=\omega\hat{N}+\hat{H}_{int}$ where \begin{eqnarray} \hat{H}_{int}&=&\frac{\Delta_a}{2}\sigma_z^{(a)}+\frac{\Delta_b}{2}\sigma_z^{(b)} \nonumber \\ &&+g_a(\sigma_+^{(a)}b+\sigma_-^{(a)}b^{\dagger})+g_b(\sigma_+^{(b)}b+\sigma_-^{(b)}b^{\dagger}), \label{Hint} \end{eqnarray} with $\Delta _{a}=\omega _{a}-\omega$ and $\Delta _{b}=\omega _{b}-\omega$.
We have assumed dispersive interactions between each atom and the common single mode. In other words, those couplings are weak and far from the resonance, so we can define the small parameters: \begin{equation} \epsilon_j\equiv\frac{g_j}{\Delta_j}\ll\frac{1}{\sqrt{\langle n\rangle_T}}\ll 1,\hspace{0.5in}j=a,b, \end{equation} where $\langle n\rangle_T$ is the average photon number. Making use of the small rotation method \cite{Klimov,Klimov2} to obtain the effective Hamiltonian which approximately describes the interaction process, we can eliminate the two terms which do not represent the resonance interaction but represent rapid oscillations in the rotating frame. That can be achieved by applying to the Hamiltonian (\ref{Hint}) a small unitary transformation: \begin{equation} \hat{R}=e^{\epsilon_a(\sigma_+^{(a)}b-\sigma_-^{(a)}b^{\dagger})+\epsilon_b(\sigma_+^{(b)}b-\sigma_-^{(b)}b^{\dagger})}. \nonumber \end{equation} Considering terms up to first order in $\epsilon_a$ and $\epsilon_b$ of the Campbell-Baker-Hausdorff expansion, $\hat{R}H_{int}\hat{R}^\dagger$, we obtain the following effective Hamiltonian: \begin{eqnarray} \hat{H}_{eff}&=& \frac{\Delta_a}{2}\sigma_z^{(a)}+\frac{\Delta_b}{2}\sigma_z^{(b)} \nonumber \\ &&+(b^{\dagger}b+\frac{1}{2})\left(\frac{g_a^2}{\Delta_a}\sigma_z^{(a)}+\frac{g_b^2}{\Delta_b}\sigma_z^{(b)}\right)\nonumber \\ &&+\frac{g_ag_b}{2}(\frac{1}{\Delta_a}+\frac{1}{\Delta_b})\left(\sigma_+^{(a)}\sigma_-^{(b)}+\sigma_-^{(a)}\sigma_+^{(b)}\right). \label{Heff} \end{eqnarray} The first two terms in the above effective Hamiltonian represent the free evolution of the non-resonant atoms with renormalized transition frequencies. The third term is the so-called dynamical Stark shift, which describes an additional intensity-dependent detuning of the non-resonant atoms from the mode frequency. The last term represents an effective dipole-dipole interaction between the non-resonant atoms which appears as a consequence of the collective nature of the interaction of the non-resonant atoms with the quantized mode. We point out that this kind of effective interaction could not appear with a classical field and that the contribution of this term strongly depends on the internal resonance conditions of the non-resonant atoms.
The effective Hamiltonian (\ref{Heff}) can be diagonalized without difficulty but, for the sake of simplicity, we suppose that both atoms are identical in a way such that $g=g_a=g_b$ and $\Delta=\Delta_a=\Delta_b$. Under those conditions the unitary evolution operator is given by \begin{eqnarray}
\hat{U}&=&e^{-i[\Delta +g^2(2b^{\dagger}b+1)/\Delta]t}|1\rangle_a|1\rangle_{b\ a}\langle 1|_b\langle1| \nonumber \\
&&+e^{-ig^2t/\Delta}|\psi^+\rangle\langle\psi^+|+e^{ig^2t/\Delta }|\psi^-\rangle\langle\psi^-| \nonumber \\
&&+e^{i[\Delta+g^2(2b^{\dagger}b+1)/\Delta]t}|0\rangle_a|0\rangle_{b\ a}\langle 0|_b\langle 0|, \label{U} \end{eqnarray}
where $|\psi^{\pm}\rangle=(|0\rangle_a|1\rangle_b\pm|1\rangle_a|0\rangle_b)/\sqrt{2}$ are two Bell states.
From Eqs. (\ref{Heff}) and (\ref{U}) one realizes that the dressed states $|1\rangle_a|1\rangle_b|n\rangle$, $|0\rangle_a|0\rangle_b|n\rangle$, and $|\psi^{\pm}\rangle|n\rangle$ ($n=0,1,2,\dots$) are stationary. Thus, independently of the initial mode state, the $a$-$b$ bipartite system does not evolve when starting in one of the states $|1\rangle_a|1\rangle_b$,
$|0\rangle_a|0\rangle_b$, or $|\psi^{\pm}\rangle$, and hence preserves the initial entanglement.
\section{Bipartite Entanglement}
The entanglement between two systems in an overall pure state is given by the entropy of either subsystem. It can also be evaluated by the concurrence $C(|\psi\rangle)\equiv|\langle\psi |\sigma_y\otimes\sigma_y|\psi^*\rangle|$, where the asterisk denotes complex conjugation of the probability amplitudes in the $\sigma_z\otimes\sigma_z$-representation, i.e., in the basis $\{|0\rangle|0\rangle,|0\rangle|1\rangle,|1\rangle|0\rangle,|1\rangle|1\rangle\}$ \cite{Caves,Wootters}. The generalization of the concurrence to a mixed state $\rho$ of two atoms is defined as the infimum of the average concurrence over all possible pure-state ensemble decompositions $s_i=\{p_i,|\psi_i\rangle\}$ of $\rho$, i.e., convex combinations of pure states such that $\rho=\sum_ip_i|\psi_i\rangle\langle\psi_i|$. In this way, $C(\rho)=\inf_{s_i}\sum_ip_iC(|\psi_i\rangle)$. Wootters succeeded in deriving an analytic solution to this difficult minimization procedure in terms of the eigenvalues $\lambda_i$ of the non-Hermitian operator $\rho\sigma_y\otimes\sigma_y\rho^*\sigma_y\otimes\sigma_y$, where the asterisk again denotes complex conjugation of the elements of $\rho$ in the $\sigma_z\otimes\sigma_z$-representation. The closed-form solution for the concurrence of a mixed state of two atoms is given by $C(\rho)=\max\{0,\sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4}\}$, where the $\lambda_i$ are in decreasing order \cite{Wootters}.
First we study the entanglement of formation between the two atoms when each of them is initially in a pure state. After that we consider the two atoms in a mixed state. In both cases we study how the entanglement evolves as a function of time and of the mode energy, under the weak and dispersive interaction (\ref{Heff}). We consider the initial mode state to be a Fock state $|n\rangle$, a coherent state $|\alpha\rangle$, and a thermal state $\rho_T=\sum_n \langle n\rangle_T^n/(1+\langle n\rangle_T)^{n+1}|n\rangle\langle n|$, where $\langle n\rangle_T=1/(e^{\omega/k_BT}-1)$ is the average photon number of the mode, $T$ the absolute temperature and $k_B$ the Boltzmann constant ($\hbar=1$).
\subsection{Initial pure state}
Here we consider each atom to be initially in a pure state, i.e., qubit $a$ being in $|\psi\rangle_a$ and qubit $b$ in $|\varphi\rangle_b$. When the single mode is initially in a Fock state $|n\rangle$, the two-atom system does not entangle with it at any time, and the atoms evolve to the following pure state: \begin{eqnarray}
|\phi\rangle &=&\langle 0|\psi\rangle\langle 0|\varphi\rangle e^{i[\Delta+\frac{g^2}{\Delta}(2n+1)]t}|0\rangle_a|0\rangle_b
+|L\rangle \nonumber \\
&&+\langle 1|\psi\rangle\langle 1|\varphi\rangle e^{-i[\Delta+\frac{g^2}{\Delta}(2n+1)]t}|1\rangle_a|1\rangle_b, \nonumber \end{eqnarray}
where $|L\rangle =L_0|0\rangle_a|1\rangle_b+L_1|1\rangle_a|0\rangle_b$ is an unnormalized state in the subspace spanned by $\{|0\rangle_a|1\rangle_b,|1\rangle_a|0\rangle_b\}$. Here we have defined the functions: \begin{eqnarray}
L_0&=&\langle 0|\psi\rangle\langle 1|\varphi\rangle\cos\frac{g^{2}t}{\Delta}-i\langle 1|\psi\rangle\langle 0|\varphi\rangle\sin\frac{g^{2}t}{\Delta}, \nonumber \\
L_1&=&\langle 1|\psi\rangle\langle 0|\varphi\rangle\cos\frac{g^{2}t}{\Delta}-i\langle 0|\psi\rangle\langle 1|\varphi\rangle\sin\frac{g^{2}t}{\Delta}. \nonumber \end{eqnarray} Thus, the concurrence of the above pure state is read as follows \begin{equation}
C(|\phi \rangle )=2\left\vert L_{0}L_{1}-\langle 0|\psi \rangle \langle 0|\varphi \rangle
\langle 1|\psi \rangle \langle 1|\varphi \rangle \right\vert. \label{c1} \end{equation}
From Eq. (\ref{c1}) we can see that the concurrence: i) does not depend on the photon number $n$, and ii) reaches the maximum value $1$ at $t=\pi\Delta/(2g^2)$, and hence periodically thereafter, provided that both atoms start in an eigenstate of $\sigma_x$ or, more generally, in a state of the form $|\theta\rangle=(|0\rangle+e^{i\theta}|1\rangle)/\sqrt{2}$ with $\theta$ real. Therefore two atoms starting in any $|\theta\rangle$ state reach maximum entanglement when they interact dispersively even with a common vacuum. In other words, the vacuum can generate entanglement between two two-level atoms even when they are far from resonance.
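For instance, if both atoms are prepared in the $\sigma_x$ eigenstate $|\theta=0\rangle$, all four amplitudes equal $1/\sqrt{2}$, so that $L_0=L_1=e^{-ig^{2}t/\Delta}/2$ and Eq. (\ref{c1}) reduces to \[ C(|\phi\rangle)=\frac{1}{2}\left|e^{-2ig^{2}t/\Delta}-1\right|=\left|\sin\frac{g^{2}t}{\Delta}\right|, \] which indeed attains unity at $t=\pi\Delta/(2g^{2})$ and vanishes whenever $g^{2}t/\Delta$ is an integer multiple of $\pi$.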
When the single mode is initially in a coherent state $|\alpha\rangle$, the reduced density operator of the two atoms-system becomes, \begin{widetext} \begin{eqnarray}
\rho&=&|\langle 1|\psi\rangle\langle 1|\varphi\rangle|^2|1\rangle_a|1\rangle_{ba}\langle 1|_b\langle 1|+|L\rangle\langle L|
+|\langle 0|\psi\rangle\langle 0|\varphi\rangle|^2|0\rangle_a|0\rangle_{ba}\langle 0|_b\langle 0| \nonumber \\
&&+\langle 0|\psi\rangle\langle 0|\varphi\rangle e^{i(\Delta+g^{2}/\Delta)t}e^{-|\alpha|^{2}(1-e^{2ig^{2}t/\Delta})}|0\rangle_a|0\rangle_b\langle L| \nonumber \\
&&+\langle 0|\psi\rangle\langle\psi|1\rangle\langle 0|\varphi\rangle\langle\varphi|1\rangle e^{2i(\Delta+g^{2}/\Delta)t}e^{-|\alpha|^{2}(1-e^{4ig^{2}t/\Delta})}
|0\rangle_a|0\rangle_{ba}\langle1|_b\langle 1| \nonumber \\
&&+\langle\psi|0\rangle\langle\varphi|0\rangle e^{-i(\Delta+g^{2}/\Delta)t}e^{-|\alpha|^{2}(1-e^{-2ig^{2}t/\Delta})}|L\rangle_a\langle 0|_b\langle 0| \nonumber \\
&&+\langle\psi|1\rangle\langle\varphi|1\rangle e^{i(\Delta+g^{2}/\Delta )t}e^{-|\alpha|^{2}(1-e^{2ig^{2}t/\Delta})}|L\rangle_a\langle 1|_b\langle 1| \nonumber \\
&&+\langle\psi|0\rangle\langle 1|\psi\rangle\langle\varphi|0\rangle\langle 1|\varphi\rangle e^{-2i(\Delta +g^{2}/\Delta )t}e^{-|\alpha|^{2}(1-e^{-4ig^{2}t/\Delta})}
|1\rangle_a|1\rangle_{ba}\langle 0|_b\langle 0| \nonumber \\
&&+\langle 1|\psi\rangle\langle 1|\varphi\rangle e^{-i(\Delta+g^{2}/\Delta)t}e^{-|\alpha|^{2}(1-e^{-2ig^{2}t/\Delta})}|1\rangle_a|1\rangle_b\langle L|, \label{rhota} \end{eqnarray} \end{widetext}
From this expression, Eq. (\ref{rhota}), we see that only the last six off-diagonal terms depend on $|\alpha|$ and that they, in general, vanish for $|\alpha|\gg1$. However, at times $t=t_k=\pi k\Delta/g^2$ ($k=1,2,\dots$) those terms suddenly reappear and become independent of the intensity $|\alpha|$. Since at times $t_k$ the state described by Eq. (\ref{rhota}) is equal to the state obtained for $\alpha=0$, that is, with an initial vacuum state, at those times the concurrence is given by Eq. (\ref{c1}) evaluated at $t_k$, which is zero. We can also see that when atom $a$ starts in the $|0\rangle$ state and atom $b$ begins in the $|1\rangle$ state, or vice versa, the concurrence of the mixed state (\ref{rhota}) is given by $|\sin(2g^2t/\Delta)|$, reaching the maximum value $1$ at $t=\pi\Delta/(4g^2)$ and repeating it periodically. On the other hand, in the high intensity regime, $|\alpha|\gg1$, the reduced density operator at any $t\neq t_k$ is given by the first three diagonal terms of Eq. (\ref{rhota}) and its concurrence reads as follows: \begin{equation}
C(\rho)=\max\{0,2(|L_0L_1|-|\langle 0|\psi\rangle\langle 0|\varphi\rangle\langle 1|\psi\rangle\langle 1|\varphi\rangle|)\}. \label{c2} \end{equation}
In this regime and when each atom begins in any $|\theta\rangle$ state, the entanglement between them is always zero.
\begin{figure}
\caption{Concurrence as a function of the dimensionless time $\tau/\pi$ and of the average photon number of the single mode when the initial mode state is: (a) a coherent state and (b) a thermal state. In both cases each atom starts in the $|\theta=0\rangle$ state. We have considered $g/\Delta=0.01$. White color means zero entanglement of formation whereas black color stands for the maximum entanglement value 1.}
\label{figure1}
\end{figure}
When the initial mode state is a thermal state at absolute temperature $T$, the reduced density operator of the two atoms, at time $t$ becomes: \begin{widetext} \begin{eqnarray} \rho
&=&|\langle 1|\psi \rangle\langle 1|\varphi \rangle |^{2}|1\rangle_{a}|1\rangle _{ba}\langle 1|_{b}\langle 1|+|L\rangle \langle L|
+|\langle 0|\psi \rangle\langle 0|\varphi \rangle |^{2}|0\rangle _{a}|0\rangle_{ba}\langle 0|_{b}\langle 0| \nonumber \\
&&+\frac{e^{i(\Delta +\frac{g^{2}}{\Delta })t}}{1+\langle n\rangle_T(1-e^{i\frac{2g^{2}t}{\Delta }})}(\langle 0|\psi \rangle \langle 0|\varphi \rangle |0\rangle_{a}|0\rangle _{b}\langle L|
+\langle \psi |1\rangle \langle \varphi|1\rangle |L\rangle _{a}\langle 1|_{b}\langle 1|) \nonumber \\
&&+\frac{e^{-i(\Delta +\frac{g^{2}}{\Delta })t}}{1+\langle n\rangle_T(1-e^{-i\frac{2g^{2}t}{\Delta }})}(\langle 1|\psi \rangle \langle 1|\varphi \rangle
|1\rangle_{a}|1\rangle _{b}\langle L|+\langle \psi |0\rangle \langle\varphi |0\rangle |L\rangle _{a}\langle 0|_{b}\langle 0|) \nonumber \\
&&+\frac{\langle 0|\psi \rangle \langle 0|\varphi \rangle \langle \psi|1\rangle \langle \varphi |1\rangle e^{2i(\Delta +\frac{g^{2}}{\Delta })t}}
{1+\langle n\rangle_T(1-e^{i\frac{4g^{2}t}{\Delta }})}|0\rangle _{a}|0\rangle_{ba}\langle 1|_{b}\langle 1| \nonumber \\
&&+\frac{\langle 1|\psi \rangle \langle 1|\varphi \rangle \langle \psi|0\rangle \langle \varphi |0\rangle e^{-2i(\Delta +\frac{g^{2}}{\Delta })t}}
{1+\langle n\rangle_T(1-e^{-i\frac{4g^{2}t}{\Delta }})}|1\rangle _{a}|1\rangle_{ba}\langle 0|_{b}\langle 0|. \label{rhoTER} \end{eqnarray} \end{widetext} Once again we find that at high intensity, i.e., at high temperature, $\langle n\rangle_T\gg 1$, the last six non diagonal terms vanish and reappear suddenly at times $t=t_k$. Like the previous case at those $t_k$ times the concurrence is zero and at any $t\neq t_k$ the concurrence is given by Eq. (\ref{c2}).
Figure \ref{figure1} shows a linear black-to-white gradation of the concurrence of the mixed states given by (a) Eq. (\ref{rhota}) and (b) Eq. (\ref{rhoTER}), as a function of the dimensionless time $\tau=2g^2t/\Delta$ and of the average photon number of the single mode. White color means zero entanglement of formation whereas black color stands for the maximum entanglement value 1. In both panels of Fig. \ref{figure1} we considered both qubits starting in the $|\theta=0\rangle$ state.
From Fig. \ref{figure1} we see that maximal entanglement arises periodically in the low energy regime. This effect is reminiscent of the entanglement generated by the dispersive vacuum state. Those maximal entanglement zones are separated by narrow \textit{entanglement dead valleys} (EDV) \cite{Eberly}.
\subsection{Initial mixed state}
Now we study the case when both atoms are initially in a Werner-type state \cite{Werner,Miranowicz}: \begin{equation}
\rho(0)=\frac{1-\gamma}{4}I+\gamma|X\rangle\langle X|, \label{W} \end{equation}
with $I$ being the identity of the two atoms Hilbert space, $|X\rangle$ being one of the four Bell states \cite{Miranowicz}, and $0\leq\gamma\leq 1$ a physical parameter.
One can easily prove that when the single mode starts in a Fock state $|n\rangle$ the concurrence does not change, remaining at $(3\gamma-1)/2$ for $\gamma\geq1/3$ and at zero otherwise.
However, when the field starts in a coherent state $|\alpha\rangle$ the reduced density operator of the two-qubit system at time $t$ becomes \begin{eqnarray}
\rho&=&\frac{1-\gamma}{4}I+\frac{\gamma}{2}(|0\rangle_a|0\rangle_{ba}\langle 0|_b\langle 0|+|1\rangle_a|1\rangle_{ba}\langle 1|_b\langle 1|\nonumber \\
&&\pm e^{2i\Omega t}e^{-|\alpha|^2(1-e^{\frac{4ig^{2}t}{\Delta}})}|0\rangle_a|0\rangle_{ba}\langle1|_b\langle 1| \nonumber \\
&&\pm e^{-2i\Omega t}e^{-|\alpha|^2(1-e^{-\frac{4ig^{2}t}{\Delta}})}|1\rangle_a|1\rangle_{ba}\langle0|_b\langle 0|), \label{rhota2} \end{eqnarray}
where we have considered $|X\rangle$ to be one of the two Bell states $|\phi^\pm\rangle=(|0\rangle_a|0\rangle_b\pm|1\rangle_a|1\rangle_b)/\sqrt{2}$ and defined $\Omega=\Delta+g^2/\Delta$. The concurrence of the bipartite mixed state (\ref{rhota2}) is given by \begin{equation}
C(\rho)=\frac{\left( 1+2e^{-2|\alpha |^{2}\sin ^{2}\frac{2g^{2}t}{\Delta }}\right)\gamma -1}{2}, \label{Ca} \end{equation}
for $\gamma \geq 1/(1+2e^{-2|\alpha |^{2}\sin ^{2}\frac{2g^{2}t}{\Delta }})$, and is zero otherwise. Clearly, for a high-intensity coherent state there is in general no entanglement between the two atoms; however, at each time $t=t_k/2$ the initial entanglement amount, $\max\{0,(3\gamma-1)/2\}$, is suddenly recovered, independently of the intensity of the coherent state. When one considers the other two Bell states $|X\rangle=|\psi^\pm\rangle=(|0\rangle_a|1\rangle_b\pm|1\rangle_a|0\rangle_b)/\sqrt{2}$, the density operator (\ref{W}) does not evolve.
\begin{figure}
\caption{Concurrence as a function of the dimensionless time $\tau/\pi$ and of the average photon number of the single mode when the initial mode state is: (a) a coherent state and (b) a thermal state. In both cases the two qubits start in a Werner state. White color means zero entanglement of formation whereas black color stands for the maximum entanglement value $(3\gamma-1)/2=8/11$ with $\gamma=9/11$.}
\label{figure2}
\end{figure}
On the other hand, when the mode is initially in thermodynamic equilibrium at absolute temperature $T$ the concurrence becomes: \begin{equation}
C(\rho) =\max\{0,\frac{\left(1+\frac{2}{|1+\langle n\rangle_T(1-e^{-4ig^{2}t/\Delta })|}\right) \gamma -1}{2}\}. \label{c3} \end{equation} Once again we find the \textit{entanglement-beats} effect at high intensity or, equivalently, in the high temperature regime; that is, the initial entanglement amount is suddenly recovered just at each time $t=t_k/2$. We will call this effect \textit{E-beats}.
We can also see from Eqs. (\ref{Ca}) and (\ref{c3}) that when the field mode is initially in the vacuum state, $|\alpha|^2=\langle n\rangle_T=0$, the entanglement does not change. The expression (\ref{c3}) was calculated considering the state $|X\rangle=|\phi^\pm\rangle$ in Eq. (\ref{W}). When $|X\rangle=|\psi^\pm\rangle$ the density operator (\ref{W}) does not evolve.
Figure \ref{figure2} shows a linear black-to-white gradation of the concurrences given by (a) Eq. (\ref{Ca}) and (b) Eq. (\ref{c3}), as a function of the dimensionless time $\tau=2g^2t/\Delta$ and of the average photon number of the single mode. White color means zero entanglement of formation whereas black color stands for the maximum entanglement value $(3\gamma-1)/2=8/11$. In both panels of Fig. \ref{figure2} we considered $\gamma=9/11$. From Fig. \ref{figure2} we see that the \textit{E-beats} effect becomes apparent even for $\langle n\rangle_T=|\alpha|^2 \approx 3$. These E-beats are separated by EDVs \cite{Eberly}.
\section{Conclusions}
In summary, we have studied the dynamics of the entanglement between two noninteracting two-level atoms weakly coupled and far from resonance with the same single-mode field. We find that a dispersive vacuum can generate maximum entanglement between them when there is a single photon to share. We emphasize that in the dispersive regime the atomic energy is not exchanged with the single mode, so the single mode acts only as the mediator of the effective interaction between the two two-level atoms. This effect cannot be generated by a classical field, because classical fields cannot couple the two atoms at any intensity. When the atoms are initially in a Werner-type state, the entanglement is in general zero at high energy, but the so-called \textit{E-beats} effect takes place and the narrow beats are separated by the EDVs \cite{Eberly}. In other words, in that regime the initial entanglement amount is periodically recovered in a sudden manner, only for short moments separated by the time scale $\pi\Delta/g^2$. The width of an E-beat is inversely proportional to the energy of the single mode. Moreover, for that atomic initial condition the entanglement does not change when the single mode is initially in the vacuum state. However, we have already seen that the vacuum initial state has an important effect when each atom starts in a pure state.
A physical implementation of this Hamiltonian interaction between two two-level systems and a single mode can be performed with two quantum dots interacting with a boson mode \cite{Krummheuer}. Another physical implementation could be realized considering the Zeeman level structure of a cold $^{138}$Ba$^+$ ion moving in a linear Paul trap \cite{Raizen} in a standing-wave configuration \cite{Cirac}. Spontaneous emission is suppressed by using as a qubit the $S_{1/2}$ ground and the $D_{5/2}$ upper metastable states \cite{Blatt,Kli}. The lifetime of those metastable states of Ba$^+$ is about $45$ s. The motion of the ions can be described in terms of the normal center-of-mass mode. The required dispersive interaction between two ions and the same center-of-mass mode can always be simulated.
\begin{acknowledgments} The authors thank M. L. Ladr\'{o}n de Guevara, P. Toschek, and P. Zoller for valuable comments. R. P.-G. thanks G. A. Olivares-Renter\'{\i}a. This work was supported by Grants Milenio ICM P02-49F and FONDECyT No. 1030671.
\end{acknowledgments}
\end{document} | arXiv |
Predicting the provisioning potential of forest ecosystem services using airborne laser scanning data and forest resource maps
Jari Vauhkonen
Forest Ecosystems (2018) 5:24
Accepted: 24 May 2018
Remote sensing-based mapping of forest Ecosystem Service (ES) indicators has become increasingly popular. The resulting maps may enable to spatially assess the provisioning potential of ESs and prioritize the land use in subsequent decision analyses. However, the mapping is often based on readily available data, such as land cover maps and other publicly available databases, and ignoring the related uncertainties.
This study tested the potential to improve the robustness of the decisions by means of local model fitting and uncertainty analysis. The quality of forest land use prioritization was evaluated under two different decision support models: either using the developed models deterministically or in corporation with the uncertainties of the models.
Prediction models based on Airborne Laser Scanning (ALS) data explained the variation in proxies of the suitability of forest plots for maintaining biodiversity, producing timber, storing carbon, or providing recreational uses (berry picking and visual amenity) with RMSEs of 15%–30%, depending on the ES. The RMSEs of the ALS-based predictions were 47%–97% of those derived from forest resource maps with a similar resolution. Due to applying a similar field calibration step on both of the data sources, the difference can be attributed to the better ability of ALS to explain the variation in the ES proxies.
Despite the different accuracies, proxy values predicted by both the data sources could be used for a pixel-based prioritization of land use at a resolution of 250 m2, i.e., in a considerably more detailed scale than required by current operational forest management. The uncertainty analysis indicated that maps of the ES provisioning potential should be prepared separately based on expected and extreme outcomes of the ES proxy models to fully describe the production possibilities of the landscape under the uncertainties in the models.
Forestry decision making
Spatial prioritization
Light detection and ranging (LiDAR)
Forestry decision making requires evaluating potential management alternatives with respect to multiple objectives (Kangas et al. 2008). A fundamental decision is related to which goods and services to produce: in addition to conventional timber production, the management objectives may be related to maintaining habitats, providing recreational and aesthetic opportunities, and carbon storage or sequestration (e.g. Pukkala 2016). These goods and services are jointly called "multiple uses" (Kangas 1992) or, following Costanza et al. (1997), Daily et al. (1997) and many others, "ecosystem services" of forest. In the following text, I use ESs to abbreviate "Ecosystem Services", referring most essentially to indicators of forest-related ESs that can be derived from Remote Sensing (RS) or other digital map data as indirect proxies (Andrew et al. 2014). The mapping of these proxies allows spatial prioritization and other spatially explicit analyses of multiple ESs at various scales (e.g. Schröter et al. 2014; Räsänen et al. 2015; Sani et al. 2016; Roces-Díaz et al. 2017). According to reviews (Martínez-Harms and Balvanera, 2012; Englund et al. 2017) and a collection of case studies (Barredo et al. 2015), however, such analyses can be expected to suffer from the lack of standardized terminology, methodology and data. Increased attention should especially be focused on quantifying and communicating the resulting uncertainties to the decision makers in order to make informed decisions (see also Eigenbrod et al. 2010; Schulp et al. 2014; Foody 2015). Accounting for these aspects, the present study examines the robustness of forest land-use prioritization based on maps of the provisioning potential of forest ESs (Vauhkonen and Ruotsalainen 2017a), i.e., the fitness of forest patches to provide goods and services typical to the ESs occurring in the studied area, re-considering the methodological and data workflow proposed in the earlier study.
To result in valid conclusions from RS-based decision analyses, the estimates should be accurate already at the level of individual pixels. The use of active RS such as Light Detection and Ranging (LiDAR) is expected to produce more accurate information compared to passive, optical RS (Lefsky et al. 2001; Coops et al. 2004; Maltamo et al. 2006), especially, when using small pixels (e.g., 200 m2 as in Næsset 2002). Forest structure and habitat related inventories in particular benefit from the ability of LiDAR to provide three-dimensional information, when operated as Airborne Laser Scanning (ALS; Maltamo et al. 2014). Kankare et al. (2015) evaluated the estimation accuracy of biomass attributes based on two different RS setups in an area closely resembling to that presently studied. According to their results, pixel-level predictions based on coarse to medium resolution satellite imagery had a Root Mean Squared Error (RMSE) of 47.7% of the total biomass, which could be reduced to 25.7% using ALS and local field reference data. The ALS data used were acquired by the land survey, and the availability of such data is increasing due to large-area acquisitions for terrain elevation modelling. Such data have also been used to map attributes related to habitat (Melin et al. 2013, 2016; Vauhkonen and Imponen 2016), structural (Valbuena et al. 2016b; Vauhkonen and Imponen 2016) and aesthetic (Vauhkonen and Ruotsalainen 2017b) properties of the forest.
Overall, when various forest ESs are categorized according to a typology such as the Common International Classification of Ecosystem Services (CICES) as in Englund et al. (2017), the potential of ALS for assessing the suitability of forest areas to provide these ESs can be characterized as:
Regulation and maintenance services: A very high number of studies indicates that the vegetation height and density profiles produced by ALS are useful for a detailed quantification of variations in above-ground biomass (Næsset and Gobakken 2008; Zolkos et al. 2013; Popescu and Hauglin 2014) and, thus, carbon storage (Patenaude et al. 2004). Essentially, ALS produces a three-dimensional description of the forest structure, which can be related to ecological properties such as habitat types (Bässler et al. 2011) or biological diversity in general (Müller and Vierling 2014) and employed to assess suitability of forests to be maintained as habitats for different species (Davies and Asner 2014; Hill et al. 2014; Simonson et al. 2014).
Provisioning services: Several studies carried out especially in boreal forest structures indicate ALS data useful for assessing properties related to wood production. Except that the methods listed in the previous paragraphs can be directly used to assess the production potential of bulk biomass, also more detailed predictions of timber assortments (Korhonen et al. 2008; Kotamaa et al. 2010; Vauhkonen et al. 2014; Hou et al. 2016) or wood fiber-related attributes (Hilker et al. 2013; Luther et al. 2014) are possible. Although the yield studies are mostly related to wood-based biomass, there also are examples of improved assessments of the yield of shrub fruits (Barber et al. 2016) or edible fungi (Peura et al. 2016) based on ALS.
Cultural services: The applicability of ALS highly depends on the cultural service of interest. For example, several archaeological studies indicate the potential to improve the mapping of historical remains in the forest using an ALS-based digital terrain model. Similar techniques to visualize the terrain (Domingo-Santos et al. 2011) or trees (Lämås et al. 2015) could potentially be used to assess the aesthetic properties of the forest. To date, the study of Vauhkonen and Ruotsalainen (2017b), which assessed the preferences on the visual amenity of a forest area based on cuttings simulated to triangulated vegetation point clouds, appears to be the only ALS-based attempt towards this direction.
The use of ALS can thus be motivated by the potential to obtain a better correspondence with forest biophysical attributes and these data may be available for some areas in a similar extent as land cover maps and other publicly available data. Despite the high potential, however, also ALS-based information may yield a high degree of uncertainties, if applied in expert models formulated according to conventionally measured field attributes. For example, the suitability index proposed by Pukkala et al. (2012) to map potential habitats of Siberian jay (Perisoreus infaustus L.) would require estimating the availability of Vaccinium myrtillus (L.) berries and epiphytic lichens for food and nests. Although sub-models to estimate these attributes are presented (Pukkala et al. 2012), also those include stand age and site fertility, which are difficult to estimate by ALS. Although some researchers have predicted even understorey-related attributes, the results of Korpela et al. (2012) indicate that direct measures are difficult to obtain due to transmission losses occurring in the upper canopy (see also Maltamo et al. 2005) and such estimations would be even more unreliable based on passive optical RS methods. Even the recognition of dominant tree species may be challenging in ALS-based inventories: despite promising results based solely on ALS (Ørka et al. 2013; Vauhkonen et al. 2014), the results of Räty et al. (2016) suggest difficulties in detecting species, which dominate a minor proportion of an area otherwise homogeneous in terms of the species.
On the other hand, ALS may allow producing other attributes with more relevance from the forest management point of view. For example, forests with multilayered vertical structure can be distinguished based on the data (Zimble et al. 2003; Maltamo et al. 2005), which can be further employed in detecting the prevailing silvicultural system (Bottalico et al. 2014), management intensity (Sverdrup-Thygeson et al. 2016; Valbuena et al. 2016a), or development stage (Valbuena et al. 2016b). Even more detailed indices may be developed based on ecological rationale (Listopad et al. 2015) or a thorough understanding of the properties affecting the ALS response (Valbuena et al. 2013, 2014). Earlier studies have suggested that the information in the ALS data may be condensed to a few metrics (Kane et al. 2010; Leiterer et al. 2015; Valbuena et al. 2017), the partitioning of which will provide a stratification corresponding closely to the structural complexity observed in the field (Pascual et al. 2008; Thompson et al. 2016; Vauhkonen and Imponen 2016).
Even though properties related to individual ESs have been actively studied, no studies that show how to support management decisions related to the provisioning of multiple forest ESs based on three-dimensional forest structure description obtained by ALS can currently be found from the literature. Barbosa and Asner (2017) and Rechsteiner et al. (2017) derived information from ALS data to prioritize landscapes for ecological restoration and species conservation planning, respectively. Packalén et al. (2011) used ALS data and spatial optimization to derive so called dynamic treatment units to guide the management of pulpwood production in a plantation forest. Although a similar approach could be extended to the decision making of other or multiple ESs (Pukkala et al. 2014), all ALS-based applications are, to date, focused on single ESs.
The purpose of this study is to test ALS data for management prioritization of multiple ESs in a boreal forest landscape. Proxies for pixel-wise provisioning potential of biodiversity, carbon, timber, berries, and recreational amenities were formulated using ALS-based features and compared to information obtained from forest resource maps with a resolution of 16 × 16 m2. The quality of land use prioritization based on the obtained information was evaluated under two different decision support models: either using the developed models deterministically or in corporation with the uncertainties of the models.
A methodological overview
Specifically, the ALS data are tested for predicting the provisioning potential of ESs (Vauhkonen and Ruotsalainen 2017a) in a spatial prioritization framework, where land use decisions are based on ranking the set of decision alternatives in the considered location(s) and choosing the best according to the decision makers' preferences (cf., Malczewski and Rinner 2015). When applied to prioritize forests for single (e.g. Lehtomäki et al. 2015) or multiple uses (e.g. Vauhkonen and Ruotsalainen 2017a) based on ES proxy maps, a simplified workflow for such analyses includes three methodological steps:
Data acquisition, feature extraction and/or expert modelling to derive proxy values for the analyzed ESs.
Scaling and normalization of the proxy values derived from different sources to the same scale. The resulting values can be called 'priority', 'benefit', or 'utility' value and used in different ways depending on the literature source (see also Pukkala 2008; Pukkala et al. 2014; Malczewski and Rinner 2015).
Decision analyses using the normalized data at selected spatial scale(s).
Because the normalized proxy maps resulting from the previous steps 'measure' the ESs on the same scale and account for the value range of each ES in the entire landscape, they can be used (a) to mutually rank ESs within a spatial unit to subsequently prioritize management to provide the most suitable ESs in each unit; and (b) to identify the most important locations of specific ESs in the landscape to be considered as management hot-spots or cold-spots. Because the spatial prioritization is carried out at a sub-stand level using pixels or other corresponding map units, it is expected to allow a more efficient use of the production possibilities of the forest (Heinonen et al. 2007) and, overall, to operationalize the concept of ESs for landscape planning, which is further motivated by de Groot et al. (2010).
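As a minimal illustration of these two uses, the sketch below applies them to random numbers standing in for actual normalized proxy maps; the array shape and the service names are chosen only for the example:

```python
import numpy as np

services = ["biodiversity", "timber", "carbon", "bilberry", "cowberry", "amenity"]
rng = np.random.default_rng(0)
priority = rng.random((len(services), 100, 100))   # (ES, rows, cols), values in [0, 1]

# (a) prioritize each pixel to the ES with the highest normalized value
best_es = priority.argmax(axis=0)                  # index of the winning ES per pixel

# (b) hot spots of a single ES, e.g. the top 10% of pixels for biodiversity
bio = priority[services.index("biodiversity")]
hotspots = bio >= np.quantile(bio, 0.9)

print(np.bincount(best_es.ravel(), minlength=len(services)))  # pixels assigned per ES
print(hotspots.sum(), "biodiversity hot-spot pixels")
```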
The present study examines whether changes to each of the three steps listed above could improve pixel-wise analyses of the provisioning potential of forest ESs (cf. the discussion section of Vauhkonen and Ruotsalainen 2017a):
1) What data to use for the expert models of the provisioning potential: A consolidated approach to obtain grid-based, wall-to-wall predictions for the tessellated landscapes would be to use forest resource maps based on generalizing field sample plot measurements to larger areas using coarse to medium resolution RS images and other numeric map data (Tomppo et al. 2008a, 2008b, 2014). This approach, referred to as Multi-Source National Forest Inventory (MS-NFI), was used by Vauhkonen and Ruotsalainen (2017a). Even if ALS allows more prediction possibilities, as reviewed above, it is practically reasoned to benchmark the accuracies against the pixel data provided by the MS-NFI approach, because different forest resource maps are readily available in many countries (Tomppo et al. 2008b, Roces-Díaz et al. 2017; Vauhkonen and Ruotsalainen, 2017a).
2) How to scale the ESs originally measured in different units for the joint analyses: Vauhkonen and Ruotsalainen (2017a) used a simple normalization to convert the ES values between 0 and 1:
$$ {v}_{ij}=\frac{n_{ij}}{N}, $$
where $v_{ij}$ is the normalized value and $n_{ij}$ is the position of the $j$:th plot in ascending order of the expert model values for the $i$:th ecosystem service among altogether $N$ plots. Notably, this normalization produced values in an interval scale, whereas the ratios between the expert model values could also be assumed useful for the priority ranking. An alternative, ratio-scale normalization could be computed as:
$$ {v}_{ij}=\frac{ES_{ij}-\min \left({ES}_i\right)}{\mathit{\max}\left({ES}_i\right)-\mathit{\min}\left({ES}_i\right)}, $$
where $v_{ij}$ is the value (or priority or benefit or utility, depending on literature source; see above) produced by the $i$:th ES in plot $j$.
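In code, the two normalizations amount to a rank transform (Eq. 1) and a min-max rescaling (Eq. 2); a small numpy sketch with arbitrary proxy values (ties are ignored for simplicity):

```python
import numpy as np

def rank_normalize(es_values: np.ndarray) -> np.ndarray:
    """Eq. 1: interval-scale priority from the ascending rank of each plot."""
    ranks = es_values.argsort().argsort() + 1      # positions 1..N in ascending order
    return ranks / es_values.size

def minmax_normalize(es_values: np.ndarray) -> np.ndarray:
    """Eq. 2: ratio-scale priority preserving the ratios between model values."""
    lo, hi = es_values.min(), es_values.max()
    return (es_values - lo) / (hi - lo)

es = np.array([2.0, 8.0, 3.0, 10.0, 7.0])    # toy proxy values for five plots
print(rank_normalize(es))     # [0.2   0.8   0.4   1.    0.6  ]
print(minmax_normalize(es))   # [0.    0.75  0.125 1.    0.625]
```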
3) How to use the obtained information in decision analyses: Vauhkonen and Ruotsalainen (2017a) deterministically prioritized each pixel to the ES with the highest predicted proxy value, but highlighted the need to consider uncertainties around the predictions. If a quantification of the uncertainties is obtained (e.g., by approximating residual errors of calibration models fitted to the data), the decision analyses can consider distributions of uncertainty in addition to the expected values and produce separate recommendations for different decision makers according to their attitudes towards risk (Pukkala and Kangas 1996). Therefore, in addition to deterministic use of the predicted values, this study considered both the expected and extreme outcomes of the predictions when selecting the most suitable ES for a pixel. The principal idea of this analysis is illustrated in Fig. 1.
A generic example of selecting the best decision alternative based on different outcomes of model predictions (colored curves). The yellow curve yields the highest priority value based on the expected (upper horizontal line) or worst outcome of the model. However, if the decision maker weights best possible outcomes, the alternative depicted by the grey curve should be selected as it produces the highest priority in the right tail accumulation point (the interception of the curve and the lower horizontal line)
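The selection logic of Fig. 1 can be sketched in a few lines. The predicted priority values and error magnitudes below are invented solely to show how the preferred alternative changes with the decision maker's attitude towards risk, assuming normally distributed prediction errors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pixel: predicted priority value and residual error for three ESs.
predicted = {"timber": 0.62, "carbon": 0.58, "biodiversity": 0.55}
rmse = {"timber": 0.15, "carbon": 0.05, "biodiversity": 0.25}

draws = {es: rng.normal(predicted[es], rmse[es], 10_000) for es in predicted}

def pick(quantile=None):
    """Select the ES with the best expected value, or best value at a quantile."""
    if quantile is None:
        return max(draws, key=lambda es: draws[es].mean())
    return max(draws, key=lambda es: np.quantile(draws[es], quantile))

print(pick())        # risk-neutral: highest expected priority -> timber
print(pick(0.05))    # pessimistic: best worst-case (5th percentile) -> carbon
print(pick(0.95))    # optimistic: best best-case (95th percentile) -> biodiversity
```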
Against this background, the present study tested the data source (ALS or MS-NFI), the priority value function form (Eq. 1 or 2), the uncertainty management approach, and the joint implications of these choices for the predictions of the provisioning potential of forest ESs and the subsequent management prioritization decisions. The forest ESs considered were selected based on two criteria: likelihood to occur in the studied landscape and existence of expert models to derive proxies for their provisioning potential based on the field measurements (Table 1). The field and MS-NFI data contained estimates of forest attributes that could be directly inserted into the expert models. Using ALS data, regression analyses were employed to estimate predictive relationships between ALS features and ES proxy values to fully utilize the different properties of these data (cf. Section "ALS-based models for the priority values of the ESs" below). In the absence of independent, wall-to-wall data for validation, both the predictions and validations were carried out at the level of individual forest plots. The evaluation is therefore limited to the local fitness of the ESs for a specific forest patch at a single point in time, without considering their spatial or temporal continuum. No decision maker was assumed in this study and the values obtained from both Eqs. 1 and 2 were therefore treated with equal weights, even if those could additionally be weighted according to the decision makers' preference structure.
The ESs considered in this study and expert models for deriving their reference proxy values
Abbr. | ES | Indicator, unit (citation)^a | Stand-level forest attributes used as predictors^b
BIOD | Biodiversity | Index value based on expert opinion (Lehtomäki et al. 2015)^1 | Site fertility, growing stock volume, diameter, dominant species
TIMB | Timber production | Soil expectation value (SEV), €·ha^-1 (Pukkala 2005)^2 | Diameter, basal area, age, site fertility, species-specific growing stock volume, number of trees, operational environment (temperature, interest rate, timber prices)
CARB | Carbon storage | Estimated amount of carbon^3, t·ha^-1 (Karjalainen and Kellomäki 1996) | Growing stock volume
BILB | Suitability for bilberry picking | Index value based on expert opinion (Ihalainen et al. 2002) | Age, basal area, height, species-specific growing stock volume, site fertility
COWB | Suitability for cowberry picking | Index value based on expert opinion (Ihalainen et al. 2002) | Age, species-specific growing stock volume, diameter, site fertility
– | Visual amenity | Index value based on expert opinion (Pukkala et al. 1988) | Diameter, number of trees, species-specific growing stock volume, site fertility

^a When computing the values for the present study, the following details or exceptions compared to the original publications were made:
^1 The index values are of form diameter × volume, scaled using dominant-species-specific transformation functions (Lehtomäki et al. 2015) and maximum values of forest attributes in the study area, and multiplied by site fertility specific weights (Lehtomäki et al. 2015).
^2 Values of operational environment related parameters were obtained as combinations of effective temperature sum fixed to 1300 degree days, interest rates of 1%–4% and saw-wood/pulpwood prices (units in €·m^-3) of 30/15, 30/25, 40/15, 40/25, 40/35, 50/25, and 50/35, and the SEV was obtained as an average of these 28 combinations weighted by the proportions of species. All values were adopted from the study by Pukkala (2005).
^3 The estimated carbon was obtained based on conversion factors from species-specific, total stem volumes to carbon contents.
^b To standardize the computation based on all data sets, the following simplifications or groupings were used:
- Species groups: pine, spruce, deciduous trees.
- 'Diameter' always referred to the basal-area weighted mean diameter.
Study area and experimental data
The study area is located in Evo, Finland (61.19°N, 25.11°E), which belongs to the southern boreal forest zone. The data extended over an area of approximately 3 km × 6 km. The forest stands in the area vary from intensively managed to natural forests in terms of their silvicultural status. Approximately 84% of the growing stock in the studied plots is dominated by coniferous tree species Scots pine (Pinus sylvestris L.) and Norway spruce (Picea abies [L.] H. Karst.). Deciduous tree species such as birches (Betula spp. L.), aspen (Populus tremula L.), alders (Alnus spp. P. Mill.), willows (Salix spp. L.), and rowan (Sorbus aucuparia L.) occur in mixed stands and below the dominant canopy.
Data sets used were compiled from three earlier studies in the same area (Vauhkonen and Imponen 2016; Niemi and Vauhkonen 2016; Vauhkonen and Ruotsalainen 2017a). Vauhkonen and Imponen (2016) downloaded and processed ALS data acquired by the National Land Survey of Finland to stratify the area according to forest structural properties. The ALS data were acquired from a flying altitude of 2200 m using Leica ALS 50 scanner on 7 May, 2012, to yield a nominal pulse density of 0.8 m− 2. Circular sample plots (9 m radius) were placed by clustering the ALS data with respect to forest structural features, which was found to be an efficient strategy to distribute the sample across the spatial, size, and age distributions of the tree stock (Vauhkonen and Imponen 2016). The field measurements were carried out in June–August, 2014. The species and diameter-at-breast height (DBH) were measured for each tree with a DBH ≥ 5 cm. For each tree species of the plot, a tree with a DBH corresponding to the median tree was measured for height and used to calibrate height curves for predicting the missing tree heights. Plot-level forest attributes were computed from the tree-level measurements using standard equations and methods, which are described in detail in an open-access article by Niemi and Vauhkonen (2016).
Publicly available MS-NFI data (Natural Resources Institute Finland 2017) were included to provide a benchmark for the ALS data. The MS-NFI maps are the same as those used by Vauhkonen and Ruotsalainen (2017a), and details on their pre-processing are given in that paper. These raster maps depicted site fertility, growing stock volume and biomass components by tree species, total basal area, and the mean diameter and height corresponding to those of the (basal area weighted) median tree, and they were produced using a k-nearest neighbor (k-NN) estimation method based on optimized neighbor and feature selection (Tomppo and Halme 2004; Tomppo et al. 2008a, 2014). The method used various satellite images from 2012 to 2014 and National Forest Inventory (NFI) field plot measurements from 2009 to 2013, which were updated to correspond to the situation in mid-2013 using growth models.
Altogether 102 field plots were covered by both ALS and MS-NFI data and were included in the analyses. The models of Table 1 were applied to produce plot-specific reference values for the provisioning potential of the ESs based on field data. According to an exploratory analysis, the expert models of Table 1, fit with many different data sets, had considerably different value ranges over the landscape. As a result, a direct normalization of the expert function values, specifically with Eq. 2, resulted in emphasizing one ES in the priority rankings only because of the different shape and scale of the initial value distributions, as elaborated upon in Appendix 1. For this reason, the expert function values of all ESs were transformed to follow the normal distribution as closely as possible using the Box-Cox-transformation (Appendix 1) prior to applying Eqs. 1 and 2. The forest attribute estimates based on the MS-NFI maps were transformed using the same parameter values as with the field data. This transformation did not affect the order of the observations, but produced approximately equally shaped frequency distributions for every ES, as detailed in Appendix 1.
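As a minimal illustration of this normalization step, the sketch below (R) derives both priority value forms from the Box-Cox-transformed proxies; it assumes that Eq. 1 assigns priorities from the rank order of the plots scaled to 0–1 and that Eq. 2 rescales the values linearly to 0–1 (the exact function forms are those given earlier in the text), and the matrix of proxy values is simulated here only as a placeholder.

# Assumed forms of the priority value functions (cf. Eqs. 1 and 2 in the text)
priority_interval <- function(x) rank(x, ties.method = "average") / length(x)  # interval scale (Eq. 1, assumed)
priority_ratio    <- function(x) (x - min(x)) / (max(x) - min(x))              # ratio scale (Eq. 2, assumed)

# 'proxies': plot x ES matrix of Box-Cox-transformed expert model values (placeholder data)
proxies <- matrix(rnorm(102 * 6), ncol = 6,
                  dimnames = list(NULL, c("BIOD", "TIMB", "CARB", "BILB", "COWB", "AMEN")))
p_eq1 <- apply(proxies, 2, priority_interval)  # priorities based only on the order of observations
p_eq2 <- apply(proxies, 2, priority_ratio)     # priorities preserving the ratios between observations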
ALS-based models for the priority values of the ESs
Prediction models with independent variables extracted from the ALS data were formulated to predict priority function values of the form of Eq. 2. Priority function values corresponding to Eq. 1 were obtained by ordering the aforementioned predictions, i.e., no separate models were constructed for the function form of Eq. 1.
As reasoned in the Introduction, the aim was not to model the forest attributes used as the predictors of the expert models, but to identify and quantify such properties of the ALS point clouds that directly explained the variation in the ES proxies. As visualized in Fig. 2, the point clouds of the plots with maximum proxy values did not considerably differ between the ESs in terms of the total distributions. However, when height values or proportions were computed separately according to echo categories, ES-specific differences could be pointed out (Fig. 2). The features were therefore extracted in echo categories, which were "only echoes" (suffix _only), "first of many echoes" (_first), "last of many echoes" (_last), "first echoes" (_FP), and "last echoes" (_LP), where the last two categories included "first of many" and "last of many" echoes, respectively, with "only echoes" duplicated in both. Fixed height thresholds of 0.5 m and 5 m were used to delineate the ground and shrub layers, respectively, and an adaptive threshold, determined as the height of the 60th percentile, separated the suppressed and dominant canopy.
ALS height profiles and descriptive characteristics of the field plots considered to be most important locations of the ESs in the data studied (priority value of 1 based on Eq. 2). For comparison, the lower right panel shows a plot that had low priority values of the considered ESs. The black, green, and blue symbols indicate only, first-of-many, and last-of-many ALS echoes, respectively. Grey horizontal lines indicate the mean heights of these echo categories and all echoes and are drawn to illustrate the differences in terms of these metrics between the ESs
The following categories of the features were considered (a minimal extraction sketch in R is given after this list):
Canopy height and density, which are the basic predictors used in ALS analyses (Næsset 2002) and were assumed to discriminate between size-specific attributes of the ESs: the maximum (hmax), the mean (hmean), and the standard deviation (hstd) of the height values above the ground threshold; the 5th, 10th, 20th, ..., 90th, and 95th percentiles (hzz, where zz denoted the percentile value); and the corresponding proportional densities (dzz) were computed according to Korhonen et al. (2008, pp. 502–503).
Proportion of echoes above a given threshold to all echoes, corresponding to a vegetation cover estimate (Korhonen et al. 2011). This proportion was computed in two ways: using echoes of different categories above the ground (ccX_ground, where X is the echo category) or first echoes above the shrub layer threshold (ccshrub), which corresponds to an attempt to quantify the shrub layer thickness (cf., Vauhkonen and Imponen 2016).
Absolute differences between mean heights of different echo categories. These features were computed without height thresholds and assumed to discriminate between properties related to coniferous- or deciduous-dominated forest in the ALS data acquired during the leaf-off period (Liang et al. 2007). These features are denoted by diff x–y , where suffix x–y refers to the height difference of echo categories FP–LP, only–LP, first–only, or first–last.
Proportions of the different echo categories, which were assumed to be affected by the species and size specific ES properties in the canopy similar to the ALS-intensity features (Ørka et al. 2012; Vauhkonen et al. 2014). These features are denoted by prop X/Y_z , where X/Y indicated the ratio of two echo categories X and Y, and z was the height threshold employed for computations.
Predictors related to the shrub and understorey layers (Vauhkonen and Imponen 2016): the ratio of the echoes reflected above ground but below the dominant canopy threshold (runderstory); the standard deviation of the height values of echoes reflected above ground but below the dominant canopy threshold (stdunderstory); and the ratio of the echoes reflected from the shrub layer to all echoes (rshrub).
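The sketch below (R) illustrates how a few of the above feature types could be derived for one plot from a data frame of normalized echo heights and return-category labels; the object and column names (pts, h, cat), the subset of features shown, and the way the interaction terms are formed are simplifications of the full feature set described above.

# 'pts': echoes of one plot with height above ground (h, m) and return category
# (cat: "only", "first", "last"); thresholds follow the text (0.5 m ground, 5 m shrub)
extract_features <- function(pts, ground = 0.5, shrub = 5) {
  canopy <- pts$h[pts$h > ground]                      # echoes above the ground threshold
  first  <- pts[pts$cat %in% c("only", "first"), ]     # "first echoes" (_FP)
  pcts   <- quantile(canopy, c(0.05, 0.10, seq(0.2, 0.9, 0.1), 0.95))
  c(hmax = max(canopy), hmean = mean(canopy), hstd = sd(canopy),
    setNames(pcts, paste0("h", c("05", "10", seq(20, 90, 10), "95"))),
    ccFP_ground = mean(first$h > ground),              # vegetation cover proxy
    ccshrub     = mean(first$h > shrub),               # cover above the shrub layer
    rshrub      = mean(pts$h > ground & pts$h <= shrub))  # shrub-layer echo ratio
}
# Pairwise products (including squares) extend the initial feature set S1 to S:
# S <- c(S1, as.vector(outer(S1, S1)))   # schematic only; duplicates not removed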
Features x_i, i = 1, 2, …, 143, listed above formed the initial set S1 of candidate predictors. To account for useful interactions between the features, the final set S was obtained as S1 ∪ {x_i × x_j} ∀ i, j ∈ S1, which resulted in altogether 10,296 candidate features per plot. Separate models for each ES were constructed by inserting features iteratively into a model template:
$$ \hat{y}_{in} = a_n + \sum_{n=1}^{N} b_n x_{jn}^{c_n}, $$
where ŷ_in is the vector of predicted priority values for the i:th ES, x_jn is the j:th feature of S, and a_n, b_n, and c_n are model parameters at the n:th of the N = 1, 2, 3, 4 iteration rounds. Parameters a_n, b_n, and c_n were estimated using the nls function of the R statistical computing environment (R Core Team 2016). Testing every candidate feature as x_jn at every iteration round, the RMSE between the predicted and reference priority values was computed as:
$$ \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( \hat{y}_i - y_i \right)^2}{n}}, $$
where n is the number of observations, and ŷ_i and y_i are the predicted and reference values, respectively. The feature that minimized the RMSE was retained in the model template and the iterations were continued until the model included a maximum of four features. However, more criteria were employed to select the model to be used for the prioritization analyses among the models with one to four features (a sketch of the selection loop is given after the list):
The final predictor inserted had to improve the RMSE by at least 1%.
The residual errors had to satisfy the null hypothesis that the considered sample came from a normally distributed population, which was examined graphically using scatter, residual and QQ-plots, and numerically using the test statistic proposed by Shapiro and Wilk (1965).
The model had to pass a "sensitivity of convergence" test, in which the model was fit separately for each plot using Leave-One-Out-Cross-Validation (LOOCV), i.e., not allowing the plot in question to be available in the training data for model fitting. Implications of including this test are further described in the Results section.
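A minimal sketch (R) of the greedy selection loop described above follows; the starting parameter values, the candidate feature data frame X, and the omission of the three additional selection criteria are simplifications, and the feature names are assumed to be syntactically valid in R.

rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

# y: reference priority values of one ES; X: data frame of candidate features
select_features <- function(y, X, max_terms = 4) {
  terms <- character(0)
  for (round in seq_len(max_terms)) {
    candidates <- setdiff(colnames(X), terms)
    scores <- sapply(candidates, function(f) {
      feats <- c(terms, f)
      k     <- length(feats)
      rhs   <- paste(sprintf("b%d * %s^c%d", seq_len(k), feats, seq_len(k)), collapse = " + ")
      start <- setNames(as.list(rep(1, 2 * k + 1)),
                        c("a", paste0("b", seq_len(k)), paste0("c", seq_len(k))))
      fit <- try(nls(as.formula(paste("y ~ a +", rhs)),
                     data = data.frame(y = y, X, check.names = FALSE),
                     start = start), silent = TRUE)
      if (inherits(fit, "try-error")) NA else rmse(y, fitted(fit))
    })
    terms <- c(terms, candidates[which.min(scores)])    # keep the feature minimizing the RMSE
  }
  terms   # the 1% improvement, residual normality, and LOOCV checks are omitted here
}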
Predicting the priority values of the ESs based on the MS-NFI maps
Benchmark predictions for those based on ALS were obtained by inserting the forest attribute estimates from the MS-NFI maps to Eq. 2. Priority function values corresponding to Eq. 1 were obtained by ordering the aforementioned predictions (cf., previous section). The MS-NFI maps included estimates of all other independent variables except the number of trees per hectare, which was estimated by dividing the total basal area by the basal area corresponding to the mean diameter, i.e., assuming that the resulting number of average-sized trees existed in a pixel. To compute plot-wise estimates, the pixels of the forest resource maps intersecting with the plot polygons were identified using a spatial query. The estimates of a plot were obtained from the intersecting pixels as weighted averages with the joint areas of the plots and pixels as the weights. Finally, to see if amending the models based on ALS with the MS-NFI layers improved the models, a similar feature selection as with ALS data was run including all MS-NFI-based ES and forest attribute proxies as additional feature candidates.
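The two computational details of this step can be sketched as follows (R, using the sf package for the spatial query); the stem-number approximation follows the description above, while the object and column names are assumptions.

library(sf)

# Stem number from total basal area G (m2/ha) and mean diameter d (cm):
# G divided by the basal area of a tree with diameter d
stems_per_ha <- function(G, d) G / (pi * (d / 200)^2)

# Area-weighted average of an MS-NFI attribute over the pixels intersecting a plot;
# 'pixels' are the raster cells as polygons, 'plot_poly' is the plot polygon
plot_estimate <- function(plot_poly, pixels, attribute) {
  parts <- st_intersection(pixels, plot_poly)   # joint areas of the plot and pixels
  w     <- as.numeric(st_area(parts))           # used as weights
  sum(parts[[attribute]] * w) / sum(w)
}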
Field calibration and evaluation of the predictions
Following the method described in the previous section, potential estimation errors in the MS-NFI maps propagate to the predicted priority values, whereas similar error propagation is avoided in the ALS-based analyses due to local model fitting. An additional calibration step was therefore included to eliminate the contribution of the local field sample to the predictions. Calibration models y_i = f(ŷ_i) of all ESs, where y_i was the reference priority value of the i:th ES and ŷ_i its RS-based estimate, were fit simultaneously as a system of linear equations. Due to the high inter-correlations (see Additional file 1, Table S1), the models were fit in two steps: first, using Ordinary Least Squares (OLS) to produce model residuals, and second, using Seemingly Unrelated Regression (SUR) to account for the residual error covariance matrices in the final models. The computations were carried out in the LOOCV mode using the systemfit package of R (Henningsen and Hamann 2007). The accuracies of the ALS- and MS-NFI-based predictions were compared using the RMSE and coefficient of determination (R2) computed between the reference values and the predictions obtained from the LOOCV models.
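A minimal sketch of this calibration (R, systemfit package) is given below; 'ref' and 'pred' are assumed to be plot × ES data frames of reference and RS-based priority values, and the two-step OLS/SUR estimation is handled internally by systemfit when method = "SUR" is requested (OLS residuals are used to estimate the covariance of the equation system).

library(systemfit)

calibrate_loocv <- function(ref, pred) {
  es  <- colnames(ref)
  out <- as.data.frame(ref) * NA
  for (i in seq_len(nrow(ref))) {
    train <- data.frame(setNames(ref[-i, ], paste0("y_", es)),
                        setNames(pred[-i, ], paste0("x_", es)))
    eqs   <- lapply(es, function(s) as.formula(paste0("y_", s, " ~ x_", s)))
    fit   <- systemfit(eqs, method = "SUR", data = train)   # one linear equation per ES
    new_x <- setNames(pred[i, , drop = FALSE], paste0("x_", es))
    out[i, ] <- unlist(predict(fit, newdata = new_x))
  }
  out   # calibrated predictions from the leave-one-out models
}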
Decision analyses
The effects of the aforementioned prediction accuracies on the management decisions were evaluated by comparing the priority ranking of the ESs in each individual plot. The ES with the highest priority value, based on Eqs. 1 or 2 applied to the field reference data, was assumed to be the most suitable ES for the specific plot. The RS-based decision was considered correct if the most suitable ES based on the field data and the RS prediction were the same. The degree of incorrect decisions was quantified using two approaches. First, the correctness of every decision was given a numerical score (Gopal and Woodcock 1994): situations where RS and field data resulted in the same decision were given a score of 6; those where the RS-based service was the second best according to the field data a score of 5; and so on, until the situation where the RS-based service was the worst according to the field data, which was given a score of 1. The distributions of these "decision scores" were compared between the different data sources. Second, the dispersion between the field and RS data in the services selected as the most suitable for a specific plot was examined using confusion matrices. The priority ranking of the less important ESs was not evaluated.
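The scoring can be sketched as below (R); 'field' and 'rs' are assumed to be plot × ES matrices of priority values from the field reference and the RS-based predictions, respectively.

# Score 6 = the RS-selected ES is also the best one in the field data, ..., 1 = it is the worst
decision_scores <- function(field, rs) {
  chosen <- max.col(rs)                                    # ES selected based on RS data
  sapply(seq_len(nrow(field)), function(i) {
    field_rank <- rank(-field[i, ], ties.method = "first") # 1 = most suitable ES in field data
    ncol(field) + 1 - field_rank[chosen[i]]
  })
}

# Confusion between the most important ESs according to the two data sources:
# table(observed  = colnames(field)[max.col(field)],
#       predicted = colnames(field)[max.col(rs)])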
In addition to the 'deterministic' decision making described above, the sensitivity of the decisions was examined by incorporating the uncertainties of the models into the analyses (Fig. 1). Instead of using the expected values of the priority functions, the ranking was carried out assuming the predictions to be realized values of a random variable X ~ N(E, s²), where E was the expected value and s² was the mean squared error of the model residuals. A similar priority ranking as with the expected values was carried out with predictions that were among the worst and best outcomes of the model, obtained as the values of the 5th and 95th percentiles of the distribution of X for each ES.
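A minimal sketch of this sensitivity analysis is given below (R); 'pred' is assumed to be a plot × ES matrix of expected priority values and 'sigma' a vector of the ES-specific residual standard errors.

# Priority ranking under a given outcome of the models: p = 0.05 (worst),
# 0.5 (expected; equal to the mean of the normal distribution), or 0.95 (best)
ranked_es <- function(pred, sigma, p = 0.5) {
  q <- sapply(seq_along(sigma), function(j) qnorm(p, mean = pred[, j], sd = sigma[j]))
  colnames(q) <- colnames(pred)
  colnames(pred)[max.col(q)]     # ES prioritized in each plot under this outcome
}
# worst <- ranked_es(pred, sigma, 0.05); best <- ranked_es(pred, sigma, 0.95)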
The ALS features considered as the predictors of the regression models are listed in Table 2. All feature and echo type categories and a wide range of different height values was employed when building the models, for which reason only a few specific observations on the structure of models can be made. All features selected were products of form feature1 × feature2, where feature1 was often an absolute height value (a mean height, percentile, or height difference) and feature2 a proportion (either a canopy cover proxy or proportional density). This combination was especially frequent among the first features selected to the models. In the models of TIMB, all selected predictors were such combinations employing various height values and echo categories. The models of BIOD and CARB used proportion × proportion types of interactions and the ratio of first-of-many to only and first returns (propfirst/FP_ground). The predictors of BIOD (e.g., propfirst/FP_ground; diffonly–LP; hstdLP) were most diverse in terms of describing the canopy structure with features from different categories. The models of BILB, COWB, and AMEN differed from those mentioned above in employing low percentile values, last pulse proportions and features such as runderstory, diffonly–FP, propfirst/FP_ground, and hstdfirst. Overall, the canopy cover proxies were the most frequent feature type, whereas the computing heights and echo categories of all features varied. Although a wide range of different height values was used, a predictor with a percentile value above 70 was selected only once.
The features and performance of ALS-based models for predicting ratio-scaled ES proxy values. Columns: selected features (in order of entry), RMSE, RMSELOOCV, and W (Shapiro-Wilk test statistic)a
ccshrub × h40first
0.965***
+ propfirst/FP_ground × d50first
+ diffonly–LP × hstdLP
+ h95first × h10LP
cconly_ground × hmeanFP
+ h40last × ccLP_ground
0.970**
+ h05last × d05first
+ cconly_ground × h10FP
+ h20last × cconly_ground
+ d60first × d30LP
h60first × h70LP
+ d50only × ccFP_ground
+ runderstory × d05FP
+ d70first × d50only
d20LP × h40FP
+ diffonly–FP²
+ d60first × h05only
h10first × hmeanFP
+ d30last × d70first
+ h20last × ccFP_ground
+ hstdfirst × hmeanLP
aThe asterisks refer to the significance of the test statistic at the 90% (*), 95% (**), and 99% (***) confidence level
The graphical assessment of model residuals (detailed results not shown) was mainly in line with the test on residual normality (Table 2): the QQ-plots showed heavy-tailed residuals especially for the models with statistically significant values of the Shapiro-Wilk test statistic. However, the deviations from normality were typically related to one or two plots with the highest or lowest values, and were not considered problematic for the further analyses. When examined in the same data used for constructing the models, the performance of every model could be slightly improved by increasing the number of predictors to the maximum number allowed. However, when the models were re-fit using LOOCV, model parameters could not be solved for at least one of the plots in the data, resulting in NA values for this performance factor in Table 2. Although this effect could probably have been avoided by allowing a slightly wider range of initial parameters when fitting the models, it was also considered a sensitivity issue reflecting an over-parameterization of the initial model for certain types of forest structures.
The volume of deciduous trees and the MS-NFI based proxy for BILB would have replaced the last ALS-based features in the models of CARB and BILB, respectively, and in the models of COWB, the corresponding MS-NFI proxy would have been selected as the second feature. However, none of the aforementioned MS-NFI features performed better than the ALS-features of these models in terms of the feature selection criteria. Based on the considerations above, the ALS and MS-NFI data sets were always used separately. Also, a different number of predictors was used in the ALS-based models for the priority ranking: BIOD and COWB were modeled using only one predictor (the one selected first); TIMB, BILB, and AMEN using two predictors (those selected first and second); and CARB using three predictors selected first.
Comparison of ALS and MS-NFI for predicting the priority values of the ESs
The models based on ALS data always outperformed those based on forest attribute estimates derived from the MS-NFI maps. As shown in Figs. 3 and 4, the ALS-based models generally explained more variation in the ES proxies. The regression lines of the MS-NFI data based on the SUR calibration models also differed more severely from the 0–1-lines. Using MS-NFI data, TIMB was predicted most accurately with an RMSE of 30.4%. The RMSEs of other ESs were also close (30.6%–33.2%), except AMEN, which had an RMSE of 40.5% and BIOD, which was predicted worst with an RMSE of 41.6%. Using ALS, the ES predicted worst (COWB) had an RMSE of 29.8%, which is 97% of the RMSE of the corresponding MS-NFI prediction. The RMSEs of all other ESs were in order of 21.7%–27.5% (57%–83% of the RMSEs of MS-NFI predictions), except CARB, which was predicted most accurately with an RMSE of 15.1% (47% of the RMSE of MS-NFI prediction). The degree of determination of CARB also improved most due to using ALS instead of MS-NFI, from R2 = 0.11 to 0.81. The R2-improvements of the other ESs were close to this magnitude, except for BILB and COWB, which had R2 values close to each other based on both the data sources. The residual errors of models based on ALS and MS-NFI were somewhat correlated for BILB and COWB, but not for the other ESs (Figs. 3 and 4, right column).
Predicted (x-axis) versus reference priority values of BIOD (upper row), TIMB (middle row) and CARB (bottom row) based on MS-NFI (left column) or ALS (middle column). The broken and solid lines are the 1:1 line and regression line of the SUR calibration models fit with the local field sample, respectively. The right column shows the residuals of the corresponding predictions based on ALS (x-axis) and MS-NFI data
Predicted (x-axis) versus reference priority values of BILB (upper row), COWB (middle row) and AMEN (bottom row) based on MS-NFI (left column) or ALS (middle column). The broken and solid lines are the 1:1 line and regression line of the SUR calibration models fit with the local field sample, respectively. The right column shows the residuals of the corresponding predictions based on ALS (x-axis) and MS-NFI data
Decision analyses based on different priority functions, data, and model uncertainties
Compared to the use of interval-scaled priority functions, those based on the ratio scale slightly increased the proportion of decisions that had a perfect agreement between the ALS and field data (Fig. 5, left panel). The same observation was made regarding the decisions based on the MS-NFI data, but the differences between the priority function forms were in general minor. When the ratio-scaled priority functions were used in the remaining analyses, the ALS and field data resulted in the same decision in 42% of the plots; the ALS-based ES was at least the second best according to the field data in 69%; and among the three best alternatives in 84% of the plots. With MS-NFI data calibrated by the local field sample, the corresponding figures were 46%, 61%, and 72%, and slightly lower without the calibration, the distribution of the decision scores of all data sources being shown in Fig. 5 (right panel). Thus, although the calibrated MS-NFI data did better in selecting ESs that matched perfectly with the field data, the proportion of poorest decisions was considerably lower based on the ALS data. The ALS-based models resulted in a better decision in 32%, those based on the MS-NFI data in 27%, and the decision was the same in 41% of the plots. Of the latter group of plots, around 2/3 had either BILB or COWB as the most important ES and the decisions for these plots were generally scored ≥4. In addition to the data source, the decision score was highly dependent on the priority difference between the best ESs observed from the field plots: in the plots where the decisions of the data sets coincided, the average priority difference between the ESs prioritized as the first and second was 0.12 (standard deviation 0.08), whereas the corresponding figures for the other plots were 0.07 (0.07).
The distribution of decision scores (left) between interval- (Eq. 1) and ratio-scaled (Eq. 2) priority functions in the ALS data, and (right) between data sources using ratio-scaled priority functions. Situations where RS and field data resulted in the same decision were given a score of 6; those where the RS-based service was the second best according to the field data a score of 5; and so on, until the situation where the RS-based service was the worst according to the field data, which was given a score of 1
Decision making based on the 5th percentile of the ALS-predicted priority value distributions reduced the proportion of worst decisions (Fig. 6, left panel). Otherwise, the decisions based on either the 5th or 95th percentile did not compare favorably with those based on the expected value in either data set (Fig. 6). The confusion matrices for the most important ESs based on the different data sources and either the expected or extreme outcomes are shown in Appendices 2 and 3, respectively. A comparison of the confusion matrices based on either expected or extreme values indicates that many plots whose most important ES was among those predicted with the highest or smallest error rates (e.g., CARB or COWB, respectively, based on ALS data) could be prioritized for this ES only by explicitly considering the extreme model outcomes. As a number of plots obtain a different prioritization based on the expected values, it was found interesting to look at the plots for which the prioritization changed depending on the model outcome. Using the ALS-based ES proxy models, the prioritization based on the expected, worst, and best outcomes was the same in only 22 plots (in 43 using MS-NFI maps calibrated with the field data). If all decision alternatives based on the three outcomes of the ALS models were considered, the alternative that matched perfectly with the field data was included in 57% of the plots; an alternative among the two best ones in 85%; and among the three best ones in 93% of the plots. With MS-NFI data, the corresponding figures were 59%, 76%, and 88%. Thus, especially the ALS models were found useful in confining the decision alternatives to those most feasible according to the production possibilities, from which the one most preferred according to the stakeholder preferences could be selected based on further MCDA.
The distribution of decision scores based on the use of ratio-scaled priority functions and different outcomes of the prediction models in ALS (left) and calibrated MS-NFI data (right). See the caption of Fig. 5 for the interpretation of decision score values
Earlier RS-based decision analyses of multiple ESs have relied on map scales such as 1:25,000 (Roces-Díaz et al. 2017) or pixel sizes such as 500 × 500 m2 (Schröter et al. 2014). Also smaller pixel sizes of 60 × 60 m2 (Lehtomäki et al. 2015) or 48 × 48 m2 (Vauhkonen and Ruotsalainen 2017a) have been used in spatial prioritization analyses. Many of the aforementioned scales are coarse considering the need to formulate management prescriptions at the level of operational units (e.g., forest compartments), which are typically 1.5–2.0 ha in size in Finland (Koivuniemi and Korhonen 2006). Based on this background, the results of the present study are encouraging: the use of ALS in particular allowed deriving accurate predictions for plots of around 250 m2, i.e., in a considerably more detailed resolution than current operational compartments. The RMSEs of predicting the studied ES indicators by ALS were 47%–97% of the RMSEs of corresponding predictions based on benchmark forest resource maps derived using coarse satellite images. Due to applying a similar field calibration step on both of the data sources, the difference can be attributed to the better ability of ALS to explain the variation in the ES proxies. In the sub-sections below, the results are discussed from the points of view of using ALS data to predict various ES indicator proxies (Table 1) and using these predictions in decision analyses compared to other data.
On the ES proxies and their ALS-based modelling
The expert models (Table 1) express the suitability of forests for the ESs as transformations of forest attributes. Applying the expert models yielded the highest biodiversity conservation values for mature, densely stocked forests. The values were weighted by site fertility such that most fertile sites received a considerable weight compared to poorer sites. An increasing basal area and mean diameter increased the soil expectation value of all the species, but otherwise the model included species-specific interactions between these and operational environment related parameters. The value of carbon storage increased according to an increasing stem volume depending on the species-specific biomass expansion factors. An increasing maturity (measured in terms of mean age and diameter) and decreasing stand density (measured in terms of either basal area or stem number) increased the suitability of a site for berry picking and visual amenity. The latter models also included species-specific terms such that especially the presence of pine trees improved the values. Thus, the response variables of all other ESs except CARB were modelled as functions of multiple forest attributes.
The models based on the ALS data (Table 2) explained the differences in the provisioning potential of the different ESs with RMSEs of 15%–30%. The selected features and model performances are well in line with the background given in the previous paragraph. The predictions of CARB had the highest accuracies, because those essentially explained the variation in the total stem volume converted to carbon using biomass expansion factors. The ratio of first-of-many to all first echoes (propfirst/FP_ground) was used as a predictor of CARB in addition to height and density metrics, which are typical to total biomass or carbon models. The aforementioned feature has been found useful for separating species (Ørka et al. 2012; Vauhkonen et al. 2014) and should be studied in a broader forest modeling context. Although the field proxy value for TIMB was computed using species-specific predictors, the model based on the ALS data included only features based on height. The models for the other ESs included ALS-based predictors that were clearly related to forest canopy structure: for example, features indicating the existence or abundance of low vegetation were frequently selected to the models of BIOD or recreation-related ESs, respectively. No clear recommendations for the selection of features can be given, except for using multiple heights and echo types when computing the candidate features. The requirement to estimate a high number of model parameters was likely reduced by including products of all candidate features to account for interactions between them, which is a useful property that has not been reported in earlier ALS studies. The features used here are the most common and easily implementable using available software packages such as R. However, it is acknowledged that not all features presented in the literature were included and it could be possible to improve the results with more experimental ones such as those related to volumetric (Vauhkonen and Ruotsalainen 2017b) or textural (Niemi and Vauhkonen 2016) properties.
The proxies listed in Table 1 are measured in different units. In order to use the proxies in decision analyses, those need to be normalized to the same scale, which was done in this paper prior to the modeling. Due to the transformations, the prediction accuracies cannot be easily compared with earlier studies, even in relative scale. Even CARB, which is a species-specific transformation of the total volume, cannot be directly compared to previous studies that predict total biomass or carbon due to the Box-Cox-transformation (Appendix 1) applied to equalize the distribution of the response variables for the decision analyses. If a similar model for CARB had been fit without the Box-Cox-transformation, the RMSE would have risen to around 34%. It is higher than (e.g.) the RMSE of 25.7% obtained by Kankare et al. (2015) for total biomass in the same region, but when comparing the figures the differences in the definition of the response variable and a slightly different plot size must also be considered. For more reasoning on the need for the transformations, please see the next section. On the other hand, the modeling task considered above may have been alleviated by the fact that all response variables resulted from models that had been formulated earlier using common forest mensurational attributes. Actual differences between berry yields of two forests could be much more discrete than those predicted as a function of forest attributes. The performance of the models should thus be tested by re-fitting or validating the models against ES-specific indicators that are not based on models but direct observations made in the field (see also Hegetschweiler et al. 2017; Kohler et al. 2017).
On the other hand, it may be less feasible or even impossible to predict certain ES properties otherwise than using features quantifying the three-dimensional forest structure. Examples in the Scandinavian boreal forest structures include mammal or bird habitats (e.g., Melin et al. 2013, 2016), which could have realistically occurred in the studied area, but could not be included in the present comparison due to the lack of models based on information obtainable from forest resource maps or field plots. The value of ALS data can be seen to lie especially in applications that require identifying sites with high value for e.g. conservation as ALS data coverage is globally increasing and the method has been proven to provide 3D descriptions of vegetation structure that, in turn, have been long known as primary determinants of habitat quality and biodiversity (MacArthur and MacArthur 1961; Dueser and Shugart Jr, 1978; Brokaw and Lent, 1999). Nevertheless, Vihervaara et al. (2017) identified only four Essential Biodiversity Variables that could benefit from the use of RS data. According to this study, the obtainable improvements are most likely indicator-specific, with magnitude depending on what RS data are available.
Other aspects of RS-based decision analyses: Data, normalization, and uncertainties
This study tested normalization resulting to values in either an interval- (Eq. 1) or ratio-scale (Eq. 2). Even though the priority value functions based on the different scales did not differ considerably, those based on the ratio-scale performed slightly better in the priority ranking, which is logical as also the ALS-based features provide information at a ratio scale. Although the purpose is not to exhaustively discuss the implications of different value function forms, one important practical aspect discovered in the exploratory analysis could be brought up: An incorrect selection of a value function form would result in biased decisions by weighting the rank-orderings of the alternatives to an undesired direction, which is exemplified by a comparison of the transformed and non-transformed value function forms (Appendix 1). This discussion highlights the importance of considering data normalization procedures in detail already at the modeling stage, if the final applications aim at decision analyses where all modeled proxies should be measurable at the same scale.
The ALS and MS-NFI data sources showed several differences when predicting the priority value functions. Besides having a more deterministic relationship with the forest attributes, one additional improvement of the locally fit ALS models is the ability to predict the 0–1 value ranges directly. When using forest attributes such as those predicted by the MS-NFI, the maximum values used in the normalization should be representative of the entire area. In this study, the predicted minima and maxima based on the MS-NFI were more severely incorrect than with the ALS data (Figs. 3–4), which partially explains the poorer performance of this data source. Many multivariate predictions from RS data are based on non-parametric nearest neighbor methods, which could have been considered also in the case of ALS data. However, the aforementioned problem would be seriously present also in those types of predictions.
Regardless of the input data or the applied scale or mapping technique, it is well recognized that the resulting ES maps will include uncertainties (Eigenbrod et al. 2010; Schulp et al. 2014; Räsänen et al. 2015). A few earlier spatial prioritization studies (Lehtomäki et al. 2015; Räsänen et al. 2015; Vauhkonen and Ruotsalainen 2017a) used forest attribute maps in a similar resolution as considered here, but without a calibration based on field data. Such a practice cannot be recommended due to the observed uncertainties in the uncalibrated MS-NFI data in particular. However, it should be acknowledged that all analyses carried out in the aforementioned studies might not be sensitive to incorrect pixel-level prioritization decisions. Also, each of the aforementioned studies used aggregated pixels to partly account for these effects. It is not possible to address the uncertainty reduction due to the use of an aggregated resolution based on the field data of this study, which was collected from fixed-size plots.
Finally, it should be noted that in the actual decision making, the stakeholder preferences may affect or even dictate the allocation of the ESs over the suitability of the forest structure. Even if the decision maker had no preferences on the ESs, aggregating individual pixels, i.e., deviating from their local optima to compose larger treatment units, could be feasible with respect to the implementation of management prescriptions or achieving ecological or economic objectives that are determined over a larger area (Pukkala et al. 2014). Importantly, the decisions based on the same data may differ for a risk-avoiding, risk-neutral or risk-seeking decision maker (Pukkala and Kangas 1996). The risk preferences of the decision maker should clearly be incorporated in the decision making based on the ES maps. Here, a technique similar to that of Pukkala and Kangas (1996) was used to account for the uncertainties emerging from the different accuracies of the ES proxy models. The results reported in the last paragraph of the Results section suggest that separate forest ES maps should be prepared using the expected, worst, and best outcomes of the model predictions to fully describe the production possibilities of the landscape under the uncertainties in the models. Also the uncertainties related to the decision makers' preferences could be accounted for as additional stochastic distributions (e.g., Kangas et al. 2007) and incorporated in the analyses as pixel-level production constraints. Overall, the prioritization approach should be tested further involving real decision makers.
The provisioning potential of ESs (biodiversity, timber, carbon, berries, and visual amenity) was modeled as expert model-based proxies that express the suitability of forests for the ESs as transformations of forest attributes. The models based on the ALS data explained the variation in these proxies with RMSEs of 15%–30%. The RMSEs of the ALS-based models were 47%–97% of the RMSEs of corresponding predictions based on the MS-NFI forest resource maps. Due to applying a similar field calibration step on both of the data sources, the difference can be attributed to the better ability of ALS to explain the variation in the ES proxies.
The RMSE-differences did not fully translate to the accuracies of land use decisions: instead, prioritizing the land use for the ESs with the highest provisioning potential could be done with rather similar accuracies based on both data sources and at a resolution of 250 m2, i.e., in a considerably more detailed scale than current operational forest management units. ALS-based models for the ES proxies can however be recommended based on their better stability regarding the model errors. The results suggest that separate forest ES maps should be prepared using the expected, worst, and best outcomes of the model predictions to fully describe the production possibilities of forest under the uncertainties in the models.
ALS: Airborne Laser Scanning
AMEN: Visual amenity (one of the ecosystem services considered, see Table 1)
BILB: Suitability for bilberry picking (one of the ecosystem services considered, see Table 1)
BIOD: Biodiversity (one of the ecosystem services considered, see Table 1)
CARB: Carbon storage (one of the ecosystem services considered, see Table 1)
COWB: Suitability for cowberry picking (one of the ecosystem services considered, see Table 1)
DBH: Diameter-at-breast-height
ES: Ecosystem service
k-NN: k-Nearest neighbor
LOOCV: Leave one out cross validation
MCDA: Multiple criteria decision analysis
MS-NFI: Multi-Source National Forest Inventory
RMSE: Root mean squared error
RS: Remote sensing
TIMB: Timber production (one of the ecosystem services considered, see Table 1)
The acquisition of the studied data was originally supported by the Research Funds of University of Helsinki.
All data and materials can be obtained by requesting from the author.
JV is the sole author. He carried out all analyses and drafted the manuscript. The author read and approved the final manuscript.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Additional file 1: Empirical cumulative density and probability density function forms and Pearson correlation coefficients of different transformations of the expert model predictions for the ESs considered. (DOCX 1758 kb)
Box-Cox transformation of the reference priority values for the ESs
An exploratory analysis revealed a practical comparability issue related to using the expert function values for the ESs directly in Eq. 2, which can be demonstrated by visualizing the empirical cumulative distribution functions and histograms of the data (Additional file 1: Figure S1-Figure S6). The expert functions of Table 1 were fit with many different data sets having different value ranges, resulting in unequal frequencies for the different ESs when applied to the present data. Using Eq. 2, these differences would have translated directly to the priority values, which was found problematic with respect to their ranking. Look especially at the left-hand columns of Additional file 1: Figure S1-Figure S6 and compare Additional file 1: Figure S6 to the others: the direct use of these values would have resulted in a very high number of plots being prioritized as AMEN only because its distribution more frequently included higher values compared to the distributions of the other ESs, which were more often skewed to the left. For this reason, the expert model values of every ES were transformed to produce frequencies that followed the normal distribution as closely as possible prior to converting the value ranges to between 0 and 1. The transformation was obtained as (Box and Cox 1964):
$$ y_{ij}^{(\lambda)} = \begin{cases} \dfrac{y_{ij}^{\lambda} - 1}{\lambda}, & \text{if } \lambda \neq 0; \\ \ln\left(y_{ij}\right), & \text{if } \lambda = 0, \end{cases} $$
where y ij is the original value of the i:th observation of the j:th ES proxy and λ is a parameter. The value of λ was selected from an interval of − 3 to 3 as the value that maximized a test statistic on whether the considered sample came from a normally distributed population (Shapiro and Wilk 1965). This transformation did not affect the order of the observations, but produced approximately equally shaped frequency distributions of every ES. As a result, the ESs could be prioritized using two alternative priority value function forms illustrated in the two rightmost columns of Additional file 1: Figure S1-Figure S6: either according to the interval scale (Eq. 1) based only on the order of the observations or according to the ratio scale (Eq. 2) preserving the ratios between the observations, but thanks to the Box-Cox-transformation, having approximately equal (close to normal) frequency distributions between the ESs. The function forms and frequency distributions of the priority values are shown in Additional file 1: Figure S1-Figure S6 and the correlations between the priority values in Additional file 1: Table S1.
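A minimal sketch of this transformation and the selection of λ is given below (R); the grid step over the interval [−3, 3] is an assumption, and strictly positive proxy values are assumed as required by the power and logarithm forms.

boxcox_sw <- function(y, lambdas = seq(-3, 3, by = 0.1)) {
  bc <- function(y, l) if (l == 0) log(y) else (y^l - 1) / l
  w  <- sapply(lambdas, function(l) shapiro.test(bc(y, l))$statistic)  # Shapiro-Wilk W per lambda
  best <- lambdas[which.max(w)]
  list(lambda = best, y_transformed = bc(y, best))  # order of the observations is preserved
}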
During the feature selection process, it was noted that exactly the same features would have been selected to the models, regardless of whether the original or Box-Cox-transformed response variables were used. Therefore, although the Box-Cox-transformation has been rarely used in ALS-based studies, it could potentially aid also other applications by normalizing the response variable to a desired form in the model fitting step.
Confusion matrices for the most important ESs based on expected outcomes
Confusion between ESs considered as most important based on the field data (observed) and MS-NFI maps (predicted) using the expected values of the predicted ES proxies
Confusion between ESs considered as most important based on the field data (observed) and MS-NFI data calibrated with the local field sample (predicted) using the expected values of the predicted ES proxies
Confusion between ESs considered as most important based on the field data (observed) and the expected values of the ALS-based models for ES proxies (predicted)
Confusion matrices for the most important ESs based on extreme outcomes
Confusion between ESs considered as most important based on the field data (observed) and MS-NFI data calibrated with the local field sample (predicted) using the worst outcomes of the predicted ES proxies
Confusion between ESs considered as most important based on the field data (observed) and the worst outcomes of the ALS-based models for ES proxies (predicted)
Confusion between ESs considered as most important based on the field data (observed) and MS-NFI data calibrated with the local field sample (predicted) using the best outcomes of the predicted ES proxies
Confusion between ESs considered as most important based on the field data (observed) and the best outcomes of the ALS-based models for ES proxies (predicted)
Natural Resources Institute Finland (Luke), Bioeconomy and Environment Unit, P.O. Box 68, Yliopistokatu 6, FI-80101 Joensuu, Finland
Andrew ME, Wulder MA, Nelson TA (2014) Potential contributions of remote sensing to ecosystem service assessments. Progr Phys Geogr 38:328–353
Barber QE, Bater CW, Braid ACR, Coops NC, Tompalski P, Nielsen SE (2016) Airborne laser scanning for modelling understory shrub abundance and productivity. For Ecol Manag 377:46–54
Barbosa JM, Asner GP (2017) Prioritizing landscapes for restoration based on spatial patterns of ecosystem controls and plant–plant interactions. J Appl Ecol 54:1459–1468
Barredo JI, Bastrup-Birk A, Teller A, Onaindia M, de Manuel BF, Madariaga I, Rodriguez-Loinaz G, Pinho P, Nunes A, Ramos A, Batista M, Mimo S, Cordovil C, Branquinho C, Gret-Regamey A, Bebi P, Brunner SH, Weibel B, Kopperoinen L, Itkonen P, Viinikka A, Chirici G, Bottalico F, Pesola L, Vizzarri M, Garfi V, Antonello L, Barbati A, Corona P, Cullotta S, Giannico V, Lafortezza R, Lombardi F, Marchetti M, Nocentini S, Riccioli F, Travaglini D, Sallustio L, Rosario I, von Essen M, Nicholas KA, Maguas C, Rebelo R, Santos-Reis M, Santos-Martin F, Zorrilla-Miras P, Montes C, Benayas J, Martin-Lopez B, Snall T, Berglund H, Bengtsson J, Moen J, Busetto L, San-Miguel-Ayanz J, Thurner M, Beer C, Santoro M, Carvalhais N, Wutzler T, Schepaschenko D, Shvidenko A, Kompter E, Ahrens B, Levick SR, Schmullius C (2015) Mapping and assessment of forest ecosystems and their services – Applications and guidance for decision making in the framework of MAES. Report EUR 27751 EN, Joint Research Centre, European Union. doi: https://doi.org/10.2788/720519
Bässler C, Stadler J, Müller J, Förster B, Göttlein A, Brandl R (2011) LiDAR as a rapid tool to predict forest habitat types in Natura 2000 networks. Biodivers Conserv 20:465–481
Bottalico F, Travaglini D, Chirici G, Marchetti M, Marchi E, Nocentini S, Corona P (2014) Classifying silvicultural systems (coppices vs. high forests) in Mediterranean oak forests by airborne laser scanning data. Eur J Remote Sens 47:437–460
Box GEP, Cox DR (1964) An analysis of transformations. J Royal Stat Soc Ser B 26:211–252
Brokaw N, Lent R (1999) Vertical structure. In: Hunter ML Jr (ed) Maintaining biodiversity in forest ecosystems. Cambridge University Press, Cambridge, pp 373–399
Coops NC, Wulder MA, Culvenor DS, St-Onge B (2004) Comparison of forest attributes extracted from fine spatial resolution multispectral and lidar data. Can J Remote Sens 30:855–866
Costanza R, d'Arge R, de Groot R, Farber S, Grasso M, Hannon B, Limburg K, Naeem S, O'Neill RV, Paruelo J, Raskin RG, Sutton P, van den Belt M (1997) The value of the world's ecosystem services and natural capital. Nature 387:253–260
Daily GC, Alexander S, Ehrlich PR, Goulder L, Lubchenco J, Matson PA, Mooney HA, Postel S, Schneider SH, Tilman D, Woodwell GM (1997) Ecosystem services: benefits supplied to human societies by natural ecosystems. Issues Ecol 2:1–16
Davies AB, Asner GP (2014) Advances in animal ecology from 3D-LiDAR ecosystem mapping. Trends Ecol Evol 29:681–691
de Groot RS, Alkemade R, Braat L, Hein L, Willemen L (2010) Challenges in integrating the concept of ecosystem services and values in landscape planning, management and decision making. Ecol Compl 7:260–272
Domingo-Santos JM, de Villarán RF, Rapp-Arrarás Í, de Provens ECP (2011) The visual exposure in forest and rural landscapes: an algorithm and a GIS tool. Landscape Urban Plan 101:52–58
Dueser RD, Shugart HH Jr (1978) Microhabitats in a forest-floor small mammal fauna. Ecology 59:89–98
Eigenbrod F, Armsworth PR, Anderson BJ, Heinemeyer A, Gillings S, Roy DB, Thomas CD, Gaston KJ (2010) The impact of proxy-based methods on mapping the distribution of ecosystem services. J Appl Ecol 47:377–385
Englund O, Berndes G, Cederberg C (2017) How to analyse ecosystem services in landscapes – a systematic review. Ecol Indic 73:492–504
Foody GM (2015) Valuing map validation: the need for rigorous land cover map accuracy assessment in economic valuations of ecosystem services. Ecol Econ 111:23–28
Gopal S, Woodcock C (1994) Theory and methods for accuracy assessment of thematic maps using fuzzy sets. Photogramm Eng Remote Sens 60:181–188
Hegetschweiler KT, Plum C, Fischer C, Brändli UB, Ginzler C, Hunziker M (2017) Towards a comprehensive social and natural scientific forest-recreation monitoring instrument – a prototypical approach. Landscape Urban Plan 167:84–97
Heinonen T, Kurttila M, Pukkala T (2007) Possibilities to aggregate raster cells through spatial optimization in forest planning. Silva Fenn 41:89–103
Henningsen A, Hamann JD (2007) Systemfit: a package for estimating systems of simultaneous equations in R. J Stat Softw 23(4):1–40
Hilker T, Frazer GW, Coops NC, Wulder MA, Newnham GJ, Stewart JD, van Leeuwen M, Culvenor DS (2013) Prediction of wood fiber attributes from LiDAR-derived forest canopy indicators. For Sci 59:231–242
Hill RA, Hinsley SA, Broughton RK (2014) Assessing habitats and organism-habitat relationships by airborne laser scanning. In: Maltamo M, Næsset E, Vauhkonen J (eds) Forestry applications of airborne laser scanning. Managing Forest ecosystems, vol 27. Springer, Dordrecht, pp 335–356
Hou Z, Xu Q, Vauhkonen J, Maltamo M, Tokola T (2016) Species-specific combination and calibration between area-based and tree-based diameter distributions using airborne laser scanning. Can J For Res 46:753–765
Ihalainen M, Alho J, Kolehmainen O, Pukkala T (2002) Expert models for bilberry and cowberry yields in Finnish forests. For Ecol Manag 157:15–22
Kane VR, McGaughey RJ, Bakker JD, Gersonde RF, Lutz JA, Franklin JF (2010) Comparisons between field- and LiDAR-based measures of stand structural complexity. Can J For Res 40:761–773
Kangas A, Kangas J, Kurttila M (2008) Decision support for forest management. Managing forest ecosystems 16. Springer, Dordrecht
Kangas A, Leskinen P, Kangas J (2007) Comparison of fuzzy and statistical approaches in multicriteria decision making. For Sci 53:37–44
Kangas J (1992) Multiple-use planning of forest resources by using the analytic hierarchy process. Scand J For Res 7:259–268
Kankare V, Vauhkonen J, Holopainen M, Vastaranta M, Hyyppä J, Hyyppä H, Alho P (2015) Sparse density, leaf-off airborne laser scanning data in aboveground biomass component prediction. Forests 6:1839–1857
Karjalainen T, Kellomäki S (1996) Greenhouse gas inventory for land use change and forestry in Finland based on international guidelines. Mitig Adapt Strat Glob Change 1:51–71
Kohler M, Devaux C, Grigulis K, Leitinger G, Lavorel S, Tappeiner U (2017) Plant functional assemblages as indicators of the resilience of grassland ecosystem service provision. Ecol Indic 73:118–127
Koivuniemi J, Korhonen KT (2006) Inventory by compartments. In: Kangas A, Maltamo M (eds) Forest inventory: methodology and applications. Managing Forest ecosystems, vol 10. Springer, Dordrecht, pp 271–278
Korhonen L, Korpela I, Heiskanen J, Maltamo M (2011) Airborne discrete-return LIDAR data in the estimation of vertical canopy cover, angular canopy closure and leaf area index. Remote Sens Environ 115:1065–1080
Korhonen L, Peuhkurinen J, Malinen J, Suvanto A, Maltamo M, Packalén P, Kangas J (2008) The use of airborne laser scanning to estimate sawlog volumes. Forestry 81:499–510
Korpela I, Hovi A, Morsdorf F (2012) Understory trees in airborne LiDAR data - selective mapping due to transmission losses and echo-triggering mechanisms. Remote Sens Environ 119:92–104
Kotamaa E, Tokola T, Maltamo M, Packalén P, Kurttila M, Mäkinen A (2010) Integration of remote sensing-based bioenergy inventory data and optimal bucking for stand-level decision making. Eur J For Res 129:875–886
Lämås T, Sandström E, Jonzén J, Olsson H, Gustafsson L (2015) Tree retention practices in boreal forests: what kind of future landscapes are we creating? Scand J For Res 30:526–537
Lefsky MA, Cohen WB, Spies TA (2001) An evaluation of alternate remote sensing products for forest inventory, monitoring, and mapping of Douglas-fir forests in western Oregon. Can J For Res 31:78–87
Lehtomäki J, Tuominen S, Toivonen T, Leinonen A (2015) What data to use for forest conservation planning? A comparison of coarse open and detailed proprietary forest inventory data in Finland. PLoS One. https://doi.org/10.1371/journal.pone.0135926
Leiterer R, Furrer R, Schaepman ME, Morsdorf F (2015) Forest canopy-structure characterization: a data-driven approach. For Ecol Manag 358:48–61
Liang X, Hyyppä J, Matikainen L (2007) Deciduous-coniferous tree classification using difference between first and last pulse laser signatures. In: Rönnholm P, Hyyppä H, Hyyppä J (eds) Proceedings of ISPRS workshop on laser scanning 2007 and SilviLaser 2007. Int Arch Photogramm Remote Sens, vol XXXVI, part 3/W52, pp 253–257
Listopad CM, Masters RE, Drake J, Weishampel J, Branquinho C (2015) Structural diversity indices based on airborne LiDAR as ecological indicators for managing highly dynamic landscapes. Ecol Indic 57:268–279
Luther JE, Skinner R, Fournier RA, van Lier OR, Bowers WW, Coté JF, Hopkinson C, Moulton T (2014) Predicting wood quantity and quality attributes of balsam fir and black spruce using airborne laser scanner data. Forestry 87:313–326
MacArthur RH, MacArthur JW (1961) On bird species diversity. Ecology 42:594–598
Malczewski J, Rinner C (2015) Multicriteria decision analysis in geographic information science. Advances in Geographic Information Science. Springer-Verlag, Berlin Heidelberg
Maltamo M, Malinen J, Packalén P, Suvanto A, Kangas J (2006) Nonparametric estimation of stem volume using airborne laser scanning, aerial photography, and stand-register data. Can J For Res 36:426–436
Maltamo M, Næsset E, Vauhkonen J (eds) (2014) Forestry applications of airborne laser scanning - concepts and case studies. Managing Forest ecosystems, vol 27. Springer, Dordrecht
Maltamo M, Packalén P, Yu X, Eerikäinen K, Hyyppä J, Pitkänen J (2005) Identifying and quantifying structural characteristics of heterogeneous boreal forests using laser scanner data. For Ecol Manag 216:41–50
Martínez-Harms MJ, Balvanera P (2012) Methods for mapping ecosystem service supply: a review. Int J Biodiv Sci Ecosyst Serv Manage 8:17–25
Melin M, Mehtätalo L, Miettinen J, Tossavainen S, Packalen P (2016) Forest structure as a determinant of grouse brood occurrence – an analysis linking LiDAR data with presence/absence field data. For Ecol Manag 380:202–211
Melin M, Packalen P, Matala J, Mehtätalo L, Pusenius J (2013) Assessing and modeling moose (Alces alces) habitats with airborne laser scanning data. Int J Appl Earth Obs Geoinfo 23:389–396
Müller J, Vierling K (2014) Assessing biodiversity by airborne laser scanning. In: Maltamo M, Næsset E, Vauhkonen J (eds) Forestry applications of airborne laser scanning. Managing Forest ecosystems, vol 27. Springer, Dordrecht, pp 357–374
Næsset E (2002) Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens Environ 80:88–99
Næsset E, Gobakken T (2008) Estimation of above- and below-ground biomass across regions of the boreal forest zone using airborne laser. Remote Sens Environ 112:3079–3090
Natural Resources Institute Finland (2017) File service for publicly available data. http://kartta.metla.fi/index-en.html. Accessed 16 Oct 2017
Niemi MT, Vauhkonen J (2016) Extracting canopy surface texture from airborne laser scanning data for the supervised and unsupervised prediction of area-based forest characteristics. Remote Sens. https://doi.org/10.3390/rs8070582
Ørka HO, Dalponte M, Gobakken T, Næsset E, Ene LT (2013) Characterizing forest species composition using multiple remote sensing data sources and inventory approaches. Scand J For Res 28:677–688View ArticleGoogle Scholar
Ørka HO, Gobakken T, Næsset E, Ene L, Lien V (2012) Simultaneously acquired airborne laser scanning and multispectral imagery for individual tree species identification. Can J Remote Sens 38:125–138View ArticleGoogle Scholar
Packalén P, Heinonen T, Pukkala T, Vauhkonen J, Maltamo M (2011) Dynamic treatment units in Eucalyptus plantation. For Sci 57:416–426Google Scholar
Pascual C, García-Abril A, García-Montero LG, Martín-Fernández S, Cohen WB (2008) Object-based semi-automatic approach for forest structure characterization using lidar data in heterogeneous Pinus sylvestris stands. For Ecol Manag 255:3677–3685View ArticleGoogle Scholar
Patenaude G, Hill RA, Milne R, Gaveau DL, Briggs BBJ, Dawson TP (2004) Quantifying forest above ground carbon content using LiDAR remote sensing. Remote Sens Environ 93:368–380View ArticleGoogle Scholar
Peura M, Gonzalez RS, Müller J, Heurich M, Vierling LA, Mönkkönen M, Bässler C (2016) Mapping a 'cryptic kingdom': performance of lidar derived environmental variables in modelling the occurrence of forest fungi. Remote Sens Environ 186:428–438View ArticleGoogle Scholar
Popescu SC, Hauglin M (2014) Estimation of biomass components by airborne laser scanning. In: Maltamo M, Næsset E, Vauhkonen J (eds) Forestry applications of airborne laser scanning. Managing Forest ecosystems, vol 27. Springer, Dordrecht, pp 157–175View ArticleGoogle Scholar
Pukkala T (2005) Metsikön tuottoarvon ennustemallit kivennäismaan männiköille, kuusikoille ja rauduskoivikoille (in Finnish for "prediction models for the expectation value of pine, spruce and birch stands on mineral soils"). Metsätieteen Aikakauskirja 3(2005):311–322Google Scholar
Pukkala T (2008) Integrating multiple services in the numerical analysis of landscape design. In: von Gadow K, Pukkala T (eds) Designing Green Landscapes. Managing Forest Ecosystems, vol 15. Springer, Dordrecht, pp 137–167View ArticleGoogle Scholar
Pukkala T (2016) Which type of forest management provides most ecosystem services? Forest Ecosyst. https://doi.org/10.1186/s40663-016-0068-5
Pukkala T, Kangas J (1996) A method for integrating risk and attitude toward risk into forest planning. For Sci 42:198–205Google Scholar
Pukkala T, Kellomäki S, Mustonen E (1988) Prediction of the amenity of a tree stand. Scand J For Res 3:533–544View ArticleGoogle Scholar
Pukkala T, Packalén P, Heinonen T (2014) Dynamic treatment units in forest management planning. In: Borges JG, Diaz-Balteiro L, McDill ME, Rodriguez LCE (eds) The Management of Industrial Forest Plantations. Managing Forest ecosystems, vol 33. Springer, Dordrecht, pp 373–392Google Scholar
Pukkala T, Sulkava R, Jaakkola L, Lähde E (2012) Relationships between economic profitability and habitat quality of Siberian jay in uneven-aged Norway spruce forest. For Ecol Manag 276:224–230View ArticleGoogle Scholar
R Core Team (2016) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/. Accessed 16 Oct 2017
Räsänen A, Lensu A, Tomppo E, Kuitunen M (2015) Comparing conservation value maps and mapping methods in a rural landscape in southern Finland. Landscape Online 44:1–19View ArticleGoogle Scholar
Räty J, Vauhkonen J, Maltamo M, Tokola T (2016) On the potential to predetermine dominant tree species based on sparse-density airborne laser scanning data for improving subsequent predictions of species-specific timber volumes. Forest Ecosyst. https://doi.org/10.1186/s40663-016-0060-0
Rechsteiner C, Zellweger F, Gerber A, Breiner FT, Bollmann K (2017) Remotely sensed forest habitat structures improve regional species conservation. Remote Sens Ecol Conserv 3:247–258View ArticleGoogle Scholar
Roces-Díaz JV, Burkhard B, Kruse M, Müller F, Díaz-Varela ER, Álvarez-Álvarez P (2017) Use of ecosystem information derived from forest thematic maps for spatial analysis of ecosystem services in northwestern Spain. Landscape Ecol Eng 13:45–57View ArticleGoogle Scholar
Sani NA, Kafaky SB, Pukkala T, Mataji A (2016) Integrated use of GIS, remote sensing and multi-criteria decision analysis to assess ecological land suitability in multi-functional forestry. J For Res 27:1127–1135View ArticleGoogle Scholar
Schröter M, Rusch GM, Barton DN, Blumentrath S, Nordén B (2014) Ecosystem services and opportunity costs shift spatial priorities for conserving forest biodiversity. PLoS One. https://doi.org/10.1371/journal.pone.0112557
Schulp CJE, Burkhard B, Maes J, Van Vliet J, Verburg PH (2014) Uncertainties in ecosystem service maps: a comparison on the European scale. PLoS One. https://doi.org/10.1371/journal.pone.0109643
Shapiro SS, Wilk MB (1965) An analysis of variance test for normality (complete samples). Biometrika 52:591–611View ArticleGoogle Scholar
Simonson WD, Allen HD, Coomes DA (2014) Applications of airborne lidar for the assessment of animal species diversity. Methods Ecol Evol 5:719–729View ArticleGoogle Scholar
Sverdrup-Thygeson A, Ørka HO, Gobakken T, Næsset E (2016) Can airborne laser scanning assist in mapping and monitoring natural forests? For Ecol Manag 369:116–125View ArticleGoogle Scholar
Thompson SD, Nelson TA, Giesbrecht I, Frazer G, Saunders SC (2016) Data-driven regionalization of forested and non-forested ecosystems in coastal British Columbia with LiDAR and RapidEye imagery. Appl Geogr 69:35–50View ArticleGoogle Scholar
Tomppo E, Haakana M, Katila M, Peräsaari J (2008a) Multi-source national forest inventory – methods and applications. Managing forest ecosystems, vol 18. Springer, DordrechtGoogle Scholar
Tomppo E, Halme M (2004) Using coarse scale forest variables as ancillary information and weighting of variables in k-NN estimation: a genetic algorithm approach. Remote Sens Environ 92:1–20View ArticleGoogle Scholar
Tomppo E, Katila M, Mäkisara K, Peräsaari J (2014) The multi-source national forest inventory of Finland - methods and results 2011. Working Papers of the Finnish Forest Research Institute, vol 319. http://www.metla.fi/julkaisut/workingpapers/2014/mwp319.htm. Accessed 16 Oct 2017
Tomppo E, Olsson H, Ståhl G, Nilsson M, Hagner O, Katila M (2008b) Combining national forest inventory field plots and remote sensing data for forest databases. Remote Sens Environ 112:1982–1999View ArticleGoogle Scholar
Valbuena R, Eerikäinen K, Packalen P, Maltamo M (2016a) Gini coefficient predictions from airborne lidar remote sensing display the effect of management intensity on forest structure. Ecol Indic 60:574–585View ArticleGoogle Scholar
Valbuena R, Maltamo M, Martín-Fernández S, Packalen P, Pascual C, Nabuurs GJ (2013) Patterns of covariance between airborne laser scanning metrics and Lorenz curve descriptors of tree size inequality. Can J Remote Sens 39(sup1):S18–S31View ArticleGoogle Scholar
Valbuena R, Maltamo M, Mehtätalo L, Packalen P (2017) Key structural features of boreal forests may be detected directly using L-moments from airborne lidar data. Remote Sens Environ 194:437–446View ArticleGoogle Scholar
Valbuena R, Maltamo M, Packalen P (2016b) Classification of multilayered forest development classes from low-density national airborne lidar datasets. Forestry 89:392–401View ArticleGoogle Scholar
Valbuena R, Vauhkonen J, Packalen P, Pitkänen J, Maltamo M (2014) Comparison of airborne laser scanning methods for estimating forest structure indicators based on Lorenz curves. ISPRS J Photogramm Remote Sens 95:23–33View ArticleGoogle Scholar
Vauhkonen J, Imponen J (2016) Unsupervised classification of airborne laser scanning data to locate potential wildlife habitats for forest management planning. Forestry 89:350–363View ArticleGoogle Scholar
Vauhkonen J, Packalen P, Malinen J, Pitkänen J, Maltamo M (2014) Airborne laser scanning based decision support for wood procurement planning. Scand J For Res 29(Suppl.1):132–143View ArticleGoogle Scholar
Vauhkonen J, Ruotsalainen R (2017a) Assessing the provisioning potential of ecosystem services in a Scandinavian boreal forest: suitability and tradeoff analyses on grid-based wall-to-wall forest inventory data. For Ecol Manag 389:272–284View ArticleGoogle Scholar
Vauhkonen J, Ruotsalainen R (2017b) Reconstructing forest canopy from the 3D triangulations of airborne laser scanning point data for the visualization and planning of forested landscapes. Ann For Sci 74:9. https://doi.org/10.1007/s13595-016-0598-6 View ArticleGoogle Scholar
Vihervaara P, Auvinen AP, Mononen L, Torma M, Ahlroth P, Anttila S, Bottcher K, Forsius M, Heino J, Heliola J, Koskelainen M, Kuussaari M, Meissner K, Ojala O, Tuominen S, Viitasalo M, Virkkala R (2017) How essential biodiversity variables and remote sensing can help national biodiversity monitoring. Glob Ecol Conserv 10:43–59View ArticleGoogle Scholar
Zimble DA, Evans DL, Carlson GC, Parker RC, Grado SC, Gerard PD (2003) Characterizing vertical forest structure using small-footprint airborne LiDAR. Remote Sens Environ 87:171–182View ArticleGoogle Scholar
Zolkos SG, Goetz SJ, Dubayah R (2013) A meta-analysis of terrestrial aboveground biomass estimation using lidar remote sensing. Remote Sens Environ 128:289–298View ArticleGoogle Scholar | CommonCrawl |
\begin{document}
\title{Fixed-points in the cone of traces on a \Cs}
\centerline{\emph{Dedicated to the memory of John Roe}}
\begin{abstract}
\noindent Nicolas Monod introduced in \cite{Monod:cones} the class of groups with the fixed-point property for cones, characterized by always admitting a non-zero fixed point whenever acting (suitably) on proper weakly complete cones. He proved that his class of groups contains the class of groups with subexponential growth and is contained in the class of supramenable groups. In this paper we investigate what Monod's results say about the existence of invariant traces on (typically non-unital) {$C^*$-al\-ge\-bra} s equipped with an action of a group with the fixed-point property for cones. As an application we obtain results on the existence (and non-existence) of traces on the (non-uniform) Roe algebra. \end{abstract}
\section{Introduction}
\noindent Whenever a discrete amenable group acts on a unital {$C^*$-al\-ge\-bra}{} with at least one tracial state, then the {$C^*$-al\-ge\-bra}{} admits an invariant tracial state, and the crossed product {$C^*$-al\-ge\-bra}{} admits a tracial state. This, moreover, characterizes amenable groups. The purpose of this paper is to find statements, similar to this well-known fact, about the existence of invariant traces on non-unital {$C^*$-al\-ge\-bra} s using the results of the recent paper by Monod, \cite{Monod:cones}, in which the class of groups with the \emph{fixed-point property for cones} is introduced and developed.
In \cite{Monod:cones}, a group is said to have the fixed-point property for cones if whenever it acts continuously on a proper weakly complete cone (embedded into a locally convex topological vector space), such that the action is of \emph{cobounded type} and \emph{locally bounded}, then there is a non-zero fixed point in the cone. Being of cobounded type is an analog of an action on a locally compact Hausdorff space being \emph{co-compact}, see \cite[Definition 2]{Monod:cones}. The action is locally bounded if there is a non-zero bounded orbit, see \cite[Definition 1]{Monod:cones}.
Even the group of integers, ${\mathbb Z}$, can fail to leave invariant any non-zero trace when acting on a non-unital {$C^*$-al\-ge\-bra}. For example, the stabilization of the Cuntz algebra ${\mathcal O}_2$ is the crossed product of the stabilized CAR-algebra ${\mathcal A}$ with an action of ${\mathbb Z}$ that scales the traces on ${\mathcal A}$ by a factor of 2. In particular, there are no non-zero invariant traces on ${\mathcal A}$. The cone of (densely defined lower semi-continuous) traces on ${\mathcal A}$ is isomorphic to the cone $[0, \infty)$, and the induced action of ${\mathbb Z}$ on this cone is multiplication by 2, which of course fails to be locally bounded.
The action of any group $G$ on a locally compact Hausdorff space $X$ induces a locally bounded action on the cone of Radon measures on $X$ (which again is the same as the cone of densely defined lower semi-continuous traces on $C_0(X)$). However, any infinite group can act on the locally compact non-compact Cantor set $\mathbf{K}^*$ in a non co-compact way so that there are no non-zero invariant Radon measures, cf.\ \cite[Section 4]{MatuiRor:universal}. Such an action of $G$ on the Radon measures on $\mathbf{K}^*$ fails to be of cobounded type.
The two examples above explain why one must impose conditions on the action, such as being of cobounded type and being locally bounded, to get meaningful results on when invariant traces exist.
Monod proves in \cite{Monod:cones} that the class of groups with the fixed-point property for cones contains the class of groups of subexponential growth and is contained in the class of supramenable groups introduced by Rosenblatt in \cite{Rosenblatt:supramenable}. It is not known if there are supramenable groups of exponential growth, so the three classes of groups could coincide, although the common belief seems to be that they are all different. Monod proved a number of permanence properties for his class of groups, prominently including that it is closed under central extensions, see \cite[Theorem 8]{Monod:cones}. He also shows that the property of having the fixed-point property for cones can be recast in several ways, including the property that for each non-zero positive function $f \in \ell^\infty(G)$ there is a non-zero invariant positive linear functional (called an \emph{invariant integral} in \cite{Monod:cones}) on the subspace $\ell^\infty(G,f)$ of all bounded functions $G$-dominated by $f$.
We begin our paper in Section~\ref{sec:traces-ideals} by recalling properties of possibly unbounded positive traces on (typically non-unital) {$C^*$-al\-ge\-bra} s, including when they are lower semi-continuous and when they are singular. By default, all traces in this paper are assumed to be positive. Interestingly, many of the traces predicted by Monod turn out to be singular. Traces on {$C^*$-al\-ge\-bra} s were treated systematically already by Dixmier in \cite{Dixmier:traces}. Elliott, Robert and Santiago consider in \cite{EllRobSan:traces} the cone of lower semi-continuous traces as an invariant of the {$C^*$-al\-ge\-bra}, they derive useful properties of this cone, and they make the point that such traces most conveniently are viewed as maps defined on the positive cone of the {$C^*$-al\-ge\-bra}{} taking values in $[0,\infty]$. While this indeed is a convenient way to describe unbounded traces, and one we in part shall use here, the cone structure from the point of view of this paper is sometimes better portrayed when traces are viewed as linear functionals on a suitable domain: a hereditary symmetric algebraic ideal in the {$C^*$-al\-ge\-bra}.
By a theorem of G.K.\ Pedersen one can identify the cone of lower semi-continuous densely defined traces on a {$C^*$-al\-ge\-bra}{} with the cone of traces defined on its Pedersen ideal. In the case where the primitive ideal space of the {$C^*$-al\-ge\-bra}{} is compact, we show that there are non-zero lower semi-continuous densely defined traces if and only if the stabilization of the {$C^*$-al\-ge\-bra}{} contains no full properly infinite projections, thus extending well-known results from both the unital and the simple case. A similar compactness condition appears in our reformulation of coboundedness of the action of the group on the cone of traces.
In Section~\ref{sec:inv-traces} we explain when a {$C^*$-al\-ge\-bra}{} equipped with an action of a group $G$ with the fixed-point property for cones admits an invariant densely defined lower semi-continuous trace. Given that the cone of densely defined lower semi-continuous traces is always proper and weakly complete, all we have to do is to explain when the action of the group on this cone is of cobounded type, respectively, when it is locally bounded. The former can be translated into a compactness statement, as mentioned above, while the latter just means that there exists a non-zero trace which is bounded on all $G$-orbits.
In Section~\ref{sec:inv-integrals} we examine the situation where the group $G$ acts on $\ell^\infty(G)$ with the aim of describing for which positive $f \in \ell^\infty(G)$ the invariant integrals on $\ell^\infty(G)$ normalizing $f$ are lower semi-continuous, respectively, singular. As it turns out, frequently they must be singular. We use this to give an example of a $G$-invariant densely defined trace on a {$C^*$-al\-ge\-bra}{} that does not extend to a trace on the crossed product.
Finally, in Section~\ref{sec:roe}, we consider the particular example of invariant traces on $\ell^\infty(G,{\mathcal K})$ and traces on the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$. We show that $\ell^\infty(G,{\mathcal K})$ only has the ``obvious'' densely defined lower semi-continuous traces, and hence that it never has non-zero $G$-invariant ones, when $G$ is infinite. In many cases, however, there are invariant lower semi-continuous traces with smaller domains, such as domains defined by projections in $\ell^\infty(G,{\mathcal K})$. Specifically, we show that any projection in $\ell^\infty(G,{\mathcal K})$ whose dimension (as a function on $G$) grows subexponentially is normalized by an invariant lower semi-continuous trace if $G$ has the fixed-point property for cones. In general, for any non-locally finite group $G$, there are (necessarily exponentially growing) projections in $\ell^\infty(G,{\mathcal K})$ not normalized by any invariant trace, and which are properly infinite in the Roe algebra, while the Roe algebra of a locally finite group is always stably finite.
I thank Nicolas Monod, Nigel Higson, Guoliang Yu, and Eduardo Scarparo for useful discussions related to this paper. I also thank the referee for suggesting improvements of the exposition and for pointing out a couple of mistakes in an earlier version of this paper.
\section{Hereditary ideals and cones of traces} \label{sec:traces-ideals}
\noindent By a \emph{hereditary ideal} ${\mathcal I}$ in a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ we shall mean an algebraic two-sided self-adjoint ideal satisfying the hereditary property: if $0 \le a \le b$, $b \in {\mathcal I}$ and $a \in {\mathcal A}$, then $a \in {\mathcal I}$. If $x^*x \in {\mathcal I}$ whenever $xx^* \in {\mathcal I}$, for all $x \in {\mathcal A}$, then we say that ${\mathcal I}$ is \emph{symmetric}. All closed two-sided ideals are hereditary and symmetric.
If a group $G$ acts on the {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, then we refer to an ideal ${\mathcal I}$ as being \emph{$G$-invariant} (or just invariant) if it is invariant under the group action. If $M$ is a subset of ${\mathcal A}$, then let ${\mathcal I}_{\mathcal A}(M)$, respectively, $\overline{{\mathcal I}_{\mathcal A}}(M)$, denote the smallest hereditary ideal in ${\mathcal A}$, respectively, the smallest closed two-sided ideal in ${\mathcal A}$, which contains $M$; and let similarly ${\mathcal I}_{\mathcal A}^G(M)$ and $\overline{{\mathcal I}_{\mathcal A}^G}(M)$ denote the smallest $G$-invariant hereditary ideal, respectively, the smallest $G$-invariant closed two-sided ideal in ${\mathcal A}$ which contains $M$.
\begin{example} \label{ex:Pedersen}
The \emph{Pedersen ideal}, ${\mathrm{Ped}}({\mathcal A})$, of a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ is the (unique) smallest dense ideal in ${\mathcal A}$, see \cite[Section 5.6]{Ped:C*-aut}. It is a hereditary symmetric ideal, and, even better: for each $x \in {\mathrm{Ped}}({\mathcal A})$, the hereditary sub-{$C^*$-al\-ge\-bra}, $\overline{x{\mathcal A} x^*}$, of ${\mathcal A}$ is contained in ${\mathrm{Ped}}({\mathcal A})$. In particular, the Pedersen ideal of ${\mathcal A}$ is closed under continuous function calculus on its normal elements, as long as the continuous function vanishes at $0$. If $x\in {\mathcal A}$ is such that $x^*x \in {\mathrm{Ped}}({\mathcal A})$, then $x \in {\mathrm{Ped}}({\mathcal A})$, which shows that the Pedersen ideal is also symmetric. \end{example}
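\noindent For instance, ${\mathrm{Ped}}(C_0(X)) = C_c(X)$, the ideal of compactly supported continuous functions, for every locally compact Hausdorff space $X$; and the Pedersen ideal of the compact operators ${\mathcal K}(H)$ is the ideal of finite rank operators.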
\begin{example} Not all self-adjoint two-sided ideals in a {$C^*$-al\-ge\-bra}{} are hereditary. Consider for example the commutative {$C^*$-al\-ge\-bra}{} ${\mathcal A} = C([-1,1])$, the element $f \in C([-1,1])$ given by $f(t) = |t|$, for $t \in [-1,1]$, and the (two-sided) self-adjoint ideal ${\mathcal I} = {\mathcal A} f$ in ${\mathcal A}$. The function $g(t) = \max\{-\frac12 t,t\}$, $t \in [-1,1]$, then satisfies $0 \le g \le f$, but $g \notin {\mathcal I}$.
Hereditary ideals need not be spanned (or even generated) by their positive elements. Indeed, take again ${\mathcal A} = C([-1,1])$ and let ${\mathcal I} = {\mathcal A} \, \iota$, where $\iota(t) = t$, for $t \in [-1,1]$. If $f \in {\mathcal I} \cap {\mathcal A}^+$, then $f = g \cdot \iota$, for some $g \in \mathcal{P}$, where $\mathcal{P}$ is the set of functions $g \in {\mathcal A}$ such that $g(t) \le 0$, for $t \in [-1,0]$, and $g(t) \ge 0$, for $t \in [0,1]$. Since $g(0)=0$, for all $g \in \mathcal{P}$, it is not possible to write $\iota \in {\mathcal I}$ as a linear combination of functions in ${\mathcal I} \cap {\mathcal A}^+$. To see that ${\mathcal I}$ is hereditary, suppose that $0 \le h \le f$, where $f \in {\mathcal I} \cap {\mathcal A}^+$ and $h \in {\mathcal A}$. Then $f = g \cdot \iota$, for some $g \in \mathcal{P}$. Hence $|h(t)/t| \le |g(t)|$, for all $t \ne 0$, from which we see that $h \in {\mathcal I}$.
Let us also note that (algebraic) two-sided ideals need not be self-adjoint. Consider the commutative {$C^*$-al\-ge\-bra}{} ${\mathcal A} = C(\mathbb{D})$, where $\mathbb{D}$ is the closed unit disk in the complex plane, the function $f(z) = z$, $z \in \mathbb{D}$, and the (two-sided) ideal ${\mathcal I} = {\mathcal A} f$ in ${\mathcal A}$. Then $f$ belongs to ${\mathcal I}$, but $f^* = \bar{f}$ does not.
\end{example}
\noindent For a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ let $\widetilde{{\mathcal A}}$ denote the unitization of ${\mathcal A}$, when it is non-unital, or ${\mathcal A}$ itself when it already is unital.
\begin{lemma} \label{lm:cone} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}, and let $C$ be a subcone of ${\mathcal A}^+$ satisfying: \begin{enumerate} \item If $a \in C$ and $x \in \widetilde{{\mathcal A}}$, then $x^*ax \in C$, \item $C$ is hereditary: if $0 \le a \le b$, $a \in {\mathcal A}$ and $b \in C$, then $a \in C$. \end{enumerate} Let ${\mathcal I}$ be the linear span of $C$. Then ${\mathcal I}$ is a hereditary ideal in ${\mathcal A}$ and ${\mathcal I} \cap {\mathcal A}^+ = C$. If $C$ is symmetric (in the sense that $x^*x \in C$ implies $xx^* \in C$, for all $x \in {\mathcal A}$), then so is ${\mathcal I}$.
Conversely, if ${\mathcal I}$ is a hereditary ideal in ${\mathcal A}$ and if $C = {\mathcal I} \cap {\mathcal A}^+$, then $C$ is a subcone of ${\mathcal A}^+$ satisfying (i) and (ii) above. The span, ${\mathcal I}_0$, of $C$ is a hereditary subideal of ${\mathcal I}$; and ${\mathcal I}_0 = {\mathcal I}$ if and only if ${\mathcal I}$ is generated as a hereditary ideal by its positive elements. \end{lemma}
\begin{proof} It follows from (i) that if $x \in {\mathcal A}$ and $a \in C$, then \begin{eqnarray*} xa+ax^* &=& (x+1)a(x+1)^* - xax^* - a,\\ i(xa-ax^*) &=& (x-i)a(x-i)^* - xax^*-a, \end{eqnarray*} belong to ${\mathcal I}$, whence $xa$ and $ax^*$ belong to ${\mathcal I}$. This shows that ${\mathcal I}$ is an ideal in ${\mathcal A}$. Clearly, ${\mathcal I}$ is self-adjoint.
A subcone $C$ of ${\mathcal A}^+$ satisfies $\mathrm{span}(C) \cap {\mathcal A}^+ = C$ if and only if whenever $a,b \in C$ are such that $a \le b$, then $b-a \in C$. Hereditary cones clearly have this property, so ${\mathcal I} \cap {\mathcal A}^+ = C$. This also shows that ${\mathcal I}$ is hereditary (because $C$ is hereditary).
It is clear that $C = {\mathcal I} \cap {\mathcal A}^+$ has the stated properties if ${\mathcal I}$ is a hereditary ideal of ${\mathcal A}$; and ${\mathcal I}_0$ is a hereditary ideal of ${\mathcal A}$ by the first part of the lemma. It is contained in ${\mathcal I}$ and contains by definition all positive elements of ${\mathcal I}$. That proves the last claim of the lemma. \end{proof}
\begin{corollary} \label{cor:cone} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} and let $M$ be a non-empty subset of ${\mathcal A}^+$. Let $C$ be the set of all elements $a \in {\mathcal A}^+$ for which there exist $n \ge 1$, $e_1, e_2, \dots, e_n \in M$, and $x_1,x_2, \dots, x_n \in \widetilde{{\mathcal A}}$ such that $a \le \sum_{j=1}^n x_j^*e_jx_j$. Then: \begin{enumerate} \item $C$ is a subcone of ${\mathcal A}^+$ satisfying (i) and (ii) of Lemma~\ref{lm:cone}; \item ${\mathcal I}_{\mathcal A}(M) = \mathrm{span}(C)$; \item ${\mathcal I}_{\mathcal A}(M) \cap {\mathcal A}^+ = C$. \end{enumerate} \end{corollary}
\begin{proof} It is clear that (i) holds, so Lemma~\ref{lm:cone} implies that ${\mathcal I}:=\mathrm{span}(C)$ is a hereditary ideal in ${\mathcal A}$ satisfying ${\mathcal I} \cap {\mathcal A}^+ = C$. As $M \subseteq C \subseteq {\mathcal I}$ we conclude that ${\mathcal I}_{\mathcal A}(M) \subseteq {\mathcal I}$. Conversely, $C \subseteq {\mathcal I}_{\mathcal A}(M)$, so ${\mathcal I} \subseteq {\mathcal I}_{\mathcal A}(M)$. \end{proof}
\begin{corollary} \label{lm:ideal} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{}, let $\alpha$ be an action of a group $G$ on ${\mathcal A}$, and let $M$ be a non-empty subset of ${\mathcal A}^+$. Then \begin{enumerate} \item ${\mathcal I}_{\mathcal A}^G(M) = {\mathcal I}_{\mathcal A}(G.M)$, where $G.M = \{\alpha_t(e) : t\in G, e \in M\}$. \item An element $a \in {\mathcal A}^+$ belongs to ${\mathcal I}_{\mathcal A}^G(M)$ if and only if there exist $n \ge 1$, $t_1, t_2, \dots, t_n \in G$, $y_1,y_2, \dots, y_n \in \widetilde{{\mathcal A}}$, and $e_1,e_2, \dots e_n \in M$ such that $a \le \sum_{j=1}^n y_j^*\alpha_{t_j}(e_j)y_j$. \end{enumerate} \end{corollary}
\begin{proof} (i). As $G.M \subseteq {\mathcal I}_{\mathcal A}^G(M)$ we see that ${\mathcal I}_{\mathcal A}(G.M) \subseteq {\mathcal I}_{\mathcal A}^G(M)$. Conversely, as ${\mathcal I}_{\mathcal A}(G.M)$ is $G$-invariant, it contains ${\mathcal I}_{\mathcal A}^G(M)$. Part (ii) follows from (i) and from Corollary~\ref{cor:cone}. \end{proof}
\noindent For each non-empty subset $M$ of ${\mathcal A}^+$ denote by ${\mathcal J}_{\mathcal A}(M)$ the smallest \emph{symmetric} hereditary ideal in ${\mathcal A}$ containing $M$. If a group $G$ acts on ${\mathcal A}$, then denote by ${\mathcal J}_{\mathcal A}^G(M)$ the smallest symmetric hereditary $G$-invariant ideal in ${\mathcal A}$ containing $M$. Since closed two-sided ideals in a {$C^*$-al\-ge\-bra}{} always are symmetric, we have \begin{equation} \label{eq:J-I} {\mathcal I}_{\mathcal A}(M) \subseteq {\mathcal J}_{\mathcal A}(M) \subseteq \overline{{\mathcal I}_{\mathcal A}}(M), \qquad {\mathcal I}_{\mathcal A}^G(M) \subseteq {\mathcal J}_{\mathcal A}^G(M) \subseteq \overline{{\mathcal I}_{\mathcal A}^G}(M). \end{equation}
\begin{lemma} \label{lm:symmetric-proj} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} and let $M$ be a non-empty subset of projections in ${\mathcal A}$. Then ${\mathcal I}_{\mathcal A}(M) = {\mathcal J}_{\mathcal A}(M) = {\mathrm{Ped}}({\mathcal A}_0)$, where ${\mathcal A}_0 = \overline{{\mathcal I}_{\mathcal A}}(M)$. If ${\mathcal A}$ is equipped with an action of a group $G$, then ${\mathcal I}_{\mathcal A}^G(M) = {\mathcal J}_{\mathcal A}^G(M) = {\mathrm{Ped}}({\mathcal A}_1)$, where ${\mathcal A}_1 = \overline{{\mathcal I}_{\mathcal A}^G}(M)$. \end{lemma}
\begin{proof} We have $M \subseteq {\mathrm{Ped}}({\mathcal A}_0) \subseteq {\mathcal I}_{\mathcal A}(M) \subseteq {\mathcal J}_{\mathcal A}(M) \subseteq {\mathcal A}_0$ (the first inclusion holds because each projection in a {$C^*$-al\-ge\-bra}{} belongs to its Pedersen ideal, and the second inclusion holds because ${\mathcal I}_{\mathcal A}(M)$ is a dense ideal in ${\mathcal A}_0$). As ${\mathrm{Ped}}({\mathcal A}_0)$ is a hereditary symmetric ideal, which contains $M$, cf.\ Example~\ref{ex:Pedersen}, ${\mathcal J}_{\mathcal A}(M) \subseteq {\mathrm{Ped}}({\mathcal A}_0)$, so ${\mathrm{Ped}}({\mathcal A}_0) = {\mathcal I}_{\mathcal A}(M) = {\mathcal J}_{\mathcal A}(M)$. The second part of the statement follows from the first part applied to $G.M$ (instead of $M$). \end{proof}
\begin{lemma} \label{lm:J-positive} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} and let $M$ be a non-empty set of positive elements in ${\mathcal A}$. Then ${\mathcal J}_{\mathcal A}(M)$ is the linear span of its positive elements. If ${\mathcal A}$ is equipped with an action of a group $G$, then the same holds for ${\mathcal J}_{\mathcal A}^G(M)$. \end{lemma}
\begin{proof} Set $C = {\mathcal J}_{\mathcal A}^G(M) \cap {\mathcal A}^+$ and let ${\mathcal J}_0$ be the linear span of $C$. Then ${\mathcal J}_0$ is a hereditary ideal in ${\mathcal A}$ by Lemma~\ref{lm:cone}. Since ${\mathcal J}_{\mathcal A}^G(M)$ is symmetric and $G$-invariant, the same holds for $C$, and hence for ${\mathcal J}_0$. Moreover, $M \subseteq C \subseteq {\mathcal J}_0 \subseteq {\mathcal J}_{\mathcal A}^G(M)$. As ${\mathcal J}_{\mathcal A}^G(M)$ is the smallest ideal with these properties, ${\mathcal J}_0 = {\mathcal J}_{\mathcal A}(M)$. The first claim is proved in a similar manner. \end{proof}
\begin{definition} \label{def:trace} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}. Denote by $T^+({\mathcal A})$ the cone of traces on the positive cone of ${\mathcal A}$, i.e., the set of additive homogeneous maps $\tau \colon {\mathcal A}^+ \to [0,\infty]$ satisfying $\tau(x^*x) = \tau(xx^*)$, for all $x \in {\mathcal A}$.
For each hereditary symmetric ideal ${\mathcal I}$ in ${\mathcal A}$, let $T({\mathcal I},{\mathcal A})$ denote the cone of linear traces on ${\mathcal I}$, i.e., the set of positive linear maps $\tau \colon {\mathcal I} \to {\mathbb C}$ satisfying $\tau(x^*x) = \tau(xx^*)$, whenever $x\in{\mathcal A}$ is such that $x^*x$ (and hence $xx^*$) belong to ${\mathcal I}$. We refer to ${\mathcal I}$ as the \emph{domain} of $\tau$. If the domain of $\tau$ is a dense ideal of ${\mathcal A}$ (in which case it will contain the Pedersen ideal of ${\mathcal A}$), then $\tau$ is said to be a densely defined trace on ${\mathcal A}$. \end{definition}
\noindent The cone of traces, here denoted by $T^+({\mathcal A})$, is in \cite{EllRobSan:traces} denoted by $T({\mathcal A})$. \emph{Note that all traces by default are assumed to be positive.} The following easy fact will be used repeatedly:
\begin{lemma} \label{lm:traceinequality} Let $\tau$ be a trace on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$. Then $\tau(x^*ax) \le \|x\|^2 \tau(a)$, for all $a \in {\mathcal A}^+$ (in the domain of $\tau$) and all $x$ in the unitization of ${\mathcal A}$. \end{lemma}
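\noindent Indeed, if $a$ belongs to the domain ${\mathcal I}$ of $\tau$ and $x$ belongs to the unitization of ${\mathcal A}$, then $a^{1/2}xx^*a^{1/2} \le \|x\|^2 a$, so $a^{1/2}xx^*a^{1/2} \in {\mathcal I}$ by heredity, whence also $x^*ax = (a^{1/2}x)^*(a^{1/2}x) \in {\mathcal I}$ by symmetry. The trace identity and positivity then give $$\tau(x^*ax) = \tau\big(a^{1/2}xx^*a^{1/2}\big) \le \|x\|^2 \, \tau(a).$$ The same computation applies to traces in $T^+({\mathcal A})$.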
\noindent The ``${\varepsilon}$-cut-down'' $(a-{\varepsilon})_+$ of $a \in {\mathcal A}^+$ appearing in part (i) of the lemma below is defined by applying the continuous positive function $t \mapsto \max\{t-{\varepsilon},0\}$ to $a$.
A trace in $T^+({\mathcal A})$, or a linear trace on ${\mathcal A}$ defined on a given domain, is said to be \emph{lower semi-continuous} if one of the equivalent conditions in the following lemma holds.
\begin{lemma} \label{lm:lsc-eq} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}. The following conditions are equivalent for each trace $\tau$ in $T^+({\mathcal A})$ (or for each linear trace on ${\mathcal A}$): \begin{enumerate} \item $\tau(a) = \sup_{{\varepsilon}>0} \tau((a-{\varepsilon})_+)$ for each $a \in {\mathcal A}^+$ (in the domain of $\tau$), \item whenever $\{a_n\}_{n=1}^\infty$ is an increasing sequence in ${\mathcal A}^+$ (in the domain of $\tau$) converging in norm to $a \in {\mathcal A}^+$ (in the domain of $\tau$), then $\tau(a) = \lim_{n\to\infty} \tau(a_n)$, \item whenever $\{a_n\}_{n=1}^\infty$ is a sequence in ${\mathcal A}^+$ (in the domain of $\tau$) converging in norm to $a \in {\mathcal A}^+$ (in the domain of $\tau$), then $\tau(a) \le \liminf_{n\to\infty} \tau(a_n)$. \end{enumerate} \end{lemma}
\begin{proof} (ii) $\Rightarrow$ (i). To verify (i) one needs only show that $\tau(a) = \lim_{n \to \infty} \tau((a-{\varepsilon}_n)_+)$ for all sequences $\{{\varepsilon}_n\}$ decreasing to $0$; but this is just a special case of (ii).
(iii) $\Rightarrow$ (ii). If $\{a_n\}_{n=1}^\infty$ is an increasing sequence of positive elements converging to $a$, then $\tau(a_n) \le \tau(a)$, for all $n$, by positivity of $\tau$. If (iii) holds, then this entails that $\tau(a) = \lim_{n\to\infty} \tau(a_n)$.
(i) $\Rightarrow$ (iii). Let ${\varepsilon} >0$ be given. Choose $n_0 \ge 1$ such that $\|a_n - a\| <{\varepsilon}$, for all $n \ge n_0$. It then follows from \cite[Lemma 2.2]{KirRor:pi2} that $(a-{\varepsilon})_+ = d_n^*a_nd_n$, for some contractions $d_n$ in ${\mathcal A}$, for all $n \ge n_0$. Hence $\tau((a-{\varepsilon})_+) \le \tau(a_n)$, by Lemma~\ref{lm:traceinequality}. This shows that $\liminf_{n \to \infty} \tau(a_n) \ge \tau((a-{\varepsilon})_+)$. It follows that $\liminf_{n \to \infty} \tau(a_n) \ge \sup_{{\varepsilon}>0} \tau((a-{\varepsilon})_+)$, which proves that (i) implies (iii). \end{proof}
\begin{theorem}[G.K.\ Pedersen, {\cite[Corollary 3.2]{GKP:Pedersen-ideal}}] \label{thm:GKP} The restriction of any densely defined trace on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ to the Pedersen ideal of ${\mathcal A}$ is automatically lower semi-continuous. \end{theorem}
\begin{definition} \label{def:lsc-trace} Denote by $T_{\mathrm{lsc}}({\mathcal A})$ the cone of linear traces on ${\mathcal A}$ whose domain is the Pedersen ideal of ${\mathcal A}$. In other words, $T_{\mathrm{lsc}}({\mathcal A}) = T({\mathrm{Ped}}({\mathcal A}),{\mathcal A})$. \end{definition}
\noindent We can identify $T_{\mathrm{lsc}}({\mathcal A})$ with the set of densely defined lower semi-continuous traces on ${\mathcal A}$ as follows: Each trace in $T_{\mathrm{lsc}}({\mathcal A})$ is clearly densely defined, and it is lower semi-continuous by Pedersen's theorem. Conversely, if $\tau$ is a lower semi-continuous densely defined trace, then its restriction $\tau_0$ to the Pedersen ideal of ${\mathcal A}$ belongs to $T_{\mathrm{lsc}}({\mathcal A})$, and $\tau$ is uniquely determined on its domain by $\tau_0$ by Lemma~\ref{lm:lsc-eq} (i), because $(a-{\varepsilon})_+ \in {\mathrm{Ped}}({\mathcal A})$ for all positive $a \in {\mathcal A}$ and all ${\varepsilon} >0$.
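When ${\mathcal A} = C_0(X)$, for a locally compact Hausdorff space $X$, this identifies $T_{\mathrm{lsc}}(C_0(X))$ with the cone of (positive) Radon measures on $X$, via $\tau_\mu(f) = \int_X f \, d\mu$, for $f \in C_c(X) = {\mathrm{Ped}}(C_0(X))$, in accordance with the remarks made in the introduction.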
A trace in $T_{\mathrm{lsc}}({\mathcal A})$ can usually be extended to a lower semi-continuous trace on a larger domain than the Pedersen ideal; and such an extension is unique, see Proposition~\ref{prop:tau->tau'} below and the subsequent discussion.
It follows from Theorem~\ref{thm:GKP} and Lemma~\ref{lm:symmetric-proj} that each linear trace on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ with domain ${\mathcal J}_{\mathcal A}(M)$ or ${\mathcal J}_{\mathcal A}^G(M)$ (when ${\mathcal A}$ has a $G$-action) is lower semi-continuous whenever $M$ is a subset of projections in ${\mathcal A}$.
Observe that $T_{\mathrm{lsc}}(B(H)) = \{0\}$, where $B(H)$ is the bounded operators on a separable infinite dimensional Hilbert space $H$, while $T_{\mathrm{lsc}}({\mathcal K}(H))$ and the cone of lower semi-continuous traces in $T^+(B(H))$ both are equal to the one-dimensional cone spanned by the canonical trace on $B(H)$, in the latter case viewed as a function $B(H)^+ \to [0,\infty]$. The Dixmier trace is an example of a singular trace on ${\mathcal K}(H)$. It belongs to $T^+(B(H))$ and to $T({\mathcal I},{\mathcal K}(H))$, where ${\mathcal I} \subset {\mathcal K}(H)$ is its domain, and it is zero on the finite rank operators.
Consider a general (not necessarily densely defined) linear trace $\tau$ on ${\mathcal A}$ with domain ${\mathcal I}$. The closure, $\overline{{\mathcal I}}$, of ${\mathcal I}$ is a closed two-sided ideal in ${\mathcal A}$, and hence, in particular, a {$C^*$-al\-ge\-bra}; and $\tau$ is of course densely defined relatively to this {$C^*$-al\-ge\-bra}. We have the following inclusions: $${\mathrm{Ped}}(\overline{{\mathcal I}}) \subseteq {\mathcal I} \subseteq \overline{{\mathcal I}}.$$ The restriction of $\tau$ to ${\mathrm{Ped}}(\overline{{\mathcal I}})$ is lower semi-continuous by Theorem~\ref{thm:GKP}. If this restriction is zero, then $\tau$ is said to be \emph{singular}. Each trace $\tau$ on ${\mathcal A}$ with domain ${\mathcal I}$ can in a unique way be written as the sum $\tau = \tau_1+\tau_2$ of a lower semi-continuous trace $\tau_1$ and a singular trace $\tau_2$, both with domain ${\mathcal I}$. The lower semi-continuous part is obtained by restricting $\tau$ to the Pedersen ideal (which is lower semi-continuous) and then extending to a lower semi-continuous trace $\tau_1$ defined on ${\mathcal I}$ as described in \eqref{eq:tau} and the subsequent comments.
One can smoothly and uniquely pass from a trace in $T^+({\mathcal A})$ to a linear trace defined on its natural (maximal) domain:
\begin{proposition} \label{prop:tau->tau'} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}, and let $\tau' \in T^+({\mathcal A})$. Let $C$ be the set of positive elements $a \in {\mathcal A}$ with $\tau'(a) < \infty$, and let ${\mathcal I}$ be the linear span of $C$. Then ${\mathcal I}$ is a hereditary symmetric ideal in ${\mathcal A}$, ${\mathcal I} \cap {\mathcal A}^+ = C$, and there is a unique linear trace $\tau$ with domain ${\mathcal I}$ that agrees with $\tau'$ on $C$.
We can recover $\tau'$ from $\tau$ via the formula \begin{equation} \label{eq:tau'} \tau'(a) = \begin{cases} \tau(a), & a \in C, \\ \infty, & a \in {\mathcal A}^+ \setminus C.\end{cases} \end{equation} If $\tau'$ is lower semi-continuous, then so is $\tau$. \end{proposition}
\begin{proof} Observe that the set $C$ is a symmetric subcone of ${\mathcal A}^+$ satisfying conditions (i) and (ii) of Lemma~\ref{lm:cone} (use Lemma~\ref{lm:traceinequality} to see that Lemma~\ref{lm:cone}~(i) holds). It therefore follows from Lemma~\ref{lm:cone} that ${\mathcal I}$ is a symmetric hereditary ideal in ${\mathcal A}$ and that ${\mathcal I}^+ = {\mathcal I} \cap {\mathcal A}^+ = C$. By additivity and homogeneity of $\tau'$, its restriction to $C$ extends (uniquely) to a linear map $\tau \colon {\mathcal I} \to {\mathbb C}$. If $a \in {\mathcal I}$ is positive, then $a \in C$, so $\tau(a) = \tau'(a) \ge 0$, which shows that $\tau$ is positive. Let $x \in {\mathcal A}$ be such that $x^*x \in {\mathcal I}$. Then $x^*x$ and $xx^*$ are positive elements in ${\mathcal I}$, so both belong to $C$, whence $\tau(x^*x) = \tau'(x^*x) = \tau'(xx^*) = \tau(xx^*)$, so $\tau$ is a trace on ${\mathcal I}$.
It is clear that \eqref{eq:tau'} holds. If $\tau'$ is lower semi-continuous, then so is its restriction to $C$, which shows that $\tau$ is lower semi-continuous. \end{proof}
\noindent \emph{Whenever we talk about a trace on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, we shall mean a trace defined on the cone of positive elements of that {$C^*$-al\-ge\-bra}{} taking values in $[0,\infty]$, i.e., a trace in $T^+({\mathcal A})$, and, at the same time, a linear trace on the domain defined in the proposition above, or some other domain to be specified in the context.}
As a converse to the proposition above, consider a linear trace $\tau$ defined on a hereditary symmetric ideal ${\mathcal I}$ in ${\mathcal A}$. Then $\tau'$ given by \eqref{eq:tau'} above, with $C= {\mathcal I} \cap {\mathcal A}^+$, belongs to $T^+({\mathcal A})$, and it agrees with $\tau$ on $C$. If we apply Proposition~\ref{prop:tau->tau'} to $\tau'$, then we get back a new linear trace $\tau_0$ defined on some symmetric hereditary ideal ${\mathcal J}_0$ of ${\mathcal A}$, which contains the sub-ideal ${\mathcal I}_0$ of ${\mathcal I}$ defined in Lemma~\ref{lm:cone} (but perhaps not ${\mathcal I}$ itself); and $\tau$ and $\tau_0$ agree on ${\mathcal I}_0$.
However, this extension of $\tau$ to a trace $\tau'$ defined on the positive cone of ${\mathcal A}$ is not unique, and $\tau'$ need not be lower semi-continuous, even when $\tau$ is lower semi-continuous. If $\tau$ is lower semi-continuous, then the map $\tau' \colon {\mathcal A}^+ \to [0,\infty]$ defined by \begin{equation} \label{eq:tau} \tau'(a) = \sup\{\tau(a_0) : 0 \le a_0 \le a, a_0 \in {\mathcal I}\}, \qquad a \in {\mathcal A}^+, \end{equation}
is a lower semi-continuous trace in $T^+({\mathcal A})$, and it is the unique such trace that extends $\tau$. In the sequel, when considering a lower semi-continuous trace, we may, as convenient, view it either as a linear trace defined on its domain, or as a trace defined on the positive cone, via \eqref{eq:tau}.
There is a canonical way of extending a lower semi-continuous trace $\tau$ defined on some hereditary symmetric ideal ${\mathcal I}$ of ${\mathcal A}$ to its maximal domain: first extend $\tau$ to a lower semi-continuous trace $\tau' \colon {\mathcal A}^+ \to [0,\infty]$ as in \eqref{eq:tau} above; and then take the linearization $\overline{\tau}$ of $\tau'$ defined in Proposition~\ref{prop:tau->tau'}.
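For the canonical trace on ${\mathcal K}(H)$, for instance, this procedure extends the standard trace from the finite rank operators (the Pedersen ideal of ${\mathcal K}(H)$) to its maximal domain, the ideal of trace class operators.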
We quote the following well-known result for later reference, see, eg., \cite[Lemma 5.3]{RorSie:action} for a proof.
\begin{lemma} \label{prop:extending} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} equipped with an action of a group $G$, and let $\tau$ be a $G$-invariant lower semi-continuous trace on ${\mathcal A}$. It follows that $\tau \circ E$ is a lower semi-continuous trace on the (reduced) crossed product ${\mathcal A} \rtimes G$ that extends $\tau$, where $E \colon {\mathcal A} \rtimes G \to {\mathcal A}$ is the standard conditional expectation. If $\tau$ is densely defined, then so is $\tau \circ E$. \end{lemma}
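\noindent Recall that $E$ is determined by $E(a) = a$ and $E(au_t) = 0$, for $a \in {\mathcal A}$ and $t \in G \setminus \{e\}$, where $(u_t)_{t \in G}$ denote the canonical unitaries in the multiplier algebra of ${\mathcal A} \rtimes G$.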
\noindent One should here view $\tau$ and $\tau \circ E$ as traces defined on the positive cone of ${\mathcal A}$, respectively, ${\mathcal A} \rtimes G$. For the claim that $\tau \circ E$ is densely defined when $\tau$ is, use that $\tau \circ E$ is finite on the positive cone of ${\mathrm{Ped}}({\mathcal A})$, and the hereditary ideal in ${\mathcal A} \rtimes G$ generated by ${\mathrm{Ped}}({\mathcal A})$ is dense in ${\mathcal A} \rtimes G$. It is a curious fact that an invariant densely defined trace $\tau$ on ${\mathcal A}$ need not in general extend to a trace on the crossed product ${\mathcal A} \rtimes G$; in particular, $\tau \circ E$ need not be a trace if $\tau$ is not lower semi-continuous. See Example~\ref{ex:c_0}.
We end this section by considering when a {$C^*$-al\-ge\-bra}{} admits a non-zero densely defined trace. Blackadar and Cuntz proved in \cite{BlaCuntz:infproj} that a stable \emph{simple} {$C^*$-al\-ge\-bra}{} either contains a properly infinite projection or admits a non-zero dimension function (defined on its Pedersen ideal). In the latter case, assuming moreover that the {$C^*$-al\-ge\-bra}{} is exact, it admits a non-zero densely defined trace. (This step follows from the work of Blackadar-Handelman \cite{BlaHan:quasitrace}, Haagerup, \cite{Haa:quasi}, and Kirchberg, \cite{Kir:quasitraces}, as explained in the last part of the proof of the theorem below.) Also, it is well-known that a unital exact {$C^*$-al\-ge\-bra}{} admits a tracial state if and only if no matrix algebra over it is properly infinite. A common feature of simple and of unital {$C^*$-al\-ge\-bra} s is that their primitive ideal spaces are compact. Recall that the primitive ideal space, $\mathrm{Prim}({\mathcal A})$, of a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ is compact if and only if for all upward directed families $\{{\mathcal I}_\alpha\}$ of closed two-sided ideals in ${\mathcal A}$ whose union is dense in ${\mathcal A}$ there is $\alpha$ such that ${\mathcal A} = {\mathcal I}_\alpha$.
\begin{theorem} \label{thm:existtrace} Let ${\mathcal A}$ be an exact {$C^*$-al\-ge\-bra}{} whose primitive ideal space is compact. Then ${\mathcal A}$ admits a non-zero densely defined lower semi-continuous trace, i.e., $T_{\mathrm{lsc}}({\mathcal A}) \ne \{0\}$, if and only if the stabilization of ${\mathcal A}$ does not contain a full properly infinite projection. \end{theorem}
\begin{proof} The proof is most naturally phrased via dimension functions (as defined by Cuntz in \cite{Cuntz:dimension}) and the Cuntz semigroup, see, eg., \cite{CowEllIvan:Cu}.
Observe first that the cone of densely defined lower semi-continuous traces and the primitive ideal space are not changed by stabilizing the {$C^*$-al\-ge\-bra}, so we may assume that ${\mathcal A}$ is stable.
The class of (closed two-sided) ideals of the form $\overline{{\mathcal I}}_{\mathcal A}((e-{\varepsilon})_+)$, where $e \in {\mathcal A}^+$ and ${\varepsilon}>0$, is upwards directed and its union is dense in ${\mathcal A}$. Hence ${\mathcal A} = \overline{{\mathcal I}}_{\mathcal A}((e-{\varepsilon}_0)_+)$, for some $e \in {\mathcal A}^+$ and some ${\varepsilon}_0>0$ by compactness of the primitive ideal space of ${\mathcal A}$. Since $(e-{\varepsilon})_+$ belongs to the Pedersen ideal and since ${\mathcal I}_{\mathcal A}((e-{\varepsilon})_+)$ is a dense ideal in ${\mathcal A}$, for all $0 < {\varepsilon} \le {\varepsilon}_0$, it follows that ${\mathcal I}_{\mathcal A}((e-{\varepsilon})_+) = {\mathrm{Ped}}({\mathcal A})$, for this $e \in {\mathcal A}^+$ and for all $0 < {\varepsilon} \le {\varepsilon}_0$. Let $u_{\varepsilon} = \langle (e-{\varepsilon})_+ \rangle$ be the corresponding element of the Cuntz semigroup $\mathrm{Cu}({\mathcal A})$ of ${\mathcal A}$.
Fix $0 < {\varepsilon} \le {\varepsilon}_0$. It follows from Corollary~\ref{cor:cone}, and the fact that $\big\langle \sum_{j=1}^n x_j^*e_j x_j \big\rangle \le \sum_{j=1}^n \langle e_j \rangle$ in $\mathrm{Cu}({\mathcal A})$, for all positive $e_j$ and all $x_j$ in $\widetilde{{\mathcal A}}$, that for each positive $a$ in ${\mathrm{Ped}}({\mathcal A})$ there exists $k \ge 1$ such that $\langle a \rangle \le k u_{\varepsilon}$. In other words, $u_{\varepsilon}$ is an order unit for the sub-semigroup $\mathrm{Cu}_0({\mathcal A})$, consisting of all classes $\langle a \rangle$, where $a$ is a positive element in ${\mathrm{Ped}}({\mathcal A})$. In particular, $u_{{\varepsilon}_0} \le u_{\varepsilon} \le k u_{{\varepsilon}_0}$, for some integer $k \ge 1$ (that depends on ${\varepsilon}$).
Consider first the case that $nu_{{\varepsilon}_0}$ is properly infinite, for some integer $n \ge 1$. Upon replacing $e$ by an $n$-fold direct sum of $e$ with itself (which is possible since ${\mathcal A}$ is assumed to be stable), we may assume that $u_{{\varepsilon}_0}$ itself is properly infinite, i.e., that $ku_{{\varepsilon}_0} \le u_{{\varepsilon}_0}$, for all integers $k \ge 1$. By the discussion in the previous paragraph, we can then conclude that $u_{{\varepsilon}}$ is properly infinite and that $x \le u_{{\varepsilon}}$, for all $x \in \mathrm{Cu}_0({\mathcal A})$ and for all $0 < {\varepsilon} \le {\varepsilon}_0$.
We can now follow the argument of \cite[Proposition 2.7]{PasRor:RR0}, which uses the notion of \emph{scaling elements} introduced by Blackadar and Cuntz, \cite{BlaCuntz:infproj}, to construct a full properly infinite projection $p \in {\mathcal A}$: Fix $0 \le {\varepsilon} < {\varepsilon}_0$. Then $(e-{\varepsilon})_+$ is properly infinite, so by \cite[Proposition 3.3]{KirRor:pi} there exist positive elements $b_1,b_2$ in the hereditary sub-{$C^*$-al\-ge\-bra}{} of ${\mathcal A}$ generated by $(e-{\varepsilon}_0)_+$ such that $b_1 \perp b_2$ and $(e-{\varepsilon}_0)_+ \precsim b_j$, for $j=1,2$. In particular, $b_1,b_2 \in {\mathrm{Ped}}({\mathcal A})$. As explained in \cite[Remark 2.5]{PasRor:RR0} there exists $x \in {\mathcal A}$ such that $x^*x(e-{\varepsilon}_0)_+ = (e-{\varepsilon}_0)_+$ and $xx^*$ belongs to the hereditary sub-{$C^*$-al\-ge\-bra}{} of ${\mathcal A}$ generated by $b_1$. This shows that $x$ is a scaling element (cf.\ \cite[Remark 2.4]{PasRor:RR0}) satisfying $x^*xb_2 = b_2$ and $xx^*b_2 = 0$. By \cite{BlaCuntz:infproj}, see also \cite[Remark 2.4]{PasRor:RR0}, we get a projection $p \in {\mathcal A}$ satisfying $b_2p=b_2$. As $u_{{\varepsilon}_0} \le \langle b_2 \rangle \le \langle p \rangle \le u_{{\varepsilon}_0}$, we conclude that $\langle p \rangle$ is a properly infinite order unit of $\mathrm{Cu}_0({\mathcal A})$, whence $p$ is a full properly infinite projection in ${\mathcal A}$.
Suppose now that there is no integer $n \ge 1$ such that $nu_{{\varepsilon}_0}$ is properly infinite. We proceed to show that $T_{\mathrm{lsc}}({\mathcal A}) \ne \{0\}$ in this case. As shown above, $nu_{\varepsilon}$ is not properly infinite, for any $n \ge 1$ and for any $0 < {\varepsilon} \le {\varepsilon}_0$. Fix $0 < {\varepsilon}_1 < {\varepsilon}_0$, and observe that $n u_{{\varepsilon}_1} \le m u_{{\varepsilon}_1}$ implies $n \le m$, for all integers $n,m \ge 0$ (since no multiple of $u_{{\varepsilon}_1}$ is properly infinite). The map $f_0 \colon {\mathbb N}_0 u_{{\varepsilon}_1} \to {\mathbb R}^+$, given by $f_0(nu_{{\varepsilon}_1}) = n$, for $n \ge 0$, is therefore a positive additive map on the sub-semigroup ${\mathbb N}_0 u_{{\varepsilon}_1}$ of $\mathrm{Cu}_0({\mathcal A})$ (where ${\mathbb N}_0 u_{{\varepsilon}_1}$ is equipped with the relative order arising from $\mathrm{Cu}_0({\mathcal A})$). By \cite[Corollary 2.7]{BlaRor:extending} we can extend $f_0$ to a positive additive map (state) $f \colon\mathrm{Cu}_0({\mathcal A}) \to {\mathbb R}^+$ (since $u_{{\varepsilon}_1}$ is an order unit for $\mathrm{Cu}_0({\mathcal A})$). Let $d \colon {\mathrm{Ped}}({\mathcal A})^+ \to {\mathbb R}^+$ be the associated \emph{dimension function} given by $d(a) = f(\langle a \rangle)$, and let $\bar{d} \colon {\mathrm{Ped}}({\mathcal A})^+ \to {\mathbb R}^+$ be the corresponding \emph{lower semi-continuous} dimension function given by $\bar{d}(a) = \lim_{{\varepsilon} \to 0^+} d((a-{\varepsilon})_+)$, for $a \in {\mathrm{Ped}}({\mathcal A})^+$, cf.\ \cite[Proposition 4.1]{Ror:UHFII}. Then $$d((e-{\varepsilon}_0)_+) \le \bar{d}((e-{\varepsilon}_1)_+) \le d((e-{\varepsilon}_1)_+),$$ and $d((e-{\varepsilon}_0)_+)>0$ because $d$ is non-zero and $u_{{\varepsilon}_0} = \langle (e-{\varepsilon}_0)_+ \rangle$ is an order unit for $\mathrm{Cu}_0({\mathcal A})$. This shows that $\bar{d}$ is non-zero.
It follows from Blackadar--Handelman, \cite[Theorem II.2.2]{BlaHan:quasitrace}, that the lower semi-con\-tin\-uous dimension function $\bar{d}$ (called a rank function in \cite{BlaHan:quasitrace}) lifts to a lower semi-continuous $2$-quasitrace $\tau$ defined on the ``pre-{$C^*$-al\-ge\-bra}{}'' ${\mathrm{Ped}}({\mathcal A})$, i.e., $\bar{d} = d_\tau$, where $d_\tau(a) = \lim_{n\to\infty} \tau(a^{1/n})$, for all positive elements $a \in {\mathrm{Ped}}({\mathcal A})$. Finally, by Kirchberg's extension, \cite{Kir:quasitraces}, to the non-unital case of Haagerup's theorem, \cite{Haa:quasi}, that any $2$-quasitrace on an exact {$C^*$-al\-ge\-bra}{} is a trace, $\tau$ is a non-zero lower semi-continuous densely defined trace on ${\mathcal A}$. \end{proof}
\noindent It remains unresolved when a {$C^*$-al\-ge\-bra}{} with non-compact primitive ideal space admits a non-zero densely defined lower semi-continuous trace. Clearly, $T_{\mathrm{lsc}}({\mathcal A})$ is non-zero for all commutative {$C^*$-al\-ge\-bra} s ${\mathcal A}$, while the primitive ideal space of a commutative {$C^*$-al\-ge\-bra}{} is compact only when it is unital. On the other hand, absence of full properly infinite projections is not sufficient to guarantee existence of non-zero lower semi-continuous traces. Take, for example, the suspension (or the cone over) any purely infinite {$C^*$-al\-ge\-bra}, cf.\ \cite[Proposition 5.1]{KirRor:pi}. In \cite[Section 4]{MatuiRor:universal} it was shown that any infinite group $G$ admits a (free) action on the locally compact non-compact Cantor set ${\mathbf{K}}^*$ with no non-zero invariant Radon measures. Accordingly, $C_0({\mathbf{K}}^*) \rtimes G$ has no non-zero densely defined lower semi-continuous trace, although $C_0({\mathbf{K}}^*) \rtimes G$ admits an approximate unit consisting of projections, and, if $G$ is supramenable, eg., if $G = {\mathbb Z}$, then no projection in the (stabilization of) $C_0({\mathbf{K}}^*) \rtimes G$ is properly infinite.
The latter example is covered by the proposition below. When $p$ and $q$ are projections in a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ and $n \ge 1$ is an integer, then denote by $p \otimes 1_n$ the $n$-fold direct sum of $p$ with itself, and write $p \prec \hspace{-.17cm} \prec q$ if $p \otimes 1_n \precsim q$, for all $n \ge 1$.
\begin{proposition} \label{prop:notrace} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} admitting an approximate unit consisting of projections. Suppose that for each projection $p \in {\mathcal A}$ there exists a projection $q$ in ${\mathcal A}$ with $p \prec \hspace{-.17cm} \prec q$. Then $T_{\mathrm{lsc}}({\mathcal A}) = \{0\}$. \end{proposition}
\begin{proof} Suppose that $\tau \in T_{\mathrm{lsc}}({\mathcal A})$, let $p$ be a projection in ${\mathcal A}$ and let $q \in {\mathcal A}$ be another projection such that $p \prec \hspace{-.17cm} \prec q$. Since $p$ and $q$ belong to the Pedersen ideal of ${\mathcal A}$, and hence to the domain of $\tau$, we find that $\tau(q) < \infty$. Since $p \otimes 1_n \precsim q$, for all $n \ge 1$, we get $n \, \tau(p) \le \tau(q)$, for all $n$, which entails that $\tau(p) = 0$. As ${\mathcal A}$ has an approximate unit consisting of projections, this implies that $\tau = 0$. \end{proof}
\noindent Here is an elementary example of an exact stably finite\footnote{A {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ is said to be \emph{stably finite} if its stabilization ${\mathcal A} \otimes {\mathcal K}$ contains no infinite projections. This definition is meaningful when the {$C^*$-al\-ge\-bra}{} has an approximate unit consisting of projections.} {$C^*$-al\-ge\-bra}{} satisfying the conditions of Proposition~\ref{prop:notrace}, and which accordingly admits no non-zero lower semi-continuous densely defined trace: Let ${\mathcal A}$ be the inductive limit of the sequence ${\mathcal A}_1 \to {\mathcal A}_2 \to {\mathcal A}_3 \to \cdots$, where ${\mathcal A}_1 = {\mathcal K}$, the {$C^*$-al\-ge\-bra}{} of compact operators on a separable Hilbert space, where ${\mathcal A}_{n+1} = \widetilde{{\mathcal A}}_n \otimes {\mathcal K}$, for $n \ge 1$, and where the inclusion ${\mathcal A}_n \to {\mathcal A}_{n+1}$ is given by $a \mapsto a \otimes e \in {\mathcal A}_n \otimes {\mathcal K} \subset {\mathcal A}_{n+1}$, for some fixed $1$-dimensional projection $e \in {\mathcal K}$.
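One way to see that this example satisfies the hypotheses of Proposition~\ref{prop:notrace}: each projection in ${\mathcal A}$ is equivalent to a projection $p$ in some ${\mathcal A}_n$; since ${\mathcal A}_n$ is stable, $p \otimes 1_k$ is, for each $k \ge 1$, equivalent to a projection $r_k$ in ${\mathcal A}_n$; and the image of $r_k$ in ${\mathcal A}_{n+1}$ is $r_k \otimes e$, which is a subprojection of $1_{\widetilde{{\mathcal A}}_n} \otimes e \in {\mathcal A}_{n+1}$. Hence $p \prec \hspace{-.17cm} \prec 1_{\widetilde{{\mathcal A}}_n} \otimes e$.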
\section{Invariant unbounded traces on {$C^*$-al\-ge\-bra} s} \label{sec:inv-traces}
\noindent We shall here use Monod's characterization of groups with the fixed-point property for cones to say something about when a (typically non-unital) {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ with an action of a group $G$ admits an invariant trace. We are mostly interested in the existence of a (non-zero) invariant densely defined lower semi-continuous trace, i.e., an invariant non-zero trace in the cone $T_{\mathrm{lsc}}({\mathcal A})$ defined in Section~\ref{sec:traces-ideals}. But we shall also address the existence of more general traces (including singular traces and not densely defined traces).
Recall from Definition~\ref{def:trace} that $T({\mathcal I},{\mathcal A})$ is the cone of positive traces on ${\mathcal A}$ with domain ${\mathcal I}$, whenever ${\mathcal I}$ is a hereditary symmetric ideal in ${\mathcal A}$. The cone $T({\mathcal I},{\mathcal A})$ is embedded in the complex vector space $\mathcal{L}({\mathcal I})$ of all linear functionals on ${\mathcal I}$ equipped with the locally convex weak topology induced by ${\mathcal I}$. The dual space $\mathcal{L}({\mathcal I})^*$ of $\mathcal{L}({\mathcal I})$ is naturally isomorphic to ${\mathcal I}$, cf.\ \cite[3.14]{Rudin:FunkAn} (and as remarked in \cite{Monod:cones}), i.e., $\mathcal{L}({\mathcal I})^* = \{\varphi_a : a \in {\mathcal I}\}$, where $\varphi_a$ denotes the functional $\varphi_a(\rho) = \rho(a)$, for $\rho \in \mathcal{L}({\mathcal I})$ and $a \in {\mathcal I}$. The dual space $\mathcal{L}({\mathcal I})^*$ is equipped with the preordering given by $T({\mathcal I},{\mathcal A})$, whereby an element $\varphi \in \mathcal{L}({\mathcal I})^*$ is positive if $\varphi(\tau) \ge 0$, for all $\tau \in T({\mathcal I},{\mathcal A})$. Observe that $\varphi \ge 0$ and $-\varphi \ge 0$ if and only if $\varphi(\tau)=0$, for all $\tau \in T({\mathcal I},{\mathcal A})$. The map $a \mapsto \varphi_a$ is a positive isomorphism, but not necessarily an order embedding, since $\varphi_a \ge 0$ does not necessarily imply that $a \ge 0$.
Monod considers real vector spaces in his paper \cite{Monod:cones}, while our vector spaces are complex by the nature of {$C^*$-al\-ge\-bra} s. To translate some properties from Monod's paper to our language we shall occasionally consider the real vector space of all self-adjoint functionals $\varphi$ in $\mathcal{L}({\mathcal I})^*$, and we note that $\varphi_a$ is self-adjoint if and only if $a \in {\mathcal I}$ is self-adjoint.
The cone $T({\mathcal I},{\mathcal A})$ is said to be \emph{proper} if $T({\mathcal I},{\mathcal A}) \cap -T({\mathcal I},{\mathcal A}) = \{0\}$, or, equivalently, if $0$ is the only trace in $T({\mathcal I},{\mathcal A})$ that vanishes on ${\mathcal I} \cap {\mathcal A}^+$. This will hold if ${\mathcal I}$ is the span of its positive elements. Most ideals considered in this paper have this property, including the Pedersen ideal ${\mathrm{Ped}}({\mathcal A})$, or any of the ideals ${\mathcal I}_{\mathcal A}(M)$, ${\mathcal I}_{\mathcal A}^G(M)$, ${\mathcal J}_{\mathcal A}(M)$ or ${\mathcal J}_{\mathcal A}^G(M)$, when $M$ is any non-empty subset of ${\mathcal A}^+$, cf.\ Example~\ref{ex:Pedersen}, Corollary~\ref{cor:cone}, Corollary~\ref{lm:ideal} and Lemma~\ref{lm:J-positive}.
Recall also that $T_{\mathrm{lsc}}({\mathcal A}) = T({\mathrm{Ped}}({\mathcal A}),{\mathcal A})$. We allow for the possibility that the cones $T({\mathcal I},{\mathcal A})$ and $T_{\mathrm{lsc}}({\mathcal A})$ are trivial, that is, equal to $\{0\}$, unless otherwise stated.
\begin{proposition} \label{prop:2} For each {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ and for each hereditary symmetric ideal ${\mathcal I}$ in ${\mathcal A}$, the cone $T({\mathcal I},{\mathcal A})$ is weakly complete. In particular, $T_{\mathrm{lsc}}({\mathcal A})$ is weakly complete. \end{proposition}
\begin{proof} We must show that each weak Cauchy net in $T({\mathcal I},{\mathcal A})$ is weakly convergent, i.e., if $(\tau_i)_i$ is a net in $T({\mathcal I},{\mathcal A})$ such that $(\varphi(\tau_i))_i$ is Cauchy in ${\mathbb C}$, for all $\varphi \in \mathcal{L}({\mathcal I})^*$, then the net converges weakly in $T({\mathcal I},{\mathcal A})$. Since $\varphi_a(\tau_i) = \tau_i(a)$, for all $a \in {\mathcal I}$, being weakly Cauchy implies that $(\tau_i(a))_i$ is Cauchy and hence convergent in ${\mathbb C}$, for all $a \in {\mathcal I}$. Set $\tau(a) = \lim_i \tau_i(a)$, for all $a \in {\mathcal I}$. It is easy to check that $\tau \colon {\mathcal I} \to {\mathbb C}$ is in fact a trace, so it belongs to $T({\mathcal I},{\mathcal A})$, and since $\varphi_a(\tau_i) \to \varphi_a(\tau)$, for all $a \in {\mathcal I}$, $\tau$ is the weak limit of the net $(\tau_i)_i$, as desired. \end{proof}
\noindent Consider an action $\alpha$ of a (discrete) group $G$ on ${\mathcal A}$. If ${\mathcal I}$ is a $G$-invariant hereditary symmetric ideal in ${\mathcal A}$, then $G$ induces an action on the cone $T({\mathcal I},{\mathcal A})$ by $t.\tau = \tau \circ \alpha_t^{-1}$, for $t \in G$ and $\tau \in T({\mathcal I},{\mathcal A})$. It is clear that this action of $G$ on $T({\mathcal I},{\mathcal A})$ is continuous. Each automorphism of ${\mathcal A}$ leaves the Pedersen ideal invariant, so each group action on ${\mathcal A}$ induces an action on the cone $T_{\mathrm{lsc}}({\mathcal A})$.
The action of $G$ on $T({\mathcal I},{\mathcal A})$ is in \cite{Monod:cones} said to be of \emph{cobounded type} if there exists a positive functional $\varphi$ in $\mathcal{L}({\mathcal I})^*$ which $G$-dominates\footnote{If $\varphi$ and $\psi$ are self-adjoint functionals in $\mathcal{L}({\mathcal I})^*$, then $\psi$ is $G$-dominated by $\varphi$ if $\psi \le \sum_{j=1}^n t_j.\varphi$, for some $n \ge 1$ and some $t_1,t_2, \dots, t_n \in G$.} any other self-adjoint functional in $\mathcal{L}({\mathcal I})^*$. This condition is automatically satisfied when ${\mathcal I} = {\mathcal J}_{\mathcal A}^G(e)$, for some positive element $e \in {\mathcal A}^+$, cf.\ Corollary~\ref{cor:cobdd} below, but not always when ${\mathcal I}$ is the Pedersen ideal of ${\mathcal A}$. However, in the latter case we can reformulate the coboundedness condition into more familiar statements for {$C^*$-al\-ge\-bra} s.
A positive element $e \in {\mathcal I}$ is said to \emph{$G$-dominate} a self-adjoint element $a \in {\mathcal I}$ if there are group elements $t_1, \dots, t_n$ such that $a \le \sum_{j=1}^n \alpha_{t_j}(e)$; and $e$ is said to \emph{tracially $G$-dominate} $a$ if there are group elements $t_1, \dots, t_n$ such that $\tau(a) \le \sum_{j=1}^n \tau(\alpha_{t_j}(e))$, for all $\tau \in T({\mathcal I},{\mathcal A})$. The latter holds if and only if $\varphi_a \le \sum_{j=1}^n t_j^{-1}. \varphi_e$; in other words, if $\varphi_e$ $G$-dominates $\varphi_a$. We can summarize these remarks as follows:
\begin{lemma} \label{lm:cobounded0} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} equipped with an action of a group $G$, and let ${\mathcal I}$ be an invariant hereditary symmetric ideal in ${\mathcal A}$. The induced action of $G$ on the cone $T({\mathcal I},{\mathcal A})$ is of cobounded type if and only if there is a positive element $e \in {\mathcal I}$, which tracially $G$-dominates each self-adjoint element $a \in {\mathcal I}$. \end{lemma}
\begin{lemma} \label{lm:cobounded1a} Let ${\mathcal I}$ be a $G$-invariant hereditary symmetric ideal in a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, and let $e$ be a positive element in ${\mathcal I}$. Then the functional $\varphi_a$ is $G$-dominated by $\varphi_e$, for each self-adjoint element $a \in {\mathcal J}_{\mathcal A}^G(e)$. \end{lemma}
\begin{proof} Let $C$ be the set of positive elements $a \in {\mathcal I}$ such that $\varphi_a$ is $G$-dominated by $\varphi_e$. We claim that $C$ is a $G$-invariant symmetric cone in ${\mathcal A}^+$, which satisfies (i) and (ii) of Lemma~\ref{lm:cone}. Since $e$ clearly belongs to $C$, it will then follow from Lemma~\ref{lm:cone} that ${\mathcal J}_{\mathcal A}^G(e) \cap {\mathcal A}^+ \subseteq C$, and this will prove the lemma.
The set of positive $\varphi \in \mathcal{L}({\mathcal I})^*$ that are $G$-dominated by $\varphi_e$ is a $G$-invariant hereditary cone in the positive cone of $\mathcal{L}({\mathcal I})^*$. As the map $a \mapsto \varphi_a$ is linear, order preserving and satisfies $\varphi_{\alpha_t(a)} = t.\varphi_a$, for $a \in {\mathcal I}^+$ and $t \in G$, we conclude that $C$ is a hereditary $G$-invariant cone in ${\mathcal A}^+$. For each $x \in {\mathcal A}$, for which $x^*x$ (and hence $xx^*$) belong to ${\mathcal I}$, we have $\varphi_{x^*x} \le \varphi_{xx^*} \le \varphi_{x^*x}$, which implies that $C$ is symmetric. It remains to show that $x^*ax$ belongs to $C$ when $a$ belongs to $C$ and $x$ belongs to $\widetilde{{\mathcal A}}$. To see this, recall from Lemma~\ref{lm:traceinequality} that $\tau(x^*ax) \le \|x\|^2 \tau(a)$, so $\varphi_{x^*ax} \le \|x\|^2 \varphi_a$, and the latter is $G$-dominated by $\varphi_e$ since $a \in C$ (and since $C$ is a cone). \end{proof}
\noindent The corollary below follows immediately from Lemma~\ref{lm:cobounded1a}.
\begin{corollary} \label{cor:cobdd} The action of a group $G$ on the cone $T({\mathcal J}_{\mathcal A}^G(e), {\mathcal A})$ is of cobounded type whenever ${\mathcal A}$ is a {$C^*$-al\-ge\-bra}{} with an action of $G$ and $e$ is a positive element in ${\mathcal A}$. \end{corollary}
\noindent The action of a group on the Pedersen ideal of a {$C^*$-al\-ge\-bra}{} is not always of cobounded type, as illustrated in the proposition below, which covers the case of commutative {$C^*$-al\-ge\-bra} s, and which paraphrases and expands a remark on page 71 in \cite{Monod:cones}. We remind the reader that the action of a group $G$ on a locally compact Hausdorff space $X$ is \emph{co-compact} if $X = G.K$, for some compact subset $K$ of $X$.
\begin{proposition} \label{lm:cocompact} Let $X$ be a locally compact Hausdorff space equipped with a continuous action of a group $G$. Then the following conditions are equivalent: \begin{enumerate} \item The action of $G$ on $X$ is co-compact. \item $X$ is compact in the (non-Hausdorff) topology on $X$ consisting of the $G$-invariant open subsets of $X$. \item The action of $G$ on the cone of Radon measures on $X$ equipped with the vague topology is of cobounded type. \item The action of $G$ on the cone $T_{\mathrm{lsc}}(C_0(X))$ is of cobounded type. \end{enumerate} \end{proposition}
\begin{proof} (i) $\Rightarrow$ (ii). Let $K$ be a compact subset of $X$ witnessing co-compactness of the action. Let $\{U_i\}_{i \in I}$ be a collection of invariant open sets that covers $X$. Select a finite subset $F\subseteq I$ such that $\{U_i\}_{i \in F}$ covers $K$. Then $\bigcup_{i \in F} U_i = X$, being a $G$-invariant set that contains $K$.
(ii) $\Rightarrow$ (i). Let $\{U_i\}_{i \in I}$ be the collection of all open pre-compact subsets of $X$. Then $X = \bigcup_{i \in I} U_i$, because $X$ is locally compact. For each $i \in I$, set $V_i = \bigcup_{t \in G} t.U_i$. The families $\{U_i\}_{i \in I}$ and $\{V_i\}_{i \in I}$ are both upwards directed (both are closed under forming finite unions). It follows by compactness of $X$ in the topology of invariant open sets that $X = V_i$, for some $i \in I$. Hence $X = \bigcup_{t \in G} t.K$, when $K$ is the (compact) closure of $U_i$.
(i) $\Rightarrow$ (iv). The cone $T_{\mathrm{lsc}}(C_0(X))$ is embedded into the vector space $\mathcal{L}(C_c(X))$ equipped with the weak topology from $C_c(X)$; and the dual space, $\mathcal{L}(C_c(X))^*$, is equal to $C_c(X)$. By co-compactness of the action we can find sets $U \subseteq K \subseteq X$, such that $K$ is compact, $U$ is open, and $G.U = X$. Let $f \in C_c(X)$ be such that $1_K \le f \le 1$. Then any real valued function $g \in C_c(X)$ is $G$-dominated by $f$. Indeed, if $F$ is a finite subset of $G$ such that the support of $g$ is contained in $\bigcup_{t \in F} t.U$, then $g \le \|g\|_\infty \sum_{t \in F} t.f$.
(iv) $\Rightarrow$ (i). Following the set-up of the proof above we can find a positive function $f \in C_c(X)$ which $G$-dominates any other real valued function in $C_c(X)$. Let $K$ be the support of $f$. Let $x \in X$ and choose a positive function $g \in C_c(X)$ such that $g(x)=1$. Then $g \le \sum_{t \in F} t.f$ for some finite subset $F$ of $G$, so $(t.f)(x) > 0$, and hence $t^{-1}.x \in K$, for some $t \in F$. This shows that $X = G.K$.
(iii) $\Leftrightarrow$ (iv). By Riesz' theorem there is a one-to-one correspondence between Radon measures on $X$ and positive linear functionals (hence traces) on $C_c(X) = {\mathrm{Ped}}(C_0(X))$. \end{proof}
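\noindent As a simple illustration of co-compactness (not needed in what follows): the translation action of ${\mathbb Z}$ on ${\mathbb R}$ is co-compact, witnessed by the compact set $K=[0,1]$, since ${\mathbb R} = \bigcup_{n \in {\mathbb Z}} (n + [0,1])$; whereas the trivial action of any group on a locally compact, non-compact Hausdorff space is not co-compact.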
\noindent We have previously, in Theorem~\ref{thm:existtrace}, considered {$C^*$-al\-ge\-bra} s ${\mathcal A}$ whose primitive ideal space, $\mathrm{Prim}({\mathcal A})$, is compact, which happens if whenever $\{{\mathcal I}_\alpha\}$ is an upwards directed net of closed two-sided ideals in ${\mathcal A}$ such that $\bigcup_\alpha {\mathcal I}_\alpha$ is dense in ${\mathcal A}$, then ${\mathcal A}={\mathcal I}_\alpha$, for some $\alpha$. Let us say that a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ equipped with an action of a group $G$ is \emph{$G$-compact} if whenever $\{{\mathcal I}_\alpha\}$ is an upwards directed net of $G$-invariant closed two-sided ideals in ${\mathcal A}$ such that $\bigcup_\alpha {\mathcal I}_\alpha$ is dense in ${\mathcal A}$, then ${\mathcal A}={\mathcal I}_\alpha$, for some $\alpha$. In the commutative case this is equivalent to condition (ii) of Proposition~\ref{lm:cocompact}. In the non-commutative case we have the following:
\begin{proposition} \label{prop:cobdd} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} with an action of a group $G$, and suppose that ${\mathcal A}$ is $G$-compact. Then the induced action of $G$ on the cone $T_{\mathrm{lsc}}({\mathcal A})$ is of cobounded type. \end{proposition}
\begin{proof} The family $\{\overline{{\mathcal J}_{\mathcal A}^G}(e)\}$ of $G$-invariant ideals in ${\mathcal A}$, where $e \in {\mathrm{Ped}}({\mathcal A})^+$, is upward directed and its union is dense in ${\mathcal A}$. Hence by the $G$-compactness assumption there is a positive element $e$ in ${\mathrm{Ped}}({\mathcal A})$ such that the hereditary ideal ${\mathcal J}_{\mathcal A}^G(e)$ is dense in ${\mathcal A}$. As $e \in {\mathrm{Ped}}({\mathcal A})$, this entails that ${\mathcal J}_{\mathcal A}^G(e)= {\mathrm{Ped}}({\mathcal A})$. We can therefore conclude from Lemma~\ref{lm:cobounded1a} that each self-adjoint $\varphi \in \mathcal{L}({\mathrm{Ped}}({\mathcal A}))^*$ is $G$-dominated by $\varphi_e$. \end{proof}
\noindent Examples of $G$-compact {$C^*$-al\-ge\-bra} s include any simple {$C^*$-al\-ge\-bra}{} and, more generally, any {$C^*$-al\-ge\-bra}{} with no non-trivial $G$-invariant closed two-sided ideals. Also any {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, which contains a $G$-full projection, i.e., a projection $p$ such that $\overline{{\mathcal I}_{\mathcal A}^G}(p) = {\mathcal A}$, is $G$-compact.
It is not hard to show that the conclusion of Proposition~\ref{prop:cobdd} still holds under the weaker assumption that ${\mathcal A}/{\mathcal I}_0$ is $G$-compact, where ${\mathcal I}_0$ is the closed two-sided $G$-invariant ideal $${\mathcal I}_0 = \bigcap_{\tau \in T_{\mathrm{lsc}}({\mathcal A})} \{x \in {\mathcal A} : \tau(x^*x) = 0\}.$$
Following Monod, \cite{Monod:cones}, a subset $M$ of $T({\mathcal I},{\mathcal A})$ is \emph{bounded} if for each open neighborhood $0 \in {\mathcal U} \subseteq \mathcal{L}({\mathcal I})$ there is $r > 0$ such that $M \subseteq r \, {\mathcal U}$.
\begin{lemma} \label{lm:3} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} and let ${\mathcal I}$ be a hereditary symmetric ideal in ${\mathcal A}$. A subset $M$ of $T({\mathcal I},{\mathcal A})$ is bounded if and only if $\{\tau(a) : \tau \in M\}$ is bounded for all $a \in {\mathcal I}$. In particular, $M \subseteq T_{\mathrm{lsc}}({\mathcal A})$ is bounded if and only if $\{\tau(a) : \tau \in M\}$ is bounded for all $a \in {\mathrm{Ped}}({\mathcal A})$. \end{lemma}
\begin{proof} For each self-adjoint element $a \in {\mathcal I}$, the set ${\mathcal U}_a :=
\{\varphi \in \mathcal{L}({\mathcal I}) : |\varphi(a)| < 1\}$ is an open neighborhood of $0$. Hence, if $M$ is bounded, then $M \subseteq r \, {\mathcal U}_a$ for some $r >0$, which entails that $|\tau(a)| < r$, for all $\tau \in M$. Conversely, suppose that $\{\tau(a) : \tau \in M\}$ is bounded, for all $a \in {\mathcal I}$, and let ${\mathcal U}$ be an open neighborhood of $0$. Then there are $a_1, \dots, a_n$ in ${\mathcal I}$ such that ${\mathcal U}_{a_1, \dots, a_n} \subseteq {\mathcal U}$, where
$${\mathcal U}_{a_1, \dots, a_n} = \{\varphi \in \mathcal{L}({\mathcal I}) : |\varphi(a_i)| < 1, \; \text{for} \: i=1,2, \dots, n\}.$$
Set $r_i = \sup \{|\tau(a_i)| : \tau \in M\}$ and set $r = 1+ \max_i r_i$. Then $|\tau(a_i)| < r$, for all $i$ and for all $\tau \in M$, whence $M \subseteq r \, {\mathcal U}_{a_1,a_2, \dots, a_n} \subseteq r\, {\mathcal U}$, as desired. \end{proof}
\noindent A trace $\tau \in T({\mathcal I},{\mathcal A})$ will be said to be \emph{locally bounded} if it is bounded on the $G$-orbit of each $a \in {\mathcal I}$. Clearly, any bounded trace is locally bounded (regardless of the properties of the action). By Monod, \cite{Monod:cones}, the action of $G$ on $T({\mathcal I},{\mathcal A})$ is \emph{locally bounded} if $T({\mathcal I},{\mathcal A})$ contains a non-zero bounded orbit.
\begin{lemma} \label{lm:locallybounded} Let $G$ be a group acting on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, let ${\mathcal I}$ be a hereditary symmetric invariant ideal in ${\mathcal A}$, and let $e$ be a positive element in ${\mathcal A}$. \begin{enumerate} \item The induced action of $G$ on $T({\mathcal I},{\mathcal A})$ is locally bounded if and only if there is a non-zero locally bounded trace in $T({\mathcal I},{\mathcal A})$. \item A trace in $T({\mathcal J}_{\mathcal A}^G(e),{\mathcal A})$ is locally bounded if it is bounded on the orbit of $e$. \end{enumerate} \end{lemma}
\begin{proof} (i) is an immediate reformulation of Lemma~\ref{lm:3}. To prove (ii), let $\tau \in T({\mathcal J}_{\mathcal A}^G(e),{\mathcal A})$ be bounded on the orbit of $e$, and let $C$ be the set of positive elements $a \in {\mathcal J}_{\mathcal A}^G(e)$ such that $\tau$ is bounded on the orbit of $a$. Then $C$ satisfies conditions (i) and (ii) of Lemma~\ref{lm:cone}, cf.\ Lemma~\ref{lm:traceinequality}, $e \in C$, and $C$ is $G$-invariant, so $C = {\mathcal J}_{\mathcal A}^G(e) \cap {\mathcal A}^+$ by Lemma~\ref{lm:cone}. \end{proof}
\noindent Groups with the fixed-point property for cones allow invariant lower semi-continuous densely defined traces on a {$C^*$-al\-ge\-bra}{} as follows:
\begin{theorem} \label{thm:a} Let ${\mathcal A}$ be a {$C^*$-al\-ge\-bra}{} with $T_{\mathrm{lsc}}({\mathcal A}) \ne \{0\}$, and let $G$ be a group with the fixed-point property for cones, which acts on ${\mathcal A}$ making ${\mathcal A}$ $G$-compact. Then there is a non-zero invariant, necessarily lower semi-continuous, trace in $T_{\mathrm{lsc}}({\mathcal A})$ if and only if there is a non-zero locally bounded trace in $T_{\mathrm{lsc}}({\mathcal A})$.
If this is the case, then $T_{\mathrm{lsc}}({\mathcal A} \rtimes G)$ is non-zero, i.e., ${\mathcal A} \rtimes G$ admits a non-zero lower semi-continuous densely defined trace. \end{theorem}
\begin{proof} The cone $T_{\mathrm{lsc}}({\mathcal A})$ is weakly complete by Proposition~\ref{prop:2}, and the action of $G$ on $T_{\mathrm{lsc}}({\mathcal A})$ is affine and continuous. By the assumption on $G$ there is a non-zero invariant trace in $T_{\mathrm{lsc}}({\mathcal A})$ if the action of $G$ on $T_{\mathrm{lsc}}({\mathcal A})$ is of cobounded type and locally bounded. The former holds by Proposition~\ref{prop:cobdd}, since ${\mathcal A}$ is assumed to be $G$-compact, and the latter holds by Lemma~\ref{lm:locallybounded} if there is a non-zero locally bounded trace. Conversely, any invariant trace in $T_{\mathrm{lsc}}({\mathcal A})$ is trivially locally bounded. The last claim follows from Lemma~\ref{prop:extending}. \end{proof}
\noindent Consider the class of groups $G$ for which Theorem~\ref{thm:a} holds. This class contains the class of groups with the fixed-point property for cones, and it is contained in the class of supramenable groups, cf.\ the proposition below, which paraphrases \cite[Theorem 1.1]{KelMonRor:supra}. We do not know if Theorem~\ref{thm:a} characterizes groups with the fixed-point property for cones, or if it characterizes the class of supramenable groups, or some intermediate class of groups.
\begin{proposition} \label{prop:KMR} The following conditions are equivalent for each group $G$: \begin{enumerate} \item $G$ is supramenable. \item Whenever $G$ acts on a commutative {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, such that ${\mathcal A}$ is $G$-compact, then there is a non-zero invariant trace in $T_{\mathrm{lsc}}({\mathcal A})$. \item Whenever $G$ acts on a commutative {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, then for each projection $p \in {\mathcal A}$ there is an invariant lower semi-continuous trace $\tau \in T^+({\mathcal A})$ with $\tau(p)=1$. \item Whenever $G$ acts on a commutative {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, then for each projection $p \in {\mathcal A}$ there is a lower semi-continuous trace $\tau \in T^+({\mathcal A} \rtimes G)$ with $\tau(p)=1$. \end{enumerate} \end{proposition}
\begin{proof} (i) $\Rightarrow$ (ii). Let $X$ be the spectrum of ${\mathcal A}$, so that ${\mathcal A} = C_0(X)$. If ${\mathcal A}$ is $G$-compact, then the action of $G$ on $X$ is co-compact, cf.\ Proposition~\ref{lm:cocompact}, so by \cite[Theorem 1.1]{KelMonRor:supra} there is a non-zero invariant Radon measure on $X$, since $G$ is supramenable. Integrating with respect to this measure gives a non-zero invariant trace in $T_{\mathrm{lsc}}({\mathcal A})$.
(ii) $\Rightarrow$ (iii). The {$C^*$-al\-ge\-bra}{} ${\mathcal B} = \overline{{\mathcal I}_{\mathcal A}^G}(p)$ is $G$-compact being generated by a projection, so (ii) implies that there is a non-zero invariant $\tau \in T_{\mathrm{lsc}}({\mathcal B})$. We must show that $0 < \tau(p) < \infty$. The latter inequality holds because $p \in {\mathrm{Ped}}({\mathcal B})$. If $\tau(p)=0$, then $\tau(x) = 0$ for all $x \in {\mathcal I}_{\mathcal B}^G(p)$, and
${\mathcal I}_{\mathcal B}^G(p) = {\mathrm{Ped}}({\mathcal B})$, cf.\ Lemma~\ref{lm:symmetric-proj}, contradicting that $\tau\ne 0$. Finally, we can extend $\tau$ to a lower semi-continuous trace in $T^+({\mathcal A})$ by \eqref{eq:tau}.
(iii) $\Rightarrow$ (iv). This follows from Lemma~\ref{prop:extending}.
(iv) $\Rightarrow$ (i). If $G$ is non-supramenable, then, by \cite[Theorem 1.1]{KelMonRor:supra}, it admits a minimal, free, purely infinite action on the locally compact non-compact Cantor set $\mathbf{K}^*$, making the crossed product $C_0(\mathbf{K}^*) \rtimes G$ purely infinite (and simple). In particular, there is no non-zero lower semi-continuous trace on $C_0(\mathbf{K}^*) \rtimes G$. \end{proof}
\begin{remark} \label{rem:unital} If $\tau$ is a trace on a {$C^*$-al\-ge\-bra}{} ${\mathcal B}$ such that $\tau(p)=1$, for some projection $p \in {\mathcal B}$, then the restriction of $\tau$ to the unital corner {$C^*$-al\-ge\-bra}{} $p{\mathcal B} p$ is a tracial state. Conversely, each tracial state $\tau$ on $p{\mathcal B} p$ extends to a lower semi-continuous trace in $T^+({\mathcal B})$ normalizing $p$.
To see this, consider the family $\{\mathcal{H}_\alpha\}_\alpha$ of all $\sigma$-unital hereditary sub-{$C^*$-al\-ge\-bra} s of $\overline{{\mathcal I}_{\mathcal B}}(p)$ containing $p{\mathcal B} p$, and observe that $p{\mathcal B} p$ is a full hereditary sub-{$C^*$-al\-ge\-bra}{} of $\mathcal{H}_\alpha$, for each $\alpha$. By Brown's theorem, the inclusion $p{\mathcal B} p \to p {\mathcal B} p \otimes {\mathcal K}$, where ${\mathcal K}$ is the {$C^*$-al\-ge\-bra}{} of compact operators on a separable Hilbert space, extends to an inclusion $\mathcal{H}_\alpha \to p {\mathcal B} p \otimes {\mathcal K}$, for each $\alpha$. The tracial state $\tau$ on $p{\mathcal B} p$ extends (uniquely) to a lower semi-continuous trace on the positive cone of $p {\mathcal B} p \otimes {\mathcal K}$, and thus restricts to a lower semi-continuous trace $\tau_\alpha$ on the positive cone of $\mathcal{H}_\alpha$. The extension of $\tau$ to $\tau_\alpha$ on $\mathcal{H}_\alpha$ is unique.
The family $\{\mathcal{H}_\alpha\}_\alpha$ is upwards directed with union $\bigcup_\alpha \mathcal{H}_\alpha = \overline{{\mathcal I}_{\mathcal B}}(p)$. To see the first claim, recall that a {$C^*$-al\-ge\-bra}{} is $\sigma$-unital precisely if it contains a strictly positive element. Let $b_\alpha \in \mathcal{H}_\alpha$ be strictly positive. Then $b_\alpha + b_\beta$ is a strictly positive element in $\overline{(b_\alpha + b_\beta){\mathcal B}(b_\alpha + b_\beta)}$, and the latter is therefore a $\sigma$-unital hereditary sub-{$C^*$-al\-ge\-bra}{} of $\overline{{\mathcal I}_{\mathcal B}}(p)$ containing $\mathcal{H}_\alpha$ and $\mathcal{H}_\beta$. For the second claim, if $a \in \overline{{\mathcal I}_{\mathcal B}}(p)$ is positive, then $\overline{(a+p){\mathcal B}(a+p)}$ is a $\sigma$-unital hereditary sub-{$C^*$-al\-ge\-bra}{} of $\overline{{\mathcal I}_{\mathcal B}}(p)$ containing $p{\mathcal B} p$ and $a$. There exists therefore a (unique) lower semi-continuous trace in $T^+(\overline{{\mathcal I}_{\mathcal B}}(p))$ that extends each $\tau_\alpha$, and hence extends the original tracial state $\tau$; we denote this extension again by $\tau$.
Extend $\tau$ further to a lower semi-continuous trace in $T^+({\mathcal B})$ using \eqref{eq:tau}. Finally, if we wish, we can linearize $\tau$ using Proposition~\ref{prop:tau->tau'}.
\end{remark}
\noindent If we depart from lower semi-continuous densely defined traces and consider possibly singular traces, then we do obtain a $C^*$-algebraic characterization of groups with the fixed-point property for cones. The theorem below is a non-commutative analog of the equivalence between (1) and (4) of Theorem 7 in \cite{Monod:cones}.
\begin{theorem} \label{thm:b} The following conditions are equivalent for any group $G$: \begin{enumerate} \item $G$ has the fixed-point property for cones. \item Whenever $G$ acts on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ and whenever $e$ is a positive element in ${\mathcal A}$ for which there exists a non-zero trace in $T({\mathcal J}_{\mathcal A}^G(e),{\mathcal A})$, which is bounded on the $G$-orbit of $e$, then there exists an invariant trace in $T({\mathcal J}_{\mathcal A}^G(e),{\mathcal A})$ normalizing $e$. \item Whenever $G$ acts on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ and whenever $e$ is a positive element in ${\mathcal A}$ for which there exists a trace in $T^+({\mathcal A})$ which is non-zero and bounded on the $G$-orbit of $e$, then there is an invariant trace in $T^+({\mathcal A})$ normalizing $e$. \end{enumerate} \end{theorem}
\begin{proof} (i) $\Rightarrow$ (ii). Set ${\mathcal I} = {\mathcal J}_{\mathcal A}^G(e)$. Then $T({\mathcal I},{\mathcal A})$ is weakly complete by Proposition~\ref{prop:2}, the action of $G$ on $T({\mathcal I},{\mathcal A})$ is locally bounded, by Lemma~\ref{lm:locallybounded}, and of cobounded type by Corollary~\ref{cor:cobdd}. The cone of traces $T({\mathcal I},{\mathcal A})$ therefore has a non-zero fixed point $\tau_0$ since $G$ has the fixed-point property for cones. As $\tau_0$ is non-zero and invariant we can use Lemma~\ref{lm:traceinequality}, Lemma~\ref{lm:ideal} and the fact that $e \in {\mathcal I}$ to conclude that $0 < \tau_0(e) <\infty$.
(ii) $\Rightarrow$ (i). Property (ii), applied to ${\mathcal A} = \ell^\infty(G)$ (see also Section~\ref{sec:inv-integrals} below), says that condition (4) in \cite[Theorem 7]{Monod:cones} holds, which again, by that theorem, is equivalent to (i).
(ii) $\Leftrightarrow$ (iii). This follows from Proposition~\ref{prop:tau->tau'} and the remarks below that proposition. \end{proof}
\noindent If ${\mathcal A}$ is commutative, or, more generally, if ${\mathcal A}$ admits a separating family of bounded traces, then the condition in Theorem~\ref{thm:b} (ii) and (iii), that there exists a non-zero locally bounded trace, is always satisfied, independently of the action of $G$ on ${\mathcal A}$.
Below we specialize Theorem~\ref{thm:b} to the case where the positive element is a projection.
\begin{corollary} \label{cor:b} Let $G$ be a group with the fixed-point property for cones acting on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$, and let $p \in {\mathcal A}$ be a non-zero projection. The following conditions are equivalent: \begin{enumerate} \item There is a non-zero lower semi-continuous trace with domain ${\mathcal J}_{\mathcal A}^G(p)$, which is bounded on the $G$-orbit of $p$. \item There is an invariant lower semi-continuous trace in $T^+({\mathcal A})$ normalized on $p$. \item There is a lower semi-continuous trace in $T^+({\mathcal A} \rtimes G)$ normalized on $p$. \end{enumerate} If, in addition, ${\mathcal A} \rtimes G$ is exact, then the conditions above are equivalent to: \begin{itemize} \item[\rm{(iv)}] The {$C^*$-al\-ge\-bra}{} $p({\mathcal A} \rtimes G)p \otimes M_n$ is not properly infinite, for all $n \ge 1$. \end{itemize} \end{corollary}
\begin{proof} (i) $\Rightarrow$ (ii). It follows from Theorem~\ref{thm:b} that there is an invariant trace $\tau$ with domain ${\mathcal J}_{\mathcal A}^G(p)$ normalizing $p$. As remarked below Theorem~\ref{thm:GKP}, each trace on ${\mathcal J}_{\mathcal A}^G(p)$ is automatically lower semi-continuous.
(ii) $\Rightarrow$ (iii). Any invariant lower semi-continuous trace $\tau$ on ${\mathcal A}$ with $\tau(p)=1$ extends to a lower semi-continuous trace on ${\mathcal A} \rtimes G$, by Lemma~\ref{prop:extending}.
(iii) $\Rightarrow$ (i). Take the restriction to ${\mathcal A}$ of the trace whose existence is claimed in (iii) and linearize as in Proposition~\ref{prop:tau->tau'}.
(iii) $\Leftrightarrow$ (iv). It is well-known that this equivalence holds when ${\mathcal A} \rtimes G$ is exact, see Remark~\ref{rem:unital} and the comments above Theorem~\ref{thm:existtrace}. \end{proof}
\noindent We can rephrase the corollary above as follows. A projection $p$ in a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ with a $G$-action can fail to be normalized by an invariant trace on ${\mathcal A}$ for two reasons. Either ${\mathcal A}$ does not have a trace that is non-zero and bounded on the $G$-orbit of $p$, or the group $G$ possesses some amount of ``paradoxicality'' materialized in failing to have the fixed-point property for cones. In Lemma~\ref{lm:subexpgrowth} we give non-obvious examples of projections for which there exists a trace that is non-zero and bounded on the $G$-orbit of the projection.
While we do not know whether every group without the fixed-point property for cones can act on a {$C^*$-al\-ge\-bra}{} ${\mathcal A}$ in such a way that the action obstructs the existence of an invariant trace normalizing a given projection $p$ in ${\mathcal A}$, even when there is a trace on ${\mathcal A}$ that is non-zero and bounded on the orbit of $p$, it does follow from Proposition~\ref{prop:KMR} that all non-supramenable groups have this quality.
In conclusion, we do not know if Corollary~\ref{cor:b} characterizes the class of groups with the fixed-point property for cones, the class of supramenable groups, or some intermediate class of groups.
\section{Invariant integrals on $\ell^\infty(G)$} \label{sec:inv-integrals}
\noindent The bounded (complex valued) functions, $\ell^\infty(G)$, on a (discrete) group $G$ form a unital {$C^*$-al\-ge\-bra}{} equipped with an action of the group $G$ by left-translation. Following the notation of Monod, \cite{Monod:cones}, for each positive $f \in \ell^\infty(G)$, let $\ell^\infty(G,f)$ denote the set of bounded functions $g$ on $G$ whose absolute value is \emph{$G$-bounded} by $f$, i.e., for which $|g| \le \sum_{j=1}^n t_j. f$, for some $n \ge 1$ and some $t_1, \dots, t_n \in G$, where $t.f$ is the left-translate of $f \in \ell^\infty(G)$ by $t \in G$. In the language of {$C^*$-al\-ge\-bra} s, $\ell^\infty(G,f)$ is the smallest (automatically symmetric) hereditary $G$-invariant ideal in $\ell^\infty(G)$ containing $f$, denoted by ${\mathcal I}_{\ell^\infty(G)}^G(f)$ in the previous sections. The Pedersen ideal of the uniform closure of $\ell^\infty(G,f)$ is denoted by $\ell_c^\infty(G,f)$, and we have the following inclusions: $$\ell_c^\infty(G,f) \subseteq \ell^\infty(G,f) \subseteq \overline{\ell^\infty(G,f)}.$$ An \emph{invariant integral on $G$ normalized for $f$} is a $G$-invariant positive linear functional $\mu$ on $\ell^\infty(G,f)$ satisfying $\mu(f)=1$. In the language of {$C^*$-al\-ge\-bra} s, an (invariant) integral on $\ell^\infty(G,f)$ is an (invariant) trace on $\ell^\infty(G)$ with domain $\ell^\infty(G,f)$.
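\noindent As an illustration of these notions (a standard observation, not needed in the sequel): if $f = 1_G$ is the constant function $1$, then $\ell^\infty(G,f) = \ell^\infty(G)$, and an invariant integral on $G$ normalized for $f$ is precisely an invariant mean on $\ell^\infty(G)$; such an integral therefore exists if and only if $G$ is amenable.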
Monod observed in \cite[Theorem 7]{Monod:cones} that $G$ has the fixed-point property for cones if and only if for each positive function $f$ in $\ell^\infty(G)$ there is an invariant integral on $G$ normalized for $f$. (This result is extended to general {$C^*$-al\-ge\-bra} s in our Theorem~\ref{thm:b}.)
Let $\mu$ be an invariant integral on $\ell^\infty(G)$ normalized on some positive function $f \in \ell^\infty(G)$. We say that $\mu$ is \emph{lower semi-continuous} if $\mu(h) = \lim_{n \to \infty}\mu(h_n)$, whenever $\{h_n\}_{n=1}^\infty$ is an increasing sequence of positive functions in $\ell^\infty(G,f)$ converging uniformly to $h \in \ell^\infty(G,f)$, cf.\ Lemma~\ref{lm:lsc-eq}. The restriction of $\mu$ to $\ell_c^\infty(G,f)$ is automatically lower semi-continuous by Theorem~\ref{thm:GKP} (Pedersen). If the restriction of $\mu$ to $\ell_c^\infty(G,f)$ is zero, then $\mu$ is said to be \emph{singular}.
We shall in this section find conditions that will ensure that an (invariant) integral is lower semi-continuous and also exhibit situations where such integrals necessarily are singular. We start by rephrasing what Proposition~\ref{prop:KMR} says about the existence of lower semi-continuous integrals:
\begin{proposition} \label{lm:cocpt-f} Let $G$ be a supramenable group, let $f$ be a positive function in $\ell^\infty(G)$, and suppose that $\overline{\ell^\infty(G,f)}$ is $G$-compact (see above Proposition~\ref{prop:cobdd}). Then there is a non-zero invariant lower semi-continuous integral on $\ell^\infty_c(G,f)$. \end{proposition}
\begin{example} \label{ex:compacttype} Let $G$ be a group and let $f$ be a positive function in $\ell^\infty(G)$. Then $\overline{\ell^\infty(G,f)}$ is $G$-compact if and only if there exists $\delta>0$ such that $A(f,{\varepsilon}) \propto_G A(f,\delta)$, for all $0 < {\varepsilon} < \delta$; where $A(f,\eta) = \{t \in G : f(t) > \eta\}$, and where $A \propto_G B$, for subsets $A,B$ of $G$, means that $A \subseteq \bigcup_{t \in F} tB$, for some finite subset $F$ of $G$. We will not go into the details of the proof of this, but just mention that to prove the ``if'' part, one observes that $A(f,{\varepsilon}) \propto_G A(f,\delta)$ implies that $(f-{\varepsilon})_+ \in {\mathcal I}_{\ell^\infty(G)}^G((f-\delta')_+)$, for all $0 < \delta' < \delta$.
The condition above ensuring $G$-compactness of $\overline{\ell^\infty(G,f)}$ can further be rewritten as follows: set $A_n = \{t \in G: \frac{1}{n} < f(t) \le \frac{1}{n-1}\}$, for all $n \ge 1$. Then $\overline{\ell^\infty(G,f)}$ is $G$-compact if and only if there exists $N_0 \ge 1$ such that $\bigcup_{n=1}^N A_n \propto_G \bigcup_{n=1}^{N_0} A_n$, for all $N \ge N_0$. This condition holds if $A_k \propto_G \bigcup_{n=1}^{N_0} A_n = A(f,1/N_0)$, for all $k > N_0$. In other words, $\overline{\ell^\infty(G,f)}$ is $G$-compact if the set of $t \in G$ where $f(t)$ is ``very small'' can be controlled by the set where $f(t)$ is ``small enough''. \end{example}
\noindent Proposition~\ref{lm:cocpt-f} and the example above do not give information about the existence of invariant lower semi-continuous integrals normalizing the given positive function $f \in \ell^\infty(G)$. See Example~\ref{ex:c_0} below for more about this problem. Using an example from \cite{MatuiRor:universal}, we proceed to show that the conclusion of Proposition~\ref{lm:cocpt-f} fails without the assumption on $G$-compactness.
\begin{proposition} \label{prop:non-cocpt} For each countably infinite group $G$ there is a positive function $f \in \ell^\infty(G)$ such that $\ell^\infty_c(G,f)$ admits no non-zero invariant integral. In particular, if an invariant integral on $G$ normalized on $f$ exists, as is the case whenever $G$ has the fixed-point property for cones, then it is singular. \end{proposition}
\begin{proof} Let $G$ be a countably infinite group. By \cite[Proposition 4.3]{MatuiRor:universal} and its proof there is an increasing sequence $\{A_n\}_{n \ge 1}$ of (infinite) subsets of $G$ such that if $K_n$ is the (compact-open) closure of (the open set) $A_n$ in $\beta G$, and if $X_n = \bigcup_{t \in G} t.K_n$, then $X := \bigcup_{n \ge 1} X_n$ is an open $G$-invariant subset of $\beta G$ that admits no non-zero invariant Radon measure.
Let $\varphi \colon \ell^\infty(G) \to C(\beta G)$ be the canonical $^*$-isomorphism of {$C^*$-al\-ge\-bra} s, and let ${\mathcal I}$ be the closed $G$-invariant ideal of $\ell^\infty(G)$ such that $\varphi({\mathcal I}) = C_0(X)$. Observe that $C_0(X)$ is the closed $G$-invariant ideal in $C(\beta G)$ generated by the indicator functions $1_{K_n}$, $n \ge 1$. Since $\varphi(1_{A_n}) = 1_{K_n}$, we conclude that ${\mathcal I}$ is the closed $G$-invariant ideal in $\ell^\infty(G)$ generated by the projections $1_{A_n}$, $n \ge 1$, or by the positive function $f = \sum_{n \ge 1} n^{-2} 1_{A_n} \in \ell^\infty(G)$. Hence ${\mathcal I} = \overline{\ell^\infty(G,f)}$.
Since $X$ admits no non-zero invariant Radon measure, $\ell^\infty_c(G,f) \cong C_c(X)$ admits no non-zero invariant integrals. \end{proof}
\begin{example}[On the ideal $c_0(G)$] \label{ex:c_0} For a countably infinite group $G$, the subspace $c_0(G)$ is a closed $G$-simple invariant ideal of $\ell^\infty(G)$, and the Pedersen ideal of $c_0(G)$ is $c_c(G)$. It follows that if $f$ is a positive non-zero function in $c_0(G)$, then $$\ell^\infty_c(G,f) = c_c(G), \qquad \overline{\ell^\infty(G,f)} = c_0(G),$$ while $\ell^\infty(G,f)$ is some invariant hereditary ideal between these two ideals. Being $G$-simple, the ideal $c_0(G)$ is $G$-compact.
The only lower semi-continuous invariant integrals defined on $c_c(G)$ are multiples of the counting measure on $G$. Hence, if $f$ is a positive function in $c_0(G)$, then there is an invariant lower semi-continuous integral on $G$ normalized on $f$ if and only if $f$ belongs to $\ell^1(G)$. If a positive function $f$ in $c_0(G) \setminus \ell^1(G)$ is normalized by an invariant integral on $G$, which is the case if $G$ has the fixed-point property for cones, then this integral is necessarily singular.
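To spell out the first of these claims (a routine verification): if $\mu$ is an invariant integral defined on $c_c(G)$, then invariance forces $\mu(1_{\{t\}})$ to take the same value, say $c \ge 0$, for all $t \in G$; and since each $f \in c_c(G)$ is a finite linear combination of such indicator functions, $\mu(f) = c \sum_{t \in G} f(t)$. Accordingly, the (lower semi-continuous) extension of $\mu$ to positive functions in $c_0(G)$ is finite precisely on the positive functions in $\ell^1(G)$, when $c \ne 0$.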
Consider now a countably infinite group $G$ with the fixed-point property for cones, and take a non-zero positive function $f \in c_0(G)$. The crossed product {$C^*$-al\-ge\-bra}{} $c_0(G) \rtimes G$ is isomorphic to the {$C^*$-al\-ge\-bra}{} ${\mathcal K}$ of compact operators (on a separable infinite dimensional Hilbert space). There is an invariant integral on $G$ normalized for $f$, and we can view this integral as an invariant densely defined trace $\tau$ on $c_0(G)$ with $\tau(f)=1$. The image of $f$ in $c_0(G) \rtimes G$ is a positive compact operator with eigenvalues $\{f(t)\}_{t \in G}$. It was shown in {\cite{AGPS:traces}} that for a positive compact operator $T$ with eigenvalues $\{\lambda_n\}_{n=1}^\infty$, listed in decreasing order (and with multiplicity), there exists a densely defined trace on ${\mathcal K}$ normalized for $T$ if and only if \begin{equation} \label{eq:AGPS} \liminf_{n\to\infty} \sigma_{2n}/\sigma_n = 1, \end{equation} where $\sigma_n = \sum_{j=1}^n \lambda_j$.
Let $s > 0$ and choose $f \in c_0(G)$ such that the eigenvalues of the positive compact operator $f \in c_0(G) \rtimes G$ form the sequence $\{n^{-s}\}_{n=1}^\infty$. Then $f$ is trace class if and only if $s > 1$; and the limit in \eqref{eq:AGPS} is $1$ if and only if $s \ge 1$. If $s=1$, then $f$ is normalized by the \emph{Dixmier trace} on ${\mathcal K}$. If $0 < s < 1$, then there is no trace on $c_0(G) \rtimes G$ (lower semi-continuous or not) that normalizes $f$. For such a choice of $f$, the invariant densely defined trace $\tau$ on $c_0(G)$ does not extend to a trace on $c_0(G) \rtimes G$, thus showing that the conclusion of Lemma~\ref{prop:extending} fails without the assumption that the trace is lower semi-continuous. \end{example}
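\noindent To make the dichotomy at the end of the example above explicit, we record the standard computation behind it (included only as an illustration). For $\lambda_n = n^{-s}$ one has $\sigma_n \to \zeta(s)$ when $s>1$, $\sigma_n \sim \log n$ when $s=1$, and $\sigma_n \sim n^{1-s}/(1-s)$ when $0<s<1$. Consequently, $$\lim_{n \to \infty} \frac{\sigma_{2n}}{\sigma_n} = 1 \;\, \text{when} \; s \ge 1, \qquad \text{while} \qquad \lim_{n \to \infty} \frac{\sigma_{2n}}{\sigma_n} = 2^{1-s} > 1 \;\, \text{when} \; 0<s<1,$$ so \eqref{eq:AGPS} holds precisely when $s \ge 1$, whereas $f$ is trace class precisely when $s>1$.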
\noindent By the remark in the example above, namely that no positive function $f$ in $c_0(G) \setminus \ell^1(G)$ is normalized by an invariant lower semi-continuous integral $\mu$ on $\ell^\infty(G,f)$, we get the corollary below, which implies that at least some of the integrals witnessing the fixed-point property for cones for infinite groups must be singular.
\begin{corollary} \label{cor:finitegps} Let $G$ be a countable group with the property that for each positive function $f$ in $\ell^\infty(G)$ there exists a $G$-invariant \emph{lower semi-continuous} integral $\mu$ on $\ell^\infty(G,f)$ normalized for $f$. Then $G$ must be a finite group. \end{corollary}
\section{The Roe algebra} \label{sec:roe}
\noindent As an application of the results developed in the previous sections we shall in this last section prove some results about existence (and non-existence) of traces on the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ associated with a (countably infinite) group $G$, where ${\mathcal K}$ denotes the {$C^*$-al\-ge\-bra}{} of compact operators on a separable infinite dimensional Hilbert space. These {$C^*$-al\-ge\-bra} s originate from the thesis of John Roe, published in \cite{Roe:thesis}, where the index of elliptic operators is computed using traces on a (variant of) what is now called the Roe algebra.
The {$C^*$-al\-ge\-bra}{} $\ell^\infty(G,{\mathcal K})$ is equipped with the natural action of $G$ given by left-translation. We shall consider existence (and non-existence) of invariant traces on this {$C^*$-al\-ge\-bra}.
First we give a complete description of the densely defined lower semi-continuous traces on these {$C^*$-al\-ge\-bra} s (there are not so many). The group $G$ plays no role in Lemma~\ref{lm:Ped(G,K)} and Proposition~\ref{prop:lsc-Roe} other than as a set, and it could be replaced with the set of natural numbers ${\mathbb N}$.
\begin{lemma} \label{lm:Ped(G,K)} The Pedersen ideal of $\ell^\infty(G,{\mathcal K})$ is equal to $\ell^\infty(G,\mathcal{F})$, where $\mathcal{F} = {\mathrm{Ped}}({\mathcal K})$ is the set of finite rank operators. \end{lemma}
\begin{proof} The ``point evaluation'' at $t \in G$ gives a surjective $^*$-homomorphism $\ell^\infty(G,{\mathcal K}) \to {\mathcal K}$ that maps ${\mathrm{Ped}}(\ell^\infty(G,{\mathcal K}))$ onto ${\mathrm{Ped}}({\mathcal K}) = \mathcal{F}$. This shows that ${\mathrm{Ped}}(\ell^\infty(G,{\mathcal K})) \subseteq \ell^\infty(G,\mathcal{F})$. Conversely, if $x \in \ell^\infty(G,\mathcal{F})$, then $x(t)$ has finite rank, for each $t \in G$, and so there exists a finite dimensional projection $p(t) \in {\mathcal K}$ with $p(t)x(t) = x(t)$. The function $t \mapsto p(t)$ defines a projection $p$ in $\ell^\infty(G,{\mathcal K})$ satisfying $px=x$. As $p$ belongs to the Pedersen ideal, being a projection, it follows that also $x$ belongs to ${\mathrm{Ped}}(\ell^\infty(G,{\mathcal K}))$. \end{proof}
\noindent It is worth mentioning that the Pedersen ideal of a {$C^*$-al\-ge\-bra}{} of the form $C(T,{\mathcal K})$, where $T$ is a compact Hausdorff space, may be properly contained in $C(T,{\mathcal F})$, cf.\ \cite{GilTay:Pedersen-ideal}.
For each $s \in G$, let $\tau_s$ be the (lower semi-continuous densely defined) trace on $\ell^\infty(G,{\mathcal K})$ given by $\tau_s(f) = \mathrm{Tr}(f(s))$, for $f$ either in $\ell^\infty(G,{\mathcal K})^+$ or in $\ell^\infty(G,{\mathcal F})$, where $\mathrm{Tr}$ is the standard trace on the compact operators ${\mathcal K}$.
In the proof of the proposition below we shall view the {$C^*$-al\-ge\-bra}{} $\ell^\infty(G,{\mathcal K})$ as an $\ell^\infty(G)$-algebra via the natural (unital) embedding of $\ell^\infty(G)$ into the center of the multiplier algebra of $\ell^\infty(G,{\mathcal K})$.
\begin{proposition} \label{prop:lsc-Roe} For each countable group $G$, the cone, $T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$, of densely defined lower semi-continuous traces on $\ell^\infty(G,{\mathcal K})$ is equal to the cone of finite positive linear combinations of the traces $\tau_s$, $s \in G$, defined above, i.e., traces of the form $\sum_{s \in G} c_s \tau_s$, where $c_s \ge 0$ and $c_s \ne 0$ for only finitely many $s \in G$. \end{proposition}
\begin{proof} Clearly, any trace of the form $\sum_{s \in G} c_s \tau_s$, with $c_s \ge 0$ and $c_s \ne 0$ only for finitely many $s \in G$, is lower semi-continuous and densely defined.
For the converse direction, take $\tau$ in $T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$. Fix a one-dimensional projection $e \in {\mathcal K}$. For each $s\in G$, let $e_s \in \ell^\infty(G,{\mathcal F})$ be given by $e_s(s)=e$ and $e_s(t) = 0$, when $t \ne s$. Set $c_s=\tau(e_s) \ge 0$.
We show first that the set $F = \{s \in G : c_s \ne 0\}$ is finite. Suppose it were infinite, and let $\{s_1,s_2,s_3, \dots\}$ be an enumeration of the elements in $F$. Let $p \in \ell^\infty(G,{\mathcal K})$ be a projection such that $\mathrm{Tr}(p(s_n)) \ge n c_{s_n}^{-1}$, for all $n \ge 1$. As $p \ge p \cdot 1_{\{s\}}$, for all $s \in G$, we get $\tau(p) \ge \tau(p \cdot 1_{\{s_n\}}) \ge c_{s_n} \mathrm{Tr}(p(s_n)) \ge n$. As this cannot be true for all $n\ge 1$, we conclude that $F$ is finite.
Set $\tau_0 = \sum_{s \in F} c_s \tau_s$, and observe that $\tau_0(f) = \tau(f \cdot 1_{F})$, for all $f \in \ell^\infty(G,{\mathcal K})$. It follows that $\tau' := \tau-\tau_0$ is a positive trace on $\ell^\infty(G,{\mathcal K})$, satisfying $\tau'(f) = \tau(f \cdot 1_{F^c})$, for all $f \in \ell^\infty(G,{\mathcal K})$. We show that $\tau' = 0$. Assume, to reach a contradiction, that $\tau' \ne 0$. Notice that $\tau'$ vanishes on $c_c(G,{\mathcal K})$ by construction of $\tau_0$. Since the Pedersen ideal of $\ell^\infty(G,{\mathcal K})$ is generated (as a hereditary ideal) by its projections, cf.\ the proof of Lemma~\ref{lm:Ped(G,K)}, there is a projection $p \in \ell^\infty(G,{\mathcal K})$ such that $\tau'(p) > 0$. Write $G = \bigcup_{n=1}^\infty F_n$, where $\{F_n\}_{n \ge 1}$ is a strictly increasing sequence of finite subsets of $G$. Find a sequence $\{p_k\}_{k \ge 1}$ of pairwise orthogonal projections in $\ell^\infty(G,{\mathcal K})$ such that each $p_k$ is equivalent to $p$. Let $q \in \ell^\infty(G,{\mathcal K})$ be the projection given by $$q(s) = p_1(s) + p_2(s) + \cdots + p_n(s), \qquad s \in F_{n+1} \setminus F_n,$$ for $n \ge 0$ (with the convention $F_0 = \emptyset$). Then $$q \cdot 1_{F_n^c} \ge (p_1+p_2+ \cdots + p_n) \cdot 1_{F_n^c},$$ for all $n \ge 1$; and as $\tau'(g \cdot 1_{E^c}) = \tau'(g)$, for all $g \in \ell^\infty(G,{\mathcal K})$ and all finite subsets $E \subseteq G$, we conclude that $$\tau'(q) = \tau'(q \cdot 1_{F_n^c}) \ge \tau'((p_1+p_2+ \cdots + p_n) \cdot 1_{F_n^c}) = \tau'(p_1+p_2+ \cdots + p_n) = n \tau'(p),$$ for all $n\ge 1$, which is impossible. \end{proof}
\noindent A non-zero trace of the form as in Proposition~\ref{prop:lsc-Roe} can clearly not be $G$-invariant when $G$ is infinite, so we obtain the following:
\begin{corollary} \label{cor:noinvtraces} The {$C^*$-al\-ge\-bra}{} $\ell^\infty(G,{\mathcal K})$ admits no non-zero lower semi-continuous invariant densely defined traces whenever $G$ is a countably infinite group, and, consequently, the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ admits no non-zero densely defined lower semi-continuous trace. \end{corollary}
\noindent When combining the conclusion of the corollary above with Theorem~\ref{thm:a}, we see that the action of $G$ on $T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$ either must fail to be of cobounded type, or there is no non-zero locally bounded trace in $T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$. In fact, both fail! That the latter fails follows easily from the description of $T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$ in Proposition~\ref{prop:lsc-Roe}.
\begin{lemma} \label{lm:not-cobd} If $G$ is a countably infinite group, then the action of $G$ on $T_\mathrm{lsc}(\ell^\infty(G,{\mathcal K}))$ is not of cobounded type. \end{lemma}
\begin{proof} Let $e$ be a positive contraction in ${\mathrm{Ped}}(\ell^\infty(G,{\mathcal K}))$. We must find another positive contraction $a$ in ${\mathrm{Ped}}(\ell^\infty(G,{\mathcal K}))$ that is not tracially $G$-dominated by $e$, cf.\ Lemma~\ref{lm:cobounded0}. In other words, for all finite sets $t_1,t_2, \dots, t_n \in G$, the inequality $\tau(a) \le \sum_{j=1}^n \tau(t_j.e)$ will fail for at least one $\tau \in T_{\mathrm{lsc}}(\ell^\infty(G,{\mathcal K}))$, or, taking Proposition~\ref{prop:lsc-Roe} into account, $${\mathrm{Tr}}(a(s)) \le \sum_{j=1}^n {\mathrm{Tr}}(e(t_j^{-1}s)),$$ will fail for at least one $s \in G$. It follows from Lemma~\ref{lm:Ped(G,K)} and its proof that $e$ is dominated by a projection in $\ell^\infty(G,{\mathcal K})$, so we may without loss of generality assume that $e$ itself is a projection. Set $f(s) = \mathrm{Tr}(e(s))$, for $s \in G$.
Let $G = \{s_1,s_2,s_3, \dots\}$ be an enumeration of the elements in $G$, and let $\{u_j\}_{j=1}^\infty$ be a sequence in which each element of $G$ is repeated infinitely often. Set $f_N = \sum_{j=1}^N u_j.f$, for each $N \ge 1$, and let $g \colon G \to {\mathbb N}_0$ be given by $g(s_N) = f_N(s_N)+1$, for $N \ge 1$. For each finite set $t_1,t_2, \dots, t_n \in G$ there exists $N \ge 1$ such that $\sum_{j=1}^n t_j.f \le f_N$, which entails that $g(s_N) \nleq \sum_{j=1}^n (t_j.f)(s_N)$ for this particular $N$.
Let now $a \in \ell^\infty(G,{\mathcal K})$ be a projection such that $\mathrm{Tr}(a(t)) = g(t)$, for all $t \in G$. Then $a$ is not tracially $G$-dominated by $e$. \end{proof}
\noindent Although there are no \emph{densely defined} lower semi-continuous invariant traces on $\ell^\infty(G,{\mathcal K})$, when $G$ is infinite, there are still interesting lower semi-continuous traces on the Roe algebra. In the results to follow we describe the class of projections $p \in \ell^\infty(G,{\mathcal K})$ for which there exists a lower semi-continuous trace on $\ell^\infty(G,{\mathcal K})$ which is bounded and non-zero on the orbit of $p$, respectively, which is invariant and normalizes $p$. These two classes of projections agree when $G$ has the fixed-point property for cones, cf.\ Corollary~\ref{cor:b}. First we note the following general result that holds for locally finite groups:
\begin{proposition} \label{prop:locfinite} The Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ of any locally finite group $G$ is stably finite. Moreover, for each non-zero projection $p \in \ell^\infty(G,{\mathcal K})$ there exists a lower semi-continuous trace $\tau \in T^+(\ell^\infty(G,{\mathcal K}) \rtimes G)$ with $\tau(p)=1$, and hence an invariant lower semi-continuous trace on $\ell^\infty(G,{\mathcal K})$ normalizing $p$. \end{proposition}
\begin{proof} Write $G = \bigcup_{n=1}^\infty G_n$ as an increasing union of finite groups. The {$C^*$-al\-ge\-bra}{} $\ell^\infty(G,{\mathcal K})$ is stably finite, as can be witnessed by the separating family of densely defined traces from Proposition~\ref{prop:lsc-Roe}. The crossed product $\ell^\infty(G,{\mathcal K}) \rtimes G_n$ is stably finite because it embeds into the stably finite {$C^*$-al\-ge\-bra}{} $\ell^\infty(G,{\mathcal K}) \otimes B(\ell^2(G_n))$. It follows that $$\ell^\infty(G,{\mathcal K}) \rtimes G = \varinjlim \, \ell^\infty(G,{\mathcal K}) \rtimes G_n$$ is stably finite, being an inductive limit of stably finite {$C^*$-al\-ge\-bra} s.
Let next $p \in \ell^\infty(G,{\mathcal K})$ be a non-zero projection. Then \begin{equation} \label{eq:G_n} p(\ell^\infty(G,{\mathcal K}) \rtimes G)p = \varinjlim \, p(\ell^\infty(G,{\mathcal K}) \rtimes G_n)p. \end{equation} The unital {$C^*$-al\-ge\-bra}{} $p(\ell^\infty(G,{\mathcal K}) \rtimes G_n)p$ admits a tracial state, for each $n \ge 1$. To see this, choose $m \ge n$ such that the restriction $p'$ of $p$ to $\ell^\infty(G_m,{\mathcal K})$ is non-zero. The restriction mapping $\ell^\infty(G,{\mathcal K}) \to \ell^\infty(G_m,{\mathcal K})$ is $G_n$-equivariant and therefore extends to a {$^*$-ho\-mo\-mor\-phism}{} $\ell^\infty(G,{\mathcal K}) \rtimes G_n \to \ell^\infty(G_m,{\mathcal K}) \rtimes G_n$, and in turn to a unital {$^*$-ho\-mo\-mor\-phism}{} $p(\ell^\infty(G,{\mathcal K}) \rtimes G_n)p \to p'(\ell^\infty(G_m,{\mathcal K}) \rtimes G_n)p'$. Composing this {$^*$-ho\-mo\-mor\-phism}{} with any tracial state on the (finite dimensional) {$C^*$-al\-ge\-bra}{} $p'(\ell^\infty(G_m,{\mathcal K}) \rtimes G_n)p'$ gives a tracial state on $p(\ell^\infty(G,{\mathcal K}) \rtimes G_n)p$. As the inductive limit of a sequence of unital {$C^*$-al\-ge\-bra} s with unital connecting mappings (e.g., as in \eqref{eq:G_n}) admits a tracial state if (and only if) each {$C^*$-al\-ge\-bra}{} in the sequence does, we conclude that $p(\ell^\infty(G,{\mathcal K}) \rtimes G)p$ admits a tracial state. By Remark~\ref{rem:unital} there is a lower semi-continuous trace on $\ell^\infty(G,{\mathcal K}) \rtimes G$ normalizing the projection $p$. The restriction of this trace to $\ell^\infty(G,{\mathcal K})$ is an invariant lower semi-continuous trace still normalizing $p$. \end{proof}
\noindent The Roe algebra of any infinite, locally finite group provides yet another example of a stably finite {$C^*$-al\-ge\-bra}{} with an approximate unit consisting of projections which has no densely defined lower semi-continuous trace, cf.\ Theorem~\ref{thm:existtrace} and Proposition~\ref{prop:notrace}.
For the more general class of groups $G$ with the fixed-point property for cones, it follows from Corollary~\ref{cor:b} that if $p$ is a projection in $\ell^\infty(G,{\mathcal K})$, then there is a lower semi-continuous invariant trace on $\ell^\infty(G,{\mathcal K})$ that normalizes $p$ (and hence there is a trace on the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ normalizing $p$), if and only if there is a trace on $\ell^\infty(G,{\mathcal K})$, which is non-zero and bounded on the orbit $\{t.p\}_{t \in G}$. If $p$ has \emph{uniformly bounded dimension}, i.e., if $\sup_{t \in G} {\mathrm{Tr}}(p(t)) < \infty$, then such a trace clearly exists, take for example $\tau_s$ (defined above Proposition~\ref{prop:lsc-Roe}), for any fixed $s \in G$. One can do a bit better: In Proposition~\ref{cor:Roe-not-prop-inf} below it is shown that for each projection in $\ell^\infty(G,{\mathcal K})$ with uniformly bounded dimension there is a trace on the Roe algebra normalizing this projection provided that the group $G$ is supramenable (a formally weaker condition than having the fixed-point property for cones).
Let $G$ be a countably infinite group, let $p \in \ell^\infty(G,{\mathcal K})$ be a projection, and let $\ell \colon G \to {\mathbb N}_0$ be a proper length function on $G$, i.e., $\ell(t) = 0$ if and only if $t=e$, $\ell(st) \le \ell(s) + \ell(t)$, for all $s,t \in G$, and $W_n:= \{t \in G : \ell(t) \le n\}$ is finite, for all $n \ge 0$. Such proper length functions always exist, and if $G$ is finitely generated, then we can take $\ell$ to be the word length function with respect to some finite generating set for $G$. Set \begin{equation} \label{eq:alpha-Z} \alpha_n = \max_{t \in W_n} \mathrm{Tr}(p(t)), \qquad Z_n = \{t \in W_n : \mathrm{Tr}(p(t)) = \alpha_n\}. \end{equation} Let $\tau_n$ be the (lower semi-continuous densely defined) trace on $\ell^\infty(G,{\mathcal K})$ given by
$$\tau_n(f) = \frac{1}{|Z_n|} \sum_{t \in Z_n} \alpha_n^{-1} \, \mathrm{Tr}(f(t)),$$ for $f$ in $\ell^\infty(G,{\mathcal K})^+$ or in $\ell^\infty(G,{\mathcal F})$. Observe that $\tau_n(p) = 1$, for all $n \ge 1$. Let $\omega$ be a free ultrafilter on ${\mathbb N}_0$, and define a trace $\tau_{\omega,p}$ on the positive cone $\ell^\infty(G,{\mathcal K})^+$ by $$\tau_{\omega,p}(f) = \lim_\omega \tau_n(f), \qquad f \in \ell^\infty(G,{\mathcal K})^+,$$ (where the limit along the ultrafilter is taken in the compact set $[0,\infty]$). Let $C_{\omega,p}$ be the cone of positive functions $f$ in $\ell^\infty(G,{\mathcal K})$, for which $\tau_{\omega,p}(f) < \infty$, and let ${\mathcal I}_{\omega,p}$ be the linear span of $C_{\omega,p}$. Then ${\mathcal I}_{\omega,p}$ is a hereditary symmetric ideal in $\ell^\infty(G,{\mathcal K})$, and $\tau_{\omega,p}$ defines a linear trace on ${\mathcal I}_{\omega,p}$, cf.\ Proposition~\ref{prop:tau->tau'}.
We say that the projection $p \in \ell^\infty(G,{\mathcal K})$ has \emph{subexponentially growing dimension} if \begin{equation} \label{eq:expgrowing} \liminf_{n \to \infty} \frac{\alpha_{n+m}}{\alpha_n} = 1, \end{equation} for all $m \ge 0$. (This definition may depend on the choice of proper length function $\ell$, and should be understood to be with respect to \emph{some} proper length function.)
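\noindent For illustration (these examples are not needed in the proofs below): if ${\mathrm{Tr}}(p(t))$ is bounded by a polynomial in $\ell(t)$, say $\alpha_n \le C(n+1)^k$ for some constants $C,k > 0$, then $p$ has subexponentially growing dimension; indeed, if \eqref{eq:expgrowing} failed for some $m \ge 1$, then $\alpha_{n+m} \ge (1+{\varepsilon})\alpha_n$ for some ${\varepsilon}>0$ and all sufficiently large $n$, forcing $\{\alpha_n\}$ to grow exponentially. On the other hand, if $G$ is infinite and finitely generated, if $\ell$ is a word length function on $G$, and if ${\mathrm{Tr}}(p(t)) = 2^{\ell(t)}$, for all $t \in G$, then $\alpha_n = 2^n$ and $\alpha_{n+m}/\alpha_n = 2^m$, so \eqref{eq:expgrowing} fails for all $m \ge 1$ (with respect to this length function).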
\begin{lemma} \label{lm:filter} Let $\{\alpha_n\}_{n=0}^\infty$ be an increasing sequence of strictly positive real numbers satisfying \eqref{eq:expgrowing}. Then there is a free ultrafilter $\omega$ on ${\mathbb N}_0$ such that $$\lim_\omega \frac{\alpha_{n+m}}{\alpha_n} = 1,$$ for all $m \ge 1$. \end{lemma}
\begin{proof} For each $m \ge 0$ and ${\varepsilon} >0$, set $A_{m,{\varepsilon}} = \{n \ge 0 : \alpha_{n+m}/\alpha_n \le 1+{\varepsilon}\}$. By the assumption that \eqref{eq:expgrowing} holds, each of the sets $A_{m,{\varepsilon}}$ is infinite. The collection of sets $A_{m,{\varepsilon}}$ is downwards directed, since $A_{m_1,{\varepsilon}_1} \subseteq A_{m_2,{\varepsilon}_2}$, when $m_1 \ge m_2$ and ${\varepsilon}_1 \le {\varepsilon}_2$. It follows that the intersection of any finite collection of these sets is infinite. We can therefore find a free ultrafilter $\omega$ which contains all the sets $A_{m,{\varepsilon}}$; and any such ultrafilter will satisfy the conclusion of the lemma. \end{proof}
\begin{lemma} \label{lm:subexpgrowth} Let $G$ be a countably infinite group, let $p \in \ell^\infty(G,{\mathcal K})$ be a projection of subexponentially growing dimension, and let $\omega$ be a free ultrafilter as in Lemma~\ref{lm:filter} for the sequence $\{\alpha_n\}_{n=0}^\infty$ associated with the projection $p$ as in \eqref{eq:alpha-Z}. Then $\tau_{\omega,p}(p) = 1$ and $\tau_{\omega,p}(t.p) \le 1$, for all $t \in G$. \end{lemma}
\begin{proof} We have already noted that $\tau_n(p)=1$, for all $n \ge 1$, which implies that $\tau_{\omega,p}(p) = 1$. Let $m \ge 0$ and let $s \in W_m$. For $n \ge 0$ and $t \in Z_n \subseteq W_n$, we have $st \in W_{n+m}$, so that $\mathrm{Tr}(p(st)) \le \alpha_{n+m}$, which shows that
$$\tau_n(s^{-1}.p) = \frac{1}{|Z_n|} \sum_{t \in Z_n} \alpha_n^{-1} \, \mathrm{Tr}(p(st)) \le \frac{\alpha_{n+m}}{\alpha_n}.$$ By the assumption that $p$ has subexponentially growing dimension, by Lemma~\ref{lm:filter}, and by the choice of $\omega$, we conclude that $\tau_{\omega,p}(s^{-1}.p) \le 1$. \end{proof}
\begin{theorem} \label{thm:subexpgrowth} Let $G$ be a countably infinite group with the fixed-point property for cones and let $p \in \ell^\infty(G,{\mathcal K})$ be a projection of subexponentially growing dimension. Then there is an invariant lower semi-continuous trace on $\ell^\infty(G,{\mathcal K})$ normalized on $p$, and there is a lower semi-continuous trace on the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ also normalized on $p$. \end{theorem}
\begin{proof} This follows from Corollary~\ref{cor:b}, where condition (i) is satisfied with $\tau = \tau_{p,\omega}$, cf.\ Lemma~\ref{lm:subexpgrowth}. \end{proof}
\noindent We proceed to examine the case of projections in $\ell^\infty(G,{\mathcal K})$ of uniformly bounded dimension. In the lemma below we embed $\ell^\infty(G)$ into $\ell^\infty(G) \rtimes G$, which again embeds into (the upper left corner of) $(\ell^\infty(G) \rtimes G) \otimes M_n$, for each $n \ge 1$. Note that projections in $\ell^\infty(G)$ are indicator functions $1_E$, for some subset $E$ of $G$.
\begin{lemma} \label{lm:twoprojections} Let $G$ be an exact group and let $n \ge 1$ be an integer. Then for each projection $p$ in $(\ell^\infty(G) \rtimes G) \otimes M_n$ there is a projection $r$ in $\ell^\infty(G)$ such that $p$ and $r$ generate the same closed two-sided ideal in $(\ell^\infty(G) \rtimes G) \otimes M_n$. \end{lemma}
\begin{proof} Let $n \ge 1$, let $p \in (\ell^\infty(G) \rtimes G) \otimes M_n$ be a projection, and let ${\mathcal I}$ be the closed two-sided ideal in $(\ell^\infty(G) \rtimes G) \otimes M_n$ generated by $p$. It follows from \cite[Theorem 1.16]{Sie:ideals} that ${\mathcal I}$ is the closed two-sided ideal in $(\ell^\infty(G) \rtimes G) \otimes M_n$ generated by $\mathcal{J}:={\mathcal I} \cap (\ell^\infty(G) \otimes M_n)$. Arguing as in the proof of \cite[Proposition 5.3]{KelMonRor:supra} we find a projection $q \in \mathcal{J}$ which generates the ideal ${\mathcal I}$ in $(\ell^\infty(G) \rtimes G) \otimes M_n$. By \cite{Zhang:diagonal}, since $\ell^\infty(G)$ is of real rank zero, $q$ is equivalent to a diagonal projection $\mathrm{diag}(q_1,q_2, \dots, q_n)$ in $\ell^\infty(G) \otimes M_n$. Let $r \in \ell^\infty(G)$ be the supremum of the projections $q_1, q_2, \dots, q_n$. Then $q$ and $r$ generate the same ideal of $\ell^\infty(G) \otimes M_n$, and so $r$ and $p$ generate the same ideal in $(\ell^\infty(G) \rtimes G) \otimes M_n$. \end{proof}
\noindent The result below extends the characterization in \cite{KelMonRor:supra} of supramenable groups in terms of non-existence of properly infinite projections in the uniform Roe algebra.
\begin{proposition} \label{cor:Roe-not-prop-inf} Let $G$ be a group. The following conditions are equivalent: \begin{enumerate} \item $G$ is supramenable. \item Each non-zero projection in the stabilized uniform Roe algebra $(\ell^\infty(G) \rtimes G) \otimes {\mathcal K}$ is normalized by a lower semi-continuous trace in $T^+((\ell^\infty(G) \rtimes G) \otimes {\mathcal K})$. \item Each projection in $\ell^\infty(G,{\mathcal K})$ of uniformly bounded dimension is normalized by an invariant lower semi-continuous trace in $T^+(\ell^\infty(G,{\mathcal K}))$ and by a lower semi-continuous trace in $T^+(\ell^\infty(G,{\mathcal K}) \rtimes G)$. \item The stabilized uniform Roe algebra $(\ell^\infty(G) \rtimes G) \otimes {\mathcal K}$ contains no properly infinite projections. \end{enumerate} \end{proposition}
\begin{proof} (i) $\Rightarrow$ (ii). Since $(\ell^\infty(G) \rtimes G) \otimes {\mathcal K}$ is the inductive limit of {$C^*$-al\-ge\-bra} s of the form $(\ell^\infty(G) \rtimes G) \otimes M_n$, for $n \ge 1$, it suffices to show that each projection $p$ in $(\ell^\infty(G) \rtimes G) \otimes M_n$ is normalized by a lower semi-continuous trace on this {$C^*$-al\-ge\-bra}.
By Lemma~\ref{lm:twoprojections} there is a projection $q = 1_E \in \ell^\infty(G)$ that generates the same closed two-sided ideal in $(\ell^\infty(G) \rtimes G) \otimes M_n$ as $p$. Since $G$ is supramenable, the set $E$ must be non-paradoxical, so by Tarski's theorem there is an invariant trace $\tau_0$ on $\ell^\infty(G,q)$ which normalizes $q$ (see \cite[Proposition 5.3]{KelMonRor:supra}). Extend $\tau_0$ to a trace $\tau$ on ${\mathcal J}:={\mathcal J}_{(\ell^\infty(G) \rtimes G) \otimes M_n}(q)$ satisfying $\tau(q)=1$. Now, $p \in {\mathcal J}$ and $0 < \tau(p) < \infty$, by the assumption on $p$ and $q$, so upon rescaling we obtain a trace $\tau$ on ${\mathcal J}$ satisfying $\tau(p)=1$; and $\tau$ is lower semi-continuous by the remarks below Theorem~\ref{thm:GKP}. Finally, we can extend $\tau$ to a trace in $T^+((\ell^\infty(G) \rtimes G) \otimes M_n)$ using \eqref{eq:tau}.
(ii) $\Rightarrow$ (iii). Let $p$ be a projection in $\ell^\infty(G,{\mathcal K})$ such that $n:= \sup_{t \in G} \mathrm{Tr}(p(t))< \infty$. Let $e \in {\mathcal K}$ be a projection of dimension $n$, and let $\overline{e} \in \ell^\infty(G,{\mathcal K})$ be the projection given by $\overline{e}(t) = e$, for all $t \in G$. Then $\overline{e}$ is fixed by the action of $G$, and we have isomorphisms $$\overline{e}(\ell^\infty(G,{\mathcal K}))\overline{e} \cong \ell^\infty(G) \otimes M_n, \qquad \overline{e}(\ell^\infty(G,{\mathcal K}) \rtimes G)\overline{e} \cong (\ell^\infty(G)\rtimes G) \otimes M_n.$$ Moreover, $p$ is equivalent to a projection $p_0$ in $\overline{e}(\ell^\infty(G,{\mathcal K}))\overline{e}$, which, under the isomorphism above, corresponds to a projection $p_1 \in \ell^\infty(G) \otimes M_n$. By (ii) there is a lower semi-continuous trace on $(\ell^\infty(G)\rtimes G) \otimes M_n$ normalizing $p_1$. Hence there is a lower semi-continuous trace on $\overline{e}(\ell^\infty(G,{\mathcal K}) \rtimes G)\overline{e}$ normalizing $p_0$. Arguing as in Remark~\ref{rem:unital} we can extend this trace to a lower semi-continuous trace in $T^+(\ell^\infty(G,{\mathcal K}) \rtimes G)$. The restriction of $\tau$ to $\ell^\infty(G,{\mathcal K})$ becomes an invariant lower semi-continuous trace.
(iii) $\Rightarrow$ (iv) is clear (no properly infinite projection can be normalized by a trace). If $G$ is non-supramenable, then $\ell^\infty(G) \rtimes G$ contains a properly infinite projection, cf.\ \cite[Proposition 5.5]{RorSie:action}, in which case (iv) cannot be true, which proves (iv) $\Rightarrow$ (i). \end{proof}
\noindent It was shown in \cite{RorSie:action} that the uniform Roe algebra $\ell^\infty(G) \rtimes G$ is properly infinite, i.e., its unit is a properly infinite projection, if and only if $G$ is non-amenable; and in \cite{KelMonRor:supra} it was shown that the uniform Roe algebra \emph{contains} a properly infinite projection if and only if $G$ is non-supramenable. The proposition above allows us to conclude that no matrix algebra over the uniform Roe algebra contains a properly infinite projection when $G$ is supramenable.
It was shown by Wei in \cite{Wei:qd} and Scarparo in \cite{Sca:locfinite}, answering a question in \cite{KelMonRor:supra}, that $\ell^\infty(G) \rtimes G$ is \emph{finite} if and only if $G$ is a locally finite group. Using a similar idea as in \cite{Sca:locfinite} we show below that the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ contains a properly infinite projection whenever $G$ is not locally finite. We may therefore strengthen Proposition~\ref{prop:locfinite} as follows: The Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ is stably finite if \emph{and only if} $G$ is locally finite; and if $\ell^\infty(G,{\mathcal K}) \rtimes G$ is not stably finite, then it contains a \emph{properly infinite} projection.
\begin{lemma} \label{lm:fin-gen-inf} Let $G$ be a non-locally finite group. Then there is a finite subset $S$ of $G$ such that for each integer $N \ge 1$ there exists a non-zero function $\varrho \colon G \to {\mathbb N}_0$ satisfying \begin{equation} \label{eq:Nf} N\varrho \le \sum_{s \in S} s.\varrho. \end{equation} \end{lemma}
\begin{proof} Let $G_0$ be an infinite, finitely generated subgroup of $G$, and let $S$ be a finite symmetric generating set for $G_0$. By \cite[Lemma 1]{Zuk:isoperimetric} there is a one-sided geodesic $\{t_n\}_{n =0}^\infty$ in $G_0$, where $t_{n}t_{n+1}^{-1} \in S$, for all $n \ge 0$; and $n \mapsto t_n$ is injective. Fix $N \ge 1$ and define $\varrho \colon G\to {\mathbb N}_0$ by $\varrho(t_n) = N^n$, for all $n \ge 0$; and set $\varrho(t)=0$ if $t \notin \{t_n : n \ge 0\}$.
Fix $n \ge 0$ and set $s = t_{n}t_{n+1}^{-1} \in S$. Then $s.\varrho(t_{n}) = \varrho(t_{n+1}) = N\varrho(t_n)$. As $\varrho(t) =0$, when $t \notin \{t_n : n \ge 0\}$, we see that \eqref{eq:Nf} holds. \end{proof}
\begin{proposition} \label{prop:propinfproj} For each countable non-locally finite group $G$ there is a projection $p \in \ell^\infty(G,{\mathcal K})$ satisfying: \begin{enumerate} \item $p$ is properly infinite in the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$. \item There is no trace on the Roe algebra $\ell^\infty(G,{\mathcal K}) \rtimes G$ normalizing the projection $p$. \item Each trace on $\ell^\infty(G,{\mathcal K})$ is either unbounded or zero on the orbit $\{t.p\}_{t \in G}$. \end{enumerate} \end{proposition}
\begin{proof} Note first that if $e$ and $f$ are projections in $\ell^\infty(G,{\mathcal K})$, then $e \sim f$, respectively, $e \precsim f$, in $\ell^\infty(G,{\mathcal K})$ if and only if $\mathrm{Tr}(e(t)) = \mathrm{Tr}(f(t))$, respectively, $\mathrm{Tr}(e(t)) \le \mathrm{Tr}(f(t))$, for all $t \in G$.
Let $\varrho \colon G \to {\mathbb N}_0$ be as in Lemma~\ref{lm:fin-gen-inf} with respect to a finite subset $S$ of $G$ and with $N \ge 2|S|$. Find a projection $q \in \ell^\infty(G,{\mathcal K})$ with $\mathrm{Tr}(q(t)) = \varrho(t)$, for all $t \in G$.
Let $p$ and $p'$ be the $|S|$-fold, respectively, the $N$-fold, direct sum of $q$ with itself in $\ell^\infty(G,{\mathcal K})$, and set $e = \bigoplus_{s \in S} s.q$. Then $\mathrm{Tr}(p'(t)) = N\varrho(t)$ and $\mathrm{Tr}(e(t)) = \sum_{s\in S} s.\varrho(t)$, for all $t \in G$, so $p' \precsim e$ and $p \oplus p \precsim p'$ in $\ell^\infty(G,{\mathcal K})$. Moreover, $p \sim e$ in $\ell^\infty(G,{\mathcal K}) \rtimes G$ (since $q \sim t.q$ in $\ell^\infty(G,{\mathcal K}) \rtimes G$, for all $t \in G$). Hence, $$p \oplus p \precsim p' \precsim e \sim p,$$ in $\ell^\infty(G,{\mathcal K}) \rtimes G$, which shows that (i) and (ii) hold.
(iii) follows from (ii) when $G$ has the fixed-point property for cones, cf.\ Corollary~\ref{cor:b}, but requires a separate argument for general groups. Applying the operator $\sigma \mapsto \sum_{s \in S} s.\sigma$, on functions $\sigma \colon G \to {\mathbb N}_0$, to the left and right hand side of \eqref{eq:Nf} $k-1$ times, we get that $N^k \varrho \le \sum_{s \in S^k} \overline{s}.\varrho$, for all $k \ge 1$, where $S^k$ is the set of $k$-tuples of elements from $S$, and $\overline{s} \in G$ is the product of the $k$ elements in the $k$-tuple $s \in S^k$. For $k \ge 1$, set $e_k = \bigoplus_{s \in S^k} \overline{s}.q$, and let $p'_k$ be the $N^k$-fold direct sum of $q$ with itself. Then $\mathrm{Tr}(p'_k(t)) = N^k \varrho(t)$ and $\mathrm{Tr}(e_k(t)) = \sum_{s \in S^k} \overline{s}.\varrho(t)$, for all $t \in G$, so $p'_k \precsim e_k$.
Let $\tau$ be any trace in $T^+(\ell^\infty(G,{\mathcal K}))$. Then
$$N^k \tau(q) = \tau(p'_k) \le \tau(e_k) = \sum_{s \in S^k} \tau(\overline{s}.q) \le |S|^k \sup_{t \in G} \tau(t.q).$$
As this holds for all $k \ge 1$, either $\tau(q)=0$ or $\sup_{t \in G} \tau(t.q) = \infty$. Suppose that $\{\tau(t.p)\}_{t \in G}$ is bounded. Then $\{\tau(t.q)\}_{t \in G}$ is also bounded. Fix $t \in G$, and let $\tau'$ be the trace on $\ell^\infty(G,{\mathcal K})$ given by $\tau'(f) = \tau(t.f)$, for $f \in \ell^\infty(G,{\mathcal K})^+$. As $\tau'$ also is bounded on the orbit $\{s.q\}_{s \in G}$, the argument above implies that $0= \tau'(q) = \tau(t.q)$, so $\tau(t.p) = |S|\tau(t.q) = 0$. This proves that $\tau$ is zero on the orbit $\{t.p\}_{t \in G}$. \end{proof}
\noindent Any projection satisfying the conclusions of Proposition~\ref{prop:propinfproj} (iii) above must have exponentially growing dimension with respect to any proper length function on the group, cf.\ Lemma~\ref{lm:subexpgrowth}.
\noindent Mikael R\o rdam \\ Department of Mathematical Sciences\\ University of Copenhagen\\ Universitetsparken 5, DK-2100, Copenhagen \O\\ Denmark \\ [email protected]\\
\end{document} | arXiv |
Postulate implies existence of Lorentz transformation?
My textbook about Special Relativity says that the existence of Lorentz Transformation is guaranteed by the postulates of Special Relativity.
So I'm assuming it's the first postulate we're talking about: the laws of physics remain unchanged in different inertial frames.
However, how does this guarantee we can find the Lorentz Transformation between two inertial frames?
I've never doubted their existence, but I never thought of it as an implication of the first postulate.
special-relativity metric-tensor inertial-frames lorentz-symmetry
Sha Vuklia
$\begingroup$ Maybe a wording question: the existence of Lorentz Transformation doesn't need to be guaranteed, it is a mathematical object that is self consistent (its definition itself says it exists). But the fact that these transformations correspond to a physical reality is a consequence of the first postulate. $\endgroup$ – user130529 Jan 5 '17 at 17:02
$\begingroup$ Are you perhaps looking for the proof that the interval-preserving transformations are the Lorentz transformations? $\endgroup$ – ACuriousMind♦ Jan 6 '17 at 15:02
$\begingroup$ I have come to the conclusion that this is an ill formed question (I mean that in a formal sense, not as a criticism, because it's a good conceptual question that I have been thinking quite a bit about to reach this conclusion). That is, to answer a question like this, you would need to encode the relativity principle as a set of axioms - no one axiom will capture such an informal statement - and it is then arguable how one does this. In my opinion, there would still be other axioms one would need after you have encoded the principle in this way - see .... $\endgroup$ – WetSavannaAnimal Jan 11 '17 at 3:31
$\begingroup$ .... my answer here. $\endgroup$ – WetSavannaAnimal Jan 11 '17 at 3:32
The historical formulation of the relativity principle: the Lorentz transformation derivation a-la Einstein
It seems that what confuses You is the applicability of the statement "laws of physics remain unchanged in different inertial frames", which says something about the dynamics of the system, to the derivation of the kinematical transformations relating the coordinate 4-vector $x^{\mu}$ (and hence the velocity 4-vector $v^{\mu}$) in different inertial frames. The reason why the relativity principle in the above form allowed Einstein to derive the Lorentz transformations is another postulate stating that the speed of light is constant. This postulate imposes obvious restrictions on the kinematic law relating coordinates (and hence velocities) relative to two frames.
The "speed of light" postulate historically appeared as the result of the heuristically derived Maxwell equations, according to which the speed of light is independent of the velocity of the EM-wave source (at least in one frame). Since the Maxwell equations are dynamical equations, the relativity principle requires this statement to be true in any frame.
The relativity principle: modern formulation
But as we know now, the speed of light isn't a truly fundamental quantity. Indeed, the gluon and the graviton (and any massless particle) also propagate with a speed equal to the speed of light. The constant $c$ has in fact a more general meaning - it is the maximal possible propagation velocity, related to the properties of space-time itself; the fact that the speed of light equals $c$ is just a consequence of the masslessness of the photon. Therefore, from the modern field theory perspective, the postulate about the speed of light based on the Maxwell equations is very special and should be replaced by a more general one, which is independent of the details of a particular dynamical theory and instead depends only on the properties of space-time.
This, of course, means that the relativity principle in Einstein's formulation alone is not enough for the derivation of the Lorentz transformations. Without the "speed of light" postulate it just says that, for the given kinematic transformations, the laws of dynamics must be covariant under them. Instead, let's formulate it in a slightly different way: heuristically, all inertial frames have equal rights. This implies two statements:
1) Descriptions of the same system relative to two different inertial frames are in one-to-one correspondence, and the rule determining this correspondence is the same for any pair of inertial frames;
2) The evolution law for the given system has one and the same form independently of the inertial frame in which we consider it.
I will call the first statement the "kinematical" part of the relativity principle, and the second statement the "dynamical" part.
The first statement is actually what You need in order to obtain the existence of the Lorentz transformations as well as their unique form. Let's discuss its consequences for two examples: the first one is the derivation of the vector-like Lorentz transformations (in which You're interested), and the second one is the derivation of the properties of the quantum theory Hilbert space which respects relativity. The second example will also demonstrate the difference between the consequences of the first and the second statements.
Derivation of the Lorentz transformations
One way to derive the Lorentz transformations, i.e., the transformations $g(x^{\mu}, v)$ which relate $x^{\mu}$ in different inertial frames with relative velocity $v$, is to require a set of axioms. One possible choice is (see, for example, this paper):
The transformation $g$ is smooth and invertible;
The space-time is homogeneous; in particular this means that if the relative velocity of two objects is equal to zero in one frame, then it will be equal to zero in another frame;
The relativity principle;
The space is isotropic;
It can be shown that the first two axioms lead to the following form of the function $g$: $$ \tag 1 \begin{cases} x' = \gamma(v)(x - vt), \\ t' = \gamma(v)(t - \sigma(v)x) \end{cases} $$ The third postulate (the relativity principle) can fix the form of the functions $\gamma(v), \sigma(v)$.
First, consider three frames $S_{1}, S_{2}, S_{3}$, and compare the step-by-step transformations from the first to the second and then to the third frame with the direct transformation from the first to the third. By using $(1)$ we have $$ \tag 2 \begin{cases} x_{3} = \gamma_{2}\gamma_{1}((1+v_{2}\sigma_{1})x_{1}-(v_{1}+v_{2})t_{1}), \\ t_{3} = \gamma_{2}\gamma_{1}((1+v_{1}\sigma_{2})t_{1} - (\sigma_{1}+\sigma_{2})x_{1}) \end{cases}, $$ On the other hand, the "kinematical" statement of the relativity principle implies the existence of the transformation $x_{3}^{\mu} = g(x_{1},v_{3})$ $$ \tag 3 \begin{cases} x_{3} = \gamma_{3}(x_{1} - v_{3}t_{1}), \\ t_{3} = \gamma_{3}(t_{1} - \sigma_{3}x_{1})\end{cases} $$ The relativity principle tells us that $(3)$ must be equal to $(2)$, from which it follows that $$ \tag 4 \frac{\sigma_{i}}{v_{i}} \equiv \frac{\sigma (v_{i})}{v_{i}} = \alpha = \text{const} $$ Second, the relativity principle says that the form of the direct transformation $x \to x'$ must coincide with the form of the inverse transformation $x' \to x$ up to the sign of $v$. By using this requirement, we have $$ \gamma (v)\gamma(-v) = \frac{1}{1 - \alpha v^{2}} $$ I.e., the relativity principle almost uniquely fixes the Lorentz transformation! The fourth axiom fixes $\gamma(-v) = \gamma(v)$, and only the value (the sign) of $\alpha$ remains undetermined! Adding the physical requirement that the energy of a particle must increase with increasing velocity also fixes the sign of $\alpha$ as positive. It can then easily be shown that $\frac{1}{\sqrt{\alpha}} \equiv c$ has the meaning of the maximal propagation velocity.
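As a quick sanity check of the step where $(2)$ and $(3)$ are equated, one can verify the closure property numerically; the helper name `boost` and the choice of units with $c=1$ (i.e. $\alpha = 1$) are just conveniences for this illustration:

```python
import numpy as np

def boost(v, alpha=1.0):
    # transformation (1) with sigma(v) = alpha*v and gamma(v) = 1/sqrt(1 - alpha*v^2),
    # written as a matrix acting on the column (x, t)
    g = 1.0 / np.sqrt(1.0 - alpha * v**2)
    return g * np.array([[1.0, -v],
                         [-alpha * v, 1.0]])

v1, v2, alpha = 0.3, 0.5, 1.0          # units with c = 1, i.e. alpha = 1/c^2 = 1
v3 = (v1 + v2) / (1.0 + alpha * v1 * v2)
# closure: composing two transformations of the form (1) gives another one of the same form
assert np.allclose(boost(v2, alpha) @ boost(v1, alpha), boost(v3, alpha))
```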
P.S. One may add the axiom that time intervals are independent of the choice of inertial frame, which reduces the Lorentz transformation to the Galilei transformation.
Relativistic quantum mechanics
Consider now the consequences of the relativity principle in the Hilbert space of rays $|\Psi\rangle$ defining the quantum-mechanical states. We don't assume any Maxwell equations and want to formulate the general properties of the realization of Lorentz symmetry in quantum mechanics. This means that we need to use the above-mentioned formulation of the relativity principle, containing both the "kinematical" (the first) and the "dynamical" (the second) statements.
The first statement requires that the states $|\Psi{'}\rangle$ and $|\Psi\rangle$ describing one and the same quantum-mechanical state in different inertial frames must be related by a uniquely fixed transformation $$ \tag 5 |\Psi'\rangle = U(\Lambda , a)|\Psi\rangle, $$ where $U$ is the operator of the inhomogeneous Lorentz group (the Poincare group) transformation, $\Lambda$ is the Lorentz group transformation, while $a$ is a translation 4-vector. Repeating the idea of $(3)$-$(4)$, one sees that $$ U(\Lambda_{2},a_{2})U(\Lambda_{1},a_{1}) = e^{i\omega\big((\Lambda_{2},a_{2}),(\Lambda_{1},a_{1})\big)}U(\Lambda_{2}\Lambda_{1},\Lambda_{2}a_{1} + a_{2}), $$ where $\omega$ is a fixed phase which in fact can only give $e^{i\omega} = \pm 1$. I.e., the first ("kinematical") statement of the relativity principle tells us that the space of quantum-mechanical rays must carry a so-called projective representation of the Poincare group by operators $U(\Lambda, a)$.
Let's now turn to the second ("dynamical") statement. Quantum dynamics tells us how, given the initial state $$ |\Psi(t = 0)\rangle = |\Psi_{0}\rangle $$ to calculate the state $|\Psi (t > 0)\rangle$, which, expanded in a full basis $\{ |\Psi_{j}\rangle\}$ of the Hilbert space, gives the set of probabilities of finding the system in different states: $$ \tag 6 P_{|\Psi_{0}\rangle \to |\Psi_{j}\rangle} = |\langle \Psi(t > 0)|\Psi_{j}\rangle|^{2} $$ The second statement of the relativity principle implies that $(6)$ must be invariant under the transformation $(5)$. By Wigner's theorem, this means that $U(\Lambda,a)$ must be a linear unitary or an anti-linear anti-unitary operator!
To conclude: You see that the "kinematical" and "dynamical" parts of the relativity principle impose important constraints on the realization of relativistic symmetry separately from each other.
Name YYY
$\begingroup$ Good answer, but I think the statement "The first statement is actually what You need to require the existence of Lorentz transformations as well as their unique form. " could be a little misleading: it kind of implies (or could be read to mean) that the Lorentz transformation is contained in this statement (i.e. that it is sufficient to derive the transformation), whereas what you mean is that it is a necessary statement: as you show, you also need to add the spacetime homogeneity postulate, smoothness and so forth to get there. $\endgroup$ – WetSavannaAnimal Jan 9 '17 at 13:43
$\begingroup$ Actually, just continuity, rather than $C^\infty$ smoothness, will also do fine. $g(x,\,v)$ continuous in $x$ along with spacetime homogeneity (see my variation of JoshPhysics's answer here proves that $g$ is linear and so can be represented by a matrix. Thus a matrix group is the group of all Lorentz transformations, and then a continuity (and monotonicity) assumption forces the Lie group structure (see here). BTW, before I forget, +1 :) $\endgroup$ – WetSavannaAnimal Jan 9 '17 at 13:52
$\begingroup$ @WetSavannaAnimalakaRodVance : Thank You for the comments! I need to think a little. $\endgroup$ – Name YYY Jan 9 '17 at 22:54
Postulate I: The laws of physics behave the same in all inertial frames.
Postulate II: The speed of light, $c$, is constant to all inertial observers.
To be clear, both of these postulates are necessary to guarantee the existence and validity of the Lorentz transformations. As for why it's the Lorentz transformations and not some other transformation, the answer is simple—it's the only transformation that is mathematically consistent with the postulates.
Formally, the Lorentz transformations can be proven to exist (satisfy the postulates) and to be unique (the only such transformation). A very good derivation that illustrates these properties can be found in Resnick's Introduction to Special Relativity.
occamsrazor
$\begingroup$ I don't see what the postulates have to do with the existence of the Lorentz transformation. This transformation exists as a mathematical object, it is one family of linear coordinate transformations among infinitely many linear coordinate transformations, and can be defined without knowing anything about physics. $\endgroup$ – user130529 Jan 5 '17 at 20:03
$\begingroup$ @claude chuber : Lorentz had derived the homonymous transformation before Special Relativity from his intuition that Maxwell Equations of Electromagnetism must valid in exactly the same form (later term covariance) in all inertial frames. Now, I don't understand how the Lorentz transformation is a standing alone 'mathematical object defined without knowing anything about physics'. Is this a modern scientific opinion? Would this transformation exist before Maxwell's equations, Lorentz's Physics and Einstein's Special Relativity ? $\endgroup$ – Frobenius Jan 6 '17 at 0:23
$\begingroup$ @Frobenius : nothing such sophisticated, it is a particular family of linear transformations belonging to the linear space of transformations from $\mathbb R^4$ into itself (as far as we are talking about existence). $\endgroup$ – user130529 Jan 6 '17 at 8:05
The speed of light in vacuum enters many physical laws as a parameter, and its constancy is assumed in them.
c is further related to the permeability and permittivity of free space, and if it changed there would be many physical consequences.
The Lorentz transform allows this to be upheld.
JMLCarter
$\begingroup$ Actually, Lorentz transformations can be derived from the constancy of $c$ $\endgroup$ – Frédéric Grosshans Jan 5 '17 at 17:22
$\begingroup$ "And further" you can derive Lorentz transforms from this. $\endgroup$ – JMLCarter Jan 5 '17 at 17:30
"My textbook about Special Relativity says that the existence of Lorentz Transformation is guaranteed by the postulates of Special Relativity."
Depending on what one means by Lorentz transformation, this statement is incorrect. It's a necessary assumption, but not sufficient. The following is the flow of logic:
1. First, one must postulate that a manifold structure / co-ordinates on spacetime are even meaningful, and then that motion can be described by transformations on these co-ordinates.
2. The relativity postulate then ensures that these transformations depend only on the relative velocity between inertial frames and also completes the group structure for the set of transformations kitted with composition (through enforcing associativity). But you need much else besides (steps (3) through (7)) to get to the Lorentz transformation! That is, unless of course, you take the postulates in steps (3) to (7) as "obvious" and assume them tacitly;
3. Assumptions of homogeneity of spacetime together with continuity of the transformations in their dependence on the spacetime co-ordinates then show that the group of transformations is a linear, matrix group;
4. Assumption of continuity of the transformations in their dependence on the relative velocity between frames then shows that this matrix group is a Lie group of $4\times4$ matrices;
5. Isotropy of space then shows that the Lie group is the identity connected component of one of the orthogonal groups $O^+(4)$ or $O^+(1,\,3)$ (or the Galilee group, in the special case where the free parameter $c$ is infinite);
6. Causality then rules out the rotations $O^+(4)$;
7. Experiment shows us that the free parameter $c$ is finite and our group is $O^+(1,\,3)$, not the Galilee group.
It depends on what "level" of proof you're looking for, but let's for this answer assume you're seeking high rigor with a minimal clutch of axioms.
Certainly Galileo's principle (i.e. that physical law is unaffected by inertial motion, à la Salviati's Ship Allegory) is necessary for the existence of the LT, but you do need a good many other assumptions to make it work.
First of all, one needs to assume that one's neighborhood in the World can be described by a co-ordinate system. This is wholly an experimental result, and a very deep and basic one at that. It's justified by countless everyday observations - if I measure the spot on the wall where I think I want the builder to put my window, and I describe it to the builder by Cartesian co-ordinates, the end result (if the builder is competent) is in accordance with my expectations. Or, that people can use maps to meet at an agreed location. All of these seemingly trivial observations justify a postulate like:
Spacetime is modelled by a pseudo-Riemannian manifold and the events of spacetime can be uniquely specified by co-ordinates in a suitable chart of that manifold
We need "pseudo-Riemannian" because this asserts that the notion of distance given by an inner product is meaningful.
Now, this postulate has enabled our use of co-ordinates to describe events. This is what justifies any meaningful notion at all of "transformation" between relatively moving observers.
Next, the relativity postulate enters. We have a laboratory frame $A$, and two frames $B$ and $C$ observable within it. We wish to investigate the transformation of co-ordinates between $B$ and $C$ when they are in relative motion with each other. Suppose first we impart the same inertial motion on both $B$ and $C$ relative to $A$, so that $B$ and $C$ are still at rest relative to one another. Then we impart a second motion on $C$ alone so that $C$ moves relative to $B$. The relativity postulate tells us that the transformation between $B$ and $C$ must depend on this second motion alone. In general, we must allow for the transformation to depend on the common relative motion to $A$ as well, but if there were such a dependence, then $B$ and $C$ could detect that common motion from within their frames by observing that the co-ordinate transformation between them would change, even though their motion relative to each other stayed the same. This would tell against Galileo's principle, and so:
The co-ordinate transformation between two inertial observers depends only on their motion relative to each other, and is independent of any motion relative to any third reference frame
and this is the crucial property one derives from the relativity principle, i.e. that co-ordinate transformations between inertial frames are wholly defined by the relative velocity vector between those frames.
This conclusion also contains another important fact. It implies an inverse to any transformation, and, with a bit of work, you can also work out from the complete characterization by relative velocity that the composition of transformations is associative (by considering successive transformations and noting that the above implies that different groupings of their application must give the same result, because that result is characterized by the endpoint inertial frames alone). That is,
the transformations between inertial frames form a group
From here, one still needs several assumptions to get to the Lorentz transformation:
Homogeneity of spacetime together with an assumption that the co-ordinate transformation between inertial frames is continuous then tells us that the transformations are linear, as I discuss in my answer here; see also Mark H's answer (although this assumes differentiability as well: this is stronger than needed). So our group of transformations is a matrix group;
An assumption that the transformation is a continuous function of the relative velocity together with a monotonicity assumption (that two relative motions in the same direction compose to give a faster relative motion) then shows that the linear matrix group of transformations is a Lie group, with the transformation matrix given by $\exp(\eta\,K)$ where the $4\times4$ matrix $K$ characterizes the direction of motion, and the rapidity parameter $\eta$ is a continuous, monotonic function of the relative speed. This fact I prove in the addendum to my answer here;
An assumption of isotropy of space (independence of transformation from boost direction) then means that we can choose any direction for the relative motion and if we align our co-ordinate system with it so that the $x$ direction points along the relative motion, then we'll always get the same matrix $K$, independent of direction;
So now we are left with two possibilities for the transformation that mixes the $t$ and $x$ co-ordinates:
$$K = \left(\begin{array}{cc}0&\pm c^{-2}\\1&0\end{array}\right)$$
where $c$ is a free parameter (strictly speaking, there are possibilities of diagonal elements of $K$ as well, but these lead to transformations that are wildly at variance with reality! - they are what Sean Carroll calls the "Alice in Wonderland" transformations). Note that Galilean relativity is the special case where $c\to\infty$.
So, if $c$ is finite, we're left with essentially boosts and rotations that mix the time and space co-ordinates. But rotations are untenable in a causal universe: there would be relative motion that would reverse the direction of any timelike interval, and you could boost to some inertial frame moving relative to me to see me eating my boiled eggs for breakfast before I cooked them!
So, at last the form of the Lorentz transformation must be:
$$\exp(\eta\,K) = \left(\begin{array}{cc}\cosh\eta& \frac{1}{c}\sinh\eta\\c\,\sinh\eta&\cosh\eta\end{array}\right)$$
whence we can read off the relationship between speed and our rapidity parameter $v = c\,\tanh\,\eta$.
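As a quick numerical check of these last statements, one can verify that the displayed matrix preserves the interval, forms a one-parameter group in the rapidity, and moves the origin with speed $c\tanh\eta$. The 2×2 helper `Lambda` below is just the displayed $\exp(\eta\,K)$ acting on the pair $(t, x)$; the coordinate ordering and the value of $c$ are choices made only for this illustration:

```python
import numpy as np

c = 1.0                                  # any positive value works here
def Lambda(eta):
    # the displayed boost matrix, acting on the column vector (t, x)
    return np.array([[np.cosh(eta), np.sinh(eta) / c],
                     [c * np.sinh(eta), np.cosh(eta)]])

eta1, eta2 = 0.4, 0.9
# one-parameter group: rapidities add under composition
assert np.allclose(Lambda(eta1) @ Lambda(eta2), Lambda(eta1 + eta2))
# the quadratic form c^2 t^2 - x^2 is preserved
g = np.diag([c**2, -1.0])
L = Lambda(eta1)
assert np.allclose(L.T @ g @ L, g)
# the unprimed origin (t, x) = (1, 0) is carried to (cosh eta, c sinh eta),
# i.e. it moves with speed v = c tanh(eta)
t1, x1 = Lambda(eta1) @ np.array([1.0, 0.0])
assert np.isclose(x1 / t1, c * np.tanh(eta1))
```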
The last step is experimental: we must check whether $c$ is finite, and what its value is. The form of the Lorentz transformation shows us that, if $c$ is finite, it is a speed that would be invariant - the same in all frames. So we must look for something whose speed has this property - if we find it then this proves that $c$ is finite and moreover gives us its value experimentally.
Of course you know of something whose speed behaves in this way!
WetSavannaAnimal
\begin{document}
\title{Locality and topology in the molecular Aharonov-Bohm effect} \author{Erik Sj\"{o}qvist\footnote{Electronic address: [email protected]}} \affiliation{Department of Quantum Chemistry, Uppsala University, Box 518, Se-751 20 Uppsala, Sweden} \begin{abstract} It is shown that the molecular Aharonov-Bohm effect is neither nonlocal nor topological in the sense of the standard magnetic Aharonov-Bohm effect. It is further argued that there is a close relationship between the molecular Aharonov-Bohm effect and the Aharonov-Casher effect for an electrically neutral spin$-\frac{1}{2}$ particle encircling a line of charge. \end{abstract} \pacs{PACS number(s): 03.65.Vf, 31.30.Gs} \maketitle In the standard magnetic Aharonov-Bohm (AB) effect \cite{aharonov59}, a charged particle exhibits a testable phase shift when encircling a line of magnetic flux. The AB effect is nonlocal as it may happen although the particle experiences no physical field and no exchange of physical quantity takes place along the particle's path \cite{peshkin95}. It is topological as it requires the particle to be confined to a multiply connected region and as any assignment of phase shift along the particle's path is necessarily gauge dependent and thus neither objective nor experimentally testable \cite{peshkin95}.
In the molecular Aharonov-Bohm (MAB) effect \cite{mead79}, the nuclear motion exhibits a measurable effect under adiabatic transport around a conical intersection. A condition for this effect to occur is the accumulation along the nuclear path of a nontrivial Berry phase \cite{berry84} acquired by the corresponding electronic motion. This additional requirement makes MAB special and its physical nature potentially different from that of the standard AB effect.
To delineate this difference is the major aim of this Letter. We demonstrate that although MAB requires the nuclear motion to be confined to a multiply connected region, it fails to obey the remaining criteria for a nonlocal and topological effect. Thus, it follows from our analysis that MAB should neither be regarded as topological nor as nonlocal in the sense of the standard AB effect. Instead, MAB displays an adiabatically averaged autocorrelation among the electronic variables, which in the two-level case resembles that of the local torque on a spin$-\frac{1}{2}$ moving in a locally gauge invariant electric field. Furthermore, it is possible to relate the MAB effect to the noncyclic Berry phase \cite{garcia98} acquired by the electronic variables, which is a locally gauge invariant quantity that could be tested in polarimetry \cite{garcia98} or in interferometry \cite{wagh98}. It should be noted that an argumentation similar to that of this Letter has been put forward in the context of force free electromagnetic effects in neutron interferometry \cite{peshkin95}.
We consider the well studied $E\otimes \epsilon$ Jahn-Teller model, in which the symmetry induced degeneracy of two electronic states is lifted by their interaction with a doubly degenerate vibrational mode. In the diabatic approximation \cite{mead82}, this is described by the vibronic Hamiltonian \cite{zwanziger87} \begin{eqnarray} H & = & \frac{1}{2} p_{r}^{2} + \frac{1}{2r^{2}} p_{\theta}^{2} + \frac{1}{2} r^{2} \nonumber \\
& & + k r^{2|\xi|} \Big[ \cos (2\xi \theta) \sigma_{x} + \sin (2\xi \theta) \sigma_{y} \Big] , \label{eq:vibronicham} \end{eqnarray} where $(r,\theta)$ are polar coordinates of the vibrational mode, $(p_{r},p_{\theta})$ the corresponding canonical momenta, $k$ is the vibronic coupling strength, $\xi = \frac{1}{2},-1,...$ describes the order of the effect \cite{mixing}, and we have neglected spin-orbit coupling. The linear $E\otimes \epsilon$ model is characterized by $\xi = \frac{1}{2}$, while in the quadratic model we have $\xi = -1$. The electronic degrees of freedom are described by Pauli operators defined in terms of the
diabatic electronic states $|0\rangle$ and $|1\rangle$ as
$\sigma_{x} = |0 \rangle \langle 1| + |1 \rangle \langle 0|$,
$\sigma_{y} = -i|0 \rangle \langle 1| + i|1 \rangle \langle 0|$,
and $\sigma_{z} = |0 \rangle \langle 0| - |1 \rangle \langle 1|$. The electronic potential energies associated with $H$ are $E_{\pm} (r) = \frac{1}{2} r^{2} \pm kr + 1/(8r^{2})$, which, by omitting the divergent Born-Huang term $1/(8r^{2})$, conically intersect at the origin $r=0$.
The Born-Oppenheimer regime is attained when the electronic potential energies $E_{\pm}$ are well separated. Explicitly one may take this to mean that $E_{+}-E_{-}$ is large at the minimum of $E_{-}$. For large separation the Born-Huang term $1/(8r^{2})$ is negligible so that $E_{-}$ has its minimum approximately at $r=k$ yielding the Born-Oppenheimer condition $2k^{2} \gg 1$. In this regime the nuclear motion may take place on one of the electronic potential energy surfaces, accompanied by an additional effective vector potential.
Similarly, adiabatic motion of the electronic variables
occurs when \cite{schiff55} $|\langle - |\dot{H}_{e}| +
\rangle | \ll (E_{+}-E_{-})^{2}$, when treating the nuclear degrees of freedom as time-dependent variables.
Here, the operator $H_{e}= kr^{2|\xi|} [\cos (2\xi \theta) \sigma_{x} + \sin (2\xi \theta) \sigma_{y}]$ is the electronic Hamiltonian with instantaneous eigenstates
$|\pm \rangle$ corresponding to the electronic potential energies $E_{\pm}$, and we have put $\hbar = 1$. At the minimum of $E_{-}$ the adiabaticity condition
becomes $\xi |\dot{\theta}| \ll 2k^{2}$, where we again have neglected the Born-Huang term $1/(8r^{2})$.
In the Born-Oppenheimer regime consider the nuclear motion on the lowest electronic potential energy surface, as described by the effective Hamiltonian \begin{eqnarray}
H_{-} = \langle - |H| - \rangle = \frac{1}{2} p_{r}^{2} + \frac{1}{2r^{2}} \Big[ p_{\theta} + \xi \Big]^{2} + E_{-} (r) \end{eqnarray}
with $|-\rangle$ single-valued around the conical intersection at the origin. Here, $\xi {\bf e}_{\theta} /r$ is the MAB vector potential that could be absorbed into the phase factor \begin{equation} \exp \Big[ i\int_{{\bf r}_{0}}^{{\bf r}} \frac{\xi}{r'} {\bf e}_{\theta}' \cdot d{\bf r}' \Big] = \exp \Big[ i\xi (\theta - \theta_{0}) \Big] , \label{eq:mabshift} \end{equation} which for the linear case $\xi = \frac{1}{2}$ corresponds to a nontrivial sign change for a closed loop only if it encircles the conical intersection. This sign change has measurable consequences as it restores the original molecular symmetry of the vibronic ground state \cite{ham87} and as it shifts the spectrum of the quantized nuclear pseudorotation \cite{kendrick97}. On the other hand, in the quadratic case $\xi = -1$, the phase factor in Eq. (\ref{eq:mabshift}) for a closed loop around the conical intersection is $+1$ and the MAB vector potential does not have any observable consequences on the nuclear motion.
The electronic Born-Oppenheimer states are eigenstates of $\sigma_{x} (\theta) \equiv \cos (2\xi \theta ) \sigma_{x} + \sin (2\xi \theta ) \sigma_{y}$. It is therefore perhaps tempting to replace the electronic motion by the appropriate eigenvalue of $\sigma_{x} (\theta )$ in the Born-Oppenheimer regime so that the electronic variables can be ignored, creating an illusion that the nontrivial effect of the MAB vector potential on the nuclear motion in the $\xi = \frac{1}{2}$ case is nonlocal and topological in the sense of the standard AB effect. However, the electronic variables are dynamical and do not commute among themselves. In particular, although the expectation values of the remaining mutually complementary electronic observables $\sigma_{y} (\theta ) \equiv - \sin (2\xi \theta ) \sigma_{x} + \cos (2\xi \theta ) \sigma_{y}$ and $\sigma_{z} (\theta) \equiv \sigma_{z}$ vanish in the Born-Oppenheimer limit, their fluctuations do not. As the molecule is ideally a closed physical system, its total energy is conserved so that equal and opposite fluctuations must be exchanged locally with the internal electromagnetic field of the molecule during the nuclear pseudorotation.
To take the argumentation against the nonlocal and topological nature of MAB a step further, let us consider the vibronic motion in the adiabatic picture. First, we note that the motion of $\theta (t)$ depends among other things on the motion of the electronic variables. Thus, to distinguish the vibronic coupling effect from that associated with the dynamics of the nuclei, it turns out to be useful to transform the electronic variables to an internal molecular frame that co-moves with the pseudorotation. In this frame the vibronic Hamiltonian reads \begin{equation} H' = U^{\dagger}H U = \frac{1}{2k^{2}} \big[ p_{\theta} - \xi \sigma_{z} \big]^{2} + \frac{k^{2}}{2} + k^{2} \sigma_{x} , \end{equation} where $U = \exp [ -i \xi \theta \sigma_{z} ]$ is the unitary spin rotation operator and we have put $r=k$. Using $H'$ and the Heisenberg picture we obtain the equations of motion \begin{eqnarray} k^{2} \dot{\theta} & = & p_{\theta} - \xi \sigma_{z} , \nonumber \\ \dot{p}_{\theta} & = & 0 , \nonumber \\ \dot{\sigma}_{x} & = & 2\xi \dot{\theta} \sigma_{y} , \nonumber \\ \dot{\sigma}_{y} & = & - 2\xi \dot{\theta} \sigma_{x} - 2k^{2} \sigma_{z} , \nonumber \\ \dot{\sigma}_{z} & = & 2k^{2} \sigma_{y} , \label{eq:eqm} \end{eqnarray}
where $\xi |\dot{\theta}| \ll 2k^{2}$ in the adiabatic limit. It follows that the electronic part describes the local torque due to an effective magnetic field ${\bf B} = (2k^{2},0,-2\xi \dot{\theta})$ seen by the electronic variables in the rotating frame. The large static $x$ component of ${\bf B}$ depends only on the vibronic coupling parameter $k$ and is irrelevant to MAB. On the other hand, the small $z$ component corresponds exactly to the MAB effect and gives rise to a ${\mbox{\boldmath $\sigma$}} \cdot ({\bf v} \times {\bf E})$ interaction effect on the spin in its rest frame when it moves in the $r-\theta$ plane exposed to the effective electric field ${\bf E} = (\xi /r) {\bf e}_{r}$. This effective ${\bf E}$ field coincides with that of a charged line in the $z$ direction sitting at the conical intersection and with $\xi$ being proportional to the charge per unit length. Thus, the term responsible for MAB resembles exactly that of the Aharonov-Casher (AC) effect \cite{aharonov84} for an electrically neutral spin$-\frac{1}{2}$ particle encircling a line of charge.
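Explicitly, treating $\dot{\theta}$ as a slowly varying c-number, the spin-dependent part of $H'$ is (up to an additive constant) \[ \frac{1}{2} {\bf B} \cdot {\mbox{\boldmath $\sigma$}} = k^{2} \sigma_{x} - \xi \dot{\theta} \sigma_{z} , \] so that the instantaneous spin eigenstates in the rotating frame are tilted away from the $\pm x$ directions by an angle $\chi$ with $\tan \chi = \xi \dot{\theta}/k^{2}$ (the angle $\chi$ is introduced here only for illustration); the tilt is small precisely when the adiabaticity condition $\xi |\dot{\theta}| \ll 2k^{2}$ holds.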
Interpreting the phase shift in Eq. (\ref{eq:mabshift}) as an AC effect suggests that MAB is essentially neither nonlocal nor topological as it depends on an integral whose integrand is proportional to the locally gauge invariant effective ${\bf E}$ field and does not depend on any physical quantity outside the nuclear path. As has been demonstrated in Ref. \cite{peshkin95} in the context of force free electromagnetic effects for neutrons, the local nature of MAB may be further elucidated by considering the relative change of the initial and instantaneous electronic variables being represented by the vector operators ${\mbox{\boldmath $\sigma$}}(0)$ and ${\mbox{\boldmath $\sigma$}} (t)$, respectively. The starting point for such a semiclassical analysis is to consider in the rotating frame the electronic autocorrelation operators \begin{eqnarray} C(t) & = & \frac{1}{4} \Big[ \sigma_{x} (0) \sigma_{x} (t) + \sigma_{y} (0) \sigma_{y} (t) + {\text{h.c.}} \Big] , \nonumber \\ S(t) & = & \frac{1}{4} \Big[ \sigma_{x} (0) \sigma_{y} (t) - \sigma_{y} (0) \sigma_{x} (t) + {\text{h.c.}} \Big] \end{eqnarray} that measure the correlation between the $x-y$ projections of ${\mbox{\boldmath $\sigma$}} (0)$ and ${\mbox{\boldmath $\sigma$}} (t)$. These operators are Hermitian and thus measurable in principle. Their equations of motion read \begin{eqnarray} \dot{C} & = & 2\xi \dot{\theta} \ S - \xi \dot{\theta} \sin (2k^{2}t) , \nonumber \\ \dot{S} & = & -2\xi \dot{\theta} \ C + \xi \dot{\theta} \Big[ 1- \cos (2k^{2}t) \Big] , \label{eq:autoeqm} \end{eqnarray} where we have used Eq. (\ref{eq:eqm}). These equations are characterized by two time scales: the fast electronic oscillations and the slow nuclear pseudorotation, with frequencies $2k^{2}$ and $2\xi \dot{\theta}$, respectively. Thus, we may simplify Eq. (\ref{eq:autoeqm}) by adiabatic averaging \cite{arnold89} over one period of the fast motion yielding \begin{eqnarray} \dot{\overline{C}} & = & 2\xi \dot{\theta} \ \overline{S} , \nonumber \\ \dot{\overline{S}} & = & -2\xi \dot{\theta} \ \overline{C} + \xi \dot{\theta} , \end{eqnarray} which have the solutions \begin{eqnarray} \overline{C} & = & \frac{1}{2} \Big( 1 + \cos [2\xi (\theta - \theta_{0})] \Big) , \nonumber \\ \overline{S} & = & - \frac{1}{2} \sin [2\xi (\theta - \theta_{0})] . \label{eq:autosolutions} \end{eqnarray} This shows that the relative angle $\varphi = 2\xi (\theta - \theta_{0})$ between the two $x-y$ projections of ${\mbox{\boldmath $\sigma$}} (0)$ and ${\mbox{\boldmath $\sigma$}} (t)$ is changed by the action of the local torque. $\varphi$ is precisely twice the MAB phase in Eq. (\ref{eq:mabshift}), where the factor $2$ is the usual rotation factor for spin$-\frac{1}{2}$. Thus, MAB may be described in terms of the relative angle between the $x-y$ projections of ${\mbox{\boldmath $\sigma$}} (0)$ and ${\mbox{\boldmath $\sigma$}} (t)$. The change of this angle is due to the torque on the electronic variables and shows that MAB is essentially a local effect.
There is an objective way to relate the phase shift in Eq. (\ref{eq:mabshift}) locally along the nuclear path using the noncyclic Berry phase $\gamma_{g}$ of the (lowest) electronic Born-Oppenheimer state vector \begin{equation}
|- (\theta) \rangle = \frac{e^{i\alpha (\theta)}}{\sqrt{2}}
\Big[ e^{-i\xi \theta} |0\rangle + e^{i\xi \theta} |1\rangle \Big] . \end{equation} Here, we assume $\alpha$ to be differentiable along the path but otherwise arbitrary. The noncyclic Berry phase is defined by removing the accumulation of local phase changes from the total phase and is testable in polarimetry \cite{garcia98} or in interferometry \cite{wagh98}. We obtain for
$|- (\theta) \rangle$ \cite{garcia98} \begin{eqnarray} \gamma_{g} & = & \arg \langle - (\theta_{0})
|- (\theta) \rangle + i \int_{\theta_{0}}^{\theta}
\langle - (\theta') | \frac{\partial}{\partial \theta'}
|- (\theta') \rangle d\theta' \nonumber \\
& = & \arg \cos [\xi (\theta -\theta_{0})] . \end{eqnarray} Clearly, $\gamma_{g}$ is locally gauge invariant as it is independent of $\alpha$. It corresponds to a phase jump of $\pi$ at $\theta -\theta_{0} = \pi /(2\xi)$, where the overlap
$\langle - (\theta_{0}) |- (\theta) \rangle$ vanishes. In the quadratic $E\times \epsilon$ Jahn-Teller case where $\xi = -1$, a closed loop around the conical intersection contains two $\pi$ phase jumps, thus explaining the $+1$ MAB phase factor. On the other hand, in the linear case where $\xi = \frac{1}{2}$, there is only a single $\pi$ jump creating a physically nontrivial sign change for such a loop. Thus, both the absence in the quadratic case and the presence in the linear case of a nontrivial MAB effect could be explained locally as they both require the existence of points along the nuclear path where the electronic states at $\theta_{0}$ and $\theta$ become orthogonal. This local assignment of electronic phase shift is gauge invariant at each point along the nuclear path and thus experimentally testable in principle. It shows that MAB is essentially not a topological effect.
In conclusion, we have shown in the case of the $E \otimes \epsilon$ Jahn-Teller model that the molecular Aharonov-Bohm (MAB) effect is neither nonlocal nor topological in the sense of the standard magnetic Aharonov-Bohm effect. Locality is preserved as MAB can be explained as a local torque on the electronic variables that accumulates along the path of the nuclei around the conical intersection. It is not topological as it may be described in terms of a gauge invariant effective electric field and as there is an objective way to relate the phase shift locally along the nuclear path via the noncyclic Berry phase. We remark that the present analysis also applies to other molecular systems that exhibit conical intersections, as well as to the microwave resonator experiments recently discussed in the literature \cite{lauber94}. \vskip 0.5 cm This work was supported by the Swedish Research Council.
\end{document} | arXiv |
Robinson–Schensted correspondence
In mathematics, the Robinson–Schensted correspondence is a bijective correspondence between permutations and pairs of standard Young tableaux of the same shape. It has various descriptions, all of which are of algorithmic nature, it has many remarkable properties, and it has applications in combinatorics and other areas such as representation theory. The correspondence has been generalized in numerous ways, notably by Knuth to what is known as the Robinson–Schensted–Knuth correspondence, and a further generalization to pictures by Zelevinsky.
The simplest description of the correspondence is using the Schensted algorithm (Schensted 1961), a procedure that constructs one tableau by successively inserting the values of the permutation according to a specific rule, while the other tableau records the evolution of the shape during construction. The correspondence had been described, in a rather different form, much earlier by Robinson (Robinson 1938), in an attempt to prove the Littlewood–Richardson rule. The correspondence is often referred to as the Robinson–Schensted algorithm, although the procedure used by Robinson is radically different from the Schensted algorithm, and almost entirely forgotten. Other methods of defining the correspondence include a nondeterministic algorithm in terms of jeu de taquin.
The bijective nature of the correspondence relates it to the enumerative identity
$\sum _{\lambda \in {\mathcal {P}}_{n}}(t_{\lambda })^{2}=n!$
where ${\mathcal {P}}_{n}$ denotes the set of partitions of n (or of Young diagrams with n squares), and tλ denotes the number of standard Young tableaux of shape λ.
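For small n the identity is easy to verify by computer, for instance using the hook length formula for tλ. The Python sketch below is purely illustrative; the helper names are not standard library functions.

```python
from math import factorial

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def num_standard_tableaux(shape):
    """Number of standard Young tableaux of the given shape (hook length formula)."""
    n = sum(shape)
    col_lengths = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hook_product = 1
    for i, row_length in enumerate(shape):
        for j in range(row_length):
            arm = row_length - j - 1
            leg = col_lengths[j] - i - 1
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

n = 6
assert sum(num_standard_tableaux(lam) ** 2 for lam in partitions(n)) == factorial(n)
```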
The Schensted algorithm
The Schensted algorithm starts from the permutation σ written in two-line notation
$\sigma ={\begin{pmatrix}1&2&3&\cdots &n\\\sigma _{1}&\sigma _{2}&\sigma _{3}&\cdots &\sigma _{n}\end{pmatrix}}$
where σi = σ(i), and proceeds by constructing sequentially a sequence of (intermediate) ordered pairs of Young tableaux of the same shape:
$(P_{0},Q_{0}),(P_{1},Q_{1}),\ldots ,(P_{n},Q_{n}),$
where P0 = Q0 are empty tableaux. The output tableaux are P = Pn and Q = Qn. Once Pi−1 is constructed, one forms Pi by inserting σi into Pi−1, and then Qi by adding an entry i to Qi−1 in the square added to the shape by the insertion (so that Pi and Qi have equal shapes for all i). Because of the more passive role of the tableaux Qi, the final one Qn, which is part of the output and from which the previous Qi are easily read off, is called the recording tableau; by contrast the tableaux Pi are called insertion tableaux.
Insertion
The basic procedure used to insert each σi is called Schensted insertion or row-insertion (to distinguish it from a variant procedure called column-insertion). Its simplest form is defined in terms of "incomplete standard tableaux": like standard tableaux they have distinct entries, forming increasing rows and columns, but some values (still to be inserted) may be absent as entries. The procedure takes as arguments such a tableau T and a value x not present as entry of T; it produces as output a new tableau denoted T ← x and a square s by which its shape has grown. The value x appears in the first row of T ← x, either having been added at the end (if no entries larger than x were present), or otherwise replacing the first entry y > x in the first row of T. In the former case s is the square where x is added, and the insertion is completed; in the latter case the replaced entry y is similarly inserted into the second row of T, and so on, until at some step the first case applies (which certainly happens if an empty row of T is reached).
More formally, the following pseudocode describes the row-insertion of a new value x into T.[1]
1. Set i = 1 and j to one more than the length of the first row of T.
2. While j > 1 and x < Ti, j−1, decrease j by 1. (Now (i, j) is the first square in row i with either an entry larger than x in T, or no entry at all.)
3. If the square (i, j) is empty in T, terminate after adding x to T in square (i, j) and setting s = (i, j).
4. Swap the values x and Ti, j. (This inserts the old x into row i, and saves the value it replaces for insertion into the next row.)
5. Increase i by 1 and return to step 2.
The shape of T grows by exactly one square, namely s.
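The procedure above can be transcribed directly into code. The following Python sketch performs one row-insertion and returns the square by which the shape grows; the representation of a tableau as a list of increasing rows and the name `row_insert` are choices of this illustration, not part of the original description.

```python
import bisect

def row_insert(T, x):
    """Schensted row-insertion of x into the tableau T (a list of increasing rows).

    Modifies T in place and returns the square (i, j) added to its shape.
    """
    i = 0
    while True:
        if i == len(T):                      # an empty row is reached: start it with x
            T.append([x])
            return (i, 0)
        row = T[i]
        j = bisect.bisect_left(row, x)       # position of the first entry of row i larger than x
        if j == len(row):                    # x is larger than every entry: add it at the end
            row.append(x)
            return (i, j)
        row[j], x = x, row[j]                # bump the replaced entry into the next row
        i += 1
```

For example, inserting 3, 1, 2 in turn into an initially empty tableau produces [[1, 2], [3]], with the shape growing by one square at each step.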
Correctness
The fact that T ← x has increasing rows and columns, if the same holds for T, is not obvious from this procedure (entries in the same column are never even compared). It can however be seen as follows. At all times except immediately after step 4, the square (i, j) is either empty in T or holds a value greater than x; step 5 re-establishes this property because (i, j) now is the square immediately below the one that originally contained x in T. Thus the effect of the replacement in step 4 on the value Ti, j is to make it smaller; in particular it cannot become greater than its right or lower neighbours. On the other hand the new value is not less than its left neighbour (if present) either, as is ensured by the comparison that just made step 2 terminate. Finally to see that the new value is larger than its upper neighbour Ti−1, j if present, observe that Ti−1, j < x holds after step 5, and that decreasing j in step 2 only decreases the corresponding value Ti−1, j.
Constructing the tableaux
The full Schensted algorithm applied to a permutation σ proceeds as follows.
1. Set both P and Q to the empty tableau
2. For i increasing from 1 to n compute P ← σi and the square s by the insertion procedure; then replace P by P ← σi and add the entry i to the tableau Q in the square s.
3. Terminate, returning the pair (P, Q).
The algorithm produces a pair of standard Young tableaux.
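A compact Python sketch of the whole construction is given below; the permutation is taken in one-line notation as a sequence of distinct integers, and the helper name `robinson_schensted` is ours.

```python
import bisect

def robinson_schensted(sigma):
    """Map a permutation (one-line notation, distinct integers) to the pair (P, Q)
    of standard Young tableaux of the same shape."""
    P, Q = [], []
    for step, x in enumerate(sigma, start=1):
        i = 0
        while True:
            if i == len(P):                  # a new row is created at the bottom
                P.append([x])
                Q.append([step])             # Q records when each square appeared
                break
            j = bisect.bisect_left(P[i], x)
            if j == len(P[i]):               # x is placed at the end of row i
                P[i].append(x)
                Q[i].append(step)
                break
            P[i][j], x = x, P[i][j]          # bump the replaced entry one row down
            i += 1
    return P, Q

# Example: robinson_schensted([3, 1, 2, 5, 4]) returns
# P = [[1, 2, 4], [3, 5]] and Q = [[1, 3, 4], [2, 5]].
```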
Invertibility of the construction
It can be seen that given any pair (P, Q) of standard Young tableaux of the same shape, there is an inverse procedure that produces a permutation that will give rise to (P, Q) by the Schensted algorithm. It essentially consists of tracing steps of the algorithm backwards, each time using an entry of Q to find the square where the inverse insertion should start, moving the corresponding entry of P to the preceding row, and continuing upwards through the rows until an entry of the first row is replaced, which is the value inserted at the corresponding step of the construction algorithm. These two inverse algorithms define a bijective correspondence between permutations of n on one side, and pairs of standard Young tableaux of equal shape and containing n squares on the other side.
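The inverse procedure sketched above can likewise be written down explicitly. In the following Python sketch the two tableaux are assumed to be given as lists of rows of equal shape, as produced by the construction algorithm; the name `inverse_rs` is illustrative.

```python
import bisect

def inverse_rs(P, Q):
    """Recover the permutation corresponding to a pair (P, Q) of standard
    Young tableaux of the same shape (each given as a list of rows)."""
    P = [row[:] for row in P]                # work on copies
    Q = [row[:] for row in Q]
    n = sum(len(row) for row in P)
    sigma = []
    for k in range(n, 0, -1):
        # the entry k of Q sits at the end of some row; that square is removed
        i = next(r for r, row in enumerate(Q) if row and row[-1] == k)
        Q[i].pop()
        x = P[i].pop()
        if not P[i]:
            del P[i], Q[i]
        # reverse bumping: push x back up through the rows above it
        for r in range(i - 1, -1, -1):
            j = bisect.bisect_left(P[r], x) - 1      # rightmost entry smaller than x
            P[r][j], x = x, P[r][j]
        sigma.append(x)                      # x is the value inserted at step k
    sigma.reverse()
    return sigma
```

Applying `inverse_rs` to the pair produced from any permutation returns that permutation, which is the bijectivity asserted above.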
Properties
One of the most fundamental properties, but not evident from the algorithmic construction, is symmetry:
• If the Robinson–Schensted correspondence associates tableaux (P, Q) to a permutation σ, then it associates (Q, P) to the inverse permutation σ−1.
This can be proven, for instance, by appealing to Viennot's geometric construction.
Further properties, all assuming that the correspondence associates tableaux (P, Q) to the permutation σ = (σ1, ..., σn).
• In the pair of tableaux (P′, Q′) associated to the reversed permutation (σn, ..., σ1), the tableau P′ is the transpose of the tableau P, and Q′ is a tableau determined by Q, independently of P (namely the transpose of the tableau obtained from Q by the Schützenberger involution).
• The length of the longest increasing subsequence of σ1, ..., σn is equal to the length of the first row of P (and of Q).
• The length of the longest decreasing subsequence of σ1, ..., σn is equal to the length of the first column of P (and of Q), as follows from the previous two properties.
• The descent set {i : σi > σi+1} of σ equals the descent set {i : i+1 is in a row strictly below the row of i} of Q.
• Identify subsequences of σ with their sets of indices. It is a theorem of Greene that for any k ≥ 1, the size of the largest set that can be written as the union of at most k increasing subsequences is λ1 + ... + λk, where λ = (λ1, λ2, ...) is the common shape of P and Q. In particular, λ1 equals the largest length of an increasing subsequence of σ.
• If σ is an involution, then the number of fixed points of σ equals the number of columns of odd length in λ.
See also
• Viennot's geometric construction, which provides a diagrammatic interpretation of the correspondence.
• Plactic monoid: the insertion process can be used to define an associative product of Young tableaux with entries between 1 and n, which is referred to as the Plactic monoid of order n.
Notes
1. Adapted from D. E. Knuth (1973), The Art of Computer Programming, vol. 3, pp. 50–51
References
• Fulton, William (1997), Young Tableaux, London Mathematical Society Student Texts, vol. 35, Cambridge University Press, ISBN 978-0-521-56144-0, MR 1464693.
• Knuth, Donald E. (1970), "Permutations, matrices, and generalized Young tableaux", Pacific Journal of Mathematics, 34: 709–727, doi:10.2140/pjm.1970.34.709, MR 0272654
• Robinson, G. de B. (1938), "On the Representations of the Symmetric Group", American Journal of Mathematics, 60 (3): 745–760, doi:10.2307/2371609, JSTOR 2371609, Zbl 0019.25102.
• Sagan, B. E. (2001), The Symmetric Group, Graduate Texts in Mathematics, vol. 203, New York: Springer-Verlag, ISBN 0-387-95067-2.
• Schensted, C. (1961), "Longest increasing and decreasing subsequences", Canadian Journal of Mathematics, 13: 179–191, doi:10.4153/CJM-1961-015-3, MR 0121305.
• Stanley, Richard P. (1999), Enumerative Combinatorics, Vol. 2, Cambridge Studies in Advanced Mathematics, vol. 62, Cambridge University Press, ISBN 978-0-521-56069-6, MR 1676282.
• Zelevinsky, A. V. (1981), "A generalization of the Littlewood–Richardson rule and the Robinson–Schensted–Knuth correspondence", Journal of Algebra, 69 (1): 82–94, doi:10.1016/0021-8693(81)90128-9, MR 0613858.
Further reading
• Green, James A. (2007). Polynomial representations of GLn. Lecture Notes in Mathematics. Vol. 830. With an appendix on Schensted correspondence and Littelmann paths by K. Erdmann, J. A. Green and M. Schocker (2nd corrected and augmented ed.). Berlin: Springer-Verlag. ISBN 3-540-46944-3. Zbl 1108.20044.
External links
• van Leeuwen, M.A.A. (2001) [1994], "Robinson–Schensted correspondence", Encyclopedia of Mathematics, EMS Press
• Williams, L., Interactive animation of the Robinson-Schensted algorithm
Multiscale unfolding of real networks by geometric renormalization
Guillermo García-Pérez, Marián Boguñá & M. Ángeles Serrano
Nature Physics volume 14, pages 583–589 (2018)
Symmetries in physical theories denote invariance under some transformation, such as self-similarity under a change of scale. The renormalization group provides a powerful framework to study these symmetries, leading to a better understanding of the universal properties of phase transitions. However, the small-world property of complex networks complicates application of the renormalization group by introducing correlations between coexisting scales. Here, we provide a framework for the investigation of complex networks at different resolutions. The approach is based on geometric representations, which have been shown to sustain network navigability and to reveal the mechanisms that govern network structure and evolution. We define a geometric renormalization group for networks by embedding them into an underlying hidden metric space. We find that real scale-free networks show geometric scaling under this renormalization group transformation. We unfold the networks in a self-similar multilayer shell that distinguishes the coexisting scales and their interactions. This in turn offers a basis for exploring critical phenomena and universality in complex networks. It also affords us immediate practical applications, including high-fidelity smaller-scale replicas of large networks and a multiscale navigation protocol in hyperbolic space, which betters those on single layers.
The definition of self-similarity and scale invariance1,2 in complex networks has been limited by the lack of a valid source of geometric length-scale transformations. Previous efforts to study these symmetries are based on topology and include spectral coarse-graining3 or box-covering procedures based on shortest path lengths between nodes4,5,6,7,8,9. However, the collection of shortest paths is a poor source of length-based scaling factors in networks due to the small-world10 or even ultrasmall-world11 properties, and the problem remained open. Other studies have approached the multiscale structure of network models in a more geometric way12,13, but their findings cannot be directly applied to real-world networks.
Models of complex networks based on hidden metric spaces14,15,16,17 open the door to a proper geometric definition of self-similarity and scale invariance and to an unfolding of the different scales present in the structure of real networks. These models are able to explain universal features shared by real networks—including the small-world property, scale-free degree distributions and clustering—and also fundamental mechanisms, such as preferential attachment in growing networks18 and the emergence of communities19.
Naturally, the geometricalization of networks allows a reservoir of distance scales so that we can borrow concepts and techniques from the renormalization group in statistical physics20,21. By recursive averaging over short-distance degrees of freedom, renormalization has successfully explained, for instance, the universality properties of critical behaviour in phase transitions22. In this study, we introduce a geometric renormalization group for complex networks (RGN). The method, inspired by the block spin renormalization group devised by L. P. Kadanoff20, relies on a geometric embedding of the networks to coarse-grain neighbouring nodes into supernodes and to define a new map that progressively selects long-range connections.
Evidence of geometric scaling in real networks
Hidden metric space network models couple the topology of a network to an underlying geometry through a universal connectivity law depending on distances on such space, which combine popularity and similarity dimensions14,17,18, such that more popular and similar nodes have more chance to interact. Popularity is related to the degrees of the nodes14,23, and similarity stands as an aggregate of all other attributes that modulate the likelihood of interactions. These two dimensions define a hyperbolic plane as the effective geometry of networks, and their contribution to the probability of connection can be explicit or combined into an effective hyperbolic distance. This gives rise to the two isomorphic geometric models, \({{\mathbb{S}}}^{1}\) and \({{\mathbb{H}}}^{2}\). In the \({{\mathbb{S}}}^{1}\) model14, the popularity of a node i is associated with a hidden degree κi, complemented by its angular position in the one-dimensional sphere (or circle) as a similarity measure, such that the probability of connection increases with the product of the hidden degrees and decreases with their distance along the circle (equation (5) in Methods). Reciprocally, in the equivalent \({{\mathbb{H}}}^{2}\) model16,17, the hidden degree is transformed into a radial coordinate, such that higher degree nodes are placed closer to the centre of the hyperbolic disk, while the angular coordinate remains as in the \({{\mathbb{S}}}^{1}\) circle, and the probability of connection decreases with the hyperbolic distance. In their scale-free version, both models have only three parameters μ, γ and β, which control the average degree \(\left\langle k\right\rangle\), the exponent of the degree distribution γ and the local clustering coefficient \(\bar{c}\), respectively. The radius R of the \({{\mathbb{S}}}^{1}\) circle is adjusted to maintain a constant density of nodes equal to one.
The renormalization transformation is defined on the basis of the similarity dimension represented by the angular coordinate of the nodes. We present here the formulation for the \({{\mathbb{S}}}^{1}\) model, as it makes the similarity dimension explicit and is mathematically more tractable. The transformation zooms out by changing the minimum length scale from that of the original network to a larger value. It proceeds by, first, defining non-overlapping blocks of consecutive nodes of size r along the circle and, second, by coarse-graining the blocks into supernodes. Each supernode is then placed within the angular region defined by the corresponding block so that the order of nodes is preserved. All the links between some node in one supernode and some node in the other, if any, are renormalized into a single link between the two supernodes. This operation can be iterated starting from the original network at layer l = 0. Finally, the set of renormalized network layers l, each rl times smaller than the original one, forms a multiscale shell of the network. Figure 1 illustrates the process.
Fig. 1: Geometric renormalization transformation for complex networks.
Each layer is obtained after a renormalization step with resolution r starting from the original network in l = 0. Each node i in red is placed at an angular position \({\theta }_{i}^{(l)}\) on the \({{\mathbb{S}}}^{1}\) circle and has a size proportional to the logarithm of its hidden degree \({\kappa }_{i}^{(l)}\). Straight solid lines represent the links in each layer. Coarse-graining blocks correspond to the blue shadowed areas, and dashed lines connect nodes to their supernodes in layer l + 1. Two supernodes in layer l + 1 are connected if and only if, in layer l, some node in one supernode is connected to some node in the other (blue links give an example). The geometric renormalization transformation has Abelian semigroup structure with respect to the composition, meaning that a certain number of iterations of a given resolution are equivalent to a single transformation of higher resolution. For instance, in the figure, the same transformation with r = 4 goes from l = 0 to l = 2 in a single step. Whenever the number of nodes is not divisible by r, the last supernode in a layer contains less than r nodes, as in the example at l = 1. The RGN transformations are valid for uneven supernode sizes as well; one could divide the circle into equally sized sectors of a certain arc length such that they contain on average a constant number of nodes. The set of transformations parametrized by r does not include an inverse element to reverse the process.
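At the purely topological level, one renormalization step amounts to grouping consecutive nodes into supernodes and collapsing their links, which can be sketched as follows. This is a schematic illustration rather than the authors' code; `order` is assumed to list the nodes by increasing angular coordinate, and networkx is used only for convenience.

```python
import networkx as nx

def renormalize_topology(G, order, r=2):
    """One RGN step: coarse-grain blocks of r consecutive nodes (in angular order)
    into supernodes; two supernodes are linked iff any of their members were linked in G."""
    block = {node: idx // r for idx, node in enumerate(order)}   # node -> supernode index
    G_ren = nx.Graph()
    G_ren.add_nodes_from(range((len(order) + r - 1) // r))       # last block may be smaller
    for u, v in G.edges():
        bu, bv = block[u], block[v]
        if bu != bv:                         # links internal to a block disappear
            G_ren.add_edge(bu, bv)           # multiple links collapse into a single one
    return G_ren
```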
In this study, we apply the RGN to eight different real scale-free networks from very different domains: technology (Internet24), transportation (Airports25,26), biology (Metabolic27, Proteome28 and Drosophila29), social (Enron30,31) and languages (Music32 and Words33) (section I in the Supplementary Information). First, their geometric maps are obtained by embedding the nodes in the underlying geometry using statistical inference techniques, which identify the hidden degrees and angular coordinates, maximizing the likelihood that the topology of the network is reproduced by the model15,34. Second, we apply the coarse-graining by defining blocks of size r = 2, and iterate the process. In the limit N → ∞, where N is the number of nodes, the RGN can be applied up to any desired scale of observation, whereas it is bounded to order \({\mathscr{O}}({\rm{log}}N)\) iterations in finite systems.
The resulting topological features of three of the renormalized networks are shown in Fig. 2 (see Supplementary Fig. 1 for the others). We observe that the degree distributions, degree–degree correlations, clustering spectra and community structures (see Methods) show self-similar behaviour. The last property suggests a new and efficient multiscale community detection algorithm for complex networks35,36,37.
Fig. 2: Self-similarity of real networks along the RGN flow.
Each column shows the RGN flow with r = 2 of different topological features of the Internet autonomous systems (AS) network (left), the human Metabolic network (middle) and the Music network (right). a, Complementary cumulative distribution Pc of rescaled degrees \({k}_{{\rm{res}}}^{(l)}={k}^{(l)}{\rm{/}}\left\langle {k}^{(l)}\right\rangle\). b, Degree-dependent clustering coefficient over rescaled-degree classes. Degree–degree correlations, as measured by the normalized average nearest-neighbour degree \({\bar{k}}_{{\rm{nn,n}}}\left({k}_{{\rm{res}}}^{(l)}\right)={\bar{k}}_{{\rm{nn}}}\left({k}_{{\rm{res}}}^{(l)}\right)\left\langle {k}^{(l)}\right\rangle {\rm{/}}\left\langle {\left({k}^{(l)}\right)}^{2}\right\rangle\), are shown in the insets. c, RGN flow of the community structure. Q(l) is the modularity in layer l, Q(l,0) is the modularity that the community structure of layer l induces in the original network and nMI(l,0) is the normalized mutual information between the latter and the community structure detected directly in the original network (see Methods). The number of layers in each system is determined by their original size. The horizontal dashed lines indicate the modularity in the original networks.
Geometric renormalization of the \({{\mathbb{S}}}^{1}{\rm{/}}{{\mathbb{H}}}^{2}\) model
The self-similarity exhibited by real-world networks can be understood in terms of their congruency with the hidden metric space network model. As we show analytically, the \({{\mathbb{S}}}^{1}\) and \({{\mathbb{H}}}^{2}\) models are renormalizable in a geometric sense. This means that if a real scale-free network is compatible with the model and admits a good embedding, as is the case for the real networks analysed in this study, the model will be able to predict its self-similarity and geometric scaling.
We demonstrate next the renormalizability of the \({{\mathbb{S}}}^{1}\) model (see section II in the Supplementary Information for mathematical details and also for the definition of the RGN in hyperbolic space). The renormalized networks remain maximally congruent with the hidden metric space model by assigning a new hidden degree \({\kappa }_{i}^{(l+1)}\) to supernode i in layer l + 1 as a function of the hidden degrees of the nodes it contains in layer l according to:
$${\kappa }_{i}^{(l+1)}={\left(\sum _{j=1}^{r}{\left({\kappa }_{j}^{(l)}\right)}^{\beta }\right)}^{1/\beta }$$
as well as an angular coordinate \({\theta }_{i}^{(l+1)}\) given by:
$${\theta }_{i}^{(l+1)}={\left(\frac{\sum _{j=1}^{r}{\left({\theta }_{j}^{(l)}{\kappa }_{j}^{(l)}\right)}^{\beta }}{\sum _{j=1}^{r}{\left({\kappa }_{j}^{(l)}\right)}^{\beta }}\right)}^{1/\beta }$$
The global parameters need to be rescaled as μ(l+1) = μ(l)/r, β(l+1) = β(l), and R(l+1) = R(l)/r. This implies that the probability \({p}_{ij}^{(l+1)}\) for two supernodes i and j to be connected in layer l + 1 maintains its original form (equation (5) and Fig. 3a). This applies both to the model and to real networks as long as they admit a good embedding (Supplementary Fig. 2). In addition, notice that the renormalization transformations of the geometric layout also have the Abelian semigroup structure.
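Equations (1) and (2), together with the rescaling of the global parameters, translate into a short routine. The sketch below assumes `kappa` and `theta` are NumPy arrays already sorted by angular coordinate; the function name is ours.

```python
import numpy as np

def renormalize_coordinates(kappa, theta, beta, mu, R, r=2):
    """Map hidden degrees and angles of layer l to layer l+1 using blocks of size r:
    kappa' = (sum_j kappa_j^beta)^(1/beta), theta' is the kappa-weighted analogue,
    and mu, R are divided by r (beta is unchanged)."""
    n_blocks = int(np.ceil(len(kappa) / r))
    kappa_new = np.empty(n_blocks)
    theta_new = np.empty(n_blocks)
    for b in range(n_blocks):
        k = kappa[b * r:(b + 1) * r]
        t = theta[b * r:(b + 1) * r]
        kappa_new[b] = np.sum(k ** beta) ** (1.0 / beta)
        theta_new[b] = (np.sum((t * k) ** beta) / np.sum(k ** beta)) ** (1.0 / beta)
    return kappa_new, theta_new, mu / r, R / r
```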
Fig. 3: RGN flow of synthetic and real networks.
a, Empirical connection probability (p) in a synthetic \({{\mathbb{S}}}^{1}\) network. Fraction of connected pairs of nodes as a function of \({\chi }_{ij}^{(l)}={R}^{(l)}{\rm{\Delta }}{\theta }_{ij}^{(l)}{\rm{/}}\left({\mu }^{(l)}{\kappa }_{i}^{(l)}{\kappa }_{j}^{(l)}\right)\) in the renormalized layers, from l = 0 to l = 8, and r = 2. The original synthetic network has N ≅ 225,000 nodes, γ = 2.5 and β = 1.5. The black dashed line shows the theoretical curve (equation (5)). The inset shows the invariance of the mean local clustering along the RGN flow. b, Hyperbolic embedding of the Metabolic network (top) and its renormalized layer l = 2 (bottom). The colours of the nodes correspond to the community structure detected by the Louvain algorithm. Notice how the renormalized network preserves the original community structure despite being four times smaller. c, Real networks in the connectivity phase diagram. The synthetic network above is also shown. Darker blue (green) in the shaded areas represent higher values of the exponent ν. The dashed line separates the γ-dominated region from the β-dominated region. In phase I, ν > 0 and the network flows towards a fully connected graph. In phase II, ν < 0 and the network flows towards a one-dimensional ring. The red thick line indicates ν = 0 and, hence, the transition between the small-world and non-small-world phases. In region III, the degree distribution loses its scale-freeness along the flow. The inset shows the exponential increase of the average degree of the renormalized real networks \(\left\langle {k}^{(l)}\right\rangle\) with respect to l.
As the networks remain congruent with the \({{\mathbb{S}}}^{1}\) model, hidden degrees κ(l) remain proportional to observed degrees k(l), which allows us to explore the degree distribution of the renormalized layers analytically. It can be shown that, if the original distribution of hidden degrees is a power law with characteristic exponent γ, the renormalized distribution is also a power law with the same exponent asymptotically, as long as (γ − 1)/2 < β (section II in the Supplementary Information), with the only difference being the average degree. Interestingly, the global parameter controlling the clustering coefficient, β, does not change along the flow, which explains the self-similarity of the clustering spectra. Finally, the transformation for the angles (equation (2)) preserves the ordering of nodes and the heterogeneity in their angular density and, as a consequence, the community structure is preserved in the flow15,19,27,38, as shown in Fig. 3b. The model is therefore renormalizable, and RGN realizations at any scale belong to the same ensemble with a different average degree, which should be rescaled to produce self-similar replicas.
A good approximation of the behaviour of the average degree for very large networks can be calculated by taking into account the transformation of hidden degrees along the RGN flow (equation (1) and section II in the Supplementary Information). We obtain \({\left\langle k\right\rangle }^{(l+1)}={r}^{\nu }{\left\langle k\right\rangle }^{(l)}\), with a scaling factor ν depending on the connectivity structure of the original network. If \(0 < \frac{\gamma -1}{\beta }\le 1\), the flow is dominated by the exponent of the degree distribution γ, and the scaling factor is given by:
$$\nu =\frac{2}{\gamma -1}-1$$
whereas the flow is dominated by the strength of clustering if \(1\le \frac{\gamma -1}{\beta } < 2\) and
$$\nu =\frac{2}{\beta }-1$$
Therefore, if γ < 3 or β < 2 (phase I in Fig. 3c), then ν > 0 and the model flows towards a highly connected graph; the average degree is preserved if γ = 3 and β ≥ 2 or β = 2 and γ ≥ 3, which indicates that the network is at the edge of the transition between the small-world and non-small-world phases; and ν < 0 if γ > 3 and β > 2, causing the RGN flow to produce sparser networks approaching a unidimensional ring structure as a fixed point (phase II in Fig. 3c). In this case, the renormalized layers eventually lose the small-world property. Finally, if β < (γ − 1)/2, the degree distribution becomes increasingly homogeneous as r → ∞ (phase III in Fig. 3c), revealing that degree heterogeneity is only present at short scales.
In Fig. 3c, several real networks are shown in the connectivity phase diagram. All of them lay in the region of small-world networks. Furthermore, all of them, except the Internet, Airports and Drosophila networks, belong to the β-dominated region. The inset shows the behaviour of the average degree of each layer l, \(\left\langle {k}^{(l)}\right\rangle\); as predicted, it grows exponentially in all cases.
Interestingly, global properties of the model, such as those reflected in the spectrum of eigenvalues of both the adjacency and Laplacian matrices, and quantities such as the diffusion time and the restabilization time39, show a dependence on γ and β, which is in consonance with the one displayed by the RGN flow of the average degree (Supplementary Figs. 10–12). The \({{\mathbb{S}}}^{1}\) model seems to be more sensitive to small changes in degree heterogeneity in the region \(0 < \frac{\gamma -1}{\beta }\le 1\), whereas changes in clustering are better reflected when \(1\le \frac{\gamma -1}{\beta }\le 2\).
Finally, the RGN transformation can be reformulated for the model in D dimensions. We have recalculated the connectivity phase diagram in Fig. 3c, obtaining qualitatively the same transitions and phases, including region III. Interestingly, the high clustering coefficient observed in real networks poses an upper limit on the potential dimension of the similarity space. We have tested the renormalization transformation using the one-dimensional embedding of networks generated in higher dimensions for which the clustering is realistic, that is, D ≲ 10, and found the same results as in the D = 1 case. The agreement is explained by the fact that the one-dimensional embedding provides a faithful representation for low-dimensional similarity-space networks (section II in the Supplementary Information).
Next, we propose two specific applications. The first one, the production of downscaled network replicas, singles out a specific scale while the second one, a multiscale navigation protocol, exploits multiple scales simultaneously.
The downscaling of the topology of large real-world complex networks to produce smaller high-fidelity replicas can find useful applications, for instance, as an alternative or guidance to sampling methods in large-scale simulations and, in networked communication systems like the Internet, as a reduced testbed to analyse the performance of new routing protocols40,41,42,43. Downscaled network replicas can also be used to perform finite size scaling of critical phenomena in real networks, so that critical exponents could be evaluated starting from a single size instance network. However, the success of such programmes is based on the quality of the downscaled versions, which should reproduce not only local properties of the original network but also its mesoscopic structure. We now present a method for their construction that exploits the fact that, under renormalization, a scale-free network remains self-similar and congruent with the underlying geometric model in the whole self-similarity range of the multilayer shell.
The idea is to single out a specific scale after a certain number of renormalization steps. Typically, the renormalized average degree of real networks increases in the flow (see inset in Fig. 3c), so we apply a pruning of links to reduce the density to the level of the original network (see Methods). Basically, we re-adjust the average-degree parameter μ(l) in the \({{\mathbb{S}}}^{1}\) model and then keep only the renormalized links that are consistent with the readjusted connection probabilities to obtain a statistically equivalent but reduced version.
To illustrate the high-fidelity that downscaled network replicas can achieve, we use them to reproduce the behaviour of dynamical processes in real networks. We selected three different dynamical processes, the classic ferromagnetic Ising model, the susceptible–infected–susceptible (SIS) epidemic spreading model and the Kuramoto model of synchronization (see Methods). We test these dynamics in all the self-similar network layers of the real networks analysed in this study. Results are shown in Fig. 4 and Supplementary Fig. 13. Quite remarkably, for all dynamics and all networks, we observe very similar results between the original and downscaled replicas at all scales. This is particularly interesting as these dynamics have a strong dependence on the mesoscale structure of the underlying networks.
Fig. 4: Dynamics on the downscaled replicas.
Each column shows the order parameters versus the control parameters of different dynamical processes on the original and downscaled replicas of the Internet AS network (left), the human Metabolic network (middle) and the Music network (right) with r = 2, that is, every value of l identifies a network 2l times smaller than the original one. All points show the results averaged over 100 simulations. Error bars indicate the standard deviations of the order parameters. a, Magnetization \({\left\langle \left|m\right|\right\rangle }^{(l)}\) of the Ising model as a function of the inverse temperature 1/T. b, Prevalence \({\left\langle \rho \right\rangle }^{(l)}\) of the SIS model as a function of the infection rate λ. c, Coherence \({\left\langle r\right\rangle }^{(l)}\) of the Kuramoto model as a function of the coupling strength σ. In all cases, the curves of the smaller-scale replicas are extremely similar to the results obtained on the original networks.
Next, we introduce a new multiscale navigation protocol for networks embedded in hyperbolic space, which improves single-layer results15. To this end, we exploit the quasi-isomorphism between the \({{\mathbb{S}}}^{1}\) model and the \({{\mathbb{H}}}^{2}\) model in hyperbolic space16,17 to produce a purely geometric representation of the multiscale shell (see Methods).
The multiscale navigation protocol is based on greedy routing, in which a source node transmitting a packet to a target node sends it to its neighbour closest to the destination in the metric space. As performance metrics, we consider the success rate (fraction of successful greedy paths), and the stretch of successful path (ratio between the number of hops in the greedy path and the topological shortest path). Notice that greedy routing cannot guarantee the existence of a successful path among all pairs of nodes; the packet can get trapped into a loop if sent to an already visited node. In this case, the multiscale navigation protocol can find alternative paths by taking advantage of the increased efficiency of greedy forwarding in the coarse-grained layers. When node i needs to send a packet to some destination node j, node i performs a virtual greedy forwarding step in the highest possible layer to find which supernode should be next in the greedy path. Based on this, node i then forwards the packet to its physical neighbour in the real network, which guarantees that it will eventually reach such supernode. The process is depicted in Fig. 5a (full details in Methods). To guarantee navigation inside supernodes, we require an extra condition in the renormalization process and only consider blocks of connected consecutive nodes (a single node can be left alone forming a supernode by itself). Notice that the new requirement does not alter the self-similarity of the renormalized networks forming the multiscale shell nor the congruency with the hidden metric space (section IV in the Supplementary Information).
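The single-layer greedy forwarding on which the protocol builds, together with the two performance metrics, can be sketched as follows; `coords` maps nodes to their embedding coordinates, `dist` is any metric on them (for instance the hyperbolic distance), the sampled pairs are assumed to be distinct source and target nodes, and all names are illustrative.

```python
import numpy as np
import networkx as nx

def greedy_route(G, coords, dist, source, target):
    """Greedy forwarding: at each step move to the neighbour closest to the target
    in the metric space; the attempt fails if the packet revisits a node."""
    path, current, visited = [source], source, {source}
    while current != target:
        nxt = min(G[current], key=lambda v: dist(coords[v], coords[target]))
        if nxt in visited:
            return None                      # trapped in a loop: unsuccessful path
        path.append(nxt)
        visited.add(nxt)
        current = nxt
    return path

def success_and_stretch(G, coords, dist, pairs):
    """Success rate and average stretch over a sample of distinct (source, target) pairs."""
    successes, stretches = 0, []
    for s, t in pairs:
        path = greedy_route(G, coords, dist, s, t)
        if path is not None:
            successes += 1
            stretches.append((len(path) - 1) / nx.shortest_path_length(G, s, t))
    return successes / len(pairs), float(np.mean(stretches))
```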
Fig. 5: Performance of the multiscale navigation protocol in real networks.
a, Illustration of the multiscale navigation protocol. Red arrows show the unsuccessful greedy path in the original layer of a message attempting to reach the target yellow node. Green arrows show the successful greedy path from the same source using both layers. b, Success rate as a function of the number of layers used in the process, computed for 105 randomly selected pairs. c, Average stretch \(\left\langle {l}_{{\rm{g}}}{\rm{/}}{l}_{{\rm{s}}}\right\rangle\), where lg is the topological length of a path found by the algorithm and ls is the actual shortest path length in the network. d, Relative average load \({\left\langle {\lambda }_{L}^{i}\right\rangle }_{{\rm{Hubs}}}{\rm{/}}{\left\langle {\lambda }_{0}^{i}\right\rangle }_{{\rm{Hubs}}}\) of hubs, where \({\lambda }_{L}^{i}\) is the fraction of successful greedy paths that pass through node i in the multiscale navigation protocol with L renormalized layers. The averages are computed over the 20 highest-degree nodes in the network.
Figure 5b shows the increase of the success rate as the number of renormalized layers L used in the navigation process is increased for the different real networks considered in this study. Interestingly, the success rate always increases, even in systems with very high navigability in l = 0 like the Internet, and this improvement increases the stretch of successful paths only mildly (Fig. 5c). Counterintuitively, the slight increase of the stretch reduces the burden on highly connected nodes (Fig. 5d). As the number of renormalized layers L increases, the average fraction of successful paths passing through the most connected hubs in the network decreases. The improvements come at the expense of adding information about the supernodes to the knowledge needed for standard greedy routing in single-layered networks. However, the trade-off between improvement and information overload is advantageous, as for many systems the addition of just one or two renormalized layers already produces a notable effect.
It is a very well established empirical fact that most real complex networks share a very special set of universal features. Among the most relevant ones, networks have heterogeneous degree distributions, strong clustering and are small world. Our hidden metric space network model14,17,18, independent of its \({{\mathbb{S}}}^{1}\) or \({{\mathbb{H}}}^{2}\) formulation, provides a very natural explanation of these properties with a very limited number of parameters and using an effective hyperbolic geometry of two dimensions. Even if the model and the renormalization group can be formulated in D dimensions, the high clustering coefficient observed in real networks poses an upper limit on the potential dimension of the similarity space so that networks can be faithfully embedded in the one-dimensional representation. This is in line with the accumulated empirical evidence, which unambiguously supports the one-dimensional similarity plus degrees as an extremely good proxy for the geometry of real networks.
Interestingly, the existence of a metric space underlying complex networks allows us to define a geometric renormalization group that reveals their multiscale nature. Our geometric models of scale-free networks are shown to be self-similar under the RGN transformation. Even more important is the finding that self-similarity under geometric renormalization is a ubiquitous symmetry of real-world scale-free networks, which provides new evidence supporting the hypothesis that hidden metric spaces underlie real networks.
The renormalization group presented in this work is similar in spirit to the topological renormalization studied in refs 4,5,6,7,8,9 and should be taken as complementary. Instead of using shortest paths as a source of length scales to explore the fractality of networks, we use a continuum geometric framework that allows us to unveil the role of degree heterogeneity and clustering in the self-similarity properties of networks. In our model, a crucial point is the explicit contribution of degrees to the probability of connection, which allows us to produce both short-range and long-range connections using a single mechanism captured in a universal connectivity law. The combination of similarity with degrees is a necessary condition to make the model predictive of the multiscale properties of real networks.
From a fundamental point of view, the geometric renormalization group introduced here has proven to be an exceptional tool to unravel the global organization of complex networks across scales and promises to become a standard methodology to analyse real complex networks. It can also help in areas such as the study of metapopulation models, in which transportation fluxes or population movements occur on both local and global scales44. From a practical point of view, we envision many applications. In large-scale simulations, downscaled network replicas could serve as an alternative or guidance to sampling methods, or for fast-track exploration of rough parameter spaces in the search of relevant regions. Downscaled versions of real networks could also be applied to perform finite size scaling, which would allow for the determination of critical exponents from single snapshots of their topology. Other possibilities include the development of a new multilevel community detection method45,46,47 that would exploit the mesoscopic information encoded in the different observation scales.
Renormalization flow of the community structure
To assess how the community structure of the network changes with the RGN flow, we obtained a partition into communities of every layer l, P(l), using the Louvain method48; Fig. 2c shows their modularities Q(l). We also defined the partition induced by P(l) on the original network, P(l,0), obtained by considering that two nodes i and j of the original network are in the same community in P(l,0) if and only if the supernodes of i and j in layer l belong to the same community in P(l). Both the modularity Q(l,0) of P(l,0) and the normalized mutual information nMI(l,0) between both partitions P(0) and P(l,0) are shown in Fig. 2c.
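A possible implementation of these measurements is sketched below; it assumes a recent networkx release (which ships a Louvain implementation) and scikit-learn for the normalized mutual information, and `supernode_of` is a hypothetical mapping from every original node to its supernode in layer l.

```python
import networkx as nx
from sklearn.metrics import normalized_mutual_info_score

def community_flow_metrics(G0, G_l, supernode_of):
    """Compute Q(l), Q(l,0) and nMI(l,0) for a renormalized layer G_l of G0."""
    part_l = nx.community.louvain_communities(G_l, seed=0)
    part_0 = nx.community.louvain_communities(G0, seed=0)
    Q_l = nx.community.modularity(G_l, part_l)
    # partition induced on the original network: two nodes are together iff
    # their supernodes share a community in layer l
    comm_of_super = {s: c for c, comm in enumerate(part_l) for s in comm}
    induced = {}
    for node in G0.nodes():
        induced.setdefault(comm_of_super[supernode_of[node]], set()).add(node)
    Q_l0 = nx.community.modularity(G0, list(induced.values()))
    labels_induced = [comm_of_super[supernode_of[n]] for n in G0.nodes()]
    comm_of_node = {n: c for c, comm in enumerate(part_0) for n in comm}
    labels_direct = [comm_of_node[n] for n in G0.nodes()]
    nmi = normalized_mutual_info_score(labels_direct, labels_induced)
    return Q_l, Q_l0, nmi
```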
Connection probability in the \({{\mathbb{S}}}^{1}{\rm{/}}{{\mathbb{H}}}^{2}\) geometric model
The \({{\mathbb{S}}}^{1}\) model14 places the nodes of a network into a one-dimensional sphere of radius R and connects every pair i, j with probability
$${p}_{ij}=\frac{1}{1+{\chi }_{ij}^{\beta }}=\frac{1}{1+{\left(\frac{{d}_{ij}}{\mu {\kappa }_{i}{\kappa }_{j}}\right)}^{\beta }}$$
where μ controls the average degree of the network, β its clustering, and dij = RΔθij is the distance between the nodes separated by an angle Δθij; R is, without loss of generality, always set to N/2π, where N is the number of nodes, so that the density of nodes along the circle is equal to 1. The hidden degrees κi and κj are proportional to the degrees of nodes i and j, respectively.
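For concreteness, a naive generator of synthetic \({{\mathbb{S}}}^{1}\) networks according to equation (5) might look as follows; the hidden degrees and the value of μ are taken as inputs rather than derived, and the O(N²) loop is only meant for small illustrative networks.

```python
import numpy as np
import networkx as nx

def generate_s1_network(kappa, beta, mu, rng=None):
    """Sample a network from the S1 model given hidden degrees kappa, clustering
    parameter beta and average-degree parameter mu (connection probability of eq. (5))."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(kappa)
    R = N / (2 * np.pi)                              # unit density of nodes on the circle
    theta = rng.uniform(0, 2 * np.pi, size=N)        # similarity (angular) coordinates
    G = nx.Graph()
    G.add_nodes_from(range(N))
    for i in range(N):
        for j in range(i + 1, N):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))   # angular separation
            chi = R * dtheta / (mu * kappa[i] * kappa[j])
            if rng.random() < 1.0 / (1.0 + chi ** beta):
                G.add_edge(i, j)
    return theta, G
```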
The \({{\mathbb{S}}}^{1}\) model is isomorphic to a purely geometric model, the \({{\mathbb{H}}}^{2}\) model17, in which nodes are placed in a two-dimensional hyperbolic disk of radius:
$${R}_{{{\mathbb{H}}}^{2}}=2{\rm{ln}}\left(\frac{2R}{\mu {\kappa }_{0}^{2}}\right)$$
where κ0 = min{κi}. By mapping every mass κi to a radial coordinate ri according to:
$${r}_{i}={R}_{{{\mathbb{H}}}^{2}}-2{\rm{ln}}\frac{{\kappa }_{i}}{{\kappa }_{0}}$$
the connection probability, equation (5), becomes
$${p}_{ij}=\frac{1}{1+{e}^{\frac{\beta }{2}\left({x}_{ij}-{R}_{{{\mathbb{H}}}^{2}}\right)}}$$
where \({x}_{ij}={r}_{i}+{r}_{j}+2{\rm{ln}}\frac{{\rm{\Delta }}{\theta }_{ij}}{2}\) is a good approximation to the hyperbolic distance between two points with coordinates (ri, θi) and (rj, θj) in the native representation of hyperbolic space. The exact hyperbolic distance \({d}_{{{\mathbb{H}}}^{2}}\) is given by the hyperbolic law of cosines:
$${d}_{{{\mathbb{H}}}^{2}}={\rm{acosh}}\left({\rm{\cosh }}{r}_{i}{\rm{\cosh }}{r}_{j}-{\rm{\sinh }}{r}_{i}{\rm{\sinh }}{r}_{j}{\rm{\cos }}{\rm{\Delta }}{\theta }_{ij}\right)$$
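Equations (6)–(9) amount to the following change of coordinates and distance computation; this is a direct transcription, assuming `kappa` and `theta` are NumPy arrays.

```python
import numpy as np

def s1_to_h2(kappa, theta, mu, R):
    """Map S1 coordinates (kappa_i, theta_i) to hyperbolic-disk coordinates (r_i, theta_i)."""
    kappa0 = np.min(kappa)
    R_h2 = 2.0 * np.log(2.0 * R / (mu * kappa0 ** 2))        # radius of the hyperbolic disk
    r = R_h2 - 2.0 * np.log(kappa / kappa0)                  # larger kappa -> closer to the centre
    return r, theta, R_h2

def hyperbolic_distance(r1, t1, r2, t2):
    """Exact hyperbolic distance between (r1, t1) and (r2, t2) (hyperbolic law of cosines)."""
    dtheta = np.pi - abs(np.pi - abs(t1 - t2))
    arg = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * np.cos(dtheta)
    return np.arccosh(np.maximum(arg, 1.0))                  # clipped for numerical safety
```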
Adjusting the average degree of downscaled network replicas
To reduce the average degree in a renormalized network to the level of the original network, we apply a pruning of links using the underlying metric model with which the networks in all layers are congruent. The procedure is detailed in this section.
The renormalized network in layer l has an average degree \(\left\langle {k}^{(l)}\right\rangle\) generally larger (in phase I) than that of the original network, \(\left\langle {k}^{(0)}\right\rangle\). Moreover, the new network is congruent with the underlying hidden metric space with a parameter μ(l) = μ(0)/rl controlling its average degree. The main idea is to decrease the value of μ(l) to a new value \({\mu }_{{\rm{new}}}^{(l)}\), which implies that the connection probability of every pair of nodes (i,j), \({p}_{ij}^{(l)}\), decreases to \({p}_{ij,{\rm{new}}}^{(l)}\). We then prune the existing links by keeping them with probability
$${q}_{ij}^{(l)}=\frac{{p}_{ij,{\rm{new}}}^{(l)}}{{p}_{ij}^{(l)}}$$
Therefore, the probability for a link to exist in the pruned network reads:
$$P\left\{{a}_{ij,{\rm{new}}}^{(l)}=1\right\}={p}_{ij}^{(l)}{q}_{ij}^{(l)}={p}_{ij,{\rm{new}}}^{(l)}$$
whereas the probability for it not to exist is:
$$P\left\{{a}_{ij,{\rm{new}}}^{(l)}=0\right\}=1-{p}_{ij}^{(l)}+{p}_{ij}^{(l)}\left(1-{q}_{ij}^{(l)}\right)=1-{p}_{ij,{\rm{new}}}^{(l)}$$
that is, the pruned network has a lower average degree and is also congruent with the underlying metric space model with the new value of \({\mu }_{{\rm{new}}}^{(l)}\). Hence, we only need to find the right value of \({\mu }_{{\rm{new}}}^{(l)}\) so that \(\left\langle {k}_{new}^{(l)}\right\rangle =\left\langle {k}^{(0)}\right\rangle\). In the thermodynamic limit, the average degree of an \({{\mathbb{S}}}^{1}\) network is proportional to μ, so we could simply set
$${\mu }_{{\rm{new}}}^{(l)}=\frac{\left\langle {k}^{(0)}\right\rangle }{\left\langle {k}^{(l)}\right\rangle }{\mu }^{(l)}$$
However, as we consider real-world networks, finite-size effects play an important role. Indeed, we need to correct the value of \({\mu }_{{\rm{new}}}^{(l)}\) in equation (13). To this end, we use a correcting factor c, initially set to c = 1, and use \({\mu }_{{\rm{new}}}^{(l)}=c\frac{\left\langle {k}^{(0)}\right\rangle }{\left\langle {k}^{(l)}\right\rangle }{\mu }^{(l)}\); for every value of c, we prune the network. If \(\left\langle {k}_{{\rm{new}}}^{(l)}\right\rangle > \left\langle {k}^{(0)}\right\rangle\), we give c the new value c − 0.1u → c, where u is a random variable uniformly distributed between 0 and 1. Similarly, if \(\left\langle {k}_{{\rm{new}}}^{(l)}\right\rangle < \left\langle {k}^{(0)}\right\rangle\), c + 0.1u → c. The process ends when \(\left|\left\langle {k}_{{\rm{new}}}^{(l)}\right\rangle -\left\langle {k}^{(0)}\right\rangle \right|\) is below a given threshold (in our case, we set it to 0.1).
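The pruning loop described above can be sketched as follows; the routine works on a renormalized layer embedded in the \({{\mathbb{S}}}^{1}\) model (arrays `kappa`, `theta` and parameters `mu`, `beta`, `R` of that layer) and is only an illustration of the procedure, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def prune_to_target_degree(G, kappa, theta, mu, beta, R, k_target, tol=0.1, rng=None):
    """Prune links so the average degree matches k_target, keeping each link (i, j)
    with probability p_new/p_old as in equations (10)-(13)."""
    rng = np.random.default_rng() if rng is None else rng
    k_layer = 2.0 * G.number_of_edges() / G.number_of_nodes()
    c = 1.0
    while True:
        mu_new = c * (k_target / k_layer) * mu
        kept = []
        for i, j in G.edges():
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            d = R * dtheta
            p_old = 1.0 / (1.0 + (d / (mu * kappa[i] * kappa[j])) ** beta)
            p_new = 1.0 / (1.0 + (d / (mu_new * kappa[i] * kappa[j])) ** beta)
            if rng.random() < p_new / p_old:
                kept.append((i, j))
        k_new = 2.0 * len(kept) / G.number_of_nodes()
        if abs(k_new - k_target) < tol:
            break
        # adjust the correcting factor c as described in the text
        c += -0.1 * rng.random() if k_new > k_target else 0.1 * rng.random()
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(kept)
    return H, mu_new
```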
Simulation of dynamical processes
The Ising model is an equilibrium model of interacting spins49. Every node i is assigned a variable si with two possible values si = ±1, and the energy of the system is, in the absence of external field, given by the Hamiltonian
$${\mathscr{H}}=-\sum _{i < j}{J}_{ij}{a}_{ij}{s}_{i}{s}_{j}$$
where aij are the elements of the adjacency matrix and Jij are coupling constants, which we set to one. We start from an initial condition with si = 1 for all i and explore the ensemble of configurations using the Metropolis-Hastings algorithm: we randomly select one node and propose a change in its spin, −si → si. If \({\rm{\Delta }}{\mathscr{H}}\le 0\), we accept the change; otherwise, we accept it with probability \({e}^{-{\rm{\Delta }}{\mathscr{H}}{\rm{/}}T}\), where T is the temperature acting as a control parameter. The order parameter is the absolute magnetization per spin \(\left|m\right|\), where \(m=\frac{1}{N}{\sum }_{i}{s}_{i}\); if all spins point in the same direction, \(\left|m\right|=1\), whereas \(\left|m\right|=0\) if half the spins point in each direction.
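A bare-bones Metropolis sampler of this model on a network (adjacency-list representation) could read as follows; the sweep count and the use of the second half of the run for averaging are choices of this sketch.

```python
import numpy as np

def ising_metropolis(adj_list, T, sweeps=2000, rng=None):
    """Metropolis simulation of the zero-field Ising model with J_ij = 1 on a network;
    returns the average |m| over the second half of the run."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(adj_list)
    s = np.ones(N, dtype=int)                          # all spins up initially
    mags = []
    for sweep in range(sweeps):
        for _ in range(N):
            i = rng.integers(N)
            dE = 2 * s[i] * sum(s[j] for j in adj_list[i])   # energy cost of flipping s_i
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
        if sweep >= sweeps // 2:
            mags.append(abs(s.mean()))
    return float(np.mean(mags))
```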
In the SIS dynamical model of epidemic spreading50, every node i can present two states at a given time t, susceptible (ni(t) = 0) or infected (ni(t) = 1). Both infection and recovery are Poisson processes. An infected node recovers with rate 1, whereas infected nodes infect their susceptible neighbours at rate λ. We simulate this process using the continuous-time Gillespie algorithm with all nodes initially infected. The order parameter is the prevalence or fraction of infected nodes \(\rho (t)=\frac{1}{N}{\sum }_{i}{n}_{i}(t)\).
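A straightforward (not optimized) Gillespie simulation of this dynamics, returning the time-weighted prevalence used as order parameter, might look like this; the cut-off time t_max is an assumption of the sketch.

```python
import numpy as np

def sis_gillespie(adj_list, lam, t_max=100.0, rng=None):
    """Continuous-time SIS dynamics: infected nodes recover at rate 1 and infect each
    susceptible neighbour at rate lam; all nodes start infected."""
    rng = np.random.default_rng() if rng is None else rng
    infected = np.ones(len(adj_list), dtype=bool)
    t, rho_sum = 0.0, 0.0
    while t < t_max and infected.any():
        inf_nodes = np.flatnonzero(infected)
        si_links = [(i, j) for i in inf_nodes for j in adj_list[i] if not infected[j]]
        rec_rate, inf_rate = len(inf_nodes), lam * len(si_links)
        total = rec_rate + inf_rate
        dt = rng.exponential(1.0 / total)
        rho_sum += infected.mean() * min(dt, t_max - t)    # time-weighted prevalence
        t += dt
        if rng.random() < rec_rate / total:
            infected[rng.choice(inf_nodes)] = False        # recovery event
        else:
            infected[si_links[rng.integers(len(si_links))][1]] = True   # infection event
    return rho_sum / t_max
```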
The Kuramoto model is a dynamical model for coupled oscillators. Every node i is described by a natural frequency ωi and a time-dependent phase θi(t). A node's phase evolves according to:
$${\dot{\theta }}_{i}={\omega }_{i}+\sigma \sum _{j}{a}_{ij}{\rm{\sin }}({\theta }_{j}(t)-{\theta }_{i}(t))$$
where aij are the adjacency matrix elements and σ is the coupling strength. We integrate the equations of motion using Heun's method. Initially, the phases θi(0) and the frequencies ωi are randomly drawn from the uniform distributions U(−π, π) and U(−1/2, 1/2) respectively, as in ref. 51. The order parameter \(r(t)=\frac{1}{N}\left|{\sum }_{i}{e}^{i{\theta }_{i}(t)}\right|\) measures the phase coherence of the set of nodes; if all nodes oscillate in phase, r(t) = 1, whereas r(t) → 0 if nodes oscillate in a disordered manner.
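The corresponding Heun integration and the coherence r(t) can be sketched as follows; again an illustration, with the time step and integration horizon chosen arbitrarily.

```python
import numpy as np

def kuramoto_heun(adj_list, sigma, t_max=50.0, dt=0.01, rng=None):
    """Integrate the Kuramoto model on a network with Heun's method; returns r(t)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(adj_list)
    omega = rng.uniform(-0.5, 0.5, size=N)               # natural frequencies
    theta = rng.uniform(-np.pi, np.pi, size=N)           # initial phases

    def rhs(th):
        return np.array([omega[i] + sigma * sum(np.sin(th[j] - th[i]) for j in adj_list[i])
                         for i in range(N)])

    r_series = []
    for _ in range(int(t_max / dt)):
        k1 = rhs(theta)
        k2 = rhs(theta + dt * k1)
        theta = theta + 0.5 * dt * (k1 + k2)             # Heun (explicit trapezoidal) step
        r_series.append(abs(np.exp(1j * theta).mean()))  # phase coherence r(t)
    return np.array(r_series)
```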
In every realization, we compute an average of the order parameter in the stationary state. In the case of the SIS model, the single-realization mean of prevalence values is weighted by time. The curves presented in this work correspond to statistics over 100 realizations.
Multiscale navigation
Given a network and its embedding (layer 0), we merge pairs of consecutive nodes only if they are connected, which guarantees navigation inside supernodes; this process generates layer 1. We repeat the process to generate L renormalized layers. The multiscale navigation protocol requires every node i to be provided with the following local information:
The coordinates \(\left({r}_{i}^{(l)},{\theta }_{i}^{(l)}\right)\) of node i in every layer l.
The list of (super)neighbours of node i in every layer as well as their coordinates.
Let SuperN(i, l) be the supernode to which i belongs in layer l. If SuperN(i, l) is connected to SuperN(k, l) in layer l, at least one of the (super)nodes in layer l − 1 belonging to SuperN(i, l) must be connected to at least one of the (super)nodes in layer l − 1 belonging to SuperN(k, l); such a node is called a 'gateway'. For every superneighbour of node SuperN(i, l) in layer l, node i knows which (super)node or (super)nodes in layer l − 1 are gateways reaching it. Notice that both the gateways and SuperN(i,l − 1) belong to SuperN(i, l) in layer l so, in layer l − 1, they must either be the same (super)node or different but connected (super)nodes.
If SuperN(i, l − 1) is a gateway reaching some supernode s, at least one of its (super)neighbours in layer l − 1 belongs to s; node i knows which.
This information allows us to navigate the network as follows. Let j be the destination node to which i wants to forward a message, and let node i know j's coordinates in all L layers \(\left({r}_{j}^{(l)},{\theta }_{j}^{(l)}\right)\). To decide which of its physical neighbours (that is, in layer 0) should be next in the message-forwarding process, node i must first check if it is connected to j; in that case, the decision is clear. If it is not, it must:
Find the highest layer lmax in which SuperN(i, lmax) and SuperN(j, lmax) still have different coordinates. Set l = lmax.
Perform a standard step of greedy routing in layer l: find the closest neighbour of SuperN(i, l) to SuperN(j, l). This is the current target SuperT(l).
While l > 0, look into layer l − 1:
Set l = l − 1.
If SuperN(i, l) is a gateway connecting to some (super)node within SuperT(l + 1), node i sets as new current target SuperT(l) its (super)neighbour belonging to SuperT(l + 1) closest to SuperN(j, l).
Else node i sets as new target SuperT(l) the gateway in SuperN(i, l + 1) connecting to SuperT(l + 1) (its (super)neighbour belonging to SuperN(i, l + 1)).
In layer l = 0, SuperT(0) belongs to the real network and it is a neighbour of i, so node i forwards the message to SuperT(0).
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Mandelbrot, B. The Fractal Geometry of Nature (W. H. Freeman and Company, San Francisco, CA, 1982).
Stanley, H. E. Introduction to Phase Transitions and Critical Phenomena (Oxford Univ. Press, Oxford, 1971).
Gfeller, D. & De Los Rios, P. Spectral coarse graining of complex networks. Phys. Rev. Lett. 99, 038701 (2007).
Song, C., Havlin, S. & Makse, H. A. Self-similarity of complex networks. Nature 433, 392–395 (2005).
Goh, K. I., Salvi, G., Kahng, B. & Kim, D. Skeleton and fractal scaling in complex networks. Phys. Rev. Lett. 96, 018701 (2006).
Song, C., Havlin, S. & Makse, H. A. Origins of fractality in the growth of complex networks. Nat. Phys. 2, 275–281 (2006).
Kim, J. S., Goh, K. I., Hahng, B. & Kim, D. Fractality and self-similarity in scale-free networks. New. J. Phys. 9, 177 (2007).
Radicchi, F., Ramasco, J. J., Barrat, A. & Fortunato, S. Complex networks renormalization: flows and fixed points. Phys. Rev. Lett. 101, 148701 (2008).
Rozenfeld, H. D., Song, C. & Makse, H. A. Small-world to fractal transition in complex networks: a renormalization group approach. Phys. Rev. Lett. 104, 025701 (2010).
Watts, D. J. & Strogatz, S. H. Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998).
Cohen, R. & Havlin, S. Scale-free networks are ultrasmall. Phys. Rev. Lett. 90, 058701 (2003).
Newman, M. & Watts, D. Renormalization group analysis of the small-world network model. Phys. Lett. A 263, 341–346 (1999).
Boettcher, S. & Brunson, C. Renormalization group for critical phenomena in complex networks. Front. Physiol. 2, 102 (2011).
Serrano, M. Á., Krioukov, D. & Boguñá, M. Self-similarity of complex networks and hidden metric spaces. Phys. Rev. Lett. 100, 078701 (2008).
Boguñá, M., Papadopoulos, F. & Krioukov, D. Sustaining the Internet with hyperbolic mapping. Nat. Commun. 1, 62 (2010).
Krioukov, D., Papadopoulos, F., Vahdat, A. & Boguñá, M. Curvature and temperature of complex networks. Phys. Rev. E 80, 035101 (2009).
Krioukov, D., Papadopoulos, F., Kitsak, M., Vahdat, A. & Boguñá, M. Hyperbolic geometry of complex networks. Phys. Rev. E 82, 036106 (2010).
Papadopoulos, F., Kitsak, M., Serrano, M. A., Boguna, M. & Krioukov, D. Popularity versus similarity in growing networks. Nature 489, 537–540 (2012).
Zuev, K., Boguñá, M., Bianconi, G. & Krioukov, D. Emergence of soft communities from geometric preferential attachment. Sci. Rep. 5, 9421 (2015).
Kadanoff, L. P. Statistical Physics: Statics, Dynamics and Renormalization (World Scientific, Singapore, 2000).
Wilson, K. G. The renormalization group: critical phenomena and the Kondo problem. Rev. Mod. Phys. 47, 773–840 (1975).
Wilson, K. G. The renormalization group and critical phenomena. Rev. Mod. Phys. 55, 583–600 (1983).
Boguñá, M. & Pastor-Satorras, R. Class of correlated random networks with hidden variables. Phys. Rev. E 68, 036112 (2003).
Claffy, K., Hyun, Y., Keys, K., Fomenkov, M. & Krioukov, D. Internet mapping: from art to science. In 2009 Cybersecurity Applications Technology Conf. for Homeland Security 205–211 (IEEE, New York, NY, 2009).
Openflights Network Dataset (The Koblenz Network Collection, 2016); http://konect.uni-koblenz.de/networks/openflights
Kunegis, J. KONECT—The Koblenz Network Collection. In Proc. Int. Conf. on World Wide Web Companion (eds Scwhabe, D., Almeida, V. & Glaser, H.) 1343–1350 (ACM, New York, NY, 2013).
Serrano, M. A., Boguna, M. & Sagues, F. Uncovering the hidden geometry behind metabolic networks. Mol. BioSyst. 8, 843–850 (2012).
Rolland, T. et al. A proteome-scale map of the human interactome network. Cell 159, 1212–1226 (2014).
Takemura, S.-y et al. A visual motion detection circuit suggested by drosophila connectomics. Nature 500, 175–181 (2013).
Klimt, B. & Yang, Y. The Enron Corpus: A new dataset for email classification research. In Machine Learning: ECML 2004 (eds Boulicaut, J. F. et al.) 217–226 (Springer, Berlin, Heidelberg, 2004).
Leskovec, J., Lang, K. J., Dasgupta, A. & Mahoney, M. W. Community structure in large networks: natural cluster sizes and the absence of large well-defined clusters. Internet Math. 6, 29–123 (2009).
Serrà, J., Corral, A., Boguñá, M., Haro, M. & Arcos, J. L. Measuring the evolution of contemporary western popular music. Sci. Rep. 2, 521 (2012).
Milo, R. et al. Superfamilies of evolved and designed networks. Science 303, 1538–1542 (2004).
Papadopoulos, F., Aldecoa, R. & Krioukov, D. Network geometry inference using common neighbors. Phys. Rev. E 92, 022807 (2015).
Arenas, A., Fernández, A. & Gómez, S. Analysis of the structure of complex networks at different resolution levels. New. J. Phys. 10, 053039 (2008).
Ronhovde, P. & Nussinov, Z. Multiresolution community detection for megascale networks by information-based replica correlations. Phys. Rev. E 80, 016109 (2009).
Ahn, Y.-Y., Bagrow, J. P. & Lehmann, S. Link communities reveal multiscale complexity in networks. Nature 466, 761–764 (2010).
García-Pérez, G., Boguñá, M., Allard, A. & Serrano, M. A. The hidden hyperbolic geometry of international trade: World Trade Atlas 1870–2013. Sci. Rep. 6, 33441 (2016).
Mieghem, P. V. Graph Spectra for Complex Networks (Cambridge Univ. Press, New York, NY, 2011).
Papadopoulos, F., Psounis, K. & Govindan, R. Performance preserving topological downscaling of Internet-like networks. IEEE J. Sel. Area Commun. 24, 2313–2326 (2006).
Papadopoulos, F. & Psounis, K. Efficient identification of uncongested internet links for topology downscaling. SIGCOMM Comput. Commun. Rev. 37, 39–52 (2007).
Yao, W. M. & Fahmy, S. Downscaling network scenarios with denial of service (dos) attacks. In 2008 IEEE Sarnoff Symp. 1–6 (IEEE, New York, NY, 2008).
Yao, W. M. & Fahmy, S. Partitioning network testbed experiments. In 2011 31st Int. Conf. on Distributed Computing Systems 299–309 (IEEE, New York, NY, 2011).
Colizza, V., Pastor-Satorras, R. & Vespignani, A. Reaction-diffusion processes and metapopulation models in heterogeneous networks. Nat. Phys. 3, 276–282 (2007).
Karypis, G. & Kumar, V. A fast and high quality multilevel scheme for partitioning irregular graphs. In Int. Conf. on Parallel Processing (eds Banerjee, P., Polychronopoulos, C. D. & Gallivan, K. A.) 113–122 (CRC, 1995).
Karypis, G. & Kumar, V. A fast and highly quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20, 359–392 (1999).
Abou-Rjeili, A. & Karypis, G. Multilevel algorithms for partitioning power-law graphs. In IEEE Int. Parallel and Distributed Processing Symp. (IPDPS) 124–124 (IEEE, New York, NY, 2006).
Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Étienne, L. Fast unfolding of communities in large networks. J. Stat. Mech. 2008, P10008 (2008).
Dorogovtsev, S. N., Goltsev, A. V. & Mendes, J. F. F. Ising model on networks with an arbitrary distribution of connections. Phys. Rev. E 66, 016104 (2002).
Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86, 3200–3203 (2001).
Moreno, Y. & Pacheco, A. F. Synchronization of kuramoto oscillators in scale-free networks. Europhys. Lett. 68, 603 (2004).
We acknowledge support from a James S. McDonnell Foundation Scholar Award in Complex Systems, the ICREA Academia prize, funded by the Generalitat de Catalunya, and Ministerio de Economía y Competitividad of Spain projects no. FIS2013-47282-C2-1-P and no. FIS2016-76830-C2-2-P (AEI/FEDER, UE).
Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain
Guillermo García-Pérez, Marián Boguñá & M. Ángeles Serrano
Universitat de Barcelona Institute of Complex Systems (UBICS), Universitat de Barcelona, Barcelona, Spain
ICREA, Barcelona, Spain
G.G.-P., M.B. and M.Á.S. contributed to the design and implementation of the research, the analysis of the results and the writing of the manuscript.
Correspondence to M. Ángeles Serrano.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary notes, supplementary figures 1–16, supplementary references
García-Pérez, G., Boguñá, M. & Serrano, M.Á. Multiscale unfolding of real networks by geometric renormalization. Nature Phys 14, 583–589 (2018). https://doi.org/10.1038/s41567-018-0072-5
Nature Physics (Nat. Phys.) ISSN 1745-2481 (online) ISSN 1745-2473 (print) | CommonCrawl |
June 2021, 26(6): 3023-3041. doi: 10.3934/dcdsb.2020218
The Keller-Segel system with logistic growth and signal-dependent motility
Hai-Yang Jin¹ and Zhi-An Wang²*
¹Department of Mathematics, South China University of Technology, Guangzhou, 510640, China
²Department of Applied Mathematics, Hong Kong Polytechnic University, Hung Hom, Hong Kong
*Corresponding author: Zhi-An Wang
Received: February 2020. Revised: May 2020. Published: June 2021. Early access: July 2020.
Fund Project: The research of H.Y. Jin was supported by the NSF of China (No. 11871226), Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515010140), Guangzhou Science and Technology Program No.202002030363 and the Fundamental Research Funds for the Central Universities. The research of Z.A. Wang was supported by the Hong Kong RGC GRF grant 15303019 (Project ID P0030816)
The paper is concerned with the following chemotaxis system with nonlinear motility functions
$$\begin{cases} u_t = \nabla \cdot (\gamma(v)\nabla u- u\chi(v)\nabla v)+\mu u(1-u), & x\in \Omega, \ t>0, \\ 0 = \Delta v+ u-v, & x\in \Omega, \ t>0, \\ u(x, 0) = u_0(x), & x\in \Omega, \end{cases} \qquad (\ast)$$
subject to homogeneous Neumann boundary conditions in a bounded domain $\Omega\subset \mathbb{R}^2$ with smooth boundary, where the motility functions $\gamma(v)$ and $\chi(v)$ satisfy the following conditions: $(\gamma, \chi)\in [C^2[0, \infty)]^2$, $\gamma(v)>0$, and $\frac{|\chi(v)|^2}{\gamma(v)}$ is bounded for all $v\geq 0$. By employing the method of energy estimates, we establish the existence of globally bounded solutions of $(\ast)$ with $\mu>0$ for any $u_0 \in W^{1, \infty}(\Omega)$ with $u_0 \geq (\not\equiv)\, 0$. Then, based on a Lyapunov function, we show that all solutions $(u, v)$ of $(\ast)$ exponentially converge to the unique constant steady state $(1, 1)$ provided $\mu>\frac{K_0}{16}$, where $K_0 = \max\limits_{0\leq v \leq \infty}\frac{|\chi(v)|^2}{\gamma(v)}$.
Keywords: Chemotaxis, density-dependent motility, global boundedness, exponential decay.
Mathematics Subject Classification: Primary: 35A01, 35B40, 35K57, 35Q92; Secondary: 92C17.
# The Lorenz equations and their relevance to chaos theory
The Lorenz equations are a set of three ordinary differential equations (ODEs) describing a highly simplified model of convection in the Earth's atmosphere. Edward Lorenz first proposed them in 1963 while studying the limits of weather prediction. The equations are:
$$
\begin{aligned}
\frac{dx}{dt} &= \sigma(y-x) \\
\frac{dy}{dt} &= x(\rho - z) - y \\
\frac{dz}{dt} &= xy - \beta z
\end{aligned}
$$
where $\sigma$, $\rho$, and $\beta$ are constants. The Lorenz equations exhibit chaotic behavior, meaning that small changes in the initial conditions can lead to drastically different outcomes. This makes them an ideal starting point for studying chaos theory and its applications in machine learning.
Chaos theory is the study of systems that are sensitive to initial conditions and can exhibit unpredictable and seemingly random behavior. It has found applications in various fields, including physics, engineering, and economics. In this textbook, we will explore how chaos theory concepts can be combined with machine learning techniques to create powerful predictive models.
Consider the following Python code that simulates the Lorenz equations using the `scipy.integrate.odeint` function:
```python
import numpy as np
from scipy.integrate import odeint

# System parameters (the classic chaotic choice)
sigma = 10
rho = 28
beta = 8 / 3

def lorenz_equations(state, t):
    # Unpack the current state and return the three time derivatives
    x, y, z = state
    dx_dt = sigma * (y - x)
    dy_dt = x * (rho - z) - y
    dz_dt = x * y - beta * z
    return [dx_dt, dy_dt, dz_dt]

# Integrate from t = 0 to t = 100 at 1000 sample points
initial_conditions = [1, 1, 1]
t = np.linspace(0, 100, 1000)
solution = odeint(lorenz_equations, initial_conditions, t)
```
This code simulates the Lorenz equations with the given constants and initial conditions, and returns a solution that can be used to analyze the behavior of the system over time.
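To see this sensitivity to initial conditions in action, here is a minimal sketch (reusing the `lorenz_equations` function and the constants defined above) that compares two trajectories whose starting points differ by only one part in a million:

```python
import numpy as np
from scipy.integrate import odeint

t = np.linspace(0, 40, 4000)
trajectory_a = odeint(lorenz_equations, [1.0, 1.0, 1.0], t)
trajectory_b = odeint(lorenz_equations, [1.0, 1.0, 1.000001], t)

# Euclidean distance between the two trajectories at each time step
separation = np.linalg.norm(trajectory_a - trajectory_b, axis=1)
print(separation[0], separation[2000], separation[-1])
```

Despite the tiny initial difference, the separation grows rapidly and the two trajectories eventually become completely unrelated — the hallmark of chaos.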
## Exercise
1. Modify the constants $\sigma$, $\rho$, and $\beta$ in the code above to see how different values affect the behavior of the Lorenz system.
2. Change the initial conditions and observe how this affects the long-term behavior of the system.
# Setting up a Python development environment
To work with TensorFlow and chaos theory, you'll need a Python development environment. This section will guide you through setting up a Python environment using Anaconda, a popular open-source distribution.
To set up a Python development environment, follow these steps:
1. Download and install Anaconda from the official website: https://www.anaconda.com/products/distribution.
2. Open the Anaconda Navigator application.
3. Create a new conda environment by clicking on the "Environments" tab, then clicking on the "Create" button.
4. Give your environment a name (e.g., "chaos-tensorflow") and select the Python version (e.g., 3.8).
5. Click on the "Create" button to create the environment.
6. Activate the new environment by clicking on the "Home" tab, then clicking on the "Applications on" dropdown menu and selecting your environment.
7. Launch the Jupyter Notebook application by clicking on the "Notebook" tile.
Now you have a Python development environment set up with Anaconda, and you can start working with TensorFlow and chaos theory.
## Exercise
1. Install the TensorFlow package in your Anaconda environment using the following command:
```
conda install -c conda-forge tensorflow
```
2. Verify that TensorFlow is installed correctly by running the following Python code in a Jupyter Notebook cell:
```python
import tensorflow as tf
print(tf.__version__)
```
# Introduction to TensorFlow and its role in machine learning
TensorFlow is an open-source machine learning framework developed by Google. It is designed to be flexible and efficient, making it suitable for a wide range of applications, including chaos theory and machine learning.
TensorFlow is built on top of the computational graph, which is a way of representing mathematical computations as nodes and edges. This allows for efficient execution of computations and easy modification of the graph.
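As a small illustration of this idea, the `tf.function` decorator traces an ordinary Python function into a TensorFlow graph the first time it is called (the function below is a made-up example, not part of any standard API):

```python
import tensorflow as tf

@tf.function
def scaled_sum(a, b):
    # Traced into a graph on the first call, then reused for subsequent calls
    return tf.reduce_sum(a + b) * 2.0

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([4.0, 5.0, 6.0])
print(scaled_sum(x, y))  # tf.Tensor(42.0, shape=(), dtype=float32)
```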
In TensorFlow, the main building blocks are tensors and operations. Tensors are multi-dimensional arrays that can store any data type, including scalars, vectors, matrices, and higher-dimensional arrays. Operations are functions that take one or more tensors as input and produce a tensor as output.
Here's an example of how to create a tensor and perform a simple operation in TensorFlow:
```python
import tensorflow as tf
# Create a tensor with the values [1, 2, 3]
x = tf.constant([1, 2, 3])
# Perform a simple operation: x + 2
y = x + 2
# Print the result
print(y)
```
This code creates a tensor `x` with the values [1, 2, 3], performs the operation `x + 2` to create a new tensor `y`, and then prints the result.
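A few more everyday operations, shown here as a quick sketch of the API:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

print(tf.matmul(a, b))      # matrix product
print(tf.reduce_mean(a))    # mean of all elements: 2.5
print(tf.reshape(a, [4]))   # flatten to a 1-D tensor
print(a.numpy())            # convert back to a NumPy array
```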
## Exercise
1. Create a tensor with the values [4, 5, 6] and compute the element-wise product of `x` and the new tensor.
2. Perform the operation `x + y` and `x * y` and print the results.
# Creating and training recurrent neural networks with TensorFlow
To create an RNN in TensorFlow, you can use the `tf.keras.layers.SimpleRNN` layer. Here's an example:
```python
import tensorflow as tf

model = tf.keras.Sequential([
    # 32 recurrent units; input is a sequence of any length with one feature per step
    tf.keras.layers.SimpleRNN(units=32, input_shape=(None, 1)),
    # One output value per input sequence
    tf.keras.layers.Dense(units=1)
])
model.compile(optimizer='adam', loss='mean_squared_error')
```
This code creates a simple RNN with 32 hidden units and a dense output layer. The `input_shape` parameter is set to `(None, 1)` to indicate that the input can have any length and consists of scalars.
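If you want to double-check what you have built, `model.summary()` prints the layers, their output shapes, and the parameter counts:

```python
# Inspect the architecture of the model defined above
model.summary()
```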
To train the RNN, you'll need a dataset of input sequences and matching target values. You can pass these to the `model.fit` method as NumPy arrays. Here's an example:
```python
import numpy as np

# Toy dataset: 1000 sequences of length 10, one feature per step, one target each
inputs = np.random.rand(1000, 10, 1)
outputs = np.random.rand(1000, 1)

model.fit(inputs, outputs, epochs=10)
```
This code generates random input and output data and trains the model for 10 epochs.
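Once trained, the model can be applied to new sequences with `model.predict`. Since the training data here is random, the predictions are not meaningful; this is only a usage sketch:

```python
# Predict one output value for each of 5 new sequences of length 10
new_inputs = np.random.rand(5, 10, 1)
predictions = model.predict(new_inputs)
print(predictions.shape)  # (5, 1)
```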
## Exercise
1. Modify the RNN layer to have a different number of hidden units.
2. Train the model with different input and output data.
# Applying chaos theory concepts to machine learning models
Chaos theory concepts can be applied to machine learning models in various ways. One common approach is to use chaotic systems as a source of training data. This can help the model learn to predict the behavior of complex systems, even if it has never seen the system before.
Another approach is to use chaos theory concepts to analyze and understand the behavior of machine learning models. This can help us understand why certain models are more or less likely to succeed, and how they might fail.
Here's an example of how to use the Lorenz system as a source of training data for an RNN:
```python
import numpy as np
from scipy.integrate import odeint

sigma = 10
rho = 28
beta = 8 / 3

def lorenz_equations(state, t):
    x, y, z = state
    dx_dt = sigma * (y - x)
    dy_dt = x * (rho - z) - y
    dz_dt = x * y - beta * z
    return [dx_dt, dy_dt, dz_dt]

initial_conditions = [1, 1, 1]
t = np.linspace(0, 100, 1000)
solution = odeint(lorenz_equations, initial_conditions, t)

# Prepare the data for training: predict the next sampled state from the current one
input_data = solution[:-1]
output_data = solution[1:]
```
This code simulates the Lorenz equations and then prepares the data for training an RNN.
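Note that `SimpleRNN` layers expect 3-D input of shape `(samples, timesteps, features)`. One way to get there, sketched below with short sliding windows over the simulated trajectory, is to let the network see the last few states and predict the one that follows:

```python
import numpy as np

window = 10  # number of past states the RNN sees

X = np.stack([solution[i:i + window] for i in range(len(solution) - window)])
y = solution[window:]

print(X.shape)  # (990, 10, 3)
print(y.shape)  # (990, 3)
```

These arrays can be fed directly to `model.fit` for an RNN built with `input_shape=(None, 3)`.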
## Exercise
1. Modify the Lorenz equations to explore different initial conditions and constants.
2. Train an RNN with the modified data and compare its performance to the original model.
# Exploring chaotic systems using machine learning
One approach is to train machine learning models on simulated trajectories so that they learn to forecast the future behavior of a chaotic system. Another approach is to use machine learning models to analyze the sensitivity of chaotic systems to small changes in their parameters. This can help us understand how small changes in the initial conditions or constants can lead to drastically different outcomes.
Here's an example of how to use a machine learning model to predict the behavior of the Lorenz system:
```python
import tensorflow as tf

# Create a simple RNN model that maps a (short) state sequence to the next state
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(units=32, input_shape=(None, 3)),
    tf.keras.layers.Dense(units=3)
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Prepare the data for training; `solution` is the Lorenz trajectory computed earlier.
# SimpleRNN expects 3-D input (samples, timesteps, features), so add a time axis.
input_data = solution[:-1].reshape(-1, 1, 3)
output_data = solution[1:]

# Train the model
model.fit(input_data, output_data, epochs=10)
```
This code creates a simple RNN model, prepares the data for training, and then trains the model using the Lorenz system data.
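To get a feel for how well the model has learned the dynamics, you can compare its one-step predictions with the true next states (reusing `input_data` and `output_data` from the code above):

```python
import numpy as np

predicted = model.predict(input_data)
mse = np.mean((predicted - output_data) ** 2)
print(f"One-step prediction MSE: {mse:.6f}")
```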
## Exercise
1. Modify the RNN model to have a different number of hidden units.
2. Train the model with different input and output data.
# Real-world applications of chaos theory and machine learning
Chaos theory and machine learning have numerous real-world applications, including:
- Predicting and understanding the behavior of complex systems, such as weather patterns, stock market trends, and biological processes.
- Developing new materials and drug compounds by predicting their properties based on their molecular structure.
- Analyzing the stability and reliability of systems, such as computer networks and financial markets.
- Designing new algorithms and data structures for efficient computation and storage.
## Exercise
1. Discuss a real-world application of chaos theory and machine learning that is not mentioned in this section.
2. Propose a new research project that combines chaos theory and machine learning to address a specific problem or question.
# Advanced topics in chaos theory and machine learning
In this section, we briefly survey some more advanced topics at the intersection of chaos theory and machine learning:

- The use of deep learning techniques, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to analyze and predict the behavior of chaotic systems.
- The study of the sensitivity of chaotic systems to small changes in their parameters, and how this can be used to improve machine learning models.
- The development of new algorithms and data structures for efficient computation and storage in chaotic systems.
These advanced topics provide a deeper understanding of the connections between chaos theory and machine learning, and offer new opportunities for research and development.
## Exercise
1. Discuss a potential application of deep learning techniques to chaos theory.
2. Propose a research project that combines chaos theory and deep learning to address a specific problem or question.
# Combining chaos theory and machine learning using TensorFlow in Python
To combine chaos theory and machine learning in a single TensorFlow workflow, you will need to:
1. Simulate the chaotic system using the appropriate mathematical equations or numerical methods.
2. Prepare the data for training by extracting the relevant features and labels.
3. Create a machine learning model using TensorFlow.
4. Train the model using the prepared data.
5. Evaluate the model's performance and make any necessary adjustments.
Here's an example of how to use TensorFlow to combine chaos theory and machine learning:
```python
import numpy as np
from scipy.integrate import odeint
import tensorflow as tf

# Simulate the Lorenz system
# ... (code for simulating the Lorenz system)

# Prepare the data for training
# ... (code for preparing the data)

# Create a simple RNN model
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(units=32, input_shape=(None, 3)),
    tf.keras.layers.Dense(units=3)
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(input_data, output_data, epochs=10)
```
This code simulates the Lorenz system, prepares the data for training, creates a simple RNN model, and then trains the model using the Lorenz system data.
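Once the one-step model is trained, a longer forecast can be produced by repeatedly feeding the model its own predictions. This is only a sketch, and it assumes `input_data` has the 3-D shape `(samples, 1, 3)` used earlier:

```python
import numpy as np

state = input_data[-1:].copy()  # last known state, shape (1, 1, 3)
forecast = []
for _ in range(100):
    next_state = model.predict(state, verbose=0)  # shape (1, 3)
    forecast.append(next_state[0])
    state = next_state.reshape(1, 1, 3)

forecast = np.array(forecast)
print(forecast.shape)  # (100, 3)
```

Because the system is chaotic, such rollouts are reliable only over short horizons: small prediction errors grow quickly, which is exactly what chaos theory leads us to expect.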
## Exercise
1. Modify the Lorenz system simulation to explore different initial conditions and constants.
2. Train an RNN with the modified data and compare its performance to the original model.
# Conclusion and future developments
In this textbook, we have explored the combination of chaos theory and machine learning using TensorFlow in Python. This powerful approach can be used to study and understand complex systems, make predictions, and develop new models and techniques.
There are many exciting future developments in this field, including:
- The development of more advanced and efficient algorithms for simulating chaotic systems.
- The integration of chaos theory concepts into machine learning models to improve their performance and reliability.
- The application of chaos theory and machine learning to new and emerging fields, such as quantum computing and artificial intelligence.
## Exercise
1. Discuss a potential future development in the field of chaos theory and machine learning.
2. Propose a new research project that combines chaos theory and machine learning to address a specific problem or question. | Textbooks |
Fourier Series Of Sine Wave
Joseph Fourier showed in 1807 that any piecewise continuous periodic function with frequency ν can be expressed as the sum of an infinite series of sines and cosines whose frequencies are integer multiples of ν. Fourier and a number of his contemporaries were led to this idea by the study of vibrating strings, where each natural vibration is described by a sine wave. The expansion

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\big(a_n\cos(nt) + b_n\sin(nt)\big), \qquad a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos(nt)\,dt, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin(nt)\,dt,$$

is called the Fourier series of f. Because the sine and cosine functions are orthogonal to one another, each coefficient measures how much of that particular harmonic is present in the waveform. The time-domain signal represented by a Fourier series is periodic and continuous; for non-periodic functions the series is replaced by the Fourier transform. In practice the analysis equations are usually written in terms of the period T of the waveform rather than the fundamental frequency f = 1/T, and the series can also be written in a complex exponential form, which is often easier to obtain because only one set of coefficients needs to be evaluated.

The simplest case is a pure sine wave: its Fourier series is a single term, the wave itself, so every other harmonic amplitude is zero and the total harmonic distortion is zero — nothing looks more like a sine wave than a sine wave.

A square wave is more interesting. For the odd square wave of period 2π taking the values −1 and +1, the average value and all cosine coefficients vanish, and

$$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\cos(nt)\,dt = 0, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)\sin(nt)\,dt = \frac{2\big(1-(-1)^n\big)}{n\pi}, \qquad \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\,dt = 0,$$

so the square wave can be modelled as

$$f(t) = \sum_{n=1}^{\infty}\frac{2\big(1-(-1)^n\big)}{n\pi}\sin(nt) = \frac{4}{\pi}\left[\sin(t)+\frac{\sin(3t)}{3}+\frac{\sin(5t)}{5}+\cdots\right],$$

a series containing only the odd harmonics. The fact that a discontinuous square wave can be built as a linear combination of sinusoids at harmonically related frequencies is somewhat astonishing, and it illustrates a general rule: an odd function has only sine terms in its Fourier expansion, while an even function has only cosine terms.

Other standard waveforms have well-known expansions. An even triangle wave of amplitude A and period T has the series

$$f(t) = \frac{A}{2} - \frac{4A}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{(2n-1)^2}\cos\frac{2(2n-1)\pi t}{T},$$

whose coefficients shrink like 1/n² — a faster decay than the 1/n of the square wave, and a common feature of smoother waveforms. The Fourier series of the linear function f(x) = x converges to its odd periodic extension, a sawtooth wave, and therefore contains only sine terms. A full-wave rectified sine wave is even, so only cosine terms survive:

$$f(x) = \frac{2}{\pi} - \frac{4}{\pi}\sum_{n=2,4,6,\ldots}^{\infty}\frac{1}{n^2-1}\cos(nx),$$

while a half-wave rectified sine wave keeps a DC term, a single sine term at the fundamental frequency, and even cosine harmonics.

Fourier series are the workhorse of harmonic analysis of periodic waveforms. In electronics they are used to approximate rectangular, triangular and rectified waveforms; the average power supplied by a periodic voltage is simply the sum of the powers carried by its harmonics, and the response of a circuit with reactive components (capacitors and inductors) to a square wave can be analysed one harmonic at a time. More generally, decomposing a signal into a DC component plus sine and cosine harmonics often makes it far easier to analyse in the frequency domain than in the time domain, which is why Fourier analysis appears throughout communication, audio and image processing, and physics.
A cosine wave is just a sine wave shifted in phase by 90 o (φ. check_circle. Compute the sum of the rst 100 terms in the Fourier series of f ( t ). Square Wave from Sine Waves. So the Fourier series representation of a perfect sine wave is a perfect sine wave. If you click the second button another (smaller) sine wave is added to the picture with a frequency of 3/2 Hz (this is three times as fast as the square wave (and the original sine wave); we call this the 3 rd harmonic). Laurent Series Yield Fourier Series (Fourier Theorem). 3 Representing f(x) by Both a Sine and Cosine Series. In the processing of audio signals (although it can be used for radio waves, light waves, seismic waves, and even images), Fourier analysis can isolate individual components of a continuous complex waveform, and concentrate. Full-Wave Recti ed Sine Wave V. A closer analysis shows that full-wave rectifier and pure sine wave are respectively even and odd extensions of half-wave rectifier! In summary, the Fourier Series for a periodic continuous-time signal can be described using the two equations. A Fourier series is a way of representing a periodic function as a (possibly infinite) sum of sine and cosine functions. 1) where a 0, a n, and b. You have seen that functions have series representations as expansions in powers of x, or x a, in the form of Maclaurin and Taylor series. Show Hide -1 older comments. , 90° or π/2 radians). x/ D1 for 0 < x <. We will use the Fourier sine series for representation of the nonhomogeneous solution to satisfy the boundary conditions. Like Example Problem 11. A plot of wave amplitude versus time can be very complex as in (three periods of the oscillating wave are shown). The Fourier series represents a periodic waveform of a given frequency as a sum of sine and cosine functions that are multiples of the fundamental frequency: Where f(x) is the function in question a 0 is the dc component a n is the level of each cosine wave b n is the level of each sine wave x is the independant variable, or the time in seconds. This Demonstration shows how a Fourier series of sine terms can approximate discontinuous periodic functions well, even with only a few terms in the series. Baron found that we can represent periodic functions by series of sine and cosine waves which are related harmonically to each other. 3 Representing f(x) by Both a Sine and Cosine Series. A Fourier sine series F(x) is an odd 2T-periodic function. That is, it has period `pi`. I am new to geogebra and would like to plot graphs of functions in the time domain, find the fourier transform and then plot in the frequency domain. Conversely, a signal consisting of zeros everywhere except at a single point, called a delta function , has equal Fourier components at all frequencies. First term in a Fourier series. Fourier Series Analysis { Fourier Series Analysis (C) 2005-2018 John F. The computation and study of Fourier series is known as harmonic analysis and is extremely useful as a way to break. Fourier Series Grapher. Fourier and a number of his contemporaries were interested in the study of vibrating strings. 082 Spring 2007 Fourier Series and Fourier Transform, Slide 5 Subtract Positive and Negative Frequencies Note: • As t increases, the subtractionof positiveand negativefrequency complex exponentials leads to a sinewave - Note that the resulting sine wave is purely imaginaryand considered to have a positivefrequency-e-jωt I Q ejωt 2sin(ωt). Full-Wave Recti ed Sine Wave V. 
It is named after the French mathematician and physicist Jean-Baptiste Joseph Fourier (1768–1830). Properties of Even & Odd Function: While dealing with the Fourier series, we must have a proper idea about the basic stuff of even and odd functions that includes: Addition Properties: The sum of two even functions is always an even function. Find the Fourier Cosine series of f(x) = x for. Recall that the Fourier series of f(x) is defined by where We have the following result: Theorem. (The meaning of "orthogonal" kind of abstract here…) Any function can be represented as a sum. Fourier Series Calculator is a Fourier Series on line utility, simply enter your function if piecewise, introduces each of the parts and calculates the Fourier coefficients may also represent up to 20 coefficients. The Fourier Series only holds while the system is linear. representing a function with a series in the form Sum( B_n sin(n pi x / L) ) from n=1 to n=infinity. Hence we can understand much about essentially any wave simply by studying sinusoidal waves. Here, T is the period of the square wave, or equivalently, f is its frequency, where f = 1/ T. A periodic function f (t ) is said to have a quarter wave symmetry, if it possesses (i) even symmetry at an interval of quarter of a wave (ii) even symmetry and half-wave. Since the time domain signal is periodic, the sine and cosine wave correlation only needs to be evaluated over a single period, i. The Fourier series is named after the French mathematician Joseph Fourier. To derive formulas for the Fourier coefficients, that is, the a′s and b′s,. We are seeing the effect of adding sine or cosine functions. Digitize low-frequency waves from the function generator, sine, triangle, and square. Fourier Series Grapher. where the frequencies and amplitudes have been normalized to unity for sim-plicity. Let the integer m become a real number and let the coefficients, F m, become a function F(m). Math 331, Fall 2017, Lecture 2, (c) Victor Matveev. , to get the value of coefficients. Transcribed Image Text. So, if the amplitude of the swing is adequate, the ROC crossing zero is an excellent time to enter or exit a swing trade. Wave Symmetry: If the periodic signal x(t) has some type of symmetry, then some of the trigonometric Fourier series coefficients may become zero and calculation of the coefficients becomes simple. The Fourier Transform and its kin operate by analyzing an input waveform into a series of sinusoidal waves of various frequencies and amplitudes. Derivative numerical and analytical calculator. Page 13/27. The time domain signal used in the Fourier series is periodic and continuous. square waves, sawtooth are and it is easy to work with sines. and An and Bn are the spectral amplitudes of cosine and sine waves. In sound: The Fourier theorem …is the spectral analysis, or Fourier analysis, of a steady-state wave. x/ Df1 or 0 or 1g. Each function is "orthogonal" to each other. Mayur Gondalia. Then the adjusted function f (t) is de ned by f (t)= f(t)fort= p, p Z ,. This is known as a Fourier series. Spectrum analysis of a function. The sine functions all go to zero at x= Land 2 doesn't, making it hard for the sum of sines to approximate the desired function. According to the Fourier theorem, a steady-state wave is composed of a series of sinusoidal components whose frequencies are those of the fundamental and its harmonics, each component having the proper amplitude and phase. 
The Fourier transform is a way for us to take the combined wave, and get each of the sine waves back out. Aljanaby 18 Example: Find the average power supplied to a network if the applied voltage and resulting current are Sol: The total average power is the sum of the harmonic powers: Example: Find the trigonometric Fourier series for the half-wave-rectified sine. Joseph Fourier showed that any periodic wave can be represented by a sum of simple sine waves. Introduction. Fourier Series: A Fourier series is a representation of a wave form or other periodic function as a sum of sines and cosines. The point in doing this is to illustrate how we can build a square wave up from multiple sine waves at different frequencies, to prove that a pure square wave is actually equivalent to a series of sine waves. That turns out to be a special case of this more general rotating vector phenomenon that we'll build up to, but it's where Fourier himself started, and there's good reason for us to start the story there as well. The signal was to be displayed in the time domain and a properly. 3 Representing f(x) by Both a Sine and Cosine Series. Fourier series expansion of half wave rectifier Fourier series for a half-wave rectifier. The following two figures show the "Fourier construction" of a periodic, bipolar, unit-amplitude triangle wave. For DC it doesn't. 5 Adding sine waves. Start by forming a time vector running from 0 to 10 in steps of 0. For example the wave in Figure 1, is a sum of the three sine waves shown in Figure. This process, in effect, converts a waveform in the time domain that is difficult to describe mathematically into a more manageable series of sinusoidal functions that. So this is the first function. This is known as a Fourier series. Definition of Fourier series The Fourier sine series, defined in Eq. Plot this fundamental frequency. Numerous texts are available to explain the basics of Discrete Fourier Transform and its very efficient implementation - Fast Fourier Transform (FFT). The second term is the only sine term in the series. endhomelessness. , cos (x ) = cos (–x ). The steps to be followed for solving a Fourier series are given below: Step 1: Multiply the given function by sine or cosine, then integrate. Note: the sine wave is the same frequency as the square wave; we call this the 1 st (or fundamental) harmonic. 5 Continuous Fourier Series. =tan^2 x$ into Fourier series for $-frac{pi}{2}leq x leq frac{pi}{2}$ 0. I am new to geogebra and would like to plot graphs of functions in the time domain, find the fourier transform and then plot in the frequency domain. Wave Symmetry: If the periodic signal x(t) has some type of symmetry, then some of the trigonometric Fourier series coefficients may become zero and calculation of the coefficients becomes simple. The Fourier series represents a periodic waveform of a given frequency as a sum of sine and cosine functions that are multiples of the fundamental frequency: Where f(x) is the function in question a 0 is the dc component a n is the level of each cosine wave b n is the level of each sine wave x is the independant variable, or the time in seconds. 1 Fourier trigonometric series Fourier's theorem states that any (reasonably well-behaved) function can be written in terms of trigonometric or exponential functions. In fact, as we add terms in the Fourier series representa-. In this section we define the Fourier Sine Series, i. We are trying to fit a single side signal in one case only into an otherwise bipolar system. 
o The first five sine coefficients are calculated. Fourier series of a simple linear function f(x)=x converges to an odd periodic extension of this function, which is a saw-tooth wave. The Fourier Series (continued) Prof. | CommonCrawl |
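To make the construction concrete, the short Python sketch below (all names are illustrative and not taken from any of the sources above) sums the first few odd sine harmonics of a unit-amplitude, zero-mean square wave and reports how far each partial sum is from the ideal wave.

import numpy as np

def square_wave_partial_sum(t, n_terms, f=1.0):
    # Partial Fourier sum of a +/-1 square wave of frequency f:
    # (4/pi) * sum over the first n_terms odd harmonics of sin(2*pi*n*f*t)/n.
    s = np.zeros_like(t)
    for k in range(1, n_terms + 1):
        n = 2 * k - 1                              # only odd harmonics appear
        s += np.sin(2 * np.pi * n * f * t) / n
    return 4.0 / np.pi * s

t = np.linspace(0.0, 2.0, 1000)                    # two periods at f = 1 Hz
exact = np.sign(np.sin(2 * np.pi * t))             # ideal square wave
for n_terms in (1, 3, 10, 50):
    approx = square_wave_partial_sum(t, n_terms)
    rms = float(np.sqrt(np.mean((approx - exact) ** 2)))
    print(n_terms, "terms: rms error =", round(rms, 3))

The rms error keeps shrinking (roughly like one over the square root of the number of terms kept), while pointwise the partial sums still overshoot near the jumps; that persistent overshoot is the Gibbs phenomenon.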
Sample records for code cyrano3 running
Description of modelling to be implemented in the fuel rod thermomechanics code Cyrano3; Description des modeles a introduire dans le logiciel de thermomecanique du crayon combustible Cyrano3
Baron, D; Bouffioux, P
CYRANO3 is the new EDF thermomechanical code developed to evaluate the overall fuel rod behavior under irradiation. In that context, this paper presents the phenomena to be simulated and the correlations adopted for modelling purposes. The empirical models presented are taken from the CYRANO2 code and a compilation of the relevant literature. The present revision corrects and supplements version B on the basis of its use during the software coding phase from January 1991 to May 1993. (authors). figs., tabs., 120 refs.
A new coupling of the 3D thermal-hydraulic code THYC and the thermo-mechanical code CYRANO3 for PWR calculations
Marguet, S.D. [Electricite de France (EDF), 92 - Clamart (France)
Among all parameters, the fuel temperature has a significant influence on the reactivity of the core, because of the Doppler effect on cross-sections. Most neutronic codes use a straightforward method to calculate an average fuel temperature used in their specific feed-back models. For instance, EDF's neutronic code COCCINELLE uses Rowland's formula with the temperatures of the center and the surface of the pellet. COCCINELLE is coupled to the 3D thermal-hydraulic code THYC, which calculates TDoppler with its standard thermal model. In order to improve the accuracy of such calculations, we have developed the coupling of our two latest codes in thermal-hydraulics (THYC) and thermo-mechanics (CYRANO3). THYC calculates two-phase flows in pipes or rod bundles and is used for transient calculations such as steam-line break, boron dilution accidents, DNB predictions, steam generator and condenser studies. CYRANO3 calculates most of the phenomena that take place in the fuel, such as: 1) heat transfer induced by nuclear power; 2) thermal expansion of the fuel and the cladding; 3) release of gaseous fission products; 4) mechanical interaction between the pellet and the cladding. These two codes are now qualified in their own fields, and the coupling, using Parallel Virtual Machine (PVM) libraries customized in a home-made, easy-to-use package called CALCIUM, has been validated on 'low' configurations (no thermal expansion, constant thermal characteristics) and used on accidental transients such as rod ejection and loss-of-coolant accident. (K.A.) 7 refs.
Running codes through the web
Clark, R.E.H.
Dr. Clark presented a report and demonstration of running atomic physics codes through the WWW. The atomic physics data is generated from Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and the inverse processes through detailed balance. Samples of Web interfaces, input and output are given in the report.
Running code as part of an open standards policy
Shah, Rajiv; Kesan, Jay
Governments around the world are considering implementing or even mandating open standards policies. They believe these policies will provide economic, socio-political, and technical benefits. In this article, we analyze the failure of the Massachusetts's open standards policy as applied to document formats. We argue it failed due to the lack of running code. Running code refers to multiple independent, interoperable implementations of an open standard. With running code, users have choice ...
Modelling 3-D mechanical phenomena in a 1-D industrial finite element code: results and perspectives
Guicheret-Retel, V.; Trivaudey, F.; Boubakar, M.L.; Masson, R.; Thevenin, Ph.
Assessing fuel rod integrity in PWR reactors must reconcile two opposing goals: a one-dimensional finite element code (axial revolution symmetry) is needed to provide industrial results at the scale of the reactor core, while the main risk of cladding failure [e.g. pellet-cladding interaction (PCI)] arises from fully three-dimensional phenomena. First, parametric three-dimensional elastic calculations were performed to identify the parameters relevant to PCI (fragment number, pellet-cladding contact conditions, etc.). The axial fragment number and the friction coefficient are shown to play a major role in PCI, as opposed to the other parameters. Next, the main limitations of the one-dimensional hypothesis of the finite element code CYRANO3 are identified. To overcome these limitations, both two- and three-dimensional emulations of CYRANO3 were developed. These developments are shown to significantly improve the results provided by CYRANO3. (authors)
RunJumpCode: An Educational Game for Educating Programming
Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon
Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…
Running the source term code package in Elebra MX-850
Guimaraes, A.C.F.; Goes, A.G.A.
The source term package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials escaping from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains 5 codes: March 3, Trapmelt, Tcca, Vanessa and Nava. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)
Strong normalization by type-directed partial evaluation and run-time code generation
Balat, Vincent; Danvy, Olivier
We investigate the synergy between type-directed partial evaluation and run-time code generation for the Caml dialect of ML. Type-directed partial evaluation maps simply typed, closed Caml values to a representation of their long βη-normal form. Caml uses a virtual machine and has the capability to load byte code at run time. Representing the long βη-normal forms as byte code gives us the ability to strongly normalize higher-order values (i.e., weak head normal forms in ML), to compile the resulting strong normal forms into byte code, and to load this byte code all in one go, at run time. We conclude this note with a preview of our current work on scaling up strong normalization by run-time code generation to the Caml module language.
Project Everware - running other people's code doesn't have to be painful
CERN. Geneva
Everware is a project that allows you to edit and run someone else's code with one click, even if that code has complicated setup instructions. The main aim of the project is to encourage reuse of software between researchers by making it easy and risk free to try out someone else's code.
A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
Folklore in bureaucracy code: Running a music event
Krstanović-Lukić Miroslava
Full Text Available A music folk-created piece of work is a construction expressed as a paradigm part of a set in the bureaucracy system and the public arena. Such a work is a mechanical concept, which defines inheritance as a construction of authenticity saturated with elements of folk, national culture. It is also subject to certain conventions in the system of regulations; namely, it is a part of the administrative code. The usage of the folk-created work as a paradigm and in legislation is realized through an organizational apparatus; that is, it becomes entertainment, a spectacle. This paper analyzes the functioning of the organizational machinery of a folk spectacle, starting with the government authorities, local self-management and the spectacle's administrative committees. To illustrate this phenomenon, the paper presents the development of a trumpet playing festival in Dragačevo. This particular festival establishes a cultural, economic and political order with a clear and defined division of power. The analysis shows that the folk event in question, through its programs and activities, represents a scene and arena of individual and group interests. Organizational interactions are recognized in binary oppositions: sovereignty/dependency, official/unofficial, dominance/subordination, innovative/inherited, common/different, needed/useful, original/copy, one's own/belonging to someone else.
Running the EGS4 Monte Carlo code with Fortran 90 on a pentium computer
Caon, M.; Bibbo, G.; Pattison, J.
The possibility to run the EGS4 Monte Carlo code radiation transport system for medical radiation modelling on a microcomputer is discussed. This has been done using a Fortran 77 compiler with a 32-bit memory addressing system running under a memory extender operating system. In addition a virtual memory manager such as QEMM386 was required. It has successfully run on a SUN Sparcstation2. In 1995 faster Pentium-based microcomputers became available as did the Windows 95 operating system which can handle 32-bit programs, multitasking and provides its own virtual memory management. The paper describe how with simple modification to the batch files it was possible to run EGS4 on a Pentium under Fortran 90 and Windows 95. This combination of software and hardware is cheaper and faster than running it on a SUN Sparcstation2. 8 refs., 1 tab
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Susanne Kunkel
Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Li, Yankai; Lin, Meng, E-mail: [email protected]; Yang, Yanhua
When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with one single RELAP5 in a large-scale simulation. To improve the speed and ensure the precision of the simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The compromise on synchronization frequency was carefully considered to improve the precision of the simulation and guarantee real-time simulation at the same time. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between the RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example a simulator employing ATHLETE instead of RELAP5, or other logic code instead of SIMULINK. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
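The abstract does not spell out the implementation, but the basic pattern of explicit coupling through shared boundary data, with a tunable synchronization frequency, can be sketched in a few lines of Python. The two "solvers" below are toy relaxation models standing in for the split RELAP5 models, and every name and number is illustrative.

def advance(state, boundary_value, dt, tau=2.0):
    # Toy sub-model: relax toward the boundary value received at the last exchange.
    return state + dt * (boundary_value - state) / tau

def coupled_run(t_end=10.0, dt=0.01, sync_every=10):
    state_a, state_b = 1.0, 0.0                    # the two split models
    bc_for_a, bc_for_b = state_b, state_a          # boundary data from the last exchange
    for i in range(int(t_end / dt)):
        state_a = advance(state_a, bc_for_a, dt)   # both halves step independently
        state_b = advance(state_b, bc_for_b, dt)
        if (i + 1) % sync_every == 0:              # synchronization point
            bc_for_a, bc_for_b = state_b, state_a
    return state_a, state_b

for sync_every in (1, 10, 100):
    print(sync_every, coupled_run(sync_every=sync_every))

A smaller sync_every keeps the split models closer to the fully integrated solution at the price of more communication, which is exactly the compromise on synchronization frequency the authors describe.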
Experience gained in running the EPRI MMS code with an in-house simulation language
Weber, D.S.
The EPRI Modular Modeling System (MMS) code represents a collection of component models and a steam/water properties package. This code has undergone extensive verification and validation testing. Currently, the code requires a commercially available simulation language to run. The Philadelphia Electric Company (PECO) has been modeling power plant systems for over sixteen years. As a result, an extensive number of models have been developed, and a great deal of experience has been gained using an in-house simulation language. The objective of this study was to explore the possibility of developing an MMS pre-processor which would allow the use of the MMS package with other simulation languages such as the PECO in-house simulation language.
MAX: an expert system for running the modular transport code APOLLO II
Loussouarn, O.; Ferraris, C.; Boivineau, A.
MAX is an expert system built to help users of the APOLLO II code to prepare the input data deck to run a job. APOLLO II is a modular transport-theory code for calculating the neutron flux in various geometries. The associated GIBIANE command language allows the user to specify the physical structure and the computational method to be used in the calculation. The purpose of MAX is to bring into play expertise in both neutronic and computing aspects of the code, as well as various computational schemes, in order to generate automatically a batch data set corresponding to the APOLLO II calculation desired by the user. MAX is implemented on the SUN 3/60 workstation with the S1 tool and graphic interface external functions
Automated JPSS VIIRS GEO code change testing by using Chain Run Scripts
Chen, W.; Wang, W.; Zhao, Q.; Das, B.; Mikles, V. J.; Sprietzer, K.; Tsidulko, M.; Zhao, Y.; Dharmawardane, V.; Wolf, W.
The Joint Polar Satellite System (JPSS) is the next generation polar-orbiting operational environmental satellite system. The first satellite in the JPSS series of satellites, J-1, is scheduled to launch in early 2017. J1 will carry similar versions of the instruments that are on board of Suomi National Polar-Orbiting Partnership (S-NPP) satellite which was launched on October 28, 2011. The center for Satellite Applications and Research Algorithm Integration Team (STAR AIT) uses the Algorithm Development Library (ADL) to run S-NPP and pre-J1 algorithms in a development and test mode. The ADL is an offline test system developed by Raytheon to mimic the operational system while enabling a development environment for plug and play algorithms. The Perl Chain Run Scripts have been developed by STAR AIT to automate the staging and processing of multiple JPSS Sensor Data Record (SDR) and Environmental Data Record (EDR) products. JPSS J1 VIIRS Day Night Band (DNB) has anomalous non-linear response at high scan angles based on prelaunch testing. The flight project has proposed multiple mitigation options through onboard aggregation, and the Option 21 has been suggested by the VIIRS SDR team as the baseline aggregation mode. VIIRS GEOlocation (GEO) code analysis results show that J1 DNB GEO product cannot be generated correctly without the software update. The modified code will support both Op21, Op21/26 and is backward compatible with SNPP. J1 GEO code change version 0 delivery package is under development for the current change request. In this presentation, we will discuss how to use the Chain Run Script to verify the code change and Lookup Tables (LUTs) update in ADL Block2.
Validation analysis of pool fire experiment (Run-F7) using SPHINCS code
Yamaguchi, Akira; Tajima, Yuji
SPHINCS (Sodium Fire Phenomenology IN multi-Cell System) code has been developed for the safety analysis of sodium fire accident in a Fast Breeder Reactor. The main features of the SPHINCS code with respect to the sodium pool fire phenomena are multi-dimensional modeling of the thermal behavior in sodium pool and steel liner, modeling of the extension of sodium pool area based on the sodium mass conservation, and equilibrium model for the chemical reaction of pool fire on the flame sheet at the surface of sodium pool during. Therefore, the SPHINCS code is capable of temperature evaluation of the steel liner in detail during the small and/or medium scale sodium leakage accidents. In this study, Run-F7 experiment in which the sodium leakage rate is 11.8 kg/hour has been analyzed. In the experiment the diameter of the sodium pool is approximately 60 cm and the maximum steel liner temperature was 616 degree C. The analytical results tell us the agreement between the SPHINCS analysis and the experiment is excellent with respect to the time history and spatial distribution of the liner temperature, sodium pool extension behavior, as well as atmosphere gas temperature. It is concluded that the pool fire modeling of the SPHINCS code has been validated for this experiment. The SPHINCS code is currently applicable to the sodium pool fire phenomena and the temperature evaluation of the steel liner. The experiment series are continued to check some parameters, i.e., sodium leakage rate and the height of sodium leakage. Thus, the author will analyze the subsequent experiments to check the influence of the parameters and applies SPHINCS to the sodium fire consequence analysis of fast reactor. (author)
Modeling of a confinement bypass accident with CONSEN, a fast-running code for safety analyses in fusion reactors
Caruso, Gianfranco, E-mail: [email protected] [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Giannetti, Fabio [Sapienza University of Rome – DIAEE, Corso Vittorio Emanuele II, 244, 00186 Roma (Italy); Porfiri, Maria Teresa [ENEA FUS C.R. Frascati, Via Enrico Fermi, 45, 00044 Frascati, Roma (Italy)
Highlights: • The CONSEN code for thermal-hydraulic transients in fusion plants is introduced. • A magnet induced confinement bypass accident in ITER has been simulated. • A comparison with previous MELCOR results for the accident is presented. -- Abstract: The CONSEN (CONServation of ENergy) code is a fast running code to simulate thermal-hydraulic transients, specifically developed for fusion reactors. In order to demonstrate CONSEN capabilities, the paper deals with the accident analysis of the magnet induced confinement bypass for ITER design 1996. During a plasma pulse, a poloidal field magnet experiences an over-voltage condition or an electrical insulation fault that results in two intense electrical arcs. It is assumed that this event produces two one square meters ruptures, resulting in a pathway that connects the interior of the vacuum vessel to the cryostat air space room. The rupture results also in a break of a single cooling channel within the wall of the vacuum vessel and a breach of the magnet cooling line, causing the blow down of a steam/water mixture in the vacuum vessel and in the cryostat and the release of 4 K helium into the cryostat. In the meantime, all the magnet coils are discharged through the magnet protection system actuation. This postulated event creates the simultaneous failure of two radioactive confinement barrier and it envelopes all type of smaller LOCAs into the cryostat. Ice formation on the cryogenic walls is also involved. The accident has been simulated with the CONSEN code up to 32 h. The accident evolution and the phenomena involved are discussed in the paper and the results are compared with available results obtained using the MELCOR code.
Calculation of Sodium Fire Test-I (Run-E6) using sodium combustion analysis code ASSCOPS version 2.0
Nakagiri, Toshio; Ohno, Shuji; Miyake, Osamu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center
The calculation of Sodium Fire Test-I (Run-E6) was performed using the ASSCOPS (Analysis of Simultaneous Sodium Combustions in Pool and Spray) code version 2.0 in order to determine the parameters used in the code for the calculations of sodium combustion behavior of small or medium scale sodium leak, and to validate the applicability of the code. The parameters used in the code were determined and the validation of the code was confirmed because calculated temperatures, calculated oxygen concentration and other calculated values almost agreed with the test results. (author)
Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number
Kohei Arai; Yuji Yamada
An attempt is made to improve secret image invisibility in circulation images with dyadic wavelet based data hiding, using run-length coded secret images whose code locations are determined by random numbers. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance in terms of invisibility of secret images...
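The wavelet embedding itself is beyond a short sketch, but the two ingredients named in the title (run-length coding of the secret image, and choosing the embedding locations with a shared-seed random number generator) look roughly like the Python below. All names are illustrative and none of this reproduces the authors' actual scheme.

import random

def run_length_encode(bits):
    # Code a sequence of 0/1 pixels as (value, run-length) pairs.
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

def choose_locations(n_coefficients, n_needed, seed=1234):
    # Non-repeating pseudo-random coefficient indices; the receiver can
    # regenerate exactly the same locations from the shared seed.
    rng = random.Random(seed)
    return rng.sample(range(n_coefficients), n_needed)

secret_row = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0]
codes = run_length_encode(secret_row)
locations = choose_locations(n_coefficients=4096, n_needed=len(codes))
print(codes)
print(locations)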
Increasing the efficiency of the TOUGH code for running large-scale problems in nuclear waste isolation
Nitao, J.J.
The TOUGH code developed at Lawrence Berkeley Laboratory (LBL) is being extensively used to numerically simulate the thermal and hydrologic environment around nuclear waste packages in the unsaturated zone for the Yucca Mountain Project. At the Lawrence Livermore National Laboratory (LLNL) we have rewritten approximately 80 percent of the TOUGH code to increase its speed and incorporate new options. The geometry of many problems requires large numbers of computational elements in order to realistically model detailed physical phenomena, and, as a result, large amounts of computer time are needed. In order to increase the speed of the code, we have incorporated fast linear equation solvers, vectorization of substantial portions of the code, improved automatic time stepping, and table look-up for the steam table properties. These enhancements have increased the speed of the code for typical problems by a factor of 20 on the Cray 2 computer. In addition to the increase in computational efficiency we have added several options: vapor pressure lowering; equivalent continuum treatment of fractures; energy and material volumetric, mass and flux accounting; and Stefan-Boltzmann radiative heat transfer. 5 refs
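One of the speed-ups mentioned, replacing repeated property evaluations with a table look-up, is easy to illustrate. The correlation below is a generic Antoine-type expression used purely as a stand-in for an expensive routine; it is not the TOUGH steam-table package, and all names and numbers are assumptions for illustration only.

import numpy as np

def saturation_pressure(T):
    # Stand-in for an expensive property call (Antoine-type fit, Pa, illustrative only).
    return np.exp(23.196 - 3816.44 / (T - 46.13))

# Build the table once ...
T_grid = np.linspace(280.0, 640.0, 2048)
p_grid = saturation_pressure(T_grid)

# ... then serve every later query by linear interpolation.
def saturation_pressure_lut(T):
    return np.interp(T, T_grid, p_grid)

T_query = np.random.uniform(300.0, 600.0, 1_000_000)
p_fast = saturation_pressure_lut(T_query)          # vectorized table look-up
p_ref = saturation_pressure(T_query)                # direct evaluation, for comparison
print("max relative error:", float(np.max(np.abs(p_fast - p_ref) / p_ref)))

In a real code the pay-off comes when the tabulated function is far more expensive than the single exponential used here.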
Probabilistic evaluation of fuel element performance by the combined use of a fast running simplistic and a detailed deterministic fuel performance code
Misfeldt, I.
A comprehensive evaluation of fuel element performance requires a probabilistic fuel code supported by a well bench-marked deterministic code. This paper presents an analysis of a SGHWR ramp experiment, where the probabilistic fuel code FRP is utilized in combination with the deterministic fuel models FFRS and SLEUTH/SEER. The statistical methods employed in FRP are Monte Carlo simulation or a low-order Taylor approximation. The fast-running simplistic fuel code FFRS is used for the deterministic simulations, whereas simulations with SLEUTH/SEER are used to verify the predictions of FFRS. The ramp test was performed with a SGHWR fuel element, where 9 of the 36 fuel pins failed. There seemed to be good agreement between the deterministic simulations and the experiment, but the statistical evaluation shows that the uncertainty on the important performance parameters is too large for this ''nice'' result. The analysis does therefore indicate a discrepancy between the experiment and the deterministic code predictions. Possible explanations for this disagreement are discussed. (author)
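The two statistical methods named in the abstract, Monte Carlo simulation and a low-order Taylor approximation, can be contrasted on a toy response function. The "fuel performance" formula below is a made-up stand-in, not FFRS or SLEUTH/SEER, and all numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)

def peak_clad_temperature(power, gap_conductance):
    # Made-up smooth response, standing in for a deterministic fuel code run.
    return 600.0 + 8.0 * power + 1.5e4 / gap_conductance

mu = np.array([40.0, 5.0e3])       # mean linear power, mean gap conductance
sigma = np.array([2.0, 5.0e2])     # their standard deviations (illustrative)

# 1) Monte Carlo simulation: sample the inputs, run the model many times.
samples = rng.normal(mu, sigma, size=(100_000, 2))
t_mc = peak_clad_temperature(samples[:, 0], samples[:, 1])
print("Monte Carlo: mean %.1f K, std %.1f K" % (t_mc.mean(), t_mc.std()))

# 2) Low-order (first-order Taylor) approximation around the mean input.
eps = 1e-4 * mu
grad = np.array([
    (peak_clad_temperature(mu[0] + eps[0], mu[1])
     - peak_clad_temperature(mu[0] - eps[0], mu[1])) / (2 * eps[0]),
    (peak_clad_temperature(mu[0], mu[1] + eps[1])
     - peak_clad_temperature(mu[0], mu[1] - eps[1])) / (2 * eps[1]),
])
print("Taylor:      mean %.1f K, std %.1f K"
      % (peak_clad_temperature(*mu), np.sqrt(np.sum((grad * sigma) ** 2))))

The Monte Carlo estimate also captures the skew introduced by the 1/conductance term, which the linearized estimate misses; that is the kind of difference that matters when the spread of a performance parameter is large.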
Method and codes for solving the optimization problem of initial material distribution and controlling of reactor during the run
Isakova, L.Ya.; Rachkova, D.A.; Vtorova, O.Yu.; Matekin, M.P.; Sobol, I.M.
The optimization problem of the initial distribution of fuel composition and the control of the reactor during the run is solved. The optimization problem is formulated as a multicriterial one with different types of constraints. The distinguishing feature of the proposed method is the systematic scanning of multidimensional regions, where the trial points in the space of parameters are the points of uniformly distributed LPτ sequences. The reactor computation is carried out by the four-group diffusion method in two-dimensional cylindrical geometry. The burnup absorbers are taken into account as additional absorption cross-sections, represented by approximants. The tables of trials make possible the estimation of the values of the global extrema. The coordinates of the points where the extremal values are attained can be estimated too.
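Sobol's LPτ points are available today in standard libraries (here scipy.stats.qmc, assuming SciPy 1.7 or later is installed); the sketch below scans a two-parameter box with uniformly distributed trial points and builds a small "table of trials". The objective is a placeholder quadratic, not a four-group diffusion calculation, and all bounds and names are illustrative.

import numpy as np
from scipy.stats import qmc        # Sobol (LP_tau) low-discrepancy sequences

def objective(x):
    # Placeholder criterion standing in for the reactor figure of merit.
    enrichment, absorber = x
    return (enrichment - 3.2) ** 2 + 0.5 * (absorber - 1.1) ** 2

lower = np.array([2.0, 0.0])       # parameter box (illustrative bounds)
upper = np.array([5.0, 2.0])

sampler = qmc.Sobol(d=2, scramble=False)
unit_points = sampler.random_base2(m=8)            # 2**8 = 256 trial points
points = qmc.scale(unit_points, lower, upper)      # map onto the box

values = np.array([objective(p) for p in points])
best = int(np.argmin(values))
print("table of trials:", len(points), "points")
print("estimated optimum near", points[best], "with value", values[best])

Because the trial points are uniformly distributed in the multidimensional box, the resulting table already gives usable estimates of the global extrema before any local refinement.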
Impact of e-publication changes in the International Code of Nomenclature for algae, fungi and plants (Melbourne Code, 2012) - did we need to "run for our lives"?
Nicolson, Nicky; Challis, Katherine; Tucker, Allan; Knapp, Sandra
At the Nomenclature Section of the XVIII International Botanical Congress in Melbourne, Australia (IBC), the botanical community voted to allow electronic publication of nomenclatural acts for algae, fungi and plants, and to abolish the rule requiring Latin descriptions or diagnoses for new taxa. Since the 1st January 2012, botanists have been able to publish new names in electronic journals and may use Latin or English as the language of description or diagnosis. Using data on vascular plants from the International Plant Names Index (IPNI) spanning the time period in which these changes occurred, we analysed trajectories in publication trends and assessed the impact of these new rules for descriptions of new species and nomenclatural acts. The data show that the ability to publish electronically has not "opened the floodgates" to an avalanche of sloppy nomenclature, but concomitantly neither has there been a massive expansion in the number of names published, nor of new authors and titles participating in publication of botanical nomenclature. The e-publication changes introduced in the Melbourne Code have gained acceptance, and botanists are using these new techniques to describe and publish their work. They have not, however, accelerated the rate of plant species description or participation in biodiversity discovery as was hoped.
SU-E-T-180: Fano Cavity Test of Proton Transport in Monte Carlo Codes Running On GPU and Xeon Phi
Sterpin, E; Sorriaux, J; Souris, K; Lee, J; Vynckier, S; Schuemann, J; Paganetti, H; Jia, X; Jiang, S
Purpose: In proton dose calculation, clinically compatible speeds are now achieved with Monte Carlo codes (MC) that combine 1) adequate simplifications in the physics of transport and 2) the use of hardware architectures enabling massive parallel computing (like GPUs). However, the uncertainties related to the transport algorithms used in these codes must be kept minimal. Such algorithms can be checked with the so-called "Fano cavity test". We implemented the test in two codes that run on specific hardware: gPMC on an nVidia GPU and MCsquare on an Intel Xeon Phi (60 cores). Methods: gPMC and MCsquare are designed for transporting protons in CT geometries. Both codes use the method of fictitious interaction to sample the step-length for each transport step. The considered geometry is a water cavity (2×2×0.2 cm³, 0.001 g/cm³) in a 10×10×50 cm³ water phantom (1 g/cm³). CPE in the cavity is established by generating protons over the phantom volume with a uniform momentum (energy E) and a uniform intensity per unit mass I. Assuming no nuclear reactions and no generation of other secondaries, the computed cavity dose should equal IE, according to Fano's theorem. Both codes were tested for initial proton energies of 50, 100, and 200 MeV. Results: For all energies, gPMC and MCsquare are within 0.3 and 0.2 % of the theoretical value IE, respectively (0.1% standard deviation). Single-precision computations (instead of double) increased the error by about 0.1% in MCsquare. Conclusion: Despite the simplifications in the physics of transport, both gPMC and MCsquare successfully pass the Fano test. This ensures optimal accuracy of the codes for clinical applications within the uncertainties on the underlying physical models. It also opens the path to other applications of these codes, like the simulation of ion chamber response.
Analysis, by RELAP5 code, of boron dilution phenomena in a mid-loop operation transient, performed in PKL III F2.1 RUN 1 test
Mascari, F.; Vella, G.; Del Nevo, A.; D'Auria, F.
The present paper deals with the post-test analysis and accuracy quantification of the test PKL III F2.1 RUN 1 by the RELAP5/Mod3.3 code, performed in the framework of the international OECD/SETH PKL III Project. PKL III is a full-height integral test facility (ITF) that models the entire primary system and most of the secondary system (except for turbine and condenser) of a pressurized water reactor of KWU design of the 1300-MW (electric) class on a scale of 1:145. The detailed design was based to the largest possible extent on the specific data of Philippsburg nuclear power plant, unit 2. As for test facilities of this size, the scaling concept aims to simulate the overall thermal-hydraulic behavior of the full-scale power plant [1]. The main purpose of the project is to investigate PWR safety issues related to boron dilution; in particular, this experiment investigates (a) the boron dilution issue during mid-loop operation and shutdown conditions, and (b) primary circuit accident management operations to prevent boron dilution as a consequence of loss of heat removal [2]. In this work the authors deal with a systematic procedure (developed at the University of Pisa) for code assessment and uncertainty qualification and its application to the RELAP5 system code. It is used to evaluate the capability of RELAP5 to reproduce the thermal hydraulics of an inadvertent boron dilution event in a PWR. The quantitative analysis has been performed adopting the Fast Fourier Transform Based Method (FFTBM), which has the capability to quantify the errors in code predictions as compared to the measured experimental signal. (author)
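The FFTBM boils the code-to-data comparison down to frequency-domain figures of merit; a commonly used one is the average amplitude AA = Σ|FFT(calc − exp)| / Σ|FFT(exp)|. The Python sketch below applies it to synthetic trends; the signals and numbers are invented for illustration and have nothing to do with the PKL III F2.1 data.

import numpy as np

def fft_average_amplitude(measured, calc):
    # FFTBM-style average amplitude: smaller AA means the calculation follows the data better.
    err_spectrum = np.abs(np.fft.rfft(calc - measured))
    ref_spectrum = np.abs(np.fft.rfft(measured))
    return float(err_spectrum.sum() / ref_spectrum.sum())

t = np.linspace(0.0, 2000.0, 2048)                         # synthetic transient time base
measured = 40.0 - 10.0 * np.exp(-t / 600.0) + 0.2 * np.sin(2 * np.pi * t / 150.0)
calc_good = measured + 0.1 * np.sin(2 * np.pi * t / 300.0)          # small deviation
calc_poor = measured + 3.0 * np.exp(-((t - 900.0) / 100.0) ** 2)    # missed local transient

print("AA (good):", round(fft_average_amplitude(measured, calc_good), 3))
print("AA (poor):", round(fft_average_amplitude(measured, calc_poor), 3))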
Running the running
Cabass, Giovanni; Di Valentino, Eleonora; Melchiorri, Alessandro; Pajer, Enrico; Silk, Joseph
We use the recent observations of Cosmic Microwave Background temperature and polarization anisotropies provided by the Planck satellite experiment to place constraints on the running $\alpha_\mathrm{s} = \mathrm{d}n_{\mathrm{s}} / \mathrm{d}\log k$ and the running of the running $\beta_{\mathrm{s}} = \mathrm{d}\alpha_{\mathrm{s}} / \mathrm{d}\log k$ of the spectral index $n_{\mathrm{s}}$ of primordial scalar fluctuations. We find $\alpha_\mathrm{s}=0.011\pm0.010$ and $\beta_\mathrm{s}=0.027\...
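For reference, the quantities quoted here come from the usual expansion of the primordial scalar power spectrum about a pivot scale $k_*$; a standard way of writing it (paraphrased from common usage, not quoted from the paper itself) is

\begin{equation}
P_\zeta(k) = A_\mathrm{s}\left(\frac{k}{k_*}\right)^{\,n_\mathrm{s}-1
+\frac{1}{2}\alpha_\mathrm{s}\log(k/k_*)
+\frac{1}{6}\beta_\mathrm{s}\log^{2}(k/k_*)},
\qquad
\alpha_\mathrm{s}=\frac{\mathrm{d}n_\mathrm{s}}{\mathrm{d}\log k},
\qquad
\beta_\mathrm{s}=\frac{\mathrm{d}\alpha_\mathrm{s}}{\mathrm{d}\log k},
\end{equation}

so that $n_\mathrm{s}-1$, $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ are the first, second and third derivatives of $\log P_\zeta$ with respect to $\log k$, evaluated at the pivot.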
Speaking Code
Cox, Geoff
Speaking Code begins by invoking the "Hello World� convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...
Liquidity Runs
Matta, R.; Perotti, E.
Can the risk of losses upon premature liquidation produce bank runs? We show how a unique run equilibrium driven by asset liquidity risk arises even under minimal fundamental risk. To study the role of illiquidity we introduce realistic norms on bank default, such that mandatory stay is triggered
Running Linux
Dalheimer, Matthias Kalle
The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.
RUN COORDINATION
Since the LHC ceased operations in February, a lot has been going on at Point 5, and Run Coordination continues to monitor closely the advance of maintenance and upgrade activities. In the last months, the Pixel detector was extracted and is now stored in the pixel lab in SX5; the beam pipe has been removed and ME1/1 removal has started. We regained access to the vactank and some work on the RBX of HB has started. Since mid-June, electricity and cooling are back in S1 and S2, allowing us to turn equipment back on, at least during the day. 24/7 shifts are not foreseen in the next weeks, and safety tours are mandatory to keep equipment on overnight, but re-commissioning activities are slowly being resumed. Given the (slight) delays accumulated in LS1, it was decided to merge the two global runs initially foreseen into a single exercise during the week of 4 November 2013. The aim of the global run is to check that we can run (parts of) CMS after several months switched off, with the new VME PCs installed, th...
The cross country running season has started well this autumn with two events: the traditional CERN Road Race organized by the Running Club, which took place on Tuesday 5th October, followed by the 'Cross Interentreprises', a team event at the Evaux Sports Center, which took place on Saturday 8th October. The participation at the CERN Road Race was slightly down on last year, with 65 runners, however the participants maintained the tradition of a competitive yet friendly atmosphere. An ample supply of refreshments before the prize giving was appreciated by all after the race. Many thanks to all the runners and volunteers who ensured another successful race. The results can be found here: https://espace.cern.ch/Running-Club/default.aspx CERN participated successfully at the cross interentreprises with very good results. The teams succeeded in obtaining 2nd and 6th place in the Mens category, and 2nd place in the Mixed category. Congratulations to all. See results here: http://www.c...
Christophe Delaere
The focus of Run Coordination during LS1 is to monitor closely the advance of maintenance and upgrade activities, to smooth interactions between subsystems and to ensure that all are ready in time to resume operations in 2015 with a fully calibrated and understood detector. After electricity and cooling were restored to all equipment, at about the time of the last CMS week, recommissioning activities were resumed for all subsystems. On 7 October, DCS shifts began 24/7 to allow subsystems to remain on to facilitate operations. That culminated with the Global Run in November (GriN), which took place as scheduled during the week of 4 November. The GriN has been the first centrally managed operation since the beginning of LS1, and involved all subdetectors but the Pixel Tracker presently in a lab upstairs. All nights were therefore dedicated to long stable runs with as many subdetectors as possible. Among the many achievements in that week, three items may be highlighted. First, the Strip...
M. Chamizo
On 17th January, as soon as the services were restored after the technical stop, sub-systems started powering on. Since then, we have been running 24/7 with reduced shift crew — Shift Leader and DCS shifter — to allow sub-detectors to perform calibration, noise studies, test software upgrades, etc. On 15th and 16th February, we had the first Mid-Week Global Run (MWGR) with the participation of most sub-systems. The aim was to bring CMS back to operation and to ensure that we could run after the winter shutdown. All sub-systems participated in the readout and the trigger was provided by a fraction of the muon systems (CSC and the central RPC wheel). The calorimeter triggers were not available due to work on the optical link system. Initial checks of different distributions from Pixels, Strips, and CSC confirmed things look all right (signal/noise, number of tracks, phi distribution…). High-rate tests were done to test the new CSC firmware to cure the low efficiency ...
G. Rakness.
After three years of running, in February 2013 the era of sub-10-TeV LHC collisions drew to an end. Recall, the 2012 run had been extended by about three months to achieve the full complement of high-energy and heavy-ion physics goals prior to the start of Long Shutdown 1 (LS1), which is now underway. The LHC performance during these exciting years was excellent, delivering a total of 23.3 fb–1 of proton-proton collisions at a centre-of-mass energy of 8 TeV, 6.2 fb–1 at 7 TeV, and 5.5 pb–1 at 2.76 TeV. They also delivered 170 μb–1 lead-lead collisions at 2.76 TeV/nucleon and 32 nb–1 proton-lead collisions at 5 TeV/nucleon. During these years the CMS operations teams and shift crews made tremendous strides to commission the detector, repeatedly stepping up to meet the challenges at every increase of instantaneous luminosity and energy. Although it does not fully cover the achievements of the teams, a way to quantify their success is the fact that that...
The 2010 edition of the annual CERN Road Race will be held on Wednesday 29th September at 18h. The 5.5km race takes place over 3 laps of a 1.8 km circuit in the West Area of the Meyrin site, and is open to everyone working at CERN and their families. There are runners of all speeds, with times ranging from under 17 to over 34 minutes, and the race is run on a handicap basis, by staggering the starting times so that (in theory) all runners finish together. Children (< 15 years) have their own race over 1 lap of 1.8km. As usual, there will be a "best family" challenge (judged on best parent + best child). Trophies are awarded in the usual men's, women's and veterans' categories, and there is a challenge for the best age/performance. Every adult will receive a souvenir prize, financed by a registration fee of 10 CHF. Children enter free (each child will receive a medal). More information, and the online entry form, can be found at http://cern.ch/club...
On Wednesday 14 March, the machine group successfully injected beams into LHC for the first time this year. Within 48 hours they managed to ramp the beams to 4 TeV and proceeded to squeeze to β*=0.6m, settings that are used routinely since then. This brought to an end the CMS Cosmic Run at ~Four Tesla (CRAFT), during which we collected 800k cosmic ray events with a track crossing the central Tracker. That sample has been since then topped up to two million, allowing further refinements of the Tracker Alignment. The LHC started delivering the first collisions on 5 April with two bunches colliding in CMS, giving a pile-up of ~27 interactions per crossing at the beginning of the fill. Since then the machine has increased the number of colliding bunches to reach 1380 bunches and peak instantaneous luminosities around 6.5E33 at the beginning of fills. The average bunch charges reached ~1.5E11 protons per bunch which results in an initial pile-up of ~30 interactions per crossing. During the ...
With the analysis of the first 5 fb–1 culminating in the announcement of the observation of a new particle with mass of around 126 GeV/c2, the CERN directorate decided to extend the LHC run until February 2013. This adds three months to the original schedule. Since then the LHC has continued to perform extremely well, and the total luminosity delivered so far this year is 22 fb–1. CMS also continues to perform excellently, recording data with efficiency higher than 95% for fills with the magnetic field at nominal value. The highest instantaneous luminosity achieved by LHC to date is 7.6x1033 cm–2s–1, which translates into 35 interactions per crossing. On the CMS side there has been a lot of work to handle these extreme conditions, such as a new DAQ computer farm and trigger menus to handle the pile-up, automation of recovery procedures to minimise the lost luminosity, better training for the shift crews, etc. We did suffer from a couple of infrastructure ...
EnergyPlus Run Time Analysis
Hong, Tianzhen; Buhl, Fred; Haves, Philip
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
DLLExternalCode
DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
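The workflow described above (write an input file, launch the external application, read its outputs back) can be illustrated with a minimal Python sketch. This is only a schematic of the concept; the executable name, file names and file formats below are placeholder assumptions, not part of DLLExternalCode itself.

    import subprocess

    def run_external_code(inputs, exe="external_app.exe",
                          in_file="inputs.txt", out_file="outputs.txt"):
        # Write the list of input values for the external application.
        with open(in_file, "w") as f:
            for value in inputs:
                f.write(f"{value}\n")
        # Run the external code and wait for it to finish.
        subprocess.run([exe, in_file, out_file], check=True)
        # Read back the outputs produced by the external application.
        with open(out_file) as f:
            return [float(line) for line in f]

    # Example: pass three inputs and collect the outputs.
    # outputs = run_external_code([1.0, 2.5, 0.3])

In the real coupling the same three steps (write inputs, run, read outputs) are driven by the instructions file that the DLL interprets.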
Code portability and data management considerations in the SAS3D LMFBR accident-analysis code
Dunn, F.E.
The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available
Dr. Sheehan on Running.
Sheehan, George A.
This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, there is information presented on the spiritual, psychological, and physiological results of running; treatment of athletic injuries resulting from running; effects of diet…
Running and osteoarthritis.
Willick, Stuart E; Hansen, Pamela A
The overall health benefits of cardiovascular exercise, such as running, are well established. However, it is also well established that in certain circumstances running can lead to overload injuries of muscle, tendon, and bone. In contrast, it has not been established that running leads to degeneration of articular cartilage, which is the hallmark of osteoarthritis. This article reviews the available literature on the association between running and osteoarthritis, with a focus on clinical epidemiologic studies. The preponderance of clinical reports refutes an association between running and osteoarthritis. Copyright 2010 Elsevier Inc. All rights reserved.
Code Cactus
Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
This code handles the following problems: -1) Analysis of thermal experiments on a water loop at high or low pressure; steady state or transient behavior; -2) Analysis of thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - Flowrate in parallel channels, coupled or not by conduction across plates, is given for imposed pressure-drop or flowrate conditions, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement (FLID), a one-channel, two-dimensional code. (authors) [French original, translated] This code handles the following problems: 1. Analysis of thermal tests on a water loop, at high or low pressure, in steady-state or transient regime; 2. Thermal and hydraulic studies of plate-type water reactors, at high or low pressure, with boiling permitted: - flow distribution between parallel channels, coupled or not by conduction across the plates, for imposed flowrate or pressure-drop conditions, variable or not with time; - the power can be coupled to the neutronics, and a schematic representation of the safety actions is provided. This code (Cactus), one-dimensional in space and multi-channel, is complemented by Flid, which treats the study of a single channel in two dimensions. (authors)
Electron run-away
Levinson, I.B.
The run-away effect of electrons for Coulomb scattering has been studied by Dreicer, but the question has not yet been studied for other scattering mechanisms. Meanwhile, if the scattering is quasielastic, a general criterion for the run-away may be formulated; in this case the influence of the run-away on the distribution function may also be studied in a somewhat general and qualitative manner. (Auth.)
Triathlon: running injuries.
Spiker, Andrea M; Dixit, Sameer; Cosgarea, Andrew J
The running portion of the triathlon represents the final leg of the competition and, by some reports, the most important part in determining a triathlete's overall success. Although most triathletes spend most of their training time on cycling, running injuries are the most common injuries encountered. Common causes of running injuries include overuse, lack of rest, and activities that aggravate biomechanical predisposers of specific injuries. We discuss the running-associated injuries in the hip, knee, lower leg, ankle, and foot of the triathlete, and the causes, presentation, evaluation, and treatment of each.
XSOR codes users manual
Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Overcoming the "Run" Response
Swanson, Patricia E.
Recent research suggests that it is not simply experiencing anxiety that affects mathematics performance but also how one responds to and regulates that anxiety (Lyons and Beilock 2011). Most people have faced mathematics problems that have triggered their "run response." The issue is not whether one wants to run, but rather…
Overuse injuries in running
Larsen, Lars Henrik; Rasmussen, Sten; Jørgensen, Jens Erik
What is an overuse injury in running? This question is a cornerstone of clinical documentation and research-based evidence...
PRECIS Runs at IITM
PRECIS Runs at IITM. Evaluation experiment using LBCs derived from ERA-15 (1979-93). Runs (3 ensembles in each experiment) already completed with LBCs having a length of 30 years each, for: Baseline (1961-90); A2 scenario (2071-2100); B2 scenario ...
The LHCb Run Control
Alessio, F; Callot, O; Duval, P-Y; Franek, B; Frank, M; Galli, D; Gaspar, C; v Herwijnen, E; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P
LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provid...
Symmetry in running.
Raibert, M H
Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
RUNNING INJURY DEVELOPMENT
Johansen, Karen Krogh; Hulme, Adam; Damsted, Camma
BACKGROUND: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. PURPOSE: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. METHODS: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which...... factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. RESULTS: A total of 68 athletes and 19 coaches were included...
Krogh Johansen, Karen; Hulme, Adam; Damsted, Camma
Background: Behavioral science methods have rarely been used in running injury research. Therefore, the attitudes amongst runners and their coaches regarding factors leading to running injuries warrants formal investigation. Purpose: To investigate the attitudes of middle- and long-distance runners...... able to compete in national championships and their coaches about factors associated with running injury development. Methods: A link to an online survey was distributed to middle- and long-distance runners and their coaches across 25 Danish Athletics Clubs. The main research question was: "Which...... factors do you believe influence the risk of running injuries?". In response to this question, the athletes and coaches had to click "Yes" or "No" to 19 predefined factors. In addition, they had the possibility to submit a free-text response. Results: A total of 68 athletes and 19 coaches were included...
runDM: Running couplings of Dark Matter to the Standard Model
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
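As an illustration of the kind of calculation involved, and not of the actual runDM interface (whose function names are not given in this abstract), running a vector of couplings from a high scale to a low scale with operator mixing can be sketched as a matrix exponential of an assumed, constant anomalous-dimension matrix:

    import numpy as np
    from scipy.linalg import expm

    def run_couplings(c_high, gamma, mu_high, mu_low):
        # Leading-log running with mixing:
        # c(mu_low) = exp(gamma * ln(mu_low / mu_high)) . c(mu_high)
        t = np.log(mu_low / mu_high)
        return expm(gamma * t) @ np.asarray(c_high)

    # Toy example with two mixing operators; the numbers are illustrative only.
    gamma = np.array([[0.00, 0.02],
                      [0.01, 0.00]])
    c_low = run_couplings([1.0, 0.0], gamma, mu_high=1000.0, mu_low=2.0)

The actual code performs this kind of evolution for all dimension-6 operators between the high scale and the low-energy scale relevant for direct detection.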
Alessio, F; Barandela, M C; Frank, M; Gaspar, C; Herwijnen, E v; Jacobsson, R; Jost, B; Neufeld, N; Sambade, A; Schwemmer, R; Somogyi, P [CERN, 1211 Geneva 23 (Switzerland); Callot, O [LAL, IN2P3/CNRS and Universite Paris 11, Orsay (France); Duval, P-Y [Centre de Physique des Particules de Marseille, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Franek, B [Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX (United Kingdom); Galli, D, E-mail: [email protected] [Universita di Bologna and INFN, Bologna (Italy)
LHCb has designed and implemented an integrated Experiment Control System. The Control System uses the same concepts and the same tools to control and monitor all parts of the experiment: the Data Acquisition System, the Timing and the Trigger Systems, the High Level Trigger Farm, the Detector Control System, the Experiment's Infrastructure and the interaction with the CERN Technical Services and the Accelerator. LHCb's Run Control, the main interface used by the experiment's operator, provides access in a hierarchical, coherent and homogeneous manner to all areas of the experiment and to all its sub-detectors. It allows for automated (or manual) configuration and control, including error recovery, of the full experiment in its different running modes. Different instances of the same Run Control interface are used by the various sub-detectors for their stand-alone activities: test runs, calibration runs, etc. The architecture and the tools used to build the control system, the guidelines and components provided to the developers, as well as the first experience with the usage of the Run Control will be presented
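Hierarchical run control of this kind is typically built from finite state machines in which a parent node propagates commands to its children and summarizes their states. The toy Python sketch below only illustrates that general idea and is not related to the actual LHCb implementation or its tools:

    class Node:
        def __init__(self, name, children=()):
            self.name = name
            self.children = list(children)
            self.state = "NOT_READY"

        def command(self, action):
            # Propagate the command down the hierarchy, then update own state.
            for child in self.children:
                child.command(action)
            self.state = {"configure": "READY", "start": "RUNNING",
                          "stop": "READY"}.get(action, self.state)

        def summary(self):
            # A parent is only as advanced as its least-advanced child.
            states = [child.summary() for child in self.children] + [self.state]
            order = ["NOT_READY", "READY", "RUNNING"]
            return min(states, key=order.index)

    run_control = Node("RunControl", [Node("DAQ"), Node("DCS")])
    run_control.command("configure")
    print(run_control.summary())  # READY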
Running Boot Camp
Toporek, Chuck
When Steve Jobs jumped on stage at Macworld San Francisco 2006 and announced the new Intel-based Macs, the question wasn't if, but when someone would figure out a hack to get Windows XP running on these new "Mactels." Enter Boot Camp, a new system utility that helps you partition and install Windows XP on your Intel Mac. Boot Camp does all the heavy lifting for you. You won't need to open the Terminal and hack on system files or wave a chicken bone over your iMac to get XP running. This free program makes it easy for anyone to turn their Mac into a dual-boot Windows/OS X machine. Running Bo
Fermilab DART run control
Oleynik, G.; Engelfried, J.; Mengel, L.
DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system
DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system
SASSYS LMFBR systems code
Dunn, F.E.; Prohammer, F.G.; Weber, D.P.
The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time
'Outrunning' the running ear
In even the most experienced hands, an adequate physical examination of the ears can be difficult to perform because of common problems such as cerumen blockage of the auditory canal, an uncooperative toddler or an exasperated parent. The most common cause for a running ear in a child is acute purulent otitis.
Towards advanced code simulators
Scriven, A.H.
The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5
Computer codes used in particle accelerator design: First edition
This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Given in each citation are the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym
A Mobile Application Prototype using Network Coding
Pedersen, Morten Videbæk; Heide, Janus; Fitzek, Frank
This paper looks into implementation details of network coding for a mobile application running on commercial mobile phones. We describe the necessary coding operations and algorithms that implement them. The coding algorithms form the basis for an implementation in C++ and Symbian C++. We report...
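As a rough illustration of the kind of coding operation such an implementation needs (the specific algorithms of the paper are not reproduced here), the sketch below encodes packets as random linear combinations over GF(2), i.e. XORs of byte strings:

    import random

    def encode(packets):
        # Random linear combination over GF(2): XOR a random subset of packets.
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[0] = 1  # avoid the useless all-zero combination
        size = max(len(p) for p in packets)
        coded = bytearray(size)
        for c, p in zip(coeffs, packets):
            if c:
                for i, byte in enumerate(p):
                    coded[i] ^= byte
        return coeffs, bytes(coded)

    packets = [b"hello world!", b"network code", b"mobile phone"]
    coefficients, coded_packet = encode(packets)
    # A receiver that collects enough independent (coefficients, coded packet)
    # pairs recovers the originals by Gaussian elimination over GF(2).

Practical implementations usually work over a larger field such as GF(2^8) and pay close attention to the cost of these operations on phone hardware.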
ALICE HLT Run 2 performance overview.
Krzewicki, Mikolaj; Lindenstruth, Volker; ALICE Collaboration
For the LHC Run 2 the ALICE HLT architecture was consolidated to comply with the upgraded ALICE detector readout technology. The software framework was optimized and extended to cope with the increased data load. Online calibration of the TPC using online tracking capabilities of the ALICE HLT was deployed. Offline calibration code was adapted to run both online and offline and the HLT framework was extended to support that. The performance of this schema is important for Run 3 related developments. An additional data transport approach was developed using the ZeroMQ library, forming at the same time a test bed for the new data flow model of the O2 system, where further development of this concept is ongoing. This messaging technology was used to implement the calibration feedback loop augmenting the existing, graph oriented HLT transport framework. Utilising the online reconstruction of many detectors, a new asynchronous monitoring scheme was developed to allow real-time monitoring of the physics performance of the ALICE detector, on top of the new messaging scheme for both internal and external communication. Spare computing resources comprising the production and development clusters are run as a tier-2 GRID site using an OpenStack-based setup. The development cluster is running continuously, the production cluster contributes resources opportunistically during periods of LHC inactivity.
Coding Partitions
Fabio Burderi
Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we also consider some relationships between coding partitions and varieties of codes.
Running economy and energy cost of running with backpacks.
Scheer, Volker; Cramer, Leoni; Heitkamp, Hans-Christian
Running is a popular recreational activity and additional weight is often carried in backpacks on longer runs. Our aim was to examine running economy and other physiological parameters while running with a 1 kg and 3 kg backpack at different submaximal running velocities. 10 male recreational runners (age 25 ± 4.2 years, VO2peak 60.5 ± 3.1 ml·kg-1·min-1) performed 5-minute runs on a motorized treadmill at three different submaximal speeds of 70, 80 and 90% of anaerobic lactate threshold (LT) without additional weight, and carrying a 1 kg and 3 kg backpack. Oxygen consumption, heart rate, lactate and RPE were measured and analysed. Oxygen consumption, energy cost of running and heart rate increased significantly while running with a backpack weighing 3 kg compared to running without additional weight at 80% of speed at lactate threshold (sLT) (p=0.026, p=0.009 and p=0.003) and at 90% sLT (p<0.001, p=0.001 and p=0.001). Running with a 1 kg backpack showed a significant increase in heart rate at 80% sLT (p=0.008) and a significant increase in oxygen consumption and heart rate at 90% sLT (p=0.045 and p=0.007) compared to running without additional weight. While running at 70% sLT, running economy and cardiovascular effort increased with weighted backpack running compared to running without additional weight; however, these increases did not reach statistical significance. Running economy deteriorates and cardiovascular effort increases while running with additional backpack weight, especially at higher submaximal running speeds. Backpack weight should therefore be kept to a minimum.
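For readers unfamiliar with the quantity, the energy cost of running reported in such studies is typically derived from steady-state oxygen uptake and running speed roughly as in the generic, textbook-style calculation below (the numbers and the caloric equivalent of oxygen are illustrative assumptions, not the authors' exact procedure):

    def energy_cost_of_running(vo2_ml_kg_min, speed_km_h, joules_per_ml_o2=20.9):
        # Convert oxygen uptake (ml O2/kg/min) and speed (km/h) into an
        # energy cost per kilogram and kilometre (kJ/kg/km).
        speed_km_min = speed_km_h / 60.0
        ml_o2_per_kg_km = vo2_ml_kg_min / speed_km_min
        return ml_o2_per_kg_km * joules_per_ml_o2 / 1000.0

    # Example: 45 ml/kg/min at 10.5 km/h gives roughly 5.4 kJ/kg/km.
    print(round(energy_cost_of_running(45.0, 10.5), 2))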
Ubuntu Up and Running
Nixon, Robin
Ubuntu for everyone! This popular Linux-based operating system is perfect for people with little technical background. It's simple to install, and easy to use -- with a strong focus on security. Ubuntu: Up and Running shows you the ins and outs of this system with a complete hands-on tour. You'll learn how Ubuntu works, how to quickly configure and maintain Ubuntu 10.04, and how to use this unique operating system for networking, business, and home entertainment. This book includes a DVD with the complete Ubuntu system and several specialized editions -- including the Mythbuntu multimedia re
ATLAS people can run!
Claudia Marcelloni de Oliveira; Pauline Gagnon
It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...
Underwater running device
Kogure, Sumio; Matsuo, Takashiro; Yoshida, Yoji
An underwater running device for an underwater inspection device for detecting inner surfaces of a reactor or a water vessel has an outer frame and an inner frame, and both of them are connected slidably by an air cylinder and connected rotatably by a shaft. The outer frame has four outer frame legs, and each of the outer frame legs is equipped with a sucker at the top end. The inner frame has four inner frame legs each equipped with a sucker at the top end. The outer frame legs and the inner frame legs are each connected with the outer frame and the inner frame by the air cylinder. The outer and the inner frame legs can be elevated or lowered (or extended or contracted) by the air cylinder. The sucker is connected with a jet pump-type negative pressure generator. The device can run and move by repeating attraction and releasing of the outer frame legs and the inner frame legs alternately while maintaining the posture of the inspection device stably. (I.N.)
TASS code topical report. V.1 TASS code technical manual
Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.
TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed using FORTRAN77 which makes it easy to install and port to different computer environments. The TASS code can be utilized for the steady state simulation as well as the non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, operator actions and the transients caused by the malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs
The design of the run Clever randomized trial: running volume, -intensity and running-related injuries.
Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik; Parner, Erik; Lind, Martin; Rasmussen, Sten
Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries show conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. The Run Clever trial is a randomized trial with a 24-week follow-up. Healthy recreational runners between 18 and 65 years and with an average of 1-3 running sessions per week the past 6 months are included. Participants are randomized into two intervention groups: Running schedule-I and Schedule-V. Schedule-I emphasizes a progression in running intensity by increasing the weekly volume of running at a hard pace, while Schedule-V emphasizes a progression in running volume, by increasing the weekly overall volume. Data on the running performed is collected by GPS. Participants who sustain running-related injuries are diagnosed by a diagnostic team of physiotherapists using standardized diagnostic criteria. The members of the diagnostic team are blinded. The study design, procedures and informed consent were approved by the Ethics Committee Northern Denmark Region (N-20140069). The Run Clever trial will provide insight into possible differences in injury risk between running schedules emphasizing either running intensity or running volume. The risk of sustaining volume- and intensity-related injuries will be compared in the two intervention groups using a competing
Barefoot running: biomechanics and implications for running injuries.
Altman, Allison R; Davis, Irene S
Despite the technological developments in modern running footwear, up to 79% of runners today get injured in a given year. As we evolved barefoot, examining this mode of running is insightful. Barefoot running encourages a forefoot strike pattern that is associated with a reduction in impact loading and stride length. Studies have shown a reduction in injuries to shod forefoot strikers as compared with rearfoot strikers. In addition to a forefoot strike pattern, barefoot running also affords the runner increased sensory feedback from the foot-ground contact, as well as increased energy storage in the arch. Minimal footwear is being used to mimic barefoot running, but it is not clear whether it truly does. The purpose of this article is to review current and past research on shod and barefoot/minimal footwear running and their implications for running injuries. Clearly more research is needed, and areas for future study are suggested.
Darlington up and running
Show, Don
We've built some of the largest and most successful generating stations in the world. Nonetheless, we cannot take our knowledge and understanding of the technology for granted. I do believe, though, that we are getting better, building safer, more efficient plants, and introducing significant improvements to our existing stations. Ontario Hydro is a large and technically rich organization. Even so, we realize that partnerships with others in the industry are absolutely vital. I am thinking particularly of Atomic Energy of Canada Limited. We enjoy a very close relationship with AECL, and their support was never more important than during the N/A Investigations. In recent years, we've strengthened our relationship with AECL considerably. For example, we recently signed an agreement with AECL, making available all of the Darlington 900 MWe design. Much of the cooperation between Ontario Hydro and AECL occurs through the CANDU Engineering Authority and the CANDU Owners Group (COG). These organizations are helping both of us to greatly improve cooperation and efficiency, and they are helping ensure we get the biggest return on our CANDU investments. COG also provides an important information network which links CANDU operators in Canada, here in Korea, Argentina, India, Pakistan and Romania. In many respects, it is helping to develop the strong partnerships to support CANDU technology worldwide. We all benefit in the long run from sharing information and resources
Backward running or absence of running from Creutz ratios
Giedt, Joel; Weinberg, Evan
We extract the running coupling based on Creutz ratios in SU(2) lattice gauge theory with two Dirac fermions in the adjoint representation. Depending on how the extrapolation to zero fermion mass is performed, either backward running or an absence of running is observed at strong bare coupling. This behavior is consistent with other findings which indicate that this theory has an infrared fixed point.
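For reference, the Creutz ratio from which such a coupling is extracted is built from rectangular Wilson-loop expectation values $W(R,T)$; the standard definition (background knowledge, not quoted from the paper) is

$$\chi(R,T) = -\ln\frac{W(R,T)\,W(R-1,T-1)}{W(R,T-1)\,W(R-1,T)},$$

which cancels perimeter and corner contributions of the loops and can be used to define a renormalized running coupling at the scale set by the loop size.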
Physiological demands of running during long distance runs and triathlons.
Hausswirth, C; Lehénaff, D
The aim of this review article is to identify the main metabolic factors which have an influence on the energy cost of running (Cr) during prolonged exercise runs and triathlons. This article proposes a physiological comparison of these 2 exercises and the relationship between running economy and performance. Many terms are used as the equivalent of 'running economy' such as 'oxygen cost', 'metabolic cost', 'energy cost of running', and 'oxygen consumption'. It has been suggested that these expressions may be defined by the rate of oxygen uptake (VO2) at a steady state (i.e. between 60 to 90% of maximal VO2) at a submaximal running speed. Endurance events such as triathlon or marathon running are known to modify biological constants of athletes and should have an influence on their running efficiency. The Cr appears to contribute to the variation found in distance running performance among runners of homogeneous level. This has been shown to be important in sports performance, especially in events like long distance running. In addition, many factors are known or hypothesised to influence Cr such as environmental conditions, participant specificity, and metabolic modifications (e.g. training status, fatigue). The decrease in running economy during a triathlon and/or a marathon could be largely linked to physiological factors such as the enhancement of core temperature and a lack of fluid balance. Moreover, the increase in circulating free fatty acids and glycerol at the end of these long exercise durations bear witness to the decrease in Cr values. The combination of these factors alters the Cr during exercise and hence could modify the athlete's performance in triathlons or a prolonged run.
Tristan code and its application
Nishikawa, K.-I.
Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including the simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http://www.physics.rutger.edu/~kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is suitable to start to run simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for Space Weather Program) is discussed.
Voluntary Wheel Running in Mice.
Goh, Jorming; Ladiges, Warren
Voluntary wheel running in the mouse is used to assess physical performance and endurance and to model exercise training as a way to enhance health. Wheel running is a voluntary activity in contrast to other experimental exercise models in mice, which rely on aversive stimuli to force active movement. This protocol consists of allowing mice to run freely on the open surface of a slanted, plastic saucer-shaped wheel placed inside a standard mouse cage. Rotations are electronically transmitted to a USB hub so that frequency and rate of running can be captured via a software program for data storage and analysis for variable time periods. Mice are individually housed so that accurate recordings can be made for each animal. Factors such as mouse strain, gender, age, and individual motivation, which affect running activity, must be considered in the design of experiments using voluntary wheel running. Copyright © 2015 John Wiley & Sons, Inc.
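A minimal sketch of how rotation counts captured from such a wheel can be turned into distance and speed is given below; the wheel circumference and sampling interval are illustrative assumptions, not values prescribed by the protocol:

    def running_summary(rotation_counts, circumference_m=0.36, interval_s=60):
        # rotation_counts: wheel rotations recorded in each sampling interval.
        distances = [n * circumference_m for n in rotation_counts]
        total_distance_m = sum(distances)
        speeds_m_per_min = [d / (interval_s / 60.0) for d in distances]
        return total_distance_m, max(speeds_m_per_min, default=0.0)

    counts_per_minute = [0, 12, 35, 80, 64, 5]
    total_m, peak_speed = running_summary(counts_per_minute)
    print(f"total: {total_m:.1f} m, peak speed: {peak_speed:.1f} m/min")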
Effective action and brane running
Brevik, Iver; Ghoroku, Kazuo; Yahiro, Masanobu
We address the renormalized effective action for a Randall-Sundrum brane running in 5D bulk space. The running behavior of the brane action is obtained by shifting the brane position without changing the background and fluctuations. After an appropriate renormalization, we obtain an effective, low energy brane world action, in which the effective 4D Planck mass is independent of the running position. We address some implications for this effective action
Asymmetric information and bank runs
Gu, Chao
It is known that sunspots can trigger panic-based bank runs and that the optimal banking contract can tolerate panic-based runs. The existing literature assumes that these sunspots are based on a publicly observed extrinsic randomizing device. In this paper, I extend the analysis of panic-based runs to include an asymmetric-information, extrinsic randomizing device. Depositors observe different, but correlated, signals on the stability of the bank. I find that if the signals that depositors o...
How to run 100 meters ?
Aftalion, Amandine
To appear in SIAP. The aim of this paper is to bring a mathematical justification to the optimal way of organizing one's effort when running. It is well known from physiologists that all running exercises of duration less than 3 min are run with a strong initial acceleration and a decelerating end; on the contrary, long races are run with a final sprint. This can be explained using a mathematical model describing the evolution of the velocity, the anaerobic energy, and the propulsive force: ...
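The model alluded to is in the spirit of Keller's classic optimal-running problem; a representative, simplified form of the governing equations (an assumption for illustration, not necessarily the exact system used by the author) is

$$\dot v(t) = f(t) - \frac{v(t)}{\tau}, \qquad \dot e(t) = \sigma - f(t)\,v(t), \qquad 0 \le f(t) \le f_{\max}, \qquad e(t) \ge 0,$$

where $v$ is the velocity, $f$ the propulsive force per unit mass, $\tau$ a friction time constant, $e$ the anaerobic energy reserve and $\sigma$ the rate at which energy is recreated; the runner chooses $f(t)$ to minimize the time needed to cover the race distance.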
A Running Start: Resource Guide for Youth Running Programs
Jenny, Seth; Becker, Andrew; Armstrong, Tess
The lack of physical activity is an epidemic problem among American youth today. In order to combat this, many schools are incorporating youth running programs as a part of their comprehensive school physical activity programs. These youth running programs are being implemented before or after school, at school during recess at the elementary…
Changes in Running Mechanics During a 6-Hour Running Race.
Giovanelli, Nicola; Taboga, Paolo; Lazzer, Stefano
To investigate changes in running mechanics during a 6-h running race. Twelve ultraendurance runners (age 41.9 ± 5.8 y, body mass 68.3 ± 12.6 kg, height 1.72 ± 0.09 m) were asked to run as many 874-m flat loops as possible in 6 h. Running speed, contact time (tc), and aerial time (ta) were measured in the first lap and every 30 ± 2 min during the race. Peak vertical ground-reaction force (Fmax), stride length (SL), vertical downward displacement of the center of mass (Δz), leg-length change (ΔL), vertical stiffness (kvert), and leg stiffness (kleg) were then estimated. Mean distance covered by the athletes during the race was 62.9 ± 7.9 km. Compared with the 1st lap, running speed decreased significantly from 4 h 30 min onward (mean -5.6% ± 0.3%, P < .05), while tc increased during running, reaching the maximum difference after 5 h 30 min (+6.1%, P = .015). Conversely, kvert decreased after 4 h, reaching the lowest value after 5 h 30 min (-6.5%, P = .008); ta and Fmax decreased after 4 h 30 min through to the end of the race (mean -29.2% and -5.1%, respectively, P < .05) during running, suggesting a possible time threshold that could affect performance regardless of absolute running speed.
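Stiffness estimates of this kind are commonly obtained from contact time, aerial time, speed, body mass and leg length using the sine-wave spring-mass model of Morin and colleagues; the sketch below shows that method as an illustration, and it is an assumption, not a statement from the abstract, that the study used this exact formulation:

    import math

    def spring_mass_estimates(mass, height, speed, t_contact, t_aerial):
        g = 9.81
        leg_length = 0.53 * height  # common anthropometric estimate
        f_max = mass * g * (math.pi / 2) * (t_aerial / t_contact + 1)
        delta_z = f_max * t_contact**2 / (mass * math.pi**2) - g * t_contact**2 / 8
        k_vert = f_max / delta_z    # vertical stiffness
        delta_l = (leg_length - math.sqrt(leg_length**2 - (speed * t_contact / 2) ** 2)
                   + delta_z)
        k_leg = f_max / delta_l     # leg stiffness
        return f_max, k_vert, k_leg

    # Example values typical of slow ultraendurance running (illustrative only).
    print(spring_mass_estimates(mass=68.3, height=1.72, speed=2.9,
                                t_contact=0.30, t_aerial=0.06))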
Coding Labour
Anthony McCosker
As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and 'prodromal' modes, is regulated and governed.
CDF run II run control and online monitor
Arisawa, T.; Ikado, K.; Badgett, W.; Chlebana, F.; Maeshima, K.; McCrory, E.; Meyer, A.; Patrick, J.; Wenzel, H.; Stadie, H.; Wagner, W.; Veramendi, G.
The authors discuss the CDF Run II Run Control and online event monitoring system. Run Control is the top level application that controls the data acquisition activities across 150 front end VME crates and related service processes. Run Control is a real-time multi-threaded application implemented in Java with flexible state machines, using JDBC database connections to configure clients, and including a user friendly and powerful graphical user interface. The CDF online event monitoring system consists of several parts: the event monitoring programs, the display to browse their results, the server program which communicates with the display via socket connections, the error receiver which displays error messages and communicates with Run Control, and the state manager which monitors the state of the monitor programs
TRAC code development status and plans
Spore, J.W.; Liles, D.R.; Nelson, R.A.
This report summarizes the characteristics and current status of the TRAC-PF1/MOD1 computer code. Recent error corrections and user-convenience features are described, and several user enhancements are identified. Current plans for the release of the TRAC-PF1/MOD2 computer code and some preliminary MOD2 results are presented. This new version of the TRAC code implements stability-enhancing two-step numerics into the 3-D vessel, using partial vectorization to obtain a code that has run 400% faster than the MOD1 code
VOA: a 2-d plasma physics code
Eltgroth, P.G.
A 2-dimensional relativistic plasma physics code was written and tested. The non-thermal components of the particle distribution functions are represented by expansion into moments in momentum space. These moments are computed directly from numerical equations. Currently three species are included - electrons, ions and ''beam electrons''. The computer code runs on either the 7600 or STAR machines at LLL. Both the physics and the operation of the code are discussed
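For readers unfamiliar with the technique, a moment expansion of this type represents the distribution function through momentum-space integrals of the generic form (a standard definition, supplied as background rather than taken from the code documentation)

$$n(\mathbf{x},t)=\int f\,d^{3}p,\qquad \mathbf{M}(\mathbf{x},t)=\int \mathbf{p}\,f\,d^{3}p,\qquad \mathsf{P}_{ij}(\mathbf{x},t)=\int p_{i}p_{j}\,f\,d^{3}p,$$

with the field equations closed by evolving a finite set of such moments instead of the full distribution function.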
Coding in pigeons: Multiple-coding versus single-code/default strategies.
Pinto, Carlos; Machado, Armando
To investigate the coding strategies that pigeons may use in a temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.
GAMERA - The New Magnetospheric Code
Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.
The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure the long term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere and magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.
Speech coding
Ravishankar, C., Hughes Network Systems, Germantown, MD
Speech is the predominant means of communication between human beings and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand digital transmission is relatively immune to noise, cross-talk and distortion primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
Optimal codes as Tanner codes with cyclic component codes
Høholdt, Tom; Pinero, Fernando; Zeng, Peng
In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
Aztheca Code
Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.
This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)
Vocable Code
Soon, Winnie; Cox, Geoff
A computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other is a mix of computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world's 'becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...
NSURE code
Rattan, D.S.
NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases
LFSC - Linac Feedback Simulation Code
Ivanov, Valentin; /Fermilab
The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulation of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format), and plain text files with numerical parameters, wake fields, ground motion data etc. The Matlab environment provides a flexible system for graphical output.
MED101: a laser-plasma simulation code. User guide
Rodgers, P.A.; Rose, S.J.; Rogoyski, A.M.
Complete details for running the 1-D laser-plasma simulation code MED101 are given including: an explanation of the input parameters, instructions for running on the Rutherford Appleton Laboratory IBM, Atlas Centre Cray X-MP and DEC VAX, and information on three new graphics packages. The code, based on the existing MEDUSA code, is capable of simulating a wide range of laser-produced plasma experiments including the calculation of X-ray laser gain. (author)
PLASMOR: A laser-plasma simulation code. Pt. 2
Salzman, D.; Krumbein, A.D.; Szichman, H.
This report supplements a previous one which describes the PLASMOR hydrodynamics code. The present report documents the recent changes and additions made in the code. In particular described are two new subroutines for radiative preheat, a system of preprocessors which prepare the code before run, a list of postprocessors which simulate experimental setups, and the basic data sets required to run PLASMOR. In the Appendix a new computer-based manual which lists the main features of PLASMOR is reproduced
The Aster code; Code Aster
Delbecq, J.M
The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricité de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)
Coding Class
Ejsing-Duun, Stine; Hansbøl, Mikala
This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association...... Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator for the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design......, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University, Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project from November 2016 to May 2017...
Uplink Coding
Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio
This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.
ANIMAL code
Lindemuth, I.R.
This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code. ANIMAL's physical model also appears. Formulated are temporal and spatial finite-difference equations in a manner that facilitates implementation of the algorithm. Outlined are the functions of the algorithm's FORTRAN subroutines and variables
Network Coding. K V Rashmi, Nihar B Shah, P Vijay Kumar. Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621
MCNP code
Cramer, S.N.
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendent of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each state of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
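As a generic illustration of the Monte Carlo transport style that MCNP embodies (and not MCNP itself), the following Python sketch estimates the uncollided transmission of photons through a uniform slab by sampling exponential free paths; the attenuation coefficient, slab thickness and number of histories are arbitrary assumptions.

import math
import random

def transmission_estimate(mu_total, thickness, histories=100000):
    # Fraction of sampled photons that cross the slab without a single collision.
    transmitted = 0
    for _ in range(histories):
        path = -math.log(1.0 - random.random()) / mu_total  # sampled free path
        if path > thickness:
            transmitted += 1
    return transmitted / histories

mu, t = 0.5, 2.0   # attenuation coefficient (1/cm) and slab thickness (cm), arbitrary
print(transmission_estimate(mu, t), math.exp(-mu * t))  # estimate vs exact exp(-mu*t)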
Expander Codes
Expander Codes - The Sipser–Spielman Construction. Priti Shankar. Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.
Running continuous academic adoption programmes
Nielsen, Tobias Alsted
Running successful academic adoption programmes requires executive support, clear strategies, tactical resources and organisational agility. These two presentations will discuss the implementation of strategic academic adoption programs down to very concrete tool customisations to meet specific...
Turkey Run Landfill Emissions Dataset
Data.gov (United States)
U.S. Environmental Protection Agency — landfill emissions measurements for the Turkey run landfill in Georgia. This dataset is associated with the following publication: De la Cruz, F., R. Green, G....
Phthalate SHEDS-HT runs
U.S. Environmental Protection Agency — Inputs and outputs for SHEDS-HT runs of DiNP, DEHP, DBP. This dataset is associated with the following publication: Moreau, M., J. Leonard, K. Phillips, J. Campbell,...
Panda code
Altomare, S.; Minton, G.
PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
CANAL code
Gara, P.; Martin, E.
The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr
The ZPIC educational code suite
Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.
Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, and many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging on our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as 1D electrostatic. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python, and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems that will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
Some optimizations of the animal code
Fletcher, W.T.
Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step
GOC: General Orbit Code
Maddox, L.B.; McNeilly, G.S.
GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run
Optimization of the particle pusher in a diode simulation code
Theimer, M.M.; Quintenz, J.P.
The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to affect this reduced run time are described. Various test cases are detailed
Multitasking the code ARC3D. [for computational fluid dynamics
Barton, John T.; Hsiung, Christopher C.
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
From concatenated codes to graph codes
Justesen, Jørn; Høholdt, Tom
We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...
Children's Fitness. Managing a Running Program.
Hinkle, J. Scott; Tuckman, Bruce W.
A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)
Running Improves Pattern Separation during Novel Object Recognition.
Bolz, Leoni; Heigele, Stefanie; Bischofberger, Josef
Running increases adult neurogenesis and improves pattern separation in various memory tasks including context fear conditioning or touch-screen based spatial learning. However, it is unknown whether pattern separation is improved in spontaneous behavior, not emotionally biased by positive or negative reinforcement. Here we investigated the effect of voluntary running on pattern separation during novel object recognition in mice using relatively similar or substantially different objects.We show that running increases hippocampal neurogenesis but does not affect object recognition memory with 1.5 h delay after sample phase. By contrast, at 24 h delay, running significantly improves recognition memory for similar objects, whereas highly different objects can be distinguished by both, running and sedentary mice. These data show that physical exercise improves pattern separation, independent of negative or positive reinforcement. In sedentary mice there is a pronounced temporal gradient for remembering object details. In running mice, however, increased neurogenesis improves hippocampal coding and temporally preserves distinction of novel objects from familiar ones.
Syndrome-source-coding and its universal generalization [error correcting codes for data compression]
Ancheta, T. C., Jr.
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
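A minimal Python sketch of the idea described above, using the parity-check matrix of the (7,4) Hamming code: a sparse 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome, and decompression returns the minimum-weight block with that syndrome. The code choice and block values are illustrative assumptions, not taken from the paper.

import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (column j is j written in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def compress(block):
    # The syndrome of the source block is the compressed representation.
    return H.dot(block) % 2

def decompress(syndrome):
    # Return the minimum-weight 7-bit block with the given syndrome.
    best = None
    for bits in itertools.product((0, 1), repeat=7):
        block = np.array(bits)
        if np.array_equal(H.dot(block) % 2, syndrome):
            if best is None or block.sum() < best.sum():
                best = block
    return best

source = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block (weight 1)
compressed = compress(source)              # 3 compressed digits instead of 7
print(compressed, decompress(compressed))  # the block is recovered exactly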
Barefoot running survey: Evidence from the field
David Hryvniak; Jay Dicharry; Robert Wilder
Background: Running is becoming an increasingly popular activity among Americans with over 50 million participants. Running shoe research and technology has continued to advance with no decrease in overall running injury rates. A growing group of runners are making the choice to try the minimal or barefoot running styles of the pre-modern running shoe era. There is some evidence of decreased forces and torques on the lower extremities with barefoot running, but no clear data regarding how thi...
Cloud Computing for Complex Performance Codes.
Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate complex codes, on several differently configured servers, could run and compute trivial small scale problems in a commercial cloud infrastructure. Phase 2 focused on proving non-trivial large scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
Two-Level Semantics and Code Generation
Nielson, Flemming; Nielson, Hanne Riis
A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps...... not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile...
PORPST: A statistical postprocessor for the PORMC computer code
Eslinger, P.W.; Didier, B.T.
This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs
Repetition code of 15 qubits
Wootton, James R.; Loss, Daniel
The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
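A classical Python sketch (not the quantum experiment itself) of why the logical error rate should fall with code distance: a distance-d repetition code under an independent bit-flip channel, decoded by majority vote, which for this code is equivalent to lookup-table decoding. The physical error rate and trial count are arbitrary assumptions.

import random

def logical_error_rate(distance, p_phys, trials=20000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_phys for _ in range(distance))
        if flips > distance // 2:   # a majority of bits flipped: the decoder is fooled
            failures += 1
    return failures / trials

for d in (3, 5, 7, 9, 11, 13, 15):
    print(d, logical_error_rate(d, p_phys=0.1))
# The failure rate decays roughly exponentially with the code distance d.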
The RETRAN-03 computer code
Paulsen, M.P.; McFadden, J.H.; Peterson, C.E.; McClure, J.A.; Gose, G.C.; Jensen, P.J.
The RETRAN-03 code development effort is designed to overcome the major theoretical and practical limitations associated with the RETRAN-02 computer code. The major objectives of the development program are to extend the range of analyses that can be performed with RETRAN, to make the code more dependable and faster running, and to have a more transportable code. The first two objectives are accomplished by developing new models and adding other models to the RETRAN-02 base code. The major model additions for RETRAN-03 are as follows: implicit solution methods for the steady-state and transient forms of the field equations; additional options for the velocity difference equation; a new steady-state initialization option for computing low-power steam generator initial conditions; models for nonequilibrium thermodynamic conditions; and several special-purpose models. The source code and the environmental library for RETRAN-03 are written in standard FORTRAN 77, which allows the last objective to be fulfilled. Some models in RETRAN-02 have been deleted in RETRAN-03. In this paper the changes between RETRAN-02 and RETRAN-03 are reviewed
Red light running camera assessment.
In the 2004-2007 period, the Mission Street SE and 25th Street SE intersection in Salem, Oregon showed relatively few crashes attributable to red light running (RLR) but, since a high number of RLR violations were observed, the intersection was ident...
Teaching Bank Runs through Films
Flynn, David T.
The author advocates the use of films to supplement textbook treatments of bank runs and panics in money and banking or general banking classes. Modern students, particularly those in developed countries, tend to be unfamiliar with potential fragilities of financial systems such as a lack of deposit insurance or other safety net mechanisms. Films…
Running and Breathing in Mammals
Bramble, Dennis M.; Carrier, David R.
Mechanical constraints appear to require that locomotion and breathing be synchronized in running mammals. Phase locking of limb and respiratory frequency has now been recorded during treadmill running in jackrabbits and during locomotion on solid ground in dogs, horses, and humans. Quadrupedal species normally synchronize the locomotor and respiratory cycles at a constant ratio of 1:1 (strides per breath) in both the trot and gallop. Human runners differ from quadrupeds in that while running they employ several phase-locked patterns (4:1, 3:1, 2:1, 1:1, 5:2, and 3:2), although a 2:1 coupling ratio appears to be favored. Even though the evolution of bipedal gait has reduced the mechanical constraints on respiration in man, thereby permitting greater flexibility in breathing pattern, it has seemingly not eliminated the need for the synchronization of respiration and body motion during sustained running. Flying birds have independently achieved phase-locked locomotor and respiratory cycles. This hints that strict locomotor-respiratory coupling may be a vital factor in the sustained aerobic exercise of endothermic vertebrates, especially those in which the stresses of locomotion tend to deform the thoracic complex.
Does Addiction Run in Families?
... news is that many children whose parents had drug problems don't become addicted when they grow up. The chances of addiction are higher, but it doesn't have to ...
Prediction of ROSA-III experiment Run 702
Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu.
The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine primary coolant thermal-hydraulic behavior and performance during a postulated loss-of-coolant accident of a BWR. The results provide information for verification and improvement of reactor safety analysis codes. Run 702 assumes a recirculation line double-ended break at the pump suction with average core power and no ECCS. Prediction of the Run 702 experiment was made with the computer code RELAP-4J. What determines the coolant behavior is the mixture level in the downcomer and the flowrates and flow directions at the jet pump drive flow nozzle, jet pump suction and discharge. These measurements are thus needed to compare predicted results with experimental ones. The liquid level formation model also needs improvement. (author)
Multiple running speed signals in medial entorhinal cortex
Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.
Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460
1995 and 1996 Upper Three Runs Dye Study Data Analyses
Chen, K.F.
This report presents an analysis of dye tracer studies conducted on Upper Three Runs. The revised STREAM code was used to analyze these studies and derive a stream velocity and a dispersion coefficient for use in aqueous transport models. These models will be used to facilitate the establishment of aqueous effluent limits and provide contaminant transport information to emergency management in the event of a release
Criticality codes migration to workstations at the Hanford site
Miller, E.M.
Westinghouse Hanford Company, Hanford Site Operations contractor, Richland, Washington, currently runs criticality codes on the Cray X-MP EA/232 computer but has recommended that US Department of Energy DOE-Richland replace the Cray with more economical workstations
Preventing Running Injuries through Barefoot Activity
Hart, Priscilla M.; Smith, Darla R.
Running has become a very popular lifetime physical activity even though there are numerous reports of running injuries. Although common theories have pointed to impact forces and overpronation as the main contributors to chronic running injuries, the increased use of cushioning and orthotics has done little to decrease running injuries. A new…
Running: Improving Form to Reduce Injuries.
Running is often perceived as a good option for "getting into shape," with little thought given to the form, or mechanics, of running. However, as many as 79% of all runners will sustain a running-related injury during any given year. If you are a runner-casual or serious-you should be aware that poor running mechanics may contribute to these injuries. A study published in the August 2015 issue of JOSPT reviewed the existing research to determine whether running mechanics could be improved, which could be important in treating running-related injuries and helping injured runners return to pain-free running.
Some neutronics and thermal-hydraulics codes for reactor analysis using personal computers
Woodruff, W.L.
Some neutronics and thermal-hydraulics codes formerly available only for main frame computers may now be run on personal computers. Brief descriptions of the codes are provided. Running times for some of the codes are compared for an assortment of personal and main frame computers. With some limitations in detail, personal computer versions of the codes can be used to solve many problems of interest in reactor analyses at very modest costs. 11 refs., 4 tabs
Run-off from roofs
Roed, J.
In order to find the run-off from roof material, a roof has been constructed with two different slopes (30 deg and 45 deg). Beryllium-7 and caesium-137 have been used as tracers. Considering new roof material, the pollution removed by runoff processes has been shown to be very different for various roof materials. The pollution is much more easily removed from silicon-treated material than from porous red-tile roof material. Caesium is removed more easily than beryllium. The content of caesium in old roof materials is greater in red-tile than in other less-porous materials. However, the measured removal from new material does not correspond to the amount accumulated in the old. This could be explained by weathering and by saturation effects. This last effect is probably the more important. The measurements on old material indicate a removal of 44-86% of the caesium pollution by run-off, whereas the measurement on new showed a removal of only 31-50%. It has been demonstrated that the pollution concentration in the run-off water could be very different from that in rainwater. The work was part of the EEC Radiation Protection Programme and done under a subcontract with Association Euratom-C.E.A. No. SC-014-BIO-F-423-DK(SD) under contract No. BIO-F-423-81-F. (author)
Better in the long run
CERN Bulletin
Last week, the Chamonix workshop once again proved its worth as a place where all the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues for the future of particle physics. The most important decision we reached last week is to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam). After that, we'll go into a long shutdown in which we'll do all the necessary work to allow us to reach the LHC's design collision energy of 14 TeV for the next run. This means that when beams go back into the LHC later this month, we'll be entering the longest phase of accelerator operation in CERN's history, scheduled to take us into summer or autumn 2011. What led us to this conclusion? Firstly, the LHC is unlike any previous CERN machine. Because it is a cryogenic facility, each run is accompanied by lengthy cool-down and warm-up phases. For that reason, CERN's traditional &...
LHC Report: Positive ion run!
Mike Lamont for the LHC Team
The current LHC ion run has been progressing very well. The first fill with 358 bunches per beam - the maximum number for the year - was on Tuesday, 15 November and was followed by an extended period of steady running. The quality of the beam delivered by the heavy-ion injector chain has been excellent, and this is reflected in both the peak and the integrated luminosity. The peak luminosity in ATLAS reached 5x10^26 cm^-2 s^-1, which is a factor of ~16 more than last year's peak of 3x10^25 cm^-2 s^-1. The integrated luminosity in each of ALICE, ATLAS and CMS is now around 100 inverse microbarn, already comfortably over the nominal target for the run. The polarity of the ALICE spectrometer and solenoid magnets was reversed on Monday, 28 November with the aim of delivering another sizeable amount of luminosity in this configuration. On the whole, the LHC has been behaving very well recently, ensuring good machine availability. On Monday evening, however, a faulty level sensor in the cooling towe...
GASIFICATION TEST RUN TC06
Southern Company Services, Inc.
This report discusses test campaign TC06 of the Kellogg Brown & Root, Inc. (KBR) Transport Reactor train with a Siemens Westinghouse Power Corporation (Siemens Westinghouse) particle filter system at the Power Systems Development Facility (PSDF) located in Wilsonville, Alabama. The Transport Reactor is an advanced circulating fluidized-bed reactor designed to operate as either a combustor or a gasifier using a particulate control device (PCD). The Transport Reactor was operated as a pressurized gasifier during TC06. Test run TC06 was started on July 4, 2001, and completed on September 24, 2001, with an interruption in service between July 25, 2001, and August 19, 2001, due to a filter element failure in the PCD caused by abnormal operating conditions while tuning the main air compressor. The reactor temperature was varied between 1,725 and 1,825 F at pressures from 190 to 230 psig. In TC06, 1,214 hours of solid circulation and 1,025 hours of coal feed were attained with 797 hours of coal feed after the filter element failure. Both reactor and PCD operations were stable during the test run with a stable baseline pressure drop. Due to its length and stability, the TC06 test run provided valuable data necessary to analyze long-term reactor operations and to identify necessary modifications to improve equipment and process performance as well as progressing the goal of many thousands of hours of filter element exposure.
Running jobs in the vacuum
McNab, A; Stagni, F; Garcia, M Ubeda
We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. For the experiment, these VMs appear to be produced spontaneously 'in the vacuum' rather having to ask the site to create each one. This model takes advantage of the existing pilot job frameworks adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory can decide which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gate keeper service, head node, or batch system accepting and then directing jobs to particular worker nodes, avoiding several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs developed by LHCb for Clouds.
Automatic coding method of the ACR Code
Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi
The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. The automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper and lower level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion of organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology
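A hypothetical Python sketch of the two-step lookup described above: an organ code is selected first, and its first digit then selects the matching pathology dictionary. The dictionary entries are placeholders rather than real ACR codes; only the sample output '131.3661' echoes the abstract.

# Placeholder dictionaries; real ACR dictionaries would come from the 11 files.
organ_codes = {"chest": "131", "abdomen": "7"}
pathology_files = {                 # one pathology dictionary per leading digit
    "1": {"pneumonia": ".3661"},
    "7": {"obstruction": ".143"},
}

def build_acr_code(organ_name, pathology_name):
    organ = organ_codes[organ_name]
    pathology_dict = pathology_files[organ[0]]   # chosen by the first digit
    return organ + pathology_dict[pathology_name]

print(build_acr_code("chest", "pneumonia"))      # -> 131.3661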
Error-correction coding
Hinds, Erold W. (Principal Investigator)
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code
Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.
A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A
Run Clever - No difference in risk of injury when comparing progression in running volume and running intensity in recreational runners
Ramskov, Daniel; Rasmussen, Sten; Sørensen, Henrik
Background/aim: The Run Clever trial investigated if there was a difference in injury occurrence across two running schedules, focusing on progression in volume of running intensity (Sch-I) or in total running volume (Sch-V). It was hypothesised that 15% more runners with a focus on progression...... in volume of running intensity would sustain an injury compared with runners with a focus on progression in total running volume. Methods: Healthy recreational runners were included and randomly allocated to Sch-I or Sch-V. In the first eight weeks of the 24-week follow-up, all participants (n=839) followed...... participants received real-time, individualised feedback on running intensity and running volume. The primary outcome was running-related injury (RRI). Results: After preconditioning a total of 80 runners sustained an RRI (Sch-I n=36/Sch-V n=44). The cumulative incidence proportion (CIP) in Sch-V (reference...
Dynamic Shannon Coding
Gagie, Travis
We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
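For background, a Python sketch of static Shannon coding, in which each symbol receives the first ceil(log2(1/p)) bits of the binary expansion of the cumulative probability of the more probable symbols before it; the paper's dynamic algorithm, which maintains such a code as the symbol statistics change, is not reproduced here. The example probabilities are assumptions.

import math

def shannon_code(probabilities):
    # probabilities: dict symbol -> p, assumed to sum to 1
    items = sorted(probabilities.items(), key=lambda kv: -kv[1])
    code, cumulative = {}, 0.0
    for symbol, p in items:
        length = max(1, math.ceil(-math.log2(p)))
        bits, frac = [], cumulative
        for _ in range(length):            # binary expansion of the cumulative prob.
            frac *= 2
            bit, frac = int(frac), frac - int(frac)
            bits.append(str(bit))
        code[symbol] = "".join(bits)
        cumulative += p
    return code

print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> a prefix-free code such as {'a': '0', 'b': '10', 'c': '110', 'd': '111'}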
Fundamentals of convolutional coding
Johannesson, Rolf
Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
Codes Over Hyperfields
Atamewoue Surdive
In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.
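The generator/parity-check description mirrors the classical case; the Python sketch below shows the familiar field version over GF(2) rather than a Krasner hyperfield: encoding with a generator matrix G and a membership test with a parity-check matrix H satisfying H G^T = 0. The particular matrices are illustrative assumptions.

import numpy as np

G = np.array([[1, 0, 0, 1, 1],     # generator matrix of a small [5,2] binary code
              [0, 1, 0, 1, 0]], dtype=int)
H = np.array([[0, 0, 1, 0, 0],     # parity-check matrix with H G^T = 0 (mod 2)
              [1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1]], dtype=int)
assert not (H.dot(G.T) % 2).any()

def encode(message):
    return message.dot(G) % 2

def is_codeword(word):
    return not (H.dot(word) % 2).any()

codeword = encode(np.array([1, 1]))
print(codeword, is_codeword(codeword))   # -> [1 1 0 0 1] True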
Comparison of sodium aerosol codes
Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.
Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)
Improved Algorithms Speed It Up for Codes
Hazi, A
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics
Symbol synchronization in convolutionally coded systems
Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.
Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
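A small Python sketch of alternate symbol inversion and of the transition-free run length it leaves: a constant encoder output gains transitions after inversion, while a strictly alternating stretch of encoder output becomes transition-free, which is the quantity the paper bounds for convolutional codes. The example bit patterns are assumptions.

def alternate_symbol_inversion(symbols):
    # Invert every second channel symbol before transmission.
    return [bit ^ (i % 2) for i, bit in enumerate(symbols)]

def longest_transition_free_run(symbols):
    longest = run = 1
    for previous, current in zip(symbols, symbols[1:]):
        run = run + 1 if current == previous else 1
        longest = max(longest, run)
    return longest

constant_output = [0] * 8          # 0 0 0 0 0 0 0 0
alternating_output = [0, 1] * 4    # 0 1 0 1 0 1 0 1
for stream in (constant_output, alternating_output):
    sent = alternate_symbol_inversion(stream)
    print(sent, longest_transition_free_run(sent))
# constant encoder output    -> transmitted 0 1 0 1 ... (runs of length 1)
# alternating encoder output -> transmitted 0 0 0 0 ... (one run of length 8)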
LHCb silicon detectors: the Run 1 to Run 2 transition and first experience of Run 2
Rinnert, Kurt
LHCb is a dedicated experiment to study New Physics in the decays of heavy hadrons at the Large Hadron Collider (LHC) at CERN. The detector includes a high precision tracking system consisting of a silicon-strip vertex detector (VELO) surrounding the pp interaction region, a large- area silicon-strip detector located upstream of a dipole magnet (TT), and three stations of silicon- strip detectors (IT) and straw drift tubes placed downstream (OT). The operational transition of the silicon detectors VELO, TT and IT from LHC Run 1 to Run 2 and first Run 2 experiences will be presented. During the long shutdown of the LHC the silicon detectors have been maintained in a safe state and operated regularly to validate changes in the control infrastructure, new operational procedures, updates to the alarm systems and monitoring software. In addition, there have been some infrastructure related challenges due to maintenance performed in the vicinity of the silicon detectors that will be discussed. The LHCb silicon dete...
Barefoot running: does it prevent injuries?
Murphy, Kelly; Curry, Emily J; Matzkin, Elizabeth G
Endurance running has evolved over the course of millions of years and it is now one of the most popular sports today. However, the risk of stress injury in distance runners is high because of the repetitive ground impact forces exerted. These injuries are not only detrimental to the runner, but also place a burden on the medical community. Preventative measures are essential to decrease the risk of injury within the sport. Common running injuries include patellofemoral pain syndrome, tibial stress fractures, plantar fasciitis, and Achilles tendonitis. Barefoot running, as opposed to shod running (with shoes), has recently received significant attention in both the media and the market place for the potential to promote the healing process, increase performance, and decrease injury rates. However, there is controversy over the use of barefoot running to decrease the overall risk of injury secondary to individual differences in lower extremity alignment, gait patterns, and running biomechanics. While barefoot running may benefit certain types of individuals, differences in running stance and individual biomechanics may actually increase injury risk when transitioning to barefoot running. The purpose of this article is to review the currently available clinical evidence on barefoot running and its effectiveness for preventing injury in the runner. Based on a review of current literature, barefoot running is not a substantiated preventative running measure to reduce injury rates in runners. However, barefoot running utility should be assessed on an athlete-specific basis to determine whether barefoot running will be beneficial.
HTML 5 up and running
Pilgrim, Mark
If you don't know about the new features available in HTML5, now's the time to find out. This book provides practical information about how and why the latest version of this markup language will significantly change the way you develop for the Web. HTML5 is still evolving, yet browsers such as Safari, Mozilla, Opera, and Chrome already support many of its features -- and mobile browsers are even farther ahead. HTML5: Up & Running carefully guides you though the important changes in this version with lots of hands-on examples, including markup, graphics, and screenshots. You'll learn how to
Inequality in the long run.
Piketty, Thomas; Saez, Emmanuel
This Review presents basic facts regarding the long-run evolution of income and wealth inequality in Europe and the United States. Income and wealth inequality was very high a century ago, particularly in Europe, but dropped dramatically in the first half of the 20th century. Income inequality has surged back in the United States since the 1970s so that the United States is much more unequal than Europe today. We discuss possible interpretations and lessons for the future. Copyright © 2014, American Association for the Advancement of Science.
Electroweak processes at Run 2
Spalla, Margherita; Sestini, Lorenzo
We present a summary of the studies of the electroweak sector of the Standard Model at LHC after the first year of data taking of Run2, focusing on possible results to be achieved with the analysis of full 2015 and 2016 data. We discuss the measurements of W and Z boson production, with particular attention to the precision determination of basic Standard Model parameters, and the study of multi-boson interactions through the analysis of boson-boson final states. This work is the result of the collaboration between scientists from the ATLAS, CMS and LHCb experiments.
Running gratings in photoconductive materials
Kukhtarev, N. V.; Kukhtareva, T.; Lyuksyutov, S. F.
Starting from the three-dimensional version of a standard photorefractive model (STPM), we obtain a reduced compact set of equations for an electric field based on the assumption of a quasi-steady-state fast recombination. The equations are suitable for evaluation of a current induced by running...... gratings in the small-contrast approximation and are also applicable for the description of space-charge wave domains. We discuss spatial domain and subharmonic beam formation in bismuth silicon oxide (BSO) crystals in the framework of the small-contrast approximation of STPM. The experimental results...
Google Wave Up and Running
Ferrate, Andres
Catch Google Wave, the revolutionary Internet protocol and web service that lets you communicate and collaborate in realtime. With this book, you'll understand how Google Wave integrates email, instant messaging (IM), wiki, and social networking functionality into a powerful and extensible platform. You'll also learn how to use its features, customize its functions, and build sophisticated extensions with Google Wave's open APIs and network protocol. Written for everyone -- from non-techies to ninja coders -- Google Wave: Up and Running provides a complete tour of this complex platform. You'
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
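A Python sketch of the scalar special case over GF(2): an intermediate node forms outgoing packets as linear combinations of its incoming packets, and a receiver that has collected linearly independent combinations inverts the coefficient matrix to recover the sources. In vector network coding the scalar coefficients become L x L matrices; the packets and coefficients below are arbitrary assumptions.

import numpy as np

source_packets = np.array([[1, 0, 1, 1, 0],    # packet A
                           [0, 1, 1, 0, 1]],   # packet B
                          dtype=int)

coefficients = np.array([[1, 0],               # combination 1 = A
                         [1, 1]], dtype=int)   # combination 2 = A + B
received = coefficients.dot(source_packets) % 2

# The receiver inverts the coefficient matrix over GF(2); this particular
# matrix happens to be its own inverse modulo 2.
inverse = np.array([[1, 0],
                    [1, 1]], dtype=int)
recovered = inverse.dot(received) % 2
print(np.array_equal(recovered, source_packets))   # True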
R-matrix analysis code (RAC)
Chen Zhenpeng; Qi Huiquan
A comprehensive R-matrix analysis code has been developed. It is based on the multichannel and multilevel R-matrix theory and runs on a VAX computer with FORTRAN-77. With this code many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the code RAC and the code EDA of LANL are made. The data show both codes produced the same calculation results when one set of R-matrix parameters was used. The differential cross section of 10B(n,α)7Li for En = 0.4 MeV and the polarization of 16O(n,n)16O for En = 2.56 MeV are presented
The PS locomotive runs again
Over forty years ago, the PS train entered service to steer the magnets of the accelerator into place... ... a service that was resumed last Tuesday. Left to right: Raymond Brown (CERN), Claude Tholomier (D.B.S.), Marcel Genolin (CERN), Gérard Saumade (D.B.S.), Ingo Ruehl (CERN), Olivier Carlier (D.B.S.), Patrick Poisot (D.B.S.), Christian Recour (D.B.S.). It is more than ten years since people at CERN heard the rumbling of the old PS train's steel wheels. Last Tuesday, the locomotive came back into service to be tested. It is nothing like the monstrous steel engines still running on conventional railways -just a small electric battery-driven vehicle employed on installing the magnets for the PS accelerator more than 40 years ago. To do so, it used the tracks that run round the accelerator. In fact, it is the grandfather of the LEP monorail. After PS was commissioned in 1959, the little train was used more and more rarely. This is because magnets never break down, or hardly ever! In fact, the loc...
(Nearly) portable PIC code for parallel computers
Decyk, V.K.
As part of the Numerical Tokamak Project, the author has developed a (nearly) portable, one-dimensional version of the GCPIC algorithm for particle-in-cell codes on parallel computers. This algorithm uses a spatial domain decomposition for the fields, and passes particles from one domain to another as the particles move spatially. With only minor changes, the code has been run in parallel on the Intel Delta, the Cray C-90, the IBM ES/9000 and a cluster of workstations. After a line by line translation into CM Fortran, the code was also run on the CM-200. Impressive speeds have been achieved, both on the Intel Delta and the Cray C-90, around 30 nanoseconds per particle per time step. In addition, the author was able to isolate the data management modules, so that the physics modules were not changed much from their sequential version, and the data management modules can be used as "black boxes."
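A toy one-dimensional Python sketch of the particle-passing step in a spatially domain-decomposed PIC code of the kind described above: after the push, particles whose new positions fall outside their domain's slab are handed to the owning neighbour. Plain lists stand in for the message passing, and the box size, domain count and particle numbers are arbitrary assumptions.

import random

BOX_LENGTH, N_DOMAINS = 1.0, 4
DOMAIN_WIDTH = BOX_LENGTH / N_DOMAINS

def owner(x):
    return int(x // DOMAIN_WIDTH) % N_DOMAINS

# Each domain holds a list of (position, velocity) particles it owns.
domains = [[] for _ in range(N_DOMAINS)]
for _ in range(100):
    x, v = random.random(), random.uniform(-0.05, 0.05)
    domains[owner(x)].append((x, v))

def push_and_exchange(domains, dt=1.0):
    outgoing = [[] for _ in range(N_DOMAINS)]
    for d, particles in enumerate(domains):
        kept = []
        for x, v in particles:
            x = (x + v * dt) % BOX_LENGTH            # free-streaming push
            (kept if owner(x) == d else outgoing[owner(x)]).append((x, v))
        domains[d] = kept
    for d in range(N_DOMAINS):                       # "receive" migrated particles
        domains[d].extend(outgoing[d])

push_and_exchange(domains)
assert all(owner(x) == d for d, ps in enumerate(domains) for x, _ in ps)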
Homological stabilizer codes
Anderson, Jonas T., E-mail: [email protected]
In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.
Effect of Minimalist Footwear on Running Efficiency
Gillinov, Stephen M.; Laux, Sara; Kuivila, Thomas; Hass, Daniel; Joy, Susan M.
Background: Although minimalist footwear is increasingly popular among runners, claims that minimalist footwear enhances running biomechanics and efficiency are controversial. Hypothesis: Minimalist and barefoot conditions improve running efficiency when compared with traditional running shoes. Study Design: Randomized crossover trial. Level of Evidence: Level 3. Methods: Fifteen experienced runners each completed three 90-second running trials on a treadmill, each trial performed in a different type of footwear: traditional running shoes with a heavily cushioned heel, minimalist running shoes with minimal heel cushioning, and barefoot (socked). High-speed photography was used to determine foot strike, ground contact time, knee angle, and stride cadence with each footwear type. Results: Runners had more rearfoot strikes in traditional shoes (87%) compared with minimalist shoes (67%) and socked (40%) (P = 0.03). Ground contact time was longest in traditional shoes (265.9 ± 10.9 ms) when compared with minimalist shoes (253.4 ± 11.2 ms) and socked (250.6 ± 16.2 ms) (P = 0.005). There was no difference between groups with respect to knee angle (P = 0.37) or stride cadence (P = 0.20). When comparing running socked to running with minimalist running shoes, there were no differences in measures of running efficiency. Conclusion: When compared with running in traditional, cushioned shoes, both barefoot (socked) running and minimalist running shoes produce greater running efficiency in some experienced runners, with a greater tendency toward a midfoot or forefoot strike and a shorter ground contact time. Minimalist shoes closely approximate socked running in the 4 measurements performed. Clinical Relevance: With regard to running efficiency and biomechanics, in some runners, barefoot (socked) and minimalist footwear are preferable to traditional running shoes. PMID:26131304
pTSC: Data file editing for the Tokamak Simulation Code
Meiss, J.D.
The code pTSC is an editor for the data files needed to run the Princeton Tokamak Simulation Code (TSC). pTSC utilizes the Macintosh interface to create a graphical environment for entering the data. As most of the data to run TSC consists of conductor positions, the graphical interface is especially appropriate
Running Parallel Discrete Event Simulators on Sierra
Barnes, P. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jefferson, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
In this proposal we consider porting the ROSS/Charm++ simulator and the discrete event models that run under its control so that they run on the Sierra architecture and make efficient use of the Volta GPUs.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
ATLAS inner detector: the Run 1 to Run 2 transition, and first experience from Run 2
Dobos, Daniel; The ATLAS collaboration
The ATLAS experiment is equipped with a tracking system, the Inner Detector, built using different technologies, silicon planar sensors (pixel and micro-strip) and gaseous drift-tubes, all embedded in a 2 T solenoidal magnetic field. For LHC Run II the system has been upgraded: taking advantage of the long shutdown, the Pixel Detector was extracted from the experiment and brought to the surface to equip it with new service quarter panels, to repair modules and to ease installation of the Insertable B-Layer (IBL), a fourth layer of pixel detectors installed in May 2014 between the existing Pixel Detector and a new smaller-radius beam pipe, at a radius of 3.3 cm from the beam axis. To cope with the high radiation and pixel occupancy due to the proximity to the interaction point and the increase of luminosity that the LHC will face in Run 2, a new read-out chip in 130 nm CMOS and two different silicon pixel sensor technologies (planar and 3D) have been developed. SCT and TRT systems consolidation was also carri...
Adding run history to CLIPS
Tuttle, Sharon M.; Eick, Christoph F.
To debug a C Language Integrated Production System (CLIPS) program, certain 'historical' information about a run is needed. It would be convenient for system builders to have the capability to request such information. We will discuss how historical Rete networks can be used for answering questions that help a system builder detect the cause of an error in a CLIPS program. Moreover, the cost of maintaining a historical Rete network is compared with that for a classical Rete network. We will demonstrate that the cost for assertions is only slightly higher for a historical Rete network. The cost for handling retraction could be significantly higher; however, we will show that by using special data structures that rely on hashing, it is also possible to implement retractions efficiently.
Injecting Artificial Memory Errors Into a Running Computer Program
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
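To make the fault-rate parameterization concrete, here is a minimal Python sketch of probabilistic bit-flip injection into a buffer. It is not BITFLIPS itself (which instruments a running process through Valgrind); the function name, the per-byte Bernoulli model and the numbers in the example are illustrative assumptions only.

```python
import random

def inject_seus(memory: bytearray, fault_rate: float, exposure_s: float,
                rng: random.Random = random.Random(0)) -> list[tuple[int, int]]:
    """Flip random bits in `memory` with probability fault_rate * exposure_s
    per byte (fault_rate in SEUs per byte per second). Returns a log of
    (byte_index, bit_index) flips, mimicking the idea of logging each SEU."""
    p = min(1.0, fault_rate * exposure_s)  # per-byte upset probability
    log = []
    for i in range(len(memory)):
        if rng.random() < p:
            bit = rng.randrange(8)
            memory[i] ^= 1 << bit          # single-event upset: one bit flip
            log.append((i, bit))
    return log

# Example: 1 KiB of zeroed "memory", 1e-4 SEUs per byte per second, 60 s exposure
mem = bytearray(1024)
flips = inject_seus(mem, fault_rate=1e-4, exposure_s=60.0)
print(f"injected {len(flips)} upsets")
```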
Robotic Bipedal Running : Increasing disturbance rejection
Karssen, J.G.D.
The goal of the research presented in this thesis is to increase the understanding of the human running gait. The understanding of the human running gait is essential for the development of devices, such as prostheses and orthoses, that enable disabled people to run or that enable able people to
David Hryvniak
Conclusion: Prior studies have found that barefoot running often changes biomechanics compared to shod running with a hypothesized relationship of decreased injuries. This paper reports the result of a survey of 509 runners. The results suggest that a large percentage of this sample of runners experienced benefits or no serious harm from transitioning to barefoot or minimal shoe running.
Age and sex influences on running mechanics and coordination variability.
Boyer, Katherine A; Freedman Silvernail, Julia; Hamill, Joseph
The purpose of this study was to examine the impact of age on running mechanics separately for male and female runners and to quantify sex differences in running mechanics and coordination variability for older runners. Kinematics and kinetics were captured for 20 younger (10 male) and 20 older (10 male) adults running overground at 3.5 m·s-1. A modified vector coding technique was used to calculate segment coordination variability. Lower extremity joint angles, moments and segment coordination variability were compared between age and sex groups. Significant sex-age interaction effects were found for heel-strike hip flexion and ankle in/eversion angles and peak ankle dorsiflexion angle. In older adults, mid-stance knee flexion angle, ankle inversion and abduction moments and hip abduction and external rotation moments differed by sex. Older compared with younger females had reduced coordination variability in the thigh-shank transverse plane couple but greater coordination variability for the shank rotation-foot eversion couple in early stance. These results suggest there may be a non-equivalent aging process in the movement mechanics for males and females. The age and sex differences in running mechanics and coordination variability highlight the need for sex-based analyses for future studies examining injury risk with age.
Contribution to numerical and mechanical modelling of pellet-cladding interaction in nuclear reactor fuel rod
Retel, V.
Pressurised water reactor (PWR) fuel rods are the site of nuclear fission, which produces unstable, radioactive elements. The mechanical loading on the cladding is increasingly severe and is partly due to fuel pellet movement, so the mechanical behaviour of the cladding must be simulated with models that give realistic stress and strain fields for all running conditions. The mechanical treatment of the fuel pellet also needs to be improved. The study is part of a global effort to improve the treatment of pellet-cladding interaction (PCI) in the 1D finite element EDF code named CYRANO3. Non-axisymmetrical, multidirectional effects have to be accounted for in a context of unidirectional axisymmetrical finite elements. The aim of this work is twofold. First, a model simulating the effect of stress concentration on the cladding, due to the opening of the radial cracks of the fuel, was added to the code. Second, the fragmented state of the fuel material was taken into account in the thermomechanical calculation, through a model that simulates the strain and stress relaxation in the pellet caused by fragmentation. This model has been implemented in the code for two types of fuel behaviour: elastic and viscoplastic. (author)
Particle In Cell Codes on Highly Parallel Architectures
Tableman, Adam
We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
Vector Network Coding
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...
Entropy Coding in HEVC
Sze, Vivienne; Marpe, Detlev
Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...
Generalized concatenated quantum codes
Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei
We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
Rateless feedback codes
Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip
This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
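For readers unfamiliar with LT codes, the sketch below shows the basic encoding step that a degree distribution controls. It implements plain LT encoding only (the paper's feedback-adapted distributions are not reproduced here), and the toy distribution and block sizes are made up for illustration.

```python
import random

def lt_encode_symbol(source: list[bytes], degree_dist: list[float],
                     rng: random.Random) -> tuple[list[int], bytes]:
    """Produce one LT-coded symbol: draw a degree d from degree_dist
    (degree_dist[d-1] = P(degree = d)), choose d distinct source symbols,
    and XOR them together. Returns (chosen indices, coded payload)."""
    degrees = list(range(1, len(degree_dist) + 1))
    d = rng.choices(degrees, weights=degree_dist, k=1)[0]
    idx = rng.sample(range(len(source)), d)
    payload = bytes(len(source[0]))
    for i in idx:
        payload = bytes(a ^ b for a, b in zip(payload, source[i]))
    return idx, payload

rng = random.Random(42)
blocks = [bytes([i] * 4) for i in range(8)]           # 8 source symbols of 4 bytes
dist = [0.2, 0.4, 0.2, 0.1, 0.1, 0.0, 0.0, 0.0]       # toy degree distribution
print(lt_encode_symbol(blocks, dist, rng))
```

A decoder collects such symbols and peels degree-1 symbols iteratively; feedback, as studied in the paper, lets the sender adapt the distribution to what the receiver has already recovered.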
Advanced video coding systems
Gao, Wen
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Coding for dummies
Abraham, Nikhil
Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill
Mathematical analysis of running performance and world running records.
Péronnet, F; Thibault, G
The objective of this study was to develop an empirical model relating human running performance to some characteristics of metabolic energy-yielding processes using A, the capacity of anaerobic metabolism (J/kg); MAP, the maximal aerobic power (W/kg); and E, the reduction in peak aerobic power with the natural logarithm of race duration T, when T > TMAP = 420 s. Accordingly, the model developed describes the average power output $P_T$ (W/kg) sustained over any T as $P_T = \frac{S}{T}\left(1 - e^{-T/k_2}\right) + \frac{1}{T}\int_0^T \left[\mathrm{BMR} + B\left(1 - e^{-t/k_1}\right)\right]dt$, where S = A and B = MAP - BMR (basal metabolic rate) when T < TMAP; and S = A + [A f ln(T/TMAP)] and B = (MAP - BMR) + [E ln(T/TMAP)] when T > TMAP; k1 = 30 s and k2 = 20 s are time constants describing the kinetics of aerobic and anaerobic metabolism, respectively, at the beginning of exercise; f is a constant describing the reduction in the amount of energy provided from anaerobic metabolism with increasing T; and t is the time from the onset of the race. This model accurately estimates actual power outputs sustained over a wide range of events, e.g., average absolute error between actual and estimated T for men's 1987 world records from 60 m to the marathon = 0.73%. In addition, satisfactory estimations of the metabolic characteristics of world-class male runners were made as follows: A = 1,658 J/kg; MAP = 83.5 ml O2·kg^-1·min^-1; 83.5% MAP sustained over the marathon distance. Application of the model to analysis of the evolution of A, MAP, and E, and of the progression of men's and women's world records over the years, is presented.
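The time integral in the model has a closed form, so the mean power can be evaluated directly; the short Python sketch below does that. The parameter values in the example call (MAP expressed in W/kg, and the E and f values) are placeholders for illustration, not the fitted values from the study.

```python
import math

# Time constants from the abstract (seconds)
K1, K2, T_MAP = 30.0, 20.0, 420.0

def mean_power(T, A, MAP, BMR, E, f):
    """Average power output P_T (W/kg) sustainable over a race of duration T (s),
    per the model above. The integral over [0, T] is evaluated analytically:
    int_0^T [BMR + B(1 - e^(-t/k1))] dt = BMR*T + B*(T - k1*(1 - e^(-T/k1)))."""
    if T <= T_MAP:
        S, B = A, MAP - BMR
    else:
        S = A + A * f * math.log(T / T_MAP)
        B = (MAP - BMR) + E * math.log(T / T_MAP)
    anaerobic = (S / T) * (1.0 - math.exp(-T / K2))
    aerobic = BMR + B * (1.0 - (K1 / T) * (1.0 - math.exp(-T / K1)))
    return anaerobic + aerobic

# Illustrative (not fitted) parameters: A in J/kg, MAP/BMR/E in W/kg, f dimensionless
print(mean_power(T=600.0, A=1658.0, MAP=24.0, BMR=1.2, E=-1.0, f=-0.2))
```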
Full core reactor analysis: Running Denovo on Jaguar
Jarrell, J. J.; Godfrey, A. T.; Evans, T. M.; Davidson, G. G. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831 (United States)
Fully-consistent, full-core, 3D, deterministic neutron transport simulations using the orthogonal mesh code Denovo were run on the massively parallel computing architecture Jaguar XT5. Using energy and spatial parallelization schemes, Denovo was able to efficiently scale to more than 160 k processors. Cell-homogenized cross sections were used with step-characteristics, linear-discontinuous finite element, and trilinear-discontinuous finite element spatial methods. It was determined that using the finite element methods gave considerably more accurate eigenvalue solutions for large-aspect ratio meshes than using step-characteristics. (authors)
First LQCD Physics Runs with MILC and P4RHMC
Soltz, R; Gupta, R
An initial series of physics LQCD runs was submitted to the BG/L science bank with the MILC and p4rhmc codes. Both runs were for lattice dimensions of 32^2 x 8. The p4 calculation was performed with v2.0 QMP_MPI.X (semi-optimized p4 code using QMP over MPI) and MILC v7.2, also using RHMC, but not specifically optimized for BlueGene. Calculations were performed along lines of constant physics, with the light quark masses 2-3 times their physical values and the strange quark mass set by m_ud = 0.1 m_s. Job submission was performed using the standard MILC and p4 scripts provided on the ubgl cluster. Initial thermalized lattices for each code were also provided in this way. The only modifications for running on BG/L were to the directory names and the mT parameter which determines job durations (24 hrs on BG/L vs. 4 hrs on ubgl). The MILC scripts were set to resubmit themselves 10 times, and the p4 scripts were submitted serially using the ''psub -d'' job dependency option. The runp4rhmc.tcsh script could not be used to resubmit due to the 30m time limit imposed on interactive jobs. Most jobs were submitted to the smallest, 512-node partitions, but both codes could also run on the 1024-node partitions with a gain of only 30-50%. The majority of jobs ran without error. Stalled jobs were often indicative of a communication gap within a partition that LC was able to fix quickly. On some occasions a zero-length lattice file was deleted to allow jobs to restart successfully. Approximately 1000 trajectories were calculated for each beta value (see Table). The analysis was performed with the standard analysis scripts for each code, make_summary.pl for MILC and analysis.tcsh for p4rhmc. All lattices, log files, and job submission scripts have been archived to permanent storage for subsequent analysis
Progression in Running Intensity or Running Volume and the Development of Specific Injuries in Recreational Runners
-training. Participants were randomized to one of two running schedules: Schedule Intensity (Sch-I) or Schedule Volume (Sch-V). Sch-I progressed the amount of high-intensity running (≥88% VO2max) each week. Sch-V progressed total weekly running volume. Global positioning system watch or smartphone collected data on running...
Running Club - Nocturne des Evaux
CERN's runners once again climbed to the top steps of the podium at the inter-company race. This night-time team race, run in teams of 3 to 4 runners, is unique in the region for its originality: a grouped start every 30 seconds, and the first 3 runners must cross the finish line together. A double victory for the Running Club at the Nocturne! 1st place for the women's team and 22nd overall; 1st place for the mixed team and 4th overall, beating the mixed-team record for the event by about 1 minute in the process; 10th place for the men's team. Find all the results at http://www.chp-geneve.ch/web-cms/index.php/nocturne-des-evaux
LHCf completes its first run
LHCf, one of the three smaller experiments at the LHC, has completed its first run. The detectors were removed last week and the analysis of data is continuing. The first results will be ready by the end of the year. One of the two LHCf detectors during the removal operations inside the LHC tunnel. LHCf is made up of two independent detectors located in the tunnel 140 m either side of the ATLAS collision point. The experiment studies the secondary particles created during the head-on collisions in the LHC because they are similar to those created in a cosmic ray shower produced when a cosmic particle hits the Earth's atmosphere. The focus of the experiment is to compare the various shower models used to estimate the primary energy of ultra-high-energy cosmic rays. The energy of proton-proton collisions at the LHC will be equivalent to a cosmic ray of 10^17 eV hitting the atmosphere, very close to the highest energies observed in the sky. "We have now completed the fir...
Daytime Running Lights. Public Consultation
The Road Safety Authority is considering the policy options available to promote the use of Daytime Running Lights (DRL), including the possibility of mandating the use of DRL on all vehicles. An EC Directive would make DRL mandatory for new vehicles from 2011 onwards and by 2024 it is predicted that due to the natural replacement of the national fleet, almost all vehicles would be equipped with DRL. The RSA is inviting views on introducing DRL measures earlier, whereby all road vehicles would be required to use either dipped head lights during hours of daylight or dedicated DRL from next year onwards. The use of DRL has been found to enhance the visibility of vehicles, thereby increasing road safety by reducing the number and severity of collisions. This paper explores the benefits of DRL and the implications for all road users including pedestrians, cyclists and motorcyclists. In order to ensure a comprehensive consideration of all the issues, the Road Safety Authority is seeking the views and advice of interested parties.
Discussion on LDPC Codes and Uplink Coding
This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.
ETF system code: composition and applications
Reid, R.L.; Wu, K.F.
A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system
RCS modeling with the TSAR FDTD code
Pennock, S.T.; Ray, S.L.
The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988, with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular, a family of near-to-far field transformation routines have been added to the codes, enabling TSAR to be used for radar cross-section and antenna analysis problems.
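As a reminder of what the FDTD method at the core of such codes does, here is a minimal 1D free-space leapfrog update in Python. The grid size, time-step count, Courant number and Gaussian source are arbitrary illustrative choices and have nothing to do with TSAR's actual implementation.

```python
import numpy as np

# Minimal 1D free-space FDTD loop (normalized units, Courant number 0.5):
# illustrates the staggered leapfrog E/H update at the heart of any FDTD solver.
nx, nt, courant = 200, 400, 0.5
ez = np.zeros(nx)          # electric field on integer grid points
hy = np.zeros(nx - 1)      # magnetic field, staggered half a cell
for n in range(nt):
    hy += courant * np.diff(ez)            # update H from the curl of E
    ez[1:-1] += courant * np.diff(hy)      # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source
print("peak |Ez| after run:", float(np.abs(ez).max()))
```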
Impact Accelerations of Barefoot and Shod Running.
Thompson, M; Seegmiller, J; McGowan, C P
During the ground contact phase of running, the body's mass is rapidly decelerated resulting in forces that propagate through the musculoskeletal system. The repetitive attenuation of these impact forces is thought to contribute to overuse injuries. Modern running shoes are designed to reduce impact forces, with the goal to minimize running related overuse injuries. Additionally, the fore/mid foot strike pattern that is adopted by most individuals when running barefoot may reduce impact force transmission. The aim of the present study was to compare the effects of the barefoot running form (fore/mid foot strike & decreased stride length) and running shoes on running kinetics and impact accelerations. 10 healthy, physically active, heel strike runners ran in 3 conditions: shod, barefoot and barefoot while heel striking, during which 3-dimensional motion analysis, ground reaction force and accelerometer data were collected. Shod running was associated with increased ground reaction force and impact peak magnitudes, but decreased impact accelerations, suggesting that the midsole of running shoes helps to attenuate impact forces. Barefoot running exhibited a similar decrease in impact accelerations, as well as decreased impact peak magnitude, which appears to be due to a decrease in stride length and/or a more plantarflexed position at ground contact. © Georg Thieme Verlag KG Stuttgart · New York.
Locally orderless registration code
This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64 bit on Mac, Linux and Windows.
Decoding Codes on Graphs
Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
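The role of the parity-check matrix can be seen in a few lines of Python. The small matrix below is the (7,4) Hamming check matrix, used only as a stand-in; real LDPC codes use much larger, sparse matrices and iterative message-passing decoders rather than the syndrome lookup shown here.

```python
import numpy as np

# A received word r is a valid codeword iff its syndrome H.r (mod 2) is zero;
# a nonzero syndrome points to the error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def syndrome(r: np.ndarray) -> np.ndarray:
    return (H @ r) % 2

c = np.array([1, 1, 1, 0, 0, 0, 0], dtype=np.uint8)   # a valid codeword
print(syndrome(c))   # [0 0 0] -> accepted

r = c.copy()
r[4] ^= 1            # single bit error in position 5
print(syndrome(r))   # [1 0 1] = 5 in binary (least-significant row first),
                     # i.e. the syndrome locates the flipped bit
```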
Manually operated coded switch
Barnette, J.H.
The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.
Coding in Muscle Disease.
Jones, Lyell K; Ney, John P
Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.
An Auto sequence Code to Integrate a Neutron Unfolding Code with thePC-MCA Accuspec
Darsono
In neutron spectrometry using the proton recoil method, a neutron unfolding code is needed to unfold the measured proton spectrum into the neutron spectrum. In the existing neutron spectrometry system, which was successfully installed last year, the unfolding was done separately. This manuscript reports that an auto sequence code integrating the neutron unfolding code UNFSPEC.EXE with the software facility of the PC-MCA Accuspec has been written and run successfully, so that the new neutron spectrometry system has become compact. The auto sequence code was written following the rules of the application program facility of the PC-MCA Accuspec and was then compiled using AC-EXE. Tests of the auto sequence code showed that binning widths of 20, 30, and 40 give slightly different spectrum shapes. A binning width of around 30 gives a better spectrum, in the sense of a smaller error compared to the others. (author)
QR Codes 101
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
[Physiological differences between cycling and running].
Millet, Grégoire
This review compares the differences in systemic responses (VO2max, anaerobic threshold, heart rate and economy) and in underlying mechanisms of adaptation (ventilatory and hemodynamic and neuromuscular responses) between cycling and running. VO2max is specific to the exercise modality. Overall, there is more physiological training transfer from running to cycling than vice-versa. Several other physiological differences between cycling and running are discussed: HR is different between the two activities both for maximal and sub-maximal intensities. The delta efficiency is higher in running. Ventilation is more impaired in cycling than running due to mechanical constraints. Central fatigue and decrease in maximal strength are more important after prolonged exercise in running than in cycling.
Design of ProjectRun21
Damsted, Camma; Parner, Erik Thorlund; Sørensen, Henrik
BACKGROUND: Participation in half-marathons has been increasing steeply during the past decade. In line with this, a vast number of half-marathon running schedules has surfaced. Unfortunately, the injury incidence proportion for half-marathoners has been found to exceed 30% during 1-year follow-up. The majority of running-related injuries are suggested to develop as overuse injuries, which lead to injury if the cumulative training load over one or more training sessions exceeds the runner's load capacity for adaptive tissue repair. Owing to an increase of load capacity along with adaptive running ... the association between running experience or running pace and the risk of running-related injury. METHODS: Healthy runners between 18 and 65 years using a Global Positioning System (GPS) watch will be invited to participate in this 14-week prospective cohort study. Runners will be allowed to self-select one...
Should the Air Force Teach Running Technique
barefoot running, and gait training techniques. Current research indicates efficiencies in running with a forefoot or midfoot-strike gait, and a ... recent retrospective study showed a lower injury rate in forefoot-strike runners as compared with heel-strike runners. However, there are no ... "barefoot-like" fashion and allows a forefoot or midfoot-strike gait, as opposed to the heel-strike gait style often seen with traditional running
Running-in as an Engineering Optimization
Jamari, Jamari
Running-in is a process which can be found in daily life. This phenomenon occurs after the start of contact between fresh solid surfaces, resulting in changes in the surface topography, friction and wear. Before the contacting engineering solid surfaces reach a steady-state operating situation, this running-in enhances the contact performance. Running-in is very complex and is a vast problem area. Many variables occur in the running-in process, physically, mechanically or chemically. T...
Run 2 ATLAS Trigger and Detector Performance
Solovyanov, Oleg; The ATLAS collaboration
The second LHC run started in June 2015 with a proton-proton centre-of-mass collision energy of 13 TeV. During 2016 and 2017, the LHC delivered an unprecedented amount of luminosity under increasingly challenging conditions in terms of peak luminosity, pile-up and trigger rates. In this talk, the LHC running conditions and the improvements made to the ATLAS experiment in the course of Run 2 will be discussed, and the latest ATLAS detector and trigger performance results from Run 2 will be presented.
How to run ions in the future?
Küchler, D; Manglunki, D; Scrivens, R
In the light of different running scenarios, potential source improvements will be discussed (e.g. one month every year versus two months every other year, and the impact of the different running options [e.g. an extended ion run] on the source). As the oven refills cause most of the down time, the oven design and refilling strategies will be presented. A test stand for off-line developments will be taken into account. The implications of extended runs for the necessary manpower will also be discussed
ATLAS detector performance in Run1: Calorimeters
Burghgrave, B; The ATLAS collaboration
ATLAS operated with excellent efficiency during the Run 1 data-taking period, recording in 2011 and 2012 integrated luminosities of 5.3 fb^-1 at √s = 7 TeV and 21.6 fb^-1 at √s = 8 TeV, respectively. The Liquid Argon and Tile Calorimeters contributed to this effort by operating with a good data quality efficiency, improving over the whole of Run 1. This poster presents the overall Run 1 status and performance, LS1 works and preparations for Run 2.
Electromagnetic field and mechanical stress analysis code
TEXMAGST is a two-stage linear finite element code for the analysis of static magnetic fields in three-dimensional structures and of the associated mechanical stresses produced by the J x B forces within these structures. The electromagnetic problem is solved in terms of the magnetic vector potential A for a given current density J as curl(1/μ curl A) = J, considering the magnetic permeability as constant. The Coulomb gauge (div A = 0) was chosen and was implemented through the use of Lagrange multipliers. The second stage of the problem - the calculation of mechanical stresses in the same three-dimensional structure - is solved by using the same code with few modifications, through a restart card. Body forces J x B within each element are calculated from the solution of the first-stage run and represent the input to the second-stage run, which gives the solution for the stress problem
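Written out, the gauged magnetostatic system solved in the first stage can be stated as below. This is the standard vector-potential formulation with the gauge condition enforced by a Lagrange multiplier λ, given here for orientation rather than as TEXMAGST's exact discrete form.

```latex
\nabla \times \left( \frac{1}{\mu}\, \nabla \times \mathbf{A} \right) + \nabla \lambda = \mathbf{J},
\qquad
\nabla \cdot \mathbf{A} = 0,
\qquad
\mathbf{f} = \mathbf{J} \times \mathbf{B}, \quad \mathbf{B} = \nabla \times \mathbf{A},
```

where f is the body-force density passed to the second (stress) stage.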
Codes and curves
Walker, Judy L
When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
Computer codes in particle transport physics
Pesic, M.
Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and appropriate solving tools - a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for simulation of the transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to underlining the approach to the solution of the problem - a process in nature - by selection of the appropriate 3D model and the corresponding tools - computer codes and cross section data libraries. The process of data collection and evaluation from experimental measurements, and the theoretical approach to establishing reliable libraries of evaluated cross section data, is a long, difficult and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed for particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic application of these codes, the physical models implemented with their limitations, the energy ranges of particles and the types of interactions, is given. General information about the codes also covers programming language, operating system, calculation speed and code availability. An example of increasing the computation speed of running the MCNPX code using an MPI cluster compared to the code's sequential option
Computer Security: is your code sane?
Stefan Lueders, Computer Security Team
How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities*? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pit-falls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. "Static Code Analysers" are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies including: employing undeclared variables; expressions resu...
CBP Phase I Code Integration
Smith, F.; Brown, K.; Flach, G.; Sarkar, S.
was developed to link GoldSim with external codes (Smith III et al. 2010). The DLL uses a list of code inputs provided by GoldSim to create an input file for the external application, runs the external code, and returns a list of outputs (read from files created by the external application) back to GoldSim. In this way GoldSim provides: (1) a unified user interface to the applications, (2) the capability of coupling selected codes in a synergistic manner, and (3) the capability of performing probabilistic uncertainty analysis with the codes. GoldSim is made available by the GoldSim Technology Group as a free 'Player' version that allows running but not editing GoldSim models. The player version makes the software readily available to a wider community of users that would wish to use the CBP application but do not have a license for GoldSim.
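The coupling pattern described here (write an input file from the driver's inputs, launch the external application, read its outputs back) can be sketched in a few lines of Python. The executable name, file names and formats below are invented for the sketch and are not the actual CBP/GoldSim DLL interface.

```python
import subprocess
from pathlib import Path

def run_external_code(inputs: dict[str, float], exe: str = "./cbp_model",
                      workdir: Path = Path("run01")) -> dict[str, float]:
    """Write the inputs handed over by the driver to a file, launch the
    external application, and read its outputs back as a dictionary."""
    workdir.mkdir(exist_ok=True)
    (workdir / "input.txt").write_text(
        "\n".join(f"{k} = {v}" for k, v in inputs.items()))
    subprocess.run([exe, "input.txt"], cwd=workdir, check=True)
    outputs = {}
    for line in (workdir / "output.txt").read_text().splitlines():
        key, value = line.split("=")
        outputs[key.strip()] = float(value)
    return outputs
```

Wrapping each external code behind a uniform call like this is what lets a probabilistic driver such as GoldSim sample inputs, run the codes, and collect outputs for uncertainty analysis.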
Web interface for plasma analysis codes
Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: [email protected]; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)
There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs that are written to be run in supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, it is a difficult task to run analysis codes in supercomputers, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect these programs to be run by other users. In order to make these programs to be widely used by many users, the authors developed user-friendly interfaces using a Web interface. Since the Web browser is one of the most common applications, it is useful for both the users and developers. In order to realize interactive Web interface, AJAX technique is widely used, and the authors also adopted AJAX. To build such an AJAX based Web system, Ruby on Rails plays an important role in this system. Since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables the programmers to efficiently develop the Web-based application. In this paper, the authors will introduce the system and demonstrate the usefulness of this approach.
Los Alamos radiation transport code system on desktop computing platforms
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.; West, J.T.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines.
Running Records and First Grade English Learners: An Analysis of Language Related Errors
Briceño, Allison; Klein, Adria F.
The purpose of this study was to determine if first-grade English Learners made patterns of language related errors when reading, and if so, to identify those patterns and how teachers coded language related errors when analyzing English Learners' running records. Using research from the fields of both literacy and Second Language Acquisition, we…
Comparing internal and external run-time coupling of CFD and building energy simulation software
Djunaedy, E.; Hensen, J.L.M.; Loomans, M.G.L.C.
This paper describes a comparison between internal and external run-time coupling of CFD and building energy simulation software. Internal coupling can be seen as the "traditional" way of developing software, i.e. the capabilities of existing software are expanded by merging codes. With external
FLP: a field line plotting code for bundle divertor design
Ruchti, C.
A computer code was developed to aid in the design of bundle divertors. The code can handle discrete toroidal field coils and various divertor coil configurations. All coils must be composed of straight line segments. The code runs on the PDP-10 and displays plots of the configuration, field lines, and field ripple. It automatically chooses the coil currents to connect the separatrix produced by the divertor to the outer edge of the plasma and calculates the required coil cross sections. Several divertor designs are illustrated to show how the code works
Responding for sucrose and wheel-running reinforcement: effect of pre-running.
Belke, Terry W
Six male albino Wistar rats were placed in running wheels and exposed to a fixed interval 30-s schedule that produced either a drop of 15% sucrose solution or the opportunity to run for 15s as reinforcing consequences for lever pressing. Each reinforcer type was signaled by a different stimulus. To assess the effect of pre-running, animals were allowed to run for 1h prior to a session of responding for sucrose and running. Results showed that, after pre-running, response rates in the later segments of the 30-s schedule decreased in the presence of a wheel-running stimulus and increased in the presence of a sucrose stimulus. Wheel-running rates were not affected. Analysis of mean post-reinforcement pauses (PRP) broken down by transitions between successive reinforcers revealed that pre-running lengthened pausing in the presence of the stimulus signaling wheel running and shortened pauses in the presence of the stimulus signaling sucrose. No effect was observed on local response rates. Changes in pausing in the presence of stimuli signaling the two reinforcers were consistent with a decrease in the reinforcing efficacy of wheel running and an increase in the reinforcing efficacy of sucrose. Pre-running decreased motivation to respond for running, but increased motivation to work for food.
The Effect of Training in Minimalist Running Shoes on Running Economy.
Ridge, Sarah T; Standifird, Tyler; Rivera, Jessica; Johnson, A Wayne; Mitchell, Ulrike; Hunter, Iain
The purpose of this study was to examine the effect of minimalist running shoes on oxygen uptake during running before and after a 10-week transition from traditional to minimalist running shoes. Twenty-five recreational runners (no previous experience in minimalist running shoes) participated in submaximal VO2 testing at a self-selected pace while wearing traditional and minimalist running shoes. Ten of the 25 runners gradually transitioned to minimalist running shoes over 10 weeks (experimental group), while the other 15 maintained their typical training regimen (control group). All participants repeated submaximal VO2 testing at the end of 10 weeks. Testing included a 3 minute warm-up, 3 minutes of running in the first pair of shoes, and 3 minutes of running in the second pair of shoes. Shoe order was randomized. Average oxygen uptake was calculated during the last minute of running in each condition. The average change from pre- to post-training for the control group during testing in traditional and minimalist shoes was an improvement of 3.1 ± 15.2% and 2.8 ± 16.2%, respectively. The average change from pre- to post-training for the experimental group during testing in traditional and minimalist shoes was an improvement of 8.4 ± 7.2% and 10.4 ± 6.9%, respectively. Data were analyzed using a 2-way repeated measures ANOVA. There were no significant interaction effects, but the overall improvement in running economy across time (6.15%) was significant (p = 0.015). Running in minimalist running shoes improves running economy in experienced, traditionally shod runners, but not significantly more than when running in traditional running shoes. Improvement in running economy in both groups, regardless of shoe type, may have been due to compliance with training over the 10-week study period and/or familiarity with testing procedures. Key pointsRunning in minimalist footwear did not result in a change in running economy compared to running in traditional footwear
Middle cerebral artery blood velocity during running
Lyngeraa, T. S.; Pedersen, L. M.; Mantoni, T.; Belhage, B.; Rasmussen, L. S.; van Lieshout, J. J.; Pott, F. C.
Running induces characteristic fluctuations in blood pressure (BP) of unknown consequence for organ blood flow. We hypothesized that running-induced BP oscillations are transferred to the cerebral vasculature. In 15 healthy volunteers, transcranial Doppler-determined middle cerebral artery (MCA)
Running with technology: Where are we heading?
Jensen, Mads Møller; Mueller, Florian 'Floyd'
technique-related information in run-training interfaces. From that finding, this paper presents three questions to be addressed by designers of future run-training interfaces. We believe that addressing these questions will support creation of expedient interfaces that improve runners' technique...
The Second Student-Run Homeless Shelter
Seider, Scott C.
From 1983-2011, the Harvard Square Homeless Shelter (HSHS) in Cambridge, Massachusetts, was the only student-run homeless shelter in the United States. However, college students at Villanova, Temple, Drexel, the University of Pennsylvania, and Swarthmore drew upon the HSHS model to open their own student-run homeless shelter in Philadelphia,…
Performance evaluation and financial market runs
Wagner, W.B.
This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less
Impact of Running Away on Girls' Pregnancy
Thrane, Lisa E.; Chen, Xiaojin
This study assessed the impact of running away on pregnancy in the subsequent year among U.S. adolescents. We also investigated interactions between running away and sexual assault, romance, and school disengagement. Pregnancy among females between 11 and 17 years (n = 6100) was examined utilizing the Longitudinal Study of Adolescent Health (Add…
Teaching Bank Runs with Classroom Experiments
Balkenborg, Dieter; Kaplan, Todd; Miller, Timothy
Once relegated to cinema or history lectures, bank runs have become a modern phenomenon that captures the interest of students. In this article, the authors explain a simple classroom experiment based on the Diamond-Dybvig model (1983) to demonstrate how a bank run--a seemingly irrational event--can occur rationally. They then present possible…
Training errors and running related injuries
Nielsen, Rasmus Østergaard; Buist, Ida; Sørensen, Henrik
The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running-related injuries.
Minimum Wage Effects in the Longer Run
Neumark, David; Nizalova, Olena
Exposure to minimum wages at young ages could lead to adverse longer-run effects via decreased labor market experience and tenure, and diminished education and training, while beneficial longer-run effects could arise if minimum wages increase skill acquisition. Evidence suggests that as individuals reach their late 20s, they earn less the longer…
Long Run Relationship Between Agricultural Production And ...
The study sought to estimate the impact of agricultural production on long-run economic growth in Nigeria using the Vector Error Correction methodology. The results show that a long-run relationship exists between agricultural production and economic growth in Nigeria. Among the variables in the model, crop production ...
Orthopaedic Perspective on Barefoot and Minimalist Running.
Roth, Jonathan; Neumann, Julie; Tao, Matthew
In recent years, there has been a movement toward barefoot and minimalist running. Advocates assert that a lack of cushion and support promotes a forefoot or midfoot strike rather than a rearfoot strike, decreasing the impact transient and stress on the hip and knee. Although the change in gait is theorized to decrease injury risk, this concept has not yet been fully elucidated. However, research has shown diminished symptoms of chronic exertional compartment syndrome and anterior knee pain after a transition to minimalist running. Skeptics are concerned that, because of the effects of the natural environment and the lack of a standardized transition program, barefoot running could lead to additional, unforeseen injuries. Studies have shown that, with the transition to minimalist running, there is increased stress on the foot and ankle and risk of repetitive stress injuries. Nonetheless, despite the large gap of evidence-based knowledge on minimalist running, the potential benefits warrant further research and consideration.
Running injuries - changing trends and demographics.
Fields, Karl B
Running injuries are common. Recently the demographic has changed, in that most runners in road races are older and injuries now include those more common in master runners. In particular, Achilles/calf injuries, iliotibial band injury, meniscus injury, and muscle injuries to the hamstrings and quadriceps represent higher percentages of the overall injury mix in recent epidemiologic studies compared with earlier ones. Evidence suggests that running mileage and previous injury are important predictors of running injury. Evidence-based research now helps guide the treatment of iliotibial band, patellofemoral syndrome, and Achilles tendinopathy. The use of topical nitroglycerin in tendinopathy and orthotics for the treatment of patellofemoral syndrome has moderate to strong evidence. Thus, more current knowledge about the changing demographics of runners and the application of research to guide treatment and, eventually, prevent running injury offers hope that clinicians can help reduce the high morbidity associated with long-distance running.
ATLAS strip detector: Operational Experience and Run1 → Run2 transition
NAGAI, K; The ATLAS collaboration
The ATLAS SCT operational experience and the detector performance during the RUN1 period of the LHC will be reported. Additionally, the preparation for RUN2 during the Long Shutdown 1 will be mentioned.
Excessive Progression in Weekly Running Distance and Risk of Running-related Injuries
Nielsen, R.O.; Parner, Erik Thorlund; Nohr, Ellen Aagaard
Study Design An explorative, 1-year prospective cohort study. Objective To examine whether an association between a sudden change in weekly running distance and running-related injury varies according to injury type. Background It is widely accepted that a sudden increase in running distance...... is strongly related to injury in runners. But the scientific knowledge supporting this assumption is limited. Methods A volunteer sample of 874 healthy novice runners who started a self-structured running regimen were provided a global-positioning-system watch. After each running session during the study...... period, participants were categorized into 1 of the following exposure groups, based on the progression of their weekly running distance: less than 10% or regression, 10% to 30%, or more than 30%. The primary outcome was running-related injury. Results A total of 202 runners sustained a running...
The materiality of Code
Soon, Winnie
This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN......, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...
Coding for optical channels
Djordjevic, Ivan; Vasic, Bane
This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.
SEVERO code - user's manual
Sacramento, A.M. do.
This user's manual contains all the necessary information concerning the use of SEVERO code. This computer code is related to the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)
... An example of flawed code
Computer Security Team
Do you recall our small exercise in the last issue of the Bulletin? We were wondering how well written the following code was: 1 /* Safely Exec program: drop privileges to user uid and group 2 * gid, and use chroot to restrict file system access to jail 3 * directory. Also, don't allow program to run as a 4 * privileged user or group */ 5 void ExecUid(int uid, int gid, char *jailDir, char *prog, char *const argv[]) 6 { 7 if (uid == 0 || gid == 0) { 8 FailExit("ExecUid: root uid or gid not allowed"); 9 } 10 11 chroot(jailDir); /* restrict access to this dir */ 12 13 setuid(uid); /* drop privs */ 14 setgid(gid); 15 16 fprintf(LOGFILE, "Execvp of %s as uid=%d gid=%d\
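For readers who want to compare notes, the sketch below shows one commonly suggested way to harden a routine of this shape. It is an illustration only, not necessarily the Bulletin's own model answer; FailExit() and LOGFILE are the exercise's hypothetical helpers and are merely declared here, and ExecUidSafer is a name invented for this sketch. The key points: pin the working directory inside the chroot jail, drop the group before the user, and check every return value.

    /* Hedged sketch of a hardened version of the exercise's routine.
     * FailExit() and LOGFILE are assumed to come from the exercise itself;
     * they are only declared here so the sketch is self-contained. */
    #include <stdio.h>
    #include <unistd.h>

    extern FILE *LOGFILE;
    extern void FailExit(const char *msg);

    void ExecUidSafer(int uid, int gid, const char *jailDir,
                      const char *prog, char *const argv[])
    {
        if (uid == 0 || gid == 0)
            FailExit("ExecUid: root uid or gid not allowed");

        /* enter the jail and pin the working directory inside it;
         * chroot() without a chdir() leaves an escape hatch.
         * (chroot() itself requires the caller to start with privileges.) */
        if (chdir(jailDir) != 0 || chroot(jailDir) != 0 || chdir("/") != 0)
            FailExit("ExecUid: chroot/chdir failed");

        /* drop the group before the user: once setuid() has succeeded,
         * the process may no longer change its group; a fuller version
         * would also clear supplementary groups with setgroups() */
        if (setgid(gid) != 0)
            FailExit("ExecUid: setgid failed");
        if (setuid(uid) != 0)
            FailExit("ExecUid: setuid failed");

        fprintf(LOGFILE, "Execvp of %s as uid=%d gid=%d\n", prog, uid, gid);
        execvp(prog, argv);
        FailExit("ExecUid: execvp failed");   /* reached only if execvp fails */
    }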
Synthesizing Certified Code
Whalen, Michael; Schumann, Johann; Fischer, Bernd
Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...
FERRET data analysis code
Schmittroth, F.
A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
Stylize Aesthetic QR Code
Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing
With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visually pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...
Enhancing QR Code Security
Zhang, Linfan; Zheng, Shuang
Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...
Leadership Class Configuration Interaction Code - Status and Opportunities
Vary, James
With support from SciDAC-UNEDF (www.unedf.org) nuclear theorists have developed and are continuously improving a Leadership Class Configuration Interaction Code (LCCI) for forefront nuclear structure calculations. The aim of this project is to make state-of-the-art nuclear structure tools available to the entire community of researchers including graduate students. The project includes codes such as NuShellX, MFDn and BIGSTICK that run a range of computers from laptops to leadership class supercomputers. Codes, scripts, test cases and documentation have been assembled, are under continuous development and are scheduled for release to the entire research community in November 2011. A covering script that accesses the appropriate code and supporting files is under development. In addition, a Data Base Management System (DBMS) that records key information from large production runs and archived results of those runs has been developed (http://nuclear.physics.iastate.edu/info/) and will be released. Following an outline of the project, the code structure, capabilities, the DBMS and current efforts, I will suggest a path forward that would benefit greatly from a significant partnership between researchers who use the codes, code developers and the National Nuclear Data efforts. This research is supported in part by DOE under grant DE-FG02-87ER40371 and grant DE-FC02-09ER41582 (SciDAC-UNEDF).
Opening up codings?
Steensig, Jakob; Heinemann, Trine
doing formal coding and when doing more "traditional" conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...
Gauge color codes
Bombin Palomo, Hector
Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...
Refactoring test code
A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok
textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from
Rocker shoe, minimalist shoe, and standard running shoe : A comparison of running economy
Sobhani, Sobhan; Bredeweg, Steven; Dekker, Rienk; Kluitenberg, Bas; van den Heuvel, Edwin; Hijmans, Juha; Postema, Klaas
Objectives: Running with rocker shoes is believed to prevent lower limb injuries. However, it is not clear how running in these shoes affects the energy expenditure. The purpose of this study was, therefore, to assess the effects of rocker shoes on running economy in comparison with standard and
Benchmarking NNWSI flow and transport codes: COVE 1 results
Hayden, N.K.
The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs
Development of the integrated system reliability analysis code MODULE
Han, S.H.; Yoo, K.J.; Kim, T.W.
The major components in a system reliability analysis are the determination of cut sets, importance measure, and uncertainty analysis. Various computer codes have been used for these purposes. For example, SETS and FTAP are used to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. There have been problems when the codes run each other and the input and output are not linked, which could result in errors when preparing input for each code. The code MODULE was developed to carry out the above calculations simultaneously without linking input and outputs to other codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples are selected and the results and computation times are compared with those of SETS, FTAP, CONINT, and MOCUP on both Cyber 170-875 and IBM PC/AT. Two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes and that it can be used in personal computers
Software Certification - Coding, Code, and Coders
Havelund, Klaus; Holzmann, Gerard J.
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Running Economy from a Muscle Energetics Perspective
Jared R. Fletcher
Full Text Available The economy of running has traditionally been quantified from the mass-specific oxygen uptake; however, because fuel substrate usage varies with exercise intensity, it is more accurate to express running economy in units of metabolic energy. Fundamentally, the understanding of the major factors that influence the energy cost of running (Erun can be obtained with this approach. Erun is determined by the energy needed for skeletal muscle contraction. Here, we approach the study of Erun from that perspective. The amount of energy needed for skeletal muscle contraction is dependent on the force, duration, shortening, shortening velocity, and length of the muscle. These factors therefore dictate the energy cost of running. It is understood that some determinants of the energy cost of running are not trainable: environmental factors, surface characteristics, and certain anthropometric features. Other factors affecting Erun are altered by training: other anthropometric features, muscle and tendon properties, and running mechanics. Here, the key features that dictate the energy cost during distance running are reviewed in the context of skeletal muscle energetics.
Post-processing of the TRAC code's results
Baron, J.H.; Neuman, D.
The TRAC code serves for the analysis of accidents in nuclear installations from the thermohydraulic point of view. A program has been developed with the aim of rapidly processing the information generated by the code, with on-screen graphing capability in both high and low resolution, or on paper through a printer or plotter. Although the programs are intended to be used after the TRAC runs, they may also be used while the program is running so as to observe the calculation process. The advantages of employing this type of tool, its actual capacity and its possibilities of expansion according to the user's needs are herein described. (Author)
The effect of footwear on running performance and running economy in distance runners.
Fuller, Joel T; Bellenger, Clint R; Thewlis, Dominic; Tsiros, Margarita D; Buckley, Jonathan D
The effect of footwear on running economy has been investigated in numerous studies. However, no systematic review and meta-analysis has synthesised the available literature and the effect of footwear on running performance is not known. The aim of this systematic review and meta-analysis was to investigate the effect of footwear on running performance and running economy in distance runners, by reviewing controlled trials that compare different footwear conditions or compare footwear with barefoot. The Web of Science, Scopus, MEDLINE, CENTRAL (Cochrane Central Register of Controlled Trials), EMBASE, AMED (Allied and Complementary Medicine), CINAHL and SPORTDiscus databases were searched from inception up until April 2014. Included articles reported on controlled trials that examined the effects of footwear or footwear characteristics (including shoe mass, cushioning, motion control, longitudinal bending stiffness, midsole viscoelasticity, drop height and comfort) on running performance or running economy and were published in a peer-reviewed journal. Of the 1,044 records retrieved, 19 studies were included in the systematic review and 14 studies were included in the meta-analysis. No studies were identified that reported effects on running performance. Individual studies reported significant, but trivial, beneficial effects on running economy for comfortable and stiff-soled shoes [standardised mean difference (SMD) beneficial effect on running economy for cushioned shoes (SMD = 0.37; P beneficial effect on running economy for training in minimalist shoes (SMD = 0.79; P beneficial effects on running economy for light shoes and barefoot compared with heavy shoes (SMD running was identified (P running economy. Certain models of footwear and footwear characteristics can improve running economy. Future research in footwear performance should include measures of running performance.
The network code
The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)
Coding for Electronic Mail
Rice, R. F.; Lee, J. J.
Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.
Lyngeraa, Tobias; Pedersen, Lars Møller; Mantoni, T
for eight subjects, respectively, were excluded from analysis because of insufficient signal quality. Running increased mean arterial pressure and mean MCA velocity and induced rhythmic oscillations in BP and in MCA velocity corresponding to the difference between step rate and heart rate (HR) frequencies....... During running, rhythmic oscillations in arterial BP induced by interference between HR and step frequency impact on cerebral blood velocity. For the exercise as a whole, average MCA velocity becomes elevated. These results suggest that running not only induces an increase in regional cerebral blood flow...
CMB constraints on running non-Gaussianity
Oppizzi, Filippo; Liguori, Michele; Renzi, Alessandro; Arroja, Frederico; Bartolo, Nicola
We develop a complete set of tools for CMB forecasting, simulation and estimation of primordial running bispectra, arising from a variety of curvaton and single-field (DBI) models of Inflation. We validate our pipeline using mock CMB running non-Gaussianity realizations and test it on real data by obtaining experimental constraints on the $f_{\rm NL}$ running spectral index, $n_{\rm NG}$, using WMAP 9-year data. Our final bounds (68\% C.L.) read $-0.3< n_{\rm NG}
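For orientation, the running of the non-Gaussianity amplitude is commonly parametrized as a power law around a pivot scale $k_\star$ (an illustrative convention assumed here; the paper's own pivot choice and normalisation may differ): $f_{\rm NL}(k)=f_{\rm NL}(k_\star)\,(k/k_\star)^{n_{\rm NG}}$, so that $n_{\rm NG}=0$ recovers the usual scale-independent amplitude.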
Running Injuries During Adolescence and Childhood.
Krabak, Brian J; Snitily, Brian; Milani, Carlo J E
The popularity of running among young athletes has significantly increased over the past few decades. As the number of children who participate in running increases, so do the potential number of injuries to this group. Proper care of these athletes includes a thorough understanding of the unique physiology of the skeletally immature athlete and common injuries in this age group. Treatment should focus on athlete education, modification of training schedule, and correction of biomechanical deficits contributing to injury. Early identification and correction of these factors will allow a safe return to running sports. Copyright © 2016 Elsevier Inc. All rights reserved.
What to do with a Dead Research Code
Nemiroff, Robert J.
The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.
ATLAS Strip Detector: Operational Experience and Run1-> Run2 Transition
Nagai, Koichi; The ATLAS collaboration
The Large Hadron Collider operated very successfully during Run1 and provided many opportunities for physics studies. It is currently undergoing consolidation work toward operation at $\sqrt{s}=14\,\mathrm{TeV}$ in Run2. The ATLAS experiment achieved excellent performance in Run1 operation, delivering remarkable physics results. The SemiConductor Tracker contributed to the precise measurement of the momentum of charged particles. This paper describes the operational experience of the SemiConductor Tracker in Run1 and the preparation toward Run2 operation during the LS1.
Electricity prices and fuel costs. Long-run relations and short-run dynamics
Mohammadi, Hassan
The paper examines the long-run relation and short-run dynamics between electricity prices and three fossil fuel prices - coal, natural gas and crude oil - using annual data for the U.S. for 1960-2007. The results suggest (1) a stable long-run relation between real prices for electricity and coal (2) Bi-directional long-run causality between coal and electricity prices. (3) Insignificant long-run relations between electricity and crude oil and/or natural gas prices. And (4) no evidence of asymmetries in the adjustment of electricity prices to deviations from equilibrium. A number of implications are addressed. (author)
User's manual for EXALPHA (a code for calculating electronic properties of molecules). [Muscatel code, multiply scattered electron approximation
Jones, H.D.
The EXALPHA procedures provide a simplified method for running the MUSCATEL computer code, which in turn is used for calculating electronic properties of simple molecules and atomic clusters, based on the multiply scattered electron approximation for the wave equations. The use of the EXALPHA procedures to set up a run of MUSCATEL is described.
NAGRADATA. Code key. Geology
Mueller, W.H.; Schneider, B.; Staeuble, J.
This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation into plain language of stored coded information is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically) 2. the codes listed alphabetically and 3. the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval, economies of storage memory requirements, the standardisation of terminology. The nature of this thesaurian type 'key to codes' makes it impossible to either establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes to NAGRADATA must be considered to represent the current state of progress of a living system and future editions will be issued in a loose-leaf ringbook system which can be updated by an organised (updating) service. (author)
Reactor lattice codes
Kulikowska, T.
The present lecture has a main goal to show how the transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, belonging to the most popular tools for reactor calculations. Most of the approaches discussed here can be easily modified to any other lattice code. The description of the code assumes the basic knowledge of reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)
Four-D propagation code for high-energy laser beams: a user's manual
Morris, J.R.
This manual describes the use and structure of the June 30, 1976 version of the Four-D propagation code for high energy laser beams. It provides selected sample output from a typical run and from several debug runs. The Four-D code now includes the important noncoplanar scenario feature. Many problems that required excessive computer time can now be meaningfully simulated as steady-state noncoplanar problems with short run times.
Common running musculoskeletal injuries among recreational half ...
probing the prevalence and nature of running musculoskeletal injuries in the 12 months preceding ... or agony, and which prevented them from physical activity for ..... injuries to professional football players: Developing the UEFA model.
TEK twisted gradient flow running coupling
Pérez, Margarita García; Keegan, Liam; Okawa, Masanori
We measure the running of the twisted gradient flow coupling in the Twisted Eguchi-Kawai (TEK) model, the SU(N) gauge theory on a single site lattice with twisted boundary conditions in the large N limit.
Run-2 Supersymmetry searches in ATLAS
Soffer, Abner; The ATLAS collaboration
Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. With the large increase in collision energy with the LHC Run-2 (from 8TeV to 13 TeV) the sensitivity to heavy strongly produced SUSY particles (squarks and gluinos) increases tremendously. This talk presents recent ATLAS Run-2 searches for such particles in final states including jets, missing transverse momentum, and possibly light leptons.
Running heavy-quark masses in DIS
Alekhin, S.; Moch, S.
We report on determinations of the running mass for charm quarks from deep-inelastic scattering reactions. The method provides complementary information on this fundamental parameter from hadronic processes with space-like kinematics. The obtained values are consistent with but systematically lower than the world average as published by the PDG. We also address the consequences of the running mass scheme for heavy-quark parton distributions in global fits to deep-inelastic scattering data. (orig.)
The meaning of running away for girls.
Peled, Einat; Cohavi, Ayelet
The aim of this qualitative research was to understand how runaway girls perceive the processes involved in leaving home and the meaning they attribute to it. Findings are based on in-depth interviews with 10 Israeli girls aged 13-17 with a history of running away from home. The meaning of running away as it emerged from the girls' descriptions of their lives prior to leaving home was that of survival - both psychological and physical. The girls' stories centered on their evolving experiences of alienation, loneliness and detachment, and the failure of significant relationships at home and outside of home to provide them with the support they needed. These experiences laid the ground for the "final moments" before leaving, when a feeling of "no alternative," a hope for a better future, and various particular triggers led the girls to the decision to leave home. Participants' insights about the dynamics leading to running-away center on the meaning of family relationships, particularly those with the mother, as constituting the girl's psychological home. The girls seemed to perceive running away as an inevitability, rather than a choice, and even portrayed the running away as "living suicide." Yet, their stories clearly demonstrate their ability to cope and the possession of strengths and skills that enabled them to survive in extremely difficult home situations. The findings of this research highlight the importance of improving services for reaching out and supporting girls who are on the verge of running away from home. Such services should be tailored to the needs of girls who experience extreme but often silenced distress at home, and should facilitate alternative solutions to the girls' plight other than running away. An understanding of the dynamics leading to running away from the girls' perspective has the potential to improve the efficacy of services provided by contributing to the creation of a caring, empowering, understanding and trustful professional
A finite element code for electric motor design
Campbell, C. Warren
FEMOT is a finite element program for solving the nonlinear magnetostatic problem. This version uses nonlinear, Newton first order elements. The code can be used for electric motor design and analysis. FEMOT can be embedded within an optimization code that will vary nodal coordinates to optimize the motor design. The output from FEMOT can be used to determine motor back EMF, torque, cogging, and magnet saturation. It will run on a PC and will be available to anyone who wants to use it.
[Osteoarthritis from long-distance running?].
Hohmann, E; Wörtler, K; Imhoff, A
Long distance running has become a fashionable recreational activity. This study investigated the effects of external impact loading on bone and cartilage introduced by performing a marathon race. Seven beginners were compared to six experienced recreational long distance runners and two professional athletes. All participants underwent magnetic resonance imaging of the hip and knee before and after a marathon run. Coronal T1 weighted and STIR sequences were used. The pre MRI served as a baseline investigation and monitored the training effect. All athletes demonstrated normal findings in the pre run scan. All but one athlete in the beginner group demonstrated joint effusions after the race. The experienced and professional runners failed to demonstrate pathology in the post run scans. Recreational and professional long distance runners tolerate high impact forces well. Beginners demonstrate significant changes on the post run scans. Whether those findings are a result of inadequate training (miles and duration) warrant further studies. We conclude that adequate endurance training results in adaptation mechanisms that allow the athlete to compensate for the stresses introduced by long distance running and do not predispose to the onset of osteoarthritis. Significant malalignment of the lower extremity may cause increased focal loading of joint and cartilage.
Running With an Elastic Lower Limb Exoskeleton.
Cherry, Michael S; Kota, Sridhar; Young, Aaron; Ferris, Daniel P
Although there have been many lower limb robotic exoskeletons that have been tested for human walking, few devices have been tested for assisting running. It is possible that a pseudo-passive elastic exoskeleton could benefit human running without the addition of electrical motors due to the spring-like behavior of the human leg. We developed an elastic lower limb exoskeleton that added stiffness in parallel with the entire lower limb. Six healthy, young subjects ran on a treadmill at 2.3 m/s with and without the exoskeleton. Although the exoskeleton was designed to provide ~50% of normal leg stiffness during running, it only provided 24% of leg stiffness during testing. The difference in added leg stiffness was primarily due to soft tissue compression and harness compliance decreasing exoskeleton displacement during stance. As a result, the exoskeleton only supported about 7% of the peak vertical ground reaction force. There was a significant increase in metabolic cost when running with the exoskeleton compared with running without the exoskeleton (ANOVA, P exoskeletons for human running are human-machine interface compliance and the extra lower limb inertia from the exoskeleton.
Metadata aided run selection at ATLAS
Buckingham, R M; Gallas, E J; Tseng, J C-L; Viegas, F; Vinek, E
Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called 'runBrowser' makes these Conditions Metadata available as a Run based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections are complete, runBrowser produces a human readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.
You know the Science. Do you know your Code?
This talk is about automated code analysis and transformation tools to support scientific computing. Code bases are difficult to manage because of size, age, or safety requirements. Tools can help scientists and IT engineers understand their code, locate problems, improve quality. Tools can also help transform the code, by implementing complex refactorings, replatforming, or migration to a modern language. Such tools are themselves difficult to build. This talk describes DMS, a meta-tool for building software analysis tools. DMS is a kind of generalized compiler, and can be configured to process arbitrary programming languages, to carry out arbitrary analyses, and to convert specifications into running code. It has been used for a variety of purposes, including converting embedded mission software in the US B-2 Stealth Bomber, providing the US Social Security Administration with a deep view how their 200 millions lines of COBOL are connected, and reverse-engineering legacy factory process control code i...
Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding
Hansen, Johan Peder
We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes....
A method for scientific code coupling in a distributed environment
Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.
This guide book deals with the coupling of big scientific codes. First, the context is introduced: big scientific codes devoted to a specific discipline coming to maturity, and a growing need for multi-discipline studies. Then we describe different kinds of code coupling and an example of code coupling: the 3D thermal-hydraulic code THYC and the 3D neutronics code COCCINELLE. With this example we identify problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of coupling terms. This leads us to define two kinds of coupling: with weak coupling, we can use explicit methods, and with strong coupling we need to use implicit methods. In both cases, we analyze the link with the way of parallelizing code. For translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general structure for data. This general structure constitutes an intermediary between the codes, thus allowing a relative independence of the codes from a specific coupling. The proposed method for the implementation of a coupling leads to a simultaneous run of the different codes, while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between codes with the use of the PVM product (Parallel Virtual Machine) and indirect communication with a coupling tool. This second way, with a general code coupling tool, is based on a coupling method, and we strongly recommend using it. This method is based on the two following principles: re-usability, which means few modifications to existing codes, and definition of a code usable for coupling, which leads us to separate the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
An Optimal Linear Coding for Index Coding Problem
Pezeshkpour, Pouya
An optimal linear coding solution for index coding problem is established. Instead of network coding approach by focus on graph theoric and algebraic methods a linear coding program for solving both unicast and groupcast index coding problem is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilize for any number of messages. The importance of this work is lying mostly on the usage of the presented coding in the groupcast index coding ...
Development of EASYQAD version β: A Visualization Code System for QAD-CGGP-A Gamma and Neutron Shielding Calculation Code
Kim, Jae Cheon; Lee, Hwan Soo; Ha, Pham Nhu Viet; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung
EASYQAD had previously been developed using the MATLAB GUI (Graphical User Interface) in order to conveniently perform gamma and neutron shielding calculations at Hanyang University. It had been completed as version α of the radiation shielding analysis code. In this study, EASYQAD was upgraded to version β with many additional functions and more user-friendly graphical interfaces. So that general users can run it in a Windows XP environment without any MATLAB installation, this version was developed into a standalone code system.
A PC version of the Monte Carlo criticality code OMEGA
Seifert, E.
A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be considered essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code together with the built-in data libraries gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.), concerning programming and the data base. It is a good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: A typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can simply be investigated by running many variants of a problem. (orig.)
The Aesthetics of Coding
Andersen, Christian Ulrik
Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated...... code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer's materiality. Cramer is thus the voice of a new 'code...... avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making – its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...
Majorana fermion codes
Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard
We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.
Theory of epigenetic coding.
Elder, D
The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatiorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.
DISP1 code
Vokac, P.
DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code under GPL license is available at https://github.com/algorun [email protected] Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: [email protected].
Rhexifolia versus Rhexiifolia: Plant Nomenclature Run Amok?
R. Kasten Dumroese; Mark W. Skinner
The International Botanical Congress governs plant nomenclature worldwide through the International Code of Botanical Nomenclature. In the current code are very specific procedures for naming plants with novel compound epithets, and correcting compound epithets, like rhexifolia, that were incorrectly combined. We discuss why rhexiifolia...
Phonological coding during reading.
Leinenger, Mallorie
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
The aeroelastic code FLEXLAST
Visser, B. [Stork Product Eng., Amsterdam (Netherlands)
To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)
Optimization of the muon reconstruction algorithms for LHCb Run 2
Aaij, Roel; Dettori, Francesco; Dungs, Kevin; Lopes, Helder; Martinez Santos, Diego; Prisciandaro, Jessica; Sciascia, Barbara; Syropoulos, Vasileios; Stahl, Sascha; Vazquez Gomez, Ricardo
The muon identification algorithm in the LHCb HLT software trigger and offline reconstruction has been revisited in view of the LHC Run 2. This software has undergone a significant refactorisation, resulting in a modularized common code base between the HLT and offline event processing. Because of the latter, the muon identification is now identical in HLT and offline. The HLT1 algorithm sequence has been updated given the new rate and timing constraints. Also, information from the TT subdetector is used in order to reduce ghost tracks and optimize for low $p_T$ muons. The current software is presented here together with performance studies showing improved efficiencies and reduced timing.
MORSE Monte Carlo code
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described
QR codes for dummies
Waters, Joe
Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown
Tokamak Systems Code
Reid, R.L.; Barrett, R.J.; Brown, T.G.
The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the imput to or output from the module remains unchanged
Derivation of the physical equations solved in the inertial confinement stability code DOC. Informal report
Scannapieco, A.J.; Cranfill, C.W.
There now exists an inertial confinement stability code called DOC, which runs as a postprocessor. DOC (a code that has evolved from a previous code, PANSY) is a spherical harmonic linear stability code that integrates, in time, a set of Lagrangian perturbation equations. Effects due to real equations of state, asymmetric energy deposition, thermal conduction, shock propagation, and a time-dependent zeroth-order state are handled in the code. We present here a detailed derivation of the physical equations that are solved in the code.
The ATLAS Tau Trigger Performance during LHC Run 1 and Prospects for Run 2
Mitani, T; The ATLAS collaboration
The ATLAS tau trigger is designed to select hadronic decays of tau leptons. The tau lepton plays an important role in Standard Model (SM) physics, such as in Higgs boson decays. The tau lepton is also important in beyond the SM (BSM) scenarios, such as supersymmetry and exotic particles, as it is often produced preferentially in these models. During the 2010-2012 LHC run (Run1), the tau trigger operated successfully, which led to several rewarding results such as evidence for $H\rightarrow\tau\tau$. In the 2015 LHC run (Run2), the LHC will be upgraded and overlapping interactions per bunch crossing (pile-up) are expected to increase by a factor of two. It will be challenging to control trigger rates while keeping interesting physics events. This paper summarizes the tau trigger performance in Run1 and its prospects for Run2.
Efficient Coding of Information: Huffman Coding -RE ...
to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
NR-code: Nonlinear reconstruction code
Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming
NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.
Not Just Running: Coping with and Managing Everyday Life through Road-Running
Cook, Simon
From the external form, running looks like running. Yet this alikeness masks a hugely divergent practice consisting of different movements, meanings and experiences. In this paper I wish to shed light upon some of these different 'ways of running' and in turn identify a range of the sometimes surprising, sometimes significant and sometimes banal benefits that road-running can gift its practitioners beyond simply exercise and physical fitness. Drawing on an innovative mapping and ethnographic ...
GridRun: A lightweight packaging and execution environment for compact, multi-architecture binaries
Shalf, John; Goodale, Tom
GridRun offers a very simple set of tools for creating and executing multi-platform binary executables. These ''fat-binaries'' archive native machine code into compact packages that are typically a fraction of the size of the original binary images they store, enabling efficient staging of executables for heterogeneous parallel jobs. GridRun interoperates with existing distributed job launchers/managers like Condor and the Globus GRAM to greatly simplify the logic required to launch native binary applications in distributed heterogeneous environments.
Students' Gender Stereotypes about Running in Schools
Xiang, Ping; McBride, Ron E.; Lin, Shuqiong; Gao, Zan; Francis, Xueying
Two hundred forty-six students (132 boys, 114 girls) were tracked from fifth to eighth grades, and changes in gender stereotypes about running as a male sport, running performance, interest in running, and intention for future running participation were assessed. Results revealed that neither sex held gender stereotypes about running as a male…
The Run-2 ATLAS Trigger System
Martínez, A Ruiz
The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in up to five times higher rates of processes of interest. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing the system to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event processing farm. A few examples will be shown, such as the impressive performance improvements in the HLT trigger algorithms used to identify leptons, hadrons and global event quantities like missing transverse energy. Finally, the status of the commissioning of the trigger system and its performance during the 2015 run will be presented. (paper)
Exercise economy in skiing and running
Thomas eLosnegard
Full Text Available Substantial inter-individual variations in exercise economy exist even in highly trained endurance athletes. The variation is believed to be determined partly by intrinsic factors. Therefore, in the present study, we compared exercise economy in V2-skating, double poling and uphill running. Ten highly trained male cross-country skiers (23 ± 3 years, 180 ± 6 cm, 75 ± 8 kg, VO2peak running: 76.3 ± 5.6 mL•kg-1•min-1) participated in the study. Exercise economy and VO2peak during treadmill running, ski skating (V2 technique) and double poling were compared based on correlation analysis with subsequent criteria for interpreting the magnitude of correlation (r). There was a very large correlation in exercise economy between V2-skating and double poling (r = 0.81) and a large correlation between V2-skating and running (r = 0.53) and double poling and running (r = 0.58). There were trivial to moderate correlations between exercise economy and VO2peak (r = 0.00-0.23), cycle rate (r = 0.03-0.46), body mass (r = -0.09-0.46) and body height (r = 0.11-0.36). In conclusion, the inter-individual variation in exercise economy could only moderately be explained by differences in VO2peak, body mass and body height and therefore we suggest that other intrinsic factors contribute to the variation in exercise economy between highly trained subjects.
The CMS trigger in Run 2
Tosi, Mia
During its second period of operation (Run 2) which started in 2015, the LHC will reach a peak instantaneous luminosity of approximately $2\times 10^{34}$ cm$^{-2}$s$^{-1}$ with an average pile-up of about 55, far larger than the design value. Under these conditions, the online event selection is a very challenging task. In CMS, it is realised by a two-level trigger system: the Level-1 (L1) Trigger, implemented in custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. In order to face this challenge, the L1 trigger has undergone a major upgrade compared to Run 1, whereby all electronic boards of the system have been replaced, allowing more sophisticated algorithms to be run online. Its last stage, the global trigger, is now able to perform complex selections and to compute high-level quantities, like invariant masses. Likewise, the algorithms that run in the HLT went through big improvements; in particular, new ap...
Chaotic inflation with curvaton induced running
Sloth, Martin Snoager
While dust contamination now appears as a likely explanation of the apparent tension between the recent BICEP2 data and the Planck data, we will here explore the consequences of a large running in the spectral index as suggested by the BICEP2 collaboration as an alternative explanation of the apparent tension, which would, however, be in conflict with the prediction of the simplest model of chaotic inflation. The large field chaotic model is sensitive to UV physics, and the nontrivial running of the spectral index suggested by the BICEP2 collaboration could therefore, if true, be telling us some ... the possibility that the running could be due to some other less UV sensitive degree of freedom. As an example, we ask if it is possible that the curvature perturbation spectrum has a contribution from a curvaton, which makes up for the large running in the spectrum. We find that this effect could mask...
Habitual Minimalist Shod Running Biomechanics and the Acute Response to Running Barefoot.
Tam, Nicholas; Darragh, Ian A J; Divekar, Nikhil V; Lamberts, Robert P
The aim of the study was to determine whether habitual minimalist shoe runners present with purported favorable running biomechanics that reduce running injury risk, such as a lower initial loading rate. Eighteen minimalist and 16 traditionally cushioned shod runners were assessed when running both in their preferred training shoe and barefoot. Ankle and knee joint kinetics and kinematics, initial rate of loading, and footstrike angle were measured. Sagittal ankle and knee joint stiffness were also calculated. Results of a two-factor ANOVA presented no group difference in initial rate of loading when participants were running either shod or barefoot; however, initial loading rate increased for both groups when running barefoot (p=0.008). Differences in footstrike angle were observed between groups when running shod, but not when barefoot (minimalist: 8.71±8.99 vs. traditional: 17.32±11.48 degrees, p=0.002). Lower ankle joint stiffness was found in both groups when running barefoot (p=0.025). These findings illustrate that risk factors for injury potentially differ between the two groups. Shoe construction differences do change mechanical demands; however, once habituated to the demands of a given shoe condition, certain acute favorable or unfavorable responses may be moderated. The purported benefit of minimalist running shoes in mimicking habitual barefoot running is questioned, and the risk of injury may not be attenuated. © Georg Thieme Verlag KG Stuttgart · New York.
Neural network-based run-to-run controller using exposure and resist thickness adjustment
Geary, Shane; Barry, Ronan
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
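The EWMA baseline mentioned in this abstract is simple enough to sketch. Below is a minimal, self-contained illustration of an EWMA run-to-run controller on a made-up linear lithography process; the process gain, target, drift and noise levels are invented for the example, and the paper's neural-network predictor is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET_CD = 120.0   # desired critical dimension (nm), invented
INTERCEPT = 180.0   # nominal process intercept (nm), invented
GAIN = -2.0         # nm of CD change per unit exposure dose, invented
LAM = 0.3           # EWMA weight

def process(exposure, drift):
    """Toy process: CD responds linearly to dose, plus drift and noise."""
    return INTERCEPT + GAIN * exposure + drift + rng.normal(0.0, 0.5)

offset_hat = 0.0                                     # estimated disturbance
exposure = (TARGET_CD - INTERCEPT - offset_hat) / GAIN

for lot in range(25):
    cd = process(exposure, drift=0.2 * lot)          # slow tool drift
    # update the disturbance estimate from the prediction error
    offset_hat = LAM * (cd - (INTERCEPT + GAIN * exposure)) + (1 - LAM) * offset_hat
    # choose the recipe for the next lot so the predicted CD hits the target
    exposure = (TARGET_CD - INTERCEPT - offset_hat) / GAIN
    print(f"lot {lot:2d}: CD = {cd:6.2f} nm, next dose = {exposure:6.2f}")
```

A learned predictor would replace the single EWMA update with a model trained on previous-lot features, but the feedback structure around it stays the same.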
The running pattern and its importance in running long-distance gears
Jarosław Hoffman
Full Text Available The running pattern is individual for each runner, regardless of distance. We can characterize it as the sum of the runner's data (age, height, training experience, etc.) and the parameters of his run. Building proper technique should focus first and foremost on work on movement coordination and the runner's power. In training the correct running step we can use tools similar to those used when working on deep feeling. The aim of this paper was to define what we can call a running pattern, what its influence is in long-distance running, and the relationship between technique training and the running pattern. The importance of the running pattern in long-distance racing is immense: the more it is disturbed and departs from the norm, the greater the harm its repetition causes to the body over a long run. Including exercises that shape technique in training is therefore very important and affects the running pattern significantly.
Transport of mass goods on the top run and bottom run of belt conveyors
Zimmermann, D
For combined coal winning from the collieries 'General Blumenthal' and 'Ewald Fortsetzung' a large belt conveyor plant was taken into operation which is able to transport 1360 tons/h in the top run and 300 tons/h of dirt in the bottom run. The different types of coal are transported separately in intermittent operation with the aid of bunker systems connected to the front and rear of the belt conveyor. Persons can be transported in the top run as well as in the bottom run.
The nuclear reaction model code MEDICUS
Ibishia, A.I.
The new computer code MEDICUS has been used to calculate cross sections of nuclear reactions. The code, implemented in the MATLAB 6.5, Mathematica 5, and Fortran 95 programming languages, can be run in graphical and command line mode. A Graphical User Interface (GUI) has been built that allows the user to perform calculations and to plot results just by mouse clicking. The MS Windows XP and Red Hat Linux platforms are supported. MEDICUS is a modern nuclear reaction code that can compute charged particle-, photon-, and neutron-induced reactions in the energy range from thresholds to about 200 MeV. The calculation of the cross sections of nuclear reactions is done in the framework of the Exact Many-Body Nuclear Cluster Model (EMBNCM), Direct Nuclear Reactions, Pre-equilibrium Reactions, Optical Model, DWBA, and Exciton Model with Cluster Emission. The code can also be used for the calculation of the nuclear cluster structure of nuclei. We have calculated nuclear cluster models for some nuclei such as 177Lu, 90Y, and 27Al. It has been found that the nucleus 27Al can be represented through two different nuclear cluster models: 25Mg + d and 24Na + 3He. Cross sections as a function of energy for the reaction 27Al(3He,x)22Na, established as a production method of 22Na, are calculated with the code MEDICUS. Theoretical calculations of cross sections are in good agreement with experimental results. Reaction mechanisms are taken into account. (author)
SALE: Safeguards Analytical Laboratory Evaluation computer code
Carroll, D.J.; Bush, W.J.; Dolan, C.A.
The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically-evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem
Code compression for VLIW embedded processors
Piccinelli, Emiliano; Sannino, Roberto
The implementation of processors for embedded systems implies various issues: main constraints are cost, power dissipation and die area. On the other side, new terminals perform functions that require more computational flexibility and effort. Long code streams must be loaded into memories, which are expensive and power consuming, to run on DSPs or CPUs. To overcome this issue, the "SlimCode" proprietary algorithm presented in this paper (patent pending technology) can reduce the dimensions of the program memory. It can run offline and work directly on the binary code the compiler generates, by compressing it and creating a new binary file, about 40% smaller than the original one, to be loaded into the program memory of the processor. The decompression unit will be a small ASIC, placed between the Memory Controller and the System bus of the processor, keeping unchanged the internal CPU architecture: this implies that the methodology is completely transparent to the core. We present comparisons versus the state-of-the-art IBM Codepack algorithm, along with its architectural implementation into the ST200 VLIW family core.
Computer codes for evaluation of control room habitability (HABIT)
Stage, S.A.
This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs
TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code
Cullen, D.E
TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input Preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files
The Second Workshop on Lineshape Code Comparison: Isolated Lines
Spiros Alexiou
Full Text Available In this work, we briefly summarize the theoretical aspects of isolated line broadening. We present and discuss test run comparisons from different participating lineshape codes for the 2s-2p transition for Li I, B III and N V.
Ensuring that User Defined Code does not See Uninitialized Fields
Nielsen, Anders Bach
Initialization of objects is commonly handled by user code, often in special routines known as constructors. This applies even in a virtual machine with multiple concurrent execution engines that all share the same heap. But for a language where run-time values play a role in the type system...
Bounds on the capacity of constrained two-dimensional codes
Forchhammer, Søren; Justesen, Jørn
Bounds on the capacity of constrained two-dimensional (2-D) codes are presented. The bounds of Calkin and Wilf apply to first-order symmetric constraints. The bounds are generalized in a weaker form to higher order and nonsymmetric constraints. Results are given for constraints specified by run-l...
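For intuition, the capacity of a one-dimensional run-length constraint can be computed as the base-2 logarithm of the largest eigenvalue of the constraint graph's adjacency matrix. The sketch below does this for the (d, k) = (1, infinity) constraint (no two adjacent 1s); it is only the classical 1-D calculation, not the two-dimensional bounds discussed in the paper.

```python
import numpy as np

# Transition matrix of the 1-D (1, inf) run-length-limited constraint:
# state 0 = last bit was 0, state 1 = last bit was 1 (a 1 may not follow a 1).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

lam = max(np.linalg.eigvals(A).real)     # Perron eigenvalue (the golden ratio)
print(f"capacity ≈ {np.log2(lam):.4f} bits/symbol")   # ≈ 0.6942
```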
Code-Switching Functions in Modern Hebrew Teaching and Learning
Gilead, Yona
The teaching and learning of Modern Hebrew outside of Israel is essential to Jewish education and identity. One of the most contested issues in Modern Hebrew pedagogy is the use of code-switching between Modern Hebrew and learners' first language. Moreover, this is one of the longest running disputes in the broader field of second language…
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE
Vandenbroucke, B.; Wood, K.
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and as a moving-mesh code.
Statistical screening of input variables in a complex computer code
Krieger, T.J.
A method is presented for ''statistical screening'' of input variables in a complex computer code. The object is to determine the ''effective'' or important input variables by estimating the relative magnitudes of their associated sensitivity coefficients. This is accomplished by performing a numerical experiment consisting of a relatively small number of computer runs with the code followed by a statistical analysis of the results. A formula for estimating the sensitivity coefficients is derived. Reference is made to an earlier work in which the method was applied to a complex reactor code with good results
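The screening idea described here, estimating sensitivity coefficients from a small random design and ranking inputs by their standardized magnitude, can be sketched in a few lines. The toy model and sample sizes below are invented for illustration; this is not the SCREEN code itself.

```python
import numpy as np

def screen_inputs(model, low, high, n_runs=64, seed=1):
    """Rank inputs of a black-box code by standardized linear sensitivity
    coefficients estimated from a small random design."""
    rng = np.random.default_rng(seed)
    k = len(low)
    X = rng.uniform(low, high, size=(n_runs, k))
    y = np.array([model(x) for x in X])
    # least-squares fit: y ≈ b0 + sum_i b_i * x_i
    A = np.column_stack([np.ones(n_runs), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    b = coef[1:]
    s = b * X.std(axis=0) / y.std()      # standardized coefficients
    return np.argsort(-np.abs(s)), s

# toy "code" with one dominant and one nearly irrelevant input
model = lambda x: 5.0 * x[0] + 0.1 * x[1] ** 2
order, s = screen_inputs(model, low=[0, 0], high=[1, 1])
print(order, np.round(s, 3))             # input 0 is flagged as important
```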
Establishment of computer code system for nuclear reactor design - analysis
Subki, I.R.; Santoso, B.; Syaukat, A.; Lee, S.M.
The establishment of a computer code system for nuclear reactor design analysis is described in this paper. This effort aims to provide the capability to run various codes, from nuclear data to reactor design, and to promote the capability for nuclear reactor design analysis, particularly from the neutronics and safety points of view. It is also an effort to enhance the coordination of nuclear code application and development existing in various research centres in Indonesia. Very promising results have been obtained with the help of IAEA technical assistance. (author). 6 refs, 1 fig., 1 tab
Three Dimensional Numerical Code for the Expanding Flat Universe
Kyoung W. Min
Full Text Available The current distribution of galaxies may contain clues to the condition of the universe when the galaxies condensed and to the nature of the subsequent expansion of the universe. The development of this large scale structure can be studied by employing N-body computer simulations. The present paper describes the code developed for this purpose. The computer code calculates the motion of collisionless matter acting under the force of gravity in an expanding flat universe. A test run of the code shows an error of less than 0.5% over 100 iterations.
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Division for Early Childhood, Council for Exceptional Children, 2009
The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…
Interleaved Product LDPC Codes
Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco
Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low weight codewords, that gives a further advantage in the error floor region.
Insurance billing and coding.
Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H
The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.
Error Correcting Codes
Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.
Scrum Code Camps
Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente
is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...
RFQ simulation code
Lysenko, W.P.
We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs
Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.
78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards
... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...
Is running associated with degenerative joint disease?
Panush, R.S.; Schmidt, C.; Caldwell, J.R.
Little information is available regarding the long-term effects, if any, of running on the musculoskeletal system. The authors compared the prevalence of degenerative joint disease among 17 male runners with that among 18 male nonrunners. Running subjects (53% marathoners) ran a mean of 44.8 km (28 miles)/wk for 12 years. Pain and swelling of hips, knees, ankles and feet and other musculoskeletal complaints among runners were comparable with those among nonrunners. Radiologic examinations (for osteophytes, cartilage thickness, and grade of degeneration) also were without notable differences among the groups. The authors did not find an increased prevalence of osteoarthritis among the runners. These observations suggest that long-duration, high-mileage running need not be associated with premature degenerative joint disease in the lower extremities.
Jefferson Lab Data Acquisition Run Control System
Vardan Gyurjyan; Carl Timmer; David Abbott; William Heyes; Edward Jastrzembski; David Lawrence; Elliott Wolin
A general overview of the Jefferson Lab data acquisition run control system is presented. This run control system is designed to operate the configuration, control, and monitoring of all Jefferson Lab experiments. It controls data-taking activities by coordinating the operation of DAQ sub-systems, online software components and third-party software such as external slow control systems. The main, unique feature which sets this system apart from conventional systems is its incorporation of intelligent agent concepts. Intelligent agents are autonomous programs which interact with each other through certain protocols on a peer-to-peer level. In this case, the protocols and standards used come from the domain-independent Foundation for Intelligent Physical Agents (FIPA), and the implementation used is the Java Agent Development Framework (JADE). A lightweight, XML/RDF-based language was developed to standardize the description of the run control system for configuration purposes
Instrumental Variables in the Long Run
Casey, Gregory; Klemp, Marc Patrick Brag
In the study of long-run economic growth, it is common to use historical or geographical variables as instruments for contemporary endogenous regressors. We study the interpretation of these conventional instrumental variable (IV) regressions in a general, yet simple, framework. Our aim is to estimate the long-run causal effect of changes in the endogenous explanatory variable. We find that conventional IV regressions generally cannot recover this parameter of interest. To estimate this parameter, therefore, we develop an augmented IV estimator that combines the conventional regression ... quantitative implications for the field of long-run economic growth. We also use our framework to examine related empirical techniques. We find that two prominent regression methodologies - using gravity-based instruments for trade and including ancestry-adjusted variables in linear regression models - have...
Estimation of POL-iteration methods in fast running DNBR code
Kwon, Hyuk; Kim, S. J.; Seo, K. W.; Hwang, D. H. [KAERI, Daejeon (Korea, Republic of)
In this study, various root-finding methods are applied to the POL-iteration module in SCOMS and their POL-iteration efficiency is compared with that of the reference method. On the basis of these results, an optimum POL-iteration algorithm is selected. POL requires iteration until the present local power reaches the limit power. The process of searching for the limiting power is equivalent to finding the root of a nonlinear equation. The POL iteration process used in the online monitoring system employed a variant of the bisection method, which is the most robust algorithm for finding the root of a nonlinear equation. The method, which includes an interval accelerating factor and a routine for escaping ill-posed conditions, assured the robustness of the SCOMS system. The POL iteration module in SCOMS must satisfy the requirement of minimum calculation time. To meet this requirement, a non-iterative algorithm, a few-channel model and a simple steam table are implemented in SCOMS to improve the calculation time. MDNBR evaluation at a given operating condition requires the DNBR calculation at all axial locations. An increase in the number of POL iterations therefore increases the calculation load of SCOMS significantly, so the calculation efficiency of SCOMS is strongly dependent on the number of POL iterations. In the case study, the methods show superlinear convergence in finding the limiting power, while the Brent method shows quadratic convergence. These methods are effective and better than the reference bisection algorithm.
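The comparison of root-finding strategies described above is easy to reproduce on a toy problem. The sketch below counts iterations for bisection and Brent's method on an invented monotone power-versus-margin function; it stands in for, and is not, the SCOMS limit-power search.

```python
from scipy import optimize

# Toy monotone relation between a power ratio and its margin; the root plays
# the role of the "limiting power" (purely illustrative, not the SCOMS model).
f = lambda p: (p / 1.35) ** 1.8 - 1.0

for name, solver in [("bisection", optimize.bisect), ("Brent", optimize.brentq)]:
    root, res = solver(f, 0.5, 3.0, xtol=1e-8, full_output=True)
    print(f"{name:10s} root = {root:.6f}  iterations = {res.iterations}")
```

On smooth functions like this one, Brent's method typically needs far fewer iterations than bisection for the same bracketing interval and tolerance, which is the behaviour the abstract reports.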
Running mobile agent code over simulated inter-networks : an extra gear towards distributed system evaluation
Liotta, A.; Ragusa, C.; Pavlou, G.
Mobile Agent (MA) systems are complex software entities whose behavior, performance and effectiveness cannot always be anticipated by the designer. Their evaluation often presents various aspects that require a careful, methodological approach as well as the adoption of suitable tools, needed to
Validation of thermalhydraulic codes
Wilkie, D.
Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
Fracture flow code
Dershowitz, W; Herbert, A.; Long, J.
The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex, and cannot be fully verified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)
The NLstart2run study: running related injuries in novice runners
Kluitenberg, Bas
Running is a popular sport worldwide, often practised for its positive health effects. There is, however, a downside: runners are frequently plagued by injuries, a problem that novice runners in particular run into. This thesis describes the NLstart2run study, a study
Abort Gap Cleaning for LHC Run 2
Uythoven, Jan [CERN; Boccardi, Andrea [CERN; Bravin, Enrico [CERN; Goddard, Brennan [CERN; Hemelsoet, Georges-Henry [CERN; Höfle, Wolfgang [CERN; Jacquet, Delphine [CERN; Kain, Verena [CERN; Mazzoni, Stefano [CERN; Meddahi, Malika [CERN; Valuch, Daniel [CERN; Gianfelice-Wendt, Eliana [Fermilab
To minimize the beam losses at the moment of an LHC beam dump the 3 μs long abort gap should contain as few particles as possible. Its population can be minimised by abort gap cleaning using the LHC transverse damper system. The LHC Run 1 experience is briefly recalled; changes foreseen for the LHC Run 2 are presented. They include improvements in the observation of the abort gap population and the mechanism to decide if cleaning is required, changes to the hardware of the transverse dampers to reduce the detrimental effect on the luminosity lifetime and proposed changes to the applied cleaning algorithms.
Luminosity Measurements at LHCb for Run II
Coombs, George
A precise measurement of the luminosity is a necessary component of many physics analyses, especially cross-section measurements. At LHCb two different direct measurement methods are used to determine the luminosity: the "van der Meer scan" (VDM) and the "Beam Gas Imaging" (BGI) methods. A combined result from these two methods gave a precision of less than 2% for Run I and efforts are ongoing to provide a similar result for Run II. Fixed target luminosity is determined with an indirect method based on the single electron scattering cross-section.
Running-mass inflation model and WMAP
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro; Odman, Carolina J.
We consider the observational constraints on the running-mass inflationary model, and, in particular, on the scale dependence of the spectral index, from the new cosmic microwave background (CMB) anisotropy measurements performed by WMAP and from new clustering data from the SLOAN survey. We find that the data strongly constrain a significant positive scale dependence of n, and we translate the analysis into bounds on the physical parameters of the inflaton potential. Looking deeper into specific types of interaction (gauge and Yukawa) we find that the parameter space is significantly constrained by the new data, but that the running-mass model remains viable
Causal Analysis of Railway Running Delays
Cerreto, Fabrizio; Nielsen, Otto Anker; Harrod, Steven
Operating delays and network propagation are inherent characteristics of railway operations. These are traditionally reduced by provision of time supplements or "slack" in railway timetables and operating plans. Supplement allocation policies must trade off reliability in the service commitments ... Denmark (the Danish infrastructure manager). The statistical analysis of the data identifies the minimum running times and the scheduled running time supplements and investigates the evolution of train delays along given train paths. An improved allocation of time supplements would result in smaller...
The Run 2 ATLAS Analysis Event Data Model
SNYDER, S; The ATLAS collaboration; NOWAK, M; EIFERT, T; BUCKLEY, A; ELSING, M; GILLBERG, D; MOYSE, E; KOENEKE, K; KRASZNAHORKAY, A
During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: A separation of the EDM into interface classes that the user code directly interacts with, and data storage classes that hold the payload data. The user sees an Array of Structs (AoS) interface, while the data is stored in a Struct of Arrays (SoA) format in memory, thus making it possible to efficiently auto-vectorise reconstruction code. A simple way of augmenting and reducing the information saved for different data objects. This makes it possible to easily decorate objects with new properties during data analysis, and to remove properties that the analysis does not need. A persistent file format that can be explored directly with ROOT, either with or without loading any additional libraries. This allows fast interactive naviga...
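The AoS-interface-over-SoA-storage idea can be illustrated with a small sketch: contiguous per-property arrays hold the payload, a lightweight proxy gives object-style access, and extra properties can be "decorated" on at any time. This is a toy illustration in Python, not the actual xAOD classes.

```python
import numpy as np

class ElectronContainer:
    """Struct-of-Arrays storage with an Array-of-Structs style accessor."""
    def __init__(self, n):
        self.pt  = np.zeros(n)      # contiguous arrays: the SoA payload
        self.eta = np.zeros(n)
        self.phi = np.zeros(n)
        self._aux = {}              # dynamic "decorations"

    def decorate(self, name, values):            # augment with new properties
        self._aux[name] = np.asarray(values)

    def __getitem__(self, i):                    # AoS-style proxy for user code
        return ElectronProxy(self, i)

class ElectronProxy:
    def __init__(self, store, i):
        self._s, self._i = store, i
    @property
    def pt(self):
        return self._s.pt[self._i]
    def aux(self, name):
        return self._s._aux[name][self._i]

els = ElectronContainer(3)
els.pt[:] = [45.0, 32.1, 27.5]
els.decorate("isolation", [0.1, 0.4, 0.05])
print(els[0].pt, els[0].aux("isolation"))
```

User code loops over proxy objects, while tight numerical kernels can operate directly on the underlying arrays, which is the property that makes the layout friendly to auto-vectorisation.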
Dedicated OO expertise applied to Run II software projects
Amidei, D.
The change in software language and methodology by CDF and D0 to object-oriented from procedural Fortran is significant. Both experiments requested dedicated expertise that could be applied to software design, coding, advice and review. The Fermilab Run II offline computing outside review panel agreed strongly with the request and recommended that the Fermilab Computing Division hire dedicated OO expertise for the CDF/D0/Computing Division joint project effort. This was done and the two experts have been an invaluable addition to the CDF and D0 upgrade software projects and to the Computing Division in general. These experts have encouraged common approaches and increased the overall quality of the upgrade software. Advice on OO techniques and specific advice on C++ coding has been used. Recently a set of software reviews has been accomplished. This has been a very successful instance of a targeted application of computing expertise, and constitutes a very interesting study of how to move toward modern computing methodologies in HEP
The design of the run Clever randomized trial
Ramskov, Daniel; Nielsen, Rasmus Oestergaard; Sørensen, Henrik
BACKGROUND: Injury incidence and prevalence in running populations have been investigated and documented in several studies. However, knowledge about injury etiology and prevention is needed. Training errors in running are modifiable risk factors and people engaged in recreational running need evidence-based running schedules to minimize the risk of injury. The existing literature on running volume and running intensity and the development of injuries shows conflicting results. This may be related to previously applied study designs, methods used to quantify the performed running and the statistical analysis of the collected data. The aim of the Run Clever trial is to investigate if a focus on running intensity compared with a focus on running volume in a running schedule influences the overall injury risk differently. METHODS/DESIGN: The Run Clever trial is a randomized trial with a 24-week...
Status report on the 'Merging' of the Electron-Cloud Code POSINST with the 3-D Accelerator PIC CODE WARP
Vay, J.-L.; Furman, M.A.; Azevedo, A.W.; Cohen, R.H.; Friedman, A.; Grote, D.P.; Stoltz, P.H.
We have integrated the electron-cloud code POSINST [1] with WARP [2]--a 3-D parallel Particle-In-Cell accelerator code developed for Heavy Ion Inertial Fusion--so that the two can interoperate. Both codes are run in the same process, communicate through a Python interpreter (already used in WARP), and share certain key arrays (so far, particle positions and velocities). Currently, POSINST provides primary and secondary sources of electrons, beam bunch kicks, a particle mover, and diagnostics. WARP provides the field solvers and diagnostics. Secondary emission routines are provided by the Tech-X package CMEE
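A stripped-down sketch of the coupling pattern described here, where two packages alternate inside one process and operate on shared particle arrays; the "source" and "push" functions below are invented stand-ins, not the POSINST or WARP APIs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared particle arrays that both toy "packages" operate on in one process.
positions  = rng.random((1000, 3))
velocities = np.zeros((1000, 3))

def electron_source(pos, vel, n_new=10):
    """Stand-in for the electron-cloud package: emits new electrons."""
    return (np.vstack([pos, rng.random((n_new, 3))]),
            np.vstack([vel, np.zeros((n_new, 3))]))

def field_push(pos, vel, dt=1e-9, kick=(0.0, 0.0, 1.0)):
    """Stand-in for the PIC package: advances particles in a fixed field."""
    vel += dt * np.asarray(kick)     # in-place update of the shared array
    pos += dt * vel

for step in range(5):                # the two codes alternate each step
    positions, velocities = electron_source(positions, velocities)
    field_push(positions, velocities)

print(positions.shape)               # (1050, 3) after five source steps
```

In the real coupling, the interpreter layer plays the role of the driver loop above, and the shared arrays avoid copying particle data between the two packages.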
Simulating three dimensional wave run-up over breakwaters covered by antifer units
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on the values of wave run-up, so that changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layer.
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
Vaurio, J.K.
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables
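The response-surface step can be sketched as follows: fit an inexpensive surrogate to a handful of code runs, then Monte Carlo sample the surrogate instead of the code. The quadratic form, sample sizes and toy "code" below are assumptions for illustration only, not the PROSA-2 implementation.

```python
import numpy as np

def propagate(code, nominal, sigma, n_fit=30, n_mc=100_000, seed=2):
    """Fit a quadratic response surface to a few code runs, then Monte Carlo
    sample the surface to estimate the output distribution."""
    rng = np.random.default_rng(seed)
    k = len(nominal)
    X = rng.normal(nominal, sigma, size=(n_fit, k))
    y = np.array([code(x) for x in X])
    # design matrix with constant, linear and pure quadratic terms
    A = np.column_stack([np.ones(n_fit), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    surface = lambda Z: np.column_stack([np.ones(len(Z)), Z, Z**2]) @ coef
    Z = rng.normal(nominal, sigma, size=(n_mc, k))
    samples = surface(Z)
    return samples.mean(), samples.std()

code = lambda x: 2.0 * x[0] ** 2 + 0.5 * x[1]   # stand-in for an expensive code
mean, std = propagate(code, nominal=[1.0, 3.0], sigma=[0.1, 0.2])
print(f"output mean ≈ {mean:.3f}, std ≈ {std:.3f}")
```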
Vectorization of three-dimensional neutron diffusion code CITATION
Harada, Hiroo; Ishiguro, Misako
Three-dimensional multi-group neutron diffusion code CITATION has been widely used for reactor criticality calculations. The code is expected to be run at a high speed by using recent vector supercomputers, when it is appropriately vectorized. In this paper, vectorization methods and their effects are described for the CITATION code. Especially, calculation algorithms suited for vectorization of the inner-outer iterative calculations which spend most of the computing time are discussed. The SLOR method, which is used in the original CITATION code, and the SOR method, which is adopted in the revised code, are vectorized by odd-even mesh ordering. The vectorized CITATION code is executed on the FACOM VP-100 and VP-200 computers, and is found to run over six times faster than the original code for a practical-scale problem. The initial value of the relaxation factor and the number of inner-iterations given as input data are also investigated since the computing time depends on these values. (author)
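The odd-even (red-black) ordering mentioned here is what removes the data dependence inside each half-sweep and makes the update vectorizable. Below is a small sketch for a 2-D Poisson-type model problem; it illustrates the ordering idea only and is not the CITATION solver or its multigroup diffusion equations.

```python
import numpy as np

def sor_red_black(n=64, omega=1.8, sweeps=300):
    """Red-black (odd-even ordered) SOR for -laplacian(phi) = 1 with zero
    boundaries on an n x n grid (unit spacing)."""
    phi = np.zeros((n, n))
    src = np.ones((n, n))
    colour = np.add.outer(np.arange(n), np.arange(n)) % 2   # 0 = red, 1 = black
    interior = np.zeros((n, n), dtype=bool)
    interior[1:-1, 1:-1] = True
    for _ in range(sweeps):
        for c in (0, 1):
            m = interior & (colour == c)
            # Gauss-Seidel value from the four neighbours (all of the other
            # colour), computed as a whole-array operation
            gs = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                         np.roll(phi, 1, 1) + np.roll(phi, -1, 1) + src)
            phi[m] += omega * (gs[m] - phi[m])
    return phi

phi = sor_red_black()
print(f"centre value ≈ {phi[32, 32]:.2f}")
```

Because every red point depends only on black neighbours (and vice versa), each half-sweep is a pure array expression, which is exactly the structure a vector machine can pipeline.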
Huffman coding in advanced audio coding standard
Brzuchalski, Grzegorz
This article presents several hardware architectures of the Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and a working implementation. Much attention has been paid to optimising the demand for hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
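A plain software sketch of the Huffman stage is shown below, building a code book from symbol frequencies with a binary heap; the AAC-specific codebooks and the hardware architectures from the paper are not reproduced.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code book (symbol -> bit string) from a sequence."""
    freq = Counter(symbols)
    # heap entries: [weight, tie-breaker id, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {heap[0][2][0]: "0"}
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # prefix the lighter subtree with 0
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # and the heavier subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return {s: code for s, code in heap[0][2:]}

book = huffman_code("abracadabra")
bits = "".join(book[s] for s in "abracadabra")
print(book, len(bits), "bits")               # 23 bits vs 88 bits of raw ASCII
```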
Short-run and long-run elasticities of import demand for crude oil in Turkey
Altinay, Galip
The aim of this study is to attempt to estimate the short-run and the long-run elasticities of demand for crude oil in Turkey by the recent autoregressive distributed lag (ARDL) bounds testing approach to cointegration. As a developing country, Turkey meets its growing demand for oil principally by foreign suppliers. Thus, the study focuses on modelling the demand for imported crude oil using annual data covering the period 1980-2005. The bounds test results reveal that a long-run cointegration relationship exists between the crude oil import and the explanatory variables: nominal price and income, but not in the model that includes real price in domestic currency. The long-run parameters are estimated through a long-run static solution of the estimated ARDL model, and then the short-run dynamics are estimated by the error correction model. The estimated models pass the diagnostic tests successfully. The findings reveal that the income and price elasticities of import demand for crude oil are inelastic both in the short run and in the long run
Short-Run and Long-Run Elasticities of Diesel Demand in Korea
Seung-Hoon Yoo
Full Text Available This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price change and elastic to income change in both the short-run and long-run. Therefore, a demand-side management through raising the price of diesel will be ineffective and tightening the regulation of using diesel more efficiently appears to be more effective in Korea. The demand for diesel is expected to continuously increase as the economy grows.
Change in running kinematics after cycling are related to alterations in running economy in triathletes.
Bonacci, Jason; Green, Daniel; Saunders, Philo U; Blanch, Peter; Franettovich, Melinda; Chapman, Andrew R; Vicenzino, Bill
Emerging evidence suggests that cycling may influence neuromuscular control during subsequent running, but the relationship between altered neuromuscular control and run performance in triathletes is not well understood. The aim of this study was to determine if a 45 min high-intensity cycle influences lower limb movement and muscle recruitment during running and whether changes in limb movement or muscle recruitment are associated with changes in running economy (RE) after cycling. RE, muscle activity (surface electromyography) and limb movement (sagittal plane kinematics) were compared between a control run (no preceding cycle) and a run performed after a 45 min high-intensity cycle in 15 moderately trained triathletes. Muscle recruitment and kinematics during running after cycling were altered in 7 of 15 (46%) triathletes. Changes in kinematics at the knee and ankle were significantly associated with the change in VO(2) after cycling (p ...). The findings indicate altered muscle recruitment in some triathletes and that changes in kinematics, especially at the ankle, are closely related to alterations in running economy after cycling. Copyright 2010 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Comparison of fractions of inactive modules between Run1 and Run2
Motohashi, Kazuki; The ATLAS collaboration
Fraction of inactive modules for each component of the ATLAS pixel detector at the end of Run 1 and the beginning of Run 2. A similar plot which uses a result of functionality tests during LS1 can be found in ATL-INDET-SLIDE-2014-388.
Weekly running volume and risk of running-related injuries among marathon runners
Rasmussen, Christina Haugaard; Nielsen, R.O.; Juul, Martin Serup
The purpose of this study was to investigate if the risk of injury declines with increasing weekly running volume before a marathon race.
Review of SKB's Code Documentation and Testing
Hicks, T.W.
SKB is in the process of developing the SR-Can safety assessment for a KBS 3 repository. The assessment will be based on quantitative analyses using a range of computational codes aimed at developing an understanding of how the repository system will evolve. Clear and comprehensive code documentation and testing will engender confidence in the results of the safety assessment calculations. This report presents the results of a review undertaken on behalf of SKI aimed at providing an understanding of how codes used in the SR 97 safety assessment and those planned for use in the SR-Can safety assessment have been documented and tested. Having identified the codes used by SKB, several codes were selected for review. Consideration was given to codes used directly in SKB's safety assessment calculations as well as to some of the less visible codes that are important in quantifying the different repository barrier safety functions. SKB's documentation and testing of the following codes were reviewed: COMP23 - a near-field radionuclide transport model developed by SKB for use in safety assessment calculations. FARF31 - a far-field radionuclide transport model developed by SKB for use in safety assessment calculations. PROPER - SKB's harness for executing probabilistic radionuclide transport calculations using COMP23 and FARF31. The integrated analytical radionuclide transport model that SKB has developed to run in parallel with COMP23 and FARF31. CONNECTFLOW - a discrete fracture network model/continuum model developed by Serco Assurance (based on the coupling of NAMMU and NAPSAC), which SKB is using to combine hydrogeological modelling on the site and regional scales in place of the HYDRASTAR code. DarcyTools - a discrete fracture network model coupled to a continuum model, recently developed by SKB for hydrogeological modelling, also in place of HYDRASTAR. ABAQUS - a finite element material model developed by ABAQUS, Inc, which is used by SKB to model repository buffer
TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code
Iida, Hiromasa; Kawasaki, Hiromitsu.
TOPIC has been developed for debugging geometry input data of the Monte Carlo transport code. The code has the following features: (1) It debugs the geometry input data of not only MORSE-GG but also MORSE-I, which is capable of treating torus geometry. (2) Its calculation results are shown in figures drawn by plotter or COM, and regions that are not defined or doubly defined are easily detected. (3) It finds a multitude of input data errors in a single run. (4) The input data required by this code are few, so that it is readily usable in a time-sharing system on the FACOM 230-60/75 computer. Example TOPIC calculations in the design study of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)
User's manual for computer code RIBD-II, a fission product inventory code
Marr, D.R.
The computer code RIBD-II is used to calculate inventories, activities, decay powers, and energy releases for the fission products generated in a fuel irradiation. Changes from the earlier RIBD code are: the expansion to include up to 850 fission product isotopes, input in the user-oriented NAMELIST format, and run-time choice of fuels from an extensively enlarged library of nuclear data. The library that is included in the code package contains yield data for 818 fission product isotopes for each of fourteen different fissionable isotopes, together with fission product transmutation cross sections for fast and thermal systems. Calculational algorithms are little changed from those in RIBD. (U.S.)
SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code
Bjerke, M.A.
A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The package was developed around the analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can be automated easily to further simplify its use and that the conversion of the entire system to a base code other than RELAP is possible
Running and Osteoarthritis: Does Recreational or Competitive Running Increase the Risk?
Exercise, like running, is good for overall health and, specifically, our hearts, lungs, muscles, bones, and brains. However, some people are concerned about the impact of running on long-term joint health. Does running lead to higher rates of arthritis in knees and hips? While many researchers find that running protects bone health, others are concerned that this exercise poses a high risk for age-related changes to hips and knees. A study published in the June 2017 issue of JOSPT suggests that the difference in these outcomes depends on the frequency and intensity of running. J Orthop Sports Phys Ther 2017;47(6):391. doi:10.2519/jospt.2017.0505.
Split-phase motor running as capacitor starts motor and as capacitor run motor
Yahaya Asizehi ENESI
In this paper, the input parameters of a single-phase split-phase induction motor are used to investigate and study the output performance characteristics of capacitor start and capacitor run induction motors. The values of these input parameters are used in the design characteristics of the capacitor run and capacitor start motor, with each motor connected to a rated or standard capacitor in series with the auxiliary (starting) winding for normal operating conditions. The magnitude of the capacitor that will develop maximum torque in the capacitor start motor and the capacitor run motor is investigated and determined by simulation. Each of these capacitors is connected to the auxiliary winding of the split-phase motor, thereby transforming it into a capacitor start or capacitor run motor. The starting current and starting torque of the split-phase motor (SPM), capacitor run motor (CRM) and capacitor start motor (CSM) are compared for their suitability in operational performance and applications.
Report number codes
Nelson, R.N. (ed.)
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Long-Run Neutrality and Superneutrality in an ARIMA Framework.
Fisher, Mark E; Seater, John J
The authors formalize long-run neutrality and long-run superneutrality in the context of a bivariate ARIMA model; show how the restrictions implied by long-run neutrality and long-run superneutrality depend on the orders of integration of the variables; apply their analysis to previous work, showing how that work is related to long-run neutrality and long-run superneutrality; and provide some new evidence on long-run neutrality and long-run superneutrality. Copyright 1993 by American Economic...
The arbitrary order design code Tlie 1.0
Zeijts, J. van; Neri, Filippo
We describe the arbitrary order charged particle transfer map code TLIE. This code is a general 6D relativistic design code with a MAD-compatible input language which, among other features, implements user-defined functions and subroutines and nested fitting and optimization. First we describe the mathematics and physics in the code. Aside from generating maps for all the standard accelerator elements, we describe an efficient method for generating nonlinear transfer maps for realistic magnet models. We have implemented the method to arbitrary order in our accelerator design code for cylindrical current sheet magnets. We also have implemented a self-consistent space-charge approach as in CHARLIE. Subsequently we give a description of the input language and finally, we give several examples from production runs, such as cases with stacked multipoles with overlapping fringe fields. (Author)
Recent advances in neutral particle transport methods and codes
Azmy, Y.Y.
An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance their present capabilities, as well as to expand their range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also mentioned
Recent advances in the Poisson/superfish codes
Ryne, R.; Barts, T.; Chan, K.C.D.; Cooper, R.; Deaven, H.; Merson, J.; Rodenz, G.
We report on advances in the POISSON/SUPERFISH family of codes used in the design and analysis of magnets and rf cavities. The codes include preprocessors for mesh generation and postprocessors for graphical display of output and calculation of auxiliary quantities. Release 3 became available in January 1992; it contains many code corrections and physics enhancements, and it also includes support for PostScript, DISSPLA, GKS and PLOT10 graphical output. Release 4 will be available in September 1992; it is free of all bit packing, making the codes more portable and able to treat very large numbers of mesh points. Release 4 includes the preprocessor FRONT and a new menu-driven graphical postprocessor that runs on workstations under X-Windows and that is capable of producing arrow plots. We will present examples that illustrate the new capabilities of the codes. (author). 6 refs., 3 figs
COMPBRN III: a computer code for modeling compartment fires
Ho, V.; Siu, N.; Apostolakis, G.; Flanagan, G.F.
The computer code COMPBRN III deterministically models the behavior of compartment fires. This code is an improvement of the original COMPBRN codes. It employs a different air entrainment model and numerical scheme to estimate properties of the ceiling hot gas layer model. Moreover, COMPBRN III incorporates a number of improvements in shape factor calculations and error checking, which distinguish it from the COMPBRN II code. This report presents the ceiling hot gas layer model employed by COMPBRN III as well as several other modifications. Information necessary to run COMPBRN III, including descriptions of required input and resulting output, are also presented. Simulation of experiments and a sample problem are included to demonstrate the usage of the code. 37 figs., 46 refs
Habituation contributes to the decline in wheel running within wheel-running reinforcement periods.
Belke, Terry W; McLaughlin, Ryan J
Habituation appears to play a role in the decline in wheel running within an interval. Aoyama and McSweeney [Aoyama, K., McSweeney, F.K., 2001. Habituation contributes to within-session changes in free wheel running. J. Exp. Anal. Behav. 76, 289-302] showed that when a novel stimulus was presented during a 30-min interval, wheel-running rates following the stimulus increased to levels approximating those earlier in the interval. The present study sought to assess the role of habituation in the decline in running that occurs over a briefer interval. In two experiments, rats responded on fixed-interval 30-s schedules for the opportunity to run for 45 s. Forty reinforcers were completed in each session. In the first experiment, the brake and chamber lights were repeatedly activated and inactivated after 25 s of a reinforcement interval had elapsed to assess the effect on running within the remaining 20 s. Presentations of the brake/light stimulus occurred during nine randomly determined reinforcement intervals in a session. In the second experiment, a 110 dB tone was emitted after 25 s of the reinforcement interval. In both experiments, presentation of the stimulus produced an immediate decline in running that dissipated over sessions. No increase in running following the stimulus was observed in the first experiment until the stimulus-induced decline dissipated. In the second experiment, increases in running were observed following the tone in the first session as well as when data were averaged over several sessions. In general, the results concur with the assertion that habituation plays a role in the decline in wheel running that occurs within both long and short intervals. (c) 2004 Elsevier B.V. All rights reserved.
Healthy Living Initiative: Running/Walking Club
Stylianou, Michalis; Kulinna, Pamela Hodges; Kloeppel, Tiffany
This study was grounded in the public health literature and the call for schools to serve as physical activity intervention sites. Its purpose was twofold: (a) to examine the daily distance covered by students in a before-school running/walking club throughout 1 school year and (b) to gain insights into the teachers' perspectives of the club.
The QCD Running Coupling and its Measurement
Altarelli, Guido
In this lecture, after recalling the basic definitions and facts about the running coupling in QCD, I present a critical discussion of the methods for measuring $\alpha_s$ and select those that appear to me as the most reliably precise.
Daytime running lights : its safety evidence revisited.
Koornstra, M.J.
Retrospective in-depth accident studies from several countries confirm that human perception errors are the main causal factor in road accidents. The share of accident types which are relevant for the effect of daytime running lights (DRL), such as overtaking and crossing accidents, in the total of
105-KE Basin Pilot Run design plan
Sherrell, D.L.
This document identifies all design deliverables and procedures applicable to the 105-KE Basin Pilot Run. It also establishes a general design strategy, defines interface control requirements, and covers planning for mechanical, electrical, instrument/control system, and equipment installation design
AUTHOR|(INSPIRE)INSPIRE-00222798; The ATLAS collaboration
The ATLAS trigger successfully collected collision data during the first run of the LHC between 2009-2013 at different centre-of-mass energies between 900 GeV and 8 TeV. The trigger system consists of a hardware Level-1 and a software-based high level trigger (HLT) that reduces the event rate from the design bunch-crossing rate of 40 MHz to an average recording rate of a few hundred Hz. In Run-2, the LHC will operate at centre-of-mass energies of 13 and 14 TeV and higher luminosity, resulting in roughly five times higher trigger rates. A brief review of the ATLAS trigger system upgrades that were implemented between Run-1 and Run-2, allowing to cope with the increased trigger rates while maintaining or even improving the efficiency to select physics processes of interest, will be given. This includes changes to the Level-1 calorimeter and muon trigger systems, the introduction of a new Level-1 topological trigger module and the merging of the previously two-level HLT system into a single event filter farm. A ...
Collagen gene interactions and endurance running performance
to complete any of the individual components (3.8 km swim, 180 km bike or 42.2 km run) of the 226 km event. The major ... may affect normal collagen fibrillogenesis and alter the mechanical properties of ... using an XP Thermal Cycler (Block model XP-G, BIOER Technology Co., Japan). ... New insights into the function of ...
Jet physics at CDF Run II
Safonov, A.; /UC, Davis
The latest results on jet physics at CDF are presented and discussed. Particular attention is paid to studies of the inclusive jet cross section using 177 pb⁻¹ of Run II data. Also discussed is a study of gluon and quark jet fragmentation.
EMBL rescue package keeps bioinformatics centre running
Abott, A
The threat to the EBI arising from the EC refusal to fund its running costs seems to have been temporarily lifted. At a meeting in EMBL, Heidelberg, delegates agreed in principle to make up the shortfall of 5 million euros. A final decision will be taken at a special meeting of the EMBL council in March (1 page).
Measuring the running top-quark mass
Langenfeld, Ulrich; Uwer, Peter
In this contribution we discuss conceptual issues of current mass measurements performed at the Tevatron. In addition we propose an alternative method which is theoretically much cleaner and to a large extent free from the problems encountered in current measurements. In detail we discuss the direct determination of the top-quark's running mass from the cross section measurements performed at the Tevatron. (orig.)
Individualism, innovation, and long-run growth.
Gorodnichenko, Yuriy; Roland, Gerard
Countries having a more individualist culture have enjoyed higher long-run growth than countries with a more collectivist culture. Individualist culture attaches social status rewards to personal achievements and thus, provides not only monetary incentives for innovation but also social status rewards, leading to higher rates of innovation and economic growth.
Estimating Stair Running Performance Using Inertial Sensors
Lauro V. Ojeda
Stair running, both ascending and descending, is a challenging aerobic exercise that many athletes, recreational runners, and soldiers perform during training. Studying biomechanics of stair running over multiple steps has been limited by the practical challenges presented while using optical-based motion tracking systems. We propose using foot-mounted inertial measurement units (IMUs) as a solution, as they enable unrestricted motion capture in any environment and without need for external references. In particular, this paper presents methods for estimating foot velocity and trajectory during stair running using foot-mounted IMUs. Computational methods leverage the stationary periods occurring during the stance phase and known stair geometry to estimate foot orientation and trajectory, ultimately used to calculate stride metrics. These calculations, applied to human participant stair running data, reveal performance trends through timing, trajectory, energy, and force stride metrics. We present the results of our analysis of experimental data collected on eleven subjects. Overall, we determine that for either ascending or descending, the stance time is the strongest predictor of speed as shown by its high correlation with stride time.
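The estimation approach summarised above (integrating foot-mounted IMU signals between the stationary stance periods) can be illustrated with a minimal zero-velocity-update sketch in Python. This is not the authors' implementation; the sampling rate, thresholds and variable names below are assumptions made for illustration only.

```python
# Minimal zero-velocity-update (ZUPT) sketch for foot-mounted IMU data.
# Hypothetical illustration only: the sampling rate, thresholds and variable
# names are assumptions, not the implementation used in the paper above.
import numpy as np

FS = 500.0   # assumed sampling rate [Hz]
G = 9.81     # gravity [m/s^2]

def detect_stance(acc, gyro, acc_tol=0.5, gyro_tol=0.5):
    """Flag samples where the foot is (approximately) stationary.

    acc, gyro: (N, 3) arrays of raw accelerometer [m/s^2] and gyro [rad/s] data.
    """
    acc_mag = np.linalg.norm(acc, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_mag - G) < acc_tol) & (gyro_mag < gyro_tol)

def integrate_with_zupt(acc_world, stance):
    """Integrate gravity-compensated world-frame acceleration to velocity and
    position, resetting the velocity to zero during each stance period."""
    dt = 1.0 / FS
    vel = np.zeros_like(acc_world)
    pos = np.zeros_like(acc_world)
    for i in range(1, len(acc_world)):
        vel[i] = vel[i - 1] + acc_world[i] * dt
        if stance[i]:                 # zero-velocity update limits drift
            vel[i] = 0.0
        pos[i] = pos[i - 1] + vel[i] * dt
    return vel, pos

# Tiny check: a foot at rest should be flagged as stationary throughout.
acc_rest = np.tile([0.0, 0.0, G], (100, 1))
gyro_rest = np.zeros((100, 3))
print(detect_stance(acc_rest, gyro_rest).all())   # True
```

Stride metrics such as stride length would then follow from the displacement between consecutive stance periods.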
Numerical Modelling of Wave Run-Up
Ramirez, Jorge Robert Rodriguez; Frigaard, Peter; Andersen, Thomas Lykke
Wave loads are important in problems related to offshore structures, such as wave run-up and slamming. The computation of such wave problems is carried out by CFD models. This paper presents one model, NS3, which solves the 3D Navier-Stokes equations and uses the Volume of Fluid (VOF) method to treat the free...
Daytime running lights : costs or benefits?
Brouwer, R.F.T.; Janssen, W.H.; Theeuwes, J.; Alferdinck, J.W.A.M.; Duistermaat, M.
The present study deals with the possibility that road users in the vicinity of a vehicle with daytime running lights (DRL) would suffer from a decreased conspicuity because of the presence of that vehicle. In an experiment the primary effects of DRL on the conspicuity of other road users were
Running coupling constants of the Luttinger liquid
Boose, D.; Jacquot, J.L.; Polonyi, J.
We compute the one-loop expressions of two running coupling constants of the Luttinger model. The obtained expressions have a nontrivial momentum dependence with Landau poles. The reason for the discrepancy between our results and those of other studies, which find that the scaling laws are trivial, is explained
Wave run-up on sandbag slopes
Thamnoon Rasmeemasmuang
On occasions, sandbag revetments are temporarily applied to armour sandy beaches against erosion. Nevertheless, an empirical formula to determine the wave run-up height on sandbag slopes has not been available heretofore. In this study a wave run-up formula which considers the roughness of slope surfaces is proposed for the case of sandbag slopes. A series of laboratory experiments on the wave run-up on smooth slopes and sandbag slopes were conducted in a regular-wave flume, leading to the finding of empirical parameters for the formula. The proposed empirical formula is applicable to wave steepness ranging from 0.01 to 0.14 and to the thickness of placed sandbags relative to the wave height ranging from 0.17 to 3.0. The study shows that the wave run-up height computed by the formula for the sandbag slopes is 26-40% lower than that computed by the formula for the smooth slopes.
The CDF Run II disk inventory manager
Hubbard, Paul; Lammel, Stephan
The Collider Detector at Fermilab (CDF) experiment records and analyses proton-antiproton interactions at a center-of-mass energy of 2 TeV. Run II of the Fermilab Tevatron started in April of this year. The duration of the run is expected to be over two years. One of the main data handling strategies of CDF for Run II is to hide all tape access from the user and to facilitate sharing of data and thus disk space. A disk inventory manager was designed and developed over the past years to keep track of the data on disk, to coordinate user access to the data, and to stage data back from tape to disk as needed. The CDF Run II disk inventory manager consists of a server process, a user and administrator command line interfaces, and a library with the routines of the client API. Data are managed in filesets which are groups of one or more files. The system keeps track of user access to the filesets and attempts to keep frequently accessed data on disk. Data that are not on disk are automatically staged back from tape as needed. For CDF the main staging method is based on the mt-tools package as tapes are written according to the ANSI standard
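The bookkeeping idea behind such a disk inventory manager (track access to filesets, stage data back from tape on demand, and evict the least recently used filesets when the disk fills up) can be sketched as follows. The class and function names are hypothetical and do not correspond to the actual CDF interfaces.

```python
# Toy sketch of the disk inventory idea: filesets are staged from tape on
# demand and the least-recently-used filesets are evicted when the disk is
# full.  All class and function names are hypothetical, not the CDF interfaces.
from collections import OrderedDict

class DiskInventory:
    def __init__(self, capacity_gb, stage_from_tape):
        self.capacity_gb = capacity_gb
        self.used_gb = 0.0
        self.stage_from_tape = stage_from_tape   # callback: fileset name -> size [GB]
        self.filesets = OrderedDict()            # name -> size, ordered by last access

    def access(self, name):
        """Return a fileset for reading, staging it from tape if necessary."""
        if name in self.filesets:
            self.filesets.move_to_end(name)      # mark as most recently used
            return name
        size = self.stage_from_tape(name)
        while self.used_gb + size > self.capacity_gb and self.filesets:
            _victim, vsize = self.filesets.popitem(last=False)   # evict the LRU fileset
            self.used_gb -= vsize
        self.filesets[name] = size
        self.used_gb += size
        return name

# Example with a fake tape backend where every fileset is 50 GB.
inv = DiskInventory(capacity_gb=200, stage_from_tape=lambda name: 50.0)
for fs in ["run1_ele", "run1_mu", "run1_jet", "run1_ele", "run2_jet", "run2_mu"]:
    inv.access(fs)
print(sorted(inv.filesets))   # the most recently used filesets remain on disk
```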
Common Running Overuse Injuries and Prevention
Žiga Kozinc
Runners are particularly prone to developing overuse injuries. The most common running-related injuries include medial tibial stress syndrome, Achilles tendinopathy, plantar fasciitis, patellar tendinopathy, iliotibial band syndrome, tibial stress fractures, and patellofemoral pain syndrome. Two of the most significant risk factors appear to be injury history and weekly distance. Several trials have successfully identified biomechanical risk factors for specific injuries, with increased ground reaction forces, excessive foot pronation, hip internal rotation and hip adduction during stance phase being mentioned most often. However, evidence on interventions for lowering injury risk is limited, especially regarding exercise-based interventions. Biofeedback training for lowering ground reaction forces is one of the few methods proven to be effective. It seems that the best way to approach running injury prevention is through individualized treatment. Each athlete should be assessed separately and scanned for risk factors, which should be then addressed with specific exercises. This review provides an overview of most common running-related injuries, with a particular focus on risk factors, and emphasizes the problems encountered in preventing running-related injuries.
The running athlete: Roentgenograms and remedies
Pavlov, H.; Torg, J.S.
The authors have put together an atlas of radiographs of almost every conceivable running injury to the foot, ankle, leg, knee, femur, groin, and spine. Text material is limited to legends which describe the figures, and the remedies listed are brief. The text indicates conservative versus surgical treatment and, in some instances, recommends a surgical procedure
ATLAS Data Preparation in Run 2
Laycock, Paul; The ATLAS collaboration
In this presentation, the data preparation workflows for Run 2 are presented. Online data quality uses a new hybrid software release that incorporates the latest offline data quality monitoring software for the online environment. This is used to provide fast feedback in the control room during a data acquisition (DAQ) run, via a histogram-based monitoring framework as well as the online Event Display. Data are sent to several streams for offline processing at the dedicated Tier-0 computing facility, including dedicated calibration streams and an "express" physics stream containing approximately 2% of the main physics stream. This express stream is processed as data arrives, allowing a first look at the offline data quality within hours of a run end. A prompt calibration loop starts once an ATLAS DAQ run ends, nominally defining a 48 hour period in which calibrations and alignments can be derived using the dedicated calibration and express streams. The bulk processing of the main physics stream starts on expi...
The D0 run II trigger system
Schwienhorst, Reinhard; Michigan State U.
The D0 detector at the Fermilab Tevatron was upgraded for Run II. This upgrade included improvements to the trigger system in order to be able to handle the increased Tevatron luminosity and higher bunch crossing rates compared to Run I. The D0 Run II trigger is a highly flexible system to select events to be written to tape from an initial interaction rate of about 2.5 MHz. This is done in a three-tier pipelined, buffered system. The first tier (level 1) processes fast detector pick-off signals in a hardware/firmware based system to reduce the event rate to about 1.5 kHz. The second tier (level 2) uses information from level 1 and forms simple physics objects to reduce the rate to about 850 Hz. The third tier (level 3) uses full detector readout and event reconstruction on a filter farm to reduce the rate to 20-30 Hz. The D0 trigger menu contains a wide variety of triggers. While the emphasis is on triggering on generic lepton and jet final states, there are also trigger terms for specific final state signatures. In this document we describe the D0 trigger system as it was implemented and is currently operating in Run II
Run-2 ATLAS Trigger and Detector Performance
Winklmeier, Frank; The ATLAS collaboration
The 2nd LHC run has started in June 2015 with a pp centre-of-mass collision energy of 13 TeV, and ATLAS has taken first data at this new energy. In this talk the improvements made to the ATLAS experiment during the 2-year shutdown 2013/2014 will be discussed, and first detector and trigger performance results from the Run-2 will be shown. In general, reconstruction algorithms of tracks, e/gamma, muons, taus, jets and flavour tagging have been improved for Run-2. The new reconstruction algorithms and their performance measured using the data taken in 2015 at sqrt(s)=13 TeV will be discussed. Reconstruction efficiency, isolation performance, transverse momentum resolution and momentum scales are measured in various regions of the detector and in momentum intervals enlarged with respect to those measured in the Run-1. This presentation will also give an overview of the upgrades to the ATLAS trigger system that have been implemented during the LHC shutdown in order to deal with the increased trigger rates (fact...
KINETIC CONSEQUENCES OF CONSTRAINING RUNNING BEHAVIOR
John A. Mercer
It is known that impact forces increase with running velocity as well as when stride length increases. Since stride length naturally changes with changes in submaximal running velocity, it was not clear which factor, running velocity or stride length, played a critical role in determining impact characteristics. The aim of the study was to investigate whether or not stride length influences the relationship between running velocity and impact characteristics. Eight volunteers (mass = 72.4 ± 8.9 kg; height = 1.7 ± 0.1 m; age = 25 ± 3.4 years) completed two running conditions: preferred stride length (PSL) and stride length constrained at 2.5 m (SL2.5). During each condition, participants ran at a variety of speeds with the intent that the range of speeds would be similar between conditions. During PSL, participants were given no instructions regarding stride length. During SL2.5, participants were required to strike targets placed on the floor that resulted in a stride length of 2.5 m. Ground reaction forces were recorded (1080 Hz) as well as leg and head accelerations (uni-axial accelerometers). Impact force and impact attenuation (calculated as the ratio of head and leg impact accelerations) were recorded for each running trial. Scatter plots were generated plotting each parameter against running velocity. Lines of best fit were calculated with the slopes recorded for analysis. The slopes were compared between conditions using paired t-tests. Data from two subjects were dropped from analysis since the velocity ranges were not similar between conditions, resulting in the analysis of six subjects. The slope of the impact force vs. velocity relationship was different between conditions (PSL: 0.178 ± 0.16 BW/m·s-1; SL2.5: -0.003 ± 0.14 BW/m·s-1; p < 0.05). The slope of the impact attenuation vs. velocity relationship was different between conditions (PSL: 5.12 ± 2.88 %/m·s-1; SL2.5: 1.39 ± 1.51 %/m·s-1; p < 0.05). Stride length was an important factor
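The statistical procedure described above (a per-subject line of best fit, with slopes compared across conditions by a paired t-test) can be reproduced in outline with standard tools; the numbers below are synthetic and only illustrate the shape of the analysis, not the study's data.

```python
# Sketch of the slope analysis described above: fit a line of best fit per
# subject and condition, then compare slopes between conditions with a paired
# t-test.  The numbers below are synthetic, for illustration only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

def impact_slope(velocities, impact_forces):
    """Slope of the impact-force-versus-velocity line of best fit."""
    slope, _intercept = np.polyfit(velocities, impact_forces, 1)
    return slope

n_subjects = 6
slopes_psl, slopes_sl25 = [], []
for _ in range(n_subjects):
    v = np.linspace(2.5, 5.5, 8)                             # running velocities [m/s]
    f_psl = 1.5 + 0.18 * v + rng.normal(0, 0.05, v.size)     # preferred stride length
    f_sl25 = 1.5 + 0.00 * v + rng.normal(0, 0.05, v.size)    # constrained 2.5 m stride
    slopes_psl.append(impact_slope(v, f_psl))
    slopes_sl25.append(impact_slope(v, f_sl25))

t, p = ttest_rel(slopes_psl, slopes_sl25)
print(f"mean slope PSL = {np.mean(slopes_psl):.3f} BW per m/s, "
      f"SL2.5 = {np.mean(slopes_sl25):.3f}, paired t-test p = {p:.3f}")
```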
Cryptography cracking codes
While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.
Coded Splitting Tree Protocols
Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar
This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each ... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...
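The plain tree-splitting building block referred to above can be sketched with a short simulation. The coded/SIC extension that the paper adds is not modelled here, and the fair-coin splitting rule is only one common choice.

```python
# Minimal simulation of binary tree splitting for collision resolution, the
# building block of the protocol above.  The coded/SIC extension is not
# modelled; this only counts the slots a plain splitting tree needs to resolve
# one collision among n users (each collided user flips a fair coin to choose
# a subgroup, and subgroups are resolved recursively).
import random

def resolve(users):
    """Return the number of slots needed to resolve `users` contenders."""
    slots = 1                      # the slot in which this group transmits
    if len(users) <= 1:
        return slots               # idle or success: no further splitting
    left = [u for u in users if random.random() < 0.5]
    right = [u for u in users if u not in left]
    return slots + resolve(left) + resolve(right)

random.seed(1)
n = 8
trials = [resolve(list(range(n))) for _ in range(1000)]
print(f"average slots to resolve {n} users: {sum(trials) / len(trials):.2f}")
```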
Transport theory and codes
Clancy, B.E.
This chapter begins with the neutron transport equation and covers one-dimensional plane geometry problems, one-dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers the problems which can be solved, eigenvalue problems, the outer iteration loop, the inner iteration loop, and finite difference solution procedures. The input and output data for ANISN are also discussed. Two-dimensional problems and codes such as DOT are then treated. Finally, an overview of Monte Carlo methods and codes is given
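The outer (fission-source) iteration mentioned above for eigenvalue problems can be sketched as a power iteration in which the inner transport solve is abstracted into a single linear solve. The two-group constants below are made-up numbers, not data from ANISN or DOT.

```python
# Sketch of the outer (fission-source) iteration used in eigenvalue problems
# of the kind solved by ANISN-type codes.  The inner transport sweep is
# abstracted into a single linear solve; the two-group cross sections are
# made-up numbers for illustration only.
import numpy as np

# Two-group infinite-medium operators: L = removal (absorption + downscatter),
# F = fission production (nu * Sigma_f, with all fission neutrons born in group 1).
L = np.array([[0.030 + 0.020, 0.000],      # group 1: absorption + downscatter
              [-0.020,        0.080]])     # group 2 fed by downscatter from group 1
F = np.array([[0.005, 0.120],
              [0.000, 0.000]])

def power_iteration(L, F, tol=1e-8, max_outer=200):
    phi = np.ones(L.shape[0])
    k = 1.0
    for _ in range(max_outer):
        source = F @ phi / k                  # fission source from previous outer
        phi_new = np.linalg.solve(L, source)  # "inner" solve for the new flux
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new
        k, phi = k_new, phi_new
    return k, phi

k_eff, flux = power_iteration(L, F)
print(f"k_eff = {k_eff:.5f}")
```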
Gravity inversion code
Burkhard, N.R.
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
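The "stabilized linear inverse theory" step can be illustrated with a Tikhonov-regularized least-squares solve. The forward operator and data below are synthetic stand-ins, not the Bouguer-gravity kernel used by INVERT.

```python
# Illustration of a stabilized (Tikhonov-regularized) linear inversion step of
# the kind referred to above: solve  min ||G m - d||^2 + alpha^2 ||m||^2.
# The forward operator G and data d are synthetic placeholders.
import numpy as np

def stabilized_inversion(G, d, alpha):
    """Return the regularized least-squares model estimate."""
    n = G.shape[1]
    lhs = G.T @ G + alpha**2 * np.eye(n)
    rhs = G.T @ d
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(42)
m_true = np.sin(np.linspace(0, np.pi, 30))           # "topography" of the anomaly
G = rng.normal(size=(50, 30)) / 30                   # stand-in forward operator
d = G @ m_true + rng.normal(scale=0.01, size=50)     # noisy "gravity" data
m_est = stabilized_inversion(G, d, alpha=0.05)
print("rms model error:", np.sqrt(np.mean((m_est - m_true) ** 2)))
```

The regularization parameter alpha plays the stabilizing role; iterating between a trend removal and this solve mirrors the TREND/INVERT loop described above only loosely.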
The MESORAD dose assessment model: Computer code
Ramsdell, J.V.; Athey, G.F.; Bander, T.J.; Scherpelz, R.I.
MESORAD is a dose equivalent model for emergency response applications that is designed to be run on minicomputers. It has been developed by the Pacific Northwest Laboratory for use as part of the Intermediate Dose Assessment System in the US Nuclear Regulatory Commission Operations Center in Washington, DC, and the Emergency Management System in the US Department of Energy Unified Dose Assessment Center in Richland, Washington. This volume describes the MESORAD computer code and contains a listing of the code. The technical basis for MESORAD is described in the first volume of this report (Scherpelz et al. 1986). A third volume of the documentation is planned. That volume will contain utility programs and input and output files that can be used to check the implementation of MESORAD. 18 figs., 4 tabs
The efficacy of downhill running as a method to enhance running economy in trained distance runners.
Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P
Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in energy cost of running (EC) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (VO2max). No changes in EC were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal·kg⁻¹·km⁻¹, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal·kg⁻¹·km⁻¹). Moreover, no changes in EC during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km·h⁻¹, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km·h⁻¹, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.
Accounting for Laminar Run & Trip Drag in Supersonic Cruise Performance Testing
Goodsell, Aga M.; Kennelly, Robert A.
An improved laminar run and trip drag correction methodology for supersonic cruise performance testing was derived. This method required more careful analysis of the flow visualization images which revealed delayed transition particularly on the inboard upper surface, even for the largest trip disks. In addition, a new code was developed to estimate the laminar run correction. Once the data were corrected for laminar run, the correct approach to the analysis of the trip drag became evident. Although the data originally appeared confusing, the corrected data are consistent with previous results. Furthermore, the modified approach, which was described in this presentation, extends prior historical work by taking into account the delayed transition caused by the blunt leading edges.
PP: A graphics post-processor for the EQ6 reaction path code
Stockman, H.W.
The PP code is a graphics post-processor and plotting program for EQ6, a popular reaction-path code. PP runs on personal computers, allocates memory dynamically, and can handle very large reaction path runs. Plots of simple variable groups, such as fluid and solid phase composition, can be obtained with as few as two keystrokes. Navigation through the list of reaction path variables is simple and efficient. Graphics files can be exported for inclusion in word processing documents and spreadsheets, and experimental data may be imported and superposed on the reaction path runs. The EQ6 thermodynamic database can be searched from within PP, to simplify interpretation of complex plots
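The post-processing idea (plot a group of reaction-path variables with a couple of commands, optionally superposing experimental data) can be sketched generically. The CSV layout and column names are assumptions for illustration; PP itself reads native EQ6 output, not this format.

```python
# Minimal sketch of the post-processing idea: read a reaction-path table and
# plot a group of variables against reaction progress, optionally superposing
# experimental points.  The column names and CSV layout are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

def plot_variable_group(path_csv, x_col, group_cols, experiment_csv=None):
    run = pd.read_csv(path_csv)
    ax = run.plot(x=x_col, y=group_cols, logx=True)
    if experiment_csv is not None:
        exp = pd.read_csv(experiment_csv)
        for col in group_cols:
            if col in exp:
                ax.scatter(exp[x_col], exp[col], marker="x", label=f"{col} (exp)")
    ax.set_xlabel(x_col)
    ax.set_ylabel("concentration / moles")
    ax.legend()
    plt.show()

# e.g. plot_variable_group("eq6_run.csv", "xi", ["Ca++", "HCO3-", "calcite"])
```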
Fulcrum Network Codes
Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
Supervised Convolutional Sparse Coding
Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter
coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements
OCA Code Enforcement
Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...
The fast code
Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)]
The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)
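Two of the load-comparison analyses listed above, a power spectral density and an azimuth-averaged bin plot, can be sketched on a synthetic load signal. Rainflow counting is omitted, and the sampling rate and rotor speed are assumed values, not ESI-80 data.

```python
# Sketch of two of the load-comparison analyses mentioned above: a power
# spectral density via Welch's method and an azimuth-averaged bin plot.
# The load signal is synthetic; rainflow counting is not shown.
import numpy as np
from scipy.signal import welch

fs = 50.0                              # assumed sampling rate [Hz]
t = np.arange(0, 120, 1 / fs)
rotor_speed = 1.0                      # assumed rotor speed [rev/s]
azimuth = (360.0 * rotor_speed * t) % 360.0
rng = np.random.default_rng(0)
load = 10 + 3 * np.sin(2 * np.pi * rotor_speed * t) + rng.normal(0, 0.5, t.size)

# Power spectral density of the load signal.
freq, psd = welch(load, fs=fs, nperseg=1024)

# Azimuth-averaged bins: mean load in 10-degree azimuth sectors.
bins = np.arange(0, 370, 10)
bin_idx = np.digitize(azimuth, bins) - 1
azimuth_avg = np.array([load[bin_idx == i].mean() for i in range(36)])

print("dominant (1P) frequency near", freq[np.argmax(psd[1:]) + 1], "Hz")
print("max azimuth-averaged load:", azimuth_avg.max().round(2))
```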
Code Disentanglement: Initial Plan
Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
Induction technology optimization code
Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.
A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
VT ZIP Code Areas
Vermont Center for Geographic Information — A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...
Bandwidth efficient coding
Anderson, John B
Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.
The description of reactor lattice codes is carried out on the example of the WIMSD-5B code. The WIMS code in its various versions is the most recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)
Critical Care Coding for Neurologists.
Nuwer, Marc R; Vespa, Paul M
Lattice Index Coding
Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele
The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
Cracking the Gender Codes
Rennison, Betina Wolfgang
... extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion ... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve.
Post-test analysis of ROSA-III experiment RUNs 705 and 706
Koizumi, Yasuo; Soda, Kunihisa; Kikuchi, Osamu; Tasaka, Kanji; Shiba, Masayoshi
The purpose of the ROSA-III experiment with a scaled BWR test facility is to examine the primary coolant thermal-hydraulic behavior and the performance of the ECCS during a postulated loss-of-coolant accident of a BWR. The results provide information for the verification and improvement of reactor safety analysis codes. RUNs 705 and 706 assumed a 200% double-ended break at the recirculation pump suction. RUN 705 was an isothermal blowdown test without initial power and initial core flow. In RUN 706, for an average core power and no ECCS, the main steam line and feed water line were isolated immediately on the break. Post-test analysis of RUNs 705 and 706 was made with the computer code RELAP4J. The agreement in system pressure between calculation and experiment was satisfactory. However, the calculated heater rod surface temperatures were significantly higher than the experimental ones. The calculated axial temperature profile was different in tendency from the experimental one. The calculated mixture level behavior in the core was different from the liquid void distribution observed in the experiment. The rapid rise of fuel rod surface temperature was caused by the reduction of the heat transfer coefficient attributed to the increase of quality. The need was indicated for improvement of the analytical model of void distribution in the core, and also to perform a characteristic test of the recirculation line under reverse flow and to examine the core inlet flow rate experimentally and analytically. (author)
Massively parallel Monte Carlo. Experiences running nuclear simulations on a large condor cluster
Tickner, James; O'Dwyer, Joel; Roach, Greg; Uher, Josef; Hitchen, Greg
The trivially-parallel nature of Monte Carlo (MC) simulations makes them ideally suited for running on a distributed, heterogeneous computing environment. We report on the setup and operation of a large, cycle-harvesting Condor computer cluster, used to run MC simulations of nuclear instruments ('jobs') on approximately 4,500 desktop PCs. Successful operation must balance the competing goals of maximizing the availability of machines for running jobs whilst minimizing the impact on users' PC performance. This requires classification of jobs according to anticipated run-time and priority and careful optimization of the parameters used to control job allocation to host machines. To maximize use of a large Condor cluster, we have created a powerful suite of tools to handle job submission and analysis, as the manual creation, submission and evaluation of large numbers (hundreds to thousands) of jobs would be too arduous. We describe some of the key aspects of this suite, which has been interfaced to the well-known MCNP and EGSnrc nuclear codes and our in-house PHOTON optical MC code. We report on our practical experiences of operating our Condor cluster and present examples of several large-scale instrument design problems that have been solved using this tool. (author)
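The job-classification idea (anticipated run-time and priority drive how jobs are allocated) can be sketched with a small helper that writes a basic HTCondor submit description. The thresholds, class names and priorities below are assumptions for illustration, not the tool suite described in the paper.

```python
# Sketch of the job-classification idea described above: jobs are put into
# short/medium/long classes from their anticipated run-time, and the class
# drives the priority used at submission.  Thresholds, class names and the
# generated submit description are illustrative assumptions only.
def classify(expected_hours):
    if expected_hours < 1:
        return "short", 10      # highest priority: fills short gaps in user activity
    if expected_hours < 8:
        return "medium", 5
    return "long", 0

def submit_description(executable, args, expected_hours):
    job_class, priority = classify(expected_hours)
    return "\n".join([
        f"# job class: {job_class}",
        f"executable = {executable}",
        f"arguments  = {args}",
        "output     = job.$(Cluster).$(Process).out",
        "error      = job.$(Cluster).$(Process).err",
        "log        = job.$(Cluster).log",
        f"priority   = {priority}",
        "queue 1",
    ])

print(submit_description("run_mcnp.sh", "detector_model.inp", expected_hours=6))
```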
Calcaneus length determines running economy: implications for endurance running performance in modern humans and Neandertals.
Raichlen, David A; Armstrong, Hunter; Lieberman, Daniel E
The endurance running (ER) hypothesis suggests that distance running played an important role in the evolution of the genus Homo. Most researchers have focused on ER performance in modern humans, or on reconstructing ER performance in Homo erectus, however, few studies have examined ER capabilities in other members of the genus Homo. Here, we examine skeletal correlates of ER performance in modern humans in order to evaluate the energetics of running in Neandertals and early Homo sapiens. Recent research suggests that running economy (the energy cost of running at a given speed) is strongly related to the length of the Achilles tendon moment arm. Shorter moment arms allow for greater storage and release of elastic strain energy, reducing energy costs. Here, we show that a skeletal correlate of Achilles tendon moment arm length, the length of the calcaneal tuber, does not correlate with walking economy, but correlates significantly with running economy and explains a high proportion of the variance (80%) in cost between individuals. Neandertals had relatively longer calcaneal tubers than modern humans, which would have increased their energy costs of running. Calcaneal tuber lengths in early H. sapiens do not significantly differ from those of extant modern humans, suggesting Neandertal ER economy was reduced relative to contemporaneous anatomically modern humans. Endurance running is generally thought to be beneficial for gaining access to meat in hot environments, where hominins could have used pursuit hunting to run prey taxa into hyperthermia. We hypothesize that ER performance may have been reduced in Neandertals because they lived in cold climates. Copyright © 2011 Elsevier Ltd. All rights reserved.
Similar Running Economy With Different Running Patterns Along the Aerial-Terrestrial Continuum.
Lussiana, Thibault; Gindre, Cyrille; Hébert-Losier, Kim; Sagawa, Yoshimasa; Gimenez, Philippe; Mourot, Laurent
No unique or ideal running pattern is the most economical for all runners. Classifying the global running patterns of individuals into 2 categories (aerial and terrestrial) using the Volodalen method could permit a better understanding of the relationship between running economy (RE) and biomechanics. The main purpose was to compare the RE of aerial and terrestrial runners. Two coaches classified 58 runners into aerial (n = 29) or terrestrial (n = 29) running patterns on the basis of visual observations. RE, muscle activity, kinematics, and spatiotemporal parameters of both groups were measured during a 5-min run at 12 km/h on a treadmill. Maximal oxygen uptake (V̇O2max) and peak treadmill speed (PTS) were assessed during an incremental running test. No differences were observed between aerial and terrestrial patterns for RE, V̇O2max, and PTS. However, at 12 km/h, aerial runners exhibited earlier gastrocnemius lateralis activation in preparation for contact, less dorsiflexion at ground contact, higher coactivation indexes, and greater leg stiffness during stance phase than terrestrial runners. Terrestrial runners had more pronounced semitendinosus activation at the start and end of the running cycle, shorter flight time, greater leg compression, and a more rear-foot strike. Different running patterns were associated with similar RE. Aerial runners appear to rely more on elastic energy utilization with a rapid eccentric-concentric coupling time, whereas terrestrial runners appear to propel the body more forward rather than upward to limit work against gravity. Excluding runners with a mixed running pattern from analyses did not affect study interpretation.
Muscle injury after low-intensity downhill running reduces running economy.
Baumann, Cory W; Green, Michael S; Doyle, J Andrew; Rupp, Jeffrey C; Ingalls, Christopher P; Corona, Benjamin T
Contraction-induced muscle injury may reduce running economy (RE) by altering motor unit recruitment, lowering contraction economy, and disturbing running mechanics, any of which may have a deleterious effect on endurance performance. The purpose of this study was to determine if RE is reduced 2 days after performing injurious, low-intensity exercise in 11 healthy active men (27.5 ± 5.7 years; 50.05 ± 1.67 VO2peak). Running economy was determined at treadmill speeds eliciting 65 and 75% of the individual's peak rate of oxygen uptake (VO2peak) 1 day before and 2 days after injury induction. Lower extremity muscle injury was induced with a 30-minute downhill treadmill run (6 × 5-minute runs, 2 minutes rest, -12% grade, and 12.9 km·h⁻¹) that elicited 55% VO2peak. Maximal quadriceps isometric torque was reduced immediately and 2 days after the downhill run by 18 and 10%, and a moderate degree of muscle soreness was present. Two days after the injury, steady-state VO2 and metabolic work (VO2 L·km⁻¹) were significantly greater (4-6%) during the 65% VO2peak run. Additionally, postinjury VCO2, VE and rating of perceived exertion were greater at 65% but not at 75% VO2peak, whereas whole blood-lactate concentrations did not change pre-injury to postinjury at either intensity. In conclusion, low-intensity downhill running reduces RE at 65% but not 75% VO2peak. The results of this study and other studies indicate the magnitude to which RE is altered after downhill running is dependent on the severity of the injury and intensity of the RE test.
PEAR code review
De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.
As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)
KENO-V code
The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes
Code, standard and specifications
Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail
Radiography, like other techniques, needs standards. These standards are used widely, and the methods of applying them are well established. Accordingly, radiographic testing is only carried out on the basis of regulations that are specified and documented. These regulations or guidelines are documented in codes, standards and specifications. In Malaysia, a level one or basic radiographer can carry out radiography work based on instructions given by a level two or level three radiographer. These instructions are produced based on the guidelines mentioned in the documents. A level two radiographer must follow the specifications mentioned in the standard when writing the instructions. From this scenario, it is clear that radiography is a type of work in which everything must follow the rules. As for codes, radiography follows the code of the American Society of Mechanical Engineers (ASME), and the only code of this kind available in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With the existence of this code, all radiography work must automatically follow the rule or standard.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
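A rule of the general kind studied in such fast CU-encoding work can be sketched as a small decision function. The inputs and thresholds below are invented for illustration and are not the mechanism proposed in the paper.

```python
# Hypothetical sketch of an early CU-split termination rule of the general
# kind discussed above: the decision to stop splitting a coding unit is taken
# from the quantization parameter and the depths chosen by neighbouring CUs.
# The thresholds and inputs are invented for illustration only.
def stop_splitting(qp, current_depth, neighbour_depths, rd_cost, skip_cost):
    max_neighbour_depth = max(neighbour_depths) if neighbour_depths else 3
    if current_depth >= max_neighbour_depth and qp >= 32:
        return True       # homogeneous area at high QP: stop splitting early
    if skip_cost < 0.8 * rd_cost:
        return True       # SKIP mode already cheap: deeper splits unlikely to win
    return False

print(stop_splitting(qp=37, current_depth=2, neighbour_depths=[1, 2, 2],
                     rd_cost=1200.0, skip_cost=700.0))
```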
Coupling a Basin Modeling and a Seismic Code using MOAB
Yan, Mi; Jordan, Kirk; Kaushik, Dinesh; Perrone, Michael; Sachdeva, Vipin; Tautges, Timothy J.; Magerlein, John
We report on a demonstration of loose multiphysics coupling between a basin modeling code and a seismic code running on a large parallel machine. Multiphysics coupling, which is one critical capability for a high performance computing (HPC) framework, was implemented using the MOAB open-source mesh and field database. MOAB provides for code coupling by storing mesh data and input and output field data for the coupled analysis codes and interpolating the field values between different meshes used by the coupled codes. We found it straightforward to use MOAB to couple the PBSM basin modeling code and the FWI3D seismic code on an IBM Blue Gene/P system. We describe how the coupling was implemented and present benchmarking results for up to 8 racks of Blue Gene/P with 8192 nodes and MPI processes. The coupling code is fast compared to the analysis codes and it scales well up to at least 8192 nodes, indicating that a mesh and field database is an efficient way to implement loose multiphysics coupling for large parallel machines.
Yan, Mi
Recent developments in KTF. Code optimization and improved numerics
Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin
The rapid increase in computer power over the last decade has facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated by some examples, and finally an outlook on further activities concentrating on code improvements is given. (orig.)
Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)]
TRANSURANUS: A fuel rod analysis code ready for use
Lassmann, K; O'Carroll, C; Van de Laar, J [Commission of the European Communities, Karlsruhe (Germany). European Inst. for Transuranium Elements]; Ott, C [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]
The basic concepts of fuel rod performance codes are discussed. The TRANSURANUS code developed at the Institute for Transuranium Elements, Karlsruhe (GE) is presented. It is a quasi two-dimensional (1{sub 1/2}-D) code designed for treatment of a whole fuel rod for any type of reactor and any situation. The fuel rods found in the majority of test- or power reactors can be analyzed for very different situations (normal, off-normal and accidental). The time scale of the problems to be treated may range from milliseconds to years. The TRANSURANUS code consists of a clearly defined mechanical/mathematical framework into which physical models can easily be incorporated. This framework has been extensively tested and the programming very clearly reflects this structure. The code is well structured and easy to understand. It has a comprehensive material data bank for different fuels, claddings, coolants and their properties. The code can be employed in a deterministic and a statistical version. It is written in standard FORTRAN 77. The code system includes: 2 preprocessor programs (MAKROH and AXORDER) for setting up new data cases; the post-processor URPLOT for plotting all important quantities as a function of the radius, the axial coordinate or the time; the post-processor URSTART evaluating statistical analyses. The TRANSURANUS code exhibits short running times. A new WINDOWS-based interactive interface is under development. The code is now in use in various European institutions and is available to all interested parties. 7 figs., 15 refs.
DESIGN IMPROVEMENT OF THE LOCOMOTIVE RUNNING GEARS
S. V. Myamlin
Full Text Available Purpose. Determining the dynamic qualities of mainline freight locomotives that characterize safe motion in tangent and curved track sections at all operational speeds requires a whole set of studies: selection of the design scheme, development of the corresponding mathematical model of the locomotive's spatial oscillations, construction of the computer calculation program, and theoretical followed by experimental studies of the new designs, with the results compared against existing designs. One of the necessary conditions for the qualitative improvement of traction rolling stock is defining the parameters of its running gears. Among the issues related to this problem, an important place is occupied by determining the locomotive's dynamic properties at the design stage, taking into account the technical solutions selected for the running gear. Methodology. The mathematical modeling studies are carried out by numerical integration of the dynamic loading of the mainline locomotive using the software package «Dynamics of Rail Vehicles» («DYNRAIL»). Findings. The research into improving locomotive running gear design shows that creating a modern locomotive requires engineers and scientists to realize scientific and technical solutions that enhance design speed while simultaneously improving traction, braking and dynamic qualities; provide a simple and reliable design, especially for the running gear; reduce maintenance and repair costs; keep initial and operating costs low over the whole service life; deliver a high starting traction force as close as possible to the limit of adhesion; allow operation in multiple-traction mode; and offer sufficient design speed. Practical Value. The generalization of theoretical, scientific and methodological, experimental studies aimed
Run scenarios for the linear collider
M. Battaglia et al. email = [email protected]
We have examined how a Linear Collider program of 1000 fb -1 could be constructed in the case that a very rich program of new physics is accessible at √s ≤ 500 GeV. We have examined possible run plans that would allow the measurement of the parameters of a 120 GeV Higgs boson, the top quark, and could give information on the sparticle masses in SUSY scenarios in which many states are accessible. We find that the construction of the run plan (the specific energies for collider operation, the mix of initial state electron polarization states, and the use of special e - e - runs) will depend quite sensitively on the specifics of the supersymmetry model, as the decay channels open to particular sparticles vary drastically and discontinuously as the underlying SUSY model parameters are varied. We have explored this dependence somewhat by considering two rather closely related SUSY model points. We have called for operation at a high energy to study kinematic end points, followed by runs in the vicinity of several two body production thresholds once their location is determined by the end point studies. For our benchmarks, the end point runs are capable of disentangling most sparticle states through the use of specific final states and beam polarizations. The estimated sparticle mass precisions, combined from end point and scan data, are given in Table VIII and the corresponding estimates for the mSUGRA parameters are in Table IX. The precision for the Higgs boson mass, width, cross-sections, branching ratios and couplings are given in Table X. The errors on the top quark mass and width are expected to be dominated by the systematic limits imposed by QCD non-perturbative effects. The run plan devotes at least two thirds of the accumulated luminosity near the maximum LC energy, so that the program would be sensitive to unexpected new phenomena at high mass scales. We conclude that with a 1 ab -1 program, expected to take the first 6-7 years of LC operation, one can do
The UK core performance code package
Hutt, P.K.; Gaines, N.; McEllin, M.; White, R.J.; Halsall, M.J.
Over the last few years work has been co-ordinated by Nuclear Electric, originally part of the Central Electricity Generating Board, with contributions from the United Kingdom Atomic Energy Authority and British Nuclear Fuels Limited, to produce a generic, easy-to-use and integrated package of core performance codes able to perform a comprehensive range of calculations for fuel cycle design, safety analysis and on-line operational support for Light Water Reactor and Advanced Gas Cooled Reactor plant. The package consists of modern rationalized generic codes for lattice physics (WIMS), whole reactor calculations (PANTHER), thermal hydraulics (VIPRE) and fuel performance (ENIGMA). These codes, written in FORTRAN77, are highly portable and new developments have followed modern quality assurance standards. These codes can all be run ''stand-alone'' but they are also being integrated within a new UNIX-based interactive system called the Reactor Physics Workbench (RPW). The RPW provides an interactive user interface and a sophisticated data management system. It offers quality assurance features to the user and has facilities for defining complex calculational sequences. The Paper reviews the current capabilities of these components, their integration within the package and outlines future developments underway. Finally, the Paper describes the development of an on-line version of this package which is now being commissioned on UK AGR stations. (author)
Ultra-obligatory running among ultramarathon runners.
Hoffman, Martin D; Krouse, Rhonna
Participants in the Ultrarunners Longitudinal TRAcking (ULTRA) Study were asked to answer "yes" or "no" to the question "If you were to learn, with absolute certainty, that ultramarathon running is bad for your health, would you stop your ultramarathon training and participation?" Among the 1349 runners, 74.1% answered "no". Compared with those answering "yes", they were younger (p life meaning (p = 0.0002) scores on the Motivations of Marathoners Scales. Despite a high health orientation, most ultramarathon runners would not stop running if they learned it was bad for their health as it appears to serve their psychological and personal achievement motivations and their task orientation such that they must perceive enhanced benefits that are worth retaining at the risk of their health.
CMS Computing Operations During Run1
Gutsche, Oliver
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this presentation we will discuss the operational experience from the first run. We will present the workflows and data flows that were executed, we will discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. In this presentation we will also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
Effects of intermittent hypoxia on running economy.
Burtscher, M; Gatterer, H; Faulhaber, M; Gerstgrasser, W; Schenk, K
We investigated the effects of two 5-wk periods of intermittent hypoxia on running economy (RE). 11 male and female middle-distance runners were randomly assigned to the intermittent hypoxia group (IHG) or to the control group (CG). All athletes trained for a 13-wk period starting at pre-season until the competition season. The IHG spent additionally 2 h at rest on 3 days/wk for the first and the last 5 weeks in normobaric hypoxia (15-11% FiO2). RE, haematological parameters and body composition were determined at low altitude (600 m) at baseline, after the 5th, the 8th and the 13th week of training. RE, determined by the relative oxygen consumption during submaximal running, (-2.3+/-1.2 vs. -0.3+/-0.7 ml/min/kg, Ptraining phase. Georg Thieme Verlag KG Stuttgart . New York.
CMS computing operations during run 1
Adelman, J; Artieda, J; Bagliese, G; Ballestero, D; Bansal, S; Bauerdick, L; Behrenhof, W; Belforte, S; Bloom, K; Blumenfeld, B; Blyweert, S; Bonacorsi, D; Brew, C; Contreras, L; Cristofori, A; Cury, S; da Silva Gomes, D; Dolores Saiz Santos, M; Dost, J; Dykstra, D; Fajardo Hernandez, E; Fanzango, F; Fisk, I; Flix, J; Georges, A; Giffels, M; Gomez-Ceballos, G; Gowdy, S; Gutsche, O; Holzman, B; Janssen, X; Kaselis, R; Kcira, D; Kim, B; Klein, D; Klute, M; Kress, T; Kreuzer, P; Lahiff, A; Larson, K; Letts, J; Levin, A; Linacre, J; Linares, J; Liu, S; Luyckx, S; Maes, M; Magini, N; Malta, A; Marra Da Silva, J; Mccartin, J; McCrea, A; Mohapatra, A; Molina, J; Mortensen, T; Padhi, S; Paus, C; Piperov, S; Ralph; Sartirana, A; Sciaba, A; Sfiligoi, I; Spinoso, V; Tadel, M; Traldi, S; Wissing, C; Wuerthwein, F; Yang, M; Zielinski, M; Zvada, M
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
28 CFR 544.34 - Inmate running events.
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
Wave Run-up on the Zeebrugge Rubble Mound Breakwater
De Rouck, Julien; de Walle, Bjorn Van; Troch, Peter
Full-scale wave run-up measurements have been carried out on the Zeebrugge rubble mound breakwater in the framework of the EU-funded OPTICREST project. Wave run-up has been measured by a run-up gauge and by a so-called spiderweb system. The dimensionless wave run-up value Ru2%/Hm0 measured in Zeebrugg...
HUDU: The Hanford Unified Dose Utility computer code
Scherpelz, R.I.
The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs
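As a rough illustration of the straight-line Gaussian plume estimate that this class of rapid-assessment code performs, a minimal sketch is given below. This is a generic textbook formula with ground reflection, not HUDU's actual implementation; all parameter values are invented, and in practice the dispersion parameters would be taken from stability-class curves at the downwind distance of interest.

```python
# Ground-level, plume-centerline concentration for a continuous release of rate Q,
# wind speed u and effective release height H, using the standard Gaussian plume
# formula with ground reflection: chi = Q / (pi * sy * sz * u) * exp(-H^2 / (2 sz^2)).
import math

def centerline_concentration(Q_bq_per_s, u_m_per_s, sigma_y_m, sigma_z_m, H_m=0.0):
    return (Q_bq_per_s / (math.pi * sigma_y_m * sigma_z_m * u_m_per_s)
            * math.exp(-H_m ** 2 / (2.0 * sigma_z_m ** 2)))

# Illustrative numbers only: 1e9 Bq/s release, 3 m/s wind, dispersion parameters
# roughly representative of ~1 km downwind in neutral conditions, 30 m release height.
chi = centerline_concentration(1e9, 3.0, sigma_y_m=80.0, sigma_z_m=50.0, H_m=30.0)
print(f"centerline air concentration ~ {chi:.2e} Bq/m^3")
```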
1987 DOE review: First collider run operation
Childress, S.; Crawford, J.; Dugan, G.
This review covers the operations of the first run of the 1.8 TeV superconducting super collider. The papers enclosed cover: PBAR source status, fixed target operation, Tevatron cryogenic reliability and capacity upgrade, Tevatron energy upgrade progress and plans, status of the D0 low beta insertion, 1.8 K and 4.7 K refrigeration for low-β quadrupoles, progress and plans for the LINAC and booster, and near term and long term performance improvements.
CERN Running Club – Sale of Items
CERN Running club
The CERN Running Club is organising a sale of items on 26 June from 11:30 – 13:00 in the entry area of Restaurant 2 (504 R-202). The items for sale are souvenir prizes of past Relay Races and comprise: Backpacks, thermos, towels, gloves & caps, lamps, long sleeve winter shirts and windproof vest. All items will be sold at 5 CHF.
Analysis of Biomechanical Factors in Bend Running
Bing Zhang; Xinping You; Feng Li
Sprint running demonstrates comprehensive technical and tactical abilities under various conditions. However, whether it is fair to allocate lanes to short-distance athletes across different tracks has been a hot topic. This study analyzes the forces involved, the differences between lanes and the influence of the bend from the perspective of sports biomechanics. The results indicate that many disadvantages exist in the inner lanes, the middle lanes are the best and the outer ones are inferior to midd...
Marathon Running for Amateurs: Benefits and Risks
Farhad Kapadia
The habitual level of physical activity of the human race has significantly and abruptly declined in the last few generations due to technological developments. The professional societies and government health agencies have published minimum physical activity requirement guidelines to educate the masses about the importance of exercise and to reduce cardiovascular (CV) and all-cause mortality at the population level. There is growing participation in marathon running by amateur, middle-aged c...
Forecasting Long-Run Electricity Prices
Hamm, Gregory; Borison, Adam
Estimation of long-run electricity prices is extremely important but it is also very difficult because of the many uncertainties that will determine future prices, and because of the lack of sufficient historical and forwards data. The difficulty is compounded when forecasters ignore part of the available information or unnecessarily limit their thinking about the future. The authors present a practical approach that addresses these problems. (author)
Comparison of computer codes related to the sodium oxide aerosol behavior in a containment building
Fermandjian, J.
In order to ensure that the problems of describing the physical behavior of sodium aerosols during hypothetical fast reactor accidents were adequately understood, a comparison of the computer codes (ABC/INTG, PNC, Japan; AEROSIM, UKAEA/SRD, United Kingdom; PARDISEKO IIIb, KfK, Germany; AEROSOLS/A2 and AEROSOLS/B1, CEA, France) was undertaken within the framework of a CEC exercise in which code users ran their own codes with a prearranged input.
Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.
Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
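The abstract's controller is specific to the scanning fiber device and its exact-inversion framework; purely to illustrate the run-to-run idea (updating the inputs between repetitions of a process using a sparse scalar measure), a generic sketch with made-up names follows.

```python
# Generic run-to-run optimization loop: after each repetition, evaluate a scalar
# error/energy measure J and nudge the input parameter vector to reduce it.
import numpy as np

def run_to_run_optimize(run_process, theta0, step=0.1, eps=1e-3, n_runs=50):
    """run_process(theta) performs one run and returns a scalar measure to minimize."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_runs):
        J = run_process(theta)
        # crude finite-difference gradient estimate from a few probe runs
        grad = np.array([(run_process(theta + eps * e) - J) / eps
                         for e in np.eye(theta.size)])
        theta = theta - step * grad        # applied before the next run
    return theta

# Toy usage: tune two drive parameters of a hypothetical scanner model
target = np.array([1.0, -0.5])
print(run_to_run_optimize(lambda th: float(np.sum((th - target) ** 2)), [0.0, 0.0]))
```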
Financial Performance of Health Insurers: State-Run Versus Federal-Run Exchanges.
Hall, Mark A; McCue, Michael J; Palazzolo, Jennifer R
Many insurers incurred financial losses in individual markets for health insurance during 2014, the first year of Affordable Care Act mandated changes. This analysis looks at key financial ratios of insurers to compare profitability in 2014 and 2013, identify factors driving financial performance, and contrast the financial performance of health insurers operating in state-run exchanges versus the federal exchange. Overall, the median loss of sampled insurers was -3.9%, no greater than their loss in 2013. Reduced administrative costs offset increases in medical losses. Insurers performed better in states with state-run exchanges than insurers in states using the federal exchange in 2014. Medical loss ratios are the underlying driver more than administrative costs in the difference in performance between states with federal versus state-run exchanges. Policy makers looking to improve the financial performance of the individual market should focus on features that differentiate the markets associated with state-run versus federal exchanges.
Run-off from roofing materials
In order to find the run-off from roof material, a roof has been constructed with two different slopes (30 deg. and 45 deg.). 7 Be and 137 Cs have been used as tracers. Considering new roof material, the pollution removed by run-off processes has been shown to be very different for various roof materials. The pollution is much more easily removed from silicon-treated material than from porous red-tile roof material. Cesium is removed more easily than beryllium. The content of cesium in old roof materials is greater in red-tile than in other less porous roof materials. However, the measured removal from new material does not correspond to the amount accumulated in the old. This could be explained by weathering and by saturation effects. The last effect is probably the more important. The measurements on old material indicate a removal of 44-86% of cesium pollution by run-off, whereas the measurement on new material showed a removal of only 31-50%. It has been demonstrated that the pollution concentration in run-off water could be very different from that in rainwater
Buckingham, RM; The ATLAS collaboration; Tseng, JC-L; Viegas, F; Vinek, E
Management of the large volume of data collected by any large scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web based interface called "runBrowser" makes these Conditions Metadata available as a Run based selection service. runBrowser, based on php and javascript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions at...
Running vacuum cosmological models: linear scalar perturbations
Perico, E.L.D. [Instituto de Física, Universidade de São Paulo, Rua do Matão 1371, CEP 05508-090, São Paulo, SP (Brazil); Tamayo, D.A., E-mail: [email protected], E-mail: [email protected] [Departamento de Astronomia, Universidade de São Paulo, Rua do Matão 1226, CEP 05508-900, São Paulo, SP (Brazil)
In cosmology, phenomenologically motivated expressions for running vacuum are commonly parameterized as linear functions typically denoted by Λ( H {sup 2}) or Λ( R ). Such models assume an equation of state for the vacuum given by P-bar {sub Λ} = - ρ-bar {sub Λ}, relating its background pressure P-bar {sub Λ} with its mean energy density ρ-bar {sub Λ} ≡ Λ/8π G . This equation of state suggests that the vacuum dynamics is due to an interaction with the matter content of the universe. Most of the approaches studying the observational impact of these models only consider the interaction between the vacuum and the transient dominant matter component of the universe. We extend such models by assuming that the running vacuum is the sum of independent contributions, namely ρ-bar {sub Λ} = Σ {sub i} ρ-bar {sub Λ} {sub i} . Each Λ {sub i} vacuum component is associated and interacting with one of the i matter components in both the background and perturbation levels. We derive the evolution equations for the linear scalar vacuum and matter perturbations in those two scenarios, and identify the running vacuum imprints on the cosmic microwave background anisotropies as well as on the matter power spectrum. In the Λ( H {sup 2}) scenario the vacuum is coupled with every matter component, whereas the Λ( R ) description only leads to a coupling between vacuum and non-relativistic matter, producing different effects on the matter power spectrum.
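For readability, the equation of state and the proposed decomposition quoted above can be restated in standard notation (this simply transcribes the abstract's formulas):

```latex
% Vacuum equation of state, definition of the mean vacuum energy density, and the
% proposed decomposition into independently interacting components:
\begin{equation}
  \bar{P}_\Lambda = -\,\bar{\rho}_\Lambda, \qquad
  \bar{\rho}_\Lambda \equiv \frac{\Lambda}{8\pi G}, \qquad
  \bar{\rho}_\Lambda = \sum_i \bar{\rho}_{\Lambda i}.
\end{equation}
```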
The aerodynamic signature of running spiders.
Jérôme Casas
Full Text Available Many predators display two foraging modes, an ambush strategy and a cruising mode. These foraging strategies have been classically studied in energetic, biomechanical and ecological terms, without considering the role of signals produced by predators and perceived by prey. Wolf spiders are a typical example; they hunt in leaf litter either using an ambush strategy or by moving at high speed, taking over unwary prey. Air flow upstream of running spiders is a source of information for escaping prey, such as crickets and cockroaches. However, air displacement by running arthropods has not been previously examined. Here we show, using digital particle image velocimetry, that running spiders are highly conspicuous aerodynamically, due to substantial air displacement detectable up to several centimetres in front of them. This study explains the bimodal distribution of spider's foraging modes in terms of sensory ecology and is consistent with the escape distances and speeds of cricket prey. These findings may be relevant to the large and diverse array of arthropod prey-predator interactions in leaf litter.
Running-related injuries in school-age children and adolescents treated in emergency departments from 1994 through 2007.
Mehl, Ann J; Nelson, Nicolas G; McKenzie, Lara B
Running for exercise is a popular way to motivate children to be physically active. Running-related injuries are well studied in adults but little information exists for children and adolescents. Through use of the National Electronic Injury Surveillance System database, cases of running-related injuries were selected by using activity codes for exercise (which included running and jogging). Sample weights were used to calculate national estimates. An estimated 225 344 children and adolescents 6 to 18 years old were treated in US emergency departments for running-related injuries. The annual number of cases increased by 34.0% over the study period. One third of the injuries involved a running-related fall and more than one half of the injuries occurred at school. The majority of injuries occurred to the lower extremities and resulted in a sprain or strain. These findings emphasize the need for scientific evidence-based guidelines for pediatric running. The high proportion of running-related falls warrants further research.
SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE
F.N. HASOON
Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be provided by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that the EDW code performs much better than the Hadamard and MFH codes.
Nuclear code abstracts (1975 edition)
Akanuma, Makoto; Hirakawa, Takashi
Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)
Some new ternary linear codes
Rumen Daskalov
Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
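As a small illustration of the objects involved (the paper's 22 new codes are not reproduced here), the minimum distance of a tiny ternary code can be checked by brute-force enumeration; the generator matrix below is the classical [4,2,3]_3 tetracode, used only as an example.

```python
# Brute-force minimum Hamming distance of a q-ary linear code given by a generator matrix.
from itertools import product

def min_distance(G, q=3):
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue                                   # skip the zero codeword
        word = [sum(m * row[j] for m, row in zip(msg, G)) % q for j in range(n)]
        best = min(best, sum(1 for c in word if c != 0))
    return best

G_tetracode = [[1, 0, 1, 1],
               [0, 1, 1, 2]]
print(min_distance(G_tetracode))   # -> 3, i.e. a [4,2,3]_3 code
```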
The Relationship between Running Velocity and the Energy Cost of Turning during Running
Hatamoto, Yoichi; Yamada, Yosuke; Sagayama, Hiroyuki; Higaki, Yasuki; Kiyonaga, Akira; Tanaka, Hiroaki
Ball game players frequently perform changes of direction (CODs) while running; however, there has been little research on the physiological impact of CODs. In particular, the effect of running velocity on the physiological and energy demands of CODs while running has not been clearly determined. The purpose of this study was to examine the relationship between running velocity and the energy cost of a 180° COD and to quantify the energy cost of a 180° COD. Nine male university students (aged 18–22 years) participated in the study. Five shuttle trials were performed in which the subjects were required to run at different velocities (3, 4, 5, 6, 7, and 8 km/h). Each trial consisted of four stages with different turn frequencies (13, 18, 24 and 30 per minute), and each stage lasted 3 minutes. Oxygen consumption was measured during the trial. The energy cost of a COD significantly increased with running velocity (except between 7 and 8 km/h, p = 0.110). The relationship between running velocity and the energy cost of a 180° COD is best represented by a quadratic function (y = −0.012 + 0.066x + 0.008x^2, [r = 0.994, p = 0.001]), but is also well represented by a linear function (y = −0.228 + 0.152x, [r = 0.991, prunning velocities have relatively high physiological demands if the COD frequency increases, and that running velocities affect the physiological demands of CODs. These results also showed that the energy expenditure of COD can be evaluated using only two data points. These results may be useful for estimating the energy expenditure of players during a match and designing shuttle exercise training programs. PMID:24497913
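As a small worked example, the quadratic fit quoted above can be evaluated directly. This is a sketch only: x is running velocity in km/h, and the units of the cost y per 180° turn are those defined in the study, which the abstract does not restate.

```python
# Evaluating the reported quadratic relation between running velocity x (km/h) and
# the energy cost y of a single 180-degree change of direction. The coefficients are
# the ones quoted in the abstract; output units follow the study's definition of y.
def cod_energy_cost(v_kmh):
    return -0.012 + 0.066 * v_kmh + 0.008 * v_kmh ** 2

for v in (3, 5, 7):
    print(v, "km/h ->", round(cod_energy_cost(v), 3), "cost units per turn")
```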
Short-run and long-run effects of unemployment on suicides: does welfare regime matter?
Gajewski, Pawel; Zhukovska, Kateryna
Disentangling the immediate effects of an unemployment shock from the long-run relationship has a strong theoretical rationale. Different economic and psychological forces are at play in the first moment and after prolonged unemployment. This study suggests a diverse impact of short- and long-run unemployment on suicides in liberal and social-democratic countries. We take a macro-level perspective and simultaneously estimate the short- and long-run relationships between unemployment and suicide, along with the speed of convergence towards the long-run relationship after a shock, in a panel of 10 high-income countries. We also account for unemployment benefit spending, the share of the population aged 15-34, and the crisis effects. In the liberal group of countries, only a long-run impact of unemployment on suicides is found to be significant (P = 0.010). In social-democratic countries, suicides are associated with initial changes in unemployment (P = 0.028), but the positive link fades over time and becomes insignificant in the long run. Further, crisis effects are a much stronger determinant of suicides in social-democratic countries. Once the broad welfare regime is controlled for, changes in unemployment-related spending do not matter for preventing suicides. A generous welfare system seems efficient at preventing unemployment-related suicides in the long run, but societies in social-democratic countries might be less psychologically immune to sudden negative changes in their professional lives compared with people in liberal countries. Accounting for the different short- and long-run effects could thus improve our understanding of the unemployment-suicide link. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Test results of Run-1 and Run-2 in steam generator safety test facility (SWAT-3)
Kurihara, A.; Yatabe, Toshio; Tanabe, Hiromi; Hiroi, Hiroshi
Large leak sodium-water reaction tests were carried out using the SWAT-1 rig and the SWAT-3 facility at the Power Reactor and Nuclear Fuel Development Corporation (PNC) O-arai Engineering Center to obtain data for the design of the prototype LMFBR Monju steam generator against a large leak accident. This report provides the results of SWAT-3 Runs 1 and 2. In Runs 1 and 2, the heat transfer tube bundle of the evaporator, fabricated by TOSHIBA/IHI, was used, and the pressure relief line was located at the top of the evaporator. The water injection rates in the evaporator were 6.7 kg/s and 14.2 (initial)-9.7 kg/s in Runs 1 and 2 respectively, which corresponded to the failure of 3.3 tubes and 7.1 (initial)-4.8 tubes in an actual-size system according to iso-velocity modeling. Approximately two hundred measurement points were provided to collect data such as pressure, temperature, strain, sodium level, void, thrust load, acceleration, displacement, flow rate, and so on in each run. Initial spike pressures were 1.13 MPa and 2.62 MPa nearest to the injection point in Runs 1 and 2 respectively, and the maximum quasi-steady pressures in the evaporator were 0.49 MPa and 0.67 MPa in Runs 1 and 2. No secondary tube failure was observed. The rupture disc of the evaporator (RD601) burst at 1.1 s in Run-1 and at 0.7 s in Run-2 after water injection, and the pressure relief system functioned well, although a few items for improvement were found. (author)
Yahaya Asizehi ENESI; Jacob TSADO; Mark NWOHU; Usman Abraham USMAN; Odu Ayo IMORU
In this paper, the input parameters of a single-phase split-phase induction motor are taken to investigate and study the output performance characteristics of capacitor-start and capacitor-run induction motors. The values of these input parameters are used in the design characteristics of the capacitor-run and capacitor-start motors, with each motor connected to a rated or standard capacitor in series with the auxiliary winding or starting winding, respectively, for the normal operating condition. The ma...
Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.
Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y
This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km/h, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior significantly changed over the 24TR with a 4.9% higher step frequency on average (because of a significant 4.5% shorter contact time), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and could be speculated as contributing to an overall limitation of the potentially harmful consequences of such a long-duration run on subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running. © 2011 by the American College of Sports Medicine
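The spring-mass quantities named in this abstract are conventionally derived from the measured step variables roughly as sketched below (standard definitions, not the authors' exact processing; the numbers are invented for illustration).

```python
# Spring-mass characteristics of a running step from contact/aerial times, peak
# ground reaction force (GRF), centre-of-mass displacement and leg length change.
# Definitions are the usual ones (k_vert = Fmax/dy, k_leg = Fmax/dL); values are made up.
def step_frequency(contact_time_s, aerial_time_s):
    return 1.0 / (contact_time_s + aerial_time_s)      # steps per second (Hz)

def vertical_stiffness(f_max_n, com_displacement_m):
    return f_max_n / com_displacement_m                 # N/m

def leg_stiffness(f_max_n, leg_length_change_m):
    return f_max_n / leg_length_change_m                # N/m

print(round(step_frequency(0.25, 0.10), 2))             # ~2.86 Hz
print(round(vertical_stiffness(1600.0, 0.055)))         # ~29091 N/m
print(round(leg_stiffness(1600.0, 0.11)))               # ~14545 N/m
```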
EPIC: an Error Propagation/Inquiry Code
Baker, A.L.
The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
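The final combination step described here (summing the variances propagated to each measurement point around the MBA) is simple enough to sketch; the six EPIC variance equations themselves are not reproduced, and the numbers below are invented.

```python
# Total materials-balance variance as the algebraic sum of the per-measurement-point
# variances, as described in the abstract (independent measurement errors assumed).
import math

def mba_variance(point_variances):
    return sum(point_variances)

point_variances = [0.04, 0.09, 0.01, 0.16]     # kg^2, illustrative values only
total = mba_variance(point_variances)
print(total, "kg^2, i.e. a standard deviation of", round(math.sqrt(total), 3), "kg")
```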
ACE - Manufacturer Identification Code (MID)
Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...
Algebraic and stochastic coding theory
Kythe, Dave K
Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
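As a concrete taste of the linear codes the book introduces, here is a minimal encoder for the standard binary Hamming (7,4) code (an illustrative sketch, not material from the book).

```python
# Systematic generator matrix [I | P] of the binary Hamming (7,4) code: 4 data bits
# are mapped to a 7-bit codeword whose minimum distance 3 corrects any single bit error.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(message_bits):
    return np.mod(np.array(message_bits) @ G, 2)

print(encode([1, 0, 1, 1]))   # -> [1 0 1 1 0 1 0]
```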
Optical coding theory with Prime
Kwong, Wing C
Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct
Adjustments with running speed reveal neuromuscular adaptations during landing associated with high mileage running training.
Verheul, Jasper; Clansey, Adam C; Lake, Mark J
It remains to be determined whether running training influences the amplitude of lower limb muscle activations before and during the first half of stance and whether such changes are associated with joint stiffness regulation and usage of stored energy from tendons. Therefore, the aim of this study was to investigate neuromuscular and movement adaptations before and during landing in response to running training across a range of speeds. Two groups of high mileage (HM; >45 km/wk, n = 13) and low mileage (LM; joint stiffness might predominantly be governed by tendon stiffness rather than muscular activations before landing. Estimated elastic work about the ankle was found to be higher in the HM runners, which might play a role in reducing weight acceptance phase muscle activation levels and improve muscle activation efficiency with running training. NEW & NOTEWORTHY Although neuromuscular factors play a key role during running, the influence of high mileage training on neuromuscular function has been poorly studied, especially in relation to running speed. This study is the first to demonstrate changes in neuromuscular conditioning with high mileage training, mainly characterized by lower thigh muscle activation after touch down, higher initial knee stiffness, and greater estimates of energy return, with adaptations being increasingly evident at faster running speeds. Copyright © 2017 the American Physiological Society.
The Aster code
Delbecq, J.M.
Adaptive distributed source coding.
Varodayan, David; Lin, Yao-Chung; Girod, Bernd
We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
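The core syndrome idea behind this kind of distributed source coding can be illustrated with a deliberately tiny example (exhaustive search with a made-up parity-check matrix, standing in for the paper's LDPC syndromes, doping bits and sum-product decoding).

```python
# Toy Slepian-Wolf style syndrome coding: the encoder transmits only s = H x (mod 2);
# the decoder returns the sequence with that syndrome closest to its side information y.
import numpy as np
from itertools import product

H = np.array([[1, 1, 0, 1, 0, 0],      # illustrative parity-check matrix (not from the paper)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(x):
    return tuple(np.mod(H @ np.array(x), 2))

def decode(s, y):
    candidates = [x for x in product((0, 1), repeat=H.shape[1]) if syndrome(x) == s]
    return min(candidates, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1, 1, 0, 0)                  # source block
y = (1, 0, 1, 0, 0, 0)                  # correlated side information (one bit differs)
print(decode(syndrome(x), y))           # -> (1, 0, 1, 1, 0, 0): x recovered from 3 syndrome bits
```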
Preventing running injuries. Practical approach for family doctors.
Johnston, C. A. M.; Taunton, J. E.; Lloyd-Smith, D. R.; McKenzie, D. C.
OBJECTIVE: To present a practical approach for preventing running injuries. QUALITY OF EVIDENCE: Much of the research on running injuries is in the form of expert opinion and comparison trials. Recent systematic reviews have summarized research in orthotics, stretching before running, and interventions to prevent soft tissue injuries. MAIN MESSAGE: The most common factors implicated in running injuries are errors in training methods, inappropriate training surfaces and running shoes, malalign...
Speech coding code- excited linear prediction
Bäckström, Tom
This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource (Information): What information is available, and how can it be useful? Resource (Platform): What kind of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of the devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals? and ...
Run-Time and Compiler Support for Programming in Adaptive Parallel Environments
Guy Edjlali
Full Text Available For better utilization of computing resources, it is important to consider parallel programming environments in which the number of available processors varies at run-time. In this article, we discuss run-time support for data-parallel programming in such an adaptive environment. Executing programs in an adaptive environment requires redistributing data when the number of processors changes, and also requires determining new loop bounds and communication patterns for the new set of processors. We have developed a run-time library to provide this support. We discuss how the run-time library can be used by compilers of high-performance Fortran (HPF)-like languages to generate code for an adaptive environment. We present performance results for a Navier-Stokes solver and a multigrid template run on a network of workstations and an IBM SP-2. Our experiments show that if the number of processors is not varied frequently, the cost of data redistribution is not significant compared to the time required for the actual computation. Overall, our work establishes the feasibility of compiling HPF for a network of nondedicated workstations, which are likely to be an important resource for parallel programming in the future.
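The kind of recomputation described here (new loop bounds when the processor count changes) can be sketched as follows; this is only an illustration of a block distribution, not the interface of the library discussed in the article.

```python
# New block-distributed loop bounds for each processor after the machine grows or shrinks.
def block_bounds(n_iterations, n_procs, rank):
    """Return the [lo, hi) iteration range owned by `rank` under a block distribution."""
    base, extra = divmod(n_iterations, n_procs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# 1000 iterations redistributed when the pool of workstations shrinks from 8 to 6
for p in (8, 6):
    print(p, "procs:", [block_bounds(1000, p, r) for r in range(p)])
```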
Changes in foot and shank coupling due to alterations in foot strike pattern during running.
Pohl, Michael B; Buckley, John G
Determining if and how the kinematic relationship between adjacent body segments changes when an individual's gait pattern is experimentally manipulated can yield insight into the robustness of the kinematic coupling across the associated joint(s). The aim of this study was to assess the effects on the kinematic coupling between the forefoot, rearfoot and shank during ground contact of running with alteration in foot strike pattern. Twelve subjects ran over-ground using three different foot strike patterns (heel strike, forefoot strike, toe running). Kinematic data were collected of the forefoot, rearfoot and shank, which were modelled as rigid segments. Coupling at the ankle-complex and midfoot joints was assessed using cross-correlation and vector coding techniques. In general good coupling was found between rearfoot frontal plane motion and transverse plane shank rotation regardless of foot strike pattern. Forefoot motion was also strongly coupled with rearfoot frontal plane motion. Subtle differences were noted in the amount of rearfoot eversion transferred into shank internal rotation in the first 10-15% of stance during heel strike running compared to forefoot and toe running, and this was accompanied by small alterations in forefoot kinematics. These findings indicate that during ground contact in running there is strong coupling between the rearfoot and shank via the action of the joints in the ankle-complex. In addition, there was good coupling of both sagittal and transverse plane forefoot with rearfoot frontal plane motion via the action of the midfoot joints.
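For readers unfamiliar with the vector coding technique mentioned above, the coupling angle between two angle time series is the direction of the step between consecutive points on the angle-angle diagram; a minimal sketch follows (the short series are invented, not study data).

```python
# Vector coding: coupling angle gamma_i = atan2(change in distal angle, change in
# proximal angle), expressed in degrees on [0, 360). Values near 45 deg indicate
# in-phase motion of the two segments.
import math

def coupling_angles(proximal_deg, distal_deg):
    gammas = []
    for i in range(1, len(proximal_deg)):
        dp = proximal_deg[i] - proximal_deg[i - 1]
        dd = distal_deg[i] - distal_deg[i - 1]
        gammas.append(math.degrees(math.atan2(dd, dp)) % 360.0)
    return gammas

rearfoot_eversion = [0.0, 2.1, 4.0, 5.2, 5.8]        # deg, invented
shank_internal_rotation = [0.0, 1.8, 3.9, 5.0, 5.9]  # deg, invented
print([round(g, 1) for g in coupling_angles(rearfoot_eversion, shank_internal_rotation)])
```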
Spatially coded backscatter radiography
Thangavelu, S.; Hussein, E.M.A.
Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
Aztheca Code; Codigo Aztheca
Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: [email protected] [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)]
The Coding Question.
Gallistel, C R
Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
Revised SRAC code system
Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.
Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)
Code query by example
Vaucouleur, Sebastien
We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.
The correspondence between projective codes and 2-weight codes
Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.
The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PC with multiplicity γ(w), where w is the weight of
Visualizing code and coverage changes for code review
Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.
One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.
Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...
Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...
Western diet increases wheel running in mice selectively bred for high voluntary wheel running.
Meek, T H; Eisenmann, J C; Garland, T
Mice from a long-term selective breeding experiment for high voluntary wheel running offer a unique model to examine the contributions of genetic and environmental factors in determining the aspects of behavior and metabolism relevant to body-weight regulation and obesity. Starting with generation 16 and continuing through to generation 52, mice from the four replicate high runner (HR) lines have run 2.5-3-fold more revolutions per day as compared with four non-selected control (C) lines, but the nature of this apparent selection limit is not understood. We hypothesized that it might involve the availability of dietary lipids. Wheel running, food consumption (Teklad Rodent Diet (W) 8604, 14% kJ from fat; or Harlan Teklad TD.88137 Western Diet (WD), 42% kJ from fat) and body mass were measured over 1-2-week intervals in 100 males for 2 months starting 3 days after weaning. WD was obesogenic for both HR and C, significantly increasing both body mass and retroperitoneal fat pad mass, the latter even when controlling statistically for wheel-running distance and caloric intake. The HR mice had significantly less fat than C mice, explainable statistically by their greater running distance. On adjusting for body mass, HR mice showed higher caloric intake than C mice, also explainable by their higher running. Accounting for body mass and running, WD initially caused increased caloric intake in both HR and C, but this effect was reversed during the last four weeks of the study. Western diet had little or no effect on wheel running in C mice, but increased revolutions per day by as much as 75% in HR mice, mainly through increased time spent running. The remarkable stimulation of wheel running by WD in HR mice may involve fuel usage during prolonged endurance exercise and/or direct behavioral effects on motivation. Their unique behavioral responses to WD may render HR mice an important model for understanding the control of voluntary activity levels.
The Robust Running Ape: Unraveling the Deep Underpinnings of Coordinated Human Running Proficiency
In comparison to other mammals, humans are not especially strong, swift or supple. Nevertheless, despite these apparent physical limitations, we are among Nature's most superbly well-adapted endurance runners. Paradoxically, however, notwithstanding this evolutionary-bestowed proficiency, running-related injuries, and overuse syndromes in particular, are widely pervasive. The term 'coordination' is similarly ubiquitous within contemporary coaching, conditioning, and rehabilitation cultures. Various theoretical models of coordination exist within the academic literature. However, the specific neural and biological underpinnings of 'running coordination,' and the nature of their integration, remain poorly elaborated. Conventionally, running is considered a mundane, readily mastered coordination skill. This illusion of coordinative simplicity, however, is founded upon a platform of immense neural and biological complexities. This extensive complexity presents extreme organizational difficulties yet, simultaneously, provides a multiplicity of viable pathways through which the computational and mechanical burden of running can be proficiently dispersed amongst expanded networks of conditioned neural and peripheral tissue collaborators. Learning to adequately harness this available complexity, however, is a painstakingly slowly emerging, practice-driven process, greatly facilitated by innate evolutionary organizing principles serving to constrain otherwise overwhelming complexity to manageable proportions. As we accumulate running experiences, persistent plastic remodeling customizes networked neural connectivity and biological tissue properties to best fit our unique neural and architectural idiosyncrasies, and personal histories: thus neural and peripheral tissue plasticity embeds coordination habits. When, however, coordinative processes are compromised—under the integrated influence of fatigue and/or accumulative cycles of injury, overuse
Code of Medical Ethics
SZD-SZZ
The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.
Affara, Lama Ahmed
Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
CONCEPT computer code
Delene, J.
CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated
Principles of speech coding
Ogunfunmi, Tokunbo
It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the
Evaluation Codes from an Affine Variety Code Perspective
Geil, Hans Olav
Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particular nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.
The Effects of Backwards Running Training on Forward Running Economy in Trained Males.
Ordway, Jason D; Laubach, Lloyd L; Vanderburgh, Paul M; Jackson, Kurt J
Backwards running (BR) results in greater cardiopulmonary response and muscle activity compared with forward running (FR). BR has traditionally been used in rehabilitation for disorders such as stroke and lower leg extremity injuries, as well as in short bursts during various athletic events. The aim of this study was to measure the effects of sustained backwards running training on forward running economy in trained male athletes. Eight highly trained, male runners (26.13 ± 6.11 years, 174.7 ± 6.4 cm, 68.4 ± 9.24 kg, 8.61 ± 3.21% body fat, 71.40 ± 7.31 ml·kg⁻¹·min⁻¹) trained with BR while harnessed on a treadmill at 161 m·min⁻¹ for 5 weeks following a 5-week BR run-in period at a lower speed (134 m·min⁻¹). Subjects were tested at baseline, postfamiliarized, and post-BR training for body composition, a ramped VO2max test, and an economy test designed for trained male runners. Subjects improved forward running economy by 2.54% (1.19 ± 1.26 ml·kg⁻¹·min⁻¹, p = 0.032) at 215 m·min⁻¹. VO2max, body mass, lean mass, fat mass, and % body fat did not change (p > 0.05). Five weeks of BR training improved FR economy in healthy, trained male runners without altering VO2max or body composition. The improvements observed in this study could be a beneficial form of training to an already economical population to improve running economy.
Is There an Economical Running Technique? A Review of Modifiable Biomechanical Factors Affecting Running Economy.
Moore, Isabel S
Running economy (RE) has a strong relationship with running performance, and modifiable running biomechanics are a determining factor of RE. The purposes of this review were to (1) examine the intrinsic and extrinsic modifiable biomechanical factors affecting RE; (2) assess training-induced changes in RE and running biomechanics; (3) evaluate whether an economical running technique can be recommended; and (4) discuss potential areas for future research. Based on current evidence, the intrinsic factors that appeared beneficial for RE were using a preferred stride length range, which allows for stride length deviations up to 3% shorter than preferred stride length; lower vertical oscillation; greater leg stiffness; low lower limb moment of inertia; less leg extension at toe-off; larger stride angles; alignment of the ground reaction force and leg axis during propulsion; maintaining arm swing; low thigh antagonist-agonist muscular coactivation; and low activation of lower limb muscles during propulsion. Extrinsic factors associated with a better RE were a firm, compliant shoe-surface interaction and being barefoot or wearing lightweight shoes. Several other modifiable biomechanical factors presented inconsistent relationships with RE. Running biomechanics during ground contact appeared to play an important role, specifically those during propulsion. Therefore, this phase has the strongest direct links with RE. Recurring methodological problems exist within the literature, such as cross-comparisons, assessing variables in isolation, and acute to short-term interventions. Therefore, recommending a general economical running technique should be approached with caution. Future work should focus on interdisciplinary longitudinal investigations combining RE, kinematics, kinetics, and neuromuscular and anatomical aspects, as well as applying a synergistic approach to understanding the role of kinetics.
Ground reaction forces in shallow water running are affected by immersion level, running speed and gender.
Haupenthal, Alessandro; Fontana, Heiliane de Brito; Ruschel, Caroline; dos Santos, Daniela Pacheco; Roesler, Helio
To analyze the effect of depth of immersion, running speed and gender on ground reaction forces during water running. Controlled laboratory study. Twenty adults (ten male and ten female) participated by running at two levels of immersion (hip and chest) and two speed conditions (slow and fast). Data were collected using an underwater force platform. The following variables were analyzed: vertical force peak (Fy), loading rate (LR) and anterior force peak (Fx anterior). Three-factor mixed ANOVA was used to analyze data. Significant effects of immersion level, speed and gender on Fy were observed, without interaction between factors. Fy was greater when females ran fast at the hip level. There was a significant increase in LR with a reduction in the level of immersion regardless of the speed and gender. No effect of speed or gender on LR was observed. Regarding Fx anterior, significant interaction between speed and immersion level was found: in the slow condition, participants presented greater values at chest immersion, whereas, during the fast running condition, greater values were observed at hip level. The effect of gender was only significant during fast water running, with Fx anterior being greater in the men group. Increasing speed raised Fx anterior significantly irrespective of the level of immersion and gender. The magnitude of ground reaction forces during shallow water running are affected by immersion level, running speed and gender and, for this reason, these factors should be taken into account during exercise prescription. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis
Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)
The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.
Interface requirements for coupling a containment code to reactor system thermal hydraulic codes
Baratta, A.J.
To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together.
Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD
Trambauer, K. [GRS, Garching (Germany)
The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission products and aerosol release from the core and their transport in the Reactor Coolant System. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of some basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, initial and boundary condition, (2) initialization of derived quantities, (3) steady state calculation or input of restart data, and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. First is the conservation of masses and energy in the different subsystems, namely fluid, structures, and fission products and aerosols. Second is the convergence of the numerical solution and stability of the calculation. The third aspect is related to code performance and running time.
Design and Development of RunForFun Mobile Application
Anci Anthony
Race runs over 5 km or 10 km have recently been trending in many places in Indonesia, especially in Surabaya, where there have been at least 11 race run events. The number of participants has also increased significantly compared to previous years. However, some of these events tended to be repetitive and monotonous, while among participants a need to increase the fun factor was identified. RunForFun is a mobile application designed to give participants a new experience when taking part in a race run event. The mobile application runs on Android OS. The application was developed using the Reverse Waterfall method and built with the Ionic Framework, which uses Cordova as its base to deploy to smartphone devices. Subsequently, RunForFun was tested on 10 participants, and the test shows a significant increase in the fun factor reported by race run participants.
Aleksander Spivakovsky
Aleksander Spivakovsky is a Ukrainian academic administrator and mathematician. He is the rector of Kherson State University and a professor and chair of informatics, software engineering, and economic cybernetics.[1] On October 21, 2016, Spivakovsky was elected an academician and corresponding member of the National Academy of Educational Sciences of Ukraine.[1][2]
References
1. "Alexander Spivakovsky". Kherson State University. Retrieved 2022-09-25.
2. "Corresponding members". National Academy of Educational Sciences of Ukraine. Retrieved 2022-09-25.
Evaluation of novel carrier substrates for high reliability and integrated GaN devices in a 200 mm complementary metal–oxide semiconductor compatible process
S. Stoffels, K. Geens, X. Li, D. Wellekens, S. You, M. Zhao, M. Borga, E. Zanoni, G. Meneghesso, M. Meneghini, N.E. Posthuma, M. Van Hove, S. Decoutere
Journal: MRS Communications / Volume 8 / Issue 4 / December 2018
Published online by Cambridge University Press: 17 September 2018, pp. 1387-1394
Print publication: December 2018
In this paper new materials and substrate approaches are discussed which have potential to provide (Al)GaN buffers with a better crystal quality, higher critical electrical field, or thickness and have the potential to offer co-integration of GaN switches at different reference potentials, while maintaining lower wafer bow and maintaining complementary metal–oxide semiconductor (CMOS) compatibility. Engineered silicon substrates, silicon on insulator (SOI) and coefficient of thermal expansion (CTE)-matched substrates have been investigated and benchmarked with respect to each other. SOI and CTE-matched offer benefits for scaling to higher voltage, while a trench isolation process combined with an oxide interlayer substrate allows co-integration of GaN components in a GaN-integrated circuit (IC).
Extreme probing of particle motions in turbulence
M. S. Borgas
Journal: Journal of Fluid Mechanics / Volume 766 / 10 March 2015
Published online by Cambridge University Press: 09 February 2015, pp. 1-4
Print publication: 10 March 2015
Extreme behaviour of fluid and material motions needs to be understood for engineering processes and the behaviour of clouds or plumes of pollution. Applications in the natural environment require scaling of turbulence behaviour and models beyond current computational or laboratory understanding. New computational studies of Biferale et al. (J. Fluid Mech., 2014, vol. 757, pp. 550–572) are probing new regimes of scaling of extreme random events in nature produced by turbulent fluctuations trending towards applications in environmental prediction.
By Ghazi Al-Rawas, Vazken Andréassian, Tianqi Ao, Stacey A. Archfield, Berit Arheimer, András Bárdossy, Trent Biggs, Günter Blöschl, Theresa Blume, Marco Borga, Helge Bormann, Gianluca Botter, Tom Brown, Donald H. Burn, Sean K. Carey, Attilio Castellarin, Francis Chiew, François Colin, Paulin Coulibaly, Armand Crabit, Barry Croke, Siegfried Demuth, Qingyun Duan, Giuliano Di Baldassarre, Thomas Dunne, Ying Fan, Xing Fang, Boris Gartsman, Alexander Gelfan, Mikhail Georgievski, Nick van de Giesen, David C. Goodrich, Hoshin V. Gupta, Khaled Haddad, David M. Hannah, H. A. P. Hapuarachchi, Hege Hisdal, Kamila Hlavčová, Markus Hrachowitz, Denis A. Hughes, Günter Humer, Ruud Hurkmans, Vito Iacobellis, Elena Ilyichyova, Hiroshi Ishidaira, Graham Jewitt, Shaofeng Jia, Jeffrey R. Kennedy, Anthony S. Kiem, Robert Kirnbauer, Thomas R. Kjeldsen, Jürgen Komma, Leonid M. Korytny, Charles N. Kroll, George Kuczera, Gregor Laaha, Henny A. J. van Lanen, Hjalmar Laudon, Jens Liebe, Shijun Lin, Göran Lindström, Suxia Liu, Jun Magome, Danny G. Marks, Dominic Mazvimavi, Jeffrey J. McDonnell, Brian L. McGlynn, Kevin J. McGuire, Neil McIntyre, Thomas A. McMahon, Ralf Merz, Robert A. Metcalfe, Alberto Montanari, David Morris, Roger Moussa, Lakshman Nandagiri, Thomas Nester, Taha B. M. J. Ouarda, Ludovic Oudin, Juraj Parajka, Charles S. Pearson, Murray C. Peel, Charles Perrin, John W. Pomeroy, David A. Post, Ataur Rahman, Liliang Ren, Magdalena Rogger, Dan Rosbjerg, José Luis Salinas, Jos Samuel, Eric Sauquet, Hubert H. G. Savenije, Takahiro Sayama, John C. Schaake, Kevin Shook, Murugesu Sivapalan, Jon Olav Skøien, Chris Soulsby, Christopher Spence, R. 'Sri' Srikanthan, Tammo S. Steenhuis, Jan Szolgay, Yasuto Tachikawa, Kuniyoshi Takeuchi, Lena M. Tallaksen, Dörthe Tetzlaff, Sally E. Thompson, Elena Toth, Peter A. Troch, Remko Uijlenhoet, Carl L. Unkrich, Alberto Viglione, Neil R. Viney, Richard M. Vogel, Thorsten Wagener, M. Todd Walter, Guoqiang Wang, Markus Weiler, Rolf Weingartner, Erwin Weinmann, Hessel Winsemius, Ross A. Woods, Dawen Yang, Chihiro Yoshimura, Andy Young, Gordon Young, Erwin Zehe, Yongqiang Zhang, Maichun C. Zhou
Edited by Günter Blöschl, Technische Universität Wien, Austria, Murugesu Sivapalan, University of Illinois, Urbana-Champaign, Thorsten Wagener, University of Bristol, Alberto Viglione, Technische Universität Wien, Austria, Hubert Savenije, Technische Universiteit Delft, The Netherlands
Book: Runoff Prediction in Ungauged Basins
Published online: 05 April 2013
Print publication: 18 April 2013, pp ix-xiv
3 - A data acquisition framework for runoff prediction in ungauged basins
By B. L. McGlynn, G. Blöschl, M. Borga, H. Bormann, R. Hurkmans, J. Komma, L. Nandagiri, R. Uijlenhoet, T. Wagener
Print publication: 18 April 2013, pp 29-52
A family of stochastic models for two-particle dispersion in isotropic homogeneous stationary turbulence
M. S. Borgas, B. L. Sawford
Journal: Journal of Fluid Mechanics / Volume 279 / 25 November 1994
Published online by Cambridge University Press: 26 April 2006, pp. 69-99
Print publication: 25 November 1994
A family of Lagrangian stochastic models for the joint motion of particle pairs in isotropic homogeneous stationary turbulence is considered. The Markov assumption and well-mixed criterion of Thomson (1990) are used, and the models have quadratic-form functions of velocity for the particle accelerations. Two constraints are derived which formally require that the correct one-particle statistics are obtained by the models. These constraints involve the Eulerian expectation of the 'acceleration' of a fluid particle with conditioned instantaneous velocity, given either at the particle, or at some other particle's position. The Navier-Stokes equations, with Gaussian Eulerian probability distributions, are shown to give quadratic-form conditional accelerations, and models which satisfy these two constraints are found. Dispersion calculations show that the constraints do not always guarantee good one-particle statistics, but it is possible to select a constrained model that does. Thomson's model has good one-particle statistics, but is shown to have unphysical conditional accelerations. Comparisons of relative dispersion for the models are made.
The small-scale structure of acceleration correlations and its role in the statistical theory of turbulent dispersion
Journal: Journal of Fluid Mechanics / Volume 228 / July 1991
Published online by Cambridge University Press: 26 April 2006, pp. 295-320
Print publication: July 1991
Some previously accepted results for the form of one- and two-particle Lagrangian turbulence statistics within the inertial subrange are corrected and reinterpreted using dimensional methods and kinematic constraints. These results have a fundamental bearing on the statistical theory of turbulent dispersion.
One-particle statistics are analysed in an inertial frame ${\cal S}$ moving with constant velocity (which is different for different realizations) equal to the velocity of the particle at the time of labelling. It is shown that the inertial-subrange form of the Lagrangian acceleration correlation traditionally derived from dimensional arguments constrained by the property of stationarity, ${\cal C}_0^{(a)}\overline{\epsilon}/\tau $, where ${\cal C}_0^{(a)}$ is a universal constant, $\overline{\epsilon}$ is the mean rate of dissipation of turbulence kinetic energy and τ is the time lag, is kinematically inconsistent with the corresponding velocity statistics unless ${\cal C}_0^{(a)} = 0$. On the other hand, velocity and displacement correlations in the inertial subrange are non-trivial and the traditional results are confirmed by the present analysis. Remarkably, the universal constant ${\cal C}_0$ which characterizes these latter statistics in the inertial subrange is shown to be entirely prescribed by the inner (dissipation scale) acceleration covariance; i.e. there is no contribution to velocity and displacement statistics from inertial-subrange acceleration structure, but rather there is an accumulation of small-scale effects.
In the two-particle case the (cross) acceleration covariance is deduced from dimensional arguments to be of the form $\overline{\epsilon}t_1^{-1}{\cal R}_2(t_1/t_2)$ in the inertial subrange. In contrast to the one-particle case this is non-trivial since the two-particle acceleration covariance is non-stationary and there is therefore no condition which constrains ${\cal R}_2$ to a form which is kinematically inconsistent with the corresponding velocity and displacement statistics. Consequently it is possible for two-particle inertial-subrange acceleration structure to make a non-negligible contribution to relative velocity and dispersion statistics. This is manifested through corrections to the universal constant appearing in these statistics, but does not otherwise affect inertial-subrange structure. Nevertheless, these corrections destroy the simple correspondence between relative- and one-particle statistics traditionally derived by assuming that two-particle acceleration correlations are negligible within the inertial subrange.
A simple analytic expression which is proposed as an example of the form of ${\cal R}_2$ provides an excellent representation in the inertial subrange of Lagrangian stochastic simulations of relative velocity and displacement statistics.
Non-uniqueness and bifurcation in annular and planar channel flows
M. S. Borgas, T. J. Pedley
Journal: Journal of Fluid Mechanics / Volume 214 / May 1990
Print publication: May 1990
High-Reynolds-number steady flow in an annular pipe which encounters a shallow axisymmetric expansion or indentation in the walls is studied using interactive boundary-layer theory. The flow upstream of the indentation (x < 0) is fully developed; the ratio of the shear rate on the outer wall to that on the inner wall is denoted by ρ (0 < ρ < 1): similarity solutions are found for the case where the wall perturbations are proportional to $x^{\frac{1}{3}}$. The solution is unique in a constriction, when the pressure gradient (represented by a parameter b) is favourable (b < 0). In an expansion, however, with an adverse pressure gradient, three different solutions are found if b exceeds a critical value bc. When ρ ≠ 1, one of these solutions, representing a flow that is attached on the inner wall and separated (i.e. has negative wall shear) on the outer, is a continuation of the unique doubly attached flow at small b. The other two, one separated on the inner and not the outer wall and the other separated on both walls, arise from a saddle-node bifurcation at b = bc. The doubly separated flow is never stable, as observed in diffusers. In the case of a planar channel (ρ = 1) symmetry is restored, and the non-uniqueness arises through a supercritical pitchfork bifurcation. This agrees with previous computations on channel flow, but not with Jeffery-Hamel flow, for which the bifurcation is subcritical.
Thin slender water jets
M. S. Borgas, E. O. Tuck
An analysis is provided for the free development of slender jets of water, in which the cross-sections are of small thickness-to-width ratio. | CommonCrawl |
Cardiovascular magnetic resonance 4D flow analysis has a higher diagnostic yield than Doppler echocardiography for detecting increased pulmonary artery pressure
Joao G. Ramos,
Alexander Fyrdahl,
Björn Wieslander,
Gert Reiter,
Ursula Reiter,
Ning Jin,
Eva Maret,
Maria Eriksson,
Kenneth Caidahl,
Peder Sörensson,
Andreas Sigfridsson &
Martin Ugander
Pulmonary hypertension is definitively diagnosed by the measurement of mean pulmonary artery (PA) pressure (mPAP) using right heart catheterization. Cardiovascular magnetic resonance (CMR) four-dimensional (4D) flow analysis can estimate mPAP from blood flow vortex duration in the PA, with excellent results. Moreover, the peak systolic tricuspid regurgitation (TR) pressure gradient (TRPG) measured by Doppler echocardiography is commonly used in clinical routine to estimate systolic PA pressure. This study aimed to compare CMR and echocardiography with regards to quantitative and categorical agreement, and diagnostic yield for detecting increased PA pressure.
Consecutive clinically referred patients (n = 60, median [interquartile range] age 60 [48–68] years, 33% female) underwent echocardiography and CMR at 1.5 T (n = 43) or 3 T (n = 17). PA vortex duration was used to estimate mPAP using a commercially available time-resolved multiple 2D slice phase contrast three-directional velocity encoded sequence covering the main PA. Transthoracic Doppler echocardiography was performed to measure TR and derive TRPG. Diagnostic yield was defined as the fraction of cases in which CMR or echocardiography detected an increased PA pressure, defined as vortex duration ≥15% of the cardiac cycle (mPAP ≥25 mmHg) or TR velocity > 2.8 m/s (TRPG > 31 mmHg).
Both CMR and echocardiography showed normal PA pressure in 39/60 (65%) patients and increased PA pressure in 9/60 (15%) patients, overall agreement in 48/60 (80%) patients, kappa 0.49 (95% confidence interval 0.27–0.71). CMR had a higher diagnostic yield for detecting increased PA pressure compared to echocardiography (21/60 (35%) vs 9/60 (15%), p < 0.001). In cases with both an observable PA vortex and measurable TR velocity (34/60, 56%), TRPG was correlated with mPAP (R2 = 0.65, p < 0.001).
There is good quantitative and fair categorical agreement between estimated mPAP from CMR and TRPG from echocardiography. CMR has higher diagnostic yield for detecting increased PA pressure compared to echocardiography, potentially due to a lower sensitivity of echocardiography in detecting increased PA pressure compared to CMR, related to limitations in the ability to adequately visualize and measure the TR jet by echocardiography. Future comparison between echocardiography, CMR and invasive measurements are justified to definitively confirm these findings.
Pulmonary hypertension is defined as a mean pulmonary artery (PA) pressure (mPAP) equal to or greater than 25 mmHg assessed invasively by right heart catheterization (RHC) [1]. It affects approximately 1% of adults and is associated with high morbidity and mortality [2].
In clinical routine, pulmonary artery pressure is screened for using Doppler echocardiography [3] by measuring peak systolic tricuspid regurgitant jet velocity (TR) and deriving the peak systolic tricuspid regurgitant pressure gradient (TRPG). Furthermore, mPAP can also be estimated with echocardiography by adding mean right atrial pressure to TRPG, and a calibration factor [4]. However, these parameters tend to over- or underestimate pulmonary pressure compared with invasive measurements [5, 6]. Notably, the usefulness of echocardiography for follow-up and monitoring of treatment in pulmonary hypertension has been shown to be limited [7].
Cardiovascular magnetic resonance (CMR) has been used to estimate mPAP and diagnose pulmonary hypertension [8, 9]. Specifically, the duration, expressed as percentage of the cardiac cycle, of blood flow vortices in the pulmonary trunk assessed by CMR four-dimensional (4D) flow analysis has shown excellent correlation with invasively measured mPAP in previous studies [8, 10, 11]. This method yielded accurate results in all five world health organization (WHO) groups of pulmonary hypertension with high diagnostic sensitivity and specificity [11].
Estimation of mPAP by CMR has not yet been adapted in widespread clinical use, yet it is of great clinical interest to non-invasively, accurately, and precisely estimate PA pressure for the purposes of screening, diagnosis, prognosis, and for monitoring the effects of therapy. However, CMR mPAP and echocardiography TRPG have not yet been compared head-to-head. While TRPG and mPAP are not directly comparable from a physiological standpoint, in practice, TRPG is the main estimator of PA pressure in a clinical setting and is routinely used for screening of PH.
Therefore, the aim of this study was to compare agreement and diagnostic yield for evaluation of pulmonary hypertension between CMR and echocardiography in a clinical consecutive patient population.
Study participants
In this prospective study, we included 60 consecutive patients referred for a clinical CMR exam, who also had undergone or were scheduled for a clinically motivated transthoracic echocardiography. Patients with no contraindications for CMR were considered for inclusion if there was no atrial fibrillation and if the difference between exam dates was less than 60 days. Studies with poor image quality were excluded (n = 2). Approval was obtained from the appropriate local ethical committee and all participants provided written informed consent.
Pre-CMR screening
Pre-CMR screening of all patients included a standard 12-lead electrocardiogram (ECG), blood pressure measurement and a brief history to rule out contraindications to CMR, as per clinical routine. All ECGs were obtained on a GE Marquette system (GE, Little Chalfont, United Kingdom).
CMR acquisition
All CMR images were obtained either at 1.5 T (n = 43) or 3 T (n = 17) (MAGNETOM Aera or MAGNETOM Skyra, Siemens Healthcare, Erlangen, Germany) with ECG gating and phased array receiver coils.
Flow data in the pulmonary artery were acquired with 6–10 gapless slices using a retrospectively electrocardiographically gated two-dimensional spoiled-gradient-echo-based cine phase contrast sequence, with velocity encoding of 90 cm/s in all three spatial directions and three-fold averaging to suppress breathing artifacts. There was no navigator compensation. Typical image acquisition parameters were field of view 340 × 276 mm², matrix 192 × 112 pixels, slice thickness 6 mm, bandwidth 449 Hz/Px, generalized autocalibrating partial parallel acquisition (GRAPPA) factor 2, autocalibration signal (ACS) 22 lines, flip angle 15°, TR/TE 6.41/4.10 ms, temporal resolution 77 ms interpolated to 20 cardiac phases per cardiac cycle, total imaging duration 6–11 min depending on heart rate and number of slices necessary to cover the main PA in an approximately sagittal orientation [10].
Additionally, the protocol included 2-, 3- and 4-chamber balanced steady-state free precession (bSSFP) cine images of the left ventricle as well as a complete short axis (SA) stack. Typical image parameters included field of view 380 × 320 mm², matrix size 256 × 143 pixels with 1.5 × 1.5 mm² in-plane resolution, slice thickness 6 mm, bandwidth 930 Hz/Px, TR/TE 2.78/1.16 ms, temporal resolution 36 ms interpolated to 35 cardiac phases per cardiac cycle.
CMR analysis
Left ventricular volumes, mass and ejection fraction (LVEF) were measured from the short-axis cine stack using manual delineations in the software Siemens syngo.via 4.1 (Siemens Healthcare, Erlangen, Germany). 4D Flow analysis was performed using prototype software (4D Flow, Siemens Healthcare, Erlangen, Germany), blinded to the results of echocardiography. After fully automated eddy current compensation and phase unwrapping, images were manually segmented in order to seed pixels in the right ventricle outflow tract and pulmonary artery only. Streamline visualization was used to visualize the flow in the right ventricular outflow tract. Next, 3D vector visualization was used to detect a blood flow vortex in the main pulmonary artery as previously described [8, 10]. Vortex assessment was performed by two trained observers (JGR and BW). Vortex duration in percentage of the cardiac cycle was calculated as the ratio of the number of frames with a visible concentric vortex to the total number of frames. This process is illustrated in Fig. 1. Mean pulmonary arterial pressure (mPAP) was then estimated using the previously described empirically determined equation, \( mPAP_{CMR}\ (mmHg)=\frac{Vortex\ duration\ \left(\%\right)+25.44}{1.59} \) [11]. Increased PA pressure by CMR was defined as a vortex duration ≥15% of the cardiac cycle, corresponding to an estimated mPAP ≥25 mmHg. CMR 4D flow analysis divided patients into three groups: (1) no visible vortex (duration 0%, normal pressure assumed), (2) vortex duration under 15% (normal pressure) and (3) vortex duration greater than or equal to 15% (elevated pressure).
Schematic representation of the CMR 4D flow analysis method for estimation of PA pressure. Each of the boxes with red outline represent a single image from one time frame in the cardiac cycle. The boxes shaded red represent the time frames in which a vortex could be visualized. The top panels show a 3D vortex visualization of pulmonary flow in the right ventricle outflow tract orientation in a representative patient, with the black arrows denoting their location in the cardiac cycle. Visualization was performed using vector arrows, and the color scale denotes velocity of the respective vectors. RV – right ventricle, LV – left ventricle, PA – main pulmonary artery, PB – pulmonary bifurcation
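For clarity, the conversion from vortex duration to estimated mPAP and the subsequent classification can be expressed as a short script. The following Python sketch is illustrative only; the function names and the example values are ours and are not part of the analysis software described above.

```python
def mpap_from_vortex_duration(vortex_duration_percent: float) -> float:
    """Estimated mean pulmonary artery pressure (mmHg) from the duration of the
    pulmonary artery blood flow vortex, given as % of the cardiac cycle, using
    the empirical relation mPAP = (vortex duration (%) + 25.44) / 1.59."""
    return (vortex_duration_percent + 25.44) / 1.59


def classify_pa_pressure_cmr(frames_with_vortex: int, total_frames: int) -> dict:
    """Vortex duration is the fraction of reconstructed cardiac phases in which a
    concentric vortex is visible. A duration >= 15% of the cardiac cycle
    (corresponding to an estimated mPAP >= 25 mmHg) is classified as increased
    PA pressure; 0% (no visible vortex) is assumed to reflect normal pressure."""
    duration = 100.0 * frames_with_vortex / total_frames
    if frames_with_vortex == 0:
        group = "no visible vortex (normal pressure assumed)"
    elif duration < 15.0:
        group = "vortex duration < 15% (normal pressure)"
    else:
        group = "vortex duration >= 15% (increased pressure)"
    return {
        "vortex_duration_percent": duration,
        "estimated_mpap_mmHg": mpap_from_vortex_duration(duration),
        "group": group,
    }


# Example: a vortex visible in 4 of 20 reconstructed phases (20% of the cycle)
print(classify_pa_pressure_cmr(4, 20))
# -> estimated mPAP of about 28.6 mmHg, classified as increased PA pressure
```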
Comprehensive transthoracic echocardiography was performed on all patients, including Doppler measurements of TR, using a commercially available system (Epiq, Philips, Amsterdam, Netherlands). Recordings were obtained from views including the left parasternal modified RV long axis, left parasternal short axis, and the apical four chamber views. All results were calculated as the mean of three consecutive TR velocities, as measured from the view where the Doppler TR jet was maximal and best defined. TRPG in mmHg was obtained from TR velocity using the equation TRPG = 4 × velocity (m/s)². Increased PA pressure by echocardiography was defined as TR jet velocity > 2.8 m/s, corresponding to an estimated TRPG > 31 mmHg. Echocardiographic mPAP was calculated using the Chemla equation, mPAP = 0.61 × sPAP (mmHg) + 2 [12].
Similar to CMR, echocardiography patients were divided into three groups: (1) TR not measurable (normal pressure assumed), (2) TR velocity less than or equal to 2.8 m/s (normal pressure) and (3) TR velocity greater than 2.8 m/s (increased pressure).
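The echocardiographic conversions used here (simplified Bernoulli equation for TRPG, the Chemla relation for mPAP, and the 2.8 m/s threshold) can likewise be sketched in a few lines of Python. This is an illustrative sketch under the same assumptions as in the text, in particular a fixed right atrial pressure of 5 mmHg; the function names are ours.

```python
def trpg_from_tr_velocity(tr_velocity_m_s: float) -> float:
    """Peak systolic tricuspid regurgitation pressure gradient (mmHg) from the peak
    TR jet velocity (m/s) via the simplified Bernoulli equation TRPG = 4 * v**2."""
    return 4.0 * tr_velocity_m_s ** 2


def mpap_from_trpg(trpg_mmHg: float, rap_mmHg: float = 5.0) -> float:
    """Chemla relation mPAP = 0.61 * sPAP + 2, with sPAP approximated as
    TRPG + right atrial pressure (here a fixed, assumed value)."""
    spap = trpg_mmHg + rap_mmHg
    return 0.61 * spap + 2.0


def increased_pa_pressure_echo(tr_velocity_m_s: float) -> bool:
    """A TR jet velocity > 2.8 m/s (TRPG > 31 mmHg) is classified as increased PA pressure."""
    return tr_velocity_m_s > 2.8


# Example: a TR velocity of 3.0 m/s gives TRPG = 36 mmHg, an estimated mPAP of
# about 27 mmHg, and is classified as increased PA pressure.
v = 3.0
print(trpg_from_tr_velocity(v), mpap_from_trpg(trpg_from_tr_velocity(v)), increased_pa_pressure_echo(v))
```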
Statistical testing was performed using freely available software (RStudio 2.1, Boston, MA, USA). Continuous variables were reported as mean ± standard deviation if normally distributed according to the Kolmogorov-Smirnov test, or median [interquartile range] as appropriate. Categorical variables were presented as percentages. To assess interobserver agreement for determination of vortex duration, we calculated bias, interobserver variability, and the intraclass correlation coefficient of the vortex duration measurements of both readers. Comparison between echocardiographic and CMR measurements was performed using linear regression and Bland-Altman analysis. Categorical agreement between CMR and echocardiography with regards to detecting an increased PA pressure was assessed with Cohen's kappa. Diagnostic yield was defined as the fraction of participants with positive findings [13]. Differences in diagnostic yield were tested using McNemar's exact test. A p-value less than 0.05 was considered statistically significant.
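As a concrete illustration of the categorical statistics, the sketch below computes Cohen's kappa and the exact (binomial) McNemar p-value from a paired 2 × 2 table using only the Python standard library. The discordant cell counts in the usage example are inferred from the reported results (48/60 concordant pairs, 21 positive by CMR, 9 by echocardiography) and are shown for illustration only; the study analyses themselves were performed in R (RStudio) as stated above.

```python
from math import comb


def cohens_kappa(table):
    """Cohen's kappa for a paired 2 x 2 table laid out as
    [[both negative, CMR negative / echo positive],
     [CMR positive / echo negative, both positive]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)


def mcnemar_exact_p(table):
    """Two-sided exact McNemar test on the discordant cells b and c,
    i.e. a two-sided binomial test of min(b, c) in b + c trials with p = 0.5."""
    (_, b), (c, _) = table
    n, k = b + c, min(b, c)
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)


# Illustrative 2 x 2 table inferred from the reported totals:
# 39 normal by both methods, 9 increased by both,
# 12 increased by CMR only, 0 increased by echocardiography only.
table = [[39, 0], [12, 9]]
print(round(cohens_kappa(table), 2))  # approximately 0.49
print(mcnemar_exact_p(table))         # approximately 0.0005 (< 0.001)
```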
The study included consecutive patients (n = 60) referred for CMR due to known or suspected cardiopulmonary disease. Time between echocardiography and CMR was 6 [1–20] days. Patient characteristics are summarized in Table 1 and CMR characteristics are summarized in Table 2. Data on clinical characteristics were acquired at the time of CMR. When comparing the time points of CMR and echocardiography, there were no significant differences in HR (p = 0.07), systolic pressure (p = 0.91) or diastolic pressure (p = 0.06).
Table 1 Patient demographics and clinical characteristics
Table 2 CMR characteristics
Inter-observer variability on duration of vortical blood flow
We quantified interobserver variability using the absolute values of vortex duration as a percentage of the cardiac cycle. Mean values of vortex duration in the pulmonary artery were 9.5 ± 9.7% corresponding to an mPAP of 22.0 ± 6.1 mmHg for reader 1 and 10.8 ± 9.8% corresponding to an mPAP of 22.8 ± 6.2 mmHg for reader 2, with an average measurement between both readers of 10.1 ± 9.4% corresponding to an mPAP of 22.4 ± 5.9 mmHg. Interobserver variability was 3.9%, corresponding to an mPAP of 2.5 mmHg. There was minimal bias between readers (95% CI: 1.4–3.9%, and 0.9–2.5 mmHg). The intraclass correlation coefficient between measurements was 0.85.
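The interobserver summary statistics reported here (bias, its confidence interval, and the variability of the paired differences) follow from elementary formulas; a minimal, illustrative Python sketch is given below. The example values are hypothetical, and the intraclass correlation coefficient reported above is omitted because it is normally computed with a dedicated statistics package.

```python
from math import sqrt
from statistics import mean, stdev


def interobserver_agreement(reader1, reader2):
    """Paired agreement between two readers' vortex duration measurements (%):
    bias (mean difference, reader 2 minus reader 1), the standard deviation of
    the differences, and an approximate 95% confidence interval for the bias."""
    diffs = [b - a for a, b in zip(reader1, reader2)]
    bias = mean(diffs)
    sd = stdev(diffs)
    half_width = 1.96 * sd / sqrt(len(diffs))
    return {
        "bias_percent": bias,
        "sd_of_differences_percent": sd,
        "bias_95ci_percent": (bias - half_width, bias + half_width),
    }


# Hypothetical per-patient vortex durations (%) from two readers, for illustration only
reader1 = [0.0, 5.0, 12.0, 20.0, 30.0]
reader2 = [0.0, 7.0, 13.0, 22.0, 33.0]
print(interobserver_agreement(reader1, reader2))
```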
Detection of elevated pulmonary arterial pressure with CMR vs echocardiography
The frequency and proportion of patients in each diagnostic category are summarized in Table 3. Among the patients who had increased CMR mPAP but normal TRPG by echocardiography, TRPG was 27 [20–30] mmHg.
Table 3 Frequency of patients in each diagnostic group when comparing cardiovascular magnetic resonance (CMR) and echocardiography (Echo). TR denotes tricuspid regurgitant jet velocity
Comparison between estimated mPAP by CMR 4D flow analysis and TRPG by echocardiography
For those patients that had both a detectable vortex in the pulmonary artery and a measurable TR (n = 34/60, 56%), comparisons of pulmonary artery pressure estimates are shown in Fig. 2. Linear regression yielded an R² = 0.65, p < 0.001.
Linear regression (solid line) of estimated mean pulmonary artery pressure (mPAP) by cardiovascular magnetic resonance (CMR) and the tricuspid regurgitation (TR) pressure gradient (TRPG) by echocardiography (Echo) in patients with both observable vortex and measurable TR
A comparison between CMR mPAP and echocardiography mPAP derived from TRPG is shown in Fig. 3. Linear regression yielded R² = 0.65, p < 0.001. Mean difference between CMR mPAP and echocardiography mPAP estimated from TRPG was 4.0 ± 6.9 mmHg, assuming a right atrial pressure of 5 mmHg.
Left panel: Linear regression (solid line) of estimated mean pulmonary artery pressure (mPAP) by cardiovascular magnetic resonance (CMR) and estimated mPAP from TRPG by echocardiography (Echo). Dotted line shows line of identity. Right panel: Bland-Altman plot of estimated mPAP by CMR and estimated mPAP from TRPG by echocardiography in patients with both observable vortex and measurable TR. Mean bias 4.0 ± 6.9 mmHg
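The quantities underlying Figs. 2 and 3 (least-squares slope, intercept and R², and the Bland-Altman bias with 95% limits of agreement) can be reproduced from paired per-patient estimates with the short helper functions below. This is an illustrative sketch only; echocardiographic mPAP is assumed to be derived from TRPG with a fixed right atrial pressure of 5 mmHg, as in the text, and the function names are ours.

```python
from statistics import mean, stdev


def least_squares(x, y):
    """Ordinary least-squares fit y = slope * x + intercept, together with R^2."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot


def bland_altman(cmr_mpap, echo_mpap):
    """Bland-Altman summary for paired mPAP estimates (mmHg): bias (mean of
    CMR minus echocardiography) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [c - e for c, e in zip(cmr_mpap, echo_mpap)]
    bias, sd = mean(diffs), stdev(diffs)
    return {"bias": bias, "limits_of_agreement": (bias - 1.96 * sd, bias + 1.96 * sd)}


# Usage: least_squares(echo_mpap, cmr_mpap) for the regression in the left panel,
# and bland_altman(cmr_mpap, echo_mpap) for the right panel, on paired estimates.
```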
Comparison between diagnostic yield of CMR 4D flow analysis and echocardiography in PH diagnosis
There were 21 patients with increased PA pressure by CMR, and 9 patients with increased PA pressure by echocardiography. This translated into a higher diagnostic yield for detecting increased PA pressure by CMR compared to echocardiography (21/60 (35%) vs 9/60 (15%), p < 0.001). Figure 4 shows the difference in diagnostic yield between both methods. Cohen's kappa for categorical agreement was 0.49 (95% confidence interval: 0.27–0.71).
Diagnostic yield for detecting increased pulmonary artery pressure by either a 4D flow pulmonary artery vortex duration ≥15% of the cardiac cycle by cardiovascular magnetic resonance (CMR), or a tricuspid regurgitation jet velocity > 2.8 m/s by transthoracic Doppler echocardiography in all patients (n = 60)
The major finding of this study is that while there is a good quantitative and fair categorical agreement between CMR and echocardiography with regards to detecting and quantifying increased PA pressure, CMR had a more than twice as high diagnostic yield for detecting increased PA pressure.
The method used to estimate mPAP with CMR has been previously described and validated compared to RHC [8]. It was applicable in all types of PH, including acquired pulmonary arterial hypertension, with very good results compared to invasive measurements [11]. We decided to closely replicate the vortex method as originally described, with a time-resolved 2D acquisition of flow data in three spatial directions, which are then analyzed in 4D flow software. While this approach is not what is commonly known as 4D Flow CMR, the method measures time-resolved three-directional blood flow velocities in a volume, and showed a highly reproducible ability to estimate PA pressures. In these studies, CMR mPAP was compared to RHC mPAP in all clinical subtypes of pulmonary hypertension. Our study is therefore the first to implement and test this method in an independent center with good results. When both methods yielded detectable estimates of PA pressure, we found good agreement between CMR mPAP and echo TRPG, which supports the accuracy of blood flow vortex duration in mPAP estimation. Interestingly, our regression equation when comparing CMR mPAP with TRPG was remarkably similar to Chemla's equation to convert TRPG to mPAP, albeit with an assumed right atrial pressure of 5 mmHg. Furthermore, since echocardiography mPAP = 0.61 × sPAP + 2 (Equation 1) [12] and sPAP = 4v² + RAP = TRPG + RAP (Equation 2), assuming a RAP of 5 mmHg [14] and substituting Equation 2 into Equation 1 gives mPAP = 0.61 × (TRPG + 5) + 2 = 0.61 × TRPG + 5.05, or approximately
$$ mPAP=0.61\ TRPG+5 \quad (\text{Equation 3}) $$
which is the relationship one would mathematically expect between mPAP and TRPG. Interestingly, this derived equation is very similar to the regression equation we obtained in Fig. 2, i.e. y = 0.59 x + 10.
Our results show a low mean difference of 4.0 ± 6.9 mmHg between CMR mPAP and echocardiographic mPAP calculated using Eq. 3. This result assumes an estimated mean right atrial pressure of 5 mmHg, which is within the normal range. We did not use right atrial pressure estimates from echocardiography, since its estimation yields poorer agreement, with an accuracy as low as 34% [15]. Moreover, assuming a right atrial pressure of 3, 8 or 15 mmHg did not change the results in any meaningful way, with a mean bias consistently below 5 mmHg.
In 20% of cases, patients that had a normal or non-measurable TR also had a vortex duration indicating increased PA pressure by CMR. This group of patients did not have any identifiable characteristics that differed from the remaining patient cohort. Indeed, patients in this group were hemodynamically stable and diverse in terms of underlying diagnosis, none of them being disproportionately represented. By comparison, there were no patients with normal vortex duration that had an increased TR velocity.
Other reasons for the discrepancy between CMR and echocardiography could be a need for different thresholds for either CMR mPAP or echocardiographic TR velocity, or methodological limitations inherent to CMR or echocardiography with regards to the ability to either detect vortex presence, or accurately measure the peak TR velocity due to the angle of the main direction of Doppler flow. Using the current dataset, it is not possible to discern which of these is the main source of the discrepancy between CMR and echocardiography, and future studies are justified to address this question.
Some authors have challenged the accuracy of echocardiography when compared to invasive measurements, especially with regard to the utility of serial measurements [16, 17]. The REVEAL study, which compared peak systolic PA pressure between echocardiography and RHC, showed that in 44% of patients, there was a discordance in the estimation of PA pressure of at least 20% [5]. By comparison, mPAP estimation by CMR has shown excellent agreement with invasive RHC (R² = 0.95) and excellent precision (standard deviation of the difference between CMR estimation and RHC measurement of mPAP of 3.9 mmHg) [11].
CMR more than doubled the diagnostic yield of PA pressure estimation compared to echocardiography (35% vs 15%). Since TR is not always present nor measurable in all patients [18] and CMR 4D flow analysis has shown very good agreement with RHC in previous studies, it is plausible that CMR detects increased PA pressure in patients with undetectable TR. Future studies are needed to confirm this, and in such studies it would be of value to include both CMR, echocardiography and RHC measurements for simultaneous comparison of all three modalities.
The current study had some important limitations. First, echocardiography was not performed immediately after CMR, with a difference between exam dates of at most 53 days and a median difference of 6 days, with 75% of patients having a difference of less than 20 days between exams. Despite the possibility of hemodynamic changes or relevant clinical events, other similar comparative studies with echocardiography have allowed for even longer periods with no significant impact of time difference between exam dates [5]. It would also be beneficial to guarantee inclusion of all clinical types of PH, to confirm the effectiveness of the vortex duration method in all hemodynamic circumstances. Lastly, data from RHC would be necessary to definitively establish a potential superiority of this CMR method compared with echocardiography. While diagnostic yield is higher with CMR, it is not possible to claim that the diagnosis of elevated PA pressure was correct in all CMR cases without invasive measurements. Despite this, our results show that there is a notable discrepancy between CMR and echocardiography with regards to detection of increased PA pressure at the existing clinical thresholds.
Estimation of mPAP with CMR is not currently performed routinely in the clinical setting. However, there are several hemodynamic parameters relevant to pulmonary hypertension that can be assessed by CMR [19]. Further validation of these techniques may warrant their introduction in clinical protocols, as they allow a more comprehensive physiologic evaluation of the heart. In particular, CMR may become an alternative to repeated RHC mPAP measurements for follow-up in pulmonary hypertension, provided that these results can be confirmed by invasive measurements in the future.
There is good quantitative and fair categorical agreement between estimated mPAP from CMR and TRPG from echocardiography. CMR more than doubles the diagnostic yield for detecting increased PA pressure compared to echocardiography. This is potentially due to a lower sensitivity of echocardiography in detecting increased PA pressure compared to CMR, possibly related to limitations in the ability to adequately visualize and measure the TR jet by echocardiography. Comparison with invasive measurements is warranted in order to confirm these results.
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
CMR: Cardiovascular magnetic resonance
mPAP: Mean pulmonary artery pressure
RHC: Right heart catheterization
TR: Peak tricuspid regurgitation velocity
TRPG: Tricuspid regurgitation pressure gradient
Hoeper MM, Bogaard HJ, Condliffe R, Frantz R, Khanna D, Kurzyna M, et al. Definitions and diagnosis of pulmonary hypertension. J Am Coll Cardiol. 2013;62:D42–50. https://doi.org/10.1016/j.jacc.2013.10.032.
Hoeper MM, Ghofrani H-A, Grünig E, Klose H, Olschewski H, Rosenkranz S. Pulmonary hypertension. Dtsch Arztebl Int. 2017;114:73–84. https://doi.org/10.3238/arztebl.2017.0073.
Naing P, Kuppusamy H, Scalia G, Hillis GS, Playford D. Non-invasive assessment of pulmonary vascular resistance in pulmonary hypertension: current knowledge and future direction. Hear Lung Circ. 2017;26(4):323–30. https://doi.org/10.1016/j.hlc.2016.10.008.
Aduen JF, Castello R, Daniels JT, Diaz JA, Safford RE, Heckman MG, et al. Accuracy and precision of three echocardiographic methods for estimating mean pulmonary artery pressure. Chest. 2011;139:347–52. https://doi.org/10.1378/chest.10-0126.
Farber HW, Foreman AJ, Miller DP, McGoon MD. REVEAL registry: correlation of right heart catheterization and echocardiography in patients with pulmonary arterial hypertension. Congest Hear Fail. 2011;17:56–63. https://doi.org/10.1111/j.1751-7133.2010.00202.x.
Finkelhor RS, Lewis SA, Pillai D. Limitations and strengths of Doppler/Echo pulmonary artery systolic pressure-right heart catheterization correlations: a systematic literature review. Echocardiography. 2015;32:10–8. https://doi.org/10.1111/echo.12594.
Pawade T, Holloway B, Bradlow W, Steeds RP. Noninvasive imaging for the diagnosis and prognosis of pulmonary hypertension. Expert Rev Cardiovasc Ther. 2014;12:71–86. https://doi.org/10.1586/14779072.2014.867806.
Reiter G, Reiter U, Kovacs G, Kainz B, Schmidt K, Maier R, et al. Magnetic resonance-derived 3-dimensional blood flow patterns in the main pulmonary artery as a marker of pulmonary hypertension and a measure of elevated mean pulmonary arterial pressure. Circ Cardiovasc Imaging. 2008;1:23–30. https://doi.org/10.1161/CIRCIMAGING.108.780247.
Wang N, Hu X, Liu C, Ali B, Guo X, Liu M, et al. A systematic review of the diagnostic accuracy of cardiovascular magnetic resonance for pulmonary hypertension. Can J Cardiol. 2014;30:455–63. https://doi.org/10.1016/j.cjca.2013.11.028.
Reiter U, Reiter G, Kovacs G, Stalder AF, Gulsun MA, Greiser A, et al. Evaluation of elevated mean pulmonary arterial pressure based on magnetic resonance 4D velocity mapping: comparison of visualization techniques. PLoS One. 2013;8:1–9. https://doi.org/10.1371/journal.pone.0082212.
Reiter G, Reiter U, Kovacs G, Olschewski H, Fuchsjäger M. Blood flow vortices along the main pulmonary artery measured with MR imaging for diagnosis of pulmonary hypertension. Radiology. 2015;275:71–9. https://doi.org/10.1148/radiol.14140849.
Chemla D, Humbert M, Sitbon O, Montani D, Hervé P. Systolic and mean pulmonary artery pressures: are they interchangeable in patients with pulmonary hypertension? Chest. 2015;147:943–50. https://doi.org/10.1378/CHEST.14-1755.
de Haan MC, Nio CY, Thomeer M, de Vries AH, Bossuyt PM, Kuipers EJ, et al. Comparing the diagnostic yields of technologists and radiologists in an invitational colorectal cancer screening program performed with CT colonography. Radiology. 2012;264:771–8. https://doi.org/10.1148/radiol.12112486.
De Vecchis R, Baldi C, Giandomenico G, Di Maio M, Giasi A, Cioppa C. Estimating right atrial pressure using ultrasounds: an old issue revisited with new methods. J Clin Med Res. 2016;8:569–74. https://doi.org/10.14740/jocmr2617w.
Magnino C, Omedè P, Avenatti E, Presutti D, Iannaccone A, Chiarlo M, et al. Inaccuracy of right atrial pressure estimates through inferior vena cava indices. Am J Cardiol. 2017;120:1667–73. https://doi.org/10.1016/j.amjcard.2017.07.069.
Kovacs G, Maier R, Aberer E, Brodmann M, Scheidl S, Hesse C, et al. Assessment of pulmonary arterial pressure during exercise in collagen vascular disease. Chest. 2010;138:270–8. https://doi.org/10.1378/chest.09-2099.
D'Alto M, Romeo E, Argiento P, D'Andrea A, Vanderpool R, Correra A, et al. Accuracy and precision of echocardiography versus right heart catheterization for the assessment of pulmonary hypertension. Int J Cardiol. 2013;168:4058–62. https://doi.org/10.1016/j.ijcard.2013.07.005.
Parent F, Bachir D, Inamo J, Lionnet F, Driss F, Loko G, et al. A hemodynamic study of pulmonary hypertension in sickle cell disease. N Engl J Med. 2011;365:44–53. https://doi.org/10.1056/NEJMoa1005565.
Reiter U, Reiter G, Ager MF. MR phase-contrast imaging in pulmonary hypertension. Br J Radiol. 2016;89. https://doi.org/10.1259/bjr.20150995.
Dr. Wieslander, Dr. Ramos and Dr. Ugander were supported in part by the Swedish Research Council, the Swedish Heart and Lung Foundation, Stockholm County Council, and Karolinska Institutet. These funding bodies had no role in study design, data collection, data analysis or in writing the manuscript.
Department of Clinical Physiology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
Joao G. Ramos, Alexander Fyrdahl, Björn Wieslander, Eva Maret, Maria Eriksson, Kenneth Caidahl, Peder Sörensson, Andreas Sigfridsson & Martin Ugander
Siemens Healthcare Diagnostics GmbH, Graz, Austria
Gert Reiter
Department of Radiology, Graz Medical University, Graz, Austria
Ursula Reiter
Siemens Medical Solutions, Cleveland, OH, USA
Ning Jin
Department of Cardiology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
Peder Sörensson
University of Sydney, Northern Clinical School, Sydney Medical School, Kolling Building, Level 12, Room, Sydney, 612017, Australia
Martin Ugander
The Kolling Institute, Royal North Shore Hospital, St Leonards, Sydney, NSW, 2065, Australia
Study design: JGR, BW, MU. Subject recruitment: JGR, EM, ME, KC, PS. Technical development: AF, NJ, GR, UR, AS. Data acquisition: JGR. Data analysis: JGR. Statistical analysis: JGR. Results interpretation: JGR, BW, AS, MU. Manuscript preparation and critical revision: JGR, AF, BW, GR, UR, EM, ME, KC, PS, AS, MU. All authors have read and approved the manuscript.
Correspondence to Martin Ugander.
This study was approved by the Regional Ethical Review Board (Etikprövningsmyndigheten) in Stockholm with reference number 2011/1077–31. All subjects provided written informed consent.
Dr. Ugander is principal investigator for an institutional research and development agreement regarding cardiovascular MRI between Karolinska University Hospital and Siemens.
Ramos, J.G., Fyrdahl, A., Wieslander, B. et al. Cardiovascular magnetic resonance 4D flow analysis has a higher diagnostic yield than Doppler echocardiography for detecting increased pulmonary artery pressure. BMC Med Imaging 20, 28 (2020). https://doi.org/10.1186/s12880-020-00428-9
4D flow
Thoracic imaging | CommonCrawl |
Array access performance
Abstract machine
Memory and segments
Unsigned integer representation
Signed integer representation
Pointer representation
Array representation
Compiler layout
Collection representation
Consequences of size and alignment rules
Uninitialized objects
Pointer arithmetic
Computer arithmetic
Arena allocation
This course is about learning how computers work, from the perspective of systems software: what makes programs work fast or slow, and how properties of the machines we program impact the programs we write. We want to communicate ideas, tools, and an experimental approach.
The course divides into six units:
Data representation
Assembly & machine programming
Storage & caching
Kernel programming
Process management
Concurrency
The first unit, data representation, begins today. It's all about how different forms of data can be represented in terms the computer can understand. But this is a teaser lecture too, so we'll touch on a bunch of concepts from later units.
Lite Brite memory and data representation
Lite Brite is a pretty fun old-school toy. It's a big black backlit pegboard coupled with a supply of colored pegs, in a limited set of colors. You can plug in the pegs to make all kinds of designs.
Computer memory is a lot like a Lite Brite. A computer's memory is like a vast pegboard where each slot holds one of 256 different colors. The colors are numbered 0 through 255, so each slot holds one byte. (A byte is a number between 0 and 255, inclusive.)
A slot of computer memory is identified by its address. On a computer with M bytes of memory, and therefore M slots, you can think of the address as a number between 0 and M−1. My laptop has 16 gibibytes of memory, so M = 16×2^30 = 2^34 = 17,179,869,184 = 0x4'0000'0000—a very large number!
0 1 2 3 4 2^34 - 1 <- addresses
+-----+-----+-----+-----+-----+-- --+-----+-----+-----+
| | | | | | ... | | | | <- values
The problem of data representation is the problem of representing all the concepts we might want to use in programming—integers, fractions, real numbers, sets, pictures, texts, buildings, animal species, relationships—using the limited medium of addresses and bytes.
Powers of ten and powers of two. Digital computers love the number two and all powers of two. The electronics of digital computers are based on the bit, the smallest unit of storage, which is a base-two digit: either 0 or 1. More complicated objects are represented by collections of bits. This choice has many scale and error-correction advantages. It also refracts upwards to larger choices, and even into terminology. Memory chips, for example, have capacities based on large powers of two, such as 2^30 bytes. Since 2^10 = 1024 is pretty close to 1,000, 2^20 = 1,048,576 is pretty close to a million, and 2^30 = 1,073,741,824 is pretty close to a billion, it's common to refer to 2^30 bytes of memory as "a gigabyte," even though that actually means 10^9 = 1,000,000,000 bytes. But when trying to be more precise, it's better to use terms that explicitly signal the use of powers of two, such as gibibyte: the "-bi-" component means "binary."
Virtual memory. Modern computers actually abstract their memory spaces using a technique called virtual memory. The lowest-level kind of address, called a physical address, really does take on values between 0 and M−1. However, even on a 16GiB machine like my laptop, the addresses we see in programs can take on values like 0x7ffe'ea2c'aa67 that are much larger than M−1 = 0x3'ffff'ffff. The addresses used in programs are called virtual addresses. They're incredibly useful for protection: since different running programs have logically independent address spaces, it's much less likely that a bug in one program will crash the whole machine. We'll learn about virtual memory in much more depth in the kernel unit; the distinction between virtual and physical addresses is not as critical for data representation.
To represent an array of integers, C++ and C allocate the integers next to each other in memory, in sequential addresses, with no gaps or overlaps. Here, we put the integers 0, 1, and 258 next to each other, starting at address 1008:
1008 1016 1024
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+--
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
Say that you have an array of N integers, and you access each of those integers in order, accessing each integer exactly once. Does the order matter?
Computer memory is random-access memory (RAM), which means that a program can access any bytes of memory in any order—it's not, for example, required to read memory in ascending order by address. But if we run experiments, we can see that even in RAM, different access orders have very different performance characteristics.
Our testaccess program sums up all the integers in an array of N integers, using an access order based on its arguments, and prints the resulting delay. Here's the result of a couple experiments on accessing 10,000,000 items in three orders, "up" order (sequential: elements 0, 1, 2, 3, …), "down" order (reverse sequential: N, N−1, N−2, …), and "random" order (as it sounds).
              trial 1     trial 2     trial 3
-u, up        8.9ms       7.9ms       7.4ms
-d, down      9.2ms       8.9ms       10.6ms
-r, random    316.8ms     352.0ms     360.8ms
Wow! Down order is just a bit slower than up, but random order seems about 40 times slower. Why?
Random order is defeating many of the internal architectural optimizations that make memory access fast on modern machines. Sequential order, since it's more predictable, is much easier to optimize.
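The course's testaccess program isn't reproduced here, but the following minimal sketch (our own code, not the original; all names are made up) shows one way to run the experiment yourself: build an index order, then time how long it takes to sum the array in that order.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Time how long it takes to sum `data`, visiting elements in `order`.
long time_sum_ns(const std::vector<int>& data, const std::vector<size_t>& order) {
    auto start = std::chrono::steady_clock::now();
    long sum = 0;
    for (size_t idx : order) {
        sum += data[idx];
    }
    auto end = std::chrono::steady_clock::now();
    printf("(sum %ld) ", sum);   // print the sum so the loop can't be optimized away
    return (long) std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
}

int main() {
    constexpr size_t N = 10'000'000;
    std::vector<int> data(N, 1);
    std::vector<size_t> up(N);
    std::iota(up.begin(), up.end(), size_t{0});          // 0, 1, 2, ...
    std::vector<size_t> down(up.rbegin(), up.rend());    // N-1, N-2, ...
    std::vector<size_t> random_order = up;
    std::shuffle(random_order.begin(), random_order.end(), std::mt19937(61));
    printf("up: %ld ns\n", time_sum_ns(data, up));
    printf("down: %ld ns\n", time_sum_ns(data, down));
    printf("random: %ld ns\n", time_sum_ns(data, random_order));
}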
Foreshadowing. This part of the lecture is a teaser for the Storage unit, where we cover access patterns and caching, including the processor caches that explain this phenomenon, in much more depth.
Abstract machine and hardware
Programming involves turning an idea into hardware instructions. This transformation happens in multiple steps, some you control and some controlled by other programs.
First you have an idea, like "I want to make a flappy bird iPhone game." The computer can't (yet) understand that idea. So you transform the idea into a program, written in some programming language. This process is called programming.
A C++ program actually runs on an abstract machine. The behavior of this machine is defined by the C++ standard, a technical document. This document is supposed to be so precisely written as to have an exact mathematical meaning, defining exactly how every C++ program behaves. But the document can't run programs!
C++ programs run on hardware (mostly), and the hardware determines what behavior we see. Mapping abstract machine behavior to instructions on real hardware is the task of the C++ compiler (and the standard library and operating system). A C++ compiler is correct if and only if it translates each correct program to instructions that simulate the expected behavior of the abstract machine.
This same rough series of transformations happens for any programming language, although some languages use interpreters rather than compilers.
The C++ abstract machine concerns the construction and modification of objects. An object is a region of data storage that contains a value, such as the integer 12. (The standard specifically says "a region of data storage in the execution environment, the contents of which can represent values".) Consider:
char global_ch = 65;
const char const_global_ch = 66;
char local_ch = 67;
char* allocated_ch = new char(68);
// C-style: `allocated_ch = (char*) malloc(sizeof(char)); *allocated_ch = 68;`
There are five objects here:
global_ch
const_global_ch
local_ch
allocated_ch
the anonymous storage allocated by new char and accessed by *allocated_ch
Objects never overlap: the C abstract machine requires that each of these objects occupies distinct storage.
Each object has a lifetime, which is called storage duration by the standard. There are three different kinds of lifetime.
static lifetime: The object lasts as long as the program runs. (global_ch, const_global_ch)
automatic lifetime: The compiler allocates and destroys the object automatically as the program runs, based on the object's scope (the region of the program in which it is meaningful). (local_ch, allocated_ch)
dynamic lifetime: The programmer allocates and destroys the object explicitly. (*allocated_ch)
An object can have many names. For example, here, local and *ptr refer to the same object:
int local = 1;
int* ptr = &local;
The different names for an object are sometimes called aliases.
Objects with dynamic lifetime aren't easy to use correctly. Dynamic lifetime causes many serious problems in C programs, including memory leaks, use-after-free, double-free, and so forth. Those serious problems cause undefined behavior and play a "disastrously central role" in "our ongoing computer security nightmare". But dynamic lifetime is critically important. Only with dynamic lifetime can you construct an object whose size isn't known at compile time, or construct an object that outlives its creating function.
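As a minimal sketch of that last point (our own example; make_counter and use are hypothetical names), only a dynamically allocated object can safely outlive the function that creates it:

int* make_counter() {
    int local = 0;          // automatic lifetime: destroyed when the function returns
    // return &local;       // bug: the caller would receive a dangling pointer
    return new int(local);  // ok: dynamic lifetime outlives this function
}

void use() {
    int* c = make_counter();
    *c += 1;
    delete c;               // the programmer must end the dynamic lifetime explicitly
}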
Hardware implements C objects using memory (so called because it remembers object values). At a high level, a memory is a modifiable array of M bytes, where a byte is a number between 0 and 255 inclusive. That means that, for any number a between 0 and M–1, we can:
Write a byte at address a.
Read the byte at address a (obtaining the most-recently-written value).
The number a is called an address, and since every memory address corresponds to a byte, this is a byte-addressable memory.
On old machines, such as old Macintoshes (pre-OS X), C programs worked directly with this kind of memory. It was a disaster: an incorrect program could overwrite memory belonging to any other running program. Modern machines avoid this problem; we'll see how in the kernel unit.
The compiler and operating system work together to put objects at different addresses. A program's address space (which is the range of addresses accessible to a program) divides into regions called segments. Objects with different lifetimes are placed into different segments. The most important segments are:
Code (also known as text or read-only data). Contains instructions and constant global objects. Unmodifiable; static lifetime.
Data. Contains non-constant global objects. Modifiable; static lifetime.
Heap. Modifiable; dynamic lifetime.
Stack. Modifiable; automatic lifetime.
The compiler decides on a segment for each object based on its lifetime. The final compiler phase, which is called the linker, then groups all the program's objects by segment (so, for instance, global variables from different compiler runs are grouped together into a single segment). Finally, when a program runs, the operating system loads the segments into memory. (The stack and heap segments grow on demand.)
We can use a program to investigate where objects with different lifetimes are stored. (See cs61-lectures/datarep2/mexplore.cc.) This shows address ranges like this:
Object declaration (C++ program text) | Lifetime (abstract machine) | Segment        | Example address range (runtime location in x86-64 Linux, non-PIE)
Constant global                       | Static                      | Code (or Text) | 0x40'0000 (≈1 × 2^22)
Global                                | Static                      | Data           | 0x60'0000 (≈1.5 × 2^22)
Local                                 | Automatic                   | Stack          | 0x7fff'448d'0000 (≈2^47 = 2^25 × 2^22)
Anonymous, returned by new            | Dynamic                     | Heap           | 0x1a0'0000 (≈8 × 2^22)
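A minimal sketch in the spirit of mexplore.cc (our own code, not the course file) just prints the address of one object per lifetime; the high-order digits of each address tell you which segment it landed in:

#include <cstdio>

const int const_global = 1;      // code/text segment (read-only)
int global = 2;                  // data segment

int main() {
    int local = 3;               // stack segment
    int* allocated = new int(4); // the pointed-to object lives in the heap segment
    printf("constant global at %p\n", (void*) &const_global);
    printf("global at          %p\n", (void*) &global);
    printf("local at           %p\n", (void*) &local);
    printf("heap object at     %p\n", (void*) allocated);
    delete allocated;
}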
Constant global data and global data have the same lifetime, but are stored in different segments. The operating system uses different segments so it can prevent the program from modifying constants. It marks the code segment, which contains functions (instructions) and constant global data, as read-only, and any attempt to modify code-segment memory causes a crash (a "Segmentation violation").
An executable is normally at least as big as the static-lifetime data (the code and data segments together). Since all that data must be in memory for the entire lifetime of the program, it's written to disk and then loaded by the OS before the program starts running. There is an exception, however: the "bss" segment is used to hold modifiable static-lifetime data with initial value zero. Such data is common, since all static-lifetime data is initialized to zero unless otherwise specified in the program text. Rather than storing a bunch of zeros in the object files and executable, the compiler and linker simply track the location and size of all zero-initialized global data. The operating system sets this memory to zero during the program load process. Clearing memory is faster than loading data from disk, so this optimization saves both time (the program loads faster) and space (the executable is smaller).
Data type representation
Memory stores bytes, but the C++ abstract machine refers to values of many types, some of which don't fit in a single byte. The compiler, hardware, and standard together define how objects map to bytes. Each object uses a contiguous range of addresses (and thus bytes).
Since C and C++ are designed to help software interface with hardware devices, their standards are transparent about how objects are stored. A C++ program can ask how big an object is using the sizeof keyword. sizeof(T) returns the number of bytes in the representation of an object of type T, and sizeof(x) returns the size of object x. The result of sizeof is a value of type size_t, which is an unsigned integer type large enough to hold any representable size. On 64-bit architectures, such as x86-64 (our focus in this course), size_t can hold numbers between 0 and 2^64–1.
A bit is the fundamental unit of digital information: it's either 0 or 1.
C++ manages memory in units of bytes—8 contiguous bits that together can represent numbers between 0 and 255. C's unit for a byte is char: the abstract machine says a byte is stored in char. That means an unsigned char holds values in the inclusive range [0, 255].
The C++ standard actually doesn't require that a byte hold 8 bits, and on some crazy machines from decades ago, bytes could hold nine bits! (!?)
But larger numbers, such as 258, don't fit in a single byte. To represent such numbers, we must use multiple bytes. The abstract machine doesn't specify exactly how this is done—it's the compiler and hardware's job to implement a choice. But modern computers always use place–value notation, just like in decimal numbers. In decimal, the number 258 is written with three digits, the meanings of which are determined both by the digit and by their place in the overall number:
\[ 258 = 2\times10^2 + 5\times10^1 + 8\times10^0 \]
The computer uses base 256 instead of base 10. Two adjacent bytes can represent numbers between 0 and \(255\times256+255 = 65535 = 2^{16}-1\), inclusive. A number larger than this would take three or more bytes.
\[ 258 = 1\times256^1 + 2\times256^0 \]
+-----+-----+
258 = | 2 | 1 |
On x86-64, the ones place, the least significant byte, is on the left, at the lowest address in the contiguous two-byte range used to represent the integer. This is the opposite of how decimal numbers are written: decimal numbers put the most significant digit on the left. The representation choice of putting the least-significant byte in the lowest address is called little-endian representation. x86-64 uses little-endian representation.
Some computers actually store multi-byte integers the other way, with the most significant byte stored in the lowest address; that's called big-endian representation. The Internet's fundamental protocols, such as IP and TCP, also use big-endian order for multi-byte integers, so big-endian is also called "network" byte order.
The C++ standard defines five fundamental unsigned integer types, along with relationships among their sizes. Here they are, along with their actual sizes and ranges on x86-64:
Type                       | Size restriction        | Size (x86-64) | Range
unsigned char (byte)       | 1                       | 1             | [0, 255] = [0, 2^8−1]
unsigned short             | ≥sizeof(unsigned char)  | 2             | [0, 65,535] = [0, 2^16−1]
unsigned (or unsigned int) | ≥sizeof(unsigned short) | 4             | [0, 4,294,967,295] = [0, 2^32−1]
unsigned long              | ≥sizeof(unsigned)       | 8             | [0, 18,446,744,073,709,551,615] = [0, 2^64−1]
unsigned long long         | ≥sizeof(unsigned long)  | 8             | [0, 18,446,744,073,709,551,615] = [0, 2^64−1]
Other architectures and operating systems implement different ranges for these types. For instance, on IA32 machines like Intel's Pentium (the 32-bit processors that predated x86-64), sizeof(long) was 4, not 8.
Note that all values of a smaller unsigned integer type can fit in any larger unsigned integer type. When a value of a larger unsigned integer type is placed in a smaller unsigned integer object, however, not every value fits; for instance, the unsigned short value 258 doesn't fit in an unsigned char x. When this occurs, the C++ abstract machine requires that the smaller object's value equals the least-significant bits of the larger value (so x will equal 2).
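A minimal sketch of that truncation rule (our own example):

#include <cassert>

int main() {
    unsigned short s = 258;   // 0x0102
    unsigned char x = s;      // only the least-significant byte survives
    assert(x == 2);
}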
In addition to these types, whose sizes can vary, C++ has integer types whose sizes are fixed. uint8_t, uint16_t, uint32_t, and uint64_t define 8-bit, 16-bit, 32-bit, and 64-bit unsigned integers, respectively; on x86-64, these correspond to unsigned char, unsigned short, unsigned int, and unsigned long.
This general procedure is used to represent a multi-byte integer in memory.
Write the large integer in hexadecimal format, including all leading zeros required by the type size. For example, the unsigned value 65534 would be written 0x0000FFFE. There will be twice as many hexadecimal digits as sizeof(TYPE).
Divide the integer into its component bytes, which are its digits in base 256. In our example, they are, from most to least significant, 0x00, 0x00, 0xFF, and 0xFE.
In little-endian representation, the bytes are stored in memory from least to most significant. If our example was stored at address 0x30, we would have:
0x30: 0xFE 0x31: 0xFF 0x32: 0x00 0x33: 0x00
In big-endian representation, the bytes are stored in the reverse order.
0x30: 0x00 0x31: 0x00 0x32: 0xFF 0x33: 0xFE
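You can check which representation your machine uses by looking at an integer one byte at a time. This is a minimal sketch (our own code); on x86-64 it prints the bytes in the little-endian order shown above:

#include <cstddef>
#include <cstdio>

int main() {
    unsigned value = 0x0000FFFE;
    const unsigned char* bytes = (const unsigned char*) &value;
    for (size_t i = 0; i != sizeof(value); ++i) {
        printf("byte %zu: 0x%02X\n", i, (unsigned) bytes[i]);   // expect FE FF 00 00
    }
}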
Computers are often fastest at dealing with fixed-length numbers, rather than variable-length numbers, and processor internals are organized around a fixed word size. A word is the natural unit of data used by a processor design. In most modern processors, this natural unit is 8 bytes or 64 bits, because this is the power-of-two number of bytes big enough to hold those processors' memory addresses. Many older processors could access less memory and had correspondingly smaller word sizes, such as 4 bytes (32 bits).
The best representation for signed integers—and the choice made by x86-64, and by the C++20 abstract machine—is two's complement. Two's complement representation is based on this principle: Addition and subtraction of signed integers shall use the same instructions as addition and subtraction of unsigned integers.
To see what this means, let's think about what -x should mean when x is an unsigned integer. Wait, negative unsigned?! This isn't an oxymoron because C++ uses modular arithmetic for unsigned integers: the result of an arithmetic operation on unsigned values is always taken modulo 2^B, where B is the number of bits in the unsigned value type. Thus, on x86-64,
unsigned a = 0xFFFFFFFFU; // = 2^32 - 1
unsigned b = 0x00000002U;
assert(a + b == 0x00000001U); // True because 2^32 - 1 + 2 = 1 (mod 2^32)!
-x is simply the number that, when added to x, yields 0 (mod 2^B). For example, when unsigned x = 0xFFFFFFFFU, then -x == 1U, since x + -x equals zero (mod 2^32).
To obtain -x, we flip all the bits in x (an operation written ~x) and then add 1. To see why, consider the bit representations. What is x + (~x + 1)? Well, the ith bit of ~x is 1 whenever the ith bit of x is 0, and vice versa. That means that every bit of x + ~x is 1 (there are no carries), and x + ~x is the largest unsigned integer, with value 2^B−1. If we add 1 to this, we get 2^B. Which is 0 (mod 2^B)! The highest "carry" bit is dropped, leaving zero.
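A minimal sketch checking this identity on 32-bit unsigned values (our own example):

#include <cassert>

int main() {
    unsigned x = 0xFFFFFFFFU;
    assert(-x == 1U);          // -x is the value that makes x + -x == 0 (mod 2^32)
    assert(-x == ~x + 1);      // flip the bits, then add 1
    unsigned y = 12;
    assert(-y == ~y + 1);      // 0xFFFFFFF4
}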
Two's complement arithmetic uses half of the unsigned integer representations for negative numbers. A two's-complement signed integer with B bits has the following values:
If the most-significant bit is 1, the represented number is negative. Specifically, the represented number is –(~x + 1), where the outer negative sign is mathematical negation (not computer arithmetic).
If every bit is 0, the represented number is 0.
If the most-significant bit is 0 but some other bit is 1, the represented number is positive.
The most significant bit is also called the sign bit, because if it is 1, then the represented value depends on the signedness of the type (and that value is negative for signed types).
Another way to think about two's-complement is that, for B-bit integers, the most-significant bit has place value 2^(B–1) in unsigned arithmetic and negative 2^(B–1) in signed arithmetic. All other bits have the same place values in both kinds of arithmetic.
The two's-complement bit pattern for x + y is the same whether x and y are considered as signed or unsigned values. For example, in 4-bit arithmetic, 5 has representation 0b0101, while the representation 0b1100 represents 12 if unsigned and –4 if signed (~0b1100 + 1 = 0b0011 + 1 == 4). Let's add those bit patterns and see what we get:
  0b0101
+ 0b1100
--------
 0b10001 == 0b0001 (mod 2^4)
Note that this is the right answer for both signed and unsigned arithmetic: 5 + 12 = 17 = 1 (mod 16), and 5 + -4 = 1.
Subtraction and multiplication also produce the same results for unsigned arithmetic and signed two's-complement arithmetic. (For instance, 5 * 12 = 60 = 12 (mod 16), and 5 * -4 = -20 = -4 (mod 16).) This is not true of division. (Consider dividing the 4-bit representation 0b1110 by 2. In signed arithmetic, 0b1110 represents -2, so 0b1110/2 == 0b1111 (-1); but in unsigned arithmetic, 0b1110 is 14, so 0b1110/2 == 0b0111 (7).) And, of course, it is not true of comparison. In signed 4-bit arithmetic, 0b1110 < 0, but in unsigned 4-bit arithmetic, 0b1110 > 0. This means that a C compiler for a two's-complement machine can use a single add instruction for either signed or unsigned numbers, but it must generate different instruction patterns for signed and unsigned division (or less-than, or greater-than).
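The division difference is easy to see with a 32-bit sketch (our own example): the same bit pattern divides differently depending on whether it is treated as signed or unsigned, which is why the compiler must pick the division instruction based on the declared type.

#include <cassert>
#include <cstdint>

int main() {
    uint32_t bits = 0xFFFFFFFEU;        // 4,294,967,294 as unsigned; -2 as signed
    uint32_t uq = bits / 2;             // unsigned division
    int32_t sq = (int32_t) bits / 2;    // signed division on the same bit pattern
    assert(uq == 0x7FFFFFFFU);          // 2,147,483,647
    assert(sq == -1);
}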
There are a couple quirks with C signed arithmetic. First, in two's complement, there are more negative numbers than positive numbers. A representation with sign bit 1 and every other bit 0 has no positive counterpart at the same bit width: for this number, -x == x. (In 4-bit arithmetic, -0b1000 == ~0b1000 + 1 == 0b0111 + 1 == 0b1000.) Second, and far worse, is that arithmetic overflow on signed integers is undefined behavior.
The signed integer types, with their x86-64 sizes and ranges, are:

Type                    | Size restriction             | Size (x86-64) | Range
signed char             | =sizeof(unsigned char)       | 1             | [−128, 127] = [−2^7, 2^7−1]
short (or signed short) | =sizeof(unsigned short)      | 2             | [−32,768, 32,767] = [−2^15, 2^15−1]
int (or signed int)     | =sizeof(unsigned)            | 4             | [−2,147,483,648, 2,147,483,647] = [−2^31, 2^31−1]
long                    | =sizeof(unsigned long)       | 8             | [−9,223,372,036,854,775,808, 9,223,372,036,854,775,807] = [−2^63, 2^63−1]
long long               | =sizeof(unsigned long long)  | 8             | [−9,223,372,036,854,775,808, 9,223,372,036,854,775,807] = [−2^63, 2^63−1]
The C++ abstract machine requires that signed integers have the same sizes as their unsigned counterparts.
We distinguish pointers, which are concepts in the C abstract machine, from addresses, which are hardware concepts. A pointer combines an address and a type.
The memory representation of a pointer is the same as the representation of its address value. The size of that integer is the machine's word size; for example, on x86-64, a pointer occupies 8 bytes, and a pointer to an object located at address 0x400abc would be stored as:
+-----+-----+-----+-----+-----+-----+-----+-----+
|0xbc |0x0a |0x40 | 0 | 0 | 0 | 0 | 0 |
The C++ abstract machine defines an unsigned integer type uintptr_t that can hold any address. (You have to #include <inttypes.h> or <cinttypes> to get the definition.) On most machines, including x86-64, uintptr_t is the same as unsigned long. Cast a pointer to an integer address value with syntax like (uintptr_t) ptr; cast back to a pointer with syntax like (T*) addr. Casts between pointer types and uintptr_t are information preserving, so this assertion will never fail:
void* ptr = malloc(...);
uintptr_t addr = (uintptr_t) ptr;
void* ptr2 = (void*) addr;
assert(ptr == ptr2);
Since it is a 64-bit architecture, the size of an x86-64 address is 64 bits (8 bytes). That's also the size of x86-64 pointers.
The C++ programming language offers several collection mechanisms for grouping subobjects together into new kinds of object. The collections are arrays, structs, and unions. (Classes are a kind of struct. All library types, such as vectors, lists, and hash tables, use combinations of these collection types.) The abstract machine defines how subobjects are laid out inside a collection. This is important, because it lets C/C++ programs exchange messages with hardware and even with programs written in other languages: messages can be exchanged only when both parties agree on layout.
Array layout in C++ is particularly simple: The objects in an array are laid out sequentially in memory, with no gaps or overlaps. Assume a declaration like T x[N], where x is an array of N objects of type T, and say that the address of x is a. Then the address of element x[i] equals a + i * sizeof(T), and sizeof(x) == N * sizeof(T).
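A minimal sketch verifying the rule with assertions (our own example):

#include <cassert>
#include <cstdint>

int main() {
    int x[4] = {0, 1, 2, 3};
    uintptr_t a = (uintptr_t) &x[0];          // address of the array
    for (int i = 0; i != 4; ++i) {
        assert((uintptr_t) &x[i] == a + i * sizeof(int));
    }
    assert(sizeof(x) == 4 * sizeof(int));
}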
Sidebar: Vector representation
The C++ library type std::vector defines an array that can grow and shrink. For instance, this function creates a vector containing the numbers 0 up to N in sequence:
void f(unsigned N) {
    std::vector<unsigned> v;
    for (unsigned i = 0; i != N; ++i) {
        v.push_back(i);
        unsigned x = v[i]; // `i`th element of `v`
    }
}
Here, v is an object with automatic lifetime. This means its size (in the sizeof sense) is fixed at compile time. Remember that the sizes of static- and automatic-lifetime objects must be known at compile time; only dynamic-lifetime objects can have varying size based on runtime parameters. So where and how are v's contents stored?
The C++ abstract machine requires that v's elements are stored in an array in memory. (The v.data() method returns a pointer to the first element of the array.) But it does not define std::vector's layout otherwise, and C++ library designers can choose different layouts based on their needs. We found these to hold for the std::vector in our library:
sizeof(v) == 24 for any vector of any type, and the address of v is a stack address (i.e., v is located in the stack segment).
The first 8 bytes of the vector hold the address of the first element of the contents array—call it the begin address. This address is a heap address, which is as expected, since the contents must have dynamic lifetime. The value of the begin address is the same as that of v.data().
Bytes 8–15 hold the address just past the contents array—call it the end address. Its value is the same as &v.data()[v.size()]. If the vector is empty, then the begin address and the end address are the same.
Bytes 16–23 hold an address greater than or equal to the end address. This is the capacity address. As a vector grows, it will sometimes outgrow its current location and move its contents to new memory addresses. To reduce the number of copies, vectors usually request more memory from the operating system than they immediately need; this additional space, which is called "capacity," supports cheap growth. Often the capacity doubles on each growth spurt, since this allows operations like v.push_back() to execute in O(1) time on average.
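You can watch this behavior without poking at the raw 24-byte header, using only the vector's public interface. A minimal sketch (our own code):

#include <cstdio>
#include <vector>

int main() {
    std::vector<unsigned> v;
    printf("sizeof(v) == %zu\n", sizeof(v));   // 24 with our library
    for (unsigned i = 0; i != 10; ++i) {
        v.push_back(i);
        printf("size %zu, capacity %zu, data %p\n",
               v.size(), v.capacity(), (void*) v.data());
    }
}

The data pointer changes whenever the vector outgrows its capacity and relocates its contents.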
Compilers must also decide where different objects are stored when those objects are not part of a collection. For instance, consider this program:
int i1 = 0;
int i2 = 1;
char c1 = 3;
char c2 = 4;
char c3 = 5;
The abstract machine says these objects cannot overlap, but does not otherwise constrain their positions in memory.
On Linux, GCC will put all these variables into the stack segment, which we can see using hexdump. But it can put them in the stack segment in any order, as we can see by reordering the declarations (try declaration order i1, c1, i2, c2, c3), by changing optimization levels, or by adding different scopes (braces). The abstract machine gives the programmer no guarantees about how object addresses relate. In fact, the compiler may move objects around during execution, as long as it ensures that the program behaves according to the abstract machine. Modern optimizing compilers often do this, particularly for automatic objects.
But what order does the compiler choose? With optimization disabled, the compiler appears to lay out objects in decreasing order by declaration, so the first declared variable in the function has the highest address. With optimization enabled, the compiler follows roughly the same guideline, but it also rearranges objects by type—for instance, it tends to group chars together—and it can reuse space if different variables in the same function have disjoint lifetimes. The optimizing compiler tends to use less space for the same set of variables. This is because it's arranging objects by alignment.
The C++ compiler and library restricts the addresses at which some kinds of data appear. In particular, the address of every int value is always a multiple of 4, whether it's located on the stack (automatic lifetime), the data segment (static lifetime), or the heap (dynamic lifetime).
A bunch of observations will show you these rules:
Type                              | Size | Address restriction | Alignment (alignof(Type))
char (signed char, unsigned char) | 1    | No restriction      | 1
short (unsigned short)            | 2    | Multiple of 2       | 2
int (unsigned int)                | 4    | Multiple of 4       | 4
long (unsigned long)              | 8    | Multiple of 8       | 8
float                             | 4    | Multiple of 4       | 4
double                            | 8    | Multiple of 8       | 8
long double                       | 16   | Multiple of 16      | 16
T* (any pointer type)             | 8    | Multiple of 8       | 8
These are the alignment restrictions for an x86-64 Linux machine.
These restrictions hold for most x86-64 operating systems, except that on Windows, the long type has size and alignment 4. (The long long type has size and alignment 8 on all x86-64 operating systems.)
Just like every type has a size, every type has an alignment. The alignment of a type T is a number a≥1 such that the address of every object of type T must be a multiple of a. Every object with type T has size sizeof(T)—it occupies sizeof(T) contiguous bytes of memory; and has alignment alignof(T)—the address of its first byte is a multiple of alignof(T). You can also say sizeof(x) and alignof(x) where x is the name of an object or another expression.
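A minimal sketch that queries these values directly (our own code):

#include <cstdio>

int main() {
    printf("char:   size %zu, alignment %zu\n", sizeof(char), alignof(char));
    printf("int:    size %zu, alignment %zu\n", sizeof(int), alignof(int));
    printf("long:   size %zu, alignment %zu\n", sizeof(long), alignof(long));
    printf("double: size %zu, alignment %zu\n", sizeof(double), alignof(double));
    printf("int*:   size %zu, alignment %zu\n", sizeof(int*), alignof(int*));
}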
Alignment restrictions can make hardware simpler, and therefore faster. For instance, consider cache blocks. CPUs access memory through a transparent hardware cache. Data moves from primary memory, or RAM (which is large—a couple gigabytes on most laptops—and uses cheaper, slower technology) to the cache in units of 64 or 128 bytes. Those units are always aligned: on a machine with 128-byte cache blocks, the bytes with memory addresses [127, 128, 129, 130] live in two different cache blocks (with addresses [0, 127] and [128, 255]). But the 4 bytes with addresses [4n, 4n+1, 4n+2, 4n+3] always live in the same cache block. (This is true for any small power of two: the 8 bytes with addresses [8n,…,8n+7] always live in the same cache block.) In general, it's often possible to make a system faster by leveraging restrictions—and here, the CPU hardware can load data faster when it can assume that the data lives in exactly one cache line.
The compiler, library, and operating system all work together to enforce alignment restrictions.
On x86-64 Linux, alignof(T) == sizeof(T) for all fundamental types (the types built in to C: integer types, floating point types, and pointers). But this isn't always true; on x86-32 Linux, double has size 8 but alignment 4.
It's possible to construct user-defined types of arbitrary size, but the largest alignment required by a machine is fixed for that machine. C++ lets you find the maximum alignment for a machine with alignof(std::max_align_t); on x86-64, this is 16, the alignment of the type long double (and the alignment of some less-commonly-used SIMD "vector" types).
We now turn to the abstract machine rules for laying out all collections. The sizes and alignments for user-defined types—arrays, structs, and unions—are derived from a couple simple rules or principles. Here they are. The first rule applies to all types.
1. First-member rule. The address of the first member of a collection equals the address of the collection.
Thus, the address of an array is the same as the address of its first element. The address of a struct is the same as the address of the first member of the struct.
The next three rules depend on the class of collection. Every C abstract machine enforces these rules.
2. Array rule. Arrays are laid out sequentially as described above.
3. Struct rule. The second and subsequent members of a struct are laid out in order, with no overlap, subject to alignment constraints.
4. Union rule. All members of a union share the address of the union.
In C, every struct follows the struct rule, but in C++, only simple structs follow the rule. Complicated structs, such as structs with some public and some private members, or structs with virtual functions, can be laid out however the compiler chooses. The typical situation is that C++ compilers for a machine architecture (e.g., "Linux x86-64") will all agree on a layout procedure for complicated structs. This allows code compiled by different compilers to interoperate.
The next rule defines the operation of the malloc library function.
5. Malloc rule. Any non-null pointer returned by malloc has alignment appropriate for any type. In other words, assuming the allocated size is adequate, the pointer returned from malloc can safely be cast to T* for any T.
Oddly, this holds even for small allocations. The C++ standard (the abstract machine) requires that malloc(1) return a pointer whose alignment is appropriate for any type, including types that don't fit.
And the final rule is not required by the abstract machine, but it's how sizes and alignments on our machines work.
6. Minimum rule. The sizes and alignments of user-defined types, and the offsets of struct members, are minimized within the constraints of the other rules.
The minimum rule, and the sizes and alignments of basic types, are defined by the x86-64 Linux "ABI"—its Application Binary Interface. This specification standardizes how x86-64 Linux C compilers should behave, and lets users mix and match compilers without problems.
Consequences of the size and alignment rules
From these rules we can derive some interesting consequences.
First, the size of every type is a multiple of its alignment.
To see why, consider an array with two elements. By the array rule, these elements have addresses a and a+sizeof(T), where a is the address of the array. Both of these addresses contain a T, so they are both a multiple of alignof(T). That means sizeof(T) is also a multiple of alignof(T).
We can also characterize the sizes and alignments of different collections.
The size of an array of N elements of type T is N * sizeof(T): the sum of the sizes of its elements. The alignment of the array is alignof(T).
The size of a union is the maximum of the sizes of its components (because the union can only hold one component at a time). Its alignment is also the maximum of the alignments of its components.
The size of a struct is at least as big as the sum of the sizes of its components. Its alignment is the maximum of the alignments of its components.
Thus, the alignment of every collection equals the maximum of the alignments of its components.
It's also true that the alignment equals the least common multiple of the alignments of its components. You might have thought lcm was a better answer, but the max is the same as the lcm for every architecture that matters, because all fundamental alignments are powers of two.
The size of a struct might be larger than the sum of the sizes of its components, because of alignment constraints. Since the compiler must lay out struct components in order, and it must obey the components' alignment constraints, and it must ensure different components occupy disjoint addresses, it must sometimes introduce extra space in structs. Here's an example: the struct will have 3 bytes of padding after char c, to ensure that int i2 has the correct alignment.
struct twelve_bytes {
    int i1;
    char c;
    int i2;
};
Thanks to padding, reordering struct components can sometimes reduce the total size of a struct. Padding can happen at the end of a struct as well as the middle. Padding can never happen at the start of a struct, however (because of Rule 1).
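A minimal sketch of the reordering effect (our own example; the exact sizes assume x86-64 Linux alignments):

struct padded {            // 1 + 3 (padding) + 4 + 1 + 3 (padding) = 12 bytes
    char c1;
    int i;
    char c2;
};

struct reordered {         // 4 + 1 + 1 + 2 (padding) = 8 bytes
    int i;
    char c1;
    char c2;
};

static_assert(sizeof(padded) == 12, "assumes x86-64 alignments");
static_assert(sizeof(reordered) == 8, "assumes x86-64 alignments");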
The rules also imply that the offset of any struct member—which is the difference between the address of the member and the address of the containing struct—is a multiple of the member's alignment.
To see why, consider a struct s with member m at offset o. The malloc rule says that any pointer returned from malloc is correctly aligned for s. Every pointer returned from malloc is maximally aligned, equalling 16*x for some integer x. The struct rule says that the address of m, which is 16*x + o, is correctly aligned. That means that 16*x + o = alignof(m)*y for some integer y. Divide both sides by a = alignof(m) and you see that 16*x/a + o/a = y. But 16/a is an integer—the maximum alignment is a multiple of every alignment—so 16*x/a is an integer. We can conclude that o/a must also be an integer!
Finally, we can also derive the necessity for padding at the end of structs. (How?)
What happens when an object is uninitialized? The answer depends on its lifetime.
static lifetime (e.g., int global; at file scope): The object is initialized to 0.
automatic or dynamic lifetime (e.g., int local; in a function, or int* ptr = new int): The object is uninitialized and reading the object's value before it is assigned causes undefined behavior.
Compiler hijinks
In C++, most dynamic memory allocation uses special language operators, new and delete, rather than library functions.
Though this seems more complex than the library-function style, it has advantages. A C compiler cannot tell what malloc and free do (especially when they are redefined to debugging versions, as in the problem set), so a C compiler cannot necessarily optimize calls to malloc and free away. But the C++ compiler may assume that all uses of new and delete follow the rules laid down by the abstract machine. That means that if the compiler can prove that an allocation is unnecessary or unused, it is free to remove that allocation!
For example, we compiled this program in the problem set environment (based on test003.cc):
int main() {
    char* ptrs[10];
    for (int i = 0; i < 10; ++i) {
        ptrs[i] = new char[i + 1];
        delete[] ptrs[i];
    }
    m61_printstatistics();
}
The optimizing C++ compiler removes all calls to new and delete, leaving only the call to m61_printstatistics()! (For instance, try objdump -d testXXX to look at the compiled x86-64 instructions.) This is valid because the compiler is explicitly allowed to eliminate unused allocations, and here, since the ptrs variable is local and doesn't escape main, all allocations are unused. The C compiler cannot perform this useful transformation. (But the C compiler can do other cool things, such as unroll the loops.)
One of C's more interesting choices is that it explicitly relates pointers and arrays. Although arrays are laid out in memory in a specific way, they generally behave like pointers when they are used. This property probably arose from C's desire to explicitly model memory as an array of bytes, and it has beautiful and confounding effects.
We've already seen one of these effects. The hexdump function has this signature (arguments and return type):
void hexdump(const void* ptr, size_t size);
But we can just pass an array as argument to hexdump:
char c[10];
hexdump(c, sizeof(c));
When used in an expression like this—here, as an argument—the array magically changes into a pointer to its first element. The above call has the same meaning as this:
hexdump(&c[0], 10 * sizeof(c[0]));
C programmers transition between arrays and pointers very naturally.
A confounding effect is that unlike all other types, in C arrays are passed to and returned from functions by reference rather than by value. C is a call-by-value language except for arrays. This means that all function arguments and return values are copied, so that parameter modifications inside a function do not affect the objects passed by the caller—except for arrays. For instance:
void f(int a[2]) {
    a[0] = 1;
}

int main() {
    int x[2] = {100, 101};
    f(x);
    printf("%d\n", x[0]); // prints 1!
}
If you don't like this behavior, you can get around it by using a struct or a C++ std::array.
#include <array>

struct array1 { int a[2]; };

void f1(array1 arg) {
    arg.a[0] = 1;
}

void f2(std::array<int, 2> a) {
    a[0] = 1;
}

int main() {
    array1 x = {{100, 101}};
    f1(x);
    printf("%d\n", x.a[0]); // prints 100
    std::array<int, 2> x2 = {100, 101};
    f2(x2);
    printf("%d\n", x2[0]); // prints 100
}
C++ extends the logic of this array–pointer correspondence to support arithmetic on pointers as well.
Pointer arithmetic rule. In the C abstract machine, arithmetic on pointers produces the same result as arithmetic on the corresponding array indexes.
Specifically, consider an array T a[n] and pointers T* p1 = &a[i] and T* p2 = &a[j]. Then:
Equality: p1 == p2 if and only if (iff) p1 and p2 point to the same address, which happens iff i == j.
Inequality: Similarly, p1 != p2 iff i != j.
Less-than: p1 < p2 iff i < j.
Also, p1 <= p2 iff i <= j; and p1 > p2 iff i > j; and p1 >= p2 iff i >= j.
Pointer difference: What should p1 - p2 mean? Using array indexes as the basis, p1 - p2 == i - j. (But the type of the difference is always ptrdiff_t, which on x86-64 is long, the signed version of size_t.)
Addition: p1 + k (where k is an integer type) equals the pointer &a[i + k]. (k + p1 returns the same thing.)
Subtraction: p1 - k equals &a[i - k].
Increment and decrement: ++p1 means p1 = p1 + 1, which means p1 = &a[i + 1]. Similarly, --p1 means p1 = &a[i - 1]. (There are also postfix versions, p1++ and p1--, but C++ style prefers the prefix versions.)
No other arithmetic operations on pointers are allowed. You can't multiply pointers, for example. (You can multiply addresses by casting the pointers to the address type, uintptr_t—so (uintptr_t) p1 * (uintptr_t) p2—but why would you?)
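A minimal sketch exercising these rules with assertions (our own example):

#include <cassert>

int main() {
    int a[10] = {};
    int* p1 = &a[2];
    int* p2 = &a[7];
    assert(p2 - p1 == 5);       // pointer difference == index difference
    assert(p1 + 5 == p2);       // addition moves by elements, not bytes
    assert(p1 < p2);            // same ordering as the indexes
    ++p1;
    assert(p1 == &a[3]);
}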
From pointers to iterators
Let's write a function that can sum all the integers in an array.
int sum(int a[], int size) {
    int sum = 0;
    for (int i = 0; i != size; ++i) {
        sum += a[i];
    }
    return sum;
}
This function can compute the sum of the elements of any int array. But because of the pointer–array relationship, its a argument is really a pointer. That allows us to call it with subarrays as well as with whole arrays. For instance:
int a[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
int s1 = sum(a, 10); // 45
int s2 = sum(&a[0], 10); // same as s1
int s3 = sum(&a[1], 5); // sums s[1]...s[5], computing 15
int s4 = sum(a + 1, 5); // same as s3
This way of thinking about arrays naturally leads to a style that avoids sizes entirely, using instead a sentinel or boundary argument that defines the end of the interesting part of the array.
int sum(int* first, int* last) {
    int sum = 0;
    while (first != last) {
        sum += *first;
        ++first;
    }
    return sum;
}
These expressions compute the same sums as the above:
int s1 = sum(a, a + 10);
int s2 = sum(&a[0], &a[0] + 10);
int s3 = sum(&a[1], &a[1] + 5);
int s4 = sum(a + 1, a + 6);
Note that the data from first to last forms a half-open range. In mathematical notation, we care about elements in the range [first, last): the element pointed to by first is included (if it exists), but the element pointed to by last is not. Half-open ranges give us a simple and clear way to describe empty ranges, such as zero-element arrays: if first == last, then the range is empty.
Note that given a ten-element array a, the pointer a + 10 can be formed and compared, but must not be dereferenced—the element a[10] does not exist. The C/C++ abstract machines allow users to form pointers to the "one-past-the-end" boundary elements of arrays, but users must not dereference such pointers.
So in C, two pointers naturally express a range of an array. The C++ standard template library, or STL, brilliantly abstracts this pointer notion to allow two iterators, which are pointer-like objects, to express a range of any standard data structure—an array, a vector, a hash table, a balanced tree, whatever. This version of sum works for any container of ints; notice how little it changed:
template <typename It>
int sum(It first, It last) {
    int sum = 0;
    while (first != last) {
        sum += *first;
        ++first;
    }
    return sum;
}
Some example uses:
std::set<int> set_of_ints;
int s1 = sum(set_of_ints.begin(), set_of_ints.end());
std::list<int> linked_list_of_ints;
int s2 = sum(linked_list_of_ints.begin(), linked_list_of_ints.end());
Addresses vs. pointers
What's the difference between these expressions? (Again, a is an array of type T, and p1 == &a[i] and p2 == &a[j].)
ptrdiff_t d1 = p1 - p2;
ptrdiff_t d2 = (uintptr_t) p1 - (uintptr_t) p2;
The first expression is defined analogously to index arithmetic, so d1 == i - j. But the second expression performs the arithmetic on the addresses corresponding to those pointers. We should expect d2 to equal sizeof(T) * d1. Always be aware of which kind of arithmetic you're using. Generally arithmetic on pointers should not involve sizeof, since the sizeof is included automatically according to the abstract machine; but arithmetic on addresses almost always should involve sizeof.
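A minimal sketch making the distinction concrete (our own example):

#include <cassert>
#include <cstddef>
#include <cstdint>

int main() {
    long a[8] = {};
    long* p1 = &a[6];
    long* p2 = &a[2];
    ptrdiff_t d1 = p1 - p2;                          // 4: measured in elements
    ptrdiff_t d2 = (uintptr_t) p1 - (uintptr_t) p2;  // 32: measured in bytes
    assert(d1 == 4);
    assert(d2 == (ptrdiff_t) (sizeof(long) * d1));
}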
Although C++ is a low-level language, the abstract machine is surprisingly strict about which pointers may be formed and how they can be used. Violate the rules and you're in hell because you have invoked the dreaded undefined behavior.
Given an array a[N] of N elements of type T:
Forming a pointer &a[i] (or a + i) with 0 ≤ i ≤ N is safe.
Forming a pointer &a[i] with i < 0 or i > N causes undefined behavior.
Dereferencing a pointer &a[i] with 0 ≤ i < N is safe.
Dereferencing a pointer &a[i] with i < 0 or i ≥ N causes undefined behavior.
(For the purposes of these rules, objects that are not arrays count as single-element arrays. So given T x, we can safely form &x and &x + 1 and dereference &x.)
What "undefined behavior" means is horrible. A program that executes undefined behavior is erroneous. But the compiler need not catch the error. In fact, the abstract machine says anything goes: undefined behavior is "behavior … for which this International Standard imposes no requirements." "Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)." Other possible behaviors include allowing hackers from the moon to steal all of a program's data, take it over, and force it to delete the hard drive on which it is running. Once undefined behavior executes, a program may do anything, including making demons fly out of the programmer's nose.
Pointer arithmetic, and even pointer comparisons, are also affected by undefined behavior. It's undefined to go beyond an array's bounds using pointer arithmetic. And pointers may be compared for equality or inequality even if they point to different arrays or objects, but if you try to compare different arrays via less-than, like this:
int a[10];
int b[10];
if (a < b + 10) ...
that causes undefined behavior.
If you really want to compare pointers that might be to different arrays—for instance, you're writing a hash function for arbitrary pointers—cast them to uintptr_t first.
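For example, here is a minimal sketch of such helpers (the names ptr_less and ptr_hash are ours, not standard functions):

#include <cstddef>
#include <cstdint>

// Order two arbitrary pointers. Comparing the integer addresses is always defined.
bool ptr_less(const void* a, const void* b) {
    return (uintptr_t) a < (uintptr_t) b;
}

// Hash an arbitrary pointer by mixing the high and low halves of its address.
size_t ptr_hash(const void* p) {
    uintptr_t x = (uintptr_t) p;
    return (size_t) (x ^ (x >> (4 * sizeof(x))));
}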
Undefined behavior and optimization
A program that causes undefined behavior is not a C++ program. The abstract machine says that a C++ program, by definition, is a program whose behavior is always defined. The C++ compiler is allowed to assume that its input is a C++ program. (Obviously!) So the compiler can assume that its input program will never cause undefined behavior. Thus, since undefined behavior is "impossible," if the compiler can prove that a condition would cause undefined behavior later, it can assume that condition will never occur.
Consider this program:
char* x = /* some value */;
assert(x + 1 > x);
printf("x = %p, x + 1 = %p\n", x, x + 1);
If we supply a value equal to (char*) -1, we're likely to see output like this:
x = 0xffffffffffffffff, x + 1 = 0
with no assertion failure! But that's an apparently impossible result. The printout can only happen if x + 1 > x (otherwise, the assertion will fail and stop the printout). But x + 1, which equals 0, is less than x, which is the largest 8-byte value!
The impossible happens because of undefined behavior reasoning. When the compiler sees an expression like x + 1 > x (with x a pointer), it can reason this way:
"Ah, x + 1. This must be a pointer into the same array as x (or it might be a boundary pointer just past that array, or just past the non-array object x). This must be so because forming any other pointer would cause undefined behavior.
"The pointer comparison is the same as an index comparison. x + 1 > x means the same thing as &x[1] > &x[0]. But that holds iff 1 > 0.
"In my infinite wisdom, I know that 1 > 0. Thus x + 1 > x always holds, and the assertion will never fail.
"My job is to make this code run fast. The fastest code is code that's not there. This assertion will never fail—might as well remove it!"
Integer undefined behavior
Arithmetic on signed integers also has important undefined behaviors. Signed integer arithmetic must never overflow. That is, the compiler may assume that the mathematical result of any signed arithmetic operation, such as x + y (with x and y both int), can be represented inside the relevant type. It causes undefined behavior, therefore, to add 1 to the maximum positive integer. (The ubexplore.cc program demonstrates how this can produce impossible results, as with pointers.)
Arithmetic on unsigned integers is much safer with respect to undefined behavior. Unsigned integers are defined to perform arithmetic modulo their size. This means that if you add 1 to the maximum positive unsigned integer, the result will always be zero.
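A two-line illustration of the difference, in the spirit of the ubexplore programs (our own sketch):

#include <climits>
#include <cstdio>

int main() {
    unsigned u = UINT_MAX;
    printf("%u\n", u + 1);   // well defined: wraps around to 0
    int i = INT_MAX;
    printf("%d\n", i + 1);   // undefined behavior: no particular result can be relied on
}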
Dividing an integer by zero causes undefined behavior whether or not the integer is signed.
Sanitizers, which in our makefiles are turned on by supplying SAN=1, can catch many undefined behaviors as soon as they happen. Sanitizers are built in to the compiler itself; a sanitizer involves cooperation between the compiler and the language runtime. This has the major performance advantage that the compiler introduces exactly the required checks, and the optimizer can then use its normal analyses to remove redundant checks.
That said, undefined behavior checking can still be slow. Undefined behavior allows compilers to make assumptions about input values, and those assumptions can directly translate to faster code. Turning on undefined behavior checking can make some benchmark programs run 30% slower.
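Under the hood, SAN=1 generally corresponds to passing sanitizer flags to the compiler; a typical invocation (the exact flag set used by the course makefiles may differ) looks something like this:

c++ -std=c++17 -O2 -g -fsanitize=address -fsanitize=undefined ubexplore.cc -o ubexplore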
Signed integer undefined behavior
File cs61-lectures/datarep5/ubexplore2.cc contains the following program.
int main(int argc, const char *argv[]) {
    assert(argc >= 3);
    int n1 = strtol(argv[1], nullptr, 0);
    int n2 = strtol(argv[2], nullptr, 0);
    for (int i = n1; i <= n2; ++i) {
        printf("%d\n", i);
    }
}
What will be printed if we run the program with ./ubexplore2 0x7ffffffe 0x7fffffff?
0x7fffffff is the largest positive value that can be represented by type int. Adding one to this value yields 0x80000000, which in two's complement representation is the smallest negative number representable by type int.
Assuming that the program wraps around this way on overflow, the loop exit condition i > n2 can never be met, and the program should run (and print out numbers) forever.
However, if we run the optimized version of the program, it prints only two numbers (2147483646 and 2147483647) and then exits. The unoptimized program does print forever and never exits.
What's going on here? We need to look at the compiled assembly of the program with and without optimization (via objdump -S).
The unoptimized version basically looks like this:
1. compare i and n2... (mov -0x1c(%rbp),%eax; cmp -0x18(%rbp),%eax)
2. and exit if i is greater (jg <end of function>)
3. otherwise, print i (... callq ...)
4. increment i (mov -0x1c(%rbp),%eax; add $0x1,%eax; mov %eax,-0x1c(%rbp))
5. and go back to step 1 (jmp <step 1>)
This is a pretty direct translation of the loop.
The optimized version, though, does it differently. As always, the optimizer has its own ideas. (Your compiler may produce different results!)
1. compare i and n2... (cmp %r14d,%ebx)
2. and exit if i is greater (jg <end of function>)
3. otherwise, set tmp = n2 + 1 (lea 0x1(%rax),%ebp)
4. print i (... callq ...)
5. increment i (add $0x1,%ebx)
6. compare i and tmp... (cmp %ebp,%ebx)
7. and go to step 4 if unequal (jne <step 4>)
The compiler changed the source's less than or equal to comparison, i <= n2, into a not equal to comparison in the executable, i != n2 + 1 (in both cases using signed computer arithmetic, i.e., modulo 2^32)! The comparison i <= n2 will always return true when n2 == 0x7FFFFFFF, the maximum signed integer, so the loop goes on forever. But the i != n2 + 1 comparison does not always return true when n2 == 0x7FFFFFFF: when i wraps around to 0x80000000 (the smallest negative integer), then i equals n2 + 1 (which also wrapped), and the loop stops.
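In source terms, the optimized machine code behaves as if the loop had been rewritten roughly like this (a sketch of the transformation, not literal compiler output):

int tmp = n2 + 1;                 // undefined behavior if n2 == INT_MAX
for (int i = n1; i != tmp; ++i) {
    printf("%d\n", i);
}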
Why did the compiler make this transformation? In the original loop, the step-5 jump is immediately followed by another comparison and jump in steps 1 and 2. The processor jumps all over the place, which can confuse its prediction circuitry and slow down performance. In the transformed loop, the step-7 jump is never followed by a comparison and jump; instead, step 7 goes back to step 4, which always prints the current number. This more streamlined control flow is easier for the processor to make fast.
But the streamlined control flow is only a valid substitution under the assumption that the addition n2 + 1 never overflows. Luckily (sort of), signed arithmetic overflow causes undefined behavior, so the compiler is totally justified in making that assumption!
Programs based on ubexplore2 have demonstrated undefined behavior differences for years, even as the precise reasons why have changed. In some earlier compilers, we found that the optimizer just upgraded the ints to longs—arithmetic on longs is just as fast on x86-64 as arithmetic on ints, since x86-64 is a 64-bit architecture, and sometimes using longs for everything lets the compiler avoid conversions back and forth. The ubexplore2l program demonstrates this form of transformation: since the loop variable is added to a long counter, the compiler opportunistically upgrades i to long as well. This transformation is also only valid under the assumption that i + 1 will not overflow—which it can't, because of undefined behavior.
Using unsigned type prevents all this undefined behavior, because arithmetic overflow on unsigned integers is well defined in C/C++. The ubexplore2u.cc file uses an unsigned loop index and comparison, and ./ubexplore2u and ./ubexplore2u.noopt behave exactly the same (though you have to give arguments like ./ubexplore2u 0xfffffffe 0xffffffff to see the overflow).
Computer arithmetic and bitwise operations
Basic bitwise operators
Computers offer not only the usual arithmetic operators like + and -, but also a set of bitwise operators. The basic ones are & (and), | (or), ^ (xor/exclusive or), and the unary operator ~ (complement). In truth table form:
& (and): 0 & 0 == 0, 0 & 1 == 0, 1 & 0 == 0, 1 & 1 == 1
| (or): 0 | 0 == 0, 0 | 1 == 1, 1 | 0 == 1, 1 | 1 == 1
^ (xor): 0 ^ 0 == 0, 0 ^ 1 == 1, 1 ^ 0 == 1, 1 ^ 1 == 0
~ (complement): ~0 == 1, ~1 == 0
In C or C++, these operators work on integers. But they work bitwise: the result of an operation is determined by applying the operation independently at each bit position. Here's how to compute 12 & 4 in 4-bit unsigned arithmetic:
  12 == 0b 1 1 0 0
&  4 == 0b 0 1 0 0
        0b 0 1 0 0 == 4
These basic bitwise operators simplify certain important arithmetics. For example, (x & (x - 1)) == 0 tests whether x is zero or a power of 2.
Negation of signed integers can also be expressed using a bitwise operator: -x == ~x + 1. This is in fact how we define two's complement representation. We can verify that x and (-x) do add up to zero under this representation:
x + (-x) == (x + ~x) + 1
         == 0b 1111... + 1
         == 0   (the final carry out of the most significant bit is discarded)
Bitwise "and" (&) can help with modular arithmetic. For example, x % 32 == (x & 31). We essentially "mask off", or clear, higher order bits to do modulo-powers-of-2 arithmetics. This works in any base. For example, in decimal, the fastest way to compute x % 100 is to take just the two least significant digits of x.
Bitwise shift of unsigned integer
x << i appends i zero bits starting at the least significant bit of x. High order bits that don't fit in the integer are thrown out. For example, assuming 4-bit unsigned integers
0b 1101 << 2 == 0b 0100
Similarly, x >> i appends i zero bits at the most significant end of x. Lower bits are thrown out.
0b 1101 >> 2 == 0b 0011
Bitwise shift helps with division and multiplication. For example:
x / 64 == x >> 6
x * 64 == x << 6
A modern compiler can optimize y = x * 66 into y = (x << 6) + (x << 1).
Bitwise operations also allows us to treat bits within an integer separately. This can be useful for "options".
For example, when we call a function to open a file, we have a lot of options:
Open for reading?
Open for writing?
Read from the end?
Optimize for writing?
We have a lot of true/false options.
One bad way to implement this is to have this function take a bunch of arguments -- one argument for each option. This makes the function call look like this:
open_file(..., true, false, ...)
The long list of arguments slows down the function call, and one can also easily lose track of the meaning of the individual true/false values passed in.
A cheaper way to achieve this is to use a single integer to represent all the options. Have each option defined as a power of 2, and simply | (or) them together and pass them as a single integer.
#define O_READ 1
#define O_WRITE 2
open_file(..., O_READ | O_WRITE); // setting both O_READ and O_WRITE flags
Flags are usually defined as powers of 2 so we set one bit at a time for each flag. It is less common but still possible to define a combination flag that is not a power of 2, so that it sets multiple bits in one go.
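Checking and clearing individual flags is just as cheap; a sketch reusing the hypothetical flag constants from above:

#define O_READ 1
#define O_WRITE 2

int update_flags(int flags) {
    if (flags & O_WRITE) {
        // the O_WRITE bit is set: the caller asked to open for writing
    }
    flags &= ~O_READ;     // clear the O_READ bit
    flags |= O_WRITE;     // set the O_WRITE bit
    return flags;
}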
File cs61-lectures/datarep5/membench.cc contains a memory allocation benchmark. The core of the benchmark looks like this
void benchmark() {
    // Allocate a new memory arena for this thread.
    // An "arena" is an object that encapsulates a set of memory allocations.
    // Arenas can capture allocation statistics and improve speed.
    memnode_arena* arena = memnode_arena_new();

    // Allocate 4096 memnodes.
    memnode* m[4096];
    for (int i = 0; i != 4096; ++i) {
        m[i] = memnode_alloc(arena);
    }

    // `noperations` times, free a memnode and then allocate another one.
    for (unsigned i = 0; i != noperations; ++i) {
        unsigned pos = i % 4096;
        memnode_free(arena, m[pos]);
        m[pos] = memnode_alloc(arena);
    }

    // Free the remaining memnodes and the arena.
    for (int i = 0; i != 4096; ++i) {
        memnode_free(arena, m[i]);
    }
    memnode_arena_free(arena);
}
The benchmark tests the performance of memnode_alloc() and memnode_free() allocator functions. It allocates 4096 memnode objects, then free-and-then-allocates them for noperations times, and then frees all of them.
We notice that by the end of the function, all dynamically allocated data are freed. Can we take advantage of this property to speed up allocation/deallocation?
We only allocate memnodes, and all memnodes are of the same size, so we don't need metadata that keeps track of the size of each allocation. Furthermore, since all dynamically allocated data are freed at the end of the function, for each individual memnode_free() call we don't really need to return memory to the system allocator. We can simply reuse this memory during the function and return all of it to the system at once when the function exits.
If we run the benchmark with 100000000 allocations, and use the system malloc() and free() functions to implement the memnode allocator, the benchmark finishes in 0.908 seconds.
Our alternative implementation of the allocator can finish in 0.355 seconds, beating the heavily optimized system allocator by a factor of 3. We will reveal how we achieved this in the next lecture.
We continue our exploration with the memnode allocation benchmark introduced from the last lecture.
File cs61-lectures/datarep6/mb-malloc.cc contains a version of the benchmark using the system new and delete operators.
unsigned long memnode_benchmark(unsigned noperations, unsigned step) {
    assert(step % 2 == 1); // `step` must be odd
    long counter = 0;

    // Allocate 4096 memnodes.
    const unsigned nnodes = 4096;
    memnode* m[nnodes];
    for (unsigned i = 0; i != nnodes; ++i) {
        m[i] = new memnode;
        m[i]->file = "datarep/mb-filename.cc";
        m[i]->line = counter;
        ++counter;
    }

    // Replace one memnode `noperations` times.
    for (unsigned i = 0; i != noperations; ++i) {
        unsigned pos = (i * step) % nnodes;
        delete m[pos];
        m[pos] = new memnode;
        m[pos]->file = "datarep/mb-filename.cc";
        m[pos]->line = counter;
        ++counter;
    }

    // Compute a statistic and free them.
    unsigned long result = 0;
    for (unsigned i = 0; i != nnodes; ++i) {
        result += m[i]->line;
        delete m[i];
    }
    return result;
}
In this function we allocate an array of 4096 pointers to memnodes, which occupies 2^3 * 2^12 = 2^15 bytes on the stack. We then allocate 4096 memnodes. Our memnode is defined like this:
struct memnode {
    std::string file;
    unsigned line;
};
Each memnode contains a std::string object and an unsigned integer. Each std::string object internally contains a pointer that points to a character array in the heap. Therefore, every time we create a new memnode, we need 2 allocations: one to allocate the memnode itself, and another one performed internally by the std::string object when we initialize/assign a string value to it.
Every time we deallocate a memnode by calling delete, we also delete the std::string object, and the string object knows that it should deallocate the heap character array it internally maintains. So there are also 2 deallocations occurring each time we free a memnode.
We make the benchmark return a seemingly meaningless result to prevent an aggressive compiler from optimizing everything away. We also use this result to make sure our subsequent optimizations to the allocator are correct, by checking that they produce the same result.
This version of the benchmark, using the system allocator, finishes in 0.335 seconds. Not bad at all.
Spoiler alert: We can do 15x better than this.
1st optimization: std::string
We only deal with one file name, namely "datarep/mb-filename.cc", which is constant throughout the program for all memnodes. It's also a string literal, which as a constant string has static lifetime. Why can't we simply use a const char* in place of the std::string and let the pointer point to the static constant string? This saves us the internal allocation/deallocation performed by std::string every time we initialize/delete a string.
The fix is easy, we simply change the memnode definition:
const char* file;
This version of the benchmark now finishes in 0.143 seconds, a 2x improvement over the original benchmark. This 2x improvement is consistent with a 2x reduction in numbers of allocation/deallocation mentioned earlier.
You may ask why people still use std::string if it involves an additional allocation and is slower than const char*, as shown in this benchmark. std::string is much more flexible in that it also handles data that doesn't have static lifetime, such as input from a user or data the program receives over the network. In short, when the program deals with strings that are not constant, heap data is likely to be very useful, and std::string provides facilities to conveniently handle on-heap data.
2nd optimization: the system allocator
We still use the system allocator to allocate/deallocate memnodes. The system allocator is a general-purpose allocator, which means it must handle allocation requests of all sizes. Such general-purpose designs usually come with a performance compromise. Since we are only allocating memnodes, which are fairly small objects (and all have the same size), we can build a special-purpose allocator just for them.
File cs61-lectures/datarep6/mb-arena-01.cc contains a version of the benchmark using an arena allocator.
memnode_arena arena;
m[i] = arena.allocate();
arena.deallocate(m[pos]);
m[pos] = arena.allocate();
Compared to the previous version of the benchmark, in this version, instead of calling new and delete, we use arena.allocate() and arena.deallocate() to allocate and free memnodes. Our arena object (of type memnode_arena) is our special-purpose allocator for memnodes.
This is how we implement the memnode_arena allocator:
struct memnode_arena {
    std::vector<memnode*> free_list;

    memnode* allocate() {
        memnode* n;
        if (free_list.empty()) {
            n = new memnode;
        } else {
            n = free_list.back();
            free_list.pop_back();
        }
        return n;
    }
    void deallocate(memnode* n) {
        free_list.push_back(n);
    }
};
This allocator maintains a free list (a C++ vector) of freed memnodes. allocate() simply pops a memnode off the free list if there is any, and deallocate() simply puts the memnode on the free list. This free list serves as a buffer between the system allocator and the benchmark function, so that the system allocator is invoked less frequently. In fact, in the benchmark, the system allocator is only invoked 4096 times, when it initializes the pointer array. That's a huge reduction because all 10 million "recycle" operations in the middle now don't involve the system allocator.
With this special-purpose allocator we can finish the benchmark in 0.057 seconds, another 2.5x improvement.
However this allocator now leaks memory: it never actually calls delete! Let's fix this by letting it also keep track of all allocated memnodes. The modified definition of memnode_arena now looks like this:
struct memnode_arena {
    std::vector<memnode*> free_list;
    std::vector<memnode*> allocated;   // every memnode ever created

    ~memnode_arena() {
        destroy_all();
    }
    void destroy_all() {
        for (auto a : allocated) {
            delete a;
        }
    }
    // ... allocate() now also does `allocated.push_back(n);` right after `n = new memnode;`
};
With the updated allocator we simply need to invoke arena.destroy_all() at the end of the function to fix the memory leak. And we don't even need to invoke this method manually! We can use the C++ destructor for the memnode_arena struct, defined as ~memnode_arena() in the code above, which is automatically called when our arena object goes out of scope. We simply make the destructor invoke the destroy_all() method, and we are all set.
Fixing the leak doesn't appear to affect performance at all. This is because the overhead added by tracking the allocated list and calling delete only affects the initial allocation of the 4096 memnodes plus the cleanup at the very end. These 8192 additional operations are a relatively small number compared to the 10 million recycle operations, so the added overhead is hardly noticeable.
Spoiler alert: We can improve this by another factor of 2.
3rd optimization: std::vector
In our special purpose allocator memnode_arena, we maintain an allocated list and a free list, both using C++ std::vectors. std::vectors are dynamic arrays; like std::string, they involve an additional level of indirection and store the actual array in the heap. We don't access the allocated list during the "recycling" part of the benchmark (which takes the bulk of the benchmark time, as we showed earlier), so the allocated list is probably not our bottleneck. We do, however, add and remove elements from the free list for each recycle operation, and the indirection introduced by the std::vector here may actually be our bottleneck. Let's find out.
Instead of using a std::vector, we could use a linked list of all free memnodes for the actual free list. We will need to include some extra metadata in the memnode to store pointers for this linked list. However, unlike in the debugging allocator pset, in a free list we don't need to store this metadata in addition to the actual memnode data: the memnode is free, and not in use, so we can reuse its memory, using a union:
union freeable_memnode {
    memnode n;
    freeable_memnode* next_free;
};
We then maintain the free list like this:
struct memnode_arena {
    std::vector<freeable_memnode*> allocated_groups;
    freeable_memnode* free_list;

    memnode* allocate() {
        if (!free_list) {
            refresh_free_list();   // allocates a fresh batch of freeable_memnodes and links them into free_list
        }
        freeable_memnode* fn = free_list;
        free_list = fn->next_free;
        return &fn->n;
    }
    void deallocate(memnode* n) {
        freeable_memnode* fn = (freeable_memnode*) n;
        fn->next_free = free_list;
        free_list = fn;
    }
};
Compared to the std::vector free list, this linked free list always points directly to an available memnode when it is not empty (free_list != nullptr), without going through any indirection. With the std::vector free list, one would first have to go into the heap to access the actual array containing pointers to free memnodes, and then access the memnode itself.
With this change we can now finish the benchmark in under 0.03 seconds, another 2x improvement over the previous version!
Compared to the benchmark with the system allocator (which finished in 0.335 seconds), we managed to achieve a speedup of nearly 15x with arena allocation. | CommonCrawl |
OSCA: a tool for omic-data-based complex trait analysis
Futao Zhang1,
Wenhan Chen1,
Zhihong Zhu1,
Qian Zhang1,
Marta F. Nabais1,2,
Ting Qi1,
Ian J. Deary3,
Naomi R. Wray1,4,
Peter M. Visscher1,4,
Allan F. McRae1 &
Jian Yang ORCID: orcid.org/0000-0003-2001-24741,4,5
Genome Biology volume 20, Article number: 107 (2019)
The rapid increase of omic data has greatly facilitated the investigation of associations between omic profiles such as DNA methylation (DNAm) and complex traits in large cohorts. Here, we propose a mixed-linear-model-based method called MOMENT that tests for association between a DNAm probe and trait with all other distal probes fitted in multiple random-effect components to account for unobserved confounders. We demonstrate by simulations that MOMENT shows a lower false positive rate and more robustness than existing methods. MOMENT has been implemented in a versatile software package called OSCA together with a number of other implementations for omic-data-based analyses.
The rapid proliferation of genetic and omic data in large cohort-based samples in the past decade has greatly advanced our understanding of the genetic architecture of omic profiles and the molecular mechanisms underpinning the genetic variation of human complex traits [1,2,3]. These advances include the identification of a large number of genetic variants associated with gene expression [4, 5], DNA methylation [6, 7], histone modification [8, 9], and protein abundance [10, 11]; the discovery of omic measures associated with complex traits [12, 13]; the improved accuracy in predicting a trait using omic data [14, 15]; and the prioritization of gene targets for complex traits by integrating genetic and omic data in large samples [3, 13, 16,17,18]. These advances have also led to the development of software tools, focusing on a range of different aspects of omic data analysis. Therefore, a software tool that implements reliable and robust statistical methods for comprehensive analysis of omic data with high-performance computing efficiency is required.
A well-recognized challenge in omic-data-based analysis is to control for false positive rate (FPR) in the presence of confounding factors, as failing to model the confounders may lead to spurious associations [19,20,21] and/or a loss of statistical power [22]. While some confounders (e.g., age and sex) are known and available in most data so that their effects can be accounted for by fitting them as covariates in linear models, others are either uncharacterized or difficult to measure. For example, in DNA methylation (DNAm) data from whole blood, cell type compositions (CTCs) are evident confounders in a methylome-wide association study (MWAS; also known as an epigenome-wide association study or EWAS) [21, 23, 24] although CTCs may be useful for the prediction of some phenotypes. CTCs tend to be correlated with the DNAm at CpG sites that are differentially methylated in different cell types (namely differentially methylated sites) and have been shown to be associated with age and multiple traits and diseases [19, 21, 25, 26]. MWAS analysis without accounting for CTCs could give rise to biased test statistics unless neither CTCs nor DNAm sites are associated with the trait in question. Although it is possible to measure CTCs directly or predict them by reference-based prediction methods [27, 28], reference-free methods that are able to correct for confounding effects without the need of characterizing all the confounders have broader applications [22, 29,30,31,32]. Moreover, the predicted CTCs often only explain a certain proportion of variation in CTCs resulting in biased test statistics due to the uncaptured variation in CTCs. Existing reference-free methods are mainly based on the strategy of fitting a number of covariates (estimated from factor analysis or similar approaches with or without reference [22, 29, 31, 32]) in a fixed-effect model or a set of selected DNAm probes in a mixed linear model (MLM) [30]. However, uncharacterized confounders with small to moderate effects and numerous correlations between distal DNAm probes (e.g., those on different chromosomes) induced by the confounders may not be well captured by either a fixed number of principal features or a subset of selected probes.
In this study, we proposed a reference-free method (called MOA: MLM-based omic association) that fits all probes as random effects in an MLM-based association analysis to account for the confounding effects, including the correlations among distal probes induced by the confounding. We then extended the method to stratify the probes into multiple random-effect components (called MOMENT: multi-component MLM-based omic association excluding the target) to model a scenario where some probes are much more strongly associated with the phenotype than others. We evaluated the performance of MOA and MOMENT by extensive simulations and demonstrated their reliability and robustness in comparison with existing methods. We have implemented MOA and MOMENT together with a comprehensive set of other methods for omic data analysis in an easy-to-use and computationally efficient software package, OSCA (omic-data-based complex trait analysis).
Overview of the OSCA software
OSCA comprises four main modules: (1) data management for which we designed a binary format to efficiently store and manage omic data; (2) linear-regression- and MLM-based methods (including the methods proposed in this study) to test for associations between omic measures and complex traits; (3) methods to estimate the proportion of variance in a complex trait captured by all the measures of one or multiple omic profiles (e.g., all SNPs and DNAm probes) and to predict the trait phenotype in a new sample based on the joint effects of all omic measures estimated in a discovery sample; and (4) an efficient implementation of the methods to identify genetic variants associated with an omic profile, e.g., DNA methylation quantitative trait loci (mQTL) analysis. We will describe the methods based on DNAm data, but the methods and software tool are in principle applicable to other types of omic data including gene expression, histone modification, and protein abundance. The computer code of OSCA is written in C++ programming language and supports multi-threading based on OpenMP for high-performance computing. The compiled binary files are freely available at http://cnsgenomics.com/software/osca/.
MLM-based omic association analysis methods
One of the primary applications of OSCA is to test for associations between omic measures (e.g., DNAm probes) and a complex trait (e.g., body mass index (BMI)) correcting for confounding effects. In an MWAS, the test statistics of null probes can be inflated because of the associations of probes with confounders that are correlated with the phenotype. Note that, even if the confounders are not directly associated with the phenotype, the presence of confounders (e.g., CTCs or experimental batches) can cause correlations between the trait-associated probes and the null probes in distal genomic regions or even on different chromosomes, giving rise to inflated test statistics of the null probes (see the simulation results below). Existing methods that fit a number of covariates computed from dimension reduction approaches in a fixed-effect model [22, 31, 32] or a set of selected DNAm probes in an MLM [30] may not be sufficient to correct for confounding effects widely spread among a large number of probes or correlations between distal probes induced by the confounding. We propose two MLM-based approaches (MOA and MOMENT) that include all the (distal) probes as random effects in the model to account for the effects of the confounders on the trait and probes as well as the correlations among distal probes. We show by simulations (see below) that both MOA and MOMENT are more robust than existing methods in controlling for false positive rate (FPR) and family-wise error rate (FWER) in MWAS (see below).
Here, we start with a general MLM that fits all probes as random effects, i.e.,
$$ \mathbf{y}=\mathbf{C}\boldsymbol{\upbeta } +\mathrm{W}\mathbf{u}+\mathbf{e} $$
where y is an n × 1 vector of phenotype values with n being the sample size, C is an n × p matrix for covariates (e.g., age and sex) with p being the number of covariates, β is a p × 1 vector of the effects of covariates on the phenotype, W is an n × m matrix of standardized DNAm measures of all m probes, u is an m × 1 vector of the joint effects of all probes on the phenotype, and e is an n × 1 vector of residuals. In this model, β are fixed effects whereas u and e are random effects with \( \mathbf{u}\sim N\left(\mathbf{0},\mathbf{I}{\sigma}_u^2\right) \) and \( \mathbf{e}\sim N\left(\mathbf{0},\mathbf{I}{\sigma}_e^2\right). \) The variance-covariance matrix for y is \( \operatorname{var}\left(\mathbf{y}\right)=\mathbf{V}={\mathbf{WW}}^{\prime }{\sigma}_u^2+\mathbf{I}{\sigma}_e^2 \). This equation can be re-written as.
$$ \mathbf{V}=\mathbf{A}{\sigma}_o^2+\mathbf{I}{\sigma}_e^2\;\mathrm{with}\;\mathbf{A}={\mathbf{WW}}^{\hbox{'}}/m\kern0.37em \mathrm{and}\;{\sigma}_o^2=m{\sigma}_u^2 $$
where A is defined as the omic-data-based relationship matrix (ORM) ("Methods" section) and \( {\sigma}_o^2 \) is the amount of phenotypic variance captured by all probes. The variance components (\( {\sigma}_o^2 \) and \( {\sigma}_e^2 \)) in such an MLM can be estimated by REML algorithms [33]. Analogous to the method for estimating SNP-based heritability [34, 35], the proportion of variance in the phenotype captured by all the probes can be defined as \( {\rho}^2={\sigma}_o^2/\left({\sigma}_o^2+{\sigma}_e^2\right) \). We name this variance-estimation method OREML following the nomenclature of GREML [34]. The estimated joint probe effects (\( \widehat{\mathbf{u}} \)) from this model by a random-effect estimation approach (e.g., BLUP [36]) can be used to predict the phenotypes of individuals in a new sample based on omic data, i.e., \( {\widehat{\mathbf{y}}}_{\mathrm{new}}={\mathbf{W}}_{\mathrm{new}}\widehat{\mathbf{u}} \). We call this OBLUP.
Model [1] can be extended to test for association between a probe i and the trait, i.e.,
$$ \mathbf{y}={\mathbf{w}}_i{b}_i+\mathbf{C}\boldsymbol{\upbeta } +\mathbf{Wu}+\mathbf{e}\;\mathrm{with}\;\mathbf{V}={\mathbf{WW}}^{\hbox{'}}{\sigma}_u^2+\mathbf{I}{\sigma}_e^2 $$
In comparison to model [1], this model has two additional terms, wi (an n × 1 vector of standardized DNAm measures of a probe i, i.e., the target probe) and bi (the effect of probe i on the phenotype; fixed effect). The probe effect bi (together with the covariates' effects) can be estimated by the generalized least squares (GLS) approach, i.e., \( {\left[{\widehat{b}}_i\ \widehat{\boldsymbol{\upbeta}}\right]}^{\prime }={\left({\mathbf{X}}^{\prime }{\mathbf{V}}^{-1}\mathbf{X}\right)}^{-1}{\mathbf{X}}^{\prime }{\mathbf{V}}^{-1}\mathbf{y} \) and \( \operatorname{var}{\left[{\widehat{b}}_i\ \widehat{\boldsymbol{\upbeta}}\right]}^T={\left({\mathbf{X}}^{\prime }{\mathbf{V}}^{-1}\mathbf{X}\right)}^{-1} \) with X = [wi C]. The sampling variance (standard error (SE) squared) of \( {\widehat{b}}_i \) is the first diagonal element of \( \operatorname{var}{\left[{\widehat{b}}_i\ \widehat{\boldsymbol{\upbeta}}\right]}^T \). The null hypothesis (H0 : bi = 0) can be tested by a two-sided t test (or approximately chi-squared test if sample size is large) given \( {\widehat{b}}_i \) and its SE. We call this method MOA. Applying this method to test each of the probes across the genome is extremely computationally expensive because the variance components \( {\sigma}_u^2 \) and \( {\sigma}_e^2 \) need to be estimated repeatedly for each probe by REML that requires the computation of V−1 (computational complexity of O(n3)) multiple times in an iterative process. To speed up the computation, we use a two-step approach as in [37] to compute V−1, with the first step to perform an eigendecomposition of WW′ and the second step to compute V−1 based on the eigenvalues and eigenvectors. Since the eigendecomposition only needs to be done once for the whole genome scan, this two-step approach reduces the complexity of computing V−1 by orders of magnitude when testing each specific probe. Moreover, as the proportion of phenotypic variance attributable to a single probe is often very small, we can further speed up the computation by an approximate approach (similar to the approximate MLM-based GWAS methods [38, 39]) that only requires to compute V−1 once, assuming that the estimates of \( {\sigma}_u^2 \) and \( {\sigma}_e^2 \) under the null (i.e., bi = 0) are approximately equal to those under the alternative (i.e., bi ≠ 0). Both the approximate and exact MOA approaches have been implemented in OSCA.
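Concretely, the identity underlying this two-step approach is the following (our notation; \( \mathbf{U} \) and \( \boldsymbol{\Lambda} \) denote the eigenvectors and eigenvalues of \( \mathbf{W}{\mathbf{W}}^{\prime } \)):
$$ \mathbf{W}{\mathbf{W}}^{\prime }=\mathbf{U}\boldsymbol{\Lambda } {\mathbf{U}}^{\prime },\kern1em {\mathbf{V}}^{-1}={\left(\mathbf{U}\boldsymbol{\Lambda } {\mathbf{U}}^{\prime }{\sigma}_u^2+\mathbf{I}{\sigma}_e^2\right)}^{-1}=\mathbf{U}{\left(\boldsymbol{\Lambda } {\sigma}_u^2+\mathbf{I}{\sigma}_e^2\right)}^{-1}{\mathbf{U}}^{\prime } $$
so that, once the eigendecomposition is available, recomputing \( {\mathbf{V}}^{-1} \) for updated variance component estimates only requires inverting a diagonal matrix.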
There are two properties of the MOA method worthy of consideration. First, the target probe is fitted twice in the MOA model, once as a fixed effect (bi) and again as a random effect (the ith element of u), resulting in a loss of power to detect bi (a recognized issue in MLM-based association analysis with SNP data [39, 40]). This problem can be solved by leaving out probes in close physical proximity of the target probe (including the target) from the random-effect term because DNAm status of CpG sites in close physical proximity is likely to be regulated by the same mechanism and therefore tends to be highly correlated. This strategy has been used previously in both GWAS (genome-wide association study) [39, 40] and MWAS [30]. In practice, we exclude the probes < 50 kb from the target probe. Note that the distance parameter may differ for other types of omic data (e.g., a window size of 100 kbp is recommended for gene expression data; see below for details). Second, MOA assumes a single distribution to all the probe effects in the random-effect term, which may not be well fitted to data if some probes have much stronger associations with the trait than other probes. For example, if CTCs are associated with the phenotype, then all the probes that are highly differentially methylated in different cell types [41,42,43] may present a very different distribution of effects from the other probes. One solution to this issue is to stratify the probes into multiple groups by the association test statistics (from linear regression) and fit them as separate random-effect terms in the model. We extended the MOA method with the two modifications mentioned above and named it as MOMENT (multi-component MLM-based omic association excluding the target). The MOMENT model can be written as
$$ \mathbf{y}={\mathbf{w}}_i{b}_i+\mathbf{C}\boldsymbol{\upbeta } +\sum \limits_j{\mathbf{W}}_j{\mathbf{u}}_j+\mathbf{e}\kern0.37em \mathrm{with}\;\mathbf{V}=\sum \limits_j{\mathbf{W}}_j{{\mathbf{W}}_j}^{\hbox{'}}{\sigma}_{u_j}^2+\mathbf{I}{\sigma}_e^2 $$
where Wj is an n × mj matrix of standardized DNAm measures of the probes in the jth group with mj being the number of probes in the group (excluding probes within 50Kb of the target probe). In practice, the probes are split into two groups by association p values from a linear regression model (i.e., y = wibi + Cβ + e) at a methylome-wide significant threshold (all the methylome-wide significant probes in the first group and the other probes in the second group). The GLS method described in model [3] can be used to estimate bi and its SE for hypothesis testing. Like the exact MOA method, MOMENT is also computationally intensive when applied in a methylome-wide analysis. We can use a similar approximation approach as described above (i.e., using the variance components estimated under the null to compute \( {\widehat{b}}_i \) and SE) to reduce the computing cost. The variance components are re-estimated when one or more probes are excluded from the first group in case that the proportion of phenotypic variance captured by some of the probes in the first group are large.
Simulation analysis
To quantify the false positive rate (or family-wise error rate) and statistical power of MOMENT (implemented in OSCA), we performed simulations based on DNAm and CTC [44] measures on samples from the Lothian Birth Cohorts (LBC) in three scenarios (Additional file 1: Note S1). We simulated a phenotype (1) with effects from a set of "causal probes" (randomly selected from all probes on the odd chromosomes) but no direct effects from the CTCs, (2) with small to large effects from CTCs but no effects from the probes, and (3) with effects from both the causal probes and CTCs (Additional file 1: Note S1). Note that we only sampled the causal probes from the odd chromosomes in scenarios 1 and 3, leaving the probes on the even chromosomes to quantify false positive rate under the null, and that the DNAm measures were adjusted for age, sex, experimental batches, and smoking status. Results from our models were compared to 6 different methods including (1) Unadj: linear regression without adjustment; (2) CTCadj: linear regression with CTCs fitted as covariates; (3) SVA: linear regression with the SVA surrogate variables fitted as covariates [22]; (4) LFMM2-ridge: a latent factor mixed model (LFMM) using ridge algorithm for confounder estimation [32]; (5) LFMM2-lasso: an LFMM using lasso algorithm for confounder estimation [32]; (6) ReFACTor: linear regression with the first 5 sparse principal components (PCs) from ReFACTor fitted as covariates [31]; (7) 5PCs: linear regression with the first 5 PCs, computed from a principal component analysis (PCA), fitted as covariates; and (8) FaST-LMM-EWASher: a set of selected probes fitted as random effect in an MLM [30]. For completeness of the analysis, we also included MOA (implemented in OSCA) in the comparison. We validated using a subset of data generated from simulation scenario 1 that the test statistics from the approximate MOA/MOMENT approach were extremely highly correlated with those from the corresponding exact approach (Pearson correlation > 0.999 for causal probes and > 0.998 for null probes; Additional file 1: Figure S1). Hence, for the ease of computation, we used the approximate MOA/MOMENT approach in all the subsequent analyses.
In simulation scenario 1, although there were no direct effects of the CTCs on the phenotype, the test statistics from Unadj at the null probes were inflated (Fig. 1a and Additional file 1: Table S1) because the null and causal probes—albeit on different sets of chromosomes—are correlated through their correlations with systematic biases such as CTCs. The mean genomic inflation factor (λ) [45] of the null probes (on the even chromosomes) from 100 simulation replicates was 7.67 for Unadj (Additional file 1: Table S1), where λ is defined as the median of χ2 test statistics of the null probes divided by its expected value. CTCadj reduced but not completely removed the inflation in test statistics of the null probes (Fig. 1a and Additional file 1: Table S1), suggesting that the inflation was caused by correlations between the causal and null probes because of the confounding effects of both CTCs and other unobserved confounders. While all the other methods were much less inflated compared to Unadj, MOMENT and MOA showed the least inflation with a mean λ value close to 1. It is slightly surprising to observe that the family-wise error rates (FWERs) of all the methods except MOA and MOMENT were highly inflated (FWERs > 0.6) (Additional file 1: Figure S2a and Additional file 1: Table S1) despite the relatively small genomic inflation at the null probes for most of the methods (Fig. 1a). Here, FWER is defined as the proportion of simulation replicates with at least one null probe at MWAS p value < 0.05/m with m being the number of null probes, which can be interpreted as the probability of observing one or more false positives at a methylome-wide significance level in a single experiment. There was no inflation in FWER for MOMENT, and a marginal inflation for MOA (Additional file 1: Figure S2a and Additional file 1: Table S1), showing the effectiveness of using all (distal) probes to account for the probe correlations. We also quantified the FPR, defined as the proportion of null probes with p values < 0.05 in each simulation replicate. The differences in FPR among the methods showed a similar pattern to the differences in genomic inflation factor (Additional file 1: Figure S2b and Additional file 1: Table S1). We then compared power among the methods. Since the test statistics of many approaches were highly inflated, it is not very meaningful to compare power without accounting for the inflation. We therefore used the area under the ROC curve (AUC) to compare power of the methods given the same level of FPR. Apart from Unadj and CTC, the AUCs of all the methods were on similar levels (Fig. 1b). The conclusions held in additional simulations varying the number of causal probes and the proportion of phenotypic variance captured by the causal probes (Additional file 1: Figure S3 and S4) despite that the inflation in FWER for the existing methods appeared to increase with the increase of the proportion of variance captured per causal probe. Additionally, we applied BACON, a summary-data-based method that seeks to remove genomic inflation taking the true positives into consideration, to the test statistics of all probes produced by the methods tested above. We showed that the inflation in test statistics of the null probes for Unadj was substantially reduced but not completely removed by the BACON adjustment and that the test statistics from MOA and MOMENT remained almost unchanged after the BACON adjustment (Additional file 1: Figure S5).
Power and false positive rate for the MWAS methods in simulation scenario 1. The phenotypes were simulated based on the effects from 100 causal probes but no direct effects from the CTCs. a Mean genomic inflation factor from a method across 100 simulation replicates with an error bar representing ± SE of the mean. The dashed line at 1 shows the expected value if there is no inflation. b Box plot of AUCs for each method from 100 simulation replicates
In simulation scenario 2 where there is no direct probe-trait association, all the probes are null and their χ2 test statistics are expected to follow a χ2 distribution with 1 degree of freedom if the effects of CTCs have been well accounted for. The results showed that the λ value was close to 1 for all the methods except Unadj and FaST-LMM-EWASher (Fig. 2a). It seems that, for some of the methods (e.g., 5PCs and ReFACTor), the λ value slightly increased with the increase of the proportion of variance explained by the CTCs (\( {R}_{\mathrm{CTCs}}^2 \)) (Fig. 2a). The FPRs of the methods were highly consistent with the genomic inflation factors (Additional file 1: Figure S6). Nevertheless, a non-inflated median test statistic does not necessarily mean that the FWER has been well controlled for. In fact, most methods showed inflated FWER in this simulation scenario, and the FWERs of all the methods increased with increasing \( {R}_{\mathrm{CTCs}}^2 \) (Fig. 2b). The FWERs of 5PCs, ReFACTor, LFMM2-ridge, and LFMM2-lasso were close to the expected value (i.e., 0.05) when \( {R}_{\mathrm{CTCs}}^2=0.005 \) and increased to a level between 0.15 and 0.2 when \( {R}_{\mathrm{CTCs}}^2=0.05 \) (Fig. 2b). The relationship between FWER and \( {R}_{\mathrm{CTCs}}^2 \) was relatively flat for SVA with its FWER varying from 0.05 to 0.1 when \( {R}_{\mathrm{CTCs}}^2 \) increased from 0.005 to 0.05. Although FaST-LMM-EWASher showed the most deflated test statistics among all the methods (Fig. 2a), its FWER was substantially higher than all the other methods except Unadj (Fig. 2b), likely due to its feature selection strategy (Additional file 1: Note S2). MOA and MOMENT performed similarly in this simulation scenario and showed the lowest inflation in FWER among all the methods with their FWER being lower than 0.05 when \( {R}_{\mathrm{CTCs}}^2 \) = 0.005 and increased to about 0.1 when \( {R}_{\mathrm{CTCs}}^2 \) = 0.05 (Fig. 2b). In addition, we performed a linear regression analysis with the known CTCs fitted as covariates; as expected, the FWER was close to 0.05 irrespective of the level of \( {R}_{\mathrm{CTCs}}^2 \) (see below for the analysis with predicted CTCs).
Genomic inflation factor and family-wise error rate for the MWAS methods in simulation scenario 2 (effects from CTCs but no causal probes). Shown on the horizontal axis are the \( {R}_{\mathrm{CTCs}}^2 \) values used to simulate the phenotype. a Each dot represents the mean λ value from 1000 simulation replicates given a specified \( {R}_{\mathrm{CTCs}}^2 \) value for a method with an error bar representing ± SE of the mean. b Each dot represents the family-wise error rate, calculated as the proportion of simulation replicates with one or more null probes detected at a methylome-wide significance level
We also compared the methods under the circumstance (simulation scenario 3) where there were associations between the phenotype and CTCs (\( {R}_{\mathrm{CTCs}}^2=0.05 \)) and the null probes were correlated with distal causal probes because both of them were correlated with CTCs (Additional file 1: Note S1). The results were similar to those above (Fig. 1 and Additional file 1: Figure S2). That is, the FWER of MOMENT was close to the expected value, demonstrating the reliability and robustness of the method. The FWER of MOA is slightly higher than that of MOMENT but much lower than those of the other methods which showed strong inflation in FWER and/or FPR due to the correlations between causal and null probes (Additional file 1: Figure S7a, S7c, and S7d, and Additional file 1: Table S2). All the methods showed similar levels of AUC except for Unadj and CTCadj (Additional file 1: Figure S7b). The conclusions held with different sample sizes (Additional file 1: Figure S8 and S9) or different numbers of causal probes with smaller or larger variance explained per causal probe (Additional file 1: Figure S10 and S11). The conclusions also held if we simulated confounding effects on experimental batches in lieu of CTCs (Additional file 1: Figure S12 and S13). We further demonstrated that the result from MOA/MOMENT analysis of the whole sample was consistent with that from a meta-analysis of summary statistics from MOA/MOMENT analyses in two halves of the sample (Additional file 1: Figure S14) and that the methods were applicable to case-control phenotypes (Additional file 1: Figure S15 and S16).
To explore the applicability of the proposed methods to other types of omic data, we tested the methods by simulation based on a real gene expression data set (19,648 gene expression probes on 1219 Mexican American individuals) from the San Antonio Family Heart Study (SAFHS) [46,47,48] ("Methods" section) under simulation scenario 1 (i.e., quantitative phenotypes simulated based on the expression levels of 100 randomly selected causal probes; Additional file 1: Note S1). The result showed that both MOMENT and MOA performed similarly (in comparison to the other methods) as in the simulations based on DNAm data (Additional file 1: Figure S17).
We further compared the computational complexity among the MWAS methods tested in this study and quantified their runtime and memory usage of the methods using simulated and real phenotypes in the LBC (Additional file 1: Table S3). We found that MOA and MOMENT showed the lowest memory usage among all the methods. The approximate MOA approach was the second fastest approach (only slightly slower than LFMM2-ridge), and the approximate MOMENT approach was slower than LFMM2-ridge, approximate MOA, and ReFACTor but much faster than SVA, LFMM2-lasso, and EWASher.
An application of MOMENT to real data
We applied MOMENT and the other methods to four real quantitative traits in the LBC cohorts. These traits, including BMI, height, lung function (measured in the highest score of forced expiratory volume in 1 s), and walking speed (measured in the time taken to walk 6 m), were standardized and corrected for age in each gender group within each sub-cohort (LBC1936 or LBC1921) ("Methods" section). The standardized phenotypes were further processed by a rank-based inverse-normal transformation. The DNAm probes were adjusted for age, sex, and experimental batches. We did not adjust the probes for CTCs or smoking status for the purpose of testing methods (see below).
Consistent with the results from simulations, the test statistics from MOA and MOMENT were not inflated whereas all the other methods showed modest inflation for all the traits (Fig. 3, Table 1, and Additional file 1: Figure S18-S21). Three associations were identified by multiple methods, including one for BMI (cg11202345, detected by all methods), in line with a previous study [49], and two for lung function (cg05575921 and cg05951221, detected by all methods except MOMENT) (Additional file 1: Table S4, Additional file 1: Figure S18 and S20). It should be noted that cg05575921 was reported to be associated with smoking in a previous study [50], indicating that the association between cg05575921 and lung function might be confounded by smoking status. Moreover, MOA, LFMM2-ridge, LFMM2-lasso, and ReFACTor consistently identified 12 additional probes associated with lung function but most of the probes have been linked to smoking in a previous study [51]. Almost all the associations were not significant when smoking status was fitted as a covariate in the models (6.5% of variance in lung function associated with smoking status, Additional file 1: Table S5 and Additional file 1: Figure S22), suggesting that most (if not all) of the probe associations with lung function identified by MOA, LFMM2-ridge, LFMM2-lasso, and ReFACTor were owing to the confounding of smoking. None of the smoking-associated probes were methylome-wide significant for lung function in the analysis using MOMENT (Additional file 1: Figure S20), and the result remained the same when smoking status was fitted as a covariate in MOMENT (Additional file 1: Figure S22), again demonstrating the capability of MOMENT in correcting for unobserved confounding factors. This is further supported by the finding from simulations that the effects of null probes estimated from MOMENT were much less correlated with the phenotype compared to those estimated from MOA (Additional file 1: Figure S23).
QQ plot of p values from MWAS analysis for 4 quantitative traits in the LBC data. The DNAm measures were adjusted for age, sex, and batches. The phenotypes were stratified into groups by sex and cohort and were adjusted for age and standardized to z-scores by rank-based inverse normal transformation in each group. The phenotypes are a BMI, b height, c lung function, and d walking speed
Table 1 Genomic inflation factors reported by different MWAS methods for the 4 traits in the Lothian Birth Cohorts
It has been shown in previous GWASs that MLM-based association analysis methods developed for quantitative traits are applicable to case-control data [37,38,39, 52]. We have shown by simulation that both MOMENT and MOA are applicable to case-control phenotypes regardless whether cases are oversampled (Additional file 1: Figure S15 and S16). To demonstrate the applicability of the proposed methods to discrete phenotypes, we analyzed smoking status (coded as 0, 1, or 2 for non-smoker, former smoker, or current smoker) in the LBC by MOA and MOMENT in comparison with existing methods. All the methods detected a large number (at least 112) of probes at a methylome-wide significance level (p < 2.19e−7) except for MOMENT and EWASher which only identified 4 and 2 probes, respectively, at the methylome-wide significance level (Additional file 1: Figure S24). To validate the association signals other than those identified by MOMENT, we fitted the 4 MOMENT probes as fixed covariates in MOA. None of the additional associations remained methylome-wide significant conditioning on the 4 MOMENT probes (Additional file 1: Figure S25), suggesting that those additional associations detected by MOA (and other methods) were driven by their correlations with the 4 MOMENT signals. MOA failed in this scenario likely because the associations of the 4 MOMENT signals were too strong to be fitted in a single normal distribution with the other probes. This conclusion is further supported by the result that the accuracy of predicting/classifying smoking status in a cross-validation setting using a large number of probes detected by linear regression or MOA was even lower than that using a small number of probes detected by MOMENT (Additional file 1: Table S6). In addition, we recoded the smoking status data to a binary phenotype (0 for non-smoker and 1 for former or current smoker) and applied all the methods to the recoded binary phenotype; the conclusions were similar as above but it seemed that the analyses with the binary phenotype were less powerful than those with the categorical phenotype above (Additional file 1: Figure S26). All these results show the applicability of MOMENT to discrete traits and again demonstrate the robustness and reliability of MOMENT in controlling for false positive associations.
Estimating variance in a phenotype captured by all probes by OREML
We have demonstrated the performance of the omic-data-based association analysis methods in OSCA by simulation and real data analysis. We then turned to evaluate the performance of OREML in estimating the proportion of variance in a complex trait captured by all probes (ρ2) by simulation in two scenarios (Additional file 1: Note S1). The results showed that under either scenario, OREML reported an unbiased estimate of ρ2 (Additional file 1: Table S7). Here, the unbiasedness is defined as that the mean ρ2 estimate from 500 independent simulations is not significantly different from the ρ2 parameter used for simulation. There are two methods implemented in OSCA to compute the ORM ("Methods" section). Our simulation results showed that the estimates of ρ2 based on the two methods were similar (Additional file 1: Table S7).
We also attempted to partition and estimate the proportions of phenotypic variation captured by all SNPs (i.e., \( {h}_{\mathrm{SNP}}^2 \)) and all the DNAm probes respectively when fitted jointly in a model. We first investigated the correlation between genomic relationship matrix (GRM) and methylomic relationship matrix (MRM) in the LBC dataset. We found that the off-diagonal elements of the GRM were almost independent of those of the MRM (r = 0.0045; Additional file 1: Figure S27). From an OREML analysis that fits both the GRM and MRM, we estimated that all the DNAm probes captured 6.5% (SE = 0.038) of the variance for BMI but the estimate for height was nearly zero (\( {\widehat{\rho}}^2 \) = − 0.005 and SE = 0.0086) (Additional file 1: Table S8). These results are in line with the finding from a previous study that the accuracy of genetic risk prediction can be improved by incorporating DNAm data for BMI but not height [14].
In this study, we developed a versatile software tool—OSCA—to manage omic data generated from high-throughput experiments in large cohorts and to facilitate the analyses of complex traits using omic data (Additional file 1: Note S4). The primary applications of OSCA are to identify omic measures associated with a complex trait accounting for unobserved confounding factors (MOMENT) and to estimate the proportion of phenotypic variation captured by all measures of one or multiple omic profiles (OREML). A by-product of the OREML application is to estimate the joint effects of all measures of one or multiple omic profiles (i.e., OBLUP analysis) to predict the phenotype in a new sample. This has been shown to be a powerful and robust approach in age prediction using gene expression or DNAm data [53, 54]. We have also provided computationally efficient implementations in OSCA to manage large-scale omic data and to perform omic-data-based quantitative trait locus (xQTL) analysis and meta-analysis of xQTL summary data. OSCA is an ongoing software development project so that any further methods or functions related to omic-data-based analysis can be included in the software package in the future.
We showed, by simulation, a surprisingly high error rate for all the existing MWAS/EWAS methods, mainly owing to the correlations between distal probes induced by CTCs (and/or other systematic confounders) in DNAm data (Fig. 1). These correlations are widespread at a large number of probes across the methylome (as demonstrated by the proportion of null probes with PMWAS < 0.05 in simulation scenario 1; Additional file 1: Figure S28) and thus are not adequately accounted for by a fixed number of principal features computed from the data (e.g., 5PCs, ReFACTor, LFMM2, and SVA) nor a set of selected probes (e.g., FaST-LMM-EWASher). This conclusion is likely to be applicable to other types of omic data if the measures in distal genomic regions are correlated due to unmeasured confounding factors such as systematic experimental biases or unwanted biological variation, as suggested by our simulations with gene expression data (Additional file 1: Figure S17). This confounding effect can be corrected for by fitting the target probe as a fixed effect and all the other (distal) probes as random effects (i.e., the MOA or MOMENT method). In addition, we tested the robustness of MOMENT to the change of window size used to exclude probes in close physical proximity to the target probe in either direction. We varied the window size from 100 kbp to 250 bp in the MOMENT analysis of data generated from simulation scenario 1 (Additional file 1: Figure S29). We found that the results remained almost unchanged when the window sizes decreased from 100 to 25 kbp whereas there were a substantial number of probes showing deflated test statistics when the window size decreased to 500 bp or 250 bp (Additional file 1: Figure S29). These results justify the use of 50 kbp as the default window size for MOMENT when applied to DNAm data. We also quantified the decay of correlation between a pair of gene expression probes as a function of their physical distance (Additional file 1: Figure S30), which suggests that 100 kbp is an appropriate MOMENT window size for gene expression data although the results remained almost unchanged when the window size was varied from 50 kbp to 1 Mbp in the simulated data (Additional file 1: Figure S31).
Our simulation also showed that, if CTCs or batches explain a large proportion of variation in the phenotype, the FWERs of all the methods tended to be inflated (Additional file 1: Figure S32 and S33) despite that the genomic inflation factor is close to unity for most methods (Fig. 2). We re-ran the simulation under a more extreme setting with \( {R}_{\mathrm{CTCs}}^2 \) varying from 10 to 70%. In this case, the genomic inflation factors of the fixed-effect models (i.e., SVA, ReFACTor, LFMM2, and 5PCs) and the FWERs of all the methods increased as \( {R}_{\mathrm{CTCs}}^2 \) increased (to a lesser extent for FaST-LMM-EWASher), suggesting that there were a set of probes strongly associated with CTCs (Additional file 1: Figure S34). Note that even in this extreme case, MOMENT showed the lowest FWERs on average among all the methods. It is also of note that the FWERs of FaST-LMM-EWASher were relatively low in this scenario (Additional file 1: Figure S32), opposite to its performance when \( {R}_{\mathrm{CTCs}}^2 \) was low (Fig. 2), possibly due to its variable selection strategy (Additional file 1: Note S2). The inflation in FWER was only slightly alleviated by fitting the predicted CTCs as covariates (Additional file 1: Figures S35 and S36). The results also suggest that it may be worth fitting measured CTCs as fixed-effect covariates in MLM-based association analyses such as MOA and MOMENT in practice although this approach is likely to be conservative as indicated by the deflated λ and FWER (Additional file 1: Figure S37). These conclusions are likely to be applicable to other confounding factors such as smoking status, as demonstrated in the analysis of lung function data in the LBC (Additional file 1: Figure S22). Our results also caution the interpretation of associations identified from MWAS for traits that are highly correlated with CTCs and/or other biological confounders. In addition, although our simulation shows that both MOMENT and MOA are applicable to case-control phenotypes (Additional file 1: Figures S15 and S16), direct application of linear model approaches to 0/1 traits is not ideal. If the underlying model is causal (i.e., omic measures have causal effects on the trait), a more appropriate analysis is to use a link function (e.g., a probit or logit model) that connects the 0/1 phenotype to a latent continuous trait, as in the methods recently developed for the analysis of case-control data in GWAS [55,56,57,58]. Since OSCA is an ongoing software development project, the non-linear link functions can be incorporated in the MOMENT/MOA framework in the future.
In conclusion, we showed by simulation the inflation in test statistics of the existing MWAS methods because of the ubiquitous correlations between distal probes caused by confounding factors, and developed two new MWAS methods (MOA and MOMENT) to correct for the inflation. We demonstrated the reliability and robustness of MOMENT by simulations in a number of scenarios and real data analyses. We recommend the use of MOMENT in practice because of its robustness in the presence of unobserved confounders, even though it is slightly less powerful than MOA. We implemented both MOA and MOMENT in a computationally efficient and easy-to-use software tool, OSCA, together with many other functions for omic-data-based analyses (Additional file 1: Figure S38).
Omic-data-based relationship matrix (ORM)
We have described in Eqs. (1, 2) the OREML model to estimate the proportion of variance in a phenotype captured by the DNAm probes all together. In Eq. (1), i.e., y = Cβ + Wu + e, we define W as a matrix of standardized DNAm measures of all probes, and in Eq. (2), we define the ORM as A = WW ′ /m. Therefore, the omic relationship between individual j and k (the jkth element of A) can be computed as \( {A}_{jk}=\frac{1}{m}\sum \limits_i\left({x}_{ij}-{\mu}_i\right)\left({x}_{ik}-{\mu}_i\right)/{\sigma}_i^2 \), where xij is the unstandardized DNAm level of probe i in individual j, μi and \( {\sigma}_i^2 \) are the mean and variance of the ith probe over all the individuals respectively, and m is the number of probes. This model implicitly assumes that the probes of smaller variance in DNAm level (unstandardized) tend to have larger effects on the phenotype (strictly speaking, stronger associations with the phenotype) and that there is no relationship between the proportion of trait variance captured by a probe and the variance of the probe. We also provide in OSCA an alternative method to compute the ORM, i.e., \( {A}_{jk}=\sum \limits_i\left({x}_{ij}-{\mu}_i\right)\left({x}_{ik}-{\mu}_i\right)/\sum \limits_i{\sigma}_i^2 \). If we use this definition of ORM in the OREML analysis, we implicitly assume that there is no relationship between the probe effect on the trait and the variance of the probe but the proportion of trait variance associated with a probe increases as the variance of the probe increases. We showed by simulation and real data analysis that the difference between OREML results using the two methods was very small (Additional file 1: Tables S7 and S8).
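As a concrete illustration of these two ORM definitions, here is a minimal numpy sketch (not the OSCA implementation; the function name, the toy data, and details such as the variance denominator and the handling of missing values are our assumptions):

```python
import numpy as np

def orm(X, standardize_each_probe=True):
    """Omic relationship matrix from an n (individuals) x m (probes) matrix X.

    standardize_each_probe=True : A_jk = (1/m) * sum_i (x_ij - mu_i)(x_ik - mu_i) / sigma_i^2
    standardize_each_probe=False: A_jk = sum_i (x_ij - mu_i)(x_ik - mu_i) / sum_i sigma_i^2
    """
    Xc = X - X.mean(axis=0)        # centre each probe (column)
    v = X.var(axis=0)              # sigma_i^2 of each probe
    if standardize_each_probe:
        W = Xc / np.sqrt(v)        # standardized measures
        return W @ W.T / X.shape[1]
    return Xc @ Xc.T / v.sum()

beta = np.random.default_rng(0).uniform(size=(100, 2000))  # toy DNAm beta values
A = orm(beta)                                              # 100 x 100 ORM
```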
OREML: estimating the proportion of trait variance captured by all DNAm probes
We have shown in Eqs. (1, 2) an OREML model with one random-effect component to estimate the proportion of trait variance captured by all DNAm probes. The model is flexible and can be extended to partition the trait variance into components associated with different sets of probes (e.g., a model with 22 components, one for the probes on each chromosome). A multi-component OREML model can be written as
\( \mathbf{y}=\mathbf{C}\boldsymbol{\beta}+{\sum}_i{\mathbf{W}}_i{\mathbf{u}}_i+\mathbf{e} \) with \( \operatorname{var}\left(\mathbf{y}\right)=\mathbf{V}={\sum}_i{\mathbf{W}}_i{{\mathbf{W}}_i}^{\prime }{\sigma}_{u_i}^2+\mathbf{I}{\sigma}_e^2={\sum}_i{\mathbf{A}}_i{\sigma}_{o_i}^2+\mathbf{I}{\sigma}_e^2 \).
where the definitions of all the parameters and variables are similar to those in Eqs. (1, 2). The variance components can be estimated by REML [33], and the proportion of the trait variance captured by the ith component can be computed as \( {\rho}_i^2={\sigma}_{o_i}^2/\left({\sum}_i{\sigma}_{o_i}^2+{\sigma}_e^2\right) \).
The multi-component OREML model can be applied to partition the trait variance into components associated with multiple omic profiles. For example, if SNP genotype, DNAm, and gene expression data are available for all the individuals in a cohort, a multi-component OREML model can be used to estimate the proportion of trait variance captured by all SNPs (i.e., the SNP-based heritability), the expression levels of all genes, and the DNAm levels at all the CpG sites. The model can be written as \( \mathbf{y}=\mathbf{C}\boldsymbol{\beta}+{\mathbf{W}}_g{\mathbf{u}}_g+{\mathbf{W}}_t{\mathbf{u}}_t+{\mathbf{W}}_m{\mathbf{u}}_m+\mathbf{e} \) with \( \operatorname{var}\left(\mathbf{y}\right)={\mathbf{A}}_g{\sigma}_g^2+{\mathbf{A}}_t{\sigma}_t^2+{\mathbf{A}}_m{\sigma}_m^2+\mathbf{I}{\sigma}_e^2 \)
where Wg, Wt, and Wm are the matrices of standardized SNP genotypes, gene expression measures, and DNAm levels, respectively, with the corresponding effects ug, ut, and um; \( {\mathbf{A}}_g={\mathbf{W}}_g{\mathbf{W}}_g^{\prime }/{m}_g \) is the genomic relationship matrix (GRM) with mg being the number of SNPs, \( {\mathbf{A}}_t={\mathbf{W}}_t{\mathbf{W}}_t^{\prime }/{m}_t \) is the transcriptomic relationship matrix (TRM) with mt being the number of transcripts, and \( {\mathbf{A}}_m={\mathbf{W}}_m{\mathbf{W}}_m^{\prime }/{m}_m \) is the methylomic relationship matrix (MRM) with mm being the number of DNAm probes. Note that the model can be reduced by dropping any of the variance components or expanded by including other types of omic profiles (e.g., protein abundance).
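A schematic numpy sketch of how the three relationship matrices and the implied phenotypic covariance are assembled (illustrative only; the toy data, variable names and variance-component values are assumptions, and in practice the components are estimated by REML rather than fixed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                                  # individuals (toy sizes)
geno = rng.binomial(2, 0.3, (n, 500)).astype(float)      # SNP genotypes (0/1/2)
expr = rng.normal(size=(n, 300))                         # gene expression levels
meth = rng.uniform(size=(n, 400))                        # DNAm beta values

def relationship_matrix(Z):
    """A = W W' / m with column-wise standardized measures W (n x m)."""
    W = (Z - Z.mean(axis=0)) / Z.std(axis=0)
    return W @ W.T / Z.shape[1]

A_g, A_t, A_m = (relationship_matrix(Z) for Z in (geno, expr, meth))  # GRM, TRM, MRM

# Phenotypic covariance implied by the three-component model for given variance components
s2_g, s2_t, s2_m, s2_e = 0.3, 0.1, 0.1, 0.5
V = s2_g * A_g + s2_t * A_t + s2_m * A_m + s2_e * np.eye(n)
```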
The LBC cohorts [59, 60] consisted of individuals born in 1921 (LBC1921) and 1936 (LBC1936), mostly living in Edinburgh city and the surrounding Lothian region of Scotland. Blood samples were collected with informed consent. The LBC individuals underwent several waves of SNP genotyping and DNAm measures. DNAm levels at 485,512 CpG sites across the genome were measured on 3191 whole blood samples from 3 waves using the Illumina HumanMethylation450 BeadChip. Duplicates or samples with an excessive proportion of low confidence calls across all probes (> 5%) were removed. Probes with an excessive proportion of low confidence calls across all individuals (> 5%) or probes located in sex chromosomes were excluded. In addition, probes encompassing SNPs annotated in dbSNP131 using hg19 coordinates or identified as potentially cross-hybridized methylation probes by a previous study [61] were also excluded. After these QC steps, 3018 samples and 307,360 probes remained (Additional file 1: Note S3). We included in the analysis only the first wave (wave1) of the LBC data consisting of 436 individuals from LBC1921 (average age of 79 years) and 906 individuals from LBC1936 (average age of 70 years) (Additional file 1: Table S9). We removed probes with almost invariable beta values across individuals (standard deviation < 0.02) and retained 1342 individuals and 228,694 probes for analysis.
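A schematic illustration of the sample- and probe-level filters described above (a numpy sketch with toy data; low-confidence calls are represented as missing values here, and the additional exclusions of sex-chromosome, SNP-overlapping and cross-hybridizing probes are not shown):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = rng.uniform(size=(1000, 5000))              # toy beta values: individuals x probes
beta[rng.random(beta.shape) < 0.01] = np.nan       # pretend ~1% low-confidence calls

keep_ind = np.isnan(beta).mean(axis=1) <= 0.05     # drop samples with >5% low-confidence calls
beta = beta[keep_ind]
keep_probe = (np.isnan(beta).mean(axis=0) <= 0.05) & (np.nanstd(beta, axis=0) >= 0.02)
beta = beta[:, keep_probe]                         # drop high-missingness and near-invariant probes
```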
There were a number of covariates available in the LBC data including age, sex, batches of the experiment (i.e., plate and position of the sample on a chip), and CTCs. The blood cell counts for different cell types, including basophils, eosinophils, monocytes, lymphocytes, and neutrophils, were quantified using an LH50 Beckman Coulter instrument on the same day of blood collection. In addition to the covariates, there are a number of traits measured on the LBC individuals including height (measured without shoes), body mass index (BMI), lung function (measured in the highest score of forced expiratory volume in 1 s), walking speed (measured in the time taken to walk 6 m), and smoking status (never smoked, ex-smoker, or current smoker) [62, 63]. The numbers of missing measurements are noted in Additional file 1: Table S10. For each trait, we adjusted the phenotype for age in each gender group of each cohort (LBC1921 or LBC1936) and standardized the residuals by rank-based inverse normal transformation, which removed the age effect and potential difference in mean and variance between two gender groups or cohorts.
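The age adjustment and rank-based inverse normal transformation can be sketched as follows (illustrative Python only, for a single gender-by-cohort group; the offset used in the rank transformation and the tie handling are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.stats import norm

def rint(x):
    """Rank-based inverse normal transformation (Blom offset; ties broken arbitrarily)."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return norm.ppf((ranks - 0.375) / (len(x) + 0.25))

rng = np.random.default_rng(2)
age = rng.uniform(69, 71, 300)                             # one group, e.g. one cohort/sex stratum
pheno = 1.70 - 0.002 * age + rng.normal(0, 0.1, 300)       # toy trait with a small age effect
resid = pheno - np.poly1d(np.polyfit(age, pheno, 1))(age)  # residuals after linear age adjustment
z = rint(resid)                                            # standardized phenotype used in the analyses
```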
The LBC wave1 individuals were also genotyped by Illumina 610-Quadv1 BeadChip. The QC process of the SNP genotype data has been detailed elsewhere [14]. After excluding SNPs from sex chromosomes and SNPs with low allelic frequency (MAF < 0.01), we retained 523,062 genotyped SNPs for analysis.
We also used a set of gene expression data available at EMBL-EBI (URLs) from the San Antonio Family Heart Study (SAFHS). Sample recruitment, data generation, and quality controls of the SAFHS data have been detailed elsewhere [46,47,48]. We used the processed and standardized gene expression data of 19,648 autosomal probes on 1240 non-diseased Mexican American participants. Age, sex, and smoking status were available in the data. We removed 21 samples with unknown smoking status and retained 1219 individuals for analysis.
OSCA, http://cnsgenomics.com/software/osca
ReFACTor, https://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html
EWASher, https://www.microsoft.com/en-us/research/project/fast-lmm-software-papers/
SVA, https://bioconductor.org/packages/release/bioc/html/sva.html
LFMM2, https://bcm-uga.github.io/lfmm/
The LBC data: https://www.ebi.ac.uk/ega/studies/EGAS00001000910
The SAFHS data: https://www.ebi.ac.uk/arrayexpress/experiments/E-TABM-305/
This study makes use of DNA methylation data from the LBC available at EGA (accession: EGAS00001000910) [64] and gene expression data from the SADHS available at EMBL-EPI (accession: E-TABM-305, [65]). The source code of OSCA is available at a DOI-assigning repository Zenodo (https://doi.org/10.5281/zenodo.2658802) [66] and at GitHub (https://github.com/jianyangqt/osca) under the GNU General Public License v3.0.
Ritchie MD, Holzinger ER, Li R, Pendergrass SA, Kim D. Methods of integrating data to uncover genotype-phenotype interactions. Nat Rev Genet. 2015;16:85–97.
Hasin Y, Seldin M, Lusis A. Multi-omics approaches to disease. Genome Biol. 2017;18:83.
Wu Y, Zeng J, Zhang F, Zhu Z, Qi T, Zheng Z, Lloyd-Jones LR, Marioni RE, Martin NG, Montgomery GW, et al. Integrative analysis of omics summary data reveals putative mechanisms underlying complex traits. Nat Commun. 2018;9:918.
Consortium GT, Laboratory DA, Coordinating Center -Analysis Working G, Statistical Methods groups-Analysis Working G, Enhancing Gg, Fund NIHC, Nih/Nci, Nih/Nhgri, Nih/Nimh, Nih/Nida, et al. Genetic effects on gene expression across human tissues. Nature. 2017;550:204–13.
Lloyd-Jones LR, Holloway A, McRae A, Yang J, Small K, Zhao J, Zeng B, Bakshi A, Metspalu A, Dermitzakis M, et al. The genetic architecture of gene expression in peripheral blood. Am J Hum Genet. 2017;100:371.
Hannon E, Spiers H, Viana J, Pidsley R, Burrage J, Murphy TM, Troakes C, Turecki G, O'Donovan MC, Schalkwyk LC, et al. Methylation QTLs in the developing brain and their enrichment in schizophrenia risk loci. Nat Neurosci. 2016;19:48–54.
Jaffe AE, Gao Y, Deep-Soboslay A, Tao R, Hyde TM, Weinberger DR, Kleinman JE. Mapping DNA methylation across development, genotype and schizophrenia in the human frontal cortex. Nat Neurosci. 2016;19:40–7.
Grubert F, Zaugg JB, Kasowski M, Ursu O, Spacek DV, Martin AR, Greenside P, Srivas R, Phanstiel DH, Pekowska A, et al. Genetic control of chromatin states in humans involves local and distal chromosomal interactions. Cell. 2015;162:1051–65.
Chen L, Ge B, Casale FP, Vasquez L, Kwan T, Garrido-Martin D, Watt S, Yan Y, Kundu K, Ecker S, et al. Genetic drivers of epigenetic and transcriptional variation in human immune cells. Cell. 2016;167:1398–1414 e1324.
Battle A, Khan Z, Wang SH, Mitrano A, Ford MJ, Pritchard JK, Gilad Y. Genomic variation. Impact of regulatory variation from RNA to protein. Science. 2015;347:664–7.
Folkersen L, Fauman E, Sabater-Lleal M, Strawbridge RJ, Franberg M, Sennblad B, Baldassarre D, Veglia F, Humphries SE, Rauramaa R, et al. Mapping of 79 loci for 83 plasma protein biomarkers in cardiovascular disease. PLoS Genet. 2017;13:e1006706.
Wahl S, Drong A, Lehne B, Loh M, Scott WR, Kunze S, Tsai PC, Ried JS, Zhang W, Yang Y, et al. Epigenome-wide association study of body mass index, and the adverse outcomes of adiposity. Nature. 2017;541:81–6.
Gusev A, Mancuso N, Won H, Kousi M, Finucane HK, Reshef Y, Song L, Safi A, Schizophrenia Working Group of the Psychiatric Genomics C, McCarroll S, et al. Transcriptome-wide association study of schizophrenia and chromatin activity yields mechanistic disease insights. Nat Genet. 2018;50:538–48.
Shah S, Bonder MJ, Marioni RE, Zhu Z, McRae AF, Zhernakova A, Harris SE, Liewald D, Henders AK, Mendelson MM, et al. Improving phenotypic prediction by combining genetic and epigenetic associations. Am J Hum Genet. 2015;97:75–85.
van Kessel KEM, van der Keur KA, Dyrskjot L, Algaba F, Welvaart NYC, Beukers W, Segersten U, Keck B, Maurer T, Simic T, et al. Molecular markers increase precision of the European Association of Urology non-muscle-invasive bladder cancer progression risk groups. Clin Cancer Res. 2018;24:1586–93.
Gamazon ER, Wheeler HE, Shah KP, Mozaffari SV, Aquino-Michaels K, Carroll RJ, Eyler AE, Denny JC, Consortium GT, Nicolae DL, et al. A gene-based association method for mapping traits using reference transcriptome data. Nat Genet. 2015;47:1091–8.
Gusev A, Ko A, Shi H, Bhatia G, Chung W, Penninx BW, Jansen R, de Geus EJ, Boomsma DI, Wright FA, et al. Integrative approaches for large-scale transcriptome-wide association studies. Nat Genet. 2016;48:245–52.
Zhu Z, Zhang F, Hu H, Bakshi A, Robinson MR, Powell JE, Montgomery GW, Goddard ME, Wray NR, Visscher PM, Yang J. Integration of summary data from GWAS and eQTL studies predicts complex trait gene targets. Nat Genet. 2016;48:481–7.
Liu Y, Aryee MJ, Padyukov L, Fallin MD, Hesselberg E, Runarsson A, Reinius L, Acevedo N, Taub M, Ronninger M, et al. Epigenome-wide association data implicate DNA methylation as an intermediary of genetic risk in rheumatoid arthritis. Nat Biotechnol. 2013;31:142–7.
Michels KB, Binder AM, Dedeurwaerder S, Epstein CB, Greally JM, Gut I, Houseman EA, Izzi B, Kelsey KT, Meissner A, et al. Recommendations for the design and analysis of epigenome-wide association studies. Nat Methods. 2013;10:949–55.
Jaffe AE, Irizarry RA. Accounting for cellular heterogeneity is critical in epigenome-wide association studies. Genome Biol. 2014;15(2):R31.
Leek JT, Storey JD. Capturing heterogeneity in gene expression studies by surrogate variable analysis. PLoS Genet. 2007;3:1724–35.
Teschendorff AE, Zheng SC. Cell-type deconvolution in epigenome-wide association studies: a review and recommendations. Epigenomics. 2017;9:757–68.
Teschendorff AE, Relton CL. Statistical and integrative system-level analysis of DNA methylation data. Nat Rev Genet. 2018;19:129–47.
Rakyan VK, Beyan H, Down TA, Hawa MI, Maslau S, Aden D, Daunay A, Busato F, Mein CA, Manfras B, et al. Identification of type 1 diabetes-associated DNA methylation variable positions that precede disease diagnosis. PLoS Genet. 2011;7:e1002300.
Teschendorff AE, Menon U, Gentry-Maharaj A, Ramus SJ, Gayther SA, Apostolidou S, Jones A, Lechner M, Beck S, Jacobs IJ, Widschwendter M. An epigenetic signature in peripheral blood predicts active ovarian cancer. PLoS One. 2009;4:e8274.
Guintivano J, Aryee MJ, Kaminsky ZA. A cell epigenotype specific model for the correction of brain cellular heterogeneity bias and its application to age, brain region and major depression. Epigenetics. 2013;8:290–302.
Houseman EA, Accomando WP, Koestler DC, Christensen BC, Marsit CJ, Nelson HH, Wiencke JK, Kelsey KT. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinformatics. 2012;13:86.
Gagnon-Bartsch JA, Speed TP. Using control genes to correct for unwanted variation in microarray data. Biostatistics. 2012;13:539–52.
Zou J, Lippert C, Heckerman D, Aryee M, Listgarten J. Epigenome-wide association studies without the need for cell-type composition. Nat Methods. 2014;11:309–11.
Rahmani E, Zaitlen N, Baran Y, Eng C, Hu D, Galanter J, Oh S, Burchard EG, Eskin E, Zou J, Halperin E. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies. Nat Methods. 2016;13:443–5.
Caye K, Jumentier B, Lepeule J, Francois O. LFMM 2: Fast and Accurate Inference of Gene-Environment Associations in Genome-Wide Studies. Mol Biol Evol. 2019;36:852–60.
Patterson HD, Thompson R. Recovery of inter-block information when block sizes are unequal. Biometrika. 1971;58:545.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, Madden PA, Heath AC, Martin NG, Montgomery GW, et al. Common SNPs explain a large proportion of the heritability for human height. Nat Genet. 2010;42:565–9.
Yang J, Lee SH, Goddard ME, Visscher PM. GCTA: a tool for genome-wide complex trait analysis. Am J Hum Genet. 2011;88:76–82.
Henderson CR. Best linear unbiased estimation and prediction under a selection model. Biometrics. 1975;31:423–47.
Zhou X, Stephens M. Genome-wide efficient mixed-model analysis for association studies. Nat Genet. 2012;44:821–4.
Kang HM, Sul JH, Service SK, Zaitlen NA, Kong SY, Freimer NB, Sabatti C, Eskin E. Variance component model to account for sample structure in genome-wide association studies. Nat Genet. 2010;42:348–54.
Yang J, Zaitlen NA, Goddard ME, Visscher PM, Price AL. Advantages and pitfalls in the application of mixed-model association methods. Nat Genet. 2014;46:100–6.
Lippert C, Listgarten J, Liu Y, Kadie CM, Davidson RI, Heckerman D. FaST linear mixed models for genome-wide association studies. Nat Methods. 2011;8:833–5.
Meissner A, Mikkelsen TS, Gu H, Wernig M, Hanna J, Sivachenko A, Zhang X, Bernstein BE, Nusbaum C, Jaffe DB, et al. Genome-scale DNA methylation maps of pluripotent and differentiated cells. Nature. 2008;454:766–70.
Laurent L, Wong E, Li G, Huynh T, Tsirigos A, Ong CT, Low HM, Kin Sung KW, Rigoutsos I, Loring J, Wei CL. Dynamic changes in the human methylome during differentiation. Genome Res. 2010;20:320–31.
McGregor K, Bernatsky S, Colmegna I, Hudson M, Pastinen T, Labbe A, Greenwood CM. An evaluation of methods correcting for cell-type heterogeneity in DNA methylation studies. Genome Biol. 2016;17:84.
Starr JM, Deary IJ. Sex differences in blood cell counts in the Lothian Birth Cohort 1921 between 79 and 87 years. Maturitas. 2011;69:373–6.
Devlin B, Roeder K. Genomic control for association studies. Biometrics. 1999;55:997–1004.
Goring HH, Curran JE, Johnson MP, Dyer TD, Charlesworth J, Cole SA, Jowett JB, Abraham LJ, Rainwater DL, Comuzzie AG, et al. Discovery of expression QTLs using large-scale transcriptional profiling in human lymphocytes. Nat Genet. 2007;39:1208–16.
Charlesworth JC, Curran JE, Johnson MP, Goring HH, Dyer TD, Diego VP, Kent JW Jr, Mahaney MC, Almasy L, MacCluer JW, et al. Transcriptomic epidemiology of smoking: the effect of smoking on gene expression in lymphocytes. BMC Med Genet. 2010;3:29.
Kent JW Jr, Goring HH, Charlesworth JC, Drigalenko E, Diego VP, Curran JE, Johnson MP, Dyer TD, Cole SA, Jowett JB, et al. Genotype × age interaction in human transcriptional ageing. Mech Ageing Dev. 2012;133:581–90.
Mendelson MM, Marioni RE, Joehanes R, Liu C, Hedman AK, Aslibekyan S, Demerath EW, Guan W, Zhi D, Yao C, et al. Association of body mass index with DNA methylation and gene expression in blood cells and relations to cardiometabolic disease: a Mendelian randomization approach. PLoS Med. 2017;14:e1002215.
Gao X, Jia M, Zhang Y, Breitling LP, Brenner H. DNA methylation changes of whole blood cells in response to active smoking exposure in adults: a systematic review of DNA methylation studies. Clin Epigenetics. 2015;7:113.
Shenker NS, Polidoro S, van Veldhoven K, Sacerdote C, Ricceri F, Birrell MA, Belvisi MG, Brown R, Vineis P, Flanagan JM. Epigenome-wide association study in the European Prospective Investigation into Cancer and Nutrition (EPIC-Turin) identifies novel genetic loci associated with smoking. Hum Mol Genet. 2013;22:843–51.
Listgarten J, Lippert C, Kadie CM, Davidson RI, Eskin E, Heckerman D. Improved linear mixed models for genome-wide association studies. Nat Methods. 2012;9:525–6.
Peters MJ, Joehanes R, Pilling LC, Schurmann C, Conneely KN, Powell J, Reinmaa E, Sutphin GL, Zhernakova A, Schramm K, et al. The transcriptional landscape of age in human peripheral blood. Nat Commun. 2015;6:8570.
Zhang Q, Vallerga C, Walker R, Lin T, Henders A, Montgomery G, He J, Fan D, Fowdar J, Kennedy M, et al. Improved prediction of chronological age from DNA methylation limits it as a biomarker of ageing. bioRxiv. 2018; https://doi.org/10.1101/327890.
Hayeck TJ, Zaitlen NA, Loh PR, Vilhjalmsson B, Pollack S, Gusev A, Yang J, Chen GB, Goddard ME, Visscher PM, et al. Mixed model with correction for case-control ascertainment increases association power. Am J Hum Genet. 2015;96:720–30.
Weissbrod O, Lippert C, Geiger D, Heckerman D. Accurate liability estimation improves power in ascertained case-control studies. Nat Methods. 2015;12:332–4.
Chen H, Wang C, Conomos MP, Stilp AM, Li Z, Sofer T, Szpiro AA, Chen W, Brehm JM, Celedon JC, et al. Control for population structure and relatedness for binary traits in genetic association studies via logistic mixed models. Am J Hum Genet. 2016;98:653–66.
Zhou W, Nielsen JB, Fritsche LG, Dey R, Gabrielsen ME, Wolford BN, LeFaive J, VandeHaar P, Gagliano SA, Gifford A, et al. Efficiently controlling for case-control imbalance and sample relatedness in large-scale genetic association studies. Nat Genet. 2018;50:1335–41.
Deary IJ, Gow AJ, Pattie A, Starr JM. Cohort profile: the Lothian Birth Cohorts of 1921 and 1936. Int J Epidemiol. 2012;41:1576–84.
Taylor AM, Pattie A, Deary IJ. Cohort profile update: the Lothian Birth Cohorts of 1921 and 1936. Int J Epidemiol. 2018;47:1042–1042r.
Price ME, Cotton AM, Lam LL, Farre P, Emberly E, Brown CJ, Robinson WP, Kobor MS. Additional annotation enhances potential for biologically-relevant analysis of the Illumina Infinium HumanMethylation450 BeadChip array. Epigenetics Chromatin. 2013;6:4.
Deary IJ, Whiteman MC, Starr JM, Whalley LJ, Fox HC. The impact of childhood intelligence on later life: following up the Scottish mental surveys of 1932 and 1947. J Pers Soc Psychol. 2004;86:130–47.
Deary IJ, Gow AJ, Taylor MD, Corley J, Brett C, Wilson V, Campbell H, Whalley LJ, Visscher PM, Porteous DJ, Starr JM. The Lothian Birth Cohort 1936: a study to examine influences on cognitive ageing from age 11 to age 70 and beyond. BMC Geriatr. 2007;7:28.
Marioni RE, Shah S, McRae AF, Chen BH, Colicino E, Harris SE, Gibson J, Henders AK, Redmond P, Cox SR, et al: DNA methylation age of blood predicts all-cause mortality in later life. EMBL-EBI 2015, https://www.ebi.ac.uk/ega/studies/EGAS00001000910. [cited 25 May 2019]
Goring HH, Curran JE, Johnson MP, Dyer TD, Charlesworth J, Cole SA, Jowett JB, Abraham LJ, Rainwater DL, Comuzzie AG, et al. Discovery of expression QTLs using large-scale transcriptional profiling in human lymphocytes. EMBL-EBI. 2008; https://www.ebi.ac.uk/arrayexpress/experiments/E-TABM-305/. [cited 25 May 2019]
Zhang F, Chen W, Zhu Z, Zhang Q, Nabais MF, Qi T, Deary IJ, Wray NR, Visscher PM, McRae AF, Yang J. OSCA: a tool for omic-data-based complex trait analysis. Source Code Zenodo Repository. 2019. https://doi.org/10.5281/zenodo.2658802. [cited 25 May 2019]
The review history is available as Additional file 2.
This research was supported by the Australian Research Council (FT180100186), the Australian National Health and Medical Research Council (grants 1107258, 1113400, 1083656, 1078037, and 1078901), and the Sylvia & Charles Viertel Charitable Foundation. The Lothian Birth Cohorts (LBC) are supported by Age UK (Disconnected Mind program). Methylation typing was supported by Centre for Cognitive Ageing and Cognitive Epidemiology (Pilot Fund award), Age UK, The Wellcome Trust Institutional Strategic Support Fund, The University of Edinburgh, and The University of Queensland. The LBC resource is prepared in the Centre for Cognitive Ageing and Cognitive Epidemiology, which is supported by the Medical Research Council and Biotechnology and Biological Sciences Research Council (MR/K026992/1), and which supports I.J.D..
Institute for Molecular Bioscience, The University of Queensland, Brisbane, Queensland, 4072, Australia
Futao Zhang, Wenhan Chen, Zhihong Zhu, Qian Zhang, Marta F. Nabais, Ting Qi, Naomi R. Wray, Peter M. Visscher, Allan F. McRae & Jian Yang
University of Exeter Medical School, Devon, EX2 5DW, UK
Marta F. Nabais
Centre for Cognitive Ageing and Cognitive Epidemiology, Department of Psychology, University of Edinburgh, 7 George Square, Edinburgh, EH8 9JZ, UK
Ian J. Deary
Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, 4072, Australia
Naomi R. Wray, Peter M. Visscher & Jian Yang
Institute for Advanced Research, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
Jian Yang
JY conceived the study. JY and FZ designed the experiment. FZ developed the software tool and performed all the analyses under the guidance and/or assistance from JY, ZZ, WC, QZ, MFN, and TQ. JY, AFM, PMV, and NRW contributed funding and resources. IJD, PMV, and NRW contributed the DNA methylation data. FZ, JY, and WC wrote the manuscript with the participation of all authors. All authors read and approved the final manuscript.
Correspondence to Jian Yang.
The use of human data in this study was approved by The University of Queensland Human Research Ethics Committee B (approval number: 2011001173).
Figures S1–S38, Tables S1–S12, and Notes S1-S4. (PDF 17946 kb)
Review history. (DOCX 21 kb)
Zhang, F., Chen, W., Zhu, Z. et al. OSCA: a tool for omic-data-based complex trait analysis. Genome Biol 20, 107 (2019). https://doi.org/10.1186/s13059-019-1718-z | CommonCrawl |
Definition of family of a distribution?
Does a family of a distribution have a different definition for statistics than in other disciplines?
In general, a family of curves is a set of curves, each of which is given by a function or parametrization in which one or more of the parameters is varied. Such families are used, for example, to characterize electronic components.
For statistics, a family, according to one source, is the result of varying the shape parameter. How then can we understand that the gamma distribution has a shape and scale parameter, and only the generalized gamma distribution has, in addition, a location parameter? Does that make the family the result of varying the location parameter? According to @whuber, the meaning of a family is given implicitly: a "parameterization" of a family is a continuous map from a subset of $\mathbb{R}^n$, with its usual topology, into the space of distributions, whose image is that family.
What, in simple language, is a family for statistical distributions?
A question about relations among the statistical properties of distributions from the same family has already generated considerable controversy for a different question, so it seems worthwhile to explore the meaning.
That this is not necessarily a simple question is borne out by its use in the phrase exponential family, which has nothing to do with a family of curves, but is related to changing the form of the PDF of a distribution by reparameterization, not only of parameters, but also by substitution of functions of independent random variables.
distributions terminology parametric exponential-family
kjetil b halvorsen
$\begingroup$ By the phrasing "family of a distribution", do you mean something else "a family of distributions"? An exponential family is a family of distributions (with certain properties), and interpreting the pdf of each distribution as a curve,it even corresponds to a family of curves, so the last paragraphs seems confused. $\endgroup$ – Juho Kokkala Dec 29 '17 at 19:16
$\begingroup$ @JuhoKokkala It seems confusing because the meaning of "family" is context dependent. For example, a normal distribution of unknown mean and known variance is in the exponential family. A normal distribution has infinite support, $(-\infty,+\infty)$, and an exponential distribution has semi-infinite support, $[0,+\infty)$, so there is no family of curves for an exponential distribution that covers the range of a normal distribution, they never have the same shape... $\endgroup$ – Carl Dec 29 '17 at 19:36
$\begingroup$ @JuhoKokkala ...and an exponential PDF does not even have a location parameter, whereas a normal distribution cannot do without one. See the link above for the substitutions needed, and the context in which a normal pdf is in the exponential family. $\endgroup$ – Carl Dec 29 '17 at 19:40
$\begingroup$ stats.stackexchange.com/questions/129990/… may be relevant. "normal distribution of unknown mean and known variance is in the exponential family" is, to my knowledge, abuse of terminology (although somewhat common). To be exact, an exponential family is a family of distributions with certain properties. The family of normal distributions with unknown mean and known variance is an exponential family; the family of exponential distributions is another exponential family, etc. $\endgroup$ – Juho Kokkala Dec 30 '17 at 20:40
$\begingroup$ @JuhoKokkala: That "family" is so commonly (ab)used, in a special case, to mean "set of families" is perhaps worth pulling out into another answer. (I can't think of other cases - for some reason it seems no-one's prone to talking of "the location-scale family".) $\endgroup$ – Scortchi - Reinstate Monica♦ Feb 26 '18 at 11:07
The statistical and mathematical concepts are exactly the same, understanding that "family" is a generic mathematical term with technical variations adapted to different circumstances:
A parametric family is a curve (or surface or other finite-dimensional generalization thereof) in the space of all distributions.
The rest of this post explains what that means. As an aside, I don't think any of this is controversial, either mathematically or statistically (apart from one minor issue which is noted below). In support of this opinion I have supplied many references (mostly to Wikipedia articles).
This terminology of "families" tends to be used when studying classes $\mathcal C_Y$ of functions into a set $Y$ or "maps." Given a domain $X$, a family $\mathcal F$ of maps on $X$ parameterized by some set $\Theta$ (the "parameters") is a function
$$\mathcal F : X\times \Theta\to Y$$
for which (1) for each $\theta\in\Theta$, the function $\mathcal{F}_\theta:X\to Y$ given by $\mathcal{F}_\theta(x)=\mathcal{F}(x,\theta)$ is in $\mathcal{C}_Y$ and (2) $\mathcal F$ itself has certain "nice" properties.
The idea is that we want to vary functions from $X$ to $Y$ in a "smooth" or controlled manner. Property (1) means that each $\theta$ designates such a function, while the details of property (2) will capture the sense in which a "small" change in $\theta$ induces a sufficiently "small" change in $\mathcal{F}_\theta$.
A standard mathematical example, close to the one mentioned in the question, is a homotopy. In this case $\mathcal{C}_Y$ is the category of continuous maps from topological spaces $X$ into the topological space $Y$; $\Theta=[0,1]\subset\mathbb{R}$ is the unit interval with its usual topology, and we require that $\mathcal{F}$ be a continuous map from the topological product $X \times \Theta$ into $Y$. It can be thought of as a "continuous deformation of the map $\mathcal{F}_0$ to $\mathcal{F}_1$." When $X=[0,1]$ is itself an interval, such maps are curves in $Y$ and the homotopy is a smooth deformation from one curve to another.
For statistical applications, $\mathcal{C}_Y$ is the set of all distributions on $\mathbb{R}$ (or, in practice, on $\mathbb{R}^n$ for some $n$, but to keep the exposition simple I will focus on $n=1$). We may identify it with the set of all non-decreasing càdlàg functions $\mathbb{R}\to [0,1]$ where the closure of their range includes both $0$ and $1$: these are the cumulative distribution functions, or simply distribution functions. Thus, $X=\mathbb R$ and $Y=[0,1]$.
A family of distributions is any subset of $\mathcal{C}_Y$. Another name for a family is statistical model. It consists of all distributions that we suppose govern our observations, but we do not otherwise know which distribution is the actual one.
A family can be empty.
$\mathcal{C}_Y$ itself is a family.
A family may consist of a single distribution or just a finite number of them.
These abstract set-theoretic characteristics are of relatively little interest or utility. It is only when we consider additional (relevant) mathematical structure on $\mathcal{C}_Y$ that this concept becomes useful. But what properties of $\mathcal{C}_Y$ are of statistical interest? Some that show up frequently are:
$\mathcal{C}_Y$ is a convex set: given any two distributions ${F}, {G}\in \mathcal{C}_Y$, we may form the mixture distribution $(1-t){F}+t{G}\in Y$ for all $t\in[0,1]$. This is a kind of "homotopy" from $F$ to $G$.
Large parts of $\mathcal{C}_Y$ support various pseudo metrics, such as the Kullback-Leibler divergence or the closely related Fisher Information metric.
$\mathcal{C}_Y$ has an additive structure: corresponding to any two distributions $F$ and $G$ is their sum, ${F}\star {G}$.
$\mathcal{C}_Y$ supports many useful, natural functions, often termed "properties." These include any fixed quantile (such as the median) as well as the cumulants.
$\mathcal{C}_Y$ is a subset of a function space. As such, it inherits many useful metrics, such as the sup norm ($L^\infty$ norm) given by $$||F-G||_\infty = \sup_{x\in\mathbb{R}}|F(x)-G(x)|.$$
Natural group actions on $\mathbb R$ induce actions on $\mathcal{C}_Y$. The commonest actions are translations $T_\mu:x \to x+\mu$ and scalings $S_\sigma:x\to x\sigma$ for $\sigma\gt 0$. The effect these have on a distribution is to send $F$ to the distribution given by $F^{\mu,\sigma}(x) = F((x-\mu)/\sigma)$. These lead to the concepts of location-scale families and their generalizations. (I don't supply a reference, because extensive Web searches turn up a variety of different definitions: here, at least, may be a tiny bit of controversy.)
The properties that matter depend on the statistical problem and on how you intend to analyze the data. Addressing all the variations suggested by the preceding characteristics would take too much space for this medium. Let's focus on one common important application.
Take, for instance, Maximum Likelihood. In most applications you will want to be able to use Calculus to obtain an estimate. For this to work, you must be able to "take derivatives" in the family.
(Technical aside: The usual way in which this is accomplished is to select a domain $\Theta\subset \mathbb{R}^d$ for $d\ge 0$ and specify a continuous, locally invertible function $p$ from $\Theta$ into $\mathcal{C}_Y$. (This means that for every $\theta\in\Theta$ there exists a ball $B(\theta, \epsilon)$, with $\epsilon\gt 0$ for which $p\mid_{B(\theta,\epsilon)}: B(\theta,\epsilon)\cap \Theta \to \mathcal{C}_Y$ is one-to-one. In other words, if we alter $\theta$ by a sufficiently small amount we will always get a different distribution.))
Consequently, in most ML applications we require that $p$ be continuous (and hopefully, almost everywhere differentiable) in the $\Theta$ component. (Without continuity, maximizing the likelihood generally becomes an intractable problem.) This leads to the following likelihood-oriented definition of a parametric family:
A parametric family of (univariate) distributions is a locally invertible map $$\mathcal{F}:\mathbb{R}\times\Theta \to [0,1],$$ with $\Theta\subset \mathbb{R}^n$, for which (a) each $\mathcal{F}_\theta$ is a distribution function and (b) for each $x\in\mathbb R$, the function $\mathcal{L}_x: \theta\to [0,1]$ given by $\mathcal{L}_x(\theta) = \mathcal{F}(x,\theta)$ is continuous and almost everywhere differentiable.
Note that a parametric family $\mathcal F$ is more than just the collection of $\mathcal{F}_\theta$: it also includes the specific way in which parameter values $\theta$ correspond to distributions.
Let's end up with some illustrative examples.
Let $\mathcal{C}_Y$ be the set of all Normal distributions. As given, this is not a parametric family: it's just a family. To be parametric, we have to choose a parameterization. One way is to choose $\Theta = \{(\mu,\sigma)\in\mathbb{R}^2\mid \sigma \gt 0\}$ and to map $(\mu,\sigma)$ to the Normal distribution with mean $\mu$ and variance $\sigma^2$.
The set of Poisson$(\lambda)$ distributions is a parametric family with $\lambda\in\Theta=(0,\infty)\subset\mathbb{R}^1$.
The set of Uniform$(\theta, \theta+1)$ distributions (which features prominently in many textbook exercises) is a parametric family with $\theta\in\mathbb{R}^1$. In this case, $F_\theta(x) = \max(0, \min(1, x-\theta))$ is differentiable in $\theta$ except for $\theta\in\{x, x-1\}$.
Let $F$ and $G$ be any two distributions. Then $\mathcal{F}(x,\theta)=(1-\theta)F(x)+\theta G(x)$ is a parametric family for $\theta\in[0,1]$. (Proof: the image of $\mathcal F$ is a set of distributions and its partial derivative in $\theta$ equals $-F(x)+G(x)$ which is defined everywhere.)
The Pearson family is a four-dimensional family, $\Theta\subset\mathbb{R}^4$, which includes (among others) the Normal distributions, Beta distributions, and Inverse Gamma distributions. This illustrates the fact that any one given distribution may belong to many different distribution families. This is perfectly analogous to observing that any point in a (sufficiently large) space may belong to many paths that intersect there. This, together with the previous construction, shows us that no distribution uniquely determines a family to which it belongs.
The family $\mathcal{C}_Y$ of all finite-variance absolutely continuous distributions is not parametric. The proof requires a deep theorem of topology: if we endow $\mathcal{C}_Y$ with any topology (whether statistically useful or not) and $p: \Theta\to\mathcal{C}_Y$ is continuous and locally has a continuous inverse, then locally $\mathcal{C}_Y$ must have the same dimension as that of $\Theta$. However, in all statistically meaningful topologies, $\mathcal{C}_Y$ is infinite dimensional.
whuber♦
$\begingroup$ It will take me about a day to digest your answer. I will have to chew slowly. Meanwhile, thank you. $\endgroup$ – Carl Dec 29 '17 at 16:50
$\begingroup$ (+1) OK, I slogged through it. So is $\mathcal{F}:\mathbb{R}\times\Theta \to [0,1]$ a Polish space or not? Can we do a simple answer so people know how to avoid using the word family improperly, please. @JuhoKokkala related, for example, that Wikipedia abused language in their exponential family, that needs clarification. $\endgroup$ – Carl Dec 30 '17 at 23:12
$\begingroup$ Doesn't the second sentence of this answer serve that request for simplicity? $\endgroup$ – whuber♦ Dec 30 '17 at 23:25
$\begingroup$ IMHO, however uninformed, no, it does not due to incompleteness, it doesn't say what a family isn't. The concept "in the space of all distributions" seems to relate to statistics only. $\endgroup$ – Carl Dec 30 '17 at 23:49
$\begingroup$ I have accepted your answer. You have enough information in it that I could apply it to the question in question. $\endgroup$ – Carl Dec 31 '17 at 22:32
To address a specific point brought up in the question: "exponential family" does not denote a set of distributions. (The standard, say, exponential distribution is a member of the family of exponential distributions, an exponential family; of the family of gamma distributions, also an exponential family; of the family of Weibull distributions, not an exponential family; & of any number of other families you might dream up.) Rather, "exponential" here refers to a property possessed by a family of distributions. So we shouldn't talk of "distributions in the exponential family" but of "exponential families of distributions"—the former is an abuse of terminology, as @JuhoKokkala points out. For some reason no-one commits this abuse when talking of location–scale families.
Scortchi - Reinstate Monica♦
Thanks to @whuber there is enough information to summarize in what I hope is a simpler form relating to the question from which this post arose. "Another name for a family [Sic, statistical family] is [a] statistical model."
From that Wikipedia entry: A statistical model consists of all distributions that we suppose govern our observations, but we do not otherwise know which distribution is the actual one. What distinguishes a statistical model from other mathematical models is that a statistical model is non-deterministic. Thus, in a statistical model specified via mathematical equations, some of the variables do not have specific values, but instead have probability distributions; i.e., some of the variables are stochastic. A statistical model is usually thought of as a pair $( S , P )$, where $S$ is the set of possible observations, i.e., the sample space, and $P$ is a set of probability distributions on $S$.
Suppose that we have a statistical model $(S, \mathcal{P})$ with $\mathcal{P}=\{P_{\theta} : \theta \in \Theta\}$. The model is said to be a Parametric model if $\Theta$ has a finite dimension. In notation, we write that $\Theta \subseteq \mathbb{R}^d$ where $d$ is a positive integer ($\mathbb{R}$ denotes the real numbers; other sets can be used, in principle). Here, $d$ is called the dimension of the model.
As an example, if we assume that data arise from a univariate Gaussian distribution, then we are assuming that
$$\mathcal{P}=\left\{P_{\mu,\sigma }(x) \equiv \frac{1}{\sqrt{2 \pi} \sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2}\right) : \mu \in \mathbb{R}, \sigma > 0 \right\}. $$ In this example, the dimension, $d$, equals 2, end quote.
Thus, if we reduce the dimensionality by assigning, for the example above, $\mu=0$, we can show a family of curves by plotting $\sigma=1,2,3,4,5$ or whatever choices for $\sigma$.
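For example, such a family of curves can be drawn with a few lines of Python (the choice of $\sigma$ values is arbitrary):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-12, 12, 400)
for sigma in [1, 2, 3, 4, 5]:
    pdf = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)  # N(0, sigma^2) density
    plt.plot(x, pdf, label=rf"$\sigma = {sigma}$")
plt.legend()
plt.title(r"Family of $N(0, \sigma^2)$ densities with $\mu = 0$ fixed")
plt.show()
```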
Isotoxal figure
In geometry, a polytope (for example, a polygon or a polyhedron) or a tiling is isotoxal (from Greek τόξον 'arc') or edge-transitive if its symmetries act transitively on its edges. Informally, this means that there is only one type of edge to the object: given two edges, there is a translation, rotation, and/or reflection that will move one edge to the other while leaving the region occupied by the object unchanged.
This article is about geometry. For edge transitivity in graph theory, see Edge-transitive graph.
Isotoxal polygons
An isotoxal polygon is an even-sided, equilateral polygon, but not all equilateral polygons are isotoxal. The duals of isotoxal polygons are isogonal polygons. Isotoxal $4n$-gons are centrally symmetric, so are also zonogons.
In general, an isotoxal $2n$-gon has $\mathrm {D} _{n},(^{*}nn)$ dihedral symmetry. For example, a rhombus is an isotoxal "$2$×$2$-gon" (quadrilateral) with $\mathrm {D} _{2},(^{*}22)$ symmetry. All regular polygons (equilateral triangle, square, etc.) are isotoxal, having double the minimum symmetry order: a regular $n$-gon has $\mathrm {D} _{n},(^{*}nn)$ dihedral symmetry.
An isotoxal ${\mathbf {2}}n$-gon with outer internal angle $\alpha $ can be labeled as $\{n_{\alpha }\}.$ The inner internal angle $(\beta )$ may be greater or less than $180$ degrees, making convex or concave polygons.
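A concrete way to produce such a polygon (an illustrative sketch, not taken from the article's references) is to place $2n$ equally spaced vertices alternately on two concentric circles; every edge is then equivalent under the $\mathrm{D}_n$ symmetry, and the radius ratio determines the angles $\alpha$ and $\beta$:

```python
import numpy as np

def isotoxal_2n_gon(n, ratio):
    """Vertices of an isotoxal 2n-gon: 2n equally spaced directions, with the
    radius alternating between 1 and `ratio`.  All edges are congruent and are
    related by the dihedral symmetry D_n, so the polygon is edge-transitive.
    ratio = 1 gives the regular 2n-gon; depending on n and the ratio, the inner
    internal angle beta is below or above 180 degrees (convex or concave)."""
    k = np.arange(2 * n)
    r = np.where(k % 2 == 0, 1.0, ratio)
    t = k * np.pi / n
    return np.column_stack((r * np.cos(t), r * np.sin(t)))

octagon = isotoxal_2n_gon(4, 1.5)   # an isotoxal octagon, type {4_alpha}
```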
Star polygons can also be isotoxal, labeled as $\{(n/q)_{\alpha }\},$ with $q\leq n-1$ and with the greatest common divisor $\gcd(n,q)=1,$ where $q$ is the turning number or density.[1] Concave inner vertices can be defined for $q<n/2.$ If $D=\gcd(n,q)\geq 2,$ then $\{(n/q)_{\alpha }\}=\{(Dm/Dp)_{\alpha }\}$ is "reduced" to a compound $D\{(m/p)_{\alpha }\}$ of $D$ rotated copies of $\{(m/p)_{\alpha }\}.$
Caution: The vertices of $\{(n/q)_{\alpha }\}$ are not always placed like those of $\{n_{\alpha }\},$ whereas the vertices of the regular $\{n/q\}$ are placed like those of the regular $\{n\}.$
A set of "uniform tilings", actually isogonal tilings using isotoxal polygons as less symmetric faces than regular ones can be defined.
Examples of irregular isotoxal polygons and compounds
| Number of sides ($2n$) | 2×2 | 2×3 | 2×4 | 2×5 | 2×6 | 2×7 | 2×8 |
|---|---|---|---|---|---|---|---|
| $\{n_{\alpha }\}$ (convex: $\beta <180^{\circ }$; concave: $\beta >180^{\circ }$) | $\{2_{\alpha }\}$ | $\{3_{\alpha }\}$ | $\{4_{\alpha }\}$ | $\{5_{\alpha }\}$ | $\{6_{\alpha }\}$ | $\{7_{\alpha }\}$ | $\{8_{\alpha }\}$ |
| 2-turn $\{(n/2)_{\alpha }\}$ | -- | $\{(3/2)_{\alpha }\}$ | $2\{2_{\alpha }\}$ | $\{(5/2)_{\alpha }\}$ | $2\{3_{\alpha }\}$ | $\{(7/2)_{\alpha }\}$ | $2\{4_{\alpha }\}$ |
| 3-turn $\{(n/3)_{\alpha }\}$ | -- | -- | $\{(4/3)_{\alpha }\}$ | $\{(5/3)_{\alpha }\}$ | $3\{2_{\alpha }\}$ | $\{(7/3)_{\alpha }\}$ | $\{(8/3)_{\alpha }\}$ |
| 4-turn $\{(n/4)_{\alpha }\}$ | -- | -- | -- | $\{(5/4)_{\alpha }\}$ | $2\{(3/2)_{\alpha }\}$ | $\{(7/4)_{\alpha }\}$ | $4\{2_{\alpha }\}$ |
| 5-turn $\{(n/5)_{\alpha }\}$ | -- | -- | -- | -- | $\{(6/5)_{\alpha }\}$ | $\{(7/5)_{\alpha }\}$ | $\{(8/5)_{\alpha }\}$ |
| 6-turn $\{(n/6)_{\alpha }\}$ | -- | -- | -- | -- | -- | $\{(7/6)_{\alpha }\}$ | $2\{(4/3)_{\alpha }\}$ |
| 7-turn $\{(n/7)_{\alpha }\}$ | -- | -- | -- | -- | -- | -- | $\{(8/7)_{\alpha }\}$ |
Isotoxal polyhedra and tilings
Main article: List of isotoxal polyhedra and tilings
Regular polyhedra are isohedral (face-transitive), isogonal (vertex-transitive), and isotoxal (edge-transitive).
Quasiregular polyhedra, like the cuboctahedron and the icosidodecahedron, are isogonal and isotoxal, but not isohedral. Their duals, including the rhombic dodecahedron and the rhombic triacontahedron, are isohedral and isotoxal, but not isogonal.
Examples
• Quasiregular polyhedron: the cuboctahedron is an isogonal and isotoxal polyhedron.
• Quasiregular dual polyhedron: the rhombic dodecahedron is an isohedral and isotoxal polyhedron.
• Quasiregular star polyhedron: the great icosidodecahedron is an isogonal and isotoxal star polyhedron.
• Quasiregular dual star polyhedron: the great rhombic triacontahedron is an isohedral and isotoxal star polyhedron.
• Quasiregular tiling: the trihexagonal tiling is an isogonal and isotoxal tiling.
• Quasiregular dual tiling: the rhombille tiling is an isohedral and isotoxal tiling with p6m (*632) symmetry.
Not every polyhedron or 2-dimensional tessellation constructed from regular polygons is isotoxal. For instance, the truncated icosahedron (the familiar soccerball) is not isotoxal, as it has two edge types: hexagon-hexagon and hexagon-pentagon, and it is not possible for a symmetry of the solid to move a hexagon-hexagon edge onto a hexagon-pentagon edge.
An isotoxal polyhedron has the same dihedral angle for all edges.
The dual of a convex polyhedron is also a convex polyhedron.[2]
The dual of a non-convex polyhedron is also a non-convex polyhedron.[2] (By contraposition.)
The dual of an isotoxal polyhedron is also an isotoxal polyhedron. (See the Dual polyhedron article.)
There are nine convex isotoxal polyhedra: the five (regular) Platonic solids, the two (quasiregular) common cores of dual Platonic solids, and their two duals.
There are fourteen non-convex isotoxal polyhedra: the four (regular) Kepler–Poinsot polyhedra, the two (quasiregular) common cores of dual Kepler–Poinsot polyhedra, and their two duals, plus the three quasiregular ditrigonal (3 | p q) star polyhedra, and their three duals.
There are at least five isotoxal polyhedral compounds: the five regular polyhedral compounds; their five duals are also the five regular polyhedral compounds (or one chiral twin).
There are at least five isotoxal polygonal tilings of the Euclidean plane, and infinitely many isotoxal polygonal tilings of the hyperbolic plane, including the Wythoff constructions from the regular hyperbolic tilings {p,q}, and non-right (p q r) groups.
See also
• Table of polyhedron dihedral angles
• Vertex-transitive
• Face-transitive
• Cell-transitive
References
1. Tilings and Patterns, Branko Gruenbaum, G.C. Shephard, 1987. 2.5 Tilings using star polygons, pp. 82-85.
2. "duality". maths.ac-noumea.nc. Retrieved 2020-09-30.
• Peter R. Cromwell, Polyhedra, Cambridge University Press 1997, ISBN 0-521-55432-2, p. 371 Transitivity
• Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. New York: W. H. Freeman. ISBN 0-7167-1193-1. (6.4 Isotoxal tilings, 309-321)
• Coxeter, Harold Scott MacDonald; Longuet-Higgins, M. S.; Miller, J. C. P. (1954), "Uniform polyhedra", Philosophical Transactions of the Royal Society of London. Series A. Mathematical and Physical Sciences, 246 (916): 401–450, Bibcode:1954RSPTA.246..401C, doi:10.1098/rsta.1954.0003, ISSN 0080-4614, JSTOR 91532, MR 0062446, S2CID 202575183
| Wikipedia |
Abstract : Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations, and can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to min-max as the number of simulations grows. This approach provides a fine-grained control of the tree growth, at the level of individual simulations, and allows efficient selectivity methods. This algorithm was implemented in a Go-playing program, Crazy Stone, that won the gold medal of the $9 \times 9$ Go tournament at the 11th Computer Olympiad. | CommonCrawl |
\begin{document}
\title{Grassmann angle formulas and identities}
\author{Andr\'e L. G. Mandolesi
\thanks{Instituto de Matemática e Estatística, Universidade Federal da Bahia, Av. Adhemar de Barros s/n, 40170-110, Salvador - BA, Brazil. ORCID 0000-0002-5329-7034. E-mail: \texttt{[email protected]}}}
\date{\today \SELF{v2.1} }
\maketitle
\begin{abstract}
Grassmann angles improve upon similar concepts of angle between subspaces that measure volume contraction in orthogonal projections, working for real or complex subspaces, and being more efficient when dimensions are different. Their relations with contractions, inner and exterior products of multivectors are used to obtain formulas for computing these or similar angles in terms of arbitrary bases, and various identities for the angles with certain families of subspaces. These include generalizations of the Pythagorean trigonometric identity $\cos^2\theta+\sin^2\theta=1$ for high dimensional and complex subspaces, which are connected to generalized Pythagorean theorems for volumes, quantum probabilities and Clifford geometric product.
\noindent
{\bf Keywords:} Grassmann angle, angle between subspaces, Pythagorean identity, Grassmann algebra, exterior algebra.
\noindent
{\bf MSC:} 15A75, 51M05 \end{abstract}
\section{Introduction}
Measuring the separation between subspaces is important in many areas, from geometry and linear algebra to statistics \cite{Hotelling1936}, operator perturbation theory \cite{Kato1995}, data mining \cite{Jiao2018}, etc. In high dimensions, no single number fully describes this separation, and different concepts are used: gap, principal angles, minimal angle, Friedrichs angle, and others. An often used angle concept \cite{Gluck1967,Gunawan2005,Hitzer2010a,Jiang1996} combines principal angles in a way that shows how volumes contract when orthogonally projected between the subspaces. Research on it has been focused on real spaces, but many applications require complex ones.
The relation between angle and volume contraction changes significantly in the complex case. Taking this into account, in \cite{Mandolesi_Grassmann} we defined a Grassmann angle that works well in both cases, with only a few adjustments. It is based on that same angle, but has a subtly important difference, which makes it easier to use with subspaces of distinct dimensions: it is asymmetric, reflecting the dimensional asymmetry between the subspaces in a way that leads to better properties and more general results. This angle is intrinsically connected with Grassmann algebra, and has found interesting applications in the geometry of Grassmannians, Clifford algebra, quantum theory, etc.
In \cite{Mandolesi_Products} we expressed contractions, inner, exterior and Clifford products of blades (simple multivectors) in terms of Grassmann angles. In this article we use the products to get formulas for the angles in terms of arbitrary bases, and identities relating the angles with the subspaces of an orthogonal partition, with coordinate subspaces of orthogonal bases, and others. Some of them generalize the Pythagorean trigonometric identity $\cos^2\theta+\sin^2\theta=1$ for high dimensional and complex subspaces, being related to real and complex Pythagorean theorems for volumes \cite{Mandolesi_Pythagorean} and quantum probabilities \cite{Mandolesi_Born}, and giving a geometric interpretation for an algebraic property of the Clifford product. Some of our results correspond, in the real case, to known ones, but our methods provide simpler proofs, while also extending them to the complex case.
\Cref{sc:preliminaries} presents concepts and results which will be needed. We obtain formulas for computing Grassmann angles in \cref{sc:Formulas}, generalized Pythagorean identities in \cref{sc:Pythagorean trigonometric}, and other useful identities in \cref{sc:other identities}.
\section{Preliminaries}\label{sc:preliminaries}
Here we review results we will use. See \cite{Mandolesi_Grassmann,Mandolesi_Products} for proofs and more details.
In this article, $X$ is a $n$-dimensional vector space over $\mathds{R}$ (real case) or $\mathds{C}$ (complex case), with inner product $\inner{\cdot,\cdot}$ (Hermitian product in the complex case, with conjugate-linearity in the first argument). For subspaces $V,W\subset X$, $\Proj_W:X\rightarrow W$ and $\Proj^V_W:V\rightarrow W$ are orthogonal projections. A \emph{line} is a 1-dimensional subspace.
\subsection{Grassmann algebra and partial orthogonality}
In the Grassmann algebra $\Lambda X$ \cite{Marcus1975,Yokonuma1992}, a \emph{$p$-blade} is a simple multivector $\nu=v_1\wedge\ldots\wedge v_p \in\Lambda^p X$ of \emph{grade} $p$, where $v_1,\ldots,v_p\in X$. If $\nu\neq 0$, it \emph{represents} the $p$-dimensional subspace $V=\Span(v_1,\ldots,v_p)$, and $\Lambda^p V=\Span(\nu)$. A scalar $\nu\in\Lambda^0 X$ is a $0$-blade, representing $\{0\}$.
The inner product of $\nu=v_1\wedge\ldots\wedge v_p$ and $\omega=w_1\wedge\ldots\wedge w_p$, \begin{equation*} \inner{\nu,\omega} = \det\!\big(\inner{v_i,w_j}\big) = \begin{vmatrix} \inner{v_1,w_1}& \cdots & \inner{v_1,w_p} \\ \vdots & \ddots & \vdots \\ \inner{v_p,w_1} & \cdots & \inner{v_p,w_p} \end{vmatrix}, \end{equation*}
is extended linearly (sesquilinearly, in the complex case), with blades of distinct grades being orthogonal, and $\inner{\nu,\omega}=\bar{\nu}\omega$ for $\nu,\omega\in\Lambda^0 X$. In the real case, the norm $\|\nu\|=\sqrt{\inner{\nu,\nu}}$ gives the $p$-dimensional volume of the parallelotope spanned by $v_1,\ldots,v_p$. In the complex case, $\|\nu\|^2$ gives the $2p$-dimensional volume of the parallelotope spanned by $v_1,\im v_1,\ldots,v_p, \im v_p$.
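For instance (a simple illustrative check, with $(e_1,e_2,e_3)$ an orthonormal basis of $\mathds{R}^3$), for $\nu=e_1\wedge e_2$ and $\omega=(e_1+e_3)\wedge e_2$ we get
\begin{equation*}
\inner{\nu,\omega} = \begin{vmatrix} \inner{e_1,e_1+e_3} & \inner{e_1,e_2} \\ \inner{e_2,e_1+e_3} & \inner{e_2,e_2} \end{vmatrix} = 1
\qquad \text{and} \qquad
\|\omega\| = \sqrt{\begin{vmatrix} 2 & 0 \\ 0 & 1 \end{vmatrix}} = \sqrt{2},
\end{equation*}
the latter being indeed the area of the parallelogram spanned by $e_1+e_3$ and $e_2$.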
Given a subspace $W\subset X$, let $P=\Proj_W$. The orthogonal projection of $\nu=v_1\wedge\ldots\wedge v_p$ on $\Lambda W\subset\Lambda X$ is $P\nu = Pv_1\wedge\ldots\wedge Pv_p$.
Orthogonality in $\Lambda^p X$ corresponds to a weaker orthogonality concept in $X$.
\begin{definition} For subspaces $V,W\subset X$, $V$ is \emph{partially orthogonal} to $W$ ($V\pperp W$) if there is a nonzero $v\in V$ such that $\inner{v,w}=0$ for all $w\in W$.
\end{definition}
\begin{proposition}\label{pr:partial orthogonality} Let $V,W\subset X$ and $U\subset V$ be subspaces, with $V$ represented by a blade $\nu$, and $P=\Proj_W$. Then: \begin{enumerate}[i)] \item $V \pperp W$ $\Leftrightarrow$ $\dim P(V)<\dim V$. \label{it:orthogonality} \item If $V\pperp W$ then $P\nu=0$, otherwise $P\nu$ represents $P(V)$. \label{it:Pnu represents PV} \item If $V\not\pperp W$ then $U\not\pperp W$. \label{pr:subspace not pperp} \end{enumerate} \end{proposition}
\begin{proposition}\label{pr:partial orth Lambda orth} Let $V,W\subset X$ be nonzero subspaces, and $p=\dim V$. Then $V\pperp W \Leftrightarrow \Lambda^p V \perp \Lambda^p W$. \end{proposition}
\subsection{Coordinate decomposition}\label{sc:Coordinate decomposition}
\begin{definition} For integers $1\leq p\leq q$, let \[ \mathcal{I}_p^q= \{ (i_1,\ldots,i_p)\in\mathds{N}^p : 1\leq i_1 < \ldots<i_p\leq q \}. \]
For any multi-index $I=(i_1,\ldots,i_p)\in\mathcal{I}_p^q$, we write $|I|=i_1+\ldots+i_p$ and, if $p<q$, $\hat{I}=(1,\ldots,\hat{i_1},\ldots,\hat{i_p},\ldots,q)\in\mathcal{I}_{q-p}^q$, where each $\hat{i_k}$ indicates that index has been removed. Also, let $\mathcal{I}_0^q=\{0\}$, and for $I\in\mathcal{I}_0^q$ let $|I|=0$ and $\hat{I}=(1,\ldots,q)\in\mathcal{I}_q^q$. For $I\in\mathcal{I}_q^q$ let $\hat{I}=0\in\mathcal{I}_0^q$. \end{definition}
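For example, $\mathcal{I}_2^3=\{(1,2),(1,3),(2,3)\}$, and for $I=(1,3)\in\mathcal{I}_2^3$ we have $|I|=4$ and $\hat{I}=(2)$.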
\begin{definition}
Given a basis $\beta=(w_1,\ldots,w_q)$ of $W\subset X$, and $1\leq p\leq q$, the \emph{coordinate $p$-subspaces} for $\beta$ are the $\binom{q}{p}$ subspaces
\begin{equation*}
W_I = \Span(w_{ i_1},\ldots,w_{ i_p}),
\end{equation*}
for $I=(i_1,\ldots,i_p)\in\mathcal{I}_p^q$, which are represented by the \emph{coordinate $p$-blades} of $\beta$,
\begin{equation}\label{eq:coordinate blades}
\omega_I = w_{ i_1}\wedge\ldots\wedge w_{ i_p}\in \Lambda^p W_I.
\end{equation}
For $I\in\mathcal{I}_0^q$ we have the coordinate $0$-subspace $W_I=\{0\}$ and $\omega_I=1\in\Lambda^0 W_I$. \end{definition}
When $\beta$ is orthonormal, $\{\omega_I\}_{I\in\mathcal{I}_p^q}$ is an orthonormal basis of $\Lambda^p W$.
\begin{definition} Given a decomposed nonzero blade $\omega= w_1\wedge\ldots\wedge w_q\in\Lambda^q X$, take the basis $\beta=(w_1,\ldots,w_q)$ of its subspace. For any $0\leq p\leq q$ and $I\in \mathcal{I}_p^q$, the \emph{coordinate decomposition} of $\omega$ (w.r.t. $I$ and $\beta$) is \begin{equation}\label{eq:multiindex decomposition} \omega = \sigma_I \,\omega_I\wedge\omega_{\hat{I}}, \end{equation}
with $\omega_I$ and $\omega_{\hat{I}}$ as in \eqref{eq:coordinate blades}, and $\sigma_I = (-1)^{|I|+\frac{p(p+1)}{2}}$.
\SELF{$=\sigma_{\omega_I\wedge\omega_{\hat{I}},\omega}$. Either sign in $(-1)^{|I|\pm \frac{p(p+1)}{2}}$ works} \end{definition}
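As a quick check of the sign: for $q=3$, $p=2$ and $I=(1,3)$ we have $\sigma_I=(-1)^{4+3}=-1$ and $\omega_I\wedge\omega_{\hat{I}} = w_1\wedge w_3\wedge w_2 = -\,w_1\wedge w_2\wedge w_3$, so \eqref{eq:multiindex decomposition} indeed holds.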
\subsection{Principal angles and vectors}
\begin{definition}
The \emph{Euclidean angle} $\theta_{v,w}\in[0,\pi]$ between nonzero vectors $v,w\in X$ is given by
$\cos\theta_{v,w} = \frac{\operatorname{Re}\inner{v,w}}{\|v\| \|w\|}$. In the complex case, there is also a \emph{Hermitian angle} $\gamma_{v,w}\in[0,\frac \pi 2]$ defined by $\cos\gamma_{v,w} = \frac{|\inner{v,w}|}{\|v\| \|w\|}$. \end{definition}
The Hermitian angle is the Euclidean angle between $v$ and $\Span_{\mathds{C}}(w)$.
A list of principal angles \cite{Galantai2006,Jordan1875} is necessary to fully describe the relative position of high dimensional subspaces.
\begin{definition}
Let $V,W\subset X$ be nonzero subspaces, $p=\dim V$, $q=\dim W$ and $m=\min\{p,q\}$.
Orthonormal bases $(e_1,\ldots,e_p)$ of $V$ and $(f_1,\ldots,f_q)$ of $W$ are associated \emph{principal bases}, formed by \emph{principal vectors}, with \emph{principal angles} $0\leq \theta_1\leq\ldots\leq\theta_m\leq\frac \pi 2$, if
\begin{equation}\label{eq:inner ei fj}
\inner{e_i,f_j} = \delta_{ij}\cos\theta_i.
\end{equation} \end{definition}
A singular value decomposition \cite{Galantai2006,Golub2013} gives such bases: for $P=\Proj^V_W$, the $e_i$'s and $f_i$'s are orthonormal eigenvectors of $P^*P$ and $PP^*$, respectively, and the $\cos\theta_i$'s are square roots of the eigenvalues of $P^*P$, if $p\leq q$, or $PP^*$ otherwise. The $\theta_i$'s are uniquely defined, but the $e_i$'s and $f_i$'s are not. $P$ is given in principal bases by a $q\times p$ diagonal matrix formed with the $\cos\theta_i$'s, as \begin{equation}\label{eq:Pei}
Pe_i=\begin{cases}
f_i\cdot\cos\theta_i \ \text{ if } 1\leq i\leq m, \\
0 \hspace{37pt}\text{ if } i>m.
\end{cases} \end{equation}
A geometric interpretation of principal angles is that the unit sphere of $V$ projects to an ellipsoid in $W$ with semi-axes of lengths $\cos\theta_i$, for $1\leq i\leq m$ (in the complex case there are 2 semi-axes for each $i$). They can also be described recursively with a minimization condition: $e_1$ and $f_1$ form the smallest angle $\theta_1$ between nonzero vectors of $V$ and $W$; in their orthogonal complements we obtain $e_2$, $f_2$ and $\theta_2$ in the same way; and so on.
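As a simple illustration, let $V=\Span(e_1,e_2)$ and $W=\Span\big(e_1,\frac{e_2+e_3}{\sqrt{2}}\big)$ in $\mathds{R}^3$, with $(e_1,e_2,e_3)$ orthonormal. These bases already satisfy \eqref{eq:inner ei fj}, with principal angles $\theta_1=0$ and $\theta_2=\frac{\pi}{4}$, and the unit circle of $V$ projects to an ellipse in $W$ with semi-axes $1$ and $\frac{1}{\sqrt{2}}$.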
\subsection{Grassmann angles and blade products}\label{sc:Grassmann angle}
Grassmann angles were introduced in \cite{Mandolesi_Grassmann}, and related to blade products in \cite{Mandolesi_Products}.
\begin{definition} Let $V,W\subset X$ be subspaces, $\nu$ be a nonzero blade representing $V$, and $P=\Proj_W$. The \emph{Grassmann angle} $\Theta_{V,W}\in[0,\frac\pi 2]$ is given by \SELF{$V=0$ : $P\nu=\nu$, $\Theta=0$. \\
$V\neq 0$, $W=0$ : $\nu\neq 0$, $P\nu=0$, $\Theta=\frac\pi 2$ } \begin{equation}\label{eq:norm projection blade}
\cos\Theta_{V,W}=\frac{\|P\nu\|}{\|\nu\|}. \end{equation} \end{definition}
As blade norms (squared, in the complex case) give volumes, $\Theta_{V,W}$ tells us how volumes in $V$ contract when orthogonally projected on $W$. In simple cases, where there is an unambiguous concept of angle between subspaces, $\Theta_{V,W}$ coincides with it, as when $V$ is a line, or $V$ and $W$ are planes in $\mathds{R}^3$.
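For instance, for the line $V=\Span(e_2+e_3)$ and the plane $W=\Span(e_1,e_2)$ in $\mathds{R}^3$, with $(e_1,e_2,e_3)$ orthonormal, $\Proj_W(e_2+e_3)=e_2$, so \eqref{eq:norm projection blade} gives $\cos\Theta_{V,W}=\frac{1}{\sqrt{2}}$, i.e. $\Theta_{V,W}=\frac{\pi}{4}$, the elementary angle between the line and the plane.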
The main difference between Grassmann angles and similar ones \cite{Gluck1967,Gunawan2005,Hitzer2010a,Jiang1996} is its asymmetry: $\Theta_{V,W}=\frac\pi 2$ if $\dim V>\dim W$, so, in general, $\Theta_{V,W}\neq \Theta_{W,V}$ when dimensions are different. It reflects the dimensional asymmetry between the subspaces, leading to better and more general results. For example, the contraction and exterior product formulas in \cref{pr:products}, or \cref{pr:formula any base dimension,pr:formula complementary angle bases}, only hold without any restrictions thanks to this asymmetry.
Grassmann angles work well in complex spaces, and this makes them useful for applications in areas like quantum theory. Differences due to volumes being given, in this case, by squared norms, have important implications \cite{Mandolesi_Pythagorean,Mandolesi_Born}.
These angles have many useful properties, some of which are listed below.
\begin{proposition}\label{pr:properties Grassmann}
Let $V,W\subset X$ be subspaces, with principal angles $\theta_1,\ldots,\theta_m$, where $m=\min\{p,q\}$ for $p=\dim V$ and $q=\dim W$, and let $P=\Proj^V_W$.
\begin{enumerate}[i)]
\item $\Theta_{V,W}=\frac \pi 2 \ \Leftrightarrow\ V \pperp W$. \label{it:Theta pi2}
\item If $V=\Span(v)$ and $W=\Span(w)$ for nonzero $v,w\in X$ then $\Theta_{V,W} = \min\{\theta_{v,w},\pi-\theta_{v,w}\}$ in the real case, and $\Theta_{V,W} = \gamma_{v,w}$ in the complex one. \label{it:Theta lines}
\item If $V$ is a line and $v\in V$ then $\|P v\|=\|v\|\cos\Theta_{V,W}$. \label{it:Pv}
\item $\vol P(S)=\vol S \cdot\cos\Theta_{V,W}$ ($\cos^2\Theta_{V,W}$ in complex case), where $S\subset V$ is a parallelotope and $\vol$ is the $p$-dimensional volume ($2p$ in complex case). \label{it:projection factor}
\item $\cos^2\Theta_{V,W}=\det(\bar{\mathbf{P}}^T \mathbf{P})$, where $\mathbf{P}$ is a matrix for $P$ in orthonormal bases of $V$ and $W$. \label{it:formula orthonormal bases}
\item If $p> q$ then $\Theta_{V,W}=\frac\pi 2$, otherwise $\cos\Theta_{V,W}=\prod_{i=1}^m \cos\theta_i$.
\item If $p=q$ then $\Theta_{V,W}= \Theta_{W,V}$.
\item $\Theta_{T(V),T(W)} = \Theta_{V,W}$ for any orthogonal (unitary, in the complex case) transformation $T:X\rightarrow X$.\label{it:transformation}
\item If $V'$ and $W'$ are the orthogonal complements of $V\cap W$ in $V$ and $W$, respectively, then $\Theta_{V,W}=\Theta_{V',W'}$. \label{it:orth complem inter}
\item $\Theta_{V,W} = \Theta_{W^\perp,V^\perp}$. \label{it:Theta perp perp} \end{enumerate} \end{proposition}
The Grassmann angle with an orthogonal complement has extra properties which grant it a special name and notation.
\begin{definition}
The \emph{complementary Grassmann angle} $\Theta_{V,W}^\perp \in [0,\frac \pi 2]$ of subspaces $V,W\subset X$ is $\Theta_{V,W}^\perp=\Theta_{V,W^\perp}$. \end{definition}
In general, this is not the usual complement, i.e. $\Theta_{V,W}^\perp\neq\frac \pi 2-\Theta_{V,W}$.
\begin{proposition}\label{pr:complementary simple cases}
Let $V,W\subset X$ be subspaces.
\begin{enumerate}[i)]
\item $\Theta_{V,W}^\perp=0 \ \Leftrightarrow\ $ $V\perp W$.\label{it:Theta perp 0}
\item $\Theta_{V,W}^\perp=\frac \pi 2 \ \Leftrightarrow\ V\cap W\neq\{0\}$. \label{it:Theta perp pi2}
\item If $\theta_1,\ldots,\theta_m$ are the principal angles then $\cos\Theta_{V,W}^\perp=\prod_{i=1}^m \sin\theta_i$. \label{it:complementary product sines}
\item If $V$ is a line then $\Theta_{V,W}^\perp=\frac \pi 2-\Theta_{V,W}$. \label{it:complementary line}
\item $\Theta_{V,W}^\perp = \Theta_{W,V}^\perp$. \label{it:symmetry complementary}
\end{enumerate} \end{proposition}
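For instance, for the line $V=\Span(e_2+e_3)$ and the plane $W=\Span(e_1,e_2)$ in $\mathds{R}^3$ considered before, $W^\perp=\Span(e_3)$ and $\Proj_{W^\perp}(e_2+e_3)=e_3$, so $\cos\Theta^\perp_{V,W}=\frac{1}{\sqrt{2}}$ and $\Theta^\perp_{V,W}=\frac{\pi}{4}=\frac{\pi}{2}-\Theta_{V,W}$, in agreement with \emph{\ref{it:complementary line}}.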
In \cref{sc:Formulas} we obtain a simpler proof of \emph{(iii)} than the one given in \cite{Mandolesi_Grassmann}.
\begin{definition}
The \emph{(left) contraction} of $\nu\in\Lambda^p X$ on $\omega\in\Lambda^q X$ is the unique $\nu\lcontr\omega\in \Lambda^{q-p} X$ such that, for all $\mu\in\Lambda^{q-p} X$,
\begin{equation}\label{eq:contraction}
\inner{\mu,\nu \lcontr \omega} = \inner{\nu\wedge\mu,\omega}.
\end{equation}
\end{definition}
This contraction coincides with the inner product when $p=q$, and it is asymmetric, with $\nu\lcontr\omega=0$ if $p>q$. It differs from the one used in Clifford geometric algebra \cite{Dorst2002} by a reversion.
\begin{proposition}
For $\nu\in\Lambda^p X$ and any blade $\omega\in\Lambda^q X$, with $p\leq q$,
\SELF{Includes $p=0$}
\SELF{Geometric Algebra, Chisolm, eq. 95, gives a similar formula for $\glcontr$}
\begin{equation}\label{eq:contraction coordinate decomposition}
\nu \lcontr \omega = \sum_{I\in\mathcal{I}_p^q} \sigma_I \,\inner{\nu,\omega_I}\, \omega_{\hat{I}},
\end{equation}
where $\sigma_I$, $\omega_I$ and $\omega_{\hat{I}}$ are as in \eqref{eq:multiindex decomposition} for any decomposition $\omega = w_1\wedge\ldots\wedge w_q$. \end{proposition}
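To illustrate this formula and the defining property \eqref{eq:contraction} in a minimal case, take orthonormal vectors $e_1,e_2$ and let $\nu=e_2$, $\omega=e_1\wedge e_2$. Then \eqref{eq:contraction coordinate decomposition} gives
\begin{equation*}
e_2 \lcontr (e_1\wedge e_2) = \sigma_{(1)}\inner{e_2,e_1}\,e_2 + \sigma_{(2)}\inner{e_2,e_2}\,e_1 = -e_1,
\end{equation*}
in agreement with \eqref{eq:contraction}, since $\inner{e_1,e_2\lcontr(e_1\wedge e_2)} = \inner{e_2\wedge e_1,e_1\wedge e_2} = -1$.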
\begin{proposition}\label{pr:products} Let $\nu,\omega\in\Lambda X$ be blades representing $V,W\subset X$, respectively. \begin{enumerate}[i)]
\item $|\inner{\nu,\omega}| = \|\nu\|\|\omega\|\cos \Theta_{V,W}$, if $\nu$ and $\omega$ have equal grades. \label{it:Theta inner blades}
\item $\|\nu\lcontr\omega\| = \|\nu\|\|\omega\| \cos \Theta_{V,W}$. \label{it:Theta norm contraction}
\item $\|\nu\wedge\omega\|=\|\nu\|\|\omega\| \cos\Theta^\perp_{V,W}$. \label{it:exterior product} \end{enumerate} \end{proposition}
With a Grassmann angle $\mathbf{\Theta}_{V,W}$ for oriented subspaces (with orientations of $\nu,\omega\in\Lambda^p X$) given by $\cos \mathbf{\Theta}_{V,W} = \frac{\inner{\nu,\omega}}{|\inner{\nu,\omega}|} \cos\Theta_{V,W}$ if $\inner{\nu,\omega}\neq 0$, otherwise $\mathbf{\Theta}_{V,W} = \frac\pi 2$, we also have \begin{equation}\label{eq:inner oriented}
\inner{\nu,\omega} = \|\nu\|\|\omega\|\cos \mathbf{\Theta}_{V,W}. \end{equation}
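In particular, for oriented lines $V=\Span(v)$ and $W=\Span(w)$ in the real case, oriented by unit vectors $v$ and $w$, this reduces to $\cos\mathbf{\Theta}_{V,W}=\inner{v,w}=\cos\theta_{v,w}$, so the oriented Grassmann angle is just the Euclidean angle between the chosen directions.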
\section{Formulas for Grassmann angles}\label{sc:Formulas}
Here we give formulas for computing Grassmann and complementary Grassmann angles in terms of arbitrary bases, thus generalizing \cref{pr:properties Grassmann}\emph{\ref{it:formula orthonormal bases}}. They can be adapted for use with similar angles \cite{Gluck1967,Gunawan2005,Hitzer2010a,Jiang1996}, with some dimensional restrictions ($\dim V\leq \dim W$ in \cref{pr:formula any base dimension}, $\dim V\leq \dim W^\perp$ in \cref{pr:formula complementary angle bases}) as these other angles are symmetric.
The first formula, for subspaces of same dimension, follows from \cref{pr:products}\emph{\ref{it:Theta inner blades}}. A similar result, for the real case, appears in \cite{Gluck1967}.
\begin{theorem}\label{pr:formula Theta equal dim} Given bases $(v_1,\ldots,v_p)$ and $(w_1,\ldots,w_p)$ of $p$-dimensional subspaces $V,W\subset X$, let $A=\big(\inner{w_i,w_j}\big), B=\big(\inner{w_i,v_j}\big)$ and $D=\big(\inner{v_i,v_j}\big)$. Then \begin{equation*}
\cos^2 \Theta_{V,W} = \frac{|\det B\,|^2}{\det A \cdot\det D}. \end{equation*} \end{theorem}
\begin{example}\label{ex:formula bases} In $\mathds{C}^3$, let $V=\Span(v_1,v_2)$ and $W=\Span(w_1,w_2)$ with $v_1=(1,-\xi,0), v_2=(0,\xi,-\xi^2)$, $w_1=(1,0,0)$ and $w_2=(0,\xi,0)$, where $\xi=e^{\im\frac{2\pi}{3}}$. The theorem gives $\Theta_{V,W}=\arccos\frac{\sqrt{3}}{3}$.
Since $v=(\xi,\xi^2,-2)\in V$ and $w=(1,\xi,0)\in W$ are orthogonal to $V\cap W=\Span(v_1)$, \cref{pr:properties Grassmann}\emph{\ref{it:Theta lines}} and \emph{\ref{it:orth complem inter}} give $\Theta_{V,W}=\Theta_{\Span(v),\Span(w)}=\gamma_{v,w}$, and the Hermitian angle formula confirms the result. \SELF{Uses \cref{pr:properties Grassmann}\ref{it:Theta lines}} \end{example}
The next formulas require some determinant identities.
\begin{proposition}[Laplace's Expansion \protect{\cite[p.80]{Muir2003}}]\label{pr:Laplace expansion}
Given a $q\times q$ matrix $M$ and a multi-index $J\in\mathcal{I}_p^q$, with $1\leq p<q$,
\begin{equation*}
\det M = \sum_{I\in\mathcal{I}_p^q} (-1)^{|I|+|J|} \det M_{I,J} \cdot \det M_{\hat{I},\hat{J}},
\end{equation*}
where $M_{I,J}$ is the $p\times p$ submatrix formed by entries with row indices in $I$ and column indices in $J$, and $M_{\hat{I},\hat{J}}$ is the $(q-p)\times(q-p)$ submatrix formed by entries with row indices not in $I$ and column indices not in $J$. \end{proposition}
\begin{proposition}[Schur's determinant identity \cite{Brualdi1983}] \label{pr:Schur} Let $M=\begin{psmallmatrix} A & B \\ C & D \end{psmallmatrix}$ be a $(q+p)\times(q+p)$ matrix, partitioned into $q\times q$, $q\times p$, $p\times q$ and $p\times p$ matrices $A$, $B$, $C$ and $D$, respectively. If $A$ is invertible then \begin{equation}\label{eq:Schur A} \det M = \det A \cdot\det(D-CA^{-1}B). \end{equation} Likewise, if $D$ is invertible then \begin{equation}\label{eq:Schur D} \det M = \det D\cdot\det(A-BD^{-1}C). \end{equation} \end{proposition} \begin{proof} Follows by decomposing $M$ into block triangular matrices, as $M=\begin{psmallmatrix} A & 0_{q\times p} \\ C & \mathds{1}_{p\times p} \end{psmallmatrix} \begin{psmallmatrix} \mathds{1}_{q\times q} & A^{-1}B \\ 0_{p\times q} & D-CA^{-1}B \end{psmallmatrix}$ or $M=\begin{psmallmatrix}
A-BD^{-1}C & BD^{-1} \\
0_{p\times q} & \mathds{1}_{p\times p} \end{psmallmatrix} \begin{psmallmatrix}
\mathds{1}_{q\times q} & 0_{q\times p} \\
C & D \end{psmallmatrix}$. \SELF{det of block triangular matrices = product of det of diagonal blocks (by \cref{pr:Laplace expansion})} \end{proof}
We now get a formula for distinct dimensions, simpler than one given in \cite{Gunawan2005} for a similar angle (which corrects another formula from \cite{Risteski2001}).
\begin{theorem}\label{pr:formula any base dimension} Given bases $(v_1,\ldots,v_p)$ of $V$ and $(w_1,\ldots,w_q)$ of $W$, let $A=\big(\inner{w_i,w_j}\big), B=\big(\inner{w_i,v_j}\big)$, and $D=\big(\inner{v_i,v_j}\big)$. Then \begin{equation*} \cos^2 \Theta_{V,W} = \frac{\det(\bar{B}^T \! A^{-1}B)}{\det D}. \end{equation*} \end{theorem} \begin{proof} If $p>q$ then $\Theta_{V,W} = \frac \pi 2$, and the determinant of $\bar{B}^T \! A^{-1}B$ vanishes as it is a $p\times p$ matrix with rank at most $q$.
If $p\leq q$, applying \cref{pr:Laplace expansion}, with $J=(q+1,\ldots,q+p)$, to the $(q+p)\times(q+p)$ block matrix $M=\begin{psmallmatrix} A & B\ \\ \bar{B}^T & 0_{p\times p} \end{psmallmatrix}$, we get \SELF{Since $J$ selects the last $p$ columns, only the first $q$ of the $q+p$ rows of $M$ give nonzero terms, so $\mathcal{I}_p^q$ can be used instead of $\mathcal{I}_p^{q+p}$}
\[ \det M = \sum_{I\in\mathcal{I}_p^q} (-1)^{|I|+pq+\frac{p(p+1)}{2}}\cdot \det B_I \cdot\det N_{\hat{I}}, \] where $B_I$ is the $p\times p$ submatrix of $M$ formed by lines of $B$ with indices in $I$, and $N_{\hat{I}}=\begin{psmallmatrix} A_{\hat{I}} \\[1pt] \bar{B}^T \end{psmallmatrix}$ is its $q\times q$ complementary submatrix, formed by lines of $A$ with indices not in $I$ and all of $\bar{B}^T$.
For $\nu=v_1\wedge\ldots \wedge v_p$ and $\omega=w_1\wedge\ldots\wedge w_q$ we have, by \eqref{eq:contraction} and \eqref{eq:contraction coordinate decomposition}, \begin{equation*}
\|\nu\lcontr\omega\|^2 = \inner{\nu\wedge(\nu\lcontr\omega),\omega} = \sum_{I\in\mathcal{I}_p^q} \sigma_I \,\inner{\omega_I,\nu}\, \inner{\nu\wedge\omega_{\hat{I}},\omega}. \end{equation*} Since $\det B_I=\inner{\omega_I,\nu}$, $\det N_{\hat{I}} = \inner{\omega_{\hat{I}}\wedge\nu,\omega} = (-1)^{pq+p}\inner{\nu\wedge\omega_{\hat{I}},\omega}$ \SELF{$(-1)^{-p^2}=(-1)^p$}
and $\sigma_I = (-1)^{|I|+\frac{p(p+1)}{2}}$, we obtain $\|\nu\lcontr\omega\|^2 = (-1)^p \det M$. \Cref{pr:products}\emph{\ref{it:Theta norm contraction}} then gives $\cos^2 \Theta_{V,W} = \frac{(-1)^p \det M}{\det D\det A}$, and the result follows from \eqref{eq:Schur A}.\SELF{$\det(-BA^{-1}\bar{B}^T) = (-1)^p \det(BA^{-1}\bar{B}^T)$} \end{proof}
\begin{example}\label{ex:formula distinct dim}
In $\mathds{R}^4$, let $v=(1,0,1,0)$, $w_1=(0,1,1,0)$, $w_2=(1,2,2,-1)$, $V=\Span(v)$ and $W=\Span(w_1,w_2)$.
Then $A=\begin{psmallmatrix}
2 & 4 \\ 4 & 10
\end{psmallmatrix}$,
$B=\begin{psmallmatrix}
1 \\ 3
\end{psmallmatrix}$ and $D=(2)$, and the theorem gives $\Theta_{V,W}= 45^\circ$, as one can verify by projecting $v$ on $W$.
Switching the roles of $V$ and $W$, we now have $A=(2)$,
$B=(1 \ 3)$, $D=\begin{psmallmatrix}
2 & 4 \\ 4 & 10
\end{psmallmatrix}$ and $\Theta_{W,V}= 90^\circ$, which is correct since $\dim W>\dim V$. \end{example}
We now get formulas for the complementary Grassmann angle.
\begin{theorem}\label{pr:formula complementary angle bases} Given bases $(v_1,\ldots,v_p)$ of $V$ and $(w_1,\ldots,w_q)$ of $W$, let $A=\big(\inner{w_i,w_j}\big), B=\big(\inner{w_i,v_j}\big)$, and $D=\big(\inner{v_i,v_j}\big)$. Then \begin{equation}\label{eq:complementary bases} \cos^2 \Theta^\perp_{V,W} = \frac{\det(A-BD^{-1}\bar{B}^T )}{\det A}. \end{equation} \end{theorem} \begin{proof} Let $\nu=v_1\wedge\ldots\wedge v_p$ and $\omega=w_1\wedge\ldots\wedge w_q$. The result is obtained applying \cref{pr:products}\emph{\ref{it:exterior product}} to $\omega\wedge\nu$, and using \eqref{eq:Schur D} with $M=\begin{psmallmatrix} A & B \\ \bar{B}^T & D \end{psmallmatrix}$. \end{proof}
\begin{corollary}
If $\mathbf{P}$ is a matrix representing $\Proj^V_W$ in orthonormal bases of $V$ and $W$ then
\begin{equation}\label{eq:complementary orthonormal bases}
\cos^2\Theta_{V,W}^\perp=\det(\mathds{1}_{q\times q}-\mathbf{P}\bar{\mathbf{P}}^T ).
\end{equation} \end{corollary}
This gives an easy way to prove \cref{pr:complementary simple cases}\emph{\ref{it:complementary product sines}}, as in principal bases $\mathbf{P}$ is a diagonal matrix formed with the $\cos\theta_i$'s.
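Indeed, by \eqref{eq:Pei} the matrix $\mathbf{P}$ in principal bases has entries $\cos\theta_i$ at positions $(i,i)$, for $1\leq i\leq m=\min\{\dim V,\dim W\}$, and zeros elsewhere, so $\mathds{1}_{q\times q}-\mathbf{P}\bar{\mathbf{P}}^T$ is diagonal with entries $\sin^2\theta_1,\ldots,\sin^2\theta_m$ followed by $q-m$ entries equal to $1$, and \eqref{eq:complementary orthonormal bases} gives $\cos^2\Theta_{V,W}^\perp=\prod_{i=1}^m\sin^2\theta_i$.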
\begin{example}
In \cref{ex:formula distinct dim}, \eqref{eq:complementary bases} gives $\Theta^\perp_{V,W}=45^\circ$, in agreement with \cref{pr:complementary simple cases}\emph{\ref{it:complementary line}}. The same formula also gives $\Theta^\perp_{W,V}=45^\circ$, as expected by \cref{pr:complementary simple cases}\emph{\ref{it:symmetry complementary}}.
Direct calculations show the principal angles of $V$ and $W^\perp$ are $0^\circ$ and $45^\circ$, as are those of $W$ and $V^\perp$, confirming the results. \end{example}
\begin{example}
In \cref{ex:formula bases}, using \eqref{eq:complementary bases} with the bases $(v_1,v_2)$ and $(w_1,w_2)$, or \eqref{eq:complementary orthonormal bases} with the orthonormal bases $(\frac{v_1}{\sqrt{2}},\frac{v}{\sqrt{6}})$ and $(\frac{v_1}{\sqrt{2}},\frac{w}{\sqrt{2}})$, we get $\Theta^\perp_{V,W}=90^\circ$, as expected by \cref{pr:complementary simple cases}\emph{\ref{it:Theta perp pi2}}, since $V\cap W\neq\{0\}$. \end{example}
\section{Generalized Pythagorean identities}\label{sc:Pythagorean trigonometric}
The Pythagorean trigonometric identity $\cos^2\theta+\sin^2\theta=1$ can be written as $\cos^2\theta_x+\cos^2\theta_y=1$, with $\theta_x$ and $\theta_y$ being angles a line in $\mathds{R}^2$ makes with the axes. We give generalizations for Grassmann angles which, with \cref{pr:properties Grassmann}\emph{\ref{it:projection factor}}, lead to real and complex Pythagorean theorems for volumes \cite{Mandolesi_Pythagorean}. Some correspond to known results in real spaces, which are now extended to the complex case, with important implications for quantum theory \cite{Mandolesi_Born}. We also get a geometric interpretation for a property of the Clifford product.
The first identity relates the Grassmann angles of a (real or complex) line with all subspaces of an orthogonal partition of $X$.
\begin{theorem} Given an orthogonal partition $X=W_1\oplus\cdots\oplus W_k$ and a line $L\subset X$, \begin{equation}\label{eq:pythagorean line}
\sum_{i=1}^k \cos^2\Theta_{L,W_i} = 1. \end{equation} \end{theorem} \begin{proof}
Given a nonzero $v\in L$, as $\|v\|^2 = \sum_{i} \left\|\Proj_{W_i} v \right\|^2$ the result follows from \cref{pr:properties Grassmann}\emph{\ref{it:Pv}}. \end{proof}
\begin{figure}
\caption{Pythagorean identities for subspaces of equal dimensions.}
\label{fig:angulos-eixos-edit}
\label{fig:angulos-faces-edit}
\label{fig:equal dimensions}
\end{figure}
\begin{example}\label{ex:direction cosines} If $\theta_x, \theta_y$ and $\theta_z$ are the angles between a line in $\mathds{R}^3$ and the axes (\cref{fig:angulos-eixos-edit}), then $\cos^2\theta_x+\cos^2\theta_y+\cos^2\theta_z=1$. \end{example}
This is a known identity for direction cosines and, like other examples we give, is only meant to illustrate the theorem in $\mathds{R}^3$. The relevance of our result lies mainly in the complex case, where it has important connections with quantum theory.
\begin{example}
If $X$ is the complex Hilbert space of a quantum system, $L$ is the complex line of a quantum state vector $\psi$, and the $W_i$'s are the eigenspaces of a quantum observable,
the probability of getting result $i$ when measuring $\psi$ is $p_i=\|\Proj_{W_i}\psi\|^2/\|\psi\|^2$ \cite{CohenTannoudji2019}. So, by \cref{pr:properties Grassmann}\emph{\ref{it:Pv}}, $p_i=\cos^2 \Theta_{L,W_i}$, and \eqref{eq:pythagorean line} reflects the fact that the total probability is 1.
\end{example}
By \cref{pr:properties Grassmann}\emph{\ref{it:projection factor}}, $\cos^2\Theta_{L,W_i}$ measures area contraction in orthogonal projections of the complex line $L$. This is explored in \cite{Mandolesi_Born} to get a new interpretation for quantum probabilities and derive the Born rule.
The next identities relate Grassmann angles of a (real or complex) subspace with coordinate subspaces of an orthogonal basis of $X$.
\begin{theorem}\label{pr:Grassmann coordinate} If $V\subset X$ is a $p$-dimensional subspace, and $n=\dim X$, \[ \sum_{I\in\mathcal{I}_p^n} \cos^2\Theta_{V,W_I} = 1, \] where the $W_I$'s are the coordinate $p$-subspaces of an orthogonal basis of $X$. \end{theorem} \begin{proof}
Without loss of generality, assume the basis is orthonormal, so its coordinate $p$-blades $\omega_I$ form an orthonormal basis of $\Lambda^p X$. For a unit blade $\nu\in\Lambda^p V\subset \Lambda^p X$ we have $\sum_I |\inner{\nu,\omega_I}|^2=1$, and the result follows from \cref{pr:products}\emph{\ref{it:Theta inner blades}}. \end{proof}
A similar result for the real case, in terms of products of cosines of principal angles, appears in \cite{Miao1992}. The theorem extends to affine subspaces, as in the next example, which is a dual of \cref{ex:direction cosines} via \cref{pr:properties Grassmann}\emph{\ref{it:Theta perp perp}}.
\begin{example} If $\theta_{xy}, \theta_{xz}$ and $\theta_{yz}$ are the angles a plane in $\mathds{R}^3$ makes with the coordinate planes (\cref{fig:angulos-faces-edit}) then $\cos^2\theta_{xy}+\cos^2\theta_{xz}+\cos^2\theta_{yz}=1$.
\SELF{Another way to express this result is that the sum of the squared cosines of all angles
between the faces of a trirectangular tetrahedron equals 1. By \cref{pr:Grassmann coordinate}, this generalizes for simplices of any dimension {Cho1992}. } \end{example}
\begin{example} Let $\xi,w_1,w_2$ and $V$ be as in \cref{ex:formula bases}, $w_3=(0,0,\xi^2)$ and $W_{ij}=\Span(w_i,w_j)$. As the unitary transformation given by $T=\left(\begin{smallmatrix}
0 & 0 & \xi \\
\xi & 0 & 0 \\
0 & \xi & 0 \end{smallmatrix}\right)$ maps $W_{12}\mapsto W_{23}$, $W_{23}\mapsto W_{13}$, and preserves $V$, \cref{pr:properties Grassmann}\emph{\ref{it:transformation}} gives $\Theta_{V,W_{12}} = \Theta_{V,W_{23}} = \Theta_{V,W_{13}}$. \SELF{$T:w_1\mapsto w_2\mapsto w_3\mapsto w_1$, $Tv_1=v_2$,\\ $Tv_2= -(v_1+v_2)$} Since $W_{12}$, $W_{13}$ and $W_{23}$ are the coordinate $2$-subspaces of the orthogonal basis $(w_1,w_2,w_3)$ of $\mathds{C}^3$, \cref{pr:Grassmann coordinate} gives $\cos\Theta_{V,W_{ij}}=\frac{\sqrt{3}}{3}$, in agreement with that example. \end{example}
A formula relating the geometric product of Clifford algebra \cite{Hestenes1984clifford} to Grassmann angles, given in \cite{Mandolesi_Products}, implies $\|AB\|^2 = \|A\|^2\|B\|^2 \sum_{J} \cos^2\Theta_{V,Y_J}$, where $A$ and $B$ are $p$-blades representing subspaces $V$ and $W$, and the $Y_J$'s are all coordinate $p$-subspaces of a certain orthogonal basis of $Y=V+W$. This gives a geometric interpretation for a simple yet important property of this product.
The products in \cref{pr:products} are submultiplicative for blades because they correspond to projections on single subspaces. The geometric product, on the other hand, involves projections on all $Y_J$'s, and with \cref{pr:Grassmann coordinate} we see this is what allows $\|AB\|= \|A\|\|B\|$.
Extending a result of \cite{Miao1996}, we also have identities for Grassmann angles with coordinate subspaces of a dimension different from $V$.
\begin{theorem} Let $V\subset X$ be a $p$-dimensional subspace, $0\leq q\leq n=\dim X$, and the $W_I$'s be the coordinate $q$-subspaces of an orthogonal basis of $X$. \begin{enumerate}[i)] \item If $p\leq q$ then $\displaystyle \sum_{I\in\mathcal{I}_q^n} \cos^2\Theta_{V,W_I} =\binom{n-p}{n-q}$. \item If $p>q$ then $\displaystyle \sum_{I\in\mathcal{I}_q^n} \cos^2\Theta_{W_I,V} = \binom{p}{q}$. \end{enumerate} \end{theorem} \begin{proof} We can assume $p,q\neq 0$ and that the basis is orthonormal. So, for $0\leq r\leq n$ and with $\omega_I$'s as in \eqref{eq:coordinate blades}, $\{\omega_I\}_{I\in\mathcal{I}_r^n}$ and $\{\omega_{\hat{I}}\}_{I\in\mathcal{I}_r^n}$ are orthonormal bases of $\Lambda^r X$ and $\Lambda^{n-r} X$, respectively. \begin{itemize} \item[\emph{(i)}] For a unit blade $\nu\in\Lambda^p V$ and $I=(i_1,\ldots,i_q)\in\mathcal{I}_q^n$ we have, by \eqref{eq:contraction coordinate decomposition}, \[ \nu \lcontr \omega_I = \sum_{J\in\mathcal{I}_p^q} \sigma_J \,\inner{\nu,(\omega_I)_J}\, (\omega_I)_{\hat{J}}, \] where $(\omega_I)_J = w_{ i_{j_1}}\wedge\ldots\wedge w_{ i_{j_p}}$ for $J=(j_1,\ldots,j_p)$, and likewise for $(\omega_I)_{\hat{J}}$. As the $(\omega_I)_{\hat{J}}$'s are orthonormal, \cref{pr:products}\emph{\ref{it:Theta norm contraction}} gives
\[ \cos^2\Theta_{V,W_I} = \|\nu \lcontr \omega_I\|^2 = \sum_{J\in\mathcal{I}_p^q} \left|\inner{\nu,(\omega_I)_J}\right|^2, \] and therefore \begin{align*} \sum_{I\in\mathcal{I}_q^n} \cos^2\Theta_{V,W_I}
&= \sum_{I\in\mathcal{I}_q^n} \sum_{J\in\mathcal{I}_p^q} \left|\inner{\nu,(\omega_I)_J}\right|^2 \\
&= \frac{\binom{n}{q} \binom{q}{p}}{\binom{n}{p}} \sum_{K\in\mathcal{I}_p^n} \left|\inner{\nu,\omega_K}\right|^2 = \binom{n-p}{n-q} \|\nu\|^2, \end{align*} where the binomial coefficients account for the number of times each $\omega_K$ appears as a $(\omega_I)_J$ in the double summation.
\item[\emph{(ii)}] For each $I\in\mathcal{I}_q^n$, \cref{pr:properties Grassmann}\emph{\ref{it:Theta perp perp}} gives $\Theta_{W_I,V} = \Theta_{V^\perp,{W_I}^\perp}$. As ${W_I}^\perp = W_{\hat{I}}$ for $\hat{I}\in\mathcal{I}_{n-q}^n$, and $\dim V^\perp = n-p < n-q = \dim {W_I}^\perp$, the result follows from the previous case. \qedhere \end{itemize} \end{proof}
The following examples are again duals of each other.
\begin{figure}
\caption{Pythagorean identities for subspaces of different dimensions.}
\label{fig:angulos-linha-planos}
\label{fig:angulos plano eixos}
\label{fig:diferent dimensions}
\end{figure}
\begin{example} If $\theta_{xy}, \theta_{xz}$ and $\theta_{yz}$ are the angles a line in $\mathds{R}^3$ makes with the coordinate planes (\cref{fig:angulos-linha-planos}) then $\cos^2\theta_{xy}+\cos^2\theta_{xz}+\cos^2\theta_{yz}=2$. \end{example}
\begin{example} If $\theta_{x}, \theta_{y}$ and $\theta_{z}$ are the angles between the axes and a plane in $\mathds{R}^3$ (\cref{fig:angulos plano eixos}) then $\cos^2\theta_{x}+\cos^2\theta_{y}+\cos^2\theta_{z}=2$. \end{example}
\section{Other identities}\label{sc:other identities}
Using the Grassmann angle $\mathbf{\Theta}_{V,W}$ for oriented subspaces, we have:
\begin{theorem}
Given $p$-dimensional oriented subspaces $V,W\subset X$,
\begin{equation*}
\cos\mathbf{\Theta}_{V,W} = \sum_{I\in\mathcal{I}_p^n} \cos\mathbf{\Theta}_{V,X_I} \cos\mathbf{\Theta}_{W,X_I},
\end{equation*}
where the $X_I$'s are the coordinate $p$-subspaces of an orthogonal basis of $X$, with orientations given by the corresponding coordinate $p$-blades. \end{theorem} \begin{proof}
The result follows by decomposing unit blades $\nu\in\Lambda^p V$ and $\omega\in\Lambda^p W$ (with the orientations of $V$ and $W$) in the orthonormal basis of $\Lambda^p X$ formed with the normalized coordinate $p$-blades, and applying \eqref{eq:inner oriented}. \end{proof}
This gives an inequality for $\cos\Theta_{V,W} = |\cos\mathbf{\Theta}_{V,W}|$, like one from \cite{Miao1996}.
\begin{corollary} $\cos\Theta_{V,W} \leq \sum_I \cos\Theta_{V,X_I} \cos\Theta_{W,X_I}$. \end{corollary}
\begin{example}
$\cos\theta = \cos\alpha_x\cos\beta_x + \cos\alpha_y\cos\beta_y + \cos\alpha_z\cos\beta_z$ for the angle $\theta\in[0,\pi]$ between 2 oriented lines in $\mathds{R}^3$ forming angles $\alpha_x,\alpha_y,\alpha_z$ and $\beta_x,\beta_y,\beta_z$ (all in $[0,\pi]$) with the positive axes. \end{example}
The last identities will require some preparation.
\begin{definition}
A coordinate subspace of a principal basis $\beta$ of $V$ w.r.t. $W$ is a \emph{principal subspace}\footnote{Some authors use `principal subspace' for $\Span(e_i,f_i)$, where $e_i\in V$ and $f_i\in W$ are principal vectors corresponding to the same principal angle $\theta_i$.} (of $V$ w.r.t. $W$, for $\beta$).
Two or more subspaces of $V$ are \emph{coprincipal} (w.r.t. $W$) if they are principal for the same $\beta$. \end{definition}
Note that a subspace being principal depends on both $V$ and $W$, even if they are left implicit. Also, $\{0\}$ is always principal.
\begin{lemma}\label{pr:orthogonal subspace principal} Let $V,W\subset X$ be nonzero subspaces and $U\subset V$ be any subspace. If $U\perp W$ then $U$ is principal w.r.t. $W$. \SELF{Used for \cref{pr:principal orth proj}. It also holds if $U\subset V\cap W$} \end{lemma} \begin{proof} By \eqref{eq:inner ei fj}, the union of an orthonormal basis of $U$ and a principal basis of $U^\perp\cap V$ w.r.t. $W$ gives a principal basis of $V$ w.r.t. $W$. \end{proof}
\begin{lemma}\label{pr:principal subspaces} Let $V,W\subset X$ be nonzero subspaces, with associated principal bases $\beta_V$ and $\beta_W$, $U\subset V$ be any subspace, and $P=\Proj_W$. Then: \begin{enumerate}[i)] \item $U$ is principal for $\beta_V$ $\Leftrightarrow$ $U^\perp \cap V$ is principal for $\beta_V$.\label{it:complement principal} \SELF{Used for \cref{pr:principal orth proj}} \item $U$ is principal for $\beta_V$ $\Rightarrow$ $P(U)$ is principal for $\beta_W$. The converse holds if $V\not\pperp W$.\label{it:P(U) principal} \SELF{\cref{pr:principal partition} uses the forward implication} \end{enumerate} \end{lemma} \begin{proof}
\emph{(i)} If $U$ is spanned by some vectors of the orthogonal basis $\beta_V$, $U^\perp \cap V$ is spanned by the others.
\emph{(ii)} Follows from \eqref{eq:Pei}, and the converse from \cref{pr:partial orthogonality} \SELF{\emph{\ref{it:orthogonality}} and \emph{\ref{pr:subspace not pperp}}}
as well. \end{proof}
\begin{proposition}\label{pr:principal orth proj} Let $V,W\subset X$ be nonzero subspaces, $U\subset V$ be any subspace and $P=\Proj_W$. Then $U$ is principal $\Leftrightarrow$ $P(U)\perp P(U^\perp \cap V)$.\SELF{Used for \cref{pr:principal partition}} \end{proposition} \begin{proof} \emph{($\Rightarrow$)} Follows from \eqref{eq:Pei}. \emph{($\Leftarrow$)} By lemmas \ref{pr:orthogonal subspace principal} and \ref{pr:principal subspaces}\emph{\ref{it:complement principal}}, we can assume $U$, $U^\perp \cap V$, $P(U)$ and $P(U^\perp \cap V)$ are not $\{0\}$. As $P(U)\perp P(U^\perp \cap V)$ implies $U\perp P(U^\perp \cap V)$ and $ P(U)\perp U^\perp \cap V$, from associated principal bases of $U$ and $P(U)$, and of $U^\perp \cap V$ and $P(U^\perp \cap V)$, we form principal bases for $V$ and $P(V)$. \SELF{by \eqref{eq:inner ei fj}} \end{proof}
\begin{proposition}\label{pr:principal blades proj orth} Let $V,W\subset X$ be nonzero subspaces, $V_1,V_2\subset V$ be distinct $r$-dimensional coprincipal subspaces w.r.t. $W$, and $P=\Proj_W$. Then: \begin{enumerate}[i)] \item $V_1\pperp V_2$. \item If $V_1\not\pperp W$ then $P(V_1)\pperp P(V_2)$. \item $\inner{\nu_1,\nu_2}=0$ and $\inner{P\nu_1,P\nu_2}=0$ for any $\nu_1\in\Lambda^r V_1$, $\nu_2\in\Lambda^r V_2$. \SELF{Used for \cref{pr:identity Theta principal subspaces}.} \end{enumerate} \end{proposition} \begin{proof}
\end{proof}
\begin{theorem}\label{pr:identity Theta principal subspaces} Given nonzero \SELF{Not needed, just to simplify} subspaces $V,W\subset X$ and $U\subset V$, let $r=\dim U$ and $p=\dim V$. Then \[ \cos^2\Theta_{U,W} = \sum_{I\in\mathcal{I}^p_r} \cos^2\Theta_{U,V_I} \cdot \cos^2\Theta_{V_I,W}, \] where the $V_I$'s are coordinate $r$-subspaces of a principal basis $\beta$ of $V$ w.r.t. $W$. \end{theorem} \begin{proof} The coordinate $r$-blades $\nu_I\in\Lambda^r V_I$ of $\beta$ form an orthonormal basis of $\Lambda^r V$, so $P\mu = \sum_I \inner{\nu_I,\mu} P\nu_I$ for a unit $\mu\in\Lambda^r U$ and $P=\Proj_W$.
By \cref{pr:principal blades proj orth} the $P\nu_I$'s are mutually orthogonal, so $\|P\mu\|^2 = \sum_I |\inner{\mu,\nu_I}|^2 \|P\nu_I\|^2$. The result follows from \cref{pr:products}\emph{\ref{it:Theta inner blades}} and \eqref{eq:norm projection blade}. \end{proof}
By \cref{pr:Grassmann coordinate} $\sum_{I\in\mathcal{I}^p_r} \cos^2\Theta_{U,V_I} =1$, so this result means $\cos^2\Theta_{U,W}$ is a weighted average of the $\cos^2\Theta_{V_I,W}$'s, with weights given by the $\cos^2\Theta_{U,V_I}$'s.
\begin{figure}
\caption{$\cos^2\beta = \cos^2\alpha +\sin^2\alpha\cos^2\theta$}
\label{fig:angles principal lines}
\end{figure}
\begin{example}
Given planes $V,W\subset\mathds{R}^3$ and a line $U\subset V$, let $\alpha=\Theta_{U,V\cap W}$, $\beta=\Theta_{U,W}$ and $\theta=\Theta_{V,W}$ (\cref{fig:angles principal lines}).
As $V\cap W$ and $(V\cap W)^\perp\cap V$ are principal lines of $V$ w.r.t. $W$, we have $\cos^2\beta = \cos^2\alpha\cdot 1 +\sin^2\alpha\cdot\cos^2\theta$. \end{example}
\begin{definition} A partition $V=\bigoplus_i V_i$ is \emph{principal} w.r.t. $W$ if the $V_i$'s are coprincipal subspaces of $V$ w.r.t. $W$. \end{definition}
Any principal partition is an orthogonal partition. Note that some subspaces of a partition can be $\{0\}$.
\begin{proposition}\label{pr:principal partition} Let $V,W\subset X$ be nonzero subspaces, $P=\Proj_W$, and $V=\bigoplus_i V_i$ be an orthogonal partition. The following are equivalent:\SELF{Used for \cref{pr:converse Theta partition}} \begin{enumerate}[i)] \item $V=\bigoplus_i V_i$ is a principal partition w.r.t. $W$.\label{it:sum Vi principal} \item $P(V)=\bigoplus_i P(V_i)$ is a principal partition w.r.t. $V$.\label{it:sum PVi principal} \item $P(V)=\bigoplus_i P(V_i)$ is an orthogonal partition.\label{it:sum PVi orthogonal} \end{enumerate} \end{proposition} \begin{proof} \emph{(i\,$\Rightarrow$\,ii)} The $P(V_i)$'s are pairwise disjoint by \cref{pr:principal orth proj}, and coprincipal by \cref{pr:principal subspaces}\emph{\ref{it:P(U) principal}}. \emph{(ii\,$\Rightarrow$\,iii)} Immediate. \emph{(iii\,$\Rightarrow$\,i)} As the $P(V_i)$'s are mutually orthogonal, $V_i \perp P(V_j)$ if $i\neq j$. As the $V_i$'s are also mutually orthogonal, by \eqref{eq:inner ei fj} the union of their principal bases w.r.t. $W$ gives a principal basis of $V$. \end{proof}
In \cite{Mandolesi_Grassmann} we showed that $\cos \Theta_{V,W} = \prod_i \cos\Theta_{V_i,W}$ for a principal partition $V=\bigoplus_i V_i$. We now generalize this to orthogonal partitions and obtain a partial converse.
\begin{theorem}\label{pr:Theta direct sum} Let $V_1,V_2,W\subset X$ be subspaces and $P=\Proj_W$. If $V_1\perp V_2$, \begin{equation*} \cos \Theta_{V_1\oplus V_2,W} = \cos \Theta_{V_1,W}\cdot \cos \Theta_{V_2,W}\cdot \cos \Theta^\perp_{P(V_1),P(V_2)}. \end{equation*} \end{theorem} \begin{proof} By propositions \ref{pr:partial orthogonality}\emph{\ref{pr:subspace not pperp}} and \ref{pr:properties Grassmann}\emph{\ref{it:Theta pi2}}, we can assume $V_1\not\pperp W$ and $V_2\not\pperp W$. As $V_1\perp V_2$, if $\nu_1$ and $\nu_2$ are unit blades representing them, $\nu_1\wedge\nu_2$ is a unit blade representing $V_1\oplus V_2$.
By \eqref{eq:norm projection blade}, $\cos \Theta_{V_1\oplus V_2,W} = \|P(\nu_1\wedge\nu_2)\| = \|(P\nu_1)\wedge(P\nu_2)\|$, and the result follows from propositions \ref{pr:partial orthogonality}\emph{\ref{it:Pnu represents PV}} and \ref{pr:products}\emph{\ref{it:exterior product}}. \end{proof}
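As a concrete check, take $V_1=\Span(e_1)$, $V_2=\Span(e_2)$ and $W=\Span(e_1+e_3,\,e_2+e_3)$ in $\mathds{R}^3$, with $(e_1,e_2,e_3)$ orthonormal. Then $Pe_1=\frac{2e_1-e_2+e_3}{3}$ and $Pe_2=\frac{-e_1+2e_2+e_3}{3}$, so $\cos\Theta_{V_1,W}=\cos\Theta_{V_2,W}=\sqrt{2/3}$, while $\Theta_{P(V_1),P(V_2)}=\frac{\pi}{3}$ gives $\cos\Theta^\perp_{P(V_1),P(V_2)}=\frac{\sqrt{3}}{2}$. The product of the three factors is $\frac{2}{3}\cdot\frac{\sqrt{3}}{2}=\frac{1}{\sqrt{3}}$, which matches $\cos\Theta_{V_1\oplus V_2,W}=\|P(e_1\wedge e_2)\|=\|Pe_1\wedge Pe_2\|=\frac{1}{\sqrt{3}}$.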
\begin{corollary}\label{pr:Theta orthog partition} Let $V,W\subset X$ be subspaces and $P=\Proj_W$. For an orthogonal partition $V=\bigoplus_{i=1}^k V_i$, \begin{equation*} \cos \Theta_{V,W} = \prod_{i=1}^k \cos \Theta_{V_i,W} \cdot \prod_{i=1}^{k-1}\cos \Theta^\perp_{P(V_i),P(V_{i+1}\oplus\ldots\oplus V_k)}. \end{equation*} \end{corollary}
\begin{proposition}\label{pr:converse Theta partition} For nonzero subspaces $V,W\subset X$ with $V\not\pperp W$, a partition $V=\bigoplus_i V_i$ is principal w.r.t. $W$ if, and only if, it is orthogonal and $\cos \Theta_{V,W} = \prod_i \cos \Theta_{V_i,W}$. \end{proposition} \begin{proof} The direct implication is the result of \cite{Mandolesi_Grassmann} cited above, since principal partitions are orthogonal. \Cref{pr:Theta orthog partition} and propositions \ref{pr:complementary simple cases}\emph{\ref{it:Theta perp 0}} and \ref{pr:principal partition} give the converse. \end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}